## **Preface**

Over the past few decades, tremendous attention has been paid to the field of fractional calculus, which finds wide applications in physics, biology, chemistry, finance, signal and image processing, hydrology, non-Newtonian fluids, etc. Given that the analytical solutions of fractional models are extremely complicated in terms of transcendental functions, and analytically intractable in complex cases, numerical methods have become powerful and useful tools for handling fractional differential equations. This is the motivation for the Special Issue "Numerical Solution and Applications of Fractional Differential Equations" in the journal *Fractal and Fractional*.

This Special Issue received 92 submissions and produced 29 accepted and published papers, all of which were subject to a rigorous review process. This reprint collects these 29 papers, which cover analytical techniques, numerical methods and applications of fractional differential equations.

Regarding the analytical techniques, the separation of variables method, the Toeplitz matrix method, the contraction mapping principle, the perturbed energy method, the Laplace-residual power series technique and the extended direct algebraic method have been considered.

The numerical methods in the collection include the Runge–Kutta method, the finite difference method, the finite element method, the spectral method, the local discontinuous Galerkin method, supervised neural network procedures, the high-order θ method, the matrix transfer technique, the doubling Smith method and fast methods.

Finally, some interesting applications have been presented, such as second-grade fluid flow in a straight rectangular duct, a food supply model, anomalous transport in comb structures, convective heat transfer in nanofluids, the fractional multinomial distribution, a piecewise continuous dynamical system, a coronary blood flow model, asymptotic and pinning synchronization in complex dynamical networks, and image denoising.

All the guest editors of this Special Issue are grateful to the authors for their quality contributions, to the reviewers for their valuable comments and advice, and to the administrative staff of MDPI for their support in completing this Special Issue. Special thanks go to the Section Managing Editor Mr. Ethan Zhang for his excellent collaboration and valuable assistance.

> **Libo Feng, Yang Liu, and Lin Liu** *Editors*

## *Article* **A Physical Phenomenon for the Fractional Nonlinear Mixed Integro-Differential Equation Using a General Discontinuous Kernel**

**Sharifah E. Alhazmi 1,\* and Mohamed A. Abdou <sup>2</sup>**


**Abstract:** In this study, a fractional nonlinear mixed integro-differential equation (Fr-NMIDE) is presented, which has a general discontinuous kernel based on position and time space. Conditions for the existence and uniqueness of the solution are provided through the principal form of the integral equation, based on the Banach fixed point theorem. After applying the properties of a fractional integral, the Fr-NMIDE was transformed into a Volterra–Hammerstein integral equation (V-HIE) of the second kind, with a general discontinuous kernel in position for the Hammerstein integral term and a continuous kernel in time for the Volterra term. Then, using a separation-of-variables technique, we obtained a HIE whose physical coefficients are variable in time. The Toeplitz matrix method (TMM) and its schemes were used to obtain a nonlinear algebraic system, and the convergence of this system was studied. The Maple 18 program was implemented to present the numerical results, along with the corresponding errors.

**Keywords:** fractional; integro-differential equation; Volterra–Hammerstein; discontinuous kernel; Toeplitz matrix method

#### **1. Introduction**

As integral equations, integro-differential equations and fractional integro-differential equations (IEs/IDEs/fIDEs) can be used to simulate a wide range of problems in the basic sciences, many scientists have focused their attention on presenting solutions for these systems. These equations have played a significant role in diverse methods of solution, in line with the rapid development in answering diverse problems originating from the basic sciences. Currently, several studies have concentrated on creating more sophisticated and effective techniques for solving the IEs/IDEs, such as the Riemann–Stieltjes integral conditions [1,2], the Lerch polynomials method [3] and the Legendre–Chebyshev spectral method [4], along with numerical observations based on semi-analytical approaches, e.g., Adomian's decomposition method [5] and the HOBW method [6]. The linear/nonlinear equations (IEs/IDEs/fIDEs) have various uses in fluid mechanics [7], Stokes flow [8], airfoils [9], quantum mechanics [10], integral models [11], mathematical engineering [12], nuclear physics [13] and the theory of lasers [14]. The orthogonal polynomials method is considered one of the most significant operators used to solve various scientific problems. Alhazmi [15] used a new technique based on the separation of variables and the orthogonal polynomials method to obtain many spectral relationships for the mixed integral equation with a generalized potential kernel. Nemati et al. [16] applied the Legendre polynomials scheme for the outcomes of a second-order two-dimensional (2D) Volterra integral model with a continuous kernel. Mirzaee and Samadyar [17] discussed the convergence of the 2D-orthonormal Bernstein collocation method for solving 2D mixed Volterra–Fredholm integral equations. Basseem and Alalyani [18] used Chebyshev polynomials to discuss the numerical solution of the quadratic integral equation with a logarithmic kernel. Katani [19] implemented a quadrature scheme for the numerical outcomes of the second kind of Fredholm integral model. Al-Bugami [20] used the Simpson and trapezoidal schemes to perform numerical representations based on an integral model using 2D surface crack layers. Brezinski and Zaglia [21] used the extrapolation approach to achieve numerical computing results based on the second kind of nonlinear integral model that has a continuous kernel. Baksheesh [22] proposed using the Galerkin scheme to find approximate results for the Volterra integral equations of the second kind, which have discontinuous kernels. Alkan and Hatipoglu [23] applied the sinc-collocation method for solving the Volterra–Fredholm IDEs of fractional order. Mosa et al. [24] studied the semi-group scheme to assess uniqueness and existence based on the partial and fractional integro models of heat performance in the Banach space using the Adomian decomposition scheme. Bin Jebreen and Dassios [25] proposed an efficient algorithm to find an approximate solution via the wavelet collocation method for fractional Fredholm integro-differential equations. Akram et al. [26] interpreted the collocation approach to tackle the fractional partial integro-differential equation by employing the extended cubic B-spline. Abdelkawy et al. [27] applied the Jacobi–Gauss collocation method, after using the Riemann–Liouville fractional integral and fractional derivative, to obtain the approximate solution for variable-order fractional integro-differential equations with a weakly singular kernel.

**Citation:** Alhazmi, S.E.; Abdou, M.A. A Physical Phenomenon for the Fractional Nonlinear Mixed Integro-Differential Equation Using a General Discontinuous Kernel. *Fractal Fract.* **2023**, *7*, 173. https://doi.org/10.3390/fractalfract7020173

Academic Editors: Libo Feng, Lin Liu and Yang Liu

Received: 7 December 2022; Revised: 30 January 2023; Accepted: 31 January 2023; Published: 9 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The initial value of the Fr-NMIDE is presented as:

$$\mu \frac{\partial^{\alpha} \Phi(x,t)}{\partial t^{\alpha}} + v\,\Phi(x,t) = \lambda \int_{\Omega} k(|x-y|)\,\Phi^{m}(y,t)\,dy + g(x,t), \qquad \Phi(x,0) = \psi(x). \tag{1}$$

Here, *g*(*x*, *t*) and Φ(*x*, *t*) are the known and unknown continuous functions, respectively, in *L*2[Ω] × *C*[0, *T*]. Ω is the integration domain and *m* = 1, 2, ... , *M*. In addition, *μ*, *λ* and *v* are constants of Equation (1) with physical meaning. The kernel *k*(|*x* − *y*|), in general, has a singular term. For reference, the essential properties and definitions from fractional calculus theory are stated below.

**Definition 1.** *For a function f* : (0, ∞) → R*, the Riemann–Liouville fractional integral of order α* > 0 *is defined as [28]:*

$$I_{0^+}^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\,ds.$$

*In addition, the Caputo derivative of order α* > 0 *is defined as*

$$D_{0^+}^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t \frac{f^{(n)}(s)\,ds}{(t-s)^{\alpha-n+1}}, \qquad (n-1 < \alpha \le n).$$
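As a quick numerical illustration of Definition 1 (not part of the paper's method), the Riemann–Liouville integral of $f(s) = s$ has the closed form $I^{\alpha} f(t) = t^{\alpha+1}/\Gamma(\alpha+2)$, which a simple quadrature can reproduce once the endpoint singularity is removed. The function name `rl_integral` and the substitution used are our own illustrative choices:

```python
import math

def rl_integral(f, t, alpha, n=2000):
    # I^alpha f(t) = 1/Gamma(alpha) * int_0^t (t - s)^(alpha - 1) f(s) ds.
    # The substitution u = (t - s)^alpha removes the singularity at s = t:
    # I^alpha f(t) = 1/(alpha * Gamma(alpha)) * int_0^(t^alpha) f(t - u^(1/alpha)) du,
    # which the composite midpoint rule then handles accurately.
    b = t ** alpha
    h = b / n
    total = sum(f(t - ((i + 0.5) * h) ** (1.0 / alpha)) for i in range(n))
    return total * h / (alpha * math.gamma(alpha))

alpha, t = 0.5, 1.0
approx = rl_integral(lambda s: s, t, alpha)
exact = t ** (alpha + 1) / math.gamma(alpha + 2)  # closed form for f(s) = s
```

For α = 0.5 and t = 1 the two values agree to several decimal places, confirming the quadrature against the known Beta-function identity.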

The time Abel kernel $(t-\tau)^{\alpha-1}$, ∀ *t*, *τ* ∈ [0, *T*], 0 ≤ *τ* ≤ *t* ≤ *T* < 1, satisfies the following property: for every continuous function *h*(*t*) and 0 ≤ *t*₁ ≤ *t*₂ ≤ *t* ≤ *T* < 1, the integrals

$$\int_0^t (t-\tau)^{\alpha-1} h(\tau)\,d\tau, \qquad \max_{0 \le t \le T} \int_0^t (t-\tau)^{\alpha-1}\,d\tau, \qquad \int_{t_1}^{t_2} (t-\tau)^{\alpha-1} h(\tau)\,d\tau$$

are continuous functions of time, i.e.,

$$\left| \int_0^t (t-\tau)^{\alpha-1} h(\tau)\,d\tau \right| \le M.$$

In this paper, Section 2 applies the fractional integral to obtain a NMIE; the Banach fixed-point theorem is then used to prove the existence and uniqueness of its solution. Section 3 presents the convergence of the solution. Section 4 describes the technique of separation of variables for the Hammerstein integral model in position and its time coefficients. This scheme helps researchers choose the known time function more easily, enabling them to choose the necessary time to obtain the required results. Section 5 gives the convergence analysis of the Hammerstein integral equation. Section 6 applies the Toeplitz matrix scheme to transform the Hammerstein integral model into a nonlinear algebraic system (NAS). The TMM is considered one of the best numerical methods for solving singular integral equations, since the singular terms disappear and only simple integrals remain. Section 7 treats the convergence of the NAS. Section 8 establishes the convergence of the error using a well-known theorem. Section 9 provides the numerical solutions, computed in Maple 18, for nonlinear integral equations whose kernels take the logarithmic form, the Carleman function and the Hilbert kernel; the corresponding errors are also computed.

#### **2. The Solution's Existence and Uniqueness**

Applying the fundamental Caputo fractional integral to Equation (1) yields the NMIE of the second kind:

$$\begin{split} \mu \Phi(x,t) + \frac{v}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} \Phi(x,\tau)\,d\tau - \frac{\lambda}{\Gamma(\alpha)} \int_0^t \int_{\Omega} (t-\tau)^{\alpha-1} k(|x-y|)\,\Phi^m(y,\tau)\,dy\,d\tau &= f(x,t), \\ f(x,t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} g(x,\tau)\,d\tau + \psi(x), \qquad 0 < \alpha < 1. \end{split} \tag{2}$$

In Equation (2), the free term *f*(*x*, *t*) ∈ *L*2(Ω) × *C*[0, *T*], and the unknown function Φ(*x*, *t*) will be discussed in the same space, *L*2(Ω) × *C*[0, *T*], along with the discontinuous kernel *k*(|*x* − *y*|) ∈ *L*2([Ω] × [Ω]). The discontinuous kernel of time (*t* − *τ*)*α*−1, 0 < *α* < 1, is considered in the class *C*[0, *T*], where *T* < 1.

To prove the existence and uniqueness of the solution of NMIE (2), the integral operator form is given below:

$$\begin{split} \bar{U}\Phi(x,t) &= \frac{\lambda}{\mu}\,U_2 \Phi(x,t) - \frac{v}{\mu}\,U_1 \Phi(x,t) + \frac{1}{\mu} f(x,t), \\ U_1 \Phi(x,t) &= \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} \Phi(x,\tau)\,d\tau, \\ U_2 \Phi(x,t) &= \frac{1}{\Gamma(\alpha)} \int_0^t \int_{\Omega} (t-\tau)^{\alpha-1} k(|x-y|)\,\Phi^m(y,\tau)\,dy\,d\tau. \end{split} \tag{3}$$

The following conditions are assumed:

(i-a) The position kernel *k*(|*x* − *y*|) satisfies

$$\left\{ \int_{\Omega} \int_{\Omega} |k(|x-y|)|^2\,dx\,dy \right\}^{\frac{1}{2}} = C, \quad (C \text{ a constant}).$$

(i-b) Therefore, the kernel of position and time (*<sup>t</sup>* <sup>−</sup> *<sup>τ</sup>*)*α*−1*k*(|*<sup>x</sup>* <sup>−</sup> *<sup>y</sup>*|) in *<sup>L</sup>*2(Ω)<sup>×</sup> *<sup>C</sup>*[0, *<sup>T</sup>*] satisfies

$$\max_{0 \le t \le T} \int_0^t \left\{ \int_{\Omega} \int_{\Omega} \left| (t-\tau)^{\alpha-1} k(|x-y|) \right|^2 dx\,dy \right\}^{\frac{1}{2}} d\tau = \frac{T^{\alpha} C}{\alpha} \quad (0 < \alpha < 1).$$

(ii) The continuous function *f*(*x*, *t*) ∈ *L*2(Ω) × *C*[0, *T*], and its norm is given by

$$\|f(x,t)\|_{L_2(\Omega)\times C[0,T]} = \max_{0 \le t \le T} \int_0^t \left\{ \int_{\Omega} |f(x,\tau)|^2\,dx \right\}^{\frac{1}{2}} d\tau = G,$$

where *G* is a constant.

(iii) For the constants *Q* > *Q*₁ and *Q* > *P*, the function Φ*m*(*x*, *t*) satisfies:

(iii-a) $\displaystyle \max_{0 \le t \le T} \int_0^t \left\{ \int_{\Omega} |\Phi^m(x,\tau)|^2\,dx \right\}^{\frac{1}{2}} d\tau \le Q_1\,\|\Phi(x,t)\|_{L_2(\Omega)\times C[0,T]}$;

(iii-b) $\left| \Phi_1^m(y,t) - \Phi_2^m(y,t) \right| \le N(t,x)\,\left| \Phi_1(x,t) - \Phi_2(x,t) \right|$, with $|N(t,x)| = P$.

**Theorem 1** (Banach Fixed Point [29])**.** *Consider X* = (*X*, *d*) *to be a metric space, where X* ≠ ∅*. Suppose X is complete and T* : *X* → *X is a contraction on X; then T has exactly one fixed point.*

To prove this, let *K* denote the contraction operator on *B* with integral form *K*Ψ = Ψ, where Ψ presents the unique form of the solution; the general kernel $H(|x-y|, t-\tau) = (t-\tau)^{\alpha-1} k(|x-y|)$ satisfies condition (i-b); and *L*2(Ω) × *C*[0, *T*] is the domain of integration with respect to position Ω and time *t*, *τ* ∈ [0, *T*], 0 ≤ *τ* ≤ *t* ≤ *T* < 1.

**Theorem 2.** *Principal theorem: The NMIE (2), under the above conditions and in the space L*2(Ω) × *C*[0, *T*]*, has a unique solution provided that:*

$$|\mu|\,\Gamma(\alpha+1) > (v + \lambda C Q)\,T^{\alpha}. \tag{4}$$

To prove this theorem, Lemmas 1 and 2 must be established.

**Lemma 1.** *Under conditions (i) to (iii-a), the operator U*¯ *maps the space L*2(Ω) × *C*[0, *T*] *into itself.*

**Proof.** From Equation (3), we have

$$\left\| \bar{U}\Phi(x,t) \right\| \le \frac{\lambda}{|\mu|} \left\| U_2 \Phi(x,t) \right\| + \frac{|v|}{|\mu|} \left\| U_1 \Phi(x,t) \right\| + \frac{1}{|\mu|} \left\| f(x,t) \right\|.$$

Using (i)–(iii-a) and the inequality of Cauchy–Schwarz, we have:

$$\left\| \bar{U}\Phi(x,t) \right\| \le \frac{G}{|\mu|} + \sigma \left\| \Phi(x,t) \right\|, \qquad \left( \sigma = \frac{(v + \lambda C Q)\,T^{\alpha}}{|\mu|\,\Gamma(\alpha+1)} \right). \tag{5}$$

In view of inequality (5), the operator *U*¯ maps the ball *Sr* of radius

$$r = \frac{G\,\Gamma(\alpha+1)}{|\mu|\,\Gamma(\alpha+1) - (v + \lambda C Q)\,T^{\alpha}} \tag{6}$$

into itself.

Since *r* > 0 and *G*Γ(*α* + 1) > 0, the denominator of (6) is positive, and therefore *σ* < 1. Furthermore, inequality (5) shows that the operators *U*1, *U*2 and *U*¯ are bounded.

**Lemma 2.** *If conditions (i)–(iii-b) are fulfilled, then the operator U*¯ *is a contraction on the Banach space L*2(Ω) × *C*[0, *T*]*.*

**Proof.** For two functions Φ1(*x*, *t*), Φ2(*x*, *t*) ∈ *L*2(Ω) × *C*[0, *T*], Formula (3) gives:

$$\left\| \bar{U}(\Phi_1(x,t) - \Phi_2(x,t)) \right\| \le \frac{|v|}{|\mu|} \left\| U_1(\Phi_1(x,t) - \Phi_2(x,t)) \right\| + \frac{\lambda}{|\mu|} \left\| U_2(\Phi_1(x,t) - \Phi_2(x,t)) \right\|.$$

Applying conditions (i), (ii) and (iii-b) together with the Cauchy–Schwarz inequality gives:

$$\left\| \bar{U}(\Phi_1(x,t) - \Phi_2(x,t)) \right\| \le \sigma \left\| \Phi_1(x,t) - \Phi_2(x,t) \right\|. \tag{7}$$

Inequality (7) shows that *U*¯ is a contraction operator, and hence continuous, in the *L*2(Ω) × *C*[0, *T*] space.

#### **3. Convergence of the Solution**

Consider the sequence of simple iterations {Φ1(*x*, *t*), ... , Φ*n*−1(*x*, *t*), Φ*n*(*x*, *t*), ...} of Φ(*x*, *t*), where two consecutive functions {Φ*n*−1(*x*, *t*), Φ*n*(*x*, *t*)} satisfy

$$\begin{split} \mu(\Phi_n(x,t) - \Phi_{n-1}(x,t)) + \frac{v}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} (\Phi_{n-1}(x,\tau) - \Phi_{n-2}(x,\tau))\,d\tau &= \\ \frac{\lambda}{\Gamma(\alpha)} \int_0^t \int_{\Omega} (t-\tau)^{\alpha-1} k(|x-y|) \left( \Phi_{n-1}^m(y,\tau) - \Phi_{n-2}^m(y,\tau) \right) dy\,d\tau. \end{split} \tag{8}$$

Consider

$$\Phi_n(x,t) = \sum_{i=0}^{n} \Psi_i(x,t), \tag{9}$$

where

$$\Psi_n(x,t) = \Phi_n(x,t) - \Phi_{n-1}(x,t), \quad (n \ge 1), \qquad \Psi_0(x,t) = f(x,t).$$

Using Equation (9) in Equation (8) and taking norms, we obtain:

$$|\mu|\,\|\Psi_n(x,t)\| \le \frac{v}{\Gamma(\alpha)} \left\| \int_0^t (t-\tau)^{\alpha-1} \Psi_{n-1}(x,\tau)\,d\tau \right\| + \frac{\lambda}{\Gamma(\alpha)} \left\| \int_0^t \int_{\Omega} (t-\tau)^{\alpha-1} k(|x-y|)\,\Psi_{n-1}^m(y,\tau)\,dy\,d\tau \right\|.$$

Taking *n* = 1, the above formula becomes:

$$\|\Psi_1(x,t)\| \le \sigma G, \qquad \left( \sigma = \frac{(v + \lambda C Q)\,T^{\alpha}}{|\mu|\,\Gamma(\alpha+1)} \right),$$

and, by induction,

$$\|\Psi_n(x,t)\| \le \sigma^n G, \qquad (\sigma < 1). \tag{10}$$

Inequality (10) shows that the sequence {Ψ*n*(*x*, *t*)} converges uniformly, and hence so does the sequence {Φ*n*(*x*, *t*)}. Since each Ψ*i*(*x*, *t*) is continuous and Φ(*x*, *t*) = lim*n*→∞ Φ*n*(*x*, *t*) = lim*n*→∞ ∑*n i*=0 Ψ*i*(*x*, *t*), the limit Φ(*x*, *t*) is uniformly continuous as the sum of the infinite series {Φ*n*(*x*, *t*)}∞ *n*=0. This proves the lemma.
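The geometric decay of the successive differences can be illustrated with a toy contraction; the map T(*x*) = ½ cos *x* below is a simple stand-in (not the operator of Equation (3)) with Lipschitz constant σ = ½, and its successive differences behave exactly as the bound ‖Ψ*n*‖ ≤ σ*n*‖Ψ₀‖ predicts:

```python
import math

sigma = 0.5
T = lambda x: sigma * math.cos(x)   # toy contraction, Lipschitz constant sigma

x_prev, x = 0.0, T(0.0)
diffs = [abs(x - x_prev)]           # successive differences, |Psi_n| analogues
for _ in range(19):
    x_prev, x = x, T(x)
    diffs.append(abs(x - x_prev))

# each successive difference is bounded by sigma^n times the first one
bound_ok = all(d <= sigma ** n * diffs[0] + 1e-15 for n, d in enumerate(diffs))
residual = abs(x - T(x))            # the iterates approach the unique fixed point
```

After 20 iterations the residual is already far below the geometric bound, mirroring how the partial sums of (9) converge uniformly to Φ.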

#### **4. Separation of Variables Scheme**

In the problems of mathematical physics, researchers are interested in finding the unidentified potential function, which depends on time and position. A variety of methods can be used to obtain the unknown function. One of these methods is time division, which turns the mixed integral equation into an algebraic system of integral equations. Researchers apply the separation-of-variables method to solve the mixed integral equation using the coefficients of the space functions; these time coefficients take the form of an integral operator of the Volterra type (Jan [30,31]). This scheme helps researchers choose the known time function more easily, enabling them to choose the necessary time to obtain the required results. The unknown and known functions Φ(*x*, *t*) and *g*(*x*, *t*) are written in the separated form:

$$\Phi(\mathbf{x},t) = X(\mathbf{x})Y(t), \quad \mathbf{g}(\mathbf{x},t) = b(\mathbf{x})Y(t), \qquad Y(0) \neq 0,\tag{11}$$

where *X*(*x*) is an unknown function in a position that is to be determined, *b*(*x*) is the given function in a position and *Y*(*t*) shows the known function in time.

The time function is chosen in the form of a series based on the polynomial constants. This form helps the researcher to categorize the function of time based on the constants as a famous function or time representation in several other forms. It is noted that time series convergence is based on the premise of the experiment time being less than one and the start time is not equal to zero. Assume that

$$Y(t) = t^{\alpha} \sum_{n=0}^{\infty} a_n t^n, \qquad a_0 \neq 0, \qquad t \in [0,T], \qquad T < 1. \tag{12}$$

Substituting Equations (11) and (12) into Equation (2) yields

$$\mu X(x) - \rho(t) \int_{\Omega} k(|x-y|)\,X^m(y)\,dy = H(x,t), \tag{13}$$

where

$$\begin{cases} \rho(t) = \lambda\,Q(t) \int_0^t (t-\tau)^{\alpha-1} Y^m(\tau)\,d\tau, \quad m = 1, 2, \dots, M, \\ Q(t) = \left[ v \int_0^t (t-\tau)^{\alpha-1} Y(\tau)\,d\tau + \mu\,\Gamma(\alpha)\,Y(t) \right]^{-1}, \\ H(x,t) = Q(t) \left[ \Gamma(\alpha)\,\psi(x) + b(x) \int_0^t (t-\tau)^{\alpha-1} Y(\tau)\,d\tau \right]. \end{cases} \tag{14}$$

With the coefficients of Equation (14), the reader can use the separation-of-time scheme for the nonlinear mixed integral model in time and position, which leads to a nonlinear integral equation in position with coefficients linked to *t*, where *t* ∈ [0, *T*], *T* < 1. Moreover, the condition for a unique solution of the nonlinear integral Equation (13) is an equivalence relationship between position and time, given as:

$$\|k(|x-y|)\| \le \delta,$$

$$\delta = \left| \left[ \mu\,\Gamma(\alpha)\,Y(t) + v \int_0^t (t-\tau)^{\alpha-1} Y(\tau)\,d\tau \right] \left[ \lambda \int_0^t (t-\tau)^{\alpha-1} Y^m(\tau)\,d\tau \right]^{-1} \right| < 1. \tag{15}$$

Equation (15) shows the relationship between the position, represented by the kernel (which describes the properties of matter), and the time required for the continuity of these properties, under the condition that a unique solution exists. It is noted that at a certain time, an increase exceeding the standard value of the kernel may occur, which leads to instability.
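The time coefficients in (14) are Abel-type convolutions of Y(τ). For the leading term Y(τ) = τ^α of the series (12), the convolution has the closed Beta-function form ∫₀ᵗ (t−τ)^{α−1} τ^α dτ = t^{2α} Γ(α)Γ(α+1)/Γ(2α+1), which provides a quick check for any quadrature used to evaluate ρ(t) and Q(t). The sketch below (function names are our own) again uses the substitution u = (t−τ)^α to tame the singularity:

```python
import math

def abel_conv(Y, t, alpha, n=4000):
    # int_0^t (t - tau)^(alpha - 1) Y(tau) dtau via the substitution
    # u = (t - tau)^alpha, which yields (1/alpha) * int_0^(t^alpha) Y(t - u^(1/alpha)) du
    h = t ** alpha / n
    total = sum(Y(t - ((i + 0.5) * h) ** (1.0 / alpha)) for i in range(n))
    return total * h / alpha

alpha, t = 0.5, 0.8
num = abel_conv(lambda tau: tau ** alpha, t, alpha)
# Beta-function identity: int_0^t (t-tau)^(a-1) tau^a dtau = t^(2a) * B(a, a+1)
exact = t ** (2 * alpha) * math.gamma(alpha) * math.gamma(alpha + 1) / math.gamma(2 * alpha + 1)
```

Higher-order terms a_n τ^{α+n} of (12) admit the same closed form with Γ(α+n+1)/Γ(2α+n+1), so the whole series ρ(t) can be validated term by term.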

#### **5. Convergence Investigations Based on Nonlinear Integral Model**

To check the convergence of the nonlinear integral model (13), consider the solution sequence:

$$X(x) = \{X_0(x), X_1(x), \dots, X_m(x), \dots, X_{\ell}(x), \dots\},$$

where *ψn* and *ψ* are two distinct arbitrary partial sums of the sequence *Xj*(*x*), (*n* > ), and

$$d(\psi_n, \psi_{\ell}) = \max_{n,\ell \in j} |X_n - X_{\ell}| \le \delta\,d(X_n, X_{\ell-1}) \le \delta^2 d(X_n, X_{\ell-2}) \le \dots \le \delta^{\ell} d(X_n, X_0),$$

where *δ* is defined in the Equation (15). Finally, we have

$$d(\psi_n, \psi_{\ell}) \le \frac{\delta^{\ell}}{1-\delta}\,d(\psi_1, \psi_0), \qquad 0 < \delta < 1. \tag{16}$$

As  → ∞, for the fixed value *d*(*ψ*1, *ψ*0), the right-hand side of (16) approaches zero. Hence, {*ψn*} is a Cauchy sequence in the metric space, which shows that the series converges.

#### **6. Toeplitz Matrix Method (Abdou et al. [32])**

Many numerical methods have been used to solve integral equations with continuous or discontinuous kernels. For singular integral equations, the best way to solve them is the Toeplitz matrix method (TMM), for the following reasons: the singular term directly disappears, being transformed into simple integrals that can be solved quickly, which then form a linear/nonlinear algebraic system of equations; moreover, the relative error converges faster than in other methods.

There are some methods that give a quick approximation when studying the error, including circulant preconditioned iterative schemes. Xian et al. [33] considered the conjugate gradient technique with a circulant preconditioned scheme for a linear discretized model. They obtained a coefficient matrix that kept the structure of a symmetric Toeplitz-plus-tridiagonal matrix. They also studied linear large-scale systems with non-Hermitian Toeplitz coefficient matrices and applied a fast Toeplitz matrix–vector product within iterative solvers for a linear discretized model (the circulant preconditioned method). This numerical scheme provides fast results, reducing computational costs from *O*(*m*3) to *O*(*m* log *m*) and storage from *O*(*m*2) to *O*(*m*) without loss of compression, where *m* is the number of spatial grid nodes.
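The O(m log m) Toeplitz matrix–vector product mentioned above rests on embedding the Toeplitz matrix in a circulant matrix, which the FFT diagonalizes. A minimal self-contained sketch (using a textbook radix-2 FFT rather than a tuned library, and a naive O(m²) product only to verify the result) is:

```python
import cmath

def fft(a, invert=False):
    # recursive radix-2 FFT; len(a) must be a power of two
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + w, even[k] - w
    return out

def toeplitz_matvec(col, row, x):
    # y = T x for the Toeplitz matrix with first column `col` and first row `row`
    # (col[0] == row[0]). T is embedded in a circulant of power-of-two size n >= 2m
    # whose action on the zero-padded x is a single FFT convolution: O(m log m).
    m = len(x)
    n = 1
    while n < 2 * m:
        n *= 2
    c = col + [0] * (n - 2 * m + 1) + row[:0:-1]   # first column of the circulant
    fc = fft(c)
    fx = fft(list(x) + [0] * (n - m))
    y = fft([fc[k] * fx[k] for k in range(n)], invert=True)
    return [y[k].real / n for k in range(m)]

col = [4, 1, 0.5, 0.25, 2, 3, 1, 0.1]
row = [4, -1, 2, 0, 1, 0.3, 2, 5]
x = [1, 2, 3, 4, 5, 6, 7, 8]
fast = toeplitz_matvec(col, row, x)
naive = [sum((col[i - j] if i >= j else row[j - i]) * x[j] for j in range(8))
         for i in range(8)]
```

The two products agree to rounding error; in a preconditioned Krylov solver this matvec is the inner kernel that replaces dense multiplication.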

For the TMM, consider Ω = (−1, 1) and write the nonlinear integral term of (13) as:

$$\begin{split} \int_{-1}^{1} k(|x-y|)\,X^m(y)\,dy &= \sum_{\ell=-N}^{N-1} \int_{\ell h}^{(\ell+1)h} k(|x-y|)\,X^m(y)\,dy \\ &= \sum_{\ell=-N}^{N-1} \left[ A_{\ell}(x)\,X^m(\ell h) + B_{\ell}(x)\,X^m(\ell h + h) \right] + R_{\ell}, \qquad \left( h = \frac{1}{N} \right). \end{split} \tag{17}$$

The two functions *A*(*x*) and *B*(*x*) are

$$A_{\ell}(x) = \frac{1}{\Delta} \left[ (\ell h + h)^m I_{\ell}(x) - J_{\ell}(x) \right], \qquad B_{\ell}(x) = \frac{1}{\Delta} \left[ J_{\ell}(x) - (\ell h)^m I_{\ell}(x) \right], \tag{18}$$

where

$$\Delta = h^m \sum_{\zeta=1}^{m} \frac{\Gamma(m+1)}{\Gamma(\zeta+1)\,\Gamma(m-\zeta+1)}\,\ell^{\,m-\zeta}, \qquad (-N \le \ell \le N),$$

and

$$I_{\ell}(x) = \int_{\ell h}^{(\ell+1)h} k(|x-y|)\,dy, \qquad J_{\ell}(x) = \int_{\ell h}^{(\ell+1)h} k(|x-y|)\,y^m\,dy. \tag{19}$$

The integral term of Equation (17) after using Equation (18) becomes:

$$\int_{-1}^{1} k(|x-y|)\,X^m(y)\,dy = \sum_{\ell=-N}^{N} D_{\ell}(x)\,X^m(\ell h), \tag{20}$$

$$D_{\ell}(x) = \begin{cases} A_{-N}(x), & \ell = -N, \\ A_{\ell}(x) + B_{\ell-1}(x), & -N < \ell < N, \\ B_{N-1}(x), & \ell = N. \end{cases}$$

Equations (18) and (20) are used at *x* = *jh*, (−*N* ≤ *j* ≤ *N*), to obtain

$$X(jh) = X_j, \quad A_{\ell}(jh) = A_{\ell,j}, \quad B_{\ell}(jh) = B_{\ell,j}, \quad D_{\ell}(jh) = D_{\ell,j}, \quad H(jh,t) = H_j(t). \tag{21}$$

The nonlinear integral Equation (13) then yields a NAS of (2*N* + 1) equations:

$$\mu X_j - \rho(t) \sum_{\ell=-N}^{N} D_{\ell,j}\,X_{\ell}^m = H_j(t), \qquad -N \le \ell, j \le N. \tag{22}$$

The matrices *D*,*j* exhibit the Toeplitz structure:

$$D_{\ell,j} = V_{\ell-j} - U_{\ell,j}, \qquad V_{\ell-j} = A_{\ell,j} + B_{\ell-1,j}, \quad (-N \le \ell, j \le N), \qquad U_{\ell,j} = \begin{cases} B_{-N-1,j}, & \ell = -N, \\ 0, & -N < \ell < N, \\ A_{N,j}, & \ell = N. \end{cases} \tag{23}$$

Equation (23) shows two types of matrices of order (2*N* + 1): *V*−*j*, a Toeplitz matrix, and *U*,*j*, whose elements are zero except in the first and last rows. The error *R* can be obtained from the following formula:

$$\begin{split} R\_{\ell} &= \max\_{0 \le j \le N} \left| \int\_{\ell h}^{\ell h + h} y^{2m} k(|\mathbf{x} - y|) dy - \left( G\_{\ell}(\mathbf{x}) (\ell h)^{2m} + H\_{\ell}(\mathbf{x}) (\ell h + h)^{2m} \right) \right| \\ &= O \left( h^{3m} \right), \quad (\mathbf{x} = jh). \end{split} \tag{24}$$
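For the linear case m = 1, where Δ = h and J is the first moment of the kernel over each cell, the weights (18) amount to exact piecewise-linear interpolation of X, so the rule (20) reproduces the integral exactly for linear X up to quadrature error. The sketch below illustrates this with a smooth stand-in kernel; for the paper's singular (logarithmic, Carleman, Hilbert) kernels the moments I and J would instead be computed in closed form:

```python
import math

def quad_mid(f, a, b, n=400):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def tmm_apply(k, X, x, N):
    # TMM approximation of int_{-1}^{1} k(|x - y|) X(y) dy for m = 1:
    # Delta = h, A_l = ((lh + h) I_l - J_l)/h, B_l = (J_l - lh I_l)/h,
    # assembled into the weights D_l of Equation (20).
    h = 1.0 / N
    D = {l: 0.0 for l in range(-N, N + 1)}
    for l in range(-N, N):
        a, b = l * h, (l + 1) * h
        I = quad_mid(lambda y: k(abs(x - y)), a, b)        # I_l(x)
        J = quad_mid(lambda y: k(abs(x - y)) * y, a, b)    # J_l(x)
        D[l] += ((l * h + h) * I - J) / h                   # A_l(x)
        D[l + 1] += (J - l * h * I) / h                     # B_l(x)
    return sum(D[l] * X(l * h) for l in range(-N, N + 1))

k = lambda r: math.exp(-r)   # smooth stand-in kernel (not singular)
X = lambda y: y              # linear X: the m = 1 rule is exact here
N, x = 40, 0.3
approx = tmm_apply(k, X, x, N)
exact = quad_mid(lambda y: k(abs(x - y)) * X(y), -1.0, 1.0, 8000)
```

For nonlinear X (or m > 1) the same weights produce the O(h^{3m}) remainder of Equation (24) instead of exact agreement.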

#### **7. The Nonlinear Algebraic Toeplitz Matrix System**

This section establishes the existence and uniqueness of the solution of NAS (22) in the Banach space ∞ × *C*[0, *T*]. The operator form is written as:

$$\bar{T}X_j = TX_j + \frac{H_j(t)}{\mu}, \tag{25}$$

where

$$TX_j = \frac{\rho(t)}{\mu} \sum_{\ell=-N}^{N} D_{\ell,j}\,X_{\ell}^m, \qquad (-N \le j \le N, \; 0 \le t \le T < 1). \tag{26}$$

We first prove the following lemma.

**Lemma 3.** *If the position kernel satisfies the conditions below:*

$$\begin{aligned} &(i) \ \left( \int_{\ell h}^{\ell h + h} \int_{jh}^{jh+h} k^2(|x-y|)\,dx\,dy \right)^{\frac{1}{2}} \le C, \\ &(ii) \ \lim_{x' \to x} \left\| k(x', y) - k(x, y) \right\|_{L_2} = 0, \qquad x, x' \in (-1,1). \end{aligned} \tag{27}$$

*Then,*

$$\begin{aligned} &(a) \ \sup_N \sum_{j=-N}^{N} \left| D_{\ell,j} \right| \ \text{exists}, \\ &(b) \ \lim_{\ell' \to \ell}\, \sup_N \sum_{j=-N}^{N} \left| D_{\ell',j} - D_{\ell,j} \right| = 0. \end{aligned} \tag{28}$$

**Proof.** From Equations (18) and (19), we have

$$|A_{\ell}(x)| \le \frac{1}{|\Delta|} \left[ \left| (\ell h + h)^m \right| \left| \int_{\ell h}^{(\ell+1)h} k(|x-y|)\,dy \right| + \left| \int_{\ell h}^{(\ell+1)h} k(|x-y|)\,y^m\,dy \right| \right].$$

Applying the Cauchy–Schwarz inequality and taking the sum from  = −*N* to  = *N*, the above inequality yields

$$\sum_{\ell=-N}^{N} |A_{\ell}(x)| \le \frac{1}{|\Delta|} \left\| k(|x-y|) \right\| \left[ \sum_{\ell=-N}^{N} \left| (\ell h + h)^m \right| + \left\| y^m \right\| \right].$$

By Equation (27) and the continuity of the function *y**m* on (−1, 1), there exists a constant *E*1 such that ∑*N* =−*N*|*A*(*x*)| ≤ *E*1, ∀*N*. Since each value of ∑*N* =−*N*|*A*(*x*)| is bounded, take *x* = *jh* to obtain:

$$\sup_N \sum_{\ell=-N}^{N} |A_{\ell}(jh)| \le E_1. \tag{29}$$

Similarly, taking a constant *E*2 for Equations (18) and (19), we have

$$\sup_N \sum_{\ell=-N}^{N} |B_{\ell}(jh)| \le E_2. \tag{30}$$

Using (29) and (30), we have

$$\sup_N \sum_{\ell=-N}^{N} \left| D_{\ell,j} \right| \le \sup_N \sum_{\ell=-N}^{N} |A_{\ell}(jh)| + \sup_N \sum_{\ell=-N}^{N} |B_{\ell}(jh)| \le E.$$

Hence, $\sup_N \sum_{\ell=-N}^{N} |D_{\ell,j}|$ exists.

To prove the second equation of (28), for *x*, *x* ∈ (−1, 1), the Cauchy–Schwarz inequality is applied by taking the sum from  = −*N* to  = *N*:

$$\sup_N \sum_{\ell=-N}^{N} \left| A_{\ell}(x') - A_{\ell}(x) \right| \le \frac{1}{|\Delta|} \left\| k(x',y) - k(x,y) \right\|_{L_2} \left\{ \sup_N \sum_{\ell=-N}^{N} \left[ \left| (\ell h + h)^m \right| + \left\| y^m \right\| \right] \right\}. \tag{31}$$

Using $x = jh$, $x' = j'h$ and Equation (27), we obtain that as $x' \to x$,

$$\lim_{j' \to j} \sup_N \sum_{\ell=-N}^N \left| A_\ell(j'h) - A_\ell(jh) \right| = 0. \tag{32}$$

Similarly, from (18) and (19), it is proved that

$$\lim_{j' \to j} \sup_N \sum_{\ell=-N}^N \left| B_\ell(j'h) - B_\ell(jh) \right| = 0. \tag{33}$$

Finally, we obtain

$$\lim_{j' \to j} \sup_N \sum_{\ell=-N}^N \left| D_{\ell,j'} - D_{\ell,j} \right| = 0.$$

Now, the principal theorem is proven based on the nonlinear algebraic systems.

**Theorem 3.** *The NAS (22) has a unique solution in the Banach space* $\ell^\infty \times C[0, T]$ *under the following conditions:*

$$\sup\_{j} |H\_{j}(t)| \le \bar{H} < \infty,\tag{34}$$

$$\sup\_{N} \sum\_{\ell=-N}^{N} |D\_{\ell,I}| \le E,\tag{35}$$

*where $\bar{H}$ and $\bar{E}$ are constants.*

The functions $X^m(jh)$, where $m = 1, 2, \dots, M$, satisfy the following for the constants $\bar{Q} > \bar{Q}_1$, $\bar{Q} > \bar{P}_1$:

$$\sup\_{j} |X^{m}(jh)| \le \overline{Q\_{1}} \|X\|\_{\ell^{\infty}} \tag{36}$$

$$\sup_{j} |X^{m}(jh) - Z^{m}(jh)| \le \bar{P}_{1} \|X - Z\|_{\ell^{\infty}} \tag{37}$$

where $\|X\|_{\ell^\infty} = \sup_j |X_j|$ and $X(jh) = X_j$ for each integer $j$. The following lemmas must be proven to establish the above theorem.

**Lemma 4.** *If conditions (34)–(36) hold, then the operator $\bar{T}$ defined by Equation (25) maps the space* $\ell^\infty \times C[0, T]$ *into itself.*

**Proof.** Let $U$ denote the set of all functions $X = \{X_j\}$ in $\ell^\infty \times C[0, T]$ with $\|\Phi\|_{\ell^\infty \times C[0,T]} \le \bar{\beta}$, where $\bar{\beta}$ is a constant, and define the operator $\bar{T}\Phi$ on the Banach space $\ell^\infty \times C[0, T]$ by:

$$\|\bar{T}X\|_{\ell^{\infty}\times C[0,T]} = \sup_{j} |\bar{T}X_{j}|.$$

Using conditions (34) and (35), we have

$$\left| \bar{T} X_{j} \right| \leq \left| \frac{\rho(t)}{\mu} \right| \bar{Q} \| X \|_{\ell^{\infty}} \sup_{N} \sum_{\ell=-N}^{N} \left| D_{\ell, j} \right| + \left| \frac{\bar{H}}{\mu} \right|.$$

For each integer *j*, the above inequality is shown as:

$$\sup_{j} |\bar{T}X_{j}| \le \sigma_{1} \|X\|_{\ell^{\infty}} + \left|\frac{\bar{H}}{\mu}\right|, \quad \left(\sigma_{1} = \left|\frac{\rho}{\mu}\right| \bar{Q}\bar{E}\right) \tag{38}$$

Inequality (38) shows that the operator $\bar{T}$ maps the set $U$ into itself, where

$$\bar{\beta} = \frac{\bar{H}}{(|\mu| - |\rho| \bar{Q} \bar{E})}.$$

Hence, taking $\sigma_1 < 1$, the operators $T$ and $\bar{T}$ are bounded.

**Lemma 5.** *Under the conditions (34), (35) and (37), $\bar{T}$ is a contraction operator in the* $\ell^\infty \times C[0, T]$ *space.*

**Proof.** For functions $X$ and $Z$ in $\ell^\infty \times C[0, T]$, Formulas (25) and (26) become:

$$\left| \bar{T} X_{j} - \bar{T} Z_{j} \right| \leq \left| \frac{\rho}{\mu} \right| \sup_{N} \sum_{\ell=-N}^{N} \left| D_{\ell,j} \right| \sup_{j} \left| X_{j} - Z_{j} \right|.$$

Using conditions (35) and (37),

$$\|\bar{T}X - \bar{T}Z\|_{\ell^{\infty} \times C[0, T]} \le \sigma_1 \|X - Z\|_{\ell^{\infty} \times C[0, T]}, \quad \left(\sigma_1 = \left|\frac{\rho}{\mu}\right| \bar{Q}\bar{E}\right). \tag{39}$$

The above form indicates that the operator $\bar{T}$ is continuous in the $\ell^\infty \times C[0, T]$ space. Since $\sigma_1 < 1$, $\bar{T}$ is a contraction operator. Hence, $\bar{T}$ has a unique fixed point, which is the unique solution of the NAS in the $\ell^\infty \times C[0, T]$ space.

#### **8. The Error of the Toeplitz Matrix Method**

In any practical use of the TMM, some estimate of the error size is needed. Hence, the following two definitions are used to calculate the error of the TMM.

**Definition 2.** *The local error $R_j$ is defined by:*

$$X(x) - X_{\xi}(x) = \sum_{\ell=-N}^{N} D_{\ell,j} \left[ X_{j}^{m} - X_{j,\xi}^{m} \right] + R_{j}, \quad (x = jh), \tag{40}$$

*where $X_{\xi}(x)$ denotes the approximate solution of (2).*

Equivalently, Equation (40) can be written as:

$$R\_j = \left| \int\_{-1}^{1} k(|y - x|) X^m(y) dy - \sum\_{\ell = -N}^{N} D\_{\ell, j} X\_j^m \right|.$$

**Definition 3.** *The TMM is convergent of order r on* $(-1, 1)$ *if, for sufficiently large N, there exists a constant* $\bar{D} > 0$ *independent of N such that*

$$\|X(x) - X_N(x)\| \le \bar{D} N^{-r}. \tag{41}$$
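In practice, the order $r$ can be estimated from error norms computed at two successive grid sizes, since $\|X - X_N\| \approx \bar{D} N^{-r}$ gives $r \approx \ln(e_{N_1}/e_{N_2})/\ln(N_2/N_1)$. A minimal sketch of this standard check (the error values below are synthetic placeholders, not taken from the tables in Section 9):

```python
import math

def estimated_order(err_coarse, err_fine, n_coarse, n_fine):
    """Estimate r from ||X - X_N|| ~ D * N**(-r) at two resolutions."""
    return math.log(err_coarse / err_fine) / math.log(n_fine / n_coarse)

# Hypothetical errors obeying err = D * N**(-r) with D = 0.8, r = 2.
errs = {10: 0.8 * 10**-2, 20: 0.8 * 20**-2}
r = estimated_order(errs[10], errs[20], 10, 20)
print(round(r, 6))  # -> 2.0
```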

We now present the following theorem, which relies on the NAS (22) having a unique solution.

**Theorem 4.** *The error $R_j$ becomes negligible as* $j \to \infty$:

$$\lim\_{j \to \infty} R\_j = 0.\tag{42}$$

**Proof.** Equation (40) is used as:

$$\left| R_{\xi} \right| \le \left| X_{j} - \left( X_{j} \right)_{\xi} \right| + \sum_{\ell=-N}^{N} \left| D_{\ell,j} \right| \sup_{j} \left| X_{j}^{m} - \left( X_{j}^{m} \right)_{\xi} \right|.$$

Using conditions (35) and (37), for each integer $\xi$, we obtain

$$\left\|R_{\xi}\right\|_{\ell^{\infty}\times C[0,T]} \leq (1+\bar{E}\bar{Q})\left\|X-X_{\xi}\right\|_{\ell^{\infty}\times C[0,T]}. \tag{43}$$

Since $\left\|X - X_{\xi}\right\|_{\ell^{\infty} \times C[0,T]} \to 0$ as $\xi \to \infty$, $\forall t \in [0, T]$, it follows that $R_{\xi} \to 0$. $\square$

#### **9. Applications**

For the numerical results and tables, we used Maple 2022.1 (Version 15, March 2022) on a 64-bit Windows 10 machine with 8 GB of RAM. In this section, the NMIE (1) was considered in the following special form:

$$\frac{\partial^{\alpha}\Phi(x,t)}{\partial t^{\alpha}} + 0.5\,\Phi(x,t) = g(x,t) + 0.33 \int_{-1}^{1} k(|y-x|)\, \Phi^{m}(y,t)\, dy, \quad \Phi(x,0) = x^{2}. \tag{44}$$

The exact solution of Equation (44) is $\Phi(x, t) = 0.5t^{0.5} + 0.25t^{1.5} + x^{2}$.

**Example 1** (For logarithmic kernel)**.**

$$k(|\mathbf{x} - \mathbf{y}|) = \ln(|\mathbf{x} - \mathbf{y}|). \tag{45}$$

After using (17)–(19), together with the well-known integral from Gradshteyn and Ryzhik [34]:

$$\int \mathbf{x}^{m} \ln(a+b\mathbf{x})d\mathbf{x} = \frac{1}{m+1} \left[ \mathbf{x}^{m+1} - \frac{(-a)^{m+1}}{b^{m+1}} \right] \ln(a+b\mathbf{x}) + \frac{1}{m+1} \sum\_{k=1}^{m+1} \frac{(-1)^{k} \mathbf{x}^{m-k+2} a^{k-1}}{(m-k+2)b^{k-1}},\tag{46}$$

we have

$$\begin{split} A_{J}(\ell h) &= \frac{h}{[J^{m}-(J+1)^{m}]} \left\{ \frac{1}{(m+1)} \Big[ \left[ (J+1)^{m+1} - \ell^{m+1} \right] \ln \left| (\ell-J-1)h \right| - \left[ J^{m+1} - \ell^{m+1} \right] \ln \left| (\ell-J)h \right| \Big] \right. \\ & \quad + (J+1)^{m} \left[ (\ell-J-1) \ln \left| (\ell-J-1)h \right| - (\ell-J) \ln \left| (\ell-J)h \right| + 1 \right] \\ & \quad \left. - \frac{1}{(m+1)} \sum_{k=1}^{m+1} \frac{\left[ (J+1)^{m-k+2} - J^{m-k+2} \right] \ell^{k-1}}{(m-k+2)} \right\} \end{split} \tag{47}$$

and

$$\begin{split} B_{J}(\ell h) &= \frac{h}{\left[ (J+1)^{m} - J^{m} \right]} \left\{ \frac{1}{(m+1)} \Big[ \left[ (J+1)^{m+1} - \ell^{m+1} \right] \ln \left| (\ell-J-1)h \right| - \left[ J^{m+1} - \ell^{m+1} \right] \ln \left| (\ell-J)h \right| \Big] \right. \\ & \quad + \ell^{m} \left[ (\ell-J-1)\ln \left| (\ell-J-1)h \right| - (\ell-J)\ln \left| (\ell-J)h \right| + 1 \right] \\ & \quad \left. - \frac{1}{(m+1)} \sum_{k=1}^{m+1} \frac{\left[ (J+1)^{m-k+2} - J^{m-k+2} \right] \ell^{k-1}}{(m-k+2)} \right\}. \end{split} \tag{48}$$

Using (47) and (48), the coefficients *D*,*<sup>J</sup>* of the nonlinear algebraic system (22) are presented as:

$$\begin{split} D_{\ell,J} &= \frac{h}{[J^m - (J+1)^m]} \left\{ \frac{1}{(m+1)} \Big[ \left[ (J+1)^{m+1} - \ell^{m+1} \right] \ln \left| (\ell - J - 1)h \right| - \left[ J^{m+1} - \ell^{m+1} \right] \ln \left| (\ell - J)h \right| \Big] \right. \\ & \quad + (J+1)^m \left[ (\ell - J - 1)\ln \left| (\ell - J - 1)h \right| - (\ell - J)\ln \left| (\ell - J)h \right| + 1 \right] \\ & \quad \left. - \frac{1}{(m+1)} \sum_{k=1}^{m+1} \frac{\left[ (J+1)^{m-k+2} - J^{m-k+2} \right] \ell^{k-1}}{(m-k+2)} \right\} \\ & \quad + \frac{h}{[J^m - (J-1)^m]} \left\{ \frac{1}{(m+1)} \Big[ \left[ J^{m+1} - \ell^{m+1} \right] \ln \left| (\ell - J)h \right| - \left[ (J-1)^{m+1} - \ell^{m+1} \right] \ln \left| (\ell - J + 1)h \right| \Big] \right. \\ & \quad + \ell^m \left[ (\ell - J)\ln \left| (\ell - J)h \right| - (\ell - J + 1)\ln \left| (\ell - J + 1)h \right| + 1 \right] \\ & \quad \left. - \frac{1}{(m+1)} \sum_{k=1}^{m+1} \frac{\left[ J^{m-k+2} - (J-1)^{m-k+2} \right] \ell^{k-1}}{(m-k+2)} \right\} \end{split} \tag{49}$$

In addition,

$$R = \left| \int\_{a}^{a+h} \ln \left| x - y \right| \varrho^{i}(y) dy - A\_{n}(x) \varrho^{i}(a) - B\_{n}(x) \varrho^{i}(a+h) \right|. \tag{50}$$

The formula (50) takes the form:

$$\begin{split} R &= \mathbb{C}h^{2m+1}, \\ \mathbb{C} &= \left| \left( \frac{\ell^{2m+1}}{2m+1} - \frac{\ell^{m+1}}{m+1} \right) \ln \left| \ell h \right| - \left( \frac{\ell^{2m+1}-1}{2m+1} - \frac{\ell^{m+1}-1}{m+1} \right) \ln \left| h(\ell-1) \right| \\ &- \sum\_{k=1}^{2m+1} \frac{\ell^{k-1}}{(2m-k+2)(2m+1)} + \sum\_{k=1}^{m+1} \frac{\ell^{k-1}}{(m-k+2)(m+1)} \right|. \end{split} \tag{51}$$

The linear case can be obtained from Equations (47)–(51) by letting *m* = 1.
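As a quick sanity check, the tabulated antiderivative (46) can be verified against direct numerical quadrature. The sketch below uses Simpson's rule with arbitrary illustrative parameters ($m = 2$, $a = 3$, $b = 2$ over $[0.5, 1.5]$); it only tests Equation (46), not the TMM itself:

```python
import math

def antiderivative(x, m, a, b):
    """Right-hand side of (46): an antiderivative of x**m * ln(a + b*x)."""
    first = (x**(m + 1) - (-a)**(m + 1) / b**(m + 1)) * math.log(a + b * x) / (m + 1)
    tail = sum((-1)**k * x**(m - k + 2) * a**(k - 1) / ((m - k + 2) * b**(k - 1))
               for k in range(1, m + 2)) / (m + 1)
    return first + tail

def simpson(f, lo, hi, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi) + sum((4 if i % 2 else 2) * f(lo + i * h) for i in range(1, n))
    return s * h / 3

m, a, b = 2, 3.0, 2.0
lhs = simpson(lambda x: x**m * math.log(a + b * x), 0.5, 1.5)
rhs = antiderivative(1.5, m, a, b) - antiderivative(0.5, m, a, b)
print(abs(lhs - rhs) < 1e-9)
```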

**Example 2** (For Carleman kernel)**.**

$$k(|x - y|) = |x - y|^{-\beta} \quad (0 < \beta < 1). \tag{52}$$

*The significance of the Carleman kernel was shown in the work of Arutiunian [35], who first formulated the plane contact problem of nonlinear plasticity theory and reduced it to a Fredholm integral equation of the first kind with a Carleman kernel.*

After using (17) to (19), we obtain

$$A_{J}(\ell h) = \frac{h^{1-\beta}}{J^{m} - (J+1)^{m}} \left\{ \sum_{k=0}^{m} \frac{m! \left[J^{m-k} |\ell - J|^{k+1-\beta} - (J+1)^{m-k} |\ell - J - 1|^{k+1-\beta}\right]}{(m-k)! (1-\beta)(2-\beta) \cdots (k+1-\beta)} + \frac{(J+1)^{m}}{(1-\beta)} \left[|\ell - J - 1|^{1-\beta} - |\ell - J|^{1-\beta}\right] \right\} \tag{53}$$

and

$$B_{J}(\ell h) = \frac{h^{1-\beta}}{[(J+1)^{m} - J^{m}]} \left\{ \sum_{k=0}^{m} \frac{m! \left[J^{m-k}|\ell-J|^{k+1-\beta} - (J+1)^{m-k}|\ell-J-1|^{k+1-\beta}\right]}{(m-k)!(1-\beta)(2-\beta)\cdots(k+1-\beta)} + \frac{J^{m}}{(1-\beta)} \left[|\ell-J-1|^{1-\beta} - |\ell-J|^{1-\beta}\right] \right\} \tag{54}$$

Therefore, the Toeplitz matrix entries $D_{\ell,J}$ become:

$$\begin{split} D_{\ell,J} = h^{1-\beta} & \left\{ \frac{1}{[J^m - (J+1)^m]} \left[ \sum_{k=0}^m \frac{m! \left[ J^{m-k} |\ell-J|^{k+1-\beta} - (J+1)^{m-k} |\ell-J-1|^{k+1-\beta} \right]}{(m-k)! (1-\beta)(2-\beta)\cdots(k+1-\beta)} + \frac{(J+1)^m}{(1-\beta)} \left[ |\ell-J-1|^{1-\beta} - |\ell-J|^{1-\beta} \right] \right] \right. \\ & \left. + \frac{1}{[J^m - (J-1)^m]} \left[ \sum_{k=0}^m \frac{m! \left[ (J-1)^{m-k} |\ell-J+1|^{k+1-\beta} - J^{m-k} |\ell-J|^{k+1-\beta} \right]}{(m-k)! (1-\beta)(2-\beta)\cdots(k+1-\beta)} + \frac{(J-1)^m}{(1-\beta)} \left[ |\ell-J|^{1-\beta} - |\ell-J+1|^{1-\beta} \right] \right] \right\} \end{split} \tag{55}$$

The error *R* is shown as:

$$|R| \leq C h^{2m+1-\beta}, \quad C = \left|\sum_{k=0}^{m} \frac{m!\, \ell^{k+1-\beta} \left| 1 - \frac{1}{\ell} \right|^{k+1-\beta}}{(m-k)! (1-\beta)(2-\beta)\cdots(k+1-\beta)} - \sum_{k=0}^{2m} \frac{(2m)!\, \ell^{k+1-\beta} \left| 1 - \frac{1}{\ell} \right|^{k+1-\beta}}{(2m-k)! (1-\beta)(2-\beta)\cdots(k+1-\beta)} \right|. \tag{56}$$

**Example 3** (Suppose the Hilbert kernel)**.**

$$k(|x - y|) = \cot\left(\left|\frac{x - y}{2}\right|\right), \rho(\pm \pi, t) = 0. \tag{57}$$

*The exact solution of Equation (57) is $\phi(x, t) = 0.5t^{0.5} + 0.25t^{1.5}\sin x$.*

*The integral equation based on the Hilbert kernel, together with the crack problem used in elasticity theory are discussed in [34].*

$$\int_{nh}^{(n+1)h} x^{m} \cot x\, dx = \left[ \sum_{s=0}^{\infty} \frac{(-1)^{s} 2^{2s} B_{2s}}{(m+2s)(2s)!}\, x^{m+2s} \right]_{nh}^{(n+1)h} \quad (m \ge 1,\ |x| < \pi), \tag{58}$$

where $B_{2s}$ denotes the Bernoulli numbers. The Toeplitz matrix entries are then:

$$\begin{split} D\_{\mathbb{J},\ell} &= 2\left\{ (\ell - J + 1) \ln \left| \sin \frac{h(\ell - J + 1)}{2} \right| - 2(\ell - J) \ln \left| \sin \frac{h(\ell - J)}{2} \right| + (\ell - J - 1) \ln \left| \sin \frac{h(\ell - J - 1)}{2} \right| \right\} \\ &- \sum\_{s=0}^{\infty} \frac{(-1)^s 2^{2s} B\_{2s}}{(m + 2s)(2s)!} \times \left\{ (\ell - J + 1)^{1 + 2s} - 2(\ell - J)^{1 + 2s} + (\ell - J - 1)^{1 + 2s} \right\}, m = 1, 2, \dots, M. \end{split}$$
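The series expansion (58) can be checked numerically. The sketch below generates the Bernoulli numbers $B_{2s}$ from the standard recurrence $B_n = -\frac{1}{n+1}\sum_{k=0}^{n-1}\binom{n+1}{k}B_k$ and compares the truncated series antiderivative with Simpson quadrature of $x^m \cot x$ over an arbitrary subinterval of $(0, \pi)$:

```python
import math
from fractions import Fraction

def bernoulli(n_max):
    """Bernoulli numbers B_0..B_n_max via the standard recurrence (B_1 = -1/2)."""
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(-Fraction(1, n + 1)
                 * sum(math.comb(n + 1, k) * B[k] for k in range(n)))
    return B

def series_antiderivative(x, m, terms=12):
    """Partial sum of (58): sum over s of (-1)^s 2^(2s) B_2s x^(m+2s) / ((m+2s)(2s)!)."""
    B = bernoulli(2 * terms)
    return sum((-1)**s * 2**(2 * s) * float(B[2 * s]) * x**(m + 2 * s)
               / ((m + 2 * s) * math.factorial(2 * s)) for s in range(terms))

def simpson(f, lo, hi, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi) + sum((4 if i % 2 else 2) * f(lo + i * h) for i in range(1, n))
    return s * h / 3

m = 1
lhs = simpson(lambda x: x**m / math.tan(x), 0.2, 0.5)
rhs = series_antiderivative(0.5, m) - series_antiderivative(0.2, m)
print(abs(lhs - rhs) < 1e-9)
```

The series converges quickly here because the terms shrink roughly like $(x/\pi)^{2s}$, so a dozen terms suffice well inside $(0, \pi)$.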

From the above, the results of using the logarithmic kernel, the effect of time, and the arithmetic error in the first example are described numerically in Table 1 and Figure 1a,b for the nonlinear case (*m* = 2) and the linear case (*m* = 1). In the second example, the results of using the Carleman kernel over time for the nonlinear and linear cases, together with the resulting arithmetic error, are given in Table 2 and Figure 2a,b, respectively, while Table 3 and Figure 3a,b present the numerical results and arithmetic errors for the nonlinear and linear cases, respectively, for different values of the Carleman kernel coefficient. In the third example, with the Hilbert kernel, the numerical error of the nonlinear and linear cases at *t* = 0.1 is shown in Figure 4a,b, respectively, and the corresponding errors for *t* = 0.4 are described in Figure 5a,b.

**Figure 1.** (**a**) Shows the corresponding error of NMIE with a logarithmic kernel at different times, whereas (**b**) illustrates the corresponding error for the linear case at the same times.

**Figure 2.** (**a**) Shows the error for the nonlinear case at *β* = 0.01 and (**b**) shows the corresponding error for the linear case at *β* = 0.01.

**Figure 3.** (**a**,**b**) Shows the error of Equation (44) respectively for the nonlinear and linear cases of Carleman coefficients at t = 0.2 and N = 20.

**Figure 4.** (**a**,**b**) Shows the error of Equation (44) using the Hilbert kernel for the nonlinear and the linear cases at time *t* = 0.1, N = 20.

**Figure 5.** (**a**,**b**) Shows the error of Equation (44) with Hilbert kernel for the nonlinear and the linear cases at time t = 0.4, N = 20.

**Table 1.** Linear and nonlinear solutions of MIE (44) with a logarithmic kernel, along with the corresponding errors using the TMM.



**Table 2.** Nonlinear and linear cases for MIE (44) with the Carleman kernel at different times, *β* = 0.01.

**Table 3.** Shows the nonlinear and linear cases for Equation (44) for different values of Carleman coefficients at time *t* = 0.2, N = 20.


#### **10. Conclusions**

The following conclusions were drawn:

1. In this paper, the existence of a unique solution is proven using the Banach fixed point theorem. The reader could also use the method of successive approximations (the Picard method) to arrive at the same conclusion. In the homogeneous case of Equation (1), however, the successive approximation method fails to prove the existence of a unique solution, and only the Banach fixed point theorem applies.
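When the contraction condition $\sigma_1 = |\rho/\mu|\bar{Q}\bar{E} < 1$ holds, the successive-approximation route mentioned above amounts to iterating $X \leftarrow \bar{T}X$ on the discretized system. The sketch below uses a hypothetical Toeplitz-structured kernel, forcing term, and parameters (not the paper's actual coefficients), chosen only so that the contraction condition is satisfied:

```python
import math

# Picard iteration for a system of the form
#   mu * X_j = H_j + rho * sum_l D[l][j] * X_l**m,
# i.e. the fixed-point equation X = T-bar(X).  All data below are illustrative.
N, m = 21, 2
mu, rho = 1.0, 0.01

# Toeplitz-structured coefficients: D[l][j] depends only on l - j.
D = [[1.0 / (1 + abs(l - j)) for j in range(N)] for l in range(N)]
H = [1.0 + 0.5 * math.sin(j) for j in range(N)]

X = [0.0] * N
for _ in range(200):                     # X <- T-bar(X)
    X_new = [(H[j] + rho * sum(D[l][j] * X[l]**m for l in range(N))) / mu
             for j in range(N)]
    converged = max(abs(a - b) for a, b in zip(X_new, X)) < 1e-12
    X = X_new
    if converged:
        break

# Residual of the nonlinear algebraic system at the computed fixed point.
residual = max(abs(mu * X[j] - H[j] - rho * sum(D[l][j] * X[l]**m for l in range(N)))
               for j in range(N))
print(residual < 1e-10)
```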


$$v\Phi(\mathbf{x},t+\delta t) = \mathbf{g}(\mathbf{x},t) + \lambda \int\_{\Omega} k(|\mathbf{x}-y|) \Phi^m(y,t) dy, \quad \left(\mu = v \frac{(\delta t)^a}{\Gamma(a)}\right). \tag{59}$$

The delaying or advancing of time reveals natural phenomena, especially in the presence of thermoelectricity and magnetic media. Some applications of fractional integro-differential equations are found in physics, chemistry, economics, and biology [12,29]. Equation (59) gives the physical meaning of the fractional-in-time equation as the first fractional approximation of a time-lag equation, where the lag may be before or after real time.

(a) $\frac{\partial}{\partial x} k(|y - x|) = \frac{1}{|y - x|}$, the Cauchy kernel.

$$\text{(b)}\qquad \frac{\partial^2}{\partial x^2}k(|y-x|) = \left(\frac{1}{|y-x|^2}\right)\text{ Strong singular kernel}$$

(c) The Carleman kernel can also be decomposed as:

$$\ln|y-x| = \underbrace{\left[ (\ln|y-x|)|y-x|^{\upsilon} \right]}\_{\mathcal{U}(y,x)} |y-x|^{-\upsilon}$$

where *U*(*y*, *x*) is a continuous function.


#### **11. Future Work**

Future work will attempt to solve Equation (1) when the coefficients of the equation are variable. This will lead to solutions for many applications in the sciences related to nonlinear elasticity.

**Author Contributions:** Supervision, M.A.A.; Project administration, S.E.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the project number: 22UQU4282396DSR01, through the Deanship for Research & Innovation, Ministry of Education in Saudi Arabia.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data is contained within the article.

**Acknowledgments:** The authors would like to thank the reviewers for their suggestions that helped improve the research. The authors thank the Deanship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number: 22UQU4282396DSR01.

**Conflicts of Interest:** The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Existence of Solutions to a System of Riemann-Liouville Fractional Differential Equations with Coupled Riemann-Stieltjes Integrals Boundary Conditions**

**Yuan Ma and Dehong Ji \***

College of Science, Tianjin University of Technology, Tianjin 300384, China **\*** Correspondence: jdh200298@163.com

**Abstract:** A general system of fractional differential equations with coupled fractional Stieltjes integrals and a Riemann–Liouville fractional integral in the boundary conditions is studied in the context of pattern formation. The fractional differential system is transformed into an equivalent integral operator in order to obtain the existence and uniqueness of solutions. The contraction mapping principle in Banach space and the Leray–Schauder alternative theorem are applied. Finally, we give two applications to illustrate our theoretical results.

**Keywords:** coupled system; Riemann–Liouville fractional derivative; contraction mapping principle in Banach space; alternative theorem of Leray–Schauder

**MSC:** 34A08, 26A33

#### **1. Introduction**

A general system of fractional differential equations

$$\begin{cases} D\_{0^{+}}^{a\_{1}}(D\_{0^{+}}^{\delta\_{1}}\mathfrak{x}(t)) + f(t, \mathfrak{x}(t), y(t)) = 0, t \in [0, 1],\\ D\_{0^{+}}^{a\_{2}}(D\_{0^{+}}^{\delta\_{2}}\mathfrak{y}(t)) + g(t, \mathfrak{x}(t), y(t)) = 0, t \in [0, 1], \end{cases} \tag{1}$$

supplemented with the coupled nonlocal integral boundary conditions is considered:

$$\begin{cases} D\_{0^{+}}^{\delta\_{1}}\mathbf{x}(0) = 0, \mathbf{x}(0) = 0, \mathbf{x}(1) = \gamma\_{1}I\_{0^{+}}^{\delta\_{1}}\mathbf{y}(\xi) + \sum\_{i=1}^{p} \int\_{0}^{1} \mathbf{y}(\tau) d\mathcal{H}\_{i}(\tau), \\ D\_{0^{+}}^{\delta\_{2}}\mathbf{y}(0) = 0, \mathbf{y}(0) = 0, \mathbf{y}(1) = \gamma\_{2}I\_{0^{+}}^{\delta\_{2}}\mathbf{x}(\eta) + \sum\_{j=1}^{q} \int\_{0}^{1} \mathbf{x}(\tau) d\mathcal{K}\_{j}(\tau), \end{cases} \tag{2}$$

where $\alpha_1, \alpha_2 \in (0, 1]$, $\beta_1, \beta_2 \in (1, 2]$, $p, q \in \mathbb{N}$, $\gamma_1, \gamma_2, \delta_1, \delta_2 > 0$, $0 < \xi, \eta < 1$, and $\mathcal{K}_j(t)$, $j = 1, \cdots, q$, $\mathcal{H}_i(t)$, $i = 1, \cdots, p$, are bounded variation functions. Both $f$ and $g$ are nonlinear functions.

Coupled boundary conditions appear in the study of reaction-diffusion equations [1], heat equations [2] and mathematical biology [3]. Boundary value problems with coupled boundary conditions constitute a very interesting and important class of problems. Recently, much attention has been focused on the study of the existence of solutions for boundary value problems with coupled boundary conditions, see [4–13].

In [14], Tudorache and Luca investigated the system of Riemann–Liouville fractional differential equations with coupled integral boundary conditions

$$\begin{cases} D\_{0^+}^\mathfrak{a} \mathfrak{x}(t) + f(t, \mathfrak{x}(t), \mathfrak{y}(t), I\_{0^+}^{\theta\_1} \mathfrak{x}(t), I\_{0^+}^{\sigma\_1} \mathfrak{y}(t)) = 0, t \in (0, 1), \\ D\_{0^+}^\mathfrak{g} \mathfrak{y}(t) + \mathfrak{g}(t, \mathfrak{x}(t), \mathfrak{y}(t), I\_{0^+}^{\theta\_2} \mathfrak{x}(t), I\_{0^+}^{\sigma\_2} \mathfrak{y}(t)) = 0, t \in (0, 1), \end{cases}$$

**Citation:** Ma, Y.; Ji, D. Existence of Solutions to a System of Riemann-Liouville Fractional Differential Equations with Coupled Riemann-Stieltjes Integrals Boundary Conditions. *Fractal Fract.* **2022**, *6*, 543. https://doi.org/10.3390/ fractalfract6100543

Academic Editors: Libo Feng, Yang Liu, Lin Liu and Stanislaw Migorski

Received: 13 August 2022 Accepted: 20 September 2022 Published: 26 September 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

$$\begin{cases} \mathbf{x}(0) = \mathbf{x}'(0) = \dots = \mathbf{x}^{(n-2)}(0) = 0, \mathbf{D}\_{0^+}^{\gamma\_0} \mathbf{x}(1) = \sum\_{i=1}^p \int\_0^1 \mathbf{D}\_{0^+}^{\gamma\_i} \mathbf{y}(t) d\mathcal{H}\_i(t), \\ \mathbf{y}(0) = \mathbf{y}'(0) = \dots = \mathbf{y}^{(m-2)}(0) = 0, \mathbf{D}\_{0^+}^{\delta\_0} \mathbf{y}(1) = \sum\_{i=1}^q \int\_0^1 \mathbf{D}\_{0^+}^{\delta\_i} \mathbf{x}(t) d\mathcal{K}\_i(t), \end{cases}$$

where $\sigma_1, \theta_1, \theta_2, \sigma_2 > 0$, and $f$ and $g$ are nonlinear functions. The contraction mapping principle in Banach space, the Leray–Schauder alternative theorem and a Krasnosel'skii-type theorem are adopted.

In [15], Bashir Ahmad and Rodica Luca considered the system of fractional integrodifferential equations

$$\begin{cases} (^cD^a + \lambda^c D^{a-1})u(t) = f(t, u(t), v(t), ^cD^{p\_1}v(t), I^{q\_1}v(t)), t \in (0, 1), \\ (^cD^{\mathfrak{f}} + \mu^c D^{\mathfrak{f}-1})v(t) = \mathfrak{g}(t, u(t), v(t), ^cD^{p\_2}u(t), I^{q\_2}u(t)), t \in (0, 1), \end{cases}$$

with the coupled boundary conditions

$$\begin{cases} \boldsymbol{u}(0) = \boldsymbol{u}'(0) = \boldsymbol{u}''(0) = 0, \boldsymbol{u}(1) = \int\_0^1 \boldsymbol{u}(s) d\mathcal{H}\_1(s) + \int\_0^1 \boldsymbol{v}(s) d\mathcal{H}\_2(s),\\ \boldsymbol{v}(0) = \boldsymbol{v}'(0) = \boldsymbol{v}''(0) = 0, \boldsymbol{v}(1) = \int\_0^1 \boldsymbol{u}(s) d\mathcal{K}\_1(s) + \int\_0^1 \boldsymbol{v}(s) d\mathcal{K}\_2(s). \end{cases}$$

On the other hand, boundary value problems with Riemann–Liouville fractional integral boundary conditions have attracted much attention.

In [16], Laadjal, M. Al-Mdallal and Jarad discussed the coupled system of fractional Langevin equations

$$\begin{cases} ^cD^{a\_1}(^cD^{\beta\_1} + \lambda)\psi\_1(t) = f(t, \psi\_1(t), \psi\_2(t)), t \in \mathcal{J}, 0 < a\_1 \le 1 < \beta\_1 \le 2, \\\ ^cD^{a\_2}(^cD^{\beta\_2} + k)\psi\_2(t) = g(t, \psi\_1(t), \psi\_2(t)), t \in \mathcal{J}, 0 < a\_2 \le 1 < \beta\_2 \le 2, \end{cases}$$

with nonlocal nonseparated boundary conditions

$$\begin{cases} \psi\_1(0) = a\_{0'} \psi\_2(0) = b\_{0'} \psi'\_1(0) = \psi'\_2(0) = 0, \\ \psi\_1(\xi) = a(^c D^p \psi\_2)(\mu\_1), \xi \in (0, 1], \mu\_1 \in \mathfrak{J}, 0 < p < \beta\_2, \\ \psi\_2(\eta) = b(I^q \psi\_1)(\mu\_2), \eta \in (0, 1], \mu\_2 \in \mathfrak{J}, q \ge 0. \end{cases}$$

In [17], Zhang, Li and Lu considered the fractional differential system with Riemann– Liouville fractional integral boundary conditions

$$\begin{cases} D\_{0^{+}}^{a\_{1}}u(t) = f\_{1}(t, u(t), v(t), D\_{0^{+}}^{\rho\_{1}}u(t), D\_{0^{+}}^{\rho\_{2}}v(t)), t \in (0, 1), \\ D\_{0^{+}}^{a\_{2}}v(t) = f\_{2}(t, u(t), v(t), D\_{0^{+}}^{\rho\_{1}}u(t), D\_{0^{+}}^{\rho\_{2}}v(t)), t \in (0, 1), \end{cases}$$

$$\begin{cases} u(0) = u'(0) = 0, \ v(0) = v'(0) = 0, \\ u(1) = \gamma\_{1} I\_{0^{+}}^{\delta\_{1}}u(\eta\_{1}), \ v(1) = \gamma\_{2} I\_{0^{+}}^{\delta\_{2}}v(\eta\_{2}). \end{cases}$$

However, boundary value problems with fractional Stieltjes integrals and Riemann–Liouville fractional integrals in the boundary conditions have not been discussed until now. In this paper, we investigate the existence and uniqueness of solutions for the system (1), (2). As far as the authors know, the contraction mapping principle in Banach space and the Leray–Schauder alternative theorem have not been developed for boundary value problems with fractional Stieltjes integrals and Riemann–Liouville fractional integrals in the boundary conditions, so it is interesting and important to discuss the problem (1), (2).

The organization of this paper is as follows. In Section 2, we present some useful basic definitions and lemmas. Section 3 gives the uniqueness and existence of solutions for the system. At the end of the paper, two examples that illustrate our results are given.

#### **2. Preliminary**

For convenience, we first present some useful basic lemmas of fractional calculus [18] in this part.

**Definition 1** ([18])**.** *For a function k* : (0, +∞) → *R*,

$$I_{0+}^{\beta}k(\tau) = \frac{1}{\Gamma(\beta)} \int_0^{\tau} (\tau - s)^{\beta-1} k(s)\,ds,$$

*is defined as the β*(*β* > 0) *order Riemann–Liouville fractional integral of the function k.*
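Definition 1 can be checked against the classical identity $I_{0+}^{\beta}\, s^p = \frac{\Gamma(p+1)}{\Gamma(p+1+\beta)}\, \tau^{p+\beta}$. The sketch below evaluates the integral numerically; the substitution $\tau - s = w^{1/\beta}$, which removes the endpoint singularity, is our own device for this check and not part of the paper:

```python
import math

def rl_fractional_integral_of_power(tau, beta, p, n=20000):
    """Evaluate (I_{0+}^beta applied to s**p) at tau.
    Substituting tau - s = w**(1/beta) turns the weakly singular integrand
    into the smooth function (tau - w**(1/beta))**p / beta on [0, tau**beta]."""
    upper = tau ** beta
    h = upper / n
    # midpoint rule on the transformed, smooth integrand
    total = sum((tau - (h * (i + 0.5)) ** (1.0 / beta)) ** p for i in range(n))
    return total * h / (beta * math.gamma(beta))

beta, p, tau = 0.5, 2, 1.0
numeric = rl_fractional_integral_of_power(tau, beta, p)
exact = math.gamma(p + 1) / math.gamma(p + 1 + beta) * tau ** (p + beta)
print(abs(numeric - exact) < 1e-6)
```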

**Definition 2** ([18])**.** *For a function k* : (0, +∞) → *R*,

$$D_{0+}^{\beta}k(\tau) = \frac{1}{\Gamma(n-\beta)}\left(\frac{d}{d\tau}\right)^n \int_0^\tau (\tau-s)^{n-\beta-1}k(s)\,ds,$$

*is defined as the β*(*β* > 0) *order Riemann–Liouville fractional derivative of the function k, in this place n* = [*β*] + 1.

**Lemma 1** ([18])**.** *Assume that v* ∈ *C*(0, 1) ∩ *L*(0, 1) *with a fractional derivative of order β* > 0 *that belongs to C*(0, 1) ∩ *L*(0, 1). *Then,*

$$I_{0+}^{\beta}D_{0+}^{\beta}v(\tau) = v(\tau) + c_1\tau^{\beta - 1} + c_2\tau^{\beta - 2} + \cdots + c_N\tau^{\beta - N},$$

*for some ci* ∈ *R*, *i* = 1, 2, ··· , *N*, *where N is the smallest integer greater than or equal to β*.

**Lemma 2** ([19])**.** *Let T* : *X* → *X be continuous and compact. Denote M*(*T*) = {*u* ∈ *X* : *u* = *mT*(*u*) *for some* 0 < *m* < 1}*. Then, one of the following conclusions is true: either T has at least one fixed point in X, or the set M*(*T*) *is unbounded.*

For convenience, denote


$$
\Delta_1 = \frac{\Gamma(\beta_2)}{\Gamma(\beta_2 + \delta_1)} \gamma_1 \xi^{\beta_2 + \delta_1 - 1} + \sum_{i=1}^p \int_0^1 \tau^{\beta_2 - 1} d\mathcal{H}_i(\tau),
$$

$$
\Delta\_2 = \frac{\Gamma(\beta\_1)}{\Gamma(\beta\_1 + \delta\_2)} \gamma\_2 \eta^{\beta\_1 + \delta\_2 - 1} + \sum\_{j=1}^q \int\_0^1 \tau^{\beta\_1 - 1} d\mathcal{K}\_j(\tau).
$$
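For a concrete feel for these quantities, $\Delta_1$, $\Delta_2$ and $\Delta = 1 - \Delta_1\Delta_2$ can be evaluated directly once the measures are fixed. The sketch below takes a single measure $d\mathcal{H}(\tau) = d\mathcal{K}(\tau) = d\tau$ (so $\int_0^1 \tau^{\beta-1}\,d\tau = 1/\beta$) and hypothetical parameter values:

```python
import math

# Illustrative evaluation of Delta_1, Delta_2 and Delta = 1 - Delta_1 * Delta_2
# for dH(tau) = dK(tau) = d(tau); all parameter values are hypothetical.
beta1, beta2 = 1.5, 1.5
delta1, delta2 = 0.5, 0.5
gamma1, gamma2 = 1.0, 1.0
xi, eta = 0.5, 0.5

Delta1 = (math.gamma(beta2) / math.gamma(beta2 + delta1)
          * gamma1 * xi**(beta2 + delta1 - 1) + 1.0 / beta2)
Delta2 = (math.gamma(beta1) / math.gamma(beta1 + delta2)
          * gamma2 * eta**(beta1 + delta2 - 1) + 1.0 / beta1)
Delta = 1.0 - Delta1 * Delta2
print(Delta)  # nonzero, as Lemma 3 below requires
```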

**Lemma 3.** *Suppose $x, y \in C[0,1]$, $\Delta = 1 - \Delta_1\Delta_2 \neq 0$, $\beta_1, \beta_2 \in (1, 2]$, $\alpha_1, \alpha_2 \in (0, 1]$, $p, q \in \mathbb{N}$ (with $\overline{\alpha}_1 := \alpha_1 + \beta_1 + \delta_2$, $\overline{\alpha}_2 := \alpha_2 + \beta_2 + \delta_1$), $\gamma_1, \gamma_2, \delta_1, \delta_2 > 0$, $0 < \xi, \eta < 1$, $\mathcal{K}_j(t)$, $j = 1, \cdots, q$, $\mathcal{H}_i(t)$, $i = 1, \cdots, p$, are bounded variation functions, and $h$, $k$ are continuous and integrable on the interval $(0, 1)$. Then, the functional expressions*

$$\begin{split} x(t) &= -\frac{1}{\Gamma(\alpha_1+\beta_1)} \int_0^t (t-s)^{\alpha_1+\beta_1-1} h(s)\,ds \\ &\quad + \frac{t^{\beta_1-1}}{\Delta} \left[ \frac{1}{\Gamma(\alpha_1+\beta_1)} \int_0^1 (1-s)^{\alpha_1+\beta_1-1} h(s)\,ds - \frac{\gamma_1}{\Gamma(\overline{\alpha}_2)} \int_0^{\xi} (\xi-s)^{\overline{\alpha}_2-1} k(s)\,ds \right. \\ &\quad - \frac{1}{\Gamma(\alpha_2+\beta_2)} \sum_{i=1}^{p} \int_0^1 \left( \int_0^{\tau} (\tau-s)^{\alpha_2+\beta_2-1} k(s)\,ds \right) d\mathcal{H}_i(\tau) \\ &\quad + \Delta_1 \left( \frac{1}{\Gamma(\alpha_2+\beta_2)} \int_0^1 (1-s)^{\alpha_2+\beta_2-1} k(s)\,ds - \frac{\gamma_2}{\Gamma(\overline{\alpha}_1)} \int_0^{\eta} (\eta-s)^{\overline{\alpha}_1-1} h(s)\,ds \right. \\ &\quad \left.\left. - \frac{1}{\Gamma(\alpha_1+\beta_1)} \sum_{j=1}^{q} \int_0^1 \left( \int_0^{\tau} (\tau-s)^{\alpha_1+\beta_1-1} h(s)\,ds \right) d\mathcal{K}_j(\tau) \right) \right], \end{split} \tag{3}$$

$$\begin{split} y(t) &= -\frac{1}{\Gamma(\alpha_2+\beta_2)} \int_0^t (t-s)^{\alpha_2+\beta_2-1} k(s)\,ds \\ &\quad + \frac{t^{\beta_2-1}}{\Delta} \left[ \frac{1}{\Gamma(\alpha_2+\beta_2)} \int_0^1 (1-s)^{\alpha_2+\beta_2-1} k(s)\,ds - \frac{\gamma_2}{\Gamma(\overline{\alpha}_1)} \int_0^{\eta} (\eta-s)^{\overline{\alpha}_1-1} h(s)\,ds \right. \\ &\quad - \frac{1}{\Gamma(\alpha_1+\beta_1)} \sum_{j=1}^{q} \int_0^1 \left( \int_0^{\tau} (\tau-s)^{\alpha_1+\beta_1-1} h(s)\,ds \right) d\mathcal{K}_j(\tau) \\ &\quad + \Delta_2 \left( \frac{1}{\Gamma(\alpha_1+\beta_1)} \int_0^1 (1-s)^{\alpha_1+\beta_1-1} h(s)\,ds - \frac{\gamma_1}{\Gamma(\overline{\alpha}_2)} \int_0^{\xi} (\xi-s)^{\overline{\alpha}_2-1} k(s)\,ds \right. \\ &\quad \left.\left. - \frac{1}{\Gamma(\alpha_2+\beta_2)} \sum_{i=1}^{p} \int_0^1 \left( \int_0^{\tau} (\tau-s)^{\alpha_2+\beta_2-1} k(s)\,ds \right) d\mathcal{H}_i(\tau) \right) \right] \end{split} \tag{4}$$

*give the solution of the system*

$$\begin{cases} D\_{0^{+}}^{a\_{1}}(D\_{0^{+}}^{\mathcal{S}\_{1}}x(t)) + h(t) = 0, t \in (0,1), \\ D\_{0^{+}}^{a\_{2}}(D\_{0^{+}}^{\mathcal{S}\_{2}}y(t)) + k(t) = 0, t \in (0,1). \end{cases} \tag{5}$$

*Furthermore,* $(x(t), y(t))$ *satisfies the boundary conditions (2).*

**Proof.** By Lemma 1, the solutions of the systems (2), (5) are given by

$$\mathbf{x}(t) = -I\_{0^{+}}^{a\_{1}+\beta\_{1}}h(t) + c\_{1}t^{\beta\_{1}-1},\tag{6}$$

$$y(t) = -I\_{0^+}^{a\_2 + \beta\_2}k(t) + d\_1 t^{\beta\_2 - 1},\tag{7}$$

where $c_1, d_1 \in \mathbb{R}$. From the boundary conditions $x(1) = \gamma_1 I_{0^+}^{\delta_1} y(\xi) + \sum_{i=1}^{p} \int_0^1 y(\tau)\, d\mathcal{H}_i(\tau)$ and $y(1) = \gamma_2 I_{0^+}^{\delta_2} x(\eta) + \sum_{j=1}^{q} \int_0^1 x(\tau)\, d\mathcal{K}_j(\tau)$, we get

$$\begin{aligned} -I\_{0^{+}}^{a\_{1}+\beta\_{1}}h(1) + c\_{1} &= -\gamma\_{1}I\_{0^{+}}^{\overline{\alpha}\_{2}}k(\xi) + \gamma\_{1}d\_{1}\frac{\Gamma(\beta\_{2})}{\Gamma(\beta\_{2}+\delta\_{1})}\xi^{\beta\_{2}+\delta\_{1}-1} \\ &+ \sum\_{i=1}^{p} \int\_{0}^{1} \left(d\_{1}\,\tau^{\beta\_{2}-1} - I\_{0^{+}}^{\alpha\_{2}+\beta\_{2}}k(\tau)\right) d\mathcal{H}\_{i}(\tau), \end{aligned}$$

$$\begin{aligned} -I\_{0^{+}}^{a\_{2}+\beta\_{2}}k(1) + d\_{1} &= -\gamma\_{2}I\_{0^{+}}^{\overline{\alpha}\_{1}}h(\eta) + \gamma\_{2}c\_{1}\frac{\Gamma(\beta\_{1})}{\Gamma(\beta\_{1}+\delta\_{2})}\eta^{\beta\_{1}+\delta\_{2}-1} \\ &+ \sum\_{j=1}^{q} \int\_{0}^{1} \left(c\_{1}\tau^{\beta\_{1}-1} - I\_{0^{+}}^{a\_{1}+\beta\_{1}}h(\tau)\right) d\mathcal{K}\_{j}(\tau). \end{aligned}$$

Solving the above system, we find that

$$\begin{split} c\_1 &= \frac{1}{\Delta} \Big( I\_{0^+}^{\alpha\_1+\beta\_1} h(1) - \gamma\_1 I\_{0^+}^{\overline{\alpha}\_2} k(\xi) - \sum\_{i=1}^{p} \int\_0^1 I\_{0^+}^{\alpha\_2+\beta\_2} k(\tau)\,d\mathcal{H}\_i(\tau) \Big) \\ &\quad + \frac{\Delta\_1}{\Delta} \Big( I\_{0^+}^{\alpha\_2+\beta\_2} k(1) - \gamma\_2 I\_{0^+}^{\overline{\alpha}\_1} h(\eta) - \sum\_{j=1}^{q} \int\_0^1 I\_{0^+}^{\alpha\_1+\beta\_1} h(\tau)\,d\mathcal{K}\_j(\tau) \Big), \\ d\_1 &= \frac{1}{\Delta} \Big( I\_{0^+}^{\alpha\_2+\beta\_2} k(1) - \gamma\_2 I\_{0^+}^{\overline{\alpha}\_1} h(\eta) - \sum\_{j=1}^{q} \int\_0^1 I\_{0^+}^{\alpha\_1+\beta\_1} h(\tau)\,d\mathcal{K}\_j(\tau) \Big) \\ &\quad + \frac{\Delta\_2}{\Delta} \Big( I\_{0^+}^{\alpha\_1+\beta\_1} h(1) - \gamma\_1 I\_{0^+}^{\overline{\alpha}\_2} k(\xi) - \sum\_{i=1}^{p} \int\_0^1 I\_{0^+}^{\alpha\_2+\beta\_2} k(\tau)\,d\mathcal{H}\_i(\tau) \Big). \end{split}$$

Substituting the values of $c\_1$, $d\_1$ into (6) and (7), we obtain the integral expressions (3) and (4), which completes the proof.

The Banach space $E = C[0,1]$ is endowed with the norm $\|\omega\| = \max\_{0 \le \tau \le 1} |\omega(\tau)|$.

Let $Y = E \times E$. Then $Y$ with the norm $\|(x,y)\|\_Y = \|x\| + \|y\|$ is a Banach space. The operator $T : Y \to Y$ is defined by $T(x,y)(t) = (T\_1(x,y)(t), T\_2(x,y)(t))$, where

$$\begin{split} T\_1(x,y)(t) &= \frac{t^{\beta\_1-1}}{\Delta} \left[ \frac{1}{\Gamma(\alpha\_1+\beta\_1)} \int\_0^1 (1-s)^{\alpha\_1+\beta\_1-1} f(s,x(s),y(s))\,ds \right. \\ &\quad - \frac{\gamma\_1}{\Gamma(\overline{\alpha}\_2)} \int\_0^{\xi} (\xi-s)^{\overline{\alpha}\_2-1} g(s,x(s),y(s))\,ds \\ &\quad - \frac{1}{\Gamma(\alpha\_2+\beta\_2)} \sum\_{i=1}^{p} \int\_0^1 \left( \int\_0^{\tau} (\tau-s)^{\alpha\_2+\beta\_2-1} g(s,x(s),y(s))\,ds \right) d\mathcal{H}\_i(\tau) \\ &\quad + \Delta\_1 \left( \frac{1}{\Gamma(\alpha\_2+\beta\_2)} \int\_0^1 (1-s)^{\alpha\_2+\beta\_2-1} g(s,x(s),y(s))\,ds \right. \\ &\quad - \frac{\gamma\_2}{\Gamma(\overline{\alpha}\_1)} \int\_0^{\eta} (\eta-s)^{\overline{\alpha}\_1-1} f(s,x(s),y(s))\,ds \\ &\quad \left. \left. - \frac{1}{\Gamma(\alpha\_1+\beta\_1)} \sum\_{j=1}^{q} \int\_0^1 \left( \int\_0^{\tau} (\tau-s)^{\alpha\_1+\beta\_1-1} f(s,x(s),y(s))\,ds \right) d\mathcal{K}\_j(\tau) \right) \right] \\ &\quad - \frac{1}{\Gamma(\alpha\_1+\beta\_1)} \int\_0^t (t-s)^{\alpha\_1+\beta\_1-1} f(s,x(s),y(s))\,ds, \end{split} \tag{8}$$

$$\begin{split} T\_2(x,y)(t) &= \frac{t^{\beta\_2-1}}{\Delta} \left[ \frac{1}{\Gamma(\alpha\_2+\beta\_2)} \int\_0^1 (1-s)^{\alpha\_2+\beta\_2-1} g(s,x(s),y(s))\,ds \right. \\ &\quad - \frac{\gamma\_2}{\Gamma(\overline{\alpha}\_1)} \int\_0^{\eta} (\eta-s)^{\overline{\alpha}\_1-1} f(s,x(s),y(s))\,ds \\ &\quad - \frac{1}{\Gamma(\alpha\_1+\beta\_1)} \sum\_{j=1}^{q} \int\_0^1 \left( \int\_0^{\tau} (\tau-s)^{\alpha\_1+\beta\_1-1} f(s,x(s),y(s))\,ds \right) d\mathcal{K}\_j(\tau) \\ &\quad + \Delta\_2 \left( \frac{1}{\Gamma(\alpha\_1+\beta\_1)} \int\_0^1 (1-s)^{\alpha\_1+\beta\_1-1} f(s,x(s),y(s))\,ds \right. \\ &\quad - \frac{\gamma\_1}{\Gamma(\overline{\alpha}\_2)} \int\_0^{\xi} (\xi-s)^{\overline{\alpha}\_2-1} g(s,x(s),y(s))\,ds \\ &\quad \left. \left. - \frac{1}{\Gamma(\alpha\_2+\beta\_2)} \sum\_{i=1}^{p} \int\_0^1 \left( \int\_0^{\tau} (\tau-s)^{\alpha\_2+\beta\_2-1} g(s,x(s),y(s))\,ds \right) d\mathcal{H}\_i(\tau) \right) \right] \\ &\quad - \frac{1}{\Gamma(\alpha\_2+\beta\_2)} \int\_0^t (t-s)^{\alpha\_2+\beta\_2-1} g(s,x(s),y(s))\,ds. \end{split} \tag{9}$$

Note that a coupled fixed point of the integral operator $T$ is exactly a solution of the system (1) with the boundary condition (2).

#### **3. Main Result**

Now we present the main results for the system (1), (2). The tools we use are the contraction mapping principle in Banach spaces and the alternative theorem of Leray–Schauder type.

We introduce the following notation:

$$\begin{split} M\_1 &= \frac{|\Delta\_1|}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} \sum\_{j=1}^{q} \int\_0^1 \tau^{\alpha\_1+\beta\_1}\,d\mathcal{K}\_j(\tau) + \frac{|\Delta\_1|\gamma\_2\eta^{\overline{\alpha}\_1}}{|\Delta|\Gamma(\overline{\alpha}\_1+1)} + \frac{1}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} + \frac{1}{\Gamma(\alpha\_1+\beta\_1+1)}, \\ M\_2 &= \frac{1}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} \sum\_{i=1}^{p} \int\_0^1 \tau^{\alpha\_2+\beta\_2}\,d\mathcal{H}\_i(\tau) + \frac{\gamma\_1\xi^{\overline{\alpha}\_2}}{|\Delta|\Gamma(\overline{\alpha}\_2+1)} + \frac{|\Delta\_1|}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)}, \\ M\_3 &= \frac{|\Delta\_2|}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} \sum\_{i=1}^{p} \int\_0^1 \tau^{\alpha\_2+\beta\_2}\,d\mathcal{H}\_i(\tau) + \frac{|\Delta\_2|\gamma\_1\xi^{\overline{\alpha}\_2}}{|\Delta|\Gamma(\overline{\alpha}\_2+1)} + \frac{1}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} + \frac{1}{\Gamma(\alpha\_2+\beta\_2+1)}, \\ M\_4 &= \frac{1}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} \sum\_{j=1}^{q} \int\_0^1 \tau^{\alpha\_1+\beta\_1}\,d\mathcal{K}\_j(\tau) + \frac{\gamma\_2\eta^{\overline{\alpha}\_1}}{|\Delta|\Gamma(\overline{\alpha}\_1+1)} + \frac{|\Delta\_2|}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)}, \\ M\_5 &= M\_1 - \frac{1}{\Gamma(\alpha\_1+\beta\_1+1)}, \qquad M\_6 = M\_3 - \frac{1}{\Gamma(\alpha\_2+\beta\_2+1)}. \end{split}$$

Additionally, the following assumptions hold:

**Hypothesis 1** (**H1**)**.** *By the continuity of the function $f$, there exist real constants $a\_i$ ($i = 0, 1, 2$) such that*

$$|f(t, u, v)| \le a\_0 + a\_1|u| + a\_2|v|.$$

*By the continuity of the function $g$, there exist real constants $b\_i$ ($i = 0, 1, 2$) such that*

$$|g(t, u, v)| \le b\_0 + b\_1|u| + b\_2|v|$$

*for all* (*t*, *u*, *v*) ∈ [0, 1] × *R* × *R.*

**Hypothesis 2** (**H2**)**.** *There exists a positive constant $K$ such that*

$$|f(t, u, v) - f(t, \overline{u}, \overline{v})| \le K(|u - \overline{u}| + |v - \overline{v}|),$$

*and there exists a positive constant $L$ such that*

$$|g(t, u, v) - g(t, \overline{u}, \overline{v})| \le L(|u - \overline{u}| + |v - \overline{v}|),$$

*for all* $(t, u, v), (t, \overline{u}, \overline{v}) \in [0,1] \times \mathbb{R} \times \mathbb{R}$.

**Hypothesis 3** (**H3**)**.** *There exist positive constants $F\_0 = \sup\_{t \in J} |f(t, 0, 0)|$ and $G\_0 = \sup\_{t \in J} |g(t, 0, 0)|$.*

**Theorem 1.** *Suppose that conditions* (*H*2) *and* (*H*3) *are satisfied, and that*

$$K(M\_1 + M\_4) + L(M\_2 + M\_3) < 1.$$

*Then the system (1), (2) has a unique solution.*

**Proof.** We consider a real constant *R* > 0 such that

$$\frac{(M\_1 + M\_4)F\_0 + (M\_2 + M\_3)G\_0}{1 - [K(M\_1 + M\_4) + L(M\_2 + M\_3)]} \le R.$$

Let $B\_R = \{(x,y) \in Y : \|(x,y)\|\_Y \le R\}$. We first prove that $T$ maps $B\_R$ into $B\_R$. From (*H*2) and (*H*3), we deduce that the following holds:

$$\begin{aligned} |f(t, x(t), y(t))| &\le |f(t, 0, 0)| + |f(t, x(t), y(t)) - f(t, 0, 0)| \\ &\le F\_0 + K(|x(t)| + |y(t)|) \\ &\le F\_0 + K(\|x\| + \|y\|) \\ &= F\_0 + K\|(x,y)\|\_Y. \end{aligned}$$

Similarly, we have $|g(t, x(t), y(t))| \le G\_0 + L\|(x,y)\|\_Y$. For all $(x,y)$ in $B\_R$, we obtain

$$\begin{split} |T\_1(x,y)(t)| &\le \frac{t^{\beta\_1-1}}{|\Delta|} \left[ \frac{KR+F\_0}{\Gamma(\alpha\_1+\beta\_1)} \int\_0^1 (1-s)^{\alpha\_1+\beta\_1-1}\,ds + \frac{\gamma\_1(LR+G\_0)}{\Gamma(\overline{\alpha}\_2)} \int\_0^{\xi} (\xi-s)^{\overline{\alpha}\_2-1}\,ds \right. \\ &\quad + \frac{LR+G\_0}{\Gamma(\alpha\_2+\beta\_2)} \sum\_{i=1}^{p} \int\_0^1 \left( \int\_0^{\tau} (\tau-s)^{\alpha\_2+\beta\_2-1}\,ds \right) d\mathcal{H}\_i(\tau) \\ &\quad + |\Delta\_1| \left( \frac{LR+G\_0}{\Gamma(\alpha\_2+\beta\_2)} \int\_0^1 (1-s)^{\alpha\_2+\beta\_2-1}\,ds + \frac{\gamma\_2(KR+F\_0)}{\Gamma(\overline{\alpha}\_1)} \int\_0^{\eta} (\eta-s)^{\overline{\alpha}\_1-1}\,ds \right. \\ &\quad \left. \left. + \frac{KR+F\_0}{\Gamma(\alpha\_1+\beta\_1)} \sum\_{j=1}^{q} \int\_0^1 \left( \int\_0^{\tau} (\tau-s)^{\alpha\_1+\beta\_1-1}\,ds \right) d\mathcal{K}\_j(\tau) \right) \right] \\ &\quad + \frac{KR+F\_0}{\Gamma(\alpha\_1+\beta\_1)} \int\_0^t (t-s)^{\alpha\_1+\beta\_1-1}\,ds \\ &\le (KR+F\_0) \left( \frac{|\Delta\_1|}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} \sum\_{j=1}^{q} \int\_0^1 \tau^{\alpha\_1+\beta\_1}\,d\mathcal{K}\_j(\tau) + \frac{|\Delta\_1|\gamma\_2\eta^{\overline{\alpha}\_1}}{|\Delta|\Gamma(\overline{\alpha}\_1+1)} \right. \\ &\quad \left. + \frac{1}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} + \frac{1}{\Gamma(\alpha\_1+\beta\_1+1)} \right) \\ &\quad + (LR+G\_0) \left( \frac{1}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} \sum\_{i=1}^{p} \int\_0^1 \tau^{\alpha\_2+\beta\_2}\,d\mathcal{H}\_i(\tau) + \frac{\gamma\_1\xi^{\overline{\alpha}\_2}}{|\Delta|\Gamma(\overline{\alpha}\_2+1)} + \frac{|\Delta\_1|}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} \right) \\ &= (KR+F\_0)M\_1 + (LR+G\_0)M\_2. \end{split} \tag{10}$$

Let us continue with the calculations:

$$\begin{split} |T\_2(x,y)(t)| &\le \frac{t^{\beta\_2-1}}{|\Delta|} \left[ \frac{LR+G\_0}{\Gamma(\alpha\_2+\beta\_2)} \int\_0^1 (1-s)^{\alpha\_2+\beta\_2-1}\,ds + \frac{\gamma\_2(KR+F\_0)}{\Gamma(\overline{\alpha}\_1)} \int\_0^{\eta} (\eta-s)^{\overline{\alpha}\_1-1}\,ds \right. \\ &\quad + \frac{KR+F\_0}{\Gamma(\alpha\_1+\beta\_1)} \sum\_{j=1}^{q} \int\_0^1 \left( \int\_0^{\tau} (\tau-s)^{\alpha\_1+\beta\_1-1}\,ds \right) d\mathcal{K}\_j(\tau) \\ &\quad + |\Delta\_2| \left( \frac{KR+F\_0}{\Gamma(\alpha\_1+\beta\_1)} \int\_0^1 (1-s)^{\alpha\_1+\beta\_1-1}\,ds + \frac{\gamma\_1(LR+G\_0)}{\Gamma(\overline{\alpha}\_2)} \int\_0^{\xi} (\xi-s)^{\overline{\alpha}\_2-1}\,ds \right. \\ &\quad \left. \left. + \frac{LR+G\_0}{\Gamma(\alpha\_2+\beta\_2)} \sum\_{i=1}^{p} \int\_0^1 \left( \int\_0^{\tau} (\tau-s)^{\alpha\_2+\beta\_2-1}\,ds \right) d\mathcal{H}\_i(\tau) \right) \right] \\ &\quad + \frac{LR+G\_0}{\Gamma(\alpha\_2+\beta\_2)} \int\_0^t (t-s)^{\alpha\_2+\beta\_2-1}\,ds \\ &\le (LR+G\_0) \left( \frac{|\Delta\_2|}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} \sum\_{i=1}^{p} \int\_0^1 \tau^{\alpha\_2+\beta\_2}\,d\mathcal{H}\_i(\tau) + \frac{|\Delta\_2|\gamma\_1\xi^{\overline{\alpha}\_2}}{|\Delta|\Gamma(\overline{\alpha}\_2+1)} \right. \\ &\quad \left. + \frac{1}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} + \frac{1}{\Gamma(\alpha\_2+\beta\_2+1)} \right) \\ &\quad + (KR+F\_0) \left( \frac{1}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} \sum\_{j=1}^{q} \int\_0^1 \tau^{\alpha\_1+\beta\_1}\,d\mathcal{K}\_j(\tau) + \frac{\gamma\_2\eta^{\overline{\alpha}\_1}}{|\Delta|\Gamma(\overline{\alpha}\_1+1)} + \frac{|\Delta\_2|}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} \right) \\ &= (LR+G\_0)M\_3 + (KR+F\_0)M\_4. \end{split} \tag{11}$$

Consequently,

$$\|T(x,y)\|\_Y \le (KR + F\_0)M\_1 + (LR + G\_0)M\_2 + (LR + G\_0)M\_3 + (KR + F\_0)M\_4 \le R.$$

Hence, $T(B\_R) \subseteq B\_R$.

Now we prove that $T$ is a contraction operator. Choose $(x,y), (\overline{x},\overline{y})$ in $Y$. For all $t \in [0,1]$, we find

$$\begin{split} |T\_1(x,y)(t) - T\_1(\overline{x},\overline{y})(t)| &\le \frac{t^{\beta\_1-1}}{|\Delta|} \left[ \frac{K(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\alpha\_1+\beta\_1+1)} + \frac{\gamma\_1 L(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\overline{\alpha}\_2+1)} \xi^{\overline{\alpha}\_2} \right. \\ &\quad + \frac{L(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\alpha\_2+\beta\_2+1)} \sum\_{i=1}^{p} \int\_0^1 \tau^{\alpha\_2+\beta\_2}\,d\mathcal{H}\_i(\tau) \\ &\quad + |\Delta\_1| \left( \frac{L(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\alpha\_2+\beta\_2+1)} + \frac{\gamma\_2 K(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\overline{\alpha}\_1+1)} \eta^{\overline{\alpha}\_1} \right. \\ &\quad \left. \left. + \frac{K(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\alpha\_1+\beta\_1+1)} \sum\_{j=1}^{q} \int\_0^1 \tau^{\alpha\_1+\beta\_1}\,d\mathcal{K}\_j(\tau) \right) \right] + \frac{K(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\alpha\_1+\beta\_1+1)} t^{\alpha\_1+\beta\_1} \\ &\le K \left( \frac{1}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} + \frac{|\Delta\_1|}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} \sum\_{j=1}^{q} \int\_0^1 \tau^{\alpha\_1+\beta\_1}\,d\mathcal{K}\_j(\tau) \right. \\ &\quad \left. + \frac{|\Delta\_1|\gamma\_2\eta^{\overline{\alpha}\_1}}{|\Delta|\Gamma(\overline{\alpha}\_1+1)} + \frac{1}{\Gamma(\alpha\_1+\beta\_1+1)} \right) \|(x,y) - (\overline{x},\overline{y})\| \\ &\quad + L \left( \frac{1}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} \sum\_{i=1}^{p} \int\_0^1 \tau^{\alpha\_2+\beta\_2}\,d\mathcal{H}\_i(\tau) + \frac{\gamma\_1\xi^{\overline{\alpha}\_2}}{|\Delta|\Gamma(\overline{\alpha}\_2+1)} + \frac{|\Delta\_1|}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} \right) \|(x,y) - (\overline{x},\overline{y})\| \\ &= (M\_1 K + M\_2 L)\|(x,y) - (\overline{x},\overline{y})\|. \end{split} \tag{12}$$

Let us continue with the calculations:

$$\begin{split} |T\_2(x,y)(t) - T\_2(\overline{x},\overline{y})(t)| &\le \frac{t^{\beta\_2-1}}{|\Delta|} \left[ \frac{L(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\alpha\_2+\beta\_2+1)} + \frac{\gamma\_2 K(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\overline{\alpha}\_1+1)} \eta^{\overline{\alpha}\_1} \right. \\ &\quad + \frac{K(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\alpha\_1+\beta\_1+1)} \sum\_{j=1}^{q} \int\_0^1 \tau^{\alpha\_1+\beta\_1}\,d\mathcal{K}\_j(\tau) \\ &\quad + |\Delta\_2| \left( \frac{K(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\alpha\_1+\beta\_1+1)} + \frac{\gamma\_1 L(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\overline{\alpha}\_2+1)} \xi^{\overline{\alpha}\_2} \right. \\ &\quad \left. \left. + \frac{L(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\alpha\_2+\beta\_2+1)} \sum\_{i=1}^{p} \int\_0^1 \tau^{\alpha\_2+\beta\_2}\,d\mathcal{H}\_i(\tau) \right) \right] + \frac{L(\|x-\overline{x}\|+\|y-\overline{y}\|)}{\Gamma(\alpha\_2+\beta\_2+1)} t^{\alpha\_2+\beta\_2} \\ &\le L \left( \frac{1}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} + \frac{|\Delta\_2|}{|\Delta|\Gamma(\alpha\_2+\beta\_2+1)} \sum\_{i=1}^{p} \int\_0^1 \tau^{\alpha\_2+\beta\_2}\,d\mathcal{H}\_i(\tau) \right. \\ &\quad \left. + \frac{|\Delta\_2|\gamma\_1\xi^{\overline{\alpha}\_2}}{|\Delta|\Gamma(\overline{\alpha}\_2+1)} + \frac{1}{\Gamma(\alpha\_2+\beta\_2+1)} \right) \|(x,y) - (\overline{x},\overline{y})\| \\ &\quad + K \left( \frac{1}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} \sum\_{j=1}^{q} \int\_0^1 \tau^{\alpha\_1+\beta\_1}\,d\mathcal{K}\_j(\tau) + \frac{\gamma\_2\eta^{\overline{\alpha}\_1}}{|\Delta|\Gamma(\overline{\alpha}\_1+1)} + \frac{|\Delta\_2|}{|\Delta|\Gamma(\alpha\_1+\beta\_1+1)} \right) \|(x,y) - (\overline{x},\overline{y})\| \\ &= (M\_3 L + M\_4 K)\|(x,y) - (\overline{x},\overline{y})\|. \end{split} \tag{13}$$

Consequently,

$$\|T(x,y) - T(\overline{x},\overline{y})\|\_Y \le [(M\_1 + M\_4)K + (M\_2 + M\_3)L]\,\|(x,y) - (\overline{x},\overline{y})\|.$$

By the contraction mapping principle in Banach spaces, there is a unique fixed point $u$ with $Tu = u$, which is exactly the solution of the system (1), (2).
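The contraction mapping principle invoked here also underlies the simplest numerical scheme for such problems: iterating any contraction from an arbitrary starting point converges geometrically to its unique fixed point. A minimal one-dimensional sketch (the map $T(u) = \cos u$ and the tolerance are illustrative assumptions, not taken from this paper):

```python
import math

def fixed_point_iterate(T, u0, tol=1e-12, max_iter=1000):
    """Picard iteration u_{n+1} = T(u_n); converges when T is a contraction."""
    u = u0
    for _ in range(max_iter):
        u_next = T(u)
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    raise RuntimeError("no convergence within max_iter steps")

# cos is a contraction on [0, 1] since |cos'(u)| = |sin(u)| <= sin(1) < 1 there;
# the iteration converges to the unique fixed point u* of u = cos(u).
u_star = fixed_point_iterate(math.cos, 0.0)
print(u_star)
```

The same geometric convergence rate, here governed by $\sin(1) < 1$, is what the condition $K(M\_1+M\_4)+L(M\_2+M\_3) < 1$ guarantees for the operator $T$ of Theorem 1.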

**Theorem 2.** *Suppose that condition* (*H*1) *is satisfied. If $\rho := \max\{M\_7, M\_8\} < 1$, where $M\_7 = a\_1(M\_1 + M\_4) + b\_1(M\_2 + M\_3)$ and $M\_8 = a\_2(M\_1 + M\_4) + b\_2(M\_2 + M\_3)$, then the system (1), (2) has at least one solution $(x(t), y(t))$.*

**Proof.** By the continuity of the functions $f$ and $g$, the operators $T\_1$ and $T\_2$ are continuous, and hence so is the operator $T$. Choose an arbitrary bounded open subset $\Omega$ of $Y$. There exist $\overline{K} > 0$ and $\overline{L} > 0$ such that $|f(t, x(t), y(t))| \le \overline{K}$ and $|g(t, x(t), y(t))| \le \overline{L}$ for all $t$ in $[0,1]$ and $(x,y)$ in $\Omega$. Thus, by the proof of Theorem 1, we have

$$|T\_1(x,y)(t)| \le \overline{K}M\_1 + \overline{L}M\_2, \quad |T\_2(x,y)(t)| \le \overline{L}M\_3 + \overline{K}M\_4$$

for all $t$ in $[0,1]$ and $(x,y)$ in $\Omega$. Then, we obtain

$$\|T(x,y)\|\_Y \le \overline{K}(M\_1 + M\_4) + \overline{L}(M\_2 + M\_3), \quad \forall (x,y) \in \Omega.$$

So, we get the boundedness of $T(\Omega)$. Take $(x,y) \in \Omega$ and $0 \le t\_1 < t\_2 \le 1$; then one has


Similarly, for (*x*, *y*) ∈ Ω, 0 ≤ *t*<sup>1</sup> < *t*<sup>2</sup> ≤ 1,


So we obtain

*T*2(*x*, *y*)(*t*2) → *T*2(*x*, *y*)(*t*1) when *t*<sup>2</sup> → *t*1, for arbitrary (*x*, *y*) ∈ Ω.

The conclusion that $T$ is continuous and compact can then be deduced from the Arzelà–Ascoli theorem.

Finally, we show that the set $M(T) = \{(x,y) \in E \times E : (x,y) = mT(x,y) \text{ for some } 0 < m < 1\}$ is bounded. Let $(x,y)$ be in $M(T)$; then for any $t$ in $[0,1]$, we have $(x,y) = mT(x,y) = (mT\_1(x,y), mT\_2(x,y))$.

By (*H*1), we have


So we deduce

$$\|\mathbf{x}\| \le (a\_0 + a\_1 \|\mathbf{x}\| + a\_2 \|\mathbf{y}\|)M\_1 + (b\_0 + b\_1 \|\mathbf{x}\| + b\_2 \|\mathbf{y}\|)M\_2. \tag{17}$$

Using the same proof process, we get

$$||y|| \le (a\_0 + a\_1 ||\mathbf{x}|| + a\_2 ||y||) M\_4 + (b\_0 + b\_1 ||\mathbf{x}|| + b\_2 ||y||) M\_3. \tag{18}$$

By (17) and (18), we have

$$\begin{aligned} \| (\mathbf{x}, y) \| &= \| \mathbf{x} \| + \| y \| \le a\_0 (M\_1 + M\_4) + b\_0 (M\_2 + M\_3) \\ &+ \left[ a\_1 (M\_1 + M\_4) + b\_1 (M\_2 + M\_3) \right] \| \mathbf{x} \| + \left[ a\_2 (M\_1 + M\_4) + b\_2 (M\_2 + M\_3) \right] \| \mathbf{y} \| \\ &= a\_0 (M\_1 + M\_4) + b\_0 (M\_2 + M\_3) + M\_7 \| \mathbf{x} \| + M\_8 \| \mathbf{y} \| \\ &\le a\_0 (M\_1 + M\_4) + b\_0 (M\_2 + M\_3) + \rho \| (\mathbf{x}, \mathbf{y}) \|. \end{aligned}$$

For *ρ* < 1, we obtain

$$\|(x,y)\| \le \frac{a\_0(M\_1 + M\_4) + b\_0(M\_2 + M\_3)}{1 - \rho}, \quad \forall (x,y) \in M(T).$$

Hence, $M(T)$ is a bounded set.

By the alternative theorem of Leray–Schauder, there exists $(x,y) \in Y$ satisfying $T(x,y) = (x,y)$; therefore, the couple $(x(t), y(t))$ satisfies the system (1) and the integral boundary condition (2).

#### **4. Example**

Let $\alpha\_1 = \frac{1}{3}$, $\alpha\_2 = \frac{5}{6}$, $\beta\_1 = \frac{5}{4}$, $\beta\_2 = \frac{7}{5}$, $p = 2$, $q = 1$, $\gamma\_1 = 2$, $\gamma\_2 = 3$, $\delta\_1 = \frac{3}{7}$, $\delta\_2 = \frac{8}{5}$, $\xi = \frac{1}{5}$, $\eta = \frac{1}{3}$, $\mathcal{H}\_1(t) = 2t$ for $t \in [0,1]$, $\mathcal{K}\_1(t) = t$ for $t \in [0,1]$, and $\mathcal{H}\_2(t) = 0$ for $t \in [0, \frac{1}{4})$, $\mathcal{H}\_2(t) = 3$ for $t \in [\frac{1}{4}, 1]$. We consider the following specific fractional-order system

$$\begin{cases} D\_{0^+}^{\frac{1}{3}}(D\_{0^+}^{\frac{5}{4}}x(t)) + f(t, x(t), y(t)) = 0, & t \in [0,1], \\ D\_{0^+}^{\frac{5}{6}}(D\_{0^+}^{\frac{7}{5}}y(t)) + g(t, x(t), y(t)) = 0, & t \in [0,1], \end{cases} \tag{19}$$

supplemented with the condition

$$\begin{cases} D\_{0^+}^{\frac{5}{4}}x(0) = 0, \; x(0) = 0, \; x(1) = 2I\_{0^+}^{\frac{3}{7}}y(\tfrac{1}{5}) + 2\int\_0^1 y(t)\,dt + 3y(\tfrac{1}{4}), \\ D\_{0^+}^{\frac{7}{5}}y(0) = 0, \; y(0) = 0, \; y(1) = 3I\_{0^+}^{\frac{8}{5}}x(\tfrac{1}{3}) + \int\_0^1 x(t)\,dt. \end{cases} \tag{20}$$

We obtain $\Delta \approx -3.2945773664941695 \ne 0$. By calculation, we have $M\_1 \approx 1.2401800473948743$, $M\_2 \approx 0.5353700439729107$, $M\_3 \approx 0.6299976999210883$, and $M\_4 \approx 0.33982021146365704$.

**Example 1.** *We choose*

$$\begin{aligned} f(t, u\_1, v\_1) &= \frac{1}{\sqrt{t^3 + 3}} + \frac{t}{8}u\_1 - \frac{1}{5}\sin v\_1, \\ g(t, u\_1, v\_1) &= \frac{t}{t^2 + 12} - \frac{t}{4}\arctan u\_1 + \frac{|v\_1|}{15 + |v\_1|}, \end{aligned}$$

*for all $t$ in $[0,1]$ and $u\_1, v\_1$ in $\mathbb{R}$. Then, we get the following estimate:*

$$|f(t, u\_1, v\_1) - f(t, u\_2, v\_2)| \le \frac{1}{5}(|u\_1 - u\_2| + |v\_1 - v\_2|).$$

*Thus, $K = \frac{1}{5}$; moreover,*

$$|g(t, u\_1, v\_1) - g(t, u\_2, v\_2)| \le \frac{1}{4}(|u\_1 - u\_2| + |v\_1 - v\_2|).$$

*Thus, $L = \frac{1}{4}$. Hence, $K(M\_1 + M\_4) + L(M\_2 + M\_3) \approx 0.6073419877452061 < 1$. So the condition* (*H*2) *holds, and by Theorem 1, there is a unique couple* $(x(t), y(t))$ *that satisfies the systems (19) and (20).*
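The contraction condition of Theorem 1 can be checked directly from the constants $M\_1, \dots, M\_4$ computed above. A quick numerical sanity check (the $M\_i$ values are copied from the text; this is an arithmetic verification, not part of the proof):

```python
# Constants M1..M4 as computed in the text for system (19), (20)
M1 = 1.2401800473948743
M2 = 0.5353700439729107
M3 = 0.6299976999210883
M4 = 0.33982021146365704

K, L = 1 / 5, 1 / 4  # Lipschitz constants of f and g in Example 1

# Contraction constant of Theorem 1: must be strictly less than 1
c = K * (M1 + M4) + L * (M2 + M3)
print(c)  # approximately 0.6073419877452061
assert c < 1
```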

**Example 2.** *We choose*

$$\begin{aligned} f(t, u, v) &= \frac{t+1}{5} - \frac{1}{t+8} \sin u + \frac{1}{12} v, \\ g(t, u, v) &= \frac{e^{-t}}{t^2 + 3} + \frac{5}{8} \arctan u + \frac{1}{6} v, \end{aligned}$$

*for all $t$ in $[0,1]$ and $u, v$ in $\mathbb{R}$. Then, we get the following estimates:*

$$\begin{aligned} |f(t, u, v)| &\le \frac{2}{5} + \frac{1}{8}|u| + \frac{1}{12}|v|, \\ |g(t, u, v)| &\le \frac{1}{3} + \frac{5}{8}|u| + \frac{1}{6}|v|, \end{aligned}$$

*for all $t$ in $[0,1]$ and $u, v$ in $\mathbb{R}$. By assumption* (*H*1)*, we get $a\_0 = \frac{2}{5}$, $a\_1 = \frac{1}{8}$, $a\_2 = \frac{1}{12}$, $b\_0 = \frac{1}{3}$, $b\_1 = \frac{5}{8}$, and $b\_2 = \frac{1}{6}$. Thus, we obtain $M\_7 \approx 0.9258548722910658$, $M\_8 \approx 0.3258946455538774$, and $\rho = \max\{M\_7, M\_8\} = M\_7 < 1$. Hence, by Theorem 2, we conclude that the problem (19), (20) has at least one solution $(x(t), y(t))$, $t \in [0,1]$.*
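As in Example 1, the values of $M\_7$, $M\_8$, and $\rho$ follow by plain arithmetic from the constants reported above. A short verification (the $M\_i$ values are copied from the text):

```python
# Constants M1..M4 as computed in the text for system (19), (20)
M1 = 1.2401800473948743
M2 = 0.5353700439729107
M3 = 0.6299976999210883
M4 = 0.33982021146365704

a1, a2 = 1 / 8, 1 / 12  # growth coefficients of f in Example 2
b1, b2 = 5 / 8, 1 / 6   # growth coefficients of g in Example 2

M7 = a1 * (M1 + M4) + b1 * (M2 + M3)
M8 = a2 * (M1 + M4) + b2 * (M2 + M3)
rho = max(M7, M8)
print(M7, M8, rho)  # M7 is the larger value; rho < 1, so Theorem 2 applies
assert rho < 1
```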

**Author Contributions:** Conceptualization, Y.M. and D.J.; methodology, Y.M.; software, Y.M.; validation, Y.M. and D.J.; formal analysis, Y.M.; investigation, Y.M.; resources, Y.M.; data curation, Y.M.; writing—original draft preparation, Y.M.; writing—review and editing, D.J; visualization, Y.M.; supervision, D.J; project administration, D.J.; funding acquisition, D.J. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Natural Science Foundation of Tianjin (No. 19JCYBJC30700).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors thank the anonymous reviewers for their help with this work.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Sequential Caputo–Hadamard Fractional Differential Equations with Boundary Conditions in Banach Spaces**

**Ramasamy Arul 1, Panjayan Karthikeyan 2, Kulandhaivel Karthikeyan 3,\*, Ymnah Alruwaily 4, Lamya Almaghamsi <sup>5</sup> and El-sayed El-hady 4,6,\***


**Abstract:** We present the existence of solutions for sequential Caputo–Hadamard fractional differential equations (SC-HFDE) with fractional boundary conditions (FBCs). Known fixed-point techniques are used to analyze the existence of the problem. In particular, the contraction mapping principle is used to investigate the uniqueness results. Existence results are obtained via Krasnoselkii's theorem. An example is used to illustrate the results. In this way, our work generalizes several recent interesting results.

**Keywords:** fractional differential equations; Caputo–Hadamard fractional derivative; fractional boundary conditions; existence and uniqueness

**MSC:** primary 34A08; secondary 26A33, 34G20

#### **1. Introduction**

The study of fractional-order calculus has been a subject of research for many years. It began as a result of Leibniz and L'Hospital's illustrious discourse, in which the issue of a half derivative was first raised (see, e.g., [1–3]). Nowadays, fractional differential equations (FDEs) have gained more popularity due to the impact of deep applications. Some applications of FDEs are in polymer materials, fractional physics, automatic control theory, abnormal diffusion, and in random processes (see, e.g., [1–4]).

Fractional-order models are quite useful in epidemic models to predict the spread of diseases. In 2017, a fractional-order Middle East Respiratory Syndrome coronavirus (MERS-CoV) model [5] used an Adams-type predictor–corrector method for the numerical solution of fractional integral equations.

Over the past 150 years, fixed-point theory (FPT) has made significant progress in mathematical analysis. It has applications in a variety of domains, including optimization theory, mathematical physics, topology, and approximation theory. Poincaré launched the investigation of FPT in the nineteenth century. Banach's 1922 proof of a classical FPT established the existence and uniqueness of solutions of differential and integral equations.

In 1930, Schauder stated the first FPT for Banach spaces of infinite dimension, now called the Schauder FPT; it has several applications in game theory, economics, and engineering (see, e.g., [6,7]).

**Citation:** Arul, R.; Karthikeyan, P.; Karthikeyan, K.; Alruwaily, Y.; Almaghamsi, L.; El-hady, E.-s. Sequential Caputo–Hadamard Fractional Differential Equations with Boundary Conditions in Banach Spaces. *Fractal Fract.* **2022**, *6*, 730. https://doi.org/10.3390/ fractalfract6120730

Academic Editors: Ravi P. Agarwal, Libo Feng, Lin Liu and Yang Liu

Received: 8 October 2022 Accepted: 5 December 2022 Published: 10 December 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The field of FDEs is a new branch of mathematics that is a valuable tool in modeling many phenomena in various fields such as cancer treatment, medicine, and signal processing, etc.; we refer to [2,3,8–15]. The most important definitions of fractional derivatives (FD) and fractional integral derivatives are stated as follows:

(i) The derivative of the fractional order *ν* > 0 of a function *g* : (0, ∞) → *R* is given by

$$\mathfrak{D}\_{0+}^{\nu} g(t) = \frac{1}{\Gamma(n-\nu)} \frac{d^n}{dt^n} \int\_0^t \frac{g(s)}{(t-s)^{\nu-n+1}}\,ds,$$

where *n* = [*ν*] + 1, provided the right-hand side is pointwise defined on (0, ∞).

(ii) The fractional order integral of the function *<sup>g</sup>* <sup>∈</sup> *<sup>L</sup>*1([0, *<sup>T</sup>*], <sup>R</sup>+) of order *<sup>ν</sup>* <sup>∈</sup> <sup>R</sup><sup>+</sup> is defined by

$$I^\nu g(t) = \frac{1}{\Gamma(\nu)} \int\_0^t (t-s)^{\nu-1} g(s) ds,$$

where Γ is Euler's gamma function.
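Definition (ii) can be evaluated directly by quadrature and checked against a known closed form: for $g(t) = t$ one has $I^{\nu} g(t) = t^{\nu+1}/\Gamma(\nu+2)$. A minimal sketch (the midpoint rule and node count are illustrative choices; accuracy is modest because the kernel is weakly singular at $s = t$):

```python
import math

def frac_integral(g, t, nu, n=200_000):
    """Riemann-Liouville fractional integral I^nu g(t) via midpoint quadrature.
    The kernel (t-s)^(nu-1) is weakly singular at s = t for nu < 1, so the
    midpoint rule converges slowly there; n is chosen generously."""
    h = t / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h  # midpoint of the i-th subinterval
        total += (t - s) ** (nu - 1) * g(s)
    return total * h / math.gamma(nu)

# Check against the closed form I^nu t = t^(nu+1) / Gamma(nu + 2)
nu, t = 0.5, 1.0
approx = frac_integral(lambda s: s, t, nu)
exact = t ** (nu + 1) / math.gamma(nu + 2)
print(abs(approx - exact))  # small quadrature error
```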

Recent research on the Hadamard equations has focused primarily on the core theoretical areas. In particular, the existence results of the solutions are investigated in [16–18], where strip conditions and FPT are employed. In [19], the authors investigated the stability of Hadamard fractional systems and provided a new fractional comparison principle. In [20], the asymptotics of higher-order Caputo–Hadamard fractional equations are studied.

A few years ago, many authors studied Caputo and Riemann–Liouville FDs. Moreover, Caputo–Hadamard and Hadamard–Caputo FDs are used to prove the existence and uniqueness results. Recently, Hadamard, Caputo–Fabrizio, Atangana–Baleanu FDs are applied in cancer-treatment models, see [21,22].

Jessada Tariboon et al. [23] investigated the existence and uniqueness of solutions for two sequential Caputo–Hadamard and Hadamard–Caputo FDEs with separated BCs (with $\delta\_i, \kappa\_i \in \mathbb{R}$, $i = 1, 2$):

$$\begin{aligned} {}^C\mathfrak{D}^{\mathrm{p}}({}^H\mathfrak{D}^{\nu}x)(\eta) &= f(\eta, x(\eta)), \quad \eta \in (a,b), & {}^H\mathfrak{D}^{\nu}({}^C\mathfrak{D}^{\mathrm{p}}x)(\eta) &= f(\eta, x(\eta)), \\ \delta\_1 x(a) + \delta\_2({}^H\mathfrak{D}^{\nu}x)(a) &= 0, & \delta\_1 x(a) + \delta\_2({}^C\mathfrak{D}^{\mathrm{p}}x)(a) &= 0, \\ \kappa\_1 x(b) + \kappa\_2({}^H\mathfrak{D}^{\nu}x)(b) &= 0, & \kappa\_1 x(b) + \kappa\_2({}^C\mathfrak{D}^{\mathrm{p}}x)(b) &= 0, \end{aligned}$$

where *<sup>C</sup>*D<sup>p</sup> and *<sup>H</sup>*D*<sup>ν</sup>* are the Caputo and Hadamard FDs of orders p and *ν*, respectively. In [24], the authors took into account the second-order infinite system of DEs

$$\begin{cases} t\frac{d^2u\_j}{dt^2} + \frac{du\_j}{dt} = f\_j(t, u(t)), \quad t \in J := [1, q],\\ u\_j(1) = u\_j(q) = 0, \end{cases}$$

where *<sup>u</sup>*(*t*) = {*uj*(*t*)}<sup>∞</sup> *<sup>j</sup>*=1, in Banach sequence space *<sup>l</sup>p*, *<sup>p</sup>* <sup>≥</sup> 1. They used the Darbo-type FPT and the Hausdorff measure of noncompactness to prove the existence of solutions.

It should be remarked that a great amount of research on sequential fractional differential equations has been carried out by Bashir Ahmad and his team, as follows. In [25], the existence of solutions for a fully coupled Riemann–Stieltjes, integro-multipoint, boundary value problem of Caputo-type sequential FDEs was studied using a known FPT. In [26], some theoretical existence results were established on novel combined configurations of a Caputo sequential inclusion problem and the hybrid integro-differential problem in which the BCs appear. In [27], existence and uniqueness results were established for a nonlinear sequential Hadamard FDE with multi-point BCs using known FPT. In [28], the existence and uniqueness of solutions for sequential Caputo FDEs equipped with integro-multipoint BCs were obtained; in that study, the nonlinearity depends on the unknown function as well as its lower-order FDs. In [29], the existence of solutions for sequential FD inclusions containing Riemann–Liouville and Caputo-type derivatives and supplemented with generalized fractional integral BCs was studied using a combination of different tools. The authors in [30] investigated the existence of solutions for boundary value problems of Caputo-type sequential FDEs and inclusions supplemented with nonlocal integro-multipoint BCs using tools from functional analysis. One can see [31] for some nice results on a coupled two-parameter system of sequential fractional integro-differential equations supplemented with nonlocal integro-multipoint BCs; see also [32].

Inspired by the above FPT and cited works, we consider the FBCs for SC-HFDE of the form

$$^C\mathfrak{D}^{\nu}({}^H\mathfrak{D}^{\nu_1}\mathbf{x})(\eta) = g(\eta, \mathbf{x}(\eta)), \quad \eta \in J := [a, b], \quad 1 < \nu, \nu_1 < 2, \tag{1}$$

$$\mathbf{x}(a) = 0, \quad \kappa\, {}^H\mathfrak{D}^{\delta_1}\mathbf{x}(b) + (1 - \kappa)\, {}^H\mathfrak{D}^{\delta_2}\mathbf{x}(b) = \delta_3, \quad \delta_3 \in \mathbb{R}, \tag{2}$$

where $^C\mathfrak{D}^{\nu}$ is the Caputo FD of order $\nu$, and $^H\mathfrak{D}^{\nu_1}$, $^H\mathfrak{D}^{\delta_1}$, and $^H\mathfrak{D}^{\delta_2}$ are the Hadamard FDs of orders $\nu_1$, $\delta_1$, and $\delta_2$, respectively. Here $0 < \delta_1, \delta_2 < \nu - \nu_1$, $0 \le \kappa \le 1$ is a constant, and $g : J \times \mathbb{R} \to \mathbb{R}$ is a continuous function.

We use the following assumptions to prove the results of SC-HFDE involving FDs.



$(A_3)$ There exists a function $\psi_g(t) \in C([a, b], \mathbb{R}^+)$ such that

$$|g(t, \mathbf{x}) - g(t, \mathbf{x}_1)| \le \psi_g(t)|\mathbf{x} - \mathbf{x}_1|, \quad \text{for any} \quad \mathbf{x}, \mathbf{x}_1 \in \mathbb{R}.$$

The key definitions and lemmas for problem (1)–(2) are stated in [2,3,10,23].

Our contributions are as follows:


The rest of the article is organized as follows. The next section contains some auxiliary results, Section 3 is devoted to the main contribution, Section 4 is for various applications, and in Section 5 we conclude the work.

#### **2. Auxiliary Results**

**Definition 1** ([1–3])**.** *For an at least $n$-times differentiable function $g : [a, \infty) \to \mathbb{R}$, the Caputo FD of order $\nu$ is defined by*

$$({}^C\mathfrak{D}^{\nu}g)(t) = \frac{1}{\Gamma(n-\nu)} \int_a^t (t-s)^{n-\nu-1} g^{(n)}(s)\, ds, \quad n-1 < \nu < n,$$

*where $n = [\nu] + 1$ and $[\nu]$ denotes the integer part of the real number $\nu$.*
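As a numerical sanity check of Definition 1 (a sketch added for illustration, not part of the original derivation), one can verify the known closed form $^C\mathfrak{D}^{1/2}(t-a)^2 = \frac{\Gamma(3)}{\Gamma(5/2)}(t-a)^{3/2}$ by quadrature; the substitution $u = \sqrt{t-s}$ removes the integrable endpoint singularity of the kernel:

```python
import math

def caputo_half(dg, a, t, n=20000):
    # Caputo FD of order 1/2 (so n = 1 in Definition 1):
    #   (1/Γ(1/2)) ∫_a^t (t-s)^{-1/2} g'(s) ds
    # The substitution u = sqrt(t-s) turns this into the regular integral
    #   (2/Γ(1/2)) ∫_0^{sqrt(t-a)} g'(t-u^2) du, handled by the midpoint rule.
    U = math.sqrt(t - a)
    h = U / n
    acc = sum(dg(t - ((i + 0.5) * h) ** 2) for i in range(n))
    return 2.0 * acc * h / math.gamma(0.5)

a, t = 0.0, 1.0
dg = lambda s: 2.0 * (s - a)          # g(s) = (s-a)^2, so g'(s) = 2(s-a)
numeric = caputo_half(dg, a, t)
exact = math.gamma(3) / math.gamma(2.5) * (t - a) ** 1.5
print(abs(numeric - exact) < 1e-6)    # True
```

The endpoint values $a$, $t$ and the test function are arbitrary choices for the check.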

**Definition 2** ([1–3])**.** *The Riemann–Liouville fractional integral of order $\nu$ for a function $g : [a, \infty) \to \mathbb{R}$ is defined as*

$$({}^{RL}I^{\nu}g)(t) = \frac{1}{\Gamma(\nu)} \int_a^t \frac{g(s)}{(t-s)^{1-\nu}}\, ds, \quad \nu > 0,$$

*provided the integral exists.*

**Definition 3** ([1–3])**.** *The Hadamard fractional integral of order ν is defined by*

$$({}^{H}I^{\nu}g)(t) = \frac{1}{\Gamma(\nu)} \int_a^t \left(\log \frac{t}{s}\right)^{\nu - 1} \frac{g(s)}{s}\, ds, \quad \nu > 0,$$

*provided the integral exists.*
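Definitions 2 and 3 can be checked numerically: for $g \equiv 1$, the Hadamard integral has the closed form $({}^H I^{\nu}1)(t) = (\log\frac{t}{a})^{\nu}/\Gamma(\nu+1)$. A small sketch (the parameter values are arbitrary choices for the illustration):

```python
import math

def hadamard_int_of_one(nu, a, t, n=100000):
    # (^H I^nu g)(t) for g ≡ 1, by the midpoint rule; nu > 1 keeps the
    # kernel (log(t/s))^{nu-1} bounded near s = t.
    h = (t - a) / n
    acc = 0.0
    for i in range(n):
        s = a + (i + 0.5) * h
        acc += math.log(t / s) ** (nu - 1) / s
    return acc * h / math.gamma(nu)

nu, a, t = 1.5, 0.5, 2.5
numeric = hadamard_int_of_one(nu, a, t)
exact = math.log(t / a) ** nu / math.gamma(nu + 1)
print(abs(numeric - exact) < 1e-5)    # True
```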

**Definition 4** ([1–3])**.** *The Caputo-type Hadamard FD is defined as*

$$({}^{H}\mathfrak{D}^{\nu}g)(t) = \frac{1}{\Gamma(n-\nu)} \int_a^t \left( \log \frac{t}{s} \right)^{n-\nu-1} \delta^n g(s)\, \frac{ds}{s}, \quad n-1 < \nu < n,\ n = [\nu] + 1,$$

*where $g : [a, \infty) \to \mathbb{R}$ is an $n$-times differentiable function and $\delta^n = \left(t\frac{d}{dt}\right)^n$.*

**Lemma 1.** *The general solution of $^C\mathfrak{D}^{\nu}x(\rho) = 0$ (with $\nu > 0$) is*

$$\mathbf{x}(\rho) = c_0 + c_1(\rho - a) + \dots + c_{n-1}(\rho - a)^{n-1},$$

*where $c_i \in \mathbb{R}$, $i = 0, 1, 2, \dots, n-1$ ($n = [\nu] + 1$).*

In view of Lemma 1, it follows that

$$I^{\nu}\, {}^{C}\mathfrak{D}^{\nu}\mathbf{x}(\rho) = \mathbf{x}(\rho) + c_0 + c_1(\rho - a) + \dots + c_{n-1}(\rho - a)^{n-1},\tag{3}$$

for $i = 0, 1, 2, \dots, n-1$ ($n = [\nu] + 1$) and some $c_i \in \mathbb{R}$.

**Lemma 2.** *The FBC problem*

$$^C\mathfrak{D}^{\nu}({}^H\mathfrak{D}^{\nu_1}x)(\eta) = w(\eta), \quad \eta \in J := [a, b], \quad 1 < \nu, \nu_1 \le 2, \tag{4}$$

$$\kappa\, {}^H\mathfrak{D}^{\delta_1}\mathbf{x}(b) + (1 - \kappa)\, {}^H\mathfrak{D}^{\delta_2}\mathbf{x}(b) = \delta_3, \quad \mathbf{x}(a) = 0, \tag{5}$$

*is equivalent to*

$$\begin{split} \mathbf{x}(\eta) &= {}^{H}I^{\nu_1}({}^{RL}I^{\nu}w)(\eta) + \frac{\log(\frac{\eta}{a})^{\nu_1}}{\lambda_1\Gamma(\nu_1+1)} \Big(\delta_3 - \kappa\, {}^{H}I^{\nu_1}({}^{RL}I^{\nu-\delta_1}w)(b) \\ &\quad - (1-\kappa)\, {}^{H}I^{\nu_1}({}^{RL}I^{\nu-\delta_2}w)(b) \Big), \quad \eta \in J := [a,b], \end{split} \tag{6}$$

*where*

$$\lambda_1 = \frac{\kappa \log(\frac{b}{a})^{1-\delta_1}}{\Gamma(2-\delta_1)} + \frac{(1-\kappa)\log(\frac{b}{a})^{1-\delta_2}}{\Gamma(2-\delta_2)} \neq 0. \tag{7}$$

**Proof.** Applying the Riemann–Liouville fractional integral of order $\nu$ and the Hadamard fractional integral of order $\nu_1$ to Equation (4), we obtain

$$\mathbf{x}(\eta) = {}^{H}I^{\nu_1}({}^{RL}I^{\nu}w)(\eta) + c_1 + c_2 \frac{\log(\frac{\eta}{a})^{\nu_1}}{\Gamma(\nu_1 + 1)}. \tag{8}$$

The first boundary condition of (5) gives $c_1 = 0$; substituting the second boundary condition of (5) into Equation (8), we obtain

$$\delta_3 = \kappa\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1} w)(b) + c_2 \kappa \frac{\log(\frac{b}{a})^{1-\delta_1}}{\Gamma(2-\delta_1)} + (1-\kappa)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2} w)(b) + c_2 (1-\kappa) \frac{\log(\frac{b}{a})^{1-\delta_2}}{\Gamma(2-\delta_2)}, \tag{9}$$

$$c_2 = \frac{1}{\lambda_1} \left( \delta_3 - \kappa\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}w)(b) - (1-\kappa)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}w)(b) \right). \tag{10}$$

Substituting the constant $c_2$ into (8), we obtain the integral Equation (6). The proof is completed.
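The solvability condition $\lambda_1 \neq 0$ in (7) is easy to check numerically. The sketch below evaluates (7) at the parameter values later used in Example 1 (the rounded constants quoted there were computed by the original authors and are not re-derived here; we only verify the sign):

```python
import math

# λ1 from Equation (7) with κ = 1/8, a = 1/2, b = 5/2, δ1 = 1/2, δ2 = 1/4,
# the parameters of Example 1.  Both summands are nonnegative for
# 0 ≤ κ ≤ 1 and b > a, so λ1 > 0 and Lemma 2 applies.
kappa, a, b, d1, d2 = 1 / 8, 1 / 2, 5 / 2, 1 / 2, 1 / 4
L = math.log(b / a)
lam1 = kappa * L ** (1 - d1) / math.gamma(2 - d1) \
     + (1 - kappa) * L ** (1 - d2) / math.gamma(2 - d2)
print(lam1 > 0)   # True: the solvability condition λ1 ≠ 0 holds
```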

**Theorem 1** ([35] (Krasnoselskii's FPT))**.** *Let $\mathcal{X}$ be a Banach space and let $\emptyset \neq B \subset \mathcal{X}$ be a closed, bounded, and convex set. Let $A_1$ and $A_2$ be two operators such that: (i) $A_1x + A_2y \in B$ whenever $x, y \in B$; (ii) $A_1$ is compact and continuous; (iii) $A_2$ is a contraction mapping. Then there exists $z \in B$ such that $z = A_1z + A_2z$.*

#### **3. Main Results**

We start by defining $\zeta = C([a, b], \mathbb{R})$ as the Banach space of all continuous functions from $[a, b]$ to $\mathbb{R}$ with the norm $\|x\| = \sup\{|x(t)| : t \in [a, b]\}$. Now, define the operator $\Phi : C([a, b], \mathbb{R}) \to C([a, b], \mathbb{R})$ by

$$\begin{split} \Phi \mathbf{x}(t) &= {}^{H}I^{\nu_1}({}^{RL}I^{\nu}(g_x))(t) + \frac{\log(\frac{t}{a})^{\nu_1}}{\lambda_1\Gamma(\nu_1 + 1)} \Big(\delta_3 - \kappa\, {}^{H}I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(g_x))(b) \\ &\quad - (1 - \kappa)\, {}^{H}I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(g_x))(b) \Big), \quad t \in J := [a, b], \end{split} \tag{11}$$

where $g_x(t) = g(t, x(t))$, and we use the abbreviated notation

$$^H I^{\nu_1}({}^{RL}I^{\nu}(g_x))(t) = \frac{1}{\Gamma(\nu_1)\Gamma(\nu)} \int_a^t \int_a^s \left(\log \frac{t}{s}\right)^{\nu_1-1} (s - \sigma)^{\nu - 1} g(\sigma, \mathbf{x}(\sigma))\, d\sigma\, \frac{ds}{s}.$$
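The composed operator above can be validated numerically: for $g \equiv 1$, the inner Riemann–Liouville integral has the closed form $({}^{RL}I^{\nu}1)(s) = (s-a)^{\nu}/\Gamma(\nu+1)$, so the double integral must agree with a single Hadamard integral of that expression. A brute-force sketch (illustrative parameter values of our choosing):

```python
import math

def composed_semi(nu1, nu, a, t, n=4000):
    # ^H I^{nu1}(^{RL} I^{nu}(1))(t) using the closed form of the inner
    # integral: (^{RL} I^{nu} 1)(s) = (s-a)^nu / Γ(nu+1).
    h = (t - a) / n
    acc = 0.0
    for i in range(n):
        s = a + (i + 0.5) * h
        acc += math.log(t / s) ** (nu1 - 1) * (s - a) ** nu / (math.gamma(nu + 1) * s)
    return acc * h / math.gamma(nu1)

def composed_double(nu1, nu, a, t, n=400):
    # the same quantity as a double midpoint sum over a < sigma < s < t
    h = (t - a) / n
    acc = 0.0
    for i in range(n):
        s = a + (i + 0.5) * h
        hs = (s - a) / n
        inner = sum((s - (a + (j + 0.5) * hs)) ** (nu - 1) for j in range(n)) * hs
        acc += math.log(t / s) ** (nu1 - 1) * inner / s
    return acc * h / (math.gamma(nu1) * math.gamma(nu))

v1 = composed_semi(1.5, 1.5, 0.5, 2.5)
v2 = composed_double(1.5, 1.5, 0.5, 2.5)
print(abs(v1 - v2) < 1e-3)   # True: the two evaluations agree
```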

FPTs play an essential role in many interesting recent results; see, e.g., [36–38].

*3.1. Uniqueness Via Contraction Mapping Principle*

**Theorem 2.** *Assume that $(A_1)$ and $(A_3)$ hold. If $\psi_g^*\lambda_2 < 1$, where*

$$\begin{aligned} \psi_g^* &= \sup\{\psi_g(t) : t \in [a, b]\}, \\ \lambda_2 &= {}^H I^{\nu_1}({}^{RL}I^{\nu}(1))(b) + \frac{|\log(\frac{b}{a})^{\nu_1}|}{|\lambda_1|\Gamma(\nu_1 + 1)} \Big( |\delta_3| - |\kappa|\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(1))(b) - (|1 - \kappa|)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(1))(b) \Big), \end{aligned}$$

*then the fractional problem* (1) *and* (2) *has a unique solution on J.*

**Proof.** Let $B_r = \{x \in \mathcal{C} : \|x\| \le r\}$ be a closed, bounded, and convex subset of $\mathcal{C}$, where the fixed constant $r$ satisfies

$$r \ge \frac{p\lambda_2}{1 - \psi_g^*\lambda_2}, \tag{12}$$

where $p = \sup\{|g(t, 0)| : t \in [a, b]\}$. We first prove that $\Phi B_r \subset B_r$. Using the triangle inequality $|g_x| \le |g_x - g_0| + |g_0|$, we have

$$\begin{aligned} |\Phi \mathbf{x}(t)| &\le {}^H I^{\nu_1}({}^{RL}I^{\nu}(|g_x|))(t) + \frac{|\log(\frac{t}{a})^{\nu_1}|}{|\lambda_1|\Gamma(\nu_1 + 1)} \Big( |\delta_3| - |\kappa|\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(|g_x|))(b) - (|1 - \kappa|)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(|g_x|))(b) \Big) \\ &\le {}^H I^{\nu_1}({}^{RL}I^{\nu}(|g_x - g_0| + |g_0|))(t) + \frac{|\log(\frac{t}{a})^{\nu_1}|}{|\lambda_1|\Gamma(\nu_1 + 1)} \Big( |\delta_3| - |\kappa|\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(|g_x - g_0| + |g_0|))(b) - (|1 - \kappa|)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(|g_x - g_0| + |g_0|))(b) \Big) \\ &\le {}^H I^{\nu_1}({}^{RL}I^{\nu}(\psi_g^* r + p))(t) + \frac{|\log(\frac{t}{a})^{\nu_1}|}{|\lambda_1|\Gamma(\nu_1 + 1)} \Big( |\delta_3| - |\kappa|\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(\psi_g^* r + p))(b) - (|1 - \kappa|)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(\psi_g^* r + p))(b) \Big) \\ &= \psi_g^* r \lambda_2 + p\lambda_2 \le r. \end{aligned}$$

Therefore, $\Phi B_r \subset B_r$. For $x_1, x_2 \in B_r$, we have

$$\begin{aligned} |\Phi \mathbf{x}_1(t) - \Phi \mathbf{x}_2(t)| &\le {}^H I^{\nu_1}({}^{RL}I^{\nu}(|g_{x_1} - g_{x_2}|))(t) + \frac{|\log(\frac{t}{a})^{\nu_1}|}{|\lambda_1|\Gamma(\nu_1 + 1)} \Big( |\delta_3| - |\kappa|\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(|g_{x_1} - g_{x_2}|))(b) - (|1 - \kappa|)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(|g_{x_1} - g_{x_2}|))(b) \Big) \\ &\le \psi_g^* \|x_1 - x_2\|\, {}^H I^{\nu_1}({}^{RL}I^{\nu}(1))(t) + \frac{|\log(\frac{t}{a})^{\nu_1}|}{|\lambda_1|\Gamma(\nu_1 + 1)} \Big( |\delta_3| - |\kappa|\psi_g^* \|x_1 - x_2\|\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(1))(b) - (|1 - \kappa|)\psi_g^* \|x_1 - x_2\|\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(1))(b) \Big) \\ &= \psi_g^* \lambda_2 \|x_1 - x_2\|. \end{aligned}$$

Hence $|\Phi x_1(t) - \Phi x_2(t)| \le \psi_g^* \lambda_2 \|x_1 - x_2\|$. Since $\psi_g^*\lambda_2 < 1$, the operator $\Phi$ is a contraction. Therefore, $\Phi$ has a unique FP, which implies that problem (1)–(2) has a unique solution on $J = [a, b]$.
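The mechanism behind Theorem 2 can be illustrated with a generic scalar contraction (a toy example of ours, unrelated to the specific operator $\Phi$): Picard iterates of any map with Lipschitz constant less than one converge to its unique fixed point.

```python
import math

# Toy contraction: |phi'(x)| = 0.3|sin x| ≤ 0.3 < 1 on R, so phi has a
# unique fixed point and the Picard iterates x_{k+1} = phi(x_k) converge
# to it geometrically, from any starting value.
phi = lambda x: 0.3 * math.cos(x) + 1.0

x = 0.0
for _ in range(60):
    x = phi(x)
print(abs(x - phi(x)) < 1e-12)   # True: x is numerically a fixed point
```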

*3.2. Existence via Krasnoselskii's Theorem*

**Theorem 3.** *Suppose $(A_1)$ and $(A_2)$ are satisfied. If*

$$\psi_g^*\, {}^H I^{\nu_1}({}^{RL}I^{\nu}(1))(b) < 1, \tag{13}$$

*then the BVP* (1) *and* (2) *has at least one solution on* $[a, b]$.

**Proof.** Let $B_\sigma = \{x \in C([a, b], \mathbb{R}) : \|x\| \le \sigma\}$, where the constant $\sigma$ satisfies $\sigma \ge \varphi_g^*\lambda_2$ and $\varphi_g^* = \sup\{\varphi_g(t) : t \in [a, b]\}$. Split the operator $\Phi$ into the two operators $\Phi_1$ and $\Phi_2$ on $B_\sigma$ with

$$\Phi_1 \mathbf{x}(t) = \frac{\log(\frac{t}{a})^{\nu_1}}{\lambda_1\Gamma(\nu_1 + 1)} \left(\delta_3 - \kappa\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(g_x))(b) - (1-\kappa)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(g_x))(b)\right),$$

and

$$\Phi_2 \mathbf{x}(t) = {}^H I^{\nu_1}({}^{RL}I^{\nu}(g_x))(t).$$

The ball $B_\sigma$ is a bounded, closed, and convex subset of the Banach space $C([a, b], \mathbb{R})$. We first show that $\Phi_1 x + \Phi_2 y \in B_\sigma$. Let $x, y \in B_\sigma$; then, we have

$$\begin{split} |\Phi_1 \mathbf{x}(t) + \Phi_2 \mathbf{y}(t)| &\le \frac{|\log(\frac{t}{a})^{\nu_1}|}{|\lambda_1|\Gamma(\nu_1+1)} \Big( |\delta_3| - |\kappa|\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(|g_x|))(b) \\ &\quad - (|1-\kappa|)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(|g_x|))(b) \Big) + {}^H I^{\nu_1}({}^{RL}I^{\nu}(|g_y|))(t) \\ &\le \frac{|\log(\frac{t}{a})^{\nu_1}|}{|\lambda_1|\Gamma(\nu_1+1)} \Big( |\delta_3| - |\kappa|\varphi_g^*\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(1))(b) \\ &\quad - (|1-\kappa|)\varphi_g^*\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(1))(b) \Big) + \varphi_g^*\, {}^H I^{\nu_1}({}^{RL}I^{\nu}(1))(t) \\ &\le \varphi_g^*\lambda_2 \\ &\le \sigma, \end{split}$$

which implies that Φ1*x* + Φ2*y* ∈ *Bσ*. Next, to prove that Φ<sup>2</sup> is a contraction mapping, for *x*, *y* ∈ *Bσ*, we have

$$\begin{aligned} \|\Phi_2 \mathbf{x} - \Phi_2 \mathbf{y}\| &\le {}^H I^{\nu_1}({}^{RL}I^{\nu}(|g_x - g_y|))(b) \\ &\le \psi_g^*\, {}^H I^{\nu_1}({}^{RL}I^{\nu}(1))(b)\, \|\mathbf{x} - \mathbf{y}\|, \end{aligned}$$

by (*A*3), which is a contraction by (13).

Next, we show that the operator Φ<sup>1</sup> is continuous and compact. By using the continuity of *<sup>g</sup>* on [*a*, *<sup>b</sup>*] × R, we can conclude that <sup>Φ</sup><sup>1</sup> is continuous. For *<sup>x</sup>* ∈ *<sup>B</sup>σ*,

$$\|\Phi_1 x\| \le \varphi_g^* \lambda_3,$$

where

$$\lambda_3 = \frac{|\log(\frac{b}{a})^{\nu_1}|}{|\lambda_1|\Gamma(\nu_1 + 1)} \left( |\delta_3| - |\kappa|\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(1))(b) - (|1-\kappa|)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(1))(b) \right).$$

This implies that Φ1*B<sup>σ</sup>* is uniformly bounded. Now, we prove that Φ1*B<sup>σ</sup>* is equicontinuous. For *t*1, *t*<sup>2</sup> ∈ [*a*, *b*]: *t*<sup>1</sup> < *t*<sup>2</sup> and for *x* ∈ *Bσ*, we have

$$\begin{split} |\Phi_1 \mathbf{x}(t_1) - \Phi_1 \mathbf{x}(t_2)| &\le \frac{|\log(\frac{t_2}{a})^{\nu_1} - \log(\frac{t_1}{a})^{\nu_1}|}{|\lambda_1|\Gamma(\nu_1 + 1)} \Big( |\delta_3| - |\kappa|\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_1}(|g_x|))(b) \\ &\quad - (|1 - \kappa|)\, {}^H I^{\nu_1}({}^{RL}I^{\nu-\delta_2}(|g_x|))(b) \Big) \\ &\le \varphi_g^* \lambda_3\, \Big|\log\Big(\tfrac{t_2}{a}\Big)^{\nu_1} - \log\Big(\tfrac{t_1}{a}\Big)^{\nu_1}\Big|. \end{split}$$

It is obvious that the above expression is independent of $x$ and tends to zero as $t_1 \to t_2$. Therefore $\Phi_1 B_\sigma$ is equicontinuous and hence relatively compact. By applying the Arzelà–Ascoli theorem (see, e.g., [39]), the operator $\Phi_1$ is compact on $B_\sigma$. Thus, $\Phi_1$ and $\Phi_2$ satisfy the assumptions of Theorem 1, and we conclude that the problem (1) and (2) has at least one solution on $[a, b]$.

#### **4. Example**

We now consider an example to illustrate the main results.

**Example 1.** *Suppose the FBCs for SC-HFDEs*

$$^C\mathfrak{D}^{\frac{3}{2}}({}^H\mathfrak{D}^{\frac{4}{3}}\mathbf{x})(\eta) = g(\eta, \mathbf{x}(\eta)), \quad \eta \in \left(\tfrac{1}{2}, \tfrac{5}{2}\right), \tag{14}$$

$$\mathbf{x}\left(\tfrac{1}{2}\right) = 0, \quad \tfrac{1}{8}\, {}^H\mathfrak{D}^{\frac{1}{2}}\mathbf{x}\left(\tfrac{5}{2}\right) + \tfrac{7}{8}\, {}^H\mathfrak{D}^{\frac{1}{4}}\mathbf{x}\left(\tfrac{5}{2}\right) = \tfrac{3}{4}, \tag{15}$$

*where $\nu = \frac{3}{2}$, $\nu_1 = \frac{4}{3}$, $a = \frac{1}{2}$, $b = \frac{5}{2}$, $\delta_1 = \frac{1}{2}$, $\delta_2 = \frac{1}{4}$, $\delta_3 = \frac{3}{4}$, and $\kappa = \frac{1}{8}$. Then $\lambda_1 = 1.005489449$, ${}^H I^{\frac{4}{3}}({}^{RL}I^{\frac{3}{2}}(1))(\frac{5}{2}) = 0.039718$, ${}^H I^{\frac{4}{3}}({}^{RL}I^{1}(1))(\frac{5}{2}) = 0.1055989$, ${}^H I^{\frac{4}{3}}({}^{RL}I^{\frac{5}{4}}(1))(\frac{5}{2}) = 0.0821249$, $\frac{(\log 5)^{\frac{4}{3}}}{\lambda_1\Gamma(\frac{7}{3})} = 0.519833119$, and let $g : (\frac{1}{2}, \frac{5}{2}) \times \mathbb{R} \to \mathbb{R}$ with*

$$g(\eta, x(\eta)) = \frac{\cos^2 \eta}{4[(\eta - \frac{1}{2}) + 3]} \left(\frac{x^2 + |x|}{|x|}\right) + \frac{1}{7},$$

*which gives $|g(\eta, x(\eta)) - g(\eta, y(\eta))| \le \psi_g^* |x - y|$ with $\psi_g^* = \frac{1}{3}$. Thus, $\psi_g^*\lambda_2 = 0.782827602 < 1$.*

*Hence, by Theorem 2, problem* (14) *and* (15) *with this $g(\eta, x(\eta))$ has a unique solution on* $(\frac{1}{2}, \frac{5}{2})$*. This illustrates our results.*
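A quick numerical sanity check of the Lipschitz bound in Example 1 (a sketch of ours; note that $(x^2+|x|)/|x| = |x|+1$ for $x \neq 0$, so the sampled difference quotients should stay below the quoted $\psi_g^* = \frac{1}{3}$):

```python
import math

def g(eta, x):
    # the nonlinearity of Example 1 (defined for x != 0)
    return math.cos(eta) ** 2 / (4 * ((eta - 0.5) + 3.0)) \
        * (x * x + abs(x)) / abs(x) + 1.0 / 7.0

# sample difference quotients |g(eta,x) - g(eta,y)| / |x - y| on a grid
xs = [-2.0 + 0.13 * i for i in range(31)]
xs = [x for x in xs if abs(x) > 1e-9]      # avoid x = 0
worst = 0.0
for k in range(1, 20):
    eta = 0.5 + 0.1 * k                     # eta in (1/2, 5/2)
    for x in xs:
        for y in xs:
            if x != y:
                worst = max(worst, abs(g(eta, x) - g(eta, y)) / abs(x - y))
print(worst <= 1 / 3)    # True: consistent with psi_g* = 1/3
```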

#### **5. Conclusions**

We investigated the existence and uniqueness results for fractional boundary value problems of SC-HFDE. Potential future work includes developing new fractional models for the coronavirus and finding controlled conditions for it using a numerical approach with fractional order. Moreover, we intend to extend our results to other FDs, e.g., the Abu-Shady–Kaabar FD, the Katugampola derivative, and the conformable derivative.

**Author Contributions:** Conceptualization, R.A., P.K., K.K., Y.A., L.A. and E.-s.E.-h.; methodology, R.A., P.K. and K.K.; software, R.A., P.K., K.K., Y.A., L.A. and E.-s.E.-h.; validation, R.A., P.K., K.K., Y.A., L.A. and E.-s.E.-h.; formal analysis, R.A., P.K. and K.K.; investigation, R.A., P.K., K.K., Y.A., L.A. and E.-s.E.-h.; data curation, R.A., P.K., K.K., Y.A., L.A. and E.-s.E.-h.; writing—original draft preparation, R.A., P.K. and K.K.; writing—review and editing, R.A., P.K., K.K., Y.A., L.A. and E.-s.E.-h.; visualization, R.A., P.K., K.K., Y.A., L.A. and E.-s.E.-h.; supervision, K.K.; project administration, R.A., P.K., K.K., Y.A., L.A. and E.-s.E.-h. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Initial Boundary Value Problem for a Fractional Viscoelastic Equation of the Kirchhoff Type**

**Yang Liu \* and Li Zhang**

College of Mathematics and Computer Science, Northwest Minzu University, Lanzhou 730030, China **\*** Correspondence: liuyangnufn@163.com

**Abstract:** In this paper, we study the initial boundary value problem for a fractional viscoelastic equation of the Kirchhoff type. In suitable functional spaces, we define a potential well. In the framework of the potential well theory, we obtain the global existence of solutions by using the Galerkin approximations. Moreover, we derive the asymptotic behavior of solutions by means of the perturbed energy method. Our main results provide sufficient conditions for the qualitative properties of solutions in time.

**Keywords:** fractional viscoelastic equations; global existence; asymptotic behavior

**MSC:** 35R11; 35A01; 35B40

#### **1. Introduction**

In this paper, we study the following initial boundary value problem for a fractional viscoelastic equation of the Kirchhoff type:

$$\begin{aligned} u_{tt} + h([u]_m^2)(-\Delta)^m u - \int_0^t g(t-\tau)(-\Delta)^m u(\tau)\, \mathrm{d}\tau \\ + u_t = f(u), \quad x \in \Omega,\ t > 0, \end{aligned} \tag{1}$$

$$u(x, 0) = u_0(x), \quad u_t(x, 0) = u_1(x), \quad x \in \Omega, \tag{2}$$

$$u(x, t) = 0, \quad x \in \mathbb{R}^N \setminus \Omega,\ t > 0, \tag{3}$$


Received: 24 August 2022 Accepted: 6 October 2022 Published: 11 October 2022

**Citation:** Liu, Y.; Zhang, L. Initial Boundary Value Problem for a Fractional Viscoelastic Equation of the Kirchhoff Type. *Fractal Fract.* **2022**, *6*, 581. https://doi.org/ 10.3390/fractalfract6100581 Academic Editors: Haci Mehmet Baskonus and Ivanka Stamova


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

where

$$[u]_m = \left( \iint_{\mathbb{R}^{2N}} \frac{|u(x,t) - u(y,t)|^2}{|x - y|^{N + 2m}}\, \mathrm{d}x\,\mathrm{d}y \right)^{\frac{1}{2}}$$

is the Gagliardo seminorm, $(-\Delta)^m$ is the fractional Laplace operator with $0 < m < 1$, and $\Omega \subset \mathbb{R}^N$ ($N \ge 1$) is a bounded domain with a Lipschitz boundary. The unknown function $u = u(x, t)$ is the vertical displacement of the small-amplitude vibrating viscoelastic string with the fractional length at position $x$ and time $t$, $-\int_0^t g(t-\tau)(-\Delta)^m u(\tau)\, \mathrm{d}\tau$ is the viscoelastic term, $u_t$ is the weak damping term, the Kirchhoff function is $h(s) = 1 + s^{p-1}$ for all $s \ge 0$, $p > 1$, and the source term is $f(u) = |u|^{q-2}u$. The exponent $q$ and the memory kernel $g$ will be specified later.
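To build intuition for the Gagliardo seminorm $[u]_m$, the sketch below approximates it in dimension $N = 1$ for a Gaussian bump on a truncated domain (an illustration only; the choices of $u$, $m$, and the truncation are ours, not the paper's):

```python
import math

def gagliardo_seminorm_1d(u, m, lo=-6.0, hi=6.0, n=400):
    # midpoint double sum approximating
    #   ( ∬ |u(x)-u(y)|^2 / |x-y|^{1+2m} dx dy )^{1/2}
    # on a truncated domain; the diagonal cells (x = y) are skipped, which
    # is harmless because the integrand there is O(|x-y|^{1-2m}),
    # integrable for 0 < m < 1.
    h = (hi - lo) / n
    acc = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        for j in range(n):
            if i == j:
                continue
            y = lo + (j + 0.5) * h
            acc += (u(x) - u(y)) ** 2 / abs(x - y) ** (1.0 + 2.0 * m)
    return math.sqrt(acc * h * h)

semi = gagliardo_seminorm_1d(lambda x: math.exp(-x * x), m=0.4)
print(semi > 0.0 and math.isfinite(semi))   # True: finite for a smooth bump
```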

For the classical viscoelastic wave equation of the Kirchhoff type, Wu and Tsai [1] studied the following equation:

$$u_{tt} - h(\|\nabla u\|_2^2)\Delta u + \int_0^t g(t-\tau)\Delta u(\tau)\,\mathrm{d}\tau - \Delta u_t = f(u).$$

They obtained the local existence, global existence, asymptotic behavior, and blow-up of solutions and provided the estimates on the decay rate of the energy function and the blow-up time of the solutions. Moreover, in [2], they considered the following viscoelastic wave equation of the Kirchhoff type with nonlinear weak damping:

$$u_{tt} - h(\|\nabla u\|_2^2)\Delta u + \int_0^t g(t-\tau)\Delta u(\tau)\,\mathrm{d}\tau + f_2(u_t) = f_1(u).$$

They obtained the local existence and blow-up of solutions and also derived the estimates of the blow-up times of the solutions.

When we examine the deep properties of real-world problems and extend them to other studies, some concepts usually have their own limitations. In this regard, many researchers pointed out the limitations of integer-order calculus while studying the systems related to non-Markovian mechanisms, hereditary properties, and other factors. In this situation, fractional calculus plays an important role, which is a generalization of classical calculus (see [3]). In recent years, fractional partial differential equations have attracted a great deal of attention due to their wide applicability in continuum mechanics, quantum and statistical mechanics, population dynamics, optimal control, game theory, and so on (see, for instance, [3–11] and the references therein). Fiscella and Valdinoci [12] proposed a fractional stationary Kirchhoff equation which models the vibration of a string with a fractional length by considering the nonlocal aspect of the tension. Subsequently, many fractional Kirchhoff equations were widely studied. Autuori et al. [13] investigated

$$-h([u]_m^2)\mathcal{L}_K u = \lambda f(x, u) + |u|^{q^*-2}u,$$

where L*<sup>K</sup>* is a fractional integro-differential operator, *λ* is a parameter, and *q*<sup>∗</sup> is the critical exponent of the fractional Sobolev space *Hm*(R*N*). They proved the existence and asymptotic behavior of nonnegative solutions. Molica Bisci and Vilasi [14] dealt with

$$-h([u]_m^2)\mathcal{L}_K u = \lambda f(x, u) + \mu g(x, u),$$

and derived the existence of at least three weak solutions for suitable values of the parameters by the variational approach. Moreover, they provided a concrete estimate for the range of these parameters in the autonomous case. Pucci et al. [15] investigated

$$\lambda h([u]_{m,p}^p)(-\Delta)_p^m u + V(x)|u|^{p-2}u = \lambda \omega(x)|u|^{q-2}u - \nu(x)|u|^{r-2}u,$$

where $(-\Delta)_p^m$ is the fractional $p$-Laplace operator, which may be defined along any $\varphi \in C_0^{\infty}(\mathbb{R}^N)$ as

$$(-\Delta)_p^m \varphi(x) = 2 \lim_{\varepsilon \to 0^+} \int_{\mathbb{R}^N \setminus B_\varepsilon(x)} \frac{|\varphi(x) - \varphi(y)|^{p-2}(\varphi(x) - \varphi(y))}{|x - y|^{N+mp}}\, \mathrm{d}y$$

for $x \in \mathbb{R}^N$ and

$$[u]_{m,p} = \left( \iint_{\mathbb{R}^{2N}} \frac{|u(x) - u(y)|^p}{|x - y|^{N + mp}}\, \mathrm{d}x\,\mathrm{d}y \right)^{\frac{1}{p}}.$$

By using the variational approach and topological degree theory, they proved multiplicity results depending on the parameter *λ* and on suitable general integrability properties of the ratio between some powers of the weights. Moreover, they obtained the existence of infinitely many pairs of entire solutions by genus theory. Wang et al. [16] studied the following fractional Kirchhoff equation involving Choquard nonlinearity and singular nonlinearity:

$$\left(a + b[u]_{m,p}^{(\theta-1)p}\right)(-\Delta)_p^m u = \lambda \frac{f_1(x)}{|u|^{\beta}} + \left(\int_{\mathbb{R}^N} \frac{f_2(y)|u(y)|^q}{|x-y|^{\mu}}\, \mathrm{d}y\right) f_2(x)\, u^{q-1},$$

where *a*, *b*, *θ*, *λ*, *β*, and *μ* are constants that meet certain conditions. They obtained the existence and multiplicity of nonnegative solutions by using the Nehari manifold approach combined with the Hardy–Littlehood–Sobolev inequality. Recently, Lin et al. [17] considered the fractional evolution Kirchhoff equation of the form

$$u_{tt} + [u]_m^{2(\theta-1)}(-\Delta)^m u = f(u),$$

and obtained the finite time blow-up of solutions with arbitrary positive initial energy by the concavity arguments.

Continuum mechanics attempts to describe the motions and equilibrium states of deformable bodies. Two types of materials are usually considered in basic texts on continuum mechanics: elastic materials and viscous fluids. At each material point of an elastic material, the stress at the present time depends only on the present value of the strain. On the other hand, for an incompressible viscous fluid, the stress at a given point is a function of the present value of the velocity gradient at that point (plus an undetermined pressure). Viscoelastic materials have properties between those of elastic materials and viscous fluids. Such materials have memory, where the stress depends not only on the present values of the strain or velocity gradient but also on the entire temporal history of motion (see [18]). Therefore, the research on the vibration of the viscoelastic string with a fractional length has important physical significance and scientific value. More recently, Xiang and Hu [19] investigated the following fractional viscoelastic equation of the Kirchhoff type:

$$u\_{tt} + h([u]\_m^2)(-\Delta)^m u - \int\_0^t \mathbf{g}(t-\tau)(-\Delta)^m u(\tau) \,d\tau + (-\Delta)^s u\_t = \lambda |u|^{q-2} u.$$

They proved the local and global existence of solutions by the Galerkin approximations and obtained the blow-up of solutions by the concavity arguments. However, to the best of our knowledge, much less effort has been devoted to similar studies.

Motivated by the above works, we would like to deal with the problems in Equations (1)–(3). In suitable functional spaces, we aim to study the global existence and asymptotic behavior of solutions in time. First of all, compared with [19], Equation (1) is non-degenerate due to the expression of the Kirchhoff function. Secondly, although we also evaluate the evolutional properties of solutions, we concentrate on the relationship between the initial data and them. In addition, our main method is the potential well theory that is different from classical ones. In the framework of our potential well theory, it is not necessary to introduce the Nehari functional or the Nehari manifold.

This paper is organized as follows. Section 1 is the introduction. In Section 2, we prepare the preliminary knowledge on the functional space. Applying the idea from [20], we define a potential well and provide its properties. Moreover, we display assumptions and notations corresponding to the problems in Equations (1)–(3). In Section 3, we introduce our main method in detail. In Section 4, we prove the global existence of solutions. Section 5 is devoted to the proof of the asymptotic behavior of the solutions by means of the perturbed energy method [21,22]. In Section 6, we summarize our main results.

#### **2. Preliminaries**

In this section, we first recall some necessary definitions and properties (see [23–25] for further details).

Let $X$ be the linear space of Lebesgue measurable functions from $\mathbb{R}^N$ to $\mathbb{R}$ such that the restriction to $\Omega$ of any function $u$ in $X$ belongs to $L^2(\Omega)$ and

$$\iint_Q \frac{|u(x)-u(y)|^2}{|x-y|^{N+2m}}\,\mathrm{d}x\,\mathrm{d}y < \infty,$$

where $Q := \mathbb{R}^{2N} \setminus (\mathcal{C}\Omega \times \mathcal{C}\Omega)$ and $\mathcal{C}\Omega := \mathbb{R}^N \setminus \Omega$. The space $X$ is endowed with the norm

$$\|u\|_X = \|u\|_{L^2(\Omega)} + \left(\iint_Q \frac{|u(x)-u(y)|^2}{|x-y|^{N+2m}}\,\mathrm{d}x\,\mathrm{d}y\right)^{\frac{1}{2}}.$$

It is easy to check that $\|\cdot\|_X$ is a norm on $X$. Moreover, we introduce the following closed linear subspace of $X$:

$$X_0 = \{ u \in X \mid u = 0 \text{ a.e. in } \mathcal{C}\Omega \}.$$

This is a Hilbert space equipped with the inner product

$$(u,v)_* := (u,v)_{X_0} = \iint_Q \frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2m}}\,\mathrm{d}x\,\mathrm{d}y$$

and the norm

$$\|u\|_* := \|u\|_{X_0} = \left(\iint_Q \frac{|u(x)-u(y)|^2}{|x-y|^{N+2m}}\,\mathrm{d}x\,\mathrm{d}y\right)^{\frac{1}{2}}.$$

Here, $\|\cdot\|_*$ is a norm on $X_0$ equivalent to $\|\cdot\|_X$.

The embedding $X_0 \hookrightarrow L^q(\Omega)$ is continuous for any $1 \le q \le q^*$ and compact for any $1 \le q < q^*$, where

$$q^\* = \begin{cases} \frac{2N}{N-2m} & \text{if } 2m < N, \\ \infty & \text{if } 2m \ge N. \end{cases}$$
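
As an aside, the critical exponent above is easy to evaluate programmatically; the following is a small illustrative helper (not part of the paper's analysis), where `N` is the space dimension and `m` is the fractional order:

```python
def critical_exponent(N, m):
    """Critical Sobolev exponent q* for the embedding X_0 -> L^q(Omega)."""
    # 2m < N: subcritical regime with q* = 2N/(N - 2m); otherwise q* is infinite.
    return 2 * N / (N - 2 * m) if 2 * m < N else float("inf")
```

For example, with $N = 3$ and $m = 1/2$ the embedding is continuous up to $q^* = 3$.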

In this paper, the exponent *q* of the source term satisfies the following assumption:

$(\mathbf{A}_1)$ $2 < q < q^*$.

Moreover, as in [26], the memory kernel *g* satisfies

$$(\mathbf{A}_2)\quad g \in C^1(\mathbb{R}^+) \cap L^1(\mathbb{R}^+),\ g(t) \ge 0,\ g'(t) \le 0 \text{ for all } t \in [0,\infty), \text{ and } \kappa := 1 - \int_0^\infty g(t)\,\mathrm{d}t > 0.$$

For the sake of simplicity, we denote

$$\|\cdot\|_p := \|\cdot\|_{L^p(\Omega)}, \qquad (u,v) := \int_\Omega uv\,\mathrm{d}x,$$

and

$$(g \circ u)(t) := \int_0^t g(t-\tau)\|u(t)-u(\tau)\|_*^2\,\mathrm{d}\tau.$$

**Definition 1.** *A function $u \in L^\infty(0,T;X_0)$ with $u_t \in L^\infty(0,T;L^2(\Omega))$ is called a weak solution to Equations* (1)*–*(3) *if $u(0) = u_0$ in $X_0$, $u_t(0) = u_1$ in $L^2(\Omega)$, and*

$$\begin{aligned} &(u_t(t), w) + \int_0^t h(\|u(\tau)\|_*^2)(u(\tau), w)_*\,\mathrm{d}\tau - \int_0^t\!\int_0^s g(s-\tau)(u(\tau), w)_*\,\mathrm{d}\tau\,\mathrm{d}s \\ &+ (u(t), w) = (u_1, w) + (u_0, w) + \int_0^t (f(u(\tau)), w)\,\mathrm{d}\tau \end{aligned}$$

*for any w* ∈ *X*<sup>0</sup> *and t* ∈ (0, *T*]*.*

We define the total energy function associated with the problems in Equations (1)–(3) as follows:

$$\begin{aligned} E(t) &= \frac{1}{2}\|u_t(t)\|_2^2 + \frac{1}{2p}\|u(t)\|_*^{2p} + \frac{1}{2}\left(1 - \int_0^t g(\tau)\,\mathrm{d}\tau\right)\|u(t)\|_*^2 \\ &\quad + \frac{1}{2}(g \circ u)(t) - \frac{1}{q}\|u(t)\|_q^q. \end{aligned}$$

The potential well is

$$\mathcal{W} = \left\{ u \in X_0 \,\middle|\, \|u\|_* < \left(\frac{2q}{(q-2)\kappa}d\right)^{\frac{1}{2}} \right\} \tag{4}$$

and its boundary is

$$\partial\mathcal{W} = \left\{ u \in X_0 \,\middle|\, \|u\|_* = \left(\frac{2q}{(q-2)\kappa}d\right)^{\frac{1}{2}} \right\}, \tag{5}$$

where the depth of the potential well is

$$d = \frac{q - 2}{2q} \kappa^{\frac{q}{q-2}} \mathfrak{C}\_1^{-\frac{2q}{q-2}}.\tag{6}$$

In addition, $\mathfrak{C}_1$ is the best Sobolev constant for the embedding $X_0 \hookrightarrow L^q(\Omega)$; in other words, we have

$$\mathfrak{C}_1 = \sup_{u \in X_0 \setminus \{0\}} \frac{\|u\|_q}{\|u\|_*}.$$

**Lemma 1.** *Let* $(\mathbf{A}_1)$ *and* $(\mathbf{A}_2)$ *be fulfilled. Then, the following are true:*

*(i) If $u \in \mathcal{W}$ and $u \ne 0$, then $\kappa\|u\|_*^2 > \|u\|_q^q$;*

*(ii) If $u \in \partial\mathcal{W}$, then $\kappa\|u\|_*^2 \ge \|u\|_q^q$.*

**Proof.** (i) By *u* ∈ W and Equation (4), we have

$$\|u\|_* < \left(\frac{2q}{(q-2)\kappa}d\right)^{\frac{1}{2}}.$$

It follows from Equation (6) that

$$||u||\_\* < \kappa^{\frac{1}{q-2}} \mathfrak{C}\_1^{-\frac{q}{q-2}}.$$

Noting that $\|u\|_* \ne 0$, we have

$$\kappa\|u\|_*^2 > \mathfrak{C}_1^q\|u\|_*^q.$$

Hence, we obtain

$$\kappa\|u\|_*^2 > \|u\|_q^q.$$

(ii) By *u* ∈ *∂*W and Equation (5), we find

$$\|u\|_* = \left(\frac{2q}{(q-2)\kappa}d\right)^{\frac{1}{2}}.$$

By arguments similar to those in the proof of (i), it is easy to see that $\kappa\|u\|_*^2 \ge \|u\|_q^q$.

The main results of this paper are proven in Sections 4 and 5.

#### **3. Methods**

The potential well was first proposed by Sattinger [27] in order to study the global existence of solutions to a nonlinear hyperbolic equation. Subsequently, it was widely employed to analyze the qualitative properties of the solutions to evolution equations (see, for example, [18,28–39] and the references therein), and it has now developed into a theoretical system.

In general, by the energy functional *J*(*u*) and the Nehari functional *I*(*u*), the classical potential well can usually be defined by

$$\mathcal{W} = \{ u \mid J(u) < d,\ I(u) > 0 \} \cup \{0\}.$$

The critical points of *J*(*u*) are stationary solutions of the problem under consideration. Under appropriate assumptions, *J*(*u*) satisfies the Palais–Smale condition, and the problem under consideration admits at least a positive stationary solution whose energy *d*, namely the depth of the potential well, can be defined by

$$d = \inf_{u \in \mathcal{N}} J(u),$$

where the Nehari manifold is

$$\mathcal{N} = \{ u \mid I(u) = 0 \} \setminus \{0\}.$$

In the present paper, we describe the potential well as a ball (see Equation (4)) whose radius is expressed in terms of $d$ (see Equation (6)). Thus, the spatial structure of the potential well is clearer, and it is not necessary to introduce $I(u)$ and $\mathcal{N}$. As for the original definition and calculation process of $d$, we refer interested readers to [20].
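
To make this geometry concrete, the well depth in Equation (6) and the well radius in Equation (4) can be computed directly. The sketch below is illustrative only (the parameter values in the example are hypothetical); note that the radius simplifies algebraically to $\kappa^{1/(q-2)}\mathfrak{C}_1^{-q/(q-2)}$, the bound appearing in the proof of Lemma 1:

```python
def well_depth(kappa, q, C1):
    """Depth d of the potential well, Equation (6)."""
    return (q - 2) / (2 * q) * kappa ** (q / (q - 2)) * C1 ** (-2 * q / (q - 2))

def well_radius(kappa, q, C1):
    """Radius of the ball W in Equation (4): (2q d / ((q - 2) kappa))^(1/2)."""
    return (2 * q / ((q - 2) * kappa) * well_depth(kappa, q, C1)) ** 0.5
```

For instance, with $\kappa = 1$, $q = 4$, and $\mathfrak{C}_1 = 1$, one finds $d = 1/4$ and radius $1 = \kappa^{1/2}\mathfrak{C}_1^{-2}$.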

#### **4. Global Existence of Solutions**

**Theorem 1.** *Let $(\mathbf{A}_1)$ and $(\mathbf{A}_2)$ be fulfilled. Assume that $u_0 \in \mathcal{W}$, $u_1 \in L^2(\Omega)$, and $E(0) < d$. Then, Equations* (1)*–*(3) *admit a global solution $u(t) \in \overline{\mathcal{W}} := \mathcal{W} \cup \partial\mathcal{W}$ for all $t \in (0,\infty)$.*

**Proof.** Let $\{\omega_j\}_{j=1}^\infty$ be an orthogonal basis of $X_0$ and an orthonormal basis of $L^2(\Omega)$ given by the eigenfunctions of $(-\Delta)^m$ with the boundary condition in Equation (3) (see [24] (Proposition 9) for details). Denote $W_n = \operatorname{Span}\{\omega_1, \omega_2, \cdots, \omega_n\}$, $n = 1, 2, \cdots$. We seek the approximate solutions to Equations (1)–(3), given by

$$u_n(t) = \sum_{j=1}^n \xi_{jn}(t)\omega_j, \quad n = 1, 2, \cdots, \tag{7}$$

which satisfy

$$\begin{aligned} &(u_{ntt}(t), w) + h(\|u_n(t)\|_*^2)(u_n(t), w)_* - \int_0^t g(t-\tau)(u_n(\tau), w)_*\,\mathrm{d}\tau \\ &+ (u_{nt}(t), w) = (f(u_n(t)), w), \quad t > 0, \end{aligned} \tag{8}$$

$$u_n(0) = \sum_{j=1}^n \xi_{jn}(0)\omega_j \to u_0 \text{ in } X_0, \tag{9}$$

$$u_{nt}(0) = \sum_{j=1}^n \xi'_{jn}(0)\omega_j \to u_1 \text{ in } L^2(\Omega), \tag{10}$$

for any $w \in W_n$. Let $\xi_n(t) = (\xi_{1n}(t), \xi_{2n}(t), \cdots, \xi_{nn}(t))^T$. Then, the vector function $\xi_n$ solves

$$\xi_n''(t) + \xi_n'(t) + \mathcal{L}_n(t, \xi_n(t)) = \mathcal{F}_n(\xi_n(t)), \quad t > 0, \tag{11}$$

$$\xi_n(0) = ((u_0, \omega_1), (u_0, \omega_2), \dots, (u_0, \omega_n))^T, \tag{12}$$

$$\xi_n'(0) = ((u_1, \omega_1), (u_1, \omega_2), \dots, (u_1, \omega_n))^T, \tag{13}$$

where

$$\mathcal{L}_n(t, \xi_n(t)) = (\mathcal{L}_{1n}(t, \xi_n(t)), \mathcal{L}_{2n}(t, \xi_n(t)), \dots, \mathcal{L}_{nn}(t, \xi_n(t)))^T,$$

$$\begin{split} \mathcal{L}_{in}(t, \xi_n(t)) &= h\left(\Bigg\|\sum_{j=1}^n \xi_{jn}(t)\omega_j\Bigg\|_*^2\right)\left(\sum_{j=1}^n \xi_{jn}(t)\omega_j, \omega_i\right)_* \\ &\quad - \int_0^t g(t-\tau)\left(\sum_{j=1}^n \xi_{jn}(\tau)\omega_j, \omega_i\right)_* \mathrm{d}\tau, \\ \mathcal{F}_n(\xi_n(t)) &= (\mathcal{F}_{1n}(\xi_n(t)), \mathcal{F}_{2n}(\xi_n(t)), \cdots, \mathcal{F}_{nn}(\xi_n(t)))^T, \\ \mathcal{F}_{in}(\xi_n(t)) &= \left(f\Bigg(\sum_{j=1}^n \xi_{jn}(t)\omega_j\Bigg), \omega_i\right). \end{split}$$

In terms of the standard theory for ODEs, the Cauchy problem in Equations (11)–(13) admits a solution $\xi_n \in C^2[0, T_n)$ with $T_n \le T$. In turn, this gives a solution $u_n(t)$ defined by Equation (7) and satisfying Equations (8)–(10). The following estimates will allow us to extend the local solution to $[0, T]$ for any $T > 0$.
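
The structure of the Cauchy problem in Equations (11)–(13) can also be illustrated numerically. The sketch below is a hypothetical scalar analogue ($n = 1$), not the paper's scheme: for an exponentially decaying kernel $g(t) = g_0 e^{-t}$, the memory term $M(t) = \int_0^t g(t-\tau)\lambda\xi(\tau)\,\mathrm{d}\tau$ satisfies $M' = -M + g_0\lambda\xi$, so the integro-differential equation reduces to a three-dimensional ODE system, here integrated by explicit Euler. The forms of $h$ and the source term, and all parameter values, are assumptions:

```python
def simulate(xi0=0.1, v0=0.0, g0=0.5, lam=1.0, q=3.0, dt=1e-3, steps=5000):
    """Toy scalar analogue of the Galerkin system (11):
    xi'' + xi' + h(lam*xi^2)*lam*xi - M = |xi|^(q-2)*xi,  M' = -M + g0*lam*xi."""
    h = lambda s: 1.0 + s  # assumed Kirchhoff function (non-degenerate: h >= 1)
    xi, v, mem = xi0, v0, 0.0
    for _ in range(steps):
        acc = -v - h(lam * xi * xi) * lam * xi + mem + abs(xi) ** (q - 2) * xi
        # explicit Euler update of (xi, xi', M) using the old state on the right
        xi, v, mem = xi + dt * v, v + dt * acc, mem + dt * (-mem + g0 * lam * xi)
    return xi, v
```

For small initial data, the damping term dominates and the trajectory decays, consistent with the global bounds derived below.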

By using *w* = *unt*(*t*) in Equation (8), we obtain

$$\begin{split} &\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{1}{2}\|u_{nt}(t)\|_2^2 + \frac{1}{2}\|u_n(t)\|_*^2 + \frac{1}{2p}\|u_n(t)\|_*^{2p}\right) - \int_0^t g(t-\tau)(u_n(\tau), u_{nt}(t))_*\,\mathrm{d}\tau \\ &+ \|u_{nt}(t)\|_2^2 = \frac{1}{q}\frac{\mathrm{d}}{\mathrm{d}t}\|u_n(t)\|_q^q. \end{split} \tag{14}$$

Note that

$$\begin{split} &\int_0^t g(t-\tau)(u_n(\tau), u_{nt}(t))_*\,\mathrm{d}\tau \\ &= \int_0^t g(t-\tau)(u_n(\tau) - u_n(t), u_{nt}(t))_*\,\mathrm{d}\tau + \int_0^t g(t-\tau)(u_n(t), u_{nt}(t))_*\,\mathrm{d}\tau \\ &= -\frac{1}{2}\int_0^t g(t-\tau)\frac{\mathrm{d}}{\mathrm{d}t}\|u_n(\tau) - u_n(t)\|_*^2\,\mathrm{d}\tau + \frac{1}{2}\int_0^t g(t-\tau)\frac{\mathrm{d}}{\mathrm{d}t}\|u_n(t)\|_*^2\,\mathrm{d}\tau \\ &= -\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left((g \circ u_n)(t) - \int_0^t g(\tau)\,\mathrm{d}\tau\,\|u_n(t)\|_*^2\right) + \frac{1}{2}(g' \circ u_n)(t) - \frac{1}{2}g(t)\|u_n(t)\|_*^2. \end{split}$$

By substituting this equality into Equation (14) and integrating it with respect to *t*, we deduce that

$$E\_n(t) + \int\_0^t \left( \|u\_{n\tau}(\tau)\|^2\_2 - \frac{1}{2} (\mathcal{g'} \circ u\_n)(\tau) + \frac{1}{2} \mathcal{g}(\tau) \|u\_n(\tau)\|^2\_\* \right) d\tau = E\_n(0) \tag{15}$$

for all *t* ∈ [0, *T*], where

$$\begin{split} E\_{\boldsymbol{n}}(t) &= \frac{1}{2} \|\boldsymbol{u}\_{\boldsymbol{n}t}(t)\|\_{2}^{2} + \frac{1}{2p} \|\boldsymbol{u}\_{\boldsymbol{n}}(t)\|\_{\*}^{2p} + \frac{1}{2} \Big(1 - \int\_{0}^{t} \boldsymbol{g}(\tau) \, d\tau\Big) \|\boldsymbol{u}\_{\boldsymbol{n}}(t)\|\_{\*}^{2} \\ &+ \frac{1}{2} (\boldsymbol{g} \circ \boldsymbol{u}\_{\boldsymbol{n}})(t) - \frac{1}{q} \|\boldsymbol{u}\_{\boldsymbol{n}}(t)\|\_{q}^{q}. \end{split} \tag{16}$$

In light of Equations (9) and (10), we infer that *En*(0) < *d* and *un*(0) ∈ W for a sufficiently large *n*. We now claim that

$$
u_n(t) \in \mathcal{W} \tag{17}
$$

for all $t \in [0, T]$ and a sufficiently large $n$. Suppose that $u_n(t) \notin \mathcal{W}$ for some $0 < t < T$. Then, there exists a time $0 < t_0 < T$ such that $u_n(t_0) \in \partial\mathcal{W}$ and $u_n(t) \in \mathcal{W}$ for all $t \in [0, t_0)$. Hence, we obtain

$$\|u_n(t_0)\|_* = \left(\frac{2q}{(q-2)\kappa}d\right)^{\frac{1}{2}}.$$

Through Equation (16) and (ii) in Lemma 1, we obtain

$$\begin{aligned} E_n(t_0) &\ge \frac{1}{2}\kappa\|u_n(t_0)\|_*^2 - \frac{1}{q}\|u_n(t_0)\|_q^q \\ &= \frac{q-2}{2q}\kappa\|u_n(t_0)\|_*^2 + \frac{1}{q}\left(\kappa\|u_n(t_0)\|_*^2 - \|u_n(t_0)\|_q^q\right) \\ &\ge \frac{q-2}{2q}\kappa\|u_n(t_0)\|_*^2 \\ &= d, \end{aligned}$$

which contradicts *En*(0) < *d* according to Equation (15).

From Equation (16), the assertion in Equation (17), and (i) in Lemma 1, it follows that

$$\begin{split} E_n(t) &\ge \frac{1}{2}\|u_{nt}(t)\|_2^2 + \frac{1}{2}\kappa\|u_n(t)\|_*^2 - \frac{1}{q}\|u_n(t)\|_q^q \\ &= \frac{1}{2}\|u_{nt}(t)\|_2^2 + \frac{q-2}{2q}\kappa\|u_n(t)\|_*^2 + \frac{1}{q}\left(\kappa\|u_n(t)\|_*^2 - \|u_n(t)\|_q^q\right) \\ &\ge \frac{1}{2}\|u_{nt}(t)\|_2^2 + \frac{q-2}{2q}\kappa\|u_n(t)\|_*^2, \end{split} \tag{18}$$

which, together with Equation (15), gives

$$\frac{1}{2}\|u_{nt}(t)\|_2^2 + \frac{q-2}{2q}\kappa\|u_n(t)\|_*^2 < d$$

for all *t* ∈ [0, *T*]. Thus, for all *t* ∈ [0, *T*], we find

$$\|u_{nt}(t)\|_2^2 < 2d$$

and

$$\|u_n(t)\|_*^2 < \frac{2q}{(q-2)\kappa}d. \tag{19}$$

Furthermore, we deduce from Equation (19) that

$$\|f(u_n(t))\|_r^r = \|u_n(t)\|_q^q \le \mathfrak{C}_1^q\|u_n(t)\|_*^q < \mathfrak{C}_1^q\left(\frac{2q}{(q-2)\kappa}d\right)^{\frac{q}{2}}$$

for all $t \in [0, T]$, where $r = \frac{q}{q-1}$. The above estimates mean the following:

$\{u_n\}$ is bounded in $L^\infty(0, T; X_0)$,

$\{u_{nt}\}$ is bounded in $L^\infty(0, T; L^2(\Omega))$, and

$\{f(u_n)\}$ is bounded in $L^\infty(0, T; L^r(\Omega))$.

Therefore, there exist *u*, *χ*, and a subsequence of {*un*}, still denoted by {*un*}, such that as *n* → ∞, the following are true:

$$u_n \rightharpoonup u \text{ weakly star in } L^\infty(0, T; X_0), \tag{20}$$

$$u_{nt} \rightharpoonup u_t \text{ weakly star in } L^\infty(0, T; L^2(\Omega)), \tag{21}$$

$$f(u_n) \rightharpoonup \chi \text{ weakly star in } L^\infty(0, T; L^r(\Omega)).$$

Thus, by the compactness of the embedding $X_0 \hookrightarrow L^2(\Omega)$ and the Aubin–Lions lemma, up to a further subsequence, we have the following:

$$u_n \to u \text{ in } L^2(0, T; L^2(\Omega)) \text{ and a.e. in } \Omega \times [0, T].$$

In terms of [32] (Chapter 1, Lemma 1.3), we have *χ* = *f*(*u*). Integrating Equation (8) with respect to *t* yields

$$\begin{split} &(u_{nt}(t), w) + \int_0^t h(\|u_n(\tau)\|_*^2)(u_n(\tau), w)_*\,\mathrm{d}\tau - \int_0^t\!\int_0^s g(s-\tau)(u_n(\tau), w)_*\,\mathrm{d}\tau\,\mathrm{d}s \\ &+ (u_n(t), w) = (u_{nt}(0), w) + (u_n(0), w) + \int_0^t (f(u_n(\tau)), w)\,\mathrm{d}\tau. \end{split}$$

Letting $n \to \infty$, we further obtain

$$\begin{aligned} &(u_t(t), w) + \int_0^t h(\|u(\tau)\|_*^2)(u(\tau), w)_*\,\mathrm{d}\tau - \int_0^t\!\int_0^s g(s-\tau)(u(\tau), w)_*\,\mathrm{d}\tau\,\mathrm{d}s \\ &+ (u(t), w) = (u_1, w) + (u_0, w) + \int_0^t (f(u(\tau)), w)\,\mathrm{d}\tau. \end{aligned}$$

By virtue of Equations (9) and (10), we have *u*(0) = *u*<sup>0</sup> in *X*<sup>0</sup> and *ut*(0) = *u*<sup>1</sup> in *L*2(Ω). Therefore, *u* is a global solution to Equations (1)–(3). In addition, from Equation (20), we have

$$\|u(t)\|\_{\*} \le \liminf\_{n \to \infty} \|u\_n(t)\|\_{\*},$$

which, together with Equation (19), tells us that

$$\|u(t)\|_* \le \left(\frac{2q}{(q-2)\kappa}d\right)^{\frac{1}{2}}.$$

In other words, $u(t) \in \overline{\mathcal{W}}$ for all $t \in (0, \infty)$.

#### **5. Asymptotic Behavior of the Solutions**

**Theorem 2.** *In addition to all the assumptions of Theorem 1, suppose that there exists a constant $\rho > 0$ such that $g'(t) \le -\rho g(t)$ for all $t \in [0, \infty)$. Then, we have*

$$\|u(t)\|_*^2 + \|u_t(t)\|_2^2 \le \alpha e^{-\beta t}, \quad \forall t \in [0, \infty),$$

*for some constants α*, *β* > 0*.*

**Proof.** For the approximate solutions given in the proof of Theorem 1, we construct

$$L(t) = E\_n(t) + \varepsilon \Psi(t), \ \forall t \in [0, \infty), \tag{22}$$

where $\Psi(t) = (u_n(t), u_{nt}(t))$ and $\varepsilon > 0$ is a constant to be determined later.

We now claim that there exist two constants *γ<sup>i</sup>* > 0 (*i* = 1, 2), depending on *ε*, such that

$$
\gamma\_1 E\_n(t) \le L(t) \le \gamma\_2 E\_n(t), \quad \forall t \in [0, \infty). \tag{23}
$$

Indeed, by virtue of Cauchy's inequality, we find

$$|\Psi(t)| \le \frac{1}{2}\|u_n(t)\|_2^2 + \frac{1}{2}\|u_{nt}(t)\|_2^2,$$

and thus

$$|\Psi(t)| \le \frac{\mathfrak{C}_2^2}{2}\|u_n(t)\|_*^2 + \frac{1}{2}\|u_{nt}(t)\|_2^2, \tag{24}$$

where $\mathfrak{C}_2$ is the best Sobolev constant for the embedding $X_0 \hookrightarrow L^2(\Omega)$. By combining Equations (24) and (18), we obtain $|\Psi(t)| \le C_1 E_n(t)$ for some constant $C_1 > 0$ independent of $n$, which, together with Equation (22), yields that the assertion in Equation (23) holds. Moreover, differentiating Equation (15) gives

$$E_n'(t) = \frac{1}{2}(g' \circ u_n)(t) - \frac{1}{2}g(t)\|u_n(t)\|_*^2 - \|u_{nt}(t)\|_2^2.$$

Then, a direct calculation gives

$$\begin{split} L'(t) &= \frac{1}{2}(g' \circ u_n)(t) - \frac{1}{2}g(t)\|u_n(t)\|_*^2 - \|u_{nt}(t)\|_2^2 + \varepsilon\|u_{nt}(t)\|_2^2 \\ &\quad - \varepsilon\|u_n(t)\|_*^{2p} - \varepsilon\|u_n(t)\|_*^2 + \varepsilon\int_0^t g(t-\tau)(u_n(\tau), u_n(t))_*\,\mathrm{d}\tau \\ &\quad - \varepsilon(u_n(t), u_{nt}(t)) + \varepsilon\|u_n(t)\|_q^q. \end{split} \tag{25}$$

For the seventh term on the right side of Equation (25), it follows from Schwarz's inequality and Cauchy's inequality with $\epsilon_1 > 0$ that

$$\begin{split} &\int_0^t g(t-\tau)(u_n(\tau), u_n(t))_*\,\mathrm{d}\tau \\ &= \int_0^t g(t-\tau)\|u_n(t)\|_*^2\,\mathrm{d}\tau + \int_0^t g(t-\tau)(u_n(\tau) - u_n(t), u_n(t))_*\,\mathrm{d}\tau \\ &\le \int_0^t g(\tau)\,\mathrm{d}\tau\,\|u_n(t)\|_*^2 + \epsilon_1\int_0^t g(\tau)\,\mathrm{d}\tau\,\|u_n(t)\|_*^2 + \frac{1}{4\epsilon_1}(g \circ u_n)(t) \\ &\le (1-\kappa)\|u_n(t)\|_*^2 + \epsilon_1(1-\kappa)\|u_n(t)\|_*^2 + \frac{1}{4\epsilon_1}(g \circ u_n)(t). \end{split}$$

For the eighth term on the right side of Equation (25), it follows from Cauchy's inequality with $\epsilon_2 > 0$ that

$$\begin{aligned} -(u_n(t), u_{nt}(t)) &\le \epsilon_2\|u_n(t)\|_2^2 + \frac{1}{4\epsilon_2}\|u_{nt}(t)\|_2^2 \\ &\le \epsilon_2\mathfrak{C}_2^2\|u_n(t)\|_*^2 + \frac{1}{4\epsilon_2}\|u_{nt}(t)\|_2^2. \end{aligned}$$

Hence, we have

$$\begin{split} L'(t) \le & \left(\varepsilon + \frac{\varepsilon}{4\epsilon_2} - 1\right)\|u_{nt}(t)\|_2^2 - \varepsilon\|u_n(t)\|_*^{2p} \\ &+ \varepsilon\left(\epsilon_1(1-\kappa) + \epsilon_2\mathfrak{C}_2^2 - \kappa\right)\|u_n(t)\|_*^2 \\ &+ \left(\frac{\varepsilon}{4\epsilon_1} - \frac{\rho}{2}\right)(g \circ u_n)(t) + \varepsilon\|u_n(t)\|_q^q, \end{split}$$

and so

$$\begin{split} L'(t) \le & -\varepsilon\eta E_n(t) + \left(\varepsilon + \frac{\varepsilon}{4\epsilon_2} + \frac{\varepsilon\eta}{2} - 1\right)\|u_{nt}(t)\|_2^2 + \varepsilon\left(\frac{\eta}{2p} - 1\right)\|u_n(t)\|_*^{2p} \\ &+ \varepsilon\left(\epsilon_1(1-\kappa) + \epsilon_2\mathfrak{C}_2^2 + \frac{\eta}{2} - \kappa\right)\|u_n(t)\|_*^2 \\ &+ \left(\frac{\varepsilon}{4\epsilon_1} + \frac{\varepsilon\eta}{2} - \frac{\rho}{2}\right)(g \circ u_n)(t) + \varepsilon\|u_n(t)\|_q^q - \frac{\varepsilon\eta}{q}\|u_n(t)\|_q^q, \end{split} \tag{26}$$

where *η* > 0 is a constant to be determined later. It follows from Equations (15) and (18) that

$$E_n(0) \ge \frac{q-2}{2q}\kappa\|u_n(t)\|_*^2,$$

which leads to

$$||u\_n(t)||\_\* \le \left(\frac{2q}{(q-2)\kappa} E\_n(0)\right)^{\frac{1}{2}}.$$

Hence, we have

$$\|u_n(t)\|_q^q \le \mathfrak{C}_1^q\|u_n(t)\|_*^{q-2}\|u_n(t)\|_*^2 \le \mathfrak{C}_1^q\left(\frac{2q}{(q-2)\kappa}E_n(0)\right)^{\frac{q-2}{2}}\|u_n(t)\|_*^2.$$

By substituting this inequality into Equation (26), we obtain

$$\begin{split} L'(t) &\leq -\varepsilon \eta E\_{n}(t) + \left(\varepsilon + \frac{\varepsilon}{4\epsilon\_{2}} + \frac{\varepsilon\eta}{2} - 1\right) \|u\_{nt}(t)\|\_{2}^{2} + \varepsilon \left(\frac{\eta}{2p} - 1\right) \|u\_{n}(t)\|\_{\*}^{2p} \\ &\quad + \varepsilon \left(\epsilon\_{1}(1-\kappa) + \epsilon\_{2}\mathfrak{C}\_{2}^{2} + \frac{\eta}{2} + \mathfrak{C}\_{1}^{q} \left(\frac{2q}{(q-2)\kappa}E\_{n}(0)\right)^{\frac{q-2}{2}} - \kappa\right) \|u\_{n}(t)\|\_{\*}^{2} \\ &\quad + \left(\frac{\varepsilon}{4\epsilon\_{1}} + \frac{\varepsilon\eta}{2} - \frac{\rho}{2}\right) (g \circ u\_{n})(t). \end{split}$$

Note that

$$\mathfrak{C}\_1^q\left(\frac{2q}{(q-2)\kappa}E\_n(0)\right)^{\frac{q-2}{2}} < \mathfrak{C}\_1^q\left(\frac{2q}{(q-2)\kappa}d\right)^{\frac{q-2}{2}} = \kappa.$$
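The identity above can be confirmed by direct computation; the snippet below (illustrative only, with arbitrary admissible parameter values) substitutes the expression for $d$ from Equation (6) and recovers $\kappa$:

```python
def kappa_identity(kappa, q, C1):
    """Evaluates C1^q * (2q/((q-2)*kappa) * d)^((q-2)/2) with d from Eq. (6);
    algebraically this expression collapses to kappa."""
    d = (q - 2) / (2 * q) * kappa ** (q / (q - 2)) * C1 ** (-2 * q / (q - 2))
    return C1 ** q * (2 * q / ((q - 2) * kappa) * d) ** ((q - 2) / 2)
```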

We choose sufficiently small $\epsilon_i$ ($i = 1, 2$) and $\eta$ such that $\eta < 2p$ and

$$
\epsilon_1(1-\kappa) + \epsilon_2\mathfrak{C}_2^2 + \frac{\eta}{2} + \mathfrak{C}_1^q\left(\frac{2q}{(q-2)\kappa}E_n(0)\right)^{\frac{q-2}{2}} - \kappa \le 0.
$$

Thus, for fixed $\epsilon_i$ ($i = 1, 2$) and $\eta$, we can choose

$$\varepsilon < \min\left\{\frac{1}{C_1}, \frac{4\epsilon_2}{4\epsilon_2 + 1 + 2\eta\epsilon_2}, \frac{2\rho\epsilon_1}{1 + 2\eta\epsilon_1}\right\}$$

such that $L'(t) \le -\varepsilon\eta E_n(t)$, which, together with the second inequality in the assertion in Equation (23), gives $L'(t) \le -\frac{\varepsilon\eta}{\gamma_2}L(t)$. Hence, there exists a constant $C_2 > 0$ independent of $n$ such that

$$L(t) \le C_2 e^{-\frac{\varepsilon\eta}{\gamma_2}t}, \quad \forall t \in [0, \infty).$$

We further conclude from the first inequality in the assertion in Equation (23) that

$$E\_n(t) \le \frac{C\_2}{\gamma\_1} e^{-\frac{\epsilon \eta}{\gamma\_2} t}, \quad \forall t \in [0, \infty). \tag{27}$$

From Equations (20) and (21), it follows that

$$\|u(t)\|_*^2 + \|u_t(t)\|_2^2 \le \liminf_{n \to \infty}\left(\|u_n(t)\|_*^2 + \|u_{nt}(t)\|_2^2\right),$$

which, combined with Equations (18) and (27), gives the conclusion of Theorem 2.

#### **6. Conclusions**

In this paper, we studied the initial boundary value problem for a fractional viscoelastic equation of the Kirchhoff type. In the framework of the potential well theory, we established the global existence theorem, specifically Theorem 1. Under appropriate assumptions on the exponent of the source term and the memory kernel, it has been shown that if the initial data $u_0$ lies in the potential well and the initial energy is less than the depth of the potential well, then the initial boundary value problem admits a global solution that lies in the closure of the potential well. Moreover, we have established the asymptotic behavior theorem, specifically Theorem 2. It is shown that, as time tends toward infinity, the norm of the solutions in the phase space decays exponentially to zero at the same rate as the memory kernel. In light of the applications, once the initial data and the external force are effectively controlled, the vibration of the string with a fractional length and appropriate viscoelasticity will be stable. In this regard, the methods in [40] may be helpful.

**Author Contributions:** Investigation, Y.L. and L.Z.; Methodology, Y.L. and L.Z.; Project administration, Y.L.; Validation, Y.L.; Writing–original draft, Y.L. and L.Z.; Writing–review and editing, Y.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Fundamental Research Funds for the Central Universities (Grant No. 31920220062), the Science and Technology Plan Project of Gansu Province in China (Grant No. 21JR1RA200), the Talent Introduction Research Project of Northwest Minzu University (Grant No. xbmuyjrc2021008), and the Key Laboratory of China's Ethnic Languages and Information Technology of the Ministry of Education at Northwest Minzu University.

**Data Availability Statement:** Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Analytical Solutions of the Nonlinear Time-Fractional Coupled Boussinesq-Burger Equations Using Laplace Residual Power Series Technique**

**Aref Sarhan 1, Aliaa Burqan 2, Rania Saadeh <sup>1</sup> and Zeyad Al-Zhour 2,\***


**Abstract:** In this paper, we present the series solutions of the nonlinear time-fractional coupled Boussinesq-Burger equations (T-FCB-BEs) using the Laplace-residual power series (L-RPS) technique in the sense of the Caputo fractional derivative (C-FD). To demonstrate the efficiency, simplicity, performance, and reliability of our proposed method, an attractive and interesting numerical example is tested analytically and graphically. In addition, our obtained results show that this algorithm is compatible and accurate for investigating the fractional-order solutions of engineering and physical applications. Finally, Mathematica 14 software is applied to compute the numerical and graphical results.

**Keywords:** Caputo operator; Coupled Boussinesq-Burger equation; Laplace transform (LT); residual power series (RPS) method

#### **1. Introduction**

In the past twenty years, partial fractional differential equations (P-FDEs) have attracted considerable attention due to their various applications in several fields of science such as fluid and layer flows, multi-energy groups of neutron diffusion processes, neutral and multi-pantograph systems, dynamic and hyperbolic systems, statistical mechanics models, material sciences, and engineering [1–20]. These important phenomena and applications are well described by P-FDEs. The nonlocal property is the most significant advantage of using P-FDEs in diverse mathematical modeling.

The main advantage of using fractional derivatives of arbitrary order is that they are more flexible than classical derivatives and that they are nonlocal. The two most famous and important fractional derivatives in applications are the Riemann-Liouville FD (R-L-FD) and the C-FD [1–20]. The relationship between the R-L-FD and the C-FD is very close, since the R-L-FD can be converted to the C-FD under some regularity assumptions on the function. In P-FDEs, time-fractional derivatives are commonly defined using the C-FD. The main reason is that P-FDEs in the R-L sense need initial conditions containing the limit values of the R-L-FD at the origin of time $t = 0$, whose physical meanings are not very clear, while in P-FDEs via the C-FD, the initial conditions are given in integer orders, whose physical meanings are very clear [9–17].

In most cases, exact solutions do not exist for many partial differential equations (PDEs); therefore, several numerical methods have been created and applied to obtain approximate series solutions of such P-FDEs, such as the homotopy analysis, asymptotic, and perturbation methods [1–3,5]; the variational iteration and Adomian decomposition methods [2,4,8]; LT and differential transform techniques; the RPS method [9–14]; and the L-RPS method [15–19].

In 1870, Boussinesq [21] introduced the Boussinesq equation to describe the motion of waves in shallow water, and it was subsequently used for many wave phenomena in physics and engineering [22–27]. In 1915, Bateman presented Burgers' equation [28], which describes

**Citation:** Sarhan, A.; Burqan, A.; Saadeh, R.; Al-Zhour, Z. Analytical Solutions of the Nonlinear Time-Fractional Coupled Boussinesq-Burger Equations Using Laplace Residual Power Series Technique. *Fractal Fract.* **2022**, *6*, 631. https://doi.org/10.3390/ fractalfract6110631

Academic Editors: Libo Feng, Yang Liu, Lin Liu and Riccardo Caponetto

Received: 21 September 2022 Accepted: 15 October 2022 Published: 29 October 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

several phenomena in physics and engineering such as acoustic and shock waves [29], stochastic processes [30], and gas dynamics [31–33]. There are several techniques and methods [8,11,34–41] were applied by researchers to obtain the approximate solutions to Burger's equations. One of the most interesting mathematical models is the (generalized) Boussinesq-Burger's equation (B-BEs) which describes the propagation waves of shallow water in the behaviors of fluids flow [3,42–46]. This equation was solved analytically and numerically by different techniques. For example, Gupta et al. [3] obtained the soliton solutions of B-BEs based on the optimal homotopy perturbation and asymptotic methods; Rady and Khalfallah [45] presented the periodic wave and multiple soliton solutions for B-BEs by applying Jacobi elliptic method; Wang et al. [46] presented type of solutions and interaction behaviors of the solitons and Lax pair for the B-BEs; Zhang et al. [44] introduced some new solutions of the generalized B-BEs using the modified mapping method; and Chen and Li [43] established some new soliton solutions of soliton B-BEs by applying Darboux transformation.

The well-known nonlinear time T-CB-BEs are given by [46–48]:

$$\begin{aligned} u_t(x,t) + 2u(x,t)u_x(x,t) - \frac{1}{2}w_x(x,t) &= 0, \\ w_t(x,t) - \frac{1}{2}u_{xxx}(x,t) + 2(u(x,t)w(x,t))_x &= 0, \end{aligned}$$

where *x* is the normalized space, *t* is the time, *u*(*x*, *t*) is the horizontal velocity and *w*(*x*, *t*) is the height of the water surface above the horizontal level.

Some methods were used to solve this coupled such as Lax pair and Bäcklund transformation technique [46], exp-function method [47] and reduced differential transform method [48]. Finally, the generalized T-FCB-BEs can be formulated as [49,50]:

$$\begin{aligned} \mathfrak{D}_t^{\beta}u(x,t) + w_x(x,t) + u(x,t)u_x(x,t) &= 0, \\ \mathfrak{D}_t^{\beta}w(x,t) + (u(x,t)w(x,t))_x + u_{xxx}(x,t) &= 0, \end{aligned} \tag{1}$$

subject to:

$$u(x,0) = f(x), \quad w(x,0) = g(x), \tag{2}$$

where 0 < *β* ≤ 1, *x* ∈ *I*, *t* ≥ 0, *f*(*x*) and *g*(*x*) are analytic functions, and *u*(*x*, *t*), *w*(*x*, *t*) are unknown real-valued functions to be determined.

#### **2. Materials and Methods**

A few methods have been used to solve this system, such as the fractional decomposition method with the Caputo fractional derivative [49] and the first integral method with the Riemann-Liouville fractional and local conformable derivatives [50].

The main aim of our work is to employ the L-RPS method to obtain fractional-order series solutions to the T-FCB-BEs in Equations (1) and (2). The proposed method is a new, efficient method that provides the solution as a rapidly convergent series, which can yield the solution in closed form. The L-RPS method combines two powerful methods (the Laplace transform and the RPS method) to obtain the series solution of a system of F-PDEs. In the L-RPS method, fewer calculations are needed to determine the series coefficients than in the RPS method, since they are determined by employing the concept of the limit rather than the fractional derivative as in the RPS technique. The methodology of our proposed method (L-RPS) is introduced in detail in Section 4. Mathematica software 14 is used to compute the numerical and graphical results.

The novelty of this work lies in the method chosen to solve the target problem. The L-RPS method is a strong method that provides the solution as a rapidly convergent series, and we illustrate in the results that not many terms are needed to obtain a good approximate solution. Moreover, this method does not need linearization, discretization, or fractional differentiation, unlike other numerical methods.

The rest of this paper is arranged as follows: basic definitions and the basic idea of the L-RPS method, with convergence analysis, are introduced in Section 3. The methodology of the proposed method is explained in Section 4. An attractive application with graphical results is given and discussed in Section 5 to confirm the efficiency and reliability of our technique. Finally, Section 6 concludes the paper.

#### **3. Basic Concepts on Fractional and Laplace Operators**

This section reviews some definitions and theorems for the fractional operators and the LT [1–19] which are essential in constructing the L-RPS solutions for the nonlinear T-FCB-BEs as in Equations (1) and (2).

**Definition 1.** *The C-FD of u*(*x*, *t*) *of order β* > 0 *is defined as:*

$$\mathfrak{D}_t^{\beta}u(x,t) = J_t^{m-\beta}\mathfrak{D}_t^{m}u(x,t), \quad m-1 < \beta < m, \; m \in \mathbb{N}, \; x \in K, \; t > 0,$$

*where K is a given interval and*

$$J_t^{\beta}u(x,t) = \begin{cases} \dfrac{1}{\Gamma(\beta)}\displaystyle\int_0^t (t-\tau)^{\beta-1}u(x,\tau)\,d\tau, & \beta > 0, \\[2mm] u(x,t), & \beta = 0, \end{cases}$$

*is the time R-L fractional integral of order β* > 0.
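As a quick numerical sanity check of Definition 1 (ours, not part of the original paper), the following sketch evaluates the C-FD of $u(t) = t^{\mu}$ directly from the R-L integral definition with $m = 1$ and compares it against the standard closed form $\mathfrak{D}_t^{\beta}t^{\mu} = \frac{\Gamma(\mu+1)}{\Gamma(\mu+1-\beta)}t^{\mu-\beta}$. The singularity-removing substitution and the helper name `caputo_power` are our own choices:

```python
from math import gamma

def caputo_power(beta, mu, t, n=2000):
    """Caputo derivative of u(t) = t**mu for 0 < beta < 1, mu >= 1,
    computed from the definition D^beta u = J^(1-beta) u'.
    The weak singularity (t - tau)**(-beta) is removed with the
    substitution t - tau = y**(1/(1-beta)), after which the integrand
    is regular and composite Simpson's rule applies."""
    p = 1.0 / (1.0 - beta)
    b = t ** (1.0 - beta)              # upper limit in the y variable
    h = b / n

    def smooth(y):                     # integrand after the substitution
        tau = t - y ** p
        return mu * tau ** (mu - 1.0)  # this is u'(tau)

    s = smooth(0.0) + smooth(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * smooth(i * h)
    return p * (h / 3.0) * s / gamma(1.0 - beta)

beta, mu, t = 0.5, 2.0, 1.3
exact = gamma(mu + 1.0) / gamma(mu + 1.0 - beta) * t ** (mu - beta)
print(abs(caputo_power(beta, mu, t) - exact) < 1e-8)
```

The substitution $t - \tau = y^{1/(1-\beta)}$ turns the weight $(t-\tau)^{-\beta}\,d\tau$ into a constant multiple of $dy$, which is why a plain Simpson rule suffices.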

Most important and useful properties of fractional operators can be summarized as below [1–19]:

**Lemma 1.** *For <sup>μ</sup>* > −1, *<sup>c</sup>* ∈ R, *<sup>m</sup>* − <sup>1</sup> < *<sup>β</sup>* ≤ *m, and t* ≥ 0, *we have:*

$$\mathfrak{D}_t^{\beta}c = 0, \qquad \mathfrak{D}_t^{\beta}t^{\mu} = \frac{\Gamma(\mu+1)}{\Gamma(\mu+1-\beta)}t^{\mu-\beta}, \qquad \mathfrak{D}_t^{\beta}J_t^{\beta}u(x,t) = u(x,t), \qquad J_t^{\beta}\mathfrak{D}_t^{\beta}u(x,t) = u(x,t) - \sum_{k=0}^{m-1}\mathfrak{D}_t^{k}u(x,0)\frac{t^k}{k!}.$$
**Definition 2.** *Let u*(*x*, *t*) *be a piecewise continuous function (PCF) on K* × [0, ∞) *and of exponential order (EO) δ*. *Then the LT of u*(*x*, *t*) *is given by:*

$$U(x,s) = \mathfrak{L}[u(x,t)] := \int_0^{\infty} e^{-st}u(x,t)\,dt, \quad s > \delta,$$

*and the inverse LT of U*(*x*,*s*) *is:*

$$u(x,t) = \mathfrak{L}^{-1}[U(x,s)] := \frac{1}{2\pi i}\int_{z-i\infty}^{z+i\infty} e^{st}\,U(x,s)\,ds, \quad z = \operatorname{Re}(s) > z_0.$$

**Lemma 2.** *Let u*(*x*, *t*) *and w*(*x*, *t*) *be PCFs on K* × [0, ∞) *and of EOs* $\delta_1$ *and* $\delta_2$, *respectively, where* $\delta_1 < \delta_2$. *Considering U*(*x*,*s*) = L[*u*(*x*, *t*)], *W*(*x*,*s*) = L[*w*(*x*, *t*)], *and a*, *b* ∈ R, *then:*

(a) $\mathfrak{L}[a\,u(x,t) + b\,w(x,t)] = a\,U(x,s) + b\,W(x,s)$, $s > \delta_2$;

(b) $\lim_{s \to \infty} s\,U(x,s) = u(x,0)$;

(c) $\mathfrak{L}\left[\mathfrak{D}_t^{n\beta}u(x,t)\right] = s^{n\beta}U(x,s) - \sum_{k=0}^{n-1}s^{(n-k)\beta-1}\,\mathfrak{D}_t^{k\beta}u(x,0)$, $0 < \beta \le 1$, *where* $\mathfrak{D}_t^{n\beta} = \mathfrak{D}_t^{\beta}\mathfrak{D}_t^{\beta}\cdots\mathfrak{D}_t^{\beta}$ (*n times*).

**Theorem 1.** [15] *Let u*(*x*, *t*) *be a PCF on K* × [0, ∞) *of EO δ and U*(*x*,*s*) = L[*u*(*x*, *t*)]*. Suppose that U*(*x*,*s*) *has the following fractional expansion:*

$$U(x,s) = \sum_{n=0}^{\infty}\frac{f_n(x)}{s^{n\beta+1}}, \quad 0 < \beta \le 1, \; x \in K, \; s > \delta. \tag{3}$$

*Then* $f_n(x) = \mathfrak{D}_t^{n\beta}u(x,0)$, *n* = 0, 1, 2, . . ..

The convergence conditions of the fractional expansion in Equation (3) are demonstrated in the following theorem.

**Theorem 2.** [15] *If* $\left|s\,\mathfrak{L}\!\left[\mathfrak{D}_t^{(n+1)\beta}u(x,t)\right]\right| \le M(x)$ *on K* × (*δ*, *d*], *where* $0 < \beta \le 1$, *then the remainder* $\mathcal{R}_n(x,s)$ *of the fractional expansion in Equation (3) satisfies the following inequality:*

$$|\mathcal{R}\_n(\mathbf{x}, s)| \le \frac{\mathcal{M}(\mathbf{x})}{s^{(n+1)\beta + 1}}, \ \mathbf{x} \in \mathcal{K}, \ \delta < s \le d. \tag{4}$$

#### **4. Constructing the L-RPS Solutions for Nonlinear T-FCB-BEs**

The main objective of this section is to construct a solitary solution of the nonlinear T-FCB-BEs using the L-RPS method. This method can be applied to solve nonlinear P-FDEs, whereas the LT alone fails to solve nonlinear equations without a power series technique. The main idea of the L-RPS method is to use the power series method to obtain a solution of the given nonlinear FDE in the Laplace space, and this requires an appropriate expansion that represents the solution in its final form. Moreover, we apply in this section a new technique, in detail, to find the expansion coefficients.

Consider the nonlinear T-FCB-BEs as given in Equations (1) and (2) in Section 1. Applying the LT to both equations in Equation (1), we get:

$$\begin{aligned} \mathfrak{L}\left[\mathfrak{D}_t^{\beta}u(x,t)\right] + \mathfrak{L}[w_x(x,t)] + \mathfrak{L}[u(x,t)u_x(x,t)] &= 0, \\ \mathfrak{L}\left[\mathfrak{D}_t^{\beta}w(x,t)\right] + \mathfrak{L}[(u(x,t)w(x,t))_x] + \mathfrak{L}[u_{xxx}(x,t)] &= 0. \end{aligned} \tag{5}$$

Applying Lemma 2 and using Equation (2), the coupled equations in Equation (1) can be written as:

$$\begin{cases} s^{\beta}U(x,s) - s^{\beta-1}f(x) + W_x(x,s) + \mathfrak{L}\Big[\mathfrak{L}^{-1}[U(x,s)]\left(\mathfrak{L}^{-1}[U(x,s)]\right)_x\Big] = 0, \\ s^{\beta}W(x,s) - s^{\beta-1}g(x) + \mathfrak{L}\Big[\left(\mathfrak{L}^{-1}[U(x,s)]\,\mathfrak{L}^{-1}[W(x,s)]\right)_x\Big] + U_{xxx}(x,s) = 0. \end{cases} \tag{6}$$

From Equation (6), we obtain:

$$\begin{cases} U(x,s) - \dfrac{f(x)}{s} + \dfrac{W_x(x,s)}{s^{\beta}} + \dfrac{1}{s^{\beta}}\mathfrak{L}\Big[\mathfrak{L}^{-1}[U(x,s)]\left(\mathfrak{L}^{-1}[U(x,s)]\right)_x\Big] = 0, \\ W(x,s) - \dfrac{g(x)}{s} + \dfrac{U_{xxx}(x,s)}{s^{\beta}} + \dfrac{1}{s^{\beta}}\mathfrak{L}\Big[\left(\mathfrak{L}^{-1}[U(x,s)]\,\mathfrak{L}^{-1}[W(x,s)]\right)_x\Big] = 0. \end{cases} \tag{7}$$

The system in Equation (7) is a nonlinear system of PDEs containing derivatives with respect to *x*. Now, according to the L-RPS method and using the facts $\lim_{s\to\infty} sU(x,s) = u(x,0)$ and $\lim_{s\to\infty} sW(x,s) = w(x,0)$, the *k*th truncated series of *U*(*x*,*s*) and *W*(*x*,*s*) in Equation (7) can be written as:

$$U_k(x,s) = \frac{f(x)}{s} + \sum_{n=1}^{k}\frac{f_n(x)}{s^{n\beta+1}}, \quad x \in I, \; s > \delta \ge 0, \tag{8}$$

$$W_k(x,s) = \frac{g(x)}{s} + \sum_{n=1}^{k}\frac{g_n(x)}{s^{n\beta+1}}, \quad x \in I, \; s > \delta \ge 0. \tag{9}$$

In the next step, we define the Laplace-residual functions (L-RFs) of the coupled equations in Equation (7) to find the unknown coefficients of the series in Equations (8) and (9):

$$\begin{aligned} LRes(U(x,s)) &= U(x,s) - \frac{f(x)}{s} + \frac{W_x(x,s)}{s^{\beta}} + \frac{1}{s^{\beta}}\mathfrak{L}\Big[\mathfrak{L}^{-1}[U(x,s)]\left(\mathfrak{L}^{-1}[U(x,s)]\right)_x\Big], \\ LRes(W(x,s)) &= W(x,s) - \frac{g(x)}{s} + \frac{U_{xxx}(x,s)}{s^{\beta}} + \frac{1}{s^{\beta}}\mathfrak{L}\Big[\left(\mathfrak{L}^{-1}[U(x,s)]\,\mathfrak{L}^{-1}[W(x,s)]\right)_x\Big], \end{aligned} \tag{10}$$

and the *k*th L-RFs are:

$$\begin{aligned} LRes_k(U(x,s)) &= U_k(x,s) - \frac{f(x)}{s} + \frac{W_{k\,x}(x,s)}{s^{\beta}} + \frac{1}{s^{\beta}}\mathfrak{L}\Big[\mathfrak{L}^{-1}[U_k(x,s)]\left(\mathfrak{L}^{-1}[U_k(x,s)]\right)_x\Big], \\ LRes_k(W(x,s)) &= W_k(x,s) - \frac{g(x)}{s} + \frac{U_{k\,xxx}(x,s)}{s^{\beta}} + \frac{1}{s^{\beta}}\mathfrak{L}\Big[\left(\mathfrak{L}^{-1}[U_k(x,s)]\,\mathfrak{L}^{-1}[W_k(x,s)]\right)_x\Big]. \end{aligned} \tag{11}$$

Since $LRes(U(x,s)) = 0$ and $LRes(W(x,s)) = 0$, we have $s^{k\beta+1}LRes(U(x,s)) = 0$ and $s^{k\beta+1}LRes(W(x,s)) = 0$.

Therefore,

$$\lim_{s\to\infty}\left(s^{k\beta+1}LRes_k(U(x,s))\right) = 0, \quad \lim_{s\to\infty}\left(s^{k\beta+1}LRes_k(W(x,s))\right) = 0, \quad k = 0, 1, 2, \dots \tag{12}$$

To find $f_1(x)$ and $g_1(x)$ in Equation (11), we substitute $U_1(x,s) = \frac{f(x)}{s} + \frac{f_1(x)}{s^{\beta+1}}$ and $W_1(x,s) = \frac{g(x)}{s} + \frac{g_1(x)}{s^{\beta+1}}$ into the first L-RFs to get:

$$\begin{aligned} LRes_1(U(x,s)) &= \frac{f(x)}{s} + \frac{f_1(x)}{s^{\beta+1}} - \frac{f(x)}{s} + \frac{1}{s^{\beta}}\left(\frac{g(x)}{s} + \frac{g_1(x)}{s^{\beta+1}}\right)_x + \frac{1}{s^{\beta}}\mathfrak{L}\left[\mathfrak{L}^{-1}\left[\frac{f(x)}{s} + \frac{f_1(x)}{s^{\beta+1}}\right]\left(\mathfrak{L}^{-1}\left[\frac{f(x)}{s} + \frac{f_1(x)}{s^{\beta+1}}\right]\right)_x\right] \\ &= \frac{1}{s^{\beta+1}}\left(f_1(x) + f(x)f'(x) + g'(x)\right) + \frac{1}{s^{2\beta+1}}\left(f_1(x)f'(x) + f(x)f_1'(x) + g_1'(x)\right) + \frac{1}{s^{3\beta+1}}\,\frac{\Gamma(1+2\beta)f_1(x)f_1'(x)}{\Gamma^2(1+\beta)}, \end{aligned} \tag{13}$$

$$\begin{aligned} LRes_1(W(x,s)) &= \frac{g(x)}{s} + \frac{g_1(x)}{s^{\beta+1}} - \frac{g(x)}{s} + \frac{1}{s^{\beta}}\left(\frac{f(x)}{s} + \frac{f_1(x)}{s^{\beta+1}}\right)_{xxx} + \frac{1}{s^{\beta}}\mathfrak{L}\left[\left(\mathfrak{L}^{-1}\left[\frac{f(x)}{s} + \frac{f_1(x)}{s^{\beta+1}}\right]\mathfrak{L}^{-1}\left[\frac{g(x)}{s} + \frac{g_1(x)}{s^{\beta+1}}\right]\right)_x\right] \\ &= \frac{1}{s^{\beta+1}}\left(g_1(x) + g(x)f'(x) + f(x)g'(x) + f^{(3)}(x)\right) + \frac{1}{s^{2\beta+1}}\left(g_1(x)f'(x) + f_1(x)g'(x) + g(x)f_1'(x) + f(x)g_1'(x) + f_1^{(3)}(x)\right) \\ &\quad + \frac{1}{s^{3\beta+1}}\left(\frac{\Gamma(1+2\beta)g_1(x)f_1'(x)}{\Gamma^2(1+\beta)} + \frac{\Gamma(1+2\beta)f_1(x)g_1'(x)}{\Gamma^2(1+\beta)}\right). \end{aligned}$$

Next, by solving $\lim_{s\to\infty}s^{\beta+1}LRes_1(U(x,s)) = 0$ and $\lim_{s\to\infty}s^{\beta+1}LRes_1(W(x,s)) = 0$, one can get:

$$f_1(x) = -\left(f(x)f'(x) + g'(x)\right), \quad g_1(x) = -\left(g(x)f'(x) + f(x)g'(x) + f^{(3)}(x)\right). \tag{14}$$
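For the initial data used later in Application 1 (taken here purely as a concrete check of Equation (14); the derivatives of *f* and *g* are differentiated by hand, and all helper names are ours), the first coefficients can be evaluated numerically:

```python
from math import tanh, cosh, sinh

def sech(z):
    return 1.0 / cosh(z)

# Initial data f(x) = 1 + tanh(x/2), g(x) = 1/2 - tanh^2(x/2)/2, and the
# derivatives that Equation (14) needs, differentiated by hand:
f  = lambda x: 1.0 + tanh(x / 2)
fp = lambda x: 0.5 * sech(x / 2) ** 2                                    # f'
f3 = lambda x: (0.5 * tanh(x / 2) ** 2 - 0.25 * sech(x / 2) ** 2) * sech(x / 2) ** 2  # f'''
g  = lambda x: 0.5 - 0.5 * tanh(x / 2) ** 2
gp = lambda x: -0.5 * sech(x / 2) ** 2 * tanh(x / 2)                     # g'

# First L-RPS coefficients from Equation (14):
f1 = lambda x: -(f(x) * fp(x) + gp(x))
g1 = lambda x: -(g(x) * fp(x) + f(x) * gp(x) + f3(x))

x = 0.8
print(abs(f1(x) + 0.5 * sech(x / 2) ** 2) < 1e-12)                 # f1 = -sech^2(x/2)/2
print(abs(g1(x) - 4.0 * sinh(x / 2) ** 4 / sinh(x) ** 3) < 1e-12)  # g1 = 4 csch^3(x) sinh^4(x/2)
```

Both checks reproduce the closed forms of $f_1$ and $g_1$ reported for Application 1 in Section 5.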

Thus, the first Laplace series solution (LSS) of the system in Equations (8) and (9) can be written as:

$$U_1(x,s) = \frac{f(x)}{s} - \frac{f(x)f'(x) + g'(x)}{s^{\beta+1}}, \qquad W_1(x,s) = \frac{g(x)}{s} - \frac{g(x)f'(x) + f(x)g'(x) + f^{(3)}(x)}{s^{\beta+1}}. \tag{15}$$

To find the second LSS of the system in Equations (8) and (9), substitute $U_2(x,s) = \frac{f(x)}{s} + \frac{f_1(x)}{s^{\beta+1}} + \frac{f_2(x)}{s^{2\beta+1}}$ and $W_2(x,s) = \frac{g(x)}{s} + \frac{g_1(x)}{s^{\beta+1}} + \frac{g_2(x)}{s^{2\beta+1}}$ into the second L-RFs $LRes_2(U(x,s))$ and $LRes_2(W(x,s))$ to get:

$$\begin{aligned} LRes_2(U(x,s)) &= \frac{1}{s^{\beta+1}}\left(f_1(x) + f(x)f'(x) + g'(x)\right) + \frac{1}{s^{2\beta+1}}\left(f_2(x) + f_1(x)f'(x) + f(x)f_1'(x) + g_1'(x)\right) \\ &\quad + \frac{1}{s^{3\beta+1}}\left(f_2(x)f'(x) + f(x)f_2'(x) + g_2'(x) + \frac{\Gamma(1+2\beta)f_1(x)f_1'(x)}{\Gamma^2(1+\beta)}\right) \\ &\quad + \frac{1}{s^{4\beta+1}}\left(\frac{\Gamma(1+3\beta)f_2(x)f_1'(x)}{\Gamma(1+\beta)\Gamma(1+2\beta)} + \frac{\Gamma(1+3\beta)f_1(x)f_2'(x)}{\Gamma(1+\beta)\Gamma(1+2\beta)}\right) + \frac{1}{s^{5\beta+1}}\,\frac{\Gamma(1+4\beta)f_2(x)f_2'(x)}{\Gamma^2(1+2\beta)}, \end{aligned} \tag{16}$$

$$\begin{aligned} LRes_2(W(x,s)) &= \frac{1}{s^{\beta+1}}\left(g_1(x) + g(x)f'(x) + f(x)g'(x) + f^{(3)}(x)\right) \\ &\quad + \frac{1}{s^{2\beta+1}}\left(g_2(x) + g_1(x)f'(x) + f_1(x)g'(x) + g(x)f_1'(x) + f(x)g_1'(x) + f_1^{(3)}(x)\right) \\ &\quad + \frac{1}{s^{3\beta+1}}\left(g_2(x)f'(x) + f_2(x)g'(x) + g(x)f_2'(x) + f(x)g_2'(x) + f_2^{(3)}(x) + \frac{\Gamma(1+2\beta)g_1(x)f_1'(x)}{\Gamma^2(1+\beta)} + \frac{\Gamma(1+2\beta)f_1(x)g_1'(x)}{\Gamma^2(1+\beta)}\right) \\ &\quad + \frac{1}{s^{4\beta+1}}\,\frac{\Gamma(1+3\beta)\left(g_2(x)f_1'(x) + g_1(x)f_2'(x) + f_2(x)g_1'(x) + f_1(x)g_2'(x)\right)}{\Gamma(1+\beta)\Gamma(1+2\beta)} \\ &\quad + \frac{1}{s^{5\beta+1}}\,\frac{\Gamma(1+4\beta)\left(g_2(x)f_2'(x) + f_2(x)g_2'(x)\right)}{\Gamma^2(1+2\beta)}. \end{aligned}$$

Thus, *f*2(*x*) and *g*2(*x*) can be obtained by substituting the values of *f*1(*x*) and *g*1(*x*) into Equation (16), then multiplying both sides of the new equation by *s*2*β*+<sup>1</sup> and taking the limit as *s* → ∞ to get:

$$\begin{aligned} f\_2(\mathbf{x}) &= -(f\_1(\mathbf{x})f'(\mathbf{x}) + f(\mathbf{x})f\_1'(\mathbf{x}) + g\_1'(\mathbf{x})),\\ g\_2(\mathbf{x}) &= -\left(g\_1(\mathbf{x})f'(\mathbf{x}) + f\_1(\mathbf{x})g'(\mathbf{x}) + g(\mathbf{x})f\_1'(\mathbf{x}) + f(\mathbf{x})g\_1'(\mathbf{x}) + f\_1^{(3)}(\mathbf{x})\right). \end{aligned} \tag{17}$$

Again, to find the third LSS of the system in Equations (8) and (9), substitute $U_3(x,s) = \frac{f(x)}{s} + \frac{f_1(x)}{s^{\beta+1}} + \frac{f_2(x)}{s^{2\beta+1}} + \frac{f_3(x)}{s^{3\beta+1}}$ and $W_3(x,s) = \frac{g(x)}{s} + \frac{g_1(x)}{s^{\beta+1}} + \frac{g_2(x)}{s^{2\beta+1}} + \frac{g_3(x)}{s^{3\beta+1}}$ into the third L-RFs $LRes_3(U(x,s))$ and $LRes_3(W(x,s))$ to get:

$$\begin{aligned} LRes_3(U(x,s)) &= \frac{1}{s^{\beta+1}}\left(f_1(x) + f(x)f'(x) + g'(x)\right) + \frac{1}{s^{2\beta+1}}\left(f_2(x) + f_1(x)f'(x) + f(x)f_1'(x) + g_1'(x)\right) \\ &\quad + \frac{1}{s^{3\beta+1}}\left(f_3(x) + f_2(x)f'(x) + f(x)f_2'(x) + g_2'(x) + \frac{\Gamma(1+2\beta)f_1(x)f_1'(x)}{\Gamma^2(1+\beta)}\right) \\ &\quad + \frac{1}{s^{4\beta+1}}\left(f_3(x)f'(x) + f(x)f_3'(x) + g_3'(x) + \frac{\Gamma(1+3\beta)\left(f_2(x)f_1'(x) + f_1(x)f_2'(x)\right)}{\Gamma(1+\beta)\Gamma(1+2\beta)}\right) \\ &\quad + \frac{1}{s^{5\beta+1}}\left(\frac{\Gamma(1+4\beta)\left(f_3(x)f_1'(x) + f_1(x)f_3'(x)\right)}{\Gamma(1+\beta)\Gamma(1+3\beta)} + \frac{\Gamma(1+4\beta)f_2(x)f_2'(x)}{\Gamma^2(1+2\beta)}\right) \\ &\quad + \frac{1}{s^{6\beta+1}}\,\frac{\Gamma(1+5\beta)\left(f_3(x)f_2'(x) + f_2(x)f_3'(x)\right)}{\Gamma(1+2\beta)\Gamma(1+3\beta)} + \frac{1}{s^{7\beta+1}}\,\frac{\Gamma(1+6\beta)f_3(x)f_3'(x)}{\Gamma^2(1+3\beta)}, \end{aligned} \tag{18}$$

$$\begin{aligned} LRes_3(W(x,s)) &= \frac{1}{s^{\beta+1}}\left(g_1(x) + g(x)f'(x) + f(x)g'(x) + f^{(3)}(x)\right) \\ &\quad + \frac{1}{s^{2\beta+1}}\left(g_2(x) + g_1(x)f'(x) + f_1(x)g'(x) + g(x)f_1'(x) + f(x)g_1'(x) + f_1^{(3)}(x)\right) \\ &\quad + \frac{1}{s^{3\beta+1}}\left(g_3(x) + g_2(x)f'(x) + f_2(x)g'(x) + g(x)f_2'(x) + f(x)g_2'(x) + f_2^{(3)}(x) + \frac{\Gamma(1+2\beta)g_1(x)f_1'(x)}{\Gamma^2(1+\beta)} + \frac{\Gamma(1+2\beta)f_1(x)g_1'(x)}{\Gamma^2(1+\beta)}\right) \\ &\quad + \frac{1}{s^{4\beta+1}}\left(g_3(x)f'(x) + f_3(x)g'(x) + g(x)f_3'(x) + f(x)g_3'(x) + f_3^{(3)}(x) + \frac{\Gamma(1+3\beta)\left(g_2(x)f_1'(x) + g_1(x)f_2'(x) + f_2(x)g_1'(x) + f_1(x)g_2'(x)\right)}{\Gamma(1+\beta)\Gamma(1+2\beta)}\right) \\ &\quad + \frac{1}{s^{5\beta+1}}\left(\frac{\Gamma(1+4\beta)\left(g_3(x)f_1'(x) + g_1(x)f_3'(x) + f_3(x)g_1'(x) + f_1(x)g_3'(x)\right)}{\Gamma(1+\beta)\Gamma(1+3\beta)} + \frac{\Gamma(1+4\beta)\left(g_2(x)f_2'(x) + f_2(x)g_2'(x)\right)}{\Gamma^2(1+2\beta)}\right) \\ &\quad + \frac{1}{s^{6\beta+1}}\,\frac{\Gamma(1+5\beta)\left(g_3(x)f_2'(x) + g_2(x)f_3'(x) + f_3(x)g_2'(x) + f_2(x)g_3'(x)\right)}{\Gamma(1+2\beta)\Gamma(1+3\beta)} + \frac{1}{s^{7\beta+1}}\,\frac{\Gamma(1+6\beta)\left(g_3(x)f_3'(x) + f_3(x)g_3'(x)\right)}{\Gamma^2(1+3\beta)}. \end{aligned}$$

Thus, *f*3(*x*) and *g*3(*x*) can be obtained by substituting the values of *f*1(*x*), *f*2(*x*), *g*1(*x*) and *g*2(*x*) into the coupled equations in Equation (18), then multiplying both sides of the new equations by *<sup>s</sup>*3*β*+<sup>1</sup> and taking the limit as *<sup>s</sup>* <sup>→</sup> <sup>∞</sup> to get:

$$\begin{aligned} f_3(x) &= -\left(f_2(x)f'(x) + f(x)f_2'(x) + g_2'(x) + \frac{\Gamma(1+2\beta)f_1(x)f_1'(x)}{\Gamma^2(1+\beta)}\right), \\ g_3(x) &= -\left(g_2(x)f'(x) + f_2(x)g'(x) + g(x)f_2'(x) + f(x)g_2'(x) + f_2^{(3)}(x) + \frac{\Gamma(1+2\beta)g_1(x)f_1'(x)}{\Gamma^2(1+\beta)} + \frac{\Gamma(1+2\beta)f_1(x)g_1'(x)}{\Gamma^2(1+\beta)}\right). \end{aligned} \tag{19}$$

Continuing in the same manner, we substitute the *k*th truncated series $U_k(x,s)$, $W_k(x,s)$ into the *k*th L-RFs $LRes_k(U(x,s))$, $LRes_k(W(x,s))$, multiply the resulting equations by $s^{k\beta+1}$, and take the limit as $s \to \infty$ to obtain $f_{k+1}(x)$ and $g_{k+1}(x)$ for $k \ge 2$; this yields the following recurrence relations:

$$f_{k+1}(x) = -\left(\left(f_k(x)f(x) + g_k(x)\right)' + \sum_{i+j=k}\frac{r\left(f_i(x)f_j(x)\right)'\,\Gamma(1+k\beta)}{\Gamma(1+i\beta)\Gamma(1+j\beta)}\right), \quad i, j \in \mathbb{Z}^+, \tag{20}$$

*where* $r = \begin{cases} 0.5, & i = j, \\ 1, & i \neq j, \end{cases}$ *for* $k = 2, 3, 4, \dots$, the first sum taken over unordered pairs with $i + j = k$, and

$$g_{k+1}(x) = -\left(\left(g_k(x)f(x)\right)' + \left(f_k(x)g(x)\right)' + f_k^{(3)}(x) + \sum_{i+j=k}\frac{\left(g_i(x)f_j(x)\right)'\,\Gamma(1+k\beta)}{\Gamma(1+i\beta)\Gamma(1+j\beta)}\right), \quad i, j \in \mathbb{Z}^+,$$

the second sum taken over ordered pairs with $i + j = k$.
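To make the recurrence concrete, the sketch below applies Equation (20) when the initial data are polynomials, so every derivative is exact. The coefficient-list representation and the helper names (`dpoly`, `mul`, `add`, `f_next`) are ours; only the *f*-recurrence is implemented, and its symmetric sum is taken over ordered pairs $i + j = k$, which is equivalent to the *r*-weighted unordered sum in Equation (20):

```python
from math import gamma

# A polynomial is a list of coefficients [a0, a1, a2, ...] in x.
def dpoly(p):                                   # derivative
    return [i * c for i, c in enumerate(p)][1:] or [0.0]

def mul(p, q):                                  # product
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def add(p, q):                                  # sum
    n = max(len(p), len(q))
    p, q = p + [0.0] * (n - len(p)), q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def f_next(fs, gs, k, beta):
    """f_{k+1} via Eq. (20): -((f*f_k + g_k)' + Gamma-weighted cross terms).
    fs = [f_0, ..., f_k], gs = [g_0, ..., g_k]; valid for k >= 1
    (for k = 1 the empty sum reproduces Eq. (17))."""
    acc = dpoly(add(mul(fs[0], fs[k]), gs[k]))
    for i in range(1, k):                       # ordered pairs i + j = k
        j = k - i
        c = gamma(1 + k * beta) / (gamma(1 + i * beta) * gamma(1 + j * beta))
        acc = add(acc, [c * a for a in mul(fs[i], dpoly(fs[j]))])
    return [-a for a in acc]
```

For instance, with $f(x) = x$ and $g(x) = 1 - x^2$, Equation (14) gives $f_1(x) = x$ and $g_1(x) = 3x^2 - 1$, and `f_next([[0, 1], [0, 1]], [[1, 0, -1], [-1, 0, 3]], 1, beta)` returns the coefficients of $f_2(x) = -8x$, matching Equation (17).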

Now, the series solution of the system in Equation (7) is given by:

$$\begin{aligned} U(x,s) &= \frac{f(x)}{s} - \frac{f(x)f'(x) + g'(x)}{s^{\beta+1}} - \frac{f_1(x)f'(x) + f(x)f_1'(x) + g_1'(x)}{s^{2\beta+1}} + \sum_{n=3}^{\infty}\frac{f_n(x)}{s^{n\beta+1}}, \quad x \in I, \; s > \delta \ge 0, \\ W(x,s) &= \frac{g(x)}{s} - \frac{g(x)f'(x) + f(x)g'(x) + f^{(3)}(x)}{s^{\beta+1}} - \frac{g_1(x)f'(x) + f_1(x)g'(x) + g(x)f_1'(x) + f(x)g_1'(x) + f_1^{(3)}(x)}{s^{2\beta+1}} \\ &\quad + \sum_{n=3}^{\infty}\frac{g_n(x)}{s^{n\beta+1}}, \quad x \in I, \; s > \delta \ge 0. \end{aligned} \tag{21}$$

So, the series solution of the nonlinear T-FCB-BEs in Equations (1) and (2) can be obtained by transforming the above solution into the original space by using the inverse LT. Therefore, the L-RPS solution of the system in Equation (1) is given by:

$$\begin{aligned} u(x,t) &= f(x) - \frac{(f(x)f'(x) + g'(x))t^{\beta}}{\Gamma(\beta+1)} - \frac{(f_1(x)f'(x) + f(x)f_1'(x) + g_1'(x))t^{2\beta}}{\Gamma(2\beta+1)} + \sum_{k=3}^{\infty}\frac{f_k(x)t^{k\beta}}{\Gamma(k\beta+1)}, \quad t \ge 0, \; x \in I, \\ w(x,t) &= g(x) - \frac{\left(g(x)f'(x) + f(x)g'(x) + f^{(3)}(x)\right)t^{\beta}}{\Gamma(\beta+1)} - \frac{\left(g_1(x)f'(x) + f_1(x)g'(x) + g(x)f_1'(x) + f(x)g_1'(x) + f_1^{(3)}(x)\right)t^{2\beta}}{\Gamma(2\beta+1)} \\ &\quad + \sum_{k=3}^{\infty}\frac{g_k(x)t^{k\beta}}{\Gamma(k\beta+1)}, \quad t \ge 0, \; x \in I. \end{aligned} \tag{22}$$

#### **5. Application with Graphical Result**

In this section, we give an attractive and interesting example, including graphical results, to assert the efficiency and simplicity of our proposed method of Section 4.

**Application 1.** *Consider the following nonlinear T-FCB-BEs:*

$$
\mathfrak{D}\_t^\beta u(\mathbf{x}, t) + w\_x(\mathbf{x}, t) + u(\mathbf{x}, t)u\_x(\mathbf{x}, t) = 0,\\
\mathfrak{D}\_t^\beta w(\mathbf{x}, t) + (u(\mathbf{x}, t)w(\mathbf{x}, t))\_x + u\_{xxx}(\mathbf{x}, t) = 0. \tag{23}
$$

*with the initial conditions:*

$$u(\mathbf{x},0) = 1 + \tanh\left(\frac{\mathbf{x}}{2}\right), \; w(\mathbf{x},0) = \frac{1}{2} - \frac{1}{2}\tanh^2\left(\frac{\mathbf{x}}{2}\right). \tag{24}$$

Comparing Equations (23) and (24) with Equations (1) and (2), respectively, we find that

$$f(\mathbf{x}) = 1 + \tanh\left(\frac{\mathbf{x}}{2}\right), \text{ and } g(\mathbf{x}) = \frac{1}{2} - \frac{1}{2}\tanh^2\left(\frac{\mathbf{x}}{2}\right).$$

Therefore, according to the discussion and obtained results in Section 4, the L-RPS solution of the system in Equation (23) is given by

$$u(x,t) = f(x) - \frac{(f(x)f'(x) + g'(x))t^{\beta}}{\Gamma(\beta+1)} - \frac{(f_1(x)f'(x) + f(x)f_1'(x) + g_1'(x))t^{2\beta}}{\Gamma(2\beta+1)} + \sum_{k=3}^{\infty}\frac{f_k(x)t^{k\beta}}{\Gamma(k\beta+1)}, \quad t \ge 0, \; x \in I, \tag{25}$$

$$w(x,t) = g(x) - \frac{\left(g(x)f'(x) + f(x)g'(x) + f^{(3)}(x)\right)t^{\beta}}{\Gamma(\beta+1)} - \frac{\left(g_1(x)f'(x) + f_1(x)g'(x) + g(x)f_1'(x) + f(x)g_1'(x) + f_1^{(3)}(x)\right)t^{2\beta}}{\Gamma(2\beta+1)} + \sum_{k=3}^{\infty}\frac{g_k(x)t^{k\beta}}{\Gamma(k\beta+1)}, \quad t \ge 0, \; x \in I. \tag{26}$$

Now, Equations (14), (17), and (20) produce the series coefficients as follows:

$$\begin{aligned} f_0(x) &= 1 + \tanh\left(\tfrac{x}{2}\right), \qquad g_0(x) = \tfrac{1}{2} - \tfrac{1}{2}\tanh^2\left(\tfrac{x}{2}\right), \qquad f_1(x) = -\tfrac{1}{2}\,\mathrm{sech}^2\left(\tfrac{x}{2}\right), \\ g_1(x) &= -\tfrac{1}{4}\mathrm{sech}^2\left(\tfrac{x}{2}\right) + \tfrac{1}{4}\mathrm{sech}^4\left(\tfrac{x}{2}\right) + \tfrac{1}{2}\mathrm{sech}^2\left(\tfrac{x}{2}\right)\tanh\left(\tfrac{x}{2}\right) + \tfrac{1}{4}\mathrm{sech}^2\left(\tfrac{x}{2}\right)\tanh^2\left(\tfrac{x}{2}\right) = 4\,\mathrm{csch}^3(x)\sinh^4\left(\tfrac{x}{2}\right), \\ f_2(x) &= -\tfrac{3}{4}\mathrm{sech}^2\left(\tfrac{x}{2}\right)\tanh\left(\tfrac{x}{2}\right) + \tfrac{1}{4}\mathrm{sech}^4\left(\tfrac{x}{2}\right)\tanh\left(\tfrac{x}{2}\right) + \tfrac{1}{4}\mathrm{sech}^2\left(\tfrac{x}{2}\right)\tanh^3\left(\tfrac{x}{2}\right) = -4\,\mathrm{csch}^3(x)\sinh^4\left(\tfrac{x}{2}\right), \\ g_2(x) &= -\tfrac{1}{8}\mathrm{sech}^4\left(\tfrac{x}{2}\right) - \tfrac{1}{8}\mathrm{sech}^6\left(\tfrac{x}{2}\right) - \tfrac{1}{2}\mathrm{sech}^2\left(\tfrac{x}{2}\right)\tanh\left(\tfrac{x}{2}\right) + \tfrac{1}{2}\mathrm{sech}^4\left(\tfrac{x}{2}\right)\tanh\left(\tfrac{x}{2}\right) + \tfrac{1}{4}\mathrm{sech}^2\left(\tfrac{x}{2}\right)\tanh^2\left(\tfrac{x}{2}\right) \\ &\quad + \tfrac{1}{8}\mathrm{sech}^4\left(\tfrac{x}{2}\right)\tanh^2\left(\tfrac{x}{2}\right) + \tfrac{1}{2}\mathrm{sech}^2\left(\tfrac{x}{2}\right)\tanh^3\left(\tfrac{x}{2}\right) + \tfrac{1}{4}\mathrm{sech}^2\left(\tfrac{x}{2}\right)\tanh^4\left(\tfrac{x}{2}\right) = \tfrac{1}{4}(\cosh(x) - 2)\,\mathrm{sech}^4\left(\tfrac{x}{2}\right). \end{aligned}$$
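The closed-form simplification of $g_2(x)$ can be double-checked numerically; in the sketch below (helper names ours), the raw sech/tanh expansion produced by Equation (17) is compared with the compact form $\frac{1}{4}(\cosh(x) - 2)\,\mathrm{sech}^4\!\left(\frac{x}{2}\right)$, which is what the expansion simplifies to:

```python
from math import cosh, tanh

def sech(z):
    return 1.0 / cosh(z)

def g2_expanded(x):
    """g2(x) as the raw sum of sech/tanh terms from Equation (17)."""
    S, T = sech(x / 2), tanh(x / 2)
    return (-S**4 / 8 - S**6 / 8 - S**2 * T / 2 + S**4 * T / 2
            + S**2 * T**2 / 4 + S**4 * T**2 / 8 + S**2 * T**3 / 2
            + S**2 * T**4 / 4)

def g2_closed(x):
    """Compact form: g2(x) = (cosh(x) - 2) * sech^4(x/2) / 4."""
    return (cosh(x) - 2.0) * sech(x / 2) ** 4 / 4.0

print(all(abs(g2_expanded(x) - g2_closed(x)) < 1e-12
          for x in (-2.0, -0.5, 0.0, 0.7, 3.1)))
```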

Continue in the same manner to get:

$$u(x,t) = 1 + \tanh\left(\frac{x}{2}\right) - \frac{\mathrm{sech}^2\left(\frac{x}{2}\right)t^{\beta}}{2\,\Gamma(\beta+1)} - \frac{4\,\mathrm{csch}^3(x)\sinh^4\left(\frac{x}{2}\right)t^{2\beta}}{\Gamma(2\beta+1)} + \dots = 1 + \tanh\left(\frac{x - \frac{t^{\beta}}{\beta}}{2}\right), \tag{27}$$

$$w(x,t) = \frac{1}{2} - \frac{1}{2}\tanh^2\left(\frac{x}{2}\right) + \frac{4\operatorname{csch}^3(x)\sinh^4\left(\frac{x}{2}\right)t^{\beta}}{\Gamma(\beta+1)} + \frac{(\cosh(x)-2)\operatorname{sech}^4\left(\frac{x}{2}\right)t^{2\beta}}{4\,\Gamma(2\beta+1)} + \dots = \frac{1}{2} - \frac{1}{2}\tanh^2\left(\frac{x-\frac{t^{\beta}}{\beta}}{2}\right).$$
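As a quick numerical spot-check (ours, not part of the paper), the truncated L-RPS series at *β* = 1, with the simplified coefficient forms *f*1 = −(1/2)sech²(*x*/2), *f*2 = −(1/2)sech²(*x*/2)tanh(*x*/2) (equivalently −4csch³(*x*)sinh⁴(*x*/2)), *g*1 = (1/2)sech²(*x*/2)tanh(*x*/2), and *g*2 = (cosh *x* − 2)sech⁴(*x*/2)/4, should approach the closed forms above for small *t*:

```python
# Spot-check (ours): three-term L-RPS partial sums at beta = 1 versus the
# closed forms u = 1 + tanh((x - t)/2), w = 1/2 - (1/2) tanh^2((x - t)/2).
import math

def sech(z):
    return 1.0 / math.cosh(z)

def series_u3(x, t):
    # f0 + f1 * t^b/Gamma(b+1) + f2 * t^(2b)/Gamma(2b+1) with b = 1
    f0 = 1.0 + math.tanh(x / 2)
    f1 = -0.5 * sech(x / 2)**2
    f2 = -0.5 * sech(x / 2)**2 * math.tanh(x / 2)
    return f0 + f1 * t + f2 * t**2 / 2.0        # Gamma(2) = 1, Gamma(3) = 2

def series_w3(x, t):
    g0 = 0.5 - 0.5 * math.tanh(x / 2)**2
    g1 = 0.5 * sech(x / 2)**2 * math.tanh(x / 2)
    g2 = (math.cosh(x) - 2.0) * sech(x / 2)**4 / 4.0
    return g0 + g1 * t + g2 * t**2 / 2.0

x, t = 0.5, 0.1
print(series_u3(x, t), 1.0 + math.tanh((x - t) / 2))
print(series_w3(x, t), 0.5 - 0.5 * math.tanh((x - t) / 2)**2)
```

The truncation error is O(*t*³), so the three-term sums already agree with the closed forms to about 3 × 10⁻⁵ at *t* = 0.1.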

Figures 1 and 2 below show the graphical results for the 5th-approximate L-RPS solutions *u*5(*x*, *t*) and *w*5(*x*, *t*), respectively, of Equations (23) and (24) at different values of *β*.

**Figure 1.** The surface graph of the 3D plots of the 5th-approximate L-RPS solution *u*5(*x*, *t*) in Equations (23) and (24) at: (**a**) *β* = 0.6, (**b**) *β* = 0.75, (**c**) *β* = 0.9.

**Figure 2.** The surface graph of the 3D plots of the 5th-approximate L-RPS solution *w*5(*x*, *t*) in Equations (23) and (24) at: (**a**) *β* = 0.6, (**b**) *β* = 0.75, (**c**) *β* = 0.9.

#### **6. Conclusions**

We have employed an attractive L-RPS method for solving a system of nonlinear T-FCB-BEs. The proposed method is new and efficient, and it provides the solution as a rapidly convergent series that yields the solution in closed form. That is, fewer calculations are needed in the L-RPS method to obtain the series coefficients than in the RPS method, since the coefficients are determined by employing the concept of the limit rather than the fractional derivative, as in the RPS method. The L-RPS method will open the door to solving many complicated nonlinear F-PDEs in future studies, since it can easily be employed to construct exact and approximate solutions of many physical and engineering phenomena that depend on F-PDEs, such as the nonlinear KdV-Burger, parabolic, and mKdV space-time F-PDEs. Moreover, there is a newly proposed fractional derivative definition, called the "Abu-Shady-Kaabar fractional derivative", recently introduced by Abu-Shady and Kaabar [51]. This definition yields the same results as the C-FD in a very simple way, which is more efficient for solving many nonlinear FDEs; see [51–53]. In the future, we intend to solve some new attractive scientific modeling phenomena via the L-RPS method using the Abu-Shady-Kaabar fractional derivative [51–54]. Mathematica software 14 is used to compute the numerical and graphical results.

**Author Contributions:** Conceptualization, A.S., A.B. and R.S.; methodology, A.B.; software, A.S. and Z.A.-Z.; validation, A.B., R.S. and Z.A.-Z.; formal analysis, A.S.; investigation, R.S.; resources, Z.A.-Z. and R.S.; data curation, A.B., R.S. and Z.A.-Z.; writing—original draft preparation, A.S. and A.B.; writing—review and editing, Z.A.-Z. and R.S.; visualization, A.S. and A.B.; supervision, A.B.; project administration, A.B.; funding acquisition, A.B. and R.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Zarqa University, Zarqa, Jordan.

**Data Availability Statement:** No data are associated with this work.

**Acknowledgments:** The authors express their sincere thanks to referees for careful reading and valuable suggestions.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **A Novel Analytical LRPSM for Solving Nonlinear Systems of FPDEs**

**Hussam Aljarrah 1, Mohammad Alaroud 2, Anuar Ishak 1,\* and Maslina Darus <sup>1</sup>**


**Abstract:** This article employs the Laplace residual power series approach to study nonlinear systems of time-fractional partial differential equations with the time-fractional Caputo derivative. The proposed technique is based on a new fractional expansion of the Maclaurin series, which provides a rapidly convergent series solution where the coefficients of the proposed fractional expansion are computed with the limit concept. The nonlinear systems studied in this work are the Broer-Kaup system, the Burgers' system of two variables, and the Burgers' system of three variables, which are used in modeling various nonlinear physical applications such as shock waves, wave processes, vorticity transport, dispersion in porous media, and hydrodynamic turbulence. The results obtained are reliable, efficient, and accurate with minimal computations. The proposed technique is analyzed by applying it to three attractive problems whose approximate analytical solutions are formulated as rapidly convergent fractional Maclaurin formulas. The results are studied numerically and graphically to show the performance and validity of the technique, as well as the impact of the fractional order on the behavior of the solutions. Moreover, numerical comparisons are made with other well-known methods, showing that the results obtained by the proposed technique are more accurate. Finally, the obtained outcomes and simulation data show that the present method provides a sound methodology and a suitable tool for solving such nonlinear systems of time-fractional partial differential equations.

**Keywords:** fractional differential equations; Laplace residual power series; fractional Broer-Kaup equations; fractional Burgers' equations

#### **1. Introduction**

Fractional-order systems have attracted considerable attention and interest in various engineering and scientific fields as popular mathematical models used to describe real-world physical phenomena [1–5]. Fractional calculus provides a valuable instrument for describing the evolution of complicated dynamical systems with long-term memory effects. In contrast to ordinary derivatives, defining fractional-order derivatives of a specific function necessitates the existence of its complete history. Such a non-local feature, i.e., the memory effect, has made it much more practical to describe various real-world physical systems using fractional differential equations. Investigating the dynamics, including complexity, chaos, stability, bifurcation, and synchronization, of these fractional-order systems has recently become an interesting research field in nonlinear sciences [6–13]. In order to study the dynamic behavior of real-world physical systems, it is essential to determine how their solution trajectories change under slight perturbations. Therefore, developing various numerical techniques to analyze and simulate the systems' nonlinear dynamics is important. Considering fractional derivatives, analytic-numeric approaches to fractional calculus frequently depend on versions of the Riemann–Liouville, Caputo, Grünwald–Letnikov, Riesz, or other approaches, which were discussed

**Citation:** Aljarrah, H.; Alaroud, M.; Ishak, A.; Darus, M. A Novel Analytical LRPSM for Solving Nonlinear Systems of FPDEs. *Fractal Fract.* **2022**, *6*, 650. https://doi.org/ 10.3390/fractalfract6110650

Academic Editors: Libo Feng, Yang Liu and Lin Liu

Received: 11 October 2022 Accepted: 31 October 2022 Published: 4 November 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

in previous studies during the past few years [14–16]. This study, however, will use Caputo's approach to fractional differentiation, benefiting from the fact that the initial conditions of fractional partial differential equations (FPDEs) with Caputo derivatives take the same conventional form as in the integer-order case.

Differential equations (DEs) can be used for modeling many chemical, biological, and physical phenomena. Because FPDEs have a significant impact on many applied disciplines, particularly nonlinear ones such as fluid flow, biological diffusion of populations, dynamical systems, control theory, electromagnetic waves, etc., there has been growing interest in them in recent years [17–21]. Most scientific phenomena in various disciplines, such as physics, biological systems, and engineering, are nonlinear problems; therefore, it can be challenging to find their exact solutions. For instance, physical problems are typically modeled by highly nonlinear FPDEs, which makes finding exact solutions for these problems quite challenging. Thus, numerical as well as approximate methods must be employed. Numerous useful techniques have been used for solving linear and nonlinear FPDEs, including the variational iteration technique, the Adomian decomposition technique, the homotopy analysis technique, the homotopy perturbation technique, and the fractional residual power series technique [22–28].

The fractional power series method (FPSM) has been employed to solve several classes of differential and integral equations of fractional order whenever the solution of the equation can be expanded into a fractional power series [29]. Moreover, FPSM is a fast and easy method for determining the fractional power series solution coefficients: compared with other methods, the computational effort required to compute solutions of FPDEs is much less, and the results are much better, as the speed of implementation in mathematical packages helps to obtain the results in less time and with more accuracy, especially for nonlinear problems [30–32]. Recently, the FPSM has received the attention of many researchers, whereby various fractional integral and differential equations were investigated successfully by using FPSM, including fractional Fokker–Planck equations [33]; Sawada–Kotera–Ito, Lax, and Kaup–Kupershmidt equations [34]; and the fractional Fredholm integrodifferential equation of order 2*β* arising in the natural sciences [35]. The Laplace transform (LT) technique represents a simple technique for solving several kinds of linear differential, integral, and integrodifferential equations, as well as a specific class of linear FPDEs [5]. Solving linear DEs by the LT technique involves three steps. The first step transforms the main DE into the Laplace space. The second step solves the new equation algebraically in the Laplace space. The last step transforms the solution obtained in the previous step back into the initial space, which solves the problem at hand [36].
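The three LT steps can be made concrete with a small stdlib-only Python sketch (our illustration, not from the paper; the IVP *y*′ + *y* = 0, *y*(0) = 1 and all names below are ours). Transforming gives s*Y*(s) − 1 + *Y*(s) = 0, solving algebraically gives *Y*(s) = 1/(s + 1), and inverting gives *y*(*t*) = e^(−*t*); the code confirms the last two steps are consistent by evaluating the forward Laplace integral of e^(−*t*) numerically:

```python
# Numerical consistency check (ours) of the three LT steps for y' + y = 0, y(0) = 1.
import math

def laplace_numeric(f, s, upper=40.0, n=100_000):
    """Trapezoidal approximation of integral_0^upper exp(-s*t) * f(t) dt."""
    h = upper / n
    total = 0.5 * (f(0.0) + math.exp(-s * upper) * f(upper))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s * t) * f(t)
    return total * h

y = lambda t: math.exp(-t)      # step 3: the inverted solution y(t) = exp(-t)
for s in (0.5, 1.0, 3.0):
    # forward transform of y should reproduce step 2: Y(s) = 1/(s + 1)
    print(s, laplace_numeric(y, s), 1.0 / (s + 1.0))
```

The exponential decay of the integrand makes truncating the integral at a finite upper limit harmless here.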

Overall, there are no semi-approximate or conventional analytical methods that can produce accurate closed-form or approximate solutions for nonlinear FPDE systems. Accordingly, there is a pressing need for efficient numerical methods so that accurate approximate solutions can be found for these models over extended periods. Motivated by the above discussion, designing an innovative iterative algorithm to produce analytical solutions to nonlinear FPDE systems is the main aim of our study. The motivation of this study is to present an analytical method, called LRPSM, to solve a nonlinear system of FPDEs. To assess the efficacy and accuracy of this method, we apply it to three nonlinear systems of FPDEs and compare the results obtained with the exact solutions and with solutions obtained by other methods. To the best of our knowledge, the proposed method has not previously been applied in the literature to find analytical solutions to Broer–Kaup and Burgers' systems of fractional order, which intensely motivated this work.

This study primarily aims to generate accurate approximate solutions to nonlinear FPDE systems in the Caputo sense, subject to proper initial conditions, by using an innovative analytical algorithm. This algorithm, called the Laplace FPSM, was suggested and proved in [37]. It is worth mentioning that this newly introduced method relies on transforming the considered equation into the LT space, so that a sequence of Laplace series solutions to the new equation form is established, and then the solution to the considered equation can be established by utilizing the inverse LT. Without perturbation, linearization, or discretization, this innovative method can be applied to generate the FPS expansion solutions for both linear and nonlinear FPDEs [38,39]. Furthermore, this technique, unlike the conventional FPSM, necessitates neither matching the corresponding coefficient terms nor the use of a recursion relation. The offered technique is based on the limit concept for finding the variable coefficients. Unlike the FPSM, which requires computing different fractional derivatives numerous times during the solution steps, only a few computations are needed to determine the specified coefficients. Therefore, the proposed method is capable of yielding closed-form solutions, in addition to accurate approximate solutions, via a fast-convergent series.

The rest of the article is organized as follows. A review of some necessary definitions, properties, and theorems concerning fractional calculus, the Laplace transform, and the Laplace fractional expansion is presented in Section 2. The methodology for solving a system of nonlinear time-FPDEs by the Laplace FPSM is deeply investigated in Section 3. In Section 4, the Broer-Kaup (BK) system of nonlinear time-FPDEs and two Burgers' systems of nonlinear time-FPDEs are solved to show that our approach is accurate and applicable. The results are discussed graphically and numerically in Section 5. Finally, Section 6 is devoted to the conclusions.

#### **2. Preliminary Concepts**

This section is devoted to overviewing the essential definitions and theorems of fractional differentiation, in addition to giving a brief overview of some preliminary definitions and necessary theorems regarding the LT, which will be used in Sections 3 and 4.

**Definition 1.** *For $n \in \mathbb{N}$ and $\alpha \in \mathbb{R}^+$, the time-fractional derivative in the Caputo sense of the real-valued function $\mathcal{U}(x,t)$ is defined as [3]:*

$$\mathfrak{D}_t^{\alpha}\, \mathcal{U}(x,t) = \begin{cases} \mathcal{I}_t^{n-\alpha}\left(D_t^n\, \mathcal{U}(x,t)\right), & n-1 < \alpha \le n, \\ D_t^n\, \mathcal{U}(x,t), & \alpha = n, \end{cases}$$

*where $D_t^n = \frac{\partial^n}{\partial t^n}$, and $\mathcal{I}_t^{\alpha}$ is the R-L fractional integral operator, which is given by:*

$$\mathcal{I}_t^{\alpha}\, \mathcal{U}(x,t) = \begin{cases} \frac{1}{\Gamma(\alpha)} \int_0^t \frac{\mathcal{U}(x,\eta)}{(t-\eta)^{1-\alpha}}\, d\eta, & 0 \le \eta < t,\ \alpha > 0, \\ \mathcal{U}(x,t), & \alpha = 0. \end{cases}$$

*Consequently, for $n-1 < \alpha \le n$, $\beta > -1$, and $t \ge 0$, the operators $\mathfrak{D}_t^{\alpha}$ and $\mathcal{I}_t^{\alpha}$ satisfy the following properties:*
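As a hedged numerical illustration of Definition 1 (our sketch, not part of the paper), the Caputo derivative for 0 < *α* < 1 can be approximated with the classical L1 discretization and checked against the textbook property $\mathfrak{D}_t^{\alpha}\, t^2 = \frac{2}{\Gamma(3-\alpha)} t^{2-\alpha}$:

```python
# L1 discretization (a standard scheme; our illustration) of the Caputo
# derivative of u at time T for 0 < alpha < 1:
#   D^alpha u(T) ~ tau^(-alpha)/Gamma(2-alpha)
#                  * sum_j [(n-j)^(1-alpha) - (n-j-1)^(1-alpha)] * (u_{j+1} - u_j).
import math

def caputo_l1(u, alpha, T, n=2000):
    tau = T / n
    acc = 0.0
    for j in range(n):
        w = (n - j) ** (1 - alpha) - (n - j - 1) ** (1 - alpha)
        acc += w * (u((j + 1) * tau) - u(j * tau))
    return acc / (tau ** alpha * math.gamma(2 - alpha))

alpha, T = 0.5, 1.0
approx = caputo_l1(lambda t: t * t, alpha, T)
exact = 2 * T ** (2 - alpha) / math.gamma(3 - alpha)
print(approx, exact)
```

The L1 scheme converges at rate O(*τ*^(2−*α*)), so a grid of 2000 points already matches the exact value to several digits.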


**Definition 2.** *The Laplace transform (LT) of a piecewise continuous function $\mathcal{U}(x,t)$ on $I \times [0,\infty)$ of exponential order δ is given by [38]:*

$$\mathfrak{U}(x,\mathfrak{s}) = \mathcal{L}[\mathcal{U}(x,t)] := \int_0^\infty e^{-\mathfrak{s}t}\, \mathcal{U}(x,t)\, dt, \quad \mathfrak{s} > \delta,$$

*and the inverse LT of the transform function $\mathfrak{U}(x,\mathfrak{s})$ is given by:*

$$\mathcal{U}(x,t) = \mathcal{L}^{-1}[\mathfrak{U}(x,\mathfrak{s})] = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} e^{\mathfrak{s}t}\, \mathfrak{U}(x,\mathfrak{s})\, d\mathfrak{s}, \quad c = \mathrm{Re}(\mathfrak{s}) > \delta_0,$$

*where δ*<sup>0</sup> *lies in the right half plane of the absolute convergence of the Laplace integral.*

**Lemma 1.** *Let $\mathcal{U}(x,t)$ and $\mathcal{V}(x,t)$ be two piecewise continuous functions defined on $I \times [0,\infty)$ and of exponential orders $\delta_1$ and $\delta_2$, respectively, where $\delta_1 < \delta_2$. Suppose that $\mathfrak{U}(x,\mathfrak{s}) = \mathcal{L}[\mathcal{U}(x,t)]$, $\mathfrak{V}(x,\mathfrak{s}) = \mathcal{L}[\mathcal{V}(x,t)]$, and $a, b \in \mathbb{R}$. Then [38]:*


**Lemma 2.** *Let $\mathcal{U}(x,t)$ be a piecewise continuous function defined on $I \times [0,\infty)$ and of exponential order δ, and let $\mathfrak{U}(x,\mathfrak{s}) = \mathcal{L}[\mathcal{U}(x,t)]$. Then [31]:*


**Proof.** The proof is in [38].

**Theorem 1.** *Let $\mathcal{U}(x,t)$ be a piecewise continuous function defined on $I \times [0,\infty)$ and of exponential order δ. Suppose that the function $\mathfrak{U}(x,\mathfrak{s}) = \mathcal{L}[\mathcal{U}(x,t)]$ has the following fractional expansion (FE) [38]:*

$$\mathfrak{U}(x,\mathfrak{s}) = \sum_{n=0}^{\infty} \frac{\mathfrak{A}_n(x)}{\mathfrak{s}^{n\alpha+1}}, \ x \in I,\ \mathfrak{s} > \delta, \ 0 < \alpha \le 1.$$

*Then, $\mathfrak{A}_n(x) = \mathfrak{D}_t^{n\alpha}\, \mathcal{U}(x,0)$.*

**Proof.** The proof is in [38].

**Remark 1.** *The inverse LT $\mathcal{L}^{-1}[\mathfrak{U}(x,\mathfrak{s})] = \mathcal{U}(x,t)$ in Theorem 1 has the following fractional series expansion (FSE) form:*

$$\mathcal{U}(x,t) = \sum_{n=0}^{\infty} \mathfrak{D}_t^{n\alpha}\, \mathcal{U}(x,0)\, \frac{t^{n\alpha}}{\Gamma(n\alpha+1)}, \ 0 < \alpha \le 1,\ t > 0.$$
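For the classical case *α* = 1, the expansion in Theorem 1 reduces to the ordinary Laplace/Taylor pair. A tiny sanity check (ours, not from the paper) for $\mathcal{U}(x,t) = e^{x+t}$: then $\mathfrak{D}_t^{n}\,\mathcal{U}(x,0) = e^x$ for every *n*, and the expansion $\sum_n e^x/\mathfrak{s}^{n+1}$ is the geometric series for the transform $\mathcal{L}[e^{x+t}] = e^x/(\mathfrak{s}-1)$, $\mathfrak{s} > 1$:

```python
# Geometric-series check (ours) of Theorem 1's expansion at alpha = 1 for
# U(x, t) = exp(x + t): every coefficient A_n(x) equals exp(x).
import math

x, s, N = 0.4, 2.5, 80
partial = sum(math.exp(x) / s ** (n + 1) for n in range(N))
closed = math.exp(x) / (s - 1.0)
print(partial, closed)
```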

**Theorem 2.** *Let $\mathcal{U}(x,t)$ be a function of exponential order δ defined on $I \times [0,\infty)$, and let $\mathfrak{U}(x,\mathfrak{s}) = \mathcal{L}[\mathcal{U}(x,t)]$ be representable as the FE in Theorem 1. If $\left|\mathfrak{s}\,\mathcal{L}\left[\mathfrak{D}_t^{(n+1)\alpha}\,\mathcal{U}(x,t)\right]\right| \le M(x)$ on $I \times (\delta, \gamma]$, where $0 < \alpha \le 1$, then the remainder $R_n(x,\mathfrak{s})$ of the FE in Theorem 1 satisfies the following inequality [38]:*

$$|R_n(x,\mathfrak{s})| \le \frac{M(x)}{\mathfrak{s}^{1+(n+1)\alpha}}, \ x \in I, \ \delta < \mathfrak{s} \le \gamma.$$

**Proof.** The proof is in [38].

**Theorem 3.** *If $\alpha \in (0,1]$ and $\|\mathcal{U}_{k+1}(x,t)\| \le \varrho\,\|\mathcal{U}_k(x,t)\|$ for some $0 < \varrho < 1$, for all $k \in \mathbb{N}$ and $0 < t < T < 1$, then the series of numerical solutions converges to the exact solution [39].*

**Proof.** We notice that for all 0 < *t* < *T* < 1,

$$\|\mathcal{U}(x,t) - \mathcal{U}_k(x,t)\| = \left\|\sum_{m=k+1}^{\infty} \mathcal{U}_m(x,t)\right\| \le \sum_{m=k+1}^{\infty} \|\mathcal{U}_m(x,t)\| \le \|\mathcal{Q}(\eta)\| \sum_{m=k+1}^{\infty} \varrho^m = \frac{\varrho^{k+1}}{1-\varrho}\,\|\mathcal{Q}(\eta)\| \to 0 \text{ as } k \to \infty. \ \square$$
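The geometric tail estimate used in this proof is easy to see numerically (a toy check of ours, with hypothetical values ϱ = 0.6 and *k* = 10):

```python
# Toy illustration (ours) of the geometric tail bound in the proof: if
# ||U_m|| <= ||Q|| * r^m with 0 < r < 1, the truncation error after k terms
# is at most ||Q|| * r^(k+1) / (1 - r), which tends to 0 as k grows.
r, k = 0.6, 10
tail = sum(r ** m for m in range(k + 1, 2000))   # tail sum, truncated far out
bound = r ** (k + 1) / (1 - r)                   # closed-form geometric bound
print(tail, bound)
```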

#### **3. The Methodology of Laplace RPSM**

In this part, we present the fundamental idea of the Laplace RPSM for solving the system of time FPDEs with initial conditions. Our strategy for using the proposed scheme is to rely on coupling the Laplace transform and the RPS approach. More precisely, consider the system of FPDEs with the initial conditions of the form:

$$\begin{cases} \mathfrak{D}_t^{\alpha}\, \mathcal{U}_j(\eta,t) = \mathcal{A}_1[\mathcal{U}(\eta,t)] + \mathcal{A}_2[\mathcal{U}(\eta,t)], \ 0 < \alpha \le 1, \\ \mathcal{U}_j(\eta,0) = \mathcal{P}_j(\eta), \ j = 1,2,\dots,n, \end{cases}\tag{1}$$

where $\mathcal{A}_1$, $\mathcal{A}_2$ are two linear or nonlinear operators, $\mathcal{U}(\eta,t) = (\mathcal{U}_1(\eta,t), \mathcal{U}_2(\eta,t), \dots, \mathcal{U}_n(\eta,t))$ is the unknown vector function to be determined, and $\eta = (\eta_1, \eta_2, \dots, \eta_m) \in \mathbb{R}^m$, $n, m \in \mathbb{N}$. Here, $\mathfrak{D}_t^{\alpha}$ refers to the time-fractional derivative of order $\alpha \in (0,1]$ in the Caputo sense.

To build the approximate solution of (1) by using the Laplace RPSM, one can accomplish the following procedure:

Step 1: Taking the LT of both sides of (1), employing the initial data of (1), and relying on Lemma 2, part (2), we get:

$$u_j(\eta,\mathfrak{s}) = \frac{\mathcal{P}_j(\eta)}{\mathfrak{s}} + \frac{1}{\mathfrak{s}^{\alpha}}\left(\mathcal{L}\{\mathcal{A}_1[\mathcal{U}(\eta,t)]\} + \mathcal{L}\{\mathcal{A}_2[\mathcal{U}(\eta,t)]\}\right),\tag{2}$$

where $u_j(\eta,\mathfrak{s}) = \mathcal{L}[\mathcal{U}_j(\eta,t)](\mathfrak{s})$, $\mathfrak{s} > \delta$.

Step 2: Based on Theorem 1, we suppose that the approximate solution of the Laplace Equation (2) has the following Laplace fractional expansions:

$$\begin{aligned} u_1(\eta,\mathfrak{s}) &= \frac{\mathcal{P}_1(\eta)}{\mathfrak{s}} + \sum_{n=1}^{\infty} \frac{\mathfrak{A}_{1,n}(\eta)}{\mathfrak{s}^{n\alpha+1}}, \ \eta \in I, \ \mathfrak{s} > \delta \ge 0, \\ u_2(\eta,\mathfrak{s}) &= \frac{\mathcal{P}_2(\eta)}{\mathfrak{s}} + \sum_{n=1}^{\infty} \frac{\mathfrak{A}_{2,n}(\eta)}{\mathfrak{s}^{n\alpha+1}}, \ \eta \in I, \ \mathfrak{s} > \delta \ge 0, \\ &\ \ \vdots \\ u_n(\eta,\mathfrak{s}) &= \frac{\mathcal{P}_n(\eta)}{\mathfrak{s}} + \sum_{m=1}^{\infty} \frac{\mathfrak{A}_{n,m}(\eta)}{\mathfrak{s}^{m\alpha+1}}, \ \eta \in I, \ \mathfrak{s} > \delta \ge 0, \end{aligned}\tag{3}$$

and the *k*-th Laplace series solutions take the following form:

$$\begin{aligned} u_{1,k}(\eta,\mathfrak{s}) &= \frac{\mathcal{P}_1(\eta)}{\mathfrak{s}} + \sum_{n=1}^{k} \frac{\mathfrak{A}_{1,n}(\eta)}{\mathfrak{s}^{n\alpha+1}}, \ \eta \in I, \ \mathfrak{s} > \delta \ge 0, \\ u_{2,k}(\eta,\mathfrak{s}) &= \frac{\mathcal{P}_2(\eta)}{\mathfrak{s}} + \sum_{n=1}^{k} \frac{\mathfrak{A}_{2,n}(\eta)}{\mathfrak{s}^{n\alpha+1}}, \ \eta \in I, \ \mathfrak{s} > \delta \ge 0, \\ &\ \ \vdots \\ u_{n,k}(\eta,\mathfrak{s}) &= \frac{\mathcal{P}_n(\eta)}{\mathfrak{s}} + \sum_{m=1}^{k} \frac{\mathfrak{A}_{n,m}(\eta)}{\mathfrak{s}^{m\alpha+1}}, \ \eta \in I, \ \mathfrak{s} > \delta \ge 0. \end{aligned}\tag{4}$$

Step 3: Define the *k*-th Laplace fractional residual function of (2) as:

$$\mathcal{L}\left(\operatorname{Res}_{u_k}(\eta,\mathfrak{s})\right) = u_{j,k}(\eta,\mathfrak{s}) - \frac{\mathcal{P}_j(\eta)}{\mathfrak{s}} - \frac{1}{\mathfrak{s}^{\alpha}}\left(\mathcal{L}\{\mathcal{A}_1[\mathcal{U}_k(\eta,t)]\} + \mathcal{L}\{\mathcal{A}_2[\mathcal{U}_k(\eta,t)]\}\right),\tag{5}$$

and the Laplace residual function of (2) can be defined as:

$$\lim_{k\to\infty} \mathcal{L}\left(\operatorname{Res}_{u_k}(\eta,\mathfrak{s})\right) = \mathcal{L}\left(\operatorname{Res}_u(\eta,\mathfrak{s})\right) = u_j(\eta,\mathfrak{s}) - \frac{\mathcal{P}_j(\eta)}{\mathfrak{s}} - \frac{1}{\mathfrak{s}^{\alpha}}\left(\mathcal{L}\{\mathcal{A}_1[\mathcal{U}(\eta,t)]\} + \mathcal{L}\{\mathcal{A}_2[\mathcal{U}(\eta,t)]\}\right).\tag{6}$$

As in [37–39], some of the beneficial facts about the Laplace residual function, which are fundamental in constructing the approximate solution, are listed as follows:


Step 4: Substitute the *k*-th Laplace series solution (4) into the *k*-th Laplace fractional residual function (5).

Step 5: By solving the system $\lim_{\mathfrak{s}\to\infty} \mathfrak{s}^{k\alpha+1}\,\mathcal{L}\left(\operatorname{Res}_{u_k}(\eta,\mathfrak{s})\right) = 0$, the unknown coefficients $\mathfrak{A}_k(\eta)$, for $k = 1, 2, 3, \dots$, can easily be found. Then, we collect the obtained variable coefficients in the Laplace fractional expansion series (4), $u_{j,k}(\eta,\mathfrak{s})$.

Step 6: The approximate solutions $\mathcal{U}_{j,k}(\eta,t)$ of the main Equation (1) can be attained by applying the inverse Laplace transform operator to both sides of the obtained Laplace series solutions.
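Steps 1–6 can be mimicked end-to-end on a toy problem. The following SymPy sketch (ours, not from the paper; the linear problem $\mathfrak{D}_t^{\alpha} u = u_x$ with $u(x,0) = e^x$ and $\alpha = 1/2$ is chosen purely for illustration) extracts the first three expansion coefficients via the multiply-by-$\mathfrak{s}^{k\alpha+1}$-and-take-the-limit step:

```python
# Illustrative LRPSM sketch (ours) for the toy linear problem
#   D_t^a u = u_x,  u(x, 0) = exp(x),  a = 1/2.
# In Laplace space (Step 1):  U(x,s) = exp(x)/s + s^(-a) * L{u_x}.
import sympy as sp

x = sp.symbols('x', real=True)
s = sp.symbols('s', positive=True)
a = sp.Rational(1, 2)              # fractional order alpha
A = sp.symbols('A')                # current unknown coefficient A_k(x)

U = sp.exp(x) / s                  # Step 2: series ansatz starts at P(x)/s
coeffs = []
for k in range(1, 4):
    ansatz = U + A / s**(k * a + 1)                       # k-th Laplace series ansatz
    # Step 3: Laplace residual of  u = P/s + s^(-a) * L{u_x}
    res = ansatz - sp.exp(x) / s - s**(-a) * sp.diff(ansatz, x)
    # Steps 4-5: multiply by s^(k*a + 1), let s -> oo, solve for A_k
    eq = sp.limit(sp.expand(s**(k * a + 1) * res), s, sp.oo)
    Ak = sp.solve(eq, A)[0]
    coeffs.append(sp.simplify(Ak))
    U = U + Ak / s**(k * a + 1)

print(coeffs)   # for this toy problem every coefficient equals exp(x)
```

Step 6 (inverting term by term) then gives the series $u(x,t) = e^x \sum_n t^{n\alpha}/\Gamma(n\alpha+1)$ for this toy problem.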

#### **4. Numerical Examples**

In this section, we demonstrate the efficiency and applicability of the Laplace RPSM by testing it on three nonlinear time-FPDE systems. It should be noted here that all numerical and symbolic calculations are made using the Mathematica 12 software package.

**Example 1.** *Consider the following Broer-Kaup system of nonlinear time-FPDEs:*

$$\begin{aligned} \frac{\partial^a \mathcal{U}}{\partial t^a} + \mathcal{U} \frac{\partial \mathcal{U}}{\partial x} + \frac{\partial \mathcal{V}}{\partial x} &= 0, \\ \frac{\partial^a \mathcal{V}}{\partial t^a} + \frac{\partial \mathcal{U}}{\partial x} + \frac{\partial (\mathcal{U} \mathcal{V})}{\partial x} + \frac{\partial^3 \mathcal{U}}{\partial x^3} &= 0, \end{aligned} \tag{7}$$

*subject to ICs*

$$\mathcal{U}(x,0) = 1 + 2\tanh(x), \qquad \mathcal{V}(x,0) = 1 - 2\tanh^2(x),$$

*where $\alpha \in (0,1]$ and $(x,t) \in \mathbb{R} \times [0,1]$. The exact solutions when $\alpha = 1$ are $(\mathcal{U}(x,t), \mathcal{V}(x,t)) = \left(1 - 2\tanh(t - x),\ 1 - 2\tanh^2(x - t)\right)$.*
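The stated *α* = 1 pair can be verified against the classical system (7) with a short stdlib-only check (ours, not part of the paper); the derivatives below are written out by the chain rule with T = tanh(*x* − *t*) and S = sech²(*x* − *t*) = 1 − T²:

```python
# Check (ours) that u = 1 + 2 tanh(x - t), v = 1 - 2 tanh^2(x - t) satisfy
#   u_t + u u_x + v_x = 0,
#   v_t + u_x + (u v)_x + u_xxx = 0        (the alpha = 1 Broer-Kaup system).
import math

def residuals(x, t):
    T = math.tanh(x - t)
    S = 1.0 - T * T                    # sech^2(x - t)
    u, v = 1.0 + 2.0 * T, 1.0 - 2.0 * T * T
    u_t, u_x = -2.0 * S, 2.0 * S       # chain rule: dT/dt = -S, dT/dx = S
    v_t, v_x = 4.0 * T * S, -4.0 * T * S
    u_xxx = -4.0 * S * S + 8.0 * T * T * S
    uv_x = u_x * v + u * v_x           # product rule for (u v)_x
    r1 = u_t + u * u_x + v_x
    r2 = v_t + u_x + uv_x + u_xxx
    return r1, r2

for (x, t) in [(0.0, 0.0), (0.7, 0.3), (-1.2, 0.5)]:
    print(residuals(x, t))             # both residuals vanish at every point
```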

By applying the LT operator on (7) and using the second part of Lemma 2 and the ICs of (7), the Laplace fractional equations are:

$$
\begin{split}
\mathfrak{U}(x,\mathfrak{s}) &= \frac{1+2\tanh(x)}{\mathfrak{s}} - \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathfrak{U}\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathfrak{U}\}\right\} - \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathfrak{V}\}\right\}, \\
\mathfrak{V}(x,\mathfrak{s}) &= \frac{1-2\tanh^2(x)}{\mathfrak{s}} - \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathfrak{U}\}\right\} - \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\left(\mathcal{L}^{-1}\{\mathfrak{U}\}\mathcal{L}^{-1}\{\mathfrak{V}\}\right)\right\} - \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial^3}{\partial x^3}\mathcal{L}^{-1}\{\mathfrak{U}\}\right\},
\end{split} \tag{8}
$$

where $\mathfrak{U}(x,\mathfrak{s}) = \mathcal{L}[\mathcal{U}(x,t)]$ and $\mathfrak{V}(x,\mathfrak{s}) = \mathcal{L}[\mathcal{V}(x,t)]$.

According to the foregoing discussion of the proposed method, the *k*-th Laplace series solutions $\mathfrak{U}_k(x,\mathfrak{s})$ and $\mathfrak{V}_k(x,\mathfrak{s})$ of (8) are expressed as:

$$\begin{split} \mathfrak{U}_k(x,\mathfrak{s}) &= \frac{1+2\tanh(x)}{\mathfrak{s}} + \sum_{n=1}^{k} \frac{\mathfrak{A}_n(x)}{\mathfrak{s}^{n\alpha+1}}, \\ \mathfrak{V}_k(x,\mathfrak{s}) &= \frac{1-2\tanh^2(x)}{\mathfrak{s}} + \sum_{n=1}^{k} \frac{g_n(x)}{\mathfrak{s}^{n\alpha+1}}. \end{split}\tag{9}$$

Hence, the *k*-th Laplace fractional residual functions of (8) are defined as:

$$\begin{split} \mathcal{L}\left(\operatorname{Res}_{\mathfrak{U}_k}(x,\mathfrak{s})\right) &= \sum_{n=1}^{k} \frac{\mathfrak{A}_n(x)}{\mathfrak{s}^{n\alpha+1}} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathfrak{U}_k\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathfrak{U}_k\}\right\} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathfrak{V}_k\}\right\}, \\ \mathcal{L}\left(\operatorname{Res}_{\mathfrak{V}_k}(x,\mathfrak{s})\right) &= \sum_{n=1}^{k} \frac{g_n(x)}{\mathfrak{s}^{n\alpha+1}} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathfrak{U}_k\}\right\} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\left(\mathcal{L}^{-1}\{\mathfrak{U}_k\}\mathcal{L}^{-1}\{\mathfrak{V}_k\}\right)\right\} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial^3}{\partial x^3}\mathcal{L}^{-1}\{\mathfrak{U}_k\}\right\}. \end{split}\tag{10}$$

The 1-st Laplace fractional residual functions can be obtained by letting *k* = 1 in (10):

$$\begin{split} \mathcal{L}\left(\operatorname{Res}_{\mathfrak{U}_1}(x,\mathfrak{s})\right) &= \frac{\mathfrak{A}_1(x)}{\mathfrak{s}^{\alpha+1}} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\left\{\frac{1+2\tanh(x)}{\mathfrak{s}} + \frac{\mathfrak{A}_1(x)}{\mathfrak{s}^{\alpha+1}}\right\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\left\{\frac{1+2\tanh(x)}{\mathfrak{s}} + \frac{\mathfrak{A}_1(x)}{\mathfrak{s}^{\alpha+1}}\right\}\right\} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\mathcal{L}^{-1}\left\{\frac{1-2\tanh^2(x)}{\mathfrak{s}} + \frac{g_1(x)}{\mathfrak{s}^{\alpha+1}}\right\}\right\} \\ &= \frac{1}{\mathfrak{s}^{\alpha+1}}\left(\mathfrak{A}_1(x) + 2\operatorname{sech}^2(x)\right) + \frac{1}{\mathfrak{s}^{2\alpha+1}}\left(2\mathfrak{A}_1(x)\operatorname{sech}^2(x) + g_1'(x) + \mathfrak{A}_1'(x) + 2\mathfrak{A}_1'(x)\tanh(x)\right) + \frac{1}{\mathfrak{s}^{3\alpha+1}}\left(\mathfrak{A}_1(x)\mathfrak{A}_1'(x)\right)\frac{\Gamma(2\alpha+1)}{(\Gamma(\alpha+1))^2}, \\ \mathcal{L}\left(\operatorname{Res}_{\mathfrak{V}_1}(x,\mathfrak{s})\right) &= \frac{g_1(x)}{\mathfrak{s}^{\alpha+1}} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\mathcal{L}^{-1}\left\{\frac{1+2\tanh(x)}{\mathfrak{s}} + \frac{\mathfrak{A}_1(x)}{\mathfrak{s}^{\alpha+1}}\right\}\right\} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\left(\mathcal{L}^{-1}\left\{\frac{1+2\tanh(x)}{\mathfrak{s}} + \frac{\mathfrak{A}_1(x)}{\mathfrak{s}^{\alpha+1}}\right\}\mathcal{L}^{-1}\left\{\frac{1-2\tanh^2(x)}{\mathfrak{s}} + \frac{g_1(x)}{\mathfrak{s}^{\alpha+1}}\right\}\right)\right\} \\ &\quad + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial^3}{\partial x^3}\mathcal{L}^{-1}\left\{\frac{1+2\tanh(x)}{\mathfrak{s}} + \frac{\mathfrak{A}_1(x)}{\mathfrak{s}^{\alpha+1}}\right\}\right\} \\ &= \frac{1}{\mathfrak{s}^{\alpha+1}}\left(g_1(x) - 4\tanh(x)\operatorname{sech}^2(x)\right) + \frac{1}{\mathfrak{s}^{2\alpha+1}}\left(2g_1(x)\operatorname{sech}^2(x) - 4\mathfrak{A}_1(x)\tanh(x)\operatorname{sech}^2(x) + g_1'(x) + 2g_1'(x)\tanh(x) + 2\mathfrak{A}_1'(x)\operatorname{sech}^2(x) + \mathfrak{A}_1^{(3)}(x)\right) \\ &\quad + \frac{1}{\mathfrak{s}^{3\alpha+1}}\left(\mathfrak{A}_1(x)g_1'(x) + g_1(x)\mathfrak{A}_1'(x)\right)\frac{\Gamma(2\alpha+1)}{(\Gamma(\alpha+1))^2}. \end{split}\tag{11}$$

To find the 1-st Laplace series solutions of (8), we simply take $\lim_{\mathfrak{s}\to\infty} \mathfrak{s}^{\alpha+1}\left(\mathcal{L}\left(\operatorname{Res}_{\mathfrak{U}_1}(x,\mathfrak{s})\right), \mathcal{L}\left(\operatorname{Res}_{\mathfrak{V}_1}(x,\mathfrak{s})\right)\right) = (0,0)$, which yields $\mathfrak{A}_1(x) = -2\operatorname{sech}^2(x)$ and $g_1(x) = 4\tanh(x)\operatorname{sech}^2(x)$. So, the 1-st Laplace series solutions of (8) are:

$$\begin{split} \mathfrak{U}_1(x,\mathfrak{s}) &= \frac{1+2\tanh(x)}{\mathfrak{s}} - \frac{2\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}}, \\ \mathfrak{V}_1(x,\mathfrak{s}) &= \frac{1-2\tanh^2(x)}{\mathfrak{s}} + \frac{4\tanh(x)\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}}. \end{split}\tag{12}$$

For *k* = 2 in (10), the 2-nd Laplace residual functions can be written as:

$$\begin{split} \mathcal{L}\left(\operatorname{Res}_{\mathfrak{U}_2}(x,\mathfrak{s})\right) &= -\frac{2\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}} + \frac{\mathfrak{A}_2(x)}{\mathfrak{s}^{2\alpha+1}} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\left\{\frac{1+2\tanh(x)}{\mathfrak{s}} - \frac{2\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}} + \frac{\mathfrak{A}_2(x)}{\mathfrak{s}^{2\alpha+1}}\right\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\left\{\frac{1+2\tanh(x)}{\mathfrak{s}} - \frac{2\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}} + \frac{\mathfrak{A}_2(x)}{\mathfrak{s}^{2\alpha+1}}\right\}\right\} \\ &\quad + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\mathcal{L}^{-1}\left\{\frac{1-2\tanh^2(x)}{\mathfrak{s}} + \frac{4\tanh(x)\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}} + \frac{g_2(x)}{\mathfrak{s}^{2\alpha+1}}\right\}\right\} \\ &= \frac{1}{\mathfrak{s}^{2\alpha+1}}\left(\mathfrak{A}_2(x) + 4\tanh(x)\operatorname{sech}^2(x)\right) + \frac{1}{\mathfrak{s}^{3\alpha+1}}\left(2\mathfrak{A}_2(x)\operatorname{sech}^2(x) + g_2'(x) + \mathfrak{A}_2'(x) + 2\mathfrak{A}_2'(x)\tanh(x) - 8\tanh(x)\operatorname{sech}^4(x)\,\frac{\Gamma(2\alpha+1)}{(\Gamma(\alpha+1))^2}\right) \\ &\quad + \frac{1}{\mathfrak{s}^{4\alpha+1}}\left(4\mathfrak{A}_2(x)\tanh(x)\operatorname{sech}^2(x) - 2\mathfrak{A}_2'(x)\operatorname{sech}^2(x)\right)\frac{\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)} + \frac{1}{\mathfrak{s}^{5\alpha+1}}\left(\mathfrak{A}_2(x)\mathfrak{A}_2'(x)\right)\frac{\Gamma(4\alpha+1)}{(\Gamma(2\alpha+1))^2}, \\ \mathcal{L}\left(\operatorname{Res}_{\mathfrak{V}_2}(x,\mathfrak{s})\right) &= \frac{4\tanh(x)\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}} + \frac{g_2(x)}{\mathfrak{s}^{2\alpha+1}} + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\mathcal{L}^{-1}\left\{\frac{1+2\tanh(x)}{\mathfrak{s}} - \frac{2\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}} + \frac{\mathfrak{A}_2(x)}{\mathfrak{s}^{2\alpha+1}}\right\}\right\} \\ &\quad + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\left(\mathcal{L}^{-1}\left\{\frac{1+2\tanh(x)}{\mathfrak{s}} - \frac{2\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}} + \frac{\mathfrak{A}_2(x)}{\mathfrak{s}^{2\alpha+1}}\right\}\mathcal{L}^{-1}\left\{\frac{1-2\tanh^2(x)}{\mathfrak{s}} + \frac{4\tanh(x)\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}} + \frac{g_2(x)}{\mathfrak{s}^{2\alpha+1}}\right\}\right)\right\} \\ &\quad + \frac{1}{\mathfrak{s}^{\alpha}}\mathcal{L}\left\{\frac{\partial^3}{\partial x^3}\mathcal{L}^{-1}\left\{\frac{1+2\tanh(x)}{\mathfrak{s}} - \frac{2\operatorname{sech}^2(x)}{\mathfrak{s}^{\alpha+1}} + \frac{\mathfrak{A}_2(x)}{\mathfrak{s}^{2\alpha+1}}\right\}\right\} \\ &= \frac{1}{\mathfrak{s}^{2\alpha+1}}\left(g_2(x) - 4\operatorname{sech}^4(x)(\cosh(2x)-2)\right) \\ &\quad + \frac{1}{\mathfrak{s}^{3\alpha+1}}\left(\mathfrak{A}_2^{(3)}(x) - 4\mathfrak{A}_2(x)\tanh(x)\operatorname{sech}^2(x) + g_2'(x) + 2g_2'(x)\tanh(x) + 2\mathfrak{A}_2'(x)\operatorname{sech}^2(x) + 2g_2(x)\operatorname{sech}^2(x) + \left(16\cosh(2x)\operatorname{sech}^6(x) - 24\operatorname{sech}^6(x)\right)\frac{\Gamma(2\alpha+1)}{(\Gamma(\alpha+1))^2}\right) \\ &\quad + \frac{1}{\mathfrak{s}^{4\alpha+1}}\left(4g_2(x)\tanh(x)\operatorname{sech}^2(x) - 8\mathfrak{A}_2(x)\operatorname{sech}^2(x) + 12\mathfrak{A}_2(x)\operatorname{sech}^4(x) - 2g_2'(x)\operatorname{sech}^2(x) + 4\mathfrak{A}_2'(x)\tanh(x)\operatorname{sech}^2(x)\right)\frac{\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)} \\ &\quad + \frac{1}{\mathfrak{s}^{5\alpha+1}}\left(\mathfrak{A}_2(x)g_2'(x) + g_2(x)\mathfrak{A}_2'(x)\right)\frac{\Gamma(4\alpha+1)}{(\Gamma(2\alpha+1))^2}. \end{split}\tag{13}$$

To find the 2nd Laplace series solution of (8), we solve $\lim_{s\to\infty} s^{2\alpha+1}\big(\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_2}\big](x,s), \mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_2}\big](x,s)\big) = (0,0)$; evaluating the limits gives $h_2(x) = -4\tanh(x)\operatorname{sech}^2(x)$ and $g_2(x) = 4\operatorname{sech}^4(x)(\cosh(2x)-2)$. So, the 2nd Laplace series solution of (8) can be expressed as:

$$\begin{split}
\mathcal{U}_2(x,s) &= \frac{1+2\tanh(x)}{s} - \frac{2\operatorname{sech}^2(x)}{s^{\alpha+1}} - \frac{4\tanh(x)\operatorname{sech}^2(x)}{s^{2\alpha+1}},\\
\mathcal{V}_2(x,s) &= \frac{1-2\tanh^2(x)}{s} + \frac{4\tanh(x)\operatorname{sech}^2(x)}{s^{\alpha+1}} + \frac{4\operatorname{sech}^4(x)(\cosh(2x)-2)}{s^{2\alpha+1}}.
\end{split}\tag{14}$$
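As a quick sanity check (ours, not part of the original derivation), at $\alpha = 1$ the expansion (14) is the Laplace transform of the degree-2 Taylor polynomial in $t$ of the exact solution $1 + 2\tanh(x-t)$; a minimal SymPy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t')

# Degree-2 Taylor expansion (in t) of the exact solution 1 + 2*tanh(x - t)
ser = sp.series(1 + 2*sp.tanh(x - t), t, 0, 3).removeO()

# Coefficients read off from Eq. (14) after inverting the Laplace transform
# at alpha = 1: t^0: 1 + 2 tanh x, t^1: -2 sech^2 x, t^2: -4 tanh x sech^2 x / 2!
expected = (1 + 2*sp.tanh(x)
            - 2*sp.sech(x)**2 * t
            - 2*sp.tanh(x)*sp.sech(x)**2 * t**2)

# Compare numerically at sample points (robust against hyperbolic rewriting)
for x0, t0 in [(0.3, 0.2), (1.1, -0.4), (-0.7, 0.5)]:
    assert abs(float((ser - expected).subs({x: x0, t: t0}))) < 1e-12
```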

Similarly, for *k* = 3, we have:

$$\begin{split}
\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_3}\big](x,s) ={}& -\frac{2\operatorname{sech}^2(x)}{s^{\alpha+1}} - \frac{4\tanh(x)\operatorname{sech}^2(x)}{s^{2\alpha+1}} + \frac{h_3(x)}{s^{3\alpha+1}} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_3\}\,\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_3\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_3\}\right\},\\
\mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_3}\big](x,s) ={}& \frac{4\tanh(x)\operatorname{sech}^2(x)}{s^{\alpha+1}} + \frac{4\operatorname{sech}^4(x)(\cosh(2x)-2)}{s^{2\alpha+1}} + \frac{g_3(x)}{s^{3\alpha+1}} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_3\}\right\}\\
&+ \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial}{\partial x}\Big(\mathcal{L}^{-1}\{\mathcal{U}_3\}\,\mathcal{L}^{-1}\{\mathcal{V}_3\}\Big)\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^3}{\partial x^3}\mathcal{L}^{-1}\{\mathcal{U}_3\}\right\},
\end{split}\tag{15}$$

where $\mathcal{U}_3(x,s) = \frac{1+2\tanh(x)}{s} - \frac{2\operatorname{sech}^2(x)}{s^{\alpha+1}} - \frac{4\tanh(x)\operatorname{sech}^2(x)}{s^{2\alpha+1}} + \frac{h_3(x)}{s^{3\alpha+1}}$ and $\mathcal{V}_3(x,s) = \frac{1-2\tanh^2(x)}{s} + \frac{4\tanh(x)\operatorname{sech}^2(x)}{s^{\alpha+1}} + \frac{4\operatorname{sech}^4(x)(\cosh(2x)-2)}{s^{2\alpha+1}} + \frac{g_3(x)}{s^{3\alpha+1}}$.

Solving $\lim_{s\to\infty} s^{3\alpha+1}\big(\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_3}\big](x,s), \mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_3}\big](x,s)\big) = (0,0)$ yields $h_3(x) = -4\operatorname{sech}^4(x)(\cosh(2x)-2)$ and $g_3(x) = 8\operatorname{sech}^4(x)\tanh(x)(\cosh(2x)-5)$. So, the 3rd Laplace series solution of (8) can be written as:

$$\begin{split}
\mathcal{U}_3(x,s) &= \frac{1+2\tanh(x)}{s} - \frac{2\operatorname{sech}^2(x)}{s^{\alpha+1}} - \frac{4\tanh(x)\operatorname{sech}^2(x)}{s^{2\alpha+1}} - \frac{4\operatorname{sech}^4(x)(\cosh(2x)-2)}{s^{3\alpha+1}},\\
\mathcal{V}_3(x,s) &= \frac{1-2\tanh^2(x)}{s} + \frac{4\tanh(x)\operatorname{sech}^2(x)}{s^{\alpha+1}} + \frac{4\operatorname{sech}^4(x)(\cosh(2x)-2)}{s^{2\alpha+1}} + \frac{8\operatorname{sech}^4(x)\tanh(x)(\cosh(2x)-5)}{s^{3\alpha+1}}.
\end{split}\tag{16}$$

Using Mathematica, we can repeat the preceding steps for an arbitrary $k$; using the fact that $\lim_{s\to\infty} s^{k\alpha+1}\big(\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_k}\big](x,s), \mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_k}\big](x,s)\big) = (0,0)$, one obtains $h_k(x) = (-1)^k \frac{d^{k}}{dx^{k}}(2\tanh(x))$ and $g_k(x) = (-1)^k \frac{d^{k}}{dx^{k}}\big({-2\tanh^2(x)}\big)$. Thus, the $k$-th Laplace series solution of (8) can be reformulated as the fractional expansions:

$$\begin{split}
\mathcal{U}_k(x,s) &= \frac{1+2\tanh(x)}{s} - \frac{\frac{d}{dx}(2\tanh(x))}{s^{\alpha+1}} + \frac{\frac{d^{2}}{dx^{2}}(2\tanh(x))}{s^{2\alpha+1}} - \frac{\frac{d^{3}}{dx^{3}}(2\tanh(x))}{s^{3\alpha+1}} + \cdots + (-1)^k\frac{\frac{d^{k}}{dx^{k}}(2\tanh(x))}{s^{k\alpha+1}}\\
&= \frac{1+2\tanh(x)}{s} + \sum_{n=1}^{k} (-1)^n\,\frac{\frac{d^{n}}{dx^{n}}(2\tanh(x))}{s^{n\alpha+1}},\\
\mathcal{V}_k(x,s) &= \frac{1-2\tanh^2(x)}{s} - \frac{\frac{d}{dx}(-2\tanh^2(x))}{s^{\alpha+1}} + \frac{\frac{d^{2}}{dx^{2}}(-2\tanh^2(x))}{s^{2\alpha+1}} - \frac{\frac{d^{3}}{dx^{3}}(-2\tanh^2(x))}{s^{3\alpha+1}} + \cdots + (-1)^k\frac{\frac{d^{k}}{dx^{k}}(-2\tanh^2(x))}{s^{k\alpha+1}}\\
&= \frac{1-2\tanh^2(x)}{s} + \sum_{n=1}^{k} (-1)^n\,\frac{\frac{d^{n}}{dx^{n}}(-2\tanh^2(x))}{s^{n\alpha+1}}.
\end{split}\tag{17}$$
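The closed-form coefficients can be checked symbolically; a minimal SymPy sketch (the helper names `h` and `g` are ours, not from the text) comparing $(-1)^k\frac{d^{k}}{dx^{k}}(2\tanh x)$ and $(-1)^k\frac{d^{k}}{dx^{k}}(-2\tanh^2 x)$ with the values found above for $k = 2, 3$:

```python
import sympy as sp

x = sp.symbols('x')

# Closed-form coefficients of the fractional expansions (helper names are ours)
def h(k):
    return (-1)**k * sp.diff(2*sp.tanh(x), x, k)

def g(k):
    return (-1)**k * sp.diff(-2*sp.tanh(x)**2, x, k)

# Values obtained from the limit process for k = 2, 3
pairs = [
    (h(2), -4*sp.tanh(x)*sp.sech(x)**2),
    (h(3), -4*sp.sech(x)**4*(sp.cosh(2*x) - 2)),
    (g(2),  4*sp.sech(x)**4*(sp.cosh(2*x) - 2)),
    (g(3),  8*sp.sech(x)**4*sp.tanh(x)*(sp.cosh(2*x) - 5)),
]
# Numeric sampling avoids relying on hyperbolic simplification heuristics
for closed, found in pairs:
    for x0 in (0.3, 1.1, -0.7):
        assert abs(float((closed - found).subs(x, x0))) < 1e-10
```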

Finally, applying the inverse Laplace transform to the obtained expansions (17), we conclude that the $k$-th approximate solution of the time-fractional nonlinear system (7) can be formulated as:

$$\begin{split}
\mathcal{U}_k(x,t) &= 1 + 2\tanh(x) + \sum_{n=1}^{k} (-1)^n \frac{d^{n}}{dx^{n}}(2\tanh(x))\,\frac{t^{n\alpha}}{\Gamma(n\alpha+1)},\\
\mathcal{V}_k(x,t) &= 1 - 2\tanh^2(x) + \sum_{n=1}^{k} (-1)^n \frac{d^{n}}{dx^{n}}\big({-2\tanh^2(x)}\big)\,\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}.
\end{split}\tag{18}$$
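As an illustration (our own check; the helper name `U_k` is hypothetical), the truncated series (18) can be evaluated numerically, and at $\alpha = 1$ it should converge to the exact solution $1 + 2\tanh(x-t)$:

```python
import math
import sympy as sp

x = sp.symbols('x')

def U_k(xv, t, k, alpha=1.0):
    """Truncated series solution of Example 1, built from Eq. (18)."""
    total = 1 + 2*math.tanh(xv)
    for n in range(1, k + 1):
        dn = sp.diff(2*sp.tanh(x), x, n)  # d^n/dx^n (2 tanh x)
        total += ((-1)**n * float(dn.subs(x, xv))
                  * t**(n*alpha) / math.gamma(n*alpha + 1))
    return total

# At alpha = 1 the series reproduces the exact solution 1 + 2 tanh(x - t)
approx = U_k(0.5, 0.1, k=12)
exact = 1 + 2*math.tanh(0.5 - 0.1)
assert abs(approx - exact) < 1e-10
```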

When $k \to \infty$ and $\alpha = 1$ in (18), we obtain the Maclaurin series expansions of the closed form:

$$\begin{split}
\mathcal{U}(x,t) &= 1 + 2\tanh(x) + \sum_{n=1}^{\infty} (-1)^n \frac{d^{n}}{dx^{n}}(2\tanh(x))\,\frac{t^{n}}{n!} = 1 + \sum_{n=0}^{\infty} (-1)^n \frac{d^{n}}{dx^{n}}(2\tanh(x))\,\frac{t^{n}}{n!}\\
&= 1 + \sum_{n=0}^{\infty} (-1)^n \frac{d^{n}}{d\tau^{n}}\big(2\tanh(x+\tau)\big)\Big|_{\tau=0}\,\frac{t^{n}}{n!} = 1 + \sum_{n=0}^{\infty} \frac{d^{n}}{d\tau^{n}}\big(2\tanh(x-\tau)\big)\Big|_{\tau=0}\,\frac{t^{n}}{n!}\\
&= 1 + 2\tanh(x-t) = 1 - 2\tanh(t-x),\\
\mathcal{V}(x,t) &= 1 - 2\tanh^2(x) + \sum_{n=1}^{\infty} (-1)^n \frac{d^{n}}{dx^{n}}\big({-2\tanh^2(x)}\big)\,\frac{t^{n}}{n!} = 1 + \sum_{n=0}^{\infty} (-1)^n \frac{d^{n}}{dx^{n}}\big({-2\tanh^2(x)}\big)\,\frac{t^{n}}{n!}\\
&= 1 + \sum_{n=0}^{\infty} (-1)^n \frac{d^{n}}{d\tau^{n}}\big({-2\tanh^2(x+\tau)}\big)\Big|_{\tau=0}\,\frac{t^{n}}{n!} = 1 + \sum_{n=0}^{\infty} \frac{d^{n}}{d\tau^{n}}\big({-2\tanh^2(x-\tau)}\big)\Big|_{\tau=0}\,\frac{t^{n}}{n!}\\
&= 1 - 2\tanh^2(x-t),
\end{split}\tag{19}$$

which is in total agreement with the exact solution.

**Example 2.** *Consider the nonlinear time-fractional Burgers' system with the IVP:*

$$\begin{aligned}
\frac{\partial^{\alpha}\mathcal{U}}{\partial t^{\alpha}} - \frac{\partial^{2}\mathcal{U}}{\partial x^{2}} - 2\,\mathcal{U}\frac{\partial\mathcal{U}}{\partial x} + \mathcal{U}\frac{\partial\mathcal{V}}{\partial x} + \mathcal{V}\frac{\partial\mathcal{U}}{\partial x} &= 0,\\
\frac{\partial^{\alpha}\mathcal{V}}{\partial t^{\alpha}} - \frac{\partial^{2}\mathcal{V}}{\partial x^{2}} - 2\,\mathcal{V}\frac{\partial\mathcal{V}}{\partial x} + \mathcal{U}\frac{\partial\mathcal{V}}{\partial x} + \mathcal{V}\frac{\partial\mathcal{U}}{\partial x} &= 0,\\
\text{subject to ICs } \mathcal{U}(x,0) = \sin(x) \text{ and } \mathcal{V}(x,0) &= \sin(x),
\end{aligned}\tag{20}$$

*where* $\alpha \in (0,1]$ *and* $(x,t) \in \mathbb{R} \times [0,1]$*. The exact solutions when* $\alpha = 1$ *are* $\mathcal{U}(x,t) = \sin(x)e^{-t}$ *and* $\mathcal{V}(x,t) = \sin(x)e^{-t}$.

By taking the Laplace transform operator on both sides of (20) and using the second part of Lemma 2 and the initial conditions of (20), the Laplace fractional equations will be:

$$\begin{split}
\mathcal{U}(x,s) ={}& \frac{\sin(x)}{s} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^2}{\partial x^2}\mathcal{L}^{-1}\{\mathcal{U}\}\right\} + \frac{2}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}\}\right\} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}\}\right\} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}\}\right\},\\
\mathcal{V}(x,s) ={}& \frac{\sin(x)}{s} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^2}{\partial x^2}\mathcal{L}^{-1}\{\mathcal{V}\}\right\} + \frac{2}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}\}\right\} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}\}\right\} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}\}\right\},
\end{split}\tag{21}$$

where $\mathcal{U}(x,s) = \mathcal{L}[\mathcal{U}(x,t)]$ and $\mathcal{V}(x,s) = \mathcal{L}[\mathcal{V}(x,t)]$.

According to the preceding discussion of the proposed method, the $k$-th Laplace series solutions $\mathcal{U}_k(x,s)$ and $\mathcal{V}_k(x,s)$ of (21) are expressed as:

$$\begin{split}
\mathcal{U}_k(x,s) &= \frac{\sin(x)}{s} + \sum_{n=1}^{k} \frac{h_n(x)}{s^{n\alpha+1}},\\
\mathcal{V}_k(x,s) &= \frac{\sin(x)}{s} + \sum_{n=1}^{k} \frac{g_n(x)}{s^{n\alpha+1}}.
\end{split}\tag{22}$$

Likewise, the $k$-th Laplace residual functions of (21) are defined as:

$$\begin{split}
\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_k}\big](x,s) ={}& \sum_{n=1}^{k}\frac{h_n(x)}{s^{n\alpha+1}} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^2}{\partial x^2}\mathcal{L}^{-1}\{\mathcal{U}_k\}\right\} - \frac{2}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_k\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_k\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_k\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_k\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_k\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_k\}\right\},\\
\mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_k}\big](x,s) ={}& \sum_{n=1}^{k}\frac{g_n(x)}{s^{n\alpha+1}} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^2}{\partial x^2}\mathcal{L}^{-1}\{\mathcal{V}_k\}\right\} - \frac{2}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_k\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_k\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_k\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_k\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_k\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_k\}\right\}.
\end{split}\tag{23}$$

Letting $k = 1$ in (23), the 1st Laplace residual functions are:

$$\begin{split}
\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_1}\big](x,s) ={}& \frac{h_1(x)}{s^{\alpha+1}} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^2}{\partial x^2}\mathcal{L}^{-1}\{\mathcal{U}_1\}\right\} - \frac{2}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_1\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_1\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_1\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_1\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_1\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_1\}\right\}\\
={}& \frac{1}{s^{\alpha+1}}\big(h_1(x) + \sin(x)\big) + \frac{1}{s^{2\alpha+1}}\Big(\cos(x)\big(g_1(x) - h_1(x)\big) + \sin(x)\big(g_1'(x) - h_1'(x)\big) - h_1''(x)\Big)\\
&+ \frac{1}{s^{3\alpha+1}}\Big(h_1(x)g_1'(x) + g_1(x)h_1'(x) - 2h_1(x)h_1'(x)\Big)\frac{\Gamma(2\alpha+1)}{(\Gamma(\alpha+1))^2},\\[4pt]
\mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_1}\big](x,s) ={}& \frac{g_1(x)}{s^{\alpha+1}} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^2}{\partial x^2}\mathcal{L}^{-1}\{\mathcal{V}_1\}\right\} - \frac{2}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_1\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_1\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_1\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_1\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_1\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_1\}\right\}\\
={}& \frac{1}{s^{\alpha+1}}\big(g_1(x) + \sin(x)\big) + \frac{1}{s^{2\alpha+1}}\Big(\cos(x)\big(h_1(x) - g_1(x)\big) + \sin(x)\big(h_1'(x) - g_1'(x)\big) - g_1''(x)\Big)\\
&+ \frac{1}{s^{3\alpha+1}}\Big(g_1(x)h_1'(x) + h_1(x)g_1'(x) - 2g_1(x)g_1'(x)\Big)\frac{\Gamma(2\alpha+1)}{(\Gamma(\alpha+1))^2},
\end{split}\tag{24}$$

where $\mathcal{U}_1(x,s) = \frac{\sin(x)}{s} + \frac{h_1(x)}{s^{\alpha+1}}$ and $\mathcal{V}_1(x,s) = \frac{\sin(x)}{s} + \frac{g_1(x)}{s^{\alpha+1}}$.

To find the 1st Laplace series solution of (21), we take the limit $\lim_{s\to\infty} s^{\alpha+1}\big(\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_1}\big](x,s), \mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_1}\big](x,s)\big) = (0,0)$, which yields $h_1(x) = -\sin(x)$ and $g_1(x) = -\sin(x)$. Hence, the 1st Laplace series solutions of (21) are:

$$\begin{split}
\mathcal{U}_1(x,s) &= \frac{\sin(x)}{s} - \frac{\sin(x)}{s^{\alpha+1}},\\
\mathcal{V}_1(x,s) &= \frac{\sin(x)}{s} - \frac{\sin(x)}{s^{\alpha+1}}.
\end{split}\tag{25}$$

Letting $k = 2$ in (23), the 2nd Laplace residual functions are:

$$\begin{split}
\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_2}\big](x,s) ={}& -\frac{\sin(x)}{s^{\alpha+1}} + \frac{h_2(x)}{s^{2\alpha+1}} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^2}{\partial x^2}\mathcal{L}^{-1}\{\mathcal{U}_2\}\right\} - \frac{2}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_2\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_2\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_2\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_2\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_2\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_2\}\right\}\\
={}& \frac{1}{s^{2\alpha+1}}\big(h_2(x) - \sin(x)\big) + \frac{1}{s^{3\alpha+1}}\Big(\cos(x)\big(g_2(x) - h_2(x)\big) + \sin(x)\big(g_2'(x) - h_2'(x)\big) - h_2''(x)\Big)\\
&+ \frac{1}{s^{4\alpha+1}}\Big(\cos(x)\big(h_2(x) - g_2(x)\big) + \sin(x)\big(h_2'(x) - g_2'(x)\big)\Big)\frac{\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)}\\
&+ \frac{1}{s^{5\alpha+1}}\Big(h_2(x)g_2'(x) + g_2(x)h_2'(x) - 2h_2(x)h_2'(x)\Big)\frac{\Gamma(4\alpha+1)}{(\Gamma(2\alpha+1))^2},\\[4pt]
\mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_2}\big](x,s) ={}& -\frac{\sin(x)}{s^{\alpha+1}} + \frac{g_2(x)}{s^{2\alpha+1}} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^2}{\partial x^2}\mathcal{L}^{-1}\{\mathcal{V}_2\}\right\} - \frac{2}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_2\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_2\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_2\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_2\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_2\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_2\}\right\}\\
={}& \frac{1}{s^{2\alpha+1}}\big(g_2(x) - \sin(x)\big) + \frac{1}{s^{3\alpha+1}}\Big(\cos(x)\big(h_2(x) - g_2(x)\big) + \sin(x)\big(h_2'(x) - g_2'(x)\big) - g_2''(x)\Big)\\
&+ \frac{1}{s^{4\alpha+1}}\Big(\cos(x)\big(g_2(x) - h_2(x)\big) + \sin(x)\big(g_2'(x) - h_2'(x)\big)\Big)\frac{\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)}\\
&+ \frac{1}{s^{5\alpha+1}}\Big(g_2(x)h_2'(x) + h_2(x)g_2'(x) - 2g_2(x)g_2'(x)\Big)\frac{\Gamma(4\alpha+1)}{(\Gamma(2\alpha+1))^2},
\end{split}\tag{26}$$

where $\mathcal{U}_2(x,s) = \frac{\sin(x)}{s} - \frac{\sin(x)}{s^{\alpha+1}} + \frac{h_2(x)}{s^{2\alpha+1}}$ and $\mathcal{V}_2(x,s) = \frac{\sin(x)}{s} - \frac{\sin(x)}{s^{\alpha+1}} + \frac{g_2(x)}{s^{2\alpha+1}}$.

To find the 2nd Laplace series solution of (21), we solve $\lim_{s\to\infty} s^{2\alpha+1}\big(\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_2}\big](x,s), \mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_2}\big](x,s)\big) = (0,0)$; evaluating the limits gives $h_2(x) = \sin(x)$ and $g_2(x) = \sin(x)$. Hence, the 2nd Laplace series solutions of (21) are:

$$\begin{split}
\mathcal{U}_2(x,s) &= \frac{\sin(x)}{s} - \frac{\sin(x)}{s^{\alpha+1}} + \frac{\sin(x)}{s^{2\alpha+1}},\\
\mathcal{V}_2(x,s) &= \frac{\sin(x)}{s} - \frac{\sin(x)}{s^{\alpha+1}} + \frac{\sin(x)}{s^{2\alpha+1}}.
\end{split}\tag{27}$$

Similarly, for *k* = 3, we have:

$$\begin{split}
\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_3}\big](x,s) ={}& -\frac{\sin(x)}{s^{\alpha+1}} + \frac{\sin(x)}{s^{2\alpha+1}} + \frac{h_3(x)}{s^{3\alpha+1}} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^2}{\partial x^2}\mathcal{L}^{-1}\{\mathcal{U}_3\}\right\} - \frac{2}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_3\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_3\}\right\}\\
&+ \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_3\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_3\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_3\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_3\}\right\},\\[4pt]
\mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_3}\big](x,s) ={}& -\frac{\sin(x)}{s^{\alpha+1}} + \frac{\sin(x)}{s^{2\alpha+1}} + \frac{g_3(x)}{s^{3\alpha+1}} - \frac{1}{s^{\alpha}}\mathcal{L}\left\{\frac{\partial^2}{\partial x^2}\mathcal{L}^{-1}\{\mathcal{V}_3\}\right\} - \frac{2}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_3\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_3\}\right\}\\
&+ \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{U}_3\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{V}_3\}\right\} + \frac{1}{s^{\alpha}}\mathcal{L}\left\{\mathcal{L}^{-1}\{\mathcal{V}_3\}\frac{\partial}{\partial x}\mathcal{L}^{-1}\{\mathcal{U}_3\}\right\},
\end{split}\tag{28}$$

where $\mathcal{U}_3(x,s) = \frac{\sin(x)}{s} - \frac{\sin(x)}{s^{\alpha+1}} + \frac{\sin(x)}{s^{2\alpha+1}} + \frac{h_3(x)}{s^{3\alpha+1}}$ and $\mathcal{V}_3(x,s) = \frac{\sin(x)}{s} - \frac{\sin(x)}{s^{\alpha+1}} + \frac{\sin(x)}{s^{2\alpha+1}} + \frac{g_3(x)}{s^{3\alpha+1}}$.

Solving $\lim_{s\to\infty} s^{3\alpha+1}\big(\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_3}\big](x,s), \mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_3}\big](x,s)\big) = (0,0)$ yields $h_3(x) = -\sin(x)$ and $g_3(x) = -\sin(x)$. Hence, the 3rd Laplace series solutions of (21) are:

$$\begin{split}
\mathcal{U}_3(x,s) &= \frac{\sin(x)}{s} - \frac{\sin(x)}{s^{\alpha+1}} + \frac{\sin(x)}{s^{2\alpha+1}} - \frac{\sin(x)}{s^{3\alpha+1}},\\
\mathcal{V}_3(x,s) &= \frac{\sin(x)}{s} - \frac{\sin(x)}{s^{\alpha+1}} + \frac{\sin(x)}{s^{2\alpha+1}} - \frac{\sin(x)}{s^{3\alpha+1}}.
\end{split}\tag{29}$$

Using Mathematica, we can repeat the above steps for any $k$; by the fact that $\lim_{s\to\infty} s^{k\alpha+1}\big(\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_k}\big](x,s), \mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_k}\big](x,s)\big) = (0,0)$, one obtains $h_k(x) = (-1)^k \sin(x)$ and $g_k(x) = (-1)^k \sin(x)$. Thus, the $k$-th Laplace series solutions of (21) can be formulated as the fractional expansions:

$$\begin{split}
\mathcal{U}_k(x,s) &= \sin(x)\left(\frac{1}{s} - \frac{1}{s^{\alpha+1}} + \frac{1}{s^{2\alpha+1}} - \frac{1}{s^{3\alpha+1}} + \cdots + (-1)^k\frac{1}{s^{k\alpha+1}}\right) = \sin(x)\sum_{n=0}^{k}\frac{(-1)^n}{s^{n\alpha+1}},\\
\mathcal{V}_k(x,s) &= \sin(x)\left(\frac{1}{s} - \frac{1}{s^{\alpha+1}} + \frac{1}{s^{2\alpha+1}} - \frac{1}{s^{3\alpha+1}} + \cdots + (-1)^k\frac{1}{s^{k\alpha+1}}\right) = \sin(x)\sum_{n=0}^{k}\frac{(-1)^n}{s^{n\alpha+1}}.
\end{split}\tag{30}$$

In the end, we take the inverse Laplace transform of the obtained expansions (30) to find that the $k$-th approximate solutions of the nonlinear system of time-FPDEs (20) take the form:

$$\begin{split}
\mathcal{U}_k(x,t) &= \sin(x)\sum_{n=0}^{k}\frac{(-1)^n\,t^{n\alpha}}{\Gamma(n\alpha+1)},\\
\mathcal{V}_k(x,t) &= \sin(x)\sum_{n=0}^{k}\frac{(-1)^n\,t^{n\alpha}}{\Gamma(n\alpha+1)}.
\end{split}\tag{31}$$
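Numerically, (31) is $\sin(x)$ times a truncated Mittag-Leffler-type series; a minimal sketch (the helper name `u_k` is ours), checking the $\alpha = 1$ case against the exact solution $\sin(x)e^{-t}$:

```python
import math

def u_k(x, t, k, alpha):
    """Truncated series of Eq. (31): sin(x) * sum (-1)^n t^{n a} / Gamma(n a + 1)."""
    series = sum((-1)**n * t**(n*alpha) / math.gamma(n*alpha + 1)
                 for n in range(k + 1))
    return math.sin(x) * series

# For alpha = 1 the series is sin(x) e^{-t}, the exact solution of (20)
x, t = 1.2, 0.4
assert abs(u_k(x, t, k=25, alpha=1.0) - math.sin(x)*math.exp(-t)) < 1e-12
```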

When $k \to \infty$ and $\alpha = 1$ in (31), we obtain the Maclaurin series expansions of the closed forms:

$$\begin{aligned}
\mathcal{U}(x,t) &= \sin(x)e^{-t},\\
\mathcal{V}(x,t) &= \sin(x)e^{-t},
\end{aligned}\tag{32}$$

which is in total agreement with the exact solutions.

**Example 3.** *Consider the Burgers' system of nonlinear time-FPDEs:*

$$\begin{aligned}
\frac{\partial^{\alpha}\mathcal{U}}{\partial t^{\alpha}} + \frac{\partial\mathcal{V}}{\partial x}\frac{\partial\mathcal{W}}{\partial y} - \frac{\partial\mathcal{V}}{\partial y}\frac{\partial\mathcal{W}}{\partial x} + \mathcal{U} &= 0,\\
\frac{\partial^{\alpha}\mathcal{V}}{\partial t^{\alpha}} + \frac{\partial\mathcal{U}}{\partial x}\frac{\partial\mathcal{W}}{\partial y} + \frac{\partial\mathcal{U}}{\partial y}\frac{\partial\mathcal{W}}{\partial x} - \mathcal{V} &= 0,\\
\frac{\partial^{\alpha}\mathcal{W}}{\partial t^{\alpha}} + \frac{\partial\mathcal{U}}{\partial x}\frac{\partial\mathcal{V}}{\partial y} + \frac{\partial\mathcal{U}}{\partial y}\frac{\partial\mathcal{V}}{\partial x} - \mathcal{W} &= 0,
\end{aligned}\tag{33}$$

*subject to ICs*

$$\mathcal{U}(x,y,0) = e^{x+y},\quad \mathcal{V}(x,y,0) = e^{x-y}\quad \text{and}\quad \mathcal{W}(x,y,0) = e^{-x+y},$$

*where* $\alpha \in (0,1]$ *and* $(x,y,t) \in \mathbb{R}^2 \times [0,1]$*. The exact solutions when* $\alpha = 1$ *are* $\big(\mathcal{U}(x,y,t), \mathcal{V}(x,y,t), \mathcal{W}(x,y,t)\big) = \big(e^{x+y-t},\, e^{x-y+t},\, e^{-x+y+t}\big)$.

By taking the LT operator on both sides of (33) and using the second part of Lemma 2 and the ICs of (33), the Laplace fractional equations will be:

$$\begin{split}
\mathcal{U}(x,y,s) &= \frac{e^{x+y}}{s} - \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_x\mathcal{L}^{-1}\{\mathcal{V}\}\,D_y\mathcal{L}^{-1}\{\mathcal{W}\}\big\} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_y\mathcal{L}^{-1}\{\mathcal{V}\}\,D_x\mathcal{L}^{-1}\{\mathcal{W}\}\big\} - \frac{1}{s^{\alpha}}\,\mathcal{U},\\
\mathcal{V}(x,y,s) &= \frac{e^{x-y}}{s} - \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_x\mathcal{L}^{-1}\{\mathcal{U}\}\,D_y\mathcal{L}^{-1}\{\mathcal{W}\}\big\} - \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_y\mathcal{L}^{-1}\{\mathcal{U}\}\,D_x\mathcal{L}^{-1}\{\mathcal{W}\}\big\} + \frac{1}{s^{\alpha}}\,\mathcal{V},\\
\mathcal{W}(x,y,s) &= \frac{e^{-x+y}}{s} - \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_x\mathcal{L}^{-1}\{\mathcal{U}\}\,D_y\mathcal{L}^{-1}\{\mathcal{V}\}\big\} - \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_y\mathcal{L}^{-1}\{\mathcal{U}\}\,D_x\mathcal{L}^{-1}\{\mathcal{V}\}\big\} + \frac{1}{s^{\alpha}}\,\mathcal{W},
\end{split}\tag{34}$$

where $\mathcal{U}(x,y,s) = \mathcal{L}[\mathcal{U}(x,y,t)]$, $\mathcal{V}(x,y,s) = \mathcal{L}[\mathcal{V}(x,y,t)]$ and $\mathcal{W}(x,y,s) = \mathcal{L}[\mathcal{W}(x,y,t)]$. According to the preceding discussion of the proposed method, the $k$-th Laplace series solutions $\mathcal{U}_k(x,y,s)$, $\mathcal{V}_k(x,y,s)$ and $\mathcal{W}_k(x,y,s)$ of (34) are expressed as:

$$\begin{split}
\mathcal{U}_k(x,y,s) &= \frac{e^{x+y}}{s} + \sum_{n=1}^{k}\frac{h_n(x,y)}{s^{n\alpha+1}},\\
\mathcal{V}_k(x,y,s) &= \frac{e^{x-y}}{s} + \sum_{n=1}^{k}\frac{g_n(x,y)}{s^{n\alpha+1}},\\
\mathcal{W}_k(x,y,s) &= \frac{e^{-x+y}}{s} + \sum_{n=1}^{k}\frac{f_n(x,y)}{s^{n\alpha+1}}.
\end{split}\tag{35}$$

Likewise, the $k$-th Laplace fractional residual functions of (34) are defined as:

$$\begin{split}
\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_k}\big](x,y,s) &= \sum_{n=1}^{k}\frac{h_n(x,y)}{s^{n\alpha+1}} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_x\mathcal{L}^{-1}\{\mathcal{V}_k\}\,D_y\mathcal{L}^{-1}\{\mathcal{W}_k\}\big\} - \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_y\mathcal{L}^{-1}\{\mathcal{V}_k\}\,D_x\mathcal{L}^{-1}\{\mathcal{W}_k\}\big\} + \frac{1}{s^{\alpha}}\,\mathcal{U}_k,\\
\mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_k}\big](x,y,s) &= \sum_{n=1}^{k}\frac{g_n(x,y)}{s^{n\alpha+1}} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_x\mathcal{L}^{-1}\{\mathcal{U}_k\}\,D_y\mathcal{L}^{-1}\{\mathcal{W}_k\}\big\} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_y\mathcal{L}^{-1}\{\mathcal{U}_k\}\,D_x\mathcal{L}^{-1}\{\mathcal{W}_k\}\big\} - \frac{1}{s^{\alpha}}\,\mathcal{V}_k,\\
\mathcal{L}\big[\mathrm{Res}_{\mathcal{W}_k}\big](x,y,s) &= \sum_{n=1}^{k}\frac{f_n(x,y)}{s^{n\alpha+1}} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_x\mathcal{L}^{-1}\{\mathcal{U}_k\}\,D_y\mathcal{L}^{-1}\{\mathcal{V}_k\}\big\} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_y\mathcal{L}^{-1}\{\mathcal{U}_k\}\,D_x\mathcal{L}^{-1}\{\mathcal{V}_k\}\big\} - \frac{1}{s^{\alpha}}\,\mathcal{W}_k.
\end{split}\tag{36}$$

For $k = 1$ in (36), the 1st Laplace residual functions are expressed as:

$$\begin{split}
\mathcal{L}\big[\mathrm{Res}_{\mathcal{U}_1}\big](x,y,s) ={}& \frac{h_1(x,y)}{s^{\alpha+1}} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_x\mathcal{L}^{-1}\{\mathcal{V}_1\}\,D_y\mathcal{L}^{-1}\{\mathcal{W}_1\}\big\} - \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_y\mathcal{L}^{-1}\{\mathcal{V}_1\}\,D_x\mathcal{L}^{-1}\{\mathcal{W}_1\}\big\} + \frac{1}{s^{\alpha}}\,\mathcal{U}_1\\
={}& \frac{1}{s^{\alpha+1}}\big(h_1 + e^{x+y}\big) + \frac{1}{s^{2\alpha+1}}\left(e^{x-y}\left(\frac{\partial f_1}{\partial x} + \frac{\partial f_1}{\partial y}\right) + e^{-x+y}\left(\frac{\partial g_1}{\partial x} + \frac{\partial g_1}{\partial y}\right) + h_1\right)\\
&+ \frac{1}{s^{3\alpha+1}}\left(\frac{\partial g_1}{\partial x}\frac{\partial f_1}{\partial y} - \frac{\partial f_1}{\partial x}\frac{\partial g_1}{\partial y}\right)\frac{\Gamma(2\alpha+1)}{(\Gamma(\alpha+1))^2},\\[4pt]
\mathcal{L}\big[\mathrm{Res}_{\mathcal{V}_1}\big](x,y,s) ={}& \frac{g_1(x,y)}{s^{\alpha+1}} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_x\mathcal{L}^{-1}\{\mathcal{U}_1\}\,D_y\mathcal{L}^{-1}\{\mathcal{W}_1\}\big\} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_y\mathcal{L}^{-1}\{\mathcal{U}_1\}\,D_x\mathcal{L}^{-1}\{\mathcal{W}_1\}\big\} - \frac{1}{s^{\alpha}}\,\mathcal{V}_1\\
={}& \frac{1}{s^{\alpha+1}}\big(g_1 - e^{x-y}\big) + \frac{1}{s^{2\alpha+1}}\left(e^{x+y}\left(\frac{\partial f_1}{\partial x} + \frac{\partial f_1}{\partial y}\right) + e^{-x+y}\left(\frac{\partial h_1}{\partial x} - \frac{\partial h_1}{\partial y}\right) - g_1\right)\\
&+ \frac{1}{s^{3\alpha+1}}\left(\frac{\partial h_1}{\partial x}\frac{\partial f_1}{\partial y} + \frac{\partial f_1}{\partial x}\frac{\partial h_1}{\partial y}\right)\frac{\Gamma(2\alpha+1)}{(\Gamma(\alpha+1))^2},\\[4pt]
\mathcal{L}\big[\mathrm{Res}_{\mathcal{W}_1}\big](x,y,s) ={}& \frac{f_1(x,y)}{s^{\alpha+1}} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_x\mathcal{L}^{-1}\{\mathcal{U}_1\}\,D_y\mathcal{L}^{-1}\{\mathcal{V}_1\}\big\} + \frac{1}{s^{\alpha}}\mathcal{L}\big\{D_y\mathcal{L}^{-1}\{\mathcal{U}_1\}\,D_x\mathcal{L}^{-1}\{\mathcal{V}_1\}\big\} - \frac{1}{s^{\alpha}}\,\mathcal{W}_1\\
={}& \frac{1}{s^{\alpha+1}}\big(f_1 - e^{-x+y}\big) + \frac{1}{s^{2\alpha+1}}\left(e^{x+y}\left(\frac{\partial g_1}{\partial x} + \frac{\partial g_1}{\partial y}\right) - e^{x-y}\left(\frac{\partial h_1}{\partial x} - \frac{\partial h_1}{\partial y}\right) - f_1\right)\\
&+ \frac{1}{s^{3\alpha+1}}\left(\frac{\partial h_1}{\partial x}\frac{\partial g_1}{\partial y} + \frac{\partial g_1}{\partial x}\frac{\partial h_1}{\partial y}\right)\frac{\Gamma(2\alpha+1)}{(\Gamma(\alpha+1))^2},
\end{split}\tag{37}$$

where $\mathcal{U}_1 = \frac{e^{x+y}}{s} + \frac{h_1(x,y)}{s^{\alpha+1}}$, $\mathcal{V}_1 = \frac{e^{x-y}}{s} + \frac{g_1(x,y)}{s^{\alpha+1}}$ and $\mathcal{W}_1 = \frac{e^{-x+y}}{s} + \frac{f_1(x,y)}{s^{\alpha+1}}$.

To find the first Laplace series solution of (34), we take the limit $\lim_{s\to\infty} s^{\alpha+1}\big(\mathcal{L}(\mathit{Res}_{\mathcal{U}_1}(x,y,s)),\ \mathcal{L}(\mathit{Res}_{\mathcal{V}_1}(x,y,s)),\ \mathcal{L}(\mathit{Res}_{\mathcal{W}_1}(x,y,s))\big) = (0,0,0)$, which yields $u_1(x,y) = -e^{x+y}$, $g_1(x,y) = e^{x-y}$, and $f_1(x,y) = e^{-x+y}$. Hence, the first Laplace series solutions of (34) are:

$$
\begin{array}{l}
\mathcal{U}_1(x,y,s) = \dfrac{e^{x+y}}{s} - \dfrac{e^{x+y}}{s^{\alpha+1}}, \\[2mm]
\mathcal{V}_1(x,y,s) = \dfrac{e^{x-y}}{s} + \dfrac{e^{x-y}}{s^{\alpha+1}}, \\[2mm]
\mathcal{W}_1(x,y,s) = \dfrac{e^{-x+y}}{s} + \dfrac{e^{-x+y}}{s^{\alpha+1}}.
\end{array}
\tag{38}
$$

For *k* = 2 in (36), the second Laplace residual functions are:

$$
\begin{aligned}
\mathcal{L}\big(\mathit{Res}_{\mathcal{U}_2}(x,y,s)\big) ={}& -\frac{e^{x+y}}{s^{\alpha+1}} + \frac{u_2}{s^{2\alpha+1}} + \frac{1}{s^{\alpha}}\,\mathcal{L}\!\left[D_x\mathcal{L}^{-1}\!\left(\frac{e^{x-y}}{s}+\frac{e^{x-y}}{s^{\alpha+1}}+\frac{g_2}{s^{2\alpha+1}}\right)D_y\mathcal{L}^{-1}\!\left(\frac{e^{-x+y}}{s}+\frac{e^{-x+y}}{s^{\alpha+1}}+\frac{f_2}{s^{2\alpha+1}}\right)\right] \\
&- \frac{1}{s^{\alpha}}\,\mathcal{L}\!\left[D_y\mathcal{L}^{-1}\!\left(\frac{e^{x-y}}{s}+\frac{e^{x-y}}{s^{\alpha+1}}+\frac{g_2}{s^{2\alpha+1}}\right)D_x\mathcal{L}^{-1}\!\left(\frac{e^{-x+y}}{s}+\frac{e^{-x+y}}{s^{\alpha+1}}+\frac{f_2}{s^{2\alpha+1}}\right)\right] \\
&+ \frac{1}{s^{\alpha}}\!\left(\frac{e^{x+y}}{s}-\frac{e^{x+y}}{s^{\alpha+1}}+\frac{u_2}{s^{2\alpha+1}}\right) \\
={}& \frac{1}{s^{2\alpha+1}}\big(u_2-e^{x+y}\big) + \frac{1}{s^{3\alpha+1}}\!\left[e^{x-y}\!\left(\frac{\partial f_2}{\partial x}+\frac{\partial f_2}{\partial y}\right)+e^{-x+y}\!\left(\frac{\partial g_2}{\partial x}+\frac{\partial g_2}{\partial y}\right)+u_2\right] \\
&+ \frac{1}{s^{4\alpha+1}}\!\left[e^{x-y}\!\left(\frac{\partial f_2}{\partial x}+\frac{\partial f_2}{\partial y}\right)+e^{-x+y}\!\left(\frac{\partial g_2}{\partial x}+\frac{\partial g_2}{\partial y}\right)\right]\frac{\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)} \\
&+ \frac{1}{s^{5\alpha+1}}\!\left[\frac{\partial g_2}{\partial x}\frac{\partial f_2}{\partial y}-\frac{\partial f_2}{\partial x}\frac{\partial g_2}{\partial y}\right]\frac{\Gamma(4\alpha+1)}{(\Gamma(2\alpha+1))^2},
\end{aligned}
$$

$$
\begin{aligned}
\mathcal{L}\big(\mathit{Res}_{\mathcal{V}_2}(x,y,s)\big) ={}& \frac{e^{x-y}}{s^{\alpha+1}} + \frac{g_2}{s^{2\alpha+1}} + \frac{1}{s^{\alpha}}\,\mathcal{L}\!\left[D_x\mathcal{L}^{-1}\!\left(\frac{e^{x+y}}{s}-\frac{e^{x+y}}{s^{\alpha+1}}+\frac{u_2}{s^{2\alpha+1}}\right)D_y\mathcal{L}^{-1}\!\left(\frac{e^{-x+y}}{s}+\frac{e^{-x+y}}{s^{\alpha+1}}+\frac{f_2}{s^{2\alpha+1}}\right)\right] \\
&+ \frac{1}{s^{\alpha}}\,\mathcal{L}\!\left[D_y\mathcal{L}^{-1}\!\left(\frac{e^{x+y}}{s}-\frac{e^{x+y}}{s^{\alpha+1}}+\frac{u_2}{s^{2\alpha+1}}\right)D_x\mathcal{L}^{-1}\!\left(\frac{e^{-x+y}}{s}+\frac{e^{-x+y}}{s^{\alpha+1}}+\frac{f_2}{s^{2\alpha+1}}\right)\right] \\
&- \frac{1}{s^{\alpha}}\!\left(\frac{e^{x-y}}{s}+\frac{e^{x-y}}{s^{\alpha+1}}+\frac{g_2}{s^{2\alpha+1}}\right) \\
={}& \frac{1}{s^{2\alpha+1}}\big(g_2-e^{x-y}\big) + \frac{1}{s^{3\alpha+1}}\!\left[e^{x+y}\!\left(\frac{\partial f_2}{\partial x}+\frac{\partial f_2}{\partial y}\right)+e^{-x+y}\!\left(\frac{\partial u_2}{\partial x}-\frac{\partial u_2}{\partial y}\right)+g_2\right] \\
&+ \frac{1}{s^{4\alpha+1}}\!\left[-e^{x+y}\!\left(\frac{\partial f_2}{\partial x}+\frac{\partial f_2}{\partial y}\right)+e^{-x+y}\!\left(\frac{\partial u_2}{\partial x}-\frac{\partial u_2}{\partial y}\right)\right]\frac{\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)} \\
&+ \frac{1}{s^{5\alpha+1}}\!\left[\frac{\partial u_2}{\partial x}\frac{\partial f_2}{\partial y}+\frac{\partial f_2}{\partial x}\frac{\partial u_2}{\partial y}\right]\frac{\Gamma(4\alpha+1)}{(\Gamma(2\alpha+1))^2},
\end{aligned}
\tag{39}
$$

$$
\begin{aligned}
\mathcal{L}\big(\mathit{Res}_{\mathcal{W}_2}(x,y,s)\big) ={}& \frac{e^{-x+y}}{s^{\alpha+1}} + \frac{f_2}{s^{2\alpha+1}} + \frac{1}{s^{\alpha}}\,\mathcal{L}\!\left[D_x\mathcal{L}^{-1}\!\left(\frac{e^{x+y}}{s}-\frac{e^{x+y}}{s^{\alpha+1}}+\frac{u_2}{s^{2\alpha+1}}\right)D_y\mathcal{L}^{-1}\!\left(\frac{e^{x-y}}{s}+\frac{e^{x-y}}{s^{\alpha+1}}+\frac{g_2}{s^{2\alpha+1}}\right)\right] \\
&+ \frac{1}{s^{\alpha}}\,\mathcal{L}\!\left[D_y\mathcal{L}^{-1}\!\left(\frac{e^{x+y}}{s}-\frac{e^{x+y}}{s^{\alpha+1}}+\frac{u_2}{s^{2\alpha+1}}\right)D_x\mathcal{L}^{-1}\!\left(\frac{e^{x-y}}{s}+\frac{e^{x-y}}{s^{\alpha+1}}+\frac{g_2}{s^{2\alpha+1}}\right)\right] \\
&- \frac{1}{s^{\alpha}}\!\left(\frac{e^{-x+y}}{s}+\frac{e^{-x+y}}{s^{\alpha+1}}+\frac{f_2}{s^{2\alpha+1}}\right) \\
={}& \frac{1}{s^{2\alpha+1}}\big(f_2-e^{-x+y}\big) + \frac{1}{s^{3\alpha+1}}\!\left[e^{x+y}\!\left(\frac{\partial g_2}{\partial x}+\frac{\partial g_2}{\partial y}\right)-e^{x-y}\!\left(\frac{\partial u_2}{\partial x}-\frac{\partial u_2}{\partial y}\right)+f_2\right] \\
&- \frac{1}{s^{4\alpha+1}}\!\left[e^{x+y}\!\left(\frac{\partial g_2}{\partial x}+\frac{\partial g_2}{\partial y}\right)+e^{x-y}\!\left(\frac{\partial u_2}{\partial x}-\frac{\partial u_2}{\partial y}\right)\right]\frac{\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)} \\
&+ \frac{1}{s^{5\alpha+1}}\!\left[\frac{\partial g_2}{\partial x}\frac{\partial u_2}{\partial y}+\frac{\partial u_2}{\partial x}\frac{\partial g_2}{\partial y}\right]\frac{\Gamma(4\alpha+1)}{(\Gamma(2\alpha+1))^2}.
\end{aligned}
$$

To find the second Laplace series solution of (34), we take the limit $\lim_{s\to\infty} s^{2\alpha+1}\big(\mathcal{L}(\mathit{Res}_{\mathcal{U}_2}(x,y,s)),\ \mathcal{L}(\mathit{Res}_{\mathcal{V}_2}(x,y,s)),\ \mathcal{L}(\mathit{Res}_{\mathcal{W}_2}(x,y,s))\big) = (0,0,0)$; solving these limits gives $u_2(x,y) = e^{x+y}$, $g_2(x,y) = e^{x-y}$, and $f_2(x,y) = e^{-x+y}$. Hence, the second Laplace series solutions of (34) are:

$$
\begin{aligned}
\mathcal{U}_2(x,y,s) &= \frac{e^{x+y}}{s} - \frac{e^{x+y}}{s^{\alpha+1}} + \frac{e^{x+y}}{s^{2\alpha+1}}, \\
\mathcal{V}_2(x,y,s) &= \frac{e^{x-y}}{s} + \frac{e^{x-y}}{s^{\alpha+1}} + \frac{e^{x-y}}{s^{2\alpha+1}}, \\
\mathcal{W}_2(x,y,s) &= \frac{e^{-x+y}}{s} + \frac{e^{-x+y}}{s^{\alpha+1}} + \frac{e^{-x+y}}{s^{2\alpha+1}}.
\end{aligned}
\tag{40}
$$

Similarly, for *k* = 3, we have:

$$\mathcal{L}\big(\mathit{Res}_{\mathcal{U}_3}(x,y,s)\big) = \mathcal{U}_3 - \frac{e^{x+y}}{s} + \frac{1}{s^{\alpha}}\,\mathcal{L}\Big[D_x\mathcal{L}^{-1}(\mathcal{V}_3)\,D_y\mathcal{L}^{-1}(\mathcal{W}_3)\Big] - \frac{1}{s^{\alpha}}\,\mathcal{L}\Big[D_y\mathcal{L}^{-1}(\mathcal{V}_3)\,D_x\mathcal{L}^{-1}(\mathcal{W}_3)\Big] + \frac{1}{s^{\alpha}}\,\mathcal{U}_3,$$

$$\mathcal{L}\big(\mathit{Res}_{\mathcal{V}_3}(x,y,s)\big) = \mathcal{V}_3 - \frac{e^{x-y}}{s} + \frac{1}{s^{\alpha}}\,\mathcal{L}\Big[D_x\mathcal{L}^{-1}(\mathcal{U}_3)\,D_y\mathcal{L}^{-1}(\mathcal{W}_3)\Big] + \frac{1}{s^{\alpha}}\,\mathcal{L}\Big[D_y\mathcal{L}^{-1}(\mathcal{U}_3)\,D_x\mathcal{L}^{-1}(\mathcal{W}_3)\Big] - \frac{1}{s^{\alpha}}\,\mathcal{V}_3, \tag{41}$$

$$\mathcal{L}\big(\mathit{Res}_{\mathcal{W}_3}(x,y,s)\big) = \mathcal{W}_3 - \frac{e^{-x+y}}{s} + \frac{1}{s^{\alpha}}\,\mathcal{L}\Big[D_x\mathcal{L}^{-1}(\mathcal{U}_3)\,D_y\mathcal{L}^{-1}(\mathcal{V}_3)\Big] + \frac{1}{s^{\alpha}}\,\mathcal{L}\Big[D_y\mathcal{L}^{-1}(\mathcal{U}_3)\,D_x\mathcal{L}^{-1}(\mathcal{V}_3)\Big] - \frac{1}{s^{\alpha}}\,\mathcal{W}_3,$$

in which the third truncated expansions are

$$\mathcal{U}_3 = \frac{e^{x+y}}{s}-\frac{e^{x+y}}{s^{\alpha+1}}+\frac{e^{x+y}}{s^{2\alpha+1}}+\frac{u_3}{s^{3\alpha+1}},\qquad \mathcal{V}_3 = \frac{e^{x-y}}{s}+\frac{e^{x-y}}{s^{\alpha+1}}+\frac{e^{x-y}}{s^{2\alpha+1}}+\frac{g_3}{s^{3\alpha+1}},\qquad \mathcal{W}_3 = \frac{e^{-x+y}}{s}+\frac{e^{-x+y}}{s^{\alpha+1}}+\frac{e^{-x+y}}{s^{2\alpha+1}}+\frac{f_3}{s^{3\alpha+1}}.$$

By solving $\lim_{s\to\infty} s^{3\alpha+1}\big(\mathcal{L}(\mathit{Res}_{\mathcal{U}_3}(x,y,s)),\ \mathcal{L}(\mathit{Res}_{\mathcal{V}_3}(x,y,s)),\ \mathcal{L}(\mathit{Res}_{\mathcal{W}_3}(x,y,s))\big) = (0,0,0)$, it follows that $u_3(x,y) = -e^{x+y}$, $g_3(x,y) = e^{x-y}$, and $f_3(x,y) = e^{-x+y}$. Hence, the third Laplace series solutions of (34) are:

$$
\begin{aligned}
\mathcal{U}_3(x,y,s) &= \frac{e^{x+y}}{s} - \frac{e^{x+y}}{s^{\alpha+1}} + \frac{e^{x+y}}{s^{2\alpha+1}} - \frac{e^{x+y}}{s^{3\alpha+1}}, \\
\mathcal{V}_3(x,y,s) &= \frac{e^{x-y}}{s} + \frac{e^{x-y}}{s^{\alpha+1}} + \frac{e^{x-y}}{s^{2\alpha+1}} + \frac{e^{x-y}}{s^{3\alpha+1}}, \\
\mathcal{W}_3(x,y,s) &= \frac{e^{-x+y}}{s} + \frac{e^{-x+y}}{s^{\alpha+1}} + \frac{e^{-x+y}}{s^{2\alpha+1}} + \frac{e^{-x+y}}{s^{3\alpha+1}}.
\end{aligned}
\tag{42}
$$

Using Mathematica, we can repeat the above steps for any *k*; from the fact that $\lim_{s\to\infty} s^{k\alpha+1}\big(\mathcal{L}(\mathit{Res}_{\mathcal{U}_k}(x,y,s)),\ \mathcal{L}(\mathit{Res}_{\mathcal{V}_k}(x,y,s)),\ \mathcal{L}(\mathit{Res}_{\mathcal{W}_k}(x,y,s))\big) = (0,0,0)$, one obtains $u_k(x,y) = (-1)^k e^{x+y}$, $g_k(x,y) = e^{x-y}$, and $f_k(x,y) = e^{-x+y}$. Thus, the *k*th Laplace series solutions of (34) can be formulated by the fractional expansions:

$$
\begin{aligned}
\mathcal{U}_k(x,y,s) &= e^{x+y}\left(\frac{1}{s} - \frac{1}{s^{\alpha+1}} + \frac{1}{s^{2\alpha+1}} - \frac{1}{s^{3\alpha+1}} + \cdots + \frac{(-1)^k}{s^{k\alpha+1}}\right) = e^{x+y}\sum_{n=0}^{k}\frac{(-1)^n}{s^{n\alpha+1}}, \\
\mathcal{V}_k(x,y,s) &= e^{x-y}\left(\frac{1}{s} + \frac{1}{s^{\alpha+1}} + \frac{1}{s^{2\alpha+1}} + \frac{1}{s^{3\alpha+1}} + \cdots + \frac{1}{s^{k\alpha+1}}\right) = e^{x-y}\sum_{n=0}^{k}\frac{1}{s^{n\alpha+1}}, \\
\mathcal{W}_k(x,y,s) &= e^{-x+y}\left(\frac{1}{s} + \frac{1}{s^{\alpha+1}} + \frac{1}{s^{2\alpha+1}} + \frac{1}{s^{3\alpha+1}} + \cdots + \frac{1}{s^{k\alpha+1}}\right) = e^{-x+y}\sum_{n=0}^{k}\frac{1}{s^{n\alpha+1}}.
\end{aligned}
\tag{43}
$$

Finally, we apply the inverse LT to the obtained expansions (43) to conclude that the *k*th approximate solutions of the nonlinear system of time-FPDEs (33) have the form:

$$\begin{split} \mathcal{U}_k(x,y,t) &= e^{x+y} \sum_{n=0}^{k} \frac{(-1)^n t^{n\alpha}}{\Gamma(n\alpha+1)}, \\ \mathcal{V}_k(x,y,t) &= e^{x-y} \sum_{n=0}^{k} \frac{t^{n\alpha}}{\Gamma(n\alpha+1)}, \\ \mathcal{W}_k(x,y,t) &= e^{-x+y} \sum_{n=0}^{k} \frac{t^{n\alpha}}{\Gamma(n\alpha+1)}. \end{split} \tag{44}$$

When *k* → ∞ and *α* = 1 in (44), the series reduce to the Maclaurin expansions of the closed forms:

$$\begin{array}{l} \mathcal{U}(x, y, t) = e^{x + y - t}, \\ \mathcal{V}(x, y, t) = e^{x - y + t}, \\ \mathcal{W}(x, y, t) = e^{-x + y + t}, \end{array} \tag{45}$$

which are in complete agreement with the exact solutions.
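The convergence of the truncated expansions (44) toward the closed forms (45) is easy to check numerically. The following is a minimal Python sketch (the function name and the sample point are our own illustration, not taken from the paper):

```python
import math

def approx_solutions(x, y, t, alpha=1.0, k=7):
    """k-th Laplace RPS approximations of (U, V, W) from Eq. (44),
    truncated fractional power series with Caputo order alpha."""
    u = sum((-1) ** n * t ** (n * alpha) / math.gamma(n * alpha + 1) for n in range(k + 1))
    v = sum(t ** (n * alpha) / math.gamma(n * alpha + 1) for n in range(k + 1))
    return (math.exp(x + y) * u, math.exp(x - y) * v, math.exp(-x + y) * v)

# At alpha = 1 the truncated sums rapidly approach the closed forms (45).
x, y, t = 0.3, 0.7, 0.5
u7, v7, w7 = approx_solutions(x, y, t, alpha=1.0, k=7)
print(abs(u7 - math.exp(x + y - t)))   # absolute error of the 7-term approximation
print(abs(v7 - math.exp(x - y + t)))
print(abs(w7 - math.exp(-x + y + t)))
```

With only eight terms the absolute errors are already below $10^{-6}$ at this sample point, which mirrors the fast convergence reported for the Laplace RPSM.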

#### **5. Graphical and Numerical Results**

This section assesses the validity and efficiency of the Laplace RPSM for the systems of time-FPDEs discussed in Examples 1–3 through different graphical representations and tabulated data for the obtained approximate and exact solutions.

The calculated absolute error functions demonstrate the accuracy of the Laplace RPSM. Tables 1–3 list several values of the approximate and exact solutions, as well as the absolute errors, for the systems of time-FPDEs (7), (20), and (33) at selected grid points in the domain. From the tables, the approximate solutions agree closely with the exact solutions, which confirms the performance and accuracy of the Laplace RPSM; high accuracy is already attained using only a few Laplace RPS iterations. Further, numerical simulations of the obtained results for the studied problems are carried out at various values of *α*, as illustrated in Tables 4–6.
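Entries of the kind tabulated here are simply $|\text{exact} - \text{approximate}|$ evaluated on a grid. A minimal Python sketch for the function $\mathcal{U}$ of the third system (the grid points, fixed values $x = 0.5$, $y = 0.4$, and helper names are our own assumptions, not the paper's):

```python
import math

def u_approx(x, y, t, alpha, n):
    # n-term Laplace RPS approximation of U from Eq. (44)
    return math.exp(x + y) * sum((-1) ** k * t ** (k * alpha) / math.gamma(k * alpha + 1)
                                 for k in range(n + 1))

def u_exact(x, y, t):
    # closed-form solution at alpha = 1 from Eq. (45)
    return math.exp(x + y - t)

# Absolute errors |U_exact - U_7| at alpha = 1, x = 0.5, y = 0.4
rows = [(t, abs(u_exact(0.5, 0.4, t) - u_approx(0.5, 0.4, t, 1.0, 7)))
        for t in (0.2, 0.4, 0.6, 0.8, 1.0)]
for t, err in rows:
    print(f"t = {t:.1f}   abs. error = {err:.3e}")
```

As expected for a truncated power series in *t*, the error grows with *t* but stays very small over the whole interval for *n* = 7.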


**Table 1.** Numerical results for Example 1 at *x* = 1, *α* = 1, and *n* = 7.


**Table 2.** Numerical results for Example 2 at *x* = 10, *α* = 1, and *n* = 7.

**Table 3.** Numerical results for Example 3 at different values of *t*, with *y* = 0.4, *α* = 1, and *n* = 7.



**Table 4.** Numerical results of the approximate solutions for Example 1, at *n* = 7, *x* = 1, and different values of *α*.

**Table 5.** Numerical results of the approximate solutions for Example 2, at *n* = 7, *x* = 10, and different values of *α*.


**Table 6.** Numerical results of the approximate solutions for Example 3, at *y* = 0.4 and different values of *t*, *x*, and *α*, with *n* = 7.




Numerical comparisons are established to confirm the obtained approximate solutions. Table 7 compares the absolute errors of the obtained approximate solutions for the system of time-FPDEs (7) at *α* = 1 with the absolute errors of the approximate solutions generated by the MGMLFM [40], while Tables 8–10 compare the obtained approximate solutions for the systems of time-FPDEs (7), (20), and (33), respectively, with previous results generated by existing methods, namely the MGMLFM [40] and the FNDM [41], at various values of *α*. As is evident from these comparisons, the results obtained by the Laplace RPSM approach the exact solutions faster than those of the mentioned methods.

**Table 7.** Numerical comparisons for Example 1 at *α* = 1 and different values of *t* and *x*.



**Table 8.** Numerical comparisons for Example 1 at different values of *t*, *x*, and *α* for the function U.

**Table 9.** Numerical comparisons for Example 2 at different values of *t*, *x*, and *α* for the function U.



**Table 10.** Numerical comparisons for Example 3 at different values of *t*, *x*, and *α* for the function U.

The 3D behavior of the approximate solutions of the time-FPDEs (7), (20), and (33) obtained by the Laplace RPSM is shown in Figures 1–3, respectively, at various values of *α*, compared with the exact solutions on their domains. From these figures, it can be deduced that the geometric behaviors almost agree and strongly match each other, particularly when the integer-order derivative is considered. Moreover, Figures 4 and 5 demonstrate the behavior of the obtained Laplace RPS solutions for the systems of time-FPDEs (7) and (20) at various values of *α*. It is observed from these figures that the Laplace RPS approximate solutions approach the solutions at *α* = 1, which reinforces the effectiveness of the proposed method.

**Figure 1.** 3D surface plots of the exact solutions U(*x*, *t*) and V(*x*, *t*) and the 7th approximate solutions U7(*x*, *t*) and V7(*x*, *t*) for IVP (7), with *t* ∈ [0, 1] and *x* ∈ [−2, 2], at various values of *α*. (**a**) (U(*x*, *t*), V(*x*, *t*)). (**b**) (U7(*x*, *t*), V7(*x*, *t*)): *α* = 1. (**c**) (U7(*x*, *t*), V7(*x*, *t*)): *α* = 0.97. (**d**) (U7(*x*, *t*), V7(*x*, *t*)): *α* = 0.87.


**Figure 2.** 3D surface plots of the exact solutions (U(*x*, *t*), V(*x*, *t*)) and the 7th approximate solutions (U7(*x*, *t*), V7(*x*, *t*)) for system (20), with *t* ∈ [0, 1] and *x* ∈ [−10, 10], at various values of *α*. (**a**) (U(*x*, *t*), V(*x*, *t*)). (**b**) (U7(*x*, *t*), V7(*x*, *t*)): *α* = 1. (**c**) (U7(*x*, *t*), V7(*x*, *t*)): *α* = 0.95. (**d**) (U7(*x*, *t*), V7(*x*, *t*)): *α* = 0.85.

**Figure 3.** 3D surface plots of the exact solutions (U, V, W) and the 7th approximate solutions (U7, V7, W7) for system (33), with *x* ∈ [0, 2], *t* ∈ [0, 2], and *y* = 0.4, at various values of *α*. (**a**) (U, V, W). (**b**) (U7, V7, W7): *α* = 1. (**c**) (U7, V7, W7): *α* = 0.8. (**d**) (U7, V7, W7): *α* = 0.6.

**Figure 4.** (**a**) 2D plots of the exact solution U(*x*, *t*) and the 7th approximate solution U7(*x*, *t*) for system (7), with *t* ∈ [0, 0.5] and *x* = 1, at various values of *α*. (**b**) 2D plots of the exact solution V(*x*, *t*) and the 7th approximate solution V7(*x*, *t*) for system (7), with *t* ∈ [0, 0.5] and *x* = 1, at various values of *α*.

**Figure 5.** Plots of the exact solutions (U(*x*, *t*), V(*x*, *t*)) and the 7th approximate solutions (U7(*x*, *t*), V7(*x*, *t*)) for system (20) at various values of *α*. (**a**) *t* ∈ [0, 1] and *x* = 1. (**b**) *x* ∈ [−10, 10] and *t* = 1.

#### **6. Conclusions**

This investigation of time-FPDEs with initial conditions constructs a proper framework for the mathematical modeling of several fractional problems that appear in physical and engineering applications. The current work has introduced analytical and approximate solutions for known systems of nonlinear time-FPDEs by applying the Laplace RPSM. Three nonlinear time-FPDE systems, including the Broer–Kaup and Burgers' systems, have been investigated using Caputo time-fractional derivatives. The exact and Laplace RPS solutions have been displayed numerically and graphically at various values of the fractional order *α* over (0, 1]. The analysis of the simulation results revealed that the Laplace RPS solutions are in close agreement with each other, as well as with the exact solutions at the integer order *α* = 1, which confirms the performance of the proposed method. Numerical comparisons of the obtained results with results previously calculated by other numerical methods, such as the modified generalized Mittag–Leffler function method (MGMLFM) [40] and the fractional natural decomposition method (FNDM) [41], have been carried out, and they indicate the high accuracy and effectiveness of the Laplace RPSM. Consequently, the analysis of the attained results and their simulations confirms that the Laplace RPSM is a systematic, robust, efficient, and easy-to-use instrument for generating analytical and approximate solutions of several fractional physical and engineering problems with fewer computations and iteration steps.

**Author Contributions:** Conceptualization, H.A. and M.A.; methodology, H.A.; software, H.A.; validation, M.A., A.I. and M.D.; writing—original draft preparation, H.A.; writing—review and editing, A.I. and M.D.; supervision, A.I. and M.D.; funding acquisition, A.I. All authors have read and agreed to the published version of the manuscript.

**Funding:** Universiti Kebangsaan Malaysia (DIP-2020-001).

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Analytical Solution of Coupled Hirota–Satsuma and KdV Equations**

**Rania Saadeh 1,\*, Osama Ala'yed <sup>2</sup> and Ahmad Qazza <sup>1</sup>**


**Abstract:** In this study, we applied the Laplace residual power series method (LRPSM) to expand the solution of the nonlinear time-fractional coupled Hirota–Satsuma and KdV equations in the form of a rapidly convergent series while considering Caputo fractional derivatives. We demonstrate the applicability and accuracy of the proposed method with some examples. The numerical results and the graphical representations reveal that the proposed method performs extremely well in terms of efficiency and simplicity. Therefore, it can be utilized to solve more problems in the field of non-linear fractional differential equations. To show the validity of the proposed method, we present a numerical application, compute two kinds of errors, and sketch figures of the obtained results.

**Keywords:** Caputo's fractional derivative; power series solution; Laplace residual power series method; coupled Hirota–Satsuma and KdV equations

#### **1. Introduction**

Fractional differential equations are a generalized form of ordinary and partial differential equations [1–4]. Recently, various studies in engineering and sciences have confirmed that the dynamics of numerous systems in nature can be described more precisely via nonlinear fractional-order differential equations, for instance, in biology, physics, engineering, chaos theory, diffusion, electromagnetism, etc. [5–11]. Therefore, several approaches have been established to acquire approximate and analytic solutions of fractional differential equations, including the variational iteration method [12], the differential transform method [13–15], Laplace transforms [16,17], the fractional sub-equation method [18,19], the homotopy perturbation method [20,21], the exponential rational function method [22], the exponential function method [23], the extended trial equation method [24], the ARA residual power series method [25], the double ARA–Sumudu transform [26], and the reproducing kernel method [27], amongst others.

The power series method [28] is one of the most popular and convenient methods used to establish analytic solutions for linear classes of differential equations. Unfortunately, obtaining a closed-form solution for the nonlinear case is very difficult or impossible. Therefore, the residual power series method is introduced to overcome the aforementioned difficulty of the power series method. The residual power series method [29,30] has been employed to gain the analytical solution of various linear and nonlinear models in different engineering and science areas.

This article develops the residual power series method by employing the Laplace transform (LT) [31] in its methodology. This extension is known as the Laplace residual power series method (LRPSM). Compared with other power series methods, the LRPSM requires less time and simpler computations while achieving superior accuracy. Moreover, the LRPSM needs no differentiation or linearization: it depends only on applying the LT and taking the limit at infinity. Due to these advantages, various researchers have used it to solve nonlinear fractional problems [29,30,32–34].

**Citation:** Saadeh, R.; Ala'yed, O.; Qazza, A. Analytical Solution of Coupled Hirota–Satsuma and KdV Equations. *Fractal Fract.* **2022**, *6*, 694. https://doi.org/10.3390/fractalfract 6120694

Academic Editors: Libo Feng, Lin Liu and Yang Liu

Received: 15 October 2022 Accepted: 15 November 2022 Published: 23 November 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In this study, the LRPSM is introduced to solve the coupled Hirota–Satsuma and KdV (HSC–KdV) equations of the form:

$$
\begin{array}{l}
D^{\alpha}_{\tau}\delta = \tfrac{1}{2}\delta_{\xi\xi\xi} - 3\delta\delta_{\xi} + 3(\varphi\psi)_{\xi}, \\[1mm]
D^{\alpha}_{\tau}\varphi = -\varphi_{\xi\xi\xi} + 3\delta\varphi_{\xi}, \\[1mm]
D^{\alpha}_{\tau}\psi = -\psi_{\xi\xi\xi} + 3\delta\psi_{\xi},
\end{array}
\tag{1}
$$

where *δ*(*ξ*, *τ*), *φ*(*ξ*, *τ*), and *ψ*(*ξ*, *τ*) are three unknown functions of the independent variables *ξ* and *τ*, and $D^{\alpha}_{\tau}$ is the time-Caputo fractional derivative operator with 0 < *α* ≤ 1.

The HSC–KdV equations are of great significance due to their numerous applications in diverse areas. For instance, they are used to represent dispersive long waves in shallow water, which appear in many applications in fluid mechanics, including shallow-water waves with weakly nonlinear restoring forces, acoustic waves on a crystal lattice, long internal waves in a density-stratified ocean, and ion-acoustic waves in a plasma [35].

The novelty of this work lies in the chosen model, which is difficult to solve by traditional numerical methods: some authors have solved this system numerically and obtained only the first two or three terms of the approximate solution, but not a general term of the series solution. In contrast, the LRPSM allows us to obtain many terms of the series solution easily using Mathematica software. The LRPSM is a powerful technique for solving fractional models: it presents the solution in the form of a rapidly convergent series with less effort and computation than other numerical methods, and it requires no differentiation or linearization, only computing the limit at infinity.

The remainder of this article is organized as follows: Section 2 presents some fundamental concepts and preliminary results from fractional calculus theory. In Section 3, we assemble the LRPSM algorithm for obtaining the solution of the HSC–KdV equations. Section 4 presents some HSC–KdV problems to demonstrate the simplicity, capability, and potential of the LRPSM, and Section 5 concludes the paper.

#### **2. Basic Preliminaries**

This section introduces some basic notations, definitions, and theorems related to fractional calculus which are utilized throughout this article.

#### *2.1. Fractional Power Series*

Here, we present some definitions of the Caputo fractional derivative and Laplace transform. We also introduce some theorems related to fractional power series representations.

**Definition 1.** *The Caputo derivative of fractional order <sup>α</sup>* <sup>∈</sup> *<sup>R</sup>*<sup>+</sup> *of the function x*(*τ*) *is given by:*

$$D^{\alpha}x(\tau) = \begin{cases} \dfrac{1}{\Gamma(\mu-\alpha)} \displaystyle\int_0^{\tau} \dfrac{x^{(\mu)}(t)}{(\tau-t)^{\alpha+1-\mu}}\,dt, & \mu-1 < \alpha < \mu, \\[2mm] x^{(\mu)}(\tau), & \alpha = \mu,\ \mu \in \mathbb{N}. \end{cases}$$
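As a quick sanity check on Definition 1, recall the standard formula $D^{\alpha}\tau^{2} = \frac{2}{\Gamma(3-\alpha)}\,\tau^{2-\alpha}$ for $0<\alpha<1$. The sketch below (our own illustration, not from the paper) evaluates the defining integral with a midpoint rule, which sidesteps the integrable singularity at $t=\tau$:

```python
import math

def caputo_tau_squared(tau, alpha, n=200_000):
    """Caputo derivative of x(t) = t**2 of order 0 < alpha < 1 at tau,
    via a midpoint rule on (1/Gamma(1-alpha)) * integral of x'(t) / (tau-t)**alpha."""
    h = tau / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h            # midpoints avoid the endpoint singularity at t = tau
        total += 2.0 * t / (tau - t) ** alpha
    return total * h / math.gamma(1.0 - alpha)

alpha, tau = 0.5, 1.0
numeric = caputo_tau_squared(tau, alpha)
exact = 2.0 / math.gamma(3.0 - alpha) * tau ** (2.0 - alpha)
print(numeric, exact)   # the two values closely agree
```

The midpoint rule converges slowly near the singular endpoint, so a fairly large `n` is used; for production work a singularity-removing substitution would be preferable.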

**Definition 2** ([36])**.** *The time Caputo derivative of fractional order <sup>α</sup>* <sup>∈</sup> *<sup>R</sup>*<sup>+</sup> *of the function <sup>x</sup>*(*ξ*, *<sup>τ</sup>*) *is given by:*

$$D^{\alpha}_{\tau}x(\xi,\tau) = \frac{\partial^{\alpha}x(\xi,\tau)}{\partial\tau^{\alpha}} = \begin{cases} \dfrac{1}{\Gamma(\mu-\alpha)} \displaystyle\int_0^{\tau} (\tau-t)^{\mu-\alpha-1}\,\dfrac{\partial^{\mu}x(\xi,t)}{\partial t^{\mu}}\,dt, & \mu-1 < \alpha < \mu, \\[2mm] \dfrac{\partial^{\mu}x(\xi,\tau)}{\partial\tau^{\mu}}, & \alpha = \mu,\ \mu \in \mathbb{N}. \end{cases}$$

**Definition 3** ([31])**.** *The Laplace transform of a function x*(*ξ*, *τ*) *regarding the variable τ is defined as:*

$$\mathcal{L}[\mathfrak{x}(\xi,\tau)] = \mathcal{X}(\xi,\mathfrak{s}) = \int\_0^\infty \mathfrak{x}(\xi,\tau)e^{-s\tau}d\tau,\ \mathfrak{s} > 0,$$

*and the inverse LT is given by:*

$$x(\xi,\tau) = \mathcal{L}^{-1}[X(\xi,s)] = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} X(\xi,s)\,e^{s\tau}\,ds,\ \ c = \operatorname{Re}(s) > 0.$$
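Definition 3 can be checked numerically; the sketch below (our own illustration, not part of the paper) approximates the standard transform $\mathcal{L}[\tau^{\alpha}] = \Gamma(\alpha+1)/s^{\alpha+1}$ by direct quadrature of the defining integral:

```python
import math

def laplace_of_power(alpha, s, upper=40.0, n=200_000):
    """Trapezoidal approximation of integral_0^inf tau**alpha * exp(-s*tau) d tau."""
    h = upper / n
    total = 0.0
    for i in range(1, n):
        tau = i * h
        total += tau ** alpha * math.exp(-s * tau)
    return total * h   # endpoint contributions vanish: 0 at tau=0, ~exp(-s*upper) at the top

alpha, s = 0.5, 2.0
numeric = laplace_of_power(alpha, s)
exact = math.gamma(alpha + 1.0) / s ** (alpha + 1.0)
print(numeric, exact)
```

Truncating the improper integral at `upper = 40` is harmless here because the integrand decays like $e^{-s\tau}$.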

Further, if $\mathcal{L}[x_1(\xi,\tau)] = X_1(\xi,s)$ and $\mathcal{L}[x_2(\xi,\tau)] = X_2(\xi,s)$, and $\gamma_1$ and $\gamma_2$ are two real constants, we have the following essential properties of the Laplace transform and its inverse [29,30]:

1. $\mathcal{L}[\gamma_1 x_1(\xi,\tau) + \gamma_2 x_2(\xi,\tau)] = \gamma_1 X_1(\xi,s) + \gamma_2 X_2(\xi,s)$;
2. $\mathcal{L}^{-1}[\gamma_1 X_1(\xi,s) + \gamma_2 X_2(\xi,s)] = \gamma_1 x_1(\xi,\tau) + \gamma_2 x_2(\xi,\tau)$;
3. $\mathcal{L}[D^{\alpha}_{\tau}x(\xi,\tau)] = s^{\alpha}X(\xi,s) - s^{\alpha-1}x(\xi,0)$, $0 < \alpha \le 1$;
4. $\lim_{s\to\infty} sX(\xi,s) = x(\xi,0)$.


**Definition 4** ([29,30])**.** *A fractional power series of two variables around τ*<sup>0</sup> = 0 *is expressed as:*

$$\sum_{m=0}^{\infty} a_m(\xi)\,\tau^{m\alpha} = a_0(\xi) + a_1(\xi)\,\tau^{\alpha} + a_2(\xi)\,\tau^{2\alpha} + \cdots,\quad 0 \le \mu-1 < \alpha \le \mu,\ \tau \ge 0.$$

**Theorem 1.** *Suppose that a function x has a FPS expansion at τ*<sup>0</sup> = 0 *of the form:*

$$x(\xi, \tau) = \sum_{m=0}^{\infty} a_m(\xi)\,\tau^{m\alpha},\ 0 \le \tau < T,\tag{1}$$

*where T is the radius of convergence of the fractional power series. If $D^{\alpha}_{\tau}x(\xi,\tau)$ is continuous on $I \times [0, T)$, then the coefficients $a_m(\xi)$ can be written as:*

$$a_m(\xi) = \frac{D^{m\alpha}_{\tau}x(\xi, 0)}{\Gamma(m\alpha + 1)},\ m = 0, 1, 2, \dots,$$

*where $D^{m\alpha}_{\tau} = D^{\alpha}_{\tau}\cdot D^{\alpha}_{\tau}\cdots D^{\alpha}_{\tau}$ (m-times).* For the proof, refer to [37].

Using Theorem 1, the fractional power series expansion of *x*(*ξ*, *τ*) around *τ* = 0 is given by:

$$x(\xi,\tau) = \sum\_{m=0}^{\infty} \frac{D\_{\tau}^{m\alpha} x(\xi,0)}{\Gamma(m\alpha + 1)} \tau^{m\alpha}, \quad 0 \le \mu - 1 < \alpha < \mu, \ \xi \in I,\ 0 \le \tau < T.$$

#### *2.2. Convergence Analysis of LRPSM*

This section covers the conditions of convergence for the new fractional power series in the Laplace space. It is worth mentioning here that the Laplace residual power series approach requires the same conditions of convergence as the classical Taylor series.

**Theorem 2** ([30])**.** *If the function X*(*ξ*,*s*) = L[*x*(*ξ*, *τ*)] *has the fractional power series:*

$$X(\xi, s) = \sum\_{m=0}^{\infty} \frac{a\_m(\xi)}{s^{m\alpha+1}}, \ 0 < \alpha \le 1,\ s > 0, \tag{2}$$

*then $a\_m(\xi) = D\_{\tau}^{m\alpha} x(\xi, 0)$, where $D\_{\tau}^{m\alpha} = D\_{\tau}^{\alpha} \cdot D\_{\tau}^{\alpha} \cdots D\_{\tau}^{\alpha}$ (m-times). Moreover, the inverse LT of (2) is defined by:*

$$x(\xi,\tau) = \sum\_{m=0}^{\infty} \frac{D\_{\tau}^{m\alpha} x(\xi,0)}{\Gamma(m\alpha+1)} \tau^{m\alpha}, \ 0 < \alpha \le 1, \ \tau \ge 0.$$
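The pair in Theorem 2 rests on the elementary transform $\mathcal{L}[\tau^{m\alpha}] = \Gamma(m\alpha+1)/s^{m\alpha+1}$. As an illustrative sanity check (not part of the original derivation; the truncation point `T` and step count `n` are our ad hoc choices), this identity can be verified with a stdlib-only composite trapezoidal rule:

```python
import math

def laplace_numeric(f, s, T=40.0, n=200_000):
    # composite trapezoidal rule for \int_0^T f(t) e^{-s t} dt;
    # for s > 0 the tail beyond T is negligible when s*T is large
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

alpha, m, s = 0.4, 2, 2.0
num = laplace_numeric(lambda t: t ** (m * alpha), s)
exact = math.gamma(m * alpha + 1) / s ** (m * alpha + 1)
```

For these parameters the quadrature agrees with the closed form to well below the chosen tolerance, which is the fact used throughout when matching powers of $1/s$.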

**Theorem 3** ([30])**.** *Suppose that:*

$$\left| s\mathcal{L} \left[ D\_{\tau}^{(m+1)\alpha} x(\xi, \tau) \right] \right| \le H,$$

*for $\delta\_1 < s \le \delta\_2$ and $\xi \in I$, where $H = H(\xi)$ and $0 < \alpha \le 1$. Then, the remainder $R\_m(\xi,s)$ in (2) fulfills:*

$$|R\_{m}(\xi, s)| \le \frac{H}{s^{(m+1)\alpha + 1}}, \ \xi \in I \text{ and } \delta\_1 < s \le \delta\_2.$$

**Proof of Theorem 3.** First, we suppose that $\mathcal{L}[D\_{\tau}^{m\alpha} x(\xi, \tau)](s)$ is defined on $I \times (\delta\_1, \delta\_2]$, for $m = 0, 1, 2, \dots, n+1$. As given, we also assume that:

$$\left| s\mathcal{L} \left[ D\_{\tau}^{(m+1)\alpha} x(\xi, \tau) \right] \right| \le H(\xi), \ \xi \in I \text{ and } \delta\_1 < s \le \delta\_2. \tag{3}$$

The definition of the remainder implies:

$$R\_m(\xi, s) = X(\xi, s) - \sum\_{k=0}^{m} \frac{D\_{\tau}^{k\alpha} x(\xi, 0)}{s^{k\alpha + 1}},$$

thus, one can obtain:

$$\begin{split} s^{1+(m+1)\alpha} R\_{m}(\xi,s) &= s^{1+(m+1)\alpha} X(\xi,s) - \sum\_{k=0}^{m} s^{(m+1-k)\alpha} D\_{\tau}^{k\alpha} x(\xi,0) \\ &= s \Big( s^{(m+1)\alpha} X(\xi,s) - \sum\_{k=0}^{m} s^{(m+1-k)\alpha-1} D\_{\tau}^{k\alpha} x(\xi,0) \Big) \\ &= s \mathcal{L} \Big[ D\_{\tau}^{(m+1)\alpha} x(\xi,\tau) \Big]. \end{split} \tag{4}$$

Equations (3) and (4) imply that $\left|s^{1+(m+1)\alpha} R\_m(\xi,s)\right| \le H(\xi)$. Hence, $-H(\xi) \le s^{1+(m+1)\alpha} R\_m(\xi,s) \le H(\xi)$ for $\xi \in I$ and $\delta\_1 < s \le \delta\_2$. Reformulating this inequality, we obtain the result.

**Theorem 4** ([33])**.** *Assume that $\|x\_{n+1}(\xi, \tau)\| \le \varepsilon \|x\_n(\xi, \tau)\|$ for all $n \in \mathbb{N}$, for some $\varepsilon \in (0, 1)$ and $0 < \tau < T < 1$; then, the obtained approximate series solution converges to the exact one, where:*

$$x\_{n}(\xi,\tau) = \sum\_{m=0}^{n} \frac{D\_{\tau}^{m\alpha} x(\xi,0)}{\Gamma(m\alpha+1)} \tau^{m\alpha}.$$

**Proof of Theorem 4.** Notice that, if 0 < *τ* < *T* < 1, then:

$$\begin{split} \|x(\xi,\tau) - x\_{n}(\xi,\tau)\| &= \Big\| \sum\_{m=n+1}^{\infty} x\_{m}(\xi,\tau) \Big\| \le \sum\_{m=n+1}^{\infty} \|x\_{m}(\xi,\tau)\| \\ &\le \|x\_0(\xi,\tau)\| \sum\_{m=n+1}^{\infty} \varepsilon^{m} = \frac{\varepsilon^{n+1}}{1-\varepsilon} \|x\_0(\xi,\tau)\| \underset{n\to\infty}{\longrightarrow} 0. \end{split}$$
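The final step of the proof is an ordinary geometric tail bound. A quick numerical sanity check (illustrative only, with arbitrary values of $\varepsilon$ and $n$):

```python
# tail of a geometric series: sum_{m=n+1}^inf eps**m = eps**(n+1) / (1 - eps),
# which tends to 0 as n -> infinity for eps in (0, 1)
eps, n = 0.5, 10
tail = sum(eps ** m for m in range(n + 1, 400))  # terms beyond m = 400 are negligible
closed_form = eps ** (n + 1) / (1 - eps)
```

The numerically summed tail matches the closed form, mirroring the factor $\varepsilon^{n+1}/(1-\varepsilon)$ in the displayed estimate.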

#### **3. LRPSM Methodology**

In this section, we apply the LRPSM to solve HSC–KdV equations. The main idea of the LRPSM is to first apply the Laplace transform on the target equation and then define the so-called Laplace residual function. Then, using some facts of the residual power series method and taking the limit at infinity allows us to obtain the coefficients of the series solutions.

Now, we consider the system:

$$\begin{aligned} D^{\alpha}\_{\tau} \delta &= \frac{1}{2} \delta\_{\xi\xi\xi} - 3 \delta \delta\_{\xi} + 3 (\phi \psi)\_{\xi}, \\ D^{\alpha}\_{\tau} \phi &= -\phi\_{\xi\xi\xi} + 3 \delta \phi\_{\xi}, \\ D^{\alpha}\_{\tau} \psi &= -\psi\_{\xi\xi\xi} + 3 \delta \psi\_{\xi}, \end{aligned} \tag{5}$$

subject to the initial conditions (ICs):

$$
\delta(\xi,0) = a(\xi), \; \phi(\xi,0) = b(\xi), \; \psi(\xi,0) = c(\xi). \tag{6}
$$

We illustrate the steps of the LRPSM on system (5) and (6) as follows. **Step 1.** Apply the Laplace transform with respect to *τ* to each equation in (5) to obtain

$$\begin{split} s^{\alpha}\mathcal{G}(\xi,s) - s^{\alpha-1}\delta(\xi,0) &= \frac{1}{2}\frac{\partial^{3}}{\partial\xi^{3}}\mathcal{G}(\xi,s) - 3\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\right] \\ &\quad + 3\mathcal{L}\left[\frac{\partial}{\partial\xi}\left(\mathcal{L}^{-1}[\Phi(\xi,s)]\,\mathcal{L}^{-1}[\Psi(\xi,s)]\right)\right], \\ s^{\alpha}\Phi(\xi,s) - s^{\alpha-1}\phi(\xi,0) &= -\frac{\partial^{3}}{\partial\xi^{3}}\Phi(\xi,s) + 3\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\Phi(\xi,s)]\right], \\ s^{\alpha}\Psi(\xi,s) - s^{\alpha-1}\psi(\xi,0) &= -\frac{\partial^{3}}{\partial\xi^{3}}\Psi(\xi,s) + 3\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\Psi(\xi,s)]\right], \end{split} \tag{7}$$

where G(*ξ*,*s*) = L[*δ*(*ξ*, *τ*)], Φ(*ξ*,*s*) = L[*φ*(*ξ*, *τ*)], and Ψ(*ξ*,*s*) = L[*ψ*(*ξ*, *τ*)].

Simplifying each Equation in (7) and employing the ICs yields:

$$\begin{split} \mathcal{G}(\xi,s) &= \frac{a(\xi)}{s} + \frac{1}{2s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\mathcal{G}(\xi,s) - \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\right] \\ &\quad + \frac{3}{s^{\alpha}}\mathcal{L}\left[\frac{\partial}{\partial\xi}\left(\mathcal{L}^{-1}[\Phi(\xi,s)]\,\mathcal{L}^{-1}[\Psi(\xi,s)]\right)\right], \\ \Phi(\xi,s) &= \frac{b(\xi)}{s} - \frac{1}{s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\Phi(\xi,s) + \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\Phi(\xi,s)]\right], \\ \Psi(\xi,s) &= \frac{c(\xi)}{s} - \frac{1}{s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\Psi(\xi,s) + \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\Psi(\xi,s)]\right]. \end{split} \tag{8}$$

**Step 2.** Define the series solution of (8), as follows:

$$\mathcal{G}(\xi, s) = \sum\_{n=0}^{\infty} \frac{\delta\_n(\xi)}{s^{n\alpha+1}}, \qquad \Phi(\xi, s) = \sum\_{n=0}^{\infty} \frac{\phi\_n(\xi)}{s^{n\alpha+1}},$$

and:

$$\Psi(\xi, s) = \sum\_{n=0}^{\infty} \frac{\psi\_n(\xi)}{s^{n\alpha + 1}}.$$

Using the fact that $\lim\_{s\to\infty} s\,\mathcal{G}(\xi,s) = \delta(\xi, 0)$, one can identify the *k*th truncated solution of (8) as:

$$\begin{aligned} \mathcal{G}\_k(\xi, s) &= \frac{a(\xi)}{s} + \sum\_{n=1}^k \frac{\delta\_n(\xi)}{s^{n\alpha+1}}, \\ \Phi\_k(\xi, s) &= \frac{b(\xi)}{s} + \sum\_{n=1}^k \frac{\phi\_n(\xi)}{s^{n\alpha+1}}, \\ \Psi\_k(\xi, s) &= \frac{c(\xi)}{s} + \sum\_{n=1}^k \frac{\psi\_n(\xi)}{s^{n\alpha+1}}. \end{aligned} \tag{9}$$

**Step 3.** Define the *k*th Laplace residual functions of (8) as:

$$\begin{split} \mathcal{L}\mathrm{Res}\_k\mathcal{G}(\xi,s) &= \mathcal{G}\_k(\xi,s) - \frac{a(\xi)}{s} - \frac{1}{2s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\mathcal{G}\_k(\xi,s) + \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}\_k(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\mathcal{G}\_k(\xi,s)]\right] \\ &\quad - \frac{3}{s^{\alpha}}\mathcal{L}\left[\frac{\partial}{\partial\xi}\left(\mathcal{L}^{-1}[\Phi\_k(\xi,s)]\,\mathcal{L}^{-1}[\Psi\_k(\xi,s)]\right)\right], \\ \mathcal{L}\mathrm{Res}\_k\Phi(\xi,s) &= \Phi\_k(\xi,s) - \frac{b(\xi)}{s} + \frac{1}{s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\Phi\_k(\xi,s) - \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}\_k(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\Phi\_k(\xi,s)]\right], \\ \mathcal{L}\mathrm{Res}\_k\Psi(\xi,s) &= \Psi\_k(\xi,s) - \frac{c(\xi)}{s} + \frac{1}{s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\Psi\_k(\xi,s) - \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}\_k(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\Psi\_k(\xi,s)]\right]. \end{split} \tag{10}$$

**Step 4.** To find the first coefficients of the truncated series solution (9), we define the first truncated solution and substitute it in the first truncated Laplace residual functions as:

$$\begin{split} \mathcal{L}\mathrm{Res}\_1\mathcal{G}(\xi,s) &= \frac{\delta\_1(\xi)}{s^{\alpha+1}} - \frac{1}{2s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\left(\frac{a(\xi)}{s} + \frac{\delta\_1(\xi)}{s^{\alpha+1}}\right) \\ &\quad + \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{a(\xi)}{s} + \frac{\delta\_1(\xi)}{s^{\alpha+1}}\right]\frac{\partial}{\partial\xi}\mathcal{L}^{-1}\left[\frac{a(\xi)}{s} + \frac{\delta\_1(\xi)}{s^{\alpha+1}}\right]\right] \\ &\quad - \frac{3}{s^{\alpha}}\mathcal{L}\left[\frac{\partial}{\partial\xi}\left(\mathcal{L}^{-1}\left[\frac{b(\xi)}{s} + \frac{\phi\_1(\xi)}{s^{\alpha+1}}\right]\mathcal{L}^{-1}\left[\frac{c(\xi)}{s} + \frac{\psi\_1(\xi)}{s^{\alpha+1}}\right]\right)\right], \\ \mathcal{L}\mathrm{Res}\_1\Phi(\xi,s) &= \frac{\phi\_1(\xi)}{s^{\alpha+1}} + \frac{1}{s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\left(\frac{b(\xi)}{s} + \frac{\phi\_1(\xi)}{s^{\alpha+1}}\right) - \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{a(\xi)}{s} + \frac{\delta\_1(\xi)}{s^{\alpha+1}}\right]\frac{\partial}{\partial\xi}\mathcal{L}^{-1}\left[\frac{b(\xi)}{s} + \frac{\phi\_1(\xi)}{s^{\alpha+1}}\right]\right], \\ \mathcal{L}\mathrm{Res}\_1\Psi(\xi,s) &= \frac{\psi\_1(\xi)}{s^{\alpha+1}} + \frac{1}{s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\left(\frac{c(\xi)}{s} + \frac{\psi\_1(\xi)}{s^{\alpha+1}}\right) - \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{a(\xi)}{s} + \frac{\delta\_1(\xi)}{s^{\alpha+1}}\right]\frac{\partial}{\partial\xi}\mathcal{L}^{-1}\left[\frac{c(\xi)}{s} + \frac{\psi\_1(\xi)}{s^{\alpha+1}}\right]\right]. \end{split} \tag{11}$$

**Step 5.** Recall the following facts, which appear in the LRPSM [29]:


Now, by multiplying each equation in (11) by *<sup>s</sup>α*+<sup>1</sup> and taking the limit as *<sup>s</sup>* <sup>→</sup> <sup>∞</sup>, we obtain the first unknowns of the series solutions (9) as:

$$\begin{aligned} \delta\_1(\xi) &= \frac{1}{2}\left(\delta'''(\xi) - 6\delta(\xi)\delta'(\xi) + 6\psi(\xi)\phi'(\xi) + 6\phi(\xi)\psi'(\xi)\right), \\ \phi\_1(\xi) &= 3\delta(\xi)\phi'(\xi) - \phi'''(\xi), \\ \psi\_1(\xi) &= 3\delta(\xi)\psi'(\xi) - \psi'''(\xi), \end{aligned}$$

where $\delta(\xi) = a(\xi)$, $\phi(\xi) = b(\xi)$, and $\psi(\xi) = c(\xi)$ denote the initial data (6).

Repeating the previous steps, one can obtain the second series coefficients recursively, as follows:

$$\begin{aligned} \delta\_2(\xi) &= \frac{1}{2}\delta\_1'''(\xi) - 3\big(\delta(\xi)\delta\_1'(\xi) + \delta\_1(\xi)\delta'(\xi)\big) + 3\big(\phi(\xi)\psi\_1(\xi) + \phi\_1(\xi)\psi(\xi)\big)', \\ \phi\_2(\xi) &= -\phi\_1'''(\xi) + 3\big(\delta(\xi)\phi\_1'(\xi) + \delta\_1(\xi)\phi'(\xi)\big), \\ \psi\_2(\xi) &= -\psi\_1'''(\xi) + 3\big(\delta(\xi)\psi\_1'(\xi) + \delta\_1(\xi)\psi'(\xi)\big). \end{aligned}$$

Continuing in the same manner, we can conclude the following general *k*th terms of the series coefficients as:

$$\begin{split} \delta\_{k}(\xi) &= \frac{1}{2} \delta\_{k-1}'''(\xi) - 3 \sum\_{i=0}^{k-1} \frac{\delta\_{i}(\xi)\, \delta\_{k-i-1}'(\xi)\, \Gamma((k-1)\alpha+1)}{\Gamma(i\alpha+1)\Gamma((k-i-1)\alpha+1)} \\ &\quad + 3\, \frac{\partial}{\partial\xi}\left( \sum\_{i=0}^{k-1} \frac{\phi\_{i}(\xi)\, \psi\_{k-i-1}(\xi)\, \Gamma((k-1)\alpha+1)}{\Gamma(i\alpha+1)\Gamma((k-i-1)\alpha+1)} \right), \\ \phi\_{k}(\xi) &= -\phi\_{k-1}'''(\xi) + 3 \sum\_{i=0}^{k-1} \frac{\delta\_{i}(\xi)\, \phi\_{k-i-1}'(\xi)\, \Gamma((k-1)\alpha+1)}{\Gamma(i\alpha+1)\Gamma((k-i-1)\alpha+1)}, \\ \psi\_{k}(\xi) &= -\psi\_{k-1}'''(\xi) + 3 \sum\_{i=0}^{k-1} \frac{\delta\_{i}(\xi)\, \psi\_{k-i-1}'(\xi)\, \Gamma((k-1)\alpha+1)}{\Gamma(i\alpha+1)\Gamma((k-i-1)\alpha+1)}, \end{split}$$

where *k* = 1, 2, . . . .
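The recursion above is purely algebraic once differentiation and multiplication of the coefficient functions are available. The following Python sketch (illustrative only, not part of the paper; it represents the coefficient functions as polynomials in ξ rather than the tanh-type data of the next section) implements one step of the recursion:

```python
import math

# Coefficient functions are polynomials in xi: p[i] is the coefficient of xi**i.

def deriv(p):
    return [i * p[i] for i in range(1, len(p))] or [0.0]

def d3(p):
    return deriv(deriv(deriv(p)))

def mul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def scale(p, s):
    return [s * x for x in p]

def lrpsm_step(d, f, g, alpha, k):
    """Given d[0..k-1], f[0..k-1], g[0..k-1], return (delta_k, phi_k, psi_k)."""
    G = math.gamma
    w = lambda i: G((k - 1) * alpha + 1) / (G(i * alpha + 1) * G((k - 1 - i) * alpha + 1))
    conv_dd, conv_fg, conv_df, conv_dg = [0.0], [0.0], [0.0], [0.0]
    for i in range(k):
        conv_dd = add(conv_dd, scale(mul(d[i], deriv(d[k - 1 - i])), w(i)))
        conv_fg = add(conv_fg, scale(mul(f[i], g[k - 1 - i]), w(i)))
        conv_df = add(conv_df, scale(mul(d[i], deriv(f[k - 1 - i])), w(i)))
        conv_dg = add(conv_dg, scale(mul(d[i], deriv(g[k - 1 - i])), w(i)))
    dk = add(add(scale(d3(d[k - 1]), 0.5), scale(conv_dd, -3.0)),
             scale(deriv(conv_fg), 3.0))
    fk = add(scale(d3(f[k - 1]), -1.0), scale(conv_df, 3.0))
    gk = add(scale(d3(g[k - 1]), -1.0), scale(conv_dg, 3.0))
    return dk, fk, gk

# Example with initial data a(xi) = xi**2, b(xi) = xi, c(xi) = xi**2:
# delta_1 = a'''/2 - 3 a a' + 3 (b c)' = 9 xi**2 - 6 xi**3, phi_1 = 3 a b' - b''' = 3 xi**2.
dk, fk, gk = lrpsm_step([[0.0, 0.0, 1.0]], [[0.0, 1.0]], [[0.0, 0.0, 1.0]], 0.5, 1)
```

For k = 1 all Γ-weights equal 1, so the step reproduces the first coefficients of Step 5 independently of α.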

Thus, the *k*th series solution of (8) can be written as:

$$\begin{aligned} \mathcal{G}\_{k}(\xi,s) &= \frac{a(\xi)}{s} + \sum\_{m=1}^{k} \frac{\delta\_{m}(\xi)}{s^{m\alpha+1}}, \\ \Phi\_{k}(\xi,s) &= \frac{b(\xi)}{s} + \sum\_{m=1}^{k} \frac{\phi\_{m}(\xi)}{s^{m\alpha+1}}, \\ \Psi\_{k}(\xi,s) &= \frac{c(\xi)}{s} + \sum\_{m=1}^{k} \frac{\psi\_{m}(\xi)}{s^{m\alpha+1}}, \qquad k = 1,2,\dots \end{aligned}$$

Therefore, the solution of (5) and (6) in the original space can be expressed as

$$\begin{aligned} \delta(\xi,\tau) &= a(\xi) + \frac{\delta\_1(\xi)\tau^{\alpha}}{\Gamma(\alpha+1)} + \frac{\delta\_2(\xi)\tau^{2\alpha}}{\Gamma(2\alpha+1)} + \cdots, \\ \phi(\xi,\tau) &= b(\xi) + \frac{\phi\_1(\xi)\tau^{\alpha}}{\Gamma(\alpha+1)} + \frac{\phi\_2(\xi)\tau^{2\alpha}}{\Gamma(2\alpha+1)} + \cdots, \\ \psi(\xi,\tau) &= c(\xi) + \frac{\psi\_1(\xi)\tau^{\alpha}}{\Gamma(\alpha+1)} + \frac{\psi\_2(\xi)\tau^{2\alpha}}{\Gamma(2\alpha+1)} + \cdots \end{aligned}$$
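In practice, these series are evaluated after truncation. A minimal stdlib-only sketch (illustrative; the function name is ours) of assembling $x\_n(\xi,\tau) = \sum\_{m=0}^{n} c\_m(\xi)\,\tau^{m\alpha}/\Gamma(m\alpha+1)$ from precomputed coefficient values at a fixed ξ:

```python
import math

def truncated_series(coeffs, tau, alpha):
    # x_n(xi, tau) = sum_m coeffs[m] * tau**(m*alpha) / Gamma(m*alpha + 1),
    # where coeffs[m] is the m-th coefficient function evaluated at a fixed xi
    return sum(c * tau ** (m * alpha) / math.gamma(m * alpha + 1)
               for m, c in enumerate(coeffs))

# Sanity check: for alpha = 1 and all coefficients equal to 1, the series is
# the Taylor series of exp(tau).
approx = truncated_series([1.0] * 25, 0.5, 1.0)
```

For α = 1 the Γ-factors reduce to factorials, which is why the integer-order case recovers the classical power series.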

#### **4. Numerical Application**

Consider the time-fractional HSC–KdV equations:

$$\begin{split} D^{\alpha}\_{\tau} \delta(\xi, \tau) &= \frac{1}{2} \delta\_{\xi\xi\xi}(\xi, \tau) - 3 \delta(\xi, \tau) \delta\_{\xi}(\xi, \tau) + 3 (\phi(\xi, \tau) \psi(\xi, \tau))\_{\xi}, \\ D^{\alpha}\_{\tau} \phi(\xi, \tau) &= -\phi\_{\xi\xi\xi}(\xi, \tau) + 3 \delta(\xi, \tau) \phi\_{\xi}(\xi, \tau), \\ D^{\alpha}\_{\tau} \psi(\xi, \tau) &= -\psi\_{\xi\xi\xi}(\xi, \tau) + 3 \delta(\xi, \tau) \psi\_{\xi}(\xi, \tau), \end{split} \tag{12}$$

subject to the ICs:

$$\begin{aligned} \delta(\xi,0) &= 0.4933 + 0.02 \tanh^2(0.1\,\xi), \\ \phi(\xi,0) &= -0.0134 + 0.0134 \tanh(0.1\,\xi), \\ \psi(\xi,0) &= 1.5 + 1.5 \tanh(0.1\,\xi). \end{aligned} \tag{13}$$

To obtain the solution by the LRPSM in series form about *τ* = 0, we first apply the LT to both sides of Equation (12) to obtain:

$$\begin{split} \mathcal{L}[D^{\alpha}\_{\tau}\delta(\xi,\tau)] &= \frac{1}{2}\mathcal{L}\left[\delta\_{\xi\xi\xi}(\xi,\tau)\right] - 3\mathcal{L}\left[\delta(\xi,\tau)\delta\_{\xi}(\xi,\tau)\right] + 3\mathcal{L}\left[(\phi(\xi,\tau)\psi(\xi,\tau))\_{\xi}\right], \\ \mathcal{L}[D^{\alpha}\_{\tau}\phi(\xi,\tau)] &= -\mathcal{L}\left[\phi\_{\xi\xi\xi}(\xi,\tau)\right] + 3\mathcal{L}\left[\delta(\xi,\tau)\phi\_{\xi}(\xi,\tau)\right], \\ \mathcal{L}[D^{\alpha}\_{\tau}\psi(\xi,\tau)] &= -\mathcal{L}\left[\psi\_{\xi\xi\xi}(\xi,\tau)\right] + 3\mathcal{L}\left[\delta(\xi,\tau)\psi\_{\xi}(\xi,\tau)\right]. \end{split}$$

Using the ICs (13), we have:

$$\begin{split} \mathcal{G}(\xi,s) &= \frac{0.4933 + 0.02\tanh^2(0.1\,\xi)}{s} + \frac{1}{2s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\mathcal{G}(\xi,s) - \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\right] \\ &\quad + \frac{3}{s^{\alpha}}\mathcal{L}\left[\frac{\partial}{\partial\xi}\left(\mathcal{L}^{-1}[\Phi(\xi,s)]\,\mathcal{L}^{-1}[\Psi(\xi,s)]\right)\right], \\ \Phi(\xi,s) &= \frac{-0.0134 + 0.0134\tanh(0.1\,\xi)}{s} - \frac{1}{s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\Phi(\xi,s) + \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\Phi(\xi,s)]\right], \\ \Psi(\xi,s) &= \frac{1.5 + 1.5\tanh(0.1\,\xi)}{s} - \frac{1}{s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\Psi(\xi,s) + \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\Psi(\xi,s)]\right]. \end{split} \tag{14}$$

Define the *k*th-truncated series of Equation (14) as:

$$\begin{split} \mathcal{G}\_{k}(\xi,s) &= \frac{0.4933 + 0.02 \tanh^{2}(0.1\,\xi)}{s} + \sum\_{m=1}^{k} \frac{\delta\_{m}(\xi)}{s^{m\alpha+1}}, \quad k = 1, 2, \cdots \\ \Phi\_{k}(\xi,s) &= \frac{-0.0134 + 0.0134 \tanh(0.1\,\xi)}{s} + \sum\_{m=1}^{k} \frac{\phi\_{m}(\xi)}{s^{m\alpha+1}}, \quad k = 1, 2, \cdots \\ \Psi\_{k}(\xi,s) &= \frac{1.5 + 1.5 \tanh(0.1\,\xi)}{s} + \sum\_{m=1}^{k} \frac{\psi\_{m}(\xi)}{s^{m\alpha+1}}, \quad k = 1, 2, \cdots \end{split} \tag{15}$$

The *k*th Laplace residual function of Equation (14) is defined as:

$$\begin{split} \mathcal{L}\mathrm{Res}\_k\mathcal{G}\_k(\xi,s) &= \mathcal{G}\_k(\xi,s) - \frac{0.4933 + 0.02\tanh^2(0.1\,\xi)}{s} - \frac{1}{2s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\mathcal{G}\_k(\xi,s) \\ &\quad + \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}\_k(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\mathcal{G}\_k(\xi,s)]\right] - \frac{3}{s^{\alpha}}\mathcal{L}\left[\frac{\partial}{\partial\xi}\left(\mathcal{L}^{-1}[\Phi\_k(\xi,s)]\,\mathcal{L}^{-1}[\Psi\_k(\xi,s)]\right)\right], \\ \mathcal{L}\mathrm{Res}\_k\Phi\_k(\xi,s) &= \Phi\_k(\xi,s) - \frac{-0.0134 + 0.0134\tanh(0.1\,\xi)}{s} + \frac{1}{s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\Phi\_k(\xi,s) - \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}\_k(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\Phi\_k(\xi,s)]\right], \\ \mathcal{L}\mathrm{Res}\_k\Psi\_k(\xi,s) &= \Psi\_k(\xi,s) - \frac{1.5 + 1.5\tanh(0.1\,\xi)}{s} + \frac{1}{s^{\alpha}}\frac{\partial^{3}}{\partial\xi^{3}}\Psi\_k(\xi,s) - \frac{3}{s^{\alpha}}\mathcal{L}\left[\mathcal{L}^{-1}[\mathcal{G}\_k(\xi,s)]\,\frac{\partial}{\partial\xi}\mathcal{L}^{-1}[\Psi\_k(\xi,s)]\right]. \end{split} \tag{16}$$

Hence, to obtain the coefficient functions δ*k*(*ξ*), *φk*(*ξ*) and *ψk*(*ξ*), *k* = 1, 2, ⋯, we substitute the *k*th truncated series G*k*(*ξ*,*s*), Φ*k*(*ξ*,*s*) and Ψ*k*(*ξ*,*s*) in (15) into the *k*th Laplace residual functions (16), multiply the resulting equations by *s^{kα+1}*, and solve the recurrence relations:

$$\begin{array}{l}\lim\_{s\to\infty}s^{k\alpha+1}\mathcal{L}\mathrm{Res}\_{k}\mathcal{G}\_{k}(\vec{\xi},s)=0,\\\lim\_{s\to\infty}s^{k\alpha+1}\mathcal{L}\mathrm{Res}\_{k}\Phi\_{k}(\vec{\xi},s)=0,\\\lim\_{s\to\infty}s^{k\alpha+1}\mathcal{L}\mathrm{Res}\_{k}\Psi\_{k}(\vec{\xi},s)=0,\end{array}$$

for the unknown coefficients δ*k*(*ξ*), *φk*(*ξ*) and *ψk*(*ξ*), where *k* = 1, 2, ⋯. Computing the first few terms of the sequences {δ*k*(*ξ*)}, {*φk*(*ξ*)} and {*ψk*(*ξ*)}, we obtain:

$$\begin{aligned} \delta\_1(\xi) &= \big(0.00598 \tanh(0.1\,\xi) - 1.30104 \times 10^{-18}\big)\,\text{sech}^2(0.1\,\xi), \\ \phi\_1(\xi) &= 0.00201\,\text{sech}^2(0.1\,\xi), \\ \psi\_1(\xi) &= 0.22499\,\text{sech}^2(0.1\,\xi). \end{aligned}$$
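These first coefficients can be cross-checked from the closed-form relations of Step 5, $\delta\_1 = \frac{1}{2}a''' - 3aa' + 3(bc)'$, $\phi\_1 = 3ab' - b'''$, $\psi\_1 = 3ac' - c'''$, with the initial data of this section. The following stdlib-only sketch (illustrative; the helper names are ours, not the paper's) approximates the derivatives at ξ = 0 by central finite differences:

```python
import math

# initial data of the numerical application (Section 4)
a = lambda x: 0.4933 + 0.02 * math.tanh(0.1 * x) ** 2
b = lambda x: -0.0134 + 0.0134 * math.tanh(0.1 * x)
c = lambda x: 1.5 + 1.5 * math.tanh(0.1 * x)

def d1(f, x, h=1e-3):
    # central first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d3(f, x, h=1e-2):
    # central third derivative
    return (f(x + 2 * h) - 2 * f(x + h) + 2 * f(x - h) - f(x - 2 * h)) / (2 * h ** 3)

xi = 0.0
delta1 = 0.5 * d3(a, xi) - 3 * a(xi) * d1(a, xi) + 3 * d1(lambda x: b(x) * c(x), xi)
phi1 = 3 * a(xi) * d1(b, xi) - d3(b, xi)
psi1 = 3 * a(xi) * d1(c, xi) - d3(c, xi)
# at xi = 0 (where sech(0) = 1): delta_1 ~ 0, phi_1 ~ 0.00201, psi_1 ~ 0.22499
```

The numerical values agree with the displayed coefficients evaluated at ξ = 0, which supports the stated closed forms.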

$$\begin{split} \delta\_2(\xi) = -3.53226 \times 10^{-6}\, \text{sech}^{9}(0.1\,\xi) \big( & 59.21054 \sinh(0.1\,\xi) + 100.8182 \sinh(0.3\,\xi) \\ & + 49.60926 \sinh(0.5\,\xi) + 8.0016 \sinh(0.7\,\xi) + \cosh(0.1\,\xi) \\ & + 0.36\cosh(0.3\,\xi) - 0.04\cosh(0.5\,\xi) - 0.04\cosh(0.7\,\xi) \big), \end{split}$$

$$\begin{aligned} \phi\_2(\xi) &= -7.53842 \times 10^{-5} \big(\sinh(0.1\,\xi) + 1.49987\sinh(0.3\,\xi) + 0.49987\sinh(0.5\,\xi)\big)\, \text{sech}^7(0.1\,\xi), \\ \psi\_2(\xi) &= -0.00844 \big(\sinh(0.1\,\xi) + 1.49987 \sinh(0.3\,\xi) + 0.49987 \sinh(0.5\,\xi)\big)\, \text{sech}^7(0.1\,\xi). \end{aligned}$$

$$\begin{split} \delta\_3(\xi) = 4.68229 \times 10^{-6}\, \text{sech}^{15}(0.1\,\xi) \big( & -5.77628\sinh(0.1\,\xi) - 11.98959\sinh(0.3\,\xi) \\ & - 9.23265\sinh(0.5\,\xi) - 3.59499\sinh(0.7\,\xi) - 0.52051\sinh(0.9\,\xi) \\ & + 0.08358\sinh(1.1\,\xi) + 0.02844\sinh(1.3\,\xi) + \cosh(0.1\,\xi) \\ & + 0.43738\cosh(0.3\,\xi) - 0.005159\cosh(0.5\,\xi) - 0.070707\cosh(0.7\,\xi) \\ & - 0.01788\cosh(0.9\,\xi) - 0.000801\cosh(1.1\,\xi) - 0.000378\cosh(1.3\,\xi) \big), \end{split}$$

$$\begin{split} \phi\_3(\xi) = -6.22028 \times 10^{-5}\, \text{sech}^{11}(0.1\,\xi) \big( & 0.01352\sinh(0.1\,\xi) + 0.02301\sinh(0.3\,\xi) \\ & + 0.01132\sinh(0.5\,\xi) + 0.00183\sinh(0.7\,\xi) + \cosh(0.1\,\xi) \\ & + 0.4935\cosh(0.3\,\xi) + 0.06625\cosh(0.5\,\xi) - 0.03592\cosh(0.7\,\xi) \\ & - 0.011358\cosh(0.9\,\xi) \big), \end{split}$$

$$\begin{split} \psi\_3(\xi) = -0.00696\, \text{sech}^{11}(0.1\,\xi) \big( & 0.01352\sinh(0.1\,\xi) + 0.02301\sinh(0.3\,\xi) \\ & + 0.01132\sinh(0.5\,\xi) + 0.00183\sinh(0.7\,\xi) + \cosh(0.1\,\xi) \\ & + 0.4935\cosh(0.3\,\xi) + 0.06625\cosh(0.5\,\xi) - 0.03592\cosh(0.7\,\xi) \\ & - 0.01136\cosh(0.9\,\xi) \big). \end{split}$$

Repeating the previous steps and applying the inverse LT, the series solution of (12) and (13) in the original space can be written as:

$$\begin{aligned} \delta(\xi,\tau) &= 0.4933 + 0.02\tanh^2(0.1\,\xi) + \big(0.00598\tanh(0.1\,\xi) - 1.30104\times 10^{-18}\big)\,\text{sech}^2(0.1\,\xi)\,\frac{\tau^{\alpha}}{\Gamma(\alpha+1)} + \delta\_2(\xi)\,\frac{\tau^{2\alpha}}{\Gamma(2\alpha+1)} + \cdots, \\ \phi(\xi,\tau) &= -0.0134 + 0.0134\tanh(0.1\,\xi) + 0.00201\,\text{sech}^2(0.1\,\xi)\,\frac{\tau^{\alpha}}{\Gamma(\alpha+1)} + \phi\_2(\xi)\,\frac{\tau^{2\alpha}}{\Gamma(2\alpha+1)} + \cdots, \\ \psi(\xi,\tau) &= 1.5 + 1.5\tanh(0.1\,\xi) + 0.22499\,\text{sech}^2(0.1\,\xi)\,\frac{\tau^{\alpha}}{\Gamma(\alpha+1)} + \psi\_2(\xi)\,\frac{\tau^{2\alpha}}{\Gamma(2\alpha+1)} + \cdots, \end{aligned}$$

with $\delta\_2(\xi)$, $\phi\_2(\xi)$, and $\psi\_2(\xi)$ as given above.

In Table 1, we report absolute and relative errors between the exact solution and the fifth-order LRPSM approximation at selected grid points to demonstrate the accuracy of the method. The results confirm that the present approach is a simple and powerful tool, and they show that the error decreases as *τ* decreases.

Figure 1 shows the graphs of the exact solution and the fifth LRPSM approximate solution of the HSC–KdV equations. The effectiveness of the proposed method is evident there: the LRPSM solution coincides with the exact solution when *α* = 1. The contour plots of the fifth approximate series solution to the HSC–KdV equations are shown in Figure 2, and Figure 3 shows the graphs of the corresponding fifth LRPSM approximation and the exact solution over a wide spatial domain. In Figure 4, we examine the effect of time. It is clear that, as time increases, the LRPSM results for *δ*(*ξ*, *τ*) change behavior and move from the positive to the negative *x*-axis; in addition, *φ*(*ξ*, *τ*) and *ψ*(*ξ*, *τ*) behave differently at different times and are stable over a wide spatial domain, but the solution grows as time increases. The fifth truncated series of *δ*(*ξ*, *τ*), *φ*(*ξ*, *τ*), and *ψ*(*ξ*, *τ*) is plotted in Figure 5a–c for *α* = 0.6, *α* = 0.8, and *α* = 1, respectively, whereas the exact solution at *α* = 1 is plotted in Figure 5d. The graphs indicate consistency in the behavior of the solution at various values of *α*, as well as the agreement between the exact and approximate solutions in Figure 5c,d.


**Table 1.** Absolute and relative errors of the 6th-order approximate LRPSM solution for the HSC–KdV equations at *α* = 1 and *ξ* = 0.1.


**Figure 1.** The exact solution and the fifth approximate LRPSM solution of HSC–KdV equations for the functions *δ*(*ξ*, *τ*), *φ*(*ξ*, *τ*), and *ψ*(*ξ*, *τ*) at *τ* ∈ [0, 2], *ξ* ∈ [−40, 40], and *α* = 1.


**Figure 2.** The contour graph of the approximate solutions (**a**) *δ*(*ξ*, *τ*), (**b**) *φ*(*ξ*, *τ*), and (**c**) *ψ*(*ξ*, *τ*) for HSC–KdV equation at *τ* ∈ [0, 4], *ξ* ∈ [0, 1], and *α* = 1.

**Figure 3.** The graph of the 5th LRPSM solutions *δ*(*ξ*, *τ*), *φ*(*ξ*, *τ*), and *ψ*(*ξ*, *τ*) for HSC–KdV equations at *τ* = 0.1, *τ* = 1, *τ* = 2, *τ* = 3, *ξ* ∈ [−30, 30], and *α* = 1.

**Figure 4.** The graph of the 5th LRPSM solutions *δ*(*ξ*, *τ*), *φ*(*ξ*, *τ*), and *ψ*(*ξ*, *τ*) for HSC–KdV equation at *τ* = 0.1, *τ* = 1, *τ* = 2, *τ* = 3, *ξ* ∈ [−30, 30], and *α* = 1.

**Figure 5.** The 3D surface plots of the 5th approximate solutions *δ*(*ξ*, *τ*), *φ*(*ξ*, *τ*), and *ψ*(*ξ*, *τ*) at various values of *α*; (**a**) *α* = 0.6, (**b**) *α* = 0.8, (**c**) *α* = 1, (**d**) *α* = 1 (exact solutions).

#### **5. Conclusions**

This paper introduces a new series solution of the coupled Hirota–Satsuma and KdV equations and provides a general term of the solution. We applied the LRPSM to investigate the solution and obtained a general formula of the series solution for the target equations. We showed the efficiency and applicability of the method by introducing a numerical application and compared our results to the exact ones in the integer case. We analyzed the outcomes and sketched the solutions with different values for the variables and the fractional order. In the future, we intend to solve more physical problems with the LRPSM and compare the outcomes to those obtained by other numerical methods.

As a result of our research, we conclude the following:


**Author Contributions:** Formal analysis, O.A., R.S. and A.Q.; investigation, A.Q., R.S. and O.A.; data curation, A.Q., R.S. and O.A.; methodology, A.Q., R.S. and O.A.; writing—original draft, A.Q., R.S. and O.A.; project administration, A.Q., R.S. and O.A.; resources, R.S., O.A. and A.Q.; writing—review and editing, R.S., O.A. and A.Q. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors express their gratitude to the dear referees, who wish to remain anonymous, and the editor for their helpful suggestions, which improved the final version of this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Probing Families of Optical Soliton Solutions in Fractional Perturbed Radhakrishnan–Kundu–Lakshmanan Model with Improved Versions of Extended Direct Algebraic Method**

**Humaira Yasmin 1,\*, Noufe H. Aljahdaly 2, Abdulkafi Mohammed Saeed <sup>3</sup> and Rasool Shah <sup>4</sup>**


**Abstract:** In this investigation, we utilize advanced versions of the Extended Direct Algebraic Method (EDAM), namely the modified EDAM (mEDAM) and *r*+mEDAM, to explore families of optical soliton solutions in the Fractional Perturbed Radhakrishnan–Kundu–Lakshmanan Model (FPRKLM). Our study stands out due to its in-depth investigation and the identification of multiple localized and stable soliton families, illuminating their complex behavior. We offer visual validation via carefully designed 3D graphics that capture the complex behaviors of these solitons. The implications of our research extend to fiber optics, communication systems, and nonlinear optics, with the potential for driving developments in optical devices and information processing technologies. This study conveys an important contribution to the field of nonlinear optics, paving the way for future advancements and a greater comprehension of optical solitons and their applications.

**Keywords:** Fractional Perturbed Radhakrishnan Kundu Lakshmanan Model; Extended Direct Algebraic Method; Nonlinear Ordinary Differential Equation; optical soliton solutions; variable transformation; generalized trigonometric functions

#### **1. Introduction**

Fractional Partial Differential Equations (FPDEs) have received great attention in different fields of science due to their ability to accurately model complex physical phenomena [1–4]. This encourages researchers to dedicate their efforts to studying, examining, and analyzing FPDEs. Researchers have used numerical and analytical techniques to understand and analyze the behavior of FPDEs. Numerical methods are based on discretization techniques that approximate the solution through iterative calculations [5–7]. These numerical methods are powerful and widely used but often have limitations, such as computational expense and the inability to provide exact solutions. In contrast, analytical techniques aim to obtain exact solutions using mathematical techniques and transformations. Researchers often prefer closed formulas and analytical techniques that can provide greater insight into the underlying mathematical structure of a problem. Analytical solutions provide a comprehensive understanding of system behavior, facilitating further theoretical analysis and investigation of physical effects. Therefore, different analytical approaches, such as the Variational Iteration Method (VIM) [8], the Fractional Differential Transform Method (FDTM) [9], the (G'/G)-expansion method [10], the exp-function method [11], the tan-expansion method [12], the Adomian Decomposition Method (ADM) [13], the Laplace Transform Method (LTM) [14], and the EDAM [15,16], have been introduced to tackle FPDEs.

**Citation:** Yasmin, H.; Aljahdaly, N.H.; Saeed, A.M.; Shah, R. Probing Families of Optical Soliton Solutions in Fractional Perturbed Radhakrishnan–Kundu–Lakshmanan Model with Improved Versions of Extended Direct Algebraic Method. *Fractal Fract.* **2023**, *7*, 512. https:// doi.org/10.3390/fractalfract7070512

Academic Editors: Nikolay A. Kudryashov, Libo Feng, Lin Liu and Yang Liu

Received: 27 May 2023 Revised: 25 June 2023 Accepted: 26 June 2023 Published: 28 June 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The EDAM is a particularly efficient and reliable approach among these analytical techniques. The method first transforms a complex nonlinear FPDE into a Nonlinear Ordinary Differential Equation (NODE) by a suitable choice of variable transformation. Then, using another ODE, the EDAM assumes a series-form solution. Substituting this solution into the NODE transforms it into a system of algebraic equations. Solving this system allows us to construct different families of soliton solutions, each with profound implications in different scientific fields. This capability of the EDAM enriches our understanding and exploration of FPDEs and opens the door to groundbreaking discoveries and major advances in the scientific field.

Our study's main goal is to investigate the variety of optical soliton solutions for the FPRKLM using two upgraded versions of the EDAM, namely the mEDAM and the *r*+mEDAM. The FPRKLM is a special type of FPDE incorporating perturbations into the Radhakrishnan– Kundu–Lakshmanan Model (RKLM), a well-known equation governing soliton dynamics. The FPRKLM exhibits rich dynamics and can be applied to various physical systems, such as nonlinear optics, Bose–Einstein condensation, and plasma physics. The FPRKLM provides a valuable theoretical framework for studying wave phenomena and has practical implications. Optical solitons, which are self-reinforcing solitary waves, have attracted much attention due to their potential applications in high-speed communication systems, optical fibers, and optical signal processing. Analyzing the family of optical soliton solutions in the FPRKLM provides important insights into the behavior and manipulation of optical pulses and enables their advancement. With this analytical approach, we hope to decipher the complex wave phenomena of soliton solutions and provide valuable insights into the behavior of optical solitons in the FPRKLM. This investigation's results are important for understanding the FPRKLM and further developing nonlinear optics and related fields. The proposed complex structural FPRKLM under the Kerr law nonlinearity is given by [17]:

$$i D\_t^{\alpha} u + a\_1 D\_{xx}^{2\beta} u + b\_1 |u|^2 u - i \delta D\_x^\beta u - i \mu\_1 D\_x^\beta (|u|^2 u) - i \sigma u D\_x^\beta (|u|^2) - i \gamma D\_{xxx}^{3\beta} u = 0,\tag{1}$$

where 0 < *α*, *β* ≤ 1 and *u* represents the complex-valued wave function in space, *x*, and time, *t*. $D\_t^{\alpha}u$ denotes the fractional time evolution of the nonlinear wave, while $D\_x^{\beta}u$, $D\_{xx}^{2\beta}u$ and $D\_{xxx}^{3\beta}u$ denote spatial fractional derivatives. In this study, both time and spatial fractional derivatives are defined in the Caputo sense given in (2). The proposed model was described in terms of time-fractional derivatives in [17]. The goal of this study is to solve the problem using a more thorough model that includes complete fractional derivatives. As a consequence, we generalise the model from [17] by substituting a fractional derivative, $D\_x^{\beta}$, for the traditional spatial derivative. The inclusion of spatial fractional derivatives captures genuine phenomena and improves the description of the system by taking fractional diffusion and the interaction of temporal and spatial dynamics into consideration. It also generalises the problem, allows for richer mathematical analysis, and broadens the study's applicability to complex systems with temporal and spatial fractional dynamics. The coefficient $a\_1$ denotes group velocity dispersion (GVD), $b\_1$ denotes the nonlinearity coefficient, *δ* represents inter-modal dispersion (IMD), $\mu\_1$ corresponds to the short-pulse self-tilt coefficient, and *σ* denotes higher-order dispersion. In contrast, the coefficient *γ* corresponds to the third-order dispersion term.

Prior to this work, many mathematicians studied the optical wave phenomena of the proposed model in both integer and fractional forms, exploring optical soliton solutions using various analytical approaches. In their work [17], Sulaiman et al. delved into the study of dark, bright, and dark-light mixtures; single, mixed singular optical solitons; and singular periodic wave solutions in the time-fractional FPRKLM. Similarly, Saima et al. focused on the PRKLM for scattered light solitons of bright, dark, singular and dark-singular combinations using the $(G'/G^2)$-expansion and sine-Gordon expansion methods [18]. Tukur and Hasan [19] used the extended rational sine-cosine/sinh-cosh method to tackle local M-fractional RKL equations. Finally, Kudryashov [20] studied complex RKL equations using the fourth-power polynomial law of nonlinearity, especially for solitary wave construction.

The fractional derivatives presented in (1) are defined in the Caputo sense. This operator is defined as [21]

$$D\_y^\gamma u(\mathbf{x}, y) = \begin{cases} \frac{1}{\Gamma(1-\gamma)} \int\_0^y \frac{\partial u(\mathbf{x}, \omega)}{\partial \omega} (y - \omega)^{-\gamma} d\omega, & \gamma \in (0, 1), \\ \frac{\partial u(\mathbf{x}, y)}{\partial y}, & \gamma = 1, \end{cases} \tag{2}$$

where the function *u*(*x*, *y*) is fairly smooth. We rely on the following two properties of this operator to convert the FPDE indicated in (1) into NODEs:

$$D\_{\varphi}^{\kappa} \varphi^{r} = \frac{\Gamma(1+r)}{\Gamma(1+r-\kappa)} \varphi^{r-\kappa},\tag{3}$$

$$D\_{\varphi}^{\gamma} y[x(\varphi)] = y\_x'(x(\varphi))\, D\_{\varphi}^{\gamma} x(\varphi),\tag{4}$$

Here, we presume that *x*(*ϕ*) and *y*(*ϕ*) denote differentiable functions, whereas *r* is a real number.
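The power rule (3) can be verified numerically against Definition (2). The short Python sketch below (an illustrative check of ours, not part of the derivation) computes the Caputo derivative of $\varphi^r$ by quadrature, using the substitution $s = (y-\omega)^{1-\kappa}$ to remove the kernel singularity, and compares it with the closed form $\frac{\Gamma(1+r)}{\Gamma(1+r-\kappa)}y^{r-\kappa}$:

```python
import math

# Illustrative check of the Caputo power rule (3). The substitution
# s = (y - w)**(1 - kappa) removes the (y - w)**(-kappa) kernel singularity,
# leaving a smooth integrand for composite Simpson's rule.
def caputo_power(r, kappa, y, n=2000):
    expo = 1.0 / (1.0 - kappa)
    upper = y**(1.0 - kappa)          # new upper limit after the substitution
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        s = i * h
        w = max(y - s**expo, 0.0)     # clamp tiny negative rounding at s = upper
        fprime = r * w**(r - 1.0)     # derivative of w**r
        coef = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += coef * fprime
    return (total * h / 3.0) / (1.0 - kappa) / math.gamma(1.0 - kappa)

r, kappa, y = 2.5, 0.6, 1.3
numeric = caputo_power(r, kappa, y)
exact = math.gamma(1 + r) / math.gamma(1 + r - kappa) * y**(r - kappa)
assert abs(numeric - exact) < 1e-4 * exact
```

For these sample values the quadrature and the closed form agree closely, as the assertion confirms.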

#### **2. Method and Materials**

This section outlines the EDAM's operational procedure. Consider the following general FPDE [16]:

$$M(h, \partial\_t^{\alpha} h, \partial\_{v\_1}^\beta h, \partial\_{v\_2}^\gamma h, h\partial\_{v\_1}^\beta h, \dots) = 0, \ 0 < \alpha, \beta, \gamma \le 1,\tag{5}$$

where *h* = *h*(*t*, *v*1, *v*2, *v*3,..., *vi*).

Following these steps allows us to solve problem (5):

1. First, the transformation *h*(*t*, *v*1, *v*2, *v*3, ... , *vi*) = *H*(*ζ*), with *ζ* = *ζ*(*t*, *v*1, *v*2, *v*3, ... , *vi*) (*ζ* can be chosen in many ways), is applied to turn (5) into a NODE of the form:

$$T(H, H', H'H, \dots) = 0,\tag{6}$$

where *H* in (6) is differentiated with respect to *ζ*. Equation (6) may occasionally be integrated one or more times, which introduces constants of integration.

2. (a) The following series-form solution is suggested by the mEDAM:

$$H(\zeta) = \sum\_{m=-j}^{j} \mathbb{C}\_{m}(G(\zeta))^{m} \,. \tag{7}$$

(b) The *r*+mEDAM, in turn, offers the following solution:

$$H(\zeta) = \sum\_{m=-j}^{j} \mathbb{C}\_{m}(r + G(\zeta))^{m},\tag{8}$$

where *Cm* (*m* = −*j*, ... , 0, ... , *j*) are parameters to be determined later and *G*(*ζ*) satisfies the following nonlinear ODE:

$$G'(\zeta) = (c(G(\zeta))^2 + bG(\zeta) + a)\ln(\mu),\tag{9}$$

Here, it should be noted that *μ* takes a value other than 0 and 1, whereas *a*, *b*, and *c* remain constant throughout the investigation.

3. The positive integer *j* in (7) and (8) is often referred to as the balance number. It is obtained by homogeneously balancing the highest-order derivative in Equation (6) with its highest-degree nonlinear term.
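Since *G*′ is quadratic in *G* by (9), each derivative raises the degree in *G* by one, so a derivative of order *d* acting on a degree-*j* series has degree *j* + *d*, while a nonlinearity *H^p* has degree *pj*. A minimal Python sketch of this bookkeeping (the function name is ours, for illustration only):

```python
from fractions import Fraction

# Balance number j from homogeneous balancing: deg(H) = j, and each derivative
# adds 1 to the degree in G (because G' is quadratic in G by (9)), so a
# derivative of order d has degree j + d while H**p has degree p*j.
# Setting j + d = p*j and solving for j:
def balance_number(deriv_order, nonlinearity_power):
    return Fraction(deriv_order, nonlinearity_power - 1)

# Balancing a second derivative against a cubic nonlinearity gives j = 1.
assert balance_number(2, 3) == 1
```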


4. Substituting (7) or (8), together with (9), into (6) and collecting the coefficients of like powers of *G*(*ζ*) yields a system of algebraic equations in the undetermined parameters. Solving this system produces the families of solutions below.

**Family 1.** When *Q* is below 0 and *c* is not equal to 0, the general solutions of the nonlinear ODE provided in Equation (9) result in the following family of travelling soliton solutions:

$$G\_{1}(\zeta) = -\frac{b}{2c} + \frac{\sqrt{-\mathsf{Q}}\tan\_{\mu}\left(1/2\sqrt{-\mathsf{Q}}\zeta\right)}{2c},$$

$$G\_{2}(\zeta) = -\frac{b}{2c} - \frac{\sqrt{-\mathsf{Q}}\cot\_{\mu}\left(1/2\sqrt{-\mathsf{Q}}\zeta\right)}{2c},$$

$$G\_{3}(\zeta) = -\frac{b}{2c} + \frac{\sqrt{-\mathsf{Q}}\left(\tan\_{\mu}\left(\sqrt{-\mathsf{Q}}\zeta\right) \pm \left(\sqrt{pq}\sec\_{\mu}\left(\sqrt{-\mathsf{Q}}\zeta\right)\right)\right)}{2c},$$

$$G\_{4}(\zeta) = -\frac{b}{2c} - \frac{\sqrt{-\mathsf{Q}}\left(\cot\_{\mu}\left(\sqrt{-\mathsf{Q}}\zeta\right) \pm \left(\sqrt{pq}\csc\_{\mu}\left(\sqrt{-\mathsf{Q}}\zeta\right)\right)\right)}{2c}.$$

and

$$G\_5(\zeta) = -\frac{b}{2c} + \frac{\sqrt{-Q} \left(\tan\_{\mu}\left(1/4\sqrt{-Q}\zeta\right) - \cot\_{\mu}\left(1/4\sqrt{-Q}\zeta\right)\right)}{4c}.$$
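These trigonometric-type solutions can be spot-checked against the ODE (9). The Python sketch below (an independent numerical check of ours, using the generalized trigonometric functions defined later in this section, evaluated in complex arithmetic) confirms by central differences that $G\_1$ satisfies $G' = (cG^2 + bG + a)\ln(\mu)$ for sample parameters with *Q* < 0:

```python
import cmath

# Generalized tan with base mu and deformation parameters p, q; complex
# arithmetic is needed since mu**(i*z) is complex.
def tan_mu(z, mu, p, q):
    L = cmath.log(mu)
    e = cmath.exp(1j * L * z)
    sn = (p * e - q / e) / 2j
    cn = (p * e + q / e) / 2
    return sn / cn

# Family 1 solution G1 of the ODE (9), valid for Q = b^2 - 4ac < 0.
def G_1(z, a, b, c, mu, p, q):
    R = cmath.sqrt(-(b * b - 4 * a * c))
    return -b / (2 * c) + R * tan_mu(0.5 * R * z, mu, p, q) / (2 * c)

a, b, c, mu, p, q = 2.0, 1.0, 1.0, 2.0, 1.3, 0.8   # Q = -7 < 0
h = 1e-6
for z in (0.1, 0.45):
    lhs = (G_1(z + h, a, b, c, mu, p, q) - G_1(z - h, a, b, c, mu, p, q)) / (2 * h)
    g = G_1(z, a, b, c, mu, p, q)
    rhs = (c * g * g + b * g + a) * cmath.log(mu)
    assert abs(lhs - rhs) < 1e-5   # G1 satisfies (9)
```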

**Family 2.** The generic solutions derived from Equation (9) lead to the following family of traveling soliton solutions when *Q* is larger than zero and *c* is not equal to zero:

$$\mathcal{G}\_6(\zeta) = -\frac{b}{2c} - \frac{\sqrt{\mathbb{Q}}\tanh\_{\mu}(1/2\sqrt{\mathbb{Q}}\zeta)}{2c},$$

$$\mathcal{G}\_7(\zeta) = -\frac{b}{2c} - \frac{\sqrt{\mathbb{Q}}\coth\_{\mu}(1/2\sqrt{\mathbb{Q}}\zeta)}{2c},$$

$$\mathcal{G}\_8(\zeta) = -\frac{b}{2c} - \frac{\sqrt{\mathbb{Q}}\Big(\tanh\_{\mu}(\sqrt{\mathbb{Q}}\zeta) \pm \left(\sqrt{pq}\mathrm{sech}\_{\mu}(\sqrt{\mathbb{Q}}\zeta)\right)\Big)}{2c},$$

$$\mathcal{G}\_9(\zeta) = -\frac{b}{2c} - \frac{\sqrt{\mathbb{Q}}\Big(\coth\_{\mu}(\sqrt{\mathbb{Q}}\zeta) \pm \left(\sqrt{pq}\mathrm{csch}\_{\mu}(\sqrt{\mathbb{Q}}\zeta)\right)\Big)}{2c}.$$

and

$$G\_{10}(\zeta) = -\frac{b}{2c} - \frac{\sqrt{\mathbb{Q}} \left( \tanh\_{\mu} \left( 1/4 \sqrt{\mathbb{Q}} \zeta \right) - \coth\_{\mu} \left( 1/4 \sqrt{\mathbb{Q}} \zeta \right) \right)}{4c}.$$

**Family 3.** When the product *ac* is greater than 0 and *b* is equal to 0, the general solutions stated in Equation (9) produce the following family of traveling soliton solutions:

$$G\_{11}(\zeta) = \sqrt{\frac{a}{c}} \tan\_{\mu}(\sqrt{ac}\zeta),$$

$$G\_{12}(\zeta) = -\sqrt{\frac{a}{c}} \cot\_{\mu}(\sqrt{ac}\zeta),$$

$$\begin{aligned} G\_{13}(\zeta) &= \sqrt{\frac{a}{c}} \left( \tan\_{\mu} \left( 2\sqrt{ac}\zeta \right) \pm \left( \sqrt{pq}\sec\_{\mu} \left( 2\sqrt{ac}\zeta \right) \right) \right), \\ G\_{14}(\zeta) &= -\sqrt{\frac{a}{c}} \left( \cot\_{\mu} \left( 2\sqrt{ac}\zeta \right) \pm \left( \sqrt{pq}\csc\_{\mu} \left( 2\sqrt{ac}\zeta \right) \right) \right), \\ G\_{15}(\zeta) &= \frac{1}{2} \sqrt{\frac{a}{c}} \left( \tan\_{\mu} \left( 1/2\sqrt{ac}\zeta \right) - \cot\_{\mu} \left( 1/2\sqrt{ac}\zeta \right) \right). \end{aligned}$$

**Family 4.** The generic solutions of (9) provide the following family of traveling soliton solutions for *ac* < 0 and *b* = 0:

$$G\_{16}(\zeta) = -\sqrt{-\frac{a}{c}}\tanh\_{\mu}\left(\sqrt{-ac}\zeta\right),$$

$$G\_{17}(\zeta) = -\sqrt{-\frac{a}{c}}\coth\_{\mu}\left(\sqrt{-ac}\zeta\right),$$

$$G\_{18}(\zeta) = -\sqrt{-\frac{a}{c}}\left(\tanh\_{\mu}\left(2\sqrt{-ac}\zeta\right) \pm \left(i\sqrt{pq}\text{sech}\_{\mu}\left(2\sqrt{-ac}\zeta\right)\right)\right),$$

$$G\_{19}(\zeta) = -\sqrt{-\frac{a}{c}}\left(\coth\_{\mu}\left(2\sqrt{-ac}\zeta\right) \pm \left(\sqrt{pq}\text{csch}\_{\mu}\left(2\sqrt{-ac}\zeta\right)\right)\right).$$

and

$$G\_{20}(\zeta) = -\frac{1}{2}\sqrt{-\frac{a}{c}}\Big(\tanh\_{\mu}\Big(1/2\sqrt{-ac}\zeta\Big) + \coth\_{\mu}\Big(1/2\sqrt{-ac}\zeta\Big)\Big).$$

**Family 5.** When *c* equals *a* and *b* equals 0, the general solutions derived from Equation (9) give the following family of travelling soliton solutions:

$$G\_{21}(\zeta) = \tan\_{\mu}(a\zeta),$$

$$G\_{22}(\zeta) = -\cot\_{\mu}(a\zeta),$$

$$G\_{23}(\zeta) = \tan\_{\mu}(2a\zeta) \pm \left(\sqrt{pq}\sec\_{\mu}(2a\zeta)\right),$$

$$G\_{24}(\zeta) = -\cot\_{\mu}(2a\zeta) \pm \left(\sqrt{pq}\csc\_{\mu}(2a\zeta)\right),$$

$$G\_{25}(\zeta) = \frac{1}{2}\tan\_{\mu}(1/2\,a\zeta) - 1/2\cot\_{\mu}(1/2\,a\zeta).$$

**Family 6.** The following family of traveling soliton solutions is produced when the general solutions derived from Equation (9) are used in the situation where *c* is equal to −*a* and *b* is equal to zero:

$$\begin{aligned} \mathcal{G}\_{26}(\zeta) &= -\tanh\_{\mu}(a\zeta), \\ \mathcal{G}\_{27}(\zeta) &= -\coth\_{\mu}(a\zeta), \\ \mathcal{G}\_{28}(\zeta) &= -\tanh\_{\mu}(2a\zeta) \pm \left(i\sqrt{pq}\,\mathrm{sech}\_{\mu}(2a\zeta)\right), \\ \mathcal{G}\_{29}(\zeta) &= -\coth\_{\mu}(2a\zeta) \pm \left(\sqrt{pq}\,\mathrm{csch}\_{\mu}(2a\zeta)\right), \end{aligned}$$

and

$$G\_{30}(\zeta) = -\frac{1}{2}\tanh\_{\mu}(1/2\,a\zeta) - 1/2\,\coth\_{\mu}(1/2\,a\zeta).$$

**Family 7.** The application of the general solutions derived from Equation (9) yields the specified family of traveling soliton solutions when *Q* is equal to zero:

$$G\_{31}(\zeta) = \frac{-2a(b\zeta \ln(\mu) + 2)}{b^2 \zeta \ln(\mu)}.$$
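This rational solution can also be checked numerically. A short Python sketch of ours (illustrative only, with $c = b^2/(4a)$ enforcing *Q* = 0) verifies by central differences that $G\_{31}$ satisfies the ODE (9):

```python
import math

# Family 7 (Q = 0) rational solution of the ODE (9).
def G_31(z, a, b, mu):
    L = math.log(mu)
    return -2 * a * (b * z * L + 2) / (b * b * z * L)

a, b, mu = 1.5, 2.0, 3.0
c = b * b / (4 * a)        # enforces Q = b^2 - 4ac = 0
L = math.log(mu)
h = 1e-6
for z in (0.5, 1.2, -0.7):
    lhs = (G_31(z + h, a, b, mu) - G_31(z - h, a, b, mu)) / (2 * h)
    g = G_31(z, a, b, mu)
    rhs = (c * g * g + b * g + a) * L
    assert abs(lhs - rhs) < 1e-4   # G31 satisfies (9) when Q = 0
```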

**Family 8.** The generic solutions derived from Equation (9) produce the following family of traveling soliton solutions where *b* is equal to *ν*, *a* is equal to *Nν* (where *N* is a non-zero number), and *c* is equal to zero:

$$G\_{32}(\zeta) = \mu^{\nu\zeta} - N.$$

**Family 9.** The generic solutions derived from Equation (9) give rise to the specified family of traveling soliton solutions when both *b* and *c* are equal to zero:

$$G\_{33}(\zeta) = a\zeta \ln(\mu).$$

**Family 10.** The general solutions derived from Equation (9) result in the stated set of traveling soliton solutions when both *b* and *a* are zero:

$$G\_{34}(\zeta) = -\frac{1}{c\zeta\ln(\mu)}.$$

**Family 11.** The general solutions resulting from Equation (9) result in the stated family of traveling soliton solutions when *a* is zero, *b* is not equal to zero, and *c* is not equal to zero:

$$G\_{35}(\zeta) = -\frac{pb}{c \left( \cosh\_{\mu}(b\zeta) - \sinh\_{\mu}(b\zeta) + p \right)},$$

and

$$G\_{36}(\zeta) = -\frac{b\left(\cosh\_{\mu}(b\zeta) + \sinh\_{\mu}(b\zeta)\right)}{c\left(\cosh\_{\mu}(b\zeta) + \sinh\_{\mu}(b\zeta) + q\right)}.$$

**Family 12.** The general solutions derived from Equation (9) result in the following set of traveling soliton solutions where *b* is equal to *ν*, *c* is equal to *Nν* (where *N* is a non-zero number), and *a* is equal to zero:

$$G\_{37}(\zeta) = \frac{p\mu^{\nu\zeta}}{p - Nq\mu^{\nu\zeta}}.$$

Here, *p* and *q* are both greater than zero and are known as the deformation parameters. In addition, *Q* is defined as $Q = b^2 - 4ac$. Our solutions contain generalised trigonometric and hyperbolic functions, which may be represented as follows:

$$\begin{aligned} \sin\_{\mu}(\zeta) &= \frac{p\mu^{i\zeta} - q\mu^{-i\zeta}}{2i}, & \cos\_{\mu}(\zeta) &= \frac{p\mu^{i\zeta} + q\mu^{-i\zeta}}{2}, \\ \sec\_{\mu}(\zeta) &= \frac{1}{\cos\_{\mu}(\zeta)}, & \csc\_{\mu}(\zeta) &= \frac{1}{\sin\_{\mu}(\zeta)}, \\ \tan\_{\mu}(\zeta) &= \frac{\sin\_{\mu}(\zeta)}{\cos\_{\mu}(\zeta)}, & \cot\_{\mu}(\zeta) &= \frac{\cos\_{\mu}(\zeta)}{\sin\_{\mu}(\zeta)}. \end{aligned}$$

Similarly,

$$\begin{aligned} \sinh\_{\mu}(\zeta) &= \frac{p\mu^{\zeta} - q\mu^{-\zeta}}{2}, & \cosh\_{\mu}(\zeta) &= \frac{p\mu^{\zeta} + q\mu^{-\zeta}}{2}, \\ \mathrm{sech}\_{\mu}(\zeta) &= \frac{1}{\cosh\_{\mu}(\zeta)}, & \mathrm{csch}\_{\mu}(\zeta) &= \frac{1}{\sinh\_{\mu}(\zeta)}, \\ \tanh\_{\mu}(\zeta) &= \frac{\sinh\_{\mu}(\zeta)}{\cosh\_{\mu}(\zeta)}, & \coth\_{\mu}(\zeta) &= \frac{\cosh\_{\mu}(\zeta)}{\sinh\_{\mu}(\zeta)}. \end{aligned}$$
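With these definitions (taking $\sinh\_{\mu}(\zeta) = (p\mu^{\zeta} - q\mu^{-\zeta})/2$ and $\cosh\_{\mu}(\zeta) = (p\mu^{\zeta} + q\mu^{-\zeta})/2$), the hyperbolic families can be checked directly against the ODE (9). A minimal Python sketch of ours verifying by central differences that the Family 2 solution $G\_6$ satisfies $G' = (cG^2 + bG + a)\ln(\mu)$:

```python
import math

# Generalized tanh built from the definitions above (p, q > 0, base mu).
def tanh_mu(z, mu, p, q):
    e = mu**z
    return (p * e - q / e) / (p * e + q / e)

# Family 2 solution G6 of the ODE (9), valid for Q = b^2 - 4ac > 0.
def G_6(z, a, b, c, mu, p, q):
    s = math.sqrt(b * b - 4 * a * c)
    return -b / (2 * c) - s * tanh_mu(0.5 * s * z, mu, p, q) / (2 * c)

a, b, c, mu, p, q = 1.0, 3.0, 1.0, 2.0, 1.5, 0.7   # Q = 5 > 0
h = 1e-6
for z in (-0.8, 0.3, 1.1):
    lhs = (G_6(z + h, a, b, c, mu, p, q) - G_6(z - h, a, b, c, mu, p, q)) / (2 * h)
    g = G_6(z, a, b, c, mu, p, q)
    rhs = (c * g * g + b * g + a) * math.log(mu)
    assert abs(lhs - rhs) < 1e-5   # G6 satisfies (9)
```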

#### **3. Results**

In this section, the targeted problem is addressed with the improved versions of the EDAM. We begin with the following travelling wave transformation:

$$u(x,t) = \mathcal{U}(\zeta)e^{i\theta}, \quad \text{where} \quad \zeta = \lambda\left(\frac{x^{\beta}}{\Gamma(\beta+1)} - \frac{c\_1 t^{\alpha}}{\Gamma(\alpha+1)}\right) \quad \text{and} \quad \theta = -\frac{k x^{\beta}}{\Gamma(\beta+1)} + \frac{\omega t^{\alpha}}{\Gamma(\alpha+1)} + \vartheta. \tag{10}$$

Substituting (10) into (1) yields:

$$
\lambda^2 (a\_1 + 3k\gamma) \mathcal{U}^{\prime\prime} + (b\_1 - k\mu\_1) \mathcal{U}^3 - (\omega + \delta k + a\_1 k^2 + \gamma k^3) \mathcal{U} = 0,\tag{11}
$$

from the real part while the imaginary part gives:

$$
\lambda^2 \gamma\, \mathcal{U}^{\prime\prime\prime} - (c\_1 + 2a\_1 k + 3k^2 \gamma + \delta) \mathcal{U}^{\prime} - (2\sigma + 3\mu\_1) \mathcal{U}^2 \mathcal{U}^{\prime} = 0. \tag{12}
$$

By integrating (12) once with respect to *ζ*, setting the constant of integration to zero, and multiplying by 3, we have:

$$3\lambda^2 \gamma\, \mathcal{U}^{\prime\prime} - 3(c\_1 + 2a\_1k + 3k^2\gamma + \delta)\mathcal{U} - (2\sigma + 3\mu\_1)\mathcal{U}^3 = 0. \tag{13}$$

(11) and (13) have the same forms under the following constraint condition:

$$\frac{a\_1 + 3k\gamma}{3\gamma} = -\frac{b\_1 - k\mu\_1}{2\sigma + 3\mu\_1} = \frac{\omega + \delta k + a\_1 k^2 + \gamma k^3}{3(c\_1 + 2a\_1 k + 3k^2 \gamma + \delta)}.\tag{14}$$

Solving (14) for *c*<sup>1</sup> and *k* yields:

$$k = -\frac{2a\_1\sigma + 3b\_1\gamma + 3a\_1\mu\_1}{6(\sigma + \mu\_1)\gamma},\tag{15}$$

$$c\_1 = \frac{(\omega + \delta k + a\_1 k^2 + \gamma k^3)\gamma}{a\_1 + 3\gamma k} - (2ka\_1 + \delta + 3k^2 \gamma). \tag{16}$$

The constraints in (14)–(16) reduce the FPRKLM to the single ODE given in (11). The next goal is to solve (11) using the proposed versions of the EDAM to generate families of optical soliton solutions for (1). Balancing the highest-order nonlinear term $\mathcal{U}^3$ with the highest-order derivative $\mathcal{U}^{\prime\prime}$ gives *j* = 1.
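The consistency of (14)–(16) can be spot-checked numerically. The sketch below (with sample parameter values of our choosing, and with the model coefficient written as `mu1`) confirms that *k* from (15) equates the first two ratios in (14), written here as $(a\_1 + 3k\gamma)/(3\gamma) = -(b\_1 - k\mu\_1)/(2\sigma + 3\mu\_1)$, and that $c\_1$ from (16) matches the third:

```python
# Numerical spot-check of the constraints (14)-(16) with sample values we chose.
a1, b1, gamma, sigma, mu1, delta, omega = 1.2, 0.8, 0.5, 0.3, 0.4, 0.2, 1.0

# k from (15)
k = -(2 * a1 * sigma + 3 * b1 * gamma + 3 * a1 * mu1) / (6 * (sigma + mu1) * gamma)

# the first two ratios of (14) coincide once k is chosen this way
ratio1 = (a1 + 3 * k * gamma) / (3 * gamma)
ratio2 = -(b1 - k * mu1) / (2 * sigma + 3 * mu1)
assert abs(ratio1 - ratio2) < 1e-12

# c1 from (16) makes the third ratio of (14) match as well
m1 = a1 + 3 * k * gamma
m2 = omega + delta * k + a1 * k**2 + gamma * k**3
c1 = m2 * gamma / m1 - (2 * k * a1 + delta + 3 * k**2 * gamma)
ratio3 = m2 / (3 * (c1 + 2 * a1 * k + 3 * k**2 * gamma + delta))
assert abs(ratio1 - ratio3) < 1e-9
```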

#### *3.1. Application of the mEDAM*

First, we apply the mEDAM to Equation (11). Inserting *j* = 1 in Equation (7) gives the following series-based solution for problem (11):

$$\mathcal{U}(\zeta) = \sum\_{m=-1}^{1} \mathbb{C}\_{m} (G(\zeta))^{m} = \mathbb{C}\_{-1} (G(\zeta))^{-1} + \mathbb{C}\_{0} + \mathbb{C}\_{1} G(\zeta),\tag{17}$$

where *C*−1, *C*0 and *C*1 are unknown constants. Substituting Equation (17) into Equation (11) produces a system of nonlinear algebraic equations. Solving this system with the Maple software yields the following two sets of solutions:

**Case 1.**

$$\begin{aligned} \mathbb{C}\_{1} &= 0, \mathbb{C}\_{-1} = 2 \, a \sqrt{\frac{m\_{2}}{m\_{3}(b^{2} - 4 \, ac)}}, \mathbb{C}\_{0} = -\sqrt{\frac{m\_{2}}{m\_{3}(b^{2} - 4 \, ac)}} b, \\\ \lambda &= \frac{\sqrt{2}}{\ln(\mu)} \sqrt{\frac{m\_{2}}{m\_{1}(-b^{2} + 4 \, ac)}}, \end{aligned} \tag{18}$$

**Case 2.**

$$\begin{aligned} \mathbf{C}\_{1} &= 2 \ c \sqrt{\frac{m\_{2}}{m\_{3}(b^{2} - 4ac)}}, \mathbf{C}\_{-1} = 0, \mathbf{C}\_{0} = -\sqrt{\frac{m\_{2}}{m\_{3}(b^{2} - 4ac)}} b, \\ \lambda &= \frac{\sqrt{2}}{\ln(\mu)} \sqrt{\frac{m\_{2}}{m\_{1}(-b^{2} + 4ac)}}, \end{aligned} \tag{19}$$

where

$$\begin{aligned} m\_1 &= a\_1 + 3k\gamma,\\ m\_2 &= \omega + \delta k + a\_1 k^2 + \gamma k^3,\\ m\_3 &= b\_1 - k\mu\_1. \end{aligned} \tag{20}$$

Taking Case 1 into consideration, we arrive at the following families of optical soliton solutions:

**Family 1.** When *Q* is less than 0 and *a*, *b*, and *c* are all non-zero, Equations (17) and (10), together with the general solutions obtained from Equation (9), yield the following family of optical soliton solutions:

$$\begin{split} u\_1(x, t) &= e^{i\theta} \Big( 2a\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \left( -\frac{b}{2c} + \frac{\sqrt{-Q}\tan\_{\mu}\left(1/2\sqrt{-Q}\,\zeta\right)}{2c} \right)^{-1} \\ &- \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}}\, b \Big), \end{split} \tag{21}$$

$$\begin{split} u\_2(x, t) &= e^{i\theta} \Big( 2a\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \left( -\frac{b}{2c} - \frac{\sqrt{-Q}\cot\_{\mu}\left(1/2\sqrt{-Q}\,\zeta\right)}{2c} \right)^{-1} \\ &- \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}}\, b \Big), \end{split} \tag{22}$$

$$\begin{split} u\_3(x,t) &= e^{i\theta} \Big( 2a\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} + \frac{\sqrt{-Q}\left(\tan\_{\mu}\left(\sqrt{-Q}\,\zeta\right) \pm \left(\sqrt{pq}\sec\_{\mu}\left(\sqrt{-Q}\,\zeta\right)\right)\right)}{2c} \right)^{-1} \\ &- \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}}\, b \Big), \end{split} \tag{23}$$

$$\begin{split} u\_4(x,t) &= e^{i\theta} \Big( 2a\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{-Q}\left(\cot\_{\mu}\left(\sqrt{-Q}\,\zeta\right) \pm \left(\sqrt{pq}\csc\_{\mu}\left(\sqrt{-Q}\,\zeta\right)\right)\right)}{2c} \right)^{-1} \\ &- \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}}\, b \Big), \end{split} \tag{24}$$

$$\begin{split} u\_5(x,t) &= e^{i\theta} \Big( 2a\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} + \frac{\sqrt{-Q}\left(\tan\_{\mu}\left(1/4\sqrt{-Q}\,\zeta\right) - \cot\_{\mu}\left(1/4\sqrt{-Q}\,\zeta\right)\right)}{4c} \right)^{-1} \\ &- \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}}\, b \Big). \end{split} \tag{25}$$

**Family 2.** When *Q* is greater than 0 and *a*, *b*, and *c* are all non-zero, Equations (17) and (10), together with the general solutions obtained from Equation (9), yield the following family of optical soliton solutions:

$$\begin{split} u\_{6}(x,t) &= e^{i\theta} \Big( 2a\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \left( -\frac{b}{2c} - \frac{\sqrt{Q}\tanh\_{\mu}\left(1/2\sqrt{Q}\,\zeta\right)}{2c} \right)^{-1} \\ &- \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}}\, b \Big), \end{split} \tag{26}$$

$$\begin{split} u\_7(x, t) &= e^{i\theta} \Big( 2a\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \left( -\frac{b}{2c} - \frac{\sqrt{Q}\coth\_{\mu}\left(1/2\sqrt{Q}\,\zeta\right)}{2c} \right)^{-1} \\ &- \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}}\, b \Big), \end{split} \tag{27}$$

$$\begin{split} u\_{8}(x,t) &= e^{i\theta} \Big( 2a\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{Q}\left(\tanh\_{\mu}\left(\sqrt{Q}\,\zeta\right) \pm \left(\sqrt{pq}\,\mathrm{sech}\_{\mu}\left(\sqrt{Q}\,\zeta\right)\right)\right)}{2c} \right)^{-1} \\ &- \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}}\, b \Big), \end{split} \tag{28}$$

$$\begin{split} u\_{9}(x,t) &= e^{i\theta} \Big( 2a\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{Q}\left(\coth\_{\mu}\left(\sqrt{Q}\,\zeta\right) \pm \left(\sqrt{pq}\,\mathrm{csch}\_{\mu}\left(\sqrt{Q}\,\zeta\right)\right)\right)}{2c} \right)^{-1} \\ &- \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}}\, b \Big), \end{split} \tag{29}$$

$$\begin{split} u\_{10}(x,t) &= e^{i\theta} \Big( 2a\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{Q}\left(\tanh\_{\mu}\left(1/4\sqrt{Q}\,\zeta\right) - \coth\_{\mu}\left(1/4\sqrt{Q}\,\zeta\right)\right)}{4c} \right)^{-1} \\ &- \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}}\, b \Big). \end{split} \tag{30}$$

**Family 3.** Equations (17) and (10), and the related general solutions obtained from Equation (9), when used in conjunction, produce a particular family of optical soliton solutions where the product *ac* is larger than zero and *b* is equal to zero, which may be written as follows:

$$u\_{11}(x,t) = e^{i\theta} \left( \sqrt{\frac{-m\_2}{m\_3}} \left( \tan\_{\mu}\left(\sqrt{ac}\,\zeta\right) \right)^{-1} \right),\tag{31}$$

$$u\_{12}(x,t) = e^{i\theta} \left( -\sqrt{\frac{-m\_2}{m\_3}} \left( \cot\_{\mu}\left(\sqrt{ac}\,\zeta\right) \right)^{-1} \right),\tag{32}$$

$$u\_{13}(x,t) = e^{i\theta} \left( \sqrt{\frac{-m\_2}{m\_3}} \left( \tan\_{\mu}\left(2\sqrt{ac}\,\zeta\right) \pm \left(\sqrt{pq}\sec\_{\mu}\left(2\sqrt{ac}\,\zeta\right)\right) \right)^{-1} \right), \tag{33}$$

$$u\_{14}(x,t) = e^{i\theta} \left( -\sqrt{\frac{-m\_2}{m\_3}} \left( \cot\_{\mu}\left(2\sqrt{ac}\,\zeta\right) \pm \left(\sqrt{pq}\csc\_{\mu}\left(2\sqrt{ac}\,\zeta\right)\right) \right)^{-1} \right),\tag{34}$$

and

$$u\_{15}(\mathbf{x},t) = e^{i\theta} \left( 2\sqrt{\frac{-m\_2}{m\_3}} \left( \tan\_{\mu}\left( 1/2\sqrt{ac}(\zeta) \right) - \cot\_{\mu}\left( 1/2\sqrt{ac}(\zeta) \right) \right)^{-1} \right). \tag{35}$$

**Family 4.** Combining the use of Equations (17) and (10), and the related general solutions obtained from Equation (9), results in a unique family of optical soliton solutions in the situation where the product *ac* is less than zero and *b* is equal to zero. These solutions are represented as follows:

$$u\_{16}(x,t) = e^{i\theta} \left( -\sqrt{\frac{m\_2}{m\_3}} \left( \tanh\_{\mu}\left(\sqrt{-ac}\,\zeta\right) \right)^{-1} \right),\tag{36}$$

$$u\_{17}(x,t) = e^{i\theta} \left( -\sqrt{\frac{m\_2}{m\_3}} \left( \coth\_{\mu}\left(\sqrt{-ac}\,\zeta\right) \right)^{-1} \right),\tag{37}$$

$$\begin{split} u\_{18}(x,t) &= e^{i\theta} \Big( -\sqrt{\frac{m\_2}{m\_3}} \big( \tanh\_{\mu}\left(2\sqrt{-ac}\,\zeta\right) \\ &\pm \left(i\sqrt{pq}\,\mathrm{sech}\_{\mu}\left(2\sqrt{-ac}\,\zeta\right)\right) \big)^{-1} \Big), \end{split} \tag{38}$$

$$\begin{split} u\_{19}(x,t) &= e^{i\theta} \Big( -\sqrt{\frac{m\_2}{m\_3}} \big( \coth\_{\mu}\left(2\sqrt{-ac}\,\zeta\right) \\ &\pm \left(\sqrt{pq}\,\mathrm{csch}\_{\mu}\left(2\sqrt{-ac}\,\zeta\right)\right) \big)^{-1} \Big), \end{split} \tag{39}$$

$$\begin{split} u\_{20}(x,t) &= e^{i\theta} \Big( -2\sqrt{\frac{m\_2}{m\_3}} \big( \tanh\_{\mu}\left(1/2\sqrt{-ac}\,\zeta\right) \\ &+ \coth\_{\mu}\left(1/2\sqrt{-ac}\,\zeta\right) \big)^{-1} \Big). \end{split} \tag{40}$$

**Family 5.** Equations (17) and (10), and the related general solutions obtained from Equation (9) are used to construct a specific family of optical soliton solutions where *c* is equal to *a* and *b* is equal to zero. These solutions are represented as follows:

$$u\_{21}(x,t) = e^{i\theta} (\sqrt{\frac{-m\_2}{m\_3}} (\tan\_\mu(a\zeta))^{-1}),\tag{41}$$

$$u\_{22}(x,t) = e^{i\theta} \left( -\sqrt{\frac{-m\_2}{m\_3}} \left( \cot\_{\mu} (a(\zeta)) \right)^{-1} \right),\tag{42}$$

$$u\_{23}(\mathbf{x},t) = \epsilon^{i\theta} \left( \sqrt{\frac{-m\_2}{m\_3}} \left( \tan\_{\mu}(2\,a(\zeta)) \pm \left( \sqrt{pq}\sec\_{\mu}(2\,a(\zeta)) \right) \right)^{-1} \right),\tag{43}$$

$$u\_{24}(\mathbf{x},t) = e^{i\theta} \left( \sqrt{\frac{-m\_2}{m\_3}} \left( -\cot\_{\mu}(2\,a(\zeta)) \mp \left( \sqrt{pq}\,\text{csc}\_{\mu}(2\,a(\zeta)) \right) \right)^{-1} \right),\tag{44}$$

and

$$u\_{25}(\mathbf{x},t) = e^{i\theta} \left( \sqrt{\frac{-m\_2}{m\_3}} \left( 1/2 \tan\_{\mu}(1/2 \, a(\zeta)) - 1/2 \cot\_{\mu}(1/2 \, a(\zeta)) \right)^{-1} \right). \tag{45}$$

**Family 6.** Equations (17) and (10), and the related general solutions obtained from Equation (9) are used to construct a specific family of optical soliton solutions where *c* is equal to −*a* and *b* is equal to zero. These solutions are represented as follows:

$$
u\_{26}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}} (\tanh\_{\mu}(a\zeta))^{-1}),\tag{46}
$$

$$u\_{27}(x,t) = e^{i\theta} \left( -\sqrt{\frac{m\_2}{m\_3}} \left( \coth\_{\mu}(a(\zeta)) \right)^{-1} \right),\tag{47}$$

$$u\_{28}(x,t) = e^{i\theta} \left( \sqrt{\frac{m\_2}{m\_3}} \left( -\tanh\_{\mu}(2\,a\zeta) \mp \left( i\sqrt{pq}\,\mathrm{sech}\_{\mu}(2\,a\zeta) \right) \right)^{-1} \right),\tag{48}$$

$$u\_{29}(\mathbf{x},t) = e^{i\theta} \left( \sqrt{\frac{m\_2}{m\_3}} \left( -\coth\_{\mu}(2\,a(\zeta)) \mp \left( \sqrt{pq}\text{csch}\_{\mu}(2\,a(\zeta)) \right) \right)^{-1} \right),\tag{49}$$

$$u\_{30}(x, t) = e^{i\theta} \left( \sqrt{\frac{m\_2}{m\_3}} \left( -1/2 \tanh\_{\mu} (1/2 \, a\zeta) - 1/2 \coth\_{\mu} (1/2 \, a\zeta) \right)^{-1} \right). \tag{50}$$

**Family 7.** Equations (17) and (10), and the associated general solutions derived from Equation (9) are used to produce a specific family of optical soliton solutions in the case where *b* is equal to *ν*, *a* is equal to *nν* (where *n* is a non-zero value), and *c* is equal to zero. These solutions are expressed as follows:

$$u\_{31}(x,t) = e^{i\theta} \left( 2\, n \sqrt{\frac{m\_2}{m\_3}} \left(\mu^{\nu\zeta} - n\right)^{-1} - \sqrt{\frac{m\_2}{m\_3}} \right),\tag{51}$$

where $\zeta = \frac{\sqrt{2}}{\ln(\mu)}\sqrt{\frac{m\_2}{m\_1(-b^2+4ac)}}\left(\frac{x^{\beta}}{\Gamma(\beta+1)} - \frac{c\_1 t^{\alpha}}{\Gamma(\alpha+1)}\right)$ and $\theta = -\frac{kx^{\beta}}{\Gamma(\beta+1)} + \frac{\omega t^{\alpha}}{\Gamma(\alpha+1)} + \vartheta$.

Now, considering Case 2, we obtain the following cluster of optical soliton solutions:

**Family 8.** Equations (17) and (10), and the equivalent general solutions obtained from Equation (9) when *Q* is less than zero and *a*, *b*, and *c* are all non-zero, result in a particular family of optical soliton solutions, which may be written as follows:

$$\begin{split} u\_{32}(x,t) &= e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3(b^2-4ac)}}\, b \\ &+ 2c\sqrt{\frac{m\_2}{m\_3(b^2-4ac)}} \left( -\frac{b}{2c} + \frac{\sqrt{-Q}\tan\_{\mu}\left(1/2\sqrt{-Q}\,\zeta\right)}{2c} \right) ), \end{split} \tag{52}$$

$$\begin{split} u\_{33}(\mathbf{x},t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-b^2+4ac)}} b \\ &+ 2c\sqrt{\frac{m\_2}{m\_3(b^2-4ac)}} \left( -\frac{b}{2c} - \frac{\sqrt{-Q}\cot\_{\mu}\left(1/2\sqrt{-Q}(\zeta)\right)}{2c} \right) ), \end{split} \tag{53}$$

$$\begin{split} u\_{34}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-b^2+4ac)}}b + 2c\sqrt{\frac{m\_2}{m\_3(b^2-4ac)}} \times \\ &\left(-\frac{b}{2c} + \frac{\sqrt{-Q}(\tan\_{\mu}(\sqrt{-Q}(\zeta)) \pm \left(\sqrt{pq}\sec\_{\mu}(\sqrt{-Q}(\zeta))\right))}{2c}\right)), \end{split} \tag{54}$$

$$\begin{split} u\_{35}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-b^2+4ac)}}b + 2c\sqrt{\frac{m\_2}{m\_3(b^2-4ac)}} \times \\ &\left(-\frac{b}{2c} - \frac{\sqrt{-Q}(\cot\_\mu(\sqrt{-Q}(\zeta)) \pm \left(\sqrt{pq}\csc\_\mu(\sqrt{-Q}(\zeta))\right))}{2c}\right)), \end{split} \tag{55}$$

$$\begin{split} u\_{36}(x,t) &= e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3(b^2-4ac)}}\, b + 2c\sqrt{\frac{m\_2}{m\_3(b^2-4ac)}} \times \\ &\left( -\frac{b}{2c} + \frac{\sqrt{-Q}\left(\tan\_{\mu}\left(1/4\sqrt{-Q}\,\zeta\right) - \cot\_{\mu}\left(1/4\sqrt{-Q}\,\zeta\right)\right)}{4c} \right) ). \end{split} \tag{56}$$

**Family 9.** Equations (17) and (10), and the related general solutions obtained from Equation (9) are all applied in the case when *Q* is higher than zero and *a*, *b*, and *c* are all non-zero, leading to a specific family of optical soliton solutions, which may be stated as follows:

$$\begin{split} u\_{37}(x,t) &= e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3(b^2-4ac)}}\, b \\ &+ 2c\sqrt{\frac{m\_2}{m\_3(b^2-4ac)}} \left( -\frac{b}{2c} - \frac{\sqrt{Q}\tanh\_{\mu}\left(1/2\sqrt{Q}\,\zeta\right)}{2c} \right) ), \end{split} \tag{57}$$

$$\begin{split} u\_{38}(x, t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-b^2 + 4ac)}} b \\ &+ 2c\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \left( -\frac{b}{2c} - \frac{\sqrt{Q}\coth\_{\mu}\left(1/2\sqrt{Q}(\zeta)\right)}{2c} \right) ), \end{split} \tag{58}$$

$$\begin{split} u\_{39}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_{2}}{m\_{3}(-b^{2}+4\,ac)}}b + 2\,c\sqrt{\frac{m\_{2}}{m\_{3}(b^{2}-4\,ac)}} \times \\ & \left(-\frac{b}{2c} - \frac{\sqrt{Q}\left(\tanh\_{\mu}\left(\sqrt{Q}(\zeta)\right) \pm \left(\sqrt{pq}\mathrm{sech}\_{\mu}\left(\sqrt{Q}(\zeta)\right)\right)\right)}{2c}\right)), \end{split} \tag{59}$$

$$\begin{split} u\_{40}(x, t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-b^2 + 4ac)}} b + 2c\sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{Q} \left(\coth\_{\mu} \left( \sqrt{Q}(\zeta) \right) \pm \left( \sqrt{pq}\, \text{csch}\_{\mu} \left( \sqrt{Q}(\zeta) \right) \right) \right)}{2c} \right)), \end{split} \tag{60}$$

and

$$\begin{split} u\_{41}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_{2}}{m\_{3}(-b^{2}+4ac)}}b + 2c\sqrt{\frac{m\_{2}}{m\_{3}(b^{2}-4ac)}} \times \\ &\left(-\frac{b}{2c} - \frac{\sqrt{Q}(\tanh\_{\mu}(1/4\sqrt{Q}(\zeta)) - \coth\_{\mu}(1/4\sqrt{Q}(\zeta)))}{4c}\right)).\end{split} \tag{61}$$
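
Throughout these families, tan*μ*, sec*μ*, tanh*μ*, sech*μ*, and related symbols denote generalized (*μ*-deformed) trigonometric and hyperbolic functions involving the deformation parameters *p* and *q*. As a minimal numerical sketch (the definitions below, sinh*μ*(*x*) = (*pμ^x* − *qμ*^−*x*)/2 and cosh*μ*(*x*) = (*pμ^x* + *qμ*^−*x*)/2, are the ones commonly used in EDAM-type papers and are assumed here, since they are not restated in this excerpt), the following snippet checks the identity cosh*μ*² − sinh*μ*² = *pq*, which is the source of the √*pq* factors accompanying sech*μ* and csch*μ* in the solutions above:

```python
import math

# Assumed mu-deformed hyperbolic functions (standard in EDAM-type papers;
# not restated in this excerpt):
def sinh_mu(x, mu, p, q):
    return (p * mu**x - q * mu**(-x)) / 2

def cosh_mu(x, mu, p, q):
    return (p * mu**x + q * mu**(-x)) / 2

# Verify cosh_mu^2 - sinh_mu^2 = p*q for arbitrary sample values.
x, mu, p, q = 0.7, 3.0, 2.0, 0.5
identity = cosh_mu(x, mu, p, q)**2 - sinh_mu(x, mu, p, q)**2
print(abs(identity - p * q) < 1e-12)  # True
```

For *μ* = *e* and *p* = *q* = 1 these reduce to the ordinary hyperbolic functions, and the identity becomes the familiar cosh² − sinh² = 1.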

**Family 10.** Equations (17) and (10), and the related general solutions obtained from Equation (9) are used to provide a particular family of optical soliton solutions where the product *ac* is larger than zero and *b* is equal to zero. These solutions are represented as follows:

$$u\_{42}(x,t) = e^{i\theta} (\sqrt{\frac{-m\_2}{m\_3}} \tan\_{\mu} (\sqrt{ac}(\zeta))),\tag{62}$$

$$u\_{43}(x,t) = e^{i\theta} (-\sqrt{\frac{-m\_2}{m\_3}} \cot\_{\mu} \left(\sqrt{ac}(\zeta)\right)),\tag{63}$$

$$u\_{44}(x,t) = e^{i\theta} (\sqrt{\frac{-m\_2}{m\_3}} \left( \tan\_{\mu} \left( 2\sqrt{ac}(\zeta) \right) \pm \left( \sqrt{pq} \sec\_{\mu} \left( 2\sqrt{ac}(\zeta) \right) \right) \right)),\tag{64}$$

$$u\_{45}(x,t) = e^{i\theta} (-\sqrt{\frac{-m\_2}{m\_3}} \left( \cot\_{\mu} \left( 2\sqrt{ac}(\zeta) \right) \pm \left( \sqrt{pq} \csc\_{\mu} \left( 2\sqrt{ac}(\zeta) \right) \right) \right)),\tag{65}$$

$$u\_{46}(x,t) = e^{i\theta} \left( \sqrt{\frac{-m\_2}{m\_3}} \left( \tan\_{\mu} \left( 1/2 \sqrt{ac}(\zeta) \right) - \cot\_{\mu} \left( 1/2 \sqrt{ac}(\zeta) \right) \right) \right). \tag{66}$$

**Family 11.** Equations (17) and (10), and the related general solutions obtained from Equation (9) are used to provide a particular family of optical soliton solutions where the product *ac* is less than zero and *b* is equal to zero. These solutions are represented as follows:

$$u\_{47}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}} \tanh\_{\mu} \left(\sqrt{-ac}(\zeta)\right)),\tag{67}$$

$$u\_{48}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}}\coth\_{\mu}\left(\sqrt{-ac}(\zeta)\right)),\tag{68}$$

$$u\_{49}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}} \left( \tanh\_{\mu} \left( 2\sqrt{-ac} (\zeta) \right) \pm \left( i\sqrt{pq} \mathrm{sech}\_{\mu} \left( 2\sqrt{-ac} (\zeta) \right) \right) \right)), \tag{69}$$

$$u\_{50}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}} \left( \coth\_{\mu} \left( 2\sqrt{-ac}(\zeta) \right) \pm \left( \sqrt{pq} \text{csch}\_{\mu} \left( 2\sqrt{-ac}(\zeta) \right) \right) \right)), \tag{70}$$

and

$$u\_{51}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}} \left( \tanh\_{\mu} \left( 1/2\sqrt{-ac}(\zeta) \right) + \coth\_{\mu} \left( 1/2\sqrt{-ac}(\zeta) \right) \right)). \tag{71}$$

**Family 12.** The use of Equations (17) and (10), and the related general solutions obtained from Equation (9) results in a different family of optical soliton solutions in the situation when *c* is equal to *a* and *b* is equal to zero. These solutions are represented as follows:

$$u\_{52}(x,t) = e^{i\theta} (\sqrt{\frac{-m\_2}{m\_3}} \tan\_{\mu}(a(\zeta))),\tag{72}$$

$$u\_{53}(x,t) = e^{i\theta} (-\sqrt{\frac{-m\_2}{m\_3}} \cot\_{\mu}(a(\zeta))),\tag{73}$$

$$u\_{54}(x,t) = e^{i\theta} (\sqrt{\frac{-m\_2}{m\_3}} \left( \tan\_{\mu}(2\, a(\zeta)) \pm \left( \sqrt{pq} \sec\_{\mu}(2\, a(\zeta)) \right) \right)),\tag{74}$$

$$u\_{55}(x,t) = e^{i\theta} \left( \sqrt{\frac{-m\_2}{m\_3}} (-\cot\_{\mu}(2\, a(\zeta)) \mp \left( \sqrt{pq} \csc\_{\mu}(2\, a(\zeta)) \right)) \right),\tag{75}$$

$$u\_{56}(x,t) = e^{i\theta} \left( \sqrt{\frac{-m\_2}{m\_3}} \left( 1/2 \tan\_{\mu}(1/2 \, a(\zeta)) - 1/2 \cot\_{\mu}(1/2 \, a(\zeta)) \right) \right). \tag{76}$$

**Family 13.** The use of Equations (17) and (10), and the related general solutions obtained from Equation (9) results in a different family of optical soliton solutions in the situation when *c* is equal to −*a* and *b* is equal to zero. These solutions are represented as follows:

$$u\_{57}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}} \tanh\_{\mu}(a(\zeta))),\tag{77}$$

$$u\_{58}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}}\coth\_{\mu}(a(\zeta))),\tag{78}$$

$$u\_{59}(x,t) = e^{i\theta} (\sqrt{\frac{m\_2}{m\_3}} (-\tanh\_{\mu}(2\, a(\zeta)) \mp (i\sqrt{pq}\text{sech}\_{\mu}(2\, a(\zeta))))),\tag{79}$$

$$u\_{60}(x,t) = e^{i\theta} (\sqrt{\frac{m\_2}{m\_3}} (-\coth\_{\mu}(2\, a(\zeta)) \mp \left(\sqrt{pq}\text{csch}\_{\mu}(2\, a(\zeta))\right))),\tag{80}$$

and

$$u\_{61}(x,t) = e^{i\theta} \left( \sqrt{\frac{m\_2}{m\_3}} \left( -1/2 \tanh\_{\mu}(1/2 \, a(\zeta)) - 1/2 \coth\_{\mu}(1/2 \, a(\zeta)) \right) \right). \tag{81}$$

**Family 14.** Equations (17) and (10), and the equivalent general solutions obtained from Equation (9) when *a* is equal to zero, *b* is not equal to zero, and *c* is not equal to zero, produce a particular family of optical soliton solutions, which may be written as follows:

$$u\_{62}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}} - 2\sqrt{\frac{m\_2}{m\_3}}p\left(\cosh\_{\mu}(b(\zeta)) - \sinh\_{\mu}(b(\zeta)) + p\right)^{-1}),\tag{82}$$

and

$$u\_{63}(\mathbf{x},t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}} - 2\sqrt{\frac{m\_2}{m\_3}} \frac{\cosh\_{\mu}(b(\zeta)) + \sinh\_{\mu}(b(\zeta))}{\cosh\_{\mu}(b(\zeta)) + \sinh\_{\mu}(b(\zeta)) + q}).\tag{83}$$

**Family 15.** Equations (17) and (10), and the associated general solutions derived from Equation (9) produce a particular set of optical soliton solutions in the case where *b* is equal to *ν*, *c* is equal to *nν* (where *n* is a non-zero value), and *a* is equal to zero. These solutions are expressed as follows:

$$u\_{64}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}} + 2\, n\sqrt{\frac{m\_2}{m\_3}} p\mu^{\nu\left(\zeta\right)} \left(p - nq\mu^{\nu\left(\zeta\right)}\right)^{-1}),\tag{84}$$

$$\text{where } \zeta = \frac{\sqrt{2}}{\ln(\mu)} \sqrt{\frac{m\_2}{m\_1(-b^2 + 4ac)}} (\frac{x^{\beta}}{\Gamma(\beta + 1)} - \frac{c\_1 t^{\alpha}}{\Gamma(\alpha+1)}), \quad \text{and} \quad \theta = -\frac{kx^{\beta}}{\Gamma(\beta + 1)} + \frac{\omega t^{\alpha}}{\Gamma(\alpha+1)} + \theta\_0.$$
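
The traveling-wave variable ζ above involves Gamma functions of the fractional orders β and α. A small numerical sketch follows; the aggregate prefactor √2/ln(μ) · √(m₂/(m₁(−b² + 4ac))) is folded into a single `scale` argument, which is an illustrative stand-in rather than a quantity computed from the model:

```python
import math

# Numerical sketch of the fractional traveling-wave variable zeta.
# `scale` stands in for sqrt(2)/ln(mu) * sqrt(m2/(m1*(-b^2+4ac))).
def zeta(x, t, beta, alpha, c1, scale):
    return scale * (x**beta / math.gamma(beta + 1)
                    - c1 * t**alpha / math.gamma(alpha + 1))

# For beta = alpha = 1 the variable reduces to the classical traveling-wave
# coordinate scale*(x - c1*t), since Gamma(2) = 1.
print(zeta(x=2.0, t=1.0, beta=1.0, alpha=1.0, c1=0.5, scale=1.0))  # 1.5
```

This reduction to the integer-order case is a quick sanity check when experimenting with the fractional orders.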

#### *3.2. Application of the r+mEDAM*

Now we wish to address (11) using the *r*+mEDAM. Putting *m* = 1 in (8) gives the following series-based solution for (11):

$$\mathcal{U}(\zeta) = \sum\_{m=-1}^{1} C\_{m}(r + G(\zeta))^{m} = C\_{-1}(r + G(\zeta))^{-1} + C\_{0} + C\_{1}(r + G(\zeta))^{1}.\tag{85}$$

The coefficients *C*−1, *C*0, and *C*1 are unknown parameters. Substituting Equation (85) into Equation (11) produces a system of nonlinear algebraic equations. We solve this system with the Maple program, which yields the following two sets of solutions:
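
Equation (11) itself is not reproduced in this excerpt, so the SymPy sketch below illustrates the mechanics on the stand-in ODE *U''* + *m*₂*U* + *m*₃*U*³ = 0 (an assumption used only for demonstration): substituting the ansatz (85) together with the Riccati relation *G'* = *a* + *bG* + *cG*² and clearing denominators yields a polynomial in *G* whose coefficients must all vanish, which is precisely the kind of nonlinear algebraic system handed to Maple:

```python
import sympy as sp

# Sketch of the mEDAM bookkeeping.  The stand-in ODE U'' + m2*U + m3*U**3 = 0
# is an assumption; the actual Equation (11) is not reproduced here.
z, g = sp.symbols('zeta g')
a, b, c, r, m2, m3 = sp.symbols('a b c r m2 m3')
C1, C0, Cm1 = sp.symbols('C1 C0 Cm1')
G = sp.Function('G')

# Ansatz (85): U = C_{-1}(r+G)^{-1} + C_0 + C_1(r+G)
U = Cm1/(r + G(z)) + C0 + C1*(r + G(z))
riccati = a + b*G(z) + c*G(z)**2      # the relation G' = a + b*G + c*G^2

expr = sp.diff(U, z, 2)
expr = expr.subs(sp.Derivative(G(z), (z, 2)), sp.diff(riccati, z))
expr = expr.subs(sp.Derivative(G(z), z), riccati)
expr = expr + m2*U + m3*U**3

# Clear the (r+G)^{-3} denominators and collect powers of G: every
# coefficient must vanish, giving the nonlinear algebraic system to be
# solved for C_{-1}, C_0, C_1.
poly = sp.Poly(sp.cancel(expr.subs(G(z), g) * (r + g)**3), g)
system = [sp.Eq(coef, 0) for coef in poly.all_coeffs()]
print(len(system))  # 7 equations, from powers g^6 down to g^0
```

Solving such a system (here with `sp.solve`, in the paper with Maple) is what produces the parameter sets in Cases 1 and 2 below.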

**Case 1**

$$\begin{aligned} C\_{1} &= 0, \quad C\_{-1} = 2\left(a - rb + r^{2}c\right) \sqrt{\frac{m\_{2}}{m\_{3}(b^{2} - 4ac)}},\\ C\_{0} &= \sqrt{-\frac{m\_{2}}{m\_{3}(-b^{2} + 4ac)}}(b - 2cr), \quad \lambda = \frac{\sqrt{2}}{\ln(\mu)} \sqrt{\frac{m\_{2}}{m\_{1}(-b^{2} + 4ac)}}. \end{aligned} \tag{86}$$

**Case 2**

$$\begin{split} C\_{1} &= 2\, c \sqrt{\frac{m\_{2}}{m\_{3}(b^{2} - 4ac)}}, \quad C\_{-1} = 0, \quad C\_{0} = -\sqrt{-\frac{m\_{2}}{m\_{3}(-b^{2} + 4ac)}} (b - 2\, cr), \\ \lambda &= \frac{\sqrt{2}}{\ln(\mu)} \sqrt{\frac{m\_{2}}{m\_{1}(-b^{2} + 4ac)}}. \end{split} \tag{87}$$

In light of Case 1, we discover the families of optical soliton solutions shown below:

**Family 16.** Equations (85) and (10), and the related general solutions obtained from Equation (9) together produce a particular family of optical soliton solutions in the case when *Q* is less than zero and *a*, *b*, and *c* are all non-zero:

$$\begin{split} u\_{65}(x,t) &= e^{i\theta} (2 \left( a - rb + r^2 c \right) \sqrt{\frac{m\_2}{m\_3 (b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} + \frac{\sqrt{-Q} \tan\_{\mu} \left( 1/2 \sqrt{-Q} (\zeta) \right)}{2c} \right)^{-1} + \sqrt{-\frac{m\_2}{m\_3 (-b^2 + 4ac)}} (b - 2cr) ), \end{split} \tag{88}$$

$$\begin{split} u\_{66}(x,t) &= e^{i\theta} (2 \left( a - rb + r^2 c \right) \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{-Q} \cot\_{\mu} \left( 1/2 \sqrt{-Q} \left( \zeta \right) \right)}{2c} \right)^{-1} + \sqrt{-\frac{m\_2}{m\_3(-b^2 + 4ac)}} (b - 2cr) ), \end{split} \tag{89}$$

$$\begin{split} u\_{67}(x,t) &= e^{i\theta} (2\left(a - rb + r^2c\right) \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left(-\frac{b}{2c} + \frac{\sqrt{-Q}\left(\tan\_{\mu}\left(\sqrt{-Q}(\zeta)\right) \pm \left(\sqrt{pq}\sec\_{\mu}\left(\sqrt{-Q}(\zeta)\right)\right)\right)}{2c}\right)^{-1} \\ &+ \sqrt{-\frac{m\_2}{m\_3(-b^2 + 4ac)}}(b - 2cr)), \end{split} \tag{90}$$

$$\begin{split} u\_{68}(x,t) &= e^{i\theta} (2\left(a - rb + r^2c\right) \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left(-\frac{b}{2c} - \frac{\sqrt{-Q}\left(\cot\_{\mu}\left(\sqrt{-Q}(\zeta)\right) \pm \left(\sqrt{pq}\csc\_{\mu}\left(\sqrt{-Q}(\zeta)\right)\right)\right)}{2c}\right)^{-1} \\ &+ \sqrt{-\frac{m\_2}{m\_3(-b^2 + 4ac)}}(b - 2cr)), \end{split} \tag{91}$$

$$\begin{split} u\_{69}(x,t) &= e^{i\theta} (2\left(a - rb + r^2c\right) \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left(-\frac{b}{2c} + \frac{\sqrt{-Q}\left(\tan\_{\mu}\left(1/4\sqrt{-Q}(\zeta)\right) - \cot\_{\mu}\left(1/4\sqrt{-Q}(\zeta)\right)\right)}{4c}\right)^{-1} \\ &+ \sqrt{-\frac{m\_2}{m\_3(-b^2 + 4ac)}}(b - 2cr)). \end{split} \tag{92}$$

**Family 17.** Equations (85) and (10), and the related general solutions obtained from Equation (9) together produce a particular family of optical soliton solutions in the case when *Q* is greater than zero and *a*, *b*, and *c* are all non-zero:

$$\begin{split} u\_{70}(x, t) &= e^{i\theta} (2 \left( a - rb + r^2 c \right) \sqrt{\frac{m\_2}{m\_3 (b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{Q} \tanh\_{\mu} \left( 1/2 \sqrt{Q} (\zeta) \right)}{2c} \right)^{-1} + \sqrt{-\frac{m\_2}{m\_3 (-b^2 + 4ac)}} (b - 2cr) ), \end{split} \tag{93}$$

$$\begin{split} u\_{71}(x,t) &= e^{i\theta} (2 \left( a - rb + r^2 c \right) \sqrt{\frac{m\_2}{m\_3 (b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{Q} \coth\_{\mu} \left( 1/2 \sqrt{Q} (\zeta) \right)}{2c} \right)^{-1} + \sqrt{-\frac{m\_2}{m\_3 (-b^2 + 4ac)}} (b - 2cr) ), \end{split} \tag{94}$$

$$\begin{split} u\_{72}(x,t) &= e^{i\theta} (2\left(a - rb + r^2c\right) \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{Q}\left(\tanh\_{\mu}\left(\sqrt{Q}(\zeta)\right) \pm \left(\sqrt{pq}\mathrm{sech}\_{\mu}\left(\sqrt{Q}(\zeta)\right)\right)\right)}{2c} \right)^{-1} \\ &+ \sqrt{-\frac{m\_2}{m\_3(-b^2 + 4ac)}}(b - 2cr)),\end{split} \tag{95}$$

$$\begin{split} u\_{73}(x,t) &= e^{i\theta} (2\left(a - rb + r^2c\right) \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left(-\frac{b}{2c} - \frac{\sqrt{Q}\left(\coth\_{\mu}\left(\sqrt{Q}(\zeta)\right) \pm \left(\sqrt{pq}\operatorname{csch}\_{\mu}\left(\sqrt{Q}(\zeta)\right)\right)\right)}{2c}\right)^{-1} \\ &+ \sqrt{-\frac{m\_2}{m\_3(-b^2 + 4ac)}} (b - 2cr)), \end{split} \tag{96}$$

$$\begin{split} u\_{74}(x,t) &= e^{i\theta} (2\left( a - rb + r^2 c \right) \sqrt{\frac{m\_2}{m\_3(b^2 - 4ac)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{Q} \left( \tanh\_{\mu} \left( 1/4 \sqrt{Q}(\zeta) \right) - \coth\_{\mu} \left( 1/4 \sqrt{Q}(\zeta) \right) \right)}{4c} \right)^{-1} \\ &+ \sqrt{-\frac{m\_2}{m\_3(-b^2 + 4ac)}} (b - 2\, cr) ).\end{split} \tag{97}$$

**Family 18.** Equations (85) and (10), and the related general solutions obtained from Equation (9) result in a specific family of optical soliton solutions where the product *ac* is larger than zero and *b* is equal to zero. These solutions are represented as follows:

$$u\_{75}(x,t) = e^{i\theta} \left( (1 + \frac{cr^2}{a}) \sqrt{\frac{-m\_2}{m\_3}} \left( \tan\_{\mu} \left( \sqrt{ac} \left( \zeta \right) \right) \right)^{-1} - \sqrt{-\frac{m\_2c}{m\_3a}} r \right),\tag{98}$$

$$u\_{76}(x,t) = e^{i\theta} \left(-(1+\frac{cr^2}{a})\sqrt{\frac{-m\_2}{m\_3}}(\cot\_\mu(\sqrt{ac}(\zeta)))^{-1} - \sqrt{-\frac{m\_2c}{m\_3a}}r\right),\tag{99}$$

$$\begin{split} u\_{77}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2c}{m\_3a}}r \\ &+ \left(1+\frac{cr^2}{a}\right)\sqrt{\frac{-m\_2}{m\_3}} \left(\tan\_{\mu}\left(2\sqrt{ac}(\zeta)\right) \pm \left(\sqrt{pq}\sec\_{\mu}\left(2\sqrt{ac}(\zeta)\right)\right)\right)^{-1}), \end{split} \tag{100}$$

$$\begin{split} u\_{78}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2c}{m\_3a}}r \\ &- (1+\frac{cr^2}{a})\sqrt{\frac{-m\_2}{m\_3}} \left( \cot\_{\mu} \left( 2\sqrt{ac}(\zeta) \right) \pm \left( \sqrt{pq} \csc\_{\mu} \left( 2\sqrt{ac}(\zeta) \right) \right) \right)^{-1} ), \end{split} \tag{101}$$

and

$$\begin{split} u\_{79}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2c}{m\_3a}}r \\ &+ 2(1+\frac{cr^2}{a})\sqrt{\frac{-m\_2}{m\_3}} \left(\tan\_{\mu}\left(1/2\sqrt{ac}(\zeta)\right) - \cot\_{\mu}\left(1/2\sqrt{ac}(\zeta)\right)\right)^{-1}). \end{split} \tag{102}$$

**Family 19.** Equations (85) and (10), and the related general solutions obtained from Equation (9) result in a specific family of optical soliton solutions where *ac* is less than zero and *b* is equal to zero. These solutions are represented as follows:

$$u\_{80}(x,t) = e^{i\theta} (-(1+\frac{cr^2}{a})\sqrt{\frac{m\_2}{m\_3}} \left(\tanh\_{\mu}\left(\sqrt{-ac}(\zeta)\right)\right)^{-1} - \sqrt{-\frac{m\_2c}{m\_3a}}r),\tag{103}$$

$$u\_{81}(x,t) = e^{i\theta} (-(1+\frac{cr^2}{a})\sqrt{\frac{m\_2}{m\_3}} \left(\coth\_{\mu}\left(\sqrt{-ac}(\zeta)\right)\right)^{-1} - \sqrt{-\frac{m\_2c}{m\_3a}}r),\tag{104}$$

$$\begin{split} u\_{82}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2c}{m\_3a}}r \\ &- (1+\frac{cr^2}{a})\sqrt{\frac{m\_2}{m\_3}} \Big( \tanh\_{\mu}\Big(2\sqrt{-ac}(\zeta)\Big) \pm \Big(i\sqrt{pq}\text{sech}\_{\mu}\Big(2\sqrt{-ac}(\zeta)\Big)\Big)\Big)^{-1}), \end{split} \tag{105}$$

$$\begin{split} u\_{83}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2c}{m\_3a}}r \\ &- (1+\frac{cr^2}{a})\sqrt{\frac{m\_2}{m\_3}} \Big(\coth\_{\mu}\Big(2\sqrt{-ac}(\zeta)\Big) \pm \Big(\sqrt{pq}\,\text{csch}\_{\mu}\Big(2\sqrt{-ac}(\zeta)\Big)\Big)\Big)^{-1}), \end{split} \tag{106}$$

and

$$\begin{split} u\_{84}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2c}{m\_3a}}r \\ &- 2(1+\frac{cr^2}{a})\sqrt{\frac{m\_2}{m\_3}} \left( \tanh\_{\mu} \left( 1/2\sqrt{-ac}(\zeta) \right) + \coth\_{\mu} \left( 1/2\sqrt{-ac}(\zeta) \right) \right)^{-1} ). \end{split} \tag{107}$$

**Family 20.** Equations (85) and (10), and the related general solutions obtained from Equation (9) are used to produce a unique family of optical soliton solutions in the situation when *c* is equal to *a* and *b* is equal to zero. These solutions are written as follows:

$$u\_{85}(x,t) = e^{i\theta} \left( (1+r^2)\sqrt{\frac{-m\_2}{m\_3}} \left( \tan\_{\mu}(a(\zeta)) \right)^{-1} - \sqrt{-\frac{m\_2}{m\_3}} r \right),\tag{108}$$

$$u\_{86}(x,t) = e^{i\theta} \left(-(1+r^2)\sqrt{\frac{-m\_2}{m\_3}} \left(\cot\_{\mu}(a(\zeta))\right)^{-1} - \sqrt{-\frac{m\_2}{m\_3}}r\right),\tag{109}$$

$$\begin{aligned} u\_{87}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3}}r \\ &+ (1+r^2)\sqrt{\frac{-m\_2}{m\_3}} \left(\tan\_{\mu}(2\, a(\zeta)) \pm \left(\sqrt{pq}\,\mathrm{sec}\_{\mu}(2\, a(\zeta))\right)\right)^{-1}), \end{aligned} \tag{110}$$

$$\begin{split} u\_{88}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3}}r \\ &+ \left(1+r^2\right)\sqrt{\frac{-m\_2}{m\_3}} \left(-\cot\_{\mu}(2\,a(\zeta)) \mp \left(\sqrt{pq}\,\mathrm{csc}\_{\mu}(2\,a(\zeta))\right)\right)^{-1}), \end{split} \tag{111}$$

$$\begin{split} u\_{89}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3}}r \\ &+ \left(1+r^2\right)\sqrt{\frac{-m\_2}{m\_3}} \left(1/2\tan\_{\mu}(1/2\,a(\zeta)) - 1/2\cot\_{\mu}(1/2\,a(\zeta))\right)^{-1}). \end{split} \tag{112}$$

**Family 21.** Equations (85) and (10), and the related general solutions obtained from Equation (9) are used to produce a unique family of optical soliton solutions in the situation when *c* is equal to −*a* and *b* is equal to zero. These solutions are written as follows:

$$u\_{90}(x,t) = e^{i\theta} \left(-(1-r^2)\sqrt{\frac{m\_2}{m\_3}} \left(\tanh\_{\mu}(a(\zeta))\right)^{-1} + \sqrt{\frac{m\_2}{m\_3}}r\right),\tag{113}$$

$$u\_{91}(x,t) = e^{i\theta} \left(-(1-r^2)\sqrt{\frac{m\_2}{m\_3}} \left(\coth\_{\mu}(a(\zeta))\right)^{-1} + \sqrt{\frac{m\_2}{m\_3}}r\right),\tag{114}$$

$$\begin{split} u\_{92}(x,t) &= e^{i\theta} (\sqrt{\frac{m\_2}{m\_3}} r \\ &+ (1 - r^2) \sqrt{\frac{m\_2}{m\_3}} \left( -\tanh\_{\mu}(2\, a(\zeta)) \mp \left( i\sqrt{pq} \text{sech}\_{\mu}(2\, a(\zeta)) \right) \right)^{-1} ), \end{split} \tag{115}$$

$$\begin{aligned} u\_{93}(x, t) &= e^{i\theta} (\sqrt{\frac{m\_2}{m\_3}} r \\ &+ (1 - r^2) \sqrt{\frac{m\_2}{m\_3}} (-\coth\_{\mu}(2 \, a(\zeta)) \mp \left(\sqrt{pq} \text{csch}\_{\mu}(2 \, a(\zeta))\right))^{-1}), \end{aligned} \tag{116}$$

and

$$\begin{split} u\_{94}(x,t) &= e^{i\theta} (\sqrt{\frac{m\_2}{m\_3}} r \\ &+ (1 - r^2) \sqrt{\frac{m\_2}{m\_3}} \left( -1/2 \tanh\_{\mu}(1/2 \, a(\zeta)) - 1/2 \coth\_{\mu}(1/2 \, a(\zeta)) \right)^{-1} ). \end{split} \tag{117}$$

**Family 22.** Equations (85) and (10), and the associated general solutions derived from Equation (9) produce a specific family of optical soliton solutions when *b* is equal to *ν*, *a* is equal to *nν* (where *n* is a non-zero value), and *c* is equal to zero. These solutions are expressed as follows:

$$u\_{95}(\mathbf{x},t) = e^{i\theta} (2\left(n-r\right)\sqrt{\frac{m\_2}{m\_3}} \left(\mu^{\nu\left(\zeta\right)} - n\right)^{-1} + \sqrt{\frac{m\_2}{m\_3}}).\tag{118}$$

**Family 23.** Equations (85) and (10), and the associated general solutions derived from Equation (9) are used to produce a specific family of optical soliton solutions in the case where *a* is equal to zero, *b* is not equal to zero, and *c* is not equal to zero. These solutions are expressed as follows:

$$\begin{split} u\_{96}(x,t) &= e^{i\theta} (\sqrt{-\frac{m\_2}{m\_3 b^2}} (b - 2\, cr) \\ &- 2 \left( -rb + r^2 c \right) \sqrt{\frac{m\_2}{m\_3}} \frac{c \left( \cosh\_{\mu} (b(\zeta)) - \sinh\_{\mu} (b(\zeta)) + p \right)}{pb^2} ), \end{split} \tag{119}$$

and

$$\begin{split} u\_{97}(x,t) &= e^{i\theta} (\sqrt{-\frac{m\_2}{m\_3b^2}} (b-2\, cr) \\ &- 2 \left( -rb + r^2 c \right) \sqrt{\frac{m\_2}{m\_3}} \frac{c \left( \cosh\_{\mu} (b(\zeta)) + \sinh\_{\mu} (b(\zeta)) + q \right)}{b \left( \cosh\_{\mu} (b(\zeta)) + \sinh\_{\mu} (b(\zeta)) \right)} ). \end{split} \tag{120}$$

**Family 24.** Equations (85) and (10), and the associated general solutions derived from Equation (9) produce a specific family of optical soliton solutions when *b* is equal to *ν*, *c* is equal to *nν* (where *n* is a non-zero value), and *a* is equal to zero. These solutions are expressed as follows:

$$u\_{98}(x,t) = e^{i\theta} (2\left(-r + r^2 n\right) \sqrt{\frac{m\_2}{m\_3}} \frac{\left(p - nq\mu^{\nu\left(\zeta\right)}\right)}{p\left(\mu^{\nu\left(\zeta\right)}\right)} + \sqrt{\frac{m\_2}{m\_3}} (1 - 2nr)),\tag{121}$$
 
$$\text{where } \zeta = \frac{\sqrt{2}}{\ln(\mu)} \sqrt{\frac{m\_2}{m\_1(-b^2 + 4ac)}} (\frac{x^{\beta}}{\Gamma(\beta+1)} - \frac{c\_1 t^{\alpha}}{\Gamma(\alpha+1)}), \quad \text{and} \quad \theta = -\frac{kx^{\beta}}{\Gamma(\beta+1)} + \frac{\omega t^{\alpha}}{\Gamma(\alpha+1)} + \theta\_0.$$

Now, assuming Case 2, we obtain the subsequent families of optical soliton solutions:

**Family 25.** Equations (85) and (10), and the related general solutions deriving from Equation (9) result in a specific family of optical soliton solutions in the situation when *Q* is less than zero and *a*, *b*, and *c* are all non-zero:

$$\begin{split} u\_{99}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-Q)}}(b-2\, cr) \\ &+ 2\, c\sqrt{\frac{m\_2}{m\_3Q}} \left(-\frac{b}{2c} + \frac{\sqrt{-Q}\tan\_{\mu}\left(1/2\sqrt{-Q}(\zeta)\right)}{2c}\right)), \end{split} \tag{122}$$

$$\begin{split} u\_{100}(x, t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3 \left(-Q\right)}} (b - 2\, cr) \\ &+ 2\, c \sqrt{\frac{m\_2}{m\_3 Q}} \left( -\frac{b}{2c} - \frac{\sqrt{-Q} \cot\_{\mu} \left( 1/2\sqrt{-Q} (\zeta) \right)}{2c} \right) ), \end{split} \tag{123}$$

$$\begin{split} u\_{101}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-Q)}}(b-2\, cr) + 2\, c\sqrt{\frac{m\_2}{m\_3Q}} \times \\ &\left(-\frac{b}{2c} + \frac{\sqrt{-Q}\left(\tan\_{\mu}\left(\sqrt{-Q}(\zeta)\right) \pm \left(\sqrt{pq}\sec\_{\mu}\left(\sqrt{-Q}(\zeta)\right)\right)\right)}{2c}\right)), \end{split} \tag{124}$$

$$\begin{split} u\_{102}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-Q)}}(b-2\, cr) + 2\, c\sqrt{\frac{m\_2}{m\_3Q}} \times \\ &\left(-\frac{b}{2c} - \frac{\sqrt{-Q}(\cot\_\mu(\sqrt{-Q}(\zeta)) \pm \left(\sqrt{pq}\operatorname{csc}\_\mu(\sqrt{-Q}(\zeta))\right))}{2c}\right)), \end{split} \tag{125}$$

$$\begin{split} u\_{103}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-Q)}}(b-2\,cr) + 2\,c\sqrt{\frac{m\_2}{m\_3(Q)}} \times \\ &\left(-\frac{b}{2c} + \frac{\sqrt{-Q}\left(\tan\_{\mu}\left(1/4\sqrt{-Q}(\zeta)\right) - \cot\_{\mu}\left(1/4\sqrt{-Q}(\zeta)\right)\right)}{4c}\right)). \end{split} \tag{126}$$

**Family 26.** Equations (85) and (10), and the related general solutions deriving from Equation (9) result in a specific family of optical soliton solutions in the situation when *Q* is greater than zero and *a*, *b*, and *c* are all non-zero:

$$\begin{split} u\_{104}(x, t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-Q)}} (b - 2\, cr) \\ &+ 2c \sqrt{\frac{m\_2}{m\_3(Q)}} \left( -\frac{b}{2c} - \frac{\sqrt{Q} \tanh\_{\mu} \left( 1/2 \sqrt{Q} (\zeta) \right)}{2c} \right) ), \end{split} \tag{127}$$

$$\begin{split} u\_{105}(x, t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-Q)}} (b - 2\, cr) \\ &+ 2\, c \sqrt{\frac{m\_2}{m\_3(Q)}} \left( -\frac{b}{2c} - \frac{\sqrt{Q} \coth\_{\mu} \left( 1/2\sqrt{Q} (\zeta) \right)}{2c} \right) ), \end{split} \tag{128}$$

$$\begin{split} u\_{106}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-Q)}}(b-2\, cr) + 2\, c\sqrt{\frac{m\_2}{m\_3(Q)}} \times \\ &\left(-\frac{b}{2c} - \frac{\sqrt{Q}\left(\tanh\_{\mu}\left(\sqrt{Q}(\zeta)\right) \pm \left(\sqrt{pq}\operatorname{sech}\_{\mu}\left(\sqrt{Q}(\zeta)\right)\right)\right)}{2c}\right)), \end{split} \tag{129}$$

$$\begin{split} u\_{107}(x, t) &= e^{i\theta} (-\sqrt{-\frac{m\_2}{m\_3(-Q)}} (b - 2\, cr) + 2\, c \sqrt{\frac{m\_2}{m\_3(Q)}} \times \\ &\left( -\frac{b}{2c} - \frac{\sqrt{Q} \left(\coth\_{\mu} \left( \sqrt{Q}(\zeta) \right) \pm \left( \sqrt{pq} \operatorname{csch}\_{\mu} \left( \sqrt{Q}(\zeta) \right) \right) \right)}{2c} \right) ), \end{split} \tag{130}$$

and

$$\begin{split} u\_{108}(x,t) &= e^{i\theta} (-\sqrt{-\frac{m\_{2}}{m\_{3}(-Q)}}(b-2\,cr) + 2\,c\sqrt{\frac{m\_{2}}{m\_{3}(Q)}} \times \\ &\left(-\frac{b}{2c} - \frac{\sqrt{Q}\left(\tanh\_{\mu}\left(1/4\sqrt{Q}(\zeta)\right) - \coth\_{\mu}\left(1/4\sqrt{Q}(\zeta)\right)\right)}{4c}\right)). \end{split} \tag{131}$$

**Family 27.** When the product *ac* is larger than 0 and *b* is equal to 0, the application of Equations (85) and (10), and the associated general solutions obtained from Equation (9) results in a different family of optical soliton solutions, which may be represented as follows:

$$u\_{109}(x,t) = e^{i\theta} (\sqrt{-\frac{m\_2}{m\_3}} \tan\_{\mu} \left(\sqrt{ac}(\zeta)\right) + \sqrt{-\frac{m\_2c}{m\_3a}}r),\tag{132}$$

$$u\_{110}(x,t) = e^{i\theta} \left( -\sqrt{-\frac{m\_2}{m\_3}} \cot\_{\mu} \left( \sqrt{ac}(\zeta) \right) + \sqrt{-\frac{m\_2 c}{m\_3 a}} r \right),\tag{133}$$

$$\begin{split} u\_{111}(x,t) &= e^{i\theta} (\sqrt{-\frac{m\_2}{m\_3}} \left( \tan\_{\mu} \left( 2\sqrt{ac}(\zeta) \right) \pm \left( \sqrt{pq} \sec\_{\mu} \left( 2\sqrt{ac}(\zeta) \right) \right) \right) \\ &+ \sqrt{-\frac{m\_2c}{m\_3a}} r), \end{split} \tag{134}$$

$$\begin{split} u\_{112}(x,t) &= e^{i\theta} (-\sqrt{\frac{-m\_{2}}{m\_{3}}} \left( \cot\_{\mu} \left( 2\sqrt{ac}(\zeta) \right) \pm \left( \sqrt{pq} \csc\_{\mu} \left( 2\sqrt{ac}(\zeta) \right) \right) \right) \\ &+ \sqrt{\frac{-m\_{2}c}{m\_{3}a}} r), \end{split} \tag{135}$$

and

$$\begin{split} u\_{113}(x,t) &= e^{i\theta} (\sqrt{-\frac{m\_2}{4m\_3}} (\tan\_{\mu} \left( 1/2\sqrt{ac}(\zeta) \right) - \cot\_{\mu} \left( 1/2\sqrt{ac}(\zeta) \right)) \\ &+ \sqrt{-\frac{m\_2c}{m\_3a}} r). \end{split} \tag{136}$$

**Family 28.** When the product *ac* is less than 0 and *b* is equal to 0, the application of Equations (85) and (10), and the associated general solutions obtained from Equation (9) results in a different family of optical soliton solutions, which may be represented as follows:

$$u\_{114}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}}\tanh\_{\mu}\left(\sqrt{-ac}(\zeta)\right) + \sqrt{-\frac{m\_2c}{m\_3a}}r),\tag{137}$$

$$u\_{115}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}}\coth\_{\mu}\left(\sqrt{-ac}(\zeta)\right) + \sqrt{-\frac{m\_2c}{m\_3a}}r),\tag{138}$$

$$\begin{split} u\_{116}(\mathbf{x},t) &= e^{i\theta} (\sqrt{-\frac{m\_2c}{m\_3a}}r \\ &- \sqrt{\frac{m\_2}{m\_3}} \left( \tanh\_{\mu} \left( 2\sqrt{-ac}(\zeta) \right) \pm \left( i\sqrt{pq}\text{sech}\_{\mu} \left( 2\sqrt{-ac}(\zeta) \right) \right) \right) ), \end{split} \tag{139}$$

$$\begin{split} u\_{117}(x,t) &= e^{i\theta} (\sqrt{-\frac{m\_2c}{m\_3a}}r \\ &- \sqrt{\frac{m\_2}{m\_3}} \left( \coth\_{\mu} \left( 2\sqrt{-ac}(\zeta) \right) \pm \left( \sqrt{pq}\, \text{csch}\_{\mu} \left( 2\sqrt{-ac}(\zeta) \right) \right) \right) ), \end{split} \tag{140}$$

$$\begin{split} u\_{118}(x,t) &= e^{i\theta} (\sqrt{-\frac{m\_2c}{m\_3a}}r \\ &- \sqrt{\frac{m\_2}{4m\_3}} \left( \tanh\_{\mu} \left( 1/2 \sqrt{-ac} (\zeta) \right) + \coth\_{\mu} \left( 1/2 \sqrt{-ac} (\zeta) \right) \right)). \end{split} \tag{141}$$

**Family 29.** Equations (85) and (10), and the related general solutions obtained from Equation (9) result in a specific family of optical soliton solutions in the case when *c* is equal to *a* and *b* is equal to zero. These solutions are written as follows:

$$u\_{119}(\mathbf{x},t) = e^{i\theta} (\sqrt{\frac{-m\_2}{m\_3}} \tan\_{\mu}(a(\zeta)) + \sqrt{-\frac{m\_2}{m\_3}}r),\tag{142}$$

$$u\_{120}(x,t) = e^{i\theta} (-\sqrt{\frac{-m\_2}{m\_3}}\cot\_{\mu}(a(\zeta)) + \sqrt{-\frac{m\_2}{m\_3}}r),\tag{143}$$

$$u\_{121}(\mathbf{x},t) = e^{i\theta} (\sqrt{\frac{-m\_2}{m\_3}} \left( \tan\_{\mu}(2\, a(\zeta)) \pm \left( \sqrt{pq} \sec\_{\mu}(2\, a(\zeta)) \right) \right) + \sqrt{-\frac{m\_2}{m\_3}}r),\tag{144}$$

$$\begin{split} u\_{122}(x, t) &= e^{i\theta} (\sqrt{\frac{-m\_2}{m\_3}} (-\cot\_{\mu}(2 \, a(\zeta)) \\ &\mp \left( \sqrt{pq} \csc\_{\mu}(2 \, a(\zeta)) \right)) + \sqrt{-\frac{m\_2}{m\_3}} r), \end{split} \tag{145}$$

and

$$\begin{split} u\_{123}(\mathbf{x}, t) &= e^{i\theta} (\sqrt{\frac{-m\_2}{m\_3}} (1/2 \tan\_{\mu}(1/2 \, a(\zeta))) \\ &- 1/2 \cot\_{\mu}(1/2 \, a(\zeta))) + \sqrt{-\frac{m\_2}{m\_3}} r). \end{split} \tag{146}$$

**Family 30.** Equations (85) and (10), and the related general solutions obtained from Equation (9) result in a specific family of optical soliton solutions in the case when *c* is equal to −*a* and *b* is equal to zero. These solutions are written as follows:

$$u\_{124}(x,t) = e^{i\theta} (\sqrt{\frac{m\_2}{m\_3}} \tanh\_{\mu}(a(\zeta)) + \sqrt{\frac{m\_2}{m\_3}}r),\tag{147}$$

$$u\_{125}(x,t) = e^{i\theta} (\sqrt{\frac{m\_2}{m\_3}} \coth\_{\mu}(a(\zeta)) + \sqrt{\frac{m\_2}{m\_3}}r),\tag{148}$$

$$u\_{126}(x,t) = e^{i\theta} \left( \sqrt{\frac{m\_2}{m\_3}} \left( \tanh\_{\mu} \left( 2\, a(\zeta) \right) \pm \left( i\sqrt{pq} \text{sech}\_{\mu} (2\, a(\zeta)) \right) \right) + \sqrt{\frac{m\_2}{m\_3}} r \right),\tag{149}$$

$$\begin{split} u\_{127}(x, t) &= e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}} (-\coth\_{\mu}(2\, a(\zeta)) \\ &\mp (\sqrt{pq}\mathrm{csch}\_{\mu}(2\, a(\zeta)))) + \sqrt{\frac{m\_2}{m\_3}} r), \end{split} \tag{150}$$

$$\begin{split} u\_{128}(x,t) &= e^{i\theta} \left( \sqrt{\frac{m\_2}{m\_3}} (\frac{1}{2} \tanh\_{\mu} \left( \frac{1}{2} a(\zeta) \right) \right. \\ &+ \frac{1}{2} \coth\_{\mu} \left( \frac{1}{2} a(\zeta) \right)) + \sqrt{\frac{m\_2}{m\_3}} r ). \end{split} \tag{151}$$

**Family 31**. Equations (85) and (10), and the related general solutions obtained from Equation (9) are used to provide a particular family of optical soliton solutions where *a* is equal to zero, *b* is not equal to zero, and *c* is not equal to zero. These solutions are represented as follows:

$$u\_{129}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3b^2}}(b-2\,cr) - 2\frac{\sqrt{\frac{m\_2}{m\_3}}p}{\cosh\_{\mu}(b(\zeta)) - \sinh\_{\mu}(b(\zeta)) + p}),\tag{152}$$

and

$$\begin{split} u\_{130}(x,t) &= e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3 b^2}} (b - 2\, cr) \\ &- 2 \frac{\sqrt{\frac{m\_2}{m\_3}} (\cosh\_{\mu}(b(\zeta)) + \sinh\_{\mu}(b(\zeta)))}{\cosh\_{\mu}(b(\zeta)) + \sinh\_{\mu}(b(\zeta)) + q}). \end{split} \tag{153}$$

**Family 32.** Equations (85) and (10), and the associated general solutions derived from Equation (9) produce a specific family of optical soliton solutions when *b* is equal to *ν*, *c* is equal to *nν* (where *n* is a non-zero value), and *a* is equal to zero. These solutions are expressed as follows:

$$u\_{131}(x,t) = e^{i\theta} (-\sqrt{\frac{m\_2}{m\_3}}(1-2\,nr) + 2\, n\sqrt{\frac{m\_2}{m\_3}}\frac{p\mu^{\nu\left(\zeta\right)}}{p-nq\mu^{\nu\left(\zeta\right)}}),\tag{154}$$

$$\text{where } \zeta = \frac{\sqrt{2}}{\ln(\mu)} \sqrt{\frac{m\_2}{m\_1(-b^2 + 4ac)}} (\frac{x^{\beta}}{\Gamma(\beta + 1)} - \frac{c\_1 t^{\alpha}}{\Gamma(\alpha+1)}), \quad \text{and} \quad \theta = -\frac{kx^{\beta}}{\Gamma(\beta + 1)} + \frac{\omega t^{\alpha}}{\Gamma(\alpha+1)} + \theta\_0.$$

#### **4. Discussion and Graphs**

The present study used two improved versions of the EDAM approach, namely the mEDAM and *r*+mEDAM, to successfully construct families of optical soliton solutions for the FPRKLM. These findings advance the study of the FPRKLM and enable a deeper understanding of complex waves in nonlinear optical systems. Our results also confirm the effectiveness of the mEDAM and *r*+mEDAM approaches in obtaining analytical solutions for the FPRKLM. Both techniques offer a systematic approach for solving complex FPDEs and provide explicit formulations for optical soliton solutions.

By assigning different values to the model's parameters, several figures have been plotted to show the wave behavior of the obtained optical solutions. These plots represent the relationship between wave amplitudes and spatial variables, showing the different profiles observed in the solutions. The resulting wave profiles include periodic waves, kink waves, solitary waves, lump waves, and more. The presence of these different wave profiles in the optical soliton solutions of the FPRKLM highlights the rich dynamics of the model. Each profile corresponds to a distinct behavior of the system and provides valuable insight into the underlying physics. Periodic waves indicate the presence of oscillatory motion, kink waves indicate local disturbances or sudden changes in wave behavior, solitary waves represent self-supporting local structures, and lump waves indicate local concentrations of energy.

The relationship between these waveform profiles and the proposed model is attributed to the nonlinear terms present in the equations and the particular shape of the fractional perturbations. These features introduce nonlinearity and complexity into the system, leading to the emergence of various wave phenomena. The mEDAM and *r*+mEDAM techniques provide powerful tools to capture and understand these phenomena, allowing us to study the complex dynamics of the FPRKLM.

**Remark 1.** *Figure 1 indicates a captivating M-shaped periodic wave structure in the optical soliton solution for the FPRKLM. This wave pattern is governed by the nonlinear behaviour of the system and the fractional perturbations it involves. The parameters in the model, such as the GVD coefficient (a*1*), the nonlinearity coefficient (b*1*), the IMD δ, μ*1*, σ, and γ, considerably impact the wave profile. GVD plays a role in the formation of distinctive peaks and troughs in the M-shaped pattern, whereas the nonlinearity coefficient determines soliton intensity and stability. The relationship between the fractional perturbations and σ introduces complexities and modulations, further shaping the M-shaped wave. Furthermore, taking into account the wave velocities (k and ω) permits an analysis of soliton spreading characteristics, determining the speed and phase that influence the M-shaped periodic wave.*

**Figure 1.** A three-dimensional graph of the function *u*<sup>4</sup> that appears in Equation (24) for *a* = 2, *b* = 1, *c* = 2, *μ* = *e*, *k* = 0, *ω* = 1, *ϑ* = 0, *a*<sup>1</sup> = 3, *b*<sup>1</sup> = 3, *c*<sup>1</sup> = 0, *γ* = 2, *δ* = 2, *p* = 3, *q* = 2, *α* = 0.9, *β* = 1.

**Remark 2.** *Figure 2 shows an asymmetric kink wave that was seen in the FPRKLM's optical soliton solutions. These kink waves, which are distinguished by their unique characteristics, are caused by the existence of nonlinearity inside the model. The soliton solution generally experiences quick transitions between stable states at these locations, causing abrupt changes or discontinuities in the wave pattern. The development and behaviour of these asymmetric kink waves are significantly influenced by the precise parameters regulating the FPRKLM, such as the nonlinearity coefficient (b*1*). Understanding the underlying processes and how they interact with the nonlinear dynamics of the model helps us better understand how such kink wave occurrences in optical solitons develop.*

**Figure 2.** A three-dimensional graph of the function *u*<sup>52</sup> that appears in Equation (72) for *a* = 3, *b* = 0, *c* = 3, *μ* = *e*, *k* = 0, *ω* = −1, *ϑ* = 10, *a*<sup>1</sup> = 3, *b*<sup>1</sup> = 3, *c*<sup>1</sup> = −4, *γ* = 2, *δ* = 2, *p* = 3, *q* = 2, *α* = *β* = 1.

**Remark 3.** *A rogue wave seen in the optical soliton solutions of the FPRKLM is shown graphically in Figure 3. The model's innate nonlinearity and dispersiveness can be used to explain the appearance of rogue waves. These waves, which have amplitudes noticeably larger than those of their neighbours, result from the constructive interference of smaller waves that are modulated by nonlinear processes and dispersion effects. Rogue waves display variable amplitudes during propagation as a result of the complex interaction between dispersion (a*1*), nonlinear effects (b*1*), and the underlying dynamics of the FPRKLM system. Understanding the processes that cause rogue waves to form and behave in the FPRKLM might help one better understand the intricate wave phenomena brought on by nonlinear interactions and dispersion in optical solitons.*

**Figure 3.** A three-dimensional graph of the function *u*<sup>72</sup> that appears in Equation (95) for *a* = 3, *b* = 10, *c* = 3, *μ* = *e*, *k* = 1, *ω* = 1, *ϑ* = 0, *r* = 6, *a*<sup>1</sup> = 3, *b*<sup>1</sup> = 3, *c*<sup>1</sup> = 0, *γ* = 2, *δ* = 2, *p* = 3, *q* = 2, *α* = 0.5, *β* = 0.9.

**Remark 4.** *The profile in Figure 4 demonstrates another rogue wave that travels smoothly until it reaches the domain's limit before abruptly changing in amplitude. The occurrence of smooth rogue waves in the optical soliton solutions of the FPRKLM that undergo abrupt changes at certain domain borders may be explained by a combination of components, including critical points, bifurcations, and nonlinear interactions within the system. When the parameters of the FPRKLM system approach their critical values, a transition occurs that results in rapid changes in wave behaviour and the formation of rogue waves. Nonlinear effects and instabilities may aid the development of nonlinear interactions within the system, ultimately leading to abrupt changes in the wave profile. The specifics of these phenomena depend on the system's characteristics and initial conditions and may be further investigated through analysis and numerical simulation.*

**Figure 4.** A three-dimensional graph of the function *u*<sup>131</sup> that appears in Equation (154) for *a* = 3, *b* = 10, *c* = 3, *μ* = 2, *k* = 1, *ω* = 1, *ϑ* = 0, *r* = 6, *a*<sup>1</sup> = 3, *b*<sup>1</sup> = 3, *c*<sup>1</sup> = 0, *γ* = 2, *δ* = 2, *p* = 3, *q* = 2, *α* = 0.5, *β* = 0.9, *ν* = *i*, *n* = 1/12.

#### **5. Conclusions**

In the present investigation, we used the mEDAM and *r*+mEDAM methods to explore optical soliton solutions in the FPRKLM. Our study concentrated on discovering the complex structure of the FPRKLM as well as comprehending the wave behavior of the system via exact analytical formulations obtained by means of these sophisticated methods. The obtained wave profiles, provided graphically, displayed a diverse range of behaviors, including periodic waves, lump waves, kink waves, solitary waves, and more, demonstrating the complex nature of the FPRKLM. Our research revealed the association between these wave profiles and the nonlinear terms and fractional perturbation of the model, showing the efficacy of the mEDAM and *r*+mEDAM methods in studying these kinds of phenomena. The novelty of our study is rooted in improving the comprehension of nonlinear optical systems by offering an outline for future studies and potential applications in this field.

**Author Contributions:** Methodology, H.Y.; Validation, N.H.A.; Formal analysis, N.H.A. and A.M.S.; Investigation, H.Y.; Resources, A.M.S. and R.S.; Data curation, R.S.; Funding acquisition, H.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Deanship of Scientific Research, the Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant No. 3733).

**Data Availability Statement:** Data sharing is not applicable to this article as no new data were created or analyzed in this study.

**Acknowledgments:** The authors really appreciated the kind support from the Deanship of Scientific Research, the Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant No. 3216).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Fractional Order Runge–Kutta Methods**

**Farideh Ghoreishi <sup>1</sup>, Rezvan Ghaffari <sup>1</sup> and Nasser Saad <sup>2,</sup>\***


**Abstract:** This paper presents a new class of fractional order Runge–Kutta (FORK) methods for numerically approximating the solution of fractional differential equations (FDEs). We construct explicit and implicit FORK methods for FDEs by using the Caputo generalized Taylor series formula. Due to the dependence of fractional derivatives on a fixed base point, in the proposed method, we had to modify the right-hand side of the given equation in all steps of the FORK methods. Some coefficients for explicit and implicit FORK schemes are presented. The convergence analysis of the proposed method is also discussed. Numerical experiments are presented to clarify the effectiveness and robustness of the method.

**Keywords:** fractional differential equations; Caputo fractional derivative; convergence analysis; consistency; stability analysis

#### **1. Introduction**

In recent years, the numerical approximation for the solutions of FDEs has attracted increasing attention in many fields of applied sciences and engineering [1–3]. It is common for FDEs to be used in formulating many problems in applied mathematics. Developing numerical methods for fractional differential problems is necessary and important because analytic solutions are usually challenging to obtain. Moreover, it is necessary to develop numerical methods that are highly accurate and easy to use.

It is well known that fractional derivatives have different definitions; the most common and important ones in applications are the Riemann–Liouville and Caputo fractional derivatives. Models describing physical phenomena usually prefer the use of the Caputo derivative. One of the reasons is that the Riemann–Liouville derivative needs initial conditions containing the limit values of the Riemann–Liouville fractional derivative at the origin of time. In contrast, the initial conditions for Caputo derivatives are the same as for integer-order differential equations. Therefore, using the Caputo derivative, there is a clear physical interpretation of the prescribed data; see [1,4,5].

Numerous research papers have been published on numerical methods for FDEs. Many researchers considered the trapezoidal method, predictor-corrector method, extrapolation method, and spectral method [6–16]. Some of these methods discretize fractional derivatives directly. As an example, the L1 formula was created by a piecewise linear interpolation approximation for the integrand function on each small interval [17,18]. In [19], the authors applied quadratic interpolation approximation using three points to approximate the Caputo fractional derivative, while in [20], a technique based on the block-by-block approach was presented. This technique became a common method for equations with integral operators. In [21], Caputo fractional differentiation was approximated by a weighted sum of the integer order derivatives of functions. In [22], several numerical algorithms were proposed to approximate the Caputo fractional derivatives by applying higher-order piecewise interpolation polynomials and the Simpson method to design a higher-order algorithm.
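As an illustration of the L1 idea mentioned above, here is a minimal sketch (the function name is ours, and a uniform grid starting at $t = 0$ is assumed): piecewise-linear interpolation of the integrand on each subinterval yields the classical L1 weights.

```python
from math import gamma

def caputo_l1(f_vals, h, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    at t_n = n*h, from samples f_vals = [f(t_0), ..., f(t_n)] on a uniform
    grid with t_0 = 0. Piecewise-linear interpolation on each subinterval
    gives the weights (t_n - t_i)^(1-alpha) - (t_n - t_{i+1})^(1-alpha)."""
    n = len(f_vals) - 1
    t_n = n * h
    acc = 0.0
    for i in range(n):
        w = (t_n - i * h) ** (1 - alpha) - (t_n - (i + 1) * h) ** (1 - alpha)
        acc += (f_vals[i + 1] - f_vals[i]) / h * w
    return acc / gamma(2 - alpha)
```

For *f*(*t*) = *t* the piecewise-linear interpolant is exact, so the sum telescopes and the scheme reproduces the known Caputo derivative $t^{1-\alpha}/\Gamma(2-\alpha)$.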

**Citation:** Ghoreishi, F.; Ghaffari, R.; Saad, N. Fractional Order Runge–Kutta Methods. *Fractal Fract.* **2023**, *7*, 245. https://doi.org/10.3390/fractalfract7030245

Academic Editors: Libo Feng, Lin Liu and Yang Liu

Received: 1 February 2023 Revised: 21 February 2023 Accepted: 23 February 2023 Published: 8 March 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

These methods are appropriate options if the resulting system of equations, generated from the numerical method, is linear and well-conditioned. However, they present a high computational cost when the problem we are solving is badly conditioned or nonlinear. In light of the above discussion and the analysis of other methods for FDEs, despite many papers on numerical methods for FDEs, there are still insufficient efficient numerical approaches for such equations. Therefore, further studies are still in demand. In this case, step-by-step methods such as the Runge–Kutta method are a good option. They are favored due to their simplicity in both calculation and analysis.

Several authors have used Runge–Kutta methods to solve ordinary, partial differential, and integral equations [23–30]. Lubich and others have done some fundamental works regarding Runge–Kutta methods for Volterra integral equations [28–30]. They used the order conditions to derive various Runge–Kutta methods.

One of the efficient implicit Runge–Kutta methods for the numerical approximation of some linear partial differential equations is the Rosenbrock procedure. It is a class of semi-implicit Runge–Kutta methods for the numerical solution of some stiff systems of ODEs. In Ostermann and Roche's papers [31,32], the authors apply the Rosenbrock methods to solve linear partial differential equations, obtaining a sharp lower bound for the order of convergence. They show that the order of convergence is, in general, fractional. So, for the numerical solution of some fractional linear partial differential equations, we can construct fractional Rosenbrock-type methods, in which a special type of fractional semi-implicit Runge–Kutta method could be considered.

This paper introduces a new class of fractional order Runge–Kutta methods for numerical approximation to the solution of FDEs. Using the Caputo generalized Taylor series formula for the Caputo fractional derivative, we construct explicit and implicit FORK methods comparable to the well-known Runge–Kutta schemes for ordinary differential equations.

The remainder of the paper is organized as follows. In Section 2, we review some definitions and properties of fractional calculus. We propose new explicit and implicit FORK methods for solving FDEs in Sections 3 and 4. In Section 5, the theoretical analysis of the convergence, stability, and consistency of the proposed methods is presented. Finally, in Section 6, some numerical examples demonstrate the effectiveness of the proposed methods. In addition, two Mathematica codes are given in Appendix A.

#### **2. Preliminaries**

This section briefly reviews the definitions of the fractional integral and Caputo fractional derivative and explores some of their properties. A more comprehensive introduction to the fractional derivatives can be found in [1,4,33].

**Definition 1.** *The Riemann–Liouville fractional integral operator of order α* > 0 *for a function f*(*x*) ∈ *L*1[*a*, *b*] *with a* ≥ 0 *is defined as*

$$J\_a^{\alpha} f(x) = \frac{1}{\Gamma(\alpha)} \int\_a^{x} (x - t)^{\alpha-1} f(t)\,dt, \quad x \in [a, b], \qquad J\_a^0 f(x) = f(x),$$

*where* $L\_1[a, b] = \{ f \mid f \text{ is a measurable function on } [a, b] \text{ and } \int\_a^b |f(x)|\,dx < \infty \}$*, and* Γ *is the Gamma function.*

**Definition 2.** *The Caputo fractional derivative of order α* > 0 *of a function f*(*x*) ∈ *L*1[*a*, *b*]*, with a* ≥ 0*, is defined as*

$$\_a^c D\_x^{\alpha} f(x) = J\_a^{n-\alpha} D^n f(x) = \begin{cases} \dfrac{1}{\Gamma(n-\alpha)} \displaystyle\int\_a^x (x-t)^{n-\alpha-1} D^n f(t)\,dt, & n-1 < \alpha < n, \ n \in \mathbb{N}, \\\\ D^n f(x), & \alpha = n. \end{cases} \tag{1}$$

**Theorem 1** (Generalized Taylor formula for the Caputo fractional derivative [34])**.** *Suppose that* $(\_a^c D\_t^{\alpha})^k f(x) \in C(a, b]$ *for* $k = 0, 1, \ldots, n+1$*, where* $0 < \alpha \le 1$*; then,* $\forall\, x \in [a, b]$*, there exists* $\xi \in (a, x)$ *such that*

$$f(x) = \sum\_{i=0}^{n} \frac{(x - a)^{i\alpha}}{\Gamma(1 + i\alpha)} ((\_a^c D\_t^{\alpha})^{i} f)(a) + \frac{(x - a)^{(n+1)\alpha}}{\Gamma(1 + (n+1)\alpha)} ((\_a^c D\_t^{\alpha})^{n+1} f)(\xi),\tag{2}$$

*where* $(\_a^c D\_t^{\alpha})^n = \underbrace{\_a^c D\_t^{\alpha} \, \_a^c D\_t^{\alpha} \cdots \_a^c D\_t^{\alpha}}\_{n \text{ times}}$.

There are also two important functions in fractional calculus. They are direct generalizations of the exponential series and play essential roles in solving FDEs and in stability analysis.

**Definition 3.** *The Mittag–Leffler function is defined as*

$$E\_{\alpha}(x) = \sum\_{k=0}^{\infty} \frac{x^k}{\Gamma(1 + \alpha k)}, \quad \Re(\alpha) > 0, \ x \in \mathbb{C}.$$

*In addition, the two-parameter Mittag–Leffler function is defined by*

$$E\_{\alpha, \beta}(x) = \sum\_{k=0}^{\infty} \frac{x^k}{\Gamma(\beta + \alpha k)}, \quad \Re(\alpha) > 0, \ \beta \in \mathbb{C}, \ x \in \mathbb{C}.$$

We note that *Eα*(*x*) = *Eα*,1(*x*) and

$$E\_1(x) = \sum\_{k=0}^{\infty} \frac{x^k}{\Gamma(1+k)} = \sum\_{k=0}^{\infty} \frac{x^k}{k!} = \exp(x)\,.$$
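The series definitions above translate directly into code. A minimal truncated-series sketch (the function name and truncation level are our own choices, not from the paper):

```python
from math import gamma

def mittag_leffler(alpha, x, terms=100):
    """Truncated power series E_alpha(x) = sum_k x^k / Gamma(1 + alpha*k).
    Adequate for moderate |x|; large arguments need asymptotic expansions,
    and terms must stay below ~170/alpha to avoid Gamma overflow."""
    return sum(x ** k / gamma(1 + alpha * k) for k in range(terms))
```

As a check, `mittag_leffler(1.0, x)` reproduces exp(*x*), matching $E\_1(x) = \exp(x)$ above, and `mittag_leffler(0.5, -1.0)` gives approximately 0.4276, the known value $E\_{1/2}(-1) = e\,\mathrm{erfc}(1)$.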

#### **3. Fractional Order Runge–Kutta Methods**

This section presents a new class of FORK methods for the numerical solutions of FDEs. Consider the following FDE with 0 < *α* ≤ 1:

$$\begin{cases} \_{t\_0}^{c} D\_t^{\alpha} y(t) = f(t, y(t)), \quad t \in [t\_0, T],\\ y(t\_0) = y\_0, \end{cases} \tag{3}$$

where $y(t) \in C[t\_0, T]$ and $f(t, y(t)) \in C([t\_0, T] \times \mathbb{R})$; $t\_0$ is called the base point of the fractional derivative.

We set $t\_n = t\_0 + nh$, $n = 0, 1, \cdots, N^m$, where $h = (T - t\_0)/N^m$ is the step size and $N$ is a positive integer (in Section 5, we prove that $m \ge 1/\alpha$). For the existence and uniqueness of the solution of the FDE (3), we consider the following theorem [4].

**Theorem 2.** *Let <sup>α</sup>* > 0*, <sup>y</sup>*<sup>0</sup> ∈ R*, <sup>K</sup>* > 0 *and <sup>T</sup>* > 0 *and also let the function <sup>f</sup>* : *<sup>G</sup>* → R *be continuous and fulfill a Lipschitz condition with respect to the second variable, i.e.,*

$$|f(t, y\_1) - f(t, y\_2)| \le L|y\_1 - y\_2|,$$

*with some constant L* > 0 *independent of t*, *y*<sup>1</sup> *and y*2*. Define*

$$G = \{(t, y) : t \in [0, T], \ |y - y\_0| \le K\}, \quad M = \sup\_{(t, z) \in G} |f(t, z)|,$$

*and*

$$T^\* = \begin{cases} T, & M = 0,\\ \min\{T, \left(K\Gamma(\alpha+1)/M\right)^{1/\alpha}\}, & \text{else}. \end{cases}$$

*Then, there exists a unique function y* ∈ *C*[0, *T*∗] *solving the initial-value problem (3).*
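The guaranteed existence interval $[0, T^\*]$ is directly computable from the theorem. A small sketch (the helper name and all parameter values below are illustrative, not taken from the paper):

```python
from math import gamma

def t_star(T, K, M, alpha):
    """Length of the guaranteed existence interval [0, T*] in the theorem:
    T* = T if M == 0, else min(T, (K * Gamma(alpha+1) / M)**(1/alpha))."""
    if M == 0:
        return T
    return min(T, (K * gamma(alpha + 1) / M) ** (1.0 / alpha))
```

For instance, with *T* = 2, *K* = 1, *M* = 3 and *α* = 0.5 this gives $(\Gamma(1.5)/3)^2 \approx 0.087$: existence is only guaranteed on a short initial interval when the bound *M* is large relative to *K*.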

In the sequel, we assume *f*(*t*, *y*) has continuous partial derivatives with respect to *t* and *y* to as high an order as we want.

Now, we introduce an *s*-stage explicit fractional order Runge–Kutta (EFORK) method for FDEs, which is then worked out in detail for *s* = 2 and *s* = 3 stages.

**Definition 4.** *A family of s-stage EFORK methods is defined as*

$$\begin{aligned} K\_1 &= h^{\alpha} f(t, y), \\ K\_2 &= h^{\alpha} f(t + c\_2 h, y + a\_{21} K\_1), \\ K\_3 &= h^{\alpha} f(t + c\_3 h, y + a\_{31} K\_1 + a\_{32} K\_2), \\ &\vdots \\ K\_s &= h^{\alpha} f(t + c\_s h, y + a\_{s1} K\_1 + a\_{s2} K\_2 + \cdots + a\_{s, s-1} K\_{s-1}), \end{aligned} \tag{4}$$

*with*

$$y\_{n+1} = y\_n + \sum\_{i=1}^{s} w\_i K\_i, \tag{5}$$

*where the unknown coefficients* $\{a\_{ij}\}\_{i=2, j=1}^{s, i-1}$ *and the unknown weights* $\{c\_i\}\_{i=2}^{s}$*,* $\{w\_i\}\_{i=1}^{s}$ *have to be determined.*

To specify a particular method, one needs to provide $\{a\_{ij}\}\_{i=2, j=1}^{s, i-1}$ and $\{c\_i\}\_{i=2}^{s}$, $\{w\_i\}\_{i=1}^{s}$ accordingly. Following Butcher [23], a method of this type is designated by the following scheme:

$$\begin{array}{c|ccccc} c\_2 & a\_{21} & & & & \\ c\_3 & a\_{31} & a\_{32} & & & \\ \vdots & \vdots & & \ddots & & \\ c\_s & a\_{s1} & a\_{s2} & \cdots & a\_{s,s-1} & \\ \hline & w\_1 & w\_2 & \cdots & w\_{s-1} & w\_s \end{array}$$

We expand $y\_{n+1}$ in (5) in powers of $h^{\alpha}$, such that it agrees with the Taylor series expansion of the solution of the FDE (3) in a specified number of terms (see [35]). According to (2), the generalized Taylor formula for $\alpha \in (0, 1]$ with respect to the Caputo fractional derivative of the function $y(t)$ is as follows:

$$\begin{split} y(t) &= y(t\_0) + \frac{(t - t\_0)^{\alpha}}{\Gamma(\alpha + 1)} \,\_{t\_0}^{c} D\_t^{\alpha} y(t\_0) + \frac{(t - t\_0)^{2\alpha}}{\Gamma(2\alpha + 1)} ((\_{t\_0}^{c} D\_t^{\alpha})^2 y)(t\_0) \\ &\quad + \frac{(t - t\_0)^{3\alpha}}{\Gamma(3\alpha + 1)} ((\_{t\_0}^{c} D\_t^{\alpha})^3 y)(t\_0) + \cdots, \end{split} \tag{6}$$

where using (3),

$$\_{t\_0}^{c} D\_t^{\alpha} y(t) = f(t, y), \quad (\_{t\_0}^{c} D\_t^{\alpha})^2 y(t) = \_{t\_0}^{c} D\_t^{\alpha} f(t, y), \quad (\_{t\_0}^{c} D\_t^{\alpha})^3 y(t) = (\_{t\_0}^{c} D\_t^{\alpha})^2 f(t, y), \quad \cdots. \tag{7}$$

Now, we obtain explicit expressions for (7).

Caputo fractional derivatives of composite function *f*(*t*, *y*(*t*)) can be computed by fractional Taylor series:

$$\begin{split} f(t, y(t)) &= f(t\_0, y(t\_0)) + \frac{(t - t\_0)^{\alpha}}{\Gamma(\alpha + 1)} f\_t^{\alpha}(t\_0, y(t\_0)) + \frac{(y - y\_0)}{1!} f\_y(t\_0, y(t\_0)) \\ &\quad + \frac{(t - t\_0)^{2\alpha}}{\Gamma(2\alpha + 1)} f\_{t,t}^{\alpha,\alpha}(t\_0, y(t\_0)) + \frac{(y - y\_0)^2}{2!} f\_{y,y}(t\_0, y(t\_0)) \\ &\quad + \frac{(t - t\_0)^{\alpha} (y - y\_0)}{\Gamma(\alpha + 1)} f\_{t,y}^{\alpha,1}(t\_0, y(t\_0)) + \frac{(t - t\_0)^{3\alpha}}{\Gamma(3\alpha + 1)} f\_{t,t,t}^{\alpha,\alpha,\alpha}(t\_0, y(t\_0)) + \cdots, \end{split} \tag{8}$$

where $f\_t^{\alpha}$ denotes the Caputo fractional derivative of $f(t, y(t))$ with respect to $t$. After inserting $y(t) - y(t\_0)$ from (6) in (8) and by using the fractional derivative of (8) for $\alpha \in (0, 1]$, we have

$$\begin{split} \_{t\_0}^{c} D\_t^{\alpha} f(t, y(t)) &= f\_t^{\alpha}(t\_0, y(t\_0)) + \left( f(t\_0, y(t\_0)) + \frac{(t - t\_0)^{\alpha}}{\Gamma(\alpha + 1)} \,\_{t\_0}^{c} D\_t^{\alpha} f(t\_0, y(t\_0)) + \cdots \right) f\_y(t\_0, y(t\_0)) \\ &\quad + \frac{(t - t\_0)^{\alpha}}{\Gamma(\alpha + 1)} f\_{t,t}^{\alpha,\alpha}(t\_0, y(t\_0)) + \frac{1}{2} \left( \frac{\Gamma(2\alpha + 1)}{\Gamma(\alpha + 1)^3} (t - t\_0)^{\alpha} f^2(t\_0, y(t\_0)) + \cdots \right) f\_{y,y}(t\_0, y(t\_0)) \\ &\quad + \left( \frac{\Gamma(2\alpha + 1)}{\Gamma(\alpha + 1)^3} (t - t\_0)^{\alpha} f(t\_0, y(t\_0)) + \cdots \right) f\_{t,y}^{\alpha,1}(t\_0, y(t\_0)) \\ &\quad + \frac{(t - t\_0)^{2\alpha}}{\Gamma(2\alpha + 1)} f\_{t,t,t}^{\alpha,\alpha,\alpha}(t\_0, y(t\_0)) + \cdots, \end{split}$$

and so

$$\_{t\_0}^{c} D\_t^{\alpha} f(t\_0, y(t\_0)) = f\_t^{\alpha}(t\_0, y(t\_0)) + f(t\_0, y(t\_0)) f\_y(t\_0, y(t\_0)).\tag{9}$$

In addition,

$$\begin{split} \_{t\_0}^{c} D\_t^{2\alpha} f(t, y(t)) &= \_{t\_0}^{c} D\_t^{\alpha} f(t\_0, y(t\_0)) f\_y(t\_0, y(t\_0)) + f\_{t,t}^{\alpha,\alpha}(t\_0, y(t\_0)) \\ &\quad + \left( \frac{\Gamma(2\alpha + 1)}{2\Gamma(\alpha + 1)^2} f^2(t\_0, y(t\_0)) + \cdots \right) f\_{y,y}(t\_0, y(t\_0)) \\ &\quad + \left( \frac{\Gamma(2\alpha + 1)}{\Gamma(\alpha + 1)^2} f(t\_0, y(t\_0)) + \cdots \right) f\_{t,y}^{\alpha,1}(t\_0, y(t\_0)) \\ &\quad + \frac{(t - t\_0)^{\alpha}}{\Gamma(\alpha + 1)} f\_{t,t,t}^{\alpha,\alpha,\alpha}(t\_0, y(t\_0)) + \cdots, \end{split}$$

which yields

$$\begin{split} \_{t\_0}^{c} D\_t^{2\alpha} f(t\_0, y(t\_0)) &= f\_t^{\alpha}(t\_0, y(t\_0)) f\_y(t\_0, y(t\_0)) + f(t\_0, y(t\_0)) f\_y^2(t\_0, y(t\_0)) + f\_{t,t}^{\alpha,\alpha}(t\_0, y(t\_0)) \\ &\quad + \frac{1}{2} f^2(t\_0, y(t\_0)) f\_{y,y}(t\_0, y(t\_0)) + f(t\_0, y(t\_0)) f\_{t,y}^{\alpha,1}(t\_0, y(t\_0)). \end{split} \tag{10}$$

In a similar manner with (9) and (10), we can obtain the higher fractional derivatives of *f*(*t*, *y*(*t*)).

Now, by using (9) and (10), we have

$$\begin{aligned} \_{t\_0}^{c} D\_t^{\alpha} y(t\_0) &= f(t\_0, y\_0), \\ (\_{t\_0}^{c} D\_t^{\alpha})^2 y(t\_0) &= f\_t^{\alpha}(t\_0, y\_0) + f(t\_0, y\_0) f\_y(t\_0, y\_0), \\ (\_{t\_0}^{c} D\_t^{\alpha})^3 y(t\_0) &= f\_t^{\alpha}(t\_0, y(t\_0)) f\_y(t\_0, y(t\_0)) + f(t\_0, y(t\_0)) f\_y^2(t\_0, y(t\_0)) + f\_{t,t}^{\alpha,\alpha}(t\_0, y(t\_0)) \\ &\quad + \frac{1}{2} f^2(t\_0, y(t\_0)) f\_{y,y}(t\_0, y(t\_0)) + f(t\_0, y(t\_0)) f\_{t,y}^{\alpha,1}(t\_0, y(t\_0)), \end{aligned} \tag{11}$$

where $f\_{t,y}^{\alpha,i}$, $i = 1, 2, \cdots$, represents the $i$th integer-order derivative of the function $f\_t^{\alpha}$ with respect to $y$. As we can see from (6), in the Caputo fractional derivatives $((\_{t\_0}^{c} D\_t^{\alpha})^k y)(t\_0)$, $k = 0, 1, 2, \cdots$, the argument $t\_0$ in $y(t\_0)$ and the base point in $(\_{t\_0}^{c} D\_t^{\alpha})^k$ are the same.

To construct an efficient numerical scheme, we should obtain a similar series with the derivatives evaluated at any other point ($t\_n > t\_0$), such that the expansion can be constructed independently of the starting point $t\_0$. In other words, we need

$$\begin{split} y(t\_{n+1}) &= y(t\_n) + \frac{h^{\alpha}}{\Gamma(\alpha+1)} \,\_{t\_n}^{c} D\_t^{\alpha} y(t\_n) + \frac{h^{2\alpha}}{\Gamma(2\alpha+1)} ((\_{t\_n}^{c} D\_t^{\alpha})^2 y)(t\_n) \\ &\quad + \frac{h^{3\alpha}}{\Gamma(3\alpha+1)} ((\_{t\_n}^{c} D\_t^{\alpha})^3 y)(t\_n) + \cdots, \end{split} \tag{12}$$

and $((\_{t\_n}^{c} D\_t^{\alpha})^i y)(t\_n)$, $i = 1, 2, \cdots$. To do so, by using $\_{t\_0}^{c} D\_t^{\alpha} y(t)$, we obtain $\_{t\_n}^{c} D\_t^{\alpha} y(t)$ for $n = 1, 2, \cdots, N^m - 1$, as

$$\begin{split} \_{t\_n}^{c} D\_t^{\alpha} y(t) &= \_{t\_0}^{c} D\_t^{\alpha} y(t) - \frac{1}{\Gamma(1-\alpha)} \int\_{t\_0}^{t\_n} (t-s)^{-\alpha} Dy(s)\,ds \\ &= \_{t\_0}^{c} D\_t^{\alpha} y(t) - \frac{1}{\Gamma(1-\alpha)} \sum\_{i=0}^{n-1} \int\_{t\_i}^{t\_{i+1}} (t-s)^{-\alpha} Dy(s)\,ds. \end{split} \tag{13}$$

By using the linear Lagrange interpolation formula for *y*(*s*) in support abscissas {*ti*, *ti*+1}, we have

$$\begin{aligned} y(s) &\simeq \frac{(s - t\_i)}{(t\_{i+1} - t\_i)} y\_{i+1} - \frac{(s - t\_{i+1})}{(t\_{i+1} - t\_i)} y\_i \\ &= \frac{(s - t\_i)}{h} y\_{i+1} - \frac{(s - t\_{i+1})}{h} y\_i, \quad s \in [t\_i, t\_{i+1}], \quad i = 0, 1, \ldots, n - 1, \end{aligned}$$

where, for a sufficiently small *h*, we have

$$Dy(s) \simeq \frac{1}{h}(y\_{i+1} - y\_i), \quad s \in [t\_i, t\_{i+1}], \quad i = 0, 1, \dots, n - 1,$$

and

$$\int\_{t\_i}^{t\_{i+1}} (t - s)^{-\alpha} Dy(s)\,ds \simeq \frac{(y\_{i+1} - y\_i)}{h(1 - \alpha)} \left[ (t - t\_i)^{1-\alpha} - (t - t\_{i+1})^{1-\alpha} \right], \ i = 0, 1, \cdots, n - 1.$$

From (13) and $\_{t\_0}^{c} D\_t^{\alpha} y(t) = f(t, y)$, we have

$$\_{t\_n}^{c} D\_t^{\alpha} y(t) = f(t, y) - \frac{1}{\Gamma(1 - \alpha)} \sum\_{i = 0}^{n - 1} \frac{(y\_{i + 1} - y\_i)}{h(1 - \alpha)} \left[ (t - t\_i)^{1 - \alpha} - (t - t\_{i + 1})^{1 - \alpha} \right].$$

So we may write

$$\_{t\_n}^{c} D\_t^{\alpha} y(t) = F\_n(t, y), \qquad n = 0, 1, 2, \dots, \tag{14}$$

where *F*0(*t*, *y*) = *f*(*t*, *y*) and for *n* = 1, 2, 3, ··· , we have

$$F\_n(t, y) = f(t, y) - \frac{1}{\Gamma(2 - \alpha)} \sum\_{i = 0}^{n - 1} \frac{y\_{i + 1} - y\_i}{h} \left[ (t - t\_i)^{1 - \alpha} - (t - t\_{i + 1})^{1 - \alpha} \right].$$

Clearly, $F\_n(t, y)$ is continuous and satisfies the Lipschitz condition with respect to the second variable, due to the properties of $f(t, y)$. In what follows, for convenience of notation, we rename $F\_n(t, y)$ as $f(t, y)$; i.e., for any base point $t\_n > t\_0$, $n = 1, 2, \ldots, N^m - 1$, we use the right-hand side of (14) as $f(t\_n, y\_n)$ instead of $F\_n(t\_n, y\_n)$ in all stages.
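The shifted right-hand side $F\_n(t, y)$ is straightforward to compute from the stored history. A minimal sketch assuming $t\_0 = 0$ and a uniform grid (the function names are ours):

```python
from math import gamma

def memory_correction(t, ys, h, alpha):
    """History sum subtracted from f(t, y) to move the Caputo base point
    from t_0 = 0 to t_n, as in F_n; ys = [y_0, ..., y_n] computed so far.
    A uniform grid t_i = i*h is assumed; max(..., 0.0) guards round-off."""
    acc = 0.0
    for i in range(len(ys) - 1):
        w = (max(t - i * h, 0.0) ** (1 - alpha)
             - max(t - (i + 1) * h, 0.0) ** (1 - alpha))
        acc += (ys[i + 1] - ys[i]) / h * w
    return acc / gamma(2 - alpha)

def shifted_rhs(f, t, y, ys, h, alpha):
    """F_n(t, y) of Eq. (14): the right-hand side seen from base point t_n."""
    return f(t, y) - memory_correction(t, ys, h, alpha)
```

Two sanity properties follow directly: for $n = 0$ the sum is empty, so $F\_0 = f$; and for $\alpha = 1$ each bracketed weight is zero, so the correction disappears and the classical local right-hand side is recovered.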

Now, to construct FORK methods, we can use the Taylor formula (6) and (11), where $\_{t\_n}^{c} D\_t^{\alpha} y(t)$ is defined in (14).

#### *3.1. EFORK Method of Order* 2*α*

Let us introduce the following EFORK method with two stages:

$$\begin{aligned} K\_1 &= h^{\alpha} f(t\_n, y\_n), \\ K\_2 &= h^{\alpha} f(t\_n + c\_2 h, y\_n + a\_{21} K\_1), \\ y\_{n+1} &= y\_n + w\_1 K\_1 + w\_2 K\_2, \end{aligned} \tag{15}$$

where coefficients *c*2, *a*<sup>21</sup> and weights *w*1, *w*<sup>2</sup> are chosen to make the approximate value *yn*+<sup>1</sup> as close as possible to the exact value *y*(*tn*+1). We expand *K*<sup>1</sup> and *K*<sup>2</sup> about the point (*tn*, *yn*), where we use the Caputo Taylor formula (12) about point *tn* and standard integer-order Taylor formula about *yn* as

$$\begin{split} K\_1 &= h^{\alpha} f(t\_n, y\_n), \\ K\_2 &= h^{\alpha} f(t\_n + c\_2 h, y\_n + a\_{21} K\_1) \\ &= h^{\alpha} \left[ f(t\_n, y\_n) + \frac{c\_2^{\alpha} h^{\alpha}}{\Gamma(\alpha+1)} f\_t^{\alpha} + a\_{21} h^{\alpha} f\_n f\_y + \frac{c\_2^{2\alpha} h^{2\alpha}}{\Gamma(2\alpha+1)} f\_{t,t}^{\alpha,\alpha} + \frac{a\_{21}^2 h^{2\alpha}}{2} f\_n^2 f\_{y,y} + \frac{c\_2^{\alpha} a\_{21} h^{2\alpha}}{\Gamma(\alpha+1)} f\_n f\_{t,y}^{\alpha,1} + \cdots \right] \\ &= h^{\alpha} f\_n + h^{2\alpha} \left( \frac{c\_2^{\alpha}}{\Gamma(\alpha+1)} f\_t^{\alpha} + a\_{21} f\_n f\_y \right) + h^{3\alpha} \left( \frac{c\_2^{2\alpha}}{\Gamma(2\alpha+1)} f\_{t,t}^{\alpha,\alpha} + \frac{a\_{21}^2}{2} f\_n^2 f\_{y,y} + \frac{c\_2^{\alpha} a\_{21}}{\Gamma(\alpha+1)} f\_n f\_{t,y}^{\alpha,1} \right) + \cdots. \end{split}$$

Substituting *K*<sup>1</sup> and *K*<sup>2</sup> in (15), we have

$$\begin{split} y\_{n+1} &= y\_n + (w\_1 + w\_2) h^{\alpha} f\_n + h^{2\alpha} w\_2 \left( \frac{c\_2^{\alpha}}{\Gamma(\alpha+1)} f\_t^{\alpha} + a\_{21} f\_n f\_y \right) \\ &\quad + w\_2 h^{3\alpha} \left( \frac{c\_2^{2\alpha}}{\Gamma(2\alpha+1)} f\_{t,t}^{\alpha,\alpha} + \frac{a\_{21}^2}{2} f\_n^2 f\_{y,y} + \frac{c\_2^{\alpha} a\_{21}}{\Gamma(\alpha+1)} f\_n f\_{t,y}^{\alpha,1} \right) + \cdots. \end{split} \tag{16}$$

Comparing (12) with (16) and matching coefficients of powers of $h^{\alpha}$, we obtain three equations:

$$w\_1 + w\_2 = \frac{1}{\Gamma(\alpha + 1)}, \qquad w\_2 \frac{c\_2^{\alpha}}{\Gamma(\alpha + 1)} = \frac{1}{\Gamma(2\alpha + 1)}, \qquad w\_2 a\_{21} = \frac{1}{\Gamma(2\alpha + 1)}.\tag{17}$$

From these equations, we see that, if $c\_2^{\alpha}$ is chosen arbitrarily (nonzero), then

$$a\_{21} = \frac{c\_2^{\alpha}}{\Gamma(\alpha+1)}, \quad w\_2 = \frac{\Gamma(\alpha+1)}{c\_2^{\alpha}\,\Gamma(2\alpha+1)}, \quad w\_1 = \frac{1}{\Gamma(\alpha+1)} - \frac{\Gamma(\alpha+1)}{c\_2^{\alpha}\,\Gamma(2\alpha+1)}.\tag{18}$$
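The closed-form coefficients (18) satisfy all three conditions (17) identically for any nonzero value of $c\_2^{\alpha}$, which is easy to verify numerically; the values of $\alpha$ and $c\_2^{\alpha}$ below are arbitrary illustrations:

```python
from math import gamma

alpha = 0.8
c2a = 0.6                                    # an arbitrary nonzero c2**alpha
a21 = c2a / gamma(alpha + 1)                 # Eq. (18)
w2 = gamma(alpha + 1) / (c2a * gamma(2 * alpha + 1))
w1 = 1 / gamma(alpha + 1) - w2

# the three order conditions, Eq. (17)
assert abs(w1 + w2 - 1 / gamma(alpha + 1)) < 1e-12
assert abs(w2 * c2a / gamma(alpha + 1) - 1 / gamma(2 * alpha + 1)) < 1e-12
assert abs(w2 * a21 - 1 / gamma(2 * alpha + 1)) < 1e-12
```

The same check passes for any other $\alpha \in (0, 1]$ and nonzero $c\_2^{\alpha}$, since (18) was solved exactly from (17).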

Inserting (17) and (18) in (16) we get

$$\begin{split} y\_{n+1} &= y\_n + \frac{h^{\alpha}}{\Gamma(\alpha+1)} f\_n + \frac{h^{2\alpha}}{\Gamma(2\alpha+1)} \left( f\_t^{\alpha} + f\_n f\_y \right) \\ &\quad + \frac{c\_2^{\alpha} h^{3\alpha}}{\Gamma(2\alpha+1)} \left[ \frac{\Gamma(\alpha+1)}{\Gamma(2\alpha+1)} f\_{t,t}^{\alpha,\alpha} + \frac{1}{2\Gamma(\alpha+1)} f\_n^2 f\_{y,y} + \frac{1}{\Gamma(\alpha+1)} f\_n f\_{t,y}^{\alpha,1} \right] + \cdots. \end{split} \tag{19}$$

Subtracting (19) from (12), we obtain the local truncation error $T\_n$:

$$\begin{split} T\_n = y(t\_{n+1}) - y\_{n+1} &= h^{3\alpha} \left( \frac{1}{\Gamma(3\alpha+1)} - \frac{c\_2^{\alpha} \Gamma(\alpha+1)}{(\Gamma(2\alpha+1))^2} \right) f\_{t,t}^{\alpha,\alpha} \\ &\quad + h^{3\alpha} \left( \frac{2}{\Gamma(3\alpha+1)} - \frac{c\_2^{\alpha}}{2\Gamma(\alpha+1)} \right) f\_n^2 f\_{y,y} \\ &\quad + h^{3\alpha} \left( \frac{1}{\Gamma(3\alpha+1)} - \frac{c\_2^{\alpha}}{\Gamma(\alpha+1)} \right) f\_n f\_{t,y}^{\alpha,1} + \cdots. \end{split} \tag{20}$$


We conclude that no choice of the parameter $c\_2^{\alpha}$ makes the leading term of $T\_n$ vanish for all functions $f(t, y)$. The free parameter is therefore sometimes chosen to minimize the sum of the absolute values of the coefficients in $T\_n$; such a choice is called an optimal choice. Zeroing the coefficient of one of the three leading terms in (20) in turn gives

$$c\_2^{\alpha} = \frac{(\Gamma(2\alpha+1))^2}{\Gamma(3\alpha+1)\Gamma(\alpha+1)}, \quad c\_2^{\alpha} = \frac{4\Gamma(\alpha+1)}{\Gamma(3\alpha+1)}, \quad \text{or} \quad c\_2^{\alpha} = \frac{\Gamma(\alpha+1)}{\Gamma(3\alpha+1)}.$$

From (20), we have $T\_n/h^{\alpha} = O\big((h^{\alpha})^2\big)$. So, we deduce that the two-stage EFORK method (15) is of order $2\alpha$.

Now, the two-stage EFORK method, listing its coefficients in a Butcher tableau, is as follows:

$$\begin{array}{c|cc} c\_2 & a\_{21} & \\ \hline & w\_1 & w\_2 \end{array}$$

Choosing *w*<sup>1</sup> = *w*<sup>2</sup> yields

$$\begin{array}{c|cc} \left(\frac{2(\Gamma(\alpha+1))^2}{\Gamma(2\alpha+1)}\right)^{\frac{1}{\alpha}} & \frac{2\Gamma(\alpha+1)}{\Gamma(2\alpha+1)} & \\ \hline & \frac{1}{2\Gamma(\alpha+1)} & \frac{1}{2\Gamma(\alpha+1)} \end{array}$$
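The order-$2\alpha$ behaviour of this $w_1 = w_2$ variant can be exercised on the test equation ${}^c_0D_t^{\alpha}y = \lambda y$. The sketch below (Python; a hypothetical single-step experiment, not the paper's code) compares one step with the truncated Mittag-Leffler series $E_{\alpha}(\lambda h^{\alpha})$; since the one-step defect is $O(h^{3\alpha})$, halving $h$ shrinks it by roughly $2^{3\alpha}$:

```python
from scipy.special import gamma

def ml(alpha, z, terms=60):
    # Truncated Mittag-Leffler series E_alpha(z) = sum_k z^k / Gamma(k*alpha + 1)
    return sum(z**k / gamma(k * alpha + 1) for k in range(terms))

def efork2_step(alpha, lam, h, y):
    # Two-stage EFORK step with w1 = w2, i.e. c2^alpha = 2*Gamma(alpha+1)^2/Gamma(2*alpha+1)
    c2a = 2 * gamma(alpha + 1) ** 2 / gamma(2 * alpha + 1)
    a21 = c2a / gamma(alpha + 1)
    w1 = w2 = 1 / (2 * gamma(alpha + 1))
    K1 = h**alpha * lam * y
    K2 = h**alpha * lam * (y + a21 * K1)
    return y + w1 * K1 + w2 * K2

alpha, lam = 0.5, -1.0
err = lambda h: abs(efork2_step(alpha, lam, h, 1.0) - ml(alpha, lam * h**alpha))
ratio = err(0.1) / err(0.05)   # expected near 2**(3*alpha)
assert 2.0 < ratio < 4.0
```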

In addition, the optimal cases of the two-stage EFORK method, obtained from (18) for the three optimal choices of $c\_2^{\alpha}$, are

$$\begin{array}{c|cc} \left(\frac{(\Gamma(2\alpha+1))^2}{\Gamma(3\alpha+1)\Gamma(\alpha+1)}\right)^{\frac{1}{\alpha}} & \frac{(\Gamma(2\alpha+1))^2}{\Gamma(3\alpha+1)(\Gamma(\alpha+1))^2} & \\ \hline & \frac{1}{\Gamma(\alpha+1)} - \frac{\Gamma(3\alpha+1)(\Gamma(\alpha+1))^2}{(\Gamma(2\alpha+1))^3} & \frac{\Gamma(3\alpha+1)(\Gamma(\alpha+1))^2}{(\Gamma(2\alpha+1))^3} \end{array}$$

$$\begin{array}{c|cc} \left(\frac{4\Gamma(\alpha+1)}{\Gamma(3\alpha+1)}\right)^{\frac{1}{\alpha}} & \frac{4}{\Gamma(3\alpha+1)} & \\ \hline & \frac{1}{\Gamma(\alpha+1)} - \frac{\Gamma(3\alpha+1)}{4\Gamma(2\alpha+1)} & \frac{\Gamma(3\alpha+1)}{4\Gamma(2\alpha+1)} \end{array}$$

$$\begin{array}{c|cc} \left(\frac{\Gamma(\alpha+1)}{\Gamma(3\alpha+1)}\right)^{\frac{1}{\alpha}} & \frac{1}{\Gamma(3\alpha+1)} & \\ \hline & \frac{1}{\Gamma(\alpha+1)} - \frac{\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)} & \frac{\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)} \end{array}$$

#### *3.2. EFORK Method of Order* 3*α*

Following (4) and (5), we define a three-stage EFORK method as

$$\begin{aligned} K\_1 &= h^{\alpha} f(t\_n, y\_n), \\ K\_2 &= h^{\alpha} f(t\_n + c\_2 h, y\_n + a\_{21} K\_1), \\ K\_3 &= h^{\alpha} f(t\_n + c\_3 h, y\_n + a\_{31} K\_1 + a\_{32} K\_2), \\ y\_{n+1} &= y\_n + w\_1 K\_1 + w\_2 K\_2 + w\_3 K\_3, \end{aligned} \tag{21}$$

where the unknown parameters $\{c\_i\}\_{i=2}^{3}$, $\{a\_{ij}\}\_{i=2,j=1}^{3,\,i-1}$, and $\{w\_i\}\_{i=1}^{3}$ have to be determined accordingly. By using the same procedure as we followed for the two-stage EFORK method, expanding $K\_1$, $K\_2$ and $K\_3$, comparing with (12) and matching the coefficients of powers of $h^{\alpha}$, we obtain the following equations:

$$w\_1 + w\_2 + w\_3 = \frac{1}{\Gamma(\alpha+1)}, \quad a\_{21} = \frac{c\_2^{\alpha}}{\Gamma(\alpha+1)}, \quad w\_2 c\_2^{2\alpha} + w\_3 c\_3^{2\alpha} = \frac{\Gamma(2\alpha+1)}{\Gamma(3\alpha+1)},$$

$$a\_{31} + a\_{32} = \frac{c\_3^{\alpha}}{\Gamma(\alpha+1)}, \quad w\_2 c\_2^{\alpha} + w\_3 c\_3^{\alpha} = \frac{\Gamma(\alpha+1)}{\Gamma(2\alpha+1)}, \quad w\_3 a\_{32} c\_2^{\alpha} = \frac{\Gamma(\alpha+1)}{\Gamma(3\alpha+1)}.\tag{22}$$

Now, we have six equations with eight unknown parameters. According to the Butcher tableau for the three-stage EFORK method, we have

$$
\begin{array}{c|cc}
 c\_2 & a\_{21} & & \\
 c\_3 & a\_{31} & a\_{32} & \\
 \hline
 & w\_1 & w\_2 & w\_3 \\
\end{array}
$$

If $c\_2$ and $c\_3$ are arbitrarily chosen, we calculate the weights $\{w\_i\}\_{i=1}^{3}$ and coefficients $\{a\_{ij}\}\_{i=2,j=1}^{3,\,i-1}$ from (22). For the choice $c\_2^{\alpha} = \frac{1}{2\Gamma(\alpha+1)}$ and $c\_3^{\alpha} = \frac{1}{4\Gamma(\alpha+1)}$, this gives (here $\alpha! \equiv \Gamma(\alpha+1)$, $(2\alpha)! \equiv \Gamma(2\alpha+1)$, etc.)

$$\begin{array}{c|ccc} \left(\frac{1}{2\,\alpha!}\right)^{\frac{1}{\alpha}} & \frac{1}{2(\alpha!)^{2}} & & \\ \left(\frac{1}{4\,\alpha!}\right)^{\frac{1}{\alpha}} & \frac{(\alpha!)^{2}(2\alpha)! + 2((2\alpha)!)^{2} - (3\alpha)!}{4(\alpha!)^{2}\left(2((2\alpha)!)^{2} - (3\alpha)!\right)} & -\frac{(2\alpha)!}{4\left(2((2\alpha)!)^{2} - (3\alpha)!\right)} & \\ \hline & \frac{8(\alpha!)^{2}(2\alpha)!}{(3\alpha)!} - \frac{6(\alpha!)^{2}}{(2\alpha)!} + \frac{1}{\alpha!} & \frac{2(\alpha!)^{2}\left(4((2\alpha)!)^{2} - (3\alpha)!\right)}{(2\alpha)!\,(3\alpha)!} & -\frac{8(\alpha!)^{2}\left(2((2\alpha)!)^{2} - (3\alpha)!\right)}{(2\alpha)!\,(3\alpha)!} \end{array}$$
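These coefficients can be checked against the six order conditions (22). A small Python sketch (using `scipy.special.gamma`; the coefficients are transcribed for the explicit choice $c_2^{\alpha} = 1/(2\Gamma(\alpha+1))$, $c_3^{\alpha} = 1/(4\Gamma(\alpha+1))$, and the value of $\alpha$ is arbitrary):

```python
from scipy.special import gamma

alpha = 0.5
g1, g2, g3 = gamma(alpha + 1), gamma(2 * alpha + 1), gamma(3 * alpha + 1)

c2a = 1 / (2 * g1)          # c2^alpha
c3a = 1 / (4 * g1)          # c3^alpha
a21 = 1 / (2 * g1**2)
a31 = (g1**2 * g2 + 2 * g2**2 - g3) / (4 * g1**2 * (2 * g2**2 - g3))
a32 = -g2 / (4 * (2 * g2**2 - g3))
w1 = 8 * g1**2 * g2 / g3 - 6 * g1**2 / g2 + 1 / g1
w2 = 2 * g1**2 * (4 * g2**2 - g3) / (g2 * g3)
w3 = -8 * g1**2 * (2 * g2**2 - g3) / (g2 * g3)

# The six order conditions (22):
assert abs(w1 + w2 + w3 - 1 / g1) < 1e-12
assert abs(a21 - c2a / g1) < 1e-12
assert abs(w2 * c2a**2 + w3 * c3a**2 - g2 / g3) < 1e-12
assert abs(a31 + a32 - c3a / g1) < 1e-12
assert abs(w2 * c2a + w3 * c3a - g1 / g2) < 1e-12
assert abs(w3 * a32 * c2a - g1 / g3) < 1e-12
```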

As a result, we obtain $T\_n/h^{\alpha} = O\big((h^{\alpha})^3\big)$. By a similar procedure to the two- and three-stage EFORK methods, we can construct $s$-stage EFORK methods for $s > 3$.

As we can see, to obtain the higher fractional order Runge–Kutta methods, we must consider a method with additional stages. In the next section, we express implicit fractional order Runge–Kutta (IFORK) methods with low stages and high orders.

#### **4. IFORK Methods**

We define a *s*-stage IFORK method by the following equations:

$$K\_i = \frac{1}{s} h^{\alpha} \sum\_{k=1}^{s} f\left(t\_n + c\_{ik} h,\; y\_n + \sum\_{j=1}^{s} a\_{ij} K\_j\right), \quad i = 1, 2, \dots, s, \tag{23}$$

and

$$y\_{n+1} = y\_n + \sum\_{i=1}^{s} w\_i K\_i, \tag{24}$$

where

$$\frac{c\_{i1}^{\alpha} + c\_{i2}^{\alpha} + \dots + c\_{is}^{\alpha}}{\alpha!} = s(a\_{i1} + a\_{i2} + \dots + a\_{is}), \quad i = 1, 2, \dots, s, \tag{25}$$

and the parameters *aijs*,*<sup>s</sup> i*,*j*=1 , {*wi*}*<sup>s</sup> <sup>i</sup>*=<sup>1</sup> are arbitrary. We state the IFORK method by listing the coefficients as follows:


Since the functions *Ki* are defined by a set of *s* implicit equations, the derivation of the implicit methods is complicated. Therefore, only the case *s* = 2 is investigated.

Consider (23)–(25) with *s* = 2 as

$$K\_i = \frac{1}{2}h^a[f(t\_n + c\_{i1}h, y\_n + a\_{i1}K\_1 + a\_{i2}K\_2) + f(t\_n + c\_{i2}h, y\_n + a\_{i1}K\_1 + a\_{i2}K\_2)], \quad i = 1, 2\tag{26}$$

$$y\_{n+1} = y\_n + w\_1 \mathcal{K}\_1 + w\_2 \mathcal{K}\_2 \tag{27}$$

where

$$\frac{c\_{i1}^{\alpha} + c\_{i2}^{\alpha}}{\alpha!} = 2(a\_{i1} + a\_{i2}), \quad i = 1, 2. \tag{28}$$

By using a similar procedure as we followed for the EFORK method, we expand *Ki* about the point (*tn*, *yn*), where we apply the Caputo Taylor formula (12) about *tn* and standard integer-order Taylor formula about *yn*.

$$\begin{split} K\_{i} &= \frac{1}{2}h^{\alpha}\Big[2f\_{n} + \frac{(c\_{i1}^{\alpha} + c\_{i2}^{\alpha})h^{\alpha}}{\alpha!}f\_{t}^{\alpha} + 2(a\_{i1}K\_{1} + a\_{i2}K\_{2})f\_{y} + \frac{(c\_{i1}^{2\alpha} + c\_{i2}^{2\alpha})h^{2\alpha}}{(2\alpha)!}f\_{t,t}^{\alpha,\alpha} \\ &\quad + (a\_{i1}K\_{1} + a\_{i2}K\_{2})^{2}f\_{y,y} + \frac{(c\_{i1}^{\alpha} + c\_{i2}^{\alpha})h^{\alpha}}{\alpha!}(a\_{i1}K\_{1} + a\_{i2}K\_{2})f\_{t,y}^{\alpha,1} \\ &\quad + \frac{(c\_{i1}^{3\alpha} + c\_{i2}^{3\alpha})h^{3\alpha}}{(3\alpha)!}f\_{t,t,t}^{\alpha,\alpha,\alpha} + \frac{(c\_{i1}^{2\alpha} + c\_{i2}^{2\alpha})h^{2\alpha}}{(2\alpha)!}(a\_{i1}K\_{1} + a\_{i2}K\_{2})f\_{t,t,y}^{\alpha,\alpha,1} \\ &\quad + \frac{(c\_{i1}^{\alpha} + c\_{i2}^{\alpha})h^{\alpha}}{\alpha!}\,\frac{(a\_{i1}K\_{1} + a\_{i2}K\_{2})^{2}}{2}f\_{t,y,y}^{\alpha,1,1} + \frac{(a\_{i1}K\_{1} + a\_{i2}K\_{2})^{3}}{3}f\_{y,y,y} + \cdots\Big], \end{split} \tag{29}$$

where $i = 1, 2$.

Since Equations (29) are implicit, we cannot obtain explicit forms for $K\_1$ and $K\_2$. To determine an explicit form of $K\_i$, we consider

$$K\_i = h^{\alpha} A\_i + h^{2\alpha} B\_i + h^{3\alpha} C\_i + \cdots, \quad i = 1, 2, \tag{30}$$

where $A\_i$, $B\_i$ and $C\_i$ are unknowns. Substituting (30) into (29) and matching the coefficients of powers of $h^{\alpha}$, we get

$$\begin{aligned} A\_i &= f\_n, \\ B\_i &= \frac{c\_{i1}^{\alpha} + c\_{i2}^{\alpha}}{2\,\alpha!}\left[f\_t^{\alpha} + f f\_y\right] = \frac{c\_{i1}^{\alpha} + c\_{i2}^{\alpha}}{2\,\alpha!}\, D^{\alpha} f, \\ C\_i &= \left( a\_{i1} \frac{c\_{11}^{\alpha} + c\_{12}^{\alpha}}{2\,\alpha!} + a\_{i2} \frac{c\_{21}^{\alpha} + c\_{22}^{\alpha}}{2\,\alpha!} \right) f\_y D^{\alpha} f + \frac{c\_{i1}^{2\alpha} + c\_{i2}^{2\alpha}}{2\,(2\alpha)!} f\_{t,t}^{\alpha,\alpha} \\ &\quad + \frac{1}{4} \left( \frac{c\_{i1}^{\alpha} + c\_{i2}^{\alpha}}{\alpha!} \right)^2 \left( \frac{1}{2} f^2 f\_{y,y} + f f\_{t,y}^{\alpha,1} \right), \\ &\;\;\vdots \end{aligned} \tag{31}$$

Inserting (30) and (31) into (27), we have

$$y\_{n+1} = y\_n + h^a [w\_1 A\_1 + w\_2 A\_2] + h^{2a} [w\_1 B\_1 + w\_2 B\_2] + h^{3a} [w\_1 C\_1 + w\_2 C\_2] + \dotsb \, . \tag{32}$$

Comparing (32) with (12) and equating the coefficients of powers of $h^{\alpha}$, we can get IFORK methods of different orders.

#### *4.1. IFORK Method of Order* 2*α*

To obtain an IFORK method of order $2\alpha$, we equate the coefficients of $h^{\alpha}$ and $h^{2\alpha}$ in (12) and (32) correspondingly to get

$$\begin{aligned} w\_1 + w\_2 &= \frac{1}{\alpha!}, \\ w\_1 \frac{c\_{11}^{\alpha} + c\_{12}^{\alpha}}{\alpha!} + w\_2 \frac{c\_{21}^{\alpha} + c\_{22}^{\alpha}}{\alpha!} &= \frac{2}{(2\alpha)!}, \end{aligned}$$

where

$$2(a\_{11} + a\_{12}) = \frac{c\_{11}^{\alpha} + c\_{12}^{\alpha}}{a!}, \quad 2(a\_{21} + a\_{22}) = \frac{c\_{21}^{\alpha} + c\_{22}^{\alpha}}{a!}.$$

There are now six arbitrary parameters to be prescribed. If we neglect *K*2, i.e., if we choose *a*<sup>21</sup> = *a*<sup>22</sup> = *a*<sup>12</sup> = 0, *w*<sup>2</sup> = 0, from the above equations, we find

$$w\_1 = \frac{1}{\alpha!}, \quad c\_{11}^{\alpha} + c\_{12}^{\alpha} = \frac{2(\alpha!)^2}{(2\alpha)!}, \quad a\_{11} = \frac{\alpha!}{(2\alpha)!}.$$

Therefore, a one-stage IFORK method of order 2*α* is obtained as follows:

$$\begin{cases} \mathcal{K}\_1 = \frac{1}{2} h^a [f(t\_n + c\_{11}h, y\_n + a\_{11}K\_1) + f(t\_n + c\_{12}h, y\_n + a\_{11}K\_1)], \\ y\_{n+1} = y\_n + w\_1 K\_1. \end{cases} \tag{33}$$
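For the test equation $f(t, y) = \lambda y$, both $f$-evaluations in (33) coincide, so the implicit stage is linear in $K_1$ and can be solved in closed form. The sketch below (Python; a minimal linear-test-equation experiment, not general-purpose code) verifies that one step of (33) reproduces the Mittag-Leffler series of the exact solution through order $h^{2\alpha}$:

```python
from scipy.special import gamma

def ifork1_step(alpha, lam, h, y):
    """One step of the one-stage IFORK method (33) applied to f(t, y) = lam*y.
    The stage equation K1 = h^alpha * lam * (y + a11*K1) is solved directly."""
    w1 = 1 / gamma(alpha + 1)
    a11 = gamma(alpha + 1) / gamma(2 * alpha + 1)
    x = lam * h**alpha
    K1 = x * y / (1 - a11 * x)
    return y + w1 * K1

alpha, lam, h = 0.5, -1.0, 0.1
x = lam * h**alpha
y1 = ifork1_step(alpha, lam, h, 1.0)

# Growth factor 1 + (x/alpha!)/(1 - x*alpha!/(2alpha)!):
E = 1 + (x / gamma(alpha + 1)) / (1 - x * gamma(alpha + 1) / gamma(2 * alpha + 1))
assert abs(y1 - E) < 1e-14

# Agreement with the series 1 + x/Gamma(alpha+1) + x^2/Gamma(2*alpha+1) up to O(x^3):
series = 1 + x / gamma(alpha + 1) + x**2 / gamma(2 * alpha + 1)
assert abs(y1 - series) < abs(x) ** 3
```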

#### *4.2. IFORK Method of Order* 3*α*

In addition, we can get an IFORK method of order 3*α* with two stages (26)–(28) when equating the coefficients of *hα*, *h*2*<sup>α</sup>* and *h*3*<sup>α</sup>* in (12) and (32) accordingly. In such case, we obtain the following system of equations:

$$\begin{aligned} w\_1 + w\_2 &= \frac{1}{\alpha!}, \\ w\_1 \frac{c\_{11}^{\alpha} + c\_{12}^{\alpha}}{\alpha!} + w\_2 \frac{c\_{21}^{\alpha} + c\_{22}^{\alpha}}{\alpha!} &= \frac{2}{(2\alpha)!}, \\ w\_1 \left( a\_{11} \frac{c\_{11}^{\alpha} + c\_{12}^{\alpha}}{\alpha!} + a\_{12} \frac{c\_{21}^{\alpha} + c\_{22}^{\alpha}}{\alpha!} \right) + w\_2 \left( a\_{21} \frac{c\_{11}^{\alpha} + c\_{12}^{\alpha}}{\alpha!} + a\_{22} \frac{c\_{21}^{\alpha} + c\_{22}^{\alpha}}{\alpha!} \right) &= \frac{2}{(3\alpha)!}, \\ w\_1 \frac{c\_{11}^{2\alpha} + c\_{12}^{2\alpha}}{(2\alpha)!} + w\_2 \frac{c\_{21}^{2\alpha} + c\_{22}^{2\alpha}}{(2\alpha)!} &= \frac{2}{(3\alpha)!}, \\ w\_1 \left( \frac{c\_{11}^{\alpha} + c\_{12}^{\alpha}}{\alpha!} \right)^2 + w\_2 \left( \frac{c\_{21}^{\alpha} + c\_{22}^{\alpha}}{\alpha!} \right)^2 &= \frac{8}{(3\alpha)!}, \end{aligned}$$

where

$$2(a\_{11} + a\_{12}) = \frac{c\_{11}^{\alpha} + c\_{12}^{\alpha}}{a!}, \quad 2(a\_{21} + a\_{22}) = \frac{c\_{21}^{\alpha} + c\_{22}^{\alpha}}{a!}.$$

The three free parameters can be chosen so that *K*<sup>1</sup> or *K*<sup>2</sup> are explicit. If we want *K*<sup>1</sup> to be explicit, we choose

$$c\_{11} = a\_{11} = a\_{12} = 0.$$

Thus, the IFORK method of order $3\alpha$, which is explicit in $K\_1$, is given by

$$\begin{cases} \mathcal{K}\_1 = \frac{1}{2} h^a [f(t\_n, y\_n) + f(t\_n + c\_{12}h, y\_n)], \\ \mathcal{K}\_2 = \frac{1}{2} h^a [f(t\_n + c\_{21}h, y\_n + a\_{21}K\_1 + a\_{22}K\_2) + f(t\_n + c\_{22}h, y\_n + a\_{21}K\_1 + a\_{22}K\_2)], \\ y\_{n+1} = y\_n + w\_1 K\_1 + w\_2 K\_2. \end{cases} \tag{34}$$

where the coefficients are given by


#### **5. Theoretical Analysis**

To ensure that the numerical solutions obtained by the FORK algorithms approximate the exact solution of the FDE correctly, in this section we first discuss the consistency of the methods and then their convergence, which may impose some additional conditions under which the approximate solutions of Sections 3 and 4 converge to the exact solution of the problem.

#### *5.1. Consistency*

The EFORK and IFORK methods considered before belong to the class of methods characterized by the use of $y\_n$ in the computation of $y\_{n+1}$. This family of one-step methods admits the following representation:

$$\begin{aligned} y\_{n+1} &= y\_n + h^a \Phi(t\_n, y\_n, y\_{n+1}, h), \quad n = 0, \ldots, N^m - 1, \\ y\_0 &= y(t\_0). \end{aligned} \tag{35}$$

where $\Phi : [t\_0, T] \times \mathbb{R}^2 \times (0, h\_0] \to \mathbb{R}$, and for the particular case of the explicit methods we have the representation

$$\begin{aligned} y\_{n+1} &= y\_n + h^{\alpha} \Phi(t\_n, y\_n, h), \quad n = 0, \ldots, N^m - 1, \\ y\_0 &= y(t\_0), \end{aligned} \tag{36}$$

with <sup>Φ</sup> : [*t*0, *<sup>T</sup>*] × R × (0, *<sup>h</sup>*0] → R.

We define the truncation error *τ<sup>n</sup>* by

$$
\tau\_n = \frac{y\_{n+1} - y\_n}{h^{\alpha}} - \Phi(t\_n, y\_n, y\_{n+1}, h). \tag{37}
$$

The one-step method (35) and (36) is said to be consistent with Equation (3) if

$$\lim\_{h \to 0} \tau\_n = 0, \quad N^m = (T - t\_0)/h.$$

Using (12) and (37), we may write

$$\begin{split} \lim\_{h \to 0} \tau\_{n} &= \lim\_{h \to 0} \frac{y\_{n+1} - y\_{n}}{h^{\alpha}} - \lim\_{h \to 0} \Phi(t\_{n}, y\_{n}, y\_{n+1}, h) \\ &= \frac{1}{\Gamma(\alpha+1)}\, {}\_{t\_n}^{c} D\_{t}^{\alpha} y(t\_{n}) - \lim\_{h \to 0} \Phi(t\_{n}, y(t\_{n}), y(t\_{n+1}), h) \\ &= \frac{1}{\Gamma(\alpha+1)} F\_{n}(t\_{n}, y(t\_{n})) - \lim\_{h \to 0} \Phi(t\_{n}, y(t\_{n}), y(t\_{n}+h), h). \end{split}$$

Hence, we may conclude that the proposed one-step IFORK methods are consistent if and only if

$$\Phi(t, y, y, 0) = \frac{1}{\Gamma(\alpha + 1)} F\_n(t, y).$$

or briefly

$$
\Phi(t, y, y, 0) = \frac{1}{\Gamma(\alpha + 1)} f(t, y).
$$

Similarly, for explicit methods, we have

$$
\Phi(t, y, 0) = \frac{1}{\Gamma(\alpha + 1)} f(t, y).
$$

As an example, consider the two-stage EFORK method (15) with (17),

$$y\_{n+1} = y\_n + h^{\alpha} [w\_1 f(t\_n, y\_n) + w\_2 f(t\_n + c\_2 h, y\_n + a\_{21} K\_1)],$$

where, in comparison to (36), we have

$$\Phi(t, y, h) = w\_1 f(t, y) + w\_2 f(t + c\_2 h, y + a\_{21} K\_1).$$

Hence, as *h* tends to 0, it yields

$$
\Phi(t, y, 0) = (w\_1 + w\_2) f(t, y),
$$

and by using (17), we may write

$$
\Phi(t, y, 0) = \frac{1}{\Gamma(\alpha + 1)} f(t, y).
$$

Therefore, the two-stage EFORK method (15) is consistent. In addition, the three-stage EFORK method (21) is consistent for

$$\begin{aligned} \Phi(t, y, h) &= [w\_1 f(t, y) + w\_2 f(t + c\_2 h, y + a\_{21} K\_1) + w\_3 f(t + c\_3 h, y + a\_{31} K\_1 + a\_{32} K\_2)], \\ \Phi(t, y, 0) &= (w\_1 + w\_2 + w\_3) f(t, y), \end{aligned}$$

and so from (22)

$$
\Phi(t, y, 0) = \frac{1}{\Gamma(\alpha + 1)} f(t, y).
$$

Similarly, we can show the consistency of all proposed FORK methods in Sections 3 and 4.

#### *5.2. Convergence Analysis*

Here, we investigate the convergence behavior of the proposed FORK methods (without loss of generality, we consider only explicit FORK methods). To do so, we express a definition of regularity from [35].

**Definition 5.** *A one-step method of the form (36)*

$$y\_{n+1} = y\_n + h^{\alpha} \Phi(t\_n, y\_n, h), \quad n = 0, 1, 2, \dots, N^m - 1,\tag{38}$$

*is said to be regular if the function* Φ(*t*, *y*, *h*) *is defined and continuous in the domain t* ∈ [0, *T*]*, y* ∈ [0, *T*∗] *and h* ∈ [0, *h*0] *(h*<sup>0</sup> *is a positive constant) and if there exists a constant L such that*

$$|\Phi(t, y, h) - \Phi(t, z, h)| \le L|y - z|,$$

*for every t* ∈ [0, *T*]*, y*, *z* ∈ [0, *T*∗] *and h* ∈ [0, *h*0]*.*

To discuss the convergence of the EFORK methods, first, we prove that the given methods in Section 3 are regular. We know from Theorem 2 that *f*(*t*, *y*) satisfies a Lipschitz condition with respect to the second variable. Thus,

$$\begin{split} \left| \Phi(t\_{n}, y\_{n}, h) - \Phi(t\_{n}, y\_{n}^{\*}, h) \right| &= h^{-\alpha} \left| \sum\_{i=1}^{s} w\_{i} K\_{i} - \sum\_{i=1}^{s} w\_{i} K\_{i}^{\*} \right| \\ &\leq h^{-\alpha} (w\_{1} |K\_{1} - K\_{1}^{\*}| + w\_{2} |K\_{2} - K\_{2}^{\*}| + \dots + w\_{s} |K\_{s} - K\_{s}^{\*}|) \\ &\leq w\_{1} L |y\_{n} - y\_{n}^{\*}| + w\_{2} L |y\_{n} + a\_{21} K\_{1} - y\_{n}^{\*} - a\_{21} K\_{1}^{\*}| + \dots \\ &\quad + w\_{s} L |y\_{n} + a\_{s1} K\_{1} + \dots + a\_{s,s-1} K\_{s-1} - y\_{n}^{\*} - a\_{s1} K\_{1}^{\*} - \dots - a\_{s,s-1} K\_{s-1}^{\*}| \\ &\leq \dots \leq L^{\*} |y\_{n} - y\_{n}^{\*}|. \end{split}$$

Therefore, the function Φ satisfies a Lipschitz condition in *y* and it is also continuous; thus, EFORK methods are regular. To establish the convergence behavior, we need the following Lemma from [35].

**Lemma 1.** *Let ω*0, *ω*1, *ω*2,... *be a sequence of real positive numbers that satisfy*

$$
\omega\_{n+1} \le (1+\zeta)\omega\_n + \mu, \qquad n = 0, 1, 2, \dots
$$

*where ζ, μ are positive constants. Then,*

$$
\omega\_n \le e^{n\zeta} \omega\_0 + \left(\frac{e^{n\zeta} - 1}{\zeta}\right) \mu, \qquad n = 0, 1, 2, \dots
$$
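Lemma 1 can be illustrated numerically by iterating the recurrence (taken with equality, the worst case) and comparing against the bound; a small Python sketch with arbitrary positive constants $\zeta$, $\mu$, $\omega_0$:

```python
import math

zeta, mu, w = 0.05, 0.01, 1.0   # arbitrary positive constants; w plays omega_n
omega0 = w
for n in range(1, 201):
    w = (1 + zeta) * w + mu      # recurrence with equality (worst admissible case)
    bound = math.exp(n * zeta) * omega0 + (math.exp(n * zeta) - 1) / zeta * mu
    assert w <= bound + 1e-12    # the bound of Lemma 1 holds at every n
```

The bound follows because $(1+\zeta)^n \le e^{n\zeta}$ for $\zeta > 0$, which the iteration confirms term by term.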

We now discuss the behavior of the error *en* = *y*(*tn*) − *yn* in EFORK method for the initial-value problem (3).

**Theorem 3.** *Consider the initial value problem (3), let $f(t, y(t))$ be continuous and satisfy a Lipschitz condition with Lipschitz constant L, and also let $({}\_{t\_0}^{c} D\_t^{\alpha})^{(s+1)} y(t)$ be continuous for $t \in [t\_0, T]$. Then, the given EFORK method in Section 3 is convergent for $m\alpha \ge 1$ if and only if it is consistent.*

**Proof.** Let the EFORK method be consistent, and the method can be written in the form

$$y\_{n+1} = y\_n + h^{\alpha} \Phi(t\_n, y\_n, h). \tag{39}$$

The exact value *y*(*tn*) will satisfy

$$y(t\_{n+1}) = y(t\_n) + h^{\alpha} \Phi(t\_n, y(t\_n), h) + T\_n, \tag{40}$$

where *Tn* is the truncation error. By subtracting (39) from (40), we have

$$|e\_{n+1}| \le |e\_n| + h^{\alpha} |\Phi(t\_n, y(t\_n), h) - \Phi(t\_n, y\_n, h)| + |T\_n|.$$

Now, from the regularity of the EFORK method, it follows that

$$|e\_{n+1}| \le |e\_n| + h^a L |y(t\_n) - y\_n| + |T\_n| \le (1 + h^a L) |e\_n| + |T\_n|.$$

By using Lemma 1, we have

$$|e\_n| \le (1 + h^{\alpha} L)^n |e\_0| + \left(\frac{e^{nh^{\alpha} L} - 1}{h^{\alpha} L}\right) |T\_n|,$$

where we assumed that the local truncation error is constant for sufficiently large $n$, i.e., $T = T\_n$, $n = 0, 1, 2, \dots$. In addition, assume that $e\_0 = 0$ and $|T\_n| = O(h^{p\alpha})$, $p \ge 3$; therefore,

$$|e\_n| \le O(h^{p\alpha}) \left(\frac{e^{nh^{\alpha} L} - 1}{h^{\alpha} L}\right).$$

In Section 3, we assumed $N^m = (T - t\_0)/h$, so we have

$$|e\_n| \le O(h^{(p-1)\alpha}) \left(\frac{e^{(T-t\_0) L\, h^{\alpha - \frac{1}{m}}} - 1}{L}\right).$$

Thus, the EFORK methods of Sections 3.1 and 3.2 are convergent if $\alpha - \frac{1}{m} \ge 0$, i.e., $m\alpha \ge 1$. Conversely, let the EFORK method be convergent; it is then sufficient to take the limit of (39) as $h$ tends to 0. The proof of the theorem is complete.

#### *5.3. Stability Analysis*

For the stability analysis of the proposed methods in Sections 3 and 4, we consider the FDE:

$$\begin{cases} \, \_{t\_0}^c D\_t^\alpha y(t) = \lambda y(t), \quad \lambda \in \mathbb{C}, \quad 0 < \alpha \le 1, \\ y(t\_0) = y\_0. \end{cases} \tag{41}$$

According to [1], the exact solution of (41) is $y(t) = E\_{\alpha}(\lambda(t - t\_0)^{\alpha})\,y\_0$. When $\mathrm{Re}(\lambda) < 0$, the solution of (41) asymptotically tends to 0 as $t \to \infty$.

We apply the two-stage EFORK method (15), with $w\_1 = w\_2 = \frac{1}{2\Gamma(\alpha+1)}$, to Equation (41) and obtain

$$\begin{split} K\_{1} &= h^{\alpha} f(t\_{n}, y\_{n}) = \lambda h^{\alpha} y\_{n}, \\ K\_{2} &= h^{\alpha} f(t\_{n} + c\_{2} h, y\_{n} + a\_{21} K\_{1}) = \lambda h^{\alpha} (y\_{n} + a\_{21} \lambda h^{\alpha} y\_{n}) \\ &= \left[ \lambda h^{\alpha} + a\_{21} (\lambda h^{\alpha})^{2} \right] y\_{n}, \\ y\_{n+1} &= y\_{n} + w\_{1} K\_{1} + w\_{2} K\_{2} = y\_{n} + \frac{1}{2 \Gamma(\alpha + 1)} \left[ 2\lambda h^{\alpha} + a\_{21} (\lambda h^{\alpha})^{2} \right] y\_{n} \\ &= \left[ 1 + \frac{\lambda h^{\alpha}}{\Gamma(\alpha + 1)} + \frac{a\_{21} (\lambda h^{\alpha})^{2}}{2 \Gamma(\alpha + 1)} \right] y\_{n} = \left[ 1 + \frac{\lambda h^{\alpha}}{\Gamma(\alpha + 1)} + \frac{c\_{2}^{\alpha} (\lambda h^{\alpha})^{2}}{2 (\Gamma(\alpha + 1))^{2}} \right] y\_{n} \\ &= \left[ 1 + \frac{\lambda h^{\alpha}}{\alpha!} + \frac{c\_{2}^{\alpha} (\lambda h^{\alpha})^{2}}{2 (\alpha!)^{2}} \right] y\_{n}. \end{split}$$

Therefore, the growth factor for the two-stage EFORK method (15) is [35]

$$E(\lambda h^{\alpha}) = 1 + \frac{\lambda h^{\alpha}}{\alpha!} + \frac{c\_2^{\alpha} (\lambda h^{\alpha})^2}{2 (\alpha!)^2}.$$

Now, consider the following definition ([35]):

**Definition 6.** *A numerical method is called absolutely stable in the sense of Dahlquist if and only if* <sup>|</sup>*E*(*λhα*)| ≤ <sup>1</sup> *when the method is applied with any positive step-size h to the test Equation* (41)*.*

The interval of the absolute stability of a numerical method is defined as [Re(*λhα*), 0) if and only if <sup>|</sup>*E*(*λhα*)| ≤ 1. So, the two-stage method (15) is absolutely stable if

$$\left|1 + \frac{\lambda h^{\alpha}}{\alpha!} + \frac{c\_2^{\alpha} (\lambda h^{\alpha})^2}{2 (\alpha!)^2}\right| \le 1.$$

If *λ h<sup>α</sup>* < 0, we can find the interval of absolute stability as follows:

$$\frac{-2\,\alpha!}{c\_2^{\alpha}} \le \lambda h^{\alpha} < 0. \tag{42}$$

According to (42), the interval of absolute stability for the two-stage EFORK method (15) depends on $c\_2^{\alpha}$. For instance, if $c\_2^{\alpha} = \frac{2(\alpha!)^2}{(2\alpha)!}$, then the interval of absolute stability is $-\frac{(2\alpha)!}{\alpha!} \le \lambda h^{\alpha} < 0$. In addition, for $c\_2^{\alpha} = \frac{(\Gamma(2\alpha+1))^2}{\Gamma(3\alpha+1)\Gamma(\alpha+1)}$, we have

$$\frac{-2(\alpha!)^2(3\alpha)!}{((2\alpha)!)^2} \le \lambda h^{\alpha} < 0.$$

If we choose $c\_2^{\alpha} = \frac{4\Gamma(\alpha+1)}{\Gamma(3\alpha+1)}$, we get $-\frac{(3\alpha)!}{2} \le \lambda h^{\alpha} < 0$, and for $c\_2^{\alpha} = \frac{\Gamma(\alpha+1)}{\Gamma(3\alpha+1)}$, we obtain $-2(3\alpha)! \le \lambda h^{\alpha} < 0$.
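These endpoints can be spot-checked by sampling the growth factor. A small Python sketch (the grid resolution and the value $\alpha = 1/2$ are arbitrary choices for the experiment):

```python
from scipy.special import gamma

def E2(alpha, c2a, x):
    """Growth factor of the two-stage EFORK method for x = lambda*h^alpha."""
    fa = gamma(alpha + 1)
    return 1 + x / fa + c2a * x**2 / (2 * fa**2)

alpha = 0.5
g1, g2, g3 = gamma(alpha + 1), gamma(2 * alpha + 1), gamma(3 * alpha + 1)

# (c2^alpha, left endpoint of the stability interval) for the four choices above
cases = [
    (2 * g1**2 / g2, -g2 / g1),
    (g2**2 / (g3 * g1), -2 * g1**2 * g3 / g2**2),
    (4 * g1 / g3, -g3 / 2),
    (g1 / g3, -2 * g3),
]
for c2a, left in cases:
    assert abs(left - (-2 * g1 / c2a)) < 1e-12       # endpoint matches (42)
    for k in range(1, 100):                           # sample inside the interval
        x = left * k / 100
        assert abs(E2(alpha, c2a, x)) <= 1 + 1e-12
    assert abs(E2(alpha, c2a, 1.5 * left)) > 1        # outside the interval: unstable
```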

The graphs of *E*(*λhα*) for different two-stage EFORK methods are shown in Figures 1 and 2. From these figures, for (*λ* < 0), we can find the interval of absolute stability for various *α*.

**Figure 1.** The graph of $E(\lambda h^{\alpha})$ for the two-stage EFORK method (15) with $c\_2^{\alpha} = \frac{2(\alpha!)^2}{(2\alpha)!}$ (**left**), and $c\_2^{\alpha} = \frac{(\Gamma(2\alpha+1))^2}{\Gamma(3\alpha+1)\Gamma(\alpha+1)}$ (**right**).

**Figure 2.** The graph of $E(\lambda h^{\alpha})$ for the two-stage EFORK method (15) with $c\_2^{\alpha} = \frac{4\Gamma(\alpha+1)}{\Gamma(3\alpha+1)}$ (**left**), and $c\_2^{\alpha} = \frac{\Gamma(\alpha+1)}{\Gamma(3\alpha+1)}$ (**right**).

In addition, we apply the three-stage EFORK method (21) to Equation (41) and get

$$y\_{n+1} = \left[1 + \frac{\lambda h^{\alpha}}{\alpha!} + \frac{(\lambda h^{\alpha})^2}{(2\alpha)!} + \frac{(\lambda h^{\alpha})^3}{(3\alpha)!}\right] y\_n.$$

Thus, the growth factor for the three-stage EFORK method (21) is

$$E(\lambda h^{\alpha}) = 1 + \frac{\lambda h^{\alpha}}{\alpha!} + \frac{(\lambda h^{\alpha})^2}{(2\alpha)!} + \frac{(\lambda h^{\alpha})^3}{(3\alpha)!}.$$
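This growth factor can be confirmed by applying the three-stage scheme with the coefficients of Section 3.2 to $f(t, y) = \lambda y$; the order conditions (22) make the stage combination collapse exactly to the cubic polynomial above. A Python sketch (coefficients transcribed from the explicit tableau, $\alpha = 1/2$ arbitrary):

```python
from scipy.special import gamma

alpha = 0.5
g1, g2, g3 = gamma(alpha + 1), gamma(2 * alpha + 1), gamma(3 * alpha + 1)
a21 = 1 / (2 * g1**2)
a31 = (g1**2 * g2 + 2 * g2**2 - g3) / (4 * g1**2 * (2 * g2**2 - g3))
a32 = -g2 / (4 * (2 * g2**2 - g3))
w1 = 8 * g1**2 * g2 / g3 - 6 * g1**2 / g2 + 1 / g1
w2 = 2 * g1**2 * (4 * g2**2 - g3) / (g2 * g3)
w3 = -8 * g1**2 * (2 * g2**2 - g3) / (g2 * g3)

def efork3_growth(x):
    """One three-stage EFORK step applied to f = lam*y with x = lam*h^alpha, y_n = 1."""
    K1 = x
    K2 = x * (1 + a21 * K1)
    K3 = x * (1 + a31 * K1 + a32 * K2)
    return 1 + w1 * K1 + w2 * K2 + w3 * K3

for x in [-0.5, -0.1, 0.2]:
    poly = 1 + x / g1 + x**2 / g2 + x**3 / g3   # the growth factor above
    assert abs(efork3_growth(x) - poly) < 1e-12
```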

The three-stage EFORK method is absolutely stable if

$$\left|1 + \frac{\lambda h^{\alpha}}{\alpha!} + \frac{(\lambda h^{\alpha})^2}{(2\alpha)!} + \frac{(\lambda h^{\alpha})^3}{(3\alpha)!}\right| \le 1.$$

The graph of *E*(*λhα*) for the three-stage EFORK method (21) is shown in Figure 3. In this figure, we can see the interval of absolute stability for various *α*.

**Figure 3.** The graph of *E*(*λhα*) for three-stage EFORK method (21).

Finally, we apply the IFORK method (33) to Equation (41) and get

$$E(\lambda h^{\alpha}) = 1 + \frac{\frac{1}{\alpha!} \lambda h^{\alpha}}{1 - \lambda h^{\alpha} \frac{\alpha!}{(2\alpha)!}},$$

with the interval of absolute stability $(-\infty, 0)$ for $\lambda < 0$. In a similar manner, we can obtain the interval of absolute stability for the IFORK method (34). As we can see, the interval of absolute stability of the implicit fractional RK methods is unbounded, so these methods are A-stable.

#### **6. Numerical Examples**

In order to demonstrate the effectiveness and order of accuracy of the proposed methods in Sections 3 and 4, two examples are considered. All computations have been carried out on a Core i7 PC with Mathematica 13.2 software.

**Example 1.** *Consider the fractional differential equation*

$$\begin{aligned} \, \_0^cD\_t^\alpha y(t) &= -y(t) + \frac{t^{4-\alpha}}{\Gamma(5-\alpha)}, \quad t \in [0, T],\\ y(0) &= 0, \end{aligned}$$

*such that the exact solution is $y(t) = t^4 E\_{\alpha,5}(-t^{\alpha})$. The approximate solutions by the two-stage EFORK method (15), the three-stage EFORK method (21), and the IFORK methods (33) and (34) are reported in Tables 1–6 (some Mathematica programs are presented in Appendix A). The computed solutions are compared with the exact solution for different values of $h$, $\alpha$, and $T$. The absolute error at time $T$ is given by*

$$E(h, T) = |y(t\_{N^m}) - y\_{N^m}|,$$

*and the orders of the presented method are computed according to the following relation:*

$$\log\_2 \frac{E(h, T)}{E(h/2, T)}.$$

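For instance, if the error behaved exactly as $E(h) = C\,h^{2\alpha}$, this relation would return $2\alpha$; a trivial sketch (Python, with arbitrary constants $C$ and $\alpha$):

```python
import math

def observed_order(E_h, E_h2):
    """Observed order of accuracy from errors at step sizes h and h/2."""
    return math.log2(E_h / E_h2)

alpha, C = 0.5, 3.7                      # arbitrary constants for the idealized model
E = lambda h: C * h ** (2 * alpha)       # idealized error model E(h) = C*h^(2*alpha)
assert abs(observed_order(E(0.1), E(0.05)) - 2 * alpha) < 1e-12
```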


**Table 1.** Two-stage method (15) for *T* = 1, *m* = 3 in Example 1.

**Table 2.** Two-stage method (15) for *T* = 1, *m* = 2 in Example 1.


**Table 3.** Three-stage method (21) for *T* = 1, *m* = 4 in Example 1.


**Table 4.** Three-stage method (21) for *T* = 1, *m* = 2 in Example 1.


**Table 5.** IFORK methods for *T* = 1, *m* = 2 in Example 1.



**Table 6.** IFORK methods for *T* = 1, *m* = 3 in Example 1.

From Tables 1–6, we can conclude that the computed orders of the truncation errors are in good agreement with the results of Sections 3 and 4. Figure 4 illustrates the error curves of the two-stage EFORK method (15) and the three-stage EFORK method (21) at *T* = 1, with *α* = 1/2, *m* = 2 and different values of *N*.
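The exact solution of Example 1 can be evaluated with a truncated two-parameter Mittag-Leffler series (a Python sketch mirroring Mathematica's `MittagLefflerE[α, 5, z]`; the truncation length is an assumption). For $\alpha = 1$ it collapses to an elementary expression, which gives a convenient check:

```python
import math

def ml2(alpha, beta, z, terms=80):
    # Truncated two-parameter Mittag-Leffler series E_{alpha,beta}(z)
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

def exact(alpha, t):
    # Exact solution of Example 1: y(t) = t^4 * E_{alpha,5}(-t^alpha)
    return t**4 * ml2(alpha, 5.0, -t**alpha)

# For alpha = 1 the series sums to y(t) = e^{-t} - 1 + t - t^2/2 + t^3/6,
# which indeed solves y' = -y + t^{3}/Gamma(4) with y(0) = 0.
for t in [0.3, 1.0, 2.0]:
    closed = math.exp(-t) - 1 + t - t**2 / 2 + t**3 / 6
    assert abs(exact(1.0, t) - closed) < 1e-12
```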

**Example 2.** *Consider the following fractional differential equation from [6]:*

$$\begin{cases} \, \_0^c D\_t^{\alpha} y(t) = \frac{2}{\Gamma(3-\alpha)} t^{2-\alpha} - \frac{1}{\Gamma(2-\alpha)} t^{1-\alpha} - y(t) + t^2 - t, \quad t \in [0, T],\\ y(0) = 0, \end{cases}$$

*with the exact solution $y(t) = t^2 - t$. Again, for different values of $h$, $\alpha$, and $T$, we compare, in Tables 7–10, the results obtained by the two-stage EFORK method (15) and the three-stage EFORK method (21) with the exact solution. In Tables 11 and 12, we report the results obtained by the IFORK methods (33) and (34). From Tables 7–12, we can conclude that the computed orders are in good agreement with the given results of Sections 3 and 4.*
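The source term in Example 2 follows from the Caputo derivatives ${}^c_0D_t^{\alpha}\,t^2 = \frac{2}{\Gamma(3-\alpha)}t^{2-\alpha}$ and ${}^c_0D_t^{\alpha}\,t = \frac{1}{\Gamma(2-\alpha)}t^{1-\alpha}$. These can be confirmed by evaluating the Caputo integral $\frac{1}{\Gamma(1-\alpha)}\int_0^t (t-s)^{-\alpha} y'(s)\,ds$ numerically (a Python sketch; `quad` with an algebraic weight handles the endpoint singularity):

```python
import math
from scipy.integrate import quad

def caputo(yprime, alpha, t):
    """Caputo derivative of order alpha in (0,1):
    (1/Gamma(1-alpha)) * int_0^t (t-s)^(-alpha) * y'(s) ds.
    The weight (t-s)^(-alpha) is passed to QUADPACK's QAWS routine."""
    val, _ = quad(yprime, 0, t, weight='alg', wvar=(0, -alpha))
    return val / math.gamma(1 - alpha)

alpha, t = 0.5, 0.8
d_t2 = caputo(lambda s: 2 * s, alpha, t)   # y = t^2, y' = 2t
d_t1 = caputo(lambda s: 1.0, alpha, t)     # y = t,   y' = 1
assert abs(d_t2 - 2 * t ** (2 - alpha) / math.gamma(3 - alpha)) < 1e-6
assert abs(d_t1 - t ** (1 - alpha) / math.gamma(2 - alpha)) < 1e-6
```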

**Figure 4.** The error curves of the two-stage EFORK method (15) (**left**), and the error curves of the three-stage EFORK method (21) (**right**) for Example 1.

**Table 7.** Two-stage method (15) for *T* = 1, *m* = 3 in Example 2.



**Table 8.** Two-stage method (15) for *T* = 1, *m* = 2 in Example 2.

**Table 9.** Three-stage method (21) for *T* = 1, *m* = 4 in Example 2.


**Table 10.** Three-stage method (21) for *T* = 1, *m* = 2 in Example 2.


**Table 11.** IFORK methods for *T* = 1, *m* = 2 in Example 2.


**Table 12.** IFORK methods for *T* = 1, *m* = 3 in Example 2.


As we can see, the computational orders of the two- and three-stage EFORK methods approach $2\alpha$ and $3\alpha$, respectively, as $h \to 0$. As seen in the Tables and Figures, the three-stage EFORK method provides better results than the two-stage EFORK method. Table 13 shows the numerical results for different values of *T*.


**Table 13.** *E*(*h*, *T*) for *α* = 1/2, *m* = 2 and different values of *T*.

In addition, Tables 5, 6, 11 and 12 show that the computational orders of the IFORK methods (33) and (34) approach $2\alpha$ and $3\alpha$, respectively, as $h \to 0$.

Figure 5 illustrates the numerical results of the two-stage EFORK method (15) and the three-stage EFORK method (21) at *T* = 1 for *α* = 1/2, *m* = 2 and different values of *N*. Additionally, Figure 6 illustrates the numerical results of the IFORK method (33) for Examples 1 and 2 at *T* = 1 for *α* = 1/2, *m* = 2 and different values of *N*.

**Figure 5.** The error curves of the two-stage EFORK method (15) (**left**), and the three-stage EFORK method (21) (**right**) in Example 2.

**Figure 6.** The error curves of the IFORK method (33) in Example 1 (**left**), and Example 2 (**right**).

In addition, the numerical results for the optimal case $c\_2^{\alpha} = \frac{(\Gamma(2\alpha+1))^2}{\Gamma(3\alpha+1)\Gamma(\alpha+1)}$ in the two-stage EFORK method are shown in Table 14 with *α* = 1/2 and different values of *h*.


**Table 14.** Optimal two-stage method (15) for *T* = 1, *m* = 2.

#### **7. Conclusions**

This paper introduces new efficient FORK methods for FDEs based on Caputo generalized Taylor formulas. The proposed methods were examined for consistency, convergence, and stability. The interval of absolute stability of the FORK methods has been determined, and the implicit fractional order RK methods were shown to be A-stable. Several examples were provided to demonstrate the effectiveness of these numerical schemes. Analogous results can be obtained for the Riemann–Liouville and Grünwald–Letnikov fractional derivatives. Recently, a new concept of differentiation called fractal–fractional differentiation was suggested and numerically examined by many researchers [36,37], where the differential operator has two orders: the first is the fractional order and the second is the fractal dimension. These differential (and integral) operators have not yet been studied intensively. In future work, we will extend the presented methods to fractional differential equations with fractal–fractional derivatives.

**Author Contributions:** Conceptualization, F.G. and R.G.; Methodology, F.G. and R.G.; Formal analysis, F.G. and R.G.; Investigation, F.G. and R.G.; Resources, N.S.; Writing—original draft, F.G. and R.G.; Writing—review and editing, N.S.; Funding acquisition, N.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** Partial financial support of this work, under Grant No. GP249507 from the Natural Sciences and Engineering Research Council of Canada, is gratefully acknowledged by the third author.

**Data Availability Statement:** All data and methods used in this research are presented in sufficient detail in the article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:

FORK Fractional Order Runge–Kutta

EFORK Explicit Fractional Order Runge–Kutta

IFORK Implicit Fractional Order Runge–Kutta

#### **Appendix A. Some Mathematica Codes of FORK Methods**

**3-stage EFORK method for Example 1.**

```mathematica
N1 = Input["Please Enter Nm:"]; T = Input["Please Enter T:"];
α = Input["Please Enter α:"]; h = T/N1;
Do[t[n] = n*h, {n, 0, N1}]; y[0] = 0;
w1 = (8 Gamma[1+α]^3 Gamma[1+2α]^2 - 6 Gamma[1+α]^3 Gamma[1+3α] +
      Gamma[1+2α] Gamma[1+3α])/(Gamma[1+α] Gamma[1+2α] Gamma[1+3α]);
w2 = 2 Gamma[1+α]^2 (4 Gamma[1+2α]^2 - Gamma[1+3α])/(Gamma[1+2α] Gamma[1+3α]);
w3 = -8 Gamma[1+α]^2 (2 Gamma[1+2α]^2 - Gamma[1+3α])/(Gamma[1+2α] Gamma[1+3α]);
a11 = 1/(2 Gamma[α+1]^2);
a21 = (Gamma[1+α]^2 Gamma[1+2α] + 2 Gamma[1+2α]^2 - Gamma[1+3α])/
      (4 Gamma[1+α]^2 (2 Gamma[1+2α]^2 - Gamma[1+3α]));
a22 = -Gamma[1+2α]/(4 (2 Gamma[1+2α]^2 - Gamma[1+3α]));
c2 = (1/(2 Gamma[1+α]))^(1/α); c3 = (1/(4 Gamma[1+α]))^(1/α);
f[0][t_, y_] = -y + t^(4-α)/Gamma[5-α];
Do[
  K1 = h^α f[n][t[n], y[n]];
  K2 = h^α f[n][t[n] + c2*h, y[n] + a11*K1];
  K3 = h^α f[n][t[n] + c3*h, y[n] + a22*K2 + a21*K1];
  y[n+1] = y[n] + w1*K1 + w2*K2 + w3*K3;
  Print["n=", n, ": Explicit Error=",
    Abs[y[n+1] - t[n+1]^4*N[MittagLefflerE[α, 5, -t[n+1]^α]]]];
  f[n+1][t_, y_] = f[n][t, y] - (y[n+1] - y[n])*
    ((t - t[n])^(1-α) - (t - t[n+1])^(1-α))/(h*(1-α)*Gamma[1-α]);
  , {n, 0, N1-1}]
```
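The appendix listings are Mathematica. As an independent cross-check, the following is a minimal Python sketch of the same 3-stage EFORK scheme applied to Example 1, i.e., ${}_C D^{\alpha} y = -y + t^{4-\alpha}/\Gamma(5-\alpha)$, $y(0)=0$, with exact solution $y(t) = t^4 E_{\alpha,5}(-t^{\alpha})$. All function names (`ml`, `efork3`) are ours, and the truncated-series Mittag-Leffler evaluation is an assumption adequate only for moderate arguments, not a general-purpose implementation.

```python
import math

def ml(alpha, beta, z, terms=80):
    # Truncated series for the two-parameter Mittag-Leffler function E_{alpha,beta}(z);
    # adequate for the moderate arguments used here.
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

def efork3(alpha, T, N):
    # 3-stage EFORK for  C_D^alpha y = -y + t^(4-alpha)/Gamma(5-alpha),  y(0) = 0.
    g1, g2, g3 = (math.gamma(1 + k * alpha) for k in (1, 2, 3))
    w2 = 2 * g1**2 * (4 * g2**2 - g3) / (g2 * g3)
    w3 = -8 * g1**2 * (2 * g2**2 - g3) / (g2 * g3)
    w1 = 1 / g1 - w2 - w3              # consistency: w1 + w2 + w3 = 1/Gamma(1+alpha)
    a11 = 1 / (2 * g1**2)
    a22 = -g2 / (4 * (2 * g2**2 - g3))
    a21 = 1 / (4 * g1**2) - a22        # so that a21 + a22 = c3^alpha / Gamma(1+alpha)
    c2 = (1 / (2 * g1)) ** (1 / alpha)
    c3 = (1 / (4 * g1)) ** (1 / alpha)
    h = T / N
    t = [n * h for n in range(N + 1)]
    y = [0.0] * (N + 1)
    jumps = []                         # (t_n, t_{n+1}, y_{n+1} - y_n) history terms

    def f(tt, yy):
        # f_n(t, y): the right-hand side minus the accumulated L1-type memory corrections.
        val = -yy + tt ** (4 - alpha) / math.gamma(5 - alpha)
        for tn, tn1, dy in jumps:
            val -= dy * ((tt - tn) ** (1 - alpha) - (tt - tn1) ** (1 - alpha)) \
                   / (h * (1 - alpha) * math.gamma(1 - alpha))
        return val

    for n in range(N):
        K1 = h**alpha * f(t[n], y[n])
        K2 = h**alpha * f(t[n] + c2 * h, y[n] + a11 * K1)
        K3 = h**alpha * f(t[n] + c3 * h, y[n] + a22 * K2 + a21 * K1)
        y[n + 1] = y[n] + w1 * K1 + w2 * K2 + w3 * K3
        jumps.append((t[n], t[n + 1], y[n + 1] - y[n]))
    return y[N]

alpha, T = 0.8, 1.0
approx = efork3(alpha, T, 64)
exact = T**4 * ml(alpha, 5, -T**alpha)
print(approx, exact)
```

The coefficient formulas can be sanity-checked symbolically: the weights satisfy $\sum_i w_i = 1/\Gamma(1+\alpha)$, $w_2 c_2^{\alpha} + w_3 c_3^{\alpha} = \Gamma(1+\alpha)/\Gamma(1+2\alpha)$, and $w_2 c_2^{2\alpha} + w_3 c_3^{2\alpha} = \Gamma(1+2\alpha)/\Gamma(1+3\alpha)$, which is how this sketch was checked against the listing above.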

#### **2-stage IFORK method for Example 1.**

```mathematica
N1 = Input["Please Enter Nm:"]; T = Input["Please Enter T:"];
α = Input["Please Enter α:"]; h = T/N1;
Do[t[n] = n*h, {n, 0, N1}]; y[0] = 0;
w1 = 1/Gamma[1+α] - Gamma[1+3α]/(2 Gamma[1+2α]^2);
w2 = Gamma[1+3α]/(2 Gamma[1+2α]^2);
a21 = Gamma[1+2α]/Gamma[1+3α];
a22 = Gamma[1+2α]/Gamma[1+3α];
c21 = ((1/Gamma[1+3α]^2) (2 Gamma[1+α] Gamma[1+2α] Gamma[1+3α] -
       Sqrt[2 Gamma[1+2α]^2 (-2 Gamma[1+α]^2 + Gamma[1+2α]) Gamma[1+3α]^2]))^(1/α);
c22 = ((1/Gamma[1+3α]^2) (2 Gamma[1+α] Gamma[1+2α] Gamma[1+3α] +
       Sqrt[2 Gamma[1+2α]^2 (-2 Gamma[1+α]^2 + Gamma[1+2α]) Gamma[1+3α]^2]))^(1/α);
f[0][t_, y_] = -y + t^(4-α)/Gamma[5-α];
Do[
  K1 = h^α f[n][t[n], y[n]];
  K2 = (1/2)*h^α*(f[n][t[n] + c21*h, y[n] + a21*K1] +
        f[n][t[n] + c22*h, y[n] + a21*K1])/(1 + h^α*a22);
  y[n+1] = y[n] + w1*K1 + w2*K2;
  Print["n=", n, ": Implicit Error=",
    Abs[y[n+1] - t[n+1]^4*N[MittagLefflerE[α, 5, -t[n+1]^α]]]];
  f[n+1][t_, y_] = f[n][t, y] - (y[n+1] - y[n])*
    ((t - t[n])^(1-α) - (t - t[n+1])^(1-α))/(h*(1-α)*Gamma[1-α]);
  , {n, 0, N1-1}]
```

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Local Discontinuous Galerkin Method Coupled with Nonuniform Time Discretizations for Solving the Time-Fractional Allen-Cahn Equation**

**Zhen Wang 1,\*, Luhan Sun <sup>1</sup> and Jianxiong Cao <sup>2</sup>**


**Abstract:** This paper aims to numerically study the time-fractional Allen-Cahn equation, where the time-fractional derivative is in the sense of Caputo with order *α* ∈ (0, 1). Considering the weak singularity of the solution *u*(**x**, *t*) at the starting time, i.e., its first and/or second derivatives with respect to time blow up as *t* → 0<sup>+</sup> even though the function itself is right continuous at *t* = 0, two well-known difference formulas, the nonuniform L1 formula and the nonuniform L2-1*σ* formula, are used to approximate the Caputo time-fractional derivative, while the local discontinuous Galerkin (LDG) method is applied to discretize the spatial derivative. With the help of discrete fractional Gronwall-type inequalities, the stability and optimal error estimates of the fully discrete numerical schemes are demonstrated. Numerical experiments are presented to validate the theoretical results.

**Keywords:** time-fractional Allen-Cahn equation; nonuniform time meshes; local discontinuous Galerkin method; stability and convergence

#### **1. Introduction**

The classical Allen-Cahn equation, originally proposed by Allen and Cahn [1] to describe the motion of antiphase boundaries in crystalline solids, has subsequently been used in a wide variety of problems such as vesicle membranes, nucleation of solids, and mixtures of two incompressible fluids [2]. It has become a fundamental model equation for diffusion interface methods in materials science to study phase transitions and interface dynamics [3]. Since the Allen-Cahn equation is nonlinear and it is not easy to obtain its analytical solution, various numerical methods have been proposed to solve it, for example, finite difference methods [4], finite element methods [5], local discontinuous Galerkin (LDG) methods [6], and so on. Most of these studies focused on integer-order phase-field models, implicitly assuming that the motion of the underlying particles is normal diffusion and that the spatial interactions between them are local. However, in the original formulation of the physical model [7], nonlocal interactions were part of the phase-field model, and in the following decades the phase-field model was approximated by the local model by assuming slow spatial variations. Meanwhile, it has been reported that the presence of nonlocal operators in time [8] or space [9] in the phase-field model may significantly change the diffusion dynamics.

In this paper, we consider the LDG method for the following time-fractional Allen-Cahn equation

$$\begin{cases} {}\_{C}\mathrm{D}\_{0,t}^{\alpha} u - \varepsilon^2 \Delta u = -F'(u) =: f(u), \; \mathbf{x} \in \Omega, \; 0 < t \le T, \\ u(\mathbf{x}, 0) = u\_0(\mathbf{x}), \; \mathbf{x} \in \overline{\Omega}, \\ u(\mathbf{x}, t) = 0, \; \mathbf{x} \in \partial \Omega, \; 0 < t \le T, \end{cases} \tag{1}$$

**Citation:** Wang, Z.; Sun, L.; Cao, J. Local Discontinuous Galerkin Method Coupled with Nonuniform Time Discretizations for Solving the Time-Fractional Allen-Cahn Equation. *Fractal Fract.* **2022**, *6*, 349. https:// doi.org/10.3390/fractalfract6070349

Academic Editors: Libo Feng, Yang Liu, Lin Liu and Stanislaw Migorski

Received: 28 May 2022 Accepted: 20 June 2022 Published: 22 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

where *ε* is an interface width parameter and Ω = (−1, 1)*<sup>d</sup>* is a bounded domain of R*<sup>d</sup>* with *d* = 1, 2. The operator ${}\_{C}\mathrm{D}\_{0,t}^{\alpha}$ denotes the Caputo-type fractional derivative of order *α* ∈ (0, 1) in time, which is a typical example of a nonlocal operator and is defined as [10]

$${}\_{C}\mathrm{D}\_{0,t}^{\alpha} u(\mathbf{x}, t) = \frac{1}{\Gamma(1 - \alpha)} \int\_0^t (t - s)^{-\alpha} \frac{\partial u(\mathbf{x}, s)}{\partial s} \mathrm{d}s. \tag{2}$$

The nonlinear term *F*(*u*) is the interfacial (or potential) energy. To facilitate the mathematical and numerical analysis of the phase-field model, the following Ginzburg-Landau double-well potential has often been used [11,12]

$$F(u) = \frac{1}{4}(1 - u^2)^2.$$

This is a relatively simple phenomenological double-well potential that is commonly used in physical and geometrical applications. It was first shown in [13] that the time-fractional Allen-Cahn equation satisfies the following energy law

$$E(u(t)) \le E(u\_0),$$

where *E*(*u*(*t*)) is the total energy defined by

$$E(u) := \int\_{\Omega} \left( \frac{\varepsilon^2}{2} |\nabla u|^2 + F(u) \right) \mathrm{d}\mathbf{x}.$$

For the time-fractional Allen-Cahn Equation (1), several numerical studies have been done. In [8], Liu et al. proposed an efficient finite-difference scheme and a Fourier spectral scheme for the time-fractional Allen-Cahn and Cahn-Hilliard phase-field equations, but there was no stability analysis or error estimate in this paper. In [13], Tang et al. proposed a class of finite difference schemes for the time-fractional phase-field equation. They also proved for the first time that the fractional phase-field model does admit an integral-type energy dissipation law. In [14], Liu et al. considered a fast algorithm based on a two-mesh finite element format for numerically solving the nonlinear spatial-fractional Allen-Cahn equation with smooth and nonsmooth solutions. In [11], Du et al. first studied the well-posedness and regularity of the time-fractional Allen-Cahn equation, and then developed several unconditionally solvable and stable numerical schemes to solve it. In [15], Huang and Stynes presented a numerical scheme to solve the time-fractional Allen-Cahn equation, which is based on the Galerkin finite element method in space and the nonuniform L1 formula in time. In [16], Hou et al. constructed a first-order scheme and a (2 − *α*)th-order scheme for the time-fractional Allen-Cahn equation. In [17], Jiang et al. considered the Legendre spectral method for the time-fractional Allen-Cahn equation. In a series of works [18–20], Liao et al. proposed several efficient finite difference schemes to solve the time-fractional phase-field type models.

The LDG method is a special class of discontinuous Galerkin (DG) methods, introduced first by Cockburn and Shu [21]. This type of method not only inherits the advantages of DG methods, but it can also easily handle meshes with hanging nodes, cells of general shape, and different types of local spaces, so it is flexible for *hp*-adaptivity [22,23]. In addition, the LDG scheme is locally solvable, i.e., the auxiliary variables approximating the derivatives of the solution can be eliminated locally. Therefore, we would like to extend the LDG method to the numerical solution of the time-fractional Allen-Cahn Equation (1) and further enrich the numerical methods for solving such equations. Specifically, we construct two fully discrete numerical schemes for problem (1). For the first scheme, we utilize the nonuniform L1 formula to compute the time-fractional derivative and apply the LDG method to approximate the spatial derivative. With the aid of the discrete fractional Gronwall inequality, we show that the constructed scheme is numerically stable and prove the optimal error estimate in detail (i.e., (2 − *α*)th-order accuracy in time and (*k* + 1)th-order accuracy in space when piecewise polynomials of degree up to *k* are used). If the solution of Equation (1) has better regularity in the time direction, we approximate the time-fractional derivative by the nonuniform L2-1*σ* formula and still use the LDG method to approximate the spatial derivative. The stability and convergence of this scheme are also carefully investigated, and it is proved that the scheme achieves second-order accuracy in the time direction.

The rest of the paper is organized as follows. In Section 2, we will introduce some necessary notations, projections, and corresponding interpolation properties. In Sections 3 and 4, we consider the LDG method for the time-fractional Allen-Cahn Equation (1). The stability and optimal convergence results are obtained. In Section 5, we perform some numerical experiments to verify the theoretical statements. A brief concluding remark is given in Section 6.

#### **2. Preliminaries**

Let us start by presenting some notations for the mesh, function space, and norm. We also present some projections and certain corresponding interpolation properties for the finite element spaces which will be used for the convergence analysis.

#### *2.1. Finite Element Space and Notations*

Let $\mathcal{T}\_h$ be a shape-regular subdivision of $\Omega$ with elements $K$, and let $\Gamma$ denote the union of the boundaries of the elements $K \in \mathcal{T}\_h$, i.e., $\Gamma = \cup\_{K \in \mathcal{T}\_h} \partial K$. Let $e$ be a face shared by the "left" and "right" elements $K\_L$ and $K\_R$. Define the normal vectors $\nu\_L$ and $\nu\_R$ on $e$ pointing exterior to $K\_L$ and $K\_R$, respectively. If $\phi$ is a function on $K\_L$ and $K\_R$, but possibly discontinuous across $e$, let $\phi\_L$ denote $(\phi|\_{K\_L})|\_e$ and $\phi\_R$ denote $(\phi|\_{K\_R})|\_e$, the left and right traces, respectively. The associated finite element spaces are defined as

$$\begin{aligned} V\_h &= \left\{ v \in L^{2}(\Omega) : v|\_{K} \in \mathcal{Q}^{k}(K), \,\forall K \in \mathcal{T}\_{h} \right\}, \\ \Sigma\_h &= \left\{ \mathbf{q} = (q\_{1}, \cdots, q\_{d})^{T} \in (L^{2}(\Omega))^{d} : q\_{l}|\_{K} \in \mathcal{Q}^{k}(K), \, l = 1, \cdots, d, \,\forall K \in \mathcal{T}\_{h} \right\}, \end{aligned}$$

where <sup>Q</sup>*k*(*K*) denotes the space of polynomials of degrees at most *<sup>k</sup>* <sup>≥</sup> 0 defined on *<sup>K</sup>*. In particular, for one-dimensional case, we have <sup>Q</sup>*k*(*K*) = <sup>P</sup>*k*(*K*).

We define the inner product over the element *K* by

$$\begin{aligned} (u, v)\_{K} &= \int\_{K} u v \, \mathrm{d}K, & \langle u, v \rangle\_{\partial K} &= \int\_{\partial K} u v \, \mathrm{d}s, \\ (\mathbf{p}, \mathbf{q})\_{K} &= \int\_{K} \mathbf{p} \cdot \mathbf{q} \, \mathrm{d}K, & \langle \mathbf{p}, \mathbf{q} \rangle\_{\partial K} &= \int\_{\partial K} \mathbf{p} \cdot \mathbf{q} \, \mathrm{d}s, \end{aligned}$$

for scalar variables *u*, *v* and vector variables **p**, **q** respectively. The inner products on Ω are defined as

$$(u, v)\_{\Omega} = \sum\_{K} (u, v)\_{K}, \quad (\mathbf{p}, \mathbf{q})\_{\Omega} = \sum\_{K} (\mathbf{p}, \mathbf{q})\_{K}.$$

Furthermore, the $L^2$ norms on the domain $\Omega$ and the boundary $\Gamma$ are given by

$$\|u\|\_{\Omega}^{2} = (u, u)\_{\Omega}, \quad \|u\|\_{\Gamma}^{2} = \langle u, u \rangle\_{\Gamma},$$

$$\|\mathbf{p}\|\_{\Omega}^{2} = (\mathbf{p}, \mathbf{p})\_{\Omega}, \quad \|\mathbf{p}\|\_{\Gamma}^{2} = \langle \mathbf{p}, \mathbf{p} \rangle\_{\Gamma}.$$

For any nonnegative integer $m$, $H^m(\Omega)$ denotes the standard Sobolev space with its associated norm $\|\cdot\|\_{m,\Omega}$ and seminorm $|\cdot|\_{m,\Omega}$.

#### *2.2. Projections and Interpolation Properties*

In this subsection, we follow [24] to define the projections in one- and two-dimensional space, respectively.

**One-dimensional case.** Assume that the mesh consisting of cells $K\_j = (x\_{j-\frac{1}{2}}, x\_{j+\frac{1}{2}})$, for $1 \le j \le N$, where $-1 = x\_{\frac{1}{2}} < x\_{\frac{3}{2}} < \cdots < x\_{N+\frac{1}{2}} = 1$, covers $\Omega = [-1, 1]$. Denote $x\_j = (x\_{j-\frac{1}{2}} + x\_{j+\frac{1}{2}})/2$, $h\_j = x\_{j+\frac{1}{2}} - x\_{j-\frac{1}{2}}$, and $h = \max\_{1 \le j \le N} h\_j$. We assume $\mathcal{T}\_h$ is a quasi-uniform mesh in this case; namely, there exists a fixed positive constant $\nu$ independent of $h$ such that $\nu h \le h\_j \le h$ for $j = 1, \ldots, N$, as $h$ goes to zero. We introduce the standard $L^2$ projection of a function $u \in L^2(\Omega)$ into the finite element space $V\_h$, denoted by $\mathcal{P}\_h u$, which is the unique function in $V\_h$ satisfying

$$\int\_{K\_j} \left(\mathcal{P}\_h u - u\right) v\_h \mathrm{d}x = 0, \ \forall v\_h \in \mathcal{P}^k(K\_j), \ j = 1, \ldots, N. \tag{3}$$

For any given function $u \in H^1(\Omega)$ and an arbitrary element $K\_j$, the special Gauss-Radau projection of $u$, denoted by $\mathcal{P}\_h^{\pm} u$, is the unique function in $V\_h$ satisfying, for each $j$,

$$\int\_{K\_j} \left(\mathcal{P}\_h^+ u - u\right) v\_h \mathrm{d}x = 0, \forall v\_h \in \mathcal{P}^{k-1}(K\_j), \ (\mathcal{P}\_h^+ u)\_{j-\frac{1}{2}}^+ = u(x\_{j-\frac{1}{2}}^+), \tag{4}$$

$$\int\_{K\_j} \left(\mathcal{P}\_h^- u - u\right) v\_h \mathrm{d}x = 0, \forall v\_h \in \mathcal{P}^{k-1}(K\_j), \ (\mathcal{P}\_h^- u)^-\_{j+\frac{1}{2}} = u(x^-\_{j+\frac{1}{2}}).\tag{5}$$
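To make the Gauss-Radau projection concrete, the sketch below implements $\mathcal{P}\_h^-$ from (5) for $k = 1$ on a uniform mesh: on each cell the projection matches the cell average (the $\mathcal{P}^0$ moment) and the right endpoint value, and the $L^2$ error should shrink at the rate $h^{k+1} = h^2$ predicted by the interpolation estimate (8). The function names, the test function $u = \sin x$, and the quadrature choice are ours, not from the paper.

```python
import math

def gauss3(f, a, b):
    # 3-point Gauss-Legendre quadrature on [a, b] (exact for polynomials up to degree 5).
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)
    s = math.sqrt(3.0 / 5.0)
    return xr * (5 * f(xm - s * xr) + 8 * f(xm) + 5 * f(xm + s * xr)) / 9.0

def radau_minus_error(u, N):
    # Gauss-Radau projection P_h^- for k = 1 on a uniform mesh of [-1, 1]:
    # on each cell, match the cell average and the right endpoint value of u,
    # then accumulate the squared L^2 projection error.
    h = 2.0 / N
    err2 = 0.0
    for j in range(N):
        xl, xr = -1 + j * h, -1 + (j + 1) * h
        xc = 0.5 * (xl + xr)
        a_coef = gauss3(u, xl, xr) / h        # cell average of u (P^0 moment)
        b_coef = (u(xr) - a_coef) / (h / 2)   # slope fixed by the right-endpoint match
        p = lambda x: a_coef + b_coef * (x - xc)
        err2 += gauss3(lambda x: (p(x) - u(x)) ** 2, xl, xr)
    return math.sqrt(err2)

e8 = radau_minus_error(math.sin, 8)
e16 = radau_minus_error(math.sin, 16)
print(e8 / e16)   # expect roughly 2^(k+1) = 4 when the mesh is halved
```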

**Two-dimensional case.** Let $\mathcal{T}\_h = \{K\_{ij}\}\_{i=1,\ldots,N\_x}^{j=1,\ldots,N\_y}$ denote a subdivision of $\Omega = (-1, 1)^2$ with rectangular elements $K\_{ij} = I\_i \times J\_j$, where $I\_i = (x\_{i-1/2}, x\_{i+1/2})$ and $J\_j = (y\_{j-1/2}, y\_{j+1/2})$, with length $h\_i^x = x\_{i+1/2} - x\_{i-1/2}$ and width $h\_j^y = y\_{j+1/2} - y\_{j-1/2}$. Let $h\_{ij} = \max\{h\_i^x, h\_j^y\}$ and denote $h = \max\_{K\_{ij} \in \mathcal{T}\_h} h\_{ij}$. We also assume $\mathcal{T}\_h$ is quasi-uniform in this case; namely, there exists a fixed positive constant $\nu$ independent of $h$ such that $\nu h \le \min\{h\_i^x, h\_j^y\} \le h$ for $i = 1, \ldots, N\_x$ and $j = 1, \ldots, N\_y$. Similar to the one-dimensional case, we need to introduce a suitable projection $\mathcal{P}\_h^{\pm}$. The projection for a scalar function is defined as

$$\mathcal{P}\_h^- = \mathcal{P}\_{h,x}^- \times \mathcal{P}\_{h,y}^-, \tag{6}$$

where the subscripts *x* and *y* indicate that the one-dimensional projection P− *<sup>h</sup>* defined by (5) is applied with respect to the corresponding variable.

Let P*h*,*<sup>x</sup>* and P*h*,*<sup>y</sup>* be the standard *L*<sup>2</sup> projections in the *x* and *y* directions, respectively. The projection Π<sup>+</sup> *<sup>h</sup>* for vector-valued function **<sup>q</sup>** <sup>=</sup> (*q*1(*x*, *<sup>y</sup>*), *<sup>q</sup>*2(*x*, *<sup>y</sup>*)) <sup>∈</sup> [*H*1(Ω)]<sup>2</sup> is defined by

$$\Pi\_h^+ = \mathcal{P}\_{h,x}^+ \times \mathcal{P}\_{h,y} : [H^1(\Omega)]^2 \to [\mathcal{Q}^k(I\_i \times J\_j)]^2,$$

which satisfies

$$\begin{split} & \int\_{I\_{i}} \int\_{J\_{j}} (\Pi\_{h}^{+} \mathbf{q} - \mathbf{q}) \cdot \nabla w \, \mathrm{d}x \mathrm{d}y = 0, \; \forall w \in \mathcal{Q}^{k}(I\_{i} \times J\_{j}), \\ & \int\_{J\_{j}} (\Pi\_{h}^{+} \mathbf{q}(x\_{i-1/2}, y) - \mathbf{q}(x\_{i-1/2}, y)) \cdot \mathbf{n} \, w(x\_{i-1/2}^{+}, y) \mathrm{d}y = 0, \; \forall w \in \mathcal{Q}^{k}(I\_{i} \times J\_{j}), \\ & \int\_{I\_{i}} (\Pi\_{h}^{+} \mathbf{q}(x, y\_{j-1/2}) - \mathbf{q}(x, y\_{j-1/2})) \cdot \mathbf{n} \, w(x, y\_{j-1/2}^{+}) \mathrm{d}x = 0, \; \forall w \in \mathcal{Q}^{k}(I\_{i} \times J\_{j}), \end{split}$$

where $\mathbf{n}$ is the outward unit normal vector of the domain of integration.

**Interpolation properties.** The projections defined above have the following approximation properties. If *<sup>u</sup>* <sup>∈</sup> *<sup>H</sup>k*+1(Ω), we have (see Lemma 2.4 in [25])

$$\left\| \mathcal{P}\_h^{\pm} u - u \right\|\_{\Omega} \le C h^{k+1} \|u\|\_{H^{k+1}(\Omega)}, \tag{8}$$

$$\left\| \Pi\_{h}^{+} \mathbf{q} - \mathbf{q} \right\|\_{\Omega} \le C h^{k+1} \|\mathbf{q}\|\_{H^{k+1}(\Omega)}. \tag{9}$$

The projection P− *<sup>h</sup>* on the Cartesian meshes has the following superconvergence property (see Lemma 3.7 in [25]).

**Lemma 1.** *Assume u* <sup>∈</sup> *<sup>H</sup>k*+2(Ω)*,* **<sup>q</sup>** <sup>∈</sup> <sup>Σ</sup>*h, then the projection defined by* (6) *satisfies*

$$\left| (u - \mathcal{P}\_{h}^{-} u, \nabla \cdot \mathbf{q})\_{\Omega} - \langle u - \widehat{\mathcal{P}\_{h}^{-} u}, \mathbf{q} \cdot \mathbf{n} \rangle\_{\Gamma} \right| \le C h^{k+1} \|u\|\_{H^{k+2}(\Omega)} \|\mathbf{q}\|\_{\Omega},$$

*where the "hat" term is the numerical flux.*

#### **3. Nonuniform L1–LDG Scheme**

In this section, Equation (1) is first transformed into a first-order system of differential equations. Then the L1 method on nonuniform meshes is applied to the time-fractional derivative and the spatial derivative is approximated by the LDG method, and a fully discrete numerical scheme is obtained. The stability analysis and error estimate of the scheme is given by choosing suitable numerical fluxes.

#### *3.1. The Fully Discrete Numerical Scheme and Its Stability Analysis*

The usual notations of the nonuniform L1 formula are introduced here. Let $M$ be a positive integer. Set $t\_n = T(n/M)^r$ for $n = 0, 1, \ldots, M$, where the temporal mesh grading parameter $r \ge 1$ is chosen by the user. Let $\tau\_n = t\_n - t\_{n-1}$, $n = 1, \ldots, M$, be the time mesh sizes. It is easy to see that when $r = 1$ the mesh is uniform.

For $n \ge 1$, we approximate the Caputo fractional derivative ${}\_{C}\mathrm{D}\_{0,t}^{\alpha} u(\mathbf{x}, t\_n)$ by the well-known L1 formula [26]

$$\begin{split} {}\_{C}\mathrm{D}\_{0,t}^{\alpha} u(\mathbf{x}, t\_n) &\approx \Upsilon\_t^{\alpha} u(\mathbf{x}, t\_n) \\ &:= \frac{d\_{n,1}}{\Gamma(2-\alpha)} u^n - \frac{d\_{n,n}}{\Gamma(2-\alpha)} u^0 + \frac{1}{\Gamma(2-\alpha)} \sum\_{i=1}^{n-1} u^{n-i} (d\_{n,i+1} - d\_{n,i}), \end{split} \tag{10}$$

where $d\_{n,i} = \left[ (t\_n - t\_{n-i})^{1-\alpha} - (t\_n - t\_{n-i+1})^{1-\alpha} \right] / \tau\_{n-i+1}$ for $i = 1, \ldots, n$. For simplicity, when no confusion arises we denote $u^n = u(\mathbf{x}, t\_n)$.
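A direct transcription of (10) can be checked against a case where it is exact: for $u(t) = t$ the piecewise-linear reconstruction underlying the L1 formula reproduces $u$ exactly, so the discrete derivative must equal ${}\_{C}\mathrm{D}\_{0,t}^{\alpha} t = t^{1-\alpha}/\Gamma(2-\alpha)$ up to rounding error. A Python sketch (function names ours):

```python
import math

def l1_caputo(u, alpha, T, M, r):
    # Nonuniform L1 approximation (formula (10)) of the Caputo derivative of u
    # on the graded mesh t_n = T*(n/M)^r; returns the mesh and the values at t_1..t_M.
    t = [T * (n / M) ** r for n in range(M + 1)]
    g = math.gamma(2 - alpha)
    vals = []
    for n in range(1, M + 1):
        # d[i-1] = d_{n,i} = [(t_n - t_{n-i})^(1-alpha) - (t_n - t_{n-i+1})^(1-alpha)] / tau_{n-i+1}
        d = [((t[n] - t[n - i]) ** (1 - alpha) - (t[n] - t[n - i + 1]) ** (1 - alpha))
             / (t[n - i + 1] - t[n - i]) for i in range(1, n + 1)]
        v = d[0] * u(t[n]) / g - d[n - 1] * u(t[0]) / g
        v += sum(u(t[n - i]) * (d[i] - d[i - 1]) for i in range(1, n)) / g
        vals.append(v)
    return t, vals

alpha, T, M, r = 0.4, 1.0, 64, 2.0
t, vals = l1_caputo(lambda s: s, alpha, T, M, r)
# Exact Caputo derivative of u(t) = t is t^(1-alpha)/Gamma(2-alpha).
max_err = max(abs(v - t[n] ** (1 - alpha) / math.gamma(2 - alpha))
              for n, v in zip(range(1, M + 1), vals))
print(max_err)
```

For less trivial data the formula is only approximate, with the truncation error quantified by Lemmas 2 and 3 below; the linear test merely confirms that the coefficients $d\_{n,i}$ are coded consistently with (10).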

Set $a\_{n-k}^{(n)} = d\_{n,n-k+1} / \Gamma(2-\alpha)$ for $k = 1, \ldots, n$ and

$$P\_{n-k}^{(n)} = \frac{1}{a\_0^{(k)}} \begin{cases} 1, & k=n, \\ \sum\_{j=k+1}^n (a\_{j-k-1}^{(j)} - a\_{j-k}^{(j)}) P\_{n-j'}^{(n)}, & 1 \le k \le n-1. \end{cases}$$

Therefore, the approximation (10) can be written as $\Upsilon\_t^{\alpha} u^n = \sum\_{i=1}^{n} a\_{n-i}^{(n)} (u^i - u^{i-1})$ for $n = 1, \ldots, M$. It follows from Lemma 2.1 in [27] that the coefficients $\{P\_{n-k}^{(n)}\}$ satisfy

$$\sum\_{k=1}^{n} P\_{n-k}^{(n)} \le t\_n^{\alpha} / \Gamma(1+\alpha). \tag{11}$$
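The complementary kernels $P\_{n-k}^{(n)}$ and the bound (11) can also be verified numerically. The sketch below (all names ours) builds the L1 kernels $a^{(j)}$ on a graded mesh, runs the recursion defining $P\_{n-k}^{(n)}$, and compares $\sum\_k P\_{n-k}^{(n)}$ against $t\_n^{\alpha}/\Gamma(1+\alpha)$:

```python
import math

def complementary_kernels(alpha, T, M, r, n):
    # L1 kernels a^(j)_m = d_{j,m+1}/Gamma(2-alpha) on the graded mesh t_i = T*(i/M)^r,
    # and the complementary kernels P^(n)_{n-k} defined by the recursion in the text.
    t = [T * (i / M) ** r for i in range(M + 1)]
    g2 = math.gamma(2 - alpha)

    def a(j, m):
        i = m + 1
        tau = t[j - i + 1] - t[j - i]
        return ((t[j] - t[j - i]) ** (1 - alpha)
                - (t[j] - t[j - i + 1]) ** (1 - alpha)) / (tau * g2)

    P = {}                           # P[n - k] = P^(n)_{n-k}
    for k in range(n, 0, -1):
        if k == n:
            P[n - k] = 1.0 / a(k, 0)
        else:
            s = sum((a(j, j - k - 1) - a(j, j - k)) * P[n - j]
                    for j in range(k + 1, n + 1))
            P[n - k] = s / a(k, 0)
    return t, P

alpha, T, M, r, n = 0.6, 1.0, 32, 2.0, 32
t, P = complementary_kernels(alpha, T, M, r, n)
total = sum(P.values())
bound = t[n] ** alpha / math.gamma(1 + alpha)
print(total, bound)
```

The kernels come out nonnegative and their sum stays below the bound, which is exactly the structure that the discrete fractional Gronwall argument of Section 3 relies on.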

Denote the truncation error $R\_1^n$ as

$$R\_1^n = {}\_{C}\mathrm{D}\_{0,t}^{\alpha} u(\mathbf{x}, t\_n) - \Upsilon\_t^{\alpha} u(\mathbf{x}, t\_n).$$

**Lemma 2** ([26])**.** *Assume that $\| \partial^l u(\mathbf{x}, t) / \partial t^l \|\_{\Omega} \le C(1 + t^{\alpha - l})$ for $l = 0, 1, 2$. Then the following estimate holds*

$$\|R\_1^n\|\_{\Omega} \le C n^{-\min\{2-\alpha, r\alpha\}}.$$

**Lemma 3** ([27])**.** *Assume that $u(\mathbf{x}, \cdot) \in C^2((0, T])$ and $\| \partial^l u(\mathbf{x}, t) / \partial t^l \|\_{\Omega} \le C(1 + t^{\alpha - l})$ for $l = 0, 1, 2$. Then the following estimate holds*

$$\sum\_{j=1}^{n} P\_{n-j}^{(n)} \|R\_1^j\|\_{\Omega} \le C \left( \alpha^{-1} t\_n^{\alpha} M^{-r\alpha} + \frac{r^2}{1-\alpha} 4^{r-1} t\_n^{\alpha} M^{-\min\{r\alpha, 2-\alpha\}} \right), \quad n \ge 1. \tag{12}$$

As the usual treatment, we would like to introduce the auxiliary variable **p** = ∇*u* and consider the equivalent first-order system

$$
{}\_{C}\mathrm{D}\_{0,t}^{\alpha} u - \varepsilon^2 \nabla \cdot \mathbf{p} - f(u) = 0, \tag{13a}
$$

$$
\mathbf{p} - \nabla u = 0. \tag{13b}
$$

Then the weak formulation of (13) at *tn* can be written as

$$({}\_{C}\mathrm{D}\_{0,t}^{\alpha} u^n, v)\_{K} + \varepsilon^2 (\mathbf{p}^n, \nabla v)\_{K} - \varepsilon^2 \langle \mathbf{p}^n \cdot \mathbf{n}, v \rangle\_{\partial K} - (f(u^n), v)\_{K} = 0, \tag{14a}$$

$$(\mathbf{p}^n, \mathbf{w})\_K + (u^n, \nabla \cdot \mathbf{w})\_K - \langle u^n, \mathbf{w} \cdot \mathbf{n} \rangle\_{\partial K} = 0,\tag{14b}$$

where *v*, **w** are test functions.

Let $(U\_h^n, \mathbf{P}\_h^n) \in (V\_h, \Sigma\_h)$ be the approximations of $u^n$ and $\mathbf{p}^n$, respectively. Based on (14), the fully discrete nonuniform L1–LDG method reads: find $(U\_h^n, \mathbf{P}\_h^n) \in (V\_h, \Sigma\_h)$ such that for all test functions $(v\_h, \mathbf{w}\_h) \in (V\_h, \Sigma\_h)$,

$$(\Upsilon\_t^{\alpha} U\_h^n, v\_h)\_{K} + \varepsilon^2 (\mathbf{P}\_h^n, \nabla v\_h)\_{K} - \varepsilon^2 \langle \widehat{\mathbf{P}\_h^n} \cdot \mathbf{n}, v\_h \rangle\_{\partial K} - (f(U\_h^n), v\_h)\_{K} = 0, \tag{15a}$$

$$(\mathbf{P}\_h^n, \mathbf{w}\_h)\_{K} + (U\_h^n, \nabla \cdot \mathbf{w}\_h)\_{K} - \langle \widehat{U\_h^n}, \mathbf{w}\_h \cdot \mathbf{n} \rangle\_{\partial K} = 0. \tag{15b}$$

All the "hat" terms are numerical fluxes which are yet to be determined. The freedom in choosing the numerical fluxes can be exploited to design a scheme that enjoys a certain stability property. Here the alternating flux is chosen,

$$\widehat{U\_h^n}\Big|\_e = U\_{h,L}^n, \quad \widehat{\mathbf{P}\_h^n}\Big|\_e = \mathbf{P}\_{h,R}^n, \tag{16}$$

or

$$
\widehat{U\_h^n}\Big|\_e = U\_{h,R}^n, \quad \widehat{\mathbf{P}\_h^n}\Big|\_e = \mathbf{P}\_{h,L}^n. \tag{17}
$$

Summing Equation (15) over all elements yields

$$(\Upsilon\_t^{\alpha} U\_h^n, v\_h)\_{\Omega} + \varepsilon^2 (\mathbf{P}\_h^n, \nabla v\_h)\_{\Omega} - \varepsilon^2 \langle \widehat{\mathbf{P}\_h^n} \cdot \mathbf{n}, v\_h \rangle\_{\Gamma} - (f(U\_h^n), v\_h)\_{\Omega} = 0, \tag{18a}$$

$$(\mathbf{P}\_h^n, \mathbf{w}\_h)\_{\Omega} + (U\_h^n, \nabla \cdot \mathbf{w}\_h)\_{\Omega} - \langle \widehat{U\_h^n}, \mathbf{w}\_h \cdot \mathbf{n} \rangle\_{\Gamma} = 0. \tag{18b}$$

Next, we study the stability of scheme (18) using the numerical flux (16). The case of choosing numerical flux (17) is almost the same, so is omitted here. Firstly, we state a discrete fractional Gronwall inequality and a property of the nonuniform L1 scheme.

**Lemma 4** ([28])**.** *For any finite time $t\_M = T > 0$ and a given nonnegative sequence $(\lambda\_l)\_{l=0}^{M-1}$, assume that there exists a constant $\lambda$, independent of the time-steps, such that $\lambda \ge \sum\_{l=0}^{M-1} \lambda\_l$. Suppose that the grid function $\{u^n \,|\, n \ge 0\}$ satisfies*

$$\Upsilon\_t^{\alpha} (u^n)^2 \le \sum\_{l=1}^{n} \lambda\_{n-l} (u^l)^2 + \phi^n u^n + (\psi^n)^2, \ 1 \le n \le M, \tag{19}$$

*where $\{\phi^n, \psi^n \,|\, 1 \le n \le M\}$ are nonnegative sequences. If the maximum time-step satisfies $\tau\_M \le (2\Gamma(2-\alpha)\lambda)^{-1/\alpha}$, it holds that, for $1 \le n \le M$,*

$$u^n \le 2E\_{\alpha,1}(2\lambda t\_n^{\alpha}) \left( u^0 + \max\_{1 \le k \le n} \sum\_{j=1}^k P\_{k-j}^{(k)} \phi^j + \sqrt{\Gamma(1-\alpha)} \max\_{1 \le k \le n} \{ t\_k^{\alpha/2} \psi^k \} \right). \tag{20}$$

**Lemma 5** ([29])**.** *Let the functions u<sup>n</sup>* = *u*(**x**, *tn*) *be in L*2(Ω) *for n* = 0, 1, ... , *M. Then, one has the following inequality*

$$(\Upsilon\_t^{\alpha} u^n, u^n)\_{\Omega} \ge \frac{1}{2} \Upsilon\_t^{\alpha} \|u^n\|\_{\Omega}^2.$$

**Theorem 1.** *The solution $U\_h^n$ of the fully discrete nonuniform L1–LDG scheme* (18) *satisfies*

$$\|U\_h^n\|\_{\Omega} \le 2E\_{\alpha,1}(4 t\_n^{\alpha}) \|U\_h^0\|\_{\Omega}, \quad n = 1, \ldots, M.$$

**Proof.** Taking the test functions in scheme (18) as $v\_h = U\_h^n$ and $\mathbf{w}\_h = \varepsilon^2 \mathbf{P}\_h^n$, we obtain

$$(\Upsilon\_t^{\alpha} U\_h^n, U\_h^n)\_{\Omega} + \varepsilon^2 (\mathbf{P}\_h^n, \nabla U\_h^n)\_{\Omega} - \varepsilon^2 \langle \widehat{\mathbf{P}\_h^n} \cdot \mathbf{n}, U\_h^n \rangle\_{\Gamma} + ((U\_h^n)^3 - U\_h^n, U\_h^n)\_{\Omega} = 0, \tag{21a}$$

$$
\varepsilon^2 (\mathbf{P}\_h^n, \mathbf{P}\_h^n)\_{\Omega} + \varepsilon^2 (U\_h^n, \nabla \cdot \mathbf{P}\_h^n)\_{\Omega} - \varepsilon^2 \langle \widehat{U\_h^n}, \mathbf{P}\_h^n \cdot \mathbf{n} \rangle\_{\Gamma} = 0. \tag{21b}
$$

Adding the two equations in (21) and using (16), we have that

$$(\Upsilon_t^{\alpha} U_h^n, U_h^n)_{\Omega} + \epsilon^2\|\mathbf{P}_h^n\|_{\Omega}^2 + \|(U_h^n)^2\|_{\Omega}^2 = \|U_h^n\|_{\Omega}^2, \tag{22}$$

which indicates that

$$(\Upsilon_t^{\alpha} U_h^n, U_h^n)_{\Omega} \le \|U_h^n\|_{\Omega}^2. \tag{23}$$

Invoking Lemma 5, we derive that

$$\Upsilon_t^{\alpha}\|U_h^n\|_{\Omega}^2 \le 2\|U_h^n\|_{\Omega}^2. \tag{24}$$

Therefore, applying Lemma 4 with $u^n = \|U_h^n\|_{\Omega}$, $\varphi^n = \psi^n = 0$, $\lambda_0 = 2$, and $\lambda_j = 0$ for $1 \le j \le M-1$, we have

$$\|U_h^n\|_{\Omega} \le 2E_{\alpha,1}(4t_n^{\alpha})\|U_h^0\|_{\Omega}.$$

This completes the proof.

#### **Remark 1.**

*(i) We point out that the stability analysis in Theorem 1 can be further improved by mathematical induction. Following the discussions given in (Theorem 4.4 in [30]), we deduce that*

$$\|U_h^n\|_{\Omega} \le \|U_h^0\|_{\Omega}.$$


Suppose the exact solution *u*(**x**, *t*) of Equation (1) has the following smoothness properties:

$$u \in L^{\infty}\big((0,T]; H^{k+2}(\Omega)\big), \quad \|\partial^l u(\mathbf{x},t)/\partial t^l\| \le C(1 + t^{\alpha-l}) \ \text{ for } 0 < t \le T \text{ and } l = 0,1,2. \tag{25}$$

Such a regularity assumption with respect to time $t$ is often used; see, for instance, [15,19,30–39]. It implies that the solution $u(\mathbf{x},t)$ typically exhibits a weak singularity at the initial time $t = 0$, i.e., $|\partial u(\mathbf{x},t)/\partial t|$ and/or $|\partial^2 u(\mathbf{x},t)/\partial t^2|$ blow up as $t \to 0^+$, although $u(\mathbf{x},t)$ is continuous on $[0,T]$. Since it has been shown in [13] that the time-fractional Allen-Cahn Equation (1) satisfies the maximum principle, namely,

$$|u(\mathbf{x},t)| \le 1 \quad \text{for} \quad t > 0 \quad \text{if} \quad |u(\mathbf{x},0)| \le 1,$$

we assume that the nonlinear term *f*(*u*) satisfies

$$\max |f'(u)| \le L,\tag{26}$$

where *L* is a positive constant. For simplicity, we denote

$$e_u^n = u^n - U_h^n = u^n - Pu^n + Pu^n - U_h^n = u^n - Pu^n + Pe_u^n, \tag{27a}$$

$$e_{\mathbf{p}}^n = \mathbf{p}^n - \mathbf{P}_h^n = \mathbf{p}^n - \Pi\mathbf{p}^n + \Pi\mathbf{p}^n - \mathbf{P}_h^n = \mathbf{p}^n - \Pi\mathbf{p}^n + \Pi e_{\mathbf{p}}^n. \tag{27b}$$

We choose the projection as follows

$$(P, \Pi) = (\mathcal{P}_h^-, \mathcal{P}_h^+) \ \text{in one dimension}, \qquad (P, \Pi) = (\mathcal{P}_h^-, \Pi_h^+) \ \text{in two dimensions}, \tag{28}$$

which are defined in Section 2.2.

Subtracting (18) from (14), we have the error equation

$$\begin{split} \left({}_C\mathrm{D}_{0,t}^{\alpha} u^n - \Upsilon_t^{\alpha} U_h^n, v_h\right)_{\Omega} &+ \epsilon^2(\mathbf{p}^n - \mathbf{P}_h^n, \nabla v_h)_{\Omega} - \epsilon^2\langle(\mathbf{p}^n - \widehat{\mathbf{P}}_h^n)\cdot\mathbf{n}, v_h\rangle_{\Gamma} \\ &- \left(f(u^n) - f(U_h^n), v_h\right)_{\Omega} = 0, \end{split} \tag{29a}$$

$$(\mathbf{p}^n - \mathbf{P}_h^n, \mathbf{w}_h)_{\Omega} + (u^n - U_h^n, \nabla\cdot\mathbf{w}_h)_{\Omega} - \langle u^n - \widehat{U}_h^n, \mathbf{w}_h\cdot\mathbf{n}\rangle_{\Gamma} = 0. \tag{29b}$$

Now we show the error estimate for Equation (29).

**Theorem 2.** *Let $u^n$ be the exact solution of Equation* (1) *which satisfies the smoothness assumption* (25)*, and let $U_h^n$ be the numerical solution of the nonuniform L1–LDG scheme* (18)*. If $f(u)$ satisfies the condition* (26)*, then for $n = 1, 2, \dots, M$, the following estimate holds*

$$\|u^n - U_h^n\|_{\Omega} \le C\left(M^{-\min\{2-\alpha,\, r\alpha\}} + h^{k+1}\right), \tag{30}$$

*where C is a positive constant independent of M and h.*

**Proof.** By taking the test functions $v_h = Pe_u^n$ and $\mathbf{w}_h = \epsilon^2\Pi e_{\mathbf{p}}^n$ in (29) and applying (27), we arrive at

$$(\Upsilon_t^{\alpha} Pe_u^n, Pe_u^n)_{\Omega} + \epsilon^2(\Pi e_{\mathbf{p}}^n, \Pi e_{\mathbf{p}}^n)_{\Omega} - \left(f(u^n) - f(U_h^n), Pe_u^n\right)_{\Omega} = RHS, \tag{31}$$

where $R_1^n = {}_C\mathrm{D}_{0,t}^{\alpha} u(\mathbf{x}, t_n) - \Upsilon_t^{\alpha} u(\mathbf{x}, t_n)$ and

$$\begin{split} RHS &= -(\Upsilon_t^{\alpha}(u^n - Pu^n), Pe_u^n)_{\Omega} - (R_1^n, Pe_u^n)_{\Omega} - \epsilon^2(\mathbf{p}^n - \Pi\mathbf{p}^n, \nabla Pe_u^n)_{\Omega} \\ &\quad + \epsilon^2\langle(\mathbf{p}^n - \widehat{\Pi\mathbf{p}^n})\cdot\mathbf{n}, Pe_u^n\rangle_{\Gamma} - \epsilon^2(\mathbf{p}^n - \Pi\mathbf{p}^n, \Pi e_{\mathbf{p}}^n)_{\Omega} - \epsilon^2(u^n - Pu^n, \nabla\cdot\Pi e_{\mathbf{p}}^n)_{\Omega} \\ &\quad + \epsilon^2\langle u^n - \widehat{Pu^n}, \Pi e_{\mathbf{p}}^n\cdot\mathbf{n}\rangle_{\Gamma} - \epsilon^2(\Pi e_{\mathbf{p}}^n, \nabla Pe_u^n)_{\Omega} + \epsilon^2\langle\widehat{\Pi e_{\mathbf{p}}^n}\cdot\mathbf{n}, Pe_u^n\rangle_{\Gamma} \\ &\quad - \epsilon^2(Pe_u^n, \nabla\cdot\Pi e_{\mathbf{p}}^n)_{\Omega} + \epsilon^2\langle\widehat{Pe_u^n}, \Pi e_{\mathbf{p}}^n\cdot\mathbf{n}\rangle_{\Gamma}. \end{split} \tag{32}$$

Making use of the flux (16) and the properties of the projections, it is easy to see that

$$\begin{split} RHS &= -\left(\Upsilon_t^{\alpha}(u^n - Pu^n), Pe_u^n\right)_{\Omega} - \left(R_1^n, Pe_u^n\right)_{\Omega} - \epsilon^2(\mathbf{p}^n - \Pi\mathbf{p}^n, \Pi e_{\mathbf{p}}^n)_{\Omega} \\ &\quad - \epsilon^2(u^n - Pu^n, \nabla\cdot\Pi e_{\mathbf{p}}^n)_{\Omega} + \epsilon^2\langle u^n - \widehat{Pu^n}, \Pi e_{\mathbf{p}}^n\cdot\mathbf{n}\rangle_{\Gamma}. \end{split} \tag{33}$$

By using the Cauchy–Schwarz inequality and Lemma 1, $RHS$ can be estimated as follows:

$$\begin{split} |RHS| &\le \|\Upsilon_t^{\alpha}(u^n - Pu^n)\|_{\Omega}\|Pe_u^n\|_{\Omega} + \|R_1^n\|_{\Omega}\|Pe_u^n\|_{\Omega} + \epsilon^2\|\mathbf{p}^n - \Pi\mathbf{p}^n\|_{\Omega}\|\Pi e_{\mathbf{p}}^n\|_{\Omega} \\ &\quad + Ch^{k+1}\|\Pi e_{\mathbf{p}}^n\|_{\Omega} \\ &\le Ch^{k+1}\left(\|Pe_u^n\|_{\Omega} + \|\Pi e_{\mathbf{p}}^n\|_{\Omega}\right) + \|R_1^n\|_{\Omega}\|Pe_u^n\|_{\Omega}, \end{split} \tag{34}$$

where $C$ is a positive constant depending on $\|u\|_{L^{\infty}((0,T];H^{k+2}(\Omega))}$.

Now we estimate the nonlinear term in (31). Observe that

$$\begin{split} \left(f(U_h^n) - f(u^n), Pe_u^n\right)_{\Omega} &= \left(f(Pu^n) - f(u^n), Pe_u^n\right)_{\Omega} - \left(f(Pu^n) - f(U_h^n), Pe_u^n\right)_{\Omega} \\ &= \left(f'(\xi)(Pu^n - u^n), Pe_u^n\right)_{\Omega} - \left(f(Pu^n) - f(U_h^n), Pe_u^n\right)_{\Omega} \\ &= I + II, \end{split} \tag{35}$$

where $\xi = \theta u^n + (1-\theta)Pu^n$ with $0 \le \theta \le 1$. Then, using the Cauchy–Schwarz inequality, Young's inequality, the interpolation property (8), and (26), we can derive

$$|I| \le \|f'\|_{L^{\infty}(\Omega)}\left|(Pu^n - u^n, Pe_u^n)_{\Omega}\right| \le C\|Pe_u^n\|_{\Omega}^2 + Ch^{2k+2}. \tag{36}$$

It follows from the definition of $f(u)$ (i.e., $f(u) = u - u^3$) that

$$f(u) - f(v) = f'(u)(u - v) - (u - v)^3 + 3u(u - v)^2. \tag{37}$$
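Since the subsequent steps hinge on identity (37), a quick numerical spot-check (illustrative only, with $f(u) = u - u^3$ as in the paper) rules out sign slips:

```python
def f(u):
    """Nonlinear term of the Allen-Cahn equation: f(u) = u - u^3."""
    return u - u ** 3

def expansion(u, v):
    """Right-hand side of identity (37): f'(u)(u-v) - (u-v)^3 + 3u(u-v)^2."""
    fprime = 1.0 - 3.0 * u ** 2
    d = u - v
    return fprime * d - d ** 3 + 3.0 * u * d ** 2

# the two sides of (37) agree for arbitrary arguments
for u, v in [(0.3, -0.7), (1.0, 0.25), (-0.9, 0.9)]:
    assert abs(f(u) - f(v) - expansion(u, v)) < 1e-12
```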

Therefore, $II$ can be rewritten as

$$\begin{split} II &= -\left(f(Pu^n) - f(U_h^n), Pe_u^n\right)_{\Omega} \\ &= -\left(f'(Pu^n)(Pu^n - U_h^n) - (Pu^n - U_h^n)^3 + 3Pu^n(Pu^n - U_h^n)^2, Pe_u^n\right)_{\Omega} \\ &= -\left(f'(Pu^n)Pe_u^n - (Pe_u^n)^3 + 3Pu^n(Pe_u^n)^2, Pe_u^n\right)_{\Omega} \\ &= \left((Pe_u^n)^3, Pe_u^n\right)_{\Omega} - \left(f'(Pu^n)Pe_u^n + 3Pu^n(Pe_u^n)^2, Pe_u^n\right)_{\Omega}. \end{split} \tag{38}$$

From (26) and the Cauchy–Schwarz inequality, it follows that

$$\left|\left(f'(Pu^n)Pe_u^n + 3Pu^n(Pe_u^n)^2, Pe_u^n\right)_{\Omega}\right| \le C\|Pe_u^n\|_{\Omega}^2 + \|(Pe_u^n)^2\|_{\Omega}^2. \tag{39}$$

Combining Equations (31), (34), (36), (38) and (39), we have

$$\begin{split} (\Upsilon_t^{\alpha} Pe_u^n, Pe_u^n)_{\Omega} &+ \epsilon^2\|\Pi e_{\mathbf{p}}^n\|_{\Omega}^2 + \|(Pe_u^n)^2\|_{\Omega}^2 \\ &\le Ch^{k+1}\left(\|Pe_u^n\|_{\Omega} + \|\Pi e_{\mathbf{p}}^n\|_{\Omega}\right) + \|R_1^n\|_{\Omega}\|Pe_u^n\|_{\Omega} + Ch^{k+1}\|\Pi e_{\mathbf{p}}^n\|_{\Omega} \\ &\quad + C\|Pe_u^n\|_{\Omega}^2 + \|(Pe_u^n)^2\|_{\Omega}^2 + Ch^{2k+2} \\ &\le C\|Pe_u^n\|_{\Omega}^2 + \epsilon^2\|\Pi e_{\mathbf{p}}^n\|_{\Omega}^2 + \|(Pe_u^n)^2\|_{\Omega}^2 + Ch^{2k+2} + \|R_1^n\|_{\Omega}\|Pe_u^n\|_{\Omega}. \end{split} \tag{40}$$

Invoking Lemma 5, one has

$$\Upsilon_t^{\alpha}\|Pe_u^n\|_{\Omega}^2 \le 2C\|Pe_u^n\|_{\Omega}^2 + 2Ch^{2k+2} + 2\|R_1^n\|_{\Omega}\|Pe_u^n\|_{\Omega}. \tag{41}$$

Letting $\lambda_0 = 2C$, $\lambda_j = 0$ for $1 \le j \le M-1$, $u^n = \|Pe_u^n\|_{\Omega}$, $\varphi^n = 2\|R_1^n\|_{\Omega}$, and $\psi^n = \sqrt{2C}\,h^{k+1}$ in Lemma 4, we can obtain from (41) that

$$\|Pe_u^n\|_{\Omega} \le 2E_{\alpha,1}(4Ct_n^{\alpha})\left(2\max_{1\le k\le n}\sum_{j=1}^{k} P_{k-j}^{(k)}\|R_1^j\|_{\Omega} + \sqrt{2C\Gamma(1-\alpha)}\max_{1\le k\le n}\{t_k^{\alpha/2}h^{k+1}\}\right), \tag{42}$$

provided that the maximum time-step $\tau_M \le (4C\Gamma(2-\alpha))^{-1/\alpha}$. With the help of Lemma 3 and inequality (11), we have

$$\|Pe_u^n\|_{\Omega} \le C\left(M^{-\min\{r\alpha,\,2-\alpha\}} + h^{k+1}\right). \tag{43}$$

By using the interpolation property (8) and the triangle inequality, the desired estimate follows immediately.

To conclude this section, we present Algorithm 1, which is based on the nonuniform L1–LDG scheme (15).

#### **Algorithm 1** The nonuniform L1–LDG scheme for solving the time-fractional Allen-Cahn equation.

**Input:** the order of the time-fractional derivative $\alpha$, the interface width parameter $\epsilon$, and the temporal mesh grading parameter $r$.

**Output:** nodal values of the numerical solution $U_h^n$ at $t_n$.


$$(A_1^{(K)})_{ij} = (\varphi_K^j, \varphi_K^i), \quad (A_2^{(K)})_{ij} = (\varphi_K^j, (\varphi_K^i)_x), \quad (A_3^{(K)})_{ij} = (\varphi_K^j, (\varphi_K^i)_y).$$

Combine the boundary conditions to calculate the $l \times l$ stiffness matrices generated by the interface $\partial K$ with entries

$$(A_4^{(K)})_{ij} = \langle \varphi_{K,R}^j n_1, \varphi_K^i\rangle, \quad (A_5^{(K)})_{ij} = \langle \varphi_{K,R}^j n_2, \varphi_K^i\rangle,$$

$$(A_6^{(K)})_{ij} = \langle \varphi_{K,L}^j, \varphi_K^i n_1\rangle, \quad (A_7^{(K)})_{ij} = \langle \varphi_{K,L}^j, \varphi_K^i n_2\rangle.$$

Assemble the matrices $A_1^{(K)}$–$A_7^{(K)}$ into $A_1$–$A_7$ by the global numbering.


10: Set $\beta = \frac{\Gamma(2-\alpha)}{d_{n,1}}$.

11: **for** *K* ∈ T*<sup>h</sup>* **do**

12: Calculate the $l \times l$ matrix $A_8^{(K)}$ corresponding to the nonlinear term on $K$ at the time level $t_{n-1}$ with components

$$(A_8^{(K)})_{ij} = \left(\Big(\sum_{m=1}^{l} u_K^{n-1,m}\varphi_K^m\Big)^2 \varphi_K^j, \varphi_K^i\right),$$

then assemble $A_8$ according to the global numbering.

13: **end for**

14: Define a zero matrix $(O)_{ij} = 0$ of size $l(N_xN_y) \times l(N_xN_y)$. Then the global stiffness matrix and the global load vector are

$$A = \begin{bmatrix} (1-\beta)A_1 & \epsilon^2\beta(A_2 - A_4) & \epsilon^2\beta(A_3 - A_5) \\ A_2 - A_6 & A_1 & O \\ A_3 - A_7 & O & A_1 \end{bmatrix}$$

and

$$B = \begin{bmatrix} \beta \frac{d\_{n,n}}{\Gamma(2-\alpha)} A\_1 \\ O \\ O \end{bmatrix} W^0 - \begin{bmatrix} \beta A\_8 \\ O \\ O \end{bmatrix} W^0.$$

15: Solve

$$AW^n = B.$$

16: **end for**

17: **for** *n* = 2, . . . , *M* **do**

18: Set $\beta = \frac{\Gamma(2-\alpha)}{d_{n,1}}$.

19: **for** *K* ∈ T*<sup>h</sup>* **do**

20: Assemble the matrices $A_8$ and $A_9$ associated with the nonlinear term at the time levels $t_{n-1}$ and $t_{n-2}$, respectively. Their components on $K$ are

$$(A_8^{(K)})_{ij} = \left(\Big(\sum_{m=1}^{l} u_K^{n-1,m}\varphi_K^m\Big)^2 \varphi_K^j, \varphi_K^i\right), \quad i,j = 1,\dots,l,$$

and

$$(A_9^{(K)})_{ij} = \left(\Big(\sum_{m=1}^{l} u_K^{n-2,m}\varphi_K^m\Big)^2 \varphi_K^j, \varphi_K^i\right), \quad i,j = 1,\dots,l.$$

21: **end for**

22: Assemble the global stiffness matrix and the global load vector

$$A = \begin{bmatrix} (1-\beta)A_1 & \epsilon^2\beta(A_2 - A_4) & \epsilon^2\beta(A_3 - A_5) \\ A_2 - A_6 & A_1 & O \\ A_3 - A_7 & O & A_1 \end{bmatrix}$$

and

$$B = \sum_{s=1}^{n-1}\begin{bmatrix} \beta\frac{d_{n,s}-d_{n,s+1}}{\Gamma(2-\alpha)}A_1 \\ O \\ O \end{bmatrix} W^{n-s} + \begin{bmatrix} \beta\frac{d_{n,n}}{\Gamma(2-\alpha)}A_1 \\ O \\ O \end{bmatrix} W^0 - \begin{bmatrix} 2\beta A_8 \\ O \\ O \end{bmatrix} W^{n-1} + \begin{bmatrix} \beta A_9 \\ O \\ O \end{bmatrix} W^{n-2}.$$

23: Solve

*AW<sup>n</sup>* = *B*.

24: **end for**
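The temporal structure of Algorithm 1 (a history sum of L1 weights plus one solve per step) can be isolated on a scalar toy problem. The sketch below is our illustration, not part of the algorithm: it applies the nonuniform L1 formula on the graded mesh $t_n = T(n/M)^r$ to ${}_C\mathrm{D}^{\alpha}_{0,t}u = -\lambda u$, $u(0) = 1$, whose exact solution is $E_{\alpha,1}(-\lambda t^{\alpha})$.

```python
from math import gamma

def l1_caputo_demo(alpha=0.5, M=200, T=1.0, lam=1.0):
    """Nonuniform-L1 time stepping for the scalar toy problem
    C-D^alpha u = -lam*u, u(0) = 1, on the graded mesh t_n = T*(n/M)**r.
    Mirrors the history-sum structure of Algorithm 1, without the spatial part."""
    r = (2 - alpha) / alpha              # optimal grading suggested by Theorem 2
    t = [T * (n / M) ** r for n in range(M + 1)]
    g2a = gamma(2 - alpha)
    u = [1.0]
    for n in range(1, M + 1):
        # L1 weights b_k = ((t_n - t_{k-1})^(1-a) - (t_n - t_k)^(1-a)) / tau_k
        b = [((t[n] - t[k - 1]) ** (1 - alpha) - (t[n] - t[k]) ** (1 - alpha))
             / (t[k] - t[k - 1]) for k in range(1, n + 1)]
        hist = sum(b[k - 1] * (u[k] - u[k - 1]) for k in range(1, n))
        # solve (b_n*(u^n - u^{n-1}) + hist)/Gamma(2-a) = -lam*u^n for u^n
        u.append((b[n - 1] * u[n - 1] - hist) / (b[n - 1] + lam * g2a))
    return t, u
```

With $\alpha = 0.5$ and $r = (2-\alpha)/\alpha = 3$, the computed $u(1)$ agrees closely with $E_{1/2,1}(-1) \approx 0.4276$.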

From Theorem 2, it can be seen that the scheme (18) reaches the optimal convergence order $\mathcal{O}(M^{-(2-\alpha)})$ in the time direction when the grading parameter $r \ge (2-\alpha)/\alpha$. However, the numerical solution generated by (18) is limited to $(2-\alpha)$th-order accuracy in time, even if the solution is sufficiently smooth. Therefore, in the next section, we study a higher-order numerical algorithm for the time-fractional Allen-Cahn Equation (1).

#### **4. Nonuniform L2-1***σ***–LDG Scheme**

In this section, we propose a fully discrete nonuniform L2-1*σ*–LDG scheme for solving the time-fractional Allen-Cahn Equation (1), which is based on the L2-1*σ* approximation in the temporal direction and the LDG method in the spatial direction. The stability and convergence of the scheme are proved rigorously.

#### *4.1. The Fully Discrete Numerical Scheme and Its Stability Analysis*

The usual notation for the nonuniform L2-1*σ* formula is introduced here. Let $M$ be a positive integer. Set $t_n = T(n/M)^r$ for $n = 0, 1, \dots, M$, where the temporal mesh grading parameter $r \ge 1$ is chosen by the user. Denote by $\tau_n = t_n - t_{n-1}$, $n = 1, \dots, M$, the time mesh sizes. Set $t_{n+\sigma} = t_n + \sigma\tau_{n+1}$, $u^{n+\sigma} = u(\mathbf{x}, t_{n+\sigma})$, and $u^{n,\sigma} = \sigma u^{n+1} + (1-\sigma)u^n$ for $\sigma \in [0,1]$, $n = 0, 1, \dots, M-1$.
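For concreteness, the graded mesh and its step sizes can be generated as follows (an illustrative helper, not from the paper):

```python
def graded_mesh(T, M, r):
    """Graded time mesh t_n = T * (n/M)**r, n = 0..M.

    For r > 1 the steps cluster near t = 0, compensating the weak
    initial singularity of the solution; r = 1 gives a uniform mesh.
    """
    t = [T * (n / M) ** r for n in range(M + 1)]
    tau = [t[n] - t[n - 1] for n in range(1, M + 1)]  # tau_n = t_n - t_{n-1}
    return t, tau
```

For $r > 1$ the steps $\tau_n$ are strictly increasing, so the finest resolution sits exactly where the solution is least regular.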

The Caputo fractional derivative ${}_C\mathrm{D}_{0,t}^{\alpha}u$ can be approximated at the point $t_{n+\sigma}$ $(n = 0, 1, \dots, M-1)$ by the L2-1*σ* formula [35]

$$\begin{split} {}_C\mathrm{D}_{0,t}^{\alpha}u(\mathbf{x}, t_{n+\sigma}) &= \frac{1}{\Gamma(1-\alpha)}\int_0^{t_{n+\sigma}}\frac{\partial u(\mathbf{x},s)}{\partial s}\frac{\mathrm{d}s}{(t_{n+\sigma}-s)^{\alpha}} \\ &= \frac{1}{\Gamma(1-\alpha)}\sum_{k=1}^{n}\int_{t_{k-1}}^{t_k}\frac{\partial u(\mathbf{x},s)}{\partial s}\frac{\mathrm{d}s}{(t_{n+\sigma}-s)^{\alpha}} + \frac{1}{\Gamma(1-\alpha)}\int_{t_n}^{t_{n+\sigma}}\frac{\partial u(\mathbf{x},s)}{\partial s}\frac{\mathrm{d}s}{(t_{n+\sigma}-s)^{\alpha}} \\ &\approx g_{n,n}u^{n+1} - \sum_{j=0}^{n}(g_{n,j} - g_{n,j-1})u^j \\ &:= \mathfrak{R}_t^{\alpha}u^{n+\sigma}. \end{split} \tag{44}$$

Here $g_{0,0} = \tau_1^{-1}a_{0,0}$, $g_{n,-1} = 0$, and for $n \ge 1$ it holds that

$$g_{n,j} = \begin{cases} \tau_{j+1}^{-1}(a_{n,0} - b_{n,0}), & j = 0, \\ \tau_{j+1}^{-1}(a_{n,j} + b_{n,j-1} - b_{n,j}), & 1 \le j \le n-1, \\ \tau_{j+1}^{-1}(a_{n,n} + b_{n,n-1}), & j = n, \end{cases}$$

where

$$\begin{aligned} a_{n,n} &= \frac{1}{\Gamma(1-\alpha)}\int_{t_n}^{t_{n+\sigma}}(t_{n+\sigma}-s)^{-\alpha}\,\mathrm{d}s = \frac{\sigma^{1-\alpha}}{\Gamma(2-\alpha)}\tau_{n+1}^{1-\alpha}, && n \ge 0, \\ a_{n,j} &= \frac{1}{\Gamma(1-\alpha)}\int_{t_j}^{t_{j+1}}(t_{n+\sigma}-s)^{-\alpha}\,\mathrm{d}s, && n \ge 1,\ 0 \le j \le n-1, \\ b_{n,j} &= \frac{2}{\Gamma(1-\alpha)(t_{j+2}-t_j)}\int_{t_j}^{t_{j+1}}(t_{n+\sigma}-s)^{-\alpha}(s - t_{j+1/2})\,\mathrm{d}s, && n \ge 1,\ 0 \le j \le n-1. \end{aligned}$$
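These coefficients translate directly into code. The sketch below is our transcription, not the authors' implementation: the $a_{n,j}$ use the closed-form antiderivative, while the $b_{n,j}$ are computed by Simpson quadrature. A useful sanity check is that the formula is exact for linear $u$, for which ${}_C\mathrm{D}^{\alpha}_{0,t}t = t^{1-\alpha}/\Gamma(2-\alpha)$.

```python
from math import gamma

def simpson(f, lo, hi, m=200):
    """Composite Simpson rule on [lo, hi] with 2*m subintervals."""
    h = (hi - lo) / (2 * m)
    s = f(lo) + f(hi) + sum((4 if i % 2 else 2) * f(lo + i * h) for i in range(1, 2 * m))
    return s * h / 3

def l21s_weights(t, n, alpha):
    """Weights g_{n,j}, j = 0..n, of the nonuniform L2-1_sigma formula at t_{n+sigma}."""
    sigma = 1 - alpha / 2
    tns = t[n] + sigma * (t[n + 1] - t[n])              # t_{n+sigma}
    # a_{n,j}: exact antiderivative of (t_{n+sigma} - s)^(-alpha)
    a = [((tns - t[j]) ** (1 - alpha) - (tns - t[j + 1]) ** (1 - alpha)) / gamma(2 - alpha)
         for j in range(n)]
    a.append(sigma ** (1 - alpha) * (t[n + 1] - t[n]) ** (1 - alpha) / gamma(2 - alpha))
    # b_{n,j}: quadrature for the moment integral against (s - t_{j+1/2})
    b = [2 * simpson(lambda s: (tns - s) ** (-alpha) * (s - (t[j] + t[j + 1]) / 2),
                     t[j], t[j + 1]) / (gamma(1 - alpha) * (t[j + 2] - t[j]))
         for j in range(n)]
    if n == 0:
        return [a[0] / (t[1] - t[0])]
    g = [(a[0] - b[0]) / (t[1] - t[0])]
    g += [(a[j] + b[j - 1] - b[j]) / (t[j + 1] - t[j]) for j in range(1, n)]
    g.append((a[n] + b[n - 1]) / (t[n + 1] - t[n]))
    return g
```

For $u(t) = t$ the $b$-contributions telescope away, so $\sum_{j=0}^{n} g_{n,j}\tau_{j+1}$ must equal $t_{n+\sigma}^{1-\alpha}/\Gamma(2-\alpha)$, independently of the quadrature error.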

Define the discrete convolution kernels $A_{n+1-j}^{n+1,\sigma} = g_{n,j}$ and $\nabla_t u^{j+1} = u^{j+1} - u^j$ for $0 \le j \le n$ and $0 \le n \le M-1$. Then, the L2-1*σ* discretization can be rewritten as

$$\mathfrak{R}_t^{\alpha}u^{n+\sigma} = \sum_{j=0}^{n} A_{n+1-j}^{n+1,\sigma}\nabla_t u^{j+1}, \quad n = 0, 1, \dots, M-1.$$

By referring to [40], the discrete complementary convolution kernels $P_{n+1-j}^{n+1,\sigma}$ are defined as

$$P_1^{n+1,\sigma} = \frac{1}{A_1^{n+1,\sigma}}, \qquad P_{n+1-j}^{n+1,\sigma} = \frac{1}{A_1^{j+1,\sigma}}\sum_{i=j+1}^{n}\left(A_{i-j}^{i+1,\sigma} - A_{i-j+1}^{i+1,\sigma}\right)P_{n+1-i}^{n+1,\sigma}, \quad 0 \le j \le n-1.$$

The discrete convolution kernels satisfy the following properties

$$\sum\_{j=i}^{n} P\_{n+1-j}^{n+1, \sigma} A\_{j-i+1}^{j+1, \sigma} = 1,\text{ for } 0 \le i \le n \le M-1,\tag{45}$$

and

$$\sum_{j=0}^{n} P_{n+1-j}^{n+1,\sigma}\,\omega_{1+m\alpha-\alpha}(t_{j+1}) \le \pi_A\,\omega_{1+m\alpha}(t_{n+1}), \ \text{ for } 0 \le n \le M-1 \text{ and } m = 0, 1, \tag{46}$$

where $\omega_\beta(t) = t^{\beta-1}/\Gamma(\beta)$ and $\pi_A$ is a positive constant.
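Identity (45) is purely algebraic: the recursion defining $P_{n+1-j}^{n+1,\sigma}$ forces it for any kernel family, which makes it easy to sanity-check numerically. A small sketch with synthetic positive kernels (illustrative only; positivity of the $P$'s additionally requires monotone kernels, which random values do not guarantee):

```python
import random

random.seed(1)
M = 8
# synthetic kernels A[n1][k] standing in for A_k^{n1,sigma}, k = 1..n1
A = {n1: {k: 1.0 + random.random() for k in range(1, n1 + 1)} for n1 in range(1, M + 1)}

# complementary kernels P_k^{n1,sigma} via the recursion in the text
P = {}
for n in range(M):                       # n1 = n + 1 runs over 1..M
    n1 = n + 1
    P[n1] = {1: 1.0 / A[n1][1]}
    for j in range(n - 1, -1, -1):       # fills P[n1][n1 - j]
        s = sum((A[i + 1][i - j] - A[i + 1][i - j + 1]) * P[n1][n1 - i]
                for i in range(j + 1, n + 1))
        P[n1][n1 - j] = s / A[j + 1][1]

# identity (45): sum_{j=i}^{n} P_{n+1-j}^{n+1} A_{j-i+1}^{j+1} = 1
for n in range(M):
    for i in range(n + 1):
        total = sum(P[n + 1][n + 1 - j] * A[j + 1][j - i + 1] for j in range(i, n + 1))
        assert abs(total - 1.0) < 1e-10
```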

Let $\mathbf{p} = \nabla u$; then the weak form of the time-fractional Allen-Cahn Equation (1) at $t_{n+\sigma}$ is formulated as

$$\begin{split} &\left(({}_C\mathrm{D}_{0,t}^{\alpha}u)^{n+\sigma}, v\right)_{\Omega} - \epsilon^2(\nabla\cdot\mathbf{p}^{n+\sigma}, v)_{\Omega} + \epsilon^2(\nabla\cdot\mathbf{p}^{n,\sigma}, v)_{\Omega} + \epsilon^2(\mathbf{p}^{n,\sigma}, \nabla v)_{\Omega} \\ &\quad - \epsilon^2\langle\mathbf{p}^{n,\sigma}\cdot\mathbf{n}, v\rangle_{\Gamma} + \left(f(u^{n,\sigma}) - f(u^{n+\sigma}), v\right)_{\Omega} - \left(f(u^{n,\sigma}), v\right)_{\Omega} = 0, \end{split} \tag{47a}$$

$$(\mathbf{p}^{n,\sigma}, \mathbf{w})_{\Omega} + (u^{n,\sigma}, \nabla\cdot\mathbf{w})_{\Omega} - \langle u^{n,\sigma}, \mathbf{w}\cdot\mathbf{n}\rangle_{\Gamma} = 0, \tag{47b}$$

where *v*, **w** are test functions.

We use the LDG method presented in Section 3 for the spatial discretization and the nonuniform L2-1*σ* formula in time. The fully discrete nonuniform L2-1*σ*–LDG scheme is then defined as follows: find $(U_h^{n,\sigma}, \mathbf{P}_h^{n,\sigma}) \in (V_h, \Sigma_h)$ such that for all test functions $v_h \in V_h$ and $\mathbf{w}_h \in \Sigma_h$,

$$\left(\mathfrak{R}_t^{\alpha}U_h^{n+\sigma}, v_h\right)_{\Omega} + \epsilon^2(\mathbf{P}_h^{n,\sigma}, \nabla v_h)_{\Omega} - \epsilon^2\langle\widehat{\mathbf{P}}_h^{n,\sigma}\cdot\mathbf{n}, v_h\rangle_{\Gamma} - \left(f(U_h^{n,\sigma}), v_h\right)_{\Omega} = 0, \tag{48a}$$

$$(\mathbf{P}_h^{n,\sigma}, \mathbf{w}_h)_{\Omega} + (U_h^{n,\sigma}, \nabla\cdot\mathbf{w}_h)_{\Omega} - \langle\widehat{U}_h^{n,\sigma}, \mathbf{w}_h\cdot\mathbf{n}\rangle_{\Gamma} = 0. \tag{48b}$$

Here, the numerical fluxes are chosen as in (16).

To show the stability of the proposed nonuniform L2-1*σ*–LDG scheme, we need some important lemmas.

**Lemma 6** ([28])**.** *For any finite time $t_M = T > 0$ and a given nonnegative sequence $(\lambda_l)_{l=0}^{M-1}$, assume that there exists a constant $\Lambda$, independent of the time-steps, such that $\sum_{l=0}^{M-1}\lambda_l \le \Lambda$. Let $\sigma = 1 - \alpha/2$ and suppose that the grid function $\{u^{n+1} \mid n \ge 0\}$ satisfies*

$$\sum_{i=0}^{n} A_{n+1-i}^{n+1,\sigma}\nabla_t(u^{i+1})^2 \le \sum_{i=0}^{n}\lambda_{n-i}(u^{i,\sigma})^2 + \varphi^{n+1}u^{n,\sigma} + (\psi^{n+1})^2, \quad 0 \le n \le M-1,$$

*where* $\{\varphi^{n+1}, \psi^{n+1} \mid 0 \le n \le M-1\}$ *are nonnegative sequences. If the maximum time-step* $\tau_M \le (2\pi_A\Gamma(2-\alpha)\Lambda)^{-1/\alpha}$*, it holds that, for* $0 \le n \le M-1$*,*

$$u^{n+1} \le 2E_{\alpha,1}(2\pi_A\Lambda t_{n+1}^{\alpha})\left(u^0 + \max_{0\le i\le n}\sum_{j=0}^{i} P_{i-j+1}^{i+1,\sigma}\varphi^{j+1} + \sqrt{\pi_A\Gamma(1-\alpha)}\max_{0\le j\le n}\{t_{j+1}^{\alpha/2}\psi^{j+1}\}\right).$$

*Here $E_{\alpha,1}(z) = \sum_{k=0}^{\infty}\frac{z^k}{\Gamma(k\alpha+1)}$ is the Mittag-Leffler function.*

**Lemma 7** ([29])**.** *Suppose $\sigma = 1 - \alpha/2$. For any function $u^{n+1}$ $(0 \le n \le M-1)$, we have the following inequality*

$$(\mathfrak{R}\_t^\alpha u^{n+\sigma}, u^{n,\sigma})\_{\Omega} \ge \frac{1}{2} \mathfrak{R}\_t^\alpha (||u||\_{\Omega}^2)^{n+\sigma}.$$

**Theorem 3.** *If the graded mesh satisfies the maximum time-step condition $\tau_M \le (4\pi_A\Gamma(2-\alpha))^{-1/\alpha}$, then the solution $U_h^{n+1}$ of the fully discrete nonuniform L2-*1*σ–LDG scheme* (48) *satisfies*

$$\|U_h^{n+1}\|_{\Omega} \le 2E_{\alpha,1}(4\pi_A t_{n+1}^{\alpha})\|U_h^0\|_{\Omega}, \quad n = 0, 1, \dots, M-1.$$

**Proof.** Taking the test functions $(v_h, \mathbf{w}_h) = (U_h^{n,\sigma}, \epsilon^2\mathbf{P}_h^{n,\sigma})$ in (48) and integrating by parts, we get

$$\left(\mathfrak{R}_t^{\alpha}U_h^{n+\sigma}, U_h^{n,\sigma}\right)_{\Omega} + \epsilon^2\|\mathbf{P}_h^{n,\sigma}\|_{\Omega}^2 + \left((U_h^{n,\sigma})^3 - U_h^{n,\sigma}, U_h^{n,\sigma}\right)_{\Omega} = 0.$$

By virtue of Lemma 7 and the Cauchy–Schwarz inequality, we obtain

$$\mathfrak{R}_t^{\alpha}\left(\|U_h\|_{\Omega}^2\right)^{n+\sigma} \le 2\|U_h^{n,\sigma}\|_{\Omega}^2. \tag{49}$$

Using Lemma 6, it follows from (49) that

$$\|U_h^{n+1}\|_{\Omega} \le 2E_{\alpha,1}(4\pi_A t_{n+1}^{\alpha})\|U_h^0\|_{\Omega}, \quad n = 0, 1, \dots, M-1.$$

The proof is completed.

#### *4.2. Optimal Error Estimate*

In this subsection, we give the optimal error estimate for the fully discrete nonuniform L2-1*σ*–LDG scheme (48) of Equation (1). Suppose the exact solution *u*(**x**, *t*) of (1) has the following smoothness properties

$$u \in L^{\infty}\big((0,T]; H^{k+2}(\Omega)\big), \quad \|\partial^l u(\mathbf{x},t)/\partial t^l\| \le C(1 + t^{\alpha-l}) \ \text{ for } 0 < t \le T \text{ and } l = 0,1,2,3. \tag{50}$$

As for the nonuniform L1–LDG scheme, we assume that the nonlinear term $f(u)$ satisfies the condition (26).

**Lemma 8** ([33])**.** *Suppose $\sigma = 1 - \alpha/2$. Then, for any function $u(t) \in C^3((0,T])$, one has*

$$\left|\left({}_C\mathrm{D}_{0,t}^{\alpha}u\right)^{n+\sigma} - \mathfrak{R}_t^{\alpha}u^{n+\sigma}\right| \le Ct_{n+\sigma}^{-\alpha}\left(\psi_u^{n+\sigma} + \max_{1\le i\le n}\{\psi_u^{n,i}\}\right) \ \text{for } n = 0, 1, \dots, M-1,$$

*where*

$$\begin{aligned} \psi_u^{n+\sigma} &= \tau_{n+1}^{3-\alpha}\,t_{n+\sigma}^{\alpha}\sup_{s\in(t_n,t_{n+1})}|u'''(s)| && \text{for } n = 1, 2, \dots, M-1, \\ \psi_u^{n,1} &= \tau_1^{\alpha}\sup_{s\in(0,t_1)}\left(s^{1-\alpha}\,|(I_{2,1}u(s))' - u'(s)|\right) && \text{for } n = 1, 2, \dots, M-1, \\ \psi_u^{n,i} &= \tau_{n+1}^{-\alpha}\,\tau_i^2(\tau_i + \tau_{i+1})\,t_i^{\alpha}\sup_{s\in(t_{i-1},t_{i+1})}|u'''(s)| && \text{for } 2 \le i \le n \le M-1, \end{aligned}$$

*and $I_{2,i}u(s)$ denotes the quadratic polynomial that interpolates $u(s)$ at the points $t_{i-1}$, $t_i$, and $t_{i+1}$.*
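The interpolation operator in Lemma 8 is plain quadratic Lagrange interpolation at three consecutive mesh points; a minimal sketch (ours, for illustration):

```python
def quad_interp(ts, ys, s):
    """Quadratic Lagrange interpolant through (ts[k], ys[k]), k = 0, 1, 2,
    evaluated at s; the same kind of operator as I_{2,i} in Lemma 8."""
    t0, t1, t2 = ts
    y0, y1, y2 = ys
    return (y0 * (s - t1) * (s - t2) / ((t0 - t1) * (t0 - t2))
            + y1 * (s - t0) * (s - t2) / ((t1 - t0) * (t1 - t2))
            + y2 * (s - t0) * (s - t1) / ((t2 - t0) * (t2 - t1)))
```

By construction the interpolant reproduces quadratics exactly, which is why the truncation bounds involve only the third derivative $u'''$.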

**Lemma 9** ([33])**.** *Suppose that $u \in C[0,T] \cap C^3((0,T])$ satisfies the condition* (50)*. Then we have*

$$\begin{aligned} \psi_u^{n+\sigma} &\le CM^{-\min\{r\alpha,\,3-\alpha\}} && \text{for } n = 0, 1, \dots, M-1, \\ \psi_u^{n,i} &\le CM^{-\min\{r\alpha,\,3-\alpha\}} && \text{for } i = 1, \dots, M-1, \ n \ge 1. \end{aligned}$$

In Section 3.2, we gave the convergence analysis for the nonuniform L1–LDG scheme. The same proof idea can be extended to the nonuniform L2-1*σ*–LDG scheme, although the proof is somewhat more complicated. Following a similar line as before, we obtain the error equations

$$\begin{split} &\left(({}_C\mathrm{D}_{0,t}^{\alpha}u)^{n+\sigma} - \mathfrak{R}_t^{\alpha}U_h^{n+\sigma}, v_h\right)_{\Omega} + \epsilon^2(e_{\mathbf{p}}^{n,\sigma}, \nabla v_h)_{\Omega} - \epsilon^2\langle\widehat{e_{\mathbf{p}}^{n,\sigma}}\cdot\mathbf{n}, v_h\rangle_{\Gamma} \\ &\quad - \left(f(u^{n,\sigma}) - f(U_h^{n,\sigma}), v_h\right)_{\Omega} - (R_2^{n+\sigma}, v_h)_{\Omega} = 0, \end{split} \tag{51a}$$

$$(e_{\mathbf{p}}^{n,\sigma}, \mathbf{w}_h)_{\Omega} + (e_u^{n,\sigma}, \nabla\cdot\mathbf{w}_h)_{\Omega} - \langle\widehat{e_u^{n,\sigma}}, \mathbf{w}_h\cdot\mathbf{n}\rangle_{\Gamma} = 0, \tag{51b}$$

where $(v\_h, \mathbf{w}\_h) \in V\_h \times \Sigma\_h$ are test functions, $R\_2^{n+\sigma} = \epsilon^2\left(\nabla\cdot\mathbf{p}^{n+\sigma} - \nabla\cdot\mathbf{p}^{n,\sigma}\right) + f(u^{n+\sigma}) - f(u^{n,\sigma})$, and $e\_u^{n,\sigma}$ and $e\_{\mathbf{p}}^{n,\sigma}$ are the errors with the decompositions

$$e\_u^{n+1} = u^{n+1} - U\_h^{n+1} = \left(u^{n+1} - Pu^{n+1}\right) + \left(Pu^{n+1} - U\_h^{n+1}\right) = \left(u^{n+1} - Pu^{n+1}\right) + Pe\_u^{n+1}, \tag{52a}$$

$$e\_{\mathbf{p}}^{n+1} = \mathbf{p}^{n+1} - \mathbf{P}\_h^{n+1} = \left(\mathbf{p}^{n+1} - \Pi\mathbf{p}^{n+1}\right) + \left(\Pi\mathbf{p}^{n+1} - \mathbf{P}\_h^{n+1}\right) = \left(\mathbf{p}^{n+1} - \Pi\mathbf{p}^{n+1}\right) + \Pi e\_{\mathbf{p}}^{n+1}. \tag{52b}$$

Here *P* and Π are the projections defined in (28).

**Theorem 4.** *Assume that the solution u of problem* (1) *satisfies condition* (50) *and* ${}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}u \in L^{\infty}\left((0,T]; H^{k+1}(\Omega)\right)$*. Let* $U\_h^n$ *be the numerical solution of the fully discrete LDG scheme* (48)*. Suppose* $\sigma = 1 - \alpha/2$*,* $f(u)$ *satisfies condition* (26)*, and the nonuniform mesh satisfies the maximum time-step condition* $\tau\_M \leq \left(4\pi\_A\Gamma(2-\alpha)\right)^{-1/\alpha}$*. Then, for* $n = 1, 2, \dots, M$*, the following estimate holds:*

$$\left\|u^n - U\_h^n\right\|\_{\Omega} \leq C\left(M^{-\min\{r\alpha,\,2\}} + h^{k+1}\right),$$

*where C is a positive constant independent of M and h.*

**Proof.** Substituting (52) into (51), we deduce that

$$\begin{split} &\left(\mathbb{D}\_t^{\alpha}(Pe\_u)^{n+\sigma}, v\_h\right)\_{\Omega} + \epsilon^2\left(\Pi e\_{\mathbf{p}}^{n,\sigma}, \nabla v\_h\right)\_{\Omega} - \epsilon^2\left\langle \widehat{\Pi e\_{\mathbf{p}}^{n,\sigma}}\cdot\mathbf{n}, v\_h\right\rangle\_{\Gamma} - \left(f(u^{n,\sigma}) - f(U\_h^{n,\sigma}), v\_h\right)\_{\Omega} \\ &\quad = -\left(\mathbb{D}\_t^{\alpha}(u - Pu)^{n+\sigma}, v\_h\right)\_{\Omega} - \epsilon^2\left(\mathbf{p}^{n,\sigma} - \Pi\mathbf{p}^{n,\sigma}, \nabla v\_h\right)\_{\Omega} \\ &\qquad + \epsilon^2\left\langle\left(\mathbf{p}^{n,\sigma} - \widehat{\Pi\mathbf{p}^{n,\sigma}}\right)\cdot\mathbf{n}, v\_h\right\rangle\_{\Gamma} - \left(\zeta^{n+\sigma}, v\_h\right)\_{\Omega} + \left(R\_2^{n+\sigma}, v\_h\right)\_{\Omega}, \end{split} \tag{53a}$$

$$\begin{split} &\left(\Pi e\_{\mathbf{p}}^{n,\sigma}, \mathbf{w}\_h\right)\_{\Omega} + \left(Pe\_u^{n,\sigma}, \nabla\cdot\mathbf{w}\_h\right)\_{\Omega} - \left\langle \widehat{Pe\_u^{n,\sigma}}, \mathbf{w}\_h\cdot\mathbf{n}\right\rangle\_{\Gamma} \\ &\quad = -\left(\mathbf{p}^{n,\sigma} - \Pi\mathbf{p}^{n,\sigma}, \mathbf{w}\_h\right)\_{\Omega} - \left(u^{n,\sigma} - Pu^{n,\sigma}, \nabla\cdot\mathbf{w}\_h\right)\_{\Omega} + \left\langle u^{n,\sigma} - \widehat{Pu^{n,\sigma}}, \mathbf{w}\_h\cdot\mathbf{n}\right\rangle\_{\Gamma}, \end{split} \tag{53b}$$

where $\zeta^{n+\sigma} = \left({}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}u\right)^{n+\sigma} - \mathbb{D}\_t^{\alpha}u^{n+\sigma}$ represents the truncation error. Making use of the interpolation properties in Section 2.2, we obtain

$$\begin{split} &\left(\mathbb{D}\_t^{\alpha}(Pe\_u)^{n+\sigma}, v\_h\right)\_{\Omega} + \epsilon^2\left(\Pi e\_{\mathbf{p}}^{n,\sigma}, \nabla v\_h\right)\_{\Omega} - \epsilon^2\left\langle \widehat{\Pi e\_{\mathbf{p}}^{n,\sigma}}\cdot\mathbf{n}, v\_h\right\rangle\_{\Gamma} - \left(f(u^{n,\sigma}) - f(U\_h^{n,\sigma}), v\_h\right)\_{\Omega} \\ &\quad = -\left(\mathbb{D}\_t^{\alpha}(u - Pu)^{n+\sigma}, v\_h\right)\_{\Omega} - \left(\zeta^{n+\sigma}, v\_h\right)\_{\Omega} + \left(R\_2^{n+\sigma}, v\_h\right)\_{\Omega}, \end{split} \tag{54a}$$

$$\begin{split} &\left(\Pi e\_{\mathbf{p}}^{n,\sigma}, \mathbf{w}\_h\right)\_{\Omega} + \left(Pe\_u^{n,\sigma}, \nabla\cdot\mathbf{w}\_h\right)\_{\Omega} - \left\langle \widehat{Pe\_u^{n,\sigma}}, \mathbf{w}\_h\cdot\mathbf{n}\right\rangle\_{\Gamma} \\ &\quad = -\left(\mathbf{p}^{n,\sigma} - \Pi\mathbf{p}^{n,\sigma}, \mathbf{w}\_h\right)\_{\Omega} - \left(u^{n,\sigma} - Pu^{n,\sigma}, \nabla\cdot\mathbf{w}\_h\right)\_{\Omega} + \left\langle u^{n,\sigma} - \widehat{Pu^{n,\sigma}}, \mathbf{w}\_h\cdot\mathbf{n}\right\rangle\_{\Gamma}. \end{split} \tag{54b}$$

Setting $(v\_h, \mathbf{w}\_h) = \left(Pe\_u^{n,\sigma},\, \epsilon^2\Pi e\_{\mathbf{p}}^{n,\sigma}\right)$ in (54) and integrating by parts, we arrive at

$$\begin{split} &\left(\mathbb{D}\_t^{\alpha}(Pe\_u)^{n+\sigma}, Pe\_u^{n,\sigma}\right)\_{\Omega} + \epsilon^2\left\|\Pi e\_{\mathbf{p}}^{n,\sigma}\right\|\_{\Omega}^2 - \left(f(u^{n,\sigma}) - f(U\_h^{n,\sigma}), Pe\_u^{n,\sigma}\right)\_{\Omega} \\ &\quad = -\left(\mathbb{D}\_t^{\alpha}(u - Pu)^{n+\sigma}, Pe\_u^{n,\sigma}\right)\_{\Omega} - \left(\zeta^{n+\sigma}, Pe\_u^{n,\sigma}\right)\_{\Omega} + \left(R\_2^{n+\sigma}, Pe\_u^{n,\sigma}\right)\_{\Omega} \\ &\qquad - \epsilon^2\left(\mathbf{p}^{n,\sigma} - \Pi\mathbf{p}^{n,\sigma}, \Pi e\_{\mathbf{p}}^{n,\sigma}\right)\_{\Omega} - \epsilon^2\left(u^{n,\sigma} - Pu^{n,\sigma}, \nabla\cdot\Pi e\_{\mathbf{p}}^{n,\sigma}\right)\_{\Omega} \\ &\qquad + \epsilon^2\left\langle u^{n,\sigma} - \widehat{Pu^{n,\sigma}}, \Pi e\_{\mathbf{p}}^{n,\sigma}\cdot\mathbf{n}\right\rangle\_{\Gamma}. \end{split} \tag{55}$$

Applying the Cauchy–Schwarz inequality, the interpolation property (9), and Lemma 1, we can bound the right-hand side of (55) to obtain

$$\begin{split} &\left(\mathbb{D}\_t^{\alpha}(Pe\_u)^{n+\sigma}, Pe\_u^{n,\sigma}\right)\_{\Omega} + \epsilon^2\left\|\Pi e\_{\mathbf{p}}^{n,\sigma}\right\|\_{\Omega}^2 - \left(f(u^{n,\sigma}) - f(U\_h^{n,\sigma}), Pe\_u^{n,\sigma}\right)\_{\Omega} \\ &\leq \left(\left\|\mathbb{D}\_t^{\alpha}(u - Pu)^{n+\sigma}\right\|\_{\Omega} + \left\|\zeta^{n+\sigma}\right\|\_{\Omega} + \left\|R\_2^{n+\sigma}\right\|\_{\Omega}\right)\left\|Pe\_u^{n,\sigma}\right\|\_{\Omega} \\ &\quad + \epsilon^2\left\|\mathbf{p}^{n,\sigma} - \Pi\mathbf{p}^{n,\sigma}\right\|\_{\Omega}\left\|\Pi e\_{\mathbf{p}}^{n,\sigma}\right\|\_{\Omega} + Ch^{k+1}\left\|\Pi e\_{\mathbf{p}}^{n,\sigma}\right\|\_{\Omega} \\ &\leq \left(\left\|\mathbb{D}\_t^{\alpha}(u - Pu)^{n+\sigma}\right\|\_{\Omega} + \left\|\zeta^{n+\sigma}\right\|\_{\Omega} + \left\|R\_2^{n+\sigma}\right\|\_{\Omega}\right)\left\|Pe\_u^{n,\sigma}\right\|\_{\Omega} + Ch^{k+1}\left\|\Pi e\_{\mathbf{p}}^{n,\sigma}\right\|\_{\Omega}. \end{split} \tag{56}$$

By using an analysis similar to that in (35), we can obtain the following estimate

$$\left(\mathbb{D}\_t^{\alpha}(Pe\_u)^{n+\sigma}, Pe\_u^{n,\sigma}\right)\_{\Omega} \leq \left(\left\|\mathbb{D}\_t^{\alpha}(u - Pu)^{n+\sigma}\right\|\_{\Omega} + \left\|\zeta^{n+\sigma}\right\|\_{\Omega} + \left\|R\_2^{n+\sigma}\right\|\_{\Omega}\right)\left\|Pe\_u^{n,\sigma}\right\|\_{\Omega} + C\left\|Pe\_u^{n,\sigma}\right\|\_{\Omega}^2 + Ch^{2k+2}. \tag{57}$$

According to the interpolation property (8), we obtain

$$\begin{split} &\left\|\mathbb{D}\_t^{\alpha}(u - Pu)^{n+\sigma}\right\|\_{\Omega} \\ &= \left\|\mathbb{D}\_t^{\alpha}(u - Pu)^{n+\sigma} - \left({}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}(u - Pu)\right)^{n+\sigma} + \left({}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}(u - Pu)\right)^{n+\sigma}\right\|\_{\Omega} \\ &\leq \left\|-\left({}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}u\right)^{n+\sigma} + \mathbb{D}\_t^{\alpha}u^{n+\sigma} + P\left(\left({}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}u\right)^{n+\sigma} - \mathbb{D}\_t^{\alpha}u^{n+\sigma}\right)\right\|\_{\Omega} + \left\|\left({}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}(u - Pu)\right)^{n+\sigma}\right\|\_{\Omega} \\ &\leq C\left\|\zeta^{n+\sigma}\right\|\_{H^1(\Omega)} + Ch^{k+1}\left\|\left({}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}u\right)^{n+\sigma}\right\|\_{H^{k+1}(\Omega)}. \end{split} \tag{58}$$

Next, we estimate $\max\_{0\leq n\leq M-1}\left\{t\_{n+\sigma}^{\alpha}\|R\_2^{n+\sigma}\|\_{\Omega}\right\}$. When $n = 0$, it follows from the assumption on *u* that there exists a constant *C* such that

$$t\_{\sigma}^{\alpha}\left\|R\_2^{\sigma}\right\|\_{\Omega} \leq Ct\_1^{\alpha} \leq CM^{-r\alpha}.$$

When *n* ≥ 1, applying (50) and Lemma 9 of [33], we obtain

$$\begin{aligned} t\_{n+\sigma}^{\alpha}\left\|R\_2^{n+\sigma}\right\|\_{\Omega} &\leq C\, t\_{n+\sigma}^{\alpha}\,\tau\_{n+1}^2\, t\_n^{\alpha-2} \leq C(n+1)^{r\alpha}M^{-r\alpha}\cdot M^{-2r}n^{2r-2}\cdot n^{r\alpha-2r}M^{-r\alpha+2r} \\ &\leq C(n/M)^{2r\alpha-2}M^{-2}, \end{aligned}$$

where we have used $\tau\_{n+1} \leq CTM^{-r}n^{r-1}$ ($n = 0, 1, \dots, M-1$) in the second inequality. As a consequence,

$$t\_{n+\sigma}^{\alpha}\left\|R\_2^{n+\sigma}\right\|\_{\Omega} \leq \begin{cases} CM^{-2}, & n = 1, 2, \dots, M-1,\ r \geq 1/\alpha, \\ CM^{-2r\alpha}, & n = 1, 2, \dots, M-1,\ 1 \leq r < 1/\alpha. \end{cases}$$

Combining the above two cases, we have

$$\max\_{0\leq n\leq M-1}\left\{t\_{n+\sigma}^{\alpha}\left\|R\_2^{n+\sigma}\right\|\_{\Omega}\right\} \leq CM^{-\min\{r\alpha,\,2\}}. \tag{59}$$

By using (58), (59), and Lemmas 8 and 9, we arrive at

$$\begin{split} &\left\|\mathbb{D}\_t^{\alpha}(u - Pu)^{n+\sigma}\right\|\_{\Omega} + \left\|\zeta^{n+\sigma}\right\|\_{\Omega} + \left\|R\_2^{n+\sigma}\right\|\_{\Omega} \\ &\leq C\left\|\zeta^{n+\sigma}\right\|\_{H^1(\Omega)} + Ch^{k+1}\left\|\left({}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}u\right)^{n+\sigma}\right\|\_{H^{k+1}(\Omega)} + t\_{n+\sigma}^{-\alpha}\, t\_{n+\sigma}^{\alpha}\left\|R\_2^{n+\sigma}\right\|\_{\Omega} \\ &\leq Ct\_{n+\sigma}^{-\alpha}\left(\max\_{1\leq n\leq M-1}\left\{t\_{n+\sigma}^{\alpha}\left\|\zeta^{n+\sigma}\right\|\_{H^1(\Omega)}\right\} + t\_{n+\sigma}^{\alpha}\left\|R\_2^{n+\sigma}\right\|\_{\Omega}\right) + Ch^{k+1} \\ &\leq Ct\_{n+\sigma}^{-\alpha}\left(C\max\_{0\leq n\leq M-1}\left\|\psi\_u^{n+\sigma}\right\|\_{H^1(\Omega)} + \max\_{1\leq s\leq n}\left\|\psi\_u^{n,s}\right\|\_{H^1(\Omega)} + M^{-\min\{r\alpha,2\}}\right) + Ch^{k+1} \\ &\leq Ct\_{n+\sigma}^{-\alpha}\left(M^{-\min\{r\alpha,3-\alpha\}} + M^{-\min\{r\alpha,2\}}\right) + Ch^{k+1} \\ &\leq Ct\_{n+\sigma}^{-\alpha}M^{-\min\{r\alpha,3-\alpha\}} + Ch^{k+1}. \end{split} \tag{60}$$

Substituting (60) into (57) and applying Lemma 7, we thus get

$$\mathbb{D}\_t^{\alpha}\left(\|Pe\_u\|\_{\Omega}^2\right)^{n+\sigma} \leq \left(Ct\_{n+\sigma}^{-\alpha}M^{-\min\{r\alpha,3-\alpha\}} + Ch^{k+1}\right)\left\|Pe\_u^{n,\sigma}\right\|\_{\Omega} + C\left\|Pe\_u^{n,\sigma}\right\|\_{\Omega}^2 + Ch^{2k+2}. \tag{61}$$

Then, invoking Lemma 6 and (46), one has

$$\begin{split} \left\|Pe\_u^{n+1}\right\|\_{\Omega} &\leq 2E\_{\alpha,1}\left(2C\pi\_A t\_{n+1}^{\alpha}\right)\left(\max\_{0\leq i\leq n}\sum\_{j=0}^{i} P\_{i-j+1}^{i+1,\alpha}\, 2\left(Ct\_{j+\sigma}^{-\alpha}M^{-\min\{r\alpha,3-\alpha\}} + Ch^{k+1}\right)\right) \\ &\quad + \sqrt{\pi\_A\Gamma(1-\alpha)}\max\_{0\leq j\leq n}\left\{\sqrt{C}\, t\_{j+1}^{\alpha/2}h^{k+1}\right\} \\ &\leq C\max\_{0\leq i\leq n}\sum\_{j=0}^{i} P\_{i-j+1}^{i+1,\alpha}\left(\omega\_{1-\alpha}(t\_{j+1})M^{-\min\{r\alpha,2\}} + h^{k+1}\right) + Ch^{k+1} \\ &\leq CM^{-\min\{r\alpha,2\}} + Ch^{k+1}, \end{split} \tag{62}$$

provided that the maximum time-step satisfies $\tau\_M \leq \left(4\pi\_A\Gamma(2-\alpha)\right)^{-1/\alpha}$. Applying the triangle inequality and the interpolation properties (8) and (9) to (62) yields the desired result. This completes the proof.
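The maximum time-step restriction and the bound $\tau\_{n+1} \leq CTM^{-r}n^{r-1}$ used in the proof are tied to a graded temporal mesh. A minimal sketch (assuming the standard grading $t\_n = T(n/M)^r$, which the error analysis implies but this excerpt does not restate) checks the step-size bound numerically with $C = r$:

```python
def graded_mesh(T, M, r):
    """Graded time mesh t_n = T * (n/M)**r, clustering points near t = 0."""
    return [T * (n / M) ** r for n in range(M + 1)]

T, M, alpha = 0.25, 100, 0.4
r = (3.0 - alpha) / alpha                 # grading used for the L2-1_sigma scheme
t = graded_mesh(T, M, r)
tau = [t[n + 1] - t[n] for n in range(M)]

# Mean value theorem: tau_{n+1} = T*M^{-r}*((n+1)^r - n^r) <= r*T*M^{-r}*(n+1)^{r-1},
# a concrete instance of the step bound used in the proof, with C = r
ok = all(tau[n] <= r * T * M ** (-r) * (n + 1) ** (r - 1) + 1e-15
         for n in range(M))
print(ok)
```

The first step is of size $TM^{-r}$, far smaller than the last, which is how the mesh compensates for the weak singularity at $t = 0$.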

#### **5. Numerical Examples**


The purpose of this section is to numerically validate the accuracy and efficiency of the proposed schemes (18) and (48) for solving the time-fractional Allen-Cahn Equation (1) with an initial singularity. All algorithms are implemented in MATLAB R2016a and run on a PC with a 3.10 GHz CPU, 16 GB RAM, and the Windows 10 operating system.

**Example 1.** *Consider the following two-dimensional time-fractional Allen-Cahn equation with a source term f*(*x*, *y*, *t*)

$$\begin{cases} {}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}u(x,y,t) - \Delta u(x,y,t) = u(x,y,t) - u^3(x,y,t) + f(x,y,t), & (x,y)\in\Omega,\ t\in(0,\frac{1}{4}], \\ u(x,y,0) = 0, & (x,y)\in\Omega, \\ u(x,y,t) = 0, & (x,y)\in\partial\Omega,\ t\in(0,\frac{1}{4}], \end{cases}$$

*where* 0 < *α* < 1*,* Ω = (−1, 1) × (−1, 1)*, and the source term is given by*

$$\begin{split} f(x,y,t) &= \left(\Gamma(\alpha+1) + \frac{2t^{2-\alpha}}{\Gamma(3-\alpha)}\right)(x+1)^2(x-1)^2(y+1)^2(y-1)^2 \\ &\quad - 4(t^{\alpha}+t^2)(3x^2-1)(y+1)^2(y-1)^2 \\ &\quad - 4(t^{\alpha}+t^2)(3y^2-1)(x+1)^2(x-1)^2 \\ &\quad - (t^{\alpha}+t^2)(x+1)^2(x-1)^2(y+1)^2(y-1)^2 \\ &\quad + \left[(t^{\alpha}+t^2)(x+1)^2(x-1)^2(y+1)^2(y-1)^2\right]^3. \end{split}$$

*The analytical solution is given by* $u(x,y,t) = (t^{\alpha} + t^2)(x+1)^2(x-1)^2(y+1)^2(y-1)^2$*.*

The purpose of Example 1 is to demonstrate the effectiveness of the nonuniform L1–LDG scheme (18) with the numerical flux (16) for the time-fractional Allen-Cahn equation with a weakly singular solution. The *L*2-norm errors and convergence orders of the numerical solution $U\_h^n$ at $t = \frac{1}{4}$ are shown in Tables 1–4. From Tables 1 and 2, one can see that the convergence orders of scheme (18) in the temporal direction are close to min{2 − *α*, *rα*}. In Tables 3 and 4, we take *r* = (2 − *α*)/*α* and *α* = 0.4, 0.6, 0.8, and the convergence orders for $U\_h^n$ are close to (*k* + 1) in space. These numerical results coincide with Theorem 2.
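The temporal rates reported below can be compared against the theoretical prediction directly; a small sketch computing the expected rate min{2 − *α*, *rα*} of the nonuniform L1–LDG scheme for the two mesh gradings used in Tables 1 and 2:

```python
# Predicted temporal convergence order of the nonuniform L1-LDG scheme
def l1_temporal_order(alpha, r):
    return min(2.0 - alpha, r * alpha)

for alpha in (0.4, 0.6, 0.8):
    r_opt = (2.0 - alpha) / alpha          # optimal grading: r*alpha = 2 - alpha
    print(alpha,
          l1_temporal_order(alpha, 1.0),   # r = 1: order limited to alpha
          l1_temporal_order(alpha, r_opt)) # optimal r: full order 2 - alpha
```

With *r* = 1 the singularity caps the rate at *α*, while the optimal grading restores the full 2 − *α* rate, which is exactly the pattern the tables exhibit.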

**Table 1.** The *L*2-norm errors and temporal convergence orders for Example 1 using scheme (18), *M* = *Nx* = *Ny*, *k* = 1, *T* = 1/4, *r* = 1.


**Table 2.** The *L*2-norm errors and temporal convergence orders for Example 1 using scheme (18), *M* = *Nx* = *Ny*, *k* = 1, *T* = 1/4, *r* = (2 − *α*)/*α*.


**Table 3.** The *L*2-norm errors and spatial convergence orders for Example 1 using scheme (18), *M* = 500, *T* = 1/4, *r* = (2 − *α*)/*α*, *k* = 1.


**Table 4.** The *L*2-norm errors and spatial convergence orders for Example 1 using scheme (18), *M* = 1000, *T* = 1/4, *r* = (2 − *α*)/*α*, *k* = 2.


**Example 2.** *Consider the following two-dimensional time-fractional Allen-Cahn equation with a source term f*(*x*, *y*, *t*)

$$\begin{cases} {}\_{\mathbb{C}}\mathrm{D}\_{0,t}^{\alpha}u(x,y,t) - 0.1\Delta u(x,y,t) = u(x,y,t) - u^3(x,y,t) + f(x,y,t), & (x,y)\in\Omega,\ t\in(0,\frac{1}{4}], \\ u(x,y,0) = 0, & (x,y)\in\Omega, \\ u(x,y,t) = 0, & (x,y)\in\partial\Omega,\ t\in(0,\frac{1}{4}], \end{cases}$$

*where* 0 < *α* < 1*,* Ω = (−1, 1) × (−1, 1)*, and the source term is given by*

$$\begin{split} f(x,y,t) &= \left(\Gamma(\alpha+1) + \frac{2t^{2-\alpha}}{\Gamma(3-\alpha)}\right)(x+1)^2(x-1)^2(y+1)^2(y-1)^2 \\ &\quad - 0.4(t^{\alpha}+t^2)(3x^2-1)(y+1)^2(y-1)^2 \\ &\quad - 0.4(t^{\alpha}+t^2)(3y^2-1)(x+1)^2(x-1)^2 \\ &\quad - (t^{\alpha}+t^2)(x+1)^2(x-1)^2(y+1)^2(y-1)^2 \\ &\quad + \left[(t^{\alpha}+t^2)(x+1)^2(x-1)^2(y+1)^2(y-1)^2\right]^3. \end{split}$$

*The solution* $u(x,y,t) = (t^{\alpha} + t^2)(x+1)^2(x-1)^2(y+1)^2(y-1)^2$ *solves this equation.*

It is clear that the exact solution *u* of Example 2 satisfies the regularity assumption (50), so we use the proposed nonuniform L2-1*σ*–LDG scheme (48) to solve this problem. Tables 5 and 6 report the numerical errors and convergence orders in the temporal direction. The data in these tables demonstrate that the temporal convergence order of the numerical solution $U\_h^n$ is min{2, *rα*}. To test the convergence order of the scheme in the spatial direction, we fix a sufficiently small temporal step (*M* = 500 for *k* = 1 and *M* = 3000 for *k* = 2) and vary the spatial step sizes. Tables 7 and 8 list the numerical results for different values of *α*, where the (*k* + 1)-th order convergence of scheme (48) in the spatial direction can be observed.

**Table 5.** The *L*2-norm errors and temporal convergence orders for Example 2 using scheme (48), *M* = *Nx* = *Ny*, *k* = 1, *T* = 1/4, *r* = 1.


| *M* | *L*2-Error (*α* = 0.4) | Order | *L*2-Error (*α* = 0.6) | Order | *L*2-Error (*α* = 0.8) | Order |
|---|---|---|---|---|---|---|
| 20 | 4.8695 × 10<sup>−3</sup> | – | 2.5966 × 10<sup>−3</sup> | – | 1.9295 × 10<sup>−3</sup> | – |
| 40 | 1.4639 × 10<sup>−3</sup> | 1.7340 | 7.5597 × 10<sup>−4</sup> | 1.7802 | 5.4683 × 10<sup>−4</sup> | 1.8190 |
| 60 | 6.9287 × 10<sup>−4</sup> | 1.8448 | 3.5345 × 10<sup>−4</sup> | 1.8751 | 2.5311 × 10<sup>−4</sup> | 1.8998 |
| 80 | 4.0253 × 10<sup>−4</sup> | 1.8878 | 2.0395 × 10<sup>−4</sup> | 1.9114 | 1.4529 × 10<sup>−4</sup> | 1.9296 |
| 100 | 2.6277 × 10<sup>−4</sup> | 1.9113 | 1.3256 × 10<sup>−4</sup> | 1.9309 | 9.4126 × 10<sup>−5</sup> | 1.9454 |
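The tabulated orders are the usual observed rates $\log(e\_{M\_1}/e\_{M\_2})/\log(M\_2/M\_1)$; a quick sketch reproducing the first entry for *α* = 0.4 from the errors reported above at *M* = 20 and *M* = 40:

```python
import math

def observed_order(e_coarse, e_fine, M_coarse, M_fine):
    """Observed convergence rate computed from errors on two meshes."""
    return math.log(e_coarse / e_fine) / math.log(M_fine / M_coarse)

order = observed_order(4.8695e-3, 1.4639e-3, 20, 40)
print(round(order, 4))  # 1.734, matching the tabulated value
```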

**Table 6.** The *L*2-norm errors and temporal convergence orders for Example 2 using scheme (48), *M* = *Nx* = *Ny*, *k* = 1, *T* = 1/4, *r* = (3 − *α*)/*α*.

**Table 7.** The *L*2-norm errors and spatial convergence orders for Example 2 using scheme (48), *M* = 500, *T* = 1/4, *r* = (3 − *α*)/*α*, *k* = 1.


**Table 8.** The *L*2-norm errors and spatial convergence orders for Example 2 using scheme (48), *M* = 3000, *T* = 1/4, *r* = (3 − *α*)/*α*, *k* = 2.


#### **6. Concluding Remarks**

This paper focuses on numerical algorithms for the time-fractional Allen-Cahn equation with a weakly singular solution. In the temporal direction, the equation is discretized by the nonuniform L1 scheme and the nonuniform L2-1*σ* scheme, respectively. In the spatial direction, the LDG method is utilized. By means of discrete fractional Gronwall-type inequalities, the *L*<sup>2</sup> stability and optimal error estimates of these two schemes are proved in detail. Finally, the efficiency and accuracy of the proposed fully discrete schemes are verified by several numerical examples. In future work, we will extend the technique of coupling the LDG method with nonuniform time discretizations to space-time fractional phase-field models.

**Author Contributions:** Conceptualization, Z.W.; methodology, Z.W.; software, Z.W.; validation, L.S. and J.C.; formal analysis, Z.W.; investigation, L.S.; resources, Z.W.; data curation, L.S.; writing—original draft preparation, Z.W.; writing—review and editing, Z.W.; visualization, L.S.; supervision, J.C.; project administration, Z.W. and J.C.; funding acquisition, Z.W. and J.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Natural Science Foundation of China (NSFC) under grant Nos. 12101266 and 11901266.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** All the data were computed using our algorithms.

**Conflicts of Interest:** The authors declare no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


**Baoli Yin \*, Guoyu Zhang, Yang Liu and Hong Li**

School of Mathematical Sciences, Inner Mongolia University, Hohhot 010021, China; guoyu\_zhang@imu.edu.cn (G.Z.); mathliuyang@imu.edu.cn (Y.L.); smslh@imu.edu.cn (H.L.)

**\*** Correspondence: baolimath@126.com

**Abstract:** An exponential-type function is introduced to transform known difference formulas, via a shift parameter *θ*, into approximations of fractional calculus operators. In contrast to the known *θ* methods obtained by polynomial-type transformations, our exponential-type *θ* methods have, in theory, no restrictions on the range of *θ* for the resultant scheme to be asymptotically stable. As an application to the subdiffusion problem, the second-order fractional backward difference formula is transformed, and correction terms are designed to maintain the optimal second-order accuracy in time. The obtained exponential-type scheme is robust in that it remains accurate even for very small *α* and can naturally resolve the initial singularity provided $\theta = -\frac{1}{2}$, both of which are demonstrated rigorously. All theoretical results are confirmed by extensive numerical tests.

**Keywords:** theta methods; subdiffusion problem; fractional calculus; backward difference formula; convolution quadrature

#### **1. Introduction**

Diffusion is one of the most common phenomena of the physical world in which a particle's motion is Brownian and can be characterized by the classical model *∂tu* − Δ*u* = *f* . It is well known that Brownian motions assume that mean-squared particle displacements grow linearly with respect to time *t*, whereas an increasing list of experiments in the last decades indicates that such growths can be sublinear or superlinear; i.e., the diffusion can be anomalous. From a macro-perspective, the probability density function *u* in anomalous diffusion obeys the equation involving a fractional order derivative [1,2]. In this work, we concern ourselves with the subdiffusion transport mechanism (with the fractional derivative order *α* ∈ (0, 1)), which has received much attention in recent years, since the electron transport, thermal diffusion, and protein transport, among others, reveal that the underlying stochastic process is the continuous time random walk instead of Brownian motions [1–4]. Perhaps the simplest subdiffusion model [2,5] takes the following form:

$$
\partial\_t^{\alpha}u(\mathbf{x}, t) - \Delta u(\mathbf{x}, t) = f(\mathbf{x}, t), \tag{1}
$$

with suitable initial and boundary conditions. Here, $\partial\_t^{\alpha}$ denotes the Caputo fractional operator [6] of order *α* ∈ (0, 1):

$$\left(\partial\_t^{\alpha}\phi\right)(t) = \frac{1}{\Gamma(1-\alpha)} \int\_0^t \frac{\phi'(s)}{(t-s)^{\alpha}}ds,$$

which satisfies $\partial\_t^{\alpha}\phi = D\_t^{\alpha}(\phi - \phi(0))$, where $D\_t^{\alpha}$ is the Riemann–Liouville fractional differential operator [6]:

$$(D\_t^{\alpha}\phi)(t) = \frac{1}{\Gamma(1-\alpha)}\frac{d}{dt}\int\_0^t \frac{\phi(s)}{(t-s)^{\alpha}}\,ds.$$

**Citation:** Yin, B.; Zhang, G.; Liu, Y.; Li, H. The Construction of High-Order Robust Theta Methods with Applications in Subdiffusion Models. *Fractal Fract.* **2022**, *6*, 417. https://doi.org/10.3390/ fractalfract6080417

Academic Editor: Paul Eloe

Received: 10 June 2022 Accepted: 26 July 2022 Published: 29 July 2022


Its integral counterpart, the Riemann–Liouville fractional integral operator [6] $D\_t^{-\alpha}\phi$, is defined by

$$(D\_t^{-\alpha}\phi)(t) = \frac{1}{\Gamma(\alpha)} \int\_0^t \frac{\phi(s)}{(t-s)^{1-\alpha}} ds.$$
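The integral can be evaluated numerically once the weak singularity is removed; a minimal sketch (the substitution $v = (t-s)^{\alpha}$ and the midpoint rule are our illustration, not a method from this paper), validated against the closed form $D\_t^{-\alpha}t = t^{1+\alpha}/\Gamma(2+\alpha)$:

```python
import math

def rl_integral(phi, alpha, t, N=2000):
    """Riemann-Liouville fractional integral (D_t^{-alpha} phi)(t).
    The substitution v = (t - s)**alpha absorbs the weak kernel singularity,
    leaving a smooth integrand handled by the plain midpoint rule."""
    V = t ** alpha
    h = V / N
    s = sum(phi(t - (h * (i + 0.5)) ** (1.0 / alpha)) for i in range(N))
    return h * s / (alpha * math.gamma(alpha))

alpha = 0.5
approx = rl_integral(lambda s: s, alpha, 1.0)
exact = 1.0 / math.gamma(2.0 + alpha)   # D^{-alpha} of phi(t)=t is t^{1+alpha}/Gamma(2+alpha)
print(round(approx, 4), round(exact, 4))
```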

The literature on numerical approximation of fractional calculus operators is vast. The authors of [7,8] proposed the well-known L1 method for the Caputo fractional derivative, which is convergent of order 2 − *α*. Lubich [9] systematically developed the convolution quadrature (CQ) theory for discretizing the operators $D\_t^{\alpha}$ and $D\_t^{-\alpha}$. The well-known fractional linear multistep methods, which include the Grünwald formula [6] and the *p*th-order fractional backward difference formulas (BDF-*p*) as special cases, belong to this framework. Some other difference formulas that essentially fit the framework of convolution quadrature can be found in [10–12], to mention just a few. In [13], the authors developed the shifted Grünwald formula to overcome the instability of the Grünwald formula when applied to fractional advection–dispersion flow equations. Galeone and Garrappa [14] devised explicit multistep methods for the fractional derivative and examined their stability properties in detail. By weighting and averaging the shifted Grünwald formula, Tian et al. [15] proposed the weighted and shifted Grünwald difference formulas for space-fractional Riemann–Liouville derivatives. Ding et al. [16] built a second-order midpoint formula by shifting the fractional BDF-2 and applied it to fractional cable equations. In [17], the authors investigated shifted convolution quadrature (SCQ) methods in detail, aiming to develop *θ* methods systematically, where a polynomial-type transformation strategy was proposed to convert any known CQ method into a *θ* method. However, the existence of zeros of polynomials severely restricts the choice of the parameter *θ* such that the method is A-stable. In this work, we propose a novel transformation strategy based on an exponential-type function, illustrating its superiority in developing A-stable methods and robust numerical schemes for subdiffusion problems.
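For the Grünwald formula mentioned above, the CQ weights come from the generating function $(1-\zeta)^{\alpha}$ and obey a simple recurrence; a minimal sketch (our illustration of first-order CQ, not code from any cited work), tested on $u(t) = t$, whose Riemann–Liouville derivative is $t^{1-\alpha}/\Gamma(2-\alpha)$:

```python
import math

def gl_weights(alpha, n):
    """Grünwald weights g_j = (-1)^j * C(alpha, j), generated by (1 - zeta)^alpha,
    computed with the standard recurrence g_j = (1 - (alpha + 1)/j) * g_{j-1}."""
    g = [1.0]
    for j in range(1, n + 1):
        g.append((1.0 - (alpha + 1.0) / j) * g[-1])
    return g

def gl_derivative(u, alpha, t, N):
    """First-order convolution quadrature (Grünwald formula) approximating
    the Riemann-Liouville derivative (D_t^alpha u)(t) on a uniform mesh."""
    tau = t / N
    g = gl_weights(alpha, N)
    return tau ** (-alpha) * sum(g[j] * u(t - j * tau) for j in range(N + 1))

alpha = 0.5
exact = 1.0 / math.gamma(2.0 - alpha)        # D_t^alpha of u(t) = t at t = 1
errs = [abs(gl_derivative(lambda s: s, alpha, 1.0, N) - exact)
        for N in (40, 80, 160)]
print(errs)  # errors decrease under refinement (first-order accuracy)
```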

Clearly, the fractional operators mentioned above involve a weakly singular kernel $s^{-\gamma}$ for some $\gamma \in (0,1)$, which renders numerically solving subdiffusion problems rather difficult, since most high-order difference formulas, if directly applied to such problems on uniform meshes, lose their expected high accuracy [18–20]. To resolve this difficulty, modified difference formulas that add correction terms [21–25] or use nonuniform meshes [26,27] have been developed, to mention just a few. Specifically, Yan et al. [21] developed the modified L1 method by adding correction terms to recover the optimal convergence order 2 − *α*, while Jin et al. [22] established the corrected fractional BDF-*p* to restore the high accuracy. By shifting the approximation point by $\frac{\alpha}{2}$ with respect to the grid points, Jin et al. [23] designed a two-step correction method for the fractional Crank–Nicolson scheme, and this correction technique was further optimized by Wang et al. [24], where only a first-step correction is needed to maintain the optimal accuracy. In particular, we studied a general second-order difference scheme for (1) in [25], which is generated by an SCQ difference formula with a free parameter $\theta \in (0, \frac{1}{2})$ and can preserve the high accuracy if correction terms are added. The value $\theta = \frac{1}{2}$ is excluded there because of the singularity of the corresponding transform functions involved in the theoretical analysis. Indeed, the case $\theta = \frac{1}{2}$ is of special interest, since the correction terms vanish, suggesting that a carefully designed time-stepping method, even on uniform meshes, should automatically resolve the singularity. A close examination, as shown in this study, indicates that the singularity of the transform functions stems from the zeros of polynomials, which can be avoided by using exponential-type transform functions.
To sum up, the contribution of this study comes from three aspects:


The rest of the article is outlined as follows. In Section 2, we first review some basic aspects of the SCQ and then determine the stability region of *θ* methods when applied to a simple differential equation. In Section 3, we propose the exponential-type transformation to convert known stepping methods into *θ* methods and demonstrate its superiority over traditional polynomial-type transformations. In Section 4, the exponentially transformed fractional BDF-2 is applied to subdiffusion problems where correction terms are designed to recover the optimal convergence rate, which are confirmed rigorously by theoretical analyses. Extensive numerical tests are offered in Section 5 to verify all theoretical results. Finally, in Section 6, we make some concluding remarks.

#### **2. Preliminaries**

#### *2.1. Review of θ-Methods in SCQ*

The construction of novel robust *θ* methods is based on the framework of shifted convolution quadrature [28] for the fractional calculus, which will be introduced briefly in this subsection.

Divide the time interval $[0, T]$ by the grid $0 = t_0 < t_1 < \cdots < t_N = T$ with $t_n = n\tau$ and $\tau = T/N$. Let $\phi^n$ denote the value $\phi(t_n)$ for the sake of simplicity. Given a sequence $\{\omega_j\}_{j=0}^{\infty}$, the difference formula

$$D^{\alpha,n}_{\tau,\theta}\phi := \tau^{-\alpha} \sum_{j=0}^{n} \omega_j \phi^{n-j} \tag{2}$$

represents an approximation to the Riemann–Liouville derivative $(D^{\alpha}_{t}\phi)(t_{n-\theta})$ if the generating function $\omega(\zeta)$, defined by $\omega(\zeta) = \sum_{j=0}^{\infty} \omega_j \zeta^j$ for $|\zeta| < 1$, satisfies

$$\text{(i) Stability: } \omega_n = O(n^{-\alpha-1}), \quad \text{(ii) Consistency: } \tau^{-\alpha}e^{\theta\tau}\omega(e^{-\tau}) - 1 = o(1), \tag{3}$$

simultaneously.

**Lemma 1** (See [28], Theorem 1)**.** *The difference Formula (2) is pth-order convergent if and only if both the stability condition in (3) and the following consistency condition*

$$\text{Consistency of order } p \text{: } \tau^{-\alpha} e^{\theta\tau} \omega(e^{-\tau}) - 1 = O(\tau^p) \tag{4}$$

*are fulfilled.*

It is notable that if the shift parameter $\theta$ vanishes in (2) or (4), meaning that a difference formula $D^{\alpha,n}_{\tau}\phi := D^{\alpha,n}_{\tau,0}\phi$ is designed for $D^{\alpha}_{t}\phi$ at the grid point $t_n$, then one essentially obtains approximation methods belonging to convolution quadrature (CQ) theory, which was partially founded in [9] for approximating fractional calculus and then extended to more general convolution-type operators [29,30]. In previous studies, we have extended several traditional difference formulas, such as the fractional BDF-2 [31], the fractional trapezoidal rule [17], and the fractional Adams–Moulton method [32], among others, to their generalized versions by involving the shift parameter $\theta$. Moreover, a conversion strategy was proposed in [28] to transform a difference formula $D^{\alpha,n}_{\tau}\phi$ into $D^{\alpha,n}_{\tau,\theta}\phi$, which, from the viewpoint of generating function reconstruction, can be stated as follows:

$$
\omega(\zeta) = \varpi_p(\zeta)\Theta(\zeta;\theta), \quad \Theta(\zeta;\theta) = \gamma_0 + \gamma_1(1-\zeta) + \gamma_2(1-\zeta)^2 + \dots + \gamma_{p-1}(1-\zeta)^{p-1}, \tag{5}
$$

where $\varpi_p(\zeta) = \sum_{j=0}^{\infty} \varpi_j \zeta^j$ represents the generating function with the weights $\varpi_j$ of $D^{\alpha,n}_{\tau}\phi$, which is convergent of order $p$. The $\gamma_j$s are obtained from the identity $\sum_{i=0}^{\infty} \gamma_i (1-\zeta)^i = \zeta^{\theta}$. Specifically, the second-, third- and fourth-order transformed generating functions take the following forms:

$$
\omega(\zeta) = \begin{cases}
\varpi_2(\zeta)\left[(1-\theta) + \theta\zeta\right], & \text{for 2nd order,} \\
\varpi_3(\zeta)\left[\frac{1}{2}(1-\theta)(2-\theta) + \theta(2-\theta)\zeta + \frac{1}{2}\theta(\theta-1)\zeta^2\right], & \text{for 3rd order,} \\
\varpi_4(\zeta)\left[\frac{1}{6}(1-\theta)(2-\theta)(3-\theta) + \frac{1}{2}\theta(2-\theta)(3-\theta)\zeta \right. & \\
\left.\qquad -\,\frac{1}{2}\theta(1-\theta)(3-\theta)\zeta^2 + \frac{1}{6}\theta(1-\theta)(2-\theta)\zeta^3\right], & \text{for 4th order,}
\end{cases} \tag{6}
$$

where $\varpi_p(\zeta)$ ($p = 2, 3, 4$) stands for any generating function in CQ that is convergent of order $p$, for example:

$$\begin{aligned} \varpi_2(\zeta) &= \begin{cases} \left(\frac{3}{2} - 2\zeta + \frac{1}{2}\zeta^2\right)^{\alpha}, & \text{fractional BDF-2,} \\ (1-\zeta)^{\alpha}\left[1 + \frac{\alpha}{2}(1-\zeta)\right], & \text{2nd-order Newton–Gregory formula,} \end{cases} \\ \varpi_3(\zeta) &= \begin{cases} \left(\frac{11}{6} - 3\zeta + \frac{3}{2}\zeta^2 - \frac{1}{3}\zeta^3\right)^{\alpha}, & \text{fractional BDF-3,} \\ (1-\zeta)^{\alpha}\left[1 + \frac{\alpha}{2}(1-\zeta) + \frac{\alpha(3\alpha+5)}{24}(1-\zeta)^2\right], & \text{3rd-order Newton–Gregory formula,} \end{cases} \\ \varpi_4(\zeta) &= \begin{cases} \left(\frac{25}{12} - 4\zeta + 3\zeta^2 - \frac{4}{3}\zeta^3 + \frac{1}{4}\zeta^4\right)^{\alpha}, & \text{fractional BDF-4,} \\ (1-\zeta)^{\alpha}\left[1 + \frac{\alpha}{2}(1-\zeta) + \frac{\alpha(3\alpha+5)}{24}(1-\zeta)^2 + \frac{\alpha(\alpha+2)(\alpha+3)}{48}(1-\zeta)^3\right], & \text{4th-order Newton–Gregory formula.} \end{cases} \end{aligned} \tag{7}$$

We also mention that transformation (5) indicates that the function $\phi(t) = D^{0}_{t}\phi$ at time $t_{n-\theta}$ can be approximated, in accordance with (2), by the formula $\sum_{j=0}^{n} \theta_j \phi^{n-j}$ with the weights $\{\theta_j\}_{j=0}^{\infty}$ generated by $\Theta(\zeta;\theta)$, where the identity $\varpi_p(\zeta) \equiv 1$ is prescribed.
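To make the conversion concrete, the $\gamma_i$ in (5) can be generated from the generalized binomial expansion of $\zeta^{\theta}$. The following Python sketch (our own illustration, not code from the paper; all function names are ours) builds the truncated transform in the power basis and checks it against the closed second- and third-order brackets in (6):

```python
import numpy as np

def binom(t, i):
    """Generalized binomial coefficient C(t, i) for real t."""
    out = 1.0
    for k in range(i):
        out *= (t - k) / (k + 1)
    return out

def Theta_coeffs(theta, p):
    """Power-basis coefficients of Theta(zeta; theta) = sum_{i<p} gamma_i (1-zeta)^i,
    with gamma_i taken from the expansion zeta^theta = sum_i gamma_i (1-zeta)^i."""
    c = np.zeros(p)
    for i in range(p):
        gamma_i = (-1)**i * binom(theta, i)
        for k in range(i + 1):  # expand gamma_i * (1-zeta)^i into powers of zeta
            c[k] += gamma_i * binom(i, k) * (-1)**k
    return c

theta = 0.3
# second-order bracket of (6): (1-theta) + theta*zeta
assert np.allclose(Theta_coeffs(theta, 2), [1 - theta, theta])
# third-order bracket of (6)
expected = [0.5 * (1 - theta) * (2 - theta), theta * (2 - theta), 0.5 * theta * (theta - 1)]
assert np.allclose(Theta_coeffs(theta, 3), expected)
```

Higher orders follow the same pattern, with one more term of the binomial series per order.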

#### *2.2. Stability Regions*

Historically, Lubich [33] proved that when a convolution quadrature method (with generating function $\widetilde{\omega}(\zeta) = \sum_{j=0}^{\infty} \widetilde{\omega}_j \zeta^j$) is used to solve the linear Abel integral equation

$$u(t) = f(t) + \frac{\lambda}{\Gamma(\alpha)} \int\_0^t (t - s)^{\alpha - 1} u(s) ds,\tag{8}$$

where $f(t)$ has a finite limit as $t \to \infty$, the stability region $S$ is precisely determined by

$$\mathbb{C} \backslash \left\{ 1/\widetilde{\omega}(\zeta) : |\zeta| \le 1 \right\},\tag{9}$$

if the weights $\widetilde{\omega}_n$ fulfill the following condition.

$$
\widetilde{\omega}_n = \frac{n^{\alpha-1}}{\Gamma(\alpha)} + \pi_n, \quad n \ge 1, \quad \text{with } \sum_{n=1}^{\infty} |\pi_n| < \infty. \tag{10}
$$

Instead of (8), we concern ourselves in this work with the following fractional differential equation:

$$
\partial_t^{\alpha} u = \lambda u + g(t), \quad \alpha \in (0,1) \text{ and } \Re(\lambda) < 0,\tag{11}
$$

where $g(t)$ decays exponentially. Resorting to any $\varpi_p(\zeta)$ in CQ, the numerical scheme reads as follows:

$$\sum_{j=0}^{n} \varpi_{n-j}(U^j - U^0) = \bar{\tau}\left(U^n + \frac{1}{\lambda} g^n\right), \quad n \ge n_0, \tag{12}$$

where $\bar{\tau} = \lambda\tau^{\alpha}$. We next identify conditions on $\varpi_j$ or its generating function $\varpi_p(\zeta)$ to determine the related stability region. It is worth noting that the first few steps ($n = 1, 2, \cdots, n_0 - 1$) of (12) may need to be treated separately, e.g., by adding correction terms [9], to retain the high accuracy in cases where high-order methods are adopted.

Some assumptions on $\varpi_p(\zeta)$ are needed.

$$\begin{aligned} \text{(A1) } &\tau^{-\alpha}\varpi_p(e^{-\tau}) - 1 = O(\tau^p),\\ \text{(A2) } &\varpi_p(\zeta) = (1-\zeta)^{\alpha}\ell(\zeta), \quad \ell(\zeta) \text{ is nonzero and analytic for } \zeta \in \{\zeta : |\zeta| \le 1\}. \end{aligned} \tag{13}$$

Note that the most well-known implicit CQ methods meet assumptions (A1) and (A2), e.g., the fractional BDF-$p$ and the Newton–Gregory formulas, among others. Assumption (A2) actually implies that $\varpi_n = O(n^{-\alpha-1})$.
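As a quick numerical illustration of (A1) (a sketch we add here, not part of the paper), the consistency defect of the fractional BDF-2 generating function can be checked to decay at the expected second-order rate:

```python
import numpy as np

# Empirical check of (A1) for the fractional BDF-2 generating function
# varpi(zeta) = (3/2 - 2*zeta + zeta^2/2)^alpha: the consistency defect
# tau^{-alpha} * varpi(e^{-tau}) - 1 should shrink like O(tau^2).
alpha = 0.5

def defect(tau):
    z = np.exp(-tau)
    return (1.5 - 2*z + 0.5*z**2)**alpha / tau**alpha - 1.0

ratio = defect(0.1) / defect(0.05)  # halving tau should divide the defect by ~4
assert 3.5 < ratio < 4.2
```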

Given $\varpi_p(\zeta)$ that fulfills (A1) and (A2), introduce the sequence $\{\varpi_j^{(-1)}\}_{j=0}^{\infty}$ generated by $1/\varpi_p(\zeta)$. The next lemma shows that $\tau^{\alpha}\sum_{j=0}^{n} \varpi_j^{(-1)}\phi^{n-j}$ is an approximation to $D^{-\alpha}_{t}\phi$ at $t_n$. For a better presentation, the proofs of the lemmas in this section are left to Appendix A.

**Lemma 2.** *Let $\varpi_p(\zeta)$ satisfy assumptions (A1) and (A2). Then, $\tau^{\alpha}\sum_{j=0}^{n} \varpi_j^{(-1)}\phi^{n-j}$ approximates $(D^{-\alpha}_{t}\phi)(t_n)$ with convergence order $p$, i.e., $\varpi_n^{(-1)} = O(n^{\alpha-1})$ and $\tau^{\alpha}/\varpi_p(e^{-\tau}) - 1 = O(\tau^p)$.*
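Lemma 2 is easy to probe numerically. The sketch below (our illustration; the J.C.P. Miller recurrence for powers of a polynomial is a standard way to obtain such weights, and is the $\theta = 0$ special case of the recursion in Lemma 4 below) computes the weights of $1/\varpi_p(\zeta)$ for the fractional BDF-2 and tests the resulting quadrature for $(D^{-\alpha}_{t}\,t^2)(1) = 2/\Gamma(3+\alpha)$:

```python
import numpy as np
from math import gamma

def power_coeffs(P, beta, N):
    """Series coefficients of P(zeta)^beta by the J.C.P. Miller recurrence."""
    v = np.zeros(N + 1)
    v[0] = P[0]**beta
    for n in range(1, N + 1):
        s = sum(((beta + 1) * k - n) * P[k] * v[n - k]
                for k in range(1, len(P)) if k <= n)
        v[n] = s / (n * P[0])
    return v

alpha = 0.4
P = [1.5, -2.0, 0.5]  # fractional BDF-2: varpi_p(zeta) = P(zeta)^alpha

def quad_error(N):
    """Error of tau^alpha * sum_j varpi_j^(-1) phi^{N-j} for phi(t) = t^2 at t = 1."""
    tau = 1.0 / N
    w = power_coeffs(P, -alpha, N)           # weights of 1/varpi_p(zeta)
    phi = (tau * np.arange(N + 1))**2
    approx = tau**alpha * np.convolve(w, phi)[N]
    return abs(approx - 2.0 / gamma(3 + alpha))

assert quad_error(128) < quad_error(64) < 5e-3
```

Since $\phi(t) = t^2$ vanishes together with its first derivative at $t = 0$, no correction terms are needed and the clean second-order decay of the error is visible already on coarse grids.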

**Lemma 3.** *Let $\varpi_p(\zeta)$ satisfy assumptions (A1) and (A2). The stability region $S$ for (12) is determined by the following.*

$$\mathbb{C} \backslash \{ \varpi_p(\zeta) : |\zeta| \le 1 \}. \tag{14}$$

#### **3. Novel Transformation Strategy**

Instead of transformation (5) in which a polynomial-type function Θ(*ζ*; *θ*) is involved, we shall, in this section, propose a different strategy by resorting to an exponential-type transform function and demonstrate that the new strategy is more robust by allowing a wider range of *θ* to guarantee the stability of schemes in solving fractional differential equations.

Let $\delta(\zeta) = \sum_{j=1}^{p} \frac{1}{j}(1-\zeta)^j$ denote the generating function of the backward difference formula (BDF) of order $p \le 6$.

**Theorem 1** (Exponential-type transformation)**.** *Let $\theta \in \mathbb{R}$ and assume $\varpi_p(\zeta)$ fulfills (A1) and (A2). Define the following:*

$$
\omega(\zeta) = \varpi\_p(\zeta)e^{\theta\delta(\zeta)},\tag{15}
$$

*then, the difference Formula (2) with weights generated by ω*(*ζ*) *is convergent of order p.*

**Proof.** Clearly, the function $e^{\theta\delta(\zeta)}$ (with respect to $\zeta$) is analytic within the unit disc $|\zeta| < 1$ and is $k$-times differentiable on the unit circle for any positive integer $k$; thus, its Fourier coefficients, i.e., the $e_n$ generated from $e^{\theta\delta(\zeta)} = \sum_{n=0}^{\infty} e_n \zeta^n$, decay faster than, e.g., $O(n^{-k})$. Then, the asymptotic property of $\omega_n$ is fully determined by $\varpi_n$, which, by assumption (A2), leads to the following.

$$
\omega_n = O(n^{-\alpha-1}).\tag{16}
$$

Moreover, by the consistency condition of order $p$ for $\varpi_p(\zeta)$ due to (A1) and that of $\delta(\zeta)$, the following holds.

$$
\tau^{-\alpha}\varpi_p(e^{-\tau}) = 1 + O(\tau^p), \quad \tau^{-1}\delta(e^{-\tau}) = 1 + O(\tau^p).
$$

Using the Taylor expansion, one immediately obtains the following:

$$
\tau^{-\alpha}e^{\theta\tau}\omega(e^{-\tau}) = \tau^{-\alpha}\varpi_p(e^{-\tau})\, e^{\theta\tau} e^{\theta\delta(e^{-\tau})} = 1 + O(\tau^p),\tag{17}
$$

indicating that *ω*(*ζ*), as a generating function in SCQ, is consistent of order *p* as well. Finally, by (16) and (17), we complete the proof of the theorem in accordance with Lemma 1.

**Remark 1.** *In view of the fact that $\phi(t_{n-\theta}) = (D^{0}_{t}\phi)(t_{n-\theta})$, Theorem 1 actually permits us to approximate $\phi(t_{n-\theta})$ by a discrete convolution.*

$$\phi^{n-\theta} := \sum_{j=0}^{n} \theta_j \phi^{n-j}, \quad \text{where } \theta_j \text{ is generated by } \sum_{j=0}^{\infty} \theta_j \zeta^j = e^{\theta\delta(\zeta)}. \tag{18}$$

*As demonstrated in the arguments above, $\theta_n$ decays faster than $O(n^{-k})$ for any integer $k > 0$. Indeed, since $e^{\theta\delta(\zeta)}$ is analytic for $|\zeta| < \rho$ for some $\rho > 1$, $\theta_n$ decays exponentially.*
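The exponential decay of the $\theta_n$ is easy to observe numerically. In the sketch below (our own illustration, not from the paper), the coefficients of $e^{\theta\delta(\zeta)}$ for the second-order $\delta(\zeta) = \frac{3}{2} - 2\zeta + \frac{1}{2}\zeta^2$ are generated from the differential relation $(e^{\theta\delta})' = \theta\delta' e^{\theta\delta}$:

```python
import numpy as np

theta, N = 0.4, 60
dd = [-2.0, 1.0]                 # coefficients of delta'(zeta)
w = np.zeros(N + 1)
w[0] = np.exp(theta * 1.5)       # e^{theta * delta(0)}
for n in range(1, N + 1):
    # matching coefficients in f' = theta * delta' * f gives n*w_n = theta*[delta'*w]_{n-1}
    s = sum(dd[k] * w[n - 1 - k] for k in range(2) if n - 1 - k >= 0)
    w[n] = theta * s / n

# the truncated series reproduces e^{theta*delta(zeta)}, and the tail is negligible
z = 0.2
target = np.exp(theta * (1.5 - 2 * z + 0.5 * z**2))
assert abs(sum(w[j] * z**j for j in range(N + 1)) - target) < 1e-12
assert abs(w[50]) < 1e-20
```

This matches the behavior reported in Figure 2b: in practice only the first few dozen weights are ever needed.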

For the purpose of application, it is of interest to present efficient algorithms to calculate the coefficients of *ω*(*ζ*) in (15). The next lemma offers an algorithm by which *ω<sup>j</sup>* can be obtained in a recursive manner.

**Lemma 4.** *Assume $\omega(\zeta)$ takes the form $\left[P(\zeta)\right]^{\alpha} e^{\theta Q(\zeta)}$, where $P(\zeta)$ and $Q(\zeta)$ are polynomials such that $\omega(\zeta)$ is analytic within the unit disc $|\zeta| < 1$; then, we obtain the following:*

$$
\omega_0 = \left[P(0)\right]^{\alpha} e^{\theta Q(0)}, \quad \omega_n = \frac{1}{nP(0)} \left[\omega_0 G_{n-1} + \sum_{k=1}^{n-1} \omega_{n-k} \left(G_{k-1} - (n-k)P_k\right)\right], \quad n \ge 1,\tag{19}
$$

*where $G_k$ denotes the coefficients of $G(\zeta)$ defined by $G(\zeta) = \alpha P'(\zeta) + \theta P(\zeta)Q'(\zeta)$.*

**Proof.** Take the derivative of $\omega(\zeta) = \left[P(\zeta)\right]^{\alpha} e^{\theta Q(\zeta)}$ with respect to $\zeta$ and multiply both sides by $P(\zeta)$ to obtain the following.

$$P(\zeta)\omega'(\zeta) = \omega(\zeta)G(\zeta).$$

Formula (19) then follows by equating the coefficients of $\zeta^{n-1}$ on both sides of the above equality.

**Remark 2.** *It is notable that Algorithm (19) is efficient since $G(\zeta)$ and $P(\zeta)$ have finitely many nonzero coefficients; thus, the computational complexity to obtain $\{\omega_j\}_{j=0}^{N}$ is $O(N)$.*
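A direct implementation of the recursion (19) might look as follows (a Python sketch under our own naming; the inner loop is written over all $k$ for clarity, but only the finitely many indices with $G_{k-1} \neq 0$ or $P_k \neq 0$ actually contribute, which is what yields the $O(N)$ complexity of Remark 2):

```python
import numpy as np

def omega_weights(P, Q, alpha, theta, N):
    """First N+1 coefficients of omega(zeta) = P(zeta)^alpha * exp(theta*Q(zeta))
    by the recursion (19); P and Q list coefficients in increasing powers of zeta."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    dP = P[1:] * np.arange(1, len(P))            # P'(zeta)
    dQ = Q[1:] * np.arange(1, len(Q))            # Q'(zeta)
    G = theta * np.convolve(P, dQ)               # theta * P * Q'
    if len(G) < len(dP):
        G = np.pad(G, (0, len(dP) - len(G)))
    G[:len(dP)] += alpha * dP                    # G = alpha*P' + theta*P*Q'
    coef = lambda a, k: a[k] if 0 <= k < len(a) else 0.0
    w = np.zeros(N + 1)
    w[0] = P[0]**alpha * np.exp(theta * Q[0])
    for n in range(1, N + 1):
        s = w[0] * coef(G, n - 1)
        for k in range(1, n):
            s += w[n - k] * (coef(G, k - 1) - (n - k) * coef(P, k))
        w[n] = s / (n * P[0])
    return w

# transformed fractional BDF-2 of Section 4: omega = delta^alpha * e^{theta*delta}
delta = [1.5, -2.0, 0.5]
alpha, theta = 0.5, 0.3
w = omega_weights(delta, delta, alpha, theta, 60)

# the truncated series reproduces omega(zeta) inside the unit disc
z = 0.1
dz = 1.5 - 2 * z + 0.5 * z**2
assert abs(sum(wj * z**j for j, wj in enumerate(w)) - dz**alpha * np.exp(theta * dz)) < 1e-12
```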

In contrast to the polynomial-type transform function $\Theta(\zeta;\theta)$ in (5), the exponential function $e^{\theta\delta(\zeta)}$ has the advantage that it has no zeros for any $\theta \in \mathbb{R}$, whence $e^{-\theta\delta(\zeta)}$ can always be expanded into a series without limiting the range of $\theta$. The immediate consequence is that, in designing $A(\vartheta)$-stable schemes, the exponential-type transform places no constraint on $\theta$, while the polynomial-type transform may limit the choice of $\theta$ severely, particularly for high-order methods.

To be more specific, consider the following simple trial equation:

$$
\partial_t^{\alpha} u = \lambda u, \quad \alpha \in (0,1) \text{ and } \Re(\lambda) < 0,\tag{20}
$$

with initial condition $u(0) = u_0$. For a given generating function $\varpi_p(\zeta)$ that satisfies assumptions (A1) and (A2), by adopting the polynomial-type transform (5) or the exponential transform of Theorem 1, one obtains the following discrete scheme:

$$\sum_{j=0}^{n} \omega_{n-j}(U^j - U^0) = \bar{\tau}\, U^{n-\theta}, \quad n \ge n_0, \tag{21}$$

where *τ* = *λτ<sup>α</sup>* and *Un*−*<sup>θ</sup>* = ∑*<sup>n</sup> <sup>j</sup>*=<sup>0</sup> *<sup>θ</sup>n*−*jU<sup>j</sup>* . The weights *θj*, depending on the choice of transform strategies, are coefficients of Θ(*ζ*; *θ*) or *eθδ*(*ζ*), respectively.

**Theorem 2.** *Assume that $\varpi_p(\zeta)$ satisfies (A1) and (A2). For $\omega(\zeta) = \varpi_p(\zeta)\Theta(\zeta;\theta)$, the stability region $S$ for (21) is determined by the following:*

$$\mathbb{C} \backslash \{ \varpi_p(\zeta) : |\zeta| \le 1 \}, \tag{22}$$

*provided that $\theta \in \Lambda_{\theta} := \{\theta : \Theta(\zeta;\theta) \neq 0 \text{ for all } |\zeta| \le 1\}$. In contrast, for $\omega(\zeta) = \varpi_p(\zeta)e^{\theta\delta(\zeta)}$, the stability region $S$ is determined by (22) for any $\theta \in \mathbb{R}$.*

**Proof.** Since $\Theta(\zeta;\theta)$ or $e^{\theta\delta(\zeta)}$ is analytic and nonzero for $|\zeta| \le 1$, $1/\Theta(\zeta;\theta)$ or $e^{-\theta\delta(\zeta)}$ can be expanded at $\zeta = 0$ with coefficients $\theta_n^{(-1)}$ that decay exponentially. By replacing $n$ with $k$ in Equation (21), multiplying both sides by $\theta_{n-k}^{(-1)}$ and summing $k$ from $n_0$ to $n$, we obtain the following.

$$\sum_{k=n_0}^{n} \theta_{n-k}^{(-1)} \sum_{j=0}^{k} \omega_{k-j}(U^j - U^0) = \bar{\tau} \sum_{k=n_0}^{n} \theta_{n-k}^{(-1)} U^{k-\theta}.\tag{23}$$

By resorting to the fact that $\varpi_p(\zeta) = \omega(\zeta)\frac{1}{\Theta(\zeta;\theta)}$ or $\varpi_p(\zeta) = \omega(\zeta)e^{-\theta\delta(\zeta)}$ and the Cauchy product of series, one can obtain the following:

$$\sum_{k=0}^{n} \varpi_{n-k}(U^k - U^0) = \bar{\tau}\left(U^n + \frac{1}{\lambda} g^n\right), \tag{24}$$

where *g<sup>n</sup>* takes the following form

$$g^n = \tau^{-\alpha} \sum_{j=0}^{n_0-1} \left[ \sum_{k=0}^{n_0-1-j} \left( \omega_k - \lambda\tau^{\alpha}\theta_k \right) \theta_{n-k-j}^{(-1)} \right] U^j - \tau^{-\alpha} U^0 \sum_{j=0}^{n_0-1} \sum_{k=0}^{n_0-1-j} \theta_{n-k-j}^{(-1)} \omega_{k-j},$$

indicating that *g<sup>n</sup>* decays exponentially. By comparing (24) with (12), one readily obtains result (22) from Lemma 3.

**Remark 3.** *Several methods can be found in the literature [34] for determining $\Lambda_{\theta}$ explicitly. For example, resorting to the Schur criterion (see the Schur polynomial in Appendix A), one can readily obtain the explicit form of $\Lambda_{\theta}$, as shown in Table 1. The sharpness of the constraints on $\theta$ is verified in Example 1 of Section 5.*

**Table 1.** Explicit form of Λ*<sup>θ</sup>* .


#### **4. Applications**

In this section, we apply the exponential-type transformation strategy to the following subdiffusion problem and demonstrate its advantages in developing robust numerical schemes:

$$\begin{cases} \partial\_t^\alpha u(\mathbf{x},t) - \Delta u(\mathbf{x},t) = f(\mathbf{x},t), & (\mathbf{x},t) \in \Omega \times (0,T], \\ u(\mathbf{x},t) = 0, & \mathbf{x} \in \partial\Omega, \ t \in (0,T], \\ u(\mathbf{x},0) = v(\mathbf{x}), & \mathbf{x} \in \Omega, \end{cases} \tag{25}$$

where the space $\Omega \subset \mathbb{R}^d$ ($d = 1, 2, 3$) is a bounded convex polygonal domain with boundary $\partial\Omega$. The operator $\Delta : D(\Delta) \to L^2(\Omega)$ stands for the Laplacian with $D(\Delta) = H_0^1(\Omega) \cap H^2(\Omega)$, and $f : (0,T] \to L^2(\Omega)$ is a given function. The initial function $v$, depending on its smoothness, belongs to $D(\Delta)$ or $L^2(\Omega)$.

#### *4.1. Formulation of Fully Discrete Scheme*

In this section, we take $\delta(\zeta) = \frac{3}{2} - 2\zeta + \frac{1}{2}\zeta^2$ and let $\omega_j$ be generated by $\omega(\zeta) = \left[\delta(\zeta)\right]^{\alpha} e^{\theta\delta(\zeta)}$. In accordance with Theorem 1 (see also Remark 1), $\phi^{n-\theta}$ and $D^{\alpha,n}_{\tau,\theta}\phi$ are both of second-order accuracy relative to their continuous counterparts. To formulate the fully discrete scheme of the model, define the finite element space as follows:

$$V_h = \{\chi_h \in H_0^1(\Omega) : \chi_h|_e \text{ is a linear polynomial function}, \ e \in \mathcal{T}_h\},$$

where $\mathcal{T}_h$ is a shape-regular, quasi-uniform triangulation of $\Omega$.

Let $P_h : L^2(\Omega) \to V_h$ and $R_h : H_0^1(\Omega) \to V_h$ stand for the $L^2(\Omega)$ and Ritz projections, respectively, and define $\Delta_h : V_h \to V_h$ as the discrete Laplacian. By replacing $u(t)$ with $w(t) + v$ and $f(t)$ with $g(t) + f(0)$ in (25), the space semi-discrete scheme then reads as follows:

$$D_t^{\alpha} w_h(t) - \Delta_h w_h(t) = g_h(t) + f_h^0 + \Delta_h v_h, \tag{26}$$

where $g_h := P_h g$, $f_h^0 = P_h f(0)$, and $v_h = R_h v$ if $v \in D(\Delta)$ or $v_h = P_h v$ if $v \in L^2(\Omega)$. The fully discrete scheme can thus be stated as finding $W_h^n \in V_h$ such that the following is the case.

$$D^{\alpha,n}_{\tau,\theta} W_h - \Delta_h W_h^{n-\theta} = g_h^{n-\theta} + f_h^0 + \Delta_h v_h, \quad n \ge 1, \quad \theta \in (-1, 1). \tag{27}$$

In general cases, scheme (27) can only result in first-order convergence rates at positive times due to the initial singularity of the solution. We propose a modified scheme, with the motivation explained in the next section, by resorting to a single-step correction.

$$\begin{split} D^{\alpha,1}_{\tau,\theta} W_h - \Delta_h W_h^{1-\theta} &= (\theta + 3/2)\left( \Delta_h v_h + f_h^0 \right) + g_h^{1-\theta}, \quad n = 1, \\ D^{\alpha,n}_{\tau,\theta} W_h - \Delta_h W_h^{n-\theta} &= g_h^{n-\theta} + f_h^0 + \Delta_h v_h, \quad n \ge 2. \end{split} \tag{28}$$

Note that for $\theta = -\frac{1}{2}$, the scheme (28) recovers exactly (27), indicating that (27) can resolve the initial singularity automatically if the problem is discretized at the point $t_{n+\frac{1}{2}}$.
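To show how the pieces of Section 4.1 fit together, here is a self-contained sketch in which the finite element space is replaced by a standard second-order finite difference Laplacian on $(0,\pi)$ (our simplification for illustration, not the paper's discretization; all names are ours). It runs the corrected scheme (28) on Example 2(i)-type data ($f = 0$, $v = \sin x$) and compares the result with the exact solution of the spatially discrete problem, $E_{\alpha}(-\mu t^{\alpha})\sin x$, where $-\mu$ is the discrete eigenvalue associated with $\sin x$:

```python
import numpy as np
from math import gamma

delta = np.array([1.5, -2.0, 0.5])   # generating polynomial of BDF-2
alpha, theta = 0.5, 0.3              # fractional order, shift parameter in (-1, 1)
M, N, T = 64, 64, 1.0                # space intervals, time steps, final time
h, tau = np.pi / M, T / N

def coeffs(a, th, K):
    """First K+1 coefficients of delta(z)^a * exp(th*delta(z)) via recursion (19)."""
    dd = delta[1:] * np.array([1.0, 2.0])     # delta'(z)
    G = th * np.convolve(delta, dd)           # th * delta * delta' (length 4)
    G[:2] += a * dd                           # + a * delta'
    w = np.zeros(K + 1)
    w[0] = delta[0]**a * np.exp(th * delta[0])
    for n in range(1, K + 1):
        s = w[0] * (G[n - 1] if n - 1 < 4 else 0.0)
        for k in range(1, min(n - 1, 4) + 1):
            Gk = G[k - 1] if k - 1 < 4 else 0.0
            Pk = delta[k] if k < 3 else 0.0
            s += w[n - k] * (Gk - (n - k) * Pk)
        w[n] = s / (n * delta[0])
    return w

om = coeffs(alpha, theta, N)   # omega_j, SCQ weights of D^{alpha,n}_{tau,theta}
sh = coeffs(0.0, theta, N)     # theta_j, shift weights generated by e^{theta*delta}

# interior nodes of (0, pi) and the Dirichlet finite difference Laplacian
x = h * np.arange(1, M)
L = (np.diag(-2.0 * np.ones(M - 1)) + np.diag(np.ones(M - 2), 1)
     + np.diag(np.ones(M - 2), -1)) / h**2
v = np.sin(x)                  # initial datum; f = 0, hence g = 0
Lv = L @ v

# corrected scheme (28) for W^n = U^n - v, with W^0 = 0
W = [np.zeros(M - 1)]
A = (om[0] / tau**alpha) * np.eye(M - 1) - sh[0] * L
for n in range(1, N + 1):
    hist = sum(om[n - j] * W[j] for j in range(1, n))         # SCQ history
    shist = sum(sh[n - j] * (L @ W[j]) for j in range(1, n))  # shifted Laplacian history
    c = theta + 1.5 if n == 1 else 1.0                        # single-step correction
    rhs = -hist / tau**alpha + shist + c * Lv
    W.append(np.linalg.solve(A, rhs))
U = W[N] + v

# exact solution of the semi-discrete problem: E_alpha(-mu t^alpha) sin(x)
mu = 4.0 / h**2 * np.sin(h / 2)**2
ml = sum((-mu * T**alpha)**j / gamma(alpha * j + 1) for j in range(80))
assert np.max(np.abs(U - ml * np.sin(x))) < 2e-2
```

Comparing against the semi-discrete reference isolates the time error, which, per Theorem 4, should decay at second order in $\tau$.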

#### *4.2. Optimal Error Estimates*

The error estimate is based on a solution representation and estimates of some kernels. Denote by $\widehat{\phi}$ the Laplace transform of $\phi$. Then, using the Laplace transform and its inverse transform, we obtain the following:

$$w_h(t) = -\frac{1}{2\pi i} \int_{\Gamma_{\sigma,\epsilon}} e^{zt} \Big[ K(z)(\Delta_h v_h + f_h(0)) + zK(z)\widehat{g_h}(z) \Big] \mathrm{d}z,\tag{29}$$

where $K(z) = -z^{-1}(z^{\alpha} - \Delta_h)^{-1}$ stands for the kernel function, and the contour $\Gamma_{\sigma,\epsilon}$ (with the direction of increasing imaginary part) is defined by the following.

$$\Gamma\_{\sigma,\varepsilon} := \{ z \in \mathbb{C} : |z| = \epsilon, |\arg z| \le \sigma \} \cup \{ z \in \mathbb{C} : z = r\epsilon^{\pm i\sigma}, r \ge \epsilon \}.$$

**Theorem 3.** *For $\alpha \in (0,1)$ and $\theta \in (-1,1)$, there exist $\sigma_0 \in (\pi/2, \pi)$ and $\epsilon_0 > 0$, both of which are free of $\alpha$ and $\tau$, such that for any $\sigma \in (\pi/2, \sigma_0)$ and any $\epsilon < \epsilon_0$, the solution of (28) takes the following form:*

$$W_h^n = -\frac{1}{2\pi i} \int_{\Gamma^{\tau}_{\sigma,\epsilon}} e^{zt_n} \left[ \ell(e^{-z\tau}) K(\delta_{\tau}(e^{-z\tau}))(\Delta_h v_h + f_h^0) + \tau\delta_{\tau}(e^{-z\tau}) K(\delta_{\tau}(e^{-z\tau})) g_h(e^{-z\tau}) \right] \mathrm{d}z,\tag{30}$$

*where $\Gamma^{\tau}_{\sigma,\epsilon} = \{z \in \Gamma_{\sigma,\epsilon} : |\Im(z)| \le \pi/\tau\}$, $\delta_{\tau}(\zeta) = \delta(\zeta)/\tau$ and $\ell(\zeta) = \delta(\zeta)\,\zeta\left(\frac{1}{1-\zeta} + \theta + \frac{1}{2}\right)e^{-\theta\delta(\zeta)}$.*

**Proof.** Multiply both sides of (28) by *ζ<sup>n</sup>* and sum the index *n* from 1 to ∞ to yield the following:

$$\sum_{n=1}^{\infty} \zeta^n D_{\tau,\theta}^{\alpha,n} W_h - \sum_{n=1}^{\infty} \zeta^n \Delta_h W_h^{n-\theta} = \sum_{n=1}^{\infty} \zeta^n g_h^{n-\theta} + (f_h^0 + \Delta_h v_h)\left(\sum_{n=1}^{\infty} \zeta^n + (\theta + 1/2)\zeta\right),$$

which leads to the following:

$$\left( \left[ \delta_{\tau}(\zeta) \right]^{\alpha} - \Delta_h \right) W_h(\zeta) = g_h(\zeta) + (f_h^0 + \Delta_h v_h)\kappa(\zeta),$$

where $\kappa(\zeta) = \zeta\left(\frac{1}{1-\zeta} + \theta + \frac{1}{2}\right)e^{-\theta\delta(\zeta)}$. By Lemma B.1 in [22], for a fixed constant $\varphi_0 \in (\pi/2, \pi)$, there exists $\sigma_0 \in (\pi/2, \pi)$, which depends only on $\varphi_0$, such that for any $\sigma \in (\pi/2, \sigma_0)$ and any $\epsilon < \epsilon_0$ with $\epsilon_0$ small enough, $\delta_{\tau}(e^{-z\tau})|_{z \in \Gamma^{\tau}_{\sigma,\epsilon}} \in \Sigma_{\varphi_0} := \{z \in \mathbb{C} : |\arg z| < \varphi_0, z \neq 0\}$. By the Cauchy integral formula, we obtain the expression for $W_h^n$ by the following:

$$W_h^n = \frac{1}{2\pi i} \int_{|\zeta| = \varepsilon} \frac{W_h(\zeta)}{\zeta^{n+1}} \mathrm{d}\zeta \overset{\zeta = e^{-z\tau}}{=} \frac{\tau}{2\pi i} \int_{\Gamma^{\tau}_{\varepsilon}} e^{zt_n} W_h(e^{-z\tau}) \mathrm{d}z,$$

where $\Gamma^{\tau}_{\varepsilon} := \left\{z = -\frac{1}{\tau}\ln\varepsilon + \mathrm{i}y : y \in \mathbb{R}, |y| \le \pi/\tau\right\}$. Let $\mathcal{L}$ be the region enclosed by the contours $\Gamma^{\tau}_{\sigma,\epsilon}$, $\Gamma^{\tau}_{\varepsilon}$, and $\Gamma^{\tau}_{\pm} := \mathbb{R} \pm \mathrm{i}\pi/\tau$ (oriented from left to right); one can check that $W_h(e^{-z\tau})$ is analytic for $z \in \mathcal{L}$. By using the Cauchy integral formula again and noting that the integral values along $\Gamma^{\tau}_{-}$ and $\Gamma^{\tau}_{+}$ are opposite, result (30) follows readily by taking $\ell(\zeta) = \tau\delta_{\tau}(\zeta)\kappa(\zeta)$. The proof is completed.

**Remark 4.** *The arguments for Theorem 3 reveal the superiority of the exponential-type transformation strategy: for arbitrary $\theta$, the transform function $e^{-\theta\delta(\zeta)}|_{\zeta = e^{-z\tau}}$ appearing in $\kappa(\zeta)$ is analytic for $z \in \mathcal{L}$, in contrast to the polynomial-type transform function $\frac{1}{1-\theta+\theta\zeta}|_{\zeta = e^{-z\tau}}$ adopted in [25], which is singular at the points $z = \pm\mathrm{i}\frac{\pi}{\tau} \in \mathcal{L}$ when $\theta = \frac{1}{2}$ (in which case the Crank–Nicolson scheme is excluded). See also [23,24] for similar situations. Therefore, the numerical scheme and the numerical analysis are robust with respect to the shift parameter $\theta$ when the exponential-type transformation strategy is considered. On the other hand, thanks to Theorem 1, the function $\delta_{\tau}(\zeta)$ appearing in (30) is independent of $\alpha$, allowing us to develop robust analyses even for small $\alpha$. We argue that such types of robustness are not available for the schemes in [23–25], as $\delta_{\tau}(\zeta)$ in those schemes is singular at $\alpha = 0$, leading to the blow-up of the constants $C$ in their estimates. See Example 3 in Section 5.*

**Lemma 5.** *Let $\Gamma^{\tau}_{\sigma,\epsilon}$ be the contour defined in Theorem 3. For given $\theta \in (-1,1)$ and any $z \in \Gamma^{\tau}_{\sigma,\epsilon}$, the following holds:*

$$|\ell(e^{-z\tau}) - 1| \le C\tau^2|z|^2,\tag{31}$$

*where $C$ is independent of $\tau$ and $z$, but may depend on $\theta$.*

**Proof.** Since $|z|\tau \le \pi/\sin\sigma < +\infty$, we only need to prove (31) for sufficiently small $|z|\tau$. By the expansion of $\ell(\zeta)$ at the point $\zeta = 1$, we have $\ell(\zeta) = 1 + c(\theta)(1-\zeta)^2 + (1-\zeta)^3 r(\zeta)$, where $r(\zeta)$ is analytic at $\zeta = 1$. One then immediately obtains $\ell(e^{-z\tau}) - 1 = c(\theta)(1 - e^{-z\tau})^2 + O(|z\tau|^3) = O(\tau^2|z|^2)$, which completes the proof of the lemma.

**Theorem 4.** *Suppose that $u_h(t) := w_h(t) + v_h$ is the solution of the space semi-discrete scheme of (25), and $U_h^n := W_h^n + v_h$ is the solution of the fully discrete scheme of (25). If $f \in W^{1,\infty}(0,T; L^2(\Omega))$ and $\int_0^t (t-s)^{\alpha-1}\|f''(s)\|\mathrm{d}s \in L^{\infty}(0,T)$, where $\|\cdot\|$ denotes the $L^2$ norm, then the following is the case:*

$$\|U_h^n - u_h(t_n)\| \le C\tau^2 \left(\mathcal{R}(t_n, v) + t_n^{\alpha-2}\|f(0)\| + t_n^{\alpha-1}\|f'(0)\| + \int_0^{t_n} (t_n - s)^{\alpha-1}\|f''(s)\|\mathrm{d}s\right),\tag{32}$$

*where $\mathcal{R}(t_n, v) = t_n^{\alpha-2}\|\Delta v\|$ if $v \in D(\Delta)$ and $\mathcal{R}(t_n, v) = t_n^{-2}\|v\|$ if $v \in L^2(\Omega)$. The constant $C$ is independent of $\tau$, $\alpha$, $n$, $N$ and $f$, but may depend on $\theta$.*

**Proof.** The technique for this theorem is quite standard and is essentially based on Lemma 5 and the following estimates on $\delta_{\tau}(\zeta)$, which can be found in [22].

$$|\delta_{\tau}(e^{-z\tau}) - z| \le C\tau^2|z|^3, \quad |\delta_{\tau}^{\alpha}(e^{-z\tau}) - z^{\alpha}| \le C\tau^2|z|^{2+\alpha}, \quad C_1|z| \le |\delta_{\tau}(e^{-z\tau})| \le C_2|z|.$$

We omit the details here for reasons of space.

**Remark 5.** *The error u* − *uh of the space semi-discrete scheme (26) has been well studied by researchers and is not our main concern in this article. Interested readers can refer to [35] for more information.*

#### **5. Numerical Tests**

**Example 1.** *In this example, we explore the stability of the numerical scheme (21):*

$$\sum_{j=0}^{n} \omega_{n-j}(U^j - U^0) = \bar{\tau}\, U^{n-\theta}, \quad n \ge n_0,$$

*for the trial Equation (20), in which $n_0 = 1$ and the polynomial-type transformation is adopted, and verify the sharpness of $\Lambda_{\theta}$ in Theorem 2. Let $\lambda = -1$, $\alpha = 0.5$ and fix $\tau = 0.1$. The exact solution of (20) can be expressed by the Mittag–Leffler function [6] $E_{\alpha}(x) := \sum_{j=0}^{\infty} \frac{x^j}{\Gamma(j\alpha+1)}$ as $u(t) = u_0 E_{\alpha}(\lambda t^{\alpha})$.*

*In Figure 1, we illustrate the asymptotic properties of the numerical solutions obtained under different $\theta$ for different numerical methods. The solutions in the first column are obtained under the threshold values $\theta = \frac{1}{2}, \frac{2-\sqrt{2}}{2}, \frac{3-\sqrt{7}}{2}$ (see Table 1), where one can observe, for each case, that the amplitude is invariant as time passes. By taking a value of $\theta$ smaller than its threshold, as shown in the middle column of Figure 1, we obtain numerical solutions that are asymptotically stable, in contrast to the unbounded ones demonstrated in the last column, in which $\theta$ slightly exceeds the threshold.*


**Figure 1.** Justification of the sharpness of Theorem 2 when using the polynomial-type transform function Θ(*ζ*; *θ*) for Example 1. (**a**–**c**): exact solution *u*(*t*) vs. numerical solution *U<sup>n</sup>* obtained by the transformed 2nd-order Newton–Gregory formula. (**d**–**f**): exact solution *u*(*t*) vs. numerical solution *U<sup>n</sup>* obtained by the transformed 3rd-order Newton–Gregory formula. (**g**–**i**): exact solution *u*(*t*) vs. numerical solution *U<sup>n</sup>* obtained by the transformed fractional BDF-4.
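For comparison with the transformed schemes of Example 1, the following sketch (our addition, not one of the paper's experiments) solves the trial Equation (20) with the plain, untransformed fractional BDF-2 ($\theta = 0$, no correction terms) and checks the numerical solution against the truncated Mittag–Leffler series; without corrections, only a coarse agreement is expected near $t = 0$:

```python
import numpy as np
from math import gamma

alpha, lam, u0, T, N = 0.5, -1.0, 1.0, 1.0, 256
tau = T / N

# weights of (3/2 - 2z + z^2/2)^alpha via the J.C.P. Miller recurrence
P = [1.5, -2.0, 0.5]
w = np.zeros(N + 1)
w[0] = P[0]**alpha
for n in range(1, N + 1):
    s = sum(((alpha + 1) * k - n) * P[k] * w[n - k] for k in (1, 2) if k <= n)
    w[n] = s / (n * P[0])

# scheme: sum_{j=0}^n w_{n-j} (U^j - U^0) = lam * tau^alpha * U^n
tbar = lam * tau**alpha
V = np.zeros(N + 1)                    # V^n = U^n - U^0
for n in range(1, N + 1):
    hist = sum(w[n - j] * V[j] for j in range(1, n))
    V[n] = (tbar * u0 - hist) / (w[0] - tbar)
U = V + u0

# reference: u(t) = u0 * E_alpha(lam * t^alpha) via the truncated series
ml = sum((lam * T**alpha)**j / gamma(alpha * j + 1) for j in range(80))
assert abs(U[N] - u0 * ml) < 0.05      # coarse check; no correction terms used
assert 0.0 < U[N] < u0                 # decays from u0, stays positive
```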

**Example 2.** *For the subdiffusion Problem (25), let T* = 1*. Depending on the smoothness of v, we consider two cases:*

*(i) $f = 0$, $v = \sin x \in D(\Delta)$, $\Omega = (0, \pi)$, with the exact solution $u(x,t) = E_{\alpha}(-t^{\alpha})\sin x$; (ii) $f = 0$, $v = \chi_{(0,1/2)}$, $\Omega = (0, 1)$.*

*In Tables 2 and 3, we present the L*<sup>2</sup> *error and convergence rates for different α and θ for schemes (27) and (28), respectively. One observes that scheme (28) with correction terms results in optimal convergence rates while scheme (27) is of first-order accuracy except for θ* = −0.5*, both of which are in line with our theoretical results.*




**Table 3.** *L*<sup>2</sup> error and convergence rates at time *t* = 0.5 of Example 2 (ii).

**Example 3.** *We illustrate the robustness of (28) when $\alpha \to 0$ for the subdiffusion Problem (25). Let $\Omega = (0, \pi)$, $T = 1$ and $u(x,t) = (E_{\alpha}(-t^{\alpha}) + t^3)\sin x$ such that $v = \sin x \in D(\Delta)$. The source term is $f(x,t) = \left(6t^{3-\alpha}/\Gamma(4-\alpha) + t^3\right)\sin x$. In Figure 2a, we illustrate the $L^2$ error of scheme (28) for varying $\alpha$ under different $\theta = -0.5, 0.1, 0.4, 0.8$. In particular, the cases $\theta = 0.1$ and $0.4$ of the scheme in [25] are also presented. Obviously, the scheme (28) is much more robust as $\alpha \to 0$ than the scheme in [25].*

*It may seem strange that, in (18), the term* $\varphi(t\_{n-\theta})$ *is approximated by a nonlocal formula with coefficients* $\theta\_j$*,* $j = 0, 1, \cdots, n$*. We note that* $\theta\_j$ *decays exponentially, as plotted in Figure 2b, so in applications one can retain only the first few values; e.g., the first 50 values are sufficient to guarantee accuracy.*

**Figure 2.** For Example 3. (**a**) Comparison of *L*<sup>2</sup> error between our scheme and that in [25] for different *α*. (**b**) Exponential decay of the weights |*θn*| defined in (18).

#### **6. Conclusions**

A novel exponential-type transformation strategy is proposed to develop robust and accurate difference formulas for fractional derivatives involving a shift parameter *θ*. The advantages of this novel strategy over polynomial-type transform methods are explored in detail. As an application, the well-known fractional BDF-2 is transformed under the novel strategy and adopted for the subdiffusion problem. Rigorous arguments show that the resulting scheme resolves the initial singularity of the solution quite naturally at the special point $t\_{n+\frac{1}{2}}$. The robustness for small *α* is also verified both theoretically and numerically.

**Author Contributions:** Conceptualization, B.Y.; methodology, B.Y.; software, B.Y.; validation, G.Z.; writing—original draft preparation, B.Y.; writing—review and editing, G.Z., Y.L., and H.L.; funding acquisition, G.Z., Y.L., and H.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work is supported by the National Natural Science Foundation of China (12061053 and 12161063) and Natural Science Foundation of Inner Mongolia (2021BS01003, 2020MS01003, and 2021MS01018).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** All data reported are obtained by the numerical schemes designed in this paper.

**Acknowledgments:** The authors are grateful to the editor and all the anonymous referees for their valuable comments, which greatly improved the presentation of the article.

**Conflicts of Interest:** The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **Appendix A**

*Appendix A.1*

**Proof of Lemma 2.** Assumption (A2) indicates that $\frac{1}{\varpi\_p(\zeta)} = (1-\zeta)^{-\alpha}\frac{1}{\ell(\zeta)}$, where $\frac{1}{\ell(\zeta)}$ is analytic on the closed unit disc; then, clearly, $\varpi\_n^{(-1)} = O(n^{\alpha-1})$ by the expansion of $(1-\zeta)^{-\alpha}$. On the other hand, assumption (A1) implies the following:

$$\frac{\tau^{\alpha}}{\varpi\_p(e^{-\tau})} = \frac{1}{\tau^{-\alpha}\varpi\_p(e^{-\tau})} = 1 + O(\tau^{p}),$$

which concludes the proof of the lemma.

#### *Appendix A.2*

**Proof of Lemma 3.** (*Step 1.*) Since $|\tau^{-\alpha}\varpi\_p(e^{-\tau}) - 1| \to 0$ as $\tau \to 0$, the following is the case:

$$\left(\frac{1-e^{-\tau}}{\tau}\right)^{\alpha}\left[\ell(e^{-\tau})-1\right] + \left(\frac{1-e^{-\tau}}{\tau}\right)^{\alpha} - 1 \to 0,$$

indicating that $\ell(1) = 1$. By expanding $\frac{1}{\ell(\zeta)}$ at $\zeta = 1$, one obtains the following:

$$\frac{1}{\ell(\zeta)} = 1 + (1 - \zeta)[c\_1 + c\_2(1 - \zeta) + \dotsb] =: 1 + (1 - \zeta)\psi(\zeta),\tag{A1}$$

where *ψ*(*ζ*) is analytic at 1. Hence, we have the following:

$$\frac{1}{\varpi\_p(\zeta)} = (1-\zeta)^{-\alpha} + (1-\zeta)^{1-\alpha}\psi(\zeta),$$

which yields the following:

$$
\varpi\_n^{(-1)} = a\_n + \sum\_{j=0}^{n} b\_{n-j}\,\psi\_j, \tag{A2}
$$

where *ψ<sup>n</sup>* are the coefficients of *ψ*(*ζ*), and the following is the case.

$$\begin{aligned} a\_n &:= (-1)^n \binom{-\alpha}{n} = \frac{n^{\alpha - 1}}{\Gamma(\alpha)} [1 + O(n^{-1})]. \\ b\_n &:= (-1)^n \binom{1 - \alpha}{n} = \frac{n^{\alpha - 2}}{\Gamma(\alpha - 1)} [1 + O(n^{-1})]. \end{aligned} \tag{A3}$$

Note that (A1) implies the following:

$$
\psi(\zeta) = \frac{\frac{1}{\ell(\zeta)} - 1}{1 - \zeta},
$$

which, combined with the fact that $\frac{1}{\ell(\zeta)} - 1$ is analytic for $|\zeta| \le 1$, leads to the analyticity of $\psi(\zeta)$ for $|\zeta| \le 1$. Hence, $\psi\_n$ decays exponentially, meaning that $\sum\_{n=0}^{\infty}|\psi\_n| < \infty$. On the other hand, using the following inequality:

$$\sum\_{n=0}^{\infty} \left| \sum\_{j=0}^{n} b\_{n-j} \psi\_j \right| \le \sum\_{n=0}^{\infty} |b\_n| \sum\_{n=0}^{\infty} |\psi\_n| < \infty,$$

and combining (A2) and (A3), one immediately obtains the following.

$$\sum\_{n=0}^{\infty}\left|\varpi\_n^{(-1)} - \frac{n^{\alpha-1}}{\Gamma(\alpha)}\right| \le \sum\_{n=0}^{\infty}\left|a\_n - \frac{n^{\alpha-1}}{\Gamma(\alpha)}\right| + \sum\_{n=0}^{\infty}\left|\sum\_{j=0}^{n}b\_{n-j}\psi\_j\right| < \infty. \tag{A4}$$

(*Step 2.*) Replace $n$ in (12) with $k$, multiply both sides by $\varpi\_{n-k}^{(-1)}$ and then sum $k$ from $n\_0$ to $n$ to obtain the following.

$$\sum\_{k=n\_0}^{n}\varpi\_{n-k}^{(-1)}\sum\_{j=0}^{k}\varpi\_{k-j}(\mathcal{U}^{j}-\mathcal{U}^{0}) = \bar{\tau}\sum\_{k=n\_0}^{n}\varpi\_{n-k}^{(-1)}\Big(\mathcal{U}^{k}+\frac{1}{\lambda}g^{k}\Big). \tag{A5}$$

For the left-hand side of (A5), the following holds.

$$\begin{split} &\sum\_{k=n\_0}^{n}\varpi\_{n-k}^{(-1)}\sum\_{j=0}^{k}\varpi\_{k-j}(\mathcal{U}^{j}-\mathcal{U}^{0}) \\ &= \sum\_{k=0}^{n}\varpi\_{n-k}^{(-1)}\sum\_{j=0}^{k}\varpi\_{k-j}(\mathcal{U}^{j}-\mathcal{U}^{0}) - \sum\_{k=0}^{n\_0-1}\varpi\_{n-k}^{(-1)}\sum\_{j=0}^{k}\varpi\_{k-j}(\mathcal{U}^{j}-\mathcal{U}^{0}) \\ &= \mathcal{U}^{n} - \mathcal{U}^{0} - \sum\_{j=0}^{n\_0-1}\left(\sum\_{k=0}^{n\_0-1-j}\varpi\_{n-k-j}^{(-1)}\varpi\_{k}\right)(\mathcal{U}^{j}-\mathcal{U}^{0}). \end{split} \tag{A6}$$

For the right-hand side of (A5), we have the following.

$$\bar{\tau}\sum\_{k=n\_0}^{n}\varpi\_{n-k}^{(-1)}\Big(\mathcal{U}^{k}+\frac{1}{\lambda}g^{k}\Big) = \bar{\tau}\sum\_{k=0}^{n}\varpi\_{n-k}^{(-1)}\mathcal{U}^{k} - \bar{\tau}\sum\_{k=0}^{n\_0-1}\varpi\_{n-k}^{(-1)}\mathcal{U}^{k} + \tau^{\alpha}\sum\_{k=n\_0}^{n}\varpi\_{n-k}^{(-1)}g^{k}. \tag{A7}$$

Combining (A5)–(A7), one obtains the following:

$$\mathcal{U}^{n} = f^{n} + \bar{\tau}\sum\_{k=0}^{n}\varpi\_{n-k}^{(-1)}\mathcal{U}^{k}, \tag{A8}$$

where

$$\begin{aligned} f^{n} &= \mathcal{U}^{0}\Big(1 - \bar{\tau}\varpi\_{n}^{(-1)} - \sum\_{j=1}^{n\_0-1}\sum\_{k=0}^{n\_0-1-j}\varpi\_{n-k-j}^{(-1)}\varpi\_{k}\Big) \\ &\quad + \sum\_{j=1}^{n\_0-1}\mathcal{U}^{j}\left(\sum\_{k=0}^{n\_0-1-j}\varpi\_{n-k-j}^{(-1)}\varpi\_{k} - \bar{\tau}\varpi\_{n-j}^{(-1)}\right) + \tau^{\alpha}\sum\_{k=n\_0}^{n}\varpi\_{n-k}^{(-1)}g^{k}. \end{aligned}$$

For fixed $\tau > 0$, since $\varpi\_n = O(n^{-\alpha-1})$, $\varpi\_n^{(-1)} = O(n^{\alpha-1})$ and $g^{n}$ decays exponentially, it holds that $f^{n}$ has a finite limit as $n \to \infty$. Meanwhile, by Lemma 2, (A8) is actually an approximation to (8) with convergence order $p$. In accordance with (10), the estimate (A4) indicates that the stability region $S$ is the following:

$$\mathbb{C}\backslash\{1/\varpi\_p^{(-1)}(\zeta) : |\zeta| \le 1\} = \mathbb{C}\backslash\{\varpi\_p(\zeta) : |\zeta| \le 1\}, \tag{A9}$$

which completes the proof of the lemma.
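The asymptotics $a\_n = \frac{n^{\alpha-1}}{\Gamma(\alpha)}[1+O(n^{-1})]$ used in (A3) can be checked numerically: the coefficients $a\_n = (-1)^{n}\binom{-\alpha}{n}$ satisfy the one-term recurrence $a\_0 = 1$, $a\_n = a\_{n-1}(n-1+\alpha)/n$. A small sketch (an illustration, not part of the proof):

```python
import math

def frac_binom(alpha, n_max):
    """a_n = (-1)^n * C(-alpha, n) = Gamma(n + alpha) / (Gamma(alpha) * n!),
    computed via the stable recurrence a_0 = 1, a_n = a_{n-1} (n - 1 + alpha) / n."""
    a = [1.0]
    for n in range(1, n_max + 1):
        a.append(a[-1] * (n - 1 + alpha) / n)
    return a

alpha, n = 0.6, 5000
a = frac_binom(alpha, n)
# a_n / (n^(alpha-1) / Gamma(alpha)); tends to 1 as n grows, i.e. a_n ~ n^(alpha-1)/Gamma(alpha)
ratio = a[n] * math.gamma(alpha) * n ** (1 - alpha)
```

For `n = 5000` the ratio deviates from 1 only by an $O(1/n)$ correction, consistent with (A3).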

*Appendix A.3. Schur Polynomial*

The polynomial $\Phi(\zeta)$ of degree $k$,

$$\Phi(\zeta) = c\_k\zeta^{k} + c\_{k-1}\zeta^{k-1} + \dots + c\_1\zeta + c\_0, \quad c\_k \neq 0,\ c\_0 \neq 0,$$

is said to be a Schur polynomial if its roots *ζ<sup>j</sup>* satisfy |*ζj*| < 1, *j* = 1, 2, ··· , *k*. Given Φ(*ζ*), introduce the following polynomials:

$$\begin{aligned} \Phi\_0(\zeta) &= c\_0^\*\zeta^{k} + c\_1^\*\zeta^{k-1} + \dots + c\_{k-1}^\*\zeta + c\_k^\*, \\ \Phi\_1(\zeta) &= \frac{1}{\zeta}\left[\Phi\_0(0)\Phi(\zeta) - \Phi(0)\Phi\_0(\zeta)\right], \end{aligned}$$

where $c\_j^\*$ denotes the complex conjugate of $c\_j$.

**Lemma A1.** Φ(*ζ*) *is a Schur polynomial if and only if* |Φ0(0)| > |Φ(0)| *and* Φ1(*ζ*) *is a Schur polynomial.*

To identify $\Lambda\_{\theta}$ in Theorem 2, one merely needs to require that the polynomial $\zeta^{p-1}\Theta(1/\zeta;\theta)$ be a Schur polynomial; by Lemma A1, this yields a sequence of Schur polynomials with decreasing degrees, leading to the $\Lambda\_{\theta}$ listed in Table 1.
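Lemma A1 turns the Schur property into a finite recursion: check $|\Phi\_0(0)| > |\Phi(0)|$ and recurse on the degree-reduced $\Phi\_1$. A minimal sketch (coefficients stored in ascending powers is an implementation choice; this is an illustration, not the authors' code):

```python
def is_schur(c):
    """c = [c_0, c_1, ..., c_k] in ascending powers, c_0 != 0, c_k != 0.
    Returns True iff every root of Phi lies strictly inside the unit disc."""
    c = [complex(x) for x in c]
    while len(c) > 1:
        c0 = [x.conjugate() for x in reversed(c)]   # Phi_0, ascending powers
        if not abs(c0[0]) > abs(c[0]):              # need |Phi_0(0)| > |Phi(0)|
            return False
        # Phi_1 = [Phi_0(0)*Phi - Phi(0)*Phi_0] / zeta: the constant term cancels,
        # so dividing by zeta just drops the first coefficient
        c = [c0[0] * p - c[0] * q for p, q in zip(c, c0)][1:]
    return True

print(is_schur([0.15, -0.8, 1.0]))   # (z - 0.3)(z - 0.5), roots inside: True
print(is_schur([1.0, -2.5, 1.0]))    # (z - 2)(z - 0.5), a root outside: False
```

Each pass reduces the degree by one, so at most $k$ passes decide the question, which is exactly how the set $\Lambda\_{\theta}$ in Table 1 can be tabulated.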

#### **References**


**Yanan Li 1,2, Yibin Xu 1,3, Yanqin Liu 1,\* and Yanfeng Shen <sup>4</sup>**


**Abstract:** In the current work, a fast *θ* scheme combined with the Legendre spectral method was developed for solving a fractional Klein–Gordon equation (FKGE). The numerical scheme was provided by the Legendre spectral method in the spatial direction, and for the temporal direction, a *θ* scheme of order *O*(*τ*2) with a fast algorithm was taken into account. The fast algorithm could decrease the computational cost from *O*(*M*2) to *O*(*M* log *M*), where *M* denotes the number of time levels. In addition, correction terms could be employed to improve the convergence rate when the solutions have weak regularity. We proved theoretically that the scheme is unconditionally stable and obtained an error estimate. The numerical experiments demonstrated that our numerical scheme is accurate and efficient.

**Keywords:** fractional Klein–Gordon equation; Legendre spectral method; *θ* scheme; unconditional stability; error estimate; fast algorithm; regularity of solution

#### **1. Introduction**

Fractional differential equations (FDEs), as the evolution of integer-order differential equations, can more precisely describe phenomena with sophisticated dynamics [1–4]. In the past few decades, FDEs have been investigated by a number of scholars because they have practical applications in various fields, such as relativistic quantum mechanics [5], hydromechanics [6], neuroscience [7], and materials science [8]. Since it is virtually impossible to obtain an analytic solution to an FDE in most cases, many numerical methods for solving FDEs have been developed rapidly. In particular, finite difference methods (FDMs) [9,10], finite element methods (FEMs) [11–13], spectral methods [14–16], and spectral element methods [17,18] have been extensively utilized.

In this article, we concentrate on the following FKGE:

$$\begin{cases} \dfrac{\partial^{\alpha}\xi(x,t)}{\partial t^{\alpha}} + \rho\dfrac{\partial \xi(x,t)}{\partial t} + \xi(x,t) = \dfrac{\partial^{2}\xi(x,t)}{\partial x^{2}} + f(x,t), & x \in (0,L),\ t \in (0,T], \\ \xi(x,0) = \phi(x),\ \dfrac{\partial \xi(x,0)}{\partial t} = \varrho(x), & x \in (0,L), \\ \xi(0,t) = 0,\ \xi(L,t) = 0, & t \in [0,T]. \end{cases} \tag{1}$$

When $\alpha = 2$, (1) is the classical integer-order Klein–Gordon equation. $D\_{0,t}^{\alpha}\xi(x,t)$ is a fractional derivative with respect to $t$ in the Caputo sense, which is defined as

**Citation:** Li, Y.; Xu, Y.; Liu, Y.; Shen, Y. A Fast *θ* Scheme Combined with the Legendre Spectral Method for Solving a Fractional Klein–Gordon Equation. *Fractal Fract.* **2023**, *7*, 635. https://doi.org/10.3390/ fractalfract7080635

Academic Editor: Stanislaw Migorski

Received: 20 July 2023 Revised: 8 August 2023 Accepted: 15 August 2023 Published: 20 August 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

$$D\_{0,t}^{\alpha}\xi(x,t) = \frac{\partial^{\alpha}\xi(x,t)}{\partial t^{\alpha}} = \begin{cases} \dfrac{1}{\Gamma(2-\alpha)}\displaystyle\int\_{0}^{t} \dfrac{\partial^{2}\xi(x,s)}{\partial s^{2}}\,\dfrac{ds}{(t-s)^{\alpha-1}}, & 1 < \alpha < 2, \\ \dfrac{\partial^{2}\xi(x,t)}{\partial t^{2}}, & \alpha = 2. \end{cases}$$

If we set $\rho = 0$, then an FKGE is obtained, and for $\rho > 0$, a fractional dissipative Klein–Gordon equation is obtained [19].

The application of FDEs has been extended to quantum mechanics, which has given rise to fractional quantum mechanics [20,21]. Klein–Gordon equations, which are some of the most fundamental equations in relativistic quantum mechanics, have been generalized to FKGEs [19,22]. As a matter of fact, quite a few scholars have investigated FKGEs. Vong et al. proposed a high-order finite difference scheme for a nonlinear FKGE, and the convergence order of the proposed scheme was *O*(*h*<sup>4</sup> + *τ*<sup>3−*α*</sup>) [23], where *h* and *τ* are the spatial and temporal step sizes, respectively. Hashemizadeh et al. proposed an approach that relied on the sparse operational matrix of the derivative to solve an FKGE, leading to more efficient operation [19]. By combining the properties of Chebyshev approximations with the FDM, Khadera et al. developed a method that reduced an FKGE to a system of ODEs and then solved it using the FDM [24]. Recently, Saffarian et al. utilized the ADI spectral element method to solve a nonlinear FKGE with a convergence order of *O*(*τ*<sup>2</sup> + *N*<sup>1−*m*</sup>) [25], where *N* is the polynomial degree and *m* represents the regularity of the solution. To the best of the authors' knowledge, there have been few reports on numerical methods utilizing fast algorithms for an FKGE. Motivated by the above considerations, our main aim is to develop a stable and fast numerical method for FKGEs.

The structure of this paper is as follows: In Section 2, some crucial preliminaries are provided for the subsequent analysis. In Section 3, to obtain the fully discrete scheme, we introduce the *θ* scheme and the Legendre spectral method in the temporal and spatial directions, respectively. Meanwhile, correction terms are considered to improve the weak regularity of the solution. In Section 4, we attach importance to the stability analysis and the convergence analysis. To save on computational expenses for the fractional operators, a fast algorithm is implemented in Section 5. In Section 6, several numerical experiments are conducted to validate our theoretical analysis. In the final section, we present our conclusions.

#### **2. Preliminaries**

In this section, some lemmas and definitions necessary for the subsequent analysis are presented.

The space $P\_N(\Omega)$ corresponds to the set of polynomials defined in the domain $\Omega$ with degree lower than $N$. Moreover, within $P\_N(\Omega)$, we have the subspace $P\_N^{0}(\Omega)$ of functions $w \in P\_N(\Omega)$ that fulfill the boundary condition $w(\partial\Omega) = 0$.

Let us denote by $\pi\_N^{1,0}$ the orthogonal projection operator from the Hilbert space $H\_0^{1}(\Omega)$ to the subspace $P\_N^{0}(\Omega)$. For any $w \in H\_0^{1}(\Omega)$ and any $v \in P\_N^{0}(\Omega)$, the orthogonal projection operator $\pi\_N^{1,0}$ exhibits the following property:

$$(\partial\_x \pi\_N^{1,0}w,\, \partial\_x v) = (\partial\_x w,\, \partial\_x v).$$

Here, we make a crucial assumption that the solution to Equation (1) conforms to the following form [11,17]:

$$\xi(x,t) = \phi + \varrho t + c\_2 t^{\sigma\_2} + c\_3 t^{\sigma\_3} + \cdots = \phi + \varrho t + \sum\_{k=2}^{n} c\_k t^{\sigma\_k} + \Phi(x,t), \tag{2}$$

where $\sigma\_1 = 1$; $\sigma\_k < \sigma\_{k+1}$, $k \le n-1$; $c\_k \in H\_0^{1}(\Omega) \cap H^{n}(\Omega)$; $\Phi(x,t)$ is a function that is sufficiently smooth with respect to both variables $x$ and $t$; and $c\_k \neq 0$ for $k = 2, 3, \cdots, n$.

We define *σ* as:

$$
\sigma = \begin{cases} \sigma\_2, & \varrho = 0, \\ 1, & \text{otherwise}, \end{cases} \tag{3}
$$

which describes the regularity of (2).

**Lemma 1** ([14,16])**.** *Suppose* $\xi \in H\_0^{1}(\Omega) \cap H^{m}(\Omega)$*; then, we have*

$$||\xi - \pi\_N^{1,0}\xi|| \le CN^{-m}||\xi||\_{m}. \tag{4}$$

**Lemma 2** ([11,19])**.** *Let* $\xi(t)$ *be a continuous function with a fractional derivative of order* $\alpha$*; then, we have*

$$I\_{0,t}^{\alpha}D\_{0,t}^{\alpha}\xi(t) = \xi(t) - \sum\_{k=0}^{n-1}\xi^{(k)}(0)\frac{t^{k}}{k!}, \quad n-1 < \alpha \le n,\ n \in \mathbb{N}. \tag{5}$$
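Identity (5) can be sanity-checked on monomials, using the closed forms $D\_{0,t}^{\alpha}t^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta+1-\alpha)}t^{\beta-\alpha}$ and $I\_{0,t}^{\alpha}t^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta+1+\alpha)}t^{\beta+\alpha}$ (a hedged sketch, not from the paper):

```python
import math

def caputo_D_monomial(beta, alpha, t):
    """Caputo D^alpha of t^beta (used here with beta >= n, n - 1 < alpha <= n)."""
    return math.gamma(beta + 1) / math.gamma(beta + 1 - alpha) * t ** (beta - alpha)

def rl_I_monomial(beta, alpha, t):
    """Riemann-Liouville fractional integral I^alpha of t^beta."""
    return math.gamma(beta + 1) / math.gamma(beta + 1 + alpha) * t ** (beta + alpha)

alpha, t = 1.5, 0.8                       # n = 2 since n - 1 < alpha <= n
# xi(t) = 2 + 3t + t^3: the Caputo derivative annihilates 2 + 3t (its second
# derivative vanishes), and D^alpha t^3 = c * t^(3 - alpha) with c = Gamma(4)/Gamma(4 - alpha):
c = caputo_D_monomial(3, alpha, 1.0)
recovered = c * rl_I_monomial(3 - alpha, alpha, t)   # I^alpha D^alpha xi at t
xi = 2 + 3 * t + t ** 3
# Lemma 2 predicts: recovered == xi - xi(0) - xi'(0)*t == t^3
```

The composition returns exactly $t^{3}$, i.e., $\xi(t)$ minus its first two Taylor terms, matching (5) with $n = 2$.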

**Lemma 3** ([11])**.** *Suppose* $\xi(t) \in C^{k}[0,T]$ *for* $k \in \mathbb{N}^{+}$*. Let* $\varepsilon, \gamma > 0$ *with* $l \le k$ *and* $\gamma, \gamma+\varepsilon \in [l-1, l]$*. Then, we have*

$$D\_{0,t}^{\varepsilon}D\_{0,t}^{\gamma}\xi(t) = D\_{0,t}^{\varepsilon+\gamma}\xi(t). \tag{6}$$

Applying the operator $I\_{0,t}^{\alpha-1}$ to both sides of (1) and combining Lemmas 2 and 3, we obtain

$$\xi\_t + \rho D\_{0,t}^{2-\alpha}\xi + I\_{0,t}^{\alpha-1}\xi = I\_{0,t}^{\alpha-1}\Delta\xi + \varrho + \rho[D\_{0,t}^{2-\alpha}\xi]\_{t=0} + F(x,t), \tag{7}$$

where $F(x,t) = I\_{0,t}^{\alpha-1}f(x,t)$. Under the assumption of (2), $\rho[D\_{0,t}^{2-\alpha}\xi]\_{t=0} = 0$.

#### **3. Fully Discrete Scheme**

Let $\tau$ be the temporal step size and $t\_n = n\tau$ ($0 \le n \le M$), $M = [1/\tau]$, $\xi^{k} := \xi(t\_k) = \xi(k\tau)$. For the discretization of the fractional operators ($\eta \in (0,1)$) and the first-order derivative, we utilize the following $\theta$ schemes [11,12]:

$$\begin{split} D\_{0,t}^{\eta}\xi(t\_{n-\theta}) &= D\_{\tau,\eta}^{n,\theta}\xi + E\_{n-\theta}^{(1)} = \tau^{-\eta}\sum\_{k=0}^{n}\omega\_{n-k}^{(\eta)}(\xi^{k}-\xi^{0}) + E\_{n-\theta}^{(1)}, \\ I\_{0,t}^{\eta}\xi(t\_{n-\theta}) &= I\_{\tau,\eta}^{n,\theta}\xi + E\_{n-\theta}^{(2)} = \tau^{\eta}\sum\_{k=0}^{n}\omega\_{n-k}^{(-\eta)}(\xi^{k}-\xi^{0}) + I\_{0,t\_{n-\theta}}^{\eta}\xi^{0} + E\_{n-\theta}^{(2)}, \\ \xi\_{t}(t\_{n-\theta}) &= \xi\_{\tau,\theta}^{n} + E\_{n-\theta}^{(3)} = \begin{cases} \dfrac{\xi^{1}-\xi^{0}}{\tau} + E\_{1-\theta}^{(3)}, & n = 1, \\ \dfrac{3-2\theta}{2\tau}\xi^{n} - \dfrac{2-2\theta}{\tau}\xi^{n-1} + \dfrac{1-2\theta}{2\tau}\xi^{n-2} + E\_{n-\theta}^{(3)}, & n \ge 2, \end{cases} \end{split} \tag{8}$$

where $E\_{n-\theta}^{(1)} = O(t\_{n-\theta}^{\sigma-\eta-2}\tau^{2})$, $E\_{n-\theta}^{(2)} = O(t\_{n-\theta}^{\sigma+\eta-2}\tau^{2})$, $E\_{n-\theta}^{(3)} = O(t\_{n-\theta}^{\sigma-3}\tau^{2})$, and $\sigma = \min\{\sigma\_2, \sigma\_3, \cdots\}$. The following expression captures the relationship between the generating function $\omega(\xi,\delta)$ and its expansion coefficients $\omega\_k^{(\delta)}$:

$$
\omega(\xi,\delta) = \sum\_{k=0}^{\infty}\omega\_k^{(\delta)}\xi^{k} = \frac{(1-\xi)^{\delta}}{1-(\frac{\delta}{2}-\theta)(1-\xi)}, \quad \delta \in (-1,0)\cup(0,1),
$$

where $\theta \in (\frac{\delta-1}{2}, 1]$, and the choice of $\theta$ does not affect the convergence rate. When $\theta = \frac{\alpha}{2}$, it simplifies to a fractional Crank–Nicolson scheme [26]. We apply the following formula to determine the expansion coefficients $\omega\_k^{(\delta)}$:

$$
\omega\_k^{(\delta)} = \begin{cases} 2/[2(1+\theta)-\delta], & k = 0, \\ 4V\_1^1/[2(1+\theta)-\delta]^2, & k = 1, \\ \big(V\_k^1\,\omega\_{k-1}^{(\delta)} + V\_k^2\,\omega\_{k-2}^{(\delta)}\big)\big/\big[(1+\theta-\delta/2)\,k\big], & k \ge 2, \end{cases} \tag{9}
$$

where

$$\begin{aligned} V\_k^1 &= \frac{\delta^2}{2} - \Big(\theta + k + \frac{1}{2}\Big)\delta + (2k-1)\theta + k - 1, \\ V\_k^2 &= -\frac{\delta^2}{2} + \Big(\theta + \frac{k-1}{2}\Big)\delta + (1-k)\theta. \end{aligned}$$
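The recurrence (9) can be cross-checked against a direct Taylor expansion of the generating function (a Cauchy product of the series of $(1-\xi)^{\delta}$ and $1/(1-(\frac{\delta}{2}-\theta)(1-\xi))$). A sketch, not the authors' code; the $V\_k^1$ used here, with the $(2k-1)\theta$ term, is the form consistent with the generating function:

```python
def omega_recurrence(delta, theta, K):
    # omega_k^(delta) via the three-term recurrence (9), K >= 1
    w = [2.0 / (2.0 * (1.0 + theta) - delta)]
    w.append(4.0 * (delta * delta / 2.0 - (theta + 1.5) * delta + theta)
             / (2.0 * (1.0 + theta) - delta) ** 2)
    for k in range(2, K + 1):
        Vk1 = delta * delta / 2.0 - (theta + k + 0.5) * delta + (2 * k - 1) * theta + k - 1
        Vk2 = -delta * delta / 2.0 + (theta + (k - 1) / 2.0) * delta + (1 - k) * theta
        w.append((Vk1 * w[-1] + Vk2 * w[-2]) / ((1.0 + theta - delta / 2.0) * k))
    return w

def omega_series(delta, theta, K):
    # Reference values: Cauchy product of the two factors' Taylor series
    c = delta / 2.0 - theta
    b = [1.0]                                   # coefficients of (1 - x)^delta
    for k in range(1, K + 1):
        b.append(b[-1] * (k - 1 - delta) / k)
    r = -c / (1.0 - c)                          # 1/(1 - c(1-x)) = sum_k r^k x^k / (1-c)
    g = [r ** k / (1.0 - c) for k in range(K + 1)]
    return [sum(b[j] * g[k - j] for j in range(k + 1)) for k in range(K + 1)]

err = max(abs(p - q) for p, q in
          zip(omega_recurrence(0.5, 0.5, 20), omega_series(0.5, 0.5, 20)))
```

For admissible $(\delta,\theta)$ the two routes agree to rounding error, which is a useful guard when implementing (9).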

The semi-discrete scheme of (7) is obtained in the temporal direction utilizing (8) as follows:

$$\xi\_{\tau,\theta}^{n} + \rho D\_{\tau,2-\alpha}^{n,\theta}\xi + I\_{\tau,\alpha-1}^{n,\theta}\xi = I\_{\tau,\alpha-1}^{n,\theta}\Delta\xi + \varrho + F^{n-\theta} + E\_{n-\theta}, \tag{10}$$

where $F^{n-\theta} = F(x, t\_{n-\theta})$, and $E\_{n-\theta}$ is

$$E\_{n-\theta} = O(t\_{n-\theta}^{\sigma+\alpha-4}\tau^{2}) + O(t\_{n-\theta}^{\sigma-3}\tau^{2}). \tag{11}$$

The Legendre spectral method is applied for the discretization in the spatial direction: find $Z \in P\_N^{0}(\Omega)$ such that, for all $\zeta \in P\_N^{0}(\Omega)$,

$$(Z\_{\tau,\theta}^{n},\zeta) + (\rho D\_{\tau,2-\alpha}^{n,\theta}Z,\zeta) + (I\_{\tau,\alpha-1}^{n,\theta}Z,\zeta) = (I\_{\tau,\alpha-1}^{n,\theta}\Delta Z,\zeta) + (\varrho,\zeta) + (F^{n-\theta},\zeta), \quad \text{with } Z^{0} = \pi\_N^{1,0}\xi^{0}. \tag{12}$$

We see from the truncation errors in (8) that if *σ* < 3, then the convergence order in the temporal direction is lower than *O*(*τ*2). Generally, the solutions of FKGEs have weak regularity. To improve the convergence rate, correction terms are added to the approximation formulas as follows [17,27,28]:

$$\begin{cases} D\_{0,t}^{\delta}\xi(t\_{n-\theta}) \approx D\_{\tau,\delta}^{n,\theta}\xi + \tau^{-\delta}\sum\_{j=1}^{m} w\_{n,j}^{(\delta)}(\xi^{j}-\xi^{0}), \\ I\_{0,t}^{\delta}\xi(t\_{n-\theta}) \approx I\_{\tau,\delta}^{n,\theta}\xi + \tau^{\delta}\sum\_{j=1}^{m} w\_{n,j}^{(-\delta)}(\xi^{j}-\xi^{0}), \\ \xi\_{t}(t\_{n-\theta}) \approx \xi\_{\tau,\theta}^{n} + \tau^{-1}\sum\_{j=1}^{m} w\_{n,j}^{(1)}(\xi^{j}-\xi^{0}), \end{cases} \tag{13}$$

where $w\_{n,j}^{(\delta)}$, $w\_{n,j}^{(-\delta)}$, and $w\_{n,j}^{(1)}$ are starting weights, which can be derived by solving a linear system of equations. Take the calculation of $w\_{n,j}^{(-\delta)}$ in (13) as an example: requiring $I\_{0,t}^{\delta}\xi(t\_{n-\theta}) = I\_{\tau,\delta}^{n,\theta}\xi + \tau^{\delta}\sum\_{j=1}^{m} w\_{n,j}^{(-\delta)}(\xi^{j}-\xi^{0})$ to be exact for $\xi(t) = t^{\sigma\_r}$ ($\sigma\_r < 2-\delta$) leads to the following linear system:

$$\sum\_{j=1}^{m} w\_{n,j}^{(-\delta)} t\_j^{\sigma\_r} = \tau^{-\delta}\frac{\Gamma(\sigma\_r+1)}{\Gamma(\sigma\_r+1+\delta)}\, t\_{n-\theta}^{\sigma\_r+\delta} - \sum\_{k=1}^{n}\omega\_{n-k}^{(-\delta)} t\_k^{\sigma\_r}. \tag{14}$$
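Equation (14) is a small linear system for the starting weights (two unknowns when $m = 2$). The sketch below builds it and verifies the resulting exactness for $\xi(t) = t^{\sigma\_r}$; the values of $\delta$, $\theta$, $\tau$, $n$ and the exponents $\sigma\_r$ are illustrative assumptions, and the $\omega\_k^{(-\delta)}$ recurrence used is the one consistent with the generating function (not the authors' code):

```python
import math

def omega_coeffs(delta, theta, K):
    # omega_k^(delta) via the theta-scheme recurrence, consistent with
    # the generating function (1 - x)^delta / (1 - (delta/2 - theta)(1 - x))
    w = [2.0 / (2.0 * (1.0 + theta) - delta)]
    w.append(4.0 * (delta * delta / 2.0 - (theta + 1.5) * delta + theta)
             / (2.0 * (1.0 + theta) - delta) ** 2)
    for k in range(2, K + 1):
        Vk1 = delta * delta / 2.0 - (theta + k + 0.5) * delta + (2 * k - 1) * theta + k - 1
        Vk2 = -delta * delta / 2.0 + (theta + (k - 1) / 2.0) * delta + (1 - k) * theta
        w.append((Vk1 * w[-1] + Vk2 * w[-2]) / ((1.0 + theta - delta / 2.0) * k))
    return w

delta, theta, tau, n = 0.4, 0.2, 0.05, 40   # illustrative parameters
sigmas = (1.0, 1.4)                         # exactness targets t^sigma_r, m = 2
om = omega_coeffs(-delta, theta, n)
t = [k * tau for k in range(n + 1)]
t_nt = (n - theta) * tau

def exact_I(sr):                            # exact I^delta t^sr at t_{n-theta}
    return math.gamma(sr + 1) / math.gamma(sr + 1 + delta) * t_nt ** (sr + delta)

def rhs(sr):                                # right-hand side of (14)
    return tau ** (-delta) * exact_I(sr) - sum(om[n - k] * t[k] ** sr
                                               for k in range(1, n + 1))

# Solve the 2x2 system sum_j w_j t_j^sigma_r = rhs(sigma_r), j = 1, 2
a, b = t[1] ** sigmas[0], t[2] ** sigmas[0]
c, d = t[1] ** sigmas[1], t[2] ** sigmas[1]
e, f = rhs(sigmas[0]), rhs(sigmas[1])
det = a * d - b * c
w1, w2 = (e * d - b * f) / det, (a * f - e * c) / det

# The corrected quadrature is now exact (to rounding) for both monomials
errs = [abs(tau ** delta * (sum(om[n - k] * t[k] ** sr for k in range(1, n + 1))
                            + w1 * t[1] ** sr + w2 * t[2] ** sr) - exact_I(sr))
        for sr in sigmas]
```

Such Vandermonde-type systems become ill-conditioned as $m$ grows, which is one reason only a few correction terms are used in practice.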

#### **4. Stability and Convergence Analysis**

**Lemma 4** ([11])**.** *For any vector* $(\xi^{1}, \dots, \xi^{M}) \in \mathbb{R}^{M}$ *with* $M \ge 1$*, let* $\omega\_k^{(\delta)}$ *be as defined in (8), with* $\delta \in (-1,0)\cup(0,1)$ *and* $\theta \in (\frac{\delta-1}{2}, 1]$*. Then, we have*

$$\sum\_{k=1}^{M} \sum\_{i=1}^{k} \omega\_{k-i}^{(\delta)} \xi^{i} \xi^{k} \ge 0. \tag{15}$$

**Lemma 5** ([11])**.** *For any vector* $(\xi^{1}, \dots, \xi^{M}) \in \mathbb{R}^{M}$ *with* $M \ge 2$*, let* $\xi^{0} = 0$ *and let* $\xi\_{\tau,\theta}^{j}$ *be as defined in (8). Then, we have*

$$\sum\_{j=1}^{M}\xi^{j}\xi\_{\tau,\theta}^{j} \ge \frac{1}{4\tau}(\xi^{M})^{2} - \frac{1}{2\tau}(\xi^{1})^{2} \tag{16}$$

*with θ* ∈ [0, 1].

**Theorem 1.** *The scheme in (12) is unconditionally stable, and we have the following estimate:*

$$||Z^{M}|| \le C\Big(||\phi|| + ||\Delta\phi|| + ||\varrho|| + \max\_{0\le j\le M}||F^{j}||\Big). \tag{17}$$

**Proof.** $Z^{0}$ is the proper approximation of $\phi$ that satisfies $||Z^{0}|| \le ||\phi||$ and $||\nabla Z^{0}|| \le ||\nabla\phi||$. Defining $\Lambda^{n} := Z^{n} - Z^{0}$ and considering (8), we can obtain

$$\begin{aligned} Z\_{\tau,\theta}^{n} &= \Lambda\_{\tau,\theta}^{n}, \\ D\_{\tau,\alpha}^{n,\theta}Z &= D\_{\tau,\alpha}^{n,\theta}\Lambda, \\ I\_{\tau,\alpha}^{n,\theta}Z &= I\_{\tau,\alpha}^{n,\theta}\Lambda + I\_{0,t\_{n-\theta}}^{\alpha}Z^{0}, \\ I\_{\tau,\alpha}^{n,\theta}\nabla Z &= I\_{\tau,\alpha}^{n,\theta}\nabla\Lambda + I\_{0,t\_{n-\theta}}^{\alpha}\nabla Z^{0}. \end{aligned} \tag{18}$$

Replacing *ζ* with Λ*<sup>n</sup>* in (12) and using (18), we obtain

$$\begin{split} &(\Lambda\_{\tau,\theta}^{n},\Lambda^{n}) + (\rho D\_{\tau,2-\alpha}^{n,\theta}\Lambda,\Lambda^{n}) + (I\_{\tau,\alpha-1}^{n,\theta}\Lambda,\Lambda^{n}) + (I\_{\tau,\alpha-1}^{n,\theta}\nabla\Lambda,\nabla\Lambda^{n}) \\ &= (\varrho,\Lambda^{n}) + (F^{n-\theta},\Lambda^{n}) - (I\_{0,t\_{n-\theta}}^{\alpha-1}Z^{0},\Lambda^{n}) - (I\_{0,t\_{n-\theta}}^{\alpha-1}\nabla Z^{0},\nabla\Lambda^{n}). \end{split} \tag{19}$$

By substituting *n* with *j* and taking the summation of both sides for *j* ranging from 1 to *M* (M ≥ 2), we can derive

$$\begin{split} &\sum\_{j=1}^{M}(\Lambda\_{\tau,\theta}^{j},\Lambda^{j}) + \sum\_{j=1}^{M}(\rho D\_{\tau,2-\alpha}^{j,\theta}\Lambda,\Lambda^{j}) + \sum\_{j=1}^{M}(I\_{\tau,\alpha-1}^{j,\theta}\Lambda,\Lambda^{j}) + \sum\_{j=1}^{M}(I\_{\tau,\alpha-1}^{j,\theta}\nabla\Lambda,\nabla\Lambda^{j}) \\ &= \sum\_{j=1}^{M}(\varrho,\Lambda^{j}) + \sum\_{j=1}^{M}(F^{j-\theta},\Lambda^{j}) - \sum\_{j=1}^{M}(I\_{0,t\_{j-\theta}}^{\alpha-1}Z^{0},\Lambda^{j}) - \sum\_{j=1}^{M}(I\_{0,t\_{j-\theta}}^{\alpha-1}\nabla Z^{0},\nabla\Lambda^{j}). \end{split} \tag{20}$$

Combining Lemmas 4 and 5, we derive the following inequality:

$$\begin{split} &\sum\_{j=1}^{M}(\Lambda\_{\tau,\theta}^{j},\Lambda^{j}) \ge \frac{1}{4\tau}||\Lambda^{M}||^{2} - \frac{1}{2\tau}||\Lambda^{1}||^{2}, \\ &\sum\_{j=1}^{M}(\rho D\_{\tau,2-\alpha}^{j,\theta}\Lambda,\Lambda^{j}) = \tau^{\alpha-2}\int\_{0}^{1}\rho\sum\_{j=1}^{M}\Lambda^{j}\sum\_{k=1}^{j}\omega\_{j-k}^{(2-\alpha)}\Lambda^{k}\,dx \ge 0, \\ &\sum\_{j=1}^{M}(I\_{\tau,\alpha-1}^{j,\theta}\Lambda,\Lambda^{j}) = \tau^{\alpha-1}\int\_{0}^{1}\sum\_{j=1}^{M}\Lambda^{j}\sum\_{k=1}^{j}\omega\_{j-k}^{(1-\alpha)}\Lambda^{k}\,dx \ge 0, \\ &\sum\_{j=1}^{M}(I\_{\tau,\alpha-1}^{j,\theta}\nabla\Lambda,\nabla\Lambda^{j}) = \tau^{\alpha-1}\int\_{0}^{1}\sum\_{j=1}^{M}\nabla\Lambda^{j}\sum\_{k=1}^{j}\omega\_{j-k}^{(1-\alpha)}\nabla\Lambda^{k}\,dx \ge 0, \end{split} \tag{21}$$

$$\sum\_{j=1}^{M}(\varrho,\Lambda^{j}) \le \frac{1}{2}\sum\_{j=1}^{M}\big(||\varrho||^{2} + ||\Lambda^{j}||^{2}\big) = \frac{M}{2}||\varrho||^{2} + \frac{1}{2}\sum\_{j=1}^{M}||\Lambda^{j}||^{2}, \tag{22}$$

$$\begin{split} \sum\_{j=1}^{M}(F^{j-\theta},\Lambda^{j}) &\le \frac{1}{2}\sum\_{j=1}^{M}\big(||F^{j-\theta}||^{2} + ||\Lambda^{j}||^{2}\big) \le \frac{1}{2}\sum\_{j=1}^{M}||\Lambda^{j}||^{2} + C\sum\_{j=1}^{M}\big(||F^{j}||^{2} + ||F^{j-1}||^{2}\big) \\ &\le \frac{1}{2}\sum\_{j=1}^{M}||\Lambda^{j}||^{2} + C\sum\_{j=0}^{M}||F^{j}||^{2}, \end{split} \tag{23}$$

$$\begin{split} \sum\_{j=1}^{M}(I\_{0,t\_{j-\theta}}^{\alpha-1}Z^{0},\Lambda^{j}) &= \sum\_{j=1}^{M}\big(I\_{0,t\_{j-\theta}}^{\alpha-1}1\big)(Z^{0},\Lambda^{j}) \le \frac{1}{\Gamma(\alpha)}\sum\_{j=1}^{M}|(Z^{0},\Lambda^{j})| \\ &\le \frac{M}{2\Gamma(\alpha)}||Z^{0}||^{2} + \frac{1}{2\Gamma(\alpha)}\sum\_{j=1}^{M}||\Lambda^{j}||^{2} \le \frac{M}{2\Gamma(\alpha)}||\phi||^{2} + \frac{1}{2\Gamma(\alpha)}\sum\_{j=1}^{M}||\Lambda^{j}||^{2}. \end{split} \tag{24}$$

Let $\Delta\_N$ be the operator from $P\_N^{0}$ into $P\_N^{0}$ such that

$$(\Delta\_N \Psi, \upsilon) = -(\nabla \Psi, \nabla \upsilon), \; \forall \Psi, \upsilon \in P\_N^0. \tag{25}$$

For a properly defined $Z^{0}$, it holds that $||\Delta\_N Z^{0}|| \le ||\Delta\phi||$; thus, we have the following inequality:

$$\begin{split} -\sum\_{j=1}^{M}(I\_{0,t\_{j-\theta}}^{\alpha-1}\nabla Z^{0},\nabla\Lambda^{j}) &= \sum\_{j=1}^{M}\frac{t\_{j-\theta}^{\alpha-1}}{\Gamma(\alpha)}(\Delta\_N Z^{0},\Lambda^{j}) \le \frac{1}{\Gamma(\alpha)}\sum\_{j=1}^{M}(\Delta\_N Z^{0},\Lambda^{j}) \\ &\le \frac{M}{2\Gamma(\alpha)}||\Delta\_N Z^{0}||^{2} + \frac{1}{2\Gamma(\alpha)}\sum\_{j=1}^{M}||\Lambda^{j}||^{2} \\ &\le \frac{M}{2\Gamma(\alpha)}||\Delta\phi||^{2} + \frac{1}{2\Gamma(\alpha)}\sum\_{j=1}^{M}||\Lambda^{j}||^{2}. \end{split} \tag{26}$$

Combining (20)–(26) and ignoring the non-negative terms, we obtain

$$\begin{split} ||\Lambda^{M}||^{2} \le\; & 2||\Lambda^{1}||^{2} + 4\tau\Big(1+\frac{1}{\Gamma(\alpha)}\Big)\sum\_{j=1}^{M}||\Lambda^{j}||^{2} + Ct\_M||\phi||^{2} \\ & + \frac{2t\_M}{\Gamma(\alpha)}||\Delta\phi||^{2} + 2t\_M||\varrho||^{2} + C\tau\sum\_{j=0}^{M}||F^{j}||^{2}. \end{split} \tag{27}$$

For $\Lambda^{1}$, let $n = 1$ and $\theta = \frac{1}{2}$ in (20); then, we obtain

$$\begin{split} &\tau^{-1}(\Lambda^1 - \Lambda^0, \Lambda^1) + \big(\rho\tau^{\alpha-2}\omega_0^{(2-\alpha)}\Lambda^1, \Lambda^1\big) + \big(\tau^{\alpha-1}\omega_0^{(1-\alpha)}\Lambda^1, \Lambda^1\big) + \big(\tau^{\alpha-1}\omega_0^{(1-\alpha)}\nabla\Lambda^1, \nabla\Lambda^1\big) \\ &= (\phi, \Lambda^1) + \big(F^{\frac{1}{2}}, \Lambda^1\big) - \big(I^{\alpha-1}_{0,t_{1/2}}1\big)(Z^0, \Lambda^1) + \big(I^{\alpha-1}_{0,t_{1/2}}1\big)(\Delta_N Z^0, \Lambda^1). \end{split} \tag{28}$$

Similarly, for *n* = 1, we have the following inequality:

$$\left(\frac{1}{\tau} - 1 - \frac{1}{\Gamma(\alpha)}\right)||\Lambda^1||^2 \le \frac{1}{2}||\phi||^2 + \frac{1}{2\Gamma(\alpha)}||\varphi||^2 + \frac{1}{2\Gamma(\alpha)}||\Delta\varphi||^2 + C\big(||F^0||^2 + ||F^1||^2\big). \tag{29}$$

So, if $\tau \le \frac{1}{1 + 1/\Gamma(\alpha)}$, we can derive

$$||\Lambda^1||^2 \le C\big(||\phi||^2 + ||\varphi||^2 + ||\Delta\varphi||^2 + \tau||F^0||^2 + \tau||F^1||^2\big). \tag{30}$$

By employing Grönwall's inequality, we can deduce

$$||\Lambda^M||^2 \le C\Big(||\phi||^2 + ||\varphi||^2 + ||\Delta\varphi||^2 + \tau\sum_{j=0}^{M}||F^j||^2\Big), \tag{31}$$

where *C* represents a constant that does not depend on the variables *n*, *τ*, and *N*.

Finally, using the triangle inequality $||Z^M|| \le ||\Lambda^M|| + ||Z^0||$, we derive Theorem 1.
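For reference, the discrete Grönwall inequality invoked in this argument can be stated in the following standard form (quoted here as a sketch for the reader; the authors' precise lemma may differ in its constants):

$$a_n \le b + \tau\lambda\sum_{j=1}^{n-1} a_j \;\;(n \ge 1), \quad \lambda \ge 0 \quad\Longrightarrow\quad a_n \le b\,e^{\lambda(n-1)\tau}.$$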

Next, we discuss the convergence of (12).

**Theorem 2.** *Suppose that $\xi$ and $Z$ are the solutions of (1) and (12), respectively, where $\xi \in H^1([0,1]) \times \big(H^m(\Omega)\cap H^1_0(\Omega)\big)$, $m > 1$, and $\xi^0 = \pi^{1,0}_N\xi_0$. Then, for a small enough $\tau$, we have the following estimate:*

$$||Z^n - \xi^n|| \le C\tau^2 + C\tau^{\overline{\sigma}-\frac{1}{2}} + C\tau^{\sigma+\alpha-\frac{3}{2}} + CN^{-m}.$$

**Proof.** Defining $\xi^n - Z^n = \big(\xi^n - \pi^{1,0}_N\xi^n\big) + \big(\pi^{1,0}_N\xi^n - Z^n\big) \triangleq \chi^n + r^n$ and noting that $\chi^0 = r^0 = 0$, we integrate both sides of (7) against $\zeta \in P^0_N$ to obtain

$$\big(\xi^n_{\tau,\theta}, \zeta\big) + \big(\rho D^{n,\theta}_{\tau,2-\alpha}\xi, \zeta\big) + \big(I^{n,\theta}_{\tau,\alpha-1}\xi, \zeta\big) + \big(I^{n,\theta}_{\tau,\alpha-1}\nabla\xi, \nabla\zeta\big) = (\phi, \zeta) + \big(F^{n-\theta}, \zeta\big) + \big(E_{n-\theta}, \zeta\big). \tag{32}$$

Subtracting (12) from (32), setting $\zeta = r^j$, and replacing $n$ with $j$, we sum $j$ from 1 to $n$ ($n \ge 2$):

$$\begin{split} &\sum_{j=1}^{n}\big(r^j_{\tau,\theta}, r^j\big) + \sum_{j=1}^{n}\big(\rho D^{j,\theta}_{\tau,2-\alpha}r, r^j\big) + \sum_{j=1}^{n}\big(I^{j,\theta}_{\tau,\alpha-1}r, r^j\big) + \sum_{j=1}^{n}\big(I^{j,\theta}_{\tau,\alpha-1}\nabla r, \nabla r^j\big) \\ &= -\sum_{j=1}^{n}\big(\chi^j_{\tau,\theta}, r^j\big) - \sum_{j=1}^{n}\big(\rho D^{j,\theta}_{\tau,2-\alpha}\chi, r^j\big) - \sum_{j=1}^{n}\big(I^{j,\theta}_{\tau,\alpha-1}\chi, r^j\big) + \sum_{j=1}^{n}\big(E_{j-\theta}, r^j\big). \end{split} \tag{33}$$

Utilizing Lemmas 4 and 5, we obtain the following inequalities:

$$\begin{aligned} \sum_{j=1}^{n}\big(r^j_{\tau,\theta}, r^j\big) &\ge \frac{1}{4\tau}||r^n||^2 - \frac{1}{2\tau}||r^1||^2, \; n \ge 2, \\ \sum_{j=1}^{n}\big(\rho D^{j,\theta}_{\tau,2-\alpha}r, r^j\big) &\ge 0, \quad \sum_{j=1}^{n}\big(I^{j,\theta}_{\tau,\alpha-1}r, r^j\big) \ge 0, \; n \ge 1, \\ \sum_{j=1}^{n}\big(I^{j,\theta}_{\tau,\alpha-1}\nabla r, \nabla r^j\big) &\ge 0, \; n \ge 1. \end{aligned} \tag{34}$$

Combining this with (2), we derive

$$\chi(t) = \big(\varphi - \pi^{1,0}_N\varphi\big) + \big(\phi - \pi^{1,0}_N\phi\big)t + \sum_{j=2}^{n}\big(c_j - \pi^{1,0}_N c_j\big)t^{\sigma_j} + \big(\Phi - \pi^{1,0}_N\Phi\big). \tag{35}$$

Thus, we know that $||\chi_t|| + ||\rho D^{2-\alpha}_{0,t}\chi|| + ||I^{\alpha-1}_{0,t}\chi|| \le CN^{-m}$ according to (4). Moreover, we have

$$\begin{split} \chi^n_{\tau,\theta} - \chi_t(t_{n-\theta}) &= O\big(t^{\overline{\sigma}-3}_{n-\theta}\tau^2\big), \\ D^{n,\theta}_{\tau,2-\alpha}\chi - D^{2-\alpha}_{0,t}\chi(t_{n-\theta}) &= O\big(t^{\sigma+\alpha-4}_{n-\theta}\tau^2\big), \\ I^{n,\theta}_{\tau,\alpha-1}\chi - I^{\alpha-1}_{0,t}\chi(t_{n-\theta}) &= O\big(t^{\sigma+\alpha-3}_{n-\theta}\tau^2\big). \end{split} \tag{36}$$

Taking into account the fact that

$$\tau\sum_{j=1}^{n} t^{k}_{j-\theta} = \begin{cases} O(\tau^{1+k}), & k < -1, \\ O(\log n), & k = -1, \\ O(1), & k > -1. \end{cases} \tag{37}$$

and combining (36) and (37), we obtain

$$\tau\sum_{j=1}^{n}||\chi^j_{\tau,\theta} - \chi_t(t_{j-\theta})||^2 \le \tilde{E}^{(3)}_{n-\theta} \triangleq C\tau^5\sum_{j=1}^{n} t^{2\overline{\sigma}-6}_{j-\theta} = \begin{cases} O(\tau^{2\overline{\sigma}-1}), & \overline{\sigma} < 2.5, \\ O(\tau^4\log n), & \overline{\sigma} = 2.5, \\ O(\tau^4), & \overline{\sigma} > 2.5, \end{cases} \tag{38}$$

$$\tau\sum_{j=1}^{n}||D^{j,\theta}_{\tau,2-\alpha}\chi - D^{2-\alpha}_{0,t}\chi(t_{j-\theta})||^2 \le \tilde{E}^{(1)}_{n-\theta} \triangleq C\tau^5\sum_{j=1}^{n} t^{2\sigma+2\alpha-8}_{j-\theta} = \begin{cases} O(\tau^{2\sigma+2\alpha-3}), & \sigma < -\alpha+3.5, \\ O(\tau^4\log n), & \sigma = -\alpha+3.5, \\ O(\tau^4), & \sigma > -\alpha+3.5, \end{cases} \tag{39}$$

$$\tau\sum_{j=1}^{n}||I^{j,\theta}_{\tau,\alpha-1}\chi - I^{\alpha-1}_{0,t}\chi(t_{j-\theta})||^2 \le \tilde{E}^{(2)}_{n-\theta} \triangleq C\tau^5\sum_{j=1}^{n} t^{2\sigma+2\alpha-6}_{j-\theta} = \begin{cases} O(\tau^{2\sigma+2\alpha-1}), & \sigma < -\alpha+2.5, \\ O(\tau^4\log n), & \sigma = -\alpha+2.5, \\ O(\tau^4), & \sigma > -\alpha+2.5. \end{cases} \tag{40}$$
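The scaling law (37) behind these case distinctions can be checked numerically. The short script below is our illustration (not part of the original analysis): it fixes $T = n\tau = 1$ and $\theta = 0.3$, and verifies that for $k = -2 < -1$ the quantity $\tau\sum_{j=1}^{n} t^{k}_{j-\theta}$ scales like $\tau^{1+k} = \tau^{-1}$, i.e., it roughly doubles when $\tau$ is halved.

```python
import math

def weighted_sum(tau: float, k: float, theta: float = 0.3) -> float:
    """Compute tau * sum_{j=1}^{n} t_{j-theta}^k with t_s = s*tau and T = n*tau = 1."""
    n = round(1.0 / tau)
    return tau * sum(((j - theta) * tau) ** k for j in range(1, n + 1))

# (37) predicts O(tau^{1+k}) for k < -1: halving tau roughly doubles the k = -2 sum,
# while for k > -1 the sum stays O(1).
s_coarse = weighted_sum(1e-3, -2.0)
s_fine = weighted_sum(5e-4, -2.0)
print(s_fine / s_coarse)  # close to 2
```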

By multiplying both sides of Equation (33) by *τ*, we can obtain

$$\begin{split} \tau\sum_{j=1}^{n}\big(\rho D^{j,\theta}_{\tau,2-\alpha}\chi, r^j\big) &\le C\tau\sum_{j=1}^{n}||D^{j,\theta}_{\tau,2-\alpha}\chi||^2 + \frac{\tau}{2}\sum_{j=1}^{n}||r^j||^2 \\ &\le \frac{\tau}{2}\sum_{j=1}^{n}||r^j||^2 + C\tau\sum_{j=1}^{n}\big(||D^{j,\theta}_{\tau,2-\alpha}\chi - D^{2-\alpha}_{0,t}\chi(t_{j-\theta})||^2 + ||D^{2-\alpha}_{0,t}\chi(t_{j-\theta})||^2\big) \\ &\le \tilde{E}^{(1)}_{n-\theta} + CN^{-2m} + \frac{\tau}{2}\sum_{j=1}^{n}||r^j||^2, \end{split} \tag{41}$$

$$\begin{split} \tau\sum_{j=1}^{n}\big(I^{j,\theta}_{\tau,\alpha-1}\chi, r^j\big) &\le \frac{\tau}{2}\sum_{j=1}^{n}||I^{j,\theta}_{\tau,\alpha-1}\chi||^2 + \frac{\tau}{2}\sum_{j=1}^{n}||r^j||^2 \\ &\le \tau\sum_{j=1}^{n}\big(||I^{j,\theta}_{\tau,\alpha-1}\chi - I^{\alpha-1}_{0,t}\chi(t_{j-\theta})||^2 + ||I^{\alpha-1}_{0,t}\chi(t_{j-\theta})||^2\big) + \frac{\tau}{2}\sum_{j=1}^{n}||r^j||^2 \\ &\le \tilde{E}^{(2)}_{n-\theta} + CN^{-2m} + \frac{\tau}{2}\sum_{j=1}^{n}||r^j||^2, \end{split} \tag{42}$$

$$\begin{split} \tau\sum_{j=1}^{n}\big(\chi^j_{\tau,\theta}, r^j\big) &\le \frac{\tau}{2}\sum_{j=1}^{n}||\chi^j_{\tau,\theta}||^2 + \frac{\tau}{2}\sum_{j=1}^{n}||r^j||^2 \\ &\le \tau\sum_{j=1}^{n}\big(||\chi^j_{\tau,\theta} - \chi_t(t_{j-\theta})||^2 + ||\chi_t(t_{j-\theta})||^2\big) + \frac{\tau}{2}\sum_{j=1}^{n}||r^j||^2 \\ &\le \tilde{E}^{(3)}_{n-\theta} + CN^{-2m} + \frac{\tau}{2}\sum_{j=1}^{n}||r^j||^2, \end{split} \tag{43}$$

$$\tau\sum_{j=1}^{n}\big(E_{j-\theta}, r^j\big) \le \tilde{E}^{(1)}_{n-\theta} + \tilde{E}^{(3)}_{n-\theta} + \frac{\tau}{2}\sum_{j=1}^{n}||r^j||^2. \tag{44}$$

Combining (34) and (41)–(44), for *n* ≥ 2, we obtain

$$\frac{1}{4}||r^n||^2 \le \frac{1}{2}||r^1||^2 + \tilde{E}^{(1)}_{n-\theta} + \tilde{E}^{(2)}_{n-\theta} + \tilde{E}^{(3)}_{n-\theta} + 2\tau\sum_{j=1}^{n}||r^j||^2 + CN^{-2m}. \tag{45}$$

Similarly, let $n = 1$ and $\theta = \frac{1}{2}$; thus, we can easily obtain the following inequality:

$$||r^1||^2 \le \tilde{E}^{(1)}_{\frac{1}{2}} + \tilde{E}^{(2)}_{\frac{1}{2}} + \tilde{E}^{(3)}_{\frac{1}{2}} + CN^{-2m}. \tag{46}$$

We derive the following inequality using Grönwall's inequality:

$$||r^n||^2 \le C\tau^4 + C\tau^{2\overline{\sigma}-1} + C\tau^{2\sigma+2\alpha-3} + CN^{-2m}, \tag{47}$$

where $C$ is independent of $n$ and $\tau$, and $\tilde{C}$ is defined by

$$\tilde{C} = \begin{cases} O(\sqrt{\log n}), & \overline{\sigma} = 2.5 \text{ or } \sigma = -\alpha + 3.5, \\ O(1), & \text{otherwise.} \end{cases} \tag{48}$$

Finally, we can prove Theorem 2 by applying the triangle inequality and utilizing Equation (4).

#### **5. Fast Algorithm**

The expansion coefficients $\omega^{(\delta)}_n$ ($\delta \in (-1,0)\cup(0,1)$) in (9) can be represented as contour integrals [11,29,30]:

$$\begin{split} \tau^{-\delta}\sum_{n=0}^{\infty}\omega^{(\delta)}_n\xi^n &= \tau^{-\delta}\omega(\xi,\delta) = F_{\delta}\Big(\frac{1-\xi}{\tau}\Big)\kappa(\xi,\theta) \\ &= \frac{\kappa(\xi,\theta)}{2\pi i}\int_{\Gamma}\Big(\frac{1-\xi}{\tau} - \lambda\Big)^{-1}F_{\delta}(\lambda)\,d\lambda, \end{split} \tag{49}$$

where

$$\kappa(\xi, \theta) = \frac{1}{1 - \big(\frac{\delta}{2} - \theta\big)(1 - \xi)}. \tag{50}$$

If we define

$$\sum_{n=0}^{\infty} e^{(\kappa)}_n(z)\,\xi^n \triangleq \kappa(\xi,\theta)(1 - \xi - z)^{-1}, \tag{51}$$

then

$$
\omega_n^{(\delta)} = \frac{\tau^{1+\delta}}{2\pi i}\int_{\Gamma} e^{(\kappa)}_n(\tau\lambda)F_{\delta}(\lambda)\,d\lambda, \tag{52}
$$

where $F_{\delta}(\lambda) = \lambda^{\delta}$. From (51), we can derive

$$e_n^{(\kappa)}(z) = \left[(1-z)^{-n-1} - \Big(\frac{-\delta+2\theta}{2-\delta+2\theta}\Big)^{n+1}\right]\Big/\left[1 + \frac{1}{2}(-\delta+2\theta)z\right], \tag{53}$$

and so we can rewrite $e^{(\kappa)}_n$ as

$$e\_n^{(\kappa)}(z) = r\_1(z)^n q\_1(z) - r\_2(z)^n q\_2(z) = e\_n^{(1)}(z) - e\_n^{(2)}(z),\tag{54}$$

where $r_1(z) = (1-z)^{-1}$, $r_2(z) = \frac{-\delta+2\theta}{2-\delta+2\theta}$, and

$$\begin{aligned} q\_1(z) &= (1 - z)^{-1} \left[ 1 + \frac{1}{2} (-\delta + 2\theta) z \right]^{-1}, \\ q\_2(z) &= \frac{-\delta + 2\theta}{2 - \delta + 2\theta} \left[ 1 + \frac{1}{2} (-\delta + 2\theta) z \right]^{-1} \end{aligned} \tag{55}$$
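Since $r_1^n q_1 - r_2^n q_2 = \big(r_1^{n+1} - r_2^{n+1}\big)/\big[1 + \frac{1}{2}(-\delta+2\theta)z\big]$, the decomposition (54)–(55) agrees identically with the closed form (53). The snippet below (our illustration, not part of the paper) evaluates both expressions at a complex argument and confirms the agreement.

```python
def e_closed(n: int, z: complex, delta: float, theta: float) -> complex:
    """e_n^{(kappa)}(z) via the closed form (53)."""
    r2 = (-delta + 2 * theta) / (2 - delta + 2 * theta)
    denom = 1 + 0.5 * (-delta + 2 * theta) * z
    return ((1 - z) ** (-(n + 1)) - r2 ** (n + 1)) / denom

def e_split(n: int, z: complex, delta: float, theta: float) -> complex:
    """e_n^{(kappa)}(z) via the decomposition (54), with q_1 and q_2 from (55)."""
    r1 = 1 / (1 - z)
    r2 = (-delta + 2 * theta) / (2 - delta + 2 * theta)
    denom = 1 + 0.5 * (-delta + 2 * theta) * z
    q1, q2 = r1 / denom, r2 / denom
    return r1 ** n * q1 - r2 ** n * q2

z, delta, theta = 0.1 + 0.2j, 0.3, 0.4
print(max(abs(e_closed(n, z, delta, theta) - e_split(n, z, delta, theta))
          for n in range(12)))  # agreement up to rounding error
```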

The key to the fast algorithm is that we divide the time domain into a series of fast growing intervals,

$$I\_l = [B^{l-1}\tau, (2B^l - 2)\tau],\tag{56}$$

where $B \in \mathbb{N}^+$, $B > 1$, is a chosen base, and the intervals $I_l$ overlap.
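As an illustration (ours, with hypothetical parameter values), the intervals in (56) for $B = 5$ can be generated and their overlap verified: $I_{l+1}$ starts at $B^{l}\tau$, which lies inside $I_l = [B^{l-1}\tau, (2B^{l}-2)\tau]$ whenever $B^{l} \ge 2$.

```python
def fast_intervals(B: int, tau: float, L: int):
    """Return the intervals I_l = [B^{l-1} tau, (2 B^l - 2) tau] of (56), l = 1..L."""
    return [(B ** (l - 1) * tau, (2 * B ** l - 2) * tau) for l in range(1, L + 1)]

intervals = fast_intervals(5, 1e-3, 5)
# Consecutive intervals overlap: I_{l+1} starts before I_l ends.
for (lo0, hi0), (lo1, hi1) in zip(intervals, intervals[1:]):
    assert lo1 <= hi0
print(intervals)
```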

In Equation (49), we select a Talbot contour *Γ* as our chosen path of integration [31]. Then, we can obtain

$$
\omega_n^{(\delta)} \approx \tau^{\delta+1}\sum_{j=-K}^{K} w^{(l)}_j\big[e^{(1)}_n(\tau\lambda^{(l)}_j) - e^{(2)}_n(\tau\lambda^{(l)}_j)\big]F_{\delta}(\lambda^{(l)}_j), \quad n\tau \in I_l, \tag{57}
$$

where $w^{(l)}_j$ and $\lambda^{(l)}_j$ are given by

$$w\_j^{(l)} = -\frac{i}{2(K+1)}\varrho'(\theta\_j),\ \lambda\_j^{(l)} = \varrho(\theta\_j),\ \theta\_j = \frac{j\pi}{K+1}.\tag{58}$$
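The contour parameterization $\varrho$ is not spelled out above; one standard choice, which we assume for the illustration below, is Weideman's optimized Talbot contour $\varrho(\theta) = \frac{M}{t}\big({-0.6122} + 0.5017\,\theta\cot(0.6407\theta) + 0.2645\,i\theta\big)$ with $M = 2K+1$ nodes. The script applies the rule with the weights of (58) to a case with a known answer, the inverse Laplace transform of $F(\lambda) = 1/\lambda$ at $t = 1$, which equals 1.

```python
import cmath, math

def talbot_quadrature(F, t: float, K: int) -> complex:
    """Evaluate sum_j w_j e^{lambda_j t} F(lambda_j) with the weights of (58),
    assuming Weideman's optimized Talbot parameterization for rho(theta)."""
    M = 2 * K + 1  # total number of quadrature nodes
    total = 0.0 + 0.0j
    for j in range(-K, K + 1):
        th = j * math.pi / (K + 1)
        if j == 0:
            sigma = 0.5017 / 0.6407 - 0.6122   # limit of sigma(theta) as theta -> 0
            dsigma = 0.2645j                   # sigma'(0): the even real part has zero slope
        else:
            c = 0.6407 * th
            sigma = -0.6122 + 0.5017 * th / math.tan(c) + 0.2645j * th
            dsigma = 0.5017 * (1 / math.tan(c) - c / math.sin(c) ** 2) + 0.2645j
        lam = (M / t) * sigma                         # rho(theta_j)
        w = -1j / (2 * (K + 1)) * (M / t) * dsigma    # w_j from (58)
        total += w * cmath.exp(lam * t) * F(lam)
    return total

# Inverse Laplace transform of 1/lambda at t = 1 is exactly 1.
approx = talbot_quadrature(lambda lam: 1 / lam, t=1.0, K=15)
print(abs(approx.real - 1.0))  # small
```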

To demonstrate the effectiveness of the approximation, we subtract (57) from (9) and take the absolute value, which gives the absolute approximation error. Setting *B* = 5, *l* = 1, 2, 3, 4, 5, and *K* = 10 and *K* = 30, we plot the absolute approximation errors in Figure 1.

Figure 1 shows that the approximation of the first few weights is poor; therefore, in the calculation process, we compute the first few weights by (9). We also find that the approximation for *K* = 30 is generally better than that for *K* = 10, which is verified in Example 4. Referring to [32], we determine *L* and obtain $n = b_0 > b_1 > \cdots > b_{L-1} > b_L = 0$.

**Figure 1.** (**a**) Absolute error for $\omega^{(0.3)}_n$ with $\tau = 10^{-3}$; (**b**) absolute error for $\omega^{(-0.3)}_n$ with $\tau = 10^{-3}$.

Now, we rewrite (8) as

$$\begin{split} D^{n,\theta}\_{\tau,\eta}\xi &= \tau^{-\eta}\omega\_{0}^{(\eta)}\left(\xi^{n}-\xi^{0}\right) + \tau^{-\eta}\sum\_{l=1}^{L}\sum\_{k=b\_{l}}^{b\_{l-1}-1}\omega\_{n-k}^{(\eta)}(\xi^{k}-\xi^{0}),\\ I^{n,\theta}\_{\tau,\eta}\xi &= \tau^{\eta}\omega\_{0}^{(-\eta)}\left(\xi^{n}-\xi^{0}\right) + \tau^{\eta}\sum\_{l=1}^{L}\sum\_{k=b\_{l}}^{b\_{l-1}-1}\omega\_{n-k}^{(-\eta)}(\xi^{k}-\xi^{0}).\end{split} \tag{59}$$

We define $u^{(l)}_{n,\delta}$ as

$$u^{(l)}_{n,\delta} = \begin{cases} \tau^{-\delta}\omega^{(\delta)}_0(\xi^n - \xi^0), & l = 0, \\ \tau^{-\delta}\sum_{k=b_l}^{b_{l-1}-1}\omega^{(\delta)}_{n-k}(\xi^k - \xi^0), & l = 1, 2, \cdots, L. \end{cases} \tag{60}$$

Then, utilizing (57) and (60) and the definitions of $e^{(i)}_n(z)$ ($i = 1, 2$), we obtain (for $l > 0$)

$$\begin{split} u^{(l)}_{n,\delta} &\approx \sum_{j=-K}^{K} w^{(l)}_j\left[\tau\sum_{k=b_l}^{b_{l-1}-1} e^{(\kappa)}_{n-k}(\tau\lambda^{(l)}_j)(\xi^k - \xi^0)\right]F_{\delta}(\lambda^{(l)}_j) \\ &= \sum_{j=-K}^{K} w^{(l)}_j\tau\left[\sum_{k=b_l}^{b_{l-1}-1} e^{(1)}_{n-k}(\tau\lambda^{(l)}_j)(\xi^k - \xi^0) - \sum_{k=b_l}^{b_{l-1}-1} e^{(2)}_{n-k}(\tau\lambda^{(l)}_j)(\xi^k - \xi^0)\right]F_{\delta}(\lambda^{(l)}_j) \\ &= \sum_{j=-K}^{K} w^{(l)}_j\left[r_1^{n-(b_{l-1}-1)}(\tau\lambda^{(l)}_j)v^{(1)}_j - r_2^{n-(b_{l-1}-1)}(\tau\lambda^{(l)}_j)v^{(2)}_j\right]F_{\delta}(\lambda^{(l)}_j), \end{split} \tag{61}$$

where $v^{(i)}_j$ ($i = 1, 2$) is given by

$$\boldsymbol{v}\_{\boldsymbol{j}}^{(i)} = \boldsymbol{v}\_{\boldsymbol{j}}^{(i)}(\boldsymbol{b}\_{l}, \boldsymbol{b}\_{l-1}, \boldsymbol{\lambda}\_{\boldsymbol{j}}^{(l)}) = \boldsymbol{\tau} \sum\_{k=b\_{l}}^{b\_{l-1}-1} \boldsymbol{e}\_{(b\_{l-1}-1)-k}^{(i)} (\boldsymbol{\tau} \boldsymbol{\lambda}\_{\boldsymbol{j}}^{(l)}) (\boldsymbol{\xi}^{k} - \boldsymbol{\xi}^{0}).\tag{62}$$

We notice that $v^{(i)}_j(b_l, b_{l-1}, \lambda^{(l)}_j)$ has a recursive structure, which can be utilized to enhance the computation speed:

$$\begin{split} v^{(i)}_j(b_l, b_s, \lambda^{(l)}_j) &= \tau\sum_{k=b_l}^{b_m-1} e^{(i)}_{(b_s-1)-k}(\tau\lambda^{(l)}_j)(\xi^k - \xi^0) + v^{(i)}_j(b_m, b_s, \lambda^{(l)}_j) \\ &= r_i(\tau\lambda^{(l)}_j)^{b_s-b_m}\, v^{(i)}_j(b_l, b_m, \lambda^{(l)}_j) + v^{(i)}_j(b_m, b_s, \lambda^{(l)}_j). \end{split} \tag{63}$$

The first few weights are not described well by (57) (refer to Figure 1). Thus, for *l* = 0, 1, 2, ··· , *k*, we calculate the weights according to (9), and for *l* = *k* + 1, ··· , *L*, we calculate the weights according to (57). Combining (59)–(61), we can obtain

$$\begin{split} D^{n,\theta}_{\tau,\eta}\xi &= \sum_{l=0}^{k} u^{(l)}_{n,\eta} + \sum_{l=k+1}^{L} u^{(l)}_{n,\eta} \\ &\approx \sum_{l=0}^{k} u^{(l)}_{n,\eta} + \sum_{l=k+1}^{L}\sum_{j=-K}^{K} w^{(l)}_j\left[r_1^{n-(b_{l-1}-1)}(\tau\lambda^{(l)}_j)v^{(1)}_j - r_2^{n-(b_{l-1}-1)}(\tau\lambda^{(l)}_j)v^{(2)}_j\right]F_{\eta}(\lambda^{(l)}_j), \\ I^{n,\theta}_{\tau,\eta}\xi &= \sum_{l=0}^{k} u^{(l)}_{n,-\eta} + \sum_{l=k+1}^{L} u^{(l)}_{n,-\eta} \\ &\approx \sum_{l=0}^{k} u^{(l)}_{n,-\eta} + \sum_{l=k+1}^{L}\sum_{j=-K}^{K} w^{(l)}_j\left[r_1^{n-(b_{l-1}-1)}(\tau\lambda^{(l)}_j)v^{(1)}_j - r_2^{n-(b_{l-1}-1)}(\tau\lambda^{(l)}_j)v^{(2)}_j\right]F_{-\eta}(\lambda^{(l)}_j). \end{split} \tag{64}$$

Below are listed the steps for implementing the fast algorithm:
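As a structural sketch of these steps (our illustration with placeholder weights and history values; the actual $\omega^{(\delta)}_n$ come from (9) or (57), and the far blocks would additionally use the contour representation (61)–(63)), the block decomposition of the history sum in (59)–(60) can be written and checked against the direct convolution:

```python
import math

def blocked_history_sum(omega, xi, b):
    """Evaluate the history sum of (59)-(60) (the tau^{-delta} scaling is omitted):
    omega_0 (xi^n - xi^0) + sum_{l=1}^{L} sum_{k=b_l}^{b_{l-1}-1} omega_{n-k} (xi^k - xi^0),
    with block boundaries n = b_0 > b_1 > ... > b_L = 0."""
    n = b[0]
    u = [omega[0] * (xi[n] - xi[0])]  # the l = 0 term
    for l in range(1, len(b)):
        u.append(sum(omega[n - k] * (xi[k] - xi[0]) for k in range(b[l], b[l - 1])))
    return sum(u)

# Placeholder weights and history values (hypothetical, for illustration only).
n = 40
omega = [1.0 / (m + 1) ** 1.5 for m in range(n + 1)]
xi = [math.sin(0.1 * k) for k in range(n + 1)]

direct = sum(omega[n - k] * (xi[k] - xi[0]) for k in range(n + 1))
blocked = blocked_history_sum(omega, xi, [n, 32, 16, 8, 4, 2, 1, 0])
print(abs(blocked - direct))  # zero up to rounding
```

The speed-up in the actual algorithm comes from never forming the inner block sums directly: each block is represented through the quantities $v^{(i)}_j$, which are updated recursively via (63).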


#### **6. Numerical Examples**

In this section, we provide four examples of solving an FKGE utilizing our proposed scheme, and the results verify our theoretical analysis and the effectiveness of our method. The basis functions were chosen as $\psi_j(x) = L_j(x) - L_{j+2}(x)$, $j = 0, 1, \cdots, N-2$, so that every $v^k_N \in P^0_N$ can be written as $v^k_N = \sum_{j=0}^{N-2}\hat{v}^k_j\psi_j(x)$, where the $\hat{v}^k_j$ are the expansion coefficients. The codes were developed in MATLAB 2022a and executed on a Windows 10 operating system, on a computer with a 2.60 GHz processor and 8 GB of RAM.

**Example 1.** *Let ρ* = 1 *in (1). We considered the following fractional dissipative Klein–Gordon equation with homogeneous initial condition φ*(*x*) = 0, *ϕ*(*x*) = 0*:*

$$\frac{\partial^a \xi(\mathbf{x}, t)}{\partial t^a} + \frac{\partial \xi(\mathbf{x}, t)}{\partial t} + \xi(\mathbf{x}, t) = \frac{\partial^2 \xi(\mathbf{x}, t)}{\partial \mathbf{x}^2} + f(\mathbf{x}, t). \tag{65}$$

*Assuming that the exact solution of Equation (65) is* $\xi(x,t) = t^4\sin(\pi x)$*, the corresponding forcing term is given by*

$$f(\mathbf{x}, t) = \left[ \frac{\Gamma(5)}{\Gamma(5-\alpha)} t^{4-\alpha} + 4t^3 + (1+\pi^2)t^4 \right] \sin(\pi \mathbf{x}).$$

For *N* = 100, the results are presented in Tables 1–3. It can be observed that our numerical scheme exhibited second-order convergence accuracy in the temporal direction, which aligned with the theoretical expectations.

**Table 1.** The *L*<sup>2</sup> error and *L*<sup>∞</sup> error at *α* = 1.5, *θ* = 0.3, and *T* = 1.



**Table 2.** Temporal convergence rates at *α* = 1.2, *θ* = 0.5, and *T* = 1.

**Table 3.** The *L*<sup>2</sup> error and *L*<sup>∞</sup> error at *α* = 1.8, *θ* = 0.8, and *T* = 1.


To analyze the spatial accuracy, we set *τ* = 0.001 to eliminate temporal direction errors. In Figure 2, it can be observed that when *α* = 1.8 and *θ* = 0.8, the error exhibited an exponential decrease. This behavior confirmed the spectral accuracy of the method, which in turn confirmed the validity of our theoretical analysis.

**Figure 2.** *α* = 1.8, *θ* = 0.8 for Example 1 at *T* = 1.

**Example 2.** *Let ρ* = 0 *in (1). We investigated the fractional linear Klein–Gordon equation with the non-homogeneous initial conditions φ*(*x*) = sin(*πx*), *ϕ*(*x*) = 0,

$$\frac{\partial^a \xi(\mathbf{x}, t)}{\partial t^a} + \xi(\mathbf{x}, t) = \frac{\partial^2 \xi(\mathbf{x}, t)}{\partial x^2} + f(\mathbf{x}, t). \tag{66}$$

*Assuming that the exact solution of Equation (66) is* $\xi(x,t) = (t^4 + 1)\sin(\pi x)$*, the corresponding forcing term is*

$$f(\mathbf{x}, t) = \left[ \frac{\Gamma(5)}{\Gamma(5-\alpha)} t^{4-\alpha} + \frac{1}{\Gamma(1-\alpha)} t^{-\alpha} + (t^4 + 1)(1 + \pi^2) \right] \sin(\pi \mathbf{x}).$$

For *N* = 100, the results are illustrated in Tables 4–6. Notably, even when considering non-homogeneous initial conditions, it was evident that our numerical scheme remained applicable. The results indicated the adaptability and flexibility of our method.


**Table 4.** The *L*<sup>2</sup> error and *L*<sup>∞</sup> error at *α* = 1.5, *θ* = 0.9, and *T* = 1.

**Table 5.** The *L*<sup>2</sup> error and *L*<sup>∞</sup> error at *α* = 1.2, *θ* = 0.7, and *T* = 1.


**Table 6.** The *L*<sup>2</sup> error and *L*<sup>∞</sup> error at *α* = 1.8, *θ* = 0.3, and *T* = 1.


**Example 3.** *Let* $\rho = 1$ *in (1). The non-smooth solution* $\xi(x,t) = \big(t^4 + t^{\min\{2-\alpha,\alpha-1\}}\big)\sin(\pi x)$ *was considered, with the corresponding forcing term*

$$\begin{split} f(x,t) = \bigg[&\frac{\Gamma(\min\{3-\alpha,\alpha\})}{\Gamma(\min\{3-2\alpha,0\})}t^{\min\{2-2\alpha,-1\}} + \frac{\Gamma(5)}{\Gamma(5-\alpha)}t^{4-\alpha} + 4t^3 \\ &+ \min\{2-\alpha,\alpha-1\}\,t^{\min\{1-\alpha,\alpha-2\}} + \big(t^4 + t^{\min\{2-\alpha,\alpha-1\}}\big)(1+\pi^2)\bigg]\sin(\pi x). \end{split}$$

For *N* = 100, it is worth mentioning that, due to the weak regularity of the solution, it was not possible to achieve the optimal convergence rate of *O*(*τ*<sup>2</sup>). Referring to Table 7, we can observe that the inclusion of correction terms led to an improved convergence rate. This result serves as evidence for the efficiency of our method.

**Table 7.** Temporal convergence rates at *T* = 1.


**Example 4.** *Let* $\rho = 0$ *in (1). We utilized the fast algorithm to solve Equation (66). Assuming that the exact solution of Equation (66) is* $\xi(x,t) = t^4\sin(\pi x)$*, the corresponding forcing term is*

$$f(\mathbf{x},t) = \left[\frac{\Gamma(5)}{\Gamma(5-\alpha)}t^{4-\alpha} + (1+\pi^2)t^4\right]\sin(\pi\mathbf{x}).$$

We set *B* = 5 and *N* = 100. To simplify the notation, we denoted the approximation of Equation (57) with 2*K* + 1 points as Fast*K*. We had two sets of solutions: $Z_S$, which was obtained using the direct method, and $Z_F$, which was obtained using the fast algorithm. We set $\theta = \frac{1-\alpha}{2}$ and defined the pointwise error as

$$e(\alpha, M) = \max_{t = t_0, \cdots, t_M;\; x = x_1, \cdots, x_N}|Z_S - Z_F|. \tag{67}$$

According to Table 8, it is evident that the fast algorithm significantly accelerated the computation process while attaining exceptional precision. For example, for *K* = 30, the pointwise error was around $10^{-15}$, which is close to machine accuracy. Figure 3a displays the exact solution for *M* = 1000 and *α* = 1.8, and Figure 3b shows the corresponding numerical solution for *K* = 30. The error contour plot in Figure 4a was obtained by subtracting the solutions in Figure 3a,b. Figure 4b shows that the computational complexity of the fast algorithm was $O(M\log M)$, whereas the direct method had a computational complexity of $O(M^2)$, in line with the theoretical expectations.

**Figure 3.** (**a**) Exact solution, (**b**) numerical solution.

**Figure 4.** (**a**) Error contour, (**b**) computational complexity.


**Table 8.** Pointwise error with $\theta = \frac{1-\alpha}{2}$.

#### **7. Conclusions**

In this study, we developed a stable and efficient numerical method to solve an FKGE, and we provided a stability and convergence analysis of the discrete scheme. Considering the weak regularity of the solutions, we improved the convergence order by incorporating correction terms into our approach. To optimize the computational complexity, we implemented a fast algorithm, which significantly reduced the runtime required for solving an FKGE. This allowed for quicker computations without sacrificing accuracy. We note that the method can be extended to higher-dimensional cases.

**Author Contributions:** Methodology, Y.L. (Yanqin Liu); Formal analysis, Y.S.; Data curation, Y.X.; Writing—original draft, Y.L. (Yanan Li). All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Natural Science Foundation of Shandong Province (ZR2023MA062), the National Science Foundation of China (62103079), and the Open Research Fund Program of the Data Recovery Key Laboratory of Sichuan Province (DRN19020).

**Data Availability Statement:** All data reported are obtained by the numerical schemes designed in this paper.

**Acknowledgments:** We appreciate the support of the Natural Science Foundation of Shandong Province (ZR2023MA062), the National Science Foundation of China (62103079), and the Open Research Fund Program of the Data Recovery Key Laboratory of Sichuan Province (DRN19020).

**Conflicts of Interest:** The authors declare no conflicts of interest regarding the publication of this paper.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Eighth-Kind Chebyshev Polynomials Collocation Algorithm for the Nonlinear Time-Fractional Generalized Kawahara Equation**

**Waleed Mohamed Abd-Elhameed 1,\*, Youssri Hassan Youssri 1,2, Amr Kamel Amin <sup>3</sup> and Ahmed Gamal Atta <sup>4</sup>**


**Abstract:** In this study, we present an innovative approach involving a spectral collocation algorithm to effectively obtain numerical solutions of the nonlinear time-fractional generalized Kawahara equation (NTFGKE). We introduce a new set of orthogonal polynomials (OPs) referred to as "Eighth-kind Chebyshev polynomials (CPs)". These polynomials are special kinds of generalized Gegenbauer polynomials. To achieve the proposed numerical approximations, we first derive some new theoretical results for eighth-kind CPs, and after that, we employ the spectral collocation technique and incorporate the shifted eighth-kind CPs as fundamental functions. This method facilitates the transformation of the equation and its inherent conditions into a set of nonlinear algebraic equations. By harnessing Newton's method, we obtain the necessary semi-analytical solutions. Rigorous analysis is dedicated to evaluating convergence and errors. The effectiveness and reliability of our approach are validated through a series of numerical experiments accompanied by comparative assessments. By undertaking these steps, we seek to communicate our findings comprehensively while ensuring the method's applicability and precision are demonstrated.

**Keywords:** time-fractional Kawahara equation; generalized Gegenbauer polynomials; Chebyshev polynomials; collocation method; connection formulas; convergence analysis

**MSC:** 65M60; 11B39; 40A05; 34A08

### **1. Introduction**

The presence of CPs is widely recognized in the realm of numerical analysis, a fact well-documented by notable mathematicians and numerical analysts across various references such as [1–3]. This observation, sometimes attributed to Philip Davis but collectively acknowledged, underscores the significance of CPs in this field. Their influence is pervasive, consistently emerging in modern advancements encompassing function approximation, integral estimation, and the application of spectral methods to diverse differential equations (DEs).

Various kinds of CPs are explored within the research landscape. Noteworthy attention is directed toward both the first and second kinds, as evidenced in studies such as [4,5]. Similarly, investigations delve into the third and fourth kinds, as exemplified in research such as [6–8]. In the context of numerical solutions for specific fractional differential equations (FDEs), Abd-Elhameed and Youssri have ventured into utilizing the fifth and sixth types of CPs [9,10]. A continuation of this exploration can be observed through their subsequent works, including [11–13], where fifth-kind CPs are harnessed for addressing more intricate partial DEs. Additionally, their application extends to the realm of sixth-kind CPs, effectively addressing advanced partial DEs [14,15].

**Citation:** Abd-Elhameed, W.M.; Youssri, Y.H.; Amin, A.K.; Atta, A.G. Eighth-Kind Chebyshev Polynomials Collocation Algorithm for the Nonlinear Time-Fractional Generalized Kawahara Equation. *Fractal Fract.* **2023**, *7*, 652. https:// doi.org/10.3390/fractalfract7090652

Academic Editors: Libo Feng, Lin Liu and Yang Liu

Received: 29 June 2023 Revised: 22 August 2023 Accepted: 27 August 2023 Published: 29 August 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Further substantiating the versatility of CPs, other researchers have utilized fifth-kind CPs for diverse DE types [16–19], while a distinct focus is placed on the sixth-kind CPs by researchers, as highlighted in [20–22]. These endeavors collectively underscore the diverse utility and applicability of CPs within the landscape of differential equations research.

A wide array of applications arises for OPs characterized by their trigonometric representation. These find utility in diverse domains, including signal analysis through Fourier series expansions, approximating and interpolating periodic functions, as well as tackling DEs with periodic boundary conditions. Their pertinence is particularly marked in numerical algorithms such as the spectral method, which effectively leverages these polynomials to achieve precise function approximations. This multifaceted significance elucidates the overarching importance of CPs in various contexts, thereby prompting further exploration and analysis of distinct kinds of CPs.

In a compelling Ph.D. dissertation by Masjed-Jamei [23], an extended Sturm–Liouville problem is ingeniously applied to symmetric functions, ushering in a symmetrical class defined by four parameters. This work elucidates the fundamental attributes of these polynomials, including their compliance with a three-term recurrence relation, their orthogonality, and several other notable formulae. The principal advantage of introducing this specific class of OPs lies in its capacity to generalize several noteworthy, established classes of OPs. Furthermore, some lesser-known OPs are revealed as specific instances of this introduced class. Notably, the widely recognized four categories of CPs emerge as special cases within this broader generalized class. Additionally, the exploration yields two new OPs classes that can be derived from this encompassing generalized category. This insightful investigation thus contributes to both the enhancement of existing OPs knowledge and the introduction of novel variants.

A commonly employed technique for solving DEs or approximating functions is the conventional collocation method. This method belongs to the spectral method family, which is renowned for its exceptional accuracy and swift convergence rates. Within the spectral collocation framework, the domain undergoes discretization into a set of collocation points, often termed grid points. These specific points are selected meticulously, guided by criteria such as the extrema of certain functions or the roots of OPs. To enhance precision around regions of interest and accurately capture boundary conditions, these points are usually distributed non-uniformly. After identifying the collocation points, polynomial interpolation is conducted based on this configuration to approximate the unknown function or solve the differential equation.
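To make the collocation workflow just described concrete, here is a minimal, self-contained sketch (illustrative only, not taken from this paper): it solves the toy problem $u'(x) = -u(x)$, $u(0) = 1$ on $[0, 1]$ by collocating a Chebyshev expansion at Chebyshev–Gauss–Lobatto points mapped to $[0, 1]$; all variable names are ours.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 16                                    # polynomial degree
# Chebyshev-Gauss-Lobatto nodes in [-1, 1]; the map x = (s + 1)/2
# places the collocation points in [0, 1]
s = np.cos(np.pi * np.arange(N + 1) / N)

# V[i, k] = T_k(s_i); D[i, k] = d/dx T_k(s(x_i)) = 2 T_k'(s_i) (chain rule)
V = C.chebvander(s, N)
D = np.column_stack([
    C.chebval(s, C.chebder(np.eye(N + 1)[:, k])) * 2.0
    for k in range(N + 1)])

# Collocate u'(x) + u(x) = 0 at the nodes, then impose u(0) = 1
A = D + V
A[-1, :] = V[-1, :]                       # s = -1 corresponds to x = 0
b = np.zeros(N + 1)
b[-1] = 1.0
coef = np.linalg.solve(A, b)

u = C.chebval(2 * 0.7 - 1, coef)          # approximate u(0.7); exact: exp(-0.7)
```

With a smooth solution such as $e^{-x}$, the error of this small degree-16 expansion is already near machine precision, which is the "swift convergence" referred to above.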

OPs, including CPs, Legendre polynomials, or Jacobi polynomials, are commonly chosen for this interpolation, depending on the specific scenario. The primary advantage of the collocation approach is its versatility, making it applicable to a wide array of differential equation types. Notable instances of its application include ordinary DEs, as demonstrated in contributions such as [24], partial DEs showcased in [25,26], and FDEs illustrated through references such as [27,28].

For further insights into spectral methods, consider exploring contributions such as [29–31], which further enrich the understanding of this approach's applications and potential.

The use of FDEs in place of classical ones has become increasingly popular in recent years [32–34]. This is because these equations can describe many phenomena in different disciplines of science. More specifically, these equations aid in analyzing signals characterized by non-integer power-law traits, such as fractal time series and self-similar signals. In addition, they can model processes such as chemical reactions, heat transfer, and fluid flow. For some theoretical aspects of FDEs, one can refer to [35,36], while some practical applications of FDEs can be found in [37,38]. In the past years, many studies have been published on numerical methods for time and space FDEs; for example, see [39–41]. For some numerical algorithms that are employed to handle various types of FDEs, one can consult [42–46].

The evolution of wave packets in dispersive media is described by the Kawahara equation, a nonlinear partial differential equation. It was first proposed by Toshio Kawahara in 1972 as a generalization of the widely used Korteweg–de Vries equation, which models shallow water waves. The Kawahara equation is given by:

$$
u\_t + \alpha\, u\, u\_x + \beta\, u\_{xxx} + \gamma\, u\_{xxxxx} = 0,
$$

where $u = u(x, t)$ represents the dependent variable (usually the amplitude of a wave packet), $t$ is the time variable, and $x$ is the spatial variable. The subscripts denote partial derivatives with respect to the corresponding variables, and $\alpha$, $\beta$, and $\gamma$ are constants that determine the behavior of the equation. The Kawahara equation comprises three characteristic terms: the convective term $u\, u\_x$, the dispersive term $u\_{xxx}$, and the higher-order dispersion term $u\_{xxxxx}$. The dispersive term accounts for wave dispersion, the higher-order dispersion term captures extra dispersion effects that emerge in specific media, and the convective term describes the advection of the wave packet by its own velocity. The authors of [47] found an explicit solution for the time-fractional generalized dissipative Kawahara equation. In [48], both the theory and the applications of Caputo time-fractional nonlinear equations are discussed. Additionally, Refs. [49,50] examined the Lie symmetry analysis and conservation laws for the time-fractional simplified modified Kawahara equation and the time-fractional generalized fifth-order KdV equation.

The primary goals of this research are to solve the nonlinear time-fractional generalized Kawahara equation (NTFGKE) and to examine the performance of a spectral approach based on eighth-kind CPs as a numerical solution technique. We aim to create a mathematical framework for the NTFGKE, which entails comprehending the physical phenomena covered by the equation and constructing the necessary mathematical equations. In addition, we aim to validate the numerical results of the spectral approach.

Some advantages of the proposed method, as far as we are aware, include the following:


We point out here that the novelty of the contribution in this paper can be listed as follows:


Here is how the paper is divided: Some fractional calculus concepts and an overview of CPs of the eighth kind are introduced as useful mathematical tools in Section 2. In Section 3, we develop a few other new formulas related to CPs of the eighth kind. The primary focus of the main part of this article is on developing a collocation procedure for dealing with the NTFGKE, which is covered in Section 4. We examine, in detail, the truncation error and the rate of convergence of the expansion coefficients in Section 5. Some illustrative examples are given in Section 6. Section 7 provides some closing thoughts.

#### **2. Some Relationships and Preliminary Information**

The purpose of this section is to present the definition of fractional Caputo derivatives and to recall some of the important properties they satisfy. A few properties and relations associated with eighth-kind CPs are given.

#### *2.1. Caputo Definition of the Fractional Derivative*

**Definition 1.** *Caputo defined the fractional-order derivative as ([51]):*

$$\frac{d^{\alpha} \zeta(s)}{ds^{\alpha}} = \frac{1}{\Gamma(p-\alpha)} \int\_0^s (s-t)^{p-\alpha-1}\, \zeta^{(p)}(t)\, dt, \quad \alpha, s > 0, \quad p-1 \leqslant \alpha < p, \quad p \in \mathbb{Z}^+.$$

The following property is satisfied by the operator $\frac{d^{\alpha}}{ds^{\alpha}}$ for $p-1 \leqslant \alpha < p$, $p \in \mathbb{N}$:

$$\frac{d^{\alpha}s^{p}}{ds^{\alpha}} = \begin{cases} 0, & \text{if } p \in \mathbb{N}\_{0} \quad \text{and} \quad p < \lceil \alpha \rceil, \\ \frac{\Gamma(p+1)}{\Gamma(p-\alpha+1)}\, s^{p-\alpha}, & \text{if } p \in \mathbb{N}\_{0} \quad \text{and} \quad p \ge \lceil \alpha \rceil, \end{cases}$$

where $\mathbb{N}\_0 = \{0, 1, 2, \dots\}$, $\Gamma(\cdot)$ is the gamma function [52], and $\lceil \alpha \rceil$ denotes the ceiling function.
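As an illustrative sketch (ours, not part of the original text), the following Python snippet implements the power rule above for $0 < \alpha < 1$ (so $\lceil \alpha \rceil = 1$) and cross-checks it against a direct midpoint-rule evaluation of the Caputo integral; the function names are ours.

```python
from math import gamma

def caputo_power(p, alpha, s):
    """Closed-form Caputo derivative of s**p for 0 < alpha < 1,
    so that ceil(alpha) = 1 and constants are annihilated."""
    if p < 1:            # p = 0: the derivative of a constant vanishes
        return 0.0
    return gamma(p + 1) / gamma(p - alpha + 1) * s**(p - alpha)

def caputo_quad(p, alpha, s, n=100000):
    """Direct midpoint-rule evaluation of the Caputo integral
    (1/Gamma(1 - alpha)) * int_0^s (s - t)**(-alpha) * (d/dt t**p) dt.
    The midpoint rule copes reasonably with the endpoint singularity."""
    h = s / n
    acc = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        acc += (s - t)**(-alpha) * p * t**(p - 1)
    return acc * h / gamma(1 - alpha)
```

For example, with $p = 2$, $\alpha = \tfrac{1}{2}$, $s = 1$ the closed form gives $\Gamma(3)/\Gamma(5/2) = 2/\Gamma(5/2)$, and the quadrature agrees to a few decimal places.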

#### *2.2. An Account of the CPs of Eighth-Kind and Their Shifted Ones*

We account here for the eighth kind of CPs. In addition, we will develop some important formulas for these polynomials that will be useful in the sequel.

The generalized Gegenbauer polynomials $G\_n^{(\lambda,\mu)}(\xi)$ are OPs on $[-1, 1]$ with respect to the weight function $w(\xi) = (1-\xi^2)^{\lambda-\frac{1}{2}}\, |\xi|^{2\mu}$. In fact, these polynomials can be defined as (see [53,54])

$$G\_{n}^{(\lambda,\mu)}(\xi) = \begin{cases} \frac{(\lambda+\mu)\_{\frac{k}{2}}}{\left(\mu+\frac{1}{2}\right)\_{\frac{k}{2}}} P\_{\frac{k}{2}}^{\left(\lambda-\frac{1}{2},\mu-\frac{1}{2}\right)} \left(2\xi^{2}-1\right), & \text{if } k \text{ even},\\ \frac{(\lambda+\mu)\_{\frac{k+1}{2}}}{\left(\mu+\frac{1}{2}\right)\_{\frac{k+1}{2}}} \xi P\_{\frac{k-1}{2}}^{\left(\lambda-\frac{1}{2},\mu+\frac{1}{2}\right)} \left(2\xi^{2}-1\right), & \text{if } k \text{ odd}, \end{cases} \tag{1}$$

where $P\_k^{(\gamma,\delta)}(\xi)$ are the classical Jacobi polynomials, and $(z)\_k$ is the Pochhammer symbol; that is, $(z)\_k = \frac{\Gamma(z+k)}{\Gamma(z)}$.

**Remark 1.** *Many celebrated OPs may be extracted from the generalized polynomials $G\_n^{(\lambda,\mu)}(\xi)$ as particular cases. The Gegenbauer polynomials, which include the first and second kinds of CPs, are special cases of $G\_n^{(\lambda,\mu)}(\xi)$. In addition, the fifth and sixth kinds of CPs are specific instances of $G\_n^{(\lambda,\mu)}(\xi)$.*

Now, we will consider eighth-kind CPs, which will be denoted by $E\_k(\xi)$. The sequence $\{E\_k(\xi)\}\_{k \ge 0}$ is a sequence of OPs on $[-1, 1]$, orthogonal with respect to the weight function $w(\xi) = \xi^4 \sqrt{1-\xi^2}$. In other words, $E\_k(\xi) = G\_k^{(1,2)}(\xi)$. Thus, from (1), they can be represented as

$$E\_k(\xi) = \begin{cases} \frac{(3)\_{\frac{k}{2}}}{\left(\frac{5}{2}\right)\_{\frac{k}{2}}}\, P\_{\frac{k}{2}}^{\left(\frac{1}{2},\frac{3}{2}\right)} \left(2\xi^2 - 1\right), & \text{if } k \text{ even},\\ \frac{(3)\_{\frac{k+1}{2}}}{\left(\frac{5}{2}\right)\_{\frac{k+1}{2}}}\, \xi\, P\_{\frac{k-1}{2}}^{\left(\frac{1}{2},\frac{5}{2}\right)} \left(2\xi^2 - 1\right), & \text{if } k \text{ odd}, \end{cases} \tag{2}$$

with the orthogonality relation:

$$\int\_{-1}^{1} \xi^{4} \sqrt{1 - \xi^{2}}\, E\_{\ell}(\xi)\, E\_{m}(\xi)\, d\xi = h\_{\ell}\, \delta\_{\ell, m}, \tag{3}$$

where

$$h\_{\ell} = \frac{9\,\pi}{128} \begin{cases} \frac{\left(\ell+2\right)\left(\ell+4\right)}{\left(\ell+3\right)^2}, & \ell \text{ even}, \\ \frac{\left(\ell+1\right)\left(\ell+5\right)}{\left(\ell+2\right)\left(\ell+4\right)}, & \ell \text{ odd}, \end{cases}$$

and $\delta\_{\ell,m}$ is the Kronecker delta function.

Among the pivotal formulas of $E\_j(\xi)$ are the analytic formulas and their inversions. The following two lemmas give these results.

**Lemma 1.** *For every non-negative integer $\ell$, the polynomials $E\_{2\ell}(\xi)$ and $E\_{2\ell+1}(\xi)$ can be expressed as:*

$$E\_{2\ell}(\xi) = \sum\_{m=0}^{\ell} \frac{(-1)^m (3)\_{2\ell-m}}{(\ell-m)!\, m! \left(\frac{5}{2}\right)\_{\ell-m}}\, \xi^{2\ell-2m},\tag{4}$$

$$E\_{2\ell+1}(\xi) = \sum\_{m=0}^{\ell} \frac{(-1)^m (3)\_{1+2\ell-m}}{(\ell-m)!\, m! \left(\frac{5}{2}\right)\_{1+\ell-m}}\, \xi^{2\ell-2m+1}.\tag{5}$$

**Proof.** The proof is direct from (2).
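The analytic formulas (4) and (5) make it easy to check the orthogonality relation (3) numerically. The following illustrative sketch (ours, not the authors') evaluates $E\_k$ from (4)–(5) and approximates (3) with composite Simpson quadrature after the substitution $\xi = \cos\theta$; the reference values $\pi/16$ and $9\pi/160$ come from direct Beta-function evaluation of the weighted moment integrals.

```python
from math import cos, gamma, pi

def poch(z, n):
    """Pochhammer symbol (z)_n = Gamma(z + n) / Gamma(z)."""
    return gamma(z + n) / gamma(z)

def E(k, x):
    """Eighth-kind Chebyshev polynomial E_k(x) via formulas (4)-(5)."""
    l, odd = divmod(k, 2)
    total = 0.0
    for m in range(l + 1):
        if odd:
            total += (-1)**m * poch(3, 1 + 2*l - m) / (
                gamma(l - m + 1) * gamma(m + 1) * poch(2.5, 1 + l - m)
            ) * x**(2*l - 2*m + 1)
        else:
            total += (-1)**m * poch(3, 2*l - m) / (
                gamma(l - m + 1) * gamma(m + 1) * poch(2.5, l - m)
            ) * x**(2*l - 2*m)
    return total

def inner(l, m, n=4001):
    """Approximate the orthogonality integral (3) after x = cos(theta):
    int_0^pi cos^4(theta) sin^2(theta) E_l E_m d(theta), composite Simpson."""
    h = pi / (n - 1)
    acc = 0.0
    for i in range(n):
        x = cos(i * h)
        f = x**4 * (1 - x * x) * E(l, x) * E(m, x)
        wgt = 1.0 if i in (0, n - 1) else (4.0 if i % 2 else 2.0)
        acc += wgt * f
    return acc * h / 3.0
```

For instance, `inner(0, 0)` reproduces $h\_0 = \pi/16$ and `inner(0, 2)` vanishes, since $E\_0 = 1$ and $E\_2 = 4.8\xi^2 - 3$ are orthogonal under this weight.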

**Lemma 2.** *The inversion formulas to* (4) *and* (5) *are given by*

$$\begin{aligned} \xi^{2\ell} &= \sum\_{m=0}^{\ell} \frac{(3+2\ell-2m)\, \ell! \left(\frac{5}{2}\right)\_{\ell}}{m!\, (3)\_{2\ell-m+1}}\, E\_{2\ell-2m}(\xi), \quad \ell \ge 0, \\ \xi^{2\ell+1} &= 2 \sum\_{m=0}^{\ell} \frac{(2+\ell-m)\, \ell! \left(\frac{5}{2}\right)\_{\ell+1}}{m!\, (3)\_{2+2\ell-m}}\, E\_{2\ell-2m+1}(\xi), \quad \ell \ge 0. \end{aligned}$$

**Proof.** The proof is analogous to the one presented for the inversion of the CPs of the fifth kind in [55].

## **3. Some Important Formulas Related to E_k(ξ) and Their Shifted Ones**

This section is devoted to deriving some important formulas concerning eighth-kind CPs. We will derive the connection formula between $E\_k(\xi)$ and the second-kind CPs $\mathcal{U}\_k(\xi)$. This formula will be the key to obtaining a trigonometric representation of $E\_k(\xi)$. In addition, expressions for the derivatives of $E\_k(\xi)$ are found.

#### *3.1. Some Formulas Concerned with E_k(ξ)*

The following theorem displays the connection formula between eighth- and second-kind CPs, which will be useful in the sequel.

**Theorem 1.** *The polynomials $E\_k(\xi)$ can be written as combinations of the second-kind CPs $\mathcal{U}\_k(\xi)$ as*

$$E\_{2\ell}(\xi) = \frac{3}{2(2\ell+3)} \sum\_{m=0}^{\ell} (-1)^m \left(m+1\right) \left(2\ell-m+2\right) \mathcal{U}\_{2\ell-2m}(\xi), \quad \ell \ge 0,\tag{6}$$

$$E\_{2\ell+1}(\xi) = \frac{3}{\left(2\ell+3\right)\left(2\ell+5\right)} \sum\_{m=0}^{\ell} (-1)^m \left(m+1\right) \left(-2\ell+m-3\right) \left(-\ell+m-1\right) \mathcal{U}\_{2\ell-2m+1}(\xi), \quad \ell \ge 0. \tag{7}$$

**Proof.** The power form representation in (5), along with the inversion formula

$$\xi^{2j+1} = \frac{(2j+1)!}{2^{2j}} \sum\_{r=0}^{j} \frac{1+j-r}{r!\,(2j-r+2)!}\, \mathcal{U}\_{2j-2r+1}(\xi), \quad j \ge 0,$$

yields the following formula

$$E\_{2\ell+1}(\xi) = 3 \sum\_{m=0}^{\ell} \frac{(-1)^m (2\ell - m + 3)!}{m!\, (3 + 2\ell - 2m)(5 + 2\ell - 2m)} \sum\_{s=0}^{\ell - m} \frac{1 + \ell - m - s}{s!\, (2 + 2\ell - s - 2m)!}\, \mathcal{U}\_{2\ell - 2s - 2m + 1}(\xi),$$

which can be transformed again into

$$E\_{2\ell+1}(\xi) = 3\sum\_{m=0}^{\ell} (1+\ell-m) \sum\_{p=0}^{m} \frac{(-1)^p (3+2\ell-p)!}{(3+2\ell-2p)(5+2\ell-2p)\,p!\,(2+2\ell-p-m)!\,(m-p)!}\, \mathcal{U}\_{2\ell-2m+1}(\xi). \tag{8}$$

Now, setting

$$M\_{m,\ell} = \sum\_{p=0}^{m} \frac{(-1)^p (3 + 2\ell - p)!}{(3 + 2\ell - 2p)(5 + 2\ell - 2p)p!(2 + 2\ell - p - m)!(m - p)!},$$

so it is not difficult to show, with the aid of Zeilberger's algorithm (see [56]), that $M\_{m,\ell}$ satisfies the first-order recurrence relation:

$$(3 + 2\ell - m)(1 + m)\,M\_{m - 1, \ell} + (4 + 2\ell - m)\,m\,M\_{m, \ell} = 0, \quad M\_{0, \ell} = \frac{1}{5 + 2\ell},$$

which can be quickly solved to give

$$M\_{m,\ell} = \frac{(-1)^{1+m}(1+m)(-3-2\ell+m)}{(3+2\ell)(5+2\ell)}.$$

Now, Formula (8) turns into Formula (7). Formula (6) can be similarly obtained.
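A quick numerical sanity check of the connection formulas (6) and (7) (an illustrative sketch of ours, not the authors' code): compare $E\_k$ evaluated from the analytic formulas (4)–(5) against the right-hand sides built from the second-kind polynomials $\mathcal{U}\_k$.

```python
from math import gamma

def poch(z, n):
    """Pochhammer symbol (z)_n = Gamma(z + n) / Gamma(z)."""
    return gamma(z + n) / gamma(z)

def E(k, x):
    """Eighth-kind CP from the analytic formulas (4)-(5)."""
    l, odd = divmod(k, 2)
    total = 0.0
    for m in range(l + 1):
        if odd:
            total += (-1)**m * poch(3, 1 + 2*l - m) / (
                gamma(l - m + 1) * gamma(m + 1) * poch(2.5, 1 + l - m)
            ) * x**(2*l - 2*m + 1)
        else:
            total += (-1)**m * poch(3, 2*l - m) / (
                gamma(l - m + 1) * gamma(m + 1) * poch(2.5, l - m)
            ) * x**(2*l - 2*m)
    return total

def U(k, x):
    """Second-kind Chebyshev polynomial via the three-term recurrence."""
    a, b = 1.0, 2.0 * x
    if k == 0:
        return a
    for _ in range(k - 1):
        a, b = b, 2.0 * x * b - a
    return b

def E_from_U(k, x):
    """Right-hand sides of the connection formulas (6) and (7)."""
    l, odd = divmod(k, 2)
    if odd:
        return 3.0 / ((2*l + 3) * (2*l + 5)) * sum(
            (-1)**m * (m + 1) * (-2*l + m - 3) * (-l + m - 1)
            * U(2*l - 2*m + 1, x) for m in range(l + 1))
    return 1.5 / (2*l + 3) * sum(
        (-1)**m * (m + 1) * (2*l - m + 2) * U(2*l - 2*m, x)
        for m in range(l + 1))
```

For every $k$ and sample point tried, the two evaluations agree to machine precision, e.g. $E\_3(\xi) = \tfrac{48}{7}\xi^3 - \tfrac{24}{5}\xi$ both ways.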

**Corollary 1.** *The polynomials $E\_k(\xi)$ admit the following trigonometric representations:*

$$E\_{2\ell}(\cos(\theta)) = \frac{1}{8\left(2\ell+3\right)} \left[ 3\csc(\theta)\sec^3(\theta)\left( (\ell+2)\sin(2\,\theta\,(\ell+1)) + (\ell+1)\sin(2\,\theta\,(\ell+2)) \right) \right],\tag{9}$$

$$\begin{split} E\_{2\ell+1}(\cos(\theta)) &= \frac{1}{16\left(2\ell+3\right)\left(2\ell+5\right)} \Big[ 3\csc(\theta)\sec^4(\theta)\Big(\left(\ell+3\right)\left(2\ell+5\right)\sin\left(2\theta\left(\ell+1\right)\right) \\ &\quad + \left(\ell+1\right)\big(4\left(\ell+3\right)\sin\left(2\theta\left(\ell+2\right)\right) + \left(2\ell+3\right)\sin\left(2\theta\left(\ell+3\right)\right)\big)\Big)\Big]. \end{split} \tag{10}$$

**Proof.** Formulas (9) and (10) are direct consequences of the connection Formulas (6) and (7), together with the trigonometric representation of $\mathcal{U}\_k(\xi)$.

The theorem that follows demonstrates the inverse formulas for Formulas (6) and (7).

**Theorem 2.** *The polynomials $\mathcal{U}\_k(\xi)$ have the following connection with the polynomials $E\_k(\xi)$*

$$\begin{split} \mathcal{U}\_{2\ell}(\xi) &= \frac{2\ell+3}{3(\ell+1)}\, E\_{2\ell}(\xi) + \frac{(2\ell+1)^2}{3\,\ell\,(\ell+1)}\, E\_{2\ell-2}(\xi) + \frac{2\ell-1}{3\,\ell}\, E\_{2\ell-4}(\xi), \\ \mathcal{U}\_{2\ell+1}(\xi) &= \frac{2\ell+5}{3(\ell+1)}\, E\_{2\ell+1}(\xi) + \frac{4}{3}\, E\_{2\ell-1}(\xi) + \frac{2\ell-1}{3(\ell+1)}\, E\_{2\ell-3}(\xi). \end{split}$$

**Proof.** In a similar manner to the proof of Theorem 1.

Here, we prove a significant theorem, in which we represent the qth-derivative of *Ek*(*ξ*) as combinations of their original ones.

**Theorem 3.** *The qth-derivative of $E\_j(\xi)$ can be expressed as*

$$\frac{d^q E\_j(\xi)}{d\xi^q} = \sum\_{\ell=0}^{j-q} A\_{\ell,j}^q\, E\_\ell(\xi),$$

*where*

$$A\_{\ell,j}^q = (\ell+3)^5 \sum\_{r=0}^{\lfloor (j-\ell-q)/2 \rfloor} \frac{(-1)^r \epsilon\_{j,q,\ell} (3)\_{j-r} (j-q-2r+1)\_q}{r! \left(\frac{5}{2}\right)\_{\left\lfloor \frac{j+1}{2} \right\rfloor - r} \Gamma\left(-r + \left\lfloor \frac{j}{2} \right\rfloor + 1\right) \left(\frac{1}{2} (j-\ell-q-2r)\right)! (3)\_{\frac{1}{2}(j+\ell-q-2r+2)}},\tag{11}$$

*and*

$$
\epsilon\_{j,q,\ell} = \begin{cases} 1, & \text{if } (j-\ell-q) \text{ even} \\ 0, & \text{otherwise}. \end{cases}
$$

**Proof.** The proof can be found using the results of Lemmas 1 and 2 after some algebraic computations.

#### *3.2. Shifted Eighth-Kind CPs*

For our present purposes, it is useful to define the shifted eighth-kind CPs $E\_{S,n}(\xi)$ on $[0, 1]$ by

$$E\_{S,n}(\xi) = E\_n(2\xi - 1).$$

From (3), it is easy to see that the polynomials $E\_{S,n}(\xi)$, $n \ge 0$, are orthogonal on $[0, 1]$, in the sense that

$$\int\_{0}^{1} E\_{S,n}(\xi)\, E\_{S,m}(\xi)\, w(\xi)\, d\xi = \hat{h}\_{n}\, \delta\_{n,m},\tag{12}$$

where $w(\xi) = (1 - 2\xi)^4 \sqrt{\xi\,(1-\xi)}$ and $\hat{h}\_n = \frac{1}{4}\, h\_n$.

**Remark 2.** *Starting from a certain formula of Ek*(*ξ*)*, we can deduce their counterparts for the shifted CPs. In the following, we present some of these useful formulas.*

**Corollary 2.** *For every non-negative integer j, the polynomials $E\_{S,j}(\xi)$ are linked with the shifted second-kind CPs $\mathcal{U}^\*\_j(\xi)$ as*

$$E\_{S,2\ell}(\xi) = \frac{3}{2(2\ell+3)}\sum\_{m=0}^{\ell}(-1)^m \left(m+1\right) \left(2\ell-m+2\right) \mathcal{U}^\*\_{2\ell-2m}(\xi), \tag{13}$$

$$E\_{S,2\ell+1}(\xi) = \frac{3}{\left(2\ell+3\right)\left(2\ell+5\right)} \sum\_{m=0}^{\ell} \left(-1\right)^{m} \left(m+1\right) \left(-2\ell+m-3\right) \left(-\ell+m-1\right) \mathcal{U}^{\*}\_{2\ell-2m+1}(\xi). \tag{14}$$

**Proof.** When *ξ* is changed to (2*ξ* − 1), it follows directly from Theorem 1.

**Corollary 3.** *The polynomials $\mathcal{U}^\*\_j(\xi)$ are linked with $E\_{S,j}(\xi)$ by*

$$\begin{split} \mathcal{U}^\*\_{2\ell}(\xi) &= \frac{2\ell+3}{3(\ell+1)}\, E\_{S,2\ell}(\xi) + \frac{(2\ell+1)^2}{3\,\ell\,(\ell+1)}\, E\_{S,2\ell-2}(\xi) + \frac{2\ell-1}{3\,\ell}\, E\_{S,2\ell-4}(\xi), \\ \mathcal{U}^\*\_{2\ell+1}(\xi) &= \frac{2\ell+5}{3(\ell+1)}\, E\_{S,2\ell+1}(\xi) + \frac{4}{3}\, E\_{S,2\ell-1}(\xi) + \frac{2\ell-1}{3(\ell+1)}\, E\_{S,2\ell-3}(\xi). \end{split}$$

**Proof.** It follows from Theorem 2 by changing *ξ* to (2*ξ* − 1).

**Theorem 4.** *The power form representation of the polynomial ES*,*i*(*ξ*) *is given as follows*

$$E\_{S,i}(\xi) = \sum\_{p=0}^{i} \mathcal{g}\_{p,i} \xi^p \, , \tag{15}$$

*where*

$$g\_{p,j} = \frac{3}{(2^{p}+1)!} \times \begin{cases} \sum\_{j=1}^{j} \frac{(-1)^{j}}{2} \frac{2^{2-p} \left(\frac{j}{2} - j + 1\right) \left(\frac{j}{2} + j + 2\right)} \Gamma\left(\frac{j+3}{2}\right) (-1)^{j+p} \left(2j + p + 1\right)!}{\Gamma\left(\frac{j+1}{2} + 2\right) (2j - p)!}, & \text{if } i \text{ even},\\\sum\_{j=1}^{\frac{i-1}{2}} \frac{(-1)^{\frac{i+1}{2}} (j+1) \cdot 2^{2-p-3} (i - 2j + 1) \left(\frac{i+5}{2} + j\right) \Gamma\left(\frac{j}{2} + 1\right) (-1)^{j+p} \left(2j + p + 2\right)!}{\Gamma\left(\frac{j}{2} + 3\right) (2j - p + 1)!}, & \text{if } i \text{ odd}. \end{cases} \tag{16}$$

**Proof.** The proof can proceed if we start with the connection formulas of Corollary 2 along with the power form of $\mathcal{U}^\*\_j(\xi)$ given by

$$\mathcal{U}\_{j}^{\*}\left(\xi\right) = \sum\_{r=0}^{j} \frac{(-1)^{r}\, 2^{2(j-r)}\, (2j - r + 1)!}{(2j - 2r + 1)!\, r!}\, \xi^{j-r}.$$

**Theorem 5.** *The inversion formula to the power form representation of the polynomial ES*,*i*(*ξ*) *is given as follows*

$$\xi^m = \sum\_{r=0}^m H\_{r,m}\, E\_{S,r}(\xi),$$

*where*

$$H\_{r,m} = \frac{1}{3} 2^{3-2m} \left( r+3 \right) \left( 2m+1 \right)!$$

$$= \begin{cases} \left\lfloor \frac{m-\ell}{2} \right\rfloor & \left( r+3 \right) \left( 2\ell+r+1 \right)^2 \left( \ell+r \right)! \\ \sum\_{\ell=0}^{\ell} \frac{\left( r+3 \right) \left( 2\ell+r+1 \right)^2 \left( \ell+r \right)!}{\ell! \Gamma \left( 3-\ell \right) \left( \ell+r+3 \right)! \left( -2\ell+m-r \right)! \left( 2\ell+m+r+2 \right)!} & \text{if } r \text{ even}, \\ \sum\_{\ell=0}^{\left\lfloor \frac{m}{2} \left( m-r+1 \right) \right\rfloor} & \frac{\left( r+2 \right) \left( r+4 \right) \left( 2\ell+r+1 \right) \left( \ell+r \right)!}{\ell! \Gamma \left( 3-\ell \right) \left( \ell+r+3 \right)! \left( -2\ell+m-r \right)! \left( 2\ell+m+r+2 \right)!} & \text{if } r \text{ odd}. \end{cases}$$

**Proof.** The proof can proceed if we start with the inversion formula of $\mathcal{U}^\*\_j(\xi)$ together with the connection formulas of Corollary 3.

**Theorem 6.** *The qth-derivative of ES*,*j*(*ξ*) *can be expressed as*

$$\frac{d^q\, E\_{S,j}(\xi)}{d\xi^q} = \sum\_{\ell=0}^{j-q} \mathcal{A}\_{\ell,j}^q\, E\_{S,\ell}(\xi),$$

*where $\mathcal{A}\_{\ell,j}^q = 2^q\, A\_{\ell,j}^q$, and $A\_{\ell,j}^q$ is given in* (11)*.*

**Proof.** It follows from Theorem 3 by changing *ξ* to (2*ξ* − 1).

Now, we give an approximation for the fractional derivatives of the shifted polynomials *ES*,*j*(*t*).

**Theorem 7.** *In the case of* 0 < *α* < 1, *the following approximation holds*

$$\frac{d^{\alpha}E\_{S,j}(t)}{dt^{\alpha}} \approx \sum\_{s=0}^{N} \mathcal{D}\_{s,j} E\_{S,s}(t),$$

*where*

$$\mathcal{D}\_{s,j} = \sum\_{r=0}^{j} g\_{r,j}\, \frac{\Gamma(r+1)}{\Gamma(r+1-\alpha)}\, \rho\_{s},$$

*where $g\_{r,j}$ are given as in* (16)*, and $\rho\_s$ (which also depends on r) is given by*

$$\begin{split} \rho\_s = \frac{1}{\hat{h}\_s} \sum\_{p=0}^{s} g\_{p,s} \Big( \beta\Big( p + r - \alpha + \frac{3}{2}, \frac{3}{2} \Big) - 8\, \beta\Big( p + r - \alpha + \frac{5}{2}, \frac{3}{2} \Big) + 24\, \beta\Big( p + r - \alpha + \frac{7}{2}, \frac{3}{2} \Big) \\ - 32\, \beta\Big( p + r - \alpha + \frac{9}{2}, \frac{3}{2} \Big) + 16\, \beta\Big( p + r - \alpha + \frac{11}{2}, \frac{3}{2} \Big) \Big), \end{split}$$

*and $\beta(x, y) = \frac{\Gamma(x)\, \Gamma(y)}{\Gamma(x+y)}$ is the well-known Beta function [52].*

**Proof.** The application of the operator $\frac{d^{\alpha}}{dt^{\alpha}}$ to $E\_{S,j}(t)$, defined in (15), enables us to obtain the following relation

$$\frac{d^{\alpha} E\_{S,j}(t)}{d\,t^{\alpha}} = \sum\_{r=0}^j g\_{r,j}\, \frac{\Gamma(r+1)}{\Gamma(r+1-\alpha)}\, t^{r-\alpha}.\tag{17}$$

In terms of $E\_{S,s}(t)$, $t^{r-\alpha}$ can be approximated as

$$t^{r-\alpha} \approx \sum\_{s=0}^{N} \rho\_s \, E\_{S,s}(t) \, \tag{18}$$

where *ρ<sup>s</sup>* is determined by means of the orthogonality relation of *ES*,*j*(*t*) defined in (12) as follows

$$\rho\_s = \frac{1}{\hat{h}\_s} \int\_0^1 t^{r-\alpha}\, E\_{S,s}(t)\, w(t)\, dt.$$

The result of Theorem 7 is obtained by substituting Equation (18) into Equation (17).
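Relation (17), the starting point of this proof, can be checked numerically. The sketch below (illustrative only, with our own helper names) expands $E\_{S,k}(t) = E\_k(2t-1)$ into monomials and applies the Caputo power rule term by term; since $E\_{S,1}(t) = 2.4\,t - 1.2$, its half-derivative at $t = 1/4$ has the closed form $2.4/\sqrt{\pi}$.

```python
from math import gamma

def poch(z, n):
    """Pochhammer symbol (z)_n = Gamma(z + n) / Gamma(z)."""
    return gamma(z + n) / gamma(z)

def poly_of_ES(k):
    """Monomial coefficients (low to high) of E_{S,k}(t) = E_k(2t - 1),
    obtained from the analytic formulas (4)-(5)."""
    cx = [0.0] * (k + 1)                  # coefficients of E_k in x
    l, odd = divmod(k, 2)
    for m in range(l + 1):
        if odd:
            cx[2*l - 2*m + 1] = (-1)**m * poch(3, 1 + 2*l - m) / (
                gamma(l - m + 1) * gamma(m + 1) * poch(2.5, 1 + l - m))
        else:
            cx[2*l - 2*m] = (-1)**m * poch(3, 2*l - m) / (
                gamma(l - m + 1) * gamma(m + 1) * poch(2.5, l - m))
    out = [0.0]                           # Horner substitution x = 2t - 1
    for c in reversed(cx):
        new = [0.0] * (len(out) + 1)
        for i, a in enumerate(out):
            new[i] += -a                  # the (-1) part of (2t - 1)
            new[i + 1] += 2 * a           # the 2t part of (2t - 1)
        new[0] += c
        out = new
    return out[:k + 1]

def caputo_ES(k, alpha, t):
    """Term-by-term Caputo derivative of E_{S,k}(t), as in relation (17)."""
    c = poly_of_ES(k)
    return sum(c[r] * gamma(r + 1) / gamma(r + 1 - alpha) * t**(r - alpha)
               for r in range(1, len(c)))

# E_{S,1}(t) = 2.4 t - 1.2, so its Caputo half-derivative is
# 2.4 * Gamma(2)/Gamma(3/2) * t**0.5; at t = 0.25 this equals 2.4/sqrt(pi)
half_deriv_at_quarter = caputo_ES(1, 0.5, 0.25)
```

The projection step (18) would then replace each resulting power $t^{r-\alpha}$ by its truncated expansion in the $E\_{S,s}$ basis with the coefficients $\rho\_s$ above.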

#### **4. A Collocation Approach for the NTFGKE**

This section is confined to presenting a collocation algorithm for handling the NTFGKE based on employing eighth-kind CPs as basis functions.

Consider the following NTFGKE [57]:

$$\begin{split} \frac{\partial^{\alpha}u(\xi,t)}{\partial t^{\alpha}} - \frac{\partial^{5}u(\xi,t)}{\partial \xi^{5}} + \frac{\partial^{3}u(\xi,t)}{\partial \xi^{3}} + u(\xi,t)\, \frac{\partial u(\xi,t)}{\partial \xi} \\ + g\_{1}(\xi,t)\, \frac{\partial u(\xi,t)}{\partial \xi} + g\_{2}(\xi,t)\,u(\xi,t) = g\_{3}(\xi,t), \quad 0 \le \xi, t \le 1, \end{split} \tag{19}$$

governed by the initial and boundary conditions

$$\begin{aligned} u(\xi,0) &= 0, \\ u(0,t) &= \frac{\partial u(0,t)}{\partial \xi} = 0, \\ u(1,t) &= \frac{\partial u(1,t)}{\partial \xi} = \frac{\partial^2 u(1,t)}{\partial \xi^2} = 0,\end{aligned} \tag{20}$$

where 0 < *α* ≤ 1 and *g*1(*ξ*, *t*), *g*2(*ξ*, *t*), and *g*3(*ξ*, *t*) are continuous functions.

Now, one may set

$$\mathcal{P}^N = \operatorname{span}\{E\_{S,i}(\xi)\, E\_{S,j}(t) : 0 \le i, j \le N\},$$

consequently, any function $u^N(\xi, t) \in \mathcal{P}^N$ can be represented as

$$u^{N}(\xi, t) = \sum\_{i=0}^{N} \sum\_{j=0}^{N} c\_{ij}\, E\_{S,i}(\xi)\, E\_{S,j}(t). \tag{21}$$

We can write the residual R(*ξ*, *t*) of Equation (19) as

$$\begin{split} \mathcal{R}(\xi,t) &= \frac{\partial^{\alpha}u^{N}(\xi,t)}{\partial t^{\alpha}} - \frac{\partial^{5}u^{N}(\xi,t)}{\partial \xi^{5}} + \frac{\partial^{3}u^{N}(\xi,t)}{\partial \xi^{3}} + u^{N}(\xi,t)\, \frac{\partial u^{N}(\xi,t)}{\partial \xi} + g\_{1}^{N}(\xi,t)\, \frac{\partial u^{N}(\xi,t)}{\partial \xi} \\ &\quad + g\_{2}^{N}(\xi,t)\,u^{N}(\xi,t) - g\_{3}^{N}(\xi,t). \end{split} \tag{22}$$

The expressions of the partial derivatives $\frac{\partial^{\alpha} u^{N}(\xi,t)}{\partial t^{\alpha}}$, $\frac{\partial u^{N}(\xi,t)}{\partial \xi}$, $\frac{\partial^{3} u^{N}(\xi,t)}{\partial \xi^{3}}$, and $\frac{\partial^{5} u^{N}(\xi,t)}{\partial \xi^{5}}$ in terms of the proposed basis functions are now provided so that the collocation method can be applied. In addition, the expressions for the nonlinear terms $u^{N}(\xi,t)\, \frac{\partial u^{N}(\xi,t)}{\partial \xi}$, $g\_{1}^{N}(\xi,t)\, \frac{\partial u^{N}(\xi,t)}{\partial \xi}$, and $g\_{2}^{N}(\xi,t)\, u^{N}(\xi,t)$ are also provided.

Thanks to (21), along with Theorem 7, we can write $\frac{\partial^{\alpha} u^{N}(\xi,t)}{\partial t^{\alpha}}$ as

$$\frac{\partial^{\alpha} u^{N}(\xi, t)}{\partial t^{\alpha}} \approx \sum\_{i=0}^{N} \sum\_{j=0}^{N} \sum\_{s=0}^{N} c\_{ij}\, \mathcal{D}\_{s,j}\, E\_{S,i}(\xi)\, E\_{S,s}(t). \tag{23}$$

Further, the following partial derivatives can be obtained after using (21) and Theorem 6 to give

$$\begin{split} \frac{\partial \, \boldsymbol{u}^{N}(\boldsymbol{\xi},t)}{\partial \boldsymbol{\xi}} &= \sum\_{i=0}^{N} \sum\_{j=0}^{N} \sum\_{\ell=0}^{i-1} c\_{ij} \mathcal{A}\_{\ell,i}^{1} \, E\_{S,\ell}(\boldsymbol{\xi}) \, E\_{S,j}(t), \\ \frac{\partial^{3} \, \boldsymbol{u}^{N}(\boldsymbol{\xi},t)}{\partial \boldsymbol{\xi}^{3}} &= \sum\_{i=0}^{N} \sum\_{j=0}^{N} \sum\_{\ell=0}^{i-3} c\_{ij} \mathcal{A}\_{\ell,i}^{3} \, E\_{S,\ell}(\boldsymbol{\xi}) \, E\_{S,j}(t), \\ \frac{\partial^{5} \, \boldsymbol{u}^{N}(\boldsymbol{\xi},t)}{\partial \boldsymbol{\xi}^{5}} &= \sum\_{i=0}^{N} \sum\_{j=0}^{N} \sum\_{\ell=0}^{i-5} c\_{ij} \mathcal{A}\_{\ell,i}^{5} \, E\_{S,\ell}(\boldsymbol{\xi}) \, E\_{S,j}(t). \end{split}$$

Furthermore, the nonlinear terms can be written as

$$\begin{split} u^{N}(\xi,t)\, \frac{\partial u^{N}(\xi,t)}{\partial \xi} &= \sum\_{m=0}^{N} \sum\_{n=0}^{N} \sum\_{i=0}^{N} \sum\_{j=0}^{N} \sum\_{\ell=0}^{i-1} c\_{mn}\, c\_{ij}\, \mathcal{A}\_{\ell,i}^{1}\, E\_{S,m}(\xi)\, E\_{S,n}(t)\, E\_{S,\ell}(\xi)\, E\_{S,j}(t), \\ g\_{1}^{N}(\xi,t)\, \frac{\partial u^{N}(\xi,t)}{\partial \xi} &= \sum\_{m=0}^{N} \sum\_{n=0}^{N} \sum\_{i=0}^{N} \sum\_{j=0}^{N} \sum\_{\ell=0}^{i-1} a\_{mn}^{1}\, c\_{ij}\, \mathcal{A}\_{\ell,i}^{1}\, E\_{S,m}(\xi)\, E\_{S,n}(t)\, E\_{S,\ell}(\xi)\, E\_{S,j}(t), \\ g\_{2}^{N}(\xi,t)\, u^{N}(\xi,t) &= \sum\_{m=0}^{N} \sum\_{n=0}^{N} \sum\_{i=0}^{N} \sum\_{j=0}^{N} a\_{mn}^{2}\, c\_{ij}\, E\_{S,m}(\xi)\, E\_{S,n}(t)\, E\_{S,i}(\xi)\, E\_{S,j}(t). \end{split}$$

Further, *g*3(*ξ*, *t*) can be expressed as:

$$g\_{3}^{N}(\xi,t) = \sum\_{m=0}^{N} \sum\_{n=0}^{N} a\_{mn}^{3}\, E\_{S,m}(\xi)\, E\_{S,n}(t),\tag{24}$$

where $\{a^r\_{mn},\ r = 1, 2, 3\}$ is computed from the following relation

$$a\_{mn}^r = \frac{1}{\hat{h}\_m\, \hat{h}\_n} \int\_0^1 \int\_0^1 g\_r(\xi, t)\, E\_{S,m}(\xi)\, E\_{S,n}(t)\, \hat{w}(\xi, t)\, d\xi\, dt,$$

and *w*ˆ(*ξ*, *t*) = *w*(*ξ*) *w*(*t*).

Thanks to relations (23) and (24), the residual R(*ξ*, *t*) in (22) can be obtained. Now, to get the expansion coefficients *cij*, we apply the spectral collocation method by forcing the residual R(*ξ*, *t*) to be zero at some collocation points (*ξi*, *tj*), as follows

$$\mathcal{R}(\xi\_i, t\_j) = 0, \quad 1 \le i \le N - 4, \quad 1 \le j \le N.$$

Moreover, we get the following initial and boundary conditions

$$\begin{split} &\sum\_{i=0}^{N} \sum\_{j=0}^{N} c\_{ij}\, E\_{S,i}(\xi\_{i})\, E\_{S,j}(0) = 0, \quad 1 \le i \le N+1, \\ &\sum\_{i=0}^{N} \sum\_{j=0}^{N} c\_{ij}\, E\_{S,i}(0)\, E\_{S,j}(t\_{j}) = 0, \quad 1 \le j \le N, \\ &\sum\_{i=0}^{N} \sum\_{j=0}^{N} c\_{ij}\, \frac{\partial E\_{S,i}(0)}{\partial \xi}\, E\_{S,j}(t\_{j}) = 0, \quad 1 \le j \le N, \\ &\sum\_{i=0}^{N} \sum\_{j=0}^{N} c\_{ij}\, E\_{S,i}(1)\, E\_{S,j}(t\_{j}) = 0, \quad 1 \le j \le N, \\ &\sum\_{i=0}^{N} \sum\_{j=0}^{N} c\_{ij}\, \frac{\partial E\_{S,i}(1)}{\partial \xi}\, E\_{S,j}(t\_{j}) = 0, \quad 1 \le j \le N, \\ &\sum\_{i=0}^{N} \sum\_{j=0}^{N} c\_{ij}\, \frac{\partial^{2} E\_{S,i}(1)}{\partial \xi^{2}}\, E\_{S,j}(t\_{j}) = 0, \quad 1 \le j \le N, \end{split}$$

where $\{(\xi\_i, t\_j) : i, j = 1, 2, 3, \dots, N+1\}$ denotes the known zeros of $E\_{S,i}(\xi)$ and $E\_{S,j}(t)$, respectively. Therefore, we obtain a nonlinear system of $(N+1) \times (N+1)$ equations that can be solved through a suitable numerical solver, such as Newton's iterative method.

**Remark 3.** *For the case α* = 1, *the NTFGKE becomes*

$$\begin{split} \frac{\partial u(\xi,t)}{\partial t} - \frac{\partial^{5}u(\xi,t)}{\partial \xi^{5}} + \frac{\partial^{3}u(\xi,t)}{\partial \xi^{3}} + u(\xi,t)\, \frac{\partial u(\xi,t)}{\partial \xi} \\ + g\_{1}(\xi,t)\, \frac{\partial u(\xi,t)}{\partial \xi} + g\_{2}(\xi,t)\, u(\xi,t) = g\_{3}(\xi,t), \quad 0 \le \xi, t \le 1. \end{split}$$

*To solve this problem, the first term $\frac{\partial u(\xi,t)}{\partial t}$ can be approximated as:*

$$\frac{\partial u^{N}(\xi, t)}{\partial t} = \sum\_{i=0}^{N} \sum\_{j=0}^{N} \sum\_{\ell=0}^{j-1} c\_{ij}\, \mathcal{A}\_{\ell, j}^{1}\, E\_{S, i}(\xi)\, E\_{S, \ell}(t),$$

*and hence, similar steps to those given in Section 4 yield a nonlinear algebraic system of $(N+1)^2$ equations in the unknown expansion coefficients $c\_{ij}$, which can be solved using Newton's iterative method.*

#### **5. Error Analysis of the Proposed Chebyshev Expansion**

Convergence analysis of the proposed Chebyshev expansion is the main focus of this section.

**Lemma 3.** *For any non-negative integer $\ell$, the following inequality holds:*

$$|E\_{\mathcal{S},\ell}(\xi)| \le (\ell+1)^3, \qquad \forall \, \xi \in [0,1]. \tag{25}$$

**Proof.** Consider the following two cases to prove inequality (25). The first case, $\ell = 2j$:

Using Formula (13) together with the simple inequality $|U^{*}_{j}(\xi)| \le j + 1$, we get

$$\begin{aligned} |E\_{S,\ell}(\xi)| &\leq \frac{3}{2(2j+3)} \sum\_{m=0}^{j} (m+1)\left(2j - m + 2\right)\left(2j - 2\,m + 1\right)^{2} \\ &= \frac{3\left(j+1\right)^{2}\left(j+2\right)^{2}}{4\left(2j+3\right)} \\ &\leq \left(2j+1\right)^{3} = \left(\ell + 1\right)^{3}. \end{aligned}$$

The second case, $\ell = 2j + 1$:

Using Formula (14) and the inequality $|U^{*}_{j}(\xi)| \le j + 1$ yields

$$\begin{aligned} |E\_{S,\ell}(\xi)| &\leq \frac{3}{(2j+3)(2j+5)} \sum\_{m=0}^{j} (m+1)\left(-2j + m - 3\right)\left(-j + m - 1\right)\left(2j - 2\,m + 2\right) \\ &= \frac{1}{5}(j+1)\left(j+2\right)\left(j+3\right) \\ &< \left(2j+2\right)^3 = (\ell+1)^3. \end{aligned}$$

Based on these two cases, the following estimate is valid for every $\ell \ge 0$:

$$|E\_{S, \ell}(\xi)| \le (\ell + 1)^3, \qquad \forall \, \xi \in [0, 1].$$

Lemma 3 is now proven.

**Theorem 8.** *Consider a function $f(\xi) \in L^{2}_{\omega(\xi)}[0, 1]$ whose fifth derivative is bounded. Then $f(\xi)$ can be expanded as an infinite series of the shifted eighth-kind CPs as*

$$f(\xi) = \sum\_{i=0}^{\infty} b\_i \, E\_{S,i}(\xi) \,. \tag{26}$$

*The series in* (26) *converges uniformly to $f(\xi)$. Moreover, the expansion coefficients $b_i$ are estimated as follows:*

$$|b\_i| \lesssim \frac{1}{i^5}, \quad \forall \ i > 4,\tag{27}$$

*and the notation $a \lesssim \bar{a}$ implies the existence of a generic positive constant $n$, independent of $\mathcal{N}$ and of any function, such that $a \le n\,\bar{a}$.*

**Proof.** With the aid of (12), we have

$$b\_i = \frac{1}{h\_i} \int\_0^1 f(\xi)\, E\_{S,i}(\xi)\, (1 - 2\,\xi)^4 \sqrt{\xi\,(1 - \xi)}\, d\xi.$$

After the substitution $\xi = \frac{1}{2}(1 + \cos\theta)$, the last formula transforms into

$$\begin{split} b\_{i} &= \frac{1}{4\hat{h}\_{i}} \int\_{0}^{\pi} f\left(\frac{1}{2}(1+\cos\theta)\right) E\_{S,i}\left(\frac{1}{2}(1+\cos\theta)\right) \cos^{4}\theta \sin^{2}\theta \,d\theta\\ &= \frac{1}{h\_{i}} \int\_{0}^{\pi} f\left(\frac{1}{2}(1+\cos\theta)\right) E\_{i}(\cos\theta) \cos^{4}\theta \sin^{2}\theta \,d\theta. \end{split} \tag{28}$$
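The change of variables behind (28) can be checked numerically. The sketch below (with an arbitrary smooth test function in place of $f$, and Gauss–Legendre quadrature on both sides) compares $\int_0^1 f(\xi)(1-2\xi)^4\sqrt{\xi(1-\xi)}\,d\xi$ with $\frac{1}{4}\int_0^\pi f(\frac{1}{2}(1+\cos\theta))\cos^4\theta\sin^2\theta\,d\theta$:

```python
import numpy as np

# Numerical check (toy f) of the substitution xi = (1 + cos(theta))/2:
# the weighted integral over [0, 1] equals 1/4 of the trigonometric
# integral over [0, pi].
f = lambda x: np.exp(x)  # any smooth test function

nodes, weights = np.polynomial.legendre.leggauss(200)

# Left side: integral over xi in [0, 1] (affine map from [-1, 1]).
x = 0.5 * (nodes + 1.0)
left = 0.5 * np.sum(weights * f(x) * (1 - 2 * x)**4 * np.sqrt(x * (1 - x)))

# Right side: integral over theta in [0, pi] (affine map from [-1, 1]).
t = 0.5 * np.pi * (nodes + 1.0)
right = 0.25 * (0.5 * np.pi) * np.sum(
    weights * f(0.5 * (1 + np.cos(t))) * np.cos(t)**4 * np.sin(t)**2)
```

The two values agree to quadrature accuracy, confirming that the weight $(1-2\xi)^4\sqrt{\xi(1-\xi)}$ turns into $\cos^4\theta\sin^2\theta$ (times the Jacobian) under this substitution.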

Now, consider the following two cases to prove Inequality (27): Case 1: *i* even:

Based on Corollary 1, Equation (28) can be converted into

$$\begin{aligned} b\_i &= \frac{4\left(i+3\right)}{3\left(\pi\left(i+2\right)\left(i+4\right)\right)} \int\_0^\pi f\left(\frac{1}{2}(1+\cos\theta)\right) \\ &\qquad \times \sin(2\,\theta) \left[ \left(i+4\right)\sin(\theta\left(i+2\right)) + \left(i+2\right)\sin(\theta\left(i+4\right)) \right] d\theta. \end{aligned}$$

Integration of the right-hand side of the last equation by parts yields

$$\begin{split} b\_{i} = \frac{(i+3)}{6\pi \left(i+2\right)\left(i+4\right)} \int\_{0}^{\pi} f'\left(\frac{1}{2}\left(1+\cos\theta\right)\right) \left[\frac{(i+4)}{i}\cos\left(\theta\left(i-1\right)\right) - \frac{4}{i}\cos\left(\theta\left(i+1\right)\right) \\ -2\cos\left(\theta\left(i+3\right)\right) - \frac{2}{\left(i+6\right)}\cos\left(\theta\left(i+5\right)\right) + \frac{(i+2)}{\left(i+6\right)}\cos\left(\theta\left(i+7\right)\right) \right] d\theta. \end{split} \tag{29}$$

Similarly, if we integrate the right-hand side of Equation (29), again by parts, four times, we get

$$b\_i = \frac{\left(i+3\right)}{1536\,\pi\left(i+2\right)\left(i+4\right)} \int\_0^\pi f^{(5)}\left(\frac{1}{2}(1+\cos\theta)\right) \Delta\_i(\theta) \,d\theta, \tag{30}$$

where

$$\begin{aligned} \Delta\_i(\theta) &= \frac{(i+4)}{(i-4)\_3} \cos\left(\theta\,(i-5)\right) - \frac{4\,(i+6)}{(i-2)\_3\,(i-4)\,(i+1)} \cos(\theta\,(i-3)) \\ &\quad + \frac{4\,(i+18)}{(i-3)\_3\,(i)\_2\,(i+3)} \cos\left(\theta\,(i-1)\right) \\ &\quad + \frac{2\,(-2880 - 5318\,i - 3001\,i^2 - 520\,i^3 - 35\,i^4 + 2\,i^5)}{(i-2)\_3\,(i)\_2\,(i+4)\,(i+6)} \cos(\theta\,(i+1)) \\ &\quad + \frac{372960 + 509208\,i + 229640\,i^2 + 50713\,i^3 + 3921\,i^4 - 358\,i^5 - 90\,i^6 - 5\,i^7}{(i-1)\_3\,(i+4)\,(i+6)} \cos(\theta\,(i+3)) \\ &\quad + \frac{-489840 - 189504\,i + 118560\,i^2 + 10850\,i^3 + 31971\,i^4 + 5007\,i^5 + 364\,i^6 + i^7 - i^8}{(i)\_3\,(i+4)\,(i+6)^2} \cos(\theta\,(i+5)) \\ &\quad + \frac{-21168 - 8640\,i - 621\,i^2 + 307\,i^3 + 68\,i^4 + 4\,i^5}{(i+3)\_7\,(i+6)} \cos(\theta\,(i+7)) \\ &\quad - \frac{2\,(30 + 15\,i + 2\,i^2)}{(i+5)\_4\,(i+6)\,(i+10)} \cos(\theta\,(i+9)) + \frac{(i+2)}{(i+6)\_5} \cos(\theta\,(i+11)). \end{aligned}$$

Note that the notation (*z*)*<sup>r</sup>* represents the well-known Pochhammer symbol. If we take the absolute value for Equation (30) and use the hypothesis of the theorem, we get the following estimation

$$|b\_{i}| \lesssim \frac{1}{i^{5}}, \quad \forall \, i > 4.$$

Case 2: *i* odd:

In virtue of Corollary 1, Equation (28) turns into

$$\begin{split} b\_{i} &= \frac{4}{3\pi\,(i+1)(i+5)} \int\_{0}^{\pi} f\Big(\frac{1}{2}(1+\cos\theta)\Big) \sin(\theta) \\ &\quad \times \Big[ (i+4)(i+5)\sin(\theta\,(i+1)) + 2\,(i+1)(i+5)\sin(\theta\,(i+3)) + (i+1)(i+2)\sin(\theta\,(i+5)) \Big]\, d\theta \\ &= \frac{2}{3\pi\,(i+1)(i+5)} \int\_{0}^{\pi} f\Big(\frac{1}{2}(1+\cos\theta)\Big) \\ &\quad \times \Big[ (i+4)(i+5)\cos(i\,\theta) + (i-2)(i+5)\cos(\theta\,(i+2)) - (i+1)(i+8)\cos(\theta\,(i+4)) \\ &\qquad - (i+1)(i+2)\cos(\theta\,(i+6)) \Big]\, d\theta. \end{split} \tag{31}$$

On the right-hand side of (31), we can use integration by parts to write

$$\begin{aligned} b\_{i} &= \frac{1}{6\pi\,(i+1)(i+5)} \int\_{0}^{\pi} f'\Big(\frac{1}{2}(1+\cos\theta)\Big) \Big[ \frac{(i+4)(i+5)}{i}\cos(\theta\,(i-1)) \\ &\quad - \frac{8\,(i+1)(i+5)}{i\,(i+2)}\cos(\theta\,(i+1)) - \frac{2\,(-12 + 14\,i + 9\,i^{2} + i^{3})}{(i+2)(i+4)}\cos(\theta\,(i+3)) \\ &\quad + \frac{8\,(i+1)(i+5)}{(i+4)(i+6)}\cos(\theta\,(i+5)) + \frac{(i+1)(i+2)}{(i+6)}\cos(\theta\,(i+7)) \Big]\, d\theta. \end{aligned}$$

Integrating again by parts four times and using the hypothesis of the theorem after taking the absolute value, one has

$$|b\_i| \lesssim \frac{1}{i^5}, \quad \forall \, i > 4.$$

Finally, Cases 1 and 2 enable us to write

$$|b\_{i}| \lesssim \frac{1}{i^{5}}, \quad \forall \, i > 4.$$

With this, Theorem 8 is fully proven.
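The coefficient decay predicted by Theorem 8 is easy to observe numerically. As an illustration, the sketch below expands a smooth function in first-kind Chebyshev polynomials via NumPy (the eighth-kind family $E_{S,i}$ is not available in standard libraries); since the test function is analytic, its coefficients decay even faster than the $1/i^5$ rate guaranteed for functions with merely a bounded fifth derivative:

```python
import numpy as np

# Coefficients of the degree-40 Chebyshev (first-kind) interpolant of a
# smooth function, computed at Chebyshev points; for analytic f the
# magnitudes |c_i| decay geometrically, comfortably beating O(1/i^5).
f = lambda x: np.exp(x) * np.sin(3.0 * x)
coef = np.polynomial.chebyshev.chebinterpolate(f, 40)
mag = np.abs(coef)   # mag[i] shrinks rapidly with i
```

Plotting `mag` on a log scale against `i` makes the decay rate visible at a glance.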

**Theorem 9.** *Any function $u(\xi, t) = g_1(\xi)\, g_2(t) \in L^{2}_{\hat{w}(\xi,t)}\big([0,1]^2\big)$, where $g_1(\xi)$ and $g_2(t)$ have bounded fifth derivatives, can be expanded as:*

$$u(\xi, t) = \sum\_{i=0}^{\infty} \sum\_{j=0}^{\infty} c\_{ij}\, E\_{S,i}(\xi)\, E\_{S,j}(t). \tag{32}$$

*The aforementioned series is uniformly convergent. Moreover, the expansion coefficients in* (32) *satisfy:*

$$|c\_{ij}| \lesssim \frac{1}{i^5 j^5}, \quad \forall \ i, j > 4.$$

**Proof.** The orthogonality relation of *ES*,*i*(*ξ*) allows one to get

$$c\_{ij} = \frac{1}{\hat{h}\_i\, \hat{h}\_j} \int\_0^1 \int\_0^1 u(\xi, t)\, E\_{S,i}(\xi)\, E\_{S,j}(t)\, \hat{w}(\xi, t)\, d\xi\, dt.$$

By the hypotheses of Theorem 9, we get

$$\begin{split} c\_{ij} &= \frac{1}{\hat{h}\_{i}} \left( \int\_{0}^{1} (1 - 2\,\xi)^{4} \sqrt{\xi\,(1 - \xi)}\, g\_{1}(\xi)\, E\_{S,i}(\xi)\, d\xi \right) \\ &\quad \times \frac{1}{\hat{h}\_{j}} \left( \int\_{0}^{1} (1 - 2\,t)^{4} \sqrt{t\,(1 - t)}\, g\_{2}(t)\, E\_{S,j}(t)\, dt \right). \end{split}$$

With the aid of the two substitutions $\xi = \frac{1}{2}(1 + \cos\phi)$ and $t = \frac{1}{2}(1 + \cos\psi)$, the last equation transforms into

$$\begin{aligned} c\_{ij} &= \frac{1}{h\_i} \int\_0^\pi g\_1\left( \frac{1}{2}(1+\cos\phi) \right) E\_i(\cos\phi)\, \cos^4\phi\, \sin^2\phi\, d\phi \\ &\quad \times \frac{1}{h\_j} \int\_0^\pi g\_2\left( \frac{1}{2}(1+\cos\psi) \right) E\_j(\cos\psi)\, \cos^4\psi\, \sin^2\psi\, d\psi. \end{aligned}$$

Now, we consider the following four cases, determined by the parities of *i* and *j* (both even; *i* even and *j* odd; *i* odd and *j* even; both odd):


Imitating similar steps as given in Theorem 8 in the previous four cases, we get the following result

$$|c\_{ij}| \lesssim \frac{1}{i^5 j^5}, \quad \forall \ i, j > 4.$$

**Remark 4.** *The following inequalities can be easily obtained after imitating similar steps as in Theorems 8 and 9*

$$|c\_{i0}| \lesssim \frac{1}{i^5}, \quad |c\_{i1}| \lesssim \frac{1}{i^5}, \quad |c\_{i2}| \lesssim \frac{1}{i^5}, \quad |c\_{i3}| \lesssim \frac{1}{i^5}, \quad |c\_{i4}| \lesssim \frac{1}{i^5}, \quad \forall\ i > 4, \tag{33}$$

*and*

$$|c\_{0j}| \lesssim \frac{1}{j^5}, \quad |c\_{1j}| \lesssim \frac{1}{j^5}, \quad |c\_{2j}| \lesssim \frac{1}{j^5}, \quad |c\_{3j}| \lesssim \frac{1}{j^5}, \quad |c\_{4j}| \lesssim \frac{1}{j^5}, \quad \forall\ j > 4. \tag{34}$$

**Theorem 10.** *If $u(\xi, t)$ fulfills the assumptions of Theorem 9, and if $u^{\mathcal{N}}(\xi, t) = \sum_{i=0}^{\mathcal{N}} \sum_{j=0}^{\mathcal{N}} c_{ij}\, E_{S,i}(\xi)\, E_{S,j}(t)$, then the following truncation error estimate applies*

$$|u(\xi,t) - u^{\mathcal{N}}(\xi,t)| \lesssim \frac{1}{\mathcal{N}}.$$

**Proof.** The truncation error can be expressed as:


Inserting Equations (33) and (34) into Equation (35) and using Lemma 3 along with the following approximation

$$\sum\_{i=a+1}^{b} f(i) \le \int\_{a}^{b} f(\xi)\, d\xi,$$

where *f* is a decreasing function, and the inequality:

$$\frac{(i+1)^3}{i^5} < \frac{i+5}{i\left(i^2-1\right)}, \quad \forall \ i > 1,$$

one has

$$|u(\xi, t) - u^{\mathcal{N}}(\xi, t)| \lesssim \frac{1}{\mathcal{N}}.$$

With this, the theorem is proven.
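Both ingredients of the proof, the integral comparison for a decreasing $f$ and the polynomial inequality, can be spot-checked with a few lines of plain Python (our own sanity check, not part of the proof):

```python
# Check the integral comparison with f(x) = 1/x^5 (decreasing on [a, b]),
# using the exact antiderivative -1/(4 x^4), and the polynomial inequality
# (i+1)^3 / i^5 < (i+5) / (i (i^2 - 1)) over a range of i > 1.
f = lambda x: x**-5.0

a, b = 3, 200
lhs = sum(f(i) for i in range(a + 1, b + 1))
rhs = (-1.0 / (4 * b**4)) - (-1.0 / (4 * a**4))
check_integral = lhs <= rhs

check_poly = all((i + 1)**3 / i**5 < (i + 5) / (i * (i**2 - 1))
                 for i in range(2, 5001))
```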

**Theorem 11.** *If u*(*ξ*, *t*) *fulfills the assumptions of Theorem 9, then the following estimation applies:*

$$\left\| u(\xi,t) - u^{\mathcal{N}}(\xi,t) \right\|\_{\hat{w}(\xi,t)} \lesssim \frac{1}{\mathcal{N}^{4}}. \tag{36}$$

**Proof.** We have

$$\begin{aligned} \left\| u(\xi,t) - u^{\mathcal{N}}(\xi,t) \right\|\_{\hat{w}(\xi,t)} &= \Bigg\| \sum\_{i=0}^{\infty} \sum\_{j=0}^{\infty} c\_{ij}\, E\_{S,i}(\xi)\, E\_{S,j}(t) - \sum\_{i=0}^{\mathcal{N}} \sum\_{j=0}^{\mathcal{N}} c\_{ij}\, E\_{S,i}(\xi)\, E\_{S,j}(t) \Bigg\|\_{\hat{w}(\xi,t)} \\ &\le \sum\_{j=\mathcal{N}+1}^{\infty} \Big( |c\_{0j}|\, \|E\_{S,0}(\xi)\|\_{w(\xi)} + |c\_{1j}|\, \|E\_{S,1}(\xi)\|\_{w(\xi)} + |c\_{2j}|\, \|E\_{S,2}(\xi)\|\_{w(\xi)} \\ &\qquad + |c\_{3j}|\, \|E\_{S,3}(\xi)\|\_{w(\xi)} + |c\_{4j}|\, \|E\_{S,4}(\xi)\|\_{w(\xi)} \Big)\, \|E\_{S,j}(t)\|\_{w(t)} \\ &\quad + \sum\_{i=\mathcal{N}+1}^{\infty} \Big( |c\_{i0}|\, \|E\_{S,0}(t)\|\_{w(t)} + |c\_{i1}|\, \|E\_{S,1}(t)\|\_{w(t)} + |c\_{i2}|\, \|E\_{S,2}(t)\|\_{w(t)} \\ &\qquad + |c\_{i3}|\, \|E\_{S,3}(t)\|\_{w(t)} + |c\_{i4}|\, \|E\_{S,4}(t)\|\_{w(t)} \Big)\, \|E\_{S,i}(\xi)\|\_{w(\xi)} \\ &\quad + \sum\_{i=5}^{\mathcal{N}} \sum\_{j=\mathcal{N}+1}^{\infty} |c\_{ij}|\, \|E\_{S,i}(\xi)\|\_{w(\xi)}\, \|E\_{S,j}(t)\|\_{w(t)} \\ &\quad + \sum\_{i=\mathcal{N}+1}^{\infty} \sum\_{j=5}^{\infty} |c\_{ij}|\, \|E\_{S,i}(\xi)\|\_{w(\xi)}\, \|E\_{S,j}(t)\|\_{w(t)}. \end{aligned}$$

With the aid of Theorem 9, Remark 4, and the following inequalities

$$\begin{aligned} &\|E\_{S,i}(\xi)\|\_{w(\xi)} \lesssim 1, \\ &\|E\_{S,j}(t)\|\_{w(t)} \lesssim 1, \\ &\sum\_{i=\mathcal{N}+1}^{\infty} \frac{1}{i^5} < \frac{1}{\mathcal{N}^4}, \quad \forall\, \mathcal{N} > 1, \\ &\sum\_{i=5}^{\mathcal{N}} \frac{1}{i^5} < \frac{1}{1024}, \quad \forall\, \mathcal{N} > 1, \end{aligned}$$

we get the desired result (36).
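The two partial-sum bounds used above can likewise be spot-checked directly (truncating the infinite tail at a large cutoff; our own sanity check):

```python
# Verify sum_{i=N+1}^inf 1/i^5 < 1/N^4 for a range of N, and
# sum_{i=5}^N 1/i^5 < 1/1024, truncating infinite tails at a large cutoff
# (the neglected remainder is far below the comparison level).
def tail(N, cutoff=20000):
    return sum(1.0 / i**5 for i in range(N + 1, cutoff))

ok_tail = all(tail(N) < 1.0 / N**4 for N in range(2, 60))
ok_head = sum(1.0 / i**5 for i in range(5, 60)) < 1.0 / 1024
```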

#### **6. Illustrative Examples**

This section is devoted to testing the performance of our proposed collocation algorithm for treating the NTFGKE. Some test problems are solved, and some comparisons are presented to check the applicability and accuracy of our proposed scheme.

**Example 1** ([57])**.** *Consider the following NTFGKE:*

$$\begin{split} \frac{\partial^{\alpha} u(\xi,t)}{\partial t^{\alpha}} - \frac{\partial^{5} u(\xi,t)}{\partial \xi^{5}} + \frac{\partial^{3} u(\xi,t)}{\partial \xi^{3}} + u(\xi,t)\, \frac{\partial u(\xi,t)}{\partial \xi} \\ + \left(\xi^2 t + 1\right) \frac{\partial u(\xi,t)}{\partial \xi} + \left(\xi - t\right) u(\xi,t) = f(\xi,t), \quad 0 \le \xi, t \le 1, \end{split}$$

*governed by* (20)*, and $f(\xi, t)$ is determined in such a way that the exact solution is $u(\xi, t) = t^{1+\alpha}\, \xi^{2} \left( \frac{\xi^{3}}{6} - \frac{\xi^{2}}{2} + \frac{\xi}{2} - \frac{1}{6} \right)$.*

*Table 1 presents a comparison of the maximum absolute errors between our method for* N = 16 *and the method in [57] at different values of ξ when* 0 < *t* < 1*. This shows the accuracy of our method. Figures 1 and 2 show the absolute error and approximate solution at different values of α for* N = 16*. It can be seen that the approximate solutions are quite close to the exact ones.*

**Table 1.** Comparison of maximum absolute errors for 0 < *t* < 1 of Example 1.


**Figure 1.** The absolute error and approximate solution at *α* = 0.95 and N = 16 of Example 1.

**Figure 2.** The absolute error and approximate solution at *α* = 0.85 and N = 16 of Example 1.

**Example 2.** *Consider the following NTFGKE:*

$$\begin{split} \frac{\partial^{\alpha} u(\xi,t)}{\partial t^{\alpha}} - \frac{\partial^{5} u(\xi,t)}{\partial \xi^{5}} + \frac{\partial^{3} u(\xi,t)}{\partial \xi^{3}} + u(\xi,t)\, \frac{\partial u(\xi,t)}{\partial \xi} \\ + g\_{1}(\xi,t)\, \frac{\partial u(\xi,t)}{\partial \xi} + g\_{2}(\xi,t)\, u(\xi,t) = f(\xi,t), \quad 0 \le \xi, t \le 1, \end{split} \tag{37}$$

*governed by* (20)*, and $f(\xi, t)$ is determined in such a way that the exact solution is $u(\xi, t) = \frac{1}{12}\, t^{1+\alpha}\, \xi^{2} \left( \xi^{4} - 2\,\xi^{3} + 2\,\xi - 1 \right)$.*

*Equation* (37) *is solved in two cases corresponding to g*1(*ξ*, *t*) = 1, *g*2(*ξ*, *t*) = 0 *and g*1(*ξ*, *t*) = 0, *g*2(*ξ*, *t*) = 1*.*

*Case 1: At g*1(*ξ*, *t*) = 1 *and g*2(*ξ*, *t*) = 0. *Table 2 presents a comparison of the maximum absolute errors between our method for* N = 16 *and the method in [57] at different values of ξ when* 0 < *t* < 1. *This shows the accuracy of our method. Further, Figure 3 illustrates the absolute error at different values of α for* N = 16*.*

*Case 2: At g*1(*ξ*, *t*) = 0 *and g*2(*ξ*, *t*) = 1. *Table 3 presents the absolute errors at different values of α for* N = 16. *Figure 4 illustrates the absolute errors at different values of t at α* = 0.95 *and* N = 16. *Figure 5 presents a comparison between the approximate and exact solutions at α* = 0.9 *and* N = 16. *It can be seen that the approximate solution is quite close to the exact one.*

**Table 2.** Comparison of the maximum absolute errors for 0 < *t* < 1 of Example 2.


**Figure 3.** The absolute error at different values of *α* for N = 16 of Example 2.


**Table 3.** The absolute errors of Example 2.

**Figure 4.** The absolute errors at *α* = 0.95 and N = 16 of Example 2.

**Figure 5.** The exact and approximate solutions at *α* = 0.9 and N = 16 of Example 2.

**Example 3.** *Consider the following NTFGKE:*

$$\frac{\partial^{\alpha} u(\xi,t)}{\partial t^{\alpha}} - \frac{\partial^{5} u(\xi,t)}{\partial \xi^{5}} + \frac{\partial^{3} u(\xi,t)}{\partial \xi^{3}} + u(\xi,t)\, \frac{\partial u(\xi,t)}{\partial \xi} + u(\xi,t) = f(\xi,t), \quad 0 \le \xi, t \le 1,$$

*governed by* (20)*, and $f(\xi, t)$ is chosen such that the exact solution is $u(\xi, t) = \xi^{2} \left( \xi^{4} - 2\,\xi^{3} + 2\,\xi - 1 \right) \sin(2\,\pi\,\alpha\,t)$.*

*Figure 6 illustrates the* log10*(maximum absolute error) at different values of α and* N *. Table 4 presents the absolute errors at different values of ξ and t when α* = 0.5 *and* N = 16*. Further, Figure 7 illustrates the absolute error at different values of α for* N = 16*.*

**Figure 6.** log10(maximum absolute error) of Example 3.

**Table 4.** The absolute errors of Example 3.


**Figure 7.** The absolute error at different values of *α* for N = 16 of Example 3.

#### **7. Concluding Remarks**

To summarize the principal findings of our study, we present the following insights: Our research encompasses the introduction, implementation, and thorough investigation of a spectral collocation methodology tailored to address the NTFGKE. An in-depth exploration and analysis of convergence are undertaken. Additionally, the outcomes of our work are substantiated through diverse numerical test scenarios. We anticipate that this approach will find application in upcoming endeavors aimed at addressing even more intricate models within the realm of partial differential equations.

In conclusion, the effective utilization of eighth-kind CPs in conjunction with the collocation method is demonstrated through our application to solve the NTFGKE. This showcases the prowess of our spectral approach, affirming its potential for tackling complex mathematical challenges.

**Author Contributions:** Conceptualization, W.M.A.-E.; Methodology, W.M.A.-E., Y.H.Y. and A.G.A.; Software, Y.H.Y. and A.G.A.; Validation, A.G.A.; Formal analysis, Y.H.Y. and A.G.A.; Resources, W.M.A.-E.; Data curation, Y.H.Y. and A.G.A.; Writing—Original draft, W.M.A.-E., Y.H.Y. and A.G.A.; Writing—Review & editing, W.M.A.-E. and A.K.A.; Supervision, W.M.A.-E.; Project administration, W.M.A.-E. and A.K.A.; Funding acquisition, A.K.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** The third author, Amr Kamel Amin (akgadelrab@uqu.edu.sa), is funded by the Deanship for Research & Innovation, Ministry of Education in Saudi Arabia.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors extend their appreciation to the Deanship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number: IFP22UQU4331287DSR038.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

**Jianqiang Sun \*, Siqi Yang and Lijuan Zhang**

School of Mathematics and Statistics, Hainan University, Haikou 570228, China; yang3765775@163.com (S.Y.); zl973289773@163.com (L.Z.)

**\*** Correspondence: 992539@hainanu.edu.cn

**Abstract:** The Riesz space-fractional derivative is discretized by the Fourier pseudo-spectral (FPS) method. The Riesz space-fractional nonlinear Klein–Gordon–Zakharov (KGZ) and Klein–Gordon– Schrödinger (KGS) equations are transformed into two infinite-dimensional Hamiltonian systems, which are discretized by the FPS method. Two finite-dimensional Hamiltonian systems are thus obtained and solved by the second-order average vector field (AVF) method. The energy conservation property of these new discrete schemes of the fractional KGZ and KGS equations is proven. These schemes are applied to simulate the evolution of two fractional differential equations. Numerical results show that these schemes can simulate the evolution of these fractional differential equations well and maintain the energy-preserving property.

**Keywords:** AVF method; Riesz space-fractional KGZ equation; Riesz space-fractional KGS equation; FPS method

**AMS Subject Classification:** 37K05; 65M20; 65M70

#### **1. Introduction**

Fractional differential equations can describe the behavior of physical phenomena better than integer-order differential equations, and many scholars have taken great interest in studying fractional differential equations and the theory of the fractional derivative. In general, fractional differential equations do not admit exact solutions, so numerical simulation of fractional nonlinear differential equations has become very important. Many different numerical methods have been proposed to solve fractional nonlinear partial differential equations (PDEs) [1–4]. In this paper, we numerically investigate the following Riesz space fractional KGZ and KGS equations by the energy-preserving method.

The space fractional KGZ equation can be written as [5–8]

$$\begin{cases} \frac{\partial^2 u(\mathbf{x},t)}{\partial t^2} - \frac{\partial^a u(\mathbf{x},t)}{\partial |\mathbf{x}|^a} + u(\mathbf{x},t) + m(\mathbf{x},t)u(\mathbf{x},t) + |u(\mathbf{x},t)|^2 u(\mathbf{x},t) = 0, \\\ \frac{\partial^2 m(\mathbf{x},t)}{\partial t^2} - \frac{\partial^2 m(\mathbf{x},t)}{\partial \mathbf{x}^2} - \frac{\partial^2 (|u(\mathbf{x},t)|^2)}{\partial \mathbf{x}^2} = 0, \end{cases} \tag{1}$$

which describes the propagation of Langmuir waves in plasma physics. Suppose the finite domain is $\Omega = (a,b) \times (0,T)$, with the initial conditions $u(x,0) = u_0(x)$, $u_t(x,0) = u_1(x)$, $m(x,0) = m_0(x)$, $m_t(x,0) = m_1(x)$, $x \in [a,b]$, and the boundary conditions $u(a,t) = u(b,t) = 0$, $m(a,t) = m(b,t) = 0$, $t \in [0,T]$, where $\frac{\partial^{\alpha} u(x,t)}{\partial |x|^{\alpha}}$ is the Riesz space fractional derivative with $1 \le \alpha \le 2$.

The KGZ equation can have the following invariant energy conservation

$$E(t) = \int \Big[ |u\_t|^2 + \Big| \frac{\partial^{\frac{\alpha}{2}} u}{\partial |x|^{\frac{\alpha}{2}}} \Big|^2 + |u|^2 + m |u|^2 + \frac{1}{2} v^2 + \frac{1}{2} |u|^4 + \frac{1}{2} m^2 \Big] dx = E(0), \tag{2}$$

**Citation:** Sun, J.; Yang, S.; Zhang, L. Energy-Preserving AVF Methods for Riesz Space-Fractional Nonlinear KGZ and KGS Equations. *Fractal Fract.* **2023**, *7*, 711. https://doi.org/ 10.3390/fractalfract7100711

Academic Editors: Ricardo Almeida, Libo Feng, Lin Liu and Yang Liu

Received: 27 June 2023 Revised: 25 August 2023 Accepted: 3 September 2023 Published: 27 September 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

where $\frac{\partial v(x,t)}{\partial x} = -\frac{\partial m(x,t)}{\partial t}$. When *α* = 2, the Riesz space fractional KGZ equation becomes the classical KGZ equation [9–11]. Ma et al. proposed a multi-scale integrator method to solve the KGZ equation [12]. Wang et al. solved the KGZ equation based on the lattice Boltzmann model [13]. Mei et al. proposed the Galerkin finite element method to solve the KGZ equation [14]. A lot of work has also been conducted on the Riesz space fractional KGZ equation [8,15]. Macias et al. proposed an energy-conserving scheme to solve the fractional KGZ equation [15–19].

The fractional KGS equation can be written in the following form:

$$\begin{cases} i f\_t - \beta\, (-\triangle)^{\frac{\alpha}{2}} f + \alpha\_1 f h = 0, \\ h\_{tt} - \gamma h\_{xx} + m^2 h - \alpha\_1 |f|^2 = 0, \end{cases} \tag{3}$$

where $i = \sqrt{-1}$. The initial conditions are $f(x,0) = f_0(x)$, $h_t(x,0) = h_1(x)$, $x \in R$, and the boundary conditions are $f(x,t) = h(x,t) = 0$, $x \in \partial\Omega$, $\Omega = (x_L, x_R)$, $t \in [0,T]$. The complex function $f(x,t)$ is the complex neutron field and the real function $h(x,t)$ is the meson field. The variables $\alpha_1$ and $\beta$ are the coupling constants.

The fractional KGS equation also has invariant energy conservation:

$$\tilde{E}(t) = \int\_{-\infty}^{+\infty} \left[ h\_t^2 + \gamma h\_x^2 + m^2 h^2 + \beta\, |(-\triangle)^{\frac{\alpha}{4}} f|^2 - 2\,\alpha\_1 |f|^2 h \right] dx = \tilde{E}(0). \tag{4}$$

When *α* = 2 and *β* = *γ* = 1, Equation (3) is the classical coupled KGS equation. In quantum field theory, the coupled KGS equation is a mathematical model for the interaction of a conservation complex neutron field and a real meson field. Regarding the coupled KGS equation, many important conclusions have been obtained. Ohta et al. analyzed the stability of the coupled KGS equation [20]. Guo et al. investigated the global well-posedness of the KGS equation [21]. Kong et al. proposed a symplectic method to solve the coupled KGS equation [22]. The fractional KGS equation is an extension of the classical coupled KGS equation. Wang et al. proposed to solve the Riesz space fractional KGS equation using the difference method and the spectral method [23,24].

Recently, energy preserving methods have become important numerical methods in simulating energy conservation nonlinear PDEs, and they are structure-preserving numerical methods. Many different energy preserving methods have been derived, such as the discrete gradient method [25,26], the discrete variational derivative method [27] and the Hamiltonian boundary value method [28]. The AVF method, which is a kind of discrete gradient method, has been widely used to solve energy conservation integral PDEs and has achieved great success [29,30]. However, few people have applied the AVF method to solve energy conservation fractional PDEs. In this paper, we apply the AVF method to solve Riesz space fractional KGZ and KGS equations.
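The AVF idea can be sketched on a toy Hamiltonian system before meeting the KGZ and KGS schemes of Section 3. For $\dot{z} = J\nabla H(z)$, one AVF step replaces $\nabla H$ by its average along the chord from $z^n$ to $z^{n+1}$, which conserves $H$ exactly. Below is a minimal sketch for the pendulum $H(q,p) = p^2/2 - \cos q$ (our own illustrative example, not one of the schemes derived later; the chord average of $\nabla H$ happens to be available in closed form here):

```python
import numpy as np

# One AVF step:  z^{n+1} = z^n + tau * J * int_0^1 grad H((1-s) z^n + s z^{n+1}) ds,
# solved by fixed-point iteration. The chord average of H_q = sin(q) is
# (cos q^n - cos q^{n+1}) / (q^{n+1} - q^n).
def avf_step(q, p, tau, sweeps=60):
    q_new, p_new = q, p
    for _ in range(sweeps):                    # fixed point of the implicit step
        q_new = q + tau * (p + p_new) / 2.0    # averaged H_p = (p^n + p^{n+1})/2
        dq = q_new - q
        gbar = np.sin(q) if abs(dq) < 1e-14 else (np.cos(q) - np.cos(q_new)) / dq
        p_new = p - tau * gbar
    return q_new, p_new

H = lambda q, p: 0.5 * p * p - np.cos(q)
q, p, tau = 1.2, 0.0, 0.05
E0 = H(q, p)
for _ in range(2000):
    q, p = avf_step(q, p, tau)
drift = abs(H(q, p) - E0)   # stays at solver/round-off level over 2000 steps
```

The exact conservation follows because $H(z^{n+1}) - H(z^n) = \tau\, \bar{\nabla}H^{T} J \bar{\nabla}H = 0$ for the skew-symmetric $J$, the same mechanism exploited for the fractional KGZ and KGS schemes.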

The rest of the paper is organized as follows. In Section 2, the definition and properties of the Riesz fractional derivative are given. The Riesz fractional derivative is discretized by the FPS method. In Section 3, the infinite Hamiltonian symplectic structures of the Riesz space fractional KGZ and KGS equations are obtained. These Hamiltonian systems are discretized by the FPS method and the second-order AVF method. Two energy preserving schemes of the Riesz space fractional KGZ and KGS equations are thus obtained. At last, numerical experiments confirm the advantage of the energy preserving schemes of the Riesz space fractional KGZ and KGS equations in simulating solitary wave behavior and preserving the energy conservation property of the equations. The new energy preserving schemes are superior to the existing second-order energy preserving schemes.

#### **2. Discretization of the Riesz Space Fractional Derivative**

**Definition 1.** *When n* − 1 ≤ *α* ≤ *n, n is a positive integer, the Riesz space fractional derivative with α order can be defined as [2]*

$$\frac{\partial^{\alpha} u(x,t)}{\partial |x|^{\alpha}} = \frac{-1}{2\cos(\frac{\pi\alpha}{2})\,\Gamma(n-\alpha)}\, \frac{\partial^n}{\partial x^n} \int\_{-\infty}^{+\infty} \frac{u(\xi,t)}{|x-\xi|^{\alpha+1-n}}\, d\xi, \tag{5}$$

*where* $\Gamma(\cdot)$ *is the Gamma function, and* $\frac{\partial^{\alpha} u(x,t)}{\partial |x|^{\alpha}}$ *denotes the α-order derivative of $u(x,t)$ at* $(x,t)$*.*

**Lemma 1.** *In the infinite domain* (−∞ < *x* < ∞)*, the Riesz fractional derivative of the function $u(x,t)$ is equivalent to*

$$\frac{\partial^{\alpha} u(x,t)}{\partial |x|^{\alpha}} = \frac{-1}{2\cos(\frac{\pi\alpha}{2})\,\Gamma(n-\alpha)}\, \frac{\partial^n}{\partial x^n} \int\_{-\infty}^{+\infty} \frac{u(\xi,t)}{|x-\xi|^{\alpha+1-n}}\, d\xi = -(-\triangle)^{\frac{\alpha}{2}} u(x,t), \tag{6}$$

*where n* − 1 < *α* < *n.*

In the infinite domain interval (−∞ < *x* < ∞), the fractional Laplace operator is defined as

$$-(-\triangle)^{\frac{\alpha}{2}} u(x,t) = -F^{-1}\big( |\xi|^{\alpha}\, F u(x,t) \big), \tag{7}$$

where *F* and *F*<sup>−1</sup> denote the Fourier transform and the inverse Fourier transform of *u*(*x*, *t*), respectively, and we can obtain

$$-(-\triangle)^{\frac{\alpha}{2}} u(x,t) = -\frac{1}{2\pi} \int\_{-\infty}^{+\infty} e^{-i x \xi}\, |\xi|^{\alpha} \int\_{-\infty}^{+\infty} e^{i \xi \eta}\, u(\eta,t)\, d\eta\, d\xi. \tag{8}$$

On the other hand, in the finite domain interval Ω = (*a*, *b*), the Fourier series can be defined as

$$-(-\triangle)^{\frac{\alpha}{2}} u(x,t) = -\sum\_{l\in\mathbb{Z}} |\nu\_l|^{\alpha}\, \hat{u}\_l\, e^{i\nu\_l(x-a)}, \tag{9}$$

where $\nu_l = \frac{2 l \pi}{b-a}$, and the coefficients of the Fourier series are

$$\hat{u}\_l = \frac{1}{b-a} \int\_{\Omega} u(\mathbf{x}, t) e^{-i\nu\_l(\mathbf{x}-\mathbf{a})} d\mathbf{x}.\tag{10}$$

We apply the FPS method to discretize the *α*-order Riesz space fractional derivative. Suppose the space interval is Ω = [*a*, *b*]; the interval Ω is divided into *N* equal parts, where *N* is an even number. The space step length is $h_1 = \frac{b-a}{N}$. Take $x_j = a + j h_1$, $j = 0, 1, \cdots, N-1$, as the space Fourier collocation points. Let $I_N u(x,t)$ be the approximation of $u(x,t)$; we have

$$(I\_N u)(\mathbf{x}, t) = u\_N(\mathbf{x}, t) = \sum\_{k=-N/2}^{N/2} \tilde{u}\_k e^{ik\mu(\mathbf{x} - \mathbf{a})},\tag{11}$$

where $\mu = \frac{2\pi}{b-a}$ and the discrete Fourier coefficients $\tilde{u}_k$ are

$$\tilde{u}\_{k} = \frac{1}{N c\_{k}} \sum\_{j=0}^{N-1} u(x\_{j}, t)\, e^{-i k \mu (x\_{j} - a)}. \tag{12}$$

When |*k*| < *N*/2, *ck* = 1, and when *k* = ±*N*/2, *ck* = 2.

We can obtain

$$-(-\triangle)^{\frac{\alpha}{2}} I\_N u(x\_j, t) = -\sum\_{k=-N/2}^{N/2} |k\mu|^{\alpha}\, \tilde{u}\_k\, e^{ik\mu(x\_j - a)}. \tag{13}$$

The *α* order derivative of the approximation function *INu*(*x*, *t*) can be denoted as

$$\begin{split} \frac{\partial^{\alpha} I\_{N} u(x\_{j}, t)}{\partial |x|^{\alpha}} &= -(-\triangle)^{\frac{\alpha}{2}} I\_{N} u(x\_{j}, t) \\ &= -\sum\_{k=-N/2}^{N/2} |k\mu|^{\alpha} \Big( \frac{1}{N c\_{k}} \sum\_{l=0}^{N-1} u\_{l}\, e^{-ik\mu(x\_{l} - a)} \Big)\, e^{ik\mu(x\_{j} - a)} \\ &= \sum\_{l=0}^{N-1} u\_{l} \Big( -\sum\_{k=-N/2}^{N/2} \frac{1}{N c\_{k}} |k\mu|^{\alpha}\, e^{ik\mu(x\_{j} - x\_{l})} \Big) \\ &= (\mathbf{D}\_{2}^{\alpha} \mathbf{U})\_{j}, \end{split} \tag{14}$$

where $\mathbf{D}_2^{\alpha}$ is an $N \times N$ matrix, $\mathbf{U} = (U_0, U_1, \cdots, U_{N-1})^T$, and the entries of $\mathbf{D}_2^{\alpha}$ are

$$(\mathbf{D}\_2^{\alpha})\_{j,l} = -\sum\_{k=-N/2}^{N/2} \frac{1}{N c\_k} |k\mu|^{\alpha}\, e^{ik\mu(x\_j - x\_l)}. \tag{15}$$
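A direct implementation of (15) can be sanity-checked against the classical case: for α = 2 the matrix must act as the Fourier pseudo-spectral second-derivative operator. A minimal sketch (our own illustration; the function name is ours):

```python
import numpy as np

# Build the FPS matrix D2^alpha of Equation (15) and check that for
# alpha = 2 it reproduces u_xx for u(x) = sin(x) on [0, 2*pi].
def riesz_matrix(N, a, b, alpha):
    mu = 2.0 * np.pi / (b - a)
    x = a + (b - a) * np.arange(N) / N               # collocation points x_j
    k = np.arange(-N // 2, N // 2 + 1)
    c = np.where(np.abs(k) == N // 2, 2.0, 1.0)      # c_k = 2 at k = +-N/2, else 1
    D = np.zeros((N, N), dtype=complex)
    for j in range(N):
        D[j, :] = [-np.sum(np.abs(k * mu)**alpha / (N * c)
                           * np.exp(1j * k * mu * (x[j] - xl))) for xl in x]
    return x, D.real                                  # symmetric k-sum: imaginary part vanishes

x, D = riesz_matrix(32, 0.0, 2.0 * np.pi, 2.0)
u = np.sin(x)
err = np.max(np.abs(D @ u - (-np.sin(x))))            # -(-Laplace)^1 sin = sin'' = -sin
```

For trigonometric data such as $\sin x$ the pseudo-spectral derivative is exact to round-off, so `err` sits at machine-precision level; fractional values of α change only the symbol $|k\mu|^{\alpha}$.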

#### **3. Energy Preserving Method of Fractional KGZ and KGS Equations**

In this section, we first give the Hamiltonian structures of the Riesz space fractional KGZ and KGS equations. Then, the space fractional derivatives of these two equations are discretized by the FPS method. The AVF method is applied to solve the semi-discrete fractional KGZ and KGS equations.

#### *3.1. Energy Preserving Method of the Fractional KGZ Equation*

Let *w*(*x*, *t*) = *u<sub>t</sub>*(*x*, *t*) and −2*q<sub>xx</sub>*(*x*, *t*) = *m<sub>t</sub>*(*x*, *t*); then the space fractional KGZ Equation (1) is equivalent to

$$\begin{cases} \begin{aligned} u_t &= w, \\ m_t &= -2q_{xx}, \\ w_t &= \frac{\partial^{\alpha} u}{\partial |x|^{\alpha}} - u - mu - |u|^2 u, \\ q_t &= -\frac{1}{2}m - \frac{1}{2}|u|^2. \end{aligned} \end{cases} \tag{16}$$

Equation (16) can be expressed by the following infinite dimensional Hamiltonian system

$$\frac{d\mathbf{z}}{dt} = \mathbf{J} \frac{\delta H(\mathbf{z})}{\delta \mathbf{z}}, \quad \mathbf{J} = \begin{pmatrix} \mathbf{O} & \mathbf{I} \\ -\mathbf{I} & \mathbf{O} \end{pmatrix}, \quad \mathbf{I} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \tag{17}$$

where **z** = (*u*, *m*, *w*, *q*)<sup>*T*</sup>, **O** is the 2 × 2 zero matrix, **I** is the 2 × 2 unit matrix, and the corresponding Hamiltonian energy function is

$$H(\mathbf{z}) = \int \Big[\frac{1}{2}w^2 + (q_x)^2 + \frac{1}{2}\Big|\frac{\partial^{\frac{\alpha}{2}} u}{\partial |x|^{\frac{\alpha}{2}}}\Big|^2 + \frac{1}{2}|u|^2 + \frac{1}{2}m|u|^2 + \frac{1}{4}|u|^4 + \frac{1}{4}m^2\Big] dx.\tag{18}$$

The second-order partial derivative *q<sub>xx</sub>* in Equation (16) can be approximated by the FPS method [31,32]. Suppose *I<sub>N</sub>q*(*x*, *t*) is the FPS approximation of the function *q*(*x*, *t*). The values of the derivatives *d*/*dx* *I<sub>N</sub>q*(*x*, *t*) and *d*<sup>2</sup>/*dx*<sup>2</sup> *I<sub>N</sub>q*(*x*, *t*) at the collocation points *x<sub>j</sub>* are obtained in terms of the values *q<sub>j</sub>*, i.e.,

$$\frac{d}{dx} I_N q(x, t)\Big|_{x = x_j} = \sum_{l=0}^{N-1} q_l \frac{d g_l(x_j)}{dx} = (\mathbf{D}_1 \mathbf{Q})_j, \tag{19}$$

$$\frac{d^2}{dx^2} I_N q(x, t)\Big|_{x = x_j} = \sum_{l=0}^{N-1} q_l \frac{d^2 g_l(x_j)}{dx^2} = (\mathbf{D}_2 \mathbf{Q})_j.\tag{20}$$

The function *g<sub>l</sub>*(*x*) is the trigonometric polynomial explicitly given by

$$g_l(x) = \frac{1}{N} \sum_{k=-N/2}^{N/2} \frac{1}{c_k}\, e^{ik\mu(x - x_l)},\tag{21}$$

where *c<sub>k</sub>* = 1 for |*k*| ≠ *N*/2, *c*<sub>−*N*/2</sub> = *c*<sub>*N*/2</sub> = 2, *μ* = 2*π*/*L* with *L* = *b* − *a*, and

$$(\mathbf{D}\_2)\_{i,j} = \begin{cases} \frac{1}{2} \mu^2 (-1)^{i+j+1} \frac{1}{\sin^2(\mu \frac{\mathbf{x}\_i - \mathbf{x}\_j}{2})}, & i \neq j, \\\ -\mu^2 \frac{N^2 + 2}{12}, & i = j. \end{cases} \tag{22}$$
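Equation (22) can be checked against a known smooth function. The small Python sketch below (illustrative only; names are ours) assembles **D**<sub>2</sub> from the explicit formula and verifies that it differentiates sin *x* twice to spectral accuracy:

```python
import numpy as np

def fourier_d2_matrix(N, L):
    """Explicit second-derivative Fourier differentiation matrix, Eq. (22),
    for N equispaced points on a periodic interval of length L."""
    mu = 2 * np.pi / L
    x = L * np.arange(N) / N
    D2 = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:
                D2[i, j] = -mu ** 2 * (N ** 2 + 2) / 12.0   # diagonal entry
            else:
                D2[i, j] = (0.5 * mu ** 2 * (-1) ** (i + j + 1)
                            / np.sin(mu * (x[i] - x[j]) / 2.0) ** 2)
    return x, D2

# Check: on [0, 2*pi] the second derivative of sin(x) is -sin(x).
x, D2 = fourier_d2_matrix(32, 2 * np.pi)
print(np.max(np.abs(D2 @ np.sin(x) + np.sin(x))))  # near machine precision
```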

Based on the above FPS method, we can obtain the semi-discrete system of the space fractional KGZ equation

$$\begin{cases} \frac{d}{dt} u_j = w_j, \\ \frac{d}{dt} m_j = -2(\mathbf{D}_2 \mathbf{Q})_j, \\ \frac{d}{dt} w_j = (\mathbf{D}_2^{\alpha} \mathbf{U})_j - u_j - m_j u_j - |u_j|^2 u_j, \\ \frac{d}{dt} q_j = -\frac{1}{2} m_j - \frac{1}{2} |u_j|^2, \end{cases} \tag{23}$$

where *j* = 0, 1, ··· , *N* − 1.

Equation (23) can be written as the following semi-discrete Hamiltonian system

$$\frac{d\mathbf{Z}}{dt} = \mathbf{J}\nabla\_{\mathbf{Z}}H(\mathbf{Z}), \quad \mathbf{J} = \begin{pmatrix} \mathbf{0} & \mathbf{0} & \mathbf{I\_N} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I\_N} \\ -\mathbf{I\_N} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathbf{I\_N} & \mathbf{0} & \mathbf{0} \end{pmatrix}, \tag{24}$$

where **Z** = (**U**<sup>*T*</sup>, **M**<sup>*T*</sup>, **W**<sup>*T*</sup>, **Q**<sup>*T*</sup>)<sup>*T*</sup>, **M** = (*m*<sub>0</sub>, *m*<sub>1</sub>, ··· , *m*<sub>*N*−1</sub>)<sup>*T*</sup>, **W** = (*w*<sub>0</sub>, *w*<sub>1</sub>, ··· , *w*<sub>*N*−1</sub>)<sup>*T*</sup>, **Q** = (*q*<sub>0</sub>, *q*<sub>1</sub>, ··· , *q*<sub>*N*−1</sub>)<sup>*T*</sup>, and **0** and **I**<sub>*N*</sub> are the *N* × *N* zero and unit matrices, respectively. The corresponding Hamiltonian function is

$$H(\mathbf{Z}) = -\mathbf{Q}^T \mathbf{D}_2 \mathbf{Q} - \frac{1}{2} \mathbf{U}^T \mathbf{D}_2^{\alpha} \mathbf{U} + \sum_{j=0}^{N-1} \Big[\frac{1}{2} w_j^2 + \frac{1}{2} |u_j|^2 + \frac{1}{2} m_j |u_j|^2 + \frac{1}{4} |u_j|^4 + \frac{1}{4} m_j^2\Big]. \tag{25}$$

The semi-discrete Hamiltonian system (24) is solved by the following second-order AVF method

$$\frac{\mathbf{Z}^{n+1} - \mathbf{Z}^n}{\tau} = \mathbf{J} \int_0^1 \nabla H((1 - \xi)\mathbf{Z}^n + \xi\mathbf{Z}^{n+1})\, d\xi.\tag{26}$$

Equation (26) is equivalent to the following equations

$$\frac{u\_j^{n+1} - u\_j^n}{\tau} = \int\_0^1 ((1 - \xi)w\_j^n + \xi w\_j^{n+1})d\xi,\tag{27}$$

$$\frac{m\_j^{n+1} - m\_j^n}{\tau} = -2 \int\_0^1 ((1 - \xi)(\mathbf{D}\_2 \mathbf{Q}^n)\_j + \xi(\mathbf{D}\_2 \mathbf{Q}^{n+1})\_j) d\xi,\tag{28}$$

$$\begin{split} \frac{w_j^{n+1} - w_j^n}{\tau} &= \int_0^1 \Big[\big((1-\xi)(\mathbf{D}_2^{\alpha}\mathbf{U}^n)_j + \xi(\mathbf{D}_2^{\alpha}\mathbf{U}^{n+1})_j\big) - \big((1-\xi)u_j^n + \xi u_j^{n+1}\big)\Big] d\xi \\ &\quad - \int_0^1 \big((1-\xi)m_j^n + \xi m_j^{n+1}\big)\big((1-\xi)u_j^n + \xi u_j^{n+1}\big)\, d\xi \\ &\quad - \int_0^1 \big|(1-\xi)u_j^n + \xi u_j^{n+1}\big|^2 \big((1-\xi)u_j^n + \xi u_j^{n+1}\big)\, d\xi, \end{split} \tag{29}$$

$$\frac{q_j^{n+1} - q_j^n}{\tau} = -\frac{1}{2} \int_0^1 \Big( \big((1 - \xi) m_j^n + \xi m_j^{n+1}\big) + \big|(1 - \xi) u_j^n + \xi u_j^{n+1}\big|^2 \Big) d\xi.\tag{30}$$

From Equations (27)–(30), the auxiliary variables *w* and *q* can be eliminated, and we obtain

$$\begin{split} \frac{u_j^{n+1} - 2u_j^n + u_j^{n-1}}{\tau^2} &= \Big(\mathbf{D}_2^{\alpha}\frac{\mathbf{U}^{n+1} + \mathbf{U}^n}{4}\Big)_j - \frac{u_j^{n+1} + u_j^n}{4} - \frac{m_j^{n+1} + m_j^n}{4}u_j^n - \frac{2m_j^{n+1} + m_j^n}{12}(u_j^{n+1} - u_j^n) \\ &\quad - \Big[\frac{1}{6}(u_j^{n+1})^2 + \frac{1}{6}(u_j^n)^2 + \frac{1}{6}u_j^{n+1}u_j^n\Big]u_j^n - \Big[\frac{1}{8}(u_j^{n+1})^2 + \frac{1}{24}(u_j^n)^2 + \frac{1}{12}u_j^{n+1}u_j^n\Big](u_j^{n+1} - u_j^n) \\ &\quad + \Big(\mathbf{D}_2^{\alpha}\frac{\mathbf{U}^n + \mathbf{U}^{n-1}}{4}\Big)_j - \frac{u_j^n + u_j^{n-1}}{4} - \frac{m_j^n + m_j^{n-1}}{4}u_j^{n-1} - \frac{2m_j^n + m_j^{n-1}}{12}(u_j^n - u_j^{n-1}) \\ &\quad - \Big[\frac{1}{6}(u_j^n)^2 + \frac{1}{6}(u_j^{n-1})^2 + \frac{1}{6}u_j^n u_j^{n-1}\Big]u_j^{n-1} - \Big[\frac{1}{8}(u_j^n)^2 + \frac{1}{24}(u_j^{n-1})^2 + \frac{1}{12}u_j^n u_j^{n-1}\Big](u_j^n - u_j^{n-1}), \end{split} \tag{31}$$

$$\begin{split} \frac{m_j^{n+1} - 2m_j^n + m_j^{n-1}}{\tau^2} &= \Big(\mathbf{D}_2 \frac{\mathbf{M}^{n+1} + 2\mathbf{M}^n + \mathbf{M}^{n-1}}{4}\Big)_j + \sum_{l=0}^{N-1} d_{jl} \Big(\Big[\frac{1}{6}(u_l^{n+1})^2 + \frac{1}{6}(u_l^n)^2 + \frac{1}{6}u_l^{n+1}u_l^n\Big] \\ &\quad + \Big[\frac{1}{6}(u_l^n)^2 + \frac{1}{6}(u_l^{n-1})^2 + \frac{1}{6}u_l^n u_l^{n-1}\Big]\Big), \end{split} \tag{32}$$

where *d<sub>jl</sub>* = (**D**<sub>2</sub>)<sub>*j*,*l*</sub>.

**Theorem 1.** *The semi-discrete scheme (26) preserves the discrete energy of the finite-dimensional Hamiltonian system, i.e.,*

$$H(\mathbf{Z}^{n+1}) = H(\mathbf{Z}^n). \tag{33}$$

**Proof.** From Equation (26), we can obtain that

$$\begin{split} & (\int\_{0}^{1} \nabla H((1-\xi)\mathbf{Z}^{n} + \xi\mathbf{Z}^{n+1})d\xi)^{T} \frac{\mathbf{Z}^{n+1} - \mathbf{Z}^{n}}{\tau} \\ &= (\int\_{0}^{1} \nabla H((1-\xi)\mathbf{Z}^{n} + \xi\mathbf{Z}^{n+1})d\xi)^{T} \mathbf{J} \int\_{0}^{1} \nabla H((1-\xi)\mathbf{Z}^{n} + \xi\mathbf{Z}^{n+1})d\xi, \end{split} \tag{34}$$

Since **J** is a skew-symmetric matrix, the right-hand side of (34) vanishes, and we find that

$$\frac{1}{\tau} \int_0^1 (\mathbf{Z}^{n+1} - \mathbf{Z}^n)^T \nabla H ((1 - \xi) \mathbf{Z}^n + \xi \mathbf{Z}^{n+1})\, d\xi = 0. \tag{35}$$

By the fundamental theorem of calculus, we get

$$\frac{1}{\tau}(H(\mathbf{Z}^{n+1}) - H(\mathbf{Z}^n)) = 0. \tag{36}$$

The proof of the theorem is completed.
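The mechanism of Theorem 1 can be seen in a toy example (a sketch of ours, not the paper's scheme). For a quadratic Hamiltonian, ∇*H* is linear, so the averaged gradient in (26) equals ∇*H* at the midpoint and one AVF step reduces to a linear solve; the discrete energy is then conserved to round-off:

```python
import numpy as np

# AVF step (26) for the harmonic oscillator H(q, p) = (q^2 + p^2)/2,
# where grad H(z) = z and J is the 2x2 skew-symmetric structure matrix.
def avf_step(z, tau):
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    I = np.eye(2)
    # (z1 - z0)/tau = J (z0 + z1)/2  =>  (I - tau/2 J) z1 = (I + tau/2 J) z0
    return np.linalg.solve(I - 0.5 * tau * J, (I + 0.5 * tau * J) @ z)

z = np.array([1.0, 0.0])
H0 = 0.5 * z @ z
for _ in range(1000):
    z = avf_step(z, 0.1)
print(abs(0.5 * z @ z - H0))  # energy drift stays at round-off level
```

For non-quadratic Hamiltonians such as (25), the integrals in (26) are evaluated exactly on each polynomial term, which is what produces the rational coefficients in schemes (31) and (32).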

*3.2. Energy Preserving Method of the Fractional KGS Equation*

Suppose *f*(*x*, *t*) = *p*(*x*, *t*) + *ir*(*x*, *t*) with real part *p* = *p*(*x*, *t*) and imaginary part *r* = *r*(*x*, *t*); then Equation (3) can be written as the following system of differential equations

$$\begin{cases} \begin{aligned} r_t &= -\frac{\beta}{2}(-\triangle)^{\frac{\alpha}{2}} p + \alpha_1 h p, \\ p_t &= \frac{\beta}{2}(-\triangle)^{\frac{\alpha}{2}} r - \alpha_1 h r, \\ v_t &= \frac{\gamma}{2} h_{xx} - \frac{m^2}{2} h + \frac{\alpha_1}{2}(p^2 + r^2), \\ h_t &= 2v. \end{aligned} \end{cases} \tag{37}$$

Equation (37) can be transformed into the following infinite dimensional Hamiltonian system

$$\frac{d\bar{\mathbf{z}}}{dt} = \mathbf{J} \frac{\delta H(\bar{\mathbf{z}})}{\delta \bar{\mathbf{z}}},\tag{38}$$

where ˜**z** = (*r*, *p*, *v*, *h*)*T*,

$$\mathbf{J} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix} \tag{39}$$

and the corresponding Hamiltonian energy function is

$$H(\tilde{\mathbf{z}}) = \int \Big[-\frac{\beta}{4}\Big(\big((-\triangle)^{\frac{\alpha}{4}} p\big)^2 + \big((-\triangle)^{\frac{\alpha}{4}} r\big)^2\Big) - \frac{\gamma}{4}(h_x)^2 - \frac{1}{4} m^2 h^2 + \frac{\alpha_1}{2} h (p^2 + r^2) - v^2\Big] dx. \tag{40}$$

Equation (37) is discretized by the FPS method, and we can obtain

$$\begin{cases} \frac{d}{dt} r_j = \frac{\beta}{2} (\mathbf{D}_2^{\alpha}\mathbf{p})_j + \alpha_1 h_j p_j, \\ \frac{d}{dt} p_j = -\frac{\beta}{2} (\mathbf{D}_2^{\alpha}\mathbf{r})_j - \alpha_1 h_j r_j, \\ \frac{d}{dt} v_j = \frac{\gamma}{2} (\mathbf{D}_2\mathbf{h})_j - \frac{m^2}{2} h_j + \frac{\alpha_1}{2}(p_j^2 + r_j^2), \\ \frac{d}{dt} h_j = 2v_j. \end{cases} \tag{41}$$

Equation (41) can be transformed into the finite dimensional Hamiltonian system

$$\frac{d\tilde{\mathbf{Z}}}{dt} = \mathbf{S} \nabla_{\tilde{\mathbf{Z}}} H(\tilde{\mathbf{Z}}),\tag{42}$$

where **<sup>Z</sup>**˜ = (*r*0, ··· ,*rN*−1, *<sup>p</sup>*0, ··· , *pN*−1, *<sup>v</sup>*0, ··· , *vN*−1, *<sup>h</sup>*0, ··· , *hN*−1)*<sup>T</sup>* and

$$\mathbf{S} = \begin{pmatrix} \mathbf{0} & \mathbf{I}\_N & \mathbf{0} & \mathbf{0} \\ -\mathbf{I}\_N & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I}\_N \\ \mathbf{0} & \mathbf{0} & -\mathbf{I}\_N & \mathbf{0} \end{pmatrix}. \tag{43}$$

The corresponding discrete Hamiltonian function is

$$H(\tilde{\mathbf{Z}}) = \frac{\beta}{4} (\mathbf{p}^T \mathbf{D}_2^{\alpha} \mathbf{p} + \mathbf{r}^T \mathbf{D}_2^{\alpha} \mathbf{r}) + \frac{\gamma}{4} \mathbf{h}^T \mathbf{D}_2 \mathbf{h} + \sum_{j=0}^{N-1} \Big[\frac{\alpha_1}{2} h_j (p_j^2 + r_j^2) - \frac{1}{4} m^2 h_j^2 - v_j^2\Big]. \tag{44}$$

The semi-discrete Hamiltonian system (42) is solved by the second order AVF method, and we can get

$$\frac{\tilde{\mathbf{Z}}^{n+1} - \tilde{\mathbf{Z}}^n}{\tau} = \mathbf{S} \int_0^1 \nabla H((1 - \xi)\tilde{\mathbf{Z}}^n + \xi\tilde{\mathbf{Z}}^{n+1})\, d\xi.\tag{45}$$

Equation (45) is equivalent to the following schemes

$$\frac{r_j^{n+1} - r_j^n}{\tau} = \frac{\beta}{2} \Big(\mathbf{D}_2^{\alpha} \frac{\mathbf{p}^{n+1} + \mathbf{p}^n}{2}\Big)_j + \alpha_1 \Big(\frac{1}{3} h_j^{n+1} p_j^{n+1} + \frac{1}{6} h_j^{n+1} p_j^n + \frac{1}{6} h_j^n p_j^{n+1} + \frac{1}{3} h_j^n p_j^n\Big), \tag{46}$$

$$\frac{p_j^{n+1} - p_j^n}{\tau} = -\frac{\beta}{2} \Big(\mathbf{D}_2^{\alpha} \frac{\mathbf{r}^{n+1} + \mathbf{r}^n}{2}\Big)_j - \alpha_1 \Big(\frac{1}{3} h_j^{n+1} r_j^{n+1} + \frac{1}{6} h_j^{n+1} r_j^n + \frac{1}{6} h_j^n r_j^{n+1} + \frac{1}{3} h_j^n r_j^n\Big), \tag{47}$$

$$\begin{split} \frac{v_j^{n+1} - v_j^n}{\tau} &= \frac{\gamma}{2} \Big(\mathbf{D}_2 \frac{\mathbf{h}^{n+1} + \mathbf{h}^n}{2}\Big)_j - \frac{m^2}{2} \Big(\frac{h_j^{n+1} + h_j^n}{2}\Big) + \frac{\alpha_1}{6} \big(p_j^{n+1} p_j^{n+1} + p_j^{n+1} p_j^n + p_j^n p_j^n \\ &\quad + r_j^{n+1} r_j^{n+1} + r_j^{n+1} r_j^n + r_j^n r_j^n\big), \end{split} \tag{48}$$

$$\frac{h\_j^{n+1} - h\_j^n}{\tau} = 2\frac{v\_j^{n+1} + v\_j^n}{2}.\tag{49}$$

By the argument of Theorem 1, schemes (46)–(49) also preserve the discrete Hamiltonian energy of the Riesz space fractional KGS equation exactly.

#### **4. Numerical Simulation**

#### *4.1. Numerical Simulation of the Fractional KGZ Equation*

In this section, we test the following discrete Hamiltonian energy error

$$Error\_H^n = |\frac{H(\mathbf{Z}^n) - H(\mathbf{Z}^0)}{H(\mathbf{Z}^0)}|\,\tag{50}$$

by schemes (31) and (32), where

$$\begin{split} H(\mathbf{Z}^n) &= -(\mathbf{Q}^n)^T \mathbf{D}_2 \mathbf{Q}^n - \frac{1}{2} (\mathbf{U}^n)^T \mathbf{D}_2^{\alpha} \mathbf{U}^n \\ &\quad + \sum_{j=0}^{N-1} \Big[\frac{1}{2} (w_j^n)^2 + \frac{1}{2} |u_j^n|^2 + \frac{1}{2} m_j^n |u_j^n|^2 + \frac{1}{4} |u_j^n|^4 + \frac{1}{4} (m_j^n)^2\Big], \end{split} \tag{51}$$

and *H*(**Z**0) is the discrete Hamiltonian energy at *t* = 0.

The two-level initial conditions are taken as follows [16]:

$$\begin{cases} \ u\_0 = \frac{\sqrt{10} - \sqrt{2}}{2} \mathsf{sech}(\sqrt{\frac{1 + \sqrt{5}}{2}} x) \exp(i \sqrt{\frac{2}{1 + \sqrt{5}}} x), \\\ m\_0 = -2 \mathsf{sech}^2(\sqrt{\frac{1 + \sqrt{5}}{2}} x), \end{cases} \tag{52}$$

and

$$\begin{cases} \boldsymbol{u}\_{1} = \frac{\sqrt{10} - \sqrt{2}}{2} (\tanh \boldsymbol{x} - 1) \mathbf{sech}(\sqrt{\frac{1 + \sqrt{5}}{2}} \boldsymbol{x}) \exp(i \sqrt{\frac{2}{1 + \sqrt{5}}} \boldsymbol{x}), \\\ m\_{1} = -4 \mathbf{sech}^{2}(\sqrt{\frac{1 + \sqrt{5}}{2}} \boldsymbol{x}) \tanh(\sqrt{\frac{1 + \sqrt{5}}{2}} \boldsymbol{x}). \end{cases} \tag{53}$$

Figure 1 gives the numerical solution of the KGZ equation with *α* = 2, *t* ∈ [0, 10]. Figure 2 gives the numerical solution of the Riesz space fractional KGZ equation with *α* = 1.7, *t* ∈ [0, 16]. These numerical results are consistent with the existing numerical results [16]. From Figures 1 and 2, we can see that the new scheme simulates the evolution of solitary waves of the Riesz space fractional KGZ equation well, and the numerical solutions and schemes are stable. Figure 3 shows the energy error of the Riesz space fractional KGZ equation with (**a**) *α* = 2 and (**b**) *α* = 1.7 at *t* ∈ [0, 20]. The energy error is of order 10<sup>−13</sup>, which is negligible; it is obvious that the new scheme preserves the discrete energy of the fractional KGZ equation well.

**Figure 1.** Evolution of solitary waves (**a**) *u*(*x*, *t*) and (**b**) *m*(*x*, *t*) at *α* = 2, *t* ∈ [0, 10].

**Figure 2.** Evolution of solitary waves (**a**) *u*(*x*, *t*) and (**b**) *m*(*x*, *t*) at *α* = 1.7, *t* ∈ [0, 16].

**Figure 3.** Energy error of fractional KGZ equation: (**a**) *α* = 2, (**b**) *α* = 1.7 at *t* ∈ [0, 20].

#### *4.2. Numerical Simulation of the Fractional KGS Equation*

We apply schemes (46)–(49) to simulate the Riesz space fractional KGS equation. The discrete Hamiltonian energy errors can be defined as

$$Error\_H^n = |\frac{H(\tilde{\mathbf{Z}}^n) - H(\tilde{\mathbf{Z}}^0)}{H(\tilde{\mathbf{Z}}^0)}|\,\tag{54}$$

where

$$\begin{split} H(\tilde{\mathbf{Z}}^{n}) &= \frac{\beta}{4} \big( (\mathbf{p}^{n})^{T} \mathbf{D}_{2}^{\alpha} \mathbf{p}^{n} + (\mathbf{r}^{n})^{T} \mathbf{D}_{2}^{\alpha} \mathbf{r}^{n} \big) + \frac{\gamma}{4} (\mathbf{h}^{n})^{T} \mathbf{D}_{2} \mathbf{h}^{n} \\ &\quad + \sum_{j=0}^{N-1} \Big[ \frac{\alpha_{1}}{2} h_{j}^{n} \big( (p_{j}^{n})^{2} + (r_{j}^{n})^{2} \big) - \frac{1}{4} m^{2} (h_{j}^{n})^{2} - (v_{j}^{n})^{2} \Big] \end{split} \tag{55}$$

and *H*(**Z**˜ <sup>0</sup>) is the discrete Hamiltonian energy at *t* = 0.

The parameters are taken as *υ* = 0.8 and *x*<sub>0</sub> = 10. First, we consider the evolution of a single solitary wave, whose initial condition is taken as follows

$$\begin{cases} f_0 = f(x - x_0, 0, \upsilon) = \frac{3\sqrt{2}}{4(1 - \upsilon^2)}\,\mathrm{sech}^2\Big(\frac{1}{2\sqrt{1 - \upsilon^2}}(x - x_0)\Big)\exp(i\upsilon x), \\ h_0 = h(x - x_0, 0, \upsilon) = \frac{3}{4(1 - \upsilon^2)}\,\mathrm{sech}^2\Big(\frac{1}{2\sqrt{1 - \upsilon^2}}(x - x_0)\Big). \end{cases} \tag{56}$$

Figure 4 shows the numerical solution of a single solitary wave of the KGS equation at *α* = 2, *t* ∈ [0, 20]. Figure 5 shows the numerical solutions of the solitary waves *f*(*x*, *t*) and *h*(*x*, *t*) of the Riesz space fractional KGS equation at *α* = 1.2, *t* ∈ [0, 20]. From Figures 4 and 5, we can see that the new scheme for the Riesz space fractional KGS equation also simulates the evolution of a solitary wave well, and the numerical solutions and schemes are stable. Figure 6 shows the energy error of the Riesz space fractional KGS equation; the error is negligible, so the new scheme also preserves the discrete energy of the equation well.

**Figure 4.** Evolution of solitary waves (**a**) *f*(*x*, *t*) and (**b**) *h*(*x*, *t*) at *α* = 2, *t* ∈ [0, 20].

**Figure 5.** Evolution of solitary waves (**a**) *f*(*x*, *t*) and (**b**) *h*(*x*, *t*) at *α* = 1.2, *t* ∈ [0, 20].

**Figure 6.** Energy error of the fractional KGS equation: (**a**) *α* = 2, (**b**) *α* = 1.2 at *t* ∈ [0, 20].

Then, we consider the evolution of two solitary waves with the following initial conditions

$$\begin{cases} f_0 = f(x - x_0, 0, \upsilon) + f(x + x_0, 0, -\upsilon), \\ h_0 = h(x - x_0, 0, \upsilon) + h(x + x_0, 0, -\upsilon). \end{cases} \tag{57}$$

Figure 7 shows the numerical solutions of two solitary waves of the KGS equation at *α* = 2, *t* ∈ [0, 30]. Figure 8 shows the numerical solutions of two solitary waves of the Riesz space fractional KGS equation at *α* = 1.6, *t* ∈ [0, 30]. The numerical results are consistent with the existing results [23]. From Figures 7 and 8, we can see that the new scheme also simulates the evolution of multiple solitary waves well, and the numerical solutions and schemes are stable. Figure 9 shows the energy error of the Riesz space fractional KGS equation; the error is negligible, so the new scheme also preserves the discrete energy of the equation well.

**Figure 7.** Evolution of solitary waves (**a**) *f*(*x*, *t*) and (**b**) *h*(*x*, *t*) at *α* = 2, *t* ∈ [0, 30].

**Figure 8.** Evolution of solitary waves (**a**) *f*(*x*, *t*) and (**b**) *h*(*x*, *t*) at *α* = 1.6, *t* ∈ [0, 30].

**Figure 9.** Energy errors of the fractional KGS equations: (**a**) *α* = 2, (**b**) *α* = 1.6 at *t* ∈ [0, 30].

#### **5. Conclusions**

In this paper, two new energy-preserving schemes for the Riesz space fractional KGZ and KGS equations are proposed based on the FPS method and the AVF method. The energy conservation property is proven. Numerical results show that these new schemes simulate the evolution of the solitary waves of these fractional differential equations well and preserve the discrete energy conservation property. The existing energy-conserving schemes for the space fractional KGZ and KGS equations are in general of second-order accuracy. However, the energy conservation schemes based on the second-order AVF method can easily be extended to high-order energy-preserving schemes. In future work, we will construct high-order energy-preserving schemes for fractional differential equations based on the high-order AVF method and analyze the convergence and stability of these new schemes.

**Author Contributions:** Conceptualization, J.S., S.Y. and L.Z.; methodology, J.S., S.Y. and L.Z.; data collection, S.Y. and L.Z.; writing—original draft and preparation, J.S., S.Y. and L.Z.; data analysis, L.Z. and S.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Hainan Provincial Natural Science Foundation of China (Grant No. 120RC450) and the Natural Science Foundation of China (Grant Nos. 11961020, 41974114).

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Fourth-Order Numerical Solutions for a Fuzzy Time-Fractional Convection–Diffusion Equation under Caputo Generalized Hukuhara Derivative**

**Hamzeh Zureigat 1, Mohammed Al-Smadi 2,3, Areen Al-Khateeb 1, Shrideh Al-Omari 4,\* and Sharifah E. Alhazmi <sup>5</sup>**


**Abstract:** The fuzzy fractional differential equation explains more complex real-world phenomena than the fractional differential equation does. Therefore, numerous techniques have been timely derived to solve various fractional time-dependent models. In this paper, we develop two compact finite difference schemes and employ the resulting schemes to obtain a certain solution for the fuzzy time-fractional convection–diffusion equation. Then, by making use of the Caputo fractional derivative, we provide new fuzzy analysis relying on the concept of fuzzy numbers. Further, we approximate the time-fractional derivative by using a fuzzy Caputo generalized Hukuhara derivative under the double-parametric form of fuzzy numbers. Furthermore, we introduce new computational techniques, based on fuzzy double-parametric form, to shift the given problem from one fuzzy domain to another crisp domain. Moreover, we discuss some stability and error analysis for the proposed techniques by using the Fourier method. Over and above, we derive several numerical experiments to illustrate reliability and feasibility of our proposed approach. It was found that the fuzzy fourth-order compact implicit scheme produces slightly better results than the fourth-order compact FTCS scheme. Furthermore, the proposed methods were found to be feasible, appropriate, and accurate, as demonstrated by a comparison of analytical and numerical solutions at various fuzzy values.

**Keywords:** fuzzy time-fractional equation; convection–diffusion equation; fuzzy Caputo gH-derivative; finite difference methods; implicit scheme method; brownian motion

#### **1. Introduction**

In recent years, the study of solving fractional partial differential equations has attracted the attention of many researchers. This can be appraised through a large number of research articles dealing with such equations in several scientific databases. The time-fractional convection–diffusion equation (TFCDE) differs from the integer convection– diffusion equation in the sense that time-fractional derivative can be replaced by a fractional derivative to describe both the movement and speed of particles that are inconsistent with the classical Brownian motion type [1–5]. The exact solutions are often unobtainable using analytical methods. Thus, mathematicians have resorted to using numerical methods to provide solutions for the governing equations. Several finite difference methods discussed by many authors [6–8] are considered to be one of the most essential numerical techniques for solving the time-fractional convection–diffusion equations. The high-order compact finite

**Citation:** Zureigat, H.; Al-Smadi, M.; Al-Khateeb, A.; Al-Omari, S.; Alhazmi, S.E. Fourth-Order Numerical Solutions for a Fuzzy Time-Fractional Convection– Diffusion Equation under Caputo Generalized Hukuhara Derivative. *Fractal Fract.* **2023**, *7*, 47. https:// doi.org/10.3390/fractalfract7010047

Academic Editors: Libo Feng, Lin Liu and Yang Liu

Received: 14 November 2022 Revised: 14 December 2022 Accepted: 15 December 2022 Published: 30 December 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

difference methods are usually preferred due to their high computational efficiency and accuracy. Lately, a number of research articles have been published regarding high-order compact finite difference schemes to solve the time-fractional convection–diffusion equations.

Gao and Sun [9] developed two different three-point combined compact alternatingdirection implicit schemes CC–ADI to solve time-fractional convection–diffusion in the sense of the Riemann–Liouville fractional derivative. The CC–ADI is a combination of compact and alternating-direction implicit schemes that yield a high-order accuracy numerical solution. Several numerical examples are carried out to demonstrate the efficiency of the proposed schemes. The unconditional stabilities of the proposed schemes were proven with the Fourier method. Later, Fazio et al. [10] developed an implicit scheme of non-uniform grids to solve the time-fractional convection–diffusion equation. A spatial non-uniform net was utilized in order to increase the accuracy of the Caputo fractional derivative. The stability analysis of the proposed method showed that the method is unconditionally stable. Several numerical examples were reported to show that the finite difference method is more accurate for the non-uniform grid than the uniform mesh, and the stability is better in the non-uniform mesh than the uniform mesh.

Very recently, Sweilam et al. [11] used the compact finite difference method to obtain a numerical solution for the stochastic fractional convection–diffusion equation. The fractional derivative was approximated by the Caputo definition, and the stability and consistency of the presented method were discussed. Two experiments were also presented to examine the performance of the proposed method. It was found that all of the results obtained are compatible with the analytical exact solutions. Li et al. [12] used a fourth-order compact scheme for solving a fluid dynamic problem, groundwater pollution modeled by a two-dimensional TFCDE. The time-fractional derivatives of the considered equation were approximated by the Caputo fractional derivative, and a fourth-order accurate compact finite difference discretization was applied to the spatial derivatives. The solvability, convergence, and stability of the proposed scheme were studied on the basis of the von Neumann method. It was further established that the introduced method has unique solvability and converges with order *O*(*τ*<sup>2−*α*</sup> + *h*<sub>1</sub><sup>4</sup> + *h*<sub>2</sub><sup>4</sup>).

In the mainstream investigation of the processes modelled by fractional partial differential equations, the variables and parameters are defined exactly, but these quantities (variables and parameters) may be uncertain and vague due to measurement errors that occur in the real experiments and that lead to fuzzy fractional partial differential equations.

Senol et al. [13] developed the perturbation–iteration algorithm (PIA) for solving fuzzy time-fractional partial differential equations with a generalized Hukuhara derivative. The fuzzy time-fractional derivative was approximated by the use of the Caputo definition. The convergence analysis of the proposed method was discussed and showed that the proposed approach gives a fast convergence rate and high accuracy when compared with the exact analytical solutions of the crisp problem. Shah et al. [14] presented analytical solutions of fuzzy time-fractional partial differential equations under certain conditions. The Laplace transform was used to compute series-type solutions for the considered equations under the fuzzy concept. Some examples were solved to illustrate the feasibility of the proposed method. Recently, two finite difference methods that are implicit backward time center space (BTCS) and implicit schemes were developed by the authors in [15] to solve the fuzzy time-fractional convection–diffusion equation (FTFCDE).

Based on the literature, it seems, as far as we know, that limited research has been done in the field of fuzzy time-fractional convection–diffusion equations by using classical and compact finite difference methods. Our motivation in this article is to examine the solution of the fuzzy time-fractional convection–diffusion equation. In this research article, we will develop and implement compact finite difference methods, in particular the fourth-order compact Crank–Nicholson and the fourth-order forward time center space schemes to obtain an approximate solution for the time-fractional convection–diffusion equation in the double-parametric form of a fuzzy number.

#### **2. Preliminaries and Fundamental Definitions**

In this section, the main definitions and theorems that are utilized later in this article are considered as follows:

**Definition 1** [16]. *Let* R*<sup>F</sup> denote the set of fuzzy subsets of the real axis and <sup>u</sup>* : *<sup>R</sup>* → [0, 1] *satisfies the following properties:*


*Then, by* R*<sup>F</sup> we denote the space of fuzzy numbers for every* 0 < *<sup>r</sup>* ≤ 1.

Denote the *r*-level set of *u* by [*u*]<sup>*r*</sup> = {*x* ∈ R : *u*(*x*) ≥ *r*} = [*u̲*(*r*), *ū*(*r*)]. Then, from Definition 1, it follows that the *r*-level set [*u*]<sup>*r*</sup> is a closed interval for all *r*, 0 ≤ *r* ≤ 1. For arbitrary *u*, *v* ∈ R*F* and *k* ∈ R, the addition and scalar multiplication are defined by (*u* ⊕ *v*)(*r*) = [*u̲*(*r*) + *v̲*(*r*), *ū*(*r*) + *v̄*(*r*)] and (*k* ⊙ *u*)(*r*) = [*ku̲*(*r*), *kū*(*r*)] (for *k* ≥ 0; the endpoints swap when *k* < 0), respectively.
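These *r*-level operations can be sketched with plain interval arithmetic. The Python helpers below are purely illustrative (the names and the triangular-fuzzy-number example are ours, not from the paper):

```python
# An r-cut of a fuzzy number is represented as a closed interval (lo, hi).
def add_cut(u, v):
    """(u ⊕ v)(r): endpoint-wise addition of r-cuts."""
    return (u[0] + v[0], u[1] + v[1])

def scale_cut(k, u):
    """(k ⊙ u)(r) = [k*lo, k*hi]; endpoints swap when k < 0."""
    lo, hi = k * u[0], k * u[1]
    return (lo, hi) if k >= 0 else (hi, lo)

def tri_cut(a, b, c, r):
    """r-cut of the triangular fuzzy number (a, b, c)."""
    return (a + r * (b - a), c - r * (c - b))

u = tri_cut(0.0, 1.0, 2.0, 0.5)   # (0.5, 1.5)
v = tri_cut(1.0, 2.0, 3.0, 0.5)   # (1.5, 2.5)
print(add_cut(u, v))              # (2.0, 4.0)
print(scale_cut(-1.0, u))         # (-1.5, -0.5)
```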

**Definition 2** [16]. *The Hausdorff distance between fuzzy numbers is the map $d : \mathbb{R}_F \times \mathbb{R}_F \to \mathbb{R}^+ \cup \{0\}$ defined by $d(u, v) = \sup_{r \in [0,1]} \max\{|\underline{u}(r) - \underline{v}(r)|, |\overline{u}(r) - \overline{v}(r)|\}$. The $r$-level representation of a fuzzy-valued function $f : [a, b] \to \mathbb{R}_F$ is given by $f(x; r) = [\underline{f}(x; r), \overline{f}(x; r)]$, $x \in [a, b]$, $0 \le r \le 1$.*

**Definition 3** [17]. *Let $u, v \in \mathbb{R}_F$. If there exists $w \in \mathbb{R}_F$ such that $u = v + w$, then $w$ is called the Hukuhara difference of $u$ and $v$, and it is denoted by $u \ominus v$.*

**Definition 4** [18]. *The generalized Hukuhara difference (gH-difference for short) of two fuzzy numbers $u, v \in \mathbb{R}_F$ is the $w \in \mathbb{R}_F$ defined by*

$$
u \ominus_{gH} v = w \Leftrightarrow \begin{cases} \ (i)\ u = v + w, \\ or\ (ii)\ v = u + (-1)w. \end{cases}
$$
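At a fixed $r$-level, fuzzy numbers reduce to closed intervals, for which the gH-difference has the known closed form $[a, b] \ominus_{gH} [c, d] = [\min(a-c, b-d), \max(a-c, b-d)]$. The following minimal sketch (not from the source; intervals represented as plain tuples) illustrates Definition 4 level-wise:

```python
def gh_difference(u, v):
    """gH-difference of two closed intervals u = [a, b], v = [c, d].

    Returns w = [min(a - c, b - d), max(a - c, b - d)], which satisfies
    either u = v + w (case i) or v = u + (-1)w (case ii).
    """
    a, b = u
    c, d = v
    lo, hi = a - c, b - d
    return (min(lo, hi), max(lo, hi))

# Case (i): u = v + w holds with w = (1, 2), since (2 + 1, 5 + 2) = (3, 7)
w = gh_difference((3, 7), (2, 5))
```

For intervals, one of the two cases in the definition always holds, which is what makes the gH-difference exist level-wise even when the classical Hukuhara difference does not.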

**Definition 5** [19]. *The gH-derivative of a fuzzy-valued function $f : (a, b) \to \mathbb{R}_F$ at $x_0$ is defined by*

$$\left(f'\right)_{gH}(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) \ominus_{gH} f(x_0)}{h}.$$

*If $(f')_{gH}(x_0) \in \mathbb{R}_F$, we say that $f$ is gH-differentiable at $x_0$. In addition, we say that $f$ is $[(i)\text{-}gH]$-differentiable at $x_0$ if*

$$\left[(f')_{gH}(x_0)\right]_r = \left[\underline{f}'(x_0; r), \overline{f}'(x_0; r)\right], \quad 0 \le r \le 1,$$

*and $[(ii)\text{-}gH]$-differentiable at $x_0$ if*

$$\left[(f')_{gH}(x_0)\right]_r = \left[\overline{f}'(x_0; r), \underline{f}'(x_0; r)\right], \quad 0 \le r \le 1.$$
**Definition 6** [19]. *We say that a point $x_0 \in (a, b)$ is a switching point for the differentiability of $f$ if, in any neighborhood $V$ of $x_0$, there exist points $x_1 < x_0 < x_2$ such that: Type (I): at $x_1$, (i) holds while (ii) does not hold and, at $x_2$, (ii) holds and (i) does not hold; Type (II): at $x_1$, (ii) holds while (i) does not hold and, at $x_2$, (i) holds and (ii) does not hold.*

**Definition 7** [20]. *The Caputo time-fractional derivative is defined as follows*

$${}_0^C D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{\frac{\partial}{\partial \xi} f(\xi)}{(t-\xi)^{\alpha}}\, d\xi, \quad 0 < \alpha < 1.$$

*The time-fractional derivative term is approximated as* [21]

$$\frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} \approx \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)}\sum_{j=0}^{n} b_j\left(u_i^{n+1-j} - u_i^{n-j}\right),$$

*where $b_j = (j+1)^{1-\alpha} - j^{1-\alpha}$, $j = 0, 1, 2, \ldots$.*
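As an illustration of the approximation above, the sketch below (assumed notation; uniform grid $t_k = k\Delta t$) computes the L1 weights $b_j$ and checks the result against the exact Caputo derivative of $f(t) = t$, which is $t^{1-\alpha}/\Gamma(2-\alpha)$; for linear $f$ the L1 formula is exact up to rounding:

```python
import math

def l1_caputo(f_vals, dt, alpha):
    """L1 approximation of the Caputo derivative of order 0 < alpha < 1
    at t_{n+1}, given f_vals = [f(t_0), ..., f(t_{n+1})]."""
    n = len(f_vals) - 2  # index of the last completed time level
    b = [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n + 1)]
    s = sum(b[j] * (f_vals[n + 1 - j] - f_vals[n - j]) for j in range(n + 1))
    return dt ** (-alpha) / math.gamma(2 - alpha) * s

alpha, dt, N = 0.5, 0.01, 100
t = [k * dt for k in range(N + 1)]
approx = l1_caputo(t, dt, alpha)                      # f(t) = t
exact = t[-1] ** (1 - alpha) / math.gamma(2 - alpha)  # Caputo derivative of t
```

The agreement is exact here because the L1 scheme integrates piecewise-linear data without error; for smooth nonlinear $f$ the error behaves like $O(\Delta t^{2-\alpha})$, matching the truncation analysis in Section 7.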

**Definition 8** [22]. *The fuzzy fractional Caputo gH-derivative of a fuzzy-valued function $f(t; r)$, based on cases (i) and (ii) of gH-differentiability of $f$, is defined as follows:*

$$(i)\quad \left({}^{C}_{gH}D_t^{\alpha} f\right)(t; r) = \left[{}_0^C D_t^{\alpha} \underline{f}(t; r),\ {}_0^C D_t^{\alpha} \overline{f}(t; r)\right], \quad 0 \le r \le 1,$$

$$(ii)\quad \left({}^{C}_{gH}D_t^{\alpha} f\right)(t; r) = \left[{}_0^C D_t^{\alpha} \overline{f}(t; r),\ {}_0^C D_t^{\alpha} \underline{f}(t; r)\right], \quad 0 \le r \le 1,$$

$$\text{where}\ \ {}_0^C D_t^{\alpha} \underline{f}(t; r) = \frac{1}{\Gamma(1-\alpha)}\int_0^t \frac{\frac{\partial}{\partial \xi}\underline{f}(\xi; r)}{(t-\xi)^{\alpha}}\, d\xi \ \ \text{and}\ \ {}_0^C D_t^{\alpha} \overline{f}(t; r) = \frac{1}{\Gamma(1-\alpha)}\int_0^t \frac{\frac{\partial}{\partial \xi}\overline{f}(\xi; r)}{(t-\xi)^{\alpha}}\, d\xi.$$

**Definition 9** [15]. *(The double-parametric form of fuzzy numbers.) Using the single-parametric form, we write $\tilde{u} = [\underline{u}(r), \overline{u}(r)]$, which may be written as a crisp number using the double-parametric form as*

$$\tilde{u}(r, \beta) = \beta\left[\overline{u}(r) - \underline{u}(r)\right] + \underline{u}(r), \ \text{where } r, \beta \in [0, 1].$$
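A small sketch of Definition 9, using a triangular fuzzy number as an illustrative (not source-specified) example: the single-parametric form gives the $r$-level interval, and the double-parametric form collapses it to one crisp value per pair $(r, \beta)$:

```python
def triangular_r_level(a, b, c, r):
    """r-level interval [u_lower(r), u_upper(r)] of the triangular
    fuzzy number (a, b, c) with peak at b."""
    return a + r * (b - a), c - r * (c - b)

def double_parametric(lower, upper, beta):
    """Collapse the interval [lower, upper] to the crisp value
    u(r, beta) = beta * (upper - lower) + lower, with beta in [0, 1]."""
    return beta * (upper - lower) + lower

lo, hi = triangular_r_level(0.0, 1.0, 2.0, 0.5)  # r-level interval (0.5, 1.5)
mid = double_parametric(lo, hi, 0.5)
```

Setting $\beta = 0$ and $\beta = 1$ recovers the lower and upper bounds, which is exactly how the lower and upper solutions are extracted from the defuzzified equation later in the paper.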

#### **3. High-Order Compact Finite Difference Method in Fuzzy Environment**

Let $\tilde{u}_i^n$ indicate the approximate value of $\tilde{u}$ at $(x_i, t_n)$. Based on the Taylor series expansions of $\tilde{u}_{i+1}^n$ and $\tilde{u}_{i-1}^n$, $\tilde{u}$ can be expanded about $(x_i, t_n)$ to derive the fuzzy high-order compact finite difference scheme for the spatial derivatives:

$$
\begin{array}{l}
\tilde{u}_{i+1}^{n} = \tilde{u}_{i}^{n} + h\left(\dfrac{\partial \tilde{u}}{\partial x}\right)_{i}^{n} + \dfrac{h^{2}}{2}\left(\dfrac{\partial^{2}\tilde{u}}{\partial x^{2}}\right)_{i}^{n} + \dfrac{h^{3}}{6}\left(\dfrac{\partial^{3}\tilde{u}}{\partial x^{3}}\right)_{i}^{n} + \cdots \\[4pt]
\tilde{u}_{i-1}^{n} = \tilde{u}_{i}^{n} - h\left(\dfrac{\partial \tilde{u}}{\partial x}\right)_{i}^{n} + \dfrac{h^{2}}{2}\left(\dfrac{\partial^{2}\tilde{u}}{\partial x^{2}}\right)_{i}^{n} - \dfrac{h^{3}}{6}\left(\dfrac{\partial^{3}\tilde{u}}{\partial x^{3}}\right)_{i}^{n} + \cdots
\end{array} \tag{1}
$$

The Taylor expansions of the first derivative at the nodes $i+1$ and $i-1$ are

$$
\begin{array}{l}
\left(\dfrac{\partial\tilde{u}}{\partial x}\right)_{i+1}^{n} = \left(\dfrac{\partial\tilde{u}}{\partial x}\right)_{i}^{n} + h\left(\dfrac{\partial^{2}\tilde{u}}{\partial x^{2}}\right)_{i}^{n} + \dfrac{h^{2}}{2}\left(\dfrac{\partial^{3}\tilde{u}}{\partial x^{3}}\right)_{i}^{n} + \dfrac{h^{3}}{6}\left(\dfrac{\partial^{4}\tilde{u}}{\partial x^{4}}\right)_{i}^{n} + \cdots \\[4pt]
\left(\dfrac{\partial\tilde{u}}{\partial x}\right)_{i-1}^{n} = \left(\dfrac{\partial\tilde{u}}{\partial x}\right)_{i}^{n} - h\left(\dfrac{\partial^{2}\tilde{u}}{\partial x^{2}}\right)_{i}^{n} + \dfrac{h^{2}}{2}\left(\dfrac{\partial^{3}\tilde{u}}{\partial x^{3}}\right)_{i}^{n} - \dfrac{h^{3}}{6}\left(\dfrac{\partial^{4}\tilde{u}}{\partial x^{4}}\right)_{i}^{n} + \cdots
\end{array} \tag{2}
$$

The Taylor expansions of the second derivative at the nodes $i+1$ and $i-1$ are

$$
\begin{array}{l}
\left(\dfrac{\partial^{2}\tilde{u}}{\partial x^{2}}\right)_{i+1}^{n} = \left(\dfrac{\partial^{2}\tilde{u}}{\partial x^{2}}\right)_{i}^{n} + h\left(\dfrac{\partial^{3}\tilde{u}}{\partial x^{3}}\right)_{i}^{n} + \dfrac{h^{2}}{2}\left(\dfrac{\partial^{4}\tilde{u}}{\partial x^{4}}\right)_{i}^{n} + \dfrac{h^{3}}{6}\left(\dfrac{\partial^{5}\tilde{u}}{\partial x^{5}}\right)_{i}^{n} + \cdots \\[4pt]
\left(\dfrac{\partial^{2}\tilde{u}}{\partial x^{2}}\right)_{i-1}^{n} = \left(\dfrac{\partial^{2}\tilde{u}}{\partial x^{2}}\right)_{i}^{n} - h\left(\dfrac{\partial^{3}\tilde{u}}{\partial x^{3}}\right)_{i}^{n} + \dfrac{h^{2}}{2}\left(\dfrac{\partial^{4}\tilde{u}}{\partial x^{4}}\right)_{i}^{n} - \dfrac{h^{3}}{6}\left(\dfrac{\partial^{5}\tilde{u}}{\partial x^{5}}\right)_{i}^{n} + \cdots
\end{array} \tag{3}
$$

From Equations (1)–(3), the first and second partial derivatives are approximated to give

$$\left(\frac{\partial \tilde{u}}{\partial x}\right)_i^n = \frac{\delta_x/2h}{1+\frac{1}{6}\delta_x^2}\,\tilde{u}_i^n + \frac{h^4}{180}\left(\frac{\partial^5 \tilde{u}}{\partial x^5}\right)_i^n + O\left(h^5\right),\tag{4}$$

$$\left(\frac{\partial^2 \tilde{u}}{\partial x^2}\right)_i^n = \frac{\delta_x^2/h^2}{1+\frac{1}{12}\delta_x^2}\,\tilde{u}_i^n + \frac{h^4}{240}\left(\frac{\partial^6 \tilde{u}}{\partial x^6}\right)_i^n + O\left(h^6\right),\tag{5}$$

where $\delta_x \tilde{u}_i^n = \tilde{u}_{i+1}^n - \tilde{u}_{i-1}^n$ and $\delta_x^2 \tilde{u}_i^n = \tilde{u}_{i+1}^n - 2\tilde{u}_i^n + \tilde{u}_{i-1}^n$ for $0 \le i \le M$, $0 \le n \le N$. Taking into account the averaging operator mentioned in [23], we have

$$\left(1+\frac{1}{6}\delta_x^2\right)\tilde{u}_i^n = \frac{1}{6}\left(\tilde{u}_{i+1}^n + 4\tilde{u}_i^n + \tilde{u}_{i-1}^n\right), \ 1 \le i \le M-1,\tag{6}$$

$$\left(1+\frac{1}{12}\delta_x^2\right)\tilde{u}_i^n = \frac{1}{12}\left(\tilde{u}_{i+1}^n + 10\tilde{u}_i^n + \tilde{u}_{i-1}^n\right), \ 1 \le i \le M-1. \tag{7}$$
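To see Equations (4) and (6) in action, the sketch below solves the resulting tridiagonal Padé system $\frac{1}{6}\left(u'_{i-1} + 4u'_i + u'_{i+1}\right) = \frac{u_{i+1} - u_{i-1}}{2h}$ for the first derivative; the second-order one-sided boundary closures are an illustrative choice, not taken from the source:

```python
import numpy as np

def compact_first_derivative(u, h):
    """Fourth-order compact (Pade) first derivative on interior points:
    (1 + delta_x^2/6) u'_i = delta_x u_i / (2h), cf. Eqs. (4) and (6)."""
    n = len(u)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for i in range(1, n - 1):
        A[i, i - 1 : i + 2] = [1 / 6, 4 / 6, 1 / 6]
        rhs[i] = (u[i + 1] - u[i - 1]) / (2 * h)
    # Second-order one-sided closures at the two boundaries (illustrative)
    A[0, 0] = A[-1, -1] = 1.0
    rhs[0] = (-3 * u[0] + 4 * u[1] - u[2]) / (2 * h)
    rhs[-1] = (3 * u[-1] - 4 * u[-2] + u[-3]) / (2 * h)
    return np.linalg.solve(A, rhs)

x = np.linspace(0, np.pi, 41)
h = x[1] - x[0]
du = compact_first_derivative(np.sin(x), h)
err = np.max(np.abs(du[1:-1] - np.cos(x)[1:-1]))
```

A dense solve is used purely for clarity; in practice the $(1, 4, 1)$ system is tridiagonal and would be solved with the Thomas algorithm, just as the $(1, 6, 1)$ systems of Sections 5 and 6.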

#### **4. Time Fractional Convection–Diffusion Equation in Fuzzy Environment**

A fuzzy fractional convection–diffusion equation describes the transport of particles whose spreading is inconsistent with the classical Brownian motion pattern. The fuzzy fractional convection–diffusion equation can be used for modelling physical problems such as groundwater hydrology and gas transport through heterogeneous soil. Applications also exist in aerodynamics and other fields [24–27].

Let us now consider the general formula of the fuzzy time-fractional convection–diffusion equation involving the boundary and initial conditions [28]

$$\begin{array}{l}
\dfrac{\partial^{\alpha}\tilde{u}(x,t;\alpha)}{\partial t^{\alpha}} = -\tilde{v}(x)\dfrac{\partial\tilde{u}(x,t)}{\partial x} + \tilde{D}(x)\dfrac{\partial^{2}\tilde{u}(x,t)}{\partial x^{2}} + \tilde{q}(x,t), \quad 0 < x < l,\ t > 0, \\[4pt]
\tilde{u}(x,0) = \tilde{f}(x), \quad \tilde{u}(0,t) = \tilde{g}(0,t), \quad \tilde{u}(l,t) = \tilde{z}(l,t),
\end{array} \tag{8}$$

where $\tilde{u}(x,t;\alpha)$ is the density of a quantity, such as fuzzy energy or fuzzy mass, of the crisp variables $x$ and $t$; $\alpha$ is an arbitrary order such that $\frac{\partial^{\alpha}\tilde{u}(x,t;\alpha)}{\partial t^{\alpha}}$ denotes the fuzzy time-fractional generalized Hukuhara derivative (gH-derivative) of order $\alpha$; $\tilde{v}(x)$ is the average velocity of the fuzzy quantity; $\tilde{D}(x)$ is the diffusivity coefficient; $\tilde{q}(x,t)$ is a source function of the crisp variables $x$ and $t$; $\tilde{u}(0,t)$ and $\tilde{u}(l,t)$ are the fuzzy boundary conditions given by the fuzzy convex numbers $\tilde{g}$ and $\tilde{z}$; and $\tilde{u}(x,0)$ is the fuzzy initial condition.

In Equation (8), the fuzzy functions $\tilde{f}(x)$, $\tilde{v}(x)$, $\tilde{D}(x)$, and $\tilde{q}(x)$ are defined as follows [29]:

$$\begin{cases} \tilde{D}(\mathbf{x}) = \tilde{\theta}\_1 s\_1(\mathbf{x}) \\ \tilde{q}(\mathbf{x}) = \tilde{\theta}\_2 s\_2(\mathbf{x}) \\ \tilde{f}(\mathbf{x}) = \tilde{\theta}\_3 s\_3(\mathbf{x}) \\ \tilde{v}(\mathbf{x}) = \tilde{\theta}\_4 s\_4(\mathbf{x}) \end{cases} \tag{9}$$

where $s_1(x)$, $s_2(x)$, $s_3(x)$ and $s_4(x)$ are crisp (classical) functions of the classical variable $x$, and $\tilde{\theta}_1$, $\tilde{\theta}_2$, $\tilde{\theta}_3$ and $\tilde{\theta}_4$ are fuzzy convex numbers. The fuzzy time-fractional convection–diffusion equation is defuzzified based on the double-parametric approach of fuzzy numbers as [14]:

$$[\tilde{u}(x,t)]_r = \left[\underline{u}(x,t;r), \overline{u}(x,t;r)\right], \tag{10}$$

$$\left[\frac{\partial^{\alpha}\tilde{u}(x,t;\alpha)}{\partial t^{\alpha}}\right]_r = \left[\frac{\partial^{\alpha}\underline{u}(x,t;\alpha;r)}{\partial t^{\alpha}}, \frac{\partial^{\alpha}\overline{u}(x,t;\alpha;r)}{\partial t^{\alpha}}\right], \tag{11}$$

$$\left[\frac{\partial\tilde{u}(x,t)}{\partial x}\right]_r = \left[\frac{\partial\underline{u}(x,t;r)}{\partial x}, \frac{\partial\overline{u}(x,t;r)}{\partial x}\right], \tag{12}$$

$$\left[\frac{\partial^{2}\tilde{u}(x,t)}{\partial x^{2}}\right]_r = \left[\frac{\partial^{2}\underline{u}(x,t;r)}{\partial x^{2}}, \frac{\partial^{2}\overline{u}(x,t;r)}{\partial x^{2}}\right], \tag{13}$$

$$[\tilde{v}(x)]_r = \left[\underline{v}(x;r), \overline{v}(x;r)\right], \tag{14}$$

$$[\tilde{D}(x)]_r = \left[\underline{D}(x;r), \overline{D}(x;r)\right], \tag{15}$$

$$[\tilde{q}(x)]_r = \left[\underline{q}(x;r), \overline{q}(x;r)\right], \tag{16}$$

$$[\tilde{u}(x,0)]_r = \left[\underline{u}(x,0;r), \overline{u}(x,0;r)\right], \tag{17}$$

$$[\tilde{u}(0,t)]_r = \left[\underline{u}(0,t;r), \overline{u}(0,t;r)\right], \tag{18}$$

$$[\tilde{u}(l,t)]_r = \left[\underline{u}(l,t;r), \overline{u}(l,t;r)\right], \tag{19}$$

$$[\tilde{f}(x)]_r = \left[\underline{f}(x;r), \overline{f}(x;r)\right], \tag{20}$$

$$\begin{cases} [\tilde{g}]_r = \left[\underline{g}(r), \overline{g}(r)\right], \\ [\tilde{z}]_r = \left[\underline{z}(r), \overline{z}(r)\right], \end{cases} \tag{21}$$

such that

$$\begin{cases} [\tilde{D}(x)]_r = \left[\underline{\theta}_1(r), \overline{\theta}_1(r)\right] s_1(x), \\ [\tilde{q}(x)]_r = \left[\underline{\theta}_2(r), \overline{\theta}_2(r)\right] s_2(x), \\ [\tilde{f}(x)]_r = \left[\underline{\theta}_3(r), \overline{\theta}_3(r)\right] s_3(x), \\ [\tilde{v}(x)]_r = \left[\underline{\theta}_4(r), \overline{\theta}_4(r)\right] s_4(x). \end{cases} \tag{22}$$

By employing the fuzzy extension principle, the membership function is defined as [10]

$$\begin{cases} \underline{u}(x,t;r) = \min\left\{\tilde{u}(\tilde{\mu}(r), t) \mid \tilde{\mu}(r) \in [\tilde{u}(x,t)]_r\right\}, \\ \overline{u}(x,t;r) = \max\left\{\tilde{u}(\tilde{\mu}(r), t) \mid \tilde{\mu}(r) \in [\tilde{u}(x,t)]_r\right\}. \end{cases} \tag{23}$$

Now, for $0 < x < l$, $t > 0$ and $r \in [0, 1]$, Equation (8) is rewritten to yield the general equations of the fuzzy time-fractional convection–diffusion equation

$$\begin{cases} \dfrac{\partial^{\alpha}\underline{u}(x,t;\alpha)}{\partial t^{\alpha}} = -\underline{\theta}_4(r)\, s_4(x)\dfrac{\partial\underline{u}(x,t;r)}{\partial x} + \underline{\theta}_1(r)\, s_1(x)\dfrac{\partial^{2}\underline{u}(x,t;r)}{\partial x^{2}} + \underline{\theta}_2(r)\, s_2(x), \\[4pt] \underline{u}(x,0;r) = \underline{\theta}_3(r)\, s_3(x), \\ \underline{u}(0,t;r) = \underline{g}(r), \quad \underline{u}(l,t;r) = \underline{z}(r), \end{cases} \tag{24}$$

$$\begin{cases} \dfrac{\partial^{\alpha}\overline{u}(x,t;\alpha)}{\partial t^{\alpha}} = -\overline{\theta}_4(r)\, s_4(x)\dfrac{\partial\overline{u}(x,t;r)}{\partial x} + \overline{\theta}_1(r)\, s_1(x)\dfrac{\partial^{2}\overline{u}(x,t;r)}{\partial x^{2}} + \overline{\theta}_2(r)\, s_2(x), \\[4pt] \overline{u}(x,0;r) = \overline{\theta}_3(r)\, s_3(x), \\ \overline{u}(0,t;r) = \overline{g}(r), \quad \overline{u}(l,t;r) = \overline{z}(r). \end{cases} \tag{25}$$

Equations (24) and (25) present the lower and upper bounds of the general formula of the fuzzy time-fractional convection–diffusion equation. Now, for defuzzification, Equation (8), based on the double-parametric form of the fuzzy numbers, as per the single-parametric form, may be expressed as

$$\begin{split} \left[\frac{\partial^{\alpha}\underline{u}(x,t;r)}{\partial t^{\alpha}}, \frac{\partial^{\alpha}\overline{u}(x,t;r)}{\partial t^{\alpha}}\right] &= -\left[\underline{v}(x;r), \overline{v}(x;r)\right]\left[\frac{\partial\underline{u}(x,t;r)}{\partial x}, \frac{\partial\overline{u}(x,t;r)}{\partial x}\right] \\ &\quad + \left[\underline{D}(x;r), \overline{D}(x;r)\right]\left[\frac{\partial^{2}\underline{u}(x,t;r)}{\partial x^{2}}, \frac{\partial^{2}\overline{u}(x,t;r)}{\partial x^{2}}\right] + \left[\underline{q}(x,t;r), \overline{q}(x,t;r)\right], \end{split} \tag{26}$$

subject to the fuzzy boundary and initial conditions

$$\left[\underline{u}(x,0;r), \overline{u}(x,0;r)\right] = \left[\underline{f}(x;r), \overline{f}(x;r)\right], \quad \left[\underline{u}(0,t;r), \overline{u}(0,t;r)\right] = \left[\underline{g}(0,t;r), \overline{g}(0,t;r)\right],$$

and $\left[\underline{u}(l,t;r), \overline{u}(l,t;r)\right] = \left[\underline{z}(l,t;r), \overline{z}(l,t;r)\right]$.

Now, via the double-parametric form (see, e.g., [14]), we rewrite Equation (26) as:

$$\begin{split} \beta\left[\frac{\partial^{\alpha}\overline{u}(x,t;\alpha;r)}{\partial t^{\alpha}} - \frac{\partial^{\alpha}\underline{u}(x,t;\alpha;r)}{\partial t^{\alpha}}\right] + \frac{\partial^{\alpha}\underline{u}(x,t;\alpha;r)}{\partial t^{\alpha}} \\ = -\left\{\beta\left(\overline{v}(x;r) - \underline{v}(x;r)\right) + \underline{v}(x;r)\right\}\left\{\beta\left[\frac{\partial\overline{u}(x,t;r)}{\partial x} - \frac{\partial\underline{u}(x,t;r)}{\partial x}\right] + \frac{\partial\underline{u}(x,t;r)}{\partial x}\right\} \\ + \left\{\beta\left(\overline{D}(x;r) - \underline{D}(x;r)\right) + \underline{D}(x;r)\right\}\left\{\beta\left[\frac{\partial^{2}\overline{u}(x,t;r)}{\partial x^{2}} - \frac{\partial^{2}\underline{u}(x,t;r)}{\partial x^{2}}\right] + \frac{\partial^{2}\underline{u}(x,t;r)}{\partial x^{2}}\right\} \\ + \left\{\beta\left(\overline{q}(x,t;r) - \underline{q}(x,t;r)\right) + \underline{q}(x,t;r)\right\}, \end{split} \tag{27}$$

subjected to fuzzy initial and boundary conditions

$$\begin{cases} \left\{\beta\left(\overline{u}(x,0;r) - \underline{u}(x,0;r)\right) + \underline{u}(x,0;r)\right\} = \left\{\beta\left(\overline{f}(x;r) - \underline{f}(x;r)\right) + \underline{f}(x;r)\right\}, \\ \left\{\beta\left(\overline{u}(0,t;r) - \underline{u}(0,t;r)\right) + \underline{u}(0,t;r)\right\} = \left\{\beta\left(\overline{g}(x;r) - \underline{g}(x;r)\right) + \underline{g}(x;r)\right\}, \end{cases}$$

and

$$\{\beta\left(\overline{u}(l,t;r) - \underline{u}(l,t;r)\right) + \underline{u}(l,t;r)\} = \{\beta\left(\overline{z}(\mathbf{x};r) - \underline{z}(\mathbf{x};r)\right) + \underline{z}(\mathbf{x};r)\},$$

where $\beta \in [0, 1]$. Now we denote

$$\begin{split}
\frac{\partial^{\alpha}\tilde{u}(x,t;r,\beta)}{\partial t^{\alpha}} &= \beta\left[\frac{\partial^{\alpha}\overline{u}(x,t;\alpha;r)}{\partial t^{\alpha}} - \frac{\partial^{\alpha}\underline{u}(x,t;\alpha;r)}{\partial t^{\alpha}}\right] + \frac{\partial^{\alpha}\underline{u}(x,t;\alpha;r)}{\partial t^{\alpha}}, \\
\tilde{v}(x)\frac{\partial\tilde{u}(x,t;r,\beta)}{\partial x} &= \left\{\beta\left(\overline{v}(x;r) - \underline{v}(x;r)\right) + \underline{v}(x;r)\right\}\left\{\beta\left[\frac{\partial\overline{u}(x,t;r)}{\partial x} - \frac{\partial\underline{u}(x,t;r)}{\partial x}\right] + \frac{\partial\underline{u}(x,t;r)}{\partial x}\right\}, \\
\tilde{D}(x)\frac{\partial^{2}\tilde{u}(x,t;r,\beta)}{\partial x^{2}} &= \left\{\beta\left(\overline{D}(x;r) - \underline{D}(x;r)\right) + \underline{D}(x;r)\right\}\left\{\beta\left[\frac{\partial^{2}\overline{u}(x,t;r)}{\partial x^{2}} - \frac{\partial^{2}\underline{u}(x,t;r)}{\partial x^{2}}\right] + \frac{\partial^{2}\underline{u}(x,t;r)}{\partial x^{2}}\right\}, \\
\tilde{q}(x,t;r,\beta) &= \left\{\beta\left(\overline{q}(x,t;r) - \underline{q}(x,t;r)\right) + \underline{q}(x,t;r)\right\}, \\
\tilde{u}(x,0;r,\beta) &= \left\{\beta\left(\overline{u}(x,0;r) - \underline{u}(x,0;r)\right) + \underline{u}(x,0;r)\right\}, \quad \tilde{f}(x;r,\beta) = \left\{\beta\left(\overline{f}(x;r) - \underline{f}(x;r)\right) + \underline{f}(x;r)\right\}, \\
\tilde{u}(0,t;r,\beta) &= \left\{\beta\left(\overline{u}(0,t;r) - \underline{u}(0,t;r)\right) + \underline{u}(0,t;r)\right\}, \quad \tilde{g}(x;r,\beta) = \left\{\beta\left(\overline{g}(x;r) - \underline{g}(x;r)\right) + \underline{g}(x;r)\right\}, \\
\tilde{u}(l,t;r,\beta) &= \left\{\beta\left(\overline{u}(l,t;r) - \underline{u}(l,t;r)\right) + \underline{u}(l,t;r)\right\}, \quad \tilde{z}(x;r,\beta) = \left\{\beta\left(\overline{z}(x;r) - \underline{z}(x;r)\right) + \underline{z}(x;r)\right\}.
\end{split}$$

Substituting these equations into Equation (26) reveals

$$\begin{split} \frac{\partial^{\alpha}\tilde{u}(x,t;r,\beta)}{\partial t^{\alpha}} &= -\tilde{v}(x)\frac{\partial\tilde{u}(x,t;r,\beta)}{\partial x} + \tilde{D}(x)\frac{\partial^{2}\tilde{u}(x,t;r,\beta)}{\partial x^{2}} + \tilde{q}(x,t;r,\beta), \quad 0 < x < l,\ t > 0, \\ \tilde{u}(x,0;r,\beta) &= \tilde{f}(x;r,\beta), \quad \tilde{u}(0,t;r,\beta) = \tilde{g}(x;r,\beta), \quad \tilde{u}(l,t;r,\beta) = \tilde{z}(x;r,\beta). \end{split} \tag{28}$$

To obtain the lower and upper solutions of Equation (28) in the single-parametric form, set $\beta = 0$ and $\beta = 1$, respectively, to obtain

$$
\tilde{u}(x,t;r,0) = \underline{u}(x,t;r) \quad \text{and} \quad \tilde{u}(x,t;r,1) = \overline{u}(x,t;r).
$$

#### **5. The Fuzzy Fourth-Order Compact Implicit Scheme Method for the Solution of FTFCDE**

In this section, the fourth-order compact implicit scheme method is developed and applied to the double-parametric form of fuzzy numbers, utilizing a fourth-order approximation at time level $n + \frac{1}{2}$. In addition, the first- and second-order space derivatives are discretized, and a fuzzy Caputo gH-derivative formula is used to approximate the time-fractional derivative, in order to solve the fuzzy time-fractional convection–diffusion equation.

To obtain an approximate solution to the fuzzy time-fractional convection–diffusion equation based on the fuzzy fourth-order compact implicit scheme method, the fuzzy Caputo gH-derivative formula is applied to approximate the fuzzy time-fractional derivative in Equation (8). The first and second space partial derivatives are approximated by using Equations (4) and (5), respectively, as

$$\frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)}\left[\tilde{u}_{i}^{n+1} - \tilde{u}_{i}^{n} + \sum_{j=1}^{n} b_j\left(\tilde{u}_{i}^{n+1-j} - \tilde{u}_{i}^{n-j}\right)\right] = -\tilde{v}(x;r)\,\frac{\delta_x/2h}{1+\frac{1}{6}\delta_x^2}\,\tilde{u}_{i}^{n+\frac{1}{2}} + \tilde{D}(x;r)\,\frac{\delta_x^2/h^2}{1+\frac{1}{12}\delta_x^2}\,\tilde{u}_{i}^{n+\frac{1}{2}} + \tilde{q}(x;r). \tag{29}$$

Hence, using Equations (6) and (7), Equation (29) can be simplified to give

$$\begin{split} \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)} \times \frac{3}{12}\Big((\tilde{u}_{i+1}^{n+1} + 6\tilde{u}_{i}^{n+1} + \tilde{u}_{i-1}^{n+1}) - (\tilde{u}_{i+1}^{n} + 6\tilde{u}_{i}^{n} + \tilde{u}_{i-1}^{n}) \\ + \sum_{j=1}^{n} b_j\big[(\tilde{u}_{i+1}^{n+1-j} + 6\tilde{u}_{i}^{n+1-j} + \tilde{u}_{i-1}^{n+1-j}) - (\tilde{u}_{i+1}^{n-j} + 6\tilde{u}_{i}^{n-j} + \tilde{u}_{i-1}^{n-j})\big]\Big) \\ = -\tilde{v}(x;r)\,\frac{\tilde{u}_{i+1}^{n+\frac{1}{2}} - \tilde{u}_{i-1}^{n+\frac{1}{2}}}{2h} + \tilde{D}(x;r)\,\frac{\tilde{u}_{i+1}^{n+\frac{1}{2}} - 2\tilde{u}_{i}^{n+\frac{1}{2}} + \tilde{u}_{i-1}^{n+\frac{1}{2}}}{h^2} + \left(\tilde{q}_{i+1}^{n+\frac{1}{2}} + 6\tilde{q}_{i}^{n+\frac{1}{2}} + \tilde{q}_{i-1}^{n+\frac{1}{2}}\right). \tag{30}$$

$$\begin{split} \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)} \times \frac{3}{12}\Big((\tilde{u}_{i+1}^{n+1} + 6\tilde{u}_{i}^{n+1} + \tilde{u}_{i-1}^{n+1}) - (\tilde{u}_{i+1}^{n} + 6\tilde{u}_{i}^{n} + \tilde{u}_{i-1}^{n}) \\ + \sum_{j=1}^{n} b_j\big[(\tilde{u}_{i+1}^{n+1-j} + 6\tilde{u}_{i}^{n+1-j} + \tilde{u}_{i-1}^{n+1-j}) - (\tilde{u}_{i+1}^{n-j} + 6\tilde{u}_{i}^{n-j} + \tilde{u}_{i-1}^{n-j})\big]\Big) \\ = -\tilde{v}(x;r)\,\frac{1}{2}\left[\frac{\tilde{u}_{i+1}^{n+1} - \tilde{u}_{i-1}^{n+1}}{2h} + \frac{\tilde{u}_{i+1}^{n} - \tilde{u}_{i-1}^{n}}{2h}\right] \\ + \tilde{D}(x;r)\,\frac{1}{2}\left[\frac{\tilde{u}_{i+1}^{n+1} - 2\tilde{u}_{i}^{n+1} + \tilde{u}_{i-1}^{n+1}}{h^2} + \frac{\tilde{u}_{i+1}^{n} - 2\tilde{u}_{i}^{n} + \tilde{u}_{i-1}^{n}}{h^2}\right] + \tilde{q}(x,t;r). \end{split} \tag{31}$$

Therefore, we have

$$\begin{split} \tilde{u}_{i+1}^{n+1} + 6\tilde{u}_{i}^{n+1} + \tilde{u}_{i-1}^{n+1} - \tilde{u}_{i+1}^{n} - 6\tilde{u}_{i}^{n} - \tilde{u}_{i-1}^{n} &+ \sum_{j=1}^{n} b_j\big[(\tilde{u}_{i+1}^{n+1-j} + 6\tilde{u}_{i}^{n+1-j} + \tilde{u}_{i-1}^{n+1-j}) - (\tilde{u}_{i+1}^{n-j} + 6\tilde{u}_{i}^{n-j} + \tilde{u}_{i-1}^{n-j})\big] \\ &= -\frac{\tilde{v}(x;r)\Delta t^{\alpha}\Gamma(2-\alpha)}{h}\big[3\tilde{u}_{i+1}^{n+1} - 3\tilde{u}_{i-1}^{n+1} + 3\tilde{u}_{i+1}^{n} - 3\tilde{u}_{i-1}^{n}\big] \\ &\quad + \frac{\tilde{D}(x;r)\Delta t^{\alpha}\Gamma(2-\alpha)}{h^2}\big[6\tilde{u}_{i+1}^{n+1} - 12\tilde{u}_{i}^{n+1} + 6\tilde{u}_{i-1}^{n+1} + 6\tilde{u}_{i+1}^{n} - 12\tilde{u}_{i}^{n} + 6\tilde{u}_{i-1}^{n}\big] \\ &\quad + 12\Delta t^{\alpha}\Gamma(2-\alpha)\big[\tilde{q}_{i+1} + 6\tilde{q}_{i} + \tilde{q}_{i-1}\big]. \end{split} \tag{32}$$

Now, assume $\tilde{p}_1(r) = \frac{\tilde{v}(x,t;r)\,\Gamma(2-\alpha)\,\Delta t^{\alpha}}{h}$ and $\tilde{p}_2(r) = \frac{\tilde{D}(x,t;r)\,\Gamma(2-\alpha)\,\Delta t^{\alpha}}{h^2}$. Then, in view of Equation (32), we derive

$$\begin{split} \tilde{u}_{i+1}^{n+1} + 6\tilde{u}_{i}^{n+1} + \tilde{u}_{i-1}^{n+1} - \tilde{u}_{i+1}^{n} - 6\tilde{u}_{i}^{n} - \tilde{u}_{i-1}^{n} &+ \sum_{j=1}^{n} b_j\big[(\tilde{u}_{i+1}^{n+1-j} + 6\tilde{u}_{i}^{n+1-j} + \tilde{u}_{i-1}^{n+1-j}) - (\tilde{u}_{i+1}^{n-j} + 6\tilde{u}_{i}^{n-j} + \tilde{u}_{i-1}^{n-j})\big] \\ &= \big[-3\tilde{p}_1\tilde{u}_{i+1}^{n+1} + 3\tilde{p}_1\tilde{u}_{i-1}^{n+1} - 3\tilde{p}_1\tilde{u}_{i+1}^{n} + 3\tilde{p}_1\tilde{u}_{i-1}^{n}\big] \\ &\quad + \big[6\tilde{p}_2\tilde{u}_{i+1}^{n+1} - 12\tilde{p}_2\tilde{u}_{i}^{n+1} + 6\tilde{p}_2\tilde{u}_{i-1}^{n+1} + 6\tilde{p}_2\tilde{u}_{i+1}^{n} - 12\tilde{p}_2\tilde{u}_{i}^{n} + 6\tilde{p}_2\tilde{u}_{i-1}^{n}\big] \\ &\quad + 12\Delta t^{\alpha}\Gamma(2-\alpha)\big[\tilde{q}_{i+1} + 6\tilde{q}_{i} + \tilde{q}_{i-1}\big]. \end{split} \tag{33}$$

Thus, we simplify Equation (33) to obtain a general formula for the fourth-order compact implicit scheme method of the FTFCDE as follows

$$\begin{split} (1 + 3\tilde{p}_1 - 6\tilde{p}_2)\,\tilde{u}_{i+1}^{n+1} + (6 + 12\tilde{p}_2)\,\tilde{u}_{i}^{n+1} + (1 - 3\tilde{p}_1 - 6\tilde{p}_2)\,\tilde{u}_{i-1}^{n+1} \\ = (1 - 3\tilde{p}_1 + 6\tilde{p}_2)\,\tilde{u}_{i+1}^{n} + (6 - 12\tilde{p}_2)\,\tilde{u}_{i}^{n} + (1 + 3\tilde{p}_1 + 6\tilde{p}_2)\,\tilde{u}_{i-1}^{n} \\ - \sum_{j=1}^{n} b_j\big[(\tilde{u}_{i+1}^{n+1-j} + 6\tilde{u}_{i}^{n+1-j} + \tilde{u}_{i-1}^{n+1-j}) - (\tilde{u}_{i+1}^{n-j} + 6\tilde{u}_{i}^{n-j} + \tilde{u}_{i-1}^{n-j})\big] \\ + 12\Delta t^{\alpha}\Gamma(2-\alpha)\big[\tilde{q}_{i+1} + 6\tilde{q}_{i} + \tilde{q}_{i-1}\big]. \end{split} \tag{34}$$
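A crisp, single-step sketch of scheme (34) for fixed $r$ and $\beta$ is given below. All names are illustrative assumptions: `p1` and `p2` are the scalars defined before Equation (33), `q` holds nodal source values, Dirichlet boundary data are assumed, and the full solution history is retained for the memory term:

```python
import numpy as np
from math import gamma

def compact_implicit_step(history, p1, p2, q, dt, alpha, bc):
    """Advance scheme (34) one step. history[k] is the solution at level k,
    q holds nodal source values, bc = (left, right) Dirichlet values."""
    u = history[-1]
    M = len(u) - 1
    n = len(history) - 1  # current time level index
    b = [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n + 1)]

    def weighted(v):  # the (1, 6, 1) stencil sum at interior nodes
        return v[2:] + 6 * v[1:-1] + v[:-2]

    rhs = ((1 - 3 * p1 + 6 * p2) * u[2:] + (6 - 12 * p2) * u[1:-1]
           + (1 + 3 * p1 + 6 * p2) * u[:-2]
           + 12 * dt ** alpha * gamma(2 - alpha) * weighted(q))
    # memory term: - sum_{j=1}^{n} b_j [W u^{n+1-j} - W u^{n-j}]
    for j in range(1, n + 1):
        rhs -= b[j] * (weighted(history[n + 1 - j]) - weighted(history[n - j]))

    A = (np.diag((6 + 12 * p2) * np.ones(M - 1))
         + np.diag((1 + 3 * p1 - 6 * p2) * np.ones(M - 2), 1)
         + np.diag((1 - 3 * p1 - 6 * p2) * np.ones(M - 2), -1))
    # move the known boundary values of u^{n+1} to the right-hand side
    rhs[0] -= (1 - 3 * p1 - 6 * p2) * bc[0]
    rhs[-1] -= (1 + 3 * p1 - 6 * p2) * bc[1]
    new = np.empty_like(u)
    new[0], new[-1] = bc
    new[1:-1] = np.linalg.solve(A, rhs)
    return new
```

As a sanity check, a constant state with zero source is reproduced exactly, consistent with the underlying PDE; a dense solve replaces the tridiagonal Thomas algorithm purely for brevity.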

#### **6. The Fuzzy Fourth-Order FTCS Method for the Solution of FTFCDE**

In this section, the fourth-order FTCS method is developed and applied to the double-parametric form of fuzzy numbers, implementing the fourth-order approximation at time level $n$. Then, we discretize the first- and second-order space derivatives and use the fuzzy Caputo gH-derivative formula to approximate the time-fractional derivative, which gives rise to the solution of the fuzzy time-fractional convection–diffusion equation.

To obtain an approximate solution to the fuzzy time-fractional convection–diffusion equation based on the fuzzy fourth-order compact FTCS method, the fuzzy Caputo gH-derivative formula is applied to approximate the fuzzy time-fractional derivative in Equation (8). The first and second space partial derivatives are then approximated by using Equations (4) and (5), respectively, to yield

$$\frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)}\left[\tilde{u}_{i}^{n+1} - \tilde{u}_{i}^{n} + \sum_{j=1}^{n} b_j\left(\tilde{u}_{i}^{n+1-j} - \tilde{u}_{i}^{n-j}\right)\right] = -\tilde{v}(x;r)\,\frac{\delta_x/2h}{1+\frac{1}{6}\delta_x^2}\,\tilde{u}_{i}^{n} + \tilde{D}(x;r)\,\frac{\delta_x^2/h^2}{1+\frac{1}{12}\delta_x^2}\,\tilde{u}_{i}^{n} + \tilde{q}(x;r). \tag{35}$$

Hence, simplifying Equation (35) reveals

$$\begin{split} \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)} \times \frac{3}{12}\Big((\tilde{u}_{i+1}^{n+1} + 6\tilde{u}_{i}^{n+1} + \tilde{u}_{i-1}^{n+1}) - (\tilde{u}_{i+1}^{n} + 6\tilde{u}_{i}^{n} + \tilde{u}_{i-1}^{n}) \\ + \sum_{j=1}^{n} b_j\big[(\tilde{u}_{i+1}^{n+1-j} + 6\tilde{u}_{i}^{n+1-j} + \tilde{u}_{i-1}^{n+1-j}) - (\tilde{u}_{i+1}^{n-j} + 6\tilde{u}_{i}^{n-j} + \tilde{u}_{i-1}^{n-j})\big]\Big) \\ = -\tilde{v}(x;r)\,\frac{\tilde{u}_{i+1}^{n} - \tilde{u}_{i-1}^{n}}{2h} + \tilde{D}(x;r)\,\frac{\tilde{u}_{i+1}^{n} - 2\tilde{u}_{i}^{n} + \tilde{u}_{i-1}^{n}}{h^2} + \left(\tilde{q}_{i+1}^{n} + 6\tilde{q}_{i}^{n} + \tilde{q}_{i-1}^{n}\right). \end{split} \tag{36}$$

Therefore, we obtain

$$\begin{split} \tilde{u}_{i+1}^{n+1} + 6\tilde{u}_{i}^{n+1} + \tilde{u}_{i-1}^{n+1} - \tilde{u}_{i+1}^{n} - 6\tilde{u}_{i}^{n} - \tilde{u}_{i-1}^{n} &+ \sum_{j=1}^{n} b_j\big[(\tilde{u}_{i+1}^{n+1-j} + 6\tilde{u}_{i}^{n+1-j} + \tilde{u}_{i-1}^{n+1-j}) - (\tilde{u}_{i+1}^{n-j} + 6\tilde{u}_{i}^{n-j} + \tilde{u}_{i-1}^{n-j})\big] \\ &= -\frac{\tilde{v}(x;r)\Delta t^{\alpha}\Gamma(2-\alpha)}{h}\big[6\tilde{u}_{i+1}^{n} - 6\tilde{u}_{i-1}^{n}\big] \\ &\quad + \frac{\tilde{D}(x;r)\Delta t^{\alpha}\Gamma(2-\alpha)}{h^2}\big[12\tilde{u}_{i+1}^{n} - 24\tilde{u}_{i}^{n} + 12\tilde{u}_{i-1}^{n}\big] + 12\Delta t^{\alpha}\Gamma(2-\alpha)\big[\tilde{q}_{i+1}^{n} + 6\tilde{q}_{i}^{n} + \tilde{q}_{i-1}^{n}\big]. \end{split} \tag{37}$$
 
Now, let $\widetilde{p}\_{1}(r) = \frac{\widetilde{v}(x,r)\Delta t^{\alpha}\Gamma(2-\alpha)}{h}$ and $\widetilde{p}\_{2}(r) = \frac{\widetilde{D}(x,r)\Delta t^{\alpha}\Gamma(2-\alpha)}{h^{2}}$. Then, from Equation (37), we write

$$\begin{split} \widetilde{u}\_{i+1}^{n+1} + 6\widetilde{u}\_{i}^{n+1} + \widetilde{u}\_{i-1}^{n+1} - \widetilde{u}\_{i+1}^{n} - 6\widetilde{u}\_{i}^{n} - \widetilde{u}\_{i-1}^{n} + \sum\_{j=1}^{n} b\_{j}\big[\big(\widetilde{u}\_{i+1}^{n+1-j} + 6\widetilde{u}\_{i}^{n+1-j} + \widetilde{u}\_{i-1}^{n+1-j}\big) - \big(\widetilde{u}\_{i+1}^{n-j} + 6\widetilde{u}\_{i}^{n-j} + \widetilde{u}\_{i-1}^{n-j}\big)\big] \\ = \big[-6\widetilde{p}\_{1}\widetilde{u}\_{i+1}^{n} + 6\widetilde{p}\_{1}\widetilde{u}\_{i-1}^{n}\big] + \big[12\widetilde{p}\_{2}\widetilde{u}\_{i+1}^{n} - 24\widetilde{p}\_{2}\widetilde{u}\_{i}^{n} + 12\widetilde{p}\_{2}\widetilde{u}\_{i-1}^{n}\big] \\ + 12\Delta t^{\alpha}\Gamma(2-\alpha)\big(\widetilde{q}\_{i+1} + 6\widetilde{q}\_{i} + \widetilde{q}\_{i-1}\big). \end{split} \tag{38}$$

By simplifying Equation (38), we obtain the general formula for the fourth-order compact FTCS of the FTFCDE in the form

$$\begin{split} \widetilde{u}\_{i+1}^{n+1} + 6\widetilde{u}\_{i}^{n+1} + \widetilde{u}\_{i-1}^{n+1} = \big(1 - 6\widetilde{p}\_{1} + 12\widetilde{p}\_{2}\big)\widetilde{u}\_{i+1}^{n} + \big(6 - 24\widetilde{p}\_{2}\big)\widetilde{u}\_{i}^{n} + \big(1 + 6\widetilde{p}\_{1} + 12\widetilde{p}\_{2}\big)\widetilde{u}\_{i-1}^{n} \\ - \sum\_{j=1}^{n} b\_{j}\big[\big(\widetilde{u}\_{i+1}^{n+1-j} + 6\widetilde{u}\_{i}^{n+1-j} + \widetilde{u}\_{i-1}^{n+1-j}\big) - \big(\widetilde{u}\_{i+1}^{n-j} + 6\widetilde{u}\_{i}^{n-j} + \widetilde{u}\_{i-1}^{n-j}\big)\big] \\ + 12\Delta t^{\alpha}\Gamma(2-\alpha)\big(\widetilde{q}\_{i+1} + 6\widetilde{q}\_{i} + \widetilde{q}\_{i-1}\big). \end{split} \tag{39}$$
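As a concrete illustration of how the update in Equation (39) is marched in time, the following sketch (ours, not the authors' code; the grid sizes, parameter values, and function names are illustrative) assembles the right-hand side of (39) from the stored history of time levels:

```python
import numpy as np

# Sketch of one explicit step of the compact FTCS scheme (39). b_j are the
# Caputo L1-type weights; p1, p2 are the parameters defined after (37).

def b(j, alpha):
    # b_j = (j + 1)^(1 - alpha) - j^(1 - alpha)
    return (j + 1) ** (1 - alpha) - j ** (1 - alpha)

def ftcs_rhs(U, p1, p2, alpha, q, c):
    """Right-hand side of (39) at interior nodes.

    U: list of past levels U[m][i] (U[-1] is the current level n);
    c: the factor 12 * dt^alpha * Gamma(2 - alpha) multiplying q.
    """
    n = len(U) - 1
    u = U[n]
    rhs = np.zeros_like(u)
    for i in range(1, len(u) - 1):
        rhs[i] = ((1 - 6 * p1 + 12 * p2) * u[i + 1]
                  + (6 - 24 * p2) * u[i]
                  + (1 + 6 * p1 + 12 * p2) * u[i - 1])
        for j in range(1, n + 1):
            rhs[i] -= b(j, alpha) * (
                (U[n + 1 - j][i + 1] + 6 * U[n + 1 - j][i] + U[n + 1 - j][i - 1])
                - (U[n - j][i + 1] + 6 * U[n - j][i] + U[n - j][i - 1]))
        rhs[i] += c * (q[i + 1] + 6 * q[i] + q[i - 1])
    return rhs

# Sanity check: with q = 0 and a single constant level, the right side is
# 8 * const, matching the compact sum u_{i+1} + 6 u_i + u_{i-1} on the left.
rhs = ftcs_rhs([np.ones(6)], 0.3, 0.2, 0.5, np.zeros(6), 0.0)
assert np.allclose(rhs[1:-1], 8.0)
```

The new level $\widetilde{u}^{n+1}$ then follows from solving the tridiagonal system $\widetilde{u}\_{i+1}^{n+1} + 6\widetilde{u}\_{i}^{n+1} + \widetilde{u}\_{i-1}^{n+1} = \mathrm{rhs}\_{i}$ together with the boundary values.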

#### **7. The Truncation Error Analysis**

In this section, the truncation error of Equation (34) is considered by employing the Taylor series expansion to give

$$\begin{split} \widetilde{T}\big(x\_{i}, t\_{n+\frac{1}{2}}\big) &= \frac{1}{\Gamma(2-\alpha)\Delta t^{\alpha}}\sum\_{j=0}^{n} b\_{j}\big(\widetilde{u}\_{i}^{n+1-j} - \widetilde{u}\_{i}^{n-j}\big) + \frac{\delta\_{x}/2h}{1+\frac{1}{6}\delta\_{x}^{2}}\,\widetilde{u}\_{i}^{n+\frac{1}{2}} - \frac{\delta\_{x}^{2}/h^{2}}{1+\frac{1}{12}\delta\_{x}^{2}}\,\widetilde{u}\_{i}^{n+\frac{1}{2}} \\ &= \left[\frac{1}{\Gamma(2-\alpha)\Delta t^{\alpha}}\sum\_{j=0}^{n} b\_{j}\big(\widetilde{u}\_{i}^{n+1-j} - \widetilde{u}\_{i}^{n-j}\big) - \frac{\partial^{\alpha}\widetilde{u}}{\partial t^{\alpha}}\Big|\_{i}^{n+\frac{1}{2}}\right] + \left[\frac{\delta\_{x}/2h}{1+\frac{1}{6}\delta\_{x}^{2}}\,\widetilde{u}\_{i}^{n+\frac{1}{2}} - \frac{\partial \widetilde{u}}{\partial x}\Big|\_{i}^{n+\frac{1}{2}}\right] + \left[\frac{\partial^{2}\widetilde{u}}{\partial x^{2}}\Big|\_{i}^{n+\frac{1}{2}} - \frac{\delta\_{x}^{2}/h^{2}}{1+\frac{1}{12}\delta\_{x}^{2}}\,\widetilde{u}\_{i}^{n+\frac{1}{2}}\right] \\ &= O\big((\Delta t)^{2-\alpha}\big) + \frac{h^{4}}{180}\frac{\partial^{5}\widetilde{u}}{\partial x^{5}}\Big|\_{i}^{n+\frac{1}{2}} + \frac{h^{4}}{240}\frac{\partial^{6}\widetilde{u}}{\partial x^{6}}\Big|\_{i}^{n+\frac{1}{2}} = O\big((\Delta t)^{2-\alpha}\big) + O\big(h^{4}\big). \end{split} \tag{40}$$

It is worth mentioning here that the truncation error for the second-order implicit scheme of Equation (14) is $O\big((\Delta t)^{2-\alpha}\big) + O\big(h^{2}\big)$.

#### **8. Stability Analysis**

The Fourier method is used in this section to examine the stability of the presented method for the fuzzy time-fractional convection–diffusion equation. First, suppose that the discretization of the initial condition introduces the fuzzy error $\widetilde{\varepsilon}\_{i}^{0}$, so that $\widetilde{u}\_{i}^{0} = \acute{u}\_{i}^{0} - \widetilde{\varepsilon}\_{i}^{0}$, where $\widetilde{u}\_{i}^{n}$ and $\acute{u}\_{i}^{n}$ are the numerical fuzzy solutions of the fourth-order compact formula in Equation (14). Let $[\widetilde{u}\_{i+1}^{n}(x,t;\alpha)]\_{r} = \beta[\overline{u}(r) - \underline{u}(r)] + \underline{u}(r)$, where $r, \beta \in [0,1]$. Then, we define the fuzzy error as

$$[\widetilde{\varepsilon}\_{i}^{n}]\_{r} = \big[\acute{u}\_{i}^{n} - \widetilde{u}\_{i}^{n}\big]\_{r}, \quad n = 1, 2, 3, \ldots, T \times M; \; i = 1, 2, 3, \ldots, X - 1. \tag{41}$$

Then, by making use of the presented approach of [30], Equation (14) can be read as

$$\begin{split} \big(1 + 3\widetilde{p}\_{1} - 6\widetilde{p}\_{2}\big)\widetilde{u}\_{i+1}^{n+1} + \big(6 + 12\widetilde{p}\_{2}\big)\widetilde{u}\_{i}^{n+1} + \big(1 - 3\widetilde{p}\_{1} - 6\widetilde{p}\_{2}\big)\widetilde{u}\_{i-1}^{n+1} \\ = \big(1 - 3\widetilde{p}\_{1} + 6\widetilde{p}\_{2} - b\_{1}\big)\widetilde{u}\_{i+1}^{n} + \big(6 - 12\widetilde{p}\_{2} - 6b\_{1}\big)\widetilde{u}\_{i}^{n} + \big(1 + 3\widetilde{p}\_{1} + 6\widetilde{p}\_{2} - b\_{1}\big)\widetilde{u}\_{i-1}^{n} \\ - \sum\_{j=1}^{n-1}\big(b\_{j+1} - b\_{j}\big)\big(\widetilde{u}\_{i+1}^{n-j} + 6\widetilde{u}\_{i}^{n-j} + \widetilde{u}\_{i-1}^{n-j}\big) + b\_{n}\big(\widetilde{u}\_{i+1}^{0} + 6\widetilde{u}\_{i}^{0} + \widetilde{u}\_{i-1}^{0}\big). \end{split} \tag{42}$$
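The left-hand side of Equation (42) couples three unknowns per node, so each time step requires a tridiagonal solve. A minimal sketch (ours, not the authors' code; the grid size, parameter values, and right-hand side are made-up, and zero Dirichlet boundary values are assumed):

```python
import numpy as np

# Illustrative solve of the tridiagonal system on the left-hand side of (42).
M, p1, p2 = 9, 0.05, 0.2                     # interior unknowns, parameters
lo = 1 - 3 * p1 - 6 * p2                     # coefficient of u_{i-1}^{n+1}
di = 6 + 12 * p2                             # coefficient of u_i^{n+1}
up = 1 + 3 * p1 - 6 * p2                     # coefficient of u_{i+1}^{n+1}
A = (np.diag(np.full(M, di))
     + np.diag(np.full(M - 1, lo), -1)
     + np.diag(np.full(M - 1, up), +1))
rhs = np.ones(M)                             # stands in for the assembled
                                             # history terms on the right
u_new = np.linalg.solve(A, rhs)              # interior values at level n + 1
assert np.allclose(A @ u_new, rhs)
```

For larger grids a dedicated tridiagonal (Thomas) solver would be preferable; `np.linalg.solve` is used here only to keep the sketch short.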

The error bound of Equation (42) therefore has the form

$$\begin{split} \big(1 + 3\widetilde{p}\_{1} - 6\widetilde{p}\_{2}\big)\widetilde{\varepsilon}\_{i+1}^{n+1} + \big(6 + 12\widetilde{p}\_{2}\big)\widetilde{\varepsilon}\_{i}^{n+1} + \big(1 - 3\widetilde{p}\_{1} - 6\widetilde{p}\_{2}\big)\widetilde{\varepsilon}\_{i-1}^{n+1} \\ = \big(1 - 3\widetilde{p}\_{1} + 6\widetilde{p}\_{2} - b\_{1}\big)\widetilde{\varepsilon}\_{i+1}^{n} + \big(6 - 12\widetilde{p}\_{2} - 6b\_{1}\big)\widetilde{\varepsilon}\_{i}^{n} + \big(1 + 3\widetilde{p}\_{1} + 6\widetilde{p}\_{2} - b\_{1}\big)\widetilde{\varepsilon}\_{i-1}^{n} \\ - \sum\_{j=1}^{n-1}\big(b\_{j+1} - b\_{j}\big)\big(\widetilde{\varepsilon}\_{i+1}^{n-j} + 6\widetilde{\varepsilon}\_{i}^{n-j} + \widetilde{\varepsilon}\_{i-1}^{n-j}\big) + b\_{n}\big(\widetilde{\varepsilon}\_{i+1}^{0} + 6\widetilde{\varepsilon}\_{i}^{0} + \widetilde{\varepsilon}\_{i-1}^{0}\big), \end{split} \tag{43}$$

provided that $\widetilde{\varepsilon}\_{0}^{n} = \widetilde{\varepsilon}\_{X}^{n} = 0$, $n = 1, 2, \ldots, T \times M$.

Let $\widetilde{\varepsilon}^{n} = [\widetilde{\varepsilon}\_{1}^{n}, \widetilde{\varepsilon}\_{2}^{n}, \ldots, \widetilde{\varepsilon}\_{X-1}^{n}]$. Then, the fuzzy norm is introduced as

$$\|\widetilde{\varepsilon}^{n}\|\_{2} = \sqrt{\sum\_{i=1}^{X-1} h\left|\widetilde{\varepsilon}\_{i}^{n}\right|^{2}}.$$

Then, it yields

$$\|\widetilde{\varepsilon}^{n}\|\_{2}^{2} = \sum\_{i=1}^{X-1} h\left|\widetilde{\varepsilon}\_{i}^{n}\right|^{2}. \tag{44}$$

Suppose that $\widetilde{\varepsilon}\_{i}^{n}$ can be expressed in the form

$$\widetilde{\varepsilon}\_{i}^{n} = \widetilde{\lambda}^{n} e^{\sqrt{-1}\,\theta\_{i}}, \quad \text{where } \theta\_{i} = qih. \tag{45}$$

Then, substituting Equation (45) into Equation (43) implies

$$\begin{split} \big(1 + 3\widetilde{p}\_{1} - 6\widetilde{p}\_{2}\big)\widetilde{\lambda}^{n+1} e^{\sqrt{-1}\,\theta\_{i+1}} + \big(6 + 12\widetilde{p}\_{2}\big)\widetilde{\lambda}^{n+1} e^{\sqrt{-1}\,\theta\_{i}} + \big(1 - 3\widetilde{p}\_{1} - 6\widetilde{p}\_{2}\big)\widetilde{\lambda}^{n+1} e^{\sqrt{-1}\,\theta\_{i-1}} \\ = \big(1 - 3\widetilde{p}\_{1} + 6\widetilde{p}\_{2} - b\_{1}\big)\widetilde{\lambda}^{n} e^{\sqrt{-1}\,\theta\_{i+1}} + \big(6 - 12\widetilde{p}\_{2} - 6b\_{1}\big)\widetilde{\lambda}^{n} e^{\sqrt{-1}\,\theta\_{i}} + \big(1 + 3\widetilde{p}\_{1} + 6\widetilde{p}\_{2} - b\_{1}\big)\widetilde{\lambda}^{n} e^{\sqrt{-1}\,\theta\_{i-1}} \\ - \sum\_{j=1}^{n-1}\big(b\_{j+1} - b\_{j}\big)\big(\widetilde{\lambda}^{n-j} e^{\sqrt{-1}\,\theta\_{i+1}} + 6\widetilde{\lambda}^{n-j} e^{\sqrt{-1}\,\theta\_{i}} + \widetilde{\lambda}^{n-j} e^{\sqrt{-1}\,\theta\_{i-1}}\big) \\ + b\_{n}\big(\widetilde{\lambda}^{0} e^{\sqrt{-1}\,\theta\_{i+1}} + 6\widetilde{\lambda}^{0} e^{\sqrt{-1}\,\theta\_{i}} + \widetilde{\lambda}^{0} e^{\sqrt{-1}\,\theta\_{i-1}}\big). \end{split} \tag{46}$$

Divide Equation (46) by $e^{\sqrt{-1}\,\theta\_{i}}$, noting that $\theta\_{i\pm 1} - \theta\_{i} = \pm qh$, to have

$$\begin{split} \Big[\big(6 + 12\widetilde{p}\_{2}\big) + \big(e^{\sqrt{-1}\,qh} + e^{-\sqrt{-1}\,qh}\big) - 6\widetilde{p}\_{2}\big(e^{\sqrt{-1}\,qh} + e^{-\sqrt{-1}\,qh}\big)\Big]\widetilde{\lambda}^{n+1} \\ = \Big[\big(6 - 12\widetilde{p}\_{2} - 6b\_{1}\big) + (1 - b\_{1})\big(e^{\sqrt{-1}\,qh} + e^{-\sqrt{-1}\,qh}\big) + 6\widetilde{p}\_{2}\big(e^{\sqrt{-1}\,qh} + e^{-\sqrt{-1}\,qh}\big)\Big]\widetilde{\lambda}^{n} \\ - \sum\_{j=1}^{n-1}\big(b\_{j+1} - b\_{j}\big)\Big[6 + \big(e^{\sqrt{-1}\,qh} + e^{-\sqrt{-1}\,qh}\big)\Big]\widetilde{\lambda}^{n-j} + b\_{n}\Big[6 + \big(e^{\sqrt{-1}\,qh} + e^{-\sqrt{-1}\,qh}\big)\Big]\widetilde{\lambda}^{0}. \end{split} \tag{47}$$

Then, simplify Equation (47) to write

$$\begin{split} \widetilde{\lambda}^{n+1} = \left[\frac{8 - 8b\_{1} - 4\sin^{2}\big(\frac{\theta}{2}\big) + 4b\_{1}\sin^{2}\big(\frac{\theta}{2}\big) - 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}{8 - 4\sin^{2}\big(\frac{\theta}{2}\big) + 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}\right]\widetilde{\lambda}^{n} \\ - \frac{\sum\_{j=1}^{n-1}\big(b\_{j+1} - b\_{j}\big)\big(8 - 4\sin^{2}\big(\frac{\theta}{2}\big)\big)\widetilde{\lambda}^{n-j} + b\_{n}\big(8 - 4\sin^{2}\big(\frac{\theta}{2}\big)\big)\widetilde{\lambda}^{0}}{8 - 4\sin^{2}\big(\frac{\theta}{2}\big) + 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}, \end{split} \tag{48}$$

where $\theta = qh$.

**Proposition 1.** *Let* $\widetilde{\lambda}^{n}$ *be the fuzzy numerical solution of Equation (48). Then, we have* $\big|\widetilde{\lambda}^{n}\big| \leq \big|\widetilde{\lambda}^{0}\big|$.

**Proof**. From Equation (48), for $n = 0$, we write

$$\left|\widetilde{\lambda}^{1}\right| = \left|\frac{8 - 4\sin^{2}\big(\frac{\theta}{2}\big) - 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}{8 - 4\sin^{2}\big(\frac{\theta}{2}\big) + 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}\right|\left|\widetilde{\lambda}^{0}\right|.$$

Hence, it follows

$$\left|\widetilde{\lambda}^{1}\right| \leq \left|\widetilde{\lambda}^{0}\right|.$$

Now, assume that

$$\left|\widetilde{\lambda}^{m}\right| \leq \left|\widetilde{\lambda}^{0}\right|, \quad m = 1, 2, 3, \ldots, n.$$

Therefore, by [30], the standard coefficients $b\_{j} = (j+1)^{1-\alpha} - (j)^{1-\alpha}$, $j = 1, 2, 3, \ldots$, satisfy

$$b\_{0} = 1 > b\_{1} > b\_{2} > \cdots > b\_{j} > 0, \quad \text{so that } b\_{j+1} - b\_{j} < 0 \text{ for all } j.$$
Hence, in view of Equation (48) and the above statement, we obtain

$$\begin{split} \left|\widetilde{\lambda}^{n+1}\right| \leq \left[\frac{8 - 8b\_{1} - 4\sin^{2}\big(\frac{\theta}{2}\big) + 4b\_{1}\sin^{2}\big(\frac{\theta}{2}\big) - 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}{8 - 4\sin^{2}\big(\frac{\theta}{2}\big) + 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}\right]\left|\widetilde{\lambda}^{n}\right| \\ + \frac{\sum\_{j=1}^{n-1}\big(b\_{j} - b\_{j+1}\big)\big(8 - 4\sin^{2}\big(\frac{\theta}{2}\big)\big)\left|\widetilde{\lambda}^{n-j}\right| + b\_{n}\big(8 - 4\sin^{2}\big(\frac{\theta}{2}\big)\big)\left|\widetilde{\lambda}^{0}\right|}{8 - 4\sin^{2}\big(\frac{\theta}{2}\big) + 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}. \end{split}$$

Thus, we write

$$\left|\widetilde{\lambda}^{n+1}\right| \leq \left[\frac{8 - 8b\_{1} - 4\sin^{2}\big(\frac{\theta}{2}\big) + 4b\_{1}\sin^{2}\big(\frac{\theta}{2}\big) - 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big) + \big(b\_{1} - b\_{n}\big)\big(8 - 4\sin^{2}\big(\frac{\theta}{2}\big)\big) + 8b\_{n} - 4b\_{n}\sin^{2}\big(\frac{\theta}{2}\big)}{8 - 4\sin^{2}\big(\frac{\theta}{2}\big) + 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}\right]\left|\widetilde{\lambda}^{0}\right|.$$

That is,

$$\left|\widetilde{\lambda}^{n+1}\right| \leq \left[\frac{8 - 4\sin^{2}\big(\frac{\theta}{2}\big) - 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}{8 - 4\sin^{2}\big(\frac{\theta}{2}\big) + 24\widetilde{p}\_{2}\sin^{2}\big(\frac{\theta}{2}\big)}\right]\left|\widetilde{\lambda}^{0}\right| \leq \left|\widetilde{\lambda}^{0}\right|.$$
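The two ingredients of the proof, the monotonicity of the weights $b\_j$ and the bound on the one-step amplification factor, can be checked numerically. A small sketch (ours, not part of the paper; the sampled values of $\alpha$ and $\widetilde{p}\_2$ are arbitrary):

```python
import numpy as np

# Numerical sanity checks (ours) for the two facts used in the proof:
# (i) b_j = (j + 1)^(1 - alpha) - j^(1 - alpha) is positive and strictly
#     decreasing for 0 < alpha < 1, and
# (ii) the one-step amplification factor in (48) has magnitude at most 1.

def b(j, alpha):
    return (j + 1) ** (1 - alpha) - j ** (1 - alpha)

for alpha in (0.25, 0.5, 0.75):
    bs = [b(j, alpha) for j in range(50)]
    assert bs[0] == 1.0                                  # b_0 = 1
    assert all(x > 0 for x in bs)                        # positivity
    assert all(bs[j] > bs[j + 1] for j in range(49))     # monotone decrease

# Amplification factor for n = 0, writing s = sin^2(theta / 2) in [0, 1].
s = np.linspace(0.0, 1.0, 201)
for p2 in (0.0, 0.1, 1.0, 10.0):                         # sample values
    g = (8 - 4 * s - 24 * p2 * s) / (8 - 4 * s + 24 * p2 * s)
    assert np.all(np.abs(g) <= 1.0)
```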

**Theorem 1.** *The fourth-order compact implicit scheme method Equation (34) is unconditionally stable.*

**Proof**. From Proposition 1 and the formula of Equation (44), it can be easily shown that

$$\|\widetilde{\varepsilon}^{n}\|\_{2} \leq \|\widetilde{\varepsilon}^{0}\|\_{2}, \quad n = 1, 2, 3, \ldots, N - 1.$$

This means that the fourth-order compact implicit scheme method Equation (14) is unconditionally stable. On the other hand, using the same approach, it is easy to show that the fourth-order compact FTCS scheme Equation (39) is conditionally stable, i.e., there are stability conditions for the time step.

#### **9. Numerical Experiments**

Consider the one-dimensional time-fractional convection–diffusion equation [28]

$$\frac{\partial^{\alpha}\widetilde{u}(x,t;r,\beta)}{\partial t^{\alpha}} = -\frac{\partial \widetilde{u}(x,t;r,\beta)}{\partial x} + \frac{\partial^{2}\widetilde{u}(x,t;r,\beta)}{\partial x^{2}}, \quad 0 < x < L, \; t > 0, \tag{49}$$

subject to the fuzzy boundary conditions *<sup>u</sup>*9(0, *<sup>t</sup>*) <sup>=</sup> *<sup>u</sup>*9(1, *<sup>t</sup>*) <sup>=</sup> 0 and the fuzzy initial condition

$$\widetilde{u}(x,0) = \widetilde{k}e^{-x}, \quad 0 < x < 1. \tag{50}$$

According to the *r*-cut approach, the double-parametric is defined as follows

$$\widetilde{k}(r,\beta) = \beta(0.2 - 0.2r) + 0.1r - 0.1.$$

It is clear that the time-fractional derivative $\frac{\partial^{\alpha}\widetilde{u}(x,t)}{\partial t^{\alpha}}$ and the second-order space derivative $\frac{\partial^{2}\widetilde{u}(x,t)}{\partial x^{2}}$ follow case (i) of generalized differentiability defined in Definition 5. It can be noted from [28] that the analytical solution of Equation (49), which is illustrated in Figures 1 and 2, can be defined by

$$\widetilde{u}(x, t, \alpha; r, \beta) = \sum\_{n=0}^{\infty} \frac{2^{n} t^{n\alpha}}{\Gamma(n\alpha + 1)}\,\widetilde{k}(r,\beta)\,e^{-x}. \tag{51}$$
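For reference, the series in Equation (51) is straightforward to evaluate by truncation. The sketch below (ours; the truncation level and sample point are illustrative, not from the paper) also checks the special case $\alpha = 1$, where the series collapses to $\widetilde{k}(r,\beta)\,e^{2t-x}$:

```python
import math

# Illustrative evaluation of the truncated series (51); the truncation level
# N and the sample point are ours, not taken from the paper.

def k_tilde(r, beta):
    # double-parametric fuzzy number used in the example
    return beta * (0.2 - 0.2 * r) + 0.1 * r - 0.1

def u_exact(x, t, alpha, r, beta, N=50):
    s = sum(2 ** n * t ** (n * alpha) / math.gamma(n * alpha + 1)
            for n in range(N))
    return s * k_tilde(r, beta) * math.exp(-x)

# Sanity check: for alpha = 1 the series sums to e^{2t}, so the solution
# reduces to k(r, beta) * e^{2t - x}.
val = u_exact(0.5, 0.1, 1.0, 0.6, 0.4, N=60)
ref = k_tilde(0.6, 0.4) * math.exp(2 * 0.1 - 0.5)
assert abs(val - ref) < 1e-12
```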

**Figure 1.** The analytical solution of Equation (49) at *r* = 0.6 and *β* = 0.4.

**Figure 2.** The fuzzy analytical solution of Equation (49) at *r* = 0.6 and *β* = 0.6.

The fuzzy absolute error for the numerical solution of Equation (49) can be determined as

$$\big[\widetilde{E}\big]\_{r} = \big|\widetilde{U}(x,t;r,\beta) - \widetilde{u}(x,t;r,\beta)\big| = \begin{cases} \big[\underline{E}\big]\_{r} = \big|\underline{U}(x,t;r,\beta) - \underline{u}(x,t;r,\beta)\big| \\ \big[\overline{E}\big]\_{r} = \big|\overline{U}(x,t;r,\beta) - \overline{u}(x,t;r,\beta)\big| \end{cases} \tag{52}$$

where $\widetilde{U}$ and $\widetilde{u}$ denote the numerical and exact solutions, respectively.

At $h = \Delta x = 0.6$ and $\Delta t^{\alpha} = 0.01$, using Wolfram Mathematica, we obtain the following numerical results:

Figures 1–4 and Tables 1 and 2 demonstrate that the fourth-order compact implicit scheme and the fourth-order compact explicit FTCS scheme are in good agreement with the exact solution at *x* = 5.4, *t* = 0.005 and for all *r*, *β* ∈ [0, 1]. Additionally, the numerical solutions of the proposed schemes take the shape of a triangular fuzzy number, which satisfies the fuzzy number properties of the double-parametric form of fuzzy numbers. The fourth-order compact implicit scheme was more accurate than the fourth-order compact explicit FTCS scheme. Furthermore, the double-parametric form was established to be a general and efficient approach for converting a fuzzy equation to a crisp equation, as it reduces computational cost and produces more accurate results than the single-parametric form. In Figure 3, the numerical results of the proposed methods are most accurate at points close to the inflection point (*β* = 0.5). The reason for using a small time step (Δ*t* = 0.001) is that the compact FTCS method is conditionally stable, which means that the values of Δ*t* and Δ*x* must satisfy its stability conditions. The compact implicit scheme avoids this restriction since it is unconditionally stable (as shown in Section 8), so any values of Δ*t* and Δ*x* can be used.

**Figure 3.** Numerical solution of Equation (49) followed by using (**a**) fourth-order compact FTCS and (**b**) fourth-order compact implicit scheme at *t* = 0.005 and *x* = 5.4 for all *r*, *β* ∈ [0, 1].

**Figure 4.** Exact and presented methods of the solution of Equation (49) at *α* = 0.5, *x* = 5.4, *t* = 0.005 and for all *r*, *β* ∈ [0, 1].

**Table 1.** Numerical solution of Equation (49) followed by fourth-order compact FTCS and fourthorder compact implicit scheme at *x* = 5.4 and *t* = 0.005 for all *r*, *β* ∈ [0, 1].



**Table 2.** Numerical solution of Equation (49) by using fourth-order compact FTCS and fourth-order compact implicit scheme at *x* = 5.4 and *t* = 0.005 for all *r*, *β* ∈ [0, 1].

Figure 5 and Tables 3 and 4 demonstrate that the second-order implicit scheme and the fourth-order compact implicit scheme have a good agreement with the analytical solution at *x* = 5.4, *t* = 0.005 and for all *r*, *β* ∈ [0, 1]. The fourth-order compact implicit scheme was more accurate than the second-order classical implicit scheme and thus satisfies and agrees with the theoretical aspects in Section 4.

**Figure 5.** The numerical and exact solution of Equation (14) obtained by the second-order implicit scheme and the fourth-order compact implicit scheme at *x* = 5.4, *β* = 0 and 1, *t* = 0.005 for all *r* ∈ [0, 1].


**Table 3.** Numerical solution of Equation (14) by using second-order implicit scheme and fourth-order compact implicit scheme at *x* = 5.4 and *t* = 0.005 for all *r*, *β* ∈ [0, 1].

**Table 4.** Numerical solution of Equation (14) by using second-order implicit scheme and fourth-order compact implicit scheme at *x* = 5.4 and *t* = 0.005 for all *r*, *β* ∈ [0, 1].


#### **10. Conclusions**

Two fourth-order compact finite difference methods for solving a fuzzy time-fractional convection–diffusion equation were developed and implemented in our work. Based on the double-parametric form of fuzzy numbers combined with the properties of the Caputo fractional derivative, the considered equation was transformed from the fuzzy domain to the crisp domain in a more general setting. The results obtained using the presented methods satisfy the properties of fuzzy numbers, achieving a triangular shape. Furthermore, the stability analysis is illustrated by the proof of the stability theorem for the presented schemes under the double-parametric form of fuzzy numbers, with accuracy of order $O\big((\Delta t)^{2-\alpha} + (\Delta x)^{4}\big)$. A comparison of numerical and exact solutions for the considered examples at various values of the fuzzy level sets reveals that the fourth-order compact implicit scheme produces slightly better results than the fourth-order compact FTCS scheme. The proposed methods for solving the fuzzy time-fractional convection–diffusion equation were found to be feasible, appropriate, and accurate, as demonstrated by a comparison of analytical and numerical solutions at various fuzzy values.

However, in this paper, the authors focused on the solution of the fuzzy linear time-fractional convection–diffusion equation by assuming the solutions are smooth. In future work, the authors plan to discuss the solution of the nonlinear fuzzy time-fractional convection–diffusion equation under reasonable assumptions of non-smooth solutions, as discussed in [31,32]. Furthermore, the authors plan to develop finite difference and finite element methods to solve linear and nonlinear fuzzy time-fractional convection–diffusion equations under nonhomogeneous boundary conditions [33,34].

**Author Contributions:** Conceptualization, H.Z.; methodology, A.A.-K.; software, M.A.-S.; validation, S.E.A.; formal analysis, M.A.-S.; investigation, S.A.-O.; writing-original draft preparation, H.Z.; writing-review and editing, H.Z.; visualization; supervision, A.A.-K.; funding acquisition, S.A.-O. and S.E.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Deanship for Research & Innovation, Ministry of Education, in Saudi Arabia project number: IFP22UQU4282396DSR051.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors extend their appreciation to the Deanship for Research & Innovation, Ministry of Education, in Saudi Arabia for funding this research work through the project number: IFP22UQU4282396DSR051. This paper is dedicated to Shaher Momani on the occasion of his 60th birthday.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **An Implicit Difference Scheme for the Fourth-Order Nonlinear Evolution Equation with Multi-Term Riemann–Liouville Fractional Integral Kernels**

**Xiaoxuan Jiang 1, Xuehua Yang 1, Haixiang Zhang 1,2,\* and Qingqing Tian <sup>1</sup>**


**Abstract:** In this paper, an implicit difference scheme is proposed and analyzed for a class of nonlinear fourth-order equations with multi-term Riemann–Liouville (R–L) fractional integral kernels. The nonlinear convection term is handled implicitly, and a system of nonlinear algebraic equations is attained by using the Galerkin method based on piecewise linear test functions. The Riemann–Liouville fractional integral terms are treated by convolution quadrature. In order to obtain a fully discrete method, the standard central difference approximation is used to discretize the spatial derivative. The stability and convergence are rigorously proved by the discrete energy method. In addition, the existence and uniqueness of numerical solutions for the nonlinear systems are proved strictly. Additionally, we introduce and compare the Besse relaxation algorithm, the Newton iterative method, and the linearized iterative algorithm for solving the nonlinear systems. Numerical results confirm the theoretical analysis and show the effectiveness of the method.

**Keywords:** fourth-order nonlinear equation; multi-term kernels; finite difference method; stability; convergence

#### **1. Introduction**

Partial integro-differential equations (PIDEs) have been applied widely in physical models, chemistry and biology [1–4]. Additionally, the fractional reaction–subdiffusion equation is believed to provide a powerful tool for modeling a wealth of natural phenomena in physics, biology, and chemistry [5–7]. Many numerical methods have been extensively studied. In [8], Sanz-Serna was the first to propose a difference scheme for nonlinear integro-differential equations; then, Lopez-Marcos [9] made a direct extension and considered the difference method for a class of nonlinear partial integro-differential equations. Tang [10] considered a finite difference scheme for nonlinear PIDEs, approximated the differential term using the Crank–Nicolson scheme, and dealt with the integral term with the product trapezoidal method. Fairweather and Pani [11] used the backward Euler–Galerkin method for some partial integro-differential equations and derived a priori error estimates. Xu [12–14] also completed a series of studies on nonlinear integro-differential equations. A class of fractional convection–diffusion equations with variable coefficients is solved with the Sinc–Legendre collocation method in [15], and nonlinear fractional convection–diffusion equations are solved using the homotopy analysis method in [16]. For more developments in numerical methods and analysis of fractional reaction–subdiffusion equations, we refer the readers to [17–19].

**Citation:** Jiang, X.; Yang, X.; Zhang, H.; Tian, Q. An Implicit Difference Scheme for the Fourth-Order Nonlinear Evolution Equation with Multi-Term Riemann–Liouville Fractional Integral Kernels. *Fractal Fract.* **2022**, *6*, 443. https://doi.org/10.3390/fractalfract6080443

Academic Editors: Libo Feng, Lin Liu and Yang Liu

Received: 29 June 2022 Accepted: 9 August 2022 Published: 15 August 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

This paper is devoted to the study of an implicit difference scheme for the nonlinear fourth-order equation with multi-term Riemann–Liouville fractional integral kernels

$$u\_t(x,t) + u(x,t)u\_x(x,t) - \mathfrak{L}\_1 u(x,t) + \mathfrak{L}\_2 u(x,t) = f(x,t), \quad 0 < t \leq T, \; 0 < x < L, \tag{1}$$

the initial condition and the boundary value conditions are

$$u(x,0) = u\_0(x), \quad 0 \leq x \leq L, \tag{2}$$

$$u(0,t) = u(L,t) = u\_{xx}(0,t) = u\_{xx}(L,t) = 0, \quad 0 < t \leq T, \tag{3}$$

respectively, where $f(x,t)$ and $u\_0(x)$ are given smooth functions. Additionally, $\mathfrak{L}\_1 u$ and $\mathfrak{L}\_2 u$ are defined by

$$\mathfrak{L}\_1 u(x,t) = u\_{xx}(x,t) + \mathcal{I}^{\alpha\_1} u\_{xx}(x,t), \quad 0 < \alpha\_1 < 1, \tag{4}$$

$$\mathfrak{L}\_2 u(x,t) = u\_{xxxx}(x,t) + \mathcal{I}^{\alpha\_2} u\_{xxxx}(x,t), \quad 0 < \alpha\_2 < 1, \tag{5}$$

where, for $\gamma = \alpha\_1, \alpha\_2$, $0 < \gamma < 1$, $\mathcal{I}^{\gamma}$ denotes the R–L fractional integral operator [2] defined by

$$\mathcal{I}^{\gamma}\varphi(t) = \int\_{0}^{t} \beta(t-s)\varphi(s)\,ds = \frac{1}{\Gamma(\gamma)}\int\_{0}^{t} (t-s)^{\gamma-1}\varphi(s)\,ds, \quad t > 0. \tag{6}$$

For fourth-order nonlinear partial differential equations, many scholars have carried out extensive research [9,20–23]. In this paper, we propose a backward Euler and convolution quadrature finite difference method for (1)–(3). The nonlinear convection term in our equation is handled with the Galerkin method, which attains an advantage over the scheme in [23]. We also introduce and compare three nonlinear iterative methods, namely the Besse relaxation algorithm, the Newton iterative method, and the linearized iterative algorithm, to solve the nonlinear systems, and we discuss the advantages and disadvantages of the three kinds of methods. The existence and uniqueness of numerical solutions for the nonlinear systems are proved strictly. The stability and convergence are rigorously proved by the discrete energy method.

The outline of the paper is as follows. In Section 2, the backward Euler implicit difference scheme is derived. In Section 3, the stability of the difference scheme under the $L^2$ and $H^1$ norms is proved. In particular, the existence of the backward Euler implicit difference scheme is proved by the Leray–Schauder theorem. In Section 4, convergence is proved, and the uniqueness of the solution is also proved. Numerical examples are given to check our analysis in Section 5. Finally, the paper ends with a brief conclusion in Section 6.

#### **2. The Construction of the Fully Discrete Scheme**

Let $J$ be a positive integer, define the space-step size $h := \frac{L}{J}$, and let $x\_j := jh$ $(0 \leq j \leq J)$ be the mesh points. For a positive integer $N$, we introduce the time-step size $k := \frac{T}{N}$, the nodes $t\_n := nk$ $(0 \leq n \leq N)$, and the intermediate nodes $t\_{n-\frac{1}{2}} := t\_n - \frac{k}{2}$ $(1 \leq n \leq N)$.

Additionally, we define the following grid functions:

$$U\_j^n := u(x\_j, t\_n), \quad f\_j^n := f(x\_j, t\_n), \quad 0 \leq j \leq J, \; 0 \leq n \leq N.$$

Given a grid function $U = \{U\_j^n \mid 0 \leq j \leq J, \, 0 \leq n \leq N\}$, some notations are defined as follows:

$$\begin{aligned} \delta\_t U\_j^n &= \frac{1}{k}\big(U\_j^n - U\_j^{n-1}\big), & \delta\_x U\_j^n &= \frac{1}{h}\big(U\_j^n - U\_{j-1}^n\big); \\ \Delta U\_j^n &= U\_{j+1}^n - U\_{j-1}^n, & \Delta\_{+} U\_j^n &= U\_{j+1}^n - U\_j^n; \\ \Delta\_{-} U\_j^n &= U\_j^n - U\_{j-1}^n, & \Delta\_x U\_j^n &= \frac{1}{2h}\big(U\_{j+1}^n - U\_{j-1}^n\big); \\ \nabla U\_j^n &= \frac{1}{3}\big(U\_{j+1}^n + U\_j^n + U\_{j-1}^n\big), & \delta\_x^2 U\_j^n &= \frac{1}{h}\big(\delta\_x U\_{j+1}^n - \delta\_x U\_j^n\big); \\ \delta\_x^4 U\_j^n &= \frac{1}{h^2}\big(\delta\_x^2 U\_{j+1}^n - 2\delta\_x^2 U\_j^n + \delta\_x^2 U\_{j-1}^n\big). \end{aligned} \tag{7}$$
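The stencils in (7) can be sanity-checked on polynomials, for which central differences of matching order are exact up to computable correction terms. A short sketch (ours, not part of the paper; the grid spacing is arbitrary):

```python
import numpy as np

# Illustrative check (ours) of the stencils in (7) on the monomial x^4.
h = 0.1
x = np.arange(-5, 6) * h              # 11 equally spaced nodes
u = x ** 4

def d2(v, h):
    # second central difference, delta_x^2 in (7)
    return (v[2:] - 2 * v[1:-1] + v[:-2]) / h ** 2

def d4(v, h):
    # fourth difference, delta_x^4 in (7), i.e. delta_x^2 applied twice
    return d2(d2(v, h), h)

# delta_x^2 of x^4 equals 12 x^2 + 2 h^2 exactly, while delta_x^4 of x^4
# reproduces the fourth derivative (x^4)'''' = 24 exactly.
assert np.allclose(d2(u, h), 12 * x[1:-1] ** 2 + 2 * h ** 2)
assert np.allclose(d4(u, h), 24.0)
```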

To construct the fully discrete scheme, we first introduce the first-order quadrature rule [20,21] to approximate the R–L fractional integral $\mathcal{I}^{\gamma}\varphi(t)$:

$$\mathcal{I}^{\gamma}\varphi(t\_n) \approx \hat{q}\_n^{\gamma}(\varphi) = k^{\gamma}\sum\_{p=1}^{n} \omega\_{n-p}^{\gamma}\varphi^{p} = k^{\gamma}\sum\_{p=0}^{n-1} \omega\_{p}^{\gamma}\varphi^{n-p}, \tag{8}$$

where, by the generating power series $(\delta(\zeta))^{-\gamma} = (1 - \zeta)^{-\gamma}$, the quadrature weights $\omega\_p^{\gamma}$ can be attained from

$$\sum\_{p=0}^{\infty} \omega\_p^{\gamma} \zeta^p = (1 - \zeta)^{-\gamma}. \tag{9}$$

Further, the quadrature weights $\omega_p^{\gamma}$ can be computed by

$$\omega_0^{\gamma} = 1, \qquad \omega_p^{\gamma} = \frac{\gamma(\gamma+1)\cdots(\gamma+p-1)}{p!}, \quad p = 1, 2, \cdots. \tag{10}$$
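Equation (10) implies the recurrence $\omega_p^{\gamma} = \omega_{p-1}^{\gamma}\,(\gamma+p-1)/p$, which is a stable way to generate the weights in practice. A minimal sketch in Python (the order $\gamma$, step size, and tolerances below are illustrative, not taken from the paper):

```python
import math

def cq_weights(gamma, n):
    """First n convolution-quadrature weights of (10), generated by
    (1 - zeta)**(-gamma):  w_0 = 1,  w_p = w_{p-1} * (gamma + p - 1) / p."""
    w = [1.0]
    for p in range(1, n):
        w.append(w[-1] * (gamma + p - 1) / p)
    return w

# The recurrence reproduces the closed form w_p = Gamma(p+gamma)/(Gamma(gamma) p!).
gamma = 0.5
w = cq_weights(gamma, 8)
for p in range(8):
    closed = math.gamma(p + gamma) / (math.gamma(gamma) * math.factorial(p))
    assert abs(w[p] - closed) < 1e-12

# Sanity check of rule (8) on phi(t) = 1:  I^gamma 1 (t_n) = t_n^gamma / Gamma(gamma+1).
k, N = 1e-3, 1000
w = cq_weights(gamma, N)
quad = k**gamma * sum(w[:N])            # k^gamma * sum_{p=0}^{n-1} w_p
exact = (N * k)**gamma / math.gamma(gamma + 1)
assert abs(quad - exact) / exact < 1e-2  # first-order rule: O(k) error
```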

Let $E(\varphi)(t_n) = \mathcal{I}^{\gamma}\varphi(t_n) - \hat{q}_n^{\gamma}(\varphi)$; the quadrature error is bounded in the next lemma.

**Lemma 1** ([3,14])**.** *Let $\varphi(t)$ be a real, continuously differentiable function on $0 < t \le T$ whose derivative $\varphi_t(t)$ is continuous and integrable on $0 < t \le T$. Then, based on Equation (9), the error of the convolution quadrature is bounded by*

$$|E(\varphi)(t_n)| \le Ck\,t_n^{\gamma-1}|\varphi(0)| + Ck\int_0^{t_{n-1}}(t_n - s)^{\gamma-1}|\varphi_t(s)|\,ds + Ck^{\gamma}\int_{t_{n-1}}^{t_n}|\varphi_t(s)|\,ds,$$

*where the constant $C$ does not depend on $k$.*

**Lemma 2.** *Let $u(x,t) \in C_{x,t}^{4,2}([0,L]\times(0,T])$; for $1 \le j \le J-1$, $1 \le n \le N$, it holds that*

$$|(R_1)_j^n| = |\mathcal{I}^{a_1}u_{xx}(x_j,t_n) - \hat{q}_n^{a_1}(\delta_x^2 U_j)| \le C\big(k^{a_1}n^{a_1-1} + h^2\big).$$

**Proof.** By using the Taylor expansion with integral remainder [24–26], we obtain

$$\frac{\partial^2 u}{\partial x^2}(x_j,t_n) = \delta_x^2 U_j^n - \frac{1}{6}h^2\int_0^1\Big[\frac{\partial^4 u}{\partial x^4}(x_j+sh,t_n) + \frac{\partial^4 u}{\partial x^4}(x_j-sh,t_n)\Big](1-s)^3\,ds, \quad 1\le j\le J-1,\ 1\le n\le N; \tag{11}$$

by the triangle inequality, we obtain

$$\begin{aligned} |(R_1)_j^n| &= |\mathcal{I}^{a_1}u_{xx}(x_j,t_n) - \hat{q}_n^{a_1}(\delta_x^2 U_j)|\\ &\le |\mathcal{I}^{a_1}u_{xx}(x_j,t_n) - \hat{q}_n^{a_1}(u_{xx}(x_j,\cdot))| + |\hat{q}_n^{a_1}(u_{xx}(x_j,\cdot)) - \hat{q}_n^{a_1}(\delta_x^2 U_j)|, \quad 1\le j\le J-1,\ 1\le n\le N. \end{aligned}$$

Since $\hat{q}_n^{a_1}(1) = k^{a_1}\sum_{p=0}^{n-1}\omega_p^{a_1} = \frac{1}{\Gamma(a_1)}\int_0^{t_n}(t_n-s)^{a_1-1}\,ds = \frac{t_n^{a_1}}{\Gamma(a_1+1)}$, then

$$\begin{aligned} |(R_1)_j^n| \le\ & \hat{q}_n^{a_1}(1)\,\frac{h^2}{6}\,\max_{1\le p\le n}\Big|\int_0^1\Big[\frac{\partial^4 u}{\partial x^4}(x_j+sh,t_p) + \frac{\partial^4 u}{\partial x^4}(x_j-sh,t_p)\Big](1-s)^3\,ds\Big|\\ &+ |\mathcal{I}^{a_1}u_{xx}(x_j,t_n) - \hat{q}_n^{a_1}(u_{xx}(x_j,\cdot))|\\ \le\ & Ch^2\,\frac{t_n^{a_1}}{\Gamma(a_1+1)} + |\mathcal{I}^{a_1}u_{xx}(x_j,t_n) - \hat{q}_n^{a_1}(u_{xx}(x_j,\cdot))|, \quad 1\le j\le J-1,\ 1\le n\le N. \end{aligned}$$

By Lemma 1, we obtain

$$|\mathcal{I}^{a_1}u_{xx}(x_j,t_n) - \hat{q}_n^{a_1}(u_{xx}(x_j,\cdot))| \le Cn^{a_1-1}k^{a_1}, \quad 1\le j\le J-1,\ 1\le n\le N.$$

The proof is finished.

**Lemma 3.** *Let $u(x,t) \in C_{x,t}^{6,2}([0,L]\times(0,T])$; for $1 \le j \le J-1$, $1 \le n \le N$, it holds that $|(R_2)_j^n| = |\mathcal{I}^{a_2}u_{xxxx}(x_j,t_n) - \hat{q}_n^{a_2}(\delta_x^4 U_j)| \le C\big(n^{a_2-1}k^{a_2} + h^2\big)$.*

**Proof.** By using the Taylor expansion with integral remainder, we have

$$\frac{\partial^4 u}{\partial x^4}(x_j,t_n) = \delta_x^4 U_j^n - \frac{1}{6}h^2\int_0^1\Big[\frac{\partial^6 u}{\partial x^6}(x_j+sh,t_n) + \frac{\partial^6 u}{\partial x^6}(x_j-sh,t_n)\Big](1-s)^3\,ds, \quad 1\le j\le J-1,\ 1\le n\le N. \tag{12}$$

Similarly to Lemma 2, we can complete the proof of Lemma 3.

We now derive the backward Euler implicit difference scheme for the problem (1)–(3). Considering (1) at the point (*xj*, *tn*), we obtain

$$u_t(x_j,t_n) + u(x_j,t_n)u_x(x_j,t_n) - \mathfrak{L}_1 u(x_j,t_n) + \mathfrak{L}_2 u(x_j,t_n) = f(x_j,t_n), \quad 1\le j\le J-1,\ 1\le n\le N. \tag{13}$$

Next, we discretize the terms of (13) one by one. First, from Lemmas 2 and 3, we obtain

$$\begin{cases} \mathcal{I}^{a_1}u_{xx}(x_j,t_n) = \hat{q}_n^{a_1}(\delta_x^2 U_j) + (R_1)_j^n, & 1\le j\le J-1,\ 1\le n\le N,\\ \mathcal{I}^{a_2}u_{xxxx}(x_j,t_n) = \hat{q}_n^{a_2}(\delta_x^4 U_j) + (R_2)_j^n, & 1\le j\le J-1,\ 1\le n\le N, \end{cases} \tag{14}$$

and

$$\begin{cases} u_{xx}(x_j,t_n) = \delta_x^2 U_j^n + (R_3)_j^n,\\ u_{xxxx}(x_j,t_n) = \delta_x^4 U_j^n + (R_4)_j^n, \end{cases}$$

where

$$\begin{cases} (R_3)_j^n = -\dfrac{1}{6}h^2\displaystyle\int_0^1\Big[\dfrac{\partial^4 u}{\partial x^4}(x_j+sh,t_n) + \dfrac{\partial^4 u}{\partial x^4}(x_j-sh,t_n)\Big](1-s)^3\,ds,\\[6pt] (R_4)_j^n = -\dfrac{1}{6}h^2\displaystyle\int_0^1\Big[\dfrac{\partial^6 u}{\partial x^6}(x_j+sh,t_n) + \dfrac{\partial^6 u}{\partial x^6}(x_j-sh,t_n)\Big](1-s)^3\,ds. \end{cases}$$

Thus, we have

$$\begin{aligned} \mathfrak{L}_1 u(x_j,t_n) &= u_{xx}(x_j,t_n) + \mathcal{I}^{a_1}u_{xx}(x_j,t_n)\\ &= \delta_x^2 U_j^n + \hat{q}_n^{a_1}(\delta_x^2 U_j) + (R_1)_j^n + (R_3)_j^n, \quad 1\le j\le J-1,\ 1\le n\le N, \end{aligned} \tag{15}$$

and

$$\begin{aligned} \mathfrak{L}_2 u(x_j,t_n) &= u_{xxxx}(x_j,t_n) + \mathcal{I}^{a_2}u_{xxxx}(x_j,t_n)\\ &= \delta_x^4 U_j^n + \hat{q}_n^{a_2}(\delta_x^4 U_j) + (R_2)_j^n + (R_4)_j^n, \quad 1\le j\le J-1,\ 1\le n\le N. \end{aligned} \tag{16}$$

Second, for the nonlinear convection term *uux*, we discretize it by the Galerkin method with piecewise linear test functions

$$\begin{aligned} u(x_j,t_n)u_x(x_j,t_n) &= \frac{U_{j+1}^n + U_j^n + U_{j-1}^n}{3}\cdot\frac{U_{j+1}^n - U_{j-1}^n}{2h} + (R_5)_j^n\\ &= \frac{1}{6h}\big(U_j^n\Delta U_j^n + \Delta(U_j^n)^2\big) + O(h^2), \quad 1\le j\le J-1,\ 1\le n\le N. \end{aligned} \tag{17}$$
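The regrouping in (17) is purely algebraic: $(U_{j+1}^n+U_j^n+U_{j-1}^n)(U_{j+1}^n-U_{j-1}^n) = U_j^n\Delta U_j^n + \Delta(U_j^n)^2$ with $\Delta(U_j^n)^2 = (U_{j+1}^n)^2-(U_{j-1}^n)^2$. A quick sketch comparing the two forms on arbitrary sample data (values and step size are illustrative):

```python
import random

def product_form(U, j, h):
    """Left form of (17): three-point average times the centered difference."""
    return (U[j+1] + U[j] + U[j-1]) / 3 * (U[j+1] - U[j-1]) / (2*h)

def conservative_form(U, j, h):
    """Right form of (17): (1/6h) * (U_j * Delta U_j + Delta(U^2)_j)."""
    return (U[j]*(U[j+1] - U[j-1]) + (U[j+1]**2 - U[j-1]**2)) / (6*h)

random.seed(0)
h = 0.1
U = [random.uniform(-1.0, 1.0) for _ in range(12)]
for j in range(1, 11):
    assert abs(product_form(U, j, h) - conservative_form(U, j, h)) < 1e-12
```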

Third, for *ut*(*xj*, *tn*), we have

$$u_t(x_j,t_n) = \delta_t U_j^n + (R_6)_j^n, \quad 1\le j\le J-1,\ 1\le n\le N, \tag{18}$$

where

$$(R_6)_j^n = -k\int_0^1 \frac{\partial^2 u}{\partial t^2}(x_j, t_{n-1}+sk)\,s\,ds.$$

Substituting (14)–(18) into (13), we obtain

$$\begin{aligned} &\delta_t U_j^n + \frac{1}{6h}\big(U_j^n\Delta U_j^n + \Delta(U_j^n)^2\big) - k^{a_1}\sum_{p=1}^{n}\omega_{n-p}^{a_1}\delta_x^2 U_j^p + k^{a_2}\sum_{p=1}^{n}\omega_{n-p}^{a_2}\delta_x^4 U_j^p\\ &\qquad - \delta_x^2 U_j^n + \delta_x^4 U_j^n = f_j^n + R_j^n, \quad 1\le j\le J-1,\ 1\le n\le N, \end{aligned} \tag{19}$$

in which

$$R_j^n = (R_1)_j^n - (R_2)_j^n - (R_3)_j^n - (R_4)_j^n - (R_5)_j^n - (R_6)_j^n, \quad 1\le j\le J-1,\ 1\le n\le N.$$

By Lemmas 1–3, there is a constant *C* independent of *h* and *k*, which satisfies

$$|R_j^n| \le C\big(n^{a_1-1}k^{a_1} + n^{a_2-1}k^{a_2} + h^2\big), \quad 1\le j\le J-1,\ 1\le n\le N. \tag{20}$$

The following initial and boundary value conditions can be attained

$$U_0^n = U_J^n = 0, \quad \delta_x^2 U_0^n = \delta_x^2 U_J^n = O(h^2), \quad 1\le n\le N; \qquad U_j^0 = u^0(x_j), \quad 0\le j\le J. \tag{21}$$

Omitting the small terms in (19) and (21), and replacing $U_j^n$ with its numerical approximation $u_j^n$, $1\le j\le J-1$, $1\le n\le N$, we obtain the backward Euler implicit difference scheme

$$\begin{cases} \delta_t u_j^n + \dfrac{1}{6h}\big(u_j^n\Delta u_j^n + \Delta(u_j^n)^2\big) - \mathfrak{L}_1 u_j^n + \mathfrak{L}_2 u_j^n = f_j^n, & 1\le j\le J-1,\ 1\le n\le N,\\ u_0^n = u_J^n = 0, \quad \delta_x^2 u_0^n = \delta_x^2 u_J^n = 0, & 1\le n\le N,\\ u_j^0 = u^0(x_j), & 0\le j\le J, \end{cases} \tag{22}$$

and

$$\begin{aligned} \mathfrak{L}_1 u_j^n &= \delta_x^2 u_j^n + \hat{q}_n^{a_1}(\delta_x^2 u_j), \quad 1\le j\le J-1,\ 1\le n\le N,\\ \mathfrak{L}_2 u_j^n &= \delta_x^4 u_j^n + \hat{q}_n^{a_2}(\delta_x^4 u_j), \quad 1\le j\le J-1,\ 1\le n\le N, \end{aligned}$$

where

$$\delta_x^4 u_j^n = \begin{cases} \big(-2\delta_x^2 u_1^n + \delta_x^2 u_2^n\big)/h^2, & j = 1,\ 1\le n\le N,\\ \big(\delta_x^2 u_{j-1}^n - 2\delta_x^2 u_j^n + \delta_x^2 u_{j+1}^n\big)/h^2, & 2\le j\le J-2,\ 1\le n\le N,\\ \big(\delta_x^2 u_{J-2}^n - 2\delta_x^2 u_{J-1}^n\big)/h^2, & j = J-1,\ 1\le n\le N. \end{cases}$$
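The boundary-modified fourth difference above is simply $\delta_x^2$ applied twice with the discrete boundary condition $\delta_x^2 u_0^n = \delta_x^2 u_J^n = 0$ built in. A sketch of the two operators (the grid size and the test function are illustrative assumptions):

```python
import numpy as np

def delta2(u, h):
    """(delta_x^2 u)_j at the interior nodes j = 1..J-1."""
    return (u[2:] - 2*u[1:-1] + u[:-2]) / h**2

def delta4(u, h):
    """The boundary-modified delta_x^4 above: delta_x^2 applied twice,
    imposing delta_x^2 u_0 = delta_x^2 u_J = 0 at the endpoints."""
    d2 = np.zeros_like(u)
    d2[1:-1] = delta2(u, h)          # endpoints stay zero (the BC)
    return delta2(d2, h)

J, L = 16, 1.0
h = L / J
x = np.linspace(0.0, L, J + 1)
u = np.sin(np.pi * x)                # satisfies u_0 = u_J = 0

d4 = delta4(u, h)                    # values at j = 1..J-1
# Away from the boundary, delta_x^4 sin(pi x) ~ pi^4 sin(pi x).
mid = J // 2                         # x = 1/2, where sin(pi x) = 1
assert abs(d4[mid - 1] - np.pi**4) < 0.1 * np.pi**4
```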

#### **3. Existence and Stability**

In this section, we analyze the *L*<sup>2</sup> stability, *L*<sup>∞</sup> stability, and existence of the backward Euler implicit difference scheme (22).

Firstly, we shall introduce some notations and lemmas that will be used in the proof of stability. Let $V_h = \{s \mid s = (s_0, s_1, \ldots, s_J),\ s_0 = s_J = 0\}$. For any grid functions $s, g \in V_h$, we denote

$$\langle s, g \rangle = h \sum\_{j=1}^{J-1} s\_j g\_{j}, \quad ||s||\_{\infty} = \max\_{1 \le j \le J-1} \{|s\_j|\}, \quad ||s|| = \sqrt{\langle s, s \rangle}. \tag{23}$$

**Lemma 4** ([27,28])**.** *For any function s defined on Vh, we obtain*

$$||s||\_{\infty} \le \frac{\sqrt{L}}{2}||\delta\_x s||.$$

**Lemma 5** ([9,29])**.** *Let s*, *g* ∈ *Vh; then*

$$\langle \delta_x^2 s, g\rangle = -\sum_{j=0}^{J-1} h(\delta_x s_{j+1})(\delta_x g_{j+1}).$$

**Lemma 6** ([24,25,30])**.** *For any $s, g \in V_h$ such that $\delta_x^2 s_0 = \delta_x^2 s_J = 0$, we have*

$$\langle \delta_x^4 s, g\rangle = \sum_{j=1}^{J-1} h(\delta_x^2 s_j)(\delta_x^2 g_j).$$

**Lemma 7** ([31])**.** *Let $\beta(t) = t^{\alpha-1}/\Gamma(\alpha)$ be defined as in Equation (6); $\beta(t) \in L_{1,loc}(0,\infty)$ is of positive type if and only if*

$$\operatorname{Re}\big(\hat{\beta}(t)\big) \ge 0, \qquad \text{for } t\in\mathbb{C},\ \operatorname{Re}(t) > 0,$$

*where* $\operatorname{Re}$ *denotes the real part and* $\hat{\beta}$ *the Laplace transform of* $\beta(t)$*.*

**Lemma 8** ([24,25])**.** *If $\{b_0, b_1, \ldots, b_n, \ldots\}$ is a real-valued sequence such that $\hat{b}(z) = \sum_{n=0}^{\infty} b_n z^n$ is analytic in $D = \{z \in \mathbb{C} : |z| \le 1\}$, then for any positive integer $N$ and any $(V^1, V^2, \ldots, V^N) \in \mathbb{R}^N$,*

$$\sum\_{n=1}^{N} \left( \sum\_{p=1}^{n} b\_{n-p} V^p \right) V^n \ge 0,$$

*if and only if*

$$\operatorname{Re}\hat{b}(z) \ge 0,\ for\ z \in D.$$

It is noticed that the generating function (9) satisfies the condition of Lemma 8.
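Since the generating function (9) satisfies the condition of Lemma 8 for $0 < \gamma < 1$, the quadratic form built from the weights $\omega_p^{\gamma}$ should be non-negative for every real sequence. A numerical spot check (the sequence length, $\gamma$, and trial count are arbitrary choices):

```python
import random

def cq_weights(gamma, n):
    """Weights of (10): w_0 = 1, w_p = w_{p-1} * (gamma + p - 1) / p."""
    w = [1.0]
    for p in range(1, n):
        w.append(w[-1] * (gamma + p - 1) / p)
    return w

def quadratic_form(w, V):
    """sum_{n=1}^{N} ( sum_{p=1}^{n} w_{n-p} V^p ) V^n  (V is 1-based in the text)."""
    N = len(V)
    return sum(sum(w[n - p] * V[p - 1] for p in range(1, n + 1)) * V[n - 1]
               for n in range(1, N + 1))

random.seed(1)
gamma = 0.7
w = cq_weights(gamma, 20)
for _ in range(50):
    V = [random.uniform(-1.0, 1.0) for _ in range(20)]
    assert quadratic_form(w, V) >= -1e-10   # non-negative up to rounding
```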

#### *3.1. Stability*

**Theorem 1.** ($L^2$-stability) *Assume that $\{u_j^n \mid 1\le j\le J-1,\ 1\le n\le N\}$ is the solution of the backward Euler implicit difference scheme (22). We can obtain*

$$||u^n|| \le ||u^0|| + 2k \sum\_{i=1}^n ||f^i||, \quad 1 \le n \le N.$$

**Proof.** Taking the inner product of (22) with $u^n$, for $1\le n\le N$, we obtain the following formula

$$\begin{aligned} &\langle \delta_t u^n, u^n\rangle + \frac{1}{6h}\langle u^n\Delta u^n + \Delta(u^n)^2, u^n\rangle - k^{a_1}\sum_{p=1}^{n}\omega_{n-p}^{a_1}\langle \delta_x^2 u^p, u^n\rangle - \langle \delta_x^2 u^n, u^n\rangle\\ &\qquad + \langle \delta_x^4 u^n, u^n\rangle + k^{a_2}\sum_{p=1}^{n}\omega_{n-p}^{a_2}\langle \delta_x^4 u^p, u^n\rangle = \langle f^n, u^n\rangle. \end{aligned} \tag{24}$$

From [9,22,32], we have

$$\langle u^n\Delta u^n + \Delta(u^n)^2, u^n\rangle = 0,$$

then, for $N \ge 1$, multiplying (24) by $2k$ and summing over $n$ yields

$$\begin{aligned} &2k\sum_{n=1}^{N}\langle\delta_t u^n, u^n\rangle - 2k^{a_1+1}\sum_{n=1}^{N}\sum_{p=1}^{n}\omega_{n-p}^{a_1}\langle\delta_x^2 u^p, u^n\rangle - 2k\sum_{n=1}^{N}\langle\delta_x^2 u^n, u^n\rangle\\ &\qquad + 2k\sum_{n=1}^{N}\langle\delta_x^4 u^n, u^n\rangle + 2k^{a_2+1}\sum_{n=1}^{N}\sum_{p=1}^{n}\omega_{n-p}^{a_2}\langle\delta_x^4 u^p, u^n\rangle = 2k\sum_{n=1}^{N}\langle f^n, u^n\rangle. \end{aligned} \tag{25}$$

Next, we estimate the terms in (25) one by one. First, it is clear that

$$\begin{aligned} \langle \delta\_t u^n, u^n \rangle &= \frac{1}{2k} \langle u^n - u^{n-1}, u^n - u^{n-1} + u^n + u^{n-1} \rangle \\ &\ge \quad \frac{1}{2k} (||u^n||^2 - ||u^{n-1}||^2), \end{aligned}$$

we arrive at

$$2k\sum\_{n=1}^{N} \langle \delta\_l u^n, u^n \rangle \ge ||u^N||^2 - ||u^0||^2. \tag{26}$$

Second, utilizing Lemmas 4, 5, 7, and 8, we have

$$\begin{aligned} &-2k\sum_{n=1}^{N}\langle\delta_x^2 u^n, u^n\rangle - 2k^{a_1+1}\sum_{n=1}^{N}\sum_{p=1}^{n}\omega_{n-p}^{a_1}\langle\delta_x^2 u^p, u^n\rangle\\ &= 2k\sum_{n=1}^{N}h\sum_{j=0}^{J-1}(\delta_x u_{j+1}^n)(\delta_x u_{j+1}^n) + 2k^{a_1+1}h\sum_{j=0}^{J-1}\sum_{n=1}^{N}\Big(\sum_{p=1}^{n}\omega_{n-p}^{a_1}\delta_x u_{j+1}^p\Big)\delta_x u_{j+1}^n\\ &\ge 0. \end{aligned} \tag{27}$$

Additionally,

$$\begin{aligned} &2k\sum_{n=1}^{N}\langle\delta_x^4 u^n, u^n\rangle + 2k^{a_2+1}\sum_{n=1}^{N}\sum_{p=1}^{n}\omega_{n-p}^{a_2}\langle\delta_x^4 u^p, u^n\rangle\\ &= 2k\sum_{n=1}^{N}\sum_{j=1}^{J-1}h(\delta_x^2 u_j^n)(\delta_x^2 u_j^n) + 2k^{a_2+1}h\sum_{j=1}^{J-1}\sum_{n=1}^{N}\Big(\sum_{p=1}^{n}\omega_{n-p}^{a_2}\delta_x^2 u_j^p\Big)\delta_x^2 u_j^n\\ &\ge 0. \end{aligned} \tag{28}$$

Substituting (26)–(28) into (25), and using the Cauchy–Schwarz inequality, we have

$$||\boldsymbol{u}^{N}||^{2} \leq ||\boldsymbol{u}^{0}||^{2} + 2k \sum\_{n=1}^{N} ||\boldsymbol{f}^{n}|| ||\boldsymbol{u}^{n}||.\tag{29}$$

Choosing $M$ such that $\|u^M\| = \max_{0\le n\le N}\|u^n\|$, we obtain

$$\|u^N\| \le \|u^M\| \le \|u^0\| + 2k\sum_{i=1}^{M}\|f^i\| \le \|u^0\| + 2k\sum_{i=1}^{N}\|f^i\|. \tag{30}$$

The proof of Theorem 1 is finished.
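The key cancellation used above, $\langle u^n\Delta u^n + \Delta(u^n)^2, u^n\rangle = 0$, is a discrete telescoping that needs only the boundary condition $u_0^n = u_J^n = 0$. A quick sketch checking it on random data (grid size, step, and seed are arbitrary):

```python
import random

def nonlinear_inner(u, h):
    """<u Delta u + Delta(u^2), u> for a grid function with u_0 = u_J = 0,
    where (Delta v)_j = v_{j+1} - v_{j-1}."""
    J = len(u) - 1
    return h * sum((u[j]*(u[j+1] - u[j-1]) + (u[j+1]**2 - u[j-1]**2)) * u[j]
                   for j in range(1, J))

random.seed(2)
h = 0.05
for _ in range(20):
    u = [0.0] + [random.uniform(-1.0, 1.0) for _ in range(23)] + [0.0]
    assert abs(nonlinear_inner(u, h)) < 1e-12
```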

**Theorem 2.** ($H^1$-stability) *Assume that $\{u_j^n \mid 1\le j\le J-1,\ 1\le n\le N\}$ is the solution of the backward Euler implicit difference scheme (22). Then, it holds that*

$$|u^n|_1 \le |u^0|_1 + 2k\sum_{i=1}^{N}|f^i|_1, \quad 1\le n\le N.$$

**Proof.** Taking the inner product of (22) with $-2k\delta_x^2 u^n$, for $1\le n\le N$, we have

$$\begin{aligned} &-2k\langle\delta_t u^n, \delta_x^2 u^n\rangle - \frac{k}{3h}\langle u^n\Delta u^n + \Delta(u^n)^2, \delta_x^2 u^n\rangle + 2k^{1+a_1}\sum_{p=1}^{n}\omega_{n-p}^{a_1}\langle\delta_x^2 u^p, \delta_x^2 u^n\rangle\\ &\qquad + 2k\langle\delta_x^2 u^n, \delta_x^2 u^n\rangle - 2k^{1+a_2}\sum_{p=1}^{n}\omega_{n-p}^{a_2}\langle\delta_x^4 u^p, \delta_x^2 u^n\rangle - 2k\langle\delta_x^4 u^n, \delta_x^2 u^n\rangle = -2k\langle f^n, \delta_x^2 u^n\rangle. \end{aligned} \tag{31}$$

Since

$$\begin{aligned} &-\big\langle u^n\Delta u^n + \Delta(u^n)^2,\ \delta_x^2 u^n\big\rangle = \big\langle \delta_x\big(u^n\Delta u^n + \Delta(u^n u^n)\big),\ \delta_x u^n\big\rangle\\ &= \sum_{j=1}^{J-1} h\,\delta_x\big(u_j^n\Delta u_j^n + \Delta(u_j^n u_j^n)\big)\delta_x u_j^n\\ &= \sum_{j=1}^{J-1} h\,\delta_x\big(u_j^n(u_{j+1}^n - u_{j-1}^n) + (u_{j+1}^n u_{j+1}^n - u_{j-1}^n u_{j-1}^n)\big)\delta_x u_j^n\\ &= \sum_{j=1}^{J-1} h\,\delta_x\big((u_j^n + u_{j+1}^n)u_{j+1}^n - (u_j^n + u_{j-1}^n)u_{j-1}^n\big)\delta_x u_j^n\\ &= \sum_{j=1}^{J-1} h\,\delta_x(u_j^n + u_{j+1}^n)\,\delta_x u_{j+1}^n\,\delta_x u_j^n - \sum_{j=1}^{J-1} h\,\delta_x(u_j^n + u_{j-1}^n)\,\delta_x u_{j-1}^n\,\delta_x u_j^n\\ &= \sum_{j=1}^{J-1} h\,\delta_x(u_j^n + u_{j+1}^n)\,\delta_x u_{j+1}^n\,\delta_x u_j^n - \sum_{j=0}^{J-2} h\,\delta_x(u_j^n + u_{j+1}^n)\,\delta_x u_{j+1}^n\,\delta_x u_j^n\\ &= h\,\delta_x(u_{J-1}^n + u_J^n)\,\delta_x u_J^n\,\delta_x u_{J-1}^n - h\,\delta_x(u_0^n + u_1^n)\,\delta_x u_1^n\,\delta_x u_0^n = 0. \end{aligned}$$

Then, we have

$$-\frac{k}{3h}\langle u^n\Delta u^n + \Delta(u^n)^2, \delta_x^2 u^n\rangle = 0.$$

For *N* ≥ 1, (31) can be rearranged

$$\begin{aligned} &-2k\sum_{n=1}^{N}\langle\delta_t u^n, \delta_x^2 u^n\rangle + 2k^{1+a_1}\sum_{n=1}^{N}\sum_{p=1}^{n}\omega_{n-p}^{a_1}\langle\delta_x^2 u^p, \delta_x^2 u^n\rangle + 2k\sum_{n=1}^{N}\langle\delta_x^2 u^n, \delta_x^2 u^n\rangle\\ &\qquad - 2k^{1+a_2}\sum_{n=1}^{N}\sum_{p=1}^{n}\omega_{n-p}^{a_2}\langle\delta_x^4 u^p, \delta_x^2 u^n\rangle - 2k\sum_{n=1}^{N}\langle\delta_x^4 u^n, \delta_x^2 u^n\rangle = -2k\sum_{n=1}^{N}\langle f^n, \delta_x^2 u^n\rangle. \end{aligned} \tag{32}$$

Since

$$-\langle\delta_t u^n, \delta_x^2 u^n\rangle \ge \frac{1}{2k}\big(|u^n|_1^2 - |u^{n-1}|_1^2\big),$$

then

$$-2k\sum\_{n=1}^{N} \langle \delta\_t u^n, \delta\_x^2 u^n \rangle \ge |u^N|\_1^2 - |u^0|\_1^2. \tag{33}$$

By Lemma 8, we have

$$\begin{split} 2k \sum\_{n=1}^{N} \langle \delta\_{\mathbf{x}}^{2} \boldsymbol{\mu}^{n}, \delta\_{\mathbf{x}}^{2} \boldsymbol{\mu}^{n} \rangle + 2k^{a\_{1}+1} \sum\_{n=1}^{N} \sum\_{p=1}^{n} \omega\_{n-p}^{a\_{1}} \langle \delta\_{\mathbf{x}}^{2} \boldsymbol{\mu}^{p}, \delta\_{\mathbf{x}}^{2} \boldsymbol{\mu}^{n} \rangle \\ \geq \quad 0, \quad 1 \leq n \leq N. \end{split} \tag{34}$$

Further, by Lemmas 4, 5, 7, and 8, we have

$$\begin{aligned} &-2k\sum_{n=1}^{N}\langle\delta_x^4 u^n, \delta_x^2 u^n\rangle - 2k^{a_2+1}\sum_{n=1}^{N}\sum_{p=1}^{n}\omega_{n-p}^{a_2}\langle\delta_x^4 u^p, \delta_x^2 u^n\rangle\\ &= 2k\sum_{n=1}^{N}\sum_{j=0}^{J-1}h(\delta_x^3 u_j^n)(\delta_x^3 u_j^n) + 2k^{a_2+1}h\sum_{j=0}^{J-1}\sum_{n=1}^{N}\Big(\sum_{p=1}^{n}\omega_{n-p}^{a_2}\delta_x^3 u_j^p\Big)\delta_x^3 u_j^n\\ &\ge 0. \end{aligned} \tag{35}$$

Substituting (33)–(35) into (32), and using the Cauchy–Schwarz inequality, we have

$$|u^N|_1^2 \le |u^0|_1^2 + 2k\sum_{n=1}^{N}|f^n|_1|u^n|_1. \tag{36}$$

Similarly to Equation (30), we finish the proof of Theorem 2.

#### *3.2. Existence*

Next, we will use the Leray–Schauder Theorem [33] to prove the existence of numerical solutions for the scheme (22).

**Theorem 3.** *Given two positive integers $J$, $N$, and $u^0 \in \mathbb{R}^{J-1}$, Equation (22) has a solution $u^n$ for $1\le n\le N$.*

**Proof.** We employ mathematical induction to prove Theorem 3. Since $u^0 \in \mathbb{R}^{J-1}$ is given, suppose $u^m$, $1\le m\le n-1$, are known; we prove that Equation (22) has a solution $u^n$.

At the beginning, we define the mapping $\mathcal{K} : \mathbb{R}^{J-1} \to \mathbb{R}^{J-1}$ by

$$\mathcal{K}(v) := -\frac{k}{6h}\big(v\Delta v + \Delta(v)^2\big) + k\delta_x^2 v + k^{a_1+1}\omega_0^{a_1}\delta_x^2 v - k^{a_2+1}\omega_0^{a_2}\delta_x^4 v - k\delta_x^4 v.$$

Then, $u^n$ is a solution of (22) if and only if

$$u^n = \mathcal{K}(u^n) + \tilde{f},$$

in which

$$\tilde{f} = u^{n-1} + k^{a\_1+1} \sum\_{p=1}^{n-1} \omega\_{n-p}^{a\_1} \delta\_x^2 u^p - k^{a\_2+1} \sum\_{p=1}^{n-1} \omega\_{n-p}^{a\_2} \delta\_x^4 u^p + k f^n.$$

Next, we need to prove that the mapping $\mathcal{G}(\cdot) = \mathcal{K}(\cdot) + \tilde{f}$ has a fixed point. We consider an open ball $\mathcal{L} = A(0,r)$ in $\mathbb{R}^{J-1}$ endowed with the norm $\|\cdot\|$ of (23). Suppose that, for some $\lambda > 1$ and some $u^n$ on the boundary of $\mathcal{L}$,

$$\lambda u^n = \mathcal{G}(u^n) = \mathcal{K}(u^n) + \tilde{f}. \tag{37}$$

Since $\langle v\Delta v + \Delta(v)^2, v\rangle = 0$, using Lemmas 5 and 6, we obtain

$$\langle\mathcal{K}(u^n), u^n\rangle \le 0.$$

Taking the inner product of (37) with $u^n$, we have

$$\lambda\|u^n\|^2 \le \langle\tilde{f}, u^n\rangle \le \|\tilde{f}\|\,\|u^n\| \le \frac{1}{2}\big(\|\tilde{f}\|^2 + \|u^n\|^2\big).$$

Thus,

$$\lambda \le \frac{1}{2}\cdot\frac{\|\tilde{f}\|^2 + \|u^n\|^2}{\|u^n\|^2} = \frac{1}{2}\Big(\frac{\|\tilde{f}\|^2}{\|u^n\|^2} + 1\Big) \le \frac{\|\tilde{f}\|^2}{2r^2} + \frac{1}{2}.$$

It is noted that the above inequality contradicts the hypothesis $\lambda > 1$ when $r$ is large. Hence, (37) has no solution on $\partial\mathcal{L}$. By the Leray–Schauder Theorem [33], there is a fixed point of $\mathcal{G}$ in the closure of $\mathcal{L}$. The proof of the existence theorem is finished.
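The argument is constructive in spirit: for $k$ small, $\mathcal{G}(u) = \mathcal{K}(u) + \tilde{f}$ is a contraction and Picard iteration converges to the solution of one time step. A hedged sketch (the orders $a_1, a_2$, the grid, the tiny step size, and the simplification $f^n = 0$ with the history sums in $\tilde f$ dropped are all illustrative assumptions, not the paper's experiment):

```python
import numpy as np

J, L = 8, 1.0
h = L / J
k = 1e-7                      # small step so the Picard map is a contraction
a1, a2 = 0.5, 0.5             # illustrative fractional orders (assumption)

def d2(v):
    w = np.zeros_like(v)
    w[1:-1] = (v[2:] - 2*v[1:-1] + v[:-2]) / h**2
    return w                  # boundary values stay 0, matching (22)

def d4(v):
    return d2(d2(v))          # the boundary-modified fourth difference

def K(v):
    """The map K of the existence proof, with omega_0^{a} = 1."""
    nonlinear = np.zeros_like(v)
    nonlinear[1:-1] = v[1:-1]*(v[2:] - v[:-2]) + (v[2:]**2 - v[:-2]**2)
    return (-k/(6*h))*nonlinear + (k + k**(a1+1))*d2(v) - (k + k**(a2+1))*d4(v)

x = np.linspace(0.0, L, J + 1)
u_prev = np.sin(np.pi * x)    # previous time level (n = 1, so no history sums)
f_tilde = u_prev.copy()       # tilde f of the proof with f^n = 0 (assumption)

u = u_prev.copy()
for _ in range(50):           # Picard iteration  u <- K(u) + tilde f
    u = K(u) + f_tilde

residual = np.max(np.abs(u - (K(u) + f_tilde)))
assert residual < 1e-10       # u is (numerically) a fixed point of G
```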

#### **4. Uniqueness and Convergence**

*4.1. Convergence*

Let

$$e_j^n = U_j^n - u_j^n, \quad 0\le j\le J,\ 1\le n\le N.$$

Subtracting (22) from (19), we obtain the following error equations

$$\begin{cases} \delta_t e_j^n + \dfrac{1}{6h}\big(e_j^n\Delta e_j^n + \Delta(e_j^n)^2\big) - \mathfrak{L}_1 e_j^n + \mathfrak{L}_2 e_j^n\\ \qquad = (R_1)_j^n - (R_2)_j^n - (R_3)_j^n - (R_4)_j^n - (R_5)_j^n - (R_6)_j^n - \dfrac{1}{6h}\big((R_7)_j^n + (R_8)_j^n\big), \quad 1\le j\le J-1,\ 1\le n\le N,\\ e_0^n = e_J^n = 0, \quad 1\le n\le N,\\ e_j^0 = 0, \quad 0\le j\le J, \end{cases} \tag{38}$$

where

$$\begin{cases} (R_7)_j^n = U_j^n\Delta e_j^n + \Delta(e_j^n U_j^n),\\ (R_8)_j^n = e_j^n\Delta U_j^n + \Delta(e_j^n U_j^n). \end{cases} \tag{39}$$

To complete the proof of convergence, we provide the following Lemmas.

**Lemma 9** ([34])**.** *(Discrete Gronwall's inequality) If $A_n$ is a non-negative real sequence satisfying*

$$A_n \le \tilde{c}_n + \sum_{m=0}^{n-1}\tilde{d}_m A_m, \qquad n\ge 0,$$

*where $\tilde{c}_n$ is a non-decreasing, non-negative sequence and $\tilde{d}_m \ge 0$, then it holds that*

$$A_n \le \tilde{c}_n\exp\Big(\sum_{m=0}^{n-1}\tilde{d}_m\Big), \quad n\ge 0.$$
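Lemma 9 can be sanity-checked by building the extremal sequence (taking the hypothesis with equality) and comparing it with the exponential bound; the data below are arbitrary:

```python
import math, random

random.seed(3)

# Random non-negative data satisfying the hypothesis of Lemma 9.
d = [random.uniform(0.0, 0.3) for _ in range(30)]
c = [1.0 + 0.1*m for m in range(30)]          # non-decreasing, non-negative

# Build the extremal sequence A_n = c_n + sum_{m<n} d_m A_m ...
A = []
for n in range(30):
    A.append(c[n] + sum(d[m]*A[m] for m in range(n)))

# ... and check it against the Gronwall bound A_n <= c_n * exp(sum_{m<n} d_m).
for n in range(30):
    bound = c[n] * math.exp(sum(d[m] for m in range(n)))
    assert A[n] <= bound + 1e-9
```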

**Lemma 10.** *For any $s, g \in V_h$, it holds that*

$$\begin{aligned} (i)&\ \langle g\Delta s, g\rangle + \langle\Delta(g)^2, s\rangle = 0;\\ (ii)&\ \langle g\Delta g, s\rangle + \langle\Delta(gs), g\rangle = 0. \end{aligned}$$

**Proof.** (i) By the definition of $\langle\cdot,\cdot\rangle$, we have

$$\begin{aligned} &\langle g\Delta s, g\rangle + \langle\Delta(g)^2, s\rangle\\ &= h\sum_{j=1}^{J-1} g_j(s_{j+1}-s_{j-1})g_j + h\sum_{j=1}^{J-1}(g_{j+1}g_{j+1}-g_{j-1}g_{j-1})s_j\\ &= h\sum_{j=2}^{J} g_{j-1}s_jg_{j-1} - h\sum_{j=1}^{J-1} g_js_{j-1}g_j + h\sum_{j=2}^{J} g_jg_js_{j-1} - h\sum_{j=1}^{J-1} g_{j-1}g_{j-1}s_j\\ &= hg_{J-1}s_Jg_{J-1} - hg_0s_1g_0 + hg_Js_{J-1}g_J - hg_1s_0g_1\\ &= 0. \end{aligned}$$

The proof of (ii) is similar to that of (i). Thus, Lemma 10 is proved.

**Lemma 11.** *When $U_0 = U_J = 0$ and $e_0 = e_J = 0$, it holds that*

$$k\sum_{n=1}^{N}\Big(\|(R_1)^n\| + \|(R_2)^n\| + \|(R_3)^n\| + \|(R_4)^n\| + \|(R_5)^n\| + \|(R_6)^n\|\Big) \le C(T)(k+h^2).$$

**Proof.** We use the conditions $U_0 = U_J = 0$ and $e_0 = e_J = 0$. Firstly, by (15) and Lemmas 1 and 2, we have

$$\begin{aligned} k\sum_{n=1}^{N}\|(R_1)^n\| + k\sum_{n=1}^{N}\|(R_3)^n\| &\le k\sum_{n=1}^{N}\sqrt{\sum_{j=1}^{J-1}h\big[C(n^{a_1-1}k^{a_1} + h^2)\big]^2}\\ &\le Ck\sum_{n=1}^{N}\big(k^{a_1}n^{a_1-1} + h^2\big) \le Ck\Big(\sum_{n=1}^{N}t_n^{a_1-1}k\Big) + C(Nk)h^2\\ &\le Ck\Big(\int_{t_0}^{t_N}s^{a_1-1}\,ds\Big) + C(T)h^2 \le Ck\Big(\frac{T^{a_1}}{a_1}\Big) + C(T)h^2 \le C(T)(k+h^2). \end{aligned} \tag{40}$$

Secondly,

$$k\sum_{n=1}^{N}\|(R_5)^n\| = k\sum_{n=1}^{N}\sqrt{\sum_{j=1}^{J-1}h\big((R_5)_j^n\big)^2} \le k\sum_{n=1}^{N}\sqrt{\sum_{j=1}^{J-1}h(Ch^2)^2} \le C(T)h^2. \tag{41}$$

Thirdly,

$$k\sum_{n=1}^{N}\|(R_6)^n\| = k\sum_{n=1}^{N}\sqrt{\sum_{j=1}^{J-1}h\big((R_6)_j^n\big)^2} \le k\sum_{n=1}^{N}\sqrt{\sum_{j=1}^{J-1}h(Ck)^2} \le C(T)k. \tag{42}$$

Finally, by (16), Lemmas 1 and 3, we have

$$\begin{aligned} k\sum_{n=1}^{N}\|(R_2)^n\| + k\sum_{n=1}^{N}\|(R_4)^n\| &\le k\sum_{n=1}^{N}\sqrt{\sum_{j=1}^{J-1}h\big[C(n^{a_2-1}k^{a_2} + h^2)\big]^2}\\ &\le Ck\Big(\int_{t_0}^{t_N}s^{a_2-1}\,ds\Big) + C(T)h^2\\ &\le Ck\Big(\frac{T^{a_2}}{a_2}\Big) + C(T)h^2 \le C(T)(k+h^2). \end{aligned} \tag{43}$$

Combining (40)–(43), we have

$$k\sum_{n=1}^{N}\Big(\|(R_1)^n\| + \|(R_2)^n\| + \|(R_3)^n\| + \|(R_4)^n\| + \|(R_5)^n\| + \|(R_6)^n\|\Big) \le C(T)(k+h^2). \tag{44}$$

Therefore, we are done with this proof.

**Lemma 12.** *Set $\hat{c}_0 := \max_{(x,t)\in[0,L]\times(0,T]}\{|u(x,t)|, |u_x(x,t)|\}$. For $U_0 = U_J = 0$, $e_0 = e_J = 0$, $1\le n\le N$, we have*

$$|\langle(R_8)^n, e^n\rangle| \le 3\hat{c}_0 h\|e^n\|^2.$$

**Proof.** By Lemma 10, for any $s, g \in V_h$, it holds that

$$\langle\Delta(gs), s\rangle = \frac{1}{2}\langle s_{j+1}\Delta_+ g + s_{j-1}\Delta_- g,\ s\rangle,$$

then, we obtain

$$\langle\Delta(e^nU^n), e^n\rangle = \frac{1}{2}\langle e_{j+1}^n\Delta_+U^n + e_{j-1}^n\Delta_-U^n,\ e^n\rangle, \quad 1\le n\le N. \tag{45}$$

Utilizing the boundary conditions $U_0 = U_J = 0$, $e_0 = e_J = 0$ and the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned} |\langle e_{j+1}^n\Delta_+U^n, e^n\rangle| &\le \|\Delta_+U^n\|_\infty\,|\langle e_{j+1}^n, e^n\rangle| = \|\Delta_+U^n\|_\infty\Big|\sum_{j=1}^{J-1}h\,e_{j+1}^ne_j^n\Big|\\ &\le \|\Delta_+U^n\|_\infty\sum_{j=1}^{J-1}\frac{h}{2}\big((e_{j+1}^n)^2 + (e_j^n)^2\big)\\ &= \frac{1}{2}\|\Delta_+U^n\|_\infty\Big(\sum_{j=1}^{J-1}h(e_{j+1}^n)^2 + \|e^n\|^2\Big)\\ &= \frac{1}{2}\|\Delta_+U^n\|_\infty\Big(\sum_{j=2}^{J}h(e_j^n)^2 + \|e^n\|^2\Big)\\ &\le \|\Delta_+U^n\|_\infty\|e^n\|^2, \quad 1\le n\le N. \end{aligned}$$

Additionally, we can get

$$|\langle e_{j-1}^n \Delta_- \mathcal{U}^n, e^n \rangle| \le \|\Delta_- \mathcal{U}^n\|_\infty \|e^n\|^2, \quad 1 \le n \le N.$$

Thus, we obtain

$$\begin{split} |\langle (R_8)^n, e^n \rangle| &= |\langle e^n \Delta \mathcal{U}^n + \Delta(e^n \mathcal{U}^n), e^n \rangle| \\ &= \Big| \langle e^n \Delta \mathcal{U}^n, e^n \rangle + \frac{1}{2}\langle e_{j+1}^n \Delta_+ \mathcal{U}^n + e_{j-1}^n \Delta_- \mathcal{U}^n, e^n \rangle \Big| \\ &\le \|\Delta \mathcal{U}^n\|_\infty \|e^n\|^2 + \frac{1}{2}\big( \|\Delta_+ \mathcal{U}^n\|_\infty + \|\Delta_- \mathcal{U}^n\|_\infty \big) \|e^n\|^2 \\ &\le 3\hat{c}_0 h \|e^n\|^2, \quad 1 \le n \le N. \end{split}$$

This completes the proof.

**Theorem 4.** (*L*²*-convergence*) *Suppose that problem (1)–(3) has a smooth solution* $u(x,t) \in C_{x,t}^{4,2}([0,L]\times(0,T])$ *and that* $\{\mathcal{U}_j^n \mid 0 \le j \le J,\ 1 \le n \le N\}$ *is the solution of difference scheme (22). Then*

$$\max\_{1 \le n \le N} ||\mathcal{U}^n - \mathfrak{u}^n|| \le \mathcal{C}(T)(k + h^2).$$

**Proof.** Taking the inner product of (38) with $e^n$, summing over $n$ from 1 to $N$, utilizing (26) and Lemma 8, and noting

$$
\langle e^n \Delta e^n + \Delta (e^n)^2, e^n \rangle = 0
$$

we can obtain

$$\|e^{N}\|^2 \le \|e^0\|^2 + 2k \sum_{n=1}^{N} \Big\langle R_1^n - R_2^n - R_3^n - R_4^n - R_5^n - R_6^n - \frac{1}{6h}(R_7^n + R_8^n), e^n \Big\rangle. \tag{46}$$

It is noted that when *e*<sup>0</sup> = 0, we have

$$\|e^{N}\|^2 \le 2k \sum_{n=1}^{N} \big( \|R_1^n\| + \|R_2^n\| + \|R_3^n\| + \|R_4^n\| + \|R_5^n\| + \|R_6^n\| \big) \|e^n\| - 2k \sum_{n=1}^{N} \Big\langle \frac{1}{6h}(R_7^n + R_8^n), e^n \Big\rangle. \tag{47}$$

Since

$$\begin{split} \langle R_7^n, e^n \rangle &= \langle \mathcal{U}^n \Delta e^n + \Delta(e^n \mathcal{U}^n), e^n \rangle \\ &= \sum_{j=1}^{J-1} h\big( \mathcal{U}_j^n \Delta e_j^n + \Delta(e_j^n \mathcal{U}_j^n) \big) e_j^n \\ &= \sum_{j=1}^{J-1} h\big( \mathcal{U}_j^n (e_{j+1}^n - e_{j-1}^n) + (e_{j+1}^n \mathcal{U}_{j+1}^n - e_{j-1}^n \mathcal{U}_{j-1}^n) \big) e_j^n \\ &= \sum_{j=1}^{J-1} h\big( (\mathcal{U}_j^n + \mathcal{U}_{j+1}^n) e_{j+1}^n - (\mathcal{U}_j^n + \mathcal{U}_{j-1}^n) e_{j-1}^n \big) e_j^n \\ &= \sum_{j=1}^{J-1} h(\mathcal{U}_j^n + \mathcal{U}_{j+1}^n) e_{j+1}^n e_j^n - \sum_{j=1}^{J-1} h(\mathcal{U}_j^n + \mathcal{U}_{j-1}^n) e_{j-1}^n e_j^n \\ &= \sum_{j=1}^{J-1} h(\mathcal{U}_j^n + \mathcal{U}_{j+1}^n) e_{j+1}^n e_j^n - \sum_{j=0}^{J-2} h(\mathcal{U}_{j+1}^n + \mathcal{U}_j^n) e_j^n e_{j+1}^n \\ &= h(\mathcal{U}_{J-1}^n + \mathcal{U}_J^n) e_J^n e_{J-1}^n - h(\mathcal{U}_0^n + \mathcal{U}_1^n) e_1^n e_0^n \\ &= 0, \quad 1 \le n \le N. \end{split} \tag{48}$$

Combining (48), Lemma 12 and inequality (40), (47) can be written as

$$\|e^{N}\|^2 \le 2k \sum_{n=1}^{N} \big( \|R_1^n\| + \|R_2^n\| + \|R_3^n\| + \|R_4^n\| + \|R_5^n\| + \|R_6^n\| \big) \|e^n\| + \hat{c}_0 k \sum_{n=1}^{N} \|e^n\|^2.$$

Taking an appropriate $M$ such that $\|e^M\| = \max_{0 \le n \le N} \|e^n\|$ and using Lemma 11, we obtain

$$\begin{split} \|e^{N}\| \le \|e^{M}\| &\le 2k\sum_{n=1}^{N} \big( \|R_1^n\| + \|R_2^n\| + \|R_3^n\| + \|R_4^n\| + \|R_5^n\| + \|R_6^n\| \big) + \hat{c}_0 k \sum_{n=1}^{N} \|e^n\| \\ &\le \mathcal{C}(T)(k + h^2) + \hat{c}_0 k \sum_{n=1}^{N} \|e^n\|. \end{split} \tag{49}$$

Further

$$(1 - \hat{c}_0 k) \|e^N\| \le \mathcal{C}(T)(k + h^2) + \hat{c}_0 k \sum_{n=0}^{N-1} \|e^n\|. \tag{50}$$

Using the discrete Gronwall inequality, for $k \le \frac{1}{2\hat{c}_0}$, we obtain

$$\|e^{N}\| \le 2\exp\{2\hat{c}\_{0}Nk\}\mathcal{C}(T)(k+h^{2}) \le \mathcal{C}(T)(k+h^{2}).$$

*4.2. Uniqueness*

**Theorem 5.** *Under the assumptions of Theorem 4, if h is small enough and* $k = o(h^{\frac{3}{4}})$*, then the difference scheme (22) has a unique solution.*

**Proof.** Set $u^n \in \mathbb{R}^{J-1}$ and $v^n \in \mathbb{R}^{J-1}$, $0 \le n \le N$, to be solutions of (22). Since $u^0 = v^0$, we assume $u^m = v^m$ for $0 \le m \le n-1$. Next, we need to prove $u^n = v^n$.

First, using (22), we have

$$\begin{split} &\delta_t(u_j^n - v_j^n) - \delta_x^2(u_j^n - v_j^n) + \delta_x^4(u_j^n - v_j^n) + \frac{1}{6h}\big( u_j^n \Delta u_j^n + \Delta(u_j^n)^2 - v_j^n \Delta v_j^n - \Delta(v_j^n)^2 \big) \\ &= k^{\alpha_1} \sum_{p=1}^{n} \omega_{n-p}^{\alpha_1} \delta_x^2(u_j^p - v_j^p) - k^{\alpha_2} \sum_{p=1}^{n} \omega_{n-p}^{\alpha_2} \delta_x^4(u_j^p - v_j^p). \end{split} \tag{51}$$

Second, taking the inner product of (51) with *<sup>u</sup><sup>n</sup>* <sup>−</sup> *<sup>v</sup>n*, and using Lemmas 5, 6, and 8, we obtain

$$\begin{split} &\frac{1}{2k}(\|\boldsymbol{u}^{n}-\boldsymbol{v}^{n}\|^{2}-\|\boldsymbol{u}^{n-1}-\boldsymbol{v}^{n-1}\|^{2}) \\ &\leq -\frac{1}{6h}\langle\boldsymbol{u}^{n}\Delta\boldsymbol{u}^{n}+\Delta(\boldsymbol{u}^{n})^{2}-\boldsymbol{v}^{n}\Delta\boldsymbol{v}^{n}-\Delta(\boldsymbol{v}^{n})^{2},\boldsymbol{u}^{n}-\boldsymbol{v}^{n}\rangle \\ &= -\frac{1}{6h}\langle\boldsymbol{u}^{n}\Delta(\boldsymbol{u}^{n}-\boldsymbol{v}^{n})+(\boldsymbol{u}^{n}-\boldsymbol{v}^{n})\Delta\boldsymbol{v}^{n}+\Delta(\boldsymbol{u}^{n}-\boldsymbol{v}^{n})(\boldsymbol{u}^{n}+\boldsymbol{v}^{n}),\boldsymbol{u}^{n}-\boldsymbol{v}^{n}\rangle. \end{split}$$

Since

$$
\begin{split} & \langle \boldsymbol{u}^{\boldsymbol{n}} \Delta (\boldsymbol{u}^{\boldsymbol{n}} - \boldsymbol{v}^{\boldsymbol{n}}) + (\boldsymbol{u}^{\boldsymbol{n}} - \boldsymbol{v}^{\boldsymbol{n}}) \Delta \boldsymbol{v}^{\boldsymbol{n}} + \Delta (\boldsymbol{u}^{\boldsymbol{n}} - \boldsymbol{v}^{\boldsymbol{n}}) (\boldsymbol{u}^{\boldsymbol{n}} + \boldsymbol{v}^{\boldsymbol{n}}), \boldsymbol{u}^{\boldsymbol{n}} - \boldsymbol{v}^{\boldsymbol{n}} \rangle \\ & = \langle (\boldsymbol{u}^{\boldsymbol{n}} - \boldsymbol{v}^{\boldsymbol{n}}) \Delta \boldsymbol{v}^{\boldsymbol{n}} + \Delta (\boldsymbol{v}^{\boldsymbol{n}} (\boldsymbol{u}^{\boldsymbol{n}} - \boldsymbol{v}^{\boldsymbol{n}})), \boldsymbol{u}^{\boldsymbol{n}} - \boldsymbol{v}^{\boldsymbol{n}} \rangle. \end{split} $$

Then, since $u^{n-1} = v^{n-1}$ by the induction hypothesis,

$$\begin{split} \|\boldsymbol{u}^{n} - \boldsymbol{v}^{n}\|^{2} &\leq \|\boldsymbol{u}^{n-1} - \boldsymbol{v}^{n-1}\|^{2} \\ &- \frac{k}{3h} \langle (\boldsymbol{u}^{n} - \boldsymbol{v}^{n}) \Delta \boldsymbol{v}^{n} + \Delta (\boldsymbol{v}^{n}(\boldsymbol{u}^{n} - \boldsymbol{v}^{n})), \boldsymbol{u}^{n} - \boldsymbol{v}^{n} \rangle \\ &\leq \frac{k}{3h} |\langle (\boldsymbol{u}^{n} - \boldsymbol{v}^{n}) \Delta \boldsymbol{v}^{n} + \Delta (\boldsymbol{v}^{n}(\boldsymbol{u}^{n} - \boldsymbol{v}^{n})), \boldsymbol{u}^{n} - \boldsymbol{v}^{n} \rangle|. \end{split} \tag{52}$$

Further, we have

$$\big| \langle (u^n - v^n) \Delta v^n + \Delta(v^n(u^n - v^n)), u^n - v^n \rangle \big| \le \|\Delta v^n\|_\infty \|u^n - v^n\|^2 + \frac{1}{2}\big( \|\Delta_+ v^n\|_\infty + \|\Delta_- v^n\|_\infty \big) \|u^n - v^n\|^2. \tag{53}$$

For the term $\|\Delta v^n\|_\infty$, we have

$$\begin{split} \|\Delta v^n\|_\infty &= \max_{1 \le j \le J-1} \{ |v_{j+1}^n - v_{j-1}^n| \} \\ &\le \max_{1 \le j \le J-1} \{ |v_{j+1}^n - V_{j+1}^n| + |V_{j+1}^n - V_{j-1}^n| + |V_{j-1}^n - v_{j-1}^n| \} \\ &\le 2\|V^n - v^n\|_\infty + Ch \le 2h^{-\frac{1}{2}}\|V^n - v^n\| + Ch \\ &\le Ch^{-\frac{1}{2}}(k + h^2) + Ch. \end{split} \tag{54}$$

Additionally,

$$\big| \langle (u^n - v^n) \Delta v^n + \Delta(v^n(u^n - v^n)), u^n - v^n \rangle \big| \le C\big[ h^{-\frac{1}{2}}(k + h^2) + h \big] \|u^n - v^n\|^2. \tag{55}$$

Thus, we obtain

$$||u^n - v^n|| \le \mathcal{C}(k^2h^{-\frac{3}{2}} + kh^{\frac{1}{2}} + k)||u^n - v^n||.\tag{56}$$

Using inequality (56), we have $\|u^n - v^n\| = 0$ for $k = o(h^{\frac{3}{4}})$ as $h \to 0$, which finishes the proof.

#### **5. Numerical Results**

In this section, we solve problem (1)–(3) with *L* = *T* = 1 by difference scheme (22). We provide three iterative methods [32,35,36] for solving the nonlinear system (22): the Besse relaxation algorithm (Besse), the Newton iterative method (Newton), and the linearized iterative algorithm (linearized). Let *MaxStep* = 300 and *eps* = $1.0 \times 10^{-5}$, and define
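As a sketch of how the stopping criteria *MaxStep* and *eps* are typically used, the following minimal Newton iteration solves a generic nonlinear system $F(u) = 0$; the residual `F`, Jacobian `J`, and the toy system below are illustrative stand-ins, not the actual scheme (22):

```python
import numpy as np

def newton_solve(F, J, u0, eps=1.0e-5, max_step=300):
    """Generic Newton iteration for F(u) = 0, with the stopping criteria
    used in the experiments (eps = 1.0e-5, MaxStep = 300)."""
    u = u0.copy()
    for step in range(1, max_step + 1):
        delta = np.linalg.solve(J(u), -F(u))  # Newton correction
        u += delta
        if np.linalg.norm(delta, np.inf) < eps:  # converged
            return u, step
    return u, max_step

# Toy usage: solve u_i^2 - 2 = 0 componentwise (root sqrt(2)).
F = lambda u: u**2 - 2.0
J = lambda u: np.diag(2.0 * u)
root, steps = newton_solve(F, J, np.ones(3))
```

The linearized algorithm replaces the full Jacobian solve with a fixed linear operator, which is why its per-step cost (and, per Table 2, its total iteration count) is lower.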

$$\begin{aligned} E(h,k) &= \sqrt{h \sum_{j=1}^{J-1} (\mathcal{U}_j^N - u_j^N)^2}, \\ rate^x &= \log_2\left(\frac{E(2h,k)}{E(h,k)}\right), \quad rate^t = \log_2\left(\frac{E(h,2k)}{E(h,k)}\right). \end{aligned}$$
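The error measure and observed convergence rates above translate directly into code; `U` and `u_exact` below are hypothetical arrays holding the numerical and exact solutions at the interior nodes of the final time level:

```python
import numpy as np

def l2_error(U, u_exact, h):
    """Discrete L2 error E(h,k) = sqrt(h * sum_j (U_j^N - u_j^N)^2)."""
    return np.sqrt(h * np.sum((U - u_exact) ** 2))

def conv_rate(E_coarse, E_fine):
    """Observed order: log2(E(2h,k)/E(h,k)), or log2(E(h,2k)/E(h,k)) in time."""
    return np.log2(E_coarse / E_fine)

# Toy check: an O(h^2) error model should report a rate near 2.
h = 1.0 / 64
E_h, E_2h = 3.0 * h**2, 3.0 * (2 * h) ** 2
rate = conv_rate(E_2h, E_h)
```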

**Example 1.** *In the first example, we consider the initial condition ϕ*(*x*) = sin 2*πx, the source term*

$$\begin{split} f(x,t) &= \sin 2\pi x \Big( -\frac{2\alpha t^{\alpha-1}}{\Gamma(\alpha+1)} - \frac{2(\alpha+1)t^{\alpha}}{\Gamma(\alpha+2)} \Big) \\ &\quad + \pi \sin 4\pi x \Big( 1 - \frac{2t^{\alpha}}{\Gamma(\alpha+1)} - \frac{t^{\alpha+1}}{\Gamma(\alpha+2)} \Big)^2 \\ &\quad + 4\pi^2 \sin 2\pi x \Big( \frac{t^{\alpha_1}}{\Gamma(\alpha_1+1)} - \frac{2t^{\alpha_1+\alpha}}{\Gamma(\alpha+\alpha_1+1)} - \frac{t^{\alpha_1+\alpha+1}}{\Gamma(\alpha+\alpha_1+2)} \Big) \\ &\quad + 16\pi^4 \sin 2\pi x \Big( \frac{t^{\alpha_2}}{\Gamma(\alpha_2+1)} - \frac{2t^{\alpha_2+\alpha}}{\Gamma(\alpha+\alpha_2+1)} - \frac{t^{\alpha_2+\alpha+1}}{\Gamma(\alpha+\alpha_2+2)} \Big) \\ &\quad + \big( 16\pi^4 \sin 2\pi x + 4\pi^2 \sin 2\pi x \big) \Big( 1 - \frac{2t^{\alpha}}{\Gamma(\alpha+1)} - \frac{t^{\alpha+1}}{\Gamma(\alpha+2)} \Big). \end{split}$$

*and the exact solution is*

$$u(\mathbf{x},t) = \sin 2\pi \mathbf{x} (1 - \frac{2t^{\alpha}}{\Gamma(\alpha+1)} - \frac{t^{\alpha+1}}{\Gamma(\alpha+2)}),$$

*where* $\alpha$, $0 < \alpha < 1$, *is the regularity parameter.*

Table 1 lists the *L*² norm errors, the corresponding spatial convergence rates, and the total number of iterations of our scheme under different parameters $\alpha_1$ and $\alpha_2$. Taking the temporal step *k* = 1/1024 and *α* = 0.50, we see from Table 1 that the spatial convergence order is about two. Comparing the three iterative methods, their numerical results differ only slightly in the spatial direction.

Fix the spatial step *h* = 1/*J* = 1/1024 and *α* = 0.50. Table 2 shows that the temporal convergence order is about one. Comparing the three iteration methods, we find that the temporal convergence order of the Besse relaxation algorithm is not very stable. In addition, the total number of iterations of the linearized iterative algorithm is smaller than that of the Newton iterative method.

With *α* = 0.5 fixed, Figure 1 shows the spatial convergence order for *N* = 1024, and Figure 2 shows the convergence order in the time direction for *J* = 1024. The numerically observed convergence orders are in good agreement with the theoretical analysis.

**Figure 1.** The error and convergence orders in space with *α* = 0.50 and *k* = 1/1024, for Example 1.

**Figure 2.** The error and convergence orders in time with *α* = 0.50 and *h* = 1/1024, for Example 1.




**Table 1.** *Cont*.

**Table 2.** The errors and convergence rates when *h* = 1/1024 and *α* = 0.50, for Example 1.




**Example 2.** *In the second Example, we take the exact solution*

$$u(x, t) = \sin \pi x \, \frac{2t^{\alpha}}{\Gamma(\alpha + 1)}, \quad 0 < \alpha < 1.$$

*Correspondingly, the initial condition is u*0(*x*) = 0 *and the inhomogeneous term is*

$$\begin{split} f(x,t) &= \sin \pi x \, \frac{2\alpha t^{\alpha-1}}{\Gamma(\alpha+1)} + 2\pi \sin 2\pi x \Big( \frac{t^{\alpha}}{\Gamma(\alpha+1)} \Big)^2 + \frac{2\pi^2 \sin(\pi x)\, t^{\alpha+\alpha_1}}{\Gamma(\alpha+\alpha_1+1)} \\ &\quad + \frac{2\pi^4 \sin(\pi x)\, t^{\alpha+\alpha_2}}{\Gamma(\alpha+\alpha_2+1)} + \big( \pi^4 \sin \pi x + \pi^2 \sin \pi x \big) \frac{2t^{\alpha}}{\Gamma(\alpha+1)}. \end{split}$$

Tables 3 and 4 show that the spatial convergence order is about two and the temporal convergence order is about one, respectively. The numerical results behave the same as in Example 1, and the convergence orders are in good agreement with the theoretical analysis.




**Table 3.** *Cont*.

**Table 4.** The errors and convergence rates when *h* = 1/1024 and *α* = 0.50, for Example 2.


#### **6. Concluding Remarks**

In this paper, we propose an implicit difference scheme for a class of nonlinear fourth-order equations with multi-term Riemann–Liouville fractional integral kernels. For the nonlinear convection term, we use the Galerkin method based on piecewise linear test functions. The Riemann–Liouville fractional integral terms are treated by convolution quadrature, and the standard central difference approximation is used to discretize the spatial derivatives. Stability and convergence are rigorously proved by the discrete energy method, and the existence and uniqueness of the numerical solutions of the nonlinear systems are proved strictly. Lastly, we introduce and compare three iterative methods for solving the nonlinear systems.

**Author Contributions:** Conceptualization, X.J.; methodology, X.Y. and X.J.; software, H.Z.; validation, X.Y., H.Z., and Q.T.; formal analysis, X.Y., and X.J.; writing—original draft preparation, X.J.; and writing—review and editing, X.Y., H.Z., and Q.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work was supported by the National Natural Science Foundation of China (12126321), the Scientific Research Fund of Hunan Provincial Education Department (21B0550), and the Hunan Provincial Natural Science Foundation of China (2022JJ50083, 2021JJ30209).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** All the data were computed using our algorithm.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


## *Article* **A Fourth-Order Time-Stepping Method for Two-Dimensional, Distributed-Order, Space-Fractional, Inhomogeneous Parabolic Equations**

**Muhammad Yousuf 1,\*, Khaled M. Furati <sup>1</sup> and Abdul Q. M. Khaliq <sup>2</sup>**


**Abstract:** Distributed-order, space-fractional diffusion equations are used to describe physical processes that lack power-law scaling. A fourth-order-accurate, *A*-stable time-stepping method was developed, analyzed, and implemented to solve inhomogeneous parabolic problems having Riesz-space-fractional, distributed-order derivatives. The considered problem was transformed into a multi-term, space-fractional problem using Simpson's three-eighths rule. The method is based on an approximation of matrix exponential functions using fourth-order diagonal Padé approximation. The Gaussian quadrature approach is used to approximate the integral matrix exponential function, along with the inhomogeneous term. Partial fraction splitting is used to address the issues regarding stability and computational efficiency. Convergence of the method was proved analytically and demonstrated through numerical experiments. CPU time was recorded in these experiments to show the computational efficiency of the method.

**Keywords:** distributed-order; Riesz-space-fractional diffusion; Padé approximation; splitting technique

#### **1. Introduction**

Complex processes that obey a mixture of power laws and flexible variations in space are modeled by distributed-order, space-fractional differential equations. Such equations describe phenomena where the order of differentiation varies over a given range [1,2]. Due to their nonlocal properties, distributed-order models can capture more complex dynamical systems than fractional-order or classical models.

Consider the following two-dimensional Riesz-space, distributed-order, fractional, inhomogeneous diffusion equation:

$$\frac{\partial u}{\partial t} = K_x \int_{1}^{2} P(\alpha) \frac{\partial^{\alpha} u}{\partial |x|^{\alpha}} \, d\alpha + K_y \int_{1}^{2} Q(\beta) \frac{\partial^{\beta} u}{\partial |y|^{\beta}} \, d\beta + f(x, y, t), \ (x, y, t) \in \Omega \times (0, T], \tag{1}$$

with initial condition

$$u(x, y, 0) = u_0(x, y), \qquad (x, y) \in \Omega,$$

and boundary condition

$$u(x, y, t) = 0, \qquad (x, y) \in \partial\Omega,$$

**Citation:** Yousuf, M.; Furati, K.M.; Khaliq, A.Q.M. A Fourth-Order Time-Stepping Method for Two-Dimensional, Distributed-Order, Space-Fractional, Inhomogeneous Parabolic Equations. *Fractal Fract.* **2022**, *6*, 592. https://doi.org/ 10.3390/fractalfract6100592

Academic Editors: Libo Feng, Yang Liu and Lin Liu

Received: 12 September 2022 Accepted: 4 October 2022 Published: 13 October 2022



where Ω = (*a*, *b*) × (*c*, *d*). The coefficients *P* and *Q* are non-negative functions defined on (1, 2] that are not identically zero and satisfy

$$0 < \int\_1^2 P(\alpha)d\alpha < \infty, \ 0 < \int\_1^2 Q(\beta)d\beta < \infty.$$

The inhomogeneous source term *f* is assumed to be sufficiently smooth. The distributed-order, space-fractional derivative terms are approximated using Simpson's three-eighths rule. For any $1 < \alpha_i \le 2$, the two-sided Riesz-space-fractional derivative operator $\frac{\partial^{\alpha_i}}{\partial |x|^{\alpha_i}}$ on a finite interval $(a, b)$ is given by

$$\frac{\partial^{\alpha_i}}{\partial |x|^{\alpha_i}} = -\frac{1}{2\cos\frac{\pi \alpha_i}{2}} \left[ {}_aD_x^{\alpha_i} + {}_xD_b^{\alpha_i} \right], \ 1 < \alpha_i \le 2, \tag{2}$$

where ${}_aD_x^{\alpha_i}$ and ${}_xD_b^{\alpha_i}$ are the left and right Riemann–Liouville fractional derivatives, which are, respectively,

$$\begin{split} {}_aD_x^{\alpha_i} u(x, y, t) &= \frac{1}{\Gamma(2 - \alpha_i)} \frac{\partial^2}{\partial x^2} \int_a^x (x - \xi)^{1 - \alpha_i} u(\xi, y, t) \, d\xi, \\ {}_xD_b^{\alpha_i} u(x, y, t) &= \frac{1}{\Gamma(2 - \alpha_i)} \frac{\partial^2}{\partial x^2} \int_x^b (\xi - x)^{1 - \alpha_i} u(\xi, y, t) \, d\xi. \end{split}$$

The Riesz-space-fractional derivative operator $\frac{\partial^{\beta_j}}{\partial |y|^{\beta_j}}$ on $(c, d)$ is defined similarly.

A recent review article [3] provided a state-of-the-art introduction to the mathematics of distributed-order fractional calculus, along with analytical and numerical methods, and gave an extensive overview of its applications to viscoelasticity, transport processes, and control theory.

Anomalous diffusion phenomena take place in many complex systems, such as subsurface flows, human tissues, viscoelastic materials, and plasma. In such systems, the diffusion is slower or faster than normal, the probability density function is no longer Gaussian, and the mean-square displacement is not linear in time; see, for example, [4] and the references therein. As such, the predictions obtained through integer-order local models do not match the collected data and observed behaviors. Riesz-space-fractional diffusion equations provide a powerful mathematical tool for modeling such phenomena. In these models, the diffusion rate depends on the global state of the field. In particular, the order of the Riesz fractional derivatives identifies the power-law scaling of the physical process.

Many physical processes, however, lack power-law scaling and cannot be characterized by specific scaling exponents. Among these processes are several cases of accelerating superdiffusion [5–7]. Such processes can be described by Riesz-space, distributed-order fractional diffusion equations.

There are many applications of the distributed-order fractional operators. For example, applications to fields such as viscoelasticity, transport processes, and control theory were discussed by Ding et al. in [3], and Patnaik et al. [8] discussed applications of variable- and distributed-order fractional operators to the dynamic analysis of nonlinear oscillators.

Analytical solutions for some problems were constructed by Caputo [5] and Sokolov et al. [6]. The well-posedness of particular classes of such problems was studied by Jia et al. [9]. Numerical solutions for distributed-order, space-fractional models on bounded domains are in high demand, since analytical solutions are not in general available. Wang et al. [2] developed a second-order-accurate, implicit numerical method for one- and two-dimensional Riesz-space, distributed-order fractional advection–dispersion equations; their method is based on a midpoint quadrature rule for the Riesz-space, distributed-order term. Li et al. [10] proposed an unconditionally stable, second-order Crank–Nicolson method for a one-dimensional Riesz-space, distributed-order diffusion equation, based on midpoint quadrature and the finite volume method. For the two-dimensional Riesz-space, distributed-order advection–diffusion equation, a Crank–Nicolson ADI Galerkin–Legendre spectral method was developed by Zhang et al. [11]. Jia and Wang [12] designed a fast finite difference method for distributed-order, space-fractional partial differential equations on convex domains. Qiao et al. [13] analyzed the velocity distributions of the distributed/variable-order fractional Maxwell governing equations under specific conditions and discussed the effects of different parameters on the solution.

The time-stepping methods in all the references mentioned above are of second order. The purpose of this work was to develop a computationally efficient, strongly stable, fourth-order time-stepping method suitable for solving problems such as (1). The numerical method is obtained by first applying Simpson's three-eighths rule to the distributed-order, space-fractional derivative term. The resulting multi-term fractional derivative equation is then discretized in space using the fractional centered-difference formulas introduced by Ortigueira [14]. The exact solution of the resulting semi-discretized system is written using Duhamel's principle [15]. This exact solution involves a matrix exponential function and the integral of a matrix exponential function applied to the inhomogeneous term. Matrix exponential functions are approximated by the diagonal (2,2)-Padé approximation; the rationale behind this choice is that only one algebraic system needs to be solved at each time step, so we can implement this fourth-order method with the same computational complexity as a first-order method. We utilize an approach from a class of single-step, fully discrete numerical methods developed by Brenner et al. [16], which is summarized in the book by Thomée [15].

The paper is organized as follows. The Riesz-space distributed-order fractional derivative discretization is presented in Section 2. In Section 3, the time-stepping method is developed, and an implementation algorithm is provided. The convergence theorem of the numerical method is given in Section 4. Numerical experiments are shown in Section 5. Solution profiles and convergence tables, along with CPU times, are also given in the same section. Finally, some concluding remarks are included in Section 6.

#### **2. Distributed-Order Space-Fractional Derivative Approximation**

Let $1 = \alpha_0 < \alpha_1 < \cdots < \alpha_{N_1} = 2$ and $1 = \beta_0 < \beta_1 < \cdots < \beta_{N_2} = 2$ be uniform discretizations of the interval $[1, 2]$. Let $\Delta\alpha = 1/N_1$ and $\Delta\beta = 1/N_2$. By applying the fourth-order Simpson's three-eighths rule to the distributed terms, we obtain

$$\int_{1}^{2} P(\alpha) \frac{\partial^{\alpha} u}{\partial |x|^{\alpha}} \, d\alpha = \sum_{i=1}^{N_1} a_i P(\alpha_i) \frac{\partial^{\alpha_i} u}{\partial |x|^{\alpha_i}} + O((\Delta \alpha)^4), \tag{3}$$

$$\int_{1}^{2} Q(\beta) \frac{\partial^{\beta} u}{\partial |y|^{\beta}} \, d\beta = \sum_{j=1}^{N_2} b_j Q(\beta_j) \frac{\partial^{\beta_j} u}{\partial |y|^{\beta_j}} + O((\Delta \beta)^4), \tag{4}$$

where *ai* and *bj* are the coefficients of Simpson's three-eighths rule.
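For reference, composite Simpson's three-eighths weights can be generated as follows; this is a sketch, and the paper's exact indexing of the coefficients $a_i$, $b_j$ at the endpoint nodes may differ:

```python
import numpy as np

def simpson38_weights(a, b, n):
    """Composite Simpson's three-eighths weights on [a, b] with n
    subintervals (n must be a multiple of 3); nodes are a + i*(b-a)/n."""
    assert n % 3 == 0
    h = (b - a) / n
    w = np.full(n + 1, 3.0)
    w[0] = w[-1] = 1.0
    w[3:-1:3] = 2.0  # nodes shared by two adjacent 3/8 panels
    return 3.0 * h / 8.0 * w

# The rule is exact for cubics: integrate x^3 over [1, 2] (exact 15/4).
x = np.linspace(1.0, 2.0, 10)  # n = 9 subintervals
approx = simpson38_weights(1.0, 2.0, 9) @ x**3
```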

To approximate the Riesz derivatives on the right-hand sides of (3) and (4), we use the fractional centered difference introduced by Ortigueira [14]. We take $x_m = a + mh_x$, $m = 0, 1, \ldots, M$, with $h_x = (b - a)/M$, as the spatial mesh points. Suppose $u(x)$ is a sufficiently smooth function defined for $-\infty < x < \infty$. Then, for $i = 1, 2, \cdots, N_1$, we have

$$\frac{d^{\alpha_i}}{d|x|^{\alpha_i}} u(x) = -\frac{1}{2\cos\frac{\pi \alpha_i}{2}} \left[ {}_{-\infty}D_x^{\alpha_i} + {}_xD_\infty^{\alpha_i} \right] u(x) = \frac{-1}{h_x^{\alpha_i}} \Delta_{h_x}^{\alpha_i} u(x) + O(h_x^2), \tag{5}$$

where

$$\Delta_{h_x}^{\alpha_i} u(x) = \sum_{j=-\infty}^{\infty} g_j^{(\alpha_i)} u(x - jh_x), \tag{6}$$

$$g_j^{(\alpha_i)} = \frac{(-1)^j \, \Gamma(1 + \alpha_i)}{\Gamma(\alpha_i/2 - j + 1)\, \Gamma(\alpha_i/2 + j + 1)}, \qquad j = 0, \pm 1, \pm 2, \ldots$$
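The coefficients $g_j^{(\alpha_i)}$ can be generated without evaluating Gamma functions at their poles by using the ratio of successive terms; the recurrence below follows directly from the formula above:

```python
import math

def centered_diff_coeffs(alpha, n):
    """Fractional centered-difference coefficients g_0, ..., g_n via the
    stable recurrence g_{j+1} = g_j * (j - alpha/2) / (alpha/2 + j + 1);
    by symmetry, g_{-j} = g_j."""
    g = [math.gamma(1.0 + alpha) / math.gamma(alpha / 2.0 + 1.0) ** 2]
    for j in range(n):
        g.append(g[j] * (j - alpha / 2.0) / (alpha / 2.0 + j + 1.0))
    return g

# Sanity check: alpha = 2 recovers the classical second-difference
# stencil (g_0, g_1, g_2, ...) = (2, -1, 0, ...).
g = centered_diff_coeffs(2.0, 4)
```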

If $u(x)$ vanishes outside the interval $(a, b)$ and $u_m = u(x_m)$ for $m = 1, \ldots, M - 1$, then we have

$$\Delta_{h_x}^{\alpha_i} u(x_m) = g_{m-1}^{(\alpha_i)} u_1 + \cdots + g_0^{(\alpha_i)} u_m + \cdots + g_{M-m-1}^{(\alpha_i)} u_{M-1}.$$

We can write this system of equations as:

$$\frac{\Delta_{h_x}^{\alpha_i} \mathbf{u}}{h_x^{\alpha_i}} = G_x^{(\alpha_i)} \mathbf{u},$$

with

$$G_x^{(\alpha_i)} = \frac{1}{h_x^{\alpha_i}}
\begin{bmatrix}
g_0^{(\alpha_i)} & g_1^{(\alpha_i)} & g_2^{(\alpha_i)} & \cdots & g_{M-4}^{(\alpha_i)} & g_{M-3}^{(\alpha_i)} & g_{M-2}^{(\alpha_i)} \\
g_1^{(\alpha_i)} & g_0^{(\alpha_i)} & g_1^{(\alpha_i)} & \cdots & g_{M-5}^{(\alpha_i)} & g_{M-4}^{(\alpha_i)} & g_{M-3}^{(\alpha_i)} \\
g_2^{(\alpha_i)} & g_1^{(\alpha_i)} & g_0^{(\alpha_i)} & g_1^{(\alpha_i)} & g_2^{(\alpha_i)} & \cdots & g_{M-4}^{(\alpha_i)} \\
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\
g_{M-4}^{(\alpha_i)} & \cdots & g_2^{(\alpha_i)} & g_1^{(\alpha_i)} & g_0^{(\alpha_i)} & g_1^{(\alpha_i)} & g_2^{(\alpha_i)} \\
g_{M-3}^{(\alpha_i)} & g_{M-4}^{(\alpha_i)} & \cdots & g_2^{(\alpha_i)} & g_1^{(\alpha_i)} & g_0^{(\alpha_i)} & g_1^{(\alpha_i)} \\
g_{M-2}^{(\alpha_i)} & g_{M-3}^{(\alpha_i)} & g_{M-4}^{(\alpha_i)} & \cdots & g_2^{(\alpha_i)} & g_1^{(\alpha_i)} & g_0^{(\alpha_i)}
\end{bmatrix}, \quad
\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ \vdots \\ u_{M-3} \\ u_{M-2} \\ u_{M-1} \end{bmatrix}.$$

Thus, for each *αi*, we have the space-fractional derivative approximations

$$\frac{d^{\alpha_i}}{d|x|^{\alpha_i}} \approx -G_x^{(\alpha_i)}. \tag{7}$$

Similarly, by considering the spatial nodes $y_n = c + nh_y$, $n = 0, 1, \ldots, N$, with $h_y = (d - c)/N$, we obtain the following approximation:

$$\frac{d^{\beta_j}}{d|y|^{\beta_j}} \approx -G_y^{(\beta_j)}. \tag{8}$$

Using the approximations (7) and (8), the distributed-order terms can be approximated as

$$\int_{1}^{2} P(\alpha) \frac{\partial^{\alpha}}{\partial |x|^{\alpha}} \, d\alpha \approx -\sum_{i=1}^{N_1} a_i P(\alpha_i)\, G_x^{(\alpha_i)} = -G_x^{\alpha}, \quad \int_{1}^{2} Q(\beta) \frac{\partial^{\beta}}{\partial |y|^{\beta}} \, d\beta \approx -\sum_{j=1}^{N_2} b_j Q(\beta_j)\, G_y^{(\beta_j)} = -G_y^{\beta}. \tag{9}$$

By applying the Riesz derivative approximation to Equation (1), the following semidiscrete system is obtained:

$$\frac{d\mathbf{u}}{dt} + A\mathbf{u} = \mathbf{f}(t),\tag{10}$$

where $A = K_x G_x^{\alpha} \otimes I_{N-1} + I_{M-1} \otimes K_y G_y^{\beta}$ is an $(M-1)(N-1) \times (M-1)(N-1)$ matrix, $I_{M-1}$ and $I_{N-1}$ are identity matrices of orders $M-1$ and $N-1$, and $\mathbf{u}$ is the $(M-1)(N-1) \times 1$ vector consisting of the columns of the matrix $[u_{i,j}]$, where $u_{i,j} = u(x_i, y_j)$, $1 \le i \le M-1$, $1 \le j \le N-1$. The inhomogeneous term $\mathbf{f}(t) = [\mathbf{f}_1, \ldots, \mathbf{f}_{N-1}]^T$ is an $(M-1)(N-1) \times 1$ vector, with $\mathbf{f}_j = [f(x_1, y_j, t), f(x_2, y_j, t), \cdots, f(x_{M-1}, y_j, t)]^T$, $j = 1, 2, \cdots, N-1$.
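A small sketch of this Kronecker-product assembly in pure NumPy; the classical $\alpha = \beta = 2$ stencil $g = (2, -1, 0, \ldots)$ stands in for the fractional coefficients, and the ordering of unknowns shown here is one of the two standard conventions:

```python
import numpy as np

def riesz_matrix(alpha, g, h, m):
    """Symmetric Toeplitz matrix G^(alpha) of size m x m built from the
    fractional centered-difference coefficients g = [g_0, g_1, ...]."""
    idx = np.arange(m)
    return np.asarray(g)[np.abs(idx[:, None] - idx[None, :])] / h**alpha

# Assemble A = Kx*Gx (x) I + I (x) Ky*Gy on a small grid; with
# alpha = beta = 2 this reduces to the standard 5-point Laplacian.
M = N = 5                               # interior unknowns: (M-1)*(N-1)
hx, hy = 1.0 / M, 1.0 / N
g = np.array([2.0, -1.0, 0.0, 0.0])     # g_0, g_1, g_2, g_3 for alpha = 2
Gx = riesz_matrix(2.0, g, hx, M - 1)
Gy = riesz_matrix(2.0, g, hy, N - 1)
Kx = Ky = 1.0
A = np.kron(Kx * Gx, np.eye(N - 1)) + np.kron(np.eye(M - 1), Ky * Gy)
```

Since each $G^{(\alpha_i)}$ is symmetric positive definite, so is $A$, which is what makes $e^{-tA}$ in the next section well behaved.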

#### **3. Time-Stepping Method**

We consider the following abstract initial value problem:

$$u\_t + Au = f(t), \quad t \in (0, T] = J,\tag{11}$$

$$u(0) = u\_0,$$

to develop the numerical method and to discuss a convergence analysis that does not depend on the spatial mesh size; see Thomée [15] (Ch. 3, 7–9). Using Duhamel's principle [15], the exact solution of (11) is written as:

$$u(t) = e^{-tA}u\_0 + \int\_0^t e^{-(t-s)A} f(s)ds,\tag{12}$$

where the matrix exponential function *e*−*tA* is the solution which corresponds to the homogeneous problem having *f* ≡ 0. First, we replace the variable *t* by the shifted value *t* + *k*, and then we use the following change in variable *s* − *t* = *kτ* and write the exact solution (12) as:

$$u(t+k) = e^{-kA}u(t) + k \int\_0^1 e^{-k(1-\tau)A} f(t+k\tau) \,d\tau,\tag{13}$$

and setup the following recurrence formula as

$$u(t\_{n+1}) = e^{-kA}u(t\_n) + k \int\_0^1 e^{-kA(1-\tau)} f(t\_n + \tau k) \, d\tau,\tag{14}$$

where *k*, with 0 < *k* ≤ *k*0 for some *k*0, is the temporal step size, and the temporal mesh points are given by *tn* = *nk* with 0 ≤ *n* ≤ *n*¯ = *T*/*k*.

Our fourth-order *A*-stable method is based on the following method from [15]:

$$v\_{n+1} = r(kA)v\_n + k \sum\_{i=1}^{\overline{m}} P\_i(kA) f(t\_n + \tau\_i k), \quad n \ge 0, \ v\_0 = v, \tag{15}$$

where *r*(*kA*) and {*Pi*(*kA*)}, *i* = 1, . . . , *m*¯, are rational approximations of *e*−*kA* and *e*−*kA*(1−*τi*), respectively. These rational approximations are uniformly bounded on the spectrum of *kA* in *k* and *h*, where *h* represents the spatial discretization step size. The real numbers {*τi*}, *i* = 1, . . . , *m*¯, are the *m*¯ Gaussian quadrature points in the interval [0, 1]. Our aim is to obtain a procedure which admits an optimal-order error estimate ‖*vn* − *u*(*tn*)‖ = *O*(*hα* + *kq*), with spatial discretization order *α*. The real number *q* > 0 is determined by the properties of the rational functions *r*(*z*) and *Pi*(*z*), *i* = 1, 2, . . . , *m*¯.

The time-stepping method (15) is accurate for order *q* if it satisfies some equivalent conditions given in [15]. The reader may consult chapter 9 of [15] to fill in various details omitted here for brevity. The accuracy of the time-stepping method (15) is defined in the following definition.

**Definition 1** ([15] (Ch. 9))**.** *The time discretization method (15) is said to be accurate for order q if the solution of (11) satisfies (15) with an error of order O*(*kq*+1)*, as k* → 0*, for any choice of linear operator A and smooth function f on* R*.*

The following Lemma describes the accuracy of the method (15) and establishes some equivalent relations which are then used in the proof of the main results.

**Lemma 1** ([15] (Lemma 8.1))**.** *The time discretization method (15) is accurate for order q if and only if*

$$r(\lambda) = e^{-\lambda} + O(\lambda^{q+1}), \quad \lambda \to 0,\tag{16}$$

*and for* 0 ≤ *l* ≤ *q*,

$$\sum\_{i=1}^{\overline{m}} \tau\_i^l P\_i(\lambda) = \frac{l!}{(-\lambda)^{l+1}} \left( e^{-\lambda} - \sum\_{j=0}^l \frac{(-\lambda)^j}{j!} \right) + O(\lambda^{q-l}), \quad \lambda \to 0,\tag{17}$$

*or equivalently*

$$\sum\_{i=1}^{\overline{m}} \tau\_i^l P\_i(\lambda) = \int\_0^1 s^l e^{-\lambda(1-s)} ds + O(\lambda^{q-l}), \quad \lambda \to 0. \tag{18}$$

A computationally efficient method can be developed by choosing *r*(*λ*) and {*Pi*(*λ*)}, *i* = 1, . . . , *m*¯, such that they have the same denominator (the same poles). Let

$$r(\lambda) = \frac{\mathcal{N}(\lambda)}{\mathcal{D}(\lambda)} \ \text{ and } \ P\_i(\lambda) = \frac{\mathcal{N}\_i(\lambda)}{\mathcal{D}(\lambda)}, \ i = 1, 2, \cdots, \overline{m}, \tag{19}$$

be bounded on the spectrum of *kA*, uniformly in *h* and *k*. The method (15) is approximated as:

$$v\_{n+1} = \frac{\mathcal{N}(kA)}{\mathcal{D}(kA)} v\_n + k \sum\_{i=1}^{\overline{m}} \frac{\mathcal{N}\_i(kA)}{\mathcal{D}(kA)} f(t\_n + \tau\_i k), \quad n \ge 0, \ v\_0 = v. \tag{20}$$

For the case when *m*¯ = *q*, we can satisfy the conditions of Lemma 1 by choosing a rational function *r*(*λ*) which satisfies (16) and by selecting the distinct real numbers {*τi*}, *i* = 1, . . . , *m*¯, as the Gaussian quadrature points. Then, we solve the system of equations [15]

$$\sum\_{i=1}^{q} \tau\_i^l P\_i(\lambda) = \frac{l!}{(-\lambda)^{l+1}} \left( e^{-\lambda} - \sum\_{j=0}^{l} \frac{(-\lambda)^j}{j!} \right), \quad l = 0, 1, \dots, q - 1,\tag{21}$$

to find *Pi*(*λ*). The system (21) is of Vandermonde type, and its determinant is nonzero. The rational functions {*Pi*(*λ*)}, *i* = 1, . . . , *q*, are obtained as linear combinations of the terms on the right-hand side of (21). Additionally, if *r*(*λ*) is bounded for large *λ*, then the right-hand sides of (21) are small for large *λ*, and the numerator polynomial of each *Pi*(*λ*) is of smaller degree than its denominator polynomial.

A fourth-order, *A*-stable method is developed by considering *r*(*λ*) = *R*2,2(*λ*), where

$$R\_{2,2}(\lambda) = \frac{1 - \frac{1}{2}\lambda + \frac{1}{12}\lambda^2}{1 + \frac{1}{2}\lambda + \frac{1}{12}\lambda^2} = 1 + \frac{-\lambda}{1 + \frac{1}{2}\lambda + \frac{1}{12}\lambda^2} \tag{22}$$

is the fourth-order, *A*-acceptable (2,2)-Padé approximation of *e*−*λ*. By replacing the matrix exponential *e*−*kA* with the rational (2,2)-Padé approximation *R*2,2(*kA*) and taking the Gaussian quadrature points *τ*1 = (3 − √3)/6 and *τ*2 = (3 + √3)/6, the system (21) can be written as:

$$\begin{array}{rcl}P\_1(\lambda) + P\_2(\lambda) &=& -\frac{1}{\lambda}(R\_{2,2}(\lambda) - 1),\\ \tau\_1 P\_1(\lambda) + \tau\_2 P\_2(\lambda) &=& \frac{1}{\lambda^2}(R\_{2,2}(\lambda) - 1 + \lambda),\end{array}$$

which results in

$$P\_1(\lambda) = \frac{1 - \frac{\sqrt{3}}{6}\lambda}{2(1 + \frac{1}{2}\lambda + \frac{1}{12}\lambda^2)},\ P\_2(\lambda) = \frac{1 + \frac{\sqrt{3}}{6}\lambda}{2(1 + \frac{1}{2}\lambda + \frac{1}{12}\lambda^2)}.$$

Using these rational approximations, the method (15) is written as

$$v\_{n+1} = R\_{2,2}(kA)v\_n + kP\_1(kA)f(t\_n + \tau\_1k) + kP\_2(kA)f(t\_n + \tau\_2k). \tag{23}$$

The method (23) is fourth-order accurate by Lemma 1 [15] (Ch. 9), under the assumption that the initial data have sufficient regularity.
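Both ingredients of the scheme are easy to spot-check numerically: the (2,2)-Padé error should shrink by about 2^5 = 32 when *λ* is halved, and *P*1, *P*2 should satisfy the order conditions (21) for *l* = 0, 1. A minimal sketch (the sample points are arbitrary choices):

```python
# Numerical spot-checks: (i) the (2,2)-Pade error behaves like C*lambda^5,
# so halving lambda shrinks it by ~2^5 = 32; (ii) P1, P2 satisfy the
# l = 0, 1 conditions of system (21) at a few sample points.
import math

s3 = math.sqrt(3.0)
tau1, tau2 = (3 - s3) / 6, (3 + s3) / 6

def D(l):   return 1 + l / 2 + l * l / 12
def R22(l): return (1 - l / 2 + l * l / 12) / D(l)
def P1(l):  return (1 - s3 / 6 * l) / (2 * D(l))
def P2(l):  return (1 + s3 / 6 * l) / (2 * D(l))

# (i) fourth-order accuracy and A-acceptability of R_{2,2}
e1 = abs(R22(0.1) - math.exp(-0.1))
e2 = abs(R22(0.05) - math.exp(-0.05))
print(e1 / e2)                    # close to 2^5 = 32
assert abs(R22(100.0)) < 1.0      # |R| < 1 far out on the positive real axis

# (ii) order conditions (21) for l = 0 and l = 1
for lam in (0.3, 1.7, 4.2):
    assert abs(P1(lam) + P2(lam) + (R22(lam) - 1) / lam) < 1e-12
    assert abs(tau1 * P1(lam) + tau2 * P2(lam)
               - (R22(lam) - 1 + lam) / lam ** 2) < 1e-12
```

Since both sides of each condition in (21) are fixed rational functions of *λ*, exact agreement at a handful of points is a strong sanity check.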

#### *3.1. Computationally Efficient Version of the Method*

Implementing the method directly requires computing matrix exponential functions, which is computationally expensive and would compromise the efficiency of the time-stepping method. Another challenge is inverting higher-degree matrix polynomials, which can cause computational difficulties due to the ill-conditioning of the spatial discretization matrix *A*. Use of the splitting technique not only resolves this but also results in a highly efficient method; see Khaliq, Twizell, and Voss [17] and the references therein. We can write:

$$R\_{2,2}(\lambda) = 1 + 2\Re\left(\frac{w}{\lambda - z}\right)$$

and the corresponding {*Pi*(*λ*)}<sup>2</sup> *<sup>i</sup>*=<sup>1</sup> takes the form:

$$P\_i(\lambda) = 2\Re\left(\frac{w\_i}{\lambda - z}\right), \quad i = 1, 2,$$

where *z* is the nonreal pole of *R*2,2 and of the *Pi*, with corresponding weights *w* and *wi*, respectively. All the poles and corresponding weights were computed using MAPLE 11.

#### *3.2. Algorithm*

Solve

$$(kA - zI)y = w u\_n + k \sum\_{j=1}^{2} w\_j f(t\_n + \tau\_j k)$$

for *y*, and then compute *un*+1 = *un* + 2ℜ(*y*), *n* = 0, 1, · · · , where ℜ(*y*) denotes the real part of *y*. The poles and corresponding weights are:

> *z* = −3.0 − 1.73205080757 *i*, *w* = −6.0 + 10.3923048454 *i*, *w*1 = −0.86602540378 + 3.23205080757 *i*, *w*2 = 0.86602540378 + 0.23205080757 *i*.
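A minimal sketch of this algorithm on a scalar test problem (the test problem, step size, and step count are illustrative choices, not from the paper); the matrix case is identical, with one complex linear solve per time step:

```python
# Sketch of the Section 3.2 algorithm, applied to u' + u = 1, u(0) = 0,
# whose exact solution is 1 - e^{-t}. One complex solve per step.
import numpy as np

z  = -3.0 - 1.73205080757j              # pole of R_{2,2}
w  = -6.0 + 10.3923048454j              # weight for the homogeneous part
w1 = -0.86602540378 + 3.23205080757j    # weights for the quadrature terms
w2 =  0.86602540378 + 0.23205080757j
tau1, tau2 = (3 - np.sqrt(3)) / 6, (3 + np.sqrt(3)) / 6

def step(A, u, f, tn, k):
    """One fourth-order step: solve (kA - zI)y = w u_n + k(w1 f1 + w2 f2),
    then set u_{n+1} = u_n + 2 Re(y)."""
    rhs = w * u + k * (w1 * f(tn + tau1 * k) + w2 * f(tn + tau2 * k))
    y = np.linalg.solve(k * A - z * np.eye(len(u)), rhs)
    return u + 2.0 * y.real

A = np.array([[1.0]])                   # scalar "matrix" for the demo
f = lambda t: np.array([1.0])
u, k = np.array([0.0]), 0.1
for n in range(10):
    u = step(A, u, f, n * k, k)
print(abs(u[0] - (1 - np.exp(-1.0))))   # fourth-order accurate: tiny error
```

The error at *t* = 1 is of the size expected from a fourth-order method with *k* = 0.1, far below what a low-order scheme would produce at this step size.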

#### **4. Convergence Analysis**

We present the convergence analysis in the Hilbert space setting, assuming *A* is a self-adjoint operator. We follow the approach of Brenner et al. described in [16], which is also summarized in [15] (Ch. 9). In this analysis, we use the spaces $\dot{H}^s = \mathcal{D}(A^{s/2})$, defined as in [15] by the norm

$$|u|\_s = (A^s u, u)^{1/2} = \|A^{s/2} u\| = \left(\sum\_{j=1}^N \lambda\_j^s (u, \phi\_j)^2\right)^{1/2},$$

where {*φj*}, *j* = 1, . . . , *N*, are orthonormal eigenfunctions of *A* with corresponding positive eigenvalues {*λj*}. We assume *f* ∈ $\dot{H}^s$ to have sufficient regularity, and we say that the operator *Ek* = *r*(*kA*) is stable in H if $\|E\_k^n\| \le C$ for *n* ≥ 1, 0 < *k* ≤ *k*¯, *nk* ≤ *t*¯; see [15].

**Theorem 1.** *Let A be a self-adjoint operator defined on the Hilbert space* H*, let the solution operator Ek* = *r*(*kA*) *be stable in* H*, and let the time discretization method (15) be accurate for order q* = 2*m, where m is a positive integer. Suppose* $f^{(l)}(t) \in \dot{H}^{2q-2l}$ *for l* < *q, t* ≥ 0*. Then, there exists a constant C* = *C*(*t*) *such that*

$$\|v\_{n} - u(t\_{n})\| \le Ck^{q} \left(t\_{n}^{-q} \|v\| + t\_{n} \sum\_{l=0}^{q-1} \mathcal{S}\_{l} + \int\_{0}^{t\_{n}} \|f^{(q)}\| ds\right), \quad 0 \le n \le \overline{n}, \ 0 \le k \le \overline{k}, \tag{24}$$

*where* $\mathcal{S}\_l = \sup\_{s \le t\_n} |f^{(l)}(s)|\_{2q-2l}$*.*

**Proof.** By letting

$$\mathcal{R}\_k f(t) = \sum\_{i=1}^{\overline{m}} P\_i(kA) f(t + \tau\_i k),$$

we can write method (15) as

$$v\_n = r^n(kA)v + k \sum\_{j=0}^{n-1} r^{n-1-j}(kA)\mathcal{R}\_k f(t\_j), \quad \text{for } n = 1, 2, \cdots \tag{25}$$

By denoting *E*(*t*) = *e*−*tA*, we write the exact solution, (14), of Equation (11) as:

$$u(t\_n) = E(t\_n)v + k \sum\_{j=0}^{n-1} E(t\_{n-1-j}) \mathcal{Z}\_k f(t\_j),\tag{26}$$

where

$$\mathcal{Z}\_k f(t\_j) = \int\_0^1 E(k - sk) f(t\_j + sk)\, ds.$$

For *n* ≥ 0, the error $\mathcal{E}^n = v\_n - u(t\_n)$ can be written as:

$$\begin{split} \mathcal{E}^{n} &= r^{n}(kA)v - E(t\_{n})v + k \sum\_{j=0}^{n-1} \Big( r^{n-1-j}(kA)\mathcal{R}\_{k}f(t\_{j}) - E(t\_{n-1-j})\mathcal{Z}\_{k}f(t\_{j}) \Big) \\ &= \mathcal{E}\_{0}^{n} + \mathcal{E}\_{m}^{n}, \end{split} \tag{27}$$

where the error $\mathcal{E}\_0^n$ corresponds to the homogeneous equation and $\mathcal{E}\_m^n$ is the error due to the inhomogeneous part of the method. The error $\mathcal{E}\_0^n$ is estimated by the established result in [15] (Theorem 7.2) as:

$$\|\mathcal{E}\_0^{n}\| = \left\| \left(r^{n}(kA) - E(t\_{n}) \right) v \right\| \le Ck^{q} t\_{n}^{-q} \|v\|.\tag{28}$$

By adding and subtracting $r^{n-1-j}(kA)\mathcal{Z}\_k f(t\_j)$ in the error term $\mathcal{E}\_m^n$ and rearranging the terms, we get

$$\begin{split} \mathcal{E}\_{m}^{n} &= k \sum\_{j=0}^{n-1} \left( r^{n-1-j}(kA) - E(t\_{n-1-j}) \right) \mathcal{Z}\_{k}f(t\_{j}) + k \sum\_{j=0}^{n-1} r^{n-1-j}(kA)(\mathcal{R}\_{k} - \mathcal{Z}\_{k})f(t\_{j}) \\ &= \mathcal{E}\_{m1}^{n} + \mathcal{E}\_{m2}^{n}. \end{split} \tag{29}$$

Using the change of variable *tj* + *sk* = *t*, we can write

$$\int\_0^1 f(t\_j + sk)ds = \frac{1}{k} \int\_{t\_j}^{t\_{j+1}} f(t)dt$$

and

$$\sum\_{j=0}^{n-1} \int\_0^1 f(t\_j + sk) ds = \frac{1}{k} \int\_0^{t\_n} f(t) dt.$$

Additionally, using the facts:

$$\max\_{0 \le s \le 1} E(k - sk) = \max\_{0 \le s \le 1} e^{-(1 - s)kA} = I, \ \text{attained at } s = 1,\tag{30}$$

*E*(*k* − *sk*) commutes with *rn*(*kA*) − *E*(*tn*), and *r*(*kA*) is a *q*-th order approximation of *E*(*k*) = *e*−*kA*. We get the following estimate:

$$\begin{split} \|\mathcal{E}\_{m1}^{n}\| &\le k \sum\_{j=0}^{n-1} \left\| \left( r^{n-1-j}(kA) - E(t\_{n-1-j}) \right) \mathcal{Z}\_{k} f(t\_{j}) \right\| \\ &\le k \sum\_{j=0}^{n-1} \int\_{0}^{1} \left\| E(k - sk) \left( r^{n-1-j}(kA) - E(t\_{n-1-j}) \right) f(t\_{j} + sk) \right\| ds \\ &\le k^{q+1} \sum\_{j=0}^{n-1} \int\_{0}^{1} |f(t\_{j} + sk)|\_{2q} \, ds = k^{q} \int\_{0}^{t\_{n}} |f(t)|\_{2q} \, dt, \end{split} \tag{31}$$

which is bounded by the right-hand side of (24). Using the Taylor-series expansion of *f*(*tj* + *sk*) and the approach given in [15] (Theorem 8.1), an estimate for $\mathcal{E}\_{m2}^n$ can be obtained as follows:

$$\|\mathcal{E}\_{m2}^{n}\| \le \sum\_{j=0}^{n-1} Ck^{q+1}\sum\_{l=0}^{q-1}|f^{(l)}(t\_{j})|\_{2q-2l} + Ck^{q}\sum\_{j=0}^{n-1}\int\_{t\_{j}}^{t\_{j+1}}\|f^{(q)}\|ds.\tag{32}$$

Since the right-hand side of (31) is bounded by the right-hand side of (32), we obtain

$$\|\mathcal{E}\_m^n\| \le \sum\_{j=0}^{n-1} Ck^{q+1}\sum\_{l=0}^{q-1} |f^{(l)}(t\_j)|\_{2q-2l} + Ck^q \sum\_{j=0}^{n-1} \int\_{t\_j}^{t\_{j+1}} \|f^{(q)}\| ds\tag{33}$$

$$\leq Ck^{q}t\_{n}\sum\_{l=0}^{q-1} \mathcal{S}\_{l} + Ck^{q}\int\_{0}^{t\_{n}}\|f^{(q)}\|ds, \tag{34}$$

where $\mathcal{S}\_l = \sup\_{s \le t\_n} |f^{(l)}(s)|\_{2q-2l}$. Combining (28) with (34) completes the proof.

#### **5. Numerical Experiments**

In this section, we present the solutions of two test problems and discuss the results obtained. The errors between the consecutive solutions were calculated by decreasing the time step size by half. The following formula was used to calculate the rate of convergence:

$$r = \log\_2 \frac{Error(2k)}{Error(k)},$$

where *Error*(*k*) denotes the error between the consecutive solutions corresponding to the time step sizes *k* and 2*k*. The errors *Error*(*k*) are computed using the infinity norm. The rate of convergence of the method is computed using this approach when an analytical solution of the problem is not available.
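As a worked instance of this formula with hypothetical error values: a fourth-order method divides the error by about 2^4 = 16 when the step is halved, so the computed rate should be close to 4.

```python
# Rate formula demo with hypothetical consecutive-solution errors
# (a fourth-order method: halving the step divides the error by ~16).
import math

err_2k, err_k = 3.2e-4, 2.0e-5        # hypothetical errors at steps 2k and k
r = math.log2(err_2k / err_k)
print(round(r, 6))                    # 4.0
```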

#### *5.1. Example 1*

First we consider the following problem with *f*(*x*, *t*) = 0; see [10]:

$$\frac{\partial u}{\partial t} = \int\_{1}^{2} K\_{x} P(\alpha) \frac{\partial^{\alpha} u}{\partial |x|^{\alpha}} d\alpha, \ (x, t) \in (0, 1) \times [0, T], \tag{35}$$

with homogeneous Dirichlet boundary condition

$$
u(0, t) = 0, \ u(1, t) = 0, \ t > 0, \tag{36}
$$

and initial condition

$$
u(x, 0) = \delta(x - 0.5), \; x \in (0, 1), \tag{37}
$$

where $P(\alpha) = l^{\alpha-2}K[A\_1\delta(\alpha - \delta\_1) + A\_2\delta(\alpha - \delta\_2)]$ with dimensionless constants *l* and *K*. Additionally, 0 < *δ*1 < *δ*2 ≤ 2, *A*1 > 0, *A*2 > 0.

Figure 1 displays the numerical solution *u*(*x*, *t*) at different times, which decays with time. Figure 2 illustrates the impacts of *δ*<sup>1</sup> and *δ*<sup>2</sup> on the diffusion behavior of *u*(*x*, *t*). As the values of *δ*<sup>1</sup> and *δ*<sup>2</sup> increase, the amplitude decreases and more diffusive behavior appears in the profiles. Figure 3 shows how different values of *l* affect the numerical solution *u*(*x*, *t*). It is evident that the amplitude of the solution increases as the value of *l* increases. Figure 4 shows the time evolution graphs of *u*(*x*, *t*) at *t* = 0.1 and at *t* = 1, respectively. Table 1 shows the error and convergence rate of the time-stepping method. A column of CPU time is also included in this table to show the computational efficiency of the method.

**Figure 1.** Example 1: Numerical solutions at different values of *t* using *h* = *k* = 1/200, *l* = 2, *K* = 1, *A*<sup>1</sup> = *A*<sup>2</sup> = 1, *δ*<sup>1</sup> = 1.255, and *δ*<sup>2</sup> = 1.75.

**Figure 2.** Example 1: Numerical solutions for different values of *δ*1 and *δ*2 using *h* = *k* = 1/200, *l* = 2, *K* = 1, and *A*1 = *A*2 = 1.

**Figure 3.** Example 1: Numerical solutions at *t* = 1 using *δ*<sup>1</sup> = 1.255 and *δ*<sup>2</sup> = 1.755 with *h* = *k* = 1/200, *K* = 1, and *A*<sup>1</sup> = *A*<sup>2</sup> = 1 for different values of *l*.

**Figure 4.** Time evolution graphs of Example 1 at *t* = 0.1 (left) and *t* = 1 (right), using *δ*1 = 1.255 and *δ*2 = 1.755 with *h* = *k* = 1/200, *l* = 2, *K* = 1, and *A*1 = *A*2 = 1.



#### *5.2. Example 2*

Here we consider the two-dimensional problem on the rectangular domain Ω = (0, 1) × (0, 1) [1]:

$$\frac{\partial u}{\partial t} = \int\_{1}^{2} K\_{x} P(\alpha) \frac{\partial^{\alpha} u}{\partial |x|^{\alpha}} d\alpha + \int\_{1}^{2} K\_{y} Q(\beta) \frac{\partial^{\beta} u}{\partial |y|^{\beta}} d\beta + f(x, y, t), \ (x, y, t) \in \Omega \times [0, T], \tag{38}$$

with the homogeneous Dirichlet boundary condition

$$
u(x, y, t) = 0, \ (x, y) \in \partial\Omega,\tag{39}
$$


and the initial condition

$$u(x, y, 0) = u\_0(x, y) = x^2 (1 - x)^2 y^2 (1 - y)^2, \ (x, y) \in \Omega,\tag{40}$$

where $P(\alpha) = Q(\alpha) = -2\Gamma(5 - \alpha)\cos(\frac{\alpha\pi}{2})$ are non-negative functions and the inhomogeneous term is

$$\begin{array}{rcl}f(x,y,t)&=&e^{t}x^{2}(1-x)^{2}y^{2}(1-y)^{2}\\&-&e^{t}y^{2}(1-y)^{2}[R(x)+R(1-x)]\\&-&e^{t}x^{2}(1-x)^{2}[R(y)+R(1-y)]\end{array}$$

with

$$\begin{aligned} R(r) &=& \Gamma(5)R\_1(r) - 2\Gamma(4)R\_2(r) + \Gamma(3)R\_3(r), \\ R\_1(r) &=& \frac{1}{\ln r}(r^3 - r^2), \\ R\_2(r) &=& \frac{1}{\ln r}(3r^2 - 2r) + \frac{1}{(\ln r)^2}(r - r^2), \\ R\_3(r) &=& \frac{1}{\ln r}(6r - 2) + \frac{1}{(\ln r)^2}(3 - 5r) + \frac{2}{(\ln r)^3}(r - 1). \end{aligned}$$

The exact solution of this problem is $u(x, y, t) = e^t x^2(1 - x)^2 y^2(1 - y)^2$. Figure 5 shows the exact and numerical solutions of problem (38). Convergence results along with CPU times are given in Table 2.

**Figure 5.** Exact and numerical solutions of Example 2 with Δ*x* = 0.04 and Δ*y* = 0.02.



#### **6. Conclusions**

By synthesizing diverse ideas, we developed an implementation strategy to numerically solve the Riesz distributed-order, space-fractional, inhomogeneous diffusion equations. A fourth-order *A*-stable method was developed using a diagonal (2,2)-Padé approximation of a matrix exponential function. Use of the partial fraction splitting makes the method more efficient, stable, and accurate. It can also be noted that we can implement this fourth-order method with the same computational complexity as a first-order method.

**Author Contributions:** Conceptualization, A.Q.M.K.; implementation, numerical computation, proof of the theorem, and writing, M.Y.; writing—review and editing, along with very useful discussion, comments, and suggestions, K.M.F. All authors have read and agreed to the published version of the manuscript.

**Funding:** The authors thank King Fahd University of Petroleum & Minerals for covering the publication costs.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The support provided by the Department of Mathematics, King Fahd University of Petroleum and Minerals, is highly appreciated.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **A Second-Order Crank-Nicolson-Type Scheme for Nonlinear Space–Time Reaction–Diffusion Equations on Time-Graded Meshes**

**Yusuf O. Afolabi 1,\*, Toheeb A. Biala 2, Olaniyi S. Iyiola 3, Abdul Q. M. Khaliq <sup>2</sup> and Bruce A. Wade <sup>1</sup>**

<sup>1</sup> Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA

<sup>2</sup> Department of Mathematical Sciences, Middle Tennessee State University, Murfreesboro, TN 37132, USA

<sup>3</sup> Department of Mathematics, Clarkson University, Potsdam, NY 13699, USA

**\*** Correspondence: yusuf.afolabi1@louisiana.edu

**Abstract:** A weak singularity in the solution of time-fractional differential equations can degrade the accuracy of numerical methods when employing a uniform mesh, especially with schemes involving the Caputo derivative (order *α*), where the time accuracy is of order (2 − *α*) or (1 + *α*). To deal with this problem, we present a second-order numerical scheme for nonlinear time–space fractional reaction–diffusion equations. For spatial resolution, we employ a matrix transfer technique. Using graded meshes in time, we improve the convergence rate of the algorithm. Furthermore, some sharp error estimates that give an optimal second-order rate of convergence are presented and proven. We discuss the stability properties of the numerical scheme and elaborate on several empirical examples that corroborate our theoretical observations.

**Keywords:** predictor-corrector scheme; Caputo fractional derivative; nonlinear time–space fractional equation; matrix transfer; graded meshes

#### **1. Introduction**

The last decade has witnessed tremendous developments in practical methods to solve fractional differential equations. These problems are of particular importance because they can provide a better model for understanding complex phenomena such as memory-dependent processes [1–3], material properties [4], diffusion in media with memory [5,6], groundwater modeling [7,8], and control theory [9]. Recently, many researchers have adopted fractional-order models to predict and gain insight into the evolution of the COVID-19 pandemic. This is possible due to the memory/hereditary properties inherent in the fractional-order derivatives, cf. [10–14].

We study a nonlinear time–space fractional reaction–diffusion problem in the form

$$\begin{aligned} {}\_cD\_{0,t}^{\alpha}u &= -\kappa(-\Delta)^{\frac{\beta}{2}}u(x,t) + g(u), \ \text{in } \Omega \times (0,T), \\ u(x,0) &= q(x), \ x \in \Omega \subset \mathbb{R}, \\ u(x,t)|\_{\partial\Omega} &= 0, \end{aligned} \tag{1}$$

where Ω is bounded in R, *∂*Ω denotes the boundary of Ω, *κ* is the diffusion coefficient, $(-\Delta)^{\frac{\beta}{2}}$ denotes the Laplacian of fractional order *β*, 1 < *β* ≤ 2, and *g*(*u*) is a sufficiently smooth function. The *α*-order Caputo derivative ${}\_cD\_{0,t}^{\alpha}u$ in the variable *t*, 0 < *α* ≤ 1, is adopted here and defined as

$${}\_cD\_{0,t}^{\alpha}u(x,t) = \frac{1}{\Gamma(1-\alpha)}\int\_0^t (t-s)^{-\alpha} \frac{\partial u(x,s)}{\partial s} ds.$$
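As a quick sanity check of this definition, take *u*(*t*) = *t* (so *∂u*/*∂s* = 1); the substitution *v* = *t* − *s* turns the integral into ∫ *v*−*α* *dv*, and the known closed form is *t*1−*α*/Γ(2 − *α*). A minimal numerical sketch (the test values are arbitrary):

```python
# Numerical check of the Caputo definition for u(t) = t, whose alpha-order
# Caputo derivative is t^{1-alpha}/Gamma(2-alpha).
from math import gamma
from scipy.integrate import quad

def caputo_of_identity(t, alpha):
    # du/ds = 1 for u(t) = t; substitute v = t - s
    val, _ = quad(lambda v: v ** (-alpha), 0.0, t)   # integrable singularity at v = 0
    return val / gamma(1.0 - alpha)

t, alpha = 1.0, 0.5
exact = t ** (1.0 - alpha) / gamma(2.0 - alpha)
print(abs(caputo_of_identity(t, alpha) - exact))     # tiny
```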

**Citation:** Afolabi, Y.O.; Biala, T.A.; Iyiola, O.S.; Khaliq, A.Q.M.; Wade, B.A. A Second-Order Crank-Nicolson- Type Scheme for Nonlinear Space– Time Reaction–Diffusion Equations on Time-Graded Meshes. *Fractal Fract.* **2023**, *7*, 40. https://doi.org/ 10.3390/fractalfract7010040

Academic Editors: Libo Feng, Lin Liu and Yang Liu

Received: 11 November 2022 Revised: 23 December 2022 Accepted: 26 December 2022 Published: 30 December 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The operator $(-\Delta)^{\frac{\beta}{2}}$ is taken here as the spectral Laplacian of fractional order *β*, 1 < *β* ≤ 2, given by

$$(-\Delta)^{\frac{\beta}{2}}u(x) = \sum\_{k=1}^{\infty} c\_k \lambda\_k^{\frac{\beta}{2}} \phi\_k(x),$$

where *λ<sup>k</sup>* and *φk*(*x*) are the eigenvalues and eigenfunctions of the Laplace operator −Δ on Ω, respectively, and *ck* are Fourier coefficients of *u* (in {*φk*(*x*)}) (see [15]).

Several numerical methods for solving Problem (1) or some of its counterparts have been developed and investigated based on the uniform discretization of the Caputo derivative (see, for example, [16–23]). In most cases, the derived schemes were either (2 − *α*) or (1 + *α*) accurate in time. This somewhat reduced order of convergence is to be expected due to the singular kernel $(t - s)^{-\alpha}$ embedded in the time derivative. This, in turn, has motivated the research question of how to improve the order beyond (2 − *α*) and (1 + *α*). The natural choice is to use nonuniform time meshes.

Brunner [24] made use of meshes that are graded in order to improve the accuracy of the approximation to a Volterra integral equation of the second kind with a weakly singular kernel employing collocation methods. Zhang et al. [25] developed a numerical method for a linear counterpart of (1) based on the nonuniform discretization of the Caputo derivative and the compact difference method for spatial discretization. Their theoretical analysis and numerical examples showed the efficiency of their methods. Lyu and Vong [26] proposed a high-order method to resolve a time-fractional Benjamin–Bona–Mahony equation over a nonuniform temporal mesh. Stynes et al. [27] investigated the stability and error analysis of a finite difference scheme using a uniform mesh and meshes graded in time. Liao et al. [28] investigated the convergence and stability of an *L*<sup>1</sup> technique to solve linear reaction–subdiffusion equations with the Caputo derivative. Kopteva [29] discussed the error analysis of the *L*<sup>1</sup> method for a fractional-order parabolic problem in two and three dimensions using both uniform and graded meshes. Wang and Zhou [30] proved the convergence of the corrected *k*-step backward difference formula without imposing further regularity assumptions on the solution of the semilinear subdiffusion equation. Mustapha [31] developed an *L*<sup>1</sup> scheme for subdiffusion equations with Riemann–Liouville time-fractional derivatives on nonuniform time intervals. He used the regularity of the solution and the properties of the nonuniform mesh to obtain a second-order accurate scheme.

In this present study, we propose a predictor–corrector numerical scheme for solving (1) based on time-graded meshes. The scheme is similar to the one given in [18]. We then explore the regularity properties of the solution and some of the properties of the meshes to derive a scheme that is second-order accurate in time.

The remaining sections are organized as follows. Section 2 briefly discusses the spatial discretization method, and thereafter we derive the time-stepping scheme for the solution of (3). In Section 3, we discuss the error analysis and stability of the scheme. In Section 4, we give some numerical examples to illustrate the convergence of the scheme. Finally, in Section 5 there are concluding remarks.

#### **2. Numerical Scheme**

#### *2.1. Matrix Transfer Technique for Spatial Discretizations*

Let *M* be given, and denote by *xj*, for 0 ≤ *j* ≤ *M*, the points of a 1D uniform grid with spacing *h*. It was shown in [32] that

$$(-\Delta)^{\frac{\beta}{2}}u(\mathbf{x}) \approx A^{\frac{\beta}{2}}\mathbf{u}(\mathbf{x})\tag{2}$$

where

$$A = \frac{1}{h^2} \begin{bmatrix} 2 & -1 & 0 & 0 & \cdots & 0 \\ -1 & 2 & -1 & 0 & \cdots & 0 \\ & \ddots & \ddots & \cdots & \cdots \\ 0 & \cdots & 0 & -1 & 2 & -1 \\ 0 & \cdots & 0 & 0 & -1 & 2 \end{bmatrix}, \quad \mathbf{u}(\mathbf{x}) = \begin{bmatrix} u(\mathbf{x}\_1) \\ u(\mathbf{x}\_2) \\ \vdots \\ u(\mathbf{x}\_{M-2}) \\ u(\mathbf{x}\_{M-1}) \end{bmatrix},$$

then the error in (2) is of order 2. This approximation can be thought of as transferring the matrix approximation of −Δ to an approximation of $(-\Delta)^{\frac{\beta}{2}}$. Applying this technique to (1), we obtain a system of nonlinear time-fractional differential equations in the form

$$\begin{aligned} {}\_cD\_{0,t}^{\alpha} \mathbf{u} + A^{\frac{\beta}{2}} \mathbf{u} &= \mathbf{g}(\mathbf{u}),\\ \mathbf{u}(0) &= \mathbf{u}\_0, \end{aligned} \tag{3}$$

where **u** and **g**(**u**) are the vectors that denote the nodal values of *u* and *g*, respectively, and we have chosen *κ* = 1, without any loss of generality. In addition, we write *u* and *g*(*u*) instead of **u** and **g**(**u**) to denote the vectors of the node values. Consequently, the major concern of this paper is the accurate time discretization of Problem (3).
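As an illustration of the matrix transfer idea, the sketch below computes $A^{\beta/2}$ by a dense eigendecomposition of the tridiagonal *A* (parameters are illustrative; an efficient implementation would instead exploit the known sine eigenvectors of *A* via fast transforms):

```python
# Matrix transfer sketch: A^{beta/2} via eigendecomposition of the
# standard tridiagonal Laplacian matrix A (homogeneous Dirichlet, h = 1/M).
import numpy as np

def frac_laplacian_matrix(M, beta):
    h = 1.0 / M
    A = (np.diag(2.0 * np.ones(M - 1))
         - np.diag(np.ones(M - 2), 1)
         - np.diag(np.ones(M - 2), -1)) / h ** 2
    lam, V = np.linalg.eigh(A)                   # A is symmetric positive definite
    return (V * lam ** (beta / 2.0)) @ V.T       # V diag(lam^{beta/2}) V^T

A_frac = frac_laplacian_matrix(16, 1.5)
print(A_frac.shape)                              # (15, 15)
```

A natural consistency check is *β* = 2, for which the construction must return *A* itself.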

#### *2.2. Time Discretizations*

In this section, the second-order time-stepping scheme over time-graded meshes is considered for solving the semi-discrete problem (3). Furthermore, the stability results are developed. We use the time-graded mesh having subintervals *In* = [*tn*, *tn*+1], *n* = 0, ··· , *N* − 1, with 0 = *t*<sup>0</sup> < ··· < *tN* = *T*. This has the following grid points:

$$t\_n = (n\tau)^\gamma,\ 0 \le n \le N,\ \text{for}\ \gamma \ge 1,\ \text{with}\ \tau = \frac{T^{1/\gamma}}{N}.$$

Let *τ<sup>n</sup>* = *tn*+<sup>1</sup> − *tn* denote the stepsize of the *n*-th subinterval *In*. The following properties (see [24,33]) hold for *n* ≥ 1,

$$t\_{n+1} \le 2^{\gamma} t\_n, \tag{4}$$

$$\gamma \tau t\_n^{1 - 1/\gamma} \le \tau\_n \le \gamma \tau t\_{n+1}^{1 - 1/\gamma}, \tag{5}$$

$$\tau\_n - \tau\_{n-1} \le C\_{\gamma} \tau^2 \min\{1, t\_{n+1}^{1-2/\gamma}\}, \tag{6}$$

$$\tau\_n \le \tau\_{\max} \le \gamma T / N, \tag{7}$$

where

$$\tau\_{\max} = \max\_{0 \le j \le N-1} \tau\_j.$$
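The graded mesh and its step sizes can be generated directly from the definition; note that *tN* = (*Nτ*)*γ* = *T* exactly, and that for *γ* > 1 the steps grow monotonically, clustering points near *t* = 0 where the solution is least regular. A minimal sketch with illustrative parameters:

```python
# Graded temporal mesh t_n = (n*tau)^gamma, tau = T^(1/gamma)/N,
# equivalently t_n = T*(n/N)^gamma.
import numpy as np

def graded_mesh(T, N, gamma):
    tau = T ** (1.0 / gamma) / N
    return (np.arange(N + 1) * tau) ** gamma

t = graded_mesh(T=1.0, N=8, gamma=2.0)
steps = np.diff(t)                         # tau_n = t_{n+1} - t_n
print(t[0], t[-1])                         # 0.0 1.0
```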

We note that Equation (3) can be reformulated in the form of the Volterra integral equation

$$\begin{split} u(t) - u\_0 &= \frac{1}{\Gamma(\alpha)} \int\_0^t (t - s)^{\alpha - 1} \Big( -A^{\frac{\beta}{2}} u(s) + g(u(s)) \Big) ds \\ &= {}\_0\mathcal{Z}\_t^{\alpha} \, g(u(t)) - A^{\frac{\beta}{2}} \, {}\_0\mathcal{Z}\_t^{\alpha} u(t), \end{split} \tag{8}$$

where

$$\begin{aligned} {}\_a\mathcal{Z}\_t^{\alpha} \, w(t) &= \frac{1}{\Gamma(\alpha)} \int\_a^t (t-s)^{\alpha-1} w(s) \, ds, \\ {}\_a\mathcal{Z}\_t^{\alpha} \, g(w(t)) &= \frac{1}{\Gamma(\alpha)} \int\_a^t (t-s)^{\alpha-1} g(w(s)) \, ds. \end{aligned}$$

Now, let *un* := *u*(*tn*) and *g*(*un*) := *g*(*u*(*tn*)). Evaluating (8) at *t* = *tn* and *t* = *tn*+1 and subtracting, we obtain the difference of successive terms as

$$\begin{aligned} u(t\_{n+1}) - u(t\_n) &= \left[ {}\_0\mathcal{Z}\_{t\_{n+1}}^{\alpha} g(u\_{n+1}) - {}\_0\mathcal{Z}\_{t\_n}^{\alpha} g(u\_n) \right] - A^{\frac{\beta}{2}} \left[ {}\_0\mathcal{Z}\_{t\_{n+1}}^{\alpha} u(t\_{n+1}) - {}\_0\mathcal{Z}\_{t\_n}^{\alpha} u(t\_n) \right] \\ &= {}\_{t\_n}\mathcal{Z}\_{t\_{n+1}}^{\alpha} g(u\_{n+1}) - A^{\frac{\beta}{2}} \, {}\_{t\_n}\mathcal{Z}\_{t\_{n+1}}^{\alpha} u(t\_{n+1}) + \mathcal{Q}\_{n,u}^{\varepsilon} + \mathcal{Q}\_{n,g}^{\varepsilon}, \end{aligned}$$

where

$$\mathbb{Q}\_{n,u}^{\varepsilon} = -A^{\frac{\beta}{2}} \frac{1}{\Gamma(\alpha)} \int\_0^{t\_n} \left[ (t\_{n+1} - s)^{\alpha-1} - (t\_n - s)^{\alpha-1} \right] u(s) \, ds, \tag{9}$$

$$\mathbb{Q}\_{n,g}^{\varepsilon} = \frac{1}{\Gamma(\alpha)} \int\_0^{t\_n} \left[ (t\_{n+1} - s)^{\alpha-1} - (t\_n - s)^{\alpha-1} \right] g(u(s)) \, ds. \tag{10}$$

That is,

$$u(t\_{n+1}) - u(t\_n) = -\frac{1}{\Gamma(\alpha)} A^{\frac{\beta}{2}} \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} u(s) \, ds + \frac{1}{\Gamma(\alpha)} \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} g(u(s)) \, ds + \mathbb{H}\_n^{\varepsilon}, \tag{11}$$

where

$$
\mathbb{H}\_n^{\varepsilon} = \mathbb{Q}\_{n,u}^{\varepsilon} + \mathbb{Q}\_{n,g}^{\varepsilon}.
$$

If we replace *u*(*s*) and *g*(*u*) with linear interpolants over the interval *In*, that is,

$$
u(s) \approx u\_n + (s - t\_n) \frac{u\_{n+1} - u\_n}{\tau\_n}, \quad s \in [t\_n, t\_{n+1}],
$$

and

$$g(u(s)) \approx g(u\_n) + (s - t\_n) \frac{g(u\_{n+1}) - g(u\_n)}{\tau\_n}, \quad s \in [t\_n, t\_{n+1}],$$

we obtain

$$u\_{n+1} - u\_n = \frac{-\alpha\tau\_n^{\alpha}}{\Gamma(\alpha+2)}A^{\frac{\beta}{2}}u\_n - \frac{\tau\_n^{\alpha}}{\Gamma(\alpha+2)}A^{\frac{\beta}{2}}u\_{n+1} + \frac{\alpha\tau\_n^{\alpha}}{\Gamma(\alpha+2)}g\_n + \frac{\tau\_n^{\alpha}}{\Gamma(\alpha+2)}g\_{n+1} + \mathbb{H}\_n^{\varepsilon}. \tag{12}$$

At a glance, we see that (12) is an implicit scheme. To reduce the computational burden, we therefore go back to (11) and approximate the nonlinear function *g*(*u*) on the interval [*tn*, *tn*+1] by a constant polynomial, which after some simplifications yields the following predictor–corrector scheme:

$$\begin{cases} \left(\Gamma(\alpha+2)\mathbb{I} + \tau\_n^{\alpha} A^{\frac{\beta}{2}}\right) u\_{n+1}^p = \left(\Gamma(\alpha+2)\mathbb{I} - \alpha\tau\_n^{\alpha} A^{\frac{\beta}{2}}\right) u\_n + \tau\_n^{\alpha} (\alpha+1) \, g(u\_n) + \Gamma(\alpha+2) \mathbb{H}\_n^{\varepsilon}, \\\\ \left(\Gamma(\alpha+2)\mathbb{I} + \tau\_n^{\alpha} A^{\frac{\beta}{2}}\right) u\_{n+1} = \left(\Gamma(\alpha+2)\mathbb{I} - \alpha\tau\_n^{\alpha} A^{\frac{\beta}{2}}\right) u\_n + \tau\_n^{\alpha} \left(\alpha \, g(u\_n) + g(u\_{n+1}^p)\right) + \Gamma(\alpha+2) \mathbb{H}\_n^{\varepsilon}, \end{cases} \tag{13}$$

where I is the identity matrix.
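For illustration, one step of (13) in a scalar setting amounts to two linear solves. The sketch below replaces the operator *A*<sup>*β*/2</sup> by a positive scalar `lam` (a hypothetical one-mode model; `pc_step` and `H_n` are our names, not the paper's) and checks that for *α* = 1 and *g* ≡ 0 the corrector collapses to the classical Crank–Nicolson update:

```python
import math

def pc_step(u_n, lam, g, tau_n, alpha, H_n=0.0):
    """One predictor-corrector step of scheme (13) for the scalar model
    problem in which lam > 0 plays the role of A^(beta/2); H_n is the
    (approximated) history term, zero on the first step."""
    G = math.gamma(alpha + 2)
    ta = tau_n ** alpha
    # Predictor: constant approximation of g on [t_n, t_{n+1}].
    u_p = ((G - alpha * ta * lam) * u_n + ta * (alpha + 1) * g(u_n) + G * H_n) / (G + ta * lam)
    # Corrector: linear approximation of g using the predicted value.
    u_c = ((G - alpha * ta * lam) * u_n + ta * (alpha * g(u_n) + g(u_p)) + G * H_n) / (G + ta * lam)
    return u_p, u_c

# For alpha = 1, g = 0: Gamma(3) = 2, so the update is (2 - tau*lam)/(2 + tau*lam),
# i.e. the Crank-Nicolson factor (1 - tau*lam/2)/(1 + tau*lam/2).
u_p, u_c = pc_step(1.0, 2.0, lambda u: 0.0, 0.1, 1.0)
cn = (1 - 0.1 * 2.0 / 2) / (1 + 0.1 * 2.0 / 2)
assert abs(u_c - cn) < 1e-12
```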

Using linear approximations for both *u*(*t*) and *g*(*u*(*t*)) in Equations (9) and (10), the history term $\mathbb{H}\_n^{\varepsilon}$ is approximated as

$$\mathbb{H}\_n^{\varepsilon} \approx \mathbb{H}\_n^{a} = \sum\_{j=0}^n a\_{j,n} \left( -A^{\frac{\beta}{2}} u\_j + g(u\_j) \right), \tag{14}$$

where

$$a\_{j,n} = \frac{1}{\Gamma(\alpha+2)} \begin{cases} \tau\_0^{-1} \left( \tau\_{n+1,1}^{1+\alpha} - \tau\_{n,1}^{1+\alpha} + \tau\_{n,0}^{\alpha}(\tau\_{n,1} - \alpha\tau\_0) - \tau\_{n+1,0}^{\alpha}(\tau\_{n+1,1} - \alpha\tau\_0) \right), & j = 0, \\\\ \tau\_{j-1}^{-1} \left( \tau\_{n+1,j-1}^{1+\alpha} - \tau\_{n,j-1}^{1+\alpha} \right) + \tau\_j^{-1} \left( \tau\_{n+1,j+1}^{1+\alpha} - \tau\_{n,j+1}^{1+\alpha} \right) & \\\\ \quad - \tau\_{n+1,j}^{\alpha} \left( \tau\_{j-1}^{-1} \tau\_{n+1,j-1} + \tau\_j^{-1} \tau\_{n+1,j+1} \right) + \tau\_{n,j}^{\alpha} \left( \tau\_{j-1}^{-1} \tau\_{n,j-1} + \tau\_j^{-1} \tau\_{n,j+1} \right), & 1 \le j \le n-1, \\\\ \tau\_{n-1}^{-1} \left( \tau\_{n+1,n-1}^{1+\alpha} - \tau\_{n-1}^{1+\alpha} - \tau\_n^{1+\alpha} - (\alpha+1)\tau\_{n-1}\tau\_n^{\alpha} \right), & j = n, \end{cases}$$

with $\tau\_{m,j} := t\_m - t\_j$.
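For illustration, the weights can be tabulated directly from the closed form above (`history_weights` is our sketch, not the paper's code). A useful consistency check is the limit *α* = 1, where the kernel difference in (9) and (10) vanishes identically, so every weight must be zero:

```python
import math

def history_weights(t, n, alpha):
    """Weights a_{j,n}, j = 0..n, from the closed form above, written with
    tau_{m,j} = t[m] - t[j]; t is the (possibly graded) mesh."""
    G = math.gamma(alpha + 2)
    tau = [t[k + 1] - t[k] for k in range(len(t) - 1)]
    d = lambda m, j: t[m] - t[j]  # tau_{m,j}
    a = []
    for j in range(n + 1):
        if j == 0:
            v = (d(n+1, 1)**(1+alpha) - d(n, 1)**(1+alpha)
                 + d(n, 0)**alpha * (d(n, 1) - alpha * tau[0])
                 - d(n+1, 0)**alpha * (d(n+1, 1) - alpha * tau[0])) / tau[0]
        elif j < n:
            v = ((d(n+1, j-1)**(1+alpha) - d(n, j-1)**(1+alpha)) / tau[j-1]
                 + (d(n+1, j+1)**(1+alpha) - d(n, j+1)**(1+alpha)) / tau[j]
                 - d(n+1, j)**alpha * (d(n+1, j-1)/tau[j-1] + d(n+1, j+1)/tau[j])
                 + d(n, j)**alpha * (d(n, j-1)/tau[j-1] + d(n, j+1)/tau[j]))
        else:
            v = (d(n+1, n-1)**(1+alpha) - tau[n-1]**(1+alpha)
                 - tau[n]**(1+alpha) - (1+alpha) * tau[n-1] * tau[n]**alpha) / tau[n-1]
        a.append(v / G)
    return a

# Consistency check: for alpha = 1 the kernel difference vanishes, so a_{j,n} = 0.
t = [(0.05 * j) ** 2 for j in range(9)]  # graded mesh, gamma = 2
assert all(abs(w) < 1e-12 for w in history_weights(t, 6, 1.0))
```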

#### **3. Error and Stability Analysis**

Here, we carry out the error analysis and discuss the stability property of the proposed scheme (13). For the error analysis, it is assumed that *u*, the solution to (1), satisfies

$$||u(t)||\_1 \le M \quad \text{and} \quad ||u^{(\ell)}(t)||\_1 \le M t^{\sigma + \alpha/2 - \ell}, \quad \ell = 1, 2, 3, \tag{15}$$

with the regularity parameter *σ* ∈ (0, 1) to be determined from the error analysis. This is in line with the published results [27,30,31,33,34]. The constant *M* is positive, *u*<sup>(ℓ)</sup> denotes the partial derivative of order ℓ of *u* with respect to *t*, and || · ||<sub>1</sub> is the Sobolev norm on *H*<sup>1</sup>(Ω); the unsubscripted || · || denotes the *L*<sup>2</sup> norm, to which the Sobolev norm reduces when the index is 0. The stability analysis of this article considers initial-data perturbations, i.e., the sensitivity of the numerical solutions to small changes in the initial data. The function *g*(*u*) and its derivatives with respect to *u*, *g*<sup>(*r*)</sup>(*u*) for *r* = 1, 2, are assumed to be Lipschitz in the domain Ω × [0, *T*].

#### *3.1. Error Analysis*

**Lemma 1.** *Given any positive sequence* {*aj*} *and for γ* ≥ 1*, we have*

$$\left| \sum\_{j=2}^{n-1} a\_j \left[ \frac{\tau\_j^3}{\tau\_{j-1}^3} L^{j-1,n} - L^{j,n+1} \right] \right| \le C \tau t\_n^{\alpha - 1/\gamma} \max\_{j=2}^{n-1} (a\_j \tau\_j^2),$$

*where*

$$L^{j,n} = \frac{1}{2\Gamma(\alpha)} \int\_{t\_j}^{t\_{j+1}} (s - t\_j)(t\_{j+1} - s)(t\_n - s)^{\alpha - 1} \, ds.$$

**Proof.** Cf. [31].

**Lemma 2.** *For* 1 ≤ *n* ≤ *N*, *γ* > 2/(*σ* + *α*/2 + 1) *and a sufficiently small τ,*

$$\left|\left| \frac{1}{\Gamma(\alpha)} \sum\_{j=0}^{n-1} \int\_{t\_j}^{t\_{j+1}} \left[ (t\_{n+1} - s)^{\alpha-1} - (t\_n - s)^{\alpha-1} \right] E\_{1,u}(s) \, ds \right|\right|\_1 \leq C \tau^2 t\_n^{\sigma + 3\alpha/2 - 2/\gamma},$$

*where*

$$E\_{1,u}(s) = \frac{1}{2} \int\_{t\_j}^s (w - s)^2 u^{\prime\prime\prime}(w) dw - \frac{(s - t\_j)}{2\tau\_j} \int\_{t\_j}^{t\_{j+1}} (w - t\_j)^2 u^{\prime\prime\prime}(w) dw.$$

**Proof.** Let

$$\begin{aligned} I\_{1,u} &= \frac{1}{\Gamma(\alpha)} \sum\_{j=0}^{n-1} \int\_{t\_j}^{t\_{j+1}} \left[ (t\_{n+1} - s)^{\alpha - 1} - (t\_n - s)^{\alpha - 1} \right] E\_{1,u}(s) \, ds, \\ ||I\_{1,u}||\_1 &= \left|\left| \frac{1}{\Gamma(\alpha)} \sum\_{j=0}^{n-1} \int\_{t\_j}^{t\_{j+1}} \left[ (t\_{n+1} - s)^{\alpha - 1} - (t\_n - s)^{\alpha - 1} \right] E\_{1,u}(s) \, ds \right|\right|\_1 \\ &\leq \frac{2}{\Gamma(\alpha)} \sum\_{j=0}^{n-1} \int\_{t\_j}^{t\_{j+1}} (t\_n - s)^{\alpha - 1} ||E\_{1,u}(s)||\_1 \, ds. \end{aligned}$$

For *n* = 1, *s* ∈ (*t*0, *t*1)

$$\begin{aligned} ||E\_{1,u}(s)||\_1 &\leq \frac{1}{2} \int\_0^s (w-s)^2 ||u'''(w)||\_1 \, dw + \frac{s}{2t\_1} \int\_0^{t\_1} w^2 ||u'''(w)||\_1 \, dw \\ &\leq M \int\_0^s w^2 w^{\sigma + \frac{\alpha}{2} - 3} \, dw + \frac{M}{2t\_1} s \int\_0^{t\_1} w^2 w^{\sigma + \frac{\alpha}{2} - 3} \, dw \\ &\leq C \max\{s^{\sigma + \frac{\alpha}{2}}, st\_1^{\sigma + \frac{\alpha}{2} - 1}\}. \end{aligned}$$

Noting that

$$t\_n - s = n^{\gamma} \left( t\_1 - \frac{s}{n^{\gamma}} \right) \ge n^{\gamma} (t\_1 - s),$$

we obtain

$$\begin{split} \int\_{0}^{t\_{1}} (t\_{n} - s)^{\alpha - 1} ||E\_{1,u}(s)||\_{1} \, ds &\leq C n^{\gamma(\alpha - 1)} \int\_{0}^{t\_{1}} (t\_{1} - s)^{\alpha - 1} \max\{s^{\sigma + \frac{\alpha}{2}}, st\_{1}^{\sigma + \frac{\alpha}{2} - 1}\} \, ds \\ &\leq C t\_{n}^{\alpha - 1} \frac{T^{1 - \alpha}}{N^{\gamma(1 - \alpha)}} \int\_{0}^{t\_{1}} (t\_{1} - s)^{\alpha - 1} \max\{s^{\sigma + \frac{\alpha}{2}}, st\_{1}^{\sigma + \frac{\alpha}{2} - 1}\} \, ds \\ &\leq C t\_{n}^{\alpha - 1} \tau^{\gamma(1 - \alpha)} t\_{1}^{\sigma + \frac{3\alpha}{2}} = C t\_{n}^{\alpha - 1} \tau^{\gamma(\sigma + \frac{\alpha}{2} + 1)} \\ &\leq C \tau^{2} t\_{n}^{\sigma + \frac{3\alpha}{2} - \frac{2}{\gamma}}, \quad \text{for } \gamma \geq \frac{2}{\sigma + \frac{\alpha}{2} + 1}. \end{split}$$

Let *s* ∈ (*tj*, *tj*+1), *j* ≥ 1 and *n* ≥ 2,

$$||I\_{1,u}||\_{1} \leq \frac{2}{\Gamma(\alpha)} \left\{ \int\_{0}^{t\_{1}} (t\_{n}-s)^{\alpha-1} ||E\_{1,u}(s)||\_{1} \, ds + \sum\_{j=1}^{n-1} \int\_{t\_{j}}^{t\_{j+1}} (t\_{n}-s)^{\alpha-1} ||E\_{1,u}(s)||\_{1} \, ds \right\}.$$

For *s* ∈ (*tj*, *tj*+1) and *j* ≥ 1

$$\begin{aligned} ||E\_{1,u}(s)||\_1 &\le \frac{1}{2} \int\_{t\_j}^s (w-s)^2 ||u'''(w)||\_1 \, dw + \frac{(s-t\_j)}{2\tau\_j} \int\_{t\_j}^{t\_{j+1}} (w-t\_j)^2 ||u'''(w)||\_1 \, dw \\ &\le C\tau\_j^2 \int\_{t\_j}^{t\_{j+1}} ||u'''(w)||\_1 \, dw \\ &\le C\tau^3 t\_{j+1}^{3-3/\gamma} s^{\sigma + \alpha/2 - 3}. \end{aligned}$$

Therefore,

$$\begin{split} \frac{2}{\Gamma(\alpha)} \sum\_{j=1}^{n-1} \int\_{t\_j}^{t\_{j+1}} (t\_n - s)^{\alpha - 1} ||E\_{1,u}(s)||\_1 \, ds &\leq C \tau^3 \sum\_{j=1}^{n-1} t\_{j+1}^{3 - 3/\gamma} \int\_{t\_j}^{t\_{j+1}} (t\_n - s)^{\alpha - 1} s^{\sigma + \frac{\alpha}{2} - 3} \, ds \\ &\leq C \tau^3 t\_n^{3 - 3/\gamma} \int\_{t\_1}^{t\_n} (t\_n - s)^{\alpha - 1} s^{\sigma + \frac{\alpha}{2} - 3} \, ds \\ &\leq C \tau^3 t\_n^{\sigma + \frac{3\alpha}{2} - \frac{2}{\gamma}}. \end{split}$$

Hence, the following bound is obtained

$$||I\_{1,u}||\_1 \le C\tau^2 t\_n^{\sigma + \frac{3\alpha}{2} - \frac{2}{\gamma}}, \quad \gamma > \frac{2}{\sigma + \frac{\alpha}{2} + 1}.$$

**Lemma 3.** *Assume* 1 ≤ *n* ≤ *N and let τ be sufficiently small,*

$$\left|\left| \sum\_{j=0}^{n-1} u''(t\_j) \left( L^{j,n} - L^{j,n+1} \right) \right|\right|\_1 \le C \tau^2 t\_n^{\sigma + \frac{3\alpha}{2} - \frac{2}{\gamma}},$$

$$\text{holds if } \gamma > \max \left\{ \frac{2}{\sigma + \frac{\alpha}{2} + 1}, \frac{2}{\sigma + \frac{3\alpha}{2} - 3} \right\}.$$

**Proof.** We begin by letting

$$\begin{split} I\_{2,u} &= \sum\_{j=0}^{n-1} u''(t\_j) \left[ L^{j,n} - L^{j,n+1} \right] \\ &= -\sum\_{j=1}^{n-1} \left[ u''(t\_{j+1}) - u''(t\_j) \right] L^{j,n} + u''(t\_0) \left[ L^{0,n} - L^{0,n+1} \right] - \sum\_{j=1}^{n-1} u''(t\_{j+1}) \left( \frac{\tau\_{j+1}^3}{\tau\_j^3} - 1 \right) L^{j,n} \\ &\quad + \sum\_{j=2}^{n-1} u''(t\_j) \left[ \frac{\tau\_j^3}{\tau\_{j-1}^3} L^{j-1,n} - L^{j,n+1} \right] - u''(t\_1) L^{1,n+1} + u''(t\_n) \frac{\tau\_n^3}{\tau\_{n-1}^3} L^{n-1,n} \\ &= -\eta\_1 - \eta\_2 + \eta\_3 - \eta\_4, \end{split}$$

where

$$\begin{split} \eta\_{1} &= \sum\_{j=1}^{n-1} [u''(t\_{j+1}) - u''(t\_{j})] L^{j,n}, \\ \eta\_{2} &= \sum\_{j=1}^{n-1} u''(t\_{j+1}) \left(\frac{\tau\_{j+1}^{3}}{\tau\_{j}^{3}} - 1\right) L^{j,n}, \\ \eta\_{3} &= \sum\_{j=2}^{n-1} u''(t\_{j}) \left[\frac{\tau\_{j}^{3}}{\tau\_{j-1}^{3}} L^{j-1,n} - L^{j,n+1}\right], \\ \eta\_{4} &= u''(t\_{0}) [L^{0,n+1} - L^{0,n}] + u''(t\_{1}) L^{1,n+1} - u''(t\_{n}) \frac{\tau\_{n}^{3}}{\tau\_{n-1}^{3}} L^{n-1,n}. \end{split}$$

For *η*1, we have

$$\begin{split} ||\eta\_{1}||\_{1} &\leq C \sum\_{j=1}^{n-1} \tau\_{j}^{2} \int\_{t\_{j}}^{t\_{j+1}} ||u'''(w)||\_{1} \, dw \int\_{t\_{j}}^{t\_{j+1}} (t\_{n}-s)^{\alpha-1} \, ds \\ &\leq C \sum\_{j=1}^{n-1} \tau\_{j}^{2} t\_{j+1}^{\sigma + \alpha/2 - 2} \int\_{t\_{j}}^{t\_{j+1}} (t\_{n}-s)^{\alpha-1} \, ds \\ &\leq C \tau^{2} \int\_{t\_{1}}^{t\_{n}} (t\_{n}-s)^{\alpha-1} s^{\sigma + \alpha/2 - 2/\gamma} \, ds \\ &\leq C \tau^{2} t\_{n}^{\sigma + \frac{3\alpha}{2} - \frac{2}{\gamma}}, \quad \text{which holds if } \gamma > \frac{2}{\sigma + \frac{\alpha}{2} + 1}. \end{split}$$

Noting that

$$\frac{\tau\_{j+1}^3}{\tau\_j^3} - 1 = \frac{\tau\_{j+1}^3 - \tau\_j^3}{\tau\_j^3} \le \mathcal{C} \tau\_j^{-1} (\tau\_{j+1} - \tau\_j) \le \mathcal{C} \tau^2 \tau\_j^{-1} t\_{j+2}^{1 - 2/\gamma}.$$

Therefore,

$$\begin{split} ||\eta\_{2}||\_{1} &\leq C\tau^{2} \sum\_{j=1}^{n-1} \tau\_{j}^{-1} t\_{j+2}^{1-2/\gamma} ||u''(t\_{j+1})||\_{1} \tau\_{j}^{2} \int\_{t\_{j}}^{t\_{j+1}} (t\_{n}-s)^{\alpha-1} \, ds \\ &\leq C\tau^{2} \sum\_{j=1}^{n-1} \tau\_{j} t\_{j+2}^{1-2/\gamma} ||u''(t\_{j+1})||\_{1} \int\_{t\_{j}}^{t\_{j+1}} (t\_{n}-s)^{\alpha-1} \, ds \\ &\leq C\tau^{3} \sum\_{j=1}^{n-1} t\_{j+1}^{1-1/\gamma} t\_{j+2}^{1-2/\gamma} t\_{j+1}^{\sigma+\alpha/2-2} \int\_{t\_{j}}^{t\_{j+1}} (t\_{n}-s)^{\alpha-1} \, ds \\ &\leq C\tau^{3} \max\_{j=1}^{n-1} \left(t\_{j+2}^{1-2/\gamma}\right) \int\_{t\_{1}}^{t\_{n}} s^{\sigma+\alpha/2-1/\gamma-1} (t\_{n}-s)^{\alpha-1} \, ds \\ &\leq C\tau^{3} \max\_{j=1}^{n-1} \left(t\_{j+2}^{1-2/\gamma}\right) t\_{n}^{\sigma+3\alpha/2-1/\gamma-1}. \end{split}$$

Next, we bound *η*<sup>3</sup> using Lemma 1.

$$\begin{split} ||\eta\_{3}||\_{1} &\leq C \sum\_{j=2}^{n-1} t\_{j}^{\sigma+\alpha/2-2} \left| \frac{\tau\_{j}^{3}}{\tau\_{j-1}^{3}} L^{j-1,n} - L^{j,n+1} \right| \\ &\leq C \tau t\_{n}^{\alpha-1/\gamma} \max\_{j=2}^{n-1} \left( t\_{j}^{\sigma+\alpha/2-2} \tau\_{j}^{2} \right) \\ &\leq C \tau^{3} t\_{n}^{\alpha+2-3/\gamma} \max\_{j=2}^{n-1} \left( t\_{j}^{\sigma+\alpha/2-2} \right). \end{split}$$

To estimate ||*η*4||1, we begin with the bound

$$\begin{split} ||u''(t\_0)[L^{0,n+1}-L^{0,n}]||\_1 &= \left|\left|\frac{1}{2\Gamma(\alpha)}u''(t\_0)\int\_0^{t\_1}s(t\_1-s)[(t\_{n+1}-s)^{\alpha-1}-(t\_{n}-s)^{\alpha-1}]\,ds\right|\right|\_1 \\ &\leq C\tau\_1^2\int\_0^{t\_1}(t\_n-s)^{\alpha-1}||u''(t\_0)||\_1\,ds \\ &\leq C\tau^2\int\_0^{t\_n}s^{\sigma+\alpha/2-2/\gamma}(t\_n-s)^{\alpha-1}\,ds \\ &\leq C\tau^2 t\_n^{\sigma+3\alpha/2-2/\gamma}. \end{split}$$

In addition,

$$\begin{aligned} ||u''(t\_1)L^{1,n+1}||\_1 &\leq C\tau\_1^2 t\_1^{\sigma+\alpha/2-2} \int\_{t\_1}^{t\_2} (t\_{n+1}-s)^{\alpha-1} \, ds \\ &\leq C\tau^2 \int\_{t\_1}^{t\_2} s^{\sigma+\alpha/2-2/\gamma} (t\_n-s)^{\alpha-1} \, ds \\ &\leq C\tau^2 t\_n^{\sigma+3\alpha/2-2/\gamma}. \end{aligned}$$

Finally,

$$\begin{split} \left|\left| \frac{\tau\_n^3}{\tau\_{n-1}^3} u''(t\_n) L^{n-1,n} \right|\right|\_1 &\leq C \frac{\tau\_n^3}{\tau\_{n-1}} ||u''(t\_n)||\_1 \int\_{t\_{n-1}}^{t\_n} (t\_n - s)^{\alpha-1} \, ds \\ &\leq C \tau^2 t\_{n+1}^3 t\_{n-1}^{1/\gamma - 1} t\_n^{\sigma + \alpha/2 - 2 - 3/\gamma} \int\_{t\_{n-1}}^{t\_n} (t\_n - s)^{\alpha-1} \, ds \\ &\leq C \tau^2 t\_{n+1}^3 \int\_{t\_{n-1}}^{t\_n} s^{\sigma + \alpha/2 - 2/\gamma - 3} (t\_n - s)^{\alpha-1} \, ds \\ &\leq C \tau^2 t\_n^{\sigma + 3\alpha/2 - 2/\gamma - 3} t\_{n+1}^3 \\ &\leq C \tau^2 t\_{n+1}^{\sigma + 3\alpha/2 - 2/\gamma}, \quad \text{for } \gamma > \frac{2}{\sigma + 3\alpha/2 - 3}. \end{split}$$

Thus, for *γ* > 2/(*σ* + 3*α*/2 − 3), we obtain

$$||\eta\_4||\_1 \le C\tau^2 t\_{n+1}^{\sigma+3\alpha/2-2/\gamma}.$$

Combining all the bounds, we finally obtain

$$\begin{aligned} ||I\_{2,u}||\_1 &\le ||\eta\_1||\_1 + ||\eta\_2||\_1 + ||\eta\_3||\_1 + ||\eta\_4||\_1 \\ &\le C \tau^2 t\_{n+1}^{\sigma + 3\alpha/2 - 2/\gamma} + C \tau^3 t\_n^{\sigma + 3\alpha/2 - 1/\gamma - 1} \max\_{j=1}^{n-1} \left( t\_{j+2}^{1 - 2/\gamma} \right) + C \tau^3 t\_n^{\alpha + 2 - 3/\gamma} \max\_{j=2}^{n-1} \left( t\_j^{\sigma + \alpha/2 - 2} \right) \\ &\le C \tau^2 t\_{n+1}^{\sigma + 3\alpha/2 - 2/\gamma}, \quad \text{for } \gamma > \max \left\{ \frac{2}{\sigma + \alpha/2 + 1}, \frac{2}{\sigma + 3\alpha/2 - 3} \right\}, \end{aligned}$$

with *σ* + 3*α*/2 > 3 and a sufficiently small *τ*.

**Lemma 4.** *For* 1 ≤ *n* ≤ *N*, *γ* > max{2/(*σ* + *α*/2 + 1), 2/(*σ* + 3*α*/2 − 3)} *and for a sufficiently small τ, the following error bound holds:*

$$||\mathbb{Q}\_{n,u}^{\varepsilon} - \mathbb{Q}\_{n,u}^{a}||\_1 \leq C \tau^2 t\_{n+1}^{\sigma + 3\alpha/2 - 2/\gamma},$$

*where* $\mathbb{Q}\_{n,u}^{\varepsilon}$ *is given by (9) and*

$$\mathbb{Q}\_{n,u}^{a} = \frac{1}{\Gamma(\alpha)} \sum\_{j=0}^{n-1} \int\_{t\_j}^{t\_{j+1}} \left[ (t\_{n+1} - s)^{\alpha-1} - (t\_n - s)^{\alpha-1} \right] \widetilde{u}(s) \, ds,$$

*with*

$$
\widetilde{u}(s) = u\_j + (s - t\_j) \frac{u\_{j+1} - u\_j}{\tau\_j}, \quad s \in [t\_j, t\_{j+1}].
$$

**Proof.** We begin the proof by observing that

$$\mathbb{Q}\_{n,u}^{\varepsilon} - \mathbb{Q}\_{n,u}^{a} = \frac{1}{\Gamma(\alpha)} \sum\_{j=0}^{n-1} \int\_{t\_j}^{t\_{j+1}} \left[ (t\_{n+1} - s)^{\alpha - 1} - (t\_n - s)^{\alpha - 1} \right] (u - \widetilde{u})(s) \, ds.$$

Now, *u*(*s*) − *u*˜(*s*) = *E*1,*u*(*s*) + *E*2,*u*(*s*), where

$$E\_{1,u}(s) = \frac{1}{2} \int\_{t\_j}^s (w - s)^2 u^{\prime\prime\prime}(w) dw - \frac{(s - t\_j)}{2\tau\_j} \int\_{t\_j}^{t\_{j+1}} (w - t\_j)^2 u^{\prime\prime\prime}(w) dw$$

and

$$E\_{2,u}(s) = \frac{1}{2}(s - t\_j)(s - t\_{j+1})u''(t\_j).$$

Then,

$$||\mathbb{Q}\_{n,u}^{\varepsilon} - \mathbb{Q}\_{n,u}^{a}||\_1 \le ||I\_{1,u}||\_1 + ||I\_{2,u}||\_1.$$

To complete the proof, we use the results in Lemmas 2 and 3.

**Lemma 5.** *Let g*<sup>(*r*)</sup>(*u*) *be Lipschitz in u for r* = 0, 1, 2*. Then, for* 1 ≤ *n* ≤ *N*, *γ* > max{2/(*σ* + *α*/2 + 1), 2/(*σ* + 3*α*/2 − 3)} *with σ* + 3*α*/2 > 3 *and for a sufficiently small τ, we have*

$$||\mathbb{Q}\_{n,g}^{\varepsilon} - \mathbb{Q}\_{n,g}^{a}||\_1 \le C\tau^2 t\_{n+1}^{\sigma + 3\alpha/2 - 2/\gamma},$$

*where* $\mathbb{Q}\_{n,g}^{\varepsilon}$ *is given by (10) and*

$$\mathbb{Q}\_{n,g}^{a} = \frac{1}{\Gamma(\alpha)} \sum\_{j=0}^{n-1} \int\_{t\_j}^{t\_{j+1}} [(t\_{n+1} - s)^{\alpha-1} - (t\_n - s)^{\alpha-1}] \, \widetilde{g}\_2(u)(s) \, ds,$$

*with*

$$\widetilde{g}\_2(u(s)) = g(u\_j) + (s - t\_j) \frac{g(u\_{j+1}) - g(u\_j)}{\tau\_j}.$$

**Proof.**

$$\mathbb{Q}\_{n,g}^{\varepsilon} - \mathbb{Q}\_{n,g}^{a} = \frac{1}{\Gamma(\alpha)} \sum\_{j=0}^{n-1} \int\_{t\_{j}}^{t\_{j+1}} [(t\_{n+1} - s)^{\alpha-1} - (t\_n - s)^{\alpha-1}] \, (g(u) - \widetilde{g}\_2(u))(s) \, ds.$$

Let

$$
g(u(s)) - \widetilde{g}\_2(u(s)) = E\_{1,g}(s) + E\_{2,g}(s) + E\_{3,g}(s),
$$

where

$$\begin{aligned} E\_{1,g}(s) &= \left[ u(s) - u(t\_j) - \frac{s - t\_j}{\tau\_j} (u(t\_{j+1}) - u(t\_j)) \right] g'(u(t\_j)), \\ E\_{2,g}(s) &= \frac{1}{2} \left[ (u(s) - u(t\_j))^2 - \frac{s - t\_j}{\tau\_j} (u(t\_{j+1}) - u(t\_j))^2 \right] g''(u(t\_j)), \\ E\_{3,g}(s) &= \frac{1}{2} \int\_{u(t\_j)}^{u(s)} (u(s) - u(w))^2 g'''(u(w)) \, du(w) - \frac{s - t\_j}{2\tau\_j} \int\_{u(t\_j)}^{u(t\_{j+1})} (u(w) - u(t\_j))^2 g'''(u(w)) \, du(w). \end{aligned}$$

Thus, we have

$$||E\_{1,g}(s)||\_1 \le ||u(s) - \widetilde{u}(s)||\_1 \, ||g'(u(t\_j))||\_1 \le M ||u(s) - \widetilde{u}(s)||\_1,$$

and

$$\begin{split} ||E\_{2,g}||\_1 &\leq \frac{1}{2} \left|\left| (u(s) - u(t\_j))^2 - \frac{s - t\_j}{\tau\_j} (u(t\_{j+1}) - u(t\_j))^2 \right|\right|\_1 ||g''(u(t\_j))||\_1 \\ &\leq \max\left\{ ||u(s) - u(t\_j)||\_1, \frac{s - t\_j}{\tau\_j} ||u(t\_{j+1}) - u(t\_j)||\_1 \right\} ||u(s) - \widetilde{u}(s)||\_1 \, ||g''(u(t\_j))||\_1 \\ &\leq \max\left\{ \int\_{t\_j}^s ||u'(w)||\_1 \, dw, \frac{s - t\_j}{\tau\_j} \int\_{t\_j}^{t\_{j+1}} ||u'(w)||\_1 \, dw \right\} ||u(s) - \widetilde{u}(s)||\_1 \, ||g''(u(t\_j))||\_1 \\ &\leq M\tau\_j t\_{j+1}^{\sigma + \alpha/2 - 1} ||u(s) - \widetilde{u}(s)||\_1 \\ &\leq M\tau t\_{j+1}^{\sigma + \alpha/2 - 1/\gamma} ||u(s) - \widetilde{u}(s)||\_1. \end{split}$$

Moreover,

$$\begin{aligned} ||E\_{3,g}(s)||\_1 &\leq M\_1 \left|\left| \int\_{u(t\_j)}^{u(s)} (u(s) - u(w))^2 \, du(w) \right|\right|\_1 + M\_2 \frac{s - t\_j}{2\tau\_j} \left|\left| \int\_{u(t\_j)}^{u(t\_{j+1})} (u(w) - u(t\_j))^2 \, du(w) \right|\right|\_1 \\ &\leq M \left[ (s - t\_j)^3 s^{3\sigma + 3\alpha/2 - 3} + \frac{s - t\_j}{\tau\_j} \tau^3 t\_{j+1}^{3\sigma + 3\alpha/2 - 3/\gamma} \right] \\ &\leq C \max\left\{ (s - t\_j)^3 s^{3\sigma + 3\alpha/2 - 3}, \frac{s - t\_j}{\tau\_j} \tau^3 t\_{j+1}^{3\sigma + 3\alpha/2 - 3/\gamma} \right\}. \end{aligned}$$

Therefore,

$$||\mathbb{Q}\_{n,g}^{\varepsilon} - \mathbb{Q}\_{n,g}^{a}||\_1 \le ||I\_{1,g}||\_1 + ||I\_{2,g}||\_1 + ||I\_{3,g}||\_1, \tag{16}$$

where the terms in the RHS are estimated as

$$\begin{aligned} ||I\_{1,g}||\_1 &\leq \frac{1}{\Gamma(\alpha)} \sum\_{j=0}^{n-1} \int\_{t\_j}^{t\_{j+1}} \left[ (t\_{n+1} - s)^{\alpha - 1} - (t\_n - s)^{\alpha - 1} \right] ||E\_{1,g}(s)||\_1 \, ds \\ &\leq C \tau^2 t\_{n+1}^{\sigma + 3\alpha/2 - 2/\gamma}, \quad \text{for } \gamma > \max\left\{ \frac{2}{\sigma + \alpha/2 + 1}, \frac{2}{\sigma + 3\alpha/2 - 3} \right\}, \end{aligned}$$


$$\begin{split} ||I\_{3,g}||\_1 &\leq \frac{2}{\Gamma(\alpha)} \sum\_{j=0}^{n-1} \int\_{t\_{j}}^{t\_{j+1}} (t\_{n}-s)^{\alpha-1} ||E\_{3,g}||\_{1} \, ds \\ &\leq C \sum\_{j=0}^{n-1} \int\_{t\_{j}}^{t\_{j+1}} (t\_{n}-s)^{\alpha-1} \max\left\{ (s-t\_{j})^{3} s^{3\sigma+3\alpha/2-3}, \frac{s-t\_{j}}{\tau\_{j}} \tau^{3} t\_{j+1}^{3\sigma+3\alpha/2-3/\gamma} \right\} ds \\ &\leq C \tau^{3} \sum\_{j=0}^{n-1} \int\_{t\_{j}}^{t\_{j+1}} (t\_{n}-s)^{\alpha-1} s^{3\sigma+3\alpha/2-3/\gamma} \, ds \\ &\leq C \tau^{3} t\_{n}^{3\sigma+3\alpha/2-3/\gamma}. \end{split}$$

The proof is completed using the estimates for ||*I*1,*g*||1, ||*I*2,*g*||<sup>1</sup> and ||*I*3,*g*||<sup>1</sup> in Equation (16), where, for a sufficiently small *τ*, the *τ*<sup>3</sup> terms are assumed to be negligible.

**Lemma 6.** *Assume the conditions given in Lemma 5. Then, for a sufficiently small τ, the error bound*

$$\left|\left| \frac{1}{\Gamma(\alpha)} \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha - 1} (g(u) - \widetilde{g}\_1(u))(s) \, ds \right|\right|\_1 \le C \tau t\_{n+1}^{\sigma + \alpha - 1/\gamma}$$

*holds uniformly on* [*tn*, *tn*+1]*.*

**Proof.** Let $\widetilde{g}\_1(u(s)) := g(u(t\_n))$ denote the constant approximation of *g* on [*tn*, *tn*+1]. Noting that

$$
g(u(s)) - \widetilde{g}\_1(u(s)) = (u(s) - u(t\_n))g'(u(t\_n)) + \int\_{u(t\_n)}^{u(s)} (u(s) - u(w)) \, g''(u(w)) \, du(w)
$$

and

$$||u(s) - u(t\_n)||\_1 = \left|\left|\int\_{t\_n}^{s} u'(w) \, dw\right|\right|\_1 \le C(s - t\_n)s^{\sigma + \alpha/2 - 1},$$

we have

$$\left|\left| \frac{1}{\Gamma(\alpha)} \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} (g(u) - \widetilde{g}\_1(u))(s) \, ds \right|\right|\_1 \leq ||I\_{1,g\_1}||\_1 + ||I\_{2,g\_1}||\_1,$$

where

$$\begin{aligned} ||I\_{1,g\_1}||\_1 &\leq \frac{1}{\Gamma(\alpha)} \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} ||u(s) - u(t\_n)||\_1 ||g'(u(t\_n))||\_1 \, ds \\ &\leq C \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} (s - t\_n) s^{\sigma + \alpha/2 - 1} \, ds \\ &\leq C \tau t\_{n+1}^{1 - 1/\gamma} \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} s^{\sigma + \alpha/2 - 1} \, ds \\ &\leq C \tau t\_{n+1}^{\sigma + 3\alpha/2 - 1/\gamma}. \end{aligned}$$

Furthermore,

$$\begin{split} ||I\_{2,g\_1}||\_1 &\leq \frac{1}{\Gamma(\alpha)} \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} \left|\left|\int\_{u(t\_n)}^{u(s)} (u(s) - u(w)) \, g''(u(w)) \, du(w)\right|\right|\_1 \, ds \\ &\leq C \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} (s - t\_n)^2 s^{2\sigma + \alpha - 2} \, ds \\ &\leq C \tau\_n^2 \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} s^{2\sigma + \alpha - 2} \, ds \\ &\leq C \tau^2 t\_{n+1}^{2\sigma + 2\alpha - 2/\gamma}. \end{split}$$

We complete the proof using the bounds for ||*I*1,*g*<sup>1</sup> ||<sup>1</sup> and ||*I*2,*g*<sup>1</sup> ||1.

**Lemma 7.** *Assume the conditions of Lemma 5. Then, for a sufficiently small τ, we have the estimate*

$$\left|\left| \frac{1}{\Gamma(\alpha)} \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} (g(u) - \widetilde{g}\_2(u))(s) \, ds \right|\right|\_1 \leq C \tau^2 t\_{n+1}^{\sigma + 3\alpha/2 - 2/\gamma},$$

*for*

$$\gamma > \max \left\{ \frac{2}{\sigma + \alpha/2 + 1}, \frac{2}{\sigma + 3\alpha/2 - 3} \right\}.$$

*with σ* + 3*α*/2 > 3*.*

**Proof.** The proof follows from Lemma 5 and is omitted.

**Theorem 1.** *Assume the conditions in Lemmas 4–7. Then, the error bounds*

$$||u\_{n+1}^p - u(t\_{n+1})||\_1 \le \mathcal{C}\tau \quad \text{and} \quad ||u\_{n+1} - u(t\_{n+1})||\_1 \le \mathcal{C}\tau^2$$

*hold uniformly on* 0 ≤ *tn* ≤ *T for a sufficiently small τ, where u<sup>p</sup> and u are the solutions obtained from the predictor and the corrector in (13), respectively.*

**Proof.** Noting that $A^{\frac{\beta}{2}}$ is symmetric and positive definite, and using the error bounds obtained in Lemmas 4–7, we obtain

$$\begin{split} \left|\left| u(t\_{n+1}) - u\_{n+1}^p \right|\right| &\leq \left|\left| u(t\_n) - u\_n \right|\right| + \left|\left| \frac{1}{\Gamma(\alpha)} A^{\frac{\beta}{2}} \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} (u - \widetilde{u})(s) \, ds \right|\right|\_1 \\ &\quad + \left|\left| \frac{1}{\Gamma(\alpha)} \int\_{t\_n}^{t\_{n+1}} (t\_{n+1} - s)^{\alpha-1} (g(u) - \widetilde{g}\_1(u))(s) \, ds \right|\right|\_1 + \left|\left| \mathbb{Q}\_{n,u}^{\varepsilon} - \mathbb{Q}\_{n,u}^{a} \right|\right|\_1 \\ &\quad + \left|\left| \mathbb{Q}\_{n,g}^{\varepsilon} - \mathbb{Q}\_{n,g}^{a} \right|\right|\_1 \\ &\leq \left|\left| u(t\_n) - u\_n \right|\right| + C\tau^2 t\_{n+1}^{\sigma + 3\alpha/2 - 2/\gamma} + C\tau t\_{n+1}^{\sigma + \alpha - 1/\gamma} \\ &\leq \left|\left| u(t\_n) - u\_n \right|\right| + C\tau t\_{n+1}^{\sigma + \alpha - 1/\gamma}. \end{split}$$

Similarly,

$$||u(t\_{n+1}) - u\_{n+1}|| \le ||u(t\_n) - u\_n|| + C\tau^2 t\_{n+1}^{\sigma + 3\alpha/2 - 2/\gamma}.$$

The proof is completed through mathematical induction.

#### *3.2. Stability Analysis*

**Definition 1.** *The scheme given by (13) is said to be stable if there is K* > 0*, independent of τ and n, so that*

$$||u\_n - \widehat{u}\_n|| \le K||u\_0 - \widehat{u}\_0||, \quad n = 1, 2, \cdots, M,$$

*where un and u*ˆ*<sup>n</sup> satisfy (13) with the initial data u*<sup>0</sup> *and u*ˆ0*.*

**Lemma 8.** *If* 0 < *α* ≤ 1*, tj* = (*jτ*)*γ*, *j* = 0, 1, ... , *n, γ* ≥ 1 *and τ* = *T*1/*γ*/*N, then the following estimate holds:*

$$a\_{j,n} \le K\_{\alpha} \begin{cases} \frac{\tau\_n}{\tau\_0} (t\_{n+1} - t\_0)^{\alpha}, & j = 0, \\\\ \frac{\tau\_n}{\tau\_{j-1}} (t\_{n+1} - t\_{j-1})^{\alpha}, & 1 \le j \le n - 1, \\\\ \tau\_{n-1}^{\alpha}, & j = n, \end{cases}$$

*where*

$$K\_{\alpha} = \max\left\{ \frac{\alpha + 1}{\Gamma(\alpha + 2)}, \frac{2(\alpha + 1)}{\Gamma(\alpha + 2)}, 2^{\alpha + 1} - \alpha + 1 \right\}.$$

**Proof.** For *j* = 0, we have

$$\begin{aligned} \tau\_0 \Gamma(\alpha + 2) a\_{0,n} &= (t\_{n+1} - t\_1)^{\alpha+1} - (t\_n - t\_1)^{\alpha+1} + (t\_n - t\_0)^{\alpha} [t\_n - (\alpha + 1)t\_1] \\ &\quad - (t\_{n+1} - t\_0)^{\alpha} [t\_{n+1} - (\alpha + 1)t\_1] \\ &\leq (t\_{n+1} - t\_1)^{\alpha+1} - (t\_n - t\_1)^{\alpha+1}. \end{aligned}$$

With *ξ* ∈ (*tn*, *tn*+1) and using the MVT, we have

$$
\tau\_0 \Gamma(\alpha + 2) a\_{0,n} \le (\alpha + 1) \tau\_n (\xi - t\_1)^{\alpha},
$$

which implies

$$a\_{0,n} \le \frac{(\alpha+1)}{\Gamma(\alpha+2)} \frac{\tau\_n}{\tau\_0} (t\_{n+1} - t\_1)^{\alpha}.$$

For 1 ≤ *j* ≤ *n* − 1,

$$\begin{split} \Gamma(\alpha+2)a\_{j,n} &= \frac{1}{\tau\_{j-1}} \left[ (t\_{n+1}-t\_{j-1})^{\alpha+1} - (t\_n-t\_{j-1})^{\alpha+1} + (t\_n-t\_j)^{\alpha}(t\_n-t\_{j-1}) \right. \\ &\quad \left. - (t\_{n+1}-t\_j)^{\alpha}(t\_{n+1}-t\_{j-1}) \right] + \frac{1}{\tau\_j} \left[ (t\_{n+1}-t\_{j+1})^{\alpha+1} - (t\_n-t\_{j+1})^{\alpha+1} \right. \\ &\quad \left. + (t\_n-t\_j)^{\alpha}(t\_n-t\_{j+1}) - (t\_{n+1}-t\_j)^{\alpha}(t\_{n+1}-t\_{j+1}) \right] \\ &\leq \frac{1}{\tau\_{j-1}} \left[ (t\_{n+1}-t\_{j-1})^{\alpha+1} - (t\_n-t\_{j-1})^{\alpha+1} + (t\_{n+1}-t\_{j+1})^{\alpha+1} - (t\_n-t\_{j+1})^{\alpha+1} \right]. \end{split}$$

Again, applying the MVT, we have

$$a\_{j,n} \le \frac{2(\alpha+1)}{\Gamma(\alpha+2)} \frac{\tau\_n}{\tau\_{j-1}} (t\_{n+1} - t\_{j-1})^{\alpha}$$

For *j* = *n*,

$$\begin{split} \tau\_{n-1}\Gamma(\alpha+2)a\_{n,n} &= (\tau\_n + \tau\_{n-1})^{\alpha+1} - \tau\_{n-1}^{\alpha+1} - \tau\_n^{\alpha}[\tau\_n + (\alpha+1)\tau\_{n-1}] \\ &= \tau\_{n-1}^{\alpha+1} \left[ \frac{\alpha(\alpha+1)}{2!} \left(\frac{\tau\_{n-1}}{\tau\_n}\right)^{1-\alpha} + \frac{(\alpha-1)\alpha(\alpha+1)}{3!} \left(\frac{\tau\_{n-1}}{\tau\_n}\right)^{2-\alpha} + \cdots - 1 \right] \\ &\leq \tau\_{n-1}^{\alpha+1} \left[ \frac{\alpha(\alpha+1)}{2!} + \frac{(\alpha-1)\alpha(\alpha+1)}{3!} + \cdots - 1 \right], \end{split}$$

where we have used the generalized binomial theorem to arrive at the last inequality and the fact that *τ<sup>j</sup>* is non-decreasing.

Therefore, $a\_{n,n} \le K\_{\alpha} \tau\_{n-1}^{\alpha}$.
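For illustration, the *j* = *n* case can be checked numerically from the closed form derived above, with `K_alpha` taken from the definition of *Kα* in the lemma (mesh parameters are illustrative):

```python
import math

# Check a_{n,n} <= K_alpha * tau_{n-1}^alpha on a graded mesh t_j = (j*tau)^grade.
alpha, grade, tau, N = 0.5, 3.0, 0.02, 50
t = [(j * tau) ** grade for j in range(N + 1)]
K_alpha = max((alpha + 1) / math.gamma(alpha + 2),
              2 * (alpha + 1) / math.gamma(alpha + 2),
              2 ** (alpha + 1) - alpha + 1)
for n in range(1, N):
    tau_prev, tau_n = t[n] - t[n - 1], t[n + 1] - t[n]
    # Closed form: tau_{n-1} * Gamma(alpha+2) * a_{n,n}
    #   = (tau_n + tau_{n-1})^(alpha+1) - tau_{n-1}^(alpha+1)
    #     - tau_n^alpha * (tau_n + (alpha+1)*tau_{n-1}).
    a_nn = ((tau_n + tau_prev) ** (alpha + 1) - tau_prev ** (alpha + 1)
            - tau_n ** alpha * (tau_n + (alpha + 1) * tau_prev)) \
           / (tau_prev * math.gamma(alpha + 2))
    assert a_nn <= K_alpha * tau_prev ** alpha
```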

**Lemma 9.** *Assume that* 0 < *α* ≤ 1 *and*

$$a\_{j,n} = K\_{\alpha} \frac{\tau\_n}{\tau\_{j-1}} (t\_{n+1} - t\_{j-1})^{\alpha}, \quad j = 1, 2, \dots, n-1,$$

*and*

$$a\_{0,n} = K\_{\alpha} \frac{\tau\_n}{\tau\_0} (t\_{n+1} - t\_1)^{\alpha},$$

*for tj* = (*jτ*)*γ*, *<sup>j</sup>* <sup>=</sup> 0, 1, ... , *n, <sup>n</sup>* <sup>=</sup> 1, 2, ... , *M. Let* <sup>g</sup><sup>0</sup> *be a positive number and assume the sequence* {*ψj*} *satisfies*

$$\begin{cases} \psi\_0 \le g\_0, \\\\ \psi\_n \le \sum\_{j=0}^{n-1} a\_{j,n} \psi\_j + C\_0 g\_0. \end{cases}$$

*Then,*

$$
\psi\_n \le C\_0 g\_0, \quad n = 1, 2, \dots, M.
$$

**Proof.** The proof uses a modification of that of [35] (Lemma 3.3).

**Theorem 2.** *Suppose that uj* (*j* = 1, 2, ··· , *N*) *are the numerical solutions of (3) produced by Scheme (13), and that g*(*u*) *is Lipschitz in u on* Ω × (0, *T*]*. Then, (13) is stable.*

**Proof.** We start by considering a history-term perturbation in the form

$$\widetilde{\mathbb{H}}\_n^a = \sum\_{j=0}^{n-1} a\_{j,n} \left( -A^{\frac{\beta}{2}} \widetilde{u}\_j + g(u\_j + \widetilde{u}\_j) - g(u\_j) \right) + a\_{n,n} \left( -A^{\frac{\beta}{2}} \widetilde{u}\_n + g(u\_n + \widetilde{u}\_n) - g(u\_n) \right).$$

By using the positive definiteness of $A^{\frac{\beta}{2}}$, the fact that *g*(*u*) is Lipschitz continuous, and Lemma 8, we obtain

$$||\widetilde{\mathbb{H}}\_n^{a}|| \le K \left( \sum\_{j=0}^{n-1} a\_{j,n} ||\widetilde{u}\_j|| + \tau\_n^{\alpha} ||\widetilde{u}\_n|| \right).$$

Here, *K* is assumed to be a positive constant. The perturbation of Equation (13) works out to be

$$\begin{cases} \widetilde{u}\_{n+1}^{p} = \left(\Gamma(\alpha+2)\mathbb{I} + \tau\_n^{\alpha} A^{\frac{\beta}{2}}\right)^{-1} \left[ \left(\Gamma(\alpha+2)\mathbb{I} - \alpha\tau\_n^{\alpha} A^{\frac{\beta}{2}}\right) \widetilde{u}\_n + \tau\_n^{\alpha}(\alpha+1)\left(g(u\_n + \widetilde{u}\_n) - g(u\_n)\right) + \Gamma(\alpha+2)\widetilde{\mathbb{H}}\_n^{a} \right], \\\\ \widetilde{u}\_{n+1} = \left(\Gamma(\alpha+2)\mathbb{I} + \tau\_n^{\alpha} A^{\frac{\beta}{2}}\right)^{-1} \left[ \left(\Gamma(\alpha+2)\mathbb{I} - \alpha\tau\_n^{\alpha} A^{\frac{\beta}{2}}\right) \widetilde{u}\_n + \tau\_n^{\alpha}\alpha\left(g(u\_n + \widetilde{u}\_n) - g(u\_n)\right) \right. \\\\ \qquad\qquad \left. + \, \tau\_n^{\alpha}\left(g(u\_{n+1}^{p} + \widetilde{u}\_{n+1}^{p}) - g(u\_{n+1}^{p})\right) + \Gamma(\alpha+2)\widetilde{\mathbb{H}}\_n^{a} \right]. \end{cases}$$

By the positive definiteness of $A^{\frac{\beta}{2}}$, it follows that 0 < *C* < 1, where

$$C = \left\| \left( \Gamma(\alpha + 2) I + \tau\_n^{\alpha} A^{\frac{\beta}{2}} \right)^{-1} \left( \Gamma(\alpha + 2) I - \alpha\tau\_n^{\alpha} A^{\frac{\beta}{2}} \right) \right\|.$$

Therefore,

$$\begin{cases}
\|\widetilde{u}\_{n+1}^{p}\| \le C \|\widetilde{u}\_{n}\| + K\_{1} \left( \tau\_{\max}^{\alpha} \|\widetilde{u}\_{n}\| + \tau\_{\max}^{\alpha} \|\widetilde{u}\_{n}\| + \sum\_{j=0}^{n-1} a\_{j,n} \|\widetilde{u}\_{j}\| \right), \\[1ex]
\|\widetilde{u}\_{n+1}\| \le C \|\widetilde{u}\_{n}\| + K\_{2} \left( \tau\_{\max}^{\alpha} (\|\widetilde{u}\_{n}\| + \|\widetilde{u}\_{n+1}^{p}\|) + \tau\_{\max}^{\alpha} \|\widetilde{u}\_{n}\| + \sum\_{j=0}^{n-1} a\_{j,n} \|\widetilde{u}\_{j}\| \right),
\end{cases}$$

where *C*, *K*1, *K*2 are constants and $\tau\_{\max} = \max\_{0 \le j \le n} \tau\_j$. We show the remaining part by induction. For *n* = 0 and a sufficiently small $\tau\_{\max}$, it follows that

$$||\widetilde{u}\_1^p|| \le ||\widetilde{u}\_0|| \text{ and } ||\widetilde{u}\_1|| \le ||\widetilde{u}\_0||.$$

Suppose that

$$\|\widetilde{u}\_j\| \le \|\widetilde{u}\_0\|, \quad j = 1, 2, \cdots, n.$$

We consider *j* = *n* + 1; for $\widetilde{u}\_{n+1}^{p}$, we have

$$\begin{aligned} \|\widetilde{u}\_{n+1}^{p}\| &\le C \|\widetilde{u}\_{n}\| + K\_{1} \left( \tau\_{\max}^{\alpha} \|\widetilde{u}\_{n}\| + \tau\_{\max}^{\alpha} \|\widetilde{u}\_{n}\| + \sum\_{j=0}^{n-1} a\_{j,n} \|\widetilde{u}\_{j}\| \right) \\ &\le C\_{0} \|\widetilde{u}\_{n}\| + K\_{1} \sum\_{j=0}^{n-1} a\_{j,n} \|\widetilde{u}\_{j}\| \\ &\le \|\widetilde{u}\_{0}\|, \end{aligned}$$

where $0 < C\_0 = C + 2K\_1\tau\_{\max}^{\alpha} < 1$ for a sufficiently small $\tau\_{\max}$; here Lemma 9 has been used.

We have

$$\begin{aligned} \|\widetilde{u}\_{n+1}\| &\le C \|\widetilde{u}\_{n}\| + K\_{2} \left( \tau\_{\max}^{\alpha} (\|\widetilde{u}\_{n}\| + \|\widetilde{u}\_{n+1}^{p}\|) + \tau\_{\max}^{\alpha} \|\widetilde{u}\_{n}\| + \sum\_{j=0}^{n-1} a\_{j,n} \|\widetilde{u}\_{j}\| \right) \\ &\le C\_{1} \|\widetilde{u}\_{n}\| + K\_{2} \sum\_{j=0}^{n-1} a\_{j,n} \|\widetilde{u}\_{j}\| \\ &\le \|\widetilde{u}\_{0}\|, \end{aligned}$$

where $0 < C\_1 = C + 3K\_2\tau\_{\max}^{\alpha} < 1$. This completes the proof.

#### **4. Numerical Illustrations**

Here, we corroborate the analysis through the empirical study of the convergence rate for different test problems. For the examples that we consider in this section, the convergence rate (*CR*) is given by

$$\mathcal{CR} = \log\_2\left(\text{Error}\_{\frac{M}{2}} / \text{Error}\_M\right),$$

where

$$\text{Error}\_M = \left\| u\_M - u\_{\frac{M}{2}} \right\|$$

and *uM* is the vector of the solution with *M* mesh points. The numerical examples are the same as those given in Biala and Khaliq [18] and the results for *γ* = 1 can be found there.
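The rate computation above is mechanical; a minimal sketch (the error values below are made up for illustration and stand in for the tabulated ones):

```python
import numpy as np

def convergence_rate(errors):
    """Observed order CR = log2(Error_{M/2} / Error_M) for successive mesh
    refinements; errors[i] is the error with M = M0 * 2**i mesh points."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# For an O(tau^2) scheme, halving the step size divides the error by 4,
# so the observed rate should approach 2.
errors = [1.6e-2, 4.0e-3, 1.0e-3, 2.5e-4]
rates = convergence_rate(errors)          # ~ [2.0, 2.0, 2.0]
```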

**Example 1.** *We consider*

$$\_\*D\_{0,t}^{\alpha} u = -(-\Delta)^{\frac{\beta}{2}} u + g(u), \ t \in (0,1], \ x \in [0,1],$$

$$u(x,0) = x^2(1-x)^2.$$

*As the initial data $\phi(x) \in C^{\infty}(\Omega) \cap H\_0^1(\Omega)$, the regularity property in (15) holds true for any $\sigma \in I\_{\sigma} = \left(0, \frac{\alpha}{4}\right)$. Consider first the case $g(u) = 0$, whose solution can be seen to be*

$$u(x,t) = \sum\_{n=1}^{\infty} \frac{4\left(-12 + n^2 \pi^2\right) \left(-1 + (-1)^n\right)}{n^5 \pi^5} E\_{\alpha}\left(-(n\pi)^{\beta} t^{\alpha}\right) \sin(n\pi x).$$

Here, $E\_{\alpha}$ is the one-parameter Mittag–Leffler function. Tables 1 and 2 show the errors and the convergence rates when *g*(*u*) = 0 and $g(u) = u^2$, respectively, using the *L*<sup>2</sup> norm. We used a small space step of *dx* = 0.001 so that the error in time is dominant. By Theorem 1, we expect $O(\tau^2)$ convergence for

$$\gamma > \max\left(\frac{2}{\sigma + \alpha/2 + 1}, \frac{2}{\sigma + 3\alpha/2 - 3}\right) = \max\left(\frac{8}{3\alpha^- + 4}, \frac{8}{7\alpha^- - 12}\right).$$

In fact, the second term in the maximum function is not necessary since 0 < *α* ≤ 1. In order not to pepper the text with many tables for different values of $\gamma > \frac{8}{3\alpha^- + 4}$, we show the results for only two values of *γ* (one slightly greater than $\frac{8}{3\alpha^- + 4}$ and another slightly lower) to validate our theoretical order of convergence. We observe that with $\gamma = \frac{8}{3\alpha + 5}$, a rate of $O(\tau^{3/2+\epsilon})$ for some $\epsilon \in (0, 1/2)$ is achieved. However, for $\gamma = \frac{8}{3\alpha + 3}$, the $O(\tau^2)$ rate is obtained, which corroborates our theoretical analysis. These observations are further depicted in Figures 1 and 2, where we fit a linear line to the logarithm (base 10) of $M^{-1}$ and the corresponding errors. The slopes in these figures, which depict the rates of convergence for different values of *α* and *γ*, further support our theoretical observations.
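The time-graded meshes underlying these experiments can be generated as follows; this sketch uses the standard grading $t_n = T(n/N)^{\gamma}$, with illustrative values of `T`, `N`, and the grading exponent chosen by us:

```python
import numpy as np

def graded_mesh(T, N, gamma):
    """Time-graded mesh t_n = T*(n/N)**gamma: a standard grading that
    clusters time points near t = 0, where the solution is least regular."""
    n = np.arange(N + 1)
    return T * (n / N) ** gamma

t = graded_mesh(1.0, 8, 2.0)
tau = np.diff(t)               # step sizes tau_n grow away from t = 0
assert tau[0] < tau[-1]        # confirms clustering near the initial time
```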

**Figure 1.** Log-log error plots for Example 1 with *g*(*u*) = 0, showing the rate of convergence of the scheme.

**Figure 2.** Log-log error plots for Example 1 with *g*(*u*) = *u*<sup>2</sup>, showing the rate of convergence of the scheme.

**Table 1.** *g*(*u*) = 0 with *β* = 1.2.


**Table 2.** *g*(*u*) = *u*<sup>2</sup> with *β* = 1.6.


**Example 2.** *Let us consider a two-dimensional time–space reaction–diffusion problem of fractional order*

$$\_\*D\_{0,t}^{\alpha} u = -(-\Delta)^{\frac{\beta}{2}} u + g(u), \ t \in (0,1], \ (x, y) \in [0,1] \times [0,1]$$

$$u(x, y, 0) = xy(1-x)(1-y)$$

*with a Dirichlet homogeneous boundary condition. We first solve the problem when g*(*u*) = 0. *The solution in this case is given in Yang et al. [36], by*

$$\begin{aligned} u(x,y,t) &= \sum\_{n=1}^{\infty} \sum\_{m=1}^{\infty} E\_{\alpha} \left(-\lambda\_{n,m}^{\frac{\beta}{2}} t^{\alpha}\right) c\_{n,m} \phi\_{n,m}(x,y), \\ \lambda\_{n,m} &= \left(n^2 + m^2\right) \pi^2, \\ \phi\_{n,m}(x,y) &= 2 \sin(n\pi x) \sin(m\pi y), \\ c\_{n,m} &= \int\_0^1 \int\_0^1 u v(1-u)(1-v) \, \phi\_{n,m}(u,v) \, du \, dv. \end{aligned}$$

A space-step size of *dx* = 0.008 (owing to CPU memory constraints) is used in this problem. Similar to the 1D problem, the results here (see Tables 3 and 4) show that the $O(\tau^2)$ order of convergence is achieved when $\gamma > \frac{8}{3\alpha^- + 4}$. Figure 3 shows the exact and numerical solutions with *β* = 1.4 and *α* = 0.2.

**Figure 3.** Plots of exact (**left**) and numerical solutions (**right**) with *β* = 1.4 and *α* = 0.2.

**Table 3.** *g*(*u*) = 0 with *β* = 1.4.


**Table 4.** *g*(*u*) = *u*<sup>3</sup> with *β* = 1.8.


#### **5. Conclusions**

In this work, we developed a numerical scheme on time-graded meshes for nonlinear time–space fractional reaction–diffusion equations. The regularity properties of the solution to this class of problems are used to improve the convergence of the scheme on time-graded meshes, and an optimal $O(\tau^2)$ order of convergence is achieved. The stability of the scheme is discussed and proved, and sharp error estimates for the optimal $O(\tau^2)$ rate of convergence are established. Examples are provided to demonstrate the efficiency and accuracy of the proposed scheme across different values of the fractional order *α*.

**Author Contributions:** Conceptualization, Y.O.A., T.A.B., O.S.I., A.Q.M.K. and B.A.W.; Methodology, Y.O.A., T.A.B. and O.S.I.; Software, Y.O.A. and T.A.B.; Formal analysis, Y.O.A., T.A.B. and O.S.I.; Writing—original draft, Y.O.A., T.A.B. and O.S.I.; Writing—review & editing, A.Q.M.K. and B.A.W.; Supervision, O.S.I., A.Q.M.K. and B.A.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Doubling Smith Method for a Class of Large-Scale Generalized Fractional Diffusion Equations**

**Bo Yu, Xiang Li \* and Ning Dong \***

School of Science, Hunan University of Technology, Zhuzhou 412007, China; yubo@hut.edu.cn

**\*** Correspondence: xiangli@hut.edu.cn (X.L.); dongning@hut.edu.cn (N.D.)

**Abstract:** The implicit difference approach is used to discretize a class of generalized fractional diffusion equations into a series of linear equations. By rearranging the equations in matrix form, the separable forcing term and the coefficient matrices are shown to be low-ranked and of nonsingular *M*-matrix structure, respectively. A low-ranked doubling Smith method with optimally determined iterative parameters is presented for solving the corresponding matrix equation. In comparison to the existing Krylov solver with Fast Fourier Transform (FFT) for the sequence of Toeplitz linear systems, numerical examples demonstrate that the proposed method is more efficient in CPU time for solving large-scale problems.

**Keywords:** generalized fractional diffusion equation; doubling Smith method; large-scale Sylvester equation; *M*-matrix

#### **1. Introduction**

Consider a class of generalized fractional diffusion equations (GFDE)

$$\,\_0^C D\_t^{\gamma,\lambda(t)} u(x, t) = \kappa \left[ p\,\_a D\_x^{\alpha} u(x, t) + (1 - p)\,\_x D\_b^{\alpha} u(x, t) \right] + f(x, t), \quad (x, t) \in (a, b) \times (0, T) \tag{1}$$

with the initial value *u*(*x*, 0) = *φ*(*x*), *x* ∈ [*a*, *b*] and zero boundary conditions *u*(*a*, *t*) = *u*(*b*, *t*) = 0, *t* ∈ [0, *T*], where the parameters *α* ∈ (1, 2], *γ* ∈ (0, 1), *p* ∈ [0, 1], and *λ*(*t*) > 0 is a weighting function on [0, *T*] with *λ*′(*t*) ≤ 0. This equation arises from the continuous time random walk (CTRW) model with complicated power-law waiting time distributions (WTDs) [1–3]. The weight function *λ*(*t*) is of significant importance in the CTRW model when biological particles have a finite lifespan. In such cases, it is more reasonable to employ the tempered power-law waiting time distribution, $e^{-bt}t^{-\gamma}$, instead of the divergent power-law distribution, $t^{-\gamma}$. This choice allows the model to describe the gradual transition from subdiffusion to normal diffusion and, finally, to superdiffusion. These characteristics of the model have numerous potential applications in physical, biological, and chemical processes; for further details, please refer to [4,5]. The desired function *u*(*x*, *t*) represents the concentration of a particle plume undergoing anomalous diffusion with a diffusion coefficient *κ* ∈ (0, +∞), and the forcing function *f*(*x*, *t*) denotes the source or sink term. Throughout the paper, we assume that the function *f*(*x*, *t*) is separable (or decoupled) with respect to *x* on [*a*, *b*] and *t* on [0, *T*], that is,

$$f(x, t) = \sum\_{i=1}^{l} f\_{s\_i}(x) f\_{t\_i}(t) \quad \text{for all } (x, t) \in [a, b] \times [0, T].$$
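Separability means the discretized forcing matrix has rank at most *l*; a small numpy sketch with a hypothetical two-term forcing of our own choosing:

```python
import numpy as np

# Hypothetical separable forcing f(x,t) = sin(pi*x)*exp(-t) + x*(1-x)*t  (l = 2 terms).
ns, nt = 64, 32
x = np.linspace(0.0, 1.0, ns)
t = np.linspace(0.0, 1.0, nt)

# Discretized low-rank factors: columns of Fs are f_{s_i}(x), columns of Ft are f_{t_i}(t).
Fs = np.column_stack([np.sin(np.pi * x), x * (1 - x)])
Ft = np.column_stack([np.exp(-t), t])
F = Fs @ Ft.T                          # the (ns x nt) forcing matrix

assert np.linalg.matrix_rank(F) <= 2   # rank bounded by the number of separable terms
```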

The GFDE (1) reduces to the space fractional diffusion equation (SFDE) when *γ* = *λ*(*t*) = 1. Robust numerical schemes for SFDE have been studied extensively, as outlined in [6–10] and references therein.

**Citation:** Yu, B.; Li, X.; Dong, N. Doubling Smith Method for a Class of Large-Scale Generalized Fractional Diffusion Equations. *Fractal Fract.* **2023**, *7*, 380. https://doi.org/10.3390/fractalfract7050380

Academic Editors: Libo Feng, Yang Liu and Lin Liu

Received: 13 March 2023; Revised: 26 April 2023; Accepted: 28 April 2023; Published: 1 May 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

For GFDE, the time fractional derivative is the *γ*-order generalized Caputo fractional derivative [1,11], defined as

$${}^{C}\_{0}D\_{t}^{\gamma,\lambda(t)}u(x,t) = \frac{1}{\Gamma(1-\gamma)}\int\_{0}^{t} \frac{\lambda(t-\eta)}{(t-\eta)^{\gamma}} \frac{\partial u(x,\eta)}{\partial \eta} d\eta,$$

while the left-handed and the right-handed space fractional derivatives are the *α*-order Riemann–Liouville (R-L) fractional derivatives of the form [12]

$$\begin{aligned} \_{a}D\_{x}^{\alpha}u(x,t) &= \frac{1}{\Gamma(2-\alpha)}\frac{\partial^{2}}{\partial x^{2}}\int\_{a}^{x}\frac{u(\xi,t)}{(x-\xi)^{\alpha-1}}\,d\xi, \\ \_{x}D\_{b}^{\alpha}u(x,t) &= \frac{1}{\Gamma(2-\alpha)}\frac{\partial^{2}}{\partial x^{2}}\int\_{x}^{b}\frac{u(\xi,t)}{(\xi-x)^{\alpha-1}}\,d\xi. \end{aligned}$$

There are various ways to solve mathematical models that involve fractional order derivatives. One such approach is to use cubic splines, which are useful in modeling anomalous diffusion. In this technique, piece-wise polynomial functions are used to interpolate the data points, allowing for the diffusion coefficient to vary with time or space [13]. Another approach is to adapt the finite element method to include fractional order derivatives, which has been applied to determine the rheological properties of biomaterials that exhibit fractal structures. This method has been useful in studying the viscoelastic behavior of collagen and elastin [14]. The Galerkin method is yet another technique used to obtain numerical solutions for fractional differential equations. This method approximates the solution as a linear combination of basis functions and derives a system of algebraic equations, which can then be numerically solved using the Galerkin orthogonality [15].

To obtain an unconditionally stable difference scheme, the implicit difference scheme can be developed for Equation (1), which inherits (2 − *γ*)-order temporal and 2-order spatial convergence [11]. The corresponding Toeplitz linear system is then solved efficiently by using preconditioned Krylov subspace solvers with fast Fourier transformation (FFT), costing about *O*(*ns* log(*ns*)) flops and *O*(*ns*) memory for each temporal node, where *ns* is the number of spatial nodes. However, the derivation of the entire *nt* temporal nodes requires about *O*(*ntns* log(*ns*)) flops, which is not suitable for large-scale computations.
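The *O*(*ns* log *ns*) cost quoted above rests on the classic circulant-embedding matrix–vector product for Toeplitz matrices; a minimal sketch of that kernel (not the paper's preconditioned Krylov solver):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply Toep(c, r) @ x in O(n log n) via circulant embedding and FFT.
    c: first column, r: first row (with c[0] == r[0]), x: vector of length n."""
    n = len(x)
    # First column of a 2n-by-2n circulant that embeds the Toeplitz matrix.
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against the dense product for a small random Toeplitz matrix.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n); r = rng.standard_normal(n); r[0] = c[0]
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
x = rng.standard_normal(n)
assert np.allclose(toeplitz_matvec(c, r, x), T @ x)
```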

In this paper, we observe that the discretized coefficient matrices in the linear system are the nonsingular Toeplitz *M*-matrix, fitting well with the frame of the *M*-matrix Sylvester equation. This allows us to present a doubling Smith method [16,17] to deal with GFDE (1). The main contributions of this paper include the following aspects:


Some notations and definitions are required in this paper. Let $L^1(\mathbb{R})$ be the set of all integrable functions on the real line. Symbols $\mathbb{R}^2$ and $\mathbb{R}^{n\times n}$ denote the real plane and the set of *n* × *n* real matrices, respectively. For matrices $A, B \in \mathbb{R}^{n\times n}$, we write $A \ge B$ ($A > B$) if their respective elements satisfy $a\_{ij} \ge b\_{ij}$ ($a\_{ij} > b\_{ij}$) for all *i*, *j*. A real square matrix *A* is called a *Z*-matrix if all its off-diagonal elements are nonpositive. It is clear that any *Z*-matrix *A* can be written as *sI* − *B* with *B* ≥ 0. A *Z*-matrix *A* = *sI* − *B* with *B* ≥ 0 is called an *M*-matrix if *s* ≥ *ρ*(*B*), where *ρ*(·) denotes the spectral radius; it is a singular *M*-matrix if *s* = *ρ*(*B*) and a nonsingular *M*-matrix if *s* > *ρ*(*B*). The (non)symmetric Toeplitz matrix is denoted by Toep(*c*, *r*), with vectors *c* and *r* being its first column and row, respectively. The matrix $A \in \mathbb{R}^{n\times n}$ is numerically low-ranked if there is a constant $c\_{\tau}$ independent of *n* such that $\mathrm{rank}\_{\tau}(A) \le c\_{\tau}$.

The following results about *M*-matrix are well known (see [19] (Section 3.5), [20] (Lem. 2.2) for an example).

**Lemma 1.** *For a Z-matrix A, the following statements are equivalent:*

*(a) A is a nonsingular M-matrix.*

*(b) A is nonsingular and satisfies $A^{-1} \ge 0$.*

*(c) Av* > 0 *for some vector v* > 0*.*

*(d) All eigenvalues of A have positive real parts.*

**Lemma 2.** *Suppose that A is an M-matrix and B is a Z-matrix.*

*(a) If B* ≥ *A, then B is an M-matrix. Particularly, γI* + *A is an M-matrix for γ* ≥ 0 *and a nonsingular M-matrix for γ* > 0*.*

*(b) The one with the smallest absolute value among all eigenvalues of A, denoted by $\lambda\_1^A$, is nonnegative, and $\lambda\_1^A \le \max\_i A\_{ii}$.*
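The nonsingular *M*-matrix definition can be checked numerically; a small sketch testing the *s* > *ρ*(*B*) characterization (the tolerance and the example matrix are our own):

```python
import numpy as np

def is_nonsingular_M_matrix(A, tol=1e-12):
    """Check the definition: A = s*I - B with B >= 0 and s > rho(B),
    after verifying A is a Z-matrix (nonpositive off-diagonal entries)."""
    A = np.asarray(A, dtype=float)
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):
        return False                       # not a Z-matrix
    s = np.max(np.diag(A))
    B = s * np.eye(A.shape[0]) - A         # B >= 0 by construction
    return bool(s > np.max(np.abs(np.linalg.eigvals(B))) + tol)

# The 1D discrete Laplacian tridiag(-1, 2, -1) is a classic nonsingular M-matrix.
n = 6
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
assert is_nonsingular_M_matrix(L)
```

By Lemma 1(b), the inverse of such a matrix is entrywise nonnegative, which can serve as an independent cross-check.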

#### **2. Implicit Difference Scheme and the Linear Systems**

We will construct an implicit difference scheme for temporal and spatial discretization by using the generalized Caputo fractional derivative [1] for the temporal direction and the second-order WSGD [21–23] spatial discretization for the spatial direction.

#### *2.1. Temporal and Spatial Discretization*

We first introduce the temporal discretization of the function *u*(*x*, *t*) on the rectangular area $\mathrm{Rec} = \{(x, t) : a \le x \le b, \ 0 \le t \le T\} \subset \mathbb{R}^2$ with the discretized mesh $m\_h \times m\_{\tau} = \{x\_i \times t\_j : x\_i = a + ih, \ t\_j = j\tau, \ 0 \le i \le n\_s, \ 0 \le j \le n\_t, \ h = (b-a)/n\_s, \ \tau = T/n\_t\}$.

Define the linear interpolation

$$
\Pi\_{1,s}u(\cdot,t) = u(\cdot,t\_{s+1})\frac{t-t\_s}{\tau} + u(\cdot,t\_s)\frac{t\_{s+1}-t}{\tau}
$$

over the time interval $(t\_s, t\_{s+1})$ with $0 \le s \le n\_t - 1$. Then, at the time $t\_{j+1}$, one has

$${}\_{0}^{C}D\_{t}^{\gamma,\lambda(t)}u(\cdot,t)\big|\_{t=t\_{j+1}} = \frac{\tau^{1-\gamma}}{\Gamma(2-\gamma)}\sum\_{s=0}^{j}[\lambda\_{j-s+1/2}a\_{j-s} + (\lambda\_{j-s} - \lambda\_{j-s+1})b\_{j-s}]u\_{t,s} + R\_{1}^{j} + R\_{2}^{j},$$

where

$\lambda\_{s} = \lambda(t\_{s})$, $u\_{t,s} = \frac{u(\cdot, t\_{s+1}) - u(\cdot, t\_{s})}{\tau}$, $a\_i = (i+1)^{1-\gamma} - i^{1-\gamma}$, $b\_i = \frac{1}{2-\gamma}[(i+1)^{2-\gamma} - i^{2-\gamma}] - \frac{1}{2}[(i+1)^{1-\gamma} + i^{1-\gamma}]$

with $i \ge 1$ and $R\_1^j$ and $R\_2^j$ being residuals defined in [1]. The following lemma states the truncation error of the above discretized scheme [1] (Lem. 4.1).

**Lemma 3.** *Let $\gamma \in (0,1)$, $\lambda(t) > 0$, $\lambda'(t) \le 0$, and $\lambda(t), u(\cdot,t) \in \mathcal{C}^2[0,t\_{j+1}]$. Then,*

$$\,\_0^C D\_t^{\gamma,\lambda(t)} u(\cdot, t\_{j+1}) = \Delta\_{0, t\_{j+1}}^{\gamma,\lambda(t)} u^{j+1} + O(\tau^{2-\gamma})$$

*with*

$$
\Delta\_{0,t\_{j+1}}^{\gamma,\lambda(t)} u^{j+1} = \sum\_{s=0}^{j} c\_{j-s} (u^{s+1} - u^s) \tag{2}
$$

*and*

$$c\_k = \frac{\tau^{-\gamma}}{\Gamma(2-\gamma)} [\lambda\_{k+1/2} a\_k + (\lambda\_k - \lambda\_{k+1}) b\_k]$$

*for k* ≥ 0*. Furthermore, elements in sequences* {*ak*}*,* {*bk*}*, and* {*ck*} *are all decreasing with respect to k, i.e.,*

$$\begin{cases} a\_0 > a\_1 > \dots > a\_k > \frac{1-\gamma}{(k+1)^{\gamma}}, \\ b\_0 > b\_1 > \dots > b\_k > 0, \\ c\_0 > c\_1 > \dots > c\_k > \frac{\lambda(t\_{k+1/2})}{\Gamma(1-\gamma)t\_{k+1}^{\gamma}}. \end{cases} \tag{3}$$
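The monotonicity properties in (3) are easy to verify numerically; a sketch with a hypothetical weight λ(*t*) = e<sup>−*t*</sup> and illustrative values of γ and τ chosen by us:

```python
import numpy as np
from math import gamma as Gamma

g_, tau = 0.4, 0.05                     # illustrative gamma in (0,1) and step size
lam = lambda t: np.exp(-t)              # hypothetical weight: lam > 0, lam' <= 0

k = np.arange(20)
a = (k + 1.0)**(1 - g_) - k**(1 - g_)
b = ((k + 1.0)**(2 - g_) - k**(2 - g_)) / (2 - g_) \
    - ((k + 1.0)**(1 - g_) + k**(1 - g_)) / 2
c = tau**(-g_) / Gamma(2 - g_) * (lam((k + 0.5) * tau) * a
                                  + (lam(k * tau) - lam((k + 1.0) * tau)) * b)

# Monotonicity and the lower bounds asserted in (3).
assert np.all(np.diff(a) < 0) and np.all(a > (1 - g_) / (k + 1.0)**g_)
assert np.all(np.diff(b) < 0) and np.all(b > 0)
assert np.all(np.diff(c) < 0)
assert np.all(c > lam((k + 0.5) * tau) / (Gamma(1 - g_) * ((k + 1.0) * tau)**g_))
```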

We next consider the spatial discretization. Let $\mathcal{L}^{n+\alpha}(\mathbb{R}) = \{u \mid u \in L^1(\mathbb{R}), \int\_{-\infty}^{+\infty}(1+|k|)^{n+\alpha}|\hat{u}(k)|\,dk < \infty\}$ with $\hat{u}(k) = \int\_{-\infty}^{+\infty} e^{\mathbf{i}kx}u(x)\,dx$ being the Fourier transform of *u*(*x*). Here, **i** represents the imaginary unit. The spatial WSGD discretized format for the R–L fractional derivative is summarized in the following lemma, as proposed in [11] (Lem. 2.3); see also [21–23].

**Lemma 4.** *Let $u(x,\cdot) \in \mathcal{L}^{2+\alpha}(\mathbb{R})$. Then, for some fixed space step-length h, one has*

$$
\_aD\_x^{\alpha}u(x,\cdot) = \delta\_{x,+}^{\alpha}u(x,\cdot) + O(h^2), \quad \_xD\_b^{\alpha}u(x,\cdot) = \delta\_{x,-}^{\alpha}u(x,\cdot) + O(h^2),
$$

*where*

$$\begin{aligned}\delta\_{x,+}^{\alpha}u(x,\cdot) &= \frac{1}{h^{\alpha}}\sum\_{k=0}^{\left[\left[\frac{x-a}{h}\right]\right]}w\_{k}^{(\alpha)}u(x-(k-1)h,\cdot),\\ \delta\_{x,-}^{\alpha}u(x,\cdot) &= \frac{1}{h^{\alpha}}\sum\_{k=0}^{\left[\left[\frac{b-x}{h}\right]\right]}w\_{k}^{(\alpha)}u(x+(k-1)h,\cdot)\end{aligned}$$

*are difference operators with* [[·]] *being the floor function and*

$$w\_0^{(\alpha)} = \kappa\_1 g\_0^{(\alpha)},\ w\_1^{(\alpha)} = \kappa\_1 g\_1^{(\alpha)} + \kappa\_0 g\_0^{(\alpha)},\ w\_k^{(\alpha)} = \kappa\_1 g\_k^{(\alpha)} + \kappa\_0 g\_{k-1}^{(\alpha)} + \kappa\_{-1} g\_{k-2}^{(\alpha)} \ (k \ge 2).$$

Here, $\kappa\_1 = \frac{\alpha^2 + 3\alpha + 2}{12}$, $\kappa\_0 = \frac{4 - \alpha^2}{6}$, $\kappa\_{-1} = \frac{\alpha^2 - 3\alpha + 2}{12}$, and $g\_k^{(\alpha)} = (-1)^k \binom{\alpha}{k}$. *Furthermore, for $\alpha \in (1,2)$, the sequence $\{w\_k^{(\alpha)}\}$ satisfies*

$$\begin{array}{ll}w\_0^{(\alpha)} > 0, \ w\_1^{(\alpha)} < 0, \ w\_k^{(\alpha)} > 0 \ (k \ge 3),\\ \sum\_{k=0}^{\infty} w\_k^{(\alpha)} = 0, \ \sum\_{k=0}^n w\_k^{(\alpha)} < 0 \ (n > 1).\end{array} \tag{4}$$
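The weights and the sign pattern in (4) can be reproduced from the standard Grünwald coefficient recurrence $g_k^{(\alpha)} = (1 - \frac{\alpha+1}{k})\,g_{k-1}^{(\alpha)}$, $g_0^{(\alpha)} = 1$; a sketch:

```python
import numpy as np

def wsgd_weights(alpha, n):
    """Second-order WSGD weights w_k^(alpha) built from the Grunwald
    coefficients g_k^(alpha) = (-1)**k * binom(alpha, k)."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]     # standard recurrence
    k1 = (alpha**2 + 3 * alpha + 2) / 12.0
    k0 = (4.0 - alpha**2) / 6.0
    km1 = (alpha**2 - 3 * alpha + 2) / 12.0
    w = np.empty(n + 1)
    w[0] = k1 * g[0]
    w[1] = k1 * g[1] + k0 * g[0]
    w[2:] = k1 * g[2:] + k0 * g[1:-1] + km1 * g[:-2]
    return w

w = wsgd_weights(1.5, 50)
assert w[0] > 0 and w[1] < 0
assert np.all(np.cumsum(w)[2:] < 0)     # partial sums are negative, cf. (4)
```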

#### *2.2. Derivation of the Sequence of Linear Systems*

By employing the above implicit difference scheme, we can obtain a sequence of discretized linear systems. In fact, for *i* = 1, ..., *ns* − 1 and *j* = 0, 1, ..., *nt* − 1, the GFDE (1) at the grid point (*xi*, *tj*) is

$${}^{C}\_{0}D\_{t}^{\gamma,\lambda(t)}u(\mathbf{x}\_{i},\mathbf{t}\_{j}) = \kappa \left[p\_{a}D\_{\mathbf{x}}^{a}u(\mathbf{x},\mathbf{t}) + (1-p)\_{\mathbf{x}}D\_{\mathbf{b}}^{a}u(\mathbf{x},\mathbf{t})\right]\_{(\mathbf{x}\_{i},\mathbf{t}\_{j})} + f(\mathbf{x}\_{i},\mathbf{t}\_{j}).\tag{5}$$

Recalling Lemmas 3 and 4, Equation (5) can be rewritten as

$$
\Delta\_{0,t\_{j+1}}^{\gamma,\lambda(t)} u\_i^{j+1} = \kappa \cdot \delta\_h^{\alpha} u\_i^{j+1} + f\_i^{j+1} + R\_i^{j+1},
$$

where $u\_i^j = u(x\_i, t\_j)$, $f\_i^j = f(x\_i, t\_j)$,

$$\delta\_h^{\alpha} u\_i^{j+1} = \frac{1}{h^{\alpha}} \Big[ p \sum\_{k=0}^{i+1} w\_k^{(\alpha)} u\_{i-k+1}^{j+1} + (1-p) \sum\_{k=0}^{n\_s - i+1} w\_k^{(\alpha)} u\_{i+k-1}^{j+1} \Big], \tag{6}$$

and $R\_i^{j+1}$ is the error. We then omit the error and arrive at the implicit difference scheme

$$
\Delta\_{0,t\_{j+1}}^{\gamma,\lambda(t)} u\_i^{j+1} = \kappa \cdot \delta\_h^{\alpha} u\_i^{j+1} + f\_i^{j+1}, \ 1 \le i \le n\_s - 1, \ 0 \le j \le n\_t - 1 \tag{7}
$$

with the initial condition $u\_i^0 = \varphi(x\_i)$ for $0 \le i \le n\_s$ and the zero boundary conditions $u\_0^j = 0$, $u\_{n\_s}^j = 0$ for $0 \le j \le n\_t$.

Before we proceed with the derivation of the linear system, we show that the implicit difference scheme presented in Equation (7) is stable. In fact, by setting $\xi\_i^j = \kappa$ for all *i* and *j* in [11] (Thm 2.2), one has the following stability theorem.

**Theorem 1.** *By defining $\|f^{j+1}\|^2 = h \sum\_{i=1}^{n\_s-1} f^2(x\_i, t\_{j+1})$, the implicit difference scheme* (7) *is unconditionally stable, and there exists a constant c such that the a priori estimate holds:*

$$\|u^{j+1}\|^2 \le \|u^0\|^2 + \frac{\Gamma(1-\gamma)T^{\gamma}}{2c\kappa \ln 2\,\lambda(T)} \max\_{0 \le j \le n\_t - 1} \|f^{j+1}\|^2,$$


*where $u^{j+1} = [u\_1^{j+1}, u\_2^{j+1}, \ldots, u\_{n\_s-1}^{j+1}]^{\top}$ and $u^0 = [u\_1^0, u\_2^0, \ldots, u\_{n\_s-1}^0]^{\top}$.*

According to [11] (Thm 2.3), the implicit difference scheme (7) exhibits a (2 − *γ*)-order of convergence in time and a quadratic order of convergence in space when the solution of the GFDE (1) is sufficiently smooth.

**Theorem 2.** *Suppose that $u\_{\mathrm{true}}(x,t) \in \mathcal{C}\_{x,t}^{4,2}([a,b] \times [0,T])$ is the solution of GFDE* (1) *and $u\_i^j$ derives from the implicit difference scheme* (7)*. Define*

$$E\_i^j = u\_{\mathrm{true}}(x\_i, t\_j) - u\_i^j, \quad 1 \le i \le n\_s - 1, \ 0 \le j \le n\_t - 1.$$

*Then, there exists a constant $\tilde{c}$ such that for $j \le n\_t - 1$,*

$$\|E^j\| \le \tilde{c}\left(\tau^{2-\gamma} + h^2\right).$$

Now, we construct the Toeplitz matrix

$$\mathcal{W}\_{\alpha} = \text{Toep}([w\_1^{(\alpha)}, \dots, w\_{n\_s-1}^{(\alpha)}], [w\_1^{(\alpha)}, w\_0^{(\alpha)}, \overbrace{0, \dots, 0}^{n\_s-3}]) \in \mathbb{R}^{(n\_s-1)\times(n\_s-1)}$$

and set

$$B = -\frac{\kappa}{h^{\alpha}} (p\mathcal{W}\_{\alpha} + (1 - p)\mathcal{W}\_{\alpha}^{\top}).\tag{8}$$

Then, for each temporal node *j* (0 ≤ *j* ≤ *nt* − 1), Equation (7) with 1 ≤ *i* ≤ *ns* − 1 is equivalent to the linear system

$$(c\_0 I\_{n\_s - 1} + B)u^{j+1} = c\_j u^0 + \sum\_{k=1}^j (c\_{k-1} - c\_k) u^{j+1-k} + f^{j+1} \tag{9}$$

with $u^j = [u\_1^j, \ldots, u\_{n\_s-1}^j]^{\top}$ and $f^j = [f\_1^j, \ldots, f\_{n\_s-1}^j]^{\top}$. It is clear that the fast Fourier transform (FFT) method is well-suited for the sequence of linear systems (9), and the computational complexity for solving the *j*-th equation is *O*((*ns* − 1)log(*ns* − 1)) [24] (Chap. 3). As a result, the total computational cost for the entire sequence of linear systems in Equation (9) is about *O*(*nt*(*ns* − 1)log(*ns* − 1)) flops.
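Putting the pieces together, the time march (9) can be sketched on a small grid; dense solves stand in for the FFT/Krylov solver, and the weight function, forcing, and initial data below are hypothetical choices of ours:

```python
import numpy as np
from math import gamma as Gamma

# Hypothetical data: lam(t) = exp(-t), f(x,t) = sin(pi*x)*exp(-t), phi(x) = x*(1-x).
alpha, gam, p, kappa = 1.5, 0.6, 0.5, 1.0
a, b, T = 0.0, 1.0, 1.0
ns, nt = 32, 16
h, tau = (b - a) / ns, T / nt
x = a + h * np.arange(1, ns)                     # interior nodes x_1..x_{ns-1}
lam = lambda t: np.exp(-t)

# Grunwald coefficients and second-order WSGD weights (Lemma 4).
g = np.ones(ns)
for kk in range(1, ns):
    g[kk] = (1 - (alpha + 1) / kk) * g[kk - 1]
k1, k0, km1 = (alpha**2 + 3*alpha + 2) / 12, (4 - alpha**2) / 6, (alpha**2 - 3*alpha + 2) / 12
w = np.empty(ns)
w[0], w[1] = k1 * g[0], k1 * g[1] + k0 * g[0]
w[2:] = k1 * g[2:ns] + k0 * g[1:ns - 1] + km1 * g[0:ns - 2]

# Toeplitz W_alpha (entries w_{i-j+1} for i - j >= -1) and B from (8).
I, J = np.meshgrid(np.arange(ns - 1), np.arange(ns - 1), indexing="ij")
K = I - J + 1
W = np.where(K >= 0, w[np.clip(K, 0, ns - 1)], 0.0)
B = -(kappa / h**alpha) * (p * W + (1 - p) * W.T)

# Temporal coefficients c_k from Lemma 3.
k = np.arange(nt)
ak = (k + 1.0)**(1 - gam) - k**(1 - gam)
bk = ((k + 1.0)**(2 - gam) - k**(2 - gam)) / (2 - gam) \
     - ((k + 1.0)**(1 - gam) + k**(1 - gam)) / 2
c = tau**(-gam) / Gamma(2 - gam) * (lam((k + 0.5) * tau) * ak
                                    + (lam(k * tau) - lam((k + 1.0) * tau)) * bk)

# March (9): (c_0*I + B) u^{j+1} = c_j*u^0 + sum_{m=1}^{j}(c_{m-1}-c_m)u^{j+1-m} + f^{j+1}.
u = [x * (1 - x)]                                # u^0 = phi at interior nodes
M = c[0] * np.eye(ns - 1) + B
for j in range(nt):
    f = np.sin(np.pi * x) * np.exp(-(j + 1) * tau)
    rhs = c[j] * u[0] + sum((c[m - 1] - c[m]) * u[j + 1 - m] for m in range(1, j + 1)) + f
    u.append(np.linalg.solve(M, rhs))
```

Since *B* is a nonsingular *M*-matrix (Theorem 3 below) and $c_0 > 0$, the left-hand matrix is the same at every step and can be factorized once.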

#### **3. Low-Ranked Matrix Equation and Doubling Smith Method**

In this section, we will further transform the sequence of linear systems (9) into a low-ranked matrix equation via the separable forcing function *f*(*x*, *t*), which is then efficiently solved by the presented doubling Smith method, equipped with two determined optimal parameters.

#### *3.1. Matrix Equation with Structured Coefficients*

To construct the matrix equation, we rewrite the linear systems (9) into

$$\begin{bmatrix} c\_0 I + B & & & \\ (c\_1 - c\_0)I & c\_0 I + B & & \\ \vdots & \ddots & \ddots & \\ (c\_{n\_t-1} - c\_{n\_t-2})I & \cdots & (c\_1 - c\_0)I & c\_0 I + B \end{bmatrix} \begin{bmatrix} u^{1} \\ u^{2} \\ \vdots \\ u^{n\_t} \end{bmatrix} = \begin{bmatrix} f^{1} + c\_0 u^0 \\ f^{2} + c\_1 u^0 \\ \vdots \\ f^{n\_t} + c\_{n\_t-1} u^0 \end{bmatrix}. \tag{10}$$

By setting the Toeplitz matrix

$$A = \text{Toep}([c\_0, 0, \dots, 0], [c\_0, c\_1 - c\_0, c\_2 - c\_1, \dots, c\_{n\_t - 1} - c\_{n\_t - 2}]) \in \mathbb{R}^{n\_t \times n\_t},\tag{11}$$

the linear system (10) is the one-shot equation of the scale *nt*(*ns* − 1) × *nt*(*ns* − 1), i.e.,

$$(A^{\top} \otimes I\_{n\_s-1} + I\_{n\_t} \otimes B)u = f + c \otimes u^0,$$

where $f = (f^{1\top}, f^{2\top}, \ldots, f^{n\_t\top})^{\top} \in \mathbb{R}^{(n\_s-1)n\_t}$ is the forcing term, $c = (c\_0, c\_1, \ldots, c\_{n\_t-1})^{\top} \in \mathbb{R}^{n\_t}$ is the constant vector, and $u = (u^{1\top}, u^{2\top}, \ldots, u^{n\_t\top})^{\top} \in \mathbb{R}^{(n\_s-1)n\_t}$ is the desired unknown vector.

Furthermore, by rearranging vectors $u^i$ and $f^i$ as matrices $U = [u^1, u^2, \ldots, u^{n\_t}] \in \mathbb{R}^{(n\_s-1)\times n\_t}$ and $F = [f^1 + c\_0 u^0, f^2 + c\_1 u^0, \ldots, f^{n\_t} + c\_{n\_t-1} u^0] \in \mathbb{R}^{(n\_s-1)\times n\_t}$, respectively, we arrive at the Sylvester matrix equation

$$UA + BU = F, \tag{12}$$

with *F* being the constant term.
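The equivalence between the one-shot Kronecker system and (12) is the standard vec identity $\mathrm{vec}(UA) = (A^{\top}\otimes I)\mathrm{vec}(U)$, $\mathrm{vec}(BU) = (I\otimes B)\mathrm{vec}(U)$, which a few lines verify on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
ns1, nt = 5, 4                     # stands for n_s - 1 and n_t, kept tiny
A = rng.standard_normal((nt, nt))
B = rng.standard_normal((ns1, ns1))
U = rng.standard_normal((ns1, nt))

# Column-major vec (Fortran order) matches the Kronecker identities.
vec = lambda M: M.flatten(order="F")
lhs = (np.kron(A.T, np.eye(ns1)) + np.kron(np.eye(nt), B)) @ vec(U)
rhs = vec(U @ A + B @ U)
assert np.allclose(lhs, rhs)
```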

**Remark 1.** *When the forcing function f*(*x*, *t*) *is separable, the constant matrix F in* (12) *is a product of two low-rank factors, i.e.,*

$$F = F\_s F\_t^{\top} = [F\_{s1}, u^0] \left[F\_{t1}, c\right]^{\top},$$

*where $F\_{s1} \in \mathbb{R}^{(n\_s-1) \times r\_f}$ ($r\_f \ll n\_s$) and $F\_{t1} \in \mathbb{R}^{n\_t \times r\_f}$ ($r\_f \ll n\_t$) are the discretized spatial and time matrices from f*(*x*, *t*)*.*

The following theorem states the nice property of matrices *A* and *B*, which also contributes to the motivation of developing the doubling Smith method with the optimal parameters.

**Theorem 3.** *Let $\lambda(t) \in \mathcal{C}^2[0,T]$ be the weight function satisfying $\lambda(t) > 0$ and $\lambda'(t) \le 0$ for all $t \in [0,T]$. Let $\{w\_k^{(\alpha)}\}$ be the sequence generated by the WSGD format in Lemma 4. Then, matrices A given in* (11) *and B given in* (8) *are both nonsingular M-matrices.*

**Proof.** Let $\mathbf{1} = (1, 1, \ldots, 1)^{\top}$ be the vector with all its elements equal to 1. According to Equation (3) and the assumption that the weight function *λ*(*t*) is positive, we can conclude that the minimum value of the sequence $c\_k$ is greater than zero. As a result, we have

$$A\mathbf{1} = (c\_{n\_t-1}, c\_{n\_t-2}, \dots, c\_1, c\_0)^{\top} > \frac{\lambda (t\_{n\_t - 1/2})}{\Gamma(1 - \gamma) t\_{n\_t}^{\gamma}} \mathbf{1} > 0,$$

and *A* is a nonsingular *M*-matrix based on Lemma 1.

Moreover, from $\sum\_{i=0}^{\infty} w\_i^{(\alpha)} = 0$ in Equation (4), we can derive that $\sum\_{i=n\_s}^{\infty} w\_i^{(\alpha)} = -\sum\_{i=0}^{n\_s-1} w\_i^{(\alpha)} > 0$, which implies that $\sum\_{i=1}^{n\_s-1} w\_i^{(\alpha)} = -(w\_0^{(\alpha)} + \sum\_{i=n\_s}^{\infty} w\_i^{(\alpha)}) < 0$. Therefore,

$$\mathcal{W}\_{\alpha}\mathbf{1} = \Big[\sum\_{i=0}^{1} w\_{i}^{(\alpha)}, \sum\_{i=0}^{2} w\_{i}^{(\alpha)}, \dots, \sum\_{i=0}^{n\_s-2} w\_{i}^{(\alpha)}, \sum\_{i=1}^{n\_s-1} w\_{i}^{(\alpha)}\Big]^{\top} < 0,$$
$$\mathcal{W}\_{\alpha}^{\top}\mathbf{1} = \Big[\sum\_{i=1}^{n\_s-1} w\_{i}^{(\alpha)}, \sum\_{i=0}^{n\_s-2} w\_{i}^{(\alpha)}, \dots, \sum\_{i=0}^{2} w\_{i}^{(\alpha)}, \sum\_{i=0}^{1} w\_{i}^{(\alpha)}\Big]^{\top} < 0.$$

Since *κ*, *hα*, and *p* in (8) are all positive, it follows that *B***1** > 0, and *B* is a nonsingular *M*-matrix via Lemma 1.
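As an illustration of the structure the theorem asserts, the short numerical check below builds a small lower-triangular Toeplitz matrix with the same sign pattern as *A* in (11) (the weights are made up for this sketch, not the actual *ck*) and verifies the two defining properties of a nonsingular *M*-matrix: non-positive off-diagonal entries and an entrywise non-negative inverse.

```python
import numpy as np

def is_nonsingular_m_matrix(M, tol=1e-12):
    """Check two defining properties numerically: off-diagonal entries
    are non-positive (Z-matrix) and the inverse is entrywise non-negative."""
    off = M - np.diag(np.diag(M))
    if off.max() > tol:
        return False
    try:
        Minv = np.linalg.inv(M)
    except np.linalg.LinAlgError:
        return False
    return Minv.min() >= -tol

# A lower-triangular Toeplitz matrix mimicking the sign pattern of A in (11):
# positive diagonal, non-positive sub-diagonals, positive row sums (A1 > 0).
c = np.array([2.0, -0.5, -0.3, -0.1])  # illustrative weights only
A = sum(np.diag(np.full(4 - k, c[k]), -k) for k in range(4))
print(is_nonsingular_m_matrix(A))      # True
```

The positive row sums play the role of the vector inequality *A***1** > 0 used in the proof.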

#### *3.2. Doubling Smith Method with the Optimal Parameters*

To develop the doubling Smith method, we first use the generalized Cayley transform [17,20,25] (Section 2) to convert the Sylvester Equation (12) to the Stein equation. By introducing two positive parameters *μ* and *ν*, Equation (12) can be rewritten as

$$(B - \nu I\_{n\_s-1})\,\mathcal{U}(A - \mu I\_{n\_t}) - (B + \mu I\_{n\_s-1})\,\mathcal{U}(A + \nu I\_{n\_t}) = -(\mu + \nu)F,$$

or the corresponding Stein equation

$$
\tilde{B}\,\mathcal{U}\,\tilde{A} - \mathcal{U} + \tilde{F} = 0,\tag{13}
$$

where

$$\tilde{A} = (A - \mu I\_{n\_t})\left(A + \nu I\_{n\_t}\right)^{-1}, \quad \tilde{B} = \left(B - \nu I\_{n\_s-1}\right)\left(B + \mu I\_{n\_s-1}\right)^{-1}$$

and $\tilde{F} = (\mu + \nu)(B + \mu I\_{n\_s-1})^{-1}F(A + \nu I\_{n\_t})^{-1}$. As $A$ and $B$ are nonsingular $M$-matrices, Lemma 2 implies that $\tilde{A}$ and $\tilde{B}$ are non-positive matrices when

$$\mu \ge \max\_{1 \le i \le n\_t} A\_{ii} \quad \text{and} \quad \nu \ge \max\_{1 \le i \le n\_s-1} B\_{ii}.$$
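The Cayley-transform step can be sanity-checked numerically. The sketch below assumes, as in the text, that (12) is the Sylvester equation $BU + UA = F$; the matrices here are random stand-ins, not the actual discretization matrices, and the check confirms the algebraic identity that produces the right-hand side $-(\mu+\nu)F$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 4
# Hypothetical small matrices standing in for B (spatial) and A (temporal).
B = np.diag(rng.uniform(2, 3, n)) - 0.1 * rng.random((n, n))
A = np.diag(rng.uniform(2, 3, m)) - 0.1 * rng.random((m, m))
U = rng.random((n, m))
F = B @ U + U @ A          # Sylvester equation (12): B U + U A = F
mu, nu = 4.0, 4.0

lhs = (B - nu * np.eye(n)) @ U @ (A - mu * np.eye(m)) \
    - (B + mu * np.eye(n)) @ U @ (A + nu * np.eye(m))
print(np.allclose(lhs, -(mu + nu) * F))   # True: the Cayley identity holds
```

Expanding both products shows the $BUA$ and $\mu\nu U$ terms cancel, leaving exactly $-(\mu+\nu)(BU+UA)$.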

We can rewrite the Stein equation as $\mathcal{U} = \mathcal{S}(\mathcal{U}) = \tilde{B}\mathcal{U}\tilde{A} + \tilde{F}$ and substitute $\mathcal{U} = \mathcal{S}(\mathcal{U})$ into the right-hand side. This gives the equation $\mathcal{U} = \mathcal{S}^2(\mathcal{U}) = \mathcal{S}(\tilde{B}\mathcal{U}\tilde{A} + \tilde{F}) = \tilde{B}^2\mathcal{U}\tilde{A}^2 + \tilde{B}\tilde{F}\tilde{A} + \tilde{F}$. By repeatedly substituting the iterate into its own right-hand side, we can derive the $k$-th iterate in the form

$$\mathcal{U} = \tilde{B}^{2^k}\,\mathcal{U}\,\tilde{A}^{2^k} + \sum\_{i=0}^{2^k - 1} \tilde{B}^i \tilde{F} \tilde{A}^i,$$

which contributes to the doubling Smith (DS) iteration

$$
\tilde{A}\_{k+1} = \tilde{A}\_k^2, \quad \tilde{B}\_{k+1} = \tilde{B}\_k^2, \quad \tilde{\mathcal{U}}\_{k+1} = \tilde{\mathcal{U}}\_k + \tilde{B}\_k \tilde{\mathcal{U}}\_k \tilde{A}\_k \quad (k \ge 0), \tag{14}
$$

where $\tilde{A}\_0 = \tilde{A}$, $\tilde{B}\_0 = \tilde{B}$, and $\tilde{\mathcal{U}}\_0 = \tilde{F}$.
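A dense-matrix sketch of the DS iteration (14) — using generic contractive stand-ins for $\tilde{A}$ and $\tilde{B}$ rather than the actual Cayley-transformed matrices — shows the doubling build-up of the truncated series and that the limit solves the Stein equation:

```python
import numpy as np

def doubling_smith(At, Bt, Ft, iters=6):
    """Doubling Smith iteration (14) for the Stein equation
    Bt @ U @ At - U + Ft = 0, assuming rho(At) * rho(Bt) < 1."""
    Ak, Bk, Uk = At, Bt, Ft
    for _ in range(iters):
        Uk = Uk + Bk @ Uk @ Ak   # U_{k+1} = U_k + B_k U_k A_k
        Ak = Ak @ Ak             # A_{k+1} = A_k^2
        Bk = Bk @ Bk             # B_{k+1} = B_k^2
    return Uk

rng = np.random.default_rng(1)
n, m = 6, 5
At = 0.25 * rng.random((m, m))   # made-up contractive matrices
Bt = 0.25 * rng.random((n, n))
Ft = rng.random((n, m))
U = doubling_smith(At, Bt, Ft)
print(np.allclose(Bt @ U @ At - U + Ft, 0))   # True: U solves the Stein equation
```

After $k$ sweeps the iterate holds $2^k$ terms of the series, which is why a handful of iterations suffices when $\rho(\tilde{A})\rho(\tilde{B}) < 1$.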

The following theorem describes how the convergence rate of the DS iteration depends on the Cayley parameters (*μ*, *ν*) and how it attains its optimum by exploiting the *M*-matrix property.

**Theorem 4.** *Let the sequence $\{\tilde{\mathcal{U}}\_k\}$ be generated by the DS iteration. Assume that $\tilde{\mathcal{U}}\_\infty = \sum\_{i=0}^{\infty} \tilde{B}^i \tilde{F} \tilde{A}^i$ is the solution of the Stein Equation* (13)*. Then,*

$$\limsup\_{k \to \infty} \|\tilde{\mathcal{U}}\_{\infty} - \tilde{\mathcal{U}}\_{k}\|^{1/2^{k}} \le \rho(\tilde{A})\rho(\tilde{B}).\tag{15}$$

*Furthermore, $\rho(\tilde{A})\rho(\tilde{B})$ attains its minimal value when*

$$(\mu^\*, \nu^\*) = \Big(\max\_i A\_{ii}, \max\_i B\_{ii}\Big) = \Big(c\_0,\ -\kappa w\_1^{(\alpha)}/h^\alpha\Big),\tag{16}$$

*where $c\_0$ is given in* (11) *and $\kappa$, $w\_1^{(\alpha)}$, $h^\alpha$ are given in* (8)*.*

**Proof.** The DS iteration (14) yields that the error at the $k$-th iteration is $\tilde{\mathcal{U}}\_\infty - \tilde{\mathcal{U}}\_k = \sum\_{i=2^k}^{\infty} \tilde{B}^i \tilde{F} \tilde{A}^i = \tilde{B}^{2^k} \tilde{\mathcal{U}}\_\infty \tilde{A}^{2^k}$, and then the inequality (15) holds true. To ensure that the DS iteration converges as fast as possible, one needs to select appropriate values of $\mu$ and $\nu$ to minimize the convergence rate.

We first consider $\rho(\tilde{A})$. Since all eigenvalues of $A$ are the same and equal to $c\_0$, let $Av = c\_0 v$, where $v$ is a corresponding eigenvector. Then, we have

$$\tilde{A}v = (A - \mu I\_{n\_t})(A + \nu I\_{n\_t})^{-1}v = \frac{c\_0 - \mu}{c\_0 + \nu}v,$$

which implies that

$$
\rho(\tilde{A}) = \frac{\mu - c\_0}{c\_0 + \nu}.
$$

On the other hand, let $B = sI - N\_B$, where $s > 0$ and $N\_B \ge 0$ is irreducible. According to the Perron–Frobenius theorem [19] (Thm. 2.7), there exists a positive vector $u$ such that $N\_B u = \rho(N\_B)u$. Therefore, the minimal eigenvalue of $B$, i.e., $\lambda\_{\min}^B = s - \rho(N\_B)$, is positive, and this leads to

$$
\tilde{B}u = (B - \nu I\_{n\_s-1})(B + \mu I\_{n\_s-1})^{-1}u = \frac{\lambda\_{\min}^B - \nu}{\lambda\_{\min}^B + \mu}\,u.
$$

Since $\tilde{B} = (B - \nu I\_{n\_s-1})(B + \mu I\_{n\_s-1})^{-1}$ is non-positive and irreducible for $\nu \ge \max\_i B\_{ii}$ and $\mu > 0$ (see Lemma 2), it follows from the Perron–Frobenius theorem again that

$$\rho(\tilde{B}) = \rho(-\tilde{B}) = \frac{\nu - \lambda\_{\min}^B}{\lambda\_{\min}^B + \mu}.$$

Construct the functions

$$g\_1(\mu) = \frac{\mu - c\_0}{\lambda\_{\min}^B + \mu} \quad \text{and} \quad g\_2(\nu) = \frac{\nu - \lambda\_{\min}^B}{c\_0 + \nu}.$$

They are monotonically increasing with respect to $\mu \in [\mu^\*, +\infty)$ and $\nu \in [\nu^\*, +\infty)$, respectively. Then, $\rho(\tilde{A})\rho(\tilde{B}) = g\_1(\mu)\cdot g\_2(\nu)$ achieves its minimal value at $(\mu^\*, \nu^\*)$.
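The monotonicity argument can be checked directly. With illustrative stand-in values for $c\_0$ and $\lambda\_{\min}^B$ (made up for this sketch), both $g\_1$ and $g\_2$ are increasing on the admissible half-lines, so their product is smallest at the left endpoints:

```python
import numpy as np

c0, lam_min = 2.0, 0.8                 # stand-ins for c_0 and lambda_min^B
g1 = lambda mu: (mu - c0) / (lam_min + mu)
g2 = lambda nu: (nu - lam_min) / (c0 + nu)

mus = np.linspace(c0, c0 + 20, 400)            # admissible range mu >= mu*
nus = np.linspace(lam_min, lam_min + 20, 400)  # admissible range nu >= nu*
print(np.all(np.diff(g1(mus)) > 0))    # True: g1 strictly increasing
print(np.all(np.diff(g2(nus)) > 0))    # True: g2 strictly increasing
```

Since both factors are non-negative and increasing on these ranges, the product $g\_1(\mu)g\_2(\nu)$ is minimized at the smallest admissible pair.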

For the large-scale Equation (12) with separable $F = F\_s F\_t^\top$, the DS method (14) can be further organized as the following low-ranked version [16,17] (Alg. 1)

$$\begin{cases} \tilde{\mathcal{U}}\_{k+1} = \tilde{G}\_{k+1}\tilde{T}\_{k+1}\tilde{H}\_{k+1}^\top, \quad \tilde{T}\_{k+1} = \begin{bmatrix} \tilde{T}\_k & 0 \\ 0 & \tilde{T}\_k \end{bmatrix}, \\ \tilde{G}\_{k+1} = [\tilde{G}\_k, \ \tilde{B}\_k \tilde{G}\_k], \quad \tilde{H}\_{k+1} = [\tilde{H}\_k, \ \tilde{A}\_k^\top \tilde{H}\_k], \end{cases} \tag{17}$$

with $\tilde{G}\_0 = F\_s$, $\tilde{H}\_0 = F\_t$, $\tilde{T}\_0 = I$. Then, the solution $\tilde{\mathcal{U}}\_\infty$ is numerically low-ranked. If the FFT is still used for the Toeplitz system and the number of DS iterations is not large (in the sense that $2^k = O(1)$), the entire complexity is expected to be reduced to $O((n\_s-1)\log(n\_s-1))$.
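A minimal dense sketch of the low-ranked recursion (17) — omitting the truncation/compression step used in practice, and with made-up contractive matrices in place of $\tilde{A}$ and $\tilde{B}$ — illustrates that the factored iterate reproduces the DS iterate and solves the Stein equation:

```python
import numpy as np

def lowrank_ds(At, Bt, Fs, Ft, iters=6):
    """Low-ranked DS recursion (17): keeps factors (G, T, H) with
    U_k = G_k T_k H_k^T; no column truncation is performed here."""
    G, H, T = Fs.copy(), Ft.copy(), np.eye(Fs.shape[1])
    for _ in range(iters):
        G = np.hstack([G, Bt @ G])     # G_{k+1} = [G_k, B_k G_k]
        H = np.hstack([H, At.T @ H])   # H_{k+1} = [H_k, A_k^T H_k]
        T = np.kron(np.eye(2), T)      # T_{k+1} = blkdiag(T_k, T_k)
        At, Bt = At @ At, Bt @ Bt      # doubling step
    return G @ T @ H.T

rng = np.random.default_rng(2)
n, m, r = 8, 7, 2
At0 = 0.2 * rng.random((m, m))         # illustrative contractive matrices
Bt0 = 0.2 * rng.random((n, n))
Fs, Ft = rng.random((n, r)), rng.random((m, r))
U = lowrank_ds(At0, Bt0, Fs, Ft)
print(np.allclose(Bt0 @ U @ At0 - U + Fs @ Ft.T, 0))   # True
```

In practice the column counts of $\tilde{G}\_k$ and $\tilde{H}\_k$ double each sweep, which is exactly why the truncation and compression via economic QR described in Section 4 is needed.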

#### **4. Numerical Examples**

In this section, we illustrate the effectiveness of the low-ranked DS method in computing the solution of large-scale GFDEs discretized with the implicit difference scheme. We compare the DS method with the Bi-CGSTAB solver [18] (Alg. 3.6.3) (referred to as "ST"), which is used to solve the sequence of Toeplitz linear systems (9) at each temporal node. The same solver is also used to construct $\tilde{A}$ and $\tilde{B}$ in (13) for the DS method. Additionally, both algorithms employ the Gohberg–Semencul formula [24,26] to solve their respective Toeplitz systems, and each algorithm terminates when the relative residual of the Toeplitz system is less than $10^{-14}$.

In the DS method, we use the technique of truncation and compression via the economic QR decomposition [25] (Section 2.2) (see also [16,17], with a tolerance of $10^{-30}$) to reduce the number of columns of $\tilde{G}\_k$ and $\tilde{H}\_k$ as much as possible. We also set the upper bound on the truncated maximal number of columns to $10^3$. The DS method stops either when the number of iterations exceeds six or when the low-ranked residual form [16,17] of the Stein Equation (13) is less than $10^{-11}$. We implemented both algorithms in MATLAB 2019a on a 64-bit PC with a 3.0 GHz Intel Core i5 processor and 32 GB of RAM, with machine error eps = $2.22 \times 10^{-16}$.

To assess the accuracy of both algorithms, we calculate their errors as

$$\mathrm{Err} = \max\_{0 \le j \le n\_t} \|u^j - u^j\_{\mathrm{true}}\|\_\infty$$

and record the convergent rate as

$$\mathrm{Rate} = \log\_{h\_1/h\_2}\big(\mathrm{Err}\_{h\_1}/\mathrm{Err}\_{h\_2}\big),$$

where $h\_1$ and $h\_2$ are the step-lengths of two consecutive mesh refinements.
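For concreteness, the rate computation reads as follows (the error values here are made up for illustration and are not taken from Table 1):

```python
import numpy as np

# Hypothetical errors measured at two consecutive step-lengths h1 > h2.
h1, h2 = 1 / 1024, 1 / 2048
err_h1, err_h2 = 3.2e-8, 8.1e-9            # illustrative values only
rate = np.log(err_h1 / err_h2) / np.log(h1 / h2)
print(round(rate, 2))                      # 1.98, i.e. close to second order
```

A rate near 2 would indicate second-order convergence of the scheme with respect to the step-length.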

**Example 1.** *Consider the GFDE* (1) *with the diffusion coefficient κ* = 1 *and p* = 1*. The weight function is $\lambda(t) = e^{-t}$, and the forcing function on the right-hand side of the GFDE is*

$$f(x,t) = \frac{-t^{1-\gamma}e^{-t}}{\Gamma(2-\gamma)}x^3(1-x) - \frac{6e^{-t}}{\Gamma(4-a)}x^{3-a} + \frac{24e^{-t}}{\Gamma(5-a)}x^{4-a}.$$

*The initial-boundary value conditions for this problem are $u(x,0) = x^3(1-x)$ and $u(0,t) = u(1,t) = 0$. It is not difficult to see that the exact solution of this problem is $u(x,t) = e^{-t}x^3(1-x)$ (see Appendix A).*

To test the numerical performance of the two algorithms, we take *γ* = 0.2 and test values of *α* at 1.1, 1.5, and 1.9. The obtained results are listed in Table 1, where the column labeled "*h*" indicates the spatial step-length (corresponding to the number of nodes *ns*). The columns "CPU\_ST" and "CPU\_DS" report the elapsed CPU time of the sequence solver with the Bi-CGSTAB method (abbreviated as "ST") and of our DS method, respectively. The column "It." next to "CPU\_DS" reports the required number of DS iterations. The columns labeled "Err\_ST" and "Err\_DS" report the errors of the ST method and our DS method, respectively, after termination. As both algorithms reach similar error levels, we only report the convergence rate of the DS method at various scales in the "Rate\_DS" column.


**Table 1.** Numerical performances of two different methods when *γ* = 0.2 in Example 1.

We can see from Table 1 that both methods efficiently compute the solution for various values of *α* with errors ranging from *O*(10−7) to *O*(10−10). Our DS method requires only 4–5 iterations for middle-scale problems (*ns* = 2048, 4098, 8192) and 5–6 iterations for large-scale problems (*ns* = 16,384 to 32,768) to achieve the prescribed residual level of the Stein equation, resulting in similar error levels as the ST method. Although the Rate\_DS column shows that the convergence rate gradually decreases with increasing scale, the DS method still requires less CPU time than the ST method for all different *α*, especially at *ns* = 32,768. At this scale, the DS method takes only 1/14 of the CPU time required by the ST method to obtain a solution of almost the same order *O*(10−10).

We also carried out further numerical experiments to validate the efficacy of our DS method, with the aim of comparing the error surfaces of the DS and ST methods and observing any differences in their respective performances. Figure 1 presents the results. It displays the error surfaces of the DS and ST methods, labeled "D" and "S" respectively, at *γ* = 0.3 and *ns* = 2048. The variables *t* and *s* in Figure 1 represent the discretized temporal and spatial values, respectively, within the interval [0, 1]. The figure shows that the error surface of the DS method encompasses that of the ST method, but the errors of both methods are at the level of *O*(10−8), which is relatively low. This result reinforces the effectiveness and reliability of the proposed DS method.


**Figure 1.** Error surfaces calculated by the DS method (D) and the ST method (S) at *ns* = 2048 and *γ* = 0.3 in Example 1. The subplots from top to bottom correspond to different *α*.

We subsequently increase the value of *γ* for different *α* and compare the numerical performance of the DS method with that of the ST method. The obtained results are listed in Table 2. We can see that for different *α*, the DS method requires 6 iterations to reach the prescribed residual level. For middle-scale problems (*ns* = 2048, 4098), the ST method is faster than the DS method in terms of CPU time. However, with increasing scale (from *ns* = 8192 to 32,768), the DS method gradually becomes faster than the ST method, albeit sacrificing some accuracy. In particular, at the scale of *ns* = 32,768, the DS method takes only about 1/14 of the CPU time required by the ST method to obtain a solution of the order *O*(10−10), indicating that the DS method is more suitable for dealing with large-scale problems. Furthermore, we conducted numerical experiments with a value of *γ* = 0.9. The results, as shown in Table 3, indicate that while the DS method may sacrifice some accuracy, it still outperforms the ST method in terms of CPU time when *ns* is no less than 8192.


**Table 2.** Numerical performances of two different methods when *γ* = 0.5 in Example 1.

**Table 3.** Numerical performances of two different methods when *γ* = 0.9 in Example 1.


**Example 2.** *Consider the GFDE* (1) *with the diffusion coefficient κ* = 5 *and the weight function $\lambda(t) = e^{-bt}$ with $b \ge 0$ [11]. The source term is*

$$\begin{array}{rcl}f(x,t) &=& \frac{10t^{3-\gamma}e^{-bt}}{\Gamma(4-\gamma)}x^2(1-x)^2 - 25g(t)\Big(\frac{\Gamma(3)}{\Gamma(3-a)}[px^{2-a} + (1-p)(1-x)^{2-a}]\\ &&-\,2\frac{\Gamma(4)}{\Gamma(4-a)}[px^{3-a} + (1-p)(1-x)^{3-a}] + \frac{\Gamma(5)}{\Gamma(5-a)}[px^{4-a} + (1-p)(1-x)^{4-a}]\Big).\end{array}$$

The corresponding initial-boundary value conditions are $u(x,0) = 5g(0)x^2(1-x)^2$ and $u(0,t) = u(1,t) = 0$. It can be verified that the exact solution is $u(x,t) = 5g(t)x^2(1-x)^2$ (see Appendix A) with

$$g(t) = 1 + \frac{2 - (2 + 2bt + b^2t^2)e^{-bt}}{b^3}.$$
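The closed form of *g* is what makes this manufactured solution work: its time derivative collapses to $t^2 e^{-bt}$, which is exactly the factor the Caputo integral in Appendix A requires. A quick symbolic check (using sympy) confirms this:

```python
import sympy as sp

b, t = sp.symbols('b t', positive=True)
g = 1 + (2 - (2 + 2*b*t + b**2*t**2) * sp.exp(-b*t)) / b**3
# The derivative of u = 5 g(t) x^2 (1-x)^2 with respect to t needs g'(t).
print(sp.simplify(sp.diff(g, t) - t**2 * sp.exp(-b*t)) == 0)   # True
```

This is the identity used implicitly in the second line of the Caputo derivative computation for Example A2.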

We chose the values *p* = 0.4 and *γ* = 0.2 and implemented both algorithms for the discretized Stein equation from Example 2. The obtained results are displayed in Table 4, which demonstrates that both methods are capable of efficiently computing the solution with errors ranging from approximately *O*(10−7) to *O*(10−9) for different values of *α*. With the exception of the case *h* = 1/2048 and *α* = 1.1, our DS method required six iterations to achieve the desired residual level. In addition, for *α* = 1.1, the DS method was less time-consuming than the ST method while achieving almost the same level of accuracy. For the cases *α* = 1.5 and 1.9, our DS method took more CPU time for the middle-scale problems with *ns* = 2048 and 4098. However, as the scale increased, the DS method required less CPU time than the ST method, with only a slight decrease in accuracy.


**Table 4.** Numerical performances of two different methods when *γ* = 0.2, *p* = 0.4 in Example 2.

Additionally, we generated error surfaces for both methods and denoted them as "D" for the DS method and "S" for the ST method. These surfaces were plotted at *γ* = 0.4 and *ns* = 2048, and the results are shown in Figure 2. The figure indicates that the error surface of the DS method covers that of the ST method, while both methods have similar error levels of approximately *O*(10−7).


**Figure 2.** Error surfaces calculated by the DS method (D) and the ST method (S) at *ns* = 2048 and *γ* = 0.4 in Example 2. The subplots from top to bottom correspond to different *α*.

We also raised the parameter *γ* to 0.5 and executed both algorithms once more. Table 5 illustrates the results, indicating that our DS method produces error levels similar to those of the ST method. Moreover, when *ns* = 8192, our DS method performs better than the ST method in terms of CPU time. This tendency becomes increasingly apparent as the scale increases, demonstrating that the complexity of the DS method is roughly *O*((*ns* − 1)log(*ns* − 1)) and that it is more suitable for larger-scale problems. In Table 6, we conducted further numerical experiments with *γ* = 0.9 and similarly found that while the DS method may compromise some accuracy, it still outperforms the ST method in terms of CPU time when *ns* > 8192.


**Table 5.** Numerical performances of two different methods when *γ* = 0.5, *p* = 0.4 in Example 2.

**Table 6.** Numerical performances of two different methods when *γ* = 0.9, *p* = 0.4 in Example 2.


#### **5. Conclusions**

We have presented a doubling Smith iteration method for solving discretized Stein equations arising from a class of generalized fractional diffusion equations (GFDEs). The method takes advantage of the implicit difference scheme, resulting in coefficient matrices with nonsingular *M*-matrix structure. The two optimal parameters are then determined based on this property, and the separable forcing term of the GFDE contributes to the low-ranked version of the doubling Smith method. Numerical experiments demonstrate that our method outperforms the ST method with the Bi-CGSTAB solver in terms of CPU time, particularly as the scale increases, although it sacrifices some accuracy. However, our approach is limited to GFDEs with the case of *λ*(*t*) = 1; it may not be appropriate for solving other types of fractional diffusion equations. Additionally, if the coefficient matrices do not have nonsingular *M*-matrix structure, the optimal parameters in the presented doubling Smith method may not be determined. As future work, we plan to explore the applicability of the low-ranked doubling Smith methods for solving other large-scale GFDEs. In addition, it should be noted that the GFDE discussed in this paper is limited to zero boundary conditions. It is important to acknowledge that previous studies have shown that the shifted Grünwald–Letnikov formula requires modification when dealing

with absorbing boundary conditions. Researchers have explored this area in depth, as evidenced by works such as [27–29]. As such, it would be beneficial for future research to focus on extending the technique of the classical method of manufactured solutions [30–33] to convert non-zero boundary conditions into zero ones. This would allow the GFDE to be applied to a wider range of problems, improving its practicality and usefulness in real-world scenarios.

**Author Contributions:** Conceptualization, B.Y.; methodology, B.Y.; software, X.L.; validation, N.D.; and formal analysis, X.L. All authors have read and agreed to the final version of this manuscript.

**Funding:** This work was supported partly by the NSF of Hunan Province (2021JJ50032, 2023JJ50040) and the foundation of Education Department of Hunan Province (HNJG-2021-0129).

**Acknowledgments:** We are grateful to the academic editor and three anonymous referees for their useful comments and suggestions, which have significantly enhanced the quality of the original paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Validation of the True Solutions in Examples 1 and 2**

**Example A1.** *For the solution $u(x,t) = e^{-t}x^3(1-x)$, the γ-order generalized Caputo fractional derivative on the left of GFDE* (1) *is*

$$\begin{aligned} \ \_0^C D\_t^{\gamma, \lambda(t)} u(x, t) &= \quad \frac{1}{\Gamma(1 - \gamma)} \int\_0^t \frac{\lambda(t - \eta)}{(t - \eta)^\gamma} \frac{\partial u(x, \eta)}{\partial \eta} d\eta \\ &= \quad \frac{-1}{\Gamma(1 - \gamma)} \int\_0^t \frac{e^{-(t - \eta)}}{(t - \eta)^\gamma} e^{-\eta} x^3 (1 - x) d\eta \\ &= \quad \frac{-t^{1 - \gamma} e^{-t}}{\Gamma(2 - \gamma)} x^3 (1 - x) .\end{aligned}$$
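This closed form can be verified numerically at a sample point. The sketch below (parameter values chosen arbitrarily) evaluates the Caputo integral with scipy's adaptive quadrature, which copes with the weak endpoint singularity:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

t, gam, x = 0.7, 0.4, 0.5                 # arbitrary sample point, gamma in (0,1)
# Caputo integrand: lambda(t-eta) (t-eta)^(-gamma) du/deta with u = e^{-t} x^3 (1-x)
integrand = lambda eta: np.exp(-(t - eta)) * (t - eta)**(-gam) \
                        * (-np.exp(-eta)) * x**3 * (1 - x)
val, _ = quad(integrand, 0, t)            # quad handles the integrable singularity
lhs = val / gamma(1 - gam)
rhs = -t**(1 - gam) * np.exp(-t) / gamma(2 - gam) * x**3 * (1 - x)
print(np.isclose(lhs, rhs))               # True
```

The exponentials inside the integral combine to $e^{-t}$, leaving $\int\_0^t (t-\eta)^{-\gamma}d\eta = t^{1-\gamma}/(1-\gamma)$, which with $(1-\gamma)\Gamma(1-\gamma) = \Gamma(2-\gamma)$ gives the stated result.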

The *α*-order Riemann–Liouville (R-L) fractional derivative on the right of GFDE (1) is

$$\begin{split} {}\_0D\_x^{a}u(x,t) &= \frac{1}{\Gamma(2-a)}\frac{\partial^2}{\partial x^2}\int\_0^x \frac{u(\xi,t)}{(x-\xi)^{a-1}}\,d\xi \\ &= \frac{e^{-t}}{\Gamma(2-a)}\frac{\partial^2}{\partial x^2}\int\_0^x \frac{\xi^3 - \xi^4}{(x-\xi)^{a-1}}\,d\xi \\ &= \frac{e^{-t}}{\Gamma(2-a)}\frac{\partial^2}{\partial x^2}\left(\frac{6}{\prod\_{i=2}^{5}(i-a)}x^{5-a} - \frac{24}{\prod\_{i=2}^{6}(i-a)}x^{6-a}\right) \\ &= \frac{6e^{-t}}{\Gamma(4-a)}x^{3-a} - \frac{24e^{-t}}{\Gamma(5-a)}x^{4-a}. \end{split}$$

Then, the GFDE (1) with *κ* = 1 , *p* = 1, and *λ*(*t*) = *e*−*<sup>t</sup>* holds true when the forcing function is

$$f(x,t) = \frac{-t^{1-\gamma}e^{-t}}{\Gamma(2-\gamma)}x^3(1-x) - \frac{6e^{-t}}{\Gamma(4-a)}x^{3-a} + \frac{24e^{-t}}{\Gamma(5-a)}x^{4-a}.$$

**Example A2.** *For the solution $u(x,t) = 5g(t)x^2(1-x)^2$, the γ-order generalized Caputo fractional derivative on the left of GFDE* (1) *is*

$$\begin{split} \, \_0^C D\_t^{\gamma, \lambda(t)} u(x, t) &= \quad \frac{1}{\Gamma(1 - \gamma)} \int\_0^t \frac{\lambda(t - \eta)}{(t - \eta)^{\gamma}} \frac{\partial u(x, \eta)}{\partial \eta} d\eta \\ &= \quad \frac{1}{\Gamma(1 - \gamma)} \int\_0^t (t - \eta)^{-\gamma} e^{-b(t - \eta)} [5x^2 (1 - x)^2 \eta^2 e^{-b\eta}] d\eta \\ &= \quad \frac{10t^{3 - \gamma} e^{-bt}}{\Gamma(4 - \gamma)} x^2 (1 - x)^2. \end{split}$$

The *α*-order Riemann–Liouville (R-L) fractional derivatives on the right of GFDE (1) are

$$\begin{split} {}\_0D\_x^{a}u(x,t) &= \frac{1}{\Gamma(2-a)}\frac{\partial^2}{\partial x^2}\int\_0^x \frac{u(\xi,t)}{(x-\xi)^{a-1}}\,d\xi \\ &= \frac{5g(t)}{\Gamma(2-a)}\frac{\partial^2}{\partial x^2}\int\_0^x \frac{\xi^2(1-\xi)^2}{(x-\xi)^{a-1}}\,d\xi \\ &= \frac{5g(t)}{\Gamma(2-a)}\frac{\partial^2}{\partial x^2}\left(\frac{2}{\prod\_{i=2}^{4}(i-a)}x^{4-a} - \frac{12}{\prod\_{i=2}^{5}(i-a)}x^{5-a} + \frac{24}{\prod\_{i=2}^{6}(i-a)}x^{6-a}\right) \\ &= 5g(t)\Big[\frac{\Gamma(3)}{\Gamma(3-a)}x^{2-a} - \frac{2\Gamma(4)}{\Gamma(4-a)}x^{3-a} + \frac{\Gamma(5)}{\Gamma(5-a)}x^{4-a}\Big] \end{split}$$

and


$$\begin{split} {}\_xD\_1^{a}u(x,t) &= \frac{1}{\Gamma(2-a)}\frac{\partial^2}{\partial x^2}\int\_x^1 \frac{u(\xi,t)}{(\xi-x)^{a-1}}\,d\xi \\ &= \frac{5g(t)}{\Gamma(2-a)}\frac{\partial^2}{\partial x^2}\int\_x^1 \frac{\xi^2(1-\xi)^2}{(\xi-x)^{a-1}}\,d\xi \\ &= \frac{5g(t)}{\Gamma(2-a)}\frac{\partial^2}{\partial x^2}\left(\frac{2}{\prod\_{i=2}^{4}(i-a)}(1-x)^{4-a} - \frac{12}{\prod\_{i=2}^{5}(i-a)}(1-x)^{5-a}\right. \\ &\qquad \left. + \frac{24}{\prod\_{i=2}^{6}(i-a)}(1-x)^{6-a}\right) \\ &= 5g(t)\Big[\frac{\Gamma(3)}{\Gamma(3-a)}(1-x)^{2-a} - \frac{2\Gamma(4)}{\Gamma(4-a)}(1-x)^{3-a} + \frac{\Gamma(5)}{\Gamma(5-a)}(1-x)^{4-a}\Big]. \end{split}$$

Then, the GFDE (1) with *κ* = 5 and $\lambda(t) = e^{-bt}$ holds true when the forcing function is

$$\begin{split} f(\mathbf{x},t) &= \quad \frac{10t^{3-\gamma}e^{-bt}}{\Gamma(4-\gamma)}\mathbf{x}^2(1-\mathbf{x})^2 - 2\mathbf{5}g(t)\Big\{\frac{\Gamma(3)}{\Gamma(3-a)}[p\mathbf{x}^{2-a} + (1-p)(1-\mathbf{x})^{2-a}] \\ &- 2\frac{\Gamma(4)}{\Gamma(4-a)}[p\mathbf{x}^{3-a} + (1-p)(1-\mathbf{x})^{3-a}] + \frac{\Gamma(5)}{\Gamma(5-a)}[p\mathbf{x}^{4-a} + (1-p)(1-\mathbf{x})^{4-a}]\Big\}. \end{split}$$

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Local Error Estimate of an L1-Finite Difference Scheme for the Multiterm Two-Dimensional Time-Fractional Reaction–Diffusion Equation with Robin Boundary Conditions**

**Jian Hou 1, Xiangyun Meng 1, Jingjia Wang 1, Yongsheng Han <sup>2</sup> and Yongguang Yu 1,\***


**Abstract:** In this paper, a numerical method for a multiterm time-fractional reaction–diffusion equation with classical Robin boundary conditions is considered. The fully discrete scheme is constructed with the L1-finite difference method, which entails using the L1 scheme on graded meshes for the temporal discretisation of each Caputo fractional derivative and using the finite difference method on uniform meshes for spatial discretisation. By dealing carefully with the discretisation of the Robin boundary conditions, a sharp error estimate at each time level is proven. Additionally, numerical results that confirm the sharpness of the error estimates are presented.

**Keywords:** multi-term time-fractional; local error analysis; Robin boundary conditions

#### **1. Introduction**

In recent years, fractional calculus, which is considered to be a generalisation of classical derivatives and integrals to non-integer order, has become a powerful modelling tool that is more flexible and precise for describing physical problems than integer calculus. The fractional system has been widely used in engineering, physical science, chemical science, biology, and a variety of other subjects, for which it has gradually become an essential component. For more details on fractional calculus, see [1–5].

At present, considering a numerical solution of the initial-boundary value problem with only a single time-fractional derivative term of order *α* ∈ (0, 1), as in [6], is not general enough. On this basis, more attention is being paid to the summation form of time-fractional derivatives with the orders

$$0 < \alpha\_L < \dots < \alpha\_2 < \alpha\_1 < 1,$$

where *L* is a positive integer. The typical solutions of such problems have a key feature that must be considered (as in [6]): a weak singularity at the initial time, which significantly complicates the analysis. Many time-fractional initial-boundary value problems with Robin boundary conditions are widely used in research fields such as the heat equation and biomathematics [7–9]. That is the main reason why this type of boundary condition is considered in this paper.

The spatial domain of the problem we study is $\Omega := (0,1)^2$ with closure $\bar{\Omega} = [0,1]^2$. Define the boundary as $\partial\Omega = \bar{\Omega}\setminus\Omega$. Set $Q = \Omega\times(0,T]$ and $\bar{Q} = \bar{\Omega}\times[0,T]$, where $T > 0$ is fixed.

Based on the above description, the purpose of this paper is to solve the following multiterm time-fractional reaction–diffusion problem numerically:

$$\sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} u(x, y, t) - \Delta u(x, y, t) + c(x, y)\,u(x, y, t) = F(x, y, t) \text{ for } (x, y, t) \in Q, \tag{1a}$$

**Citation:** Hou, J.; Meng, X.; Wang, J.; Han, Y.; Yu, Y. Local Error Estimate of an L1-Finite Difference Scheme for the Multiterm Two-Dimensional Time-Fractional Reaction–Diffusion Equation with Robin Boundary Conditions. *Fractal Fract.* **2023**, *7*, 453. https://doi.org/10.3390/ fractalfract7060453

Academic Editor: Riccardo Caponetto

Received: 27 April 2023 Revised: 29 May 2023 Accepted: 29 May 2023 Published: 1 June 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

with the initial condition

$$
u(x, y, 0) = u\_0(x, y) \text{ for } (x, y) \in \Omega,\tag{1b}
$$

and Robin boundary conditions

$$
\sigma u(x, y, t) + \frac{\partial}{\partial n} u(x, y, t) = g(x, y, t) \text{ for } (x, y, t) \in \partial\Omega, \quad 0 < t \le T, \tag{1c}
$$

where $q\_l$ and $\sigma$ are given positive constants, $g$ and $u\_0$ are sufficiently smooth in their respective domains, $F \in C^1(\bar{Q})$, and $c \in C^1(\bar{Q})$ with $c \ge c\_0 > 0$; $u\_0 \in C^1(\bar{\Omega})$ with $\sigma u\_0 + (\partial/\partial n)u\_0 = 0$ on $\partial\Omega$. Here, $D\_t^{\alpha\_l}u(x,y,t)$ denotes the temporal Caputo fractional derivative of order $\alpha\_l$ of $u$, defined by

$$D\_t^{\alpha\_l} u(x, y, t) := \frac{1}{\Gamma(1 - \alpha\_l)} \int\_0^t (t - s)^{-\alpha\_l} \frac{\partial}{\partial s} u(x, y, s) \, ds.$$

By ([10], Lemma 2.2) and ([11], Section 6), (1) has a unique solution that satisfies the following regularity bounds, which exhibit a weak initial singularity:

$$\left| \frac{\partial^{\eta}}{\partial x^{\eta}} u(x, y, t) \right| + \left| \frac{\partial^{\eta}}{\partial y^{\eta}} u(x, y, t) \right| \le C \quad \text{for } \eta = 0, 1, 2, 3, 4,\tag{2}$$

$$\left| \frac{\partial^{\nu}}{\partial t^{\nu}} u(x, y, t) \right| \le C(1 + t^{\alpha\_1 - \nu}) \quad \text{for } \nu = 0, 1, 2,\tag{3}$$

where *C* is some fixed constant.

In recent years, the introduction of the classic L1 scheme to the discrete Caputo derivative has received widespread attention [12,13]. To recover the convergence rate, researchers have used the L1 scheme on graded meshes [6,11,14]. Analysis of the local convergence rate is mathematically interesting [15,16], as the local convergence rate on every time node is sharper than the global one. This method has wide applicability. When considering practical problems such as [17–19], it can be combined with the finite difference and finite element methods in the space direction [6,20].

To avoid discretisation errors at the boundary, Dirichlet boundary conditions are usually considered in local error analyses, because the boundary values are known. For Robin boundary conditions, a suitable boundary discretisation method is needed to ensure global accuracy. One of the novelties of this paper is the mitigation of the difficulty caused by Robin boundary conditions in the discretisation process. At present, many papers consider the global convergence of time-fractional problems with Robin boundary conditions [10,21,22], but no local-in-time error analysis for multiterm time-fractional problems with Robin boundary conditions has been carried out. This is our motivation for completing this paper. The highlights of this paper can be summarised as follows:


The outline of the paper is as follows. In Section 2, the L1-finite difference method that will be used to solve (1) is described. In Section 3, the local error of the L1-finite difference method is analysed. Then, in Section 4, numerical examples are given to verify the local error results.

Notation: throughout the paper, *C* denotes a generic constant that may take a different value each time it appears; it depends on the data of problem (1) but is independent of (*x*, *y*, *t*) and of any mesh.

#### **2. L1-Finite Difference Method for** (1)

We shall use the L1-finite difference method to construct the fully discrete scheme for problem (1). At the boundary and inner points, discrete schemes of matching accuracy are constructed.

Let $M$ and $N$ be positive integers. Set $t\_m = T(m/M)^r$ for $m = 0, 1, \dots, M$, and denote the time step $\tau\_m = t\_m - t\_{m-1}$ for $m = 1, 2, \dots, M$. The mesh grading $r \ge 1$ will be chosen later. Set the spatial step $h = 1/N$. We divide $\bar{\Omega}$ into $N \times N$ cells; the mesh points are $(x\_i, y\_j)$ with $x\_i = ih$ and $y\_j = jh$, where $0 \le i, j \le N$. Let

$$
\bar{\Omega}\_h = \{ (x\_i, y\_j) \mid 0 \le i, j \le N \}, \quad \Omega\_h = \bar{\Omega}\_h \cap \Omega, \quad \partial \Omega\_h = \bar{\Omega}\_h \cap \partial \Omega.
$$

Let $(i, j) \in \bar{\Omega}\_h$ represent $(x\_i, y\_j) \in \bar{\Omega}\_h$ to simplify the notation. Similarly, let $(i, j) \in \Omega\_h$ and $(i, j) \in \partial\Omega\_h$ represent $(x\_i, y\_j) \in \Omega\_h$ and $(x\_i, y\_j) \in \partial\Omega\_h$, respectively. Thus, our mesh is

$$\{ (x\_i, y\_j, t\_m) : i, j = 0, 1, \ldots, N \text{ and } m = 1, 2, \ldots, M \}.$$
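As a concrete illustration (not taken from the authors' code), the graded time mesh $t\_m = T(m/M)^r$ and the uniform spatial grid can be built in a few lines; the values of $T$, $M$, $N$ and $r$ below are example choices only.

```python
import numpy as np

# Illustrative sketch: graded time mesh t_m = T*(m/M)^r and uniform space grid.
# T, M, N, r are example values, not taken from the paper.
T, M, N, r = 1.0, 8, 8, 2.0
t = T * (np.arange(M + 1) / M) ** r   # t_0 = 0 < t_1 < ... < t_M = T
tau = np.diff(t)                      # time steps tau_m = t_m - t_{m-1}
h = 1.0 / N                           # spatial step
x = y = np.linspace(0.0, 1.0, N + 1)  # x_i = i*h, y_j = j*h
```

For $r > 1$ the time steps grow monotonically, so the mesh is concentrated near $t = 0$, where solutions of problems like (1) typically have a weak singularity.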

At each mesh point $(x\_i, y\_j, t\_m)$, the computed approximation to the analytical solution $u$ will be denoted by $u\_{i,j}^m$. Define the grid functions

$$\begin{aligned} c\_{i,j} &= c(x\_i, y\_j), \quad (c\_1)\_{i,j} = \frac{\partial}{\partial x} c(x\_i, y\_j), \quad (c\_2)\_{i,j} = \frac{\partial}{\partial y} c(x\_i, y\_j),\\ F\_{i,j}^{m} &= F(x\_i, y\_j, t\_m), \quad (F\_1)\_{i,j}^{m} = \frac{\partial}{\partial x} F(x\_i, y\_j, t\_m), \quad (F\_2)\_{i,j}^{m} = \frac{\partial}{\partial y} F(x\_i, y\_j, t\_m),\end{aligned}$$

where $(i, j) \in \bar{\Omega}\_h$ and $m = 1, 2, \dots, M$.

The Caputo fractional derivative $D\_t^{\alpha} u$ at $t = t\_m$ can be expressed as

$$D\_t^{\alpha} u(x, y, t\_m) := \frac{1}{\Gamma(1-\alpha)} \sum\_{k=0}^{m-1} \int\_{s=t\_k}^{t\_{k+1}} (t\_m - s)^{-\alpha} \frac{\partial}{\partial s} u(x, y, s) \, ds.$$

The L1 scheme approximates the Caputo fractional derivative and yields the discretisation of each time-fractional term $q\_l D\_t^{\alpha\_l} u(x\_i, y\_j, t\_m)$:

$$\begin{split} q\_l D\_M^{\alpha\_l} u\_{i,j}^m &:= \frac{q\_l}{\Gamma(1-\alpha\_l)} \sum\_{k=0}^{m-1} \frac{u\_{i,j}^{k+1} - u\_{i,j}^k}{\tau\_{k+1}} \int\_{s=t\_k}^{t\_{k+1}} (t\_m - s)^{-\alpha\_l} ds \\ &= \frac{q\_l}{\Gamma(2-\alpha\_l)} \sum\_{k=0}^{m-1} \frac{u\_{i,j}^{k+1} - u\_{i,j}^k}{\tau\_{k+1}} \left[ (t\_m - t\_k)^{1-\alpha\_l} - (t\_m - t\_{k+1})^{1-\alpha\_l} \right] \end{split}$$

for *l* = 1, 2, . . . , *L*.
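The L1 formula above can be checked numerically. The sketch below (our own illustration with example parameters, not the paper's code) applies the L1 sum to $u(t) = t^2$ on a graded mesh and compares it with the exact Caputo derivative $D\_t^{\alpha} t^p = \frac{\Gamma(p+1)}{\Gamma(p+1-\alpha)} t^{p-\alpha}$.

```python
import numpy as np
from math import gamma

def l1_caputo(u_vals, t, m, alpha):
    """L1 approximation of D_t^alpha u at t = t[m] on an arbitrary mesh t:
    sum over k of (u^{k+1}-u^k)/tau_{k+1} times
    [(t_m - t_k)^{1-alpha} - (t_m - t_{k+1})^{1-alpha}], divided by Gamma(2-alpha)."""
    s = 0.0
    for k in range(m):
        tau_k1 = t[k + 1] - t[k]
        s += (u_vals[k + 1] - u_vals[k]) / tau_k1 * (
            (t[m] - t[k]) ** (1 - alpha) - (t[m] - t[k + 1]) ** (1 - alpha))
    return s / gamma(2 - alpha)

# Check against the exact Caputo derivative of u(t) = t^2 at t = 1.
alpha, r, M = 0.5, 2.0, 256
t = (np.arange(M + 1) / M) ** r
approx = l1_caputo(t ** 2, t, M, alpha)
exact = gamma(3) / gamma(3 - alpha)   # D_t^alpha t^2 at t = 1
```

For this smooth test function the L1 value agrees with the exact derivative to a few digits already at moderate $M$.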

In ([20], Lemma 4), the truncation error satisfies the estimate

$$\left| \sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} u\_{i,j}^m - \sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} u(x\_i, y\_j, t\_m) \right| \le C m^{-\min\{2-\alpha\_1,\, r(\alpha\_1+1)\}}.\tag{4}$$

For any grid function *v* = {*vi*,*j*|0 ≤ *i*, *j* ≤ *N*}, the spatial difference operators are defined as follows:

$$
\delta\_x v\_{i,j} = \frac{1}{h} (v\_{i,j} - v\_{i-1,j}), \quad \delta\_y v\_{i,j} = \frac{1}{h} (v\_{i,j} - v\_{i,j-1}), \quad 1 \le i, j \le N,
$$

and

$$\begin{aligned} \delta\_x^2 v\_{i,j} &= \frac{1}{h} (\delta\_x v\_{i+1,j} - \delta\_x v\_{i,j}), \quad 1 \le i \le N-1, \; 0 \le j \le N, \\ \delta\_y^2 v\_{i,j} &= \frac{1}{h} (\delta\_y v\_{i,j+1} - \delta\_y v\_{i,j}), \quad 0 \le i \le N, \; 1 \le j \le N-1. \end{aligned}$$
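In practice $\delta\_x^2$ is the familiar three-point second difference $(v\_{i+1,j} - 2v\_{i,j} + v\_{i-1,j})/h^2$. A minimal vectorised sketch (our own illustration), checked on a function for which the formula is exact:

```python
import numpy as np

def delta2_x(v, h):
    """Three-point second difference delta_x^2 v_{i,j} at interior rows 1..N-1:
    (v[i+1,j] - 2 v[i,j] + v[i-1,j]) / h^2, vectorised over the whole grid."""
    return (v[2:, :] - 2.0 * v[1:-1, :] + v[:-2, :]) / h ** 2

N = 64
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
v = (x ** 3)[:, None] * np.ones((1, N + 1))   # v(x, y) = x^3 on the grid
err = np.abs(delta2_x(v, h) - 6.0 * x[1:-1, None]).max()  # exact: (x^3)'' = 6x
```

For a cubic the fourth derivative vanishes, so the three-point formula is exact up to rounding; for general smooth $v$ the error is $O(h^2)$ as in the estimate (6) below.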

We discretise the initial condition (1b) in the standard way: for $(i, j) \in \bar{\Omega}\_h$, set $u\_{i,j}^0 = u\_0(x\_i, y\_j)$. In the following three subsections, we discretise (1a) and (1c) at inner points, boundary points, and corner points separately.

#### *2.1. Inner Points*

At $(i, j) \in \Omega\_h$, the diffusion term $\Delta u \equiv (\partial/\partial x)^2 u + (\partial/\partial y)^2 u$ in (1a) is approximated by the standard second-order discretisation

$$
\Delta u(x\_i, y\_j, t\_m) \approx \delta\_x^2 u\_{i,j}^m + \delta\_y^2 u\_{i,j}^m. \tag{5}
$$

Then, the truncation error has the following estimate

$$\left|\delta\_x^2 u(x\_i, y\_j, t\_m) - \frac{\partial^2}{\partial x^2} u(x\_i, y\_j, t\_m) \right| + \left|\delta\_y^2 u(x\_i, y\_j, t\_m) - \frac{\partial^2}{\partial y^2} u(x\_i, y\_j, t\_m) \right| \le Ch^2. \tag{6}$$

In summary, we can approximate (1) on (*i*, *j*) ∈ Ω*<sup>h</sup>* with the discrete problem

$$\sum\_{l=1}^{L} q\_l D\_{M}^{\alpha\_l} u\_{i,j}^m - \delta\_x^2 u\_{i,j}^m - \delta\_y^2 u\_{i,j}^m + c\_{i,j} u\_{i,j}^m = F\_{i,j}^m \qquad \text{for } 1 \le m \le M. \tag{7}$$
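At each time level, (7) is a sparse linear system for the unknowns $u\_{i,j}^m$. The sketch below shows this structure for the first time step only, simplified to one fractional term ($L = 1$, $q\_1 = 1$) and homogeneous Dirichlet data in place of the Robin closure of Sections 2.2 and 2.3; all parameter values are our own examples, not the paper's.

```python
import numpy as np
from math import gamma

# Hedged sketch (not the authors' code): one time level of scheme (7) at interior
# points, with L = 1, q_1 = 1 and zero Dirichlet data instead of Robin conditions.
N, alpha = 8, 0.4
r, M = (2 - alpha) / 0.9, 32
h = 1.0 / N
t = (np.arange(M + 1) / M) ** r

# 1D second-difference matrix and the 2D operator  -Laplacian + c  (c = 1 here).
D2 = (np.diag(-2.0 * np.ones(N - 1)) + np.diag(np.ones(N - 2), 1)
      + np.diag(np.ones(N - 2), -1)) / h ** 2
I = np.eye(N - 1)
A = -(np.kron(I, D2) + np.kron(D2, I)) + np.eye((N - 1) ** 2)

# For m = 1 the L1 sum has the single coefficient d_{1,1} = t_1^{-alpha}.
D1 = t[1] ** (-alpha) / gamma(2 - alpha)
u0 = np.zeros((N - 1) ** 2)                # discrete initial condition (zero here)
F = np.ones((N - 1) ** 2)                  # example source values F^1_{i,j}
u1 = np.linalg.solve(D1 * np.eye((N - 1) ** 2) + A, F + D1 * u0)
```

For $m > 1$ the right-hand side also collects the history terms of the L1 sum; the coefficient matrix stays the same at every level, which is what makes the scheme efficient in practice.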

*2.2. Boundary Points*

For brevity, we set

$$\begin{aligned} g\_1(x, y, t) &:= \sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} g(x, y, t) - \frac{\partial^2}{\partial y^2} g(x, y, t) + c(x, y) g(x, y, t) + \frac{\partial}{\partial x} F(x, y, t), \\ g\_2(x, y, t) &:= \sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} g(x, y, t) - \frac{\partial^2}{\partial x^2} g(x, y, t) + c(x, y) g(x, y, t) + \frac{\partial}{\partial y} F(x, y, t), \\ p\_1(x, y, t) &:= u(x, y, t) \frac{\partial}{\partial x} c(x, y) + \sigma c(x, y) u(x, y, t), \\ p\_2(x, y, t) &:= u(x, y, t) \frac{\partial}{\partial y} c(x, y) + \sigma c(x, y) u(x, y, t), \\ q\_1(x, y, t) &:= \frac{2}{h} (\delta\_x u(x, y, t) - \sigma u(x, y, t) + g(x, y, t)), \\ q\_2(x, y, t) &:= \frac{2}{h} (\delta\_y u(x, y, t) - \sigma u(x, y, t) + g(x, y, t)), \end{aligned}$$

and

$$\begin{aligned} \delta\_x^b u(x, y, t) := q\_1(x, y, t) - \frac{h}{3} \left( \sigma \sum\_{l=1}^{L} q\_l D\_{M}^{\alpha\_l} u(x, y, t) \right. \\ \left. \quad + p\_1(x, y, t) - \sigma \delta\_y^2 u(x, y, t) - g\_1(x, y, t) \right), \\ \delta\_y^b u(x, y, t) := q\_2(x, y, t) - \frac{h}{3} \left( \sigma \sum\_{l=1}^{L} q\_l D\_{M}^{\alpha\_l} u(x, y, t) \right. \\ \left. \quad + p\_2(x, y, t) - \sigma \delta\_x^2 u(x, y, t) - g\_2(x, y, t) \right). \end{aligned}$$

Then, define grid functions

$$\begin{split} & (g\_1)\_{i,j}^{m} = g\_1(x\_i, y\_j, t\_m), \quad \delta\_x^b u\_{i,j}^{m} := \delta\_x^b u(x\_i, y\_j, t\_m), \quad (p\_1)\_{i,j}^{m} = p\_1(x\_i, y\_j, t\_m), \\ & (g\_2)\_{i,j}^{m} = g\_2(x\_i, y\_j, t\_m), \quad \delta\_y^b u\_{i,j}^{m} := \delta\_y^b u(x\_i, y\_j, t\_m), \quad (p\_2)\_{i,j}^{m} = p\_2(x\_i, y\_j, t\_m), \\ & (q\_1)\_{i,j}^{m} = q\_1(x\_i, y\_j, t\_m), \quad (q\_2)\_{i,j}^{m} = q\_2(x\_i, y\_j, t\_m). \end{split}$$

**Lemma 1.** *Assume $u(\cdot, \cdot, t\_m) \in C^2(\bar{\Omega})$ for every $t\_m$; then, there exists a constant $C$ such that*

$$\left| \delta\_x^b u(0, y\_j, t\_m) - \frac{\partial^2}{\partial x^2} u(0, y\_j, t\_m) \right| \le Ch^2 + Chm^{-\min\{2 - \alpha\_1,\, r(\alpha\_1 + 1)\}}.\tag{8}$$

**Proof.** Consider the boundary points $(0, y\_j, t\_m)$ with $j = 1, \dots, N-1$ and $m = 1, \dots, M$. Equations (1a) and (1c) at the point $(0, y\_j, t\_m)$ read

$$\sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} u(0, y\_j, t\_m) - \Delta u(0, y\_j, t\_m) + c(0, y\_j) u(0, y\_j, t\_m) = F(0, y\_j, t\_m), \tag{9}$$

$$
\sigma u(0, y\_j, t\_m) - \frac{\partial}{\partial x} u(0, y\_j, t\_m) = g(0, y\_j, t\_m). \tag{10}
$$

By Taylor expansion of $u(h, y\_j, t\_m)$ at the point $(0, y\_j, t\_m)$,

$$u(h, y\_j, t\_m) = u(0, y\_j, t\_m) + h \frac{\partial}{\partial x} u(0, y\_j, t\_m) + \frac{h^2}{2} \frac{\partial^2}{\partial x^2} u(0, y\_j, t\_m) + \frac{h^3}{6} \frac{\partial^3}{\partial x^3} u(0, y\_j, t\_m) + Ch^4,$$

using (10), we have

$$\frac{\partial^2}{\partial x^2} u(0, y\_j, t\_m) = q\_1(0, y\_j, t\_m) - \frac{h}{3} \frac{\partial^3}{\partial x^3} u(0, y\_j, t\_m) - Ch^2. \tag{11}$$

Differentiating (9) with respect to $x$, $(\partial^3/\partial x^3)u$ can be expressed as

$$\begin{split} \frac{\partial^3}{\partial x^3} u(0, y\_j, t\_m) &= \sum\_{l=1}^L q\_l D\_t^{\alpha\_l} \frac{\partial}{\partial x} u(0, y\_j, t\_m) - \frac{\partial}{\partial x} \frac{\partial^2}{\partial y^2} u(0, y\_j, t\_m) \\ &\quad + u(0, y\_j, t\_m) \frac{\partial}{\partial x} c(0, y\_j) + c(0, y\_j) \frac{\partial}{\partial x} u(0, y\_j, t\_m) - \frac{\partial}{\partial x} F(0, y\_j, t\_m). \end{split}\tag{12}$$

Furthermore, in view of (10), we obtain

$$\frac{\partial^3}{\partial x^3} u(0, y\_j, t\_m) = \sigma \sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} u(0, y\_j, t\_m) - \sigma \frac{\partial^2}{\partial y^2} u(0, y\_j, t\_m) + p\_1(0, y\_j, t\_m) - g\_1(0, y\_j, t\_m). \tag{13}$$

So, substituting (13) into (11) to replace $(\partial^3/\partial x^3)u$ yields

$$\begin{split} \frac{\partial^2}{\partial x^2} u(0, y\_j, t\_m) = q\_1(0, y\_j, t\_m) - \frac{h}{3} \Big( \sigma \sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} u(0, y\_j, t\_m) - \sigma \frac{\partial^2}{\partial y^2} u(0, y\_j, t\_m) \\ + p\_1(0, y\_j, t\_m) - g\_1(0, y\_j, t\_m) \Big) - Ch^2. \end{split}\tag{14}$$

Add $(\sigma h/3)\big(\sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} u(0, y\_j, t\_m)\big)$ to the right-hand side and subtract it. Then, by the truncation error estimate (4), we have

$$\begin{split} \frac{\partial^2}{\partial x^2} u(0, y\_j, t\_m) &= q\_1(0, y\_j, t\_m) - \frac{h}{3} \Big( \sigma \sum\_{l=1}^{L} q\_l D\_{M}^{\alpha\_l} u(0, y\_j, t\_m) - \sigma \frac{\partial^2}{\partial y^2} u(0, y\_j, t\_m) \\ &\quad + p\_1(0, y\_j, t\_m) - g\_1(0, y\_j, t\_m) \Big) - Ch^2 - Chm^{-\min\{2 - \alpha\_1,\, r(\alpha\_1 + 1)\}}. \end{split}$$

For 1 ≤ *j* ≤ *N* − 1, 1 ≤ *m* ≤ *M*, we can approximate (1) on boundary point (0, *yj*, *tm*) by the discrete problem

$$\sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} u\_{0,j}^m - \delta\_x^b u\_{0,j}^m - \delta\_y^2 u\_{0,j}^m + c\_{0,j} u\_{0,j}^m = F\_{0,j}^m. \tag{15}$$

The other boundary points can be treated similarly.

*2.3. Corner Points*

For convenience, we introduce the following functions

$$\begin{aligned} \delta\_x^c u(x, y, t) &:= \frac{1}{1 - \frac{\sigma h}{3}} \left[ q\_1(x, y, t) - \frac{h}{3} \left( \sigma \sum\_{l=1}^{L} q\_l D\_{M}^{\alpha\_l} u(x, y, t) + p\_1(x, y, t) - g\_1(x, y, t) \right) \right], \\ \delta\_y^c u(x, y, t) &:= \frac{1}{1 - \frac{\sigma h}{3}} \left[ q\_2(x, y, t) - \frac{h}{3} \left( \sigma \sum\_{l=1}^{L} q\_l D\_{M}^{\alpha\_l} u(x, y, t) + p\_2(x, y, t) - g\_2(x, y, t) \right) \right], \end{aligned}$$

and grid functions

$$
\delta\_x^c u\_{i,j}^m = \delta\_x^c u(x\_i, y\_j, t\_m), \quad \delta\_y^c u\_{i,j}^m = \delta\_y^c u(x\_i, y\_j, t\_m).
$$

**Lemma 2.** *Assume $u(\cdot, \cdot, t\_m) \in C^2(\bar{\Omega})$ for fixed $t\_m$; then, there exists a constant $C$ such that*

$$\begin{split} \left| \delta\_x^c u(0,0,t\_m) - \frac{\partial^2}{\partial x^2} u(0,0,t\_m) \right| &+ \left| \delta\_y^c u(0,0,t\_m) - \frac{\partial^2}{\partial y^2} u(0,0,t\_m) \right| \\ &\le C h^2 + Chm^{-\min\{2-\alpha\_1,\, r(\alpha\_1+1)\}}. \end{split} \tag{16}$$

**Proof.** Consider the corner point $(0, 0, t\_m)$ with $m = 1, \dots, M$. Equations (1a) and (1c) at the point $(0, 0, t\_m)$ read

$$\sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} u(0,0,t\_m) - \Delta u(0,0,t\_m) + c(0,0)u(0,0,t\_m) = F(0,0,t\_m),\tag{17}$$

$$
\sigma u(0,0,t\_m) - \frac{\partial}{\partial x} u(0,0,t\_m) = g(0,0,t\_m), \qquad
\sigma u(0,0,t\_m) - \frac{\partial}{\partial y} u(0,0,t\_m) = g(0,0,t\_m). \tag{18}
$$

Similarly, by the Taylor expansions of $u(h, 0, t\_m)$ and $u(0, h, t\_m)$ at the point $(0, 0, t\_m)$,

$$\begin{split} u(h,0,t\_{m}) &= u(0,0,t\_{m}) + h \frac{\partial}{\partial \mathbf{x}} u(0,0,t\_{m}) + \frac{h^{2}}{2} \frac{\partial^{2}}{\partial \mathbf{x}^{2}} u(0,0,t\_{m}) + \frac{h^{3}}{6} \frac{\partial^{3}}{\partial \mathbf{x}^{3}} u(0,0,t\_{m}) + Ch^{4}, \\ u(0,h,t\_{m}) &= u(0,0,t\_{m}) + h \frac{\partial}{\partial y} u(0,0,t\_{m}) + \frac{h^{2}}{2} \frac{\partial^{2}}{\partial y^{2}} u(0,0,t\_{m}) + \frac{h^{3}}{6} \frac{\partial^{3}}{\partial y^{3}} u(0,0,t\_{m}) + Ch^{4}. \end{split}$$

Combining the above two equations and boundary conditions (18), we have

$$\begin{split} \frac{\partial^2}{\partial x^2} u(0,0,t\_m) + \frac{\partial^2}{\partial y^2} u(0,0,t\_m) = q\_1(0,0,t\_m) + q\_2(0,0,t\_m) \\ - \frac{h}{3} \frac{\partial^3}{\partial x^3} u(0,0,t\_m) - \frac{h}{3} \frac{\partial^3}{\partial y^3} u(0,0,t\_m) - Ch^2. \end{split}\tag{19}$$

Differentiating (17) with respect to $x$ and $y$, respectively, we can express $(\partial^3/\partial x^3)u + (\partial^3/\partial y^3)u$ at $(0, 0, t\_m)$ as

$$\begin{split} \frac{\partial^3}{\partial x^3} u(0,0,t\_m) + \frac{\partial^3}{\partial y^3} u(0,0,t\_m) &= 2\sigma \sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} u(0,0,t\_m) + p\_1(0,0,t\_m) + p\_2(0,0,t\_m) \\ &\quad - \sigma \frac{\partial^2}{\partial y^2} u(0,0,t\_m) - \sigma \frac{\partial^2}{\partial x^2} u(0,0,t\_m) - g\_1(0,0,t\_m) - g\_2(0,0,t\_m). \end{split} \tag{20}$$

Substituting (20) into the right-hand side of (19) yields

$$\begin{split} \frac{\partial^2}{\partial x^2} u(0,0,t\_m) + \frac{\partial^2}{\partial y^2} u(0,0,t\_m) &= q\_1(0,0,t\_m) + q\_2(0,0,t\_m) \\ &\quad - \frac{h}{3} \Big( 2\sigma \sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} u(0,0,t\_m) - \sigma \frac{\partial^2}{\partial y^2} u(0,0,t\_m) - \sigma \frac{\partial^2}{\partial x^2} u(0,0,t\_m) \\ &\quad + p\_1(0,0,t\_m) + p\_2(0,0,t\_m) - g\_1(0,0,t\_m) - g\_2(0,0,t\_m) \Big) - Ch^2. \end{split}\tag{21}$$

That means

$$\begin{split} \frac{\partial^2}{\partial x^2} u(0,0,t\_m) + \frac{\partial^2}{\partial y^2} u(0,0,t\_m) &= \frac{1}{1 - \frac{\sigma h}{3}} \Big[ q\_1(0,0,t\_m) + q\_2(0,0,t\_m) \\ &\quad - \frac{h}{3} \Big( 2\sigma \sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l} u(0,0,t\_m) + p\_1(0,0,t\_m) + p\_2(0,0,t\_m) \\ &\quad - g\_1(0,0,t\_m) - g\_2(0,0,t\_m) \Big) \Big] - Ch^2. \end{split} \tag{22}$$

Add $(2\sigma h/(3 - \sigma h))\big(\sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} u(0, 0, t\_m)\big)$ to the right-hand side and subtract it. Then, by the truncation error estimate (4), we have

$$\begin{split} \frac{\partial^2}{\partial x^2} u(0,0,t\_m) + \frac{\partial^2}{\partial y^2} u(0,0,t\_m) &= \frac{1}{1 - \frac{\sigma h}{3}} \Big[ q\_1(0,0,t\_m) + q\_2(0,0,t\_m) \\ &\quad - \frac{h}{3} \Big( 2\sigma \sum\_{l=1}^{L} q\_l D\_{M}^{\alpha\_l} u(0,0,t\_m) + p\_1(0,0,t\_m) + p\_2(0,0,t\_m) \\ &\quad - g\_1(0,0,t\_m) - g\_2(0,0,t\_m) \Big) \Big] - Ch^2 - Chm^{-\min\{2-\alpha\_1,\, r(\alpha\_1+1)\}}. \end{split}$$

We can approximate (1) on corner point (0, 0, *tm*) with the discrete problem

$$\sum\_{l=1}^{L} q\_l D\_{M}^{\alpha\_l} u\_{0,0}^{m} - \delta\_x^c u\_{0,0}^{m} - \delta\_y^c u\_{0,0}^{m} + c\_{0,0} u\_{0,0}^{m} = F\_{0,0}^{m} \quad \text{for } 1 \le m \le M. \tag{23}$$

The other corner points can be treated similarly.

#### **3. Error Analysis**

The local error analysis of problem (1) is studied in this section. The discrete scheme is the same as in Section 2. Let

$$\mathcal{E}^{m} := \begin{cases} M^{-r}\, t\_m^{\alpha\_1 - 1} & \text{if } 1 \le r < 2 - \alpha\_1, \\\\ M^{\alpha\_1 - 2}\, t\_m^{\alpha\_1 - 1} [1 + \ln(t\_m/t\_1)] & \text{if } r = 2 - \alpha\_1, \\\\ M^{\alpha\_1 - 2}\, t\_m^{\alpha\_1 - (2 - \alpha\_1)/r} & \text{if } r > 2 - \alpha\_1. \end{cases}$$
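The three cases of $\mathcal{E}^m$ can be transcribed directly into code; the helper below is our own illustrative transcription (with example arguments), useful for plotting the predicted local error profile against measured errors.

```python
import numpy as np

def local_error_bound(m, M, alpha1, r, T=1.0):
    """Evaluate E^m for the graded mesh t_m = T*(m/M)^r (illustrative transcription)."""
    t = T * (np.arange(M + 1) / M) ** r
    if 1 <= r < 2 - alpha1:
        return M ** (-r) * t[m] ** (alpha1 - 1)
    if r == 2 - alpha1:
        return M ** (alpha1 - 2) * t[m] ** (alpha1 - 1) * (1 + np.log(t[m] / t[1]))
    return M ** (alpha1 - 2) * t[m] ** (alpha1 - (2 - alpha1) / r)
```

At the final time $m = M$ with $r > 2 - \alpha\_1$ and $T = 1$, this reduces to $M^{\alpha\_1 - 2} = M^{-(2-\alpha\_1)}$, matching the first argument of the minimum in the global bound (34).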

From ([20] Theorem 3), we obtain the next result, which will be used.

#### **Lemma 3.**

*(i) If the mesh function $\{V^m\}\_{m=0}^{M}$ satisfies $V^0 = 0$ and*

$$\sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} |V^m| \le C m^{-\min\{2-\alpha\_1,\, r(\alpha\_1+1)\}} \text{ for } m = 1, 2, \dots, M,\tag{24}$$

*for some $C > 0$, then $|V^m| \le C\mathcal{E}^m$ for $m = 1, 2, \dots, M$.*

*(ii) If the mesh function $\{V^m\}\_{m=0}^{M}$ satisfies $V^0 = 0$ and*

$$\sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} |V^m| \le C \text{ for } m = 1, 2, \dots, M,\tag{25}$$

*for some $C > 0$, then $|V^m| \le C$ for $m = 1, 2, \dots, M$.*

Now, we provide the main result of this paper. For a grid function $\{v\_{i,j}^m\}$, set $\|v^m\|\_{\infty,\bar{\Omega}\_h} = \max\_{(i,j)\in\bar{\Omega}\_h} |v\_{i,j}^m|$.

**Theorem 1.** *The solution $\{u\_{i,j}^m\}$ of the L1-finite difference scheme satisfies*

$$\max\_{(x\_i, y\_j, t\_m) \in \bar{Q}} |u(x\_i, y\_j, t\_m) - u\_{i,j}^m| \le C \left( h^2 + \mathcal{E}^m \right) \tag{26}$$

*for some constant C independent of the mesh.*

**Proof.** Set $e\_{i,j}^m := u(x\_i, y\_j, t\_m) - u\_{i,j}^m$, where $(i, j, m) \in \bar{Q}$. Choose $(i^\*, j^\*) \in \bar{\Omega}\_h$ such that $|e\_{i^\*,j^\*}^m| = \max\_{(i,j)\in\bar{\Omega}\_h} |e\_{i,j}^m|$. Suppose that $e\_{i^\*,j^\*}^m \ge 0$ (the case $e\_{i^\*,j^\*}^m \le 0$ can be proved similarly).

If $(i^\*, j^\*) \in \Omega\_h$, then (1a) and (7) give the error equation

$$\begin{split} &\sum\_{l=1}^{L} q\_l D\_{M}^{\alpha\_l} e\_{i,j}^m - \delta\_x^2 e\_{i,j}^m - \delta\_y^2 e\_{i,j}^m + c(x\_i, y\_j) e\_{i,j}^m \\ &= \Big(\sum\_{l=1}^{L} q\_l D\_{M}^{\alpha\_l} - \sum\_{l=1}^{L} q\_l D\_t^{\alpha\_l}\Big) u(x\_i, y\_j, t\_m) + \Big(\delta\_x^2 - \frac{\partial^2}{\partial x^2}\Big) u(x\_i, y\_j, t\_m) \\ &\quad + \Big(\delta\_y^2 - \frac{\partial^2}{\partial y^2}\Big) u(x\_i, y\_j, t\_m) \\ &= R\_t^{i,j} + R\_x^{i,j} + R\_y^{i,j}. \end{split} \tag{27}$$

Since $|e\_{i^\*,j^\*}^m| = \max\_{(i,j)\in\bar{\Omega}\_h} |e\_{i,j}^m|$ and $e\_{i^\*,j^\*}^m \ge 0$, we have $-\delta\_x^2 e\_{i^\*,j^\*}^m - \delta\_y^2 e\_{i^\*,j^\*}^m \ge 0$. Combining this with $c \ge c\_0 > 0$, we have

$$\sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} e\_{i^\*,j^\*}^m \le R\_t^{i,j} + R\_x^{i,j} + R\_y^{i,j} \le C \left( m^{-\min\{2-\alpha\_1,\, r(\alpha\_1+1)\}} + h^2 \right). \tag{28}$$

If $i^\* = 0$ and $j^\* \in \{1, \dots, N-1\}$, then (1a) and (15) give the error equation

$$\begin{split} &\Big(1+\frac{\sigma h}{3}\Big)\sum\_{l=1}^{L}q\_l D\_{M}^{\alpha\_l}e\_{0,j}^{m}-\frac{2}{h}\delta\_x e\_{1,j}^{m}-\Big(1+\frac{\sigma h}{3}\Big)\delta\_y^2 e\_{0,j}^{m}+\left[c\_{0,j}+\frac{h}{3}\left(\frac{6\sigma}{h^2}+(c\_1)\_{0,j}+\sigma c\_{0,j}\right)\right]e\_{0,j}^{m} \\ &=\left(\sum\_{l=1}^{L}q\_l D\_{M}^{\alpha\_l}-\sum\_{l=1}^{L}q\_l D\_t^{\alpha\_l}\right)u(0,y\_j,t\_m)+\Big(\delta\_x^b-\frac{\partial^2}{\partial x^2}\Big)u(0,y\_j,t\_m)+\Big(\delta\_y^2-\frac{\partial^2}{\partial y^2}\Big)u(0,y\_j,t\_m) \\ &=R\_t^{0,j}+R\_x^{0,j}+R\_y^{0,j}. \end{split}\tag{29}$$

It is easy to see that $-\frac{2}{h}\delta\_x e\_{1,j^\*}^m - (1+\frac{\sigma h}{3})\delta\_y^2 e\_{0,j^\*}^m \ge 0$. When $h$ is small enough, $c\_{0,j} + \frac{h}{3}\big(\frac{6\sigma}{h^2} + (c\_1)\_{0,j} + \sigma c\_{0,j}\big) > 0$. Compared with the time-direction truncation error $Cm^{-\min\{2-\alpha\_1, r(\alpha\_1+1)\}}$, the higher-order truncation error $Chm^{-\min\{2-\alpha\_1, r(\alpha\_1+1)\}}$ caused by the boundary discretisation can be omitted. Then, we have

$$\begin{split} \sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} e\_{i^\*,j^\*}^m &\le \left(1 + \frac{\sigma h}{3}\right) \sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} e\_{i^\*,j^\*}^m \\ &\le R\_t^{0,j} + R\_x^{0,j} + R\_y^{0,j} \le C \left(m^{-\min\{2-\alpha\_1,\, r(\alpha\_1+1)\}} + h^2\right). \end{split} \tag{30}$$

If $i^\* = j^\* = 0$, then (1a) and (23) give the error equation

$$\begin{split} & \left(1 + \frac{2h\sigma}{3 - \sigma h}\right) \sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} e\_{0,0}^m - \frac{1}{1 - \frac{\sigma h}{3}} \Big(\frac{2}{h} \delta\_x e\_{1,0}^m + \frac{2}{h} \delta\_y e\_{0,1}^m\Big) \\ & \quad + \frac{1}{1 - \frac{\sigma h}{3}} \left[ \frac{h}{3} \left( (c\_1)\_{0,0} + (c\_2)\_{0,0} - \frac{2\sigma h}{3} c\_{0,0} \right) + 2\sigma \right] e\_{0,0}^m + c\_{0,0} e\_{0,0}^m \\ & \quad = \left(\sum\_{l=1}^L q\_l D\_M^{\alpha\_l} - \sum\_{l=1}^L q\_l D\_t^{\alpha\_l}\right) u(0, 0, t\_m) + \Big(\delta\_x^c - \frac{\partial^2}{\partial x^2}\Big) u(0, 0, t\_m) + \Big(\delta\_y^c - \frac{\partial^2}{\partial y^2}\Big) u(0, 0, t\_m) \\ & \quad = R\_t^{0,0} + R\_x^{0,0} + R\_y^{0,0}. \end{split} \tag{31}$$

Then, we have $-\frac{2}{h}\delta\_x e\_{1,0}^m - \frac{2}{h}\delta\_y e\_{0,1}^m \ge 0$. When $h$ is small enough, $c\_{0,0} + \frac{1}{1 - \frac{\sigma h}{3}}\left[\frac{h}{3}\left((c\_1)\_{0,0} + (c\_2)\_{0,0} - \frac{2\sigma h}{3} c\_{0,0}\right) + 2\sigma\right] > 0$. Similarly, omitting the higher-order error $Chm^{-\min\{2-\alpha\_1, r(\alpha\_1+1)\}}$ caused by the boundary discretisation, we have

$$\sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} e\_{0,0}^m \le R\_t^{0,0} + R\_x^{0,0} + R\_y^{0,0} \le C \left( m^{-\min\{2-\alpha\_1,\, r(\alpha\_1+1)\}} + h^2 \right). \tag{32}$$

For other corner points, we shall obtain similar results.

For $l = 1, \dots, L$, rewrite the discretisation of each Caputo derivative as

$$D\_M^{\alpha\_l} u\_{i,j}^m = \frac{d\_{m,1}^l}{\Gamma(2-\alpha\_l)} u\_{i,j}^m - \frac{d\_{m,m}^l}{\Gamma(2-\alpha\_l)} u\_{i,j}^0 + \frac{1}{\Gamma(2-\alpha\_l)} \sum\_{k=1}^{m-1} u\_{i,j}^{m-k} \left[ d\_{m,k+1}^l - d\_{m,k}^l \right],$$

where

$$d\_{m,k}^l := \frac{(t\_m - t\_{m-k})^{1 - \alpha\_l} - (t\_m - t\_{m-k+1})^{1 - \alpha\_l}}{\tau\_{m-k+1}}.$$

The mean value theorem gives $(1 - \alpha\_l)(t\_m - t\_{m-k})^{-\alpha\_l} \le d\_{m,k}^l \le (1 - \alpha\_l)(t\_m - t\_{m-k+1})^{-\alpha\_l}$, and hence $d\_{m,k}^l - d\_{m,k+1}^l \ge 0$. Then,
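Both properties of the coefficients — positivity and monotonicity in $k$ — are easy to confirm numerically. The sketch below (our own check, with example parameters) evaluates $d\_{m,k}^l$ on a graded mesh.

```python
import numpy as np

# Evaluate d_{m,k} = [(t_m - t_{m-k})^{1-a} - (t_m - t_{m-k+1})^{1-a}] / tau_{m-k+1}
# on a graded mesh and confirm d_{m,1} >= d_{m,2} >= ... >= d_{m,m} > 0.
alpha, r, M = 0.6, 2.0, 64
t = (np.arange(M + 1) / M) ** r
m = M
d = np.array([((t[m] - t[m - k]) ** (1 - alpha) - (t[m] - t[m - k + 1]) ** (1 - alpha))
              / (t[m - k + 1] - t[m - k]) for k in range(1, m + 1)])
```

Monotonicity of the `d` sequence is exactly what makes the comparison argument below (and the M-matrix property used after (33)) work.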

$$\begin{split} \Gamma(2-\alpha\_l)\, D\_M^{\alpha\_l} e\_{i^\*,j^\*}^m &= d\_{m,1}^l e\_{i^\*,j^\*}^m + \sum\_{k=1}^{m-1} \big(d\_{m,k+1}^l - d\_{m,k}^l\big) e\_{i^\*,j^\*}^{m-k} - d\_{m,m}^l e\_{i^\*,j^\*}^0 \\ &\ge d\_{m,1}^l \|e^m\|\_{\infty,\bar{\Omega}\_h} + \sum\_{k=1}^{m-1} \big(d\_{m,k+1}^l - d\_{m,k}^l\big) \|e^{m-k}\|\_{\infty,\bar{\Omega}\_h} - d\_{m,m}^l \|e^0\|\_{\infty,\bar{\Omega}\_h} \\ &= \Gamma(2-\alpha\_l)\, D\_M^{\alpha\_l} \|e^m\|\_{\infty,\bar{\Omega}\_h}, \end{split}$$

since $e\_{i^\*,j^\*}^m = \|e^m\|\_{\infty,\bar{\Omega}\_h}$ and the coefficients $d\_{m,k+1}^l - d\_{m,k}^l$ are nonpositive.

By (28), (30) and (32), we have

$$\sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l} \|e^m\|\_{\infty, \bar{\Omega}\_h} \le C \left( m^{-\min\{2 - \alpha\_1,\, r(\alpha\_1 + 1)\}} + h^2 \right). \tag{33}$$

Note that $\sum\_{l=1}^{L} q\_l D\_M^{\alpha\_l}$ is associated with an M-matrix, so we can deal separately with the terms $m^{-\min\{2-\alpha\_1, r(\alpha\_1+1)\}}$ and $h^2$. By Lemma 3, this means that for $m = 1, \dots, M$,

$$\|u(\cdot, \cdot, t\_m) - u^m\|\_{\infty, \bar{\Omega}\_h} \le C \left( h^2 + \mathcal{E}^m \right).$$

This completes the proof.

**Remark 1.** *From* (26)*, we obtain the following global error result:*

$$\max\_{m = 1, \dots, M} \|u(\cdot, \cdot, t\_m) - u^m\|\_{\infty, \bar{\Omega}\_h} \le C \max\_{m = 1, \dots, M} \left(h^2 + \mathcal{E}^m\right) \le C \left(h^2 + M^{-\min\{2 - \alpha\_1,\, r\alpha\_1\}}\right). \tag{34}$$

#### **4. Numerical Results**

To verify the validity of the numerical scheme, two numerical examples are presented: one with a known exact solution, and one whose exact solution is unknown.

We use the fully discrete scheme of Section 2 to discretise (1). In the following examples, we set the mesh parameters $r = (2 - \alpha\_1)/0.9$, $L = 2$ and $0 < \alpha\_2 < \alpha\_1 < 1$. We let the number of space intervals $N$ equal the number of time intervals $M$, so that the error in the time direction dominates the space error. On this basis, we check the sharpness of Theorem 1.

#### **Example 1.**

$$\begin{aligned} D\_t^{\alpha\_1}u + D\_t^{\alpha\_2}u - \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} + (1+x+y)u &= f(x,y,t) \text{ for } (x,y,t) \in [0,2] \times [0,2] \times (0,1], \\ u(x,y,0) = (\tfrac{1}{3}x^3 - x^2 + \tfrac{1}{3}x + \tfrac{1}{3})(\tfrac{1}{3}y^3 - y^2 + \tfrac{1}{3}y + \tfrac{1}{3}) &\quad \text{for} \quad (x,y) \in [0,2] \times [0,2], \\ u(0,y,t) - \frac{\partial u}{\partial x}(0,y,t) = 0, \quad u(2,y,t) + \frac{\partial u}{\partial x}(2,y,t) = 0 &\quad \text{for} \quad y \in [0,2], \; t \in (0,1], \\ u(x,0,t) - \frac{\partial u}{\partial y}(x,0,t) = 0, \quad u(x,2,t) + \frac{\partial u}{\partial y}(x,2,t) = 0 &\quad \text{for} \quad x \in [0,2], \; t \in (0,1]. \end{aligned} \tag{35}$$

The exact solution is $u(x, y, t) = (1 + t^{\alpha\_1} + t^3)(\tfrac{1}{3}x^3 - x^2 + \tfrac{1}{3}x + \tfrac{1}{3})(\tfrac{1}{3}y^3 - y^2 + \tfrac{1}{3}y + \tfrac{1}{3})$. The right-hand-side function $f(x, y, t)$ can be computed from (35).

Table 1 reports the global error and the local error, which are defined as

$$\operatorname{error}\_{G}^{M,N} := \max\_{\substack{(i,j) \in \bar{\Omega}\_h \\ 1 \le m \le M}} |u\_{i,j}^{m} - u(x\_i, y\_j, t\_m)|, \qquad \operatorname{error}\_{L}^{M,N} := \max\_{\substack{(i,j) \in \bar{\Omega}\_h \\ \lceil 9M/10 \rceil \le m \le M}} |u\_{i,j}^{m} - u(x\_i, y\_j, t\_m)|.$$

Then, we can compute the rates of convergence

$$\operatorname{rate}\_{G}^{M,N} = \log\_2\left(\frac{\operatorname{error}\_{G}^{M,N}}{\operatorname{error}\_{G}^{2M,2N}}\right), \quad \operatorname{rate}\_{L}^{M,N} = \log\_2\left(\frac{\operatorname{error}\_{L}^{M,N}}{\operatorname{error}\_{L}^{2M,2N}}\right).$$

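The rate formula simply reads off the exponent from successive mesh halvings. A quick self-contained check (our own illustration, using a synthetic error sequence $3M^{-p}$ rather than computed errors):

```python
import numpy as np

def rate(err_coarse, err_fine):
    """Observed order from errors on meshes M and 2M: log2(error(M)/error(2M))."""
    return np.log2(err_coarse / err_fine)

p = 1.4                                    # e.g. 2 - alpha_1 with alpha_1 = 0.6
errors = {M: 3.0 * M ** (-p) for M in (64, 128, 256)}
r1 = rate(errors[64], errors[128])
r2 = rate(errors[128], errors[256])
```

Since the synthetic errors decay exactly like $M^{-p}$, both computed rates equal $p$; with real errors the rates approach the theoretical order as the mesh is refined.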



#### **Example 2.**

$$D\_t^{\alpha\_1}u + D\_t^{\alpha\_2}u - \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} + (1+x+y)u = 0 \text{ for } (x, y, t) \in [0, 2] \times [0, 2] \times (0, 1],$$

$$u(x, y, 0) = (\tfrac{1}{3}x^3 - x^2 + \tfrac{1}{3}x + \tfrac{1}{3})(\tfrac{1}{3}y^3 - y^2 + \tfrac{1}{3}y + \tfrac{1}{3}) \quad \text{for} \quad (x, y) \in [0, 2] \times [0, 2], \tag{36}$$

$$u(0, y, t) - \frac{\partial u}{\partial x}(0, y, t) = 0, \quad u(2, y, t) + \frac{\partial u}{\partial x}(2, y, t) = 0 \quad \text{for} \quad y \in [0, 2], \; t \in (0, 1],$$

$$u(x, 0, t) - \frac{\partial u}{\partial y}(x, 0, t) = 0, \quad u(x, 2, t) + \frac{\partial u}{\partial y}(x, 2, t) = 0 \quad \text{for} \quad x \in [0, 2], \; t \in (0, 1].$$

In this example, the exact solution is unknown, so we use the two-mesh principle of [25] to check the convergence rate. Let $u\_{i,j}^m$ be the numerical solution computed on the mesh $\{(x\_i, y\_j, t\_m)\}$ for $i, j = 0, \dots, N$ and $m = 0, \dots, M$. The second mesh is defined as $\{(x\_{i/2}, y\_{j/2}, t\_{m/2})\}$ for $i, j = 0, \dots, 2N$ and $m = 0, \dots, 2M$, where $x\_{i+1/2} := \tfrac{1}{2}(x\_{i+1} + x\_i)$, $y\_{j+1/2} := \tfrac{1}{2}(y\_{j+1} + y\_j)$ and $t\_{m+1/2} := \tfrac{1}{2}(t\_{m+1} + t\_m)$. Then, a new approximation $\hat{u}\_{i,j}^m$ is computed on the second mesh using the same scheme as for $u\_{i,j}^m$.
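Because the refined mesh keeps every coarse node (the new nodes are midpoints), the two numerical solutions can be compared directly at the shared nodes, without interpolation. The index bookkeeping is sketched below, with placeholder arrays standing in for solutions actually produced by the scheme (our own illustration).

```python
import numpy as np

N, M = 8, 10
rng = np.random.default_rng(0)
u = rng.random((M + 1, N + 1, N + 1))      # stand-in for the coarse solution u^m_{i,j}
u_hat = np.zeros((2 * M + 1, 2 * N + 1, 2 * N + 1))
u_hat[::2, ::2, ::2] = u                   # fine-mesh values at the shared nodes

# maximum two-mesh difference over the coarse nodes with ceil(9M/10) <= m <= M
m0 = int(np.ceil(9 * M / 10))
diff_L = np.abs(u[m0:, :, :] - u_hat[2 * m0::2, ::2, ::2]).max()
```

Here `u_hat` simply copies `u` at the shared nodes, so the difference is exactly zero; with genuinely computed solutions it gives the two-mesh error used in the tables.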

Now the maximum two-mesh differences are defined by

$$\operatorname{error}\_G^{M,N} := \max\_{\substack{(i,j) \in \bar{\Omega}\_h \\ 1 \le m \le M}} |u\_{i,j}^m - \hat{u}\_{i,j}^m|, \qquad \operatorname{error}\_L^{M,N} := \max\_{\substack{(i,j) \in \bar{\Omega}\_h \\ \lceil 9M/10 \rceil \le m \le M}} |u\_{i,j}^m - \hat{u}\_{i,j}^m|,$$

and they are used to compute the global and local convergence rates

$$\operatorname{rate}\_G^{M,N} = \log\_2\left(\frac{\operatorname{error}\_G^{M,N}}{\operatorname{error}\_G^{2M,2N}}\right), \quad \operatorname{rate}\_L^{M,N} = \log\_2\left(\frac{\operatorname{error}\_L^{M,N}}{\operatorname{error}\_L^{2M,2N}}\right).$$

In Tables 1 and 2, we take $r = (2 - \alpha\_1)/0.9$ and $\alpha\_2 = 0.1$. The global convergence rate is larger than $\alpha\_1$. The global error bound $|e\_{i,j}^m| \le CM^{-\min\{2-\alpha\_1, r\alpha\_1\}}$ can be found in other papers that focus only on global errors [26,27]; when the parameters $r$ and $\alpha\_1$ are chosen as in this paper, the same convergence rate is observed. The most important conclusion of this paper concerns the convergence rate of the local errors: the observed local rate is $2 - \alpha\_1$, so the local convergence rate at every time step is sharper than the global one. All these experimental results demonstrate the sharpness of our theoretical analysis.


**Table 2.** Example 2 with *α*<sup>2</sup> = 0.1 and *r* = (2 − *α*1)/0.9.

#### **5. Discussion**

In this paper, we have presented a fully discrete scheme for multi-term time-fractional reaction-diffusion equations by using the L1 scheme in time and the finite difference method in space. To the best of our knowledge, Robin boundary conditions have not been explored much in this regard. For this type of boundary condition, we have constructed a discrete scheme of (1) at the boundary points that matches the convergence rate of the inner points. Based on the fully discrete scheme, a detailed local error analysis for (1) is presented. The convergence rate at each time node is proven, and two numerical examples are used to verify the theoretical results. In our future work, we will consider methods with higher convergence rates in time, and methods such as the mixed finite element method in space.

**Author Contributions:** Conceptualization and writing—original draft, J.H.; writing—review and editing, J.H., X.M., J.W. and Y.H.; supervision, Y.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Natural Science Foundation of China, grant numbers 62173027 and 12101039.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Numerical Simulations of the Oscillating Second-Grade Fluid through a Rectangular Cross Duct with Fractional Constitution Relationship**

**Bo Zhang 1, Lin Liu 1,2,\*, Siyu Chen 1, Sen Zhang 1, Lang Liu 1, Libo Feng 3, Jing Zhu 1, Jiangshan Zhang <sup>2</sup> and Liancun Zheng <sup>1</sup>**


<sup>3</sup> School of Mathematical Sciences, Queensland University of Technology, Brisbane, QLD 4001, Australia

**\*** Correspondence: liulin@ustb.edu.cn

**Abstract:** An oscillating second-grade fluid through a rectangular cross duct is studied. The traditional integer-order time derivative in the kinematic tensors is substituted by a fractional operator that accounts for memory characteristics. To treat the fractional governing equation, an analytical solution was obtained. To analyze the impact of the parameters more intuitively, the difference method was applied to determine the numerical solution, which was plotted with the help of computer simulation. To reduce the computation and storage costs, a fast scheme was proposed, one which can greatly improve the calculation speed. To verify the correctness of the difference scheme, a comparison between the numerical solution and the exact solution (constructed by introducing a source term) is given, and the superiority of the fast scheme is discussed. Furthermore, the influences of the involved parameters, including the retardation time parameter, fractional parameter, magnetic parameter, and oscillatory frequency parameter, on the distributions of velocity and shear force at the wall surface with oscillatory flow are analyzed in detail.

**Keywords:** second-grade fluid; rectangular duct; constitution relationship; fractional derivative; fast algorithms

#### **1. Introduction**

The flow of fluid has widespread applications, including in aerospace, biomedicine, oil exploitation, etc. The classical fluid model is the Newtonian fluid, in which the stress tensor and the kinematic tensor have a linear relationship. It is limited, however, in that it can only describe simple pure liquids such as water and alcohol. Beyond these, most fluids are non-Newtonian, with many properties that differ from those of Newtonian ones [1]. Studying the flow mechanism has great significance. There are many types of non-Newtonian fluids, and this paper studies the second-grade fluid [2–4], in which the shear force is characterized by the stretching tensor and the Rivlin–Ericksen tensors.

Due to the special description of the constitution relationship, the second-grade fluid has its own unique properties. In order to better discover its flow mechanism, the usual method is to consider the flow through simple models. The common categories for this include the flow on semi-infinite plates [5,6], between two parallel infinitely long plates [7], the flow in pipes or ducts [8], or the flow in a circular tube [9]. Non-Newtonian fluids in rectangular channels have gained special interest for engineering applications such as magnetohydrodynamic generators and marine mechanical equipment, interest which has helped us to study the flow characteristics in depth [10]. Studying second-grade fluid in a rectangular cross duct therefore has important research significance and application value. Erdoğan and İmrak [11] were the first scholars to study the unsteady motion of second-grade fluid

**Citation:** Zhang, B.; Liu, L.; Chen, S.; Zhang, S.; Liu, L.; Feng, L.; Zhu, J.; Zhang, J.; Zheng, L. Numerical Simulations of the Oscillating Second-Grade Fluid through a Rectangular Cross Duct with Fractional Constitution Relationship. *Fractal Fract.* **2022**, *6*, 666. https:// doi.org/10.3390/fractalfract6110666

Academic Editor: Wojciech Sumelka

Received: 1 October 2022 Accepted: 7 November 2022 Published: 11 November 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

through a rectangular cross duct with the influences of the side walls. It has been further studied by many scholars. Considering heat transfer with relaxation time, Alamri et al. [12] analyzed particle diffusion in the flow of second-grade fluid and discussed the effects of the involved parameters on the profile graphically. Bernard [13] studied a three-dimensional second-grade fluid with a tangential boundary condition in a polyhedron. By comparing with the stress of the Newtonian fluid at the initial time, Erdoğan and İmrak [14] considered the motion properties of second-grade fluid driven by an impulsive motion or a sudden pressure gradient. The comparison of the stress at the start time between the Newtonian fluid and the second-grade fluid was discussed. Furthermore, the influence of the magnetic field has important research significance. It has been applied to the Maxwell fluid [15], the Oldroyd-B fluid [16], and others, but there are fewer studies on the second-grade fluid.

Besides, many situations consider the steady-state motion of the second-grade fluid for simplicity. However, in practical situations, the velocity field produced by the flow should vary with time due to the complexity of the fluid flow. The unsteady state has more research significance for the second-grade fluid, in which the integer-order time derivative in the constitution relationship only accounts for local characteristics. With further research, it has been found that the fractional model has gained support for its memory characteristic [17]. At present, fractional operators have been applied to many viscoelastic fluids, such as the Maxwell model [18], the Oldroyd-B model [19], and Burgers' model [20]. For the fractional second-grade fluid, the constitution relationship has a similar form to that of the viscoelastic fluid; namely, they all have the fractional material derivative term. The application of the fractional operator to the motion of second-grade fluid has been analyzed by Tan and Xu [21], Bazhlekova et al. [22], Kan and Wang [23], and Li et al. [24]. Flow driven by a special form of oscillatory pressure has been widely applied to the motions in an isosceles right triangle tube with Maxwell fluid [25], in a straight rectangular duct with the second-grade fluid [26], in a cylindrical domain with the Oldroyd-B fluid [27], and in cylindrical domains with the fractional Burgers fluid [28]. To the best of the authors' knowledge, the two-dimensional flow of second-grade fluid in rectangular ducts driven by oscillatory pressure and considering a magnetic field has not been considered in the literature so far.

There are many methods to solve the governing equation [29–31]. For the treatment of the fractional second-grade fluid, the traditional method is to apply the integral transform method to obtain an analytical solution [27,32,33], with the drawback that the initial conditions are enforced in a manner that is not rigorous with respect to the principle of causality. In other words, as pointed out by Christov [34,35], these treatments of the start-up flow are incorrect. There are many numerical methods [36] that can solve the fractional governing equation, and the numerical difference method has been applied to solve the corresponding mathematical problem correctly.

The governing equation subject to the fractional second-grade constitutive relationship is solved numerically. The integer-order terms have mature calculation methods, so the key is to treat the fractional derivative. The classical method is to choose the L1 scheme [37] to approximate it, though it is limited by the huge amount of computation and storage required for long-term numerical simulation, since the Caputo derivative depends on historical information. This is an urgent problem to be solved at present. By using exponential functions to approximate the Abel kernel function of Caputo derivatives, the fast algorithm [38] has been developed. The main idea is to reduce the number of iterations by constructing a recurrence relationship. At each time step, the convolution containing the exponential kernel is calculated in $O(1)$ time. Then the computational cost $O(N^2)$ and storage $O(N)$ of the direct L1 algorithm reduce to $O(N \log\_2 N)$ and $O(\log\_2 N)$, respectively, for the fast algorithm. This has been applied to treat fractional diffusion models [39], multi-term fractional sub-diffusion models [40], wave models [41] and variable coefficient fractional diffusion wave models [42]. According to the numerical results, the analyses are discussed and detailed by graphical illustration.
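The recurrence at the heart of such fast schemes is easy to state: for an exponential kernel, the history integral $I(t\_n) = \int\_0^{t\_n} e^{-s(t\_n - \xi)} f(\xi)\, d\xi$ satisfies $I(t\_n) = e^{-s\tau} I(t\_{n-1}) + \int\_{t\_{n-1}}^{t\_n} e^{-s(t\_n - \xi)} f(\xi)\, d\xi$, so each time step costs $O(1)$ work. A minimal sketch with a single exponential mode (a real sum-of-exponentials approximation of the Abel kernel combines many such modes; the function and parameter names here are illustrative):

```python
import numpy as np

def history_recurrence(f, s, tau, N):
    """March I(t_n) = integral_0^{t_n} e^{-s(t_n - xi)} f(xi) dxi with an O(1)
    update per step: decay the stored history, then add the newest interval
    (approximated here by the midpoint rule)."""
    I = np.zeros(N + 1)
    decay = np.exp(-s * tau)
    half = np.exp(-s * tau / 2.0)
    for n in range(1, N + 1):
        local = tau * half * f((n - 0.5) * tau)   # newest-interval contribution
        I[n] = decay * I[n - 1] + local
    return I

# check against the exact integral for f = 1: I(t) = (1 - e^{-s t}) / s
I = history_recurrence(lambda t: 1.0, 1.0, 1e-3, 1000)
```

The single stored scalar replaces the whole history; once roughly $\log N$ modes are used to approximate $t^{-\alpha}$, this is where the $O(N \log\_2 N)$ work and $O(\log\_2 N)$ storage totals quoted above come from.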

The paper's outline is given as follows. The derivation of the mathematical model of the second-grade fluid in a rectangular duct with infinite length, driven by a time-varying pressure gradient, is given in Section 2. The exact expression for describing the second-grade fluid is deduced in Section 3. Section 4 gives the numerical difference scheme of the formulated governing equation, and the analyses of solvability, stability and convergence are proven in Section 5. Section 6 gives the fast evaluation of the difference scheme. Section 7 gives the comparison between the numerical expression and the exact expression. Furthermore, the influences of the relevant parameters on the transfer mechanism of the velocity field and the shear force at the wall surface are also analyzed. The conclusions are summarized in Section 8.

#### **2. The Derivation of the Mathematical Model**

Consider the motions of an incompressible second-grade fluid. The laminar flow in a straight duct with infinite length and the rectangular cross-section is considered and the flow is controlled by pressure gradient with time/space oscillations. As shown in Figure 1, the width and height of the rectangular section are 2*a* and 2*b*. The center in the cross-section is defined as the origin and the boundaries along *x* direction and *y* direction are at the positions *x* = ±*a* and *y* = ±*b* while *z* ∈ [0, +∞) along *z* direction. For simplicity, the body forces are neglected in this paper.

**Figure 1.** The motion of second-grade fluid in a rectangular cross duct.

The continuity equation is given as

$$\nabla \cdot V = 0,\tag{1}$$

where ∇ denotes the gradient operator.

As a development, for the fractional second-grade fluid when considering the memory characteristics [21], the stress tensor *τ* has the following expression

$$
\boldsymbol{\tau} = \mu A\_1 + \alpha\_1 A\_2 + \alpha\_2 A\_1^2, \tag{2}
$$

where $\mu$ refers to the dynamic viscosity, $\alpha\_1$ and $\alpha\_2$ denote the material moduli, and $A\_1$ and $A\_2$ are the kinematic tensors with the expressions

$$A\_1 = \nabla V + \left(\nabla V\right)^T \text{ and } A\_2 = D\_t^{\alpha} A\_1 + A\_1 \nabla V + \left(\nabla V\right)^T A\_1, \tag{3}$$

where $D\_t^{\alpha}$ denotes the Riemann–Liouville fractional operator of order $\alpha$ ($0 < \alpha < 1$) [43]; the definition for a function $f(t)$ defined on $[t\_1, t\_2]$ is given as

$$D\_t^{\alpha} f(t) = \frac{d}{dt} \left( \frac{1}{\Gamma(1-\alpha)} \int\_{t\_1}^t \frac{f(\xi)}{\left(t - \xi\right)^{\alpha}}\, d\xi \right). \tag{4}$$

Considering the Clausius–Duhem inequality and assuming that the Helmholtz free energy is a minimum at equilibrium [44,45], the material constants satisfy the following restrictions

$$
\mu \ge 0, \ \alpha\_1 \ge 0 \ \text{and} \ \alpha\_1 + \alpha\_2 = 0. \tag{5}
$$

Applying the periodic pressure gradient in the *z*-direction, the motion of the second-grade fluid is parallel to the axial coordinate with an oscillating form. The velocity field is assumed as

$$V = \left[0, 0, w(x, y, t)\right]^T, \tag{6}$$

where $w(x, y, t)$ refers to the velocity in the *z*-direction. With this assumption, it is simple to verify that the continuity Equation (1) is automatically satisfied by the velocity field (6).

Considering the effect of an electromagnetic field, the motion equation for describing the second-grade fluid is denoted as follows

$$
\rho D\_t V = -\nabla p + \nabla \cdot \boldsymbol{\tau} - \sigma\_0 B\_0^2 V,\tag{7}
$$

where *V* corresponds to the velocity vector, *ρ* refers to the fluid density, *p* denotes the hydrostatic pressure, the operator *Dt* refers to the material derivative, *σ*<sup>0</sup> refers to electrical conductivity and *B*<sup>0</sup> is the magnetic field.

Combining the expansion of Equation (2) (see Appendix A) with Equation (7), the fractional governing equation can be derived as

$$\frac{\partial w}{\partial t} = \nu \left( 1 + \lambda D\_t^{\alpha} \right) \left( \frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} \right) - Mw - \frac{1}{\rho} \frac{\partial p}{\partial z}, \tag{8}$$

where $\nu = \frac{\mu}{\rho}$ denotes the kinematic viscosity, $\lambda = \frac{\alpha\_1}{\mu}$ refers to the retardation time, and $M = \frac{\sigma\_0 B\_0^2}{\rho}$ corresponds to the magnetic parameter.

The initial conditions are

$$w(x, y, 0) = 0,\tag{9}$$

and the boundary conditions regardless of slip are given as

$$w(\pm a, y, t) = w(x, \pm b, t) = 0. \tag{10}$$

The initial and boundary conditions (9) and (10) are of Dirichlet type, based on the physical background of a laminar flow in a straight duct with infinite length and rectangular cross-section. When the boundary conditions change to Neumann, Robin or some other kind, the physical meaning of this paper changes. However, the treatment of the fractional governing equation with the difference method and the fast algorithm remains applicable; only the boundary discretization is slightly different.

**Theorem 1.** *([43]) Assume a positive α satisfies* $0 \le n - 1 < \alpha < n$*. Suppose the function* $f(t)$ *has* $n - 1$ *continuous bounded derivatives on the region* $[t\_1, t\_2]$ *for every* $t\_2 > t\_1$*; then*

$$D\_t^{\alpha} f(t) = {}^C D\_t^{\alpha} f(t) + \sum\_{j=0}^{n-1} \frac{f^{(j)}(t\_1)(t - t\_1)^{j-\alpha}}{\Gamma(1 + j - \alpha)}, \ t\_1 \le t \le t\_2, \tag{11}$$

*where* ${}^C D\_t^{\alpha} f(t)$ *refers to Caputo's fractional derivative [43].*

Through Theorem 1, we have $D\_t^{\alpha} f(t) = {}^C D\_t^{\alpha} f(t)$ under the condition that the second-grade fluid flowing along a straight rectangular duct is subject to the zero initial condition. In the following discussions, we are therefore able to substitute the Riemann–Liouville derivative with Caputo's derivative.
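For $0 < \alpha < 1$, Theorem 1 reduces to $D\_t^{\alpha} f(t) = {}^C D\_t^{\alpha} f(t) + \frac{f(t\_1)(t - t\_1)^{-\alpha}}{\Gamma(1-\alpha)}$, so the two derivatives coincide exactly when $f(t\_1) = 0$. A quick closed-form check for $f(t) = t + c$ on $[0, t\_2]$, using the standard power rule $D\_t^{\alpha} t^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta+1-\alpha)} t^{\beta-\alpha}$ (a textbook formula, not taken from this paper; the sample values are arbitrary):

```python
from math import gamma

alpha, c, t = 0.5, 2.0, 1.5   # sample order, offset, and evaluation point (t1 = 0)

# Riemann-Liouville derivative of f(t) = t + c via the power rule:
rl = t**(1 - alpha) / gamma(2 - alpha) + c * t**(-alpha) / gamma(1 - alpha)

# Caputo differentiates f'(s) = 1 under the integral, so the constant c drops out:
caputo = t**(1 - alpha) / gamma(2 - alpha)

# Theorem 1 correction term: f(0) * t^{-alpha} / Gamma(1 - alpha), with f(0) = c
correction = c * t**(-alpha) / gamma(1 - alpha)
```

Here `rl` equals `caputo + correction`, and the correction vanishes precisely when $f(0) = c = 0$, which is the zero-initial-condition situation exploited in the text.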

#### **3. Analytical Solution**

In this part, we try to obtain the analytical solution of (8)–(10). Firstly, we consider the equation:

$$\frac{\partial u}{\partial t} = \nu \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) - Mu, \tag{12}$$

$$
u(\pm 1, y, t, \tau) = u(x, \pm 1, t, \tau) = 0. \tag{13}
$$

To simplify the calculation, we introduce $u(x, y, t, \tau) = h(x + 1, y + 1, t, \tau)$, after which the problem becomes:

$$\frac{\partial h}{\partial t} = \nu \left( \frac{\partial^2 h}{\partial x^2} + \frac{\partial^2 h}{\partial y^2} \right) - Mh,\tag{14}$$

$$h(0, y, t, \tau) = h(x, 0, t, \tau) = h(2, y, t, \tau) = h(x, 2, t, \tau) = 0. \tag{15}$$

The solution of (14)–(15) is obtained by separation of variables. Defining $h(x, y, t, \tau) = T(t, \tau)\Phi(x, y)$ and the operator $\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$ yields:

$$
\Phi \frac{\partial T}{\partial t} = \nu T \Delta \Phi - MT \Phi, \tag{16}
$$

$$\Phi(0, y) = \Phi(x, 0) = \Phi(2, y) = \Phi(x, 2) = 0. \tag{17}$$

Denote

$$
\Delta\Phi = \eta \cdot \Phi.\tag{18}
$$

It can be deduced immediately that

$$\frac{\partial T}{\partial t} = (\nu \eta - M)T \tag{19}$$

Equation (18) is a Helmholtz equation, and the solution with the boundary conditions (17) can be obtained: $\Phi\_{n,m} = \sin\left(\frac{n\pi}{2}x\right)\sin\left(\frac{m\pi}{2}y\right)$, where $n, m \in \mathbb{N}$. It can then be deduced that $\eta$ can only take the discrete values $\eta = -\frac{n^2 + m^2}{4}\pi^2$. It is simple to determine the solution to Equation (19) as $T = B(\tau)e^{(\nu\eta - M)t}$, where $B(\tau)$ is an arbitrary function. The solution $h(x, y, t, \tau)$ has the following form
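That $\Phi\_{n,m}$ and $\eta$ solve the eigenvalue problem (18) with the boundary conditions (17) can be spot-checked with a five-point finite-difference Laplacian on $[0,2]\times[0,2]$ (a minimal sketch; the mode numbers and grid size are arbitrary choices, not from the paper):

```python
import numpy as np

n, m, K = 3, 2, 400                       # mode numbers and interior grid size
h = 2.0 / (K + 1)
x = np.linspace(h, 2.0 - h, K)            # uniform interior points of [0, 2]
X, Y = np.meshgrid(x, x, indexing="ij")
Phi = np.sin(n * np.pi / 2 * X) * np.sin(m * np.pi / 2 * Y)

# five-point Laplacian; zero padding enforces Phi = 0 on the boundary (17)
P = np.pad(Phi, 1)
lap = (P[2:, 1:-1] + P[:-2, 1:-1] + P[1:-1, 2:] + P[1:-1, :-2] - 4 * Phi) / h**2

eta = -(n**2 + m**2) / 4 * np.pi**2       # predicted eigenvalue
residual = np.max(np.abs(lap - eta * Phi))  # shrinks at rate O(h^2)
```

The residual is small but nonzero, reflecting only the second-order truncation error of the discrete Laplacian.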

$$h(x, y, t, \tau) = \sum\_{n=1}^{+\infty} \sum\_{m=1}^{+\infty} B\_{n,m}(\tau)\, e^{-\left(\nu\pi^2 \frac{n^2 + m^2}{4} + M\right)t} \sin\left(\frac{n\pi}{2}x\right) \sin\left(\frac{m\pi}{2}y\right), \tag{20}$$

and then

$$u(x, y, t, \tau) = \sum\_{n=1}^{+\infty} \sum\_{m=1}^{+\infty} B\_{n,m}(\tau)\, e^{-\left(\nu\pi^2 \frac{n^2 + m^2}{4} + M\right)t} \sin\left(\frac{n\pi}{2}(x + 1)\right) \sin\left(\frac{m\pi}{2}(y + 1)\right). \tag{21}$$

Denote $\frac{1}{\rho} \frac{\partial p}{\partial z} = g(t)$. Equation (8) can be expressed as:

$$\frac{\partial w}{\partial t} = \nu (1 + \lambda D\_t^{\alpha}) \left( \frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} \right) - Mw - g(t). \tag{22}$$

Suppose there is a function $u(x, y, t, \tau)$ satisfying $w(x, y, t) = \int\_0^t u(x, y, t, \tau)\, d\tau$. Substituting this expression into (22) yields

$$\begin{split} u(x, y, t, t) + \int\_{0}^{t} \frac{\partial u(x, y, t, \tau)}{\partial t}\, d\tau &= \int\_{0}^{t} \left(\nu \Delta u(x, y, t, \tau) - M u(x, y, t, \tau)\right) d\tau \\ &\quad + \frac{\nu\lambda}{\Gamma(1-\alpha)} \frac{d}{dt} \int\_{0}^{t} (t - \xi)^{-\alpha} \int\_{0}^{\xi} \Delta u(x, y, \xi, \tau)\, d\tau\, d\xi - g(t). \end{split} \tag{23}$$

From Equation (12), Equation (23) can be reduced as:

$$\frac{\nu\lambda}{\Gamma(1-\alpha)} \frac{d}{dt} \int\_{0}^{t} (t - \xi)^{-\alpha} \int\_{0}^{\xi} \Delta u(x, y, \xi, \tau)\, d\tau\, d\xi - u(x, y, t, t) = g(t). \tag{24}$$

Substituting the solution (21) into Equation (24), yields

$$\begin{split} \sum\_{n=1}^{+\infty} \sum\_{m=1}^{+\infty} \Bigg[ &-\frac{\nu\lambda}{\Gamma(1-\alpha)} \left(\frac{n^2 + m^2}{4}\pi^2\right) \frac{d}{dt} \int\_{0}^{t} (t - \xi)^{-\alpha} \int\_{0}^{\xi} B\_{n,m}(\tau)\, d\tau\, e^{-\left(\nu\pi^2 \frac{n^2 + m^2}{4} + M\right)\xi} d\xi \\ &- B\_{n,m}(t)\, e^{-\left(\nu\pi^2 \frac{n^2 + m^2}{4} + M\right)t} \Bigg] \sin\left(\frac{n\pi}{2}(x + 1)\right) \sin\left(\frac{m\pi}{2}(y + 1)\right) = g(t). \end{split} \tag{25}$$

Take the inner product with $\sin\left(\frac{n\_0\pi}{2}(x + 1)\right) \sin\left(\frac{m\_0\pi}{2}(y + 1)\right)$ on both sides of Equation (25), with the integral interval chosen as $[-1, 1] \times [-1, 1]$. Then, for the left side of Equation (25), we have:

$$\begin{split} &\sum\_{n=1}^{+\infty} \sum\_{m=1}^{+\infty} C\_{n,m}(t) \int\_{-1}^{1} \int\_{-1}^{1} \sin\left(\frac{n\pi}{2}(x+1)\right) \sin\left(\frac{m\pi}{2}(y+1)\right) \sin\left(\frac{n\_0\pi}{2}(x+1)\right) \sin\left(\frac{m\_0\pi}{2}(y+1)\right) dx\, dy \\ &= \sum\_{n=1}^{+\infty} \sum\_{m=1}^{+\infty} \frac{C\_{n,m}(t)}{4} \int\_{-1}^{1} \left[\cos\left(\frac{(n - n\_0)\pi}{2}(x+1)\right) - \cos\left(\frac{(n + n\_0)\pi}{2}(x+1)\right)\right] dx \\ &\quad \cdot \int\_{-1}^{1} \left[\cos\left(\frac{(m - m\_0)\pi}{2}(y+1)\right) - \cos\left(\frac{(m + m\_0)\pi}{2}(y+1)\right)\right] dy \\ &= C\_{n\_0,m\_0}(t), \end{split} \tag{26}$$

where

$$C\_{n,m}(t) = -\frac{\nu\lambda\pi^2(n^2 + m^2)}{4\Gamma(1-\alpha)} \frac{d}{dt} \int\_0^t (t - \xi)^{-\alpha} \int\_0^{\xi} B\_{n,m}(\tau)\, d\tau\, e^{-\left(\nu\pi^2 \frac{n^2 + m^2}{4} + M\right)\xi} d\xi - B\_{n,m}(t)\, e^{-\left(\nu\pi^2 \frac{n^2 + m^2}{4} + M\right)t}.$$

For the right-hand component of Equation (25), the integral is zero when $n\_0$ or $m\_0$ is even. Set $n\_0 = 2k\_1 - 1$, $m\_0 = 2k\_2 - 1$, where $k\_1$ and $k\_2$ are positive integers. Then we have the following integral formula:

$$\int\_{-1}^{1} \int\_{-1}^{1} \sin\left(\frac{(2k\_1 - 1)\pi}{2}(x + 1)\right) \sin\left(\frac{(2k\_2 - 1)\pi}{2}(y + 1)\right) dx dy = \frac{16}{(2k\_1 - 1)(2k\_2 - 1)\pi^2}.\tag{27}$$

By a combination of Equations (26) and (27), the following equation can be obtained:

$$\frac{C\_{n,m}^{(1)}}{\Gamma(1-\alpha)} \frac{d}{dt} \int\_0^t (t - \xi)^{-\alpha} \int\_0^{\xi} B\_{n,m}(\tau)\, d\tau\, e^{-C\_{n,m}^{(2)} \xi} d\xi - B\_{n,m}(t)\, e^{-C\_{n,m}^{(2)} t} = C\_{n,m}^{(3)} g(t), \tag{28}$$

where $n = 2k\_1 - 1$, $m = 2k\_2 - 1$, $C\_{n,m}^{(1)} = -\frac{n^2 + m^2}{4}\pi^2 \nu\lambda$, $C\_{n,m}^{(2)} = \nu\pi^2 \frac{n^2 + m^2}{4} + M$ and $C\_{n,m}^{(3)} = \frac{16}{nm\pi^2}$.

Denoting $\xi = t - \gamma$, Equation (28) can be rewritten as:

$$\begin{split} &\frac{C\_{n,m}^{(1)}}{\Gamma(1-\alpha)} \left( \int\_0^t \gamma^{-\alpha} B\_{n,m}(t - \gamma)\, e^{C\_{n,m}^{(2)}(\gamma - t)}\, d\gamma - C\_{n,m}^{(2)} \int\_0^t \gamma^{-\alpha} \int\_0^{t-\gamma} B\_{n,m}(\tau)\, d\tau\, e^{C\_{n,m}^{(2)}(\gamma - t)}\, d\gamma \right) \\ &- B\_{n,m}(t)\, e^{-C\_{n,m}^{(2)} t} = C\_{n,m}^{(3)} g(t). \end{split} \tag{29}$$

Setting $t = 0$, we have the relationship:

$$B\_{n,m}(0) = -C\_{n,m}^{(3)} \text{g}(0). \tag{30}$$

Multiplying both sides of (29) by $e^{C\_{n,m}^{(2)} t}$ and taking the derivative with respect to $t$ yields:

$$\begin{split} &\frac{C\_{n,m}^{(1)}}{\Gamma(1-\alpha)} \left( t^{-\alpha} B\_{n,m}(0)\, e^{C\_{n,m}^{(2)} t} + \int\_0^t \gamma^{-\alpha} B\_{n,m}'(t - \gamma)\, e^{C\_{n,m}^{(2)} \gamma}\, d\gamma - C\_{n,m}^{(2)} \int\_0^t \gamma^{-\alpha} B\_{n,m}(t - \gamma)\, e^{C\_{n,m}^{(2)} \gamma}\, d\gamma \right) \\ &- \frac{dB\_{n,m}(t)}{dt} = e^{C\_{n,m}^{(2)} t} C\_{n,m}^{(3)} \left( C\_{n,m}^{(2)} g(t) + g'(t) \right). \end{split} \tag{31}$$

Reverting to the variable $\gamma = t - \xi$ and multiplying both sides of Equation (31) by $e^{-C\_{n,m}^{(2)} t}$, we have:

$$\begin{split} \frac{C\_{n,m}^{(1)}}{\Gamma(1-\alpha)} \int\_0^t (t - \xi)^{-\alpha} \frac{d\left(B\_{n,m}(\xi)\, e^{-C\_{n,m}^{(2)} \xi}\right)}{d\xi}\, d\xi &- \frac{d\left(B\_{n,m}(t)\, e^{-C\_{n,m}^{(2)} t}\right)}{dt} - C\_{n,m}^{(2)} B\_{n,m}(t)\, e^{-C\_{n,m}^{(2)} t} \\ &= C\_{n,m}^{(3)} \left( C\_{n,m}^{(2)} g(t) + g'(t) \right) - \frac{C\_{n,m}^{(1)}}{\Gamma(1-\alpha)} t^{-\alpha} B\_{n,m}(0). \end{split} \tag{32}$$

Denoting $A\_{n,m}(t) = B\_{n,m}(t)\, e^{-C\_{n,m}^{(2)} t}$ and according to (30), we have

$$C\_{n,m}^{(1)} \frac{d^{\alpha} A\_{n,m}}{dt^{\alpha}} - \frac{dA\_{n,m}}{dt} - C\_{n,m}^{(2)} A\_{n,m} = C\_{n,m}^{(3)} \left( C\_{n,m}^{(2)} g(t) + g'(t) \right) - \frac{C\_{n,m}^{(1)} A\_{n,m}(0)}{\Gamma(1-\alpha)} t^{-\alpha}, \tag{33}$$

where $n = 2k\_1 - 1$, $m = 2k\_2 - 1$ and $A\_{n,m}(0) = -C\_{n,m}^{(3)} g(0)$. The analytical solution can be obtained by referring to [46].

Then the solution to Equation (22) can be obtained as:

$$w(x, y, t) = \sum\_{\substack{n=1 \\ n = 2k\_1 - 1}}^{+\infty} \sum\_{\substack{m=1 \\ m = 2k\_2 - 1}}^{+\infty} \int\_0^t B\_{n,m}(\tau)\, d\tau \cdot e^{-\left(\nu\pi^2 \frac{n^2 + m^2}{4} + M\right)t} \sin\left(\frac{n\pi}{2}(x + 1)\right) \sin\left(\frac{m\pi}{2}(y + 1)\right), \tag{34}$$

where $B\_{n,m}(t) = e^{C\_{n,m}^{(2)} t} A\_{n,m}(t)$ and $A\_{n,m}(t)$ refers to the solution of (33).

#### **4. Numerical Discretization Method**

*Numerical Scheme*

Firstly, we divide the spatial region $[-a, a] \times [-b, b]$ with the uniform mesh points $x\_i = -a + ih\_x$, $i = 0, 1, \cdots, M\_x$, $y\_j = -b + jh\_y$, $j = 0, 1, \cdots, M\_y$, in which $h\_x = 2a/M\_x$, $h\_y = 2b/M\_y$. For the time region $[0, T]$, we take $t\_n = n\tau$ with time step $\tau = T/N$ for $n = 0, 1, \cdots, N$. Define $\Omega\_h \equiv \{(x\_i, y\_j) \mid 0 \le i \le M\_x,\ 0 \le j \le M\_y\}$ and $\Omega\_{\tau} \equiv \{t\_n \mid 0 \le n \le N\}$. For a net function $w = \{w\_{i,j}^n \mid 0 \le i \le M\_x,\ 0 \le j \le M\_y,\ 0 \le n \le N\}$ defined on $\Omega\_h \times \Omega\_{\tau}$, denote the following symbols for simplicity:

$$\begin{aligned} \nabla\_t w^n\_{i,j} &= \frac{w^n\_{i,j} - w^{n-1}\_{i,j}}{\tau}, \ \delta\_x w^n\_{i,j} = \frac{w^n\_{i,j} - w^n\_{i-1,j}}{h\_x}, \ \delta\_y w^n\_{i,j} = \frac{w^n\_{i,j} - w^n\_{i,j-1}}{h\_y}, \\ \delta\_x^2 w^n\_{i,j} &= \frac{w^n\_{i+1,j} - 2w^n\_{i,j} + w^n\_{i-1,j}}{h\_x^2}, \ \delta\_y^2 w^n\_{i,j} = \frac{w^n\_{i,j+1} - 2w^n\_{i,j} + w^n\_{i,j-1}}{h\_y^2}. \end{aligned}$$

Furthermore, the exact solution at the mesh points is denoted as $W\_{i,j}^n = w(x\_i, y\_j, t\_n)$ for simplicity. Applying the L1 scheme [37] to discretize the fractional derivative at the mesh points $(x\_i, y\_j, t\_n)$, we have:

$$\frac{\partial^{\alpha} W\_{i,j}^n}{\partial t^{\alpha}} = \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \left( c\_0 W\_{i,j}^n - \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) W\_{i,j}^k - c\_{n-1} W\_{i,j}^0 \right) + (R\_1)\_{i,j}^n, \tag{35}$$

where $c\_k = (k + 1)^{1-\alpha} - k^{1-\alpha}$ and $\left|(R\_1)\_{i,j}^n\right| \le C\tau^{2-\alpha}$.
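As a sanity check on the weights $c\_k$, the L1 formula above can be applied to $f(t) = t^2$, whose exact Caputo derivative is $\frac{2t^{2-\alpha}}{\Gamma(3-\alpha)}$ (a standard closed form). A minimal sketch, not the paper's full solver:

```python
import numpy as np
from math import gamma

def l1_caputo(f_vals, alpha, tau):
    """L1 approximation of the Caputo derivative at t_n = n*tau, using
    c_k = (k+1)^{1-alpha} - k^{1-alpha} exactly as in Equation (35)."""
    n = len(f_vals) - 1
    c = np.arange(1, n + 1)**(1 - alpha) - np.arange(n)**(1 - alpha)
    acc = c[0] * f_vals[n] - c[n - 1] * f_vals[0]
    for k in range(1, n):
        acc -= (c[n - k - 1] - c[n - k]) * f_vals[k]
    return tau**(-alpha) / gamma(2 - alpha) * acc

alpha, tau, N = 0.5, 1e-3, 1000
t = np.arange(N + 1) * tau
approx = l1_caputo(t**2, alpha, tau)
exact = 2 * t[-1]**(2 - alpha) / gamma(3 - alpha)   # Caputo D^alpha t^2 at t = 1
```

The observed error is of order $\tau^{2-\alpha}$, consistent with the bound on $(R\_1)\_{i,j}^n$.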

At the mesh points *xi*, *yj*, *tn* , the backward difference method is applied to discretize the time derivative of order one

$$\frac{\partial W\_{i,j}^n}{\partial t} = \nabla\_t W\_{i,j}^n + O(\tau). \tag{36}$$

Use of the central difference scheme yields the discretization schemes for the second order space derivatives

$$\frac{\partial^2 W\_{i,j}^n}{\partial x^2} = \delta\_x^2 W\_{i,j}^n + O\left(h\_x^2\right) \text{ and } \frac{\partial^2 W\_{i,j}^n}{\partial y^2} = \delta\_y^2 W\_{i,j}^n + O\left(h\_y^2\right). \tag{37}$$

Combining (35) and (37), we have the difference schemes for the mixed derivatives of time and space:

$$\frac{\partial^{\alpha}}{\partial t^{\alpha}} \frac{\partial^2}{\partial x^2} W\_{i,j}^n = \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \left( c\_0 \delta\_x^2 W\_{i,j}^n - \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) \delta\_x^2 W\_{i,j}^k - c\_{n-1} \delta\_x^2 W\_{i,j}^0 \right) + (R\_2)\_{i,j}^n, \tag{38}$$

$$\frac{\partial^{\alpha}}{\partial t^{\alpha}} \frac{\partial^2}{\partial y^2} W\_{i,j}^n = \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \left( c\_0 \delta\_y^2 W\_{i,j}^n - \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) \delta\_y^2 W\_{i,j}^k - c\_{n-1} \delta\_y^2 W\_{i,j}^0 \right) + (R\_3)\_{i,j}^n, \tag{39}$$

where $\left|(R\_2)\_{i,j}^n\right| \le C\left(\tau^{2-\alpha} + h\_x^2\right)$ and $\left|(R\_3)\_{i,j}^n\right| \le C\left(\tau^{2-\alpha} + h\_y^2\right)$.

Denote the discretization of $-\frac{1}{\rho} \frac{\partial p}{\partial z}$ at the points $(x\_i, y\_j, t\_n)$ as $g\_{i,j}^n$. Through the difference schemes (35)–(39), we have the final discretization scheme for the governing Equation (8)

$$\begin{split} &\nabla\_t W\_{i,j}^n + M W\_{i,j}^n - \nu \delta\_x^2 W\_{i,j}^n - \nu \delta\_y^2 W\_{i,j}^n \\ &= \nu\lambda \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \left( c\_0 \delta\_x^2 W\_{i,j}^n - \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) \delta\_x^2 W\_{i,j}^k - c\_{n-1} \delta\_x^2 W\_{i,j}^0 \right) \\ &\quad + \nu\lambda \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \left( c\_0 \delta\_y^2 W\_{i,j}^n - \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) \delta\_y^2 W\_{i,j}^k - c\_{n-1} \delta\_y^2 W\_{i,j}^0 \right) + g\_{i,j}^n + R\_{i,j}^n, \end{split} \tag{40}$$

where $R\_{i,j}^n \le C\left(\tau + h\_x^2 + h\_y^2\right)$.

Substituting $W\_{i,j}^n$ with $w\_{i,j}^n$ and dropping the truncation error, we obtain the numerical difference scheme for Equation (8):

$$\begin{split} & \nabla\_{t} w\_{i,j}^{n} + M w\_{i,j}^{n} - \nu \delta\_{x}^{2} w\_{i,j}^{n} - \nu \delta\_{y}^{2} w\_{i,j}^{n} \\ &= \nu \lambda \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \Big( c\_{0} \delta\_{x}^{2} w\_{i,j}^{n} - \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) \delta\_{x}^{2} w\_{i,j}^{k} - c\_{n-1} \delta\_{x}^{2} w\_{i,j}^{0} \Big) \\ &\quad + \nu \lambda \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \Big( c\_{0} \delta\_{y}^{2} w\_{i,j}^{n} - \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) \delta\_{y}^{2} w\_{i,j}^{k} - c\_{n-1} \delta\_{y}^{2} w\_{i,j}^{0} \Big) + g\_{i,j}^{n}. \end{split} \tag{41}$$

By merging the terms at the same time layer, so that the left side contains only the $n$-th time layer and the right side only time layers of order less than $n$, Equation (41) can be rewritten as:

$$\begin{split} \left(\frac{1}{\tau} + M\right) w\_{i,j}^{n} - \frac{\nu}{h\_{x}^{2}} (r\_{1} + 1) \left(w\_{i+1,j}^{n} - 2w\_{i,j}^{n} + w\_{i-1,j}^{n}\right) - \frac{\nu}{h\_{y}^{2}} (r\_{1} + 1) \left(w\_{i,j+1}^{n} - 2w\_{i,j}^{n} + w\_{i,j-1}^{n}\right) \\ = \frac{1}{\tau} w\_{i,j}^{n-1} - \nu r\_{1} \left[\sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) \left(\delta\_{x}^{2} w\_{i,j}^{k} + \delta\_{y}^{2} w\_{i,j}^{k}\right) + c\_{n-1} \left(\delta\_{x}^{2} w\_{i,j}^{0} + \delta\_{y}^{2} w\_{i,j}^{0}\right)\right] + g\_{i,j}^{n}, \end{split} \tag{42}$$
 
$$\text{where } r\_{1} = \frac{\lambda \tau^{-\alpha}}{\Gamma(2-\alpha)} \text{ and } g\_{i,j}^{n} = -\frac{1}{\rho} \frac{\partial p\left(x\_{i}, y\_{j}, t\_{n}\right)}{\partial z}.$$
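The time stepping in (42) needs the coefficients $c\_j$ and the weighted history sum on the right-hand side. A minimal Python sketch, assuming the standard L1 coefficients $c\_j = (j+1)^{1-\alpha} - j^{1-\alpha}$ (their definition in (35) lies outside this excerpt):

```python
def l1_coeffs(alpha, n):
    # Standard L1 coefficients c_j = (j+1)^(1-alpha) - j^(1-alpha);
    # assumed to match the c_j defined in (35).
    return [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n)]

def history_term(alpha, d2w):
    """History part of the L1 bracket in (42) at time level n >= 1:
    sum_{k=1}^{n-1} (c_{n-k-1} - c_{n-k}) d2w[k] + c_{n-1} d2w[0],
    where d2w[k] holds (a scalar stand-in for) delta^2 w at level k."""
    n = len(d2w) - 1          # current level is n; history uses levels 0..n-1
    c = l1_coeffs(alpha, n)
    s = sum((c[n - k - 1] - c[n - k]) * d2w[k] for k in range(1, n))
    return s + c[n - 1] * d2w[0]
```

Note the sanity properties: $c\_0 = 1$, the $c\_j$ decrease monotonically, and for a constant history the weighted sum telescopes to $c\_0$.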

In what follows, the symbol *E* denotes the identity matrix, whose order may differ between sections. Considering the zero-boundary conditions, the discretization scheme (42) can be rewritten in matrix form:

$$\begin{aligned} &\left(\frac{1}{\tau} + M\right) E w^n - \frac{\nu(r\_1 + 1)}{h\_x^2} E \otimes K\_1 w^n - \frac{\nu(r\_1 + 1)}{h\_y^2} K\_2 \otimes E w^n \\ &= \frac{1}{\tau} E w^{n-1} - \nu r\_1 \left[ \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) (E \otimes K\_1 + K\_2 \otimes E) w^k + c\_{n-1} (E \otimes K\_1 + K\_2 \otimes E) w^0 \right] + g^n, \end{aligned}$$

where the symbol ⊗ denotes the Kronecker product [47],

$$\begin{aligned} K\_{1} &= \begin{pmatrix} -2 & 1 \\ 1 & -2 & 1 \\ & \ddots & \ddots & \ddots \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{pmatrix}\_{(M\_{x}-1)\times(M\_{x}-1)}, \quad K\_{2} = \begin{pmatrix} -2 & 1 \\ 1 & -2 & 1 \\ & \ddots & \ddots & \ddots \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{pmatrix}\_{(M\_{y}-1)\times(M\_{y}-1)}, \\\\ w^{n} &= \left( w^{n}\_{1,1}, w^{n}\_{2,1}, \dots, w^{n}\_{M\_{x}-1,1}, w^{n}\_{1,2}, w^{n}\_{2,2}, \dots, w^{n}\_{M\_{x}-1,2}, \dots, w^{n}\_{1,M\_{y}-1}, w^{n}\_{2,M\_{y}-1}, \dots, w^{n}\_{M\_{x}-1,M\_{y}-1} \right)^{T}, \\\\ g^{n} &= \left( g^{n}\_{1,1}, g^{n}\_{2,1}, \dots, g^{n}\_{M\_{x}-1,1}, g^{n}\_{1,2}, g^{n}\_{2,2}, \dots, g^{n}\_{M\_{x}-1,2}, \dots, g^{n}\_{1,M\_{y}-1}, g^{n}\_{2,M\_{y}-1}, \dots, g^{n}\_{M\_{x}-1,M\_{y}-1} \right)^{T}. \end{aligned}$$
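With this x-fastest ordering of $w^n$, $E \otimes K\_1$ applies the second difference in $x$ and $K\_2 \otimes E$ the one in $y$. A small pure-Python sketch of the assembly (dense lists of lists for illustration only; a sparse format would be used in practice, and the grid sizes below are arbitrary):

```python
def tridiag(m):
    # m x m second-difference matrix with stencil (1, -2, 1), as K1 and K2
    return [[-2.0 if i == j else (1.0 if abs(i - j) == 1 else 0.0)
             for j in range(m)] for i in range(m)]

def eye(m):
    # m x m identity matrix E
    return [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]

def kron(A, B):
    # Kronecker product of dense matrices stored as lists of lists
    rb, cb = len(B), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(len(A[0]) * cb)] for i in range(len(A) * rb)]

Mx, My = 4, 5                        # illustrative grid sizes
K1, K2 = tridiag(Mx - 1), tridiag(My - 1)
Lap_x = kron(eye(My - 1), K1)        # delta_x^2 (up to the factor 1/h_x^2)
Lap_y = kron(K2, eye(Mx - 1))        # delta_y^2 (up to the factor 1/h_y^2)
```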

The initial condition is discretized as $w\_{i,j}^0 = 0$ and the boundary conditions as $w\_{0,j}^n = w\_{M\_x,j}^n = w\_{i,0}^n = w\_{i,M\_y}^n = 0$. The above numerical method can be applied to a wide range of situations, for example, the dynamics in porous media governed by Richards' equation [48], which can be treated in the same manner.

Besides the velocity distribution, the shear force is another important quantity to analyze. We consider the shear force $\tau\_{xz}$ in the $xz$-direction at the wall surface ($x = 0$), and the difference scheme is given as:

$$\begin{split} \tau\_{xz} &= \left(\mu + \alpha\_{1}D\_{t}^{\alpha}\right) \frac{\partial w}{\partial x}\Big|\_{x=0} \\ &\approx \left[\mu + \frac{\alpha\_{1}\tau^{-\alpha}}{\Gamma(2-\alpha)}\right] \frac{w\_{1,j}^{n} - w\_{0,j}^{n}}{h\_{x}} - \frac{\alpha\_{1}\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum\_{k=1}^{n-1} \left(c\_{n-k-1} - c\_{n-k}\right) \frac{w\_{1,j}^{k} - w\_{0,j}^{k}}{h\_{x}} - \frac{\alpha\_{1}\tau^{-\alpha} c\_{n-1}}{\Gamma(2-\alpha)} \frac{w\_{1,j}^{0} - w\_{0,j}^{0}}{h\_{x}}. \end{split}$$

Due to the symmetry of the velocity in the $x$- and $y$-directions, the shear force in the $yz$-direction at the wall surface $y = 0$ is the same as that in the $xz$-direction.

#### **5. Feasibility Analysis**

Denote $V\_h = \left\{ v \mid v \text{ is a net function on } \Omega\_h \times \Omega\_\tau, \ v\_{i,j}^n = 0 \text{ when } i = 0, M\_x \text{ or } j = 0, M\_y \right\}$. For $w^n, v^n \in V\_h$, we define the discrete inner product and norm:

$$(w^n, v^n) = h\_x h\_y \sum\_{i=1}^{M\_x - 1} \sum\_{j=1}^{M\_y - 1} w\_{i,j}^n v\_{i,j}^n \text{ and } \left\|w^n\right\|^2 = (w^n, w^n). \tag{43}$$
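The discrete inner product and norm in (43) translate directly to code; a small sketch with the interior values stored as a 2-D list:

```python
def inner(w, v, hx, hy):
    # (w, v) = hx * hy * sum over interior nodes of w_{i,j} * v_{i,j}, as in (43)
    return hx * hy * sum(w[i][j] * v[i][j]
                         for i in range(len(w)) for j in range(len(w[0])))

def norm_sq(w, hx, hy):
    # ||w||^2 = (w, w)
    return inner(w, w, hx, hy)
```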

**Lemma 1.** *[49] The matrix* **A** ⊗ **B** *is symmetric positive definite provided that both* $\mathbf{A} \in \mathbb{R}^{n \times n}$ *and* $\mathbf{B} \in \mathbb{R}^{n \times n}$ *are symmetric positive definite. For any* $\mathbf{0} \ne \mathbf{v} \in \mathbb{R}^{n^2}$*, it holds that:*

$$\mathbf{v}^T(\mathbf{A}\otimes\mathbf{B})\mathbf{v}>0.\tag{44}$$

**Lemma 2.** *[50] For all* **A** *and* **B**, $(\mathbf{A} \otimes \mathbf{B})^T = \mathbf{A}^T \otimes \mathbf{B}^T$.

**Lemma 3.** *For* $w, v \in \Omega\_h \times \Omega\_\tau$*, it is straightforward to check that* $\left(\delta\_x^2 w^k, v^k\right) = -\left(\delta\_x w^k, \delta\_x v^k\right)$ *under the zero-boundary conditions by applying the discrete analogue of integration by parts.*

**Lemma 4.** *[37] For the coefficients $c\_j$ in (35), the vector $S = [S\_1, S\_2, \dots, S\_N]^T$ and a constant $P$, it holds that:*

$$\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\sum\_{k=1}^{N} \left[c\_0 S\_k - \sum\_{j=1}^{k-1} \left(c\_{k-j-1} - c\_{k-j}\right) S\_j - c\_{k-1} P\right] S\_k \ge \frac{T^{-\alpha}}{2\Gamma(1-\alpha)} \sum\_{k=1}^{N} S\_k^2 - \frac{T^{1-\alpha}}{2\tau \Gamma(2-\alpha)} P^2.$$

*5.1. Solvability*

**Theorem 2.** *Denote $w\_{i,j}^n$ as the numerical solution of Equations (8)–(10) for $i = 0, 1, \cdots, M\_x$, $j = 0, 1, \cdots, M\_y$ and $n = 0, 1, \cdots, N$; then (42) is uniquely solvable.*

**Proof.** Denote the coefficient matrix $\mathbf{G} = \left(\frac{1}{\tau} + M\right) E - \frac{\nu(r\_1+1)}{h\_x^2} E \otimes K\_1 - \frac{\nu(r\_1+1)}{h\_y^2} K\_2 \otimes E$. Firstly, using Lemma 2, we have:

$$\mathbf{G}^T = \left(\frac{1}{\tau} + M\right) E^T - \frac{\nu(r\_1 + 1)}{h\_x^2} E^T \otimes K\_1^T - \frac{\nu(r\_1 + 1)}{h\_y^2} K\_2^T \otimes E^T = \mathbf{G},$$

i.e., the matrix **G** is symmetric. Furthermore, **G** is easily verified to be strictly diagonally dominant with positive diagonal entries, so **G** is positive definite. Therefore, the numerical difference scheme has a unique solution.
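The two claims in the proof, symmetry and strict diagonal dominance of **G**, can be spot-checked numerically. A sketch that assembles **G** directly from its 5-point stencil with the x-fastest ordering (the grid sizes, steps and $\lambda = 0.1$ below are illustrative assumptions):

```python
import math

def build_G(Mx, My, hx, hy, tau, M, nu, r1):
    # G = (1/tau + M) E - nu(r1+1)/hx^2 (E kron K1) - nu(r1+1)/hy^2 (K2 kron E),
    # assembled row by row from the 5-point stencil
    n = (Mx - 1) * (My - 1)
    idx = lambda i, j: (j - 1) * (Mx - 1) + (i - 1)   # i = 1..Mx-1, j = 1..My-1
    cx, cy = nu * (r1 + 1) / hx**2, nu * (r1 + 1) / hy**2
    G = [[0.0] * n for _ in range(n)]
    for j in range(1, My):
        for i in range(1, Mx):
            p = idx(i, j)
            G[p][p] = 1.0 / tau + M + 2 * cx + 2 * cy
            if i > 1:
                G[p][idx(i - 1, j)] = -cx
            if i < Mx - 1:
                G[p][idx(i + 1, j)] = -cx
            if j > 1:
                G[p][idx(i, j - 1)] = -cy
            if j < My - 1:
                G[p][idx(i, j + 1)] = -cy
    return G

alpha, tau, lam = 0.5, 0.01, 0.1
r1 = lam * tau**(-alpha) / math.gamma(2 - alpha)   # r1 as defined after (42)
G = build_G(5, 6, 0.25, 0.2, tau, 1.0, 1.0, r1)
```

The diagonal entry $\frac{1}{\tau} + M + 2c\_x + 2c\_y$ always exceeds the off-diagonal row sum (at most $2c\_x + 2c\_y$), which is exactly the strict diagonal dominance used above.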

*5.2. Stability*

**Theorem 3.** *The scheme (41) possesses unconditional stability, which satisfies:*

$$\left\|w^{N}\right\|^2 \le \frac{T}{2M} \max\_{1 \le n \le N} \left\|g^{n}\right\|^2.$$

**Proof.** Multiplying both sides of Equation (41) by $\tau h\_x h\_y w\_{i,j}^n$, and summing $i$ from 1 to $M\_x - 1$, $j$ from 1 to $M\_y - 1$ and $n$ from 1 to $N$, we derive the following equation:

$$\begin{split} &\tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} w\_{i,j}^n \nabla\_t w\_{i,j}^n + M\tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} w\_{i,j}^n w\_{i,j}^n - \nu\tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} w\_{i,j}^n \left(\delta\_x^2 w\_{i,j}^n + \delta\_y^2 w\_{i,j}^n\right) \\ &- \frac{\nu\lambda\tau^{-\alpha}}{\Gamma(2-\alpha)}\tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} \left( c\_0\delta\_x^2 w\_{i,j}^n - \sum\_{k=1}^{n-1}(c\_{n-k-1}-c\_{n-k})\delta\_x^2 w\_{i,j}^k - c\_{n-1}\delta\_x^2 w\_{i,j}^0 \right) w\_{i,j}^n \\ &- \frac{\nu\lambda\tau^{-\alpha}}{\Gamma(2-\alpha)}\tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} \left( c\_0\delta\_y^2 w\_{i,j}^n - \sum\_{k=1}^{n-1}(c\_{n-k-1}-c\_{n-k})\delta\_y^2 w\_{i,j}^k - c\_{n-1}\delta\_y^2 w\_{i,j}^0 \right) w\_{i,j}^n \\ &= \tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} w\_{i,j}^n g\_{i,j}^n. \end{split}$$

By applying the inequality $a(a - b) \ge \frac{1}{2}\left(a^2 - b^2\right)$ and considering the zero initial condition, the first term satisfies:

$$\begin{split} \tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} w\_{i,j}^{n} \nabla\_t w\_{i,j}^{n} &\ge \frac{1}{2} h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} \left[ \left( w\_{i,j}^{n} \right)^{2} - \left( w\_{i,j}^{n-1} \right)^{2} \right] \\ &= \frac{1}{2} h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \left[ \left( w\_{i,j}^{N} \right)^{2} - \left( w\_{i,j}^{0} \right)^{2} \right] = \frac{1}{2} \left( \left\| w^{N} \right\|^{2} - \left\| w^{0} \right\|^{2} \right) = \frac{1}{2} \left\| w^{N} \right\|^{2}. \end{split}$$

Considering the relationship between the norm and inner product, the second term yields

$$M\tau h\_x h\_y \sum\_{i=1}^{M\_x - 1} \sum\_{j=1}^{M\_y - 1} \sum\_{n=1}^N w\_{i,j}^n w\_{i,j}^n = M\tau \sum\_{n=1}^N \left( w^n, w^n \right) = M\tau \sum\_{n=1}^N \left\|w^n\right\|^2.$$

By using the Lemma 3, for the third term, we have

$$\begin{split} & -\nu \tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} w\_{i,j}^{n} \left( \delta\_x^{2} w\_{i,j}^{n} + \delta\_y^{2} w\_{i,j}^{n} \right) \\ &= \nu \tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} \delta\_x w\_{i,j}^{n} \delta\_x w\_{i,j}^{n} + \nu \tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} \delta\_y w\_{i,j}^{n} \delta\_y w\_{i,j}^{n} \\ &= \nu \tau \sum\_{n=1}^{N} \left( \left\| \delta\_x w^{n} \right\|^{2} + \left\| \delta\_y w^{n} \right\|^{2} \right) \ge 0. \end{split}$$

By applying Lemma 4, the fourth term satisfies:

$$\begin{split} &-\frac{\nu\lambda\tau^{-\alpha}}{\Gamma(2-\alpha)}\tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} \left( c\_0\delta\_x^2 w\_{i,j}^n - \sum\_{k=1}^{n-1}(c\_{n-k-1}-c\_{n-k})\delta\_x^2 w\_{i,j}^k - c\_{n-1}\delta\_x^2 w\_{i,j}^0 \right) w\_{i,j}^n \\ &= \frac{\nu\lambda\tau^{-\alpha}}{\Gamma(2-\alpha)}\tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^{N} \left( c\_0\delta\_x w\_{i,j}^n - \sum\_{k=1}^{n-1}(c\_{n-k-1}-c\_{n-k})\delta\_x w\_{i,j}^k - c\_{n-1}\delta\_x w\_{i,j}^0 \right) \delta\_x w\_{i,j}^n \\ &\ge \nu\lambda\tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \left[ \frac{T^{-\alpha}}{2\Gamma(1-\alpha)} \sum\_{n=1}^{N} \left(\delta\_x w\_{i,j}^n\right)^2 - \frac{T^{1-\alpha}}{2\tau\Gamma(2-\alpha)} \left(\delta\_x w\_{i,j}^0\right)^2 \right] \\ &= \frac{\nu\lambda\tau T^{-\alpha}}{2\Gamma(1-\alpha)} \sum\_{n=1}^{N} \left\|\delta\_x w^n\right\|^2 - \frac{\nu\lambda T^{1-\alpha}}{2\Gamma(2-\alpha)} \left\|\delta\_x w^0\right\|^2 \ge 0. \end{split}$$

Similarly, for the fifth term, it satisfies

$$-\frac{\nu\lambda\tau^{-\alpha}}{\Gamma(2-\alpha)}\tau h\_x h\_y \sum\_{i=1}^{M\_x-1} \sum\_{j=1}^{M\_y-1} \sum\_{n=1}^N \left( c\_0 \delta\_y^2 w\_{i,j}^n - \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) \delta\_y^2 w\_{i,j}^k - c\_{n-1} \delta\_y^2 w\_{i,j}^0 \right) w\_{i,j}^n \ge 0.$$

By using the Cauchy–Schwarz inequality, the last term is bounded as:

$$\tau h\_x h\_y \sum\_{i=1}^{M\_x - 1} \sum\_{j=1}^{M\_y - 1} \sum\_{n=1}^N w\_{i,j}^n g\_{i,j}^n = \tau \sum\_{n=1}^N \left( w^n, g^n \right) \le M \tau \sum\_{n=1}^N \left\| w^n \right\|^2 + \frac{\tau}{4M} \sum\_{n=1}^N \left\| g^n \right\|^2.$$

As a conclusion, we deduce:

$$\left\| w^{N} \right\|^{2} \leq \frac{\tau}{2M} \sum\_{n=1}^{N} \left\| g^{n} \right\|^{2} \leq \frac{T}{2M} \max\_{1 \leq n \leq N} \left\| g^{n} \right\|^{2}.$$

#### *5.3. Convergence*

Define the error $e\_{i,j}^n = w\_{i,j}^n - w\left(x\_i, y\_j, t\_n\right)$. Taking the difference between Equations (40) and (41), we deduce that the error satisfies:

$$\begin{split} & \nabla\_{t} e\_{i,j}^{n} + M e\_{i,j}^{n} - \nu \delta\_{x}^{2} e\_{i,j}^{n} - \nu \delta\_{y}^{2} e\_{i,j}^{n} \\ &= \nu \lambda \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \Big( c\_{0} \delta\_{x}^{2} e\_{i,j}^{n} - \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) \delta\_{x}^{2} e\_{i,j}^{k} - c\_{n-1} \delta\_{x}^{2} e\_{i,j}^{0} \Big) \\ &\quad + \nu \lambda \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \Big( c\_{0} \delta\_{y}^{2} e\_{i,j}^{n} - \sum\_{k=1}^{n-1} (c\_{n-k-1} - c\_{n-k}) \delta\_{y}^{2} e\_{i,j}^{k} - c\_{n-1} \delta\_{y}^{2} e\_{i,j}^{0} \Big) + \mathcal{O}\Big(\tau + h\_{x}^{2} + h\_{y}^{2} \Big). \end{split} \tag{45}$$

**Theorem 4.** *The scheme (41) is convergent with the following form:*

$$\left\|e^{N}\right\|^{2} \leq \frac{T}{2M} \left(\tau + h\_{x}^{2} + h\_{y}^{2}\right)^{2}.\tag{46}$$

**Proof.** Similar to the proof of the stability, substituting the source term with the error, we have:

$$\left\|e^{N}\right\|^{2} \leq \frac{\tau}{2M} \sum\_{n=1}^{N} \left(\tau + h\_{x}^{2} + h\_{y}^{2}\right)^{2} = \frac{T}{2M} \left(\tau + h\_{x}^{2} + h\_{y}^{2}\right)^{2}.\tag{47}$$

#### **6. Acceleration of the Fractional Derivative**

The traditional treatment of the fractional derivative uses the L1 scheme, whose computational and storage cost is expensive due to the non-locality of the fractional derivative. The difference scheme at $t = t\_n$ contains a summation of all values from zero to the current time, so the total cost at every spatial point is $\mathcal{O}\left(N^2\right)$. To reduce the computational and storage cost, a fast algorithm [38] is applied. Here we summarize its main idea.

The Caputo fractional derivative of order $0 < \alpha < 1$ can be expressed as the sum of two terms, a local part $C\_l(t\_n)$ and a history part $C\_h(t\_n)$:

$$\begin{array}{lcl} \left.{}^{C}D\_{t}^{\alpha}w(t)\right|\_{t=t\_{n}} &=& \frac{1}{\Gamma(1-\alpha)} \int\_{0}^{t\_{n}} \left(t\_{n}-s\right)^{-\alpha} \frac{\partial w(s)}{\partial s} ds\\ &=& \frac{1}{\Gamma(1-\alpha)} \int\_{t\_{n-1}}^{t\_{n}} \frac{1}{\left(t\_{n}-s\right)^{\alpha}} \frac{\partial w(s)}{\partial s} ds + \frac{1}{\Gamma(1-\alpha)} \int\_{0}^{t\_{n-1}} \frac{1}{\left(t\_{n}-s\right)^{\alpha}} \frac{\partial w(s)}{\partial s} ds\\ &:=& C\_{l}(t\_{n}) + C\_{h}(t\_{n}). \end{array} \tag{48}$$

For the local part, approximating $\frac{\partial w(s)}{\partial s}$ by $\frac{w(t\_n)-w(t\_{n-1})}{\tau}$ yields

$$C\_{l}(t\_{n}) \approx \frac{w(t\_{n}) - w(t\_{n-1})}{\tau\Gamma(1-\alpha)} \int\_{t\_{n-1}}^{t\_{n}} \frac{ds}{(t\_{n}-s)^{\alpha}} = \frac{w(t\_{n}) - w(t\_{n-1})}{\tau^{\alpha}\Gamma(2-\alpha)}.\tag{49}$$

We employ the integration by parts for the history part

$$C\_{h}(t\_{n}) = \frac{1}{\Gamma(1-\alpha)} \left[ \frac{w(t\_{n-1})}{\tau^{\alpha}} - \frac{w(t\_0)}{t\_{n}^{\alpha}} - \alpha \int\_{0}^{t\_{n-1}} \frac{w(s)}{\left(t\_{n}-s\right)^{\alpha+1}} ds \right]. \tag{50}$$

Treating the kernel $\frac{1}{t^{\alpha+1}}$ in the convolution integral is the key. Referring to [38], on any time interval $[\tau, T]$, the kernel $\frac{1}{t^{\alpha+1}}$ can be approximated by an efficient sum-of-exponentials approximation with a prescribed absolute error $\varepsilon$. Specifically, there exist real positive numbers $\omega\_l$ and $s\_l$ ($l = 1, \cdots, N\_{\text{exp}}$) such that

$$\left| \frac{1}{t^{\alpha+1}} - \sum\_{l=1}^{N\_{\text{exp}}} \omega\_l e^{-s\_l t} \right| \le \varepsilon, \quad \text{for any } t \in [\tau, T], \tag{51}$$

where *N*exp is of the order

$$N\_{\text{exp}} = \mathcal{O}\left(\log\frac{1}{\varepsilon} \left(\log\log\frac{1}{\varepsilon} + \log\frac{T}{\tau}\right) + \log\frac{1}{\tau} \left(\log\log\frac{1}{\varepsilon} + \log\frac{1}{\tau}\right)\right). \tag{52}$$

Equation (51) embodies the main idea of the fast algorithm. The sum-of-exponentials approximation for the kernel $\frac{1}{t^{\beta}}$ can also be generalized to the order $0 < \alpha < 2$ [38,51].

We substitute the kernel $\frac{1}{t^{\alpha+1}}$ via formula (51) to approximate the history part as:

$$C\_{h}(t\_{n}) \approx \frac{1}{\Gamma(1-\alpha)} \left[ \frac{w(t\_{n-1})}{\tau^{\alpha}} - \frac{w(t\_{0})}{t\_{n}^{\alpha}} - \alpha \sum\_{l=1}^{N\_{\text{exp}}} \omega\_{l} w\_{hist,l}(t\_{n}) \right],\tag{53}$$

where $w\_{hist,l}(t\_n) = \int\_0^{t\_{n-1}} e^{-s\_l\left(t\_n-t\right)} w(t)\, dt$.

The function $w\_{hist,l}(t\_n)$ is calculated for $n = 1, 2, \cdots, N$ through the following recurrence relationship:

$$w\_{hist,l}(t\_n) = e^{-s\_l \tau} w\_{hist,l}(t\_{n-1}) + \int\_{t\_{n-2}}^{t\_{n-1}} e^{-s\_l(t\_n - t)} w(t)\, dt, \quad w\_{hist,l}(t\_0) = 0. \tag{54}$$

The integral in (54) can be approximated as:

$$\int\_{t\_{n-2}}^{t\_{n-1}} e^{-s\_l(t\_n - t)} w(t)\, dt \approx \frac{e^{-s\_l \tau}}{s\_l^2 \tau} \left[ \left( e^{-s\_l \tau} - 1 + s\_l \tau \right) w^{n-1} + \left( 1 - e^{-s\_l \tau} - e^{-s\_l \tau} s\_l \tau \right) w^{n-2} \right]. \tag{55}$$

To compute $w\_{hist,l}(t\_n)$, as Equation (54) indicates, $w\_{hist,l}(t\_{n-1})$ has already been computed and stored, so only $\mathcal{O}(1)$ work is needed at each step. As (53) indicates, evaluating the fractional derivative then costs $\mathcal{O}\left(N\_{\text{exp}}\right)$ at each time step, that is, a reduction from $\mathcal{O}(N)$ to $\mathcal{O}(\log N)$ or $\mathcal{O}\left(\log^2 N\right)$.
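The $\mathcal{O}(1)$-per-step history update of (54) with the quadrature (55) can be sketched as follows. Since (55) is exact for $w$ piecewise linear in time, marching a globally linear $w$ reproduces the convolution integral to round-off; the single mode $s$ below is an arbitrary illustrative value, not an actual sum-of-exponentials node:

```python
import math

def hist_update(W_prev, w_nm1, w_nm2, s, tau):
    # One step of (54): w_hist(t_n) = e^{-s tau} w_hist(t_{n-1}) + integral
    # over [t_{n-2}, t_{n-1}], with the integral approximated by (55)
    e = math.exp(-s * tau)
    a = e * (e - 1.0 + s * tau) / (s * s * tau)       # weight of w^{n-1}
    b = e * (1.0 - e - e * s * tau) / (s * s * tau)   # weight of w^{n-2}
    return e * W_prev + a * w_nm1 + b * w_nm2

# March the recurrence for w(t) = t with a single illustrative mode s
s, tau, N = 2.0, 0.05, 40
w = lambda t: t
W = 0.0                 # w_hist(t_1) = 0, since the integration range is [0, t_0]
for n in range(2, N + 1):
    W = hist_update(W, w((n - 1) * tau), w((n - 2) * tau), s, tau)
# W now holds the history integral over [0, t_{N-1}] of e^{-s (t_N - t)} w(t)
```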

In summary, the fast evaluation of the Caputo fractional derivative at $t = t\_n$ is given as:

$${}^{F}D\_{t}^{\alpha}w(x,y,t\_{n}) = \frac{w^{n} - w^{n-1}}{\tau^{\alpha}\Gamma(2-\alpha)} + \frac{1}{\Gamma(1-\alpha)} \left[ \frac{w^{n-1}}{\tau^{\alpha}} - \frac{w^{0}}{t\_{n}^{\alpha}} - \alpha \sum\_{l=1}^{N\_{\text{exp}}} \omega\_{l} w\_{hist,l}(t\_{n}) \right] + R\_{1}, \tag{56}$$

where $|R\_1| \le C\left(\tau^{2-\alpha} + \varepsilon\right)$ and the recurrence relation satisfies (54) and (55). Combining (56) and (37), we have:

$$\frac{\partial^{\alpha}}{\partial t^{\alpha}} \frac{\partial^2}{\partial x^2} w\_{i,j}^n = \frac{\delta\_x^2 w\_{i,j}^n - \delta\_x^2 w\_{i,j}^{n-1}}{\tau^{\alpha} \Gamma(2-\alpha)} + \frac{1}{\Gamma(1-\alpha)} \left[ \frac{\delta\_x^2 w\_{i,j}^{n-1}}{\tau^{\alpha}} - \frac{\delta\_x^2 w\_{i,j}^0}{t\_n^{\alpha}} - \alpha \sum\_{l=1}^{N\_{\text{exp}}} \omega\_l \delta\_x^2 w\_{hist,l}(t\_n) \right],\tag{57}$$

$$\frac{\partial^{\alpha}}{\partial t^{\alpha}} \frac{\partial^2}{\partial y^2} w\_{i,j}^n = \frac{\delta\_y^2 w\_{i,j}^n - \delta\_y^2 w\_{i,j}^{n-1}}{\tau^{\alpha} \Gamma(2-\alpha)} + \frac{1}{\Gamma(1-\alpha)} \left[ \frac{\delta\_y^2 w\_{i,j}^{n-1}}{\tau^{\alpha}} - \frac{\delta\_y^2 w\_{i,j}^0}{t\_n^{\alpha}} - \alpha \sum\_{l=1}^{N\_{\text{exp}}} \omega\_l \delta\_y^2 w\_{hist,l}(t\_n) \right],\tag{58}$$

where

$$\begin{aligned} \delta\_x^2 w\_{hist,l}(t\_n) &= e^{-s\_l\tau}\delta\_x^2 w\_{hist,l}(t\_{n-1}) + \int\_{t\_{n-2}}^{t\_{n-1}} e^{-s\_l(t\_n-t)}\delta\_x^2 w(t)\, dt, \quad \delta\_x^2 w\_{hist,l}(t\_0) = 0, \\ \int\_{t\_{n-2}}^{t\_{n-1}} e^{-s\_l(t\_n-t)}\delta\_x^2 w(t)\, dt &\approx \frac{e^{-s\_l\tau}}{s\_l^2\tau}\left[\left(e^{-s\_l\tau} - 1 + s\_l\tau\right)\delta\_x^2 w^{n-1} + \left(1 - e^{-s\_l\tau} - e^{-s\_l\tau}s\_l\tau\right)\delta\_x^2 w^{n-2}\right], \end{aligned}$$

and analogously for $\delta\_y^2 w\_{hist,l}(t\_n)$. By combining these relations, we deduce the final difference scheme:

$$\begin{split} & \left( \frac{1}{\tau} + M \right) w\_{i,j}^{n} - \frac{\nu r\_{2}}{h\_{x}^{2}} \left( w\_{i+1,j}^{n} - 2w\_{i,j}^{n} + w\_{i-1,j}^{n} \right) - \frac{\nu r\_{2}}{h\_{y}^{2}} \left( w\_{i,j+1}^{n} - 2w\_{i,j}^{n} + w\_{i,j-1}^{n} \right) \\ & = \frac{1}{\tau} w\_{i,j}^{n-1} + \frac{\nu \lambda}{\tau^{\alpha}} \left( \frac{1}{\Gamma(1-\alpha)} - \frac{1}{\Gamma(2-\alpha)} \right) \left( \delta\_{x}^{2} w\_{i,j}^{n-1} + \delta\_{y}^{2} w\_{i,j}^{n-1} \right) \\ & \quad - \frac{\nu \lambda}{\Gamma(1-\alpha) t\_{n}^{\alpha}} \left( \delta\_{x}^{2} w\_{i,j}^{0} + \delta\_{y}^{2} w\_{i,j}^{0} \right) - \frac{\nu \lambda \alpha}{\Gamma(1-\alpha)} \sum\_{l=1}^{N\_{\text{exp}}} \omega\_{l} \left[ \delta\_{x}^{2} w\_{hist,l}(t\_{n}) + \delta\_{y}^{2} w\_{hist,l}(t\_{n}) \right] + g\_{i,j}^{n}, \end{split} \tag{59}$$

where $r\_2 = \frac{\lambda}{\tau^{\alpha}\Gamma(2-\alpha)} + 1$.

The discretization scheme (59) can be rewritten in a matrix form:

$$\begin{aligned} &\left[ \left( \frac{1}{\tau} + M \right) E - \frac{\nu r\_2}{h\_x^2} E \otimes K\_1 - \frac{\nu r\_2}{h\_y^2} K\_2 \otimes E \right] w^n \\ &= \left[ \frac{1}{\tau} E + \frac{\nu \lambda}{\tau^{\alpha}} \left( \frac{1}{\Gamma(1-\alpha)} - \frac{1}{\Gamma(2-\alpha)} \right) (E \otimes K\_1 + K\_2 \otimes E) \right] w^{n-1} + g^n \\ &\quad - \frac{\nu \lambda}{\Gamma(1-\alpha) t\_n^{\alpha}} (E \otimes K\_1 + K\_2 \otimes E) w^0 - \frac{\nu \lambda \alpha}{\Gamma(1-\alpha)} \sum\_{l=1}^{N\_{\text{exp}}} \omega\_l \left[ \delta\_x^2 w\_{hist,l}(t\_n) + \delta\_y^2 w\_{hist,l}(t\_n) \right]. \end{aligned} \tag{60}$$

#### **7. Results and Discussion**

#### **Example 1.** (*Verification of the discretization scheme*).

The governing equation is solved numerically with the fractional derivative discretized by both the traditional L1 difference method and the fast algorithm. The key is how to verify the correctness of the difference method. As Section 3 indicates, the exact solution is complicated. As a modification, a source term is introduced and the governing equation becomes:

$$\frac{\partial w}{\partial t} = \nu \left( 1 + \lambda \frac{D^{\alpha}}{Dt^{\alpha}} \right) \left( \frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} \right) - Mw + f(x, y, t), \tag{61}$$

with the initial distribution and the boundary distributions:

$$w(x, y, 0) = 0,\tag{62}$$

$$w(\pm 1, y, t) = w(x, \pm 1, t) = 0.\tag{63}$$

Define an exact solution of (61)–(63) as $w(x, y, t) = (x - 1)^2 (x + 1)^2 (y - 1)^2 (y + 1)^2 t^2$; the expression of the source term can then be deduced:

$$\begin{split} f(x,y,t) &= (x-1)^2(x+1)^2(y-1)^2(y+1)^2\, t(2+Mt) \\ &\quad -4\nu t^2 \left(\frac{2\lambda}{\Gamma(3-\alpha)}t^{-\alpha} + 1\right) \left[ (3x^2-1)(y-1)^2(y+1)^2 + (3y^2-1)(x-1)^2(x+1)^2 \right]. \end{split}\tag{64}$$
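The algebra of (64) can be spot-checked numerically, using the standard Caputo derivative ${}^C D\_t^{\alpha} t^2 = \frac{2 t^{2-\alpha}}{\Gamma(3-\alpha)}$ (a minimal sketch; the sample points and parameter values are arbitrary):

```python
import math
import random

def f_claimed(x, y, t, alpha, M, nu, lam):
    # Source term as written in (64)
    S = (x - 1)**2 * (x + 1)**2 * (y - 1)**2 * (y + 1)**2
    L = ((3 * x**2 - 1) * (y - 1)**2 * (y + 1)**2
         + (3 * y**2 - 1) * (x - 1)**2 * (x + 1)**2)
    return S * t * (2 + M * t) - 4 * nu * t**2 * (
        2 * lam * t**(-alpha) / math.gamma(3 - alpha) + 1) * L

def f_from_pde(x, y, t, alpha, M, nu, lam):
    # f = w_t + M w - nu (1 + lam D_t^alpha)(w_xx + w_yy) for the chosen w
    S = (x - 1)**2 * (x + 1)**2 * (y - 1)**2 * (y + 1)**2
    L4 = 4 * ((3 * x**2 - 1) * (y - 1)**2 * (y + 1)**2
              + (3 * y**2 - 1) * (x - 1)**2 * (x + 1)**2)
    lap = L4 * t**2                                          # w_xx + w_yy
    D_lap = L4 * 2 * t**(2 - alpha) / math.gamma(3 - alpha)  # Caputo derivative of lap
    return 2 * t * S + M * t**2 * S - nu * (lap + lam * D_lap)
```

Evaluating both expressions at random points in $(-1, 1)^2 \times (0, 1]$ confirms that they agree.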

Figure 2 presents the three-dimensional comparison between the numerical and exact solutions. The distribution of the numerical solution is essentially the same as that of the exact solution, showing a bell-shaped surface that is high in the middle and low at both ends. Tables 1 and 2 show the maximum error $E\left(h\_x, h\_y, \tau\right) = \max\_{0\le i\le M\_x,\, 0\le j\le M\_y} \left|e\_{i,j}^n\right|$, the convergence orders in space, $r\_s = \log\_2 \frac{E\left(h\_x,h\_y,\tau\right)}{E\left(h\_x/2,h\_y/2,\tau\right)}$, and in time, $r\_t = \log\_2 \frac{E\left(h\_x,h\_y,\tau\right)}{E\left(h\_x,h\_y,\tau/2\right)}$, and the computational time of the classical difference scheme versus the fast scheme. The two tables show that the error is very small and the accuracy is $\mathcal{O}\left(h\_x^2 + h\_y^2 + \tau\right)$, which is consistent with the convergence analysis in Theorem 4. Furthermore, the computational times indicate the superiority of the fast scheme: it greatly reduces the calculation time without affecting the overall accuracy.
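The tabulated orders are computed as in the text; a tiny helper, with hypothetical error values for illustration only (not the paper's data):

```python
import math

def observed_order(E_coarse, E_fine):
    # r = log2( E(h, tau) / E(h/2, tau) ) in space, or
    # r = log2( E(h, tau) / E(h, tau/2) ) in time
    return math.log2(E_coarse / E_fine)

# Hypothetical errors from halving the mesh twice, decaying at second order:
errors = [1.6e-3, 4.0e-4, 1.0e-4]
orders = [observed_order(errors[k], errors[k + 1]) for k in range(len(errors) - 1)]
```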

**Figure 2.** The three-dimensional comparison of velocity distributions for *α* = 0.5, *M* = 0.5 and *λ* = 0.1.

**Table 1.** The error and convergence order for space and the comparison of computational time between the finite difference scheme and the fast scheme when *α* = 0.5, *M* = 1, *ν* = 1 and *λ* = 0.1.


**Table 2.** The error and convergence order for time and the comparison of computational time between the finite difference scheme and the fast scheme when *α* = 0.5, *M* = 1, *ν* = 1 and *λ* = 0.1.


**Example 2.** *The effects of the dynamic parameters on the distributions of velocity and shear force subject to various pressure with cosine forms.*

Figures 3–5 show the distribution of the velocity and shear force at $x = 0$ (wall surface) with an oscillating pressure gradient of the form $-\frac{1}{\rho}\frac{\partial p}{\partial z} = \cos(t + 1)$ when we choose $\nu = 1$. The influences of the retardation time parameter on the velocity distribution and the shear force at the wall are shown in Figure 3. For $\lambda = 0$, the influence of the retardation time disappears. With a nonzero retardation time parameter, the overall distribution becomes lower, with the physical meaning that the retardation time parameter reflects a relaxation characteristic, slowing down the velocity propagation and decreasing the magnitude of the shear force at the wall. A bigger retardation time parameter corresponds to a stronger relaxation characteristic. The magnetic parameter also has an important impact on the distributions of velocity and shear force. The value $M = 0$ indicates that the influence of the magnetic field is not considered. As shown in Figure 4, the consideration of the magnetic field makes the distribution at a fixed position smaller, and the distribution decreases further as the magnetic parameter grows. The fractional parameter endows the velocity transport with a memory characteristic. Figure 5 shows that the value of the distribution becomes smaller with an increase of the fractional parameter.

**Figure 3.** The influences of retardation time parameters on the velocity distribution and the shear force *<sup>τ</sup>xz* at the wall surface for *<sup>α</sup>* <sup>=</sup> 0.5, *<sup>M</sup>* <sup>=</sup> 1 and <sup>−</sup><sup>1</sup> *ρ ∂p <sup>∂</sup><sup>z</sup>* = cos(*wt* + 1).

**Figure 4.** The influences of magnetic parameter on the velocity distribution and the shear force *τxz* at the wall surface for *<sup>α</sup>* <sup>=</sup> 0.5, *<sup>λ</sup>* <sup>=</sup> 0.1 and <sup>−</sup><sup>1</sup> *ρ ∂p <sup>∂</sup><sup>z</sup>* = cos(*wt* + 1).

The oscillatory frequency has important impacts on the velocity and shear force distributions. Consider $-\frac{1}{\rho}\frac{\partial p}{\partial z} = \cos(wt + 1)$; the three-dimensional velocity and shear force distributions versus $y$ and $t$ under the effects of frequency are exhibited in Figures 6 and 7, respectively. For $w = 0$, the pressure is constant and the time parameter (for $t > 0$) has no effect on the distributions. For $w \ne 0$, the distributions present an oscillatory form, and the bigger the frequency parameter is, the stronger the oscillatory character of the distributions will be. To discuss the effects of pressures oscillating in space, we consider $-\frac{1}{\rho}\frac{\partial p}{\partial z} = \cos(wz + 1)$ with different $w$. The effects of the frequency parameter on the velocity and shear force distributions versus $x$ and $z$ are exhibited in Figures 8 and 9, respectively. Similarly, the distribution exhibits a normal form for $w = 0$ and an oscillatory form for $w \ne 0$. Finally, the bigger the frequency parameter, the stronger the oscillation of the distribution curve.

**Figure 5.** The influence of the fractional parameter on the velocity distribution and the shear force *τxz* at the wall surface for *M* = 1, *λ* = 0.1 and $-\frac{1}{\rho}\frac{\partial p}{\partial z} = \cos(wt + 1)$.

**Figure 6.** The three-dimensional velocity distribution versus *y* and *t* under oscillatory pressure of cosine form in time, $-\frac{1}{\rho}\frac{\partial p}{\partial z} = \cos(wt + 1)$, for *w* = 0, 1, 2, 3 with *α* = 0.5, *λ* = 0.1 and *M* = 0.1.

**Figure 7.** The three-dimensional shear force distribution versus *y* and *t* under oscillatory pressure of cosine form in time, $-\frac{1}{\rho}\frac{\partial p}{\partial z} = \cos(wt + 1)$, for *w* = 0, 1, 2, 3 with *α* = 0.5, *λ* = 0.1 and *M* = 1.

**Figure 8.** The three-dimensional velocity distribution versus *y* and *z* under oscillatory pressure of cosine form in space, $-\frac{1}{\rho}\frac{\partial p}{\partial z} = \cos(wz + 1)$, for *w* = 0, 1, 2, 3 with *α* = 0.5, *λ* = 0.1 and *M* = 1.

**Figure 9.** The three-dimensional shear force distribution versus *y* and *z* under oscillatory pressure of cosine form in space, $-\frac{1}{\rho}\frac{\partial p}{\partial z} = \cos(wz + 1)$, for *w* = 0, 1, 2, 3 with *α* = 0.5, *λ* = 0.1 and *M* = 1.

#### **8. Conclusions**

This paper considered the motion of a fractional second-grade fluid in a straight rectangular duct. Both an analytical solution and a numerical solution were obtained, and a fast scheme was proposed to accelerate the computation. Two examples were given: one illustrated the accuracy of the numerical solution and the advantage of the fast scheme; the other discussed the impacts of the involved parameters on the velocity distributions and the shear force at the wall surface. The results show that the retardation time parameter plays the role of a relaxation characteristic. The magnetic parameter and the fractional parameter, with its memory characteristic, slow the development of the velocity and shear force distributions. Oscillation of the pressure in space and time makes the distributions oscillatory, and for a larger frequency parameter, the oscillation of the distribution is stronger.

**Author Contributions:** Conceptualization, L.L. (Lin Liu) and S.C.; methodology, S.Z.; software, J.Z. (Jing Zhu); validation, L.L. (Lang Liu), L.Z. and B.Z.; formal analysis, S.C.; investigation, S.Z.; resources, J.Z. (Jing Zhu); data curation, B.Z.; writing—original draft preparation, B.Z.; writing—review and editing, L.F.; visualization, L.L. (Lang Liu); supervision, L.L. (Lin Liu); project administration, J.Z. (Jiangshan Zhang); funding acquisition, L.L. (Lin Liu). All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Natural Science Foundation of China (grant number 11801029), the Fundamental Research Funds for the Central Universities (grant number FRF-TP-20-013A2), and the Open Fund of the State Key Laboratory of Advanced Metallurgy at the University of Science and Technology Beijing (grant No. K22-08).

**Data Availability Statement:** All data reported are obtained by the numerical schemes designed in this paper.

**Acknowledgments:** The authors are grateful to the editor and all the anonymous referees for their valuable comments, which greatly improved the presentation of the article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

The expanded form of (3) is given as:

$$A\_1 = \nabla V + (\nabla V)^T = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ \frac{\partial w}{\partial x} & \frac{\partial w}{\partial y} & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & \frac{\partial w}{\partial x} \\ 0 & 0 & \frac{\partial w}{\partial y} \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & \frac{\partial w}{\partial x} \\ 0 & 0 & \frac{\partial w}{\partial y} \\ \frac{\partial w}{\partial x} & \frac{\partial w}{\partial y} & 0 \end{bmatrix}.$$

$$\begin{split} A_2 &= D_t^{\alpha}A_1 + A_1\nabla V + (\nabla V)^T A_1 \\ &= \begin{bmatrix} 0 & 0 & D_t^{\alpha}\frac{\partial w}{\partial x} \\ 0 & 0 & D_t^{\alpha}\frac{\partial w}{\partial y} \\ D_t^{\alpha}\frac{\partial w}{\partial x} & D_t^{\alpha}\frac{\partial w}{\partial y} & 0 \end{bmatrix} + \begin{bmatrix} \left(\frac{\partial w}{\partial x}\right)^{2} & \frac{\partial w}{\partial x}\frac{\partial w}{\partial y} & 0 \\ \frac{\partial w}{\partial x}\frac{\partial w}{\partial y} & \left(\frac{\partial w}{\partial y}\right)^{2} & 0 \\ 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} \left(\frac{\partial w}{\partial x}\right)^{2} & \frac{\partial w}{\partial x}\frac{\partial w}{\partial y} & 0 \\ \frac{\partial w}{\partial x}\frac{\partial w}{\partial y} & \left(\frac{\partial w}{\partial y}\right)^{2} & 0 \\ 0 & 0 & 0 \end{bmatrix} \\ &= \begin{bmatrix} 2\left(\frac{\partial w}{\partial x}\right)^{2} & 2\frac{\partial w}{\partial x}\frac{\partial w}{\partial y} & D_t^{\alpha}\frac{\partial w}{\partial x} \\ 2\frac{\partial w}{\partial x}\frac{\partial w}{\partial y} & 2\left(\frac{\partial w}{\partial y}\right)^{2} & D_t^{\alpha}\frac{\partial w}{\partial y} \\ D_t^{\alpha}\frac{\partial w}{\partial x} & D_t^{\alpha}\frac{\partial w}{\partial y} & 0 \end{bmatrix}. \end{split}$$

Then the expression for the shear force is obtained:

$$\tau = \mu A_1 + \alpha_1 A_2 + \alpha_2 A_1^2 = \begin{bmatrix} (2\alpha_1+\alpha_2)\left(\frac{\partial w}{\partial x}\right)^2 & (2\alpha_1+\alpha_2)\frac{\partial w}{\partial x}\frac{\partial w}{\partial y} & \left(\mu + \alpha_1 D_t^{\alpha}\right) \frac{\partial w}{\partial x} \\ (2\alpha_1+\alpha_2)\frac{\partial w}{\partial x}\frac{\partial w}{\partial y} & (2\alpha_1+\alpha_2)\left(\frac{\partial w}{\partial y}\right)^2 & \left(\mu + \alpha_1 D_t^{\alpha}\right) \frac{\partial w}{\partial y} \\ \left(\mu + \alpha_1 D_t^{\alpha}\right) \frac{\partial w}{\partial x} & \left(\mu + \alpha_1 D_t^{\alpha}\right) \frac{\partial w}{\partial y} & \alpha_2\left[\left(\frac{\partial w}{\partial x}\right)^2 + \left(\frac{\partial w}{\partial y}\right)^2\right] \end{bmatrix}.$$

#### **References**


### *Article* **Mixed Convection of Fractional Nanofluids Considering Brownian Motion and Thermophoresis**

**Mingwen Chen 1, Yefan Tian 1, Weidong Yang <sup>2</sup> and Xuehui Chen 1,\***


**Abstract:** In this paper, the mixed convective heat transfer mechanism of nanofluids is investigated. Based on the Buongiorno model, we develop a novel Cattaneo–Buongiorno model that reflects the non-local properties as well as Brownian motion and thermophoresis diffusion. Due to the highly non-linear character of the equations, the finite difference method is employed to numerically solve the governing equations. The effectiveness of the numerical method and the convergence order are presented. The results show that the rise in the fractional parameter *δ* enhances the energy transfer process of nanofluids, while the fractional parameter *γ* has the opposite effect. In addition, the effects of Brownian motion and thermophoresis diffusion parameters are also discussed. We infer that the flow and heat transfer mechanism of the viscoelastic nanofluids can be more clearly revealed by controlling the parameters in the Cattaneo–Buongiorno model.

**Keywords:** nanofluids; Brownian motion and thermophoresis; fractional derivative; mixed convection

#### **Citation:** Chen, M.; Tian, Y.; Yang, W.; Chen, X. Mixed Convection of Fractional Nanofluids Considering Brownian Motion and Thermophoresis. *Fractal Fract.* **2022**, *6*, 584. https://doi.org/10.3390/ fractalfract6100584

Academic Editor: Riccardo Caponetto

Received: 4 September 2022 Accepted: 8 October 2022 Published: 12 October 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **1. Introduction**

Compared with natural convection and forced convection, mixed convection is more common and significant in all areas of life, industry, and scientific research, and it holds great prospects for research, such as nuclear reactors, electronic cooling technology, and other industrial processes. More and more researchers are involved in the research of mixed convection. Fan et al. [1] analyzed the laminar mixed convective heat transfer in a level channel of nanofluids. Abu-Nada and Chamkha [2] numerically simulated a stable laminar mixed convective flow of a water–CuO nanofluid in a lid-driven cavity with wavy wall. Aaiza et al. [3] studied the energy transfer of the mixed convective unsteady magnetohydrodynamic (MHD) flow of nanofluids in saturated porous media channels. Aman et al. [4] analyzed the MHD mixed convection Poiseuille flow of gold nanoparticles, taking into account the effects of thermal radiation, chemical reaction, and thermal diffusion. Chakravarty et al. [5] employed the Darcy–Brinkman–Forchheimer model for numerical simulation to study the mixed convection heat transfer of fluids. Khanafer and Vafai [6] studied the double-diffusion mixed convective flow in a lid-driven vessel filled with a liquid-saturated porous medium. Moolya and Anbalgan [7] numerically investigated and optimized the influence of vital parameters on double-diffusion mixed convection. In addition, the stability of mixed convection under different specific conditions was also verified [8,9].

Recently, nanofluids have been widely used to improve various heat transfer properties based on their superior characteristics [10–12], such as macro and micro heat exchangers, aerospace applications, electronic equipment cooling, and other heat transfer enhancement fields. Choi first proposed the concept of the nanofluids [13]. Subsequently, Xuan et al. [14] refined the theory of thermal conductivity of nanofluids. In particular, for complex nanofluids, it is vital to introduce the improved constitution equation to describe the heat transfer phenomena. In 2006, the Buongiorno model was proposed [15], which concluded that

Brownian motion and thermophoresis are the significant slip mechanisms for nanofluids, and explained the principles of Brownian and thermophoresis diffusion. Since then, the model has been broadly applied by researchers, and research on nanofluids has made great progress. Ahmed et al. [16] used the Buongiorno model to study the flow of nanofluids in heat-generating, porous-medium-filled wavy enclosures. Bansch et al. [17] applied the Buongiorno model to analyze the existence of solutions to the steady-state problem of convective transfer of nanofluids. Sohail et al. [18] numerically calculated the flow of fluid on a stretched sheet applying the Buongiorno model. However, the heat and mass diffusion in the model adopts the classical Fourier and Fick's laws, ignoring thermal and mass relaxation effects. In subsequent studies, scholars made different improvements to the model. Rana et al. [19] employed the modified Buongiorno model to study 3D flow and heat transfer of nanoliquids. Puneeth et al. [20] applied the modified Buongiorno model to study the jet flow of ternary nanofluids. It is worth noting that traditional constitutive relations cannot be used to describe the special properties of nanofluids, and fractional calculus theory is widely applied because of its non-locality and long memory characteristics [21,22]. Aman et al. [23] researched the heat and mass transfer of graphene nanofluids past a vertical plate by fractional derivative. Zhao et al. [24] first introduced fractional order into boundary layer equations to study the heat transfer of unsteady natural convection boundary layers. Chen et al. [25] discussed the boundary layer flow of fractional viscoelastic MHD fluids on a stretched thin plate. Liu et al. [26] introduced fractional derivatives to describe heat conduction in the Cattaneo–Christov model. Cao et al. [27] applied the fractional Maxwell model to analyze the flow and heat of nanofluids on a moving plate. Zhao et al. [28] described the unsteady Marangoni convection of fractional Maxwell fluids. Recently, the double fractional Maxwell model has been widely studied [29–32]. The results display that the double fractional Maxwell model is more flexible and accurate in explaining the flow of viscoelastic fluids.

In recent years, researchers applied fractional calculus theory to the Buongiorno model and made different improvements and revisions. Shen et al. [33] introduced the Cattaneo thermal conductivity model with time fractional derivative in the Buongiorno model to describe the abnormal heat transfer of nanofluids. After that, Zhang et al. [34] introduced the spatial fractional derivative based on the improved Buongiorno model to characterize the non-local behavior of nanofluids. To the best of our knowledge, the fractional constitutive model is more effective and reliable to describe the flow and heat transfer phenomena of the viscoelastic nanofluids. The Cattaneo thermal conductivity model with double time fractional derivatives is introduced to modify the Buongiorno model.

Based on the above discussions, in this paper, a generalized Cattaneo–Buongiorno constitutive model is proposed to explore the heat and mass transfer of nanofluids in mixed convection. The governing equations are resolved by the finite difference method. The accuracy of the numerical algorithm is verified. In addition, the effects of diverse important parameters on heat transfer and mass transfer are depicted graphically and analyzed.

#### **2. Mathematical Formulation**

We propose a generalized Cattaneo–Buongiorno constitutive model, defined as follows:

$$q + \lambda_2^\delta \frac{\partial^\delta q}{\partial t^\delta} = -K\lambda_2^\gamma \frac{\partial^{\gamma - 1}}{\partial t^{\gamma - 1}} \left(\frac{\partial T}{\partial y}\right) + h_p \cdot j_p, \quad 0 \le \delta \le \gamma \le 1, \tag{1}$$

where $q$ is the heat flux, $\lambda_2 = k/K$ is the temperature relaxation time, $k$ is the thermal conductivity, $\delta$ and $\gamma$ are the fractional parameters, and $\partial^\delta/\partial t^\delta$ and $\partial^{\gamma-1}/\partial t^{\gamma-1}$ are Caputo fractional derivatives. Subscripts $nf$ and $p$ represent nanofluids and nanosolids, respectively, $h_p = c_p T$ is the specific enthalpy, and $j_p$ is the diffusion mass flux, which is expressed as [27]:

$$j\_p = j\_{p,B} + j\_{p,T} = -\rho\_p D\_B \nabla C - \rho\_p D\_T \frac{\nabla T}{T\_0} \, , \tag{2}$$

where *ρ<sup>p</sup>* is the mass density, and *DB* and *DT* express the Brownian diffusion coefficient and thermophoresis diffusion coefficient, respectively. *T* is the nanofluids temperature. The fractional Maxwell model is introduced as the constitution relationship of the viscoelastic nanofluids [29].

Consider mixed convection of fractional Maxwell nanofluids between two infinitely long parallel plates, which is caused by the temperature difference. The distance between the two parallel plates is *d*, and the system of rectangular coordinates (*x*, *y*) is selected. The *x*-axis is parallel to the flow direction of the fluid, and the *y*-axis is perpendicular to the flow direction of the fluid. A geometry image of the system is shown in Figure 1. The equations of the velocity, temperature, and concentration fields can be expressed as:

$$\rho_{nf} \frac{\partial \mathbf{V}}{\partial t} = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho_{nf}\,\mathbf{g}, \tag{3}$$

$$\left(\rho c_p\right)_{nf}\frac{\partial T}{\partial t} = -\nabla \cdot \mathbf{q} + h_p \nabla \cdot \mathbf{j}_p, \tag{4}$$

$$\frac{\partial \mathbb{C}}{\partial t} = -\frac{1}{\rho_p} \nabla \cdot \mathbf{j}_p - k_r (\mathbb{C} - \mathbb{C}_0), \tag{5}$$

with the boundary and initial conditions:

$$t = 0: u = 0, T = T_0, \mathbb{C} = \mathbb{C}_0; \quad y = 0: u = 0, T = T_0, \mathbb{C} = \mathbb{C}_0; \quad y = d: u = 0, T = T_w, \mathbb{C} = \mathbb{C}_w, \tag{6}$$

where $\rho_{nf}$ is the density of nanofluids and $\nabla p$ is the pressure gradient, expressed as $\nabla p = \partial p/\partial x = -\rho_\infty g$. By invoking the Boussinesq approximation, we have $\rho_\infty - \rho_{nf} = \rho_{nf}\beta_{nf}(T - T_0)$. $(\rho\beta)_{nf}$ is the thermal expansion coefficient, $g$ is the gravitational acceleration, $(\rho c_p)_{nf}$ is the heat capacitance, and $k_r$ is the chemical reaction parameter.

**Figure 1.** Geometric sketch.

The fractional Maxwell model of nanofluids [29] is substituted into the momentum Equation (3). The generalized Cattaneo–Buongiorno constitutive Equation (1) is substituted into energy Equation (4) and concentration Equation (5). The governing equations of nanofluids mixed convection model can be expressed as follows:

$$\left(\lambda_1^{1-\beta}\frac{\partial^{1-\beta}}{\partial t^{1-\beta}} + \lambda_1^{1+\alpha-\beta}\frac{\partial^{1+\alpha-\beta}}{\partial t^{1+\alpha-\beta}}\right)\left(\frac{\partial u}{\partial t} - (\beta_T)_{nf}\,g(T-T_0)\right) = \upsilon \frac{\partial^2 u}{\partial y^2}, \tag{7}$$

$$\begin{split} \left(\lambda_2^{1-\gamma}\frac{\partial^{1-\gamma}}{\partial t^{1-\gamma}} + \lambda_2^{1+\delta-\gamma}\frac{\partial^{1+\delta-\gamma}}{\partial t^{1+\delta-\gamma}}\right)\frac{\partial T}{\partial t} &= \frac{k}{\left(\rho c_p\right)_{nf}}\frac{\partial^2 T}{\partial y^2} + \sigma\lambda_2^{1-\gamma}\frac{\partial^{1-\gamma}}{\partial t^{1-\gamma}}\left(D_B\frac{\partial \mathbb{C}}{\partial y}\frac{\partial T}{\partial y} + \frac{D_T}{T_0}\left(\frac{\partial T}{\partial y}\right)^2\right) \\ &\quad - \sigma\lambda_2^{1+\delta-\gamma}\frac{\partial^{1+\delta-\gamma}}{\partial t^{1+\delta-\gamma}}\left(D_B T\frac{\partial^2 \mathbb{C}}{\partial y^2} + \frac{D_T T}{T_0}\frac{\partial^2 T}{\partial y^2}\right), \end{split} \tag{8}$$

$$\frac{\partial \mathbb{C}}{\partial t} = D\_B \frac{\partial^2 \mathbb{C}}{\partial y^2} + \frac{D\_T}{T\_0} \frac{\partial^2 T}{\partial y^2} - k\_r (\mathbb{C} - \mathbb{C}\_0), \tag{9}$$

where $\alpha$ and $\beta$ are the fractional parameters of shear stress and shear strain, respectively, $\sigma = (\rho c)_p$ is the heat capacity of the nanoparticles, and $\upsilon = \mu/\rho$ is the kinematic viscosity of the nanofluids.

By introducing the following dimensionless variables:

$$u^* = \frac{u}{U_0},\quad x^* = \frac{x}{d},\quad y^* = \frac{y}{d},\quad t^* = \frac{t U_0}{d},\quad \lambda_1^* = \frac{\lambda_1 U_0}{d},$$

$$\lambda_2^* = \frac{\lambda_2 U_0}{d},\quad T^* = \frac{T - T_0}{T_w - T_0},\quad \mathbb{C}^* = \frac{\mathbb{C} - \mathbb{C}_0}{\mathbb{C}_w - \mathbb{C}_0},\quad k_r^* = \frac{k_r d}{U_0},$$

(here $U_0$ denotes the characteristic velocity),

the dimensionless governing equations can be written as (ignoring symbols ∗ for calculation convenience):

$$\left(\lambda_1^{1-\beta}\frac{\partial^{1-\beta}}{\partial t^{1-\beta}} + \lambda_1^{1+\alpha-\beta}\frac{\partial^{1+\alpha-\beta}}{\partial t^{1+\alpha-\beta}}\right)\left(\mathrm{Re}\frac{\partial u}{\partial t} - GrT\right) = \frac{\partial^2 u}{\partial y^2}, \tag{10}$$

$$\begin{split} \left(\lambda_2^{1-\gamma}\frac{\partial^{1-\gamma}}{\partial t^{1-\gamma}} + \lambda_2^{1+\delta-\gamma}\frac{\partial^{1+\delta-\gamma}}{\partial t^{1+\delta-\gamma}}\right)\frac{\partial T}{\partial t} &= \frac{1}{\mathrm{Re}\cdot\mathrm{Pr}}\frac{\partial^2 T}{\partial y^2} + \lambda_2^{1-\gamma}\frac{\partial^{1-\gamma}}{\partial t^{1-\gamma}}\left(\frac{Nb}{\mathrm{Re}}\frac{\partial \mathbb{C}}{\partial y}\frac{\partial T}{\partial y} + \frac{Nt}{\mathrm{Re}}\left(\frac{\partial T}{\partial y}\right)^2\right) \\ &\quad - \lambda_2^{1+\delta-\gamma}\frac{\partial^{1+\delta-\gamma}}{\partial t^{1+\delta-\gamma}}\left(\frac{Nb}{\mathrm{Re}}T\frac{\partial^2 \mathbb{C}}{\partial y^2} + \frac{Nt}{\mathrm{Re}}T\frac{\partial^2 T}{\partial y^2}\right), \end{split} \tag{11}$$

$$\frac{\partial \mathbb{C}}{\partial t} = \frac{1}{\mathrm{Re}\cdot Ln}\frac{\partial^2 \mathbb{C}}{\partial y^2} + \frac{1}{\mathrm{Re}\cdot Ln}\frac{Nt}{Nb}\frac{\partial^2 T}{\partial y^2} - k_r \mathbb{C}, \tag{12}$$

where Re is the Reynolds number, *Gr* is the thermal Grashof number, *α<sup>m</sup>* is the thermal diffusion coefficient of nanofluids, Pr is the generalized Prandtl number, *Nt* is the thermophoresis parameter, *Nb* is the Brownian motion parameter, and *Ln* is the Lewis number. Their expressions are as follows:

$$\mathrm{Re} = \frac{\rho U_0 d}{\mu},\quad Gr = \frac{g(\beta_T)_{nf}(T_w - T_0)d^2}{U_0 \upsilon},\quad \alpha_m = \frac{k}{\left(\rho c_p\right)_{nf}},\quad \mathrm{Pr} = \frac{\upsilon}{\alpha_m},\quad Nt = \frac{\sigma D_T(T_w - T_0)}{T_0 \upsilon},\quad Nb = \frac{\sigma D_B(\mathbb{C}_w - \mathbb{C}_0)}{\upsilon},\quad Ln = \frac{\upsilon}{D_B}.$$
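As a quick numerical illustration of these definitions, the dimensionless groups can be evaluated for a sample parameter set; all numerical values below are illustrative placeholders, not parameters taken from this paper:

```python
# Illustrative parameter values (placeholders, not from the paper)
rho, mu, d, U0 = 1000.0, 1.0e-3, 0.01, 0.05  # density, dynamic viscosity, gap width, characteristic velocity
k_th, rho_cp_nf = 0.6, 4.18e6                # thermal conductivity, (rho*c_p)_nf
sigma, D_B, D_T = 0.5, 1.0e-9, 1.0e-10       # heat-capacity ratio, Brownian and thermophoresis diffusivities
T0, Tw, C0, Cw = 300.0, 310.0, 0.0, 1.0      # reference/wall temperatures and concentrations

nu = mu / rho                             # kinematic viscosity (upsilon in the text)
alpha_m = k_th / rho_cp_nf                # thermal diffusivity of the nanofluid
Re = rho * U0 * d / mu                    # Reynolds number
Pr = nu / alpha_m                         # generalized Prandtl number
Nt = sigma * D_T * (Tw - T0) / (T0 * nu)  # thermophoresis parameter
Nb = sigma * D_B * (Cw - C0) / nu         # Brownian motion parameter
Ln = nu / D_B                             # Lewis number
```

Evaluating the groups this way before a run is a cheap check that the chosen physical parameters land in the regime the discretization is expected to handle.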

The initial and boundary conditions are:

$$t = 0: u = 0, T = 0, \mathbb{C} = 0; y = 0: u = 0, T = 0, \mathbb{C} = 0; y = 1: u = 0, T = 1, \mathbb{C} = 1. \tag{13}$$

#### **3. Numerical Technique**

The finite difference method is applied to solve the dimensionless Equations (10)–(12). Denote $x_i = i\Delta x$ $(i = 0, 1, 2, \cdots, M)$, $y_j = j\Delta y$ $(j = 0, 1, 2, \cdots, N)$, $t_k = k\Delta t$ $(k = 0, 1, 2, \cdots, L)$, where $\Delta x = X_{\max}/M$ and $\Delta y = Y_{\max}/N$ are the space steps, and $\Delta t$ is the time step. The time fractional derivatives are evaluated by employing the L1 algorithm.

First, the L1 algorithm is introduced as ($0 < \alpha < 1$) [35]:

$$\begin{split} \frac{\partial^{\alpha}f(t_{k})}{\partial t^{\alpha}} &= \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)} \sum_{s=0}^{k-1} a_{s} [f(t_{k-s}) - f(t_{k-s-1})] + \mathcal{O}(\Delta t^{2-\alpha}) \\ &= \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)} \left[ f(t_{k}) - a_{k-1} f(t_{0}) - \sum_{s=1}^{k-1} (a_{s-1} - a_{s}) f(t_{k-s}) \right] + \mathcal{O}(\Delta t^{2-\alpha}), \end{split} \tag{14}$$

where $a_s = (s + 1)^{1-\alpha} - s^{1-\alpha}$, $s = 0, 1, 2, \ldots, k-1$.
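The L1 formula (14) translates directly into code. The sketch below is our own illustration, not the authors' implementation; it evaluates the Caputo derivative of order $\alpha$ at the final grid point, and since the truncation term involves the second derivative of $f$, it is exact (up to roundoff) for linear $f$:

```python
import math

def l1_caputo(f_vals, alpha, dt):
    """L1 approximation (14) of the Caputo derivative of order alpha (0 < alpha < 1)
    at t_k, from samples f_vals = [f(t_0), f(t_1), ..., f(t_k)] on a uniform grid."""
    k = len(f_vals) - 1
    # weights a_s = (s+1)^(1-alpha) - s^(1-alpha)
    a = [(s + 1.0) ** (1.0 - alpha) - s ** (1.0 - alpha) for s in range(k)]
    acc = sum(a[s] * (f_vals[k - s] - f_vals[k - s - 1]) for s in range(k))
    return dt ** (-alpha) / math.gamma(2.0 - alpha) * acc

# for f(t) = t, the exact Caputo derivative is t^(1-alpha) / Gamma(2-alpha)
dt, alpha = 0.01, 0.5
approx = l1_caputo([i * dt for i in range(101)], alpha, dt)
exact = 1.0 ** (1.0 - alpha) / math.gamma(2.0 - alpha)
```

Telescoping the sum shows the approximation reproduces $t^{1-\alpha}/\Gamma(2-\alpha)$ exactly for $f(t)=t$, which makes this a convenient unit test before tackling the coupled scheme.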

Second, the integer order discretization in the system of control equations is as follows:

$$\frac{\partial u}{\partial t}\Big|\_{t=t\_k} = \frac{u\_{i,j}^k - u\_{i,j}^{k-1}}{\Delta t} + \mathcal{O}(\Delta t),\tag{15}$$

$$\left. \frac{\partial^2 u}{\partial y^2} \right|\_{t=t\_k} = \frac{u\_{i,j+1}^k - 2u\_{i,j}^k + u\_{i,j-1}^k}{\Delta y^2} + \mathcal{O}(\Delta y^2),\tag{16}$$

$$\frac{\partial \mathbb{C}}{\partial y} \frac{\partial T}{\partial y} \Big|_{t=t_k} = \frac{\mathbb{C}_{i,j}^{k-1} - \mathbb{C}_{i,j-1}^{k-1}}{\Delta y} \frac{T_{i,j}^k - T_{i,j-1}^k}{\Delta y} + \mathcal{O}(\Delta t + \Delta y), \tag{17}$$

$$
\left(\frac{\partial T}{\partial y}\right)^2\bigg|\_{t=t\_k} = \frac{T\_{i,j}^{k-1} - T\_{i,j-1}^{k-1}}{\Delta y} \frac{T\_{i,j}^k - T\_{i,j-1}^k}{\Delta y} + \mathcal{O}(\Delta t + \Delta y),
\tag{18}
$$

$$T\frac{\partial^2 C}{\partial y^2}\Big|\_{t=t\_k} = T\_{i,j}^k \frac{\mathbb{C}\_{i,j+1}^{k-1} - 2\mathbb{C}\_{i,j}^{k-1} + \mathbb{C}\_{i,j-1}^{k-1}}{\Delta y^2} + \mathcal{O}(\Delta t + \Delta y^2),\tag{19}$$

$$T\frac{\partial^2 T}{\partial y^2}\Big|\_{t=t\_k} = T\_{i,j}^{k-1} \frac{T\_{i,j+1}^k - 2T\_{i,j}^k + T\_{i,j-1}^k}{\Delta y^2} + \mathcal{O}(\Delta t + \Delta y^2). \tag{20}$$
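These integer-order stencils are standard; as a small self-check (our illustration, not code from the paper), the central difference (16) can be verified against a smooth function whose second derivative is known:

```python
import math

def second_diff(u, j, dy):
    # central approximation of d^2 u / dy^2 at node j, accurate to O(dy^2), as in (16)
    return (u[j + 1] - 2.0 * u[j] + u[j - 1]) / dy ** 2

# check on u(y) = sin(y): the exact second derivative is -sin(y)
dy, y0 = 1.0e-3, 0.7
u = [math.sin(y0 - dy), math.sin(y0), math.sin(y0 + dy)]
approx = second_diff(u, 1, dy)
exact = -math.sin(y0)
```

The observed error is of order $\Delta y^2$, consistent with the truncation terms quoted in (16)–(20).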

Third, we discretize the time fractional derivatives at $(x_i, y_j, t_k)$ $(0 < \alpha < 1)$ as follows:

$$\frac{\partial^{\alpha}}{\partial t^{\alpha}} \left( \frac{\partial u}{\partial t} \right) \bigg|_{t=t_k} = \frac{\Delta t^{-1-\alpha}}{\Gamma(2-\alpha)} \left( u_{i,j}^{k} - u_{i,j}^{k-1} - \sum_{s=1}^{k-1} (a_{s-1} - a_{s}) \left( u_{i,j}^{k-s} - u_{i,j}^{k-s-1} \right) \right) + \mathcal{O}(\Delta t), \tag{21}$$

$$\frac{\partial^{\alpha}T}{\partial t^{\alpha}}\Big|\_{t=t\_k} = \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)}\left(T^{k}\_{i,j} - \sum\_{s=1}^{k-1} (a\_{s-1} - a\_s)T^{k-s}\_{i,j}\right) + \mathcal{O}\left(\Delta t^{2-\alpha}\right),\tag{22}$$

$$\frac{\partial^{\alpha}}{\partial t^{\alpha}} \left( \frac{\partial \mathbb{C}}{\partial y} \frac{\partial T}{\partial y} \right) \bigg|_{t=t_{k}} = \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)\Delta y^{2}} \left( \left( \mathbb{C}_{i,j}^{k-1} - \mathbb{C}_{i,j-1}^{k-1} \right) \left( T_{i,j}^{k} - T_{i,j-1}^{k} \right) - \sum_{s=1}^{k-1} (a_{s-1} - a_{s}) \left( \mathbb{C}_{i,j}^{k-s-1} - \mathbb{C}_{i,j-1}^{k-s-1} \right) \left( T_{i,j}^{k-s} - T_{i,j-1}^{k-s} \right) \right) + \mathcal{O}(\Delta t + \Delta y), \tag{23}$$

$$\frac{\partial^{\alpha}}{\partial t^{\alpha}} \left( T \frac{\partial^{2} \mathbb{C}}{\partial y^{2}} \right) \bigg|_{t=t_{k}} = \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)\Delta y^{2}} \left( T_{i,j}^{k} \left( \mathbb{C}_{i,j+1}^{k-1} - 2\mathbb{C}_{i,j}^{k-1} + \mathbb{C}_{i,j-1}^{k-1} \right) - \sum_{s=1}^{k-1} (a_{s-1} - a_{s}) T_{i,j}^{k-s} \left( \mathbb{C}_{i,j+1}^{k-s-1} - 2\mathbb{C}_{i,j}^{k-s-1} + \mathbb{C}_{i,j-1}^{k-s-1} \right) \right) + \mathcal{O}(\Delta t + \Delta y^{2}). \tag{24}$$

Then, the iterative schemes resulting from Equations (10)–(12) are:

$$-r_8 u_{i,j-1}^k + (r_6 + r_7 + 2r_8)u_{i,j}^k - r_8 u_{i,j+1}^k = (r_6 + r_7)u_{i,j}^{k-1} + r_6 A_1 + r_7 A_2 + r_6 r_{10} \left( T_{i,j}^k - A_3 \right) + r_7 r_{10} \left( T_{i,j}^k - A_4 \right) + R_{1i,j}^k, \tag{25}$$

$$\begin{split} &\left(-r_3 + r_1 r_4 \left(\mathbb{C}_{i,j}^{k-1} - \mathbb{C}_{i,j-1}^{k-1}\right) + r_1 r_5 \left(T_{i,j}^{k-1} - T_{i,j-1}^{k-1}\right) + r_2 r_5 T_{i,j}^{k-1}\right) T_{i,j-1}^{k} + \left(-r_3 + r_2 r_5 T_{i,j}^{k-1}\right) T_{i,j+1}^{k} \\ &+ \left(r_1 + r_2 + 2r_3 - r_1 r_4 \left(\mathbb{C}_{i,j}^{k-1} - \mathbb{C}_{i,j-1}^{k-1}\right) - r_1 r_5 \left(T_{i,j}^{k-1} - T_{i,j-1}^{k-1}\right) + r_2 r_4 \left(\mathbb{C}_{i,j+1}^{k-1} - 2\mathbb{C}_{i,j}^{k-1} + \mathbb{C}_{i,j-1}^{k-1}\right) - 2r_2 r_5 T_{i,j}^{k-1}\right) T_{i,j}^{k} \\ &= (r_1 + r_2) T_{i,j}^{k-1} + r_1 B_1 + r_2 B_2 - r_1 r_4 B_3 - r_1 r_5 B_4 + r_2 r_4 B_5 + r_2 r_5 B_6 + R_{2i,j}^{k}, \end{split} \tag{26}$$

$$-r_9 \mathbb{C}_{i,j-1}^k + (1 + 2r_9 + k_r \Delta t) \mathbb{C}_{i,j}^k - r_9 \mathbb{C}_{i,j+1}^k = \mathbb{C}_{i,j}^{k-1} + r_9 \frac{Nt}{Nb} \left( T_{i,j+1}^k - 2T_{i,j}^k + T_{i,j-1}^k \right) + R_{3i,j}^k, \tag{27}$$

where $|R_1| \le C(\Delta t + \Delta y^2)$, $|R_2| \le C(\Delta t + \Delta y)$, $|R_3| \le C(\Delta t + \Delta y^2)$, and

$$r_1 = \frac{\lambda_2^{1-\gamma}\,\Delta t^{-(1-\gamma)}}{\Gamma(2-(1-\gamma))},\quad r_2 = \frac{\lambda_2^{1+\delta-\gamma}\,\Delta t^{-(1+\delta-\gamma)}}{\Gamma(2-(1+\delta-\gamma))},\quad r_3 = \frac{\Delta t}{\mathrm{Re}\cdot\mathrm{Pr}\,\Delta y^2},\quad r_4 = \frac{Nb}{\mathrm{Re}}\frac{\Delta t}{\Delta y^2},\quad r_5 = \frac{Nt}{\mathrm{Re}}\frac{\Delta t}{\Delta y^2},$$

$$r_6 = \frac{\lambda_1^{1-\beta}\,\Delta t^{-(1-\beta)}}{\Gamma(2-(1-\beta))},\quad r_7 = \frac{\lambda_1^{1+\alpha-\beta}\,\Delta t^{-(1+\alpha-\beta)}}{\Gamma(2-(1+\alpha-\beta))},\quad r_8 = \frac{\Delta t}{\mathrm{Re}\,\Delta y^2},\quad r_9 = \frac{\Delta t}{\mathrm{Re}\cdot Ln\,\Delta y^2},\quad r_{10} = \frac{Gr\,\Delta t}{\mathrm{Re}},$$

$$A_1 = \sum_{s=1}^{k-1}\left[(1-\beta)_{s-1} - (1-\beta)_s\right]\left(u_{i,j}^{k-s} - u_{i,j}^{k-s-1}\right),\quad A_2 = \sum_{s=1}^{k-1}\left[(1+\alpha-\beta)_{s-1} - (1+\alpha-\beta)_s\right]\left(u_{i,j}^{k-s} - u_{i,j}^{k-s-1}\right),$$

$$A_3 = \sum_{s=1}^{k-1}\left[(1-\beta)_{s-1} - (1-\beta)_s\right]T_{i,j}^{k-s},\quad A_4 = \sum_{s=1}^{k-1}\left[(1+\alpha-\beta)_{s-1} - (1+\alpha-\beta)_s\right]T_{i,j}^{k-s},$$

$$B_1 = \sum_{s=1}^{k-1}\left[(1-\gamma)_{s-1} - (1-\gamma)_s\right]\left(T_{i,j}^{k-s} - T_{i,j}^{k-s-1}\right),\quad B_2 = \sum_{s=1}^{k-1}\left[(1+\delta-\gamma)_{s-1} - (1+\delta-\gamma)_s\right]\left(T_{i,j}^{k-s} - T_{i,j}^{k-s-1}\right),$$

where $(\cdot)_s$ denotes the L1 weight $a_s$ evaluated with the indicated fractional order, and

$$\begin{split} B\_{3} &= \sum\_{s=1}^{k-1} [(1-\gamma)\_{s-1} - (1-\gamma)\_{s}] \left( \mathbf{C}\_{i,j}^{k-s-1} - \mathbf{C}\_{i,j-1}^{k-s-1} \right) (T\_{i,j}^{k-s} - T\_{i,j-1}^{k-s}), \\ B\_{4} &= \sum\_{s=1}^{k-1} [(1-\gamma)\_{s-1} - (1-\gamma)\_{s}] \left( T\_{i,j}^{k-s-1} - T\_{i,j-1}^{k-s-1} \right) (T\_{i,j}^{k-s} - T\_{i,j-1}^{k-s}), \\ B\_{5} &= \sum\_{s=1}^{k-1} [(1+\delta-\gamma)\_{s-1} - (1+\delta-\gamma)\_{s}] T\_{i,j}^{k-s} \left( \mathbf{C}\_{i,j+1}^{k-s-1} - 2\mathbf{C}\_{i,j}^{k-s-1} + \mathbf{C}\_{i,j-1}^{k-s-1} \right), \\ B\_{6} &= \sum\_{s=1}^{k-1} [(1+\delta-\gamma)\_{s-1} - (1+\delta-\gamma)\_{s}] T\_{i,j}^{k-s-1} \left( T\_{i,j+1}^{k-s} - 2T\_{i,j}^{k-s} + T\_{i,j-1}^{k-s} \right). \end{split}$$
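At each time level, (25)–(27) are tridiagonal linear systems in the unknowns at level $k$, so each can be solved in $O(N)$ operations by the Thomas algorithm. The following is a minimal sketch (our own illustration, not the paper's code; it assumes the Dirichlet boundary rows have already been folded into the right-hand side):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c,
    and right-hand side d (a[0] and c[-1] are unused). Returns the solution list."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    # forward elimination
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# sample system: [[2,1,0],[1,2,1],[0,1,2]] x = [3,4,3] has solution x = [1,1,1]
x = thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [3.0, 4.0, 3.0])
```

In the full scheme, one such solve is performed per equation and per time step, with the history sums $A_i$, $B_i$ updated from the stored solution levels.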

The initial and boundary conditions of the discrete scheme are:

$$t = 0: u = 0, T = 0, \mathbb{C} = 0;\quad j = 0: u = 0, T = 0, \mathbb{C} = 0;\quad j = N: u = 0, T = 1, \mathbb{C} = 1. \tag{28}$$

#### **4. Validation of the Numerical Method**

To examine the validity of the numerical method, source terms $f_1(y, t)$, $f_2(y, t)$, and $f_3(y, t)$ are introduced into the governing equations. Prescribing analytical solutions and substituting them into the governing equations yields the expressions of the source terms. A set of numerical solutions is then acquired by the numerical method for comparison with the analytical solutions. The modified system reads:

$$
\left(\lambda\_1^{1-\beta}\frac{\partial^{1-\beta}}{\partial t^{1-\beta}} + \lambda\_1^{1+\alpha-\beta}\frac{\partial^{1+\alpha-\beta}}{\partial t^{1+\alpha-\beta}}\right)\left(\text{Re}\frac{\partial u}{\partial t} - Gr\,T\right) = \frac{\partial^2 u}{\partial y^2} + f\_1(y,t),\tag{29}
$$

$$
\begin{split} \frac{1}{2} \left( \lambda\_2^{1-\gamma} \frac{\partial^{1-\gamma}}{\partial t^{1-\gamma}} + \lambda\_2^{1+\delta-\gamma} \frac{\partial^{1+\delta-\gamma}}{\partial t^{1+\delta-\gamma}} \right) \frac{\partial T}{\partial t} &= \frac{1}{\text{Re}\,\text{Pr}} \frac{\partial^2 T}{\partial y^2} + \lambda\_2^{1-\gamma} \frac{\partial^{1-\gamma}}{\partial t^{1-\gamma}} \left( \frac{\text{Nb}}{\text{Ra}} \frac{\partial \text{C}}{\partial y} \frac{\partial T}{\partial y} + \frac{\text{Nt}}{\text{Ra}} \left( \frac{\partial T}{\partial y} \right)^2 \right) \\ &- \lambda\_2^{1+\delta-\gamma} \frac{\partial^{1+\delta-\gamma}}{\partial t^{1+\delta-\gamma}} \left( \frac{\text{Nb}}{\text{Ra}} T \frac{\partial^2 \text{C}}{\partial y^2} + \frac{\text{Nt}}{\text{Ra}} T \frac{\partial^2 T}{\partial y^2} \right) + f\_2(y, t), \end{split} \tag{30}
$$

$$\frac{\partial C}{\partial t} = \frac{1}{\text{Re}\,Ln}\frac{\partial^2 C}{\partial y^2} + \frac{1}{\text{Re}\,Ln}\frac{Nt}{Nb}\frac{\partial^2 T}{\partial y^2} - Kr\,C + f\_3(y,t), \tag{31}$$

with the new initial and boundary conditions:

$$t = 0:\ u = 0,\ T = 0,\ C = 0; \quad y = 0:\ u = 0,\ T = 0,\ C = 0; \quad y = 1:\ u = 0,\ T = 0,\ C = 0, \tag{32}$$

where

$$\begin{split} f\_{1}(y,t) &= -\frac{2\lambda\_{1}^{1-\beta}y^{2}(y-1)^{2}\left(t^{1+\beta}Gr\,\Gamma(1+\beta)-t^{\beta}\text{Re}\,\Gamma(2+\beta)\right)}{\Gamma(1+\beta)\Gamma(2+\beta)} \\ &- \frac{2\lambda\_{1}^{1+\alpha-\beta}y^{2}(y-1)^{2}\left(t^{1-\alpha+\beta}Gr\,\Gamma(1-\alpha+\beta)-t^{-\alpha+\beta}\text{Re}\,\Gamma(2-\alpha+\beta)\right)}{\Gamma(1-\alpha+\beta)\Gamma(2-\alpha+\beta)} \\ &- 2(1-y)^{2}t^{2}+8y(1-y)t^{2}-2y^{2}t^{2}, \end{split} \tag{33}$$

$$\begin{split} f\_{2}(y,t) &= \frac{2\lambda\_{2}^{1-\gamma}t^{\gamma}y^{2}(y-1)^{2}}{\Gamma(1+\gamma)} + \frac{2\lambda\_{2}^{1+\delta-\gamma}t^{-\delta+\gamma}y^{2}(y-1)^{2}}{\Gamma(1-\delta+\gamma)} - \frac{2(1-y)^{2}t^{2} - 8y(1-y)t^{2} + 2y^{2}t^{2}}{\text{Re}\,\text{Pr}} \\ &+ \frac{96\lambda\_{2}^{1-\gamma}t^{1+\gamma}y^{2}(2y^{2}-3y+1)^{2}(Nb+Nt)}{\text{Re}\,\Gamma(4+\gamma)} \\ &+ \frac{48\lambda\_{2}^{1+\delta-\gamma}t^{3-\delta+\gamma}y^{2}(y-1)^{2}\left(6y^{2}Nb+6y^{2}Nt-6yNb-6yNt+Nb+Nt\right)}{\text{Re}\,\Gamma(4-\delta+\gamma)}, \end{split} \tag{34}$$

$$f\_3(y,t) = 2y^2(1-y)^2t - \frac{(2-12y+12y^2)t^2(1+Nt/Nb)}{\text{Re}\cdot Ln} + Kr \cdot y^2(1-y)^2t^2. \tag{35}$$

The following analytical solutions are obtained:

$$u(y,t) = T(y,t) = C(y,t) = y^2(1-y)^2t^2. \tag{36}$$

In Figure 2, the velocity, temperature, and concentration distributions of nanofluids along the *t* direction are given by the numerical and analytical solutions, respectively. It can be seen that the numerical solutions coincide well with the analytical solutions, which confirms the correctness of the numerical algorithm. To examine the convergence order of the numerical method, Tables 1–3 give the *L*<sup>2</sup> error, the *L*<sup>∞</sup> error, and the convergence order of the momentum, energy, and concentration equations for different time steps Δ*t*. The convergence order reaches first order, as expected.
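The convergence orders in Tables 1–3 follow from the standard two-grid estimate *p* = log(*e*(Δ*t*)/*e*(Δ*t*/2))/log 2; a minimal sketch of this computation (the error values below are illustrative, not the paper's data):

```python
import math

def convergence_orders(errors, ratio=2.0):
    """Observed order p from errors on successively refined steps:
    e(dt) ~ K * dt^p  =>  p = log(e_coarse / e_fine) / log(ratio)."""
    return [math.log(e1 / e2) / math.log(ratio)
            for e1, e2 in zip(errors, errors[1:])]

# First-order behavior, e(dt) = K * dt, for dt = 0.04, 0.02, 0.01:
print(convergence_orders([4e-3, 2e-3, 1e-3]))  # -> [1.0, 1.0]
```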

**Figure 2.** Comparisons between the analytical solutions and numerical solutions with respect to *t*.

**Table 1.** The truncation error and convergence order of velocity *u* with Δ*y* = 0.01.


**Table 2.** The truncation error and convergence order of temperature *T* with Δ*y* = 0.01.


**Table 3.** The truncation error and convergence order of concentration *C* with Δ*y* = 0.01.


#### **5. Results and Discussion**

The governing Equations (10)–(12) with conditions (13) are solved by the finite difference method. The space and time steps are Δ*y* = 0.01 and Δ*t* = 0.02, respectively. In this section, we mainly discuss the influence of fractional parameters, Brownian parameters, and thermophoresis parameters on the temperature and concentration of the nanofluids.

#### *5.1. Effects of the Fractional Parameters on the Temperature Field*

Figure 3 describes the relationship between the fractional parameters *δ* and *γ* and the temperature of nanofluids in the *y* direction. In particular, the temperature distributions for different fractional order parameters *δ* with *γ* = 0.9 are shown in Figure 3a. As *δ* increases at the same location, the temperature profile rises uniformly, which means that the heat transfer process of the nanofluids is enhanced with the increase in the fractional parameter *δ*. Figure 3b gives the temperature distributions for different fractional order parameters *γ* with *δ* = 0.1. The results show that the greater the fractional parameter *γ*, the lower the temperature of the nanofluids. It follows that the heat transfer process of the nanofluids is weakened as the fractional parameter *γ* grows.

**Figure 3.** Temperature distributions with respect to *y*: (**a**) for different *δ*; (**b**) for different *γ*.

Figure 4 shows the effect of the fractional order parameters *δ* and *γ* on the temperature of nanofluids in the *t* direction. Figure 4a reveals that, over time, the temperature first rises to a peak, then decreases, and finally reaches a stable value. Figure 4b describes the temperature distributions for *γ* = 1, 0.9, 0.8, 0.7. A peak appears in the temperature as the fractional parameter *γ* decreases, and a smaller *γ* corresponds to a higher temperature peak. For each value of *γ*, the temperature eventually reaches a stable level and no longer changes.

**Figure 4.** Temperature distributions with respect to *t*: (**a**) for different *δ*; (**b**) for different *γ*.

#### *5.2. Effects of the Fractional Parameters on the Concentration Field*

Figure 5 describes the influence of the fractional parameters *δ* and *γ* on the concentration of nanofluids in the *y* direction. Figure 5a shows that the concentration of nanofluids presents a downward trend as the fractional parameter *δ* increases; that is, the distribution of nanoparticles becomes sparser in the same region of *y*. This is mainly because the increase in temperature reduces the concentration of nanoparticles in the flow region. For *δ* = 0.1, the concentration distributions for different parameters *γ* are given in Figure 5b. As the fractional parameter *γ* decreases, the concentration also presents a downward trend. Overall, these results demonstrate that the fractional parameters *δ* and *γ* affect the movement of nanoparticles by changing the temperature and thereby affect the mass transfer process of nanofluids.

**Figure 5.** Concentration distributions with respect to *y*: (**a**) for different *δ*; (**b**) for different *γ*.

The influence of the fractional order parameters *δ* and *γ* on the concentration distribution in the *t* direction is shown in Figure 6. As can be seen in Figure 6a, with the rise in the parameter *δ*, the peak value of the concentration distribution decreases, but eventually tends to be stable. Figure 6b shows the concentration distributions for different fractional parameters *γ*. The peak of the concentration rises as *γ* increases. This is because the increase in the temperature difference of the nanofluids enhances the thermophoresis of the nanoparticles: the nanoparticles quickly shift from the higher-temperature region to the lower-temperature region, so the concentration of the nanoparticles decreases and reaches the stable state more rapidly.

**Figure 6.** Concentration distributions with respect to *t*: (**a**) for different *δ*; (**b**) for different *γ*.

#### *5.3. Effects of Nb and Nt*

Figure 7 describes the temperature and concentration distributions for different Brownian motion parameters *Nb*. Figure 7a shows that the temperature change rate increases with the rise in *Nb*. Physically, stronger Brownian motion contributes to the efficient movement of nanoparticles between the plates, thus improving the heat transfer efficiency of the nanofluids. In contrast to the temperature, the concentration gradually descends for larger *Nb*. The effects of different thermophoresis parameters *Nt* on the temperature and concentration distributions are shown in Figure 8. The temperature presents an upward trend with the increase in *Nt*, which is due to the effect of the heat capacity of the nanoparticles. However, the improvement in thermophoresis results in a decrease in concentration, which is consistent with the results of [36]. Therefore, the enhancement of Brownian diffusion and thermophoresis promotes the heat transfer of nanofluids, which plays a crucial part in the diffusion process of nanoparticles.

**Figure 7.** Effects of *Nb*: (**a**) on temperature distributions; (**b**) on concentration distributions.

**Figure 8.** Effects of *Nt*: (**a**) on temperature distributions; (**b**) on concentration distributions.

#### **6. Conclusions**

In this paper, we investigate the mixed convection of fractional nanofluids considering Brownian motion and thermophoresis. The numerical solutions of the fractional equations are obtained by employing the finite difference method. The effects of the fractional order parameters, Brownian motion parameters, and thermophoresis parameters on the temperature and concentration are discussed. The results show that the rise in the fractional parameter *δ* enhances the energy transfer process of nanofluids, while the increase in the fractional parameter *γ* weakens the heat transfer. However, the opposite effects are found in the concentration distribution. In fact, the change in temperature affects the effective movement of nanoparticles, which is also an important reason for the increase and decrease in concentration. In addition, the enhancement of Brownian diffusion and thermophoresis promotes the heat transfer of nanofluids, which plays a crucial part in the diffusion process of nanoparticles.

**Author Contributions:** Y.T. and M.C.: formal analysis and computed the numerical results; Y.T. and X.C.: writing—review and editing; Y.T. and W.Y.: analyzed the results; M.C.: funding acquisition. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work was supported by the National Natural Science Foundation of China under grant No. 51971031.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors would like to express their gratitude to the anonymous reviewers for their valuable comments on the improvement of this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Nomenclature**


#### **References**


## *Article* **Fractional-Order Windkessel Boundary Conditions in a One-Dimensional Blood Flow Model for Fractional Flow Reserve (FFR) Estimation**

**Timur Gamilov 1,2,3,4,\* and Ruslan Yanbarisov 3,4,5**


**Abstract:** Recent studies have demonstrated the benefits of using fractional derivatives to simulate a blood pressure profile. In this work we propose to combine a one-dimensional model of coronary blood flow with fractional-order Windkessel boundary conditions. This allows us to obtain a greater variety of blood pressure profiles for better model personalization. An algorithm of parameter identification is described, which is used to fit the measured mean value of arterial pressure and to estimate the fractional flow reserve (FFR) for a given patient. The proposed framework is used to investigate the sensitivity of mean blood pressure and fractional flow reserve to the fractional order. We demonstrate that the fractional derivative order significantly affects the FFR, which is used as an indicator of stenosis significance.

**Keywords:** fractional derivative; parameter estimation; coronary hemodynamics; blood flow model; mean arterial pressure; fractional flow reserve

#### **1. Introduction**

Atherosclerotic diseases of coronary vessels are the main reason for myocardial ischemia frequently resulting in disability or death. These diseases are mainly caused by blockages due to an abnormal narrowing in a blood vessel—stenosis [1]. The choice of medical treatment involves evaluation of stenosis significance, which may require invasive measurements. To assess the severity of each stenosis case, clinicians use various hemodynamic indices. The most popular and well-developed index is the fractional flow reserve (FFR), which is a ratio between mean pressure distal (downstream) to stenosis and mean aortic pressure during artificially induced hyperemia [2,3]. Stenoses with values of FFR below 0.8 are considered to be significant and should be surgically treated.

Measuring FFR involves expensive pressure sensors and specialized equipment. Some patients have multiple stenoses with complicated interactions. These problems led to the development of coronary blood flow models capable of estimating FFR from coronary computed tomography angiography (CCTA) and patient's data (age, heart rate, stroke volume, blood pressure, etc.). Some of these models are based on solving three-dimensional Navier–Stokes equations [4], but in this work, we concentrate on one-dimensional (1D) models of blood flow [5–8]. The 1D approach is less time-consuming, and it was shown that 3D and 1D FFR calculations demonstrate similar results [9].

**Citation:** Gamilov, T.; Yanbarisov, R. Fractional-Order Windkessel Boundary Conditions in a One-Dimensional Blood Flow Model for Fractional Flow Reserve (FFR) Estimation. *Fractal Fract.* **2023**, *7*, 373. https://doi.org/10.3390/ fractalfract7050373

Academic Editors: Libo Feng, Yang Liu and Lin Liu

Received: 6 April 2023 Revised: 27 April 2023 Accepted: 28 April 2023 Published: 30 April 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

One-dimensional models with fractional derivatives can provide a compromise between accuracy and computational cost. Fractional-order models extend the concept of differentiability and incorporate nonlocal and memory effects using a small number of parameters. This feature can be used to describe complex flows over different space and time scales without splitting the problem into smaller subproblems. Fractional-order models have been proposed for several hemodynamic applications. Some examples are the model of blood flow in viscoelastic vessels [10,11] and the model of heat and mass transfer through an arterial segment that takes into account the interaction with a magnetic field [12]. Fractional derivatives are used to obtain more realistic predictions of pulse wave forms, which involves the development of parameter identification procedures [13,14]. The latter task has become especially relevant in recent years with the development of computer technology and the increasing interest in inverse problems.

In order to extract patient-specific data of the coronary arteries, CCTA images are used. However, they provide no geometry data for the arteries of the systemic circulation beyond the coronary vessels that could be accounted for in the model. One approach to resolving this issue is to simulate the whole systemic circle with some averaged parameters [8]. This is a physiologically based approach, but it requires a lot of computational resources and includes many parameters that are difficult to estimate. Another option is to impose pressure-derived boundary conditions directly on the inlet of the coronary arteries [9], which represent the impact of smaller arteries and the microcirculation. This approach simplifies the model but makes it difficult to investigate the effect of various heart conditions on coronary blood flow. Yet another approach is to take into account the impact of smaller arteries and the microcirculation using a submodel coupled with the blood flow model in the coronary arteries. A popular choice is Windkessel-type models, which are based on the representation of blood vessels (or the whole systemic circulation) as elastic reservoirs with resistance [5]. This leads to a small number (from 2 to 4) of parameters describing the influence of the systemic circulation. An alternative option, similar to the one we use in this work, is to include a part of the aorta in the model and impose a boundary condition on the end of the ascending aorta [15]. This approach allows us to use the cardiac output as an inlet boundary condition and to calculate the pressure in the aorta.

Boundary conditions in blood flow models usually imitate the impact of smaller arteries and microcirculation. A porous media-based approach was previously used to simulate microcirculation [16], and fractional derivatives were used to describe flow in porous media [17]. Fractional derivatives were also used to simulate blood flow in capillary vessels [18].

We describe the flow in the systemic circle and microcirculation using Windkessel-type boundary conditions that utilize fractional derivatives. Fractional derivatives have already been used in Windkessel models to simulate hypertensive and normal blood pressure profiles [14]. We propose to couple the fractional-order Windkessel model with a 1D coronary blood flow model to obtain a greater variety of aortic pressure profiles. We demonstrate that the resulting shape of the aortic pressure profile allows for better personalization of the model and affects the calculated FFR as well as the patient's diagnosis.

#### **2. Materials and Methods**

#### *2.1. Coronary Blood Flow Model*

We simulated coronary blood flow and calculated the FFR with a 1D hemodynamic model [19,20]. This model is based on the flow of incompressible viscous fluid through a network of one-dimensional elastic tubes. The conditions for mass and momentum conservation within the network are expressed as a system of hyperbolic equations for each tube:

$$\frac{\partial A}{\partial t} + \frac{\partial A u}{\partial x} = 0,\tag{1}$$

$$\frac{\partial u}{\partial t} + \frac{\partial}{\partial x} \left( \frac{u^2}{2} + \frac{P}{\rho} \right) = -8\pi \mu \frac{u}{A}, \tag{2}$$

where *t* is time, *x* is the coordinate along the vessel (tube), *A* = *A*(*x*, *t*) is the cross-sectional area, *u* = *u*(*x*, *t*) is the velocity averaged over the cross-section, *P* = *P*(*x*, *t*) is the blood pressure, *ρ* = 1.06 g/cm3 is the blood density, and *μ* = 4 *cP* is the blood viscosity. The right-hand side of Equation (2) represents friction force. An additional relation between the blood pressure and cross-sectional area of the vessel wall is required to close the system:

$$P(A) = \rho\_w c^2 f(A), \ f(A) = \begin{cases} \exp\left(\frac{A}{A\_0} - 1\right) - 1, \frac{A}{A\_0} \ge 1\\ \ln \frac{A}{A\_0}, \frac{A}{A\_0} < 1, \end{cases} \tag{3}$$

where *ρ<sup>w</sup>* = 1.1 g/cm3 is the blood vessel wall density, *A*<sup>0</sup> is the cross-sectional area of the unstressed vessel, and *c* is the elasticity index. The physiological meaning of *c* is the pulse wave velocity, i.e., the velocity of small disturbances propagating in the vessel wall [21]. Equation (3) is an analytical approximation of the pressure–area curves obtained in experimental studies [22].
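For illustration, the wall-state law (3) can be evaluated directly; a minimal sketch with illustrative parameter values (CGS units, as in the text):

```python
import math

def wall_pressure(A, A0, c, rho_w=1.1):
    """Transmural pressure from Equation (3): P(A) = rho_w * c^2 * f(A),
    with f(A) = exp(A/A0 - 1) - 1 for A/A0 >= 1 and ln(A/A0) otherwise."""
    ratio = A / A0
    f = math.exp(ratio - 1.0) - 1.0 if ratio >= 1.0 else math.log(ratio)
    return rho_w * c**2 * f

# Both branches vanish at the unstressed area A = A0, so P is continuous
# there; distension gives positive pressure, collapse gives negative.
print(wall_pressure(0.12, 0.12, 700.0))       # -> 0.0
print(wall_pressure(0.13, 0.12, 700.0) > 0.0) # -> True
```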

The computational domain consists of the aortic root, aorta, left coronary artery (with branches), and right coronary artery (with branches). The diameters, lengths, and topology of vessels can be extracted from CCTA scans. A simplified version of the arterial network is presented in Figure 1. We simulated a stenosis as a separate segment with decreased diameter.

**Figure 1.** A simplified network of major coronary arteries. Segment 6 represents a 66% stenosis. The model parameters for each segment are presented in Table A1 in Appendix A. We impose the cardiac output function (Figure 2) on the inlet of segment 1. On the terminal ends of segments 4, 7, and 8, we impose hydraulic resistance and outflow pressure (6). The boundary condition on the terminal end of the aorta (segment 2) involves a 2-element Windkessel model (7).

One-dimensional vessels are connected to each other in junction points to create an arterial structure. The conditions of mass conservation and total pressure continuity are imposed at the junction points:

$$\sum\_{i} Q\_{i} = 0,\tag{4}$$

$$\frac{\rho u\_i^2}{2} + P\_i = \frac{\rho u\_j^2}{2} + P\_j, \quad i \neq j. \tag{5}$$

Equation (4) represents an algebraic sum of influxes and effluxes, where *i* is the index of a vessel connected to a junction. *ui*, *uj*, *Pi*, and *Pj* in (5) are the velocities and blood pressures of the vessels with indices *i* and *j* near the junction point.

**Figure 2.** Cardiac output function.

On the inlet of the aorta (segment 1 in Figure 1), we set the cardiac output as a periodic time function *Q*(*t*) (Figure 2). The shape of the cardiac output function was proposed in [23]. It can be adjusted according to the patient's heart rate (HR), stroke volume (SV), or other data (peak-to-mean flow ratio, QT interval).

On the outlets of the coronary arteries (segments 4, 7, and 8 in Figure 1), we impose a pressure drop condition:

$$P\_k - P\_{out} = R\_k Q\_k, \tag{6}$$

where *k* is the index of a segment, *Pk* is the blood pressure at the boundary point, *Qk* is the blood flux at the boundary point, *Rk* is the hydraulic resistance, and *Pout* is the outflow pressure. Outflow pressure can be described as the value of blood pressure at which the microcirculation between arteries and veins stops. It ranges between 20 and 60 mm Hg [5] and can be adjusted according to the patient's data. Resistances *Rk* are distributed according to the empiric Murray's law through an algorithm described in [24]. Resistances increase during the systolic phase to simulate contractions of the myocardium tissue that hinder coronary blood flow [19].
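The exact distribution algorithm is given in [24]; purely as a hedged illustration of a Murray-law split (not the cited algorithm), terminal resistances can be taken inversely proportional to the cubed diameters while preserving a prescribed total peripheral resistance for the parallel connection:

```python
def terminal_resistances(diameters, R_total):
    """Illustrative Murray-law split: R_k proportional to d_k^-3
    (flow ~ d^3), scaled so the parallel combination equals R_total.
    This is a sketch, not the algorithm of the cited work."""
    s = sum(d**3 for d in diameters)
    kappa = R_total * s  # from 1/R_total = sum(d_k^3) / kappa
    return [kappa / d**3 for d in diameters]

Rk = terminal_resistances([0.3, 0.25, 0.2], 1.2e3)
parallel = 1.0 / sum(1.0 / r for r in Rk)
print(parallel)  # recovers R_total = 1200.0 (up to rounding)
```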

The boundary condition on the outlet of the aorta (segment 2 in Figure 1) differs from the boundary condition on the terminal coronary arteries (6), since the former represents the whole systemic circle as well as the microcirculation. To describe the behavior of the microcirculation vessels, the two-element Windkessel model [25] is extensively used:

$$Q\_a(t) = \frac{P\_a(t) - P\_{out}}{R\_a} + C\frac{dP\_a}{dt}, \tag{7}$$

where *Qa* and *Pa* are the blood flow and pressure in the aorta, and *Ra* is the hydraulic resistance of the systemic circle and microcirculation. In (7), compliance *C* is introduced, which represents the ability of blood vessels to distend and store blood volume. Larger values of *C* correspond to greater vessel elasticity. Compliance *C* can be adjusted according to the patient's systolic and diastolic blood pressures, and resistance *Ra* can be derived from the systemic vascular resistance, the ratio between mean blood pressure and cardiac output [24]. The elastic index *c* from (3), on the other hand, increases with the rigidity of the vessels and, thus, has a different physiological interpretation.

We solve the hyperbolic system (1) and (2) inside each vessel numerically with the help of an explicit grid-characteristic method [26], which is monotone and first-order accurate. Compatibility conditions imposed at junctions with Equations (4) and (5) and at boundary points with conditions (6) and (7) form a system of nonlinear equations, which is solved with the Newton method. Compatibility conditions are discretized implicitly with a first-order approximation. Discretizations and convergence studies are presented in [27].

The described model can be used to calculate the FFR at any point of the coronary arteries. We calculate the FFR as the ratio between the mean pressure in the coronary artery distal to the stenosis and the mean aortic pressure during vasodilation of the coronary vessels (hyperemia) [2]:

$$FFR = \frac{\overline{P}\_{dist}^{h}}{\overline{P}\_{aortic}^{h}}.\tag{8}$$

Hyperemia is simulated by decreasing the terminal resistances *Rk* by 70% [28]. FFR values below 0.8 are considered significant, meaning that the coronary vessel has an abnormal narrowing (stenosis) that should be surgically treated.

By adjusting *C* and *Pout* in (6) and (7), we can reproduce the patient's systolic and diastolic blood pressures. However, the range of possible blood pressure profiles is quite limited. To improve the mathematical model, additional elements can be introduced into the arterial structure and the Windkessel model [8]. Instead, in this paper, we propose to use a fractional time derivative in Equation (7).

#### *2.2. Fractional-Order Boundary Conditions*

We impose the boundary condition on the terminal end of the aorta using the fractional-order Windkessel model, which can be written as follows:

$$Q\_a(t) = \frac{P\_a(t) - P\_{out}}{R\_a} + C\_{\alpha} D\_t^{\alpha} P\_a(t), \tag{9}$$

where *D<sup>α</sup><sub>t</sub>* is a fractional differentiation operator; *α* is the fractional differentiation order, which is assumed to be between 0 and 2 in this work; and *Cα* is a pseudo-compliance (pseudo-capacitance). The fractional differentiation order *α* determines the relative degree of interaction between the capacitance of the microvasculature vessels, the elastic compliance of the vessels, and the dissipation forces inside them. This, in turn, defines the physiological meaning of *Cα*.

Windkessel models with fractional derivatives were extensively studied before [13,29]. However, combining a one-dimensional hemodynamic model with the fractional-order Windkessel boundary condition is a new approach that captures a greater variety of storage and dissipation effects with a single additional parameter *α*.

A large number of different definitions have been proposed for the fractional differentiation operator *D<sup>α</sup><sub>t</sub>* [30]. We use the Caputo fractional derivative in this work:

$$D\_t^{\alpha} P(t) = \frac{1}{\Gamma(\lceil \alpha \rceil - \alpha)} \int\_0^t \frac{P^{(\lceil \alpha \rceil)}(t')}{(t - t')^{1 + \alpha - \lceil \alpha \rceil}} dt',\tag{10}$$

where Γ is the gamma function, ⌈*α*⌉ is the ceiling of *α* (the smallest integer not less than *α*), and *P*<sup>(⌈α⌉)</sup>(*t*′) is the derivative of integer order ⌈*α*⌉. There are many other definitions of the fractional derivative: the Atangana–Baleanu fractional integral [31], the Riemann–Liouville fractional derivative [32], the Riesz derivative [33], etc. The choice of the Caputo fractional derivative is due to its simplicity of representation (relative to other fractional derivatives) and the availability of well-studied numerical methods with approximation estimates for various problems. Another useful representation, derived in [34], can be obtained using integration by parts in (10):

$$D\_t^a P(t) = \frac{1}{\Gamma(-a)} \int\_0^t \frac{P(t')}{(t - t')^{1 + a}} dt'.\tag{11}$$

This representation is valid for 0 < *α* < 2. We use it to approximate the integral in (11) with a trapezoidal rule. For the interval [0, *t*] with a grid {*t<sup>n</sup>* = *nτ* : *n* = 0, 1, 2, . . . , *N*}, assuming a constant time step *τ* and *P*(0) = 0, *P*′(0) = 0, the numerical approximation of *D<sup>α</sup><sub>τ</sub>P<sub>N</sub>* can be expressed as

$$D\_\tau^{\alpha} P\_N = \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum\_{n=0}^N a\_{n,N} P\_{N-n}, \tag{12}$$

where coefficients *an*,*<sup>N</sup>* are

$$a\_{n,N} = \begin{cases} 1, & n = 0, \\ (n+1)^{1-\alpha} - 2n^{1-\alpha} + (n-1)^{1-\alpha}, & 0 < n < N, \\ (1-\alpha)N^{-\alpha} - N^{1-\alpha} + (N-1)^{1-\alpha}, & n = N. \end{cases}$$

The error and stability analysis of this numerical approximation is described in detail in [30,35]. It was demonstrated that the approximation error is of order *O*(*τ*<sup>2−*α*</sup>).
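As an illustration, the quadrature (12) can be coded directly and checked against a known closed form: for *P*(*t*) = *t*<sup>2</sup>, the Caputo derivative is 2*t*<sup>2−α</sup>/Γ(3 − *α*). A minimal Python sketch (shown for 0 < *α* < 1, where the weights are well defined as printed; function names are ours):

```python
import math

def weights(alpha, N):
    """Quadrature weights a_{n,N} of scheme (12) for 0 < alpha < 1."""
    a = [1.0]
    for n in range(1, N):
        a.append((n + 1)**(1 - alpha) - 2 * n**(1 - alpha) + (n - 1)**(1 - alpha))
    a.append((1 - alpha) * N**(-alpha) - N**(1 - alpha) + (N - 1)**(1 - alpha))
    return a

def frac_deriv(P, tau, alpha):
    """Approximate D_t^alpha P at t_N = N*tau from samples P[0..N],
    assuming P(0) = P'(0) = 0 as in the text."""
    N = len(P) - 1
    a = weights(alpha, N)
    return tau**(-alpha) / math.gamma(2 - alpha) * sum(
        a[n] * P[N - n] for n in range(N + 1))

# Check against the exact Caputo derivative of P(t) = t^2 at t = 1:
alpha, tau, N = 0.5, 0.01, 100
P = [(n * tau)**2 for n in range(N + 1)]
approx = frac_deriv(P, tau, alpha)
exact = 2 * 1.0**(2 - alpha) / math.gamma(3 - alpha)
print(approx, exact)  # the two agree to O(tau^(2-alpha))
```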

We use an explicit approximation scheme for (9) with the discretization of fractional derivative described above. This allows us to determine the blood pressure *PN* at the terminal end of the aorta from the values of pressure and flux at the previous time steps. Then, we calculate the outflow *QN* at the terminal end of the aorta using compatibility conditions (1) and (2) and wall-state Equation (3).
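A sketch of one explicit time step of boundary condition (9): the history sum uses the weights of (12), the *n* = 0 weight multiplies the unknown new pressure, and the flux and resistive term are lagged to the previous level. The paper does not spell out which terms are lagged, so this is one plausible variant, not the authors' exact scheme:

```python
import math

def windkessel_step(P_hist, Q_prev, tau, alpha, C_alpha, R_a, P_out):
    """Advance the fractional Windkessel condition (9) one step:
    Q = (P - P_out)/R_a + C_alpha * D^alpha P, with D^alpha P discretized
    as in (12). P_hist holds P_0..P_{N-1}; returns the new pressure P_N.
    Sketch for 0 < alpha < 1; Q and the resistive term are taken at the
    previous time level, so P_N is obtained explicitly."""
    N = len(P_hist)
    hist = 0.0  # sum of a_{n,N} * P_{N-n} over n = 1..N
    for n in range(1, N):
        a = (n + 1)**(1 - alpha) - 2 * n**(1 - alpha) + (n - 1)**(1 - alpha)
        hist += a * P_hist[N - n]
    a_N = (1 - alpha) * N**(-alpha) - N**(1 - alpha) + (N - 1)**(1 - alpha)
    hist += a_N * P_hist[0]
    rhs = Q_prev - (P_hist[-1] - P_out) / R_a
    # a_{0,N} = 1 multiplies the unknown P_N, so solve for it directly
    return rhs * math.gamma(2 - alpha) * tau**alpha / C_alpha - hist

# A zero pressure history and a unit inflow give a positive new pressure
print(windkessel_step([0.0] * 10, 1.0, 0.01, 0.5, 1.0, 1.0, 0.0))
```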

#### *2.3. Model Personalization*

One of the most important problems in patient-specific blood flow modeling is to identify the parameters of the model. Diameters, lengths, and overall arterial structure can be extracted from CCTA images with the help of segmentation and skeletonization algorithms [36]. Parameter *c* in (3) represents the pulse wave velocity and can be estimated from the patient's age, blood pressure, heart rate, and stroke volume with the help of machine learning methods [20]. Terminal resistances *Rk*, *Ra* in (6), (7) and (9) are calculated from systemic vascular resistance (the ratio between mean pressure and cardiac output) and the diameters of the terminal arteries [24].

Outflow pressure *Pout* and compliance *C* in (7) are estimated to reproduce the measured systolic and diastolic blood pressures. A number of algorithms for estimating *Pout* and *C* are presented in [25]. The following procedure is used in this work. We calculate the initial value of *C* as the ratio between the stroke volume and the pulse pressure (*PP* = *Psys* − *Pdia*) and the initial value of *Pout* as 50% of the diastolic pressure. After this, *C* and *Pout* are iteratively adjusted until the computed systolic and diastolic blood pressures match the measured ones:

$$P\_{out}^{i+1} = P\_{out}^i \frac{P\_{sys}^{true} + P\_{dia}^{true}}{P\_{sys}^i + P\_{dia}^i}, \quad \mathcal{C}^{i+1} = \mathcal{C}^i \frac{P\_{sys}^{true} - P\_{dia}^{true}}{P\_{sys}^i - P\_{dia}^i}. \tag{13}$$

Adjustment of *Cα* in (9) is performed using the same procedure, but the initial estimate is usually less precise, since the dimension and interpretation of *Cα* change with *α*.
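The fixed-point iteration (13), together with the initial guesses described above, can be sketched as follows; `simulate` stands in for a full run of the 1D model, and the toy stand-in below is purely illustrative:

```python
def fit_pressures(simulate, P_sys_true, P_dia_true, SV, n_iter=20):
    """Iteration (13): simulate(C, P_out) runs the model and returns
    the computed (P_sys, P_dia); C and P_out are rescaled until the
    computed pressures match the measured ones."""
    C = SV / (P_sys_true - P_dia_true)  # initial C = SV / pulse pressure
    P_out = 0.5 * P_dia_true            # initial P_out = 50% of P_dia
    for _ in range(n_iter):
        P_sys, P_dia = simulate(C, P_out)
        P_out *= (P_sys_true + P_dia_true) / (P_sys + P_dia)
        C *= (P_sys_true - P_dia_true) / (P_sys - P_dia)
    return C, P_out

# Toy stand-in for the solver: P_dia = P_out and pulse pressure = C
# (illustrative only, not a hemodynamic model).
toy = lambda C, P_out: (P_out + C, P_out)
print(fit_pressures(toy, 120.0, 80.0, SV=70.0))  # -> approx (40.0, 80.0)
```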

Unfortunately, relying solely on systolic and diastolic pressures may produce incorrect diagnostic outcomes. For example, the hemodynamic significance of stenosis may vary for the same values of systolic and diastolic pressures. In order to obtain more accurate estimates, additional available information about the pressure profile, such as mean pressure, must be taken into account.

We propose to estimate an additional parameter, the fractional derivative order *α*, based on the value of the mean pressure *Pmean*. For each candidate *α*, we adjust *Cα* and *Pout* to match the systolic and diastolic blood pressures, then calculate *Pmean* and compare it with the measured mean pressure. If the calculated mean pressure is higher than the measured one, the order *α* should be decreased, and vice versa. As we will see from the results, the relationship between *α* and *Pmean* is very close to linear. Therefore, in most situations, it is sufficient to perform two preliminary calculations, for *α* = 1 and *α* = 1.5 (or *α* = 0.5), to estimate the *α* that provides the appropriate value of the mean pressure.
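Since *Pmean* depends on *α* almost linearly, the two preliminary calculations amount to one secant step; a hedged sketch, where `mean_pressure_of` is our placeholder for a full model run with *Cα* and *Pout* re-fitted at each *α*:

```python
def estimate_alpha(mean_pressure_of, P_mean_true,
                   alpha0=1.0, alpha1=1.5, tol=0.1, max_iter=10):
    """Secant iteration for the fractional order alpha. Each call to
    mean_pressure_of(alpha) stands for a model run in which C_alpha and
    P_out have been re-fitted to the systolic/diastolic pressures."""
    f0 = mean_pressure_of(alpha0) - P_mean_true
    f1 = mean_pressure_of(alpha1) - P_mean_true
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        alpha0, alpha1 = alpha1, alpha1 - f1 * (alpha1 - alpha0) / (f1 - f0)
        f0, f1 = f1, mean_pressure_of(alpha1) - P_mean_true
    return alpha1

# With an exactly linear response one secant step lands on the answer:
print(estimate_alpha(lambda a: 100.0 - 10.0 * a, 92.0))  # -> ~0.8
```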

The parameter identification procedure described above is summarized in Figure 3.

**Figure 3.** Parameter identification procedure.

#### *2.4. Patient Data*

We tested the parameter identification procedure (Figure 3) on a publicly available dataset [5]. This dataset contains data from ten patients, including the geometry of the arterial networks and the locations of stenoses in various coronary vessels. We kept the numeration of patients from [5], but we excluded Patient 9 from our study since the measurement of mean pressure is unavailable for this patient. As a result, our study included nine patients (Table 1) with 13 stenoses.

**Table 1.** Characteristics of the patient dataset (mean ± standard deviation). Details are presented in [5]. *θ* = (*Pmean* − *Pdia*)/(*Psys* − *Pdia*) is a measure of blood profile thickness.


Figure 4 presents examples of two patient-specific structures for Patient 1 and Patient 4. Table 2 presents patient metadata as well as measured FFR values. These two patients were chosen based on the value *θ*, which is the measure of the blood profile thickness:

$$\theta = \frac{P\_{\text{mean}} - P\_{\text{dia}}}{P\_{\text{sys}} - P\_{\text{dia}}}.\tag{14}$$

The average value of *θ* across all nine patients is *θave* = 0.45. Patient 1 has the lowest value, *θ*<sup>1</sup> = 0.36, and Patient 4 has the value closest to the average, *θ*<sup>4</sup> = 0.44. As a result, Patient 1 has the thinnest blood pressure profile and Patient 4 the most typical one.
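Equation (14) is straightforward to evaluate; the sketch below uses illustrative pressure values (125/75 mm Hg with a mean of 97.5 mm Hg), not values from the dataset:

```python
def profile_thickness(p_sys, p_dia, p_mean):
    """Blood pressure profile thickness theta, Equation (14)."""
    return (p_mean - p_dia) / (p_sys - p_dia)

# Illustrative values: 125/75 mm Hg with mean 97.5 mm Hg gives theta = 0.45,
# which happens to coincide with the dataset average reported above.
theta = profile_thickness(125.0, 75.0, 97.5)
```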

**Figure 4.** Patient-specific network of coronary arteries: (**a**) patient 1 with 70% stenosis (segment 5) and (**b**) patient 4 with prolonged 50–60% stenosis (segment 5). Parameters of each segment are presented in Tables A2 and A3.

**Table 2.** Characteristics of Patient 1 and Patient 4. Stroke volume (SV) was estimated from patient age and BMI values presented in [5].


#### **3. Results**

#### *3.1. Blood Pressure and FFR Sensitivity to Order α*

The proposed model was applied to calculate the blood pressure profiles for various fractional differentiation orders *α* in a simplified network of coronary arteries (Figure 1). We also calculated the FFR for a 66% stenosis for various *α*.

First, we calculated aortic blood pressure for *α* = 1.0 and adjusted the model parameters to acquire the physiological systolic (125 mm Hg) and diastolic (75 mm Hg) blood pressures. Then, we performed calculations for other values of *α* with the same set of model parameters. The value of compliance *C* remains constant for various *α*. This is technically incorrect since the physiological meaning and dimensional formula for *C* depend on *α*, but it helps us to explore changes in pressure profiles with the change of *α* (Figure 5a).

Second, we adjusted *C* and *Pout* for each order *α* to obtain the same values of systolic and diastolic blood pressures (Figure 5b). As *α* decreases, the pressure peak shifts to the left, and the pressure profile becomes thinner. This results in a drop in mean pressure. Conversely, as *α* increases, the profile peak shifts to the right, the profile becomes thicker, and the mean pressure grows.

**Figure 5.** Aortic blood pressure for various orders *α*. (**a**) All parameters, except for *α*, are fixed. (**b**) Model parameters were adjusted to yield the same values of the systolic and diastolic blood pressure.

Systolic and diastolic blood pressures are among the most commonly available patient-specific parameters. All blood profiles in Figure 5b have the same systolic and diastolic blood pressures, but the mean pressure differs for each *α*. If the mean pressure is available for a given patient, we can use it to select an appropriate *α* and calculate FFR. Figure 6 demonstrates how the calculated mean pressure and FFR depend on *α*, assuming that the systolic and diastolic blood pressures are the same (125/75 mm Hg). The relationship between mean pressure and *α* is very close to linear within the considered interval (from 0.25 to 1.5). This simplifies the identification of *α* from the patient's mean pressure: if we calculate the mean pressures for any two values of *α*, we can derive an appropriate *α* for a given mean pressure value using linear interpolation.

At the same time, FFR drops with increasing *α* (Figure 6b). This relation resembles exponential decay. For 0 < *α* < 1, FFR drops rapidly from 0.95 (*α* = 0.25) to 0.78 (*α* = 1.0). The threshold between the significant and insignificant lesions is 0.8, so the choice of *α* affects the diagnostic outcome. For 1 < *α* < 2, FFR is almost constant.

**Figure 6.** Mean pressure and FFR for a simplified network of coronary arteries. (**a**) Mean pressure and *α*. (**b**) FFR and *α*. The horizontal red line corresponds to FFR = 0.8, the threshold value between significant (FFR < 0.8) and insignificant (FFR > 0.8) stenoses.

#### *3.2. Patient-Specific Calculations*

In this section, we apply the parameter estimation algorithm (Figure 3) to estimate FFR for nine patients (Table 1). We start with two examples: Patient 1 and Patient 4.

Patient 1 had stenosis in the left anterior descending artery (LAD) with a measured FFR value of 0.89 and a mean aortic pressure of 111 mm Hg. Blood flow calculations for *α* = 1 (utilizing boundary condition (7)) yielded *FFR*<sup>*α*=1</sup> = 0.83 and *P*<sup>*α*=1</sup><sub>*mean*</sub> = 119 mm Hg. Then, we applied the proposed model with the fractional-order Windkessel boundary condition (9) together with the parameter identification procedure to fit the given systolic, diastolic, and mean pressures. As a result, *P*<sup>*α*=0.56</sup><sub>*mean*</sub> = 111 mm Hg was achieved for *α* = 0.56, with a resulting *FFR*<sup>*α*=0.56</sup> = 0.87. The error in FFR estimation was thus significantly reduced by optimizing *α* against the mean pressure.

Patient 4 had stenosis in the LAD with a measured FFR value of 0.82 and a mean aortic pressure of 94 mm Hg. Blood flow calculations for *α* = 1 (utilizing boundary condition (7)) yielded *FFR*<sup>*α*=1</sup> = 0.82 and *P*<sup>*α*=1</sup><sub>*mean*</sub> = 94 mm Hg. No further optimization was required in this case, since the calculated mean pressure matched the measured one with good accuracy. The calculated FFR value also matched the measured one.

FFR estimations for the other patients are presented in Table 3. The original approach to estimating FFR involves boundary condition (7) without a fractional derivative: we ignored the measured mean pressure and adjusted the model to achieve the measured systolic and diastolic blood pressures. The fractional-order approach involves adjusting the fractional derivative order to obtain the measured mean pressure. The RMSE for the original approach was 0.05, and the RMSE for the fractional-order approach was 0.04. The RMSE was mainly determined by large errors in the FFR estimations of patients 6, 8, and 10; we assume that the stenosis degree and length were not identified accurately for these patients. The FFR estimation was improved for patients with a "thin" blood profile (*θ* < 0.4), including Patients 1, 5, and 7. Patients 2, 3, 4, and 6 had an optimal fractional order *αopt* close to (or equal to) 1.0, so the FFR estimates of both approaches were similar. The FFR estimate for patient 6 was less precise with the fractional-order approach, but the difference was very small. Patients 8 and 10 had an optimal fractional order *αopt* > 1.0, and the FFR estimates were again similar for both approaches, because FFR is almost constant for *α* > 1.0 (Figure 6).
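The RMSE values quoted above are root-mean-square differences between calculated and invasively measured FFR over the stenosis set; a minimal sketch with illustrative numbers (not the study data):

```python
import math

def rmse(calculated, measured):
    """Root-mean-square error between calculated and measured FFR lists."""
    n = len(calculated)
    return math.sqrt(sum((c - m) ** 2 for c, m in zip(calculated, measured)) / n)

# Illustrative numbers only: one stenosis off by 0.06, one matched exactly.
err = rmse([0.83, 0.82], [0.89, 0.82])
```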

**Table 3.** FFR estimations for the patient dataset. Patient data: *Pmean* is the measured mean pressure, mm Hg; *θ* is a measure of the pressure profile thickness (14); Loc. is the location of the stenosis; *FFR* is the invasively measured FFR. The original approach for FFR estimation (order *α* = 1.0): *FFR*<sup>*α*=1</sup> is the calculated FFR value with boundary condition (7); *P*<sup>*est*</sup><sub>*mean*</sub> is the calculated mean pressure with order *α* = 1. The fractional derivative approach (order *α* = *αopt*) involves adjusting the order *α* so that the calculated mean pressure matches the measured one: *FFR*<sup>*α*=*αopt*</sup> is the calculated FFR value with boundary condition (9) and fractional order *α* = *αopt*; *αopt* is the optimal fractional order. Patient 9 was excluded due to the absence of a mean pressure measurement.


Results of the FFR estimation showed that the fractional-order approach provided benefits for a certain group of patients (Patients 1, 5, and 7) with a thin blood pressure profile. These patients may share a common cardiovascular pathology. According to [14], low fractional orders can be used to simulate the blood profiles of patients with hypertension. Patients 1, 5, and 7 have a BMI > 30 kg/m<sup>2</sup>, which is a good predictor of hypertension. However, their blood pressure levels are not the highest in the dataset. A larger sample of patients is needed to draw further conclusions.

#### **4. Discussion**

We proposed coupling the well-established one-dimensional hemodynamic model of coronary blood flow with a fractional-order boundary condition, as well as a procedure for estimating its parameters. The fractional derivative can be a useful tool for more accurate modeling of the pressure profile. Actual blood pressure profiles depend on many factors, such as age, height, weight, medical history, and artery elasticity. The most commonly available characteristics of blood pressure profiles are systolic and diastolic blood pressures. Unfortunately, relying on these two values alone may produce incorrect diagnostic outcomes: for example, the hemodynamic significance of stenosis may vary for the same systolic and diastolic blood pressures. This fact has motivated researchers to look for new tools to model blood pressure profiles.

We used the fractional derivative order to match the calculated mean blood pressure with the measured one. Adjusting the mean pressure can be performed in other ways: introducing a larger Windkessel model, expanding the arterial network, etc. The fractional-derivative Windkessel model is a good compromise: we introduced only one or two new parameters and gained the ability to simulate a whole spectrum of dissipative and storage mechanisms through the fractional order *α*. Our approach does require additional information on the blood pressure profile. These data can be obtained with simple noninvasive procedures. Unfortunately, in many cases, these data are unavailable, and all the pressure profile information is reduced to systolic and diastolic pressure values. This was the case for Patient 9, who was excluded from our study.

The proposed approach has a number of shortcomings that need to be resolved in the future. First, in some cases, the only data available on a patient's pressure profile are the systolic and diastolic blood pressures. Identifying the fractional derivative order *α* in this case would require a completely different approach, based on other data such as patient medical history. Second, calculating fractional derivatives demands significant computational resources, which negates one of the main advantages of one-dimensional blood flow models—computational efficiency. Instead of the basic approach presented in this work, new efficient numerical approximations can be used. Third, the proposed approach should be tested on a larger number of patients with a wide range of FFR values. The FFR is nearly constant for fractional differentiation orders *α* > 1, so adjusting *α* will be of no benefit for some patients.

Further research will focus on integrating more efficient approaches to identify model parameters and to approximate the fractional derivative. Other areas of work include collaborating with clinicians to find effective methods for pulse wave assessment and further validation on a larger number of patients. The proposed approach has great potential as an alternative means of simulating arterial stiffness and pulse waves.

**Author Contributions:** Conceptualization, T.G. and R.Y.; methodology, T.G.; software, T.G.; validation, T.G.; formal analysis, R.Y.; investigation, T.G.; data curation, T.G.; writing—original draft preparation, T.G.; writing—review and editing, R.Y.; visualization, T.G. and R.Y.; supervision, T.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Russian Science Foundation under project no. 22-71-10087.

**Data Availability Statement:** The datasets used in this work are described in [5] and are freely available at https://doi.org/10.6084/m9.figshare.8047742.v2 (accessed on 2 April 2023).

**Acknowledgments:** We thank Alexander Lapin (Sechenov University) for useful discussion and literature recommendations.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **Appendix A**

**Table A1.** Parameters of the vessels for simplified structure (Figure 1).


**Table A2.** Parameters of the vessels for Patient 1 (Figure 4).



**Table A3.** Parameters of the vessels for Patient 4 (Figure 4).

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **An Application of the Distributed-Order Time- and Space-Fractional Diffusion-Wave Equation for Studying Anomalous Transport in Comb Structures**

**Lin Liu 1,\*, Sen Zhang 1, Siyu Chen 1, Fawang Liu 2,\*, Libo Feng 2, Ian Turner 2, Liancun Zheng <sup>1</sup> and Jing Zhu <sup>1</sup>**


**Abstract:** A comb structure consists of a one-dimensional backbone with lateral branches. Such structures have widespread applications in medicine and biology, and they promote an anomalous diffusion process along the backbone (*x*-direction) together with classical diffusion along the branches (*y*-direction). In this work, we propose a distributed-order time- and space-fractional diffusion-wave equation to model a comb structure in a more general setting. The distributed-order time- and space-fractional diffusion-wave equation is formulated to study anomalous diffusion in the comb model subject to an irregular convex domain, with the motivation that the time-fractional derivative captures the memory characteristic while the space-fractional derivative with variable diffusion coefficients captures the nonlocal characteristic. The finite element method is applied to obtain the numerical solution. The stability and convergence of the numerical discretization scheme are derived and analyzed. Two numerical examples of relevance to the comb model are given to verify the correctness of the numerical method. Moreover, the influence of the involved parameters on the particle distribution, shown in three-dimensional and axial projection plots over an elliptical domain, is analyzed, and the physical meanings are interpreted in detail.

**Keywords:** distributed-order fractional derivative; anomalous diffusion; comb model; constitutive relationship

**PACS:** 26A33; 65M12; 65M60; 35R11; 74Q15

**Citation:** Liu, L.; Zhang, S.; Chen, S.; Liu, F.; Feng, L.; Turner, I.; Zheng, L.; Zhu, J. An Application of the Distributed-Order Time- and Space-Fractional Diffusion-Wave Equation for Studying Anomalous Transport in Comb Structures. *Fractal Fract.* **2023**, *7*, 239. https://doi.org/10.3390/fractalfract7030239

Academic Editors: Stanislaw Migorski and Riccardo Caponetto

Received: 30 December 2022 Revised: 21 January 2023 Accepted: 10 February 2023 Published: 7 March 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

### **1. Introduction**

A comb model is used to study anomalous diffusion in a medium with a specific structure [1]. Systematic research on this class of models is of great theoretical significance and practical relevance for comb structures such as dendritic spines [2] that arise in medicine and biology. An example of an experimental setup to probe the dynamics of actin polymerization is given in Figure 1a: the image at the top is an optical micrograph of the microfluidic structure, and the image at the bottom shows fluorescently labeled, polymerized actin filaments in the microfluidic channels [3]. Figure 1b presents an electron tomogram of a spiny dendrite [4]. In both practical problems, particle transport is not fully random but follows a comb-like geometry, and this specific structure is called a comb model [5]. Comb models are a powerful tool for studying many other diffusion phenomena, such as the diffusion of cancer cells [6], fractal glioma development under RF electric field treatment [7] and diffusion on percolation clusters [8]. In this study, we assume that, for all these practical problems, the transport of particles in a comb form can be simplified to the structure shown in Figure 2. As the figure shows, the comb model contains a straight backbone on the *x*-axis with lateral branches attached perpendicularly to the backbone, which play the role of traps [8]. Owing to this special structure, diffusion along the *x*-direction only happens on the *x*-axis, and transport between any two different fingers must take place through the backbone. As is well known, one of the most striking characteristics of classical normal diffusion is the linear growth with time of the second moment of the particle positions. For the special structure of the comb model, one important diffusion pattern can be deduced, whereby subdiffusion with exponent 1/2 arises subject to the classical Fickian constitutive model with a linear form [9]

$$\vec{J}(x, y, t) = \left( -k\_x \delta(y) \frac{\partial u(x, y, t)}{\partial x},\ -k\_y \frac{\partial u(x, y, t)}{\partial y} \right), \tag{1}$$

where *J*(*x*, *y*, *t*) refers to the diffusion flux vector; *u*(*x*, *y*, *t*) denotes the distribution function at position (*x*, *y*) and time *t*; *kxδ*(*y*) is the diffusion coefficient along the *x*-direction, while *ky* is the diffusion coefficient along the *y*-direction; and *δ*(*y*) is the Dirac delta function, which reflects the special structure of the comb model.

**Figure 1.** (**a**) Optical micrograph of a segment of the microfluidics comb-like structures (on top). On the bottom: microfluidic micrographs of fluorescently labeled, polymerized actin filaments in a comb-like structure [4]. (**b**) One of the examples of a physical environment suitable for the comb model. Electron tomogram of a spiny dendrite. Image taken from Internet (http://www.cacr.caltech. edu/projects/ldviz/results/levelsets/, accessed on 1 July 2013).

**Figure 2.** The schematic of a comb model.

Due to the geometric structure and the non-uniformity of the transmission medium, classical Fick's law, which in conventional diffusion suffers from the paradox of an infinite transport velocity [10,11], is no longer applicable. To study the transmission mechanism of the concentration field for anomalous diffusion in the comb model, three modifications of Fick's model are introduced. The first is the introduction of the relaxation parameter *ξ*, which endows the transport process with a finite transport velocity. The second is the use of time-fractional derivatives, which account for the memory characteristic of the transport process. Furthermore, as discussed in [12], the special structure of the comb model induces a highly inhomogeneous character along the *x*-axis, which can be captured by a space-fractional derivative [13]. Thus, as the third modification, the second-order space derivative is replaced by a space-fractional derivative with variable coefficients, which accounts for left and right nonlocality. Based on the time-fractional Cattaneo model [14] and the two-dimensional space-fractional constitutive model [15], the following time- and space-fractional Cattaneo constitutive relationship with variable diffusion coefficients is formulated

$$\begin{split} &\vec{J}(x, y, t) + \xi \frac{\partial \vec{J}(x, y, t)}{\partial t} \\ &= {}\_0^{RL} D\_t^{1-\alpha} \left( -\delta(y) \left( d^+(x, y)\, {}\_L D\_x^{\gamma} u(x, y, t) - d^-(x, y)\, {}\_x D\_R^{\gamma} u(x, y, t) \right),\ -e(x, y) \frac{\partial u(x, y, t)}{\partial y} \right), \end{split} \tag{2}$$

where *δ*(*y*)*d*<sup>+</sup>(*x*, *y*) and *δ*(*y*)*d*<sup>−</sup>(*x*, *y*) represent the left and right variable diffusion coefficients along the *x*-direction, respectively; *e*(*x*, *y*) refers to the variable diffusion coefficient along the *y*-direction; <sup>*RL*</sup><sub>0</sub>*D*<sup>1−*α*</sup><sub>*t*</sub> refers to the time-fractional derivative of order 1 − *α* (0 < *α* < 1) in the Riemann–Liouville sense; and <sub>*L*</sub>*D*<sup>*γ*</sup><sub>*x*</sub> and <sub>*x*</sub>*D*<sup>*γ*</sup><sub>*R*</sub> denote the left and right Riemann–Liouville space-fractional derivatives of order *γ* (0 < *γ* < 1), respectively. The definitions are, respectively, given by

$$\begin{split} {}\_0^{RL}D\_t^{1-\alpha}u(x,y,t) &= \frac{1}{\Gamma(\alpha)} \frac{\partial}{\partial t} \int\_0^t \frac{u(x,y,\tau)}{(t-\tau)^{1-\alpha}}\, d\tau, \\ {}\_L D\_x^{\gamma}u(x,y,t) &= \frac{1}{\Gamma(1-\gamma)} \frac{\partial}{\partial x} \int\_L^x \frac{u(s,y,t)}{(x-s)^{\gamma}}\, ds, \\ {}\_x D\_R^{\gamma}u(x,y,t) &= \frac{-1}{\Gamma(1-\gamma)} \frac{\partial}{\partial x} \int\_x^R \frac{u(s,y,t)}{(s-x)^{\gamma}}\, ds, \end{split}$$

where the symbol Γ(·) refers to the Euler gamma function, *L* and *R* refer to the left and right boundaries along the *x*-direction.
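These Riemann–Liouville derivatives rarely admit closed forms for general *u*, so numerical schemes are used in practice. A minimal illustrative sketch (not the discretization used in this paper, which employs the finite element method) approximates the left derivative with the Grünwald–Letnikov sum and checks it against the known result <sub>0</sub>*D*<sup>*γ*</sup><sub>*x*</sub> *x* = *x*<sup>1−*γ*</sup>/Γ(2 − *γ*):

```python
import math

def gl_left_derivative(f, x, gamma, n=1000):
    """First-order Grunwald-Letnikov approximation of the left
    Riemann-Liouville derivative of order gamma on [0, x]."""
    h = x / n
    total, g = 0.0, 1.0             # g = (-1)**k * binom(gamma, k)
    for k in range(n + 1):
        total += g * f(x - k * h)
        g *= (k - gamma) / (k + 1)  # recurrence for the next GL weight
    return total / h ** gamma

# Check against the analytic result D^gamma x = x**(1-gamma) / Gamma(2-gamma).
approx = gl_left_derivative(lambda s: s, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
```

The scheme is only first-order accurate, so `n` must be fairly large for tight agreement.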

By combining the constitutive relation (2) with the following mass conservation equation

$$\frac{\partial u(\mathbf{x}, \mathbf{y}, t)}{\partial t} + \nabla \cdot \vec{J}(\mathbf{x}, \mathbf{y}, t) = 0,\tag{3}$$

we obtain the time- and space-fractional Cattaneo governing equation

$$\begin{aligned} &\xi \frac{\partial^{1+\alpha}u(x,y,t)}{\partial t^{1+\alpha}} + \frac{\partial^{\alpha}u(x,y,t)}{\partial t^{\alpha}} \\ &= \frac{\partial}{\partial x} \left[ \delta(y) \left( d^+(x,y)\, {}\_L D\_x^{\gamma} u(x,y,t) - d^-(x,y)\, {}\_x D\_R^{\gamma} u(x,y,t) \right) \right] + \frac{\partial}{\partial y} \left[ e(x,y) \frac{\partial u(x,y,t)}{\partial y} \right], \end{aligned} \tag{4}$$

where ∂<sup>*α*</sup>*u*(*x*, *y*, *t*)/∂*t*<sup>*α*</sup> and ∂<sup>1+*α*</sup>*u*(*x*, *y*, *t*)/∂*t*<sup>1+*α*</sup> denote the Caputo fractional derivative operators of order *α* and 1 + *α*, respectively, whose definitions are given as

$$\frac{\partial^{\alpha}u(\boldsymbol{x},\boldsymbol{y},t)}{\partial t^{\alpha}} = \frac{1}{\Gamma(1-\alpha)} \int\_{0}^{t} \frac{1}{(t-\tau)^{\alpha}} \frac{\partial u(\boldsymbol{x},\boldsymbol{y},\tau)}{\partial \tau} d\tau,$$

$$\frac{\partial^{1+\alpha}u(\boldsymbol{x},\boldsymbol{y},t)}{\partial t^{1+\alpha}} = \frac{1}{\Gamma(1-\alpha)} \int\_{0}^{t} \frac{1}{(t-\tau)^{\alpha}} \frac{\partial^{2}u(\boldsymbol{x},\boldsymbol{y},\tau)}{\partial \tau^{2}} d\tau.$$
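Caputo derivatives of this kind are commonly discretized with the classical L1 scheme; the sketch below is illustrative (it is not the scheme analyzed in this paper) and verifies the approximation against the exact Caputo derivative of *u*(*t*) = *t*<sup>2</sup>, which is 2*t*<sup>2−*α*</sup>/Γ(3 − *α*):

```python
import math

def caputo_l1(u, t, alpha, n=100):
    """L1 finite-difference approximation of the Caputo derivative of
    order alpha (0 < alpha < 1) of u at time t, on a uniform grid of
    n steps over [0, t]."""
    dt = t / n
    scale = dt ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for j in range(n):
        # L1 weight b_j and the backward difference it multiplies.
        b_j = (j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha)
        total += b_j * (u(t - j * dt) - u(t - (j + 1) * dt))
    return scale * total

# Check against the exact Caputo derivative of u(t) = t**2 at t = 1:
# 2 * t**(2 - alpha) / Gamma(3 - alpha).
approx = caputo_l1(lambda s: s * s, 1.0, 0.5)
exact = 2.0 / math.gamma(2.5)
```

The L1 scheme converges at rate *O*(Δ*t*<sup>2−*α*</sup>) for sufficiently smooth *u*.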

As a generalization of the integer-order derivative, the fractional operator accounts for memory and nonlocal characteristics and has important applications in a variety of fields. For the fractional governing Equation (4), a limitation is that it is suitable for describing the probability density distribution of only a very narrow class of diffusion processes, because it is characterized by a single time-fractional exponent [16]. Motivated by this observation, a distributed-order operator, obtained by integrating fractional-order derivatives with respect to the order of differentiation, was proposed by Caputo [17]. A governing equation with a distributed-order operator exhibits memory and nonlocal effects over various time-fractional scales and becomes a powerful tool to describe transport phenomena in complex heterogeneous media. However, as far as we are aware, the distributed-order time- and space-fractional diffusion-wave equation has not been considered for studying anomalous diffusion in the comb model.

Motivated by the above discussions, as an original contribution to the literature, we discuss and analyze the following distributed-order time- and space-fractional diffusion-wave equation to study the anomalous diffusion in the comb model

$$\begin{aligned} &\xi \int\_1^2 \phi\_1(\beta) \frac{\partial^{\beta} u(x,y,t)}{\partial t^{\beta}}\, d\beta + \int\_0^1 \phi\_0(\alpha) \frac{\partial^{\alpha} u(x,y,t)}{\partial t^{\alpha}}\, d\alpha \\ &= \frac{\partial}{\partial x} \left[ \delta(y) \left( d^+(x,y)\, {}\_L D\_x^{\gamma} u(x,y,t) - d^-(x,y)\, {}\_x D\_R^{\gamma} u(x,y,t) \right) \right] + \frac{\partial}{\partial y} \left[ e(x,y) \frac{\partial u(x,y,t)}{\partial y} \right] + f(x,y,t), \end{aligned} \tag{5}$$

where *ϕ*1(*β*) and *ϕ*0(*α*) denote weight functions satisfying *β* ∈ (1, 2), *ϕ*1(*β*) ≥ 0, *ϕ*1(*β*) ≢ 0, 0 < ∫<sub>1</sub><sup>2</sup> *ϕ*1(*β*) d*β* < ∞ and *α* ∈ (0, 1), *ϕ*0(*α*) ≥ 0, *ϕ*0(*α*) ≢ 0, 0 < ∫<sub>0</sub><sup>1</sup> *ϕ*0(*α*) d*α* < ∞, and *f*(*x*, *y*, *t*) is a source term.
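The distributed-order terms on the left-hand side are integrals over the order of differentiation and are typically approximated by quadrature in the order variable. The sketch below uses a midpoint rule applied to *u*(*t*) = *t* with the deliberately contrived weight *ϕ*(*α*) = Γ(2 − *α*), chosen purely so that the integrand reduces to *t*<sup>1−*α*</sup> and the integral has the closed form (*t* − 1)/ln *t* for checking:

```python
import math

def distributed_order_midpoint(t, m=200):
    """Midpoint quadrature in alpha of the distributed-order Caputo
    derivative of u(t) = t with the contrived weight
    phi(alpha) = Gamma(2 - alpha).  The weight cancels the gamma factor
    in D^alpha t = t**(1 - alpha) / Gamma(2 - alpha), so the exact
    value of the integral is (t - 1) / ln(t)."""
    da = 1.0 / m
    total = 0.0
    for i in range(m):
        a = (i + 0.5) * da          # midpoint of the i-th alpha subinterval
        total += t ** (1.0 - a) * da
    return total

approx = distributed_order_midpoint(2.0)
exact = (2.0 - 1.0) / math.log(2.0)
```

In an actual solver, each quadrature node *α<sub>i</sub>* would carry its own discretized Caputo derivative rather than a closed-form value.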

As Ref. [6] indicated, the diffusion in the comb model described by Fick's model is an example of a subdiffusive one-dimensional medium, where a continuous-time random walk takes place along the backbone while the diffusion along the *y*-direction acts as a trap. The classical Fick's model is local in character, and the fractional derivative is introduced to account for memory and nonlocal characteristics. As a further development, the distributed-order time-fractional derivative is obtained by integrating fractional-order derivatives over the order of differentiation. Equation (5) thus generalizes the continuous-time random walk description to account for various memory and nonlocal characteristics.

Anomalous diffusion in the comb model has been applied to many different fields in medicine and biology. In some numerical treatments of the comb model, infinite regions are approximated by rectangular domains with large sides [18]. However, by extending the computational modeling to irregular domains, we can broaden the potential applicability of the comb model. Based on these discussions, in this paper, the initial and boundary conditions are given by

$$u(\mathbf{x}, y, 0) = \phi\_0(\mathbf{x}, y), \ u\_t(\mathbf{x}, y, 0) = \phi\_1(\mathbf{x}, y), \ (\mathbf{x}, y) \in \overline{\Omega},\tag{6}$$

$$u(x, y, t) = 0, \ (x, y, t) \in \partial\Omega \times (0, T], \tag{7}$$

respectively, where Ω is an irregular convex domain.

The new governing Equation (5) is a generalization and development of the classical model for studying anomalous diffusion in the comb model. By choosing *ϕ*0(*α*) = *δ*(*α* − *α*0), *ϕ*1(*β*) = *δ*(*β* − *α*0 − 1), *d*<sup>+</sup>(*x*, *y*) = *d*<sup>−</sup>(*x*, *y*) = *kx*, *e*(*x*, *y*) = *ky*, *γ* = 1 and *f*(*x*, *y*, *t*) = 0, Equation (5) reduces to the time-fractional Cattaneo governing equation [14]

$$\xi \frac{\partial^{\alpha\_0+1}u(x,y,t)}{\partial t^{\alpha\_0+1}} + \frac{\partial^{\alpha\_0}u(x,y,t)}{\partial t^{\alpha\_0}} = k\_x\delta(y)\frac{\partial^2 u(x,y,t)}{\partial x^2} + k\_y\frac{\partial^2 u(x,y,t)}{\partial y^2}.$$

For the choice *ϕ*0(*α*) = *δ*(*α* − *α*0), *ξ* = 0, *d*<sup>+</sup>(*x*, *y*) = *kx*, *d*<sup>−</sup>(*x*, *y*) = 0, *e*(*x*, *y*) = *ky* and *f*(*x*, *y*, *t*) = 0, Equation (5) reduces to the time- and space-fractional governing equation [19]

$$\frac{\partial^{\alpha\_0} u(\mathbf{x}, y, t)}{\partial t^{\alpha\_0}} = k\_x \delta(y) \frac{\partial^{\gamma + 1} u(\mathbf{x}, y, t)}{\partial \mathbf{x}^{\gamma + 1}} + k\_y \frac{\partial^2 u(\mathbf{x}, y, t)}{\partial y^2}.$$

Finally, by choosing *ξ* = 0, *ϕ*0(*α*) = *δ*(*α* − 1), *d*<sup>+</sup>(*x*, *y*) = *d*<sup>−</sup>(*x*, *y*) = *kx*, *e*(*x*, *y*) = *ky*, *γ* = 1 and *f*(*x*, *y*, *t*) = 0, we obtain the anomalous diffusion equation based on the classical Fick's model [2]

$$\frac{\partial u(x, y, t)}{\partial t} = k\_x \delta(y) \frac{\partial^2 u(x, y, t)}{\partial x^2} + k\_y \frac{\partial^2 u(x, y, t)}{\partial y^2}.$$

The study of distributed-order time- and space-fractional diffusion-wave equations is of great significance to further improve and predict the anomalous diffusion phenomena in the comb-like structures.

#### **2. The Structure of the Paper**

In this paper, the two-dimensional irregular convex domain is defined as Ω = {(*x*, *y*) | *xL*(*y*) ≤ *x* ≤ *xR*(*y*), *yD*(*x*) ≤ *y* ≤ *yU*(*x*)}, where *xL*(*y*), *xR*(*y*), *yD*(*x*) and *yU*(*x*) are the left, right, lower and upper boundaries of Ω, respectively. Denote *x*min = min<sub>(*x*,*y*)∈Ω</sub> *xL*(*y*), *x*max = max<sub>(*x*,*y*)∈Ω</sub> *xR*(*y*), *y*min = min<sub>(*x*,*y*)∈Ω</sub> *yD*(*x*), *y*max = max<sub>(*x*,*y*)∈Ω</sub> *yU*(*x*). Then, the inner product is defined as

$$(u, v)\_{\Omega} = \int\_{y\_{\min}}^{y\_{\max}} \int\_{x\_L(y)}^{x\_R(y)} u(x, y)\, v(x, y)\, dx\, dy = \int\_{x\_{\min}}^{x\_{\max}} \int\_{y\_D(x)}^{y\_U(x)} u(x, y)\, v(x, y)\, dy\, dx,$$

and the *<sup>L</sup>*2-norm is given as ||*u*||*L*2(Ω) <sup>=</sup> (*u*, *u*)<sup>Ω</sup> 1/2.

The fractional derivative space in one dimension was first established by Roop and Ervin [20] and then developed further by Bu et al. [21,22], Yang et al. [23], Hao et al. [24] and Wang et al. [25,26]. Due to the special form of the governing equation, with a space-fractional derivative of order *γ* in *x* and an integer-order derivative in *y*, some new definitions and lemmas for the fractional derivative spaces are introduced.

**Definition 1.** *The definitions for the left (right) fractional derivative space with semi-norm and norm are, respectively, given as*

$$\begin{split} |u|\_{J\_L^{\gamma,1}(\Omega)} &= \left( \left\| {}\_L D\_x^{\gamma} u \right\|\_{L^2(\Omega)}^2 + \left\| \frac{\partial u}{\partial y} \right\|\_{L^2(\Omega)}^2 \right)^{1/2}, \quad ||u||\_{J\_L^{\gamma,1}(\Omega)} = \left( ||u||\_{L^2(\Omega)}^2 + |u|\_{J\_L^{\gamma,1}(\Omega)}^2 \right)^{1/2}, \\ |u|\_{J\_R^{\gamma,1}(\Omega)} &= \left( \left\| {}\_x D\_R^{\gamma} u \right\|\_{L^2(\Omega)}^2 + \left\| \frac{\partial u}{\partial y} \right\|\_{L^2(\Omega)}^2 \right)^{1/2}, \quad ||u||\_{J\_R^{\gamma,1}(\Omega)} = \left( ||u||\_{L^2(\Omega)}^2 + |u|\_{J\_R^{\gamma,1}(\Omega)}^2 \right)^{1/2}, \end{split}$$

*where* $J_L^{\gamma,1}(\Omega)$ ($J_R^{\gamma,1}(\Omega)$) *denotes the closure of* $C^{\infty}(\Omega)$ *with respect to* $||\cdot||_{J_L^{\gamma,1}(\Omega)}$ ($||\cdot||_{J_R^{\gamma,1}(\Omega)}$)*.*

**Definition 2.** *The fractional Sobolev space of order $\mu$, with its semi-norm and norm, is defined respectively as*

$$|u|_{H^{\mu}(\Omega)} = \left\| |\xi|^{\mu}\, \mathcal{F}(\hat{u})(\xi) \right\|_{L^2(\mathbb{R}^2)}, \qquad ||u||_{H^{\mu}(\Omega)} = \left( ||u||_{L^2(\Omega)}^2 + |u|_{H^{\mu}(\Omega)}^2 \right)^{1/2},$$

*where* $\mathcal{F}(\hat{u})(\xi)$ *is the Fourier transform of the function* $\hat{u}$*, and* $\hat{u}$ *is the zero extension of the function* $u$ *outside of* $\Omega$*;* $H^{\mu}(\Omega)$ *denotes the closure of* $C^{\infty}(\Omega)$ *with respect to* $||\cdot||_{H^{\mu}(\Omega)}$*.*

**Definition 3.** *For the symmetric fractional derivative space, when $\gamma \neq 1/2$, we define the semi-norm and norm*

$$|u|_{J_S^{\gamma,1}(\Omega)} = \left( \left| \left( {}_L D_x^{\gamma} u,\, {}_x D_R^{\gamma} u \right)_{\Omega} \right| + \left| \left( \frac{\partial u}{\partial y}, \frac{\partial u}{\partial y} \right)_{\Omega} \right| \right)^{1/2}, \qquad ||u||_{J_S^{\gamma,1}(\Omega)} = \left( ||u||_{L^2(\Omega)}^2 + |u|_{J_S^{\gamma,1}(\Omega)}^2 \right)^{1/2},$$

*where* $J_S^{\gamma,1}(\Omega)$ *denotes the closure of* $C^{\infty}(\Omega)$ *with respect to* $||\cdot||_{J_S^{\gamma,1}(\Omega)}$*.*

We denote $J_{L,0}^{\gamma,1}(\Omega)$, $J_{R,0}^{\gamma,1}(\Omega)$, $H_0^{\gamma}(\Omega)$, $H_0^1(\Omega)$ and $J_{S,0}^{\gamma,1}(\Omega)$ as the closures of $C_0^{\infty}(\Omega)$ with respect to $||\cdot||_{J_L^{\gamma,1}(\Omega)}$, $||\cdot||_{J_R^{\gamma,1}(\Omega)}$, $||\cdot||_{H^{\gamma}(\Omega)}$, $||\cdot||_{H^1(\Omega)}$ and $||\cdot||_{J_S^{\gamma,1}(\Omega)}$, respectively, where $C_0^{\infty}(\Omega)$ is the space of smooth functions with compact support in $\Omega$. Based on the above definitions, some useful and important lemmas are introduced.

**Lemma 1.** *For $u(x,y), v(x,y) \in J_{L,0}^{\gamma,1}(\Omega) \cap J_{R,0}^{\gamma,1}(\Omega)$ ($0 < \gamma < 1$), we have*

$$\left( {}_L D_x^{\gamma} u(x,y), \frac{\partial v(x,y)}{\partial x} \right) = -\left( {}_L D_x^{(\gamma+1)/2} u(x,y),\, {}_x D_R^{(\gamma+1)/2} v(x,y) \right),\tag{8}$$

$$\left( {}_x D_R^{\gamma} u(x,y), \frac{\partial v(x,y)}{\partial x} \right) = -\left( {}_x D_R^{(\gamma+1)/2} u(x,y),\, {}_L D_x^{(\gamma+1)/2} v(x,y) \right).\tag{9}$$

**Proof.** See [27].

**Lemma 2.** *For $u \in H_0^{(\gamma+1)/2}(\Omega)$ and $0 < \gamma < 1$, we have*

$$||u||_{L^2(\Omega)} \le C_2 \left\| {}_L D_x^{\gamma} u \right\|_{L^2(\Omega)} \le C_1 \left\| {}_L D_x^{(\gamma+1)/2} u \right\|_{L^2(\Omega)},\tag{10}$$

$$||u||_{L^2(\Omega)} \le C_4 \left\| {}_x D_R^{\gamma} u \right\|_{L^2(\Omega)} \le C_3 \left\| {}_x D_R^{(\gamma+1)/2} u \right\|_{L^2(\Omega)}.\tag{11}$$

*For $u \in H_0^1(\Omega)$, we have*

$$||u||_{L^2(\Omega)} \le C_5 \left\| \frac{\partial u}{\partial y} \right\|_{L^2(\Omega)},\tag{12}$$

*where $C_1$, $C_2$, $C_3$, $C_4$ and $C_5$ are positive constants independent of $u$.*

**Proof.** The proof of this Lemma follows that given in [28].

**Lemma 3.** *For $u(x,y) \in J_{L,0}^{\gamma,1}(\Omega) \cap J_{R,0}^{\gamma,1}(\Omega)$, we have*

$$\left( {}_L D_x^{(\gamma+1)/2} u,\, {}_x D_R^{(\gamma+1)/2} u \right) = \cos\left( \pi(\gamma+1)/2 \right) \left\| {}_L D_x^{(\gamma+1)/2} u \right\|_{L^2(\mathbb{R}^2)}^2,\tag{13}$$

$$\left( \frac{\partial u}{\partial y}, \frac{\partial u}{\partial y} \right) = \left\| \frac{\partial u}{\partial y} \right\|_{L^2(\mathbb{R}^2)}^2.\tag{14}$$

**Proof.** Similar to the derivation process in [27], we can obtain the results immediately.

**Lemma 4.** *If $\gamma > 0$ and $\gamma \neq n - 1/2$, $n \in \mathbb{N}$, then $J_{L,0}^{\gamma,1}(\Omega)$, $J_{R,0}^{\gamma,1}(\Omega)$, $J_{S,0}^{\gamma,1}(\Omega)$, $H_0^{\gamma}(\Omega)$ and $H_0^1(\Omega)$ are equivalent, with equivalent norms and semi-norms.*

**Proof.** The proof of this lemma can be found in [28].

#### **3. Derivation of the Finite Element Scheme for the Comb Model**

*3.1. Finite Element Fully Variational Formulation*

In the following section, for the sake of simplicity, denote *d*1(*x*, *y*) = *δ*(*y*)*d*+(*x*, *y*), *d*2(*x*, *y*) = *δ*(*y*)*d*−(*x*, *y*). Due to the irregular shape of the solution domain, the traditional rectangular mesh cannot be used. The finite element method is applied to obtain the solution of the governing Equation (5), subject to the initial conditions (6) and irregular boundary conditions (7).

Firstly, in the governing Equation (5), the distributed-order time-fractional derivatives are discretized using the mid-point quadrature rule [29]

$$\int_1^2 \varphi_1(\beta) \frac{\partial^{\beta}}{\partial t^{\beta}} \left( \frac{\partial^2 u}{\partial x^2} \right) d\beta = \sum_{i=0}^{M_1} \omega_i^{(1)} \frac{\partial^{\beta_i}}{\partial t^{\beta_i}} \left( \frac{\partial^2 u}{\partial x^2} \right) + O(h_{\beta}^2),$$

$$\int_0^1 \varphi_0(\alpha) \frac{\partial^{\alpha+1} u}{\partial t^{\alpha+1}}\, d\alpha = \sum_{i=0}^{M_2} \omega_i^{(2)} \frac{\partial^{\alpha_i+1} u}{\partial t^{\alpha_i+1}} + O(h_{\alpha}^2),$$

where $\omega_i^{(1)} = h_{\beta}\, \varphi_1(\beta_i)$, $\omega_i^{(2)} = h_{\alpha}\, \varphi_0(\alpha_i)$, $h_{\alpha} = \frac{1}{M_2+1}$ and $h_{\beta} = \frac{1}{M_1+1}$ are the fractional parameter steps, and $\alpha_i = \frac{ih_{\alpha} + (i+1)h_{\alpha}}{2}$ for $i = 0, 1, \dots, M_2$, $\beta_i = 1 + \frac{ih_{\beta} + (i+1)h_{\beta}}{2}$ for $i = 0, 1, \dots, M_1$.
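As a quick illustration of this mid-point discretization, the nodes $\alpha_i$ and weights $\omega_i^{(2)}$ can be tabulated in a few lines. This is an editor's sketch, not the paper's code; the density $\varphi_0(\alpha) = 2\alpha$ and the value $M_2 = 9$ are assumed for the example. Since the mid-point rule is exact for linear densities, the weights must sum to $\int_0^1 2\alpha\, d\alpha = 1$.

```python
import numpy as np

def midpoint_nodes_weights(phi, M, a=0.0, b=1.0):
    """Mid-point rule for int_a^b phi(s) ds with M + 1 sub-intervals:
    returns the mid-point nodes s_i and weights h * phi(s_i)."""
    h = (b - a) / (M + 1)                    # fractional parameter step
    s = a + (np.arange(M + 1) + 0.5) * h     # mid-points s_i
    return s, h * phi(s)

# nodes alpha_i and weights omega_i^{(2)} for phi_0(alpha) = 2*alpha, M_2 = 9
alpha_i, w2 = midpoint_nodes_weights(lambda s: 2 * s, M=9)
print(w2.sum())  # exact for linear densities: int_0^1 2s ds = 1
```

The same helper, called with $a = 1$, $b = 2$ and a density $\varphi_1$, produces the nodes $\beta_i$ and weights $\omega_i^{(1)}$.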

Let $\tau = T/N$ be the time step and $t_k = k\tau$ ($k = 0, 1, \dots, N$), where $N$ is a positive integer. Denote $u^{k-1/2} = \frac{u^k + u^{k-1}}{2}$ for $k = 1, \dots, N$. For $u(x,y,t) \in C(\Omega \times [0,T])$, denote $u^k = u^k(\cdot) = u(\cdot, t_k)$. We introduce $\nabla_t u^{k-1/2} = \frac{u^k - u^{k-1}}{\tau}$. At $t = t_{k-1/2}$, the L2-scheme [30] to approximate the fractional derivative of order $\beta_i$ ($1 < \beta_i < 2$) in the Caputo sense is given as

$$\frac{\partial^{\beta_i} u^{k-1/2}}{\partial t^{\beta_i}} = \frac{\tau^{1-\beta_i}}{\Gamma(3-\beta_i)} \left[ a_0^{(\beta_i)} \nabla_t u^{k-1/2} - \sum_{j=1}^{k-1} \left( a_{k-1-j}^{(\beta_i)} - a_{k-j}^{(\beta_i)} \right) \nabla_t u^{j-1/2} - a_{k-1}^{(\beta_i)} u_t^0 \right] + R_0^{k,\beta_i} = \nabla_t^{(\beta_i)} u^{k-1/2} + R_0^{k,\beta_i},\tag{15}$$

where $a_j^{(\beta_i)} = (j+1)^{2-\beta_i} - j^{2-\beta_i}$, $j = 0, 1, 2, \dots, k-1$, and $|R_0^{k,\beta_i}| \le C \tau^{3-\beta_i}$.
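The analysis below relies on the weights $a_j^{(\beta_i)}$ being positive and monotonically decreasing, so that the history differences $a_{k-1-j}^{(\beta_i)} - a_{k-j}^{(\beta_i)}$ are non-negative. A quick numerical check (an editor's sketch; $\beta_i = 1.5$ and $k = 50$ are assumed):

```python
import numpy as np

def a_coeffs(beta, k):
    """L2-scheme weights a_j = (j + 1)^(2 - beta) - j^(2 - beta), j = 0..k-1."""
    j = np.arange(k, dtype=float)
    return (j + 1) ** (2 - beta) - j ** (2 - beta)

a = a_coeffs(beta=1.5, k=50)
# a_0 = 1 and the weights decrease monotonically, so the differences
# a_{k-1-j} - a_{k-j} that enter the history sum are non-negative
print(a[0], bool(np.all(np.diff(a) < 0)))
```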

**Lemma 5.** *For the above $a_j^{(\beta_i)}$, any vector $S = [S_1, S_2, S_3, \dots, S_N]^T$ and any constant $P$, it holds that*

$$\frac{2\tau^{1-\beta_i}}{\Gamma(3-\beta_i)} \sum_{k=1}^N \left[ a_0^{(\beta_i)} S_k - \sum_{j=1}^{k-1} \left( a_{k-j-1}^{(\beta_i)} - a_{k-j}^{(\beta_i)} \right) S_j - a_{k-1}^{(\beta_i)} P \right] S_k \ge \frac{T^{1-\beta_i}}{\Gamma(2-\beta_i)} \sum_{k=1}^N S_k^2 - \frac{T^{2-\beta_i}}{\tau\Gamma(3-\beta_i)} P^2.$$

**Proof.** See [30].

The time Caputo fractional derivative of order *α<sup>i</sup>* (0 < *α<sup>i</sup>* < 1) is discretized by using the L1-scheme [30], and the scheme at *<sup>t</sup>* = *tk*−1/2 is given as

$$\frac{\partial^{\alpha_i} u^{k-1/2}}{\partial t^{\alpha_i}} = \frac{\tau^{1-\alpha_i}}{2\Gamma(2-\alpha_i)} \sum_{j=1}^{k} b_{k-j}^{(\alpha_i)} \nabla_t u^{j-1/2} + \frac{\tau^{1-\alpha_i}}{2\Gamma(2-\alpha_i)} \sum_{j=1}^{k-1} b_{k-j-1}^{(\alpha_i)} \nabla_t u^{j-1/2} + R_1^{k,\alpha_i} = \nabla_t^{(\alpha_i)} u^{k-1/2} + R_1^{k,\alpha_i},\tag{16}$$

where $b_j^{(\alpha_i)} = (j+1)^{1-\alpha_i} - j^{1-\alpha_i}$, $j = 0, 1, 2, \dots, k-1$, and $|R_1^{k,\alpha_i}| \le C \tau^{2-\alpha_i}$.
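To see scheme (16) in action, the following sketch (an editor's illustration, not the paper's code) applies the averaged L1 formula to $u(t) = t^2$, whose Caputo derivative $2t^{2-\alpha_i}/\Gamma(3-\alpha_i)$ is known in closed form; the discretization parameters are assumptions for the example.

```python
import numpy as np
from math import gamma

def l1_halfpoint(u, tau, alpha, k):
    """Averaged L1 scheme (16) for the Caputo derivative of order alpha
    at t_{k-1/2}; u holds the samples u^0, ..., u^k."""
    b = lambda j: (j + 1) ** (1 - alpha) - j ** (1 - alpha)
    du = np.diff(u[:k + 1]) / tau                 # nabla_t u^{j-1/2}, j = 1..k
    s1 = sum(b(k - j) * du[j - 1] for j in range(1, k + 1))
    s2 = sum(b(k - j - 1) * du[j - 1] for j in range(1, k))
    return tau ** (1 - alpha) / (2 * gamma(2 - alpha)) * (s1 + s2)

alpha, N, T = 0.5, 200, 1.0                       # assumed example parameters
tau = T / N
t = np.linspace(0.0, T, N + 1)
approx = l1_halfpoint(t ** 2, tau, alpha, N)
exact = 2 * (T - tau / 2) ** (2 - alpha) / gamma(3 - alpha)  # Caputo deriv. of t^2
print(abs(approx - exact))                        # small, of order tau^{2 - alpha}
```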

**Lemma 6.** *For the above $b_j^{(\alpha_i)}$, any positive integer $M$ and any vector $[v_1, v_2, v_3, \dots, v_M] \in \mathbb{R}^M$, we have*

$$\sum_{k=1}^{M} \sum_{j=1}^{k} b_{k-j}^{(\alpha_i)} \left( v_j, v_k \right) + \sum_{k=1}^{M} \sum_{j=1}^{k-1} b_{k-j-1}^{(\alpha_i)} \left( v_j, v_k \right) \ge 0.$$

**Proof.** See [31].
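Lemma 6 can be probed numerically for scalar sequences, where $(v_j, v_k)$ reduces to an ordinary product. The sketch below (an editor's check, with the test vectors and orders $\alpha_i$ assumed) evaluates the left-hand side for a few random vectors:

```python
import numpy as np

def quad_form(v, alpha):
    """Left-hand side of Lemma 6 for a scalar sequence v_1..v_M,
    with (v_j, v_k) taken as the ordinary product."""
    M = len(v)
    b = lambda j: (j + 1) ** (1 - alpha) - j ** (1 - alpha)
    s = 0.0
    for k in range(1, M + 1):
        for j in range(1, k + 1):
            s += b(k - j) * v[j - 1] * v[k - 1]
        for j in range(1, k):
            s += b(k - j - 1) * v[j - 1] * v[k - 1]
    return s

rng = np.random.default_rng(0)
vals = [quad_form(rng.standard_normal(30), al) for al in (0.2, 0.5, 0.8)]
print(min(vals) >= 0.0)  # the quadratic form is non-negative
```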

At time *<sup>t</sup>* = *tk*−1/2, the semi-discrete scheme for the governing Equation (5) is given as follows:

$$\xi \sum_{i=0}^{M_1} \omega_i^{(1)} \nabla_t^{(\beta_i)} u^{k-1/2} + \sum_{i=0}^{M_2} \omega_i^{(2)} \nabla_t^{(\alpha_i)} u^{k-1/2} = \frac{\partial}{\partial x} \left[ d_1(x,y)\, {}_L D_x^{\gamma} u^{k-1/2} - d_2(x,y)\, {}_x D_R^{\gamma} u^{k-1/2} \right] + e(x,y) \frac{\partial^2 u^{k-1/2}}{\partial y^2} + f^{k-1/2}.\tag{17}$$

Define $V = H_0^{\gamma}(\Omega) \cap H_0^1(\Omega)$ to be the numerical solution space. In this work, we choose triangular elements to mesh $\Omega$. Because the domain is irregularly shaped, we refer to this throughout as an unstructured mesh. Denote by $\{\Gamma_h\}$ a family of unstructured triangulations of the domain $\Omega$, where $h$ is the maximum diameter of any triangle in $\Gamma_h$. Then, we obtain the conforming finite element subspace $V_h \subset V$ as

$$V_h = \left\{ v_h \,\middle|\, v_h \in C(\overline{\Omega}) \cap V,\ v_h|_K \text{ is linear for all } K \in \Gamma_h \text{ and } v_h|_{\partial\Omega} = 0 \right\}.$$

Assume that $u_h^k$ is the approximation of $u(x,y,t)$ at time $t = t_k$. We can derive the fully discrete formulation of (5)–(7): find $u_h^k \in V_h$, for any $k = 0, 1, \dots, N$, such that

$$\begin{split} &\xi \sum_{i=0}^{M_1} \frac{\omega_i^{(1)} \tau^{1-\beta_i}}{\Gamma(3-\beta_i)} \left[ a_0^{(\beta_i)} \left( \nabla_t u_h^{k-1/2}, v_h \right) - \sum_{j=1}^{k-1} \left( a_{k-1-j}^{(\beta_i)} - a_{k-j}^{(\beta_i)} \right) \left( \nabla_t u_h^{j-1/2}, v_h \right) - a_{k-1}^{(\beta_i)} \left( (u_h^0)_t, v_h \right) \right] \\ &+ \frac{1}{2} \sum_{i=0}^{M_2} \frac{\omega_i^{(2)} \tau^{1-\alpha_i}}{\Gamma(2-\alpha_i)} \sum_{j=1}^{k} b_{k-j}^{(\alpha_i)} \left( \nabla_t u_h^{j-1/2}, v_h \right) + \frac{1}{2} \sum_{i=0}^{M_2} \frac{\omega_i^{(2)} \tau^{1-\alpha_i}}{\Gamma(2-\alpha_i)} \sum_{j=1}^{k-1} b_{k-j-1}^{(\alpha_i)} \left( \nabla_t u_h^{j-1/2}, v_h \right) \\ &= -B\left( u_h^{k-1/2}, v_h \right) + \left( f^{k-1/2}, v_h \right), \end{split}\tag{18}$$

with the initial conditions and boundary conditions given by

$$u_h^0 = u_{0h}, \qquad u_h^k \big|_{\partial\Omega} = 0,\tag{19}$$

where $u_{0h} \in V_h$ is a reasonable approximation of $u_0$. The expression for $B(u,v)$ is given as

$$B(u,v) = \left( d_1(x,y)\, {}_L D_x^{\gamma} u, \frac{\partial v}{\partial x} \right) - \left( d_2(x,y)\, {}_x D_R^{\gamma} u, \frac{\partial v}{\partial x} \right) + \left( e(x,y) \frac{\partial u}{\partial y}, \frac{\partial v}{\partial y} \right).\tag{20}$$

#### *3.2. Implementation of Finite Element Method with an Unstructured Mesh*

In this section, we provide details of the implementation of the finite element method with an unstructured mesh. Firstly, we use the software Gmsh [32] to partition the convex domain $\Omega$ into unstructured triangular elements $e_p$; denote by $N_e$ the total number of triangles and by $N_p$ the number of nodes. Using piecewise linear polynomials on every triangular element $e_p$, at each time step we can write $u_h^k$ in the form $u_h^k = \sum_{n=1}^{N_p} u_n^k \phi_n(x,y)$, where $\phi_n(x,y)$ is the basis function and $u_n^k$ is the unknown to be solved for. The basis functions satisfy $\phi_n(x_m, y_m) = \delta_{nm}$ ($n, m = 1, 2, \dots, N_p$), where $\delta_{nm}$ denotes the Kronecker delta. Taking $v_h = \phi_m(x,y)$, define the mass matrix $M = \left( (\phi_n, \phi_m) \right)_{N_p \times N_p}$, the stiffness matrix $A = \left( B(\phi_n, \phi_m) \right)_{N_p \times N_p}$, $F^k = \left( F_1^k, F_2^k, \dots, F_{N_p}^k \right)^T$ with $F_m^k = \left( \frac{f^k + f^{k-1}}{2}, \phi_m \right)$, $\omega_m^0 = \left( (u_h^0)_t, \phi_m \right)$, $U^k = \left( u_1^k, u_2^k, \dots, u_{N_p}^k \right)^T$ and $W^0 = \left( \omega_1^0, \omega_2^0, \dots, \omega_{N_p}^0 \right)^T$; then we can rewrite (18) in matrix form as follows

$$\begin{split} &\left[ \left( 2\xi \sum_{i=0}^{M_1} \omega_i a_0^{(\beta_i)} + 2 \sum_{i=0}^{M_2} r_i b_0^{(\alpha_i)} \right) M + \tau A \right] U^k \\ &= \left[ \left( 2\xi \sum_{i=0}^{M_1} \omega_i a_0^{(\beta_i)} + 2 \sum_{i=0}^{M_2} r_i b_0^{(\alpha_i)} \right) M - \tau A \right] U^{k-1} + 2\xi\tau \sum_{i=0}^{M_1} \omega_i a_{k-1}^{(\beta_i)} W^0 + 2\tau F^k \\ &\quad + 2\xi \sum_{i=0}^{M_1} \omega_i \sum_{j=1}^{k-1} \left( a_{k-1-j}^{(\beta_i)} - a_{k-j}^{(\beta_i)} \right) \left[ M U^j - M U^{j-1} \right] - 2 \sum_{i=0}^{M_2} r_i \sum_{j=1}^{k-1} \left( b_{k-j}^{(\alpha_i)} + b_{k-j-1}^{(\alpha_i)} \right) \left[ M U^j - M U^{j-1} \right], \end{split}$$

where $\omega_i = \frac{\omega_i^{(1)} \tau^{1-\beta_i}}{\Gamma(3-\beta_i)}$ and $r_i = \frac{\omega_i^{(2)} \tau^{1-\alpha_i}}{2\Gamma(2-\alpha_i)}$.
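The recursion above can be organized as a time-stepping loop. The following sketch (an editor's illustration, not the paper's code) shows the scalar case $N_p = 1$, with $\varphi_0 = \varphi_1 = 1$ and the remaining parameters assumed. With $A = 0$, $f = 0$ and zero initial rate, all history increments vanish and the loop reproduces $U^k = U^{k-1}$ exactly, which serves as a sanity check of the loop structure.

```python
import numpy as np
from math import gamma

# assumed toy parameters (not from the paper)
xi, T, N = 1.0, 1.0, 64
tau = T / N
M1 = M2 = 3
h_b, h_a = 1.0 / (M1 + 1), 1.0 / (M2 + 1)
beta = 1.0 + (np.arange(M1 + 1) + 0.5) * h_b       # mid-point nodes in (1, 2)
alph = (np.arange(M2 + 1) + 0.5) * h_a             # mid-point nodes in (0, 1)
w = np.array([h_b * tau ** (1 - b) / gamma(3 - b) for b in beta])        # omega_i
r = np.array([h_a * tau ** (1 - a) / (2 * gamma(2 - a)) for a in alph])  # r_i
a_ = lambda bi, j: (j + 1) ** (2 - bi) - j ** (2 - bi)   # a_j^{(beta_i)}
b_ = lambda ai, j: (j + 1) ** (1 - ai) - j ** (1 - ai)   # b_j^{(alpha_i)}

def advance(Amat, Mmat, U0, W0, F):
    """March the scalar (N_p = 1) version of the matrix recursion."""
    C = 2 * xi * w.sum() + 2 * r.sum()             # uses a_0 = b_0 = 1
    U = [U0]
    for k in range(1, N + 1):
        rhs = (C * Mmat - tau * Amat) * U[k - 1] + 2 * tau * F
        rhs += 2 * xi * tau * sum(w[i] * a_(beta[i], k - 1)
                                  for i in range(M1 + 1)) * W0
        for j in range(1, k):                      # history terms
            dU = Mmat * (U[j] - U[j - 1])
            rhs += 2 * xi * sum(w[i] * (a_(beta[i], k - 1 - j) - a_(beta[i], k - j))
                                for i in range(M1 + 1)) * dU
            rhs -= 2 * sum(r[i] * (b_(alph[i], k - j) + b_(alph[i], k - j - 1))
                           for i in range(M2 + 1)) * dU
        U.append(rhs / (C * Mmat + tau * Amat))
    return np.array(U)

# sanity check: with A = 0, f = 0 and zero initial rate, U is conserved exactly
U = advance(Amat=0.0, Mmat=1.0, U0=1.0, W0=0.0, F=0.0)
print(abs(U[-1] - 1.0))  # 0 up to rounding
```

For a real mesh, `Amat` and `Mmat` become the $N_p \times N_p$ stiffness and mass matrices, and the per-step solve is a sparse linear system with the same constant left-hand side at every step.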

The critical point to obtain the solution is to approximate the first and the second terms of *B*(*ϕn*, *ϕm*). By applying the Gauss quadrature [23], we obtain

$$\left( d_1(x,y)\, {}_L D_x^{\gamma} \phi_n, \frac{\partial \phi_m}{\partial x} \right) = \sum_{E \in \Gamma_h} \int_E d_1(x,y)\, {}_L D_x^{\gamma} \phi_n \frac{\partial \phi_m}{\partial x}\, dx\, dy \approx \sum_{E \in \Gamma_h} \sum_{(x_{ci},\, y_{ci}) \in G_E} \lambda_i\, d_1(x_{ci}, y_{ci})\, {}_L D_x^{\gamma} \phi_n \Big|_{(x_{ci}, y_{ci})} \frac{\partial \phi_m}{\partial x} \Big|_{(x_{ci}, y_{ci})},$$

$$\left( d_2(x,y)\, {}_x D_R^{\gamma} \phi_n, \frac{\partial \phi_m}{\partial x} \right) = \sum_{E \in \Gamma_h} \int_E d_2(x,y)\, {}_x D_R^{\gamma} \phi_n \frac{\partial \phi_m}{\partial x}\, dx\, dy \approx \sum_{E \in \Gamma_h} \sum_{(x_{ci},\, y_{ci}) \in G_E} \kappa_i\, d_2(x_{ci}, y_{ci})\, {}_x D_R^{\gamma} \phi_n \Big|_{(x_{ci}, y_{ci})} \frac{\partial \phi_m}{\partial x} \Big|_{(x_{ci}, y_{ci})},$$

where $G_E$ is the set of Gauss points in a given element $E$ and $\lambda_i$, $\kappa_i$ are the weight coefficients corresponding to the Gauss points $(x_{ci}, y_{ci})$. In this article, we use four Gauss points in each triangle.
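For reference, a standard 4-point, degree-3 Gauss rule on triangles is one possible choice of four Gauss points (the paper does not specify which rule it uses; this sketch is the editor's). It integrates polynomials up to degree 3 exactly, e.g. $\int x^2 y\, dx\, dy = 1/60$ over the unit triangle:

```python
import numpy as np

# 4-point, degree-3 Gauss rule on a triangle: barycentric points and weights
BARY = np.array([[1/3, 1/3, 1/3],
                 [0.6, 0.2, 0.2],
                 [0.2, 0.6, 0.2],
                 [0.2, 0.2, 0.6]])
WTS = np.array([-27/48, 25/48, 25/48, 25/48])

def quad_triangle(f, verts):
    """Integrate f(x, y) over the triangle whose vertices are the rows of verts (3x2)."""
    pts = BARY @ verts                        # Gauss points (x_ci, y_ci)
    (x1, y1), (x2, y2), (x3, y3) = verts
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return area * sum(w * f(x, y) for w, (x, y) in zip(WTS, pts))

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # unit triangle
print(quad_triangle(lambda x, y: x**2 * y, tri))       # exact value 1/60
```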


The detailed computation process is summarized in Algorithm 1.

**Algorithm 1** Calculate $\left( d_1(x,y)\, {}_L D_x^{\gamma} \phi_n, \frac{\partial \phi_m}{\partial x} \right)$ and $\left( d_2(x,y)\, {}_x D_R^{\gamma} \phi_n, \frac{\partial \phi_m}{\partial x} \right)$ using the finite element method on an unstructured mesh


#### **4. Stability and Convergence**

In this section, we analyze the stability and convergence of the discrete scheme on the irregular convex domain. In the following, for the sake of simplicity, we consider the constant-coefficient case $d_1(x,y) = d_2(x,y) = e(x,y) = 1$. Denote $(\cdot,\cdot) = (\cdot,\cdot)_{\Omega}$ and $||\cdot||_0 = ||\cdot||_{L^2(\Omega)}$. Prior to presenting the numerical analysis, we first give the definitions of the semi-norm $|\cdot|_{(\gamma,1)}$ and norm $||\cdot||_{(\gamma,1)}$

$$|u|_{(\gamma,1)} = \left( \left\| {}_L D_x^{(\gamma+1)/2} u \right\|_0^2 + \left\| \frac{\partial u}{\partial y} \right\|_0^2 \right)^{1/2},\tag{21}$$

$$||u||\_{\left(\gamma,1\right)} = \left(||u||\_0^2 + |u|\_{\left(\gamma,1\right)}^2\right)^{1/2}.\tag{22}$$

In what follows, the constant *C* may be different in various sections.

#### *4.1. Stability*

**Lemma 7.** *For any $u \in V$, the semi-norm $|u|_{(\gamma,1)}$ and norm $||u||_{(\gamma,1)}$ are equivalent, and there exist positive constants $C_1$ and $C_2$, independent of $u$, such that*

$$C\_1 ||u||\_{\left(\gamma,1\right)} \le |u|\_{\left(\gamma,1\right)} \le ||u||\_{\left(\gamma,1\right)} \le C\_2 |u|\_{H^1\left(\Omega\right)}.$$

**Proof.** From Lemma 2, we immediately have

$$||u||\_0 \le C||\_L D\_x^{(\gamma+1)/2} u||\_0 \le C|u|\_{(\gamma,1)}.$$

By applying Lemma 2 and the Definitions (21) and (22), we have

$$||u||_{(\gamma,1)} = \left( ||u||_0^2 + |u|_{(\gamma,1)}^2 \right)^{1/2} \le C |u|_{(\gamma,1)} \le C |u|_{J_L^{\gamma,1}(\Omega)} \le C_2 |u|_{H^1(\Omega)}.$$

By using Lemma 4 and applying the definitions of the norm and semi-norm, we have

$$||u||\_{\left(\gamma,1\right)} = \left(||u||\_0^2 + |u|\_{\left(\gamma,1\right)}^2\right)^{1/2} \ge \left(|u|\_{\left(\gamma,1\right)}^2\right)^{1/2} = |u|\_{\left(\gamma,1\right)}.\tag{23}$$

The proof is completed.

**Lemma 8.** *For any $u, v \in H_0^{\gamma}(\Omega) \cap H_0^1(\Omega)$, there exist constants $C_1$ and $C_2$ such that the bilinear form $B(u,v)$ satisfies $|B(u,v)| \le C_1 ||u||_{(\gamma,1)} ||v||_{(\gamma,1)}$ and $B(u,u) \ge C_2 ||u||_{(\gamma,1)}^2$.*

**Proof.** Firstly, by using Lemmas 1, 4 and 7, the definitions (21) and (22) and applying the Cauchy–Schwarz inequality, namely $(u,v) \le ||u||_0 ||v||_0$, we have

$$\begin{split} |B(u,v)| &\le \left| \left( {}_L D_x^{(\gamma+1)/2} u,\, {}_x D_R^{(\gamma+1)/2} v \right) \right| + \left| \left( {}_x D_R^{(\gamma+1)/2} u,\, {}_L D_x^{(\gamma+1)/2} v \right) \right| + \left| \left( \frac{\partial u}{\partial y}, \frac{\partial v}{\partial y} \right) \right| \\ &\le \left\| {}_L D_x^{(\gamma+1)/2} u \right\|_0 \left\| {}_x D_R^{(\gamma+1)/2} v \right\|_0 + \left\| {}_x D_R^{(\gamma+1)/2} u \right\|_0 \left\| {}_L D_x^{(\gamma+1)/2} v \right\|_0 + \left\| \frac{\partial u}{\partial y} \right\|_0 \left\| \frac{\partial v}{\partial y} \right\|_0 \\ &\le C \left( |u|_{(\gamma,1)} |v|_{(\gamma,1)} + |u|_{(\gamma,1)} |v|_{(\gamma,1)} + |u|_{(\gamma,1)} |v|_{(\gamma,1)} \right) \le C ||u||_{(\gamma,1)} ||v||_{(\gamma,1)}, \end{split}$$

$$\begin{split} B(u,u) &\ge \left| \left( {}_L D_x^{(\gamma+1)/2} u,\, {}_x D_R^{(\gamma+1)/2} u \right) \right| + \left| \left( {}_x D_R^{(\gamma+1)/2} u,\, {}_L D_x^{(\gamma+1)/2} u \right) \right| + \left( \frac{\partial u}{\partial y}, \frac{\partial u}{\partial y} \right) \\ &\ge C \left[ \left| \left( {}_L D_x^{(\gamma+1)/2} u,\, {}_x D_R^{(\gamma+1)/2} u \right) \right| + \left( \frac{\partial u}{\partial y}, \frac{\partial u}{\partial y} \right) \right] \ge C |u|_{H^1(\Omega)}^2 \ge C ||u||_{(\gamma,1)}^2. \end{split}$$

**Theorem 1.** *(Stability) The fully discrete scheme (18) is unconditionally stable and it holds that*

$$||u_h^N||_{(\gamma,1)}^2 \le C \left[ \max_{1 \le k \le N} ||f^{k-1/2}||_0^2 + ||\phi_1||_0^2 + ||\phi_0||_{(\gamma,1)}^2 \right].$$

**Proof.** Take $v_h = \nabla_t u_h^{k-1/2}$. Multiplying each term by $2\tau$ and summing $k$ from 1 to $N$, the discrete scheme (18) becomes

$$2\tau\xi \sum_{i=0}^{M_1} \omega_i^{(1)} \sum_{k=1}^N \left( \nabla_t^{(\beta_i)} u_h^{k-1/2}, \nabla_t u_h^{k-1/2} \right) + 2\tau \sum_{k=1}^N \sum_{i=0}^{M_2} \omega_i^{(2)} \left( \nabla_t^{(\alpha_i)} u_h^{k-1/2}, \nabla_t u_h^{k-1/2} \right)$$

$$+\, 2\tau \sum_{k=1}^N B\left( u_h^{k-1/2}, \nabla_t u_h^{k-1/2} \right) - 2\tau \sum_{k=1}^N \left( f^{k-1/2}, \nabla_t u_h^{k-1/2} \right) = 0.\tag{24}$$

For the first term with the initial condition (*u*<sup>0</sup> *<sup>h</sup>*)*<sup>t</sup>* = *φ*1, by using Lemma 5, we have

$$2\tau\xi \sum_{i=0}^{M_1} \omega_i^{(1)} \sum_{k=1}^N \left( \nabla_t^{(\beta_i)} u_h^{k-1/2}, \nabla_t u_h^{k-1/2} \right) \ge 2\tau\xi a_1 \sum_{k=1}^N \left\| \nabla_t u_h^{k-1/2} \right\|_0^2 - \xi a_2 ||\phi_1||_0^2,\tag{25}$$

where $a_1 = \sum_{i=0}^{M_1} \frac{\omega_i^{(1)} T^{1-\beta_i}}{2\Gamma(2-\beta_i)}$ and $a_2 = \sum_{i=0}^{M_1} \frac{\omega_i^{(1)} T^{2-\beta_i}}{\Gamma(3-\beta_i)}$.

By applying Lemma 6, we have

$$2\tau \sum_{k=1}^N \sum_{i=0}^{M_2} \omega_i^{(2)} \left( \nabla_t^{(\alpha_i)} u_h^{k-1/2}, \nabla_t u_h^{k-1/2} \right) \ge 0.\tag{26}$$

Define a symmetric and continuous bilinear form

$$B_0(u,v) = \left| \left( {}_L D_x^{(\gamma+1)/2} u,\, {}_x D_R^{(\gamma+1)/2} v \right) \right| + \left| \left( {}_x D_R^{(\gamma+1)/2} u,\, {}_L D_x^{(\gamma+1)/2} v \right) \right| + \left| \left( \frac{\partial u}{\partial y}, \frac{\partial v}{\partial y} \right) \right|;\tag{27}$$

then we have $B(u,v) \le C B_0(u,v)$. For the newly defined form, $B_0\!\left( u_h^{k-1/2}, \nabla_t u_h^{k-1/2} \right) = \frac{1}{2\tau} \left[ B_0(u_h^k, u_h^k) - B_0(u_h^{k-1}, u_h^{k-1}) \right]$ [21,22]. Performing the summation of $k$ from 1 to $N$, we derive the following inequality

$$2\tau \sum_{k=1}^{N} B\left( u_h^{k-1/2}, \nabla_t u_h^{k-1/2} \right) \ge 2\tau C \sum_{k=1}^{N} B_0\left( u_h^{k-1/2}, \nabla_t u_h^{k-1/2} \right)$$

$$= 2\tau C \sum_{k=1}^{N} \frac{1}{2\tau} \left[ B_0\left( u_h^k, u_h^k \right) - B_0\left( u_h^{k-1}, u_h^{k-1} \right) \right] = C \left( B_0\left( u_h^N, u_h^N \right) - B_0\left( u_h^0, u_h^0 \right) \right).\tag{28}$$

By using the important inequality $2ab \le \frac{a^2}{2\varepsilon} + 2\varepsilon b^2$, the last term of (24) can be estimated as

$$\begin{split} \sum_{k=1}^{N} 2\tau \left( f^{k-1/2}, \nabla_t u_h^{k-1/2} \right) &\le \sum_{k=1}^{N} \tau \left[ \frac{||f^{k-1/2}||_0^2}{2 a_1 \xi} + 2 a_1 \xi \left\| \nabla_t u_h^{k-1/2} \right\|_0^2 \right] \\ &\le \frac{T}{2 a_1 \xi} \max_{1 \le k \le N} ||f^{k-1/2}||_0^2 + 2 a_1 \xi \sum_{k=1}^{N} \tau \left\| \nabla_t u_h^{k-1/2} \right\|_0^2. \end{split}\tag{29}$$

Then, by combining the inequalities (25)–(29), Equation (24) leads to

$$B(u_h^N, u_h^N) \le C \left[ \frac{T}{2 a_1 \xi} \max_{1 \le k \le N} ||f^{k-1/2}||_0^2 + a_2 \xi ||\phi_1||_0^2 + B(u_h^0, u_h^0) \right].$$

Applying Lemma 8, we have

$$||u_h^N||_{(\gamma,1)}^2 \le C \left[ \max_{1 \le k \le N} ||f^{k-1/2}||_0^2 + ||\phi_1||_0^2 + ||\phi_0||_{(\gamma,1)}^2 \right].$$

Therefore, the scheme is unconditionally stable.

#### *4.2. Convergence*

Prior to providing the convergence of the discrete scheme, we first give an approximation property. Define the interpolation operator $I_h : H^{s+1}(\Omega) \to V_h$; for any $u \in H^{\mu}(\Omega)$, $1 < \mu \le s+1$, there exists a constant $C$ depending only on $\Omega$ such that [28]

$$||u - I_h u||_{H^1(\Omega)} \le C h^{\mu-1} ||u||_{H^{\mu}(\Omega)}.\tag{30}$$
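Estimate (30) can be illustrated numerically; the sketch below (an editor's example, with the 1D test function $\sin(\pi x)$ assumed) measures the $H^1$-seminorm error of piecewise-linear interpolation and confirms the first-order rate $h^{\mu-1}$ with $\mu = 2$:

```python
import numpy as np

def h1_error(n):
    """H^1-seminorm error of the piecewise-linear interpolant of sin(pi x)
    on [0, 1] with n uniform elements (fine-grid trapezoidal quadrature)."""
    x = np.linspace(0.0, 1.0, n + 1)
    slopes = np.diff(np.sin(np.pi * x)) / np.diff(x)   # (I_h u)' on each element
    err2 = 0.0
    for k in range(n):
        s = np.linspace(x[k], x[k + 1], 201)
        d = (np.pi * np.cos(np.pi * s) - slopes[k]) ** 2   # (u' - (I_h u)')^2
        err2 += np.sum((d[:-1] + d[1:]) / 2 * np.diff(s))
    return np.sqrt(err2)

e1, e2 = h1_error(8), h1_error(16)
print(e1 / e2)  # close to 2: halving h halves the H^1 error (mu - 1 = 1)
```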

For any *u* ∈ *V* and *vh* ∈ *Vh*, we define a projection operator *Ph*: *V* → *Vh* possessing the following property

$$B(P_h u, v_h) = B(u, v_h).\tag{31}$$

Then, we have the following lemma.

**Lemma 9.** *If <sup>u</sup>* <sup>∈</sup> *<sup>H</sup>μ*(Ω) <sup>∩</sup> *V,* <sup>1</sup> <sup>&</sup>lt; *<sup>μ</sup>* <sup>≤</sup> *<sup>s</sup>* <sup>+</sup> <sup>1</sup>*, there exists a constant <sup>C</sup> independent of <sup>h</sup> and <sup>u</sup> such that*

$$||P_h u - u||_{(\gamma,1)} \le C h^{\mu-1} ||u||_{\mu}.\tag{32}$$

**Proof.** Because

$$||P_h u - u||_{(\gamma,1)}^2 \le C B\left( P_h u - u, P_h u - u \right) \le C B\left( P_h u - u, I_h u - u \right),$$

and

$$B(P_h u - u, I_h u - u) \le C ||P_h u - u||_{(\gamma,1)} ||I_h u - u||_{(\gamma,1)}.$$

Using the approximation properties, we have

$$||P_h u - u||_{(\gamma,1)} \le C ||I_h u - u||_{(\gamma,1)} \le C ||I_h u - u||_{H^1(\Omega)} \le C h^{\mu-1} ||u||_{\mu}.$$

**Theorem 2.** *(Convergence) Assume that $u^N = u(x,y,t_N)$ is the exact solution with $u,\ \frac{\partial^{\alpha_i} u}{\partial t^{\alpha_i}},\ \frac{\partial^{\beta_i} u}{\partial t^{\beta_i}} \in L^{\infty}(H^{\mu}(\Omega); 0, T)$, $1 < \mu \le s+1$; then the numerical solution $u_h^N$ satisfies*

$$\begin{split} ||u_h^N - u(t_N)||_{(\gamma,1)}^2 &\le C \tau^{2\min_i\{3-\beta_i,\ 2-\alpha_i\}} \\ &\quad + C h^{2(\mu-1)} \left[ ||u^N||_{\mu}^2 + ||\phi_0||_{\mu}^2 + ||\phi_1||_{\mu}^2 + \max_{1 \le k \le N} \left\| \frac{\partial^{\beta_i} u^{k-1/2}}{\partial t^{\beta_i}} \right\|_{\mu}^2 + \max_{1 \le k \le N} \left\| \frac{\partial^{\alpha_i} u^{k-1/2}}{\partial t^{\alpha_i}} \right\|_{\mu}^2 \right]. \end{split}$$

**Proof.** Let $e^n = u_h^n - u(t_n)$; then the error $e^n$ satisfies

$$\begin{split} &\xi \sum_{i=0}^{M_1} \omega_i^{(1)} \left( \nabla_t^{(\beta_i)} e^{k-1/2}, v_h \right) + \xi \sum_{i=0}^{M_1} \omega_i^{(1)} \left( R_0^{k,\beta_i}, v_h \right) + \sum_{i=0}^{M_2} \frac{\omega_i^{(2)} \tau^{1-\alpha_i}}{2\Gamma(2-\alpha_i)} \sum_{j=1}^{k} b_{k-j}^{(\alpha_i)} \left( \nabla_t e^{j-1/2}, v_h \right) \\ &+ \sum_{i=0}^{M_2} \frac{\omega_i^{(2)} \tau^{1-\alpha_i}}{2\Gamma(2-\alpha_i)} \sum_{j=1}^{k-1} b_{k-j-1}^{(\alpha_i)} \left( \nabla_t e^{j-1/2}, v_h \right) + \sum_{i=0}^{M_2} \omega_i^{(2)} \left( R_1^{k,\alpha_i}, v_h \right) + B\left( e^{k-1/2}, v_h \right) = 0. \end{split}\tag{33}$$

Define $e^n = \rho^n + \theta^n$, where $\rho^n = P_h u(t_n) - u(t_n)$ and $\theta^n = u_h^n - P_h u(t_n)$. Then, for any $v_h$, by using the definition (31), we have

$$B(\rho^{k-1/2}, v\_h) = B\left(P\_h u(t\_{k-1/2}) - u(t\_{k-1/2}), v\_h\right) = 0.$$

In addition, choosing the interpolation as the initial value of $u_h^0$ at time $t_0$, i.e., $u_h^0 = I_h \phi_0$, we obtain

$$B(\theta^0, \theta^0) \le C B_0(\theta^0, \theta^0) \le C \left[ ||u(t_0) - u_h^0||_{(\gamma,1)}^2 + ||P_h u(t_0) - u(t_0)||_{(\gamma,1)}^2 \right] \le C h^{2(\mu-1)} ||\phi_0||_{\mu}^2.$$

Similarly, choosing $(u_h^0)_t = I_h \phi_1$, the norm $||(\theta_h^0)_t||_0^2$ satisfies the following relationship

$$||(\theta_h^0)_t||_0^2 \le C \left[ ||(u(t_0) - u_h^0)_t||_{(\gamma,1)}^2 + ||(P_h u(t_0) - u(t_0))_t||_{(\gamma,1)}^2 \right] \le C h^{2(\mu-1)} ||\phi_1||_{\mu}^2.$$

Setting $v_h = \nabla_t \theta^{k-1/2}$ and summing $k$ from 1 to $N$, Equation (33) can be rewritten as

$$\begin{split} &\xi \sum_{i=0}^{M_1} \omega_i^{(1)} \sum_{k=1}^N \left( \nabla_t^{(\beta_i)} \theta^{k-1/2}, \nabla_t \theta^{k-1/2} \right) + \sum_{k=1}^N \sum_{i=0}^{M_2} \omega_i^{(2)} \left( \nabla_t^{(\alpha_i)} \theta^{k-1/2}, \nabla_t \theta^{k-1/2} \right) + \sum_{k=1}^N B\left( \theta^{k-1/2}, \nabla_t \theta^{k-1/2} \right) \\ &= -\xi \sum_{i=0}^{M_1} \omega_i^{(1)} \sum_{k=1}^N \left( \nabla_t^{(\beta_i)} \rho^{k-1/2}, \nabla_t \theta^{k-1/2} \right) - \sum_{k=1}^N \sum_{i=0}^{M_2} \omega_i^{(2)} \left( \nabla_t^{(\alpha_i)} \rho^{k-1/2}, \nabla_t \theta^{k-1/2} \right) \\ &\quad - \sum_{k=1}^N \xi \sum_{i=0}^{M_1} \omega_i^{(1)} \left( R_0^{k,\beta_i}, \nabla_t \theta^{k-1/2} \right) - \sum_{k=1}^N \sum_{i=0}^{M_2} \omega_i^{(2)} \left( R_1^{k,\alpha_i}, \nabla_t \theta^{k-1/2} \right). \end{split}$$

Note that $||\cdot||_0 \le ||\cdot||_{(\gamma,1)}$. By applying Lemma 9, the norm $||\nabla_t^{(\beta_i)} \rho^{k-1/2}||_0^2$ can be estimated as

$$\begin{split} \left\| \nabla_t^{(\beta_i)} \rho^{k-1/2} \right\|_0^2 &= \left\| \nabla_t^{(\beta_i)} \rho^{k-1/2} - \frac{\partial^{\beta_i} \rho^{k-1/2}}{\partial t^{\beta_i}} + \frac{\partial^{\beta_i} \rho^{k-1/2}}{\partial t^{\beta_i}} \right\|_0^2 \\ &\le 2 \left\| \nabla_t^{(\beta_i)} \rho^{k-1/2} - \frac{\partial^{\beta_i} \rho^{k-1/2}}{\partial t^{\beta_i}} \right\|_0^2 + 2 \left\| \frac{\partial^{\beta_i} \rho^{k-1/2}}{\partial t^{\beta_i}} \right\|_0^2 \\ &= 2 \left\| R_0^{k,\beta_i} \right\|_0^2 + 2 \left\| \frac{\partial^{\beta_i}}{\partial t^{\beta_i}} \left[ P_h u(t_{k-1/2}) - u(t_{k-1/2}) \right] \right\|_0^2 \\ &\le C \tau^{2(3-\beta_i)} + C h^{2(\mu-1)} \left\| \frac{\partial^{\beta_i} u^{k-1/2}}{\partial t^{\beta_i}} \right\|_{\mu}^2. \end{split}$$

Similarly, we derive

$$\left\| \nabla_t^{(\alpha_i)} \rho^{k-1/2} \right\|_0^2 \le C \tau^{2(2-\alpha_i)} + C h^{2(\mu-1)} \left\| \frac{\partial^{\alpha_i} u^{k-1/2}}{\partial t^{\alpha_i}} \right\|_{\mu}^2.$$

By applying the inequality in Lemma 9, the norms $\|\rho^N\|_{(\gamma,1)}^2$ and $\|\rho^0\|_{(\gamma,1)}^2$ satisfy

$$\begin{aligned} \|\rho^N\|_{(\gamma,1)}^2 &= \|P_h u(t_N) - u(t_N)\|_{(\gamma,1)}^2 \le C h^{2(\mu-1)} \|u^N\|_{\mu}^2, \\ \|\rho^0\|_{(\gamma,1)}^2 &= \|P_h u(t_0) - u(t_0)\|_{(\gamma,1)}^2 \le C h^{2(\mu-1)} \|u^0\|_{\mu}^2. \end{aligned}$$

By using Lemma 5 and the initial condition $(\theta_h^0)_t = \phi_1$, the following inequality holds

$$\xi \sum_{i=0}^{M_1} \omega_i^{(1)} \sum_{k=1}^N \left( \nabla_t^{(\beta_i)} \theta^{k-1/2}, \nabla_t \theta^{k-1/2} \right) \ge a_1 \xi \sum_{k=1}^N \|\nabla_t \theta^{k-1/2}\|_0^2 - \frac{\xi a_2}{2\tau} \|(\theta_h^0)_t\|_0^2. \tag{34}$$

Applying Lemma 6, we have the following inequality

$$\sum_{k=1}^{N} \sum_{i=0}^{M_2} \frac{\omega_i^{(2)} \tau^{1-\alpha_i}}{2\Gamma(2-\alpha_i)} \sum_{j=1}^{k} b_{k-j}^{(\alpha_i)} \left( \nabla_t \theta^{j-1/2}, \nabla_t \theta^{k-1/2} \right) + \sum_{k=1}^{N} \sum_{i=0}^{M_2} \frac{\omega_i^{(2)} \tau^{1-\alpha_i}}{2\Gamma(2-\alpha_i)} \sum_{j=1}^{k-1} b_{k-j-1}^{(\alpha_i)} \left( \nabla_t \theta^{j-1/2}, \nabla_t \theta^{k-1/2} \right) \ge 0. \tag{35}$$

Applying the mid-point formula, we have the result

$$\sum_{k=1}^{N} \mathcal{B} \left( \theta^{k-1/2}, \nabla_t \theta^{k-1/2} \right) \ge C \sum_{k=1}^{N} \mathcal{B}_0 \left( \theta^{k-1/2}, \nabla_t \theta^{k-1/2} \right) = C \frac{1}{2\tau} \left[ \mathcal{B}_0 \left( \theta^N, \theta^N \right) - \mathcal{B}_0 \left( \theta^0, \theta^0 \right) \right]. \tag{36}$$

By using the important inequality $-ab \le \varepsilon a^2 + b^2/(4\varepsilon)$, we have

$$\begin{split} & -\xi \sum_{i=0}^{M_1} \omega_i^{(1)} \sum_{k=1}^N \left( \nabla_t^{(\beta_i)} \rho^{k-1/2}, \nabla_t \theta^{k-1/2} \right) \\ & \le \xi \sum_{i=0}^{M_1} \omega_i^{(1)} \sum_{k=1}^N \chi_1 \left[ C \tau^{2(3-\beta_i)} + C h^{2(\mu-1)} \left\| \frac{\partial^{\beta_i} u^{k-1/2}}{\partial t^{\beta_i}} \right\|_{\mu}^2 \right] + \xi \sum_{i=0}^{M_1} \omega_i^{(1)} \sum_{k=1}^N \frac{1}{4\chi_1} \|\nabla_t \theta^{k-1/2}\|_0^2 \\ & \le \xi N C \left[ \tau^{2(3-\beta_i)} + h^{2(\mu-1)} \max_{1 \le k \le N} \left\| \frac{\partial^{\beta_i} u^{k-1/2}}{\partial t^{\beta_i}} \right\|_{\mu}^2 \right] + \xi \sum_{i=0}^{M_1} \omega_i^{(1)} \sum_{k=1}^N \frac{1}{4\chi_1} \|\nabla_t \theta^{k-1/2}\|_0^2. \end{split} \tag{37}$$

$$\begin{split} & -\sum_{k=1}^{N} \sum_{i=0}^{M_2} \omega_i^{(2)} \left( \nabla_t^{(\alpha_i)} \rho^{k-1/2}, \nabla_t \theta^{k-1/2} \right) \\ & \le \sum_{i=0}^{M_2} \omega_i^{(2)} \sum_{k=1}^{N} \chi_2 \left( C \tau^{2(2-\alpha_i)} + C h^{2(\mu-1)} \left\| \frac{\partial^{\alpha_i} u^{k-1/2}}{\partial t^{\alpha_i}} \right\|_{\mu}^2 \right) + \sum_{k=1}^{N} \sum_{i=0}^{M_2} \frac{\omega_i^{(2)}}{4\chi_2} \|\nabla_t \theta^{k-1/2}\|_0^2 \\ & \le N C \left( \tau^{2(2-\alpha_i)} + h^{2(\mu-1)} \max_{1 \le k \le N} \left\| \frac{\partial^{\alpha_i} u^{k-1/2}}{\partial t^{\alpha_i}} \right\|_{\mu}^2 \right) + \sum_{k=1}^{N} \sum_{i=0}^{M_2} \frac{\omega_i^{(2)}}{4\chi_2} \|\nabla_t \theta^{k-1/2}\|_0^2, \end{split} \tag{38}$$

where $\chi_1 = \frac{a_3}{a_1}$, $\chi_2 = \frac{a_4}{\xi a_1}$, $a_3 = \sum_{i=0}^{M_1} \omega_i^{(1)}$ and $a_4 = \sum_{i=0}^{M_2} \omega_i^{(2)}$. Similarly,

$$-\sum_{k=1}^{N} \xi \sum_{i=0}^{M_1} \omega_i^{(1)} \left( R_0^{\beta_i,k}, \nabla_t \theta^{k-1/2} \right) \le N C \tau^{2(3-\beta_i)} + \xi \sum_{k=1}^{N} \sum_{i=0}^{M_1} \omega_i^{(1)} \frac{1}{4\chi_3} \|\nabla_t \theta^{k-1/2}\|_0^2, \tag{39}$$

$$-\sum_{k=1}^{N} \sum_{i=0}^{M_2} \omega_i^{(2)} \left( R_1^{\alpha_i,k}, \nabla_t \theta^{k-1/2} \right) \le N C \tau^{2(2-\alpha_i)} + \sum_{i=0}^{M_2} \sum_{k=1}^{N} \omega_i^{(2)} \frac{1}{4\chi_4} \|\nabla_t \theta^{k-1/2}\|_0^2, \tag{40}$$

where $\chi_3 = \frac{a_3}{a_1}$ and $\chi_4 = \frac{a_4}{\xi a_1}$.

By using the inequalities (34)–(40), we obtain

$$\begin{split} \frac{1}{2\tau} \left[ \mathcal{B}(\theta^N, \theta^N) - \mathcal{B}(\theta^0, \theta^0) \right] &\le N C \left[ \tau^{2(3-\beta_i)} + h^{2(\mu-1)} \max_{1 \le k \le N} \left\| \frac{\partial^{\beta_i} u^{k-1/2}}{\partial t^{\beta_i}} \right\|_{\mu}^2 \right] \\ &\quad + N C \left( \tau^{2(2-\alpha_i)} + h^{2(\mu-1)} \max_{1 \le k \le N} \left\| \frac{\partial^{\alpha_i} u^{k-1/2}}{\partial t^{\alpha_i}} \right\|_{\mu}^2 \right) + \frac{\xi a_2}{2\tau} C h^{2(\mu-1)} \|\phi_1\|_{\mu}^2. \end{split}$$

By utilizing Lemma 8, the above inequality becomes

$$\|\theta^N\|_{(\gamma,1)}^2 \le C \tau^{2\min\{3-\beta_i,\,2-\alpha_i\}} + C h^{2(\mu-1)} \left[ \max_{1 \le k \le N} \left\| \frac{\partial^{\beta_i} u^{k-1/2}}{\partial t^{\beta_i}} \right\|_{\mu}^2 + \max_{1 \le k \le N} \left\| \frac{\partial^{\alpha_i} u^{k-1/2}}{\partial t^{\alpha_i}} \right\|_{\mu}^2 + \|\phi_0\|_{\mu}^2 + \|\phi_1\|_{\mu}^2 \right].$$

The simplified form is given as

$$\begin{split} \|u_h^N - u(t_N)\|_{(\gamma,1)}^2 &\le \|\rho^N\|_{(\gamma,1)}^2 + \|\theta^N\|_{(\gamma,1)}^2 \le C \tau^{2\min\{3-\beta_i,\,2-\alpha_i\}} \\ &\quad + C h^{2(\mu-1)} \left[ \|u^N\|_{\mu}^2 + \|\phi_0\|_{\mu}^2 + \|\phi_1\|_{\mu}^2 + \max_{1 \le k \le N} \left\| \frac{\partial^{\beta_i} u^{k-1/2}}{\partial t^{\beta_i}} \right\|_{\mu}^2 + \max_{1 \le k \le N} \left\| \frac{\partial^{\alpha_i} u^{k-1/2}}{\partial t^{\alpha_i}} \right\|_{\mu}^2 \right]. \end{split}$$

The proof is completed.

**Remark 2.** *By using the triangular linear basis function, i.e., s* = 1*, it can be concluded from Theorem 2 that the error satisfies*

$$\|u_h^n - u(t_n)\|_{(\gamma,1)} \le C \left( \tau^{\min\{3-\beta_i,\,2-\alpha_i\}} + h \right).$$

#### **5. Numerical Examples**

In this section, we present two numerical examples: one in a rectangular domain, with the main purpose of demonstrating the effectiveness of our theoretical analysis, and the other in an elliptical domain, for analyzing the effects of different parameters on the particle distributions. In the mid-point quadrature rule [29], we choose $M_1 = 9$, $M_2 = 9$.
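As a concrete sketch of this discretization (the node placement is our illustrative reading of a mid-point rule with $M_2 = 9$, not necessarily the exact rule of [29]), the distributed-order integral $\int_0^1 \varphi_0(\alpha)\, g(\alpha)\, d\alpha$ is replaced by the weighted sum $\sum_{i=0}^{M_2} \omega_i^{(2)} \varphi_0(\alpha_i)\, g(\alpha_i)$:

```python
def midpoint_weights(a, b, m):
    """Mid-point quadrature on [a, b]: one node at the centre of each of the
    m + 1 subintervals, all weights equal to the subinterval width."""
    h = (b - a) / (m + 1)
    nodes = [a + (i + 0.5) * h for i in range(m + 1)]
    weights = [h] * (m + 1)
    return nodes, weights

# Discretise the distributed-order integral with the weight function
# r2(α) = (1 - α)/2 from Example 1 (here with g ≡ 1 for a pure check).
phi0 = lambda alpha: (1.0 - alpha) / 2.0
nodes, weights = midpoint_weights(0.0, 1.0, 9)     # M2 = 9 → 10 nodes
approx = sum(w * phi0(a) for w, a in zip(weights, nodes))
exact = 0.25                                        # ∫_0^1 (1-α)/2 dα = 1/4
print(abs(approx - exact))   # mid-point rule is exact for linear integrands
```

Since the mid-point rule integrates linear functions exactly, this particular weight function is reproduced to machine precision; for general weights the rule is second-order accurate in the subinterval width.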

**Example 1.** *Firstly, we consider the following two-dimensional distributed-order time- and space-fractional diffusion-wave equation on a rectangular domain*

$$\begin{split} & \xi \int_{1}^{2} \varphi_1(\beta) \frac{\partial^{\beta} u(x,y,t)}{\partial t^{\beta}}\, d\beta + \int_{0}^{1} \varphi_0(\alpha) \frac{\partial^{\alpha} u(x,y,t)}{\partial t^{\alpha}}\, d\alpha \\ & = \frac{\partial}{\partial x} \left[ d_1(x,y) \frac{\partial^{\gamma} u(x,y,t)}{\partial x^{\gamma}} - d_2(x,y) \frac{\partial^{\gamma} u(x,y,t)}{\partial (-x)^{\gamma}} \right] + \frac{\partial}{\partial y} \left[ e(x,y) \frac{\partial u(x,y,t)}{\partial y} \right] + f(x,y,t), \end{split}$$

subject to

$$\begin{aligned} u(x,y,0) &= x^2 (1-x)^2 y^2 (1-y)^2, \quad u_t(x,y,0) = 0, \quad (x,y) \in \overline{\Omega}, \\ u(x,y,t) &= 0, \quad (x,y,t) \in \partial\Omega \times [0,T], \end{aligned}$$

where $\Omega = (0,1) \times (0,1)$. The exact solution of this problem is given by $u(x,y,t) = (t^2+1)\, x^2 (1-x)^2 y^2 (1-y)^2$.

In Table 1, we take the special case with $\xi = 1$, $\gamma = 0.8$, $d_1(x,y) = d_2(x,y) = e(x,y) = x^2 + y^2$ to compute the $H^\gamma$ error, $L^2$ error and convergence order of $h$ with $\tau = \frac{1}{1000}$ at $t = 1$ for different weight coefficients $\varphi_1(\beta) = w_i(\beta)$ and $\varphi_0(\alpha) = r_i(\alpha)$, $i = 1,2,3$, which are given by $w_1(\beta) = 0.5\delta(\beta - 1.5) + 0.5\delta(\beta - 1.8)$, $r_1(\alpha) = 0.5\delta(\alpha - 0.5) + 0.5\delta(\alpha - 0.8)$, $w_2(\beta) = \frac{2-\beta}{2}$, $r_2(\alpha) = \frac{1-\alpha}{2}$, $w_3(\beta) = \beta^2/2$, $r_3(\alpha) = \alpha^2/2$. By examining the spatial convergence orders shown in Table 1, we notice that the expected convergence orders proved in Theorem 2 are obtained. For every choice of the weight coefficient, the numerical solutions are in agreement with the theoretical analysis, which indicates the validity of the proposed method.
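Observed spatial orders of this kind are conventionally computed from errors on successively refined meshes as $p = \log(e_{h_1}/e_{h_2})/\log(h_1/h_2)$. A minimal sketch (the error sequence below is hypothetical, decaying like $h^2$ as the $L^2$ rate would for linear elements, and is not the paper's data):

```python
import math

def observed_orders(hs, errors):
    """Observed convergence order between successive refinements:
    p = log(e1/e2) / log(h1/h2)."""
    return [math.log(errors[i] / errors[i + 1]) / math.log(hs[i] / hs[i + 1])
            for i in range(len(errors) - 1)]

# Hypothetical errors behaving like C·h² on meshes halved four times
hs = [1/4, 1/8, 1/16, 1/32]
errors = [0.5 * h**2 for h in hs]
print(observed_orders(hs, errors))   # every entry ≈ 2.0
```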

**Example 2.** *In this example, we consider the following two-dimensional distributed-order time- and space-fractional diffusion-wave equation on an elliptical domain* $\Omega = \left\{ (x,y) \,\middle|\, \frac{x^2}{R_a^2} + \frac{y^2}{R_b^2} < 1 \right\}$:

$$\begin{split} & \xi \int_{1}^{2} \varphi_1(\beta) \frac{\partial^{\beta} u(x,y,t)}{\partial t^{\beta}}\, d\beta + \int_{0}^{1} \varphi_0(\alpha) \frac{\partial^{\alpha} u(x,y,t)}{\partial t^{\alpha}}\, d\alpha \\ & = \frac{\partial}{\partial x} \left[ d_1(x,y) \frac{\partial^{\gamma} u(x,y,t)}{\partial x^{\gamma}} - d_2(x,y) \frac{\partial^{\gamma} u(x,y,t)}{\partial (-x)^{\gamma}} \right] + \frac{\partial}{\partial y} \left[ e(x,y) \frac{\partial u(x,y,t)}{\partial y} \right] + f(x,y,t), \end{split}$$


**Table 1.** The $H^\gamma$ error, $L^2$ error and convergence order of $h$ with $\tau = \frac{1}{1000}$ at $t = 1$ for the case $\xi = 1$, $\gamma = 0.8$, $d_1(x,y) = d_2(x,y) = e(x,y) = x^2 + y^2$ with different $\varphi_1(\beta)$ and $\varphi_0(\alpha)$.

*subject to*

$$u(x,y,0) = \frac{1}{100} \left( \frac{x^2}{R_a^2} + \frac{y^2}{R_b^2} - 1 \right)^2, \quad u_t(x,y,0) = 0, \quad (x,y) \in \overline{\Omega},$$

$$u(x,y,t) = 0, \quad (x,y,t) \in \partial\Omega \times [0,T],$$

*where* $T = 1$. *The exact solution of this problem is given by* $u(x,y,t) = \frac{t^2+1}{100} \left( \frac{x^2}{R_a^2} + \frac{y^2}{R_b^2} - 1 \right)^2$.

In the following discussions, for the sake of simplicity, all the numerical results listed in the tables and figures are evaluated at $R_a = 0.5$, $R_b = 1$. In Table 2, the $H^\gamma$ error, $L^2$ error and convergence order of $h$ with $\tau = \frac{1}{1000}$ at $t = 1$ are presented for different $\varphi_1(\beta) = w_i(\beta)$, $\varphi_0(\alpha) = r_i(\alpha)$, $i = 1, 2$, where $w_1(\beta) = \delta(\beta - 1.8)$, $r_1(\alpha) = \delta(\alpha - 0.8)$, $w_2(\beta) = \frac{2-\beta}{2}$, $r_2(\alpha) = \frac{1-\alpha}{2}$. Linear triangular elements are applied in this numerical example to verify the theoretical analysis. As we can see, the $H^\gamma$ spatial convergence order is close to 1 while the $L^2$ spatial convergence order is close to 2, which coincides with the theoretical analysis in Theorem 2. Through the above analysis, we see that our finite element algorithm also works well for an elliptical domain.

The solution behaviors under the effects of the different involved parameters, such as the relaxation parameter and the weight coefficient, are highlighted by graphical illustrations and analyzed in detail. We choose $\xi = 1$, $f(x,y,t) = 0$, $d_1(x,y) = \frac{1-x}{2}\delta(y)$, $d_2(x,y) = \frac{1+x}{2}\delta(y)$, $e(x,y) = x^2 + y^2 + 1$, $t = 1$ to observe the behaviors of the temporal evolution of the particle distribution under the effect of the weight coefficients, as shown in Figure 3. We use the exponential function of the form $\delta(x) \approx \frac{1}{2\sqrt{\pi\sigma}} e^{-\frac{x^2}{4\sigma}}$ to approximate the Dirac delta function in the numerical simulation. Similar to [16], the weight coefficients are chosen in the power-law form $\varphi_1(\beta) = n\beta^{n-1}$, $\varphi_0(\alpha) = n\alpha^{n-1}$, where $n = 1/2, 1, 3$. We observe that the impacts of the weight coefficients on the solution behaviors are significant. For $n = 1/2$, the weight coefficient is monotonically decreasing with the increase of $\beta$ while monotonically increasing with the increase of $\alpha$, and at this stage the distribution presents a diffusion form. For $n = 1$, the weight coefficient is constant, which means that the weight for every fractional parameter is equal, and the wave characteristic appears. With the increase of $n$, for $n = 3$, the weight coefficient is monotonically increasing with the increase of $\beta$ and the decrease of $\alpha$. As shown in Figure 3, the wave characteristic of the distribution becomes stronger.
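The Gaussian approximation of the Dirac delta used here has unit mass by construction, since $\int_{\mathbb{R}} e^{-x^2/(4\sigma)}\, dx = 2\sqrt{\pi\sigma}$; this can be checked numerically (the value of $\sigma$ below is an illustrative choice, not the one used in the paper's simulations):

```python
import math

def delta_approx(x, sigma):
    """Gaussian approximation of the Dirac delta:
    δ(x) ≈ exp(-x²/(4σ)) / (2√(πσ)), whose integral over ℝ is exactly 1."""
    return math.exp(-x * x / (4.0 * sigma)) / (2.0 * math.sqrt(math.pi * sigma))

# Check the unit-mass property with a trapezoidal sum over [-1, 1];
# for σ = 1e-3 the Gaussian is negligible outside this interval.
sigma, n, L = 1e-3, 20000, 1.0
h = 2 * L / n
mass = sum(delta_approx(-L + i * h, sigma) for i in range(n + 1)) * h \
       - 0.5 * h * (delta_approx(-L, sigma) + delta_approx(L, sigma))
print(mass)   # ≈ 1.0
```

As $\sigma \to 0$ the approximation sharpens toward the distributional delta while keeping its mass fixed at 1, which is what makes it usable inside the coefficients $d_1$ and $d_2$.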


**Table 2.** The $H^\gamma$ error, $L^2$ error and convergence order of $h$ with $\tau = \frac{1}{1000}$ at $t = 1$ for the case $\xi = 1$, $\gamma = 0.8$, $d_1(x,y) = d_2(x,y) = e(x,y) = x^2 + y^2$ with different $\varphi_1(\beta)$ and $\varphi_0(\alpha)$.

**Figure 3.** The three-dimensional and axial projection drawings of the particle distribution when $n = 1/2$, $n = 1$ and $n = 3$.

Figure 4 presents the influence of parameter $\xi$ on the particle distributions for $t = 1$, $f(x,y,t) = 0$, $d_1(x,y) = \frac{1-x}{2}\delta(y)$, $d_2(x,y) = \frac{1+x}{2}\delta(y)$, $e(x,y) = x^2 + y^2 + 1$, $\varphi_1(\beta) = 1$ and $\varphi_0(\alpha) = 1$. As the relaxation parameter increases from $\xi = 0$ to $\xi = 1$, the central region of the particle distribution begins to cave inward, which indicates that the distributions have a wave characteristic. The larger the relaxation parameter is, the stronger the wave characteristic will be. The reason is that the relaxation parameter multiplies the distributed-order time-fractional derivative of order $(1,2)$, which possesses the wave characteristics. With an increase in the relaxation parameter, the fractional derivative of order $(1,2)$ with the wave characteristic plays a greater role in the particles' transport.

**Figure 4.** The three-dimensional and axial projection drawings of the particle distribution when $\xi = 0$, $\xi = 1/2$ and $\xi = 1$.

#### **6. Conclusions**

In this paper, we presented an original distributed-order time- and space-fractional diffusion-wave equation to analyze the anomalous diffusion in comb structures. The solution of the governing equation was obtained using the finite element method for the case where the coefficients are taken as constant. Two examples were given: one in a rectangular domain and the other in an elliptical domain. In both examples, the $H^\gamma$ error, $L^2$ error and convergence order of $h$ with $\tau = \frac{1}{1000}$ at $t = 1$ under different weight coefficients demonstrated the effectiveness of the numerical method. For the elliptical domain, the influence of the involved parameters, such as the relaxation parameter and the weight coefficient, on the particle distribution was analyzed, and the physical meaning of the diffusion-wave characteristics was discussed in detail.

**Author Contributions:** Conceptualization, L.L. and S.Z.; methodology, S.Z.; software, L.F.; validation, L.Z., J.Z. and F.L.; formal analysis, I.T.; investigation, S.Z.; resources, S.C.; data curation, L.L.; writing original draft preparation, S.Z.; writing—review and editing, L.Z.; visualization, S.C.; supervision, L.L.; project administration, L.L.; funding acquisition, L.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work is supported by the Project funded by the National Natural Science Foundation of China (No. 11801029), the Fundamental Research Funds for the Central Universities (No. QNXM20220048) and the Open Fund of the State key laboratory of advanced metallurgy in the University of Science and Technology Beijing (No. K22-08) and the Australian Research Council (ARC) via the Discovery Project (DP180103858). Authors Liu and Zheng wish to acknowledge that this research is partially supported by the Natural Science Foundation of China (No. 11772046).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Supervised Neural Network Procedures for the Novel Fractional Food Supply Model**

**Basma Souayeh 1,2,\*, Zulqurnain Sabir 3,4, Muhammad Umar <sup>4</sup> and Mir Waqas Alam 1,2**


**Abstract:** This work presents the numerical performances of the fractional kind of food supply (FKFS) model. Fractional kinds of derivatives have been used to acquire accurate and realistic solutions of the FKFS model. The FKFS system contains three species: the prey population *L*(*x*), the predator *M*(*x*) and the top-predator *N*(*x*). The numerical solutions of three different cases of the FKFS model are provided through the stochastic procedures of the scaled conjugate gradient neural networks (SCGNNs). The data selection for the FKFS model is 82% for training and 9% each for testing and validation. The precision of the designed SCGNNs is assessed by comparing the achieved solutions with the Adam solutions. The rationality, competence, constancy and correctness of the approach are verified by using the stochastic SCGNNs along with simulations of the regression actions, mean square error, correlation performances, error histogram values and state transition measures.

**Keywords:** fractional order; food supply model; scaled conjugate gradient; artificial neural networks; numerical solutions; Adam method

### **1. Introduction**

There are various mathematical models that describe natural phenomena based on prey-predator investigations along with the collaborations of different species [1,2]. The functional response term in prey-predator modelling plays an important role in describing how the prey affects the predators over time. Numerous functional responses have been reported in the literature, such as the ratio-dependent [3–5], Beddington–DeAngelis [6–8] and Holling type I to III [9,10] responses. One of the important models is the food supply (FS) model, which is applied to the association of multiple prey or predators. The updated form of the FS system together with common qualitative investigations and numerous communications is presented in [11–13]. Mathematical modelling plays an important role in presenting the dynamics of nonlinear differential systems, e.g., the SITR-based coronavirus [14], dengue virus [15] and nervous stomach system [16].

In the FS chain, the role of the "Allee effects" is very important. The Allee effect was defined in 1930 and named after the famous scientist Allee. These effects describe the tendency of the growth rate to decrease at small population sizes. The Allee effects appear in fisheries, vertebrates, invertebrates and plants. The Allee effects occasionally indicate negative influences in the dispensation of population dynamics based on the fishery. The "Allee effects" have been divided into multiplicative and additive forms [17–20]. Initially, Singh et al. described the double shape of the "Allee effects" with the improved

**Citation:** Souayeh, B.; Sabir, Z.; Umar, M.; Alam, M.W. Supervised Neural Network Procedures for the Novel Fractional Food Supply Model. *Fractal Fract.* **2022**, *6*, 333. https:// doi.org/10.3390/fractalfract6060333

Academic Editors: Libo Feng, Lin Liu and Yang Liu

Received: 22 May 2022 Accepted: 14 June 2022 Published: 16 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Leslie–Gower system based on the prey-predator interaction, in which the prey population shows various junctions associated with the suitable parameters. Vinoth et al. [21] formulated a mathematical model to investigate the dynamical FS system using the "Allee effect" based on the addition [22].

The aim of this work is to provide the numerical performances of the fractional kind of food supply (FKFS) model by using the stochastic procedures of the scaled conjugate gradient neural networks (SCGNNs). Stochastic solvers have been exploited in a variety of applications in recent years. A few of them are the nonlinear dynamics of the coronavirus models [23], the functional form of the singular models [24], the infection-based HIV models [25], the functional form of the delay differential system [26] and the nonlinear model of smoking [27]. The idea of implementing fractional kinds of derivatives is to obtain accurate and realistic solutions. In fractional order models, minute details based on superfast transients and superslow evolution are examined, which provides more detail of the dynamics of the system through fractional calculus and is not easy to capture by using the integer order counterparts. Additionally, the system dynamics for the index is performed by using fractional calculus. The fractional order derivatives show much better performance compared to the integer order where the situation allows. Fractional kinds of derivatives have been applied to authenticate the performance of systems in real-world applications [28,29]. Moreover, fractional derivatives have been extensively investigated to solve a number of applications based on control networks, engineering, physical and mathematical systems. The implementation of fractional calculus has been performed broadly over the last 30 years by using substantial operators, such as Weyl–Riesz [30], Caputo [31], Riemann–Liouville [32], Erdélyi–Kober [33] and Grünwald–Letnikov [34]. All of these operators have their own worth and significance. However, the most widely used is the Caputo definition, which works with homogeneous as well as non-homogeneous initial conditions.
The Caputo derivative is considered easier to implement than the other definitions. Based on these fractional order applications, the authors are interested in developing the FKFS model and providing the numerical performances through the SCGNNs.
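For reference (our wording of the standard definition, matching [31]-type sources), the Caputo derivative of order $\upsilon \in (0,1)$ relied upon below is

$${}^{C}D_x^{\upsilon} f(x) = \frac{1}{\Gamma(1-\upsilon)} \int_0^x \frac{f'(s)}{(x-s)^{\upsilon}}\, ds, \qquad 0 < \upsilon < 1.$$

Because this derivative of a constant is zero, initial conditions can be imposed exactly as in the integer order case, which is why it handles both homogeneous and non-homogeneous initial conditions.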

The remaining structure of the paper is as follows: the FKFS system is constructed in Section 2; the designed methodology based on the stochastic SCGNNs procedures is provided in Section 3; the simulation results are provided in Section 4; the concluding notes are given in Section 5.

#### **2. Mathematical FKFS System with Insights**

In this section, the communication model based on two or more prey and predators is provided. A differential FKFS system using the analysis of mutual qualitative behaviour along with multiple relationships is given in [35,36]. A few researchers presented the multiple trophic levels of food supply systems through the structure of a logistic prey *L*(*x*), a Holling type or Lotka–Volterra predator *M*(*x*) and a top-predator *N*(*x*) [37–43]. The mathematical form of the three-species FS system is presented as [44]:

$$\begin{cases} \frac{dL(x)}{dx} = a_0 L(x) - \frac{\rho_0 L(x) M(x)}{L(x) + d_0} - \frac{k_1}{k_2 + L(x)} - b_0 L^2(x), & L_0 = i_1, \\ \frac{dM(x)}{dx} = \frac{\rho_1 L(x) M(x)}{L(x) + d_1} - a_1 M(x) - \frac{\rho_2 M(x) N(x)}{M(x) + d_2}, & M_0 = i_2, \\ \frac{dN(x)}{dx} = c_3 N^2(x) - \frac{\rho_3 N^2(x)}{M(x) + d_3}, & N_0 = i_3, \end{cases} \tag{1}$$

where prey *L*(*x*) and species *M*(*x*) follow the Volterra scheme, in which the predator population decreases exponentially in the absence of prey. The relationship of the species *N*(*x*) and the prey *M*(*x*) is provided by using the Leslie–Gower approach, which represents the reduction of the predator population per capita accessibility [45,46]. $a_0$ and $c_3$ are the growth rates of *L*(*x*) and *N*(*x*); the environmental protection factors for *L*(*x*) are $d_0$ and $d_1$, while the per capita reduction of *M*(*x*) is described by $d_2$; the term $a_1$ shows the rate at which *M*(*x*) decreases in the absence of *L*(*x*); $b_0$ provides the competition strength for *L*(*x*); the residual loss of *N*(*x*) due to the food shortage of *M*(*x*) is signified by $d_3$; the maximum

per capita reductions of *L*(*x*) are represented by $\rho_0$, $\rho_1$, $\rho_2$ and $\rho_3$; the hyperbolic function $\frac{k_1}{k_2 + L(x)}$ shows the additive form of the Allee effects, while $k_1$ and $k_2$ are the constant values of the Allee effects. If $k_1 < k_2$, it means a weak Allee effect; otherwise, $k_2 < k_1$ shows a strong Allee effect. The initial conditions are represented by $i_1$, $i_2$ and $i_3$. The mathematical form of the FKFS system is given as:

$$\begin{cases} \frac{d^{\upsilon} L(x)}{dx^{\upsilon}} = a_0 L(x) - \frac{\rho_0 L(x) M(x)}{L(x) + d_0} - \frac{k_1}{k_2 + L(x)} - b_0 L^2(x), & L_0 = i_1, \\ \frac{d^{\upsilon} M(x)}{dx^{\upsilon}} = \frac{\rho_1 L(x) M(x)}{L(x) + d_1} - a_1 M(x) - \frac{\rho_2 M(x) N(x)}{M(x) + d_2}, & M_0 = i_2, \\ \frac{d^{\upsilon} N(x)}{dx^{\upsilon}} = c_3 N^2(x) - \frac{\rho_3 N^2(x)}{M(x) + d_3}, & N_0 = i_3, \end{cases} \tag{2}$$

where $\upsilon$ denotes the order of the Caputo fractional derivative used to solve the fractional FS model given in Equation (2). The values of the fractional order $\upsilon$ are taken between 0 and 1 to present the behavior of the fractional FS model. The fractional kinds of derivative in the FS system (2) are incorporated to observe minute details, i.e., superslow evolution and superfast transients, which are not easy to capture by using the integer order counterparts shown in system (1). In recent years, fractional calculus has been implemented in various applications, such as anomalous heat transfer [47], a pine wilt disease model with convex rate [48], spatiotemporal patterns in systems based on the Belousov–Zhabotinskii reaction [49], quantitative approximation of soil organic matter content using visible/near-infrared spectrometry [50], a predator-prey model with herd behaviour [51], a Hepatitis B virus mathematical model [52] and a biological population growth model using the carrying capacity [53].
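For intuition, system (2) can be integrated directly with an explicit fractional (rectangle-rule) Euler scheme; this is only a hedged illustration with invented parameter values and is not the paper's procedure, which trains SCGNNs against Adam reference solutions:

```python
import math

def frac_euler(f, y0, upsilon, h, steps):
    """Explicit fractional rectangle-rule Euler method for the Caputo IVP
    d^υ y / dx^υ = f(y), y(0) = y0, 0 < υ ≤ 1.  For υ = 1 it reduces to
    the classical forward Euler method."""
    g = math.gamma(upsilon + 1.0)
    ys, fs = [list(y0)], []
    for n in range(steps):
        fs.append(f(ys[n]))
        # convolution weights ((n+1-j)^υ - (n-j)^υ), j = 0..n
        w = [(n + 1 - j) ** upsilon - (n - j) ** upsilon for j in range(n + 1)]
        ys.append([y0[k] + h ** upsilon / g *
                   sum(w[j] * fs[j][k] for j in range(n + 1))
                   for k in range(len(y0))])
    return ys

# Right-hand side of system (2); all parameter values below are hypothetical.
a0, b0, a1, c3 = 1.0, 0.1, 0.5, 0.2
r0, r1, r2, r3 = 0.5, 0.5, 0.3, 0.4
d0, d1, d2, d3 = 1.0, 1.0, 1.0, 1.0
k1, k2 = 0.1, 0.5

def rhs(y):
    L, M, N = y
    return [a0 * L - r0 * L * M / (L + d0) - k1 / (k2 + L) - b0 * L * L,
            r1 * L * M / (L + d1) - a1 * M - r2 * M * N / (M + d2),
            c3 * N * N - r3 * N * N / (M + d3)]

traj = frac_euler(rhs, [0.5, 0.3, 0.2], upsilon=0.9, h=0.01, steps=200)
print(traj[-1])   # populations L, M, N at x = 2
```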

The novel features of the proposed SCGNNs for solving the mathematical FKFS system are defined as:


#### **3. Designed SCGNNs Procedure**

This section of the study provides the procedure of the stochastic computing SCGNNs scheme for the mathematical form of the FKFS system defined in system (1). The workflow diagram for the mathematical FKFS model using the computing SCGNNs scheme is provided in Figure 1, based on three blocks: the mathematical model, the designed methodology and the result performances. The design performances are given in two measures.



**Figure 1.** Workflow of the SCGNNs construction to solve the FKFS model.

The significant procedures regarding generalization have been provided by using the Adam scheme, while the numerical procedures are implemented with the default parameter settings to generate the model dataset. Fifteen hidden neurons have been selected in this study, along with a data selection for the FKFS model of 82% for training and 9% each for testing and validation. The artificial intelligence abilities of the supervised learning SCGNNs have been performed with the best cooperation among the indices, including complexity, premature convergence, overfitting and underfitting cases. Additionally, these network parameters are set after exhaustive simulation studies, experience, knowledge and care, and small variations in these settings result in degraded performance of the networks.
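The 82%/9%/9% split can be sketched as follows (a generic index split; MATLAB's data division functions perform the equivalent internally, and the seed here is an arbitrary illustrative choice):

```python
import random

def split_indices(n, train=0.82, test=0.09, seed=0):
    """Shuffle n sample indices and cut them into train / test / validation
    portions; whatever remains after train and test goes to validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = round(train * n)
    n_test = round(test * n)
    return (idx[:n_train],
            idx[n_train:n_train + n_test],
            idx[n_train + n_test:])

tr, te, va = split_indices(1000)
print(len(tr), len(te), len(va))   # 820 90 90
```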

The second phase of the stochastic SCGNNs uses the generic perceptron based on a single-neuron model, as presented in Figure 2. Figure 2a shows the single-layered neural network structure, while Figure 2b shows the designed layer construction for solving the mathematical FKFS model: a single input layer vector, 15 neurons in the hidden layer and three outcomes in the output layer. The stochastic SCGNNs are applied in 'Matlab' (nftool command) with appropriate selections of hidden neurons, testing statistics, learning methods and verification statistics. The implementation settings of the SCGNNs scheme for the FKFS model are provided in Table 1. The network training is performed with the proposed stochastic SCGNNs scheme, where backpropagation is exploited to compute the Jacobian *JB* of the performance measure (MSE) with respect to the weight vectors and the bias variables *B*. The update of the decision variables is given as:

$$\begin{array}{c} JJ = JB^{T} \times JB, \\ Je = JB^{T} \times e, \\ dB = -(JJ + I \times \mu)^{-1} \times Je, \end{array}$$

where *e* indicates the error and *I* is the identity matrix. The parameter settings of the SCGNNs scheme are provided in Table 1; a slight change to these settings may result in poor performance, e.g., premature convergence. Therefore, the settings were chosen with great care, after thorough numerical investigation.
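As a generic illustration of a Jacobian-based damped update of this kind (a sketch under our reading of the formula; the matrix sizes and the damping scalar `mu` are illustrative stand-ins, not values from the paper):

```python
import numpy as np

# Sketch of a Jacobian-based weight update.
# JB, e and mu are illustrative stand-ins, not values from the paper.
rng = np.random.default_rng(0)
JB = rng.standard_normal((20, 5))   # Jacobian of 20 residuals w.r.t. 5 weights
e = rng.standard_normal(20)         # error vector
mu = 0.1                            # damping scalar (assumed)

JJ = JB.T @ JB                      # Gauss-Newton approximation of the Hessian
Je = JB.T @ e                       # gradient of the MSE up to a constant
dB = -np.linalg.solve(JJ + mu * np.eye(5), Je)  # increment for weights and biases
```

Adding `mu` to the diagonal keeps the linear system well conditioned, which is one standard guard against the degraded convergence mentioned above.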

**Table 1.** Parameter setting to execute the SCGNNs procedure.





**Figure 2.** Generic and specific ANNs structure to solve the FKFS model. (**a**) Generic structure of single neuron; (**b**) Designed layer structure, a single input layer vector having 15 hidden numbers of neurons in the hidden layer along with the three outcomes in the outer layer.

#### **4. Results of the FKFS Model**

Three fractional order cases of the model have been presented by using the designed SCGNNs operator. The mathematical descriptions of these operators are given as:

**Case 1:** The updated form of Equation (2) based on the FKFS model by taking *υ* = 0.5, *a*<sub>0</sub> = 1.5, *a*<sub>1</sub> = 1, *b*<sub>0</sub> = 0.06, *ρ*<sub>0</sub> = 1, *ρ*<sub>1</sub> = 2, *ρ*<sub>2</sub> = 0.405, *ρ*<sub>3</sub> = 1, *c*<sub>3</sub> = 1.5, *k*<sub>1</sub> = *k*<sub>2</sub> = 0.1, *d*<sub>0</sub> = 10, *d*<sub>1</sub> = 10, *d*<sub>2</sub> = 10, *d*<sub>3</sub> = 20 and *i*<sub>1</sub> = *i*<sub>2</sub> = *i*<sub>3</sub> = 1.2 is shown as:

$$\begin{cases} \frac{d^{0.5}L(\mathbf{x})}{dx^{0.5}} = 1.5L(\mathbf{x}) - \frac{L(\mathbf{x})M(\mathbf{x})}{L(\mathbf{x}) + 10} - \frac{0.1}{0.1 + L(\mathbf{x})} - 0.06L^2(\mathbf{x}), & L\_0 = 1.2, \\\frac{d^{0.5}M(\mathbf{x})}{dx^{0.5}} = \frac{2L(\mathbf{x})M(\mathbf{x})}{L(\mathbf{x}) + 10} - M(\mathbf{x}) - \frac{0.405M(\mathbf{x})N(\mathbf{x})}{M(\mathbf{x}) + 10}, & M\_0 = 1.2, \\\frac{d^{0.5}N(\mathbf{x})}{dx^{0.5}} = 1.5N^2(\mathbf{x}) - \frac{N^2(\mathbf{x})}{M(\mathbf{x}) + 20}, & N\_0 = 1.2. \end{cases} \tag{3}$$
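For readers who want a reference trajectory, the Case 1 system can be integrated with a simple explicit rectangle-rule scheme for the Caputo derivative. This is our own illustrative sketch (step size and horizon are arbitrary), not the data-generation procedure used in the paper:

```python
import numpy as np
from math import gamma

def fkfs_rhs(y):
    # Right-hand side of the Case 1 system (Equation (3)).
    L, M, N = y
    dL = 1.5 * L - L * M / (L + 10) - 0.1 / (0.1 + L) - 0.06 * L**2
    dM = 2 * L * M / (L + 10) - M - 0.405 * M * N / (M + 10)
    dN = 1.5 * N**2 - N**2 / (M + 20)
    return np.array([dL, dM, dN])

def fractional_euler(rhs, y0, nu, h, steps):
    # Explicit rectangle-rule scheme for a Caputo derivative of order nu:
    # y_n = y_0 + h^nu / Gamma(nu + 1) * sum_j ((n-j)^nu - (n-j-1)^nu) f(y_j).
    y = np.zeros((steps + 1, len(y0)))
    f = np.zeros_like(y)
    y[0], f[0] = y0, rhs(y0)
    c = h**nu / gamma(nu + 1.0)
    for n in range(1, steps + 1):
        j = np.arange(n)
        w = (n - j)**nu - (n - j - 1.0)**nu   # rectangle-rule weights
        y[n] = y0 + c * (w[:, None] * f[:n]).sum(axis=0)
        f[n] = rhs(y[n])
    return y

sol = fractional_euler(fkfs_rhs, np.array([1.2, 1.2, 1.2]), nu=0.5, h=0.001, steps=40)
```

Over this short horizon the prey *L*(*x*) and top predator *N*(*x*) grow while *M*(*x*) decays, consistent with the signs of the right-hand side at the initial point.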

**Case 2:** The updated form of Equation (2) based on the FKFS model by taking *υ* = 0.7, *a*<sub>0</sub> = 1.5, *a*<sub>1</sub> = 1, *b*<sub>0</sub> = 0.06, *ρ*<sub>0</sub> = 1, *ρ*<sub>1</sub> = 2, *ρ*<sub>2</sub> = 0.405, *ρ*<sub>3</sub> = 1, *c*<sub>3</sub> = 1.5, *k*<sub>1</sub> = *k*<sub>2</sub> = 0.1, *d*<sub>0</sub> = 10, *d*<sub>1</sub> = 10, *d*<sub>2</sub> = 10, *d*<sub>3</sub> = 20 and *i*<sub>1</sub> = *i*<sub>2</sub> = *i*<sub>3</sub> = 1.2 is shown as:

$$\begin{cases} \frac{d^{0.7}L(\mathbf{x})}{dx^{0.7}} = 1.5L(\mathbf{x}) - \frac{L(\mathbf{x})M(\mathbf{x})}{L(\mathbf{x}) + 10} - \frac{0.1}{0.1 + L(\mathbf{x})} - 0.06L^2(\mathbf{x}), & L\_0 = 1.2, \\\frac{d^{0.7}M(\mathbf{x})}{dx^{0.7}} = \frac{2L(\mathbf{x})M(\mathbf{x})}{L(\mathbf{x}) + 10} - M(\mathbf{x}) - \frac{0.405M(\mathbf{x})N(\mathbf{x})}{M(\mathbf{x}) + 10}, & M\_0 = 1.2, \\\frac{d^{0.7}N(\mathbf{x})}{dx^{0.7}} = 1.5N^2(\mathbf{x}) - \frac{N^2(\mathbf{x})}{M(\mathbf{x}) + 20}, & N\_0 = 1.2. \end{cases} \tag{4}$$

**Case 3:** The updated form of Equation (2) based on the FKFS model by taking *υ* = 0.9, *a*<sub>0</sub> = 1.5, *a*<sub>1</sub> = 1, *b*<sub>0</sub> = 0.06, *ρ*<sub>0</sub> = 1, *ρ*<sub>1</sub> = 2, *ρ*<sub>2</sub> = 0.405, *ρ*<sub>3</sub> = 1, *c*<sub>3</sub> = 1.5, *k*<sub>1</sub> = *k*<sub>2</sub> = 0.1, *d*<sub>0</sub> = 10, *d*<sub>1</sub> = 10, *d*<sub>2</sub> = 10, *d*<sub>3</sub> = 20 and *i*<sub>1</sub> = *i*<sub>2</sub> = *i*<sub>3</sub> = 1.2 is shown as:

$$\begin{cases} \frac{d^{0.9}L(\mathbf{x})}{dx^{0.9}} = 1.5L(\mathbf{x}) - \frac{L(\mathbf{x})M(\mathbf{x})}{L(\mathbf{x}) + 10} - \frac{0.1}{0.1 + L(\mathbf{x})} - 0.06L^2(\mathbf{x}), & L\_0 = 1.2, \\\frac{d^{0.9}M(\mathbf{x})}{dx^{0.9}} = \frac{2L(\mathbf{x})M(\mathbf{x})}{L(\mathbf{x}) + 10} - M(\mathbf{x}) - \frac{0.405M(\mathbf{x})N(\mathbf{x})}{M(\mathbf{x}) + 10}, & M\_0 = 1.2, \\\frac{d^{0.9}N(\mathbf{x})}{dx^{0.9}} = 1.5N^2(\mathbf{x}) - \frac{N^2(\mathbf{x})}{M(\mathbf{x}) + 20}, & N\_0 = 1.2. \end{cases} \tag{5}$$

Figures 3–7 illustrate the stochastic SCGNNs procedures for the FKFS mathematical system. Figure 3 shows the state transitions (STs) together with the best performances for the FKFS system, i.e., the MSE results based on training, validation and best-curve measures. The best measures for the three cases are obtained at iterations 81, 27 and 17, with values 7.58035 × 10<sup>−10</sup>, 1.72965 × 10<sup>−9</sup> and 4.49765 × 10<sup>−11</sup>, respectively. The second half of Figure 3 shows the gradient values of the SCGNNs scheme, found as 9.35 × 10<sup>−8</sup>, 9.61 × 10<sup>−8</sup> and 6.57 × 10<sup>−8</sup>. These depictions indicate the correctness and convergence of the SCGNNs scheme for the FKFS system. The result assessments based on the training targets and outputs, validation targets and outputs, test targets and outputs, errors and fitness curves are illustrated in the first half of Figure 4, while the error histograms (EHs) for training, validation, test and zero error are drawn in the second half of Figure 4. The EH performances are 1.68 × 10<sup>−5</sup>, 5.79 × 10<sup>−6</sup> and 1.05 × 10<sup>−7</sup>. Figure 5 presents the correlation performances based on training, validation and testing; the correlation measures equal 1 for the FKFS system.
These measures indicate the correctness of the stochastic SCGNNs procedure for the FKFS model. The MSE convergence measures, i.e., complexity values, training performances, validation measures, iterations, testing and backpropagation, are reported in Table 2.

**Table 2.** SCGNNs procedures for the mathematical form of the FKFS model.


**Figure 3.** MSE and STs for the mathematical form of the FKFS model. (**a**) MSE for C-1; (**b**) MSE for C-2; (**c**) MSE for C-3; (**d**) EHs for C-1; (**e**) EHs for C-2; (**f**) EHs for C-3.

**Figure 4.** Results valuations and EHs for the mathematical form of the FKFS model. (**a**) Result measures for C-1; (**b**) Result measures for C-2; (**c**) Result measures for C-3; (**d**) EHs for C-1; (**e**) EHs for C-2; (**f**) EHs for C-3.

**Figure 5.** Regression performances for the mathematical form of the FKFS model. (**a**) Regression for C-1; (**b**) Regression for C-2; (**c**) Regression for C-3.

**Figure 6.** Results overlapping for the mathematical form of the FKFS model. (**a**) Results of the logistic prey *L*(*x*); (**b**) Results of the Holling type *M*(*x*); (**c**) Results for the top-predator *N*(*x*).

Figures 6 and 7 present comparative investigations based on the solution overlap and the absolute error (AE) performances for the FKFS system. Figure 6 confirms the correctness of the SCGNNs scheme through the overlapping of the results for each class of the FKFS system. The AE values for each class are provided in Figure 7. The AE measures for the logistic prey *L*(*x*) lie in the ranges 10<sup>−5</sup> to 10<sup>−6</sup>, 10<sup>−4</sup> to 10<sup>−7</sup> and 10<sup>−5</sup> to 10<sup>−7</sup> for the first, second and third case, respectively. The AE performances for the Holling-type (Lotka–Volterra) predator *M*(*x*) lie in the ranges 10<sup>−4</sup> to 10<sup>−6</sup>, 10<sup>−4</sup> to 10<sup>−5</sup> and 10<sup>−5</sup> to 10<sup>−8</sup>, and the AE values for the top predator *N*(*x*) in the ranges 10<sup>−4</sup> to 10<sup>−6</sup>, 10<sup>−5</sup> to 10<sup>−6</sup> and 10<sup>−5</sup> to 10<sup>−8</sup>. These AE illustrations authenticate the correctness of the stochastic SCGNNs for solving the nonlinear FKFS system.

**Figure 7.** AE for the mathematical form of the FKFS model. (**a**) AE for the logistic prey *L*(*x*); (**b**) AE for the Holling type *M*(*x*); (**c**) AE for the top-predator *N*(*x*).

#### **5. Conclusions**

The motive of this work is to obtain the solutions of the fractional food supply model. Fractional derivatives have been used to provide realistic and accurate solutions of the food supply mathematical model. The fractional food supply system contains three categories: the logistic prey *L*(*x*), the Holling-type predator *M*(*x*) and the top predator *N*(*x*). Efficient numerical performances for three variations of the fractional food supply system have been provided with stochastic procedures based on the scaled conjugate gradient neural network scheme. The data are split as 82% for training and 9% each for testing and validation, with 15 hidden neurons. The precision and accuracy of the designed SCGNNs are established by comparison with the reference solutions. The AE values, calculated as 10<sup>−6</sup> to 10<sup>−8</sup>, show the exactness of the scaled conjugate gradient neural network scheme for solving the fractional food supply system. The rationality, competence, constancy and correctness have been confirmed with the stochastic SCGNNs through simulations of the regression actions, mean square error, correlation performances, error histogram values and state transition measures. It is also observed that fractional order values close to 1 yield better solutions than the other values, as shown in the AE graphs.

In upcoming studies, the proposed SCGNNs scheme will be applied to present the solutions of Lonngren wave systems, fluid dynamical models and other fractional kinds of systems.

**Author Contributions:** Conceptualization, B.S. and Z.S.; methodology, B.S. and M.U.; software, B.S. and Z.S.; validation, B.S. and M.U.; formal analysis, Z.S. and M.W.A.; investigation, B.S. and Z.S.; resources, B.S., Z.S., M.U. and M.W.A.; data curation, B.S. and Z.S.; writing—original draft preparation, B.S. and M.W.A.; writing—review and editing, Z.S. and B.S.; visualization, M.U. and B.S.; supervision, B.S.; project administration, B.S.; funding acquisition, B.S.; All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by Al Bilad Bank Scholarly Chair for Food Security in Saudi Arabia, the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Project No. CHAIR37].

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data that support the findings of this study are available from the corresponding author upon reasonable request.

**Acknowledgments:** This work was supported by Al Bilad Bank Scholarly Chair for Food Security in Saudi Arabia, the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Project No. CHAIR37].

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **A Finite-State Stationary Process with Long-Range Dependence and Fractional Multinomial Distribution**

**Jeonghwa Lee**

Department of Statistics, Truman State University, Kirksville, MO 63501, USA; jlee@truman.edu

**Abstract:** We propose a discrete-time, finite-state stationary process that can possess long-range dependence. Among the interesting features of this process is that each state can have different long-term dependency, i.e., the indicator sequence can have a different Hurst index for different states. Furthermore, inter-arrival time for each state follows heavy tail distribution, with different states showing different tail behavior. A possible application of this process is to model overdispersed multinomial distribution. In particular, we define a fractional multinomial distribution from our model.

**Keywords:** long-range dependence; Hurst index; over-dispersed multinomial distribution

#### **1. Introduction**

Long-range dependence (LRD) refers to a phenomenon where correlation decays slowly with the time lag in a stationary process in a way that the correlation function is no longer summable. This phenomenon was first observed by Hurst [1,2] and since then it has been observed in many fields such as economics, hydrology, internet traffic, queueing networks, etc. [3–6]. In a second order stationary process, LRD can be measured by the Hurst index *H* [7,8],

$$H = \inf\{h : \limsup\_{n \to \infty} n^{-2h+1} \sum\_{k=1}^n cov(X\_1, X\_k) < \infty\}.$$

Note that *H* ∈ (0, 1), and if *H* ∈ (1/2, 1), the process possesses a long-memory property.

Among the well-known stationary stochastic processes that possess long-range dependence are fractional Gaussian noise (FGN) [9] and the fractional autoregressive integrated moving average (FARIMA) processes [10,11].

Fractional Gaussian noise {*Xj*} is a mean-zero, stationary Gaussian process with covariance function:

$$\gamma(j) := cov(X\_0, X\_j) = \frac{var(X\_0)}{2} (|j+1|^{2H} - 2|j|^{2H} + |j-1|^{2H})$$

where *H* ∈ (0, 1) is the Hurst parameter. The covariance function obeys the power law with exponent 2*H* − 2 for large lag,

$$
\gamma(j) \sim var(X\_0) H(2H - 1) j^{2H - 2} \text{ as } j \to \infty.
$$

If *H* ∈ (1/2, 1), then the covariance function decreases slowly with the power law, and ∑*<sup>j</sup> γ*(*j*) = ∞, i.e., it has the long-memory property.
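A quick numerical check of this asymptote (our sketch; var(*X*<sub>0</sub>) = 1 and *H* = 0.8 are arbitrary choices):

```python
import numpy as np

# FGN autocovariance with var(X_0) = 1, and its power-law asymptote.
def fgn_cov(j, H):
    j = np.abs(np.asarray(j, dtype=float))
    return 0.5 * ((j + 1)**(2 * H) - 2 * j**(2 * H) + np.abs(j - 1)**(2 * H))

H = 0.8
lags = np.arange(1, 2001)
gamma_j = fgn_cov(lags, H)
asym = H * (2 * H - 1) * lags**(2 * H - 2.0)
ratio = gamma_j[-1] / asym[-1]   # approaches 1 as the lag grows
```

The covariance is a second difference of *j*<sup>2*H*</sup>, so the ratio to *H*(2*H* − 1)*j*<sup>2*H*−2</sup> converges to 1 at rate *j*<sup>−2</sup>.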

A FARIMA(*p*, *d*, *q*) process {*Xt*} is the solution of:

$$\phi(B)\nabla^d X\_t = \theta(B)\epsilon\_t,$$

**Citation:** Lee, J. A Finite-State Stationary Process with Long-Range Dependence and Fractional Multinomial Distribution. *Fractal Fract.* **2022**, *6*, 596. https://doi.org/ 10.3390/fractalfract6100596

Academic Editors: Libo Feng, Yang Liu and Lin Liu

Received: 17 September 2022 Accepted: 11 October 2022 Published: 14 October 2022


**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

where *p*, *q* are positive integers, *d* is real, *B* is the backward shift operator, *BX<sub>t</sub>* = *X*<sub>*t*−1</sub>, and the fractional-differencing operator ∇<sup>*d*</sup>, autoregressive operator *φ* and moving average operator *θ* are, respectively,

$$\begin{aligned} \nabla^d &= (1 - B)^d = \sum\_{k=0}^{\infty} \frac{d(d-1)\cdots(d+1-k)}{k!}(-B)^k, \\ \phi(B) &= 1 - \phi\_1 B - \phi\_2 B^2 - \cdots - \phi\_p B^p, \\ \theta(B) &= 1 - \theta\_1 B - \theta\_2 B^2 - \cdots - \theta\_q B^q, \end{aligned}$$

where {*ε<sub>t</sub>*} is the white-noise process, which consists of iid random variables with finite second moment. Here, the parameter *d* governs the long-term dependence structure; by its relation to the Hurst index, *H* = *d* + 1/2, *d* ∈ (0, 1/2) corresponds to long-range dependence in the FARIMA process.
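The coefficients of (1 − *B*)<sup>*d*</sup> can be generated by a simple recursion equivalent to the product formula above (a sketch; *d* = 0.3, i.e., *H* = 0.8, is an arbitrary long-memory choice):

```python
import numpy as np

# pi_k are the coefficients in (1 - B)^d = sum_k pi_k B^k, computed by the
# recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k.
def frac_diff_coeffs(d, n):
    pi = np.empty(n + 1)
    pi[0] = 1.0
    for k in range(1, n + 1):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return pi

d = 0.3                          # long memory: H = d + 1/2 = 0.8
pi = frac_diff_coeffs(d, 5000)   # for d in (0, 1/2), pi_k decays slowly, like k^(-d-1)
```

The slow hyperbolic decay of these coefficients, as opposed to the geometric decay of short-memory ARMA weights, is exactly what produces the non-summable correlations.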

Another class of stationary processes that can possess long-range dependence comes from countable-state Markov processes [12]. In a stationary, positive recurrent, irreducible, aperiodic Markov chain, the indicator sequence of visits to a certain state is long-range dependent if and only if the return time to that state has an infinite second moment, which is possible only when the Markov chain has infinite state space. Moreover, if one state has a return time with infinite second moment, then so do all the other states, and all states have the same rate of dependency; that is, the indicator sequence of each state is long-range dependent with the same Hurst index.

In this paper, we develop a discrete-time finite-state stationary process that can possess long-range dependence. We define a stationary process {*Xi*, *<sup>i</sup>* ∈ N} where the number of possible outcomes of *Xi* is finite, *<sup>S</sup>* = {0, 1, ··· , *<sup>m</sup>*} for any *<sup>m</sup>* ∈ N, and for *<sup>k</sup>* = 1, 2, ··· , *<sup>m</sup>*,

$$cov(I\_{\{X\_i = k\}}, I\_{\{X\_j = k\}}) = c'\_k |i - j|^{2H\_k - 2},\tag{1}$$

for any *i*, *j* ∈ N, *i* ≠ *j*, and some constants *c*′<sub>*k*</sub> ∈ R<sub>+</sub>, *H<sub>k</sub>* ∈ (0, 1). This leads to:

$$
cov(X\_i, X\_j) \sim c'\_{k'}|i-j|^{2H\_{k'}-2} \quad \text{as } |i-j| \to \infty,\tag{2}
$$

where *k*′ = *argmax<sub>k</sub>*{*Hk*; *k* = 1, ··· , *m*}. If *H<sub>k′</sub>* = max{*Hk*; *k* = 1, ··· , *m*} ∈ (1/2, 1), (2) implies that as *n* → ∞, ∑<sup>*n*</sup><sub>*i*=1</sub> *cov*(*X*<sub>1</sub>, *X<sub>i</sub>*) diverges with the rate *n*<sup>2*H*<sub>*k*′</sub>−1</sup>, and the process is said to have long memory with Hurst parameter *H<sub>k′</sub>*. Furthermore, from (1), for *k* ∈ {1, ··· , *m*}, the process {*I*<sub>{*X<sub>i</sub>*=*k*}</sub>; *i* = 1, 2, ···} is long-range dependent if *Hk* ∈ (1/2, 1). In particular, if *Hi* ≠ *Hj*, then the states "*i*" and "*j*" produce different levels of dependence. For example, if *Hi* < 1/2 < *Hj*, then state "*j*" produces a long-memory counting process whereas state "*i*" produces a short-memory process.

A possible application of our stochastic process is to model the over-dispersed multinomial distribution. In the multinomial distribution, there are *n* trials, each trial results in one of the finite outcomes, and the outcomes of the trials are independent and identically distributed. When applying the multinomial model to real data, it is often observed that the variance is larger than what it is assumed to be, which is called over-dispersion, due to the violation of the assumption that trials are independent and have identical distribution [13,14], and there have been several ways to model an overdispersed multinomial distribution [15–18].

Our stochastic process provides a new method to model an over-dispersed multinomial distribution by introducing dependency among trials. In particular, the variance of the number of occurrences of a certain outcome among *n* trials is asymptotically proportional to a fractional exponent of *n*, from which we define:

$$Y\_k := \sum\_{i=1}^n I\_{\{X\_i = k\}} \text{ for } k = 1, 2, \dots, m,$$

and call the distribution of (*Y*1,*Y*2, ··· ,*Ym*) the fractional multinomial distribution.
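To see the over-dispersion, one can compute the variance of *Y<sub>k</sub>* implied by covariance (1) in closed form; under independence it grows like *n*, while here it grows like *n*<sup>2*H<sub>k</sub>*</sup> when *H<sub>k</sub>* > 1/2 (our sketch, with illustrative parameter values):

```python
import numpy as np

# Variance of Y_k = sum_{i=1}^n 1{X_i = k} implied by a covariance of the
# form p*c*|i-j|^(2H-2) between indicators (illustrative parameters):
# Var(Y_k) = n p (1-p) + 2 p c * sum_{l=1}^{n-1} (n - l) l^(2H-2).
def var_Yk(n, p, c, H):
    lags = np.arange(1, n)
    s = ((n - lags) * lags**(2 * H - 2.0)).sum()
    return n * p * (1 - p) + 2 * p * c * s

p, c, H = 0.3, 0.2, 0.8                    # illustrative parameter values
v1, v2 = var_Yk(1000, p, c, H), var_Yk(2000, p, c, H)
growth = np.log2(v2 / v1)                  # near 2H = 1.6, rather than 1
```

Doubling *n* roughly multiplies the variance by 2<sup>2*H*</sup>, the fractional scaling that motivates the name "fractional multinomial distribution".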

The work in this paper is an extension of the earlier work of the generalized Bernoulli process [19], and the process in this paper is reduced to the generalized Bernoulli process if there are only two states in the possible outcomes of *Xi*, e.g., *S* = {0, 1}.

In Section 2, a finite state stationary process that can possess long-range dependence is developed. In Section 3, the properties of our model are investigated with regard to tail behavior and moments of inter-arrival time of a certain state "*k*", and conditional probability of observing a state "*k*" given the past observations in the process. In Section 4, the fractional multinomial distribution is defined, followed by the conclusions in Section 5. Some proofs of propositions and theorems are in Section 6.

Throughout this paper, {*i*<sub>0</sub>, *i*<sub>1</sub>, *i*<sub>2</sub>, ···}, {*i*′<sub>0</sub>, *i*′<sub>1</sub>, *i*′<sub>2</sub>, ···} ⊂ N, with *i*<sub>0</sub> < *i*<sub>1</sub> < *i*<sub>2</sub> < ··· and *i*′<sub>0</sub> < *i*′<sub>1</sub> < *i*′<sub>2</sub> < ··· . For any set *A* = {*i*<sub>0</sub>, *i*<sub>1</sub>, ··· , *i<sub>n</sub>*}, |*A*| = *n* + 1, the number of elements in the set *A*, and for the empty set we define |∅| = 0.

#### **2. Finite-State Stationary Process with Long-Range Dependence**

We define the stationary process {*Xi*, *i* ∈ N} where the set of possible outcomes of *Xi* is finite, *S* = {0, 1, ··· , *m*}, for *m* ∈ N, and the probability of observing state "*k*" at time *i* is *P*(*Xi* = *k*) = *pk* > 0, for *k* = 0, 1, ··· , *m*, with ∑<sup>*m*</sup><sub>*k*=0</sub> *pk* = 1.

For any set *<sup>A</sup>* = {*i*0, *<sup>i</sup>*1, ··· , *in*} ⊂ N, define the operator:

$$L\_{H,p,c}^\*(A) := p \prod\_{j=1,\cdots,n} (p+c|i\_j - i\_{j-1}|^{2H-2}).$$

If *A* = ∅, define *L*<sup>∗</sup><sub>*H*,*p*,*c*</sub>(*A*) := 1, and if *A* = {*i*<sub>0</sub>}, *L*<sup>∗</sup><sub>*H*,*p*,*c*</sub>(*A*) := *p*.
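The operator is straightforward to implement (our sketch, for an increasing index set *A*, including the two boundary conventions just stated):

```python
# Sketch of L*_{H,p,c}(A), with the conventions L*(empty set) = 1 and L*({i0}) = p.
def L_star(A, H, p, c):
    A = sorted(A)
    if not A:
        return 1.0
    out = p
    for prev, cur in zip(A, A[1:]):
        out *= p + c * (cur - prev)**(2 * H - 2.0)
    return out
```

For example, `L_star([1, 2], H=0.8, p=0.3, c=0.2)` gives 0.3 · (0.3 + 0.2 · 1<sup>2*H*−2</sup>) = 0.15.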

Let **H** = (*H*1, *H*2, ··· , *Hm*), **p** = (*p*1, *p*2, ··· , *pm*), **c** = (*c*1, *c*2, ··· , *cm*) be vectors of length *<sup>m</sup>*, and **<sup>H</sup>**, **<sup>p</sup>**, **<sup>c</sup>** <sup>∈</sup> (0, 1)*m*. We are now ready to define the following operators.

**Definition 1.** *Let A*<sub>0</sub>, *A*<sub>1</sub>, ··· , *A<sub>m</sub>* ⊂ N *be pairwise disjoint, and* |*A*<sub>0</sub>| = *n* > 0. *Define,*

$$L^\*\_{\mathbf{H}, \mathbf{p}, \mathbf{c}}(A\_1, A\_{2'}, \cdots, A\_m) := \prod\_{k=1, \cdots, m} L^\*\_{H\_k, p\_k, c\_k}(A\_k),$$

*and,*

$$D^\*\_{\mathbf{H}, \mathbf{p}, \mathbf{c}}(A\_1, A\_2, \cdots, A\_m; A\_0) := \sum\_{\ell=0}^{n} (-1)^{\ell} \sum\_{\substack{B \subset A\_0 \\ |B| = \ell}} \; \sum\_{\substack{B\_1, \cdots, B\_m \subset B \\ B\_k \cap B\_{k'} = \varnothing,\; k \neq k' \\ \cup\_k B\_k = B}} L^\*\_{\mathbf{H}, \mathbf{p}, \mathbf{c}}(A\_1 \cup B\_1, A\_2 \cup B\_2, \cdots, A\_m \cup B\_m).$$

For ease of notation, we denote *D*<sup>∗</sup><sub>**H**,**p**,**c**</sub>, *L*<sup>∗</sup><sub>**H**,**p**,**c**</sub> and *L*<sup>∗</sup><sub>*H<sub>k</sub>*,*p<sub>k</sub>*,*c<sub>k</sub>*</sub> by **D**<sup>∗</sup>, **L**<sup>∗</sup> and *L*<sup>∗</sup><sub>*k*</sub>, respectively. Note that if *A*<sub>0</sub> = {*i*<sub>0</sub>},

$$\mathbf{D}^\*(A\_1, A\_2, \cdots, A\_m; A\_0) = \prod\_{k=1,\cdots,m} L\_k^\*(A\_k) \left(1 - \sum\_{k'=1}^m \frac{L\_{k'}^\*(A\_{k'} \cup \{i\_0\})}{L\_{k'}^\*(A\_{k'})}\right). \tag{3}$$

For any pairwise disjoint sets *A*<sub>0</sub>, *A*<sub>1</sub>, ··· , *A<sub>m</sub>* ⊂ N, if **D**<sup>∗</sup>(*A*<sub>1</sub>, *A*<sub>2</sub>, ··· , *A<sub>m</sub>*; *A*<sub>0</sub>) > 0, then {*Xi*; *i* ∈ N} is a well-defined stationary process with the following probabilities:

$$P(\cap\_{i \in A\_k} \{X\_i = k\}) = L\_k^\*(A\_k), \text{ for } k = 1, \cdots, m,\tag{4}$$

$$P(\cap\_{k=1,\cdots,m} \cap\_{i \in A\_k} \{X\_i = k\}) = \prod\_{k=1,\cdots,m} L\_k^\*(A\_k),\tag{5}$$

$$P(\cap\_{k=0,\cdots,m} \cap\_{i \in A\_k} \{X\_i = k\}) = \mathbf{D}^\*(A\_1, A\_2, \cdots, A\_m; A\_0). \tag{6}$$

In particular, if the stationary process with the probabilities above is well defined, then, for *k*, *k*′ = 1, ··· , *m*, *k* ≠ *k*′, we have:

$$\begin{aligned} P(X\_i = k, X\_j = k) &= p\_k (p\_k + c\_k |j - i|^{2H\_k - 2}), \\ P(X\_i = k, X\_j = k') &= p\_k p\_{k'} \end{aligned}$$

$$\begin{split} P(X\_i = 0, X\_j = 0) &= 1 - 2 \sum\_{k=1, \cdots, m} P(X\_i = k) + \sum\_{k, k'=1, \cdots, m} P(X\_i = k, X\_j = k') \\ &= 1 - 2 \sum\_{k=1}^m p\_k + \sum\_{k=1}^m p\_k (p\_1 + p\_2 + \cdots + p\_m + c\_k | i - j|^{2H\_k - 2}) \\ &= p\_0^2 + \sum\_{k=1}^m p\_k c\_k |i - j|^{2H\_k - 2} \end{split}$$
 

$$\begin{split} P(X\_i = k, X\_j = 0) &= P(X\_i = 0, X\_j = k) = p\_k(1 - p\_1 - p\_2 - \dots - p\_m - c\_k |i - j|^{2H\_k - 2}) \\ &= p\_k(p\_0 - c\_k |i - j|^{2H\_k - 2}). \end{split}$$

As a result, for *i* ≠ *j*, *i*, *j* ∈ N, *k* ≠ *k*′, *k*, *k*′ ∈ {1, 2, ··· , *m*},

$$cov(I\_{\{X\_i = k\}}, I\_{\{X\_j = k\}}) = p\_k c\_k |i - j|^{2H\_k - 2},\tag{7}$$

$$cov(I\_{\{X\_i = k\}}, I\_{\{X\_j = k'\}}) = 0,\tag{8}$$

$$cov(I\_{\{X\_i=0\}}, I\_{\{X\_j=0\}}) = \sum\_{k=1}^{m} p\_k c\_k |i-j|^{2H\_k-2},\tag{9}$$

$$cov(I\_{\{X\_i = k\}}, I\_{\{X\_j = 0\}}) = -p\_k c\_k |i - j|^{2H\_k - 2}. \tag{10}$$

Note that ({*I*{*Xi*=1}}*<sup>i</sup>*∈N, {*I*{*Xi*=2}}*<sup>i</sup>*∈N, ··· , {*I*{*Xi*=*m*}}*<sup>i</sup>*∈N) are *m* generalized Bernoulli processes with Hurst parameters *H*1, *H*2, ··· , *Hm*, respectively (see [19]). However, they are not independent, since for ℓ ≠ *k*, ℓ, *k* ∈ {1, 2, ··· , *m*},

$$P(\{I\_{\{X\_i=\ell\}}=1\} \cap \{I\_{\{X\_i=k\}}=1\}) = 0 \neq P(I\_{\{X\_i=\ell\}}=1)P(I\_{\{X\_i=k\}}=1) = p\_\ell p\_k.$$

Further, we have,

$$\begin{split} cov(X\_i, X\_j) &= E(X\_i X\_j) - E(X\_i)E(X\_j) \\ &= \sum\_{k,k'} k k' P(I\_{\{X\_i = k\}} = 1, I\_{\{X\_j = k'\}} = 1) - \sum\_{k,k'} k k' p\_k p\_{k'} \\ &= \sum\_{k=1, \cdots, m} k^2 p\_k c\_k |i - j|^{2H\_k - 2} .\end{split}$$

Therefore, the process {*Xi*}*i*∈<sup>N</sup> possesses long-range dependence if max{*H*1, ··· , *Hm*} > 1/2.

All the results in this paper are valid regardless of how the finite-state space of *Xi* is defined. More specifically, given that **D**<sup>∗</sup>(*A*<sub>1</sub>, *A*<sub>2</sub>, ··· , *A<sub>m</sub>*; *A*<sub>0</sub>) > 0 for any pairwise disjoint sets *A*<sub>0</sub>, *A*<sub>1</sub>, ··· , *A<sub>m</sub>* ⊂ N, we can define the probabilities (4)–(6) with any state space *S* = {*s*<sub>0</sub>, *s*<sub>1</sub>, *s*<sub>2</sub>, ··· , *s<sub>m</sub>*} ⊂ R for any *m* ∈ N in the following way.

$$\begin{aligned} P(\cap\_{i \in A\_k} \{X\_i = s\_k\}) &= L\_k^\*(A\_k), \text{ for } k = 1, \dots, m, \\ P(\cap\_{k=1,\dots,m} \cap\_{i \in A\_k} \{X\_i = s\_k\}) &= \prod\_{k=1,\dots,m} L\_k^\*(A\_k), \\ P(\cap\_{k=0,\dots,m} \cap\_{i \in A\_k} \{X\_i = s\_k\}) &= \mathbf{D}^\*(A\_1, A\_2, \dots, A\_m; A\_0). \end{aligned}$$

Note that the only difference is that the state "*k*" is replaced by "*s<sub>k</sub>*". As a result, we obtain the same results as (7)–(10), except that *I*{*Xi*=*k*} is replaced by *I*{*Xi*=*s<sub>k</sub>*}, and we get:

$$\begin{split} cov(X\_i, X\_j) &= cov(X\_i - s\_0, X\_j - s\_0) \\ &= \sum\_{k,k'=1,\cdots,m} (s\_k - s\_0)(s\_{k'} - s\_0) P(I\_{\{X\_i = s\_k\}} = 1, I\_{\{X\_j = s\_{k'}\}} = 1) - \sum\_{k,k'=1,\cdots,m} (s\_k - s\_0)(s\_{k'} - s\_0) p\_k p\_{k'} \\ &= \sum\_{k=1,\cdots,m} (s\_k - s\_0)^2 p\_k c\_k |i - j|^{2H\_k - 2}. \end{split}$$

In a similar way, all the results in this paper can be easily transferred to any finite-state space *S* ⊂ R. For the sake of simplicity, we assume *S* = {0, 1, ··· , *m*}, *m* ∈ N, without loss of generality, and define *S*<sup>0</sup> := {1, ··· , *m*}.

Now, we will give a restriction on the parameter values, {*Hk*, *pk*, *ck*; *<sup>k</sup>* <sup>∈</sup> *<sup>S</sup>*0}, which will make **<sup>D</sup>**∗(*A*1, *<sup>A</sup>*2, ··· , *Am*; *<sup>A</sup>*0) > 0 for any pairwise disjoint sets *<sup>A</sup>*0, ··· *Am* ⊂ N; therefore, the process {*Xi*} is well-defined with the probability (4)–(6).

ASSUMPTIONS: (A.1) *ck*, *Hk*, *pk* <sup>∈</sup> (0, 1) for *<sup>k</sup>* <sup>∈</sup> *<sup>S</sup>*0. (A.2) For any *<sup>i</sup>*<sup>0</sup> < *<sup>i</sup>*<sup>1</sup> < *<sup>i</sup>*2, *<sup>i</sup>*0, *<sup>i</sup>*1, *<sup>i</sup>*<sup>2</sup> ∈ N,

$$\sum\_{k=1}^{m} \frac{\left(p\_k + c\_k|i\_1 - i\_0|^{2H\_k - 2}\right)(p\_k + c\_k|i\_2 - i\_1|^{2H\_k - 2})}{p\_k + c\_k|i\_2 - i\_0|^{2H\_k - 2}} < 1. \tag{11}$$

For the rest of the paper, it is assumed that ASSUMPTIONS (A.1, A.2) hold.

**Remark 1.** *(a). (11) holds if,*

$$\sum\_{k=1}^{m} \frac{(p\_k + c\_k)^2}{p\_k + c\_k 2^{2H\_k - 2}} < 1,$$

*since,*

$$\frac{(p\_k + c\_k|i\_1 - i\_0|^{2H\_k - 2})(p\_k + c\_k|i\_2 - i\_1|^{2H\_k - 2})}{(p\_k + c\_k|i\_2 - i\_0|^{2H\_k - 2})}$$

*is maximized when i*<sup>2</sup> − *i*<sup>0</sup> = 2, *i*<sup>1</sup> − *i*<sup>0</sup> = 1, *as was seen in Lemma 2.1 of [19]. (b). If* (*i*<sup>1</sup> − *i*<sup>0</sup>)/(*i*<sup>2</sup> − *i*<sup>0</sup>) → 0 *and* (*i*<sup>2</sup> − *i*<sup>1</sup>)/(*i*<sup>2</sup> − *i*<sup>0</sup>) → 1 *with i*<sup>2</sup> − *i*<sup>0</sup> → ∞ *in (11), then we have:*

$$\sum\_{k=1}^{m} \left( p\_k + c\_k |i\_1 - i\_0|^{2H\_k - 2} \right) < 1,\tag{12}$$

*and this, together with (11), implies that for any set* {*A<sub>k</sub>*, *i*′<sub>*k*</sub>} ⊂ N,

$$\sum\_{k=1}^{m} \frac{L\_k^\*(A\_k \cup \{i\_k^{\prime}\})}{L\_k^\*(A\_k)} < 1.$$

*This means that for any A*<sub>0</sub> = {*i*<sub>0</sub>} ⊂ N, **D**<sup>∗</sup>(*A*<sub>1</sub>, *A*<sub>2</sub>, ··· , *A<sub>m</sub>*; *A*<sub>0</sub>) > 0 *by (3). (c). From (12),* ∑<sup>*m*</sup><sub>*k*=1</sub> *c<sub>k</sub>* < 1 − ∑<sup>*m*</sup><sub>*k*=1</sub> *p<sub>k</sub>* = *p*<sub>0</sub>. *(d). If m* = 1, *(11) reduces to (2.7) in Lemma 2.1 in [19].*
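Remark 1(a) gives an easily checkable sufficient condition for (11); a sketch with illustrative parameter vectors (*m* = 2 and *m* = 1, values chosen by us):

```python
# Sufficient condition from Remark 1(a):
# sum_k (p_k + c_k)^2 / (p_k + c_k * 2^(2 H_k - 2)) < 1.
def sufficient_condition(H, p, c):
    total = sum((pk + ck)**2 / (pk + ck * 2.0**(2 * Hk - 2))
                for Hk, pk, ck in zip(H, p, c))
    return total < 1.0

ok = sufficient_condition(H=[0.8, 0.6], p=[0.3, 0.2], c=[0.05, 0.05])   # holds
bad = sufficient_condition(H=[0.8], p=[0.9], c=[0.2])                   # fails
```

Note that the failing example also violates Remark 1(c), since *c*<sub>1</sub> = 0.2 > *p*<sub>0</sub> = 0.1 there.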

Now we are ready to show that {*Xi*, *<sup>i</sup>* ∈ N} is well defined with probability (4)–(6).

**Proposition 1.** *For any disjoint sets A*<sub>0</sub>, *A*<sub>1</sub>, *A*<sub>2</sub>, ··· , *A<sub>m</sub>* ⊂ N, *A*<sub>0</sub> ≠ ∅,

$$\mathbf{D}^\*(A\_1, A\_2, \cdots, A\_m; A\_0) > 0.$$

The next theorem shows that the stochastic process {*Xi*, *<sup>i</sup>* ∈ N} defined with probability (4)–(6) is stationary, and it has long-range dependence if max{*Hk*, *<sup>k</sup>* <sup>∈</sup> *<sup>S</sup>*0} <sup>&</sup>gt; 1/2. Furthermore, the indicator sequence of each state is stationary, and has long-range dependence if its Hurst exponent is greater than 1/2.

**Theorem 1.** {*Xi*, *i* ∈ N} *is a stationary process with the following properties. i.*

$$P(X\_i = k) = p\_k, \text{ for } k \in S^0.$$

*ii.*

$$cov(I\_{\{X\_i = k\}}, I\_{\{X\_j = k\}}) = p\_k c\_k |i - j|^{2H\_k - 2}, \text{ for } k \in S^0,$$

*and*

$$cov(I\_{\{X\_i=0\}}, I\_{\{X\_j=0\}}) \sim p\_{k'} c\_{k'} |i-j|^{2H\_{k'}-2}, \text{ as } |i-j| \to \infty,$$

*where k*′ = *argmax<sub>k</sub> H<sub>k</sub>*. *iii.*

$$cov(X\_i, X\_j) = \sum\_{k=1}^{m} k^2 p\_k c\_k |i - j|^{2H\_k - 2}, \text{ for } i \neq j.$$

**Proof.** By Proposition 1, {*Xi*} is a well-defined stationary process with probability (4)–(6). The other results follow by (7)–(10).

#### **3. Tail Behavior of Inter-Arrival Time and Other Properties**

For *k* ∈ *S*<sup>0</sup>, {*I*{*Xi*=*k*}}*<sup>i</sup>*∈<sup>N</sup> is a stationary process in which the event {*Xi* = *k*} is recurrent, persistent, and aperiodic (here, we follow the terminology and definitions in [20]). We define a random variable *T*<sup>*i*</sup><sub>*kk*</sub> as the inter-arrival time between the *i*-th "*k*" and the previous "*k*", i.e.,

$$T\_{kk}^i := \inf\{t > 0 : X\_{t + T\_{kk}^{i-1}} = k\},$$

with $T^0\_{kk} := 0$. Since $\{I\_{\{X\_i=k\}}\}\_{i\in\mathbb{N}}$ is a GBP with parameters $(H\_k, p\_k, c\_k)$ for $k \in S^0$, $T^2\_{kk}, T^3\_{kk}, \cdots$ are i.i.d. (see page 9 of [21]). Therefore, we will denote the inter-arrival time between two consecutive observations of "*k*" by $T\_{kk}$. The next lemma is directly obtained from Theorem 3.6 in [21].

**Lemma 1.** *For $k \in S^0$, the inter-arrival time $T\_{kk}$ for state k satisfies the following. i. $T\_{kk}$ has a mean of $1/p\_k$. It has an infinite second moment if $H\_k \in (1/2, 1)$. ii.*

$$P(T\_{kk} > t) = t^{2H\_k - 3} L\_k(t),$$

*where $L\_k$ is a slowly varying function that depends on the parameters $H\_k$, $p\_k$, $c\_k$.*

The first result, *i* in Lemma 1, is similar to Lemma 1 in [22]. However, here we have a finite-state stationary process, whereas a countable-state-space Markov chain was assumed in [22]. Now, we investigate the conditional probabilities and the uniqueness of our process.
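The moment dichotomy in Lemma 1 can be read off from the tail exponent alone. As a small sanity check (assuming a pure power-law tail $P(T\_{kk} > t) \sim t^{2H\_k-3}$ and ignoring the slowly varying factor $L\_k$; the $H\_k$ values below are arbitrary), the exponent $\alpha = 2H\_k - 3$ satisfies $\alpha < -1$ for every $H\_k \in (0, 1)$, giving a finite mean, but $\alpha < -2$ only when $H\_k < 1/2$, so the second moment is infinite for $H\_k \in (1/2, 1)$:

```python
# Tail exponent alpha = 2H - 3 of the inter-arrival time (slowly varying factor ignored).
# A power-law tail t^alpha gives a finite r-th moment iff alpha < -r.
def tail_exponent(H):
    return 2 * H - 3

for H in (0.3, 0.5, 0.7, 0.9):
    a = tail_exponent(H)
    print(H, a, "mean finite:", a < -1, "second moment finite:", a < -2)
```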

**Theorem 2.** *Let $A\_0, A\_1, \cdots, A\_m$ be disjoint subsets of $\mathbb{N}$. For any $\ell \in S^0$ such that $\max A\_\ell > \max A\_0$, and for $i' \notin \cup\_{k=0}^{m} A\_k$ such that $i' > \max A\_\ell$, the conditional probability satisfies the following:*

$$P(X\_{i'} = \ell | \cap\_{k=0,\cdots,m} \cap\_{i \in A\_k} \{X\_i = k\}) = p\_\ell + c\_\ell |i' - \max A\_\ell|^{2H\_\ell - 2}.$$

*If there has been no interruption of "0" after the last observation of "$\ell$", then the chance to observe "$\ell$" depends on the distance between the current time and the last time "$\ell$" was observed, regardless of how other states appeared in the past. This can be considered a generalized Markov property. Moreover, this chance to observe "$\ell$" decreases as the distance increases, following a power law with exponent $2H\_\ell - 2$.*

**Proof.** The result follows from the fact that:

$$P(\{X\_{i'} = \ell\} \cap\_{i \in A\_k, k \in S^0} \{X\_i = k\}) = P(\cap\_{i \in A\_k, k \in \mathbb{S}^0} \{X\_i = k\}) \times (p\_\ell + c\_\ell |i' - \max A\_\ell|^{2H\_\ell - 2}),$$

since there is no $i \in A\_0$ between $i'$ and $\max A\_\ell$.
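The power-law decay of the conditional probability in Theorem 2 can be tabulated numerically; the values of $p\_\ell$, $c\_\ell$, $H\_\ell$ below are illustrative assumptions, not values from the paper:

```python
# Conditional probability of observing state l again at distance d = i' - max A_l,
# following Theorem 2: p_l + c_l * d^(2 H_l - 2)  (illustrative parameters).
p_l, c_l, H_l = 0.2, 0.05, 0.8

def cond_prob(d):
    return p_l + c_l * d ** (2 * H_l - 2)

# The chance decays toward the marginal probability p_l as the distance grows.
print([round(cond_prob(d), 4) for d in (1, 10, 100, 1000)])
```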

In a countable-state-space Markov chain, long-range dependence is possible only when the state space is infinite; additionally, if the chain is stationary, positive recurrent, irreducible, and aperiodic, then each state must have the same long-term memory, i.e., the indicator sequences have the same Hurst exponent for all states [22]. By relaxing the Markov property, long-range dependence is made possible in a finite-state stationary process, with different Hurst parameters for different states.

**Theorem 3.** *Let $A\_0, A\_1, \cdots, A\_m$ be disjoint subsets of $\mathbb{N}$. For $\ell \in S^0$ such that $\max A\_\ell < \max A\_0$, and $i'\_1, i'\_2, i'\_3 \notin \cup\_{k=0}^{m} A\_k$ such that $i'\_1, i'\_2, i'\_3 > \max A\_0$ and $i'\_2 > i'\_3$, the conditional probability satisfies the following:*

*a.*

$$p\_\ell + c\_\ell |i'\_1 - \max A\_\ell|^{2H\_\ell - 2} > P(X\_{i'\_1} = \ell \mid \cap\_{i \in A\_k, k \in S^0} \{X\_i = k\}).$$

*b.*

$$\frac{P(X\_{i'\_2} = \ell \mid \cap\_{i \in A\_k, k \in S^0} \{X\_i = k\})}{P(X\_{i'\_3} = \ell \mid \cap\_{i \in A\_k, k \in S^0} \{X\_i = k\})} > \frac{p\_\ell + c\_\ell |i'\_2 - \max A\_\ell|^{2H\_\ell - 2}}{p\_\ell + c\_\ell |i'\_3 - \max A\_\ell|^{2H\_\ell - 2}}.$$

**Theorem 4.** *A stationary process with (4)–(6) is the unique stationary process that satisfies the following. i. for $k \in S$:*

$$P(X\_i = k) = p\_k, \quad \text{where } p\_k > 0 \text{ and } \sum\_{k=0}^{m} p\_k = 1.$$

*ii. for $k \in S^0$ and any $i, j \in \mathbb{N}$, $i \neq j$,*

$$cov(I\_{\{X\_i = k\}}, I\_{\{X\_j = k\}}) = c'\_k |i - j|^{2H\_k - 2},$$

*for some constants $c'\_k \in \mathbb{R}^+$, $H\_k \in (0, 1)$; iii. for any set $A \subset S^0$ and $\{i\_k; k \in A\} \subset \mathbb{N}$,*

$$P(\cap\_{k \in A} \{X\_{i\_k} = k\}) = \prod\_{k \in A} p\_k;$$

*iv. for $\ell \in S^0$, there is a function $h\_\ell(\cdot)$ such that*

$$P(X\_{i'} = \ell \mid \cap\_{i \in A\_k, k \in S^0} \{X\_i = k\}) = h\_\ell(i' - \max A\_\ell),$$

*for disjoint subsets $A\_0, A\_1, \cdots, A\_m, \{i'\} \subset \mathbb{N}$ such that $A\_\ell \neq \emptyset$, $i' > \max A\_\ell$, and $\max A\_\ell > \max A\_0$ ($A\_0$ can be the empty set).*

**Proof.** Let $X^\*$ be a stationary process that satisfies *i*–*iv*. By *i* and *ii*,

$$P(X\_{i\_0}^\* = k, X\_{i\_1}^\* = k) = \text{cov}(I\_{\{X\_{i\_0}^\* = k\}}, I\_{\{X\_{i\_1}^\* = k\}}) + p\_k^2 = c'\_k|i\_0 - i\_1|^{2H\_k - 2} + p\_k^2,$$

which results in:

$$h\_k(i\_0 - i\_1) = P(X\_{i\_1}^\* = k | X\_{i\_0}^\* = k) = p\_k + (c\_k'/p\_k)|i\_0 - i\_1|^{2H\_k - 2}.$$

Therefore, by *iv*,

$$\begin{aligned} P(X\_{i\_0}^\* = k, X\_{i\_1}^\* = k, X\_{i\_2}^\* = k, \cdots, X\_{i\_n}^\* = k) &= p\_k \prod\_{j=1}^n h\_k(i\_j - i\_{j-1}) \\ &= L\_k^\*(\{i\_0, i\_1, \cdots, i\_n\}), \end{aligned}$$

where $L^\*\_k = L^\*\_{H\_k, p\_k, c'\_k/p\_k}$. Furthermore, by applying *iii* and *iv* to $X^\*$,

$$P(\cap\_{i \in A\_k, k \in S^0} \{X\_i^\* = k\}) = \prod\_{k=1}^{m} L\_k^\*(A\_k).$$

This implies that $X^\*$ satisfies (4)–(6) with $c\_k = c'\_k/p\_k$ for $k \in S^0$.

#### **4. Fractional Multinomial Distribution**

In this section, we define a fractional multinomial distribution that can serve as an over-dispersed multinomial distribution.

Note that $\sum\_{i=1}^{n} I\_{\{X\_i=k\}}$ has mean $np\_k$ for $k \in S$. Further, as $n \to \infty$,

$$var\left(\sum\_{i=1}^{n} I\_{\{X\_{i}=k\}}\right) \sim \begin{cases} (p\_k(1-p\_k) + \frac{c\_k'}{2H\_k - 1})n & H\_k \in (0, 1/2), \\ c\_k' n \ln n & H\_k = 1/2, \\ \frac{c\_k'}{2H\_k - 1} n^{2H\_k} & H\_k \in (1/2, 1), \end{cases}$$

for $k \in S^0$, and,

$$var\left(\sum\_{i=1}^{n} I\_{\{X\_{i}=0\}}\right) \sim \begin{cases} (p\_{k'}(1-p\_{k'}) + \frac{c\_{k'}'}{2H\_{k'}-1})n & H\_{k'} \in (0,1/2), \\ c\_{k'}'n\ln n & H\_{k'} = 1/2, \\ \frac{c\_{k'}'}{2H\_{k'}-1}n^{2H\_{k'}} & H\_{k'} \in (1/2,1), \end{cases}$$

where $k' = \operatorname{argmax}\_k\{H\_k; k \in S^0\}$, and $c'\_k = p\_k c\_k$. It also has the following covariances.

$$cov\left(\sum\_{i=1}^{n} I\_{\{X\_i = k\}}, \sum\_{i=1}^{n} I\_{\{X\_i = k'\}}\right) = -np\_kp\_{k'}$$

$$cov\left(\sum\_{i=1}^{n} I\_{\{X\_i = 0\}}, \sum\_{i=1}^{n} I\_{\{X\_i = k\}}\right) = -np\_0p\_k - \sum\_{\substack{i \neq j\\i, j = 1, \cdots, n}} c'\_k |i - j|^{2H\_k - 2},$$

for $k, k' \in S^0$.

We define $Y\_k := \sum\_{i=1}^{n} I\_{\{X\_i=k\}}$ for $k \in S$ and a fixed $n$, and call its distribution the fractional multinomial distribution with parameters $n$, **p**, **H**, **c**.

If **c** = **0**, $(Y\_0, Y\_1, Y\_2, \cdots, Y\_m)$ follows a multinomial distribution with parameters $n$, **p**, and $E(Y\_k) = np\_k$, $var(Y\_k) = np\_k(1 - p\_k)$, $cov(Y\_k, Y\_{k'}) = -np\_k p\_{k'}$, for $k, k' \in S$, $k \neq k'$, and $p\_0 = 1 - \sum\_{i=1}^{m} p\_i$.
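This baseline case is easy to check by simulation. The sketch below is a Monte Carlo check with arbitrary assumed parameters, using i.i.d. multinomial sampling for the **c** = **0** case; it confirms that the empirical variances match $np\_k(1 - p\_k)$ with no over-dispersion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
p = np.array([0.5, 0.3, 0.2])            # p_0, p_1, p_2 (illustrative)

# The c = 0 case: (Y_0, Y_1, Y_2) is an ordinary multinomial vector.
Y = rng.multinomial(n, p, size=200_000)

emp_var = Y.var(axis=0)
th_var = n * p * (1 - p)                 # var(Y_k) = n p_k (1 - p_k)
emp_cov01 = np.cov(Y[:, 0], Y[:, 1])[0, 1]
th_cov01 = -n * p[0] * p[1]              # cov(Y_0, Y_1) = -n p_0 p_1

print(emp_var, th_var)      # empirical vs. theoretical variances
print(emp_cov01, th_cov01)  # empirical vs. theoretical covariance
```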

If **c** ≠ **0**, $(Y\_0, Y\_1, \cdots, Y\_m)$ can serve as over-dispersed multinomial random variables with:

$$E(Y\_k) = np\_k, \quad var(Y\_k) = np\_k(1 - p\_k)(1 + \psi\_{n,k}),$$

where the over-dispersion parameter *ψn*,*<sup>k</sup>* is as follows.

$$\psi\_{n,k} \sim \begin{cases} \frac{c\_k}{(1-p\_k)(2H\_k-1)} & \text{if } H\_k \in (0,1/2), \\\frac{c\_k\ln n}{1-p\_k} - 1 & \text{if } H\_k = 1/2, \\\frac{c\_k n^{2H\_k-1}}{(1-p\_k)(2H\_k-1)} - 1 & \text{if } H\_k \in (1/2,1), \end{cases}$$

for *<sup>k</sup>* <sup>∈</sup> *<sup>S</sup>*0, and,

$$\psi\_{n,0} \sim \begin{cases} \frac{c\_{k'}}{(1-p\_{k'})(2H\_{k'}-1)} & \text{if } H\_{k'} \in (0,1/2),\\ \frac{c\_{k'}\ln n}{1-p\_{k'}} - 1 & \text{if } H\_{k'} = 1/2,\\ \frac{c\_{k'} n^{2H\_{k'}-1}}{(1-p\_{k'})(2H\_{k'}-1)} - 1 & \text{if } H\_{k'} \in (1/2,1), \end{cases}$$

where $k' = \operatorname{argmax}\_k\{H\_k; k \in S^0\}$, as $n \to \infty$. If $H\_k \in (0, 1/2)$, the over-dispersion parameter $\psi\_{n,k}$ remains stable as $n$ increases, whereas if $H\_k \in (1/2, 1)$, the over-dispersion parameter $\psi\_{n,k}$ grows at the rate of a fractional power of $n$, namely $n^{2H\_k-1}$.
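A minimal sketch of these three regimes (with arbitrary illustrative parameters; the slowly varying corrections hidden in the asymptotics are ignored):

```python
import math

# Asymptotic over-dispersion parameter psi_{n,k} in the three Hurst regimes
# (illustrative parameters; asymptotic formulas only).
def psi(n, H, p, c):
    if H < 0.5:
        return c / ((1 - p) * (2 * H - 1))          # stable (constant in n)
    if H == 0.5:
        return c * math.log(n) / (1 - p) - 1        # logarithmic growth
    return c * n ** (2 * H - 1) / ((1 - p) * (2 * H - 1)) - 1  # power-law growth

H_long, H_short, p_k, c_k = 0.8, 0.3, 0.3, 0.05
print(psi(10**3, H_long, p_k, c_k), psi(10**6, H_long, p_k, c_k))    # grows with n
print(psi(10**3, H_short, p_k, c_k), psi(10**6, H_short, p_k, c_k))  # constant in n
```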

#### **5. Conclusions**

A new method for modeling long-range dependence in a discrete-time finite-state stationary process was proposed. This model allows different states to have different Hurst indices, except that for the base state "0" the Hurst exponent is the maximum Hurst index of all other states. The inter-arrival time for each state follows a heavy-tailed distribution, and its tail behavior differs between states. Another interesting feature of this process is that the conditional probability of observing a state "*k*" (*k* not the base state "0") depends on the Hurst index $H\_k$ and the time elapsed since the last observation of "*k*", no matter how other states appeared in the past, given that the base state has not been observed since the last observation of "*k*". From the stationary process developed in this paper, we defined a fractional multinomial distribution that can express a wide range of over-dispersed multinomial distributions; each state can have a different over-dispersion parameter that can be asymptotically constant or grow with a fractional power of the number of trials.

#### **6. Proofs**

**Lemma 2.** *For any $\{a\_0, a\_1, \cdots, a\_n, a'\_0, a'\_1, \cdots, a'\_n\} \subset \mathbb{R}^+$ that satisfies $a\_0 - \sum\_{i=1}^{j} a\_i > 0$ and $a'\_0 - \sum\_{i=1}^{j} a'\_i > 0$ for $j = 1, 2, \cdots, n$:*

*i. if*
$$\frac{a\_0}{a'\_0} \ge \frac{a\_1}{a'\_1} \ge \cdots \ge \frac{a\_n}{a'\_n},$$

*then,*
$$\frac{a\_0 - a\_1 - a\_2 - \cdots - a\_n}{a'\_0 - a'\_1 - a'\_2 - \cdots - a'\_n} \ge \frac{a\_0}{a'\_0};$$

*ii. if*
$$\frac{a\_0}{a'\_0} < \frac{a\_1}{a'\_1} \le \cdots \le \frac{a\_n}{a'\_n},$$

*then,*
$$\frac{a\_0 - a\_1 - a\_2 - \cdots - a\_n}{a'\_0 - a'\_1 - a'\_2 - \cdots - a'\_n} \le \frac{a\_0}{a'\_0}.$$

*iii. For any $\{a\_0, a\_1, \cdots, a\_n, a'\_0, a'\_1, \cdots, a'\_n\} \subset \mathbb{R}^+$,*

$$\max\_{i} \frac{a\_i}{a'\_i} \ge \frac{a\_1 + a\_2 + \dots + a\_n}{a'\_1 + a'\_2 + \dots + a'\_n} \ge \min\_{i} \frac{a\_i}{a'\_i}.$$

**Proof.** *i* and *ii* were proved in Lemma 5.2 in [19]. For *iii*, define $b\_j$ such that

$$\frac{a\_j}{a'\_j} = b\_j.$$

Then,

$$\frac{a\_1 + a\_2 + \dots + a\_n}{a\_1' + a\_2' + \dots + a\_n'} = \frac{b\_1 a\_1' + b\_2 a\_2' + \dots + b\_n a\_n'}{a\_1' + a\_2' + \dots + a\_n'}$$

which is a weighted average of $\{b\_j, j = 1, \cdots, n\}$.
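Since each $b\_j = a\_j/a'\_j$, the combined ratio is a convex combination of the $b\_j$ with weights $a'\_j/\sum\_i a'\_i$, hence it lies between the smallest and largest $b\_j$. A quick numeric check with arbitrary positive numbers (illustrative values only):

```python
# Numeric sanity check of Lemma 2 iii: the ratio of sums is a weighted
# average of the individual ratios a_j / a'_j (arbitrary positive values).
a = [2.0, 5.0, 1.0]
ap = [4.0, 2.0, 3.0]

ratios = [x / y for x, y in zip(a, ap)]
combined = sum(a) / sum(ap)
print(min(ratios), combined, max(ratios))  # combined lies between the extremes
```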

To ease our notation, we will denote:

$$\mathbf{L}^\*(A\_1, A\_2, \cdots, A\_{k-1}, A\_k \cup \{i\}, A\_{k+1}, \cdots, A\_m)$$

by,

$$\mathbf{L}^\*(\cdots, A\_k \cup \{i\}, \cdots),$$

and,

$$\mathbf{L}^\*(\cdots, A\_k \cup \{i\}, A\_{k'} \cup \{j\}, \cdots) = \mathbf{L}^\*(A\_1^\*, A\_2^\*, \cdots, A\_m^\*),$$

where, if $k \neq k'$,

$$A\_l^\* = \begin{cases} A\_l & \text{if } l \neq k, k', \\ A\_l \cup \{i\} & \text{if } l = k, \\ A\_l \cup \{j\} & \text{if } l = k', \end{cases}$$

and if $k = k'$,

$$A\_l^\* = \begin{cases} A\_l & \text{if } l \neq k, \\ A\_l \cup \{i, j\} & \text{if } l = k. \end{cases}$$

$\mathbf{D}^\*(\cdots, A\_k \cup \{i\}, \cdots)$ and $\mathbf{D}^\*(\cdots, A\_k \cup \{i\}, A\_{k'} \cup \{j\}, \cdots)$ are also defined in a similar way.

**Lemma 3.** *For any disjoint sets $A\_1, \cdots, A\_m, \{i\_0, i\_1\} \subset \mathbb{N}$: i.*

$$\mathbf{D}^\*(A\_1, A\_2, \dots, A\_m; \{i\_0\}) > 0$$

*ii.*

$$\mathbf{D}^\*(A\_1, A\_2, \cdots, A\_m; \{i\_0, i\_1\}) > 0$$

**Proof.** *i.*

$$\begin{aligned} \mathbf{D}^\*(A\_1, A\_2, \cdots, A\_m; \{i\_0\}) &= \prod\_{k=1}^m L\_k^\*(A\_k) \left( 1 - \sum\_{k'=1}^m \frac{L\_{k'}^\*(A\_{k'} \cup \{i\_0\})}{L\_{k'}^\*(A\_{k'})} \right) \\ &= \prod\_{k=1}^m L\_k^\*(A\_k) \left( 1 - \sum\_{k'=1}^m \frac{L\_{k'}^\*(\{i\_{1,k'}, i\_{2,k'}, i\_0\})}{L\_{k'}^\*(\{i\_{1,k'}, i\_{2,k'}\})} \right) \end{aligned}$$

where $i\_{1,k'}, i\_{2,k'} \in A\_{k'}$ are the two closest elements to $i\_0$ in $A\_{k'}$, such that if $\min A\_{k'} < i\_0 < \max A\_{k'}$, then $i\_{1,k'} < i\_0 < i\_{2,k'}$; if $i\_0 > \max A\_{k'}$, then $i\_{1,k'} < i\_{2,k'} < i\_0$; if $i\_0 < \min A\_{k'}$, then $i\_0 < i\_{1,k'} < i\_{2,k'}$; and if $A\_{k'} = \emptyset$, then $i\_{1,k'} = i\_{2,k'} = \emptyset$. Therefore,

$$\frac{L\_{k'}^\*(\{i\_{1,k'}, i\_{2,k'}, i\_0\})}{L\_{k'}^\*(\{i\_{1,k'}, i\_{2,k'}\})} = \begin{cases} \frac{(p\_{k'} + c\_{k'} |i\_{1,k'} - i\_0|^{2H\_{k'} - 2})(p\_{k'} + c\_{k'} |i\_0 - i\_{2,k'}|^{2H\_{k'} - 2})}{p\_{k'} + c\_{k'} |i\_{1,k'} - i\_{2,k'}|^{2H\_{k'} - 2}} & \text{if } \min A\_{k'} < i\_0 < \max A\_{k'}, \\ p\_{k'} + c\_{k'} |\max A\_{k'} - i\_0|^{2H\_{k'} - 2} & \text{if } i\_0 > \max A\_{k'}, \\ p\_{k'} + c\_{k'} |\min A\_{k'} - i\_0|^{2H\_{k'} - 2} & \text{if } i\_0 < \min A\_{k'}, \\ p\_{k'} & \text{if } A\_{k'} = \emptyset. \end{cases}$$

By (11), ∑*<sup>m</sup> k* =1 *L*∗ *k* ({*i* 1,*k* ,*i* 2,*k* ,*i*0}) *L*∗ *k* ({*i* 1,*k* ,*i* 2,*k* }) <sup>&</sup>lt; 1, and the result is derived.

*ii.* Since,

$$\mathbf{D}^\*(A\_1, A\_2, \cdots, A\_m; \{i\_0, i\_1\}) = \mathbf{D}^\*(A\_1, A\_2, \cdots, A\_m; \{i\_0\}) - \sum\_{k=1}^m \mathbf{D}^\*(\cdots, A\_k \cup \{i\_1\}, \cdots; \{i\_0\}),$$

it is sufficient to show that:

$$\frac{\mathbf{L}^\*(A\_1, A\_2, \cdots, A\_m) - \sum\_{k=1}^m \mathbf{L}^\*(\cdots, A\_k \cup \{i\_0\}, \cdots)}{\sum\_{k'=1}^m \mathbf{L}^\*(\cdots, A\_{k'} \cup \{i\_1\}, \cdots) - \sum\_{k, k'=1}^m \mathbf{L}^\*(\cdots, A\_k \cup \{i\_0\}, A\_{k'} \cup \{i\_1\}, \cdots)} > 1.$$

Note that:

$$\frac{\mathbf{L}^\*(A\_1, A\_2, \cdots, A\_m)}{\sum\_{k'=1}^m \mathbf{L}^\*(\cdots, A\_{k'} \cup \{i\_1\}, \cdots)} = \frac{1}{\sum\_{k'=1}^m \frac{L\_{k'}^\*(\{i\_{1,k'}, i\_{2,k'}, i\_0\})}{L\_{k'}^\*(\{i\_{1,k'}, i\_{2,k'}\})}},$$

which is non-increasing as the set $A\_k$ increases, for $k = 1, \cdots, m$. That is,

$$\frac{\mathbf{L}^\*(A\_1, A\_2, \cdots, A\_m)}{\sum\_{k'=1}^m \mathbf{L}^\*(\cdots, A\_{k'} \cup \{i\_1\}, \cdots)} \le \frac{\mathbf{L}^\*(A'\_1, A'\_2, \cdots, A'\_m)}{\sum\_{k'=1}^m \mathbf{L}^\*(\cdots, A'\_{k'} \cup \{i\_1\}, \cdots)}$$

for any sets $A\_k \subseteq A'\_k$, $k = 1, 2, \cdots, m$. Therefore,

$$\frac{\mathbf{L}^\*(A\_1, A\_2, \cdots, A\_m)}{\sum\_{k'=1}^m \mathbf{L}^\*(\cdots, A\_{k'} \cup \{i\_1\}, \cdots)} > \frac{\sum\_{k=1}^m \mathbf{L}^\*(\cdots, A\_k \cup \{i\_0\}, \cdots)}{\sum\_{k, k'=1}^m \mathbf{L}^\*(\cdots, A\_k \cup \{i\_0\}, A\_{k'} \cup \{i\_1\}, \cdots)}$$

by *iii* of Lemma 2. By *i* of Lemma 2 combined with the fact that:

$$\frac{1}{\sum\_{k'=1}^{\mathfrak{m}} \frac{L\_{k'}^\*(\{i\_{1,k'},i\_{2,k'},i\_0\})}{L\_{k'}^\*(\{i\_{1,k'},i\_{2,k'}\})}} > 1$$

from (11), the result is derived.

Note that for any disjoint sets $A\_1, A\_2, \cdots, A\_m, \{i\_0, i\_1, \cdots, i\_n\} \subset \mathbb{N}$,

$$\begin{split} \mathbf{D}^{\*}(A\_{1},A\_{2},\cdots,A\_{m};\{i\_{0},i\_{1},\cdots,i\_{n}\}) &= \mathbf{D}^{\*}(A\_{1},A\_{2},\cdots,A\_{m};\{i\_{0},i\_{1},\cdots,i\_{n-1}\}) \\ &- \mathbf{D}^{\*}(A\_{1}\cup\{i\_{n}\},A\_{2},\cdots,A\_{m};\{i\_{0},i\_{1},\cdots,i\_{n-1}\}) \\ &- \mathbf{D}^{\*}(A\_{1},A\_{2}\cup\{i\_{n}\},\cdots,A\_{m};\{i\_{0},i\_{1},\cdots,i\_{n-1}\}) \\ &\cdots \\ &- \mathbf{D}^{\*}(A\_{1},A\_{2},\cdots,A\_{m}\cup\{i\_{n}\};\{i\_{0},i\_{1},\cdots,i\_{n-1}\}). \end{split}$$

Let us denote:

$$\sum\_{k=1}^m \mathbf{D}^\*(A\_1, \cdots, A\_{k-1}, A\_k \cup \{i\_n\}, A\_{k+1}, \cdots, A\_m; \{i\_0, i\_1, \cdots, i\_{n-1}\})$$

by:

$$\sum\_{k=1}^m \mathbf{D}^\*(\cdots, A\_k \cup \{i\_n\}, \cdots; \{i\_0, i\_1, \cdots, i\_{n-1}\}).$$

**Proof of Proposition 1.** We will show by mathematical induction that $\{X\_{i\_1}, \cdots, X\_{i\_n}\}$ is a random vector with probability (4)–(6) for any $n$ and any $\{i\_1, i\_2, \cdots, i\_n\} \subset \mathbb{N}$. For $n = 1$, it is trivial. For $n = 2$, it is proved by Lemma 3. Let us assume that $\{X\_{i\_1}, \cdots, X\_{i\_{n'-1}}\}$ is a random vector with probability (4)–(6) for any $\{i\_1, i\_2, \cdots, i\_{n'-1}\} \subset \mathbb{N}$. We will prove that $\{X\_{i\_1}, \cdots, X\_{i\_{n'}}\}$ is a random vector for any $\{i\_1, i\_2, \cdots, i\_{n'}\} \subset \mathbb{N}$.

Without loss of generality, fix a set $\{i\_1, i\_2, \cdots, i\_{n'}\} \subset \mathbb{N}$. To prove that $\{X\_{i\_1}, \cdots, X\_{i\_{n'}}\}$ is a random vector with probability (4)–(6), we need to show that $\mathbf{D}^\*(A\_1, \cdots, A\_m; A\_0) > 0$ for any pairwise disjoint sets $A\_0, \cdots, A\_m$ such that $\cup\_{k=0}^{m} A\_k = \{i\_1, \cdots, i\_{n'}\}$. If $|A\_0| = 0$ or $1$, then the result follows from the definition of $\mathbf{D}^\*$ and Lemma 3, respectively. Therefore, we assume that $|A\_0| \ge 2$, $A\_0 = \{i'\_0, i'\_1, \cdots, i'\_{n\_0}\}$, and $\max A\_0 = i'\_{n\_0}$. Let $A'\_0 = A\_0 \setminus \{i'\_{n\_0}\}$. We will first show that for any such sets,

$$\frac{\mathbf{D}^\*(A\_1, \cdots, A\_m; A'\_0)}{\sum\_{\ell=1}^m \mathbf{D}^\*(\cdots, A\_\ell \cup \{i'\_{n\_0}\}, \cdots; A'\_0)} > 1. \tag{13}$$

(13) is equivalent to **D**∗(*A*1, ··· , *Am*; *A*0) > 0.


For fixed $\ell \in \{1, 2, \cdots, m\}$, define the following vectors of length $m - 1$:

$$\begin{aligned} \mathbf{H}^{\ell} &= (H\_1, \cdots, H\_{\ell-1}, H\_{\ell+1}, \cdots, H\_m), \\ \mathbf{p}^{\ell} &= (p\_1, \cdots, p\_{\ell-1}, p\_{\ell+1}, \cdots, p\_m), \\ \mathbf{c}^{\ell} &= (c\_1, \cdots, c\_{\ell-1}, c\_{\ell+1}, \cdots, c\_m). \end{aligned}$$

We also define:

$$\mathbf{D}^\*\_{(-\ell)}(\cdots, A\_{\ell-1}, A\_{\ell+1}, \cdots; A\_0) := \mathbf{D}^\*\_{\mathbf{H}^\ell, \mathbf{p}^\ell, \mathbf{c}^\ell}(A\_1, \cdots, A\_{\ell-1}, A\_{\ell+1}, \cdots, A\_m; A\_0).$$

Since $\{X\_i; i \in \cup\_{k=1}^{m} A\_k \cup A'\_0\}$ is a random vector with (4)–(6), $\mathbf{D}^\*(\cdots, A\_\ell, \cdots; A'\_0) > 0$, and it can be written as:

$$\mathbf{D}^\*(\cdots, A\_\ell, \cdots; A'\_0) = P\left(\cap\_{i \in A'\_0} \{X\_i = 0\} \cap \bigcap\_{\substack{i \in A\_k \\ k = 1, \cdots, m \\ k \neq \ell}} \{X\_i = k\} \cap\_{i \in A\_\ell} \{X\_i = \ell\}\right) \tag{14}$$

$$\begin{split} &= P\left(\cap\_{i \in A'\_0}\{X\_i \in \{0, \ell\}\} \cap \bigcap\_{\substack{i \in A\_k \\ k = 1, \cdots, m \\ k \neq \ell}}\{X\_i = k\} \cap\_{i \in A\_\ell}\{X\_i = \ell\}\right) \\ &- P\left(\cap\_{i \in A'\_0 \setminus \{i'\_0\}}\{X\_i \in \{0, \ell\}\} \cap \bigcap\_{\substack{i \in A\_k \\ k = 1, \cdots, m \\ k \neq \ell}}\{X\_i = k\} \cap\_{i \in A\_\ell \cup \{i'\_0\}}\{X\_i = \ell\}\right) \\ &- P\left(\cap\_{i \in A'\_0 \setminus \{i'\_0, i'\_1\}}\{X\_i \in \{0, \ell\}\} \cap \bigcap\_{\substack{i \in A\_k \\ k = 1, \cdots, m \\ k \neq \ell}}\{X\_i = k\} \cap\_{i \in A\_\ell \cup \{i'\_1\}}\{X\_i = \ell\} \cap \{X\_{i'\_0} = 0\}\right) \\ &- P\left(\cap\_{i \in A'\_0 \setminus \{i'\_0, i'\_1, i'\_2\}}\{X\_i \in \{0, \ell\}\} \cap \bigcap\_{\substack{i \in A\_k \\ k = 1, \cdots, m \\ k \neq \ell}}\{X\_i = k\} \cap\_{i \in A\_\ell \cup \{i'\_2\}}\{X\_i = \ell\} \cap\_{i \in \{i'\_0, i'\_1\}}\{X\_i = 0\}\right) \\ &\;\;\vdots \\ &- P\left(\bigcap\_{\substack{i \in A\_k \\ k = 1, \cdots, m \\ k \neq \ell}}\{X\_i = k\} \cap\_{i \in A\_\ell \cup \{i'\_{n\_0 - 1}\}}\{X\_i = \ell\} \cap\_{i \in A'\_0 \setminus \{i'\_{n\_0 - 1}\}}\{X\_i = 0\}\right). \end{split}$$

Note that:


$$P\left(\bigcap\_{i\in A'\_0}\{X\_{i}\in\{0,\ell\}\}\cap\bigcap\_{\substack{i\in A\_{k}\\k=1,\cdots,m\\k\neq\ell}}\{X\_{i}=k\}\cap\_{i\in A\_{\ell}}\{X\_{i}=\ell\}\right)\tag{15}$$

$$=P\left(\bigcap\_{i\in A'\_0}\{X\_{i}\notin\{1,\cdots,\ell-1,\ell+1,\cdots,m\}\}\cap\bigcap\_{\substack{i\in A\_{k}\\k=1,\cdots,m\\k\neq\ell}}\{X\_{i}=k\}\cap\_{i\in A\_{\ell}}\{X\_{i}=\ell\}\right)$$

$$=L^{\*}\_{\ell}(A\_{\ell})\,\mathbf{D}^{\*}\_{(-\ell)}(\cdots,A\_{\ell-1},A\_{\ell+1},\cdots;A'\_0),$$

and:

*P* ∩*i*∈{*<sup>i</sup> <sup>j</sup>*+1,··· ,*i <sup>n</sup>*0−1} {*Xi* ∈ {0, }} ∩ *<sup>i</sup>*∈*Ak k*=1,··· ,*m k*= {*Xi* = *k*} ∩*i*∈*A*∪{*<sup>i</sup> j* } {*Xi* <sup>=</sup> } ∩*i*∈{*<sup>i</sup>* <sup>0</sup>,··· ,*i <sup>j</sup>*−1} {*Xi* <sup>=</sup> <sup>0</sup>} = *P* ∩*i*∈*<sup>A</sup>* <sup>0</sup>/{*i j* } {*Xi* ∈ {0, }} ∩ *<sup>i</sup>*∈*Ak k*=1,··· ,*m k*= {*Xi* <sup>=</sup> *<sup>k</sup>*} ∩*i*∈*A*∪{*<sup>i</sup> j* } {*Xi* <sup>=</sup> } − ∑ *i*∗∈*A* <sup>0</sup>,*i*∗<*i j P* ∩*i*∈*<sup>A</sup>* <sup>0</sup>/{*i j* ,*i*∗} {*Xi* ∈ {0, }} ∩ *<sup>i</sup>*∈*Ak k*=1,··· ,*m k*= {*Xi* <sup>=</sup> *<sup>k</sup>*} ∩*i*∈*A*∪{*<sup>i</sup> j* ,*i*∗} {*Xi* <sup>=</sup> } + ∑ *i* ∗,*i* ∗∗∈*A* 0, *i* ∗<*i* ∗∗<*i j P* ∩*i*∈*<sup>A</sup>* <sup>0</sup>/{*i j* ,*i*∗,*i*∗∗} {*Xi* ∈ {0, }} ∩ *<sup>i</sup>*∈*Ak k*=1,··· ,*m k*= {*Xi* <sup>=</sup> *<sup>k</sup>*} ∩*i*∈*A*∪{*<sup>i</sup> j* ,*i*∗,*i*∗∗} {*Xi* <sup>=</sup> } . . .

$$(-1)^{l}P\left(\cap\_{i \in A\_{0}^{\ell} \cup \{i\_{j}^{\ell}, i\_{0}^{\ell}, i\_{1}^{\cdots}, \cdots, i\_{j-1}^{\ell}\}}\left\{X\_{i} \in \{0, \ell\}\right\} \cap\_{i \in A\_{k}} \{X\_{i} = k\} \cap\_{i \in A\_{\ell} \cup \{i\_{j}^{\ell}, i\_{0}^{\ell}, i\_{1}^{\cdots}, \cdots, i\_{j-1}^{\ell}\}}\left\{X\_{i} = \ell\right\}\right)$$

$$= \sum\_{\substack{\mathcal{C} \cap D = \mathcal{D} \\ \mathcal{C} = \mathcal{D} \text{ or } \max \mathcal{C} < i\_{j}^{\ell}}} (-1)^{|\mathcal{C}|} L\_{\epsilon}^{\prime} (A\_{\ell} \cup \{i\_{j}^{\ell}\} \cup \mathcal{C}) \mathbf{D}\_{(-\ell)}^{\ast} (\cdots, A\_{\ell-1}, A\_{\ell+1}, \cdots, \cdot; D) \tag{16}$$
(16)

where |∅| = 0. Therefore, by (14)–(16),

$$\mathbf{D}^\*(\cdots, A\_\ell, \cdots; A'\_0) = L^\*\_{\ell}(A\_{\ell}) \mathbf{D}^\*\_{(-\ell)}(\cdots, A\_{\ell - 1}, A\_{\ell + 1}, \cdots; A'\_0) \tag{17}$$

$$+\sum\_{j=0}^{n\_0-1} \sum\_{\substack{C\cap D=\emptyset,\ C\cup D=A'\_0\setminus\{i'\_j\}\\ C=\emptyset\text{ or }\max C<i'\_j}} (-1)^{|C|+1} L\_\ell^\*(A\_\ell\cup\{i'\_j\}\cup C)\,\mathbf{D}^\*\_{(-\ell)}(\cdots,A\_{\ell-1},A\_{\ell+1},\cdots;D).$$

(17) can also be derived by the definition of *L*∗ , **D**∗, without using probability for {*Xi*; *i* ∈ ∪*m <sup>k</sup>*=1*Ak* ∪ *A* <sup>0</sup>}. In the same way, using the definition of *L*<sup>∗</sup> , **D**∗,

$$\mathbf{D}^\*(\cdots, A\_\ell \cup \{i'\_{n\_0}\}, \cdots; A'\_0) = L\_\ell^\*(A\_\ell \cup \{i'\_{n\_0}\}) \mathbf{D}^\*\_{(-\ell)}(\cdots, A\_{\ell-1}, A\_{\ell+1}, \cdots; A'\_0) \tag{18}$$

$$+ \sum\_{j=0}^{n\_0-1} \sum\_{\substack{C\cap D=\emptyset,\ C\cup D=A'\_0\setminus\{i'\_j\}\\ C=\emptyset\text{ or }\max C<i'\_j}} (-1)^{|C|+1} L\_\ell^\*(A\_\ell \cup \{i'\_{n\_0}, i'\_j\} \cup C)\,\mathbf{D}^\*\_{(-\ell)}(\cdots, A\_{\ell-1}, A\_{\ell+1}, \cdots; D).$$

Note that, for *j* = 0, 1, ··· , *n*<sup>0</sup> − 1,

$$g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_{1},\cdots,A\_{\ell}\cup\{i'\_{n\_0}\},\cdots,A\_{m};A'\_{0};i'\_{j}) := \sum\_{\substack{C\cap D=\emptyset,\ C\cup D=A'\_0\setminus\{i'\_j\}\\ C=\emptyset\text{ or }\max C<i'\_j}} (-1)^{|C|+1} L\_\ell^\*(A\_\ell\cup\{i'\_{n\_0},i'\_j\}\cup C)\,\mathbf{D}^\*\_{(-\ell)}(\cdots,A\_{\ell-1},A\_{\ell+1},\cdots;D),$$

since we have:

$$g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_{1},\cdots,A\_{\ell},\cdots,A\_{m};A'\_{0};i'\_{j}) = -P\left(\bigcap\_{i\in\{i'\_{j+1},\cdots,i'\_{n\_0-1}\}}\{X\_{i}\in\{0,\ell\}\}\cap\bigcap\_{\substack{i\in A\_{k}\\k=1,\cdots,m\\k\neq\ell}}\{X\_{i}=k\}\cap\_{i\in A\_{\ell}\cup\{i'\_{j}\}}\{X\_{i}=\ell\}\cap\_{i\in\{i'\_{0},\cdots,i'\_{j-1}\}}\{X\_{i}=0\}\right) < 0$$

by (16), and:

$$f\_{H\_{\ell},p\_{\ell},c\_{\ell}}(A\_{\ell};i'\_{j};i'\_{n\_{0}}) := \frac{g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_{1},\cdots,A\_{\ell},\cdots,A\_{m};A'\_{0};i'\_{j})}{g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_{1},\cdots,A\_{\ell}\cup\{i'\_{n\_{0}}\},\cdots,A\_{m};A'\_{0};i'\_{j})} > 1. \tag{19}$$

The last inequality is due to the fact that:

$$\frac{g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_{1},\cdots,A\_{\ell},\cdots,A\_{m};A'\_{0};i'\_{j})}{g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_{1},\cdots,A\_{\ell}\cup\{i'\_{n\_{0}}\},\cdots,A\_{m};A'\_{0};i'\_{j})} = \frac{\sum\_{\substack{C\cap D=\emptyset,\ C\cup D=A'\_0\setminus\{i'\_j\}\\ C=\emptyset\text{ or }\max C<i'\_j}} (-1)^{|C|+1} L\_\ell^\*(A\_\ell\cup\{i'\_j\}\cup C)\,\mathbf{D}^\*\_{(-\ell)}(\cdots;D)}{\sum\_{\substack{C\cap D=\emptyset,\ C\cup D=A'\_0\setminus\{i'\_j\}\\ C=\emptyset\text{ or }\max C<i'\_j}} (-1)^{|C|+1} L\_\ell^\*(A\_\ell\cup\{i'\_{n\_0},i'\_j\}\cup C)\,\mathbf{D}^\*\_{(-\ell)}(\cdots;D)},$$

and for any set $C$ such that $\max C < i'\_j$ or $C = \emptyset$,

$$\frac{L\_\ell^\*(A\_\ell \cup \{i\_j'\} \cup \mathbb{C})}{L\_\ell^\*(A\_\ell \cup \{i\_{n\_0}', i\_j'\} \cup \mathbb{C})} = \frac{L\_\ell^\*(A\_\ell \cup \{i\_j'\})}{L\_\ell^\*(A\_\ell \cup \{i\_{n\_0}', i\_j'\})} > 1$$

by (11). More specifically,

$$f\_{H\_{\ell},p\_{\ell},c\_{\ell}}(A\_{\ell};i'\_{j};i'\_{n\_{0}}) = \frac{L\_{\ell}^{\*}(A\_{\ell}\cup\{i'\_{j}\}\cup C)}{L\_{\ell}^{\*}(A\_{\ell}\cup\{i'\_{n\_{0}},i'\_{j}\}\cup C)} = \frac{L\_{\ell}^{\*}(\{i\_{\ell,j,1},i\_{\ell,j,2}\})}{L\_{\ell}^{\*}(\{i\_{\ell,j,1},i\_{\ell,j,2},i'\_{n\_{0}}\})}, \tag{20}$$

where $i\_{\ell,j,1}, i\_{\ell,j,2}$ are the two closest elements to $i'\_{n\_0}$ among $A\_\ell \cup \{i'\_j\}$. That is, $i\_{\ell,j,1}, i\_{\ell,j,2} \in A\_\ell \cup \{i'\_j\}$ are the two closest elements to $i'\_{n\_0}$ such that if $\min A\_\ell \cup \{i'\_j\} < i'\_{n\_0} < \max A\_\ell$, then $i\_{\ell,j,1} < i'\_{n\_0} < i\_{\ell,j,2}$, and if $i'\_{n\_0} > \max A\_\ell \cup \{i'\_j\}$, then $i\_{\ell,j,1} < i\_{\ell,j,2} < i'\_{n\_0}$.

$$\frac{L\_{\ell}^{\*}(\{i\_{\ell,j,1},i\_{\ell,j,2}\})}{L\_{\ell}^{\*}(\{i\_{\ell,j,1},i\_{\ell,j,2},i'\_{n\_0}\})} = \begin{cases} \frac{p\_{\ell}+c\_{\ell}|i\_{\ell,j,1}-i\_{\ell,j,2}|^{2H\_{\ell}-2}}{(p\_{\ell}+c\_{\ell}|i\_{\ell,j,1}-i'\_{n\_0}|^{2H\_{\ell}-2})(p\_{\ell}+c\_{\ell}|i'\_{n\_0}-i\_{\ell,j,2}|^{2H\_{\ell}-2})} & \text{if } \min A\_{\ell}\cup\{i'\_{j}\} < i'\_{n\_0} < \max A\_{\ell}\cup\{i'\_{j}\}, \\ \frac{1}{p\_{\ell}+c\_{\ell}|i\_{\ell,j,2}-i'\_{n\_0}|^{2H\_{\ell}-2}} & \text{if } i'\_{n\_0} > \max A\_{\ell}\cup\{i'\_{j}\}, \end{cases}$$

which is non-increasing as $j$ increases, since $i'\_j < i'\_{n\_0}$. Therefore, $f\_{H\_\ell,p\_\ell,c\_\ell}(A\_\ell; i'\_j; i'\_{n\_0})$ is non-increasing as $j$ increases. Also, for fixed $j$ and $C$ such that $\max C < i'\_j$ or $C = \emptyset$,

$$\frac{L\_{\ell}^{\*}\left(A\_{\ell}\cup\{i'\_{n\_{0}},i'\_{j}\}\cup C\right)}{L\_{\ell}^{\*}\left(A\_{\ell}\cup\{i'\_{j}\}\cup C\right)}\geq\frac{L\_{\ell}^{\*}\left(A\_{\ell}\cup\{i'\_{n\_{0}}\}\right)}{L\_{\ell}^{\*}\left(A\_{\ell}\right)}\tag{21}$$

by the fact that $\frac{L^\*(A\cup\{i\})}{L^\*(A)}$ is non-decreasing as the set $A$ increases.

Combining the above facts with (17) and (18), and by *i* of Lemma 2,

$$\frac{L\_{\ell}^{\*}(A\_{\ell})}{L\_{\ell}^{\*}(A\_{\ell} \cup \{i'\_{n\_{0}}\})} \leq \frac{\mathbf{D}^{\*}(\cdots, A\_{\ell}, \cdots; A'\_{0})}{\mathbf{D}^{\*}(\cdots, A\_{\ell} \cup \{i'\_{n\_{0}}\}, \cdots; A'\_{0})}.$$

Therefore,

$$\frac{\mathbf{D}^\*(A\_1, \cdots, A\_m; A'\_0)}{\sum\_{\ell=1}^m \mathbf{D}^\*(\cdots, A\_\ell \cup \{i'\_{n\_0}\}, \cdots; A'\_0)} \ge \frac{1}{\sum\_{\ell=1}^m \frac{L\_\ell^\*(A\_\ell \cup \{i'\_{n\_0}\})}{L\_\ell^\*(A\_\ell)}} > 1,$$

which proves (13) and,

$$\mathbf{D}^\*(A\_1, \cdots, A\_m; A\_0) > 0.$$

**Proof of Theorem 3.** a. Let *A*<sup>0</sup> = {*i*0, *i*1, ··· , *in*}. Note that:

$$\begin{split} P(X\_{i'\_1}=\ell \mid \cap\_{k=0,\cdots,m}\cap\_{i\in A\_k}\{X\_i=k\}) &= \frac{\mathbf{D}^{\*}(\cdots, A\_{\ell}\cup\{i'\_1\},\cdots;A\_0)}{\mathbf{D}^{\*}(A\_1,\cdots,A\_m;A\_0)} \\ &= \frac{L^{\*}\_{\ell}(A\_{\ell}\cup\{i'\_1\})\mathbf{D}^{\*}\_{(-\ell)}(\cdots,A\_{\ell-1},A\_{\ell+1},\cdots;A\_0) + \sum\_{j=0}^{n} g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(\cdots,A\_{\ell}\cup\{i'\_1\},\cdots;A\_0;i\_j)}{L^{\*}\_{\ell}(A\_{\ell})\mathbf{D}^{\*}\_{(-\ell)}(\cdots,A\_{\ell-1},A\_{\ell+1},\cdots;A\_0) + \sum\_{j=0}^{n} g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_1,\cdots,A\_m;A\_0;i\_j)}. \end{split}$$

Since,

$$\frac{g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_1,\cdots,A\_{\ell}\cup\{i'\_1\},\cdots,A\_{m};A\_0;i\_j)}{g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_1,\cdots,A\_{m};A\_0;i\_j)}$$

is non-decreasing as *j* increases, and by (19) and (20):

$$\frac{L\_{\ell}^{\*}(A\_{\ell}\cup\{i'\_{1}\})}{L\_{\ell}^{\*}(A\_{\ell})} \leq \frac{g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_{1},\cdots,A\_{\ell}\cup\{i'\_{1}\},\cdots,A\_{m};A\_{0};i\_{j})}{g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(A\_{1},\cdots,A\_{\ell},\cdots,A\_{m};A\_{0};i\_{j})}.$$

the result follows by *ii* of Lemma 2. b.

$$\begin{split} \frac{P(X\_{i'\_2}=\ell \mid \cap\_{k=0,\cdots,m}\cap\_{i\in A\_k}\{X\_i=k\})}{P(X\_{i'\_3}=\ell \mid \cap\_{k=0,\cdots,m}\cap\_{i\in A\_k}\{X\_i=k\})} &= \frac{\mathbf{D}^{\*}(\cdots,A\_{\ell}\cup\{i'\_2\},\cdots;A\_0)}{\mathbf{D}^{\*}(\cdots,A\_{\ell}\cup\{i'\_3\},\cdots;A\_0)} \\ &= \frac{L^{\*}\_{\ell}(A\_{\ell}\cup\{i'\_2\})\mathbf{D}^{\*}\_{(-\ell)}(\cdots,A\_{\ell-1},A\_{\ell+1},\cdots;A\_0)+\sum\_{j=0}^{n} g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(\cdots,A\_{\ell}\cup\{i'\_2\},\cdots;A\_0;i\_j)}{L^{\*}\_{\ell}(A\_{\ell}\cup\{i'\_3\})\mathbf{D}^{\*}\_{(-\ell)}(\cdots,A\_{\ell-1},A\_{\ell+1},\cdots;A\_0)+\sum\_{j=0}^{n} g\_{\mathbf{H},\mathbf{p},\mathbf{c}}(\cdots,A\_{\ell}\cup\{i'\_3\},\cdots;A\_0;i\_j)}. \end{split}$$

For fixed *j* and *C* such that max *C* < *i*<sub>*j*</sub>,

$$\frac{L^{\*}\_{\ell}(A\_{\ell}\cup\{i'\_2,i\_j\}\cup C)}{L^{\*}\_{\ell}(A\_{\ell}\cup\{i'\_3,i\_j\}\cup C)} \le \frac{L^{\*}\_{\ell}(A\_{\ell}\cup\{i'\_2\})}{L^{\*}\_{\ell}(A\_{\ell}\cup\{i'\_3\})},$$

and,

$$\frac{L^{\*}\_{\ell}(A\_{\ell}\cup\{i'\_2,i\_j\}\cup C)}{L^{\*}\_{\ell}(A\_{\ell}\cup\{i'\_3,i\_j\}\cup C)}$$

is non-increasing as *j* increases. Therefore, the result follows by *(i)* of Lemma 2.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


**Marius-F. Danca 1,2,\* and Jagan Mohan Jonnalagadda <sup>3</sup>**


**Abstract:** In this paper, it is shown that a class of discrete Piece Wise Continuous (PWC) systems with Caputo-type delta fractional difference may not have solutions. To overcome this obstacle, the discontinuous problem is restarted as a continuous fractional problem. First, the single-valued PWC problem is transformed into a set-valued one via Filippov's theory, after which Cellina's theorem allows the restart of the problem into a single-valued continuous one. A numerical example is proposed and analyzed.

**Keywords:** Caputo-type delta fractional difference; Cellina's theorem; discrete PWC system; discrete fractional PWC systems

#### **1. Introduction**

PWC real-valued functions *f* : *D* ⊆ R × R<sup>*n*</sup> → R<sup>*n*</sup>, which are time-continuous but discontinuous with respect to the state variable *x*, are defined on a finite domain *D* of an (*n* + 1)-dimensional (*t*, *x*) space, where *D* consists of a finite number of domains, *D*<sub>*i*</sub>, *i* = 1, 2, ..., *k*, in each of which *f* is continuous up to the boundary. We denote by M the discontinuity set containing the boundary points. The considered discontinuity is of the jump type: at the points of M, the function jumps (switches), and the left-hand and right-hand limits exist and are different. Inside the domains, *f* is continuous [1].

Throughout the paper, discontinuity is considered only with respect to the state variable. Dynamical systems modeled by this kind of PWC function appear in many different branches of engineering and applied sciences, such as dry friction, impacting machines, systems oscillating due to earthquakes, impacts in mechanical devices, power circuits, forced vibrations, elasto-plasticity, switching in electronic circuits, uncertain systems and many others (see, e.g., [2–6] and their references). The vast majority of such systems are defined by time-continuous Initial Value Problems (IVPs), modeled by ODEs of integer order. The numerical integration of such IVPs is a difficult task for which only special difference methods can be used (see, e.g., [4]); while the standard methods for continuous systems rely heavily on linearization, these special methods for PWC systems modeled by ODEs generally do not require it. Another difficulty is that the underlying IVP might not even admit solutions (see, e.g., [7]). Further, PWC systems can have trajectories colliding with the discontinuity surfaces, thereby generating a new kind of bifurcation [8–10]. To overcome the problem of having no solutions, tools of differential inclusions of integer order can provide a possible resolution (see, e.g., the method proposed in [11–14]), where the discontinuous single-valued IVP is transformed into a set-valued IVP. Next, to obtain a numerical solution, either special numerical schemes for differential inclusions [15–18] can be utilized or, via selection theory, continuous or even smooth approximations in the neighborhood of the discontinuity can be adopted [19–21].

On the other side, the main existing definitions of fractional order derivatives are based on the formulae presented by Caputo, Riemann–Liouville and Grünwald–Letnikov [22–25].

**Citation:** Danca, M.-F.; Jonnalagadda, J.M. On the Solutions of a Class of Discrete PWC Systems Modeled with Caputo-Type Delta Fractional Difference Equations. *Fractal Fract.* **2023**, *7*, 304. https://doi.org/ 10.3390/fractalfract7040304

Academic Editors: Libo Feng, Lin Liu and Yang Liu

Received: 9 February 2023 Revised: 27 March 2023 Accepted: 27 March 2023 Published: 30 March 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

However, the number of definitions based on these derivatives has become so large that it represents an obstacle to the diffusion of fractional calculus in Science and Engineering [26]. For practical applications, the Caputo derivative is one of the most commonly used derivatives for solving fractional differential equations because, compared to other non-integer derivatives, it has the considerable advantage of allowing differential equations to be coupled with classical initial conditions, as for differential equations of integer order.

Equally interestingly, PWC systems modeled by Fractional Differential Equations (FDEs) can be approached numerically via fractional differential inclusions [27]. Once the set-valued IVP is transformed into a single-valued IVP, it can be numerically integrated using one of the existing schemes for FDEs, such as the Adams–Bashforth–Moulton method [28] (for fractional differential inclusions, see [29–33] and references therein).

In recent years, discrete fractional calculus has gained considerable interest, and the study of ordinary difference equations is now widespread. However, the theory of fractional difference equations, a very new area, is still evolving [34]. The left and right Caputo fractional sums and differences, as well as their properties in relation to Riemann–Liouville differences, are studied in [34] (an early paper on the theory of fractional finite difference equations); initial value problems in discrete fractional calculus are analyzed in [34–39], with existence results for nonlinear fractional difference equations presented in [34,36,38,40–42]. For further reading on qualitative properties of fractional difference equations, see [35,43–45], and for applications of discrete fractional calculus, see [46,47].

Compared to PWC systems modeled by FDEs, which have been the subject of several works, there are no results on discrete PWC systems modeled by fractional differences. Therefore, we are motivated to propose a new class of fractional discrete PWC systems modeled by Caputo delta differences. The existence of solutions of the underlying IVPs is also studied. Moreover, because continuity is a required property for the existence of solutions of fractional difference equations (see, e.g., [36,48]), and because the systems modeled by the considered class of discrete PWC problems might have no solutions, a continuous approximation of the PWC function is proposed in this paper. One significant advantage of the considered Caputo fractional difference operator over other fractional difference operators is that it includes traditional initial and boundary conditions in the formulation of the problem; another is that the Caputo fractional difference of a constant is zero.

This paper is structured as follows: Section 2 presents the class of fractional discrete PWC systems, modeled by Caputo-type delta fractional differences, and studies the existence of their solutions. Section 3 deals with the approximation of the PWC function. Section 4 presents representative numerical simulations that underline the theoretical results. In the end, we give our conclusions.

#### **2. PWC Systems Modeled with Caputo-Type Delta Fractional Difference Equations**

In this paper, the considered systems are time-independent, i.e., autonomous systems. Let us first consider a PWC system modeled by the following FDE [11]

$$\begin{cases} D\_{\*}^q x = 2 - 3\,\mathrm{sgn}(x), \\ x(0) = x\_0, \end{cases} \tag{1}$$

where *D*<sup>*q*</sup><sub>∗</sub> is Caputo's derivative with starting point 0, *q* ∈ (0, 1), and the right-hand side is a jumping PWC function.

**Proposition 1.** *The IVP* (1) *has no classical solutions.*

**Proof.** In this case, *D*<sub>1</sub> := *D*<sub>−</sub> = (−∞, 0), *D*<sub>2</sub> := *D*<sub>+</sub> = (0, ∞) and *M* = {0}. Since *D*<sup>*q*</sup><sub>∗</sub>(0) = 0 ≠ 2 = 2 − 3 sgn(0), *x*(*t*) = 0 is not a solution. Therefore, there are no solutions starting from *x*(0) = 0. Further, if one chooses *x*<sub>0</sub> ∈ *D*<sub>+</sub>, there exists a solution *x*(*t*) = *x*<sub>0</sub> − *t*<sup>*q*</sup>/Γ(1 + *q*), but it is defined only on [0, *T*) with *T* = (*x*<sub>0</sub>Γ(1 + *q*))<sup>1/*q*</sup> and, because it tends to the line *x* = 0, cannot be extended onto intervals larger than [0, *T*). Similarly, for *x*<sub>0</sub> ∈ *D*<sub>−</sub>, the solution *x*(*t*) = *x*<sub>0</sub> + 5*t*<sup>*q*</sup>/Γ(*q* + 1) exists but, again, only on a finite interval [0, *T*′), with *T*′ = (|*x*<sub>0</sub>|Γ(1 + *q*)/5)<sup>1/*q*</sup>. In both cases, the obtained solutions tend to the line *x* = 0 but, as seen above, they halt at points *A*(*T*, 0) and *B*(*T*′, 0), respectively, and cannot be extended along this line (see Figure 1a, where *q* = 0.8 and *x*<sub>0</sub> = 0.2, *x*<sub>0</sub> = −0.3).
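The escape time in the proof can be verified numerically. The sketch below (an illustrative Python check, not part of the paper, with *q* = 0.8 and *x*<sub>0</sub> = 0.2 as in Figure 1a) confirms that the solution starting in *D*<sub>+</sub> reaches the line *x* = 0 at *t* = *T*:

```python
from math import gamma

q = 0.8    # fractional order, as in Figure 1a
x0 = 0.2   # initial condition chosen in D+ = (0, inf)

# For x0 in D+, sgn(x) = 1, so (1) reads D_*^q x = -1 and the solution is
# x(t) = x0 - t^q / Gamma(1 + q); it meets the line x = 0 at time T.
T = (x0 * gamma(1 + q)) ** (1 / q)

def x(t):
    return x0 - t ** q / gamma(1 + q)

print(T, x(T))  # x(T) vanishes: the solution cannot be extended past T
```

Past *T*, the right-hand side would switch branch, so no classical continuation exists.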

**Figure 1.** Equation (1); (**a**) Solutions for *q* = 0.8, and *x*<sub>0</sub> = 0.2 (blue) and *x*<sub>0</sub> = −0.3 (red). The solutions are defined only for *t* ∈ [0, *T*) and *t* ∈ [0, *T*′), respectively; (**b**) Incorrect solution obtained by numerically solving the non-approximated PWC problem.

Note that running this equation through some numerical scheme (such as an Adams–Bashforth–Moulton scheme for FDEs [28]), one can obtain a numerical result, but due to Proposition 1, it does not represent the correct numerical solution. For example, for *q* = 0.8, Figure 1b presents the "solution" for *x*<sub>0</sub> = 0. Due to the finite precision with which computers perform calculations (see Section 2.1), the utilized numerical method can pass through points *A* and *B*, i.e., at these points, *x*(*t*) is not (exactly) zero and, therefore, one obtains a wrong solution.

To overcome this obstacle, the problem has to be restarted as a differential inclusion and next as a continuous problem, which admits solutions (see details in [11–14]).

Some basic notions needed to introduce the class of discrete fractional PWC systems are presented next.

Denote N<sub>*c*</sub> = {*c*, *c* + 1, *c* + 2, ...} and N<sup>*d*</sup><sub>*c*</sub> = {*c*, *c* + 1, *c* + 2, ..., *d*}, for any real numbers *c* and *d* such that *d* − *c* ∈ N<sub>1</sub>.

**Definition 1.** *The Euler gamma function is defined by*

$$
\Gamma(z) = \int\_0^\infty e^{-t} t^{z-1} dt, \quad \Re(z) > 0.
$$

*Using its reduction formula, the Euler gamma function can also be extended to the half-plane* ℜ(*z*) ≤ 0*, except for z* ∈ {⋯, −2, −1, 0}*.*

**Definition 2.** *Assume u* : N<sup>*b*</sup><sub>*a*</sub> → R *and N* ∈ N<sub>1</sub>*. The first-order forward difference of u is defined by*

$$
\Delta u(t) = u(t+1) - u(t), \quad t \in \mathbb{N}\_a^{b-1},
$$

*and the Nth-order forward difference of u is defined recursively by*

$$
\Delta^N u(t) = \Delta \left( \Delta^{N-1} u(t) \right), \quad t \in \mathbb{N}\_a^{b-N}.
$$

*Finally,* Δ<sup>0</sup> *denotes the identity operator.*

**Definition 3.** *Let <sup>u</sup>* : <sup>N</sup>*<sup>a</sup>* <sup>→</sup> <sup>R</sup> *and <sup>q</sup>* <sup>&</sup>gt; <sup>0</sup>*. The <sup>q</sup>th-order delta fractional sum of <sup>u</sup> based on <sup>a</sup> is given by*

$$
\Delta\_a^{-q} u(t) = \frac{1}{\Gamma(q)} \sum\_{s=a}^{t-q} \frac{\Gamma(t-s)}{\Gamma(t-s-q+1)} u(s), \quad t \in \mathbb{N}\_{a+q}.
$$
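A direct transcription of this sum is a useful sanity check. The sketch below (an illustrative Python version, not from the paper, assuming 0 < *q* < 1 so all Gamma arguments stay positive) compares the fractional sum of *u* ≡ 1 with the known identity Δ<sub>*a*</sub><sup>−*q*</sup>1(*t*) = Γ(*t* − *a* + 1)/(Γ(*q* + 1)Γ(*t* − *a* − *q* + 1)):

```python
from math import gamma

def delta_frac_sum(u, a, q, n):
    """qth-order delta fractional sum of u based at a (Definition 3),
    evaluated at t = a + q + n; the sum runs over s = a, ..., t - q."""
    t = a + q + n
    total = 0.0
    for s in range(a, a + n + 1):
        total += gamma(t - s) / gamma(t - s - q + 1) * u(s)
    return total / gamma(q)

a, q, n = 0, 0.5, 4
t = a + q + n
lhs = delta_frac_sum(lambda s: 1.0, a, q, n)
rhs = gamma(t - a + 1) / (gamma(q + 1) * gamma(t - a - q + 1))
print(lhs, rhs)  # the two values agree
```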

**Definition 4.** *Let <sup>u</sup>* : <sup>N</sup>*<sup>a</sup>* <sup>→</sup> <sup>R</sup>*, <sup>q</sup>* <sup>&</sup>gt; <sup>0</sup>*, and <sup>q</sup>* <sup>∈</sup>/ <sup>N</sup>1*. The <sup>q</sup>th-order Caputo delta fractional difference of u based on a is given by*

$$
\Delta\_{a\*}^q u(t) = \Delta\_{a}^{-(N-q)} \left( \Delta^N u(t) \right), \quad t \in \mathbb{N}\_{a+N-q},
$$

*where N* = [*q*] + <sup>1</sup>*. If q* = *<sup>N</sup>* ∈ N1*, then*

$$
\Delta\_{a\*}^q u(t) = \Delta^N u(t), \quad t \in \mathbb{N}\_a.
$$
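For 0 < *q* < 1 (so *N* = 1), this definition composes the (1 − *q*)th-order fractional sum with the first-order forward difference. The sketch below (an illustrative Python check, not from the paper) uses that composition to confirm the property, recalled in the Introduction, that the Caputo-type delta difference of a constant is zero:

```python
from math import gamma

def caputo_delta_diff(u, a, q, n):
    """qth-order Caputo delta difference of u based at a (Definition 4),
    for 0 < q < 1 (N = 1), evaluated at t = a + (1 - q) + n: the
    (1 - q)th-order fractional sum of the first-order forward difference."""
    du = lambda s: u(s + 1) - u(s)   # Definition 2
    t = a + (1 - q) + n
    total = 0.0
    for s in range(a, a + n + 1):
        total += gamma(t - s) / gamma(t - s - (1 - q) + 1) * du(s)
    return total / gamma(1 - q)

# The Caputo-type delta difference of a constant is zero.
vals = [caputo_delta_diff(lambda s: 7.0, 0, 0.6, n) for n in range(5)]
print(vals)  # all zeros
```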

Consider now the class of fractional PWC systems with Caputo-type delta fractional difference, modeled by the following IVP

$$\begin{cases} \Delta\_\*^q x(n) = f(x(n+q-1)), \quad n \in \mathbb{N}\_{1-q}, \\ x(0) = x\_0, \end{cases} \tag{2}$$

where N<sub>1−*q*</sub> = {1 − *q*, 2 − *q*, 3 − *q*, ⋯}, Δ<sup>*q*</sup><sub>∗</sub> represents the *q*th fractional Caputo-like difference in the usual case of a zero starting point *a* = 0, with *q* ∈ (0, 1) [34], and *f* is a jump discontinuous scalar function of the following form

$$f(\mathbf{x}) = \begin{cases} f\_1(\mathbf{x}), \mathbf{x} \in (-\infty, a], \\ f\_2(\mathbf{x}), \mathbf{x} \in (a, \infty). \end{cases} \tag{3}$$

Functions *f*<sub>1</sub> and *f*<sub>2</sub> are continuous on their domains, with *f*<sub>1</sub>(*a*) ≠ *f*<sub>2</sub>(*a*).

If the solution of the fractional IVP (2) exists, it can be found with the following integral [34,41] (see [38] for <sup>∇</sup>*<sup>q</sup>* difference equations)

$$\mathbf{x}(n) = \mathbf{x}(0) + \frac{1}{\Gamma(q)} \sum\_{r=1-q}^{n-q} \frac{\Gamma(n-r)}{\Gamma(n-r-q+1)} f(\mathbf{x}(r+q-1)), \quad n \in \mathbb{N}\_0. \tag{4}$$

To obtain a form convenient for computation, consider in (4) the substitution *r* + *q* = *s*. Then, an iterative numerical form of Equation (4) is given by

$$\mathbf{x}(n) = \mathbf{x}(0) + \frac{1}{\Gamma(q)} \sum\_{s=1}^{n} \frac{\Gamma(n - s + q)}{\Gamma(n - s + 1)} f(\mathbf{x}(s - 1)), \quad n \in \mathbb{N}\_0. \tag{5}$$
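In code, the iterative form (5) is a direct double loop over the orbit history; the sketch below is an illustrative Python version (the paper's own experiments use Matlab):

```python
from math import gamma

def solve_frac(f, x0, q, N):
    """Iterate the numerical form (5): each new state is x(0) plus the
    Gamma-weighted sum of f over the entire orbit history."""
    x = [x0]
    for n in range(1, N + 1):
        acc = 0.0
        for s in range(1, n + 1):   # full memory: s = 1, ..., n
            acc += gamma(n - s + q) / gamma(n - s + 1) * f(x[s - 1])
        x.append(x0 + acc / gamma(q))
    return x

# Sanity check: for q = 1 every weight equals 1, so with f = 1 the
# orbit reduces to x(n) = x0 + n.
orbit = solve_frac(lambda x: 1.0, 0.0, 1.0, 5)
print(orbit)  # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```

The quadratic cost in the number of steps reflects the memory property of fractional differences: every past state re-enters every new sum.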

The example considered in this paper is a fractional order variant of the model presented in [49], with the right-hand side

$$f(\mathbf{x}) = \begin{cases} m - px^2, \mathbf{x} \in ( -\infty, 0], \\ 1 - px^2, \mathbf{x} \in (0, \infty), \end{cases} \tag{6}$$

with *D*<sub>1</sub> := *D*<sub>−</sub> = (−∞, 0] and *D*<sub>2</sub> := *D*<sub>+</sub> = (0, ∞), *p* is a real parameter, *m* ∈ (0, 1) and M = {0}. For all considered *m* values, function *f* has a jump discontinuity at *x* = 0, *f*<sub>1</sub>(0) = *m* ≠ 1 = *f*<sub>2</sub>(0).

The PWC (2) becomes

$$\begin{array}{ll} \Delta\_\*^q x(n) = \left\{ \begin{array}{ll} m - px(n + q - 1)^2, & x(n + q - 1) \in (-\infty, 0], \\ 1 - px(n + q - 1)^2, & x(n + q - 1) \in (0, \infty), \end{array} \right. \\ x(0) = x\_0, \quad n \in \mathbb{N}\_{1 - q}. \end{array} \tag{7}$$

Like in the case of the time-continuous PWC system (1), if one considers *x*<sub>0</sub> = 0 ∈ *D*<sub>−</sub>, one can see that, in this case, Equation (2) is not verified because Δ<sup>*q*</sup><sub>∗</sub>(0) = 0 ≠ *m* = *m* − *p* × 0. For other values *x*<sub>0</sub> ≠ 0, it is possible that after some iterations, the solution reaches the line *x* = 0, which cannot belong to the solution (see the case of system (1)).

The same situation can happen in the case of the general IVP (2) with *f* given by (3): it is possible that, for some *x*<sub>0</sub>, the orbit crosses the line *x* = *a*, which does not verify the equation. Therefore, it can be deduced that the IVP (2) with *f* given by (3) might have no solutions.

**Remark 1.** *It is possible that, for some set of parameters and x*<sup>0</sup> *and q, the orbit does not cross line x* = *a and remains in the same domain of x*<sup>0</sup> *(either D*<sup>−</sup> *or D*+*) when the IVP admits solutions (see the example in Figure 4d, Section 4).*

#### *2.1. Computational Approach*

Theoretically, it has been shown that the solution to IVP (2) with *f* given by (3) can reach the line *x* = 0, i.e., *x*(*n*) becomes 0, where the problem has no solution. However, a numerical method is an approximation that can be stable (meaning that it tends to reduce rounding errors) or unstable (meaning that rounding errors are magnified); therefore, very often, there are both stable and unstable solutions for a problem [50]. Further, in computer hardware, a value is not necessarily exactly computed, and the loss in precision can be inevitable. Moreover, any numeric representation is limited to finite precision: even operating on decimals with 100,000,000 digits, able to distinguish values that differ in their hundred millionth decimal place, there would still be an infinite number of values that could not be exactly represented. In other words, consider the Pigeonhole Principle, or Dirichlet drawer principle, a simple yet powerful idea in mathematics, which says that if you have *n* items to put into *m* containers with *n* > *m*, then at least one container must hold more than one item [51–53]. Hence, with an *N*-bit representation of numbers, at most 2<sup>*N*</sup> different numbers can be represented, and all other numbers cannot exist exactly within that system. Therefore, it is easy to understand that numerical schemes, such as integral (5), will not precisely meet the zero value, where *x*(*n*) = 0 cannot be a solution. Namely, the scheme will either exceed the discontinuity or return to a previous value.
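The finite-precision effect described above can be reproduced with a tiny experiment (illustrative Python; the step 0.1 is an arbitrary choice):

```python
# Stepping down by 0.1 "should" land exactly on zero after ten steps,
# but 0.1 has no exact binary floating-point representation, so the
# iterate straddles the discontinuity instead of meeting it.
x = 1.0
hit_zero = False
for _ in range(20):
    x -= 0.1
    if x == 0.0:
        hit_zero = True
print(hit_zero, x)  # hit_zero is False; x has drifted past zero
```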

#### **Remark 2.**

*(i) Similar situations can arise in integer-order discrete systems, such as the system in [49], defined by the following IVP*

$$\begin{cases} x(n+1) = f(x(n)), & n \in \mathbb{N}, \\ x(0) = x\_0, \end{cases}$$

*where f is some PWC function defined by* (3)*;*

*(ii) There are PWC systems with jump discontinuity for which f is not defined at x* = *a (see, e.g., [54]). In these cases, after some number of iterations, n* = *k, in the internal representation, as shown above, it is possible for x*(*k*) *to enter a sufficiently small neighborhood of a, where x*(*k*) *cannot be determined, and the software considers an unpredictable value for x*(*k*)*.*

In conclusion, to overcome this inconvenience, the continuous (even smooth) approximation of the PWC problem is proposed so that the underlying problem admits a solution.

### **3. Continuous Approximation of Map** *f*

In this section, it is briefly shown how the PWC function, *f* , defined by (3), can be continuously approximated using Filippov's approach [1,19,20] (details on the approximation algorithm can be found in [11,14]).

The PWC function *f* is transformed into a set-valued map *F* : R ⇒ R via the Filippov regularization [1], which is a map from R to the set of subsets of R. *F* can be defined in several ways. One of the simplest forms of *F* is defined as follows

$$F(x) = \bigcap\_{\varepsilon > 0} \bigcap\_{\mu(\mathcal{M}) = 0} \overline{\operatorname{co}}\big(f(\{y \in \mathbb{R} \setminus \mathcal{M} : |y - x| \le \varepsilon\})\big), \tag{8}$$

where *ε* is the radius of the ball centered on *x*. At the points where *f* is continuous, *F*(*x*) consists of one single point, i.e., *F*(*x*) = { *f*(*x*)}, while at the points *x* ∈ M, *F*(*x*) is given by (8). The set-valued function *F* defined by (8) has values in the convex subsets of R.

To justify the use of the Filippov regularization in physical systems, the value of *ε* must be chosen to be small enough so that the motion of the physical systems approaches a certain solution (ideally, it coincides with the solution if *ε* → 0).

In the sketch in Figure 2a, the graph of a set-valued function *F* is plotted, while in Figure 2b, the closure of the convex hull is plotted in blue. The values of *F*(*x*) for *x* = *x*<sup>1</sup> and *x* = *x*<sup>3</sup> are segments, while at *x* = *x*2, *F*(*x*2) is a single point (see Condition *γ* in [1] p. 68).

**Figure 2.** (**a**) Sketch of a set-valued function *F*; (**b**) The convex hull of *F* (blue plot) and the values of *F* at points *x*1, *x*<sup>2</sup> and *x*3.

For function *f* given by (6), with the discontinuity set M = {0} and for *m* = 0.6 and *p* = 1.5, the graph is presented in Figure 3a. Consider an *ε*-neighborhood of *x* = 0. For clarity of the graphical exposition, the radius of the neighborhood is taken as *ε* = 0.1 (Figure 3b). The set-valued map *F* : R ⇒ R defined by (8) is

$$F(\mathbf{x}) = \begin{cases} f\_1(\mathbf{x}), & \mathbf{x} < \mathbf{0}, \\ \text{[A',B']}, & \mathbf{x} = \mathbf{0}, \\ f\_2(\mathbf{x}), & \mathbf{x} > \mathbf{0}, \end{cases} \tag{9}$$

where *A*′ and *B*′ are the endpoints of the vertical segment at *x* = 0, and *A* and *B* are the intersections of the graph of *f* with the lines *x* = −*ε* and *x* = *ε*.

**Figure 3.** (**a**) Graph of the discontinuous function *f* for *m* = 0.6 and *p* = 1.5; (**b**) The underlying set-valued function *F* (green plot) defined on the neighborhood [−*ε*,*ε*] (yellow) together with a continuous selection *g* connecting points *A* and *B* (red); (**c**) The graph of the obtained smooth function ˜ *<sup>f</sup>* : R → R.

With the Filippov regularization, the fractional discrete difference (2) is restarted as a set-valued fractional discrete difference (inclusion)

$$\begin{cases} \Delta\_\*^q x(n) \in F(x(n+q-1)), & \text{for almost all } n \in \mathbb{N}\_{1-q}, \\ x(0) = x\_0. \end{cases} \tag{10}$$

which is identical to (2) for those values of *x* for which *F*(*x*) = {*f*(*x*)}. For *x* = 0, points *A* and *B* (Figure 3b) become *A*′(0, *m*) and *B*′(0, 1), respectively, and system (7) transforms into the fractional difference inclusion Δ<sup>*q*</sup><sub>∗</sub>(0) ∈ [*m*, 1], i.e., Δ<sup>*q*</sup><sub>∗</sub>(0) could take any value within the segment [*m*, 1].

Solutions to the set-valued IVP (10) (absolutely continuous functions satisfying (10) for almost all *<sup>n</sup>* <sup>∈</sup> <sup>N</sup>1−*q*) are not considered here (see, e.g., [1]).

**Definition 5.** *A single-valued function <sup>h</sup>* : R → R *is called the approximation (selection) of the set-valued function F if h*(*x*) ∈ *<sup>F</sup>*(*x*)*, for all x* ∈ R*.*

**Definition 6.** *A set-valued function F* : R ⇒ R *is upper semicontinuous at x*<sub>0</sub> ∈ R *if, for any open set B containing F*(*x*<sub>0</sub>)*, there exists a neighborhood A of x*<sub>0</sub> *such that F*(*A*) ⊂ *B.*

It is said that F is upper semicontinuous if it is so at every *<sup>x</sup>*<sup>0</sup> ∈ R.

**Remark 3.** *A set-valued function satisfies a property if and only if its graph satisfies it (i.e., symmetric interpretation of a set-valued function as a graph [19]). Therefore, a set-valued function is said to be closed if and only if its graph is closed. Further, a set-valued function F* : R ⇒ R *whose graph is closed is upper semicontinuous [19] p. 42.*

Finding the approximations, which are locally Lipschitz, is allowed by the Approximate Selection Theorem (Cellina's Theorem), whose proof presents an explicit way (see [19] p. 84 and [20] p. 358) to construct the approximation.

It is easy to see that the set-valued function defined by (9) has a closed graph and, therefore, is upper semicontinuous and admits a continuous (even smooth) selection *g* (Figure 3b, red plot).

**Theorem 1** ([21] (see also [19]))**.** *Let F* : *X* ⇒ *Y be an upper semicontinuous function from a compact metric space X to a Banach space Y. If the values of F are nonempty and convex, then for every ε* > 0 *there exists a locally Lipschitz single-valued map g* : *X* → *Y such that*

$$\operatorname{Graph}(g) \subset B(\operatorname{Graph}(F), \varepsilon),$$

*and for every x* ∈ *X, g*(*x*) *belongs to the convex hull of the image of F.*

Next, the main result can be presented

**Theorem 2.** *The set-valued map F* : R ⇒ R*, defined by* (9)*, admits a locally Lipschitz selection <sup>g</sup>* : [−*ε*,*ε*] → R*.*

**Proof.** From Remark 3, it follows that *F* defined by (9) is upper semicontinuous, with nonempty convex values. Therefore, Theorem 1 applies.

One of the simplest continuous approximations of the discontinuous function *f* (6) is the cubic polynomial (there exists an infinity of smooth functions that approximate the PWC function *f*)

$$g(x) = c\_1 x^3 + c\_2 x^2 + c\_3 x + c\_4, \quad c\_i \in \mathbb{R}, \; i = 1, 2, 3, 4.$$

The approximation of function *f* (6) becomes

$$\tilde{f}(x) = \begin{cases} m - px^2, & x < -\varepsilon, \\ g(x), & x \in [-\varepsilon, \varepsilon], \\ 1 - px^2, & x > \varepsilon. \end{cases}$$

Since *f*<sup>1</sup> and *f*<sup>2</sup> in (6) are smooth, to define *g* as connecting points *A* and *B*, the following "gluing" conditions are to be set

$$\begin{array}{l} \tilde{f}(-\varepsilon) = g(-\varepsilon), \\ \tilde{f}(\varepsilon) = g(\varepsilon), \\ \tilde{f}'(-\varepsilon - 0) = g'(-\varepsilon + 0), \\ \tilde{f}'(\varepsilon + 0) = g'(\varepsilon - 0), \end{array}$$

which represents a system with unknowns *c*<sub>*i*</sub>, *i* = 1, 2, 3, 4. Here, ˜*f*′(±*ε* ∓ 0) and *g*′(±*ε* ± 0) denote the lateral limits of the derivatives of ˜*f* and *g* at ±*ε*. Note that the last two equations represent the smoothness conditions at points *A* and *B*.

Solving the system, one obtains

$$\begin{aligned} c\_1 &= \frac{m-1}{4\varepsilon^3}, \\ c\_2 &= -p, \\ c\_3 &= -\frac{3m-3}{4\varepsilon}, \\ c\_4 &= \frac{m+1}{2}. \end{aligned}$$
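These coefficients can be checked directly against the four gluing conditions; below is a short numerical verification (illustrative Python, with *ε* = 10<sup>−3</sup>, *m* = 0.92 and *p* = 1.556 as used in Section 4):

```python
eps, m, p = 1e-3, 0.92, 1.556   # illustrative values from Section 4

c1 = (m - 1) / (4 * eps**3)
c2 = -p
c3 = -(3 * m - 3) / (4 * eps)
c4 = (m + 1) / 2

g  = lambda x: c1 * x**3 + c2 * x**2 + c3 * x + c4
dg = lambda x: 3 * c1 * x**2 + 2 * c2 * x + c3   # g'

f1, df1 = lambda x: m - p * x**2, lambda x: -2 * p * x   # branch on (-inf, 0]
f2, df2 = lambda x: 1 - p * x**2, lambda x: -2 * p * x   # branch on (0, inf)

# Value matching at the gluing points ...
print(g(-eps) - f1(-eps), g(eps) - f2(eps))
# ... and slope matching (the smoothness conditions).
print(dg(-eps) - df1(-eps), dg(eps) - df2(eps))
```

All four differences are (numerically) zero, so the cubic joins the two branches with matching values and slopes.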

Finally, the obtained smooth discrete fractional system is

$$\Delta\_{\*}^{q} x(n) = \tilde{f}(x(n+q-1)) := \begin{cases} m - px(n+q-1)^{2}, & x(n+q-1) < -\varepsilon, \\ g(x(n+q-1)), & x(n+q-1) \in [-\varepsilon, \varepsilon], \\ 1 - px(n+q-1)^{2}, & x(n+q-1) > \varepsilon, \end{cases} \tag{11}$$
$$x(0) = x\_{0}, \quad n \in \mathbb{N}\_{1-q}.$$

The existence of the solution given by the following numerical integral

$$\mathbf{x}(n) = \mathbf{x}(0) + \frac{1}{\Gamma(q)} \sum\_{s=1}^{n} \frac{\Gamma(n-s+q)}{\Gamma(n-s+1)} \tilde{f}(\mathbf{x}(s-1)), \quad n \in \mathbb{N}\_0. \tag{12}$$

is ensured by the smoothness of the right-hand side of (11) [41,42,55].

To computationally implement integral (12), the entire orbit history (a main characteristic of fractional systems) must be taken into account. Therefore, inside the cycle that calculates the sum, at every step, *x*(*s* − 1) is tested to determine the domain to which it belongs.

#### **4. Dynamics of the Approximated Fractional System** (11)

Before studying the dynamics of fractional system (11), recall the following important result regarding continuous and discrete fractional systems [56,57].

**Theorem 3.** *Autonomous, continuous-time and discrete fractional systems cannot admit nonconstant exact periodic solutions.*

**Proof.** This result regarding continuous fractional systems modeled by fractional order differential equations is proven in [56], while for discrete fractional systems, it is proven in [57].

**Remark 4.** *Due to Theorem 3, periodicity cannot be considered in continuous or discrete fractional systems. Therefore, notions of stable cycles, bifurcation and even chaos (where unstable periodic orbits form the skeleton of chaotic dynamics) represent a delicate problem. Thus, following the definition given by, e.g., Wiggins in [58]: a non-constant solution x*(*t*) *of a system is periodic if there exists <sup>T</sup>* > 0 *such that <sup>x</sup>*(*t*) = *<sup>x</sup>*(*<sup>t</sup>* + *<sup>T</sup>*)*, for all <sup>t</sup>* ∈ R*, it follows that even using some asymptotic approach, one cannot obtain periodic orbits in fractional systems. Instead, one can consider* numerically periodic orbits *in the sense that the trajectory, from the numerical point of view, up to some small error, can be considered in the state phase as a closed orbit. However, there are particular cases when one can talk about periodicity in the case of continuous fractional systems when the lower terminal of the fractional derivative is* ±∞ *(see, e.g., [59]). Further, in the case of discrete fractional systems, there could exist S-asymptotically periodic orbits [57].*

To obtain the numerical results in this section, a Matlab code has been written.

Consider first the non-approximated system (7) with parameters *m* = 0.92, *p* = 1.556 and *q* = 0.6. Applying integral (5), one obtains the chaotic orbit presented in Figure 4a. The values of the orbit close to 0 are plotted in red. As can be seen, integral (5) gives a numerical result (orbit), and *x*(*n*) is close to 0 (see Section 2.1) at the *n*<sub>0</sub> ≈ 1200th iteration. However, as shown in Section 2, the result is not correct.

If one considers the approximated fractional system (11), with *ε* = 10<sup>−3</sup> and the same parameters *m* = 0.92, *p* = 1.556 and *q* = 0.6, the obtained correct chaotic orbit is presented in Figure 4b. Note that the approximation acts only in a relatively small neighborhood of the discontinuity. As can be seen in the images, as expected, the differences between the non-approximated and approximated cases appear only after the intersection with the line *x* = 0 and within the neighborhood of radius *ε*, respectively (see the vertical dotted red line at *n*<sub>0</sub> in Figure 4a,b). In the approximated case, the code also considers the function *g*; hence, once the orbit enters the neighborhood, for *n* > *n*<sub>0</sub>, the orbits are different.

A numerically periodic orbit (Remark 4) can be obtained for *q* = 0.6, *p* = 1.2 and *m* = 0.92, with *ε* = 10<sup>−3</sup> (Figure 4c).

An example where the orbit remains within one of the domains *D*<sub>−</sub> or *D*<sub>+</sub> is presented in Figure 4d: the numerically periodic orbit, obtained for *q* = 0.6, *p* = 0.9, *m* = 0.92, *ε* = 10<sup>−3</sup> and *x*<sub>0</sub> ∈ *D*<sub>+</sub>, remains in *D*<sub>+</sub>, while a numerically periodic orbit that visits both *D*<sub>−</sub> and *D*<sub>+</sub>, obtained for *ε* = 65 × 10<sup>−4</sup> and *q* = 0.71, is presented in Figure 4e. While the orbit in Figure 4d is not related to the discontinuity *x* = 0, in the case presented in Figure 4e, although the orbit does not seem to depend on the discontinuity, the transient meets the neighborhood of the discontinuity (red point).

Intensive numerical tests show that the smallest neighborhood size for which the orbits could be identified is of order *ε* = 10<sup>−3</sup>.

**Figure 4.** Orbits of the fractional systems (7) and (11); (**a**) Orbit of the fractional PWC system (7) with *q* = 0.6, *m* = 0.92 and *p* = 1.556, without approximation; (**b**) Orbit of the continuous fractional system (11) with *q* = 0.6, *m* = 0.92, *p* = 1.556 and *ε* = 10<sup>−3</sup>; (**c**) Numerically periodic orbit of the continuous fractional system (11) with *q* = 0.6, *m* = 0.92, *p* = 1.2 and *ε* = 10<sup>−3</sup>; (**d**) Numerically periodic orbit of (11) with *q* = 0.6, *m* = 0.92, *p* = 0.9 and *ε* = 10<sup>−3</sup>, situated in *D*<sub>+</sub>; (**e**) Numerically periodic orbit of the continuous fractional system (11) with *q* = 0.7, *m* = 0.95, *p* = 1.556 and *ε* = 65 × 10<sup>−4</sup>.

#### **5. Conclusions**

In this paper, it is shown that the fractional PWC systems (7) might have no solutions. Even if the use of the numerical integral (5) could offer a numerical solution, this could be incorrect. This characteristic is also explained computationally. A possible solution is to use the Cellina theorem, which allows the restarting of the PWC problem as a continuous one, where integral (5) can be applied.

**Author Contributions:** Conceptualization, M.-F.D.; methodology, M.-F.D.; software, M.-F.D.; validation, J.M.J.; formal analysis, M.-F.D. and J.M.J; investigation, M.-F.D. and J.M.J.; writing—original draft preparation, M.-F.D.; writing—review and editing, M.-F.D. and J.M.J.; visualization, J.M.J.; supervision, M.-F.D. and J.M.J. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We thank the reviewers for their valuable comments which improved our paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Asymptotic and Pinning Synchronization of Fractional-Order Nonidentical Complex Dynamical Networks with Uncertain Parameters**

**Yu Wang, Xiliang He \* and Tianzeng Li**

School of Mathematics and Statistics, Sichuan University of Science and Engineering, Zigong 643000, China; jdwangyu@suse.edu.cn (Y.W.); litianzeng@suse.edu.cn (T.L.)

**\*** Correspondence: 321070108109@stu.suse.edu.cn

**Abstract:** This paper is concerned with the asymptotic and pinning synchronization of fractional-order nonidentical complex dynamical networks with uncertain parameters (FONCDNUP). First of all, some synchronization criteria of FONCDNUP are proposed by using the stability of fractional-order dynamical systems and inequality theory. Moreover, a novel controller is derived by using the Lyapunov direct method and the differential inclusion theory. Next, based on the Lyapunov stability theory and pinning control techniques, a new group of sufficient conditions to assure the synchronization of FONCDNUP are obtained by adding controllers to the sub-nodes of the networks. Finally, two numerical simulations are used to illustrate the validity and rationality of the acquired results.

**Keywords:** pinning synchronization; nonidentical networks; uncertain parameters

#### **1. Introduction**

As is well known, complex networks appear almost everywhere and have been growing rapidly, with a wide range of applications. Over the past few decades, several results on the dynamical behavior of complex dynamical networks have been published, concerning, for example, chaos [1], bifurcation [2], stability [3], and dissipativity [4].

Fractional-order derivatives, as a generalization of integer-order derivatives, can describe natural phenomena more faithfully. Moreover, fractional-order derivatives have advantages over integer-order derivatives in capturing memory and hereditary properties. Additionally, they have a wide range of promising applications in secure communications [5], viscoelastic systems [6], power systems [7], robotics [8], and heat conduction [9]. Furthermore, real-world models, such as hydrodynamic [10] and biological models [11], can be better portrayed by fractional-order derivatives, so it is natural to introduce fractional-order derivatives into complex networks. Fractional-order complex networks (FOCN) can be seen as an important extension of traditional integer-order complex networks; they have excellent modeling capabilities and are well suited to helping researchers in physics, engineering, and interdisciplinary areas simulate a variety of materials and systems with long-time memory and hereditary properties [12–14]. It is worth noting that dynamical characteristics such as the synchronization of FOCN occupy an important position in applications and are gradually gaining attention. Therefore, theoretical and applied studies of FOCN are very important and interesting [15].

Dynamical phenomena in complex networks have been broadly studied, and synchronization is one of the most critical dynamic activities among them. In reality, synchronization, as a basic natural activity, has been extensively studied in different fields. The synchronization of FOCN, as an interesting and essential dynamical behavior, has been studied by many scholars, with applications ranging over unmanned ground vehicles [16], cryptography [17], and image encryption [18]. Hence, there has been a great deal of research on synchronization [19,20].

**Citation:** Wang, Y.; He, X.; Li, T. Asymptotic and Pinning Synchronization of Fractional-Order Nonidentical Complex Dynamical Networks with Uncertain Parameters. *Fractal Fract.* **2023**, *7*, 571. https://doi.org/10.3390/fractalfract7080571

Academic Editors: Gani Stamov, Libo Feng, Lin Liu and Yang Liu

Received: 20 May 2023 Revised: 23 July 2023 Accepted: 23 July 2023 Published: 25 July 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The synchronization of FOCN by specific control strategies is an increasingly important issue in the control of FOCN. Many strategies have been proposed for synchronous control [21,22], and they are widely applied in different areas. However, because a FOCN is made up of many nodes, these control strategies are difficult and costly to implement, as they require moment-by-moment control of every node. They require constant activation of the control inputs, which can waste energy. Pinning control is therefore a more efficient control method, because it synchronizes the network by controlling a small portion of the nodes instead of all of them [23].

Most of the models studied in the current works have identical parameters. However, due to the complexity of the real world, the parameters of the drive and response systems are hardly ever identical. Therefore, a nonidentical network with different parameters for the drive system and the response system is more realistic [24].

In reality, parameter uncertainty is inevitable: we cannot always obtain the precise values of the parameters used in modeling a real system, and there may also be unknown topology and modeling errors. Uncertain parameters can degrade the dynamic behavior of the system, causing decreased performance, prolonged synchronization time, and even destabilization of the trajectory. Therefore, it is important to consider the effects of parameter uncertainty, and incorporating it into the network model is essential.

For instance, in [25], the authors utilized the comparison theorem and the Lyapunov method to derive conditions for the synchronization of FOCN with multiple time delays and parameter uncertainty. Additionally, in [26], they employed the homogeneous embryonic principle and inequality techniques to establish two criteria, one independent of time delay and one dependent on time delay, in order to ensure the accuracy of the conclusions. Apart from that, in [27], the authors obtained conditions for achieving projection synchronization by modeling fractional-order T-S fuzzy neural networks with uncertain parameters and applying system stability theory and matrix inequality techniques. In Ref. [28], a new sliding-mode surface controller for nonidentical networks was designed. In Ref. [29], some sufficient conditions were derived to achieve global Mittag–Leffler projection synchronization of the processing model, using the Lyapunov method and the Razumikhin technique to drive the states to a specified sliding surface for sliding motion. In Ref. [30], the synchronization of FOCN with delays under adaptive control was achieved using inequality theory and the comparison principle of linear fractional equations with delays. In order to achieve asymptotic synchronization of uncertain FOCN, an adaptive pinning controller was designed in Ref. [31]. In Ref. [32], it was demonstrated that uncertain fractional-order T-S fuzzy complex networks are stable under the designed controller, reducing the impact of coupled time-varying and uncertainty perturbations on the tracking error. However, there is a scarcity of relevant studies on the pinning synchronization of FOCN with parameter uncertainty, making it a worthwhile area for exploration.

Motivated by the above discussions, this paper mainly considers the asymptotic and pinning synchronization of FONCDNUP. The main contributions are as follows:

(1) By employing stability and the inequality theory of fractional dynamical systems, a new criterion for the synchronization of FONCDNUP is discovered.

(2) By utilizing the Lyapunov direct method and pinning control theory, a novel pinning controller is designed.

(3) Since there is limited research on pinning control in FOCN with parameter uncertainty, the previous studies are extended.

The remainder of the paper is organized as follows: Section 2 provides the preliminaries; Section 3 presents some sufficient conditions for the asymptotic and pinning synchronization of FOCN with uncertain parameters; the effectiveness of the obtained results is verified through simulations in Section 4; and finally, Section 5 concludes the paper and offers prospects for future research.

#### **2. Preliminaries**

**Definition 1** ([33])**.** *The α-order fractional integral of the function* h̃(*t*) *is defined as*

$$I_t^{\alpha}\tilde{\hbar}(t) = \frac{1}{\Gamma(\alpha)} \int_{t_0}^t (t - s)^{\alpha - 1} \tilde{\hbar}(s)\,ds,\tag{1}$$

*where α* > 0*, t*<sub>0</sub> *is the initial time, t* ≥ *t*<sub>0</sub>*, and* Γ(·) *is the Euler gamma function.*
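As a quick numerical sanity check of Definition 1, the sketch below (Python, illustrative only; the function name `frac_integral` and the product-integration discretization are our choices, not from the paper) approximates the Riemann–Liouville integral and compares it with the closed form I^α t = t^{α+1}/Γ(α + 2):

```python
import math

def frac_integral(h, t, alpha, n=2000, t0=0.0):
    """Riemann-Liouville fractional integral I^alpha h(t) of Eq. (1),
    by product integration: h is sampled at sub-interval midpoints and
    the weakly singular kernel (t - s)^(alpha - 1) is integrated exactly."""
    dt = (t - t0) / n
    total = 0.0
    for k in range(n):
        sk, sk1 = t0 + k * dt, t0 + (k + 1) * dt
        # exact value of the kernel integral over [sk, sk1]
        w = ((t - sk) ** alpha - (t - sk1) ** alpha) / alpha
        total += h(0.5 * (sk + sk1)) * w
    return total / math.gamma(alpha)

# Closed form for comparison: I^alpha t = t^(alpha + 1) / Gamma(alpha + 2)
alpha, t = 0.5, 2.0
approx = frac_integral(lambda s: s, t, alpha)
exact = t ** (alpha + 1) / math.gamma(alpha + 2)
print(abs(approx - exact))  # small discretization error
```

Integrating the kernel exactly on each sub-interval avoids the endpoint singularity that would break a naive trapezoidal rule when α < 1.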

**Definition 2** ([33])**.** *The α-order Caputo fractional derivative of the function* h̃(*t*) *is defined as follows:*

$${}_{t_0}^{C}D_t^{\alpha}\tilde{\hbar}(t) = \frac{1}{\Gamma(\hat{k} - \alpha)} \int_{t_0}^t (t - \upsilon)^{\hat{k} - \alpha - 1} \tilde{\hbar}^{(\hat{k})}(\upsilon)\,d\upsilon,\tag{2}$$

*where* k̂ − 1 < *α* < k̂*,* k̂ ∈ Z<sup>+</sup>*, and t*<sub>0</sub> *is the initial time. In particular, when* 0 < *α* < 1*,*

$${}_{t_0}^{C}D_t^{\alpha}\tilde{\hbar}(t) = \frac{1}{\Gamma(1-\alpha)} \int_{t_0}^t (t - \upsilon)^{-\alpha} \tilde{\hbar}'(\upsilon)\,d\upsilon.\tag{3}$$
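Definition 2 can likewise be checked numerically. The sketch below uses the standard L1 discretization of the Caputo derivative (an assumption on our part; the paper does not prescribe a scheme) and compares it with the known value D^α t² = 2t^{2−α}/Γ(3 − α):

```python
import math

def caputo_l1(h, t, alpha, n=2000):
    """L1 discretization of the Caputo derivative in Eq. (3), 0 < alpha < 1:
    h' is taken piecewise-constant on each step and the kernel is
    integrated exactly, giving the classical L1 weights b_k."""
    dt = t / n
    acc = 0.0
    for k in range(n):
        b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        acc += b * (h((n - k) * dt) - h((n - k - 1) * dt))
    return acc * dt ** (-alpha) / math.gamma(2 - alpha)

# Known value for comparison: D^alpha t^2 = 2 t^(2 - alpha) / Gamma(3 - alpha)
alpha, t = 0.6, 1.5
approx = caputo_l1(lambda s: s * s, t, alpha)
exact = 2 * t ** (2 - alpha) / math.gamma(3 - alpha)
print(abs(approx - exact))  # O(dt^(2 - alpha)) error
```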

**Lemma 1** ([34])**.** *For all* 0 < *α* < 1*, if* h̃(*t*) ∈ *C*<sup>1</sup>([*t*<sub>0</sub>, +∞), R)*, then*

$${}_{t_0}^{C}D_t^{\alpha}|\tilde{\hbar}(t)| \le \operatorname{sign}(\tilde{\hbar}(t))\,{}_{t_0}^{C}D_t^{\alpha}\tilde{\hbar}(t),\tag{4}$$

*where t* ≥ *t*0*, and t*<sup>0</sup> *is the initial time.*

**Lemma 2** ([35])**.** *Let P* ∈ R<sup>*n*×*n*</sup> *be a positive-definite matrix. Then, for all vectors ϖ*, *s* ∈ R<sup>*n*</sup>*, the following inequality holds:*

$$
\varpi^\top s \le \frac{1}{2} \varpi^\top P \varpi + \frac{1}{2} s^\top P^{-1} s. \tag{5}
$$
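Lemma 2 is a matrix Young-type inequality; for a diagonal positive-definite *P* it reduces to the scalar bound ϖ<sub>i</sub>s<sub>i</sub> ≤ ½p<sub>i</sub>ϖ<sub>i</sub>² + ½s<sub>i</sub>²/p<sub>i</sub>. A minimal randomized check (Python, illustrative only; the diagonal entries are hypothetical):

```python
import random

# Sanity check of Lemma 2 with a diagonal positive-definite P:
#   w^T s <= (1/2) w^T P w + (1/2) s^T P^{-1} s
random.seed(0)
p = [2.0, 0.5, 3.0]                      # diagonal of P (all > 0)
for _ in range(1000):
    w = [random.uniform(-5, 5) for _ in range(3)]
    s = [random.uniform(-5, 5) for _ in range(3)]
    lhs = sum(wi * si for wi, si in zip(w, s))
    rhs = 0.5 * sum(pi * wi * wi for pi, wi in zip(p, w)) \
        + 0.5 * sum(si * si / pi for pi, si in zip(p, s))
    assert lhs <= rhs + 1e-12
print("Lemma 2 holds on all random samples")
```

Per coordinate, the bound follows from (√p<sub>i</sub> ϖ<sub>i</sub> − s<sub>i</sub>/√p<sub>i</sub>)² ≥ 0.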

**Assumption 1.** *The activation functions* h̃<sub>*i*</sub>(·)*, i* = 1, 2, ··· , *n, satisfy the Lipschitz condition: there exists a positive diagonal matrix L* = *diag*(*l*<sub>1</sub>, *l*<sub>2</sub>, ··· , *l<sub>n</sub>*) *such that*

$$|\tilde{\hbar}_i(t, s) - \tilde{\hbar}_i(t, \varpi)| \le L_i |s - \varpi|,\tag{6}$$

*for all ϖ*, *s* ∈ R<sup>*n*</sup>.

#### **3. Main Results**

In this section, results on the asymptotic and pinning synchronization of FOCN with uncertain parameters are obtained by means of Lyapunov theory and inequality theory, and controllers are designed to ensure that synchronization can be realized.

#### *3.1. Asymptotic Synchronization for FONCDNUP*

In this subsection, systems (7) and (8) are synchronized by an appropriate controller. Consider the following FONCDNUP: the drive system is described as

$$
{}_{t_0}^{C}D_t^{\alpha}\varpi_i(t) = A_0 \varpi_i(t) + B_0 \hbar(\varpi_i(t)) + c \sum_{j=1}^{N} d_{ij} \Lambda \varpi_j(t), \tag{7}
$$

and the response system is described as

$${}_{t_0}^{C}D_t^{\alpha}s_i(t) = (E_0 + \Delta E(t)) s_i(t) + (G_0 + \Delta G(t)) \hbar(s_i(t)) + c \sum_{j=1}^{N} d_{ij} \Lambda s_j(t) + u_i(t),\tag{8}$$

where 0 < *α* < 1, *i* = 1, 2, ··· , *n*; ϖ<sub>*i*</sub>(*t*) and *s*<sub>*i*</sub>(*t*) are the states of the *i*-th node; *A*<sub>0</sub> and *E*<sub>0</sub> are constant matrices; ħ(ϖ<sub>*i*</sub>(*t*)) and ħ(*s*<sub>*i*</sub>(*t*)) denote continuous nonlinear functions; *B*<sub>0</sub> and *G*<sub>0</sub> stand for the weight matrices; Λ = diag(*ε*<sub>1</sub>, *ε*<sub>2</sub>, ··· , *ε*<sub>*n*</sub>) > 0 is the internal coupling matrix; (*d*<sub>*ij*</sub>)<sub>*n*×*n*</sub> is the outer coupling matrix, with *d*<sub>*ij*</sub> ≠ 0 if there is a link from node *i* to node *j* and *d*<sub>*ij*</sub> = 0 otherwise; Δ*E*(*t*) and Δ*G*(*t*) are the parametric uncertainties.

The vector of synchronization error (sync-error) is defined as

$$
e_i(t) = s_i(t) - \varpi_i(t). \tag{9}
$$

Based on (7) and (8), the sync-error system is as follows:

$$\begin{aligned} {}_{t_0}^{C}D_t^{\alpha}e_i(t) &= {}_{t_0}^{C}D_t^{\alpha}s_i(t) - {}_{t_0}^{C}D_t^{\alpha}\varpi_i(t) \\ &= (E_0 + \Delta E(t)) s_i(t) + (G_0 + \Delta G(t)) \hbar(s_i(t)) \\ &\quad - B_0 \hbar(\varpi_i(t)) - A_0 \varpi_i(t) + c \sum_{j=1}^{N} d_{ij} \Lambda e_i(t) + u_i(t), \end{aligned} \tag{10}$$

where *i* = 1, 2, ··· , *n*.


In order to obtain the main results, one makes the following assumption.

**Assumption 2.** *The parametric uncertainties* Δ*E*(*t*)*,* Δ*G*(*t*)*,* Δ*D*(*t*)*, and* Δ*Q*(*t*) *take the following forms:*

$$\begin{aligned} \Delta E(t) &= M_e F(t) H_e, \\ \Delta G(t) &= M_g F(t) H_g, \\ \Delta D(t) &= M_d F(t) H_d, \\ \Delta Q(t) &= M_q F(t) H_q, \end{aligned}$$

*where M<sub>e</sub>, M<sub>g</sub>, M<sub>d</sub>, M<sub>q</sub>, H<sub>e</sub>, H<sub>g</sub>, H<sub>d</sub>, and H<sub>q</sub> are diagonal matrices with appropriate dimensions, and the uncertain matrix F*(*t*) *satisfies F*<sup>⊤</sup>(*t*)*F*(*t*) ≤ *I, where I is the identity matrix.*
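The structure in Assumption 2 keeps each uncertainty inside a fixed envelope: since *F*<sup>⊤</sup>(*t*)*F*(*t*) ≤ *I*, in the diagonal case every entry of Δ*E*(*t*) = *M<sub>e</sub>F*(*t*)*H<sub>e</sub>* is bounded by the corresponding entry of *M<sub>e</sub>H<sub>e</sub>*. A small numerical illustration (the matrices and stand-in states below are hypothetical, not the paper's):

```python
import math

# Assumption 2 in miniature: with diagonal M_e, H_e and
# F(t) = diag(cos(w_1(t)), ...), F(t)^T F(t) <= I holds automatically,
# so Delta_E(t) = M_e F(t) H_e stays inside the envelope M_e H_e.
Me = [0.1, 0.5, 0.2, 0.3]
He = [0.2, 0.1, 0.2, 0.5]
for t in [0.0, 0.7, 1.9, 3.4]:
    F = [math.cos(t + k) for k in range(4)]       # stand-in states w_k(t)
    assert all(f * f <= 1.0 for f in F)           # F^T F <= I (diagonal case)
    dE = [m * f * h for m, f, h in zip(Me, F, He)]
    bound = [m * h for m, h in zip(Me, He)]
    assert all(abs(d) <= b + 1e-12 for d, b in zip(dE, bound))
print("Delta_E(t) bounded entrywise by M_e H_e")
```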

**Theorem 1.** *Under Assumptions 1 and 2, and for a scalar* 0 < *α* < 1*, the FONCDNUP can achieve asymptotic synchronization if the following conditions hold:*

*(i)* $\hat{\Upsilon} < 0$*;* *(ii)* $u_i(t) = -(E_0 + \Delta E(t) - A_0)\varpi_i(t) - (G_0 + \Delta G(t))\hbar(\varpi_i(t)) + B_0\hbar(\varpi_i(t)) - \delta_i e_i(t)$*, where*

$$\hat{\Upsilon} = E_0 + \frac{1}{2}M_eM_e^\top + \frac{1}{2}H_e^\top H_e + LG_0 + \frac{1}{2}LM_gM_g^\top + \frac{1}{2}LH_g^\top H_g + c\sum_{j=1}^{N} d_{ij}\Lambda - \delta_i.$$

**Proof.** Construct the following Lyapunov function:

$$V(t) = \sum_{i=1}^{n} |e_i(t)|,\tag{11}$$

then, taking the fractional derivative of *V*(*t*) by Lemma 1, we can obtain

$$\begin{aligned} {}_{t_0}^{C}D_t^{\alpha}V(t) &= {}_{t_0}^{C}D_t^{\alpha}\sum_{i=1}^{n}|e_i(t)| \\ &\le \sum_{i=1}^{n}\operatorname{sign}^\top(e_i(t))\,{}_{t_0}^{C}D_t^{\alpha}e_i(t) \\ &\le \sum_{i=1}^{n}\operatorname{sign}^\top(e_i(t))\big[(E_0+\Delta E(t))s_i(t) - A_0\varpi_i(t) - B_0\hbar(\varpi_i(t)) \\ &\quad + (G_0+\Delta G(t))\hbar(s_i(t)) + c\sum_{j=1}^{N}d_{ij}\Lambda e_i(t) + u_i(t)\big] \\ &\le \sum_{i=1}^{n}\operatorname{sign}^\top(e_i(t))\big[(E_0+\Delta E(t))s_i(t) - A_0\varpi_i(t)\big] \\ &\quad + \sum_{i=1}^{n}\operatorname{sign}^\top(e_i(t))\big[(G_0+\Delta G(t))\hbar(s_i(t)) - B_0\hbar(\varpi_i(t))\big] \\ &\quad + \sum_{i=1}^{n}\operatorname{sign}^\top(e_i(t))\,c\sum_{j=1}^{N}d_{ij}\Lambda e_i(t) + \sum_{i=1}^{n}\operatorname{sign}^\top(e_i(t))u_i(t). \end{aligned} \tag{12}$$

It follows from Lemma 2 and Assumption 2 that

$$\begin{aligned} W_1 &= \sum_{i=1}^{n}\operatorname{sign}^\top(e_i(t))\big[(E_0+\Delta E(t))s_i(t) - A_0\varpi_i(t)\big] \\ &\le \sum_{i=1}^{n}\big|(E_0+\Delta E(t))s_i(t) - (E_0+\Delta E(t))\varpi_i(t) \\ &\quad + (E_0+\Delta E(t))\varpi_i(t) - A_0\varpi_i(t)\big| \\ &\le \sum_{i=1}^{n}\big|(E_0+\Delta E(t))e_i(t) + (E_0 - A_0 + \Delta E(t))\varpi_i(t)\big| \\ &\le \sum_{i=1}^{n}\big|(E_0+M_eF(t)H_e)e_i(t) + (E_0+M_eF(t)H_e - A_0)\varpi_i(t)\big| \\ &\le \sum_{i=1}^{n}\Big|\Big(E_0+\frac{1}{2}M_eM_e^\top+\frac{1}{2}H_e^\top H_e\Big)e_i(t) \\ &\quad + \Big(E_0+\frac{1}{2}M_eM_e^\top+\frac{1}{2}H_e^\top H_e - A_0\Big)\varpi_i(t)\Big|. \end{aligned} \tag{13}$$

Using Assumption 1, one has

$$\begin{aligned} W_2 &= \sum_{i=1}^{n}\operatorname{sign}^\top(e_i(t))\big[(G_0+\Delta G(t))\hbar(s_i(t)) - B_0\hbar(\varpi_i(t))\big] \\ &\le \sum_{i=1}^{n}\big|(G_0+\Delta G(t))\hbar(s_i(t)) - B_0\hbar(\varpi_i(t))\big| \\ &\le \sum_{i=1}^{n}\big|(G_0+\Delta G(t))\hbar(s_i(t)) - (G_0+\Delta G(t))\hbar(\varpi_i(t)) \\ &\quad + (G_0+\Delta G(t))\hbar(\varpi_i(t)) - B_0\hbar(\varpi_i(t))\big| \\ &\le \sum_{i=1}^{n}\big|(G_0+\Delta G(t))Le_i(t) + (G_0+\Delta G(t))\hbar(\varpi_i(t)) - B_0\hbar(\varpi_i(t))\big| \\ &\le \sum_{i=1}^{n}\big|(G_0+M_gF(t)H_g)Le_i(t) + (G_0+M_gF(t)H_g)\hbar(\varpi_i(t)) - B_0\hbar(\varpi_i(t))\big| \\ &\le \sum_{i=1}^{n}\Big|\Big(G_0+\frac{1}{2}M_gM_g^\top+\frac{1}{2}H_g^\top H_g\Big)Le_i(t) \\ &\quad + \Big(G_0+\frac{1}{2}M_gM_g^\top+\frac{1}{2}H_g^\top H_g\Big)\hbar(\varpi_i(t)) - B_0\hbar(\varpi_i(t))\Big|. \end{aligned} \tag{14}$$

Similarly, one can obtain the following formula:

$$\begin{split} W\_3 &= \sum\_{i=1}^n \text{sign}^\top(e\_i(t))c \sum\_{j=1}^N d\_{ij} \Lambda e\_i(t) \\ &\le \sum\_{i=1}^n |c \sum\_{j=1}^N d\_{ij} \Lambda e\_i(t)| \\ &\le c \sum\_{i=1}^n \sum\_{j=1}^N d\_{ij} \Lambda |e\_i(t)|. \end{split} \tag{15}$$

Adding the controller *ui*(*t*) to (12), we obtain

$$\begin{aligned} W_4 &= \sum_{i=1}^{n}\operatorname{sign}^\top(e_i(t))u_i(t) \\ &\le \sum_{i=1}^{n}\operatorname{sign}^\top(e_i(t))\big[-(E_0+\Delta E(t)-A_0)\varpi_i(t) - \delta_ie_i(t) \\ &\quad - (G_0+\Delta G(t))\hbar(\varpi_i(t)) + B_0\hbar(\varpi_i(t))\big] \\ &\le \sum_{i=1}^{n}\big|-(E_0+\Delta E(t)-A_0)\varpi_i(t) - \delta_ie_i(t) \\ &\quad - (G_0+\Delta G(t))\hbar(\varpi_i(t)) + B_0\hbar(\varpi_i(t))\big| \\ &\le \sum_{i=1}^{n}\Big|-\Big(E_0+\frac{1}{2}M_eM_e^\top+\frac{1}{2}H_e^\top H_e - A_0\Big)\varpi_i(t) + B_0\hbar(\varpi_i(t)) \\ &\quad - \Big(G_0+\frac{1}{2}M_gM_g^\top+\frac{1}{2}H_g^\top H_g\Big)\hbar(\varpi_i(t)) - \delta_ie_i(t)\Big|. \end{aligned} \tag{16}$$

By adding (13)–(16) to (12), one can obtain

$$\begin{aligned} {}_{t_0}^{C}D_t^{\alpha}V(t) &\le W_1 + W_2 + W_3 + W_4 \\ &\le \sum_{i=1}^{n}\Big[E_0 + \frac{1}{2}M_eM_e^\top + \frac{1}{2}H_e^\top H_e + LG_0 + \frac{1}{2}LM_gM_g^\top + \frac{1}{2}LH_g^\top H_g \\ &\quad + c\sum_{j=1}^{N}d_{ij}\Lambda - \delta_i\Big]|e_i(t)| \\ &\le \sum_{i=1}^{n}\hat{\Upsilon}\,|e_i(t)|. \end{aligned} \tag{17}$$

When $\hat{\Upsilon} < 0$, we have ${}_{t_0}^{C}D_t^{\alpha}V(t) \le 0$, which means that the FONCDNUP achieves asymptotic synchronization under the controller.

#### *3.2. Pinning Synchronization for FOCDNUP*

In this subsection, we explore the pinning synchronization of the following fractional-order complex dynamical networks with uncertain parameters (FOCDNUP): the drive system is described as

$${}_{t_0}^{C}D_t^{\alpha}\tilde{\varpi}_i(t) = (D + \Delta D(t))\tilde{\varpi}_i(t) + (Q + \Delta Q(t))g(\tilde{\varpi}_i(t)) + \tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda}\tilde{\varpi}_j(t),\tag{18}$$

and the response system is described as

$${}_{t_0}^{C}D_t^{\alpha}\tilde{s}_i(t) = (D + \Delta D(t))\tilde{s}_i(t) + (Q + \Delta Q(t))g(\tilde{s}_i(t)) + \tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda}\tilde{s}_j(t) + \tilde{u}_i(t),\tag{19}$$

where 0 < *α* < 1, *i* = 1, 2, ··· , *n*; ϖ̃<sub>*i*</sub>(*t*) and s̃<sub>*i*</sub>(*t*) are the states of the *i*-th node; *D* is a real constant matrix and *Q* is the weight matrix; *g*(ϖ̃<sub>*i*</sub>(*t*)) and *g*(s̃<sub>*i*</sub>(*t*)) are continuous nonlinear functions; Λ̃ = diag(ε̃<sub>1</sub>, ε̃<sub>2</sub>, ··· , ε̃<sub>*n*</sub>) > 0 is the internal coupling matrix of the networks; (d̃<sub>*ij*</sub>)<sub>*n*×*n*</sub> is the outer coupling matrix, with d̃<sub>*ij*</sub> ≠ 0 if there is a link from node *i* to node *j* and d̃<sub>*ij*</sub> = 0 otherwise; Δ*D*(*t*) and Δ*Q*(*t*) are the parametric uncertainties.

The sync-error vector is defined as

$$
\tilde{e}_i(t) = \tilde{s}_i(t) - \tilde{\varpi}_i(t). \tag{20}
$$

Based on (18) and (19), the sync-error system is expressed as

$${}_{t_0}^{C}D_t^{\alpha}\tilde{e}_i(t) = (D + \Delta D(t))\tilde{e}_i(t) + (Q + \Delta Q(t))g(\tilde{e}_i(t)) + \tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda}\tilde{e}_i(t) + \tilde{u}_i(t).\tag{21}$$

Then, the pinning controller of FOCDNUP is described as

$$
\tilde{u}_i(t) = \begin{cases}
-\tilde{\delta}_i\tilde{e}_i(t), & i = 1, 2, \cdots, m, \\
0, & i = m+1, m+2, \cdots, n.
\end{cases} \tag{22}
$$
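In scalar form, the pinning controller (22) feeds back only the first *m* (pinned) nodes with gains δ̃<sub>*i*</sub> (compare the −∑<sub>*i*=1</sub><sup>*m*</sup> δ̃<sub>*i*</sub>|ẽ<sub>*i*</sub>(*t*)| term in (27)) and leaves the remaining nodes uncontrolled. A minimal sketch with hypothetical node errors and gains (per-node states taken as scalars purely for illustration):

```python
def pinning_control(e, delta, m):
    """Pinning controller of Eq. (22): only the first m (pinned) nodes
    receive feedback u_i = -delta_i * e_i; the rest get zero input."""
    return [-delta[i] * e[i] if i < m else 0.0 for i in range(len(e))]

# 5 nodes, pin the first 2 with gain 15
u = pinning_control([1.0, -2.0, 0.5, 3.0, -1.0], [15.0] * 5, m=2)
print(u)  # [-15.0, 30.0, 0.0, 0.0, 0.0]
```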

Synchronizing the FOCDNUP by the pinning controller is the next task.

**Theorem 2.** *Under Assumptions 1 and 2, and for a scalar* 0 < *α* < 1*, the FOCDNUP can achieve synchronization under the pinning controller if the following inequality holds:*

$$D + \frac{1}{2}M_dM_d^\top + \frac{1}{2}H_d^\top H_d + LQ + \frac{1}{2}LM_qM_q^\top + \frac{1}{2}LH_q^\top H_q + \tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda} - \sum_{i=1}^{m}\tilde{\delta}_i < 0.$$

**Proof.** Construct the following Lyapunov function:


$$V(t) = \sum_{i=1}^{n}|\tilde{e}_i(t)|,\tag{23}$$

then, taking the fractional derivative of *V*(*t*) by Lemma 1, one can obtain

$$\begin{aligned} {}_{t_0}^{C}D_t^{\alpha}V(t) &= {}_{t_0}^{C}D_t^{\alpha}\sum_{i=1}^{n}|\tilde{e}_i(t)| \\ &\le \sum_{i=1}^{n}\operatorname{sign}^\top(\tilde{e}_i(t))\,{}_{t_0}^{C}D_t^{\alpha}\tilde{e}_i(t) \\ &\le \sum_{i=1}^{n}\operatorname{sign}^\top(\tilde{e}_i(t))\big[(D+\Delta D(t))\tilde{e}_i(t) + (Q+\Delta Q(t))g(\tilde{e}_i(t)) \\ &\quad + \tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda}\tilde{e}_i(t) + \tilde{u}_i(t)\big] \\ &\le \sum_{i=1}^{n}\operatorname{sign}^\top(\tilde{e}_i(t))(D+\Delta D(t))\tilde{e}_i(t) \\ &\quad + \sum_{i=1}^{n}\operatorname{sign}^\top(\tilde{e}_i(t))(Q+\Delta Q(t))g(\tilde{e}_i(t)) \\ &\quad + \sum_{i=1}^{n}\operatorname{sign}^\top(\tilde{e}_i(t))\Big(\tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda}\tilde{e}_i(t) + \tilde{u}_i(t)\Big). \end{aligned} \tag{24}$$

It follows from Lemma 2 and Assumption 2 that

$$\begin{aligned} V_1 &= \sum_{i=1}^{n}\operatorname{sign}^\top(\tilde{e}_i(t))(D + \Delta D(t))\tilde{e}_i(t) \\ &\le \sum_{i=1}^{n}\big|(D + \Delta D(t))\tilde{e}_i(t)\big| \\ &\le \sum_{i=1}^{n}\big|(D + M_dF(t)H_d)\tilde{e}_i(t)\big| \\ &\le \sum_{i=1}^{n}\Big|\Big(D + \frac{1}{2}M_dM_d^\top + \frac{1}{2}H_d^\top H_d\Big)\tilde{e}_i(t)\Big|. \end{aligned} \tag{25}$$

Using Assumption 1, one obtains

$$\begin{aligned} V_2 &= \sum_{i=1}^{n}\operatorname{sign}^\top(\tilde{e}_i(t))(Q + \Delta Q(t))g(\tilde{e}_i(t)) \\ &\le \sum_{i=1}^{n}\big|(Q + \Delta Q(t))L\tilde{e}_i(t)\big| \\ &\le \sum_{i=1}^{n}\big|(Q + M_qF(t)H_q)L\tilde{e}_i(t)\big| \\ &\le \sum_{i=1}^{n}\Big|\Big(Q + \frac{1}{2}M_qM_q^\top + \frac{1}{2}H_q^\top H_q\Big)L\tilde{e}_i(t)\Big|. \end{aligned} \tag{26}$$

Adding the pinning controller *u*˜*i*(*t*) to (24), one has

$$\begin{aligned} V_3 &= \sum_{i=1}^{n}\operatorname{sign}^\top(\tilde{e}_i(t))\Big(\tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda}\tilde{e}_i(t) + \tilde{u}_i(t)\Big) \\ &\le \sum_{i=1}^{n}\Big|\tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda}\tilde{e}_i(t)\Big| - \sum_{i=1}^{m}\tilde{\delta}_i|\tilde{e}_i(t)|. \end{aligned} \tag{27}$$

By adding (25)–(27) to (24), we have

$$\begin{aligned} {}_{t_0}^{C}D_t^{\alpha}V(t) &\le V_1 + V_2 + V_3 \\ &\le \sum_{i=1}^{n}\Big|\Big(D + \frac{1}{2}M_dM_d^\top + \frac{1}{2}H_d^\top H_d\Big)\tilde{e}_i(t) + \Big(Q + \frac{1}{2}M_qM_q^\top + \frac{1}{2}H_q^\top H_q\Big)L\tilde{e}_i(t) \\ &\quad + \tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda}\tilde{e}_i(t) - \tilde{\delta}_i\tilde{e}_i(t)\Big| \\ &\le \sum_{i=1}^{n}\Big[D + \frac{1}{2}M_dM_d^\top + \frac{1}{2}H_d^\top H_d + LQ + \frac{1}{2}LM_qM_q^\top + \frac{1}{2}LH_q^\top H_q \\ &\quad + \tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda} - \sum_{i=1}^{m}\tilde{\delta}_i\Big]|\tilde{e}_i(t)|. \end{aligned} \tag{28}$$

If

$$D + \frac{1}{2}M_dM_d^\top + \frac{1}{2}H_d^\top H_d + LQ + \frac{1}{2}LM_qM_q^\top + \frac{1}{2}LH_q^\top H_q + \tilde{c}\sum_{j=1}^{N}\tilde{d}_{ij}\tilde{\Lambda} - \sum_{i=1}^{m}\tilde{\delta}_i < 0,$$

then the FOCDNUP achieves pinning synchronization under the controller.

#### **4. Numerical Simulation**

In this section, the viability and validity of the approaches are verified through two numerical examples.

**Example 1.** *Suppose that the FONCDNUP below is made up of n nodes and is given in the following way:*

$$
{}_{t_0}^{C}D_t^{\alpha}\varpi_i(t) = A_0 \varpi_i(t) + B_0 \hbar(\varpi_i(t)) + c \sum_{j=1}^{N} d_{ij} \Lambda \varpi_j(t), \tag{29}
$$

*and*

$${}_{t_0}^{C}D_t^{\alpha}s_i(t) = (E_0 + \Delta E(t)) s_i(t) + (G_0 + \Delta G(t)) \hbar(s_i(t)) + c \sum_{j=1}^{N} d_{ij} \Lambda s_j(t) + u_i(t). \tag{30}$$

The controller is

$$u_i(t) = -(E_0 + \Delta E(t) - A_0)\varpi_i(t) - (G_0 + \Delta G(t))\hbar(\varpi_i(t)) + B_0\hbar(\varpi_i(t)) - \delta_i e_i(t). \tag{31}$$

Consider a FONCDNUP with 10 nodes, where *δ* = 20, ϖ<sub>*i*</sub>(*t*) = [ϖ<sub>*i*1</sub>, ϖ<sub>*i*2</sub>, ϖ<sub>*i*3</sub>, ϖ<sub>*i*4</sub>]<sup>⊤</sup>, *s*<sub>*i*</sub>(*t*) = [*s*<sub>*i*1</sub>, *s*<sub>*i*2</sub>, *s*<sub>*i*3</sub>, *s*<sub>*i*4</sub>]<sup>⊤</sup>, *i* = 1, 2, ··· , 10; *c* is the coupling coefficient, which represents the degree of connection between the nodes, with *c* = 0.01, *a* = 0.25, *b* = 8.1; and the nonlinear functions are ħ(ϖ<sub>*i*</sub>(*t*)) = [*a* tanh(ϖ<sub>*i*1</sub>(*t*)), *a* tanh(ϖ<sub>*i*2</sub>(*t*)), *a* tanh(ϖ<sub>*i*3</sub>(*t*)), *a* tanh(ϖ<sub>*i*4</sub>(*t*))]<sup>⊤</sup> and ħ(*s*<sub>*i*</sub>(*t*)) = [*b* tanh(*s*<sub>*i*1</sub>(*t*)), *b* tanh(*s*<sub>*i*2</sub>(*t*)), *b* tanh(*s*<sub>*i*3</sub>(*t*)), *b* tanh(*s*<sub>*i*4</sub>(*t*))]<sup>⊤</sup>.

The weight matrices and the parametric matrices are

$$A\_{0} = \begin{pmatrix} -18.058 & 0 & 0 & 0 \\ 0 & -1.256 & 0 & 0 \\ 0 & 0 & -10.847 & 0 \\ 0 & 0 & 0 & -1.865 \end{pmatrix}, \quad B\_{0} = \begin{pmatrix} 10.8 & 0 & 5.5 & 0.18 \\ 0 & -1.55 & 0.01 & 0.05 \\ 15.3 & 1 & -10 & 0 \\ 2.5 & 0 & 0 & -2.815 \end{pmatrix}.$$

$$\mathbf{E}\_{0} = \begin{pmatrix} -20.204 & 0 & 0 & 0 \\ 0 & -4.15 & 0 & 0 \\ 0 & 0 & -5.357 & 0 \\ 0 & 0 & 0 & -1.613 \end{pmatrix}, \quad \mathbf{G}\_{0} = \begin{pmatrix} -1.048 & 0.015 & 0.05 & 0.6 \\ -0.01 & 0.85 & 0 & 1.47 \\ 0 & -1.3 & -4.25 & -1.45 \\ 0.86 & 0 & 3 & -1.45 \end{pmatrix}.$$

The internal coupling matrix can be shown as

$$
\Lambda = \begin{pmatrix}
0.25 & 0 & 0 & 0 \\
0 & 0.25 & 0 & 0 \\
0 & 0 & 0.25 & 0 \\
0 & 0 & 0 & 0.25
\end{pmatrix}.
$$

The outer coupling matrix can be indicated by

$$(d_{ij})_{10\times 10} = \begin{pmatrix} -2.1 & 0 & 1.15 & 0 & 2.05 & -2.1 & 0.2 & -1 & 0.22 & -2.15 \\ -2.1 & -1.15 & 0 & -0.12 & 0.21 & -0.45 & 0 & 1.5 & 0.1 & -3.3 \\ 0 & -1 & -3.1 & 0 & 1 & -1.5 & 0 & -1.25 & -1.15 & 1.02 \\ 2.5 & 0 & 1.5 & -2.2 & 1.5 & 0 & -1.5 & 0 & -2.03 & 0.5 \\ 0 & -1 & 0 & 1 & -2.35 & -1.25 & -1.5 & 3.01 & 0.5 & -1.1 \\ -1.01 & 0 & 2 & -2.1 & 1.05 & -1.5 & 0 & 0.1 & -1 & 0 \\ 0.5 & 0 & -1 & 2.1 & 0 & 1.5 & -1.5 & -0.5 & 1.01 & 1.45 \\ -2.03 & 0.1 & -1.01 & 1.2 & -1.5 & 0 & -1.2 & -1.25 & 2.35 & -0.2 \\ 1 & -2.51 & 0 & 1.5 & -1 & 0.2 & 0.5 & 1.06 & -1.15 & -1.25 \\ 1.5 & -3.13 & 0.5 & -0.5 & 0 & -1.15 & 3 & 0 & 2.1 & -1 \end{pmatrix}.$$

The parameter uncertainties matrices can be shown in terms of

$$M\_{\varepsilon} = \begin{pmatrix} 0.1 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0.2 & 0 \\ 0 & 0 & 0 & 0.3 \end{pmatrix}, \quad H\_{\varepsilon} = \begin{pmatrix} 0.2 & 0 & 0 & 0 \\ 0 & 0.1 & 0 & 0 \\ 0 & 0 & 0.2 & 0 \\ 0 & 0 & 0 & 0.5 \end{pmatrix},$$

$$F\_{\varepsilon}(t) = \begin{pmatrix} \cos(\varpi\_1(t)) & 0 & 0 & 0 \\ 0 & \cos(\varpi\_2(t)) & 0 & 0 \\ 0 & 0 & \cos(\varpi\_3(t)) & 0 \\ 0 & 0 & 0 & \cos(\varpi\_4(t)) \end{pmatrix},$$

and

$$M\_{\mathcal{S}} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0.9 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0.3 \end{pmatrix}, \quad H\_{\mathcal{S}} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0.9 & 0 & 0 \\ 0 & 0 & 0.8 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$

$$F\_{\mathcal{S}}(t) = \begin{pmatrix} 0.41 \cos(s\_1(t)) & 0 & 0 & 0 \\ 0 & \cos(s\_2(t)) & 0 & 0 \\ 0 & 0 & 0.3 \cos(s\_3(t)) & 0 \\ 0 & 0 & 0 & 0.13 \cos(s\_4(t)) \end{pmatrix}.$$

We choose appropriate initial values. Then, in MATLAB R2020a, the Adams–Bashforth–Moulton predictor-corrector method is employed for the numerical simulation. If the parameter matrices above change, the control time lengthens. Figures 1–4 show the trajectories of the sync-error (9) components (*e*<sub>*i*1</sub>, *e*<sub>*i*2</sub>, *e*<sub>*i*3</sub>, *e*<sub>*i*4</sub>) without control, respectively; we can observe that systems (29) and (30) are not synchronized without control. Figures 5–8 show the trajectories of the sync-error (9) components (*e*<sub>*i*1</sub>, *e*<sub>*i*2</sub>, *e*<sub>*i*3</sub>, *e*<sub>*i*4</sub>) under control, respectively. Figure 9 shows the trajectories of the total sync-error system (9) without control, and Figure 10 shows them under control. From the simulation results and graphs, the error system is driven to the origin; clearly, (29) and (30) achieve asymptotic synchronization. This shows the effectiveness and feasibility of Theorem 1.
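The paper runs its simulations in MATLAB; to illustrate the Adams–Bashforth–Moulton idea, the sketch below implements the standard Diethelm–Ford–Freed predictor-corrector for a *scalar* Caputo equation (the function `fde_abm` and the test problem D^α y = −y are our choices for illustration, not the paper's network model):

```python
import math

def fde_abm(f, y0, alpha, T, n):
    """Adams-Bashforth-Moulton predictor-corrector (Diethelm-Ford-Freed)
    for the scalar Caputo equation  D^alpha y = f(t, y),  0 < alpha < 1."""
    h = T / n
    y, fy = [y0], [f(0.0, y0)]
    ga1, ga2 = math.gamma(alpha), math.gamma(alpha + 2)
    for k in range(n):                       # step from t_k to t_{k+1}
        t1 = (k + 1) * h
        # predictor: fractional rectangle rule
        pred = y0 + sum(
            (h ** alpha / alpha) * ((k + 1 - j) ** alpha - (k - j) ** alpha) * fy[j]
            for j in range(k + 1)) / ga1
        # corrector: fractional trapezoidal weights
        a0 = k ** (alpha + 1) - (k - alpha) * (k + 1) ** alpha
        s = a0 * fy[0] + sum(
            ((k - j + 2) ** (alpha + 1) + (k - j) ** (alpha + 1)
             - 2 * (k - j + 1) ** (alpha + 1)) * fy[j]
            for j in range(1, k + 1))
        ynew = y0 + (h ** alpha / ga2) * (s + f(t1, pred))
        y.append(ynew)
        fy.append(f(t1, ynew))
    return y

# D^alpha y = -y, y(0) = 1: the solution E_alpha(-t^alpha) decays toward 0
traj = fde_abm(lambda t, y: -y, 1.0, 0.9, 5.0, 200)
print(traj[0], traj[-1])
```

The same predictor-corrector structure extends componentwise to the network systems (29) and (30), with the history sums taken per state component.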

**Figure 1.** Time behaviors of sync−error trajectories *ei*1(*i* = 1, 2, ··· , 10) without controller.

**Figure 2.** Time behaviors of sync−error trajectories *ei*2(*i* = 1, 2, ··· , 10) without controller.

**Figure 3.** Time behaviors of sync−error trajectories *ei*3(*i* = 1, 2, ··· , 10) without controller.

**Figure 4.** Time behaviors of sync−error trajectories *ei*4(*i* = 1, 2, ··· , 10) without controller.

**Figure 5.** Time behaviors of sync−error trajectories *ei*1(*i* = 1, 2, ··· , 10) with controller.

**Figure 6.** Time behaviors of sync−error trajectories *ei*2(*i* = 1, 2, ··· , 10) with controller.

**Figure 7.** Time behaviors of sync−error trajectories *ei*3(*i* = 1, 2, ··· , 10) with controller.

**Figure 8.** Time behaviors of sync−error trajectories *ei*4(*i* = 1, 2, ··· , 10) with controller.

**Figure 9.** Time behaviors of sync−error trajectories *eij*(*i* = 1, 2, ··· , 10; *j* = 1, 2, 3, 4) without controller.

**Figure 10.** Time behaviors of sync−error trajectories *eij*(*i* = 1, 2, ··· , 10; *j* = 1, 2, 3, 4) with controller.

**Example 2.** *Suppose that the FOCDNUP below consists of n nodes and is given in the following way:*

$${}\_{t\_0}^{C} D\_t^{\alpha} \tilde{x}\_i(t) = \left(D + \Delta D(t)\right) \tilde{x}\_i(t) + \left(Q + \Delta Q(t)\right) g\left(\tilde{x}\_i(t)\right) + \tilde{c} \sum\_{j=1}^{N} \tilde{d}\_{ij} \tilde{\Lambda} \tilde{x}\_j(t),\tag{32}$$

*and*

$${}\_{t\_0}^{C} D\_t^{\alpha} \tilde{s}\_i(t) = \left(D + \Delta D(t)\right) \tilde{s}\_i(t) + \left(Q + \Delta Q(t)\right) g\left(\tilde{s}\_i(t)\right) + \tilde{c} \sum\_{j=1}^{N} \tilde{d}\_{ij} \tilde{\Lambda} \tilde{s}\_j(t) + \tilde{u}\_i(t). \tag{33}$$

The FOCDNUP with 10 nodes is considered, where $\delta = 15$, $\tilde{x}\_i(t) = [\tilde{x}\_{i1}, \tilde{x}\_{i2}, \tilde{x}\_{i3}, \tilde{x}\_{i4}]^T$, $\tilde{s}\_i(t) = [\tilde{s}\_{i1}, \tilde{s}\_{i2}, \tilde{s}\_{i3}, \tilde{s}\_{i4}]^T$, $i = 1, 2, \cdots, 10$, $\tilde{c} = 5$, $\hat{a} = 0.01$, $\hat{b} = 0.01$; and the nonlinear functions can be expressed as $g(\tilde{x}\_i(t)) = [\hat{a}\tanh(\tilde{x}\_{i1}(t)), \hat{a}\tanh(\tilde{x}\_{i2}(t)), \hat{a}\tanh(\tilde{x}\_{i3}(t)), \hat{a}\tanh(\tilde{x}\_{i4}(t))]^T$ and $g(\tilde{s}\_i(t)) = [\hat{b}\tanh(\tilde{s}\_{i1}(t)), \hat{b}\tanh(\tilde{s}\_{i2}(t)), \hat{b}\tanh(\tilde{s}\_{i3}(t)), \hat{b}\tanh(\tilde{s}\_{i4}(t))]^T$.

The controller is added to the first five nodes, namely,

$$\mathfrak{u}\_{i}(t) = \begin{cases} -15\mathfrak{e}\_{i}(t), & i = 1,2,3,4,5\\ 0, & i = 6,7,8,9,10. \end{cases} \tag{34}$$

The weight matrices and parametric matrices are

$$D = \begin{pmatrix} -3.54 & 0 & 0 & 0 \\ 0 & -2.96 & 0 & 0 \\ 0 & 0 & -5.07 & 0 \\ 0 & 0 & 0 & -2.93 \end{pmatrix}, \quad Q = \begin{pmatrix} -15.5 & 0.05 & 5.1 & 0 \\ 70 & 0 & -15.01 & 35 \\ 60.1 & -10.1 & -8.25 & 15.5 \\ -0.1 & -0.03 & 30.01 & -0.35 \end{pmatrix}.$$

The inner coupling matrix can be expressed as

$$
\bar{\Lambda} = \begin{pmatrix}
0.25 & 0 & 0 & 0 \\
0 & 0.25 & 0 & 0 \\
0 & 0 & 0.25 & 0 \\
0 & 0 & 0 & 0.25
\end{pmatrix}.
$$

The outer coupling matrix is given by

$$(\bar{d}\_{\bar{ij}})\_{n\times n} = \begin{pmatrix} -2 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -2 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & -3 & 0 & 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & -3 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & -2 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \end{pmatrix}.$$
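The outer coupling matrix above is diffusive: each off-diagonal entry marks an edge, and each diagonal entry equals minus the row's off-diagonal sum, so every row sums to zero. A small Python sketch (hypothetical helper names, not from the paper) that builds such a matrix from an edge list and applies the pinning law (34):

```python
def diffusive_coupling(n, edges):
    """Build an n x n outer coupling matrix: d_ij = 1 for each edge (i, j),
    and d_ii = -(row sum of off-diagonal entries), so each row sums to 0."""
    d = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        d[i][j] = 1.0
    for i in range(n):
        d[i][i] = -sum(d[i])
    return d

def pinning_control(errors, delta=15.0, pinned=5):
    """Controller (34): feedback u_i = -delta * e_i on the first `pinned`
    nodes, zero on the remaining nodes."""
    return [-delta * e if i < pinned else 0.0
            for i, e in enumerate(errors)]
```

The zero row sums are what make the coupling term vanish once all nodes agree, which is why only a subset of nodes needs direct control.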

The matrices of parameter uncertainties are

$$M\_d = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0.81 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad H\_d = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0.8 & 0 \\ 0 & 0 & 0 & 0.3 \end{pmatrix},$$

$$F\_d(t) = \begin{pmatrix} 0.41\cos(\bar{\omega}\_1(t)) & 0 & 0 & 0 \\ 0 & \cos(\bar{\omega}\_2(t)) & 0 & 0 \\ 0 & 0 & 0.3\cos(\bar{\omega}\_3(t)) & 0 \\ 0 & 0 & 0 & 0.13\cos(\bar{\omega}\_4(t)) \end{pmatrix},$$

and

$$M\_{\overline{q}} = \begin{pmatrix} 0.1 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0.2 & 0 \\ 0 & 0 & 0 & 0.15 \end{pmatrix}, \quad H\_{\overline{q}} = \begin{pmatrix} 0.2 & 0 & 0 & 0 \\ 0 & 0.1 & 0 & 0 \\ 0 & 0 & 0.2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$

$$F\_{\overline{q}}(t) = \begin{pmatrix} \cos(\overline{s}\_1(t)) & 0 & 0 & 0 \\ 0 & \cos(\overline{s}\_2(t)) & 0 & 0 \\ 0 & 0 & \cos(\overline{s}\_3(t)) & 0 \\ 0 & 0 & 0 & \cos(\overline{s}\_4(t)) \end{pmatrix}.$$

We choose appropriate initial values and use the same MATLAB version and methods as in Example 1. Figures 11–14 show the trajectories of the synchronization error (20), (*e*ˆ*i*1, *e*ˆ*i*2, *e*ˆ*i*3, *e*ˆ*i*4), without control; we can observe that (32) and (33) are unsynchronized without control. Figures 15–18 show the corresponding trajectories of the synchronization error (20) under control. Figure 19 shows the trajectories of the total synchronization error (20) without control, and Figure 20 shows them under control. From the simulation results and graphs, the error system is driven to zero; it is clear that (32) and (33) achieve asymptotic synchronization. This demonstrates the effectiveness and feasibility of Theorem 2.

**Figure 11.** Time behaviors of pinning sync−error trajectories *e*ˆ*i*1(*i* = 1, 2, ··· , 10) without controller.

**Figure 12.** Time behaviors of pinning sync−error trajectories *e*ˆ*i*2(*i* = 1, 2, ··· , 10) without controller.

**Figure 13.** Time behaviors of pinning sync−error trajectories *e*ˆ*i*3(*i* = 1, 2, ··· , 10) without controller.

**Figure 14.** Time behaviors of pinning sync−error trajectories *e*ˆ*i*4(*i* = 1, 2, ··· , 10) without controller.

**Figure 15.** Time behaviors of pinning sync−error trajectories *e*ˆ*i*1(*i* = 1, 2, ··· , 10) with controller.

**Figure 16.** Time behaviors of pinning sync−error trajectories *e*ˆ*i*2(*i* = 1, 2, ··· , 10) with controller.

**Figure 17.** Time behaviors of pinning sync−error trajectories *e*ˆ*i*3(*i* = 1, 2, ··· , 10) with controller.

**Figure 18.** Time behaviors of pinning sync−error trajectories *e*ˆ*i*4(*i* = 1, 2, ··· , 10) with controller.

**Figure 19.** Time behaviors of pinning sync−error trajectories *e*ˆ*ij*(*i* = 1, 2, ··· , 10; *j* = 1, 2, 3, 4) without controller.

**Figure 20.** Time behaviors of pinning sync−error trajectories *e*ˆ*ij*(*i* = 1, 2, ··· , 10; *j* = 1, 2, 3, 4) with controller.

#### **5. Conclusions**

The asymptotic synchronization of FONCDNUP was studied via a novel control scheme. Several sufficient conditions ensuring the asymptotic synchronization of FONCDNUP were derived utilizing fractional differential theory, differential inclusion theory, and the Lyapunov method. The pinning synchronization of FOCDNUP, in which parameter uncertainties were introduced into the networks, was also investigated. Instead of adding controllers to all nodes, controllers were added only to the first five nodes, reducing costs and enhancing efficiency. Finally, two numerical examples were presented to demonstrate the effectiveness of the proposed approaches. However, this paper considers neither time delays nor the extension of pinning control to fractional-order non-identical complex networks. In the future, the inclusion of time-varying delays in FONCDNUP will be considered, along with the exploration of pinning control for non-identical networks, which presents an interesting and challenging area.

**Author Contributions:** Y.W. proposed the main idea and prepared the manuscript initially. X.H. carried out the numerical simulations of this paper. T.L. revised the English grammar of this paper. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work is partly funded by Sichuan University of Science and Engineering (Grant No. 2022RC12) and the Postgraduate Innovation Fund Project of Sichuan University of Science and Engineering (Grant No. Y2022191).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data used to support the findings of this study are available from the corresponding author upon request.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Fractional-Order Total Variation Geiger-Mode Avalanche Photodiode Lidar Range-Image Denoising Algorithm Based on Spatial Kernel Function and Range Kernel Function**

**Xuyang Wei 1, Chunyang Wang 1,2,\*, Da Xie 2,\*, Kai Yuan 2, Xuelian Liu 1, Zihao Wang 1, Xinjian Wang <sup>2</sup> and Tingsheng Huang <sup>2</sup>**


**Abstract:** A Geiger-mode avalanche photodiode (GM-APD) laser radar range image has much noise when the signal-to-background ratios (SBRs) are low, making it difficult to recover the real target scene. In this paper, based on the GM-APD lidar denoising model of fractional-order total variation (FOTV), the spatial relationship and similarity relationship between pixels are obtained by using a spatial kernel function and range kernel function to optimize the fractional differential operator, and a new FOTV GM-APD lidar range-image denoising algorithm is designed. The lost information and range anomalous noise are suppressed while the target details and contour information are preserved. The Monte Carlo simulation and experimental results show that, under the same SBRs and statistical frame number, the proposed algorithm improves the target restoration degree by at least 5.11% and the peak signal-to-noise ratio (PSNR) by at least 24.6%. The proposed approach can accomplish the denoising of GM-APD lidar range images when SBRs are low.

**Keywords:** GM-APD lidar; FOTV; range-image denoising; spatial kernel function; kernel of the range

#### **1. Introduction**

Lidar has been widely used in terrain mapping, forestry exploration, autonomous driving, military defense, and other fields due to its high resolution, strong anti-interference ability, and fast response speed [1–4]. Geiger-mode avalanche photodiode (GM-APD) laser radar can detect single-photon echo signals and remote weak signals [5–9]. However, due to the single-photon detection system of GM-APD, the detection cycle has detection dead time, and the acquired range image loses information. Moreover, under the condition of low SBRs (the ratio of target signal photons to background noise photons received in the gate), the acquired target signal is easily submerged in noise, resulting in a large amount of anomalous range noise. Therefore, in order to obtain high-quality range images, an effective range-image denoising algorithm urgently needs to be developed.

The existing GM-APD range-image denoising techniques are mainly divided into local filter denoising and global filter denoising. Local filter denoising is widely used for GM-APD lidar range images due to its simple principle, low computation requirements, and low resource consumption. References [10,11] used the extended median filter to filter the noise in a range image. This simple method can readily suppress the non-linear noise in a range image; however, it damages part of the target and does not maintain the details of the target. Reference [12] proposed an Improved Donut Filter (IDF) algorithm, but the algorithm sacrifices part of its target-detail protection ability to improve its noise suppression ability. Reference [13] proposed a 2D dual-threshold denoising algorithm with the advantages of neighborhood smoothing and threshold segmentation; compared with global filtering algorithms, this algorithm has a poor smoothing effect on the whole target. The local filtering method only denoises based on the relationship between a pixel and its adjacent pixels and does not consider the similarity of image texture and details; thus, it often produces a local smoothing effect, resulting in obvious deviation of the recovered target range information. Global filtering usually uses the spatial relationship between pixels within the whole image and the similarity of pixel values to realize range-image denoising. Compared with local filtering, global filtering can obtain a smoother target range image. Reference [14] proposed a non-local probabilistic statistical filtering algorithm (NLPS) that can maintain the true range values of the lidar range image, and a denoising study was carried out; however, the edge-preserving effect of this algorithm needs to be improved at low SBRs. Reference [15] proposed an image reconstruction algorithm based on total variation and the Discrete Cosine Transform (DCT), and the efficient Alternating Direction Method of Multipliers (ADMM) was used to solve the problem. This method achieves range-image reconstruction from the perspective of global smoothing. Reference [16] proposed a range-image restoration algorithm based on non-local correlation. By constructing an energy equation with a regularization term of non-local spatial correlation between pixels, this algorithm uses the ADMM to find a solution iteratively, achieving range-image restoration under sparse photons. This algorithm can suppress noise while preserving the integrity of image edges, but it tends to over-smooth the image and destroy target details. Reference [17] proposed an intensity guidance method to estimate range images by using the temporal and spatial correlation of reflected signals. This method utilizes the sharp edges and detailed information of intensity images to achieve background noise suppression, but at a higher computational cost.

**Citation:** Wei, X.; Wang, C.; Xie, D.; Yuan, K.; Liu, X.; Wang, Z.; Wang, X.; Huang, T. Fractional-Order Total Variation Geiger-Mode Avalanche Photodiode Lidar Range-Image Denoising Algorithm Based on Spatial Kernel Function and Range Kernel Function. *Fractal Fract.* **2023**, *7*, 674. https://doi.org/10.3390/fractalfract7090674

Academic Editors: Viorel-Puiu Paun, Libo Feng, Lin Liu and Yang Liu

Received: 11 June 2023 Revised: 9 August 2023 Accepted: 30 August 2023 Published: 7 September 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In recent years, GM-APD lidar range-image denoising algorithms have mostly constructed an energy functional with a regularization term, transformed the denoising problem into an optimization problem, and used a numerical method to find an iterative solution. The global filtering method has been used to achieve range-image denoising, but with this method, it is difficult to balance noise removal against preserving the target details and edges. The fractional differential operator takes extra neighborhood information into account; it can linearly enhance the intermediate-frequency signal in the image and non-linearly retain the low-frequency signal, and can thus better retain detailed information while suppressing the noise in the range image. Currently, fractional-order image denoising models can be primarily classified into two categories: fractional-order denoising models based on partial differential equations and fractional-order denoising models based on masks. Most researchers predominantly apply fractional-order denoising models to grayscale image denoising, aiming to enhance the details and edge information of the images [18]. However, there is limited research on the application of these models to range images.

Fractional-order denoising models based on masks are predominantly constructed by deriving eight directionally overlaid mask templates from the G-L definition and combining them with other theories to create improved fractional-order models. Huang et al. [19] explored the feasibility of applying a 3 × 3 fractional-order mask template to denoise range images. The experimental results show that the fractional-order integral denoising operator can successfully suppress noise in range images while maintaining features and edge information, demonstrating good denoising performance for range images. However, because the photon signals reflected by GM-APD laser radar targets originate from emitted short-pulse lasers and the preparation of GM-APD array detectors is constrained by technological barriers, the imaging resolution of GM-APD is relatively low. As a result, the obtained signals exhibit strong consistency in both temporal and spatial distributions. When the mask-based fractional-order denoising model is used to denoise range images, the lack of additional image neighborhood information leads to limited prior information during range-image denoising, resulting in an insufficient supply of high-quality range images.

The fractional-order denoising algorithm based on partial differential equations follows a process similar to the physical phenomenon of heat diffusion. The 3 × 3 mask proposed by Wang et al. [20], which incorporates eight directions, exhibits limited capability in utilizing neighborhood pixel information; a 5 × 5 mask extended to sixteen directions was introduced to address this limitation, enabling comprehensive utilization of effective pixel information within the neighborhood, mitigating noise interference, and enhancing the quality of depth images when statistical frames are scarce, as validated by simulation and imaging experiments. Xie et al. [21], based on the idea of fractional-order partial differential equation image denoising, first utilized the fractional-order total variation regularization denoising algorithm to denoise range images. However, directly applying the fractional-order total variation regularization denoising model to range images with a large amount of noise would excessively establish connections between pixels, thereby increasing the impact of noise on the current pixel. To address this problem, a preprocessing step was designed to identify noise points by considering adjacent pixels, and only the noise points in the range image are denoised in fractional order. Although this algorithm achieved excellent denoising results for range images, the introduced preprocessing step increased the overall complexity and computation time of the algorithm, so it is not an end-to-end range-image denoising algorithm. Therefore, a denoising method suitable for GM-APD lidar range images is proposed in this paper based on the fractional-order total variation denoising method.

In order to achieve the denoising of GM-APD lidar range images with a low SBR, a FOTV-based denoising model of GM-APD lidar was first constructed by introducing fractional differential operators. Secondly, the spatial relationship and similarity relationship between pixels were obtained by using a spatial kernel function and a range kernel function; the fractional differential operator was optimized, the FOTV model was improved, and the split Bregman algorithm was used for range-image denoising, which suppressed the noise of lost information and abnormal range values while preserving the target details and contour information. Finally, Monte Carlo simulation experiments were carried out on the proposed algorithm, a bilateral filtering algorithm (BF), a total variation denoising algorithm (TV), and a fractional-order total variation denoising algorithm (FOTV) to verify their effectiveness under different SBRs and different numbers of statistical frames. Additionally, a GM-APD lidar system was built for outdoor experiments. The experimental results show that the proposed algorithm outperforms the other algorithms in denoising performance.
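The PSNR figure of merit used in these comparisons is the standard one; a minimal sketch, assuming an 8-bit peak value for illustration:

```python
import math

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio: 10 * log10(peak^2 / MSE) between two
    equal-length flattened images."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)
```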

#### **2. Algorithm Principle**

A fractional differential operator is a global operator. Using an FOTV model to denoise range images can balance each frequency component in a range image, improve the accuracy of range-image reconstruction, and retain the edge details of the image. However, for the anomalous range noise and lost information generated by GM-APD lidar, the noise cannot be calibrated because the diffusion coefficient of the fractional-order total variation differential equation is small at range mutation points. In this paper, the spatial proximity and pixel-value similarity between pixels are introduced to optimize the fractional-order differential operator, reduce the impact of noise on the target echo data, and realize the accurate denoising of a range image with a low SBR. The flow diagram of the algorithm in this paper is shown in Figure 1.
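The spatial kernel and range kernel referred to here are the two Gaussian factors familiar from bilateral filtering: one weights neighboring pixels by spatial proximity, the other by similarity of range values. A sketch of the combined weight (the σ values and function name are illustrative, not from the paper):

```python
import math

def kernel_weight(p, q, gp, gq, sigma_s=2.0, sigma_r=0.1):
    """Combined weight of a spatial kernel (pixel coordinates p, q) and a
    range kernel (range values gp, gq):
    exp(-|p - q|^2 / (2 sigma_s^2)) * exp(-(gp - gq)^2 / (2 sigma_r^2))."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    ws = math.exp(-d2 / (2.0 * sigma_s ** 2))          # spatial proximity
    wr = math.exp(-((gp - gq) ** 2) / (2.0 * sigma_r ** 2))  # range similarity
    return ws * wr
```

A pixel that is both spatially close and similar in range value receives the largest weight, which is the property used here to keep anomalous range values from contaminating the fractional differential operator.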

**Figure 1.** Algorithm principle diagram.

#### *2.1. Range-Image Extraction*

In order to extract a GM-APD range image, the maximum likelihood estimation approach is utilized in this research to estimate the range parameters pixel by pixel. The method is divided into three steps. The first step is to build the impulse response function of the GM-APD, the second step is to build the logarithmic likelihood function related to the arrival time of the signal photons, and the third step is to search and solve for the likelihood function within the whole range gate to obtain the target range information.

According to [22], the output model of a pulsed laser is as follows:

$$f(t) = \frac{t}{\tau^2} \exp(-t/\tau),\tag{1}$$

where *f*(*t*) represents the laser pulse waveform, and *τ* represents the laser pulse width.

Without considering the noise photons caused by background light and the detector dark count rate, the expression of the impulse response function (IRF) of GM-APD is as follows:

$$f(t\_0|t) = \frac{t - t\_0}{\tau^2} \exp(-(t - t\_0)/\tau),\tag{2}$$

where *f*(*t*0|*t*) is the impulse response function of GM-APD, and *t*<sup>0</sup> is the flight time of the target photon to be estimated. The relationship between the flight time of the target photon and the target range is:

$$z = \frac{ct\_0}{2},\tag{3}$$

where *z* is the target range, and *c* is the speed of light.
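Under these definitions, the IRF of Equation (2) peaks at $t - t_0 = \tau$, and the estimated flight time converts to range via Equation (3). A small illustrative sketch (parameter values are hypothetical):

```python
import math

def irf(t, t0, tau):
    """GM-APD impulse response of Equation (2):
    (t - t0) / tau^2 * exp(-(t - t0) / tau), zero before t0."""
    dt = t - t0
    return dt / tau**2 * math.exp(-dt / tau) if dt > 0 else 0.0

def time_to_range(t0, c=3.0e8):
    """Equation (3): z = c * t0 / 2 (round-trip flight time to one-way range)."""
    return c * t0 / 2.0
```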

In a single pixel, the logarithmic likelihood function of a depth *zi*,*<sup>j</sup>* related to the photon time of flight *ti*,*<sup>j</sup>* is:

$$\mathrm{L}\_{Z}\left(z\_{i,j}; \left\{t\_{i,j}\right\}\_{t\_{i,j}\in U\_{i,j}}\right) = \sum\_{t\_{i,j}\in U\_{i,j}} \log\left[f\left(t\_{i,j} - \frac{2z\_{i,j}}{c}\right)\right],\tag{4}$$

where *Ui*,*<sup>j</sup>* is the set of flight times within a single pixel gate.

The range parameter *z* is searched from the time interval at the beginning of the gate to the (*bins* − 1)-th time interval, where *bins* = *T*/Δ, *T* is the gating length, and Δ is the minimum time resolution of the count; the last time interval is discarded because the untriggered counts accumulate there. The likelihood function is then evaluated for each candidate parameter value, and the estimated echo position *zpos* is the parameter value that maximizes the likelihood function.

$$z\_{\text{pos}} = \text{argmax}\_{z}(L\_{\mathbb{Z}}).\tag{5}$$

The above process is repeated, and maximum likelihood estimation is carried out pixel by pixel to extract the 3D range image *g*.
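The pixel-by-pixel maximum likelihood search of Equations (4) and (5) can be sketched as a grid search over candidate flight times; the photon timestamps and grid below are illustrative, not from the paper's data:

```python
import math

def ml_depth(photon_times, tau, t_grid):
    """Grid-search maximum likelihood estimate of the photon flight time t0
    (Equations (4)-(5)), using the IRF f(t|t0) = (t - t0)/tau^2 * exp(-(t - t0)/tau)."""
    def log_lik(t0):
        ll = 0.0
        for t in photon_times:
            dt = t - t0
            if dt <= 0:
                return float("-inf")   # a photon cannot arrive before t0
            ll += math.log(dt / tau**2) - dt / tau
        return ll
    return max(t_grid, key=log_lik)
```

In practice the search runs once per pixel over the whole range gate, yielding the 3D range image *g*.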

#### *2.2. Definition of Fractional Differential Operator and Its Effect on Image Signals*

Let the function *f*(*x*) be defined on the interval [*a*, *b*], and let *n* − 1 ≤ *v* < *n*, where *n* is a positive integer; then [23]

$${}\_{a}^{GL} D\_x^v f(x) = \lim\_{h \to 0} h^{-v} \sum\_{k=0}^{\left[\frac{x-a}{h}\right]} (-1)^k \binom{v}{k} f(x - kh), \quad v > 0,\tag{6}$$

where $\binom{v}{k} = \frac{\Gamma(v+1)}{\Gamma(k+1)\Gamma(v-k+1)}$, $[\cdot]$ denotes the integer-part operation, and $h$ represents the differential step size.

In order to define the discrete derivative according to the G-L definition on the interval $[a, t]$, use the uniform partition $h = 1$, so that $m = \left[\frac{t-a}{h}\right] = [t - a]$, and the discrete form is expressed as:

$$D\_t^v f(t) = f(t) + (-1)^1 v f(t-1) + (-1)^2 \frac{v(v-1)}{2} f(t-2) + \cdots + (-1)^j \frac{\Gamma(v+1)}{\Gamma(j+1)\Gamma(v-j+1)} f(t-j). \tag{7}$$

Extend the above concepts to the functions of the two variables

$$D\_x^v f(x, y) = \lim\_{N \to \infty} \left[ \sum\_{i=0}^{N-1} (-1)^i \frac{\Gamma(v+1)}{\Gamma(i+1)\Gamma(v-i+1)} f(x - i, y) \right],\tag{8}$$

$$D\_y^v f(x, y) = \lim\_{N \to \infty} \left[ \sum\_{j=0}^{N-1} (-1)^j \frac{\Gamma(v+1)}{\Gamma(j+1)\Gamma(v-j+1)} f(x, y - j) \right],\tag{9}$$

*N* is the number of terms of the polynomial. From Equations (8) and (9), the fractional differential coefficient $w\_m^v$ of order $v$ can be written as:

$$w\_m^v = (-1)^m \cdot \frac{\Gamma(v+1)}{\Gamma(m+1)\Gamma(v-m+1)}.\tag{10}$$
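The coefficients $w_m^v$ of Equation (10) can be generated without evaluating Gamma functions (which have poles at non-positive integers) via the recurrence $w_0^v = 1$, $w_m^v = w_{m-1}^v \left(1 - \frac{v+1}{m}\right)$; a short sketch:

```python
def gl_coeffs(v, n):
    """First n Grunwald-Letnikov coefficients w_m^v = (-1)^m * C(v, m)
    (Equation (10)), computed with the stable recurrence
    w_m = w_{m-1} * (1 - (v + 1) / m)."""
    w = [1.0]
    for m in range(1, n):
        w.append(w[-1] * (1.0 - (v + 1.0) / m))
    return w
```

For $v = 1$ this gives $[1, -1, 0, 0, \dots]$, the ordinary backward difference; for fractional $v$ the coefficients decay slowly and sum to zero in the limit, which is what makes the operator global.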

The edge information and detailed information of the range image are generally sub-high-frequency or high-frequency information, and the smooth region is generally low-frequency information. Next, the influence of the frequency response of the fractional differential operator on the range image is analyzed.

For a general real number $v \in R^+$, the $v$-th derivative of $f(t) \in L^2(R)$ can be expressed as:

$$D^v f(t) = \frac{d^v f(t)}{dt^v}.\tag{11}$$

According to the Fourier formula, the form of *D<sup>v</sup> f*(*t*) in the Fourier transform domain can be obtained via:

$$D^{v} f(t) = \int\_{R} \left(i2\pi w\_r\right)^{v} F(w\_r) \exp(i2\pi w\_r t)\, dw\_r. \tag{12}$$

On the basis of signal processing, the form of the derivative of the signal in the frequency domain is obtained. The Fourier transformation process is defined as follows:

$$D^v F(w\_r) = (iw\_r)^v F(w\_r),\tag{13}$$

$$D^{v}f(t) \Leftrightarrow \hat{D}^{v}f(w\_r) = (iw\_r)^{v} \hat{f}(w\_r) = |w\_r|^{v} \exp[i\theta^{v}(w\_r)] \hat{f}(w\_r) = |w\_r|^{v} \exp\left[\frac{v\pi i}{2}\mathrm{sgn}(w\_r)\right] \hat{f}(w\_r),\tag{14}$$

where $D^v$ represents the differential operator of order $v$, $w\_r$ represents the angular frequency, $(iw\_r)^v = |w\_r|^v \exp\left[\frac{v\pi i}{2}\mathrm{sgn}(w\_r)\right]$ is a filter, and $\mathrm{sgn}(\cdot)$ is the sign function.

Amplitude–frequency characteristic curves of different orders are drawn [24], as shown in Figure 2.

**Figure 2.** Fractional differential magnitude–frequency response.

Generally, the region of *w* > 1 is the edge and detailed part of the image. Figure 2 illustrates how the fractional differential operator contributes to the enhancement of the signal in the high-frequency region, and with the increase in the fractional order, the non-linear enhancement ability of the fractional differential operator is stronger; so, the fractional differential can enhance the edge information of the image. In addition, the region of 0 < *w* < 1 is generally the smooth region of the image. The fractional order has a weaker weakening effect on the image than the integer order, so the amplitude of the smooth region can be kept unchanged, which indicates that fractional-order differentiation can protect the information of the smooth region from the influence of the filter while denoising.
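The amplitude response $|w|^v$ behind Figure 2 can be checked numerically: it amplifies frequencies $w > 1$ (more strongly as $v$ grows) while attenuating $0 < w < 1$ less than the integer order does. A quick sketch:

```python
def gain(w, v):
    """Amplitude response |w|^v of the v-th order differential
    operator (from Equation (14))."""
    return abs(w) ** v
```

This is the quantitative form of the claim above: fractional orders enhance edges (high frequencies) while weakening smooth regions less than an integer-order derivative would.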

In the GM-APD lidar range image, the edge and noise manifest as locally discontinuous points, with adjacent pixels corresponding to noise and edge exhibiting significant variations in depth values, representing high-frequency components of the image. The edges possess order and directionality, displaying a strong correlation with neighboring pixels, whereas the noise signal is characterized by randomness and lacks correlation with nearby pixels. Generally, low-frequency regions in the image correspond to smooth areas of the target object. In signal and image processing, leveraging the correlation among adjacent pixels can help mitigate the impact of noise. By constructing a differential operator based on this concept, it becomes possible to effectively handle image noise while preserving edge details. Optimal results can be achieved by adjusting the order of fractional differentiation, thereby enhancing the performance and quality of GM-APD lidar range images.

#### *2.3. FOTV Denoising Model*

The FOTV denoising model is [25] represented as:

$$\min\_{u \in BV} \|D^v u\|\_1 + \frac{\lambda}{2} \|u - g\|\_2^2, \tag{15}$$

where *u* is the range image to be denoised, and *g* is the input noisy range image. Here,

$$\begin{cases} \left\| D^{v} u \right\|\_{1} = \sum\_{i,j} \left| \left( D^{v} u \right)\_{i,j} \right| \\\left| \left( D^{v} u \right)\_{i,j} \right| = \sqrt{\left( \left( D\_{1}^{v} u \right)\_{i,j} \right)^{2} + \left( \left( D\_{2}^{v} u \right)\_{i,j} \right)^{2}} \end{cases} \tag{16}$$

where $\|D^v u\|\_1$ is the fractional-order total variation (FOTV) regularization term and $(D^v u)\_{i,j}$ is the fractional-order differential operator.

According to the G-L definition, the discrete fractional differential operator is defined as follows:

$$(D^v u)\_{i,j} = \left( (D\_1^v u)\_{i,j}, (D\_2^v u)\_{i,j} \right), i = 1, 2, \dots, N, \; j = 1, 2, \dots, N,\tag{17}$$

Here,

$$D\_1^v(x, y) = \sum\_{i=0}^{N-1} (-1)^i \frac{\Gamma(v+1)}{\Gamma(i+1)\Gamma(v-i+1)} f(x - i, y),\tag{18}$$

$$D\_2^v(x, y) = \sum\_{j=0}^{N-1} (-1)^j \cdot \frac{\Gamma(v+1)}{\Gamma(j+1)\Gamma(v-j+1)} \cdot f(x, y-j),\tag{19}$$

where *N* ≥ 3 is an integer and Γ represents the Gamma function. Operator *D* can be realized in the following form:

$$\left(D\_1^v u\right)(\cdot, j) = B\, u(\cdot, j), \quad 1 \le j \le N,\tag{20}$$

Here, the matrix is

$$B = \begin{bmatrix} w\_0^v & 0 & \cdots & 0 \\ w\_1^v & w\_0^v & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ w\_m^v & w\_{m-1}^v & \cdots & w\_0^v \end{bmatrix} \tag{21}$$

where $w\_k^v = (-1)^k \frac{\Gamma(v+1)}{\Gamma(k+1)\Gamma(v-k+1)}$, and $D\_2^v$ is treated in the same way.

When the order *v* is not an integer, *B* is a lower triangular matrix. As can be seen from the above equation, the fractional derivative at point *k* is calculated using all of the points preceding *k*. Hence, the fractional derivative is a global operator.
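Equations (20) and (21) can be sketched directly: build the lower-triangular Toeplitz action of *B* from the coefficients $w_k^v$ and apply it to one image column. For $v = 1$ the result reduces to the backward difference (with the first sample passed through), which is a convenient sanity check:

```python
def gl_weights(v, n):
    # w_k^v = (-1)^k Gamma(v+1) / (Gamma(k+1) Gamma(v-k+1)), via the
    # recurrence w_k = w_{k-1} * (1 - (v + 1) / k)
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (v + 1.0) / k))
    return w

def frac_diff_column(col, v):
    """Apply the lower-triangular Toeplitz matrix B of Equation (21)
    to one column: (B u)_k = sum_{m=0}^{k} w_m^v * u_{k-m}."""
    w = gl_weights(v, len(col))
    return [sum(w[m] * col[k - m] for m in range(k + 1))
            for k in range(len(col))]
```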

#### *2.4. Solution to the FOTV Denoising Model*

The FOTV denoising model is usually solved by optimizing the objective function with an iterative algorithm, among which the split Bregman algorithm is an effective way to solve TV regularization models containing an L1 norm; the non-smooth L1 term makes the optimal solution difficult to obtain with traditional algorithms. In the split Bregman algorithm, the regularization parameter is fixed as a constant, which reduces memory usage and improves the calculation accuracy and convergence speed [26–28].

By introducing the auxiliary variable *z*, the original denoising problem is transformed into:

$$\min\_{u,z} \left( \|z\|\_1 + \frac{\lambda}{2} \|u - g\|\_2^2 \right), \text{ s.t. } D^v u = z,\tag{22}$$

‖*z*‖<sub>1</sub> and (*λ*/2)‖*u* − *g*‖<sub>2</sub><sup>2</sup> are convex functions. As a result, the constrained problem can be recast as an unconstrained optimization problem:

$$\min\_{u,z} \left( \|z\|\_1 + \frac{\lambda}{2} \|u - g\|\_2^2 + \frac{\gamma}{2} \|z - D^v u\|\_2^2 \right),\tag{23}$$

where *γ* is the penalty parameter. Since there are two coupled variables, an auxiliary Bregman variable *b* is introduced:

$$\begin{cases} \left( u^{k+1}, z^{k+1} \right) = \underset{u,z}{\operatorname{argmin}} \left( \left\| z \right\|\_{1} + \frac{\lambda}{2} \left\| u - g \right\|\_{2}^{2} + \frac{\gamma}{2} \left\| z - D^{v} u - b^{k} \right\|\_{2}^{2} \right), \\ b^{k+1} = b^{k} + \gamma \lambda \left( D^{v} u^{k+1} - z^{k+1} \right). \end{cases} \tag{24}$$

Since this sub-problem solves for both *u* and *z* at the same time, the calculation is complicated; it can be decomposed into:

$$\begin{cases} u^{k+1} = \underset{u}{\operatorname{argmin}} \left( \frac{\lambda}{2} \|u - g\|\_{2}^{2} + \frac{\gamma}{2} \left\| z^{k} - D^{v}u - b^{k} \right\|\_{2}^{2} \right), \\ z^{k+1} = \underset{z}{\operatorname{argmin}} \left( \left\| z \right\|\_{1} + \frac{\gamma}{2} \left\| z - D^{v}u^{k+1} - b^{k} \right\|\_{2}^{2} \right), \\ b^{k+1} = b^{k} + \gamma \lambda \left( D^{v}u^{k+1} - z^{k+1} \right). \end{cases} \tag{25}$$

If ‖*u*<sup>*k*+1</sup> − *u<sup>k</sup>*‖/‖*u<sup>k</sup>*‖ ≤ *ε*, the iteration ends, and the range image after denoising is output as *u* = *u*<sup>*k*+1</sup>; otherwise, the iteration continues until convergence.

The pseudo-code of the range-image denoising algorithm based on FOTV is shown in Algorithm 1.

**Algorithm 1** Range-Image Denoising Algorithm Based on G-L Fractional-Order Total Variation

1. Initialize the system: *k* = 0, *u*<sup>0</sup> = *g*, *z*<sup>0</sup> = 0, *b*<sup>0</sup> = 0
2. Set the parameters *γ* and *λ*
3. Compute *D<sup>v</sup>*
4. For *k* = 0, 1, 2, . . . :
   - *u*<sup>*k*+1</sup> ← argmin<sub>*u*</sub> ( (*λ*/2)‖*u* − *g*‖<sub>2</sub><sup>2</sup> + (*γ*/2)‖*z<sup>k</sup>* − *D<sup>v</sup>u* − *b<sup>k</sup>*‖<sub>2</sub><sup>2</sup> )
   - *z*<sup>*k*+1</sup> ← argmin<sub>*z*</sub> ( ‖*z*‖<sub>1</sub> + (*γ*/2)‖*z* − *D<sup>v</sup>u*<sup>*k*+1</sup> − *b<sup>k</sup>*‖<sub>2</sub><sup>2</sup> )
   - *b*<sup>*k*+1</sup> = *b<sup>k</sup>* + *γλ*(*D<sup>v</sup>u*<sup>*k*+1</sup> − *z*<sup>*k*+1</sup>)
   - If ‖*u*<sup>*k*+1</sup> − *u<sup>k</sup>*‖/‖*u<sup>k</sup>*‖ ≤ *ε*: output *u* = *u*<sup>*k*+1</sup> and stop; else set *k* = *k* + 1 and return to step 4
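Algorithm 1 can be sketched in one dimension as follows. This is an illustrative implementation, not the authors' code: the *u*-subproblem is solved exactly via its normal equations, the *z*-subproblem has the closed-form soft-thresholding solution, and the plain Bregman update *b* ← *b* + *D<sup>v</sup>u* − *z* is used in place of the paper's *γλ*-scaled update:

```python
import numpy as np

def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
    """Closed-form solution of the z-subproblem (shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman(g, D, lam=1.0, gamma=1.0, eps=1e-6, max_iter=200):
    """Minimize ||D u||_1 + lam/2 ||u - g||_2^2 via split Bregman (1-D sketch)."""
    n = g.size
    u, z, b = g.copy(), np.zeros(n), np.zeros(n)
    A = lam * np.eye(n) + gamma * D.T @ D  # normal-equation matrix of the u-subproblem
    for _ in range(max_iter):
        u_new = np.linalg.solve(A, lam * g + gamma * D.T @ (z - b))
        z = soft_threshold(D @ u_new + b, 1.0 / gamma)
        b = b + D @ u_new - z  # plain Bregman update (assumption; paper scales by gamma*lambda)
        if np.linalg.norm(u_new - u) <= eps * max(np.linalg.norm(u), 1e-12):
            u = u_new
            break
        u = u_new
    return u
```

With *D* taken as the G-L matrix *B*, this realizes the fractional-order variant; with a first-difference matrix, it reduces to classical 1-D TV denoising.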

*2.5. Fractional-Order Total Variational Range-Image Denoising Algorithm Based on Spatial Kernel Function and Range Kernel Function*

In this paper, a novel fractional-order range-image denoising algorithm is proposed. This algorithm introduces range kernel functions and spatial kernel functions to capture the relationships between pixel values and the spatial distribution of pixels, optimizing the fractional-order operator and enabling end-to-end range-image denoising.

Because the target usually spans multiple pixels on the detector focal plane, the accuracy of denoising can be improved by introducing a fractional-order operator that establishes relationships between these pixels. However, when the target range image contains a large amount of noise, establishing a connection with every other pixel also quickly increases the current pixel's susceptibility to noise. By applying the variational method to Equation (15), the following partial differential equation is derived:

$$-D^{v}\cdot\left(\frac{D^v u}{|D^v u|}\right) + \lambda(u - g) = 0,\tag{26}$$

where *D<sup>v</sup>* is the fractional difference operator and 1/|*D<sup>v</sup>u*| is the diffusion coefficient. As can be seen from the above equation, for noise points with lost information and abnormal range values, the diffusion coefficient is small because |*D<sup>v</sup>u*| is large, and the FOTV denoising algorithm cannot remove such noise. Therefore, in order to suppress the anomalous noise in the range image, this paper introduces a spatial kernel function to capture the spatial relationship between pixels and a range kernel function to capture the pixel-value similarity between pixels, and reconstructs the fractional differential operator [29].

The inclusion of fractional differential operators expands the penalty term to infinite dimensions. Therefore, the spatial kernel function and range kernel function introduced in this paper must correspond with the dimensions of the fractional differential operators *D*<sub>1</sub><sup>*v*</sup> and *D*<sub>2</sub><sup>*v*</sup>.

The spatial kernel function describes the spatial distance between a neighborhood pixel and the current pixel and is usually a function that decays with this distance; in this paper, a Gaussian function is selected as the attenuation function. The weights of the spatial kernel function at (*x*, *y*) are extended to an infinite number of dimensions in the *x* and *y* directions, respectively.

The weight of the spatial kernel function *ws*1(*x*, *y*) at (*x*, *y*) in the *x* direction is defined as follows:

$$w\_{s1}(x, y) = \exp\left(-\frac{|i - x|^2}{2\sigma\_s^2}\right), i = 1, 2, \cdots, N,\tag{27}$$

The weight of the spatial kernel function *ws*2(*x*, *y*) at (*x*, *y*) in the y direction is defined as follows:

$$w\_{s2}(x,y) = \exp\left(-\frac{|j-y|^2}{2\sigma\_s^2}\right), j = 1, 2, \cdots, N,\tag{28}$$

where *σ<sup>s</sup>* is the variance in the spatial kernel function. *N* is the total number of pixels in the *x* or *y* direction of the range image.

The range kernel function describes the degree of correlation between pixel values of other pixels in the image and the current pixel. In this paper, a Gaussian function is selected to represent the similarity relationship between pixel values. The weights of the range kernel function at (*x*, *y*) are extended to an infinite number of dimensions in both the x and y directions, respectively.

The weight of the range kernel function *wr*1(*x*, *y*) at (*x*, *y*) in the *x* direction is defined as follows.

$$w\_{r1}(x, y) = \exp\left(-\frac{\left(g(i, y) - g(x, y)\right)^2}{2\sigma\_r^2}\right), i = 1, 2, \cdots, N,\tag{29}$$

The weight of range kernel function *wr*2(*x*, *y*) at (*x*, *y*) in the *y* direction is defined as follows.

$$w\_{r2}(x, y) = \exp\left(-\frac{\left(g(x, j) - g(x, y)\right)^2}{2\sigma\_r^2}\right), j = 1, 2, \cdots, N,\tag{30}$$

where *σ<sub>r</sub>* is the variance in the range kernel function, and *N* is the total number of pixels in the *x* or *y* direction of the range image.
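A sketch of the kernel weights of Equations (27) and (29) along the *x* direction; function names are illustrative, and NumPy's 0-based indexing is used in place of the paper's 1-based pixel indices:

```python
import numpy as np

def spatial_weights_x(x: int, n: int, sigma_s: float) -> np.ndarray:
    """Equation (27): Gaussian decay with the spatial distance |i - x|."""
    i = np.arange(1, n + 1)  # the paper indexes pixels from 1 to N
    return np.exp(-np.abs(i - x) ** 2 / (2.0 * sigma_s ** 2))

def range_weights_x(g: np.ndarray, x: int, y: int, sigma_r: float) -> np.ndarray:
    """Equation (29): Gaussian decay with the pixel-value difference g(i,y) - g(x,y)."""
    column = g[:, y]  # values g(i, y) along the x direction (0-based here)
    return np.exp(-((column - g[x, y]) ** 2) / (2.0 * sigma_r ** 2))
```

Both weights lie in (0, 1], which is what later allows them to be multiplied with the G-L coefficients without changing their order of magnitude.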

Due to the limited detection field of GM-APD LiDAR systems compared to the target size, the resulting images exhibit high spatial sampling of targets and contain rich spatial distribution information, enabling the depiction of edge and detail features. This paper introduces a range kernel function to accurately capture the correlations between all image pixels for improved analysis of the target's spatial distribution. In addition, as global total variation filtering may excessively smooth the range image and overlook its details, a spatial kernel function is introduced to exploit the spatial relationship between pixels and preserve these details. Moreover, for signal points triggered by noise in the range image, the spatial kernel function restricts the influence of surrounding pixels on such points, preventing excessive noise signals from affecting them.

Since the kernel weights in Equations (27)–(30) range from 0 to 1, they have a consistent order of magnitude with the coefficients in Equations (18) and (19) and can be combined with them by multiplication; the new fractional differential operator is:

$$\begin{cases} D\_{1}^{\prime v}u(x,y) = \dfrac{\sum\_{i=0}^{N-1}(-1)^{i}C\_{i}^{v}\,u\_{x-i,y}\,w\_{s1}(x-i,y)\,w\_{r1}(x-i,y)}{\sum\_{x=1}^{M}\sum\_{i=0}^{N-1}(-1)^{i}C\_{i}^{v}\,u\_{x-i,y}\,w\_{s1}(x-i,y)\,w\_{r1}(x-i,y)}, \\ D\_{2}^{\prime v}u(x,y) = \dfrac{\sum\_{i=0}^{N-1}(-1)^{i}C\_{i}^{v}\,u\_{x,y-i}\,w\_{s2}(x,y-i)\,w\_{r2}(x,y-i)}{\sum\_{y=1}^{M}\sum\_{i=0}^{N-1}(-1)^{i}C\_{i}^{v}\,u\_{x,y-i}\,w\_{s2}(x,y-i)\,w\_{r2}(x,y-i)}, \end{cases}\tag{31}$$

where *M* is the number of rows (columns) of the GM-APD focal plane array.

If *w*′<sub>*i*</sub><sup>*v*</sup> = (−1)<sup>*i*</sup>*C*<sub>*i*</sub><sup>*v*</sup>*w<sub>s</sub>*(*x* − *i*, *y*)*w<sub>r</sub>*(*x* − *i*, *y*), Equation (31) can be written as follows:

$$D\_1^{\prime v} u \approx u \cdot B', \; D\_2^{\prime v} u \approx B'^T \cdot u,\tag{32}$$

The form of matrix *B*′ is as follows:

$$B' = \begin{bmatrix} w'^{\upsilon}\_0 & 0 & \cdots & 0 \\ w'^{\upsilon}\_1 & w'^{\upsilon}\_0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ w'^{\upsilon}\_m & w'^{\upsilon}\_{m-1} & \cdots & w'^{\upsilon}\_0 \end{bmatrix} . \tag{33}$$

Multiplying the spatial kernel function with the range kernel function is a compromise that combines the spatial proximity and pixel value similarity of an image. It simultaneously considers spatial information and pixel value similarity, achieving the goal of edge-preserving denoising. The optimization method proposed in this paper takes into account the spatial distribution relationship, which allows for better noise filtering in the image.
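The combined weights *w*′<sub>*i*</sub><sup>*v*</sup> that populate *B*′ (Equation (33)) can be sketched as follows, assuming the kernel weights *w<sub>s</sub>*, *w<sub>r</sub>* at the offsets *x* − *i* have already been computed; all names are illustrative:

```python
import numpy as np

def weighted_gl_coefficients(v: float, ws: np.ndarray, wr: np.ndarray) -> np.ndarray:
    """w'_i = (-1)^i C_i^v * ws_i * wr_i: G-L coefficients modulated by both kernels."""
    m = ws.size
    w = np.empty(m)
    w[0] = 1.0
    for k in range(1, m):
        w[k] = -w[k - 1] * (v - k + 1) / k  # (-1)^k C(v, k) by recurrence
    return w * ws * wr
```

Unlike the constant matrix *B* of Equation (21), *w<sub>s</sub>* and *w<sub>r</sub>* depend on the current pixel, so *B*′ varies across the image and adapts the fractional operator to local structure.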

In the first step, the median filtering algorithm is used for preprocessing; in the second step, the fractional difference operator combined with the spatial kernel function and range kernel function is constructed; in the third step, the fractional-order total variation model based on the spatial kernel function and range kernel function is built; and in the fourth step, the split Bregman algorithm is used to solve the model. The pseudo-code of the range-image denoising algorithm is shown in Algorithm 2.

#### **Algorithm 2** FOTV Based on Spatial Kernel Function and Range Kernel Function

1. Median filtering: *g* = *medfilt*(*g*, [3, 3])
2. Initialization: *k* = 0, *u*<sup>0</sup> = *g*, *z*<sup>0</sup> = 0, *b*<sup>0</sup> = 0
3. Set the parameters *γ*, *λ*, *σ<sub>s</sub>*, *σ<sub>r</sub>*
4. Compute *w<sub>s</sub>*, *w<sub>r</sub>*, *B*′
5. For *k* = 0, 1, 2, . . . :
   - *u*<sup>*k*+1</sup> ← argmin<sub>*u*</sub> ( (*λ*/2)‖*u* − *g*‖<sub>2</sub><sup>2</sup> + (*γ*/2)‖*z<sup>k</sup>* − ∇<sub>*B*</sub>*u* − *b<sup>k</sup>*‖<sub>2</sub><sup>2</sup> )
   - *z*<sup>*k*+1</sup> ← argmin<sub>*z*</sub> ( ‖*z*‖<sub>1</sub> + (*γ*/2)‖*z* − ∇<sub>*B*</sub>*u*<sup>*k*+1</sup> − *b<sup>k</sup>*‖<sub>2</sub><sup>2</sup> )
   - *b*<sup>*k*+1</sup> = *b<sup>k</sup>* + *γλ*(∇<sub>*B*</sub>*u*<sup>*k*+1</sup> − *z*<sup>*k*+1</sup>)
   - If ‖*u*<sup>*k*+1</sup> − *u<sup>k</sup>*‖/‖*u<sup>k</sup>*‖ ≤ *ε*: output *u* = *u*<sup>*k*+1</sup> and stop; else set *k* = *k* + 1 and return to step 5

#### **3. Evaluation Index and Simulation Verification**

#### *3.1. Evaluation Index*

The target recovery degree *K* [30] was adopted in this study as an objective evaluation indicator, and the PSNR was used to compare the denoising performance of the cited algorithms with that of the proposed algorithm. *K* is defined as follows:

$$f(x) = \begin{cases} 1, & |d - d\_s| < d\_b, \\ 0, & |d - d\_s| \ge d\_b, \end{cases} \tag{34}$$

$$K = \frac{m}{n},\tag{35}$$

where *d* is the target reconstruction range value, *ds* is the target standard range value, *db* is the target allowable error range value, *n* is the total pixel number of the target, and *m* is the pixel number of the target acceptable error range value. The *K* value represents the degree of target restoration.

The peak signal-to-noise ratio is as follows:

$$PSNR = 10\log\_{10}\left(\frac{255^2 \times M \times N}{\sum\_{i,j} \left( (u)\_{i,j} - (f)\_{i,j} \right)^2}\right) \tag{36}$$

where *f* is the observation range image, *u* is the range image after noise removal, and *M* and *N* are the number of rows and columns in the image, respectively.
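The two evaluation indices can be sketched directly from Equations (34)–(36); the array and function names are illustrative:

```python
import numpy as np

def recovery_degree(d: np.ndarray, d_s: np.ndarray, d_b: float) -> float:
    """Equations (34)-(35): K = m/n, the fraction of target pixels within the allowed error d_b."""
    m = int((np.abs(d - d_s) < d_b).sum())
    return m / d.size

def psnr(u: np.ndarray, f: np.ndarray) -> float:
    """Equation (36): peak signal-to-noise ratio for an M x N range image (8-bit scale)."""
    squared_error = float(((u - f) ** 2).sum())
    return 10.0 * np.log10(255.0 ** 2 * u.size / squared_error)
```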

#### *3.2. Simulation Analysis*

The Monte Carlo method was adopted to simulate and verify the GM-APD lidar range-image denoising performance of the proposed algorithm. The range image of the simulated target is shown in Figure 3. The laser single-pulse energy was set as 1.25 × 10<sup>−9</sup> J, the laser wavelength as 1064 nm, the laser pulse width as 5 ns, the detector array as 64 × 64, the detector time resolution as 1 ns, the round-trip atmospheric attenuation coefficient as 0.8 × 0.8, the target diffuse reflection coefficient as 0.3, the receiving transmittance as 90%, and the transmitting transmittance as 80%. The range gate was set as 200 m, and the target was 60 m inside the range gate. The TV, FOTV, and BF algorithms, as well as the method proposed in this research, were utilized to process the simulated range image under various SBRs and frame numbers. The simulation experiment was performed 1000 times in each case, and the mean value was computed to assess the outcomes using the target reduction degree and peak signal-to-noise ratio [31].

#### 3.2.1. Fractional-Order Selection

To investigate how the fractional order affects the denoising effectiveness on low-SBR simulation data, the fractional order was set to 0.1, 0.3, 0.5, 0.7, 1, 1.2, 1.5, 1.8, and 2, with 20 statistical frames and an SBR equal to 0.3. The Monte Carlo experiments were repeated 1000 times. The range-image quality was assessed using the average values of the target reduction degree and PSNR. The evaluation indices K and PSNR for the different orders are shown in Table 1.

**Table 1.** Evaluation indices K and PSNR of different orders.


The data in Table 1 indicate that when the fractional order is 0.7 with a low SBR, the values of K and PSNR are the largest and the denoising effect is the best.

#### 3.2.2. Simulation Analysis of Range Images with Different SBRs under 20 Frames

In order to verify the denoising performance of the algorithm proposed in this paper with the same frame number as that used in Section 3.2.1 but with different SBRs, the SBRs were set to 0.3, 0.4, 0.5, 0.6, and 0.7. The K and PSNR were used to assess the denoising performance. The single Monte Carlo simulation results of 20 frames with different SBRs are shown in Figure 4.

As can be seen in Figure 4, when the SBR is equal to 0.3, there is a large amount of noise at the target position in the range image processed by the TV, FOTV, and BF algorithms, and the integrity and contour information of the target is poor. The algorithm proposed in this paper filters out most of the noise at the target position and can roughly identify the contour information of the target, although the interior of the target is incomplete. When the SBR is equal to 0.4, the range image processed by the TV, FOTV, and BF algorithms has a roughly recovered, complete target, but there is still noise, and the smoothness of the range image is poor. The proposed algorithm not only recovers the complete target precisely but also differs little from the standard image and has a good denoising effect. When the SBR = 0.8, the range-image target processed by the TV, FOTV, and BF algorithms is complete and smooth, but there is still a small amount of noise; the proposed algorithm still ensures a good denoising effect.

**Figure 4.** Denoising results of different signal-to-background ratio range images in 20 frames.

In order to verify the stability of the denoising performance of the algorithm proposed with the same frame number and different SBRs, Monte Carlo experiments were conducted 1000 times on the TV, FOTV, and BF algorithms alongside the algorithm proposed in this paper. The K, PSNR, and SSIM were used to evaluate the range image processed by each algorithm. The average values of each index are shown in Table 2.


**Table 2.** Denoising results of different signal-to-background ratios at frame 20.

According to the above data, K and PSNR curves under different signal-to-background ratios under 20 frames are drawn, as shown in Figure 5.

**Figure 5.** K of different SBRs when the number of frames is 20.

As is discernible from Figures 5 and 6, with the increase in the SBR, the K and PSNR of each algorithm improve to varying degrees, and the K and PSNR of the proposed algorithm are superior to those of the comparison algorithms at all SBRs. When the SBR is 0.4, the K value of the TV, FOTV, and BF algorithms is less than 85% and the PSNR is less than 16, while the target K value of the algorithm in this paper reaches 95.49% and the PSNR value reaches 20.6488, which demonstrates the good denoising performance of the proposed algorithm. When the SBR is 0.5, the K value of the proposed algorithm reaches 0.9865, which is at least 5.11% higher than that of the other algorithms, and the PSNR value reaches 26.0541, which is at least 24.6% higher than that of the other algorithms.

The fractional-order derivative is a global operator with long memory, which distinguishes it from integer-order derivatives. When the depth image of a target contains a large amount of noise, establishing connections between pixels can increase the influence of noise on the current pixel; this explains why the TV results are better than the FOTV results here.

**Figure 6.** PSNR of different SBRs when the number of frames is 20.

3.2.3. Simulation Analysis of Range Image with Different Frame Numbers When SBR Is 0.5

In order to verify the influence of different statistical frames on the denoising performance of the algorithm proposed, the simulation data from when the SBR = 0.5 were selected to discuss the processing results of the TV, FOTV, and BF algorithms and the algorithm proposed in this paper when the frame numbers were 20, 25, 30, 35, 40, 45, and 50. The single Monte Carlo simulation results when the SBR is equal to 0.5 with different frame numbers are shown in Table 3.


**Table 3.** Denoising results of different frames when SBR = 0.5.

From Figures 7 and 8, it can be seen that the K and PSNRs of each algorithm improve to varying degrees with the increase in the statistical frame number. When the statistical frame number was 25, the K of the TV, FOTV, and BF algorithms did not exceed 70%; in contrast, the K of the proposed algorithm reached 0.8198, indicating better denoising of the range image. When the number of image frames is 35, the K and PSNR of the proposed algorithm are improved by at least 13.58% and 24.55%, respectively, compared with the other algorithms.

**Figure 7.** K of different frames when SBR = 0.5.

**Figure 8.** PSNR of different frames when SBR = 0.5.

#### **4. Experimental Verification**

*4.1. Experimental System Construction*

A 64 × 64 array GM-APD was selected as the detector when building the laser radar system with separate transmitting and receiving paths, as shown in Figure 9. The transmit–receive field of view was 0.9° × 0.9°, and a 1064 nm fiber laser was selected as the laser source; its pulse energy was set to 110 μJ, with a 10 ns pulse width and a 15 kHz repetition frequency.

**Figure 9.** System diagram.

#### *4.2. Experimental Data Processing and Analysis*

Imaging experiments were conducted on residential buildings with a range of 446.1 m to 463.2 m under strong sunlight to verify the denoising performance of the algorithm proposed. The scenario of the target area is shown in Figure 10. In order to obtain the ideal range image, the same target region was detected and imaged by the peak-picking method at night. A total of 5000 frames were used for the multi-frame statistics. The image obtained was taken as the ideal range image of the target, as shown in Figure 11.

**Figure 10.** Target scene.

**Figure 11.** Ideal target range image.

In the daytime imaging experiment, the SBR was equal to 0.8. In the case of 100 frames, the TV denoising, FOTV denoising, and BF denoising algorithms, as well as the algorithm proposed in this paper, were used to denoise the range image obtained by the maximum likelihood estimation method. Figures 12–15 show the result after denoising.

**Figure 12.** TV.

**Figure 13.** FOTV.

**Figure 14.** BF.

**Figure 15.** Proposed algorithm.

The denoising algorithm designed in this paper denoises the target range image better than the comparison methods and produces a smoother target region. Various indicators were used to evaluate the reconstructed range-image quality; the results are shown in Table 4.



The denoising method proposed in this paper improves the target restoration degree by at least 4.29%, and the PSNR is 4.6969, both of which are better than the results of the comparison algorithms. The good denoising performance of the proposed method on GM-APD range images has thus been verified.

To further verify the advancement of our algorithm, it was compared with the algorithm of [21] in the case of 100 frames with an SBR of 0.8. Figure 16 shows the denoised results, and the comparative data between [21] and the proposed algorithm are presented in Table 5.

**Figure 16.** Algorithm [21].


**Table 5.** Range image reconstruction results.

Both the K and PSNR of the proposed algorithm are superior to those of the comparison algorithm, which verifies the advancement of the proposed algorithm.

#### **5. Discussion**

In order to achieve the denoising of GM-APD lidar range images with a low SBR, a fractional-order total variational GM-APD lidar range-image denoising method based on a spatial kernel function and a range kernel function was proposed. The simulation results show that when the SBR is equal to 0.4 and the statistical frame number is 20, the K and PSNR of the proposed algorithm are improved by at least 11.98% and 24.83%, respectively, compared with BF, FOTV, and TV denoising. The experimental results show that when the SBR is 0.8 and the statistical frame number is 100, the K of the proposed algorithm increases by at least 0.2% compared with that obtained via BF, FOTV, and TV denoising. The denoising method proposed in this paper therefore has a good image denoising effect under low-SBR conditions.

**Author Contributions:** Conceptualization, X.W. (Xuyang Wei), D.X., Z.W. and K.Y.; methodology, X.W. (Xuyang Wei), D.X., and X.W. (Xinjian Wang); software, X.W. (Xuyang Wei) and Z.W.; formal analysis, X.W. (Xuyang Wei); data curation, K.Y., X.W. (Xuyang Wei), Z.W. and T.H.; writing—original draft preparation, X.W. (Xuyang Wei); writing—review and editing, K.Y., X.W. (Xuyang Wei), X.L., and C.W.; funding acquisition, X.L. and C.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Key R&D Program of China, grant number 2022YFC3803700.

**Data Availability Statement:** No new data were created.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


mdpi.com ISBN 978-3-0365-9299-2