**New Developments in Functional and Fractional Differential Equations and in Lie Symmetry**

Printed Edition of the Special Issue Published in *Symmetry*

Edited by Ioannis P. Stavroulakis and Hossein Jafari

www.mdpi.com/journal/symmetry

## **New Developments in Functional and Fractional Differential Equations and in Lie Symmetry**

Editors

**Ioannis P. Stavroulakis Hossein Jafari**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editors*

Ioannis P. Stavroulakis
University of Ioannina
Greece

Hossein Jafari
University of South Africa
South Africa

*Editorial Office*
MDPI
St. Alban-Anlage 66
4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Symmetry* (ISSN 2073-8994) (available at: https://www.mdpi.com/journal/symmetry/special_issues/New_developments_Lie_Symmetry).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-1158-0 (Hbk) ISBN 978-3-0365-1159-7 (PDF)**

© 2021 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **About the Editors**

**Ioannis P. Stavroulakis** holds a master's degree in mathematics (City University of New York), a Ph.D. in mathematics (University of Ioannina) and a doctor honoris causa (University of Gjirokastra). His research interests lie in the qualitative theory of differential equations: retarded and advanced differential equations, difference equations, functional equations, dynamic equations, and partial differential equations. He is the author of 3 books and more than 140 research papers, most of them of *high quality*, published in *superior journals with excellent reviews* from the editors and/or referees, and cited very often (more than *3000 citations*). He has held positions at 10 universities in several countries, has served on the *scientific/organizing* committees of, and/or as a *keynote/plenary/invited speaker* at, many international conferences and universities, delivering *more than 130 lectures at 130 universities in 30 countries on 5 continents*, and has collaborated with *more than 70* researchers around the world. He is also *Editor-in-Chief, Managing Editor, Guest Editor, or Editor* of 14 mathematical journals. He has received several awards and distinctions, among them the *Ampere Foundation Fellowship*, the *Canon Foundation in Europe Research Fellowship*, *The Flinders University of South Australia Visiting Research Fellowship*, and a *Doctor Honoris Causa* (he was the first to be awarded an honorary doctorate by the University of Gjirokastra). Two Special Issues have been dedicated to him: *Global and Stochastic Analysis* 5, No. 1 (2018) (www.mukpublications.com) and *Nonlinear Dynamics and Systems Theory* 19 (1-SI) (2019) (www.e-ndst.kiev.ua). He received a certificate of the OBADA PRIZE 2019 (in the top 10 out of 370).

**Hossein Jafari** is an accomplished applied mathematician. His research area is fractional-order differential equations and their applications. According to the American Mathematical Society database, Prof. Jafari's publications cover a broad range of topics in this area.
In 2020, Prof. Jafari introduced a new general integral transform. As a result of his extensive research output and citations, he has been included in the list of the top 2% of scientists in the world compiled by Stanford University (USA). Since 2017, he has also been listed by the Web of Science as an Essential Science Indicators (ESI) top researcher, a list comprising the top 1% of scientists in the world.

## **Preface to "New Developments in Functional and Fractional Differential Equations and in Lie Symmetry"**

Ordinary differential equations (ODEs) appear frequently in mathematical models that attempt to describe real-life situations in which the rate of change of the system depends only on its present state. In many cases, however, the past states of the system have to be taken into consideration. Delay differential equations, differential equations with retarded argument, or hystero-differential equations provide more realistic mathematical models for systems in which the rate of change depends not only on the present state but also on the past history, such as population models, models for epidemics, economic models, nuclear reactors, collision problems in electrodynamics, and many others. In recent years, there has also been a great deal of interest in the study of their discrete analogues, difference equations.

Many physical phenomena in areas such as electrochemistry, physics, biology, mechanics, signal processing, and viscoelastic materials can be modelled using fractional derivatives. Fractional calculus is a generalization of differentiation and integration to arbitrary non-integer order.
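As a concrete instance of this generalization, the order-*α* derivative of *f*(*x*) = *x* has the closed form *x*^(1−*α*)/Γ(2 − *α*). The following minimal numerical sketch, using the Grünwald–Letnikov finite-difference definition (the helper name is ours, not from the papers in this volume), reproduces that value:

```python
import math

def gl_fractional_derivative(f, x, alpha, h=1e-3):
    """Grunwald-Letnikov approximation (lower terminal 0) of the order-alpha
    derivative: D^alpha f(x) ~ h**(-alpha) * sum_j w_j * f(x - j*h)."""
    n = int(round(x / h))
    total, w = 0.0, 1.0              # w_0 = 1
    for j in range(n + 1):
        total += w * f(x - j * h)
        w *= (j - alpha) / (j + 1)   # recurrence w_{j+1} = w_j * (j - alpha)/(j + 1)
    return total / h**alpha

alpha, x = 0.5, 1.0
approx = gl_fractional_derivative(lambda s: s, x, alpha)
exact = x**(1 - alpha) / math.gamma(2 - alpha)  # known formula for D^alpha of x
```

For *α* = 1 the sum collapses to the ordinary backward difference, which is one way of seeing fractional calculus as an interpolation between integer orders.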

The method of group analysis of differential equations was introduced by Sophus Lie more than one hundred years ago. A symmetry transformation maps an equation into itself. The set of such transformations forms a Lie group and gives rise to a Lie algebra, which enables easier manipulation of differential equations. The Lie symmetry method is a powerful tool for solving or reducing ODEs and for finding exact solutions of partial differential equations (PDEs) by reducing the number of independent variables in the equations, and thus for solving engineering and applied science problems that are modelled in terms of nonlinear and complicated ODEs and PDEs.

In this Special Issue, recent developments on the above-mentioned areas are presented by experts on the subjects. The guest editors believe that the papers published in this Special Issue will be useful to a wide range of researchers and will motivate further research in the topics presented as well as in the related fields.

> **Ioannis P. Stavroulakis, Hossein Jafari** *Editors*

## *Article* **New Exact Solutions and Conservation Laws to the Fractional-Order Fokker–Planck Equations**

**Nematollah Kadkhoda 1, Elham Lashkarian 2, Mustafa Inc 3, Mehmet Ali Akinlar 4 and Yu-Ming Chu 5,6,\***


Received: 6 July 2020; Accepted: 29 July 2020; Published: 3 August 2020

**Abstract:** The main purpose of this paper is to present a new approach to obtaining analytical solutions of parameter-containing fractional-order differential equations. Using the nonlinear self-adjointness notion, approximate solutions, conservation laws and symmetries of these equations are also obtained via a new formulation of an improved form of Noether's theorem. It is shown that invariant solutions, reduced equations, perturbed or unperturbed symmetries and conservation laws can be obtained by applying the nonlinear self-adjointness notion. The method is applied to the time fractional-order Fokker–Planck equation, and new results are obtained in a highly efficient and elegant manner.

**Keywords:** Lie point symmetry analysis; approximate conservation laws; approximate nonlinear self-adjointness; perturbed fractional differential equations

**MSC:** 22E10; 35L65; 47A05; 26A33

#### **1. Introduction**

Fractional partial differential equations generalize classical calculus by employing integrals and derivatives of arbitrary order. In the last decade, these equations have been applied to various scientific and engineering phenomena, including fluid mechanics, gas dynamics, nonlinear acoustics, biology, control theory, earthquake modeling, and traffic flow models. There are several different types of fractional-order derivative and integral operators, including the Riesz, Riemann–Liouville, Grünwald–Letnikov and Caputo fractional derivatives [1].

We are concerned with approximations using a small parameter of the Caputo and Riemann–Liouville type fractional derivative operators. Using this approximation, a fractional-order differential equation may be converted into an integer-order equation [2–7].

Using Lie symmetry techniques [8–10], we can obtain analytical solutions of many perturbed differential equations. Noether's theorem, introduced by Emmy Noether in 1918 and describing the general relation between symmetry groups and conservation laws, is a useful tool in the solution of perturbed differential equations; see, e.g., [11–13]. Finding approximate symmetries of perturbed partial differential equations was first introduced by Fushchich, Shtelen and Baikov [14,15]. Because of the importance of perturbed systems in describing natural phenomena, they generalized Noether's theorem to an approximate version. This generalization helps in finding approximate conservation laws of a given system, including the related topics [16,17]. For a system, approximate conservation laws are determined by an approximate formal Lagrangian and nonlinear self-adjointness for approximate equations [18]. We present conservation laws of fractional partial differential equations [19,20] with an effective method based on nonlinear self-adjointness.

The Fokker–Planck equations play an important role in fluid mechanics, control theory, astrophysics and quantum mechanics [21,22]. We are concerned with the perturbed fractional-order Fokker–Planck equation

$$
D_t^{\alpha} u - \frac{1}{2} a^2 u_{xx} - b u - b x u_x + \varepsilon u_t = 0. \tag{1}
$$

Here, $a$ and $b$ are constants and $D_t^{\alpha}$ is the fractional derivative of order $\alpha$.

#### **2. Approximation of Fractional-Order Operators**

**Definition 1.** *The left- and right-sided Riemann–Liouville fractional partial derivatives are defined as*

$$\left({}_{a}D_{x^{1}}^{\alpha+k}u\right)(\mathbf{x}) = \frac{1}{\Gamma(1-\alpha)}\left(\frac{\partial}{\partial x^{1}}\right)^{k+1}\int_{a}^{x^{1}}\frac{u\left(\xi,x^{2},\dots,x^{n}\right)}{(x^{1}-\xi)^{\alpha}}\,d\xi, \tag{2}$$

$$\left({}_{x^{1}}D_{b}^{\alpha+k}u\right)(\mathbf{x}) = \frac{(-1)^{k+1}}{\Gamma(1-\alpha)}\left(\frac{\partial}{\partial x^{1}}\right)^{k+1}\int_{x^{1}}^{b}\frac{u\left(\xi,x^{2},\dots,x^{n}\right)}{(\xi-x^{1})^{\alpha}}\,d\xi, \tag{3}$$

*respectively, in which* $\Gamma(\cdot)$ *denotes the Gamma function and* $\alpha \in (0,1)$, $k = 0, 1, \dots, m$, $m \in \mathbb{N}$*.*

**Definition 2.** *The left- and right-sided Caputo-type fractional partial derivatives are defined as*

$$\left({}^{C}_{a}D_{x^{1}}^{\alpha+k}u\right)(\mathbf{x}) = \frac{1}{\Gamma(1-\alpha)}\int_{a}^{x^{1}}\frac{1}{(x^{1}-\xi)^{\alpha}}\frac{\partial^{k+1}u(\xi,x^{2},\dots,x^{n})}{\partial\xi^{k+1}}\,d\xi, \tag{4}$$

$$\left({}^{C}_{x^{1}}D_{b}^{\alpha+k}u\right)(\mathbf{x}) = \frac{(-1)^{k+1}}{\Gamma(1-\alpha)}\int_{x^{1}}^{b}\frac{1}{(\xi-x^{1})^{\alpha}}\frac{\partial^{k+1}u(\xi,x^{2},\dots,x^{n})}{\partial\xi^{k+1}}\,d\xi, \tag{5}$$

*respectively, in which* $\Gamma(\cdot)$ *denotes the Gamma function and* $\alpha \in (0,1)$, $k = 0, 1, \dots, m$, $m \in \mathbb{N}$*.*
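Definition 1 can be checked numerically in a simple one-dimensional case: for $u(x) = x$ with lower terminal $a = 0$, $k = 0$ and $\alpha \in (0,1)$, the Riemann–Liouville derivative has the closed form $x^{1-\alpha}/\Gamma(2-\alpha)$. A minimal sketch (the helper name `rl_derivative` is ours; the singular kernel is handled by `scipy.integrate.quad`'s algebraic weight):

```python
import math
from scipy.integrate import quad

def rl_derivative(f, x, alpha, a=0.0, h=1e-4):
    """Left Riemann-Liouville derivative of order alpha in (0,1) (k = 0 in Eq. (2)):
    (1/Gamma(1-alpha)) * d/dx of the integral of f(xi)*(x - xi)**(-alpha) over [a, x],
    with the outer d/dx taken by a central difference."""
    def frac_integral(t):
        # weight='alg', wvar=(0, -alpha) integrates f(xi)*(t - xi)**(-alpha) accurately
        val, _ = quad(f, a, t, weight='alg', wvar=(0.0, -alpha))
        return val
    return (frac_integral(x + h) - frac_integral(x - h)) / (2 * h) / math.gamma(1 - alpha)

alpha, x = 0.5, 1.3
numeric = rl_derivative(lambda s: s, x, alpha)
exact = x**(1 - alpha) / math.gamma(2 - alpha)   # closed form for f(s) = s
```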

For natural numbers $k$, $c$, $d$, let $u(\mathbf{x}) := u$ be a function of $\mathbf{x} = (x^1, x^2, \dots, x^n) \in \mathbb{R}^n$. We consider a fractional differential equation of the form

$$P\left(\mathbf{x}, u, u_{(1)},\dots,u_{(k)},\ {}_{a}D_{x^1}^{\alpha_0}u,\ {}_{a}D_{x^1}^{\alpha_1}u,\dots,{}_{a}D_{x^1}^{\alpha_d}u,\ {}_{x^1}D_{b}^{\beta_0}u,\ {}_{x^1}D_{b}^{\beta_1}u,\dots,{}_{x^1}D_{b}^{\beta_c}u\right) = 0, \tag{6}$$

$$0 < \alpha_0 < \alpha_1 < \dots < \alpha_d, \qquad 0 < \beta_0 < \beta_1 < \dots < \beta_c.$$

The partial derivatives of $u$ are denoted as

$$u_{(s)} \equiv \{u_{i_1\dots i_s}\} = \left\{\frac{\partial^s u(\mathbf{x})}{\partial x^{i_1}\cdots\partial x^{i_s}}\right\}, \qquad (i_1,\dots,i_s = 1,\dots,n,\ s = 1,\dots,k).$$

If the orders of the fractional differential Equation (6) are all nearly integers, then it is possible to approximate Equation (6) as

$$P\left(\mathbf{x}, u, u_{(1)},\dots,u_{(r)},\ {}_{a}D_{x^1}^{\alpha}u,\ {}_{a}D_{x^1}^{\alpha+1}u,\dots,{}_{a}D_{x^1}^{\alpha+d}u,\ {}_{x^1}D_{b}^{\alpha}u,\ {}_{x^1}D_{b}^{\alpha+1}u,\dots,{}_{x^1}D_{b}^{\alpha+d}u\right) = 0, \tag{7}$$

in which $\alpha \in (0, 1)$. Assuming $\alpha = \varepsilon$ or $\alpha = 1 - \varepsilon$ in Equation (7), we can expand the right- and left-sided Riemann–Liouville fractional partial derivatives in a Taylor series in the arbitrarily small parameter $0 < \varepsilon < 1$.
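This reduction can be checked symbolically for power functions, using the known closed form of the Riemann–Liouville derivative of $x^{\mu}$ (a sketch with sympy; symbol names are ours). At $\varepsilon = 0$, the order-$(1-\varepsilon)$ derivative collapses to the classical first derivative; the $O(\varepsilon)$ term carries the digamma and logarithm corrections of the kind appearing in the expansion below.

```python
import sympy as sp

x, mu, eps = sp.symbols('x mu epsilon', positive=True)

# Closed form of the RL derivative of x**mu (lower terminal 0):
# D^alpha x^mu = Gamma(mu+1)/Gamma(mu+1-alpha) * x**(mu-alpha); set alpha = 1 - eps.
alpha = 1 - eps
D = sp.gamma(mu + 1) / sp.gamma(mu + 1 - alpha) * x**(mu - alpha)

leading = D.subs(eps, 0)                 # the eps -> 0 limit of the expansion
first_derivative = sp.diff(x**mu, x)     # classical derivative mu*x**(mu-1)
diff = sp.simplify(sp.gammasimp(leading - first_derivative))
```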

Supposing the existence of each derivative ${}_{a}D_{x^1}^{k+\varepsilon}u$, ${}_{x^1}D_{b}^{k+\varepsilon}u$ ($k = 0, 1, \dots$) or ${}_{a}D_{x^1}^{k-\varepsilon}u$, ${}_{x^1}D_{b}^{k-\varepsilon}u$ ($k = 1, 2, \dots$) at an arbitrary point $x^1 \in (a, b)$, we have

$$\begin{split} {}_{a}D_{x^1}^{k\pm\varepsilon}u &= \sum_{s=0}^{\infty}\binom{k\pm\varepsilon}{s}\frac{(x^1-a)^{s-k\mp\varepsilon}}{\Gamma(1-k+s\mp\varepsilon)}\frac{\partial^s u}{\partial (x^1)^s} \\ &= \frac{\partial^k u}{\partial (x^1)^k} \pm \varepsilon\Big([\psi(k+1)-\ln(x^1-a)]\frac{\partial^k u}{\partial (x^1)^k} \\ &\quad - \sum_{s=0,\,s\neq k}^{\infty}\frac{(-1)^{s-k}}{(s-k)}\frac{k!}{s!}(x^1-a)^{s-k}\frac{\partial^s u}{\partial (x^1)^s}\Big) + o(\varepsilon), \end{split} \tag{8}$$

$$\begin{split} {}_{x^{1}}D_{b}^{k\pm\varepsilon}u &= \frac{\partial^{k}u}{\partial(x^{1})^{k}} \pm \varepsilon\Big([\psi(k+1)-\ln(b-x^{1})]\frac{\partial^{k}u}{\partial(x^{1})^{k}} \\ &\quad - \sum_{s=0,\,s\neq k}^{\infty}\frac{(-1)^{s-k}}{(s-k)}\frac{k!}{s!}(b-x^{1})^{s-k}\frac{\partial^{s}u}{\partial(x^{1})^{s}}\Big) + o(\varepsilon). \end{split} \tag{9}$$

Here, $\psi(z) = \frac{\Gamma'(z)}{\Gamma(z)}$ is the digamma function and $\binom{k\pm\varepsilon}{s} = \frac{\Gamma(1+k\pm\varepsilon)}{\Gamma(1+k-s\pm\varepsilon)\,s!}$ is a binomial coefficient.

For the Caputo fractional derivative,

$${}^{C}_{a}D_{x^{1}}^{k\pm\varepsilon}u = {}_{a}D_{x^{1}}^{k\pm\varepsilon}u \mp \varepsilon\sum_{s=0}^{k-1}(-1)^{s-k}(k-s-1)!\,(x^{1}-a)^{s-k}\frac{\partial^{s}u}{\partial(x^{1})^{s}}\Big|_{x^{1}=a} + p(\mathbf{x},a),\tag{10}$$

$${}^{C}_{x^1}D_{b}^{k\pm\varepsilon}u = {}_{x^1}D_{b}^{k\pm\varepsilon}u \mp \varepsilon\sum_{s=0}^{k-1}(-1)^{s-k}(k-s-1)!\,(b-x^{1})^{s-k}\frac{\partial^{s}u}{\partial(x^{1})^{s}}\Big|_{x^{1}=b} + q(\mathbf{x},b). \tag{11}$$

In which

$$p(\mathbf{x},a) = \begin{cases} -[1+\varepsilon(\psi(1)-\ln(x^{1}-a))]\dfrac{\partial^{k}u}{\partial(x^{1})^{k}}\Big|_{x^{1}=a}, & \text{for } {}^{C}_{a}D_{x^{1}}^{k+\varepsilon}u,\\[6pt] 0, & \text{for } {}^{C}_{a}D_{x^{1}}^{k-\varepsilon}u, \end{cases}$$

$$q(\mathbf{x},b) = \begin{cases} -[1+\varepsilon(\psi(1)-\ln(b-x^{1}))]\dfrac{\partial^{k}u}{\partial(x^{1})^{k}}\Big|_{x^{1}=b}, & \text{for } {}^{C}_{x^{1}}D_{b}^{k+\varepsilon}u,\\[6pt] 0, & \text{for } {}^{C}_{x^{1}}D_{b}^{k-\varepsilon}u. \end{cases}$$

**Proposition 1.** *Let F be a continuously differentiable function with respect to* ${}_{a}D_{x^1}^{\alpha+k}u$ *and* ${}_{x^1}D_{b}^{\alpha+k}u$ ($k = 0, 1, \dots, d$)*. Then, for* $\alpha = \varepsilon$ *or* $\alpha = 1 - \varepsilon$*, we can approximate Equation (7) as follows:*

$$P_{(0)}(\mathbf{x}, u, u_{(1)}, \dots) + \varepsilon P_{(1)}(\mathbf{x}, u, u_{(1)}, \dots, D_{x^1}^{c+1}u, D_{x^1}^{c+2}u, \dots) \approx 0,\tag{12}$$

*in which c* = *max*{*d*,*r*} *for α* = 1 − *ε and c* = *max*{*d* − 1,*r*} *for α* = *ε.*

#### **3. Lie Group Analysis**

We consider a differential operator of first order defined as

$$\begin{split} X &\approx X_{(0)} + \varepsilon X_{(1)} \\ &\equiv \left(\zeta^{i}_{(0)}(\mathbf{x},u) + \varepsilon \zeta^{i}_{(1)}(\mathbf{x},u)\right)\frac{\partial}{\partial x^{i}} + \left(\theta_{(0)}(\mathbf{x},u) + \varepsilon\theta_{(1)}(\mathbf{x},u)\right)\frac{\partial}{\partial u}, \end{split} \tag{13}$$

in which

$$\begin{aligned} \zeta^{i}_{(0)}(\mathbf{x},u) &= \frac{\partial g^{i}_{(0)}(\mathbf{x},u,a)}{\partial a}\Big|_{a=0}, &\qquad \zeta^{i}_{(1)}(\mathbf{x},u) &= \frac{\partial g^{i}_{(1)}(\mathbf{x},u,a)}{\partial a}\Big|_{a=0}, \\ \theta_{(0)}(\mathbf{x},u) &= \frac{\partial h_{(0)}(\mathbf{x},u,a)}{\partial a}\Big|_{a=0}, &\qquad \theta_{(1)}(\mathbf{x},u) &= \frac{\partial h_{(1)}(\mathbf{x},u,a)}{\partial a}\Big|_{a=0}. \end{aligned}$$

Calculating the solutions of

$$X\left(P_{(0)} + \varepsilon P_{(1)}\right)\Big|_{(12)} \approx 0,\tag{14}$$

the symmetries of the perturbed Equation (7) can be obtained. The transformations

$$\begin{array}{ll} \bar{x}^{i} &\approx\ g^{i}(\mathbf{x}, u, a, \varepsilon) \equiv g^{i}_{(0)}(\mathbf{x}, u, a) + \varepsilon g^{i}_{(1)}(\mathbf{x}, u, a), \\ \bar{u} &\approx\ h(\mathbf{x}, u, a, \varepsilon) \equiv h_{(0)}(\mathbf{x}, u, a) + \varepsilon h_{(1)}(\mathbf{x}, u, a), \end{array} \tag{15}$$

with

$$
\bar{x}^{i}\big|_{a=0} \approx x^{i}, \qquad \bar{u}\big|_{a=0} = u,
$$

are a group of Lie point transformations under the group conditions

$$\begin{aligned} g^{i}\big(g^{1}(\mathbf{x},u,a,\varepsilon),\dots,g^{n}(\mathbf{x},u,a,\varepsilon),h(\mathbf{x},u,a,\varepsilon),b,\varepsilon\big) &\approx g^{i}(\mathbf{x},u,a+b,\varepsilon), \\ h\big(g^{1}(\mathbf{x},u,a,\varepsilon),\dots,g^{n}(\mathbf{x},u,a,\varepsilon),h(\mathbf{x},u,a,\varepsilon),b,\varepsilon\big) &\approx h(\mathbf{x},u,a+b,\varepsilon), \end{aligned}$$

up to $o(\varepsilon)$.

#### **4. Classification of Group-Invariant Solutions**

We present the optimal system of symmetries of the approximate Fokker–Planck equation [23], employing the fact that every *s*-dimensional subalgebra is equivalent to a unique member of the optimal system under the adjoint representation. If we know the infinitesimal adjoint action $\operatorname{ad}_{\mathbf{g}}$ of a Lie algebra $\mathbf{g}$ on itself, we can reconstruct the adjoint representation $Ad_G$ of the underlying Lie group from

$$\frac{dX}{d\varepsilon} = \operatorname{ad} Y\big|_{X}, \qquad X(0) = X_{0},$$

with solution

$$X(\varepsilon) = Ad\left(\exp(\varepsilon Y)\right)X_{0},$$

where

$$\operatorname{Ad}\left(\exp(\varepsilon Y)\right)X_0 = \sum_{n=0}^{\infty} \frac{\varepsilon^n}{n!} (\operatorname{ad}Y)^n(X_0) = X_0 - \varepsilon[Y, X_0] + \frac{\varepsilon^2}{2} [Y, [Y, X_0]] - \cdots.$$

Here, $[X_i, X_j]$ denotes the usual commutator and $\varepsilon$ is a parameter.
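For matrix Lie algebras, the adjoint series can be checked directly by realizing the adjoint action as a conjugation; with the sign convention of the series above, that is $\exp(-\varepsilon Y)\,X_0\,\exp(\varepsilon Y)$. A toy 2×2 sketch (the matrices are chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.linalg import expm

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A @ B - B @ A

# Arbitrary 2x2 generators for illustration.
X0 = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[1.0, 0.0], [0.0, -1.0]])
eps = 1e-3

# Conjugation realizing Ad(exp(eps*Y)) X0 with the series' sign convention.
adjoint = expm(-eps * Y) @ X0 @ expm(eps * Y)
# Truncated commutator series X0 - eps[Y,X0] + (eps^2/2)[Y,[Y,X0]].
series = X0 - eps * comm(Y, X0) + (eps**2 / 2) * comm(Y, comm(Y, X0))
```

The two expressions agree to $O(\varepsilon^3)$, as the series predicts.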

#### *Optimal System and Exact Solutions*

Consider the perturbed fractional-order Fokker–Planck equation

$$
{}_{0}D_t^{\alpha} u = \frac{1}{2} a^2 u_{xx} + b u + b x u_x - \varepsilon u_{t}, \quad u = u(x, t), \quad \alpha \in (0, 1). \tag{16}
$$

In order to calculate the approximate symmetries of the perturbed fractional equation, we apply the extension of Equation (8) to Equation (16). Setting *α* = 1 − *ε*, we can write Equation (16) as

$$\begin{aligned} P_{(0)} + \varepsilon P_{(1)} =\;& u_t - \frac{1}{2}a^2 u_{xx} - bu - bxu_x \\ &+ \varepsilon\left[(\ln t + \nu)u_t + u + \sum_{k=1}^{\infty}\frac{(-t)^k}{k(k+1)!}\frac{\partial^{k+1}u}{\partial t^{k+1}}\right] + \varepsilon u_t = 0. \end{aligned} \tag{17}$$

We obtain the symmetries of the perturbed Equation (17) using Maple:

$$\begin{aligned}
X_1 &= \partial_t, \qquad X_2 = u\,\partial_u, \qquad X_3 = e^{-bt}\partial_x, \qquad X_4 = e^{bt}\partial_x - \frac{2bxu}{a^2}e^{bt}\partial_u, \\
X_5 &= e^{-2bt}\partial_t - bxe^{-2bt}\partial_x + bue^{-2bt}\partial_u, \qquad X_6 = bxe^{2bt}\partial_t + bxe^{2bt}\partial_x - \frac{2b^2x^2u}{a^2}e^{2bt}\partial_u, \\
X_7 &= e^{bt+\frac{c_1 t}{2}-\frac{b}{a^2}x^2}\,\mathrm{KummerM}\!\left(\frac{4b+c_1}{4b}, \frac{3}{2}, \frac{b}{a^2}x^2\right)\partial_u, \\
X_8 &= e^{bt+\frac{c_1 t}{2}-\frac{b}{a^2}x^2}\,\mathrm{KummerU}\!\left(\frac{4b+c_1}{4b}, \frac{3}{2}, \frac{b}{a^2}x^2\right)\partial_u, \\
Y_1 &= \frac{\varepsilon}{a^2}\partial_t, \qquad Y_2 = \varepsilon u\,\partial_u, \qquad Y_3 = -\frac{\varepsilon}{a^2}e^{-bt}\partial_x, \qquad Y_4 = -\frac{\varepsilon}{a^2}e^{bt}\left(\partial_x + 2bxu\,\partial_u\right), \\
Y_5 &= \frac{\varepsilon}{a^2}e^{-2bt}\left(\partial_t + bx\,\partial_x + ba^2u\,\partial_u\right), \qquad Y_6 = \frac{\varepsilon}{a^2}e^{2bt}\left(\partial_t - bx\,\partial_x - 2b^2x^2u\,\partial_u\right), \\
Y_7 &= x\varepsilon\, e^{tc_1-\frac{bx^2}{a^2}}\,\mathrm{KummerM}\!\left(\frac{b+c_1}{2b}, \frac{3}{2}, \frac{b}{a^2}x^2\right)\partial_u, \\
Y_8 &= x\varepsilon\, e^{tc_1-\frac{bx^2}{a^2}}\,\mathrm{KummerU}\!\left(\frac{b+c_1}{2b}, \frac{3}{2}, \frac{b}{a^2}x^2\right)\partial_u,
\end{aligned} \tag{18}$$

where the Kummer functions $\mathrm{KummerM}(\mu, \nu, z)$ and $\mathrm{KummerU}(\mu, \nu, z)$ solve the differential equation $zy'' + (\nu - z)y' - \mu y = 0$.
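The Kummer equation can be verified numerically for $\mathrm{KummerM}(\mu,\nu,z) = {}_1F_1(\mu;\nu;z)$, available as `scipy.special.hyp1f1`; a finite-difference residual check at sample values (the values are chosen arbitrarily for illustration):

```python
from scipy.special import hyp1f1  # KummerM(mu, nu, z) = 1F1(mu; nu; z)

mu, nu, z = 0.7, 1.5, 0.9         # arbitrary sample point
h = 1e-4                          # step for central differences

y = hyp1f1(mu, nu, z)
yp = (hyp1f1(mu, nu, z + h) - hyp1f1(mu, nu, z - h)) / (2 * h)
ypp = (hyp1f1(mu, nu, z + h) - 2 * y + hyp1f1(mu, nu, z - h)) / h**2

# Residual of z*y'' + (nu - z)*y' - mu*y = 0; should vanish for a solution.
residual = z * ypp + (nu - z) * yp - mu * y
```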

Given the infinitesimal generators (18), a number of adjoint representations are as follows:

$$\begin{aligned}
&Ad[X_1, X_j] = X_j,\ j = 1,\dots,5, \qquad Ad[X_i, X_i] = X_i,\ i = 1,\dots,5, \\
&Ad[X_2, X_1] = X_1 - \varepsilon b X_3, \qquad Ad[X_2, X_4] = \frac{2\varepsilon b}{a^2}X_2 + X_4, \\
&Ad[X_3, X_1] = X_1 - \varepsilon b X_4, \qquad Ad[X_3, X_4] = \frac{2\varepsilon b}{a^2}X_2 + X_4, \\
&Ad[X_4, X_1] = X_1 + \varepsilon b X_4, \qquad Ad[X_4, X_3] = -\frac{2\varepsilon b}{a^2}X_2 + X_3, \\
&Ad[X_4, X_5] = -\frac{2\varepsilon^2 b^2}{a^2}X_2 + 2\varepsilon b X_3 + X_5, \qquad Ad[X_5, X_1] = X_1 - 2\varepsilon b X_5, \\
&Ad[Y_1, Y_j] = Y_j,\ j = 1,\dots,4, \qquad Ad[Y_i, Y_i] = Y_i,\ i = 1,\dots,4, \\
&Ad[Y_2, Y_1] = Y_1 - \frac{\varepsilon b}{a^2}Y_3, \qquad Ad[Y_2, Y_4] = -\frac{2\varepsilon b}{a^4}Y_2 + Y_4, \\
&Ad[Y_3, Y_1] = Y_1 - \frac{\varepsilon b}{a^2}Y_3, \qquad Ad[Y_3, Y_4] = -\frac{2\varepsilon b}{a^4}Y_3 + Y_4, \\
&Ad[Y_4, Y_1] = Y_1 + \frac{\varepsilon b}{a^2}Y_4, \qquad Ad[Y_4, Y_3] = \frac{2\varepsilon b}{a^4}Y_2 + Y_3, \quad \dots
\end{aligned}$$

Suppose that $V = \sum_{i=1}^{8} X_i$ and $\tilde{V} = \sum_{i=1}^{8} Y_i$ are the most general elements. Eventually, we obtain a one-dimensional optimal system of Equation (18). The following symmetries are just a few members of the optimal system of the perturbed Fokker–Planck equation:

$$\begin{aligned} V\_1 &= X\_1, & V\_2 &= X\_2, & V\_3 &= X\_3, & V\_4 &= X\_4 & V\_5 &= X\_2 + X\_3, \\ V\_6 &= X\_5, & V\_7 &= X\_6, & V\_8 &= X\_7, & V\_9 &= X\_8 & V\_{10} &= X\_0, \\ V\_{11} &= X\_3 + X\_5, & V\_{12} &= X\_2 + X\_5, & V\_{13} &= X\_2 + X\_4, & V\_{14} &= X\_1 + X\_4, \\ V\_{15} &= X\_2 + X\_3 + X\_4, & V\_{16} &= X\_2 + X\_3 + X\_5, & V\_{17} &= X\_1 + X\_2 + X\_4, \\ V\_{18} &= X\_1 + X\_3 + X\_4, & V\_{19} &= X\_1 + X\_4 + X\_5, & V\_{20} &= X\_2 + X\_3 + X\_4 + X\_5, \dots, \\ \bar{V}\_1 &= Y\_1, & \bar{V}\_2 &= Y\_2, & \bar{V}\_3 &= Y\_3, & \bar{V}\_4 &= Y\_4 & \bar{V}\_5 &= Y\_2 + Y\_4, \\ \bar{V}\_6 &= Y\_1 + Y\_3 + Y\_4, & \bar{V}\_7 &= Y\_1 + Y\_2 + Y\_3, & \dots \end{aligned}$$

**Case 1:** For the symmetry $V_1 = X_1$, the corresponding characteristic equation is given as

$$\frac{dt}{1} = \frac{dx}{0} = \frac{du}{0},\tag{19}$$

integration of Equation (19) yields the following similarity variable and function

$$
u = g(x), \tag{20}
$$

thus we have

$$u_t = 0, \quad u_x = g'(x), \quad u_{xx} = g''(x). \tag{21}$$

Substituting Equations (20) and (21) into Equation (17), we can get the reduced equation:

$$-\frac{1}{2}a^2 g'' - bg - bxg' + \varepsilon\left[\frac{g}{t}\right] = 0,$$

where the solution of the unperturbed part of the reduced equation is of the form

$$u = e^{-\frac{bx^2}{a^2}}\,\mathrm{erf}\!\left(-\frac{\sqrt{b}\,c_1 x}{a} + c_2\right).$$
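A quick symbolic check: in the constant-argument case $c_1 = 0$, the solution reduces to a multiple of the Gaussian factor $e^{-bx^2/a^2}$, which can be substituted directly into the unperturbed part of the reduced equation (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
a, b = sp.symbols('a b', positive=True)

# Gaussian factor of the solution (the c1 = 0 case, up to a constant multiple).
g = sp.exp(-b * x**2 / a**2)

# Unperturbed part of the reduced equation: -(1/2)a^2 g'' - b g - b x g'.
residual = (-sp.Rational(1, 2) * a**2 * sp.diff(g, x, 2)
            - b * g - b * x * sp.diff(g, x))
residual = sp.simplify(residual)
```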

**Case 2:** For *V*<sup>3</sup> = *X*3, using the corresponding characteristic equation and change of variables, we write

$$\begin{array}{rcl} \dfrac{dt}{0} &=& \dfrac{dx}{e^{-bt}} = \dfrac{du}{0}, \qquad u = g(t),\\[6pt] u_t &=& g'(t), \qquad u_x = u_{xx} = 0. \end{array}$$

We reduce the perturbed Equation (17) to a first-order equation:

$$g'(t) - bg(t) + \varepsilon\left[(\ln t + \nu)g'(t) + g(t) + \sum_{k=1}^{\infty}\frac{(-t)^{k}}{k(k+1)!}\frac{\partial^{k+1}g}{\partial t^{k+1}}\right] = 0.$$

Then $u = c_1 e^{bt}$ is a solution of the unperturbed equation $g'(t) - bg(t) = 0$.

**Case 3:** For *V*<sup>5</sup> = *X*<sup>2</sup> + *X*3, the reduced equation is:

$$g' - \frac{1}{2}a^2 e^{2bt} g - bg - \varepsilon\left[(\ln t + \nu)g' + g(1 + bxe^{bt}) + \sum_{k=1}^{\infty}\frac{(-t)^{k}}{k(k+1)!}\frac{\partial^{k+1}(g\,e^{xe^{bt}})}{\partial t^{k+1}}\right] = 0,$$

where $u = \exp\!\left(\frac{a^2 e^{2bt} + 4b^2 t}{4b}\right)$ is a solution of the unperturbed equation.

**Case 4:** For the components $V_4$, $V_6$ and $V_7$ of the one-dimensional optimal system, solutions of the unperturbed part of Equation (17) are given in Table 1.


**Table 1.** Solutions of the unperturbed part of Equation (17).

#### **5. Approximate Conservation Laws**

We consider approximate nonlinear self-adjointness for a system of perturbed PDEs; see, e.g., [24,25] for details. In the rest of this section, we present a formal Lagrangian of the perturbed Equation (12) and obtain conservation laws.

#### *5.1. Basic Definitions for Constructing Conservation Laws*

Let $\mathcal{L}$ be the formal Lagrangian of Equation (12):

$$
\mathcal{L} \approx \mathcal{L}_{(0)} + \varepsilon \mathcal{L}_{(1)} \equiv v P_{(0)} + \varepsilon v P_{(1)}, \tag{22}
$$

hence, the adjoint equations of Equation (12) are defined as

$$\begin{split} \frac{\delta \mathcal{L}}{\delta u} &= P_{(0)}^{*}(\mathbf{x}, u, v, u_{(1)}, v_{(1)}, \dots) \\ &\quad + \varepsilon P_{(1)}^{*}(\mathbf{x}, u, v, \dots, D_{x^1}^{c+1}u, D_{x^1}^{c+1}v, D_{x^1}^{c+2}u, D_{x^1}^{c+2}v, \dots) \approx 0, \end{split} \tag{23}$$

where $v_{(i)}$ represents all $i$th-order derivatives of the variable $v$ with respect to $\mathbf{x}$, and $\frac{\delta}{\delta u}$ is the variational derivative, written in terms of the total derivative operators $D_i$:

$$\frac{\delta}{\delta u} = \frac{\partial}{\partial u} + \sum\_{s=1}^{\infty} (-1)^s D\_{i\_1} \dots D\_{i\_s} \frac{\partial}{\partial u\_{i\_1 \dots i\_s}}.$$

*Di* indicates the operator of total differentiation with respect to *x<sup>i</sup>* :

$$D_{i} = \frac{\partial}{\partial x^{i}} + u_{i}\frac{\partial}{\partial u} + v_{i}\frac{\partial}{\partial v} + \sum_{s=1}^{\infty} \left[ u_{ii_{1}\dots i_{s}}\frac{\partial}{\partial u_{i_{1}\dots i_{s}}} + v_{ii_{1}\dots i_{s}}\frac{\partial}{\partial v_{i_{1}\dots i_{s}}} \right].$$
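The variational derivative above is the classical Euler operator. For a Lagrangian depending only on first derivatives, sympy's `euler_equations` implements its first two terms; a toy example with $L = (u')^2/2$ (chosen for illustration only, not the formal Lagrangian of this paper):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
u = sp.Function('u')

# Toy Lagrangian L = (u')**2 / 2; the Euler operator gives
# dL/du - D_x(dL/du') = -u'', so the Euler-Lagrange equation is u'' = 0.
L = sp.diff(u(x), x)**2 / 2
eqs = euler_equations(L, u(x), [x])
```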

If we consider

$$v \approx \varphi_{(0)}(\mathbf{x}, u) + \varepsilon \varphi_{(1)}(\mathbf{x}, u) \neq 0,\tag{24}$$

we have

$$\mathcal{L} \approx \varphi_{(0)} P_{(0)} + \varepsilon \left( \varphi_{(1)} P_{(0)} + \varphi_{(0)} P_{(1)} \right),$$

and it satisfies the nonlinear self-adjointness condition:

$$P_{(0)}^{*}\big|_{v\approx\varphi_{(0)}+\varepsilon\varphi_{(1)}} + \varepsilon P_{(1)}^{*}\big|_{v\approx\varphi_{(0)}} \approx \gamma_{(0)}P_{(0)} + \varepsilon \left(\gamma_{(1)}P_{(0)} + \gamma_{(0)}P_{(1)}\right), \tag{25}$$

in which $\gamma_{(0)}$ and $\gamma_{(1)}$ are coefficients to be determined.

Any approximate symmetry (13) of Equation (12) leads to a conservation law

$$D_i(C^i) = 0, \qquad C^i \approx C^i_{(0)} + \varepsilon C^i_{(1)},$$

where the components *C<sup>i</sup>* are obtained by

$$\begin{split} C_{(0)}^{i} &= W_{(0)}\left(\frac{\partial \mathcal{L}_{(0)}}{\partial u_i} + \sum_{s=1}^{c-1}(-1)^s D_{i_1}\dots D_{i_s}\frac{\partial \mathcal{L}_{(0)}}{\partial u_{ii_1\dots i_s}}\right) \\ &\quad + \sum_{r=1}^{c-1} D_{k_1}\dots D_{k_r}\left(W_{(0)}\right)\left[\frac{\partial \mathcal{L}_{(0)}}{\partial u_{ik_1\dots k_r}} + \sum_{s=1}^{c-r-1}(-1)^s D_{i_1}\dots D_{i_s}\frac{\partial \mathcal{L}_{(0)}}{\partial u_{ik_1\dots k_r i_1\dots i_s}}\right], \end{split} \tag{26}$$

$$\begin{split} C_{(1)}^{i} &= W_{(1)}\left(\frac{\partial \mathcal{L}_{(0)}}{\partial u_i} + \sum_{s=1}^{c-1}(-1)^s D_{i_1}\dots D_{i_s}\frac{\partial \mathcal{L}_{(0)}}{\partial u_{ii_1\dots i_s}}\right) \\ &\quad + \sum_{r=1}^{c-1}D_{k_1}\dots D_{k_r}\left(W_{(1)}\right)\left[\frac{\partial \mathcal{L}_{(0)}}{\partial u_{ik_1\dots k_r}} + \sum_{s=1}^{c-r-1}(-1)^sD_{i_1}\dots D_{i_s}\frac{\partial \mathcal{L}_{(0)}}{\partial u_{ik_1\dots k_ri_1\dots i_s}}\right] \\ &\quad + W_{(0)}\left(\frac{\partial \mathcal{L}_{(1)}}{\partial u_i} + \sum_{s=1}^{\infty}(-1)^sD_{i_1}\dots D_{i_s}\frac{\partial \mathcal{L}_{(1)}}{\partial u_{ii_1\dots i_s}}\right) \\ &\quad + \sum_{r=1}^{\infty}D_{k_1}\dots D_{k_r}\left(W_{(0)}\right)\left[\frac{\partial \mathcal{L}_{(1)}}{\partial u_{ik_1\dots k_r}} + \sum_{s=1}^{\infty}(-1)^sD_{i_1}\dots D_{i_s}\frac{\partial \mathcal{L}_{(1)}}{\partial u_{ik_1\dots k_ri_1\dots i_s}}\right]. \end{split} \tag{27}$$

where $W_{(0)} = \theta_{(0)} - \zeta^{i}_{(0)} u_i$ and $W_{(1)} = \theta_{(1)} - \zeta^{i}_{(1)} u_i$.

#### *5.2. Approximate Conservation Laws for pfPE*

By choosing the approximate formal Lagrangian

$$\begin{split} \mathcal{L} &\approx v(x,t,u)\left(P_{(0)} + \varepsilon P_{(1)}\right) = v(x,t,u)\Big[u_t - \frac{1}{2}a^2u_{xx} - bu - bxu_x \\ &\quad + \varepsilon\Big((\ln t + \nu)u_t + \frac{u}{t} + \sum_{k=1}^{\infty}\frac{(-t)^k}{k(k+1)!}u_t^{(k+1)}\Big)\Big], \end{split} \tag{28}$$

where

$$v = \varphi_{(0)}(x,t,u) + \varepsilon\varphi_{(1)}(x,t,u), \tag{29}$$

we obtain the adjoint equation from Equation (23):

$$P^{*} \approx -v_t + bxv_x - \frac{1}{2}a^2v_{xx} - \varepsilon\left[v_t(\ln t + \nu) + \sum_{k=1}^{\infty}\frac{(-t)^k}{k(k+1)!}D_t^{(k+1)}\big(vt^k\big)\right]. \tag{30}$$
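The $O(1)$ part of the adjoint operator above can be reproduced mechanically by applying the Euler operator to the zeroth-order formal Lagrangian $vP_{(0)}$. A sketch assuming SymPy is available ($v$ is treated as an unspecified multiplier function, $a$ and $b$ as symbols):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, t, a, b = sp.symbols('x t a b')
u = sp.Function('u')(x, t)
v = sp.Function('v')(x, t)

# Zeroth-order formal Lagrangian L0 = v*P0, P0 = u_t - (a^2/2) u_xx - b u - b x u_x
L0 = v * (u.diff(t) - sp.Rational(1, 2)*a**2*u.diff(x, 2) - b*u - b*x*u.diff(x))

# delta L0 / delta u gives the adjoint operator acting on v
adj = euler_equations(L0, [u], [x, t])[0].lhs
target = -v.diff(t) + b*x*v.diff(x) - sp.Rational(1, 2)*a**2*v.diff(x, 2)
print(sp.simplify(adj - target))  # 0, matching the O(1) part of Eq. (30)
```

The $-bv$ contribution from $\partial L_0/\partial u$ cancels against $+bv$ from $-D_x(-bxv)$, leaving exactly $-v_t + bxv_x - \tfrac{1}{2}a^2v_{xx}$.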

Substituting Equation (29) into Equation (30) and solving the characteristic equations of Equation (25) with Maple, we obtain

$$\begin{split} v &= \left(c_1xe^{bt} + c_2\right) + \varepsilon c_3 x e^{c_1 t}\Big(c_4\,\mathrm{KummerM}\big(\tfrac{b-c_1}{2b}, \tfrac{3}{2}, \tfrac{bx^2}{a^2}\big) \\ &\quad + c_5\,\mathrm{KummerU}\big(\tfrac{b-c_1}{2b}, \tfrac{3}{2}, \tfrac{bx^2}{a^2}\big)\Big) \end{split} \tag{31}$$

and

$$\mathcal{L} \approx \mathcal{L}_{(0)} + \varepsilon\mathcal{L}_{(1)}, \tag{32}$$

where

$$\begin{split} \mathcal{L}_{(0)} &= (c_1xe^{bt}+c_2)\left(u_t - \tfrac{1}{2}a^2u_{xx} - bu - bxu_x\right), \\ \mathcal{L}_{(1)} &= c_3xe^{c_1t}\Big(c_4\,\mathrm{KummerM}\big(\tfrac{b-c_1}{2b},\tfrac{3}{2},\tfrac{bx^2}{a^2}\big) + c_5\,\mathrm{KummerU}\big(\tfrac{b-c_1}{2b},\tfrac{3}{2},\tfrac{bx^2}{a^2}\big)\Big)\left(u_t - \tfrac{1}{2}a^2u_{xx} - bu - bxu_x\right) \\ &\quad + (c_1xe^{bt}+c_2)\Big((\ln t+\nu)u_t + \frac{u}{t} + \sum_{k=1}^{\infty}\frac{(-t)^k}{k(k+1)!}u_t^{(k+1)}\Big). \end{split} \tag{33}$$

Here, $c_1, c_2, c_3, c_4, c_5, a$ and $b$ are arbitrary constants. Applying formulas (26) and (27), we compute the approximate conservation laws and obtain

$$\begin{split} C_{(0)}^{x} &= W_{(0)}\left(-bx\varphi_{(0)} + \tfrac{1}{2}a^2D_x\big(\varphi_{(0)}\big)\right) - \tfrac{1}{2}a^2\varphi_{(0)}D_x(W_{(0)}), \\ C_{(0)}^{t} &= W_{(0)}\varphi_{(0)}, \\ C_{(1)}^{x} &= W_{(1)}\left(-bx\varphi_{(0)} + \tfrac{1}{2}a^2D_x\big(\varphi_{(0)}\big)\right) - \tfrac{1}{2}a^2\varphi_{(0)}D_x(W_{(1)}) \\ &\quad + W_{(0)}\left(-bx\varphi_{(1)} + \tfrac{1}{2}a^2D_x\big(\varphi_{(1)}\big)\right) - \tfrac{1}{2}a^2\varphi_{(1)}D_x(W_{(0)}), \end{split}$$

$$\begin{split} C_{(1)}^{t} &= W_{(1)}\varphi_{(0)} + W_{(0)}\left[c_3xe^{c_1t}\varphi_{(1)} + \varphi_{(0)}\Big(\ln t + \nu + \sum_{k=1}^{\infty}D_{kt}\frac{(-t)^k}{k(k+1)!}\Big)\right] \\ &\quad + \varphi_{(0)}\sum_{s=1}^{\infty}(-1)^{s+1}D_{st}(W_{(0)})\left[\sum_{k=s}^{\infty}D_{(k-s)t}\frac{t^k}{k(k+1)!}\right], \end{split}$$

where

$$C^{x} = C^{x}_{(0)} + \varepsilon C^{x}_{(1)}, \qquad C^{t} = C^{t}_{(0)} + \varepsilon C^{t}_{(1)}.$$

1. For $X_1 = \partial_t$, we have $W_{(0)} = -u_t$, $W_{(1)} = 0$, and the components of the approximate conservation law are:

$$\begin{split} C^{x} &= u_t\left(bx\varphi_{(0)} - \tfrac{1}{2}a^2c_1e^{bt}\right) + \tfrac{1}{2}a^2u_{xt}\varphi_{(0)} \\ &\quad + \varepsilon\left[u_t\left(bx\varphi_{(1)} - \tfrac{1}{2}a^2D_x\varphi_{(1)}\right) + \tfrac{1}{2}a^2\varphi_{(1)}u_{xt}\right], \end{split}$$

$$\begin{split} C^{t} &= -u_t\varphi_{(0)} + \varepsilon\Big[-u_t\Big(\varphi_{(1)} + \varphi_{(0)}\big(\ln t + \nu + \sum_{k=1}^{\infty}D_{kt}\tfrac{(-t)^k}{k(k+1)!}\big)\Big) \\ &\quad - \varphi_{(0)}\sum_{s=1}^{\infty}(-1)^{s+1}D_{st}(u_t)\Big(\sum_{k=s}^{\infty}D_{(k-s)t}\tfrac{t^k}{k(k+1)!}\Big)\Big]. \end{split}$$

2. For *X*<sup>2</sup> = *u∂<sup>u</sup>* , *W*(0) = *u* and *W*(1) = 0, we have:

$$\begin{split} C^{x} &= u\left(-bx\varphi_{(0)} + \tfrac{1}{2}c_1a^2e^{bt}\right) - \tfrac{1}{2}a^2u_x\varphi_{(0)} \\ &\quad + \varepsilon\left[u\left(-bx\varphi_{(1)} + \tfrac{1}{2}a^2D_x\varphi_{(1)}\right) - \tfrac{1}{2}a^2u_x\varphi_{(1)}\right], \end{split}$$

$$\begin{split} C^{t} &= u\varphi_{(0)} + \varepsilon\Big[u\Big(\varphi_{(1)} + \varphi_{(0)}\big(\ln t + \nu + \sum_{k=1}^{\infty}D_{kt}\tfrac{(-t)^k}{k(k+1)!}\big)\Big) \\ &\quad + \varphi_{(0)}\sum_{s=1}^{\infty}(-1)^{s+1}D_{st}(u)\Big(\sum_{k=s}^{\infty}D_{(k-s)t}\tfrac{t^k}{k(k+1)!}\Big)\Big]. \end{split}$$

3. For $X_3 = e^{-bt}\partial_x$, $W_{(0)} = -e^{-bt}u_x$ and $W_{(1)} = 0$, we have:

$$\begin{split} C^{x} &= e^{-bt}u_x\left(bx\varphi_{(0)} - \tfrac{1}{2}c_1a^2e^{bt}\right) + \tfrac{1}{2}a^2e^{-bt}u_{xx}\varphi_{(0)} \\ &\quad + \varepsilon\left[e^{-bt}u_x\left(bx\varphi_{(1)} - \tfrac{1}{2}a^2D_x\varphi_{(1)}\right) + \tfrac{1}{2}a^2e^{-bt}u_{xx}\varphi_{(1)}\right], \end{split}$$

$$\begin{split} C^{t} &= -u_xe^{-bt}\varphi_{(0)} - \varepsilon\Big[u_xe^{-bt}\Big(\varphi_{(1)} + \varphi_{(0)}\big(\ln t + \nu + \sum_{k=1}^{\infty}D_{kt}\tfrac{(-t)^k}{k(k+1)!}\big)\Big) \\ &\quad + \varphi_{(0)}\sum_{s=1}^{\infty}(-1)^{s+1}D_{st}\big(u_xe^{-bt}\big)\Big(\sum_{k=s}^{\infty}D_{(k-s)t}\tfrac{t^k}{k(k+1)!}\Big)\Big]. \end{split}$$

4. For $X_4 = e^{bt}\partial_x - \frac{2b}{a^2}xue^{bt}\partial_u$, $W_{(0)} = -\frac{2b}{a^2}xue^{bt} - e^{bt}u_x$ and $W_{(1)} = 0$; therefore:

$$\begin{split} C^{x} &= e^{bt}\Big[\big(\tfrac{2b}{a^2}xu + u_x\big)\big(bx\varphi_{(0)} - \tfrac{1}{2}c_1a^2e^{bt}\big) + \big(\tfrac{2b}{a^2}u + u_{xx}\big)\tfrac{1}{2}a^2\varphi_{(0)} \\ &\quad + \varepsilon\Big(\big(\tfrac{2b}{a^2}xu + u_x\big)\big(bx\varphi_{(1)} - \tfrac{1}{2}a^2D_x\varphi_{(1)}\big) + \big(\tfrac{2b}{a^2}u + u_{xx}\big)\tfrac{1}{2}a^2\varphi_{(1)}\Big)\Big], \end{split}$$

$$\begin{split} C^{t} &= -e^{bt}\Big[\big(\tfrac{2b}{a^2}xu + u_x\big)\varphi_{(0)} \\ &\quad + \varepsilon\Big(\big(\tfrac{2b}{a^2}xu + u_x\big)\Big(\varphi_{(1)} + \varphi_{(0)}\big(\ln t + \nu + \sum_{k=1}^{\infty}D_{kt}\tfrac{(-t)^k}{k(k+1)!}\big)\Big) \\ &\quad + \varphi_{(0)}\sum_{s=1}^{\infty}(-1)^{s+1}D_{st}\big(\tfrac{2b}{a^2}xue^{bt} - e^{bt}u_x\big)\Big(\sum_{k=s}^{\infty}D_{(k-s)t}\tfrac{t^k}{k(k+1)!}\Big)\Big)\Big]. \end{split}$$

5. For $X_5 = e^{-2bt}(\partial_t - bx\partial_x + bu\partial_u)$, $W_{(0)} = e^{-2bt}(bu - bxu_x - u_t)$ and $W_{(1)} = 0$, so we have:

$$\begin{split} C^{x} &= -e^{-2bt}\Big[(bu - bxu_x - u_t)\big(bx\varphi_{(0)} - \tfrac{1}{2}c_1a^2e^{bt}\big) - \tfrac{1}{2}a^2\varphi_{(0)}(bxu_{xx} + u_{xt}) \\ &\quad + \varepsilon\Big((bu - bxu_x - u_t)\big(bx\varphi_{(1)} - \tfrac{1}{2}a^2D_x\varphi_{(1)}\big) - \tfrac{1}{2}a^2\varphi_{(1)}(bxu_{xx} + u_{xt})\Big)\Big], \end{split}$$

$$\begin{split} C^{t} &= e^{-2bt}\varphi_{(0)}(bu - bxu_x - u_t) \\ &\quad + \varepsilon\Big[e^{-2bt}(bu - bxu_x - u_t)\Big(\varphi_{(1)} + \varphi_{(0)}\big(\ln t + \nu + \sum_{k=1}^{\infty}D_{kt}\tfrac{(-t)^k}{k(k+1)!}\big)\Big) \\ &\quad + \varphi_{(0)}\sum_{s=1}^{\infty}(-1)^{s+1}D_{st}\big(e^{-2bt}(bu - bxu_x - u_t)\big)\Big(\sum_{k=s}^{\infty}D_{(k-s)t}\tfrac{t^k}{k(k+1)!}\Big)\Big]. \end{split}$$

6. For $X_6 = e^{2bt}\big(\partial_t + bx\partial_x - \frac{2b^2}{a^2}x^2u\partial_u\big)$, $W_{(0)} = -e^{2bt}\big(\frac{2b^2}{a^2}x^2u + u_t + bxu_x\big)$ and $W_{(1)} = 0$, we have:

$$\begin{split} C^{x} &= e^{2bt}\Big[\big(\tfrac{2b^2}{a^2}x^2u + u_t + bxu_x\big)\big(bx\varphi_{(0)} - \tfrac{1}{2}c_1a^2e^{bt}\big) \\ &\quad + \tfrac{1}{2}a^2\varphi_{(0)}\big(\tfrac{4b^2}{a^2}xu + \tfrac{2b^2x^2}{a^2}u_x + bu_x + bxu_{xx} + u_{xt}\big) \\ &\quad + \varepsilon\Big(\big(\tfrac{2b^2}{a^2}x^2u + u_t + bxu_x\big)\big(bx\varphi_{(1)} - \tfrac{1}{2}a^2D_x\varphi_{(1)}\big) \\ &\quad + \tfrac{1}{2}a^2\varphi_{(1)}\big(\tfrac{4b^2}{a^2}xu + \tfrac{2b^2x^2}{a^2}u_x + bu_x + bxu_{xx} + u_{xt}\big)\Big)\Big], \end{split}$$

$$\begin{split} C^{t} &= -e^{2bt}\big(\tfrac{2b^2}{a^2}x^2u + u_t + bxu_x\big)\varphi_{(0)} \\ &\quad - \varepsilon\Big[e^{2bt}\big(\tfrac{2b^2}{a^2}x^2u + u_t + bxu_x\big)\Big(\varphi_{(1)} + \varphi_{(0)}\big(\ln t + \nu + \sum_{k=1}^{\infty}D_{kt}\tfrac{(-t)^k}{k(k+1)!}\big)\Big) \\ &\quad + \varphi_{(0)}\sum_{s=1}^{\infty}(-1)^{s+1}D_{st}\Big(e^{2bt}\big(\tfrac{2b^2}{a^2}x^2u + u_t + bxu_x\big)\Big)\Big(\sum_{k=s}^{\infty}D_{(k-s)t}\tfrac{t^k}{k(k+1)!}\Big)\Big]. \end{split}$$

7. For $Y_1 = \frac{1}{a^2}\varepsilon\partial_t$, $W_{(0)} = 0$ and $W_{(1)} = -\frac{1}{a^2}\varepsilon u_t$, we have:

$$\begin{split} C^{x} &= \varepsilon\left[\frac{1}{a^2}u_t\big(bx\varphi_{(0)} - \tfrac{1}{2}c_1a^2e^{bt}\big) + \tfrac{1}{2}u_{xt}\varphi_{(0)}\right], \\ C^{t} &= -\frac{1}{a^2}\varepsilon u_t\varphi_{(0)}. \end{split}$$

8. For *Y*<sup>2</sup> = *εu∂<sup>u</sup>* , *W*(0) = 0 and *W*(1) = *εu*, we have:

$$\begin{split} C^{x} &= \varepsilon\left[u\big(\tfrac{1}{2}c_1a^2e^{bt} - bx\varphi_{(0)}\big) - \tfrac{1}{2}a^2u_x\varphi_{(0)}\right], \\ C^{t} &= \varepsilon u\varphi_{(0)}. \end{split}$$

9. For $Y_3 = -\frac{1}{a^2}\varepsilon e^{-bt}\partial_x$, $W_{(0)} = 0$ and $W_{(1)} = \frac{1}{a^2}\varepsilon e^{-bt}u_x$, we have:

$$\begin{split} C^{x} &= -\varepsilon\left[\frac{1}{a^2}e^{-bt}u_x\big(bx\varphi_{(0)} - \tfrac{1}{2}c_1a^2e^{bt}\big) + \tfrac{1}{2}e^{-bt}u_{xx}\varphi_{(0)}\right], \\ C^{t} &= \frac{1}{a^2}\varepsilon e^{-bt}u_x\varphi_{(0)}. \end{split}$$

10. For $Y_4 = -\frac{1}{a^2}\varepsilon e^{bt}(\partial_x + 2bxu\partial_u)$, $W_{(0)} = 0$ and $W_{(1)} = \frac{1}{a^2}\varepsilon e^{bt}(u_x - 2bxu)$, we have:

$$\begin{split} C^{x} &= \frac{1}{a^2}\varepsilon e^{bt}\Big[(u_x - 2bxu)\big(\tfrac{1}{2}c_1a^2e^{bt} - bx\varphi_{(0)}\big) + \tfrac{1}{2}a^2\varphi_{(0)}(2bu + 2bxu_x - u_{xx})\Big], \\ C^{t} &= \frac{1}{a^2}\varepsilon e^{bt}(u_x - 2bxu)\varphi_{(0)}. \end{split}$$

11. For $Y_5 = \frac{1}{a^2}\varepsilon e^{-2bt}(\partial_t + bx\partial_x + bu\partial_u)$, $W_{(0)} = 0$ and $W_{(1)} = \frac{1}{a^2}\varepsilon e^{-2bt}(bu - u_t - bxu_x)$, we have:

$$\begin{split} C^{x} &= \frac{1}{a^2}\varepsilon e^{-2bt}\Big[(bu - u_t - bxu_x)\big(\tfrac{1}{2}c_1a^2e^{bt} - bx\varphi_{(0)}\big) + \tfrac{1}{2}a^2\varphi_{(0)}(u_{xt} + bxu_{xx})\Big], \\ C^{t} &= \frac{1}{a^2}\varepsilon e^{-2bt}(bu - u_t - bxu_x)\varphi_{(0)}. \end{split}$$
12. For $Y_6 = \frac{1}{a^2}\varepsilon e^{2bt}(\partial_t - bx\partial_x - 2b^2x^2u\partial_u)$, $W_{(0)} = 0$ and $W_{(1)} = \frac{1}{a^2}\varepsilon e^{2bt}(bxu_x - u_t - 2b^2x^2u)$, we have:

$$\begin{split} C^{x} &= \frac{1}{a^2}\varepsilon e^{2bt}\Big[(bxu_x - u_t - 2b^2x^2u)\big(\tfrac{1}{2}c_1a^2e^{bt} - bx\varphi_{(0)}\big) \\ &\quad + \tfrac{1}{2}a^2\varphi_{(0)}\big(4b^2xu + 2b^2x^2u_x + u_{xt} - bu_x - bxu_{xx}\big)\Big], \\ C^{t} &= \frac{1}{a^2}\varepsilon e^{2bt}(bxu_x - u_t - 2b^2x^2u)\varphi_{(0)}. \end{split}$$

#### **6. Conclusions and Outlook**

We presented a new approach for calculating exact analytical solutions of fractional-order equations containing a small parameter. Using the notion of nonlinear self-adjointness, approximate solutions, conservation laws, and symmetries for these equations were obtained. The computational results indicate the strength of the new method. We will apply the method to fractional stochastic differential equations in future work.

**Author Contributions:** Formal analysis, E.L.; Funding acquisition, Y.-M.C.; Investigation, N.K.; Supervision, M.I.; Writing—original draft, M.A.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work was supported by the Natural Science Foundation of China (Grant Nos. 61673169, 11301127, 11701176, 11626101, 11601485).

**Conflicts of Interest:** The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **New Soliton Solutions of Fractional Jaulent-Miodek System with Symmetry Analysis**

#### **Subhadarshan Sahoo 1, Santanu Saha Ray 2, Mohamed Aly Mohamed Abdou 3,4, Mustafa Inc 5,6,\* and Yu-Ming Chu 7,8,\***


Received: 12 May 2020; Accepted: 5 June 2020; Published: 12 June 2020

**Abstract:** New soliton solutions of the fractional Jaulent-Miodek (JM) system are presented via symmetry analysis and the fractional logistic function method. Fractional Lie symmetry analysis is unified with the symmetry analysis method. Conservation laws of the system are used to obtain new conserved vectors. Numerical simulations of the JM equations and the efficiency of the methods are presented. These solutions may be important and significant for the explanation of some practical physical phenomena. The results show that the present methods are powerful, competitive, reliable, and easy to implement for nonlinear fractional differential equations.

**Keywords:** fractional Jaulent-Miodek (JM) system; fractional logistic function method; symmetry analysis

#### **1. Introduction**

Integral and derivative operators of arbitrary order are the basis of fractional calculus, which has attracted great interest from researchers due to its dynamic behavior and exact description of nonlinear complex phenomena in numerous fields of science and engineering [1–6]. Analytical methods have played an essential role for fractional partial differential equations (FPDEs) [1–4]. Lie symmetry analysis also provides a powerful and effective tool for generating invariant solutions. The theory of symmetry analysis is based on the invariance of variables [7–14]. Hence, the study of symmetry analysis has attracted considerable interest during the past decades.

The time-fractional coupled Jaulent-Miodek (JM) type equations [15–17] are considered:

$$D_t^{\alpha}u + u_{xxx} + \frac{3}{2}vv_{xxx} + \frac{9}{2}v_xv_{xx} - 6uu_x - 6uvv_x - \frac{3}{2}u_xv^2 = 0 \tag{1}$$

and

$$D_t^{\alpha}v + v_{xxx} - 6u_xv - 6uv_x - \frac{15}{2}v_xv^2 = 0, \tag{2}$$

where 0 < α ≤ 1 denotes the order of the fractional derivative.

The coupled JM equations were first introduced by Jaulent and Miodek [18] using the inverse scattering transform with energy-dependent Schrödinger potentials. Equations (1) and (2) are also related to the Euler–Darboux equation, as shown by Matsuno [19]. The Darboux transformation of the JM spectral problem was studied by Xu [20]. Using hereditary symmetries, Ruan and Lou [21] presented the symmetries of the Jaulent–Miodek hierarchy. The sech and tanh–coth methods were used by Wazwaz [22], and further methods such as homotopy analysis [23], exp-function [24], extended tanh [25], and hyperbolic tangent [26] have been presented in the literature for approximate and exact solutions of the classical coupled Jaulent–Miodek equations.

Considerable interest has focused on improving earlier methods for solving FPDEs. The fractional coupled JM equations play an important role in several areas of science, such as fluid mechanics, plasma physics, condensed matter physics, and optics, and are associated with an energy-dependent Schrödinger potential [27–32]. As a practical application of the fractional Jaulent–Miodek (JM) system, Wang and Xia studied its super-Hamiltonian structure using the fractional supertrace identity [33].

Some of the methods used for solving the fractional coupled JM equations are: the homotopy perturbation natural transform method [34], the Sumudu transform [15], the residual power series method (RPSM) and the q-homotopy analysis method (q-HAM) [17], Hermite wavelets [35], and the (*G'*/*G*)-expansion and hyperbolic tangent methods [16].

This article deals with the fractional coupled JM system by utilizing a novel fractional logistic function method [36], which is presented in Section 3, where numerical simulations are also carried out to analyze the physical properties of the solutions. In Section 4, the symmetry analysis with conservation laws [37,38] for the time-fractional coupled JM equations is presented: the fractional Lie group method is applied to obtain the symmetry properties [39,40] of the JM system, and conservation laws [37,41] are derived to obtain new conserved vectors by utilizing the conservation theorems.

#### **2. Theory of Fractional Operators**

#### *2.1. Riemann–Liouville (RL) Fractional Derivative*

The Riemann–Liouville (RL) fractional derivative of order α (> 0) is defined as [1,3]

$$D_t^{\alpha}f(t) = \begin{cases} \dfrac{1}{\Gamma(m-\alpha)}\dfrac{d^m}{dt^m}\displaystyle\int_0^t (t-\tau)^{m-\alpha-1}f(\tau)\,d\tau, & m-1 < \alpha < m,\ m\in\mathbb{N}, \\[2mm] \dfrac{d^mf(t)}{dt^m}, & \alpha = m,\ m\in\mathbb{N}. \end{cases} \tag{3}$$

The Riemann–Liouville (RL) derivative of order α (> 0) has the following property [1–3]:

$$D^{\alpha}t^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}\,t^{\beta-\alpha}, \qquad \beta > \alpha - 1. \tag{4}$$
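Property (4) is easy to sanity-check numerically: the Grünwald–Letnikov sum approximates the RL derivative (for functions with f(0) = 0), so the two sides can be compared directly. A small self-contained sketch in pure Python (the step count and test point are arbitrary choices, not from this paper):

```python
from math import gamma

def gl_fractional_derivative(f, t, alpha, n=4000):
    """Grunwald-Letnikov approximation of the RL derivative D^alpha f at t."""
    h = t / n
    w = 1.0           # w_0 = (-1)^0 C(alpha, 0) = 1
    total = f(t)      # j = 0 term
    for j in range(1, n + 1):
        w *= (j - 1 - alpha) / j      # recurrence for (-1)^j C(alpha, j)
        total += w * f(t - j * h)
    return total / h**alpha

alpha, beta, t = 0.5, 1.0, 1.0
exact = gamma(beta + 1) / gamma(beta - alpha + 1) * t**(beta - alpha)
approx = gl_fractional_derivative(lambda s: s**beta, t, alpha)
print(exact, approx)  # the two values agree to a few decimal places
```

The Grünwald–Letnikov sum converges to the RL derivative at rate O(h), so the agreement tightens as `n` grows.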

#### *2.2. Local Fractional-Order Derivative*

Assume $h(x)\in C_{\alpha}(m,n)$, where $C_{\alpha}(m,n)$ denotes the class of functions that are α times differentiable with each derivative continuous on $(m,n)$. Then the derivative of fractional order α at $x = x_0$ is defined as [42,43]

$$h^{(\alpha)}(x_0) = \frac{d^{\alpha}h(x)}{dx^{\alpha}}\bigg|_{x=x_0} = \lim_{x\to x_0}\frac{\Delta^{\alpha}\big(h(x)-h(x_0)\big)}{(x-x_0)^{\alpha}} \tag{5}$$

where $\Delta^{\alpha}\big(h(x)-h(x_0)\big) \cong \Gamma(1+\alpha)\big(h(x)-h(x_0)\big)$ and $0 < \alpha \le 1$.

This derivative has the following chain-rule property [42,43]: if $z(x) = (h\circ u)(x)$, where $u(x) = f(x)$, then

$$\frac{d^{\alpha}z(x)}{dx^{\alpha}} = h^{(1)}(f(x))\,f^{(\alpha)}(x) \tag{6}$$

whenever $h^{(1)}(f(x))$ and $f^{(\alpha)}(x)$ exist.

#### **3. The Brief Descriptions of the Fractional Logistic Function Method and Implementations**

#### *3.1. Brief Description of the Proposed Method*

This section describes a comparatively new analytic method for obtaining solutions of FPDEs. The procedure of the proposed method is as follows:

#### **Step 1:**

The FPDE is given as:

$$Q\big(u,\, D_t^{\alpha}u,\, u_x,\, u_{xx},\, u_{xxx},\, \dots\big) = 0, \qquad 0 < \alpha \le 1, \tag{7}$$

where *u*(*x*, *t*) is the unknown function.

#### **Step 2:**

The solution of Equation (7) is sought in the form

$$u(x,t) = U(\xi), \qquad \xi = kx - \frac{\gamma\, t^{\alpha}}{\Gamma(\alpha+1)}, \tag{8}$$

where γ and *k* are parameters.

Then, by means of (6) [44,45], the fractional derivative reduces to

$$D_t^{\alpha}u = U_{\xi}\, D_t^{\alpha}\xi = -\gamma\, U_{\xi}.$$
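For the record, the constant on the right comes from applying property (4) to the similarity variable, under the convention of [44,45] that the derivative of a constant vanishes and $D_t^{\alpha}t^{\alpha} = \Gamma(\alpha+1)$:

$$D_t^{\alpha}\xi = D_t^{\alpha}\!\left(kx - \frac{\gamma\, t^{\alpha}}{\Gamma(\alpha+1)}\right) = -\frac{\gamma}{\Gamma(\alpha+1)}\cdot\frac{\Gamma(\alpha+1)}{\Gamma(1)}\,t^{0} = -\gamma,$$

so the fractional time derivative of $u$ collapses to an ordinary derivative of $U$ in $\xi$.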

Then Equation (7) can be reduced, using Equation (8), to the following form:

$$Q\big(U, \gamma U', kU', k^2U'', k^3U''', \dots\big) = 0 \tag{9}$$

**Step 3:**

Here, the exact solution of Equation (7) is sought as a polynomial in ϕ(ξ):

$$U(\xi) = a_0 + \sum_{i=1}^{n} a_i \phi^i(\xi), \tag{10}$$

where ϕ(ξ) is the sigmoid (logistic) function [46,47], defined by $\phi(\xi) = \frac{e^{\xi}}{1+e^{\xi}}$, which satisfies the Riccati equation:

$$
\phi\_{\xi} = \phi - \phi^2,
\tag{11}
$$

and the value of *n* can be evaluated using the homogeneous balancing principle [48,49]. Moreover, the derivatives of *U*(ξ) of any order can be expressed as polynomials in ϕ by repeated use of Equation (11).
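Both facts used in this step — that the logistic function solves (11), and that (11) turns differentiation of U into polynomial algebra in ϕ — can be verified symbolically. A small sketch, assuming SymPy is available:

```python
import sympy as sp

xi = sp.symbols('xi')
phi = sp.exp(xi) / (1 + sp.exp(xi))   # logistic (sigmoid) function

# phi solves the Riccati equation phi' = phi - phi^2
assert sp.simplify(sp.diff(phi, xi) - (phi - phi**2)) == 0

# With phi' = phi - phi^2, every derivative of U = a0 + a1*phi + a2*phi^2
# is again a polynomial in phi; e.g. the first derivative via the chain rule:
a0, a1, a2, p = sp.symbols('a0 a1 a2 p')
U = a0 + a1*p + a2*p**2
U1 = sp.expand(sp.diff(U, p) * (p - p**2))   # = a1*p + (2*a2 - a1)*p**2 - 2*a2*p**3
print(U1)
```

Repeating the substitution `p' -> p - p**2` gives U'', U''', and so on, always as polynomials in ϕ, which is what makes Step 4 a finite algebraic problem.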

#### **Step 4:**

Now the coefficients *ai* are determined by substituting Equation (10) into Equation (9), using Equation (11) to eliminate derivatives, and solving the algebraic equations obtained by equating the coefficients of ϕ*<sup>i</sup>* to 0.

#### **Step 5:**

The unknowns obtained in Step 4 are substituted into Equation (10) to obtain the solutions of Equation (7).

#### *3.2. Soliton Solutions for JM System*

The logistic function method is now employed to solve the JM system. Substituting Equation (8) into Equations (1) and (2), we have:

$$\begin{split} &-\gamma U'(\xi) + k^3U'''(\xi) + \frac{3k^3}{2}V(\xi)V'''(\xi) + \frac{9k^3}{2}V'(\xi)V''(\xi) \\ &\quad - 6kU(\xi)U'(\xi) - 6kU(\xi)V(\xi)V'(\xi) - \frac{3}{2}kU'(\xi)V^2(\xi) = 0, \end{split} \tag{12}$$

and

$$-\gamma V'(\xi) + k^3V'''(\xi) - 6kU'(\xi)V(\xi) - 6kU(\xi)V'(\xi) - \frac{15k}{2}V'(\xi)V^2(\xi) = 0. \tag{13}$$

Similar to Equation (10), we seek the solutions of the governing system in the form

$$U(\xi) = a_0 + \sum_{i=1}^{n} a_i \phi^i \quad \text{and} \quad V(\xi) = b_0 + \sum_{i=1}^{m} b_i \phi^i. \tag{14}$$

By means of the homogeneous balance principle [48,49], we get *n* = 2 and *m* = 1. Thus, the solutions take the form:

$$\mathcal{U}(\xi) = a\_0 + a\_1 \varphi + a\_2 \varphi^2 \text{ and } V(\xi) = b\_0 + b\_1 \varphi,\tag{15}$$

where ϕ satisfies Equation (11).
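The balance values n = 2 and m = 1 quoted above follow from degree counting: since each application of (11) raises the degree in ϕ by one, the dominant terms U''' and UU' in (12) have degrees n + 3 and 2n + 1, while V''' and V'V² in (13) have degrees m + 3 and 3m + 1. A sketch of the arithmetic, assuming SymPy:

```python
import sympy as sp

n, m = sp.symbols('n m', positive=True)

# Balance the highest phi-degrees of the dominant terms:
#   deg(U''') = n + 3 against deg(U U')   = 2n + 1
#   deg(V''') = m + 3 against deg(V' V^2) = 3m + 1
balance = sp.solve([sp.Eq(n + 3, 2*n + 1), sp.Eq(m + 3, 3*m + 1)], [n, m])
print(balance)  # n = 2, m = 1
```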

Putting Equation (15) with Equation (11) into Equations (12) and (13), equating the obtained coefficient of ϕ*<sup>i</sup>* to 0, we get:

**Set 1:**

$$\gamma = \frac{k^3}{4}, \ a\_0 = -\frac{k^2}{32}, \ a\_1 = -\frac{3k^2}{8}, \ a\_2 = \frac{3k^2}{8}, \ b\_0 = \frac{ik}{2\sqrt{2}}, \ b\_1 = -\frac{ik}{\sqrt{2}}.$$

For **set 1**, the following hyperbolic solutions can be obtained as

$$U_{11} = -\frac{k^2(\cosh(\xi)+7)}{32(1+\cosh(\xi))}, \qquad V_{12} = -\frac{ik\tanh\left(\frac{\xi}{2}\right)}{2\sqrt{2}}, \tag{16}$$

where $\xi = kx - \frac{k^3t^{\alpha}}{4\Gamma(\alpha+1)}$.

**Set 2:**

$$\gamma = \frac{k^3}{4}, \ a\_0 = -\frac{k^2}{32}, \ a\_1 = -\frac{3k^2}{8}, \ a\_2 = \frac{3k^2}{8}, \ b\_0 = -\frac{ik}{\sqrt{2}}, \ b\_1 = \frac{ik}{\sqrt{2}}$$

For **set 2**, the following hyperbolic solutions can be obtained as

$$U_{21} = -\frac{k^2(\cosh(\xi)+7)}{32(1+\cosh(\xi))}, \qquad V_{22} = -\frac{ik(1+3\cosh(\xi)+3\sinh(\xi))}{2\sqrt{2}(1+\cosh(\xi)+\sinh(\xi))}, \tag{17}$$

where $\xi = kx - \frac{k^3t^{\alpha}}{4\Gamma(\alpha+1)}$.

**Set 3:**

$$\gamma = \frac{11k^3}{5},\ a_0 = \frac{k^2}{20},\ a_1 = -2k^2,\ a_2 = 2k^2,\ b_0 = i\sqrt{5}k,\ b_1 = -2i\sqrt{5}k.$$

For **set 3**, the following hyperbolic solutions can be obtained as

$$U_{31} = \frac{k^2(\cosh(\xi)-19)}{20(1+\cosh(\xi))}, \qquad V_{32} = -i\sqrt{5}k\tanh\left(\frac{\xi}{2}\right), \tag{18}$$

where $\xi = kx - \frac{11k^3t^{\alpha}}{5\Gamma(\alpha+1)}$.

**Set 4:**

$$\gamma = \frac{11k^3}{5},\ a_0 = \frac{k^2}{20},\ a_1 = -2k^2,\ a_2 = 2k^2,\ b_0 = -i\sqrt{5}k,\ b_1 = 2i\sqrt{5}k.$$

For **set 4**, the following hyperbolic solutions can be obtained as

$$U_{41} = \frac{k^2(\cosh(\xi)-19)}{20(1+\cosh(\xi))}, \qquad V_{42} = i\sqrt{5}k\tanh\left(\frac{\xi}{2}\right), \tag{19}$$

where $\xi = kx - \frac{11k^3t^{\alpha}}{5\Gamma(\alpha+1)}$.

#### *3.3. Numerical Simulations*

This part focuses on the numerical simulation of Equations (1) and (2) via the fractional logistic function method. Equations (16) and (18) have been used here to generate the solution graphs.

Figures 1–4 illustrate the obtained solutions of the governing equations.

**Case 1:** For α = 0.1 (Fractional order)

**Figure 1.** (**a**) A three dimensional (3-D) solitary wave figure of *u*(*x*,*t*) in Equation (16) with *U*11, when *k* = 0.3 and α = 0.1, (**b**) 2-D figure of *u*(*x*, *t*), for *t* = 0.1.

**Figure 2.** (**a**) A 3-D solitary wave of *v*(*x*, *<sup>t</sup>*) in Equation (16) with *<sup>V</sup>*12, when *<sup>k</sup>* <sup>=</sup> 0.3 and <sup>α</sup> <sup>=</sup> 0.1, (**b**) 2-D figure of *v*(*x*, *<sup>t</sup>*) for *<sup>t</sup>* <sup>=</sup> 0.1.

**Case 2:** For α = 0.1 (Fractional order)

**Figure 3.** (**a**) A 3-D solitary wave figure of *u*(*x*,*t*) in Equation (18) as *U*31, for *k* = 0.3 and α = 0.1, (**b**) 2-D figure of *u*(*x*, *t*) for *t* = 0.1.

**Figure 4.** (**a**) A 3-D solitary wave figure of *v*(*x*, *t*) in Equation (18) with *V*32, for *k* = 0.3 and α = 0.1, (**b**) 2-D figure of *v*(*x*, *t*) for *t* = 0.1.

#### **4. Lie Symmetry Analysis Method**

#### *4.1. Theory of Symmetry Analysis Method*

In this part, the general method for generating the symmetries of FPDEs is discussed by means of fractional Lie symmetry analysis.

Consider

$$D_t^{\alpha}u = F(t, x, u, u_x, u_{xx}, u_{xxx}, v, v_x, v_{xx}, v_{xxx}, \dots) \tag{20}$$

$$D_t^{\alpha}v = G(t, x, u, u_x, u_{xx}, u_{xxx}, v, v_x, v_{xx}, v_{xxx}, \dots) \tag{21}$$

Let us now assume that Equations (20) and (21) are invariant under the one-parameter Lie group of transformations:

$$\begin{aligned}
\bar{x} &= x + \varepsilon\xi(t,x,u,v) + O(\varepsilon^2), &
\bar{t} &= t + \varepsilon\tau(t,x,u,v) + O(\varepsilon^2),\\
\bar{u} &= u + \varepsilon\eta(t,x,u,v) + O(\varepsilon^2), &
\bar{v} &= v + \varepsilon\vartheta(t,x,u,v) + O(\varepsilon^2),\\
D_t^{\alpha}\bar{u} &= D_t^{\alpha}u + \varepsilon\eta_{\alpha}^{0}(t,x,u,v) + O(\varepsilon^2), &
D_t^{\alpha}\bar{v} &= D_t^{\alpha}v + \varepsilon\vartheta_{\alpha}^{0}(t,x,u,v) + O(\varepsilon^2),\\
\frac{\partial\bar{u}}{\partial\bar{x}} &= \frac{\partial u}{\partial x} + \varepsilon\eta^{x}(t,x,u,v) + O(\varepsilon^2), &
\frac{\partial\bar{v}}{\partial\bar{x}} &= \frac{\partial v}{\partial x} + \varepsilon\vartheta^{x}(t,x,u,v) + O(\varepsilon^2),\\
\frac{\partial^2\bar{u}}{\partial\bar{x}^2} &= \frac{\partial^2 u}{\partial x^2} + \varepsilon\eta^{xx}(t,x,u,v) + O(\varepsilon^2), &
\frac{\partial^2\bar{v}}{\partial\bar{x}^2} &= \frac{\partial^2 v}{\partial x^2} + \varepsilon\vartheta^{xx}(t,x,u,v) + O(\varepsilon^2),\\
\frac{\partial^3\bar{u}}{\partial\bar{x}^3} &= \frac{\partial^3 u}{\partial x^3} + \varepsilon\eta^{xxx}(t,x,u,v) + O(\varepsilon^2), &
\frac{\partial^3\bar{v}}{\partial\bar{x}^3} &= \frac{\partial^3 v}{\partial x^3} + \varepsilon\vartheta^{xxx}(t,x,u,v) + O(\varepsilon^2)
\end{aligned} \tag{22}$$

where $\varepsilon \ll 1$ is the group parameter and ξ, τ, η, ϑ are the infinitesimals. The extended infinitesimals $\eta^{x}$, $\eta^{xx}$, $\eta^{xxx}$, $\vartheta^{x}$, $\vartheta^{xx}$, and $\vartheta^{xxx}$ are given by:

$$\begin{aligned}
\eta^{x} &= D_x(\eta) - u_xD_x(\xi) - u_tD_x(\tau), &
\vartheta^{x} &= D_x(\vartheta) - v_xD_x(\xi) - v_tD_x(\tau),\\
\eta^{xx} &= D_x(\eta^{x}) - u_{xx}D_x(\xi) - u_{xt}D_x(\tau), &
\vartheta^{xx} &= D_x(\vartheta^{x}) - v_{xx}D_x(\xi) - v_{xt}D_x(\tau),\\
\eta^{xxx} &= D_x(\eta^{xx}) - u_{xxx}D_x(\xi) - u_{xxt}D_x(\tau), &
\vartheta^{xxx} &= D_x(\vartheta^{xx}) - v_{xxx}D_x(\xi) - v_{xxt}D_x(\tau)
\end{aligned} \tag{23}$$

where $D_{x_j} = \dfrac{\partial}{\partial x_j} + u_j\dfrac{\partial}{\partial u} + v_j\dfrac{\partial}{\partial v} + u_{jk}\dfrac{\partial}{\partial u_k} + v_{jk}\dfrac{\partial}{\partial v_k} + \cdots$, $j,k = 1,2,3,\dots$, with $u_j = \dfrac{\partial u}{\partial x_j}$, $v_j = \dfrac{\partial v}{\partial x_j}$, $u_{jk} = \dfrac{\partial^2 u}{\partial x_j\partial x_k}$, $v_{jk} = \dfrac{\partial^2 v}{\partial x_j\partial x_k}$, and so on.

The associated infinitesimal generator is the vector field

$$\mathbf{V} = \xi(t,x,u,v)\frac{\partial}{\partial x} + \tau(t,x,u,v)\frac{\partial}{\partial t} + \eta(t,x,u,v)\frac{\partial}{\partial u} + \vartheta(t,x,u,v)\frac{\partial}{\partial v} \tag{24}$$

**V** satisfies the invariance conditions:

$$\left.\mathrm{Pr}^{(n)}\mathbf{V}(\Delta_1)\right|_{\Delta_1=0} = 0 \text{ and } \left.\mathrm{Pr}^{(n)}\mathbf{V}(\Delta_2)\right|_{\Delta_2=0} = 0, \quad n = 1,2,\dots, \tag{25}$$

where $\mathrm{Pr}^{(n)}$ denotes the $n$th prolongation of the vector field, and

$$\Delta_1 := D_t^{\alpha}u - F(t, x, u, u_x, u_{xx}, u_{xxx}, v, v_x, v_{xx}, v_{xxx}, \dots)$$

and

$$\Delta_2 := D_t^{\alpha}v - G(t, x, u, u_x, u_{xx}, u_{xxx}, v, v_x, v_{xx}, v_{xxx}, \dots).$$

Now, in order to preserve the usual structure of the Riemann–Liouville (RL) fractional operator, the transformations of system (22) must satisfy the invariance condition

$$\left.\tau(x,t,u,v)\right|_{t=0} = 0 \tag{26}$$

Using the RL derivative together with Equation (26), the α-th extended infinitesimals [50–52] can be presented as follows:

$$\eta_{\alpha}^{0} = D_t^{\alpha}(\eta) + \xi D_t^{\alpha}(u_x) - D_t^{\alpha}(\xi u_x) + D_t^{\alpha}(D_t(\tau)u) - D_t^{\alpha+1}(\tau u) + \tau D_t^{\alpha+1}(u)$$

and

$$\vartheta_{\alpha}^{0} = D_t^{\alpha}(\vartheta) + \xi D_t^{\alpha}(v_x) - D_t^{\alpha}(\xi v_x) + D_t^{\alpha}(D_t(\tau)v) - D_t^{\alpha+1}(\tau v) + \tau D_t^{\alpha+1}(v) \tag{27}$$

where $D_t^{\alpha}$ denotes the total fractional derivative operator.

The generalized Leibniz rule gives:

$$D_t^{\alpha}(f(t)g(t)) = \sum_{m=0}^{\infty}\binom{\alpha}{m}D_t^{\alpha-m}f(t)\,D_t^{m}g(t),\ \alpha > 0 \tag{28}$$

where

$$\binom{\alpha}{m} = \frac{(-1)^{m-1}\alpha\,\Gamma(m-\alpha)}{\Gamma(1-\alpha)\Gamma(m+1)}$$
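The Gamma-function form of the generalized binomial coefficient above agrees with the usual falling-factorial form $\binom{\alpha}{m} = \alpha(\alpha-1)\cdots(\alpha-m+1)/m!$. A quick numerical check of that identity (a sketch; the function names are illustrative):

```python
import math

def gbinom_gamma(alpha, m):
    # Gamma-function form used in the text:
    # C(alpha, m) = (-1)^(m-1) * alpha * Gamma(m - alpha) / (Gamma(1 - alpha) * Gamma(m + 1))
    if m == 0:
        return 1.0
    return ((-1.0) ** (m - 1) * alpha * math.gamma(m - alpha)
            / (math.gamma(1.0 - alpha) * math.gamma(m + 1.0)))

def gbinom_product(alpha, m):
    # equivalent falling-factorial form: alpha*(alpha-1)*...*(alpha-m+1) / m!
    out = 1.0
    for j in range(m):
        out *= (alpha - j) / (j + 1)
    return out
```

For non-integer α the argument $m-\alpha$ of the Gamma function never hits a pole, so both forms are well defined and coincide.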

We also have

$$\eta_{\alpha}^{0} = D_t^{\alpha}(\eta) - \alpha D_t(\tau)\frac{\partial^{\alpha}u}{\partial t^{\alpha}} - \sum_{n=1}^{\infty}\binom{\alpha}{n}D_t^{n}(\xi)D_t^{\alpha-n}u_x - \sum_{n=1}^{\infty}\binom{\alpha}{n+1}D_t^{n+1}(\tau)D_t^{\alpha-n}(u)$$

and

$$\vartheta_{\alpha}^{0} = D_t^{\alpha}(\vartheta) - \alpha D_t(\tau)\frac{\partial^{\alpha}v}{\partial t^{\alpha}} - \sum_{n=1}^{\infty}\binom{\alpha}{n}D_t^{n}(\xi)D_t^{\alpha-n}v_x - \sum_{n=1}^{\infty}\binom{\alpha}{n+1}D_t^{n+1}(\tau)D_t^{\alpha-n}(v) \tag{29}$$

The generalized chain rule gives:

$$\frac{d^m g(h(t))}{dt^m} = \sum_{k=0}^{m}\sum_{r=0}^{k}\binom{k}{r}\frac{1}{k!}\,[-h(t)]^{r}\,\frac{d^m}{dt^m}\left[h(t)^{k-r}\right]\frac{d^k g(h)}{dh^k} \tag{30}$$

Now by using Equations (28) and (30) with *f*(*t*) = 1, we have

$$D_t^{\alpha}(\eta) = \frac{\partial^{\alpha}\eta}{\partial t^{\alpha}} + \eta_u\frac{\partial^{\alpha}u}{\partial t^{\alpha}} - u\frac{\partial^{\alpha}\eta_u}{\partial t^{\alpha}} + \sum_{n=1}^{\infty}\binom{\alpha}{n}\frac{\partial^{n}\eta_u}{\partial t^{n}}D_t^{\alpha-n}(u) + \mu$$

and

$$D_t^{\alpha}(\vartheta) = \frac{\partial^{\alpha}\vartheta}{\partial t^{\alpha}} + \vartheta_v\frac{\partial^{\alpha}v}{\partial t^{\alpha}} - v\frac{\partial^{\alpha}\vartheta_v}{\partial t^{\alpha}} + \sum_{n=1}^{\infty}\binom{\alpha}{n}\frac{\partial^{n}\vartheta_v}{\partial t^{n}}D_t^{\alpha-n}(v) + \lambda \tag{31}$$

where

$$\mu = \sum_{n=2}^{\infty}\sum_{m=2}^{n}\sum_{k=2}^{m}\sum_{r=0}^{k-1}\binom{\alpha}{n}\binom{n}{m}\binom{k}{r}\frac{1}{k!}\frac{t^{n-\alpha}}{\Gamma(n+1-\alpha)}(-u)^{r}\,\frac{\partial^{m}}{\partial t^{m}}\left(u^{k-r}\right)\frac{\partial^{n-m+k}\eta}{\partial t^{n-m}\partial u^{k}}$$

and

$$\lambda = \sum_{n=2}^{\infty}\sum_{m=2}^{n}\sum_{k=2}^{m}\sum_{r=0}^{k-1}\binom{\alpha}{n}\binom{n}{m}\binom{k}{r}\frac{1}{k!}\frac{t^{n-\alpha}}{\Gamma(n+1-\alpha)}(-v)^{r}\,\frac{\partial^{m}}{\partial t^{m}}\left(v^{k-r}\right)\frac{\partial^{n-m+k}\vartheta}{\partial t^{n-m}\partial v^{k}}$$

Thus, Equation (29) yields

$$\begin{aligned} \eta_{\alpha}^{0} &= \frac{\partial^{\alpha}\eta}{\partial t^{\alpha}} + (\eta_u - \alpha D_t(\tau))\frac{\partial^{\alpha}u}{\partial t^{\alpha}} - u\frac{\partial^{\alpha}\eta_u}{\partial t^{\alpha}} + \mu\\ &\quad + \sum_{n=1}^{\infty}\left[\binom{\alpha}{n}\frac{\partial^{n}\eta_u}{\partial t^{n}} - \binom{\alpha}{n+1}D_t^{n+1}(\tau)\right]D_t^{\alpha-n}(u) - \sum_{n=1}^{\infty}\binom{\alpha}{n}D_t^{n}(\xi)D_t^{\alpha-n}u_x \end{aligned}$$

and

$$\begin{aligned} \vartheta_{\alpha}^{0} &= \frac{\partial^{\alpha}\vartheta}{\partial t^{\alpha}} + (\vartheta_v - \alpha D_t(\tau))\frac{\partial^{\alpha}v}{\partial t^{\alpha}} - v\frac{\partial^{\alpha}\vartheta_v}{\partial t^{\alpha}} + \lambda\\ &\quad + \sum_{n=1}^{\infty}\left[\binom{\alpha}{n}\frac{\partial^{n}\vartheta_v}{\partial t^{n}} - \binom{\alpha}{n+1}D_t^{n+1}(\tau)\right]D_t^{\alpha-n}(v) - \sum_{n=1}^{\infty}\binom{\alpha}{n}D_t^{n}(\xi)D_t^{\alpha-n}v_x \end{aligned} \tag{32}$$

#### *4.2. Lie Symmetry*

Applying the third prolongation to Equations (1) and (2), we obtain the infinitesimals:

$$\begin{aligned} \xi &= \alpha x c_2 + c_1,\\ \tau &= 3tc_2,\\ \eta &= -2u\alpha c_2,\\ \vartheta &= -v\alpha c_2 \end{aligned} \tag{33}$$

The Lie algebra of infinitesimal symmetries of the governing system is spanned by

$$\mathbf{V}\_1 = \frac{\partial}{\partial \mathbf{x}}\tag{34}$$

$$\mathbf{V}_2 = x\alpha\frac{\partial}{\partial x} + 3t\frac{\partial}{\partial t} - 2u\alpha\frac{\partial}{\partial u} - v\alpha\frac{\partial}{\partial v} \tag{35}$$

Thus, corresponding to Equations (1) and (2), the general infinitesimal generator is given as [7,8]

$$\mathbf{V} = c\_1 \mathbf{V}\_1 + c\_2 \mathbf{V}\_2$$
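One can check numerically that the generators (34) and (35) close into a Lie algebra, with bracket $[\mathbf{V}_1,\mathbf{V}_2] = \alpha\mathbf{V}_1$. A minimal sketch, using central finite differences on the coefficient functions of the vector fields (the evaluation point and the value of α are arbitrary illustrations):

```python
def bracket(V, W, p, h=1e-6):
    # Lie bracket of vector fields: [V, W]^i = sum_j (V^j d_j W^i - W^j d_j V^i),
    # evaluated at the point p via central finite differences
    def d(f, j):
        q1, q2 = list(p), list(p)
        q1[j] += h
        q2[j] -= h
        return (f(*q1) - f(*q2)) / (2.0 * h)
    return [sum(V[j](*p) * d(W[i], j) - W[j](*p) * d(V[i], j) for j in range(4))
            for i in range(4)]

alpha = 0.7  # sample fractional order

# V1 = d/dx and V2 = alpha*x d/dx + 3t d/dt - 2u*alpha d/du - v*alpha d/dv, Eqs. (34)-(35)
V1 = [lambda x, t, u, v: 1.0, lambda x, t, u, v: 0.0,
      lambda x, t, u, v: 0.0, lambda x, t, u, v: 0.0]
V2 = [lambda x, t, u, v: alpha * x, lambda x, t, u, v: 3.0 * t,
      lambda x, t, u, v: -2.0 * u * alpha, lambda x, t, u, v: -v * alpha]

c = bracket(V1, V2, (0.4, 1.3, -0.8, 2.1))  # expect the coefficients of alpha*V1
```

Since the coefficients of $\mathbf{V}_2$ are linear in the coordinates, the finite differences are exact up to rounding, and `c` reproduces $(\alpha, 0, 0, 0)$, the coefficient list of $\alpha\mathbf{V}_1$.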

#### *4.3. Similarity Reduction*

**Case 2:** Using the infinitesimal generator in Equation (35), the following characteristic equation is obtained:

$$\frac{dx}{x\alpha} = \frac{dt}{3t} = -\frac{du}{2u\alpha} = -\frac{dv}{v\alpha} \tag{36}$$

Solving Equation (36) yields the similarity variable and similarity transformations:

$$X = xt^{-\frac{\alpha}{3}} \tag{37}$$

$$u = F(X)\,t^{-\frac{2\alpha}{3}} \tag{38}$$

$$v = G(X)\,t^{-\frac{\alpha}{3}} \tag{39}$$
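The quantities in Equations (37)–(39) are invariants of the one-parameter group generated by $\mathbf{V}_2$, namely $\bar{x} = xe^{\alpha\varepsilon}$, $\bar{t} = te^{3\varepsilon}$, $\bar{u} = ue^{-2\alpha\varepsilon}$, $\bar{v} = ve^{-\alpha\varepsilon}$. A quick numerical check (the sample point and parameters are arbitrary):

```python
import math

alpha, eps = 0.6, 0.35            # sample fractional order and group parameter
x, t, u, v = 1.7, 2.4, 0.9, 1.2   # arbitrary sample point with t > 0

# finite group transformation obtained by exponentiating V2 of Equation (35)
xb = x * math.exp(alpha * eps)
tb = t * math.exp(3.0 * eps)
ub = u * math.exp(-2.0 * alpha * eps)
vb = v * math.exp(-alpha * eps)

# invariants from Equations (37)-(39): X, F = u*t^(2a/3), G = v*t^(a/3)
X_old, X_new = x * t ** (-alpha / 3.0), xb * tb ** (-alpha / 3.0)
F_old, F_new = u * t ** (2.0 * alpha / 3.0), ub * tb ** (2.0 * alpha / 3.0)
G_old, G_new = v * t ** (alpha / 3.0), vb * tb ** (alpha / 3.0)
```

All three quantities are unchanged by the transformation, confirming that (37)–(39) are the correct similarity variables for the reduction.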

**Theorem 1.** *The transformations (37)–(39) reduce Equations (1) and (2) to the following ordinary differential equations (ODEs):*

$$\left(P_{\frac{3}{\alpha}}^{1-\frac{5\alpha}{3},\,\alpha}F\right)(X) + F_{XXX} + \frac{3}{2}GG_{XXX} + \frac{9}{2}G_XG_{XX} - 6FF_X - 6FGG_X - \frac{3}{2}F_XG^2 = 0 \tag{40}$$

$$\left(P_{\frac{3}{\alpha}}^{1-\frac{4\alpha}{3},\,\alpha}G\right)(X) + G_{XXX} - 6GF_X - 6FG_X - \frac{15}{2}G_XG^2 = 0 \tag{41}$$

*with the Erdélyi–Kober fractional differential operator $P_{\beta}^{\tau,\alpha}$:*

$$\left(P_{\beta}^{\tau,\alpha}F\right)(X) := \prod_{j=0}^{n-1}\left(\tau + j - \frac{1}{\beta}X\frac{d}{dX}\right)\left(K_{\beta}^{\tau+\alpha,\,n-\alpha}F\right)(X) \tag{42}$$

*and*

$$\left(P_{\beta}^{\tau,\alpha}G\right)(X) := \prod_{j=0}^{n-1}\left(\tau + j - \frac{1}{\beta}X\frac{d}{dX}\right)\left(K_{\beta}^{\tau+\alpha,\,n-\alpha}G\right)(X) \tag{43}$$

*where the Erdélyi–Kober fractional integral operator is defined by:*

$$\left(K_{\beta}^{\tau+\alpha,\,n-\alpha}F\right)(X) := \begin{cases} \dfrac{1}{\Gamma(\alpha)}\displaystyle\int_{1}^{\infty}(u-1)^{\alpha-1}u^{-(\tau+\alpha)}F\!\left(Xu^{\frac{1}{\beta}}\right)du, & \alpha > 0,\\[2mm] F(X), & \alpha = 0. \end{cases} \tag{44}$$

*and*

$$\left(K_{\beta}^{\tau+\alpha,\,n-\alpha}G\right)(X) := \begin{cases} \dfrac{1}{\Gamma(\alpha)}\displaystyle\int_{1}^{\infty}(u-1)^{\alpha-1}u^{-(\tau+\alpha)}G\!\left(Xu^{\frac{1}{\beta}}\right)du, & \alpha > 0,\\[2mm] G(X), & \alpha = 0. \end{cases} \tag{45}$$

*and*

$$n = \begin{cases} [\alpha]+1, & \alpha \notin \mathbb{N},\\ \alpha, & \alpha \in \mathbb{N}. \end{cases} \tag{46}$$
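The Erdélyi–Kober integral above can be evaluated numerically; for $F \equiv 1$ it reduces to a Beta integral with value $\Gamma(\tau)/\Gamma(\tau+\alpha)$, which provides a convenient correctness check. A sketch with generic parameters (the substitution $u = 1 + y^{1/\alpha}$ removes the endpoint singularity; function names are illustrative):

```python
import math

def ek_integral(F, X, tau, a, beta, n=20000):
    # numerically evaluate (1/Gamma(a)) * int_1^inf (u-1)^(a-1) u^-(tau+a) F(X*u^(1/beta)) du
    # substitution u = 1 + y^(1/a) removes the singularity at u = 1, then y = z/(1-z)
    total = 0.0
    for i in range(n):
        z = (i + 0.5) / n                 # midpoint rule on (0, 1)
        y = z / (1.0 - z)
        u = 1.0 + y ** (1.0 / a)
        total += u ** (-(tau + a)) * F(X * u ** (1.0 / beta)) / (1.0 - z) ** 2
    return total / (a * math.gamma(a) * n)

# check against the closed form Gamma(tau)/Gamma(tau + a) for F = 1
val = ek_integral(lambda s: 1.0, 1.0, 1.0, 0.5, 3.0)
```

With $\tau = 1$, $a = 0.5$ the exact value is $\Gamma(1)/\Gamma(1.5) \approx 1.128379$, independent of $X$ and $\beta$ since $F$ is constant.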

#### *4.4. Conservation Laws of Time-Fractional Coupled JM Equations*

Let us consider conservation vectors $C^1$ and $C^2$ for Equations (1) and (2), which satisfy the conservation equation:

$$\left[D_t(C^1) + D_x(C^2)\right]_{(1),(2)} = 0 \tag{47}$$

A Lagrangian of Equations (1) and (2) is:

$$\begin{aligned} \mathcal{L} &= \omega(x,t)\left(D_t^{\alpha}u + u_{xxx} + \tfrac{3}{2}vv_{xxx} + \tfrac{9}{2}v_xv_{xx} - 6uu_x - 6uvv_x - \tfrac{3}{2}u_xv^2\right)\\ &\quad + \gamma(x,t)\left(D_t^{\alpha}v + v_{xxx} - 6u_xv - 6uv_x - \tfrac{15}{2}v_xv^2\right) \end{aligned} \tag{48}$$

where $\gamma(x,t)$ and $\omega(x,t)$ are new dependent variables.

By considering Equation (48), the action integral can be defined as:

$$\int_{0}^{t}\int_{\Omega}\mathcal{L}\left(x, t, u, v, D_t^{\alpha}u, u_x, u_{xxx}, D_t^{\alpha}v, v_x, v_{xx}, v_{xxx}\right)dx\,dt \tag{49}$$

The Euler–Lagrange operators are given by

$$\frac{\delta}{\delta u} = \frac{\partial}{\partial u} + \left(D_t^{\alpha}\right)^{*}\frac{\partial}{\partial D_t^{\alpha}u} - D_x\frac{\partial}{\partial u_x} - D_x^{3}\frac{\partial}{\partial u_{xxx}} \tag{50}$$

and

$$\frac{\delta}{\delta v} = \frac{\partial}{\partial v} + \left(D_t^{\alpha}\right)^{*}\frac{\partial}{\partial D_t^{\alpha}v} - D_x\frac{\partial}{\partial v_x} + D_x^{2}\frac{\partial}{\partial v_{xx}} - D_x^{3}\frac{\partial}{\partial v_{xxx}} \tag{51}$$

where $(D_t^{\alpha})^{*} = (-1)^{n}\,{}_tI_T^{n-\alpha}D_t^{n}$ is the adjoint operator of $D_t^{\alpha}$.

The Euler–Lagrange equations are:

$$\frac{\delta \mathcal{L}}{\delta u} = 0, \text{ and } \frac{\delta \mathcal{L}}{\delta v} = 0 \tag{52}$$

Considering the case of the independent variables *t*, *x* and the dependent variables *v*(*x*, *t*), *u*(*x*, *t*), we have

$$\overline{X} + D_t(\tau)I + D_x(\xi)I = W_1\frac{\delta}{\delta u} + W_2\frac{\delta}{\delta v} + D_tC^1 + D_xC^2 \tag{53}$$

where $\frac{\delta}{\delta u}$ and $\frac{\delta}{\delta v}$ are the Euler–Lagrange operators, $I$ is the identity operator, and $C^1$ and $C^2$ are the conserved vectors.

The operator $\overline{X}$ is given as

$$\begin{aligned} \overline{X} &= \xi\frac{\partial}{\partial x} + \tau\frac{\partial}{\partial t} + \eta\frac{\partial}{\partial u} + \vartheta\frac{\partial}{\partial v} + \eta_{\alpha}^{0}\frac{\partial}{\partial D_t^{\alpha}u} + \vartheta_{\alpha}^{0}\frac{\partial}{\partial D_t^{\alpha}v}\\ &\quad + \eta^{x}\frac{\partial}{\partial u_x} + \eta^{xxx}\frac{\partial}{\partial u_{xxx}} + \vartheta^{x}\frac{\partial}{\partial v_x} + \vartheta^{xx}\frac{\partial}{\partial v_{xx}} + \vartheta^{xxx}\frac{\partial}{\partial v_{xxx}} \end{aligned} \tag{54}$$

The Lie characteristic functions $W_1$ and $W_2$ are:

$$\begin{aligned} W_1 &= \eta - \tau u_t - \xi u_x,\\ W_2 &= \vartheta - \tau v_t - \xi v_x \end{aligned}$$

Here, for $\mathbf{V}_1$, the Lie characteristic functions are:

$$\begin{aligned} W_1 &= -u_x,\\ W_2 &= -v_x \end{aligned} \tag{55}$$

Similarly, for $\mathbf{V}_2$, the Lie characteristic functions are:

$$\begin{aligned} W_1 &= -2u\alpha - x\alpha u_x - 3tu_t,\\ W_2 &= -v\alpha - x\alpha v_x - 3tv_t \end{aligned} \tag{56}$$

In case of RL fractional differentiation in Equations (1) and (2), the components of the conserved vector can be written as follows:

For $W_1 = -2u\alpha - x\alpha u_x - 3tu_t$ and $W_2 = -v\alpha - x\alpha v_x - 3tv_t$, we have

$$\begin{aligned} C^1 &= \tau\mathcal{L} + {}_0D_t^{\alpha-1}(W_1)\frac{\partial\mathcal{L}}{\partial D_t^{\alpha}u} + J\!\left(W_1,\,D_t\frac{\partial\mathcal{L}}{\partial D_t^{\alpha}u}\right) + {}_0D_t^{\alpha-1}(W_2)\frac{\partial\mathcal{L}}{\partial D_t^{\alpha}v} + J\!\left(W_2,\,D_t\frac{\partial\mathcal{L}}{\partial D_t^{\alpha}v}\right)\\ &= \omega\,{}_0D_t^{\alpha-1}(-2u\alpha - x\alpha u_x - 3tu_t) + J\big((-2u\alpha - x\alpha u_x - 3tu_t),\,\omega_t\big)\\ &\quad + \gamma\,{}_0D_t^{\alpha-1}(-v\alpha - x\alpha v_x - 3tv_t) + J\big((-v\alpha - x\alpha v_x - 3tv_t),\,\gamma_t\big). \end{aligned} \tag{57}$$

$$\begin{aligned}
C^2 &= \xi\mathcal{L} + W_1\!\left[\frac{\partial\mathcal{L}}{\partial u_x} + D_xD_x\frac{\partial\mathcal{L}}{\partial u_{xxx}}\right] + W_2\!\left[\frac{\partial\mathcal{L}}{\partial v_x} - D_x\frac{\partial\mathcal{L}}{\partial v_{xx}} + D_xD_x\frac{\partial\mathcal{L}}{\partial v_{xxx}}\right]\\
&\quad + D_x(W_1)\!\left[-D_x\frac{\partial\mathcal{L}}{\partial u_{xxx}}\right] + D_x(W_2)\!\left[\frac{\partial\mathcal{L}}{\partial v_{xx}} - D_x\frac{\partial\mathcal{L}}{\partial v_{xxx}}\right] + D_xD_x(W_1)\frac{\partial\mathcal{L}}{\partial u_{xxx}} + D_xD_x(W_2)\frac{\partial\mathcal{L}}{\partial v_{xxx}}\\
&= \frac{1}{2}\Big(\big(4\alpha v_x\gamma_x + 6\alpha u_x\omega_x + 9tv_tv_x\omega_x + 3x\alpha v_x^2\omega_x + 6t\omega_xu_{xt} + 6t\gamma_xv_{xt} + 9tv\omega_xv_{xt}\\
&\qquad + 2x\alpha(\omega_xu_{xx} + \gamma_xv_{xx}) + 3x\alpha v\omega_xv_{xx} - 2\alpha v\gamma_{xx} - 6tv_t\gamma_{xx} - 2x\alpha v_x\gamma_{xx} - 4\alpha u\omega_{xx}\\
&\qquad - 3\alpha v^2\omega_{xx} - 6tu_t\omega_{xx} - 9tv_x\omega_{xx} - 2x\alpha u_x\omega_{xx} + vv_x(9\alpha\omega_x - 3x\alpha\omega_{xx})\big)\\
&\qquad + \gamma\big(36\alpha uv + 15\alpha v^3 + 12v(3tu_t + x\alpha u_x) + 12u(3tv_t + x\alpha v_x) + 15v^2(3tv_t + x\alpha v_x)\\
&\qquad - 6\alpha v_{xx} - 6tv_{xxt} - 2x\alpha v_{xxx}\big) + \omega\big(24\alpha u^2 + 18\alpha uv^2 + 12u(3tu_t + x\alpha u_x)\\
&\qquad + 3v^2(3tu_t + x\alpha u_x) - 12\alpha v_x^2 + 12uv(3tv_t + x\alpha v_x) - 18tv_xv_{xt} - 8\alpha u_{xx} - 12\alpha vv_{xx}\\
&\qquad - 9tv_tv_{xx} - 9x\alpha v_xv_{xx} - 6tu_{xxt} - 9tvv_{xxt} - 2x\alpha u_{xxx} - 3x\alpha vv_{xxx}\big)\Big)
\end{aligned} \tag{58}$$

#### **5. Conclusions**

The fractional logistic function technique has been proposed for obtaining soliton solutions of the fractional JM system, and numerical simulations have been presented to analyze the physical nature of the obtained solutions. Moreover, the Lie group analysis technique has been applied to investigate the symmetry properties and conservation laws of the fractional Jaulent–Miodek system; the conservation laws are obtained by means of the new conservation theorem and a formal Lagrangian. These methods are relatively new and reliable for finding exact solutions, constructing conservation laws, and generating similarity solutions of FPDEs. Furthermore, they enrich the solution sets of such equations, which is of great significance for the study of FPDEs.

**Author Contributions:** Methodology, S.S.; validation, formal analysis, S.S.R.; software, investigation, M.A.M.A.; data curation, writing, original draft preparation, S.S.; writing, review and editing, M.I.; visualization, Y.-M.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work was supported by the National Natural Science Foundation of China (Grant Nos. 61673169, 11301127, 11701176, 11626101, 11601485).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **On Nonlinear Fractional Difference Equation with Delay and Impulses**

#### **Rujira Ouncharoen 1, Saowaluck Chasreechai 2,\* and Thanin Sitthiwirattham 3,\***


Received: 12 May 2020; Accepted: 5 June 2020; Published: 8 June 2020

**Abstract:** In this paper, we establish the existence results for a nonlinear fractional difference equation with delay and impulses. The Banach and Schauder's fixed point theorems are employed as tools to study the existence of its solutions. We obtain the theorems showing the conditions for existence results. Finally, we provide an example to show the applicability of our results.

**Keywords:** fractional difference equations; delay; impulses; existence

**MSC:** 39A05; 39A12

#### **1. Introduction**

Discrete fractional calculus has become an active field of study because some real-world phenomena can be described by fractional difference operators (see [1–3] and the references therein). Basic knowledge of fractional difference calculus can be found in [4]. Extensions of this field can be found in [5–37] and the references cited therein.

For the development of the theory of fractional difference equations, the discrete counterpart of fractional differential equations, there are still few publications. However, some recent papers study fractional difference equations with delay. In 2017, Kaewwisetkul et al. [38] studied boundary value problems for Caputo fractional functional difference equations with delay. In 2018, Wu et al. [39] proposed the finite-time stability of discrete fractional delay systems, Alzabut et al. [40] studied nonlinear delay fractional difference equations with applications to the discrete fractional Lotka–Volterra competition model, Alzabut et al. [41] investigated the uniqueness of solutions for a nonlinear delay fractional difference system, and Luo et al. [42] considered the uniqueness and finite-time stability of solutions for a class of nonlinear fractional delay difference systems.

In particular, fractional difference equations with delay and impulses have not been studied extensively. In 2018, Wu et al. [43] studied linear fractional delay difference equations with impulses. These results are incentives for our research. In this paper, we propose a nonlinear fractional difference equation with delay and impulses of the form:

$$\Delta_{C}^{\alpha}u(t) = F\left[t+\alpha-1,\,u_{t+\alpha-1},\,\Delta^{\beta}u(t+\alpha-\beta)\right],\ t\in\mathbb{N}_{0,T},\ t+\alpha-1\neq t_k,$$

$$\Delta u(t_k) = I_k\left(u_{t_k-1}\right),\ k = 1,2,\dots,p,\ t_{k+1}-t_k\geq 2,$$

$$u(t+\alpha-1) = \psi(t+\alpha-1),\ t\in\mathbb{N}_{-r,0},\ r\in\mathbb{N}_{0,T+1}, \tag{1}$$

where $\mathbb{N}_{0,T} := \{0,1,\dots,T\}$, $\alpha,\beta\in(0,1)$, $\Delta u(t_k) = u(t_k+1)-u(t_k)$, $t_0 = \alpha-1 < t_1 < t_2 < \dots < t_p < T+\alpha$, $F\in C\left(\mathbb{N}_{\alpha-1,T+\alpha}\times C_r\times\mathbb{R},\,\mathbb{R}\right)$, $I_k : C_r\to\mathbb{R}$, and $\psi$ is an element of the space:

$$C_r^{+}(\alpha-1) := \left\{\psi\in C_r : \psi(\alpha-1) = 0,\ \Delta_C^{\beta}\psi(s-\beta+1) = 0,\ s\in\mathbb{N}_{\alpha-r-1,\alpha-1}\right\}.$$

For $r\in\mathbb{N}_{0,T+1}$, let $C_r$ be the Banach space of all continuous functions $\psi : \mathbb{N}_{\alpha-r-1,\alpha-1}\to\mathbb{R}$ with the norm:

$$\|\psi\|_{C_r} = \max_{s\in\mathbb{N}_{\alpha-r-1,\alpha-1}}|\psi(s)|.$$

If $u : \mathbb{N}_{\alpha-r-1,T+\alpha}\to\mathbb{R}$, then for any $t\in\mathbb{N}_{\alpha-1,T+\alpha}$, we define the element $u_t$ of $C_r$ as,

$$u_t(\theta) = u(t+\theta)\ \text{for}\ \theta\in\mathbb{N}_{-r,0}.$$

We aim to prove the existence results to the problem of Equation (1) by using the Banach and Schauder's fixed point theorems. Finally, we present an example in the last section.

#### **2. Preliminaries**

In this section, we recall some notations, definitions, and lemmas used in the main results.

**Definition 1.** *The generalized falling function is defined by:*

$$t^{\underline{\alpha}} := \frac{\Gamma(t+1)}{\Gamma(t+1-\alpha)}.$$

*If $t+1-\alpha$ is a pole of the Gamma function and $t+1$ is not a pole, then $t^{\underline{\alpha}} = 0$.*

**Definition 2.** *For <sup>α</sup>* <sup>&</sup>gt; <sup>0</sup> *and f defined on* <sup>N</sup>*<sup>a</sup>* :<sup>=</sup> {*a*, *<sup>a</sup>* <sup>+</sup> 1, . . .}*, the <sup>α</sup>-order fractional sum of f is defined by:*

$$\Delta^{-\alpha}f(t) := \frac{1}{\Gamma(\alpha)}\sum_{s=a}^{t-\alpha}(t-\sigma(s))^{\underline{\alpha-1}}f(s),$$

*where $t\in\mathbb{N}_{a+\alpha}$ and $\sigma(s) = s+1$.*
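Definition 2 can be implemented directly. For $\alpha = 1$ the fractional sum reduces to the ordinary sum, and for $f\equiv 1$ it satisfies the known identity $\Delta^{-\alpha}1 = t^{\underline{\alpha}}/\Gamma(\alpha+1)$, which gives two checks. A sketch on the shifted mesh $\mathbb{N}_{a+\alpha}$ (function names are illustrative):

```python
import math

def falling(t, a):
    # generalized falling function of Definition 1: t^(a) = Gamma(t+1) / Gamma(t+1-a)
    return math.gamma(t + 1.0) / math.gamma(t + 1.0 - a)

def frac_sum(f, alpha, a, t):
    # alpha-order fractional sum of Definition 2, with sigma(s) = s + 1:
    # (Delta^-alpha f)(t) = (1/Gamma(alpha)) * sum_{s=a}^{t-alpha} (t - s - 1)^(alpha-1) f(s)
    total, s = 0.0, a
    while s <= t - alpha + 1e-9:
        total += falling(t - s - 1.0, alpha - 1.0) * f(s)
        s += 1.0
    return total / math.gamma(alpha)
```

For example, `frac_sum(lambda s: 1.0, 1.0, 0.0, 5.0)` is the plain sum of five ones, while the half-order sum of $f\equiv 1$ at $t = 3.5$ matches $t^{\underline{1/2}}/\Gamma(3/2)$.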

**Definition 3.** *For $\alpha > 0$, let $N\in\mathbb{N}$ satisfy $0\leq N-1 < \alpha\leq N$, and let $f$ be defined on $\mathbb{N}_a$. The α-order Riemann–Liouville fractional difference of f is defined by:*

$$\Delta^{\alpha}f(t) := \Delta^{N}\Delta^{-(N-\alpha)}f(t) = \frac{1}{\Gamma(-\alpha)}\sum_{s=a}^{t+\alpha}(t-\sigma(s))^{\underline{-\alpha-1}}f(s),$$

*where $t\in\mathbb{N}_{a+N-\alpha}$. The α-order Caputo fractional difference of f is defined by:*

$$\Delta_C^{\alpha}f(t) := \Delta^{-(N-\alpha)}\Delta^{N}f(t) = \frac{1}{\Gamma(N-\alpha)}\sum_{s=a}^{t-(N-\alpha)}(t-\sigma(s))^{\underline{N-\alpha-1}}\Delta^{N}f(s),$$

*where $t\in\mathbb{N}_{a+N-\alpha}$. If $\alpha = N$, then $\Delta^{\alpha}f(t) = \Delta_C^{\alpha}f(t) = \Delta^{N}f(t)$.*

**Lemma 1.** *[5] Assume that $\alpha > 0$ and $y$ is defined on $\mathbb{N}_a$. Then,*

$$\Delta^{-\alpha}\Delta_C^{\alpha}y(t) = y(t) + C_0 + C_1(t-a)^{\underline{1}} + C_2(t-a)^{\underline{2}} + \dots + C_{N-1}(t-a)^{\underline{N-1}},$$

*for some $C_i\in\mathbb{R}$, $0\leq i\leq N-1$, and $0\leq N-1 < \alpha\leq N$.*
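Lemma 1 can be verified numerically in the case $N = 1$: composing the Caputo difference of Definition 3 with the α-order sum of Definition 2 returns $y(t) + C_0$ with $C_0 = -y(a)$ here, since $\Delta_C^{\alpha}$ annihilates constants. A self-contained sketch (mesh handling follows the domains stated in the definitions; names are illustrative):

```python
import math

def falling(t, a):
    return math.gamma(t + 1.0) / math.gamma(t + 1.0 - a)

def caputo_diff(y, alpha, a, t):
    # Definition 3 with N = 1: t lies on the shifted mesh N_{a + 1 - alpha}
    total, s = 0.0, a
    while s <= t - (1.0 - alpha) + 1e-9:
        total += falling(t - s - 1.0, -alpha) * (y(s + 1.0) - y(s))
        s += 1.0
    return total / math.gamma(1.0 - alpha)

def frac_sum(g, alpha, b, t):
    # Definition 2 with base point b: t lies on the mesh N_{b + alpha}
    total, s = 0.0, b
    while s <= t - alpha + 1e-9:
        total += falling(t - s - 1.0, alpha - 1.0) * g(s)
        s += 1.0
    return total / math.gamma(alpha)

alpha, a = 0.5, 0.0
y = lambda s: s * s
g = lambda s: caputo_diff(y, alpha, a, s)         # defined on N_{a + 1 - alpha}
recovered = [frac_sum(g, alpha, a + 1.0 - alpha, t) for t in (1.0, 2.0, 3.0)]
```

With $y(s) = s^2$ and $y(a) = 0$, the composition reproduces $y(t)$ exactly at the integer mesh points, illustrating Lemma 1 with $C_0 = 0$.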

Next, we aim to find a solution of the linear variant of the mixed problem in Equation (1) as follows.

**Lemma 2.** *Let $\alpha\in(0,1)$, $h\in C\left(\mathbb{N}_{\alpha-1,T+\alpha},\mathbb{R}\right)$, $I_k : C_r\to\mathbb{R}$ and $\psi\in C_r^{+}(\alpha-1)$ be given. Then the problem*

$$\Delta_C^{\alpha}u(t) = h(t+\alpha-1),\ t\in\mathbb{N}_{0,T} := \{0,1,\dots,T\},\ t+\alpha-1\neq t_k,$$

$$\Delta u(t_k) = I_k\left(u_{t_k-1}\right),\ k = 1,2,\dots,p,\ t_{k+1}-t_k\geq 2,$$

$$u(t+\alpha-1) = \psi(t+\alpha-1),\ t\in\mathbb{N}_{-r,0},\ r\in\mathbb{N}_{0,T+1}. \tag{2}$$

*has the unique solution of the form:*

$$u(t) = \begin{cases} \dfrac{1}{\Gamma(\alpha)}\displaystyle\sum_{s=t_0}^{t-1}(t-s+\alpha-2)^{\underline{\alpha-1}}h(s), & t\in\mathbb{N}_{t_0,t_1}\\[2mm] \displaystyle\sum_{i=1}^{k}I_i\left(u_{t_i-1}\right) + \dfrac{1}{\Gamma(\alpha)}\displaystyle\sum_{i=1}^{k}\sum_{s=t_{i-1}}^{t_i-1}(t_i-s+\alpha-2)^{\underline{\alpha-1}}h(s)\\[2mm] \quad + \dfrac{1}{\Gamma(\alpha)}\displaystyle\sum_{s=t_k}^{t-1}(t-s+\alpha-2)^{\underline{\alpha-1}}h(s), & t\in\mathbb{N}_{t_k+1,t_{k+1}}\\[2mm] \psi(t), & t\in\mathbb{N}_{\alpha-r-1,\alpha-1} \end{cases} \tag{3}$$

*where* Δ*u*(*tk*) = *u*(*tk* + 1) − *u*(*tk*), *t*<sup>0</sup> = *α* − 1 < *t*<sup>1</sup> < *t*<sup>2</sup> < ... < *tp* < *T* + *α*.

**Proof.** For $t\in\mathbb{N}_{t_0,t_1}$, taking the fractional sum of order α of Equation (2) and applying Lemma 1, we have:

$$u(t) = \psi(\alpha-1) + \frac{1}{\Gamma(\alpha)}\sum_{s=0}^{t-\alpha}(t-\sigma(s))^{\underline{\alpha-1}}h(s+\alpha-1). \tag{4}$$

From $\psi(\alpha-1) = 0$, we can write Equation (4) as:

$$u(t) = \frac{1}{\Gamma(\alpha)}\sum_{s=t_0}^{t-1}(t-s+\alpha-2)^{\underline{\alpha-1}}h(s). \tag{5}$$

By substituting *t* = *t*<sup>1</sup> into Equation (5), we have:

$$u(t_1) = \frac{1}{\Gamma(\alpha)}\sum_{s=t_0}^{t_1-1}(t_1-s+\alpha-2)^{\underline{\alpha-1}}h(s). \tag{6}$$

If $t\in\mathbb{N}_{t_1+1,t_2}$, then we get:

$$\begin{aligned} u(t) &= u(t_1+1) + \frac{1}{\Gamma(\alpha)}\sum_{s=t_1}^{t-1}(t-s+\alpha-2)^{\underline{\alpha-1}}h(s)\\ &= \Delta u(t_1) + u(t_1) + \frac{1}{\Gamma(\alpha)}\sum_{s=t_1}^{t-1}(t-s+\alpha-2)^{\underline{\alpha-1}}h(s). \end{aligned}$$

Substituting $u(t_1)$ from Equation (6) into the above equation, we obtain:

$$u(t) = I_1\left(u_{t_1-1}\right) + \frac{1}{\Gamma(\alpha)}\sum_{s=t_0}^{t_1-1}(t_1-s+\alpha-2)^{\underline{\alpha-1}}h(s) + \frac{1}{\Gamma(\alpha)}\sum_{s=t_1}^{t-1}(t-s+\alpha-2)^{\underline{\alpha-1}}h(s). \tag{7}$$

If $t\in\mathbb{N}_{t_2+1,t_3}$, then we have:

$$\begin{aligned} u(t) &= u(t_2+1) + \frac{1}{\Gamma(\alpha)}\sum_{s=t_2}^{t-1}(t-s+\alpha-2)^{\underline{\alpha-1}}h(s)\\ &= \Delta u(t_2) + u(t_2) + \frac{1}{\Gamma(\alpha)}\sum_{s=t_2}^{t-1}(t-s+\alpha-2)^{\underline{\alpha-1}}h(s). \end{aligned}$$

Substituting $u(t_2)$ from Equation (7) into the above equation, we obtain:

$$\begin{aligned} u(t) &= I_2\left(u_{t_2-1}\right) + \left[I_1\left(u_{t_1-1}\right) + \frac{1}{\Gamma(\alpha)}\sum_{s=t_0}^{t_1-1}(t_1-s+\alpha-2)^{\underline{\alpha-1}}h(s)\right.\\ &\quad\left. + \frac{1}{\Gamma(\alpha)}\sum_{s=t_1}^{t_2-1}(t_2-s+\alpha-2)^{\underline{\alpha-1}}h(s)\right] + \frac{1}{\Gamma(\alpha)}\sum_{s=t_2}^{t-1}(t-s+\alpha-2)^{\underline{\alpha-1}}h(s). \end{aligned} \tag{8}$$

By using the recursive process, we obtain the solution $u(t)$ for $t \in \mathbb{N}_{t_k+1,t_{k+1}}$ ($k = 1, 2, ..., p$) as given by:

$$u(t) = \sum_{i=1}^{k} I_i\left(u_{t_i-1}\right) + \frac{1}{\Gamma(\alpha)} \sum_{i=1}^{k}\sum_{s=t_{i-1}}^{t_i-1} (t_i - s + \alpha - 2)^{\underline{\alpha-1}}\, h(s) + \frac{1}{\Gamma(\alpha)} \sum_{s=t_k}^{t-1} (t - s + \alpha - 2)^{\underline{\alpha-1}}\, h(s). \tag{9}$$

Obviously, for each $t \in \mathbb{N}_{\alpha-r-1,\alpha-1}$, we have $u(t) = \psi(t)$.
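The fractional sums in Equations (5)–(9) reduce to Gamma-function arithmetic through the generalized falling factorial $x^{\underline{\nu}} = \Gamma(x+1)/\Gamma(x-\nu+1)$. The following Python sketch is ours, for illustration on an integer grid $t_0, t_0+1, \dots$ (it is not part of the original paper); it evaluates the right-hand side of Equation (5):

```python
import math

def falling(x, nu):
    # Generalized falling factorial: x^(nu) = Gamma(x + 1) / Gamma(x - nu + 1)
    return math.gamma(x + 1) / math.gamma(x - nu + 1)

def frac_sum(h, alpha, t0, t):
    # Right-hand side of Eq. (5):
    # (1 / Gamma(alpha)) * sum_{s = t0}^{t - 1} (t - s + alpha - 2)^(alpha - 1) * h(s)
    return sum(falling(t - s + alpha - 2, alpha - 1) * h(s)
               for s in range(t0, t)) / math.gamma(alpha)
```

A quick sanity check: for $\alpha = 1$ the kernel equals 1 and the expression reduces to the ordinary sum $\sum_{s=t_0}^{t-1} h(s)$.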

#### **3. Existence and Uniqueness Result**

In this section, we employ the Banach fixed point theorem to establish the existence and uniqueness of a solution for the problem in Equation (1). Define the Banach space:

$$\mathcal{X} := \left\{ u : u \in C(\mathbb{N}_{\alpha-r-1,T+\alpha}, \mathbb{R}),\ \Delta^{\beta}u \in C(\mathbb{N}_{\alpha-\beta-r,T+\alpha-\beta+1}, \mathbb{R}),\ 0 < \beta < 1 \right\}$$

with the norm defined by:

$$\|u\|_{\mathcal{X}} = \|u\| + \|\Delta^{\beta}u\|, \tag{10}$$

where $\|u\| = \max_{t\in\mathbb{N}_{\alpha-r-1,T+\alpha}} |u(t)|$ and $\|\Delta^{\beta}u\| = \max_{t\in\mathbb{N}_{\alpha-r-1,T+\alpha}} \left|\Delta^{\beta}u(t-\beta+1)\right|$.

In view of the definitions of $u_t$ and $\psi$, we have:

$$u_{\alpha-1} = u_{\alpha-1}(\theta) = u(\theta + \alpha - 1) = \psi(\theta + \alpha - 1) \text{ for } \theta \in \mathbb{N}_{-r,0}. \tag{11}$$

Thus, we obtain:

$$u(t) = \psi(t) \text{ for } t \in \mathbb{N}_{\alpha-r-1,\alpha-1}. \tag{12}$$

Next, define an operator $\mathcal{T} : \mathcal{X} \to \mathcal{X}$ as:

$$(\mathcal{T}u)(t) := \begin{cases} \frac{1}{\Gamma(\alpha)} \displaystyle\sum_{s=t_0}^{t-1} (t - s + \alpha - 2)^{\underline{\alpha-1}}\, F\left[s, u_s, \Delta^{\beta}u(s-\beta+1)\right], & t \in \mathbb{N}_{t_0,t_1}, \\ \displaystyle\sum_{i=1}^{k} I_i\left(u_{t_i-1}\right) + \frac{1}{\Gamma(\alpha)} \sum_{i=1}^{k}\sum_{s=t_{i-1}}^{t_i-1} (t_i - s + \alpha - 2)^{\underline{\alpha-1}}\, F\left[s, u_s, \Delta^{\beta}u(s-\beta+1)\right] & \\ \quad + \frac{1}{\Gamma(\alpha)} \displaystyle\sum_{s=t_k}^{t-1} (t - s + \alpha - 2)^{\underline{\alpha-1}}\, F\left[s, u_s, \Delta^{\beta}u(s-\beta+1)\right], & t \in \mathbb{N}_{t_k+1,t_{k+1}}, \\ \psi(t), & t \in \mathbb{N}_{\alpha-r-1,\alpha-1}, \end{cases} \tag{13}$$

where $k = 1, 2, ..., p$, $t_{k+1} - t_k \ge 2$, and $t_0 = \alpha - 1 < t_1 < t_2 < ... < t_p < T + \alpha$.

First, we recall some basic notions used in this section.

**Definition 4.** *A mapping S from a subset M of a Banach space X into X is called a contraction mapping (or simply a contraction) if there exists a positive number α* < 1 *such that:*

$$\|S(x) - S(y)\|_X \le \alpha \|x - y\|_X \text{ for all } x, y \in M.$$

**Lemma 3.** *[44] (Banach fixed point theorem) Let M be a closed subset of a Banach space X and let S be a contraction mapping from M into M. Then there exists a unique z* ∈ *M such that S*(*z*) = *z.*
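Lemma 3 also suggests a computation: the fixed point is the limit of the Picard iterates $x_{n+1} = S(x_n)$, and the contraction constant governs the convergence rate. A minimal numerical sketch in Python (ours, with a standard scalar example; not taken from the paper):

```python
import math

def banach_iterate(S, x0, tol=1e-12, max_iter=1000):
    # Picard iteration x_{n+1} = S(x_n); converges to the unique
    # fixed point when S is a contraction (Lemma 3).
    x = x0
    for _ in range(max_iter):
        x_new = S(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# S(x) = cos(x) is a contraction on [0, 1] since |S'(x)| <= sin(1) < 1
fixed_point = banach_iterate(math.cos, 0.5)
```

The same iteration, applied to the operator $\mathcal{T}$ in Equation (13), would produce successive approximations of the solution once (H1)–(H3) hold.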

If one can prove that $\mathcal{T}$ has a fixed point, then we can conclude that the problem of Equation (1) has a solution.

**Theorem 1.** *Assume the following properties:*

(*H*1) *There exists a constant* $\ell > 0$ *such that:*

$$\left| F[t, u_1, u_2] - F[t, v_1, v_2] \right| \le \ell \left( |u_1 - v_1| + |u_2 - v_2| \right),$$

*for each* $u_1, v_1 \in C_r$ *and* $u_2, v_2 \in \mathbb{R}$.

(*H*2) *There exists a constant λ* > 0 *such that*

$$\left| I_k\left(u_{t_k-1}\right) - I_k\left(v_{t_k-1}\right) \right| \le \lambda |u_t - v_t|,$$

*for each* $u_t, v_t \in C_r$ *and* $k = 1, 2, ..., p$.

(*H*3) $\left[\dfrac{p(p+1)}{2}\lambda + \ell\left(\dfrac{p(p+1)}{2}+1\right)\dfrac{(T+\alpha)^{\underline{\alpha}}}{\Gamma(\alpha+1)}\right]\left[1 + \dfrac{(T+\alpha-\beta-1)^{\underline{-\beta}}}{\Gamma(1-\beta)}\right] < 1.$

*Then, the problem of Equation* (1) *has a unique solution.*

**Proof.** We will show that $\mathcal{T}$ is a contraction. Let

$$|\mathcal{H}(u-v)|(t) = \left| F\left[t, u_t, \Delta^{\beta}u(t-\beta+1)\right] - F\left[t, v_t, \Delta^{\beta}v(t-\beta+1)\right] \right|$$

for each $t \in \mathbb{N}_{\alpha-1,T+\alpha}$. Then, we obtain:

$$\begin{aligned} \left|(\mathcal{T}u)(t) - (\mathcal{T}v)(t)\right| &\le \sum_{i=1}^{p} \left| I_i(u_{t_i-1}) - I_i(v_{t_i-1}) \right| + \frac{1}{\Gamma(\alpha)}\sum_{i=1}^{k}\sum_{s=t_{i-1}}^{t_i-1}(t_i - s + \alpha - 2)^{\underline{\alpha-1}}\, |\mathcal{H}(u-v)|(s) \\ &\quad + \frac{1}{\Gamma(\alpha)}\sum_{s=t_k}^{t-1}(t - s + \alpha - 2)^{\underline{\alpha-1}}\, |\mathcal{H}(u-v)|(s) \\ &\le \frac{p(p+1)}{2}\lambda |u_t - v_t| + \frac{\ell\left(|u_t - v_t| + \left|\Delta^{\beta}u(t-\beta+1) - \Delta^{\beta}v(t-\beta+1)\right|\right)}{\Gamma(\alpha+1)} \left(\frac{p(p+1)}{2}+1\right)(T+\alpha)^{\underline{\alpha}} \\ &\le \left[\frac{p(p+1)}{2}\lambda + \ell\left(\frac{p(p+1)}{2}+1\right)\frac{(T+\alpha)^{\underline{\alpha}}}{\Gamma(\alpha+1)}\right] \|u-v\|_{\mathcal{X}}. \end{aligned} \tag{14}$$

Taking the fractional difference of order $\beta$ of Equation (13) and substituting $t = t - \beta + 1$, we get:


For each $t \in \mathbb{N}_{\alpha-1,T+\alpha}$, we obtain:

$$\begin{aligned} &\left|\Delta^{\beta}(\mathcal{T}u)(t-\beta+1) - \Delta^{\beta}(\mathcal{T}v)(t-\beta+1)\right| \\ &\le \frac{1}{\Gamma(-\beta)}\sum_{s=t_0}^{T+\alpha+1}(T+\alpha-\beta-s)^{\underline{-\beta-1}}\sum_{i=1}^{p}\left|I_i\left(u_{t_i-1}\right) - I_i\left(v_{t_i-1}\right)\right| \\ &\quad + \frac{1}{\Gamma(\alpha)\Gamma(-\beta)}\sum_{\xi=t_0}^{T+\alpha+1}\sum_{i=1}^{k}\sum_{s=t_{i-1}}^{t_i-1}(T+\alpha-\beta-\xi)^{\underline{-\beta-1}}(t_i-s+\alpha-2)^{\underline{\alpha-1}}\, |\mathcal{H}(u-v)|(s) \\ &\quad + \frac{1}{\Gamma(\alpha)\Gamma(-\beta)}\sum_{\xi=t_0+1}^{T+\alpha+1}\sum_{s=t_k}^{\xi-1}(T+\alpha-\beta-\xi)^{\underline{-\beta-1}}(\xi-s+\alpha-2)^{\underline{\alpha-1}}\, |\mathcal{H}(u-v)|(s) \\ &\le \frac{(T+\alpha-\beta-1)^{\underline{-\beta}}}{\Gamma(1-\beta)}\left\{\frac{p(p+1)}{2}\lambda|u_t-v_t| + \frac{\ell\left(|u_t-v_t| + \left|\Delta^{\beta}u(t-\beta+1)-\Delta^{\beta}v(t-\beta+1)\right|\right)}{\Gamma(\alpha+1)}\left(\frac{p(p+1)}{2}+1\right)(T+\alpha)^{\underline{\alpha}}\right\} \\ &\le \frac{(T+\alpha-\beta-1)^{\underline{-\beta}}}{\Gamma(1-\beta)}\left[\frac{p(p+1)}{2}\lambda + \ell\left(\frac{p(p+1)}{2}+1\right)\frac{(T+\alpha)^{\underline{\alpha}}}{\Gamma(\alpha+1)}\right]\|u-v\|_{\mathcal{X}}. \end{aligned} \tag{16}$$

Obviously, for each $t \in \mathbb{N}_{\alpha-r-1,\alpha-1}$, we get $(\mathcal{T}u)(t) - (\mathcal{T}v)(t) = 0$. Therefore, we have:

$$\begin{aligned} &\left\|(\mathcal{T}u) - (\mathcal{T}v)\right\|_{\mathcal{X}} \\ &\le \left[\frac{p(p+1)}{2}\lambda + \ell\left(\frac{p(p+1)}{2}+1\right)\frac{(T+\alpha)^{\underline{\alpha}}}{\Gamma(\alpha+1)}\right]\left[1 + \frac{(T+\alpha-\beta-1)^{\underline{-\beta}}}{\Gamma(1-\beta)}\right]\|u-v\|_{\mathcal{X}}. \end{aligned} \tag{17}$$

By (*H*3), $\mathcal{T}$ is a contraction. Therefore, by the Banach fixed point theorem, $\mathcal{T}$ has a fixed point, which is the unique solution of the problem in Equation (1).

#### **4. Existence of at Least One Solution**

In this section, we establish the existence of at least one solution of Equation (1) by using Schauder's fixed point theorem. First, we recall some basic results used in this section.

**Lemma 4.** *[44] (Arzelá–Ascoli theorem) A set of functions in C*[*a*, *b*] *with the sup norm is relatively compact if and only if it is uniformly bounded and equicontinuous on* [*a*, *b*]*.*

**Lemma 5.** *[44] A bounded set in* $\mathbb{R}^n$ *is relatively compact, and a closed bounded set in* $\mathbb{R}^n$ *is compact.*

**Lemma 6.** *[45] (Schauder's fixed point theorem) Let* (*D*, *d*) *be a complete metric space, U be a closed convex subset of D, and T* : *D* → *D be a map such that the set* $\{Tu : u \in U\}$ *is relatively compact in D. Then the operator T has at least one fixed point* $u^* \in U$: $Tu^* = u^*$*.*

The following notations are used in the sequel.

$$\Theta = \max_{t\in\mathbb{N}_{\alpha-1,T+\alpha}}\left\{\left(\frac{1}{2}p(p+1)+1\right)\frac{(T+\alpha)^{\underline{\alpha}}}{\Gamma(\alpha+1)}\,\varphi(t)\right\} \tag{18}$$

$$\widetilde{\Theta} = \max_{t\in\mathbb{N}_{\alpha-1,T+\alpha}}\left\{\left(\frac{1}{2}p(p+1)+1\right)\frac{(T+\alpha)^{\underline{\alpha}}(T+\alpha-\beta+1)^{\underline{-\beta}}}{\Gamma(\alpha+1)\Gamma(1-\beta)}\,\varphi(t)\right\} \tag{19}$$

$$\mathcal{Y} = \left(\frac{1}{2}p(p+1)+1\right)\frac{(T+\alpha)^{\underline{\alpha}}}{\Gamma(\alpha+1)}\left[1 + \frac{(T+\alpha-\beta+1)^{\underline{-\beta}}}{\Gamma(1-\beta)}\right]. \tag{20}$$

**Theorem 2.** *Assume the following properties:*

(*H*4) *There exists a nonnegative function* $\varphi \in C(\mathbb{N}_{\alpha-1,T+\alpha})$ *such that:*

$$\left| F[t, x, y] \right| \le \varphi(t) + \lambda_1 |x|^{\chi_1} + \lambda_2 |y|^{\chi_2}$$

*for each* $x \in C_r$, $y \in \mathbb{R}$, *where* $\lambda_1, \lambda_2$ *are nonnegative constants and* $0 < \chi_1, \chi_2 < 1$*; or*

(*H*5) *there exists a nonnegative function* $\varphi \in C(\mathbb{N}_{\alpha-1,T+\alpha})$ *such that:*

$$\left| F[t, x, y] \right| \le \varphi(t) + \lambda_1 |x|^{\chi_1} + \lambda_2 |y|^{\chi_2}$$

*for each* $x \in C_r$, $y \in \mathbb{R}$, *where* $\lambda_1, \lambda_2$ *are nonnegative constants and* $\chi_1, \chi_2 > 1$*.*

*Then the boundary value problem of Equation* (1) *has at least one solution.*

**Proof.** The proof is organized into three steps as follows.

**Step I.** We verify that $\mathcal{T}$ maps bounded sets into bounded sets. Let $\max |I_k(u_{t_k-1})| = N$ for $k = 1, 2, ..., p$. Suppose that (*H*4) holds; we choose a constant:

$$R \ge \max\left\{ 3\left(\frac{1}{2}p(p+1)N\left[\frac{(T+\alpha-\beta+1)^{\underline{-\beta}}}{\Gamma(1-\beta)}+1\right]+\Theta+\widetilde{\Theta}\right),\ \left(3\lambda_1\mathcal{Y}\right)^{\frac{1}{1-\chi_1}},\ \left(3\lambda_2\mathcal{Y}\right)^{\frac{1}{1-\chi_2}}\right\}, \tag{21}$$

and define the set $\mathcal{P} = \left\{ u \in \mathcal{X} : \|u\|_{\mathcal{X}} \le R \right\}$, $R > 0$. For any $u \in \mathcal{P}$, we have:

$$\begin{aligned} \left|(\mathcal{T}u)(t)\right| &\le \sum_{i=1}^{p}\left|I_i(u_{t_i-1})\right| + \frac{1}{\Gamma(\alpha)}\sum_{i=1}^{p}\sum_{s=t_{i-1}}^{t_i-1}(t_i-s+\alpha-2)^{\underline{\alpha-1}}\left[\varphi(s) + \lambda_1|u_s|^{\chi_1} + \lambda_2\left|\Delta^{\beta}u(s-\beta+1)\right|^{\chi_2}\right] \\ &\quad + \frac{1}{\Gamma(\alpha)}\sum_{s=t_k}^{t-1}(t-s+\alpha-2)^{\underline{\alpha-1}}\left[\varphi(s) + \lambda_1|u_s|^{\chi_1} + \lambda_2\left|\Delta^{\beta}u(s-\beta+1)\right|^{\chi_2}\right] \\ &\le \frac{1}{2}p(p+1)N + \Theta + \left(\frac{1}{2}p(p+1)+1\right)\frac{(T+\alpha)^{\underline{\alpha}}}{\Gamma(\alpha+1)}\left[\lambda_1|u_s|^{\chi_1} + \lambda_2\left|\Delta^{\beta}u(s-\beta+1)\right|^{\chi_2}\right], \end{aligned}$$

and

$$\begin{aligned} \left|\left(\Delta^{\beta}\mathcal{T}u\right)(t-\beta+1)\right| &\le \frac{1}{\Gamma(-\beta)}\sum_{s=\alpha-1}^{t+1}(t-\beta-s)^{\underline{-\beta-1}}\sum_{i=1}^{p}\left|I_i\left(u_{t_i-1}\right)\right| \\ &\quad + \frac{1}{\Gamma(\alpha)\Gamma(-\beta)}\sum_{\xi=t_0}^{t+1}\sum_{i=1}^{p}\sum_{s=t_{i-1}}^{t_i-1}(t-\beta-\xi)^{\underline{-\beta-1}}(t_i-s+\alpha-2)^{\underline{\alpha-1}}\left[\varphi(s) + \lambda_1|u_s|^{\chi_1} + \lambda_2\left|\Delta^{\beta}u(s-\beta+1)\right|^{\chi_2}\right] \\ &\quad + \frac{1}{\Gamma(\alpha)\Gamma(-\beta)}\sum_{\xi=t_0+1}^{t+1}\sum_{s=t_k}^{\xi-1}(t-\beta-\xi)^{\underline{-\beta-1}}(\xi-s+\alpha-2)^{\underline{\alpha-1}}\left[\varphi(s) + \lambda_1|u_s|^{\chi_1} + \lambda_2\left|\Delta^{\beta}u(s-\beta+1)\right|^{\chi_2}\right] \\ &\le \frac{1}{2}p(p+1)N\frac{(T+\alpha-\beta+1)^{\underline{-\beta}}}{\Gamma(1-\beta)} + \widetilde{\Theta} + \left(\frac{1}{2}p(p+1)+1\right)\frac{(T+\alpha)^{\underline{\alpha}}(T+\alpha-\beta+1)^{\underline{-\beta}}}{\Gamma(\alpha+1)\Gamma(1-\beta)}\left[\lambda_1|u_s|^{\chi_1} + \lambda_2\left|\Delta^{\beta}u(s-\beta+1)\right|^{\chi_2}\right]. \end{aligned}$$

Hence, we have:

$$\begin{aligned} \|\mathcal{T}u\|_{\mathcal{X}} &\le \frac{1}{2}p(p+1)N\left[\frac{(T+\alpha-\beta+1)^{\underline{-\beta}}}{\Gamma(1-\beta)}+1\right]+\Theta+\widetilde{\Theta} + \mathcal{Y}\left[\lambda_1|u_s|^{\chi_1} + \lambda_2\left|\Delta^{\beta}u(s-\beta+1)\right|^{\chi_2}\right] \\ &\le \frac{R}{3}+\mathcal{Y}\left[\lambda_1|u_s|^{\chi_1} + \lambda_2\left|\Delta^{\beta}u(s-\beta+1)\right|^{\chi_2}\right] \\ &\le \frac{R}{3}+\frac{R}{3}+\frac{R}{3}=R. \end{aligned} \tag{22}$$

This implies that T : P→P.

For the second case, if (*H*5) holds, we choose a constant:

$$R \ge \max\left\{ 3\left(\frac{1}{2}p(p+1)N\left[\frac{(T+\alpha-\beta+1)^{\underline{-\beta}}}{\Gamma(1-\beta)}+1\right]+\Theta+\widetilde{\Theta}\right),\ \left(\frac{1}{3\lambda_1\mathcal{Y}}\right)^{\frac{1}{\chi_1-1}},\ \left(\frac{1}{3\lambda_2\mathcal{Y}}\right)^{\frac{1}{\chi_2-1}}\right\}. \tag{23}$$

Similarly, we find that:

$$\|\mathcal{T}u\|_{\mathcal{X}} \le R, \tag{24}$$

which implies that T : P→P.

**Step II.** The operator $\mathcal{T}$ is continuous on $\mathcal{P}$, since *F* is continuous.

**Step III.** We prove that $\mathcal{T}$ is equicontinuous on $\mathcal{P}$. For any $\epsilon > 0$, there exist positive constants $\delta_1 = \max\{\delta_{11}, \delta_{12}\}$, $\delta_2$, and $\delta_3 = \max\{\delta_{31}, \delta_{32}, \delta_{33}\}$ such that:

(*i*) for $\tau_1, \tau_2 \in \mathbb{N}_{\alpha-1,T+\alpha}$ and $\tau_1 < \tau_2$,

$$\left|\tau_2^{\underline{\alpha}} - \tau_1^{\underline{\alpha}}\right| < \frac{\epsilon\,\Gamma(\alpha+1)}{2M}, \text{ whenever } |\tau_2 - \tau_1| < \delta_{11},$$

$$\left|(\tau_2-\beta+1)^{\underline{-\beta}} - (\tau_1-\beta+1)^{\underline{-\beta}}\right| < \frac{\epsilon\,\Gamma(\alpha+1)\Gamma(1-\beta)}{2M}, \text{ whenever } |\tau_2 - \tau_1| < \delta_{12};$$

(*ii*) for $\tau_1, \tau_2 \in \mathbb{N}_{\alpha-r-1,\alpha-1}$ and $\tau_1 < \tau_2$,

$$|\psi(\tau_2) - \psi(\tau_1)| < \epsilon, \text{ whenever } |\tau_2 - \tau_1| < \delta_2;$$

(*iii*) for $\tau_1 \in \mathbb{N}_{\alpha-r-1,\alpha-1}$ and $\tau_2 \in \mathbb{N}_{\alpha,T+\alpha}$,

$$\left|\tau_2^{\underline{\alpha}}\right| < \frac{\epsilon\,\Gamma(\alpha+1)}{3M}, \text{ whenever } |\tau_2 - \tau_1| < \delta_{31},$$

$$|\psi(\tau_1)| < \frac{\epsilon}{3}, \text{ whenever } |\tau_2 - \tau_1| < \delta_{32},$$

$$\left|(\tau_2-\beta+1)^{\underline{-\beta}}\right| < \frac{\epsilon\,\Gamma(\alpha+1)\Gamma(1-\beta)}{3M}, \text{ whenever } |\tau_2 - \tau_1| < \delta_{33}.$$

Let $M = \max_{t\in\mathbb{N}_{\alpha-1,T+\alpha}}\left|F\left[t, u_t, \Delta^{\beta}u(t-\beta+1)\right]\right|$. Then, for $u \in \mathcal{P}$ and $\tau_1, \tau_2 \in \mathbb{N}_{\alpha-r-1,T+\alpha}$, we consider the following three cases.

**Case 1.** If $\tau_1, \tau_2 \in \mathbb{N}_{t_0,t_1} \cup \mathbb{N}_{t_k+1,t_{k+1}}$, $t_{k+1} - t_k \ge 2$, $k = 1, 2, ..., p$, and $\tau_1 < \tau_2$, we obtain:

$$\begin{aligned} \left|(\mathcal{T}u)(\tau_2) - (\mathcal{T}u)(\tau_1)\right| &\le \frac{M}{\Gamma(\alpha)}\left|\sum_{s=t_k}^{\tau_2-1}(\tau_2-s+\alpha-2)^{\underline{\alpha-1}} - \sum_{s=t_k}^{\tau_1-1}(\tau_1-s+\alpha-2)^{\underline{\alpha-1}}\right| \\ &\le \frac{M}{\Gamma(\alpha+1)}\left|\tau_2^{\underline{\alpha}} - \tau_1^{\underline{\alpha}}\right| \\ &< \frac{\epsilon}{2}, \end{aligned} \tag{25}$$

and

$$\begin{aligned} &\left|\left(\Delta^{\beta}\mathcal{T}u\right)(\tau_2-\beta+1) - \left(\Delta^{\beta}\mathcal{T}u\right)(\tau_1-\beta+1)\right| \\ &\le \frac{M}{\Gamma(\alpha)\Gamma(-\beta)}\left|\sum_{\xi=t_0}^{\tau_2+1}\sum_{s=t_k}^{\xi-1}(\tau_2-\beta-\xi)^{\underline{-\beta-1}}(\xi-s+\alpha-2)^{\underline{\alpha-1}} - \sum_{\xi=t_0}^{\tau_1+1}\sum_{s=t_k}^{\xi-1}(\tau_1-\beta-\xi)^{\underline{-\beta-1}}(\xi-s+\alpha-2)^{\underline{\alpha-1}}\right| \\ &\le \frac{M}{\Gamma(\alpha+1)\Gamma(1-\beta)}\left|(\tau_2-\beta+1)^{\underline{-\beta}} - (\tau_1-\beta+1)^{\underline{-\beta}}\right| \\ &< \frac{\epsilon}{2}. \end{aligned} \tag{26}$$

From Equations (25) and (26), it follows that:

$$\left\|(\mathcal{T}u)(\tau_2) - (\mathcal{T}u)(\tau_1)\right\|_{\mathcal{X}} < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon. \tag{27}$$

**Case 2.** If $\tau_1, \tau_2 \in \mathbb{N}_{\alpha-r-1,\alpha-1}$ and $\tau_1 < \tau_2$, we obtain:

$$\left|(\mathcal{T}u)(\tau_2) - (\mathcal{T}u)(\tau_1)\right| = |\psi(\tau_2) - \psi(\tau_1)| < \epsilon, \tag{28}$$

and

$$\left|\left(\Delta^{\beta}\mathcal{T}u\right)(\tau_2) - \left(\Delta^{\beta}\mathcal{T}u\right)(\tau_1)\right| = \left|\left(\Delta^{\beta}\psi\right)(\tau_2) - \left(\Delta^{\beta}\psi\right)(\tau_1)\right| = 0. \tag{29}$$

From Equations (28) and (29), it follows that:

$$\left\|(\mathcal{T}u)(\tau_2) - (\mathcal{T}u)(\tau_1)\right\|_{\mathcal{X}} < \epsilon + 0 = \epsilon. \tag{30}$$

**Case 3.** If $\tau_1 \in \mathbb{N}_{\alpha-r-1,\alpha-1}$ and $\tau_2 \in \mathbb{N}_{t_0+1,t_1} \cup \mathbb{N}_{t_k+1,t_{k+1}}$, $t_{k+1} - t_k \ge 2$, $k = 1, 2, ..., p$, we obtain:

$$\begin{aligned} \left|(\mathcal{T}u)(\tau_2) - (\mathcal{T}u)(\tau_1)\right| &\le \left|(\mathcal{T}u)(\tau_2) - (\mathcal{T}u)(\alpha-1)\right| + \left|(\mathcal{T}u)(\alpha-1) - (\mathcal{T}u)(\tau_1)\right| \\ &\le \left|(\mathcal{T}u)(\tau_2)\right| + \left|\psi(\tau_1)\right| \\ &\le \frac{M}{\Gamma(\alpha+1)}\left|\tau_2^{\underline{\alpha}}\right| + \left|\psi(\tau_1)\right| \\ &< \frac{\epsilon}{3} + \frac{\epsilon}{3} = \frac{2\epsilon}{3}, \end{aligned} \tag{31}$$

and

$$\begin{aligned} \left|\left(\Delta^{\beta}\mathcal{T}u\right)(\tau_2) - \left(\Delta^{\beta}\mathcal{T}u\right)(\tau_1)\right| &\le \frac{M}{\Gamma(\alpha)\Gamma(-\beta)}\sum_{\xi=t_0}^{\tau_2+1}\sum_{s=t_k}^{\xi-1}(\tau_2-\beta-\xi)^{\underline{-\beta-1}}(\xi-s+\alpha-2)^{\underline{\alpha-1}} \\ &\le \frac{M}{\Gamma(\alpha+1)\Gamma(1-\beta)}\left|(\tau_2-\beta+1)^{\underline{-\beta}}\right| \\ &< \frac{\epsilon}{3}. \end{aligned} \tag{32}$$

From Equations (31) and (32), it follows that:

$$\left\|(\mathcal{T}u)(\tau_2) - (\mathcal{T}u)(\tau_1)\right\|_{\mathcal{X}} < \frac{2\epsilon}{3} + \frac{\epsilon}{3} = \epsilon. \tag{33}$$

From Steps I–III and the Arzelá–Ascoli theorem, we conclude that $\mathcal{T} : \mathcal{X} \to \mathcal{X}$ is completely continuous. Therefore, by Schauder's fixed point theorem, the problem in Equation (1) has at least one solution.

#### **5. An Example**

Consider the following fractional difference boundary value problem:

$$\begin{cases} \Delta^{\frac{1}{2}}_{C}u(t) = F\left[t-\frac{1}{2},\, u_{t-\frac{1}{2}},\, \Delta^{\frac{2}{3}}u\left(t-\frac{1}{6}\right)\right], & t \in \mathbb{N}_{0,10},\ t-\frac{1}{2}\ne t_k,\ k = 1, 2, 3, \\ \Delta u(t_k) = \frac{1}{k+100}\sin|u(t_k-1)|, & t_k = \frac{1}{2}+2k, \\ u\left(-\frac{1}{2}\right) = 0. \end{cases} \tag{34}$$

Here $\alpha = \frac{1}{2}$, $\beta = \frac{2}{3}$, $T = 10$, and $p = 3$.

(i) Let

$$F\left[t, u_t, \Delta^{\beta}u(t-\beta+1)\right] = \frac{|u_t| + \left|\Delta^{\frac{2}{3}}u\left(t+\frac{1}{3}\right)\right|}{(t+100)^3\left[1 + |u_t| + \left|\Delta^{\frac{2}{3}}u\left(t+\frac{1}{3}\right)\right|\right]}.$$

For $t \in \mathbb{N}_{-\frac{1}{2},\frac{21}{2}}$, we have:

$$\left| F\left[t, u_t, \Delta^{\beta}u\right] - F\left[t, v_t, \Delta^{\beta}v\right] \right| \le \frac{8}{7880599}\left[ |u_t - v_t| + \left|\Delta^{\frac{2}{3}}u\left(t+\frac{1}{3}\right) - \Delta^{\frac{2}{3}}v\left(t+\frac{1}{3}\right)\right| \right].$$

So, (*H*1) holds with $\ell = \frac{8}{7880599}$.

For all *u*, *v* ∈ X and *k* = 1, 2, 3, we have:

$$|I\_k(u) - I\_k(v)| \le \frac{1}{101}|u - v|.$$

So, (*H*2) holds with $\lambda = \frac{1}{101}$.

We can show that (*H*3) holds:

$$\left[\frac{p(p+1)}{2}\lambda + \ell\left(\frac{p(p+1)}{2}+1\right)\frac{(T+\alpha)^{\underline{\alpha}}}{\Gamma(\alpha+1)}\right]\left[1 + \frac{(T+\alpha-\beta-1)^{\underline{-\beta}}}{\Gamma(1-\beta)}\right] \approx 0.06401 < 1.$$

Therefore, by Theorem 1, the boundary value problem of Equation (34) has a unique solution.
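The contraction constant in (H3) can be verified numerically. The Python sketch below is ours; it assumes $T = 10$ (consistent with the grid $\mathbb{N}_{-\frac{1}{2},\frac{21}{2}}$, i.e. $T + \alpha = \frac{21}{2}$) and evaluates the falling factorials through Gamma functions:

```python
import math

def falling(x, nu):
    # Generalized falling factorial: x^(nu) = Gamma(x + 1) / Gamma(x - nu + 1)
    return math.gamma(x + 1) / math.gamma(x - nu + 1)

# Parameters of Example (34)
alpha, beta, T, p = 0.5, 2.0 / 3.0, 10, 3
ell, lam = 8.0 / 7880599.0, 1.0 / 101.0

bracket1 = (p * (p + 1) / 2) * lam \
    + ell * (p * (p + 1) / 2 + 1) * falling(T + alpha, alpha) / math.gamma(alpha + 1)
bracket2 = 1 + falling(T + alpha - beta - 1, -beta) / math.gamma(1 - beta)

contraction_constant = bracket1 * bracket2  # approximately 0.064, so (H3) holds
```

The dominant contribution comes from the impulse term $\frac{p(p+1)}{2}\lambda = \frac{6}{101}$; the Lipschitz constant $\ell$ of $F$ is so small that it barely affects the total.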

(ii) Let

$$F\left[t, u_t, \Delta^{\beta}u(t-\beta+1)\right] = t^2 + \frac{e^{-t}}{2(t+1)^3}|u_t|^{\chi_1} + \frac{e^{-2(t+1)}}{3}\left|\Delta^{\frac{2}{3}}u\left(t+\frac{1}{3}\right)\right|^{\chi_2}.$$

For $t \in \mathbb{N}_{-\frac{1}{2},\frac{21}{2}}$, we have:

$$\left| F\left[t, u_t, \Delta^{\beta}u(t-\beta+1)\right] \right| \le \left(\frac{21}{2}\right)^2 + 4\sqrt{e}\,|u_t|^{\chi_1} + \frac{1}{3e}\left|\Delta^{\frac{2}{3}}u\left(t+\frac{1}{3}\right)\right|^{\chi_2}.$$

Thus $|\varphi(t)| \le \frac{441}{4}$, $\lambda_1 = 4\sqrt{e}$, and $\lambda_2 = \frac{1}{3e}$. We can show that (*H*4) is satisfied for $0 < \chi_1, \chi_2 < 1$, and (*H*5) is satisfied for $\chi_1, \chi_2 > 1$. Therefore, by Theorem 2, the boundary value problem in Equation (34) has at least one solution.
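The constants above can be checked by direct evaluation on the grid $\mathbb{N}_{-\frac{1}{2},\frac{21}{2}}$. A small Python check (ours, not from the paper) confirms that both coefficient functions attain their maxima at the left endpoint $t = -\frac{1}{2}$:

```python
import math

# The grid N_{-1/2, 21/2} = {-1/2, 1/2, ..., 21/2} (unit steps)
grid = [-0.5 + k for k in range(12)]

# Coefficients of |u_t|^{chi_1} and of the Delta^{2/3} term in F
lam1 = max(math.exp(-t) / (2 * (t + 1) ** 3) for t in grid)
lam2 = max(math.exp(-2 * (t + 1)) / 3 for t in grid)

# Both maxima occur at t = -1/2: lam1 = 4*sqrt(e), lam2 = 1/(3e)
```

Both coefficient functions are decreasing in $t$, so the maxima over the grid are $e^{1/2}/(2\cdot(1/2)^3) = 4\sqrt{e}$ and $e^{-1}/3 = \frac{1}{3e}$.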

#### **6. Conclusions**

We established conditions for the existence and uniqueness of a solution of the nonlinear fractional difference equation with delay and impulses in Equation (1) by using the Banach fixed point theorem, and conditions for the existence of at least one solution by using Schauder's fixed point theorem. Our problem contains both delay and impulses, which is a novel feature of this work.

**Author Contributions:** Conceptualization, R.O. and S.C.; Formal analysis, R.O. and S.C.; Funding acquisition, S.C.; Investigation, R.O.; Methodology, R.O., S.C., and T.S.; Writing—original draft, R.O., S.C., and T.S.; Writing—review and editing, R.O., S.C., and T.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by King Mongkut's University of Technology North Bangkok. Contract No. KMUTNB-61-GOV-D-65.

**Acknowledgments:** This research was supported by Chiang Mai University.

**Conflicts of Interest:** The authors declare no conflict of interest regarding the publication of this paper.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A Collocation Approach for Solving Time-Fractional Stochastic Heat Equation Driven by an Additive Noise**

#### **Afshin Babaei 1, Hossein Jafari 2,3,4,\* and S. Banihashemi <sup>1</sup>**


Received: 27 April 2020; Accepted: 21 May 2020; Published: 1 June 2020

**Abstract:** A spectral collocation approach is constructed to solve a class of time-fractional stochastic heat equations (TFSHEs) driven by Brownian motion. Stochastic differential equations with additive noise have an important role in explaining some symmetry phenomena, such as symmetry breaking in molecular vibrations. Finding the exact solution of such equations is difficult in many cases. Thus, a collocation method based on sixth-kind Chebyshev polynomials (SKCPs) is introduced to approximate their solutions. This collocation approach reduces the considered problem to a system of linear algebraic equations. The convergence and error analysis of the suggested scheme are investigated. In the end, numerical results and the order of convergence are evaluated for some numerical test problems to illustrate the efficiency and robustness of the presented method.

**Keywords:** fractional calculus; stochastic heat equation; additive noise; Chebyshev polynomials of the sixth kind; error estimate

#### **1. Introduction**

Many models in physics, chemistry, and engineering reveal stochastic effects and are introduced as stochastic partial differential equations (SPDEs) [1,2]. Some phenomena in various fields, such as population dynamics [3], motions of ions in crystals [4], optimal pricing in economics [5], and thermal noise [6], show stochastic behaviors. Fractional stochastic partial differential equations (FSPDEs) are an example of such equations and have recently attracted increasing attention.

In recent decades, investigations have shown that fractional calculus provides some new ways for a better understanding of the behaviors of real-world phenomena. Fractional-order operators give helpful tools for modeling the inherited memory characteristics of real applications. Scientists have proposed models for numerous phenomena in engineering, fluid mechanics, physics [7–11], finance [12,13], geomagnetism [14] and hydrology [15] based on fractional differential and integral equations. Non-Markovian anomalous diffusion in materials with memory, such as viscoelastic substances, is an example of these applications [16], in which the mean square displacement of particles grows faster or slower than in the case of normal diffusion.

In many applications, it is more realistic to represent the mathematical model of the problem in a non-deterministic state. In other words, some kinds of randomness and uncertainty are considered in the mathematical formulation of the problem. Hence, stochastic functional equations have arisen in many situations and numerous problems in different fields of science are modeled as fractional stochastic differential or integral equations [17–19]. Many theoretical investigations on the fractional stochastic differential equations have been made by researchers in the literature. Liu et al. studied some properties of fractional stochastic heat equations [20]. Ralchenko and Shevchenko [21] surveyed the existence and uniqueness of mild solution for a special type of stochastic heat equations of fractional order. Roozbahani et al. [22] proved the unique solvability of a class of SPDEs. Moghaddam et al. [23] proved the existence and uniqueness of solution for some delay stochastic differential equations of fractional order. Moreover, Mishura et al. [24] investigated mild and weak solutions for a SPDE with second order elliptic operator in divergence form. Since the exact solutions of these equations are scarcely known, researchers have examined several numerical algorithms to solve them. Finite difference schemes [25,26], finite element approaches [27–29], wavelets Galerkin method [30], B-spline collocation method [31,32], hat function operational matrix method [33], mean-square dissipative method [34] and operational matrix of Chebyshev wavelets [35] are a number of these schemes.

In the present work, we consider the following TFSHE

$$D_{0,\mathrm{t}}^{\alpha}u(\mathrm{x},\mathrm{t}) = \left(\mu + \vartheta\dot{B}(\mathrm{t})\right)u_{\mathrm{xx}}(\mathrm{x},\mathrm{t}) + \lambda u_{\mathrm{x}}(\mathrm{x},\mathrm{t}) + f(\mathrm{x},\mathrm{t}), \tag{1}$$

where (x, t) ∈L×I, with the boundary and initial conditions

$$u(\mathrm{x},\mathrm{t}) = \varphi(\mathrm{x},\mathrm{t}), \qquad \mathrm{x} \in \partial\mathcal{L},\ \mathrm{t} \in \mathcal{I}, \tag{2}$$

$$u(\mathbf{x},0) = \eta(\mathbf{x}), \qquad \mathbf{x} \in \mathcal{L},\tag{3}$$

where *α* ∈ (0, 1), *μ*, *ϑ* and *λ* are real constants, I := [0, *T*], L := [0, *l*] and *∂*L is the boundary of L. Also, *B*˙(t) := <sup>d</sup>*B*(t)/dt denotes a time white noise, where *B*(t), t ∈ I is the Brownian motion adapted to a filtration *FB* <sup>=</sup> {*F*t}t∈I in a probability space (Ω*B*, *FB*, <sup>P</sup>*B*) [36]. Moreover, the source term *<sup>f</sup>*(x, <sup>t</sup>), *<sup>ϕ</sup>*(x, <sup>t</sup>) and *η*(x) are stochastic processes defined on (Ω*B*, *FB*, P*B*), and *u*(x, t) is an unknown stochastic function to be found. Finally, the operator *D<sup>α</sup>* 0,t[·] denotes the Caputo fractional derivative defined as:

$$D_{0,\mathrm{t}}^{\alpha}u(\mathrm{x},\mathrm{t}) = \frac{1}{\Gamma(1-\alpha)}\int_0^{\mathrm{t}}\frac{1}{(\mathrm{t}-\xi)^{\alpha}}\frac{\partial u}{\partial\xi}(\mathrm{x},\xi)\,\mathrm{d}\xi, \quad \alpha\in(0,1), \tag{4}$$

and Γ(·) represents the Gamma function.
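As a quick sanity check on definition (4), the Caputo derivative of a monomial can be evaluated by direct quadrature and compared against the closed-form power rule. The following sketch (in Python rather than the Matlab used later in this article, with a hypothetical step count `n`) does this for u(t) = t²:

```python
import math

def caputo_t2(t, alpha, n=100000):
    """Midpoint-rule quadrature of definition (4) for u(t) = t^2,
    i.e. (1/Gamma(1-alpha)) * int_0^t (t - xi)^(-alpha) * u'(xi) dxi.
    The midpoint rule keeps the weakly singular endpoint xi = t off-grid."""
    h = t / n
    total = 0.0
    for k in range(n):
        xi = (k + 0.5) * h                                # midpoint of subinterval k
        total += (t - xi) ** (-alpha) * (2.0 * xi) * h    # u'(xi) = 2*xi
    return total / math.gamma(1.0 - alpha)

# power rule prediction: D^alpha t^2 = Gamma(3)/Gamma(3-alpha) * t^(2-alpha)
alpha, t = 0.5, 1.0
print(caputo_t2(t, alpha), math.gamma(3) / math.gamma(3 - alpha) * t ** (2 - alpha))
```

The two printed values agree up to the quadrature error, which is dominated by the weak singularity at ξ = t.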

Equation (1) is an FSPDE driven by additive noise that takes into account both memory and environmental noise effects. Many physical and engineering models are built on these types of stochastic equations. Fractional stochastic heat equations [20,37–39], the stochastic Burgers equation [40] and the stochastic coupled fractional Ginzburg–Landau equation [41] are some examples of these applications. The problem (1)–(3) was considered in [30] in the case *α* = 1, where the authors proposed a wavelet Galerkin method to find its solution. When *ϑ* = 0, Equation (1) reduces to an advection–dispersion equation of fractional order describing the transport of passive tracers in a porous medium in groundwater hydrology [42].

Many numerical schemes with Chebyshev polynomial basis functions have been established in the literature to solve various types of problems. Masjed-Jamei [43] introduced a basic class of symmetric orthogonal polynomials; the six kinds of Chebyshev polynomials are special cases of this class. In our experience, approaches based on SKCPs expansions yield very accurate numerical estimates. Hence, we are motivated to employ this kind of Chebyshev polynomials for solving TFSHEs. Recently, a few authors have applied the SKCPs to solve some types of differential equations [44–46].

This work is organized as follows. In Section 2, the basic concepts of the SKCPs theory are described. In Section 3, the collocation scheme based on the SKCPs is developed. The convergence of the numerical procedure is studied in Section 4. The accuracy of the proposed approach is assessed in Section 5 on three numerical test problems. Finally, the main concluding remarks are presented in Section 6.

#### **2. The Shifted SKCPs and Their Properties**

In this section, we review some necessary preliminaries and relevant properties of the shifted SKCPs that are utilized in the subsequent sections.

**Definition 1.** *The shifted SKCPs on* [0, *l*] *are defined by*

$$\mathcal{J}\_{\mathfrak{m}}(\mathbf{x}) = \hat{\mathcal{J}}\_{\mathfrak{m}}((2/l)\mathbf{x} - 1), \qquad \mathfrak{m} = 0, 1, 2, \dots$$

*where ([43])*

$$\hat{\mathcal{J}}\_{m}(\mathbf{x}) = \prod\_{i=0}^{\lfloor \frac{m}{2} \rfloor - 1} \frac{2i + (-1)^{m+1} + 4}{-5 - (2i + 2\lfloor \frac{m}{2} \rfloor + (-1)^{m+1})} \mathcal{E}\_{m}(\mathbf{x}),\tag{5}$$

*and*

$$\mathcal{E}_{m}(x) = \sum_{\tau=0}^{\lfloor \frac{m}{2} \rfloor} \prod_{\kappa=0}^{\lfloor \frac{m}{2} \rfloor - (\tau + 1)} \left( \frac{(-1)^{m} - 2(\kappa + \lfloor \frac{m}{2} \rfloor) - 5}{(-1)^{m + 1} + 2(\kappa + 2)} \right) \frac{(\lfloor \frac{m}{2} \rfloor)!}{\tau! \left(\lfloor \frac{m}{2} \rfloor - \tau\right)!} \, x^{m - 2\tau}. \tag{6}$$

*The explicit form of the shifted SKCPs is as follows [45]:*

$$\mathcal{J}_{m}(x) = \sum_{r=0}^{m} \bar{\theta}_{r,m}\,(x/l)^{r},\tag{7}$$

*where*

$$\bar{\theta}_{r,m} = \begin{cases} \frac{2^{2r-m}}{(2r+1)!} \sum_{i=\lfloor \frac{r+1}{2} \rfloor}^{\frac{m}{2}} \frac{(-1)^{\frac{m}{2}+i+r}(2i+r+1)!}{(2i-r)!}, & m \text{ even}, \\\\ \frac{2^{2r-m+1}}{(m+1)(2r+1)!} \sum_{i=\lfloor \frac{r}{2} \rfloor}^{\frac{m-1}{2}} \frac{(-1)^{\frac{m+1}{2}+i+r}(i+1)(2i+r+2)!}{(2i-r+1)!}, & m \text{ odd}. \end{cases}$$

**Theorem 1.** *([46]) Suppose* $L^2_{\mathcal{W}}(\tilde{\Lambda})$ *is the space of square integrable functions with respect to the weight* $\mathcal{W}(x,t) = (2x-1)^2(2t-1)^2\sqrt{x-x^2}\sqrt{t-t^2}$*. Let* $g(x,t) \in L^2_{\mathcal{W}}(\tilde{\Lambda})$ *satisfy* $\left\|\frac{\partial^6 g(x,t)}{\partial x^3 \partial t^3}\right\|_2 \leq \varsigma$ *for some constant* $\varsigma > 0$ *and admit the expansion* $g(x,t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\infty} c_{i,j}\,\mathcal{J}_i(x)\mathcal{J}_j(t)$*. If*

$$\mathfrak{G}\_{N,M}(\mathbf{x},\mathbf{t}) = \sum\_{i=0}^{N} \sum\_{j=0}^{M} c\_{i,j} \mathcal{J}\_i(\mathbf{x}) \mathcal{J}\_j(\mathbf{t}),\tag{8}$$

*is an approximation of g*(*x*, *t*)*, then*

$$|g(x,t) - \mathfrak{G}_{N,M}(x,t)| < \frac{\xi}{2^{N+M}},$$

$$\left|\frac{\partial g}{\partial x}(x,t) - \frac{\partial \mathfrak{G}_{N,M}}{\partial x}(x,t)\right| < \xi \frac{N}{2^{N+M-2}},$$

$$\left|\frac{\partial^2 g}{\partial x^2}(x,t) - \frac{\partial^2 \mathfrak{G}_{N,M}}{\partial x^2}(x,t)\right| < \varrho \frac{N^3}{2^{N+M-8}},$$

*where ξ and ϱ are two positive constants.*

#### **3. The SKCPs-Collocation Approach**

In the following, we describe a numerical technique to solve problem (1)–(3). To this end, we consider the numerical solution of (1) in the form

$$u(x,t) \simeq U_{N,M}(x,t) = \sum_{i=0}^{N} \sum_{j=0}^{M} \delta_{i,j}\, \mathcal{J}_{i}(x) \bar{\mathcal{J}}_{j}(t) = \mathbf{J}(x)^{T} \mathbf{C}\, \bar{\mathbf{J}}(t),\tag{9}$$

where

$$\mathbf{J}(\mathbf{x}) = \begin{bmatrix} \mathcal{J}\_0(\mathbf{x}), \dots, \mathcal{J}\_i(\mathbf{x}), \dots, \mathcal{J}\_N(\mathbf{x}) \end{bmatrix}^T,\tag{10}$$

$$\bar{\mathbf{J}}(t) = \left[ \bar{\mathcal{J}}_0(t), \dots, \bar{\mathcal{J}}_j(t), \dots, \bar{\mathcal{J}}_M(t) \right]^T,\tag{11}$$

in which $\mathcal{J}_i(x) = \hat{\mathcal{J}}_i((2/l)x - 1)$ on the interval L and $\bar{\mathcal{J}}_j(t) = \hat{\mathcal{J}}_j((2/T)t - 1)$ on the interval I. Moreover,

$$\mathbf{C} = \begin{pmatrix} \delta_{0,0} & \cdots & \delta_{0,M} \\ \vdots & \ddots & \vdots \\ \delta_{N,0} & \cdots & \delta_{N,M} \end{pmatrix}_{(N+1)\times(M+1)},$$

is an unknown coefficients matrix.

**Theorem 2.** *Let* $\bar{\mathbf{J}}(t)$ *be the shifted SKCPs vector defined in (11); then*

$$D_{0,t}^{\alpha} \bar{\mathbf{J}}(t) = \Phi^{\alpha}(t), \tag{12}$$

*where* $\Phi^{\alpha}(t)$ *is the Caputo fractional derivative of the vector* $\bar{\mathbf{J}}(t)$*, given by*

$$\Phi^{\alpha}(t) = \left[0, \sum_{r=1}^{1} \psi_{r,1}^{\alpha}(t), \dots, \sum_{r=1}^{j} \psi_{r,j}^{\alpha}(t), \dots, \sum_{r=1}^{M} \psi_{r,M}^{\alpha}(t)\right]^{T},\tag{13}$$

*where*

$$
\psi_{r,j}^{\alpha}(t) = \frac{\Gamma(r+1)}{T^r\, \Gamma(r+1-\alpha)}\, \bar{\theta}_{r,j}\, t^{r-\alpha}.
$$

**Proof.** Due to the analytic form (7), we have

$$D_{0,t}^{\alpha}\bar{\mathcal{J}}_0(t) = \bar{\theta}_{0,0}\, D_{0,t}^{\alpha}(1) = 0.\tag{14}$$

Also, we know that [7]

$$D_{0,t}^{\alpha}t^{r} = \frac{\Gamma(r+1)}{\Gamma(r+1-\alpha)}\,t^{r-\alpha},\tag{15}$$

for *r* ≥ 1. So, for *j* = 1, . . . , *M*, we get

$$D_{0,t}^{\alpha} \bar{\mathcal{J}}_j(t) = \sum_{r=0}^{j} \bar{\theta}_{r,j}\, D_{0,t}^{\alpha} (t/T)^{r} = \sum_{r=1}^{j} \psi_{r,j}^{\alpha}(t),$$

in which $\psi_{r,j}^{\alpha}(t) = \frac{\Gamma(r+1)}{T^r \Gamma(r+1-\alpha)}\, \bar{\theta}_{r,j}\, t^{r-\alpha}$.
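Theorem 2 is purely termwise: once the coefficients $\bar{\theta}_{r,j}$ of the basis polynomials are known, the entries of $\Phi^{\alpha}(t)$ follow from the power rule (15). A minimal Python sketch, with a hypothetical coefficient table `theta_bar` standing in for the actual SKCPs coefficients of (7):

```python
import math

def psi(r, j, alpha, T, theta_bar, t):
    """psi^alpha_{r,j}(t) from Theorem 2, applied to coefficient theta_bar[j][r]."""
    return math.gamma(r + 1) / (T ** r * math.gamma(r + 1 - alpha)) \
        * theta_bar[j][r] * t ** (r - alpha)

def Phi(alpha, T, theta_bar, t):
    """Assemble the vector (13): the first entry vanishes as in (14),
    and entry j is the sum of psi^alpha_{r,j}(t) over r = 1..j."""
    M = len(theta_bar) - 1
    return [0.0] + [sum(psi(r, j, alpha, T, theta_bar, t) for r in range(1, j + 1))
                    for j in range(1, M + 1)]

# hypothetical coefficient table: theta_bar[j][r] is the coefficient of (t/T)^r
# in the degree-j basis polynomial (NOT the actual SKCPs coefficients of (7))
theta_bar = [[1.0], [-1.0, 2.0], [1.0, -6.0, 6.0]]
print(Phi(0.5, 1.0, theta_bar, 0.25))
```

Each entry can be cross-checked against direct quadrature of definition (4) applied to the corresponding polynomial.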

According to Equations (1) and (9) and by applying Theorem 2, we have

$$\mathbf{J}(x)^T \mathbf{C} \Phi^{\alpha}(t) = \left(\mu + \vartheta \dot{B}(t)\right) \mathbf{J}_{xx}(x)^T \mathbf{C} \bar{\mathbf{J}}(t) + \lambda \mathbf{J}_{x}(x)^T \mathbf{C} \bar{\mathbf{J}}(t) + f(x,t), \tag{16}$$

where

$$\begin{aligned} \mathbf{J}\_{\mathbf{x}}(\mathbf{x}) &= [\mathcal{J}\_0'(\mathbf{x}), \dots, \mathcal{J}\_i'(\mathbf{x}), \dots, \mathcal{J}\_N'(\mathbf{x})]^T, \\ \mathbf{J}\_{\mathbf{x}\mathbf{x}}(\mathbf{x}) &= [\mathcal{J}\_0''(\mathbf{x}), \dots, \mathcal{J}\_i''(\mathbf{x}), \dots, \mathcal{J}\_N''(\mathbf{x})]^T, \end{aligned}$$

and from the conditions (2) and (3) and Equation (9), we have

$$\mathbf{J}(0)^T \mathbf{C} \bar{\mathbf{J}}(t) = \phi(0, t), \tag{17}$$

$$\mathbf{J}(l)^T \mathbf{C} \bar{\mathbf{J}}(t) = \phi(l, t), \tag{18}$$

$$\mathbf{J}(x)^T \mathbf{C} \bar{\mathbf{J}}(0) = \eta(x).\tag{19}$$

Let $x_0 = 0$, $x_N = l$, and let $x_1, \dots, x_{N-1}$ be the roots of $\mathcal{J}_{N-1}(x)$. Also, suppose $t_j$, $j = 1, \dots, M$, are the roots of $\bar{\mathcal{J}}_M(t)$. With these collocation nodes, we define

$$\Lambda = \begin{bmatrix} \mathbf{J}(x_1), \dots, \mathbf{J}(x_i), \dots, \mathbf{J}(x_{N-1}) \end{bmatrix}^T,\tag{20}$$

$$\Lambda_{x} = \begin{bmatrix} \mathbf{J}_{x}(x_1), \dots, \mathbf{J}_{x}(x_i), \dots, \mathbf{J}_{x}(x_{N-1}) \end{bmatrix}^T,\tag{21}$$

$$\Lambda_{xx} = \begin{bmatrix} \mathbf{J}_{xx}(x_1), \dots, \mathbf{J}_{xx}(x_i), \dots, \mathbf{J}_{xx}(x_{N-1}) \end{bmatrix}^T,\tag{22}$$

where the matrices Λ, Λ<sup>x</sup> and Λxx are of the order (*N* − 1) × (*N* + 1) and

$$\Psi = \left[ \bar{\mathbf{J}}(t_1), \dots, \bar{\mathbf{J}}(t_j), \dots, \bar{\mathbf{J}}(t_M) \right]_{(M+1)\times M}, \tag{23}$$

$$\Psi^{\alpha} = \left[ \Phi^{\alpha}(t_1), \dots, \Phi^{\alpha}(t_j), \dots, \Phi^{\alpha}(t_M) \right]_{(M+1)\times M}.\tag{24}$$

By evaluating (16) at (*N* − 1) × *M* collocation points (x*i*, t*j*) for *i* = 1, ... , *N* − 1 and *j* = 1, ... , *M*, we have

$$
\Lambda \mathbf{C} \Psi^{\alpha} = \Lambda_{xx} \mathbf{C} \Psi \mathcal{B} + \lambda \Lambda_{x} \mathbf{C} \Psi + \mathcal{F}, \tag{25}
$$

where

$$\mathcal{B} = \operatorname{diag} \left( \mu + \vartheta b_{1}, \dots, \mu + \vartheta b_{j}, \dots, \mu + \vartheta b_{M} \right),$$

in which $b_j = B(t_j) - B(t_{j-1})$, $t_0 = 0$, and

$$\mathcal{F} = \begin{bmatrix} f_{i,j} \end{bmatrix}_{(N-1)\times M}, \quad f_{i,j} = f(x_i, t_j), \quad i = 1, \dots, N-1, \ j = 1, \dots, M.$$

Also, by evaluating (17) and (18) at collocation points t*<sup>j</sup>* and (19) at collocation points x*i*, we get

$$\mathbf{J}(0)^T \mathbf{C} \Psi = \mathbf{Y}_{0}, \tag{26}$$

$$\mathbf{J}(l)^T \mathbf{C} \Psi = \mathbf{Y}_{l}, \tag{27}$$

$$
\bar{\Lambda}\, \mathbf{C}\, \bar{\mathbf{J}}(0) = \bar{\mathbf{Y}},\tag{28}
$$

where

$$\mathbf{Y}_{0} = \left[ \phi(0, t_{1}), \dots, \phi(0, t_{j}), \dots, \phi(0, t_{M}) \right]^{T}, \qquad \mathbf{Y}_{l} = \left[ \phi(l, t_{1}), \dots, \phi(l, t_{j}), \dots, \phi(l, t_{M}) \right]^{T},$$

$$\bar{\Lambda} = \left[ \mathbf{J}(x_{0}), \dots, \mathbf{J}(x_{i}), \dots, \mathbf{J}(x_{N}) \right]^{T}, \qquad \bar{\mathbf{Y}} = \left[ \eta(x_{0}), \dots, \eta(x_{i}), \dots, \eta(x_{N}) \right]^{T}.$$

Using the Kronecker product, Equation (25) transforms to

$$\mathcal{A}\mathcal{X} = \mathcal{T}_{\mathrm{vec}},\tag{29}$$

where

$$\mathcal{A} = \left(\Psi^{\alpha}\right)^{T} \otimes \Lambda - \left(\Psi\mathcal{B}\right)^{T} \otimes \Lambda_{xx} - \lambda \Psi^{T} \otimes \Lambda_{x}$$

and X = vec(**C**), Tvec = vec(F). Also, Equations (26)–(28) are equivalent to

$$\mathbf{E}\mathcal{X} = \bar{\mathbf{Y}}, \qquad \mathbf{E}_0 \mathcal{X} = \mathbf{Y}_0, \qquad \mathbf{E}_l \mathcal{X} = \mathbf{Y}_l, \tag{30}$$

where

$$\mathbf{E} = \bar{\mathbf{J}}(0)^T \otimes \bar{\Lambda}, \qquad \mathbf{E}_0 = \Psi^T \otimes \mathbf{J}(0)^T, \qquad \mathbf{E}_l = \Psi^T \otimes \mathbf{J}(l)^T.$$

Thus, from Equations (29) and (30), we obtain a system of linear equations **A**X = **B** in which

$$\mathbf{A} = \left[ \mathcal{A}^T, \mathbf{E}^T, \mathbf{E}_0^T, \mathbf{E}_l^T \right]^T, \qquad \mathbf{B} = \left[ \mathcal{T}_{\mathrm{vec}}^T, \bar{\mathbf{Y}}^T, \mathbf{Y}_{0}^T, \mathbf{Y}_l^T \right]^T.$$

Solving this system yields an approximation $U_{N,M}(x,t)$, of the form (9), to the solution of (1)–(3).
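The passage from the matrix equation (25) to the linear system (29) rests on the identity vec(ACB) = (Bᵀ ⊗ A) vec(C), with vec taken column-wise. A small NumPy check of this identity on random matrices playing the roles of Λ, C and Ψ:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))    # stands in for Lambda (collocation rows)
C = rng.standard_normal((4, 5))    # the unknown coefficient matrix
B = rng.standard_normal((5, 2))    # stands in for Psi (or Psi^alpha)

lhs = (A @ C @ B).flatten('F')            # vec(A C B), column-major
rhs = np.kron(B.T, A) @ C.flatten('F')    # (B^T kron A) vec(C)
print(np.allclose(lhs, rhs))              # prints True
```

Applying this identity to each term of (25)–(28) gives exactly the Kronecker-product blocks of the stacked system **A**X = **B** above.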

#### **4. Convergence Analysis**

In the following, we examine the convergence of the approximate solution expressed in the form (9) for the problem (1)–(3).

**Theorem 3.** *Let* $U_{N,M}(x,t)$ *be the approximate solution obtained by the procedure presented in Section 3 and* $u(x,t)$ *the exact solution of (1)–(3). Consider the residual error* $\mathcal{R}_{N,M}(x,t)$ *of this numerical solution. Then* $\mathbb{E}\|\mathcal{R}_{N,M}(x,t)\|_{\infty}$ *tends to zero as* $N \to \infty$ *and* $M \to \infty$*.*

**Proof.** Suppose $U_{N,M}(x,t)$, for $(x,t) \in L \times I$, satisfies the equation

$$\begin{split} D_{0,t}^{\alpha}U_{N,M}(x,t) &= \left(\mu + \vartheta\dot{B}(t)\right) \frac{\partial^2 U_{N,M}}{\partial x^2}(x,t) \\ &\quad+ \lambda \frac{\partial U_{N,M}}{\partial x}(x,t) + f(x,t) + \mathcal{R}_{N,M}(x,t), \end{split} \tag{31}$$

where R*N*,*M*(x, t) is the residual function. Now, from Equations (1) and (31), we get

$$\begin{split} \mathbb{E}\|\mathcal{R}_{N,M}(x,t)\|_{\infty} &\leq \mathbb{E}\left\|D_{0,t}^{\alpha}\left(u(x,t) - U_{N,M}(x,t)\right)\right\|_{\infty} \\ &\quad+ \mathbb{E}\left(\left\|\mu + \vartheta\dot{B}(t)\right\|_{\infty} \left\|u_{xx}(x,t) - \frac{\partial^2 U_{N,M}}{\partial x^2}(x,t)\right\|_{\infty}\right) \\ &\quad+ |\lambda|\, \mathbb{E}\left\|u_x(x,t) - \frac{\partial U_{N,M}}{\partial x}(x,t)\right\|_{\infty}. \end{split} \tag{32}$$

By using Theorem 1, we have

$$\begin{split} \left\| u_{t}(x,t) - \frac{\partial U_{N,M}}{\partial t}(x,t) \right\|_{\infty} &= \sup_{(x,t) \in L \times I} \left| u_{t}(x,t) - \frac{\partial U_{N,M}}{\partial t}(x,t) \right| \\ &< \frac{\theta_1 M}{2^{N+M-2}}, \end{split}$$

where $\theta_1$ is a positive constant; thus

$$\begin{split} \mathbb{E}\left\|D_{0,t}^{\alpha}\left(u(x,t) - U_{N,M}(x,t)\right)\right\|_{\infty} &\leq \mathbb{E}\left(\int_0^t \frac{\left\|(t-\tau)^{-\alpha}\right\|_{\infty}}{\Gamma(1-\alpha)} \left\|u_{\tau}(x,\tau) - \frac{\partial U_{N,M}}{\partial \tau}(x,\tau)\right\|_{\infty} \mathrm{d}\tau\right) \\ &< \frac{\theta_1 M}{\Gamma(1-\alpha)2^{N+M-2}}\, \mathbb{E}\left(\int_0^t \left\|(t-\tau)^{-\alpha}\right\|_{\infty} \mathrm{d}\tau\right). \end{split}$$

Since 0 < *τ* < t ≤ *T*, hence, we get

$$\mathbb{E}\left\|D_{0,t}^{\alpha}\left(u(x,t) - U_{N,M}(x,t)\right)\right\|_{\infty} < \frac{\theta_1 T^{1-\alpha} M}{\Gamma(1-\alpha)2^{N+M-2}}.\tag{33}$$

Also, from Theorem 1, we have

$$\begin{split} \left\|u_{xx}(x,t) - \frac{\partial^2 U_{N,M}}{\partial x^2}(x,t)\right\|_{\infty} &= \sup_{(x,t) \in L \times I} \left|u_{xx}(x,t) - \frac{\partial^2 U_{N,M}}{\partial x^2}(x,t)\right| < \frac{\theta_2 N^3}{2^{N+M-8}}, \end{split} \tag{34}$$

$$\begin{split} \left\|u_{x}(x,t) - \frac{\partial U_{N,M}}{\partial x}(x,t)\right\|_{\infty} &= \sup_{(x,t) \in L \times I} \left|u_{x}(x,t) - \frac{\partial U_{N,M}}{\partial x}(x,t)\right| < \frac{\theta_3 N}{2^{N+M-2}}, \end{split} \tag{35}$$

where $\theta_2$ and $\theta_3$ are positive constants. Let $\bar{\gamma} = \|\dot{B}(t)\|_{\infty}$; then, from relations (32)–(35), it can be concluded that

$$\begin{split} \mathbb{E}\|\mathcal{R}_{N,M}(x,t)\|_{\infty} &< \frac{\theta_1 T^{1-\alpha} M}{\Gamma(1-\alpha)2^{N+M-2}} + (|\mu| + \bar{\gamma}|\vartheta|)\frac{\theta_2 N^3}{2^{N+M-8}} + |\lambda|\frac{\theta_3 N}{2^{N+M-2}} \\ &< \frac{\theta_1 T^{1-\alpha} M}{\Gamma(1-\alpha)2^{N+M-8}} + (|\mu| + \bar{\gamma}|\vartheta|)\frac{\theta_2 N^3}{2^{N+M-8}} + |\lambda|\frac{\theta_3 N^3}{2^{N+M-8}} \\ &< \theta\,\frac{M + 2N^3}{2^{N+M-8}}, \end{split} \tag{36}$$

where

$$\theta = \max\left\{\frac{\theta_1 T^{1-\alpha}}{\Gamma(1-\alpha)},\ (|\mu| + \bar{\gamma}|\vartheta|)\,\theta_2,\ |\lambda|\theta_3\right\}.$$

Moreover, for x ∈ *∂*L and t ∈ I, U*N*,*M*(x, t) satisfies the following equation

$$\begin{split} \mathbb{E}\|\mathcal{R}_{N,M}(x,t)\|_{\infty} &= \mathbb{E}\|\phi(x,t) - U_{N,M}(x,t)\|_{\infty} \\ &= \mathbb{E}\|u(x,t) - U_{N,M}(x,t)\|_{\infty} \\ &= \sup_{(x,t) \in \partial L \times I} \left|u(x,t) - U_{N,M}(x,t)\right| < \frac{\theta_4}{2^{N+M}}, \end{split} \tag{37}$$

and for x ∈ L, we have

$$\begin{split} \mathbb{E}\|\mathcal{R}_{N,M}(x,0)\|_{\infty} &= \mathbb{E}\|\eta(x) - U_{N,M}(x,0)\|_{\infty} \\ &= \mathbb{E}\|u(x,0) - U_{N,M}(x,0)\|_{\infty} \\ &= \sup_{x \in L} \left|u(x,0) - U_{N,M}(x,0)\right| < \frac{\theta_5}{2^{N+M}}. \end{split} \tag{38}$$

Therefore, from Equations (36)–(38), we see that $\mathbb{E}\|\mathcal{R}_{N,M}(x,t)\|_{\infty}$ tends to zero as $N, M \to \infty$.

#### **5. Applications and Results**

We now assess the applicability of the proposed approach by solving some stochastic heat equations of fractional order.

To simulate the Brownian motion $B(t)$, we employ the approach described in [36]. Consider a discretization of $B(t)$: we set $t_0 = 0$ and let $t_j$, $j = 1, \dots, M$, be the collocation points, with $t_i < t_j$ for $i < j$. Also, let $B_j = B(t_j)$ and

$$
\Delta_{j} = t_{j} - t_{j-1}, \qquad j = 1, \ldots, M. \tag{39}
$$

From the definition of Brownian motion $B(t)$ on $(\Omega_B, \mathcal{F}_B, \mathbb{P}_B)$, we know that $B(0) = 0$ with probability 1, that $B(\tau) - B(r) \sim \sqrt{\tau - r}\,\mathcal{N}(0,1)$ for $0 \le r < \tau \le T$, where $\mathcal{N}(0,1)$ is a normally distributed random variable with zero mean and unit variance, and that $B(\tau_2) - B(\tau_1)$ and $B(\nu_2) - B(\nu_1)$ are independent for $0 \le \tau_1 < \tau_2 < \nu_1 < \nu_2 \le T$. Thus, we set $B_0 = 0$ with probability 1, and

$$B_{j} = B_{j-1} + \mathrm{d}B_{j}, \qquad j = 1, \ldots, M,\tag{40}$$

where each $\mathrm{d}B_j$ is an independent random variable of the form $\sqrt{\Delta_j}\,\mathcal{N}(0,1)$. Throughout this section, unless stated otherwise, we assume that $T = 1$, $l = 1$ and $N = M$. Also, we evaluate the numerical solution $u(x,t)$ along $\bar{P}$ discretized paths and, finally, take the average of the results over these paths.
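The sampling scheme (39)–(40) can be sketched as follows; the function below is a Python illustration with an assumed seeded generator (for reproducibility), not the authors' Matlab code:

```python
import math
import random

def brownian_path(times, seed=0):
    """Sample B(t) at increasing times via (39)-(40):
    B_0 = 0 and B_j = B_{j-1} + sqrt(Delta_j) * N(0, 1),
    with Delta_j = t_j - t_{j-1} and t_0 = 0."""
    rng = random.Random(seed)          # assumed seeded generator, for reproducibility
    path, prev_t, prev_b = [], 0.0, 0.0
    for t in times:
        prev_b += math.sqrt(t - prev_t) * rng.gauss(0.0, 1.0)
        prev_t = t
        path.append(prev_b)
    return path

print(brownian_path([0.1, 0.35, 0.6, 1.0]))
```

By construction each increment is independent and the final value $B(1)$ has unit variance, which can be checked empirically over many paths.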

The *L*∞-norm error is evaluated using the following definition:

$$\|\mathbb{E}_N\|_{\infty} = \max_{1 \le i,j \le N} \left|u(\xi_i, \tau_j) - U_N(\xi_i, \tau_j)\right|, \tag{41}$$

where $u(\xi_i, \tau_j)$ and $U_N(\xi_i, \tau_j)$ are the exact solution and the numerical solution defined in (9), evaluated at the collocation points $x = \xi_i$ and $t = \tau_j$, respectively. The convergence order is defined by the following formula:

$$\mathbf{C} \mathbf{O} = \log\_{\frac{N\_1}{N\_2}} \frac{||\mathbb{E}\_{N\_1}||\_{\infty}}{||\mathbb{E}\_{N\_2}||\_{\infty}},\tag{42}$$

where $\|\mathbb{E}_{N_i}\|_{\infty}$ denotes the $L_{\infty}$-norm error for $N_i$ ($i = 1, 2$) collocation points. The numerical computations are performed on a personal computer with a 1.70 GHz processor, and the codes are written in Matlab.
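Formula (42) is a direct computation; as a sketch, note that for an error decaying like $N^{-p}$ between two resolutions, the formula as written returns $-p$ (the sign simply reflects the choice of logarithm base $N_1/N_2 < 1$):

```python
import math

def convergence_order(e1, e2, N1, N2):
    """Formula (42): CO = log_{N1/N2}(e1/e2), where e_i is the
    L_inf-norm error obtained with N_i collocation points."""
    return math.log(e1 / e2) / math.log(N1 / N2)

# an error behaving like N^(-2): going from N = 8 to N = 16 divides it by 4
print(convergence_order(1.0e-2, 2.5e-3, 8, 16))
```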

**Example 1.** *Consider the time-fractional stochastic equation*

$$D_{0,t}^{\alpha}u(x,t) = \dot{B}(t)\,u_{xx}(x,t) + u_{x}(x,t) + f(x,t),$$

*subject to the conditions:*

$$\begin{aligned} u(x,0)&=0, \\ u(0,t)&=0, \quad u(1,t) = \alpha \exp(1)\,t^2, \end{aligned}$$

*where α* ∈ (0, 1)*, B*(*t*) *is a Brownian motion and*

$$\begin{split} f(x,t) &= \frac{\alpha\Gamma(3)}{\Gamma(3-\alpha)}t^{2-\alpha}x^5 \exp\left(x^2\right) - \alpha t^2 x^4 \exp\left(x^2\right)(5+2x^2) \\ &\quad- 2\alpha\dot{B}(t)\,t^2 x^3 \exp\left(x^2\right)(10+11x^2+2x^4). \end{split}$$

*The exact solution of the above problem is $u(x,t) = \alpha t^2 x^5 \exp(x^2)$.*

Now, we evaluate $u(x,t)$ along $\bar{P} = 80$ discretized Brownian paths. To approximate $\dot{B}(t)$, we use the discretized scheme described at the beginning of this section. Figure 1 displays the exact and approximate solutions with *α* = 0.9, and Figure 2 shows the exact solution and its estimates for different values of *α* when *N* = 12. These figures confirm that the resulting numerical solutions agree well with the exact solution. Table 1 displays the $l_{\infty}$-norm errors and convergence orders for *α* = 0.25, 0.75 and several values of *N*. Also, Figure 3 shows the behaviour of the absolute error of $u(x,t)$ for different values of *N* when *α* = 0.5. Table 1 and Figure 3 confirm the accuracy of the obtained numerical approximations.

**Figure 1.** The exact and numerical solution of Example 1 with *α* = 0.9.

**Figure 2.** The exact and approximate solution for different values of *α* in Example 1.


**Table 1.** Example 1: The *l*∞-norm errors and convergence orders.

**Figure 3.** The absolute errors for Example 1 when *N* = 12 (**left**) and *N* = 15 (**right**).

**Example 2.** *Suppose the time-fractional stochastic equation*

$$D_{0,t}^{\alpha}u(x,t) = \left(\pi + \dot{B}(t)\right)u_{xx}(x,t) - 2u_{x}(x,t) + f(x,t),$$

*where α* ∈ (0, 1)*, B*(*t*) *is a Brownian motion, and*

$$\begin{split} f(x,t) &= \frac{3}{\Gamma(2-\alpha)}t^{1-\alpha}\left(x^2 - \frac{2tx}{\alpha-2} + \frac{2t^2}{(\alpha-2)(\alpha-3)}\right)\sin(\pi x) \\ &\quad- \left(\pi + \dot{B}(t)\right)(x+t)\left[6\sin(\pi x) + 6\pi(x+t)\cos(\pi x) - \pi^2(x+t)^2\sin(\pi x)\right] \\ &\quad+ 2(x+t)^2\Big(\pi(x+t)\cos(\pi x) + 3\sin(\pi x)\Big). \end{split}$$

*With these assumptions, the exact solution is u*(*x*, *t*)=(*x* + *t*)<sup>3</sup> sin (*πx*)*.*

The numerical solution is evaluated along $\bar{P} = 100$ discretized Brownian paths. Table 2 displays the $l_{\infty}$-norm errors and orders of convergence for several values of *α* and *N*; this table shows the high accuracy of the introduced scheme. Also, Figure 4 displays the exact and numerical solutions of $u(x,t)$ when *α* = 0.5, *N* = 10, and Figure 5 shows the absolute error together with the contour plot for *N* = 16. It can be seen that the numerical solution agrees well with the exact solution.

**Table 2.** Example 2: The *l*∞-norm errors and convergence order.


**Figure 4.** The exact and numerical solution at different levels of *t* for Example 2.

**Figure 5.** The absolute error (**left**) and contour plot (**right**) for Example 2 with *N* = 16.

**Example 3.** *Let*

$$D_{0,t}^{\alpha}u(x,t) = \left(\frac{1}{\pi^2} + \vartheta \dot{B}(t)\right)u_{xx}(x,t),$$

*subject to:*

$$\begin{aligned} u(0, t) &= u(1, t) = 0, \\ u(x, 0) &= \sin(\pi x), \end{aligned}$$

*where α* ∈ (0, 1) *and B*(*t*) *is a Brownian motion.*

The numerical solutions are evaluated along $\bar{P} = 50$ discretized Brownian paths. Figure 6 shows the numerical solution at *t* = 1 for different values of *α* when *ϑ* = 0.5 and *N* = 10. Figure 7 displays the estimates of $u(x,t)$ when *ϑ* = 0.15, 0.2, *N* = 8 and *α* = 1; the results are compared with the wavelet Galerkin (WG) method [30]. This figure confirms that the present method yields a smoother solution than the numerical scheme in [30]. Also, Figures 8 and 9 show the approximate solutions and the contour plots for several values of *ϑ* when *α* = 0.45. The results confirm that the employed approach is very efficient.

**Figure 6.** The numerical solution at *t* = 1 for different values of *α* in Example 3.

**Figure 7.** The numerical solution obtained by the proposed method (**left**) and the wavelet Galerkin method [30] (**right**) for Example 3 with different values of *ϑ* when *N* = 8.

**Figure 8.** The numerical approximation (**left**) and contour plot (**right**) for Example 3 with *ϑ* = 0.5.

**Figure 9.** The numerical approximation (**left**) and contour plot (**right**) for Example 3 with *ϑ* = 1.2.

#### **6. Conclusions**

Motivated by the numerous applications of FSPDEs, a new numerical scheme was introduced to solve a class of stochastic heat equations of fractional order with additive noise, subject to suitable conditions. The method is based on a collocation approach with the SKCPs basis functions. The convergence of the proposed method was proved, and three illustrative examples were investigated to demonstrate the efficiency of the discussed approach. The obtained numerical results confirm the accuracy of this method.

**Author Contributions:** All authors discussed the results and contributed to the final manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Oscillation Criteria for First Order Differential Equations with Non-Monotone Delays**

#### **Emad R. Attia 1,†, Hassan A. El-Morshedy 2,† and Ioannis P. Stavroulakis 3,4,\***


Received: 18 February 2020; Accepted: 31 March 2020; Published: 2 May 2020

**Abstract:** New sufficient criteria are obtained for the oscillation of a non-autonomous first order differential equation with non-monotone delays. Both recursive criteria and criteria of lower-upper limit type are given. The obtained results improve upon the most recently published results. An example is given to illustrate the applicability and strength of our results.

**Keywords:** Oscillation; Differential equations; Non-monotone delays

#### **1. Introduction**

Consider the first order delay differential equation

$$x'(t) + p(t)x(\tau(t)) = 0, \qquad t \ge t_0, \tag{1}$$

where *<sup>p</sup>*, *<sup>τ</sup>* <sup>∈</sup> *<sup>C</sup>*([*t*0, <sup>∞</sup>), [0, <sup>∞</sup>)) and *<sup>τ</sup>*(*t*) <sup>&</sup>lt; *<sup>t</sup>* for *<sup>t</sup>* <sup>≥</sup> *<sup>t</sup>*0, such that lim*<sup>t</sup>*→∞*τ*(*t*) = <sup>∞</sup>.

A solution of Equation (1) is a function *<sup>x</sup>*(*t*) on [¯*t*, <sup>∞</sup>), where ¯*<sup>t</sup>* <sup>=</sup> min*t*≥*t*<sup>0</sup> *<sup>τ</sup>*(*t*), which is continuously differentiable on [*t*0, ∞) and satisfies Equation (1) for all *t* ≥ *t*0. As customary, a solution of Equation (1) is called oscillatory if it has arbitrarily large zeros. Equation (1) is said to be oscillatory if all its solutions are oscillatory.

The oscillation of Equation (1) has been extensively studied for many decades; see [1–17]. As far as these authors know, the earliest systematic study of the oscillation of Equation (1) was due to Myshkis [14], who proved that Equation (1) is oscillatory when

$$\limsup_{t \to \infty} \left( t - \tau(t) \right) < \infty \quad \text{and} \quad \liminf_{t \to \infty} \left( t - \tau(t) \right) \cdot \liminf_{t \to \infty} p(t) > \frac{1}{e}.$$

In 1972, Ladas et al. [13] proved that Equation (1) is oscillatory if

$$L := \limsup\_{t \to \infty} \int\_{\tau(t)}^t p(s)ds > 1,\tag{2}$$

where the delay *τ*(*t*) is assumed to be a nondecreasing function.

*Symmetry* **2020**, *12*, 718; doi:10.3390/sym12050718 www.mdpi.com/journal/symmetry

In 1979, Ladas [12] (for Equation (1) with constant delay) and in 1982, Koplatadze and Chanturija [10] established the celebrated oscillation criterion

$$k := \liminf_{t \to \infty} \int_{\tau(t)}^t p(s)\,ds > \frac{1}{e}. \tag{3}$$

The oscillation of Equation (1) has been studied in the case $0 < k \le \frac{1}{e}$, $L \le 1$ and $\tau(t)$ nondecreasing; see [8,9,15,16] and the references cited therein. In most of these works, the oscillation criteria have been formulated as relations between *L* and *k*. For example, Jaroš and Stavroulakis [8], Kon et al. [9], Philos and Sficas [15], and Sficas and Stavroulakis [16] obtained the following criteria, respectively:

$$L > \frac{\ln(\lambda(k)) + 1}{\lambda(k)} - \frac{1 - k - \sqrt{1 - 2k - k^2}}{2},$$

$$L > 2k + \frac{2}{\lambda(k)} - 1,$$

$$L > 1 - \frac{k^2}{2(1 - k)} - \frac{k^2}{2}\lambda(k),$$

and

$$L > \frac{\ln \lambda(k) - 1 + \sqrt{5 - 2\lambda(k) + 2k\lambda(k)}}{\lambda(k)},\tag{4}$$

where *λ*(*k*) is the smaller real root of the equation *λ* = e*λk*.
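The value λ(k) has no closed form, but since f(λ) = λ − e^{λk} is negative at λ = 1 and positive at λ = e for 0 < k < 1/e (while the larger root lies above e), the smaller root can be bracketed and found by bisection; a short sketch:

```python
import math

def lam(k, tol=1e-12):
    """Smaller real root of lambda = exp(k * lambda) for 0 < k < 1/e,
    found by bisection: f(lam) = lam - exp(k * lam) changes sign on [1, e]."""
    lo, hi = 1.0, math.e
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - math.exp(k * mid) < 0.0:
            lo = mid          # f(mid) < 0: root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(lam(0.3))   # satisfies lam = e^(0.3 * lam)
```

At the boundary k = 1/e the two roots merge at λ = e, so the bracket degenerates there and this simple sketch assumes k strictly below 1/e.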

The same problem has been considered for Equation (1) with non-monotone delays; see [2,4,11,17–19]. The latter case is much more complicated than the monotone delay case. In fact, according to Braverman and Karpuz ([2], Theorem 1), condition (2) need not be sufficient for the oscillation of Equation (1) if *τ*(*t*) is non-monotone. To overcome this difficulty, many authors used a nondecreasing function *δ*(*t*) defined by:

$$\delta(t) = \max\_{s \le t} \tau(s), \quad t \ge t\_0; \tag{5}$$

hence, many results were obtained by using techniques similar to those of the monotonic delays case. Most of these results were given by recursive formulas. Next, we give an overview of such results:

In 1994, Koplatadze and Kvinikadze [11] proved the following interesting result, which requires the sequence of functions $\{\psi_i\}_{i=1}^{\infty}$ defined as follows:

$$\psi_1(t) = 0, \quad \psi_i(t) = e^{\int_{\tau(t)}^t p(s)\psi_{i-1}(s)\,ds}, \quad i = 2, 3, \ldots \tag{6}$$

**Theorem 1** ([11])**.** *Let there exist $j \in \{1, 2, \ldots\}$ such that*

$$\limsup_{t \to \infty} \int_{\delta(t)}^t p(s)\, e^{\int_{\delta(s)}^{\delta(t)} p(u)\psi_j(u)\,du}\, ds > 1 - c(k), \tag{7}$$

*where $k$, $\delta$, and $\psi_j$ are defined by (3), (5), and (6), respectively, and*

$$c(k) = \begin{cases} 0, & \text{if } k > \frac{1}{e},\\[4pt] \frac{1 - k - \sqrt{1 - 2k - k^2}}{2}, & \text{if } 0 \le k \le \frac{1}{e}. \end{cases}$$

*Then, Equation (1) is oscillatory.*
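For orientation, the constant $c(k)$ above is easy to evaluate numerically; a small sketch of ours (the function name `c` is our choice):

```python
import math

def c(k):
    # c(k) = 0 for k > 1/e, and (1 - k - sqrt(1 - 2k - k^2)) / 2 for
    # 0 <= k <= 1/e; the radicand is positive there, since 1 - 2k - k^2
    # vanishes only at k = sqrt(2) - 1, which exceeds 1/e.
    if k > 1 / math.e:
        return 0.0
    return (1 - k - math.sqrt(1 - 2 * k - k * k)) / 2

print(c(1 / math.e))  # about 0.13654, the constant reused in Example 1 below
print(c(0.4))         # 0.0, since 0.4 > 1/e
```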

In 2011, Braverman and Karpuz [2] obtained the following sufficient condition for the oscillation of Equation (1),

$$\limsup_{t \to \infty} \int_{\delta(t)}^t p(s)\, e^{\int_{\tau(s)}^{\delta(t)} p(u)\,du}\, ds > 1. \tag{8}$$

In 2014, Stavroulakis [17] improved condition (8) to

$$\limsup_{t \to \infty} \int_{\delta(t)}^t p(s)\, e^{\int_{\tau(s)}^{\delta(t)} p(u)\,du}\, ds > 1 - \frac{1 - k - \sqrt{1 - 2k - k^2}}{2}. \tag{9}$$

In 2015, Infante et al. [19] proved that Equation (1) is oscillatory if one of the following conditions is satisfied:

$$\limsup_{t \to \infty} \int_{g(t)}^t p(s)\, e^{\int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^u p(v)\,dv}\,du}\, ds > 1, \tag{10}$$

or

$$\limsup_{\varepsilon \to 0^{+}} \left( \limsup_{t \to \infty} \int_{g(t)}^{t} p(s)\, e^{(\lambda(k) - \varepsilon) \int_{\tau(s)}^{g(t)} p(u)\,du}\, ds \right) > 1, \tag{11}$$

where $g(t)$ is a nondecreasing function satisfying $\tau(t) \le g(t) \le t$ for all $t \ge t_1$ and some $t_1 \ge t_0$.

In 2016, El-Morshedy and Attia [4] proved that Equation (1) is oscillatory if there exists a positive integer *n* such that

$$\limsup_{t \to \infty} \left( \int_{g(t)}^t q_n(s)\,ds + c(k^*)\, e^{\int_{g(t)}^t \sum_{i=0}^{n-1} q_i(s)\,ds} \right) > 1, \tag{12}$$

where $k^* := \liminf_{t \to \infty} \int_{g(t)}^{t} p(s)\,ds$, the functions $c$ and $g$ are defined as before, and $\{q_n(t)\}$ is given by

$$q_0(t) = p(t), \quad q_1(t) = q_0(t) \int_{\tau(t)}^t q_0(s)\, e^{\int_{\tau(s)}^t q_0(u)\,du}\, ds,$$

$$q_n(t) = q_{n-1}(t) \int_{g(t)}^t q_{n-1}(s)\, e^{\int_{g(s)}^t q_{n-1}(u)\,du}\, ds, \quad n = 2, 3, \ldots$$

Very recently, Bereketoglu et al. [18] proved that Equation (1) oscillates if, for some $\ell \in \mathbb{N}$, the following criterion holds:

$$\limsup_{t \to \infty} \int_{g(t)}^t p(s)\, e^{\int_{\tau(s)}^{g(t)} P_\ell(u)\,du}\, ds > 1 - c(k^*), \tag{13}$$

where

$$P_\ell(t) = p(t) \left[ 1 + \int_{g(t)}^t p(s)\, e^{\int_{\tau(s)}^t P_{\ell-1}(u)\,du}\, ds \right], \quad P_0(t) = p(t).$$

In this work, we obtain new sufficient criteria of recursive type for the oscillation of Equation (1) when the delay is non-monotone and $k^* \le \frac{1}{e} < \tilde{L} < 1$, where $\tilde{L} := \limsup_{t \to \infty} \int_{g(t)}^{t} p(s)\,ds$. In addition, new practical lower limit-upper limit type criteria, similar to those in [8,9,15,16], are obtained. These new conditions improve some results in [2,5,8,9,11,13,16–19]. An illustrative example is given to show the strength and applicability of our results.

#### **2. Main Results**

Throughout this work, we assume that $c$, $g$, $k^*$, $\lambda$, $t_1$ are defined as above and that $g^i(t)$ stands for the $i$th composition of $g$.

For fixed $n \in \mathbb{N}$, we define $\{R_{m,n}(t)\}$ and $\{Q_{m,n}(t)\}$, eventually, as follows:

$$\begin{array}{ll} R_{m,n}(t) = 1 + \displaystyle\int_{\tau(t)}^{t} p(s)\, e^{\int_{\tau(s)}^{t} p(u) Q_{m-1,n}(u)\,du}\, ds, & m = 1, 2, \ldots,\\[8pt] Q_{i,j}(t) = e^{\int_{\tau(t)}^{t} p(s) Q_{i,j-1}(s)\,ds}, & i = 1, 2, \ldots, m-1,\ j = 1, 2, \ldots, n, \end{array}$$

where

$$\begin{cases} Q_{0,0}(t) = (\lambda(k^*) - \varepsilon) \left( 1 + (\lambda(k^*) - \varepsilon) \displaystyle\int_{\tau(t)}^{g(t)} p(s)\,ds \right), \\[8pt] Q_{0,r}(t) = e^{\int_{\tau(t)}^t p(s) Q_{0,r-1}(s)\,ds}, \quad r = 1, 2, \ldots, n, \\[8pt] Q_{i,0}(t) = R_{i,n}(t), \quad i = 1, 2, \ldots, m - 1, \end{cases}$$

and $\varepsilon \in (0, \lambda(k^*))$.
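To see what the recursion produces in the simplest setting, consider the constant-coefficient case $p(t) \equiv 1/e$ with $\tau(t) = g(t) = t - 1$ (so $k^* = 1/e$, $\lambda(k^*) = e$, and the integral from $\tau(t)$ to $g(t)$ in $Q_{0,0}$ vanishes). Every integral then reduces to a product over an interval of length one, and $Q_{0,r}$ iterates $x \mapsto e^{x/e}$, whose fixed point is $e$. A hedged numerical sketch of this special case (ours, not part of the paper):

```python
import math

# Constant-coefficient special case: p(t) = 1/e, tau(t) = g(t) = t - 1,
# so k* = 1/e and lambda(k*) = e. Then Q_{0,0} = lambda(k*) - eps, and
# Q_{0,r} = exp(p * Q_{0,r-1}), since each integral runs over an
# interval of length 1 with a constant integrand.
p_const = 1 / math.e
eps = 1e-4
Q = (math.e - eps) * (1 + (math.e - eps) * 0.0)  # Q_{0,0}; inner integral is 0
history = [Q]
for r in range(1, 11):
    Q = math.exp(p_const * Q)  # Q_{0,r}
    history.append(Q)
print(history[-1])  # stays close to e: x = exp(x/e) has the fixed point x = e
```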

**Lemma 1.** *Assume that x*(*t*) *is an eventually positive solution of Equation (1). Then,*

$$\frac{x(\tau(t))}{x(t)} \ge R_{m,n}(t),$$

*for all sufficiently large t.*

**Proof.** Since $x(t)$ is an eventually positive solution of Equation (1), it is eventually nonincreasing; hence, because $\tau(t) \le g(t)$, there exists a sufficiently large $T > t_1$ such that $x(t)$ satisfies

$$x'(t) + p(t)\,x(g(t)) \le 0, \quad t > T.$$

Using ([5], Lemma 2.1.2), for sufficiently small $\varepsilon > 0$ and sufficiently large $t$, we have

$$\frac{x(\tau(t))}{x(t)} \ge \frac{x(g(t))}{x(t)} > \lambda(k^*) - \varepsilon. \tag{14}$$

On the other hand, dividing both sides of Equation (1) by *x*(*t*) and integrating the resulting equation from *s* to *t*, *s* ≤ *t*, we obtain

$$x(s) = x(t)\, e^{\int_s^t p(u)\frac{x(\tau(u))}{x(u)}\,du}. \tag{15}$$

Therefore,

$$\begin{aligned} x(\tau(t)) &= x(t)\, e^{\int_{\tau(t)}^{t} p(u)\, \frac{x(\tau(u))}{x(g(u))}\, \frac{x(g(u))}{x(u)}\,du} \\ &\ge x(t)\, e^{(\lambda(k^*) - \varepsilon) \int_{\tau(t)}^{t} p(u)\, \frac{x(\tau(u))}{x(g(u))}\,du}. \end{aligned} \tag{16}$$

Integrating Equation (1) from *τ*(*ξ*) to *g*(*ξ*),

$$x(g(\xi)) - x(\tau(\xi)) + \int_{\tau(\xi)}^{g(\xi)} p(r)\, x(\tau(r))\,dr = 0.$$

Using (14) as well as the nonincreasing nature of *x*(*t*), it follows that

$$x(g(\xi)) - x(\tau(\xi)) + (\lambda(k^*) - \varepsilon)\, x(g(\xi)) \int_{\tau(\xi)}^{g(\xi)} p(r)\,dr \le 0.$$

Thus,

$$\frac{x(\tau(\xi))}{x(g(\xi))} \ge 1 + (\lambda(k^*) - \varepsilon) \int_{\tau(\xi)}^{g(\xi)} p(r)\,dr.$$

This together with (16) gives

$$\begin{aligned} \frac{x(\tau(t))}{x(t)} &\ge e^{(\lambda(k^*) - \varepsilon) \int_{\tau(t)}^t p(u) \left(1 + (\lambda(k^*) - \varepsilon) \int_{\tau(u)}^{g(u)} p(r)\,dr\right) du} \\ &= e^{\int_{\tau(t)}^t p(u) Q_{0,0}(u)\,du} = Q_{0,1}(t). \end{aligned} \tag{17}$$

Since (15) implies that $\frac{x(\tau(t))}{x(t)} = e^{\int_{\tau(t)}^t p(s) \frac{x(\tau(s))}{x(s)}\,ds}$, (17) yields

$$\frac{x(\tau(t))}{x(t)} \ge e^{\int_{\tau(t)}^t p(s) Q_{0,1}(s)\,ds} = Q_{0,2}(t).$$

Repeating this process, we arrive at the following inequality

$$\frac{x(\tau(t))}{x(t)} \ge Q_{0,n}(t). \tag{18}$$

On the other hand, by integrating Equation (1) from *τ*(*t*) to *t*, we have

$$x(t) - x(\tau(t)) + \int_{\tau(t)}^{t} p(s)\, x(\tau(s))\,ds = 0. \tag{19}$$

Using (15), we obtain $x(\tau(s)) = x(t)\, e^{\int_{\tau(s)}^t p(u) \frac{x(\tau(u))}{x(u)}\,du}$. Therefore, (19) implies that

$$\frac{x(\tau(t))}{x(t)} = 1 + \int_{\tau(t)}^{t} p(s)\, e^{\int_{\tau(s)}^{t} p(u) \frac{x(\tau(u))}{x(u)}\,du}\, ds. \tag{20}$$

Now, substituting (18) into (20), we have

$$\frac{x(\tau(t))}{x(t)} \ge 1 + \int_{\tau(t)}^t p(s)\, e^{\int_{\tau(s)}^t p(u) Q_{0,n}(u)\,du}\, ds = R_{1,n}(t).$$

From the last inequality and (15), we obtain

$$\frac{x(\tau(t))}{x(t)} \ge e^{\int_{\tau(t)}^t p(s) R_{1,n}(s)\,ds} = e^{\int_{\tau(t)}^t p(s) Q_{1,0}(s)\,ds} = Q_{1,1}(t).$$

It follows from this and (15) that

$$\frac{x(\tau(t))}{x(t)} \ge e^{\int_{\tau(t)}^t p(s) Q_{1,1}(s)\,ds} = Q_{1,2}(t).$$

A simple induction implies that

$$\frac{x(\tau(t))}{x(t)} \ge e^{\int_{\tau(t)}^t p(s) Q_{1,n-1}(s)\,ds} = Q_{1,n}(t).$$

Substituting the previous inequality into (20), we get

$$\frac{x(\tau(t))}{x(t)} \ge 1 + \int_{\tau(t)}^t p(s)\, e^{\int_{\tau(s)}^t p(u) Q_{1,n}(u)\,du}\, ds = R_{2,n}(t).$$

Therefore, by using the same arguments, as before, we obtain

$$\frac{x(\tau(t))}{x(t)} \ge 1 + \int_{\tau(t)}^t p(s)\, e^{\int_{\tau(s)}^t p(u) Q_{m-1,n}(u)\,du}\, ds = R_{m,n}(t).$$

**Theorem 2.** *Assume that $k^* \le \frac{1}{e}$ and that there exist $m, n \in \mathbb{N}$ such that*

$$\limsup_{t \to \infty} \int_{g(t)}^t p(s)\, e^{\int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^u p(v) R_{m,n}(v)\,dv}\,du}\, ds > 1 - c(k^*). \tag{21}$$

*Then, every solution of Equation (1) is oscillatory.*

**Proof.** Assume the contrary, i.e., there exists a non-oscillatory solution *x*(*t*). Due to the linearity of Equation (1), one can assume that *x*(*t*) is eventually positive. Now, integrating Equation (1) from *g*(*t*) to *t*, we obtain

$$x(t) - x(g(t)) + \int_{g(t)}^{t} p(s)\, x(\tau(s))\,ds = 0. \tag{22}$$

By using (15), it follows that

$$\begin{aligned} x(\tau(s)) &= x(g(t))\, e^{\int_{\tau(s)}^{g(t)} p(u) \frac{x(\tau(u))}{x(u)}\,du} \\ &= x(g(t))\, e^{\int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^{u} p(v) \frac{x(\tau(v))}{x(v)}\,dv}\,du}. \end{aligned}$$

Therefore, Lemma 1 yields

$$x(\tau(s)) \ge x(g(t))\, e^{\int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^{u} p(v) R_{m,n}(v)\,dv}\,du}.$$

Substituting into (22), we get

$$x(t) - x(g(t)) + x(g(t)) \int_{g(t)}^t p(s)\, e^{\int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^u p(v) R_{m,n}(v)\,dv}\,du}\, ds \le 0,$$

that is,

$$\int_{g(t)}^t p(s)\, e^{\int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^u p(v) R_{m,n}(v)\,dv}\,du}\, ds \le 1 - \frac{x(t)}{x(g(t))},$$

for sufficiently large *t*. Therefore,

$$\limsup_{t \to \infty} \int_{g(t)}^t p(s)\, e^{\int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^u p(v) R_{m,n}(v)\,dv}\,du}\, ds \le 1 - \liminf_{t \to \infty} \frac{x(t)}{x(g(t))}.$$

However, $\liminf_{t \to \infty} \frac{x(t)}{x(g(t))} \ge c(k^*)$ (see [5], Lemma 2.1.3). Consequently,

$$\limsup_{t \to \infty} \int_{g(t)}^t p(s)\, e^{\int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^u p(v) R_{m,n}(v)\,dv}\,du}\, ds \le 1 - c(k^*),$$

which contradicts (21).

The proofs of the following two results are basically similar to that of Lemma 1 and Theorem 2.

**Theorem 3.** *Assume that $k^* \le \frac{1}{e}$ and*

$$\limsup_{t \to \infty} \int_{g(t)}^t p(s)\, e^{(\lambda(k^*) - \varepsilon) \int_{\tau(s)}^{g(t)} p(u)\,du + (\lambda(k^*) - \varepsilon)^2 \int_{\tau(s)}^{g(t)} p(u) \int_{\tau(u)}^{g(u)} p(v)\,dv\,du}\, ds > 1 - c(k^*), \tag{23}$$

*where $\varepsilon \in (0, \lambda(k^*))$. Then, all solutions of Equation (1) oscillate.*

**Theorem 4.** *Assume that $k^* \le \frac{1}{e}$ and that there exist $m, n \in \mathbb{N}$ such that*

$$\limsup_{t \to \infty} \int_{g(t)}^t p(s)\, e^{\int_{\tau(s)}^{g(t)} p(u) R_{m,n}(u)\,du}\, ds > 1 - c(k^*). \tag{24}$$

*Then, all solutions of Equation (1) oscillate.*

**Lemma 2.** *Let x*(*t*) *be an eventually positive solution of Equation (1). Then,*

$$\limsup_{t \to \infty} \left( \int_{g(t)}^t p(s)\,ds + w(g(t)) \int_{g(t)}^t p(s) \int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^{g^2(t)} p(v)w(v)\,dv}\, du\, ds \right) = 1 - M,$$

*where*

$$M := \liminf_{t \to \infty} \frac{x(t)}{x(g(t))}, \quad \text{and} \quad w(t) := \frac{x(g(t))}{x(t)}.$$

**Proof.** The positivity of $x(t)$, together with Equation (1), implies that $x(t)$ is an eventually non-increasing function. Integrating Equation (1) from $g(t)$ to $t$, we obtain

$$x(t) - x(g(t)) + \int_{g(t)}^{t} p(s)\, x(\tau(s))\,ds = 0. \tag{25}$$

Since *τ*(*s*) ≤ *g*(*t*) for *s* ≤ *t*, integrating Equation (1) from *τ*(*s*) to *g*(*t*), we have

$$x(\tau(s)) = x(g(t)) + \int_{\tau(s)}^{g(t)} p(u)\, x(\tau(u))\,du.$$

Substituting into (25), we get

$$x(t) - x(g(t)) + x(g(t)) \int_{g(t)}^{t} p(s)\,ds + \int_{g(t)}^{t} p(s) \int_{\tau(s)}^{g(t)} p(u)\, x(\tau(u))\,du\, ds = 0. \tag{26}$$

It is clear that $\tau(u) \le g^2(t)$ for $u \le g(t)$. Therefore, (15) implies that

$$x(\tau(u)) = x(g^2(t))\, e^{\int_{\tau(u)}^{g^2(t)} p(v)w(v)\,dv}.$$

From this and (26), it follows that

$$x(t) - x(g(t)) + x(g(t)) \int_{g(t)}^{t} p(s)\,ds + x(g^{2}(t)) \int_{g(t)}^{t} p(s) \int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^{g^2(t)} p(v)w(v)\,dv}\, du\, ds = 0.$$

Consequently,

$$\int_{g(t)}^t p(s)\,ds + w(g(t)) \int_{g(t)}^t p(s) \int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^{g^2(t)} p(v)w(v)\,dv}\, du\, ds = 1 - \frac{x(t)}{x(g(t))}.$$

Therefore,

$$\limsup_{t \to \infty} \left( \int_{g(t)}^t p(s)\,ds + w(g(t)) \int_{g(t)}^t p(s) \int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^{g^2(t)} p(v)w(v)\,dv}\, du\, ds \right) = 1 - \liminf_{t \to \infty} \frac{x(t)}{x(g(t))}.$$

The proof of the following theorem is a consequence of Lemmas 1 and 2 and ([5], Lemmas 2.1.2 and 2.1.3).

**Theorem 5.** *Assume that $k^* \le \frac{1}{e}$ and that there exist $m, n \in \mathbb{N}$ such that*

$$\limsup_{t \to \infty} \left( \int_{g(t)}^t p(s)\,ds + (\lambda(k^*) - \varepsilon) \int_{g(t)}^t p(s) \int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^{g^2(t)} p(v) R_{m,n}(v)\,dv}\, du\, ds \right) > 1 - c(k^*),$$

*where $\varepsilon \in (0, \lambda(k^*))$. Then, every solution of Equation (1) is oscillatory.*

**Theorem 6.** *Let $\tilde{L} := \limsup_{t \to \infty} \int_{g(t)}^{t} p(s)\,ds < 1$, $0 < k^* \le \frac{1}{e}$,*

$$\int_{g(s)}^{g(t)} p(u)\,du \ge \int_s^t p(u)\,du, \quad \text{for all } s \in [g(t), t], \tag{27}$$

*and*

$$A := \liminf_{t \to \infty} \int_{\tau(t)}^{g(t)} p(s)\,ds. \tag{28}$$

*If one of the following conditions is satisfied:*

$$\begin{aligned} \text{(i)} \quad &\tilde{L} > \frac{-1 - A\lambda(k^*) + \sqrt{2 + \left(1 + A\lambda(k^*)\right)^2 + 2k^*\lambda(k^*)}}{\lambda(k^*)},\\ \text{(ii)} \quad &\tilde{L} > 1 + k^* + \frac{1}{\lambda(k^*)} + A - \sqrt{\left(1 + k^* + \frac{1}{\lambda(k^*)} + A\right)^2 - 2\left(k^* + \frac{1}{\lambda(k^*)}\right)},\end{aligned}$$

*then every solution of Equation (1) is oscillatory.*

**Proof.** Assume that Equation (1) has a nonoscillatory solution *x*(*t*); as usual, we assume that *x*(*t*) is an eventually positive solution. Let

$$I(t) = \int_{g(t)}^t p(s)\,ds + w(g(t)) \int_{g(t)}^t p(s) \int_{\tau(s)}^{g(t)} p(u)\, e^{\int_{\tau(u)}^{g^2(t)} p(v)w(v)\,dv}\, du\, ds, \tag{29}$$

where $w(t) = \frac{x(g(t))}{x(t)}$. Therefore,

$$I(t) \ge \int_{g(t)}^t p(s)\,ds + w(g(t)) \left( \int_{g(t)}^t p(s) \int_{\tau(s)}^{g(s)} p(u)\,du\, ds + \int_{g(t)}^t p(s) \int_{g(s)}^{g(t)} p(u)\,du\, ds \right).$$

In view of ([5], Lemma 2.1.2) and (28), for sufficiently small $\varepsilon > 0$, we obtain

$$I(t) \ge \int_{g(t)}^t p(s)\,ds + (\lambda(k^*) - \varepsilon) \left( (A - \varepsilon) \int_{g(t)}^t p(s)\,ds + \int_{g(t)}^t p(s) \int_{g(s)}^{g(t)} p(u)\,du\, ds \right).$$

By using (27), it follows that

$$I(t) \ge \left(1 + (\lambda(k^*) - \varepsilon)(A - \varepsilon)\right) \int_{g(t)}^t p(s)\,ds + (\lambda(k^*) - \varepsilon) \int_{g(t)}^t p(s) \int_s^t p(u)\,du\, ds. \tag{30}$$

However,

$$\int_{g(t)}^t p(s) \int_s^t p(u)\,du\, ds = \frac{1}{2} \left( \int_{g(t)}^t p(s)\,ds \right)^2.$$

Therefore, (30) implies that

$$I(t) \ge \left(1 + (\lambda(k^*) - \varepsilon)(A - \varepsilon)\right) \int_{g(t)}^t p(s)\,ds + \frac{\lambda(k^*) - \varepsilon}{2} \left(\int_{g(t)}^t p(s)\,ds\right)^2. \tag{31}$$
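The double-integral identity invoked in this step (substitute $F(s) = \int_s^t p(u)\,du$, so the integrand becomes $-F\,dF$) can be checked numerically; a small sketch of ours, with an arbitrary positive weight of our choosing (the helper names `p` and `quad` are ours):

```python
import math

# Check: int_a^b p(s) * ( int_s^b p(u) du ) ds = 0.5 * ( int_a^b p )^2
# for a sample positive weight p(s) = 1 + sin(s)^2 on [a, b] = [0, 2].
def p(s):
    return 1 + math.sin(s) ** 2

def quad(f, a, b, n=400):
    # composite midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

a, b = 0.0, 2.0
lhs = quad(lambda s: p(s) * quad(p, s, b), a, b)
rhs = 0.5 * quad(p, a, b) ** 2
print(abs(lhs - rhs))  # small: the two sides agree to quadrature accuracy
```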

On the other hand, from [9], we have

$$\liminf_{t \to \infty} \frac{x(t)}{x(g(t))} \ge 1 - k^* - \frac{1}{\lambda(k^*)}. \tag{32}$$

Therefore, Lemma 2 and (32) imply that $I(t) < k^* + \frac{1}{\lambda(k^*)} + \varepsilon$ for sufficiently large $t$. Thus, (31) yields

$$\left(1 + (\lambda(k^*) - \varepsilon)(A - \varepsilon)\right) \int_{g(t)}^t p(s)\,ds + \frac{\lambda(k^*) - \varepsilon}{2} \left( \int_{g(t)}^t p(s)\,ds \right)^2 \le I(t) < k^* + \frac{1}{\lambda(k^*)} + \varepsilon,$$

or equivalently,

$$(\lambda(k^*) - \varepsilon)\Lambda^2 + 2\left(1 + (\lambda(k^*) - \varepsilon)(A - \varepsilon)\right)\Lambda - 2k^* - \frac{2}{\lambda(k^*)} - 2\varepsilon < 0,$$

where

$$\Lambda := \int_{g(t)}^t p(s)\,ds.$$

Then,

$$\Lambda < \frac{-\left(1 + (\lambda(k^*) - \varepsilon)(A - \varepsilon)\right) + \sqrt{\left(1 + (\lambda(k^*) - \varepsilon)(A - \varepsilon)\right)^2 + 2(\lambda(k^*) - \varepsilon)\left(k^* + \frac{1}{\lambda(k^*)} + \varepsilon\right)}}{\lambda(k^*) - \varepsilon}.$$

Thus,

$$\tilde{L} \le \frac{-\left(1 + (\lambda(k^*) - \varepsilon)(A - \varepsilon)\right) + \sqrt{\left(1 + (\lambda(k^*) - \varepsilon)(A - \varepsilon)\right)^2 + 2(\lambda(k^*) - \varepsilon)\left(k^* + \frac{1}{\lambda(k^*)} + \varepsilon\right)}}{\lambda(k^*) - \varepsilon}.$$

Now, letting $\varepsilon \to 0$, we obtain

$$
\tilde{L} \le \frac{-1 - A\lambda(k^\*) + \sqrt{2 + \left(1 + A\lambda(k^\*)\right)^2 + 2k^\*\lambda(k^\*)}}{\lambda(k^\*)}.
$$

This contradicts condition (i) and completes the proof of case (i).

To prove case (ii), we integrate Equation (1) from $g^2(t)$ to $g(t)$ and obtain

$$x(g(t)) - x(g^2(t)) + \int_{g^2(t)}^{g(t)} p(s)\, x(\tau(s))\,ds = 0,$$

which, by using the nonincreasing nature of *x*(*t*) and the assumption that *τ*(*t*) ≤ *g*(*t*), implies that

$$x(g(t)) - x(g^2(t)) + x(g^2(t)) \int_{g^2(t)}^{g(t)} p(s)\,ds \le 0. \tag{33}$$

In view of (27), we have

$$\int_{g^2(t)}^{g(t)} p(s)\,ds \ge \int_{g(t)}^t p(s)\,ds.$$

Substituting into (33), it follows that

$$\frac{x(g^2(t))}{x(g(t))} \ge \frac{1}{1 - \int_{g(t)}^t p(s)\,ds}.$$

From this and (29), we obtain

$$I(t) \ge \int_{g(t)}^t p(s)\,ds + \frac{1}{1 - \int_{g(t)}^t p(s)\,ds} \int_{g(t)}^t p(s) \int_{\tau(s)}^{g(t)} p(u)\,du\, ds.$$

Again, Lemma 2 and (32) imply, for sufficiently small $\varepsilon > 0$, that

$$\int_{g(t)}^{t} p(s)\,ds + \frac{1}{1 - \int_{g(t)}^{t} p(s)\,ds} \int_{g(t)}^{t} p(s) \int_{\tau(s)}^{g(t)} p(u)\,du\, ds \le I(t) < k^* + \frac{1}{\lambda(k^*)} + \varepsilon. \tag{34}$$

However, as in the proof of case (i), we have

$$\begin{aligned} \int_{g(t)}^{t} p(s) \int_{\tau(s)}^{g(t)} p(u)\,du\, ds &= \int_{g(t)}^{t} p(s) \int_{\tau(s)}^{g(s)} p(u)\,du\, ds + \int_{g(t)}^{t} p(s) \int_{g(s)}^{g(t)} p(u)\,du\, ds \\ &\ge (A - \varepsilon) \int_{g(t)}^{t} p(s)\,ds + \int_{g(t)}^{t} p(s) \int_{s}^{t} p(u)\,du\, ds \\ &= (A - \varepsilon) \int_{g(t)}^{t} p(s)\,ds + \frac{1}{2} \left( \int_{g(t)}^{t} p(s)\,ds \right)^{2}. \end{aligned} \tag{35}$$

Combining the inequalities (34) and (35), we obtain

$$2\Lambda_1(1-\Lambda_1) + 2(A-\varepsilon)\Lambda_1 + \Lambda_1^2 - 2a(\varepsilon)(1-\Lambda_1) < 0,$$

where

$$\Lambda_1 = \int_{g(t)}^t p(s)\,ds, \quad a(\varepsilon) = k^* + \frac{1}{\lambda(k^*)} + \varepsilon.$$

Thus,

$$\Lambda_1^2 - 2\left(1 + a(\varepsilon) + A - \varepsilon\right)\Lambda_1 + 2a(\varepsilon) > 0,$$

which implies that $\Lambda_1 < 1 + a(\varepsilon) + A - \varepsilon - \sqrt{\left(1 + a(\varepsilon) + A - \varepsilon\right)^2 - 2a(\varepsilon)}$, and hence

$$\tilde{L} = \limsup_{t \to \infty} \int_{g(t)}^t p(s)\,ds \le 1 + a(\varepsilon) + A - \varepsilon - \sqrt{\left(1 + a(\varepsilon) + A - \varepsilon\right)^2 - 2a(\varepsilon)}.$$

Letting $\varepsilon \to 0$, we obtain

$$\tilde{L} \le 1 + k^* + \frac{1}{\lambda(k^*)} + A - \sqrt{\left(1 + k^* + \frac{1}{\lambda(k^*)} + A\right)^2 - 2\left(k^* + \frac{1}{\lambda(k^*)}\right)},$$

which contradicts condition (ii). This completes the proof.

#### **Remark 1.**

(i) *Condition (27) is satisfied if (see [9,16])*

$$p(\mathfrak{g}(t))\mathfrak{g}'(t) \ge p(t), \qquad \text{eventually for all } t.$$

(ii) *It is easy to show that the conclusion of Theorem 6 is valid, if p*(*t*) > 0 *and condition (27) is replaced by*

$$\liminf\_{t \to \infty} \frac{p(g(t))g'(t)}{p(t)} = 1.$$

**Corollary 1.** *Assume that $0 < k \le \frac{1}{e}$, $L < 1$ and $\tau(t)$ is a nondecreasing continuous function such that*

$$\int\_{\tau(s)}^{\tau(t)} p(u) du \ge \int\_s^t p(u) du, \quad \text{for all } s \in [\tau(t), t].$$

*If*

$$L > \min\left\{ \frac{-1 + \sqrt{3 + 2k\lambda(k)}}{\lambda(k)}, 1 + k + \frac{1}{\lambda(k)} - \sqrt{1 + \left(k + \frac{1}{\lambda(k)}\right)^2} \right\},\tag{36}$$

*then Equation (1) is oscillatory.*

#### **Remark 2.**


$$\frac{-1 + \sqrt{3 + 2k\lambda(k)}}{\lambda(k)} \le \frac{\ln \lambda(k) - 1 + \sqrt{5 - 2\lambda(k) + 2k\lambda(k)}}{\lambda(k)},$$

*for all $\lambda(k) \in [1, e]$. Therefore, condition (36) improves condition (4).*
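The inequality in Remark 2 can be spot-checked numerically over a grid, parametrizing by $\lambda \in [1, e]$ with $k = \ln\lambda/\lambda$ (which inverts $\lambda = e^{\lambda k}$ on this interval). A sketch of ours, for reassurance only:

```python
import math

# Spot-check of Remark 2 on a grid: for lam in [1, e] and k = ln(lam)/lam
# (so that lam = exp(lam * k)), the new bound should not exceed the old one:
#   (-1 + sqrt(3 + 2*k*lam)) / lam
#     <= (ln(lam) - 1 + sqrt(5 - 2*lam + 2*k*lam)) / lam.
worst = -1.0
for i in range(1001):
    lam = 1 + (math.e - 1) * i / 1000
    k = math.log(lam) / lam
    new = (-1 + math.sqrt(3 + 2 * k * lam)) / lam
    old = (math.log(lam) - 1 + math.sqrt(5 - 2 * lam + 2 * k * lam)) / lam
    worst = max(worst, new - old)
print(worst)  # <= 0 up to rounding; equality is attained at lam = 1
```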

The following example illustrates the applicability and strength of our result.

**Example 1.** *Consider the first order delay differential equation*

$$x'(t) + p(t)\,x(\tau(t)) = 0, \quad t \ge 2, \tag{37}$$

*where (see Figure 1)*

$$\tau(t) = t - 1 - \alpha \sin^2\left(\nu \pi (t + \alpha)\right) + \alpha,$$

*and*

$$p(t) := \begin{cases} \dfrac{1}{(1-\alpha)e}, & t \in [2n, 2n+1-\alpha],\\[8pt] \dfrac{1}{\alpha(1-\alpha)}\left(\beta - \dfrac{1}{e}\right)(t - 2n - 1) + \dfrac{\beta}{1-\alpha}, & t \in [2n+1-\alpha, 2n+1],\\[8pt] \dfrac{\beta}{1-\alpha}, & t \in [2n+1, 2n+2-\alpha],\\[8pt] \dfrac{-1}{\alpha(1-\alpha)}\left(\beta - \dfrac{1}{e}\right)(t - 2n - 2) + \dfrac{1}{(1-\alpha)e}, & t \in [2n+2-\alpha, 2n+2], \end{cases}$$

*where $n \in \mathbb{N}$, $\alpha = 0.0001$, $\beta = 0.505$ and $\nu = 20{,}000$. Throughout our calculations, we take $g = \delta$. It is clear, from the definitions of $\delta$ and $\tau$, that*

$$t - 1 \le \tau(t) \le \delta(t) \le t - 1 + \alpha.$$

*Notice that*

$$k^* = k = \liminf_{t \to \infty} \int_{\tau(t)}^t p(s)\,ds = \lim_{n \to \infty} \int_{\tau(2n+1-\alpha)}^{2n+1-\alpha} p(s)\,ds = \lim_{n \to \infty} \int_{2n}^{2n+1-\alpha} p(s)\,ds = \frac{1}{e}. \tag{38}$$

*Then, $\lambda(k) = e$, and $\frac{1-k-\sqrt{1-2k-k^2}}{2} \approx 0.1365429862$.*
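The two constants just computed can be double-checked directly from the definition of $p$; a short sketch of ours:

```python
import math

# On [2n, 2n+1-alpha] the example's p equals 1/((1-alpha)*e), so the
# integral of p over that interval (of length 1-alpha) is exactly 1/e,
# confirming (38); the constant (1-k-sqrt(1-2k-k^2))/2 then evaluates
# to about 0.1365429862.
alpha, beta = 0.0001, 0.505
k = (1 - alpha) * (1.0 / ((1 - alpha) * math.e))   # = 1/e
ck = (1 - k - math.sqrt(1 - 2 * k - k * k)) / 2
print(k)   # 0.36787944... = 1/e
print(ck)  # 0.13654298...
```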

**Figure 1.** The graph of *τ*.

*Since*

$$p(t)R_{1,1}(t) = p(t)\left[1 + \int_{\tau(t)}^t p(s)\, e^{\int_{\tau(s)}^t p(u)\, e^{(\lambda(k)-\varepsilon)\int_{\tau(u)}^u p(\eta)\left(1 + (\lambda(k)-\varepsilon)\int_{\tau(\eta)}^{\delta(\eta)} p(r)\,dr\right)d\eta}\,du}\, ds\right],$$

*for $\varepsilon = 0.0001$, we have*

$$p(t)R_{1,1}(t) \ge \frac{1}{(1-\alpha)e}\left[1 + \int_{t-1+\alpha}^{t} \frac{1}{(1-\alpha)e}\, e^{\int_{s-1+\alpha}^{t} \frac{1}{(1-\alpha)e}\, e^{(\lambda(k)-\varepsilon)\int_{u-1+\alpha}^{u} \frac{1}{(1-\alpha)e}\,d\eta}\,du}\, ds\right] \approx 1.00006322\ldots$$

*Now, assume that*

$$J(t) = \int\_{\delta(t)}^t p(s) \exp\left(\int\_{\tau(s)}^{\delta(t)} p(u) \mathcal{R}\_{1,1}(u) du\right) ds.$$

*Then,*

$$\begin{aligned} J(2n+2-\alpha) &= \int_{\delta(2n+2-\alpha)}^{2n+2-\alpha} p(s) \exp\left(\int_{\tau(s)}^{\delta(2n+2-\alpha)} p(u) R_{1,1}(u)\,du\right) ds \\ &\ge \int_{2n+1}^{2n+2-\alpha} p(s) \exp\left(\int_{s-1+\alpha}^{2n+1-\alpha} p(u) R_{1,1}(u)\,du\right) ds \\ &\ge \int_{2n+1}^{2n+2-\alpha} \frac{\beta}{1-\alpha} \exp\left(1.00006322 \int_{s-1+\alpha}^{2n+1-\alpha} du\right) ds \\ &> 0.867626. \end{aligned}$$

*Therefore,*

$$\limsup_{t \to \infty} J(t) \ge \lim_{n \to \infty} J(2n + 2 - \alpha) \ge 0.867626 > 1 - \frac{1 - k - \sqrt{1 - 2k - k^2}}{2} \approx 0.8634570138.$$

*Consequently, Theorem 4 with $n = m = 1$ implies that Equation (37) is oscillatory. However, by (38), condition (3) does not hold.*
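The lower bound $0.867626$ above reduces to a one-dimensional integral with a closed form: substituting $s = 2n + 1 + \theta$, the inner integral has length $1 - 2\alpha - \theta$, so the bound equals $\frac{\beta}{1-\alpha}\cdot\frac{e^{c(1-2\alpha)} - e^{-c\alpha}}{c}$ with $c = 1.00006322$. A quick check of this arithmetic (ours, for illustration):

```python
import math

# Closed form of the lower bound for J(2n+2-alpha):
#   (beta/(1-alpha)) * int_0^{1-alpha} exp(c*(1 - 2*alpha - theta)) dtheta
#     = (beta/(1-alpha)) * (exp(c*(1-2*alpha)) - exp(-c*alpha)) / c.
alpha, beta = 0.0001, 0.505
c = 1.00006322
bound = (beta / (1 - alpha)) * (math.exp(c * (1 - 2 * alpha)) - math.exp(-c * alpha)) / c
print(bound)  # about 0.8676, consistent with the stated bound 0.867626
```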

*Let*

$$J\_1(t) = \int\_{\delta(t)}^t p(s) \exp\left(\int\_{\tau(s)}^{\delta(t)} p(u) \exp\left(\int\_{\tau(u)}^u p(v) dv\right) du\right) ds.$$

*Then,*

$$J_1(t) \le \int_{t-1}^t \frac{\beta}{1-\alpha} \exp\left(\int_{s-1}^{t-1+\alpha} \frac{\beta}{1-\alpha} \exp\left(\int_{u-1}^u \frac{\beta}{1-\alpha}\,dv\right) du\right) ds \approx 0.7901391991.$$

*Consequently, $\limsup_{t \to \infty} J_1(t) < 0.79014$, which means that conditions (7) with $j = 3$ and (10) fail to apply.*

*In addition, since*

$$\int_{\delta(t)}^t p(s) \exp\left(\int_{\tau(s)}^{\delta(t)} p(u)\,du\right) ds < \int_{t-1}^t \frac{\beta}{1-\alpha} \exp\left(\int_{s-1}^{t-1+\alpha} \frac{\beta}{1-\alpha}\,du\right) ds,$$

*it follows that*

$$\limsup_{t \to \infty} \int_{\delta(t)}^t p(s) \exp\left(\int_{\tau(s)}^{\delta(t)} p(u)\,du\right) ds < 0.6571023948 < 1 - \frac{1 - k - \sqrt{1 - 2k - k^2}}{2} \approx 0.8634570138.$$

*Therefore, none of the conditions (7) with j* = 2*, (8) and (9) are satisfied.*
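The constant $0.6571023948$ also has a closed form: with $q = \beta/(1-\alpha)$, the right-hand side of the preceding bound equals $e^{q(1+\alpha)} - e^{q\alpha}$. A quick check of this arithmetic (ours):

```python
import math

# Closed form of the upper bound:
#   q * int_{t-1}^{t} exp( q * (t - 1 + alpha - (s - 1)) ) ds
#     = exp(q*(1+alpha)) - exp(q*alpha),  with q = beta/(1-alpha).
alpha, beta = 0.0001, 0.505
q = beta / (1 - alpha)
bound = math.exp(q * (1 + alpha)) - math.exp(q * alpha)
print(bound)  # about 0.657102, matching the stated 0.6571023948
```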

*Define*

$$J\_2(t) = \int\_{\delta(t)}^t p(s) \int\_{\tau(s)}^s p(u) \exp\left(\int\_{\tau(u)}^s p(v) dv\right) du \, ds + c(k) \exp\left(\int\_{\delta(t)}^t p(s) ds\right).$$

*It follows that*

$$\begin{aligned} J_2(t) &\le \int_{t-1}^t \frac{\beta}{1-\alpha} \int_{s-1}^s \frac{\beta}{1-\alpha} \exp\left(\int_{u-1}^s \frac{\beta}{1-\alpha}\,dv\right) du\, ds + c(k) \exp\left(\int_{t-1}^t \frac{\beta}{1-\alpha}\,ds\right) \\ &< 0.776165, \end{aligned}$$

*so $\limsup_{t \to \infty} J_2(t) \le 0.776165$. Thus, condition (12) with $n = 1$ fails to apply.*

*Now, let us define the following functions:*

$$J_3(t, \varepsilon) = \int_{\delta(t)}^t p(s) \exp\left( (\lambda(k) - \varepsilon) \int_{\tau(s)}^{\delta(t)} p(u)\,du \right) ds$$

*and*

$$J\_4(t) = \int\_{\delta(t)}^t p(s) \exp\left(\int\_{\tau(s)}^{\delta(t)} p(\mu) F\_1(\mu) d\mu\right) ds,$$

*where*

$$F\_1(t) = 1 + \int\_{\delta(t)}^t p(v) \exp\left(\int\_{\tau(v)}^t p(u) du\right) dv.$$

*Since*

$$F_1(t) \le 1 + \int_{t-1}^t \frac{\beta}{1-\alpha} \exp\left(\int_{v-1}^t \frac{\beta}{1-\alpha}\,du\right) dv \approx 2.088615495,$$

*and $\lambda(k) - \varepsilon < e$, it follows that $J_3(t, \varepsilon) < G_{e}(t)$ and $J_4(t) < G_{2.088615495}(t)$, where $G_\omega(t)$ is defined by*

$$\mathcal{G}\_{\omega}(t) = \int\_{\delta(t)}^{t} p(\mathbf{s}) \exp\left(\omega \int\_{\tau(s)}^{\delta(t)} p(\mathbf{u}) d\mathbf{u}\right) d\mathbf{s}, \quad \text{for } \omega > 0.$$

*Next, we estimate the upper limit of $G_\omega(t)$ for $\omega = e$ and $\omega = 2.088615495$.*

*For* 0 ≤ *ζ* ≤ 1 − *α, we have*

$$\begin{aligned} G_{\omega}(2n+\zeta) &= \int_{\delta(2n+\zeta)}^{2n+\zeta} p(s) \exp\left(\omega \int_{\tau(s)}^{\delta(2n+\zeta)} p(u)\,du\right) ds \\ &\le \int_{2n+\zeta-1}^{2n+\zeta} p(s) \exp\left(\omega \int_{s-1}^{2n+\zeta-1+\alpha} p(u)\,du\right) ds \\ &= \int_{2n+\zeta-1}^{2n-\alpha} p(s) \exp\left(\omega \int_{s-1}^{2n+\zeta-1+\alpha} p(u)\,du\right) ds \\ &\quad + \int_{2n-\alpha}^{2n} p(s) \exp\left(\omega \int_{s-1}^{2n+\zeta-1+\alpha} p(u)\,du\right) ds \\ &\quad + \int_{2n}^{2n+\zeta} p(s) \exp\left(\omega \int_{s-1}^{2n+\zeta-1+\alpha} p(u)\,du\right) ds, \end{aligned}$$

*which implies that*

$$\begin{aligned} G_{\omega}(2n+\zeta) &\le \int_{2n+\zeta-1}^{2n-\alpha} \frac{\beta}{1-\alpha} \exp\left(\omega \int_{s-1}^{2n-1-\alpha} \frac{1}{(1-\alpha)e}\,du + \omega \int_{2n-1-\alpha}^{2n+\zeta-1+\alpha} \frac{\beta}{1-\alpha}\,du\right) ds \\ &\quad + \int_{2n-\alpha}^{2n} \frac{\beta}{1-\alpha} \exp\left(\omega \int_{s-1}^{2n+\zeta-1+\alpha} \frac{\beta}{1-\alpha}\,du\right) ds \\ &\quad + \int_{2n}^{2n+\zeta} \frac{1}{(1-\alpha)e} \exp\left(\omega \int_{s-1}^{2n+\zeta-1+\alpha} \frac{\beta}{1-\alpha}\,du\right) ds, \end{aligned}$$

*and these elementary integrals can be evaluated in closed form as a function of $\omega$ and $\zeta$.*

*Therefore, G*2.088615495(2*n* + *ζ*) < 0.7725 *and G*e(2*n* + *ζ*) < 0.9162 *for all ζ* ∈ [0, 1 − *α*]*.*

*In addition, if* 1 − *α* ≤ *ζ* ≤ 1*, then*

$$\begin{array}{rcl} G\_{\omega}(2n+\zeta) & \leq & \int\_{2n+\zeta-1}^{2n} p(s) \exp\left(\omega \int\_{s-1}^{2n+\zeta-1+\alpha} p(u)\, du\right) ds \\ & + & \int\_{2n}^{2n+\zeta} p(s) \exp\left(\omega \int\_{s-1}^{2n+\zeta-1+\alpha} p(u)\, du\right) ds. \end{array}$$

*Therefore,*

$$\begin{split} G\_{\omega}(2n+\zeta) &\leq \int\_{2n+\zeta-1}^{2n} \frac{\beta}{1-\alpha} \exp\left(\omega \int\_{s-1}^{2n+\zeta-1+\alpha} \frac{\beta}{1-\alpha}\, du\right) ds \\ &\quad + \int\_{2n}^{2n+1-\alpha} \frac{1}{1-\alpha} \exp\left(\omega \int\_{s-1}^{2n+\zeta-1+\alpha} \frac{\beta}{1-\alpha}\, du\right) ds \\ &\quad + \int\_{2n+1-\alpha}^{2n+\zeta} \frac{\beta}{1-\alpha} \exp\left(\omega \int\_{s-1}^{2n+\zeta-1+\alpha} \frac{\beta}{1-\alpha}\, du\right) ds \\ &\approx \frac{1}{\omega} \Big( e^{0.5051010101\omega} - e^{0.00005050505050\omega}(10000\zeta+1) + 1.980198020\, e^{-1+0.5050505050\omega\zeta+0.00005050505050\omega} \\ &\quad - 1.980198020\, e^{-1+0.5050505050\omega\zeta-0.5049494949\omega} + e^{0.0001010101010\omega}(5000.0\zeta-4999) - e^{0.00005050505050\omega} \Big). \end{split}$$

*Thus, G*2.088615495(2*n* + *ζ*) < 0.6529 *and G*e(2*n* + *ζ*) < 0.7899 *for all ζ* ∈ [1 − *α*, 1]*. Using similar arguments, we obtain:*

$$G\_{2.088615495}(2n+\zeta+1) < 0.7603,\ G\_{\mathfrak{e}}(2n+\zeta+1) < 0.8737 \text{ for all } \zeta \in [0, 1-\alpha],$$


*and*

$$G\_{2.088615495}(2n+\zeta+1) < 0.7603,\ G\_{\mathfrak{e}}(2n+\zeta+1) < 0.8681 \text{ for all } \zeta \in [1-\alpha, 1].$$

*Then,*

$$G\_{2.088615495}(t) < 0.7725, \quad \text{for all } t \in [2n, 2n+2],\ n \in \mathbb{N},$$

*and*

$$G\_{\mathfrak{e}}(t) < 0.9162, \quad \text{for all } t \in [2n, 2n+2], n \in \mathbb{N}.$$

*Consequently,*


$$\limsup\_{\varepsilon \to 0^{+}} \left( \limsup\_{t \to \infty} J\_3(t, \varepsilon) \right) \le \limsup\_{t \to \infty} G\_{\mathfrak{e}}(t) \le 0.9162 < 1,$$

*and*

$$\limsup\_{t \to \infty} J\_4(t) \le \limsup\_{t \to \infty} G\_{2.088615495}(t) \le 0.7726 < 1 - \frac{1 - k - \sqrt{1 - 2k - k^2}}{2} \approx 0.8634570138.$$

*Then, conditions (11) and (13) with l* = 1*, respectively, fail to apply.*

**Author Contributions:** All authors contributed equally to the research and to writing the paper. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors would like to thank the Reviewers for their useful suggestions.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Noether Symmetries of a Generalized Coupled Lane-Emden-Klein-Gordon-Fock System with Central Symmetry**

#### **B. Muatjetjeja 1,2, S. O. Mbusi <sup>2</sup> and A. R. Adem 3,\***


Received: 20 February 2020; Accepted: 20 March 2020; Published: 5 April 2020

**Abstract:** In this paper we carry out a complete Noether symmetry analysis of a generalized coupled Lane-Emden-Klein-Gordon-Fock system with central symmetry. It is shown that several cases arise for which Noether symmetries exist. Moreover, we derive conservation laws associated with the admitted Noether symmetries. Furthermore, we briefly discuss the physical interpretation of these conserved vectors.

**Keywords:** Lane-Emden-Klein-Gordon-Fock system with central symmetry; Noether symmetries; conservation laws

**MSC:** 35J47; 35G50; 35J61

#### **1. Introduction**

In 2017 [1], the authors studied both the Lie and the Noether symmetries of a Lane-Emden-Klein-Gordon-Fock system with central symmetry and power-law nonlinearities, namely,

$$\begin{aligned} u\_{tt} - u\_{rr} - \frac{n}{r} u\_r + \frac{\gamma v^q}{r^n} &= 0, \\ v\_{tt} - v\_{rr} - \frac{n}{r} v\_r + \frac{\alpha u^p}{r^n} &= 0, \end{aligned} \tag{1}$$

where *p*, *n*, *γ*, *α*, *q* are non-zero constants. In fact, when *n* = 2, *γ* = *α* = 1, system (1) becomes

$$\begin{aligned} u\_{tt} - u\_{rr} - \frac{2}{r} u\_r + \frac{v^q}{r^2} &= 0, \\ v\_{tt} - v\_{rr} - \frac{2}{r} v\_r + \frac{u^p}{r^2} &= 0. \end{aligned} \tag{2}$$

System (2) has been studied in [2] for both Lie and Noether symmetries together with the associated conservation laws.

Systems of this type occur in several physical phenomena; see, for example, Refs. [1–4] and the references therein. These types of systems can also be viewed as a natural extension of the famous two-component generalization of the nonlinear wave equation, viz,

$$
u\_{tt} - u\_{rr} - \frac{m}{r} u\_r - u^p = 0,\tag{3}
$$

with the real-valued function *u* = *u*(*t*,*r*), and *p* representing the interaction power, while the independent variables (*t*,*r*) symbolize the time and radial coordinates, respectively, in *m* ≠ 0 dimensions [4].

In 2019 [5], the authors studied the generalization of system (1) in which the power functions *v<sup>q</sup>* and *u<sup>p</sup>* are replaced with arbitrary elements, namely *h*(*v*) and *g*(*u*), respectively. Thus, system (1) becomes

$$\begin{aligned} u\_{tt} - u\_{rr} - \frac{n}{r} u\_r + \frac{h(v)}{r^n} &= 0, \\ v\_{tt} - v\_{rr} - \frac{n}{r} v\_r + \frac{g(u)}{r^n} &= 0. \end{aligned} \tag{4}$$

It is worth mentioning that, if the parameter *n* = 0 in system (1), then system (1) reduces to the Lane-Emden system

$$\begin{aligned} u\_{xx} + u\_{yy} + v^p &= 0, \\ v\_{xx} + v\_{yy} + u^q &= 0, \end{aligned} \tag{5}$$

under the complex transformation (*x*, *y*, *u*, *v*) → (*t*, *ir*, *u*, *v*), where *p* and *q* are non-zero constants. This system has been extensively studied for its Noether and Lie symmetries [6]. Furthermore, if the parameter *n* = 0, in system (4), then system (4) transforms to a generalized Lane-Emden system

$$\begin{aligned} u\_{xx} + u\_{yy} + h(v) &= 0, \\ v\_{xx} + v\_{yy} + g(u) &= 0, \end{aligned} \tag{6}$$

under the aforementioned complex transformation. In [7], the authors applied the classical symmetry method to investigate the symmetries of system (6).

In [5], the authors applied the method of modern group analysis to study the generalized coupled Lane-Emden-Klein-Gordon-Fock system with central symmetry (4). Motivated by the recent results in [5], we study the aforementioned system (4). To the authors' knowledge, Noether symmetry analysis has not been applied to the generalized Lane-Emden-Klein-Gordon-Fock system with central symmetry (4). Thus, in this paper, we aim to fill this gap by carrying out a complete Noether symmetry classification of system (4) and deriving the associated conservation laws. Since system (4) has a Lagrangian structure, Noether's theorem [8] gives us an elegant way to construct conservation laws for system (4).

The structure of this paper is as follows. Firstly, we seek to establish the admitted Noether symmetries of a generalized coupled Lane-Emden-Klein-Gordon-Fock system with central symmetry (4) associated with the standard Lagrangian. Next, in Section 2, conservation laws connected with the admitted Noether symmetries are derived. Concluding remarks are summarised in Section 3.

#### **2. Complete Noether Symmetries Analysis**

Several authors have done much work on Noether classification for a system of PDEs. See for example [6,7,9]. Here we perform a complete Noether symmetry analysis of system (4) with respect to the standard Lagrangian. System (4) has a Lagrangian structure. This prompts the following Lemma.

**Lemma 1.** *The generalized coupled Lane-Emden-Klein-Gordon-Fock system with central symmetry (4) constitutes the Euler-Lagrange equations of the functional*

$$J(u,v) = \int\_0^\infty \int\_0^\infty \mathcal{L}(t, r, u, v, u\_t, v\_t, u\_r, v\_r)\, dt\, dr,$$

*where*

$$\mathcal{L} = \frac{1}{n} \left( r^n u\_t v\_t - r^n u\_r v\_r - \int h(v)\, dv - \int g(u)\, du \right) \tag{7}$$

*is the associated Lagrangian.*

**Proof.** The insertion of L in the Euler-Lagrange equations [6,9] gives

$$\begin{split} \frac{\delta \mathcal{L}}{\delta u} &= \frac{\partial \mathcal{L}}{\partial u} - D\_t \left( \frac{\partial \mathcal{L}}{\partial u\_t} \right) - D\_r \left( \frac{\partial \mathcal{L}}{\partial u\_r} \right) \\ &= -\frac{g(u)}{n} - \frac{1}{n} D\_t (r^n v\_t) - \frac{1}{n} D\_r (-r^n v\_r) \\ &= -\frac{g(u)}{n} - \frac{r^n}{n} v\_{tt} - \frac{1}{n} (-nr^{n-1} v\_r - r^n v\_{rr}), \end{split}$$

so that $\delta \mathcal{L}/\delta u = 0$, after multiplication by $-n/r^n$, yields

$$v\_{tt} - v\_{rr} - \frac{n}{r} v\_r + \frac{g(u)}{r^n} = 0.$$

Similarly,

$$\begin{split} \frac{\delta \mathcal{L}}{\delta v} &= \frac{\partial \mathcal{L}}{\partial v} - D\_t \left( \frac{\partial \mathcal{L}}{\partial v\_t} \right) - D\_r \left( \frac{\partial \mathcal{L}}{\partial v\_r} \right) \\ &= -\frac{h(v)}{n} - \frac{1}{n} D\_t (r^n u\_t) - \frac{1}{n} D\_r (-r^n u\_r) \\ &= -\frac{h(v)}{n} - \frac{r^n}{n} u\_{tt} - \frac{1}{n} (-nr^{n-1} u\_r - r^n u\_{rr}), \end{split}$$

which gives

$$u\_{tt} - u\_{rr} - \frac{n}{r} u\_r + \frac{h(v)}{r^n} = 0.$$

This completes the proof.
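The variational identity behind this proof can be checked numerically. The sketch below (our addition, not part of the paper) takes the sample choices *n* = 2, *h*(*v*) = *v*, *g*(*u*) = *u* and the hypothetical fields *u* = *t*²*r*, *v* = *tr*² on the square [1, 2] × [1, 2], and verifies that the Gateaux derivative of *J* in the direction of a test function φ vanishing on the boundary equals the integral of (δL/δ*u*)φ, as integration by parts demands.

```python
# Numerical check of Lemma 1 (our addition, with sample/hypothetical data):
# d/de J(u + e*phi)|_{e=0}  must equal  integral of (delta L / delta u) * phi.
from math import sin, pi

n = 2
N = 400                       # midpoint-rule quadrature points per axis
dt = dr = 1.0 / N

def phi(t, r):                # test function, vanishes on the boundary
    return sin(pi * (t - 1.0))**2 * sin(pi * (r - 1.0))**2

def phi_t(t, r):
    return pi * sin(2.0 * pi * (t - 1.0)) * sin(pi * (r - 1.0))**2

def phi_r(t, r):
    return pi * sin(pi * (t - 1.0))**2 * sin(2.0 * pi * (r - 1.0))

direct = weak = 0.0
for i in range(N):
    t = 1.0 + (i + 0.5) * dt
    for j in range(N):
        r = 1.0 + (j + 0.5) * dr
        u = t * t * r                               # hypothetical u(t, r)
        v_t, v_r, v_tt, v_rr = r * r, 2.0 * t * r, 0.0, 2.0 * t   # v = t r^2
        # Gateaux derivative of J, using L from (7) with h(v)=v, g(u)=u:
        #   dL/du = -g(u)/n, dL/du_t = (r^n/n) v_t, dL/du_r = -(r^n/n) v_r
        direct += (-u / n) * phi(t, r) \
                  + (r**n / n) * v_t * phi_t(t, r) \
                  - (r**n / n) * v_r * phi_r(t, r)
        # integral of (delta L / delta u) * phi, where delta L / delta u
        # is -(r^n/n) times the second equation of system (4)
        residual = v_tt - v_rr - (n / r) * v_r + u / r**n
        weak += -(r**n / n) * residual * phi(t, r)
direct *= dt * dr
weak *= dt * dr

assert abs(direct - weak) < 1e-3 * max(1.0, abs(weak))
```

The two numbers agree to within the quadrature error, which is exactly the integration-by-parts step used in the proof.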

Let *<sup>x</sup>* = (*x*1, ··· , *<sup>x</sup>n*) be *<sup>n</sup>* independent variables and *<sup>u</sup>* = (*u*1, ··· , *<sup>u</sup>m*) *<sup>m</sup>* dependent variables. An operator (the sum over repeated indices is presupposed)

$$X = \xi^i(x, u) \frac{\partial}{\partial x^i} + \eta^{\alpha}(x, u) \frac{\partial}{\partial u^{\alpha}} \tag{8}$$

is called *Noether point symmetry generator* of the coupled system (4) connected to the Lagrangian L in (7) if the Killing-type equation,

$$X^{(1)}\mathcal{L} + D\_i(\xi^i)\mathcal{L} = D\_i A^i,\tag{9}$$

holds for some point-dependent gauge terms *A* = (*A<sup>i</sup>*), where *A<sup>i</sup>* = *A<sup>i</sup>*(*t*,*r*, *u*, *v*), *i* = 1, 2. We now recall the celebrated Noether theorem [6,8]: corresponding to each Noether symmetry, there exists a vector *T* = (*T<sup>i</sup>*) with components

$$T^i = \xi^i \mathcal{L} + \frac{\partial \mathcal{L}}{\partial u\_i^j} (\eta^j - u\_s^j \xi^s) - A^i,\tag{10}$$

which is a conserved vector of system (4). The solution of (9) leads to an overdetermined system of PDEs. Solving the resulting system of PDEs prompts the following results:

$$\begin{aligned} \xi^1 &= -a(t,r), \\ \xi^2 &= -b(t,r), \\ \eta^1 &= -d\_v(t,r,v)u - \frac{n}{r}bu + k(t,r), \\ \eta^2 &= -d(t,r,v), \\ A^1 &= -\frac{r^n}{n}d\_t u + \frac{r^n}{n}k\_t v + s(t,r), \\ A^2 &= -\frac{r^n}{n}d\_r u - \frac{r^n}{n}k\_r v + w(t,r), \end{aligned}$$

together with the classifying relation

$$\begin{split} &\frac{1}{r}(d\_v u - k)g(u) + \frac{n}{r}bu\,g(u) - d\,h(v) - (b\_r + a\_t)\left[\int h(v)\,dv + \int g(u)\,du\right] \\ &\qquad = r^n(d\_{tt} - d\_{rr})u + r^n(k\_{tt} - k\_{rr})v - nr^{n-1}(d\_r u + k\_r v) + n(s\_t + w\_r). \end{split} \tag{11}$$

A complete analysis of Equation (11) yields the following results.

**Theorem 1.** *Suppose n* ≠ 0 *and that h*(*v*) *and g*(*u*) *are arbitrary functions. Then the Noether generator of the generalized coupled Lane-Emden-Klein-Gordon-Fock system with central symmetry (4) and the associated conservation laws are given by (12)*

$$\begin{cases} X\_1 = \frac{\partial}{\partial t}, \\ A^i = 0, \\ T\_1^1 = -\frac{r^n}{n} u\_t v\_t - \frac{r^n}{n} u\_r v\_r - \frac{1}{n} \int h(v)\, dv - \frac{1}{n} \int g(u)\, du, \\ T\_1^2 = \frac{r^n}{n} u\_t v\_r + \frac{r^n}{n} u\_r v\_t. \end{cases} \tag{12}$$

**Theorem 2.** *Let the elements h*(*v*) = *αv* + *β and g*(*u*) = *γu* + *λ, where α*, *β*, *γ*, *λ are constants with α*, *γ* ≠ 0*, and let n be arbitrary. Then the Noether symmetries of system (4) and the connected conserved vectors are (12) and*

$$\begin{cases} X\_2 = k(t, r) \frac{\partial}{\partial u} + f(t, r) \frac{\partial}{\partial v}, \quad A^1 = \frac{r^n}{n} u f\_t + \frac{r^n}{n} v k\_t, \quad A^2 = -\frac{r^n}{n} u f\_r - \frac{r^n}{n} v k\_r, \\ \text{with} \quad k\_{tt} - k\_{rr} - \frac{n}{r} k\_r + \frac{\alpha}{r^n} f = 0, \quad f\_{tt} - f\_{rr} - \frac{n}{r} f\_r + \frac{\gamma}{r^n} k = 0, \\ T\_2^1 = \frac{r^n}{n} f u\_t + \frac{r^n}{n} k v\_t - \frac{r^n}{n} u f\_t - \frac{r^n}{n} v k\_t, \\ T\_2^2 = \frac{r^n}{n} u f\_r + \frac{r^n}{n} v k\_r - \frac{r^n}{n} f u\_r - \frac{r^n}{n} k v\_r. \end{cases} \tag{13}$$

**Theorem 3.** *Suppose that h*(*v*) = *γv<sup>q</sup> and g*(*u*) = *αu<sup>p</sup>, with α*, *γ* ≠ 0*. Then the Noether operators of system (4) and the associated conservation laws are as follows:*

*(i) if n* = $\frac{2(q+p+2)}{(p+1)(q+1)}$*, p*, *q* ≠ 0, ±1*, then we have (12) and*

$$\begin{cases} X\_2 = t\frac{\partial}{\partial t} + r\frac{\partial}{\partial r} - \frac{2}{p+1}u\frac{\partial}{\partial u} - \frac{2}{q+1}v\frac{\partial}{\partial v}, \\ A^i = 0, \\ T\_2^1 = -\frac{tr^n}{n}(u\_t v\_t + u\_r v\_r) - \frac{1}{n(p+1)}(\alpha t u^{p+1} + 2r^n u v\_t) - \frac{1}{n(q+1)}(\gamma t v^{q+1} + 2r^n v u\_t) - \frac{r^{n+1}}{n}(u\_r v\_t + u\_t v\_r), \\ T\_2^2 = \frac{r^{n+1}}{n}(u\_t v\_t + u\_r v\_r) - \frac{1}{n(p+1)}(\alpha r u^{p+1} - 2r^n u v\_r) - \frac{1}{n(q+1)}(\gamma r v^{q+1} - 2r^n v u\_r) + \frac{tr^n}{n}(u\_t v\_r + u\_r v\_t). \end{cases}$$

*(ii) if p* = *q* = −1*, γ* = *α, n arbitrary. Here we get the generic case (12) and*

$$\begin{cases} X\_3 = u \frac{\partial}{\partial u} - v \frac{\partial}{\partial v}, \\ A^i = 0, \\ T\_3^1 = \frac{r^n}{n} (u v\_t - u\_t v), \\ T\_3^2 = \frac{r^n}{n} (u\_r v - u v\_r). \end{cases}$$

*It should be noted that in any other case one recovers (12). It should also be observed that when p* = *q* = 1, *this falls into Theorem 2.*

**Theorem 4.** *Let the elements h*(*v*) = *αv<sup>p</sup> and g*(*u*) = *γe*<sup>−*mu*</sup>*, with α*, *γ*, *m* ≠ 0 *and p* ≠ −1*. Then the Noether generators of system (4) and the corresponding conservation laws are:*

*(i) if n* = 2/(*p* + 1)*, with α*, *γ*, *m arbitrary, the generic case (12) extends by one Noether generator with the associated conservation laws;*

$$\begin{cases} X\_2 = tm(p+1)\frac{\partial}{\partial t} + rm(p+1)\frac{\partial}{\partial r} + 2(p+1)\frac{\partial}{\partial u} - 2mv\frac{\partial}{\partial v}, \\ A^i = 0, \\ T\_2^1 = -\frac{m(p+1)tr^n}{n}(u\_t v\_t + u\_r v\_r) - \frac{m(p+1)r^{n+1}}{n}(u\_r v\_t + u\_t v\_r) + \frac{2(p+1)r^n}{n}v\_t - \frac{2mr^n}{n}v u\_t - \frac{\alpha m t}{n}v^{p+1} + \frac{\gamma(p+1)t}{n}e^{-mu}, \\ T\_2^2 = \frac{m(p+1)r^{n+1}}{n}(u\_t v\_t + u\_r v\_r) + \frac{m(p+1)tr^n}{n}(u\_t v\_r + u\_r v\_t) - \frac{2(p+1)r^n}{n}v\_r + \frac{2mr^n}{n}v u\_r - \frac{\alpha m r}{n}v^{p+1} + \frac{\gamma(p+1)r}{n}e^{-mu}. \end{cases}$$

*It should be noted that in any other case one recovers (12). This analysis will also be encountered in Theorem 5.*

**Theorem 5.** *Suppose that h*(*v*) = *αe<sup>λv</sup> and g*(*u*) = *γu<sup>q</sup>, with q*, *γ*, *α* ≠ 0*. Then the Noether operators of system (4) and the associated conserved vectors are:*

*(i) if n* = 2/(*q* + 1)*, with α*, *γ*, *λ arbitrary, the generic case (12) enlarges by one operator with the following conserved vectors;*

$$\begin{cases} X\_2 = t\lambda(q+1)\frac{\partial}{\partial t} + r\lambda(q+1)\frac{\partial}{\partial r} - 2\lambda u\frac{\partial}{\partial u} - 2(q+1)\frac{\partial}{\partial v}, \\ A^i = 0, \\ T\_2^1 = -\frac{\lambda(q+1)tr^n}{n}(u\_t v\_t + u\_r v\_r) - \frac{\lambda(q+1)r^{n+1}}{n}(u\_r v\_t + u\_t v\_r) - \frac{2\lambda r^n}{n}u v\_t - \frac{2(q+1)r^n}{n}u\_t - \frac{\gamma\lambda t}{n}u^{q+1} - \frac{\alpha(q+1)t}{n}e^{\lambda v}, \\ T\_2^2 = \frac{\lambda(q+1)r^{n+1}}{n}(u\_t v\_t + u\_r v\_r) + \frac{\lambda(q+1)tr^n}{n}(u\_t v\_r + u\_r v\_t) + \frac{2\lambda r^n}{n}u v\_r + \frac{2(q+1)r^n}{n}u\_r - \frac{\gamma\lambda r}{n}u^{q+1} - \frac{\alpha(q+1)r}{n}e^{\lambda v}. \end{cases}$$

The aforementioned theorems can be proved by inserting the values of *X<sub>i</sub>*, *n*, *h*(*v*) and *g*(*u*) into Equation (11), which they satisfy identically. Moreover, substituting these values into Equation (10), one obtains the associated *T<sup>i</sup>*. These *T<sup>i</sup>* then satisfy the divergence condition.

**Remark 1.** *It is worth mentioning that for any case that does not fall under Theorems 2–5, the Noether algebra is one-dimensional and is generated by X*1*. It should be noted that Theorem 2 cannot be obtained directly as a consequence of the results of [1], since the functions h*(*v*) *and g*(*u*) *are not linear but affine; hence it gives some new results. In addition, Theorems 4 and 5 exploit new forms of h*(*v*) *and g*(*u*)*, which also lead to some new results. The cases in which h*(*v*) *and g*(*u*) *are constants are discarded.*

#### **3. Concluding Remarks**

A complete Noether symmetry classification of the generalized coupled Lane-Emden-Klein-Gordon-Fock system with central symmetry (4) was carried out. Several functional forms of the elements *h*(*v*) and *g*(*u*) that admit Noether point symmetries were derived. Thereafter, conservation laws connected to the Noether point symmetries were obtained. Conservation laws are of undisputed significance: from the mathematical point of view, they can be employed to detect integrability. Although conservation laws are also useful in the analysis of solutions of differential equations, we leave this analysis for future work. The problem under study was motivated by the recent work in [1]. However, the results derived therein were not complete, since the functions *h*(*v*) and *g*(*u*) were only considered to be power functions. In the present work, the functions *h*(*v*) and *g*(*u*) were considered arbitrary, and this resulted in some new and more general results.

**Author Contributions:** Conceptualization, B.M., Conceptualization, S.O.M., Conceptualization, A.R.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors thank the anonymous referees whose comments helped to improve the paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Finite Difference Approximation Method for a Space Fractional Convection–Diffusion Equation with Variable Coefficients**

#### **Eyaya Fekadie Anley 1,2 and Zhoushun Zheng 1,\***


Received: 22 February 2020; Accepted: 1 March 2020; Published: date

**Abstract:** Space fractional convection–diffusion equations are a generalized form of integer order convection–diffusion problems, describing superdiffusive and convective transport processes. In this article, we propose a finite difference approximation for a space fractional convection–diffusion model with space variable coefficients on a bounded domain in time and space. It is shown that the Crank–Nicolson difference scheme based on the right shifted Grünwald–Letnikov difference formula is unconditionally stable and, combined with an extrapolation-to-the-limit approach, of second order consistency in both the temporal and spatial terms. Numerical experiments are carried out to verify the efficiency of our theoretical analysis and confirm the order of convergence.

**Keywords:** Crank–Nicolson scheme; Shifted Grünwald–Letnikov approximation; space fractional convection-diffusion model; variable coefficients; stability analysis

**MSC:** 26A33; 35R11; 65L20

#### **1. Introduction**

Fractional differential equations (FDEs) have attracted the attention of many researchers and scientists due to their importance in different fields of study such as viscoelasticity, fluid mechanics, physics, biology, engineering, and flows in porous media (see [1–6] and the references cited therein). As different experiments and implementations have shown, fractional space derivatives can be used to model anomalous diffusion, in which a particle spreads at a rate inconsistent with the classical Brownian motion model, in both time and space. When the second order derivative in a diffusion equation is replaced by a fractional order derivative, it acts to enhance the process, which we call super-diffusion [7–12]. Laboratory experiments and field-scale tracer dispersion breakthrough curves (BTCs) exhibit early time arrivals that are not captured by integer order derivatives, and these non-Fickian phenomena can be modeled by fractional order convection–diffusion and dispersion equations (FCDEs), as explained in [13]. Given this range of applications, there is significant interest in constructing numerical schemes to solve the well known space fractional convection–diffusion model with space variable coefficients. In most cases, fractional order differential problems have no exact solution, so various iterative and numerical approximations [3,9,14] must be developed. In general, these kinds of approaches have become important in finding approximate solutions of fractional differential equations, so extensive numerical methods have been developed for space fractional convection–diffusion equations, such as the spectral method [15], finite volume method [16,17], finite difference method [2,9,14,18–26], finite element method [27–30] and collocation method [31,32].

When the discretization of the domain (which depends on the geometry) is not complex, finite difference approximations are easier and faster than other methods for obtaining numerical solutions (see [16,33] for further details). In [34], the author used an unconditionally stable difference method for time–space fractional convection–diffusion problems with space variable coefficients, with first order convergence both in time and space. The Crank–Nicolson finite difference method for one-sided space fractional diffusion equations, using an extrapolation method to obtain second order convergence, was studied in [23]. In [9], explicit and implicit finite difference methods are discussed for a one-sided space fractional convection–diffusion equation, with first order convergence in both time and space. A first-order implicit finite difference discretization method for a two-sided space fractional diffusion equation (SFDE) is applied in [10]. Recently, an unconditionally stable second order accurate difference method for a two-sided time–space fractional convection–diffusion equation was constructed in [35] using the weighted and shifted Grünwald–Letnikov difference approximation. The weighted and shifted Grünwald–Letnikov difference approximation is, however, not suitable for obtaining second order accuracy in space for a one-sided Riemann–Liouville fractional derivative. To deal with this issue, it is important to develop a numerical scheme tailored to one-sided space fractional convection–diffusion problems. Thus, the main focus of our study is to obtain temporal and spatial second order convergence estimates for one-sided space fractional convection–diffusion equations, based on a stable finite difference method and a spatial extrapolation-to-the-limit approach. The scheme is built on the Crank–Nicolson method with the shifted Grünwald–Letnikov difference approximation, and the algorithm is examined both theoretically and experimentally.

Let us consider the space fractional convection–diffusion equation with variable coefficients:

$$\frac{\partial u(x,t)}{\partial t} + c(x)\frac{\partial u(x,t)}{\partial x} = d(x)\frac{\partial^{\alpha} u(x,t)}{\partial x^{\alpha}} + p(x,t), \ x \in (L, R), \ t \in (0, T],\ \alpha \in (1, 2]; \tag{1}$$

with the given initial condition:

$$u(x,0) = g(x), \ L \le x \le R,$$

and homogeneous Dirichlet boundary conditions:

$$
u(L, t) = 0, \ u(R, t) = 0, \ 0 \le t \le T,
$$

where *c*(*x*), *d*(*x*) and *g*(*x*) are continuous functions on [*L*, *R*] and *p*(*x*, *t*) is a continuous function on [*L*, *R*] × [0, *T*]. Here *u*(*x*, *t*) is the concentration, *d*(*x*) > 0 is the variable diffusion coefficient, *c*(*x*) > 0 is the variable fluid velocity, which means the system evolves in space due to a velocity field, and *p*(*x*, *t*) is a sink term; the fluid transport is from left to right. For the integer order case (*α* = 2), Equation (1) reduces to the classical convection–diffusion equation (CDE). In this study, we consider only the fractional derivative case, which has the physical interpretation described in [36] and involves only a left-sided fractional order derivative. We assume that this one-dimensional space fractional convection–diffusion problem has a sufficiently smooth, unique solution.
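Before turning to the Crank–Nicolson scheme, it may help to see Equation (1) discretized in the simplest possible way. The sketch below is our own illustration, not the scheme analyzed in this paper: it uses explicit Euler time stepping, upwind differencing for the convection term, and the right-shifted Grünwald–Letnikov formula for the fractional term, with hypothetical sample data c(x) = d(x) = 1, p(x, t) = 0 and g(x) = sin(πx) on [0, 1].

```python
# A deliberately simple explicit scheme for Equation (1) (our sketch,
# with sample constant coefficients; NOT the Crank-Nicolson scheme of
# this paper). Homogeneous Dirichlet boundary conditions are imposed.
from math import sin, pi

alpha = 1.8
T_end, N = 0.05, 64
h = 1.0 / N                              # mesh width on [0, 1]
dt = h**alpha / 4.0                      # small step, inside the explicit
steps = int(T_end / dt)                  # stability limit for this d, alpha

# Grunwald-Letnikov weights via the recursion (7)
w = [1.0]
for k in range(1, N + 2):
    w.append((1.0 - (alpha + 1.0) / k) * w[-1])

u = [sin(pi * m * h) for m in range(N + 1)]
u[0] = u[N] = 0.0                        # u(L, t) = u(R, t) = 0
u_max0 = max(u)

for _ in range(steps):
    new = u[:]
    for m in range(1, N):
        # right-shifted GL sum for the left-sided fractional derivative
        frac = sum(w[k] * u[m - k + 1] for k in range(0, m + 2)) / h**alpha
        conv = (u[m] - u[m - 1]) / h     # upwind, since c(x) = 1 > 0
        new[m] = u[m] + dt * (frac - conv)
    u = new

assert all(abs(val) < 2.0 for val in u)  # the explicit scheme stays bounded
assert max(u) < u_max0                   # diffusion and outflow damp the peak
```

The combination of fractional diffusion and left-to-right transport visibly damps and skews the initial profile; the paper's Crank–Nicolson scheme removes the severe time-step restriction this explicit sketch needs.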

The structure of this paper is arranged as follows. In Section 2, we introduce some preliminary remarks, lemmas and definitions, and we present the formulation of the new Crank–Nicolson scheme with the right shifted Grünwald–Letnikov difference formula in Section 3. In Section 4, we establish unconditional stability using the *Gerschgorin* theorem and analyze the convergence order of the scheme. In Section 5, numerical tests are implemented to show the relevance of our theoretical study, and conclusions are given in Section 6.

#### **2. Preliminary Remarks**

**Definition 1.** *The Riemann–Liouville fractional derivative operator D<sup>α</sup>* <sup>∗</sup> *of order α is defined as:*

$$\left(D\_{\ast}^{\alpha}u\right)(x) = \frac{1}{\Gamma(r-\alpha)}\frac{d^{r}}{dx^{r}}\int\_{L}^{x}\frac{u(t)}{(x-t)^{\alpha-r+1}}\,dt, \ \ \alpha>0\tag{2}$$

*where r* − 1 < *α* < *r*, *r* ∈ *N*, *t* > 0*.*

**Definition 2.** *The left-sided and right-sided fractional order derivatives of order α, respectively, are the Riemann–Liouville fractional derivatives given by:*

$$\begin{array}{rcl}(D\_{+}^{\alpha}u)(x)&=&\frac{1}{\Gamma(r-\alpha)}\frac{d^{r}}{dx^{r}}\int\_{L}^{x}(x-s)^{r-\alpha-1}u(s)\,ds,\\ (D\_{-}^{\alpha}u)(x)&=&\frac{(-1)^{r}}{\Gamma(r-\alpha)}\frac{d^{r}}{dx^{r}}\int\_{x}^{R}(s-x)^{r-\alpha-1}u(s)\,ds,\end{array} \tag{3}$$

*for r* − 1 < *α* < *r*, *x* ∈ ℝ*.*

**Definition 3** ([3])**.** *Let u be given on* ℝ*. The standard Grünwald–Letnikov estimate of positive order α, for* 1 < *α* ≤ 2*, is defined by the formula,*

$$D^{\alpha}u(x,t) \approx \frac{1}{h^{\alpha}} \sum\_{k=0}^{N\_x} \omega\_k^{(\alpha)} u(x - kh, t),\tag{4}$$

*we also define the Grünwald–Letnikov difference operator as:*

$$h^{-\alpha}(\Delta\_h^{\alpha} u)(x, t) \approx \sum\_{k=0}^{N\_x} \omega\_k^{(\alpha)} u(x - kh, t), \ h > 0, \ x \in \mathbb{R},\tag{5}$$

*where*

$$
\omega\_k^{(\alpha)} = \frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!},\tag{6}
$$

*is called the Grünwald–Letnikov coefficient; the ω<sup>(α)</sup><sub>k</sub> are the Taylor coefficients of the generating function ω*(*z*) = (1 − *z*)*<sup>α</sup>. The coefficients can be computed by the following recursive relation:*

$$
\omega\_0^{(\alpha)} = 1, \quad \omega\_k^{(\alpha)} = \left(1 - \frac{\alpha+1}{k}\right)\omega\_{k-1}^{(\alpha)}, \quad k = 1, 2, \dots \tag{7}
$$

**Lemma 1** ([37])**.** *Assume that* 1 < *α* ≤ 2*. Then the Grünwald–Letnikov coefficients ω<sup>(α)</sup><sub>k</sub> satisfy:*

$$\begin{cases} \omega\_0^{(\alpha)} = 1, \ \omega\_1^{(\alpha)} = -\alpha < 0, \ \omega\_2^{(\alpha)} = \frac{\alpha(\alpha-1)}{2} > 0, \\ 1 \ge \omega\_2^{(\alpha)} \ge \omega\_3^{(\alpha)} \ge \dots \ge 0, \\ \sum\_{k=0}^{\infty} \omega\_k^{(\alpha)} = 0, \ \sum\_{k=0}^{N\_x} \omega\_k^{(\alpha)} < 0, \ N\_x \ge 1. \end{cases} \tag{8}$$
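The recursion (7) and the properties in Lemma 1 are easy to check numerically. The sketch below is our own addition; the order α = 1.5 and the truncation index are sample choices.

```python
# Numerical check (our addition) of the recursion (7), Lemma 1, and the
# generating function of Definition 3, for the sample order alpha = 1.5.
alpha = 1.5   # any order in (1, 2]; an assumption of this sketch
K = 2000      # truncation index for the numeric checks

w = [1.0]
for k in range(1, K + 1):
    w.append((1.0 - (alpha + 1.0) / k) * w[-1])   # recursion (7)

# w_0 = 1, w_1 = -alpha < 0, w_2 = alpha(alpha - 1)/2 > 0
assert w[0] == 1.0
assert abs(w[1] + alpha) < 1e-14
assert abs(w[2] - alpha * (alpha - 1.0) / 2.0) < 1e-14

# 1 >= w_2 >= w_3 >= ... >= 0
assert 1.0 >= w[2]
assert all(w[k] >= w[k + 1] >= 0.0 for k in range(2, K))

# partial sums are negative, and the full sum tends to zero
s = w[0]
for k in range(1, K + 1):
    s += w[k]
    assert s < 0.0
assert abs(s) < 1e-4

# generating function: sum_k w_k z^k = (1 - z)^alpha, tested at z = 1/2
z = 0.5
assert abs(sum(wk * z**k for k, wk in enumerate(w)) - (1.0 - z)**alpha) < 1e-12
```

All of the listed properties hold to floating-point accuracy, which is a useful sanity check before the weights are used in a scheme.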

The shifted Grünwald–Letnikov difference operator is suitable for our purpose because it allows us to estimate (*D<sup>α</sup>* <sup>∗</sup>*u*)(*x*), defined in Equation (2), numerically in an accurate way. According to [14], the right shifted Grünwald–Letnikov difference operator with *p* shifts for the *α<sup>th</sup>* order left R–L fractional derivative of *u*(*x*, *t*), *x* ∈ [*L*, *R*], at *x* = *x<sup>m</sup>* can be expressed as:

$$\left(D\_{\ast}^{\alpha} u\right)(x\_m, t) \approx \frac{1}{h^{\alpha}} \sum\_{k=0}^{\frac{x\_m - L}{h} + p} \omega\_{k}^{(\alpha)} u\left(x\_m - (k - p)h, t\right), \tag{9}$$

where

$$x\_m = L + mh, \quad h = \frac{R - L}{N\_x}, \quad m = 0, 1, 2, \dots, N\_x.$$
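As a concrete check of (9) — our own addition, not one of the paper's experiments — the shifted formula can be tested against the known Riemann–Liouville derivative *D<sup>α</sup>x*² = Γ(3)/Γ(3 − α) *x*<sup>2−α</sup> with *L* = 0. The error decays roughly like O(*h*), and one step of extrapolation to the limit removes the leading error term.

```python
# Shifted (p = 1) Grunwald-Letnikov approximation of the left R-L
# derivative of f(x) = x^2 at x = 1, lower limit L = 0 (our sketch).
from math import gamma

alpha, p, x = 1.5, 1, 1.0    # sample order and shift; assumptions

def f(y):
    return y * y if y >= 0.0 else 0.0

def gl_shifted(n):
    """Shifted GL approximation (9) at x = 1 with h = 1/n, m = n."""
    h = 1.0 / n
    w, total = 1.0, 0.0
    for k in range(0, n + p + 1):
        if k > 0:
            w *= 1.0 - (alpha + 1.0) / k          # recursion (7)
        total += w * f(x - (k - p) * h)
    return total / h**alpha

exact = gamma(3.0) / gamma(3.0 - alpha) * x**(2.0 - alpha)
e256 = abs(gl_shifted(256) - exact)
e512 = abs(gl_shifted(512) - exact)

assert e512 < e256 < 1e-2        # the error decays with h
assert 1.5 < e256 / e512 < 2.5   # roughly first order: error halves with h

# "extrapolation to the limit": eliminate the leading O(h) term
rich = 2.0 * gl_shifted(512) - gl_shifted(256)
assert abs(rich - exact) < e512
```

The extrapolated value is markedly more accurate than either plain approximation, which is the mechanism the paper exploits to reach second order accuracy in space.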

**Lemma 2** ([38,39])**.** *Let u* ∈ *C*<sup>2*n*</sup>(ℝ) *have a finite degree of smoothness, and suppose that* (*D<sup>α</sup>* <sup>+</sup>*u*)(*x*)*, approximated by h*<sup>−*α*</sup>(Δ*<sup>α</sup><sub>h</sub>u*)(*x*)*, possesses an asymptotic expansion in integer powers of the step-length h. Then the shifted operator, which admits an expansion in even powers of h, can be written in the form:*

$$\left(\Delta\_{h,p}^{\alpha} u\right)(x) = \sum\_{j=0}^{\infty} (-1)^j \binom{\alpha}{j} u\left(x + \frac{\alpha h}{2} - jh\right), \ h > 0. \tag{10}$$

**Lemma 3** ([39])**.** *Let u* ∈ *C<sup>n</sup>*<sup>+3</sup>(ℝ)*, with all derivatives of u up to the order n* + 4 *belonging to L*<sup>1</sup>(ℝ)*. Then the Fourier transform used for the Grünwald–Letnikov difference operator defined in Equation* (5) *is*

$$
\Phi(\mathbf{x}) = \int\_{\mathcal{R}} \phi(t) e^{i\mathbf{x}t} dt. \tag{11}
$$

**Theorem 1.** *Let u* ∈ *C*<sup>2*n*+3</sup>(ℝ) *with all derivatives of u up to order* 2*n* + 3 *belonging to L*<sup>1</sup>(ℝ)*. For p* ≥ 0 *define the shifted Grünwald–Letnikov operator:*

$$\left(\Delta_{h,p}^{\alpha}u\right)(x) = \sum_{k=0}^{\infty}\omega_k^{(\alpha)}\,u\left(x-(k-p)h\right)$$

*with $\omega_k^{(\alpha)} = (-1)^k\binom{\alpha}{k}$. Then, if $L=-\infty$ in Equation* (2)*, for computable coefficients $a_{2k}$, each independent of $h$, $u$ and $x$, we have*

$$h^{-\alpha}\left(\Delta_{h,p}^{\alpha}u\right)(x) = \left(D_+^{\alpha}u\right)(x) + \sum_{k=1}^{n-1}a_{2k}\left(D_+^{\alpha+2k}u\right)(x)\,h^{2k} + O(h^{2n}),$$

*uniformly in $x\in\mathbb{R}$.*

**Proof of Theorem 1.** We closely follow the results described in [9,10] for the unshifted Grünwald–Letnikov formula and in [23] for the shifted one. By the Riemann–Lebesgue lemma, the assumptions imposed on $u$ yield a real positive constant $C_1$ such that

$$|\hat{u}(t)| \le C_1\left(1+|t|\right)^{-2n-3}.\tag{12}$$

From Lemma 3, for all $t\in\mathbb{R}$, the Fourier transform of $u(x)$ used in the Grünwald–Letnikov approximation is

$$\hat{u}(t) = \int_{\mathbb{R}} u(x)e^{ixt}\,dx.$$

From the definition of the Fourier transform, we observe that for a constant $a\in\mathbb{R}$:

$$\mathcal{F}\left[u(x-a)\right](t) = e^{iat}\hat{u}(t).$$

The function

$$\omega_{\alpha,p}(z) = \left(\frac{1-e^{-z}}{z}\right)^{\alpha}e^{pz}$$

has the Taylor expansion

$$
\omega_{\alpha,p}(z) = \sum_{k=0}^{\infty} a_{2k}z^{2k}, \tag{13}
$$

where the $a_{2k}$ are the Taylor coefficients of $\omega_{\alpha,p}$. The series converges absolutely for $|z|\le 1$ since the function $\omega_{\alpha,p}(z)$ is bounded on $\mathbb{R}$, and the shifted Grünwald difference approximation satisfies $\left(\Delta_{h,p}^{\alpha}u\right)(x)\in L^1(\mathbb{R})$. Thus, we have
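For orientation (a short derivation not contained in the original), the first coefficients can be read off from the exponential form of $\omega_{\alpha,p}$:

$$\omega_{\alpha,p}(z) = \exp\left(pz + \alpha\log\frac{1-e^{-z}}{z}\right) = 1 + \left(p-\frac{\alpha}{2}\right)z + \left(\frac{1}{2}\left(p-\frac{\alpha}{2}\right)^2 + \frac{\alpha}{24}\right)z^2 + \cdots,$$

so for the shift $p=\alpha/2$ the odd powers of $z$ drop out (the remaining logarithmic series $-z/2+z^2/24-\cdots$ combines with $\alpha z/2$ into an even function), which is the even-power form used in Equation (13) and the reason Remark 1 singles out $p=\alpha/2$.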

$$\begin{split} \mathcal{F}\left(h^{-\alpha}\Delta_{h,p}^{\alpha}u\right)(t) &= h^{-\alpha}e^{-itph}\sum_{k=0}^{\infty}(-1)^k\binom{\alpha}{k}e^{ikth}\,\hat{u}(t) \\ &= h^{-\alpha}e^{-itph}\left(1-e^{ith}\right)^{\alpha}\hat{u}(t) \\ &= (-it)^{\alpha}\left(\frac{1-e^{ith}}{-ith}\right)^{\alpha}e^{-itph}\,\hat{u}(t) = (-it)^{\alpha}\,\omega_{\alpha,p}(-ith)\,\hat{u}(t). \end{split} \tag{14}$$

Since $\omega_{\alpha,p}(-ith)$ is analytic around the origin, we express it as the even-power expansion

$$
\omega_{\alpha,p}(z) = \sum_{k=0}^{\infty} a_{2k}z^{2k},
$$

which converges absolutely for all $|z|\le R$. Since $\omega_{\alpha,p}(z)$ is bounded on $\mathbb{R}$, there exists a real positive constant $C_2$ such that

$$\left|\omega_{\alpha,p}(-ix) - \sum_{k=0}^{n-1}a_{2k}(-ix)^{2k}\right| \le C_2|x|^{2n} \tag{15}$$

holds uniformly in $x\in\mathbb{R}$. For any value $|x|\le R$, we have

$$\left|\omega_{\alpha,p}(-ix) - \sum_{k=0}^{n-1}a_{2k}(-ix)^{2k}\right| = \left|\sum_{k=n}^{\infty}a_{2k}(-ix)^{2k}\right| \le |x|^{2n}\sum_{k=n}^{\infty}|a_{2k}|\,|x|^{2(k-n)} \le C_3|x|^{2n}, \tag{16}$$

which is bounded on $\mathbb{R}$. For the other case, $|x|>R$, we have

$$\left|\omega_{\alpha,p}(-ix)\right| = \left|\left(\frac{1-e^{ix}}{-ix}\right)^{\alpha}e^{ipx}\right| \le \frac{2^{\alpha}}{R^{\alpha}} \le C_4|x|^{2n}, \tag{17}$$

where $C_4 = \frac{2^{\alpha}}{R^{\alpha+2n}} < \infty$, and also

$$\left|\sum_{k=0}^{n-1}a_{2k}(-ix)^{2k}\right| \le |x|^{2n}\sum_{k=0}^{n-1}|a_{2k}|\,|x|^{2(k-n)} \le C_5|x|^{2n}, \tag{18}$$

with $C_5 = \sum_{k=0}^{n-1}|a_{2k}|R^{2k-2n} < \infty$. Now, we set

$$C_2 = \max\left\{\sum_{k=0}^{\infty}|a_{2k}|R^{2k-2n},\ \frac{2^{\alpha}}{R^{\alpha+2n}} + \sum_{k=0}^{n-1}|a_{2k}|R^{2k-2n}\right\}$$

since
$$\sum_{k=0}^{\infty}|a_{2k}|R^{2k-2n} = \sum_{k=0}^{n-1}|a_{2k}|R^{2k-2n} + \sum_{k=n}^{\infty}|a_{2k}|R^{2k-2n},$$
so that
$$C_2 \ge \frac{2^{\alpha}}{R^{\alpha+2n}} + \sum_{k=0}^{n-1}|a_{2k}|R^{2k-2n}.$$

This implies that Equation (15) holds for all $x\in\mathbb{R}$. From Equation (14), we can write

$$\mathcal{F}\left(h^{-\alpha}\Delta_{h,p}^{\alpha}u\right)(t) = \sum_{k=0}^{n-1}a_{2k}(-it)^{\alpha+2k}h^{2k}\,\hat{u}(t) + \hat{\phi}(t,h),$$

where

$$\hat{\phi}(t,h) = (-it)^{\alpha}\left(\omega_{\alpha,p}(-ith) - \sum_{k=0}^{n-1}a_{2k}(-ith)^{2k}\right)\hat{u}(t),$$

since

$$\mathcal{F}\left(D_+^{\alpha+2k}u\right)(t) = (-it)^{\alpha+2k}\,\hat{u}(t).$$

Therefore, we have

$$(-it)^{\alpha+2k}\,\hat{u}(t) \in L^1(\mathbb{R}).$$

Moreover, we see that

$$
\hat{\phi}(t,h) \in L^1(\mathbb{R}),
$$

and with the conditions imposed on $u$, we can say that $(1+|t|)^{2n+3}\,|\hat{u}(t)|$ is bounded on $\mathbb{R}$; hence $|t|^{\alpha+2n}\,|\hat{u}(t)| \le C_1(1+|t|)^{\alpha-3} \in L^1(\mathbb{R})$ for $1<\alpha\le 2$. This implies that

$$\left|\hat{\phi}(t,h)\right| \le Ch^{2n}\left(1+|t|\right)^{\alpha-3}$$

for $t\in\mathbb{R}$, with $C = C_1C_2$. Therefore, using the inverse Fourier transform, we have

$$h^{-\alpha}\left(\Delta_{h,p}^{\alpha}u\right)(x) = \left(D_+^{\alpha}u\right)(x) + \sum_{k=1}^{n-1}a_{2k}\left(D_+^{\alpha+2k}u\right)(x)\,h^{2k} + \phi(x,h),$$

where

$$|\phi(x,h)| = \left|\frac{1}{2\pi}\int_{\mathbb{R}}e^{-itx}\,\hat{\phi}(t,h)\,dt\right| \le C\int_{\mathbb{R}}\left|\hat{\phi}(t,h)\right|dt \le Ch^{2n}.$$

Finally, we have

$$h^{-\alpha}\left(\Delta_{h,p}^{\alpha}u\right)(x) = \left(D_+^{\alpha}u\right)(x) + \sum_{k=1}^{n-1}a_{2k}\left(D_+^{\alpha+2k}u\right)(x)\,h^{2k} + O(h^{2n}).\tag{19}$$

**Remark 1.** *From Equation* (10)*, it can be seen that for $p=\alpha/2$ the error attains its minimum and second-order convergence is achieved. Since the grid points $x_m-(k-p)h$ require an integer shift, we seek the non-negative integer $p$ that minimizes $|p-\alpha/2|$. It is numerically verified in [3] that for $0<\alpha\le 1$, $p=0$ is acceptable, while for $1<\alpha\le 2$, $p=1$ is optimal.*

**Remark 2.** *Theorem 1 is the basis of extrapolation to the limit. One can therefore apply it to the shifted Grünwald–Letnikov difference operator to obtain convergence of arbitrarily high order $h^k$, $k = 1,2,3,\dots,n$, such that*

$$\frac{(qh)^{-\alpha}\left(\Delta_{qh,p}^{\alpha}u\right)(x) - q\,h^{-\alpha}\left(\Delta_{h,p}^{\alpha}u\right)(x)}{1-q},\qquad 0<q<1$$

*(q fixed) converges to $(D_+^{\alpha}u)(x) + O(h^2)$.*
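Remark 2 can be exercised numerically. The sketch below is an illustrative construction (the test function $u(x)=x^2$ and the ratio $q=1/2$ are our choices, not from the paper): it combines the shifted approximations at steps $h$ and $qh$ and checks that the extrapolated value is markedly more accurate than either one.

```python
import math

def gl_weights(alpha, n):
    # w_k = (-1)^k * C(alpha, k) via w_0 = 1, w_k = w_{k-1} * (1 - (alpha+1)/k)
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_approx(u, alpha, h, m, p):
    # h^{-alpha} * sum_k w_k * u(x_m - (k - p) h) with x_m = m*h (L = 0)
    w = gl_weights(alpha, m + p)
    return sum(w[k] * u((m - k + p) * h) for k in range(m + p + 1)) / h**alpha

alpha, p = 1.5, 1
u = lambda x: x * x
exact = math.gamma(3.0) / math.gamma(3.0 - alpha)  # D^alpha x^2 at x = 1

h = 1.0 / 200
A_h  = gl_approx(u, alpha, h, 200, p)        # step h
A_qh = gl_approx(u, alpha, h / 2, 400, p)    # step q*h with q = 1/2
# (A(qh) - q * A(h)) / (1 - q) cancels the leading first-order error term
A_extrap = (A_qh - 0.5 * A_h) / (1.0 - 0.5)

print(abs(A_h - exact), abs(A_extrap - exact))
```

The combination is exactly the quotient displayed above with $q=1/2$; with the integer shift $p=1$ the leading error is first order, so one extrapolation step already gains an order of accuracy.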

#### **3. Problem Formulation of the Scheme**

Consider the following one-dimensional space fractional convection–diffusion problem:

$$\begin{cases} \dfrac{\partial u(x,t)}{\partial t} = -c(x)\dfrac{\partial u(x,t)}{\partial x} + d(x)\dfrac{\partial^{\alpha}u(x,t)}{\partial x^{\alpha}} + p(x,t), & (x,t)\in(L,R)\times(0,T] \\ u(x,0) = g(x), & x\in[L,R] \\ u(L,t) = 0,\ u(R,t) = 0, & t\in[0,T] \end{cases} \tag{20}$$

which we solve with the shifted Grünwald–Letnikov difference method for $1<\alpha\le 2$ on a finite domain $L<x<R$.

#### *Crank–Nicolson Scheme for Time and Shifted Grünwald Difference Scheme for Space Discretization*

We partition the finite interval $[L,R]$ with a uniform mesh of space step $h = (R-L)/N_x$ and time step $\tau = T/N_t$, where $N_x, N_t$ are positive integers; the grid points are denoted by $x_m = L + mh$ and $t_n = n\tau$ for $0\le m\le N_x$, $0\le n\le N_t$. Set $t_{n+1/2} = (t_{n+1}+t_n)/2$ for $0\le n\le N_t-1$.

We use the following notations:

$$u_m^n = u(x_m,t_n),\quad p_m^{n+1/2} = p(x_m,t_{n+1/2}),\quad \delta_t u_m^n = \frac{u_m^{n+1}-u_m^n}{\tau},\quad c_m = c(x_m),\quad d_m = d(x_m).$$
 
Applying the C-N technique for the time discretization of Equation (20) gives

$$\begin{split} \delta_t u_m^n &= -\frac{c_m}{4h}\left(u_{m+1}^{n+1}-u_{m-1}^{n+1}+u_{m+1}^{n}-u_{m-1}^{n}\right) \\ &\quad + \frac{d_m}{2h^{\alpha}}\sum_{z=0}^{1}\sum_{k=0}^{m+1}\omega_k^{(\alpha)}u_{m-k+1}^{n+z} + p_m^{n+1/2} + O(\tau^2). \end{split} \tag{21}$$

In the space discretization we use the central finite difference method for the convection term and the shifted Grünwald–Letnikov operator, with the spatial extrapolation-to-the-limit approach, for the space fractional derivative.

The full discretization of the scheme reads:

$$\begin{array}{rcl} \dfrac{u_m^{n+1}-u_m^n}{\tau} &=& -\dfrac{c_m\left(u_{m+1}^{n}-u_{m-1}^{n}+u_{m+1}^{n+1}-u_{m-1}^{n+1}\right)}{4h} \\[2mm] && +\ \dfrac{d_m}{2h^{\alpha}}\left(\displaystyle\sum_{z=0}^{1}\sum_{k=0}^{m+1}\omega_k^{(\alpha)}u_{m-k+1}^{n+z}\right) + \dfrac{p_m^n+p_m^{n+1}}{2}. \end{array} \tag{22}$$

Multiplying Equation (22) by $\tau$, we have

$$\begin{array}{rcl} u_m^{n+1}-u_m^n &=& -\dfrac{c_m\tau}{4h}\left(u_{m+1}^{n}-u_{m-1}^{n}+u_{m+1}^{n+1}-u_{m-1}^{n+1}\right) \\[2mm] && +\ \dfrac{d_m\tau}{2h^{\alpha}}\displaystyle\sum_{z=0}^{1}\sum_{k=0}^{m+1}\omega_k^{(\alpha)}u_{m-k+1}^{n+z} + \tau p_m^{n+1/2}. \end{array} \tag{23}$$

The above equation is used to advance the values of $u(x,t)$ to time level $n+1$, all values of $u$ at time level $n$ being assumed known. For simplicity, set
$$\mu_m = \frac{c_m\tau}{2h},\qquad \eta_m = \frac{d_m\tau}{h^{\alpha}};$$
then we have

$$\begin{split} &\left(1-\frac{\eta_m}{2}\omega_1^{(\alpha)}\right)u_m^{n+1} + \left(-\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_2^{(\alpha)}\right)u_{m-1}^{n+1} \\ &\quad + \left(-\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_0^{(\alpha)}\right)u_{m+1}^{n+1} - \frac{\eta_m}{2}\left(\sum_{k=3}^{m+1}\omega_k^{(\alpha)}u_{m-k+1}^{n+1}\right) \\ =&\ \left(1+\frac{\eta_m}{2}\omega_1^{(\alpha)}\right)u_m^{n} + \left(\frac{\mu_m}{2}+\frac{\eta_m}{2}\omega_2^{(\alpha)}\right)u_{m-1}^{n} \\ &\quad + \left(\frac{\eta_m}{2}\omega_0^{(\alpha)}+\frac{\mu_m}{2}\right)u_{m+1}^{n} + \frac{\eta_m}{2}\left(\sum_{k=3}^{m+1}\omega_k^{(\alpha)}u_{m-k+1}^{n}\right) + \tau p_m^{n+\frac12}. \end{split} \tag{24}$$

Both the convection and the diffusion variable coefficients enter through $(N_x-1)\times(N_x-1)$ diagonal matrices, defined by
$$\mu = \frac{\tau}{2h}\,\mathrm{diag}\left(c_1,c_2,c_3,\dots,c_{N_x-1}\right),\qquad \eta = \frac{\tau}{h^{\alpha}}\,\mathrm{diag}\left(d_1,d_2,d_3,\dots,d_{N_x-1}\right).$$

This discretization, together with the Dirichlet boundary conditions, results in a linear system of equations whose coefficient matrix is the sum of a lower triangular and an upper-diagonal matrix. The above discretization can be rearranged to yield:

$$\begin{split} &\left(1-\frac{\eta_m}{2}\omega_1^{(\alpha)}\right)u_m^{n+1} + \left(-\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_2^{(\alpha)}\right)u_{m-1}^{n+1} \\ &\quad + \left(-\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_0^{(\alpha)}\right)u_{m+1}^{n+1} - \frac{\eta_m}{2}\left(\sum_{k=3}^{m+1}\omega_k^{(\alpha)}u_{m-k+1}^{n+1}\right) \\ =&\ \left(1+\frac{\eta_m}{2}\omega_1^{(\alpha)}\right)u_m^{n} + \left(\frac{\mu_m}{2}+\frac{\eta_m}{2}\omega_2^{(\alpha)}\right)u_{m-1}^{n} \\ &\quad + \left(\frac{\eta_m}{2}\omega_0^{(\alpha)}+\frac{\mu_m}{2}\right)u_{m+1}^{n} + \frac{\eta_m}{2}\left(\sum_{k=3}^{m+1}\omega_k^{(\alpha)}u_{m-k+1}^{n}\right) + \tau p_m^{n+\frac12}. \end{split} \tag{25}$$

Denoting by $U_m^n$ the numerical approximation of $u_m^n$, we can construct the C-N scheme for Equation (20):

$$\begin{split} &\left(1-\frac{\eta_m}{2}\omega_1^{(\alpha)}\right)U_m^{n+1} + \left(-\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_2^{(\alpha)}\right)U_{m-1}^{n+1} \\ &\quad + \left(-\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_0^{(\alpha)}\right)U_{m+1}^{n+1} - \frac{\eta_m}{2}\left(\sum_{k=3}^{m+1}\omega_k^{(\alpha)}U_{m-k+1}^{n+1}\right) \\ =&\ \left(1+\frac{\eta_m}{2}\omega_1^{(\alpha)}\right)U_m^{n} + \left(\frac{\mu_m}{2}+\frac{\eta_m}{2}\omega_2^{(\alpha)}\right)U_{m-1}^{n} \\ &\quad + \left(\frac{\eta_m}{2}\omega_0^{(\alpha)}+\frac{\mu_m}{2}\right)U_{m+1}^{n} + \frac{\eta_m}{2}\left(\sum_{k=3}^{m+1}\omega_k^{(\alpha)}U_{m-k+1}^{n}\right) + \tau p_m^{n+\frac12}. \end{split} \tag{26}$$

$I$ is the $(N_x-1)\times(N_x-1)$ identity matrix and $A = (A_{m,n})$ is the coefficient matrix. Its entries, for $m = 1,2,3,\dots,N_x-1$ and $n = 1,2,\dots,N_x-1$, are given by:

$$A_{m,n} = \begin{cases} 0, & n\ge m+2 \\ -\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_0^{(\alpha)}, & n = m+1 \\ 1-\frac{\eta_m}{2}\omega_1^{(\alpha)}, & n = m \\ -\frac{\eta_m}{2}\omega_2^{(\alpha)}-\frac{\mu_m}{2}, & n = m-1 \\ -\frac{\eta_m}{2}\omega_{m-n+1}^{(\alpha)}, & n < m-1. \end{cases} \tag{27}$$
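The entries of Equation (27) can be assembled directly. The sketch below (with hypothetical constant coefficients $c(x)=1$, $d(x)=1$ and illustrative values of $\tau$ and $N_x$, chosen only to inspect the structure) builds $A$ row by row and checks the strict diagonal dominance asserted in Theorem 2:

```python
def gl_weights(alpha, n):
    # w_k = (-1)^k * C(alpha, k)
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def build_A(alpha, c, d, Nx, tau, L=0.0, R=1.0):
    # Coefficient matrix of Equation (27), with
    # mu_m = c(x_m) * tau / (2h) and eta_m = d(x_m) * tau / h^alpha.
    h = (R - L) / Nx
    w = gl_weights(alpha, Nx + 1)
    A = [[0.0] * (Nx - 1) for _ in range(Nx - 1)]
    for i in range(Nx - 1):
        m = i + 1
        mu = c(L + m * h) * tau / (2.0 * h)
        eta = d(L + m * h) * tau / h**alpha
        for j in range(Nx - 1):
            n = j + 1
            if n == m + 1:
                A[i][j] = -mu / 2 - (eta / 2) * w[0]
            elif n == m:
                A[i][j] = 1.0 - (eta / 2) * w[1]
            elif n == m - 1:
                A[i][j] = -(eta / 2) * w[2] - mu / 2
            elif n < m - 1:
                A[i][j] = -(eta / 2) * w[m - n + 1]
    return A

# Hypothetical data, purely to inspect the structure of A
alpha, Nx, tau = 1.5, 8, 0.01
A = build_A(alpha, lambda x: 1.0, lambda x: 1.0, Nx, tau)
# Strict diagonal dominance (Equation (29)), checked row by row:
dominant = all(A[i][i] > sum(abs(A[i][j]) for j in range(Nx - 1) if j != i)
               for i in range(Nx - 1))
print(dominant)  # True
```

Note that the diagonal entry is $1-\frac{\eta_m}{2}\omega_1^{(\alpha)} = 1+\frac{\eta_m}{2}\alpha > 1$, since $\omega_1^{(\alpha)} = -\alpha$.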

The finite difference scheme (24)–(26) defines a linear system of equations:

$$\begin{array}{rcl} (I+A)U^{n+1} &=& (I-A)U^{n} + \tau p_m^{n+\frac12} \\ U^{n+1} &=& \left[U_1^{n+1},U_2^{n+1},\dots,U_{N_x-1}^{n+1}\right]^{\top} \\ \tau p_m^{n+\frac12} &=& \left[0,\ \tau p_1^{n+\frac12},\ \tau p_2^{n+\frac12},\dots,\ \tau p_{N_x-1}^{n+\frac12}+\left(\frac{\eta_{N_x-1}}{2}+\frac{\mu_{N_x-1}}{2}\right),\ 0\right]^{\top}. \end{array} \tag{28}$$

**Theorem 2.** *Suppose* $1<\alpha\le 2$ *and let $A$ be the coefficient matrix defined in Equations* (24)*–*(27)*. Then $A$ is strictly diagonally dominant:*

$$A_{m,m} > \sum_{n=0,\,n\neq m}^{N_x-1}\left|A_{m,n}\right|,\qquad m = 1,2,3,\dots,N_x-1. \tag{29}$$

**Proof of Theorem 2.** As we have seen from the coefficient matrix defined in Equation (27),

$A_{m,m+1} = -\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_0^{(\alpha)} = -\frac{\mu_m}{2}-\frac{\eta_m}{2} < 0$ and $A_{m,m-1} = -\frac{\eta_m}{2}\omega_2^{(\alpha)}-\frac{\mu_m}{2} = -\frac{\eta_m}{2}\left(\frac{\alpha^2-\alpha}{2}\right)-\frac{\mu_m}{2}$; but from Lemma 1, $\frac{\alpha^2-\alpha}{2} > 0$ for $1<\alpha\le 2$, so $A_{m,m-1} < 0$.

When $n < m-1$, we have $A_{m,n} = -\frac{\eta_m}{2}\omega_{m-n+1}^{(\alpha)} < 0$, and when $n = m$, $A_{m,m} = 1-\frac{\eta_m}{2}\omega_1^{(\alpha)} = 1+\frac{\eta_m}{2}\alpha > 0$. This implies that $\sum_{n=0,\,n\neq m}^{N_x-1}|A_{m,n}| < A_{m,m}$.

Therefore, the coefficient matrix is strictly diagonally dominant.

#### **4. Theoretical Analysis of Finite Difference Scheme**

In general, for analyzing convergence and stability we use the following setting. Let $\chi_h = \left\{\nu : \nu = \{\nu_m\}_{m=0}^{N_x},\ x_m = mh,\ \nu_0 = \nu_{N_x} = 0\right\}$ be the space of grid functions. For any $\nu = \{\nu_m\}\in\chi_h$, we define the point-wise maximum norm as

$$||\nu||_{\infty} = \max_{1\le m\le N_x}|\nu_m|, \tag{30}$$

and the discrete *L*2-norm

$$||\nu|| = \sqrt{h \sum\_{m=1}^{N\_x - 1} \nu\_m^2}. \tag{31}$$

#### *4.1. Boundedness of the Fractional Scheme*

The classical Crank–Nicolson scheme combines the stability of an implicit finite difference method with an accuracy that produces second-order convergence in both space and time.

**Theorem 3.** *The Crank–Nicolson scheme for solving the space fractional convection–diffusion equation given by the following problem:*

$$\frac{\partial u(\mathbf{x},t)}{\partial t} + c(\mathbf{x})\frac{\partial u(\mathbf{x},t)}{\partial \mathbf{x}} = d(\mathbf{x})\frac{\partial^a u(\mathbf{x},t)}{\partial \mathbf{x}^a} + p(\mathbf{x},t). \tag{32}$$

*based on the shifted Grünwald–Letnikov difference approximation, is bounded for* $1<\alpha\le 2$*.*

**Proof of Theorem 3.** Consider the C-N scheme for the space-fractional convection–diffusion problem with $1<\alpha\le 2$:

$$\begin{split} \frac{u_m^{n+1}-u_m^n}{\tau} &= -\frac{c_m\left(u_m^{n}-u_{m-1}^{n}+u_m^{n+1}-u_{m-1}^{n+1}\right)}{2h} \\ &\quad + \frac{d_m}{2h^{\alpha}}\left(\sum_{j=0}^{1}\sum_{k=0}^{m+1}\omega_k^{(\alpha)}u_{m-k+1}^{n+j}\right) + \frac{p_m^n+p_m^{n+1}}{2}. \end{split} \tag{33}$$

Here we establish the convergence and boundedness of the scheme, for a sufficiently small time step, in terms of a Lax–Richtmyer stability analysis that uses a weaker bound (see [40]). Our matrix $A$ has eigenvalues $\lambda$ with positive real parts, and we have shown that it is strictly diagonally dominant. These eigenvalues lie in disks centered at the diagonal entries:

$$A_{m,m} = 1-\frac{\eta_m}{2}\omega_1^{(\alpha)} = 1+\alpha\frac{\eta_m}{2},$$

with $\mu_m = \frac{c_m\tau}{2h}$, $\eta_m = \frac{\tau d_m}{h^{\alpha}}$. From the *Gerschgorin* theorem in [41], the radius can be expressed as

$$\begin{aligned} \left\|\sum_{n=0,\,n\neq m}^{N_x}A_{m,n}\right\|_2^2 &= \left\|\left(-\frac{\eta_m}{2}-\frac{\mu_m}{2}\right)\sum_{n=0}^{m+1}\omega_{m-n+1}^{(\alpha)}\right\|^2 \\ &\le \left\|-\frac{\eta_m}{2}-\frac{\mu_m}{2}\right\|^2\left\|\sum_{n=0}^{m+1}\omega_{m-n+1}^{(\alpha)}\right\|^2. \end{aligned}$$

Since the Grünwald coefficients satisfy $\omega_{m-n+1}^{(\alpha)} \le \left|\omega_1^{(\alpha)}\right|$ with $\omega_1^{(\alpha)} = -\alpha$, we have that:

$$\begin{aligned} \left\|\sum_{n=0,\,n\neq m}^{N_x}A_{m,n}\right\|_2^2 &\le \left\|A_{m,m}\right\|_2^2 \\ &\le \left\|-\frac{\eta_m}{2}-\frac{\mu_m}{2}\right\|_2^2\left\|\omega_1^{(\alpha)}\right\|_2^2 \le \left\|1+\frac{\eta_m}{2}\alpha\right\|_2^2. \end{aligned}$$

For a bounded ratio of the time step $\tau$ and the space step $h$, with $n\tau\le T$, we have

$$\left\|\left(A_{m,m}\right)^n\right\|_2 \le \left(1+\frac{\eta_m}{2}\alpha\right)^{n/2}.$$

From Parseval's theorem [40],

$$\left\|\left(A_{m,m}\right)^n\right\|_2 \le \left(1+\frac{\eta_m}{2}\alpha\right)^{n/2} \le e^{\alpha T/2},$$

which shows that the scheme is bounded.

#### *4.2. Stability Analysis*

**Theorem 4.** *Let $U_m^n$ be the numerical approximation of the exact solution $u_m^n$; then the C-N finite difference scheme* (28) *is unconditionally stable.*

**Proof of Theorem 4.** The matrix form of the difference approximation for problem (20) can be written, as described above, as

$$(I+A)U^{n+1} = (I-A)U^{n} + \tau p_m^{n+1/2}. \tag{34}$$

Let $e^n = \left(e_1^n,e_2^n,e_3^n,\dots,e_{N_x-1}^n\right)$, and consider the relation between the error $e^{n+1}$ in $U^{n+1}$ and the error $e^n$ in $U^n$, which is given by the linear system

$$e^{n+1} = (I + A)^{-1}(I - A)e^n. \tag{35}$$

First of all, we must show that the (possibly complex) eigenvalues of the coefficient matrix $A$ have positive real parts. We have $\omega_1^{(\alpha)} = -\alpha$ for the fractional order $1<\alpha<2$, and $\omega_k^{(\alpha)} > 0$ for $k\neq 1$. In addition, $-\omega_1^{(\alpha)} = \alpha \ge \sum_{k=0,\,k\neq 1}^{N}\omega_k^{(\alpha)}$ for $N>1$. As stated in the *Gerschgorin* theorem ([41], pp. 136–139), the eigenvalues of the matrix $A$ lie inside disks centered at the diagonal entries

$$A_{m,m} = 1-\frac{\eta_m}{2}\omega_1^{(\alpha)} = 1+\frac{\eta_m}{2}\alpha > 0,$$

with radius

$$r_m = \sum_{n=0,\,n\neq m}^{N_x}\left|A_{m,n}\right| = \frac{\eta_m}{2}\sum_{n=0}^{m+1}\omega_{m-n+1}^{(\alpha)} < 1+\frac{\eta_m}{2}.$$

These *Gerschgorin* disks belong to the right half of the complex plane, so every eigenvalue of the coefficient matrix $A$ has positive real part. Moreover, $A$ has an eigenvalue $\lambda$ if and only if $(I-A)$ has the eigenvalue $(1-\lambda)$, if and only if $(I+A)^{-1}(I-A)$ has the eigenvalue $\frac{1-\lambda}{1+\lambda}$. All eigenvalues of the matrix $(I+A)$ have modulus larger than unity, which implies that it is invertible; and since the real part of $\lambda$ is positive, we conclude that $\left|\frac{1-\lambda}{1+\lambda}\right| < 1$.

Thus, the spectral radius of the iteration matrix $(I+A)^{-1}(I-A)$ is strictly less than unity, which implies that the difference scheme is unconditionally stable.
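This spectral-radius argument can be checked numerically. A minimal pure-Python sketch follows, with hypothetical constant coefficients $c=0$, $d=1$ on $[0,1]$; to bridge the two conventions in the text, we write $B$ for the part of Equation (27) without the unit diagonal, so that the iteration matrix of (28) reads $(I+B)^{-1}(I-B)$, and we verify that its high powers decay:

```python
def gl_weights(alpha, n):
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def solve(A, b):
    # Gaussian elimination with partial pivoting (illustrative, O(n^3))
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for cc in range(col, n + 1):
                M[r][cc] -= f * M[col][cc]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][cc] * x[cc] for cc in range(r + 1, n))) / M[r][r]
    return x

alpha, Nx, tau = 1.5, 8, 0.01          # illustrative sizes
h = 1.0 / Nx
eta = tau / h**alpha                   # d = 1, c = 0
w = gl_weights(alpha, Nx + 1)
n_int = Nx - 1
# B[m][n] = -(eta/2) * w_{m-n+1} for the admissible band, zero elsewhere
B = [[-(eta / 2) * w[m - n + 1] if 0 <= m - n + 1 else 0.0
      for n in range(1, Nx)] for m in range(1, Nx)]
I = [[1.0 if i == j else 0.0 for j in range(n_int)] for i in range(n_int)]
IpB = [[I[i][j] + B[i][j] for j in range(n_int)] for i in range(n_int)]
ImB = [[I[i][j] - B[i][j] for j in range(n_int)] for i in range(n_int)]

# Iteration matrix M = (I + B)^{-1} (I - B), built column by column
cols = [solve(IpB, [ImB[i][j] for i in range(n_int)]) for j in range(n_int)]
Mit = [[cols[j][i] for j in range(n_int)] for i in range(n_int)]

# If the spectral radius is < 1, high powers of the iteration matrix shrink
P = Mit
for _ in range(6):                     # P = M^(2^6) = M^64 by repeated squaring
    P = [[sum(P[i][k] * P[k][j] for k in range(n_int)) for j in range(n_int)]
         for i in range(n_int)]
norm = max(sum(abs(v) for v in row) for row in P)
print(norm)  # should be well below 1
```

The infinity norm of $M^{64}$ falling below one is a practical (not rigorous) witness that the spectral radius is strictly less than unity.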

#### *4.3. Convergence Analysis*

We first give the truncation error of the C-N scheme. It is straightforward to conclude that:

$$\begin{split} \frac{u(x_m,t_{n+1})-u(x_m,t_n)}{\tau} &= \left(\frac{\partial u(x,t)}{\partial t}\right)_m^{n+1/2} + O(\tau^2), \\ \left(c(x)\frac{\partial u(x,t)}{\partial x} + d(x)\frac{\partial^{\alpha}u(x,t)}{\partial x^{\alpha}}\right)_m^{n+1/2} &= \frac12\left(c_m\frac{\partial u(x_m,t_{n+1})}{\partial x} + d_m\frac{\partial^{\alpha}u(x_m,t_{n+1})}{\partial x^{\alpha}}\right) \\ &\quad + \frac12\left(c_m\frac{\partial u(x_m,t_n)}{\partial x} + d_m\frac{\partial^{\alpha}u(x_m,t_n)}{\partial x^{\alpha}}\right) + O(\tau^2). \end{split} \tag{36}$$

$$
c(x_m)\frac{\partial u(x,t)}{\partial x} \approx c_m\,\frac{u(x_{m+1},t_{n+1}) - u(x_{m-1},t_{n+1})}{2h} + O(h^2). \tag{37}
$$

From the above extrapolation-to-the-limit theorem with $n=1$, we get

$$\frac{\partial^{\alpha}u(x,t)}{\partial x^{\alpha}} \approx \frac{1}{h^{\alpha}}\sum_{k=0}^{m+1}\omega_k^{(\alpha)}u_{m-k+1} + O(h^2). \tag{38}$$

Therefore, the local truncation error of (20) is given by $T_m^{n+1} = O(\tau^2 + \tau h)$.

**Theorem 5.** *Let $u_m^n$ be the exact solution of problem* (20) *and $U_m^n$ be the solution of the finite difference scheme* (26)*. Then for all* $1\le n\le N_t$ *we have the estimate*

$$||u_m^n - U_m^n||_{\infty} \le c(\tau^2 + h),$$

*where $\left\|u_m^n - U_m^n\right\|_{\infty} = \max_{1\le m\le N_x}\left|u_m^n - U_m^n\right|$, $c$ is a non-negative constant independent of $h$ and $\tau$, and $||\cdot||$ stands for the discrete $L^2$-norm.*

**Proof of Theorem 5.** Denote $e^n = u^n - U^n$ with $e^n = \left(e_1^n,e_2^n,\dots,e_{N_x-1}^n\right)$ and $e^0 = 0$. From Equations (26) and (27), if $n = 0$,

$$\begin{aligned} R_m^1 &= \left(-\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_0^{(\alpha)}\right)e_{m+1}^1 + \left(1+\frac{\eta_m}{2}\alpha\right)e_m^1 \\ &\quad + \left(-\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_2^{(\alpha)}\right)e_{m-1}^1 - \frac{\eta_m}{2}\sum_{k=3}^{N_x}\omega_k^{(\alpha)}e_{m-k+1}^1, \end{aligned}$$

if *n* > 0,

$$\begin{aligned} R_m^{n+1} &= \left(-\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_0^{(\alpha)}\right)e_{m+1}^{n+1} + \left(1+\frac{\eta_m}{2}\alpha\right)e_m^{n+1} \\ &\quad + \left(-\frac{\mu_m}{2}-\frac{\eta_m}{2}\omega_2^{(\alpha)}\right)e_{m-1}^{n+1} - \frac{\eta_m}{2}\sum_{k=3}^{N_x}\omega_k^{(\alpha)}e_{m-k+1}^{n+1}, \end{aligned}$$

where $\left|R_m^{n+1}\right| \le c(\tau^2 + h)$, $m = 1,2,\dots,N_x-1$, $n = 1,2,3,\dots,N_t-1$, and $c$ is a non-negative constant independent of $h$ and $\tau$.

We use mathematical induction to prove the theorem. Let $n = 1$ and choose $j$ such that $\left|e_j^1\right| = \max_{1\le m\le N_x-1}\left|e_m^1\right|$; then we have the following:

$$\begin{split} \left\|e^1\right\|_{\infty} = \left|e_j^1\right| &\le \left(-\frac{\mu_j}{2}-\frac{\eta_j}{2}\omega_0^{(\alpha)}\right)\left|e_{j+1}^1\right| + \left(1+\frac{\eta_j}{2}\alpha\right)\left|e_j^1\right| \\ &\quad + \left(-\frac{\mu_j}{2}-\frac{\eta_j}{2}\omega_2^{(\alpha)}\right)\left|e_{j-1}^1\right| - \frac{\eta_j}{2}\sum_{k=3}^{N_x}\omega_k^{(\alpha)}\left|e_{j-k+1}^1\right| \\ &\le \left|\left(-\frac{\mu_j}{2}-\frac{\eta_j}{2}\omega_0^{(\alpha)}\right)e_{j+1}^1 + \left(1+\frac{\eta_j}{2}\alpha\right)e_j^1 + \left(-\frac{\mu_j}{2}-\frac{\eta_j}{2}\omega_2^{(\alpha)}\right)e_{j-1}^1 - \frac{\eta_j}{2}\sum_{k=3}^{N_x}\omega_k^{(\alpha)}e_{j-k+1}^1\right| \\ &= \left|R_j^1\right| \le c(\tau^2+h). \end{split}$$

Suppose that $\left\|e^r\right\|_{\infty} \le c(\tau^2+h)$ holds for $n\le r$, and let $n = r+1$ with $\left|e_j^{r+1}\right| = \max_{1\le m\le N_x-1}\left|e_m^{r+1}\right|$; notice from Lemma 1 that $\sum_{k=0}^{N_x}\omega_k^{(\alpha)} < 0$, $m = 1,2,\dots,N_x$. Therefore,

$$\begin{split} \left\|e^{r+1}\right\|_{\infty} = \left|e_j^{r+1}\right| &\le \left(-\frac{\mu_j}{2}-\frac{\eta_j}{2}\omega_0^{(\alpha)}\right)\left|e_{j+1}^{r+1}\right| + \left(1+\frac{\eta_j}{2}\alpha\right)\left|e_j^{r+1}\right| \\ &\quad + \left(-\frac{\mu_j}{2}-\frac{\eta_j}{2}\omega_2^{(\alpha)}\right)\left|e_{j-1}^{r+1}\right| - \frac{\eta_j}{2}\sum_{k=3}^{N_x}\omega_k^{(\alpha)}\left|e_{j-k+1}^{r+1}\right| \\ &\le \left|\left(-\frac{\mu_j}{2}-\frac{\eta_j}{2}\omega_0^{(\alpha)}\right)e_{j+1}^{r+1} + \left(1+\frac{\eta_j}{2}\alpha\right)e_j^{r+1} + \left(-\frac{\mu_j}{2}-\frac{\eta_j}{2}\omega_2^{(\alpha)}\right)e_{j-1}^{r+1} - \frac{\eta_j}{2}\sum_{k=3}^{N_x}\omega_k^{(\alpha)}e_{j-k+1}^{r+1}\right| \\ &= \left|R_j^{r+1}\right| \le c(\tau^2+h), \end{split}$$

which completes the proof.

**Remark 3.** *For the classical convection–diffusion equation, the Crank–Nicolson scheme provides a stable finite difference method with second-order convergence in time and space. A study based on the C-N method combined with the spatial extrapolation-to-the-limit approach (see Theorem 1) is used here to obtain second order in both time and space for one-sided SFCDEs with space-variable coefficients.*

#### **5. Numerical Tests**

Problem test 1

1. Consider the space-fractional diffusion-type problem:

$$\frac{\partial u(x,t)}{\partial t} = d(x)\frac{\partial^{\alpha}u(x,t)}{\partial x^{\alpha}} + p(x,t)$$

with initial condition

$$u(x,0) = x^2 - x^3,\qquad 0\le x\le 1,$$

homogeneous Dirichlet boundary condition

$$u(0,t) = 0 = u(1,t)$$

with variable diffusion coefficient,

$$d(x) = \Gamma(1.2)x^{\alpha},$$

and source term

$$p(\mathbf{x}, t) = (6\mathbf{x}^3 - 3\mathbf{x}^2)e^{-t}$$

The exact solution is

$$u(\mathbf{x}, t) = (\mathbf{x}^2 - \mathbf{x}^3)e^{-t}$$

All numerical experiments are implemented using Theorem 1 and the C-N scheme on the space domain $0<x<1$ and the time domain $0<t<T$. Figure 1 shows the maximum error produced by the C-N scheme for a sufficiently large time domain, and in Figure 2 the numerical solution with $\alpha = 1.5$ is seen to be close to the exact solution. The maximum error and second-order convergence for the fractional diffusion and fractional convection–diffusion equations with variable coefficients are given in Tables 1–3.
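An experiment of this kind can be reproduced with a short script. The sketch below implements the C-N scheme (26) with the $p=1$ shift (without extrapolation) in pure Python; we take $\alpha = 1.8$, a value for which one can verify directly that the stated $d(x)$, $p(x,t)$ and exact solution satisfy the equation, and the grid sizes are illustrative choices:

```python
import math

def gl_weights(alpha, n):
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def solve(A, b):
    # Gaussian elimination with partial pivoting (illustrative)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for cc in range(col, n + 1):
                M[r][cc] -= f * M[col][cc]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][cc] * x[cc] for cc in range(r + 1, n))) / M[r][r]
    return x

alpha, Nx, Nt, T = 1.8, 32, 64, 1.0
h, tau = 1.0 / Nx, T / Nt
w = gl_weights(alpha, Nx + 1)
d = lambda x: math.gamma(1.2) * x**alpha
p = lambda x, t: (6 * x**3 - 3 * x**2) * math.exp(-t)
exact = lambda x, t: (x**2 - x**3) * math.exp(-t)

n_int = Nx - 1
# Shifted (p = 1) Gruenwald matrix: row m, column n carries w_{m-n+1}
G = [[w[m - n + 1] if 0 <= m - n + 1 else 0.0
      for n in range(1, Nx)] for m in range(1, Nx)]
eta = [d((i + 1) * h) * tau / h**alpha for i in range(n_int)]
# C-N: (I - (eta/2) G) U^{n+1} = (I + (eta/2) G) U^n + tau * p(x, t_{n+1/2})
LHS = [[(1.0 if i == j else 0.0) - 0.5 * eta[i] * G[i][j]
        for j in range(n_int)] for i in range(n_int)]

U = [exact((i + 1) * h, 0.0) for i in range(n_int)]
for step in range(Nt):
    t_half = (step + 0.5) * tau
    rhs = [U[i] + 0.5 * eta[i] * sum(G[i][j] * U[j] for j in range(n_int))
           + tau * p((i + 1) * h, t_half) for i in range(n_int)]
    U = solve(LHS, rhs)

err = max(abs(U[i] - exact((i + 1) * h, T)) for i in range(n_int))
print(err)  # maximum error at T = 1
```

The homogeneous Dirichlet boundary values $U_0 = U_{N_x} = 0$ are built in by restricting the system to the interior points, and the source term is sampled at $t_{n+1/2}$, which agrees with $(p^n+p^{n+1})/2$ to $O(\tau^2)$.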

**Figure 1.** The maximum error of the C-N scheme at ($T = 10$, MaxError $= 6.5276\mathrm{e}{-07}$) and ($T = 20$, MaxError $= 1.7244\mathrm{e}{-08}$), $\alpha = 1.5$, left to right, respectively, for Example 1.

**Figure 2.** The exact (**left**) and numerical (**right**) solution by C-N scheme at *T* = 1, *α* = 1.5, *τ* = 0.01 = *h* for example 1.


**Table 1.** The maximum error and convergence order of the C-N scheme for FDE in example 1.

**Table 2.** The maximum error and convergence order for FCDE in example 2.


**Table 3.** The maximum error and convergence order by C-N for SFCDE in example 2 at *T* = 1, *α* = 1.55.


Problem test 2

2. Consider the space-fractional convection–diffusion-type equation with variable coefficients:

$$\frac{\partial \mu(\mathbf{x},t)}{\partial t} + c(\mathbf{x}) \frac{\partial \mu(\mathbf{x},t)}{\partial \mathbf{x}} = d(\mathbf{x}) \frac{\partial^a \mu(\mathbf{x},t)}{\partial \mathbf{x}^a} + p(\mathbf{x},t).$$

with initial condition

$$\mu(\mathfrak{x},0) = (\mathfrak{x}^{\mathfrak{a}} - \mathfrak{x}); 0 \le \mathfrak{x} \le 1;$$

homogeneous Dirichlet boundary condition

$$u(0,t) = 0 = u(1,t)$$

with variable convection–diffusion coefficients respectively,

$$\boldsymbol{c}(\mathbf{x}) = \mathbf{x}^{\frac{1}{2}}, \boldsymbol{d}(\mathbf{x}) = \mathbf{x}^{\frac{1}{2}}\mathbf{w}\_{\prime}$$

and source term

$$p(\mathbf{x}, t) = e^{-2t} \left( 2(\mathbf{x} - \mathbf{x}^a) - \Gamma(\mathbf{a}) + \frac{\Gamma(\mathbf{a} + 1)}{\Gamma(\mathbf{a})} \mathbf{x}^{a - 1} - 1 \right),$$

The exact solution is

$$u(\mathbf{x}, t) = e^{-2t} (\mathbf{x}^\alpha - \mathbf{x})$$

Figures 3 and 4 show the numerical and exact solutions of the fractional diffusion and fractional convection–diffusion problems over a sufficiently large time domain for examples 1 and 2, respectively. The exact and numerical solutions of the fractional convection–diffusion equation obtained by the C-N scheme are also given in Figure 5. In Table 4, the maximum error and first-order convergence in space are obtained using the C-N scheme without the extrapolation-to-the-limit approach, with the time step fixed.

**Figure 3.** Numerical and exact solutions by the C-N scheme at *α* = 1.5, *τ* = *h* = 0.01, with *T* = 10, *T* = 30, and *T* = 40, left to right and down, respectively, for example 1.

**Figure 4.** The exact (**left**) and numerical (**right**) solution by the C-N scheme for the FCDE at *h* = *τ* = 0.005, *α* = 1.5, (*t* = 5, max-error = 4.0657e−05), for example 2.

**Table 4.** The maximum error and convergence order produced by the C-N scheme for example 3 at *T* = 1, *Nt* = 100.


**Figure 5.** The exact (**left**) and numerical (**right**) solution by the C-N scheme for the FCDE at *h* = *τ* = 0.01, (*t* = 2, max-error = 4.2158e−04), *α* = 1.75, for example 2.

Problem Test 3

3. Consider the space-fractional convection–diffusion type of equation with variable coefficients:

$$\frac{\partial u(x,t)}{\partial t} + c(x) \frac{\partial u(x,t)}{\partial x} = d(x) \frac{\partial^{\alpha} u(x,t)}{\partial x^{\alpha}} + p(x,t)$$

with initial condition

$$u(x,0) = x^2(1-x),$$

homogeneous Dirichlet boundary conditions

$$u(0,t) = 0 = u(1,t),$$

with variable convection and diffusion coefficients, respectively,

$$c(x) = x^{0.6}, \qquad d(x) = \Gamma(2.8)x^{3/4},$$

and the forcing function

$$p(x,t) = 2x^2(1-x)t^{1.3}/\Gamma(2.3) + 0.3x^{1.8}e^{-t}.$$

The exact solution is

$$u(x,t) = x^2(1-x)e^{-t}.$$

Problem test 4

4. Consider the space-fractional convection–diffusion equation with variable coefficients:

$$\frac{\partial u(x,t)}{\partial t} + c(x) \frac{\partial u(x,t)}{\partial x} = d(x) \frac{\partial^{\alpha} u(x,t)}{\partial x^{\alpha}} + p(x,t)$$

with initial condition

$$u(x,0) = x^{\alpha}(1-x),$$

homogeneous Dirichlet boundary conditions

$$u(0,t) = 0 = u(1,t),$$

with variable convection and diffusion coefficients, respectively,

$$c(x) = x^{3/5}, \qquad d(x) = x^{3/4},$$

and the forcing function

$$p(x,t) = 2x^{\alpha}(1-x)t^{1.3}/\Gamma(2.3) + 0.3x^{1.8}e^{-t}.$$

The exact solution is

$$u(x,t) = x^{\alpha}(1-x)e^{-t}.$$

Problem test 4 is solved with the grid-size-reduction extrapolation approach stated in [23]. Figure 6 shows that the C-N scheme produces sufficiently smooth numerical and exact solutions, and Table 5 gives the maximum error and the error rate for the space-fractional convection–diffusion problem with the grid-size-reduction extrapolation method.
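The grid-size-reduction idea is a Richardson-type extrapolation: two approximations computed with step sizes *h* and *h*/2 are combined so that the leading error term cancels. The following generic sketch is illustrative only (the function names are assumptions, and the toy check uses a simple forward difference rather than the C-N scheme):

```python
import math

def richardson(approx, h, order=1):
    # approx(h) has leading error C * h^order; combining the h and h/2
    # results cancels that term, raising the observed order by one.
    a, b = approx(h), approx(h / 2)
    return (2 ** order * b - a) / (2 ** order - 1)

# Toy check: a forward difference of sin at x = 1 has O(h) error,
# and one extrapolation step makes it noticeably more accurate.
fd = lambda h: (math.sin(1 + h) - math.sin(1)) / h
```

Applied to a first-order-in-space scheme, the same combination of the *h* and *h*/2 solutions yields the second-order convergence reported in the tables.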

**Figure 6.** The exact (**left**) and numerical (**right**) solution by the C-N scheme at *h* = *τ* = 0.0025, (*t* = 0.1, max-error = 1.4e−03), *α* = 1.1, for example 4.


**Table 5.** The maximum error and error rate produced by the C-N scheme for example 4 at *t* = 0.1.

#### **6. Conclusions**

The one-dimensional space-fractional diffusion and convection–diffusion problems with space-variable coefficients are solved by the fractional C-N scheme based on the extrapolation-to-the-limit approach applied to the right-shifted Grünwald–Letnikov approximation. For both the fractional diffusion problem and the fractional convection–diffusion equation with space-variable coefficients, the fractional C-N method is consistent and unconditionally stable with second-order convergence. The numerical examples confirm that the C-N method is suitable for the space-fractional convection–diffusion problem even for large time domains.

**Author Contributions:** The authors contributed equally to the writing and approved the final manuscript of this paper. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was financially supported by the National Key Research and Development Program of China (Grant Nos. 2017YFB0305601 and 2017YFB0701700).

**Acknowledgments:** The authors would like to thank the editor and the anonymous reviewers for their helpful comments for revising the article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

1. Diethelm, K. *The Analysis of Fractional Differential Equations*; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010.


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **A Sharp Oscillation Criterion for a Linear Differential Equation with Variable Delay**

#### **Ábel Garab**

Institute of Mathematics, University of Klagenfurt, Universitätsstraße 65–67, 9020 Klagenfurt am Wörthersee, Austria; abel.garab@aau.at

Received: 30 September 2019; Accepted: 22 October 2019; Published: 24 October 2019

**Abstract:** We consider linear differential equations with variable delay of the form

$$x'(t) + p(t)x(t - \tau(t)) = 0, \qquad t \ge t\_0,$$

where *p* : [*t*0, ∞) → [0, ∞) and *τ* : [*t*0, ∞) → (0, ∞) are continuous functions, such that *t* − *τ*(*t*) → ∞ (as *t* → ∞). It is well-known that, for the oscillation of all solutions, it is necessary that

$$B := \limsup\_{t \to \infty} A(t) \ge \frac{1}{e} \quad \text{holds, where} \quad A(t) := \int\_{t-\tau(t)}^t p(s) \, ds.$$

Our main result shows that, if the function *A* is slowly varying at infinity (in additive form), then under mild additional assumptions on *p* and *τ*, condition *B* > 1/*e* implies that all solutions of the above delay differential equation are oscillatory.

**Keywords:** oscillation; delay differential equation; variable delay; deviating argument; non-monotone argument; slowly varying function

**MSC:** 34K11; 34K06; 26A12

#### **1. Introduction and Preliminary Results**

Consider the following linear differential equation with variable delay:

$$x'(t) + p(t)x(t - \tau(t)) = 0, \qquad t \ge t\_0, \tag{1}$$

where *p* : [*t*0, ∞) → [0, ∞) and *τ* : [*t*0, ∞) → (0, ∞) are continuous functions, such that *t* − *τ*(*t*) → ∞ (as *t* → ∞). Note that *t* − *τ*(*t*) is not assumed to be nondecreasing. Let *t*−1 = inf{*s* − *τ*(*s*) : *s* ∈ [*t*0, ∞)} and note that *t*−1 ∈ (−∞, *t*0) holds. Then, a continuous function *x* : [*t*−1, ∞) → R is called a *solution* of Equation (1) if it is continuously differentiable on [*t*0, ∞) and satisfies Equation (1) there.

Such equations, and, in general, delay differential equations with either constant or variable delay arise naturally in a multitude of models from biology, physics, engineering, chemistry and economy. For an extensive introduction to the theory of delay differential equations, we refer to the books [1,2], whereas for more on their applications we recommend the reader to study [3,4].

This paper is concerned with the oscillatory behaviour of Equation (1). By convention, a solution is called *oscillatory* if it has arbitrarily large zeros and is *nonoscillatory* otherwise. Results on the oscillation of retarded first-order equations already appeared in the works of Johann Bernoulli [5]. The first systematic

*Symmetry* **2019**, *11*, 1332; doi:10.3390/sym11111332 www.mdpi.com/journal/symmetry

study of the oscillatory and nonoscillatory behaviour of Equation (1) goes back to Myshkis [6]. He showed that, if the functions *τ* and *p* are bounded, then

$$\inf\_{t \in [t\_0, \infty)} \tau(t) \inf\_{t \in [t\_0, \infty)} p(t) > \frac{1}{e} \tag{2}$$

implies that all solutions of Equation (1) are oscillatory, whereas condition

$$\sup\_{t \in [t\_0, \infty)} \tau(t) \sup\_{t \in [t\_0, \infty)} p(t) \le \frac{1}{e} \tag{3}$$

guarantees the existence of a nonoscillatory solution.

Since then, the question of oscillation has received much attention and many results have been published providing sufficient conditions guaranteeing that all solutions are oscillatory and others that establish the existence of a nonoscillatory solution. For more details, we refer the interested reader to monographs [7–9] and to the survey papers [10,11]. Here, we only point out some results that are most relevant from our perspective.

Ladas, Lakshmikantham and Papadakis [12] proved that all solutions of Equation (1) are oscillatory, provided

$$\limsup\_{t \to \infty} \int\_{t-\tau(t)}^{t} p(s) \, ds > 1, \qquad t - \tau(t) \text{ is nondecreasing}, \quad \text{and} \quad p(t) > 0 \text{ for all } t \ge t\_0. \tag{4}$$

The following important contribution is due to Koplatadze and Chanturija [13]. For the proof, see also e.g., Theorem 2.1.1 of [9].

#### **Theorem 1** ([13])**.**

*(i) If*

$$\liminf\_{t \to \infty} \int\_{t-\tau(t)}^{t} p(s) \, ds > \frac{1}{e}, \tag{5}$$

*then all solutions of Equation* (1) *are oscillatory. (ii) If*

$$\limsup\_{t \to \infty} \int\_{t-\tau(t)}^{t} p(s) \, ds < \frac{1}{e}, \tag{6}$$

*or, more generally, if*

$$\int\_{t-\tau(t)}^{t} p(s) \, ds \le \frac{1}{e} \quad \text{for all large } t,\tag{7}$$

*then Equation* (1) *has a nonoscillatory solution.*

After these central results, many works have focused on filling the gap between Conditions (2) and (3), as well as between the necessary and the sufficient conditions given by Theorem 1 and Condition (4). For more on such results, see, e.g., the recent survey by Moremedi and Stavroulakis [10].

It is worth mentioning that, in case the functions *τ* and *p* are constant, then both Conditions (5) and (2) reduce to condition *τp* > 1/*e*, which is in this case not only sufficient, but—in view of Inequality (3)—also necessary for the oscillation of all solutions. Another immediate corollary of Theorem 1 is that, if *τ*(*t*) is a constant *τ* > 0 and *p* is *τ*-periodic, then $\int\_{t-\tau(t)}^{t} p(s)\,ds$ is constant and Condition (7) is sharp.
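For constant *τ* and *p*, the threshold *τp* = 1/*e* can also be observed numerically. The sketch below is illustrative only (explicit Euler with an arbitrary step size and horizon, and hypothetical helper names `euler_dde` and `sign_changes`); it simulates *x*′(*t*) = −*p x*(*t* − *τ*) with constant history and counts sign changes:

```python
def euler_dde(p, tau, T=30.0, dt=0.01):
    # Explicit Euler for x'(t) = -p * x(t - tau) with history x = 1 on [-tau, 0].
    lag = round(tau / dt)
    xs = [1.0] * (lag + 1)                  # xs[-1] is x(0)
    for _ in range(int(T / dt)):
        xs.append(xs[-1] - dt * p * xs[-lag - 1])
    return xs

def sign_changes(xs):
    # number of strict sign changes along the trajectory
    return sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)

# tau = p = 1 gives tau * p = 1 > 1/e, so every solution oscillates;
# the simulated trajectory changes sign repeatedly.
osc = euler_dde(p=1.0, tau=1.0)
```

With constant history 1 the solution reaches zero at *t* = *τ* and then keeps crossing zero with decaying amplitude, which the discrete trajectory reproduces.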

Motivated by these facts, Pituk [14] recently proved that, for constant delay *τ*, there is a class of functions *<sup>p</sup>*, for which the 'almost necessary' condition *<sup>τ</sup>* lim sup*t*→<sup>∞</sup> *<sup>p</sup>*(*t*) <sup>&</sup>gt; 1/*<sup>e</sup>* is sufficient for the

oscillation of all solutions of Equation (1). More precisely, he showed in Theorem 1 of [14] that, if *p* is slowly varying at infinity with lim inf*t*→<sup>∞</sup> *p*(*t*) > 0, then

$$\tau \limsup\_{t \to \infty} p(t) > \frac{1}{e} \tag{8}$$

implies that all solutions of Equation (1) are oscillatory, where a function *<sup>f</sup>* : [*t*0, <sup>∞</sup>) <sup>→</sup> <sup>R</sup> is called *slowly varying at infinity* if, for every *s* ≥ 0,

$$f(t+s) - f(t) \to 0 \quad \text{as } t \to \infty. \tag{9}$$
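The defining property (9) can be probed numerically. A small illustrative sketch (not from the paper): for *f*(*t*) = sin √*t*, which reappears in Section 3, the shifted differences *f*(*t* + *s*) − *f*(*t*) shrink as *t* grows:

```python
import math

def shift_defect(f, s, t0, n=500):
    # sampled sup of |f(t + s) - f(t)| for t in [t0, t0 + n)
    return max(abs(f(t0 + i + s) - f(t0 + i)) for i in range(n))

f = lambda t: math.sin(math.sqrt(t))    # slowly varying at infinity
near = shift_defect(f, s=5.0, t0=1e2)   # moderate t: differences still sizeable
far = shift_defect(f, s=5.0, t0=1e8)    # large t: differences are tiny
```

Indeed, |sin √(*t*+*s*) − sin √*t*| ≤ *s*/(2√*t*), so the defect decays like *t*^(−1/2).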

In a subsequent paper, Pituk, Stavroulakis, and the present author [15] generalized the above result and gave a class of functions *p*—broader than *τ*-periodic—for which Condition (6) is 'almost sharp'. More precisely, the following theorem was proved.

**Theorem 2** ([15])**.** *Let the function τ in Equation* (1) *be constant, and function p be nonnegative, bounded and uniformly continuous. Assume further that the function* $t \mapsto \int\_{t-\tau}^{t} p(s)\,ds$ *is slowly varying at infinity. Then,*

$$\liminf\_{t \to \infty} \int\_{t-\tau}^{t} p(s) \, ds > 0 \quad \text{and} \quad \limsup\_{t \to \infty} \int\_{t-\tau}^{t} p(s) \, ds > \frac{1}{e} \tag{10}$$

*imply that all solutions of Equation* (1) *are oscillatory.*

The purpose of this paper is to show that Theorem 2 remains valid in case of variable delay, provided *τ* is uniformly continuous and bounded. The proof is similar to that of Theorem 2; nevertheless, some technical difficulties also arise due to the variable delay.

In the next section, we present our main theorems and give some hints to support applicability of the results. Then, in Section 3, we provide an illustrative example. Section 4 is devoted to conclusions.

#### **2. Results**

The following theorem is our main result.

**Theorem 3.** *For some positive numbers M and κ, let p* : [*t*0, ∞) → [0, *M*] *and τ* : [*t*0, ∞) → (0, *κ*] *be uniformly continuous functions, and suppose that the function*

$$A \colon [t\_0 + \kappa, \infty) \to [0, \infty), \qquad A(t) \coloneqq \int\_{t - \tau(t)}^t p(s) \, ds \tag{11}$$

*is slowly varying at infinity. Then,*

$$\liminf\_{t \to \infty} A(t) > 0 \quad \text{and} \quad \limsup\_{t \to \infty} A(t) > \frac{1}{e} \tag{12}$$

*imply that all solutions of Equation* (1) *are oscillatory.*

Before we prove the theorem, we make some comments, mainly to support applicability of the result. From Theorem 1, it is apparent that condition lim sup*t*→<sup>∞</sup> *<sup>A</sup>*(*t*) <sup>≥</sup> 1/*<sup>e</sup>* is necessary for the oscillation of all solutions, so Theorem 3 is sharp in this sense. Example 9 of [15] showed that the slowly varying assumption is important: even in the constant delay case, the theorem does not hold if we omit that assumption.

We remark that uniform continuity of *p* and *τ* are guaranteed, if they are globally Lipschitz continuous, which is the case if they are differentiable with their derivatives bounded on (*t*0, ∞).

Let us also devote some comments to functions that are slowly varying at infinity—we shall call them *slowly varying* for brevity.

The class of slowly varying functions was studied already by Karamata [16] in a multiplicative form. For more information about slowly varying functions and their characterization, we refer the reader to the monograph by Seneta [17]. In particular, for the relation between the two terminologies, see the remark below Theorem 1.2 in Chapter 1 of [17].

Here, let us mention only one characterization of slowly varying functions given by Pituk [14] (in the additive form, see Formula (9)): a continuous function *<sup>f</sup>* : [*t*0, <sup>∞</sup>) <sup>→</sup> <sup>R</sup> is slowly varying if and only if there exists *t*<sup>1</sup> ≥ *t*0, such that *f* can be written in the form

$$f(t) = c(t) + d(t), \quad \text{for all } t \ge t\_1, \tag{13}$$

where *<sup>c</sup>* : [*t*1, <sup>∞</sup>) <sup>→</sup> <sup>R</sup> is a continuous function which tends to some finite limit as *<sup>t</sup>* <sup>→</sup> <sup>∞</sup>, and *<sup>d</sup>* : [*t*1, <sup>∞</sup>) <sup>→</sup> <sup>R</sup> is a continuously differentiable function for which lim*t*→<sup>∞</sup> *<sup>d</sup>* (*t*) = 0 holds.

The next lemma will be essential in our proof.

**Lemma 1** ([13])**.** *Suppose that p* : [*t*0, ∞) → [0, ∞) *is a continuous function satisfying*

$$\liminf\_{t \to \infty} \int\_{t-\tau(t)}^t p(s) \, ds > 0.$$

*If x is an eventually positive solution of Equation* (1)*, then, for all sufficiently large T,*

$$\sup\_{t \ge T} \frac{x(t - \tau(t))}{x(t)} < \infty.$$

**Proof of Theorem 3.** Assume to the contrary that *x* is an eventually positive solution and all assumptions of the theorem hold (if the solution *x* is eventually negative, then take the solution −*x*).

By virtue of Lemma 1, there exists *T* ≥ *t*0 + *κ* such that *x*(*t*) > 0 holds for all *t* ≥ *T* − *κ* and

$$\mathcal{K} := \sup\_{t \ge T} \frac{\mathbf{x}(t - \tau(t))}{\mathbf{x}(t)} < \infty. \tag{14}$$

Then, there exists a sequence {*tn*}*n*∈<sup>N</sup> ⊂ [*T*, <sup>∞</sup>), such that lim*n*→<sup>∞</sup> *tn* = <sup>∞</sup> and

$$\lim\_{n \to \infty} A(t\_n) = \limsup\_{t \to \infty} A(t) =: B.$$

Let us introduce the following sequence of functions:

$$y\_n(t) := \frac{\mathbf{x}(t\_n + t)}{\mathbf{x}(t\_n)}, \qquad p\_n(t) := p(t\_n + t) \quad \text{and} \quad \tau\_n(t) := \tau(t\_n + t) \quad \text{for all } t \ge -\kappa \text{ and } n \in \mathbb{N}. \tag{15}$$

Then, applying (1) leads to the equation

$$y\_n'(t) = \frac{\mathbf{x}'(t\_n + t)}{\mathbf{x}(t\_n)} = \frac{-p(t\_n + t)\mathbf{x}(t\_n + t - \tau(t\_n + t))}{\mathbf{x}(t\_n)}$$

$$= -p(t\_n + t)y\_n(t - \tau(t\_n + t))\tag{16}$$

$$= -p\_n(t)y\_n(t - \tau\_n(t)). \tag{17}$$

Now, we would like to pass to the limit by applying the Arzelà–Ascoli theorem to the above sequences of functions {*yn*}*n*∈N, {*pn*}*n*∈N and {*τn*}*n*∈N; hence, we need to establish their uniform boundedness and equicontinuity. The uniform boundedness and equicontinuity of {*pn*}*n*∈N and {*τn*}*n*∈N follow from the boundedness and uniform continuity of the functions *p* and *τ*, respectively.

It remains to check these properties for {*yn*}*n*∈N. For this, note that by virtue of Equation (1) and Equation (14) we obtain that the inequality

$$\mathbf{x}'(t\_n + t) = -p(t\_n + t) \frac{\mathbf{x}(t\_n + t - \tau(t\_n + t))}{\mathbf{x}(t\_n + t)} \mathbf{x}(t\_n + t) \ge -\mathbf{K} \mathbf{M} \mathbf{x}(t\_n + t)$$

holds for all *<sup>t</sup>* <sup>≥</sup> 0 and *<sup>n</sup>* <sup>∈</sup> <sup>N</sup>. This immediately implies

$$y\_n'(t) = \frac{\mathbf{x}'(t\_n + t)}{\mathbf{x}(t\_n)} \ge -\frac{\mathbf{K}\mathbf{M}\mathbf{x}(t\_n + t)}{\mathbf{x}(t\_n)} = -\mathbf{K}\mathbf{M}y\_n(t).$$

As *yn* is positive on [−*κ*, ∞), we obtain inequalities

$$-KM \le \frac{y\_n'(t)}{y\_n(t)} \le 0 \quad \text{for all } t \ge 0 \text{ and } n \in \mathbb{N}.\tag{18}$$

Integration leads to

$$-KMt \le \ln \frac{y\_n(t)}{y\_n(0)} \le 0 \quad \text{for all } t \ge 0 \text{ and } n \in \mathbb{N}.\tag{19}$$

Taking into account that *yn*(0) = 1 for all *<sup>n</sup>* <sup>∈</sup> <sup>N</sup>, we obtain that

$$e^{-KMt} \le y\_n(t) \le 1 \tag{20}$$

holds for all *t* ≥ 0 and *n* ∈ N. Now, Inequalities (20) and (18) imply that {*yn*}*n*∈N and {*y*′*n*}*n*∈N are uniformly bounded on [0, ∞). Furthermore, the uniform boundedness of {*y*′*n*} yields that the functions *yn* are globally Lipschitz continuous with a common Lipschitz constant, and consequently {*yn*}*n*∈N is uniformly equicontinuous.

In view of the above, by the Arzelà–Ascoli theorem, we may assume (by passing to a subsequence without changing notation) that the limits

$$y(t) := \lim\_{n \to \infty} y\_n(t), \qquad q(t) := \lim\_{n \to \infty} p\_n(t) \quad \text{and} \quad \sigma(t) := \lim\_{n \to \infty} \tau\_n(t) \tag{21}$$

exist and are continuous on [0, ∞), and the convergence is uniform on every bounded subinterval of [0, ∞). Note that

$$e^{-\mathsf{K}Mt} \le y(t) \le 1\tag{22}$$

also holds for all *t* ≥ 0.

Furthermore, from Equation (16), together with the uniform continuity of the functions *p* and *τ* and the uniform equicontinuity of {*yn*}*n*∈N, we obtain that {*y*′*n*}*n*∈N is also equicontinuous on [*κ*, ∞). Recall that the sequence {*y*′*n*}*n*∈N is uniformly bounded on [0, ∞). Hence, according to the Arzelà–Ascoli theorem, we may assume (after passing to a subsequence if necessary) that the limit lim*n*→∞ *y*′*n*(*t*) exists for all *t* ∈ [*κ*, ∞), and the convergence is uniform on all bounded subintervals of [*κ*, ∞). This, combined with the fact that lim*n*→∞ *yn*(*κ*) = *y*(*κ*), yields (see, e.g., Theorem 7.17 of [18]) that

$$y'(t) = \lim\_{n \to \infty} y'\_n(t)$$

holds for all *t* ≥ *κ*. By virtue of Equation (17),

$$y'(t) = -\lim\_{n \to \infty} p\_n(t)y\_n(t - \tau\_n(t))\tag{23}$$

is satisfied for all *t* ≥ *κ*. From Equation (21) and the (uniform) equicontinuity of {*yn*}*n*∈N, one can easily derive that

$$\lim\_{n \to \infty} y\_n(t - \tau\_n(t)) = y(t - \sigma(t))$$

holds for all *t* ≥ *κ*. Thus, Inequality (22) implies that *y* is a positive solution of the equation

$$y'(t) = -q(t)y(t - \sigma(t)).\tag{24}$$


As a final step, we will apply Theorem 1 (i) to show that every solution of Equation (24) is oscillatory, which is a contradiction. Thus, we need to verify that Equation (24) fulfils the hypotheses imposed on Equation (1) and that Inequality (5) holds.

First, observe that *q*(*t*) ∈ [0, *M*] and *σ*(*t*) ∈ [0, *κ*] for all *t* ≥ *κ* follow immediately from their definitions and from the assumptions on *p* and *τ*, respectively. Note that we have not yet shown that *σ*(*t*) is positive for all *t*.

Next, we prove that Inequality (5) is satisfied. For this, let us fix *t* ≥ *κ* and note that, since *pn* converges uniformly to *q* on the interval [*t* − *σ*(*t*), *t*], we obtain

$$\int\_{t-\sigma(t)}^t q(s) \, ds = \lim\_{n \to \infty} \int\_{t-\sigma(t)}^t p\_n(s) \, ds = \lim\_{n \to \infty} \left( \int\_{t-\tau\_n(t)}^t p\_n(s) \, ds + \int\_{t-\sigma(t)}^{t-\tau\_n(t)} p\_n(s) \, ds \right).$$

The functions *pn* are uniformly bounded, and *τn*(*t*) → *σ*(*t*), as *n* → ∞, so the limit of the last integral vanishes. This in turn leads to

$$\begin{aligned} \int\_{t-\sigma(t)}^t q(s) \, ds &= \lim\_{n \to \infty} \int\_{t-\tau(t\_n+t)}^t p(t\_n+s) \, ds \\ &= \lim\_{n \to \infty} \int\_{t\_n+t-\tau(t\_n+t)}^{t\_n+t} p(u) \, du \\ &= \lim\_{n \to \infty} A(t\_n+t) = \lim\_{n \to \infty} A(t\_n) = B > \frac{1}{e}. \end{aligned}$$

Here, the last inequality and the last equality hold by assumption, whereas the last but one equality follows from the slowly varying property of *A*. Hence, $\int\_{t-\sigma(t)}^{t} q(s)\,ds$ is the constant *B*, and thus Inequality (5) holds.

The only condition that still needs to be verified is that *σ* is positive for all *t* ≥ *κ*. Notice that this follows immediately from the above formulas: since

$$0 < B = \int\_{t-\sigma(t)}^t q(s) \, ds \le M\sigma(t)$$

holds for all *t* ≥ *κ*, thus *σ*(*t*) ≥ *B*/*M* for all *t* ≥ *κ*.

Therefore, Theorem 1 (i) can be applied for Equation (24) with *τ* := *σ*, *t*<sup>0</sup> := *κ* and *p* := *q* to obtain that every solution of Equation (24) is oscillatory, which contradicts Inequality (22).

The following lemma may be helpful to verify the slowly varying property of *A* without having to evaluate it.

**Lemma 2.** *For some <sup>t</sup>*<sup>0</sup> <sup>∈</sup> <sup>R</sup> *and positive number <sup>κ</sup>, let <sup>p</sup>* : [*t*0, <sup>∞</sup>) <sup>→</sup> <sup>R</sup> *be bounded and locally integrable, and τ* : [*t*0, ∞) → [−*κ*, *κ*] *be any function. If both p and τ are slowly varying at infinity, then so is the function*

$$A \colon \left[ t\_0 + \kappa, \infty \right) \to \mathbb{R}, \qquad A(t) := \int\_{t - \tau(t)}^t p(s) \, ds.$$

To prove this lemma, we first need to state the following result (see Lemma 1.1 of [17]).

**Lemma 3.** *If <sup>p</sup>* : [*t*0, <sup>∞</sup>) <sup>→</sup> <sup>R</sup> *is Lebesgue measurable and slowly varying at infinity, then, for all finite interval I,* sup*s*∈*<sup>I</sup>* <sup>|</sup>*p*(*<sup>t</sup>* <sup>+</sup> *<sup>s</sup>*) <sup>−</sup> *<sup>p</sup>*(*t*)| → <sup>0</sup>*, as t* <sup>→</sup> <sup>∞</sup>*.*

**Proof of Lemma 2.** For *t* ≥ *t*<sup>0</sup> + *κ*, we have

$$A(t) = \int\_{t-\tau(t)}^{t} p(\mathbf{s}) \, d\mathbf{s} = \int\_{-\tau(t)}^{0} p(t+\mathbf{u}) \, d\mathbf{u} = \int\_{-\tau(t)}^{0} p(t+\mathbf{u}) - p(t) \, d\mathbf{u} + \tau(t)p(t). \tag{25}$$

From this and the triangle inequality, we obtain that, for any fixed *<sup>r</sup>* <sup>∈</sup> <sup>R</sup>, the inequalities

$$\begin{split} |A(t+r) - A(t)| &\leq \left| \int\_{-\tau(t+r)}^{0} p(t+r+u) - p(t+r) \, du \right| + \left| \int\_{-\tau(t)}^{0} p(t+u) - p(t) \, du \right| \\ &\quad + \left| \tau(t+r)p(t+r) - \tau(t)p(t) \right| \\ &\leq \int\_{-\kappa}^{\kappa} |p(t+r+u) - p(t+r)| \, du + \int\_{-\kappa}^{\kappa} |p(t+u) - p(t)| \, du \\ &\quad + \left| \tau(t+r)p(t+r) - \tau(t)p(t) \right| \\ &\leq 2\kappa \left( \sup\_{u \in [-\kappa, \kappa]} |p(t+r+u) - p(t+r)| + \sup\_{u \in [-\kappa, \kappa]} |p(t+u) - p(t)| \right) \\ &\quad + |p(t+r)| \left| \tau(t+r) - \tau(t) \right| + |\tau(t)| |p(t+r) - p(t)| \end{split}$$

hold. Now, if we let *t* → ∞, then the last two suprema vanish due to Lemma 3 and because *p* is slowly varying. On the other hand, the last two terms also tend to 0, thanks to boundedness and to the slowly varying property of functions *τ* and *p*.

Therefore, lim*t*→∞ (*A*(*t* + *r*) − *A*(*t*)) = 0 holds for all *r* ≥ 0.

Note that, for *A* to be slowly varying, it is not sufficient to assume merely that at least one of *p* and *τ* is slowly varying. This is the case even under the additional assumptions of Theorem 3 on *p* and *τ*. This

can be readily seen by considering examples *p* ≡ 1 and *τ*(*t*) = 2 + sin *t*, and *τ* ≡ *π* and *p*(*t*) := 2 + sin *t*, respectively. In both cases, function *A* will be 2*π*-periodic, but nonconstant, so it cannot be slowly varying.
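Both counterexamples are easy to check by hand; the first can also be verified numerically. In the sketch below (illustrative only), *p* ≡ 1 gives *A*(*t*) = *τ*(*t*) = 2 + sin *t*, and the shifted difference *A*(*t* + *π*) − *A*(*t*) = −2 sin *t* does not tend to 0:

```python
import math

# With p = 1 and tau(t) = 2 + sin t, A(t) = integral of 1 over [t - tau(t), t],
# i.e. A(t) = tau(t) = 2 + sin t: a nonconstant 2*pi-periodic function.
A = lambda t: 2 + math.sin(t)

ts = [1000.0 + 0.1 * j for j in range(70)]      # samples covering a full period
worst = max(abs(A(t + math.pi) - A(t)) for t in ts)
```

Since `worst` stays close to 2 however far out the samples are taken, *A* violates the defining property (9) of slow variation.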

Our last theorem is a corollary of Lemma 2 and Theorem 3, and it gives another generalization of Theorem 1 of [14] in case *p* is bounded.

**Theorem 4.** *For some positive numbers M and κ let p* : [*t*0, ∞) → [0, *M*] *and τ* : [*t*0, ∞) → (0, *κ*] *be continuous and slowly varying at infinity. Then Condition* (12) *implies that all solutions of Equation* (1) *are oscillatory.*

**Proof.** First, Lemma 2 infers that function *A* from Equation (11) is slowly varying. As already noted after Theorem 4 of [15], the slowly varying property together with continuity implies uniform continuity. Hence, *p* and *τ* are uniformly continuous, so Theorem 3 applies, which finishes the proof.

Let us briefly consider the case when *p* is unbounded and slowly varying. If we further assume that *p*(*t*) > 0 holds for large *t*, and *τ* is such that there exists some *τ*0 ∈ (0, *κ*], for which lim inf*t*→∞ *τ*(*t*) ≥ *τ*0 holds and *t* − *τ*(*t*) is nondecreasing (note that Theorem 1 of [14] meets these assumptions), then, using the slowly varying property of *p*, it can be easily shown that lim sup*t*→∞ $\int\_{t-\tau(t)}^{t} p(s)\,ds = \infty$. In particular, Condition (4) is fulfilled, which yields that all solutions are oscillatory regardless of Condition (12).

#### **3. Example**

Before concluding the paper, let us consider the following example, which may look a bit artificial. This is because our intention was to design it in such a way that—hopefully—no other known results could guarantee the oscillation of all solutions. Obviously, it is not possible to be aware of all the related results, and to check whether they are applicable; nevertheless, we shall exclude applicability of many classical, as well as many recent theorems.

Consider the equation

$$x'(t) + \left(\frac{1}{2\pi e} + \delta\sin\sqrt{t}\right)x\left(t - \left(2\pi + \varepsilon\cos\sqrt{t}\right)\right) = 0, \qquad t \ge 0,\tag{26}$$

where *<sup>δ</sup>* <sup>∈</sup> (0, <sup>1</sup> <sup>2</sup>*πe*) and *ε* ∈ (0, 2*π*) are small positive constants that will be determined later. Functions *p* and *τ* are clearly positive and bounded, so Equation (26) is a special case of Equation (1) with

$$p(t) = \frac{1}{2\pi e} + \delta\sin\sqrt{t}, \qquad \tau(t) = 2\pi + \varepsilon\cos\sqrt{t} \quad \text{and} \quad t\_0 = 0.$$

Note that the functions sin <sup>√</sup>*<sup>t</sup>* and cos <sup>√</sup>*<sup>t</sup>* are slowly varying at infinity, since their derivatives vanish there (see Equation (13)). This in turn yields that both *p* and *τ* are slowly varying, and, thus, in view of Lemma 2, *A* is slowly varying as well.

On the other hand, a direct calculation shows that

$$A(t) = \frac{2\pi + \varepsilon \cos \sqrt{t}}{2\pi e} + \delta \int\_{t-\tau(t)}^t \sin \sqrt{s} \, ds.$$

This immediately implies

$$\frac{2\pi - \varepsilon}{2\pi e} - \delta(2\pi + \varepsilon) \le \liminf\_{t \to \infty} A(t) \le \limsup\_{t \to \infty} A(t) \le \frac{2\pi + \varepsilon}{2\pi e} + \delta(2\pi + \varepsilon). \tag{27}$$

Now, by setting *tn* = (2*nπ*)² and *t*′*n* = ((2*n* + 1)*π*)² for all *n* ∈ N, we obtain that

$$A(t\_n) = \frac{2\pi + \varepsilon}{2\pi e} + \delta \int\_{t\_n - \tau(t\_n)}^{t\_n} \sin\sqrt{s} \, ds \ge \frac{2\pi + \varepsilon}{2\pi e} - \delta(2\pi + \varepsilon)$$

and

$$A(t\_n') = \frac{2\pi - \varepsilon}{2\pi e} + \delta \int\_{t\_n' - \tau(t\_n')}^{t\_n'} \sin\sqrt{s} \, ds \le \frac{2\pi - \varepsilon}{2\pi e} + \delta(2\pi + \varepsilon)$$

hold for all *<sup>n</sup>* <sup>∈</sup> <sup>N</sup>. These together with Inequalities (27) yield the estimates

$$\frac{2\pi+\varepsilon}{2\pi e} - \delta(2\pi+\varepsilon) \le \limsup\_{t \to \infty} A(t) \le \frac{2\pi+\varepsilon}{2\pi e} + \delta(2\pi+\varepsilon)$$

and

$$\frac{2\pi - \varepsilon}{2\pi e} - \delta(2\pi + \varepsilon) \le \liminf\_{t \to \infty} A(t) \le \frac{2\pi - \varepsilon}{2\pi e} + \delta(2\pi + \varepsilon).$$

Finally, for *γ* > 0, let *ε* := *ε*(*γ*) := 4*πeγ* and *δ* := *δ*(*γ*) := *γ*/(2*π* + *ε*). Then, the above estimates take the form

$$\frac{1}{e} + \gamma \le \limsup\_{t \to \infty} A(t) \le \frac{1}{e} + 3\gamma \quad \text{and} \quad \frac{1}{e} - 3\gamma \le \liminf\_{t \to \infty} A(t) \le \frac{1}{e} - \gamma.$$

It is now easy to see that, for all *γ* ∈ (0, 1/(3*e*)), all assumptions of Theorem 3 (and also of Theorem 4) are fulfilled, and therefore all solutions are oscillatory. Note also that, since lim sup*t*→∞ *A*(*t*) → 1/*e* as *γ* → 0+, and lim inf*t*→∞ *A*(*t*) < 1/*e* for all *γ* ∈ (0, 1/(3*e*)), by choosing *γ* > 0 small enough we can rule out the application of Conditions (4), (5) and various other sufficient conditions for the oscillation of all solutions of Equation (26) (see, e.g., conditions (C3)–(C12) from [10]). Since the function *τ* is nonconstant, neither Condition (8) nor Theorem 2 can be applied to guarantee oscillation.
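These estimates can be cross-checked numerically, using the closed-form antiderivative ∫ sin √*s* *ds* = 2 sin √*s* − 2√*s* cos √*s*. The sketch below (with the illustrative choice *γ* = 0.05, which is admissible since 0.05 < 1/(3*e*)) evaluates *A* at the points *tn* and *t*′*n* used above:

```python
import math

e = math.e
gam = 0.05                                   # illustrative gamma in (0, 1/(3e))
eps = 4 * math.pi * e * gam                  # epsilon = 4*pi*e*gamma
delta = gam / (2 * math.pi + eps)            # delta = gamma / (2*pi + epsilon)

tau = lambda t: 2 * math.pi + eps * math.cos(math.sqrt(t))
# antiderivative of sin(sqrt(s)): F(s) = 2 sin(sqrt(s)) - 2 sqrt(s) cos(sqrt(s))
F = lambda s: 2 * math.sin(math.sqrt(s)) - 2 * math.sqrt(s) * math.cos(math.sqrt(s))
A = lambda t: tau(t) / (2 * math.pi * e) + delta * (F(t) - F(t - tau(t)))

n = 50
t_hi = (2 * n * math.pi) ** 2                # cos(sqrt(t)) = 1: A near 1/e + 2*gamma
t_lo = ((2 * n + 1) * math.pi) ** 2          # cos(sqrt(t)) = -1: A near 1/e - 2*gamma
```

Evaluating `A(t_hi)` and `A(t_lo)` shows that *A* indeed straddles the critical value 1/*e*, as the estimates above predict.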

#### **4. Conclusions**

It has been known for almost forty years that, for the oscillation of all solutions of equation

$$x'(t) + p(t)x(t - \tau(t)) = 0, \qquad t \ge t\_0,$$

it is necessary that $\limsup\_{t \to \infty} A(t) \ge 1/e$ holds, where $A(t) := \int\_{t-\tau(t)}^{t} p(s)\,ds$ (see [13]). In our main result (see Theorem 3), we showed that, if the function $A$ is slowly varying at infinity (see Formula (9)), then, under mild additional assumptions on $p$ and $\tau$, the 'almost necessary' condition $\limsup\_{t \to \infty} A(t) > 1/e$ is sufficient for the oscillation of all solutions.

In Theorem 4, we formulated a corollary of Theorem 3. The advantage of this theorem is that its assumptions can be verified more easily.

The applicability and novelty of our results were demonstrated in Section 3.

**Funding:** This research received no external funding.

**Acknowledgments:** I am grateful to Mihály Pituk for introducing this research topic to me while I was visiting him in Veszprém in the framework of the Young Scientists Mentoring Programme of the University of Klagenfurt. I sincerely thank the referees for their useful comments. I am eligible for Open Access Funding by the University of Klagenfurt.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Approximation of a Linear Autonomous Differential Equation with Small Delay**

**Áron Fehér 1,2, Lőrinc Márton <sup>1</sup> and Mihály Pituk 2,\***


Received: 30 September 2019; Accepted: 12 October 2019; Published: 15 October 2019

**Abstract:** A linear autonomous differential equation with small delay is considered in this paper. It is shown that under a smallness condition the delay differential equation is asymptotically equivalent to a linear ordinary differential equation with constant coefficients. The coefficient matrix of the ordinary differential equation is a solution of an associated matrix equation and it can be written as a limit of a sequence of matrices obtained by successive approximations. The eigenvalues of the approximating matrices converge exponentially to the dominant characteristic roots of the delay differential equation and an explicit estimate for the approximation error is given.

**Keywords:** delay differential equation; ordinary differential equation; asymptotic equivalence; approximation; eigenvalue

#### **1. Introduction**

Let $\mathbb{C}$ and $\mathbb{C}^n$ denote the set of complex numbers and the space of $n$-dimensional complex column vectors, respectively. Given a norm $\|\cdot\|$ on $\mathbb{C}^n$, the associated induced norm on $\mathbb{C}^{n\times n}$ will be denoted by the same symbol.

We will study the linear autonomous delay differential equation

$$
\dot{x}(t) = Ax(t) + Bx(t-\tau),
\tag{1}
$$

where $\tau > 0$, $A \in \mathbb{C}^{n\times n}$ and $B \in \mathbb{C}^{n\times n}$ is a nonzero matrix. It is well-known that if $\varphi\colon [-\tau, 0] \to \mathbb{C}^n$ is a continuous initial function, then Equation (1) has a unique solution $x\colon [-\tau, \infty) \to \mathbb{C}^n$ with initial values $x(t) = \varphi(t)$ for $-\tau \le t \le 0$ (see [1]). The characteristic equation of Equation (1) has the form

$$\det \Delta(\lambda) = 0,\qquad \text{where } \Delta(\lambda) = \lambda I - A - B e^{-\lambda \tau}.\tag{2}$$

Throughout the paper, we will assume that

$$\|B\|\tau e^{1+\|A\|\tau} < 1,\tag{3}$$

which may be viewed as a smallness condition on the delay *τ*. We will show that if (3) holds, then Equation (1) is asymptotically equivalent to the ordinary differential equation

$$
\dot{\mathbf{x}} = \mathbf{M} \mathbf{x},\tag{4}
$$

where $M \in \mathbb{C}^{n\times n}$ is the unique solution of the matrix equation

$$M = A + Be^{-M\tau} \tag{5}$$

such that

$$\|M\| < \mu\_0, \qquad\text{where } \mu\_0 = -\tau^{-1}\ln(\|B\|\tau) > 0.\tag{6}$$

Furthermore, the coefficient matrix *M* in Equation (4) can be written as a limit of successive approximations

$$M = \lim\_{k \to \infty} M\_k, \tag{7}$$

where

$$M\_0 = 0 \qquad \text{and} \qquad M\_{k+1} = A + B e^{-M\_k \tau} \quad \text{for } k = 0, 1, 2, \dots \tag{8}$$

The convergence in (7) is exponential, and we give an estimate for the approximation error $\|M - M\_k\|$. It will be shown that those characteristic roots of Equation (1) which lie in the half-plane $\operatorname{Re}\lambda > -\mu\_0$, with $\mu\_0$ as in (6), coincide with the eigenvalues of the matrix $M$. As a consequence, these dominant characteristic roots of Equation (1) can be approximated by the eigenvalues of $M\_k$. We give an explicit estimate for the approximation error which shows that the convergence of the eigenvalues of $M\_k$ to the dominant characteristic roots of Equation (1) is exponentially fast.
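The successive approximation scheme (7)–(8) can be sketched in a few lines of pure Python. The following is our illustration, not code from the paper: the matrix exponential is computed by a truncated Taylor series, which is adequate for the small norms that condition (3) enforces, and the sample matrices $A$, $B$ and delay $\tau$ are arbitrary values chosen to satisfy (3).

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(A, c):
    return [[c * x for x in row] for row in A]

def expm(A, terms=40):
    """e^A via truncated Taylor series; fine for the small norms used here."""
    n = len(A)
    S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # I
    T = [row[:] for row in S]
    for k in range(1, terms):
        T = mat_scale(mat_mul(T, A), 1.0 / k)  # T = A^k / k!
        S = mat_add(S, T)
    return S

def approximate_M(A, B, tau, iters=50):
    """Successive approximations (8): M_0 = 0, M_{k+1} = A + B e^{-M_k tau}."""
    n = len(A)
    M = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        M = mat_add(A, mat_mul(B, expm(mat_scale(M, -tau))))
    return M

# Our sample data: with the max-row-sum norm, ||B|| tau e^{1+||A|| tau}
# = 0.6 * 0.3 * e ~ 0.49 < 1, so the smallness condition (3) holds.
A = [[0.0, 0.0], [0.0, 0.0]]
B = [[-0.5, 0.1], [0.0, -0.4]]
tau = 0.3
M = approximate_M(A, B, tau)

# The limit should satisfy the fixed-point equation (5): M = A + B e^{-M tau}.
F = mat_add(A, mat_mul(B, expm(mat_scale(M, -tau))))
residual = max(abs(M[i][j] - F[i][j]) for i in range(2) for j in range(2))
assert residual < 1e-10
```

Since the iteration is a contraction with constant $q < 1$ (Theorem 2 below), fifty iterations leave a residual far below the Taylor-truncation error.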

The investigation of differential equations with small delays has received much attention. Some results which are related to our study are discussed in the last section of the paper.

#### **2. Main Results**

In this section, we formulate and prove our main results which were indicated in the Introduction.

#### *2.1. Solution of the Matrix Equation and Its Approximation*

First we prove the existence and uniqueness of the solution of the matrix Equation (5) satisfying (6).

**Theorem 1.** *Suppose* (3) *holds. Then Equation* (5) *has a unique solution* $M \in \mathbb{C}^{n\times n}$ *such that* (6) *holds.*

Before we present the proof of Theorem 1, we establish some lemmas.

**Lemma 1.** *Let* $P, Q \in \mathbb{C}^{n\times n}$ *and* $\gamma = \max\{\|P\|, \|Q\|\}$*. Then*

$$\|P^k - Q^k\| \le k\gamma^{k-1} \|P - Q\| \qquad\text{for } k = 1, 2, \dots \tag{9}$$

**Proof.** We will prove by induction on *k* that

$$P^k - Q^k = \sum\_{j=0}^{k-1} P^j (P - Q) Q^{k-1-j} \tag{10}$$

for *k* = 1, 2, ... . Evidently, (10) holds for *k* = 1. Suppose for induction that (10) holds for some positive integer *k*. Then

$$\begin{aligned} P^{k+1} - Q^{k+1} &= P^k (P - Q) + (P^k - Q^k) Q \\ &= P^k (P - Q) + \left( \sum\_{j=0}^{k-1} P^j (P - Q) Q^{k-1-j} \right) Q = \sum\_{j=0}^k P^j (P - Q) Q^{k-j}. \end{aligned}$$

Thus, (10) holds for all *k*. From (10), we find that

$$\|P^k - Q^k\| \le \sum\_{j=0}^{k-1} \|P\|^j \|P - Q\| \|Q\|^{k-1-j} \le \|P - Q\| \sum\_{j=0}^{k-1} \gamma^j \gamma^{k-1-j} = k\gamma^{k-1} \|P - Q\|$$

for *k* = 1, 2, . . . .

Using Lemma 1, we can prove the following result about the distance of two matrix exponentials.

**Lemma 2.** *Let* $P, Q \in \mathbb{C}^{n\times n}$ *and* $\gamma = \max\{\|P\|, \|Q\|\}$*. Then*

$$\|e^{P} - e^{Q}\| \le e^{\gamma} \|P - Q\|. \tag{11}$$

**Proof.** By the definition of the matrix exponential, we have

$$e^{P} - e^{Q} = \sum\_{k=0}^{\infty} \frac{P^{k}}{k!} - \sum\_{k=0}^{\infty} \frac{Q^{k}}{k!} = \sum\_{k=1}^{\infty} \frac{P^{k} - Q^{k}}{k!}.$$

From this, by the application of Lemma 1, we find that

$$\|e^{P} - e^{Q}\| \le \sum\_{k=1}^{\infty} \frac{\|P^{k} - Q^{k}\|}{k!} \le \|P - Q\| \sum\_{k=1}^{\infty} \frac{k\gamma^{k-1}}{k!} = \|P - Q\| \sum\_{k=1}^{\infty} \frac{\gamma^{k-1}}{(k-1)!} = e^{\gamma} \|P - Q\|,$$

which proves (11).

We will also need some properties of the scalar equation

$$
\lambda = a + b e^{\lambda \tau}. \tag{12}
$$

**Lemma 3.** *Let a* ∈ [0, ∞)*, b,τ* ∈ (0, ∞) *and suppose that*

$$b\tau e^{1+a\tau} < 1.\tag{13}$$

*If we let* $\lambda\_0 = -\tau^{-1}\ln(b\tau)$*, then* $\lambda\_0 > 0$ *and Equation* (12) *has a unique root* $\lambda\_1 \in (0, \lambda\_0)$*. Moreover,*

$$a + b e^{\lambda \tau} < \lambda \qquad \text{for } \lambda \in (\lambda\_1, \lambda\_0] \tag{14}$$

*and*

$$b\tau e^{\lambda \tau} < 1 \qquad \text{for } \lambda < \lambda\_0. \tag{15}$$

**Proof.** By virtue of (13), we have $b\tau < e^{-1-a\tau} < 1$, which implies that $\ln(b\tau) < 0$ and hence $\lambda\_0 > 0$. Define

$$f(\lambda) = \lambda - a - b e^{\lambda \tau} \qquad \text{for } \lambda \in \mathbb{R}.$$

We have

$$f'(\lambda) = 1 - b\tau e^{\lambda \tau} \qquad \text{and} \qquad f''(\lambda) = -b\tau^2 e^{\lambda \tau} \qquad \text{for } \lambda \in \mathbb{R}.$$

It is easily seen that $f'(\lambda) = 0$ if and only if $\lambda = -\tau^{-1}\ln(b\tau) = \lambda\_0$. Furthermore, (13) is equivalent to $f(\lambda\_0) = -\tau^{-1}\ln(b\tau) - a - \tau^{-1} > 0$. Since $f''(\lambda) < 0$ for $\lambda \in \mathbb{R}$, $f'$ strictly decreases on $\mathbb{R}$. In particular, $f'(\lambda) > f'(\lambda\_0) = 0$ for $\lambda < \lambda\_0$. Therefore, (15) holds and $f$ strictly increases on $(-\infty, \lambda\_0]$. This, together with $f(0) < 0$ and $f(\lambda\_0) > 0$, implies that $f$, and hence Equation (12), has a unique root $\lambda\_1 \in (0, \lambda\_0)$. Since $f$ strictly increases on $[\lambda\_1, \lambda\_0]$, we have $f(\lambda) > f(\lambda\_1) = 0$ for $\lambda \in (\lambda\_1, \lambda\_0]$. Thus, (14) holds.
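Lemma 3 also gives a practical way to compute $\lambda\_1$: $f$ is negative at $0$, positive at $\lambda\_0$, and increasing in between, so bisection applies. A minimal sketch (ours; the sample values $a = 1$, $b = 0.1$, $\tau = 0.1$ are arbitrary and satisfy (13)):

```python
import math

def dominant_root(a, b, tau):
    """Unique root lambda_1 in (0, lambda_0) of lambda = a + b*e^(lambda*tau),
    under condition (13): b*tau*e^(1 + a*tau) < 1, with a >= 0 and b, tau > 0."""
    assert b * tau * math.exp(1 + a * tau) < 1, "condition (13) fails"
    f = lambda lam: lam - a - b * math.exp(lam * tau)
    lo, hi = 0.0, -math.log(b * tau) / tau   # f(lo) < 0 < f(hi), f increasing
    for _ in range(200):                      # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam1 = dominant_root(1.0, 0.1, 0.1)
# lam1 solves the fixed-point equation (12) up to rounding.
assert abs(lam1 - 1.0 - 0.1 * math.exp(lam1 * 0.1)) < 1e-9
```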

Now we can give a proof of Theorem 1.

**Proof of Theorem 1.** By Lemma 3, if (3) holds, then the equation

$$\mu = \|A\| + \|B\|e^{\mu \tau} \tag{16}$$

has a unique solution *μ*<sup>1</sup> ∈ (0, *μ*0), where *μ*<sup>0</sup> is given by (6). Moreover,

$$\|A\| + \|B\| e^{\mu \tau} < \mu \qquad \text{for } \mu \in (\mu\_1, \mu\_0] \tag{17}$$

and

$$\|B\|\tau e^{\mu \tau} < 1 \qquad \text{for } \mu < \mu\_0. \tag{18}$$

Let *μ* ∈ [*μ*1, *μ*0) be fixed. Define

$$F(M) = A + Be^{-M\tau} \qquad \text{for } M \in \mathbb{C}^{n \times n} \tag{19}$$

and

$$S = \{ M \in \mathbb{C}^{n \times n} \mid \|M\| \le \mu \}. \tag{20}$$

Clearly, $S$ is a nonempty and closed subset of $\mathbb{C}^{n\times n}$. By virtue of (17), we have, for $M \in S$,

$$\|F(M)\| \le \|A\| + \|B\| e^{\|M\|\tau} \le \|A\| + \|B\| e^{\mu \tau} \le \mu. \tag{21}$$

Thus, *F* maps *S* into itself. Let *M*1, *M*<sup>2</sup> ∈ *S*. By the application of Lemma 2, we obtain

$$\|F(M\_1) - F(M\_2)\| = \|B(e^{-M\_1\tau} - e^{-M\_2\tau})\| \le \|B\| \|e^{-M\_1\tau} - e^{-M\_2\tau}\| \le \|B\|\tau e^{\mu\tau}\|M\_1 - M\_2\|.$$

In view of (18), *F* : *S* → *S* is a contraction and hence there exists a unique *M* ∈ *S* such that *M* = *F*(*M*). Since *μ* ∈ [*μ*1, *μ*0) was arbitrary, this completes the proof.

In the next theorem, we show that the unique solution of Equation (5) satisfying (6) can be written as a limit of successive approximations *Mk* defined by (8) and we give an estimate for the approximation error.

**Theorem 2.** *Suppose* (3) *holds and let* $M \in \mathbb{C}^{n\times n}$ *be the solution of Equation* (5) *satisfying* (6)*. If* $\{M\_k\}\_{k=0}^{\infty}$ *is the sequence of matrices defined by* (8)*, then*

$$\|M\_k\| \le \mu\_1 \qquad \text{for } k = 0, 1, 2, \dots, \tag{22}$$

*and*

$$||M - M\_k|| \le \mu\_1 q^k \qquad \text{for } k = 0, 1, 2, \dots, \tag{23}$$

*where* $\mu\_1$ *is the unique root of Equation* (16) *in the interval* $(0, \mu\_0)$ *and* $q = \|B\|\tau e^{\mu\_1\tau} < 1$ *(see* (18)*).*

**Proof.** Note that $M\_{k+1} = F(M\_k)$ for $k = 0, 1, 2, \dots$, where $F$ is defined by Equation (19). Taking $\mu = \mu\_1$ in the proof of Theorem 1, we find that $\|M\| \le \mu\_1$. Moreover, from (20) and (21), we obtain that $\|M\_k\| \le \mu\_1$ for $k = 0, 1, 2, \dots$. From this and Equations (5) and (8), by the application of Lemma 2, we obtain, for $k \ge 0$,

$$\|M - M\_{k+1}\| = \|B(e^{-M\tau} - e^{-M\_k\tau})\| \le \|B\| \|e^{-M\tau} - e^{-M\_k\tau}\| \le \|B\|\tau e^{\mu\_1 \tau} \|M - M\_k\| = q \|M - M\_k\|.$$

From the last inequality, it follows by easy induction on *k* that

$$||M - M\_k|| \le q^k ||M - M\_0|| = q^k ||M|| \le q^k \mu\_1$$

for *k* = 0, 1, 2, . . . .

#### *2.2. Dominant Eigenvalues and Eigensolutions*

Let us summarize some facts from the theory of linear autonomous delay differential equations (see [1,2]). By an *eigenvalue* of Equation (1), we mean an eigenvalue of the generator of the solution semigroup (see [1,2] for details). It is known that $\lambda \in \mathbb{C}$ is an eigenvalue of Equation (1) if and only if $\lambda$ is a root of the characteristic equation (2). Moreover, for every $\beta \in \mathbb{R}$, Equation (1) has only finitely many eigenvalues with $\operatorname{Re}\lambda > \beta$. By an *entire solution* of Equation (1), we mean a differentiable function $x\colon (-\infty, \infty) \to \mathbb{C}^n$ satisfying Equation (1) for all $t \in (-\infty, \infty)$. To each eigenvalue $\lambda$ of Equation (1), there correspond nontrivial entire solutions of the form $p(t)e^{\lambda t}$, $t \in (-\infty, \infty)$, where $p(t)$ is a $\mathbb{C}^n$-valued polynomial in $t$. Such solutions are sometimes called *eigensolutions* corresponding to $\lambda$.

The following theorem shows that, under the smallness condition (3), the eigenvalues of Equation (1) with $\operatorname{Re}\lambda > -\mu\_0$ coincide with the eigenvalues of the matrix $M$ from Theorem 1, and the corresponding eigensolutions satisfy the ordinary differential Equation (4).

**Theorem 3.** *Suppose* (3) *holds so that* $\mu\_0 = -\tau^{-1}\ln(\|B\|\tau) > 0$*, and define*

$$\Lambda = \{ \lambda \in \mathbb{C} \mid \det \Delta(\lambda) = 0, \ \operatorname{Re} \lambda > -\mu\_0 \}.$$

*Let* $M \in \mathbb{C}^{n\times n}$ *be the unique solution of Equation* (5) *satisfying* (6)*. Then* $\Lambda = \sigma(M)$*, where* $\sigma(M)$ *denotes the set of eigenvalues of* $M$*. Moreover, for every* $\lambda \in \Lambda$*, Equations* (1) *and* (4) *have the same eigensolutions corresponding to* $\lambda$*.*

In the sequel, the eigenvalues of Equation (1) with Re *λ* > −*μ*<sup>0</sup> will be called *dominant*.

As a preparation for the proof of Theorem 3, we establish three lemmas. First we show that if *M* is a solution of the matrix Equation (5), then every solution of the ordinary differential Equation (4) is an entire solution of the delay differential Equation (1).

**Lemma 4.** *Let* $M \in \mathbb{C}^{n\times n}$ *be a solution of Equation* (5)*. Then, for every* $v \in \mathbb{C}^n$*,* $x(t) = e^{Mt}v$*,* $t \in (-\infty, \infty)$*, is an entire solution of Equation* (1)*.*

**Proof.** Since $e^{P}e^{Q} = e^{P+Q}$ whenever $P, Q \in \mathbb{C}^{n\times n}$ commute, from Equation (5), we find that

$$\dot{x}(t) = Me^{Mt}v = (A + Be^{-M\tau})e^{Mt}v = Ae^{Mt}v + Be^{-M\tau}e^{Mt}v = Ax(t) + Be^{M(t-\tau)}v = Ax(t) + Bx(t-\tau)$$

for $t \in (-\infty, \infty)$. $\Box$

In the following lemma, we prove the uniqueness of entire solutions of the delay differential Equation (1) with an appropriate exponential growth as *t* → −∞.

**Lemma 5.** *Suppose* (3) *holds. If x*<sup>1</sup> *and x*<sup>2</sup> *are entire solutions of Equation* (1) *with x*1(0) = *x*2(0) *and such that*

$$\sup\_{t \le 0} \|x\_j(t)\| e^{\mu\_0 t} < \infty, \qquad j = 1, 2,\tag{24}$$

*with μ*<sup>0</sup> *as in* (6)*, then x*<sup>1</sup> = *x*<sup>2</sup> *identically on* (−∞, ∞)*.*

**Proof.** Define

$$\mathcal{C} = \sup\_{t \le 0} \|\mathbf{x}\_1(t) - \mathbf{x}\_2(t)\| e^{\mu\_0 t}.$$

By virtue of (24), we have that 0 ≤ *C* < ∞. From Equation (1), we find for *t* ≤ 0,

$$\mathbf{x}\_{j}(t) = \mathbf{x}\_{j}(0) - A \int\_{t}^{0} \mathbf{x}\_{j}(s) \, ds - B \int\_{t}^{0} \mathbf{x}\_{j}(s - \tau) \, ds, \qquad j = 1, 2.$$

From this, taking into account that *x*1(0) = *x*2(0), we obtain for *t* ≤ 0,

$$\begin{split} \|x\_{1}(t) - x\_{2}(t)\| &\le \|A\| \int\_{t}^{0} \|x\_{1}(s) - x\_{2}(s)\|\, ds + \|B\| \int\_{t}^{0} \|x\_{1}(s - \tau) - x\_{2}(s - \tau)\| \, ds \\ &\le \|A\| C \int\_{t}^{0} e^{-\mu\_{0}s} \, ds + \|B\| C \int\_{t}^{0} e^{-\mu\_{0}(s - \tau)} \, ds \\ &= C(\|A\| + \|B\| e^{\mu\_{0}\tau}) \int\_{t}^{0} e^{-\mu\_{0}s} \, ds \le C \frac{\|A\| + \|B\| e^{\mu\_{0}\tau}}{\mu\_{0}} e^{-\mu\_{0}t}. \end{split}$$

The last inequality implies for *t* ≤ 0,

$$\|x\_1(t) - x\_2(t)\|e^{\mu\_0 t} \le C \frac{\|A\| + \|B\| e^{\mu\_0 \tau}}{\mu\_0}.$$

Hence *C* ≤ *κC*, where

$$\kappa = \frac{||A|| + ||B||e^{\mu\_0 \tau}}{\mu\_0}.$$

By virtue of (17), we have that *κ* < 1. Hence *C* = 0 and *x*1(*t*) = *x*2(*t*) for *t* ≤ 0. The uniqueness theorem ([1] Chapter 2, Theorem 2.3) implies that *x*1(*t*) = *x*2(*t*) for all *t* ∈ (−∞, ∞).

Now we show that those entire solutions of Equation (1) which satisfy the growth condition

$$\sup\_{t \le 0} \|x(t)\| e^{\mu\_0 t} < \infty \qquad \text{with } \mu\_0 \text{ as in (6)}\tag{25}$$

coincide with the solutions of the ordinary differential Equation (4).

**Lemma 6.** *Suppose* (3) *holds. Then, for every* $v \in \mathbb{C}^n$*, Equation* (1) *has exactly one entire solution* $x$ *with* $x(0) = v$ *and satisfying* (25)*, given by*

$$x(t) = e^{Mt}v \qquad \text{for } t \in ( -\infty, \infty), \tag{26}$$

*where M* <sup>∈</sup> <sup>C</sup>*n*×*<sup>n</sup> is the solution of Equation* (5) *with property* (6)*.*

**Proof.** By Lemma 4, *x* defined by Equation (26) is an entire solution of Equation (1). Moreover, from Equations (6) and (26), we find for *t* ≤ 0,

$$\|x(t)\| \le e^{\|M\| |t|} \|v\| \le e^{\mu\_0 |t|} \|v\| = e^{-\mu\_0 t} \|v\|.$$

Hence $\sup\_{t \le 0} \|x(t)\|e^{\mu\_0 t} \le \|v\| < \infty$. Thus, $x$ given by Equation (26) is an entire solution of Equation (1) with $x(0) = v$ and satisfying (25). The uniqueness follows from Lemma 5.

Now we can give a proof of Theorem 3.

**Proof of Theorem 3.** Suppose that $\lambda \in \Lambda$. Since $\det\Delta(\lambda) = 0$, there exists a nonzero vector $v \in \mathbb{C}^n$ such that $\Delta(\lambda)v = 0$, and hence $x(t) = e^{\lambda t}v$, $t \in (-\infty, \infty)$, is an entire solution of Equation (1). Since $\operatorname{Re}\lambda > -\mu\_0$, we have, for $t \le 0$,

$$\|x(t)\| = |e^{\lambda t}| \|v\| = e^{t \operatorname{Re} \lambda} \|v\| \le e^{-\mu\_0 t} \|v\|,$$

which implies (25). Thus, $x(t) = e^{\lambda t}v$ is an entire solution of (1) with $x(0) = v$ and satisfying (25). By Lemma 6, we have that $e^{\lambda t}v = e^{Mt}v$ for $t \in (-\infty, \infty)$. Hence

$$\frac{e^{\lambda t} - 1}{t}v = \frac{e^{Mt} - I}{t}v \qquad \text{for } t \in \mathbb{R} \setminus \{0\}.$$

Letting *t* → 0, we obtain *λv* = *Mv*. This proves that Λ ⊂ *σ*(*M*).

Now suppose that $\lambda \in \sigma(M)$. Then there exists a nonzero vector $v \in \mathbb{C}^n$ such that $Mv = \lambda v$. According to Lemma 4, $x(t) = e^{Mt}v = e^{\lambda t}v$ is an entire solution of Equation (1). Hence $\Delta(\lambda)v = 0$, which implies that $\det\Delta(\lambda) = 0$. In order to prove that $\lambda \in \Lambda$, it remains to show that $\operatorname{Re}\lambda > -\mu\_0$. It is well-known that $\rho(M) \le \|M\|$, where $\rho(M) = \sup\_{\lambda\in\sigma(M)} |\lambda|$ is the spectral radius of $M$. This, together with (6), yields

$$|\operatorname{Re}\lambda| \le |\lambda| \le \rho(M) \le ||M|| < \mu\_0.$$

Therefore Re *λ* > −*μ*<sup>0</sup> which proves that *σ*(*M*) ⊂ Λ.

Let $\lambda \in \Lambda = \sigma(M)$. By Lemma 4, every eigensolution of the ordinary differential equation (4) corresponding to $\lambda$ is an eigensolution of the delay differential equation (1). Now suppose that $x$ is an eigensolution of the delay differential equation (1) corresponding to $\lambda$. Then $x(t) = p(t)e^{\lambda t}$, where $p(t)$ is a $\mathbb{C}^n$-valued polynomial in $t$. If $m$ is the order of the polynomial $p$, then there exists $K > 0$ such that

$$||p(t)|| \le K(1+|t|^m) \qquad \text{for } t \in ( -\infty, \infty).$$

Since $\operatorname{Re}\lambda > -\mu\_0$, we have that $c := \operatorname{Re}\lambda + \mu\_0 > 0$. From this, we find, for $t \le 0$,

$$\|x(t)\| = \|p(t)\| |e^{\lambda t}| = \|p(t)\| e^{t\operatorname{Re}\lambda} \le K(1+|t|^m)e^{t\operatorname{Re}\lambda} = K(1+|t|^m)e^{ct}e^{-\mu\_0 t}.$$

Hence

$$\|x(t)\|e^{\mu\_{0}t} \le K(1+|t|^{m})e^{ct} \longrightarrow 0 \qquad \text{as } t \to -\infty.$$

Thus, *x* is an entire solution of Equation (1) satisfying the growth condition (25). By Lemma 6, *x* is a solution of the ordinary differential equation (4).

#### *2.3. Asymptotic Equivalence*

The following result from the monograph by Diekmann et al. [2] gives an asymptotic description of the solutions of Equation (1) in terms of the eigensolutions.

**Proposition 1.** ([2] Chapter I, Theorem 5.4) *Let* $x\colon [-\tau, \infty) \to \mathbb{C}^n$ *be a solution of Equation* (1) *corresponding to some continuous initial function* $\varphi\colon [-\tau, 0] \to \mathbb{C}^n$*. For any* $\gamma \in \mathbb{R}$ *such that* $\det\Delta(\lambda) = 0$ *has no roots on the vertical line* $\operatorname{Re}\lambda = \gamma$*, we have the asymptotic expansion*

$$\mathbf{x}(t) = \sum\_{j=1}^{l} p\_j(t)e^{\lambda\_j t} + o(e^{\gamma t}) \qquad \text{as } t \to \infty,\tag{27}$$

*where* $\lambda\_1, \lambda\_2, \dots, \lambda\_l$ *are the finitely many roots of the characteristic equation* (2) *with real part greater than* $\gamma$*, and* $p\_j(t)$ *are* $\mathbb{C}^n$*-valued polynomials in* $t$ *of order less than the multiplicity of* $\lambda\_j$ *as a zero of* $\det\Delta(\lambda)$*.*

Now we can formulate our main result about the asymptotic equivalence of Equations (1) and (4).

**Theorem 4.** *Suppose that* (3) *holds so that* $\mu\_0 = -\tau^{-1}\ln(\|B\|\tau) > 0$*. Let* $M \in \mathbb{C}^{n\times n}$ *be the solution of Equation* (5) *satisfying* (6)*. Then the following statements are valid.*

*(i) Every solution of the ordinary differential equation* (4) *is an entire solution of the delay differential equation* (1)*.*

*(ii) For every solution* $x\colon [-\tau, \infty) \to \mathbb{C}^n$ *of the delay differential equation* (1) *corresponding to some continuous initial function* $\varphi\colon [-\tau, 0] \to \mathbb{C}^n$*, there exists a solution* $\tilde{x}$ *of the ordinary differential equation* (4) *such that*

$$x(t) = \tilde{x}(t) + o(e^{-\mu\_0 t}) \qquad \text{as } t \to \infty. \tag{28}$$

**Proof.** Conclusion (i) follows from Lemma 4. We shall prove conclusion (ii) by applying Proposition 1 with $\gamma = -\mu\_0$. We need to verify that Equation (2) has no root on the vertical line $\operatorname{Re}\lambda = -\mu\_0$. Suppose for contradiction that there exists $\lambda \in \mathbb{C}$ such that $\det\Delta(\lambda) = 0$ and $\operatorname{Re}\lambda = -\mu\_0$. Then there exists a nonzero vector $v \in \mathbb{C}^n$ such that $\Delta(\lambda)v = 0$ and hence $\lambda v = Av + Be^{-\lambda\tau}v$. From this, we find that

$$\begin{aligned} |\lambda| \|v\| &\le \|A\| \|v\| + \|B\| \|e^{-\lambda \tau} v\| = \|A\| \|v\| + \|B\| |e^{-\lambda \tau}| \|v\| \\ &= (\|A\| + \|B\| e^{-\tau \operatorname{Re} \lambda}) \|v\| = (\|A\| + \|B\| e^{\mu\_0 \tau}) \|v\|. \end{aligned}$$

Hence $|\lambda| \le \|A\| + \|B\| e^{\mu\_0\tau}$, which, together with (17), yields

$$\mu\_0 = |\operatorname{Re}\lambda| \le |\lambda| \le \|A\| + \|B\|e^{\mu\_0 \tau} < \mu\_0,$$

a contradiction. Thus, we can apply Proposition 1 with *γ* = −*μ*0, which implies that the asymptotic relation (28) holds with

$$\tilde{x}(t) = \sum\_{j=1}^{l} p\_j(t) e^{\lambda\_j t}, \tag{29}$$

where $\lambda\_1, \lambda\_2, \dots, \lambda\_l$ are those eigenvalues of Equation (1) which have real part greater than $-\mu\_0$ and $p\_j(t)$ are $\mathbb{C}^n$-valued polynomials in $t$. According to Theorem 3, the eigensolutions of Equation (1) corresponding to eigenvalues with real part greater than $-\mu\_0$ are solutions of the ordinary differential equation (4). Hence $\tilde{x}$ given by Equation (29) is a solution of Equation (4).

#### *2.4. Approximation of the Dominant Eigenvalues*

We will need the following result about the distance of the eigenvalues of two matrices in terms of the norm of their difference due to Bhatia, Elsner and Krause [3].

**Proposition 2.** ([3] Theorem 3) *Let* $P, Q \in \mathbb{C}^{n\times n}$ *and* $\gamma = \max\{\|P\|, \|Q\|\}$*. Then the eigenvalues of* $P$ *and* $Q$ *can be enumerated as* $\lambda\_1, \dots, \lambda\_n$ *and* $\mu\_1, \dots, \mu\_n$ *in such a way that*

$$\max\_{1 \le j \le n} |\lambda\_j - \mu\_j| \le 4 \cdot 2^{-1/n} n^{1/n} (2\gamma)^{1 - 1/n} ||P - Q||^{1/n}.\tag{30}$$

Recall that the dominant eigenvalues of Equation (1) are those roots of Equation (2) which have real part greater than $-\mu\_0$. According to Theorem 3, if (3) holds, then the dominant eigenvalues of Equation (1) coincide with the eigenvalues of $M$, the unique solution of Equation (5) satisfying (6). By Theorem 2, $M$ can be approximated by the sequence of matrices $\{M\_k\}\_{k=0}^{\infty}$ defined by (8). As a consequence, the dominant eigenvalues of the delay differential equation (1) can be approximated by the eigenvalues of $M\_k$. The explicit estimate (23) for $\|M - M\_k\|$, combined with Proposition 2, yields the following result.

**Theorem 5.** *Suppose* (3) *holds so that the dominant eigenvalues of Equation* (1) *coincide with the eigenvalues* $\lambda\_1, \dots, \lambda\_n$ *of the matrix* $M$ *from Theorem 1 (see Theorem 3). If* $\{M\_k\}\_{k=0}^{\infty}$ *is the sequence of matrices defined by* (8)*, then the eigenvalues* $\lambda\_1^{[k]}, \dots, \lambda\_n^{[k]}$ *of* $M\_k$ *can be renumbered such that*

$$\max\_{1 \le j \le n} |\lambda\_j - \lambda\_j^{[k]}| \le 8 \cdot 4^{-1/n} n^{1/n} \mu\_1 q^{k/n}, \tag{31}$$

*where μ*<sup>1</sup> *and q have the meaning from Theorem 2.*

Since *q* < 1, the explicit error estimate (31) in Theorem 5 shows that under the smallness condition (3) the eigenvalues of *Mk* converge to the dominant eigenvalues of the delay differential equation (1) at an exponential rate as *k* → ∞.
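In the scalar case $n = 1$, all of this can be checked directly: the iteration (8) reduces to $m\_{k+1} = a + be^{-m\_k\tau}$, its limit satisfies the characteristic equation $\lambda = a + be^{-\lambda\tau}$, and the error obeys the bound (23). A sketch with our own sample data ($a = 0$, $b = -1$, $\tau = 0.2$, for which condition (3) reads $0.2e < 1$):

```python
import math

a, b, tau = 0.0, -1.0, 0.2            # x'(t) = -x(t - 0.2)
normA, normB = abs(a), abs(b)
assert normB * tau * math.exp(1 + normA * tau) < 1   # condition (3)

# mu_1: unique root of mu = ||A|| + ||B|| e^(mu tau) in (0, mu_0), by bisection.
mu0 = -math.log(normB * tau) / tau
g = lambda mu: mu - normA - normB * math.exp(mu * tau)
lo, hi = 0.0, mu0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
mu1 = 0.5 * (lo + hi)
q = normB * tau * math.exp(mu1 * tau)  # contraction constant from Theorem 2
assert q < 1

# Successive approximations (8) and their limit m.
ms = [0.0]
for _ in range(40):
    ms.append(a + b * math.exp(-ms[-1] * tau))
m = ms[-1]

# The limit is the dominant characteristic root: m = a + b e^{-m tau}.
assert abs(m - a - b * math.exp(-m * tau)) < 1e-12

# Error estimate (23): |m - m_k| <= mu_1 * q^k (small float slack added).
for k, mk in enumerate(ms[:20]):
    assert abs(m - mk) <= mu1 * q**k + 1e-9
```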

#### **3. Discussion**

Let us briefly mention some results which are relevant to our study. For a class of linear differential equations with small delay, Ryabov [4] introduced a family of special solutions and showed that every solution is asymptotic to some special solution as *t* → ∞. Ryabov's result was improved by Driver [5], Jarník and Kurzweil [6]. A more precise asymptotic description was given in [7]. For further related results on asymptotic integration and stability of linear differential equations with small delays, see [8] and [9]. Some improvements and a generalization to functional differential equations in Banach spaces were given by Faria and Huang [10]. Inertial and slow manifolds for differential equations with small delays were studied by Chicone [11]. Results on minimal sets of a skew-product semiflow generated by scalar differential equations with small delay can be found in the work of Alonso, Obaya and Sanz [12]. Smith and Thieme [13] showed that nonlinear autonomous differential equations with small delay generate a monotone semiflow with respect to the exponential ordering and the monotonicity has important dynamical consequences. For the effects of small delays on the stability and control, see the paper by Hale and Verduyn Lunel [14].

The results in the above listed papers show that if the delay is small, then there are similarities between the delay differential equation and an associated ordinary differential equation. The description of the associated ordinary differential equation in general requires the knowledge of certain special solutions. Since in most cases the special solutions are not known, the above results are mainly of theoretical interest. In the present paper, in the simple case of linear autonomous differential equations with small delay, we have described the coefficient matrix of the associated ordinary differential equation. Moreover, we have shown that the coefficient matrix can be approximated by a sequence of matrices defined recursively which yields an effective method for the approximation of the dominant eigenvalues.

**Author Contributions:** All authors contributed equally to this research and to writing the paper.

**Funding:** This research was funded by the Hungarian National Research, Development and Innovation Office grant no. K120186 and Széchenyi 2020 under the EFOP-3.6.1-16-2016-00015.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Around the Model of Infection Disease: The Cauchy Matrix and Its Properties**

#### **Alexander Domoshnitsky \*, Irina Volinsky \* and Marina Bershadsky**

Department of Mathematics, Ariel University, 40700 Ariel, Israel

**\*** Correspondence: adom@ariel.ac.il (A.D.); irinav@ariel.ac.il (I.V.); Tel.: +972547740208 (A.D.); +972526893328 (I.V.)

Received: 27 June 2019; Accepted: 31 July 2019; Published: 6 August 2019

**Abstract:** In this paper, the Marchuk model of infectious diseases is considered. Mathematical questions which are important in its study are discussed, among them the stability of stationary points, the construction of the Cauchy matrices of linearized models, and estimates of solutions. The novelty we propose is a distributed feedback control which affects the antibody concentration. We use this control in the form of an integral term and arrive at the analysis of nonlinear integro-differential systems. New methods for the study of the stability of linearized integro-differential systems describing the model of infectious diseases are proposed. Explicit conditions for the exponential stability of the stationary points characterizing the state of the healthy body are obtained. The method of the paper is based on symmetry properties of the Cauchy matrices, which allow us to construct them.

**Keywords:** integro–differential systems; Cauchy matrix; exponential stability; distributed control

#### **1. Introduction**

In this paper we consider the Marchuk model of infectious diseases

$$\begin{cases} \frac{dV}{dt} = \beta V(t) - \gamma F(t)V(t) \\ \frac{dC}{dt} = \zeta(m(t))\,\alpha F(t)V(t) - \mu_c\left(C(t) - C^*\right) \\ \frac{dF}{dt} = \rho C(t) - \eta\gamma F(t)V(t) - \mu_f F(t) \\ \frac{dm}{dt} = \sigma V(t) - \mu_m m(t) \end{cases} \tag{1}$$

proposed in the book [1].

Here *t* is time, *V*(*t*) is the antigen concentration rate, *C*(*t*) is the plasma cell concentration rate, *F*(*t*) is the antibody concentration rate, and *m*(*t*) characterizes the relative damage of the affected organ, with *m* = 0 for the healthy body; *ζ*(*m*) takes into account the destruction of the normal functioning of the immune system, with *ζ*(0) = 1. The quantities *α*, *β*, *γ*, *ρ*, *η*, *σ*, *μ<sub>f</sub>*, *μ<sub>m</sub>*, *μ<sub>c</sub>*, *C*<sup>∗</sup> are coefficients obtained from laboratory experiments. Let us note the biological meaning of the coefficients: *β* describes the antigen activity; *γ* is the antigen neutralizing factor; *α* is the stimulation factor of the immune system; *ρ* is the rate of production of antibodies by one plasma cell; *μ<sub>f</sub>* is inversely proportional to the decay time of the antibodies; *μ<sub>m</sub>* is inversely proportional to the organ recovery time, i.e., it characterizes the rate of regeneration of the target organ; *μ<sub>c</sub>* is the coefficient of reduction of plasma cells due to ageing (inversely proportional to their lifetime); *σ* is a constant related to the particular disease; and *C*<sup>∗</sup> is the plasma cell concentration of the healthy body. Let us now describe the structure of model (1). The first equation presents the block of the virus dynamics: it describes the changes in the antigen concentration rate and includes the amount of the antigen in the blood. The antigen concentration decreases as a result of the interaction with the antibodies. The immune process is characterized by the antibodies, whose concentration changes with time (destruction rate) and is described by the third equation. The amount of the antibody cells decreases as a result of interaction with the antigen and also as a result of

the natural destruction. However, the plasma cells restore the antibodies, and therefore the plasma state plays an important role in the immune process. Thus, the change in the concentration rate of the plasma cells is included in several differential equations describing this model. Taking into account the healthy body level of plasma cells and their natural ageing, the term *μ<sub>c</sub>* (*C*(*t*) − *C*<sup>∗</sup>) is included in the second equation of system (1). The second and third equations present the immune response dynamics. Concerning the last equation of system (1), the following can be noted: (1) the value of *m* increases with the antigen concentration rate *V*(*t*); (2) *m* ranges from zero for a fully healthy organ to its maximum value of one in the case of 100% organ damage.

This model has been studied in many works; see, for example, the recent papers [2–6] and the bibliography therein. Adding a control to stabilize the system in a neighborhood of a stationary point was proposed, for example, in [5–8]. In the works [4,9,10], a basic mathematical model taking into account concentrated control of the immune response was proposed.

Let us discuss the motivation and novelty of our approach. In constructing every model, the influence of various additional factors that seemed nonessential is neglected. The effect of replacing nonlinear terms by their linearization in a neighborhood of a stationary solution is also neglected. Even within the frame of the linearized model, only approximate values of the coefficients are used instead of exact ones, and changes of these coefficients with respect to time are usually not taken into account. It is therefore important to estimate the influence of all these factors.

To do this, we have to obtain estimates of the elements of the Cauchy matrix of the corresponding linearized (in a neighborhood of a stationary point) system. Consider the system

$$x'(t) = P(t)x(t) + G(t),$$

where *P*(*t*) is an (*n* × *n*)-matrix and *G*(*t*) is an *n*-vector. Its general solution *x*(*t*) = *col*{*x*<sub>1</sub>(*t*), ..., *x<sub>n</sub>*(*t*)} can be represented in the form (see, for example, [11])

$$x(t) = \int_0^t C(t,s)G(s)\,ds + C(t,0)x(0),$$

where the *n* × *n*-matrix *C*(*t*,*s*) is called the Cauchy matrix. Its *j*-th column (*j* = 1, ..., *n*), for every fixed *s*, as a function of *t*, is a solution of the corresponding homogeneous system

$$\mathbf{x}'(t) = P(t)\mathbf{x}(t),$$

satisfying the initial conditions *xi*(*s*) = *δij*, where

$$\delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \neq j, \end{cases} \qquad i,j = 1,\ldots,n,$$

(see, for example, [12]). This Cauchy matrix *C*(*t*,*s*) satisfies the following symmetry properties: *C*(*t*,*s*) = *X*(*t*)*X*<sup>−1</sup>(*s*), where *X*(*t*) is a fundamental matrix; *C*(*t*, 0) = *C*(*t*,*s*)*C*(*s*, 0); and, in the case of a constant matrix *P*(*t*) = *P*, *X*(*t* − *s*) = *C*(*t*,*s*) is a fundamental matrix for every *s* ≥ 0. This definition and these properties allow us to construct and estimate *C*(*t*,*s*).
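These properties are easy to check numerically. The following is an illustrative sketch (not from the paper), using a small constant matrix *P* chosen for the example, for which the Cauchy matrix is *C*(*t*,*s*) = *e*<sup>*P*(*t*−*s*)</sup>:

```python
import numpy as np
from scipy.linalg import expm

# A small constant-coefficient example: for x' = P x the Cauchy matrix is
# C(t, s) = exp(P (t - s)), and the symmetry properties can be verified.
P = np.array([[-1.0, 0.5],
              [0.0, -2.0]])

def cauchy(t, s):
    """Cauchy matrix C(t, s) = exp(P (t - s)) of the constant system x' = P x."""
    return expm(P * (t - s))

def fundamental(t):
    """Fundamental matrix X(t) normalized by X(0) = I."""
    return expm(P * t)

t, s = 1.5, 0.4
C_ts = cauchy(t, s)

# C(t, s) = X(t) X^{-1}(s)
assert np.allclose(C_ts, fundamental(t) @ np.linalg.inv(fundamental(s)))
# Semigroup (symmetry) property: C(t, 0) = C(t, s) C(s, 0)
assert np.allclose(cauchy(t, 0.0), C_ts @ cauchy(s, 0.0))
# C(s, s) = I, i.e., the columns satisfy the initial conditions x_i(s) = delta_ij
assert np.allclose(cauchy(s, s), np.eye(2))
```
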

It can be noted that using information about the behaviour of a disease and of the immune system over a long time (through a distributed control, for example, in the form of an integral term) looks very natural in choosing a strategy of a possible treatment. To achieve stabilization of the process in a neighborhood of a stationary solution, we add a distributed control in the third equation, describing the antibody concentration rate, in the form

$$u\left(t\right) = \int\_{0}^{t} \left(F\left(s\right) - F^\*\right) e^{-k\left(t-s\right)} ds.\tag{2}$$

Here *F*<sup>∗</sup> is the antibody concentration that we wish to achieve after the treatment. It can be noted that using a corresponding average value instead of the value of *F*(*t*) − *F*<sup>∗</sup> at the point *t* looks reasonable. The kernel in (2) increases the influence of the previous moments which are closer to the current moment *t*. Note that this control is a reasonable one from the medical point of view. We consider the corresponding integro-differential system and construct its Cauchy matrix. This allows us to estimate the influence of all the factors noted above on the behavior of solutions.

Note the use of distributed control in stabilization in the papers [13,14]. The goal of this paper is to demonstrate new possibilities of distributed control in the model of infectious diseases through the analysis of integro-differential systems. From the medical point of view, our results can be interpreted as follows: by supporting the immune system we transform the infectious disease into a stable state of an "almost healthy" body. After reaching this stable state, we do not stop the use of the corresponding medicine, allowing us to hold the antibody concentration rate at a higher level than in the normal conditions of a healthy body. At all these stages it is important to estimate the influence of many additional factors in order to hold the process in a corresponding zone; the solution leaving this zone can be dangerous for a patient. To give an instrument for these estimations is the main goal of this paper. We propose here a simple method of analysis and estimation based on a reduction of integro-differential systems to systems of ordinary differential equations.

Our paper consists of the following parts. In Section 2 we introduce the distributed control in the Marchuk model of infectious diseases and explain how the analysis of this model of the fourth order can be reduced to the analysis of a system of ordinary differential equations of the fifth order. In Section 3 the Cauchy matrix of the integro-differential system is constructed and the exponential stability of a stationary point is obtained. The case of an uncertain coefficient in the control is studied in Section 4, where results on the exponential stability are proposed. The influence of changes in the right-hand side on the behaviour of solutions is discussed in Section 5.

#### **2. Modified Model of Infectious Diseases**

Adding the control (2) to the right-hand side of the third equation of system (1), we come to the system of four equations

$$\begin{cases} \frac{dV}{dt} = \beta V(t) - \gamma F(t)V(t) \\ \frac{dC}{dt} = \zeta(m(t))\,\alpha F(t)V(t) - \mu_c\left(C(t) - C^*\right) \\ \frac{dF}{dt} = \rho C(t) - \eta\gamma F(t)V(t) - \mu_f F(t) - b\int_0^t \left(F(s) - F^*\right)e^{-k(t-s)}ds \\ \frac{dm}{dt} = \sigma V(t) - \mu_m m(t) \end{cases} \tag{3}$$

Let us consider the following system of five equations

$$\begin{cases} \frac{dV}{dt} = \beta V(t) - \gamma F(t)V(t) \\ \frac{dC}{dt} = \zeta(m(t))\,\alpha F(t)V(t) - \mu_c\left(C(t) - C^*\right) \\ \frac{dF}{dt} = \rho C(t) - \eta\gamma F(t)V(t) - \mu_f F(t) - bu(t) \\ \frac{dm}{dt} = \sigma V(t) - \mu_m m(t) \\ \frac{du}{dt} = F(t) - F^* - ku(t) \end{cases} \tag{4}$$

**Lemma 1.** *The solution-vector col*(*V*(*t*), *C*(*t*), *F*(*t*), *m*(*t*)) *of system (3) and the first four components of the solution-vector col*(*V*(*t*), *C*(*t*), *F*(*t*), *m*(*t*), *u*(*t*)) *of system (4), considered with the condition u*(0) = 0*, coincide.*

The proof of Lemma 1 follows from the representation of the general solution of the scalar linear equation $\frac{du}{dt} + ku(t) = F(t) - F^*$.

Note that a similar trick was used, for example, in papers [15,16].
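The reduction in Lemma 1 can be checked numerically. The sketch below is illustrative, with a hypothetical forcing *F*(*t*) − *F*<sup>∗</sup> = *e*<sup>−*t*</sup> and *k* = 2 (these values are not from the paper): the integral control (2) coincides with the solution of the auxiliary scalar ODE.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Check that u(t) = \int_0^t (F(s) - F*) e^{-k(t-s)} ds equals the solution
# of du/dt + k u = F(t) - F*, u(0) = 0 (the Lemma 1 trick).
k = 2.0

def forcing(t):
    """Hypothetical F(t) - F* chosen so the integral has a closed form."""
    return np.exp(-t)

def u_integral(t):
    """Closed form of the integral for this forcing and k = 2:
    u(t) = e^{-t} - e^{-2t}."""
    return np.exp(-t) - np.exp(-2.0 * t)

# Solve the equivalent scalar ODE instead of evaluating the integral.
sol = solve_ivp(lambda t, u: forcing(t) - k * u, (0.0, 5.0), [0.0],
                t_eval=np.linspace(0.0, 5.0, 51), rtol=1e-10, atol=1e-12)

# The ODE solution reproduces the distributed (integral) control.
assert np.allclose(sol.y[0], u_integral(sol.t), atol=1e-6)
```
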

Following [9], we can pass to the dimensionless case.

Substituting *V*(*t*) = *v*(*t*)*V<sub>m</sub>*, *C*(*t*) = *s*(*t*)*C*<sup>∗</sup>, *F*(*t*) = *f*(*t*)*F*<sup>∗</sup>, *u*(*t*) = *ū*(*t*)*F*<sup>∗</sup> into (4), we obtain

$$\begin{cases} \frac{dv}{dt} = \beta v(t) - \gamma F^* f(t)v(t) \\ \frac{ds}{dt} = \alpha V_m \frac{F^*}{C^*}\,\zeta(m(t))f(t)v(t) - \mu_c\left(s(t) - 1\right) \\ \frac{df}{dt} = \frac{\rho C^*}{F^*}\,s(t) - \eta\gamma V_m f(t)v(t) - \mu_f f(t) - b\bar{u}(t) \\ \frac{dm}{dt} = \sigma V_m v(t) - \mu_m m(t) \\ \frac{d\bar{u}}{dt} = f(t) - 1 - k\bar{u}(t) \end{cases} \tag{5}$$

Substituting $a_1 = \beta$, $a_2 = \gamma F^*$, $a_3 = \alpha V_m \frac{F^*}{C^*}$, $a_4 = \mu_f = \frac{\rho C^*}{F^*}$, $a_5 = \mu_c$, $a_6 = \sigma V_m$, $a_7 = \mu_m$, $a_8 = \eta\gamma V_m$ into (5), we come to the system

$$\begin{cases} \frac{dv}{dt} = a_1 v(t) - a_2 f(t)v(t) \\ \frac{ds}{dt} = a_3 \zeta(m(t))f(t)v(t) - a_5\left(s(t) - 1\right) \\ \frac{df}{dt} = a_4\left(s(t) - f(t)\right) - a_8 f(t)v(t) - b\bar{u}(t) \\ \frac{dm}{dt} = a_6 v(t) - a_7 m(t) \\ \frac{d\bar{u}}{dt} = f(t) - 1 - k\bar{u}(t) \end{cases} \tag{6}$$

**Remark 1.** *It was obtained by M. Chirkov and S. Rusakov (see their method of identification of parameters, for example, in [5,9]), on the basis of the laboratory data of pneumonia, that a*<sub>1</sub> = 0.25*; a*<sub>2</sub> = 8.5000332*; a*<sub>3</sub> = 1.792175675 × 10<sup>9</sup>*; a*<sub>4</sub> = 1.95992344 × 10<sup>−7</sup>*; a*<sub>5</sub> = 0.5*; a*<sub>6</sub> = 10*; a*<sub>7</sub> = 0.4*; a*<sub>8</sub> = 1.7 × 10<sup>−3</sup>*.*

It is clear that *v* = *m* = *ū* = 0, *s* = *f* = 1 is a stationary point of system (6).

Linearizing system (6) in a neighborhood of this stationary point, we obtain the corresponding linear system

$$\begin{cases} \frac{dv}{dt} = (a_1 - a_2)v \\ \frac{ds}{dt} = a_3\zeta(0)v - a_5(s - 1) \\ \frac{df}{dt} = -a_8 v + a_4(s - 1) - a_4(f - 1) - b\bar{u} \\ \frac{dm}{dt} = a_6 v - a_7 m \\ \frac{d\bar{u}}{dt} = f - 1 - k\bar{u} \end{cases}$$

where *ζ*(0) = 1, as noted above. Denoting *x*<sub>1</sub> = *v*, *x*<sub>2</sub> = *s* − 1, *x*<sub>3</sub> = *f* − 1, *x*<sub>4</sub> = *m*, *x*<sub>5</sub> = *ū*, we obtain

$$\begin{cases} x_1' = (a_1 - a_2)x_1 \\ x_2' = a_3 x_1 - a_5 x_2 \\ x_3' = -a_8 x_1 + a_4 x_2 - a_4 x_3 - b x_5 \\ x_4' = a_6 x_1 - a_7 x_4 \\ x_5' = x_3 - k x_5 \end{cases} \tag{7}$$

#### **3. Constructing the Cauchy Matrix of the System (7)**

In order to estimate the values of *x*<sub>1</sub>, ..., *x*<sub>5</sub> and the speed of their convergence to the stationary solution, we propose below a corresponding technique. Its basis is the Cauchy matrix.

The matrix of the coefficients of system (7) is the following:

$$A = \begin{pmatrix} a\_1 - a\_2 & 0 & 0 & 0 & 0 \\ a\_3 & -a\_5 & 0 & 0 & 0 \\ -a\_8 & a\_4 & -a\_4 & 0 & -b \\ a\_6 & 0 & 0 & -a\_7 & 0 \\ 0 & 0 & 1 & 0 & -k \end{pmatrix} \tag{8}$$

Its eigenvalues are

$$\lambda_1 = \frac{-a_4 - k + \sqrt{(a_4 - k)^2 - 4b}}{2}, \quad \lambda_2 = \frac{-a_4 - k - \sqrt{(a_4 - k)^2 - 4b}}{2}, \quad \lambda_3 = -a_7, \quad \lambda_4 = -a_5, \quad \lambda_5 = a_1 - a_2. \tag{9}$$

The negativity of these eigenvalues (negativity of the real parts in the case of complex *λ*<sub>1</sub> and *λ*<sub>2</sub>) leads us to the assertion on the stability of the stationary point *v* = *m* = *ū* = 0, *s* = *f* = 1 of system (6).

**Theorem 1.** *If k* > 0*, b* > 0*, the coefficients a<sub>i</sub>*, 1 ≤ *i* ≤ 8*, are real, positive, and distinct, and a*<sub>1</sub> < *a*<sub>2</sub>*, then system (7) is exponentially stable.*
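The eigenvalue formulas (9) and the stability conclusion of Theorem 1 can be checked numerically. The sketch below uses sample positive coefficients chosen for illustration (they are not the identified pneumonia data of Remark 1) with *a*<sub>1</sub> < *a*<sub>2</sub>:

```python
import numpy as np

# Illustrative coefficients (hypothetical, chosen so a_1 < a_2 and all a_i > 0).
a1, a2, a3, a4, a5, a6, a7, a8 = 0.25, 8.5, 1.8, 0.2, 0.5, 10.0, 0.4, 1.7e-3
b, k = 0.01, 1.0

# Matrix A of system (7), see (8).
A = np.array([
    [a1 - a2, 0.0, 0.0, 0.0, 0.0],
    [a3, -a5, 0.0, 0.0, 0.0],
    [-a8, a4, -a4, 0.0, -b],
    [a6, 0.0, 0.0, -a7, 0.0],
    [0.0, 0.0, 1.0, 0.0, -k],
])

# Eigenvalues according to formula (9).
disc = (a4 - k) ** 2 - 4.0 * b
lam12 = (-(a4 + k) + np.array([1.0, -1.0]) * np.sqrt(complex(disc))) / 2.0
expected = np.concatenate([lam12, [-a7, -a5, a1 - a2]])

computed = np.linalg.eigvals(A)
# The closed-form eigenvalues (9) match the numerically computed spectrum.
assert np.allclose(np.sort_complex(computed), np.sort_complex(expected))
# All real parts are negative, i.e., system (7) is exponentially stable.
assert np.all(computed.real < 0)
```
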

**Remark 2.** *All steps can also be carried out for the integro-differential system (3) and the system of ordinary differential equations (4) directly, without passing to the dimensionless case (6). The linearization then leads to a corresponding analog of the linear system of ordinary differential Equations (7) with a matrix of coefficients B. Let us discuss the medical meaning of our result. Let F*<sub>0</sub> *be the value of the antibody concentration rate of the healthy body. The case F*<sub>0</sub> > *β*/*γ is considered by G.I. Marchuk in his book. In this case the stationary point V* = 0, *C* = *C*<sup>∗</sup>, *F* = *F*<sub>0</sub>, *m* = 0 *is stable even without control. We can try to consider the "bad" case, where F*<sub>0</sub> < *β*/*γ. It is clear that system (1) cannot be stable in this case in the neighborhood of this stationary point, since V*(*t*) *increases. This means that the immune system with the antibody concentration at the level of the healthy body cannot prevent an increasing antigen concentration. Our control (2) in the third equation of system (1) cannot make this stationary point stable. We therefore consider another stationary point V* = 0, *C* = *C*<sup>∗</sup>, *F* = *F*<sup>∗</sup>, *m* = 0*. Repeating the analysis of the eigenvalues of the matrix of coefficients B, we come to the same conclusions. Let all coefficients in system (1) be positive (an absolutely natural assumption) and b* > 0, *k* > 0*; then, adding the control in the form (2), where F*<sup>∗</sup> > *F*<sub>0</sub> + (*β* − *γF*<sub>0</sub>)/*γ, we can achieve the exponential stability of this new stationary point of systems (3) and (4). Indeed, the positivity of k*, *b and of all the coefficients a<sub>i</sub>* (*i* = 1, ..., 8) *is preserved, and to achieve the inequality a*<sub>1</sub> − *a*<sub>2</sub> < 0 *we have to require the noted inequality connecting F*<sup>∗</sup> *and F*<sub>0</sub>*. One can conclude that supporting the immune system for a long time, described by the antibody concentration F*(*t*)*, and holding it at the level F*<sup>∗ </sup>*can be a possible way of treatment.*

There are three possible cases, according to the sign of the discriminant in (9):

1. Case 1: $(a_4 - k)^2 - 4b > 0$ (the eigenvalues $\lambda_1 \neq \lambda_2$ are real and distinct);
2. Case 2: $(a_4 - k)^2 - 4b = 0$ (the eigenvalues $\lambda_1 = \lambda_2$ coincide);
3. Case 3: $(a_4 - k)^2 - 4b < 0$ (the eigenvalues $\lambda_1$ and $\lambda_2$ are complex conjugate).

*3.1. Constructing the Cauchy Matrix in Case 1*

Using Maple, we obtain the eigenvectors of the matrix (8):

$$\vec{v}_1 = \begin{pmatrix} 0 \\ 0 \\ -\frac{2b}{a_4-k+\sqrt{(a_4-k)^2-4b}} \\ 0 \\ 1 \end{pmatrix}, \quad \vec{v}_2 = \begin{pmatrix} 0 \\ 0 \\ -\frac{2b}{a_4-k-\sqrt{(a_4-k)^2-4b}} \\ 0 \\ 1 \end{pmatrix}, \quad \vec{v}_3 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{pmatrix},$$

$$\vec{v}_4 = \begin{pmatrix} 0 \\ -\frac{a_4a_5-a_4k-a_5^2+a_5k-b}{a_4} \\ -a_5+k \\ 0 \\ 1 \end{pmatrix}, \quad \vec{v}_5 = \begin{pmatrix} -c(a_5+a_1-a_2) \\ -ca_3 \\ a_1-a_2+k \\ -\frac{(a_5+a_1-a_2)a_6\,c}{a_1-a_2+a_7} \\ 1 \end{pmatrix},$$

where
$$c = \frac{a_1^2 - 2a_1a_2 + a_1a_4 + a_1k + a_2^2 - a_2a_4 - a_2k + a_4k + b}{a_1a_8 - a_2a_8 - a_3a_4 + a_5a_8}.$$

Let us denote
$$\alpha_{31} = -\frac{2b}{a_4-k+\sqrt{(a_4-k)^2-4b}}, \quad \alpha_{32} = -\frac{2b}{a_4-k-\sqrt{(a_4-k)^2-4b}}, \quad \alpha_{24} = -\frac{a_4a_5-a_4k-a_5^2+a_5k-b}{a_4},$$
$$\alpha_{34} = -a_5+k, \quad \alpha_{15} = -c(a_5+a_1-a_2), \quad \alpha_{25} = -ca_3, \quad \alpha_{35} = a_1-a_2+k, \quad \alpha_{45} = -\frac{(a_5+a_1-a_2)a_6\,c}{a_1-a_2+a_7},$$
and define the matrix

$$B = \left[\vec{v}_1, \vec{v}_2, \vec{v}_3, \vec{v}_4, \vec{v}_5\right] = \begin{pmatrix} 0 & 0 & 0 & 0 & \alpha_{15} \\ 0 & 0 & 0 & \alpha_{24} & \alpha_{25} \\ \alpha_{31} & \alpha_{32} & 0 & \alpha_{34} & \alpha_{35} \\ 0 & 0 & 1 & 0 & \alpha_{45} \\ 1 & 1 & 0 & 1 & 1 \end{pmatrix},$$

containing eigenvectors and its inverse matrix

$$B^{-1} = \begin{pmatrix} \frac{\alpha_{24}(\alpha_{32}-\alpha_{35}) - \alpha_{25}(\alpha_{32}-\alpha_{34})}{\alpha_{15}\alpha_{24}(\alpha_{31}-\alpha_{32})} & \frac{\alpha_{32}-\alpha_{34}}{\alpha_{24}(\alpha_{31}-\alpha_{32})} & \frac{1}{\alpha_{31}-\alpha_{32}} & 0 & -\frac{\alpha_{32}}{\alpha_{31}-\alpha_{32}} \\ -\frac{\alpha_{24}(\alpha_{31}-\alpha_{35}) - \alpha_{25}(\alpha_{31}-\alpha_{34})}{\alpha_{15}\alpha_{24}(\alpha_{31}-\alpha_{32})} & -\frac{\alpha_{31}-\alpha_{34}}{\alpha_{24}(\alpha_{31}-\alpha_{32})} & -\frac{1}{\alpha_{31}-\alpha_{32}} & 0 & \frac{\alpha_{31}}{\alpha_{31}-\alpha_{32}} \\ -\frac{\alpha_{45}}{\alpha_{15}} & 0 & 0 & 1 & 0 \\ -\frac{\alpha_{25}}{\alpha_{15}\alpha_{24}} & \frac{1}{\alpha_{24}} & 0 & 0 & 0 \\ \frac{1}{\alpha_{15}} & 0 & 0 & 0 & 0 \end{pmatrix}.$$

Let us now write the Cauchy matrix *C*(*t*,*s*) of system (7). The Cauchy matrix can be written as *C*(*t*,*s*) = *e*<sup>*A*(*t*−*s*)</sup>. In our case *A* is diagonalizable, *A* = *BDB*<sup>−1</sup>, so we have *e*<sup>*A*(*t*−*s*)</sup> = *Be*<sup>*D*(*t*−*s*)</sup>*B*<sup>−1</sup>, where the matrix *D* is diagonal, containing the eigenvalues of the matrix *A*. The columns $\vec{C}_i(t,s)$, $1 \le i \le 5$, of the Cauchy matrix *C*(*t*,*s*) of system (7) are the following ones:
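Before listing the columns explicitly, the diagonalization *C*(*t*,*s*) = *Be*<sup>*D*(*t*−*s*)</sup>*B*<sup>−1</sup> can be checked numerically. This is an illustrative sketch with the same hypothetical coefficients as before (not the paper's identified data); Case 1 holds for them, so *A* has distinct eigenvalues:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative coefficients (hypothetical) satisfying Case 1.
a1, a2, a3, a4, a5, a6, a7, a8 = 0.25, 8.5, 1.8, 0.2, 0.5, 10.0, 0.4, 1.7e-3
b, k = 0.01, 1.0

A = np.array([
    [a1 - a2, 0, 0, 0, 0],
    [a3, -a5, 0, 0, 0],
    [-a8, a4, -a4, 0, -b],
    [a6, 0, 0, -a7, 0],
    [0, 0, 1, 0, -k],
], dtype=float)

# Distinct eigenvalues: A = B D B^{-1}, with the eigenvectors as columns of B.
eigvals, B = np.linalg.eig(A)

def cauchy(t, s):
    """C(t, s) = B e^{D(t-s)} B^{-1}, the Cauchy matrix of x' = A x."""
    return (B @ np.diag(np.exp(eigvals * (t - s))) @ np.linalg.inv(B)).real

t, s = 2.0, 0.5
# The diagonalization agrees with the matrix exponential e^{A(t-s)}.
assert np.allclose(cauchy(t, s), expm(A * (t - s)))
# And C(s, s) = I, as required of a Cauchy matrix.
assert np.allclose(cauchy(s, s), np.eye(5))
```
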

$$\vec{C}_1(t,s) = \frac{\alpha_{24}(\alpha_{32}-\alpha_{35}) - \alpha_{25}(\alpha_{32}-\alpha_{34})}{\alpha_{15}\alpha_{24}(\alpha_{31}-\alpha_{32})}\,\vec{v}_1 e^{\lambda_1(t-s)} - \frac{\alpha_{24}(\alpha_{31}-\alpha_{35}) - \alpha_{25}(\alpha_{31}-\alpha_{34})}{\alpha_{15}\alpha_{24}(\alpha_{31}-\alpha_{32})}\,\vec{v}_2 e^{\lambda_2(t-s)} - \frac{\alpha_{45}}{\alpha_{15}}\,\vec{v}_3 e^{-a_7(t-s)} - \frac{\alpha_{25}}{\alpha_{15}\alpha_{24}}\,\vec{v}_4 e^{-a_5(t-s)} + \frac{1}{\alpha_{15}}\,\vec{v}_5 e^{(a_1-a_2)(t-s)},$$

$$\vec{C}_2(t,s) = \frac{\alpha_{32}-\alpha_{34}}{\alpha_{24}(\alpha_{31}-\alpha_{32})}\,\vec{v}_1 e^{\lambda_1(t-s)} - \frac{\alpha_{31}-\alpha_{34}}{\alpha_{24}(\alpha_{31}-\alpha_{32})}\,\vec{v}_2 e^{\lambda_2(t-s)} + \frac{1}{\alpha_{24}}\,\vec{v}_4 e^{-a_5(t-s)},$$

$$\vec{C}_3(t,s) = \frac{1}{\alpha_{31}-\alpha_{32}}\left(\vec{v}_1 e^{\lambda_1(t-s)} - \vec{v}_2 e^{\lambda_2(t-s)}\right),$$

$$\vec{C}_4(t,s) = \vec{v}_3 e^{-a_7(t-s)},$$

$$\vec{C}_5(t,s) = -\frac{\alpha_{32}}{\alpha_{31}-\alpha_{32}}\,\vec{v}_1 e^{\lambda_1(t-s)} + \frac{\alpha_{31}}{\alpha_{31}-\alpha_{32}}\,\vec{v}_2 e^{\lambda_2(t-s)},$$

where $\vec{v}_1, \ldots, \vec{v}_5$ are the eigenvectors listed above and $\lambda_1$, $\lambda_2$ are the eigenvalues (9).

#### *3.2. Constructing the Cauchy Matrix in Case 2*

We have the eigenvalues

$$
\lambda\_1 = \lambda\_2 = -\frac{a\_4 + k}{2}, \ \lambda\_3 = -a\_7, \ \lambda\_4 = -a\_5, \ \lambda\_5 = a\_1 - a\_2. \tag{11}
$$

Consider the following set of vectors

$$\vec{v}_1 = \begin{pmatrix} 0 \\ 0 \\ -\frac{2b}{a_4-k} \\ 0 \\ 1 \end{pmatrix}, \quad \vec{v}_2 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ \frac{2}{a_4-k} \end{pmatrix}, \quad \vec{v}_3 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \quad \vec{v}_4 = \begin{pmatrix} 0 \\ -\frac{a_4a_5-a_4k-a_5^2+a_5k-b}{a_4} \\ -a_5+k \\ 0 \\ 1 \end{pmatrix}, \quad \vec{v}_5 = \begin{pmatrix} -c(a_5+a_1-a_2) \\ -ca_3 \\ a_1-a_2+k \\ -\frac{(a_5+a_1-a_2)a_6\,c}{a_1-a_2+a_7} \\ 1 \end{pmatrix}, \tag{12}$$

where $\vec{v}_1$, $\vec{v}_3$, $\vec{v}_4$, $\vec{v}_5$ are eigenvectors of the matrix (8) and $\vec{v}_2$ is a root vector for $\vec{v}_1$.

Let us denote
$$\beta_{31} = -\frac{2b}{a_4-k}, \quad \beta_{52} = \frac{2}{a_4-k}, \quad \beta_{24} = -\frac{a_4a_5-a_4k-a_5^2+a_5k-b}{a_4}, \quad \beta_{34} = -a_5+k,$$
$$\beta_{15} = -c(a_5+a_1-a_2), \quad \beta_{25} = -ca_3, \quad \beta_{35} = a_1-a_2+k, \quad \beta_{45} = -\frac{(a_5+a_1-a_2)a_6\,c}{a_1-a_2+a_7},$$
and define the matrix

$$B = \left[\vec{v}_1, \vec{v}_2, \vec{v}_3, \vec{v}_4, \vec{v}_5\right] = \begin{pmatrix} 0 & 0 & 0 & 0 & \beta_{15} \\ 0 & 0 & 0 & \beta_{24} & \beta_{25} \\ \beta_{31} & 0 & 0 & \beta_{34} & \beta_{35} \\ 0 & 0 & 1 & 0 & \beta_{45} \\ 1 & \beta_{52} & 0 & 1 & 1 \end{pmatrix}$$

and its inverse matrix

$$B^{-1} = \begin{pmatrix} -\frac{\beta_{24}\beta_{35} - \beta_{25}\beta_{34}}{\beta_{31}\beta_{15}\beta_{24}} & -\frac{\beta_{34}}{\beta_{24}\beta_{31}} & \frac{1}{\beta_{31}} & 0 & 0 \\ -\frac{\beta_{24}(\beta_{31}-\beta_{35}) - \beta_{25}(\beta_{31}-\beta_{34})}{\beta_{31}\beta_{52}\beta_{24}\beta_{15}} & -\frac{\beta_{31}-\beta_{34}}{\beta_{31}\beta_{24}\beta_{52}} & -\frac{1}{\beta_{31}\beta_{52}} & 0 & \frac{1}{\beta_{52}} \\ -\frac{\beta_{45}}{\beta_{15}} & 0 & 0 & 1 & 0 \\ -\frac{\beta_{25}}{\beta_{15}\beta_{24}} & \frac{1}{\beta_{24}} & 0 & 0 & 0 \\ \frac{1}{\beta_{15}} & 0 & 0 & 0 & 0 \end{pmatrix}.$$

Let us denote $\vec{u}_1(t) = \vec{v}_1 e^{\lambda_1 t}$, $\vec{u}_2(t) = (\vec{v}_2 + t\vec{v}_1)e^{\lambda_1 t}$, $\vec{u}_i(t) = \vec{v}_i e^{\lambda_i t}$ for $3 \le i \le 5$, and $\vec{w}_j(t,s) = \vec{u}_j(t-s)$, $1 \le j \le 5$.

Let us build the Cauchy matrix $C(t,s) = \left(\vec{C}_i(t,s)\right)_{1\le i\le 5}$, where $\vec{C}_i(t,s) = \sum_{j=1}^{5} b_{ji}\,\vec{w}_j(t,s)$, $1 \le i \le 5$.

We have to find the coefficients $b_{ji}$, $1 \le i,j \le 5$, in this representation. Taking into account that $C(s,s) = I$, where $I$ is the identity $(5\times 5)$-matrix, we can write $\vec{C}_i(s,s) = \sum_{j=1}^{5} b_{ji}\,\vec{v}_j$, $1 \le i \le 5$.

Setting *i* = 1, 2, 3, 4, 5, we obtain

$$\vec{C}_i(s,s) = \sum_{j=1}^{5} b_{ji}\,\vec{v}_j = B\begin{pmatrix} b_{1i} \\ b_{2i} \\ b_{3i} \\ b_{4i} \\ b_{5i} \end{pmatrix} = \vec{e}_i \quad\Longrightarrow\quad \begin{pmatrix} b_{1i} \\ b_{2i} \\ b_{3i} \\ b_{4i} \\ b_{5i} \end{pmatrix} = B^{-1}\vec{e}_i,$$

where $\vec{e}_i$ is the $i$-th column of the identity matrix; that is, the coefficients $(b_{1i}, \ldots, b_{5i})^T$ form the $i$-th column of $B^{-1}$:

$$\begin{pmatrix} b_{11} \\ b_{21} \\ b_{31} \\ b_{41} \\ b_{51} \end{pmatrix} = \begin{pmatrix} -\frac{\beta_{24}\beta_{35}-\beta_{25}\beta_{34}}{\beta_{31}\beta_{15}\beta_{24}} \\ -\frac{\beta_{24}(\beta_{31}-\beta_{35})-\beta_{25}(\beta_{31}-\beta_{34})}{\beta_{31}\beta_{52}\beta_{24}\beta_{15}} \\ -\frac{\beta_{45}}{\beta_{15}} \\ -\frac{\beta_{25}}{\beta_{15}\beta_{24}} \\ \frac{1}{\beta_{15}} \end{pmatrix}, \quad \begin{pmatrix} b_{12} \\ b_{22} \\ b_{32} \\ b_{42} \\ b_{52} \end{pmatrix} = \begin{pmatrix} -\frac{\beta_{34}}{\beta_{24}\beta_{31}} \\ -\frac{\beta_{31}-\beta_{34}}{\beta_{31}\beta_{24}\beta_{52}} \\ 0 \\ \frac{1}{\beta_{24}} \\ 0 \end{pmatrix}, \quad \begin{pmatrix} b_{13} \\ b_{23} \\ b_{33} \\ b_{43} \\ b_{53} \end{pmatrix} = \begin{pmatrix} \frac{1}{\beta_{31}} \\ -\frac{1}{\beta_{31}\beta_{52}} \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} b_{14} \\ b_{24} \\ b_{34} \\ b_{44} \\ b_{54} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} b_{15} \\ b_{25} \\ b_{35} \\ b_{45} \\ b_{55} \end{pmatrix} = \begin{pmatrix} 0 \\ \frac{1}{\beta_{52}} \\ 0 \\ 0 \\ 0 \end{pmatrix}.$$


Substituting the coefficients $b_{ji}$, $1 \le i,j \le 5$, into the equality $\vec{C}_i(t,s) = \sum_{j=1}^{5} b_{ji}\,\vec{w}_j(t,s)$, $1 \le i \le 5$, we obtain

$$\overrightarrow{C}_1(t,s)=-\frac{\beta_{24}\beta_{35}-\beta_{25}\beta_{34}}{\beta_{31}\beta_{15}\beta_{24}}\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k}\\0\\1\end{pmatrix}e^{-\frac{a_4+k}{2}(t-s)}-\frac{\beta_{24}(\beta_{31}-\beta_{35})-\beta_{25}(\beta_{31}-\beta_{34})}{\beta_{31}\beta_{52}\beta_{24}\beta_{15}}\left[\begin{pmatrix}0\\0\\0\\0\\\frac{2}{a_4-k}\end{pmatrix}+(t-s)\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k}\\0\\1\end{pmatrix}\right]e^{-\frac{a_4+k}{2}(t-s)}$$
$$-\frac{\beta_{45}}{\beta_{15}}\begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix}e^{-a_7(t-s)}-\frac{\beta_{25}}{\beta_{15}\beta_{24}}\begin{pmatrix}0\\\frac{-a_4a_5-a_4k-a_5^2+a_5k-b}{a_4}\\-a_5+k\\0\\1\end{pmatrix}e^{-a_5(t-s)}+\frac{1}{\beta_{15}}\begin{pmatrix}-c(a_5+a_1-a_2)\\-ca_3\\a_1-a_2+k\\-\frac{(a_5+a_1-a_2)a_6c}{a_1-a_2+a_7}\\1\end{pmatrix}e^{(a_1-a_2)(t-s)}$$

$$\overrightarrow{C}_2(t,s)=\left[-\frac{\beta_{34}}{\beta_{24}\beta_{31}}\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k}\\0\\1\end{pmatrix}-\frac{\beta_{31}-\beta_{34}}{\beta_{31}\beta_{24}\beta_{52}}\left[\begin{pmatrix}0\\0\\0\\0\\\frac{2}{a_4-k}\end{pmatrix}+(t-s)\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k}\\0\\1\end{pmatrix}\right]\right]e^{-\frac{a_4+k}{2}(t-s)}+\frac{1}{\beta_{24}}\begin{pmatrix}0\\\frac{-a_4a_5-a_4k-a_5^2+a_5k-b}{a_4}\\-a_5+k\\0\\1\end{pmatrix}e^{-a_5(t-s)}$$

$$\overrightarrow{C}_3(t,s)=\left[\frac{1}{\beta_{31}}\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k}\\0\\1\end{pmatrix}-\frac{1}{\beta_{31}\beta_{52}}\left[\begin{pmatrix}0\\0\\0\\0\\\frac{2}{a_4-k}\end{pmatrix}+(t-s)\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k}\\0\\1\end{pmatrix}\right]\right]e^{-\frac{a_4+k}{2}(t-s)}$$

$$\overrightarrow{C}_4(t,s)=\begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix}e^{-a_7(t-s)}$$

$$\overrightarrow{C}_5(t,s)=\frac{1}{\beta_{52}}\left[\begin{pmatrix}0\\0\\0\\0\\\frac{2}{a_4-k}\end{pmatrix}+(t-s)\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k}\\0\\1\end{pmatrix}\right]e^{-\frac{a_4+k}{2}(t-s)}$$
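The polynomial-in-$(t-s)$ terms above rest on the double root $\lambda=-\frac{a_4+k}{2}$ of the $(x_3,x_5)$ subsystem $x_3'=-a_4x_3-bx_5$, $x_5'=x_3-kx_5$ in the critical case $(a_4-k)^2=4b$. A small numerical sketch (parameter values below are illustrative, not from the paper) confirms the eigenvector and generalized-vector pair used in these formulas:

```python
# Critical case (a4 - k)^2 = 4b of the (x3, x5) subsystem:
#   x3' = -a4*x3 - b*x5,  x5' = x3 - k*x5
# double root lam = -(a4 + k)/2 with eigenvector v = (-2b/(a4-k), 1)
# and generalized vector g = (0, 2/(a4-k)):
#   (M - lam*I) v = 0  and  (M - lam*I) g = v.
a4, k = 3.0, 1.0                 # illustrative values only
b = (a4 - k) ** 2 / 4            # enforces the critical case
lam = -(a4 + k) / 2

M = [[-a4, -b], [1.0, -k]]
v = [-2 * b / (a4 - k), 1.0]
g = [0.0, 2 / (a4 - k)]

def mul(A, x):
    # 2x2 matrix-vector product
    return [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]

Mv, Mg = mul(M, v), mul(M, g)
assert all(abs(Mv[i] - lam * v[i]) < 1e-12 for i in range(2))         # (M - lam I) v = 0
assert all(abs(Mg[i] - lam * g[i] - v[i]) < 1e-12 for i in range(2))  # (M - lam I) g = v
```

The second relation is exactly what produces the $(t-s)$ factor in the solutions above.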

#### *3.3. Constructing the Cauchy Matrix in Case 3*

We have the eigenvalues

$$\lambda_1=\frac{-a_4-k+i\sqrt{4b-(a_4-k)^2}}{2},\qquad\lambda_2=\frac{-a_4-k-i\sqrt{4b-(a_4-k)^2}}{2},$$
$$\lambda_3=-a_7,\qquad\lambda_4=-a_5,\qquad\lambda_5=a_1-a_2,\tag{13}$$
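The complex pair $\lambda_{1,2}$ in (13) are the roots of the characteristic polynomial $\lambda^2+(a_4+k)\lambda+(a_4k+b)=0$ of the $(x_3,x_5)$ subsystem $x_3'=-a_4x_3-bx_5$, $x_5'=x_3-kx_5$. A quick numerical check (sample values chosen only so that $4b>(a_4-k)^2$, the oscillatory Case 3):

```python
import cmath

a4, k, b = 1.0, 2.0, 3.0                  # illustrative values only
assert 4 * b > (a4 - k) ** 2              # Case 3 condition

# roots of lam^2 + (a4 + k)*lam + (a4*k + b) = 0
disc = (a4 + k) ** 2 - 4 * (a4 * k + b)   # equals (a4 - k)^2 - 4b
root1 = (-(a4 + k) + cmath.sqrt(disc)) / 2
root2 = (-(a4 + k) - cmath.sqrt(disc)) / 2

# formula (13)
lam1 = (-a4 - k + 1j * cmath.sqrt(4 * b - (a4 - k) ** 2)) / 2
lam2 = (-a4 - k - 1j * cmath.sqrt(4 * b - (a4 - k) ** 2)) / 2
assert abs(root1 - lam1) < 1e-12 and abs(root2 - lam2) < 1e-12
```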

and the corresponding eigenvectors

$$\overrightarrow{v}_1=\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k+i\sqrt{4b-(a_4-k)^2}}\\0\\1\end{pmatrix},\quad\overrightarrow{v}_2=\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k-i\sqrt{4b-(a_4-k)^2}}\\0\\1\end{pmatrix},\quad\overrightarrow{v}_3=\begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix},$$
$$\overrightarrow{v}_4=\begin{pmatrix}0\\\frac{-a_4a_5-a_4k-a_5^2+a_5k-b}{a_4}\\k-a_5\\0\\1\end{pmatrix},\quad\overrightarrow{v}_5=\begin{pmatrix}-c(a_5+a_1-a_2)\\-ca_3\\a_1-a_2+k\\-\frac{(a_5+a_1-a_2)a_6c}{a_1-a_2+a_7}\\1\end{pmatrix}.\tag{14}$$

We can write the first two vector solutions as follows:

$$\overrightarrow{u}_1(t)=\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k+i\sqrt{4b-(a_4-k)^2}}\\0\\1\end{pmatrix}\cdot e^{\left(\frac{-a_4-k+i\sqrt{4b-(a_4-k)^2}}{2}\right)t}=\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k+i\sqrt{4b-(a_4-k)^2}}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}t}\cdot\left(\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}t\right)+i\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}t\right)\right)$$

$$\overrightarrow{u}_2(t)=\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k-i\sqrt{4b-(a_4-k)^2}}\\0\\1\end{pmatrix}\cdot e^{\left(\frac{-a_4-k-i\sqrt{4b-(a_4-k)^2}}{2}\right)t}=\begin{pmatrix}0\\0\\-\frac{2b}{a_4-k-i\sqrt{4b-(a_4-k)^2}}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}t}\cdot\left(\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}t\right)-i\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}t\right)\right)$$

Passing to real solutions:

$$\overrightarrow{w}_1(t)=\frac{\overrightarrow{u}_1(t)+\overrightarrow{u}_2(t)}{2}=\begin{pmatrix}0\\0\\\frac{k-a_4}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}t}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}t\right)+\begin{pmatrix}0\\0\\-\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}t}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}t\right)$$

$$\overrightarrow{w}_2(t)=\frac{\overrightarrow{u}_1(t)-\overrightarrow{u}_2(t)}{2i}=\begin{pmatrix}0\\0\\\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}t}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}t\right)+\begin{pmatrix}0\\0\\\frac{a_4-k}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}t}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}t\right)$$

$$\overrightarrow{w}_3(t)=\begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix}\cdot e^{-a_7t},\qquad\overrightarrow{w}_4(t)=\begin{pmatrix}0\\\frac{-a_4a_5-a_4k-a_5^2+a_5k-b}{a_4}\\k-a_5\\0\\1\end{pmatrix}\cdot e^{-a_5t},\qquad\overrightarrow{w}_5(t)=\begin{pmatrix}-c(a_5+a_1-a_2)\\-ca_3\\a_1-a_2+k\\-\frac{(a_5+a_1-a_2)a_6c}{a_1-a_2+a_7}\\1\end{pmatrix}\cdot e^{(a_1-a_2)t}$$
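As a sanity check, one can verify by central differences that the $(x_3,x_5)$ components of $\overrightarrow{w}_1(t)$ satisfy the subsystem $x_3'=-a_4x_3-bx_5$, $x_5'=x_3-kx_5$ (the parameter values below are illustrative only):

```python
import math

a4, k, b = 1.0, 2.0, 3.0                       # illustrative values, 4b > (a4-k)^2
p = (a4 + k) / 2                               # decay rate
w = math.sqrt(4 * b - (a4 - k) ** 2) / 2       # oscillation frequency

def x3(t):
    # third component of w1(t)
    return math.exp(-p * t) * ((k - a4) / 2 * math.cos(w * t) - w * math.sin(w * t))

def x5(t):
    # fifth component of w1(t)
    return math.exp(-p * t) * math.cos(w * t)

h = 1e-6
for t in (0.3, 1.0, 2.5):
    dx3 = (x3(t + h) - x3(t - h)) / (2 * h)    # central difference
    dx5 = (x5(t + h) - x5(t - h)) / (2 * h)
    assert abs(dx3 - (-a4 * x3(t) - b * x5(t))) < 1e-6
    assert abs(dx5 - (x3(t) - k * x5(t))) < 1e-6
```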

Let us now construct the Cauchy matrix $C(t,s)=\left(\overrightarrow{C}_i(t,s)\right)_{i=1,\dots,5}$ of the system. Defining $\overrightarrow{w}_i(t,s)=\overrightarrow{w}_i(t-s)$, we have

$$\overrightarrow{C}_i(t,s)=b_{1i}\overrightarrow{w}_1(t,s)+b_{2i}\overrightarrow{w}_2(t,s)+b_{3i}\overrightarrow{w}_3(t,s)+b_{4i}\overrightarrow{w}_4(t,s)+b_{5i}\overrightarrow{w}_5(t,s)$$
$$=b_{1i}\cdot\left[\begin{pmatrix}0\\0\\\frac{k-a_4}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)+\begin{pmatrix}0\\0\\-\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)\right]$$
$$+b_{2i}\cdot\left[\begin{pmatrix}0\\0\\\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)+\begin{pmatrix}0\\0\\\frac{a_4-k}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)\right]$$
$$+b_{3i}\cdot\begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix}\cdot e^{-a_7(t-s)}+b_{4i}\cdot\begin{pmatrix}0\\\frac{-a_4a_5-a_4k-a_5^2+a_5k-b}{a_4}\\k-a_5\\0\\1\end{pmatrix}\cdot e^{-a_5(t-s)}+b_{5i}\cdot\begin{pmatrix}-c(a_5+a_1-a_2)\\-ca_3\\a_1-a_2+k\\-\frac{(a_5+a_1-a_2)a_6c}{a_1-a_2+a_7}\\1\end{pmatrix}\cdot e^{(a_1-a_2)(t-s)}$$

We have to find the coefficients $b_{1i},b_{2i},b_{3i},b_{4i},b_{5i}$ in this representation. Taking into account that $C(s,s)=I$, where $I$ is the $(5\times5)$ identity matrix, we can write:

$$\overrightarrow{C}_i(s,s)=b_{1i}\cdot\begin{pmatrix}0\\0\\\frac{k-a_4}{2}\\0\\1\end{pmatrix}+b_{2i}\cdot\begin{pmatrix}0\\0\\\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}+b_{3i}\cdot\begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix}+b_{4i}\cdot\begin{pmatrix}0\\\frac{-a_4a_5-a_4k-a_5^2+a_5k-b}{a_4}\\k-a_5\\0\\1\end{pmatrix}+b_{5i}\cdot\begin{pmatrix}-c(a_5+a_1-a_2)\\-ca_3\\a_1-a_2+k\\-\frac{(a_5+a_1-a_2)a_6c}{a_1-a_2+a_7}\\1\end{pmatrix}$$

Let us denote $\gamma_{32}=\frac{\sqrt{4b-(a_4-k)^2}}{2}$, $\gamma_{24}=\frac{-a_4a_5-a_4k-a_5^2+a_5k-b}{a_4}$, $\gamma_{15}=-c(a_5+a_1-a_2)$, $\gamma_{25}=-ca_3$, $\gamma_{35}=a_1-a_2+k$, $\gamma_{45}=-\frac{(a_5+a_1-a_2)a_6c}{a_1-a_2+a_7}$, and define the matrix

$$B=\begin{pmatrix}0&0&0&0&\gamma_{15}\\0&0&0&\gamma_{24}&\gamma_{25}\\\frac{k-a_4}{2}&\gamma_{32}&0&k-a_5&\gamma_{35}\\0&0&1&0&\gamma_{45}\\1&0&0&1&1\end{pmatrix}$$

and its inverse matrix

$$B^{-1}=\begin{pmatrix}-\frac{\gamma_{24}-\gamma_{25}}{\gamma_{15}\gamma_{24}}&-\frac{1}{\gamma_{24}}&0&0&1\\[2pt]-\frac{1}{2}\frac{\gamma_{24}(2\gamma_{35}-a_4+k)+\gamma_{25}(a_4-2a_5+3k)}{\gamma_{32}\gamma_{15}\gamma_{24}}&\frac{1}{2}\frac{a_4-3k+2a_5}{\gamma_{32}\gamma_{24}}&\frac{1}{\gamma_{32}}&0&\frac{1}{2}\frac{a_4-k}{\gamma_{32}}\\[2pt]-\frac{\gamma_{45}}{\gamma_{15}}&0&0&1&0\\[2pt]-\frac{\gamma_{25}}{\gamma_{15}\gamma_{24}}&\frac{1}{\gamma_{24}}&0&0&0\\[2pt]\frac{1}{\gamma_{15}}&0&0&0&0\end{pmatrix}.$$
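Closed-form inversion of a $5\times5$ matrix is error-prone, so for any concrete parameter values the columns of $B^{-1}$, i.e., the coefficient vectors solving $Bb=e_i$, can be cross-checked numerically. A minimal sketch (Gauss-Jordan elimination; all numeric values below are purely illustrative, not from the paper):

```python
def inverse(A):
    # Gauss-Jordan elimination with partial pivoting on the augmented matrix [A | I]
    n = len(A)
    M = [list(row) + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [x / d for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

# illustrative sample values for k, a4, a5 and the gammas
k, a4, a5 = 2.0, 1.0, 0.5
g15, g24, g25, g32, g35, g45 = -1.5, 0.8, -0.6, 1.2, 0.9, -0.4
B = [[0, 0, 0, 0, g15],
     [0, 0, 0, g24, g25],
     [(k - a4) / 2, g32, 0, k - a5, g35],
     [0, 0, 1, 0, g45],
     [1, 0, 0, 1, 1]]
Binv = inverse(B)
for i in range(5):
    for j in range(5):
        s = sum(B[i][m] * Binv[m][j] for m in range(5))
        assert abs(s - (1.0 if i == j else 0.0)) < 1e-9   # B * Binv = I
```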

Let us build the Cauchy matrix $C(t,s)=\left(\overrightarrow{C}_i(t,s)\right)_{1\le i\le5}$, where $\overrightarrow{C}_i(t,s)=\sum_{j=1}^{5}b_{ji}\overrightarrow{w}_j(t,s)$, $1\le i\le5$.

We have to find $b_{ji}$, $1\le i,j\le5$, in this representation. Taking into account that $C(s,s)=I$, where $I$ is the $(5\times5)$ identity matrix, we can write $\overrightarrow{C}_i(s,s)=\sum_{j=1}^{5}b_{ji}\overrightarrow{v}_j$, $1\le i\le5$.

Setting *i* = 1, 2, 3, 4, 5, we obtain

$$\overrightarrow{C}_1(s,s)=\sum_{j=1}^{5}b_{j1}\overrightarrow{v}_j=B\begin{pmatrix}b_{11}\\b_{21}\\b_{31}\\b_{41}\\b_{51}\end{pmatrix}=\begin{pmatrix}1\\0\\0\\0\\0\end{pmatrix}\;\Rightarrow\;\begin{pmatrix}b_{11}\\b_{21}\\b_{31}\\b_{41}\\b_{51}\end{pmatrix}=\begin{pmatrix}-\frac{\gamma_{24}-\gamma_{25}}{\gamma_{15}\gamma_{24}}\\[2pt]-\frac{1}{2}\frac{\gamma_{24}(2\gamma_{35}-a_4+k)+\gamma_{25}(a_4-2a_5+3k)}{\gamma_{32}\gamma_{15}\gamma_{24}}\\[2pt]-\frac{\gamma_{45}}{\gamma_{15}}\\[2pt]-\frac{\gamma_{25}}{\gamma_{15}\gamma_{24}}\\[2pt]\frac{1}{\gamma_{15}}\end{pmatrix}$$

$$\overrightarrow{C}_2(s,s)=\sum_{j=1}^{5}b_{j2}\overrightarrow{v}_j=B\begin{pmatrix}b_{12}\\b_{22}\\b_{32}\\b_{42}\\b_{52}\end{pmatrix}=\begin{pmatrix}0\\1\\0\\0\\0\end{pmatrix}\;\Rightarrow\;\begin{pmatrix}b_{12}\\b_{22}\\b_{32}\\b_{42}\\b_{52}\end{pmatrix}=\begin{pmatrix}-\frac{1}{\gamma_{24}}\\[2pt]\frac{1}{2}\frac{a_4-3k+2a_5}{\gamma_{32}\gamma_{24}}\\0\\\frac{1}{\gamma_{24}}\\0\end{pmatrix}$$

$$\overrightarrow{C}_3(s,s)=\sum_{j=1}^{5}b_{j3}\overrightarrow{v}_j=B\begin{pmatrix}b_{13}\\b_{23}\\b_{33}\\b_{43}\\b_{53}\end{pmatrix}=\begin{pmatrix}0\\0\\1\\0\\0\end{pmatrix}\;\Rightarrow\;\begin{pmatrix}b_{13}\\b_{23}\\b_{33}\\b_{43}\\b_{53}\end{pmatrix}=\begin{pmatrix}0\\\frac{1}{\gamma_{32}}\\0\\0\\0\end{pmatrix}$$

$$\overrightarrow{C}_4(s,s)=\sum_{j=1}^{5}b_{j4}\overrightarrow{v}_j=B\begin{pmatrix}b_{14}\\b_{24}\\b_{34}\\b_{44}\\b_{54}\end{pmatrix}=\begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix}\;\Rightarrow\;\begin{pmatrix}b_{14}\\b_{24}\\b_{34}\\b_{44}\\b_{54}\end{pmatrix}=\begin{pmatrix}0\\0\\1\\0\\0\end{pmatrix}$$

$$\overrightarrow{C}_5(s,s)=\sum_{j=1}^{5}b_{j5}\overrightarrow{v}_j=B\begin{pmatrix}b_{15}\\b_{25}\\b_{35}\\b_{45}\\b_{55}\end{pmatrix}=\begin{pmatrix}0\\0\\0\\0\\1\end{pmatrix}\;\Rightarrow\;\begin{pmatrix}b_{15}\\b_{25}\\b_{35}\\b_{45}\\b_{55}\end{pmatrix}=\begin{pmatrix}1\\\frac{1}{2}\frac{a_4-k}{\gamma_{32}}\\0\\0\\0\end{pmatrix}$$

Substituting the coefficients $b_{ji}$, $1\le i,j\le 5$, into the equality $\overrightarrow{C}_i(t,s)=\sum_{j=1}^{5}b_{ji}\overrightarrow{w}_j(t,s)$, $1\le i\le 5$, we obtain

$$\overrightarrow{C}_1(t,s)=-\frac{\gamma_{24}-\gamma_{25}}{\gamma_{15}\gamma_{24}}\cdot\left[\begin{pmatrix}0\\0\\\frac{k-a_4}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)+\begin{pmatrix}0\\0\\-\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)\right]$$
$$-\frac{1}{2}\frac{\gamma_{24}(2\gamma_{35}-a_4+k)+\gamma_{25}(a_4-2a_5+3k)}{\gamma_{32}\gamma_{15}\gamma_{24}}\cdot\left[\begin{pmatrix}0\\0\\\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)+\begin{pmatrix}0\\0\\\frac{a_4-k}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)\right]$$
$$-\frac{\gamma_{45}}{\gamma_{15}}\cdot\begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix}\cdot e^{-a_7(t-s)}-\frac{\gamma_{25}}{\gamma_{15}\gamma_{24}}\cdot\begin{pmatrix}0\\\frac{-a_4a_5-a_4k-a_5^2+a_5k-b}{a_4}\\k-a_5\\0\\1\end{pmatrix}\cdot e^{-a_5(t-s)}+\frac{1}{\gamma_{15}}\cdot\begin{pmatrix}-c(a_5+a_1-a_2)\\-ca_3\\a_1-a_2+k\\-\frac{(a_5+a_1-a_2)a_6c}{a_1-a_2+a_7}\\1\end{pmatrix}\cdot e^{(a_1-a_2)(t-s)}$$

$$\overrightarrow{C}_2(t,s)=-\frac{1}{\gamma_{24}}\cdot\left[\begin{pmatrix}0\\0\\\frac{k-a_4}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)+\begin{pmatrix}0\\0\\-\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)\right]$$
$$+\frac{1}{2}\frac{a_4-3k+2a_5}{\gamma_{32}\gamma_{24}}\cdot\left[\begin{pmatrix}0\\0\\\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)+\begin{pmatrix}0\\0\\\frac{a_4-k}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)\right]$$
$$+\frac{1}{\gamma_{24}}\cdot\begin{pmatrix}0\\\frac{-a_4a_5-a_4k-a_5^2+a_5k-b}{a_4}\\k-a_5\\0\\1\end{pmatrix}\cdot e^{-a_5(t-s)}$$

$$\overrightarrow{C}_3(t,s)=\frac{1}{\gamma_{32}}\cdot\left[\begin{pmatrix}0\\0\\\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)+\begin{pmatrix}0\\0\\\frac{a_4-k}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)\right]$$

$$\overrightarrow{C}_4(t,s)=\begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix}\cdot e^{-a_7(t-s)}$$

$$\overrightarrow{C}_5(t,s)=\begin{pmatrix}0\\0\\\frac{k-a_4}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)+\begin{pmatrix}0\\0\\-\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)$$
$$+\frac{1}{2}\frac{a_4-k}{\gamma_{32}}\left[\begin{pmatrix}0\\0\\\frac{\sqrt{4b-(a_4-k)^2}}{2}\\0\\0\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\cos\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)+\begin{pmatrix}0\\0\\\frac{a_4-k}{2}\\0\\1\end{pmatrix}\cdot e^{-\frac{a_4+k}{2}(t-s)}\cdot\sin\left(\frac{\sqrt{4b-(a_4-k)^2}}{2}(t-s)\right)\right]$$

#### **4. System with Uncertain Coefficient in the Distributed Control**

Consider the following system of equations

$$\begin{cases}\frac{dV}{dt}=\beta V(t)-\gamma F(t)V(t)\\\frac{dC}{dt}=\zeta(m(t))aF(t)V(t)-\mu_c(C(t)-C^*)\\\frac{dF}{dt}=\rho C(t)-\eta\gamma F(t)V(t)-\mu_f F(t)-(b+\triangle b(t))u(t)\\\frac{dm}{dt}=\sigma V(t)-\mu_m m(t)\\\frac{du}{dt}=F(t)-F^*-ku(t)\end{cases}\tag{15}$$

The appearance of $\triangle b(t)$ in the third equation can be explained by the individual reaction of the human body to the drug. Of course, the sensitivity of different patients can differ and can vary in time. We assume below that $\triangle b(t)$ is an essentially bounded function.

This system can be rewritten in the form

$$\begin{cases}x_1'=(a_1-a_2)x_1+g_1(x_1(t),x_3(t))\\x_2'=a_3x_1-a_5x_2+g_2(x_1(t),x_3(t))\\x_3'=-a_8x_1+a_4x_2-a_4x_3-(b+\triangle b(t))x_5+g_3(x_1(t),x_3(t))\\x_4'=a_6x_1-a_7x_4\\x_5'=x_3-kx_5\end{cases}\tag{16}$$

where $g_i(x_1(t),x_3(t))$, $1\le i\le3$, are the results of the "mistakes" we made in the process of the linearization.

It is clear that the models described by systems (15) and (16) were obtained under the assumption that various factors $g_i(t)$ acting on the antigen, plasma cell, and antibody concentrations were neglected. In reality these factors act, although they are "small". Denote the right-hand sides $G_i(t)=g_i(x_1(t),x_3(t))+g_i(t)$ for $i=1,2,3$ and $G_i(t)=g_i(t)$ for $i=4,5$. Denote $F(t)=\operatorname{col}\{G_1(t),\dots,G_5(t)\}$ and assume that $F(t)\in L_\infty^5$.

Consider the system

$$X' = AX + \Delta B\left(t\right)X + F\left(t\right),\tag{17}$$

where

$$X(t)=\begin{pmatrix}x_1(t)\\x_2(t)\\x_3(t)\\x_4(t)\\x_5(t)\end{pmatrix},\qquad\Delta B(t)=\begin{pmatrix}0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&-\triangle b(t)\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix}$$
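Note that $\Delta B(t)$ has a single nonzero entry, so the product $\Delta B(t)\,\mathbb{C}(t,s)$ has only its third row nonzero, equal to $-\triangle b(t)$ times the fifth row of $\mathbb{C}(t,s)$; the column sums appearing in the norm estimates therefore reduce to the fifth row of the Cauchy matrix. A quick check on sample data (all numbers illustrative):

```python
db = 0.3                                   # a sample value of |Delta b(t)|
# a sample 5x5 "Cauchy matrix" slice, just for the structural check
C = [[(i + 1) * 0.1 + (j + 1) * 0.01 for j in range(5)] for i in range(5)]

DB = [[0.0] * 5 for _ in range(5)]
DB[2][4] = -db                             # the (3,5) entry in 1-based indexing

# P = DB * C
P = [[sum(DB[i][m] * C[m][j] for m in range(5)) for j in range(5)] for i in range(5)]
for j in range(5):
    col_abs_sum = sum(abs(P[i][j]) for i in range(5))
    assert abs(col_abs_sum - db * abs(C[4][j])) < 1e-12   # only row 5 of C survives
```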

The natural problem is to estimate the influence of the right-hand side $F(t)$ on the solution $X(t)$. The general solution of the system

$$X' - AX = Z\tag{18}$$

can be represented in the following form (see, for example, [11,12])

$$X(t) = \int\_0^t \mathbb{C}\left(t, s\right) Z\left(s\right) ds + \mathbb{C}\left(t, 0\right) X\left(0\right). \tag{19}$$
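Representation (19) is the variation-of-constants formula. In the scalar case $x'-ax=z$, $x(0)=0$, the Cauchy function is $C(t,s)=e^{a(t-s)}$, and the formula can be checked by direct quadrature (the choices $a=-1$, $z\equiv1$ below are illustrative):

```python
import math

a = -1.0
z = lambda s: 1.0

def x(t, n=20000):
    # trapezoidal approximation of int_0^t e^{a(t-s)} z(s) ds
    h = t / n
    vals = [math.exp(a * (t - i * h)) * z(i * h) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# for a = -1, z = 1 the exact solution is x(t) = 1 - e^{-t}
for t in (0.5, 1.0, 3.0):
    assert abs(x(t) - (1 - math.exp(-t))) < 1e-6
```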

Without loss of generality we assume $X(0)=\operatorname{col}\{0,0,0,0,0\}$. Substituting (19) into (17), we obtain

$$Z(t) - \Delta B\left(t\right) \int\_0^t \mathbb{C}\left(t, s\right) Z\left(s\right) ds = F(t),\tag{20}$$

which can be written in the operator form as

$$Z\left(t\right) = \left(\Omega Z\right)\left(t\right) + F\left(t\right),\tag{21}$$

where the operator $\Omega:L_\infty^5\to L_\infty^5$ ($L_\infty^5$ is the space of vector functions with five essentially bounded components) is defined by the equality

$$(\Omega Z)(t)=\Delta B(t)\int_0^t\mathbb{C}(t,s)Z(s)\,ds.$$
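When $\|\Omega\|<1$, Equation (21) can be solved by successive approximations $Z_{n+1}=\Omega Z_n+F$, which is the Neumann series used below. A scalar discretized sketch (the kernel $e^{-(t-s)}$, the factor $\triangle b=0.3$, and $F\equiv1$ are all illustrative):

```python
import math

db, T, n = 0.3, 5.0, 400          # contraction factor, horizon, grid size
h = T / n
t = [i * h for i in range(n + 1)]
F = [1.0 for _ in t]

def omega(Z):
    # (Omega Z)(t_i) = db * int_0^{t_i} e^{-(t_i - s)} Z(s) ds, trapezoidal rule
    out = []
    for i in range(n + 1):
        vals = [math.exp(-(t[i] - t[j])) * Z[j] for j in range(i + 1)]
        out.append(db * h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if i else 0.0)
    return out

Z, prev_gap = F[:], None
for _ in range(30):
    Znew = [f + w for f, w in zip(F, omega(Z))]
    gap = max(abs(x - y) for x, y in zip(Znew, Z))
    if prev_gap is not None:
        assert gap <= prev_gap + 1e-12          # the iteration contracts
    prev_gap, Z = gap, Znew

# the a priori bound ||Z|| <= ||F|| / (1 - ||Omega||)
assert max(abs(v) for v in Z) <= max(F) / (1 - db) + 1e-6
```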

Denote by $\|\Omega\|$ the norm of the operator $\Omega$. Estimating $\|\Omega\|$ for $(a_4-k)^2-4b>0$, we obtain

$$\|\Omega\|\le\max_{1\le j\le5}\left(\operatorname*{ess\,sup}_{t\ge0}\int_0^t\sum_{i=1}^5\left|(\Delta B(t)\,\mathbb{C}(t,s))_{ij}\right|ds\right).$$

Denoting $Q_j=\operatorname*{ess\,sup}_{t\ge0}\int_0^t\sum_{i=1}^5\left|(\Delta B(t)\,\mathbb{C}(t,s))_{ij}\right|ds$ and $\triangle b^*=\operatorname*{ess\,sup}_{t\ge0}|\triangle b(t)|$, we obtain

$$Q_1=\triangle b^*\left[\left|\frac{\alpha_{24}(\alpha_{32}-\alpha_{35})-\alpha_{25}(\alpha_{33}-\alpha_{34})}{\alpha_{15}\alpha_{24}(\alpha_{31}-\alpha_{32})}\right|\frac{1}{|\lambda_1|}+\left|\frac{\alpha_{24}(\alpha_{31}-\alpha_{35})-\alpha_{25}(\alpha_{31}-\alpha_{34})}{\alpha_{15}\alpha_{24}(\alpha_{31}-\alpha_{32})}\right|\frac{1}{|\lambda_2|}+\left|\frac{\alpha_{35}}{|\lambda_1|+|\lambda_2|}\right|\right],$$
$$Q_2=\triangle b^*\left[\left|\frac{\alpha_{32}-\alpha_{34}}{\alpha_{24}(\alpha_{31}-\alpha_{32})}\right|\frac{1}{|\lambda_1|}+\left|\frac{\alpha_{31}-\alpha_{34}}{\alpha_{24}(\alpha_{31}-\alpha_{32})}\right|\frac{1}{|\lambda_2|}+\left|\frac{1}{a_5\alpha_{24}}\right|\right],$$
$$Q_3=\triangle b^*\left[\frac{1}{|\alpha_{31}-\alpha_{32}|}\frac{1}{|\lambda_1|}+\frac{1}{|\alpha_{31}-\alpha_{32}|}\frac{1}{|\lambda_2|}\right],\qquad Q_4=0,$$
$$Q_5=\triangle b^*\left[\left|\frac{\alpha_{32}}{\alpha_{31}-\alpha_{32}}\right|\frac{1}{|\lambda_1|}+\left|\frac{\alpha_{31}}{\alpha_{31}-\alpha_{32}}\right|\frac{1}{|\lambda_2|}\right].\tag{22}$$

**Theorem 2.** *Let the assumptions of Theorem 1 be fulfilled, $(a_4-k)^2>4b$, and let the inequality $\max_{1\le j\le5}Q_j<1$ be true. Then system (16) is exponentially stable.*

**Proof.** The inequality in the condition of Theorem 2 implies that the norm $\|\Omega\|$ of the operator $\Omega$ is less than one. In this case there exists the inverse operator $(I-\Omega)^{-1}:L_\infty^5\to L_\infty^5$ and $Z=(I-\Omega)^{-1}F=(I+\Omega+\Omega^2+\dots)F$. It is clear that $\|Z\|_{L_\infty^5}\le\frac{1}{1-\|\Omega\|}\|F\|_{L_\infty^5}$. This means that all components of the solution vector $Z$ of system (21) are bounded. The Cauchy matrix of system (16) satisfies the exponential estimate, i.e., there exist positive $N$, $M$ such that

$$\left|\mathbb{C}_{ij}(t,s)\right|\le Ne^{-M(t-s)},\qquad 0\le s\le t<\infty.$$

Then all components of the solution-vector *X*(*t*) of system (17) are bounded, according to representation (19). The exponential stability of the homogeneous system

$$X'(t) = AX(t) + \triangle B(t)X(t)$$

follows now from the Bohl-Perron theorem (see, for example, [11], p. 500, and [12], p. 93).

**Example 1.** *Substituting the values from Remark 1 and setting k* = 4, *b* = 1 *we obtain*

$$Q\_1 \le 327.0253788, \ Q\_2 \le 0.000001437277837, \ Q\_3 \le 1.154699764, \ Q\_4 = 0, \ Q\_5 \le 0.5773500802.$$

*The inequality $327.0253788\cdot\triangle b^*<1$ implies the inequality $\max_{1\le j\le5}Q_j<1$. Thus, if $\triangle b^*<0.003057866651$, then system (16) is exponentially stable according to Theorem 2.*
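The admissible bound on $\triangle b^*$ here is simply the reciprocal of the largest estimate $Q_1$; the stated constants can be checked directly:

```python
# one-line arithmetic check of the constants in Example 1
Q1 = 327.0253788
threshold = 0.003057866651
assert abs(Q1 * threshold - 1.0) < 1e-6   # threshold is 1/Q1
```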

Let us estimate $\|\Omega\|$ for $(a_4-k)^2=4b$. Denoting $P_j=\operatorname*{ess\,sup}_{t\ge0}\int_0^t\sum_{i=1}^5\left|(\Delta B(t)\,\mathbb{C}(t,s))_{ij}\right|ds$, we obtain

$$P_1=\triangle b^*\left[\left|\frac{\beta_{24}\beta_{35}-\beta_{25}\beta_{34}}{\beta_{31}\beta_{15}\beta_{24}}\right|\frac{2}{|a_4+k|}+\left|\frac{\beta_{24}(\beta_{31}-\beta_{35})-\beta_{25}(\beta_{31}-\beta_{34})}{\beta_{31}\beta_{52}\beta_{24}\beta_{15}}\right|\left(\frac{4}{|a_4^2-k^2|}+\frac{2}{|a_4+k|}\right)+\left|\frac{\beta_{25}}{\beta_{15}\beta_{24}}\right|\frac{1}{|a_5|}+\frac{1}{|\beta_{15}|}\frac{1}{|a_1-a_2|}\right],$$
$$P_2=\triangle b^*\left[\left|\frac{\beta_{34}}{\beta_{24}\beta_{31}}\right|\frac{2}{|a_4+k|}+\left|\frac{\beta_{31}-\beta_{34}}{\beta_{31}\beta_{24}\beta_{52}}\right|\left(\frac{4}{|a_4^2-k^2|}+\frac{2}{|a_4+k|}\right)+\frac{1}{|\beta_{24}|}\frac{1}{|a_5|}\right],$$
$$P_3=\triangle b^*\left[\frac{1}{|\beta_{31}|}\frac{2}{|a_4+k|}+\frac{1}{|\beta_{31}\beta_{52}|}\left(\frac{4}{|a_4^2-k^2|}+\frac{2}{|a_4+k|}\right)\right],\qquad P_4=0,$$
$$P_5=\triangle b^*\frac{1}{|\beta_{52}|}\left(\frac{4}{|a_4^2-k^2|}+\frac{2}{|a_4+k|}\right).\tag{23}$$

**Theorem 3.** *Let the assumptions of Theorem 1 be fulfilled, $(a_4-k)^2=4b$, and let the inequality $\max_{1\le j\le5}P_j<1$ be true. Then system (16) is exponentially stable.*

The proof of Theorem 3 repeats the proof of Theorem 2.

**Example 2.** *Substituting the values from Remark 1 and setting k* = 1, *b* = 0.249999902*, we obtain the inequalities*

$$P_1\le4.735918812\cdot10^{13},\quad P_2\le2.047987177\cdot10^{5},\quad P_3\le9.999999608,\quad P_4=0,\quad P_5\le2.999999216.$$

The inequality $4.735918812\cdot10^{13}\cdot\triangle b^*<1$ implies the inequality $\max_{1\le j\le5}P_j<1$.

Thus, if $\triangle b^*<2.111522684\cdot10^{-14}$, then system (16) is exponentially stable, according to Theorem 3.

Let us estimate $\|\Omega\|$ for $(a_4-k)^2-4b<0$. Denoting $R_j=\operatorname*{ess\,sup}_{t\ge0}\int_0^t\sum_{i=1}^5\left|(\Delta B(t)\,\mathbb{C}(t,s))_{ij}\right|ds$, we obtain

$$R_1=\triangle b^*\left[\left|\frac{\gamma_{24}-\gamma_{25}}{\gamma_{15}\gamma_{24}}\right|\frac{2}{|a_4+k|}+\left|\frac{\gamma_{24}(2\gamma_{35}-a_4+k)+\gamma_{25}(a_4-2a_5+3k)}{\gamma_{32}\gamma_{15}\gamma_{24}}\right|\frac{1}{|a_4+k|}+\left|\frac{\gamma_{25}}{\gamma_{15}\gamma_{24}}\right|\frac{1}{|a_5|}+\left|\frac{1}{\gamma_{15}}\right|\frac{1}{|a_1-a_2|}\right],$$
$$R_2=\triangle b^*\left[\frac{1}{|\gamma_{24}|}\frac{2}{|a_4+k|}+\left|\frac{a_4-3k+2a_5}{\gamma_{24}\gamma_{32}}\right|\frac{1}{|a_4+k|}+\frac{1}{|\gamma_{24}|}\frac{1}{|a_5|}\right],$$
$$R_3=\triangle b^*\frac{1}{|\gamma_{32}|}\frac{2}{|a_4+k|},\qquad R_4=0,$$
$$R_5=\triangle b^*\left[\frac{2}{|a_4+k|}+\left|\frac{a_4-k}{\gamma_{32}}\right|\frac{1}{|a_4+k|}\right].\tag{24}$$

**Theorem 4.** *Let the assumptions of Theorem 1 be fulfilled, $(a_4-k)^2<4b$, and let the inequality $\max_{1\le j\le5}R_j<1$ be true. Then system (16) is exponentially stable.*

The proof of Theorem 4 repeats the proof of Theorem 2.

**Example 3.** *Substituting the values from Remark 1 and setting k* = 1, *b* = 2*, we obtain*

$$R_1\le133.8894553,\quad R_2\le6.173038374\cdot10^{-7},\quad R_3\le1.511857554,\quad R_4=0,\quad R_5\le0.7559286288.$$

*The inequality $133.8894553\cdot\triangle b^*<1$ implies the inequality $\max_{1\le j\le5}R_j<1$. Thus, if $\triangle b^*<0.007468848071$, then system (16) is exponentially stable, according to Theorem 4.*

#### **5. Influence of Changes in the Right-Hand Side on Behavior of Solutions**

When constructing a system, we neglect the influence of various factors that seem nonessential to us. The Cauchy matrix $C(t,s)$ allows us to estimate the influence of all these factors on the solution.

Consider the system

$$Y'(t)-AY(t)=G(t)+\triangle G(t),\tag{25}$$

where the matrix $A$, defined by (8), is the matrix of the coefficients of system (7), and $\triangle G(t)\in L_\infty^5$ describes a change of the right-hand side. In the following assertion we estimate the difference between the solution vector $Y(t)=\operatorname{col}\{y_1(t),\dots,y_5(t)\}$ of system (25) and the solution $X(t)=\operatorname{col}\{x_1(t),\dots,x_5(t)\}$ of system (7).

**Theorem 5.** *Let the assumptions of Theorem 1 be fulfilled. Then system* (7) *is exponentially stable and the following inequality*

$$\|Y(t)-X(t)\|\le\|\mathbb{C}\|\,\|\triangle G(t)\|$$

*is true, where*

$$\|\mathbb{C}\|=\max_{1\le i\le5}\left(\sup_{t\ge0}\int_0^t\sum_{j=1}^5\left|c_{ij}(t,s)\right|ds\right),\qquad\|\triangle G(t)\|=\max_{1\le i\le5}\operatorname*{ess\,sup}_{t\ge0}\left|\triangle G_i(t)\right|,$$

$$\|Y(t)-X(t)\|=\max_{1\le i\le5}\operatorname*{ess\,sup}_{t\ge0}\left|y_i(t)-x_i(t)\right|.$$

The proof follows from the representation of the solution of system (7).

The estimates of $\|\mathbb{C}\|$ can be obtained through the estimates of the elements of the Cauchy matrix obtained in Section 3.

**Author Contributions:** Conceptualization, A.D. and I.V.; methodology, A.D. and I.V.; software, I.V. and M.B.; validation, A.D., I.V. and M.B.; formal analysis, A.D., I.V. and M.B.; investigation, A.D., I.V. and M.B.; resources, I.V. and M.B.; data curation, I.V. and M.B.; writing, original draft preparation, A.D., I.V. and M.B.; writing, review and editing, A.D. and I.V.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

MDPI St. Alban-Anlage 66 4052 Basel Switzerland Tel. +41 61 683 77 34 Fax +41 61 302 89 18 www.mdpi.com

*Symmetry* Editorial Office E-mail: symmetry@mdpi.com www.mdpi.com/journal/symmetry
