**Fractional Calculus—Theory and Applications**

Editor

**Jorge E. Macías Díaz**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editor* Jorge E. Macías Díaz, Tallinn University, Estonia; Universidad Autónoma de Aguascalientes, Mexico

*Editorial Office* MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Axioms* (ISSN 2075-1680) (available at: https://www.mdpi.com/journal/axioms/special_issues/fractional_calculus_theory).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-3262-2 (Hbk) ISBN 978-3-0365-3263-9 (PDF)**

© 2022 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**


#### **Nauman Ahmed, Jorge E. Macías-Díaz, Ali Raza, Dumitru Baleanu, Muhammad Rafiq, Zafar Iqbal, Muhammad Ozair Ahmad**

Design, Analysis and Comparison of a Nonstandard Computational Method for the Solution of a General Stochastic Fractional Epidemic Model. Reprinted from: *Axioms* **2022**, *11*, 10, doi:10.3390/axioms11010010. **175**

## **About the Editor**

**Jorge E. Macías-Díaz** is a full-time professor at the Autonomous University of Aguascalientes (UAA), where he carries out teaching, research, outreach and administration activities, and a visiting associate professor at Tallinn University, Estonia. His work is internationally recognized for his contributions to the numerical analysis of partial differential equations, which have been the basis for the development of new results and techniques in the area. His articles focus on the rigorous analysis of numerical techniques and their efficient computational implementation. He was the first investigator to employ the fractional version of the discrete energy method for fractional hyperbolic systems, which has been subsequently used by other scholars to propose and analyze new computational methodologies.

He has 223 articles published or accepted in journals, 198 of them in Science Citation Index journals and more than a third in Q1 journals. He is one of the most active reviewers and editors in Mexico (Publons). He has been a reviewer for more than 150 journals; an editor of *Applied Numerical Mathematics* (Elsevier), the *International Journal of Computer Mathematics* (Taylor & Francis), *Open Physics* (De Gruyter), *Advances in Mathematical Physics* (Hindawi), *Axioms* (MDPI), and *Computational and Applied Mathematics* (Wiley); a guest editor for several Special Issues in the *Journal of Computational and Applied Mathematics* (Elsevier) and *Discrete Dynamics in Nature and Society* (Hindawi); and an evaluator of several national and foreign research proposals.

### *Editorial* **Fractional Calculus—Theory and Applications**

**Jorge E. Macías-Díaz 1,2**


In recent years, fractional calculus has witnessed tremendous progress in various areas of the sciences and mathematics. On the one hand, new definitions of fractional derivatives and integrals have appeared, extending the classical definitions in one sense or another. Moreover, the rigorous analysis of the functional properties of these new definitions has been an active area of research in mathematical analysis. Systems of differential equations with fractional-order operators have been investigated rigorously from the analytical and numerical points of view, and potential applications have been proposed in the sciences and in technology. The purpose of this Special Issue is to serve as a specialized forum for the dissemination of recent progress in the theory of fractional calculus and its potential applications. We invite authors to submit high-quality reports on the analysis of fractional-order differential/integral equations, the analysis of new definitions of fractional derivatives, numerical methods for fractional-order equations, and applications to physical systems governed by fractional differential equations, among other interesting topics of research.

The present Special Issue includes 10 articles, which cover the following topics.


In one of the articles published in this Special Issue [1], the authors considered a fractional-order model of malaria transmission. The stability of the model at the equilibrium points was investigated by applying the Jacobian matrix technique. The role of the basic reproduction number, *R*<sub>0</sub>, in the infection dynamics and stability analysis was elucidated. The results indicated that the system is locally asymptotically stable at the disease-free steady-state solution when *R*<sub>0</sub> < 1; a similar result was obtained for the endemic equilibrium when *R*<sub>0</sub> > 1. The underlying system shows global stability at both steady states. The fractional-order system was then converted into a stochastic model and, for a more realistic study of the disease dynamics, a non-parametric perturbation version of the stochastic epidemic model was developed and studied numerically. The general stochastic fractional Euler method, the Runge–Kutta method, and a proposed numerical method were applied to solve the model. The standard techniques failed to preserve the positivity property of the continuous system, whereas the proposed stochastic fractional nonstandard finite-difference method preserved it. A boundedness result was also established for the nonstandard finite-difference scheme, and all the analytical results were verified by numerical simulations.
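The positivity issue can be illustrated on a far simpler deterministic test problem. For the linear decay equation *u*′ = −λ*u*, whose exact solution stays positive, the explicit Euler scheme produces negative iterates whenever λ*h* > 1, while a Mickens-type nonstandard scheme that evaluates the loss term at the new time level cannot. The sketch below is only an illustration of this general principle, not the authors' stochastic fractional scheme; the rate and step size are arbitrary choices.

```python
lam, h, steps = 4.0, 0.5, 20   # decay rate and step size, with lam*h > 1 on purpose
u0 = 1.0

# Explicit Euler: u_{n+1} = u_n * (1 - lam*h) -- oscillates and goes negative.
euler = [u0]
for _ in range(steps):
    euler.append(euler[-1] * (1.0 - lam * h))

# Nonstandard (Mickens-type) scheme: u_{n+1} = u_n / (1 + lam*h).
# The loss term is treated implicitly, so every iterate is a positive multiple
# of the previous one and positivity is preserved exactly.
nsfd = [u0]
for _ in range(steps):
    nsfd.append(nsfd[-1] / (1.0 + lam * h))

print(min(euler))   # negative: positivity lost
print(min(nsfd))    # positive and decaying toward 0
```

The same implicit treatment of loss terms is the basic design idea behind positivity-preserving nonstandard schemes, including their stochastic and fractional variants.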

The article [2] is devoted to GPU-based modeling for a parallel fractional-order derivative model of the spiral-plate heat exchanger. As pointed out by the authors, a spiral-plate heat exchanger with two fluids is a compact plant that requires only a small space and offers excellent heat-transfer efficiency. However, the spiral-plate heat exchanger is a nonlinear plant with uncertainties, given the differences between the heating fluid and the heated fluid, as well as other complex factors. A fractional-order derivative model is more accurate than the traditional integer-order model. In this paper, a parallel fractional-order derivative model was proposed by exploiting the merits of the graphics processing unit (GPU), and the parallel fractional-order derivative model for the spiral-plate heat exchanger was constructed. Simulations show the relationships between the output temperature of the heated fluid and the orders of the fractional-order derivatives, with the two fluids affected by complex factors, namely the volume flow rates of the hot and cold fluids, respectively.

**Citation:** Macías-Díaz, J.E. Fractional Calculus—Theory and Applications. *Axioms* **2022**, *11*, 43. https://doi.org/10.3390/axioms11020043

Received: 19 January 2022 Accepted: 20 January 2022 Published: 22 January 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In turn, forecasting of the economic growth of the Group of Seven (G7) via a fractional-order gradient descent approach was investigated in [3]. More concretely, that work established a model of economic growth for all G7 countries from 1973 to 2016, in which the gross domestic product (GDP) is related to land area, arable land, population, school attendance, gross capital formation, exports of goods and services, general government final consumer spending, and broad money. Fractional-order and integer-order gradient descent were used to estimate the model parameters, to fit the GDP, and to forecast the GDP from 2017 to 2019. The results showed that the fractional-order gradient descent converges faster and attains better fitting accuracy and predictive performance.
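As a rough illustration of the idea (ours, not the authors' model or data), the sketch below minimizes a one-dimensional quadratic with a commonly used Caputo-type fractional-gradient step, in which the ordinary gradient is scaled by |x<sub>k</sub> − x<sub>k−1</sub>|<sup>1−α</sup>/Γ(2 − α); the objective, step size, and order are all illustrative choices.

```python
from math import gamma

def f_prime(x):                 # gradient of the toy objective f(x) = (x - 3)^2
    return 2.0 * (x - 3.0)

alpha, lr = 0.9, 0.1            # fractional order in (0, 1) and step size (illustrative)
x_prev, x = -0.1, 0.0           # the update rule needs the previous iterate as well

for _ in range(500):
    # Caputo-type fractional gradient taken about the previous iterate:
    #   g = f'(x_k) * |x_k - x_{k-1}|^(1 - alpha) / Gamma(2 - alpha)
    # For alpha -> 1 this reduces to the classical gradient.
    g = f_prime(x) * abs(x - x_prev) ** (1.0 - alpha) / gamma(2.0 - alpha)
    x_prev, x = x, x - lr * g

print(x)   # close to the minimizer x* = 3
```

The extra |Δx|<sup>1−α</sup> factor modulates the effective step length, which is the mechanism behind the altered convergence behavior reported for fractional-order gradient descent.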

In [4], the authors studied approximate and analytic solutions of the time-fractional intermediate diffusion-wave equation associated with the Fokker–Planck operator. More precisely, the time-fractional wave equation associated with the space-fractional Fokker–Planck operator and with a time-fractional damped term was studied in this work. The concept of the Green function was employed to derive the analytic solution of the three-term time-fractional equation. Explicit expressions for the Green function of the three-term time-fractional wave equation with constant coefficients were also studied for two physical and biological models. The explicit analytic solutions for the two models were expressed in terms of the Weber, hypergeometric, exponential, and Mittag–Leffler functions, and the relation to the diffusion equation was given therein. The asymptotic behaviors of the Mittag–Leffler, hypergeometric, and exponential functions were compared numerically. The Grünwald–Letnikov scheme was then used to derive approximate difference schemes for the Caputo time-fractional operator and the Feller–Riesz space-fractional operator. The explicit difference scheme was studied numerically, and simulations of the approximate solutions were plotted for different values of the fractional orders.
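To make the Grünwald–Letnikov idea concrete, the following sketch (ours, not the authors' code) approximates the order-α derivative of f(t) = t on a uniform grid via the standard GL weights w₀ = 1, w<sub>k</sub> = w<sub>k−1</sub>(1 − (α + 1)/k), and checks it against the closed form D<sup>α</sup>t = t<sup>1−α</sup>/Γ(2 − α); the Riemann–Liouville and Caputo derivatives coincide here because f(0) = 0.

```python
from math import gamma

def gl_derivative(f, t, alpha, n):
    """First-order Gruenwald-Letnikov approximation of D^alpha f at time t,
    using n uniform steps on [0, t]."""
    h = t / n
    w, acc = 1.0, f(t)                     # w_0 = 1 multiplies f(t - 0*h)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k       # recurrence for (-1)^k * binom(alpha, k)
        acc += w * f(t - k * h)
    return acc / h ** alpha

alpha, t = 0.5, 1.0
approx = gl_derivative(lambda s: s, t, alpha, 2000)
exact = t ** (1.0 - alpha) / gamma(2.0 - alpha)   # = 2/sqrt(pi) at t = 1
print(approx, exact)
```

The weight recurrence avoids evaluating binomial coefficients of a non-integer order directly, which is why GL-type schemes are the usual starting point for finite-difference discretizations of fractional operators.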

On the other hand, the authors of [5] reported some new fractional estimates of inequalities for LR-*p*-convex interval-valued functions by means of a pseudo-order relation. Interval analysis provides tools to deal with data uncertainty; in general, it is used to handle models whose data are subject to inaccuracies arising from certain kinds of measurements. In this context, the inclusion relation (⊆) and the pseudo-order relation (≤*p*) are two different concepts. By using the latter relation, the authors introduced a new class of nonconvex functions known as LR-*p*-convex interval-valued functions (LR-*p*-convex-IVFs). With the help of this relation, they established a strong relationship between LR-*p*-convex-IVFs and Hermite–Hadamard-type inequalities (HH-type inequalities) via the Katugampola fractional integral operator. The results include a wide class of new and known inequalities for LR-*p*-convex-IVFs and their variant forms as special cases, and useful examples demonstrating the applicability of the theory were also given.

Sequential Riemann–Liouville and Hadamard–Caputo fractional differential systems with nonlocal coupled fractional integral boundary conditions were studied in [6]. In that work, the authors investigated the existence of solutions for a fractional differential system that contains mixed Riemann–Liouville and Hadamard–Caputo fractional derivatives, complemented with nonlocal coupled fractional integral boundary conditions. They derived sufficient conditions for the existence and uniqueness of solutions of that system by using standard fixed-point theorems, such as the Banach contraction mapping principle and the Leray–Schauder alternative. Numerical examples illustrating the theoretical results were also presented.

In [7], a numerical method for solving fractional diffusion-wave equations and nonlinear Fredholm and Volterra integral equations with zero absolute error was presented. The method is based on Euler wavelet approximation and matrix inversion over *M* × *M* collocation points. The proposed equations were formulated in terms of the Caputo fractional derivative, and the authors reduced the problem to a system of algebraic equations by implementing a Gaussian quadrature discretization. The reduced system was generated via the truncated Euler wavelet expansion. Several examples with known exact solutions were solved with zero absolute error. The method was also applied to nonlinear Fredholm and Volterra integral equations and achieved the desired absolute error in all tested examples. The new numerical scheme is appealing in terms of its efficiency and accuracy in the field of numerical approximation.

On the other hand, some non-instantaneous impulsive boundary-value problems containing Caputo fractional derivatives of a function with respect to another function, as well as Riemann–Stieltjes fractional integral boundary conditions, were considered in [8]. In that work, the authors established existence and uniqueness results for a new class of boundary-value problems consisting of non-instantaneous impulses and the Caputo fractional derivative of a function with respect to another function, supplemented with Riemann–Stieltjes fractional integral boundary conditions. The existence of a unique solution was obtained via Banach's contraction mapping principle, while an existence result was established by using the Leray–Schauder nonlinear alternative. Examples illustrating the main results were also constructed.

In article [9], the authors considered a retarded linear fractional differential system with distributed delays and Caputo-type derivatives of incommensurate orders. For this system, several a priori estimates for the solutions were obtained by applying the two traditional approaches (Gronwall's inequality and integral representations of the solutions). As an application of these estimates, different sufficient conditions guaranteeing finite-time stability of the solutions were established, and the conditions obtained were compared with respect to the estimates and norms used.

Finally, a fractional coupled hybrid Sturm–Liouville differential equation with a multi-point boundary coupled hybrid condition was presented in [10]. It is worth recalling here that the Sturm–Liouville differential equation is an important tool in physics, applied mathematics, and other fields of engineering and science, with wide applications in quantum mechanics, classical mechanics, and wave phenomena. In this paper, the authors investigated the coupled hybrid version of the Sturm–Liouville differential equation. They studied the existence of solutions for the coupled hybrid Sturm–Liouville differential equation with a multi-point boundary coupled hybrid condition, and they further investigated the existence of solutions under an integral boundary coupled hybrid condition. To close that work, the authors gave an application and some examples to illustrate their results.

**Funding:** The editor wishes to acknowledge the financial support from the National Council for Science and Technology of Mexico (CONACYT) through grant A1-S-45928.

**Conflicts of Interest:** The editor declares no conflicts of interest.

#### **References**


## *Article* **Fractional Coupled Hybrid Sturm–Liouville Differential Equation with Multi-Point Boundary Coupled Hybrid Condition**

**Mohadeseh Paknazar 1,† and Manuel De La Sen 2,\*,†**


**Abstract:** The Sturm–Liouville differential equation is an important tool for physics, applied mathematics, and other fields of engineering and science and has wide applications in quantum mechanics, classical mechanics, and wave phenomena. In this paper, we investigate the coupled hybrid version of the Sturm–Liouville differential equation. Indeed, we study the existence of solutions for the coupled hybrid Sturm–Liouville differential equation with multi-point boundary coupled hybrid condition. Furthermore, we study the existence of solutions for the coupled hybrid Sturm–Liouville differential equation with an integral boundary coupled hybrid condition. We give an application and some examples to illustrate our results.

**Keywords:** Caputo fractional derivative; fractional differential equations; hybrid differential equations; coupled hybrid Sturm–Liouville differential equation; multi-point boundary coupled hybrid condition; integral boundary coupled hybrid condition; Dhage-type fixed point theorem

**MSC:** 34A08; 47H10

#### **1. Introduction and Preliminaries**

Various papers have been published on fractional differential equations (FDEs); see, e.g., [1–6]. Over the years, hybrid fractional differential equations have attracted much attention, and there have been many works on hybrid differential equations; we refer the readers to the papers [7–17] and the references therein. Throughout the history of mathematics, an important class of problems, the Sturm–Liouville differential equations, has attracted the attention of mathematicians in applied mathematics and engineering, as well as scientists working in physics, quantum mechanics, classical mechanics, and related phenomena; for some examples, see [18,19] and the references listed in those papers. It is therefore important that mathematicians design more general abstract mathematical models of such processes in the form of applicable fractional Sturm–Liouville differential equations; see [20–22].

In 2011, Zhao et al. [15] investigated the following fractional hybrid differential equation involving Riemann–Liouville differential operators of order 0 < *α* < 1:

$$\begin{cases} D^{\alpha}\left(\dfrac{u(t)}{g(t,u(t))}\right) = f(t,u(t)), \ t \in I = [0,1], \\ u(0) = 0, \end{cases} \tag{1}$$

where *g* ∈ *C*(*I* × R, R \ {0}) and *f* ∈ *C*(*I* × R, R).

**Citation:** Paknazar, M.; De La Sen, M. Fractional Coupled Hybrid Sturm–Liouville Differential Equation with Multi-Point Boundary Coupled Hybrid Condition. *Axioms* **2021**, *10*, 65. https://doi.org/10.3390/axioms 10020065

Academic Editor: Jorge E. Macías-Díaz

Received: 9 March 2021 Accepted: 14 April 2021 Published: 16 April 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In 2019, El-Sayed et al. [23] investigated the following fractional Sturm–Liouville differential equation:

$$D\_c^\alpha(p(t)u'(t)) + q(t)u(t) = h(t)f(u(t)), \ t \in I$$

with multi-point boundary hybrid condition

$$\begin{cases} \ u'(0) = 0, \\\\ \sum\_{i=1}^{m} \xi\_i u(a\_i) = \nu \sum\_{j=1}^{n} \eta\_j u(b\_j), \end{cases} \tag{2}$$

where *α* ∈ (0, 1], *D<sub>c</sub><sup>α</sup>* denotes the Caputo fractional derivative, *p* ∈ *C*(*I*, R), and *q*(*t*) and *h*(*t*) are absolutely continuous functions on *I* = [0, *T*], *T* < ∞, with *p*(*t*) ≠ 0 for all *t* ∈ *I*; *f* : R → R is defined and differentiable on the interval *I*; 0 ≤ *a*<sub>1</sub> < *a*<sub>2</sub> < ... < *a<sub>m</sub>* < *c*, *d* ≤ *b*<sub>1</sub> < *b*<sub>2</sub> < ... < *b<sub>n</sub>* < *T*, *c* < *d*; and *ξ<sub>i</sub>*, *η<sub>j</sub>*, *ν* ∈ R.


Motivated by the above results, we study the following fractional coupled hybrid Sturm–Liouville differential equation:

$$D\_c^\alpha \left[ p(t) D\_c^\beta \left( \frac{u(t) - \zeta\_1(t, u(t))}{\zeta\_2(t, u(t))} \right) - k(t, u(t)) \right] + q(t) u(t) = h(t) f(u(t)),$$

with multi-point boundary coupled hybrid condition

$$\begin{cases} D_c^{\beta}\left(\dfrac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)_{t=0} = k(0,u(0)), \\[2ex] \displaystyle\sum_{i=1}^{m}\xi_i\left(\frac{u(a_i)-\zeta_1(a_i,u(a_i))}{\zeta_2(a_i,u(a_i))}\right) = \nu\sum_{j=1}^{n}\eta_j\left(\frac{u(b_j)-\zeta_1(b_j,u(b_j))}{\zeta_2(b_j,u(b_j))}\right), \end{cases}$$

where *α*, *β* ∈ (0, 1], *D<sub>c</sub><sup>α</sup>* and *D<sub>c</sub><sup>β</sup>* denote the Caputo fractional derivative, *p* ∈ *C*(*I*, R), and *q*(*t*) and *h*(*t*) are absolutely continuous functions on *I* = [0, 1], with *p*(*t*) ≠ 0 for all *t* ∈ *I*; *ζ*<sub>2</sub>(·, ·) ∈ *C*(*I* × R, R \ {0}), *ζ*<sub>1</sub>(·, ·) ∈ *C*(*I* × R, R); *f* : R → R is defined on the interval *I*; 0 ≤ *a*<sub>1</sub> < *a*<sub>2</sub> < ... < *a<sub>m</sub>* < *c*, *d* ≤ *b*<sub>1</sub> < *b*<sub>2</sub> < ... < *b<sub>n</sub>* < 1, *c* < *d*; and *ξ<sub>i</sub>*, *η<sub>j</sub>*, *ν* ∈ R. Moreover, we study the existence of solutions for the coupled hybrid Sturm–Liouville differential equation with an integral boundary coupled hybrid condition. We give an application and some examples to illustrate our results.

Define the supremum norm ‖·‖ on *E* = *C*(*I*, R) by ‖*u*‖ = sup<sub>*t*∈*I*</sub> |*u*(*t*)|, and a multiplication on *E* by (*xy*)(*t*) = *x*(*t*)*y*(*t*) for all *x*, *y* ∈ *E*. Evidently, *E* is a Banach algebra with respect to the above supremum norm and multiplication; also, note that ‖*u*‖<sub>*L*¹</sub> = ∫₀¹ |*u*(*s*)| *ds* is the norm in *L*¹[0, 1].

It is well known that the Riemann–Liouville fractional integral of order *α* > 0 of a function *f* is defined by

$$I^{\alpha}f(t) = \frac{1}{\Gamma(\alpha)}\int_0^t (t-s)^{\alpha-1}f(s)\,ds,$$

and the Caputo derivative of order *α* for a function *f* is defined by

$$D\_c^\alpha f(t) = \frac{1}{\Gamma(n-\alpha)} \int\_0^t \frac{f^{(n)}(s)}{(t-s)^{\alpha-n+1}} ds$$

where *n* = [*α*] + 1 (for more details on the Riemann–Liouville fractional integral and the Caputo derivative, see [2,4,5]).
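As a quick numerical sanity check of this definition (our illustration, not part of the paper), one can approximate the integral with a midpoint rule, whose nodes avoid the integrable singularity at *s* = *t*, and compare against the known power rule *D<sub>c</sub><sup>α</sup>t*² = 2*t*<sup>2−α</sup>/Γ(3 − α):

```python
from math import gamma

def caputo(fprime, t, alpha, n=200000):
    """Midpoint-rule approximation of the Caputo derivative for 0 < alpha < 1:
    D_c^alpha f(t) = 1/Gamma(1-alpha) * integral_0^t f'(s) (t-s)^(-alpha) ds.
    Midpoint nodes never hit s = t, so the integrable singularity is avoided."""
    h = t / n
    total = sum(fprime((k + 0.5) * h) * (t - (k + 0.5) * h) ** (-alpha) * h
                for k in range(n))
    return total / gamma(1.0 - alpha)

alpha, t = 0.5, 1.0
approx = caputo(lambda s: 2.0 * s, t, alpha)           # f(t) = t^2, so f'(s) = 2s
exact = 2.0 * t ** (2.0 - alpha) / gamma(3.0 - alpha)  # power rule for D_c^alpha t^2
print(approx, exact)
```

The slow O(h<sup>1−α</sup>) convergence near the singular endpoint is precisely why dedicated discretizations (Grünwald–Letnikov, L1-type schemes) are preferred in practice over naive quadrature.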

**Definition 1.** *Let <sup>α</sup>*, *<sup>β</sup>* <sup>∈</sup> <sup>R</sup>+. *We have*


*(iii) If f*(*t*) *is absolutely continuous on I, then* lim<sub>*α*→1</sub> *D<sub>c</sub><sup>α</sup> f*(*t*) = *D f*(*t*) *and*

$$DI^{\alpha}f(t) = \frac{t^{\alpha-1}}{\Gamma(\alpha)}f(0) + I^{\alpha}Df(t), \ \alpha > 0.$$

*(iv)* $I^{\alpha}t^{\gamma} = \frac{\Gamma(\gamma+1)\,t^{\alpha+\gamma}}{\Gamma(\alpha+\gamma+1)}, \ \gamma > -1.$
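For instance, taking *γ* = 1 and *α* = 1/2 in (iv) gives the half-order integral of the identity function:

$$I^{1/2}t = \frac{\Gamma(2)}{\Gamma(5/2)}\,t^{3/2} = \frac{4}{3\sqrt{\pi}}\,t^{3/2},$$

since Γ(2) = 1 and Γ(5/2) = (3/4)√π. Applying *I*<sup>1/2</sup> once more and using (iv) again recovers *I*<sup>1</sup>*t* = *t*²/2, in agreement with the semigroup property of the Riemann–Liouville integral.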

The following hybrid fixed point result for three operators, due to Dhage [24], plays a key role in our first main theorem.

**Lemma 1.** *Let S be a closed convex, bounded, and nonempty subset of a Banach algebra E and let* A, C : *E* → *E and* B : *S* → *E be three operators such that*


*Then, the operator equation u* = A*u*B*u* + C*u has a solution in S.*

#### **2. Main Results**

In this section, we establish the existence and uniqueness of solutions for the following fractional coupled hybrid Sturm–Liouville differential equation:

$$D_c^{\alpha}\left[p(t)D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) - k(t,u(t))\right] + q(t)u(t) = h(t)f(u(t)), \tag{3}$$

with multi-point boundary coupled hybrid condition

$$\begin{cases} D_c^{\beta}\left(\dfrac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)_{t=0} = k(0,u(0)), \\[2ex] \displaystyle\sum_{i=1}^{m}\xi_i\left(\frac{u(a_i)-\zeta_1(a_i,u(a_i))}{\zeta_2(a_i,u(a_i))}\right) = \nu\sum_{j=1}^{n}\eta_j\left(\frac{u(b_j)-\zeta_1(b_j,u(b_j))}{\zeta_2(b_j,u(b_j))}\right), \end{cases} \tag{4}$$

where *α*, *β* ∈ (0, 1], *D<sub>c</sub><sup>α</sup>* and *D<sub>c</sub><sup>β</sup>* denote the Caputo fractional derivative, *p* ∈ *C*(*I*, R), and *q*(*t*) and *h*(*t*) are absolutely continuous functions on *I* = [0, 1], with *p*(*t*) ≠ 0 for all *t* ∈ *I*; *ζ*<sub>2</sub>(·, ·) ∈ *C*(*I* × R, R \ {0}), *ζ*<sub>1</sub>(·, ·) ∈ *C*(*I* × R, R); *f* : R → R is defined on *I*; 0 ≤ *a*<sub>1</sub> < *a*<sub>2</sub> < ... < *a<sub>m</sub>* < *c*, *d* ≤ *b*<sub>1</sub> < *b*<sub>2</sub> < ... < *b<sub>n</sub>* < 1, *c* < *d*; and *ξ<sub>i</sub>*, *η<sub>j</sub>*, *ν* ∈ R, under the following hypotheses.


$$|\zeta\_2(t, \mathbf{x}) - \zeta\_2(t, \mathbf{y})| \le \mu(t)|\mathbf{x} - \mathbf{y}|$$

for all (*t*, *x*, *y*) ∈ *I* × R × R.

(*D*4) The functions *f*, *k* : *I* × R → R are continuous in both of their variables, and there exist two functions *μ̃*(*t*), *μ*\*(*t*) ≥ 0 (∀*t* ∈ *I*) such that

$$|\zeta\_1(t, \mathbf{x}) - \zeta\_1(t, y)| \le \tilde{\mu}(t)|\mathbf{x} - y|.$$

and

$$|k(t, \mathbf{x}) - k(t, y)| \le \mu^\*(t)|\mathbf{x} - y|$$

for all (*t*, *x*, *y*) ∈ *I* × R × R, respectively.

(*D*5) There exists a number *r* > 0 such that

$$r \ge \frac{\zeta_2^{*}\,\Theta + \zeta_1^{*}}{1 - \|\mu\|\Theta - \|\tilde{\mu}\|} \quad \text{and} \quad \|\mu\|\Theta + \|\tilde{\mu}\| < 1,$$

where

$$\begin{split} \Theta = \frac{1}{p\,\Gamma(\alpha+\beta+1)}\Big[E\Big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\Big)+1\Big]\Big[\Big(\|q\| + \mathcal{K}\|h\| + \frac{\Gamma(\alpha+\beta+1)\|\mu^{*}\|}{\Gamma(\beta+1)}\Big)r \\ + \mathcal{M}\|h\| + \frac{\Gamma(\alpha+\beta+1)k_0}{\Gamma(\beta+1)}\Big], \end{split}$$

where

$$\zeta_1^{*} = \sup_{t\in I}\zeta_1(t,0), \quad \zeta_2^{*} = \sup_{t\in I}\zeta_2(t,0), \quad \mathcal{M} = f(0), \quad k_0 = \sup_{t\in I}k(t,0),$$

and

$$E = \frac{1}{\sum_{i=1}^{m}\xi_i - \nu\sum_{j=1}^{n}\eta_j} \quad \text{with} \quad \sum_{i=1}^{m}\xi_i - \nu\sum_{j=1}^{n}\eta_j \neq 0.$$

**Definition 2.** *We say that D<sub>c</sub><sup>β</sup> has the quotient property with respect to u*<sub>1</sub>, *u*<sub>2</sub> ∈ *L*¹(*I*, R) *with u*<sub>2</sub> ≠ 0 *if*

$$D_c^{\beta}\left(\frac{u_1(t)}{u_2(t)}\right) = \frac{u_2(t)\,D_c^{\beta}(u_1(t)) - u_1(t)\,D_c^{\beta}(u_2(t))}{(u_2(t))^2}.$$

We will use the following condition:

(B\*) *D<sub>c</sub><sup>β</sup>* has the quotient property with respect to *ζ*<sub>1</sub>(*t*, *u*(*t*)) and *ζ*<sub>2</sub>(*t*, *u*(*t*)), and

$$D_c^{\beta}(\zeta_1(t,u(t))),\ D_c^{\beta}(\zeta_2(t,u(t))) \in C(I,\mathbb{R}) \quad (\forall u \in C(I,\mathbb{R})).$$

**Lemma 2.** *Assume that the hypotheses* (*D*1)*–*(*D*2) *are satisfied. Then, the problem* (3) *and* (4) *is equivalent to the integral equation*

$$\begin{split} u(t) &= \zeta_2(t,u(t))\Bigg[E\Bigg(\sum_{i=1}^{m}\xi_i Au(a_i) - \nu\sum_{j=1}^{n}\eta_j Au(b_j) + \nu\sum_{j=1}^{n}\eta_j Bu(b_j) - \sum_{i=1}^{m}\xi_i Bu(a_i) \\ &\quad + \nu\sum_{j=1}^{n}\eta_j Cu(b_j) - \sum_{i=1}^{m}\xi_i Cu(a_i)\Bigg) - Au(t) + Bu(t) + Cu(t)\Bigg] + \zeta_1(t,u(t)), \end{split} \tag{5}$$

*where*

$$Au(t) = I^{\beta}\left(\frac{1}{p(t)}I^{\alpha}(q(t)u(t))\right), \quad Bu(t) = I^{\beta}\left(\frac{1}{p(t)}I^{\alpha}(h(t)f(u(t)))\right), \quad Cu(t) = I^{\beta}\left(\frac{1}{p(t)}k(t,u(t))\right),$$

*and* $E = \frac{1}{\sum_{i=1}^{m}\xi_i - \nu\sum_{j=1}^{n}\eta_j}$. *Moreover,*

- $D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) \in C(I,\mathbb{R})$;
- *if* (B\*) *holds, then* $D_c^{\beta}(u(t)) \in C(I,\mathbb{R})$;
- $\frac{d}{dt}\left[D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) - k(t,u(t))\right] \in L^1[0,1]$.

**Proof.** Equation (3) can be written as

$$I^{1-\alpha}\left(\frac{d}{dt}\left[p(t)D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) - k(t,u(t))\right]\right) = -q(t)u(t) + h(t)f(u(t)).$$

Operating by *I<sup>α</sup>* on both sides, we get

$$I^{1}\left(\frac{d}{dt}\left[p(t)D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) - k(t,u(t))\right]\right) = -I^{\alpha}(q(t)u(t)) + I^{\alpha}(h(t)f(u(t))).$$

Consequently,

$$\begin{split} p(t)D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) - k(t,u(t)) - p(0)D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)_{t=0} + k(0,u(0)) \\ = -I^{\alpha}(q(t)u(t)) + I^{\alpha}(h(t)f(u(t))). \end{split}$$

$$\text{As } D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)_{t=0} = k(0,u(0)), \text{ we have}$$

$$p(t)D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) - k(t,u(t)) = -I^{\alpha}(q(t)u(t)) + I^{\alpha}(h(t)f(u(t))),$$

and so

$$D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) = -\frac{1}{p(t)}I^{\alpha}(q(t)u(t)) + \frac{1}{p(t)}I^{\alpha}(h(t)f(u(t))) + \frac{1}{p(t)}k(t,u(t)). \tag{6}$$

The above equation can be written as

$$I^{1-\beta}\frac{d}{dt}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) = -\frac{1}{p(t)}I^{\alpha}(q(t)u(t)) + \frac{1}{p(t)}I^{\alpha}(h(t)f(u(t))) + \frac{1}{p(t)}k(t,u(t)).$$

Operating by *I<sup>β</sup>* on both sides, we obtain

$$\begin{split} I^{1}\frac{d}{dt}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) &= -I^{\beta}\left(\frac{1}{p(t)}I^{\alpha}(q(t)u(t))\right) + I^{\beta}\left(\frac{1}{p(t)}I^{\alpha}(h(t)f(u(t)))\right) \\ &\quad + I^{\beta}\left(\frac{1}{p(t)}k(t,u(t))\right). \end{split}$$

Therefore, we can obtain

$$\begin{split} \frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))} - \ell &= -I^{\beta}\left(\frac{1}{p(t)}I^{\alpha}(q(t)u(t))\right) + I^{\beta}\left(\frac{1}{p(t)}I^{\alpha}(h(t)f(u(t)))\right) \\ &\quad + I^{\beta}\left(\frac{1}{p(t)}k(t,u(t))\right) = -Au(t) + Bu(t) + Cu(t), \end{split} \tag{7}$$

where $\ell = \frac{u(0)-\zeta_1(0,u(0))}{\zeta_2(0,u(0))}$. Now, we get

$$\sum_{i=1}^{m}\xi_{i}\left(\frac{u(a_{i})-\zeta_{1}(a_{i},u(a_{i}))}{\zeta_{2}(a_{i},u(a_{i}))}\right) - \sum_{i=1}^{m}\xi_{i}\ell = -\sum_{i=1}^{m}\xi_{i}Au(a_{i}) + \sum_{i=1}^{m}\xi_{i}Bu(a_{i}) + \sum_{i=1}^{m}\xi_{i}Cu(a_{i}) \tag{8}$$

and

$$\begin{split} \nu\sum_{j=1}^{n}\eta_{j}\left(\frac{u(b_{j})-\zeta_{1}(b_{j},u(b_{j}))}{\zeta_{2}(b_{j},u(b_{j}))}\right) - \nu\sum_{j=1}^{n}\eta_{j}\ell &= -\nu\sum_{j=1}^{n}\eta_{j}Au(b_{j}) + \nu\sum_{j=1}^{n}\eta_{j}Bu(b_{j}) \\ &\quad + \nu\sum_{j=1}^{n}\eta_{j}Cu(b_{j}). \end{split} \tag{9}$$

On subtracting (9) from (8) and applying

$$\sum_{i=1}^{m}\xi_{i}\left(\frac{u(a_{i})-\zeta_{1}(a_{i},u(a_{i}))}{\zeta_{2}(a_{i},u(a_{i}))}\right) = \nu\sum_{j=1}^{n}\eta_{j}\left(\frac{u(b_{j})-\zeta_{1}(b_{j},u(b_{j}))}{\zeta_{2}(b_{j},u(b_{j}))}\right),$$

we deduce that

$$\begin{split} \ell &= E\Big(\sum_{i=1}^{m}\xi_{i}Au(a_{i}) - \nu\sum_{j=1}^{n}\eta_{j}Au(b_{j}) + \nu\sum_{j=1}^{n}\eta_{j}Bu(b_{j}) - \sum_{i=1}^{m}\xi_{i}Bu(a_{i}) \\ &\quad + \nu\sum_{j=1}^{n}\eta_{j}Cu(b_{j}) - \sum_{i=1}^{m}\xi_{i}Cu(a_{i})\Big), \end{split}$$

where $E = \frac{1}{\sum_{i=1}^{m}\xi_i - \nu\sum_{j=1}^{n}\eta_j}$. Therefore, by substituting the value of $\ell$ in (7), we get

$$\begin{split} u(t) &= \zeta_{2}(t,u(t))\Big[E\Big(\sum_{i=1}^{m}\xi_{i}Au(a_{i}) - \nu\sum_{j=1}^{n}\eta_{j}Au(b_{j}) + \nu\sum_{j=1}^{n}\eta_{j}Bu(b_{j}) - \sum_{i=1}^{m}\xi_{i}Bu(a_{i}) \\ &\quad + \nu\sum_{j=1}^{n}\eta_{j}Cu(b_{j}) - \sum_{i=1}^{m}\xi_{i}Cu(a_{i})\Big) - Au(t) + Bu(t) + Cu(t)\Big] + \zeta_{1}(t,u(t)). \end{split}$$

Conversely, to complete the equivalence between integral Equation (5) and the problem (3) and (4), we have from (6)

$$\begin{split} D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) &= -\frac{1}{p(t)}I^{\alpha}(q(t)u(t)) + \frac{1}{p(t)}I^{\alpha}(h(t)f(u(t))) \\ &\quad + \frac{1}{p(t)}k(t,u(t)) \in C([0,1]), \end{split} \tag{10}$$

and so

$$\frac{d}{dt}\left[p(t)D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)-k(t,u(t))\right] = -\frac{d}{dt}I^{\alpha}(q(t)u(t)) + \frac{d}{dt}I^{\alpha}(h(t)f(u(t))).$$

Operating by $I^{1-\alpha}$ on both sides, we obtain

$$\begin{split} I^{1-\alpha}\frac{d}{dt}\left[p(t)D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)-k(t,u(t))\right] &= -I^{1-\alpha}\frac{d}{dt}I^{\alpha}(q(t)u(t)) \\ &\quad + I^{1-\alpha}\frac{d}{dt}I^{\alpha}(h(t)f(u(t))). \end{split}$$

Now, by using the definition of Caputo derivative and (iii), we get

$$\begin{split} D^{\alpha}\Big[p(t)D_c^{\beta}\Big(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\Big)-k(t,u(t))\Big] &= -I^{1-\alpha}I^{\alpha}\frac{d}{dt}(q(t)u(t)) + I^{1-\alpha}I^{\alpha}\frac{d}{dt}(h(t)f(u(t))) \\ &\quad - I^{1-\alpha}\frac{t^{\alpha-1}}{\Gamma(\alpha)}q(0)u(0) + I^{1-\alpha}\frac{t^{\alpha-1}}{\Gamma(\alpha)}h(0)f(u(0)), \end{split}$$

and then by applying (ii) and (iv), we have

$$\begin{split} D^{\alpha}\Big[p(t)D_c^{\beta}\Big(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\Big)-k(t,u(t))\Big] &= -I^{1}\frac{d}{dt}(q(t)u(t)) + I^{1}\frac{d}{dt}(h(t)f(u(t))) \\ &\quad - q(0)u(0) + h(0)f(u(0)) \\ &= -q(t)u(t) + h(t)f(u(t)), \end{split}$$

and so we get (3). Clearly, from (6), we can get

$$D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)_{t=0} = k(0,u(0)).$$

Moreover, by using a simple computation and (5), we can obtain

$$\sum_{i=1}^{m}\xi_{i}\left(\frac{u(a_{i})-\zeta_{1}(a_{i},u(a_{i}))}{\zeta_{2}(a_{i},u(a_{i}))}\right) = \nu\sum_{j=1}^{n}\eta_{j}\left(\frac{u(b_{j})-\zeta_{1}(b_{j},u(b_{j}))}{\zeta_{2}(b_{j},u(b_{j}))}\right).$$

Now, assume that (*B*∗) holds. From (10), we know that

$$\mathcal{H}(t) := D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) \in C(I,\mathbb{R}).$$

Then,

$$\begin{split} \mathcal{H}(t) &= D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) \\ &= \frac{\zeta_2(t,u(t))D_c^{\beta}(u(t)-\zeta_1(t,u(t))) - (u(t)-\zeta_1(t,u(t)))D_c^{\beta}(\zeta_2(t,u(t)))}{(\zeta_2(t,u(t)))^{2}} \\ &= \frac{\zeta_2(t,u(t))D_c^{\beta}(u(t)) - \zeta_2(t,u(t))D_c^{\beta}(\zeta_1(t,u(t))) - (u(t)-\zeta_1(t,u(t)))D_c^{\beta}(\zeta_2(t,u(t)))}{(\zeta_2(t,u(t)))^{2}} \\ &= \frac{D_c^{\beta}(u(t))}{\zeta_2(t,u(t))} - \frac{\zeta_2(t,u(t))D_c^{\beta}(\zeta_1(t,u(t))) + (u(t)-\zeta_1(t,u(t)))D_c^{\beta}(\zeta_2(t,u(t)))}{(\zeta_2(t,u(t)))^{2}}. \end{split}$$

Therefore, we have

$$\begin{split} D_c^{\beta}(u(t)) &= \zeta_2(t,u(t))\left(\mathcal{H}(t) + \frac{\zeta_2(t,u(t))D_c^{\beta}(\zeta_1(t,u(t))) + (u(t)-\zeta_1(t,u(t)))D_c^{\beta}(\zeta_2(t,u(t)))}{(\zeta_2(t,u(t)))^{2}}\right) \\ &\in C(I,\mathbb{R}). \end{split}$$

Let us prove that $\frac{d}{dt}\left[D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right) - k(t,u(t))\right] \in L_1[0,1]$. From (6) and (iii) of Definition 1, we have

$$\begin{split} \frac{d}{dt}\left[D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)-k(t,u(t))\right] &= \frac{d}{dt}\left(\frac{1}{p(t)}I^{\alpha}(-q(t)u(t)+h(t)f(u(t)))\right) \\ &= -\frac{p'(t)}{p^{2}(t)}I^{\alpha}(-q(t)u(t)+h(t)f(u(t))) \\ &\quad + \frac{1}{p(t)}I^{\alpha}\frac{d}{dt}(-q(t)u(t)+h(t)f(u(t))) \\ &\quad + \frac{1}{p(t)}\frac{t^{\alpha-1}}{\Gamma(\alpha)}(-q(0)u(0)+h(0)f(u(0))). \end{split}$$

Now, we can write

$$\begin{split} \left|\frac{d}{dt}\Big[D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)-k(t,u(t))\Big]\right| &\leq \frac{|p'(t)|}{|p^{2}(t)|}\int_{0}^{t}\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}(|q(s)||u(s)|+|h(s)||f(u(s))|)\,ds \\ &\quad + \frac{1}{|p(t)|}\int_{0}^{t}\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}\Big(|q'(s)||u(s)|+|q(s)||u'(s)| \\ &\qquad + |h'(s)||f(u(s))|+|h(s)|\Big|\frac{\partial f(u(s))}{\partial u}\Big||u'(s)|\Big)\,ds \\ &\quad + \frac{1}{|p(t)|}\frac{t^{\alpha-1}}{\Gamma(\alpha)}(|q(0)||u(0)|+|h(0)||f(u(0))|). \end{split}$$

Therefore,

$$\begin{split} \int_{0}^{1}\left|\frac{d}{dt}\Big[D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)-k(t,u(t))\Big]\right|dt &\leq \int_{0}^{1}\frac{|p'(t)|}{|p^{2}(t)|}\int_{0}^{t}\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}(|q(s)||u(s)|+|h(s)||f(u(s))|)\,ds\,dt \\ &\quad + \int_{0}^{1}\frac{1}{|p(t)|}\int_{0}^{t}\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}\Big(|q'(s)||u(s)|+|q(s)||u'(s)| \\ &\qquad + |h'(s)||f(u(s))|+|h(s)|\Big|\frac{\partial f(u(s))}{\partial u}\Big||u'(s)|\Big)\,ds\,dt \\ &\quad + (|q(0)||u(0)|+|h(0)||f(u(0))|)\int_{0}^{1}\frac{1}{|p(t)|}\frac{t^{\alpha-1}}{\Gamma(\alpha)}\,dt. \end{split}$$

Notice that

$$\begin{split} \int_{0}^{1}\frac{|p'(t)|}{|p^{2}(t)|}\int_{0}^{t}\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}(|q(s)||u(s)|+|h(s)||f(u(s))|)\,ds\,dt \\ = \int_{0}^{1}(|q(s)||u(s)|+|h(s)||f(u(s))|)\,ds\int_{s}^{1}\frac{|p'(t)|}{|p^{2}(t)|}\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}\,dt \\ \leq (\|q\|\|u\|+\|h\|\|f(u)\|)\frac{\|p'\|}{p^{2}\Gamma(\alpha+1)}, \end{split}$$

$$\begin{split} \int_{0}^{1}\frac{1}{|p(t)|}\int_{0}^{t}\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}\Big(|q'(s)||u(s)|+|q(s)||u'(s)|+|h'(s)||f(u(s))| \\ + |h(s)|\Big|\frac{\partial f(u(s))}{\partial u}\Big||u'(s)|\Big)\,ds\,dt \\ \leq \big(\|q'\|_{L_1}\|u\|+\|q\|\|u'\|+\|h'\|_{L_1}\|f\|+\mathcal{K}\|h\|\|u'\|\big)\frac{1}{p\Gamma(\alpha+1)}, \end{split}$$

and

$$\begin{split} \int_{0}^{1}\frac{1}{|p(t)|}\frac{t^{\alpha-1}}{\Gamma(\alpha)}(|q(0)||u(0)|+|h(0)||f(u(0))|)\,dt \\ \leq \frac{1}{p\Gamma(\alpha+1)}(|q(0)||u(0)|+|h(0)||f(u(0))|). \end{split}$$

Then, we can obtain

$$\begin{split} \int_{0}^{1}\left|\frac{d}{dt}\Big[D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)-k(t,u(t))\Big]\right|dt &\leq (\|q\|\|u\|+\|h\|\|f(u)\|)\frac{\|p'\|}{p^{2}\Gamma(\alpha+1)} \\ &\quad + \big(\|q'\|_{L_1}\|u\|+\|q\|\|u'\|+\|h'\|_{L_1}\|f\|+\mathcal{K}\|h\|\|u'\|\big)\frac{1}{p\Gamma(\alpha+1)} \\ &\quad + \frac{1}{p\Gamma(\alpha+1)}(|q(0)||u(0)|+|h(0)||f(u(0))|). \end{split}$$
 
That is, $\frac{d}{dt}\Big[D_c^{\beta}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)-k(t,u(t))\Big] \in L_1[0,1]$. This completes the proof.
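The proof above leans repeatedly on the semigroup identity $I^{\alpha}I^{1-\alpha} = I^{1}$ of the Riemann–Liouville integral. On monomials, where $I^{\alpha}t^{p} = \frac{\Gamma(p+1)}{\Gamma(p+\alpha+1)}t^{p+\alpha}$, the identity reduces to an equality of Gamma-function ratios, which can be checked numerically; the following is only a sanity-check sketch (the helper names are ours):

```python
from math import gamma

def rl_coeff(p, a):
    # Coefficient of the Riemann-Liouville integral of a monomial:
    # I^a t^p = Gamma(p+1)/Gamma(p+a+1) * t^(p+a).
    return gamma(p + 1) / gamma(p + a + 1)

def semigroup_holds(p, a, tol=1e-12):
    # I^a (I^(1-a) t^p) has coefficient rl_coeff(p, 1-a) * rl_coeff(p+1-a, a);
    # the semigroup property says it must equal I^1 t^p = t^(p+1)/(p+1).
    lhs = rl_coeff(p, 1 - a) * rl_coeff(p + 1 - a, a)
    return abs(lhs - 1 / (p + 1)) < tol

assert all(semigroup_holds(p, a) for p in (0.5, 1.0, 2.3) for a in (0.2, 0.8, 0.9))
```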

**Lemma 3.** *Assume that the hypotheses* (*D*1)*–*(*D*5) *are satisfied. Let* $|u(t)| \le r$ *for all* $t \in I$, $Au(t) = I^{\beta}\big(\frac{1}{p(t)} I^{\alpha}(q(t)u(t))\big)$, $Bu(t) = I^{\beta}\big(\frac{1}{p(t)} I^{\alpha}(h(t)f(u(t)))\big)$ *and* $Cu(t) = I^{\beta}\big(\frac{1}{p(t)} k(t,u(t))\big)$*. Then,*

*(i)* $|Au(t)| \le L_1$, $|Bu(t)| \le L_2$ *and* $|Cu(t)| \le L_3$ *for all* $t \in I$*, where* $L_1 = \frac{\|q\|}{p\Gamma(\alpha+\beta+1)}r$, $L_2 = \frac{\mathcal{K}\|h\|}{p\Gamma(\alpha+\beta+1)}r + \frac{\mathcal{M}\|h\|}{p\Gamma(\alpha+\beta+1)}$ *and* $L_3 = \frac{\|\mu^*\|}{p\Gamma(\beta+1)}r + \frac{k_0}{p\Gamma(\beta+1)}$*;*

*(ii) for* $t_1, t_2 \in I$ *with* $t_1 < t_2$,

$$|Au(t\_1) - Au(t\_2)| \le \frac{||q||r}{p\Gamma(\alpha+1)\Gamma(\beta+1)} \left[ |t\_2^{\beta} - t\_1^{\beta} - (t\_2 - t\_1)^{\beta}| + (t\_2 - t\_1)^{\beta} \right],$$

$$|Bu(t_1) - Bu(t_2)| \le \frac{\|h\|(\mathcal{K}r+\mathcal{M})}{p\Gamma(\alpha+1)\Gamma(\beta+1)}\left[|t_2^{\beta}-t_1^{\beta}-(t_2-t_1)^{\beta}| + (t_2-t_1)^{\beta}\right]$$

*and*

$$|\mathbb{C}u(t\_1) - \mathbb{C}u(t\_2)| \le \frac{(||\mu^\*||r + k\_0)}{p\Gamma(\beta + 1)} \left[|t\_2^{\beta} - t\_1^{\beta} - (t\_2 - t\_1)^{\beta}| + (t\_2 - t\_1)^{\beta}\right].$$

**Proof.** (*i*) Assume that |*u*(*t*)| ≤ *r* for all *t* ∈ *I*. Then, we can write

$$\begin{split} |Au(t)| &= \Big|I^{\beta}\Big(\frac{1}{p(s)}I^{\alpha}(q(s)u(s))\Big)\Big| \\ &= \Big|\frac{1}{\Gamma(\alpha)\Gamma(\beta)}\int_{0}^{t}\frac{(t-s)^{\beta-1}}{p(s)}\Big(\int_{0}^{s}(s-\tau)^{\alpha-1}q(\tau)u(\tau)\,d\tau\Big)ds\Big| \\ &\leq \frac{1}{\Gamma(\alpha)\Gamma(\beta)}\int_{0}^{t}\frac{(t-s)^{\beta-1}}{|p(s)|}\Big(\int_{0}^{s}(s-\tau)^{\alpha-1}|q(\tau)||u(\tau)|\,d\tau\Big)ds \\ &\leq \frac{r\|q\|}{p\Gamma(\alpha)\Gamma(\beta)}\int_{0}^{t}(t-s)^{\beta-1}\Big(\int_{0}^{s}(s-\tau)^{\alpha-1}d\tau\Big)ds \\ &= \frac{r\|q\|}{p\Gamma(\alpha+1)\Gamma(\beta)}\int_{0}^{t}s^{\alpha}(t-s)^{\beta-1}ds \\ &\leq \frac{r\|q\|}{p\Gamma(\alpha+1)\Gamma(\beta)}\int_{0}^{1}s^{\alpha}(1-s)^{\beta-1}ds. \end{split}$$

On the other hand, $\mathbf{B}(\alpha+1,\beta) = \int_{0}^{1} s^{\alpha}(1-s)^{\beta-1}\,ds = \frac{\Gamma(\alpha+1)\Gamma(\beta)}{\Gamma(\alpha+\beta+1)}$ (where $\mathbf{B}$ is the beta function). Thus,

$$|Au(t)| \le \frac{||q||}{p\Gamma(\alpha + \beta + 1)}r$$

for all *t* ∈ *I*.
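The beta-function identity used above can also be verified numerically; the sketch below compares a midpoint-rule estimate of $\mathbf{B}(\alpha+1,\beta)$ with the Gamma-function formula (the grid size and tolerance are our choices):

```python
from math import gamma

def beta_midpoint(a, b, n=100_000):
    # Midpoint-rule estimate of B(a, b) = integral_0^1 s^(a-1) (1-s)^(b-1) ds;
    # the midpoint rule never evaluates the weak singularity at s = 1.
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
                   for i in range(n))

alpha, beta = 0.8, 0.9
exact = gamma(alpha + 1) * gamma(beta) / gamma(alpha + beta + 1)  # B(alpha+1, beta)
approx = beta_midpoint(alpha + 1, beta)
assert abs(approx - exact) < 1e-3
```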

Let $|u(t)| \le r$ for all $t \in I$ and $\mathcal{M} = |f(0)|$. At first, notice that

$$\begin{aligned} |f(u(t))| = |f(u) - f(0) + f(0)| &\le \mathcal{K}|u| + \mathcal{M} \\ &\le \mathcal{K}r + \mathcal{M}. \end{aligned}$$

Therefore, we have

$$\begin{split} |Bu(t)| &= \Big|I^{\beta}\Big(\frac{1}{p(s)}I^{\alpha}(h(s)f(u(s)))\Big)\Big| \\ &= \Big|\frac{1}{\Gamma(\alpha)\Gamma(\beta)}\int_{0}^{t}\frac{(t-s)^{\beta-1}}{p(s)}\Big(\int_{0}^{s}(s-\tau)^{\alpha-1}h(\tau)f(u(\tau))\,d\tau\Big)ds\Big| \\ &\leq \frac{1}{\Gamma(\alpha)\Gamma(\beta)}\int_{0}^{t}\frac{(t-s)^{\beta-1}}{|p(s)|}\Big(\int_{0}^{s}(s-\tau)^{\alpha-1}|h(\tau)||f(u(\tau))|\,d\tau\Big)ds \\ &\leq \frac{(\mathcal{K}r+\mathcal{M})\|h\|}{p\Gamma(\alpha)\Gamma(\beta)}\int_{0}^{t}(t-s)^{\beta-1}\Big(\int_{0}^{s}(s-\tau)^{\alpha-1}d\tau\Big)ds \\ &= \frac{\mathcal{K}\|h\|}{p\Gamma(\alpha+\beta+1)}r + \frac{\mathcal{M}\|h\|}{p\Gamma(\alpha+\beta+1)}. \end{split}$$

Similarly, we can prove that

$$|Cu(t)| \le \frac{\|\mu^*\|}{p\Gamma(\beta+1)}r + \frac{k_0}{p\Gamma(\beta+1)}.$$

(*ii*) Let *t*1, *t*<sup>2</sup> ∈ *I* with *t*<sup>1</sup> < *t*2. Thus,

$$\begin{split} |Au(t_1)-Au(t_2)| &= \frac{1}{\Gamma(\beta)}\Big|\int_{0}^{t_1}\frac{(t_1-s)^{\beta-1}}{p(s)}I^{\alpha}(q(s)u(s))\,ds - \int_{0}^{t_2}\frac{(t_2-s)^{\beta-1}}{p(s)}I^{\alpha}(q(s)u(s))\,ds\Big| \\ &= \frac{1}{\Gamma(\beta)}\Big|\int_{0}^{t_1}\frac{(t_1-s)^{\beta-1}-(t_2-s)^{\beta-1}}{p(s)}I^{\alpha}(q(s)u(s))\,ds \\ &\quad - \int_{t_1}^{t_2}\frac{(t_2-s)^{\beta-1}}{p(s)}I^{\alpha}(q(s)u(s))\,ds\Big| \\ &\leq \frac{1}{\Gamma(\beta)}\int_{0}^{t_1}\frac{|(t_1-s)^{\beta-1}-(t_2-s)^{\beta-1}|}{|p(s)|}|I^{\alpha}(q(s)u(s))|\,ds \\ &\quad + \frac{1}{\Gamma(\beta)}\int_{t_1}^{t_2}\frac{(t_2-s)^{\beta-1}}{|p(s)|}|I^{\alpha}(q(s)u(s))|\,ds. \end{split}$$

Now, as $|I^{\alpha}(q(s)u(s))| \le \|q\|r\, I^{\alpha}(1) = \frac{\|q\|r\, s^{\alpha}}{\Gamma(\alpha+1)} \le \frac{\|q\|r}{\Gamma(\alpha+1)}$, then

$$\begin{split} |Au(t_1)-Au(t_2)| &\leq \frac{\|q\|r}{p\Gamma(\alpha+1)\Gamma(\beta)}\left[\int_{0}^{t_1}|(t_1-s)^{\beta-1}-(t_2-s)^{\beta-1}|\,ds + \int_{t_1}^{t_2}(t_2-s)^{\beta-1}\,ds\right] \\ &= \frac{\|q\|r}{p\Gamma(\alpha+1)\Gamma(\beta+1)}\left[|t_2^{\beta}-t_1^{\beta}-(t_2-t_1)^{\beta}| + (t_2-t_1)^{\beta}\right]. \end{split}$$

Similarly, we have

$$|Bu(t_1) - Bu(t_2)| \le \frac{\|h\|(\mathcal{K}r+\mathcal{M})}{p\Gamma(\alpha+1)\Gamma(\beta+1)}\left[|t_2^{\beta}-t_1^{\beta}-(t_2-t_1)^{\beta}| + (t_2-t_1)^{\beta}\right]$$

and

$$|Cu(t_1) - Cu(t_2)| \le \frac{(\|\mu^*\|r + k_0)}{p\Gamma(\beta+1)}\left[|t_2^{\beta}-t_1^{\beta}-(t_2-t_1)^{\beta}| + (t_2-t_1)^{\beta}\right].$$
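All three estimates in (ii) share the factor $|t_2^{\beta}-t_1^{\beta}-(t_2-t_1)^{\beta}| + (t_2-t_1)^{\beta}$, which tends to $0$ as $t_2 \to t_1$; this is what later makes $\mathcal{B}(S_r)$ equi-continuous. A quick numerical illustration (the sample values of $\beta$ and $t_1$ are ours):

```python
def modulus(t1, t2, beta):
    # The bracket from Lemma 3 (ii): |t2^b - t1^b - (t2-t1)^b| + (t2-t1)^b.
    return abs(t2**beta - t1**beta - (t2 - t1)**beta) + (t2 - t1)**beta

beta, t1 = 0.9, 0.5                     # sample values, chosen for illustration
values = [modulus(t1, t1 + h, beta) for h in (1e-1, 1e-2, 1e-3, 1e-4)]
# The bracket shrinks monotonically as t2 approaches t1.
assert all(a > b for a, b in zip(values, values[1:]))
assert values[-1] < 1e-3
```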

Now, we are ready to state and prove our main theorem.

**Theorem 1.** *Let the hypotheses* (*D*1)*–*(*D*5) *be satisfied. Then, the coupled hybrid Sturm–Liouville differential Equation* (3) *with multi-point boundary hybrid condition* (4) *has a solution* $u \in C(I,\mathbb{R})$*. Furthermore, if* $(\mathcal{B}^*)$ *holds, then* $D_c^{\beta}(u(t)) \in C(I,\mathbb{R})$*.*

**Proof.** Let *E* = *C*(*I*, R). From (*D*5), we know that there exists a number *r* > 0 such that

$$r \ge \frac{\zeta_2^*\Theta + \zeta_1^*}{1 - \|\mu\|\Theta - \|\tilde{\mu}\|} \quad \text{and} \quad \|\mu\|\Theta + \|\tilde{\mu}\| < 1,$$

where

$$\begin{split} \Theta &= \frac{1}{p\Gamma(\alpha+\beta+1)}\Big[|E|\Big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\Big)+1\Big]\Big[\Big(\|q\| + \mathcal{K}\|h\| + \frac{\Gamma(\alpha+\beta+1)\|\mu^*\|}{\Gamma(\beta+1)}\Big)r \\ &\quad + \mathcal{M}\|h\| + \frac{\Gamma(\alpha+\beta+1)k_0}{\Gamma(\beta+1)}\Big], \end{split}$$

$\zeta_1^* = \sup_{t \in I}\zeta_1(t,0)$, $\zeta_2^* = \sup_{t \in I}\zeta_2(t,0)$, $k_0 = \sup_{t \in I}k(t,0)$ and $\mathcal{M} = |f(0)|$. Define the subset $S_r$ of $E$ by

$$\mathcal{S}\_r = \{ \mathfrak{u} \in E \, : \, \|\mathfrak{u}\| \le r \}. $$

Clearly, *Sr* is a closed, convex, and bounded subset of *E*. From Lemma 2, we know that the problems in (3) and (4) are equivalent to the equation

$$\begin{split} u(t) &= \zeta_{2}(t,u(t))\Big[E\Big(\sum_{i=1}^{m}\xi_{i}Au(a_{i}) - \nu\sum_{j=1}^{n}\eta_{j}Au(b_{j}) + \nu\sum_{j=1}^{n}\eta_{j}Bu(b_{j}) - \sum_{i=1}^{m}\xi_{i}Bu(a_{i}) \\ &\quad + \nu\sum_{j=1}^{n}\eta_{j}Cu(b_{j}) - \sum_{i=1}^{m}\xi_{i}Cu(a_{i})\Big) - Au(t) + Bu(t) + Cu(t)\Big] + \zeta_{1}(t,u(t)), \ t \in I. \end{split} \tag{11}$$

Define three operators A, C : *E* → *E* and B : *Sr* → *E* by

$$\mathcal{A}u(t) = \zeta_2(t,u(t)), \ t \in I,$$

$$\begin{split} \mathcal{B}u(t) &= E\Big(\sum_{i=1}^{m}\xi_i Au(a_i) - \nu\sum_{j=1}^{n}\eta_j Au(b_j) + \nu\sum_{j=1}^{n}\eta_j Bu(b_j) - \sum_{i=1}^{m}\xi_i Bu(a_i) \\ &\quad + \nu\sum_{j=1}^{n}\eta_j Cu(b_j) - \sum_{i=1}^{m}\xi_i Cu(a_i)\Big) - Au(t) + Bu(t) + Cu(t), \ t \in I, \end{split}$$

and

$$\mathcal{C}u(t) = \zeta_1(t,u(t)), \ t \in I.$$

Now, the integral Equation (11) can be written as

$$u(t) = \mathcal{A}u(t)\mathcal{B}u(t) + \mathcal{C}u(t), \ t \in I.$$

In the following steps, we will show that the operators A, B, and C satisfy all the conditions of Lemma 1.

**Step 1:** In this step, we show that A and C are Lipschitzian on *E*. Let *u*, *v* ∈ *E*, then by (*D*3), we have

$$|\mathcal{A}u(t) - \mathcal{A}v(t)| = |\zeta_2(t,u(t)) - \zeta_2(t,v(t))| \le \mu(t)|u(t)-v(t)|$$

for all *t* ∈ *I*. Taking the supremum over *t*, we get

$$\|\mathcal{A}u - \mathcal{A}v\| \le \|\mu\|\|u-v\|.$$

Similarly, by applying (*D*3), we can obtain

$$\|\mathcal{C}u - \mathcal{C}v\| \le \|\tilde{\mu}\|\|u-v\|.$$

That is, $\mathcal{A}$ and $\mathcal{C}$ are Lipschitzian with Lipschitz constants $\|\mu\|$ and $\|\tilde{\mu}\|$, respectively.

**Step 2:** We show that $\mathcal{B}$ is a compact and continuous operator from $S_r$ into $E$. First, we show that $\mathcal{B}$ is continuous on $S_r$. Let $\{u_n\}$ be a sequence in $S_r$ converging to a point $u \in S_r$. Then, by the Lebesgue dominated convergence theorem,

$$\begin{split} \lim_{n\to\infty}\mathcal{B}u_n(t) &= \lim_{n\to\infty}\Big[E\Big(\sum_{i=1}^{m}\xi_i Au_n(a_i) - \nu\sum_{j=1}^{n}\eta_j Au_n(b_j) + \nu\sum_{j=1}^{n}\eta_j Bu_n(b_j) - \sum_{i=1}^{m}\xi_i Bu_n(a_i) \\ &\quad + \nu\sum_{j=1}^{n}\eta_j Cu_n(b_j) - \sum_{i=1}^{m}\xi_i Cu_n(a_i)\Big) - Au_n(t) + Bu_n(t) + Cu_n(t)\Big] \\ &= E\Big(\sum_{i=1}^{m}\xi_i A\big(\lim_{n\to\infty}u_n(a_i)\big) - \nu\sum_{j=1}^{n}\eta_j A\big(\lim_{n\to\infty}u_n(b_j)\big) + \nu\sum_{j=1}^{n}\eta_j B\big(\lim_{n\to\infty}u_n(b_j)\big) \\ &\quad - \sum_{i=1}^{m}\xi_i B\big(\lim_{n\to\infty}u_n(a_i)\big) + \nu\sum_{j=1}^{n}\eta_j C\big(\lim_{n\to\infty}u_n(b_j)\big) - \sum_{i=1}^{m}\xi_i C\big(\lim_{n\to\infty}u_n(a_i)\big)\Big) \\ &\quad - A\big(\lim_{n\to\infty}u_n(t)\big) + B\big(\lim_{n\to\infty}u_n(t)\big) + C\big(\lim_{n\to\infty}u_n(t)\big) \\ &= E\Big(\sum_{i=1}^{m}\xi_i Au(a_i) - \nu\sum_{j=1}^{n}\eta_j Au(b_j) + \nu\sum_{j=1}^{n}\eta_j Bu(b_j) - \sum_{i=1}^{m}\xi_i Bu(a_i) \\ &\quad + \nu\sum_{j=1}^{n}\eta_j Cu(b_j) - \sum_{i=1}^{m}\xi_i Cu(a_i)\Big) - Au(t) + Bu(t) + Cu(t) = \mathcal{B}u(t) \end{split}$$

for all *t* ∈ *I*. That is, B is a continuous operator on *Sr*.

Next, we show that the set $\mathcal{B}(S_r)$ is uniformly bounded. For any $u \in S_r$, by using Lemma 3 (i), we have

$$\begin{split} |\mathcal{B}u(t)| &\leq |E|\Big(\sum_{i=1}^{m}|\xi_i||Au(a_i)| + |\nu|\sum_{j=1}^{n}|\eta_j||Au(b_j)| + |\nu|\sum_{j=1}^{n}|\eta_j||Bu(b_j)| \\ &\quad + \sum_{i=1}^{m}|\xi_i||Bu(a_i)| + |\nu|\sum_{j=1}^{n}|\eta_j||Cu(b_j)| + \sum_{i=1}^{m}|\xi_i||Cu(a_i)|\Big) \\ &\quad + |Au(t)| + |Bu(t)| + |Cu(t)| \\ &\leq |E|\sum_{i=1}^{m}|\xi_i|L_1 + |E||\nu|\sum_{j=1}^{n}|\eta_j|L_1 + |E||\nu|\sum_{j=1}^{n}|\eta_j|L_2 + |E|\sum_{i=1}^{m}|\xi_i|L_2 \\ &\quad + |E||\nu|\sum_{j=1}^{n}|\eta_j|L_3 + |E|\sum_{i=1}^{m}|\xi_i|L_3 + L_1 + L_2 + L_3 \\ &= \Big[|E|\Big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\Big)+1\Big](L_1 + L_2 + L_3). \end{split}$$

Now, as

$$\begin{split} L_1 + L_2 + L_3 &= \frac{\|q\|}{p\Gamma(\alpha+\beta+1)}r + \frac{\mathcal{K}\|h\|}{p\Gamma(\alpha+\beta+1)}r + \frac{\|\mu^*\|}{p\Gamma(\beta+1)}r + \frac{\mathcal{M}\|h\|}{p\Gamma(\alpha+\beta+1)} + \frac{k_0}{p\Gamma(\beta+1)} \\ &= \frac{1}{p\Gamma(\alpha+\beta+1)}\Big[\Big(\|q\| + \mathcal{K}\|h\| + \frac{\Gamma(\alpha+\beta+1)\|\mu^*\|}{\Gamma(\beta+1)}\Big)r + \mathcal{M}\|h\| + \frac{\Gamma(\alpha+\beta+1)k_0}{\Gamma(\beta+1)}\Big], \end{split}$$

then we get

$$\begin{split} |\mathcal{B}u(t)| &\leq \frac{1}{p\Gamma(\alpha+\beta+1)}\Big[|E|\Big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\Big)+1\Big]\Big[\Big(\|q\| + \mathcal{K}\|h\| \\ &\quad + \frac{\Gamma(\alpha+\beta+1)\|\mu^*\|}{\Gamma(\beta+1)}\Big)r + \mathcal{M}\|h\| + \frac{\Gamma(\alpha+\beta+1)k_0}{\Gamma(\beta+1)}\Big] = \Theta. \end{split}$$

Taking the supremum over $t$, we get

$$\|\mathcal{B}u\| \le \Theta$$

for all *u* ∈ *Sr*. This shows that B is uniformly bounded on *Sr*.

Now, we show that $\mathcal{B}(S_r)$ is an equi-continuous set in $E$. Let $t_1, t_2 \in I$ with $t_1 < t_2$. Then, for any $u \in S_r$, by applying Lemma 3 (ii), we have

$$\begin{split} |\mathcal{B}u(t_1) - \mathcal{B}u(t_2)| &= |-Au(t_1) + Au(t_2) + Bu(t_1) - Bu(t_2) + Cu(t_1) - Cu(t_2)| \\ &\le |Au(t_1)-Au(t_2)| + |Bu(t_1)-Bu(t_2)| + |Cu(t_1)-Cu(t_2)| \\ &\le \frac{\|q\|r}{p\Gamma(\alpha+1)\Gamma(\beta+1)}\Big[|t_2^{\beta}-t_1^{\beta}-(t_2-t_1)^{\beta}| + (t_2-t_1)^{\beta}\Big] \\ &\quad + \frac{\|h\|(\mathcal{K}r+\mathcal{M})}{p\Gamma(\alpha+1)\Gamma(\beta+1)}\Big[|t_2^{\beta}-t_1^{\beta}-(t_2-t_1)^{\beta}| + (t_2-t_1)^{\beta}\Big] \\ &\quad + \frac{(\|\mu^*\|r + k_0)}{p\Gamma(\beta+1)}\Big[|t_2^{\beta}-t_1^{\beta}-(t_2-t_1)^{\beta}| + (t_2-t_1)^{\beta}\Big]. \end{split}$$

Then, for $\varepsilon > 0$, there exists $\delta > 0$ such that

$$|t_1 - t_2| < \delta \implies |\mathcal{B}u(t_1) - \mathcal{B}u(t_2)| < \varepsilon,$$

for all $t_1, t_2 \in I$ and for all $u \in S_r$. This shows that $\mathcal{B}(S_r)$ is an equi-continuous set in $E$. Therefore, $\mathcal{B}(S_r)$ is uniformly bounded and equi-continuous in $E$, and hence relatively compact by the Arzelà–Ascoli theorem. As a consequence, $\mathcal{B}$ is a completely continuous operator on $S_r$.

**Step 3:** Let *u* ∈ *E* and *v* ∈ *Sr* be two given elements such that *u* = A*u*B*v* + C*u*. Then, we get

$$\begin{split} |u(t)| &\le |\mathcal{A}u(t)||\mathcal{B}v(t)| + |\mathcal{C}u(t)| \\ &\le \Theta|\zeta_2(t,u(t))| + |\zeta_1(t,u(t))| \\ &= \Theta|\zeta_2(t,u(t)) - \zeta_2(t,0) + \zeta_2(t,0)| + |\zeta_1(t,u(t)) - \zeta_1(t,0) + \zeta_1(t,0)| \\ &\le \Theta(\|\mu\||u(t)| + \zeta_2^*) + \|\tilde{\mu}\||u(t)| + \zeta_1^* \end{split}$$

and so

$$|u(t)| \le \frac{\zeta_2^*\Theta + \zeta_1^*}{1 - \|\mu\|\Theta - \|\tilde{\mu}\|} \le r.$$

Taking the supremum over t, we get

$$\|u\| \le r.$$

**Step 4:** Finally, we prove that $\delta M + \rho < 1$. As $M = \|\mathcal{B}(S_r)\| = \sup_{u \in S_r}\{\sup_{t \in I}|\mathcal{B}u(t)|\} \le \Theta$, we have

$$\|\mu\|M + \|\tilde{\mu}\| \le \|\mu\|\Theta + \|\tilde{\mu}\| < 1,$$

where $\delta = \|\mu\|$ and $\rho = \|\tilde{\mu}\|$. Therefore, all conditions of Lemma 1 hold and the operator equation $u = \mathcal{A}u\mathcal{B}u + \mathcal{C}u$ has a solution in $S_r$. Thus, the problems (3) and (4) have a solution $u \in C(I,\mathbb{R})$.

**Example 1.** *Let us consider the following fractional coupled hybrid Sturm–Liouville differential equation:*

$$\begin{split} D_c^{\frac{4}{5}}\left(1000\sqrt{e^{t}+t^{2}}\, D_c^{\frac{9}{10}}\left(\frac{u(t)-\zeta_{1}(t,u(t))}{\zeta_{2}(t,u(t))}\right)-k(t,u(t))\right) + e^{-t}\cos^{2}(t)\,u(t) \\ = e^{-\frac{t}{1+t}}\tan^{-1}(u(t)+1), \ t \in I \end{split} \tag{12}$$

*with boundary values*

$$\begin{cases} D_c^{\frac{9}{10}}\left(\dfrac{u(t)-\zeta_{1}(t,u(t))}{\zeta_{2}(t,u(t))}\right)_{t=0} = \dfrac{1}{240}u(0), & t \in I = [0,1], \\[3mm] \displaystyle\sum_{i=1}^{2}\frac{1}{4i}\left(\frac{u(a_{i})-\zeta_{1}(a_{i},u(a_{i}))}{\zeta_{2}(a_{i},u(a_{i}))}\right) = \frac{1}{3}\sum_{j=1}^{3}\frac{1}{2^{j}}\left(\frac{u(b_{j})-\zeta_{1}(b_{j},u(b_{j}))}{\zeta_{2}(b_{j},u(b_{j}))}\right), \end{cases} \tag{13}$$

*where*

$$\zeta\_1(t, u(t)) = \frac{e^{-t}}{300} \left( u(t) + e^{-\pi t} \right) + \frac{1}{300 + \ln(t^2 + t + 1)}$$

$$\zeta\_2(t, u(t)) = \frac{\cos^2(\pi t)}{(500 + \ln(1 + e^{\pi t + 1}))} \frac{|u(t)|}{1 + |u(t)|} + e^{-\sin^2(\pi t)}$$

*and*

$$k(t, u(t)) = \frac{e^{-t}}{100}u(t) + e^{-t^2}.$$

*In this case, we take* $\alpha = \frac{4}{5}$, $\beta = \frac{9}{10}$, $r = 0.1$, $\xi_1 = \frac{1}{4}$, $\xi_2 = \frac{1}{8}$, $\eta_1 = \frac{1}{2}$, $\eta_2 = \frac{1}{4}$, $\eta_3 = \frac{1}{8}$, $\nu = \frac{1}{3}$, $p(t) = 1000\sqrt{e^{t}+t^{2}}$, $q(t) = e^{-t}\cos^{2}(t)$, $h(t) = e^{-\frac{t}{1+t}}$ *and* $f(u(t)) = \tan^{-1}(u(t)+1)$*. Therefore,* $\big|\frac{\partial f(u)}{\partial u}\big| \le 1 = \mathcal{K}$, $\mathcal{M} = \frac{\pi}{4}$, $p = 1000$, $\|q\| = 1$ *and* $\|h\| = 1$*. Further,*

$$|\zeta_1(t,u(t)) - \zeta_1(t,v(t))| \le \frac{e^{-t}}{300}|u(t)-v(t)|,$$

$$\begin{split} |\zeta_2(t,u(t)) - \zeta_2(t,v(t))| &= \frac{\cos^{2}(\pi t)}{500+\ln(1+e^{\pi t+1})}\,\frac{\big||u(t)|-|v(t)|\big|}{(1+|u(t)|)(1+|v(t)|)} \\ &\le \frac{\cos^{2}(\pi t)}{500+\ln(1+e^{\pi t+1})}|u(t)-v(t)| \end{split}$$

*and*

$$|k(t,u(t)) - k(t,v(t))| \le \frac{e^{-t}}{100}|u(t)-v(t)|.$$

*Then,* $\zeta_1^* = \sup_{t \in I}\zeta_1(t,0) = \frac{1}{150}$, $\zeta_2^* = \sup_{t \in I}\zeta_2(t,0) = 1$, $k_0 = \sup_{t \in I}k(t,0) = 1$, $\|\mu\| = \frac{1}{500+\ln(1+e)}$, $\|\mu^*\| = \frac{1}{100}$ *and* $\|\tilde{\mu}\| = \frac{1}{300}$*. Furthermore,* $\sum_{i=1}^{2}\frac{1}{4i} - \frac{1}{3}\sum_{j=1}^{3}\frac{1}{2^{j}} = \frac{3}{8} - \frac{7}{24} = \frac{1}{12} \neq 0$*, and so* $E = 12$*. Then,*

$$\begin{split} \Theta &= \frac{1}{p\Gamma(\alpha+\beta+1)}\Big[|E|\Big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\Big)+1\Big]\Big[\Big(\|q\| + \mathcal{K}\|h\| + \frac{\Gamma(\alpha+\beta+1)\|\mu^*\|}{\Gamma(\beta+1)}\Big)r \\ &\quad + \mathcal{M}\|h\| + \frac{\Gamma(\alpha+\beta+1)k_0}{\Gamma(\beta+1)}\Big] \\ &\approx \frac{1}{1000\,\Gamma(2.7)}\Big[12\Big(\sum_{i=1}^{2}\frac{1}{4i} + \frac{1}{3}\sum_{j=1}^{3}\frac{1}{2^{j}}\Big)+1\Big]\Big[1.807699588 + \frac{\pi}{4}\Big] \approx 0.0151084953 \end{split}$$

*and so*

$$r = 0.1 \ge 0.0218486492 \approx \frac{\zeta_2^*\Theta + \zeta_1^*}{1 - \|\mu\|\Theta - \|\tilde{\mu}\|}$$

*and*

$$\|\mu\|\Theta + \|\tilde{\mu}\| \approx 0.0033634712 < 1.$$

*As all the conditions of Theorem 1 are satisfied, the problems (12) and (13) have a solution.*
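The constants of Example 1 can be reproduced in a few lines of code; this is only a sanity-check sketch (the variable names and tolerances are ours), not part of the example:

```python
from math import gamma, pi, log, e

# Data of Example 1.
alpha, beta, r = 4/5, 9/10, 0.1
p, q_norm, h_norm, K, M = 1000, 1, 1, 1, pi/4
mu = 1/(500 + log(1 + e)); mu_star = 1/100; mu_tilde = 1/300
zeta1, zeta2, k0 = 1/150, 1, 1
E = 12

# Bracket |E|(sum|xi_i| + |nu| sum|eta_j|) + 1 with the stated weights.
S = E * (sum(1/(4*i) for i in (1, 2)) + (1/3)*sum(1/2**j for j in (1, 2, 3))) + 1

ratio = gamma(alpha + beta + 1) / gamma(beta + 1)
theta = (S / (p * gamma(alpha + beta + 1))) * (
    (q_norm + K*h_norm + ratio*mu_star) * r + M*h_norm + ratio*k0)

assert abs(theta - 0.0151084953) < 1e-6                       # Theta as claimed
assert (zeta2*theta + zeta1) / (1 - mu*theta - mu_tilde) <= r  # r-condition
assert mu*theta + mu_tilde < 1                                 # contraction condition
```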

**Example 2.** *Let us consider the following fractional coupled hybrid Sturm–Liouville differential equation:*

$$D_c^{\frac{1}{2}}\left(5^{\frac{4}{1+t^{2}}}\, D_c^{\frac{1}{3}}\left(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\right)-k(t,u(t))\right) + 2^{|\sin t|}u(t) = \cot^{-1}\Big(\frac{1}{2}u(t)\Big), \ t \in I \tag{14}$$

*with boundary values*

$$\begin{cases} D_c^{\frac{1}{3}}\left(\dfrac{u(t)-\zeta_{1}(t,u(t))}{\zeta_{2}(t,u(t))}\right)_{t=0} = \dfrac{1}{240}u(0), & t \in I = [0,1], \\[3mm] \displaystyle\sum_{i=1}^{2}\frac{1}{i}\left(\frac{u(10^{-i})-\zeta_{1}(10^{-i},u(10^{-i}))}{\zeta_{2}(10^{-i},u(10^{-i}))}\right) = -2\sum_{j=1}^{2}\frac{(-1)^{j}}{j+2}\left(\frac{u(13^{-j})-\zeta_{1}(13^{-j},u(13^{-j}))}{\zeta_{2}(13^{-j},u(13^{-j}))}\right), \end{cases} \tag{15}$$

*where*

$$\begin{aligned} \zeta_1(t,u(t)) &= 7^{t-1}\Big(1 + 6^{-\frac{9}{1+2t}}u(t)\Big) - \frac{76t}{77}, \\ \zeta_2(t,u(t)) &= \frac{8}{30+\ln(1+t)}\,e^{-t^{2}-t^{3}}u(t) + \frac{1}{20}\cos\Big(\frac{\pi}{1+t^{2}}\Big) \end{aligned}$$

*and*

$$k(t, u(t)) = \frac{u(t)}{(2+t)(5+3t)(6+7t)(4+9t)} + \sinh(\ln(2)t^5).$$

*Now, we put* $\alpha = \frac{1}{2}$, $\beta = \frac{1}{3}$, $r = 0.9$, $\xi_1 = 1$, $\xi_2 = \frac{1}{2}$, $\eta_1 = -\frac{1}{3}$, $\eta_2 = \frac{1}{4}$, $\nu = -2$, $p(t) = 5^{\frac{4}{1+t^{2}}}$, $q(t) = 2^{|\sin t|}$, $h(t) = 1$ *and* $f(u(t)) = \cot^{-1}\big(\frac{1}{2}u(t)\big)$*. Hence,* $\big|\frac{\partial f(u)}{\partial u}\big| \le \frac{1}{2} = \mathcal{K}$, $\mathcal{M} = \frac{\pi}{2}$, $p = 625$, $\|q\| = 2$, $\|h\| = 1$, $\zeta_1^* = \frac{1}{77}$, $\zeta_2^* = \frac{1}{20}$, $k_0 = \frac{3}{4}$, $\|\mu\| = \frac{8}{30}$, $\|\mu^*\| = \frac{1}{240}$, $\|\tilde{\mu}\| = \frac{1}{216}$, $\sum_{i=1}^{2}\xi_i - \nu\sum_{j=1}^{2}\eta_j = \frac{4}{3} \neq 0$ *and* $E = \frac{3}{4}$*. Therefore,* $\Theta \approx 0.0235484505$*. Then,*

*we have*

$$r = 0.9 \ge 0.0564209808 \approx \frac{\zeta_2^*\Theta + \zeta_1^*}{1 - \|\mu\|\Theta - \|\tilde{\mu}\|}$$

*and*

$$\|\mu\|\Theta + \|\tilde{\mu}\| \approx 0.0047386502 < 1.$$

*That is, all the conditions of Theorem 1 hold and the problem (14) and (15) has a solution.*
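The non-degeneracy sum and the constant $E$ above can be double-checked numerically. The short sketch below is our own helper code, not part of the paper; it only recomputes $\sum_{i}\xi_i - \nu\sum_{j}\eta_j$ and $E$ from the stated coefficients.

```python
# Sanity check of the Example 2 coefficients (helper names are ours):
# xi_i = i/2 and eta_j = (-1)^j / (j + 2) are the multi-point weights, nu = -2.
nu = -2
xi = [i / 2 for i in (1, 2)]                 # [0.5, 1.0]
eta = [(-1) ** j / (j + 2) for j in (1, 2)]  # [-1/3, 1/4]

S = sum(xi) - nu * sum(eta)  # must be nonzero for E to exist
E = 1 / S

print(S, E)  # S = 4/3, E = 3/4
```

Since $S = \frac{4}{3} \neq 0$, the constant $E = \frac{3}{4}$ is well defined, matching the value used in the example.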

If, in Theorem 1, we take $\zeta_1(t, w) = k(t, w) = \zeta_2(t, w) - 1 = 0$ for all $t \in I$ and $w \in \mathbb{R}$, we obtain the following corollary.

**Corollary 1.** *Let the hypotheses* (*D*1)*–*(*D*2) *be satisfied. Assume that*

$$\frac{1}{p\Gamma(\alpha+\beta+1)}\Big[|E|\Big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\Big) + 1\Big](\|q\| + \mathcal{K}\|h\|) < 1,$$

*where $E = \frac{1}{\sum_{i=1}^{m}\xi_i - \nu\sum_{j=1}^{n}\eta_j}$ and $\sum_{i=1}^{m}\xi_i - \nu\sum_{j=1}^{n}\eta_j \neq 0$. Then, the fractional Sturm–Liouville differential problem*

$$\begin{cases} D_c^{\alpha}\big[p(t)D_c^{\beta}(u(t))\big] + q(t)u(t) = h(t)f(u(t)), \; t \in I, \\ D_c^{\beta}(u(t))\big|_{t=0} = 0, \\ \sum_{i=1}^{m}\xi_i u(a_i) = \nu\sum_{j=1}^{n}\eta_j u(b_j), \end{cases}\tag{16}$$

*has a solution u* ∈ *<sup>C</sup>*(*I*, R) *if and only if u solves the integral equation*

$$\begin{split} u(t) &= E\Big(\sum_{i=1}^{m}\xi_i Au(a_i) - \nu\sum_{j=1}^{n}\eta_j Au(b_j) + \nu\sum_{j=1}^{n}\eta_j Bu(b_j) \\ &\quad - \sum_{i=1}^{m}\xi_i Bu(a_i)\Big) - Au(t) + Bu(t). \end{split}$$

*Therefore, $D_c^{\beta}(u(t)) \in C(I,\mathbb{R})$.*
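Solvability conditions of the form stated in Corollary 1 are easy to evaluate numerically. The sketch below is our own helper (not part of the paper); the sample values are borrowed from Example 2 for illustration only, so the resulting number is hypothetical rather than one the corollary itself reports.

```python
import math

def corollary1_lhs(p, alpha, beta, xi, eta, nu, q_norm, K, h_norm):
    """Left-hand side of the Corollary 1 solvability condition (must be < 1)."""
    S = sum(xi) - nu * sum(eta)
    E = 1 / S  # requires S != 0
    bracket = abs(E) * (sum(abs(x) for x in xi)
                        + abs(nu) * sum(abs(y) for y in eta)) + 1
    return bracket * (q_norm + K * h_norm) / (p * math.gamma(alpha + beta + 1))

# Illustrative values only (borrowed from Example 2, hypothetical here):
lhs = corollary1_lhs(p=625, alpha=1/2, beta=1/3,
                     xi=[1, 1/2], eta=[-1/3, 1/4], nu=-2,
                     q_norm=2, K=1/2, h_norm=1)
print(lhs, lhs < 1)
```

For these sample values the left-hand side is well below one, so the condition of the corollary would hold.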

#### **3. Continuous Dependence**

The following result will be useful in this section (in fact, it is a special case of Theorem 1 with $\zeta_2(t, x) = 1$ for all $t \in I$ and $x \in \mathbb{R}$).

**Corollary 2.** *Let the hypotheses* (*D*1)*,* (*D*2)*, and* (*D*4) *be satisfied. Assume that there exists a number r* > 0 *such that*

$$r > \frac{\Theta + \zeta_1^*}{1 - \|\tilde{\mu}\|} \ \text{ and } \ \|\tilde{\mu}\| < 1,$$

*where*

$$\begin{split} \Theta &= \frac{1}{p\Gamma(\alpha+\beta+1)}\Big[|E|\Big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\Big) + 1\Big]\Big[\Big(\|q\| + \mathcal{K}\|h\| + \frac{\Gamma(\alpha+\beta+1)\|\mu^*\|}{\Gamma(\beta+1)}\Big)r \\ &\quad + \mathcal{M}\|h\| + \frac{\Gamma(\alpha+\beta+1)k_0}{\Gamma(\beta+1)}\Big], \end{split}$$

*$\zeta_1^* = \sup_{t\in I}\zeta_1(t,0)$, $k_0 = \sup_{t\in I}|k(t,0)|$, $\mathcal{M} = f(0)$ and $E = \frac{1}{\sum_{i=1}^{m}\xi_i - \nu\sum_{j=1}^{n}\eta_j}$, where $\sum_{i=1}^{m}\xi_i - \nu\sum_{j=1}^{n}\eta_j \neq 0$. Then, the fractional coupled hybrid Sturm–Liouville differential equation*

$$D_c^{\alpha}\Big[p(t)D_c^{\beta}\big(u(t) - \zeta_1(t,u(t))\big) - k(t,u(t))\Big] + q(t)u(t) = h(t)f(u(t)), \; t \in I \tag{17}$$

*with multi-point boundary coupled hybrid condition*

$$\begin{cases} D_c^{\beta}\big(u(t) - \zeta_1(t,u(t))\big)\big|_{t=0} = k(0,u(0)), \\ \sum_{i=1}^{m}\xi_i\big(u(a_i) - \zeta_1(a_i,u(a_i))\big) = \nu\sum_{j=1}^{n}\eta_j\big(u(b_j) - \zeta_1(b_j,u(b_j))\big), \end{cases}\tag{18}$$

*has a solution u* ∈ *<sup>C</sup>*(*I*, R) *if and only if u solves the integral equation*

$$\begin{split} u(t) &= E\Big(\sum_{i=1}^{m}\xi_i Au(a_i) - \nu\sum_{j=1}^{n}\eta_j Au(b_j) + \nu\sum_{j=1}^{n}\eta_j Bu(b_j) \\ &\quad - \sum_{i=1}^{m}\xi_i Bu(a_i) + \nu\sum_{j=1}^{n}\eta_j Cu(b_j) - \sum_{i=1}^{m}\xi_i Cu(a_i)\Big) \\ &\quad - Au(t) + Bu(t) + Cu(t) + \zeta_1(t,u(t)). \end{split}\tag{19}$$

*Furthermore, $D_c^{\beta}(u(t)) \in C(I,\mathbb{R})$.*

In this section, we investigate the continuous dependence (on the coefficients $\xi_i$ and $\eta_j$ of the multi-point boundary coupled hybrid condition) of the solution of the fractional coupled hybrid Sturm–Liouville differential Equation (17) with the multi-point boundary coupled hybrid condition (18). The main theorem of this section generalizes Theorem 3.2 in [23] and Theorem 5 in [8].

First, we give the following Definition.

**Definition 3.** *The solution of the fractional coupled hybrid Sturm–Liouville differential Equation* (17) *is continuously dependent on the data $\xi_i$ and $\eta_j$ if, for every $\epsilon > 0$, there exist $\delta_1(\epsilon)$ and $\delta_2(\epsilon)$ such that, for any two solutions $u(t)$ and $\tilde{u}(t)$ of* (17) *with the boundary conditions* (18) *and*

$$\begin{cases} D_c^{\beta}\big(\tilde{u}(t) - \zeta_1(t,\tilde{u}(t))\big)\big|_{t=0} = k(0,\tilde{u}(0)), \\ \sum_{i=1}^{m}\tilde{\xi}_i\big(\tilde{u}(a_i) - \zeta_1(a_i,\tilde{u}(a_i))\big) = \nu\sum_{j=1}^{n}\tilde{\eta}_j\big(\tilde{u}(b_j) - \zeta_1(b_j,\tilde{u}(b_j))\big), \end{cases}\tag{20}$$

*respectively, whenever $\sum_{i=1}^{m}|\xi_i - \tilde{\xi}_i| < \delta_1$ and $\sum_{j=1}^{n}|\eta_j - \tilde{\eta}_j| < \delta_2$, one has $\|u - \tilde{u}\| < \epsilon$ for all $t \in I$.*

**Theorem 2.** *Assume that the assertions of Corollary 2 are satisfied. Then, the solution of the fractional coupled hybrid Sturm–Liouville differential problem* (17) *and* (18) *is continuously dependent on the coefficients $\xi_i$ and $\eta_j$ of the multi-point boundary coupled hybrid condition.*

**Proof.** Assume that $u$ is a solution of the fractional coupled hybrid Sturm–Liouville differential problem (17) and (18) and that

$$\begin{split} \tilde{u}(t) &= \tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i A\tilde{u}(a_i) - \nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j A\tilde{u}(b_j) + \nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j B\tilde{u}(b_j) - \tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i B\tilde{u}(a_i) \\ &\quad + \nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j C\tilde{u}(b_j) - \tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i C\tilde{u}(a_i) - A\tilde{u}(t) + B\tilde{u}(t) + C\tilde{u}(t) + \zeta_1(t,\tilde{u}(t)) \end{split}$$

is a solution of the fractional coupled hybrid Sturm–Liouville differential Equation (17) with the multi-point boundary coupled hybrid condition (20), where $\tilde{E} = \frac{1}{\sum_{i=1}^{m}\tilde{\xi}_i - \nu\sum_{j=1}^{n}\tilde{\eta}_j}$. Therefore,

$$\begin{split} |\tilde{u}(t) - u(t)| &\le \Big|\tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i A\tilde{u}(a_i) - E\sum_{i=1}^{m}\xi_i Au(a_i)\Big| + \Big|\nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j A\tilde{u}(b_j) - \nu E\sum_{j=1}^{n}\eta_j Au(b_j)\Big| \\ &\quad + \Big|\nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j B\tilde{u}(b_j) - \nu E\sum_{j=1}^{n}\eta_j Bu(b_j)\Big| + \Big|\tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i B\tilde{u}(a_i) - E\sum_{i=1}^{m}\xi_i Bu(a_i)\Big| \\ &\quad + \Big|\nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j C\tilde{u}(b_j) - \nu E\sum_{j=1}^{n}\eta_j Cu(b_j)\Big| + \Big|\tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i C\tilde{u}(a_i) - E\sum_{i=1}^{m}\xi_i Cu(a_i)\Big| \\ &\quad + |A\tilde{u}(t) - Au(t)| + |B\tilde{u}(t) - Bu(t)| + |C\tilde{u}(t) - Cu(t)| + |\zeta_1(t,\tilde{u}(t)) - \zeta_1(t,u(t))|. \end{split}\tag{21}$$

On the other hand,

$$\begin{split} \Big|E\sum_{i=1}^{m}\xi_i Au(a_i) - \tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i A\tilde{u}(a_i)\Big| &= \Big|E\sum_{i=1}^{m}\xi_i Au(a_i) - E\sum_{i=1}^{m}\xi_i A\tilde{u}(a_i) + E\sum_{i=1}^{m}\xi_i A\tilde{u}(a_i) \\ &\quad - E\sum_{i=1}^{m}\tilde{\xi}_i A\tilde{u}(a_i) + E\sum_{i=1}^{m}\tilde{\xi}_i A\tilde{u}(a_i) - \tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i A\tilde{u}(a_i)\Big| \\ &\le |E|\sum_{i=1}^{m}|\xi_i||A(u(a_i)-\tilde{u}(a_i))| + |E|\sum_{i=1}^{m}|\xi_i-\tilde{\xi}_i||A\tilde{u}(a_i)| \\ &\quad + |E-\tilde{E}|\sum_{i=1}^{m}|\tilde{\xi}_i||A\tilde{u}(a_i)| \\ &\le \frac{\|q\||E|\sum_{i=1}^{m}|\xi_i|}{p\Gamma(\alpha+\beta+1)}\|u-\tilde{u}\| + \frac{\|q\||E|\|\tilde{u}\|}{p\Gamma(\alpha+\beta+1)}\,\delta_1 \\ &\quad + \frac{\|q\|\|\tilde{u}\|\sum_{i=1}^{m}|\tilde{\xi}_i|\,|E||\tilde{E}|}{p\Gamma(\alpha+\beta+1)}(\delta_1 + |\nu|\delta_2). \end{split}$$

Similarly,

$$\begin{split} \Big|\nu E\sum_{j=1}^{n}\eta_j Au(b_j) - \nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j A\tilde{u}(b_j)\Big| &\le |\nu||E|\sum_{j=1}^{n}|\eta_j||A(u(b_j)-\tilde{u}(b_j))| \\ &\quad + |\nu||E|\sum_{j=1}^{n}|\eta_j-\tilde{\eta}_j||A\tilde{u}(b_j)| + |\nu||E-\tilde{E}|\sum_{j=1}^{n}|\tilde{\eta}_j||A\tilde{u}(b_j)| \\ &\le \frac{\|q\||E||\nu|\sum_{j=1}^{n}|\eta_j|}{p\Gamma(\alpha+\beta+1)}\|u-\tilde{u}\| + \frac{\|q\||E||\nu|\|\tilde{u}\|}{p\Gamma(\alpha+\beta+1)}\,\delta_2 \\ &\quad + \frac{\|q\|\|\tilde{u}\||\nu|\sum_{j=1}^{n}|\tilde{\eta}_j|\,|E||\tilde{E}|}{p\Gamma(\alpha+\beta+1)}(\delta_1 + |\nu|\delta_2) \end{split}$$

and so

$$\begin{split} \Big|E\sum_{i=1}^{m}\xi_i Au(a_i) - \tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i A\tilde{u}(a_i)\Big| + \Big|\nu E\sum_{j=1}^{n}\eta_j Au(b_j) - \nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j A\tilde{u}(b_j)\Big| \\ \le \frac{\|q\||E|\big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\big)}{p\Gamma(\alpha+\beta+1)}\|u-\tilde{u}\| + \Omega_1(\delta_1 + |\nu|\delta_2) \end{split}\tag{22}$$

where

$$\Omega_1 = \frac{\|q\||E|\|\tilde{u}\|}{p\Gamma(\alpha+\beta+1)} + \frac{\|q\|\|\tilde{u}\|\sum_{i=1}^{m}|\tilde{\xi}_i|\,|E||\tilde{E}|}{p\Gamma(\alpha+\beta+1)} + \frac{\|q\|\|\tilde{u}\||\nu|\sum_{j=1}^{n}|\tilde{\eta}_j|\,|E||\tilde{E}|}{p\Gamma(\alpha+\beta+1)}.$$

Furthermore,

$$\begin{split} \Big|\nu E\sum_{j=1}^{n}\eta_j Bu(b_j) - \nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j B\tilde{u}(b_j)\Big| &\le |\nu||E|\sum_{j=1}^{n}|\eta_j||B(u(b_j)-\tilde{u}(b_j))| \\ &\quad + |\nu||E|\sum_{j=1}^{n}|\eta_j-\tilde{\eta}_j||B\tilde{u}(b_j)| + |\nu||E-\tilde{E}|\sum_{j=1}^{n}|\tilde{\eta}_j||B\tilde{u}(b_j)| \\ &\le \frac{\mathcal{K}\|h\||\nu||E|\sum_{j=1}^{n}|\eta_j|}{p\Gamma(\alpha+\beta+1)}\|u-\tilde{u}\| + \frac{(\mathcal{K}\|\tilde{u}\|+\mathcal{M})\|h\||\nu||E|}{p\Gamma(\alpha+\beta+1)}\,\delta_2 \\ &\quad + \frac{(\mathcal{K}\|\tilde{u}\|+\mathcal{M})\|h\||\nu|\sum_{j=1}^{n}|\tilde{\eta}_j|\,|E||\tilde{E}|}{p\Gamma(\alpha+\beta+1)}(\delta_1 + |\nu|\delta_2). \end{split}$$

Similarly,

$$\begin{split} \Big|E\sum_{i=1}^{m}\xi_i Bu(a_i) - \tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i B\tilde{u}(a_i)\Big| &\le |E|\sum_{i=1}^{m}|\xi_i||B(u(a_i)-\tilde{u}(a_i))| + |E|\sum_{i=1}^{m}|\xi_i-\tilde{\xi}_i||B\tilde{u}(a_i)| \\ &\quad + |E-\tilde{E}|\sum_{i=1}^{m}|\tilde{\xi}_i||B\tilde{u}(a_i)| \\ &\le \frac{\mathcal{K}\|h\||E|\sum_{i=1}^{m}|\xi_i|}{p\Gamma(\alpha+\beta+1)}\|u-\tilde{u}\| + \frac{(\mathcal{K}\|\tilde{u}\|+\mathcal{M})\|h\||E|}{p\Gamma(\alpha+\beta+1)}\,\delta_1 \\ &\quad + \frac{(\mathcal{K}\|\tilde{u}\|+\mathcal{M})\|h\|\sum_{i=1}^{m}|\tilde{\xi}_i|\,|E||\tilde{E}|}{p\Gamma(\alpha+\beta+1)}(\delta_1 + |\nu|\delta_2), \end{split}$$

and then

$$\begin{split} \Big|\nu E\sum_{j=1}^{n}\eta_j Bu(b_j) - \nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j B\tilde{u}(b_j)\Big| + \Big|E\sum_{i=1}^{m}\xi_i Bu(a_i) - \tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i B\tilde{u}(a_i)\Big| \\ \le \frac{\mathcal{K}\|h\||E|\big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\big)}{p\Gamma(\alpha+\beta+1)}\|u-\tilde{u}\| + \Omega_2(\delta_1 + |\nu|\delta_2) \end{split}\tag{23}$$

where

$$\begin{split} \Omega_2 &= \frac{(\mathcal{K}\|\tilde{u}\|+\mathcal{M})\|h\||E|}{p\Gamma(\alpha+\beta+1)} + \frac{(\mathcal{K}\|\tilde{u}\|+\mathcal{M})\|h\||\nu|\sum_{j=1}^{n}|\tilde{\eta}_j|\,|E||\tilde{E}|}{p\Gamma(\alpha+\beta+1)} \\ &\quad + \frac{(\mathcal{K}\|\tilde{u}\|+\mathcal{M})\|h\|\sum_{i=1}^{m}|\tilde{\xi}_i|\,|E||\tilde{E}|}{p\Gamma(\alpha+\beta+1)}. \end{split}$$

Further,

$$\begin{split} \Big|\nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j C\tilde{u}(b_j) - \nu E\sum_{j=1}^{n}\eta_j Cu(b_j)\Big| &\le |\nu||E|\sum_{j=1}^{n}|\eta_j||C(u(b_j)-\tilde{u}(b_j))| \\ &\quad + |\nu||E|\sum_{j=1}^{n}|\eta_j-\tilde{\eta}_j||C\tilde{u}(b_j)| + |\nu||E-\tilde{E}|\sum_{j=1}^{n}|\tilde{\eta}_j||C\tilde{u}(b_j)| \\ &\le \frac{\|\mu^*\||\nu||E|\sum_{j=1}^{n}|\eta_j|}{p\Gamma(\beta+1)}\|u-\tilde{u}\| + \frac{(\|\mu^*\|\|\tilde{u}\|+k_0)|\nu||E|}{p\Gamma(\beta+1)}\,\delta_2 \\ &\quad + \frac{(\|\mu^*\|\|\tilde{u}\|+k_0)|\nu|\sum_{j=1}^{n}|\tilde{\eta}_j|\,|E||\tilde{E}|}{p\Gamma(\beta+1)}(\delta_1 + |\nu|\delta_2). \end{split}$$

Similarly,

$$\begin{split} \Big|\tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i C\tilde{u}(a_i) - E\sum_{i=1}^{m}\xi_i Cu(a_i)\Big| &\le \frac{\|\mu^*\||E|\sum_{i=1}^{m}|\xi_i|}{p\Gamma(\beta+1)}\|u-\tilde{u}\| + \frac{(\|\mu^*\|\|\tilde{u}\|+k_0)|E|}{p\Gamma(\beta+1)}\,\delta_1 \\ &\quad + \frac{(\|\mu^*\|\|\tilde{u}\|+k_0)\sum_{i=1}^{m}|\tilde{\xi}_i|\,|E||\tilde{E}|}{p\Gamma(\beta+1)}(\delta_1 + |\nu|\delta_2), \end{split}$$

and so

$$\begin{split} \Big|\nu\tilde{E}\sum_{j=1}^{n}\tilde{\eta}_j C\tilde{u}(b_j) - \nu E\sum_{j=1}^{n}\eta_j Cu(b_j)\Big| + \Big|\tilde{E}\sum_{i=1}^{m}\tilde{\xi}_i C\tilde{u}(a_i) - E\sum_{i=1}^{m}\xi_i Cu(a_i)\Big| \\ \le \frac{\|\mu^*\||E|\big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\big)}{p\Gamma(\beta+1)}\|u-\tilde{u}\| + \Omega_3(\delta_1 + |\nu|\delta_2) \end{split}\tag{24}$$

where

$$\begin{split} \Omega_3 &= \frac{(\|\mu^*\|\|\tilde{u}\|+k_0)|E|}{p\Gamma(\beta+1)} + \frac{(\|\mu^*\|\|\tilde{u}\|+k_0)|\nu|\sum_{j=1}^{n}|\tilde{\eta}_j|\,|E||\tilde{E}|}{p\Gamma(\beta+1)} \\ &\quad + \frac{(\|\mu^*\|\|\tilde{u}\|+k_0)\sum_{i=1}^{m}|\tilde{\xi}_i|\,|E||\tilde{E}|}{p\Gamma(\beta+1)}. \end{split}$$

Finally, we have

$$\begin{split} |A\tilde{u}(t) - Au(t)| &\le \frac{\|q\|}{p\Gamma(\alpha+\beta+1)}\|u-\tilde{u}\|, \\ |B\tilde{u}(t) - Bu(t)| &\le \frac{\mathcal{K}\|h\|}{p\Gamma(\alpha+\beta+1)}\|u-\tilde{u}\|, \\ |C\tilde{u}(t) - Cu(t)| &\le \frac{\|\mu^*\|}{p\Gamma(\beta+1)}\|u-\tilde{u}\|, \\ |\zeta_1(t,\tilde{u}(t)) - \zeta_1(t,u(t))| &\le \|\tilde{\mu}\|\|u-\tilde{u}\|. \end{split}\tag{25}$$

Thus, from (21)–(25), we have

$$\|u - \tilde{u}\| \le (\Omega^* + \|\tilde{\mu}\|)\|u - \tilde{u}\| + (\Omega_1 + \Omega_2 + \Omega_3)(\delta_1 + |\nu|\delta_2),$$

$$\text{where } \Omega^* = \frac{1}{p\Gamma(\alpha+\beta+1)}\Big[|E|\Big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\Big) + 1\Big]\Big(\|q\| + \mathcal{K}\|h\| + \frac{\Gamma(\alpha+\beta+1)\|\mu^*\|}{\Gamma(\beta+1)}\Big). \text{ That is,}$$

$$(1 - \Omega^* - \|\tilde{\mu}\|)\|u - \tilde{u}\| \le (\Omega_1 + \Omega_2 + \Omega_3)(\delta_1 + |\nu|\delta_2).\tag{26}$$

From our hypotheses, we know that

$$r > \frac{\Theta + \zeta_1^*}{1 - \|\tilde{\mu}\|}, \quad \|\tilde{\mu}\| < 1 \quad \text{and}$$

$$\begin{split} \Theta &= \frac{1}{p\Gamma(\alpha+\beta+1)}\Big[|E|\Big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\Big) + 1\Big]\Big[\Big(\|q\| + \mathcal{K}\|h\| + \frac{\Gamma(\alpha+\beta+1)\|\mu^*\|}{\Gamma(\beta+1)}\Big)r \\ &\quad + \mathcal{M}\|h\| + \frac{\Gamma(\alpha+\beta+1)k_0}{\Gamma(\beta+1)}\Big] = \Omega^*r + \Omega_0^* \end{split}$$

where

$$\Omega_0^* = \frac{1}{p\Gamma(\alpha+\beta+1)}\Big[|E|\Big(\sum_{i=1}^{m}|\xi_i| + |\nu|\sum_{j=1}^{n}|\eta_j|\Big) + 1\Big]\Big[\mathcal{M}\|h\| + \frac{\Gamma(\alpha+\beta+1)k_0}{\Gamma(\beta+1)}\Big].$$

Therefore,

$$r > \frac{\Theta + \zeta_1^*}{1 - \|\tilde{\mu}\|} = \frac{\Omega^*r + \Omega_0^* + \zeta_1^*}{1 - \|\tilde{\mu}\|}$$

and so

$$(1 - \|\tilde{\mu}\|)r > \Omega^*r + \Omega_0^* + \zeta_1^*.$$

Then, $\Omega^*r < (1 - \|\tilde{\mu}\|)r$. Since $r > 0$, we have $0 < 1 - \Omega^* - \|\tilde{\mu}\|$. Thus, from (26), we obtain

$$\|u - \tilde{u}\| \le \epsilon = (1 - \Omega^* - \|\tilde{\mu}\|)^{-1}(\Omega_1 + \Omega_2 + \Omega_3)(\delta_1 + |\nu|\delta_2).$$

That is, we have proved that, for every $\epsilon > 0$, there exist $\delta_1(\epsilon)$ and $\delta_2(\epsilon)$ such that, if $\sum_{i=1}^{m}|\xi_i - \tilde{\xi}_i| < \delta_1$ and $\sum_{j=1}^{n}|\eta_j - \tilde{\eta}_j| < \delta_2$, then $\|u - \tilde{u}\| < \epsilon$.
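The $\delta$–$\epsilon$ bookkeeping at the end of the proof can be illustrated numerically. In the sketch below, all constants are purely hypothetical placeholders of ours, chosen only so that $\Omega^* + \|\tilde{\mu}\| < 1$; they are not values taken from the paper.

```python
# Hypothetical constants illustrating estimate (26) (not values from the paper):
Omega_star, mu_tilde = 0.3, 0.1          # need Omega_star + mu_tilde < 1
Omega1, Omega2, Omega3 = 0.5, 0.4, 0.2
nu = -2

def diff_bound(delta1, delta2):
    """Bound on ||u - u_tilde|| implied by (26)."""
    return ((Omega1 + Omega2 + Omega3) * (delta1 + abs(nu) * delta2)
            / (1 - Omega_star - mu_tilde))

# Shrinking the perturbation of the boundary data shrinks the solution gap:
print(diff_bound(1e-2, 1e-2))   # ~0.055
print(diff_bound(1e-4, 1e-4))   # ~0.00055
```

The bound scales linearly in $\delta_1 + |\nu|\delta_2$, which is exactly why $\delta_1(\epsilon)$ and $\delta_2(\epsilon)$ can always be chosen for a prescribed $\epsilon$.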

#### **4. Fractional Coupled Hybrid Sturm–Liouville Differential Equation with Integral Boundary Hybrid Condition**

In this section, we deduce results for the fractional coupled hybrid Sturm–Liouville differential equation with integral boundary conditions.

**Theorem 3.** *Let the hypotheses* (*D*1)*–*(*D*4) *be satisfied. Let a number r* > 0 *exist such that*

$$r \ge \frac{\zeta_2^*\Theta + \zeta_1^*}{1 - \|\mu\|\Theta - \|\tilde{\mu}\|} \ \text{ and } \ \|\mu\|\Theta + \|\tilde{\mu}\| < 1,\tag{27}$$

*where*

$$\begin{split} \Theta &= \frac{1}{p\Gamma(\alpha+\beta+1)}\Big[\frac{\varpi(c)-\varpi(a)+|\nu|(\upsilon(e)-\upsilon(d))}{|\varpi(c)-\varpi(a)-\nu(\upsilon(e)-\upsilon(d))|} + 1\Big]\Big[\Big(\|q\| + \mathcal{K}\|h\| \\ &\quad + \frac{\Gamma(\alpha+\beta+1)\|\mu^*\|}{\Gamma(\beta+1)}\Big)r + \mathcal{M}\|h\| + \frac{\Gamma(\alpha+\beta+1)k_0}{\Gamma(\beta+1)}\Big], \end{split}$$

*$\varpi(c) - \varpi(a) \neq \nu(\upsilon(e) - \upsilon(d))$, $\varpi(\theta)$ and $\upsilon(\theta)$ are increasing functions, and the integrals are meant in the Riemann–Stieltjes sense for $0 \le a < c \le d < e \le 1$. Then, there exists a solution $u \in C(I,\mathbb{R})$ of the fractional coupled hybrid Sturm–Liouville differential problem:*

$$\begin{cases} D_c^{\alpha}\Big[p(t)D_c^{\beta}\Big(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\Big) - k(t,u(t))\Big] + q(t)u(t) = h(t)f(u(t)), \\ D_c^{\beta}\Big(\frac{u(t)-\zeta_1(t,u(t))}{\zeta_2(t,u(t))}\Big)\Big|_{t=0} = k(0,u(0)), \\ \int_a^c \Big(\frac{u(\theta)-\zeta_1(\theta,u(\theta))}{\zeta_2(\theta,u(\theta))}\Big)d\varpi(\theta) = \nu\int_d^e \Big(\frac{u(\theta)-\zeta_1(\theta,u(\theta))}{\zeta_2(\theta,u(\theta))}\Big)d\upsilon(\theta), \end{cases}\tag{28}$$

*and u solves* (28) *if and only if u solves the integral equation*

$$\begin{split} u(t) &= \zeta_2(t,u(t))\Big[\frac{1}{\varpi(c)-\varpi(a)-\nu(\upsilon(e)-\upsilon(d))}\Big(\int_a^c Au(\theta)d\varpi(\theta) \\ &\quad - \nu\int_d^e Au(\theta)d\upsilon(\theta) + \nu\int_d^e Bu(\theta)d\upsilon(\theta) - \int_a^c Bu(\theta)d\varpi(\theta) \\ &\quad + \nu\int_d^e Cu(\theta)d\upsilon(\theta) - \int_a^c Cu(\theta)d\varpi(\theta)\Big) \\ &\quad - Au(t) + Bu(t) + Cu(t)\Big] + \zeta_1(t,u(t)). \end{split}\tag{29}$$

*Furthermore, if* ($\mathcal{B}^*$) *holds, then $D_c^{\beta}(u(t)) \in C(I,\mathbb{R})$.*

**Proof.** Let $u$ be a solution of the problem (3) and (4). Assume that $\xi_i = \varpi(t_i) - \varpi(t_{i-1})$, $a_i \in (t_{i-1}, t_i)$, $0 \le a = t_0 < t_1 < t_2 < \ldots < t_m = c$, $\eta_j = \upsilon(\tau_j) - \upsilon(\tau_{j-1})$, $b_j \in (\tau_{j-1}, \tau_j)$ and $d = \tau_0 < \tau_1 < \ldots < \tau_n = e \le 1$. Thus, the multi-point boundary hybrid condition (4) becomes

$$\sum_{i=1}^{m}(\varpi(t_i)-\varpi(t_{i-1}))\Big(\frac{u(a_i)-\zeta_1(a_i,u(a_i))}{\zeta_2(a_i,u(a_i))}\Big) = \nu\sum_{j=1}^{n}(\upsilon(\tau_j)-\upsilon(\tau_{j-1}))\Big(\frac{u(b_j)-\zeta_1(b_j,u(b_j))}{\zeta_2(b_j,u(b_j))}\Big).$$

As the solution *u* of (3) and (4) is continuous, we have

$$\begin{aligned} &\lim_{m\to\infty}\sum_{i=1}^{m}(\varpi(t_i)-\varpi(t_{i-1}))\Big(\frac{u(a_i)-\zeta_1(a_i,u(a_i))}{\zeta_2(a_i,u(a_i))}\Big) \\ &= \nu\lim_{n\to\infty}\sum_{j=1}^{n}(\upsilon(\tau_j)-\upsilon(\tau_{j-1}))\Big(\frac{u(b_j)-\zeta_1(b_j,u(b_j))}{\zeta_2(b_j,u(b_j))}\Big) \end{aligned}$$

or equivalently

$$\int_a^c \Big(\frac{u(\theta)-\zeta_1(\theta,u(\theta))}{\zeta_2(\theta,u(\theta))}\Big)d\varpi(\theta) = \nu\int_d^e \Big(\frac{u(\theta)-\zeta_1(\theta,u(\theta))}{\zeta_2(\theta,u(\theta))}\Big)d\upsilon(\theta).$$

Now, from the continuity of the solution *u* in (5), we can obtain

$$\begin{split} u(t) &= \zeta_2(t,u(t))\Big[\frac{1}{\sum_{i=1}^{\infty}\xi_i - \nu\sum_{j=1}^{\infty}\eta_j}\Big(\lim_{m\to\infty}\sum_{i=1}^{m}(\varpi(t_i)-\varpi(t_{i-1}))Au(a_i) \\ &\quad - \nu\lim_{n\to\infty}\sum_{j=1}^{n}(\upsilon(\tau_j)-\upsilon(\tau_{j-1}))Au(b_j) + \nu\lim_{n\to\infty}\sum_{j=1}^{n}(\upsilon(\tau_j)-\upsilon(\tau_{j-1}))Bu(b_j) \\ &\quad - \lim_{m\to\infty}\sum_{i=1}^{m}(\varpi(t_i)-\varpi(t_{i-1}))Bu(a_i) + \nu\lim_{n\to\infty}\sum_{j=1}^{n}(\upsilon(\tau_j)-\upsilon(\tau_{j-1}))Cu(b_j) \\ &\quad - \lim_{m\to\infty}\sum_{i=1}^{m}(\varpi(t_i)-\varpi(t_{i-1}))Cu(a_i)\Big) - Au(t) + Bu(t) + Cu(t)\Big] + \zeta_1(t,u(t)) \\ &= \zeta_2(t,u(t))\Big[\frac{1}{\varpi(c)-\varpi(a)-\nu(\upsilon(e)-\upsilon(d))}\Big(\int_a^c Au(\theta)d\varpi(\theta) - \nu\int_d^e Au(\theta)d\upsilon(\theta) \\ &\quad + \nu\int_d^e Bu(\theta)d\upsilon(\theta) - \int_a^c Bu(\theta)d\varpi(\theta) + \nu\int_d^e Cu(\theta)d\upsilon(\theta) - \int_a^c Cu(\theta)d\varpi(\theta)\Big) \\ &\quad - Au(t) + Bu(t) + Cu(t)\Big] + \zeta_1(t,u(t)), \end{split}$$

and clearly $u \in C(I,\mathbb{R})$ solves the problem (28) if and only if it solves (29). Similarly, by taking $\xi_i = \varpi(t_i) - \varpi(t_{i-1})$ and $\eta_j = \upsilon(\tau_j) - \upsilon(\tau_{j-1})$ and letting $m, n \to \infty$ in ($D_5$), we get (27).
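The passage from the weighted multi-point sums to the Riemann–Stieltjes integrals above can be checked numerically. The sketch below is our own illustration (the helper `rs_sum` is not from the paper); it uses the increasing integrator $\varpi(\theta) = 3\theta + 1$ on $[0, \frac{1}{3}]$, as in Example 3, together with a simple test integrand, so that $\int g\,d\varpi = 3\int g\,d\theta$.

```python
def rs_sum(g, w, lo, hi, m):
    """Riemann-Stieltjes sum: sum_i (w(t_i) - w(t_{i-1})) * g(a_i), a_i = midpoint."""
    h = (hi - lo) / m
    total = 0.0
    for i in range(m):
        t0, t1 = lo + i * h, lo + (i + 1) * h
        total += (w(t1) - w(t0)) * g((t0 + t1) / 2)
    return total

w = lambda th: 3 * th + 1  # the integrator varpi from Example 3
g = lambda th: th          # a simple test integrand (our choice)
approx = rs_sum(g, w, 0.0, 1/3, 2000)
exact = 3 * (1/3) ** 2 / 2  # int_0^{1/3} theta d(3 theta + 1) = 1/6
print(approx, exact)
```

As the partition is refined, the sum converges to the Riemann–Stieltjes integral, which is the limit argument used in the proof.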

**Example 3.** *Consider the fractional couple hybrid Sturm–Liouville differential problem*

$$\begin{cases} D_c^{\frac{4}{5}}\Big[\ln(e^{100}+t)\,D_c^{\frac{2}{3}}\Big(\frac{u(t)-\frac{\sin t}{60}\big(\frac{1}{70}u(t)+3\big)}{\frac{t}{200}|u(t)|+\frac{2+\ln(1+t)}{1+\ln(1+t)}}\Big) - u(t)\Big] + \frac{1}{400(1+t^2)}u(t) = \cos^3(t)\tanh(u(t)), \\ D_c^{\frac{2}{3}}\Big(\frac{u(t)-\frac{\sin t}{60}\big(\frac{1}{70}u(t)+3\big)}{\frac{t}{200}|u(t)|+\frac{2+\ln(1+t)}{1+\ln(1+t)}}\Big)\Big|_{t=0} = u(0), \\ \int_0^{\frac{1}{3}}\Big(\frac{u(\theta)-\frac{\sin\theta}{60}\big(\frac{1}{70}u(\theta)+3\big)}{\frac{\theta}{200}|u(\theta)|+\frac{2+\ln(1+\theta)}{1+\ln(1+\theta)}}\Big)d(3\theta+1) = \frac{1}{300}\int_{\frac{1}{2}}^{1}\Big(\frac{u(\theta)-\frac{\sin\theta}{60}\big(\frac{1}{70}u(\theta)+3\big)}{\frac{\theta}{200}|u(\theta)|+\frac{2+\ln(1+\theta)}{1+\ln(1+\theta)}}\Big)d(\theta^2). \end{cases}\tag{30}$$

*In this case, we take $\alpha = \frac{4}{5}$, $\beta = \frac{2}{3}$, $r = 1$, $\nu = \frac{1}{300}$, $\varpi(\theta) = 3\theta + 1$, $\upsilon(\theta) = \theta^2$, $p(t) = \ln(e^{100}+t)$, $q(t) = \frac{1}{400(1+t^2)}$, $h(t) = \cos^3(t)$, $f(u(t)) = \tanh(u(t))$, $\zeta_1(t,u(t)) = \frac{\sin t}{60}\big(\frac{1}{70}u(t)+3\big)$, $\zeta_2(t,u(t)) = \frac{t}{200}|u(t)| + \frac{2+\ln(1+t)}{1+\ln(1+t)}$ and $k(t,u(t)) = u(t)$. Therefore, $\mathcal{K} = 1$, $\mathcal{M} = 0$, $p = 100$, $\|q\| = \frac{1}{400}$, $\|h\| = 1$, $\varpi(0) = 1$, $\varpi(\frac{1}{3}) = 2$, $\upsilon(\frac{1}{2}) = \frac{1}{4}$, $\upsilon(1) = 1$. Also,*

$$|\zeta_2(t,u(t)) - \zeta_2(t,v(t))| \le \frac{t}{200}|u(t) - v(t)|,$$

$$|\zeta_1(t,u(t)) - \zeta_1(t,v(t))| \le \frac{\sin t}{4200}|u(t) - v(t)|$$

*and $|k(t,u(t)) - k(t,v(t))| \le |u(t) - v(t)|$. Then, $\|\mu\| = \frac{1}{200}$, $\|\tilde{\mu}\| = \frac{1}{4200}$, $\|\mu^*\| = 1$, $\zeta_2^* = 2$, $\zeta_1^* = \frac{1}{20}$ and $k_0 = 0$. Thus,*

$$\varpi(\tfrac{1}{3}) - \varpi(0) = 1 \neq \frac{1}{400} = \nu(\upsilon(1) - \upsilon(\tfrac{1}{2})) \quad \text{and} \quad \Theta \approx 0.0468369692.$$

$$r = 1 \ge 0.1437418248 \approx \frac{\zeta_2^*\Theta + \zeta_1^*}{1 - \|\mu\|\Theta - \|\tilde{\mu}\|}$$

*and*

$$\|\mu\|\Theta + \|\tilde{\mu}\| \approx 0.0004722801 < 1.$$

*Then, all the conditions of Theorem 3 are satisfied and the problem* (30) *has a solution.*
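Both displayed decimals can be reproduced directly from the constants of Example 3. The following sanity check is our own sketch (taking $\Theta$ as given in the example), not part of the paper.

```python
# Reproduce the numerical checks of Example 3 from the stated constants:
Theta = 0.0468369692            # as computed in the example
mu, mu_tilde = 1/200, 1/4200    # Lipschitz bounds for zeta_2 and zeta_1
z2_star, z1_star = 2, 1/20

lhs = mu * Theta + mu_tilde                    # ~0.0004722801 < 1
rhs = (z2_star * Theta + z1_star) / (1 - lhs)  # ~0.1437418248 <= r = 1
print(lhs, rhs)
```

Both values agree with the decimals displayed above, so the two conditions of Theorem 3 indeed hold with $r = 1$.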

**Corollary 3.** *Let the hypotheses* (*D*1)*–*(*D*2) *be satisfied. Let*

$$\frac{1}{p\Gamma(\alpha+\beta+1)}\Big[\frac{\varpi(c)-\varpi(a)+|\nu|(\upsilon(e)-\upsilon(d))}{|\varpi(c)-\varpi(a)-\nu(\upsilon(e)-\upsilon(d))|} + 1\Big](\|q\| + \mathcal{K}\|h\|) < 1,$$

*where $\varpi(c) - \varpi(a) \neq \nu(\upsilon(e) - \upsilon(d))$, $\varpi(\theta)$ and $\upsilon(\theta)$ are increasing functions, and the integrals are meant in the Riemann–Stieltjes sense for $0 \le a < c \le d < e \le 1$. Then, there exists a solution $u \in C(I,\mathbb{R})$ of the fractional Sturm–Liouville differential problem:*

$$\begin{cases} D_c^{\alpha}\big[p(t)D_c^{\beta}(u(t))\big] + q(t)u(t) = h(t)f(u(t)), \\ D_c^{\beta}(u(t))\big|_{t=0} = 0, \\ \int_a^c u(\theta)d\varpi(\theta) = \nu\int_d^e u(\theta)d\upsilon(\theta), \end{cases}\tag{31}$$

*and u solves* (31) *if and only if u solves the integral equation*

$$\begin{split} u(t) &= \frac{1}{\varpi(c)-\varpi(a)-\nu(\upsilon(e)-\upsilon(d))}\Big(\int_a^c Au(\theta)d\varpi(\theta) - \nu\int_d^e Au(\theta)d\upsilon(\theta) \\ &\quad + \nu\int_d^e Bu(\theta)d\upsilon(\theta) - \int_a^c Bu(\theta)d\varpi(\theta)\Big) - Au(t) + Bu(t). \end{split}$$

*Furthermore, $D_c^{\beta}(u(t)) \in C(I,\mathbb{R})$.*

#### **5. Conclusions**

Scientists utilize various Sturm–Liouville equations to model a wide range of phenomena and processes. This variety makes the investigation of fractional Sturm–Liouville equations more involved, but it also broadens the scope for accurate modeling of further phenomena. Such methods can support the development of advanced software that allows more testing at lower cost and with less material consumption. In this paper, we investigated a coupled hybrid version of the Sturm–Liouville differential equation. Specifically, we studied the existence of solutions for the coupled hybrid Sturm–Liouville differential equation with a multi-point boundary coupled hybrid condition, as well as with an integral boundary coupled hybrid condition. We gave an application and several examples to illustrate our results.

**Author Contributions:** The authors M.P. and M.D.L.S. contributed equally to this work. Both authors have read and agreed to the published version of the manuscript.

**Funding:** This research has been supported by the Basque Government through Grant IT207-19 and by the Spanish Government and European Commission through Grant RTI2018-094336-B-I00 (MCIU/AEI/FEDER, UE).

**Acknowledgments:** The authors thank the anonymous referees for their helpful comments.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **A Comparison of a Priori Estimates of the Solutions of a Linear Fractional System with Distributed Delays and Application to the Stability Analysis**

**Hristo Kiskinov \*, Magdalena Veselinova, Ekaterina Madamlieva and Andrey Zahariev**

Faculty of Mathematics and Informatics, University of Plovdiv, 4000 Plovdiv, Bulgaria; veselinova@uni-plovdiv.bg (M.V.); ekaterinaa.b.m@gmail.com (E.M.); zandrey@uni-plovdiv.bg (A.Z.)

**\*** Correspondence: kiskinov@uni-plovdiv.bg

**Abstract:** In this article, we consider a retarded linear fractional differential system with distributed delays and Caputo-type derivatives of incommensurate orders. For this system, several a priori estimates of the solutions are obtained by applying the two traditional approaches: Gronwall's inequality and integral representations of the solutions. As an application of the obtained estimates, different sufficient conditions that guarantee finite-time stability of the solutions are established. A comparison of the obtained conditions, with respect to the estimates and norms used, is made.

**Keywords:** Caputo fractional derivative; linear fractional system; distributed delay; finite time stability

**MSC:** 34A08; 34A30; 26A33; 34A12

**Citation:** Kiskinov, H.; Veselinova, M.; Madamlieva, E.; Zahariev, A. A Comparison of a Priori Estimates of the Solutions of a Linear Fractional System with Distributed Delays and Application to the Stability Analysis. *Axioms* **2021**, *10*, 75. https://doi.org/ 10.3390/axioms10020075

Academic Editor: Jorge E. Macías Díaz

Received: 31 March 2021 Accepted: 21 April 2021 Published: 27 April 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **1. Introduction**

As a highly applicable mathematical tool to study models of real-world phenomena, fractional calculus theory attracts a lot of attention. For a deep understanding of the fractional calculus theory and fractional differential equations, we recommend the monographs [1,2]. The distributed order fractional differential equations are treated in [3], and for an application-oriented exposition see [4]. The impulsive functional differential equations and some applications are considered in [5]. Some new ideas for efficient schemes for numerical solving of fractional differential problems can be found, for example, in [6,7].

Generally speaking, fractional differential equations with delay are more complicated than integer-order differential equations with delay. The reason is that a distinguishing feature of fractional differential equations with delay is that the evolution of the processes described by such equations depends on the past history through two independent sources. The first is the impact of the delays; the other is the impact of the Volterra-type integral in the definitions of the fractional derivatives, i.e., the memory of the fractional derivative.

It is well known that the classical stability concepts (Lyapunov-type stabilities) are devoted to the asymptotic properties of the solutions of differential systems over an infinite time interval. The stability of the solutions of fractional differential equations and/or systems (ordinary or with delay) is an "evergreen" research theme. Furthermore, the ubiquity of the aftereffect, which may be regarded as a universal property of the surrounding world, is a serious reason to consider mathematical models with delay and fractional derivatives. This explains why many papers are devoted to different aspects of this problem. A very good overview of the stability of fractional differential systems is given in the comprehensive survey [8]. Among more recent works, we also refer to [9–18].

However, in many practical cases it is more important to study the behavior of the solutions on some specified (finite) time interval, where large values of the state variables are not admissible. Moreover, many authors have observed that a system can be stable and yet possess unacceptable transient outputs; from an engineering point of view, such a situation renders this type of analysis useless. This is a reason to study not only Lyapunov-type stabilities but also the boundedness of the solutions defined over a finite time interval, i.e., finite-time stability (FTS). As far as we know, the first work concerning FTS was written by Kamenkov [19] in 1953. A historical overview of this theme can be obtained from the survey of Dorato [20]. Concerning more recent works devoted to the different approaches to studying finite-time stability, we refer to [21–30].

The aim of our work, motivated by the remarkable works [24–27], is twofold. First, we obtain a priori estimates using the two most popular approaches and then compare the precision of the estimates obtained via them. Second, as an application, we apply these estimates to investigate the finite-time stability of fractional differential systems with Caputo-type derivatives in the case of incommensurate fractional orders and distributed delays.

The paper is organized as follows. In Section 2, we recall the definitions of the Riemann–Liouville and Caputo fractional derivatives; the same section contains the statement of the problem, as well as some necessary definitions and preliminary results used later. Section 3 is devoted to obtaining a priori estimates of the solutions of nonautonomous fractional differential systems with Caputo-type derivatives of incommensurate orders and distributed delays via Gronwall's inequality. In Section 4, for the solutions of the same systems, we obtain a priori estimates using the approach based on their integral representations obtained in [31]. In Section 5, as an application of the proved estimates, we obtain sufficient conditions for the finite-time stability of the considered systems; some examples and comments are also given there. In Section 6, we present conclusions about the two main approaches analyzed in the previous sections.

#### **2. Preliminaries and Problem Statement**

For the reader's convenience, below we recall the definitions of the Riemann–Liouville and Caputo fractional derivatives. For details and properties, we refer to [1–3].

Let $\alpha \in (0, 1)$ be an arbitrary number and denote by $L_1^{loc}(\mathbb{R}, \mathbb{R})$ the linear space of all locally Lebesgue integrable functions $f : \mathbb{R} \to \mathbb{R}$. Then for $a \in \mathbb{R}$, $f \in L_1^{loc}(\mathbb{R}, \mathbb{R})$, and each $t > a$, the definitions of the left-sided fractional integral operator and the left-sided Riemann–Liouville and Caputo fractional derivatives of order $\alpha$ with lower limit (terminal) $a$ are given below (see [1]):

$$\begin{aligned} (D_{a+}^{-\alpha}f)(t) &= \frac{1}{\Gamma(\alpha)} \int_a^t (t-s)^{\alpha-1} f(s)\,\mathrm{d}s; \\ {}_{RL}D_{a+}^{\alpha}f(t) &= \frac{\mathrm{d}}{\mathrm{d}t}\big(D_{a+}^{-(1-\alpha)}f\big)(t); \\ {}_{C}D_{a+}^{\alpha}f(t) &= {}_{RL}D_{a+}^{\alpha}[f(s)-f(a)](t). \end{aligned}$$
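For readers who wish to experiment numerically, the Caputo derivative can be approximated by the classical L1 scheme; the sketch below (plain Python, not part of the paper) checks it against the closed form ${}_C D^{\alpha}_{0+}\, t = t^{1-\alpha}/\Gamma(2-\alpha)$, which the scheme reproduces essentially exactly because the test function is piecewise linear:

```python
import math

def caputo_l1(f, a, t, alpha, n=1000):
    """Approximate the Caputo derivative of order 0 < alpha < 1 (lower
    terminal a) at t with the L1 scheme: the Riemann-Liouville kernel
    applied to first differences of f on a uniform grid."""
    h = (t - a) / n
    acc = 0.0
    for j in range(n):
        # weight multiplying the increment f(a+(j+1)h) - f(a+jh)
        w = (n - j) ** (1 - alpha) - (n - j - 1) ** (1 - alpha)
        acc += w * (f(a + (j + 1) * h) - f(a + j * h))
    return acc * h ** (-alpha) / math.gamma(2 - alpha)

alpha, t = 0.5, 1.0
exact = t ** (1 - alpha) / math.gamma(2 - alpha)
approx = caputo_l1(lambda s: s, 0.0, t, alpha)
print(abs(approx - exact) < 1e-9)  # True
```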

Everywhere below the following notations will be used: $\mathbb{R}_+ = (0, \infty)$, $\overline{\mathbb{R}}_+ = [0, \infty)$, $J_T = [0, T]$, $T \in \mathbb{R}_+$, $\langle n \rangle = \{1, 2, \dots, n\}$, $\langle n \rangle_0 = \langle n \rangle \cup \{0\}$, $n \in \mathbb{N}$; $I, \Theta \in \mathbb{R}^{n \times n}$ denote the identity and the zero matrix, respectively; $I_k$, $k \in \langle n \rangle$, denotes the $k$-th column of the identity matrix; and $\mathbf{0} \in \mathbb{R}^n$ is the zero element.

For $\beta = (\beta_1, \dots, \beta_n)$, $\beta_k \in [-1, 1]$, $k \in \langle n \rangle$, and $Y(t) = (y_1(t), \dots, y_n(t))^T : \mathbb{R}_+ \to \mathbb{R}^n$ we use the notation $I_{\beta}(Y(t)) = \mathrm{diag}((y_1(t))^{\beta_1}, \dots, (y_n(t))^{\beta_n})$. For $W(t) = \{w_{kj}(t)\}_{k,j=1}^{n} : \overline{\mathbb{R}}_+ \to \mathbb{R}^{n \times n}$, $W(t) \in L_1^{loc}(\overline{\mathbb{R}}_+, \mathbb{R}^{n \times n})$ and locally bounded, we denote, for every fixed $t \in \overline{\mathbb{R}}_+$, by $W^T(t) = \{w_{jk}(t)\}_{k,j=1}^{n}$ the transposed matrix, by $\sigma_{Max}(t)$ the largest singular value of $W(t)$, and by $|W(t)| = \sigma_{Max}(t)$ the spectral norm [32]. In addition, $\|W(t)\| = \sup_{\xi \in [0,t]} |W(\xi)|$, $t \in \overline{\mathbb{R}}_+$, and for simplicity we will use the notation $D^{\alpha}_{0+} = {}^{C}D^{\alpha}_{0+}$ for the left-side Caputo fractional derivative with lower terminal zero.

Below we will study the inhomogeneous linear delayed system of incommensurate type with distributed delay in the following general form:

$$D_{0+}^{\alpha}X(t) = \int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(t,\theta)]\, X(t+\theta) + F(t),\ t \in \mathbb{R}_+ \tag{1}$$

or described in rows

$$D_{0+}^{\alpha_k}x_k(t) = \sum_{j=1}^{n}\int_{-h}^{0} x_j(t+\theta)\,\mathrm{d}_{\theta}u_{kj}(t,\theta) + f_k(t),\ t \in \mathbb{R}_+,\ k \in \langle n \rangle,$$

where $X(t) = (x_1(t), \dots, x_n(t))^T$, $D^{\alpha}_{0+} = \mathrm{diag}(D^{\alpha_1}_{0+}, \dots, D^{\alpha_n}_{0+})$, $h \in \mathbb{R}_+$ is an arbitrary fixed number, $\alpha = (\alpha_1, \dots, \alpha_n)$, $\alpha_k \in (0, 1)$, $\mathcal{U} : \overline{\mathbb{R}}_+ \times \mathbb{R} \to \mathbb{R}^{n \times n}$, $\mathcal{U}(t,\theta) = \{u_{kj}(t,\theta)\}_{k,j=1}^{n}$, $F(t) = (f_1(t), \dots, f_n(t))^T : \overline{\mathbb{R}}_+ \to \mathbb{R}^n$, $\alpha_M = \max_{k \in \langle n \rangle} \alpha_k$ and $\alpha_m = \min_{k \in \langle n \rangle} \alpha_k$.

**Definition 1.** *With $\widetilde{C}$ we denote the Banach space of all bounded vector functions $\Phi(t) \in L_1^{loc}([-h,0], \mathbb{R}^n)$ with finitely many jumps and norm $\|\Phi\| = \sup_{t \in [-h,0]} |\Phi(t)| = \max_{k \in \langle n \rangle} (\sup_{t \in [-h,0]} |\phi_k(t)|) < \infty$, and the subspace of all continuous functions by $C = C([-h,0], \mathbb{R}^n)$, i.e., $C \subset \widetilde{C}$. Below we assume for convenience that every $\Phi \in \widetilde{C}$ is prolonged as $\Phi(t) = \mathbf{0}$ for $t \in (-\infty, -h)$, and by $S_{\Phi}$ we denote the set of the jump points of $\Phi$.*

For the system (1), we introduce the following initial condition:

$$X(t) = \Phi(t)\ \big(x_k(t) = \phi_k(t),\ k \in \langle n \rangle\big),\ t \in (-\infty, 0],\ \Phi \in \widetilde{C}. \tag{2}$$

We say that for the kernel $\mathcal{U} : \overline{\mathbb{R}}_+ \times \mathbb{R} \to \mathbb{R}^{n \times n}$ the conditions **(S)** hold for some $h \in \mathbb{R}_+$ if the following conditions are fulfilled:

**(S1)** The functions $(t,\theta) \to \mathcal{U}(t,\theta) = \{u_{kj}(t,\theta)\}_{k,j=1}^{n}$ are measurable in $(t,\theta) \in \overline{\mathbb{R}}_+ \times \mathbb{R}$ and normalized so that for $t \in \overline{\mathbb{R}}_+$, $\mathcal{U}(t,\theta) = 0$ when $\theta \in \overline{\mathbb{R}}_+$ and $\mathcal{U}(t,\theta) = \mathcal{U}(t,-h)$ for all $\theta \in (-\infty, -h]$. For all $t \in \overline{\mathbb{R}}_+$ the matrix-valued function $\overline{\mathcal{U}}(t,0) = \mathrm{Var}_{\theta \in [-h,0]}\,\mathcal{U}(t,\theta) = \{\mathrm{Var}_{\theta \in [-h,0]}\,u_{kj}(t,\theta)\}_{k,j=1}^{n}$, $\overline{\mathcal{U}}(t,0) \in L_1^{loc}(\mathbb{R}_+, \mathbb{R}^{n \times n})$, is locally bounded and $\max_{k,j \in \langle n \rangle} \mathrm{Var}_{\theta \in [-h,0]}\,u_{kj}(t,\theta) < \infty$.

**(S2)** The Lebesgue decomposition of the kernel $\mathcal{U}(t,\theta)$ for $t \in \overline{\mathbb{R}}_+$ and $\theta \in [-h,0]$ has the form:

$$\mathcal{U}(t,\theta) = \mathcal{U}_J(t,\theta) + \mathcal{U}_{AC}(t,\theta) + \mathcal{U}_S(t,\theta),$$

where $\mathcal{U}_J(t,\theta) = \sum_{i=0}^{m} A_i(t)H(\theta + \sigma_i(t))$, $m \in \mathbb{N}$, the matrices $A_i(t) = \{a^i_{kj}(t)\}_{k,j=1}^{n} \in L_1^{loc}(\mathbb{R}_+, \mathbb{R}^{n \times n})$ are locally bounded on $\mathbb{R}_+$, $H(t)$ is the Heaviside function, the delays $\sigma_i(t) \in C(\overline{\mathbb{R}}_+, \overline{\mathbb{R}}_+)$ are bounded with $\sigma_i = \sup_{t \in \overline{\mathbb{R}}_+} \sigma_i(t)$, $\max_{i \in \langle m \rangle} \sigma_i = h$, $i \in \langle m \rangle$, $\sigma_0(t) \equiv 0$; $\mathcal{U}_{AC}(t,\theta) = \{\int_{-h}^{\theta} b^j_k(t,s)\,\mathrm{d}s\}_{k,j=1}^{n} \in L_1^{loc}(\overline{\mathbb{R}}_+ \times \mathbb{R}, \mathbb{R}^{n \times n})$ is locally bounded on $\overline{\mathbb{R}}_+$; and $\mathcal{U}_S(t,\theta) \in C(\overline{\mathbb{R}}_+ \times \mathbb{R}, \mathbb{R}^{n \times n})$.

**(S3)** For every $t^* \in \mathbb{R}_+$ the following relation holds: $\lim_{t \to t^*} \int_{-h}^{0} |\mathcal{U}(t,\theta) - \mathcal{U}(t^*,\theta)|\,\mathrm{d}\theta = 0$.

**(S4)** The set $S_{\mathcal{U}} = \{t \in \overline{\mathbb{R}}_+ \mid t - \sigma_i(t) \in S_{\Phi},\ i \in \langle m \rangle\}$ does not have limit points.

**Remark 1.** *At first glance, it seems that condition (S4) imposes certain restrictions on the initial function (more precisely, on its jump set $S_{\Phi}$, which is a finite set). However, the leading role in this interaction belongs to the delays; i.e., the validity of (S4) depends only on the properties of the delays. For example, in the case of constant delays, or when the delays are strictly increasing, (S4) is automatically fulfilled.*

Let us consider the following auxiliary system in matrix form

$$X(t) = \Phi(0) + I_{-1}(\Gamma(\alpha))\Big[\int_0^t I_{\alpha-1}(t-\eta)\int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(\eta,\theta)]\, X(\eta+\theta)\,\mathrm{d}\eta + \int_0^t I_{\alpha-1}(t-\eta)\, F(\eta)\,\mathrm{d}\eta\Big] \tag{3}$$

where $I_{-1}(\Gamma(\alpha)) = \mathrm{diag}(\Gamma^{-1}(\alpha_1), \dots, \Gamma^{-1}(\alpha_n))$, or, for $k \in \langle n \rangle$, in row form

$$x_k(t) = \phi_k(0) + \frac{1}{\Gamma(\alpha_k)}\Big[\int_0^t (t-\eta)^{\alpha_k-1}\Big(\sum_{j=1}^{n}\int_{-h}^{0} x_j(\eta+\theta)\,\mathrm{d}_{\theta}u_{kj}(\eta,\theta)\Big)\mathrm{d}\eta + \int_0^t (t-\eta)^{\alpha_k-1} f_k(\eta)\,\mathrm{d}\eta\Big],$$

with the initial condition (2).
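To make the integral form (3) concrete, here is a minimal numerical sketch (a hypothetical scalar illustration, not the authors' method): a product-rectangle discretization of the single-delay scalar special case $D^{\alpha}_{0+}x(t) = a\,x(t-h) + f(t)$, obtained by applying the quadrature directly to (3). All parameter values below are assumptions chosen for illustration.

```python
import math

def simulate_scalar(alpha=0.7, a=-1.0, h=0.5, phi=lambda t: 1.0,
                    f=lambda t: 0.0, T=2.0, n=400):
    """Product-rectangle scheme for the scalar analogue of (3):
    x(t) = phi(0) + (1/Gamma(alpha)) *
           int_0^t (t - eta)^(alpha - 1) * (a*x(eta - h) + f(eta)) d eta."""
    dt = T / n
    ts = [j * dt for j in range(n + 1)]
    xs = [phi(0.0)]

    def delayed(t):
        # delayed state: the initial function for t <= 0, else a grid value
        if t <= 0.0:
            return phi(t)
        return xs[min(int(t / dt), len(xs) - 1)]

    for k in range(1, n + 1):
        acc = 0.0
        for j in range(k):
            # exact integral of alpha*(t_k - eta)^(alpha-1) over [t_j, t_{j+1}]
            w = (ts[k] - ts[j]) ** alpha - (ts[k] - ts[j + 1]) ** alpha
            acc += w * (a * delayed(ts[j] - h) + f(ts[j]))
        xs.append(phi(0.0) + acc / math.gamma(alpha + 1))
    return ts, xs

ts, xs = simulate_scalar()
print(xs[0], xs[-1])  # starts at phi(0) = 1 and decays below 1
```

The inner sum treats the memory integral exactly on each cell and freezes the integrand at the left endpoint, which is the simplest convergent choice for weakly singular Volterra kernels.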

In our exposition below we will use the abbreviation IP for Initial Problem.

**Definition 2.** *The vector function $X(t) = (x_1(t), \dots, x_n(t))^T$ is a solution of the IP* (1)*,* (2) *or IP* (3)*,* (2) *in $\overline{\mathbb{R}}_+$ if $X \in C(\overline{\mathbb{R}}_+, \mathbb{R}^n)$ satisfies the system* (1)*, respectively* (3)*, for all $t \in \mathbb{R}_+$ and the initial condition* (2) *for each $t \in [-h, 0]$.*

By virtue of Lemma 3.3 in [33], every solution $X(t)$ of IP (1), (2) is a solution of IP (3), (2) and vice versa. Moreover, IP (3), (2) possesses a unique solution $X \in C(\overline{\mathbb{R}}_+, \mathbb{R}^n)$ according to Corollary 1 in [34], and hence so does IP (1), (2).

For the corresponding homogeneous system of the system (1) (i.e., *F*(*t*) ≡ **0** for *<sup>t</sup>* ∈ R+):

$$D_{0+}^{\alpha}X(t) = \int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(t,\theta)]\, X(t+\theta),\ t \in \mathbb{R}_+ \tag{4}$$

and for arbitrary fixed *s* ∈ [−*h*, ∞) introduce the matrix system

$$D_{0+}^{\alpha}W(t,s) = \int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(t,\theta)]\, W(t+\theta,s),\ t \in \mathbb{R}_+ \cap [s, \infty). \tag{5}$$

as well as the special kind of initial matrix-valued functions $\Phi_1, \Phi_2 : \mathbb{R}^2 \to \mathbb{R}^{n \times n}$

$$\Phi_1(t,s) = \begin{cases} I, & t = s, \\ \Theta, & t < s, \end{cases}\quad s \in \mathbb{R}_+,\qquad \Phi_2(t,s) = \begin{cases} I, & -h \le s \le t \le 0, \\ \Theta, & t < s \text{ or } s < -h, \end{cases}\quad s \in [-h, 0] \tag{6}$$

and consider the matrix integral equations

$$C(t,s) = \Phi_1(t,s) + I_{-1}(\Gamma(\alpha))\int_s^t I_{\alpha-1}(t-\eta)\int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(\eta,\theta)]\, C(\eta+\theta,s)\,\mathrm{d}\eta,\ s \in \mathbb{R}_+,\ t \in (s, \infty) \tag{7}$$

$$T_{-h}(t,s) = \Phi_2(0,s) + I_{-1}(\Gamma(\alpha))\int_0^t I_{\alpha-1}(t-\eta)\int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(\eta,\theta)]\, T_{-h}(\eta+\theta,s)\,\mathrm{d}\eta,\ s \in [-h, 0],\ t \in \mathbb{R}_+ \tag{8}$$

For arbitrary fixed $s \in \overline{\mathbb{R}}_+$, the solution $C(t,s)$ of (7) for $t \in (s, \infty)$ with initial condition $C(t,s) = \Phi_1(t,s)$, $t \in (-\infty, s]$, is called the fundamental matrix of the system (4).

By $T_{-h}(t,s)$ for arbitrary fixed $s \in (-\infty, 0]$ we denote the solution of (8) for $t \in \mathbb{R}_+$ with initial condition $T_{-h}(t,s) = \Phi_2(t,s)$, $t \in (-\infty, 0]$, and we note that $C(t,0) = T_0(t,0)$.

The existence and uniqueness of the fundamental matrix $C(t,s)$ of the system (4) and of the matrix $T_{-h}(t,s)$, as well as their properties, are proved in [31]. Please note that these matrices are absolutely continuous in $t$ and continuous in $s$ on every compact subinterval of $\overline{\mathbb{R}}_+$ when $s \neq t$, while for $s = t$ they possess jumps of the first kind [31].

Everywhere below we will use the notations:

$$\begin{aligned} \|\mathcal{U}(t,0)\| &= \sup_{\xi \in [0,t]} |\overline{\mathcal{U}}(\xi,0)| = \sup_{\xi \in [0,t]} \big|\mathrm{Var}_{\theta \in [-h,0]}\,\mathcal{U}(\xi,\theta)\big|, \\ \overline{C}(t,s) &= \mathrm{Var}_{\eta \in [0,s]}\,C(t,\eta) = \{\mathrm{Var}_{\eta \in [0,s]}\,c_{kj}(t,\eta)\}_{k,j=1}^{n}, \\ \|\overline{C}(t,s)\| &= \sup_{\xi \in [0,t]} \big|\mathrm{Var}_{\eta \in [0,s]}\,C(\xi,\eta)\big|, \\ \overline{T}_{-h}(t,s) &= \mathrm{Var}_{\eta \in [-h,s]}\,T_{-h}(t,\eta) = \{\mathrm{Var}_{\eta \in [-h,s]}\,\Phi_{kj}(t,\eta)\}_{k,j=1}^{n}, \\ \|\overline{T}_{-h}(t,s)\| &= \sup_{\xi \in [0,t]} |\overline{T}_{-h}(\xi,s)| = \sup_{\xi \in [0,t]} \big|\mathrm{Var}_{\eta \in [-h,s]}\,T_{-h}(\xi,\eta)\big|. \end{aligned}$$

We recall some needed properties of the gamma function $\Gamma(z)$, $z \in \mathbb{R}_+$.

It is well known that Γ(*z*) has a local minimum at *zmin* ≈ 1.46163, where it attains the value Γ(*zmin*) ≈ 0.885603. Since Γ(*z*) for *z* ∈ (0, *zmin*) is strictly decreasing, then for arbitrary *α<sup>k</sup>* ∈ (0, 1) we have that

$$\max_{k \in \langle n \rangle} \frac{1}{\Gamma(\alpha_k)} < \max_{k \in \langle n \rangle} \frac{1}{\Gamma(1 + \alpha_k)} \le \frac{1}{\Gamma(z_{\min})} \approx 1.1292.$$
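These numerical values are easy to double-check with the standard library alone; the brute-force grid search below is only a sanity check, not part of the paper's argument:

```python
import math

# Locate the minimum of the gamma function on [1, 2) by brute force and
# confirm the constant used in the bound above.
zs = [1.0 + k / 100000 for k in range(100000)]
z_min = min(zs, key=math.gamma)
print(round(z_min, 5))                    # ~1.46163
print(round(math.gamma(z_min), 6))        # ~0.885603
print(round(1.0 / math.gamma(z_min), 4))  # ~1.1292
```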

For the function *<sup>I</sup>α*−1(*<sup>t</sup>* <sup>−</sup> *<sup>η</sup>*)=(diag((*<sup>t</sup>* <sup>−</sup> *<sup>η</sup>*)*α*1<sup>−</sup>1, ... ,(*<sup>t</sup>* <sup>−</sup> *<sup>η</sup>*)*αn*−1) we will use below the notations *α*<sup>∗</sup> = *α<sup>m</sup>* when *t* − *η* ≤ 1 and *α*<sup>∗</sup> = *α<sup>M</sup>* when *t* − *η* ≥ 1. Then we have that for *<sup>t</sup>* <sup>∈</sup> <sup>R</sup>¯ <sup>+</sup>, *<sup>η</sup>* <sup>∈</sup> [0, *<sup>t</sup>*), the following relations hold

$$|I_{\alpha-1}(t-\eta)| = (t-\eta)^{\alpha_*-1};\qquad |I_{-1}(\Gamma(\alpha))| = \frac{1}{\Gamma(\alpha_M)} = \Gamma^{-1}(\alpha_M) = C_0, \tag{9}$$

where $\Gamma^{-1}(\alpha_M)$ and $(t-\eta)^{\alpha_*-1}$ are the largest singular values of the diagonal matrices $I_{-1}(\Gamma(\alpha))$ and $I_{\alpha-1}(t-\eta)$, respectively.

**Theorem 1.** *[35] Let the following conditions hold:*


*3. For every t* ∈ [0, *T*) *the following inequality holds:*

$$u(t) \le a(t) + g(t) \int_0^t (t-\eta)^{\alpha-1} u(\eta)\,\mathrm{d}\eta.$$

*Then the following inequality holds for t* ∈ [0, *T*)*:*

$$u(t) \le a(t) + \int_0^t \Big[\sum_{q=1}^{\infty} \frac{(g(\eta)\Gamma(\alpha))^q}{\Gamma(\alpha q)} (t-\eta)^{\alpha q-1}\Big] a(\eta)\,\mathrm{d}\eta.$$

**Corollary 1.** *[35] Let the conditions of Theorem 1 hold and let the function a*(*t*) *be nondecreasing on* [0, *T*)*.*

*Then for $t \in [0, T)$ the inequality $u(t) \le a(t)E_{\alpha}[g(t)\Gamma(\alpha)t^{\alpha}]$ holds, where $E_{\alpha}$ denotes the one-parameter Mittag-Leffler function.*
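The one-parameter Mittag-Leffler function appearing in Corollary 1 can be evaluated by direct series truncation for moderate arguments (a sketch, not part of the paper; for large $|z|$ or production use, dedicated algorithms are preferable):

```python
import math

def mittag_leffler(alpha, z, terms=150):
    """E_alpha(z) = sum_{q>=0} z^q / Gamma(alpha*q + 1), truncated.
    Keep alpha*terms + 1 below ~171 so math.gamma does not overflow."""
    return sum(z ** q / math.gamma(alpha * q + 1) for q in range(terms))

# Sanity checks: E_1(z) = exp(z), and E_alpha(0) = 1.
print(abs(mittag_leffler(1.0, 2.0) - math.exp(2.0)) < 1e-9)  # True
print(mittag_leffler(0.5, 0.0))  # 1.0
```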

**Definition 3.** *[27] The fractional system* (1) *with the initial condition* (2) *is finite-time stable with respect to* $\{0, J_T, \delta, \varepsilon, h\}$*, $t \in J_T$, $\delta \le \varepsilon$, if and only if* $\|\Phi\| < \delta$ *implies* $\|X(t)\| < \varepsilon$ *for each $t \in J_T$, where $X(t)$ is the unique solution of IP* (1)*,* (2)*.*

#### **3. A Priori Estimates of the Solutions of IP** (1)**,** (2)**—Gronwall's Inequality Approach**

In this section, we obtain some a priori estimates of the solutions of IP (1), (2) and IP (4), (2) in different cases, depending on the properties of the initial function Φ and the function *F*. The a priori estimates in this section are obtained using approaches based on Gronwall's inequality.

**Theorem 2.** *Let T* ∈ R<sup>+</sup> *be an arbitrary fixed number and the following conditions are fulfilled: 1. Conditions (S) hold.*

*2. The function F*(*t*) <sup>∈</sup> *<sup>L</sup>loc* <sup>1</sup> (R+, R*n*) *is locally bounded.*

*Then for every initial function* <sup>Φ</sup> <sup>∈</sup> *<sup>C</sup>*˜ *the corresponding unique solution <sup>X</sup>*(*t*) *of IP* (1)*,* (2) *for every t* ∈ *JT satisfies the estimation*

$$\max\big(\|X(t)\|, \|\Phi\|\big) \le \big(\|\Phi\| + \alpha_*^{-1} C_0 \|F(t)\|\, t^{\alpha_*}\big)\, E_{\alpha_*}\big(\|\mathcal{U}(t,0)\|\, C_0\, \Gamma(\alpha_*)\, t^{\alpha_*}\big). \tag{10}$$

**Proof.** Let $\Phi \in \widetilde{C}$ be an arbitrary initial function and let $X(t)$ be the corresponding unique solution of IP (1), (2). If $\max(\|X(T)\|, \|\Phi\|) = \|\Phi\|$, then the estimation (10) obviously holds.

Assume now that $\max(\|X(T)\|, \|\Phi\|) > \|\Phi\|$. From (3), for every $t \in J_T$ it follows that

$$X(t) = \Phi(0) + I_{-1}(\Gamma(\alpha))\Big[\int_0^t I_{\alpha-1}(t-\eta)\, F(\eta)\,\mathrm{d}\eta + \int_0^t I_{\alpha-1}(t-\eta)\int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(\eta,\theta)]\, X(\eta+\theta)\,\mathrm{d}\eta\Big]. \tag{11}$$

Using (9) it is simple to check that

$$\begin{split} \Big|I_{-1}(\Gamma(\alpha))\int_0^t I_{\alpha-1}(t-\eta)F(\eta)\,\mathrm{d}\eta\Big| &\le \frac{1}{\Gamma(\alpha_M)}\int_0^t (t-\eta)^{\alpha_*-1}|F(\eta)|\,\mathrm{d}\eta \\ &\le C_0\|F(t)\|\int_0^t (t-\eta)^{\alpha_*-1}\,\mathrm{d}\eta = C_0\,\alpha_*^{-1}\|F(t)\|\,t^{\alpha_*}. \end{split} \tag{12}$$

Since for each $\eta \in \overline{\mathbb{R}}_+$ with $\eta + \theta \le 0$ for some $\theta \in [-h, 0]$ we have $|X(\eta+\theta)| \le \|\Phi\|$, and for each $\eta \in \overline{\mathbb{R}}_+$ with $\eta + \theta \in [0, \eta]$ for some $\theta \in [-h, 0]$ the estimation $|X(\eta+\theta)| \le \|X(\eta)\|$ holds, for $t \in J_T$ we obtain

$$\Big|\int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(\eta,\theta)]\, X(\eta+\theta)\Big| \le \|\mathcal{U}(t,0)\| \max\big(\|X(\eta)\|, \|\Phi\|\big). \tag{13}$$

Then from (9), (11)–(13) for *t* ∈ *JT* we obtain

$$\begin{split} \max(\|X(t)\|, \|\Phi\|) &\le \|\Phi\| + C_0\int_0^t (t-\eta)^{\alpha_*-1}\|F(t)\|\,\mathrm{d}\eta \\ &\quad + \frac{1}{\Gamma(\alpha_M)}\int_0^t (t-\eta)^{\alpha_*-1}\Big|\int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(\eta,\theta)]X(\eta+\theta)\Big|\,\mathrm{d}\eta \\ &\le \|\Phi\| + C_0\,\alpha_*^{-1} t^{\alpha_*}\|F(t)\| + C_0\|\mathcal{U}(t,0)\|\int_0^t (t-\eta)^{\alpha_*-1}\max(\|X(\eta)\|, \|\Phi\|)\,\mathrm{d}\eta \end{split} \tag{14}$$

and, denoting $u(t) = \max(\|X(t)\|, \|\Phi\|)$, from (14) it follows that

$$u(t) \le \big(\|\Phi\| + C_0\,\alpha_*^{-1} t^{\alpha_*}\|F(t)\|\big) + C_0\|\mathcal{U}(t,0)\|\sup_{\xi \in [0,t]}\int_0^{\xi}(\xi-\eta)^{\alpha_*-1}u(\eta)\,\mathrm{d}\eta. \tag{15}$$

Since $u(t)$ is positive and non-decreasing, for each $t \in J_T$ we have

$$\begin{aligned} \sup_{\xi \in [0,t]}\int_0^{\xi}(\xi-\eta)^{\alpha_*-1}u(\eta)\,\mathrm{d}\eta &= \sup_{\xi \in [0,t]}\int_0^{\xi}s^{\alpha_*-1}u(\xi-s)\,\mathrm{d}s \\ &\le \int_0^t s^{\alpha_*-1}u(t-s)\,\mathrm{d}s = \int_0^t (t-\eta)^{\alpha_*-1}u(\eta)\,\mathrm{d}\eta \end{aligned} \tag{16}$$

and hence, from (15) and (16), for each $t \in J_T$ we obtain the estimation

$$u(t) \le \big(\|\Phi\| + C_0\,\alpha_*^{-1} t^{\alpha_*}\|F(t)\|\big) + C_0\|\mathcal{U}(t,0)\|\int_0^t (t-\eta)^{\alpha_*-1}u(\eta)\,\mathrm{d}\eta. \tag{17}$$

Then applying Corollary 1 to (17) we obtain (10).

**Corollary 2.** *Let T* ∈ R<sup>+</sup> *be an arbitrary fixed number and the following conditions are fulfilled: 1. The conditions (S) hold.*

*2.* $\|F(T)\| = 0$*.*

*Then for every initial function* $\Phi \in \widetilde{C}$ *with* $\|\Phi\| > 0$*, the corresponding unique solution* $X(t)$ *of IP* (1)*,* (2) *for every $t \in J_T$ satisfies the estimation*

$$\max\big(\|X(t)\|, \|\Phi\|\big) \le \|\Phi\|\, E_{\alpha_*}\big(\|\overline{\mathcal{U}}(t,0)\|\, C_0\, \Gamma(\alpha_*)\, t^{\alpha_*}\big). \tag{18}$$

**Proof.** The estimation (18) follows immediately from (10) using that *F*(*t*) = 0 for each *t* ∈ [0, *T*].

**Corollary 3.** *Let T* ∈ R<sup>+</sup> *be an arbitrary fixed number and the following conditions are fulfilled: 1. The conditions (S) hold.*

*2. The function* $F(t) \in L_1^{loc}(\mathbb{R}_+, \mathbb{R}^n)$ *is locally bounded and* $\|\Phi\| = 0$*.*

*Then the corresponding unique solution X*(*t*) *of IP* (1)*,* (2) *satisfies the estimation*

$$\|X(t)\| \le \alpha_*^{-1} C_0\, t^{\alpha_*}\|F(t)\|\, E_{\alpha_*}\big(\|\mathcal{U}(t,0)\|\, C_0\, \Gamma(\alpha_*)\, t^{\alpha_*}\big). \tag{19}$$

**Proof.** The estimation (19) follows immediately from (10), using that $\|\Phi\| = 0$.
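As an illustration (not from the paper), the sketch below checks estimate (19) on a hypothetical scalar example $D^{\alpha}_{0+}x(t) = a\,x(t-h) + 1$ with zero initial function; here $n = 1$, $\alpha_* = \alpha$, $\|\mathcal{U}(t,0)\| = |a|$ and $C_0\Gamma(\alpha_*) = 1$, so the right-hand side of (19) reduces to $t^{\alpha}/\Gamma(\alpha+1)\cdot E_{\alpha}(|a|t^{\alpha})$. All numeric values are assumptions chosen for the demonstration.

```python
import math

def ml(alpha, z, terms=150):
    # truncated one-parameter Mittag-Leffler series (moderate |z| only)
    return sum(z ** q / math.gamma(alpha * q + 1) for q in range(terms))

def simulate(alpha, a, h, f, T, n):
    # product-rectangle scheme for x(t) = (1/Gamma(alpha)) *
    # int_0^t (t - eta)^(alpha-1) * (a*x(eta - h) + f(eta)) d eta,
    # with identically zero initial function (||Phi|| = 0)
    dt = T / n
    xs = [0.0]
    for k in range(1, n + 1):
        acc = 0.0
        for j in range(k):
            w = ((k - j) * dt) ** alpha - ((k - j - 1) * dt) ** alpha
            lag = j * dt - h
            xlag = 0.0 if lag <= 0 else xs[min(int(round(lag / dt)), len(xs) - 1)]
            acc += w * (a * xlag + f(j * dt))
        xs.append(acc / math.gamma(alpha + 1))
    return xs

alpha, a, h, T = 0.7, -1.0, 0.3, 1.0
xs = simulate(alpha, a, h, lambda t: 1.0, T, n=200)
# right-hand side of (19): here C_0 * Gamma(alpha_*) = 1 and ||F(t)|| = 1
bound = T ** alpha / math.gamma(alpha + 1) * ml(alpha, abs(a) * T ** alpha)
print(max(abs(x) for x in xs) <= bound)  # True
```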

The next theorem is devoted to obtaining another form of the estimation (10), based on the assumption that $\|\Phi\| > 0$. The approach used is the same as in Theorem 2, but the assumption $\|\Phi\| > 0$ allows one technical trick to be realized.

**Theorem 3.** *Let T* ∈ R<sup>+</sup> *be an arbitrary fixed number and the following conditions are fulfilled:*

*1. The conditions of Theorem 2 hold and* $\|F(T)\| > 0$*.*

*2. The initial function* $\Phi \in \widetilde{C}$ *satisfies the condition* $\|\Phi\| > 0$*.*

*Then the corresponding unique solution X*(*t*) *of IP* (1)*,* (2) *for every t* ∈ *JT satisfies the estimation*

$$\max\big(\|X(t)\|, \|\Phi\|\big) \le \|\Phi\|\, E_{\alpha_*}\big((C_{\Phi} + \|\overline{\mathcal{U}}(t,0)\|)\, C_0\, \Gamma(\alpha_*)\, t^{\alpha_*}\big), \tag{20}$$

*where* $C_{\Phi} = \|\Phi\|^{-1}\|F(T)\|$*.*

**Proof.** Let $X(t)$ be the corresponding unique solution of IP (1), (2). Condition 2 implies that $\|\Phi\| > 0$; since $\sup_{s \in [-h,t]} |X(s)|$ is non-decreasing and $X(t) = \Phi(t)$ for $t \in [-h, 0]$, we have $\sup_{t \in [-h,0]} |X(t)| = \|\Phi\|$. Assume that $\|\Phi\| > |X(0)|$ and let $\bar{t} \in [0, T]$ be arbitrary with $\|\Phi\| \ge |X(\bar{t})|$. Then for $C_{\Phi} = \|\Phi\|^{-1}\|F(T)\|$ we have

$$C_{\Phi} \max\big(|X(\bar{t})|, \|\Phi\|\big) = C_{\Phi}\|\Phi\| = \|\Phi\|\,\frac{\|F(T)\|}{\|\Phi\|} = \|F(T)\| \ge \|F(\bar{t})\| \ge |F(\bar{t})|.$$

For arbitrary $\bar{t} \in [0, T]$ with $\|\Phi\| \le |X(\bar{t})|$ we obtain that the inequality

$$C_{\Phi} \max\big(|X(\bar{t})|, \|\Phi\|\big) = C_{\Phi}|X(\bar{t})| \ge \|\Phi\|\,\frac{\|F(T)\|}{\|\Phi\|} = \|F(T)\| \ge \|F(\bar{t})\| \ge |F(\bar{t})|$$

holds, and hence for each $t \in J_T$ the inequality $|F(t)| \le C_{\Phi} \max(|X(t)|, \|\Phi\|)$ holds.

Then for each $t \in J_T$, starting from (11) as in the proof of Theorem 2, we obtain that (14) holds. From (14), taking into account the inequality $|F(t)| \le C_{\Phi} \max(|X(t)|, \|\Phi\|)$, it follows that

$$\begin{split} \max(|X(t)|, \|\Phi\|) &\le \|\Phi\| + C_0 C_{\Phi}\int_0^t (t-\eta)^{\alpha_*-1}\max(|X(\eta)|, \|\Phi\|)\,\mathrm{d}\eta \\ &\quad + C_0\|\overline{\mathcal{U}}(t,0)\|\int_0^t (t-\eta)^{\alpha_*-1}\max(|X(\eta)|, \|\Phi\|)\,\mathrm{d}\eta \\ &\le \|\Phi\| + C_0\big(C_{\Phi} + \|\overline{\mathcal{U}}(t,0)\|\big)\int_0^t (t-\eta)^{\alpha_*-1}\max(|X(\eta)|, \|\Phi\|)\,\mathrm{d}\eta \end{split} \tag{21}$$

and hence from (21) as in the proof of Theorem 2 we obtain

$$u(t) \le \|\Phi\| + C_0\big(C_{\Phi} + \|\bar{\mathcal{U}}(t,0)\|\big)\int_0^t (t-\eta)^{\alpha_*-1}u(\eta)\,\mathrm{d}\eta.\tag{22}$$

Then applying Corollary 1 to (22) we obtain (20).

**Remark 2.** *At first glance, the estimate* (20) *looks better, at least in having a form more appropriate for applications, compared with* (10)*. However, the most important question is which estimate is more accurate, since in general the approach used in both proofs is the same. It is simple to establish that if $\|\Phi\| = 0$ then the estimate* (19) *can be used, and in the case when $\|\Phi\| > 0$ the estimate* (10) *can be rewritten in the form*

$$\max\big(\|X(t)\|, \|\Phi\|\big) \le \|\Phi\|\big(1 + \alpha_*^{-1}C_0 C_{\Phi}t^{\alpha_*}\big)E_{\alpha}\big(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*}\big).\tag{23}$$

*These simple considerations limit the impact to linear (no more than power-law) growth on the right side of the estimate* (23) *and allow avoiding the highly nonlinear impact of $C_{\Phi} = \|\Phi\|^{-1}\|F(T)\|$ as an argument of the Mittag-Leffler function $E_{\alpha}(\cdot)$ in* (20)*.*

#### **4. A Priori Estimates of the Solutions Obtained via Their Integral Representations**

The a priori estimates in this section are obtained using the other popular approach, which is essentially based on the different kinds of integral representations of the solutions of the considered systems obtained in [31,33], and on applying the superposition principle.

**Theorem 4.** *Let $T \in \mathbb{R}_+$ be an arbitrary fixed number and the following conditions be fulfilled:*

*1. The conditions of Theorem 2 hold.*

*2. The initial function* Φ(*t*) ≡ **0** *for t* ∈ [−*h*, 0] *(i.e.,* Φ = 0*).*

*Then the corresponding unique solution <sup>X</sup>F*(*t*) *of the IP* (1)*,* (2) *for every <sup>t</sup>* <sup>∈</sup> *JT satisfies the estimation*

$$\|X^F(t)\| \le \alpha_*^{-1}t^{\alpha_*}C_0\|F(t)\|\big(1 + \|\bar{C}(t,t)\|\big).\tag{24}$$

**Proof.** Let $\Phi \in \tilde{C}$ with $\Phi(t) \equiv \mathbf{0}$ for $t \in [-h, 0]$. Then, according to Theorem 4.3 in [33], the unique solution $X^F(t)$ of the IP (1), (2) for every $t \in \mathbb{R}_+$ has the following representation:

$$X^F(t) = \int_0^t C(t,s)\,{}_{RL}D_{0+}^{1-\alpha}F(s)\,\mathrm{d}s,\tag{25}$$

where $C(t,s)$ is the fundamental matrix of the system (5). Then from (25), after simple calculations and integration by parts, we obtain for $t \in \mathbb{R}_+$

$$\begin{split} X^F(t) &= \int_0^t C(t,s)\,{}_{RL}D_{0+}^{1-\alpha}F(s)\,\mathrm{d}s = \Gamma^{-1}(\alpha)\int_0^t C(t,s)\Big(\frac{\mathrm{d}}{\mathrm{d}s}\int_0^s I_{\alpha-1}(s-\eta)F(\eta)\,\mathrm{d}\eta\Big)\mathrm{d}s \\ &= \Gamma^{-1}(\alpha)\int_0^t C(t,s)\,\mathrm{d}_s\Big(\int_0^s I_{\alpha-1}(s-\eta)F(\eta)\,\mathrm{d}\eta\Big) \\ &= \Gamma^{-1}(\alpha)\int_0^t I_{\alpha-1}(t-\eta)F(\eta)\,\mathrm{d}\eta - \Gamma^{-1}(\alpha)\int_0^t \Big(\int_0^s I_{\alpha-1}(s-\eta)F(\eta)\,\mathrm{d}\eta\Big)\mathrm{d}_s(C(t,s)) \end{split}\tag{26}$$

Then for the first term on the right-hand side of (26), using (12), we obtain that

$$\Big|\Gamma^{-1}(\alpha)\int_0^t I_{\alpha-1}(t-\eta)F(\eta)\,\mathrm{d}\eta\Big| \le C_0\int_0^t |I_{\alpha-1}(t-\eta)|\,\|F(\eta)\|\,\mathrm{d}\eta = C_0\int_0^t (t-\eta)^{\alpha_*-1}\|F(\eta)\|\,\mathrm{d}\eta$$

and hence, by virtue of (16), we obtain

$$\Big\|\Gamma^{-1}(\alpha)\int_0^t I_{\alpha-1}(t-\eta)F(\eta)\,\mathrm{d}\eta\Big\| \le C_0\int_0^t (t-\eta)^{\alpha_*-1}\|F(\eta)\|\,\mathrm{d}\eta \le \alpha_*^{-1}C_0 t^{\alpha_*}\|F(t)\|.\tag{27}$$

For the second term on the right-hand side of (26) we obtain the estimate

$$\begin{split} &\Big|\Gamma^{-1}(\alpha)\int_0^t \Big(\int_0^s I_{\alpha-1}(s-\eta)F(\eta)\,\mathrm{d}\eta\Big)\mathrm{d}_s(C(t,s))\Big| \\ &\le C_0\|\bar{C}(t,t)\|\sup_{s\in[0,t]}\Big|\int_0^s I_{\alpha-1}(s-\eta)F(\eta)\,\mathrm{d}\eta\Big| \le C_0\|\bar{C}(t,t)\|\sup_{s\in[0,t]}\int_0^s |I_{\alpha-1}(s-\eta)|\,\|F(\eta)\|\,\mathrm{d}\eta \\ &= C_0\|F(t)\|\,\|\bar{C}(t,t)\|\sup_{s\in[0,t]}\int_0^s (s-\eta)^{\alpha_*-1}\mathrm{d}\eta = \alpha_*^{-1}C_0 t^{\alpha_*}\|F(t)\|\,\|\bar{C}(t,t)\|. \end{split}\tag{28}$$

Then the statement of the theorem follows from (27) and (28).

**Theorem 5.** *Let $T \in \mathbb{R}_+$ be an arbitrary fixed number and the following conditions be fulfilled:*

*1. The conditions (S) hold.*

*2. $\|F(T)\| = 0$.*

*3. The initial function* <sup>Φ</sup> <sup>∈</sup> *BV*([−*h*, 0], <sup>R</sup>*n*) <sup>∩</sup> *<sup>C</sup>*˜ *and its Lebesgue decomposition does not include a singular term.*

*Then the corresponding unique solution X*Φ(*t*) *of the IP* (1)*,* (2) *for every t* ∈ *JT satisfies the estimation*

$$\|X_{\Phi}(t)\| \le |Var_{\eta\in[-h,0]}\Phi(\eta)|\sup_{s\in[-h,0]}\|T_{-h}(t,s)\| + |\Phi(-h)|\,\|T_{-h}(t,-h)\|.\tag{29}$$

**Proof.** According to Theorem 9 in [31], the unique solution $X_{\Phi}(t)$ of the IP (1), (2) for every $t \in \mathbb{R}_+$ has the following representation:

$$X\_{\Phi}(t) = \int\_{-h}^{0} T\_{-h}(t, s) \mathbf{d}\Phi(s) + T\_{-h}(t, -h)\Phi(-h). \tag{30}$$

From (30) we obtain

$$\begin{split} \|X_{\Phi}(t)\| &\le \int_{-h}^{0}\|T_{-h}(t,s)\|\,\mathrm{d}\big(|Var_{\eta\in[-h,s]}\Phi(\eta)|\big) + |\Phi(-h)|\,\|T_{-h}(t,-h)\| \\ &\le |Var_{\eta\in[-h,0]}\Phi(\eta)|\sup_{s\in[-h,0]}\|T_{-h}(t,s)\| + |\Phi(-h)|\,\|T_{-h}(t,-h)\| \end{split}\tag{31}$$

and (29) follows from (31), which completes the proof.

**Corollary 4.** *Let $T \in \mathbb{R}_+$ be an arbitrary fixed number and the following conditions be fulfilled:*

*1. The conditions (S) hold.*

*2. The initial function $\Phi(t) \equiv \Phi_0 \ne \mathbf{0}$, $\Phi_0 \in \mathbb{R}^n$, for $t \in [-h, 0]$.*

*Then the corresponding unique solution X*Φ(*t*) *of the IP* (1)*,* (2) *for every t* ∈ *JT satisfies the estimation*

$$\|X_{\Phi}(t)\| \le |\Phi(-h)|\,\|T_{-h}(t,-h)\| = |\Phi_0|\,\|T_{-h}(t,-h)\|.\tag{32}$$

**Proof.** According to Theorem 9 in [31], the unique solution $X(t)$ of the IP (1), (2) for every $t \in \mathbb{R}_+$ has the representation (30). Since $\Phi$ is constant, the integral term in (30) vanishes, and hence $X_{\Phi}(t) = T_{-h}(t,-h)\Phi(-h)$, which completes the proof.

**Corollary 5.** *Let $T \in \mathbb{R}_+$ be an arbitrary fixed number and the following conditions be fulfilled:*

*1. The conditions (S) hold.*

*2. The function $F(t) \in L_1^{loc}(\mathbb{R}_+, \mathbb{R}^n)$ is locally bounded.*

*3. The initial function* <sup>Φ</sup> <sup>∈</sup> *BV*([−*h*, 0], <sup>R</sup>*n*) <sup>∩</sup> *<sup>C</sup>*˜ *and its Lebesgue decomposition does not include a singular term.*

*Then the corresponding unique solution $X_{\Phi}^F(t)$ of the IP* (1)*,* (2) *for every $t \in J_T$ satisfies the estimation*

$$\begin{split} \|X_{\Phi}^F(t)\| &\le |Var_{\eta\in[-h,0]}\Phi(\eta)|\sup_{s\in[-h,0]}\|T_{-h}(t,s)\| + |\Phi(-h)|\,\|T_{-h}(t,-h)\| \\ &\quad + \alpha_*^{-1}t^{\alpha_*}C_0\|F(t)\|\big(1 + \|\bar{C}(t,t)\|\big) \end{split}\tag{33}$$

**Proof.** Using the superposition principle, i.e., $X_{\Phi}^F(t) = X_{\Phi}(t) + X^F(t)$, we obtain that the estimation (33) follows immediately from Theorems 4 and 5.

**Remark 3.** *It is clear that if $\|\Phi\|\,\|F(T)\| > 0$, then* (33) *can be rewritten in the form*

$$\begin{split} \|X_{\Phi}^F(t)\| &\le \max\big(\|\Phi\|, |Var_{\eta\in[-h,0]}\Phi(\eta)|\big)\Big[\sup_{s\in[-h,0]}\|T_{-h}(t,s)\| + \|T_{-h}(t,-h)\| \\ &\quad + \alpha_*^{-1}t^{\alpha_*}C_0 C_{\Phi}\big(1 + \|\bar{C}(t,t)\|\big)\Big] \end{split}\tag{34}$$

The next theorem establishes explicit bounds for the matrix functions involved in (33) and (34), which allows obtaining a new form of these estimations that is more convenient for practical computer calculations.

**Theorem 6.** *Let $T \in \mathbb{R}_+$ be an arbitrary fixed number and the following conditions be fulfilled:*

*1. The conditions (S) hold.*

*2. The function $F(t) \in L_1^{loc}(\mathbb{R}_+, \mathbb{R}^n)$ is locally bounded and $\|\Phi\|\,\|F(T)\| > 0$.*

*3. The initial function* <sup>Φ</sup> <sup>∈</sup> *BV*([−*h*, 0], <sup>R</sup>*n*) <sup>∩</sup> *<sup>C</sup>*˜ *and its Lebesgue decomposition does not include a singular term.*

*Then the corresponding unique solution $X_{\Phi}^F(t)$ of the IP* (1)*,* (2) *for every $t \in J_T$ satisfies the estimation*

$$\begin{split} \|X_{\Phi}^F(t)\| &\le \big(|Var_{\eta\in[-h,0]}\Phi(\eta)| + |\Phi(-h)|\big)E_{\alpha}\big(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*}\big) \\ &\quad + \alpha_*^{-1}t^{\alpha_*}C_0\|F(t)\|\big(1 + 2E_{\alpha}(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*})\big) \end{split}\tag{35}$$

**Proof.** From (7) it follows that $\Phi_1(t,s) = I$ for $t \in (-\infty, s]$, and $\Phi_2(t,s) = I$ for $s \in [-h, 0]$, $t \in [s, 0]$.

Let $s \in \overline{\mathbb{R}}_+$ be an arbitrary fixed number and let $C(t,s)$ be the solution for $t \in (s, \infty)$ of (7) with initial condition $C(t,s) = \Phi_1(t,s)$, $t \in (-\infty, s]$. Then from (7) and (8) it follows that

$$C(t,s) = I + \Gamma^{-1}(\alpha)\int_s^t I_{\alpha-1}(t-\eta)\int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(\eta,\theta)]C(\eta+\theta,s)\,\mathrm{d}\eta\tag{36}$$

and respectively for *<sup>s</sup>* ∈ [−*h*, 0], *<sup>t</sup>* ∈ R<sup>+</sup> we have that

$$T_{-h}(t,s) = I + \Gamma^{-1}(\alpha)\int_s^t I_{\alpha-1}(t-\eta)\int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(\eta,\theta)]T_{-h}(\eta+\theta,s)\,\mathrm{d}\eta\tag{37}$$

where *<sup>T</sup>*−*h*(*t*,*s*) = <sup>Φ</sup>2(*t*,*s*), *<sup>t</sup>* ∈ (−∞, 0].

For arbitrary fixed $s \in \overline{\mathbb{R}}_+$, since $C(t,s)$ is nonnegative and nondecreasing in $t$, from (36) and (16) we obtain that

$$\begin{split} \|C(t,s)\| &= \sup_{\xi\in[0,t]}|C(\xi,s)| \le 1 + C_0\sup_{\xi\in[0,t]}\int_s^{\xi}|I_{\alpha-1}(\xi-\eta)|\,\Big|\int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(\eta,\theta)]C(\eta+\theta,s)\Big|\,\mathrm{d}\eta \\ &\le 1 + C_0\|\bar{\mathcal{U}}(t,0)\|\sup_{\xi\in[0,t]}\int_0^{\xi}(\xi-\eta)^{\alpha_*-1}\sup_{\eta+\theta\in[-h,\xi]}|C(\eta+\theta,s)|\,\mathrm{d}\eta \\ &\le 1 + C_0\|\bar{\mathcal{U}}(t,0)\|\sup_{\xi\in[0,t]}\int_0^{\xi}(\xi-\eta)^{\alpha_*-1}\|C(\eta,s)\|\,\mathrm{d}\eta \\ &\le 1 + C_0\|\bar{\mathcal{U}}(t,0)\|\int_0^{t}(t-\eta)^{\alpha_*-1}\|C(\eta,s)\|\,\mathrm{d}\eta \end{split}\tag{38}$$

and then in virtue of Corollary 1 we have that

$$\|C(t,s)\| \le E_{\alpha}\big(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*}\big) \le E_{\alpha}\big(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*}\big),\quad s \in \mathbb{R}_+.\tag{39}$$

In an analogous way, when $T_{-h}(t,s)$ is a solution of (8) with initial condition $T_{-h}(t,s) = \Phi_2(t,s)$, $t \in (-\infty, 0]$, since $T_{-h}(t,s)$ is nonnegative and nondecreasing in $t$, from (16) and (37) we obtain

$$\begin{split} \|T_{-h}(t,s)\| &\le 1 + C_0\sup_{\xi\in[0,t]}\int_s^{\xi}|I_{\alpha-1}(\xi-\eta)|\,\Big|\int_{-h}^{0}[\mathrm{d}_{\theta}\mathcal{U}(\eta,\theta)]T_{-h}(\eta+\theta,s)\Big|\,\mathrm{d}\eta \\ &\le 1 + C_0\|\bar{\mathcal{U}}(t,0)\|\sup_{\xi\in[0,t]}\int_0^{\xi}(\xi-\eta)^{\alpha_*-1}\sup_{\theta\in[-h,0]}|T_{-h}(\eta+\theta,s)|\,\mathrm{d}\eta \\ &\le 1 + C_0\|\bar{\mathcal{U}}(t,0)\|\sup_{\xi\in[0,t]}\int_0^{\xi}(\xi-\eta)^{\alpha_*-1}\|T_{-h}(\eta,s)\|\,\mathrm{d}\eta \\ &\le 1 + C_0\|\bar{\mathcal{U}}(t,0)\|\int_0^{t}(t-\eta)^{\alpha_*-1}\|T_{-h}(\eta,s)\|\,\mathrm{d}\eta \end{split}$$

and hence in virtue of Corollary 1 we have

$$\|T_{-h}(t,s)\| \le E_{\alpha}\big(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*}\big) \le E_{\alpha}\big(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*}\big),\quad s \in [-h, 0].\tag{40}$$

Since for fixed *<sup>t</sup>* the matrix function *C*¯(*t*,*s*) is nondecreasing for *<sup>s</sup>* <sup>∈</sup> [0, *<sup>T</sup>*] , then taking into account (39) and (40) we have that

$$\|\bar{C}(T,T)\| = \|C(T,T) - C(T,0)\| \le \|C(T,T)\| + \|C(T,0)\| \le 2E_{\alpha}\big(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*}\big).\tag{41}$$

Then from (40) and (41) we obtain that for every *t* ∈ *JT* the estimation (35) holds.

**Remark 4.** *Please note that if $\|\Phi\|\,\|F(T)\| > 0$, then* (35) *can be rewritten in the form*

$$\begin{split} \|X_{\Phi}^F(t)\| &\le \max\big(\|\Phi\|, |Var_{\eta\in[-h,0]}\Phi(\eta)|\big)\Big[2E_{\alpha}\big(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*}\big) \\ &\quad + \alpha_*^{-1}t^{\alpha_*}C_0 C_{\Phi}\big(1 + 2E_{\alpha}(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*})\big)\Big] \end{split}\tag{42}$$

#### **5. Finite-Time Stability Results**

In this section, we study the finite-time stability (FTS) properties of the system (1) with the initial condition (2), as an application of the different a priori estimates obtained in the previous sections. In addition, we study these properties for different types of initial functions. Special attention is paid to the case when $\|\Phi\| = 0$ too.

First, we start with the homogeneous case, i.e., the IP (4), (2).

**Theorem 7.** *Let $T \in \mathbb{R}_+$ be an arbitrary fixed number and the following conditions be fulfilled:*

*1. The conditions of Corollary 2 hold.*

*2. There exist numbers $\varepsilon \ge \delta > 0$ such that the following inequality holds*

$$\delta E_{\alpha}\big(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*}\big) \le \varepsilon\tag{43}$$

*Then for every initial function $\Phi \in \tilde{C}$ with $\|\Phi\| < \delta$, the corresponding unique solution $X(t)$ of the IP* (1)*,* (2) *(in this case the IP* (4)*,* (2)*) is finite-time stable with respect to $\{0, J_T, \delta, \varepsilon, h\}$.*

**Proof.** Let $\Phi \in \tilde{C}$ with $\|\Phi\| < \delta$ be an arbitrary initial function. If $\max(\|X(T)\|, \|\Phi\|) = \|\Phi\|$, then the statement of the theorem holds. The nontrivial case is obviously when $\max(\|X(T)\|, \|\Phi\|) > \|\Phi\|$. In this case, condition 1 implies that Corollary 2 holds, and from (18) for $t \in J_T$ we obtain that

$$\|X(t)\| \le \|\Phi\|\,E_{\alpha}\big(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*}\big)\tag{44}$$

and hence from (43) and (44) it follows that

$$\|X(t)\| \le \|\Phi\|\,E_{\alpha}\big(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*}\big) < \delta E_{\alpha}\big(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*}\big) \le \varepsilon$$

which completes the proof.

The next theorem considers a special nonhomogeneous case of the system (1) when $\|\Phi\| = 0$.

**Theorem 8.** *Let the following conditions be fulfilled:*

*1. The conditions of Theorem 2 hold and $\|\Phi\| = 0$.*

*2. There exist numbers $\varepsilon \ge \delta > 0$ such that if $\|F(T)\| < \delta$ then the following inequality holds*

$$\delta\alpha_*^{-1}C_0 T^{\alpha_*}E_{\alpha}\big(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*}\big) \le \varepsilon\tag{45}$$

*Then the corresponding unique solution X*(*t*) *of the IP* (1)*,* (2) *is finite-time stable with respect to* {0, *JT*, *δ*,*ε*, *h*}*.*

**Proof.** Let us consider the case when $\max(\|X(T)\|, \|\Phi\|) > \|\Phi\|$. Since Corollary 3 holds, from (19) and (45) for $t \in J_T$ it follows that

$$\begin{split} \|X(t)\| &\le \alpha_*^{-1}C_0 t^{\alpha_*}\|F(t)\|\,E_{\alpha}\big(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*}\big) \\ &\le \delta\alpha_*^{-1}C_0 T^{\alpha_*}E_{\alpha}\big(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*}\big) \le \varepsilon \end{split}\tag{46}$$

Thus, from (46) it follows that the corresponding unique solution $X(t)$ of the IP (1), (2) is finite-time stable with respect to $\{0, J_T, \delta, \varepsilon, h\}$ for every locally bounded $F(t) \in L_1^{loc}(\mathbb{R}_+, \mathbb{R}^n)$.

**Theorem 9.** *Let the following conditions be fulfilled:*

*1. The conditions of Theorem 2 hold and $\|\Phi\| > 0$.*

*2. There exist numbers $\varepsilon \ge \delta > 0$ such that if $\|\Phi\| < \delta$ then the following inequality holds*

$$\delta\big(1 + \alpha_*^{-1}C_0 C_{\Phi}T^{\alpha_*}\big)E_{\alpha}\big(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*}\big) \le \varepsilon\tag{47}$$

*Then for every initial function $\Phi \in \tilde{C}$ with $\|\Phi\| \in (0, \delta)$, the corresponding unique solution $X(t)$ of the IP* (1)*,* (2) *is finite-time stable with respect to $\{0, J_T, \delta, \varepsilon, h\}$.*

**Proof.** Let $\Phi \in \tilde{C}$ with $\|\Phi\| \in (0, \delta)$ be an arbitrary initial function and assume that $\max(\|X(T)\|, \|\Phi\|) > \|\Phi\|$. Then, since Theorem 2 holds, from (23) and (47) for $t \in J_T$ it follows that

$$\begin{split} \|X(t)\| &\le \|\Phi\|\big(1 + \alpha_*^{-1}C_0 C_{\Phi}t^{\alpha_*}\big)E_{\alpha}\big(\|\bar{\mathcal{U}}(t,0)\|\,C_0\,\Gamma(\alpha_*)\,t^{\alpha_*}\big) \\ &\le \delta\big(1 + \alpha_*^{-1}C_0 C_{\Phi}T^{\alpha_*}\big)E_{\alpha}\big(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*}\big) \le \varepsilon \end{split}\tag{48}$$

Thus, from (48) it follows that for every initial function $\Phi \in \tilde{C}$ with $\|\Phi\| \in (0, \delta)$ the corresponding unique solution $X(t)$ of the IP (1), (2) is finite-time stable with respect to $\{0, J_T, \delta, \varepsilon, h\}$.

Below we present FTS results based on estimates obtained via different kinds of integral representations of the solutions and the superposition principle.

**Theorem 10.** *Let the following conditions be fulfilled:*

*1. The conditions of Theorem 4 hold.*

*2. There exist numbers $\varepsilon \ge \delta > 0$ such that the following inequality holds*

$$\delta\alpha_*^{-1}C_0 T^{\alpha_*}\big(1 + \|\bar{C}(T,T)\|\big) \le \varepsilon\tag{49}$$

*Then for the initial function $\Phi \in \tilde{C}$ with $\|\Phi\| = 0$ and every locally bounded function $F(t) \in L_1^{loc}(\mathbb{R}_+, \mathbb{R}^n)$ with $\|F(T)\| < \delta$, the corresponding unique solution $X(t)$ of the IP* (1)*,* (2) *is finite-time stable with respect to $\{0, J_T, \delta, \varepsilon, h\}$.*

**Proof.** Theorem 4 implies that for each *t* ∈ *JT* the inequality (24) holds and then from (24) and (49) for every *t* ∈ *JT* it follows that

$$\|X(t)\| \le \alpha_*^{-1}C_0 t^{\alpha_*}\|F(t)\|\big(1 + \|\bar{C}(t,t)\|\big) < \delta\alpha_*^{-1}C_0 t^{\alpha_*}\big(1 + \|\bar{C}(t,t)\|\big) \le \delta\alpha_*^{-1}C_0 T^{\alpha_*}\big(1 + \|\bar{C}(T,T)\|\big) \le \varepsilon$$

which completes the proof.

**Theorem 11.** *Let the following conditions be fulfilled:*

*1. The conditions of Theorem 5 hold.*

*2. There exist numbers <sup>ε</sup>* ≥ *<sup>δ</sup>* > <sup>0</sup> *such that if* max(|Φ(−*h*)|, |*Varη*∈[−*h*,0]Φ(*η*)|) < *<sup>δ</sup> then the following inequality holds*

$$\delta\Big(\sup_{s\in[-h,0]}\|T_{-h}(T,s)\| + \|T_{-h}(T,-h)\|\Big) \le \varepsilon\tag{50}$$

*Then the corresponding unique solution X*(*t*) *of the IP* (1)*,* (2) *is finite-time stable with respect to* {0, *JT*, *δ*,*ε*, *h*}*.*

**Proof.** Theorem 5 implies that for each $t \in J_T$ the inequality (29) holds, and then from (29) and (50), in the same way as above, for every $t \in J_T$ we obtain that

$$\begin{aligned} \|X(t)\| &\le |Var_{\eta\in[-h,0]}\Phi(\eta)|\sup_{s\in[-h,0]}\|T_{-h}(t,s)\| + |\Phi(-h)|\,\|T_{-h}(t,-h)\| \\ &\le \delta\Big(\sup_{s\in[-h,0]}\|T_{-h}(T,s)\| + \|T_{-h}(T,-h)\|\Big) \le \varepsilon \end{aligned}$$

and hence the corresponding unique solution *X*(*t*) of the IP (1), (2) is finite-time stable with respect to {0, *JT*, *δ*,*ε*, *h*}.

**Corollary 6.** *Let the following conditions be fulfilled:*

*1. The conditions of Corollary 4 hold.*

*2. There exist numbers ε* ≥ *δ* > 0 *such that if* |Φ(−*h*)| < *δ then the following inequality holds*

$$\delta\,\|T_{-h}(T,-h)\| \le \varepsilon\tag{51}$$

*Then the corresponding unique solution X*(*t*) *of the IP* (1)*,* (2) *is finite-time stable with respect to* {0, *JT*, *δ*,*ε*, *h*}*.*

**Proof.** Since $\|\Phi\| = |\Phi_0| = |\Phi(-h)| < \delta$, using (32) and (51) we obtain

$$\|X(t)\| \le |\Phi(-h)|\,\|T_{-h}(t,-h)\| = |\Phi_0|\,\|T_{-h}(t,-h)\| \le \delta\,\|T_{-h}(t,-h)\| \le \delta\,\|T_{-h}(T,-h)\| \le \varepsilon$$

and then the result follows from Theorem 11.

**Remark 5.** *The FTS results obtained in Theorem 11 and Corollary 6 are new even in the cases considered in [25], where the initial function $\Phi \in C^1([-h,0], \mathbb{R}^n)$. Our results are more accurate not only in the case when the initial function $\Phi \in BV([-h,0], \mathbb{R}^n)$ has a finite set of jump points $S_{\Phi} \ne \emptyset$ (i.e., $\Phi$ is not continuous), but also when $\Phi$ is continuous.*

*We illustrate this fact with two simple examples:*

*Let $\Phi(-h) = (0.75, 0)^T$ and $\Phi(t) = (1, 0)^T$ for $t \in (-h, 0]$. Then $|\Phi(-h)| = 0.75$, $\|\Phi\| = 1$, $|Var_{\eta\in[-h,0]}\Phi(\eta)| = 0.25$ and $\max(|Var_{\eta\in[-h,0]}\Phi(\eta)|, |\Phi(-h)|) = |\Phi(-h)| = 0.75 < \|\Phi\| = 1$.*

*Let $h = 1$, $\Phi(t) = (0.4t + 1, 0)^T$ for $t \in (-1, 0]$ and $\Phi(-1) = (0.6, 0)^T$. Then $|Var_{\eta\in[-1,0]}\Phi(\eta)| = 0.4$, $\|\Phi\| = 1$ and hence $\max(|Var_{\eta\in[-1,0]}\Phi(\eta)|, |\Phi(-1)|) = 0.6 < \|\Phi\| = 1$.*

*These examples show that we can establish FTS in some cases where the conditions presented in [25] are not directly applicable.*
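The quantities appearing in the two examples above are easy to verify numerically. Below is a minimal sketch (plain Python; the helper functions and the grid approximation of the total variation are ours, not part of the paper) that computes the sup-norm, the value at $-h$, and the total variation of both initial functions:

```python
import math

def total_variation(phi, a, b, n=10000):
    """Approximate the total variation of a vector-valued phi on [a, b]
    by summing Euclidean norms of increments over a fine partition."""
    pts = [a + (b - a) * k / n for k in range(n + 1)]
    var, prev = 0.0, phi(pts[0])
    for t in pts[1:]:
        cur = phi(t)
        var += math.dist(prev, cur)
        prev = cur
    return var

def sup_norm(phi, a, b, n=10000):
    """Approximate sup_{t in [a, b]} |phi(t)|_2 on a grid."""
    return max(math.dist((0.0, 0.0), phi(a + (b - a) * k / n))
               for k in range(n + 1))

# First example (taking h = 1): phi(-h) = (0.75, 0), phi(t) = (1, 0) on (-h, 0].
phi1 = lambda t: (0.75, 0.0) if t == -1.0 else (1.0, 0.0)
# Second example: phi(t) = (0.4 t + 1, 0) on (-1, 0], phi(-1) = (0.6, 0).
phi2 = lambda t: (0.4 * t + 1.0, 0.0)

print(sup_norm(phi1, -1, 0), total_variation(phi1, -1, 0))  # 1.0, 0.25 (jump at -h)
print(sup_norm(phi2, -1, 0), total_variation(phi2, -1, 0))  # 1.0, 0.4 (continuous)
```

The jump of the first initial function at $-h$ is captured by a single increment of the partition, while the second function is continuous and its variation comes entirely from the linear part.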

**Remark 6.** *The FTS result for the general case $\|\Phi\|\,\|F(T)\| > 0$ needs some preliminary comments. It is clear that the estimates* (32) *and* (33) *will be essentially used, but to obtain a practically applicable estimate we need to solve (clarify) two problems:*

**(a)** *First, we need to clarify which impact is leading for the process: the hereditary impact of the process expressed by $\|\Phi\|$, the impact of the outer perturbations expressed by $\|F(T)\|$, or the combination of both factors expressed by the ratio $C_{\Phi} = \|\Phi\|^{-1}\|F(T)\|$.*

**(b)** *Second, an explicit estimate is needed in the general case for the fundamental matrix $C(t,s)$, as well as for the matrix $T_{-h}(t,s)$.*

*Concerning point* **(a)***, it is clear that a reasonable answer can be given only on the basis of real empirical data from the process described by the mathematical model. From a mathematical point of view, as mentioned above in the construction of the proofs, we must limit the impact of $\|\Phi\|$ and $\|F(T)\|$ to linear, or no more than power-law, growth as on the right side of the estimate* (23)*, and avoid the highly nonlinear impact of $C_{\Phi} = \|\Phi\|^{-1}\|F(T)\|$ when it is involved as an argument of the Mittag-Leffler function $E_{\alpha}(\cdot)$ as in* (20)*.*

*Concerning point* **(b)***, it is possible to obtain the needed estimates in the general case; for example, we can use the estimates obtained in the previous sections.*

**Theorem 12.** *Let $T \in \mathbb{R}_+$ be an arbitrary fixed number and the following conditions be fulfilled:*

*1. The conditions (S) hold.*

*2. The function $F(t) \in L_1^{loc}(\mathbb{R}_+, \mathbb{R}^n)$ is locally bounded.*

*3. The initial function* <sup>Φ</sup> <sup>∈</sup> *BV*([−*h*, 0], <sup>R</sup>*n*) <sup>∩</sup> *<sup>C</sup>*˜ *and its Lebesgue decomposition does not include a singular term.*

*4. $\|\Phi\|\,\|F(T)\| > 0$ and there exist numbers $\varepsilon \ge \delta > 0$ such that if $\max(|\Phi(-h)|, |Var_{\eta\in[-h,0]}\Phi(\eta)|) < \delta$ then the following inequality holds*

$$\delta\Big[\sup_{s\in[-h,0]}\|T_{-h}(T,s)\| + \|T_{-h}(T,-h)\| + \alpha_*^{-1}T^{\alpha_*}C_0 C_{\Phi}\big(1 + \|\bar{C}(T,T)\|\big)\Big] \le \varepsilon\tag{52}$$

*Then the corresponding unique solution X*(*t*) *of the IP* (1)*,* (2) *for every t* ∈ *JT is finite-time stable with respect to* {0, *JT*, *δ*,*ε*, *h*}*.*

**Proof.** Condition 4 of the theorem implies that the estimate (34) holds. Then from (34) and (52) for every *t* ∈ *JT* it follows

$$\begin{split} \|X(t)\| &\le |Var_{\eta\in[-h,0]}\Phi(\eta)|\sup_{s\in[-h,0]}\|T_{-h}(t,s)\| + |\Phi(-h)|\,\|T_{-h}(t,-h)\| \\ &\quad + \alpha_*^{-1}t^{\alpha_*}C_0\|F(t)\|\big(1 + \|\bar{C}(t,t)\|\big) \\ &\le \delta\Big[\sup_{s\in[-h,0]}\|T_{-h}(t,s)\| + \|T_{-h}(t,-h)\| + \alpha_*^{-1}t^{\alpha_*}C_0 C_{\Phi}\big(1 + \|\bar{C}(t,t)\|\big)\Big] \\ &\le \delta\Big[\sup_{s\in[-h,0]}\|T_{-h}(T,s)\| + \|T_{-h}(T,-h)\| + \alpha_*^{-1}T^{\alpha_*}C_0 C_{\Phi}\big(1 + \|\bar{C}(T,T)\|\big)\Big] \le \varepsilon, \end{split}$$

which completes the proof.

**Corollary 7.** *Let $T \in \mathbb{R}_+$ be an arbitrary fixed number and the following conditions be fulfilled:*

*1. The conditions (S) hold.*

*2. The function $F(t) \in L_1^{loc}(\mathbb{R}_+, \mathbb{R}^n)$ is locally bounded.*

*3. The initial function* <sup>Φ</sup> <sup>∈</sup> *BV*([−*h*, 0], <sup>R</sup>*n*) <sup>∩</sup> *<sup>C</sup>*˜ *and its Lebesgue decomposition does not include a singular term.*

*4. $\|\Phi\|\,\|F(T)\| > 0$ and there exist numbers $\varepsilon \ge \delta > 0$ such that if $\max(|\Phi(-h)|, |Var_{\eta\in[-h,0]}\Phi(\eta)|) < \delta$ then the following inequality holds*

$$\delta\Big[2E_{\alpha}\big(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*}\big) + \alpha_*^{-1}T^{\alpha_*}C_0 C_{\Phi}\big(1 + 2E_{\alpha}(\|\bar{\mathcal{U}}(T,0)\|\,C_0\,\Gamma(\alpha_*)\,T^{\alpha_*})\big)\Big] \le \varepsilon\tag{53}$$

*Then the corresponding unique solution X*(*t*) *of the IP* (1)*,* (2) *for every t* ∈ *JT is finite-time stable with respect to* {0, *JT*, *δ*,*ε*, *h*}*.*

**Proof.** The statement follows from Theorem 12 and Theorem 6.

#### **6. Examples and Comments**

**Remark 7.** *From a practical point of view, it is important to establish a sharp upper bound of the constant $\alpha_*^{-1}C_0$ appearing in all estimates except* (29)*, and to answer the question whether the constant $\alpha_*^{-1}C_0$ attains its upper bound.*

*Let us consider the case when $\alpha_* = \alpha_M$. Then we have that $\alpha_*^{-1}C_0 = \alpha_M^{-1}C_0 = \Gamma^{-1}(1+\alpha_M) \le \Gamma^{-1}(z_{min})$. Thus, if $\alpha_M = z_{min} - 1$, then $\alpha_*^{-1}C_0$ attains its upper bound, namely $\alpha_*^{-1}C_0 = \Gamma^{-1}(z_{min}) \approx 1.1292$. Please note that in the partial case when all orders of differentiation coincide (i.e., $\alpha_1 = \cdots = \alpha_n = \alpha$), all estimates can be essentially simplified. For example, in this case we have that $\alpha_*^{-1}C_0 = \Gamma^{-1}(1+\alpha) \le \Gamma^{-1}(z_{min})$ and $C_0\Gamma(\alpha_*) = 1$.*
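The upper bound in this remark can be checked numerically. The sketch below (plain Python, standard-library `math.gamma`; the variable names are ours) scans $\Gamma(1+\alpha)$ over $\alpha \in (0,1)$, which locates the minimum of $\Gamma$ on $(1,2)$ and evaluates the resulting upper bound for $\alpha_*^{-1}C_0$:

```python
import math

# Scan Gamma(1 + alpha) for alpha in (0, 1); its minimum gives the
# largest possible value of the constant 1 / Gamma(1 + alpha_M).
alphas = [k / 100000 for k in range(1, 100000)]
alpha_min = min(alphas, key=lambda a: math.gamma(1 + a))

z_min = 1 + alpha_min          # argmin of Gamma on (1, 2), approx. 1.46163
bound = 1 / math.gamma(z_min)  # upper bound for alpha_*^{-1} C_0

print(f"alpha_M = z_min - 1 ~ {alpha_min:.5f}")  # ~ 0.46163
print(f"Gamma^(-1)(z_min) ~ {bound:.5f}")        # ~ 1.12917
```

The grid value agrees with the known minimizer of the gamma function, $z_{min} \approx 1.46163$, for which $\Gamma(z_{min}) \approx 0.88560$.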

**Remark 8.** *First, it must be noted that different norms are used in the commented works. In [24,25] the so-called 1-norm is used (i.e., for $W = \{w_{ij}\}_{i,j} \in \mathbb{R}^{n\times n}$ the matrix norm $|W|_1 = \max_{j}\sum_{i=1}^{n}|w_{ij}|$), while in [26,27], as well as in our work, the spectral norm is used. A direct comparison shows that condition* (43) *in our work, based on the estimate* (18)*, is more accurate in comparison with condition (9) in Theorem 4.1 of [24], proved via the integral representation approach, and condition (16) in Theorem 3.2 of [27], proved by Gronwall's approach, even in the partial cases considered in these works.*

*Please note that in the partial case when $\Phi$ is constant, both condition* (43) *and condition (9) in [24] coincide. In this case the same results can be established by using* (50)*, obtained via the integral representation* (30)*. In the homogeneous case ($\gamma = 0$) of the partial cases of the system* (4) *considered in [26] (variable matrices and one variable delay), our condition* (43) *coincides with condition (5) of Theorem 1 in [26], proved by Gronwall's approach.*

Below, on the basis of the example considered in [24], we establish that, generally speaking, the results obtained via the integral representation approach can be more accurate than those obtained via Gronwall's approach, but that the results depend essentially on the choice of norm and on the construction of the proofs.

**Example 1.** *[24] Consider*

$$\begin{cases} D\_{0+}^{a}X(t) = AX(t-\sigma), & t>0\\ X(t) = \Phi(t), & t \in [-\sigma, 0] \end{cases} \tag{54}$$

*where $A = \begin{pmatrix} 0.2 & 0 \\ 0 & 0.8 \end{pmatrix}$, $\alpha = 0.2$, $\sigma = 0.2$, $T = 0.8$, $\Phi(t) = (0.1, 0.2)^T$.*

*The system* (54) *is a partial case of* (4) *with: $n = 2$, $\alpha_1 = \alpha_2 = \alpha = 0.2$, $U_{AC}(t,\theta) = U_S(t,\theta) \equiv \Theta$, $U_J(t,\theta) = A_1 H(\theta + \sigma)$, $A_1 = A$, $A_0 = \Theta$, $\sigma = h = 0.2$, $\|\bar{\mathcal{U}}(T,0)\| = \|A\|_1 = \|A\|_2 = 0.8$, $\|\Phi\|_2 = |\Phi(-0.2)|_2 = 0.2236$.*

*Using Wolfram Mathematica, we obtain* $|\Phi(-0.2)|\,E_{0.2}(0.8 \cdot 0.8^{0.2}) = 0.2236 \cdot 4.15554 = 0.9292$*, and hence* (54) *is finite-time stable with respect to* $\{0, J_T, \delta, \varepsilon, \sigma\}$ *for* $\varepsilon \ge 0.9292$*. The compared results are given in Table 1 below:*

**Table 1.** Compared Results.
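The Mittag-Leffler value used above is easy to reproduce without Mathematica. A minimal sketch via the truncated power series $E_\alpha(z)=\sum_{k\ge 0} z^k/\Gamma(\alpha k+1)$ (the helper name and truncation length are our choices, not the authors' code):

```python
import math

def mittag_leffler(alpha: float, z: float, terms: int = 200) -> float:
    """Truncated power series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

phi_norm = 0.2236                      # spectral norm of Phi(-0.2)
e_val = mittag_leffler(0.2, 0.8 * 0.8**0.2)
bound = phi_norm * e_val               # finite-time stability threshold
print(e_val, bound)                    # approx 4.1555 and 0.9292
```

For $|z| < 1$ the series converges quickly, so 200 terms is far more than needed here.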


**Remark 9.** *Please note that the results depend essentially on the norm used, and we can show that the spectral norm brings some advantages.*

*For example, for the initial function* $\Phi(t) = \begin{pmatrix} 0.222 \\ 0.2 \end{pmatrix}$ *we have* $\|\Phi\|_1 = 0.422$ *and* $\|\Phi\|_2 = 0.299$*, and for* $\varepsilon = 1.2882$ *the system* (54) *is FTS with respect to the spectral norm, which cannot be established using the 1-norm. The same remark also applies to the matrix* $A = \begin{pmatrix} 0.2 & 0 \\ 0 & 0.8 \end{pmatrix}$*. Since* $A$ *is a diagonal matrix,* $\|A\|_1 = \|A\|_2 = 0.8$*, but if, for example, we take* $\bar A = \begin{pmatrix} 0.2 & 0.3 \\ 0 & 0.8 \end{pmatrix}$*, then* $\|\bar A\|_1 = 1.1$ *while* $\|\bar A\|_2 = 0.85742$*; if we then use some of the proved estimations instead of the direct calculation which we present, the differences between the estimations will increase.*
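These norm values can be verified directly; a short sketch for the 2×2 case (the closed-form singular-value computation is standard; the helper names are ours):

```python
import math

def one_norm(A):
    """Maximum absolute column sum of a 2x2 matrix."""
    return max(abs(A[0][j]) + abs(A[1][j]) for j in range(2))

def spectral_norm(A):
    """Largest singular value of a 2x2 matrix via eigenvalues of A^T A."""
    a = A[0][0]**2 + A[1][0]**2              # (A^T A)[0][0]
    c = A[0][1]**2 + A[1][1]**2              # (A^T A)[1][1]
    b = A[0][0]*A[0][1] + A[1][0]*A[1][1]    # off-diagonal entry
    return math.sqrt(((a + c) + math.sqrt((a - c)**2 + 4*b**2)) / 2)

A_bar = [[0.2, 0.3], [0.0, 0.8]]
phi = (0.222, 0.2)
print(one_norm(A_bar), spectral_norm(A_bar))        # 1.1 and approx 0.85742
print(abs(phi[0]) + abs(phi[1]), math.hypot(*phi))  # 0.422 and approx 0.299
```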

*One direct calculation via the integral representation established in [24] gives sharp upper bounds for the 1-norm and the spectral norm of the state vector at* $T = 0.8$: $\|X(0.8)\|_1 = 0.95702$ *and* $\|X(0.8)\|_2 = 0.84059$*. Namely, according to Theorem 3.2 in [24], the solution of* (54) *has the representation* $X(t) = E_{\sigma}^{Bt^{\alpha}}\Phi(-\sigma)$*, where* $\Phi$ *is a constant vector and*

$$E_{\sigma}^{Bt^{\alpha}} = I + \sum_{k=1}^{\infty} B^{k}\,\frac{(t-(k-1)\sigma)^{k\alpha}}{\Gamma(k\alpha+1)}\,H(t-(k-1)\sigma), \quad t \in \bar{\mathbb{R}}^{+},$$

*with* $E_{\sigma}^{Bt^{\alpha}} = \Theta$ *for* $t < -\sigma$ *and* $E_{\sigma}^{Bt^{\alpha}} = I$ *for* $-\sigma \le t \le 0$*; this is the delayed matrix with Mittag-Leffler functions introduced in the same work. For the values in the example above we have that*

$$X(t) = E\_{0.2}^{At^{0.2}} \Phi(-0.2) = \begin{pmatrix} E\_{0.2}^{0.2t^{0.2}} & 0\\ 0 & E\_{0.2}^{0.8t^{0.2}} \end{pmatrix} \begin{pmatrix} 0.1\\ 0.2 \end{pmatrix}$$

*where the matrix entries are standard scalar Mittag-Leffler functions. Calculating with Wolfram Mathematica, we obtain*

$$X(0.8) = \begin{pmatrix} 1.25913 & 0\\ 0 & 4.15554 \end{pmatrix} \begin{pmatrix} 0.1\\ 0.2 \end{pmatrix} = \begin{pmatrix} 0.125913\\ 0.8311 \end{pmatrix}$$

*and hence* $\|X(0.8)\|_1 = 0.95702$ *and* $\|X(0.8)\|_2 = 0.84059$*.*
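The two norms of $X(0.8)$ can be cross-checked numerically; a sketch assuming, as in the display above, that the diagonal entries are scalar Mittag-Leffler evaluations:

```python
import math

def mittag_leffler(alpha, z, terms=200):
    """Truncated series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

# Diagonal entries of the matrix at t = 0.8, then X(0.8) = diag * Phi(-0.2)
e11 = mittag_leffler(0.2, 0.2 * 0.8**0.2)
e22 = mittag_leffler(0.2, 0.8 * 0.8**0.2)
x = (0.1 * e11, 0.2 * e22)
norm1 = abs(x[0]) + abs(x[1])          # 1-norm
norm2 = math.hypot(x[0], x[1])         # Euclidean (spectral) norm
print(norm1, norm2)                    # approx 0.95702 and 0.84059
```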

*Finally, we note that the integral representation of the solution of* (54) *proved in Theorem 3.2 in [24] for the case* $\Phi \in C^{1}([-\tau, 0], \mathbb{R}^{n})$ *is a partial case of the integral representation (4.7) in [31], proved for* $\Phi \in BV([-\tau, 0], \mathbb{R}^{n})$*. For the system* (54) *both representations coincide when* $\Phi \in AC([-\tau, 0], \mathbb{R}^{n})$*.*

Analogously to the homogeneous case, consider one partial case of the IP (1), (2) as follows:

**Example 2.** *Consider*

$$\begin{cases} D_{0+}^{\alpha}X(t) = A_0(t)X(t) + A_1(t)X(t - \sigma(t)) + f(t, X(t)), & t > 0, \\ X(t) = \Phi(t), & t \in [-\sigma, 0]. \end{cases} \tag{55}$$

*The system* (55) *is considered in [25] in the case when* $f \in C(\bar{\mathbb{R}}^{+} \times \mathbb{R}^{n}, \mathbb{R}^{n})$, $A_0(t) \equiv \Theta$, $A_1(t) \equiv B \in \mathbb{R}^{n \times n}$, $\sigma(t) \equiv \sigma$ *for* $t \in \bar{\mathbb{R}}^{+}$*. In the same work an example is given to clarify the applicability of the theoretical results, using the following data:* $\alpha_1 = \alpha_2 = \alpha = 0.6$, $\sigma = 0.2$, $T = 0.6$, $\Phi(t) = (t, 2t)^{T}$, $\omega(t) = \psi(t) = 2t^{2}$, $A_0 = \Theta$, $A_1 = \begin{pmatrix} 0.3 & 0 \\ 0 & 0.5 \end{pmatrix}$ *and* $\|f(t, Y)\|_1 \le \omega(t)$ *for all* $t \in [0, T]$ *and* $Y \in \mathbb{R}^{n}$*. Define* $\|F(t)\|_1 = \sup_{Y \in \mathbb{R}^{n}} \|f(t, Y)\|_1 \le 2t^{2}$*. We will use the estimation* (47) *and then apply Theorem 9. In our notations we have:* $\bar U(T, 0) = 0.5$, $\|F(T)\|_2 \le \|F(T)\|_1 = \sup_{Y \in \mathbb{R}^{n}} \|f(T, Y)\|_1 \le \omega(T) = 2T^{2} = 0.72$, $\|\Phi\|_2 = 0.4473$, $C_{\Phi} = \|\Phi\|_2^{-1}\|F(T)\|_1 = 1.61$, $C_0 = \frac{1}{\Gamma(0.6)} = 1.11917$, $T^{0.6} = 0.6^{0.6} = 0.736022$ *and* $E_{0.6}(0.5 \cdot 0.6^{0.6}) = 1.57201$*. Then if* $\delta = \|\Phi\|_2 = 0.4473$ *we obtain* $\|X(T)\|_2 = \|X(0.6)\|_2 = 1.64291$*. Using the same* $\delta = 0.61$ *as in [25], we obtain* $\|X(T)\|_2 = \|X(0.6)\|_2 = 1.89127$*. Then, applying Theorem 9, we obtain that* (55) *is finite-time stable with respect to* $\{0, J_T, \delta, \varepsilon, \sigma\}$ *when* $\varepsilon \ge 1.89127$*. Please note that our result is better than the best result given in Table 1 in [25], and hence our estimation* (47) *is more accurate than the estimations (12) and (13) used for the best results in that table.*

**Example 3.** *Consider*

$$\begin{cases} D_{0+}^{\alpha}X(t) = A_0(t)X(t) + A_1(t)X(t - \sigma(t)) + Dw(t) + f(t, X(t), X(t - \sigma(t)), w(t)), & t > 0, \\ X(t) = \Phi(t), & t \in [-\sigma, 0]. \end{cases} \tag{56}$$

*The IP* (56) *is considered in [26] for* $A_0 = \begin{pmatrix} 0 & 1 \\ -2 & 0 \end{pmatrix}$, $A_1 = \begin{pmatrix} 0 & 0 \\ 3 & 4 \end{pmatrix}$, $D = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, $w(t) \in C(\bar{\mathbb{R}}^{+}, \mathbb{R}^{n})$ *with* $\|w(t)\|_2 = 0.1$, $\alpha = 0.5$, $T = 5$, $\delta = 0.1$ *and* $\sigma(t) = 0.1\sin^{2} t$*. For simplicity we will assume that* $f(t, X(t), X(t - \sigma(t)), w(t)) \equiv 0$, $t \in \bar{\mathbb{R}}^{+}$*. Then via* (47) *we obtain* $\|X(5)\|_2 = 1.95384 \times 10^{106}$*, and hence* (56) *is finite-time stable with respect to* $\{0, J_T, \delta, \varepsilon, \sigma\}$ *when* $\varepsilon \ge 1.95384 \times 10^{106}$*, which coincides with the result calculated by us for this case via condition (5) in [27].*
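The spectral norms feeding this estimate are easy to verify; a small sketch (2×2 singular values via $A^{T}A$; the helper name is ours). Note also that with $\|A_0\|_2 + \|A_1\|_2 = 7$ and $T^{1/2} = \sqrt{5}$, the Mittag-Leffler argument is about $15.65$, and the asymptotic $E_{1/2}(z) \sim e^{z^{2}} = e^{245} \approx 10^{106}$ is consistent with the order of magnitude reported:

```python
import math

def spectral_norm_2x2(A):
    """Largest singular value via the eigenvalues of A^T A."""
    a = A[0][0]**2 + A[1][0]**2
    c = A[0][1]**2 + A[1][1]**2
    b = A[0][0]*A[0][1] + A[1][0]*A[1][1]
    return math.sqrt(((a + c) + math.sqrt((a - c)**2 + 4*b**2)) / 2)

A0 = [[0.0, 1.0], [-2.0, 0.0]]
A1 = [[0.0, 0.0], [3.0, 4.0]]
z = (spectral_norm_2x2(A0) + spectral_norm_2x2(A1)) * math.sqrt(5.0)
print(spectral_norm_2x2(A0), spectral_norm_2x2(A1), z**2 / math.log(10))
# norms 2.0 and 5.0; decimal exponent approx 106.4
```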

#### **7. Conclusions**

As mentioned above, in this work we set out some considerations illustrating our point of view concerning the different sources of impact on finite-time stability. These appear not only as the influence of the aftereffect (the delay effect), described in the mathematical model through the initial function and the fractional derivatives, but it also seems reasonable to take into account the impact of external influences. From a physical point of view, the presence in the model of different kinds of functions *F*(*t*, *X*(*t*), *X_t*(*θ*)), etc., mathematically understood as nonlinear perturbations, can be interpreted as the influence of external forces. Namely, if we apply the formal definition to the nonhomogeneous system (1) when $F(t) \not\equiv \mathbf{0}$ for $t \in J_T$ and $\|\Phi\| = 0$, we obtain a case where the inequality $\|\Phi\| < \delta$ is fulfilled for every $\delta \in \mathbb{R}^{+}$, but this fact is not useful for establishing the possibly existing finite-time stability.

Our attempt to clarify which impact leads the process, the hereditary impact expressed by $\|\Phi\|$, the impact of the outer perturbations expressed by $\|F(T)\|$, or the combination of both factors expressed by the ratio $C_{\Phi} = \|\Phi\|^{-1}\|F(T)\|$, imposes a more detailed study not only of the homogeneous case when $\|F(T)\| = 0$, but also of the important case when $\|\Phi\| = 0$. This reason focused our attention on the case of the nonhomogeneous system with $\|\Phi\| = 0$, and it was very strange for us that we could not find any additional consideration of this case. Please note that conditions of the type "there exists $M \in \mathbb{R}^{+}$ such that $\|\Phi\|^{-1}\|F(T)\| \le M$" are often used without the requirement that $\|\Phi\| \neq 0$.

The result of this study is, in general, a purely mathematical answer: by the construction of the proofs, we must limit the impact of $\|\Phi\|$ and $\|F(T)\|$ to linear, or at most power-law, growth, as on the right side of the estimation (23), and avoid the highly nonlinear impact of $C_{\Phi} = \|\Phi\|^{-1}\|F(T)\|$ when it is involved as an argument of the Mittag-Leffler function, as in the estimation (20).

Our comparison between the two most used approaches leads to the following conclusion: the most accurate estimation can be obtained by direct numerical calculation from the integral representation of the solutions, but before that it is necessary to simplify these representations symbolically, which essentially increases the accuracy of the results (see Example 1).

Since the estimations via Mittag-Leffler functions of the fundamental matrices involved in the integral representation are not accurate enough, generally speaking we cannot unequivocally point to one of the compared methods as better. The examples suggest that this may, in general, not be possible, because it also depends essentially on the possibility of having an explicit representation of the fundamental matrices.

**Author Contributions:** Conceptualization, H.K., M.V., E.M. and A.Z. Writing—Review and Editing, H.K., M.V., E.M. and A.Z. All authors' contributions to the article are equal. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was partially supported by project FP21-FMI-002 of the Scientific Fund of the University of Plovdiv Paisii Hilendarski, Bulgaria. The third author (E.M.) is supported by the Bulgarian Ministry of Education and Science under the National Research Program "Young scientists and postdoctoral students", Stage III-2021/2022.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


#### *Article*

## **Non-Instantaneous Impulsive Boundary Value Problems Containing Caputo Fractional Derivative of a Function with Respect to Another Function and Riemann–Stieltjes Fractional Integral Boundary Conditions**

**Suphawat Asawasamrit 1, Yasintorn Thadang 1, Sotiris K. Ntouyas 2,3 and Jessada Tariboon 1,\***


**Abstract:** In the present article we study existence and uniqueness results for a new class of boundary value problems consisting of non-instantaneous impulses and the Caputo fractional derivative of a function with respect to another function, supplemented with Riemann–Stieltjes fractional integral boundary conditions. The existence of a unique solution is obtained via Banach's contraction mapping principle, while an existence result is established by using the Leray–Schauder nonlinear alternative. Examples illustrating the main results are also constructed.

**Keywords:** impulsive differential equations; fractional impulsive differential equations; instantaneous impulses; non-instantaneous impulses

#### **1. Introduction and Preliminaries**

Fractional calculus is a generalization of classical differentiation and integration to an arbitrary real order. Fractional differential equations have gained much attention in the literature because of their applications to the description of hereditary properties in many fields, such as physics, mechanics, engineering, game theory, stability and optimal control. With the help of fractional calculus, natural phenomena and mathematical models can be described more accurately. Many researchers have shown interest in fractional differential equations, and their theory and applications have been greatly developed. For the basic theory of fractional calculus and fractional differential equations we refer to the monographs [1–8] and the references therein.

The theory of impulsive differential equations arises naturally in biology, physics, engineering, and medicine, where at certain moments systems change their state rapidly. There are two types of impulses. One is instantaneous impulses, in which the duration of the changes is relatively short; the other is non-instantaneous impulses, in which an impulsive action starts abruptly at some point and continues to be active on a finite time interval. Examples of such processes can be found in physics, biology, population dynamics, ecology, pharmacokinetics, and elsewhere. For results with instantaneous impulses see, e.g., the monographs [9–14], the papers [15–19], and the references cited therein. Non-instantaneous impulsive differential equations were introduced by Hernández and O'Regan in [20], who pointed out that instantaneous impulses cannot characterize some processes, such as evolution processes in pharmacotherapy. Some practical problems involving non-instantaneous impulses within the area of psychology have been reviewed in [21]. For some recent works on non-instantaneous impulsive fractional differential equations we refer the reader to [22–25] and the references therein.

**Citation:** Asawasamrit, S.; Thadang, Y.; Ntouyas, S.K.; Tariboon, J. Non-Instantaneous Impulsive Boundary Value Problems Containing Caputo Fractional Derivative of a Function with Respect to Another Function and Riemann–Stieltjes Fractional Integral Boundary Conditions. *Axioms* **2021**, *10*, 130. https://doi.org/10.3390/ axioms10030130

Academic Editor: Jorge E. Macías Díaz

Received: 5 June 2021 Accepted: 22 June 2021 Published: 23 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The scope of this investigation is to establish existence results for a new class of boundary value problems consisting of non-instantaneous impulses and the Caputo fractional derivative of a function with respect to another function, supplemented with Riemann–Stieltjes fractional integral boundary conditions of the form

$$\begin{cases} {}_{s_i}D^{\alpha_i}_{g_i}x(t) = f(t, x(t)), & t \in [s_i, t_{i+1}),\quad i = 0, 1, 2, \dots, m, \\ x(t) = \varphi_i(t) + \psi_i(t)x(t_i^-), & t \in [t_i, s_i),\quad i = 1, 2, 3, \dots, m, \\ \beta_1 x(0) + \beta_2 x(T) = \displaystyle\sum_{k=0}^{m} \mu_k \int_{s_k}^{t_{k+1}} \left( {}_{s_k}I^{\gamma_k}_{g_k} x \right)(u)\, dH_k(u). \end{cases} \tag{1}$$

Here ${}_{s_i}D^{\alpha_i}_{g_i}$ is the Caputo fractional derivative of order $\alpha_i \in (0, 1)$ with respect to a function $g_i$, starting at the point $s_i$, over the interval $[s_i, t_{i+1})$; ${}_{s_i}I^{\gamma_i}_{g_i}$ is the Riemann–Liouville fractional integral with respect to the function $g_i$ on $[s_i, t_{i+1})$ of order $\gamma_i > 0$; $\mu_i \in \mathbb{R}$; $H_i$ is a function of bounded variation defining the Riemann–Stieltjes integral on $[s_i, t_{i+1})$; and $f : [s_i, t_{i+1}) \times \mathbb{R} \to \mathbb{R}$, for $i = 0, 1, 2, \dots, m$. (For details on the Riemann–Stieltjes integral we refer to [26].) On the impulsive intervals $[t_i, s_i)$, $\varphi_i$, $\psi_i$, $i = 1, 2, 3, \dots, m$, are given functions. The points

$$0 = s_0 < t_1 < s_1 < t_2 < s_2 < \dots < t_m < s_m < t_{m+1} = T,$$

are fixed in $[0, T]$ and $\beta_1, \beta_2$ are known constants. Note that in problem (1) we have $x(s_i^+) = x(s_i^-)$, and if $\psi_i(t) = 1$, $\varphi_i(t) = 0$ at $t_i$ for all $i = 1, 2, 3, \dots, m$, then $x(t_i^+) = x(t_i^-)$. For $\gamma > 0$, the Riemann–Liouville fractional integral of an integrable function $h : [a, b] \to \mathbb{R}$ with respect to another function $g \in C^{1}([a, b], \mathbb{R})$ such that $g'(t) > 0$ for all $t \in [a, b]$ is defined by [2,27,28]

$$ {}_aI^{\gamma}_{g}h(t) = \frac{1}{\Gamma(\gamma)} \int_a^t \frac{g'(s)h(s)}{[g(t) - g(s)]^{1-\gamma}}\, ds, \tag{2}$$

where Γ is the gamma function. The Riemann–Liouville type fractional derivative of a function *h*, with respect to another function *g* on [*a*, *b*], is defined as

$$ {}_a^{\star}D^{\alpha}_{g}h(t) = D^{n}_{g}\, {}_aI^{n-\alpha}_{g}h(t) = \frac{1}{\Gamma(n-\alpha)}\, D^{n}_{g} \int_a^t \frac{g'(s)h(s)}{[g(t) - g(s)]^{1+\alpha-n}}\, ds, \tag{3}$$

while the Caputo type is defined by

$$ {}_aD^{\alpha}_{g}h(t) = {}_aI^{n-\alpha}_{g}D^{n}_{g}h(t) = \frac{1}{\Gamma(n-\alpha)} \int_a^t \frac{g'(s)D^{n}_{g}h(s)}{[g(t) - g(s)]^{1+\alpha-n}}\, ds, \tag{4}$$

where *D<sup>n</sup> <sup>g</sup>* <sup>=</sup> *Dg* ··· *Dg n*−*times* , *n* − 1 < *α* < *n*, *n* is a positive integer and *Dg* is defined by

$$D_{g} = \frac{1}{g'(t)}\frac{d}{dt}. \tag{5}$$
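For instance, with $g(t) = \log t$ the operator $D_g$ becomes $t\,\frac{d}{dt}$, so $D_g[t^2] = 2t^2$. A quick numerical sketch of this (finite differences; the names are ours):

```python
import math

def D_g(h, g, t, eps=1e-6):
    """Numerical (1/g'(t)) d/dt applied to h, via central differences."""
    dh = (h(t + eps) - h(t - eps)) / (2 * eps)
    dg = (g(t + eps) - g(t - eps)) / (2 * eps)
    return dh / dg

# Hadamard case: g = log, so D_g = t d/dt and D_g[t^2](3) = 2 * 3^2 = 18
val = D_g(lambda t: t * t, math.log, 3.0)
print(val)   # approx 18.0
```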

There are relations between the fractional integrals and derivatives of the Riemann–Liouville and Caputo types which will be used in our investigation (see [2]):

$$ {}_aI^{\gamma}_{g}\left( {}_a^{\star}D^{\gamma}_{g}h \right)(t) = h(t) - \sum_{j=1}^{n} \frac{(g(t) - g(a))^{\gamma-j}}{\Gamma(\gamma - j + 1)}\, D^{n-j}_{g}\left( {}_aI^{n-\gamma}_{g}h \right)(a), \tag{6}$$

and

$$ {}_aI^{\gamma}_{g}\left( {}_aD^{\gamma}_{g}h \right)(t) = h(t) - \sum_{j=0}^{n-1} \frac{(g(t) - g(a))^{j}}{j!}\, D^{j}_{g}h(a). \tag{7}$$

In addition, for *γ*, *δ* > 0, the relation

$$ {}_aI^{\gamma}_{g}(g(t) - g(a))^{\delta} = \frac{\Gamma(\delta + 1)}{\Gamma(\gamma + \delta + 1)}\, (g(t) - g(a))^{\gamma+\delta}, \tag{8}$$

is applied in the main results ([2]). For some recent results we refer the interested reader to the papers [29–31].
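Relation (8) can be sanity-checked numerically from definition (2). A sketch for the Hadamard kernel $g(t) = \log t$ with assumed parameters $\gamma = 1.5$, $\delta = 2$ (chosen so the integrand is non-singular; all names and the quadrature choice are ours):

```python
import math

def g(t): return math.log(t)      # Hadamard case
def dg(t): return 1.0 / t

def frac_integral(gamma, h, a, t, n=4000):
    """Composite-Simpson approximation of (aI^gamma_g h)(t) from eq. (2)."""
    def integrand(s):
        return dg(s) * h(s) * (g(t) - g(s)) ** (gamma - 1.0)
    step = (t - a) / n
    total = integrand(a) + integrand(t)   # both endpoint values vanish here
    for i in range(1, n):
        total += (4 if i % 2 else 2) * integrand(a + i * step)
    return total * step / 3.0 / math.gamma(gamma)

a, t, gamma, delta = 1.0, 2.0, 1.5, 2.0
h = lambda s: (g(s) - g(a)) ** delta
lhs = frac_integral(gamma, h, a, t)
rhs = math.gamma(delta + 1) / math.gamma(gamma + delta + 1) * (g(t) - g(a)) ** (gamma + delta)
print(lhs, rhs)   # both approx 0.04767
```

Both sides agree to quadrature accuracy, as relation (8) predicts.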

Note that (2) reduces to the Riemann–Liouville and Hadamard fractional integrals when *g*(*t*) = *t* and *g*(*t*) = log *t*, respectively, where log(·) = log*e*(·). The Hadamard and Hadamard–Caputo type fractional derivatives are obtained by substituting *g*(*t*) = log *t* in (3) and (4), respectively. Likewise, the Riemann–Liouville and Caputo fractional derivatives are recovered by setting *g*(*t*) = *t* in (3) and (4), respectively. Therefore, problem (1) generates many types, including mixed types, of impulsive fractional differential equations with boundary conditions. Some papers have studied either Hadamard or Caputo fractional derivatives contained in non-instantaneous impulsive equations; see [32–34].

The significance of this study is that it mixes different fractional calculi within the system of non-instantaneous impulsive differential equations. For example, putting *m* = 1, *t*<sub>1</sub> = 1, *s*<sub>1</sub> = 2, *t*<sub>2</sub> = 3, *α*<sub>0</sub> = *α*<sub>1</sub> = 1/2, *g*<sub>0</sub>(*t*) = *t* and *g*<sub>1</sub>(*t*) = log*<sub>e</sub> t* in the first two equations of (1), we obtain

$$\begin{cases} \left(\dfrac{d}{dt}\right)^{\frac{1}{2}}x = f(t, x(t)), & t \in [0, 1),\\[4pt] x(t) = \varphi(t) + \psi(t)x(1^-), & t \in [1, 2),\\[4pt] \left(t\,\dfrac{d}{dt}\right)^{\frac{1}{2}}x = f(t, x(t)), & t \in [2, 3), \end{cases}$$

which is a special case of a mixed Riemann–Liouville and Hadamard fractional impulsive system. In addition, if $H_k(t) = g_k(t)$ for all $t \in [s_k, t_{k+1})$, $k = 0, 1, 2, \dots, m$, then the nonlocal condition in (1) reduces to

$$\beta_1 x(0) + \beta_2 x(T) = \sum_{k=0}^{m} \mu_k \left( {}_{s_k}I^{\gamma_k+1}_{g_k}x \right)(t_{k+1}).$$

If $\varphi_i(t) = 0$, $\psi_i(t) = 1$ and $s_i \to t_i$, $i = 1, 2, 3, \dots, m$, then (1) reduces to a non-impulsive fractional boundary value problem.

In fact, to the best of the authors' knowledge, this is the first paper investigating Riemann–Stieltjes integration acting on fractional integral boundary conditions. Existence and uniqueness results are established for the non-instantaneous impulsive Riemann–Stieltjes fractional integral boundary value problem (1) by using classical fixed point theorems. We make use of Banach's contraction mapping principle to obtain the uniqueness result, while the Leray–Schauder nonlinear alternative is applied to obtain the existence result. The main results are presented in Section 3. In Section 2 we prove an auxiliary result concerning a linear variant of problem (1), which is of great importance in the proofs of the main results. Illustrative examples are also presented.

#### **2. An Auxiliary Result**

Let us set some constants which will be used in our proofs.

$$\Lambda_k = \frac{1}{\Gamma(\gamma_k + 1)} \int_{s_k}^{t_{k+1}} (g_k(u) - g_k(s_k))^{\gamma_k}\, dH_k(u), \quad k = 0, 1, 2, \dots, m, \tag{9}$$

$$\Lambda^{*}(i) = \sum_{j=1}^{i} \left( \prod_{l=j}^{i-1} \psi_{l+1}(s_{l+1}) \right) \varphi_j(s_j), \qquad i = 1, 2, 3, \dots, m, \tag{10}$$

$$\Omega = \beta_1 + \beta_2 \left( \prod_{j=1}^{m} \psi_j(s_j) \right) - \sum_{k=0}^{m} \mu_k \left( \prod_{j=1}^{k} \psi_j(s_j) \right) \Lambda_k. \tag{11}$$

**Lemma 1.** *Let* $\Omega \neq 0$ *and* $h \in C([0, T], \mathbb{R})$. *Then the integral equation equivalent to problem* (1) *can be written as*

$$\begin{split} x(t) = {} & \frac{1}{\Omega}\left(\prod_{j=1}^{i}\psi_j(s_j)\right)\Bigg\{ \sum_{k=0}^{m}\mu_k \Lambda^{*}(k)\Lambda_k - \beta_2\Lambda^{*}(m) \\ & + \sum_{k=0}^{m}\mu_k\sum_{j=1}^{k}\left[\left(\prod_{l=j}^{k}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right]\Lambda_k \\ & + \sum_{k=0}^{m}\mu_k\int_{s_k}^{t_{k+1}} {}_{s_k}I^{\alpha_k+\gamma_k}_{g_k}f_x(u)\, dH_k(u) \\ & - \beta_2\sum_{j=1}^{m+1}\left[\left(\prod_{l=j}^{m}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right]\Bigg\} \\ & + \Lambda^{*}(i) + \sum_{j=1}^{i}\left[\left(\prod_{l=j}^{i}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right] + {}_{s_i}I^{\alpha_i}_{g_i}f_x(t), \end{split} \tag{12}$$

*for* $t \in [s_i, t_{i+1})$, $i = 0, 1, 2, \dots, m$, *and*

$$\begin{split} x(t) = {} & \varphi_i(t) + \psi_i(t)\Bigg[\frac{1}{\Omega}\left(\prod_{j=1}^{i-1}\psi_j(s_j)\right)\Bigg\{ \sum_{k=0}^{m}\mu_k\Lambda^{*}(k)\Lambda_k \\ & + \sum_{k=0}^{m}\mu_k\sum_{j=1}^{k}\left[\left(\prod_{l=j}^{k}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right]\Lambda_k \\ & + \sum_{k=0}^{m}\mu_k\int_{s_k}^{t_{k+1}} {}_{s_k}I^{\alpha_k+\gamma_k}_{g_k}f_x(u)\, dH_k(u) - \beta_2\Lambda^{*}(m) \\ & - \beta_2\sum_{j=1}^{m+1}\left[\left(\prod_{l=j}^{m}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right]\Bigg\} \\ & + \Lambda^{*}(i-1) + \sum_{j=1}^{i}\left(\prod_{l=j}^{i-1}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\Bigg], \end{split} \tag{13}$$

*for* $t \in [t_i, s_i)$, $i = 1, 2, 3, \dots, m$, *where* $f_x(t) = f(t, x(t))$.

**Proof.** For $t \in (s_0, t_1]$, taking the fractional integral with respect to the function $g_0(t)$ of order $\alpha_0 > 0$ from $s_0$ to $t$ in the first equation of (1) and setting $x(0) = A$, we have

$$x(t) = A + {}_{s_0}I^{\alpha_0}_{g_0}f_x(t). \tag{14}$$

In particular, for $t = t_1^-$ we get $x(t_1^-) = A + {}_{s_0}I^{\alpha_0}_{g_0}f_x(t_1^-)$. In the second interval $[t_1, s_1)$, the second equation of (1) gives

$$\begin{aligned} x(t) &= \varphi_1(t) + \psi_1(t)x(t_1^-) \\ &= \varphi_1(t) + A\psi_1(t) + \psi_1(t)\, {}_{s_0}I^{\alpha_0}_{g_0}f_x(t_1^-), \end{aligned} \tag{15}$$

and also $x(s_1) = \varphi_1(s_1) + A\psi_1(s_1) + \psi_1(s_1)\, {}_{s_0}I^{\alpha_0}_{g_0}f_x(t_1^-)$.

In the third interval [*s*1, *t*2), again taking the Riemann–Liouville fractional integral with respect to a function *g*1(*t*) of order *α*1, we obtain

$$\begin{aligned} x(t) &= x(s_1) + {}_{s_1}I^{\alpha_1}_{g_1}f_x(t) \\ &= \varphi_1(s_1) + A\psi_1(s_1) + \psi_1(s_1)\, {}_{s_0}I^{\alpha_0}_{g_0}f_x(t_1^-) + {}_{s_1}I^{\alpha_1}_{g_1}f_x(t), \end{aligned}$$

which has as a particular case $x(t_2^-) = \varphi_1(s_1) + A\psi_1(s_1) + \psi_1(s_1)\, {}_{s_0}I^{\alpha_0}_{g_0}f_x(t_1^-) + {}_{s_1}I^{\alpha_1}_{g_1}f_x(t_2^-)$. In the fourth interval $[t_2, s_2)$, it follows that

$$x(t) = \varphi_2(t) + \psi_2(t)\left[\varphi_1(s_1) + A\psi_1(s_1) + \psi_1(s_1)\, {}_{s_0}I^{\alpha_0}_{g_0}f_x(t_1^-) + {}_{s_1}I^{\alpha_1}_{g_1}f_x(t_2^-)\right].$$

By the previous procedure we can find that

$$x(t) = \begin{cases} A\left(\prod\limits_{j=1}^{i}\psi_j(s_j)\right) + \sum\limits_{j=1}^{i}\left(\prod\limits_{l=j}^{i-1}\psi_{l+1}(s_{l+1})\right)\varphi_j(s_j) + \sum\limits_{j=1}^{i}\left[\left(\prod\limits_{l=j}^{i}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right] + {}_{s_i}I^{\alpha_i}_{g_i}f_x(t), \\ \qquad\qquad t \in [s_i, t_{i+1}),\ i = 0, 1, 2, \dots, m, \\[6pt] \varphi_i(t) + \psi_i(t)\left[A\prod\limits_{j=1}^{i-1}\psi_j(s_j) + \sum\limits_{j=1}^{i-1}\left(\prod\limits_{l=j}^{i-2}\psi_{l+1}(s_{l+1})\right)\varphi_j(s_j) + \sum\limits_{j=1}^{i}\left(\prod\limits_{l=j}^{i-1}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right], \\ \qquad\qquad t \in [t_i, s_i),\ i = 1, 2, 3, \dots, m. \end{cases} \tag{16}$$

Using mathematical induction, we now verify that formula (16) holds. Putting $i = 0$ and $i = 1$ in the first and second parts of (16), respectively, we recover (14) and (15). Assume that the first part of (16) is true for $i = k$, that is, for $t \in [s_k, t_{k+1})$,

$$\begin{aligned} x(t) &= A\left(\prod_{j=1}^{k}\psi_j(s_j)\right) + \sum_{j=1}^{k}\left(\prod_{l=j}^{k-1}\psi_{l+1}(s_{l+1})\right)\varphi_j(s_j) \\ &\quad + \sum_{j=1}^{k}\left[\left(\prod_{l=j}^{k}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right] + {}_{s_k}I^{\alpha_k}_{g_k}f_x(t). \end{aligned}$$

Then for $t \in [t_{k+1}, s_{k+1})$, we have

$$\begin{aligned} x(t) &= \varphi_{k+1}(t) + \psi_{k+1}(t)x(t_{k+1}^-) \\ &= \varphi_{k+1}(t) + \psi_{k+1}(t)\Bigg[A\left(\prod_{j=1}^{k}\psi_j(s_j)\right) + \sum_{j=1}^{k}\left(\prod_{l=j}^{k-1}\psi_{l+1}(s_{l+1})\right)\varphi_j(s_j) \\ &\quad + \sum_{j=1}^{k}\left[\left(\prod_{l=j}^{k}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right] + {}_{s_k}I^{\alpha_k}_{g_k}f_x(t_{k+1}^-)\Bigg], \end{aligned}$$

which implies that the second part of (16) holds. Similarly, suppose that the second part of (16) is satisfied for $i = k$. Then for $t \in [s_k, t_{k+1})$, we obtain

$$\begin{aligned} x(t) &= x(s_k) + {}_{s_k}I^{\alpha_k}_{g_k}f_x(t) \\ &= \varphi_k(s_k) + \psi_k(s_k)\Bigg[A\prod_{j=1}^{k-1}\psi_j(s_j) + \sum_{j=1}^{k-1}\left(\prod_{l=j}^{k-2}\psi_{l+1}(s_{l+1})\right)\varphi_j(s_j) \\ &\quad + \sum_{j=1}^{k}\left(\prod_{l=j}^{k-1}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\Bigg] + {}_{s_k}I^{\alpha_k}_{g_k}f_x(t) \\ &= A\left(\prod_{j=1}^{k}\psi_j(s_j)\right) + \sum_{j=1}^{k}\left(\prod_{l=j}^{k-1}\psi_{l+1}(s_{l+1})\right)\varphi_j(s_j) \\ &\quad + \sum_{j=1}^{k}\left[\left(\prod_{l=j}^{k}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right] + {}_{s_k}I^{\alpha_k}_{g_k}f_x(t). \end{aligned}$$

Thus the first part of (16) is fulfilled. Therefore, relation (16) holds for all $t \in [0, T]$. Now, putting $t = T$ in (16), we have

$$\begin{split} x(T) &= A\left(\prod_{j=1}^{m}\psi_j(s_j)\right) + \sum_{j=1}^{m}\left(\prod_{l=j}^{m-1}\psi_{l+1}(s_{l+1})\right)\varphi_j(s_j) \\ &\quad + \sum_{j=1}^{m}\left[\left(\prod_{l=j}^{m}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right] + {}_{s_m}I^{\alpha_m}_{g_m}f_x(T) \\ &= A\left(\prod_{j=1}^{m}\psi_j(s_j)\right) + \Lambda^{*}(m) + \sum_{j=1}^{m+1}\left[\left(\prod_{l=j}^{m}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right]. \end{split} \tag{17}$$

By taking the Riemann–Liouville fractional integral of order $\gamma_k > 0$ of (16), with respect to the function $g_k(t)$ on $[s_k, t_{k+1})$ for $k = 0, 1, 2, \dots, m$, we obtain

$$\begin{split} {}_{s_k}I^{\gamma_k}_{g_k}x(t) &= A\,\frac{(g_k(t)-g_k(s_k))^{\gamma_k}}{\Gamma(\gamma_k+1)}\left(\prod_{j=1}^{k}\psi_j(s_j)\right) \\ &\quad + \left[\sum_{j=1}^{k}\left(\prod_{l=j}^{k-1}\psi_{l+1}(s_{l+1})\right)\varphi_j(s_j)\right]\frac{(g_k(t)-g_k(s_k))^{\gamma_k}}{\Gamma(\gamma_k+1)} \\ &\quad + \sum_{j=1}^{k}\left[\left(\prod_{l=j}^{k}\psi_l(s_l)\right) {}_{s_{j-1}}I^{\alpha_{j-1}}_{g_{j-1}}f_x(t_j^-)\right]\frac{(g_k(t)-g_k(s_k))^{\gamma_k}}{\Gamma(\gamma_k+1)} + {}_{s_k}I^{\alpha_k+\gamma_k}_{g_k}f_x(t), \end{split}$$

which yields

$$\begin{aligned} \sum_{k=0}^{m}\mu_k\int_{s_k}^{t_{k+1}}\left({}_{s_k}I_{g_k}^{\gamma_k}x\right)(u)\,dH_k(u) &= A\sum_{k=0}^{m}\mu_k\left(\prod_{j=1}^{k}\psi_j(s_j)\right)\Lambda_k + \sum_{k=0}^{m}\mu_k\Lambda^*(k)\Lambda_k \\ &\quad + \sum_{k=0}^{m}\mu_k\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}\psi_j(s_j)\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}f_x(t_j^-)\right]\Lambda_k \\ &\quad + \sum_{k=0}^{m}\mu_k\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}f_x(u)\,dH_k(u). \end{aligned} \tag{18}$$

The condition in (1) with (17) and (18) implies

$$\begin{aligned} A &= \frac{1}{\Omega}\Bigg\{\sum_{k=0}^{m}\mu_k\Lambda^*(k)\Lambda_k + \sum_{k=0}^{m}\mu_k\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}\psi_j(s_j)\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}f_x(t_j^-)\right]\Lambda_k \\ &\quad + \sum_{k=0}^{m}\mu_k\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}f_x(u)\,dH_k(u) - \beta_2\Lambda^*(m) \\ &\quad - \beta_2\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}\psi_j(s_j)\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}f_x(t_j^-)\right]\Bigg\}. \end{aligned} \tag{19}$$

Substituting the constant *A* from (19) into (16), we obtain the integral equations (12) and (13).

Conversely, applying the operator ${}_{s_i}D_{g_i}^{\alpha_i}$ over $[s_i, t_{i+1})$ to (12), we get ${}_{s_i}D_{g_i}^{\alpha_i}x(t) = f(t, x(t))$. Putting $t = t_i$ and replacing $i$ by $i-1$ in (12), then (13) implies $x(t) = \varphi_i(t) + \psi_i(t)x(t_i^-)$, $t \in [t_i, s_i)$. By direct computation, substituting $t = 0$ and $t = T$ and applying the Riemann–Stieltjes fractional integral of order $\gamma_k$ with respect to $g_k$ to the unknown function $x(t)$ in (12) over $[s_k, t_{k+1})$, the condition in (1) is satisfied. Therefore, the proof is completed.

#### **3. Existence and Uniqueness Results**

Before proving our main results, we define the space of functions and the operator associated with problem (1). Let $J = [0, T]$ be an interval, and let $PC(J, \mathbb{R})$ and $PC^1(J, \mathbb{R})$ be the spaces of piecewise continuous functions defined by $PC(J, \mathbb{R}) = \{x : J \to \mathbb{R} \mid x(t)$ is continuous everywhere except possibly at the points $t_i$, at which $x(t_i^+)$ and $x(t_i^-)$ exist, $i = 1, 2, 3, \ldots, m\}$ and $PC^1(J, \mathbb{R}) = \{x \in PC(J, \mathbb{R}) \mid x'(t)$ is continuous everywhere except possibly at the points $t_i$, at which $x'(t_i^+)$ and $x'(t_i^-)$ exist, $i = 1, 2, 3, \ldots, m\}$. Let $E = PC(J, \mathbb{R}) \cap PC^1(J, \mathbb{R})$. Then $E$ is a Banach space with the norm $\|x\| = \sup\{|x(t)| : t \in J\}$. Now, we define the operator $\mathcal{Q}$ on $E$ by

$$\begin{aligned} \mathcal{Q}x(t) &= \frac{1}{\Omega}\left(\prod_{j=1}^{i}\psi_j(s_j)\right)\Bigg\{\sum_{k=0}^{m}\mu_k\Lambda^*(k)\Lambda_k - \beta_2\Lambda^*(m) \\ &\quad + \sum_{k=0}^{m}\mu_k\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}\psi_j(s_j)\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}f_x(t_j^-)\right]\Lambda_k + \sum_{k=0}^{m}\mu_k\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}f_x(u)\,dH_k(u) \\ &\quad - \beta_2\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}\psi_j(s_j)\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}f_x(t_j^-)\right]\Bigg\} \\ &\quad + \Lambda^*(i) + \sum_{j=1}^{i}\left[\left(\prod_{j}^{i}\psi_j(s_j)\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}f_x(t_j^-)\right] + {}_{s_i}I_{g_i}^{\alpha_i}f_x(t) \end{aligned}$$

for $t \in [s_i, t_{i+1})$, $i = 0, 1, 2, \ldots, m$, and

$$\begin{aligned} \mathcal{Q}x(t) &= \varphi_i(t) + \psi_i(t)\Bigg[\frac{1}{\Omega}\left(\prod_{j=1}^{i-1}\psi_j(s_j)\right)\Bigg\{\sum_{k=0}^{m}\mu_k\Lambda^*(k)\Lambda_k \\ &\quad + \sum_{k=0}^{m}\mu_k\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}\psi_j(s_j)\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}f_x(t_j^-)\right]\Lambda_k + \sum_{k=0}^{m}\mu_k\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}f_x(u)\,dH_k(u) \\ &\quad - \beta_2\Lambda^*(m) - \beta_2\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}\psi_j(s_j)\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}f_x(t_j^-)\right]\Bigg\} \\ &\quad + \Lambda^*(i-1) + \sum_{j=1}^{i}\left(\prod_{j}^{i-1}\psi_j(s_j)\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}f_x(t_j^-)\Bigg] \end{aligned}$$

for $t \in [t_i, s_i)$, $i = 1, 2, 3, \ldots, m$.

Next, by applying Banach's contraction mapping principle and Leray–Schauder's nonlinear alternative, we derive the existence and uniqueness of solutions to problem (1). We set the following constants:

$$\Phi_1 = \frac{1}{|\Omega|}\left(\prod_{j=1}^{m}|\psi_j(s_j)|\right), \qquad \Phi_2 = \sum_{k=0}^{m}|\mu_k||\Lambda^*(k)||\Lambda_k|,$$

$$\begin{aligned} \Phi_3 &= \sum_{k=0}^{m}|\mu_k|\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}|\psi_j(s_j)|\right)\frac{(g_{j-1}(t_j)-g_{j-1}(s_{j-1}))^{\alpha_{j-1}}}{\Gamma(\alpha_{j-1}+1)}\right]|\Lambda_k|, \\ \Phi_4 &= \sum_{k=0}^{m}\frac{|\mu_k|}{\Gamma(\alpha_k+\gamma_k+1)}\int_{s_k}^{t_{k+1}}(g_k(u)-g_k(s_k))^{\alpha_k+\gamma_k}\,dH_k(u), \\ \Phi_5 &= \sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right)\frac{(g_{j-1}(t_j)-g_{j-1}(s_{j-1}))^{\alpha_{j-1}}}{\Gamma(\alpha_{j-1}+1)}\right], \\ \Phi_6 &= \Phi_1(\Phi_3+\Phi_4) + \Phi_5(|\beta_2|\Phi_1+1). \end{aligned} \tag{20}$$

**Theorem 1.** *Suppose that the nonlinear function $f : J \times \mathbb{R} \to \mathbb{R}$ satisfies the following condition:*

$(H1)$ *There exists a constant $L > 0$ such that, for all $t \in J$ and $x, y \in \mathbb{R}$,*

$$|f(t, \mathbf{x}) - f(t, \mathbf{y})| \le L|\mathbf{x} - \mathbf{y}|.$$

*If $L\Phi_6 < 1$, where $\Phi_6$ is defined by (20), then the non-instantaneous impulsive Riemann–Stieltjes fractional integral boundary value problem (1) has a unique solution on J.*

**Proof.** Let *Br* be the subset of *E* defined by *Br* = {*x* ∈ *E* : *x* ≤ *r*}, where a fixed constant *r* satisfies

$$r \ge \frac{\Phi\_1 \Phi\_2 + |\Lambda^\*(m)|(|\beta\_2|\Phi\_1 + 1) + M\Phi\_6}{1 - L\Phi\_6}.\tag{21}$$

Now we will prove that $\mathcal{Q}B_r \subset B_r$. Setting $M = \sup\{|f(t, 0)| : t \in J\}$, we have, from the triangle inequality and $(H1)$, that $|f(t, x)| \le |f(t, x) - f(t, 0)| + |f(t, 0)| \le Lr + M$. Then we obtain

$$\begin{aligned} |\mathcal{Q}x(t)| &\le \frac{1}{|\Omega|}\left(\prod_{j=1}^{i}|\psi_j(s_j)|\right)\Bigg\{\sum_{k=0}^{m}|\mu_k||\Lambda^*(k)||\Lambda_k| + |\beta_2||\Lambda^*(m)| \\ &\quad + \sum_{k=0}^{m}|\mu_k|\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x|(t_j^-)\right]|\Lambda_k| \\ &\quad + \sum_{k=0}^{m}|\mu_k|\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}|f_x|(u)\,dH_k(u) \\ &\quad + |\beta_2|\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x|(t_j^-)\right]\Bigg\} \\ &\quad + |\Lambda^*(i)| + \sum_{j=1}^{i}\left[\left(\prod_{j}^{i}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x|(t_j^-)\right] + {}_{s_i}I_{g_i}^{\alpha_i}|f_x|(t) \end{aligned}$$

for *t* ∈ [*si*, *ti*+1), *i* = 0, 1, 2, . . . , *m*, and

$$\begin{aligned} |\mathcal{Q}x(t)| &\le |\varphi_i(t)| + |\psi_i(t)|\Bigg[\frac{1}{|\Omega|}\left(\prod_{j=1}^{i-1}|\psi_j(s_j)|\right)\Bigg\{\sum_{k=0}^{m}|\mu_k||\Lambda^*(k)||\Lambda_k| \\ &\quad + \sum_{k=0}^{m}|\mu_k|\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x|(t_j^-)\right]|\Lambda_k| \\ &\quad + \sum_{k=0}^{m}|\mu_k|\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}|f_x|(u)\,dH_k(u) + |\beta_2||\Lambda^*(m)| \\ &\quad + |\beta_2|\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x|(t_j^-)\right]\Bigg\} \\ &\quad + |\Lambda^*(i-1)| + \sum_{j=1}^{i}\left(\prod_{j}^{i-1}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x|(t_j^-)\Bigg] \end{aligned}$$

for *t* ∈ [*ti*,*si*), *i* = 1, 2, 3, . . . , *m*. Then we have

$$\begin{aligned} \sup_{t\in J}|\mathcal{Q}x(t)| &\le \frac{1}{|\Omega|}\left(\prod_{j=1}^{m}|\psi_j(s_j)|\right)\Bigg\{\sum_{k=0}^{m}|\mu_k||\Lambda^*(k)||\Lambda_k| + |\beta_2||\Lambda^*(m)| \\ &\quad + (Lr+M)\sum_{k=0}^{m}|\mu_k|\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}(1)(t_j^-)\right]|\Lambda_k| \\ &\quad + (Lr+M)\sum_{k=0}^{m}|\mu_k|\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}(1)(u)\,dH_k(u) \\ &\quad + (Lr+M)|\beta_2|\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}(1)(t_j^-)\right]\Bigg\} \\ &\quad + |\Lambda^*(m)| + (Lr+M)\sum_{j=1}^{m}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}(1)(t_j^-)\right] + (Lr+M)\,{}_{s_m}I_{g_m}^{\alpha_m}(1)(T) \\ &= \Phi_1\Phi_2 + |\Lambda^*(m)|(|\beta_2|\Phi_1+1) + rL\{\Phi_1(\Phi_3+\Phi_4)+\Phi_5(|\beta_2|\Phi_1+1)\} \\ &\quad + M\{\Phi_1(\Phi_3+\Phi_4)+\Phi_5(|\beta_2|\Phi_1+1)\} \\ &= \Phi_1\Phi_2 + |\Lambda^*(m)|(|\beta_2|\Phi_1+1) + rL\Phi_6 + M\Phi_6, \end{aligned}$$

since

$$\begin{aligned} {}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}(1)(t_j^-) &= \frac{(g_{j-1}(t_j)-g_{j-1}(s_{j-1}))^{\alpha_{j-1}}}{\Gamma(\alpha_{j-1}+1)}, \\ \int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}(1)(u)\,dH_k(u) &= \int_{s_k}^{t_{k+1}}\frac{(g_k(u)-g_k(s_k))^{\alpha_k+\gamma_k}}{\Gamma(\alpha_k+\gamma_k+1)}\,dH_k(u). \end{aligned}$$
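
The first identity above, ${}_{s}I_{g}^{\alpha}(1)(t) = (g(t)-g(s))^{\alpha}/\Gamma(\alpha+1)$, can be checked numerically. The sketch below (an illustration we added, not part of the original computations) evaluates the fractional integral of the constant function 1 with respect to $g(u) = e^u$ by the trapezoidal rule and compares it with the closed form; the order $\alpha = 3/2 > 1$ keeps the integrand continuous.

```python
import math

def frac_integral_of_one(g, dg, s, t, alpha, n=20000):
    """Numerically evaluate {}_s I^alpha_g (1)(t)
       = (1/Gamma(alpha)) * int_s^t g'(u) (g(t) - g(u))^(alpha-1) du
    by the composite trapezoidal rule (for alpha > 1 the integrand is continuous)."""
    h = (t - s) / n
    total = 0.0
    for i in range(n + 1):
        u = s + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * dg(u) * (g(t) - g(u)) ** (alpha - 1)
    return h * total / math.gamma(alpha)

alpha, s, t = 1.5, 0.0, 1.0
numeric = frac_integral_of_one(math.exp, math.exp, s, t, alpha)  # g(u) = g'(u) = e^u
closed = (math.exp(t) - math.exp(s)) ** alpha / math.gamma(alpha + 1)
print(numeric, closed)  # the two values agree closely
```

For 0 < α < 1 the kernel is weakly singular at *u* = *t*, and a plain trapezoidal rule would require a smoothing substitution first.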

Thus $\|\mathcal{Q}x\| \le r$, where $r$ satisfies (21). Therefore, we conclude that $\mathcal{Q}B_r \subset B_r$.

Next we will prove that the operator Q is a contraction. For any *x*, *y* ∈ *Br* we have

$$\begin{aligned} |\mathcal{Q}x(t)-\mathcal{Q}y(t)| &\le \frac{1}{|\Omega|}\left(\prod_{j=1}^{i}|\psi_j(s_j)|\right)\Bigg\{\sum_{k=0}^{m}|\mu_k|\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x-f_y|(t_j^-)\right]|\Lambda_k| \\ &\quad + \sum_{k=0}^{m}|\mu_k|\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}|f_x-f_y|(u)\,dH_k(u) \\ &\quad + |\beta_2|\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x-f_y|(t_j^-)\right]\Bigg\} \\ &\quad + \sum_{j=1}^{i}\left[\left(\prod_{j}^{i}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x-f_y|(t_j^-)\right] + {}_{s_i}I_{g_i}^{\alpha_i}|f_x-f_y|(t) \end{aligned}$$

for *t* ∈ [*si*, *ti*+1), *i* = 0, 1, 2, . . . , *m*, and

$$\begin{aligned} |\mathcal{Q}x(t)-\mathcal{Q}y(t)| &\le |\psi_i(t)|\Bigg[\frac{1}{|\Omega|}\left(\prod_{j=1}^{i-1}|\psi_j(s_j)|\right)\Bigg\{\sum_{k=0}^{m}|\mu_k|\sum_{j=1}^{k}\Big[\Big(\prod_{j}^{k}|\psi_j(s_j)|\Big) \\ &\quad \times {}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x-f_y|(t_j^-)\Big]|\Lambda_k| + \sum_{k=0}^{m}|\mu_k|\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}|f_x-f_y|(u)\,dH_k(u) \\ &\quad + |\beta_2|\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x-f_y|(t_j^-)\right]\Bigg\} \\ &\quad + \sum_{j=1}^{i}\left(\prod_{j}^{i-1}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}|f_x-f_y|(t_j^-)\Bigg] \end{aligned}$$

for *t* ∈ [*ti*,*si*), *i* = 1, 2, 3, . . . , *m*. Consequently

$$\begin{aligned} |\mathcal{Q}x(t)-\mathcal{Q}y(t)| &\le \frac{1}{|\Omega|}\left(\prod_{j=1}^{m}|\psi_j(s_j)|\right)\Bigg\{L\|x-y\|\sum_{k=0}^{m}|\mu_k|\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}(1)(t_j^-)\right]|\Lambda_k| \\ &\quad + L\|x-y\|\sum_{k=0}^{m}|\mu_k|\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}(1)(u)\,dH_k(u) \\ &\quad + L\|x-y\|\,|\beta_2|\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}(1)(t_j^-)\right]\Bigg\} \\ &\quad + L\|x-y\|\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}(1)(t_j^-)\right] \\ &= L\Phi_6\|x-y\|, \end{aligned}$$

which yields $\|\mathcal{Q}x - \mathcal{Q}y\| \le L\Phi_6\|x - y\|$. Since $L\Phi_6 < 1$, $\mathcal{Q}$ is a contraction. Therefore, we deduce, by Banach's contraction mapping principle, that $\mathcal{Q}$ has a unique fixed point, which is the solution of the boundary value problem (1). The proof is completed.
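
The contraction principle invoked here can be illustrated on a toy scalar map (a hypothetical example we added, unrelated to the operator $\mathcal{Q}$): any map with Lipschitz constant $L < 1$ has a unique fixed point, and Picard iteration converges to it with errors shrinking at least by the factor $L$ per step.

```python
import math

# T(x) = 0.5*cos(x) is a contraction on R with constant L = 0.5, since |T'(x)| <= 0.5 < 1.
def picard(T, x0, iters=60):
    """Iterate x -> T(x); by Banach's principle this converges to the unique fixed point."""
    x = x0
    for _ in range(iters):
        x = T(x)
    return x

T = lambda x: 0.5 * math.cos(x)
x_star = picard(T, 0.0)
print(abs(T(x_star) - x_star))  # essentially zero: x_star is the fixed point

# successive errors contract at least by the factor L = 0.5
e1 = abs(T(1.0) - x_star)
e2 = abs(T(T(1.0)) - x_star)
print(e2 <= 0.5 * e1 + 1e-12)
```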

**Remark 1.** *If $\beta_1 \neq 0$ and $\beta_2 = 0$, then problem (1) reduces to an initial and integral value problem. The constants $\Omega^*$, $\Phi_6^*$ and $\Phi_1^*$, given by*

$$\Omega^* = \beta_1 - \sum_{k=0}^{m}\mu_k\left(\prod_{j=1}^{k}\psi_j(s_j)\right)\Lambda_k, \qquad \Phi_6^* = \Phi_1^*(\Phi_3+\Phi_4)+\Phi_5, \qquad \Phi_1^* = \frac{1}{|\Omega^*|}\left(\prod_{j=1}^{m}|\psi_j(s_j)|\right),$$

*together with condition $(H1)$ and $L\Phi_6^* < 1$, are used to obtain the existence of a unique solution of such a problem on J.*

The following theorem of Leray–Schauder's nonlinear alternative will be applied to the next result.

**Theorem 2** ([35])**.** *Let E be a Banach space, B a closed, convex subset of E, and G an open subset of B with* $0 \in G$. *Suppose that* $\mathcal{Q} : \overline{G} \to B$ *is a continuous, compact (that is,* $\mathcal{Q}(\overline{G})$ *is a relatively compact subset of B) map. Then either*

(*i*) $\mathcal{Q}$ *has a fixed point in* $\overline{G}$*, or*

(*ii*) *there exist* $x \in \partial G$ *and* $\lambda \in (0, 1)$ *with* $x = \lambda\mathcal{Q}x$.

**Theorem 3.** *Suppose that $f : J \times \mathbb{R} \to \mathbb{R}$ is a continuous function. In addition, we assume that:*

$(H2)$ *There exist a continuous nondecreasing function* $\Psi : [0, \infty) \to (0, \infty)$ *and a continuous function* $w : J \to \mathbb{R}^+$ *such that*

$$|f(t,x)| \le w(t)\Psi(|x|),$$

*for each* (*t*, *<sup>x</sup>*) ∈ *<sup>J</sup>* × R;

(*H*3)*There exists a constant N* > 0 *such that*

$$\frac{N}{\Phi_1\Phi_2 + |\Lambda^*(m)|(|\beta_2|\Phi_1 + 1) + \|w\|\Psi(N)\Phi_6} > 1.$$

*Then the non-instantaneous impulsive Riemann–Stieltjes fractional integral boundary value problem (1) has at least one solution on J.*

**Proof.** Let $\rho$ be the radius of a ball $B_\rho = \{x \in E : \|x\| \le \rho\}$. It is obvious that $B_\rho$ is a closed, convex subset of $E$. Now, we will show that the operator $\mathcal{Q}$ fulfills all the conditions of Theorem 2. Firstly, the continuity of the operator $\mathcal{Q}$ is proved by taking a sequence $\{x_n\}$ which converges to $x$ in $E$. Since $f$ is continuous, $\|f_{x_n} - f_x\| \to 0$ as $n \to \infty$, and the same estimates as in the proof of Theorem 1 yield

$$|\mathcal{Q}x_n(t) - \mathcal{Q}x(t)| \le \Phi_6\,\|f_{x_n} - f_x\| \to 0, \quad \text{as } n \to \infty,$$

for $t \in [s_i, t_{i+1})$, $i = 0, 1, 2, \ldots, m$, and similarly for $t \in [t_i, s_i)$, $i = 1, 2, 3, \ldots, m$. Then $\mathcal{Q}$ is continuous.

Next, the compactness of the operator $\mathcal{Q}$ will be proved. Assume that $x \in B_\rho$; then we have

$$\begin{aligned} |\mathcal{Q}x(t)| &\le \frac{1}{|\Omega|}\left(\prod_{j=1}^{m}|\psi_j(s_j)|\right)\Bigg\{\sum_{k=0}^{m}|\mu_k||\Lambda^*(k)||\Lambda_k| + |\beta_2||\Lambda^*(m)| \\ &\quad + \|w\|\Psi(\rho)\sum_{k=0}^{m}|\mu_k|\sum_{j=1}^{k}\left[\left(\prod_{j}^{k}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}(1)(t_j^-)\right]|\Lambda_k| \\ &\quad + \|w\|\Psi(\rho)\sum_{k=0}^{m}|\mu_k|\int_{s_k}^{t_{k+1}}{}_{s_k}I_{g_k}^{\alpha_k+\gamma_k}(1)(u)\,dH_k(u) \\ &\quad + \|w\|\Psi(\rho)|\beta_2|\sum_{j=1}^{m+1}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}(1)(t_j^-)\right]\Bigg\} \\ &\quad + |\Lambda^*(m)| + \|w\|\Psi(\rho)\sum_{j=1}^{m}\left[\left(\prod_{j}^{m}|\psi_j(s_j)|\right){}_{s_{j-1}}I_{g_{j-1}}^{\alpha_{j-1}}(1)(t_j^-)\right] + \|w\|\Psi(\rho)\,{}_{s_m}I_{g_m}^{\alpha_m}(1)(T) \\ &= \Phi_1\Phi_2 + |\Lambda^*(m)|(|\beta_2|\Phi_1+1) + \|w\|\Psi(\rho)\Phi_6 =: \Phi_7, \end{aligned} \tag{22}$$

which yields $\|\mathcal{Q}x\| \le \Phi_7$, and hence $\mathcal{Q}B_\rho$ is a uniformly bounded set. To prove the equicontinuity of $\mathcal{Q}B_\rho$, we let the points $\theta_1, \theta_2 \in [0, T]$ be such that $\theta_1 < \theta_2$. Then, for any $x \in B_\rho$, it follows that

$$\begin{aligned} |\mathcal{Q}x(\theta_2)-\mathcal{Q}x(\theta_1)| &= \left|{}_{s_i}I_{g_i}^{\alpha_i}f_x(\theta_2) - {}_{s_i}I_{g_i}^{\alpha_i}f_x(\theta_1)\right| \\ &\le \frac{\|w\|\Psi(\rho)}{\Gamma(\alpha_i+1)}\Big\{2(g_i(\theta_2)-g_i(\theta_1))^{\alpha_i} + \big|(g_i(\theta_2)-g_i(s_i))^{\alpha_i} - (g_i(\theta_1)-g_i(s_i))^{\alpha_i}\big|\Big\} \to 0, \end{aligned}$$

as *θ*<sup>1</sup> → *θ*<sup>2</sup> for *t* ∈ [*si*, *ti*+1), *i* = 0, 1, 2, . . . , *m*, and

$$\begin{aligned} |\mathcal{Q}x(\theta_2)-\mathcal{Q}x(\theta_1)| &\le |\varphi_i(\theta_2)-\varphi_i(\theta_1)| + |\psi_i(\theta_2)-\psi_i(\theta_1)|\times\text{const.} \\ &\to 0, \qquad \text{as} \quad \theta_1 \to \theta_2, \end{aligned}$$

for $t \in [t_i, s_i)$, $i = 1, 2, 3, \ldots, m$. The above two expressions converge to zero independently of $x$. Then $\mathcal{Q}B_\rho$ is an equicontinuous set. Therefore, we deduce that $\mathcal{Q}B_\rho$ is relatively compact, which implies, by the Arzelà–Ascoli theorem, that the operator $\mathcal{Q}$ is completely continuous.

In the last step, we will show that condition $(ii)$ of Theorem 2 does not hold. Let $x$ be a solution of problem (1). Now, we consider the operator equation $x = \lambda\mathcal{Q}x$ for some fixed constant $\lambda \in (0, 1)$. Consequently, from the computation leading to (22), we obtain

$$\frac{||x||}{\Phi\_1 \Phi\_2 + |\Lambda^\*(m)|(|\beta\_2| \Phi\_1 + 1) + ||w|| \Psi(||x||) \Phi\_6} \le 1.$$

The hypothesis $(H3)$ implies that there exists a positive constant $N$ such that $\|x\| \neq N$. Define the open subset of $B_\rho$ by $G = \{x \in B_\rho : \|x\| < N\}$. It is easy to see that $\mathcal{Q} : \overline{G} \to E$ is continuous and completely continuous. Thus, there is no $x \in \partial G$ such that $x = \lambda\mathcal{Q}x$ for some $\lambda \in (0, 1)$. Hence, condition $(ii)$ of Theorem 2 does not hold. Therefore, by the conclusion of Theorem 2 $(i)$, the operator $\mathcal{Q}$ has a fixed point $x \in \overline{G}$, which is a solution of problem (1) on $J$. This is the end of the proof.

A special case can be obtained by setting $w(t) \equiv 1$ and $\Psi(x) = \kappa_1 x + \kappa_2$, $\kappa_1 \ge 0$, $\kappa_2 > 0$, in Theorem 3.

#### **Corollary 1.** *If*

$$|f(t, x)| \le \kappa_1|x| + \kappa_2,$$

*and if κ*1Φ<sup>6</sup> < 1, *then the non-instantaneous impulsive Riemann–Stieltjes fractional integral boundary value problem (1) has at least one solution on J.*

**Remark 2.** *In the same way as in Remark 1, if $\beta_1 \neq 0$, $\beta_2 = 0$, and conditions $(H2)$–$(H3)$ are fulfilled with*

$$\frac{N}{\Phi_1^*\Phi_2 + |\Lambda^*(m)| + \|w\|\Psi(N)\Phi_6^*} > 1,$$

*then the initial and integral values problem has at least one solution on J.*

**Example 1.** *Consider the non-instantaneous impulsive Riemann–Stieltjes fractional integral boundary value problem*

$$\begin{cases} {}_{2i}D_{g_i}^{\frac{4i+5}{4i+6}}x(t) = f(t, x(t)), & t \in [2i, 2i+1),\ i = 0, 1, 2, 3, \\[1ex] x(t) = \dfrac{1}{2}\log_e(i+t) + \left(\dfrac{1}{i+\tan^{-1}t}\right)x(t_i^-), & t \in [2i-1, 2i),\ i = 1, 2, 3, \\[1ex] \begin{aligned} \dfrac{3}{11}x(0) + \dfrac{4}{13}x(7) &= \dfrac{5}{17}\int_0^1\left({}_0I_{g_0}^{1/4}x\right)(u)\,d(u^2+u) + \dfrac{6}{19}\int_2^3\left({}_2I_{g_1}^{1/2}x\right)(u)\,d(u^2+2u) \\ &\quad + \dfrac{7}{23}\int_4^5\left({}_4I_{g_2}^{3/4}x\right)(u)\,d(u^2+3u) + \dfrac{8}{29}\int_6^7\left({}_6I_{g_3}^{3/2}x\right)(u)\,d(u^2+4u). \end{aligned} \end{cases} \tag{23}$$

Here $\alpha_i = (4i+5)/(4i+6)$, $g_i(t) = e^t/(e^t+4+i-t)$ for $t \in [2i, 2i+1)$, $i = 0, 1, 2, 3$, $\varphi_i(t) = (1/2)\log_e(i+t)$, $\psi_i(t) = 1/(i+\tan^{-1}t)$ for $t \in [2i-1, 2i)$, $i = 1, 2, 3$, $\beta_1 = 3/11$, $\beta_2 = 4/13$. Since $[2i, 2i+1) \cup [2j-1, 2j) \cup \{7\} = [0, 7]$ for $i = 0, 1, 2, 3$, $j = 1, 2, 3$, we put $T = 7$. We set $\mu_0 = 5/17$, $\mu_1 = 6/19$, $\mu_2 = 7/23$, $\mu_3 = 8/29$, $H_i(t) = t^2 + it$, $i = 1, 2, 3, 4$, $\gamma_0 = 1/4$, $\gamma_1 = 1/2$, $\gamma_2 = 3/4$, $\gamma_3 = 3/2$. Remark that $g_i'(t) > 0$ for all $t \in [2i, 2i+1]$, $i = 0, 1, 2, 3$. From all this information, we can compute $|\Omega| \approx 0.5181070744$, $\Phi_1 \approx 0.06251397190$, $\Phi_2 \approx 0.8574153788$, $\Phi_3 \approx 0.1639270834$, $\Phi_4 \approx 0.1706687388$, $\Phi_5 \approx 0.1889629435$, $\Phi_6 \approx 0.2135145724$ and $\Lambda^*(3) \approx 1.376938726$.
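
The monotonicity of the functions $g_i$ can be sanity-checked numerically. The short sketch below (added for illustration) samples each $g_i(t) = e^t/(e^t + 4 + i - t)$ on its own subinterval $[2i, 2i+1]$ and verifies that it is strictly increasing there, which is what the estimates require.

```python
import math

# g_i(t) = e^t / (e^t + 4 + i - t); its derivative has the sign of (5 + i - t),
# which is positive on [2i, 2i+1] for i = 0, 1, 2, 3.
def g(i, t):
    return math.exp(t) / (math.exp(t) + 4 + i - t)

for i in range(4):
    a, b = 2 * i, 2 * i + 1
    ts = [a + k * (b - a) / 400 for k in range(401)]
    vals = [g(i, t) for t in ts]
    # strict increase at every sampled step
    assert all(v2 > v1 for v1, v2 in zip(vals, vals[1:])), f"g_{i} not increasing"
print("each g_i is strictly increasing on [2i, 2i+1]")
```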

(i) Consider a nonlinear function *<sup>f</sup>* : [0, 7] × R → R by

$$f(t, \mathbf{x}) = \frac{4}{3} e^{-t} \left( \frac{2\mathbf{x}^2 + 3|\mathbf{x}|}{1 + |\mathbf{x}|} \right) + \frac{1}{2}t + 1. \tag{24}$$

It is easy to check that the function $f(t, x)$ satisfies the Lipschitz condition with $L = 4$, since $|f(t, x) - f(t, y)| \le 4|x - y|$ for all $t \in [0, 7]$ and $x, y \in \mathbb{R}$. Since $L\Phi_6 \approx 0.8540582896 < 1$, by applying the result in Theorem 1, we conclude that problem (23), with $f$ given by (24), has a unique solution on $[0, 7]$.
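
The Lipschitz bound $L = 4$ can be probed numerically: $(2x^2+3|x|)/(1+|x|)$ depends on $x$ only through $|x|$ and its derivative in $|x|$ never exceeds 3, so the prefactor $(4/3)e^{-t}$ gives the constant $4e^{-t} \le 4$. The random-sampling sketch below (an added illustration, not part of the original example) checks that the difference quotients stay below 4:

```python
import math
import random

# f(t, x) = (4/3) e^{-t} (2x^2 + 3|x|)/(1 + |x|) + t/2 + 1, as in (24)
def f(t, x):
    return (4.0 / 3.0) * math.exp(-t) * (2 * x * x + 3 * abs(x)) / (1 + abs(x)) + 0.5 * t + 1

random.seed(0)
worst = 0.0
for _ in range(20000):
    t = random.uniform(0, 7)
    x = random.uniform(-50, 50)
    y = random.uniform(-50, 50)
    if x != y:
        worst = max(worst, abs(f(t, x) - f(t, y)) / abs(x - y))
print(worst)  # stays below the Lipschitz constant L = 4
```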

(ii) Let now a nonlinear function *f* defined by

$$f(t, \mathbf{x}) = \frac{1}{t+2} \left( \frac{\mathbf{x}^{16}}{1+\mathbf{x}^{14}} + \frac{2}{3} \sin^2 \mathbf{x} + \frac{1}{3} e^{-\mathbf{x}^2} \right). \tag{25}$$

Note that

$$|f(t,x)| \le \frac{1}{t+2} \left(x^2 + 1\right),$$

which satisfies $(H2)$ with $w(t) = 1/(t+2)$ and $\Psi(x) = x^2 + 1$. Accordingly, $\|w\| = 1/2$, and there exists a constant $N \in (1.984010360, 7.383031794)$ satisfying condition $(H3)$ of Theorem 3. Therefore, by applying Theorem 3, we deduce that problem (23), with $f$ given by (25), has at least one solution on $[0, 7]$.
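
The admissible interval for $N$ follows from the quadratic inequality hidden in $(H3)$: with $\Psi(N) = N^2 + 1$ and $\|w\| = 1/2$, the condition reads $aN^2 - N + c < 0$ with $a = \|w\|\Phi_6$. The sketch below (added for verification) recovers the quoted interval from the paper's numerical constants:

```python
import math

# numerical constants reported for problem (23)
Phi1, Phi2, Phi6 = 0.06251397190, 0.8574153788, 0.2135145724
Lam_m, beta2, w_norm = 1.376938726, 4.0 / 13.0, 0.5

# (H3): N > Phi1*Phi2 + |Lam*(m)|(|beta2|*Phi1 + 1) + ||w||*(N^2 + 1)*Phi6,
# i.e. a*N^2 - N + c < 0 with a = ||w||*Phi6 and c = const + a.
const = Phi1 * Phi2 + Lam_m * (beta2 * Phi1 + 1)
a = w_norm * Phi6
c = const + a
disc = 1 - 4 * a * c
lo = (1 - math.sqrt(disc)) / (2 * a)
hi = (1 + math.sqrt(disc)) / (2 * a)
print(lo, hi)  # approximately (1.984, 7.383), the interval quoted in the text
```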

(iii) If the term $x^{16}$ is replaced by $|x|^{15}$ in (25), then

$$f(t, \mathbf{x}) = \frac{1}{t+2} \left( \frac{|\mathbf{x}|^{15}}{1+\mathbf{x}^{14}} + \frac{2}{3} \sin^2 \mathbf{x} + \frac{1}{3} e^{-\mathbf{x}^2} \right). \tag{26}$$

Hence we get | *f*(*t*, *x*)| ≤ (1/2)|*x*| + (1/2). Putting *κ*<sup>1</sup> = 1/2 and *κ*<sup>2</sup> = 1/2, it follows that *κ*1Φ<sup>6</sup> ≈ 0.1067572862 < 1, which implies, by Corollary 1, that the problem (23) with (26) has at least one solution on [0, 7].

#### **4. Conclusions**

We have presented sufficient criteria for the existence and uniqueness of solutions to a non-instantaneous impulsive Riemann–Stieltjes fractional integral boundary value problem. The given boundary value problem is converted into an equivalent fixed point operator equation, which is solved by applying standard fixed point theorems. We make use of Banach's contraction mapping principle to obtain the uniqueness result, while the Leray–Schauder nonlinear alternative is applied to obtain the existence result. We have demonstrated the application of the obtained results by constructing examples.

Our problem covers many types, as well as mixed types, of impulsive fractional boundary value problems. For example, our results reduce to Riemann–Liouville and Hadamard impulsive fractional boundary value problems when $g(t) = t$ and $g(t) = \log t$, respectively. Our results are new in the given configuration and contribute to the theory of fractional boundary value problems.

**Author Contributions:** conceptualization, S.K.N. and J.T.; methodology, S.A., Y.T., S.K.N. and J.T.; formal analysis, S.A., Y.T., S.K.N. and J.T.; funding acquisition, J.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by King Mongkut's University of Technology North Bangkok. Contract no. KMUTNB-61-KNOW-015.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **A Novel Numerical Method for Solving Fractional Diffusion-Wave and Nonlinear Fredholm and Volterra Integral Equations with Zero Absolute Error**

**Mutaz Mohammad 1,\*, Alexandre Trounev <sup>2</sup> and Mohammed Alshbool 1,3,\***


**Abstract:** In this work, a new numerical method for the fractional diffusion-wave equation and for nonlinear Fredholm and Volterra integro-differential equations is proposed. The method is based on Euler wavelet approximation and the inversion of an *M* × *M* collocation matrix. The equations are posed in terms of the Caputo fractional derivative, and the resulting system is reduced to a system of algebraic equations by implementing a Gaussian quadrature discretization. The reduced system is generated via the truncated Euler wavelet expansion. Several examples with known exact solutions have been solved with zero absolute error. The method is also applied to nonlinear Fredholm and Volterra integral equations and achieves an absolute error of 0 × 10<sup>−31</sup> for all tested examples. The new numerical scheme is exceptional in terms of its novelty, efficiency and accuracy in the field of numerical approximation.

**Keywords:** time-fractional diffusion-wave equations; Euler wavelets; integral equations; numerical approximation

**MSC:** 26A33, 35R11, 45B05

#### **1. Introduction**

Fractional calculus is very useful and widely applied in science, numerical computation and engineering, where the mathematical modeling of several real-world problems is carried out in terms of fractional differential equations; see, e.g., [1–8]. For example, the authors in [8] approximated the Caputo fractional derivative by quadratic segmentary interpolation. This yields a new approach to approximating fractional derivatives and provides some insight into new applications in which the numerical resolution of ordinary fractional differential equations is required.

The definition of such a fractional order involves an integration represented as a nonlocal operator. This important feature allows one to capture the previous history (memory) when calculating, for example, the value of the time-fractional diffusion-wave derivative of a given function within a certain period of time. This cannot be achieved with a classical (integer-order) derivative.

The fractional diffusion-wave equation and some types of integral equations, as mathematical models, are widely used in many physical phenomena, where the exact solution is usually difficult to obtain. Note that the authors of [9] introduced a mathematical model that intermediates between the wave, heat, and transport equations; both the time and spatial variations of the corresponding dynamical law are expressed in fractional form (a Caputo derivative for the time variable and a Riesz pseudo-differential operator for the spatial one), so that pure wave-like propagation is connected with pure diffusion and transport processes in a unified form.

**Citation:** Mohammad, M.; Trounev, A.; Alshbool, M. A Novel Numerical Method for Solving Fractional Diffusion-Wave and Nonlinear Fredholm and Volterra Integral Equations with Zero Absolute Error. *Axioms* **2021**, *10*, 165. https://doi.org/10.3390/axioms10030165

Academic Editor: Jorge E. Macías Díaz

Received: 19 April 2021; Accepted: 8 June 2021; Published: 28 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Several authors have reported high-precision numerical solutions, with absolute errors of 10<sup>−16</sup>–10<sup>−20</sup>, for the nonlinear Volterra integral equation in [10] and for the fractional diffusion-wave equation in [11]. They used the popular collocation method, based on certain wavelet systems, to solve these nontrivial mathematical problems.

Since the number of collocation points is limited to 16 for one-dimensional problems, or 4 × 4 for two-dimensional problems, we noticed a numerical phenomenon in each case, specifically concerning the absolute error. In this paper, we propose a novel numerical method to solve fractional diffusion-wave equations and nonlinear Fredholm and Volterra integral equations with zero absolute error. We also discuss the method proposed in [10] and propose a new one to solve the nonlinear Volterra integral equation with an absolute error of 0 × 10<sup>−31</sup>. As will be shown, in every case there is a numerical phenomenon of error cancellation.

#### **2. Fractional Diffusion-Wave Equation**

We consider the following fractional diffusion-wave equation involving the Caputo fractional derivative of order *α* > 0:

$$\mathcal{D}_c^{\alpha}u + \mu u_t - u_{xx} = Q(x,t), \quad 0 \le x, t \le 1,\tag{1}$$

where *u* = *u*(*x*,*t*), *μ* is a damping parameter, and the Caputo fractional derivative for this work is defined as

$$\mathcal{D}_c^{\alpha}u = \frac{1}{\Gamma(2-\alpha)} \int_0^t \frac{u_{\tau\tau}(x,\tau)}{(t-\tau)^{2-\alpha}}\, d\tau, \quad 1 < \alpha \le 2. \tag{2}$$
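As a quick numerical illustration of Equation (2) (not the method of this paper), the sketch below approximates the Caputo derivative of u(t) = t² by a midpoint rule and compares it with the known closed form 2t<sup>2−α</sup>/Γ(3 − α); the function names are ours, chosen for illustration:

```python
import math

def caputo(u_tt, t, alpha, n=20000):
    # D_c^alpha u(t) = 1/Gamma(2 - alpha) * int_0^t u''(tau) (t - tau)^(alpha - 2) dtau
    # for 1 < alpha <= 2; the midpoint rule tolerates the integrable endpoint singularity
    h = t / n
    total = 0.0
    for i in range(n):
        tau = (i + 0.5) * h
        total += u_tt(tau) * (t - tau) ** (alpha - 2)
    return total * h / math.gamma(2 - alpha)

alpha, t = 1.5, 0.8
approx = caputo(lambda tau: 2.0, t, alpha)            # u(t) = t^2, so u'' = 2
exact = 2 * t ** (2 - alpha) / math.gamma(3 - alpha)  # known Caputo derivative of t^2
print(approx, exact)
```

The midpoint rule converges slowly near the singularity at τ = t; a basis that integrates the singular kernel in closed form avoids this issue entirely.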

The initial and boundary conditions for Equation (1) are given as follows:

$$u(x,0) = f_0(x), \quad u_t(x,0) = f_1(x), \quad u(0,t) = g_0(t), \quad u(1,t) = g_1(t), \tag{3}$$

where *α* is given and *f*0, *f*1, *g*0, *g*1, *Q* are known functions.

We simulate the problem defined in Equations (1)–(3) based on these given functions, proposing a new numerical method based on Euler wavelets with different sets of collocation points. Surprisingly, the numerical scheme used in this paper achieves zero absolute error. The absolute error of the numerical algorithm is defined on the grid only, which is why we are able to report a zero absolute error. None of the examples in the manuscript are trivial, which is why we believe that this method can be of interest to the international community.

#### **3. The New Numerical Scheme**

Wavelets are sets of well-localized basis functions, known to be a useful tool for solving various types of differential and integral equations. In particular, orthogonal wavelets are used extensively in the literature to approximate different types of fractional differential equations. To solve the problem proposed in Equations (1)–(3), we use wavelets based on Euler polynomials. We define the Euler polynomials *E*1(*x*), *E*2(*x*) and the functions needed for our novel numerical algorithm as follows:

$$E_1(x) = -\frac{1}{2} + x, \qquad E_2(x) = -x + x^2, \tag{4}$$

$$I_1^1 = \int_0^x E_1(t)\,dt = -\frac{x}{2} + \frac{x^2}{2},\tag{5}$$

$$I_2^1 = \int_0^x E_2(t)\,dt = -\frac{x^2}{2} + \frac{x^3}{3},\tag{6}$$

$$I_1^2 = \int_0^x I_1^1(t)\,dt = -\frac{x^2}{4} + \frac{x^3}{6},\tag{7}$$

$$I_2^2 = \int_0^x I_2^1(t)\,dt = -\frac{x^3}{6} + \frac{x^4}{12},\tag{8}$$
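As a sanity check, the closed forms of the repeated integrals in Equations (5)–(8) can be verified by numerical quadrature; a short sketch (helper names are ours):

```python
# verify the repeated integrals (5)-(8) of the Euler polynomials numerically
def E1(t): return -0.5 + t                        # Eq. (4)
def E2(t): return -t + t ** 2                     # Eq. (4)
def I11(t): return -t / 2 + t ** 2 / 2            # Eq. (5)
def I21(t): return -t ** 2 / 2 + t ** 3 / 3       # Eq. (6)

def integrate(f, x, n=4000):
    # composite midpoint rule on [0, x]
    h = x / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

checks = []
for x in (0.25, 0.5, 0.9):
    checks.append(abs(integrate(E1, x) - I11(x)))                       # Eq. (5)
    checks.append(abs(integrate(E2, x) - I21(x)))                       # Eq. (6)
    checks.append(abs(integrate(I11, x) - (-x**2 / 4 + x**3 / 6)))      # Eq. (7)
    checks.append(abs(integrate(I21, x) - (-x**3 / 6 + x**4 / 12)))     # Eq. (8)
print(max(checks))  # all deviations at quadrature accuracy
```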

$$I_1^{\alpha} = \int_0^x \frac{E_1(\tau)}{(x-\tau)^{\alpha-1}}\,d\tau = \frac{x^{2-\alpha}(\alpha+2x-3)}{2(\alpha-2)(\alpha-3)},\tag{9}$$

$$I_2^{\alpha} = \int_0^x \frac{E_2(\tau)}{(x-\tau)^{\alpha-1}}\,d\tau = -\frac{x^{3-\alpha}(\alpha+2x-4)}{(\alpha-2)(\alpha-3)(\alpha-4)}.\tag{10}$$

Define Ψ to be the set of all functions given in Equations (4)–(10). For any function *f* ∈ Ψ, we define the function *ψ*(*x*) as follows

$$\psi(x) = \begin{cases} f(x), & x \in [0,1], \\ 0, & \text{otherwise.} \end{cases}$$

Now, assume that

$$\psi_1 = E_1, \ \psi_2 = E_2, \ \psi_{1,1} = I_1^1, \ \psi_{2,1} = I_2^1, \ \psi_{1,2} = I_1^2, \ \psi_{2,2} = I_2^2, \ \psi_{1,\alpha} = I_1^{\alpha}, \ \psi_{2,\alpha} = I_2^{\alpha};$$

we define the following set of functions (wavelets) depending on *j*, *k* ∈ Z as

$$\begin{array}{rcl}\psi_1(j,k,x)&=&\psi_1(2^jx-k),\\\psi_2(j,k,x)&=&\psi_2(2^jx-k),\\\psi(j,k,x)&=&\psi_1(j,k,x)+\psi_2(j,k,x),\\\psi^{1,1}(j,k,x)&=&\psi_{1,1}(2^jx-k),\\\psi^{1,2}(j,k,x)&=&\psi_{1,2}(2^jx-k),\\\psi^{2,1}(j,k,x)&=&\psi_{2,1}(2^jx-k),\\\psi^{2,2}(j,k,x)&=&\psi_{2,2}(2^jx-k),\\\psi^{1}(j,k,x)&=&(\psi^{1,1}(j,k,x)+\psi^{2,1}(j,k,x))/j,\\\psi^{2}(j,k,x)&=&(\psi^{1,2}(j,k,x)+\psi^{2,2}(j,k,x))/j^{2},\\\psi^{1,\alpha}(j,k,x)&=&\psi_{1,\alpha}(2^jx-k),\\\psi^{2,\alpha}(j,k,x)&=&\psi_{2,\alpha}(2^jx-k),\\\psi^{\alpha}(j,k,x)&=&(\psi^{1,\alpha}(j,k,x)+\psi^{2,\alpha}(j,k,x))/j^{\alpha-2}.\end{array}$$

Recall that, see, e.g., [12], a function *f* ∈ *L*2(R) can be expanded using the following series,

$$f(x) = \sum_{\ell=1}^{2} \sum_{j,k \in \mathbb{Z}} d^{\ell}(j,k)\, \psi^{\ell}(j,k,x),\tag{11}$$

where,

$$d^{\ell}(j,k) = \left\langle f, \psi^{\ell}(j,k,x) \right\rangle = \int_{\mathbb{R}} f(x)\, \psi^{\ell}(j,k,x)\, w(x)\, dx,$$

for which ⟨⋅, ⋅⟩ denotes the usual inner product over the space *L*2(R) and *w* is a proper weight function. The series in Equation (11) may be truncated to *fn*,*<sup>M</sup>* as

$$f_{n,M} = \sum_{\ell=1}^{2} \sum_{j=0}^{n} \sum_{k=0}^{M-1} d^{\ell}(j,k)\, \psi^{\ell}(j,k,x). \tag{12}$$

In order to solve the proposed problem, we construct a vector Ψ*<sup>f</sup>* of length *M* = 2<sup>*n*+1</sup>, *n* ∈ N, such that

$$\Psi_f = (\psi_f, \sigma^{\rho}(1,0,x), \dots, \sigma^{\rho}(2^j,k,x), \dots, \sigma^{\rho}(2^n,2^n-1,x)), \quad j = 0,1,2,\dots,n; \ k = 0,1,2,\dots,2^j-1,\tag{13}$$

where,

$$\begin{cases} \psi_f = 1,\ \sigma^{\rho} = \psi & \text{if } f = E_1, E_2, \ \rho = 1, \\ \psi_f = x,\ \sigma^{\rho} = \psi^1 & \text{if } f = I_1^1, I_2^1, \ \rho = j, \\ \psi_f = x^2/2,\ \sigma^{\rho} = \psi^2 & \text{if } f = I_1^2, I_2^2, \ \rho = j^2, \\ \psi_f = I_1^{\alpha}(x),\ \sigma^{\rho} = \psi^{\alpha} & \text{if } f = I_1^{\alpha}, I_2^{\alpha}, \ \rho = j^{\alpha - 2}. \end{cases}$$

For example, for *n* = 2, *α* = 3/2, we have the following:

• When *ψ<sup>f</sup>* = 1, *ρ* = 1, we have

$$\Psi_f = \begin{cases} (1,0,0,0,0,0,0,0) & x \ge 1 \text{ or } x < 0, \\ \left(1, x^2 - \frac{1}{2}, 0, (2x-1)^2 - \frac{1}{2}, 0, 0, 0, (4x-3)^2 - \frac{1}{2}\right) & \frac{3}{4} \le x < 1, \\ \left(1, x^2 - \frac{1}{2}, 0, (2x-1)^2 - \frac{1}{2}, 0, 0, (4x-2)^2 - \frac{1}{2}, 0\right) & \frac{1}{2} \le x < \frac{3}{4}, \\ \left(1, x^2 - \frac{1}{2}, 4x^2 - \frac{1}{2}, 0, 0, (4x-1)^2 - \frac{1}{2}, 0, 0\right) & \frac{1}{4} \le x < \frac{1}{2}, \\ \left(1, x^2 - \frac{1}{2}, 4x^2 - \frac{1}{2}, 0, 16x^2 - \frac{1}{2}, 0, 0, 0\right) & 0 \le x < \frac{1}{4}. \end{cases}$$
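The vector above can be generated programmatically: for *ψ<sup>f</sup>* = 1, each nonzero entry is ψ(2ʲx − k) = (2ʲx − k)² − 1/2 on the support [0, 1) of the dilated wavelet. A small sketch under that reading (function names are ours):

```python
def psi(y):
    # psi(y) = E1(y) + E2(y) = y^2 - 1/2 on [0, 1), zero elsewhere
    return y ** 2 - 0.5 if 0 <= y < 1 else 0.0

def Psi_f(x, n=2):
    # (psi_f, psi(j=0,k=0), psi(j=1,k=0..1), psi(j=2,k=0..3)) per Eq. (13), psi_f = 1
    if x < 0 or x >= 1:
        return [1.0] + [0.0] * (2 ** (n + 1) - 1)
    out = [1.0]
    for j in range(n + 1):           # j = 0, 1, ..., n
        for k in range(2 ** j):      # k = 0, 1, ..., 2^j - 1
            out.append(psi(2 ** j * x - k))
    return out

print(Psi_f(0.8))  # matches the branch 3/4 <= x < 1 above
```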

• When *ψ<sup>f</sup>* = *x*, *ρ* = *j*, we have

$$\Psi_f = \begin{cases} (x,0,0,0,0,0,0,0) & x \ge 1 \text{ or } x < 0, \\ \left(x, \frac{1}{6}x(2x^2-3), 0, \frac{1}{12}(16x^3-24x^2+6x+1), 0, 0, 0, \frac{1}{12}(4x-3)^3 + \frac{1}{8}(3-4x)\right) & \frac{3}{4} \le x < 1, \\ \left(x, \frac{1}{6}x(2x^2-3), 0, \frac{1}{12}(16x^3-24x^2+6x+1), 0, 0, \frac{1}{12}(64x^3-96x^2+42x-5), 0\right) & \frac{1}{2} \le x < \frac{3}{4}, \\ \left(x, \frac{1}{6}x(2x^2-3), \frac{1}{6}x(8x^2-3), 0, 0, \frac{16x^3}{3}-4x^2+\frac{x}{2}+\frac{1}{24}, 0, 0\right) & \frac{1}{4} \le x < \frac{1}{2}, \\ \left(x, \frac{1}{6}x(2x^2-3), \frac{1}{6}x(8x^2-3), 0, \frac{1}{6}x(32x^2-3), 0, 0, 0\right) & 0 \le x < \frac{1}{4}. \end{cases}$$

• When *ψ<sup>f</sup>* = *x*<sup>2</sup>/2, *ρ* = *j*<sup>2</sup>, we have

$$\Psi_f = \begin{cases} \left(\frac{x^2}{2},0,0,0,0,0,0,0\right) & x \ge 1 \text{ or } x < 0, \\ \left(\frac{x^2}{2}, \frac{1}{12}x^2(x^2-3), 0, \frac{1}{24}(1-2x)^2(2x^2-2x-1), 0, 0, 0, \frac{1}{96}(3-4x)^2(8x^2-12x+3)\right) & \frac{3}{4} \le x < 1, \\ \left(\frac{x^2}{2}, \frac{1}{12}x^2(x^2-3), 0, \frac{1}{24}(1-2x)^2(2x^2-2x-1), 0, 0, \frac{1}{48}(1-2x)^2(16x^2-16x+1), 0\right) & \frac{1}{2} \le x < \frac{3}{4}, \\ \left(\frac{x^2}{2}, \frac{1}{12}x^2(x^2-3), \frac{1}{12}x^2(4x^2-3), 0, 0, \frac{1}{96}(1-4x)^2(8x^2-4x-1), 0, 0\right) & \frac{1}{4} \le x < \frac{1}{2}, \\ \left(\frac{x^2}{2}, \frac{1}{12}x^2(x^2-3), \frac{1}{12}x^2(4x^2-3), 0, \frac{4x^4}{3}-\frac{x^2}{4}, 0, 0, 0\right) & 0 \le x < \frac{1}{4}. \end{cases}$$

• When *ψ<sup>f</sup>* = *I*<sub>1</sub><sup>*α*</sup>(*x*), *ρ* = *j*<sup>*α*−2</sup>, we have

$$\Psi_f = \begin{cases} \left(2\sqrt{\pi}, 0, 0, 0, 0, 0, 0, 0\right) & x \ge 1 \text{ or } x < 0, \\ \left(2\sqrt{\pi}, x^2-\frac{1}{2}, 0, \frac{8x^2-8x+1}{\sqrt{2}}, 0, 0, 0, 32x^2-48x+17\right) & \frac{3}{4} \le x < 1, \\ \left(2\sqrt{\pi}, x^2-\frac{1}{2}, 0, \frac{8x^2-8x+1}{\sqrt{2}}, 0, 0, 32x^2-32x+7, 0\right) & \frac{1}{2} \le x < \frac{3}{4}, \\ \left(2\sqrt{\pi}, x^2-\frac{1}{2}, \frac{8x^2-1}{\sqrt{2}}, 0, 0, 32x^2-16x+1, 0, 0\right) & \frac{1}{4} \le x < \frac{1}{2}, \\ \left(2\sqrt{\pi}, x^2-\frac{1}{2}, \frac{8x^2-1}{\sqrt{2}}, 0, 32x^2-1, 0, 0, 0\right) & 0 \le x < \frac{1}{4}. \end{cases}$$

Now, define the solution of the proposed system given in Equations (1)–(3) in the form of a matrix system by the following equation,

$$u_{xxtt}(x,t) \approx \Psi_E^T(x) \cdot U \cdot \Psi_E(t), \tag{14}$$

where *U* is a matrix of order *M* × *M* that should be determined using some collocation points, Ψ*E*<sup>T</sup> is the transpose of the vector Ψ*<sup>E</sup>*, and *E* denotes the set of both functions *E*<sup>1</sup> and *E*<sup>2</sup> that were defined earlier.

Integrating Equation (14) twice with respect to *t* yields:

$$\begin{aligned} u_{xxt}(x,t) &\approx \Psi_E^T(x) \cdot U \cdot \Psi_{I^1}(t) + F_1''(x), \\ u_{xx}(x,t) &\approx \Psi_E^T(x) \cdot U \cdot \Psi_{I^2}(t) + t F_1''(x) + F_2''(x). \end{aligned}$$

Now, integrating twice with respect to *x* reveals the following:

$$\begin{aligned} u_x(x,t) &\approx \Psi_{I^1}^T(x) \cdot U \cdot \Psi_{I^2}(t) + t(F_1'(x) - F_1'(0)) + F_2'(x) - F_2'(0) + F_3(t), \\ u(x,t) &\approx \Psi_{I^2}^T(x) \cdot U \cdot \Psi_{I^2}(t) + t(F_1(x) - F_1(0) - xF_1'(0)) + F_2(x) - F_2(0) - xF_2'(0) + xF_3(t) + F_4(t), \end{aligned}$$

where

$$I^1 = \{I_1^1, I_2^1\}, \qquad I^2 = \{I_1^2, I_2^2\},$$

and *F*1(*x*), *F*2(*x*), *F*3(*t*), *F*4(*t*) are arbitrary functions that can be determined using the initial and boundary conditions given in Equation (3).

Hence, we have

$$\begin{aligned} u(x,t) &\approx \Psi_{I^2}^T(x) \cdot U \cdot \Psi_{I^2}(t) + t\left(f_1(x) - (xF_3'(0) + F_4'(0))\right) + f_0(x) - xF_3(0) - F_4(0) + xF_3(t) + g_0(t), \\ u_t(x,t) &\approx \Psi_{I^2}^T(x) \cdot U \cdot \Psi_{I^1}(t) + f_1(x) - xF_3'(0) - F_4'(0) + xF_3'(t) + g_0'(t), \\ u_{xx}(x,t) &\approx \Psi_E^T(x) \cdot U \cdot \Psi_{I^2}(t) + t f_1''(x) + f_0''(x), \\ u_{tt}(x,t) &\approx \Psi_{I^2}^T(x) \cdot U \cdot \Psi_E(t) + xF_3''(t) + g_0''(t), \\ \mathcal{D}_c^{\alpha}u(x,t) &\approx \frac{1}{\Gamma(2-\alpha)}\left(-x\Psi_{I^2}^T(x) \cdot U \cdot \Psi_{I^{\alpha}}(t) + \Psi_{I^2}^T(x) \cdot U \cdot \Psi_{I^{\alpha}}(t) + F_5(t) + xF_6(t)\right), \end{aligned}$$

where

$$I^{\alpha} = \{I_1^{\alpha}, I_2^{\alpha}\}.$$

Here, we define the functions *Fi* as follows

$$\begin{split} F_4(t) &= g_0(t), \\ F_3(t) &= g_1(t) - g_0(t) - \Psi_{I^2}^T(1) \cdot U \cdot \Psi_{I^2}(t) + tc_1 + c_2, \\ F_5(t) &= \int_0^t \frac{g_0''(\tau)}{(t-\tau)^{2-\alpha}}\, d\tau, \\ F_6(t) &= \int_0^t \frac{g_1''(\tau) - g_0''(\tau)}{(t-\tau)^{2-\alpha}}\, d\tau, \\ c_0 &= g_1(0) - 2g_0(0) + f_0(1), \\ c_1 &= -f_1(1) + c_0/2 + g_0(0), \\ c_2 &= -f_0(1) + c_0/2 + g_0(0). \end{split}$$

Now, we have all the functions needed for the numerical simulation. Let us define *M* = 2<sup>1+*n*</sup>, *n* = 1, 2, ..., as the number of collocation points, and

$$\Delta x = 1/M, \quad s_0 = 0, \quad s_i = s_{i-1} + \Delta x, \quad x_i = t_i = \frac{1}{2}(s_{i-1} + s_i), \quad i = 1, 2, \dots, M.$$
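In code, this grid is just the set of midpoints of a uniform partition of [0, 1]; a one-function sketch (the name is ours):

```python
def collocation_points(n):
    # M = 2^(1+n) midpoints x_i = t_i = (s_{i-1} + s_i) / 2 of a uniform partition of [0, 1]
    M = 2 ** (1 + n)
    dx = 1.0 / M
    s = [i * dx for i in range(M + 1)]
    return [(s[i - 1] + s[i]) / 2 for i in range(1, M + 1)]

print(collocation_points(2))  # M = 8 points: 1/16, 3/16, ..., 15/16
```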

Then, we substitute the above equations into the proposed system and evaluate Equation (1) at each pair of collocation points as follows:

$$\mathcal{D}_c^{\alpha}u(x_i,t_j) + \mu u_t(x_i,t_j) - u_{xx}(x_i,t_j) = Q(x_i,t_j), \quad i,j = 1,2,\dots,M. \tag{15}$$

Therefore,

$$\frac{1}{\Gamma(2-\alpha)} \left( -x_i \Psi_{I^2}^T(x_i) \cdot U \cdot \Psi_{I^{\alpha}}(t_j) + \Psi_{I^2}^T(x_i) \cdot U \cdot \Psi_{I^{\alpha}}(t_j) + F_5(t_j) + x_i F_6(t_j) \right) + \tag{16}$$

$$\mu \left( \Psi_{I^2}^T(x_i) \cdot U \cdot \Psi_{I^1}(t_j) + f_1(x_i) - x_i F_3'(0) - F_4'(0) + x_i F_3'(t_j) + g_0'(t_j) \right) - \tag{17}$$

$$\left( \Psi_E^T(x_i) \cdot U \cdot \Psi_{I^2}(t_j) + t_j f_1''(x_i) + f_0''(x_i) \right) = Q(x_i, t_j). \tag{18}$$

Note that Equations (16)–(18) generate an *M* × *M* system of algebraic equations whose solution produces the matrix *U*.
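Since Equations (16)–(18) are linear in the unknown matrix *U*, the collocation system has the generic form P · U · T = Q for suitable matrices, and it can be solved by vectorization via vec(P · U · T) = (T<sup>T</sup> ⊗ P) vec(U). The sketch below uses random stand-ins for P, T, Q (not the actual collocation matrices) purely to illustrate the solve step:

```python
import numpy as np

# Generic sketch: solve P . U . T = Q for the matrix unknown U by vectorization,
# using vec(P U T) = (T^T kron P) vec(U) with column-major vec.
rng = np.random.default_rng(0)
M = 8
P = rng.normal(size=(M, M))   # stand-in for the x-basis collocation matrix
T = rng.normal(size=(M, M))   # stand-in for the t-basis collocation matrix
U_true = rng.normal(size=(M, M))
Q = P @ U_true @ T            # stand-in right-hand side

A = np.kron(T.T, P)                              # (T^T kron P) vec(U) = vec(Q)
vecU = np.linalg.solve(A, Q.flatten(order="F"))  # column-major vectorization
U = vecU.reshape((M, M), order="F")
print(np.allclose(U, U_true))
```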

#### **4. Numerical Performance**

In this section, we present some examples of the problem proposed in Equations (1)–(3). The numerical solutions demonstrated here achieved a zero absolute error, independently of the number of collocation points, the damping parameter *μ*, and the fractional order *α*. We noticed that a numerical phenomenon of error cancellation underlies this method.

The generated system of algebraic equations given in Equation (15) is not simple, and its unknown is a matrix; however, for all the examples that we consider, the numerical solution does not differ from the exact solution at any collocation point, which makes this technique a special and powerful tool capable of achieving such an excellent order of accuracy.

**Example 1.** *Consider the equation*

$$\mathcal{D}_c^{\alpha}u(x,t) + \mu u_t(x,t) - u_{xx}(x,t) = Q(x,t),\tag{19}$$

*where,*

$$Q(x,t) = \frac{1}{\Gamma(2-\alpha)} \int_0^t \frac{u_{\tau\tau}(x,\tau)}{(t-\tau)^{2-\alpha}}\, d\tau, \quad 1 < \alpha \le 2,$$

*with the following initial and boundary conditions:*

$$u(x,0) = x, \quad u_t(x,0) = 0, \quad u(0,t) = (2-\alpha)t^2, \quad u(1,t) = 1 + (2-\alpha)t^2. \tag{20}$$

*The exact solution for this formulation is*

$$u_e(x,t) = x + (2-\alpha)t^2.$$

*Applying our algorithm, Figure 1 presents the exact solution (left) and the exact solution together with the numerical solution (middle and right) computed at M* = 8, *μ* = 1, *α* = 3/2*. The maximum absolute error of the numerical solution is reported by Mathematica as zero, and so it is less than the minimal machine number* 2.22507 × 10<sup>−308</sup>*, which means*

$$\max_{i,j} |u(x_i, t_j) - u_e(x_i, t_j)| < 2.22507 \times 10^{-308}, \quad i,j = 1,2,\dots,M.$$

*Figure 2 shows the visual representation of the values of elements in the matrix generated during solving the related system of algebraic equations for this example.*

**Figure 1.** The exact solution (**left**) and numerical solution (points) with exact solution (**middle** and **right**) computed for Example 1 when *M* = 8, *μ* = 1, *α* = 3/2.

**Figure 2.** The visual representation of the matrix coefficient computed for Example 1 when *M* = 8, *μ* = 1, *α* = 3/2.

**Example 2.** *Consider the equation*

$$\mathcal{D}_c^{\alpha}u(x,t) + \mu u_t(x,t) - u_{xx}(x,t) = 0,\tag{21}$$

*with the following initial and boundary conditions:*

$$u(x,0) = \frac{x^2}{2}, \quad u_t(x,0) = \frac{1}{\mu}, \quad u(0,t) = \frac{t}{\mu}, \quad u(1,t) = \frac{1}{2} + \frac{t}{\mu}. \tag{22}$$

*The exact solution for this formulation is*

$$u\_{\mathcal{E}}(\mathbf{x}, t) = \frac{\mathbf{x}^2}{2} + \frac{t}{\mu}.$$

*In Figure 3, we show the exact solution (left) and the exact solution together with the numerical solution (middle and right) computed at M* = 16, *μ* = 1, *α* = 19/10*. Again, the maximum absolute error of the numerical solution is recognized by Mathematica as zero. Thus, it is less than the minimal machine number* 2.22507 × 10<sup>−308</sup>*.*

**Figure 3.** The numerical solution (points) with exact solution computed for Example 2 when *M* = 16, *μ* = 1, *α* = 19/10.

Figure 4 shows the visual representation of the values of elements in the matrix generated during solving the related system of algebraic equations.

**Figure 4.** The visual representation of the matrix coefficient computed for Example 2 when *M* = 16, *μ* = 1, *α* = 19/10.

**Example 3.** *The numerical phenomenon of error cancellation also occurs for the wave equation. In this case, we have α* = 2*, so Equation* (1) *reduces to the common form of the wave equation given by*

$$u_{tt}(x,t) + \mu u_t(x,t) - u_{xx}(x,t) = Q(x,t), \tag{23}$$

*where,*

*Q*(*x*,*t*) = *μ*.

*The initial and boundary conditions are given as*

$$u(x,0) = x, \quad u_t(x,0) = 1, \quad u(0,t) = t, \quad u(1,t) = 1+t. \tag{24}$$

*The exact solution of this problem is ue*(*x*,*t*) = *x* + *t. In Figure 5, we present the exact solution (left) and the exact solution together with the numerical solution (middle and right) computed at M* = 8, *μ* = 1, *α* = 2*. The maximum absolute error of the numerical solution is zero in this case as well.*

**Figure 5.** The exact solution (**left**) and numerical solution (points) with exact solution (**middle** and **right**) computed for Example 3 with *M* = 8, *μ* = 1, *α* = 2.

**Example 4.** *The numerical phenomenon of error cancellation persists in the genuinely fractional case. Here, we choose α* = 3/2*, so Equation* (1) *keeps the form*

$$\mathcal{D}_c^{\alpha}u(x,t) + \mu u_t(x,t) - u_{xx}(x,t) = Q(x,t),\tag{25}$$


*where,*

$$Q(x,t) = \mu x.$$

*The initial and boundary conditions are given as*

$$u(x,0) = 0, \quad u_t(x,0) = x, \quad u(0,t) = 0, \quad u(1,t) = t. \tag{26}$$

*The exact solution of this problem is ue*(*x*,*t*) = *xt. In Figure 6, we present the exact solution (left) and the exact solution together with the numerical solution (middle and right) computed at M* = 8, *μ* = 1, *α* = 3/2*. The maximal absolute error of the numerical solution is zero. This result does not depend on the number of collocation points, nor on the parameters α and μ.*

**Figure 6.** The exact solution (**left**) and numerical solution (points) with exact solution (**middle** and **right**) computed for Example 4 with *M* = 8, *μ* = 1, *α* = 3/2.

**Example 5.** *Let us consider another example by involving the fractional parameter α in the function Q as follows*

$$\mathcal{D}_c^{\alpha}u(x,t) + \mu u_t(x,t) - u_{xx}(x,t) = \frac{2x(1-x)t^{2-\alpha}}{(2-\alpha)\Gamma(2-\alpha)} + 2tx(1-x) + 2t^2. \tag{27}$$

*The initial and boundary conditions are given as*

$$u(x,0) = 0, \quad u_t(x,0) = 0, \quad u(0,t) = 0, \quad u(1,t) = 0. \tag{28}$$


*This example has been considered and discussed for μ* = 1 *by many authors; see, e.g., [11,13,14]. The exact solution for this case has the form ue* = *t*<sup>2</sup>*x*(1 − *x*)*. Using our technique, we are able to solve it with a proper setting of the precision of the numerical computation. For instance, in Figure 7, we provide the exact solution and the numerical solution (points) computed with a machine precision of* 1.11022 × 10<sup>−16</sup> *(shown in Figure 8, left). Increasing the precision up to* 10<sup>−30</sup>*, we get the numerical solution with zero absolute error (as shown in Figure 8, right).*

**Figure 7.** The exact solution (**left**) and numerical solution (points) with exact solution (**middle** and **right**) computed for Example 5 for *M* = 8, *μ* = 1, *α* = 3/2.

**Figure 8.** The absolute error computed for Example 5 given that *M* = 8, *μ* = 1, *α* = 3/2 with machine precision (**left** sub-figure) and with double precision (**right** sub-figure).

#### **5. Numerical Technique for Nonlinear Fredholm and Volterra Integral Equation**

Let us now consider the following form of Volterra integral equation of the second kind

$$u(x) = g(x) + \int_0^x K(x,t,u(t))\,dt, \quad 0 \le x \le 1,\tag{29}$$

where *g* and *K* (the kernel) are known functions. It is well known that Equation (29) has a unique solution under the following condition [15]:

$$|K(x,t,u_1) - K(x,t,u_2)| \le L|u_1 - u_2|,\tag{30}$$

for all finite *u*1, *u*2 and 0 ≤ *t* ≤ *x* ≤ 1.

To solve Equation (29), we use Euler wavelets in the form of the vector Ψ*<sup>E</sup>* defined earlier. Then, we propose a numerical solution of the form

$$u(x) = A \cdot \Psi_E(x),\tag{31}$$

where the vector *A* = (*a*1, *a*2, ..., *aM*) can be computed using the collocation technique such that

$$M = 2^{1+n}, \quad n = 1, 2, \dots,$$

and Ψ*E*(*x*) is defined considering *ρ*(*j*) = 2.

To do this, we first transform the integral in Equation (29) by substituting

$$t = \frac{x}{2}(s+1), \quad -1 \le s \le 1.$$

Then, this integral turns into an integral with fixed limits, which can be approximated by a finite sum using the Gauss quadrature rule as follows:

$$\int_0^x K(x,t,u(t))\,dt = \frac{x}{2}\int_{-1}^1 K(x, x(s+1)/2, u(x(s+1)/2))\,ds \tag{32}$$

$$=\frac{x}{2}\sum_{i=1}^{2M} w_i K(x, x(s_i+1)/2, u(x(s_i+1)/2)) + \Delta,\tag{33}$$

where *wi* are the weights and *si* the nodes of the Gauss quadrature rule defined on (−1, 1), and Δ is the approximation error. Substituting Equations (31) and (33) into Equation (29) and using the assumed collocation points, we get a system of algebraic equations:

$$A \cdot \Psi_E(x_j) = g(x_j) + \frac{x_j}{2} \sum_{i=1}^{2M} w_i K(x_j, x_j(s_i+1)/2, A \cdot \Psi_E(x_j(s_i+1)/2)),\tag{34}$$

for *j* = 1, 2, ..., *M* and Δ = 0.
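The substitution and quadrature steps in Equations (32) and (33) can be sketched as follows, with `leggauss` supplying the nodes *si* and weights *wi* on (−1, 1); the helper name is ours:

```python
import numpy as np

def volterra_integral(f, x, m=8):
    # int_0^x f(t) dt = (x/2) int_{-1}^{1} f(x(s+1)/2) ds, by Gauss-Legendre quadrature
    s, w = np.polynomial.legendre.leggauss(m)
    t = x * (s + 1) / 2
    return x / 2 * np.sum(w * f(t))

# exact for polynomials of degree <= 2m - 1, so Delta = 0 here
print(volterra_integral(lambda t: t ** 3, 0.7), 0.7 ** 4 / 4)
```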

The system of nonlinear algebraic equations given in Equation (34) can be solved by several methods. In this work, we use Newton's iterative technique.

#### **6. Numerical Examples**

The parameters of the Gauss quadrature rule are not exact; however, they can be calculated with a high precision of 10<sup>−60</sup>, which makes it possible to solve the system in Equation (34) with an absolute error of 0 × 10<sup>−31</sup>. We can consider this a numerical solution with zero absolute error without further estimation. For the examples in this section, the numerical solution has an absolute error of 0 × 10<sup>−31</sup>. Furthermore, we consider some intermediate results computed with a machine precision of 1.11022 × 10<sup>−16</sup>.

**Example 6.** *This example has been discussed by many authors, see for example [16–18]:*

$$u(x) = \frac{x^2}{2}(1 + \cos x^2) + \int_0^x t x^2 \sin(u(t))\,dt. \tag{35}$$

*The exact solution of this equation is u* = *x*<sup>2</sup>*. Using an iterative multistep kernel method [16], it is possible to obtain a numerical solution of Equation (35) with an absolute error of* 7.8974 × 10<sup>−10</sup>*, while the best result of the method proposed in [17] has an absolute error of* 10<sup>−6</sup>*. Using our method, we get a numerical solution with a maximum absolute error of* 2.77556 × 10<sup>−17</sup> *computed with machine precision (Figure 9, left and middle), and of 0* × 10<sup>−31</sup> *computed with double precision and with a precision of* 10<sup>−60</sup> *for the Gauss quadrature parameters wi*, *si (Figure 9, right). Note that we have used ρ*(*j*) = 2 *for this case.*

**Figure 9.** The numerical solution (points) with exact solution (**left**) and the absolute error computed for Example 6 with machine precision (**middle**), and with double precision (**right**).
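For comparison, Equation (35) can also be attacked with plain Picard iteration and trapezoidal quadrature on a uniform grid — a much cruder scheme than the Euler-wavelet collocation, shown here only to make the fixed-point structure of the problem concrete:

```python
import math

N = 200
xs = [i / N for i in range(N + 1)]
u = [0.0] * (N + 1)

def g(x):
    # forcing term of Eq. (35)
    return 0.5 * x * x * (1 + math.cos(x * x))

# Picard iteration: u_{m+1}(x) = g(x) + x^2 * int_0^x t sin(u_m(t)) dt (trapezoid rule)
for _ in range(30):
    integ = [0.0] * (N + 1)
    for i in range(1, N + 1):
        h = xs[i] - xs[i - 1]
        fa = xs[i - 1] * math.sin(u[i - 1])
        fb = xs[i] * math.sin(u[i])
        integ[i] = integ[i - 1] + 0.5 * h * (fa + fb)
    u = [g(x) + x * x * integ[i] for i, x in enumerate(xs)]

err = max(abs(u[i] - xs[i] ** 2) for i in range(N + 1))
print(err)  # error at trapezoid accuracy, far from the zero error reported above
```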

**Example 7.** *Now, we consider the following nonlinear Fredholm integral Equation [19,20]:*

$$u(\mathbf{x}) = -\mathbf{x}^2 - \frac{\mathbf{x}}{3}(2\sqrt{2} - 1) + 2 + \int\_0^1 \mathbf{x}t\sqrt{u(t)}dt\tag{36}$$

*The exact solution for this problem is u*(*x*) = 2 − *x*<sup>2</sup>*. This problem can be solved by the Haar wavelets method as in [19,20] with absolute errors of* 3.1 × 10<sup>−5</sup> *and* 4.2 × 10<sup>−6</sup>*, respectively, where 128 collocation points were used to reach those bounds. With our method, using only 16 collocation points, we have achieved a numerical solution with an absolute error of* 6.64685 × 10<sup>−20</sup> *computed using machine precision, and of 0* × 10<sup>−31</sup> *computed using double precision, as shown in Figure 10.*

**Figure 10.** The numerical solution (points) with exact solution (**left**) and the absolute error computed for Example 7 with machine precision (**middle**), and with double precision (**right**).
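Equation (36) has a useful special structure: the unknown enters the integral only through the scalar c = ∫₀¹ t√(u(t)) dt, so that u(x) = a(x) + c·x with a(x) the known part. A fixed-point sketch exploiting this reduction (names are ours; this is not the authors' wavelet method):

```python
import math

def a(x):
    # known part of Eq. (36): u(x) = a(x) + c * x
    return -x * x - x * (2 * math.sqrt(2) - 1) / 3 + 2

c = 0.0
for _ in range(60):
    # update c = int_0^1 t sqrt(a(t) + c t) dt by the trapezoid rule
    n = 1000
    h = 1.0 / n
    f = lambda t: t * math.sqrt(max(a(t) + c * t, 0.0))
    c = h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(1.0))

# the exact solution u = 2 - x^2 corresponds to c = (2 * sqrt(2) - 1) / 3
print(c, (2 * math.sqrt(2) - 1) / 3)
```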

**Example 8.** *We consider now the following nonlinear Volterra integral equation based on two parameters such that*

$$u(\mathbf{x}) = \mathbf{x}^2 - \frac{\mathbf{x}^{5 + \beta + \gamma}}{5 + \gamma} + \int\_0^\mathbf{x} \mathbf{x}^\beta t^\gamma u^2(t) dt. \tag{37}$$

*The exact solution of this equation is u* = *x*<sup>2</sup>*. Numerical experiments with different β*, *γ showed that, for any integer β*, *γ* = 0, 1, 2, ..., 27*, the numerical solution has zero absolute error at all collocation points for n* = 3*. For integer γ* = 0, 1, 2, ..., 27 *and for some* 1 ≤ *β* ≤ 27*, including π and e, the numerical solution has zero absolute error. For non-integer β*, *γ* > 1*, the absolute error varies from zero up to* 10<sup>−15</sup>*. However, we cannot check every β*, *γ due to numerical limitations. The graphs of the exact, numerical, and error results are depicted in Figure 11.*

**Figure 11.** The numerical solution (points) with exact solution (**left**) and the absolute error computed for Example 8 (**right**).
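The forcing term of Equation (37) is constructed so that u(x) = x² balances the equation exactly; this consistency can be checked by direct quadrature (the helper is ours):

```python
import math

def residual(beta, gamma, x, n=4000):
    # |g(x) + int_0^x x^beta t^gamma (t^2)^2 dt - x^2| with g from Eq. (37); should vanish
    h = x / n
    integ = h * sum(x ** beta * ((i + 0.5) * h) ** (gamma + 4) for i in range(n))
    g = x ** 2 - x ** (5 + beta + gamma) / (5 + gamma)
    return abs(g + integ - x ** 2)

worst = max(residual(bt, gm, 0.8) for bt, gm in [(0, 0), (2, 1), (math.pi, 3)])
print(worst)  # at quadrature accuracy
```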

**Example 9.** *Finally, we consider the generalized form of Equation* (35) *based on two parameters as*

$$u(x) = x^2 - \frac{{}_1F_2\!\left(\frac{3+\gamma}{4}; \frac{3}{2}, \frac{7+\gamma}{4}; -\frac{x^4}{4}\right)}{3+\gamma}\, x^{3+\beta+\gamma} + \int_0^x x^{\beta} t^{\gamma} \sin(u(t))\,dt,\tag{38}$$

*where* <sub>1</sub>*F*<sub>2</sub>(*a*; *b*; *z*) *is the generalized hypergeometric function. The exact solution of this formulation is u* = *x*<sup>2</sup>*. Note that Equation* (35) *is a special case of Equation* (38) *when β* = 2, *γ* = 1*. The numerical experiments for integer β*, *γ* = 0, 1, 2, ..., 80 *demonstrate a numerical solution with zero absolute error up to β* = 30, *γ* = 30*; beyond that, the maximum absolute error increases from* 9 × 10<sup>−50</sup> *for β* = *γ* = 31 *to* 1.01 × 10<sup>−28</sup> *for β* = *γ* = 80*. For any integer γ* = 0, 1, 2, ..., 30 *and real* 0 ≤ *β* ≤ 30*, the numerical solution has zero absolute error at all tested points, including π and e. We present the exact, numerical, and error results in Figure 12.*

**Figure 12.** The numerical solution (points) with exact solution (**left**) and the absolute error computed for Example 9 (**right**).

#### **7. Conclusions**

In this work, a novel numerical method based on wavelet systems generated via Euler polynomials has been proposed. The collocation algorithm based on Euler wavelets has been applied to the time-fractional diffusion-wave equation and to nonlinear Fredholm and Volterra integral equations. We used truncated representations based on Euler wavelets to convert the proposed equations into systems of algebraic equations under a specific discretization. The reduced system was converted to matrix form and simulated using the Mathematica software.

We numerically solved a series of examples related to the proposed equations, where the numerical results achieved exceptionally small absolute errors compared with other numerical schemes in the literature. We provided graphical illustrations to show the efficiency of the method.

**Author Contributions:** Conceptualization, M.M. and A.T.; formal analysis, M.M.; investigation, M.M.; resources, M.M. and A.T.; data curation, A.T.; writing—original draft preparation, A.T.; writing—review and editing, M.M. and M.A.; visualization, M.M. and A.T.; supervision, M.M.; project administration, M.M.; funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Abu Dhabi University research fund Grant number/center 19300514.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We would like to thank the anonymous reviewers for their valuable comments and suggestions.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Sequential Riemann–Liouville and Hadamard–Caputo Fractional Differential Systems with Nonlocal Coupled Fractional Integral Boundary Conditions**

**Chanakarn Kiataramkul 1,†, Weera Yukunthorn 2,†, Sotiris K. Ntouyas 3,4,† and Jessada Tariboon 1,\***


**Abstract:** In this paper, we initiate the study of existence of solutions for a fractional differential system which contains mixed Riemann–Liouville and Hadamard–Caputo fractional derivatives, complemented with nonlocal coupled fractional integral boundary conditions. We derive sufficient conditions for the existence and uniqueness of solutions of the considered system by using standard fixed point theorems, such as the Banach contraction mapping principle and the Leray–Schauder alternative. Numerical examples illustrating the obtained results are also presented.

**Keywords:** coupled systems; Riemann–Liouville fractional derivative; Hadamard–Caputo fractional derivative; nonlocal boundary conditions; existence; fixed point

#### **1. Introduction**

Fractional differential equations have played a very important role in almost all branches of the applied sciences because they are considered a valuable tool to model many real-world problems. For details and applications, we refer the reader to the monographs [1–11]. The study of coupled systems of fractional differential equations is also important, as such systems appear in various problems of the applied sciences; see [12–16].

On the other hand, multi-term fractional differential equations have also gained considerable importance in view of their occurrence in mathematical models of certain real-world problems, such as the behavior of real materials [17], continuum and statistical mechanics [18], and an inextensible pendulum with fractional damping terms [19].

Fractional differential equations involve several kinds of fractional derivatives, such as the Riemann–Liouville fractional derivative, the Caputo fractional derivative, the Hadamard fractional derivative, and so on. In the literature, there are many papers studying existence and uniqueness results for boundary value problems and coupled systems of fractional differential equations involving mixed types of fractional derivatives; see [20–29]. In [23], the following boundary value problem was considered:

$$\begin{cases} {}^{RL}D^{q}\left[{}^{C}D^{r}x(t) - g(t, x(t))\right] = f(t, x(t)), & 0 < t < T,\\[4pt] x(\eta) = \phi(x), \quad I^{p}x(T) = h(x), \end{cases} \tag{1}$$

**Citation:** Kiataramkul, C.; Yukunthorn, W.; Ntouyas, S.K.; Tariboon, J. Sequential Riemann–Liouville and Hadamard–Caputo Fractional Differential Systems with Nonlocal Coupled Fractional Integral Boundary Conditions. *Axioms* **2021**, *10*, 174. https://doi.org/10.3390/ axioms10030174

Academic Editor: Jorge E. Macías Díaz

Received: 30 June 2021 Accepted: 30 July 2021 Published: 31 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

where *<sup>RL</sup>D<sup>q</sup>*, *<sup>C</sup>D<sup>r</sup>* are the Riemann–Liouville and Caputo fractional derivatives of orders *q*, *r* ∈ (0, 1), respectively, *I<sup>p</sup>* is the Riemann–Liouville fractional integral of order *p* > 0, *f*, *g* : *J* × R → R are given continuous functions, and *φ*, *h* : *C*(*J*, R) → R are two given functionals.

In [24], the authors initiated the study of a coupled system of sequential mixed Caputo and Hadamard fractional differential equations supplemented with coupled separated boundary conditions. More precisely, existence and uniqueness results are established in [24] for the following coupled system:

$$\begin{cases} {}^{C}D^{p_1}\,{}^{H}D^{q_1}x(t) = f(t, x(t), y(t)), & t \in [a, b],\\ {}^{H}D^{q_2}\,{}^{C}D^{p_2}y(t) = g(t, x(t), y(t)), & t \in [a, b],\\ \alpha_1 x(a) + \alpha_2\,{}^{C}D^{p_2}y(a) = 0, \quad \beta_1 x(b) + \beta_2\,{}^{C}D^{p_2}y(b) = 0,\\ \alpha_3 y(a) + \alpha_4\,{}^{H}D^{q_1}x(a) = 0, \quad \beta_3 y(b) + \beta_4\,{}^{H}D^{q_1}x(b) = 0, \end{cases} \tag{2}$$

where *<sup>C</sup>D<sup>p<sub>i</sub></sup>* and *<sup>H</sup>D<sup>q<sub>i</sub></sup>* denote the Caputo and Hadamard fractional derivatives of orders *p<sub>i</sub>* and *q<sub>i</sub>*, respectively, 0 < *p<sub>i</sub>*, *q<sub>i</sub>* ≤ 1, *i* = 1, 2, *f*, *g* : [*a*, *b*] × R × R → R are nonlinear continuous functions, *a* > 0, *α<sub>i</sub>* ∈ R \ {0}, *β<sub>i</sub>* ∈ R, *i* = 1, . . . , 4.

In [25], the existence and uniqueness of solutions for neutral fractional order coupled systems containing mixed Caputo and Riemann–Liouville sequential fractional derivatives were studied, complemented with nonlocal multi-point and Riemann–Stieltjes integral multi-strip conditions of the form:

$$\begin{cases} {}^{c}D^{q}\big({}^{RL}D^{p}x(t) + f(t, x(t))\big) = g(t, x(t), y(t)), & t \in (0, 1),\\ {}^{c}D^{q_1}\big({}^{RL}D^{p_1}y(t) + f_1(t, y(t))\big) = g_1(t, x(t), y(t)), & t \in (0, 1),\\ x(0) = 0, \quad bx(1) = a\displaystyle\int_0^1 y(s)\,dH(s) + \sum_{i=1}^{n}\alpha_i\int_{\xi_i}^{\eta_i} y(s)\,ds,\\ y(0) = 0, \quad b_1 y(1) = a_1\displaystyle\int_0^1 x(s)\,dH(s) + \sum_{j=1}^{m}\beta_j\int_{\theta_j}^{\zeta_j} x(s)\,ds, \end{cases} \tag{3}$$

where *<sup>RL</sup>D<sup>p</sup>*, *<sup>RL</sup>D<sup>p<sub>1</sub></sup>* and *<sup>c</sup>D<sup>q</sup>*, *<sup>c</sup>D<sup>q<sub>1</sub></sup>* denote the Riemann–Liouville and Caputo fractional derivatives of orders *p*, *p*<sub>1</sub> and *q*, *q*<sub>1</sub>, respectively, 0 < *p*, *p*<sub>1</sub>, *q*, *q*<sub>1</sub> ≤ 1, with 1 < *p* + *q* ≤ 2, 1 < *p*<sub>1</sub> + *q*<sub>1</sub> ≤ 2, *f*, *f*<sub>1</sub> and *g*, *g*<sub>1</sub> are given continuous functions, 0 < *ξ<sub>i</sub>* < *η<sub>i</sub>* < 1, 0 < *θ<sub>j</sub>* < *ζ<sub>j</sub>* < 1, *α<sub>i</sub>*, *β<sub>j</sub>* ∈ R, *i* = 1, 2, . . . , *n*, *j* = 1, 2, . . . , *m*, *a*, *a*<sub>1</sub>, *b*, *b*<sub>1</sub> ∈ R, and *H*(·) is a function of bounded variation.

To the best of the authors' knowledge, there are some papers dealing with sequential mixed-type fractional derivatives, but we could not find in the literature any papers dealing with coupled systems of sequential Riemann–Liouville and Hadamard–Caputo fractional differential equations. Motivated by this fact, and to fill this gap, in the present paper we investigate the existence and uniqueness of solutions for the following coupled system of sequential Riemann–Liouville and Hadamard–Caputo fractional differential equations supplemented with nonlocal coupled fractional integral boundary conditions:

$$\begin{cases} {}^{RL}D^{p_1}\big({}^{HC}D^{q_1}x\big)(t) = f(t, x(t), y(t)), & t \in [0, T],\\ {}^{RL}D^{p_2}\big({}^{HC}D^{q_2}y\big)(t) = g(t, x(t), y(t)), & t \in [0, T],\\ {}^{HC}D^{q_1}x(0) = 0, \quad x(T) = \displaystyle\sum_{i=1}^{m}\alpha_i\,{}^{RL}I^{\beta_i}y(\xi_i),\\ {}^{HC}D^{q_2}y(0) = 0, \quad y(T) = \displaystyle\sum_{j=1}^{k}\lambda_j\,{}^{RL}I^{\delta_j}x(\eta_j), \end{cases} \tag{4}$$

where *<sup>RL</sup>D<sup>p<sub>r</sub></sup>* and *<sup>HC</sup>D<sup>q<sub>r</sub></sup>* are the Riemann–Liouville and Hadamard–Caputo fractional derivatives of orders *p<sub>r</sub>* and *q<sub>r</sub>*, respectively, 0 < *p<sub>r</sub>*, *q<sub>r</sub>* < 1, *r* = 1, 2, *f*, *g* : [0, *T*] × R<sup>2</sup> → R are nonlinear continuous functions, *<sup>RL</sup>I<sup>φ</sup>* is the Riemann–Liouville fractional integral of order *φ* > 0, *φ* ∈ {*β<sub>i</sub>*, *δ<sub>j</sub>*}, and *α<sub>i</sub>*, *λ<sub>j</sub>* ∈ R, *ξ<sub>i</sub>*, *η<sub>j</sub>* ∈ (0, *T*), *i* = 1, . . . , *m*, *j* = 1, . . . , *k* are given constants.

Let us compare the coupled system (4) with the coupled system (2) studied in [24].


We also notice that the conditions *HCDq*<sup>1</sup> *x*(0) = 0 and *HCDq*<sup>2</sup> *y*(0) = 0 are necessary for the well-posedness of the problem.

In the present study, by using standard tools from fixed point theory, we establish existence and uniqueness results for the coupled system (4). The Banach contraction mapping principle is used to obtain the existence and uniqueness result, while an existence result is derived via the Leray–Schauder alternative.

The rest of the paper is organized as follows. In Section 2, some basic definitions and lemmas from fractional calculus are recalled. In addition, an auxiliary lemma, concerning a linear variant of (4), which plays a key role in obtaining the main results, is proved. The main results are presented in Section 3, which also includes examples illustrating the basic results. We emphasize that our results are new and significantly enhance the existing literature on the topic, and, as far as we know, they are the first results concerning a coupled system with sequential mixed Riemann–Liouville and Hadamard–Caputo fractional derivatives.

#### **2. Preliminaries**

In this section, we introduce some notations and definitions of fractional calculus [2,30] and present preliminary results needed in our proofs later.

**Definition 1.** *The Riemann–Liouville fractional derivative of order p* > 0 *of a continuous function <sup>f</sup>* : (0, <sup>∞</sup>) → R *is defined by*

$${}^{RL}D^{p}f(t) = \frac{1}{\Gamma(n-p)}\left(\frac{d}{dt}\right)^{n}\int_0^t (t-s)^{n-p-1}f(s)\,ds, \quad n-1 < p < n,$$

*where n* = [*p*] + 1*,* [*p*] *denotes the integer part of the real number p, and* Γ *is the Gamma function defined by* $\Gamma(p) = \int_0^{\infty} e^{-s}s^{p-1}\,ds$*.*

**Definition 2.** *The Riemann–Liouville fractional integral of order <sup>p</sup> of a function <sup>f</sup>* : (0, <sup>∞</sup>) → R*, is defined as*

$${}^{RL}I^{p}f(t) = \frac{1}{\Gamma(p)}\int_0^t (t-s)^{p-1}f(s)\,ds, \quad p > 0,$$

*provided the right side is pointwise defined on* R+*.*
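Definition 2 admits a simple numerical check: for *f*(*t*) = *t*<sup>*a*</sup> the closed form <sup>RL</sup>*I*<sup>p</sup>*t*<sup>*a*</sup> = Γ(*a* + 1)/Γ(*a* + *p* + 1) *t*<sup>*a*+*p*</sup> holds. The following minimal Python sketch (illustrative only, not part of the paper) reproduces it by quadrature after a substitution that removes the kernel singularity:

```python
import math

def rl_integral_power(p, a, t, n=4000):
    """Riemann-Liouville integral RL I^p of f(s) = s**a at t, by quadrature.

    Substituting s = t*v and then 1 - v = w**(1/p) turns
    (1/Gamma(p)) * int_0^t (t - s)**(p-1) * s**a ds
    into t**(p+a) / (p*Gamma(p)) * int_0^1 (1 - w**(1/p))**a dw,
    whose integrand is bounded, so a composite midpoint rule works.
    """
    h = 1.0 / n
    acc = sum((1.0 - ((i + 0.5) * h) ** (1.0 / p)) ** a for i in range(n))
    return t ** (p + a) / (p * math.gamma(p)) * acc * h

# Closed form: RL I^p t^a = Gamma(a+1) / Gamma(a+p+1) * t**(a+p)
p, a, t = 0.5, 2.0, 1.0
print(rl_integral_power(p, a, t),
      math.gamma(a + 1) / math.gamma(a + p + 1) * t ** (a + p))
```

The two printed values agree to within the midpoint-rule error.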

**Definition 3.** *For an at least n-times differentiable function <sup>g</sup>* : (0, <sup>∞</sup>) → R, *the Hadamard– Caputo derivative of fractional order q* > 0 *is defined as*

$${}^{HC}D^{q}g(t) = \frac{1}{\Gamma(n-q)}\int_0^t \left(\log\frac{t}{s}\right)^{n-q-1}\delta^{n}g(s)\,\frac{ds}{s}, \quad n-1 < q < n,\ n = [q] + 1,$$

*where δ* = *t <sup>d</sup> dt and* log(·) = log*e*(·)*.*

**Definition 4.** *The Hadamard fractional integral of order q* > 0 *is defined as*

$${}^{H}I^{q}g(t) = \frac{1}{\Gamma(q)}\int_0^t \left(\log\frac{t}{s}\right)^{q-1}g(s)\,\frac{ds}{s},$$

*provided the integral exists.*

**Lemma 1** (see [2])**.** *Let p* > 0*. Then, for y* ∈ *C*(0, *T*) ∩ *L*(0, *T*)*, it holds that*

$${}^{RL}I^p \left( {}^{RL}D^p y \right)(t) = y(t) + c\_1 t^{p-1} + c\_2 t^{p-2} + \dots + c\_n t^{p-n},$$

*where ci* ∈ R*, i* = 1, 2, . . . , *n and n* − <sup>1</sup> < *<sup>p</sup>* < *n.*

**Lemma 2** ([30])**.** *Let* $u \in AC_{\delta}^{n}[0,T]$ *or* $C_{\delta}^{n}[0,T]$ *and* $q \in \mathbb{C}$*, where* $X_{\delta}^{n}[0,T] = \{g : [0,T] \to \mathbb{C} : \delta^{n-1}g(t) \in X[0,T]\}$*. Then, we have*

$${}^{H}I^{q}({}^{HC}D^{q})u(t) = u(t) + c\_{0} + c\_{1} \log t + c\_{2} (\log t)^{2} + \dots + c\_{n-1} (\log t)^{n-1},$$

*where ci* ∈ R*, i* = 0, 1, 2, . . . , *<sup>n</sup>* − <sup>1</sup> (*<sup>n</sup>* = [*q*] + <sup>1</sup>)*.*

**Lemma 3** ([2], p. 113)**.** *Let q* > 0 *and β* > 0 *be given constants. Then, the following formula*

$${}^{H}I^{q}t^{\beta} = \beta^{-q}t^{\beta},$$

*holds.*
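Lemma 3 can also be verified numerically: the substitution *u* = log(*t*/*s*) turns the Hadamard integral of *t*<sup>β</sup> into *t*<sup>β</sup>/Γ(*q*) ∫<sub>0</sub><sup>∞</sup> *u*<sup>*q*−1</sup>e<sup>−β*u*</sup> d*u* = β<sup>−*q*</sup>*t*<sup>β</sup>. A minimal Python sketch (illustrative only, not part of the paper), using a truncated midpoint rule:

```python
import math

def hadamard_integral_power(q, beta, t, n=200_000, cutoff=60.0):
    """Hadamard integral H I^q of s**beta, evaluated at t by quadrature.

    The substitution u = log(t/s) gives
    t**beta / Gamma(q) * int_0^inf u**(q-1) * exp(-beta*u) du,
    truncated at u = cutoff/beta and handled by the midpoint rule.
    """
    upper = cutoff / beta
    h = upper / n
    acc = sum(((i + 0.5) * h) ** (q - 1.0) * math.exp(-beta * (i + 0.5) * h)
              for i in range(n))
    return t ** beta / math.gamma(q) * acc * h

# Lemma 3 predicts H I^q t^beta = beta**(-q) * t**beta
print(hadamard_integral_power(2.0, 3.0, 1.5), 3.0 ** -2.0 * 1.5 ** 3)
```

The two printed values agree to within the truncation and quadrature error.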

Next, the integral equations are obtained by transformation of a linear variant of problem (4). For convenience in computation, we set some constants

$$\Omega_1 = \sum_{i=1}^{m}\frac{\alpha_i\,\xi_i^{\beta_i}}{\Gamma(\beta_i+1)}, \quad \Omega_2 = \sum_{j=1}^{k}\frac{\lambda_j\,\eta_j^{\delta_j}}{\Gamma(\delta_j+1)},$$

and Λ = Ω<sub>1</sub>Ω<sub>2</sub> − 1 ≠ 0.

**Lemma 4.** *Let f*<sup>∗</sup>, *g*<sup>∗</sup> ∈ *C*([0, *T*], R) *be two given functions. Then, the linear variant of problem* (4)*, the system of sequential Riemann–Liouville and Hadamard–Caputo fractional differential equations*

$$\begin{cases} {}^{RL}D^{p_1}\big({}^{HC}D^{q_1}x\big)(t) = f^*(t), & t \in [0, T],\\ {}^{RL}D^{p_2}\big({}^{HC}D^{q_2}y\big)(t) = g^*(t), & t \in [0, T],\\ {}^{HC}D^{q_1}x(0) = 0, \quad x(T) = \displaystyle\sum_{i=1}^{m}\alpha_i\,{}^{RL}I^{\beta_i}y(\xi_i),\\ {}^{HC}D^{q_2}y(0) = 0, \quad y(T) = \displaystyle\sum_{j=1}^{k}\lambda_j\,{}^{RL}I^{\delta_j}x(\eta_j), \end{cases} \tag{5}$$

*can be written as the integral equations*

$$\begin{split} x(t) &= -\frac{1}{\Lambda}\sum_{i=1}^{m}\alpha_i\,{}^{RL}I^{\beta_i}\Big({}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)\Big)(\xi_i) + \frac{1}{\Lambda}\,{}^{H}I^{q_1}\big({}^{RL}I^{p_1}f^*\big)(T)\\ &\quad + \frac{\Omega_1}{\Lambda}\,{}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)(T) - \frac{\Omega_1}{\Lambda}\sum_{j=1}^{k}\lambda_j\,{}^{RL}I^{\delta_j}\Big({}^{H}I^{q_1}\big({}^{RL}I^{p_1}f^*\big)\Big)(\eta_j)\\ &\quad + {}^{H}I^{q_1}\big({}^{RL}I^{p_1}f^*\big)(t), \end{split} \tag{6}$$

*and*

$$\begin{split} y(t) &= -\frac{\Omega_2}{\Lambda}\sum_{i=1}^{m}\alpha_i\,{}^{RL}I^{\beta_i}\Big({}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)\Big)(\xi_i) + \frac{\Omega_2}{\Lambda}\,{}^{H}I^{q_1}\big({}^{RL}I^{p_1}f^*\big)(T)\\ &\quad + \frac{1}{\Lambda}\,{}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)(T) - \frac{1}{\Lambda}\sum_{j=1}^{k}\lambda_j\,{}^{RL}I^{\delta_j}\Big({}^{H}I^{q_1}\big({}^{RL}I^{p_1}f^*\big)\Big)(\eta_j)\\ &\quad + {}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)(t). \end{split} \tag{7}$$

**Proof.** For *t* ∈ [0, *T*] and by taking the Riemann–Liouville fractional integral of order *p*<sup>1</sup> to the first equation of (5), we obtain

$${}^{HC}D^{q\_1}x(t) = c\_1t^{p\_1-1} + {}^{RL}I^{p\_1}f^\*(t), \quad c\_1 \in \mathbb{R}.\tag{8}$$

Similarly, for the second equation of (5), we have

$${}^{HC}D^{q\_2}y(t) = d\_1t^{p\_2-1} + {}^{RL}I^{p\_2}g^\*(t), \quad d\_1 \in \mathbb{R}.\tag{9}$$

Since 0 < *pr* < 1, *r* = 1, 2, the conditions *HCDq*<sup>1</sup> *x*(0) = 0 and *HCDq*<sup>2</sup> *y*(0) = 0 imply *c*<sup>1</sup> = 0 and *d*<sup>1</sup> = 0, respectively. Applying the Hadamard fractional integral of orders *q*<sup>1</sup> and *q*<sup>2</sup> to (8) and (9), respectively, and substituting the values of *c*1, *d*1, we get

$$\mathbf{x}(t) = \mathbf{c}\_0 + \, ^H I^{q\_1} \left( ^{RL}I^{p\_1} f^\* \right)(t), \tag{10}$$

and

$$y(t) = d_0 + {}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)(t). \tag{11}$$

Now, we consider the terms

$$\sum_{i=1}^{m}\alpha_i\,{}^{RL}I^{\beta_i}y(\xi_i) = d_0\sum_{i=1}^{m}\frac{\alpha_i\,\xi_i^{\beta_i}}{\Gamma(\beta_i+1)} + \sum_{i=1}^{m}\alpha_i\,{}^{RL}I^{\beta_i}\Big({}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)\Big)(\xi_i) \tag{12}$$

and

$$\sum_{j=1}^{k}\lambda_j\,{}^{RL}I^{\delta_j}x(\eta_j) = c_0\sum_{j=1}^{k}\frac{\lambda_j\,\eta_j^{\delta_j}}{\Gamma(\delta_j+1)} + \sum_{j=1}^{k}\lambda_j\,{}^{RL}I^{\delta_j}\Big({}^{H}I^{q_1}\big({}^{RL}I^{p_1}f^*\big)\Big)(\eta_j). \tag{13}$$

Consequently, by (10)–(13) and boundary fractional integral conditions in (5), it follows that

$$\begin{split} c_0 &= -\frac{1}{\Lambda}\sum_{i=1}^{m}\alpha_i\,{}^{RL}I^{\beta_i}\Big({}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)\Big)(\xi_i) + \frac{1}{\Lambda}\,{}^{H}I^{q_1}\big({}^{RL}I^{p_1}f^*\big)(T)\\ &\quad + \frac{\Omega_1}{\Lambda}\,{}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)(T) - \frac{\Omega_1}{\Lambda}\sum_{j=1}^{k}\lambda_j\,{}^{RL}I^{\delta_j}\Big({}^{H}I^{q_1}\big({}^{RL}I^{p_1}f^*\big)\Big)(\eta_j), \end{split}$$

and

$$\begin{split} d_0 &= -\frac{\Omega_2}{\Lambda}\sum_{i=1}^{m}\alpha_i\,{}^{RL}I^{\beta_i}\Big({}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)\Big)(\xi_i) + \frac{\Omega_2}{\Lambda}\,{}^{H}I^{q_1}\big({}^{RL}I^{p_1}f^*\big)(T)\\ &\quad + \frac{1}{\Lambda}\,{}^{H}I^{q_2}\big({}^{RL}I^{p_2}g^*\big)(T) - \frac{1}{\Lambda}\sum_{j=1}^{k}\lambda_j\,{}^{RL}I^{\delta_j}\Big({}^{H}I^{q_1}\big({}^{RL}I^{p_1}f^*\big)\Big)(\eta_j). \end{split}$$

Substituting the values of *c*<sup>0</sup> and *d*<sup>0</sup> in (10) and (11), we obtain integral equations in (6) and (7), respectively, as desired.

The converse follows by direct computation. This completes the proof.

Next, we establish formulas for multiple fractional integrals of Riemann–Liouville and Hadamard types.

**Lemma 5.** *Let a*, *b*, *c* > 0 *be constants. Then, we have* (*i*)

$${}^{H}I^{b}\left({}^{RL}I^{a}(1)\right)(t) = \frac{a^{-b}t^{a}}{\Gamma(a+1)}.$$

(*ii*)

$${}^{RL}I^{c}\left({}^{H}I^{b}\left({}^{RL}I^{a}(1)\right)\right)(t) = \frac{a^{-b}}{\Gamma(a+c+1)}\,t^{a+c}.$$

**Proof.** Since ${}^{RL}I^{a}(1) = \frac{t^{a}}{\Gamma(a+1)}$, we have

$${}^{H}I^{b}\left({}^{RL}I^{a}(1)\right)(t) = \frac{1}{\Gamma(a+1)}\,{}^{H}I^{b}t^{a} = \frac{a^{-b}t^{a}}{\Gamma(a+1)}, \tag{14}$$

by using Lemma 3, and (i) is proved. To prove (ii), taking the Riemann–Liouville fractional integral of order *c* > 0 in (14), we have

$${}^{RL}I^{c}\left({}^{H}I^{b}\left({}^{RL}I^{a}(1)\right)\right)(t) = \frac{a^{-b}}{\Gamma(a+1)}\,{}^{RL}I^{c}t^{a} = \frac{a^{-b}}{\Gamma(a+c+1)}\,t^{a+c},$$

from ${}^{RL}I^{c}t^{a} = \frac{\Gamma(a+1)}{\Gamma(a+c+1)}\,t^{a+c}$. The proof is completed.

**Corollary 1.** *Let the constants p<sub>r</sub>*, *q<sub>r</sub>, r* = 1, 2*, β<sub>i</sub>*, *ξ<sub>i</sub>*, *δ<sub>j</sub>*, *η<sub>j</sub> be as defined in problem* (4)*. Then, from Lemma 5, we have*

$$\begin{aligned} {}^{H}I^{q_1}\big({}^{RL}I^{p_1}1\big)(T) &= \frac{p_1^{-q_1}T^{p_1}}{\Gamma(p_1+1)},\\ {}^{H}I^{q_2}\big({}^{RL}I^{p_2}1\big)(T) &= \frac{p_2^{-q_2}T^{p_2}}{\Gamma(p_2+1)},\\ {}^{RL}I^{\beta_i}\Big({}^{H}I^{q_2}\big({}^{RL}I^{p_2}1\big)\Big)(\xi_i) &= \frac{p_2^{-q_2}}{\Gamma(p_2+\beta_i+1)}\,\xi_i^{p_2+\beta_i},\\ {}^{RL}I^{\delta_j}\Big({}^{H}I^{q_1}\big({}^{RL}I^{p_1}1\big)\Big)(\eta_j) &= \frac{p_1^{-q_1}}{\Gamma(p_1+\delta_j+1)}\,\eta_j^{p_1+\delta_j}, \end{aligned}$$

*which will be used in the next section.*
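As a concrete consistency check of Lemma 4, take constant right-hand sides *f*<sup>∗</sup> = *g*<sup>∗</sup> = 1: every fractional integral in (6) and (7) then reduces to a closed form from Lemma 5, and the constants *c*<sub>0</sub>, *d*<sub>0</sub> from the proof must satisfy both boundary conditions of (5). A minimal Python sketch with a hypothetical parameter set (*m* = *k* = 1; all values illustrative, not taken from the paper):

```python
import math

g = math.gamma
# Hypothetical data (illustrative only): m = k = 1, T = 1,
# p1 = p2 = q1 = q2 = 1/2, alpha1 = lambda1 = 1, beta1 = delta1 = 1,
# xi1 = eta1 = 1/2, and constant right-hand sides f* = g* = 1.
p1 = p2 = q1 = q2 = 0.5
T, alpha1, lam1, beta1, delta1, xi1, eta1 = 1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5

Omega1 = alpha1 * xi1 ** beta1 / g(beta1 + 1)
Omega2 = lam1 * eta1 ** delta1 / g(delta1 + 1)
Lam = Omega1 * Omega2 - 1          # must be nonzero

# Closed forms from Lemma 5 with f* = g* = 1:
A1 = p1 ** -q1 * T ** p1 / g(p1 + 1)                          # H I^q1(RL I^p1 f*)(T)
A2 = p2 ** -q2 * T ** p2 / g(p2 + 1)                          # H I^q2(RL I^p2 g*)(T)
B1 = p2 ** -q2 * xi1 ** (p2 + beta1) / g(p2 + beta1 + 1)      # RL I^beta1(H I^q2(RL I^p2 g*))(xi1)
C1 = p1 ** -q1 * eta1 ** (p1 + delta1) / g(p1 + delta1 + 1)   # RL I^delta1(H I^q1(RL I^p1 f*))(eta1)

# c0 and d0 as computed in the proof of Lemma 4
c0 = (-alpha1 * B1 + A1 + Omega1 * A2 - Omega1 * lam1 * C1) / Lam
d0 = (-Omega2 * alpha1 * B1 + Omega2 * A1 + A2 - lam1 * C1) / Lam

# Both boundary conditions of (5) must hold for x(t) = c0 + ..., y(t) = d0 + ...
print(abs((c0 + A1) - (d0 * Omega1 + alpha1 * B1)))  # x(T) condition, ~0
print(abs((d0 + A2) - (c0 * Omega2 + lam1 * C1)))    # y(T) condition, ~0
```

Both residuals vanish up to rounding, as the equivalence in Lemma 4 predicts.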

#### **3. Main Results**

Let $\mathcal{C} = C([0,T], \mathbb{R})$ be the Banach space of all continuous functions from $[0,T]$ to $\mathbb{R}$. Let $X = \{x(t) : x(t) \in C^{2}([0,T], \mathbb{R})\}$ be the space endowed with the norm $\|x\| = \sup\{|x(t)| : t \in [0,T]\}$. Obviously, $(X, \|\cdot\|)$ is a Banach space. Next, we set $Y = \{y(t) : y(t) \in C^{2}([0,T], \mathbb{R})\}$ with the norm $\|y\| = \sup\{|y(t)| : t \in [0,T]\}$. The product space $(X \times Y, \|(x,y)\|)$ is a Banach space with the norm $\|(x,y)\| = \|x\| + \|y\|$.

In the following, for brevity, we use the subscript notation

$$h\_{\mathbf{x},y}(t) = h(t, \mathbf{x}(t), y(t)), \ h \in \{f, \mathbf{g}\},\tag{15}$$

in fractional integral as

$${}^{RL}I^{p}h_{x,y}(\phi) = \frac{1}{\Gamma(p)}\int_0^{\phi}(\phi - s)^{p-1}h(s, x(s), y(s))\,ds, \tag{16}$$

where *φ* ∈ {*t*, *T*, *ξi*, *ηj*}. In addition, we use it in multiple fractional integrations.

In view of Lemma 4, we define the operator P : *X* × *Y* → *X* × *Y* by

$$\mathcal{P}(\mathbf{x},\mathbf{y})(t) = \begin{pmatrix} \mathcal{P}\_1(\mathbf{x},\mathbf{y})(t) \\ \mathcal{P}\_2(\mathbf{x},\mathbf{y})(t) \end{pmatrix}.$$

where

$$\begin{split} \mathcal{P}_1(x,y)(t) &= -\frac{1}{\Lambda}\sum_{i=1}^{m}\alpha_i\,{}^{RL}I^{\beta_i}\Big({}^{H}I^{q_2}\big({}^{RL}I^{p_2}g_{x,y}\big)\Big)(\xi_i) + \frac{1}{\Lambda}\,{}^{H}I^{q_1}\big({}^{RL}I^{p_1}f_{x,y}\big)(T)\\ &\quad + \frac{\Omega_1}{\Lambda}\,{}^{H}I^{q_2}\big({}^{RL}I^{p_2}g_{x,y}\big)(T) - \frac{\Omega_1}{\Lambda}\sum_{j=1}^{k}\lambda_j\,{}^{RL}I^{\delta_j}\Big({}^{H}I^{q_1}\big({}^{RL}I^{p_1}f_{x,y}\big)\Big)(\eta_j)\\ &\quad + {}^{H}I^{q_1}\big({}^{RL}I^{p_1}f_{x,y}\big)(t) \end{split}$$

and

$$\begin{split} \mathcal{P}_2(x,y)(t) &= -\frac{\Omega_2}{\Lambda}\sum_{i=1}^{m}\alpha_i\,{}^{RL}I^{\beta_i}\Big({}^{H}I^{q_2}\big({}^{RL}I^{p_2}g_{x,y}\big)\Big)(\xi_i) + \frac{\Omega_2}{\Lambda}\,{}^{H}I^{q_1}\big({}^{RL}I^{p_1}f_{x,y}\big)(T)\\ &\quad + \frac{1}{\Lambda}\,{}^{H}I^{q_2}\big({}^{RL}I^{p_2}g_{x,y}\big)(T) - \frac{1}{\Lambda}\sum_{j=1}^{k}\lambda_j\,{}^{RL}I^{\delta_j}\Big({}^{H}I^{q_1}\big({}^{RL}I^{p_1}f_{x,y}\big)\Big)(\eta_j)\\ &\quad + {}^{H}I^{q_2}\big({}^{RL}I^{p_2}g_{x,y}\big)(t). \end{split}$$

For computational convenience, we set

$$\begin{aligned} M_1 &= \left(\frac{1+|\Lambda|}{|\Lambda|}\right)\frac{p_1^{-q_1}T^{p_1}}{\Gamma(p_1+1)} + \frac{|\Omega_1|}{|\Lambda|}\left(p_1^{-q_1}\sum_{j=1}^{k}\frac{|\lambda_j|\,\eta_j^{p_1+\delta_j}}{\Gamma(p_1+\delta_j+1)}\right),\\ M_2 &= \frac{|\Omega_1|}{|\Lambda|}\,\frac{p_2^{-q_2}T^{p_2}}{\Gamma(p_2+1)} + \frac{1}{|\Lambda|}\left(p_2^{-q_2}\sum_{i=1}^{m}\frac{|\alpha_i|\,\xi_i^{p_2+\beta_i}}{\Gamma(p_2+\beta_i+1)}\right),\\ M_3 &= \frac{|\Omega_2|}{|\Lambda|}\,\frac{p_1^{-q_1}T^{p_1}}{\Gamma(p_1+1)} + \frac{1}{|\Lambda|}\left(p_1^{-q_1}\sum_{j=1}^{k}\frac{|\lambda_j|\,\eta_j^{p_1+\delta_j}}{\Gamma(p_1+\delta_j+1)}\right),\\ M_4 &= \left(\frac{1+|\Lambda|}{|\Lambda|}\right)\frac{p_2^{-q_2}T^{p_2}}{\Gamma(p_2+1)} + \frac{|\Omega_2|}{|\Lambda|}\left(p_2^{-q_2}\sum_{i=1}^{m}\frac{|\alpha_i|\,\xi_i^{p_2+\beta_i}}{\Gamma(p_2+\beta_i+1)}\right). \end{aligned}$$
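For a hypothetical parameter set (*m* = *k* = 1; all values illustrative, not taken from the paper), the constants Ω<sub>1</sub>, Ω<sub>2</sub>, |Λ| and *M*<sub>1</sub>–*M*<sub>4</sub> can be evaluated directly; the last line of the following minimal Python sketch evaluates the contraction quantity used in Theorem 1 below, for sample Lipschitz constants *m*<sub>1</sub> = *m*<sub>2</sub> = *n*<sub>1</sub> = *n*<sub>2</sub> = 1/50:

```python
import math

g = math.gamma
# Hypothetical data (illustrative only): m = k = 1, T = 1,
# p1 = p2 = q1 = q2 = 1/2, alpha1 = lambda1 = 1, beta1 = delta1 = 1, xi1 = eta1 = 1/2.
p1 = p2 = q1 = q2 = 0.5
T, alpha1, lam1, beta1, delta1, xi1, eta1 = 1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5

Omega1 = alpha1 * xi1 ** beta1 / g(beta1 + 1)
Omega2 = lam1 * eta1 ** delta1 / g(delta1 + 1)
absLam = abs(Omega1 * Omega2 - 1)

A1 = p1 ** -q1 * T ** p1 / g(p1 + 1)                          # H I^q1(RL I^p1 1)(T)
A2 = p2 ** -q2 * T ** p2 / g(p2 + 1)                          # H I^q2(RL I^p2 1)(T)
SB = abs(alpha1) * p2 ** -q2 * xi1 ** (p2 + beta1) / g(p2 + beta1 + 1)
SC = abs(lam1) * p1 ** -q1 * eta1 ** (p1 + delta1) / g(p1 + delta1 + 1)

M1 = (1 + absLam) / absLam * A1 + abs(Omega1) / absLam * SC
M2 = abs(Omega1) / absLam * A2 + SB / absLam
M3 = abs(Omega2) / absLam * A1 + SC / absLam
M4 = (1 + absLam) / absLam * A2 + abs(Omega2) / absLam * SB

# Contraction quantity of Theorem 1 with sample Lipschitz constants 1/50 each:
m1 = m2 = n1 = n2 = 0.02
contraction = (M1 + M3) * (m1 + m2) + (M2 + M4) * (n1 + n2)
print(M1, M2, M3, M4, contraction)  # here contraction < 1
```

With these illustrative data the contraction quantity is below one, so Theorem 1 applies to any pair of nonlinearities with such Lipschitz constants.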

In the first result, Banach's contraction mapping principle is used to prove existence and uniqueness of solutions of system (4).

**Theorem 1.** *Suppose that f*, *g* : [0, *T*] × R<sup>2</sup> → R *are continuous functions. In addition, we assume that f and g satisfy the Lipschitz condition:* (*H*1) *there exist constants m<sub>i</sub>*, *n<sub>i</sub>* ≥ 0*, i* = 1, 2*, such that*

$$|f(t, u_1, v_1) - f(t, u_2, v_2)| \le m_1|u_1 - u_2| + m_2|v_1 - v_2|$$

*and*

$$|g(t, u_1, v_1) - g(t, u_2, v_2)| \le n_1|u_1 - u_2| + n_2|v_1 - v_2|,$$
 
$$\text{for all } t \in [0, T] \text{ and } u_i, v_i \in \mathbb{R},\ i = 1, 2. \text{ Then, the system (4) has a unique solution on } [0, T], \text{ if}$$

$$(M\_1 + M\_3)(m\_1 + m\_2) + (M\_2 + M\_4)(n\_1 + n\_2) < 1. \tag{17}$$

**Proof.** Let us define sup<sub>*t*∈[0,*T*]</sub> |*f*(*t*, 0, 0)| = *N*<sub>1</sub> < ∞ and sup<sub>*t*∈[0,*T*]</sub> |*g*(*t*, 0, 0)| = *N*<sub>2</sub> < ∞. Choose a constant *r* > 0 satisfying

$$r > \frac{(M\_1 + M\_3)N\_1 + (M\_2 + M\_4)N\_2}{1 - \left[ (M\_1 + M\_3)(m\_1 + m\_2) + (M\_2 + M\_4)(n\_1 + n\_2) \right]}.$$

First, we show that P*B<sub>r</sub>* ⊂ *B<sub>r</sub>*, where *B<sub>r</sub>* = {(*x*, *y*) ∈ *X* × *Y* : ‖(*x*, *y*)‖ ≤ *r*} is a ball. For (*x*, *y*) ∈ *B<sub>r</sub>*, using

$$|f_{x,y}| \le |f_{x,y} - f_{0,0}| + |f_{0,0}| \le m_1\|x\| + m_2\|y\| + N_1,$$

and

$$|g_{x,y}| \le |g_{x,y} - g_{0,0}| + |g_{0,0}| \le n_1\|x\| + n_2\|y\| + N_2,$$

we get, after estimating each term of $\mathcal{P}_1(x, y)(t)$ by Corollary 1 and taking the supremum over $t \in [0, T]$, that

$$\|\mathcal{P}_1(x, y)\| \le \left[M_1(m_1 + m_2) + M_2(n_1 + n_2)\right]r + M_1 N_1 + M_2 N_2.$$

A similar computation for $\mathcal{P}_2$ yields

$$\|\mathcal{P}_2(x, y)\| \le \left[M_3(m_1 + m_2) + M_4(n_1 + n_2)\right]r + M_3 N_1 + M_4 N_2.$$

Then, we conclude that

$$\begin{aligned} \|\mathcal{P}(x, y)\| &\le \left[M_1(m_1 + m_2) + M_2(n_1 + n_2)\right]r + M_1 N_1 + M_2 N_2\\ &\quad + \left[M_3(m_1 + m_2) + M_4(n_1 + n_2)\right]r + M_3 N_1 + M_4 N_2 \le r, \end{aligned}$$

which leads to P*Br* ⊂ *Br*.

In the next step, we show that P is a contraction operator. For any $(x_1, y_1), (x_2, y_2) \in X \times Y$, estimating the difference $\mathcal{P}_1(x_1, y_1)(t) - \mathcal{P}_1(x_2, y_2)(t)$ term by term via (*H*1) and Corollary 1, we get

$$\|\mathcal{P}_1(x_1, y_1) - \mathcal{P}_1(x_2, y_2)\| \le \left[M_1(m_1 + m_2) + M_2(n_1 + n_2)\right](\|x_1 - x_2\| + \|y_1 - y_2\|). \tag{18}$$

In addition, a similar estimate for $\mathcal{P}_2$ gives

$$\|\mathcal{P}_2(x_1, y_1) - \mathcal{P}_2(x_2, y_2)\| \le \left[M_3(m_1 + m_2) + M_4(n_1 + n_2)\right](\|x_1 - x_2\| + \|y_1 - y_2\|). \tag{19}$$

The above results in (18) and (19) imply

$$\begin{aligned} \|\mathcal{P}(x_1, y_1) - \mathcal{P}(x_2, y_2)\| &\le \left[(M_1 + M_3)(m_1 + m_2) + (M_2 + M_4)(n_1 + n_2)\right]\\ &\quad \times (\|x_1 - x_2\| + \|y_1 - y_2\|). \end{aligned}$$

Since (*M*<sub>1</sub> + *M*<sub>3</sub>)(*m*<sub>1</sub> + *m*<sub>2</sub>) + (*M*<sub>2</sub> + *M*<sub>4</sub>)(*n*<sub>1</sub> + *n*<sub>2</sub>) < 1, the operator P is a contraction. By Banach's fixed point theorem, P has a unique fixed point, which is the unique solution of (4) on [0, *T*]. The proof is completed.

The Leray–Schauder alternative is applied to obtain our second result, an existence result.

**Lemma 6.** *(Leray–Schauder alternative) [31]. Let Q* : *U* → *U be a completely continuous operator. Let*

$$\mu(Q) = \{ \mathbf{x} \in \mathcal{U} : \mathbf{x} = \theta Q(\mathbf{x}) \text{ for some } 0 < \theta < 1 \}.$$

*Then, either the set μ*(*Q*) *is unbounded, or Q has at least one fixed point.*

**Theorem 2.** *Suppose that there exist constants ar*, *br* ≥ 0 *for r* = 1, 2 *and a*0, *b*<sup>0</sup> > 0*. In addition, for any u*, *<sup>v</sup>* ∈ R, *we assume that*

$$\begin{aligned} |f(t, u, v)| &\leq & a\_0 + a\_1|u| + a\_2|v|, \\ |g(t, u, v)| &\leq & b\_0 + b\_1|u| + b\_2|v|. \end{aligned}$$

*If* (*M*<sup>1</sup> + *M*3)*a*<sup>1</sup> + (*M*<sup>2</sup> + *M*4)*b*<sup>1</sup> < 1 *and* (*M*<sup>1</sup> + *M*3)*a*<sup>2</sup> + (*M*<sup>2</sup> + *M*4)*b*<sup>2</sup> < 1*, then* (4) *has at least one solution on* [0, *T*].

**Proof.** The first task of the proof is to show that the operator P : *X* × *Y* → *X* × *Y* is completely continuous. The continuity of the functions *f*, *g* on [0, *T*] × R × R implies that the operator P is continuous. Now, we let Φ be a bounded subset of *X* × *Y*. Then, there exist positive constants *G*<sub>1</sub> and *G*<sub>2</sub> such that

$$|f(t, \mathbf{x}, y)| \le G\_1, \ |g(t, \mathbf{x}, y)| \le G\_2, \ \forall (\mathbf{x}, y) \in \Phi.$$

For any $(x, y) \in \Phi$, bounding each term of $\mathcal{P}_1(x, y)(t)$ by means of Corollary 1 leads to

$$\|\mathcal{P}\_1(x,y)\| \le G\_1 M\_1 + G\_2 M\_2.$$

Furthermore, we get

$$\begin{split} \|\mathcal{P}_2(x, y)\| &\le \frac{|\Omega_2|}{|\Lambda|}\sum_{i=1}^{m}|\alpha_i|\,{}^{RL}I^{\beta_i}\Big({}^{H}I^{q_2}\big({}^{RL}I^{p_2}1\big)\Big)(\xi_i)\,G_2 + \frac{|\Omega_2|}{|\Lambda|}\,{}^{H}I^{q_1}\big({}^{RL}I^{p_1}1\big)(T)\,G_1\\ &\quad + \frac{1}{|\Lambda|}\,{}^{H}I^{q_2}\big({}^{RL}I^{p_2}1\big)(T)\,G_2 + \frac{1}{|\Lambda|}\sum_{j=1}^{k}|\lambda_j|\,{}^{RL}I^{\delta_j}\Big({}^{H}I^{q_1}\big({}^{RL}I^{p_1}1\big)\Big)(\eta_j)\,G_1\\ &\quad + {}^{H}I^{q_2}\big({}^{RL}I^{p_2}1\big)(T)\,G_2\\ &= G_1 M_3 + G_2 M_4. \end{split}$$

Therefore, from the above two results, we deduce that the set PΦ is uniformly bounded. Next, we prove that the set PΦ is equicontinuous. Choosing two points *τ*<sub>1</sub>, *τ*<sub>2</sub> ∈ [0, *T*] such that *τ*<sub>1</sub> < *τ*<sub>2</sub>, we have, for any (*x*, *y*) ∈ Φ, that

$$\begin{aligned} |\mathcal{P}_1(x,y)(\tau_2) - \mathcal{P}_1(x,y)(\tau_1)| &= \Big|{}^{H}I^{q_1}\Big({}^{RL}I^{p_1}f_{x,y}\Big)(\tau_2) - {}^{H}I^{q_1}\Big({}^{RL}I^{p_1}f_{x,y}\Big)(\tau_1)\Big| \\ &\le G_1\Big|{}^{H}I^{q_1}\Big({}^{RL}I^{p_1}\mathbf{1}\Big)(\tau_2) - {}^{H}I^{q_1}\Big({}^{RL}I^{p_1}\mathbf{1}\Big)(\tau_1)\Big| \\ &= G_1\,\frac{p_1^{-q_1}}{\Gamma(p_1+1)}\Big|\tau_2^{p_1} - \tau_1^{p_1}\Big|, \end{aligned}$$

which implies

$$|\mathcal{P}_1(x, y)(\tau_2) - \mathcal{P}_1(x, y)(\tau_1)| \to 0, \quad \text{as} \quad \tau_1 \to \tau_2.$$

In addition, we obtain

$$\begin{aligned} |\mathcal{P}_2(x,y)(\tau_2) - \mathcal{P}_2(x,y)(\tau_1)| &= \Big|{}^{H}I^{q_2}\Big({}^{RL}I^{p_2}g_{x,y}\Big)(\tau_2) - {}^{H}I^{q_2}\Big({}^{RL}I^{p_2}g_{x,y}\Big)(\tau_1)\Big| \\ &\le G_2\Big|{}^{H}I^{q_2}\Big({}^{RL}I^{p_2}\mathbf{1}\Big)(\tau_2) - {}^{H}I^{q_2}\Big({}^{RL}I^{p_2}\mathbf{1}\Big)(\tau_1)\Big| \\ &= G_2\,\frac{p_2^{-q_2}}{\Gamma(p_2+1)}\Big|\tau_2^{p_2} - \tau_1^{p_2}\Big|. \end{aligned}$$

Then,

$$|\mathcal{P}\_2(\mathbf{x}, y)(\tau\_2) - \mathcal{P}\_2(\mathbf{x}, y)(\tau\_1)| \to 0, \quad \text{as} \quad \tau\_1 \to \tau\_2.$$

Thus, the set PΦ is equicontinuous. By the Arzelà–Ascoli theorem, the set PΦ is relatively compact, and hence the operator P is completely continuous.

Finally, we claim that the set *μ* = {(*x*, *y*) ∈ *X* × *Y* : (*x*, *y*) = *θ*P(*x*, *y*), 0 ≤ *θ* ≤ 1} is bounded. Let (*x*, *y*) ∈ *μ*; then (*x*, *y*) = *θ*P(*x*, *y*). Hence, for *t* ∈ [0, *T*], we have

$$x(t) = \theta \mathcal{P}\_1(\mathbf{x}, y)(t) \quad \text{and} \quad y(t) = \theta \mathcal{P}\_2(\mathbf{x}, y)(t).$$

Therefore, we obtain

$$\begin{aligned} \|x\| &\le (a_0 + a_1\|x\| + a_2\|y\|)M_1 + (b_0 + b_1\|x\| + b_2\|y\|)M_2, \\ \|y\| &\le (a_0 + a_1\|x\| + a_2\|y\|)M_3 + (b_0 + b_1\|x\| + b_2\|y\|)M_4, \end{aligned}$$

which leads to

$$\begin{aligned} \|x\| + \|y\| &\le (M_1 + M_3)a_0 + (M_2 + M_4)b_0 + \left[(M_1 + M_3)a_1 + (M_2 + M_4)b_1\right]\|x\| \\ &\quad + \left[(M_1 + M_3)a_2 + (M_2 + M_4)b_2\right]\|y\|. \end{aligned}$$

Thus, the following inequality holds:

$$\|(x,y)\| \le \frac{(M\_1 + M\_3)a\_0 + (M\_2 + M\_4)b\_0}{M^\*},\tag{20}$$

where *M*<sup>∗</sup> = min{1 − (*M*<sub>1</sub> + *M*<sub>3</sub>)*a*<sub>1</sub> − (*M*<sub>2</sub> + *M*<sub>4</sub>)*b*<sub>1</sub>, 1 − (*M*<sub>1</sub> + *M*<sub>3</sub>)*a*<sub>2</sub> − (*M*<sub>2</sub> + *M*<sub>4</sub>)*b*<sub>2</sub>}. Hence, the set *μ* is bounded. Then, by Lemma 6, the operator P has at least one fixed point. Therefore, we conclude that problem (4) has at least one solution on [0, *T*]. The proof is complete.

If *a*<sub>*r*</sub> = *b*<sub>*r*</sub> = 0 for *r* = 1, 2 in Theorem 2, we have the following corollary.

**Corollary 2.** *Assume that* |*f*(*t*, *x*, *y*)| ≤ *a*<sub>0</sub> *and* |*g*(*t*, *x*, *y*)| ≤ *b*<sub>0</sub>*, where a*<sub>0</sub>, *b*<sub>0</sub> > 0*,* ∀(*t*, *x*, *y*) ∈ [0, *T*] × ℝ²*. Then, problem* (4) *has at least one solution on* [0, *T*].

Next, we present examples to illustrate our results.

**Example 1.** *Consider the following sequential Riemann–Liouville and Hadamard–Caputo fractional differential system with coupled fractional integral boundary conditions of the form*

$$\begin{aligned} {}^{RL}D^{\frac{1}{5}}\Big({}^{HC}D^{\frac{4}{5}}x\Big)(t) &= f(t, x(t), y(t)), \qquad t \in [0, 7/4], \\ {}^{RL}D^{\frac{2}{5}}\Big({}^{HC}D^{\frac{3}{5}}y\Big)(t) &= g(t, x(t), y(t)), \qquad t \in [0, 7/4], \\ {}^{HC}D^{\frac{4}{5}}x(0) &= 0, \quad x\left(\frac{7}{4}\right) = \frac{1}{3}\,{}^{RL}I^{\frac{3}{4}}y\left(\frac{1}{2}\right) + \frac{2}{7}\,{}^{RL}I^{\frac{5}{4}}y\left(\frac{5}{4}\right), \\ {}^{HC}D^{\frac{3}{5}}y(0) &= 0, \quad y\left(\frac{7}{4}\right) = \frac{3}{11}\,{}^{RL}I^{\frac{1}{2}}x\left(\frac{1}{4}\right) + \frac{4}{17}\,{}^{RL}I^{\frac{7}{8}}x\left(\frac{3}{4}\right) + \frac{5}{19}\,{}^{RL}I^{\frac{11}{8}}x\left(\frac{3}{2}\right). \end{aligned} \tag{21}$$

Here, *p*<sub>1</sub> = 1/5, *p*<sub>2</sub> = 2/5, *q*<sub>1</sub> = 4/5, *q*<sub>2</sub> = 3/5, *T* = 7/4, *m* = 2, *α*<sub>1</sub> = 1/3, *α*<sub>2</sub> = 2/7, *β*<sub>1</sub> = 3/4, *β*<sub>2</sub> = 5/4, *ξ*<sub>1</sub> = 1/2, *ξ*<sub>2</sub> = 5/4, *k* = 3, *λ*<sub>1</sub> = 3/11, *λ*<sub>2</sub> = 4/17, *λ*<sub>3</sub> = 5/19, *δ*<sub>1</sub> = 1/2, *δ*<sub>2</sub> = 7/8, *δ*<sub>3</sub> = 11/8, *η*<sub>1</sub> = 1/4, *η*<sub>2</sub> = 3/4, *η*<sub>3</sub> = 3/2. From all of these constants, we find that Ω<sub>1</sub> ≈ 0.5489581728, Ω<sub>2</sub> ≈ 0.7217268652, |Λ| ≈ 0.6038021388, *M*<sub>1</sub> ≈ 13.82028787, *M*<sub>2</sub> ≈ 3.420721316, *M*<sub>3</sub> ≈ 9.093047627, *M*<sub>4</sub> ≈ 7.354860071.

Let the two nonlinear Lipschitz functions *f*, *g* : [0, 7/4] × ℝ² → ℝ be defined by

$$f(t, x, y) = \frac{1}{12(t+12)}\left(\frac{x^2 + 2|x|}{1+|x|}\right) + \frac{e^{-t}\sin y}{15(3t+5)} + \frac{1}{2},\tag{22}$$

$$g(t, x, y) = \frac{\cos \pi t}{6(2t + 9)} \tan^{-1} x + \frac{1}{36(4t + 7)}\left(\frac{3y^2 + 4|y|}{1 + |y|}\right) + \frac{3}{4}.\tag{23}$$

From (22) and (23), we see that

$$|f(t, x_1, y_1) - f(t, x_2, y_2)| \le \frac{1}{72}|x_1 - x_2| + \frac{1}{75}|y_1 - y_2|$$

and

$$|g(t, x_1, y_1) - g(t, x_2, y_2)| \le \frac{1}{54}|x_1 - x_2| + \frac{1}{63}|y_1 - y_2|$$

for all *x*<sub>*r*</sub>, *y*<sub>*r*</sub> ∈ ℝ, *r* = 1, 2. We then obtain (*M*<sub>1</sub> + *M*<sub>3</sub>)(1/72 + 1/75) + (*M*<sub>2</sub> + *M*<sub>4</sub>)(1/54 + 1/63) ≈ 0.9943406888 < 1. By Theorem 1, the sequential Riemann–Liouville and Hadamard–Caputo fractional differential system with coupled fractional integral boundary conditions (21), with *f* and *g* given by (22) and (23), respectively, has a unique solution on [0, 7/4].
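The contraction constant above can be reproduced with a few lines of arithmetic (a sketch; the constants *M*<sub>1</sub>–*M*<sub>4</sub> are the approximate values reported above, and the Lipschitz constants come from (22) and (23)):

```python
# Sketch: reproduce the contraction constant of Example 1.
# M1..M4 are the (rounded) values reported in the text.
M1, M2, M3, M4 = 13.82028787, 3.420721316, 9.093047627, 7.354860071

# Lipschitz constants of f and g with respect to x and y.
L = (M1 + M3) * (1/72 + 1/75) + (M2 + M4) * (1/54 + 1/63)
print(L < 1)   # True: the uniqueness condition of Theorem 1 holds
```

The computed value agrees with the reported 0.9943406888 up to the rounding of the *M* constants.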

**Example 2.** *Consider the sequential Riemann–Liouville and Hadamard–Caputo fractional differential system with coupled fractional integral boundary conditions of the Example 1, where the nonlinear functions f , g* : [0, 7/4] <sup>×</sup> <sup>R</sup><sup>2</sup> −→ <sup>R</sup> *are defined by*

$$f(t, x, y) = \frac{2e^{-t}}{13} + \frac{1}{2(5t + 23)}\left(\frac{x^{16}}{1 + |x|^{15}}\right) + \frac{\cos \pi t}{3(2t + 15)}\, y \sin^2 x,\tag{24}$$

$$g(t, x, y) = \frac{4t}{3} + \frac{x e^{-y^2}}{2(4t + 11)} + \frac{|y|^{19}\cos^4 x}{3(3t + 8)(1 + y^{18})}.\tag{25}$$

*It is easy to obtain that* |*f*(*t*, *x*, *y*)| ≤ (2/13) + (1/46)|*x*| + (1/45)|*y*| *and* |*g*(*t*, *x*, *y*)| ≤ (7/3) + (1/22)|*x*| + (1/24)|*y*|*. By setting a*<sub>0</sub> = 2/13*, a*<sub>1</sub> = 1/46*, a*<sub>2</sub> = 1/45*, b*<sub>0</sub> = 7/3*, b*<sub>1</sub> = 1/22 *and b*<sub>2</sub> = 1/24*, we find that* (*M*<sub>1</sub> + *M*<sub>3</sub>)*a*<sub>1</sub> + (*M*<sub>2</sub> + *M*<sub>4</sub>)*b*<sub>1</sub> ≈ 0.9879151432 < 1 *and* (*M*<sub>1</sub> + *M*<sub>3</sub>)*a*<sub>2</sub> + (*M*<sub>2</sub> + *M*<sub>4</sub>)*b*<sub>2</sub> ≈ 0.9581677912 < 1*. Theorem 2 then implies that system* (21) *with f and g given by* (24) *and* (25)*, respectively, has at least one solution on* [0, 7/4]*.*
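The two conditions of Theorem 2 for this example, together with the a priori bound (20), can be checked numerically (a sketch; *M*<sub>1</sub>–*M*<sub>4</sub> are the values reported in Example 1, and the bound's numerical value is for illustration only):

```python
# Sketch: verify the two Leray-Schauder conditions of Theorem 2 for
# Example 2 and evaluate the a priori bound (20).
M1, M2, M3, M4 = 13.82028787, 3.420721316, 9.093047627, 7.354860071
a0, a1, a2 = 2/13, 1/46, 1/45
b0, b1, b2 = 7/3, 1/22, 1/24

c1 = (M1 + M3)*a1 + (M2 + M4)*b1      # ~0.9879 < 1
c2 = (M1 + M3)*a2 + (M2 + M4)*b2      # ~0.9582 < 1
M_star = min(1 - c1, 1 - c2)
bound = ((M1 + M3)*a0 + (M2 + M4)*b0) / M_star   # right-hand side of (20)
print(c1 < 1 and c2 < 1)
```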

**Example 3.** *Consider the sequential Riemann–Liouville and Hadamard–Caputo fractional differential system with coupled fractional integral boundary conditions of the Example 1, where the nonlinear functions f , g* : [0, 7/4] <sup>×</sup> <sup>R</sup><sup>2</sup> −→ <sup>R</sup> *are given by*

$$f(t, \mathbf{x}, y) \quad = \ \frac{1}{2}(1 + \cos^2 t) + \frac{|\mathbf{x}|e^{-t}}{(1 + |\mathbf{x}|)} + \frac{2}{\pi} \tan^{-1} y,\tag{26}$$

$$g(t, x, y) = \frac{1}{4}(3 + \sin^2 \pi t) + e^{-x^4} + \frac{3y^{22}}{1 + y^{22}}.\tag{27}$$

*We can check that* |*f*(*t*, *x*, *y*)| ≤ 3 *and* |*g*(*t*, *x*, *y*)| ≤ 5 *for all x*, *y* ∈ ℝ*. By Corollary 2, problem* (21) *with f and g given by* (26) *and* (27)*, respectively, has at least one solution on* [0, 7/4]*.*
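The uniform bounds claimed for (26) and (27) can be probed numerically on a coarse grid (a sketch, not a proof; the grid ranges are illustrative choices):

```python
# Sketch: confirm the uniform bounds |f| <= 3 and |g| <= 5 of Example 3
# on a sample grid of (t, x, y) values with t in [0, 7/4].
import math

def f(t, x, y):
    return 0.5*(1 + math.cos(t)**2) + abs(x)*math.exp(-t)/(1 + abs(x)) \
           + (2/math.pi)*math.atan(y)

def g(t, x, y):
    return 0.25*(3 + math.sin(math.pi*t)**2) + math.exp(-x**4) \
           + 3*y**22/(1 + y**22)

pts = [(-10.0, -1.0, 0.0, 1.0, 10.0)]*2
worst_f = max(abs(f(t/8, x, y)) for t in range(15)
              for x in pts[0] for y in pts[1])
worst_g = max(abs(g(t/8, x, y)) for t in range(15)
              for x in pts[0] for y in pts[1])
print(worst_f <= 3 and worst_g <= 5)   # the sampled values never exceed the bounds
```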

#### **4. Conclusions**

In this paper, we studied a new system of sequential fractional differential equations consisting of mixed Riemann–Liouville and Hadamard–Caputo fractional derivatives, supplemented with nonlocal coupled fractional integral boundary conditions. To the best of our knowledge, this is the first system of this type to appear in the literature. After proving a basic lemma that allows us to transform the considered system into a fixed point problem, we used standard tools from functional analysis to establish existence and uniqueness results: the Banach contraction mapping principle for the uniqueness result and the Leray–Schauder alternative for the existence result. The obtained results are illustrated by numerical examples and enrich the existing literature on sequential systems of fractional differential equations. Other fractional systems, with other types of mixed fractional derivatives or other boundary conditions, can be studied using the methodology of this paper.

**Author Contributions:** Conceptualization, C.K., W.Y., S.K.N., and J.T.; methodology, C.K., W.Y., S.K.N., and J.T.; formal analysis, C.K., W.Y., S.K.N., and J.T.; funding acquisition, J.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by King Mongkut's University of Technology North Bangkok, Contract No. KMUTNB-61-KNOW-034.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Some New Fractional Estimates of Inequalities for LR-***p***-Convex Interval-Valued Functions by Means of Pseudo Order Relation**

**Muhammad Bilal Khan 1, Pshtiwan Othman Mohammed 2,\*, Muhammad Aslam Noor 1, Dumitru Baleanu 3,4,5,\* and Juan Luis García Guirao 6,7**


**Abstract:** It is a familiar fact that interval analysis provides tools to deal with data uncertainty. In general, interval analysis is typically used to deal with the models whose data are composed of inaccuracies that may occur from certain kinds of measurements. In interval analysis, both the inclusion relation (⊆) and pseudo order relation ≤*p* are two different concepts. In this article, by using pseudo order relation, we introduce the new class of nonconvex functions known as LR-*p*-convex interval-valued functions (LR-*p*-convex-IVFs). With the help of this relation, we establish a strong relationship between LR-*p*-convex-IVFs and Hermite-Hadamard type inequalities (*HH*-type inequalities) via Katugampola fractional integral operator. Moreover, we have shown that our results include a wide class of new and known inequalities for LR-*p*-convex-IVFs and their variant forms as special cases. Useful examples that demonstrate the applicability of the theory proposed in this study are given. The concepts and techniques of this paper may be a starting point for further research in this area.

> **Keywords:** LR-*p*-convex interval-valued function; Katugampola fractional integral operator; Hermite-Hadamard type inequality; Hermite-Hadamard-Fejér inequality

#### **1. Introduction**

Hermite [1] and Hadamard [2] derived the familiar inequality known as the Hermite-Hadamard inequality (*HH* inequality). This inequality establishes a strong relationship with convex functions, as follows:

Let *<sup>f</sup>* : *<sup>I</sup>* → R be a convex function defined on an interval *<sup>I</sup>* ⊆ R and *<sup>u</sup>*, *<sup>ν</sup>* ∈ *<sup>I</sup>* such that *ν* > *u*. Then

$$f\left(\frac{u+\nu}{2}\right) \le \frac{1}{\nu-u} \int\_{u}^{\nu} f(x)dx \le \frac{f(u)+f(\nu)}{2} \tag{1}$$

If *f* is a concave function, then both inequalities are reversed. We note that *HH*-inequality may be regarded as a refinement of the concept of convexity and it follows easily from Jensen's inequality. In the last few decades, *HH*-inequality has attracted many authors to devote themselves to this field. Therefore, many authors have proposed different varieties of convexities to introduce *HH*-type inequalities such as harmonic convexity [3], quasi convexity [4], Schur convexity [5,6], strong convexity [7,8], h-convexity [9], p-convexity [10], fuzzy

**Citation:** Khan, M.B.; Mohammed, P.O.; Noor, M.A.; Baleanu, D.; Guirao, J.L.G. Some New Fractional Estimates of Inequalities for LR-*p*-Convex Interval-Valued Functions by Means of Pseudo Order Relation. *Axioms* **2021**, *10*, 175. https://doi.org/10.3390/axioms10030175

Academic Editors: Jorge E. Macías Díaz and Chris Goodrich

Received: 28 June 2021 Accepted: 23 July 2021 Published: 31 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

convexity [11,12], fuzzy pre-invexity [13] and generalized convexity [14], *P*-convexity [15], etc. Fejér [16] considered the major generalization of *HH*-inequality which is known as *HH*-Fejér inequality. It can be expressed as follows:

Let *f* : [*u*, *ν*] → ℝ be a convex function on an interval [*u*, *ν*] with *u* ≤ *ν*, and let W : [*u*, *ν*] ⊂ ℝ → ℝ with W ≥ 0 be an integrable function that is symmetric with respect to (*u* + *ν*)/2. Then, we have the following inequality:

$$f\left(\frac{u+\nu}{2}\right)\int\_{u}^{\nu}\mathcal{W}(x)dx \le \int\_{u}^{\nu} f(x)\mathcal{W}(x)dx \le \frac{f(u)+f(\nu)}{2}\int\_{u}^{\nu}\mathcal{W}(x)dx\tag{2}$$

If *f* is concave, then the double inequality (2) is reversed. If W(*x*) = 1, then we obtain (1) from (2). With the assistance of inequality (2), several classical inequalities can be obtained through special convex functions. In addition, these inequalities have a very significant role for convex functions in both pure and applied mathematics. We refer the readers to [17–19] and the references therein for further analysis of the applications and properties of generalized convex functions and *HH*-integral inequalities.
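Inequalities (1) and (2) are easy to verify numerically for a concrete convex function (a sketch; both *f*(*x*) = *x*² on [0, 1] and the symmetric weight W(*x*) = *x*(1 − *x*) are illustrative choices, not taken from the text):

```python
# Sketch: numerical check of the HH inequality (1) and the HH-Fejer
# inequality (2) for f(x) = x**2 on [0, 1] with W(x) = x*(1 - x).
f = lambda x: x**2
W = lambda x: x*(1 - x)               # W >= 0, symmetric about (u + v)/2
u, v, N = 0.0, 1.0, 100_000

def integrate(h):                     # midpoint rule on [u, v]
    dx = (v - u) / N
    return sum(h(u + (k + 0.5)*dx) for k in range(N)) * dx

mid, avg = f((u + v)/2), (f(u) + f(v))/2
assert mid <= integrate(f)/(v - u) <= avg                    # inequality (1)

Iw = integrate(W)
assert mid*Iw <= integrate(lambda x: f(x)*W(x)) <= avg*Iw    # inequality (2)
print("both inequalities hold")
```

Here the middle quantity of (1) is 1/3, squeezed between *f*(1/2) = 1/4 and (*f*(0) + *f*(1))/2 = 1/2.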

On the other hand, it is a well-known fact that interval-valued analysis was introduced as an attempt to overcome interval uncertainty, which occurs in the computer or mathematical models of some deterministic real-world phenomena. A classic example of an interval enclosure is Archimedes' technique, which is associated with the computation of the circumference of a circle. In 1966, Moore [20] gave the concept of interval analysis in his book and discussed its applications in computational mathematics.

After that, several authors have developed a strong relationship between inequalities and IVFs by means of inclusion relation via different integral operators, as one can see by Costa [21], Costa and Roman-Flores [22], Roman-Flores et al. [23,24], and Chalco-Cano et al. [25,26], but also to more general set-valued maps by Nikodem et al. [27], and Matkowski and Nikodem [28]. In particular, Zhang et al. [29] derived the new version of Jensen's inequalities for set-valued and fuzzy set-valued functions by means of a pseudo order relation and proved that these Jensen's inequalities generalized a form of Costa Jensen's inequalities [21].

In the last two decades, in the development of pure and applied mathematics, fractional calculus has played a key role. Yet, it attains magnificent deliberation in the ongoing research work, which is due to its application in various directions such as image processing, signal processing, physics, biology, control theory, computer networking, and fluid dynamics [30–33].

As a further extension, several authors have introduced the refinements of classical inequalities through fractional integrals and discussed their applications, such as Budak et al. [34], who established a strong relationship between fractional interval *HH*inequality and convex-IVF.

Through Katugampola fractional integral [35], Toplu et al. [36] established the following *HH*-inequality for p-convex functions:

Let *f* be a real-valued Lebesgue integrable function and *p*, *α* > 0. If *f* ∈ *SX*([*u*, *ν*], ℝ⁺, *p*), then

$$f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) \le \frac{p^{\alpha}\Gamma(\alpha+1)}{2(\nu^p-u^p)^{\alpha}}\left[\mathcal{I}_{u^+}^{p,\alpha}f(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f(u)\right] \le \frac{f(u)+f(\nu)}{2}.\tag{3}$$

Due to the vast applications of convexity and fractional *HH*-inequality in mathematical analysis and optimization, many authors have discussed the applications, refinements, generalizations, and extensions, see [37–56] and the references therein.

Inspired by the ongoing research work, we generalize the class of p-convex function known as LR-*p*-convex-IVF, and establish the relationship between *HH*-type inequalities and LR-*p*-convex-IVF via Katugampola fractional integral.

#### **2. Preliminaries**

Let ℝ be the set of real numbers and ℝ<sub>*I*</sub> the collection of all closed and bounded intervals of ℝ, that is, $\mathbb{R}_I = \{[\underline{\xi}, \overline{\xi}] : \underline{\xi}, \overline{\xi} \in \mathbb{R} \text{ and } \underline{\xi} \le \overline{\xi}\}$. If $\underline{\xi} \ge 0$, then $[\underline{\xi}, \overline{\xi}]$ is called a positive interval. The set of all positive intervals is denoted by ℝ<sub>*I*</sub><sup>+</sup> and defined as

$$\mathbb{R}_I^+ = \left\{\left[\underline{\xi}, \overline{\xi}\right] : \left[\underline{\xi}, \overline{\xi}\right] \in \mathbb{R}_I \text{ and } \underline{\xi} \ge 0\right\}.$$

Let $\varrho \in \mathbb{R}$; the scalar multiplication $\varrho \cdot \xi$ is defined by

$$\varrho \cdot \xi = \begin{cases} \left[\varrho\underline{\xi}, \varrho\overline{\xi}\right], & \varrho > 0, \\ \{0\}, & \varrho = 0, \\ \left[\varrho\overline{\xi}, \varrho\underline{\xi}\right], & \varrho < 0. \end{cases}\tag{4}$$

Then, the addition *<sup>ξ</sup>*<sup>1</sup> + *<sup>ξ</sup>*<sup>2</sup> and Minkowski difference *<sup>ξ</sup>*<sup>1</sup> − *<sup>ξ</sup>*<sup>2</sup> for *<sup>ξ</sup>*1, *<sup>ξ</sup>*<sup>2</sup> ∈ R*<sup>I</sup>* are defined by

$$\xi_1 + \xi_2 = \left[\underline{\xi}_1, \overline{\xi}_1\right] + \left[\underline{\xi}_2, \overline{\xi}_2\right] = \left[\underline{\xi}_1 + \underline{\xi}_2, \overline{\xi}_1 + \overline{\xi}_2\right]\tag{5}$$

and

$$\xi_1 - \xi_2 = \left[\underline{\xi}_1, \overline{\xi}_1\right] - \left[\underline{\xi}_2, \overline{\xi}_2\right] = \left[\underline{\xi}_1 - \overline{\xi}_2, \overline{\xi}_1 - \underline{\xi}_2\right]\tag{6}$$

respectively.

The inclusion relation "⊇" means that

$$\xi_2 \supseteq \xi_1 \iff \left[\underline{\xi}_2, \overline{\xi}_2\right] \supseteq \left[\underline{\xi}_1, \overline{\xi}_1\right] \iff \underline{\xi}_2 \le \underline{\xi}_1, \ \overline{\xi}_1 \le \overline{\xi}_2.\tag{7}$$

**Remark 1.** ([29]). (i) *The relation* "≤*p*" *defined on* R*<sup>I</sup> by*

$$\left[\underline{\xi}, \overline{\xi}\right] \le_p \left[\underline{\zeta}, \overline{\zeta}\right] \text{ if and only if } \underline{\xi} \le \underline{\zeta}, \ \overline{\xi} \le \overline{\zeta},\tag{8}$$

*for all* $[\underline{\xi}, \overline{\xi}], [\underline{\zeta}, \overline{\zeta}] \in \mathbb{R}_I$*, is a pseudo order relation. In the interval analysis case, the pseudo order relation* (≤*p*) *and the partial order relation* (≤) *behave alike; thus, the relation* $[\underline{\xi}, \overline{\xi}] \le_p [\underline{\zeta}, \overline{\zeta}]$ *coincides with* $[\underline{\xi}, \overline{\xi}] \le [\underline{\zeta}, \overline{\zeta}]$ *on* ℝ<sub>*I*</sub>*; for more details, see* [21,29].

*(ii) It can be easily seen that* "≤*p*" *looks similar to "left and right" on the real line* R, *so we call* "≤*p*" *is "left and right" (or "LR" order, in short).*
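The interval operations (4) and (5) and the pseudo (LR) order (8) are simple enough to sketch in a few lines (a minimal sketch; intervals are represented as `(lower, upper)` tuples and the function names are illustrative):

```python
# Minimal sketch of scalar multiplication (4), addition (5), and the
# pseudo (LR) order (8) on intervals.
def scal(r, I):                        # scalar multiplication (4)
    lo, up = I
    if r > 0:
        return (r*lo, r*up)
    if r == 0:
        return (0.0, 0.0)
    return (r*up, r*lo)                # a negative scalar swaps endpoints

def add(I, J):                         # interval addition (5)
    return (I[0] + J[0], I[1] + J[1])

def leq_p(I, J):                       # [a, b] <=_p [c, d] iff a <= c and b <= d
    return I[0] <= J[0] and I[1] <= J[1]

A, B = (1.0, 3.0), (2.0, 5.0)
print(add(A, B))                       # (3.0, 8.0)
print(scal(-2, A))                     # (-6.0, -2.0)
print(leq_p(A, B))                     # True
```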

The concept of the Riemann integral for an IVF, first introduced by Moore [20], is defined as follows:

**Theorem 1.** ([20]). *Let* $f : [u, \nu] \subset \mathbb{R} \to \mathbb{R}_I$ *be an IVF such that* $f(x) = [\underline{f}(x), \overline{f}(x)]$. *Then, f is Riemann integrable over* [*u*, *ν*] *if and only if* $\underline{f}$ *and* $\overline{f}$ *are both Riemann integrable over* [*u*, *ν*]*, in which case*

$$(IR)\int_u^{\nu} f(x)dx = \left[(R)\int_u^{\nu}\underline{f}(x)dx, \ (R)\int_u^{\nu}\overline{f}(x)dx\right]\tag{9}$$

Now, we discuss the concept of Katugampola fractional integral operator for IVF.

Let $q \ge 1$, $c \in \mathbb{R}$, and let $\mathcal{X}_c^q(u, \nu)$ be the set of all complex-valued Lebesgue integrable IVFs $f$ on $[u, \nu]$ for which the norm $\|f\|_{\mathcal{X}_c^q}$ is defined by

$$\|f\|_{\mathcal{X}_c^q} = \left(\int_u^{\nu}|\varrho^c f(\varrho)|^q \frac{d\varrho}{\varrho}\right)^{\frac{1}{q}} < \infty$$

for $1 \le q < \infty$, and

$$\|f\|_{\mathcal{X}_c^{\infty}} = \operatorname{ess\,sup}_{u \le \varrho \le \nu} \varrho^c |f(\varrho)|.$$

Katugampola [35] presented a new fractional integral to generalize the Riemann Liouville and Hadamard fractional integrals under certain conditions.

Let *p*, *α* > 0 and let L[*u*, *ν*] be the collection of all complex-valued Lebesgue integrable IVFs on [*u*, *ν*]. Then, the interval left and right Katugampola fractional integrals of *f* ∈ L[*u*, *ν*] with order *α* are defined by

$$\mathcal{I}_{u^+}^{p,\alpha} f(x) = \frac{p^{1-\alpha}}{\Gamma(\alpha)}\int_u^x (x^p - \zeta^p)^{\alpha-1}\zeta^{p-1} f(\zeta)\, d\zeta \quad (x > u),\tag{10}$$

and

$$\mathcal{I}_{\nu^-}^{p,\alpha} f(x) = \frac{p^{1-\alpha}}{\Gamma(\alpha)}\int_x^{\nu} (\zeta^p - x^p)^{\alpha-1}\zeta^{p-1} f(\zeta)\, d\zeta \quad (x < \nu),\tag{11}$$

respectively, where $\Gamma(x) = \int_0^{\infty} \zeta^{x-1} e^{-\zeta}\, d\zeta$ is the Euler gamma function.
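Definition (10) can be evaluated directly with a plain quadrature rule (a sketch; the quadrature, the number of nodes, and the test point are illustrative choices, checked against the closed form for the constant function 1, which follows from (10) by the substitution $s = \zeta^p$):

```python
# Sketch: midpoint-rule evaluation of the left Katugampola integral (10),
# checked against I^{p,alpha}_{u+} 1(x) = (x^p - u^p)^alpha / (p^alpha * Gamma(alpha+1)).
import math

def katugampola_left(f, u, x, p, alpha, N=200_000):
    dz = (x - u) / N
    total = 0.0
    for k in range(N):
        z = u + (k + 0.5) * dz          # midpoint of the k-th subinterval
        total += (x**p - z**p)**(alpha - 1) * z**(p - 1) * f(z)
    return p**(1 - alpha) / math.gamma(alpha) * total * dz

p, alpha, u, x = 2.0, 1.5, 0.0, 1.0     # alpha > 1 keeps the integrand bounded
num = katugampola_left(lambda z: 1.0, u, x, p, alpha)
exact = (x**p - u**p)**alpha / (p**alpha * math.gamma(alpha + 1))
print(abs(num - exact) < 1e-4)          # True: small discretization error
```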

The concept of *p*-convex functions was established by Zhang and Wang [10], who also introduced a number of properties of these functions.

**Definition 1.** ([54]). *Let p* ∈ ℝ *with p* ≠ 0. *Then, the interval I is said to be p-convex if*

$$\left[\varrho x^p + (1 - \varrho)y^p\right]^{\frac{1}{p}} \in I,\tag{12}$$

*for all x*, *y* ∈ *I*, *ϱ* ∈ [0, 1], *where p* = 2*n* + 1 *and n* ∈ ℕ*, or p is an odd number.*

**Definition 2.** ([10]). *Let p* ∈ ℝ *with p* ≠ 0 *and I* = [*u*, *ν*] ⊆ ℝ*. Then, the function f* : [*u*, *ν*] → ℝ⁺ *is said to be a p-convex function if*

$$f\left(\left[\varrho x^p + (1-\varrho)y^p\right]^{\frac{1}{p}}\right) \le \varrho f(x) + (1-\varrho)f(y),\tag{13}$$

*for all x*, *y* ∈ [*u*, *ν*], *ϱ* ∈ [0, 1]. *If the inequality* (13) *is reversed, then f is called a p-concave function. The set of all p-convex (p-concave) functions is denoted by*

$$SX([u, \nu], \mathbb{R}^+, p) \ \left(SV([u, \nu], \mathbb{R}^+, p)\right).$$


#### **3. LR-***p***-Convex Interval-Valued Functions**

Now, we introduce LR-*p*-convex interval-valued functions.

**Definition 3.** *The IVF* $f : [u, \nu] \to \mathbb{R}_I^+$ *is said to be LR-p-convex-IVF if, for all x*, *y* ∈ [*u*, *ν*] *and ϱ* ∈ [0, 1]*, we have*

$$f\left(\left[\varrho \mathbf{x}^p + (1-\varrho)y^p\right]^{\frac{1}{p}}\right) \le\_p \varrho f(\mathbf{x}) + (1-\varrho)f(\mathbf{y}).\tag{14}$$

If inequality (14) is reversed, then *f* is said to be LR-*p*-concave on [*u*, *ν*]. The set of all LR-*p*-convex (LR-*p*-concave) IVFs is denoted by

$$LRSX\left([u, \nu], \mathbb{R}_I^+, p\right) \ \left(LRSV\left([u, \nu], \mathbb{R}_I^+, p\right)\right).$$

**Remark 2.** *If p* = 1*, then LR-p-convex-IVF reduces to LR-convex-IVF, see [24].*

If *p* = −1, then we obtain the class of harmonically convex IVFs, which is also new.
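Definition 3 can be probed on a sample grid (a sketch with the hypothetical IVF *f*(*x*) = [*x*², 2*x*²] and *p* = 1, chosen for illustration; both endpoint functions are convex, so (14) should hold at every sampled triple):

```python
# Sketch: sample-based check of Definition 3 for f(x) = [x**2, 2*x**2], p = 1.
def f(x):
    return (x**2, 2*x**2)

def leq_p(I, J):                       # pseudo (LR) order: componentwise <=
    return I[0] <= J[0] and I[1] <= J[1]

p = 1
ok = True
for x in (0.0, 0.5, 1.0, 2.0):
    for y in (0.0, 0.5, 1.0, 2.0):
        for r in (0.0, 0.25, 0.5, 0.75, 1.0):
            z = (r*x**p + (1 - r)*y**p)**(1/p)
            rhs = tuple(r*a + (1 - r)*b for a, b in zip(f(x), f(y)))
            ok = ok and leq_p(f(z), rhs)
print(ok)   # True
```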

Theorem 2 below establishes the relationship between Definition 3 and the endpoint functions of IVFs.

**Theorem 2.** *Let* $f : [u, \nu] \to \mathbb{R}_I^+$ *be an IVF defined by* $f(x) = [\underline{f}(x), \overline{f}(x)]$ *for all x* ∈ [*u*, *ν*]*. Then,* $f \in LRSX([u, \nu], \mathbb{R}_I^+, p)$ *if and only if* $\underline{f}, \overline{f} \in SX([u, \nu], \mathbb{R}^+, p)$*.*

**Proof.** Assume that $\underline{f}, \overline{f} \in SX([u, \nu], \mathbb{R}^+, p)$. Then, for all $x, y \in [u, \nu]$ and $\varrho \in [0, 1]$, we have

$$\underline{f}\left(\left[\varrho x^p + (1-\varrho)y^p\right]^{\frac{1}{p}}\right) \le \varrho\underline{f}(x) + (1-\varrho)\underline{f}(y)$$

and

$$\overline{f}\left(\left[\varrho x^p + (1-\varrho)y^p\right]^{\frac{1}{p}}\right) \le \varrho\overline{f}(x) + (1-\varrho)\overline{f}(y).$$

From Definition 3 and order relation ≤*p*, we have

$$\begin{aligned} & \left[ \underline{f} \left( \left[ \varrho \mathbf{x}^p + (1-\varrho)\mathbf{y}^p \right]^{\frac{1}{p}} \right) , \overline{f} \left( \left[ \varrho \mathbf{x}^p + (1-\varrho)\mathbf{y}^p \right]^{\frac{1}{p}} \right) \right] \\ & \leq\_p \left[ \varrho \underline{f}(\mathbf{x}) + (1-\varrho)\underline{f}(\mathbf{y}), \varrho \overline{f}(\mathbf{x}) + (1-\varrho)\overline{f}(\mathbf{y}) \right] \\ & = \varrho \left[ \underline{f}(\mathbf{x}), \overline{f}(\mathbf{x}) \right] + (1-\varrho) \left[ \underline{f}(\mathbf{y}), \overline{f}(\mathbf{y}) \right] \end{aligned}$$

That is

$$f\left(\left[\varrho x^p + (1-\varrho)y^p\right]^{\frac{1}{p}}\right) \le_p \varrho f(x) + (1-\varrho)f(y), \quad \forall\, x, y \in [u, \nu], \ \varrho \in [0, 1].$$

Hence, $f \in LRSX([u, \nu], \mathbb{R}_I^+, p)$.

Conversely, let $f \in LRSX([u, \nu], \mathbb{R}_I^+, p)$. Then, for all $x, y \in [u, \nu]$ and $\varrho \in [0, 1]$, we have

$$f\left(\left[\varrho\mathbf{x}^p + (1-\varrho)y^p\right]^{\frac{1}{p}}\right) \le\_p \varrho f(\mathbf{x}) + (1-\varrho)f(y).$$

That is

$$\begin{aligned} \left[ \underline{f} \left( \left[ \varrho \mathbf{x}^p + (1-\varrho) y^p \right]^{\frac{1}{p}} \right), \overline{f} \left( \left[ \varrho \mathbf{x}^p + (1-\varrho) y^p \right]^{\frac{1}{p}} \right) \right] & \leq\_p \varrho \left[ \underline{f}(\mathbf{x}), \overline{f}(\mathbf{x}) \right] + (1-\varrho) \left[ \underline{f}(y), \overline{f}(y) \right] \\ & = \left[ \varrho \underline{f}(\mathbf{x}) + (1-\varrho) \underline{f}(y), \varrho \overline{f}(\mathbf{x}) + (1-\varrho) \overline{f}(y) \right] \end{aligned}$$

It follows that

$$\underline{f}\left(\left[\varrho x^p + (1-\varrho)y^p\right]^{\frac{1}{p}}\right) \le \varrho\underline{f}(x) + (1-\varrho)\underline{f}(y)$$

and

$$\overline{f}\left(\left[\varrho x^p + (1-\varrho)y^p\right]^{\frac{1}{p}}\right) \le \varrho\overline{f}(x) + (1-\varrho)\overline{f}(y).$$

Hence, the result follows. □

**Remark 3.** *If* $\underline{f}(x) = \overline{f}(x)$*, then the p-convex-IVF reduces to the classical p-convex function, see [10].*

If $\underline{f}(x) = \overline{f}(x)$ with *p* = 1, then the *p*-convex-IVF reduces to the classical convex function.

**Example 1.** *Let p be an odd number, α* = 1/2*, x* ∈ [2, 3]*, and* $f(x) = \left[-x^{\frac{p}{2}}, 2 - x^{\frac{p}{2}}\right]$*. Then, we clearly see that both endpoint functions* $\underline{f}(x) = -x^{\frac{p}{2}}$ *and* $\overline{f}(x) = 2 - x^{\frac{p}{2}}$ *are p-convex functions. Hence,* $f \in LRSX([u, \nu], \mathbb{R}_I^+, p)$*.*

#### *Fractional Hermite-Hadamard Type Inequalities*

In this section, we will prove some new Hermite-Hadamard type inequalities for LR-*p*-convex-IVFs by means of the pseudo order relation via Katugampola fractional integral operator.

**Theorem 3.** *Let p*, *α* > 0*, u*, *ν* ∈ *I with ν* > *u, and f* ∈ L([*u*, *ν*])*. If* $f \in LRSX([u, \nu], \mathbb{R}_I^+, p)$*, then*

$$f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) \le_p \frac{p^{\alpha}\Gamma(\alpha+1)}{2(\nu^p-u^p)^{\alpha}}\left[\mathcal{I}_{u^+}^{p,\alpha} f(\nu) + \mathcal{I}_{\nu^-}^{p,\alpha} f(u)\right] \le_p \frac{f(u)+f(\nu)}{2}.\tag{15}$$

*If* $f \in LRSV([u, \nu], \mathbb{R}_I^+, p)$*, then*

$$f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) \ge_p \frac{p^{\alpha}\Gamma(\alpha+1)}{2(\nu^p-u^p)^{\alpha}}\left[\mathcal{I}_{u^+}^{p,\alpha} f(\nu) + \mathcal{I}_{\nu^-}^{p,\alpha} f(u)\right] \ge_p \frac{f(u)+f(\nu)}{2}.\tag{16}$$

**Proof.** Let $f \in LRSX([u, \nu], \mathbb{R}_I^+, p)$. Then, by hypothesis, we have

$$2f\left(\left[\frac{u^p + \nu^p}{2}\right]^{\frac{1}{p}}\right) \le_p f\left(\left[\varrho u^p + (1-\varrho)\nu^p\right]^{\frac{1}{p}}\right) + f\left(\left[(1-\varrho)u^p + \varrho\nu^p\right]^{\frac{1}{p}}\right)\tag{17}$$

Multiplying both sides of (17) by $\varrho^{\alpha-1}$ and integrating the obtained result with respect to $\varrho$ over (0, 1), we have

$$2\int_0^1 \varrho^{\alpha-1} f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) d\varrho \le_p \int_0^1 \varrho^{\alpha-1}\left[f\left(\left[\varrho u^p + (1-\varrho)\nu^p\right]^{\frac{1}{p}}\right) + f\left(\left[(1-\varrho)u^p + \varrho\nu^p\right]^{\frac{1}{p}}\right)\right] d\varrho\tag{18}$$

From (18), we get

$$\begin{split} 2\int_0^1 \varrho^{\alpha-1} f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)d\varrho &= 2\left[\int_0^1 \varrho^{\alpha-1}\underline{f}\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)d\varrho,\ \int_0^1 \varrho^{\alpha-1}\overline{f}\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)d\varrho\right]\\ &= 2\left[\frac{1}{\alpha}\underline{f}\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right),\ \frac{1}{\alpha}\overline{f}\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)\right]\\ &= \frac{2}{\alpha} f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right), \end{split}\tag{19}$$

and

$$\begin{aligned} &\int_0^1 \varrho^{\alpha-1}\left[f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)+f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\right]d\varrho\\ &= \int_0^1 \varrho^{\alpha-1}\left[\underline{f}\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right),\ \overline{f}\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)\right]d\varrho\\ &\quad+\int_0^1 \varrho^{\alpha-1}\left[\underline{f}\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right),\ \overline{f}\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\right]d\varrho. \end{aligned}$$

Let $\varrho \in [0,1]$, $x^p = \varrho u^p + (1-\varrho)\nu^p$ and $y^p = (1-\varrho)u^p + \varrho\nu^p$. Then, we have

$$\begin{aligned} &= \frac{p}{(\nu^p-u^p)^\alpha}\left[\int_u^\nu \left(\nu^p-y^p\right)^{\alpha-1}\underline{f}(y)\,y^{p-1}dy,\ \int_u^\nu \left(\nu^p-y^p\right)^{\alpha-1}\overline{f}(y)\,y^{p-1}dy\right]\\ &\quad+\frac{p}{(\nu^p-u^p)^\alpha}\left[\int_u^\nu \left(x^p-u^p\right)^{\alpha-1}\underline{f}(x)\,x^{p-1}dx,\ \int_u^\nu \left(x^p-u^p\right)^{\alpha-1}\overline{f}(x)\,x^{p-1}dx\right]\\ &= \frac{p^\alpha\Gamma(\alpha)}{(\nu^p-u^p)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha} f(\nu) + \mathcal{I}_{\nu^-}^{p,\alpha} f(u)\right].\end{aligned}\tag{20}$$

Since $f \in LRSX\left([u,\nu], \mathbb{R}_I^+, p\right)$, we obtain

$$f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right) \le_p \varrho f(u) + (1-\varrho)f(\nu)\tag{21}$$

and

$$f\left(\left[\varrho\nu^p + (1-\varrho)u^p\right]^{\frac{1}{p}}\right) \le\_p \varrho f(\nu) + (1-\varrho)f(u) \tag{22}$$

Adding (21) and (22), we get

$$f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)+f\left(\left[\varrho\nu^p+(1-\varrho)u^p\right]^{\frac{1}{p}}\right) \le_p f(u)+f(\nu).\tag{23}$$

Multiplying both sides of (23) by $\varrho^{\alpha-1}$ and integrating the obtained result with respect to $\varrho$ over $(0,1)$, we get

$$\frac{p^\alpha\Gamma(\alpha)}{\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha} f(\nu) + \mathcal{I}_{\nu^-}^{p,\alpha} f(u)\right] \le_p \frac{f(u)+f(\nu)}{\alpha}.\tag{24}$$

From (20) and (24), (19) becomes

$$f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) \le_p \frac{p^\alpha\Gamma(\alpha+1)}{2\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha} f(\nu) + \mathcal{I}_{\nu^-}^{p,\alpha} f(u)\right] \le_p \frac{f(u)+f(\nu)}{2},$$

and the theorem is proved. □

**Remark 4.** *Let p* = 1. *Then, Theorem 3 reduces to the result for LR-convex-IVF, which is also a new one:*

$$f\left(\frac{u+\nu}{2}\right) \le_p \frac{\Gamma(\alpha+1)}{2\left(\nu-u\right)^\alpha}\left[\mathcal{I}_{u^+}^{\alpha}f(\nu)+\mathcal{I}_{\nu^-}^{\alpha}f(u)\right] \le_p \frac{f(u)+f(\nu)}{2}.$$

If *α* = 1, then Theorem 3 reduces to the result for LR-*p*-convex-IVF, which is also a new one:

$$f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) \le_p \frac{p}{\nu^p-u^p}\ (IR)\int_u^\nu x^{p-1}f(x)\,dx \le_p \frac{f(u)+f(\nu)}{2}.$$

Let *p* = *α* = 1. Then, Theorem 3 reduces to the result for LR-convex-IVFs, which is also a new one:

$$f\left(\frac{u+\nu}{2}\right) \le\_p \frac{1}{\nu-u} \text{ (IR)} \int\_u^\nu f(x)dx \le\_p \frac{f(u)+f(\nu)}{2}$$

If $\underline{f} = \overline{f}$, then we obtain inequality (13) from Theorem 3.

If $p = 1$ and $\underline{f} = \overline{f}$, then from Theorem 3, we obtain the fractional *HH*-inequality for convex functions; see [41]:

$$f\left(\frac{u+\nu}{2}\right) \le \frac{\Gamma(\alpha+1)}{2\left(\nu-u\right)^\alpha}\left[\mathcal{I}_{u^+}^{\alpha}f(\nu)+\mathcal{I}_{\nu^-}^{\alpha}f(u)\right] \le \frac{f(u)+f(\nu)}{2}.$$

If $\alpha = 1$ and $\underline{f} = \overline{f}$, then Theorem 3 reduces to the result for $p$-convex functions; see [10]:

$$f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) \le \frac{p}{\nu^p-u^p}\int_u^\nu x^{p-1}f(x)\,dx \le \frac{f(u)+f(\nu)}{2}.$$

If $\alpha = p = 1$ and $\underline{f} = \overline{f}$, then we obtain the classical inequality (1) from Theorem 3.
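The last reduction is just the classical Hermite-Hadamard inequality, which can be spot-checked numerically. The sketch below uses the sample convex function $f(x)=e^x$ on $[0,2]$ and a composite Simpson rule; both the function and the `simpson` helper are illustrative choices of ours, not taken from the text.

```python
# Numerical spot-check of the classical reduction (p = alpha = 1):
# f((u+v)/2) <= (1/(v-u)) * int_u^v f(x) dx <= (f(u)+f(v))/2,
# for the sample convex function f(x) = exp(x) on [0, 2].
from math import exp

u, v = 0.0, 2.0
f = exp

def simpson(g, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n subintervals (n even)."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

mean_f = simpson(f, u, v) / (v - u)  # integral mean of f over [u, v]
assert f((u + v) / 2) <= mean_f <= (f(u) + f(v)) / 2
```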

**Example 2.** *Let $p$ be an odd number, $\alpha = \frac{1}{2}$, $x \in [2,3]$ and $f(x) = \left[2-x^{\frac{p}{2}},\ 2\left(2-x^{\frac{p}{2}}\right)\right]$. Then, we clearly see that $f \in L([u,\nu])$ and $f \in LRSX\left([u,\nu], \mathbb{R}_I^+, p\right)$, since $\underline{f}(x) = 2-x^{\frac{p}{2}}$ and $\overline{f}(x) = 2\left(2-x^{\frac{p}{2}}\right)$. Now, we compute the following:*

$$\underline{f}\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) = \underline{f}\left(\frac{5}{2}\right) = \frac{4-\sqrt{10}}{2},$$

$$\overline{f}\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) = \overline{f}\left(\frac{5}{2}\right) = 4-\sqrt{10},$$

$$\frac{\underline{f}(u)+\underline{f}(\nu)}{2} = 2-\frac{\sqrt{2}+\sqrt{3}}{2},$$

$$\frac{\overline{f}(u)+\overline{f}(\nu)}{2} = 4-\sqrt{2}-\sqrt{3}.$$

*Note that*

$$\begin{split}\frac{p^\alpha\Gamma(\alpha+1)}{2\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}\underline{f}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}\underline{f}(u)\right] &= \frac{\Gamma\left(\frac{3}{2}\right)}{2}\frac{1}{\sqrt{\pi}}\int_2^3\left(3^p-x^p\right)^{-\frac{1}{2}}x^{p-1}\left(2-x^{\frac{p}{2}}\right)dx\\ &\quad+\frac{\Gamma\left(\frac{3}{2}\right)}{2}\frac{1}{\sqrt{\pi}}\int_2^3\left(x^p-2^p\right)^{-\frac{1}{2}}x^{p-1}\left(2-x^{\frac{p}{2}}\right)dx\\ &= \frac{1}{4}\left[\frac{7393}{10{,}000}+\frac{9501}{10{,}000}\right]=\frac{8447}{20{,}000}\end{split}$$

*and*

$$\begin{split}\frac{p^\alpha\Gamma(\alpha+1)}{2\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}\overline{f}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}\overline{f}(u)\right] &= \frac{\Gamma\left(\frac{3}{2}\right)}{2}\frac{1}{\sqrt{\pi}}\int_2^3\left(3^p-x^p\right)^{-\frac{1}{2}}x^{p-1}\,2\left(2-x^{\frac{p}{2}}\right)dx\\ &\quad+\frac{\Gamma\left(\frac{3}{2}\right)}{2}\frac{1}{\sqrt{\pi}}\int_2^3\left(x^p-2^p\right)^{-\frac{1}{2}}x^{p-1}\,2\left(2-x^{\frac{p}{2}}\right)dx\\ &= \frac{1}{4}\left[\frac{7393}{5000}+\frac{9501}{5000}\right]=\frac{8447}{10{,}000}.\end{split}$$

*Therefore, we have*

$$\frac{4-\sqrt{10}}{2} \le \frac{8447}{20,000} \le 2 - \frac{\sqrt{2} + \sqrt{3}}{2}$$

$$4 - \sqrt{10} \le \frac{8447}{10,000} \le 4 - \sqrt{2} - \sqrt{3}$$

*and Theorem 3 is verified.*
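Example 2 can also be spot-checked numerically. The sketch below fixes the admissible value $p = 1$ (the example allows any odd $p$; we test one case) and evaluates the middle term of each chain with a composite Simpson rule; the substitutions $x = \nu - t^2$ and $x = u + t^2$ remove the endpoint singularities, and the `simpson` helper is ours.

```python
# Numerical spot-check of Example 2 for p = 1, alpha = 1/2 on [2, 3].
from math import sqrt, pi, gamma

u, v = 2.0, 3.0
f_lo = lambda x: 2 - sqrt(x)        # underline f for p = 1
f_up = lambda x: 2 * (2 - sqrt(x))  # overline f for p = 1

def simpson(g, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n subintervals (n even)."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

def middle(f):
    # Gamma(3/2)/(2 sqrt(pi)) * [int (v-x)^{-1/2} f + int (x-u)^{-1/2} f];
    # x = v - t^2 and x = u + t^2 make both integrands smooth.
    I1 = simpson(lambda t: 2 * f(v - t * t), 0.0, 1.0)
    I2 = simpson(lambda t: 2 * f(u + t * t), 0.0, 1.0)
    return gamma(1.5) / (2 * sqrt(pi)) * (I1 + I2)

lo, up = middle(f_lo), middle(f_up)
assert f_lo(2.5) <= lo <= (f_lo(u) + f_lo(v)) / 2  # (4-sqrt(10))/2 <= lo <= 2-(sqrt(2)+sqrt(3))/2
assert f_up(2.5) <= up <= (f_up(u) + f_up(v)) / 2  # 4-sqrt(10) <= up <= 4-sqrt(2)-sqrt(3)
```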

The next Theorem 4 gives the *HH*-Fejér type inequality for LR-*p*-convex-IVFs.

**Theorem 4.** *Let $p, \alpha > 0$, $u, \nu \in I$ with $\nu > u$, $f \in L([u,\nu])$, and let $\mathcal{W}$ satisfy $\mathcal{W}(x) = \mathcal{W}\left(\left[u^p+\nu^p-x^p\right]^{\frac{1}{p}}\right) \ge 0$ for $x \in I$. If $f \in LRSX\left([u,\nu], \mathbb{R}_I^+, p\right)$, then we have the* HH*-Fejér type inequality as follows:*

$$\begin{split}&f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)\left[\mathcal{I}_{u^+}^{p,\alpha}\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}\mathcal{W}(u)\right]\\ &\le_p \left[\mathcal{I}_{u^+}^{p,\alpha}f\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f\mathcal{W}(u)\right] \le_p \frac{f(u)+f(\nu)}{2}\left[\mathcal{I}_{u^+}^{p,\alpha}\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}\mathcal{W}(u)\right].\end{split}\tag{25}$$
*If $f \in LRSV\left([u,\nu], \mathbb{R}_I^+, p\right)$, then*

$$\begin{split}&f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)\left[\mathcal{I}_{u^+}^{p,\alpha}\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}\mathcal{W}(u)\right] \ge_p \left[\mathcal{I}_{u^+}^{p,\alpha}f\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f\mathcal{W}(u)\right]\\ &\ge_p \frac{f(u)+f(\nu)}{2}\left[\mathcal{I}_{u^+}^{p,\alpha}\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}\mathcal{W}(u)\right].\end{split}\tag{26}$$

**Proof.** Since $f \in LRSX\left([u,\nu], \mathbb{R}_I^+, p\right)$, then for $\varrho \in [0,1]$, we have

$$f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) \le_p \frac{1}{2}\left(f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)+f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\right).\tag{27}$$

Since $\mathcal{W}\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right) = \mathcal{W}\left(\left[\varrho\nu^p+(1-\varrho)u^p\right]^{\frac{1}{p}}\right)$, then multiplying both sides of (27) by $\varrho^{\alpha-1}\mathcal{W}\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)$, and integrating it with respect to $\varrho$ over $[0,1]$, we have

$$\begin{split}&2\int_0^1 \varrho^{\alpha-1}f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)\mathcal{W}\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\ &\le_p \int_0^1 \varrho^{\alpha-1}f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)\mathcal{W}\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\ &\quad+\int_0^1 \varrho^{\alpha-1}f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\mathcal{W}\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\ &= \int_0^1 \varrho^{\alpha-1}\left[\underline{f}\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right),\ \overline{f}\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)\right]\times\mathcal{W}\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\ &\quad+\int_0^1 \varrho^{\alpha-1}\left[\underline{f}\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right),\ \overline{f}\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\right]\times\mathcal{W}\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)d\varrho.\end{split}$$

Let $x^p = \varrho\nu^p+(1-\varrho)u^p$. Then, we have

$$\begin{split}&\frac{2p}{(\nu^p-u^p)^\alpha}\,f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)\int_u^\nu\left(x^p-u^p\right)^{\alpha-1}\mathcal{W}(x)\,x^{p-1}dx\\ &\le_p \frac{p}{(\nu^p-u^p)^\alpha}\left[\int_u^\nu\left(x^p-u^p\right)^{\alpha-1}f\left(\left[u^p+\nu^p-x^p\right]^{\frac{1}{p}}\right)\mathcal{W}(x)\,x^{p-1}dx+\int_u^\nu\left(x^p-u^p\right)^{\alpha-1}f(x)\mathcal{W}(x)\,x^{p-1}dx\right]\\ &= \frac{p}{(\nu^p-u^p)^\alpha}\left[\int_u^\nu\left(\nu^p-x^p\right)^{\alpha-1}f(x)\mathcal{W}(x)\,x^{p-1}dx+\int_u^\nu\left(x^p-u^p\right)^{\alpha-1}f(x)\mathcal{W}(x)\,x^{p-1}dx\right],\end{split}$$

where the last step uses the symmetry $\mathcal{W}(x)=\mathcal{W}\left(\left[u^p+\nu^p-x^p\right]^{\frac{1}{p}}\right)$ together with the change of variable $y^p=u^p+\nu^p-x^p$.

Therefore, we have

$$\frac{p^\alpha\Gamma(\alpha)}{\left(\nu^p-u^p\right)^\alpha}\,f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)\left[\mathcal{I}_{u^+}^{p,\alpha}\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}\mathcal{W}(u)\right] \le_p \frac{p^\alpha\Gamma(\alpha)}{\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}f\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f\mathcal{W}(u)\right].\tag{28}$$

Now, multiplying both sides of (23) by $\varrho^{\alpha-1}\mathcal{W}\left(\left[\varrho\nu^p+(1-\varrho)u^p\right]^{\frac{1}{p}}\right)$, and integrating it with respect to $\varrho$ over $[0,1]$, we get

$$\begin{split}&\int_0^1 \varrho^{\alpha-1}\mathcal{W}\left(\left[\varrho\nu^p+(1-\varrho)u^p\right]^{\frac{1}{p}}\right)f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\ &\quad+\int_0^1 \varrho^{\alpha-1}\mathcal{W}\left(\left[\varrho\nu^p+(1-\varrho)u^p\right]^{\frac{1}{p}}\right)f\left(\left[\varrho\nu^p+(1-\varrho)u^p\right]^{\frac{1}{p}}\right)d\varrho\\ &\le_p \left[f(u)+f(\nu)\right]\int_0^1 \varrho^{\alpha-1}\mathcal{W}\left(\left[\varrho\nu^p+(1-\varrho)u^p\right]^{\frac{1}{p}}\right)d\varrho.\end{split}$$

Therefore, we have

$$\frac{p^\alpha\Gamma(\alpha)}{\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}f\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f\mathcal{W}(u)\right] \le_p \frac{p^\alpha\Gamma(\alpha)}{\left(\nu^p-u^p\right)^\alpha}\,\frac{f(u)+f(\nu)}{2}\left[\mathcal{I}_{u^+}^{p,\alpha}\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}\mathcal{W}(u)\right].\tag{29}$$

Combining (28) and (29), we get

$$\begin{split}&f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)\left[\mathcal{I}_{u^+}^{p,\alpha}\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}\mathcal{W}(u)\right]\\ &\le_p \left[\mathcal{I}_{u^+}^{p,\alpha}f\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f\mathcal{W}(u)\right] \le_p \frac{f(u)+f(\nu)}{2}\left[\mathcal{I}_{u^+}^{p,\alpha}\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}\mathcal{W}(u)\right],\end{split}$$

and the theorem is proved. □

**Remark 5.** *Let p* = 1. *Then, Theorem 4 reduces to the result for LR-convex-IVF, which is also a new one:*

$$f\left(\frac{u+\nu}{2}\right)\left[\mathcal{I}_{u^+}^{\alpha}\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{\alpha}\mathcal{W}(u)\right] \le_p \left[\mathcal{I}_{u^+}^{\alpha}f\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{\alpha}f\mathcal{W}(u)\right] \le_p \frac{f(u)+f(\nu)}{2}\left[\mathcal{I}_{u^+}^{\alpha}\mathcal{W}(\nu)+\mathcal{I}_{\nu^-}^{\alpha}\mathcal{W}(u)\right].$$

*Let α* = 1. *Then, Theorem 4 reduces to the result for LR-p-convex-IVF, which is also a new one:*

$$f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) \le_p \frac{1}{\int_u^\nu x^{p-1}\mathcal{W}(x)dx}\int_u^\nu x^{p-1}f(x)\mathcal{W}(x)dx \le_p \frac{f(u)+f(\nu)}{2}.$$

*Let p* = *α* = 1. *Then, Theorem 4 reduces to the result for LR-convex-IVF, which is also a new one:*

$$f\left(\frac{u+\nu}{2}\right) \le\_p \frac{1}{\int\_u^{\nu} \mathcal{W}(x)dx} \int\_u^{\nu} f(x)\mathcal{W}(x)dx \le\_p \frac{f(u) + f(\nu)}{2}$$

If $\underline{f} = \overline{f}$ and $\alpha = 1$, then from Theorem 4, we get Theorem 5 of [39].

If $\underline{f} = \overline{f}$ and $p = \alpha = 1$, then from Theorem 4, we obtain the classical *HH*-Fejér type inequality (2).

If $\underline{f} = \overline{f}$ and $\mathcal{W}(x) = p = \alpha = 1$, then from Theorem 4, we get the classical *HH*-inequality (1).

If W(*x*) = 1, then from Theorem 4, we get Theorem 3.
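The classical *HH*-Fejér reduction can likewise be checked numerically. In the sketch below, $f(x)=x^2$ and the weight $\mathcal{W}(x)=1+\left(x-\frac{u+\nu}{2}\right)^2$ are illustrative choices of ours; the weight satisfies the required symmetry $\mathcal{W}(x)=\mathcal{W}(u+\nu-x)$ for $p = 1$.

```python
# Numerical spot-check of the classical HH-Fejer reduction (p = alpha = 1):
# f((u+v)/2) <= (int f W)/(int W) <= (f(u)+f(v))/2.
u, v = 0.0, 2.0
f = lambda x: x * x                       # sample convex function
W = lambda x: 1 + (x - (u + v) / 2) ** 2  # symmetric weight: W(x) = W(u+v-x)

def simpson(g, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n subintervals (n even)."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

weighted_mean = simpson(lambda x: f(x) * W(x), u, v) / simpson(W, u, v)
assert f((u + v) / 2) <= weighted_mean <= (f(u) + f(v)) / 2
```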

**Theorem 5.** *Let $p, \alpha > 0$, $u, \nu \in I$ with $\nu > u$ and $f, g \in L([u,\nu])$. If $f, g \in LRSX\left([u,\nu], \mathbb{R}_I^+, p\right)$, then we have*

$$\frac{p^\alpha\Gamma(\alpha+1)}{2\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}f(\nu)g(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f(u)g(u)\right] \le_p \left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)M(u,\nu)+\left(\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N(u,\nu).\tag{30}$$

*If $f, g \in LRSV\left([u,\nu], \mathbb{R}_I^+, p\right)$, then*

$$\frac{p^\alpha\Gamma(\alpha+1)}{2\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}f(\nu)g(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f(u)g(u)\right] \ge_p \left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)M(u,\nu)+\left(\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N(u,\nu),\tag{31}$$

*where*

$$M(u,\nu) = \left[f(u)g(u)+f(\nu)g(\nu)\right]$$

*and*

$$N(u,\nu) = \left[f(u)g(\nu)+f(\nu)g(u)\right].$$

**Proof.** Since $f, g \in LRSX\left([u,\nu], \mathbb{R}_I^+, p\right)$, then for $\varrho \in [0,1]$ we have

$$f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right) \le_p \varrho f(u)+(1-\varrho)f(\nu)$$

and

$$g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right) \le_p \varrho g(u)+(1-\varrho)g(\nu).$$

From the definition of LR-$p$-convex-IVFs, it follows that $0 \le_p f(x)$ and $0 \le_p g(x)$; hence, we have

$$\begin{aligned}&f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)\\ &\le_p \varrho^2 f(u)g(u)+(1-\varrho)^2 f(\nu)g(\nu)+\varrho(1-\varrho)\left[f(\nu)g(u)+f(u)g(\nu)\right].\end{aligned}\tag{32}$$

Similarly, we have

$$\begin{aligned}&f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\\ &\le_p (1-\varrho)^2 f(u)g(u)+\varrho^2 f(\nu)g(\nu)+\varrho(1-\varrho)\left[f(u)g(\nu)+f(\nu)g(u)\right].\end{aligned}\tag{33}$$

Adding (32) and (33), we get

$$\begin{split}&f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)\\ &\quad+f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\\ &\le_p \left[\varrho^2+(1-\varrho)^2\right]\left[f(u)g(u)+f(\nu)g(\nu)\right]+2\varrho(1-\varrho)\left[f(\nu)g(u)+f(u)g(\nu)\right].\end{split}\tag{34}$$

Multiplying both sides of (34) by $\varrho^{\alpha-1}$ and integrating the obtained result with respect to $\varrho$ over $(0,1)$, we have

$$\begin{split}&\int_0^1 \varrho^{\alpha-1}f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\ &\quad+\int_0^1 \varrho^{\alpha-1}f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\ &\le_p M(u,\nu)\int_0^1 \varrho^{\alpha-1}\left[\varrho^2+(1-\varrho)^2\right]d\varrho+2N(u,\nu)\int_0^1 \varrho^{\alpha-1}\varrho(1-\varrho)\,d\varrho.\end{split}\tag{35}$$

From (35), we have

$$\begin{split}&\int_0^1 \varrho^{\alpha-1}f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\ &\quad+\int_0^1 \varrho^{\alpha-1}f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\ &= \frac{p^\alpha\Gamma(\alpha)}{\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}f(\nu)g(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f(u)g(u)\right]\end{split}\tag{36}$$

and

$$\begin{split}M(u,\nu)\int_0^1 \varrho^{\alpha-1}\left[\varrho^2+(1-\varrho)^2\right]d\varrho &+ 2N(u,\nu)\int_0^1 \varrho^{\alpha-1}\varrho(1-\varrho)\,d\varrho\\ &= \frac{2}{\alpha}\left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)M(u,\nu)+\frac{2}{\alpha}\left(\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N(u,\nu).\end{split}\tag{37}$$

From (36) and (37), we have

$$\frac{p^\alpha\Gamma(\alpha+1)}{2\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}f(\nu)g(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f(u)g(u)\right] \le_p \left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)M(u,\nu)+\left(\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N(u,\nu),$$

and the required result has been obtained. □
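The two $\varrho$-integrals evaluated in (37) reduce to power-rule antiderivatives, so the stated closed forms are easy to sanity-check; the helper names in the sketch below are ours.

```python
# Check the evaluations behind (37) for several sample values of a = alpha:
#   int_0^1 r^{a-1}[r^2 + (1-r)^2] dr = (2/a)(1/2 - a/((a+1)(a+2))),
#   2 int_0^1 r^{a-1} r(1-r) dr       = (2/a)(a/((a+1)(a+2))).
def lhs1(a):
    # r^{a-1}(2r^2 - 2r + 1) integrates term by term on [0, 1]
    return 2 / (a + 2) - 2 / (a + 1) + 1 / a

def lhs2(a):
    # 2 r^a (1 - r) integrates term by term on [0, 1]
    return 2 / (a + 1) - 2 / (a + 2)

for a in (0.25, 0.5, 1.0, 2.5):
    assert abs(lhs1(a) - (2 / a) * (0.5 - a / ((a + 1) * (a + 2)))) < 1e-12
    assert abs(lhs2(a) - (2 / a) * (a / ((a + 1) * (a + 2)))) < 1e-12
```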

**Example 3.** *Let $p$ be an odd number, $[u,\nu]=[0,2]$, $\alpha = \frac{1}{2}$, $f(x) = \left[e^{x^p}-4,\ 2x^p\right]$, and $g(x) = \left[x^p-3,\ 2x^p\right]$. Then, $fg \in L([u,\nu])$ and*

$$\begin{split}\frac{p^\alpha\Gamma(1+\alpha)}{2\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}f(\nu)g(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f(u)g(u)\right] &= \frac{\Gamma\left(\frac{3}{2}\right)}{2\sqrt{2}}\frac{1}{\sqrt{\pi}}\int_0^2\left(2^p-x^p\right)^{-\frac{1}{2}}x^{p-1}\left[\left(4-e^{x^p}\right)(3-x^p),\ 4x^{2p}\right]dx\\ &\quad+\frac{\Gamma\left(\frac{3}{2}\right)}{2\sqrt{2}}\frac{1}{\sqrt{\pi}}\int_0^2\left(x^p\right)^{-\frac{1}{2}}x^{p-1}\left[\left(4-e^{x^p}\right)(3-x^p),\ 4x^{2p}\right]dx\\ &\approx [2.6446,\ 5.8664].\end{split}$$

*Note that*

$$M(u,\nu) = \left[f(u)g(u)+f(\nu)g(\nu)\right] = \left[13-e^2,\ 16\right],$$

$$N(u,\nu) = \left[f(u)g(\nu)+f(\nu)g(u)\right] = \left[15-3e^2,\ 0\right].$$

*Therefore, we have*

$$\begin{split}\left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)M(u,\nu)+\left(\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N(u,\nu) &= \frac{11}{15}\left[13-e^2,\ 16\right]+\frac{2}{15}\left[15-3e^2,\ 0\right]\\ &\approx [3.1591,\ 11.7333].\end{split}$$

*It follows that*

$$[2.6446,\ 5.8664] \le_p [3.1591,\ 11.7333],$$

*and Theorem 5 has been illustrated.*
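As with Example 2, the computation can be spot-checked for the admissible value $p = 1$. The endpoint products in the sketch below are $\underline{f}\,\underline{g} = (4-e^x)(3-x)$ and $\overline{f}\,\overline{g} = 4x^2$; the substitutions $x = \nu - t^2$ and $x = t^2$ remove the endpoint singularities, and the `simpson` helper is ours.

```python
# Numerical spot-check of Example 3 for p = 1, alpha = 1/2 on [0, 2].
from math import sqrt, pi, gamma, exp

u, v = 0.0, 2.0
prod_lo = lambda x: (4 - exp(x)) * (3 - x)  # underline f * underline g
prod_up = lambda x: 4 * x * x               # overline f * overline g

def simpson(g, a, b, n=4000):
    """Composite Simpson rule on [a, b] with n subintervals (n even)."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

def lhs(h):
    # Gamma(3/2)/(2 sqrt(2) sqrt(pi)) * [int (2-x)^{-1/2} h + int x^{-1/2} h]
    I1 = simpson(lambda t: 2 * h(v - t * t), 0.0, sqrt(v))
    I2 = simpson(lambda t: 2 * h(t * t), 0.0, sqrt(v))
    return gamma(1.5) / (2 * sqrt(2) * sqrt(pi)) * (I1 + I2)

lo, up = lhs(prod_lo), lhs(prod_up)
e2 = exp(2)
rhs_lo = 11 / 15 * (13 - e2) + 2 / 15 * (15 - 3 * e2)  # lower end of RHS interval
rhs_up = 11 / 15 * 16                                   # upper end of RHS interval
assert lo <= rhs_lo and up <= rhs_up  # [lo, up] <=_p [rhs_lo, rhs_up]
```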

**Theorem 6.** *Let $p, \alpha > 0$, $u, \nu \in I$ with $\nu > u$ and $f, g \in L([u,\nu])$. If $f, g \in LRSX\left([u,\nu], \mathbb{R}_I^+, p\right)$, then we have*

$$\begin{split}2f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)g\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) &\le_p \frac{p^\alpha\Gamma(\alpha+1)}{2\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}f(\nu)g(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f(u)g(u)\right]\\ &\quad+\left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N(u,\nu)+\left(\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)M(u,\nu).\end{split}\tag{38}$$

*If $f, g \in LRSV\left([u,\nu], \mathbb{R}_I^+, p\right)$, then*

$$\begin{split}f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)g\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) &\ge_p \frac{p^\alpha\Gamma(\alpha+1)}{4\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}f(\nu)g(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f(u)g(u)\right]\\ &\quad+\frac{1}{2}\left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N(u,\nu)+\frac{1}{2}\left(\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)M(u,\nu),\end{split}\tag{39}$$

*where M*(*u*, *ν*) *and N*(*u*, *ν*) *are given in Theorem 5.*

**Proof.** Since $f, g \in LRSX\left([u,\nu], \mathbb{R}_I^+, p\right)$, then by hypothesis, for $\varrho \in [0,1]$ we have

$$\begin{split}
&f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)g\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)\\
&= f\left(\left[\frac{(1-\varrho)u^p+\varrho\nu^p}{2}+\frac{\varrho u^p+(1-\varrho)\nu^p}{2}\right]^{\frac{1}{p}}\right)g\left(\left[\frac{(1-\varrho)u^p+\varrho\nu^p}{2}+\frac{\varrho u^p+(1-\varrho)\nu^p}{2}\right]^{\frac{1}{p}}\right)\\
&\le_p \frac{1}{4}\left[f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)+f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\right]\left[g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)+g\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\right]\\
&= \frac{1}{4}\left[f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)+f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\right]\\
&\quad+\frac{1}{4}\left[f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)+f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)\right]\\
&\le_p \frac{1}{4}\left[f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)+f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)\right]\\
&\quad+\frac{1}{4}\left(2\varrho^2-2\varrho+1\right)N(u,\nu)+\frac{1}{2}\varrho(1-\varrho)M(u,\nu).
\end{split}\tag{40}$$

Multiplying both sides of (40) by $\varrho^{\alpha-1}$ and integrating the result with respect to $\varrho$ over $(0,1)$, we have

$$\begin{split}&\int_0^1 \varrho^{\alpha-1}f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)g\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)d\varrho\\
&\le_p \frac{1}{4}\left[\int_0^1 \varrho^{\alpha-1}f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)d\varrho\right.\\
&\qquad\left.+\int_0^1 \varrho^{\alpha-1}f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)d\varrho\right]\\
&\quad+\frac{1}{4}\int_0^1 \varrho^{\alpha-1}\left(2\varrho^2-2\varrho+1\right)N(u,\nu)\,d\varrho+\frac{1}{2}\int_0^1 \varrho^{\alpha-1}\varrho(1-\varrho)M(u,\nu)\,d\varrho.\end{split}\tag{41}$$

From (41), we get

$$\begin{split}\int_0^1 \varrho^{\alpha-1}f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)g\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)d\varrho &= \left[\int_0^1 \varrho^{\alpha-1}\underline{f}\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)\underline{g}\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)d\varrho,\right.\\ &\qquad\left.\int_0^1 \varrho^{\alpha-1}\overline{f}\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)\overline{g}\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)d\varrho\right]\\ &= \frac{1}{\alpha}f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right)g\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right).\end{split}\tag{42}$$

On the other hand, taking $x^p = \varrho u^p+(1-\varrho)\nu^p$ and $y^p = (1-\varrho)u^p+\varrho\nu^p$ in the right-hand side of (41), we get

$$\begin{split}
&\frac{1}{4}\int_0^1 \varrho^{\alpha-1}f\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[\varrho u^p+(1-\varrho)\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\
&\quad+\frac{1}{4}\int_0^1 \varrho^{\alpha-1}f\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)g\left(\left[(1-\varrho)u^p+\varrho\nu^p\right]^{\frac{1}{p}}\right)d\varrho\\
&\quad+\frac{1}{4}\int_0^1 \varrho^{\alpha-1}\left(2\varrho^2-2\varrho+1\right)N(u,\nu)\,d\varrho+\frac{1}{2}\int_0^1 \varrho^{\alpha-1}\varrho(1-\varrho)M(u,\nu)\,d\varrho\\
&= \frac{p}{4\left(\nu^p-u^p\right)^\alpha}\left[\int_u^\nu\left(\nu^p-x^p\right)^{\alpha-1}f(x)g(x)\,x^{p-1}dx+\int_u^\nu\left(y^p-u^p\right)^{\alpha-1}f(y)g(y)\,y^{p-1}dy\right]\\
&\quad+\frac{1}{2\alpha}\left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N(u,\nu)+\frac{1}{2\alpha}\left(\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)M(u,\nu)\\
&= \frac{p^\alpha\Gamma(\alpha)}{4\left(\nu^p-u^p\right)^\alpha}\left[\mathcal{I}_{u^+}^{p,\alpha}f(\nu)g(\nu)+\mathcal{I}_{\nu^-}^{p,\alpha}f(u)g(u)\right]\\
&\quad+\frac{1}{2\alpha}\left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N(u,\nu)+\frac{1}{2\alpha}\left(\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)M(u,\nu).
\end{split}\tag{43}$$
 
From (42) and (43), (41) becomes:

$$\begin{aligned}
2 f\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) g\left(\left[\frac{u^p+\nu^p}{2}\right]^{\frac{1}{p}}\right) \leq\_p\ & \frac{p^{\alpha}\,\Gamma(\alpha+1)}{2(\nu^p-u^p)^{\alpha}}\left[\mathcal{I}\_{u+}^{p,\alpha} f(\nu)g(\nu)+\mathcal{I}\_{\nu-}^{p,\alpha} f(u)g(u)\right] \\
&+\left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N(u,\nu)+\left(\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)M(u,\nu).
\end{aligned}$$

Hence, Theorem 6 has been proved. □

**Example 4.** *Let $p$ be an odd number and $\alpha = 1$ for $\varrho \in [0, 1]$, and let the LR-$p$-convex IVFs $f, g : [u, \vartheta] = [2, 3] \to \mathbb{R}^{+}\_{I}$ be defined by $f(x) = \left[2 - x^{\frac{p}{2}},\ 2\left(2 - x^{\frac{p}{2}}\right)\right]$ and $g(x) = [x^p, 2x^p]$. Since $f\_{\ast}(x) = 2 - x^{\frac{p}{2}}$, $f^{\ast}(x) = 2\left(2 - x^{\frac{p}{2}}\right)$ and $g\_{\ast}(x) = x^p$, $g^{\ast}(x) = 2x^p$, we compute the following:*

$$\begin{aligned}
2\, f\_{\ast}\left(\left[\tfrac{u^p+\vartheta^p}{2}\right]^{\frac{1}{p}}\right) g\_{\ast}\left(\left[\tfrac{u^p+\vartheta^p}{2}\right]^{\frac{1}{p}}\right)&=\frac{20-5\sqrt{10}}{2}, \\
2\, f^{\ast}\left(\left[\tfrac{u^p+\vartheta^p}{2}\right]^{\frac{1}{p}}\right) g^{\ast}\left(\left[\tfrac{u^p+\vartheta^p}{2}\right]^{\frac{1}{p}}\right)&=40-10\sqrt{10}, \\
\frac{p^{\alpha}\,\Gamma(\alpha+1)}{2(\vartheta^p-u^p)^{\alpha}}\left[\mathcal{I}\_{u+}^{p,\alpha} f\_{\ast}(\vartheta)g\_{\ast}(\vartheta)+\mathcal{I}\_{\vartheta-}^{p,\alpha} f\_{\ast}(u)g\_{\ast}(u)\right]&=1, \\
\frac{p^{\alpha}\,\Gamma(\alpha+1)}{2(\vartheta^p-u^p)^{\alpha}}\left[\mathcal{I}\_{u+}^{p,\alpha} f^{\ast}(\vartheta)g^{\ast}(\vartheta)+\mathcal{I}\_{\vartheta-}^{p,\alpha} f^{\ast}(u)g^{\ast}(u)\right]&=4, \\
\frac{\alpha}{(\alpha+1)(\alpha+2)}\,M\_{\ast}(u,\vartheta)&=\frac{1}{6}\left(10-2\sqrt{2}-3\sqrt{3}\right), \\
\frac{\alpha}{(\alpha+1)(\alpha+2)}\,M^{\ast}(u,\vartheta)&=\frac{4}{6}\left(10-2\sqrt{2}-3\sqrt{3}\right), \\
\left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N\_{\ast}(u,\vartheta)&=\frac{1}{3}\left(10-3\sqrt{2}-2\sqrt{3}\right), \\
\left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)N^{\ast}(u,\vartheta)&=\frac{4}{3}\left(10-3\sqrt{2}-2\sqrt{3}\right),
\end{aligned}$$

*that means*

$$\begin{aligned} \frac{20 - 5\sqrt{10}}{2} &\le \left( 1 + \frac{30 - 8\sqrt{2} - 7\sqrt{3}}{6} \right), \\ 40 - 10\sqrt{10} &\le \left( 4 + \frac{60 - 16\sqrt{2} - 14\sqrt{3}}{3} \right), \end{aligned}$$

*hence, Theorem 6 has been illustrated.*

#### **4. Conclusions**

In this work, we introduced the new class of LR-*p*-convex interval-valued functions and established some new Hermite–Hadamard inequalities by means of the pseudo-order relation via the Katugampola fractional integral operator. Useful examples that verify the applicability of the theory developed in this study are presented. We intend to use various types of LR-convex interval-valued functions to construct interval inequalities of interval-valued functions. In the future, we will try to extend this concept to fuzzy-interval-valued functions by means of the fuzzy pseudo-order relation.

**Author Contributions:** Conceptualization, M.B.K. and M.A.N.; validation, P.O.M., D.B. and J.L.G.G.; formal analysis, D.B. and J.L.G.G.; investigation, M.B.K., M.A.N. and D.B.; resources, M.B.K. and M.A.N.; writing—original draft, M.B.K. and M.A.N.; writing—review and editing, M.B.K., P.O.M. and D.B.; visualization, M.A.N., P.O.M. and D.B.; supervision, M.A.N. and P.O.M.; project administration, M.A.N. and J.L.G.G. All authors have read and agreed to the published version of the manuscript.

**Data Availability Statement:** No data were used to support this study.

**Acknowledgments:** The authors would like to thank the Rector, COMSATS University Islamabad, Islamabad, Pakistan, for providing excellent research and academic environments. This work has been partially supported by Ministerio de Ciencia, Innovación y Universidades, grant number PGC2018-097198-B-I00 and by Fundación Séneca of Región de Murcia, grant number 20783/PI/18.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **The Approximate and Analytic Solutions of the Time-Fractional Intermediate Diffusion Wave Equation Associated with the Fokker–Planck Operator and Applications**

**Entsar A. Abdel-Rehim**

Department of Mathematics, Faculty of Science, Suez Canal University, Eldaree Street, Ismailia 41522, Egypt; entsarabdelrehim@yahoo.com

**Abstract:** In this paper, the time-fractional wave equation associated with the space-fractional Fokker–Planck operator and with the time-fractional-damped term is studied. The concept of the Green function is implemented to derive the analytic solution of the three-term time-fractional equation. The explicit expressions for the Green function *G*3(*t*) of the three-term time-fractional wave equation with constant coefficients are also derived for two physical and biological models. The explicit analytic solutions, for the two studied models, are expressed in terms of the Weber, hypergeometric, exponential, and Mittag–Leffler functions. The relation to the diffusion equation is given. The asymptotic behaviors of the Mittag–Leffler function, the hypergeometric function 1*F*1, and the exponential functions are compared numerically. The Grünwald–Letnikov scheme is used to derive the approximate difference schemes of the Caputo time-fractional operator and the Feller–Riesz space-fractional operator. The explicit difference scheme is numerically studied, and the simulations of the approximate solutions are plotted for different values of the fractional orders.

**Keywords:** space–fractional Fokker–Planck operator; time–fractional wave with the time–fractional damped term; Laplace transform; Mittag–Leffler function; Grünwald–Letnikov scheme; potential and current in an electric transmission line; random walk of a population

**MSC:** 26A33; 35L05; 35J05; 45K05; 60J60; 60G50; 60G51; 65N06; 80-99; 42A38; 33C20; 44A10

#### **1. Introduction and Important Definitions**

The classical intermediate diffusion wave equation, the multiterm wave equation, can be written as:

$$\frac{\partial^2 u(\mathbf{x},t)}{\partial t^2} + k \frac{\partial u(\mathbf{x},t)}{\partial t} = L\_{FP}(u(\mathbf{x},t)) \,, -\infty < \mathbf{x} < \infty \,, t \ge 0 \,\tag{1}$$

where the right-hand side of this equation is the known Fokker–Planck operator; see [1]. The Fokker–Planck operator is always associated with stochastic processes and is defined as:

$$L\_{FP}\left(u(\mathbf{x},t)\right) = \frac{\partial^2 \left(a(\mathbf{x})\,u(\mathbf{x},t)\right)}{\partial \mathbf{x}^2} - \frac{\partial}{\partial \mathbf{x}} \left(b(\mathbf{x})\,u(\mathbf{x},t)\right) \,,\tag{2}$$

where −∞ < *x* < ∞ , *t* ≥ 0 . The Fokker–Planck operator *LFP* can be derived from the associated stochastic differential equations, since it describes how a collection of initial data evolves in time. This wave equation is governed by the initial conditions:

$$
u(\mathbf{x},0) = f(\mathbf{x}\_0), \ \ u\_t(\mathbf{x},0) = 0, \ u\_t(0,t) = 0 \ , \tag{3}
$$

and the boundary conditions:

$$
u(-\infty, t) = u(\infty, t) = 0 \quad . \tag{4}
$$


**Citation:** Abdel-Rehim, E.A. The Approximate and Analytic Solutions of the Time-Fractional Intermediate Diffusion Wave Equation Associated with the Fokker–Planck Operator and Applications. *Axioms* **2021**, *10*, 230. https://doi.org/10.3390/ axioms10030230

Academic Editor: Jorge E. Macías Díaz

Received: 14 August 2021 Accepted: 13 September 2021 Published: 17 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Equation (1) mathematically models sound propagation and many physical, chemical, biological, medical, and other real-life phenomena. The description of *u*(*x*, *t*) depends on the nature of the model. Generally, *a*(*x*) and *b*(*x*) are predefined functions according to the model; *b*(*x*) represents the drift (the external force) acting on the wave. The constant *k* with 0 < *k* < 1 is the friction coefficient of the resistance source. The telegraph equation or the cable equation is a special case of Equation (1); see for example [2–5].

Experimental evidence shows that over diagnostic ultrasound frequencies, the acoustic absorption in biological tissue and the wave propagation in many other natural phenomena exhibit a power law with a noninteger exponent, i.e., *t*<sup>−*β*</sup>, with 1 < *β* < 2; see [6–8]. Here, *β* = 1 represents the classical diffusion equation, 0 < *β* < 1 represents the time-fractional diffusion equation, 1 < *β* < 2 represents the intermediate diffusion wave equation, and *β* = 2 represents the classical wave equation. To mathematically model such real phenomena, the extension to time-fractional derivatives is required.

Experimentally, many physical and chemical phenomena exhibit very sharp random walks (random jumps), and their continuous random walk is not a Brownian motion. Solutes that move through fractal media commonly exhibit large deviations from the stochastic processes of Brownian motion and do not require a finite velocity. The extension to Lévy-stable motion is a straightforward generalization due to the common properties of Lévy-stable motion and Brownian motion, but the Lévy flights differ from the regular Brownian motion due to the occurrence of extremely long jumps, whose length is distributed according to the Lévy long tail ∼ |*x*| <sup>−</sup>1−*γ*, 0 < *γ* < 2. Therefore, in this paper, we are interested in studying the spacetime-fractional intermediate diffusion wave equation with the time-fractional-damped term, which reads:

$${}\_{t}D\_{\ast}^{\beta}u(\mathbf{x},t) + k \, {}\_{t}D\_{\ast}^{\alpha}u(\mathbf{x},t) = L\_{FP}^{\gamma}\left( u(\mathbf{x},t) \right) \,, \tag{5}$$

where 0 < *β* < 2, 0 < *α* ≤ 1, and 0 < *γ* < 2. The space-fractional Fokker–Planck operator is defined as:

$$L\_{FP}^{\gamma}\left(u(\mathbf{x},t)\right) = {}\_{\mathbf{x}}D\_{0}^{\gamma}\left( a(\mathbf{x})\, u(\mathbf{x}, t) \right) - \frac{\partial}{\partial \mathbf{x}} \left( b(\mathbf{x})\, u(\mathbf{x}, t) \right) \,. \tag{6}$$

Here, ${}\_{\mathbf{x}}D\_{0}^{\gamma}$ is the Riesz–Feller potential operator [9]. This fractional operator allows us to simulate the discrete solution along the whole $x$-axis. For a sufficiently well-behaved function $f(\mathbf{x})$, the Fourier transform of the Riesz–Feller operator is $-|\kappa|^{\gamma} \widehat{f}(\kappa)$. ${}\_{t}D\_{\ast}^{\beta}$ is the Caputo time-fractional operator, with $0 < \beta < 2$. The Caputo time-fractional operator (see [10]) is defined as:

$$D\_{\ast}^{\beta}f(t) = \begin{cases} \frac{1}{\Gamma(m-\beta)} \int\_{0}^{t} \frac{f^{(m)}(\tau)}{(t-\tau)^{\beta+1-m}} d\tau & \text{for } m-1 < \beta < m\\ \frac{d^{m}}{dt^{m}} f(t) & \text{for } \beta = m \end{cases} \tag{7}$$

where:

$$K\_{\beta}(t-\tau) = \frac{(t-\tau)^{m-\beta-1}}{\Gamma(m-\beta)}\,,$$

is its kernel and is called the memory function. This kernel reflects the memory effects on many physical, biological, and other processes. The Caputo fractional derivative *D<sup>β</sup>* ∗ is used as a time-fractional operator because of its image in the Laplace transform domain, which is:

$$\mathcal{L}\{D\_{\ast}^{\beta}f(t);s\} = s^{\beta}\widetilde{f}(s) - s^{\beta-1}f(0) - s^{\beta-2}\dot{f}(0) - \dots - s^{\beta-m}f^{(m-1)}(0), \ s > 0 \ .$$

If $\dot{f}(0) = 0, \dots, f^{(m-1)}(0) = 0$, then:

$$\mathcal{L}\{D\_{\ast}^{\beta}f(t);s\} = s^{\beta}\widetilde{f}(s) - s^{\beta - 1}f(0) \; , \; s > 0 \; . \tag{8}$$

In other words, the Caputo time-fractional operator is dependent on the initial condition, and this is the main reason for using it as a time-fractional derivative operator.
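The Grünwald–Letnikov scheme mentioned in the Abstract gives a simple discrete approximation of this operator; for a function with $f(0) = \dot{f}(0) = 0$ the Grünwald–Letnikov and Caputo derivatives coincide. A minimal sketch (the function name and the test values are ours, not the paper's):

```python
import math

def gl_fractional_derivative(f, t, beta, n_steps=4000):
    """Grunwald-Letnikov approximation of the order-beta derivative of f at t."""
    h = t / n_steps
    w = 1.0                 # w_0 = (-1)^0 * binom(beta, 0) = 1
    acc = w * f(t)
    for j in range(1, n_steps + 1):
        w *= 1.0 - (beta + 1.0) / j   # recurrence for (-1)^j * binom(beta, j)
        acc += w * f(t - j * h)
    return acc / h**beta

# Caputo derivative of f(t) = t^2 at t = 1 (here f(0) = f'(0) = 0 and
# 0 < beta < 1): D^beta t^2 = 2 t^{2-beta} / Gamma(3 - beta).
beta = 0.5
approx = gl_fractional_derivative(lambda t: t**2, 1.0, beta)
exact = 2.0 / math.gamma(3.0 - beta)
```

Since the scheme is first-order accurate, the two values agree to roughly the step size h.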

Some attempts have been made to discuss such problems. Luchko [11] attempted to derive the fundamental solution of the multidimensional fractional wave equation in order to discuss its solution for some special cases in the form of convergent series. Gorenflo [12] discussed the stochastic processes related to the fractional wave equations and their distributed order. Anh and Leonenko [13] presented the Green functions and the spectral representations of the mean-squared solutions of the fractional diffusion–wave equations with random initial conditions. Chen et al. [14] discussed the analytical solution of the time-fractional telegraph equation with three kinds of nonhomogeneous boundary conditions, namely the Dirichlet, Neumann, and Robin boundary conditions. Wyss [15] used the Mellin transform theory to derive a closed-form solution of the fractional diffusion equation in terms of Fox's H-function. Abdel-Rehim et al. [16–18] studied the explicit approximate solutions of the multiterm time-fractional wave equation, its stationary solutions for different values of the fractional orders, and their time evolutions. The Grünwald–Letnikov scheme and the common explicit finite difference rules were implemented to derive the approximate solutions, which were proven to be convergent. Sarvestani et al. [19] derived a wavelet approach for the multiterm time-fractional diffusion–wave equation. Mainardi investigated some numerical results for this equation in his book [20].

The aim of the paper is to derive the analytical solution of the classical (1) and the spacetime-fractional wave with time-fractional attenuation Equation (5). The analytic solutions are given by using the separation of variables and by implementing the concepts of the Green function of the three-term equations [9]. The resulting solutions are written in the form of some known special functions. The solutions are proven to be asymptotically convergent solutions. Two physical and biological applications to the time-fractional wave equation associated with the Fokker–Planck operator are also discussed. The stationary solutions are also given and compared. The approximate solutions of the two applications are obtained by implementing the common finite difference rule and the Grünwald–Letnikov scheme.

The organization of this paper is as follows: Section 1 is devoted to the Introduction. Section 2 introduces the two physical and biological applications. Section 3 derives the analytical solution of the classical models. Section 4 is devoted to the solution of the time-fractional models. Section 5 introduces the approximate solutions of the two studied models. Finally, Section 6 is devoted to simulating the approximate solutions and numerically discussing and comparing the asymptotic behaviors of the obtained special functions.

#### **2. Applications**

First, we begin by mathematically formulating the potential and current in an electric transmission line (the cable equation). Consider a transmission line consisting of a coaxial cable with resistance *R*, inductance *L*, capacitance *C*, and leakage conductance *G*. Introduce the function *I*(*x*, *t*) to represent the current and *V*(*x*, *t*) the potential. These variables satisfy the following coupled equations:

$$L\frac{\partial I}{\partial t} + R\,I = -\frac{\partial V}{\partial \mathbf{x}}\,,\tag{9}$$

and:

$$C\frac{\partial V}{\partial t} + G\,V = -\frac{\partial I}{\partial x}\,. \tag{10}$$

Differentiate (9) with respect to *t* and differentiate (10) with respect to *x* in order to eliminate *I* and *V*. After some minor algebra, one can prove that both *I* and *V* satisfy the same following equation:

$$\frac{\partial^2 I}{\partial t^2} + (p + q)\frac{\partial I}{\partial t} = c^2 \frac{\partial^2 I}{\partial x^2} - pqI \,, \tag{11}$$

where *kc*<sup>2</sup> = *R*/*L* + *G*/*C* = *p* + *q* and *bc*<sup>2</sup> = *pq*. Replacing *I*(*x*, *t*) by *u*(*x*, *t*), this equation is rewritten as:

$$c^2 \frac{\partial^2 u(\mathbf{x}, t)}{\partial t^2} + kc^2 \frac{\partial u(\mathbf{x}, t)}{\partial t} = c^2 \frac{\partial^2 u(\mathbf{x}, t)}{\partial \mathbf{x}^2} + bc^2 u(\mathbf{x}, t) \,. \tag{12}$$

The function *V*(*x*, *t*) satisfies the same Equations (11) and (12). This equation is called the telegraph equation (cable equation) and mathematically models the electrical signal traveling along the transmission cable, in which the term $k\,\partial u(\mathbf{x},t)/\partial t$ is called the internal resistance of the wires comprising the transmission lines. For further applications in physics and to real phenomena, see [4,5,13]. For this model, the Fokker–Planck operator is *LFP* = *c*<sup>2</sup>(*Dxx* + *b*).
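The elimination leading to Equation (11) can be checked with a plane-wave ansatz $I, V \propto e^{i(\kappa \mathbf{x} - \omega t)}$: substituting into (9) and (10) gives $\kappa^2 = -(R - i\omega L)(G - i\omega C)$, while (11) with $p + q = R/L + G/C$, $c^2 = 1/(LC)$, and $pq = RG/(LC)$ gives $\kappa^2 = (\omega^2 + i\omega(p+q) - pq)/c^2$. A quick numerical comparison (the parameter values are arbitrary test choices of ours):

```python
# Plane-wave (dispersion-relation) check of the telegraph-equation elimination.
R, L, C, G = 2.0, 0.5, 0.25, 0.1   # arbitrary positive test values
omega = 1.7

# From (9)-(10): (R - i*omega*L) I = -i*kappa*V and (G - i*omega*C) V = -i*kappa*I,
# hence kappa^2 = -(R - i*omega*L)(G - i*omega*C).
kappa2_system = -(R - 1j * omega * L) * (G - 1j * omega * C)

# From (11) with p + q = R/L + G/C, c^2 = 1/(L*C), p*q = R*G/(L*C):
p_plus_q = R / L + G / C
c2 = 1.0 / (L * C)
pq = R * G / (L * C)
kappa2_telegraph = (omega**2 + 1j * omega * p_plus_q - pq) / c2
```

The two computed values of $\kappa^2$ agree to machine precision, confirming that (11) is equivalent to the first-order system.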

#### *The Continuous-Time Random Walk of a Population*

This model describes a population of individuals moving either to the left or to the right along the *x*-axis. The probability density functions of moving right and left are *w*(*x*, *t*) and *v*(*x*, *t*), respectively, and the total moving population has density *u*(*x*, *t*) = *w*(*x*, *t*) + *v*(*x*, *t*). At each time instant *τ*, an individual can reverse its direction of motion with probability *kτ* or keep moving in the same direction with probability 1 − *kτ*. At the next time step, one has:

$$\frac{\partial w(\mathbf{x},t)}{\partial t} = -\rho \frac{\partial w(\mathbf{x},t)}{\partial \mathbf{x}} + k(v(\mathbf{x},t) - w(\mathbf{x},t)) \,, \tag{13}$$

and:

$$\frac{\partial v(\mathbf{x},t)}{\partial t} = \rho \frac{\partial v(\mathbf{x},t)}{\partial \mathbf{x}} - k(v(\mathbf{x},t) - w(\mathbf{x},t)) \,. \tag{14}$$

Adding (13) and (14) and differentiating with respect to *t*, then subtracting (13) from (14) and differentiating with respect to *x*, we obtain:

$$\frac{\partial^2(v+w)}{\partial t^2} = \rho \frac{\partial^2(v-w)}{\partial x \partial t} \,, \tag{15}$$

and:

$$\frac{\partial^2(v-w)}{\partial x \partial t} = \rho \frac{\partial^2(v+w)}{\partial x^2} - 2k \frac{\partial(v-w)}{\partial x} \,. \tag{16}$$

Substituting (16) into (15), and using *ρ* ∂(*v* − *w*)/∂*x* = ∂(*v* + *w*)/∂*t* from the sum of (13) and (14), we obtain:

$$\frac{\partial^2 u(\mathbf{x},t)}{\partial t^2} + 2k \frac{\partial u(\mathbf{x},t)}{\partial t} = \rho^2 \frac{\partial^2 u(\mathbf{x},t)}{\partial \mathbf{x}^2} \,. \tag{17}$$

This means the Fokker–Planck operator in this case is *LFP* = *ρ*2*Dxx*. Now, take the direction of the movement of the individuals into consideration. In other words, the individuals move to the right with probability *λ*<sup>1</sup> and move to left with probability *λ*2. Make the suitable changes to the system of Equations (13) and (14) and follow the same mathematical manipulation to obtain:

$$\frac{\partial^2 u(\mathbf{x},t)}{\partial t^2} + (\lambda\_1 + \lambda\_2) \frac{\partial u(\mathbf{x},t)}{\partial t} = \rho^2 \frac{\partial^2 u(\mathbf{x},t)}{\partial \mathbf{x}^2} + \rho (\lambda\_1 - \lambda\_2) \frac{\partial u(\mathbf{x},t)}{\partial \mathbf{x}} \,. \tag{18}$$

Then, the Fokker–Planck operator is *LFP* <sup>=</sup> *<sup>ρ</sup>*2*Dxx* <sup>+</sup> *<sup>ρ</sup>*(*λ*<sup>1</sup> <sup>−</sup> *<sup>λ</sup>*2)*Dx*. If *<sup>λ</sup>*<sup>1</sup> <sup>&</sup>gt; *<sup>λ</sup>*2, the individual moves right, and if *λ*<sup>1</sup> < *λ*2, the individual moves left. This is known as the simple random walk model. Now, suppose the individual is sitting at the position *xj* at the time instant *tn* and either stays at *xj* or moves to *xj*−<sup>1</sup> or *xj*+<sup>1</sup> with probabilities *λ*1, *λ*2, *λ*<sup>3</sup> at the next time instant *tn*+<sup>1</sup>, with *λ*<sup>1</sup> + *λ*<sup>2</sup> + *λ*<sup>3</sup> = 1. Then, we obtain a similar wave equation, but with the Fokker–Planck operator defined as *LFPu*(*x*, *t*) = *aDxxu*(*x*, *t*) + *bDx*(*xu*(*x*, *t*)). For more information about the random walk in biology, see [21].
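The stochastic process behind Equation (17) is the persistent (velocity-jump) random walk: each walker moves with speed *ρ* and reverses direction at rate *k*. A Monte Carlo sketch (parameter values and variable names are our own; for the telegraph process the exact variance is Var X(T) = (ρ²/2k²)(2kT − 1 + e^{−2kT})):

```python
import math
import random
import statistics

random.seed(7)
rho, k, dt, T, n_walkers = 1.0, 2.0, 0.01, 1.0, 4000
steps = int(T / dt)

positions = []
for _ in range(n_walkers):
    x, direction = 0.0, random.choice((-1, 1))
    for _ in range(steps):
        if random.random() < k * dt:   # reverse direction with rate k
            direction = -direction
        x += direction * rho * dt
    positions.append(x)

mean = statistics.fmean(positions)
var = statistics.pvariance(positions)
# Exact telegraph-process variance for these parameters (about 0.377):
exact_var = (rho**2 / (2 * k**2)) * (2 * k * T - 1 + math.exp(-2 * k * T))
```

The empirical mean stays near zero and the empirical variance lands close to the telegraph-process value, up to the O(dt) discretization and Monte Carlo noise.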

The movement of the potential and electricity in the transmission line and the random movement of the population are stochastic processes. Therefore, mathematically modeling them in spacetime-fractional differential equations is a natural generalization to their classical partial differential equations. The numerical results show the effects of the fractional orders on the time evolution of approximate solutions.

#### **3. The Analytical Solution of the Classical Models**

To solve the above-defined partial differential equations, we use the separation of variables method:

$$u(\mathbf{x},t) = X(\mathbf{x}) \, T(t) \, \tag{19}$$

and the initial conditions (3) are rewritten as:

$$T(0) = 1, \dot{T}(0) = 0, X(0) = \delta(\mathbf{x}), \dot{X}(0) = 0 \; , \tag{20}$$

while the boundary conditions (4) are rewritten as:

$$X(-\infty) = X(\infty) = 0 \quad . \tag{21}$$

Applying Equation (19) to Equation (12) (see [14]), we obtain two ordinary differential equations:

$$c^2 \frac{d^2X}{d\mathbf{x}^2} + mX + bc^2 = 0 \; , \tag{22}$$

and:

$$\frac{d^2T}{dt^2} + k\frac{dT}{dt} + m\,T = 0\,\,\,\,\,\tag{23}$$

where for the stability, the friction constant *k* is chosen to satisfy 0 < *k* ≤ 1. Equation (23) models the harmonic oscillator in a resisting medium.

The solution of Equation (22) is:

$$X = c\_1 + c\_2 \cos\sqrt{\frac{m}{c^2}}x + c\_3 \sin\sqrt{\frac{m}{c^2}}x$$

and by applying the initial conditions (20), we obtain:

$$X = -\frac{bc^2}{m} + \cos\sqrt{\frac{m}{c^2}}x\tag{24}$$

Applying the separation of variables on Equation (17), we obtain:

$$\frac{d^2X}{d\mathbf{x}^2} = -\frac{m}{\rho^2}X \,, \tag{25}$$

and the same Equation (23). By applying the initial conditions (20), Equation (25) has the solution $X(\mathbf{x}) = \cos\left(\sqrt{\frac{m}{\rho^2}}\,\mathbf{x}\right)$. Now, the analytic solution of Equation (18) is obtained by applying the separation of variables, which gives the two ordinary differential equations:

$$
\rho^2 \frac{d^2 X}{d\mathbf{x}^2} + (\lambda\_1 - \lambda\_2)\rho \frac{dX}{d\mathbf{x}} + mX = 0 \,, \tag{26}
$$

and the same Equation (23). Let $\frac{\lambda\_1-\lambda\_2}{\rho} = B$; then, by applying the initial conditions (20), Equation (26) has the solution $X(\mathbf{x}) = e^{-\frac{B}{2}\mathbf{x}} \cos\left(\frac{\sqrt{4m/\rho^2 - B^2}}{2}\,\mathbf{x}\right)$. Now, we study the analytic solution of the general random walk defined in Section 2, namely Equation (18). This classical partial differential equation is obtained from the general Fokker–Planck Equation (2) by choosing *a*(*x*) = *a* and *b*(*x*) = −*bx* to represent the diffusion constant and the attractive linear force, respectively. Equation (1) is rewritten as:

$$\frac{\partial^2 u(\mathbf{x},t)}{\partial t^2} + k \frac{\partial u(\mathbf{x},t)}{\partial t} = a \frac{\partial^2 u(\mathbf{x},t)}{\partial \mathbf{x}^2} + b \frac{\partial}{\partial \mathbf{x}} (\mathbf{x} \, u(\mathbf{x},t)) \, \, \, \tag{27}$$

Substituting Equation (19) into Equation (27), we obtain the following two ordinary differential equations, defined as:

$$a\frac{d^2X}{dx^2} + bx\frac{dX}{dx} + (b+m)X(x) = 0 \quad , \tag{28}$$

The solution of Equation (28) is the Weber function *Dm*(*x*) of order *m* (see [22–25]),

$$X\_{m}(\mathbf{x}) = D\_{m}\left(\sqrt{\frac{b}{a}}\mathbf{x}\right) e^{-\frac{b\mathbf{x}^2}{4a}} \; , \tag{29}$$

where $D\_n(y)\,e^{-y^2/4}$ solves the ordinary differential equation $\frac{d^2Y}{dy^2} + y \frac{dY}{dy} + (1+n)\,Y(y) = 0$, and the Weber function is defined as $D\_n(y) = (-1)^n e^{y^2/4} \frac{d^n}{dy^n} e^{-y^2/2}$; see [23]. The constant $A\_m$ is calculated from:

$$A\_{m} = \frac{1}{m!\sqrt{2\pi}} \int\_{-\infty}^{\infty} u(\mathbf{x},0)\, D\_{m}(\mathbf{x})\, e^{\frac{b\mathbf{x}^2}{4a}} d\mathbf{x}$$

taking into consideration the boundary condition (4). Equation (23) is an ordinary differential equation with constant coefficients having the solution:

$$T(t) = e^{-\frac{kt}{2}} \left[ c\_1 \sin\left(\sqrt{\frac{4m - k^2}{4}}\,t\right) + c\_2 \cos\left(\sqrt{\frac{4m - k^2}{4}}\,t\right) \right]$$

where 4*m* > *k*<sup>2</sup> and the constants *c*<sup>1</sup> and *c*<sup>2</sup> are obtained from the initial conditions (20) as:

$$T(t) = e^{-\frac{kt}{2}} \cos\left(\sqrt{\frac{4m - k^2}{4}}\,t\right) \,. \tag{30}$$
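As a quick check, the damped-oscillator solution (30), with the oscillatory root written for 4*m* > *k*<sup>2</sup>, indeed satisfies Equation (23); a small finite-difference residual test with arbitrary test values of ours:

```python
import math

k, m = 0.6, 2.0                       # arbitrary test values with 4m > k^2
w = math.sqrt((4 * m - k * k) / 4)

def T(t):
    # Eq. (30): T(t) = e^{-kt/2} cos(w t)
    return math.exp(-k * t / 2) * math.cos(w * t)

def oscillator_residual(t, h=1e-4):
    # finite-difference residual of T'' + k T' + m T
    d1 = (T(t + h) - T(t - h)) / (2 * h)
    d2 = (T(t + h) - 2 * T(t) + T(t - h)) / h**2
    return d2 + k * d1 + m * T(t)
```

The residual vanishes to within the finite-difference truncation error at any test time.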

The solution of Equation (27) is:

$$u(\mathbf{x},t) = e^{-\frac{kt}{2}} \sum\_{m=0}^{\infty} A\_m D\_m\left(\sqrt{\frac{b}{a}}\mathbf{x}\right) e^{-\frac{b\mathbf{x}^2}{4a}} \cos\left(\sqrt{\frac{4m - k^2}{4}}\,t\right) \, \tag{31}$$

where *Am* is a constant to be defined by using the initial conditions (3) as:

$$A\_m = \frac{1}{m!\sqrt{2\pi}} \int\_{-\infty}^{\infty} f(\mathbf{x}\_0) D\_m(\mathbf{x}) e^{\frac{b\mathbf{x}^2}{4a}} d\mathbf{x} \,. \tag{32}$$
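The Weber-function claim above can be verified numerically: since $D\_n(y)e^{-y^2/4} = (-1)^n \frac{d^n}{dy^n} e^{-y^2/2} = He\_n(y)\,e^{-y^2/2}$, where $He\_n$ are the probabilists' Hermite polynomials, this product should solve $Y'' + yY' + (1+n)Y = 0$. A stdlib sketch (helper names are ours):

```python
import math

def hermite_prob(n, y):
    # probabilists' Hermite polynomials: He_0 = 1, He_1 = y,
    # He_{n+1}(y) = y He_n(y) - n He_{n-1}(y)
    a, b = 1.0, y
    if n == 0:
        return a
    for j in range(1, n):
        a, b = b, y * b - j * a
    return b

def Z(n, y):
    # Z_n(y) = D_n(y) e^{-y^2/4} = (-1)^n d^n/dy^n e^{-y^2/2}
    return hermite_prob(n, y) * math.exp(-y * y / 2)

def weber_residual(n, y, h=1e-4):
    # finite-difference residual of Y'' + y Y' + (1+n) Y
    d1 = (Z(n, y + h) - Z(n, y - h)) / (2 * h)
    d2 = (Z(n, y + h) - 2 * Z(n, y) + Z(n, y - h)) / h**2
    return d2 + y * d1 + (1 + n) * Z(n, y)
```

For the first few orders the residual is zero up to finite-difference error.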

Equation (23) could be solved by the three-term Green function method defined by Podlubny [9]. First, apply the Laplace transformation to both sides of Equation (23) to obtain:

$$\left(\mathbf{s}^2 + \mathbf{s}k + m\right)\tilde{T}(\mathbf{s}) = \mathbf{1} + k \; . \tag{33}$$

Let 1 + *k* = *V*<sup>0</sup> > 1 represent the initial velocity of the wave propagation. Then, rewrite (33) as:

$$\bar{T}(s) = \frac{s^{-2}}{1 + \frac{k}{s}} \frac{V\_0}{1 - \frac{-ms^{-2}}{(1 + k/s)}} \,. \tag{34}$$

Rewrite it again as an infinite series form (see [9]):

$$\tilde{T}(s) = V\_0 \sum\_{n=0}^{\infty} (-1)^n m^n \frac{s^{(-2n-2)}}{(1 + \frac{k}{s})^{n+1}} \, \, \, \, \tag{35}$$

and we need the Laplace convolution of two functions *f*(*t*) and *g*(*t*), defined as:

$$f(t) \ast g(t) = \int\_0^t f(t - \tau)g(\tau)d\tau = \mathcal{L}^{-1}\{\widetilde{f}(s)\, \widetilde{g}(s); t\}\,,\tag{36}$$

then term-by-term inversion gives:

$$T(t) = V\_0 \sum\_{n=0}^{\infty} \frac{(-1)^n}{n!} m^n t^{2(n+1)-1} E\_{1,2+n}^{(n)}(-kt) \,, \tag{37}$$

where the two-parameter Mittag–Leffler function *Eα*,*β*(*z*) (see [26]) is defined as:

$$E\_{\alpha, \beta}(z) = \sum\_{n=0}^{\infty} \frac{z^n}{\Gamma[\alpha n + \beta]} \tag{38}$$

and the *k*th derivative of the two-parameter Mittag–Leffler function is defined as:

$$E\_{a,\beta}^{(k)}(z) = \sum\_{j=0}^{\infty} \frac{(j+k)! z^j}{j! \, \Gamma[\alpha j + \alpha k + \beta]} \,. \tag{39}$$
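Definitions (38) and (39) are straightforward to evaluate by truncating the series; the classical special cases $E\_{1,1}(z) = e^z$, $E\_{1,2}(z) = (e^z - 1)/z$, and $\frac{d}{dz}E\_{1,1}(z) = e^z$ make convenient sanity checks. A sketch with our own function names:

```python
import math

def mittag_leffler(alpha, beta, z, terms=60):
    # truncated series (38)
    return sum(z**n / math.gamma(alpha * n + beta) for n in range(terms))

def mittag_leffler_deriv(alpha, beta, z, k, terms=60):
    # truncated series (39) for the k-th derivative
    return sum(math.factorial(j + k) * z**j
               / (math.factorial(j) * math.gamma(alpha * j + alpha * k + beta))
               for j in range(terms))
```

For moderate arguments the series converges rapidly, so a few dozen terms already give machine precision.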

Use the special function ${}\_1F\_1$, the Kummer confluent hypergeometric function ${}\_1F\_1(a; b; c)$ (Hypergeometric1F1[*a*, *b*, *c*] in Mathematica). It is related to the exponential function by ${}\_1F\_1(1, 1, -z) = e^{-z}$; for more details about the relation between the Kummer confluent hypergeometric function and the Mittag–Leffler function, see [27,28]. Equation (37) can be written as:

$$T(t) = V\_0 \sum\_{n=0}^{\infty} (-1)^n t^{1+2n} \frac{{}\_1F\_1[1 + n, 2 + 2n, -t]}{\Gamma[2 + 2n]} \, . \tag{40}$$

Finally, the analytic convergent solution in terms of the special functions, 1*F*1 and *Dm*(*x*), reads:

$$u(\mathbf{x},t) = V\_0 \sum\_{m=0}^{\infty} \sum\_{n=0}^{\infty} A\_m (-1)^n t^{1+2n} \frac{{}\_1F\_1[1+n, 2+2n, -t]}{\Gamma[2+2n]} D\_m\left(\sqrt{\frac{b}{a}}\mathbf{x}\right) e^{-\frac{b\mathbf{x}^2}{4a}},\tag{41}$$

In what follows, we derive the stationary solution of the discussed model (1), i.e., the solution as *t* → ∞. This solution is derived from Equation (1) by omitting the dependence on the time variable *t* as:

$$a\frac{d^2u(\mathbf{x})}{d\mathbf{x}^2} + \frac{d}{d\mathbf{x}}(b\mathbf{x}\,u(\mathbf{x})) = 0 \; . \tag{42}$$

The solution of this equation is $u(\mathbf{x}) = c\, e^{-\frac{b\mathbf{x}^2}{2a}}$. In the numerical results section, we give a numerical comparison of the above-defined special functions.
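One can check numerically that $u(\mathbf{x}) = c\,e^{-b\mathbf{x}^2/(2a)}$ satisfies the stationary Equation (42); a tiny finite-difference sketch with arbitrary test constants of ours:

```python
import math

a, b, c = 1.3, 0.7, 2.0   # arbitrary positive test constants

def u(x):
    # candidate stationary solution u(x) = c exp(-b x^2 / (2a))
    return c * math.exp(-b * x * x / (2 * a))

def flux(x):
    return b * x * u(x)

def stationary_residual(x, h=1e-4):
    # residual of a u'' + d/dx (b x u)
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
    dflux = (flux(x + h) - flux(x - h)) / (2 * h)
    return a * u2 + dflux
```

The residual vanishes to finite-difference accuracy at any sample point.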

#### **4. The Analytical Solution of the Time–Fractional Forced-Wave Equation with the Fractional Damping Term**

For *γ* = 2, 0 < *β* < 2, and 0 < *α* < 1, Equation (5) can be written as:

$${}\_{t}D\_{\ast}^{\beta}u(\mathbf{x},t) + k \, {}\_{t}D\_{\ast}^{\alpha}u(\mathbf{x},t) = L\_{FP}\,u(\mathbf{x},t) \quad , \tag{43}$$

where *LFP* is the general Fokker–Planck operator (2). To find the analytic solution of Equation (43), apply the separation of variables method to obtain the same ordinary differential Equation (28) for the independent variable *x*, and the following ordinary differential equation for *t*:

$${}\_{t}D\_{\ast}^{\beta}T + k \, {}\_{t}D\_{\ast}^{\alpha}T + mT = 0 \,. \tag{44}$$

This equation represents the time-fractional harmonic oscillator in a fractional resisting medium. Now, apply the Laplace transformation to both sides taking into consideration its dependence on the initial condition (8) (see [3,9]) to obtain:

$$(s^{\beta} + ks^{\alpha} + m)\widetilde{T}(s) = s^{\beta - 1} + ks^{\alpha - 1} \,. \tag{45}$$

Again, rewrite Equation (45) as:

$$\widetilde{T}(s) = \frac{s^{\beta-1} + ks^{\alpha-1}}{s^{\beta} + ks^{\alpha} + m} = \widetilde{G}\_{3}(s)\,\widetilde{P}(s)\,,\tag{46}$$

where *G*˜ <sup>3</sup>(*s*) is the Laplace transform of the Green function of the three-term time-fractional Equation (44), defined as:

$$\widetilde{G}\_3(s) = \frac{1}{s^{\beta} + ks^{\alpha} + m} \,, \tag{47}$$

and:

$$
\widetilde{P}(s) = s^{\beta - 1} + ks^{\alpha - 1} \,. \tag{48}
$$

Now, rearrange the terms of *G*˜ <sup>3</sup>(*s*) as:

$$\widetilde{G}\_3(s) = \frac{s^{-\beta}}{1 + \frac{k}{s^{\beta - \alpha}}} \cdot \frac{1}{1 - \frac{-ms^{-\beta}}{1 + \frac{k}{s^{\beta - \alpha}}}} \; , \tag{49}$$

and it can be rewritten as the sum of infinite series:

$$\widetilde{G}\_3(s) = \frac{s^{-\beta}}{1 + \frac{k}{s^{\beta - \alpha}}} \sum\_{n=0}^{\infty} (-1)^n \left( \frac{ms^{-\beta}}{1 + \frac{k}{s^{\beta - \alpha}}} \right)^n \,. \tag{50}$$

The term-by-term inversion is based on the general expansion theorem for the Laplace transform (see [9,29]); we obtain:

$$G_3(t) = \sum_{n=0}^{\infty} (-1)^n \frac{m^n}{n!}\, t^{\beta(n+1)-1}\, E^{(n)}_{\beta-\alpha,\;\beta+n\alpha}\!\left(-k t^{\beta-\alpha}\right)\,, \tag{51}$$

where *E*<sup>(*n*)</sup><sub>*α*,*β*</sub> is the *n*th derivative of the Mittag–Leffler function *E*<sub>*α*,*β*</sub>. The inverse Laplace transform of *P̃*(*s*) gives:

$$P(t) = \frac{t^{-\beta}}{\Gamma[1-\beta]} + \frac{kt^{-\alpha}}{\Gamma[1-\alpha]}\,. \tag{52}$$

Now, to find the solution *T*(*t*) of Equation (44), the convolution property (36) is used to obtain:

$$T(t) = \frac{1}{\Gamma[1-\beta]} \int_0^t (t-t')^{-\beta}\, G_3(t')\, dt' + \frac{k}{\Gamma[1-\alpha]} \int_0^t (t-t')^{-\alpha}\, G_3(t')\, dt'\,. \tag{53}$$

Substituting the series (51) for *G*<sub>3</sub>(*t'*) into the convolution, *T*(*t*) becomes:

$$\begin{split} T(t) = \sum_{n=0}^{\infty} \sum_{j=0}^{\infty} (-1)^{n+j}\, m^n k^j\, \frac{(j+n)!}{n!\,j!\,\Gamma[\beta(n+j+1) - j\alpha]} \qquad\qquad \\ \times \int_0^t \left( \frac{(t-t')^{-\beta}}{\Gamma(1-\beta)} + \frac{k(t-t')^{-\alpha}}{\Gamma(1-\alpha)} \right) t'^{\,\beta(j+n+1)-1-j\alpha}\, dt'\,. \end{split} \tag{54}$$

For the purpose of computing these integrals with Mathematica, it is better to rewrite Equation (54) as:

$$\begin{split} T(t) = \sum_{n=0}^{\infty} \sum_{j=0}^{\infty} (-1)^{n+j}\, m^n k^j\, \frac{(j+n)!}{n!\,j!\,\Gamma[\beta(n+j+1) - j\alpha]} \qquad\qquad \\ \times \left( \int_0^t \frac{(t-t')^{-\beta}}{\Gamma(1-\beta)}\, t'^{\,\beta(j+n+1)-1-j\alpha}\, dt' + \int_0^t \frac{k(t-t')^{-\alpha}}{\Gamma(1-\alpha)}\, t'^{\,\beta(j+n+1)-1-j\alpha}\, dt' \right). \end{split} \tag{55}$$

These integrations are valid under the conditions:

$$\operatorname{Re}[\beta] < 1\,,\quad \operatorname{Re}[\alpha] < 1\,,\quad t > 0\,,\quad \operatorname{Re}[\beta(1+j+n) - j\alpha] > 0\,.$$

Since 1 < *β* < 2, the first integral is divergent and is therefore omitted. The final computed form of *T*(*t*) is written as:

$$T(t) = \sum_{n=0}^{\infty} \sum_{j=0}^{\infty} (-1)^{n+j}\, m^n k^j\, \frac{(j+n)!}{n!\,j!} \left( \frac{t^{-(1+j)\alpha + (1+j+n)\beta}}{\Gamma[1 - (1+j)\alpha + (1+j+n)\beta]} \right)\,. \tag{56}$$

Now, rearrange the exponents in Equation (56) to write the general solution *T*(*t*) as:

$$T(t) = \sum_{n=0}^{\infty} \sum_{j=0}^{\infty} (-1)^{n+j}\, m^n k^j\, \frac{(j+n)!}{n!\,j!}\, \frac{t^{(\beta-\alpha) + j(\beta-\alpha) + n\beta}}{\Gamma[1 + (\beta-\alpha) + j(\beta-\alpha) + n\beta]}\,. \tag{57}$$

Now, by using the definition of the *k*th derivative of the Mittag–Leffler function, *E*<sup>(*k*)</sup><sub>*α*,*β*</sub>(*z*), defined in (39), we obtain the following elegant form of *T*:

$$T(t) = \sum_{n=0}^{\infty} (-1)^n \frac{m^n}{n!}\, t^{n\beta+\beta-\alpha}\, E^{(n)}_{(1+\beta-\alpha),(1+n\beta)}\!\left(-k t^{(\beta-\alpha)}\right)\,. \tag{58}$$

Finally, to find the general solution of Equation (43), substitute Equation (58) into Equation (29); after some minor mathematical manipulations, we obtain:

$$u(x,t) = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} (-1)^n A_m D_m(x)\, \frac{m^n}{n!}\, t^{n\beta+\beta-\alpha}\, E^{(n)}_{(1+\beta-\alpha),(1+n\beta)}\!\left(-k t^{(\beta-\alpha)}\right)\,, \tag{59}$$

where the constant *A<sub>m</sub>* is obtained by applying Equation (32). This is the general solution of the time-fractional forced wave equation with the fractional damping term. Finally, the stationary solution of Equation (43) is obtained by dropping the time dependence of Equation (43), which yields the same Equation (42) and, consequently, the same solution. In other words, the classical and the time-fractional multiterm wave equations have the same stationary solution.
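As a quick numerical sanity check on the derivation above, the double series (57) can be summed by brute-force truncation. A minimal sketch in Python (the function name and the parameter values *m* = *k* = 1, *β* = 1.7, *α* = 0.7 are our illustrative assumptions, not prescribed by the derivation):

```python
import math

def T_series(t, alpha=0.7, beta=1.7, k=1.0, m=1.0, N=40):
    """Truncate the double series (57) for T(t) at N x N terms.
    Illustrative parameter values; not prescribed by the paper."""
    total = 0.0
    for n in range(N):
        for j in range(N):
            # Gamma argument 1 + (beta - alpha)(1 + j) + n*beta from (57)
            g = 1.0 + (beta - alpha) * (1 + j) + n * beta
            total += ((-1) ** (n + j) * m ** n * k ** j
                      * math.factorial(j + n)
                      / (math.factorial(n) * math.factorial(j))
                      * t ** (g - 1.0) / math.gamma(g))
    return total
```

Comparing successive truncation levels confirms that the partial sums stabilize for moderate *t*.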

Another equation that has great interest among mathematicians and physicists is the time-fractional diffusion Fokker–Planck equation. The Fokker–Planck equation was numerically and analytically studied by Abdel-Rehim [25]. The studied version of the time-fractional Fokker–Planck equation can be obtained from Equation (43) by putting 0 < *α* = *β* < 1 and *φ*(*x*, *t*) = 0 to obtain:

$${}_{t}D_{t}^{\beta} u(x,t) = \frac{a}{1+k} \frac{\partial^2 u(x,t)}{\partial x^2} + \frac{b}{1+k} \frac{\partial}{\partial x}\big(x\, u(x,t)\big)\,, \tag{60}$$

where *a*/(1 + *k*) ≥ 0 is the diffusion constant and *b*/(1 + *k*) ≥ 0 is the drift constant. Equation (60) has the same stationary solution as the other models studied here. Equation (5) has a solution only in the Laplace–Fourier domain, and it is hard to invert it to a unique solution. Therefore, it is better to seek convergent approximate solutions instead of analytic solutions given in terms of special functions.

#### **5. Approximate Solutions**

In this section, we implement common finite-difference tools, together with the Grünwald–Letnikov scheme, to find the approximate solutions of the space-time-fractional differential Equation (5). We begin with the discrete scheme of the Riesz–Feller operator. The Riesz space-fractional operator <sub>0</sub>*D*<sub>*x*</sub><sup>*γ*</sup> is a pseudo-differential, symmetric operator for the fractional order 0 < *γ* ≤ 2; the associated Riesz potential is defined as:

$$I_0^{\gamma}\, \Phi(x) = \frac{1}{2\Gamma(\gamma)\cos(\gamma\pi/2)} \int_{-\infty}^{\infty} |x - \xi|^{\gamma-1}\, \Phi(\xi)\, d\xi\,. \tag{61}$$

This definition was extended by Feller [30] and Samko [31] to introduce the inverse Riesz potential operator in the whole range 0 < *γ* ≤ 2 as:

$${}_{0}D_{x}^{\gamma} = \frac{-1}{2\cos(\gamma\pi/2)} \left[ I_{+}^{-\gamma} + I_{-}^{-\gamma} \right],\quad 0 < \gamma \le 2\,,\ \gamma \ne 1\,, \tag{62}$$

where *I*<sup>−*γ*</sup><sub>±</sub> are the inverses of the operators *I*<sup>*γ*</sup><sub>±</sub>, and its Fourier transform reads:

$$\widehat{{}_0D_x^{\gamma}\,\Phi}(\kappa) = -|\kappa|^{\gamma}\, \hat{\Phi}(\kappa)\,.$$

Since the Laplace operator Δ in one dimension, namely Δ*u* = *∂*<sup>2</sup>*u*(*x*, *t*)/*∂x*<sup>2</sup>, is a symmetric differential operator whose Fourier image is $\widehat{\Delta\Phi}(\kappa) = -|\kappa|^2\hat{\Phi}(\kappa)$, we can simply write <sub>0</sub>*D*<sub>*x*</sub><sup>*γ*</sup> = −(−Δ)<sup>*γ*/2</sup>; for more details, see [30–33]. That is the reason for calling <sub>0</sub>*D*<sub>*x*</sub><sup>*γ*</sup> the Riesz–Feller space-fractional operator.
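The identification of the Riesz–Feller operator with −(−Δ)<sup>*γ*/2</sup> suggests a direct numerical check: applying the Fourier symbol −|*κ*|<sup>*γ*</sup> on a periodic grid should reproduce the second derivative when *γ* = 2. A minimal spectral sketch (Python with NumPy; the grid, domain and test function are our illustrative choices, and this is not the Grünwald–Letnikov scheme used below):

```python
import numpy as np

def riesz_feller(phi, L, gamma):
    """Apply the Riesz-Feller operator via its Fourier symbol -|kappa|^gamma
    on a periodic grid over [-L, L) (a spectral sketch)."""
    n = phi.size
    # angular wavenumbers for grid spacing 2L/n
    kappa = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * L / n)
    return np.fft.ifft(-np.abs(kappa) ** gamma * np.fft.fft(phi)).real

# Check: for gamma = 2 the operator reduces to the Laplacian d^2/dx^2.
L = 10.0
x = np.linspace(-L, L, 512, endpoint=False)
phi = np.exp(-x**2)
lap = riesz_feller(phi, L, 2.0)
exact = (4.0 * x**2 - 2.0) * np.exp(-x**2)  # exact second derivative of exp(-x^2)
```

For 0 < *γ* < 2 the same multiplier gives the fractional Laplacian on the periodic grid, up to boundary effects.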

Now, to derive the approximate solutions of the discussed models, one has to define the grid point (*xj*, *tn*):

$$x_j = jh\,,\quad h > 0\,,\ j \in \mathbb{Z}\,, \tag{63}$$

where *j* ∈ [−*R*, *R*], *h* = 1/(2*R* + 1), and *R* ∈ ℕ, while:

$$t_n = n\tau\,,\quad \tau > 0\,,\ n \in \mathbb{N}_0\,. \tag{64}$$

Introduce the clump *y*<sup>(*n*)</sup> as an approximation to *u*(*x*, *t*):

$$y^{(n)} = \left\{ y^{(n)}_{-R},\, y^{(n)}_{-R+1},\, \ldots,\, y^{(n)}_{0},\, \ldots,\, y^{(n)}_{R-1},\, y^{(n)}_{R} \right\}^{T}. \tag{65}$$

Taking into consideration Equation (62), we have to distinguish the discrete scheme of <sub>0</sub>*D*<sub>*x*</sub><sup>*γ*</sup> according to the values of *γ*:

$${}_{h\pm}I^{-\gamma}\, y_j(t_n) = \frac{1}{h^{\gamma}} \sum_{i=0}^{\infty} (-1)^{i} \binom{\gamma}{i}\, y_{j\mp i}\,,\quad 0 < \gamma < 1\,, \tag{66}$$

while:

$${}_{h\pm}I^{-\gamma}\, y_j(t_n) = \frac{1}{h^{\gamma}} \sum_{i=0}^{\infty} (-1)^{i} \binom{\gamma}{i}\, y_{j\pm1\mp i}\,,\quad 1 < \gamma \le 2\,. \tag{67}$$

The case *γ* = 1 is related to the Cauchy distribution, and one cannot use the Grünwald–Letnikov scheme for discretizing <sub>0</sub>*D*<sub>*x*</sub><sup>1</sup> because the denominator 2 cos(*γπ*/2) in Equation (62) vanishes at *γ* = 1, leaving the operator undefined there. Instead of the Grünwald–Letnikov scheme, we use the discretization introduced in [34] and successfully applied numerically by Abdel-Rehim [18]. In these references, the discretization of <sub>0</sub>*D*<sub>*x*</sub><sup>1</sup> was deduced from the Cauchy density *p*<sub>1</sub>(*x*, 0) = (1/*π*)(1/(1 + *x*<sup>2</sup>)), and the discrete scheme reads:

$${}_{h}I^{-1}\, y_j(t_n) = \frac{-2}{\pi h}\, y_j(t_n) + \frac{1}{h} \sum_{i=1}^{\infty} (-1)^{i} \frac{1}{\pi h\, i(i+1)}\, y_{j\mp i}(t_n)\,, \tag{68}$$

where:

$$\sum_{i=1}^{\infty} \frac{1}{i(i+1)} < \infty\,. \tag{69}$$

The Grünwald–Letnikov scheme of the Caputo time-fractional operator of order 0 < *β* ≤ 2, defined in Equation (7), reads:

$${}_{t}D_{*}^{\beta} u(x,t) = \sum_{s=0}^{n+1} (-1)^{s} \binom{\beta}{s} \frac{y_j^{(n+1-s)} - y_j^{(0)}}{\tau^{\beta}}\,,\quad 0 < \beta \le 2\,. \tag{70}$$
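The generalized binomial weights (−1)<sup>*s*</sup>C(*β*, *s*) appearing in (66), (67), and (70) are best generated by a one-term recurrence rather than by factorials, since *β* is fractional. A small sketch (Python; the helper name is ours):

```python
def gl_weights(order, n):
    """Grunwald-Letnikov weights w_s = (-1)^s * C(order, s), s = 0..n,
    via the standard recurrence w_s = w_{s-1} * (s - 1 - order) / s."""
    w = [1.0]
    for s in range(1, n + 1):
        w.append(w[-1] * (s - 1 - order) / s)
    return w
```

For order = 1, the weights reduce to {1, −1, 0, 0, …}, recovering the classical first-order backward difference.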

Combining the above schemes, one obtains the discrete scheme of (5) for *a*(*x*) = *D* and *b*(*x*) = −*bx* for 1 < *γ* < 2 as:

$$\begin{split} \tau^{-\beta}\Big( y_j^{(n+1)} - \beta y_j^{(n)} - \sum_{m=2}^{n+1} (-1)^m \binom{\beta}{m} y_j^{(n+1-m)} - \sum_{m=0}^{n+1} (-1)^m \binom{\beta}{m} y_j^{(0)} \Big) \\ + k\tau^{-\alpha}\Big( y_j^{(n+1)} - \alpha y_j^{(n)} - \sum_{m=2}^{n+1} (-1)^m \binom{\alpha}{m} y_j^{(n+1-m)} - \sum_{m=0}^{n+1} (-1)^m \binom{\alpha}{m} y_j^{(0)} \Big) \\ = \frac{-h^{-\gamma}}{2\cos\frac{\gamma\pi}{2}} \sum_{i\in\mathbb{Z}} (-1)^i \binom{\gamma}{i} \left\{ y_{j+1-i}(t_n) + y_{j-1+i}(t_n) \right\} + \frac{b}{2}\left( (j+1)y_{j+1}^{(n)} - (j-1)y_{j-1}^{(n)} \right). \end{split} \tag{71}$$
Let $\frac{2}{b} = r$ and solve for $y_j^{(n+1)}$ to obtain:

$$\begin{split} y_j^{(n+1)} &= \frac{-1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \frac{h^{-\gamma}}{2\cos\frac{\gamma\pi}{2}} \sum_{i\in\mathbb{Z}} (-1)^{i} \binom{\gamma}{i} \left\{ y^{(n)}_{j+1-i} + y^{(n)}_{j-1+i} \right\} \\ &+ \frac{1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \left( (\beta\tau^{-\beta} + k\alpha\tau^{-\alpha}) y^{(n)}_{j} + \frac{j+1}{\tau} y^{(n)}_{j+1} - \frac{j-1}{\tau} y^{(n)}_{j-1} \right) \\ &+ \frac{1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \left( \tau^{-\beta} \sum_{m=2}^{n+1} (-1)^{m} \binom{\beta}{m} + k\tau^{-\alpha} \sum_{m=2}^{n+1} (-1)^{m} \binom{\alpha}{m} \right) y^{(n+1-m)}_{j} \\ &+ \frac{1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \left( \tau^{-\beta} \sum_{m=0}^{n+1} (-1)^{m} \binom{\beta}{m} + k\tau^{-\alpha} \sum_{m=0}^{n+1} (-1)^{m} \binom{\alpha}{m} \right) y^{(0)}_{j}\,. \end{split} \tag{72}$$

This scheme is stable, and henceforth, the approximate solution is convergent if the following condition is satisfied:

$$\frac{\beta + k\tau^{\beta - \alpha}}{h^{\gamma}} + \frac{2\gamma}{\cos\frac{\gamma\pi}{2}} \ge 0 \,, \tag{73}$$

where 0 < *k* ≤ 1; see [35]. For 0 < *γ* < 1, we have: *I h* ± <sup>−</sup>*<sup>γ</sup>yj*(*tn*) = <sup>1</sup> *hγ* ∞ ∑ *i*=0 (−1)*<sup>i</sup> γ i yj*∓*<sup>i</sup>* , (74)

and to find the approximate solution of Equation (5) corresponding to this case, combine the discrete scheme of the Caputo time-fractional operator (70) with (66) to obtain:

$$\begin{split} y^{(n+1)} &= \\ &\frac{-1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \frac{h^{-\gamma}}{2\cos\frac{\gamma\pi}{2}} \sum\_{i\in\mathbb{Z}} (-1)^{i} \binom{\gamma}{i} \left\{ y^{(n)}\_{j+i} + y^{(n)}\_{j-i} \right\} \\ &+ \frac{1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \left( (\beta\tau^{-\beta} + k\alpha\tau^{-\alpha}) y^{(n)}\_{j} + \frac{j+1}{\tau} y^{(n)}\_{j+1} - \frac{j-1}{\tau} y^{(n)}\_{j-1} \right) \\ &+ \frac{1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \left( \tau^{-\beta} \sum\_{m=2}^{n+1} (-1)^{m} \binom{\beta}{m} + k\tau^{-\alpha} \sum\_{m=2}^{n+1} (-1)^{m} \binom{\alpha}{m} \right) y^{(n+1-m)}\_{j} \\ &+ \frac{1}{(\tau^{-\beta} + k\tau^{-\alpha})} \left( \tau^{-\beta} \sum\_{m=0}^{n+1} (-1)^{m} \binom{\beta}{m} + k\tau^{-\alpha} \sum\_{m=0}^{n+1} (-1)^{m} \binom{\alpha}{m} \right) y^{(0)}\_{j} \end{split} \tag{75}$$

This scheme is likewise stable, and its approximate solution convergent, if condition (73) is satisfied, but now with 0 < *γ* < 1. Finally, the discrete scheme for the singular case *γ* = 1 reads:

$$\begin{split} y_j^{(n+1)} &= \frac{-1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \left(\frac{-2}{\pi h} y^{(n)}_{j} + \frac{1}{h} \sum_{i=1}^{\infty} (-1)^{i} \frac{1}{\pi h\, i(i+1)} (y^{(n)}_{j+i} + y^{(n)}_{j-i})\right) \\ &+ \frac{1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \left( (\beta\tau^{-\beta} + k\alpha\tau^{-\alpha}) y^{(n)}_{j} + \frac{j+1}{\tau} y^{(n)}_{j+1} - \frac{j-1}{\tau} y^{(n)}_{j-1} \right) \\ &+ \frac{1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \left(\tau^{-\beta} \sum_{m=2}^{n+1} (-1)^{m} \binom{\beta}{m} + k\tau^{-\alpha} \sum_{m=2}^{n+1} (-1)^{m} \binom{\alpha}{m}\right) y^{(n+1-m)}_{j} \\ &+ \frac{1}{\left(\tau^{-\beta} + k\tau^{-\alpha}\right)} \left(\tau^{-\beta} \sum_{m=0}^{n+1} (-1)^{m} \binom{\beta}{m} + k\tau^{-\alpha} \sum_{m=0}^{n+1} (-1)^{m} \binom{\alpha}{m}\right) y^{(0)}_{j}\,, \end{split} \tag{76}$$

where this discrete scheme is stable and its corresponding approximate solution is convergent if the following condition is satisfied:

$$\beta + k\alpha\tau^{\beta-\alpha} + \frac{2\tau\beta}{\pi h} \ge 0\,. \tag{77}$$

In the following section, we give the simulation of the time evolution of the approximate solution *y*(*n*) discussed here for different values of the fractional orders *α*, *β*, and *γ* and for different values of the initial condition *f*(*x*).

#### **6. Numerical Results**

To computationally prove that the analytic solutions in terms of the Mittag–Leffler function are convergent, we give a brief review of its asymptotic behaviors; see [35] and the references therein. The short and long time behaviors of the Mittag–Leffler function are computed from the following special forms:

$$E\_{\beta}(-t^{\beta}) \sim \sum\_{n=0}^{\infty} (-1)^{n} \frac{t^{n\beta}}{\Gamma(n\beta + 1)} \text{ as } t \ge 0\text{ .}\tag{78}$$

This form is valid only for the short time. To deal with the long time, we have to compute the following function:

$$E\_{\beta}(-t^{\beta}) \sim \frac{\sin \beta \pi}{\pi} \frac{\Gamma(\beta)}{t^{\beta}} \text{ as } t \to \infty \text{ ,}\tag{79}$$

and another useful form of the Mittag–Leffler function, namely:

$$E_{\beta}(-t^{\beta}) \sim \exp\left(\frac{-t^{\beta}}{\Gamma[1+\beta]}\right)\,, \tag{80}$$

where this form is called the stretched exponential function. Substituting *β* = 1 into this function, we obtain the fastest convergent function *e*<sup>−*t*</sup>. Figures 1 and 2 show the asymptotic behaviors of the Mittag–Leffler function for the short and long time. The Kummer confluent hypergeometric function Hypergeometric1F1[1 + *n*, 2 + *n*, −*t*] is plotted in Figure 3. It is known that <sub>1</sub>*F*<sub>1</sub> is related to the convergent function *e*<sup>−*z*</sup> by the relation <sub>1</sub>*F*<sub>1</sub>(1, 1, −*z*) = *e*<sup>−*z*</sup>; for more details, see [27,28]. Their time evolution is plotted in Figure 4. The simulation of these special functions indicates that the obtained analytic solutions are convergent as *t* → ∞.
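The short- and long-time forms (78) and (79) can be reproduced directly. A brief sketch (Python; the truncation level *N* is an arbitrary choice, and the series form should only be trusted for moderate *t*):

```python
import math

def ml_series(beta, t, N=80):
    """Short-time series (78) for E_beta(-t^beta), truncated at N terms."""
    return sum((-1) ** n * t ** (n * beta) / math.gamma(n * beta + 1.0)
               for n in range(N))

def ml_long_time(beta, t):
    """Long-time asymptotic (79): sin(beta*pi)/pi * Gamma(beta) / t^beta."""
    return math.sin(beta * math.pi) / math.pi * math.gamma(beta) / t ** beta
```

For *β* = 1 the series reproduces *e*<sup>−*t*</sup> exactly, and for *β* = 1/2 it matches the known closed form *e*<sup>*t*</sup> erfc(√*t*), while the asymptotic form approaches the same values for large *t*.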

**Figure 1.** The simulation of the Mittag–Leffler as *t* : 0 → 1, for different values of *β*.

**Figure 2.** The simulation of the Mittag–Leffler as *t* : 0 → 10, for different values of *β*.

**Figure 3.** Hypergeometric1F1[1 + *n*, 2 + *n*, −*t*].

The time evolution of the approximate solution of the classical Equation (5), i.e., as *γ* = 2, *α* = 1, *β* = 2, *a*(*x*) = 1, *b*(*x*) = −*x* and *f*(*x*) = sin(*πx*/*L*), is plotted in Figures 5–8.

**Figure 5.** *t* = 2.

The time evolution of the approximate solution of Equation (5) with *a*(*x*) = *a* = 1, *b*(*x*) = −*bx* = −*x*, *k* = 1, *γ* = 2, *β* = 1.7, *α* = 0.7, *r* = 100, and *f*(*x*) = sin(*πx*/(2*r* + 1)) is plotted in Figures 9–12. The figures show that the approximate solution reaches its stationary solution very fast.

**Figure 11.** *t* = 20.

**Figure 12.** *t* = 25.

**Figure 14.** *t* = 5.

**Figure 16.** *t* = 20.

The time evolution of *y*<sup>(*n*)</sup> of the same equation, but corresponding to *γ* = 1, *β* = 1.7, *α* = 1, and *f*(*x*) = sin(*πx*/*L*), is plotted in Figures 17–20. These simulations show that the approximate solution reaches its stationary solution at *t* = 20, and it does not change even if we increase the number of iterations until we reach *t* = 60. This is a necessary property of any stochastic process.

**Figure 17.** *t* = 5.

**Figure 20.** *t* = 60.

#### **7. Conclusions**

In this paper, we studied the classical wave equation with a damping term, associated with the stochastic Fokker–Planck operator. Two physical and biological models were studied as direct applications of this partial differential equation. The need to extend the first-order time derivative to the Caputo time-fractional operator was discussed, as was the need for the space-fractional operator. The analytic solutions of the classical models were studied to illustrate that special functions are indispensable and that the analytic solutions are not unique. The Laplace transform was implemented to obtain the solution of the three-term time-fractional differential equation. The solution was given in terms of the Mittag–Leffler function and its *n*th derivative.

The explicit finite-difference rules, together with the Grünwald–Letnikov scheme, were implemented to obtain the approximate solutions of the studied models. The simulation of the approximate solutions of the classical case and of the space-time-fractional equations with different values of the fractional orders was presented. As stochastic processes, the approximate solutions do not change after reaching the stationary solution.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Conflicts of Interest:** The author declares no conflict of interest. No funders had a role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


## *Article* **Forecasting Economic Growth of the Group of Seven via Fractional-Order Gradient Descent Approach**

**Xiaoling Wang 1, Michal Fečkan 2,3 and JinRong Wang 1,\***

	- <sup>3</sup> Mathematical Institute of Slovak Academy of Sciences, Štefánikova 49, 814 73 Bratislava, Slovakia
	- **\*** Correspondence: jrwang@gzu.edu.cn

**Abstract:** This paper establishes a model of economic growth for all the G7 countries from 1973 to 2016, in which the gross domestic product (GDP) is related to land area, arable land, population, school attendance, gross capital formation, exports of goods and services, general government, final consumer spending and broad money. The fractional-order gradient descent and integer-order gradient descent are used to estimate the model parameters to fit the GDP and forecast GDP from 2017 to 2019. The results show that the convergence rate of the fractional-order gradient descent is faster and has a better fitting accuracy and prediction effect.

**Keywords:** fractional derivative; gradient descent; economic growth; group of seven

**MSC:** 26A33

#### **1. Introduction**

In recent years, fractional models have become a research hotspot because of their advantages. Fractional calculus has developed rapidly in academic circles, and its achievements span many fields [1–10].

Gradient descent is generally used as a method for solving unconstrained optimization problems and is widely used in estimation and other settings. The rise of fractional calculus provides a new idea for advances in the gradient descent method. Although numerous achievements have been made in the two fields of fractional calculus and gradient descent, the research results combining the two are still in their infancy. Recently, ref. [11] applied fractional-order gradient descent to image processing and addressed the blurring of image edges and texture details produced by traditional integer-order denoising methods. Next, ref. [12] improved the fractional-order gradient descent method and used it to identify the parameters of discrete deterministic systems. Thereafter, ref. [13] applied fractional-order gradient descent to the training of neural networks via backpropagation (BP) and proved the monotonicity and convergence of the method.

Compared with the traditional integer-order gradient descent, the combination of fractional calculus and gradient descent provides more freedom of order; adjusting the order can provide new possibilities for the algorithm. In this paper, economic growth models of seven countries are established, and their cost functions are trained by gradient descent (fractional- and integer-order). To compare the performance of fractional- and integer-order gradient descent, we visualize the rate of convergence of the cost function, evaluate the model with *MSE*, *MAD* and *R*<sup>2</sup> indicators and predict the GDP of the seven countries in 2017–2019 according to the trained parameters.

**Citation:** Wang, X.; Fečkan, M.; Wang, J. Forecasting Economic Growth of the Group of Seven via Fractional-Order Gradient Descent Approach. *Axioms* **2021**, *10*, 257. https://doi.org/10.3390/axioms10040257

Academic Editor: Jorge E. Macías Díaz

Received: 29 August 2021 Accepted: 11 October 2021 Published: 15 October 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### *The Group of Seven (G7)*

The G6 was set up by France after western countries were hit by the first oil shock. In 1976, Canada's accession marked the birth of the G7, whose members are seven developed countries: the United States, the United Kingdom, France, Germany, Japan, Italy and Canada. The annual summit mechanism of the G7 focuses on major issues of common interest, such as inclusive economic growth, world peace and security, climate change and oceans, which have had a profound impact on global economic and political governance. In addition to the G7 members, there are a number of developing countries with large economies, such as China, India and Brazil. In the context of economic globalization, the study of G7 economic trends and economic-related factors can provide a useful reference for these countries' development.

The economic crisis broke out in western countries in 1973, so the data in this paper cover the period from 1973 to 2016, and data for the seven countries are available since then. Some G7 members (France, Germany, Italy and the United States) were members of the European Union (EU) during this period, so this paper also establishes the economic growth model of the EU. Data for this article are from the World Bank.

#### **2. Model Description**

The prediction of variables generally uses time series models [14] (for example, ARIMA and SARIMA) or artificial neural networks [15,16], which have been very popular in recent years. The time series model mainly predicts the future trend in variables, but it is difficult for it to reflect the effect of unexpected factors. The neural network model, in turn, needs many parameters to be tuned, has a large space of candidate network structures, trains inefficiently, and overfits easily.

The linear model, by contrast, is simple in form and easy to fit, and its weights intuitively express the importance of each attribute, so it has good explanatory power. It is therefore reasonable to build a linear regression model of economic growth, from which one can clearly learn which factors have an impact on the economy.

Next, we chose eight explanatory variables to describe economic growth. The explained variable is *y*, which denotes GDP as a function of time. The expression for *y* is as follows:

$$y(t) = \sum_{j=1}^{8} \theta_j x_j(t) + \theta_0 + \varepsilon\,, \tag{1}$$

where *t* indexes the year (44 years in total), *θ*<sub>0</sub> is the intercept, *ε* is an unobservable random error term, and *θ<sub>j</sub>* represents the weight of each variable. The eight explanatory variables are:


*x*3: population

*x*4: school attendance (years)


#### **3. Fractional-Order Derivative**

Depending on the conditions imposed, there are different forms of the fractional calculus definition, the most common of which are the Grünwald–Letnikov, Riemann–Liouville, and Caputo forms. In this article, we chose the Caputo definition of the fractional-order derivative. Given a function *f*(*t*), the Caputo fractional-order derivative of order *α* is defined as follows:

$${}^{Caputo}_{\;\;\;\;c}D_{t}^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_{c}^{t} (t-\tau)^{-\alpha} f'(\tau)\, d\tau\,,$$

where <sup>*Caputo*</sup><sub>*c*</sub>*D*<sup>*α*</sup><sub>*t*</sub> is the Caputo derivative operator, *α* ∈ (0, 1) is the fractional order, Γ(·) is the gamma function, and *c* is the initial value. For simplicity, <sub>*c*</sub>*D*<sup>*α*</sup><sub>*t*</sub> is used in this paper to represent the Caputo fractional derivative operator instead of <sup>*Caputo*</sup><sub>*c*</sub>*D*<sup>*α*</sup><sub>*t*</sub>.

The Caputo fractional derivative has good properties. For example, the Laplace transform of the Caputo operator is as follows:

$$L\{{}_{0}D_{t}^{\alpha} f(t)\} = s^{\alpha} F(s) - \sum_{k=0}^{n-1} f^{(k)}(0)\, s^{\alpha-k-1}\,,$$

where *F*(*s*) is a generalized integral with a complex parameter *s*, *F*(*s*) = ∫<sub>0</sub><sup>∞</sup> *f*(*t*)*e*<sup>−*st*</sup>*dt*, and *n* = ⌈*α*⌉ is *α* rounded up to the nearest integer. It can be seen from the Laplace transform that the definition of the initial value in Caputo differentiation is consistent with that of integer-order differential equations and has a definite physical meaning. Therefore, Caputo fractional differentiation has a wide range of applications.

#### **4. Gradient Descent Method**

#### *4.1. The Cost Function*

The cost function (also known as the loss function) is essential for a majority of algorithms in machine learning. The model's optimization is the process of training the cost function, and the partial derivative of the cost function with respect to each parameter is the gradient mentioned in gradient descent. To select the appropriate parameters *θ* for the model (1) and minimize the modeling error, we introduce the cost function:

$$\mathcal{C}(\theta) = \frac{1}{2m} \sum\_{i=1}^{m} (h\_{\theta}(\mathbf{x}^{(i)}) - \mathbf{y}^{(i)})^2,\tag{2}$$

where *h<sub>θ</sub>*(*x*<sup>(*i*)</sup>) is a modification of model (1), *h<sub>θ</sub>*(*x*) = *θ*<sub>0</sub> + *θ*<sub>1</sub>*x*<sub>1</sub> + ··· + *θ*<sub>8</sub>*x*<sub>8</sub>, which represents the output value of the model; *x*<sup>(*i*)</sup> are the sample features; *y*<sup>(*i*)</sup> is the true value; and *m* represents the number of samples (*m* = 44).

#### *4.2. The Integer-Order Gradient Descent*

The first step of the integer-order gradient descent is to take the partial derivative of the cost function *C*(*θ*):

$$\frac{\partial C(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})\, x_j^{(i)}\,, \qquad j = 1, 2, \ldots, 8\,, \tag{3}$$

and the update function is as follows:

$$\theta_{j+1} = \theta_j - \eta \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})\, x_j^{(i)}\,, \tag{4}$$

where *η* is the learning rate, *η* > 0.

#### *4.3. The Fractional-Order Gradient Descent*

The first step of fractional-order gradient descent is to find the fractional derivative of the cost function *C*(*θ*). According to Caputo's definition of the fractional derivative, from [17] we know that if *g*(*h*(*t*)) is a compound function of *t*, then its fractional derivative of order *α* with respect to *t* is

$${}_cD_t^{\alpha}\, g(h) = \frac{\partial g(h)}{\partial h} \cdot {}_cD_t^{\alpha}\, h(t)\,. \tag{5}$$

It can be seen from (5) that the fractional derivative of a composite function can be expressed as the product of an integer-order derivative and a fractional derivative. Therefore, the calculation of <sub>*c*</sub>*D*<sup>*α*</sup><sub>*θ<sub>j</sub>*</sub>*C*(*θ*) is as follows:

$$\begin{split} {}_cD_{\theta_j}^{\alpha} C(\theta) &= \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)}) \frac{1}{\Gamma(1-\alpha)} \int_c^{\theta_j} (\theta_j - \tau)^{-\alpha} \frac{\partial [h_{\theta}(x^{(i)}) - y^{(i)}]}{\partial \theta_j}\, d\tau \\ &= \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})\, x_j^{(i)} \frac{1}{\Gamma(1-\alpha)} \int_c^{\theta_j} (\theta_j - \tau)^{-\alpha}\, d\tau \\ &= \frac{1}{m(1-\alpha)\Gamma(1-\alpha)} (\theta_j - c)^{(1-\alpha)} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})\, x_j^{(i)}\,, \end{split}$$

and the update function is as follows:

$$\theta_{j+1} = \theta_j - \eta \frac{1}{m(1-\alpha)\Gamma(1-\alpha)} (\theta_j - c)^{(1-\alpha)} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)})\, x_j^{(i)}\,, \qquad j = 1, 2, \ldots, 8\,, \tag{6}$$

where *η* is the learning rate, *η* > 0; *α* is the fractional order, 0 < *α* < 1; and *c* is the initial value of Caputo's fractional derivative, with *c* < min{*θ<sub>j</sub>*}.
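To make the two update rules concrete, the following sketch (Python) runs both on a small synthetic dataset; the function name, learning rate, data, and the choice *c* = −10 are illustrative assumptions rather than values from the paper:

```python
import math

def gradient_descent(X, y, eta=0.1, alpha=1.0, c=-10.0, iters=1000):
    """Linear-model gradient descent: alpha = 1.0 gives the integer-order
    update (4); 0 < alpha < 1 gives the Caputo fractional update (6).
    Data and hyperparameters here are synthetic illustrations."""
    m, p = len(y), len(X[0])
    theta = [0.0] * (p + 1)  # theta[0] is the intercept theta_0
    for _ in range(iters):
        # residuals h_theta(x^(i)) - y^(i)
        r = [theta[0] + sum(t * xi for t, xi in zip(theta[1:], X[i])) - y[i]
             for i in range(m)]
        grad = [sum(r) / m] + [sum(r[i] * X[i][j] for i in range(m)) / m
                               for j in range(p)]
        for j in range(p + 1):
            if alpha == 1.0:
                step = grad[j]  # integer-order update (4)
            else:               # fractional-order update (6)
                step = (theta[j] - c) ** (1 - alpha) * grad[j] \
                       / ((1 - alpha) * math.gamma(1 - alpha))
            theta[j] -= eta * step
    return theta
```

Both variants share the same fixed point: since (*θ<sub>j</sub>* − *c*)<sup>1−*α*</sup> never vanishes for *c* < min{*θ<sub>j</sub>*}, the update (6) is zero exactly where the gradient is zero; the order *α* only rescales the step.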

#### **5. Model Evaluation Indexes**

We use the absolute relative error (*ARE*) to measure the prediction error:

$$ARE\_{i} = \frac{|y\_{i} - \hat{y}\_{i}|}{y\_{i}}.$$

To evaluate the fitting quality of gradient descent on the model, the following three indicators can be calculated:

The mean square error (*MSE*):

$$MSE = \frac{1}{n} \sum\_{i=1}^{n} (y\_i - \hat{y}\_i)^2.$$

The coefficient of determination (*R*2):

$$R^2 = 1 - \frac{\sum\_{i=1}^{n} (y\_i - \hat{y}\_i)^2}{\sum\_{i=1}^{n} (y\_i - \bar{y}\_i)^2}.$$

The mean absolute deviation (*MAD*):

$$MAD = \frac{\sum\_{i=1}^{n} |y\_i - \hat{y}\_i|}{n}.$$

In these formulas, *n* is the number of years (*n* = 44), *y<sub>i</sub>* and *ŷ<sub>i</sub>* are the real value and the model output, respectively, and *ȳ* is the mean of the GDP.
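These indexes are straightforward to implement; a minimal sketch (Python, with hypothetical helper names):

```python
def mse(y, yhat):
    """Mean square error."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def mad(y, yhat):
    """Mean absolute deviation."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def are(y_i, yhat_i):
    """Absolute relative error for a single prediction."""
    return abs(y_i - yhat_i) / y_i
```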

#### **6. Main Results**

In this article, we standardize the data for each country before running the algorithm, and each iteration updates *θ* using all *m* samples. The grid search method was used to select the appropriate learning rate and initial weight interval, and the effects of different fractional orders are compared to select the best order (see Table 1). The learning rate and the initial weight interval are applicable to both fractional-order and integer-order gradient descent.


**Table 1.** Parameters for different countries.

#### *6.1. Comparison of Convergence Rate of Fractional and Integer Order Gradient Descent*

To facilitate visual comparison, (4) and (6) are each iterated 50 times, and their convergence rates are plotted (see Figure 1).

As shown in Figure 1, for each dataset, after the same number of iterations, the convergence rate of fractional-order gradient descent is faster than that of integer-order gradient descent, which indicates that the method combining fractional-order and gradient descent is better than the traditional integer-order gradient descent in the convergence rate of update equation.

**Figure 1.** *Cont*.

**Figure 1.** Comparison of convergence rate and fitting error between fractional- and integer-order gradient descent: (**a**) Canada (**b**) France (**c**) Germany (**d**) Italy (**e**) Japan (**f**) The United Kingdom (**g**) The United States (**h**) European Union.

#### *6.2. Fitting Result*

Then, we fit GDP with integer-order and fractional-order gradient descent, respectively. We start by setting a threshold and stop iterating when the gradient falls below it. The fitting results are shown in Figure 2, and the performance evaluation of the model is shown in Table 2.


**Table 2.** Performance of integer order and fractional order gradient descent.

**Figure 2.** Fitting of GDP of the G7 countries by fractional-order gradient descent method: (**a**) Canada (**b**) France (**c**) Germany (**d**) Italy (**e**) Japan (**f**) The United Kingdom (**g**) The United States (**h**) European Union.

Table 2 shows that the *MSE*, *R*<sup>2</sup> and *MAD* of the GDP fitted by fractional-order gradient descent are better than those obtained by integer-order gradient descent. That is, under the same number of iterations, learning rate and initial weight interval, the fitting performance of fractional-order gradient descent is better than that of the integer-order method.

#### *6.3. Predicted Results*

Finally, to test the prediction performance of fractional- and integer-order gradient descent on GDP, we forecast the GDP from 2017 to 2019 and used the *ARE* index to measure the prediction error (see Table 3).

**Table 3.** Integer-order and fractional-order gradient descent for G7 countries' GDP data from 2017 to 2019.


#### **7. Conclusions**

In this paper, the gradient descent method is used to study linear model problems, which differs from [18,19]. The results show that, in addition to least squares estimation, the gradient descent method can solve the regression analysis problem by iterating on the cost function and obtain good results without complicating the model. It also improves the interpretability of the explanatory variables. We apply the fractional derivative to gradient descent and compare the performance of fractional-order gradient descent with that of integer-order gradient descent. The fractional order was found to have a faster convergence rate, higher fitting accuracy and lower prediction error than the integer order. This provides an alternative method for fitting and forecasting GDP and has a certain reference value.

**Author Contributions:** J.W. supervised and led the planning and execution of this research, proposed the research idea of combining fractional calculus with gradient descent, formed the overall research objective, and reviewed, evaluated and revised the manuscript. According to this research goal, X.W. collected data of economic indicators and applied statistics to create a model and used Python software to write codes to analyze data and optimize the model, and finally wrote the first draft. M.F. reviewed, evaluated and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work is partially supported by Training Object of High Level and Innovative Talents of Guizhou Province ((2016)4006), Major Research Project of Innovative Group in Guizhou Education Department ([2018]012), the Slovak Research and Development Agency under the contract No. APVV-18-0308 and by the Slovak Grant Agency VEGA No. 1/0358/20 and No. 2/0127/20.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** https://data.worldbank.org.cn/.

**Acknowledgments:** The authors are grateful to the referees for their careful reading of the manuscript and valuable comments. The authors thank the help from the editor too.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **GPU Based Modelling and Analysis for Parallel Fractional Order Derivative Model of the Spiral-Plate Heat Exchanger**

**Guanqiang Dong and Mingcong Deng \***

The Graduate School of Engineering, Tokyo University of Agriculture and Technology, Tokyo 184-8588, Japan; guanqiangdong@gmail.com

**\*** Correspondence: deng@cc.tuat.ac.jp

**Abstract:** Heat exchangers are commonly used in various industries. A spiral-plate heat exchanger with two fluids is a compact plant that only requires a small space and offers excellent heat transfer efficiency. However, the spiral-plate heat exchanger is a nonlinear plant with uncertainties, considering the difference between the heating fluid, the heated fluid, and other complex factors. A fractional order derivative model is more accurate than the traditional integer order model. In this paper, a parallel fractional order derivative model is proposed by considering the merit of the graphics processing unit (GPU). Then, the parallel fractional order derivative model for the spiral-plate heat exchanger is constructed. Simulations show the relationships between the output temperature of the heated fluid and the orders of the fractional order derivatives of the two directional fluids, as impacted by complex factors, namely the volume flow rate of the hot fluid and the volume flow rate of the cold fluid, respectively.

**Keywords:** fractional order derivative model; GPU; a spiral-plate heat exchanger; parallel model; heat transfer; nonlinear system

#### **1. Introduction**

A heat exchanger is most often used in industries such as space heating, refrigeration, air conditioning, power stations, chemical plants, petrochemical plants, petroleum refineries, natural-gas processing, and sewage treatment. It uses the principle of heat transfer between two or more fluids to transfer the heat energy of the high temperature fluid to the low temperature fluid, in order to heat the low temperature fluid or cool the high temperature fluid, which saves energy [1]. A spiral-plate heat exchanger is a compact plant that only requires a small installation space compared to traditional heat exchanger solutions and offers excellent heat transfer efficiency (see [2–4]). However, the spiral-plate heat exchanger is a nonlinear plant with uncertainties, considering the difference between the heating medium, the heated medium and other factors. In some applications, the heated or cooled output temperature of the heat exchanger must be controlled accurately. Because the heat transfer coefficient of the heat exchanger is impacted by various factors, such as fluid flow, operating pressure, uncertainties, errors in the mathematical model, and a long time delay, it is difficult to model and control accurately. In the past few years, research on heat exchangers has mainly focused on their design [5–7]. In some papers (such as [8,9]), an effective internal fluid mathematical model is established by using the heat balance law between the two fluids; only the effect of the flow velocity on the heat transfer coefficient is considered, not the effect of the two fluid flow velocities on the heat transfer time.

Fractional order calculus and derivatives form an old topic, dating back more than 300 years to a letter written by Leibniz to L'Hôpital in 1695 [10]. Fractional order calculus is an extension of traditional integer order calculus. Research on the theory and applications of fractional order calculus and derivatives (such as the solution of fractional order equations [11,12] and stability [13–15]) expanded greatly over the 20th and 21st centuries.

**Citation:** Dong, G.; Deng, M. GPU Based Modelling and Analysis for Parallel Fractional Order Derivative Model of the Spiral-Plate Heat Exchanger. *Axioms* **2021**, *10*, 344. https://doi.org/10.3390/ axioms10040344

Academic Editor: Jorge E. Macías Díaz

Received: 23 November 2021 Accepted: 10 December 2021 Published: 16 December 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In recent years, fractional order calculus and derivatives have been used in various fields such as engineering, physics, chemistry, and hydrology. The references [10,16] give some background on fractional order calculus and derivatives. The fractional order PID controller was introduced by Podlubny in 1994 [10]. Fractional order controllers have been used extensively by many researchers to achieve better robust performance in both linear and nonlinear systems. In [17], a nonlinear thermoelastic fractional-order model of nonlocal plates is studied. Reference [18] proposed a fractional nonlocal elasticity model. These works show that an elasticity model described by a fractional order derivative is more accurate than the traditional integer order description, in both theory and application [19]. In control systems, modelling, stability, controllability and observability are very important for performance; in fractional order systems these need to be considered too [20–22]. Nowadays, fractional order calculus and derivatives still lack efficient solution methods and fast computing algorithms [23].

The GPU (graphics processing unit) provides more computing units and higher data bandwidth in a limited area [24]. Originally developed for graphics applications, it is now increasingly applied to parallel computing in science and engineering. The GPU has high execution efficiency for parallel data: the more data parallelism, the higher the execution efficiency. CUDA is a software and hardware system that allows the GPU to work as a device for data-parallel computing [25].

References [20,26–28] show that an elasticity model described by a fractional order derivative is more accurate, in theory and application, than the traditional integer order description, and a fractional order derivative equation is more suitable than an integer order equation for describing a thermoelastic model. Heat transfer in a heat exchanger is a thermoelastic process, which motivates this work. Traditionally, the mathematical model of a spiral-plate heat exchanger is constructed from integer order derivative equations; a model constructed from fractional order derivative equations is more accurate than the conventional method. Therefore, a parallel fractional order derivative model is proposed by considering the merits of the GPU and the fractional order derivative, and the parallel fractional order derivative model for the spiral heat exchanger is constructed. The parallel model executes faster than the traditional model and can quickly respond to disturbances. In the future, we will study an operator-based robust nonlinear control system for the spiral heat exchanger using the proposed parallel model [29–31].

The rest of this paper is organized as follows. In Section 2, Preliminaries and Problem Statement, a parallel fractional order derivative model is proposed, and the problem statement is presented. A fractional order derivative model for the spiral-plate heat exchanger is derived in Section 3, Mathematics Analysis. The proposed parallel model for the spiral-plate heat exchanger with both the counter-flow type and the parallel-flow type, together with its implementation on the GPU, is given in Section 4. Section 5 then compares the relationships between the output temperature of the heated fluid and the orders of the fractional order derivatives of the two directional fluids, the volume flow rate of the cold fluid, and the volume flow rate of the hot fluid, respectively. Finally, a conclusion is given in Section 6.

#### **2. Preliminaries and Problem Statement**

#### *2.1. Parallel Fractional Order Derivative Model*

In reference [32], the parallel fractional order derivative model was incomplete: only a spiral heat exchanger of the counter-flow type was modelled by a fractional order equation, and no theoretical support was given. That work is enriched and extended in this paper.

According to the definition of the fractional order derivative (see Appendix A), the fractional order derivative equations (1) are given as follows.

$$\begin{cases} D\_t^q f(\Delta h) = -\left(\Delta h\right)^{-q} \frac{\Gamma(q+1)}{\Gamma(2)\Gamma(q)} f(0) + (\Delta h)^{-q} f(\Delta h) \\ D\_t^q f(2(\Delta h)) = (\Delta h)^{-q} \frac{\Gamma(q+1)}{\Gamma(3)\Gamma(q-1)} f(0) - (\Delta h)^{-q} \frac{\Gamma(q+1)}{\Gamma(2)\Gamma(q)} f(\Delta h) + (\Delta h)^{-q} f(2(\Delta h)) \\ D\_t^q f(3(\Delta h)) = -(\Delta h)^{-q} \frac{\Gamma(q+1)}{\Gamma(4)\Gamma(q-2)} f(0) + (\Delta h)^{-q} \frac{\Gamma(q+1)}{\Gamma(3)\Gamma(q-1)} f(\Delta h) \\ \qquad\qquad - (\Delta h)^{-q} \frac{\Gamma(q+1)}{\Gamma(2)\Gamma(q)} f(2(\Delta h)) + (\Delta h)^{-q} f(3(\Delta h)) \\ \vdots \\ D\_t^q f(N(\Delta h)) = (\Delta h)^{-q} \sum\_{j=1}^N (-1)^j \frac{\Gamma(q+1)}{\Gamma(j+1)\Gamma(q-j+1)} f((N-j)(\Delta h)) + (\Delta h)^{-q} f(N(\Delta h)) \end{cases} (1)$$

From (1), a parallel fractional order derivative model is described by the matrix, as follow.

$$F\_k = (\Delta h)^q D\_{frac} + B F\_{k-1} \tag{2}$$

where *F<sub>k</sub>*, *F<sub>k−1</sub>*, *D<sub>frac</sub>* ∈ *R<sup>N</sup>*, and *B* ∈ *R<sup>N×N</sup>*, with

$$F\_k = \begin{pmatrix} f(\Delta h) \\ f(2(\Delta h)) \\ \vdots \\ f(N(\Delta h)) \end{pmatrix} \tag{3}$$

$$D\_{frac} = \begin{pmatrix} D\_t^q f(\Delta h) \\ D\_t^q f(2(\Delta h)) \\ \vdots \\ D\_t^q f(N(\Delta h)) \end{pmatrix} \tag{4}$$

$$B = \begin{pmatrix} \frac{-\Gamma(q+1)}{\Gamma(2)\Gamma(q)} & 0 & \cdots & 0\\ \frac{\Gamma(q+1)}{\Gamma(3)\Gamma(q-1)} & \frac{-\Gamma(q+1)}{\Gamma(2)\Gamma(q)} & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots\\ \frac{(-1)^N \Gamma(q+1)}{\Gamma(N+1)\Gamma(q-N+1)} & \frac{(-1)^{(N-1)}\Gamma(q+1)}{\Gamma(N)\Gamma(q-N+2)} & \cdots & \frac{-\Gamma(q+1)}{\Gamma(2)\Gamma(q)} \end{pmatrix} \tag{5}$$
 
$$F\_{k-1} = \begin{pmatrix} f(0) \\ f(\Delta h) \\ \vdots \\ f((N-1)(\Delta h)) \end{pmatrix} \tag{6}$$
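Equations (1)–(6) can be checked numerically. The sketch below builds the matrix B of (5) from the generalized binomial coefficients and evaluates D_frac = (Δh)^(−q)(F_k + B F_{k−1}), which restates (1) row by row; for q = 1 the weights beyond the first vanish and the formula reduces to the first difference quotient. The step size, N and test function f(t) = 3t are illustrative choices:

```python
def gl_weights(q, n):
    """Coefficients (-1)^j * Gamma(q+1) / (Gamma(j+1) * Gamma(q-j+1)), j = 0..n,
    computed by the stable recurrence w_j = w_{j-1} * (j - 1 - q) / j."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (j - 1 - q) / j)
    return w

def build_B(q, N):
    """Lower-triangular matrix B of (5): entry (i, j) holds weight index i - j + 1."""
    w = gl_weights(q, N)
    return [[w[i - j + 1] if j <= i else 0.0 for j in range(N)] for i in range(N)]

q, dh, N = 1.0, 0.1, 5
Fk = [3.0 * (i + 1) * dh for i in range(N)]   # f(t) = 3t at t = dh, ..., N*dh
Fk1 = [3.0 * i * dh for i in range(N)]        # shifted samples f(0), ..., f((N-1)*dh)
B = build_B(q, N)
D_frac = [dh ** (-q) * (Fk[i] + sum(B[i][j] * Fk1[j] for j in range(N)))
          for i in range(N)]
# for q = 1 the discrete derivative of f(t) = 3t is 3 at every node
```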

#### *2.2. Problem Statement*

Traditionally, the mathematical model of a spiral-plate heat exchanger is described by integer order derivative equations. A spiral-plate heat exchanger model described by fractional order derivatives is more accurate than the traditional method, so a fractional order derivative model is considered to describe the spiral-plate heat exchanger plant. Further, a parallel fractional order derivative model is proposed by considering the merit of the GPU. The proposed parallel model executes faster than the traditional model and can quickly respond to disturbances. We then obtain the parallel fractional order derivative model for the spiral heat exchanger by mathematical analysis.

#### **3. Mathematics Analysis**

*3.1. A Spiral-Plate Heat Exchanger Plant*

A spiral-plate heat exchanger is shown in Figure 1. The spiral-plate heat exchanger is used for its many merits, such as highly efficient heat transfer, small size in comparison to other heat exchangers, and self-cleaning due to the special spiral structure.

**Figure 1.** A spiral-plate heat exchanger plant.

The spiral-plate heat exchanger is excellent process equipment, but it is difficult to obtain an accurate model due to its complex inner structure. Conventional methods, such as the logarithmic mean temperature difference method, do not give good control results; another approach was attempted, but the obtained model was too complex, making it difficult to design a model-based controller. Therefore, we consider a novel fractional order derivative model of the spiral-plate heat exchanger. Figure 2 gives the cross-section inner structure of the spiral-plate heat exchanger, where *δ<sub>h</sub>*, *δ<sub>c</sub>* and *δ<sub>s</sub>* are the width of the hot fluid channel, the width of the cold fluid channel and the width of the solid wall, respectively. In this study, the cross-section inner structure shown in Figure 2 is divided into micro volumes in the cold fluid. The fractional order derivative model is constructed by considering the heat balance of the hot fluid and the cold fluid, respectively. The spiral channel is described by the curve

$$r = b + a \cdot \theta, \theta \in [0, 11\pi] \tag{7}$$

Geometric parameters of the spiral-plate heat exchanger are denoted in Table 1.


**Figure 2.** The cross-section inner structure of the spiral heat exchanger.

The heat exchanger is typically classified into the parallel-flow type and the counter-flow type by arrangement [33]. In the parallel-flow type, the input and the output of the two directional fluids (one hot, the other cold) are in the same direction. If the hot fluid and the cold fluid flow in opposite directions, it is a counter-flow type heat exchanger. First, the fractional order derivative model for the spiral-plate heat exchanger with the counter-flow type is considered.

#### *3.2. Fractional Order Derivative Model for the Spiral-Plate Counter-Flow Heat Exchanger*

In this section, the spiral-plate heat exchanger with the counter-flow type is analysed. First, we consider the temperature variation in the cold fluid, which is divided into micro volumes as shown in Figure 3. Here, *v<sub>h</sub>* is the flow rate in the hot fluid and *v<sub>c</sub>* is the flow rate in the cold fluid; the directions of *v<sub>h</sub>* and *v<sub>c</sub>* are opposite. Δ*V* is a micro volume in the cold fluid. Δ*m*<sub>1</sub> is the heat flux transferring from the inside, *T<sub>h</sub>*(*x*). Δ*m*<sub>2</sub> is the heat flux transferring from the outside, *T<sub>h</sub>*(*x* + *C*), where *C* is the arc length corresponding to the angle 2*π*.

**Figure 3.** The principle of heat transfer for the spiral heat exchanger.

As seen in Figure 3, it denotes the heat transferring between the two fluids for the spiral-plate counter-flow heat exchanger.

Therefore, according to the heat energy balance law and heat transfer theory [34], the equations are derived as follows.

$$c\_c \rho\_c (\Delta V) \frac{\Delta T\_c(\mathbf{x}, t)}{\Delta t} = \Delta m\_1 + \Delta m\_2 \tag{8}$$

$$\frac{\Delta T\_c(\mathbf{x}, t)}{\Delta t} = T\_c(\mathbf{x} + \Delta \mathbf{x}, t + \Delta t) - T\_c(\mathbf{x}, t) \tag{9}$$

where *c<sub>c</sub>*, *ρ<sub>c</sub>*, Δ*V* and *k* are the specific heat capacity of the cold fluid, the density of the cold fluid, a micro volume, and the heat transfer coefficient of the spiral-plate heat exchanger, respectively. According to Newton's law of cooling, the thermal resistances add in series, so

$$k = \frac{1}{\frac{1}{h\_h} + \frac{\delta\_s}{\lambda} + \frac{1}{h\_c}} \tag{10}$$

where *h<sub>h</sub>*, *h<sub>c</sub>*, *δ<sub>s</sub>* and *λ* are the heat transfer coefficient of the hot fluid, the heat transfer coefficient of the cold fluid, the width of the wall, and the thermal conductivity, respectively.
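As a quick numerical illustration of the series-resistance form 1/*k* = 1/*h<sub>h</sub>* + *δ<sub>s</sub>*/*λ* + 1/*h<sub>c</sub>*, the sketch below evaluates *k*; the film coefficients, wall thickness and conductivity are hypothetical water/steel-like numbers, not measured parameters of the plant:

```python
def overall_heat_transfer_coefficient(h_h, h_c, delta_s, lam):
    """Overall coefficient k from series thermal resistances:
    1/k = 1/h_h + delta_s/lam + 1/h_c (plane-wall approximation)."""
    return 1.0 / (1.0 / h_h + delta_s / lam + 1.0 / h_c)

# hypothetical values: film coefficients in W/(m^2 K), wall in m, conductivity in W/(m K)
k = overall_heat_transfer_coefficient(h_h=2000.0, h_c=1500.0, delta_s=0.003, lam=16.0)
```

Note that *k* is always smaller than either film coefficient, since each resistance only adds to the total.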

$$\Delta m\_1 = k \cdot \left( T\_h(\mathbf{x}, t) - T\_c(\mathbf{x}, t) \right) \cdot \left( \Delta A\_1 \right) \tag{11}$$

Δ*m*<sub>1</sub> is the heat flux transferring from *T<sub>h</sub>*(*x*, *t*) to *T<sub>c</sub>*(*x*, *t*), where Δ*A*<sub>1</sub> is the corresponding heat transfer surface area.

$$
\Delta m\_2 = k \cdot \left( T\_h(\mathbf{x} + \mathbf{C}, t) - T\_c(\mathbf{x}, t) \right) \cdot \left( \Delta A\_2 \right) \tag{12}
$$

Δ*m*<sub>2</sub> is the heat flux transferring from *T<sub>h</sub>*(*x* + *C*, *t*) to *T<sub>c</sub>*(*x*, *t*), where Δ*A*<sub>2</sub> is the corresponding heat transfer surface area. Each element has length Δ*x* and heat transfer surface areas Δ*A*<sub>1</sub> and Δ*A*<sub>2</sub>, with Δ*A*<sub>1</sub> ≈ Δ*A*<sub>2</sub> = Δ*A* = (Δ*x*) · *Z*, where *Z* is the height of the spiral-plate heat exchanger and Δ*x* is the displacement of the cold fluid in time Δ*t*, so that Δ*V* = (Δ*x*) · *Z* · *δ<sub>c</sub>* and Δ*t* = Δ*x*/*v<sub>c</sub>*. So,

$$c\_c \rho\_c \delta\_c v\_c \frac{\Delta T\_c(\mathbf{x}, t)}{\Delta \mathbf{x}} = k (T\_h(\mathbf{x}, t) + T\_h(\mathbf{x} + \mathbf{C}, t) - 2T\_c(\mathbf{x}, t)) \tag{13}$$

Using differentials, the relationship between the differential arc length and the differential angle is derived:

$$
\Delta \mathbf{x} = \sqrt{\Delta r^2 + (r(\Delta \theta))^2} \tag{14}
$$

Applying the spiral function of the spiral-plate heat exchanger, *r* = *aθ* + *b*, (15) is obtained from (14):

$$
\Delta \mathbf{x} = \sqrt{a^2 + (b + a\theta)^2} (\Delta \theta) \tag{15}
$$
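Equation (15) can be sanity-checked by accumulating Δx along the spiral: the sum of sqrt(a² + (b + aθ)²) Δθ over [0, 11π] approaches the arc length of r = b + aθ as Δθ shrinks. The geometric parameters a and b below are illustrative, not the values of Table 1:

```python
import math

def spiral_length(a, b, n_steps, theta_max=11 * math.pi):
    """Left Riemann sum of dx = sqrt(a^2 + (b + a*theta)^2) * dtheta (Eq. (15))."""
    dtheta = theta_max / n_steps
    return sum(math.sqrt(a ** 2 + (b + a * i * dtheta) ** 2) * dtheta
               for i in range(n_steps))

# refining the discretization should barely change the accumulated length
L_coarse = spiral_length(a=0.01, b=0.1, n_steps=1000)
L_fine = spiral_length(a=0.01, b=0.1, n_steps=100000)
```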

Substituting (15) into (13), the differential equation in the cold fluid is obtained as follows.

$$c\_c \rho\_c \delta\_c v\_c \frac{\Delta T\_c(\theta, t)}{\Delta \theta} = k\sqrt{a^2 + (b+a\theta)^2}\,(T\_h(\theta, t) + T\_h(\theta + 2\pi, t) - 2T\_c(\theta, t)), \theta \in [0, 11\pi) \tag{16}$$

Because (16) is complex, we simplify it to (17):

$$c\_c \rho\_c \delta\_c v\_c \frac{\Delta T\_c(\theta, t)}{\Delta \theta} = Fk \sqrt{a^2 + (b + a\theta)^2} (T\_h(\theta, t) - T\_c(\theta, t)), \theta \in [0, 11\pi) \tag{17}$$

where *F* is a constant between 1 and 2, related to the shape of the heat exchanger.

Following the idea of the fractional order derivative [10], (17) is extended from the integer order derivative to the fractional order derivative, and the fractional order derivative equation in the cold fluid for the spiral-plate counter-flow heat exchanger is derived as follows. The fractional order in (18) is impacted by complex factors, so it is difficult to derive by theoretical methods.

$$c\_c \rho\_c \delta\_c v\_c D\_{\theta}^{q\_2} T\_c(\theta,t) = Fk\sqrt{a^2 + (b+a\theta)^2}(T\_h(\theta,t) - T\_c(\theta,t)), \theta \in [0,11\pi) \tag{18}$$

With the same principle, the fractional order derivative equation in hot fluid is derived as follows.

$$c\_h \rho\_h \delta\_h v\_h D\_{\theta}^{q\_1} T\_h(\theta,t) = Fk\sqrt{a^2 + (b+a\theta)^2}(T\_c(\theta,t) - T\_h(\theta,t)), \theta \in [0,11\pi) \tag{19}$$

Let

$$A = \sqrt{a^2 + (b + a\theta)^2} \tag{20}$$

Nonlinear fractional order derivative equations for the spiral-plate counter-flow heat exchanger are given as follows.

$$\begin{cases} D\_{\theta}^{q\_1} T\_h(\theta, t) = \frac{kFA}{v\_h c\_h \rho\_h \delta\_h} (T\_c(\theta, t) - T\_h(\theta, t)) \\ D\_{\theta}^{q\_2} T\_c(\theta, t) = \frac{kFA}{v\_c c\_c \rho\_c \delta\_c} (T\_h(\theta, t) - T\_c(\theta, t)) \\ \theta \in [0, 11\pi] \end{cases} \tag{21}$$

where *v<sub>h</sub>*(*t*) and *v<sub>c</sub>*(*t*) are the input flow rates at time *t* on the hot fluid side and the cold fluid side, respectively.

$$\begin{cases} QL\_1 = \delta\_h Z v\_h \\ QL\_2 = \delta\_c Z v\_c \end{cases} \tag{22}$$

where *QL*<sub>1</sub> and *QL*<sub>2</sub> are the input volume flow rates on the hot fluid side and the cold fluid side, respectively. Substituting (22) into (21), the fractional order derivative model for the spiral-plate counter-flow heat exchanger is described as follows.

$$\begin{cases} D\_{\theta}^{q\_1} T\_h(\theta, t) = \frac{kFAZ}{QL\_1 c\_h \rho\_h} (T\_c(\theta, t) - T\_h(\theta, t)) \\ D\_{\theta}^{q\_2} T\_c(\theta, t) = \frac{kFAZ}{QL\_2 c\_c \rho\_c} (T\_h(\theta, t) - T\_c(\theta, t)) \\ \theta \in [0, 11\pi] \end{cases} \tag{23}$$

Considering the initial conditions, *T<sub>h</sub>*(11*π*, *t*) and *T<sub>c</sub>*(0, *t*) are the input temperatures at time *t* in the hot fluid and the cold fluid, respectively.

#### *3.3. Fractional Order Derivative Model for the Spiral-Plate Parallel-Flow Heat Exchanger*

With the same method, the fractional order derivative model for the spiral-plate parallel-flow heat exchanger is derived as follows.

$$\begin{cases} D\_{\theta}^{q\_1} T\_h(\theta, t) = \frac{kFAZ}{QL\_1 c\_h \rho\_h} (T\_c(\theta, t) - T\_h(\theta, t)) \\ D\_{\theta}^{q\_2} T\_c(\theta, t) = \frac{kFAZ}{QL\_2 c\_c \rho\_c} (T\_h(\theta, t) - T\_c(\theta, t)) \\ \theta \in [0, 11\pi] \end{cases} \tag{24}$$

Considering the initial conditions, *T<sub>h</sub>*(0, *t*) and *T<sub>c</sub>*(0, *t*) are the input temperatures at time *t* in the hot fluid and the cold fluid, respectively.

The fractional order derivative equations for the spiral-plate parallel-flow heat exchanger are similar to those of the counter-flow heat exchanger, but the boundary conditions are different.
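One way to march the parallel-flow model (24) in θ is the Grünwald-Letnikov discretization of Section 2, stepping both temperature profiles from θ = 0. The sketch below lumps the physical parameters into two hypothetical heat-capacity rates (Ch standing in for QL₁c_hρ_h and Cc for QL₂c_cρ_c, with the product kFZ normalized to 1) and runs the integer-order case q₁ = q₂ = 1, for which the scheme reduces to explicit Euler marching and the co-current energy balance can be checked; none of the numbers are the plant's parameters:

```python
import math

def march_parallel_flow(q1=1.0, q2=1.0, N=800, a=0.01, b=0.1,
                        Th_in=90.0, Tc_in=20.0, Ch=1.0, Cc=1.5):
    """Explicit Gruenwald-Letnikov marching of the parallel-flow model (24).
    Ch and Cc are lumped, hypothetical heat-capacity rates; kFZ = 1."""
    dth = 11 * math.pi / N

    def weights(q):  # (-1)^j * C(q, j) via a stable recurrence
        w = [1.0]
        for j in range(1, N + 1):
            w.append(w[-1] * (j - 1 - q) / j)
        return w

    w1, w2 = weights(q1), weights(q2)
    Th, Tc = [Th_in], [Tc_in]
    for n in range(1, N + 1):
        theta = (n - 1) * dth
        A = math.sqrt(a ** 2 + (b + a * theta) ** 2)      # Eq. (20)
        rhs_h = A / Ch * (Tc[-1] - Th[-1])                # right side of (24), hot
        rhs_c = A / Cc * (Th[-1] - Tc[-1])                # right side of (24), cold
        Th.append(dth ** q1 * rhs_h - sum(w1[j] * Th[n - j] for j in range(1, n + 1)))
        Tc.append(dth ** q2 * rhs_c - sum(w2[j] * Tc[n - j] for j in range(1, n + 1)))
    return Th, Tc

# q1 = q2 = 1 reduces to explicit Euler marching of the integer-order model
Th, Tc = march_parallel_flow()
```

For the co-current (parallel-flow) arrangement the hot profile cools, the cold profile warms, and the lumped energy balance Ch(Th_in − Th_out) = Cc(Tc_out − Tc_in) holds step by step.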

#### **4. Parallel Fractional Order Derivative Model for the Spiral-Plate Heat Exchanger and Implementation on GPU**

*4.1. Parallel Fractional Order Derivative Model for the Spiral-Plate Heat Exchanger*

4.1.1. Parallel Model for the Spiral-Plate Counter-Flow Heat Exchanger

Applying (1)–(6) to (23), the parallel fractional order derivative model for the spiral-plate counter-flow heat exchanger is described as follows.

$$\begin{cases} T\_{hk} = (\Delta \theta)^{q\_1} D\_{hfrac} + B\_h T\_{hk-1} \\ T\_{ck} = (\Delta \theta)^{q\_2} D\_{cfrac} + B\_c T\_{ck-1} \end{cases} \tag{25}$$

where *T<sub>hk</sub>*, *T<sub>hk−1</sub>*, *D<sub>hfrac</sub>* ∈ *R<sup>N</sup>*, *B<sub>h</sub>* ∈ *R<sup>N×N</sup>*, *T<sub>ck</sub>*, *T<sub>ck−1</sub>*, *D<sub>cfrac</sub>* ∈ *R<sup>N</sup>*, and *B<sub>c</sub>* ∈ *R<sup>N×N</sup>*.

$$T\_{hk} = \begin{pmatrix} T\_h(\Delta\theta) \\ T\_h(2(\Delta\theta)) \\ \vdots \\ T\_h(N(\Delta\theta)) \end{pmatrix} \tag{26}$$

$$B\_h = \begin{pmatrix} \frac{-\Gamma(q\_1+1)}{\Gamma(2)\Gamma(q\_1)} & 0 & \dots & 0\\ \frac{\Gamma(q\_1+1)}{\Gamma(3)\Gamma(q\_1-1)} & \frac{-\Gamma(q\_1+1)}{\Gamma(2)\Gamma(q\_1)} & \dots & 0\\ \vdots & \vdots & \vdots & \vdots\\ \frac{(-1)^N \Gamma(q\_1+1)}{\Gamma(N+1)\Gamma(q\_1-N+1)} & \frac{(-1)^{(N-1)}\Gamma(q\_1+1)}{\Gamma(N)\Gamma(q\_1-N+2)} & \dots & \frac{-\Gamma(q\_1+1)}{\Gamma(2)\Gamma(q\_1)} \end{pmatrix} \tag{27}$$

$$T\_{hk-1} = \begin{pmatrix} T\_h(0) \\ T\_h(\Delta \theta) \\ \vdots \\ T\_h((N-1)(\Delta \theta)) \end{pmatrix} \tag{28}$$

$$B\_c = \begin{pmatrix} \frac{-\Gamma(q\_2+1)}{\Gamma(2)\Gamma(q\_2)} & 0 & \dots & 0\\ \frac{\Gamma(q\_2+1)}{\Gamma(3)\Gamma(q\_2-1)} & \frac{-\Gamma(q\_2+1)}{\Gamma(2)\Gamma(q\_2)} & \dots & 0\\ \vdots & \vdots & \vdots & \vdots\\ \frac{(-1)^N \Gamma(q\_2+1)}{\Gamma(N+1)\Gamma(q\_2-N+1)} & \frac{(-1)^{(N-1)}\Gamma(q\_2+1)}{\Gamma(N)\Gamma(q\_2-N+2)} & \dots & \frac{-\Gamma(q\_2+1)}{\Gamma(2)\Gamma(q\_2)} \end{pmatrix} \tag{29}$$

$$T\_{ck} = \begin{pmatrix} T\_c(\Delta \theta) \\ T\_c(2\Delta \theta) \\ \vdots \\ T\_c(N(\Delta \theta)) \end{pmatrix} \tag{30}$$

$$T\_{ck-1} = \begin{pmatrix} T\_{\varepsilon}(0) \\ T\_{\varepsilon}(\Delta \theta) \\ \vdots \\ T\_{\varepsilon}((N-1)(\Delta \theta)) \end{pmatrix} \tag{31}$$

So, the parallel fractional order derivative model for the spiral-plate counter-flow heat exchanger is obtained.

$$\begin{cases} T\_{hk} = (\Delta \theta)^{q\_1} \frac{FkZA}{QL\_1 c\_h \rho\_h} (HT\_{ck-1} - T\_{hk-1}) + B\_h T\_{hk-1} \\ T\_{ck} = (\Delta \theta)^{q\_2} \frac{FkZA}{QL\_2 c\_c \rho\_c} (HT\_{hk-1} - T\_{ck-1}) + B\_c T\_{ck-1} \\ T\_{cout} = CT\_{ck} \end{cases} \tag{32}$$

$$\begin{cases} D\_{hfrac} = \frac{FkZA}{QL\_1 c\_h \rho\_h}(HT\_{ck-1} - T\_{hk-1})\\ D\_{cfrac} = \frac{FkZA}{QL\_2 c\_c \rho\_c}(HT\_{hk-1} - T\_{ck-1}) \end{cases} \tag{33}$$

$$\begin{cases} T\_{hk} = (\Delta \theta)^{q\_1} D\_{hfrac} + B\_h T\_{hk-1} \\ T\_{ck} = (\Delta \theta)^{q\_2} D\_{cfrac} + B\_c T\_{ck-1} \end{cases} \tag{34}$$

where

$$H = \begin{pmatrix} 0 & 0 & 0 & \dots & 0 & 1 \\ 0 & 0 & 0 & \dots & 1 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & 0 & 0 & \dots & 0 & 0 \end{pmatrix} \tag{35}$$

$$\mathcal{C} = \begin{pmatrix} 0 & 0 & 0 & 0 & \dots & 1 \end{pmatrix} \tag{36}$$

where *C* ∈ *R<sup>1×N</sup>* and *H* ∈ *R<sup>N×N</sup>*. The parallel fractional order derivative model for the spiral-plate counter-flow heat exchanger is a model with parallel input data, so it executes with high efficiency on the GPU. The proposed parallel model is implemented on the GPU by using MATLAB and CUDA [25].
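The roles of H (order reversal of the opposing stream, needed because the counter-flow hot fluid enters from the opposite end) and C (selection of the outlet node) in (35) and (36) can be illustrated in a few lines; the nodal temperatures below are placeholders, and NumPy stands in for the paper's MATLAB/CUDA implementation:

```python
import numpy as np

N = 5
H = np.fliplr(np.eye(N))              # anti-diagonal permutation matrix of (35)
C = np.zeros((1, N)); C[0, -1] = 1.0  # outlet-node selector of (36)

T = np.array([20.0, 25.0, 31.0, 38.0, 46.0])  # placeholder nodal temperatures
reversed_T = H @ T        # opposing-stream ordering: last node first
outlet = (C @ T).item()   # picks the outlet temperature
```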

#### 4.1.2. The Proposed Parallel Model for the Spiral-Plate Parallel-Flow Heat Exchanger

The parallel fractional order derivative model for the spiral-plate parallel-flow heat exchanger is obtained by the same method as for the spiral-plate counter-flow heat exchanger presented above.

From (24), with the same method, the parallel fractional order derivative equations for the spiral-plate parallel-flow heat exchanger are described as follows.

$$\begin{cases} T\_{hk} = (\Delta \theta)^{q\_1} \frac{FkZA}{QL\_1 c\_h \rho\_h} (T\_{ck-1} - T\_{hk-1}) + B\_h T\_{hk-1} \\ T\_{ck} = (\Delta \theta)^{q\_2} \frac{FkZA}{QL\_2 c\_c \rho\_c} (T\_{hk-1} - T\_{ck-1}) + B\_c T\_{ck-1} \\ T\_{cout} = CT\_{ck} \end{cases} \tag{37}$$

$$\begin{cases} D\_{hfrac} = \frac{FkZA}{QL\_1 c\_h \rho\_h} (T\_{ck-1} - T\_{hk-1})\\ D\_{cfrac} = \frac{FkZA}{QL\_2 c\_c \rho\_c} (T\_{hk-1} - T\_{ck-1}) \end{cases} \tag{38}$$

The parallel fractional order derivative model for the spiral-plate parallel-flow heat exchanger is described as follows.

$$\begin{cases} T\_{hk} = (\Delta \theta)^{q\_1} D\_{hfrac} + B\_h T\_{hk-1} \\ T\_{ck} = (\Delta \theta)^{q\_2} D\_{cfrac} + B\_c T\_{ck-1} \end{cases} \tag{39}$$

where *T<sub>hk</sub>*, *T<sub>hk−1</sub>*, *D<sub>hfrac</sub>* ∈ *R<sup>N</sup>*, *B<sub>h</sub>* ∈ *R<sup>N×N</sup>*, *T<sub>ck</sub>*, *T<sub>ck−1</sub>*, *D<sub>cfrac</sub>* ∈ *R<sup>N</sup>*, and *B<sub>c</sub>* ∈ *R<sup>N×N</sup>*.

#### *4.2. Implementation on GPU for the Proposed Parallel Model*

In this section, the implementation on the GPU of the proposed parallel model for the spiral-plate heat exchanger is presented. The parallel model, with its parallel data, executes faster on the GPU than on the CPU. The thread blocks of the proposed parallel model are given in Table 2.

**Table 2.** The thread blocks of the proposed parallel model implemented on GPU.


The thread blocks of the proposed parallel model (2) implemented on the GPU are shown in Figure 4, where

$$F\_k = \begin{pmatrix} f\_1 \\ f\_2 \\ \vdots \\ f\_N \end{pmatrix} = \begin{pmatrix} f(\Delta h) \\ f(2(\Delta h)) \\ \vdots \\ \vdots \\ f(N(\Delta h)) \end{pmatrix} \tag{40}$$

$$B = \begin{pmatrix} b\_0 \\ b\_1 \\ b\_2 \\ \vdots \\ b\_{N-1} \end{pmatrix} = \begin{pmatrix} \frac{-\Gamma(q+1)}{\Gamma(2)\Gamma(q)} & 0 & \dots & 0 \\ \frac{\Gamma(q+1)}{\Gamma(3)\Gamma(q-1)} & \frac{-\Gamma(q+1)}{\Gamma(2)\Gamma(q)} & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots \\ \frac{(-1)^N \Gamma(q+1)}{\Gamma(N+1)\Gamma(q-N+1)} & \frac{(-1)^{(N-1)}\Gamma(q+1)}{\Gamma(N)\Gamma(q-N+2)} & \dots & \frac{-\Gamma(q+1)}{\Gamma(2)\Gamma(q)} \end{pmatrix} \tag{41}$$

**Figure 4.** The thread blocks of the proposed parallel model.

$$D\_{frac} = \begin{pmatrix} Df\_0 \\ Df\_1 \\ \vdots \\ Df\_{N-1} \end{pmatrix} = \begin{pmatrix} D\_t^q f(\Delta h) \\ D\_t^q f(2(\Delta h)) \\ \vdots \\ D\_t^q f(N(\Delta h)) \end{pmatrix} \tag{42}$$

$$F\_{k-1} = \begin{pmatrix} f\_0 \\ f\_1 \\ f\_2 \\ \vdots \\ \vdots \\ f\_{N-1} \end{pmatrix} = \begin{pmatrix} f(0) \\ f(\Delta h) \\ \vdots \\ \vdots \\ f((N-1)(\Delta h)) \end{pmatrix} \tag{43}$$

where *F<sub>k−1</sub>* is the parallel input data, *D<sub>frac</sub>* is the parallel input derivative data, *B* is a matrix related to the order of the fractional order derivative, and *F<sub>k</sub>* is the parallel output data.

The thread blocks of the proposed parallel model for the spiral-plate counter-flow exchanger (32)–(34) are shown in Figures 5 and 6.

**Figure 5.** The thread blocks of the proposed parallel model in cold fluid for the counter-flow heat exchanger.

**Figure 6.** The thread blocks of the proposed parallel model in hot fluid for the counter-flow heat exchanger.

Here:

$$T\_{ck-1} = \begin{pmatrix} T\_c(0) \\ T\_c(\Delta \theta) \\ \vdots \\ T\_c((N-1)\Delta \theta) \end{pmatrix} = \begin{pmatrix} T\_{c,in} \\ T\_{c1} \\ \vdots \\ T\_{cN-1} \end{pmatrix} \tag{44}$$

$$\tilde{T}\_{hk-1} = HT\_{hk-1} = \begin{pmatrix} T\_{hN} \\ T\_{hN-1} \\ \vdots \\ T\_{h1} \end{pmatrix} = \begin{pmatrix} T\_{h,out} \\ T\_{hN-1} \\ \vdots \\ T\_{h1} \end{pmatrix} \tag{45}$$

$$T\_{ck} = \begin{pmatrix} T\_c(\Delta\theta) \\ T\_c(2(\Delta\theta)) \\ \vdots \\ T\_c(N(\Delta\theta)) \end{pmatrix} = \begin{pmatrix} T\_{c1} \\ T\_{c2} \\ \vdots \\ T\_{c,out} \end{pmatrix} \tag{46}$$

In Figure 5, $\tilde{T}\_{hk-1}$ and $T\_{ck-1}$ are the parallel input data, and $T\_{ck}$ is the parallel output data.

$$T\_{hk-1} = \begin{pmatrix} T\_h(0) \\ T\_h(\Delta \theta) \\ \vdots \\ T\_h((N-1)\Delta \theta) \end{pmatrix} = \begin{pmatrix} T\_{h,in} \\ T\_{h1} \\ \vdots \\ T\_{hN-1} \end{pmatrix} \tag{47}$$

$$\tilde{T}\_{ck-1} = HT\_{ck-1} = \begin{pmatrix} T\_{cN} \\ T\_{cN-1} \\ \vdots \\ T\_{c1} \end{pmatrix} = \begin{pmatrix} T\_{c,out} \\ T\_{cN-1} \\ \vdots \\ T\_{c1} \end{pmatrix} \tag{48}$$

$$T\_{hk} = \begin{pmatrix} T\_h(\Delta\theta) \\ T\_h(2(\Delta\theta)) \\ \vdots \\ T\_h(N(\Delta\theta)) \end{pmatrix} = \begin{pmatrix} T\_{h1} \\ T\_{h2} \\ \vdots \\ T\_{h,out} \end{pmatrix} \tag{49}$$

In Figure 6, $\tilde{T}\_{ck-1}$ and $T\_{hk-1}$ are the parallel input data, and $T\_{hk}$ is the parallel output data.
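The operator $H$ used in Equations (45) and (48) is simply the anti-diagonal permutation (reversal) matrix, which aligns the counter-flowing hot stream index-by-index with the cold stream. A minimal sketch with purely illustrative temperature values:

```python
import numpy as np

# Hot-fluid samples ordered from inlet to outlet: T_h(0), T_h(dtheta), ...
# The numbers below are illustrative, not taken from the paper.
T_h_prev = np.array([80.0, 72.0, 65.0, 59.0])

# H is the anti-diagonal identity ("exchange") matrix.
N = len(T_h_prev)
H = np.fliplr(np.eye(N))

T_h_tilde = H @ T_h_prev   # equivalent to np.flip(T_h_prev)
print(T_h_tilde)
```

In practice one would never materialize $H$; a reversed view (`T_h_prev[::-1]`) gives the same parallel input vector.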

The thread blocks of the proposed parallel model for the spiral-plate parallel-flow heat exchanger, Equations (37)–(39), are shown in Figures 7 and 8.

**Figure 7.** The thread blocks of the proposed parallel model in cold fluid for the parallel-flow heat exchanger.

Here:

$$T\_{hk-1} = \begin{pmatrix} T\_h(0) \\ T\_h(\Delta \theta) \\ \vdots \\ T\_h((N-1)(\Delta \theta)) \end{pmatrix} = \begin{pmatrix} T\_{h,in} \\ T\_{h1} \\ \vdots \\ T\_{hN-1} \end{pmatrix} \tag{50}$$

$$T\_{ck-1} = \begin{pmatrix} T\_c(0) \\ T\_c(\Delta \theta) \\ \vdots \\ T\_c((N-1)\Delta \theta) \end{pmatrix} = \begin{pmatrix} T\_{c,in} \\ T\_{c1} \\ \vdots \\ T\_{cN-1} \end{pmatrix} \tag{51}$$

$$T\_{ck} = \begin{pmatrix} T\_c(\Delta\theta) \\ T\_c(2(\Delta\theta)) \\ \vdots \\ T\_c(N(\Delta\theta)) \end{pmatrix} = \begin{pmatrix} T\_{c1} \\ T\_{c2} \\ \vdots \\ T\_{c,out} \end{pmatrix} \tag{52}$$

$$T\_{hk} = \begin{pmatrix} T\_h(\Delta\theta) \\ T\_h(2(\Delta\theta)) \\ \vdots \\ T\_h(N(\Delta\theta)) \end{pmatrix} = \begin{pmatrix} T\_{h1} \\ T\_{h2} \\ \vdots \\ T\_{h,out} \end{pmatrix} \tag{53}$$

**Figure 8.** The thread blocks of the proposed parallel model in hot fluid for the parallel-flow heat exchanger.

In Figure 7, $T\_{hk-1}$ and $T\_{ck-1}$ are the parallel input data, and $T\_{ck}$ is the parallel output data. In Figure 8, $T\_{ck-1}$ and $T\_{hk-1}$ are the parallel input data, and $T\_{hk}$ is the parallel output data. Here, $F\_h = \frac{FZkA}{Q\_{L1}c\_h\rho\_h}$ and $F\_c = \frac{FZkA}{Q\_{L2}c\_c\rho\_c}$; $T\_{c,in}$ and $T\_{c,out}$ are the input and output temperatures of the cold fluid, $T\_{h,in}$ and $T\_{h,out}$ are the input and output temperatures of the hot fluid, and $Z^{-1}$ is a sampling delay.

$$B\_h = \begin{pmatrix} b\_{h0} \\ b\_{h1} \\ \vdots \\ b\_{hN-1} \end{pmatrix} \tag{54}$$

$$B\_c = \begin{pmatrix} b\_{c0} \\ b\_{c1} \\ \vdots \\ b\_{cN-1} \end{pmatrix} \tag{55}$$

Therefore, the parallel fractional-order derivative model for the spiral-plate heat exchanger is a parallel model with parallel input and output data. As shown in Figures 4–8, it achieves high execution efficiency when implemented on a GPU.

#### *4.3. The Comparison of Execution Time for the Proposed Parallel Model on CPU and GPU*

The proposed parallel model is implemented on both CPU and GPU; here, a GeForce GTX 1080 Ti GPU is used. Figure 9 compares the execution time of the proposed parallel model on the CPU and the GPU, where Δ*θ* is the discretisation angle and *N* is the total number of discrete points. As *N* increases, the execution time on the CPU grows, whereas the execution time on the GPU changes little.
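The near-flat GPU curve is a consequence of the structure of Equation (41): each output element is an independent dot product, so all $N$ rows can be evaluated concurrently. A minimal NumPy sketch (with random stand-in data, not the exchanger model) contrasts the sequential per-element evaluation with the single matrix–vector product that a GPU parallelizes:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512
B = np.tril(rng.standard_normal((N, N)))   # stand-in for the coefficient matrix
f = rng.standard_normal(N)                 # stand-in for the sampled signal

# Sequential evaluation: one output element at a time (CPU-style loop).
seq = np.array([B[i, :i + 1] @ f[:i + 1] for i in range(N)])

# Parallel evaluation: one matrix-vector product. On a GPU each row is
# assigned to its own thread block, so wall time grows slowly with N.
par = B @ f
print(np.max(np.abs(seq - par)))
```

The two evaluations agree to floating-point precision; only the scheduling differs, which is exactly what Figure 9 measures.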

**Figure 9.** The comparison of execution time for the proposed parallel model on CPU and GPU.

#### **5. Simulation on the Proposed Parallel Model for the Spiral-Plate Heat Exchanger**

In this section, the relationships between the output temperature of the cold fluid and the fractional orders $q\_1$, $q\_2$, the volume flow rate of the hot fluid, and the volume flow rate of the cold fluid are analysed for the proposed parallel model of the spiral-plate heat exchanger.

#### *5.1. Simulation Conditions*

Simulation parameters of the spiral-plate heat exchanger are shown in Table 3.

**Table 3.** Simulation parameters of the spiral-plate heat exchanger.


*5.2. Simulation on the Proposed Parallel Model for the Spiral-Plate Counter-Flow Heat Exchanger*

Table 4 indexes the figures showing the relationships between the output temperature of the cold fluid and the flow rates of the hot and cold fluids for the proposed parallel model of the spiral-plate counter-flow heat exchanger.

**Table 4.** The relationships between the output temperature of cold fluid and the volume flow rates of hot fluid, cold fluid for the proposed parallel model for the spiral-plate counter-flow heat exchanger.


### 5.2.1. Simulation with the Different Fractional Orders as *q*1, *q*<sup>2</sup> ≤ 1

The relationships between the output temperature of the cold fluid and the fractional orders $q\_1, q\_2 \le 1$ are shown in Figure 10. The output temperature increases as the fractional orders $q\_1$, $q\_2$ rise.

**Figure 10.** The output temperature in cold fluid as *q*1, *q*<sup>2</sup> ≤ 1.

5.2.2. Simulation with the Different Fractional Orders as *q*1, *q*<sup>2</sup> > 1

The relationships between the output temperature of the cold fluid and the different fractional orders $q\_1, q\_2 > 1$ are shown in Figure 11. When $q\_1, q\_2 = 1.025$, the output temperature of the cold fluid is unstable.

**Figure 11.** The output temperature in cold fluid as *q*1, *q*<sup>2</sup> > 1.

5.2.3. The Relationships between the Output Temperature in Cold Fluid and the Different Volume Flow Rate of Hot Fluid

The relationships between the output temperature of the cold fluid and the different volume flow rates of the hot fluid are shown in Figures 12–15. These figures show that the output temperature rises as the volume flow rate of the hot fluid increases.

**Figure 12.** The output temperature in cold fluid with *QL*<sup>1</sup> = 1 L/min.

**Figure 13.** The output temperature in cold fluid with *QL*<sup>1</sup> = 3 L/min.

**Figure 14.** The output temperature in cold fluid with *QL*<sup>1</sup> = 5 L/min.

**Figure 15.** The output temperature in cold fluid with *QL*<sup>1</sup> = 7 L/min.

5.2.4. The Relationships between the Output Temperature in Cold Fluid and the Different Volume Flow Rate of Cold Fluid

The relationships between the output temperature of the cold fluid and the different volume flow rates of the cold fluid are shown in Figures 16–19. These figures show that the output temperature of the cold fluid goes down as the volume flow rate of the cold fluid increases.

**Figure 16.** The output temperature in cold fluid with *QL*<sup>2</sup> = 1 L/min.

**Figure 17.** The output temperature in cold fluid with *QL*<sup>2</sup> = 3 L/min.

**Figure 18.** The output temperature in cold fluid with *QL*<sup>2</sup> = 5 L/min.

**Figure 19.** The output temperature in cold fluid with *QL*<sup>2</sup> = 7 L/min.

*5.3. Simulation on the Proposed Parallel Model for the Spiral-Plate Parallel-Flow Heat Exchanger*

Table 5 indexes the figures showing the relationships between the output temperature of the cold fluid and the flow rates of the hot and cold fluids for the proposed parallel model of the spiral-plate parallel-flow heat exchanger.

**Table 5.** The relationships between the output temperature of cold fluid and the flow rates of hot fluid, cold fluid for the proposed parallel model of the spiral-plate parallel-flow heat exchanger.


5.3.1. Simulation with the Different Fractional Orders as *q*1, *q*<sup>2</sup> ≤ 1

The relationships between the output temperature of the cold fluid and the fractional orders $q\_1, q\_2 \le 1$ are shown in Figure 20. The output temperature rises as the fractional orders $q\_1$, $q\_2$ increase.

**Figure 20.** The output temperature in cold fluid as *q*1, *q*<sup>2</sup> ≤ 1.

5.3.2. Simulation with the Different Fractional Orders as *q*1, *q*<sup>2</sup> > 1

The relationships between the output temperature of the cold fluid and the fractional orders $q\_1, q\_2 > 1$ are shown in Figure 21. When $q\_1, q\_2 = 1.025$, the output temperature of the cold fluid is unstable.

**Figure 21.** The output temperature in cold fluid as *q*1, *q*<sup>2</sup> > 1.

5.3.3. The Relationships between the Output Temperature of Cold Fluid and the Different Volume Flow Rate of Hot Fluid

The relationships between the output temperature of the cold fluid and the different volume flow rates of the hot fluid are shown in Figures 22–25. These figures show that the output temperature of the cold fluid rises as the volume flow rate of the hot fluid increases.

**Figure 22.** The output temperature in cold fluid with *QL*<sup>1</sup> = 1 L/min.

**Figure 23.** The output temperature in cold fluid with *QL*<sup>1</sup> = 3 L/min.

**Figure 24.** The output temperature in cold fluid with *QL*<sup>1</sup> = 5 L/min.

**Figure 25.** The output temperature in cold fluid with *QL*<sup>1</sup> = 7 L/min.

5.3.4. The Relationships between the Output Temperature of Cold Fluid and the Different Volume Flow Rate of Cold Fluid

The relationships between the output temperature of the cold fluid and the different volume flow rates of the cold fluid are shown in Figures 26–29. These figures show that the output temperature of the cold fluid drops as the volume flow rate of the cold fluid increases.

**Figure 26.** The output temperature in cold fluid with *QL*<sup>2</sup> = 1 L/min.

**Figure 27.** The output temperature in cold fluid with *QL*<sup>2</sup> = 3 L/min.

**Figure 28.** The output temperature in cold fluid with *QL*<sup>2</sup> = 5 L/min.

**Figure 29.** The output temperature in cold fluid with *QL*<sup>2</sup> = 7 L/min.

#### **6. Conclusions**

A parallel fractional-order derivative model and the problem statement are introduced in this paper. First, the fractional-order derivative model for the spiral-plate heat exchanger is constructed by mathematical analysis, as an extension of the classical integer-order derivative model. Then, the parallel fractional-order derivative model for the spiral-plate heat exchanger is constructed to exploit the merits of the GPU. Finally, the parallel fractional-order derivative model for the spiral-plate heat exchanger is simulated. The simulations show the relationships between the output temperature of the heated fluid and the fractional orders of the two fluids, the input volume flow rate of the hot fluid, and the input volume flow rate of the cold fluid, respectively.

**Author Contributions:** M.D. supervised the work; G.D. finished the simulation, and wrote the rest of the work. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

**Definition A1** ([10])**.** *(Caputo fractional-order derivative)*

$$\, \_a^C D\_t^q f(t) = \begin{cases} \frac{1}{\Gamma(n-q)} \int\_a^t (t-\tau)^{n-q-1} f^{(n)}(\tau)\, d\tau, & n-1 < q < n \\ \frac{d^n f(t)}{dt^n}, & q = n \end{cases} \tag{A1}$$

*where* $\Gamma(\cdot)$ *is the gamma function, defined by* $\Gamma(x) = \int\_0^\infty e^{-t} t^{x-1}\, dt$*, and n is a positive integer.*

**Definition A2** ([10])**.** *(Grünwald–Letnikov fractional-order derivative)*

$$\, \_a^{GL} D\_t^q f(t) = \lim\_{\Delta h \to 0} (\Delta h)^{-q} \sum\_{j=0}^{\left[\frac{t-a}{\Delta h}\right]} (-1)^j \frac{\Gamma(q+1)}{\Gamma(j+1)\Gamma(q-j+1)} f(t - j(\Delta h)) \tag{A2}$$

*where* [·] *means the integer part.*

*Definitions* (A1) *and* (A2) *are equivalent if* $f(\cdot)$ *is sufficiently differentiable.* (A1) *is a continuous-type definition of the fractional-order derivative, whereas* (A2) *is a discrete-type definition that is easily implemented on a computer;* (A2) *is the one used in this paper. With* (A1)*, it is easier to analyse system performance, such as stability and tracking, for fractional-order control systems.*

*If* Δ*h* ≈ 0 *then*

$${}^{GL}\_0 D\_t^q f(t) \approx (\Delta h)^{-q} \sum\_{j=0}^N (-1)^j \frac{\Gamma(q+1)}{\Gamma(j+1)\Gamma(q-j+1)} f(t - j(\Delta h)) \tag{A3}$$

*where* $N = \left[\frac{t-a}{\Delta h}\right]$*.*

#### **References**


## *Article* **Design, Analysis and Comparison of a Nonstandard Computational Method for the Solution of a General Stochastic Fractional Epidemic Model**

**Nauman Ahmed 1, Jorge E. Macías-Díaz 2,3,\*, Ali Raza 4, Dumitru Baleanu 5,6, Muhammad Rafiq 7, Zafar Iqbal <sup>1</sup> and Muhammad Ozair Ahmad <sup>1</sup>**


**Abstract:** Malaria is a deadly human disease that is still a major cause of casualties worldwide. In this work, we consider the fractional-order system of malaria pestilence. Further, the essential traits of the model are investigated carefully. To this end, the stability of the model at equilibrium points is investigated by applying the Jacobian matrix technique. The contribution of the basic reproduction number, *R*0, in the infection dynamics and stability analysis is elucidated. The results indicate that the given system is locally asymptotically stable at the disease-free steady-state solution when *R*<sup>0</sup> < 1. A similar result is obtained for the endemic equilibrium when *R*<sup>0</sup> > 1. The underlying system shows global stability at both steady states. The fractional-order system is converted into a stochastic model. For a more realistic study of the disease dynamics, the non-parametric perturbation version of the stochastic epidemic model is developed and studied numerically. The general stochastic fractional Euler method, Runge–Kutta method, and a proposed numerical method are applied to solve the model. The standard techniques fail to preserve the positivity property of the continuous system. Meanwhile, the proposed stochastic fractional nonstandard finite-difference method preserves the positivity. For the boundedness of the nonstandard finite-difference scheme, a result is established. All the analytical results are verified by numerical simulations. A comparison of the numerical techniques is carried out graphically. The conclusions of the study are discussed as a closing note.

**Keywords:** stochastic epidemic model; malaria infection; stochastic generalized Euler; nonstandard finite-difference method; positivity; boundedness

**MSC:** 65M06; 65M12; 35K15; 35K55; 35K57

#### **1. Introduction**

Malaria is a Latin word which means "foul air". Biologically, malaria is an ailment caused by the microorganism *Plasmodium*, a parasite carried by the mosquito. It is also observed that not all mosquitoes transmit malaria; only the female *Anopheles* mosquito can inject this plasmodium into the human body, causing the fatal malaria disease. Its incubation period varies from 7 to 30 days, and research shows that five types of malarial parasites are found, namely, *P. malariae*, *P. ovale*, *P. vivax*, *P. falciparum* and *P. knowlesi*. In particular, *P. falciparum* is extremely dangerous and fatal, causing a wide range of physical symptoms, such as fever,


**Citation:** Ahmed, N.; Macías-Díaz, J.E.; Raza, A.; Baleanu, D.; Rafiq, M.; Iqbal, Z.; Ahmad, M.O. Design, Analysis and Comparison of a Nonstandard Computational Method for the Solution of a General Stochastic Fractional Epidemic Model. *Axioms* **2022**, *11*, 10. https://doi.org/10.3390/ axioms11010010

Academic Editors: Carlos Lizama and Chris Goodrich

Received: 19 October 2021 Accepted: 21 December 2021 Published: 24 December 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

flu, severe chills, vomiting, muscle aches, headache, nausea, diarrhea, tiredness, low blood pressure, respiratory disorder, cerebral disorder, and hemoglobin in the urine, with some cases showing jaundice and anemia.

Physicians knew about this disease at least 2000 years ago, and noted that it is very common in marshy areas, where stagnant water is found frequently. It was assumed that water and malaria have some relation, and some volunteers at that time drank from pond water but they did not show any symptoms. A treatment for malaria was discovered accidentally in the seventeenth century, when Americans started to use the bark of the plant Quina to cure this disease in America. In 1880, the name plasmodium was given to this parasite because it resembled the multinucleated cells of the sludge type. Nowadays, the vaccine of malaria is used to prevent it, but it is not so effective due to the fact that plasmodium has a very complicated life cycle. Trials and experiments are still in progress. Currently, the vaccine RTSS is used, but it is still rather inefficient.

In America, about 2000 cases of malaria are diagnosed every year. According to the World Health Organization, 229 million cases of malaria were reported in 2019, and 228 million cases were reported in 2018. In 2019, there were 409,000 deaths; in 2018, 411,000 casualties were recorded worldwide. In 2019, 23% of deaths occurred in Nigeria, 11% in Congo, 5% in Tanzania, and 4% in Niger. The only region in the world which is free of malaria is northern Australia. In total, 94% of cases were reported in Africa, the highest proportion in the world. Children under the age of 5 years are at high risk; they accounted for about 67% of malaria deaths worldwide in 2019. In Pakistan, during the monsoon season, the number of malarial patients remains at its peak; it is estimated that about 300,000 cases are reported every year in Pakistan. Some precautions are taken to control malaria, such as wearing full clothes during summer, using mosquito-repellent lotions, placing nets on windows and doors, maintaining a proper sanitation system for water, using bed nets at night, using medicated body oils, etc.

In 2020, Cristhian et al. proposed a SIR model to inhibit malaria [1]. In 2020, Olaniyi et al. presented an SEIR mathematical model to control malaria among travelers [2]. In 2020, Kim et al. modulated an SEI model to save Korean people from Plasmodium vivax [3]. Ibrahim et al. introduced an SEIR model to control the transmission of malaria disease using awareness techniques [4]. In 2020, Baihaqi et al. proposed an SEIRS p-model to investigate how malaria disease spreads among humans [5]. In 2020, Traore et al. proposed an ELPN model by describing different stages of mosquitoes that are involved in malaria transmission [6]. Djidjou et al. formulated an SEIR model to study the effects of weather conditions for spreading malaria disease [7]. That year, Pandey presented a mathematical model to describe how domestic and industrial effluents play a major role in malaria spreading [8]. In 2019, Song et al. introduced a malaria-dynamics mathematical model [9]. In turn, Ogunmiloro presented a model to simulate the infectivity of plasmodium and toxoplasma [10]; Koutou et al. proposed an ELPA model to study the relationship of malaria with mosquito population [11]; and Bakary et al. suggested a model to analyze the impact of frequent biting of mosquitoes and blood transfusions [12].

Beretta et al. studied mathematically the mortality in children and adults caused by malaria [13]. Rafia et al. observed the consequences of vaccination on the dynamics of malaria [14]. In 2017, Traoré et al. presented a model to estimate the variation in the intensity of malaria epidemic by considering the seasonal effects and frequent bite rate of mosquitoes [15]. In 2017, Mojeeb et al. presented an SEIR model to investigate the ways to control the mosquito population and eradication of malaria outbreaks [16]. Olaniyi suggested a system to demonstrate the non-linearity in malarial propagation [17]. In 2011, Mandal et al. projected a system to understand the propagation of malaria disease [18], Chitnis developed an SEIR model to check the propagation of malaria by infectious mosquitoes [19] and Smith et al. presented a scientific design to predict the presence of malaria in a human population [20]. The purpose of this work is to propose a stochastic compartmental system using fractional operators to model the spreading of more general

epidemics in a human population. Our scheme will be able to preserve various important properties of the solutions [21–25].

#### **2. Mathematical Models**

In this section, we introduce the extended stochastic fractional epidemic model [26]. To start with, we quote some basic definitions of fractional calculus.

**Definition 1.** *The Riemann–Liouville fractional derivative of <sup>ψ</sup>* : R → R *of order <sup>α</sup>* > 0 *is defined as*

$$\,\_{RL}D\_0^{\alpha}\psi(t) = \frac{1}{\Gamma(k-\alpha)}\frac{d^k}{dt^k}\int\_0^t (t-s)^{k-\alpha-1}\psi(s)\,ds, \quad \forall t \in \mathbb{R},\tag{1}$$

*where k* = [*α*] + 1*, k* − 1 < *α* < *k and* Γ *is the gamma function. Meanwhile, the respective Caputo fractional derivative of order α is given by*

$$\, \_0^C D\_t^{\alpha} \psi(t) = \frac{1}{\Gamma(k-\alpha)} \int\_0^t (t-s)^{k-\alpha-1} \psi^{(k)}(s)\, ds. \tag{2}$$

To start with, let us consider the following compartmental epidemic model studied in [26]:

$$\frac{dS\_h(t)}{dt\_1} = \mu\_h N\_h(t) - \beta\_h S\_h(t) \left(\frac{I\_v(t)}{N\_v(t)}\right) - \alpha\_h S\_h(t),\tag{3}$$

$$\frac{dI\_h(t)}{dt\_1} = \beta\_h S(t)\_h I\_v(t) - (\delta\_h + a\_h + \gamma\_h) I\_h(t),\tag{4}$$

$$\frac{d\mathcal{R}\_h(t)}{dt\_1} = \gamma\_h I\_h(t) - a\_h \mathcal{R}\_h(t),\tag{5}$$

$$\frac{dS\_v(t)}{dt\_1} = \mu\_V N\_v(t) - \beta\_v S\_v(t) \frac{I\_h(t)}{N\_h(t)} - a\_v S\_v(t),\tag{6}$$

$$\frac{dI\_v(t)}{dt\_1} = \beta\_v S\_v \frac{I\_h(t)}{N\_h(t)} - a\_v I\_v(t). \tag{7}$$

In the above system, $S\_h(t)$ describes the susceptible population at time $t$, $I\_h(t)$ is the infected population, $R\_h(t)$ is the number of recovered individuals, $S\_v(t)$ is the susceptible mosquitoes, $I\_v(t)$ is the number of infected mosquitoes, $N\_h(t)$ is the population size, and $N\_v(t)$ is the total mosquito population. Meanwhile, $\mu\_h$ is the per capita birth rate of human individuals $[\mathrm{time}^{-1}]$, $\alpha\_h$ is the per capita natural death rate for human individuals $[\mathrm{time}^{-1}]$, $\delta\_h$ denotes the per capita disease-induced death rate for the human population $[\mathrm{time}^{-1}]$, $\beta\_h$ is the contact rate of the human population $[\mathrm{time}^{-1}]$, $\gamma\_h$ represents the per capita recovery rate of humans $[\mathrm{time}^{-1}]$, $\mu\_v$ denotes the per capita birth rate of mosquitoes $[\mathrm{time}^{-1}]$, $\alpha\_v$ is the per capita natural death rate of mosquitoes $[\mathrm{time}^{-1}]$, and $\beta\_v$ is the mosquito contact rate $[\mathrm{time}^{-1}]$.

To generalize systems (3)–(7), we use fractional operators by a scaling of the model. From (3),

$$\frac{1}{\mu\_h N\_h} \frac{dS\_h}{dt\_1} = \frac{\mu\_h N\_h}{\mu\_h N\_h} - \frac{\beta\_h}{\mu\_h} \left(\frac{S\_h}{N\_h}\right) \left(\frac{I\_v}{N\_v}\right) - \left(\frac{\alpha\_h}{\mu\_h}\right) \left(\frac{S\_h}{N\_h}\right),\tag{8}$$

which leads to the equation

$$\frac{ds\_h}{dt} = 1 - \beta s\_h i\_v - \alpha\_1 s\_h,\tag{9}$$

where $s\_h = S\_h/N\_h$, $i\_v = I\_v/N\_v$, $\alpha\_1 = \alpha\_h/\mu\_h$, $\beta = \beta\_h/\mu\_h$ and $t = t\_1\mu\_h$. Similarly,

$$\frac{di\_h}{dt} = \beta s\_h i\_v - (\gamma + \alpha\_1) i\_h,\tag{10}$$

$$\frac{di\_v}{dt} = v(1 - i\_v) i\_h - \delta i\_v.\tag{11}$$

Here, $\gamma = (\delta\_h + \gamma\_h)/\mu\_h$, $v = \beta\_v/N\_v$, $\delta = \alpha\_v/\mu\_v$, $R\_h = N\_h - S\_h - I\_h$, and $S\_v = N\_v - I\_v$. Finally, the following time-fractional system results:

$$D\_t^a s\_h = 1 - \beta^a s\_h(t) i\_\upsilon(t) - a\_1^a s\_h(t),\tag{12}$$

$$D\_t^a i\_{\hbar} = \beta^a s\_{\hbar}(t) i\_{\upsilon}(t) - (a\_1^a + \gamma^a) i\_{\hbar}(t),\tag{13}$$

$$D\_t^a i\_\upsilon = \upsilon^a (1 - i\_\upsilon(t)) i\_h(t) - \delta^a i\_\upsilon(t). \tag{14}$$

In this system, we convey that $D\_t^{\alpha} = {}\_0^C D\_t^{\alpha}$ and, for simplicity, the birth and death rates are the same. Moreover, the solution region for systems (12)–(14) is $\Omega = \{(s\_h, i\_h, i\_v) : s\_h + i\_h + i\_v \le 1,\ s\_h \ge 0,\ i\_h \ge 0,\ i\_v \ge 0\}$.
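The qualitative behaviour of systems (12)–(14) can be checked with a simple Grünwald–Letnikov (fractional Euler) discretization. This is a generic scheme for illustration only, not the method analysed later in the paper, and all parameter values below are illustrative; $\alpha\_1 = 1$ reflects the equal birth and death rates. The parameters are chosen so that $R\_0 < 1$, in which case the trajectory should approach the disease-free state $E\_0 = (1, 0, 0)$.

```python
import numpy as np

def c_weights(alpha, n):
    """Binomial GL weights c_j = (-1)^j C(alpha, j), via recurrence."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

# Illustrative parameters (alpha_1 = 1: birth and death rates coincide).
alpha, beta, gamma, v, delta, a1 = 0.95, 0.5, 0.5, 0.4, 0.6, 1.0
R0 = beta**alpha * v**alpha / (delta**alpha * (a1**alpha + gamma**alpha))

def rhs(x):
    s, ih, iv = x
    return np.array([
        1.0 - beta**alpha * s * iv - a1**alpha * s,
        beta**alpha * s * iv - (a1**alpha + gamma**alpha) * ih,
        v**alpha * (1.0 - iv) * ih - delta**alpha * iv,
    ])

h, steps = 0.01, 800
c = c_weights(alpha, steps)
x = np.empty((steps + 1, 3))
x[0] = [0.9, 0.05, 0.05]
for k in range(1, steps + 1):
    # The GL discretization of the Caputo derivative acts on x - x(0).
    memory = sum(c[j] * (x[k - j] - x[0]) for j in range(1, k + 1))
    x[k] = x[0] + h**alpha * rhs(x[k - 1]) - memory
print(R0, x[-1])
```

For these values, the infected compartments decay and $s\_h$ approaches 1, consistent with the local-stability results of Section 3.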

Finally, we investigate a stochastic extension of the fractional epidemic models (12)–(14) following various stochastic approaches available in the literature [27–30]. More precisely, we consider the following system of stochastic differential equations, which extends our fractional epidemic model:

$$\begin{cases} D\_t^a s\_h(t) = 1 - \beta^a s\_h(t) i\_\upsilon(t) - a\_1^a s\_h(t) + \sigma\_1 s\_h(t) dB\_1(t), \\ D\_t^a i\_h(t) = \beta^a s\_h(t) i\_\upsilon(t) - (a\_1^a + \gamma^a) i\_h(t) + \sigma\_2 i\_h(t) dB\_2(t), \\ D\_t^a i\_\upsilon(t) = \upsilon^a (1 - i\_\upsilon) i\_h(t) - \delta^a i\_\upsilon(t) + \sigma\_3 i\_\upsilon(t) dB\_3(t). \end{cases} \tag{15}$$

Here, $\sigma\_1$, $\sigma\_2$, and $\sigma\_3$ are the stochastic perturbations of each state variable, and $B\_m(t)$, $m = 1, 2, 3$, are independent Brownian motions.
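Standard explicit discretizations of systems such as (15) can violate positivity, which is what motivates the nonstandard scheme studied later. A scalar sketch makes the mechanism visible: for a stiff noisy decay $dx = -\lambda x\,dt + \sigma x\,dB$ (rates and noise intensity illustrative; the fractional memory is omitted here since it does not change the positivity mechanism, and this is not the authors' exact scheme), the explicit step can flip sign, while a nonstandard step with an implicit decay denominator cannot.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, sigma = 30.0, 0.1      # stiff decay rate, small noise intensity
h, steps = 0.1, 100         # deliberately large step: lam * h = 3 > 2
dB = rng.normal(0.0, np.sqrt(h), steps)

x_euler = np.empty(steps + 1); x_euler[0] = 1.0
x_nsfd = np.empty(steps + 1); x_nsfd[0] = 1.0
for k in range(steps):
    # Explicit (Euler-Maruyama-type) step: the factor (1 - lam*h) = -2,
    # so the iterate oscillates in sign regardless of the small noise.
    x_euler[k + 1] = x_euler[k] * (1.0 - lam * h + sigma * dB[k])
    # Nonstandard step: decay treated implicitly, noise kept explicit;
    # the denominator 1 + lam*h > 0 keeps the iterate positive.
    x_nsfd[k + 1] = x_nsfd[k] * (1.0 + sigma * dB[k]) / (1.0 + lam * h)
print(x_euler.min(), x_nsfd.min())
```

The explicit iterate turns negative on the very first step, whereas the nonstandard iterate decays monotonically toward zero while remaining positive.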

#### **3. Mathematical Analysis**

This part is devoted to obtaining the equilibrium points of the steady states and the stability analysis of systems (12)–(14). To that end, we set $D\_t^{\alpha} s\_h(t) = D\_t^{\alpha} i\_h(t) = D\_t^{\alpha} i\_v(t) = 0$. Then there are two equilibria of the epidemic models (12)–(14): the disease-free equilibrium $E\_0 = (s\_{h\_0}, i\_{h\_0}, i\_{v\_0}) = (1, 0, 0)$, and the disease-existing steady state $E\_1 = (s\_h^\*, i\_h^\*, i\_v^\*)$. It is easy to check algebraically that

$$i\_v^\* = \frac{\upsilon^a i\_h^\*}{\upsilon^a i\_h^\* + \delta^a},\tag{16}$$

$$s\_h^\* = \frac{(a\_1^a + \gamma^a)(v^a i\_h^\* + \delta^a)}{\beta^a v^a},\tag{17}$$

$$i\_h^\* = \frac{\beta^a \upsilon^a - \alpha\_1^a(\alpha\_1^a + \gamma^a)\delta^a}{\upsilon^a(\alpha\_1^a + \gamma^a)(\beta^a + \alpha\_1^a)}.\tag{18}$$

On the other hand, to obtain the basic reproductive number, we apply the next generation approach. This method assures that the following identity is satisfied:

$$
\begin{bmatrix} \dot{i}\_h^\* \\ \dot{i}\_v^\* \end{bmatrix} = F \begin{bmatrix} \dot{i}\_h \\ \dot{i}\_v \end{bmatrix} - V \begin{bmatrix} \dot{i}\_h \\ \dot{i}\_v \end{bmatrix} \tag{19}
$$

where

$$F = \begin{bmatrix} 0 & \beta^a s\_h \\ 0 & 0 \end{bmatrix}, \qquad V = \begin{bmatrix} (a\_1^a + \gamma^a) & 0 \\ -\upsilon^a & \delta^a \end{bmatrix}. \tag{20}$$

As a consequence,

$$FV^{-1} = \frac{1}{\delta^a (\alpha\_1^a + \gamma^a)} \begin{bmatrix} \beta^a s\_h \upsilon^a & \beta^a s\_h (\alpha\_1^a + \gamma^a) \\ 0 & 0 \end{bmatrix}. \tag{21}$$

We conclude that the basic reproductive number is

$$R\_0 = \frac{\beta^a v^a}{\delta^a (a\_1^a + \gamma^a)}.\tag{22}$$
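The closed form (22) can be cross-checked numerically: $R\_0$ is the spectral radius of the next-generation matrix $FV^{-1}$ built from Equation (20) at the disease-free state ($s\_h = 1$). A short sketch with illustrative parameter values:

```python
import numpy as np

# Illustrative parameter values; s_h = 1 at the disease-free state.
alpha, beta, gamma, v, delta, a1 = 0.95, 0.5, 0.5, 0.4, 0.6, 1.0
ba, ga, va, da, aa = (beta**alpha, gamma**alpha, v**alpha,
                      delta**alpha, a1**alpha)

F = np.array([[0.0, ba],
              [0.0, 0.0]])
V = np.array([[aa + ga, 0.0],
              [-va, da]])

# R0 is the spectral radius of the next-generation matrix F V^{-1}.
ngm = F @ np.linalg.inv(V)
R0_numeric = max(abs(np.linalg.eigvals(ngm)))
R0_closed = ba * va / (da * (aa + ga))
print(R0_numeric, R0_closed)
```

The numerically computed spectral radius matches the closed-form expression to machine precision.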

In what follows, we require the Jacobian associated to our system of fractional differential equations. Its determination is straightforward, and it can be readily checked that it is given by

$$J(s\_h, i\_h, i\_v) = \begin{bmatrix} -\beta^a i\_v - \alpha\_1^a & 0 & -\beta^a s\_h \\ \beta^a i\_v & -(\alpha\_1^a + \gamma^a) & \beta^a s\_h \\ 0 & \upsilon^a (1 - i\_v) & -\upsilon^a i\_h - \delta^a \end{bmatrix} \tag{23}$$

**Theorem 1.** *The disease-free steady-state E*<sup>0</sup> *is locally asymptotically stable when R*<sup>0</sup> < 1*.*

**Proof.** Let *I*<sup>3</sup> represent the identity matrix of size 3 × 3. In order to study the stability at the point *E*0(1, 0, 0), observe firstly that

$$|J(1,0,0) - \lambda I\_3| = \begin{vmatrix} -\alpha\_1^a - \lambda & 0 & -\beta^a \\ 0 & -(\alpha\_1^a + \gamma^a) - \lambda & \beta^a \\ 0 & \upsilon^a & -\delta^a - \lambda \end{vmatrix} = 0,\tag{24}$$

if and only if $\lambda = -\alpha\_1^a$ or $\lambda$ satisfies the quadratic equation

$$
\lambda^2 + (a\_1^a + \gamma^a + \delta^a)\lambda + \delta^a a\_1^a + \delta^a \gamma^a - \upsilon^a \beta^a = 0. \tag{25}
$$

By using Routh–Hurwitz criteria for second-order polynomials, we conclude that the system is locally asymptotically stable at *E*<sup>0</sup> if *R*<sup>0</sup> < 1.
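As a numerical sanity check of Theorem 1, one can evaluate the Jacobian (23) at $E\_0 = (1, 0, 0)$ for illustrative parameters with $R\_0 < 1$ (the $\alpha$-powers are folded into the quoted values here) and confirm that every eigenvalue has negative real part:

```python
import numpy as np

# Illustrative effective parameters (alpha-powers already applied), R0 < 1.
beta, gamma, v, delta, a1 = 0.5, 0.5, 0.4, 0.6, 1.0
R0 = beta * v / (delta * (a1 + gamma))

# Jacobian of systems (12)-(14) evaluated at the disease-free state (1, 0, 0).
J = np.array([[-a1,  0.0,          -beta],
              [0.0, -(a1 + gamma),  beta],
              [0.0,  v,            -delta]])
eigs = np.linalg.eigvals(J)
print(R0, eigs.real)
```

All real parts are negative, in agreement with the Routh–Hurwitz argument above.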

**Theorem 2.** *If R*<sup>0</sup> > 1*, then the system is locally asymptotically stable at E*1*.*

**Proof.** Proceeding as in the previous theorem, it follows that the characteristic equation associated to the Jacobian matrix at the equilibrium point is given by

$$
\lambda^3 + \lambda^2(-A - D - G) + \lambda(AD + AG + DG - EF) - ADG - BCF + AEF = 0,\tag{26}
$$

where $A = -\beta^a i\_v - \alpha\_1^a$, $B = -\beta^a s\_h$, $C = \beta^a i\_v$, $D = -(\alpha\_1^a + \gamma^a)$, $E = \beta^a s\_h$, $F = \upsilon^a(1 - i\_v)$ and $G = -\upsilon^a i\_h - \delta^a$. The conclusion readily follows now from the Routh–Hurwitz criterion for cubic polynomials.

The following lemma is provided to improve the global stability analysis of the system (12)–(14).

**Lemma 1** (Leon [31])**.** *Let <sup>x</sup>* : [0, <sup>∞</sup>) <sup>→</sup> <sup>R</sup><sup>+</sup> *be a continuous function, and let <sup>t</sup>*<sup>0</sup> <sup>≥</sup> <sup>0</sup>*. Then, for any time t* <sup>≥</sup> *<sup>t</sup>*0*, <sup>α</sup>* <sup>∈</sup> (0, 1) *and x*<sup>∗</sup> <sup>∈</sup> <sup>R</sup>+*, the following inequality holds:*

$$D^a \left[ \mathbf{x}(t) - \mathbf{x}^\* - \mathbf{x}^\* \ln \frac{\mathbf{x}(t)}{\mathbf{x}^\*} \right] \le \left( 1 - \frac{\mathbf{x}^\*}{\mathbf{x}(t)} \right) D^a \mathbf{x}(t). \tag{27}$$

We tackle now the global asymptotic stability of the system (12)–(14) at the equilibrium points.

**Theorem 3.** *If R*<sup>0</sup> < 1*, then the system is globally asymptotically stable at E*0*.*

**Proof.** Firstly, let us define the Lyapunov functional

$$
G = \left( s_h - s_{h_0} - s_{h_0} \log \frac{s_h}{s_{h_0}} \right) + i_h + i_v. \tag{28}
$$

Using Lemma 1 now, we obtain that

$$
\begin{split}
D_t^\alpha G &\le \left( \frac{s_h - s_{h_0}}{s_h} \right) D_t^\alpha s_h + D_t^\alpha i_h + D_t^\alpha i_v \\
&= \left( \frac{s_h - s_{h_0}}{s_h} \right) \left( 1 - \beta^\alpha s_h i_v - \alpha_1^\alpha s_h \right) + \beta^\alpha s_h i_v - (\alpha_1^\alpha + \gamma^\alpha) i_h + v^\alpha (1 - i_v) i_h - \delta^\alpha i_v \\
&= \frac{-(s_h - s_{h_0})^2}{s_h s_{h_0}} - (\alpha_1^\alpha + \gamma^\alpha) \left( i_h - \frac{\beta^\alpha s_{h_0} i_v}{\alpha_1^\alpha + \gamma^\alpha} \right) - \delta^\alpha \left( i_v - \frac{v^\alpha (1 - i_v) i_h}{\delta^\alpha} \right).
\end{split} \tag{29}
$$

Clearly, $D_t^\alpha G < 0$ if $R_0 < 1$. Meanwhile, $D_t^\alpha G = 0$ if $s_h = 1$, $i_h = 0$ and $i_v = 0$. We conclude that the system is globally asymptotically stable at the disease-free equilibrium point when $R_0 < 1$.

**Theorem 4.** *The system (12)–(14) is globally asymptotically stable at $E_1$ when $R_0 > 1$.*

**Proof.** The proof is similar to that of the previous theorem. In this case, we construct the Lyapunov functional at *E*<sup>1</sup> as

$$
G = \left( s_h - s_h^* - s_h^* \log \frac{s_h}{s_h^*} \right) + \left( i_h - i_h^* - i_h^* \log \frac{i_h}{i_h^*} \right) + \left( i_v - i_v^* - i_v^* \log \frac{i_v}{i_v^*} \right). \tag{30}
$$

Using Lemma 1 and proceeding as in the proof of the preceding theorem, it follows that

$$
\begin{split}
D_t^\alpha G &\le \left( \frac{s_h - s_h^*}{s_h} \right) D_t^\alpha s_h + \left( \frac{i_h - i_h^*}{i_h} \right) D_t^\alpha i_h + \left( \frac{i_v - i_v^*}{i_v} \right) D_t^\alpha i_v \\
&= -\frac{(s_h - s_h^*)^2}{s_h s_h^*} - \frac{\beta^\alpha s_h i_v (i_h - i_h^*)^2}{i_h i_h^*} - \frac{v^\alpha i_h (i_v - i_v^*)^2}{i_v i_v^*}.
\end{split} \tag{31}
$$

Observe that $D_t^\alpha G \le 0$ when $R_0 > 1$. Moreover, $D_t^\alpha G = 0$ if $s_h = s_h^*$, $i_h = i_h^*$ and $i_v = i_v^*$, which means that the system is globally asymptotically stable at the endemic equilibrium solution.

Before closing this section, we investigate the sensitivity of the parameters of the fractional epidemic model. To that end, we employ the derivative-based local method, taking the partial derivatives of the output with respect to the inputs. Let

$$
R_0 = \frac{\beta v}{(\alpha_1 + \gamma)\delta}. \tag{32}
$$

Observe that the following are satisfied:

$$
A_\beta = \frac{\beta}{R_0} \times \frac{\partial R_0}{\partial \beta} = 1 > 0, \tag{33}
$$

$$
A_v = \frac{v}{R_0} \times \frac{\partial R_0}{\partial v} = 1 > 0, \tag{34}
$$

$$
A_{\alpha_1} = \frac{\alpha_1}{R_0} \times \frac{\partial R_0}{\partial \alpha_1} = -\frac{\alpha_1}{\alpha_1 + \gamma} < 0, \tag{35}
$$

$$
A_\delta = \frac{\delta}{R_0} \times \frac{\partial R_0}{\partial \delta} = -1 < 0, \tag{36}
$$

$$
A_\gamma = \frac{\gamma}{R_0} \times \frac{\partial R_0}{\partial \gamma} = -\frac{\gamma}{\alpha_1 + \gamma} < 0. \tag{37}
$$

In conclusion, the sensitivity indices of *β* and *v* are positive (increasing either parameter increases the reproduction number), while the indices of the remaining parameters of the reproduction number are negative.
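As a sanity check, the normalized sensitivity indices (33)–(37) can be reproduced symbolically. The following snippet is our own illustration (the helper name `index` is not part of the paper), differentiating $R_0$ from (32) with SymPy:

```python
import sympy as sp

# Parameters of the basic reproduction number R_0 in (32).
beta, v, alpha_1, gamma, delta = sp.symbols('beta v alpha_1 gamma delta', positive=True)
R0 = beta * v / ((alpha_1 + gamma) * delta)

def index(p):
    """Normalized forward sensitivity index (p / R0) * dR0/dp."""
    return sp.simplify((p / R0) * sp.diff(R0, p))

print(index(beta))     # 1
print(index(v))        # 1
print(index(alpha_1))  # -alpha_1/(alpha_1 + gamma)
print(index(delta))    # -1
print(index(gamma))    # -gamma/(alpha_1 + gamma)
```

The output matches (33)–(37) term by term.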

#### **4. Numerical Model**

We present three generalized stochastic fractional techniques to solve the stochastic fractional-order system (15), namely, a Euler method, a Runge–Kutta method and a nonstandard finite-difference (NSFD) scheme. The first two are standard techniques which are well known in the literature [32,33]. The third is a new technique which is constructed using a non-local approach [34]. Throughout, Δ*t* represents the temporal step-size.

*Stochastic Euler method:*

$$
\begin{cases}
s_h^{n+1} = s_h^n + \dfrac{(\Delta t)^\alpha}{\Gamma(\alpha+1)} \left[ 1 - \beta^\alpha s_h^n i_v^n - \alpha_1^\alpha s_h^n + \sigma_1 \Delta B_1 s_h^n \right], \\[1ex]
i_h^{n+1} = i_h^n + \dfrac{(\Delta t)^\alpha}{\Gamma(\alpha+1)} \left[ \beta^\alpha s_h^n i_v^n - (\alpha_1^\alpha + \gamma^\alpha) i_h^n + \sigma_2 \Delta B_2 i_h^n \right], \\[1ex]
i_v^{n+1} = i_v^n + \dfrac{(\Delta t)^\alpha}{\Gamma(\alpha+1)} \left[ v^\alpha (1 - i_v^n) i_h^n - \delta^\alpha i_v^n + \sigma_3 \Delta B_3 i_v^n \right].
\end{cases} \tag{38}
$$
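For illustration, a direct implementation of scheme (38) might look as follows. This is a sketch under our own naming conventions (`stochastic_euler` and the argument names are not from the paper), with Brownian increments drawn as $\Delta B_i \sim \mathcal{N}(0, \Delta t)$:

```python
import numpy as np
from math import gamma as Gamma

def stochastic_euler(s0, i0, iv0, alpha, beta, alpha1, gam, v, delta,
                     sigma=(0.1, 0.1, 0.1), dt=0.1, steps=1000, rng=None):
    """Generalized stochastic fractional Euler scheme (38), sketch only."""
    rng = np.random.default_rng(rng)
    c = dt**alpha / Gamma(alpha + 1.0)        # fractional step factor
    sh, ih, iv = s0, i0, iv0
    out = np.empty((steps + 1, 3))
    out[0] = sh, ih, iv
    for n in range(steps):
        dB = rng.normal(0.0, np.sqrt(dt), 3)  # Brownian increments
        sh1 = sh + c * (1 - beta**alpha * sh * iv - alpha1**alpha * sh
                        + sigma[0] * dB[0] * sh)
        ih1 = ih + c * (beta**alpha * sh * iv - (alpha1**alpha + gam**alpha) * ih
                        + sigma[1] * dB[1] * ih)
        iv1 = iv + c * (v**alpha * (1 - iv) * ih - delta**alpha * iv
                        + sigma[2] * dB[2] * iv)
        sh, ih, iv = sh1, ih1, iv1
        out[n + 1] = sh, ih, iv
    return out
```

The parameter values in any call to this sketch would be illustrative placeholders, not those of Table 1; as discussed below, this scheme may diverge for large step-sizes.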

*Stochastic Runge–Kutta method:*

$$
\begin{cases}
\omega^{n+1} = \omega^n + \frac{1}{6} \left[ M_1 + 2M_2 + 2M_3 + M_4 \right], \\[1ex]
M_1 = (\Delta t)\,\phi(t^n, \omega^n) + (\Delta t)\,\sigma \Delta B\, \psi(t^n, \omega^n), \\[1ex]
M_2 = (\Delta t)\,\phi\!\left( t^n + \tfrac{1}{2}\Delta t, \omega^n + \tfrac{1}{2}M_1 \right) + (\Delta t)\,\sigma \Delta B\, \psi\!\left( t^n + \tfrac{1}{2}\Delta t, \omega^n + \tfrac{1}{2}M_1 \right), \\[1ex]
M_3 = (\Delta t)\,\phi\!\left( t^n + \tfrac{1}{2}\Delta t, \omega^n + \tfrac{1}{2}M_2 \right) + (\Delta t)\,\sigma \Delta B\, \psi\!\left( t^n + \tfrac{1}{2}\Delta t, \omega^n + \tfrac{1}{2}M_2 \right), \\[1ex]
M_4 = (\Delta t)\,\phi(t^n + \Delta t, \omega^n + M_3) + (\Delta t)\,\sigma \Delta B\, \psi(t^n + \Delta t, \omega^n + M_3).
\end{cases} \tag{39}
$$
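A single step of (39) can be sketched as below; the function name `srk4_step` and the drift/diffusion names `phi` and `psi` are our own notation. With the noise intensity σ set to zero, the scheme reduces to the classical RK4 method, which the final lines use as a check on the equation $w' = w$:

```python
import math

def srk4_step(t, w, dt, phi, psi, sigma, dB):
    """One step of the generalized stochastic Runge-Kutta scheme (39), sketch.
    The scaled Brownian increment sigma*dB is frozen over the step."""
    M1 = dt * phi(t, w) + dt * sigma * dB * psi(t, w)
    M2 = dt * phi(t + dt / 2, w + M1 / 2) + dt * sigma * dB * psi(t + dt / 2, w + M1 / 2)
    M3 = dt * phi(t + dt / 2, w + M2 / 2) + dt * sigma * dB * psi(t + dt / 2, w + M2 / 2)
    M4 = dt * phi(t + dt, w + M3) + dt * sigma * dB * psi(t + dt, w + M3)
    return w + (M1 + 2 * M2 + 2 * M3 + M4) / 6

# Deterministic check: with sigma = 0 and w' = w, one step of size 0.1
# should approximate exp(0.1) with fourth-order accuracy.
w1 = srk4_step(0.0, 1.0, 0.1, lambda t, w: w, lambda t, w: w, 0.0, 0.0)
```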

*NSFD method:*

$$
\begin{cases}
s_h^{n+1} = \dfrac{s_h^n + \frac{(\Delta t)^\alpha}{\Gamma(\alpha+1)} \left[ 1 + \sigma_1 \Delta B_1 s_h^n \right]}{1 + \frac{(\Delta t)^\alpha}{\Gamma(\alpha+1)} \left( \beta^\alpha i_v^n + \alpha_1^\alpha \right)}, \\[2ex]
i_h^{n+1} = \dfrac{i_h^n + \frac{(\Delta t)^\alpha}{\Gamma(\alpha+1)} \left[ \beta^\alpha s_h^n i_v^n + \sigma_2 \Delta B_2 i_h^n \right]}{1 + \frac{(\Delta t)^\alpha}{\Gamma(\alpha+1)} \left( \alpha_1^\alpha + \gamma^\alpha \right)}, \\[2ex]
i_v^{n+1} = \dfrac{i_v^n + \frac{(\Delta t)^\alpha}{\Gamma(\alpha+1)} \left[ v^\alpha i_h^n + \sigma_3 \Delta B_3 i_v^n \right]}{1 + \frac{(\Delta t)^\alpha}{\Gamma(\alpha+1)} \left( v^\alpha i_h^n + \delta^\alpha \right)}.
\end{cases} \tag{40}
$$
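The implicit, non-local structure of (40) is easier to see in code. The following sketch (the function name and argument conventions are ours) performs one NSFD step; for non-negative states and noise-free dynamics, every numerator and denominator is positive, which is the mechanism behind the positivity result below:

```python
from math import gamma as Gamma

def nsfd_step(sh, ih, iv, alpha, beta, alpha1, gam, v, delta, sigma, dB, dt):
    """One step of the stochastic fractional NSFD scheme (40), sketch.
    Negative feedback terms are moved to the denominators (non-local approach)."""
    c = dt**alpha / Gamma(alpha + 1.0)
    sh1 = (sh + c * (1 + sigma[0] * dB[0] * sh)) \
          / (1 + c * (beta**alpha * iv + alpha1**alpha))
    ih1 = (ih + c * (beta**alpha * sh * iv + sigma[1] * dB[1] * ih)) \
          / (1 + c * (alpha1**alpha + gam**alpha))
    iv1 = (iv + c * (v**alpha * ih + sigma[2] * dB[2] * iv)) \
          / (1 + c * (v**alpha * ih + delta**alpha))
    return sh1, ih1, iv1
```

Unlike the explicit schemes (38) and (39), the denominators grow with the step-size, so the update remains bounded even for large Δ*t*.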

Next, we establish the most important properties of the NSFD method.

**Theorem 5** (Positivity)**.** *The deterministic form of system* (40) *preserves the non-negativity of the solution.*

**Proof.** None of the equations of the system (40) contains a negative term. Consequently, if the initial conditions are non-negative, then the numerical solutions remain non-negative, as desired.

**Theorem 6** (Boundedness)**.** *Suppose that the initial data of* (40) *are non-negative. Then, there exists a constant $K(n, \alpha) \ge 0$ such that $s_h^n, i_h^n, i_v^n \in [0, K(n, \alpha)]$, for each $n \in \mathbb{N}$.*

**Proof.** By adding and rearranging the equations of the numerical model (40), we readily check that

$$
\begin{split}
& s_h^{n+1} + i_h^{n+1} + i_v^{n+1} \\
&\le s_h^{n+1} \left[ 1 + \frac{(\Delta t)^\alpha (\beta^\alpha i_v^n + \alpha_1^\alpha)}{\Gamma(\alpha+1)} \right] + i_h^{n+1} \left[ 1 + \frac{(\Delta t)^\alpha (\alpha_1^\alpha + \gamma^\alpha)}{\Gamma(\alpha+1)} \right] + i_v^{n+1} \left[ 1 + \frac{(\Delta t)^\alpha (v^\alpha i_h^n + \delta^\alpha)}{\Gamma(\alpha+1)} \right] \\
&= (s_h^n + i_h^n + i_v^n) + \frac{(\Delta t)^\alpha}{\Gamma(\alpha+1)} \left[ 1 + \sigma_1 \Delta B_1 s_h^n + \beta^\alpha s_h^n i_v^n + \sigma_2 \Delta B_2 i_h^n + v^\alpha i_h^n + \sigma_3 \Delta B_3 i_v^n \right].
\end{split} \tag{41}
$$

The proof is established using mathematical induction, letting *K*(*n* + 1, *α*) be the right-hand side of this chain of identities and inequalities.

Next, we examine the stability of the NSFD system (40).

**Definition 2** (Arenas et al. [21])**.** *The discrete system* (40) *is* asymptotically stable *if there exist constants $K_1$, $K_2$ and $K_3$ such that $s_h^{n+1} \le K_1$, $i_h^{n+1} \le K_2$ and $i_v^{n+1} \le K_3$ as $\alpha \to 1^-$.*

**Theorem 7.** *Under the hypotheses of Theorem 6, the system* (40) *is asymptotically stable.*

**Proof.** The conclusion of this result is a direct consequence of Theorem 6.

Before closing this section, we provide some numerical simulations for the stochastic fractional-order epidemic model (15). To that end, we fix the model parameters as given in Table 1 (see [26]). To start with, Figure 1 depicts the convergence behavior of each compartment of the model at the endemic equilibrium (EE). The behavior of the graphs is investigated for various values of *α*. Each graph follows a random path to reach the EE at the temporal step-size Δ*t* = 0.1. When the step-size is increased, the infected population may diverge at each value of the non-integer parameter. We conclude from this that the generalized stochastic Euler method fails to reflect the actual behavior of the disease dynamics.

**Table 1.** Model parameters employed in the simulations of this work. Here, DFE stands for diseasefree equilibrium, and EE for endemic equilibrium.


**Figure 1.** The graphical behavior of each sub-population is presented in the (**a**) numerical solution of *sh*, (**b**) numerical solution of *ih*, (**c**) numerical solution of *iv* and (**d**) numerical solution of *ih*, with different values of *α*, using the generalized fractional stochastic Euler method.

In a second experiment, we used the generalized stochastic Runge–Kutta method to solve the same problem as in the previous paragraph. The results are shown in Figure 2, which provides the convergence behavior of each compartment of the model at the endemic equilibrium (EE) for various values of *α*. When the step-size is increased above Δ*t* = 0.1, the infected population may diverge at each value of *α*. Again, we conclude that this method is not a reliable tool to reflect the actual behavior of the model. On the contrary, Figure 3 provides two runs (left and right columns) obtained by means of the generalized stochastic NSFD method. The results show that this technique converges to the equilibrium solution for each of the values of *α* considered, using step-sizes between Δ*t* = 0.1 and Δ*t* = 100, and at a low computational cost. In that sense, this method is more robust and reliable than the standard approaches used for comparison.

**Figure 2.** The graphical performance of each sub-population is presented in the (**a**) numerical solution of *sh*, (**b**) numerical solution of *ih*, (**c**) numerical solution of *iv* and (**d**) numerical solution of *ih*, with different values of *α*, using the generalized fractional stochastic Runge–Kutta method.


**Figure 3.** The graphical behavior of each sub-population is presented for two sets of numerical experiments (left and right columns) with different values of *α*, using the generalized fractional stochastic NSFD.

#### **5. Conclusions**

In this work, we departed from a fractional-order disease model and transformed it into a non-parametric perturbation stochastic model. A generalized stochastic fractional NSFD method was proposed and applied to solve the epidemic model under study. The proposed scheme preserves the positivity of the numerical solutions at each temporal step. The generalized stochastic fractional NSFD is also capable of preserving the boundedness of the approximations. We proved that the given system has two steady states, namely, a disease-free and an endemic steady state. Furthermore, the constraints under which the given system is locally and globally asymptotically stable were investigated. It is concluded that the system attains local and global stability at the disease-free state if *R*<sup>0</sup> < 1. In the same way, the role of *R*<sup>0</sup> when *R*<sup>0</sup> > 1 was studied for the endemic equilibrium. Two other methods (a generalized fractional Euler method and a generalized Runge–Kutta method) were also applied to compare the obtained results. The simulations showed that the proposed scheme is superior in terms of its capability to identify correctly the equilibrium solutions; in that sense, our present report investigated a structure-preserving technique [35–37] to solve a mathematical system in epidemiology. As a final comment, we would like to point out that the investigation of the stochastic system is justified by the fact that solutions exist for that model. Indeed, notice that the drift functions of this model are locally Lipschitz continuous, which implies that the solutions exist locally. The global existence follows an argument similar to that in [38]. We do not provide the details, as such a study is outside the scope of the present work.

**Author Contributions:** Conceptualization, N.A., J.E.M.-D., A.R., D.B., M.R., Z.I. and M.O.A.; data curation, N.A., J.E.M.-D., A.R., D.B., Z.I. and M.O.A.; formal analysis, N.A., J.E.M.-D., A.R., D.B. and M.R.; funding acquisition, J.E.M.-D.; investigation, N.A., J.E.M.-D., A.R., D.B., M.R. and Z.I.; methodology, N.A., J.E.M.-D., A.R., D.B., M.R. and M.O.A.; project administration, N.A. and J.E.M.-D.; resources, N.A., J.E.M.-D., A.R., D.B. and M.R.; software, N.A., J.E.M.-D., A.R., D.B. and Z.I.; supervision, N.A., J.E.M.-D., A.R., D.B., M.R. and M.O.A.; validation, N.A., J.E.M.-D., A.R., D.B., M.R., Z.I. and M.O.A.; visualization, N.A., J.E.M.-D. and D.B.; writing—original draft, N.A., J.E.M.-D., A.R., D.B., M.R., Z.I. and M.O.A.; writing—review and editing, N.A., J.E.M.-D., A.R., D.B., M.R., Z.I. and M.O.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** The corresponding author wishes to acknowledge the financial support from the National Council for Science and Technology of Mexico (CONACYT) through grant A1-S-45928.

**Data Availability Statement:** The data presented in this study are available on request from the corresponding author.

**Acknowledgments:** The authors wish to thank the guest editors for their kind invitation to submit a paper to the special issue of *Axioms MDPI* on "Fractional Calculus—Theory and Applications". They also wish to thank the anonymous reviewers for their comments and criticisms. All of their comments were taken into account in the revised version of the paper, resulting in a substantial improvement with respect to the original submission.

**Conflicts of Interest:** The authors declare no potential conflict of interest.

#### **References**


MDPI St. Alban-Anlage 66 4052 Basel Switzerland Tel. +41 61 683 77 34 Fax +41 61 302 89 18 www.mdpi.com

*Axioms* Editorial Office E-mail: axioms@mdpi.com www.mdpi.com/journal/axioms
