**Dynamical Systems in Engineering**

Editor

**Ioannis Dassios**

MDPI · Basel · Beijing · Wuhan · Barcelona · Belgrade · Manchester · Tokyo · Cluj · Tianjin

*Editor* Ioannis Dassios, FRESLIPS, University College Dublin, Dublin, Ireland

*Editorial Office* MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Mathematics* (ISSN 2227-7390) (available at: www.mdpi.com/journal/mathematics/special issues/Dynamical Systems in Engineering).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-7109-6 (Hbk) ISBN 978-3-0365-7108-9 (PDF)**

© 2023 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **About the Editor**

#### **Ioannis Dassios**

Ioannis Dassios is currently a UCD Research Fellow/Assistant Professor at University College Dublin, Ireland. His research interests include dynamical and control systems, dynamical networks, differential and difference equations, singular systems, systems of differential equations of fractional order, optimization methods, linear algebra, and mathematical modeling of engineering problems.

He studied Mathematics, completed a two-year M.Sc. in Applied Mathematics, and obtained his Ph.D. degree at the University of Athens, Greece, with the grade "Excellent" (the highest mark in the Greek system).

He has held positions at the University of Edinburgh, U.K.; the University of Manchester, U.K.; and the University of Limerick, Ireland.

He has published more than 110 articles, a book, and two edited books; has served as a reviewer more than 1100 times for more than 100 different peer-reviewed journals; and is a member of the editorial boards of several peer-reviewed journals (Mathematics and Computers in Simulation (MATCOM, Elsevier), Applied Sciences and Mathematics (MDPI), Open Physics (De Gruyter), Experimental Results (Cambridge University Press), etc.). He has also been Guest Editor of more than 15 Special Issues and has served on the organizing and technical committees of international conferences.

Finally, he has received several awards: grants for research results over the period 2016–2023 from the UCD Output-Based Research Support Scheme (OBRSS); the 325 Years of Fractional Calculus Award, in testimony of the high regard of his achievements in the area of fractional calculus and its applications; several travel support awards from MDPI, Basel, Switzerland; and the "Top 1% peer reviewer in Mathematics and Engineering" award from the Web of Science Group, Clarivate Analytics, for three consecutive years.

## *Article* **The Variational Iteration Transform Method for Solving the Time-Fractional Fornberg–Whitham Equation and Comparison with Decomposition Transform Method**

**Nehad Ali Shah <sup>1,2,</sup>\*, Ioannis Dassios <sup>3</sup>, Essam R. El-Zahar <sup>4,5</sup>, Jae Dong Chung <sup>6</sup> and Somaye Taherifar <sup>7</sup>**


**Abstract:** In this article, modified techniques, namely the variational iteration transform method and the Shehu decomposition method, are implemented to achieve an approximate analytical solution for the time-fractional Fornberg–Whitham equation. A comparison is made between the results of the variational iteration transform method and the Shehu decomposition method. The solution procedure reveals that the variational iteration transform method and the Shehu decomposition method are effective, reliable, and straightforward. The variational iteration transform method solves non-linear problems without using Adomian's polynomials or He's polynomials, which is a clear advantage over the decomposition technique. The solutions achieved are compared with the corresponding exact results to show the efficiency and accuracy of the existing methods in solving a wide variety of linear and non-linear problems arising in various areas of science.

**Keywords:** Fractional Fornberg–Whitham equation; variational iteration transform method; Shehu decomposition method; partial differential equation; approximate solution; Caputo's operator

#### **1. Introduction**

In recent decades, fractional calculus (FC) has been applied to several phenomena in physics, engineering, fluid mechanics, biology, and other applied sciences, which can be described very effectively using its mathematical tools. Fractional derivatives (FDs) provide an excellent tool for describing the hereditary and memory properties of different processes and materials. FDs occur in several engineering and science problems, such as diffusion and reaction processes, frequency-dependent signal processing and system identification, damping behaviour of materials, and relaxation and creep in viscoelastic materials [1–4].

The analysis of nonlinear wave equations and their solutions is of vital significance in several fields of science. Travelling waves are among the most attractive solutions of nonlinear fractional partial differential equations (FPDEs). Nonlinear FPDEs commonly model complex physical and mechanical processes. It is therefore of great significance to obtain exact solutions of nonlinear FPDEs, and travelling wave solutions are among the most interesting types of such solutions. Other nonlinear FPDEs, such as the Korteweg–de Vries and Camassa–Holm equations, are known to possess several travelling wave solutions. These are model equations for nonlinear multi-directional dispersive waves in shallow water [5,6].

**Citation:** Shah, N.A.; Dassios, I.; El-Zahar, E.R.; Chung, J.D.; Taherifar, S. The Variational Iteration Transform Method for Solving the Time-Fractional Fornberg–Whitham Equation and Comparison with Decomposition Transform Method. *Mathematics* **2021**, *9*, 141. https:// doi.org/10.3390/math9020141

Received: 7 December 2020 Accepted: 30 December 2020 Published: 11 January 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The study of the Fornberg–Whitham equation (FWE) is of great importance in many fields of mathematical physics. The Fornberg–Whitham equation [7,8] is given as

$$D_{\eta}\mu - D_{\xi\xi\eta}\mu + D_{\xi}\mu = \mu D_{\xi\xi\xi}\mu - \mu D_{\xi}\mu + 3D_{\xi}\mu\, D_{\xi\xi}\mu. \tag{1}$$

The qualitative behavior of wave breaking in a nonlinear dispersive wave equation appears in this study. The FWE has been shown to admit peakon solutions as a model for limiting wave heights and the occurrence of wave breaking. In 1978, Fornberg and Whitham obtained a peaked solution of the form $\mu(\xi,\eta) = C e^{-\frac{1}{2}\left|\xi - \frac{4}{3}\eta\right|}$, where $C$ is a constant. Tian and Zhou [2] identified implicit travelling wave solutions known as antikink-like and kink-like wave solutions. The analysis of FWEs has been carried out by different numerical and analytical methods, such as the Laplace decomposition technique [9], Lie symmetry analysis [10], the variational iteration technique [11], the differential transformation technique [12], the new iterative technique [13], the homotopy perturbation technique [14], and the homotopy analysis transform technique [15].

The variational iteration method was first developed by J. H. He and was successfully applied to autonomous ODEs in [16,17]. This technique has been demonstrated to be an effective method for solving many different types of problems. When it is combined with the Shehu transform, the modified technique is called the variational iteration transform method (VITM). Different types of ODEs and PDEs have been solved by VITM: for instance, this technique was used for solving linear fractional differential equations in [18] and was applied in [19] for solving nonlinear oscillator equations. As a benefit of VITM over the Adomian decomposition method, the former approach provides the solution of a problem without computing Adomian's polynomials. The scheme gives a solution over the whole domain, while mesh point methods [20] provide approximations only at mesh points. The method is also useful for obtaining an accurate approximation of the exact solution. G. Adomian is the American mathematician who introduced the Adomian decomposition method. It is based on seeking solutions in the form of a series and on decomposing the nonlinear operator into a sequence whose terms are computed recursively using Adomian polynomials [21]. This technique, modified with the Shehu transformation, is called the Shehu decomposition method; it has been applied to nonhomogeneous fractional differential equations [22–24].

The present manuscript is concerned with the analytical solution of the time-fractional Fornberg–Whitham equation, which has long been a topic of interest for researchers and mathematicians. Extending existing techniques to the fractional-order Fornberg–Whitham equation has been a challenging task, and several innovative techniques have been developed for it. In this regard, the current research work is a novel contribution towards the analytical solution of fractional-order Fornberg–Whitham equations. The present work is conducted in a simple and straightforward manner and achieves the analytical solutions of the targeted problems with a small amount of numerical calculation; the convergence of the proposed method is straightforward to establish. In conclusion, the proposed techniques constitute a useful contribution towards the analytical solution of fractional-order partial differential equations, which frequently arise in science and engineering.

This article has used the Shehu decomposition method and the variational iteration transform method to solve the fractional-order Fornberg–Whitham equation, including Caputo sense in the fractional derivative. The SDM and VITM obtain semi-analytic solutions in the form of series solutions. It simply improves the original problem lucidly, and so one can test the result with high accuracy and convergence.

The outline of this article is as follows. Section 2 discusses the basic definitions of the Shehu transform and fractional calculus. Sections 3 and 4 present the variational iteration transform method and the Shehu decomposition method, respectively. Section 5 applies the suggested schemes to two test examples of the fractional-order Fornberg–Whitham equation, and the final section concludes the work.

#### **2. Preliminaries Concepts**

In this section of the article, we present Caputo's fractional operator, which is used to analyse our proposed problem. In addition, we give the basic concepts of the Shehu transform, the inverse Shehu transform, and the Shehu transform of the nth derivative, for further analysis and investigation.

**Definition 1.** *The Riemann-Liouville fractional integral is given by [25,26]*

$$I_0^{\gamma} f(\eta) = \frac{1}{\Gamma(\gamma)} \int_0^{\eta} (\eta - s)^{\gamma-1} f(s)\, ds. \tag{2}$$

**Definition 2.** *The Caputo fractional-order derivative operator of f*(*η*) *is defined as [25,26]*

$$D_{\eta}^{\gamma} f(\eta) = \begin{cases} I^{m-\gamma} f^{(m)}(\eta), & m-1 < \gamma < m, \quad m \in \mathbb{N}, \\[4pt] \dfrac{d^m}{d\eta^m} f(\eta), & \gamma = m, \quad m \in \mathbb{N}. \end{cases} \tag{3}$$
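The interplay between Definitions 1 and 2 can be checked numerically. The following is a minimal Python sketch (not part of the original paper): for the illustrative choice $\gamma = 1/2$ and $f(\eta) = \eta^2$, it evaluates $D^{1/2}_\eta f = I^{1/2} f'$ through the Riemann–Liouville integral (2) and compares it with the closed form $2\eta^{3/2}/\Gamma(5/2)$. The function name `caputo_half` and the midpoint quadrature are assumptions of the sketch; the substitution $s = \eta(1 - t^2)$ removes the endpoint singularity of the kernel.

```python
from math import gamma, sqrt

def caputo_half(fprime, eta, n=4000):
    """Caputo derivative of order 1/2 at eta via the Riemann-Liouville integral
    I^{1/2} f'(eta) = (1/Gamma(1/2)) * int_0^eta (eta - s)^{-1/2} f'(s) ds.
    The substitution s = eta*(1 - t^2) maps the integral to a smooth one on [0, 1]."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h              # midpoint rule on [0, 1]
        total += fprime(eta * (1.0 - t * t))
    integral = 2.0 * sqrt(eta) * total * h
    return integral / gamma(0.5)

# check against the closed form D^{1/2} eta^2 = 2*eta^(3/2)/Gamma(5/2)
eta = 1.0
approx = caputo_half(lambda s: 2.0 * s, eta)   # f(eta) = eta^2, so f'(s) = 2s
exact = 2.0 * eta**1.5 / gamma(2.5)
print(approx, exact)
```

The agreement to several digits illustrates that the Caputo derivative of order $1/2 < \gamma < 1$ is simply the half-order integral of the classical first derivative.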

**Definition 3.** *The Shehu transform is a modern integral transform, similar to other integral transforms defined for functions of exponential order. We consider functions in the set A, defined by [23,24,27]*

$$A = \left\{ f(\eta) : \exists\, M, \rho_1, \rho_2 > 0,\ |f(\eta)| < M e^{\frac{|\eta|}{\rho_i}},\ \text{if}\ \eta \in [0, \infty) \right\}. \tag{4}$$

*The Shehu transform, denoted by S*(.)*, of a function f*(*η*) *is defined as*

$$S\{f(\eta)\} = F(s, u) = \int_0^{\infty} f(\eta)\, e^{\frac{-s\eta}{u}}\, d\eta, \quad \eta > 0, \ s > 0. \tag{5}$$

*If the Shehu transform of a function f*(*η*) *is F*(*s*, *u*)*, then f*(*η*) *is called the inverse of F*(*s*, *u*)*, which is given as*

$$S^{-1}\{F(s,u)\} = f(\eta), \ \text{for} \ \eta \ge 0, \quad S^{-1} \ \text{is the inverse Shehu transform.} \tag{6}$$

**Definition 4.** *Let f* (*m*) (*η*) *be the m-th order classical derivative of a function f*(*η*) ∈ *A; then its Shehu transform is given by the following formula [23,24,27]:*

$$S\left\{f^{(m)}(\eta)\right\} = \left(\frac{s}{u}\right)^m F(s,u) - \sum_{k=0}^{m-1} \left(\frac{s}{u}\right)^{m-k-1} f^{(k)}(0), \quad m \in \mathbb{N}. \tag{7}$$
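Property (7) is easy to verify symbolically for a concrete function. The sketch below (assuming sympy; the test function $f(\eta) = \eta^2$, which is of exponential order, is an arbitrary illustrative choice) checks the $m = 1$ case directly from the integral definition (5):

```python
import sympy as sp

eta, s, u = sp.symbols('eta s u', positive=True)

def shehu(f):
    """Shehu transform S{f}(s, u) = int_0^oo f(eta) * exp(-s*eta/u) d eta, Eq. (5)."""
    return sp.integrate(f * sp.exp(-s * eta / u), (eta, 0, sp.oo))

f = eta**2
lhs = shehu(sp.diff(f, eta))                 # S{f'} computed from the definition
rhs = (s / u) * shehu(f) - f.subs(eta, 0)    # (s/u) F(s,u) - f(0), Eq. (7) with m = 1
print(sp.simplify(lhs - rhs))
```

Here $F(s,u) = 2u^3/s^3$, and both sides reduce to $2u^2/s^2$, so the printed difference vanishes.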

**Definition 5.** *The Shehu transform of the fractional-order Caputo derivative is given by [23,24,27]*

$$S\left\{f^{(\gamma)}(\eta)\right\} = \left(\frac{s}{u}\right)^{\gamma} F(s,u) - \sum_{k=0}^{m-1} \left(\frac{s}{u}\right)^{\gamma-k-1} f^{(k)}(0), \quad m-1 < \gamma \le m. \tag{8}$$

#### **3. The Conceptualization of VITM**

This section discusses the VITM solution of FPDEs of the form

$$D\_{\eta}^{\gamma} \upsilon(\xi, \zeta, \eta) + \mathcal{G}(\xi, \zeta, \eta) + \mathcal{N}(\xi, \zeta, \eta) - \mathcal{P}(\xi, \zeta, \eta) = 0, \quad m - 1 < \gamma \le m,\tag{9}$$

with the initial condition

$$\nu(\xi, \zeta, 0) = g(\xi, \zeta), \tag{10}$$

where $D_{\eta}^{\gamma} = \frac{\partial^{\gamma}}{\partial \eta^{\gamma}}$ is the Caputo fractional derivative of order $\gamma$, $\mathcal{G}$ and $\mathcal{N}$ are the linear and non-linear terms, respectively, and $\mathcal{P}$ is the source term.

The Shehu transform is applied to Equation (9):

$$S[D_{\eta}^{\gamma}\nu(\xi,\zeta,\eta)] + S[\mathcal{G}(\xi,\zeta,\eta) + \mathcal{N}(\xi,\zeta,\eta) - \mathcal{P}(\xi,\zeta,\eta)] = 0. \tag{11}$$

Applying the differentiation property of the Shehu transform, we get

$$\frac{s^{\gamma}}{u^{\gamma}} S[\nu(\xi,\zeta,\eta)] - \frac{s^{\gamma-1}}{u^{\gamma}}\nu(\xi,\zeta,0) = -S\left[\mathcal{G}(\xi,\zeta,\eta) + \mathcal{N}(\xi,\zeta,\eta) - \mathcal{P}(\xi,\zeta,\eta)\right]. \tag{12}$$

The iterative scheme requires a Lagrange multiplier, giving

$$\begin{split} S[\nu_{j+1}(\xi,\zeta,\eta)] &= S[\nu_j(\xi,\zeta,\eta)] + \lambda(s)\left[\frac{s^{\gamma}}{u^{\gamma}} S[\nu_j(\xi,\zeta,\eta)] - \frac{s^{\gamma-1}}{u^{\gamma}}\nu_j(\xi,\zeta,0) \right. \\ &\quad \left. - S\{\mathcal{G}(\xi,\zeta,\eta) + \mathcal{N}(\xi,\zeta,\eta)\} - S[\mathcal{P}(\xi,\zeta,\eta)]\right]. \end{split} \tag{13}$$

with the Lagrange multiplier

$$
\lambda(\mathbf{s}) = -\frac{\mathfrak{u}^{\gamma}}{s^{\gamma}}.\tag{14}
$$

Using the inverse Shehu transform $S^{-1}$, Equation (13) can be written as

$$\nu_{j+1}(\xi,\zeta,\eta) = \nu_j(\xi,\zeta,\eta) - S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}}\left[-S\{\mathcal{G}(\xi,\zeta,\eta) + \mathcal{N}(\xi,\zeta,\eta)\} - S[\mathcal{P}(\xi,\zeta,\eta)]\right]\right], \tag{15}$$

and the initial iterate can be found as

$$\nu_0(\xi,\zeta,\eta) = S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}}\left\{\frac{s^{\gamma-1}}{u^{\gamma}}\nu(\xi,\zeta,0)\right\}\right]. \tag{16}$$

#### **4. The Conceptualization of SDM**

In this section, we discuss the SDM solution of FPDEs of the form

$$D\_{\eta}^{\gamma} \upsilon(\xi, \zeta, \eta) + \mathcal{G}(\xi, \zeta, \eta) + \mathcal{N}(\xi, \zeta, \eta) - \mathcal{P}(\xi, \zeta, \eta) = 0, \quad m - 1 < \gamma \le m,\tag{17}$$

with the initial condition

$$\nu(\xi, \zeta, 0) = g(\xi, \zeta), \tag{18}$$

where $D_{\eta}^{\gamma} = \frac{\partial^{\gamma}}{\partial \eta^{\gamma}}$ is the Caputo fractional derivative of order $\gamma$, $\mathcal{G}$ and $\mathcal{N}$ are the linear and non-linear terms, respectively, and $\mathcal{P}$ is the source term.

Applying the Shehu transform to Equation (17),

$$S[D\_{\eta}^{\gamma}\nu(\xi,\zeta,\eta)] + S[\mathcal{G}(\xi,\zeta,\eta) + \mathcal{N}(\xi,\zeta,\eta) - \mathcal{P}(\xi,\zeta,\eta)] = 0. \tag{19}$$

Applying the differentiation property of Shehu transform, we have

$$S[\nu(\xi,\zeta,\eta)] = \frac{1}{s}\nu(\xi,\zeta,0) + \frac{u^{\gamma}}{s^{\gamma}} S[\mathcal{P}(\xi,\zeta,\eta)] - \frac{u^{\gamma}}{s^{\gamma}} S\{\mathcal{G}(\xi,\zeta,\eta) + \mathcal{N}(\xi,\zeta,\eta)\}. \tag{20}$$

The SDM represents the solution $\nu(\xi, \zeta, \eta)$ as the infinite series

$$\nu(\xi,\zeta,\eta) = \sum_{j=0}^{\infty} \nu_j(\xi,\zeta,\eta). \tag{21}$$

The non-linear term $\mathcal{N}$ is given as

$$\mathcal{N}(\xi, \zeta, \eta) = \sum_{j=0}^{\infty} \mathcal{A}_j. \tag{22}$$

The non-linear term can be found with the help of Adomian polynomials, whose formula is defined as

$$\mathcal{A}_j = \frac{1}{j!} \left[ \frac{\partial^j}{\partial \lambda^j} \left\{ \mathcal{N}\left( \sum_{k=0}^{\infty} \lambda^k \nu_k \right) \right\} \right]_{\lambda=0}. \tag{23}$$
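Formula (23) can be implemented directly with symbolic differentiation. The following sympy sketch is illustrative, not taken from the paper: the helper `adomian` and the quadratic nonlinearity $\mathcal{N}(\nu) = \nu^2$ are assumptions, chosen because the resulting polynomials $\mathcal{A}_0 = \nu_0^2$, $\mathcal{A}_1 = 2\nu_0\nu_1$, $\mathcal{A}_2 = \nu_1^2 + 2\nu_0\nu_2$ are well known.

```python
import sympy as sp

lam = sp.symbols('lambda')
v = sp.symbols('v0:4')              # placeholder symbols for the components v_0..v_3

def adomian(N, order):
    """Adomian polynomials A_0..A_order for a nonlinearity N, via Eq. (23):
    A_j = (1/j!) * d^j/d lambda^j N(sum_k lambda^k v_k) evaluated at lambda = 0."""
    series = sum(lam**k * v[k] for k in range(order + 1))
    return [sp.expand(sp.diff(N(series), lam, j).subs(lam, 0) / sp.factorial(j))
            for j in range(order + 1)]

A = adomian(lambda x: x**2, 2)      # quadratic nonlinearity N(v) = v^2
print(A)
```

For product nonlinearities such as $\nu D_{\xi}\nu$, one would let the components $\nu_k$ be functions of $\xi$; the recursion itself is unchanged.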

Substituting Equations (21) and (22) into (20) gives

$$S\left[\sum_{j=0}^{\infty}\nu_j(\xi,\zeta,\eta)\right] = \frac{1}{s}\nu(\xi,\zeta,0) + \frac{u^{\gamma}}{s^{\gamma}} S\{\mathcal{P}(\xi,\zeta,\eta)\} - \frac{u^{\gamma}}{s^{\gamma}} S\left\{\mathcal{G}\left(\sum_{j=0}^{\infty}\nu_j\right) + \sum_{j=0}^{\infty}\mathcal{A}_j\right\}. \tag{24}$$

Applying the inverse Shehu transform to Equation (24),

$$\sum_{j=0}^{\infty}\nu_j(\xi,\zeta,\eta) = S^{-1}\left[\frac{1}{s}\nu(\xi,\zeta,0) + \frac{u^{\gamma}}{s^{\gamma}} S\{\mathcal{P}(\xi,\zeta,\eta)\} - \frac{u^{\gamma}}{s^{\gamma}} S\left\{\mathcal{G}\left(\sum_{j=0}^{\infty}\nu_j\right) + \sum_{j=0}^{\infty}\mathcal{A}_j\right\}\right]. \tag{25}$$

The following terms are identified:

$$\nu_0(\xi,\zeta,\eta) = S^{-1}\left[\frac{1}{s}\nu(\xi,\zeta,0) + \frac{u^{\gamma}}{s^{\gamma}} S\{\mathcal{P}(\xi,\zeta,\eta)\}\right], \tag{26}$$

$$\nu_1(\xi,\zeta,\eta) = -S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\{\mathcal{G}(\nu_0) + \mathcal{A}_0\}\right].$$

In general, for $j \ge 1$, the recursion is defined as

$$\nu_{j+1}(\xi,\zeta,\eta) = -S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\{\mathcal{G}(\nu_j) + \mathcal{A}_j\}\right].$$

#### **5. Implementation of Techniques**

**Example 1.** *Consider the following fractional-order nonlinear Fornberg–Whitham equation:*

$$D_{\eta}^{\gamma}\nu - D_{\xi\xi\eta}\nu + D_{\xi}\nu = \nu D_{\xi\xi\xi}\nu - \nu D_{\xi}\nu + 3D_{\xi}\nu\, D_{\xi\xi}\nu, \quad 0 < \gamma \le 1, \tag{27}$$

*with the initial condition*

$$\nu(\xi,0) = e^{\left(\frac{\xi}{2}\right)}.\tag{28}$$

*Taking the Shehu transform of (27),*

$$\frac{s^{\gamma}}{u^{\gamma}} S[\nu(\xi,\eta)] - \frac{s^{\gamma-1}}{u^{\gamma}}\nu(\xi,0) = S\left[D_{\xi\xi\eta}\nu - D_{\xi}\nu + \nu D_{\xi\xi\xi}\nu - \nu D_{\xi}\nu + 3D_{\xi}\nu\, D_{\xi\xi}\nu\right].$$

*Applying the inverse Shehu transform,*

$$\nu(\xi,\eta) = S^{-1}\left[\frac{\nu(\xi,0)}{s} + \frac{u^{\gamma}}{s^{\gamma}} S\left[D_{\xi\xi\eta}\nu - D_{\xi}\nu + \nu D_{\xi\xi\xi}\nu - \nu D_{\xi}\nu + 3D_{\xi}\nu\, D_{\xi\xi}\nu\right]\right].$$

*Using the ADM procedure, we get*

$$\nu_0(\xi,\eta) = S^{-1}\left[\frac{\nu(\xi,0)}{s}\right] = S^{-1}\left[\frac{e^{\frac{\xi}{2}}}{s}\right],$$

$$\nu_0(\xi,\eta) = e^{\frac{\xi}{2}}, \tag{29}$$

$$\sum_{j=0}^{\infty}\nu_{j+1}(\xi,\eta) = S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left[\sum_{j=0}^{\infty}(D_{\xi\xi\eta}\nu)_j - \sum_{j=0}^{\infty}(D_{\xi}\nu)_j + \sum_{j=0}^{\infty}A_j - \sum_{j=0}^{\infty}B_j + 3\sum_{j=0}^{\infty}C_j\right]\right], \quad j = 0, 1, 2, \cdots$$

$$\begin{split} A_0(\nu D_{\xi\xi\xi}\nu) &= \nu_0 D_{\xi\xi\xi}\nu_0, \\ A_1(\nu D_{\xi\xi\xi}\nu) &= \nu_0 D_{\xi\xi\xi}\nu_1 + \nu_1 D_{\xi\xi\xi}\nu_0, \\ A_2(\nu D_{\xi\xi\xi}\nu) &= \nu_0 D_{\xi\xi\xi}\nu_2 + \nu_1 D_{\xi\xi\xi}\nu_1 + \nu_2 D_{\xi\xi\xi}\nu_0, \\ B_0(\nu D_{\xi}\nu) &= \nu_0 D_{\xi}\nu_0, \\ B_1(\nu D_{\xi}\nu) &= \nu_0 D_{\xi}\nu_1 + \nu_1 D_{\xi}\nu_0, \\ B_2(\nu D_{\xi}\nu) &= \nu_0 D_{\xi}\nu_2 + \nu_1 D_{\xi}\nu_1 + \nu_2 D_{\xi}\nu_0, \\ C_0(D_{\xi}\nu D_{\xi\xi}\nu) &= D_{\xi}\nu_0 D_{\xi\xi}\nu_0, \\ C_1(D_{\xi}\nu D_{\xi\xi}\nu) &= D_{\xi}\nu_0 D_{\xi\xi}\nu_1 + D_{\xi}\nu_1 D_{\xi\xi}\nu_0, \\ C_2(D_{\xi}\nu D_{\xi\xi}\nu) &= D_{\xi}\nu_0 D_{\xi\xi}\nu_2 + D_{\xi}\nu_1 D_{\xi\xi}\nu_1 + D_{\xi}\nu_2 D_{\xi\xi}\nu_0, \end{split}$$

*for j* = 0

$$\begin{split} \nu_1(\xi,\eta) &= S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left[D_{\xi\xi\eta}\nu_0 - D_{\xi}\nu_0 + A_0 - B_0 + 3C_0\right]\right], \\ \nu_1(\xi,\eta) &= -\frac{1}{2} S^{-1}\left[\frac{u^{\gamma} e^{\frac{\xi}{2}}}{s^{\gamma+1}}\right] = -\frac{1}{2} e^{\frac{\xi}{2}}\frac{\eta^{\gamma}}{\Gamma(\gamma+1)}. \end{split} \tag{30}$$
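This first iterate can be reproduced symbolically. Since $\nu_0 = e^{\xi/2}$ does not depend on $\eta$, the term $D_{\xi\xi\eta}\nu_0$ vanishes, and (under the assumption used throughout the paper) the operator $S^{-1}[(u^\gamma/s^\gamma) S\{\cdot\}]$ acts on an $\eta$-independent quantity as the fractional integral, multiplying it by $\eta^\gamma/\Gamma(\gamma+1)$. A sympy sketch under these assumptions:

```python
import sympy as sp

xi, eta, gam = sp.symbols('xi eta gamma', positive=True)
v0 = sp.exp(xi / 2)

# spatial part of the j = 0 source term in Eq. (30); D_{xi xi eta} v0 = 0
A0 = v0 * sp.diff(v0, xi, 3)                # v0 * D_{xi xi xi} v0
B0 = v0 * sp.diff(v0, xi)                   # v0 * D_xi v0
C0 = sp.diff(v0, xi) * sp.diff(v0, xi, 2)   # D_xi v0 * D_{xi xi} v0
bracket = -sp.diff(v0, xi) + A0 - B0 + 3 * C0
print(sp.simplify(bracket))                 # collapses to -exp(xi/2)/2

# fractional integration of an eta-independent term: I^gamma{c} = c*eta^gamma/Gamma(gamma+1)
v1 = sp.simplify(bracket) * eta**gam / sp.gamma(gam + 1)
```

The exponential-squared terms $A_0 - B_0 + 3C_0 = (\tfrac{1}{8} - \tfrac{1}{2} + \tfrac{3}{8})e^{\xi}$ cancel exactly, leaving $-\tfrac{1}{2}e^{\xi/2}$ and hence the stated $\nu_1$.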

*for j* = 1

$$\begin{split} \nu_2(\xi,\eta) &= S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left[D_{\xi\xi\eta}\nu_1 - D_{\xi}\nu_1 + A_1 - B_1 + 3C_1\right]\right], \\ \nu_2(\xi,\eta) &= -\frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{2\gamma-1}}{\Gamma(2\gamma)} + \frac{1}{4} e^{\frac{\xi}{2}}\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)}. \end{split} \tag{31}$$

*for j* = 2

$$\begin{split} \nu_3(\xi,\eta) &= S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left[D_{\xi\xi\eta}\nu_2 - D_{\xi}\nu_2 + A_2 - B_2 + 3C_2\right]\right], \\ \nu_3(\xi,\eta) &= -\frac{1}{32} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma-2}}{\Gamma(3\gamma-1)} + \frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma-1}}{\Gamma(3\gamma)} - \frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma}}{\Gamma(3\gamma+1)}. \end{split} \tag{32}$$

*The SDM solution for example (1) is*

$$\nu(\xi,\eta) = \nu_0(\xi,\eta) + \nu_1(\xi,\eta) + \nu_2(\xi,\eta) + \nu_3(\xi,\eta) + \nu_4(\xi,\eta) + \cdots,$$

$$\begin{split} \nu(\xi,\eta) &= e^{\frac{\xi}{2}} - \frac{1}{2} e^{\frac{\xi}{2}}\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} - \frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{2\gamma-1}}{\Gamma(2\gamma)} + \frac{1}{4} e^{\frac{\xi}{2}}\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)} - \frac{1}{32} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma-2}}{\Gamma(3\gamma-1)} \\ &\quad + \frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma-1}}{\Gamma(3\gamma)} - \frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma}}{\Gamma(3\gamma+1)} - \cdots. \end{split} \tag{33}$$

*The simplification of Equation (33)*

$$\nu(\xi,\eta) = e^{\frac{\xi}{2}}\left[1 - \frac{1}{2}\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} - \frac{1}{8}\frac{\eta^{2\gamma-1}}{\Gamma(2\gamma)} + \frac{1}{4}\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)} - \frac{1}{32}\frac{\eta^{3\gamma-2}}{\Gamma(3\gamma-1)} + \frac{1}{8}\frac{\eta^{3\gamma-1}}{\Gamma(3\gamma)} - \frac{1}{8}\frac{\eta^{3\gamma}}{\Gamma(3\gamma+1)} - \cdots\right]. \tag{34}$$

*Next, we obtain the approximate solution by VITM. The iteration formula for Equation (27) is*

$$\nu_{j+1}(\xi,\eta) = \nu_j(\xi,\eta) - S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left\{\frac{s^{\gamma}}{u^{\gamma}} D_{\eta}\nu_j - D_{\xi\xi\eta}\nu_j + D_{\xi}\nu_j - \nu_j D_{\xi\xi\xi}\nu_j + \nu_j D_{\xi}\nu_j - 3D_{\xi}\nu_j D_{\xi\xi}\nu_j\right\}\right], \tag{35}$$

*where*

$$\nu_0(\xi, \eta) = e^{\frac{\xi}{2}}. \tag{36}$$

*For j* = 0, 1, 2, · · ·

$$\begin{split} \nu_1(\xi,\eta) &= \nu_0(\xi,\eta) - S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left\{\frac{s^{\gamma}}{u^{\gamma}} D_{\eta}\nu_0 - D_{\xi\xi\eta}\nu_0 + D_{\xi}\nu_0 - \nu_0 D_{\xi\xi\xi}\nu_0 + \nu_0 D_{\xi}\nu_0 - 3D_{\xi}\nu_0 D_{\xi\xi}\nu_0\right\}\right], \\ \nu_1(\xi,\eta) &= -\frac{1}{2} e^{\frac{\xi}{2}}\frac{\eta^{\gamma}}{\Gamma(\gamma+1)}, \end{split} \tag{37}$$

$$\begin{split} \nu_2(\xi,\eta) &= \nu_1(\xi,\eta) - S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left\{\frac{s^{\gamma}}{u^{\gamma}} D_{\eta}\nu_1 - D_{\xi\xi\eta}\nu_1 + D_{\xi}\nu_1 - \nu_1 D_{\xi\xi\xi}\nu_1 + \nu_1 D_{\xi}\nu_1 - 3D_{\xi}\nu_1 D_{\xi\xi}\nu_1\right\}\right], \\ \nu_2(\xi,\eta) &= -\frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{2\gamma-1}}{\Gamma(2\gamma)} + \frac{1}{4} e^{\frac{\xi}{2}}\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)}, \end{split} \tag{38}$$

$$\begin{split} \nu_3(\xi,\eta) &= \nu_2(\xi,\eta) - S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left\{\frac{s^{\gamma}}{u^{\gamma}} D_{\eta}\nu_2 - D_{\xi\xi\eta}\nu_2 + D_{\xi}\nu_2 - \nu_2 D_{\xi\xi\xi}\nu_2 + \nu_2 D_{\xi}\nu_2 - 3D_{\xi}\nu_2 D_{\xi\xi}\nu_2\right\}\right], \\ \nu_3(\xi,\eta) &= -\frac{1}{32} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma-2}}{\Gamma(3\gamma-1)} + \frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma-1}}{\Gamma(3\gamma)} - \frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma}}{\Gamma(3\gamma+1)}, \end{split} \tag{39}$$

$$\begin{split} \nu(\xi,\eta) = \sum_{m=0}^{\infty}\nu_m(\xi,\eta) &= e^{\frac{\xi}{2}} - \frac{1}{2} e^{\frac{\xi}{2}}\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} - \frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{2\gamma-1}}{\Gamma(2\gamma)} + \frac{1}{4} e^{\frac{\xi}{2}}\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)} \\ &\quad - \frac{1}{32} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma-2}}{\Gamma(3\gamma-1)} + \frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma-1}}{\Gamma(3\gamma)} - \frac{1}{8} e^{\frac{\xi}{2}}\frac{\eta^{3\gamma}}{\Gamma(3\gamma+1)} - \cdots. \end{split} \tag{40}$$

*The exact solution of Equation (27) at γ* = 1 *is*

$$\nu(\xi, \eta) = e^{\left(\frac{\xi}{2} - \frac{2\eta}{3}\right)}. \tag{41}$$
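At $\gamma = 1$, the truncated series (34) can be compared numerically with the exact solution (41). The following is a minimal sketch (seven series terms, one sample point; the function names are illustrative, not from the paper):

```python
from math import exp, gamma as G

def series_solution(xi, eta, g):
    """The seven retained terms of the truncated series (34), fractional order g."""
    bracket = (1.0
               - eta**g / (2 * G(g + 1))
               - eta**(2 * g - 1) / (8 * G(2 * g))
               + eta**(2 * g) / (4 * G(2 * g + 1))
               - eta**(3 * g - 2) / (32 * G(3 * g - 1))
               + eta**(3 * g - 1) / (8 * G(3 * g))
               - eta**(3 * g) / (8 * G(3 * g + 1)))
    return exp(xi / 2) * bracket

def exact_solution(xi, eta):
    """Closed-form solution (41), valid at g = 1."""
    return exp(xi / 2 - 2 * eta / 3)

approx = series_solution(0.0, 0.1, 1.0)
err = abs(approx - exact_solution(0.0, 0.1))
print(approx, err)
```

At $\xi = 0$, $\eta = 0.1$, the seven-term partial sum already agrees with the exact value to roughly three decimal places; retaining more series terms shrinks the gap further.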

**Example 2.** *Consider the following fractional-order nonlinear Fornberg–Whitham equation:*

$$D_{\eta}^{\gamma}\nu - D_{\xi\xi\eta}\nu + D_{\xi}\nu = \nu D_{\xi\xi\xi}\nu - \nu D_{\xi}\nu + 3D_{\xi}\nu\, D_{\xi\xi}\nu, \quad \eta > 0, \quad 0 < \gamma \le 1, \tag{42}$$

*with the initial condition*

$$\nu(\xi,0) = \cosh^2\left(\frac{\xi}{4}\right). \tag{43}$$

*Taking the Shehu transform of (42),*

$$\frac{s^{\gamma}}{u^{\gamma}} S[\nu(\xi,\eta)] - \frac{s^{\gamma-1}}{u^{\gamma}}\nu(\xi,0) = S\left[D_{\xi\xi\eta}\nu - D_{\xi}\nu + \nu D_{\xi\xi\xi}\nu - \nu D_{\xi}\nu + 3D_{\xi}\nu\, D_{\xi\xi}\nu\right].$$

*Applying the inverse Shehu transform,*

$$\nu(\xi,\eta) = S^{-1}\left[\frac{\nu(\xi,0)}{s} + \frac{u^{\gamma}}{s^{\gamma}} S\left\{D_{\xi\xi\eta}\nu - D_{\xi}\nu + \nu D_{\xi\xi\xi}\nu - \nu D_{\xi}\nu + 3D_{\xi}\nu\, D_{\xi\xi}\nu\right\}\right].$$

*Using ADM procedure, we get*

$$\nu_0(\xi,\eta) = S^{-1}\left[\frac{\nu(\xi,0)}{s}\right] = S^{-1}\left[\frac{\cosh^2\left(\frac{\xi}{4}\right)}{s}\right],$$

$$\nu_0(\xi,\eta) = \cosh^2\left(\frac{\xi}{4}\right), \tag{44}$$

$$\sum_{j=0}^{\infty}\nu_{j+1}(\xi,\eta) = S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left[\sum_{j=0}^{\infty}(D_{\xi\xi\eta}\nu)_j - \sum_{j=0}^{\infty}(D_{\xi}\nu)_j + \sum_{j=0}^{\infty}A_j - \sum_{j=0}^{\infty}B_j + 3\sum_{j=0}^{\infty}C_j\right]\right], \quad j = 0, 1, 2, \cdots$$
 
*for j* = 0

$$\begin{split} \nu_1(\xi,\eta) &= S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left[D_{\xi\xi\eta}\nu_0 - D_{\xi}\nu_0 + A_0 - B_0 + 3C_0\right]\right], \\ \nu_1(\xi,\eta) &= -\frac{11}{32} S^{-1}\left[\frac{u^{\gamma}\sinh\left(\frac{\xi}{2}\right)}{s^{\gamma+1}}\right] = -\frac{11}{32}\sinh\left(\frac{\xi}{2}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)}. \end{split} \tag{45}$$
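The spatial bracket of this first iterate can be verified symbolically, confirming that it collapses to $-\frac{11}{32}\sinh(\xi/2)$ (note the argument $\xi/2$, which arises because $D_\xi \cosh^2(\xi/4) = \frac{1}{4}\sinh(\xi/2)$). A sympy sketch, using the same Adomian terms $A_0$, $B_0$, $C_0$ as above:

```python
import sympy as sp

xi = sp.symbols('xi', real=True)
v0 = sp.cosh(xi / 4)**2                     # initial condition (43)

# spatial part of the j = 0 source term; D_{xi xi eta} v0 = 0
A0 = v0 * sp.diff(v0, xi, 3)                # v0 * D_{xi xi xi} v0
B0 = v0 * sp.diff(v0, xi)                   # v0 * D_xi v0
C0 = sp.diff(v0, xi) * sp.diff(v0, xi, 2)   # D_xi v0 * D_{xi xi} v0
bracket = -sp.diff(v0, xi) + A0 - B0 + 3 * C0

# rewriting in exponentials exposes the exact cancellation
residual = sp.expand((bracket + sp.Rational(11, 32) * sp.sinh(xi / 2)).rewrite(sp.exp))
print(residual)
```

The $\sinh\cosh$ cross-terms cancel with coefficient $\frac{1}{32} - \frac{4}{32} + \frac{3}{32} = 0$, leaving the pure $\sinh(\xi/2)$ contribution.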

*for j* = 1

$$\begin{split} \nu_2(\xi,\eta) &= S^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}} S\left[D_{\xi\xi\eta}\nu_1 - D_{\xi}\nu_1 + A_1 - B_1 + 3C_1\right]\right], \\ \nu_2(\xi,\eta) &= -\frac{11}{28}\sinh\left(\frac{\xi}{2}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} + \frac{121}{1024}\cosh\left(\frac{\xi}{2}\right)\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)}, \end{split} \tag{46}$$

*for j* = 2

$$\begin{split} \nu_3(\xi,\eta) &= \mathcal{S}^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}}\mathcal{S}\left[D_{\xi\xi\eta}\nu_2 - D_{\xi}\nu_2 + A_2 - B_2 + 3C_2\right]\right], \\ \nu_3(\xi,\eta) &= -\frac{11}{512}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} + \frac{121}{2048}\cosh\left(\frac{\xi}{4}\right)\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)} - \frac{1331}{49152}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{3\gamma}}{\Gamma(3\gamma+1)}. \end{split}\tag{47}$$

*The SDM solution for example 2 is*

$$\nu(\xi, \eta) = \nu_0(\xi, \eta) + \nu_1(\xi, \eta) + \nu_2(\xi, \eta) + \nu_3(\xi, \eta) + \nu_4(\xi, \eta) + \cdots$$

$$\begin{split} \nu(\xi,\eta) &= \cosh^2\left(\frac{\xi}{4}\right) - \frac{11}{32}\sinh\left(\frac{\xi}{4}\right) \frac{\eta^{\gamma}}{\Gamma(\gamma+1)} - \frac{11}{28}\sinh\left(\frac{\xi}{4}\right) \frac{\eta^{\gamma}}{\Gamma(\gamma+1)} + \frac{121}{1024}\cosh\left(\frac{\xi}{4}\right) \frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)} \\ &- \frac{11}{512}\sinh\left(\frac{\xi}{4}\right) \frac{\eta^{\gamma}}{\Gamma(\gamma+1)} + \frac{121}{2048}\cosh\left(\frac{\xi}{4}\right) \frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)} - \frac{1331}{49152}\sinh\left(\frac{\xi}{4}\right) \frac{\eta^{3\gamma}}{\Gamma(3\gamma+1)} \cdots \ . \end{split} \tag{48}$$

*The approximate solution by VITM. Using the iteration formula for Equation (42), we have*

$$\nu_{m+1}(\xi,\eta) = \nu_m(\xi,\eta) - \mathcal{S}^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}}\mathcal{S}\left\{\frac{s^{\gamma}}{u^{\gamma}} D_{\eta}\nu_m - D_{\xi\xi\eta}\nu_m + D_{\xi}\nu_m - \nu_m D_{\xi\xi\xi}\nu_m + \nu_m D_{\xi}\nu_m - 3 D_{\xi}\nu_m D_{\xi\xi}\nu_m\right\}\right],\tag{49}$$

*where*

$$\nu_0(\xi, \eta) = \cosh^2\left(\frac{\xi}{4}\right).\tag{50}$$

For *m* = 0, 1, 2, ⋯

$$\begin{split} \nu_1(\xi,\eta) &= \nu_0(\xi,\eta) - \mathcal{S}^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}}\mathcal{S}\left\{\frac{s^{\gamma}}{u^{\gamma}} D_{\eta}\nu_0 - D_{\xi\xi\eta}\nu_0 + D_{\xi}\nu_0 - \nu_0 D_{\xi\xi\xi}\nu_0 + \nu_0 D_{\xi}\nu_0 - 3 D_{\xi}\nu_0 D_{\xi\xi}\nu_0\right\}\right], \\ \nu_1(\xi,\eta) &= \cosh^2\left(\frac{\xi}{4}\right) - \frac{11}{32}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)}, \end{split}\tag{51}$$

$$\begin{split} \nu_2(\xi,\eta) &= \nu_1(\xi,\eta) - \mathcal{S}^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}}\mathcal{S}\left\{\frac{s^{\gamma}}{u^{\gamma}} D_{\eta}\nu_1 - D_{\xi\xi\eta}\nu_1 + D_{\xi}\nu_1 - \nu_1 D_{\xi\xi\xi}\nu_1 + \nu_1 D_{\xi}\nu_1 - 3 D_{\xi}\nu_1 D_{\xi\xi}\nu_1\right\}\right], \\ \nu_2(\xi,\eta) &= \cosh^2\left(\frac{\xi}{4}\right) - \frac{11}{32}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} - \frac{11}{28}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} + \frac{121}{1024}\cosh\left(\frac{\xi}{4}\right)\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)}. \end{split}\tag{52}$$

$$\begin{split} \nu_3(\xi,\eta) &= \nu_2(\xi,\eta) - \mathcal{S}^{-1}\left[\frac{u^{\gamma}}{s^{\gamma}}\mathcal{S}\left\{\frac{s^{\gamma}}{u^{\gamma}} D_{\eta}\nu_2 - D_{\xi\xi\eta}\nu_2 + D_{\xi}\nu_2 - \nu_2 D_{\xi\xi\xi}\nu_2 + \nu_2 D_{\xi}\nu_2 - 3 D_{\xi}\nu_2 D_{\xi\xi}\nu_2\right\}\right], \\ \nu_3(\xi,\eta) &= \cosh^2\left(\frac{\xi}{4}\right) - \frac{11}{32}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} - \frac{11}{28}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} + \frac{121}{1024}\cosh\left(\frac{\xi}{4}\right)\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)} \\ &\quad - \frac{11}{512}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} + \frac{121}{2048}\cosh\left(\frac{\xi}{4}\right)\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)} - \frac{1331}{49152}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{3\gamma}}{\Gamma(3\gamma+1)}, \end{split}\tag{53}$$

$$\begin{split} \nu(\xi,\eta) = \sum_{m=0}^{\infty}\nu_m(\xi,\eta) &= \cosh^2\left(\frac{\xi}{4}\right) - \frac{11}{32}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} - \frac{11}{28}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} + \frac{121}{1024}\cosh\left(\frac{\xi}{4}\right)\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)} \\ &\quad - \frac{11}{512}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{\gamma}}{\Gamma(\gamma+1)} + \frac{121}{2048}\cosh\left(\frac{\xi}{4}\right)\frac{\eta^{2\gamma}}{\Gamma(2\gamma+1)} - \frac{1331}{49152}\sinh\left(\frac{\xi}{4}\right)\frac{\eta^{3\gamma}}{\Gamma(3\gamma+1)} - \cdots. \end{split}\tag{54}$$

*The exact solution of Equation (42) at γ* = 1 *is*

$$\nu(\xi,\eta) = \cosh^2\left(\frac{\xi}{4} - \frac{11\eta}{24}\right). \tag{55}$$
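As a consistency check (not part of the original derivation), the closed form (55) can be verified symbolically against the integer-order Fornberg–Whitham equation, whose operators can be read off from the iteration formula (49); a minimal sympy sketch:

```python
import sympy as sp

xi, eta = sp.symbols('xi eta', real=True)

# Exact solution (55) at gamma = 1
nu = sp.cosh(xi / 4 - 11 * eta / 24) ** 2

# Residual of the operators appearing in (49):
# D_eta nu - D_xixieta nu + D_xi nu - nu D_xixixi nu + nu D_xi nu - 3 D_xi nu D_xixi nu
residual = (sp.diff(nu, eta) - sp.diff(nu, xi, xi, eta) + sp.diff(nu, xi)
            - nu * sp.diff(nu, xi, 3) + nu * sp.diff(nu, xi)
            - 3 * sp.diff(nu, xi) * sp.diff(nu, xi, 2))

# Rewriting in exponentials and expanding cancels every term identically
print(sp.expand(residual.rewrite(sp.exp)))  # 0
```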

#### **6. Results and Discussion**

The aim of the present work is to find analytical solutions of time-fractional Fornberg–Whitham equations by implementing efficient analytical methods. The variational iteration transform method (VITM) and the Shehu decomposition method (SDM) are used to solve the targeted problems. To check the validity of the proposed methods, solutions of some illustrative problems are presented, and solution graphs are plotted for both fractional- and integer-order problems. Figure 1 shows (a) the exact and approximate solutions of example 1 at *γ* = 1 and (b) the analytical solutions at the fractional orders *γ* = 1, 0.8, 0.6 and 0.4. Figure 2 shows 3D plots of (a) the exact solution and (b) the SDM and VITM solutions at *γ* = 1; the SDM and VITM solutions are observed to be in close agreement with the exact solution. In Figure 3, the SDM and VITM solutions of example 1 are compared at (a) *γ* = 0.8 and (b) *γ* = 0.6, and Figure 4 shows the comparison at *γ* = 0.4; the VITM and SDM results are confirmed to be in strong agreement with each other. A similar graphical analysis applies to example 2: Figure 5 shows 3D plots of (a) the exact solution and (b) the SDM and VITM solutions at *γ* = 1, and Figure 6 shows (a) the exact and approximate solutions of example 2 and (b) the approximate solutions at the fractional orders *γ* = 0.8, 0.6 and 0.4. These graphs show that both methods achieve a sufficient degree of accuracy. In Table 1, the SDM and VITM results are compared in terms of absolute errors for different fractional orders, showing that the two techniques have essentially identical accuracy. Moreover, the solutions of the fractional-order problems converge to the integer-order solution as the fractional order approaches the integer order.

**Table 1.** The comparison between SDM and VITM for the approximate solution of example 1.


**Figure 1.** (**a**) Exact and approximate solution plot at *γ* = 1 of example 1. (**b**) Approximate solution plots of example 1 at different fractional orders.

**Figure 2.** (**a**) Exact plot of example 1. (**b**) Comparison between approximate solution by SDM and VITM plot of example 1 at *γ* = 1.

**Figure 3.** (**a**) Comparison between approximate solution by SDM and VITM plot of example 1 at *γ* = 0.8. (**b**) Comparison between approximate solution by SDM and VITM plot of example 1 at *γ* = 0.6.

**Figure 4.** Comparison between approximate solution by SDM and VITM plot of example 1 at *γ* = 0.4.

**Figure 5.** (**a**) Exact solution plot of example 2. (**b**) Comparison between approximate solution by SDM and VITM plot of Example 2 at *γ* = 1.

**Figure 6.** (**a**) Exact and approximate solution plot of example 2. (**b**) Approximate solution plots of example 2 at different fractional orders.

#### **7. Conclusions**

In this paper, we implemented the Shehu decomposition method and the variational iteration transform method for solving the time-fractional Fornberg–Whitham equation. Several examples are solved analytically to confirm the accuracy and efficiency of the methods. Graphs and a table of the solutions are provided to show the close agreement between the obtained and exact solutions. The proposed techniques are conceptually simpler and faster, are effective in solving linear and non-linear fractional-order partial differential equations, and are useful for solving a broader class of non-linear fractional models to high precision in applied mathematics.

**Author Contributions:** Conceptualization, N.A.S. and E.R.E.-Z.; methodology, N.A.S.; software, I.D. and E.R.E.-Z.; validation, S.T.; formal analysis, N.A.S. and I.D.; data curation, S.T.; writing–original draft preparation, N.A.S.; writing–review and editing, I.D.; supervision, J.D.C.; project administration, N.A.S.; funding acquisition, J.D.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** This work was supported by Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government (MOTIE) (No. 20192010107020, Development of hybrid adsorption chiller using unutilized heat source of low temperature). S. Taherifar, would like to express her sincere gratitude to the Research Council of Shahid Chamran University of Ahvaz for its financial support (Grant No. SCU.MC99.29826).

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **References**


## *Article* **On the Geometric Description of Nonlinear Elasticity via an Energy Approach Using Barycentric Coordinates**

**Odysseas Kosmas \*, Pieter Boom and Andrey P. Jivkov**

Department of MACE, University of Manchester, Oxford Road, Manchester M13 9PL, UK; pieter.boom@manchester.ac.uk (P.B.); andrey.jivkov@manchester.ac.uk (A.P.J.) **\*** Correspondence: odysseas.kosmas@manchester.ac.uk; Tel.: +44-(0)161-306-3727

**Abstract:** The deformation of a solid due to changing boundary conditions is described by a deformation gradient in Euclidean space. If the deformation process is reversible (conservative), the work done by the changing boundary conditions is stored as potential (elastic) energy, a function of the deformation gradient invariants. Based on this, in the present work we built a "discrete energy model" that uses maps between nodal positions of a discrete mesh linked with the invariants of the deformation gradient via standard barycentric coordinates. A special derivation is provided for domains tessellated by tetrahedrons, where the energy functionals are constrained by prescribed boundary conditions via Lagrange multipliers. The analysis of these domains is performed via energy minimisation, where the constraints are eliminated via pre-multiplication of the discrete equations by a discrete null-space matrix of the constraint gradients. Numerical examples are provided to verify the accuracy of the proposed technique. The standard barycentric coordinate system in this work is restricted to three-dimensional (3-D) convex polytopes. We show that for an explicit energy expression, applicable also to non-convex polytopes, the general barycentric coordinates constitute fundamental tools. We define, in addition, the discrete energy via a gradient for general polytopes, which is a natural extension of the definition for discrete domains tessellated by tetrahedra. We, finally, prove that the resulting expressions can consistently describe the deformation of solids.

**Keywords:** nonlinear elasticity; general barycentric coordinates; energy minimisation; Lagrange multipliers; null-space method

#### **1. Introduction and Motivation**

Computational solid mechanics provides approximate solutions for the deformation of continuous domains subjected to changes in boundary conditions [1–6]. The deformation process itself is described by using "intensive quantities"—stresses and strains—and a constitutive relation between them. These quantities have a geometric nature and form continuous tensor fields. The constitutive relation between stresses and strains may vary in the degree of complexity, depending on how the intensive quantities are related to the "extensive quantities"—forces and displacements. For example, the strains can be defined as linearised or nonlinear functions of the displacement gradient, but in all cases they must be symmetric tensors in order to fulfil physical objectivity. On the other hand, stresses can be defined as distributed forces with respect to a specific domain configuration, where the known true or Cauchy stresses are defined with respect to the current/deformed configuration. Furthermore, the simultaneous fulfilment of the balance of linear and angular momenta generates the symmetry of the stress tensor. As a consequence, the differential relations representing the strains as functions of displacements and equilibrium of stresses, together with the constitutive law, form a system of equations, the approximate solution of which is sought either by discretising the underlying solution space or by discretising the operators involved.

For the first approach (discretisation of the solution space), the most prominent example is the well-known finite element method. In this method, the standard finite

**Citation:** Kosmas, O.; Boom, P.; Jivkov, A.P. On the Geometric Description of Nonlinear Elasticity via an Energy Approach Using Barycentric Coordinates. *Mathematics* **2021**, *9*, 1689. https://doi.org/ 10.3390/math9141689

Academic Editor: Dumitru Baleanu

Received: 5 May 2021 Accepted: 7 July 2021 Published: 19 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

element formulations, which dominate commercial finite element analysis platforms, are based on a limited number of simple element geometries: triangles and quadrilaterals in 2D, and tetrahedra and hexahedra in 3D. While these are sufficient for most practical problems and make the implementation and solution quite efficient, there are situations where the use of general polyhedra as the indivisible units covering the domain can be significantly more advantageous. One obvious example is the representation of large polycrystalline microstructures or cellular assemblies, where the need to insert additional discretisation in the polyhedra may lead to computationally very expensive problems. This has led to the development of finite elements in the form of general polyhedra, including those using an arbitrary number of vertices and faces, those using generally non-convex polyhedra, and those using polyhedra with nonplanar faces [7–9]. In parallel, such general polyhedra proved to be the driving force for the recent development of the virtual element method [10,11]. However, to the best of our knowledge, methods that use general polyhedral meshing tools and then employ the subsequent solvers have not become fully established to date, and they remain in the academic domain.

The second approach (discretisation of the operators) is, to a large extent, based on discrete differential geometry and has been under development for the last 20 years. In this method, the discrete structure of the analysed solid at a given length-scale is taken as a starting point, i.e., the discretised computational domain is defined by the finite discrete nature of the solid constitution, and can be seen as an assembly of cells of arbitrary sizes and shapes. Concrete examples of this approach put further effort into preserving key properties of the system in terms of important invariants, such as energy, by proper discretisation of the operators. Even though these schemes have been well formulated and tested for a wide range of physical problems involving scalar fields, only a few of them have been proven to be stable when solving solid mechanics problems involving vector fields [12–14]. A notable approximation within this approach is the representation of the discrete system by a graph (contour), which allows rather simple formulations for elasticity [15], elasto-plasticity [16] and elasto-plasticity involving damage [17].

For both of the aforementioned approaches in computational solid mechanics, mesh quality plays a crucial role in numerical simulations. For example, in several methods that rely on Voronoi meshes, spurious solutions may appear, mainly due to different scales of edges and faces (presence of small edges and/or faces); see, e.g., in [18] and references therein. Similar problems arising from mesh quality are present in several other methods; see, for example, in [13]. A common approach utilised to overcome such difficulties is re-meshing: an initial tetrahedral mesh of any quality is used to create a new one with improved quality by appropriate merging of neighbouring tetrahedra. Examples of this technique can be found, e.g., in [19], using mesh-free methods, and in [20], using discontinuous Galerkin methods. Such a treatment, however, is not applicable in situations where a mesh representing some physically-based structure is required, e.g., an assembly of polytopes, and possibly large differences in scales need to be handled. The effect of scale differences is quite strong due to the representation of differential operators in either approach, and could be overcome by an energy-based formulation.

The main aim of this work is to address the above open questions, focusing on the derivation of an appropriate energy-based model within the field of computational solid mechanics. This model combines a continuum geometric description of the stresses and strains in the context of nonlinear elasticity with a discrete energy formulation for tetrahedra as well as arbitrary convex polyhedra. The paper is structured as follows. We first recall the geometric description of deformation and the continuous definition of elastic (stored, conservative) energy in Section 2. This is subsequently used to derive a discrete energy representation in tetrahedral elements through the use of standard barycentric coordinates, Section 3. The problem of elastic deformation is formulated as an "energy minimisation problem", where the boundary conditions are imposed via Lagrange

multipliers. The null-space method is then employed to eliminate the Lagrange multipliers, Section 4. The formulation of . . . and the solution procedure are validated in Section 5 by comparison with analytical solutions for two examples: a cantilever beam subjected to uniformly distributed load, in Section 5.1, and a domain with a spherical hole subjected to remote tension, in Section 5.2. Finally, an energy model for general polytopes is proposed in Section 6. This is achieved by extending the definition of the standard barycentric coordinates to such polytopes, deriving weighted ones, and proving that these can define an energy functional that fully describes the physical system.

#### **2. Deformation and Energy of Conservative Solids**

We will first review some of the basic definitions of nonlinear elasticity from a geometric perspective. We start by identifying a material body with a (smooth) Riemannian manifold *B* and consider a time-dependent deformation of this body to the ambient space Riemannian manifold *S* described by [1]

$$
\varphi_t: B \to \varphi_t(B),
\tag{1}
$$

or, for simplicity,

$$
\varphi: B \to S,\tag{2}
$$

see Figure 1. The points within the body are given by their coordinates with respect to a global coordinate system: $\mathbb{X} = [X^1\ X^2\ X^3]^T$ in the initial (reference) configuration, and $\mathbf{x} = [x^1\ x^2\ x^3]^T$ in the current (deformed) configuration. With these we can rewrite the deformation map (2) as

$$
\mathbf{x} = \varphi(\mathbb{X}) \equiv \mathbf{x}(\mathbb{X}).\tag{3}
$$

The map *ϕ* is considered to be a diffeomorphism, i.e., an invertible differentiable map with differentiable inverse. Manifolds *B* and *S* are considered to be equipped with a metric tensor field *G*. This positive definite second-order tensor field is expressed by symmetric tensors in the tangent space at points of the manifold, denoted by *T*X*B* on *B* and *T*x*S* on *S*.

**Figure 1.** Initial *B* and current *S* configurations of a domain related by map *ϕ<sup>t</sup>* ≡ *ϕ*.

A deformation (3) can also be represented by the deformation gradient *F* , which is the tangent map of *ϕ*, i.e., the map between the tangent space on *B* and the tangent space on *S* [1,2]:

$$\mathcal{F}(\mathbb{X}): T_{\mathbb{X}}\mathcal{B} \to T_{\mathbf{x}}\mathcal{S}.\tag{4}$$

In the local coordinate charts for X and x, the components *Fij* of *F* are given by

$$\mathcal{F}\_{\rm ij} = \frac{\partial \mathbf{x}\_{i}}{\partial \mathbf{X}\_{j}} = \mathbf{x}\_{i,j} \qquad \text{for} \qquad i, j = 1, 2, 3. \tag{5}$$

The deformation gradient is a non-singular two-point tensor, i.e., maps the tangent spaces of the two configurations, with a positive determinant, denoted by J. The determinant measures the ratio between the current and initial infinitesimal volume at a given

material point, and we note that the condition J = 1 represents a volume preserving deformation. *F* can be multiplicatively decomposed in two possible ways:

$$
\mathcal{F} = \mathsf{V}\mathsf{R} = \mathsf{R}\mathsf{U},\tag{6}
$$

where *R* is a proper orthogonal rotation tensor (considering that body and space are of the same Riemannian manifold), and *V* and *U* are the so-called left (spatial, current) and right (material, reference) stretch tensors, respectively, which are symmetric positive-definite since J > 0. This reflects the two possible representations of the motion: (1) rotation of a reference unit triad to a current unit triad, followed by its stretching in the current configuration, and (2) stretching of a reference unit triad, followed by rotation to a current triad. The two stretch tensors have the same three positive eigenvalues—*λ*1, *λ*2, and *λ*3—representing the principal stretches of the deformation.
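The decomposition (6) can be illustrated numerically. The sketch below assumes a deformation gradient with illustrative values (not taken from the text) and computes the polar decomposition via the SVD using only numpy:

```python
import numpy as np

# An assumed deformation gradient with det F > 0 (illustrative values only)
F = np.array([[1.2, 0.3, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 0.9]])

# Polar decomposition via the SVD: F = W S Vt  =>  R = W Vt, U = V S Vt, V_ = W S Wt
W, S, Vt = np.linalg.svd(F)
R = W @ Vt                       # proper rotation, since det F > 0
U = Vt.T @ np.diag(S) @ Vt       # right (material) stretch tensor
V_ = W @ np.diag(S) @ W.T        # left (spatial) stretch tensor

assert np.allclose(F, R @ U) and np.allclose(F, V_ @ R)   # F = RU = VR, Equation (6)
assert np.isclose(np.linalg.det(R), 1.0)                  # proper orthogonal rotation
# Both stretch tensors share the singular values of F as principal stretches
assert np.allclose(np.sort(np.linalg.eigvalsh(U)), np.sort(S))
```

The singular values of *F* are precisely the principal stretches *λ*1, *λ*2, *λ*3 shared by *U* and *V*.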

In this work, we will consider solids undergoing conservative deformation, i.e., where the solid stores no energy on a complete reversal of the deformation [5]. This behaviour covers the cases of linear and nonlinear elasticity, where all work done on the system by changing boundary conditions from one (initial) to another (current) configuration is stored as an elastic energy, which is exactly equal to the energy required to restore the initial configuration by reversed change of boundary conditions.

An elastic energy formulation in terms of deformation must be invariant with respect to rigid body rotations [1,2,4], thus existing formulations are based on the stretch tensors or some functions of their invariants. We will use one such formulation, which is based on invariants of the so-called left Cauchy–Green strain tensor *B*, given by the map *B*(x) : *T*x*S* → *T*x*S* or by

$$\mathcal{B} = \mathcal{V}^2 \equiv \mathcal{F}\mathcal{F}^T,\tag{7}$$

where *F<sup>T</sup>* maps covectors in the cotangent bundle of *S* (or *T*<sup>∗</sup>*S*) to covectors in the cotangent bundle of *B* (or *T*<sup>∗</sup>*B*), see also in [21–25]. It can be easily shown that *B* is an objective, i.e., frame-independent, tensor.

One set of invariants of *B* is given by [1,2,4]

$$\mathbb{I}_1 = \text{tr}\left(\mathcal{F}\mathcal{F}^T\right), \quad \mathbb{I}_2 = \frac{1}{2}\left[\left(\text{tr}\left(\mathcal{F}\mathcal{F}^T\right)\right)^2 - \text{tr}\left(\left(\mathcal{F}\mathcal{F}^T\right)^2\right)\right], \quad \mathbb{I}_3 = \det\left(\mathcal{F}\mathcal{F}^T\right) = \mathbb{J}^2\tag{8}$$

which can be written in terms of the three principal stretches as

$$\mathbb{I}_1 = \lambda_1^2 + \lambda_2^2 + \lambda_3^2, \quad \mathbb{I}_2 = (\lambda_1 \lambda_2)^2 + (\lambda_1 \lambda_3)^2 + (\lambda_2 \lambda_3)^2, \quad \mathbb{I}_3 = (\lambda_1 \lambda_2 \lambda_3)^2. \tag{9}$$

Another set of invariants, used to define a large class of non-linear elastic behaviours, is derived from (9) as

$$\bar{\mathbb{I}}_1 = \frac{\mathbb{I}_1}{\mathbb{J}^{2/3}}, \quad \bar{\mathbb{I}}_2 = \frac{\mathbb{I}_2}{\mathbb{J}^{4/3}}, \quad \bar{\mathbb{I}}_3 = \sqrt{\det(\mathcal{F}\mathcal{F}^T)} = \mathbb{J}. \tag{10}$$
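A small numerical sketch (with an assumed *F*, values chosen only for illustration) confirms that the matrix definitions (8) agree with the principal-stretch forms (9), and computes the second set (10):

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 0.9]])          # assumed deformation gradient
B = F @ F.T                              # left Cauchy-Green tensor (7)
J = np.linalg.det(F)

# First set of invariants, Equation (8)
I1 = np.trace(B)
I2 = 0.5 * (np.trace(B) ** 2 - np.trace(B @ B))
I3 = np.linalg.det(B)

# Cross-check against (9): eigenvalues of B are the squared principal stretches
l2 = np.linalg.eigvalsh(B)
assert np.isclose(I1, l2.sum())
assert np.isclose(I2, l2[0] * l2[1] + l2[0] * l2[2] + l2[1] * l2[2])
assert np.isclose(I3, l2.prod()) and np.isclose(I3, J ** 2)

# Second (volume-weighted) set, Equation (10)
I1b, I2b, I3b = I1 / J ** (2 / 3), I2 / J ** (4 / 3), J
assert I1b >= 3.0                        # AM-GM: the first modified invariant is >= 3
```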

The elastic energy density of a simple generalised Neo-Hookean material is defined in terms of the second set by [2,4]

$$\mathcal{H} = \frac{\mu}{2} \left( \bar{\mathbb{I}}_1 - 3 \right) + \frac{\kappa}{2} (\mathbb{J} - 1)^2,\tag{11}$$

where *µ* and *κ* are material-dependent parameters, which in the small strain approximation are known as shear and bulk moduli, respectively.
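Evaluating (11) is a one-liner once *F* is known; a minimal sketch, where the values of *µ* and *κ* are placeholders rather than material data from the text:

```python
import numpy as np

def neo_hookean_energy(F, mu=1.0, kappa=10.0):
    """Energy density (11) of a generalised Neo-Hookean material.

    mu and kappa are the shear and bulk moduli of the small-strain limit;
    the default values here are illustrative placeholders.
    """
    J = np.linalg.det(F)
    I1_bar = np.trace(F @ F.T) / J ** (2 / 3)    # first invariant of the set (10)
    return mu / 2 * (I1_bar - 3) + kappa / 2 * (J - 1) ** 2

# The undeformed state stores no energy; a non-rigid deformation stores energy
assert np.isclose(neo_hookean_energy(np.eye(3)), 0.0)
assert neo_hookean_energy(np.diag([1.1, 1.0, 1.0])) > 0.0
```

A pure rotation also gives zero energy, since both invariants are unchanged under rotations, consistent with the objectivity requirement above.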

The integral of the energy density over the solid domain gives the total elastic (stored, potential) energy, which by the principle of stationary action must be minimum for the true deformation; derivation is shown in Appendix A. This can be understood as the system storing the minimal amount of energy for the change of boundary conditions between the initial and the current configuration, or as the change of boundary conditions doing the

minimal amount of work (which is equal to the stored energy) to deform the solid from the initial to the current configuration.

While the solid deformation will be formulated as an elastic energy minimisation problem in this work, using the energy density expression (11), the stress tensor will be required for comparisons with known solutions of test problems. The (true) Cauchy stress tensor, *σ*, is the derivative of the elastic energy density function with respect to *B*, which for the case of Neo–Hookean materials can be found as

$$
\sigma_{ij} = \frac{\mu}{\mathbb{J}^{5/3}} \left( \mathcal{B}_{ij} - \frac{1}{3} \mathcal{B}_{kk} \delta_{ij} \right) + \kappa (\mathbb{J} - 1) \delta_{ij}, \tag{12}
$$

where *δij* is the Kronecker delta.
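A matching sketch for the stress (12), with the same placeholder material parameters as above (assumed values, not from the text):

```python
import numpy as np

def cauchy_stress(F, mu=1.0, kappa=10.0):
    """Cauchy stress (12) of a Neo-Hookean material (placeholder mu, kappa)."""
    J = np.linalg.det(F)
    B = F @ F.T
    deviatoric = B - np.trace(B) / 3 * np.eye(3)   # B_ij - (1/3) B_kk delta_ij
    return mu / J ** (5 / 3) * deviatoric + kappa * (J - 1) * np.eye(3)

sigma = cauchy_stress(np.array([[1.2, 0.3, 0.0],
                                [0.0, 1.0, 0.1],
                                [0.0, 0.0, 0.9]]))
assert np.allclose(sigma, sigma.T)                  # symmetric, as required
assert np.allclose(cauchy_stress(np.eye(3)), 0.0)   # stress-free reference state
```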

Interesting relations between the material parameters and energy can be established via the derivatives of the energy function with respect to the invariants of *F*; these are shown in Appendix C, where the formulation is specialised to linear elasticity, i.e., the infinitesimal deformation approximation.

#### **3. Discrete Energy of Tetrahedral Cells**

First, we consider a subdivision of the material manifold *B* into tetrahedra. For a given tetrahedron in R<sup>3</sup> with vertices X*<sup>a</sup>* , X*<sup>b</sup>* , X*<sup>c</sup>* and X*<sup>d</sup>* , any point X induces a partition described via

$$\mathbb{X} = \gamma^a \mathbb{X}^a + \gamma^b \mathbb{X}^b + \gamma^c \mathbb{X}^c + \gamma^d \mathbb{X}^d,\tag{13}$$

where *γ<sup>a</sup>*, *γ<sup>b</sup>*, *γ<sup>c</sup>*, *γ<sup>d</sup>* ∈ R are generalised barycentric coordinates. These are the ratios of the partitioned signed volumes *V<sup>i</sup>* and the tetrahedral volume (*V<sup>t</sup>*), see Figure 2. We introduce the vector of barycentric coordinates as

$$
\Gamma = \begin{bmatrix} \gamma^a & \gamma^b & \gamma^c & \gamma^d \end{bmatrix}^T. \tag{14}
$$

Further, we define a 4 × 4 matrix *S<sup>r</sup>* describing the tetrahedron shape in the reference configuration by

$$\mathcal{S}^r = \begin{bmatrix} \mathbb{X}^a & \mathbb{X}^b & \mathbb{X}^c & \mathbb{X}^d \end{bmatrix}, \tag{15}$$

where each column contains corresponding nodal coordinates in the reference system with an additional element "1", e.g., for the first node we have

$$\mathbf{X}^{a} = \begin{bmatrix} X\_1^a & X\_2^a & X\_3^a & 1 \end{bmatrix}^T,\tag{16}$$

and similarly for the remaining three nodes (upper indexes here indicate vertices, see Figure 3, and lower the *i*-th component of it, *i* = 1, 2, 3). Using this, any point X in the reference configuration can be expressed by

$$\tilde{\mathbb{X}} = \mathcal{S}^{\mathsf{r}} \Gamma. \tag{17}$$
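For a concrete tetrahedron, the barycentric coordinates of a point follow from (17) by solving a single 4 × 4 linear system; a sketch with assumed vertex positions (the unit simplex, chosen only for the example):

```python
import numpy as np

# Assumed reference tetrahedron: the unit simplex (example values only)
vertices = [np.array([0., 0., 0.]), np.array([1., 0., 0.]),
            np.array([0., 1., 0.]), np.array([0., 0., 1.])]

# S^r of Equation (15): nodal coordinates augmented with a "1", as in (16)
Sr = np.column_stack([np.append(X, 1.0) for X in vertices])

X = np.array([0.25, 0.25, 0.25])                  # a point inside the tetrahedron
Gamma = np.linalg.solve(Sr, np.append(X, 1.0))    # barycentric coordinates (14)

assert np.allclose(Sr @ Gamma, np.append(X, 1.0))  # reproduces (17)
assert np.isclose(Gamma.sum(), 1.0)                # partition of unity (the "1" row)
print(Gamma)  # [0.25 0.25 0.25 0.25]
```

For a point outside the tetrahedron, some coordinates become negative, reflecting the signed volumes illustrated in Figure 2.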

Similarly, we define a 4 × 4 matrix *S<sup>d</sup>* describing the tetrahedron shape in the deformed configuration by

$$\mathcal{S}^d = \begin{bmatrix} \ \tilde{\mathfrak{x}}^a & \ \tilde{\mathfrak{x}}^b & \ \tilde{\mathfrak{x}}^c & \ \tilde{\mathfrak{x}}^d \end{bmatrix}. \tag{18}$$

Using this, any point x in the deformed configuration can be expressed by

$$
\tilde{\mathbf{x}} = \mathcal{S}^d \Gamma. \tag{19}
$$

These expressions enable us to define the map between the reference and current configuration for any tetrahedral cell (see Figure 3) by

$$\tilde{\mathbf{x}} = \mathcal{S}^d (\mathcal{S}^r)^{-1} \tilde{\mathbb{X}}.\tag{20}$$

**Figure 2.** Examples of partitioned volumes *V<sup>I</sup>* associated with point X: (**top**) two figures illustrate two volumes for point inside the tetrahedron; (**bottom**) two figures illustrate two volumes for a point outside the tetrahedron.

**Figure 3.** The deformation of a 3D tetrahedral element.

On the other hand, the physical map between the reference and the current configuration of a tetrahedron can be given by

$$\mathbf{x} = \mathcal{F}\mathbb{X} + \mathbf{t},\tag{21}$$

where t = [*t<sup>i</sup>* ], *i* = 1, 2, 3 are the components of a translation vector, and *F* is the deformation gradient which is assumed to have positive determinant.

After some algebra, it can be shown that *S<sup>d</sup>*(*S<sup>r</sup>*)<sup>−1</sup> is the following 4 × 4 block matrix:

$$\mathcal{S}^d(\mathcal{S}^r)^{-1} = \left[\begin{array}{c|c} \mathcal{F} & \mathbf{t} \\ \hline \mathbb{O}^T & 1 \end{array}\right], \tag{22}$$

where O is a zero vector of size three.
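To make (22) concrete, a small numpy sketch (with assumed *F* and t, values for illustration only) builds *S<sup>r</sup>* and *S<sup>d</sup>* for a reference tetrahedron and recovers the blocks:

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 0.9]])      # assumed deformation gradient (det F > 0)
t = np.array([0.5, -0.2, 0.1])      # assumed translation vector

# Reference tetrahedron and its image under the affine map (21)
vertices = [np.array([0., 0., 0.]), np.array([1., 0., 0.]),
            np.array([0., 1., 0.]), np.array([0., 0., 1.])]
Sr = np.column_stack([np.append(X, 1.0) for X in vertices])
Sd = np.column_stack([np.append(F @ X + t, 1.0) for X in vertices])

M = Sd @ np.linalg.inv(Sr)          # the block matrix of Equation (22)
assert np.allclose(M[:3, :3], F)    # upper-left 3x3 block recovers F
assert np.allclose(M[:3, 3], t)     # upper-right column recovers t
assert np.allclose(M[3, :], [0., 0., 0., 1.])   # bottom row [O^T  1]
```

This is exactly how *F* (and hence the invariants and the energy) is extracted cell-by-cell from the nodal positions.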

With the knowledge of *F* , the discrete energy of a tetrahedron is calculated by Equation (11), using the invariants given in Equation (10). The sum of energies of all tetrahedrons defines the discrete energy functional of the system.

#### **4. Lagrange Multipliers Using Null-Space Method**

When using the discrete energy functional proposed in Section 3, the Euler–Lagrange Equations (A4) are incomplete and need to be complemented by prescribing Dirichlet (essential) and Neumann (natural) boundary conditions, see Appendix B. For the proposed formulation via energy minimisation, direct application of the boundary conditions is challenging, and in such cases Lagrange multipliers have been extensively used [26]. This has been motivated by the fact that both essential and natural boundary conditions can be expressed as constraints and thus enforced by Lagrange multipliers, resulting in the following constrained Euler–Lagrange equations:

$$\frac{d}{dX\_j} \frac{\partial \mathcal{H}}{\partial \left(\partial \mathbf{x}\_i / \partial X\_j\right)} - \frac{\partial \mathcal{H}}{\partial \mathbf{x}\_i} - \left[\mathbb{C}\left(X\_j\right)\right]^T \lambda^j = \mathbf{0},\tag{23}$$

for *i*, *j* = 1, 2, 3, where C(*X<sup>j</sup>*) are the constraints and *λ <sup>j</sup>* are the Lagrange multipliers.

The introduction of Lagrange multipliers increases the number of unknowns, and in order to reduce them to the number of unknown displacements in the system we use the discrete null-space method of [26], which eliminates all Lagrange multipliers. For this, we define a null-space matrix N(*X<sup>j</sup>*), which satisfies

$$
\mathcal{C}(X\_j)N(X\_j) = 0.\tag{24}
$$

Multiplying Equation (23) from the left by the transpose of the null-space matrix, the multiplier term vanishes by Equation (24), and Equation (23) becomes

$$\left[N(X\_j)\right]^T \left(\frac{d}{dX\_j} \frac{\partial \mathcal{H}}{\partial \left(\partial x\_{i}/\partial X\_{j}\right)} - \frac{\partial \mathcal{H}}{\partial x\_{i}}\right) = 0. \tag{25}$$

This leads to a number of equations equal to the number of unknown degrees of freedom. Importantly, this technique has been proven to be energy consistent, meaning that energy is neither dissipated nor gained artificially during the numerical process [26–31].
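The elimination of multipliers can be sketched on a linear toy problem (not the paper's nonlinear system): for a stiffness-like SPD matrix *K*, load *f*, and constraint matrix *C* with *Cu* = 0, a null-space basis *N* with *CN* = 0 (Equation (24)) reduces the saddle-point system to an unconstrained one as in Equation (25). All names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3                                # unknowns, constraints
A = rng.random((n, n))
K = A @ A.T + n * np.eye(n)                # SPD "stiffness" matrix
f = rng.random(n)                          # load vector
C = rng.random((m, n))                     # constraint matrix, enforce C u = 0

# Null-space matrix N with C N = 0 (Equation (24)), via SVD of C
_, _, Vt = np.linalg.svd(C)
N = Vt[m:].T                               # n x (n - m) basis of ker(C)

# Reduced, multiplier-free system (analogue of Equation (25)):
y = np.linalg.solve(N.T @ K @ N, N.T @ f)
u = N @ y                                  # solution in the constraint manifold

assert np.allclose(C @ u, 0)               # constraints satisfied exactly
r = K @ u - f
assert np.allclose(N.T @ r, 0)             # residual lies in range(C^T),
                                           # i.e. is carried by the multipliers
```

The reduced system has exactly *n* − *m* unknowns, matching the statement that the number of equations equals the number of unknown degrees of freedom.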

#### **5. Numerical Examples**

A canonical way to test a proposal for numerical solution of boundary value problems in elasticity is to examine the solution behaviour for several simple deformation modes: volumetric expansion, pure shear, and possibly unconstrained uniaxial extension/compression. While we have tested these modes successfully, the inclusion of the results would not be of significant value to this work. Instead, we provide results for two known elasticity problems, where the combined effect of the different deformation modes is tested: a cantilever beam subjected to a uniformly distributed load and a cube with a spherical hole subjected to tension. We will present and compare results for displacements in the cantilever case, and for stresses in the cube case.

#### *5.1. Cantilever Beam Subjected to a Regular Distributed Load*

First, we consider the deformation of a three-dimensional cantilever beam subjected to a uniformly distributed load [4]. The beam has dimensions 10 × 2 × 2 and is discretised into 5802 tetrahedral elements using 1322 nodes, see Figure 4. The material of the beam has properties *E* = 3 × 10<sup>7</sup> kPa and *ν* = 0.3.

The uniformly distributed load is applied in ten increments with step *f<sup>i</sup>* = 4 kN/cm<sup>2</sup>, and the solution is compared with linear and geometrically nonlinear finite element analyses performed with identical tetrahedral elements. The comparison shown in Figure 5 illustrates that the calculated deflection is in excellent agreement with the geometrically nonlinear finite element solution. This is the first demonstration that the proposed method, based on the minimisation of the energy obtained via the barycentric map, produces accurate results.

**Figure 4.** 3D cantilever beam.

**Figure 5.** The deflection of the geometrically nonlinear case of a cantilever beam subjected to a regular distributed load calculated using finite element method with tetrahedral elements (FEM-tet) and the proposed scheme.

#### *5.2. Cube with a Spherical Hole*

Second, we consider the deformation of a cube with a spherical hole subjected to tensile load (Figure 6). The cube dimensions are 20 × 20 × 20, and the spherical cavity has a radius *r* = 1 and centre at the cube centre. Due to symmetries, only one-eighth of the cube is considered and tessellated with approximately 100 tetrahedrons. The material properties are the same as in the cantilever example. Uniform tensile load parallel to the *x* axis is applied.

The problem of a continuum domain with a spherical hole subjected to remote stress has an analytical solution [32]. Specifically, the normal stress to the *x*, *y* plane (*z* = 0) is

$$
\sigma\_{33} = \frac{4 - 5\nu}{2(7 - 5\nu)} \left(\frac{r}{x}\right)^3 + \frac{9}{2(7 - 5\nu)} \left(\frac{r}{x}\right)^5 + 1. \tag{26}
$$
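As a quick numerical sketch of Equation (26) (taking the distance variable to be *x*, measured from the hole centre as in Figure 7, and normalising by the remote stress), the stress concentration at the hole surface and the decay to the far field can be evaluated directly:

```python
def sigma33(x, r=1.0, nu=0.3):
    """Normal stress of Equation (26) at distance x from the hole centre,
    normalised by the remote tensile stress."""
    a = (4 - 5 * nu) / (2 * (7 - 5 * nu))
    b = 9 / (2 * (7 - 5 * nu))
    return a * (r / x) ** 3 + b * (r / x) ** 5 + 1

# Stress concentration at the hole surface (x = r) for nu = 0.3:
# (4 - 5*0.3 + 9) / (2 * (7 - 5*0.3)) + 1 = 22.5/11 ~ 2.045
assert abs(sigma33(1.0) - 22.5 / 11) < 1e-12
# Far from the hole the stress recovers the remote value
assert abs(sigma33(100.0) - 1.0) < 1e-4
```

For *ν* = 0.3 this gives the classical stress concentration factor of about 2.045 at the cavity surface, which is the peak of the blue analytical curve in Figure 7.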

This analytical solution is compared with the results obtained with the proposed method in Figure 7: the analytical solution is plotted with a blue line, the calculated stresses with a red line. The demonstrated good agreement between the two lends further support to the proposed approach.

**Figure 6.** Cube with a spherical hole.

**Figure 7.** Stress distribution around the hole for distance *x* > 1 from the centre of the hole (size does not correspond to the example).

The approach can be tested further against various analytical solutions, but the implementation requires additional work to reduce the computational cost. More work is also needed on mesh quality, in order to demonstrate how the number, size and shape of the tetrahedra affect the calculations. One part of ongoing work is an implementation of a parallel solver that will massively reduce the computational time.

#### **6. Discrete Energy on General Polytopes Using Weighted Barycentric Coordinates**

In order to define a discrete energy for general three-dimensional elements, we will follow an approach similar to the one described in Section 3. The extension requires first to define general/weighted barycentric coordinates on general polytopes and then to use them to express points of the material domain with respect to the tessellation nodes.

#### *6.1. Standard Barycentric Coordinates Revisited*

We start with rewriting the standard barycentric coordinates in a form suitable for generalisation. With respect to a convex polytope with vertices X*<sup>I</sup>* for *I* = 1, . . . , *n* (where *n* ≥ 4), any point X ∈ R<sup>3</sup> can be written as [33]

$$\mathbb{X} = \sum\_{I=1}^{n} \gamma\_I \mathbb{X}\_I, \qquad \sum\_{I=1}^{n} \gamma\_I = 1\,, \tag{27}$$

where *γ<sup>I</sup>* are the generalised barycentric coordinates of X. This can be also written in the form

$$\sum\_{I=1}^{n} w\_I (\mathbb{X}\_I - \mathbb{X}) = 0, \qquad \sum\_{I=1}^{n} w\_I \neq 0, \qquad \text{with} \qquad \gamma\_I = \frac{w\_I}{\sum\_{J=1}^{n} w\_J}, \tag{28}$$

where the weight functions *w<sup>I</sup>* can be appropriately defined. In [33] for example, the authors define the weight functions via the partition volumes

$$V\_{I} = \{ \mathbb{X}\_{1}, \mathbb{X}\_{2}, \dots, \mathbb{X}\_{I-1}, \mathbb{X}, \mathbb{X}\_{I+1}, \dots, \mathbb{X}\_{n} \}\_{\text{volume}} \tag{29}$$

and the positive functions *c<sup>I</sup>* > 0, so that

$$w\_I = \frac{c\_{I+1}V\_{I+1} + c\_IV\_I + c\_{I-1}V\_{I-1}}{V\_{I+1}V\_{I-1}}.\tag{30}$$

The sum of the weight functions becomes

$$\mathcal{W} = \sum\_{I=1}^{n} w\_I = \sum\_{I=1}^{n} \frac{c\_I V}{V\_{I+1} V\_{I-1}}\,, \tag{31}$$

where *V* is the volume of the polytope.

The requirement for *c<sup>I</sup>* is that these are arbitrary positive functions. A rather general definition has been proposed in [33]:

$$c\_{I} = |\mathbb{X}\_{I} - \mathbb{X}|^{\alpha}. \tag{32}$$

Notably, the selection *α* = 0 results in the known Wachspress coordinates, while the selection *α* = 1 provides the mean value coordinates.
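The special case recovered by this generalised definition, the standard barycentric coordinates of Equation (27), can be illustrated with a short sketch. It computes the coordinates of a point in a tetrahedron as ratios of the partitioned volumes of Equation (29) (vertex *I* replaced by X) to the total volume; the function names are illustrative:

```python
import numpy as np

def signed_vol(a, b, c, d):
    """Signed volume of the tetrahedron with vertices a, b, c, d."""
    return np.linalg.det(np.column_stack([b - a, c - a, d - a])) / 6.0

def barycentric(X, verts):
    """Standard barycentric coordinates of X with respect to a tetrahedron:
    ratio of the partitioned volume V_I (vertex I replaced by X, Eq. (29))
    to the total volume V."""
    V = signed_vol(*verts)
    coords = []
    for I in range(4):
        cell = list(verts)
        cell[I] = X          # replace vertex I by the evaluation point
        coords.append(signed_vol(*cell) / V)
    return np.array(coords)

verts = [np.array(v, float) for v in
         [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
X = np.array([0.2, 0.3, 0.1])
g = barycentric(X, verts)

assert np.isclose(g.sum(), 1.0)                                  # partition of unity
assert np.allclose(sum(gi * vi for gi, vi in zip(g, verts)), X)  # Equation (27)
```

For a point inside the tetrahedron all four coordinates are positive; for a point outside, at least one partitioned volume (and hence one coordinate) changes sign, matching Figure 2.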

#### *6.2. Weighted Barycentric Coordinates on General Polytopes*

We will now propose an extended version of barycentric coordinates on general polytopes that will be used to formulate an expression of the internal energy. To do so, we use the signed partitioned volumes given by Equation (29) and extend the definition of weight functions given by Equation (31).

Extending the barycentric coordinate expressions to general polytopes is challenging, because one needs to address issues arising from the definition of the weight functions in Equation (31). The problems stem mainly from the denominator, the product of the volumes *VI*−<sup>1</sup> and *VI*+<sup>1</sup> (or areas in two dimensions). For non-convex polytopes, this product might become negative. In the following, we restrict ourselves to arbitrary (non-platonic) tetrahedral elements, but the generalisation to any polytope follows naturally. To bypass the possibility of a negative denominator, we define the volume using cross-products in the following way:

$$V = \frac{1}{6} |(\mathbb{X}\_1 - \mathbb{X}\_2) \times (\mathbb{X}\_1 - \mathbb{X}\_3)| \left| \mathbf{h}\_{\mathbb{X}\_4, \{\mathbb{X}\_1, \mathbb{X}\_2, \mathbb{X}\_3\}} \right| \,\tag{33}$$

where <sup>h</sup>X*<sup>I</sup>*,(X<sup>1</sup>,X2,X3) is the vector normal to the triangular face formed by the points (X1, X2, X3), pointing to the point X*<sup>I</sup>*.

We can now rewrite the weights of (31) using the proposed volume definition to obtain

$$\mathcal{W} = \sum\_{I=1}^{n} \frac{36\, c\_I V}{\left|(\mathbb{X}\_1 - \mathbb{X}\_2) \times (\mathbb{X}\_1 - \mathbb{X}\_3)\right| \left|\mathbf{h}\_{\mathbb{X},(\mathbb{X}\_1,\mathbb{X}\_2,\mathbb{X}\_3)}\right| \left|(\mathbb{X}\_1 - \mathbb{X}\_3) \times (\mathbb{X}\_1 - \mathbb{X}\_4)\right| \left|\mathbf{h}\_{\mathbb{X},(\mathbb{X}\_1,\mathbb{X}\_3,\mathbb{X}\_4)}\right|}. \tag{34}$$

With some functions *d<sup>I</sup>* > 0 this can be written as

$$\begin{array}{rcl}\mathcal{W} &=& \sum\_{I=1}^{n} \frac{d\_I}{|\mathbb{X}\_1 - \mathbb{X}\_3| |\mathbb{X}\_1 - \mathbb{X}\_4| \sin\left((\mathbb{X}\_1 - \mathbb{X}\_3), (\mathbb{X}\_1 - \mathbb{X}\_4)\right)}\\&=& \sum\_{I=1}^{n} d\_I \frac{\cot\left((\mathbb{X}\_1 - \mathbb{X}\_3), (\mathbb{X}\_1 - \mathbb{X}\_4)\right)}{(\mathbb{X}\_1 - \mathbb{X}\_3) \cdot (\mathbb{X}\_1 - \mathbb{X}\_4)}.\end{array} \tag{35}$$

As long as the dot product in the denominator is not zero, the barycentric coordinates that use the proposed weight functions are well defined for both convex and non-convex polytopes. Furthermore, the standard barycentric coordinates can be recovered from this definition.

#### *6.3. Energy from Angles and Lengths*

To understand the angles introduced in (35) better, we will relate them to appropriate lengths of edges. For illustration, we will restrict ourselves to two dimensions, although the extension to three dimensions follows a similar path. We consider the triangle formed by points with coordinates X*a*, X*<sup>b</sup>* and X*<sup>c</sup>* (Figure 8). The three angles *θa*, *θ<sup>b</sup>* and *θ<sup>c</sup>* are opposite to the edges of lengths |hX*<sup>b</sup>*,X*c*|, |hX*a*,X*<sup>c</sup>*| and |hX*a*,X*<sup>b</sup>*|, respectively.

**Figure 8.** Two-dimensional triangle formed from points with nodal positions X*a*, X*<sup>b</sup>* and X*c*.

Considering each angle to be a function of edge lengths, we can write the following expressions:

$$\cos(\theta\_a) = \frac{|\mathbf{h}\_{\mathbb{X}\_a,\mathbb{X}\_c}|^2 + |\mathbf{h}\_{\mathbb{X}\_a,\mathbb{X}\_b}|^2 - |\mathbf{h}\_{\mathbb{X}\_b,\mathbb{X}\_c}|^2}{2|\mathbf{h}\_{\mathbb{X}\_a,\mathbb{X}\_c}| \left| \mathbf{h}\_{\mathbb{X}\_a,\mathbb{X}\_b} \right|}. \tag{36}$$

By taking the derivative with respect to <sup>h</sup>X*<sup>b</sup>* ,X*c* , we have

$$\frac{\partial \theta\_a}{\partial \left| \mathbf{h}\_{\mathbb{X}\_b, \mathbb{X}\_c} \right|} = \frac{\left| \mathbf{h}\_{\mathbb{X}\_b, \mathbb{X}\_c} \right|}{\left| \mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_c} \right| \left| \mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_b} \right| \sin(\theta\_a)}\,\tag{37}$$

which, writing the area of the triangle as

$$A = \frac{1}{2} |\mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_c}| \left| \mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_b} \right| \sin(\theta\_a) \tag{38}$$

results in

$$\frac{\partial \theta\_a}{\partial \left| \mathbf{h}\_{\mathbb{X}\_b \mathbb{X}\_c} \right|} = \frac{\left| \mathbf{h}\_{\mathbb{X}\_b \mathbb{X}\_c} \right|}{2A}. \tag{39}$$

Similarly, we can calculate the derivative of (36) with respect to <sup>|</sup>hX*a*,X*<sup>c</sup>* | as

$$\frac{\partial \theta\_a}{\partial |\mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_c}|} = -\frac{|\mathbf{h}\_{\mathbb{X}\_b, \mathbb{X}\_c}| \cos(\theta\_c)}{2A}.\tag{40}$$

Finally, to connect to the cotangent of an angle, as required in (35), we define

$$\xi\_a = \frac{1}{2} \left| \mathbf{h}\_{\mathbb{X}\_b, \mathbb{X}\_c} \right|^2, \qquad \xi\_b = \frac{1}{2} \left| \mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_c} \right|^2, \qquad \xi\_c = \frac{1}{2} \left| \mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_b} \right|^2 \tag{41}$$

and observe that

$$\frac{\partial \cot(\theta\_a)}{\partial \xi\_b} = \frac{1}{|\mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_c}|} \frac{\partial \cot(\theta\_a)}{\partial |\mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_c}|}. \tag{42}$$

The last expression, combined with (40), provides

$$\frac{\partial \cot(\theta\_a)}{\partial \xi\_b} = \frac{|\mathbf{h}\_{\mathbb{X}\_b, \mathbb{X}\_c}| \cos(\theta\_c)}{2 \sin^2(\theta\_a) |\mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_c}|}. \tag{43}$$

This result offers a way to understand the weights defined by (35) via the derivatives with respect to edge lengths. In addition, due to symmetries, we can obtain

$$\frac{\partial \cot(\theta\_b)}{\partial \xi\_a} = \frac{|\mathbf{h}\_{\mathbb{X}\_b, \mathbb{X}\_c}| \cos(\theta\_c)}{2 \sin^2(\theta\_a) |\mathbf{h}\_{\mathbb{X}\_a, \mathbb{X}\_c}|} \tag{44}$$

and thus

$$\frac{\partial \cot(\theta\_a)}{\partial \xi\_b} = \frac{\partial \cot(\theta\_b)}{\partial \xi\_a}. \tag{45}$$

The expressions relating cotangent of angles with lengths can be used to formulate an energy expression.
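The symmetry in (45) can be verified numerically. The sketch below (an illustration under the assumption that the ξ of Equation (41) are the half-squared edge lengths) computes the triangle angles from the law of cosines as in (36) and checks the mixed-derivative symmetry by central finite differences:

```python
import numpy as np

def cots(xi):
    """Cotangents (cot th_a, cot th_b, cot th_c) of a triangle whose
    half-squared edge lengths are xi = (xi_a, xi_b, xi_c), per Eq. (41)."""
    a2, b2, c2 = 2 * xi                    # squared lengths opposite each vertex
    a, b, c = np.sqrt([a2, b2, c2])
    cos = np.array([(b2 + c2 - a2) / (2 * b * c),   # law of cosines, Eq. (36)
                    (a2 + c2 - b2) / (2 * a * c),
                    (a2 + b2 - c2) / (2 * a * b)])
    return cos / np.sqrt(1 - cos ** 2)

xi = np.array([0.5, 0.4, 0.3])             # a valid, non-degenerate triangle
h = 1e-6

def d_cot(i, j):
    """Central-difference estimate of d cot(th_i) / d xi_j."""
    e = np.zeros(3); e[j] = h
    return (cots(xi + e)[i] - cots(xi - e)[i]) / (2 * h)

# Symmetry of Equation (45): d cot(th_a)/d xi_b = d cot(th_b)/d xi_a, etc.
assert np.isclose(d_cot(0, 1), d_cot(1, 0), atol=1e-6)
assert np.isclose(d_cot(1, 2), d_cot(2, 1), atol=1e-6)
```

This symmetry is exactly what makes the 1-form of (47) closed, and hence what allows the cotangent weights to define an energy.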

Total energy given by (A2) can be calculated using the energy of any triangle element, i.e., (A1) as

$$\mathcal{H} \simeq \int\_{(\xi\_a,\xi\_b,\xi\_c)} \sum\_{I=1}^n w\_I\, d\xi\_I \simeq \int\_{(\xi\_a,\xi\_b,\xi\_c)} \sum\_{i=a,b,c} \cot(\theta\_i)\, d\xi\_i. \tag{46}$$

#### *6.4. Discrete Energy via Weighted Barycentric Coordinates*

We will now show that the energy expression of (46) can be used to determine the energy of each element. To that end, we restrict ourselves here to the two-dimensional case, and thus we first prove that the integrand of (46), which can be identified with the differential form

$$\omega = \sum\_{i=a,b,c} \cot(\theta\_i) d\xi\_i = \cot(\theta\_a) d\xi\_a + \cot(\theta\_b) d\xi\_b + \cot(\theta\_c) d\xi\_c \tag{47}$$

is a closed 1-form, see in [12–14]. The proof can be based on observing that *dω* can be written as

$$\begin{split} d\omega &= \left[ \frac{\partial \cot(\theta\_b)}{\partial \xi\_a} - \frac{\partial \cot(\theta\_a)}{\partial \xi\_b} \right] d\xi\_a \wedge d\xi\_b \\ &+ \left[ \frac{\partial \cot(\theta\_c)}{\partial \xi\_b} - \frac{\partial \cot(\theta\_b)}{\partial \xi\_c} \right] d\xi\_b \wedge d\xi\_c \\ &+ \left[ \frac{\partial \cot(\theta\_a)}{\partial \xi\_c} - \frac{\partial \cot(\theta\_c)}{\partial \xi\_a} \right] d\xi\_c \wedge d\xi\_a. \end{split} \tag{48}$$

Showing that *dω* = 0 is then straightforward using (45).

Finally, the differential form defined in (46) is exact, and thus the integration is path-independent; it therefore completely defines an energy functional for the system.

#### **7. Summary and Conclusions**

In this work, a geometric formulation of nonlinear elasticity, based on a discrete energy functional, is presented. In the first step, the discrete energy of tetrahedral elements has been formulated through a map between the initial and current positions of their vertices and the use of standard barycentric coordinates. The energy functional of the resulting boundary value problem has been minimised using an energy-consistent technique for constrained systems, where the constraints are enforced by Lagrange multipliers and are eliminated via pre-multiplication of the discrete equations of motion by a discrete null-space matrix of the constraint gradient. Although not tested specifically here, the convergence of the proposed scheme should inherit the convergence of the well-known methods for solving constrained systems by utilising the discrete null-space method. The numerical examples presented, a cantilever beam and a cube with a spherical hole, demonstrate the capabilities of the approach.

Because the use of standard barycentric coordinates is restricted to three-dimensional convex polytopes, a natural extension of the existing definitions to discrete domains consisting of arbitrary polytopes has been proposed. This opens the possibility of analysing domains with arbitrary cell shapes, including non-convex cells that possess specific microstructural features. The proposed technique is presently suited to conservative problems, i.e., linear and nonlinear elasticity, but it can be extended to dissipative systems, provided that the dissipation mechanism is given in terms of the deformations and stresses calculated in this work. Such extensions are the subject of future work.

**Author Contributions:** Conceptualization, O.K.; methodology, O.K. and A.P.J.; investigation, O.K., P.B. and A.P.J.; writing—original draft preparation, O.K. and A.P.J.; writing—review & editing, O.K., P.B. and A.P.J.; visualization, O.K. and P.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Engineering and Physical Sciences Research Council (EPSRC) UK grant number EP/N026136/1.

**Acknowledgments:** The authors wish to acknowledge the support of the Engineering and Physical Sciences Research Council (EPSRC) UK via grant EP/N026136/1 "Geometric Mechanics of Solids".

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Principle of Stationary Action in Elasticity**

In order to derive the equations that describe the deformation of the material, we consider its energy per unit volume *h* as a function of the points of the deformed configuration *x<sup>i</sup>* and the relative displacements of neighbouring points *∂x<sup>i</sup>*/*∂X<sup>j</sup>*, i.e.,

$$h \equiv h \left( \mathbf{x}\_i, \frac{\partial \mathbf{x}\_i}{\partial X\_j} \right), \tag{A1}$$

for more details see [3]. The total energy of the body can be then calculated as the volume integral

$$\mathcal{H} = \int h\, dV \tag{A2}$$

or

$$\mathcal{H} = \iiint h \left( \mathbf{x}\_i, \frac{\partial \mathbf{x}\_i}{\partial X\_j} \right) dX\_1\, dX\_2\, dX\_3, \qquad i, j = 1, 2, 3. \tag{A3}$$

The principle of stationary action states that this functional must be stationary for the true deformation of the body, i.e., *δH* = 0, which can be interpreted as body points under deformation move so as to minimise this energy. Calculating the variation of the energy functional leads to the Euler–Lagrange equations [1]

$$\frac{d}{dX\_j} \frac{\partial \mathcal{H}}{\partial (\partial x\_i / \partial X\_j)} - \frac{\partial \mathcal{H}}{\partial x\_i} = 0, \qquad i, j = 1, 2, 3. \tag{A4}$$

The latter three equations are the differential equations of elasticity, applicable to both infinitesimal and finite deformations.

The total energy (A3) can be represented as a sum of an energy associated with deformation *H <sup>D</sup>* and an energy associated with body forces *H F* , where the former is a function of relative positions only

$$\mathcal{H}^D \equiv \mathcal{H}^D \left(\frac{\partial \mathbf{x}\_i}{\partial X\_j}\right)\_{\prime} \tag{A5}$$

and the latter is a function of absolute positions only [1,4]

$$\mathcal{H}^F \equiv \mathcal{H}^F(\mathbf{x}\_i).\tag{A6}$$

Importantly, *H <sup>D</sup>* must be invariant under coordinate translations and rotations, leading to a requirement for the deformation energy to be a function of some combination of the principal invariants of the gradient of *ϕ* defined by (3), see also [1–4].

#### **Appendix B. Boundary Conditions**

The boundary conditions for Equation (A4) are Dirichlet for prescribed (known) displacements and Neumann for prescribed (known) forces/tractions. The application of the Dirichlet boundary conditions is straightforward in our formulation, as the primal unknowns are the nodal displacements. For the Neumann boundary conditions, we derive the following.

The equilibrium of the entire domain is expressed by [1–3,6]

$$\mathbb{F} = \int\_{\Omega} f dV = -\int\_{\partial\Omega} t d\mathbb{S} = \mathbb{T} \tag{A7}$$

which shows that the resultant body force F must be equal and opposite to the resultant T of the applied forces on the boundary.

With the energy definition of Section 2 for a deformation of a body, the internal forces can be defined as the negative gradient of the energy, i.e.,

$$\mathbb{F} = -\nabla\_{\mathbf{x}} \mathcal{H} \tag{A8}$$

or

$$F\_i = -\iiint \frac{\partial \mathcal{H}}{\partial \mathbf{x}\_i} d\mathbf{X}\_1 d\mathbf{X}\_2 d\mathbf{X}\_3 \tag{A9}$$

which, when using (A4) becomes

$$F\_i = -\iiint \frac{d}{dX\_j} \frac{\partial \mathcal{H}}{\partial (\partial x\_i / \partial X\_j)} dX\_1\, dX\_2\, dX\_3. \tag{A10}$$

If we further apply the divergence theorem, we have an expression for the *i*-th component of the force

$$F\_i = -\int \frac{\partial \mathcal{H}}{\partial \left(\partial \mathbf{x}\_i / \partial \mathbf{X}\_j\right)} dA\_j \tag{A11}$$

at a surface defined by its area vector *dA<sup>j</sup>*. The balance of these forces with the surface forces as in (A7) leads to an expression for the surface forces that uses the definition of the energy per unit volume, i.e.,

$$dT\_i = \frac{\partial \mathcal{H}}{\partial \left(\partial x\_i / \partial X\_j\right)} dA\_j. \tag{A12}$$

This expression for the Neumann boundary conditions is used in the current work (see also [6]).

#### **Appendix C. Navier–Cauchy Equations of Linear Elasticity**

Following the work in [6], we consider the displacement field <sup>u</sup> <sup>=</sup> <sup>x</sup> <sup>−</sup> <sup>X</sup> and define the symmetric part of its gradient by

$$u\_{ik} = \frac{1}{2} \left( \frac{\partial u\_i}{\partial X\_k} + \frac{\partial u\_k}{\partial X\_i} \right). \tag{A13}$$

The energy of (A3) can be considered as

$$\mathcal{H} = \mathcal{H}^d = \mathcal{H}^d \Big|\_{0} + \frac{\lambda}{2} u\_{ii}^2 + \mu u\_{ik}^2 \tag{A14}$$

where *H d* |<sup>0</sup> corresponds to no change of energy state, thus no deformation. The parameters *λ* and *µ* are the Lamé parameters that can be calculated as [6]:

$$\begin{split} \lambda &=& 4\left(\frac{\partial^2 \mathcal{H}^d}{\partial \mathbb{I}\_1 \partial \mathbb{I}\_1}\bigg|\_{0}\right) + 16\left(\frac{\partial^2 \mathcal{H}^d}{\partial \mathbb{I}\_2 \partial \mathbb{I}\_1}\bigg|\_{0}\right) + 4\left(\frac{\partial^2 \mathcal{H}^d}{\partial \mathbb{I}\_3 \partial \mathbb{I}\_1}\bigg|\_{0}\right) + 16\left(\frac{\partial^2 \mathcal{H}^d}{\partial \mathbb{I}\_2 \partial \mathbb{I}\_2}\bigg|\_{0}\right) \\ &+ 8\left(\frac{\partial^2 \mathcal{H}^d}{\partial \mathbb{I}\_3 \partial \mathbb{I}\_2}\bigg|\_{0}\right) + \left(\frac{\partial^2 \mathcal{H}^d}{\partial \mathbb{I}\_3 \partial \mathbb{I}\_3}\bigg|\_{0}\right) - 2\frac{\partial \mathcal{H}^d}{\partial \mathbb{I}\_1} \end{split} \tag{A15}$$

and

$$\mu = 2 \left( \frac{\partial \mathcal{H}^d}{\partial \mathbb{I}\_1} + \frac{\partial \mathcal{H}^d}{\partial \mathbb{I}\_2} \right) \bigg|\_{0} \tag{A16}$$

Using the energy of (A14) in the Euler–Lagrange Equation (A4), together with the definitions of the Lamé constants (A15) and (A16), we obtain the equilibrium equations for linear isotropic elastic materials (with no gravitational body force)

$$
\mu \frac{\partial^2 u\_i}{\partial X\_k \partial X\_k} + (\lambda + \mu) \frac{\partial^2 u\_k}{\partial X\_i \partial X\_k} = 0,\tag{A17}
$$

which can be written in terms of Young's modulus

$$E = \frac{\mu(3\lambda + 2\mu)}{\lambda + \mu} \tag{A18}$$

and Poisson's ratio

$$\nu = \frac{\lambda}{2(\lambda + \mu)}\tag{A19}$$

as

$$\frac{E}{2(1+\nu)}\frac{\partial^2 u\_i}{\partial X\_k \partial X\_k} + \frac{E}{2(1+\nu)(1-2\nu)}\frac{\partial^2 u\_k}{\partial X\_i \partial X\_k} = 0. \tag{A20}$$

Equations (A17) and (A20) are the Navier–Cauchy equilibrium equations for linear elasticity.
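The conversion between the Lamé parameters and the engineering constants used in Equations (A18)–(A20) is easy to implement; a minimal sketch (the inversion formulas are standard, not taken from this paper) using the beam material of Section 5.1:

```python
def lame_from_E_nu(E, nu):
    """Lamé parameters (lambda, mu) from Young's modulus E and Poisson's
    ratio nu; this inverts Equations (A18) and (A19)."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return lam, mu

def E_nu_from_lame(lam, mu):
    """Equations (A18) and (A19): E = mu(3 lam + 2 mu)/(lam + mu),
    nu = lam / (2 (lam + mu))."""
    E = mu * (3 * lam + 2 * mu) / (lam + mu)
    nu = lam / (2 * (lam + mu))
    return E, nu

# Round-trip on the cantilever material: E = 3e7 kPa, nu = 0.3
lam, mu = lame_from_E_nu(3e7, 0.3)
E, nu = E_nu_from_lame(lam, mu)
assert abs(E - 3e7) < 1e-6 * 3e7 and abs(nu - 0.3) < 1e-12
```

Note that (A19) requires *ν* < 0.5 (and *λ* + *µ* ≠ 0); the incompressible limit makes *λ* unbounded.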

#### **References**


## *Article* **Variational Problems with Time Delay and Higher-Order Distributed-Order Fractional Derivatives with Arbitrary Kernels**

**Fátima Cruz † , Ricardo Almeida \*,† and Natália Martins †**

> Center for Research and Development in Mathematics and Applications, Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal; fatima.cruz@live.ua.pt (F.C.); natalia@ua.pt (N.M.) **\*** Correspondence: ricardo.almeida@ua.pt

† These authors contributed equally to this work.

**Abstract:** In this work, we study variational problems with time delay and higher-order distributed-order fractional derivatives dealing with a new fractional operator. This fractional derivative combines two known operators: distributed-order derivatives and derivatives with respect to another function. The main results of this paper are necessary and sufficient optimality conditions for different types of variational problems. Since we are dealing with generalized fractional derivatives, from this work, some well-known results can be obtained as particular cases.

**Keywords:** fractional calculus; calculus of variations; Euler–Lagrange equations; isoperimetric problems; holonomic problems; higher-order derivatives

#### **1. Introduction**

Fractional calculus is a mathematical area that deals with the generalization of the classical notions of derivative and integral to a noninteger order. This fascinating theory has attracted the interest of the scientific community over the last few decades due to the fact that it is a powerful tool to deal with the dynamics of complex systems. Its importance is notable not only in Mathematics but also in Physics [1], Chemistry [2], Biology [3], Epidemiology [4], Control Theory [5], etc. (for completeness, we also point out that partial differential equations from classical calculus properly fit in the modeling of real problems; see, for instance, Refs. [6–8] for models from mathematical biology).

Since the beginning of fractional calculus in 1695, numerous definitions of fractional integrals and derivatives were introduced by important mathematicians such as Leibniz, Euler, Fourier, Liouville, Riemann, Letnikov, etc. Many of these fractional derivatives can be related to each other by an explicit formula [9,10]. Later on, in 1969, Caputo introduced the distributed-order fractional integrals and derivatives [11,12]. These operators can be seen as a new kind of generalization of the classical fractional operators, since they involve a weighted integral over different orders of differentiation. Another way to generalize the classical fractional operators is to consider the notions of fractional integrals and derivatives with respect to another function [9,13,14].

A specificity of fractional calculus that may be considered the cause of its success in applications to real-world problems is that the large number of fractional operators allows researchers to choose the most suitable one to model the problem under investigation.

In the recent paper [15], the authors introduced new notions of fractional derivatives combining the distributed-order derivatives and fractional derivatives with respect to an arbitrary smooth function, creating a new type of derivatives: distributed-order fractional derivatives with arbitrary kernels. In this paper, we are going to deal with these kinds of generalized fractional derivatives in order to study different types of problems of the calculus of variations.

**Citation:** Cruz, F.; Almeida, R.; Martins, N. Variational Problems with Time Delay and Higher-Order Distributed-Order Fractional Derivatives with Arbitrary Kernels. *Mathematics* **2021**, *9*, 1665. https:// doi.org/10.3390/math9141665

Academic Editor: Christopher Goodrich

Received: 29 June 2021 Accepted: 14 July 2021 Published: 15 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The fractional calculus of variations was initiated by Riewe in 1996 [16,17] with the deduction of the Euler–Lagrange equation for problems where the Lagrangian depends on Riemann–Liouville fractional derivatives in order to deal with linear non-conservative forces. Since then, several authors have developed the fractional calculus of variations considering different types of fractional derivatives and different types of variational problems (see, e.g., [18–23] and references therein). For more details on fractional calculus of variations, we refer to the books [24–26].

It is well known that, in real world problems, delays are important to model certain processes and dynamical systems [22,27,28]. However, there are still few works in the literature dedicated to the fractional calculus of variations with time delay. To fill this gap, we will study in this paper time-delayed variational problems involving distributed-order fractional derivatives with arbitrary smooth kernels. We will also study variational problems involving higher-order distributed-order fractional derivatives with arbitrary smooth kernels.

The paper is organized as follows: in Section 2, we recall the new concepts of distributed-order fractional derivatives with respect to another function recently introduced in [15] and then we proceed with the extension to the higher-order case. We finalize Section 2 with the proof of the integration by parts formulae involving the higher-order distributed-order fractional derivatives with arbitrary smooth kernels. Section 3 is devoted to the main results of this paper: necessary and sufficient optimality conditions for variational problems with time delay and higher-order distributed-order fractional derivatives with arbitrary smooth kernels. In Section 4, we present three examples that illustrate the applicability of some of our main results. We finalize the paper with concluding remarks and also mention some possibilities for future research.

#### **2. Preliminaries and Notations**

We assume that the reader is familiar with the definitions and properties of the Riemann– Liouville and Caputo fractional operators with respect to another function (cf. [9,13], resp.).

In this paper, we consider variational problems involving the new concepts of distributed-order fractional derivatives with respect to an arbitrary smooth kernel recently introduced in [15]. For the reader's convenience, we recall here the definitions introduced in [15].

Let *φ* : [0, 1] → [0, 1] be a continuous function such that

$$\int\_0^1 \phi(\alpha)d\alpha > 0.$$

**Definition 1** ([15])**.** *Let $x : [a,b] \to \mathbb{R}$ be an integrable function and $\psi \in C^1([a,b],\mathbb{R})$ be an increasing function such that $\psi'(t) \neq 0$ for all $t \in [a,b]$. The left and right Riemann–Liouville distributed-order fractional derivatives of a function x with respect to ψ are defined by:*

$$\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi}x(t) := \int_{0}^{1} \phi(\alpha)\,\mathrm{D}_{a^{+}}^{\alpha,\psi}x(t)\,d\alpha \quad \text{and} \quad \mathrm{D}_{b^{-}}^{\phi(\alpha),\psi}x(t) := \int_{0}^{1} \phi(\alpha)\,\mathrm{D}_{b^{-}}^{\alpha,\psi}x(t)\,d\alpha,$$

*where* $\mathrm{D}_{a^{+}}^{\alpha,\psi}$ *and* $\mathrm{D}_{b^{-}}^{\alpha,\psi}$ *are the left and right ψ-Riemann–Liouville fractional derivatives of order α, respectively.*

**Definition 2** ([15])**.** *Let $x, \psi \in C^1([a,b],\mathbb{R})$ be two functions such that ψ is increasing and $\psi'(t) \neq 0$ for all $t \in [a,b]$. The left and right Caputo distributed-order fractional derivatives of x with respect to ψ are defined by*

$${}^{C}\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi}x(t) := \int_0^1 \phi(\alpha)\,{}^{C}\mathrm{D}_{a^{+}}^{\alpha,\psi}x(t)\,d\alpha \quad \text{and} \quad {}^{C}\mathrm{D}_{b^{-}}^{\phi(\alpha),\psi}x(t) := \int_0^1 \phi(\alpha)\,{}^{C}\mathrm{D}_{b^{-}}^{\alpha,\psi}x(t)\,d\alpha,$$

*where* ${}^{C}\mathrm{D}_{a^{+}}^{\alpha,\psi}$ *and* ${}^{C}\mathrm{D}_{b^{-}}^{\alpha,\psi}$ *are the left and right ψ-Caputo fractional derivatives of order α, respectively.*
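As a quick numerical illustration of Definition 2 (a sketch of ours, not taken from [15]): with the kernel $\psi(t) = t$, the ψ-Caputo derivative reduces to the classical Caputo derivative, and for $x(t) = t$ one has ${}^{C}\mathrm{D}_{0^{+}}^{\alpha}x(t) = t^{1-\alpha}/\Gamma(2-\alpha)$. Choosing the constant weight $\phi \equiv 1$ (which satisfies $\int_0^1 \phi(\alpha)\,d\alpha > 0$), the distributed-order derivative becomes a single integral over α, here approximated by a midpoint rule. The function name is ours, chosen for the example.

```python
from math import gamma

def distributed_caputo_of_identity(t: float, n: int = 2000) -> float:
    """Distributed-order Caputo derivative of x(t) = t with kernel
    psi(t) = t and weight phi(alpha) = 1 on [0, 1]:

        D x(t) = integral_0^1  t**(1 - alpha) / Gamma(2 - alpha)  d(alpha),

    using the closed form of the classical Caputo derivative of x(t) = t
    and a midpoint quadrature rule with n subintervals."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        alpha = (k + 0.5) * h  # midpoint of the k-th subinterval
        total += t ** (1.0 - alpha) / gamma(2.0 - alpha)
    return total * h

# At t = 1 the integrand is 1 / Gamma(2 - alpha), which lies between 1
# and about 1.13 for alpha in (0, 1), so the integral must as well.
value = distributed_caputo_of_identity(1.0)
assert 1.0 < value < 1.14
```

Bounds of this kind give a cheap consistency check for any discretization of these operators.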

Now, we will extend the definitions introduced in [15] to the higher-order case. In the following, we assume that *n* ∈ N and *φ* : [*n* − 1, *n*] → [0, 1] is a continuous function such that

$$\int\_{n-1}^{n} \phi(\alpha) d\alpha > 0.$$

To the best of our knowledge, this is the first work that deals with higher-order distributed-order fractional derivatives.

**Definition 3.** *Let $x : [a,b] \to \mathbb{R}$ be an integrable function and $\psi \in C^n([a,b],\mathbb{R})$ be an increasing function such that $\psi'(t) \neq 0$ for all $t \in [a,b]$. The left and right Riemann–Liouville distributed-order fractional derivatives of a function x with respect to the kernel ψ are defined by:*

$$\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi}x(t) := \int_{n-1}^{n} \phi(\alpha)\,\mathrm{D}_{a^{+}}^{\alpha,\psi}x(t)\,d\alpha \quad \text{and} \quad \mathrm{D}_{b^{-}}^{\phi(\alpha),\psi}x(t) := \int_{n-1}^{n} \phi(\alpha)\,\mathrm{D}_{b^{-}}^{\alpha,\psi}x(t)\,d\alpha,$$

*where* $\mathrm{D}_{a^{+}}^{\alpha,\psi}$ *and* $\mathrm{D}_{b^{-}}^{\alpha,\psi}$ *are the left and right ψ-Riemann–Liouville fractional derivatives of order $\alpha \in [n-1,n]$, respectively.*

**Definition 4.** *Let $x, \psi \in C^n([a,b],\mathbb{R})$ be two functions such that ψ is increasing and $\psi'(t) \neq 0$ for all $t \in [a,b]$. The left and right Caputo distributed-order fractional derivatives of x with respect to ψ are defined by*

$${}^{C}\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi}x(t) := \int_{n-1}^{n} \phi(\alpha)\,{}^{C}\mathrm{D}_{a^{+}}^{\alpha,\psi}x(t)\,d\alpha \quad \text{and} \quad {}^{C}\mathrm{D}_{b^{-}}^{\phi(\alpha),\psi}x(t) := \int_{n-1}^{n} \phi(\alpha)\,{}^{C}\mathrm{D}_{b^{-}}^{\alpha,\psi}x(t)\,d\alpha,$$

*where* ${}^{C}\mathrm{D}_{a^{+}}^{\alpha,\psi}$ *and* ${}^{C}\mathrm{D}_{b^{-}}^{\alpha,\psi}$ *are the left and right ψ-Caputo fractional derivatives of order $\alpha \in [n-1,n]$, respectively.*

In the following, we use the notations

$$\mathrm{I}_{a^{+}}^{n-\phi(\alpha),\psi}x(t) := \int_{n-1}^{n} \phi(\alpha)\,\mathrm{I}_{a^{+}}^{n-\alpha,\psi}x(t)\,d\alpha \quad \text{and} \quad \mathrm{I}_{b^{-}}^{n-\phi(\alpha),\psi}x(t) := \int_{n-1}^{n} \phi(\alpha)\,\mathrm{I}_{b^{-}}^{n-\alpha,\psi}x(t)\,d\alpha,$$

where $\mathrm{I}_{a^{+}}^{n-\alpha,\psi}$ and $\mathrm{I}_{b^{-}}^{n-\alpha,\psi}$ are, respectively, the left and right Riemann–Liouville fractional integrals of order $n-\alpha$ with respect to the kernel *ψ*. In addition, we fix two functions *φ* and *ψ* satisfying the assumptions above. In order to simplify notation, we will use the abbreviated symbol

$$
x_{\psi}^{[m]}(t) := \left(\frac{1}{\psi'(t)} \frac{d}{dt}\right)^m x(t).
$$
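As an illustration of this notation (our example, not from the original sources): for the kernel $\psi(t) = \ln t$, which is increasing with $\psi'(t) = 1/t \neq 0$ on any interval $[a,b]$ with $a > 0$, the operator becomes the Euler differential operator:

$$x_{\psi}^{[1]}(t) = t\,x'(t), \qquad x_{\psi}^{[2]}(t) = t\,\frac{d}{dt}\bigl(t\,x'(t)\bigr) = t\,x'(t) + t^{2}\,x''(t).$$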

Next, we prove the integration by parts formulae, which are fundamental tools for the proofs of our main results. In our previous work ([15], Theorem 3.1), we proved a similar result when the fractional order is between 0 and 1. In this paper, we present a generalization of that result for the case when the function *φ* is defined on the interval [*n* − 1, *n*].

**Theorem 1** (Integration by parts formulae)**.** *Let $x : [a,b] \to \mathbb{R}$ be a continuous function and $y \in C^n([a,b],\mathbb{R})$. Then,*

$$\begin{aligned}
\int_a^b x(t)\,{}^{C}\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi} y(t)\,dt &= \int_a^b \left( \mathrm{D}_{b^{-}}^{\phi(\alpha),\psi} \frac{x(t)}{\psi'(t)} \right) \psi'(t)\, y(t)\, dt \\
&\quad + \left[ \sum_{k=0}^{n-1} \left( \frac{-1}{\psi'(t)} \frac{d}{dt} \right)^{k} \left( \mathrm{I}_{b^{-}}^{n-\phi(\alpha),\psi} \frac{x(t)}{\psi'(t)} \right) y_{\psi}^{[n-k-1]}(t) \right]_{t=a}^{t=b}
\end{aligned}$$

*and*

$$\begin{aligned}
\int_{a}^{b} x(t)\,{}^{C}\mathrm{D}_{b^{-}}^{\phi(\alpha),\psi} y(t)\,dt &= \int_{a}^{b} \left( \mathrm{D}_{a^{+}}^{\phi(\alpha),\psi} \frac{x(t)}{\psi'(t)} \right) \psi'(t)\, y(t)\, dt \\
&\quad + \left[ \sum_{k=0}^{n-1} (-1)^{n-k} \left( \frac{1}{\psi'(t)} \frac{d}{dt} \right)^{k} \left( \mathrm{I}_{a^{+}}^{n-\phi(\alpha),\psi} \frac{x(t)}{\psi'(t)} \right) y_{\psi}^{[n-k-1]}(t) \right]_{t=a}^{t=b}.
\end{aligned}$$

**Proof.** Using the definition of the left *ψ*-Caputo distributed-order fractional derivative, we have

$$\begin{aligned}
\int_{a}^{b} x(t)\,{}^{C}\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi} y(t)\,dt &= \int_{a}^{b} x(t) \int_{n-1}^{n} \phi(\alpha)\,{}^{C}\mathrm{D}_{a^{+}}^{\alpha,\psi} y(t)\,d\alpha\,dt \\
&= \int_{a}^{b} x(t) \int_{n-1}^{n} \frac{\phi(\alpha)}{\Gamma(n-\alpha)} \int_{a}^{t} \left( \frac{1}{\psi'(s)} \frac{d}{ds} \right)^{n} y(s)\,(\psi(t)-\psi(s))^{n-\alpha-1}\psi'(s)\,ds\,d\alpha\,dt \\
&= \int_{a}^{b} x(t) \int_{n-1}^{n} \frac{\phi(\alpha)}{\Gamma(n-\alpha)} \int_{a}^{t} \left( \frac{1}{\psi'(s)} \frac{d}{ds} \right) y_{\psi}^{[n-1]}(s)\,(\psi(t)-\psi(s))^{n-\alpha-1}\psi'(s)\,ds\,d\alpha\,dt \\
&= \int_{n-1}^{n} \frac{\phi(\alpha)}{\Gamma(n-\alpha)} \int_{a}^{b} x(t) \int_{a}^{t} \frac{d}{ds} y_{\psi}^{[n-1]}(s)\,(\psi(t)-\psi(s))^{n-\alpha-1}\,ds\,dt\,d\alpha.
\end{aligned}$$

Applying Dirichlet's formula, we get

$$\begin{aligned}
&\int_{n-1}^{n} \frac{\phi(\alpha)}{\Gamma(n-\alpha)} \int_{a}^{b} x(t) \int_{a}^{t} \frac{d}{ds} y_{\psi}^{[n-1]}(s)\,(\psi(t)-\psi(s))^{n-\alpha-1}\,ds\,dt\,d\alpha \\
&= \int_{n-1}^{n} \frac{\phi(\alpha)}{\Gamma(n-\alpha)} \int_{a}^{b} \frac{d}{ds} y_{\psi}^{[n-1]}(s) \int_{s}^{b} x(t)\,(\psi(t)-\psi(s))^{n-\alpha-1}\,dt\,ds\,d\alpha.
\end{aligned}$$

Integrating by parts, we have

$$\begin{aligned}
&\int_{a}^{b} \frac{d}{ds} y_{\psi}^{[n-1]}(s) \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt\,ds \\
&= \left[ \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \cdot y_{\psi}^{[n-1]}(s) \right]_{s=a}^{s=b} \\
&\quad - \int_{a}^{b} y_{\psi}^{[n-1]}(s)\, \frac{d}{ds}\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) ds \\
&= \left[ \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \cdot y_{\psi}^{[n-1]}(s) \right]_{s=a}^{s=b} \\
&\quad + \int_{a}^{b} \left( \frac{-1}{\psi'(s)} \frac{d}{ds} \right)\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot \frac{d}{ds} y_{\psi}^{[n-2]}(s)\,ds.
\end{aligned}$$

Using integration by parts in the last integral, we obtain

$$\begin{aligned}
&\int_{a}^{b} \left( \frac{-1}{\psi'(s)} \frac{d}{ds} \right)\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot \frac{d}{ds} y_{\psi}^{[n-2]}(s)\,ds \\
&= \left[ \left( \frac{-1}{\psi'(s)} \frac{d}{ds} \right)\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot y_{\psi}^{[n-2]}(s) \right]_{s=a}^{s=b} \\
&\quad - \int_{a}^{b} y_{\psi}^{[n-2]}(s)\, \frac{d}{ds}\left( \frac{-1}{\psi'(s)} \frac{d}{ds} \right)\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) ds \\
&= \left[ \left( \frac{-1}{\psi'(s)} \frac{d}{ds} \right)\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot y_{\psi}^{[n-2]}(s) \right]_{s=a}^{s=b} \\
&\quad + \int_{a}^{b} \left( \frac{1}{\psi'(s)} \frac{d}{ds} \right)^{2}\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot \frac{d}{ds} y_{\psi}^{[n-3]}(s)\,ds.
\end{aligned}$$

Since

$$\begin{aligned}
&\int_{a}^{b} \left( \frac{1}{\psi'(s)} \frac{d}{ds} \right)^{2}\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot \frac{d}{ds} y_{\psi}^{[n-3]}(s)\,ds \\
&= \left[ \left( \frac{1}{\psi'(s)} \frac{d}{ds} \right)^{2}\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot y_{\psi}^{[n-3]}(s) \right]_{s=a}^{s=b} \\
&\quad - \int_{a}^{b} y_{\psi}^{[n-3]}(s)\, \frac{d}{ds}\left[ \left( \frac{1}{\psi'(s)} \frac{d}{ds} \right)^{2}\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \right] ds \\
&= \left[ \left( \frac{1}{\psi'(s)} \frac{d}{ds} \right)^{2}\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot y_{\psi}^{[n-3]}(s) \right]_{s=a}^{s=b} \\
&\quad + \int_{a}^{b} \left( \frac{-1}{\psi'(s)} \frac{d}{ds} \right)^{3}\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot \frac{d}{ds} y_{\psi}^{[n-4]}(s)\,ds,
\end{aligned}$$

then we get

$$\begin{aligned}
&\int_{a}^{b} \frac{d}{ds} y_{\psi}^{[n-1]}(s) \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt\,ds \\
&= \left[ \sum_{k=0}^{2} \left( \frac{-1}{\psi'(s)} \frac{d}{ds} \right)^{k}\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot y_{\psi}^{[n-k-1]}(s) \right]_{s=a}^{s=b} \\
&\quad + \int_{a}^{b} \left( \frac{-1}{\psi'(s)} \frac{d}{ds} \right)^{3}\left( \int_{s}^{b} x(t)(\psi(t)-\psi(s))^{n-\alpha-1}\,dt \right) \cdot \frac{d}{ds} y_{\psi}^{[n-4]}(s)\,ds.
\end{aligned}$$

Repeating the process of integration by parts *n* − 3 more times, we prove the first formula. Using similar techniques, we deduce the integration by parts formula involving the operator ${}^{C}\mathrm{D}_{b^{-}}^{\phi(\alpha),\psi}$.
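For orientation (our remark), specializing Theorem 1 to $n = 1$, the boundary sum has only the $k = 0$ term and the result reduces to the first-order integration by parts formula of ([15], Theorem 3.1):

$$\int_a^b x(t)\,{}^{C}\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi} y(t)\,dt = \int_a^b \left( \mathrm{D}_{b^{-}}^{\phi(\alpha),\psi} \frac{x(t)}{\psi'(t)} \right) \psi'(t)\, y(t)\, dt + \left[ \left( \mathrm{I}_{b^{-}}^{1-\phi(\alpha),\psi} \frac{x(t)}{\psi'(t)} \right) y(t) \right]_{t=a}^{t=b}.$$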

#### **3. Main Results**

*3.1. Variational Problems with Time Delay*

We begin this section by studying variational problems involving distributed-order fractional derivatives with time delay. For clarity of presentation, we restrict ourselves to the case where $\alpha \in [0,1]$, that is, we consider the definitions introduced in [15].

Consider two continuous functions $\phi, \varphi : [0,1] \to [0,1]$ satisfying the following conditions:

$$\int_0^1 \phi(\alpha)\,d\alpha > 0 \qquad \text{and} \qquad \int_0^1 \varphi(\alpha)\,d\alpha > 0.$$

In what follows, *a*, *b* ∈ R are such that *a* < *b* and *τ* is a fixed real number satisfying the condition 0 ≤ *τ* < *b* − *a*.

We are now in a position to present the first problem under study.

**Problem 1** ((*Pτ*))**.** *Determine a curve $x \in C^1([a-\tau,b],\mathbb{R})$, subject to $x(t) = \mu(t)$ for all $t \in [a-\tau,a]$, where $\mu \in C^1([a-\tau,a],\mathbb{R})$ is a given initial function, that minimizes or maximizes the following functional:*

$$\mathcal{J}(x) := \int_{a}^{b} L\Big(t, x(t), x(t-\tau), {}^{C}\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi}x(t), {}^{C}\mathrm{D}_{b^{-}}^{\varphi(\alpha),\psi}x(t)\Big)\,dt,\tag{1}$$

*where $L : [a,b] \times \mathbb{R}^4 \to \mathbb{R}$ is assumed to be continuously differentiable with respect to the second, third, fourth, and fifth variables. We will consider the variational problem (Pτ) with and without fixed terminal boundary condition, and also with isoperimetric or holonomic constraints.*

Let us fix the following notation: by $\partial_i L$, we denote the partial derivative of *L* with respect to its *i*th coordinate, and

$$[x]_{\tau}(t) := \left(t, x(t), x(t-\tau), {}^{C}\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi}x(t), {}^{C}\mathrm{D}_{b^{-}}^{\varphi(\alpha),\psi}x(t)\right).$$

To simplify the presentation of our results, we consider the following conditions:

$$\begin{aligned}
C^{-}_{\phi}[H,i,b-\tau]:&\quad t \mapsto \left( \mathrm{D}_{(b-\tau)^{-}}^{\phi(\alpha),\psi} \frac{\partial_i H[x]_{\tau}}{\psi'} \right)(t) \text{ is continuous for all } t \in [a, b-\tau]; \\
C^{-}_{\phi}[H,i,b]:&\quad t \mapsto \left( \mathrm{D}_{b^{-}}^{\phi(\alpha),\psi} \frac{\partial_i H[x]_{\tau}}{\psi'} \right)(t) \text{ is continuous for all } t \in [b-\tau, b]; \\
C^{+}_{\varphi}[H,i,a]:&\quad t \mapsto \left( \mathrm{D}_{a^{+}}^{\varphi(\alpha),\psi} \frac{\partial_i H[x]_{\tau}}{\psi'} \right)(t) \text{ is continuous for all } t \in [a, b-\tau]; \\
C^{+}_{\varphi}[H,i,b-\tau]:&\quad t \mapsto \left( \mathrm{D}_{(b-\tau)^{+}}^{\varphi(\alpha),\psi} \frac{\partial_i H[x]_{\tau}}{\psi'} \right)(t) \text{ is continuous for all } t \in [b-\tau, b],
\end{aligned}$$

where *H* is a function and *i* ∈ N.

**Theorem 2** (Fractional Euler–Lagrange equations and natural boundary condition for problem (*Pτ*))**.** *Suppose that L satisfies the conditions $C^{-}_{\phi}[L,4,b-\tau]$, $C^{+}_{\varphi}[L,5,a]$, $C^{-}_{\phi}[L,4,b]$, and $C^{+}_{\varphi}[L,5,b-\tau]$. If $x \in C^1([a-\tau,b],\mathbb{R})$ is an extremizer of the functional $\mathcal{J}$, then x satisfies the following Euler–Lagrange equations:*

$$\begin{aligned}
&\partial_{2}L[x]_{\tau}(t) + \partial_{3}L[x]_{\tau}(t+\tau) + \left( \mathrm{D}_{(b-\tau)^{-}}^{\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) + \left( \mathrm{D}_{a^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \\
&\quad - \int_{0}^{1} \frac{\phi(\alpha)}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{b-\tau}^{b} (\psi(s)-\psi(t))^{-\alpha}\,\partial_{4}L[x]_{\tau}(s)\,ds\,d\alpha = 0, \quad \forall t \in [a, b-\tau]
\end{aligned}\tag{2}$$

*and*

$$\begin{aligned}
&\partial_{2}L[x]_{\tau}(t) + \left( \mathrm{D}_{b^{-}}^{\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) + \left( \mathrm{D}_{(b-\tau)^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \\
&\quad + \int_{0}^{1} \frac{\varphi(\alpha)}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{a}^{b-\tau} (\psi(t)-\psi(s))^{-\alpha}\,\partial_{5}L[x]_{\tau}(s)\,ds\,d\alpha = 0, \quad \forall t \in [b-\tau, b].
\end{aligned}\tag{3}$$

*If x*(*b*) *is free, then the following natural boundary condition holds:*

$$\mathrm{I}_{b^{-}}^{1-\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(b)}{\psi'(b)} = \mathrm{I}_{a^{+}}^{1-\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(b)}{\psi'(b)}.\tag{4}$$

**Proof.** Let $h \in C^1([a-\tau,b],\mathbb{R})$ be an arbitrary function such that $h(t) = 0$ for $a-\tau \leq t \leq a$. Define the function *j* by $j(\epsilon) := \mathcal{J}(x + \epsilon h)$, $\epsilon \in \mathbb{R}$. Since *x* is an extremizer of $\mathcal{J}$, $j'(0) = 0$, and we have that

$$\begin{aligned}
\int_{a}^{b} \Big( \partial_{2}L[x]_{\tau}(t) \cdot h(t) &+ \partial_{3}L[x]_{\tau}(t) \cdot h(t-\tau) + \partial_{4}L[x]_{\tau}(t) \cdot {}^{C}\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi}h(t) \\
&+ \partial_{5}L[x]_{\tau}(t) \cdot {}^{C}\mathrm{D}_{b^{-}}^{\varphi(\alpha),\psi}h(t) \Big)\,dt = 0.
\end{aligned}\tag{5}$$

Since

$$\int_{a}^{b} \partial_{3}L[x]_{\tau}(t) \cdot h(t-\tau)\,dt = \int_{a-\tau}^{a} \partial_{3}L[x]_{\tau}(t+\tau) \cdot h(t)\,dt + \int_{a}^{b-\tau} \partial_{3}L[x]_{\tau}(t+\tau) \cdot h(t)\,dt,$$

and $h(t) = 0$ for $t \in [a-\tau, a]$, we get

$$\int_{a}^{b} \partial_{3}L[x]_{\tau}(t) \cdot h(t-\tau)\,dt = \int_{a}^{b-\tau} \partial_{3}L[x]_{\tau}(t+\tau) \cdot h(t)\,dt.\tag{6}$$

Substituting (6) into (5), we get

$$\begin{aligned}
\int_{a}^{b-\tau} \big( \partial_{2}L[x]_{\tau}(t) &+ \partial_{3}L[x]_{\tau}(t+\tau) \big) \cdot h(t)\,dt + \int_{b-\tau}^{b} \partial_{2}L[x]_{\tau}(t) \cdot h(t)\,dt \\
&+ \int_{a}^{b} \left( \partial_{4}L[x]_{\tau}(t) \cdot {}^{C}\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi}h(t) + \partial_{5}L[x]_{\tau}(t) \cdot {}^{C}\mathrm{D}_{b^{-}}^{\varphi(\alpha),\psi}h(t) \right) dt = 0.
\end{aligned}\tag{7}$$

Note that, for all *t* ∈ [*a*, *b* − *τ*], we have

$$\begin{aligned}
\mathrm{D}_{b^{-}}^{\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(t)}{\psi'(t)} &= \mathrm{D}_{(b-\tau)^{-}}^{\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(t)}{\psi'(t)} \\
&\quad - \int_{0}^{1} \frac{\phi(\alpha)}{\Gamma(1-\alpha)} \left( \frac{1}{\psi'(t)} \frac{d}{dt} \right) \int_{b-\tau}^{b} (\psi(s)-\psi(t))^{-\alpha}\,\partial_{4}L[x]_{\tau}(s)\,ds\,d\alpha
\end{aligned}\tag{8}$$

and, for all *t* ∈ [*b* − *τ*, *b*], we have

$$\begin{aligned}
\mathrm{D}_{a^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(t)}{\psi'(t)} &= \mathrm{D}_{(b-\tau)^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(t)}{\psi'(t)} \\
&\quad + \int_{0}^{1} \frac{\varphi(\alpha)}{\Gamma(1-\alpha)} \left( \frac{1}{\psi'(t)} \frac{d}{dt} \right) \int_{a}^{b-\tau} (\psi(t)-\psi(s))^{-\alpha}\,\partial_{5}L[x]_{\tau}(s)\,ds\,d\alpha.
\end{aligned}\tag{9}$$

Using Theorem 1 and (8), we obtain

$$\begin{aligned}
\int_{a}^{b} \partial_{4}L[x]_{\tau}(t) \cdot {}^{C}\mathrm{D}_{a^{+}}^{\phi(\alpha),\psi}h(t)\,dt &= \int_{a}^{b-\tau} \left( \left( \mathrm{D}_{(b-\tau)^{-}}^{\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \right. \\
&\quad \left. - \int_{0}^{1} \frac{\phi(\alpha)}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{b-\tau}^{b} (\psi(s)-\psi(t))^{-\alpha}\,\partial_{4}L[x]_{\tau}(s)\,ds\,d\alpha \right) h(t)\,dt \\
&\quad + \int_{b-\tau}^{b} \left( \mathrm{D}_{b^{-}}^{\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t)\,h(t)\,dt + \left[ \left( \mathrm{I}_{b^{-}}^{1-\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(t)}{\psi'(t)} \right) h(t) \right]_{t=a}^{t=b}.
\end{aligned}\tag{10}$$

Once again, by Theorem 1 and (9), we obtain

$$\begin{aligned}
\int_{a}^{b} \partial_{5}L[x]_{\tau}(t) \cdot {}^{C}\mathrm{D}_{b^{-}}^{\varphi(\alpha),\psi}h(t)\,dt &= \int_{b-\tau}^{b} \left( \left( \mathrm{D}_{(b-\tau)^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \right. \\
&\quad \left. + \int_{0}^{1} \frac{\varphi(\alpha)}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{a}^{b-\tau} (\psi(t)-\psi(s))^{-\alpha}\,\partial_{5}L[x]_{\tau}(s)\,ds\,d\alpha \right) h(t)\,dt \\
&\quad + \int_{a}^{b-\tau} \left( \mathrm{D}_{a^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t)\,h(t)\,dt - \left[ \left( \mathrm{I}_{a^{+}}^{1-\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(t)}{\psi'(t)} \right) h(t) \right]_{t=a}^{t=b}.
\end{aligned}\tag{11}$$

Replacing (10) and (11) into (7), we get that

$$\begin{aligned}
&\int_{a}^{b-\tau} \Bigg( \partial_{2}L[x]_{\tau}(t) + \partial_{3}L[x]_{\tau}(t+\tau) + \left( \mathrm{D}_{(b-\tau)^{-}}^{\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \\
&\qquad - \int_{0}^{1} \frac{\phi(\alpha)}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{b-\tau}^{b} (\psi(s)-\psi(t))^{-\alpha}\,\partial_{4}L[x]_{\tau}(s)\,ds\,d\alpha + \left( \mathrm{D}_{a^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \Bigg)\,h(t)\,dt \\
&\quad + \int_{b-\tau}^{b} \Bigg( \partial_{2}L[x]_{\tau}(t) + \left( \mathrm{D}_{b^{-}}^{\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) + \left( \mathrm{D}_{(b-\tau)^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \\
&\qquad + \int_{0}^{1} \frac{\varphi(\alpha)}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{a}^{b-\tau} (\psi(t)-\psi(s))^{-\alpha}\,\partial_{5}L[x]_{\tau}(s)\,ds\,d\alpha \Bigg)\,h(t)\,dt \\
&\quad + \left[ \left( \mathrm{I}_{b^{-}}^{1-\phi(\alpha),\psi} \frac{\partial_{4}L[x]_{\tau}(t)}{\psi'(t)} \right) h(t) \right]_{t=a}^{t=b} - \left[ \left( \mathrm{I}_{a^{+}}^{1-\varphi(\alpha),\psi} \frac{\partial_{5}L[x]_{\tau}(t)}{\psi'(t)} \right) h(t) \right]_{t=a}^{t=b} = 0.
\end{aligned}\tag{12}$$

From the arbitrariness of *h*, we get the desired Equations (2)–(4).

Next, we consider the case where we add an isoperimetric constraint to problem (*Pτ*).

**Problem 2** ((*PI<sup>τ</sup>* ))**.** *The isoperimetric problem with a time delay τ can be formulated in the following way: minimize or maximize the functional* J *in* (1) *subject to an integral constraint of type*

$$\mathcal{Z}(x) := \int_{a}^{b} G[x]_{\tau}(t)\,dt = k,\tag{13}$$

*where $k \in \mathbb{R}$ is fixed and $G : [a,b] \times \mathbb{R}^4 \to \mathbb{R}$ is a continuously differentiable function with respect to the second, third, fourth, and fifth variables.*

The following theorem presents necessary conditions for *x* to be a solution of the fractional isoperimetric problem (*PI<sup>τ</sup>* ) under the assumption that *x* is not an extremal for *G*.

**Theorem 3** (Necessary optimality conditions for problem (*PI<sup>τ</sup>* )—Case I)**.** *Let $x \in C^1([a-\tau,b],\mathbb{R})$ be a curve such that $\mathcal{J}$ attains an extremum at x, when subject to the integral constraint* (13)*. Assume that x does not satisfy the Euler–Lagrange Equation* (2) *or* (3) *with respect to G. Moreover, suppose that L satisfies the conditions $C^{-}_{\phi}[L,4,b-\tau]$, $C^{+}_{\varphi}[L,5,a]$, $C^{-}_{\phi}[L,4,b]$, and $C^{+}_{\varphi}[L,5,b-\tau]$, and G satisfies the conditions $C^{-}_{\phi}[G,4,b-\tau]$, $C^{+}_{\varphi}[G,5,a]$, $C^{-}_{\phi}[G,4,b]$, and $C^{+}_{\varphi}[G,5,b-\tau]$. Then, there exists $\lambda \in \mathbb{R}$ such that x is a solution of the equations*

$$\begin{aligned}
&\partial_{2}H[x]_{\tau}(t) + \partial_{3}H[x]_{\tau}(t+\tau) + \left( \mathrm{D}_{(b-\tau)^{-}}^{\phi(\alpha),\psi} \frac{\partial_{4}H[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) + \left( \mathrm{D}_{a^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{5}H[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \\
&\quad - \int_{0}^{1} \frac{\phi(\alpha)}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{b-\tau}^{b} (\psi(s)-\psi(t))^{-\alpha}\,\partial_{4}H[x]_{\tau}(s)\,ds\,d\alpha = 0, \quad \forall t \in [a, b-\tau]
\end{aligned}\tag{14}$$

*and*

$$\begin{aligned}
&\partial_{2}H[x]_{\tau}(t) + \left( \mathrm{D}_{b^{-}}^{\phi(\alpha),\psi} \frac{\partial_{4}H[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) + \left( \mathrm{D}_{(b-\tau)^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{5}H[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \\
&\quad + \int_{0}^{1} \frac{\varphi(\alpha)}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{a}^{b-\tau} (\psi(t)-\psi(s))^{-\alpha}\,\partial_{5}H[x]_{\tau}(s)\,ds\,d\alpha = 0, \quad \forall t \in [b-\tau, b],
\end{aligned}\tag{15}$$

*where H* := *L* + *λG. If x*(*b*) *is free, then*

$$\mathrm{I}_{b^{-}}^{1-\phi(\alpha),\psi} \frac{\partial_{4}H[x]_{\tau}(b)}{\psi'(b)} = \mathrm{I}_{a^{+}}^{1-\varphi(\alpha),\psi} \frac{\partial_{5}H[x]_{\tau}(b)}{\psi'(b)}.\tag{16}$$

**Proof.** The proof follows from the ideas presented in Theorem 2 and Theorem 3.3 of [15].

Now, we present necessary optimality conditions for the case when the solution of the isoperimetric problem is an extremal for the fractional isoperimetric functional (13).

**Theorem 4** (Necessary optimality conditions for fractional problem (*PI<sup>τ</sup>* )—Case II)**.** *Let x be a curve such that $\mathcal{J}$ attains an extremum at x, when subject to the integral constraint* (13)*. Moreover, suppose that L satisfies the conditions $C^{-}_{\phi}[L,4,b-\tau]$, $C^{+}_{\varphi}[L,5,a]$, $C^{-}_{\phi}[L,4,b]$, and $C^{+}_{\varphi}[L,5,b-\tau]$, and G satisfies the conditions $C^{-}_{\phi}[G,4,b-\tau]$, $C^{+}_{\varphi}[G,5,a]$, $C^{-}_{\phi}[G,4,b]$, and $C^{+}_{\varphi}[G,5,b-\tau]$. Then, there exists a vector $(\lambda_0, \lambda) \in \mathbb{R}^2 \setminus \{(0,0)\}$ such that x is a solution of Equations* (14) *and* (15)*, with the Hamiltonian H defined as $H := \lambda_0 L + \lambda G$. If $x(b)$ is free, then x must satisfy Equation* (16)*.*

**Proof.** The result is an immediate consequence of Theorem 3.

In the following, we study variational problems with a holonomic constraint. For this purpose, we now assume that *x* is a two-dimensional vector function and $L : [a,b] \times \mathbb{R}^8 \to \mathbb{R}$ is assumed to be continuously differentiable with respect to the *i*th variable, with $i = 2, \ldots, 9$.

**Problem 3** ((*PC<sup>τ</sup>* ))**.** *Consider the variational problem* (*Pτ*) *but in the presence of a holonomic constraint:*

$$g(t, x(t)) = 0, \quad t \in [a, b],\tag{17}$$

*where $g : [a,b] \times \mathbb{R}^2 \to \mathbb{R}$ is a $C^1$ function. The state variable x is a two-dimensional vector function $x = (x_1, x_2)$, where $x_1, x_2 \in C^1([a-\tau,b],\mathbb{R})$. Moreover, the boundary condition*

$$x(t) = \mu(t), \quad t \in [a-\tau, a],\tag{18}$$

*where $\mu \in C^1([a-\tau,a],\mathbb{R}) \times C^1([a-\tau,a],\mathbb{R})$ is a given function, is imposed.*

**Theorem 5** (Necessary optimality conditions for problem (*PC<sup>τ</sup>* ))**.** *Consider the functional*

$$\mathcal{J}(x) = \int_{a}^{b} L[x]_{\tau}(t)\,dt,\tag{19}$$

*defined on $C^1([a-\tau,b],\mathbb{R}) \times C^1([a-\tau,b],\mathbb{R})$ and subject to the constraints* (17) *and* (18)*. Suppose that L satisfies the conditions $C^{-}_{\phi}[L,i+5,b-\tau]$, $C^{+}_{\varphi}[L,i+7,a]$, $C^{-}_{\phi}[L,i+5,b]$, and $C^{+}_{\varphi}[L,i+7,b-\tau]$, with $i = 1, 2$.*

*If x is an extremizer of functional* J *and if*

$$
\partial_3 g(t, x(t)) \neq 0, \quad \forall t \in [a, b],
$$

*then there exists a continuous function λ* : [*a*, *b*] → R *such that x is a solution of*

$$\begin{aligned}
&\partial_{i+1}L[x]_{\tau}(t) + \partial_{i+3}L[x]_{\tau}(t+\tau) + \left( \mathrm{D}_{(b-\tau)^{-}}^{\phi(\alpha),\psi} \frac{\partial_{i+5}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \\
&\quad + \left( \mathrm{D}_{a^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{i+7}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) - \int_{0}^{1} \frac{\phi(\alpha)}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{b-\tau}^{b} (\psi(s)-\psi(t))^{-\alpha}\,\partial_{i+5}L[x]_{\tau}(s)\,ds\,d\alpha \\
&\quad + \lambda(t) \cdot \partial_{i+1}g(t, x(t)) = 0, \quad \forall t \in [a, b-\tau],\ i = 1, 2
\end{aligned}\tag{20}$$

*and*

$$\begin{aligned}
&\partial_{i+1}L[x]_{\tau}(t) + \left( \mathrm{D}_{b^{-}}^{\phi(\alpha),\psi} \frac{\partial_{i+5}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) + \left( \mathrm{D}_{(b-\tau)^{+}}^{\varphi(\alpha),\psi} \frac{\partial_{i+7}L[x]_{\tau}(t)}{\psi'(t)} \right)\psi'(t) \\
&\quad + \int_{0}^{1} \frac{\varphi(\alpha)}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{a}^{b-\tau} (\psi(t)-\psi(s))^{-\alpha}\,\partial_{i+7}L[x]_{\tau}(s)\,ds\,d\alpha + \lambda(t) \cdot \partial_{i+1}g(t, x(t)) = 0, \\
&\qquad \forall t \in [b-\tau, b],\ i = 1, 2.
\end{aligned}\tag{21}$$

*If x*(*b*) *is free, then, for i* = 1, 2*,*

$$\mathrm{I}_{b^{-}}^{1-\phi(\alpha),\psi} \frac{\partial_{i+5}L[x]_{\tau}(b)}{\psi'(b)} = \mathrm{I}_{a^{+}}^{1-\varphi(\alpha),\psi} \frac{\partial_{i+7}L[x]_{\tau}(b)}{\psi'(b)}.\tag{22}$$

**Proof.** The proof follows by combining the ideas from Theorem 2 above with Theorem 3.5 from [15].

Now, we focus our attention on sufficient optimality conditions for all the variational problems studied previously.

**Definition 5.** *A function $f(t, x_2, x_3, \ldots, x_n)$ defined on $U \subseteq \mathbb{R}^n$ is called convex (resp. concave) if $\partial_i f(t, x_2, x_3, \ldots, x_n)$, $i = 2, \ldots, n$, exist and are continuous, and if*

$$f(t, x_2+h_2, x_3+h_3, \dots, x_n+h_n) - f(t, x_2, x_3, \dots, x_n) \ge (\text{resp.} \le) \sum_{i=2}^{n} \partial_i f(t, x_2, x_3, \dots, x_n)\,h_i$$

*for all* $(t,x_2,x_3,\dots,x_n), (t,x_2+h_2,x_3+h_3,\dots,x_n+h_n)\in U$.

**Theorem 6** (Sufficient optimality conditions for problem $(P_{\tau})$)**.** *Let $L$ be convex (resp. concave) in $[a,b]\times\mathbb{R}^4$. Then, each solution $\overline{x}$ of the fractional Euler–Lagrange Equations (2) and (3) minimizes (resp. maximizes) the functional $\mathcal{J}$ given in (1), subject to the boundary conditions $x(t)=\mu(t)$, $t\in[a-\tau,a]$, and $x(b)=\overline{x}(b)$. If $x(b)$ is free, then each solution $\overline{x}$ of Equations (2)–(4) minimizes (resp. maximizes) $\mathcal{J}$.*

**Proof.** We prove the case where *L* is convex; the concave case is similar. Let $h \in C^1([a-\tau,b],\mathbb{R})$ be an arbitrary function. Since *L* is convex, we conclude that

$$\begin{split} \mathcal{J}(\overline{x}+h) - \mathcal{J}(\overline{x}) \ge \int_{a}^{b}\Big(\partial_2 L[\overline{x}]_{\tau}(t)\,h(t) + \partial_3 L[\overline{x}]_{\tau}(t)\,h(t-\tau) \\ + \partial_4 L[\overline{x}]_{\tau}(t)\,{}^{C}\mathbf{D}_{a^{+}}^{\phi(\alpha),\psi}h(t) + \partial_5 L[\overline{x}]_{\tau}(t)\,{}^{C}\mathbf{D}_{b^{-}}^{\varphi(\alpha),\psi}h(t)\Big)\,dt. \end{split}$$

Using the same techniques used in the proof of Theorem 2, we get

$$\begin{split} \mathcal{J}(\overline{x}+h) - \mathcal{J}(\overline{x}) \ge \int_{a}^{b-\tau}\Bigg(\partial_2 L[\overline{x}]_{\tau}(t) + \partial_3 L[\overline{x}]_{\tau}(t+\tau) + \left(\mathbf{D}^{\phi(\alpha),\psi}_{(b-\tau)^{-}}\frac{\partial_4 L[\overline{x}]_{\tau}(t)}{\psi'(t)}\right)\psi'(t) \\ - \int_{0}^{1}\frac{\phi(\alpha)}{\Gamma(1-\alpha)}\frac{d}{dt}\int_{b-\tau}^{b}(\psi(s)-\psi(t))^{-\alpha}\partial_4 L[\overline{x}]_{\tau}(s)\,ds\,d\alpha + \left(\mathbf{D}^{\varphi(\alpha),\psi}_{a^{+}}\frac{\partial_5 L[\overline{x}]_{\tau}(t)}{\psi'(t)}\right)\psi'(t)\Bigg)h(t)\,dt \\ + \int_{b-\tau}^{b}\Bigg(\partial_2 L[\overline{x}]_{\tau}(t) + \left(\mathbf{D}^{\phi(\alpha),\psi}_{b^{-}}\frac{\partial_4 L[\overline{x}]_{\tau}(t)}{\psi'(t)}\right)\psi'(t) + \left(\mathbf{D}^{\varphi(\alpha),\psi}_{(b-\tau)^{+}}\frac{\partial_5 L[\overline{x}]_{\tau}(t)}{\psi'(t)}\right)\psi'(t) \\ + \int_{0}^{1}\frac{\varphi(\alpha)}{\Gamma(1-\alpha)}\frac{d}{dt}\int_{a}^{b-\tau}(\psi(t)-\psi(s))^{-\alpha}\partial_5 L[\overline{x}]_{\tau}(s)\,ds\,d\alpha\Bigg)h(t)\,dt \\ + \left[\mathbf{I}^{1-\phi(\alpha),\psi}_{b^{-}}\frac{\partial_4 L[\overline{x}]_{\tau}(t)}{\psi'(t)}\,h(t)\right]_{t=a}^{t=b} - \left[\mathbf{I}^{1-\varphi(\alpha),\psi}_{a^{+}}\frac{\partial_5 L[\overline{x}]_{\tau}(t)}{\psi'(t)}\,h(t)\right]_{t=a}^{t=b}. \end{split}\tag{23}$$

If *x*(*b*) is fixed then *h*(*a*) = *h*(*b*) = 0, and so from (23) we obtain

$$\begin{split} \mathcal{J}(\overline{x}+h) - \mathcal{J}(\overline{x}) \ge \int_{a}^{b-\tau}\Bigg(\partial_2 L[\overline{x}]_{\tau}(t) + \partial_3 L[\overline{x}]_{\tau}(t+\tau) + \left(\mathbf{D}^{\phi(\alpha),\psi}_{(b-\tau)^{-}}\frac{\partial_4 L[\overline{x}]_{\tau}(t)}{\psi'(t)}\right)\psi'(t) \\ - \int_{0}^{1}\frac{\phi(\alpha)}{\Gamma(1-\alpha)}\frac{d}{dt}\int_{b-\tau}^{b}(\psi(s)-\psi(t))^{-\alpha}\partial_4 L[\overline{x}]_{\tau}(s)\,ds\,d\alpha + \left(\mathbf{D}^{\varphi(\alpha),\psi}_{a^{+}}\frac{\partial_5 L[\overline{x}]_{\tau}(t)}{\psi'(t)}\right)\psi'(t)\Bigg)h(t)\,dt \\ + \int_{b-\tau}^{b}\Bigg(\partial_2 L[\overline{x}]_{\tau}(t) + \left(\mathbf{D}^{\phi(\alpha),\psi}_{b^{-}}\frac{\partial_4 L[\overline{x}]_{\tau}(t)}{\psi'(t)}\right)\psi'(t) + \left(\mathbf{D}^{\varphi(\alpha),\psi}_{(b-\tau)^{+}}\frac{\partial_5 L[\overline{x}]_{\tau}(t)}{\psi'(t)}\right)\psi'(t) \\ + \int_{0}^{1}\frac{\varphi(\alpha)}{\Gamma(1-\alpha)}\frac{d}{dt}\int_{a}^{b-\tau}(\psi(t)-\psi(s))^{-\alpha}\partial_5 L[\overline{x}]_{\tau}(s)\,ds\,d\alpha\Bigg)h(t)\,dt. \end{split}$$

Since $\overline{x}$ is a solution of the fractional Euler–Lagrange Equations (2) and (3), we conclude that $\mathcal{J}(\overline{x}+h) - \mathcal{J}(\overline{x}) \ge 0$. The case where $x(b)$ is free follows by considering $h(t)=0$, $t\in[a-\tau,a]$, and $h(b)$ non-zero in (23).

Using similar techniques as the ones used in the proof of the last theorem, we can prove the following two results.

**Theorem 7** (Sufficient optimality conditions for problem $(P_{\tau}^{I})$)**.** *Assume that, for some constant $\lambda$, the functions $L$ and $\lambda G$ are convex (resp. concave) in $[a,b]\times\mathbb{R}^4$, and define the function $H$ as $H:=L+\lambda G$. Then, each solution $\overline{x}$ of the fractional Equations (14) and (15) minimizes (resp. maximizes) the functional $\mathcal{J}$ given in (1), subject to the restrictions $x(t)=\mu(t)$, $t\in[a-\tau,a]$, $x(b)=\overline{x}(b)$, and the integral constraint (13). If $x(b)$ is free, then each solution $\overline{x}$ of the fractional Equations (14)–(16) minimizes (resp. maximizes) $\mathcal{J}$ subject to (13).*

**Theorem 8** (Sufficient optimality conditions for problem $(P_{\tau}^{C})$)**.** *Consider the functional $\mathcal{J}$ defined in (19), where the Lagrangian function $L$ is convex (resp. concave) in $[a,b]\times\mathbb{R}^7$. Define the function $\lambda:[a,b]\to\mathbb{R}$ by*

$$\begin{split} \lambda(t) := -\frac{1}{\partial_3 g(t,x(t))}\Bigg(\partial_3 L[x]_{\tau}(t) + \partial_5 L[x]_{\tau}(t+\tau) + \left(\mathbf{D}^{\phi(\alpha),\psi}_{(b-\tau)^{-}}\frac{\partial_7 L[x]_{\tau}(t)}{\psi'(t)}\right)\psi'(t) \\ + \left(\mathbf{D}^{\varphi(\alpha),\psi}_{a^{+}}\frac{\partial_9 L[x]_{\tau}(t)}{\psi'(t)}\right)\psi'(t) - \int_{0}^{1}\frac{\phi(\alpha)}{\Gamma(1-\alpha)}\frac{d}{dt}\int_{b-\tau}^{b}(\psi(s)-\psi(t))^{-\alpha}\partial_7 L[x]_{\tau}(s)\,ds\,d\alpha\Bigg) \end{split}$$

*for t* ∈ [*a*, *b* − *τ*]*, and*

$$\begin{split} \lambda(t) := -\frac{1}{\partial_3 g(t,x(t))}\Bigg(\partial_3 L[x]_{\tau}(t) + \left(\mathbf{D}^{\phi(\alpha),\psi}_{b^{-}}\frac{\partial_7 L[x]_{\tau}(t)}{\psi'(t)}\right)\psi'(t) \\ + \left(\mathbf{D}^{\varphi(\alpha),\psi}_{(b-\tau)^{+}}\frac{\partial_9 L[x]_{\tau}(t)}{\psi'(t)}\right)\psi'(t) + \int_{0}^{1}\frac{\varphi(\alpha)}{\Gamma(1-\alpha)}\frac{d}{dt}\int_{a}^{b-\tau}(\psi(t)-\psi(s))^{-\alpha}\partial_9 L[x]_{\tau}(s)\,ds\,d\alpha\Bigg), \end{split}$$

*for $t\in[b-\tau,b]$, where $g$ is a $C^1$ function such that $\partial_3 g(t,x(t))\neq 0$ for all $t\in[a,b]$. Then, each solution $\overline{x}=(\overline{x}_1,\overline{x}_2)$ of Equations (20) and (21) minimizes (resp. maximizes) the functional $\mathcal{J}$, subject to the restrictions $x(t)=\mu(t)$, $t\in[a-\tau,a]$, $x(b)=\overline{x}(b)$, and the holonomic constraint (17). In addition, if $x(b)$ is free, then each solution $\overline{x}$ of the fractional Equations (20)–(22) minimizes (resp. maximizes) $\mathcal{J}$ subject to (17).*

#### *3.2. Higher-Order Variational Problems*

In this section, we consider the general case with respect to the fractional orders. Thus, the distributions $\phi_i$, $\varphi_i$ have domain $[i-1,i]$, $i=1,\dots,n$, where $n\in\mathbb{N}$ is fixed, with

$$\int_{i-1}^{i} \phi_i(\alpha)\,d\alpha > 0 \qquad \text{and} \qquad \int_{i-1}^{i} \varphi_i(\alpha)\,d\alpha > 0.$$

The problem is formulated as follows:

**Problem 4** ((*Pn*))**.** *Find a curve x* ∈ *C n* ([*a*, *b*], R) *for which the functional*

$$\mathcal{J}(x) := \int_{a}^{b} L\Big(t, x(t), {}^{C}\mathbf{D}_{a^{+}}^{\phi_1(\alpha),\psi}x(t), {}^{C}\mathbf{D}_{b^{-}}^{\varphi_1(\alpha),\psi}x(t), \dots, {}^{C}\mathbf{D}_{a^{+}}^{\phi_n(\alpha),\psi}x(t), {}^{C}\mathbf{D}_{b^{-}}^{\varphi_n(\alpha),\psi}x(t)\Big)dt,\tag{24}$$

*attains a minimum or a maximum value, where <sup>L</sup>* : [*a*, *<sup>b</sup>*] <sup>×</sup> <sup>R</sup>2*n*+<sup>1</sup> <sup>→</sup> <sup>R</sup> *is a continuously differentiable function. In addition, the following boundary conditions*

$$x^{(i)}(a) = x_a^{i} \quad \text{and} \quad x^{(i)}(b) = x_b^{i}, \ \text{with } x_a^{i}, x_b^{i} \in \mathbb{R}, \ i = 0, \dots, n-1,\tag{25}$$

*may be assumed.*

We will consider the variational problem (*Pn*) with and without fixed boundary conditions (25), and also with isoperimetric or holonomic constraints.

As done previously, we use the abbreviations

$$[x]_n(t) := \Big(t, x(t), {}^{C}\mathbf{D}_{a^{+}}^{\phi_1(\alpha),\psi}x(t), {}^{C}\mathbf{D}_{b^{-}}^{\varphi_1(\alpha),\psi}x(t), \dots, {}^{C}\mathbf{D}_{a^{+}}^{\phi_n(\alpha),\psi}x(t), {}^{C}\mathbf{D}_{b^{-}}^{\varphi_n(\alpha),\psi}x(t)\Big),$$

and

$$C^{-}_{\phi_i}[H,j]: \quad t \mapsto \left(\mathbf{D}_{b^{-}}^{\phi_i(\alpha),\psi}\frac{\partial_j H[x]_n}{\psi'}\right)(t) \text{ is continuous on } [a,b],$$

$$C^{+}_{\varphi_i}[H,j]: \quad t \mapsto \left(\mathbf{D}_{a^{+}}^{\varphi_i(\alpha),\psi}\frac{\partial_j H[x]_n}{\psi'}\right)(t) \text{ is continuous on } [a,b],$$

where *H* is a function and *i*, *j* ∈ N.

**Theorem 9** (Fractional Euler–Lagrange equation and natural boundary conditions for problem $(P_n)$)**.** *Let $x \in C^n([a,b],\mathbb{R})$ be an extremizer of the functional $\mathcal{J}$ defined by (24). If conditions $C^{-}_{\phi_i}[L,2i+1]$ and $C^{+}_{\varphi_i}[L,2i+2]$ hold for all $i\in\{1,\dots,n\}$, then $x$ satisfies the following Euler–Lagrange equation:*

$$\partial_2 L[x]_n(t) + \sum_{i=1}^{n}\left[\left(\mathbf{D}^{\phi_i(\alpha),\psi}_{b^{-}}\frac{\partial_{2i+1}L[x]_n(t)}{\psi'(t)}\right)\psi'(t) + \left(\mathbf{D}^{\varphi_i(\alpha),\psi}_{a^{+}}\frac{\partial_{2i+2}L[x]_n(t)}{\psi'(t)}\right)\psi'(t)\right] = 0,\tag{26}$$

*for all $t\in[a,b]$. In addition, if $x^{(i)}(a)$ are free, for $i=0,\dots,n-1$, then*

$$\begin{split} \sum_{k=i+1}^{n}\Bigg[\left(-\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k-i-1}\left(\mathbf{I}_{b^{-}}^{k-\phi_k(\alpha),\psi}\frac{\partial_{2k+1}L[x]_n(t)}{\psi'(t)}\right) \\ + (-1)^{i+1}\left(\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k-i-1}\left(\mathbf{I}_{a^{+}}^{k-\varphi_k(\alpha),\psi}\frac{\partial_{2k+2}L[x]_n(t)}{\psi'(t)}\right)\Bigg] = 0, \quad \text{at } t=a, \end{split}\tag{27}$$

*and if $x^{(i)}(b)$ are free, for $i=0,\dots,n-1$, then*

$$\begin{split} \sum_{k=i+1}^{n}\Bigg[\left(-\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k-i-1}\left(\mathbf{I}_{b^{-}}^{k-\phi_k(\alpha),\psi}\frac{\partial_{2k+1}L[x]_n(t)}{\psi'(t)}\right) \\ + (-1)^{i+1}\left(\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k-i-1}\left(\mathbf{I}_{a^{+}}^{k-\varphi_k(\alpha),\psi}\frac{\partial_{2k+2}L[x]_n(t)}{\psi'(t)}\right)\Bigg] = 0, \quad \text{at } t=b. \end{split}\tag{28}$$

**Proof.** Let *h* ∈ *C n* ([*a*, *b*], R) be a function. Observe that, given *i* ∈ {0, ..., *n* − 1}, if *x* (*i*) (*a*) or *x* (*i*) (*b*) are fixed, then we need to assume that *h* (*i*) (*a*) = 0 or *h* (*i*) (*b*) = 0, respectively, and so

$$\left(\frac{1}{\psi'(t)}\frac{d}{dt}\right)^i h(t) = 0, \text{ at } t = a \text{ or } t = b,$$

respectively. Defining $j$ by $j(\epsilon) := \mathcal{J}(x+\epsilon h)$, $\epsilon \in \mathbb{R}$, we have $j'(0) = 0$, and so

$$\int_{a}^{b}\left(\partial_2 L[x]_n(t)\,h(t) + \sum_{i=1}^{n}\Big(\partial_{2i+1}L[x]_n(t)\,{}^{C}\mathbf{D}_{a^{+}}^{\phi_i(\alpha),\psi}h(t) + \partial_{2i+2}L[x]_n(t)\,{}^{C}\mathbf{D}_{b^{-}}^{\varphi_i(\alpha),\psi}h(t)\Big)\right)dt = 0.$$
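The vanishing of the $\psi$-derivatives of $h$ at a fixed endpoint, used in the proof, can be checked symbolically for concrete choices. The snippet below is illustrative only: the kernel $\psi(t)=e^t$, the test function $h(t)=(t-1)^3$, and the endpoint $a=1$ are assumptions of this sketch, not choices made in the paper. It applies the operator $\frac{1}{\psi'(t)}\frac{d}{dt}$ repeatedly and confirms that the result vanishes at $t=a$:

```python
# Symbolic sanity check (illustrative; psi, h, and a are sample choices):
# if h and its first derivatives vanish at t = a, so do the iterated
# psi-derivatives (1/psi'(t) * d/dt)^i h evaluated at t = a.
import sympy as sp

t = sp.symbols('t')
psi = sp.exp(t)          # an increasing smooth kernel (assumption)
h = (t - 1) ** 3         # h, h', h'' all vanish at t = 1 (assumption)

def psi_op(f, times):
    # apply (1/psi'(t) * d/dt) the given number of times
    for _ in range(times):
        f = sp.diff(f, t) / sp.diff(psi, t)
    return sp.simplify(f)

for i in range(3):
    assert psi_op(h, i).subs(t, 1) == 0
print("psi-derivatives vanish at the fixed endpoint")
```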

Using Theorem 1, we obtain, for each *i* ∈ {1, ..., *n*},

$$\begin{split} \int_{a}^{b}\partial_{2i+1}L[x]_n(t)\,{}^{C}\mathbf{D}_{a^{+}}^{\phi_i(\alpha),\psi}h(t)\,dt = \int_{a}^{b}\left(\mathbf{D}_{b^{-}}^{\phi_i(\alpha),\psi}\frac{\partial_{2i+1}L[x]_n(t)}{\psi'(t)}\right)\psi'(t)\,h(t)\,dt \\ + \sum_{k=0}^{i-1}\left[\left(-\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k}\left(\mathbf{I}_{b^{-}}^{i-\phi_i(\alpha),\psi}\frac{\partial_{2i+1}L[x]_n(t)}{\psi'(t)}\right)h_{\psi}^{[i-k-1]}(t)\right]_{t=a}^{t=b} \end{split}$$

and

$$\begin{split} \int_{a}^{b}\partial_{2i+2}L[x]_n(t)\,{}^{C}\mathbf{D}_{b^{-}}^{\varphi_i(\alpha),\psi}h(t)\,dt = \int_{a}^{b}\left(\mathbf{D}_{a^{+}}^{\varphi_i(\alpha),\psi}\frac{\partial_{2i+2}L[x]_n(t)}{\psi'(t)}\right)\psi'(t)\,h(t)\,dt \\ + \sum_{k=0}^{i-1}\left[(-1)^{i-k}\left(\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k}\left(\mathbf{I}_{a^{+}}^{i-\varphi_i(\alpha),\psi}\frac{\partial_{2i+2}L[x]_n(t)}{\psi'(t)}\right)h_{\psi}^{[i-k-1]}(t)\right]_{t=a}^{t=b}. \end{split}$$

Therefore,

$$\begin{split} \int_{a}^{b}\Bigg(\partial_2 L[x]_n(t) + \sum_{i=1}^{n}\Bigg[\left(\mathbf{D}_{b^{-}}^{\phi_i(\alpha),\psi}\frac{\partial_{2i+1}L[x]_n(t)}{\psi'(t)}\right)\psi'(t) \\ + \left(\mathbf{D}_{a^{+}}^{\varphi_i(\alpha),\psi}\frac{\partial_{2i+2}L[x]_n(t)}{\psi'(t)}\right)\psi'(t)\Bigg]\Bigg)h(t)\,dt \\ + \sum_{i=1}^{n}\sum_{k=0}^{i-1}\Bigg[\Bigg(\left(-\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k}\left(\mathbf{I}_{b^{-}}^{i-\phi_i(\alpha),\psi}\frac{\partial_{2i+1}L[x]_n(t)}{\psi'(t)}\right) \\ + (-1)^{i-k}\left(\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k}\left(\mathbf{I}_{a^{+}}^{i-\varphi_i(\alpha),\psi}\frac{\partial_{2i+2}L[x]_n(t)}{\psi'(t)}\right)\Bigg)h_{\psi}^{[i-k-1]}(t)\Bigg]_{t=a}^{t=b} = 0. \end{split}$$

Since

$$\begin{split} \sum_{i=1}^{n}\sum_{k=0}^{i-1}\Bigg[\Bigg(\left(-\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k}\left(\mathbf{I}_{b^{-}}^{i-\phi_i(\alpha),\psi}\frac{\partial_{2i+1}L[x]_n(t)}{\psi'(t)}\right) \\ + (-1)^{i-k}\left(\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k}\left(\mathbf{I}_{a^{+}}^{i-\varphi_i(\alpha),\psi}\frac{\partial_{2i+2}L[x]_n(t)}{\psi'(t)}\right)\Bigg)h_{\psi}^{[i-k-1]}(t)\Bigg]_{t=a}^{t=b} \\ = \sum_{i=0}^{n-1}\Bigg[h_{\psi}^{[i]}(t)\sum_{k=i+1}^{n}\Bigg(\left(-\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k-i-1}\left(\mathbf{I}_{b^{-}}^{k-\phi_k(\alpha),\psi}\frac{\partial_{2k+1}L[x]_n(t)}{\psi'(t)}\right) \\ + (-1)^{i+1}\left(\frac{1}{\psi'(t)}\frac{d}{dt}\right)^{k-i-1}\left(\mathbf{I}_{a^{+}}^{k-\varphi_k(\alpha),\psi}\frac{\partial_{2k+2}L[x]_n(t)}{\psi'(t)}\right)\Bigg)\Bigg]_{t=a}^{t=b}, \end{split}$$

we conclude, from the arbitrariness of *h*, that (26), (27), and (28) hold.

In the presence of an isoperimetric or holonomic constraint, similar results can be proven for this new variational problem. To simplify, we will assume that the boundary conditions (25) hold. The proofs are omitted since they follow the same pattern as the ones presented before.

**Problem 5** ( (*PIn* ))**.** *The isoperimetric problem can be formulated as follows: minimize or maximize the functional* J *in* (24) *assuming the boundary conditions* (25) *and also an integral restriction*

$$\mathcal{I}(x) = \int_{a}^{b} G[x]_n(t)\,dt = k, \ k \in \mathbb{R},\tag{29}$$

*where G* : [*a*, *<sup>b</sup>*] <sup>×</sup> <sup>R</sup>2*n*+<sup>1</sup> <sup>→</sup> <sup>R</sup> *is a C*<sup>1</sup> *function.*

**Theorem 10** (Necessary optimality conditions for problem (*PIn* )—Case I)**.** *Let x* ∈ *C n* ([*a*, *b*], R) *be a solution of problem* (*PIn* )*. Suppose that there exists some t* ∈ [*a*, *b*] *such that*

$$\partial_2 G[x]_n(t) + \sum_{i=1}^{n}\left[\left(\mathbf{D}_{b^{-}}^{\phi_i(\alpha),\psi}\frac{\partial_{2i+1}G[x]_n(t)}{\psi'(t)}\right)\psi'(t) + \left(\mathbf{D}_{a^{+}}^{\varphi_i(\alpha),\psi}\frac{\partial_{2i+2}G[x]_n(t)}{\psi'(t)}\right)\psi'(t)\right] \neq 0.\tag{30}$$

*If conditions $C^{-}_{\phi_i}[L,2i+1]$, $C^{+}_{\varphi_i}[L,2i+2]$, $C^{-}_{\phi_i}[G,2i+1]$, and $C^{+}_{\varphi_i}[G,2i+2]$ hold for all $i\in\{1,\dots,n\}$, then there exists a real number $\lambda$ such that $x$ is a solution of the equation*

$$\partial_2 H[x]_n(t) + \sum_{i=1}^{n}\left[\left(\mathbf{D}_{b^{-}}^{\phi_i(\alpha),\psi}\frac{\partial_{2i+1}H[x]_n(t)}{\psi'(t)}\right)\psi'(t) + \left(\mathbf{D}_{a^{+}}^{\varphi_i(\alpha),\psi}\frac{\partial_{2i+2}H[x]_n(t)}{\psi'(t)}\right)\psi'(t)\right] = 0,\tag{31}$$

*for all t* ∈ [*a*, *b*]*, where H* := *L* + *λG.*

**Theorem 11** (Necessary optimality conditions for problem $(P_n^{I})$—Case II)**.** *Let $x \in C^n([a,b],\mathbb{R})$ be a solution of problem $(P_n^{I})$. If conditions $C^{-}_{\phi_i}[L,2i+1]$, $C^{+}_{\varphi_i}[L,2i+2]$, $C^{-}_{\phi_i}[G,2i+1]$, and $C^{+}_{\varphi_i}[G,2i+2]$ hold for all $i\in\{1,\dots,n\}$, then there exists a vector $(\lambda_0,\lambda)\in\mathbb{R}^2\setminus\{(0,0)\}$ such that $x$ is a solution of Equation (31) for all $t\in[a,b]$, with the Hamiltonian $H$ defined as $H:=\lambda_0 L+\lambda G$.*

To finish this section, we will study problem (*Pn*) with a holonomic constraint.

**Problem 6** ((*PCn* ))**.** *The objective is to find x* ∈ *C n* ([*a*, *b*], R) × *C n* ([*a*, *b*], R) *that minimizes or maximizes the functional*

$$\mathcal{J}(\mathbf{x}) = \int\_{a}^{b} L[\mathbf{x}]\_{n}(t)dt,\tag{32}$$

*defined on $C^n([a,b],\mathbb{R}) \times C^n([a,b],\mathbb{R})$ and subject to the constraint*

$$g(t, x(t)) = 0, \quad t \in [a,b],\tag{33}$$

*where g* : [*a*, *<sup>b</sup>*] <sup>×</sup> <sup>R</sup><sup>2</sup> <sup>→</sup> <sup>R</sup> *is a C<sup>n</sup> function. In addition, boundary conditions*

$$x^{(i)}(a) = x_a^{(i)} \ \text{ and } \ x^{(i)}(b) = x_b^{(i)}, \ \text{with } x_a^{(i)}, x_b^{(i)} \in \mathbb{R}^2, \ \text{for } i = 0, \dots, n-1,\tag{34}$$

*are imposed on the variational problem.*

**Theorem 12** (Necessary optimality conditions for problem $(P_n^{C})$)**.** *Let $x$ be an extremizer of the functional $\mathcal{J}$ defined by (32) and subject to the constraints (33)–(34). If conditions $C^{-}_{\phi_i}[L,4i+j-1]$ and $C^{+}_{\varphi_i}[L,4i+j+1]$ hold for all $i\in\{1,\dots,n\}$ and $j=1,2$, and if*

$$\partial_3 g(t, x(t)) \neq 0, \ \forall\, t \in [a,b],$$

*then there exists a continuous function λ* : [*a*, *b*] → R *such that x is a solution of*

$$\begin{split} \partial_{j+1}L[x]_n(t) + \sum_{i=1}^{n}\left[\left(\mathbf{D}_{b^{-}}^{\phi_i(\alpha),\psi}\frac{\partial_{4i+j-1}L[x]_n(t)}{\psi'(t)}\right)\psi'(t) + \left(\mathbf{D}_{a^{+}}^{\varphi_i(\alpha),\psi}\frac{\partial_{4i+j+1}L[x]_n(t)}{\psi'(t)}\right)\psi'(t)\right] \\ + \lambda(t)\,\partial_{j+1}g(t, x(t)) = 0, \end{split}\tag{35}$$

*for all t* ∈ [*a*, *b*] *and j* = 1, 2*.*

**Remark 1.** *In a similar way, we can prove that, in case function L is convex (resp. concave), then the conditions given in Theorems 9–12 are also sufficient conditions to ensure that the candidates of extremizers are indeed minimizers (resp. maximizers) of the functional.*

#### **4. Illustrative Examples**

Some illustrative examples are provided to demonstrate the applicability of our results.

**Example 1.** *Suppose we intend to find a function $x \in C^3([0,1],\mathbb{R})$, subject to the initial conditions $x(0)=(\psi(1)-\psi(0))^5$, $x'(0)=-5\psi'(0)(\psi(1)-\psi(0))^4$, $x''(0)=-5\psi''(0)(\psi(1)-\psi(0))^4+20(\psi'(0))^2(\psi(1)-\psi(0))^3$, and the terminal conditions $x(1)=x'(1)=x''(1)=0$, that extremizes the functional*

$$\begin{split} \mathcal{J}(x) = \int_{0}^{1}\Big({}^{C}\mathbf{D}_{0^{+}}^{\phi_3(\alpha),\psi}x(t)\cdot(\psi(1)-\psi(t))^5\,\psi'(t) \\ - x(t)\cdot\frac{(\psi(1)-\psi(t))^3-(\psi(1)-\psi(t))^2}{\ln(\psi(1)-\psi(t))}\,\psi'(t)\Big)dt, \end{split}$$

*where φ*<sup>3</sup> : [2, 3] → [0, 1] *is defined by*

$$\phi_3(\alpha) = \frac{\Gamma(6-\alpha)}{5!}.$$

*The Euler–Lagrange equation associated is the following (cf. Theorem 9):*

$$-\frac{(\psi(1)-\psi(t))^3 - (\psi(1)-\psi(t))^2}{\ln(\psi(1)-\psi(t))} + \mathbf{D}_{1^{-}}^{\phi_3(\alpha),\psi}\left((\psi(1)-\psi(t))^5\right) = 0.$$

*By ([14] Lemma 14),*

$$\mathbf{D}_{1^{-}}^{\alpha,\psi}\left((\psi(1)-\psi(t))^5\right) = \frac{5!}{\Gamma(6-\alpha)}(\psi(1)-\psi(t))^{5-\alpha},$$

*and so*

$$\mathbf{D}_{1^{-}}^{\phi_3(\alpha),\psi}\left((\psi(1)-\psi(t))^5\right) = \frac{(\psi(1)-\psi(t))^3 - (\psi(1)-\psi(t))^2}{\ln(\psi(1)-\psi(t))},$$

*proving that the function <sup>x</sup>*(*t*) = (*ψ*(1) <sup>−</sup> *<sup>ψ</sup>*(*t*))<sup>5</sup> *, t* ∈ [0, 1]*, is a candidate to be an extremizer of the proposed problem.*
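This computation can be sanity-checked numerically. The sketch below is illustrative and not part of the original paper: it works directly with $u = \psi(1)-\psi(t) \in (0,1)$, so it is independent of the particular kernel $\psi$, and verifies with SciPy that integrating $\phi_3(\alpha)$ against the single-order derivative of $(\psi(1)-\psi(t))^5$ over $\alpha\in[2,3]$ reproduces the closed form above.

```python
# Numerical check (illustrative): with phi_3(alpha) = Gamma(6-alpha)/5!,
# the distributed-order derivative of (psi(1)-psi(t))^5 should equal
# (u^3 - u^2)/ln(u), where u = psi(1) - psi(t).
from math import gamma, log, factorial
from scipy.integrate import quad

def distributed_derivative(u):
    # integrand: phi_3(alpha) * [5!/Gamma(6-alpha)] * u**(5-alpha);
    # the Gamma factors cancel by construction of phi_3
    integrand = lambda a: (gamma(6 - a) / factorial(5)) \
                          * (factorial(5) / gamma(6 - a)) * u ** (5 - a)
    val, _ = quad(integrand, 2, 3)
    return val

def closed_form(u):
    return (u**3 - u**2) / log(u)

for u in (0.2, 0.5, 0.9):
    assert abs(distributed_derivative(u) - closed_form(u)) < 1e-8
print("distributed-order identity verified")
```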

**Example 2.** *We want to find a curve x* ∈ *C* 1 ([−1, 2], R)*, subject to the condition x*(*t*) = *µ*(*t*), *t* ∈ [−1, 0]*, where µ* ∈ *C* 1 ([−1, 0], <sup>R</sup>) *is a fixed initial function with <sup>µ</sup>*(0) = (*ψ*(2) <sup>−</sup> *<sup>ψ</sup>*(0))<sup>2</sup> *, that minimizes the following functional:*

$$\mathcal{J}(x) = \int_{0}^{2}\left(\left(x(t-1)-(\psi(2)-\psi(t-1))^2\right)^2 + \left({}^{C}\mathbf{D}_{2^{-}}^{\varphi(\alpha),\psi}x(t) - \frac{\psi(t)-\psi(2)+(\psi(2)-\psi(t))^2}{\ln(\psi(2)-\psi(t))}\right)^2\right)dt,$$

*where ϕ* : [0, 1] → [0, 1] *is defined by*

$$\varphi(\alpha) = \frac{\Gamma(3-\alpha)}{2}.$$

*By Lemma 1 in [13], if $\overline{x}:[-1,2]\to\mathbb{R}$ is defined by $\overline{x}(t)=(\psi(2)-\psi(t))^2$ if $t\in[0,2]$, and $\overline{x}(t)=\mu(t)$ if $t\in[-1,0]$, then*

$${}^{C}\mathbf{D}_{2^{-}}^{\alpha,\psi}\overline{x}(t) = \frac{2}{\Gamma(3-\alpha)}(\psi(2)-\psi(t))^{2-\alpha}, \quad t \in [0,2],$$

*and so the distributed-order derivative with respect to ψ is given by*

$${}^{C}\mathbf{D}_{2^{-}}^{\varphi(\alpha),\psi}\overline{x}(t) = \int_{0}^{1}\varphi(\alpha)\,{}^{C}\mathbf{D}_{2^{-}}^{\alpha,\psi}\overline{x}(t)\,d\alpha = \frac{\psi(t)-\psi(2)+(\psi(2)-\psi(t))^2}{\ln(\psi(2)-\psi(t))}.$$

*Note that $\overline{x}$ satisfies the assumptions of Theorem 2, the Euler–Lagrange Equations (2) and (3), and the transversality condition (4), proving that $\overline{x}$ is a candidate to be a local minimizer of $\mathcal{J}$. Since the Lagrangian function is convex, we conclude by Theorem 6 that $\overline{x}$ is a minimizer of $\mathcal{J}$.*
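The closed form of the distributed-order Caputo derivative computed in this example can also be verified numerically. The sketch below is illustrative; the concrete kernel $\psi(t)=t$ is an assumption of the sketch (the identity itself holds for any admissible $\psi$).

```python
# Numerical check (illustrative; psi(t) = t is a sample kernel): verify the
# closed form of the distributed-order Caputo derivative of
# x(t) = (psi(2) - psi(t))^2 from Example 2.
from math import gamma, log
from scipy.integrate import quad

def distributed_caputo(t, psi=lambda s: s):
    u = psi(2) - psi(t)
    # integrand: varphi(alpha) * C D^{alpha,psi} x(t)
    #          = (Gamma(3-a)/2) * (2/Gamma(3-a)) * u**(2-a) = u**(2-a)
    integrand = lambda a: (gamma(3 - a) / 2) * (2 / gamma(3 - a)) * u ** (2 - a)
    val, _ = quad(integrand, 0, 1)
    return val

def closed_form(t, psi=lambda s: s):
    u = psi(2) - psi(t)
    return (psi(t) - psi(2) + u**2) / log(u)

# avoid t = 1, where u = 1 makes ln(u) vanish in the closed form
for t in (0.25, 0.5, 1.5):
    assert abs(distributed_caputo(t) - closed_form(t)) < 1e-8
print("Example 2 closed form verified")
```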

**Example 3.** *Determine x that minimizes the functional*

$$\begin{split} \mathcal{J}(x) = \int_{0}^{1}\left(\left({}^{C}\mathbf{D}_{0^{+}}^{\phi_2(\alpha),\psi}x(t) - \frac{\psi(t)-\psi(0)-1}{\ln(\psi(t)-\psi(0))}\right)^2 \right. \\ \left. + \left({}^{C}\mathbf{D}_{1^{-}}^{\varphi_2(\alpha),\psi}x(t) - \frac{\psi(1)-\psi(t)-1}{\ln(\psi(1)-\psi(t))}\right)^2\right)dt, \end{split}$$

*in the class of functions C* 2 ([0, 1], R) *subject to the boundary conditions x*(0) = *x* 0 (0) = 0*, where φ*2, *ϕ*<sup>2</sup> : [1, 2] → [0, 1] *are defined by*

$$\phi_2(\alpha) = \frac{\Gamma(3-\alpha)}{2} = \varphi_2(\alpha).$$

*Again, by [13, Lemma 1], if <sup>x</sup>*(*t*) = (*ψ*(*t*) <sup>−</sup> *<sup>ψ</sup>*(0))<sup>2</sup> *, t* ∈ [0, 1]*, then*

$${}^{C}\mathbf{D}_{0^{+}}^{\alpha,\psi}\overline{x}(t) = \frac{2}{\Gamma(3-\alpha)}(\psi(t)-\psi(0))^{2-\alpha},$$

*and so*

$${}^{C}\mathbf{D}_{0^{+}}^{\phi_2(\alpha),\psi}\overline{x}(t) = \int_{1}^{2}\phi_2(\alpha)\,{}^{C}\mathbf{D}_{0^{+}}^{\alpha,\psi}\overline{x}(t)\,d\alpha = \frac{\psi(t)-\psi(0)-1}{\ln(\psi(t)-\psi(0))}.$$

*In addition, observe that*

$$\begin{aligned} {}^{C}\mathbf{D}_{1^{-}}^{\alpha,\psi}\overline{x}(t) &= {}^{C}\mathbf{D}_{1^{-}}^{\alpha,\psi}\big((\psi(1)-\psi(t))+(\psi(0)-\psi(1))\big)^2 \\ &= {}^{C}\mathbf{D}_{1^{-}}^{\alpha,\psi}(\psi(1)-\psi(t))^2 = \frac{2}{\Gamma(3-\alpha)}(\psi(1)-\psi(t))^{2-\alpha}, \end{aligned}$$

*and therefore*

$${}^{C}\mathbf{D}_{1^{-}}^{\varphi_2(\alpha),\psi}\overline{x}(t) = \int_{1}^{2}\varphi_2(\alpha)\,{}^{C}\mathbf{D}_{1^{-}}^{\alpha,\psi}\overline{x}(t)\,d\alpha = \frac{\psi(1)-\psi(t)-1}{\ln(\psi(1)-\psi(t))}.$$

*We can easily verify that $\overline{x}$ satisfies the assumptions of Theorem 9, the Euler–Lagrange Equation (26), and the natural boundary condition (28), proving that $\overline{x}$ is a candidate to be a local minimizer of $\mathcal{J}$. Since the Lagrangian function is convex, we conclude that $\overline{x}$ is a minimizer of $\mathcal{J}$.*
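The single-order formula from [13, Lemma 1] used throughout these examples can itself be checked numerically. The sketch below is illustrative: it takes $\psi(t)=t$ (an assumption), so the $\psi$-Caputo derivative reduces to the classical Caputo derivative, and compares the integral definition of order $\alpha\in(1,2)$ applied to $x(t)=t^2$ with the formula $\frac{2}{\Gamma(3-\alpha)}t^{2-\alpha}$.

```python
# Illustrative check (psi(t) = t): for x(t) = t^2 and alpha in (1, 2),
#   C D^{alpha}_{0+} x(t) = 1/Gamma(2-alpha) * int_0^t (t-s)^(1-alpha) x''(s) ds
# should equal 2 t^(2-alpha) / Gamma(3-alpha).
from math import gamma
from scipy.integrate import quad

def caputo_t2(t, alpha):
    # x''(s) = 2; the algebraic endpoint singularity (t-s)^(1-alpha) is
    # handled exactly by quad's 'alg' weight: weight (s-0)^0 * (t-s)^(1-alpha)
    val, _ = quad(lambda s: 2.0, 0, t, weight='alg', wvar=(0, 1 - alpha))
    return val / gamma(2 - alpha)

def lemma_formula(t, alpha):
    return 2 * t ** (2 - alpha) / gamma(3 - alpha)

for alpha in (1.25, 1.5, 1.75):
    assert abs(caputo_t2(0.7, alpha) - lemma_formula(0.7, alpha)) < 1e-8
print("single-order Caputo formula verified")
```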

#### **5. Conclusions and Future Work**

In this article, we continue the study started in [15], considering new problems in the calculus of variations. Namely, two distinct types are considered: problems whose Lagrangian involves a time delay, and problems involving derivatives of order greater than one. Necessary and sufficient optimality conditions are proved, both for the basic problem and in the presence of additional constraints. The study is formulated in the context of fractional calculus, where the derivative of the state curve is of fractional type, involving distributed orders and a kernel given by an arbitrary smooth function.

In the future, we intend to study variational problems of Herglotz type and some generalizations involving distributed-order fractional derivatives with arbitrary smooth kernels.

**Author Contributions:** Conceptualization, formal analysis, investigation, writing—review and editing: F.C., R.A. and N.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported by Portuguese funds through the CIDMA—Center for Research and Development in Mathematics and Applications, and the Portuguese Foundation for Science and Technology (FCT-Fundação para a Ciência e a Tecnologia), reference UIDB/04106/2020.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


#### *Article* **Quadratic First Integrals of Time-Dependent Dynamical Systems of the Form $\ddot{q}^{a} = -\Gamma^{a}_{bc}\dot{q}^{b}\dot{q}^{c} - \omega(t)Q^{a}(q)$**

**Antonios Mitsopoulos \* and Michael Tsamparlis**

Faculty of Physics, Department of Astronomy-Astrophysics-Mechanics, University of Athens, Panepistemiopolis, 15783 Athens, Greece; mtsampa@phys.uoa.gr

**\*** Correspondence: antmits@phys.uoa.gr

**Abstract:** We consider the time-dependent dynamical system $\ddot{q}^{a} = -\Gamma^{a}_{bc}\dot{q}^{b}\dot{q}^{c} - \omega(t)Q^{a}(q)$, where $\omega(t)$ is a non-zero arbitrary function and the connection coefficients $\Gamma^{a}_{bc}$ are computed from the kinetic metric (kinetic energy) of the system. In order to determine the quadratic first integrals (QFIs) $I$, we assume that $I = K_{ab}\dot{q}^{a}\dot{q}^{b} + K_{a}\dot{q}^{a} + K$, where the unknown coefficients $K_{ab}$, $K_{a}$, $K$ are tensors depending on $t$, $q^{a}$, and impose the condition $\frac{dI}{dt}=0$. This condition leads to a system of partial differential equations (PDEs) involving the quantities $K_{ab}$, $K_{a}$, $K$, $\omega(t)$, and $Q^{a}(q)$. From these PDEs, it follows that $K_{ab}$ is a Killing tensor (KT) of the kinetic metric. We use the KT $K_{ab}$ in two ways: a. we assume a general polynomial form in $t$ for both $K_{ab}$ and $K_{a}$; b. we express $K_{ab}$ in a basis of the KTs of order 2 of the kinetic metric, assuming the coefficients to be functions of $t$. In both cases, this leads to a new system of PDEs whose solution requires that we specify either $\omega(t)$ or $Q^{a}(q)$. We consider first the case where $\omega(t)$ is a general polynomial in $t$ and find that the dynamical system then admits two independent QFIs, which we collect in a Theorem. Next, we specify the quantities $Q^{a}(q)$ to be those of the generalized time-dependent Kepler potential $V = -\frac{\omega(t)}{r^{\nu}}$ and determine the functions $\omega(t)$ for which QFIs are admitted. We extend the discussion to the non-linear differential equation $\ddot{x} = -\omega(t)x^{\mu} + \phi(t)\dot{x}$ ($\mu \neq -1$) and compute the relation between the coefficients $\omega(t)$, $\phi(t)$ so that QFIs are admitted. We apply the results to determine the QFIs of the generalized Lane–Emden equation.

**Keywords:** time-dependent dynamical systems; quadratic first integrals; Killing tensors; kinetic metric; Kepler potential; oscillator; Lane-Emden equation

#### **1. Introduction**

The equations of motion of a dynamical system define in the configuration space a Riemannian structure with the metric of the kinetic energy (kinetic metric). This metric is inherent in the structure of the dynamical system; therefore, we expect that it will determine the first integrals (FIs) of the system which are important in its evolution. On the other hand a metric is fixed by its symmetries, that is, the linear collineations: Killing vectors (KVs), homothetic vectors (HVs), conformal Killing vectors (CKVs), affine collineations (ACs), projective collineations (PCs); the quadratic collineations: second order Killing tensors (KTs). The question then is how the FIs of the dynamical system and the geometric symmetries of the kinetic metric are related.

**Citation:** Mitsopoulos, A.; Tsamparlis, M. Quadratic First Integrals of Time-Dependent Dynamical Systems of the Form $\ddot{q}^a = -\Gamma^a_{bc}\dot{q}^b\dot{q}^c - \omega(t)Q^a(q)$. *Mathematics* **2021**, *9*, 1503. https://doi.org/10.3390/math9131503

Academic Editor: José Velhinho

Received: 23 April 2021; Accepted: 24 June 2021; Published: 27 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The standard way to determine the FIs of a differential equation is the use of Lie/Noether symmetries, which applies to the point as well as the generalized Lie/Noether symmetries. The relation of the Lie/Noether symmetries with the symmetries of the kinetic metric has been considered mostly in the case of point symmetries for autonomous conservative dynamical systems moving in a Riemannian space. In particular, it has been shown (see, e.g., [1–4]) that the Lie point symmetries are generated by the special projective algebra of the kinetic metric whereas the Noether point symmetries are generated by the homothetic algebra of the kinetic metric, the latter being a subalgebra of the projective algebra. A recent clear statement of these results is discussed in [5].

In addition to the autonomous conservative systems, this method has been applied to the time-dependent potentials $W(t,q) = \omega(t)V(q)$, that is, to equations of the form $\ddot{q}^a = -\Gamma^a_{bc}\dot{q}^b\dot{q}^c - \omega(t)V^{,a}(q)$ (see, e.g., [6–12]). In this case it has been shown that the Lie point symmetries, the Noether point symmetries and the associated FIs are computed in terms of the collineations of the kinetic metric plus a set of constraint conditions involving the time-dependent potential and the collineation vectors. These time-dependent potentials are important because (among others) they contain the time-dependent oscillator (see, e.g., [8,10,13–15]) and the time-dependent Kepler potential (see, e.g., [12,16–18]). A further development along the same line is the extension of this method to time-dependent potentials $W(t,q)$ with linear damping terms [12]. It has been shown that under a suitable time transformation the damping term can be removed and the problem reduces to a time-dependent potential of the form $W(t,q) = \bar{\omega}(t)V(q)$ but with a different $\bar{\omega}(t)$. Finally, the Lie/Noether method has been applied to the study of partial differential equations (PDEs) [4,19–21].

Besides the aforementioned Lie/Noether method, there is a different method which computes the FIs in terms of the collineations of the kinetic metric without using Lie symmetries. It is this method that we shall apply in this paper. It proceeds as follows.

One assumes the generic quadratic first integral (QFI) to be of the form (the linear FIs (LFIs) are also included for $K_{ab} = 0$)

$$I = K_{ab}\dot{q}^a\dot{q}^b + K_a\dot{q}^a + K \tag{1}$$

where the coefficients $K_{ab}$, $K_a$, $K$ are tensors depending on the coordinates $t$, $q^a$, and imposes the condition $\frac{dI}{dt} = 0$. Using again the equations of motion to replace the quantities $\ddot{q}^a$ whenever they appear, this condition leads to a system of PDEs involving the unknown quantities $K_{ab}$, $K_a$, $K$ and the dynamical elements, i.e., the potential and the generalized forces of the system. The solution of this system of PDEs provides the QFIs (1). For future reference we shall call this method the *direct method*.
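The mechanics of the direct method can be illustrated symbolically. The sketch below (a minimal one-dimensional instance of our own, not an example from the paper) imposes $dI/dt = 0$ along the motion $\ddot{x} = -\omega(t)Q(x)$ with sympy and collects powers of the velocity; the vanishing coefficients are exactly the one-dimensional analogues of the PDE system derived in Section 2.

```python
import sympy as sp

t, x, v = sp.symbols('t x v')           # time, coordinate, velocity (q-dot)
K2 = sp.Function('K2')(t, x)            # quadratic coefficient (K_ab in 1d)
K1 = sp.Function('K1')(t, x)            # linear coefficient (K_a in 1d)
K0 = sp.Function('K0')(t, x)            # scalar K
omega = sp.Function('omega')(t)
Q = sp.Function('Q')(x)

I = K2*v**2 + K1*v + K0                 # the QFI ansatz (1)
accel = -omega*Q                        # 1d version of (2); Gamma = 0 in E^1

# total time derivative along the motion: d/dt = d_t + v*d_x + qddot*d_v
dIdt = sp.expand(sp.diff(I, t) + v*sp.diff(I, x) + accel*sp.diff(I, v))

# dI/dt = 0 must hold identically in v, so each coefficient vanishes separately
conds = [dIdt.coeff(v, k) for k in range(4)]
assert conds[3] == sp.diff(K2, x)                                             # analogue of (5)
assert sp.simplify(conds[2] - (sp.diff(K2, t) + sp.diff(K1, x))) == 0          # analogue of (6)
assert sp.simplify(conds[1] - (-2*omega*Q*K2 + sp.diff(K1, t) + sp.diff(K0, x))) == 0  # (7)
assert sp.simplify(conds[0] - (sp.diff(K0, t) - omega*Q*K1)) == 0              # analogue of (8)
```

In one dimension the first condition says that $K_2$ depends on $t$ only, the one-dimensional counterpart of the Killing tensor condition.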

The system of PDEs consists of two parts: a. the geometric part, which is independent of the dynamical quantities; b. the dynamical part, which contains the scalar $K$ and the dynamical quantities. The main conclusion of the geometric part is that the tensor $K_{ab}$ is a KT of the kinetic metric whereas the vector $K_a$ is related to the linear collineations of that metric. The dynamical part involves the scalar $K$ which is determined by a set of constraint conditions involving $K_{ab}$, $K_a$, $K$, the potential and the generalized forces. Once $K$ is computed, one gets the corresponding QFI $I$.

The direct method can always be related to the Noether symmetries. Indeed, assuming that the system has a regular Lagrangian (which is always the case since we assume that the kinetic energy exists), it can be shown by using the inverse Noether theorem (see [22] and Section II in [23]) that to each QFI $I$ one determines an associated gauged generalized Noether symmetry with generator $\eta^a = -2K_{ab}\dot{q}^b - K^a$ and Noether function $f = -K_{ab}\dot{q}^a\dot{q}^b + K$ whose Noether integral is the considered QFI. Therefore, we conclude that all QFIs of the form (1) are Noetherian, provided the Lagrangian is regular, that is, the dynamical equations can be solved in terms of $\ddot{q}^a$.

Moreover, the direct method has been employed in the literature (see [17,24–26]) both for autonomous and time-dependent dynamical systems. A recent account of this method in the case of autonomous conservative systems together with relevant references can be found in [27]. This approach being geometric is powerful and convenient because with minimal calculations it allows the computation of the FIs by using known results from differential geometry.

The purpose of the present work is to apply the direct method to compute the QFIs of time-dependent equations of the form $\ddot{q}^a = -\Gamma^a_{bc}\dot{q}^b\dot{q}^c - \omega(t)Q^a(q)$. Because many well-known dynamical systems fall in this category, we intend to recover in a single direct approach all the known results derived from the Lie/Noether symmetry method, which are scattered in a large number of papers.

As explained above, the solution of the system requires that the tensor $K_{ab}$ is a KT of the kinetic metric. In general, the computation of the KTs of a metric is a major task. However, for spaces of constant curvature, this problem has been solved (see [28–30]). Therefore, in this paper, we restrict our discussion to Euclidean spaces only. Since the KT $K_{ab}$ is a function of $t$, $q^a$, we suggest two procedures of work: (a) the polynomial method; (b) the basis method.

In the polynomial method, one assumes a general polynomial form in the variable $t$ both for the KT $K_{ab}$ and the vector $K_a$ and substitutes into the equations of the relevant system. In the basis method, one first computes a basis of the KTs of order 2 of the kinetic metric and then expresses the KT $K_{ab}$ in this basis with the coefficients taken to be functions of $t$. The vector $K_a$ and the FIs follow from the solution of the system of PDEs. Both methods are suitable for autonomous dynamical systems, but for time-dependent systems it appears that the basis method is preferable.

Concerning the quantities $\omega(t)$ and $Q^a(q)$, again, there are two ways to proceed.


In the following, we shall consider both the polynomial method and the basis method, starting from the former. As a first application, we assume the KT $K_{ab} = N(t)\gamma_{ab}$ where $N(t)$ is an arbitrary function and show that we recover all the point Noether integrals found in [12]. As a second application, we assume that $\omega(t) = b_0 + b_1t + \ldots + b_\ell t^\ell$ with $b_\ell \neq 0$ and $\ell \geq 1$ whereas the quantities $Q^a$ are unspecified. We find that in this case the system admits two families of independent QFIs as stated in Theorem 1.

Subsequently, we consider the basis method. This is carried out in two steps. In the first step, we assume that we know a basis $\{C_{(N)ab}(q)\}$ of the space of KTs of the kinetic metric and require that $K_{ab}$ has the form $K_{ab}(t,q) = \sum_{N=1}^{m}\alpha_N(t)C_{(N)ab}(q)$. In the second step, we specify the generalized forces to be conservative with the time-dependent Newtonian generalized Kepler potential $V = -\frac{\omega(t)}{r^\nu}$ where $\nu$ is a non-zero real constant and $r = \sqrt{x^2 + y^2 + z^2}$. This potential for $\nu = -2, 1$ includes, respectively, the three-dimensional (3d) time-dependent oscillator and the time-dependent Kepler potential. For other values of $\nu$ it reduces to other important dynamical systems; for example, for $\nu = 2$ one obtains the Newton–Cotes potential (see, e.g., [31]). We determine the QFIs of the time-dependent generalized Kepler potential and recover in a systematic way the known results concerning the QFIs of the 3d time-dependent oscillator, the time-dependent Kepler potential and the Newton–Cotes potential. For easier reference, we collect the results in Table 2 of Section 14.

Using the well-known result that by a reparameterization the linear damping term $\phi(t)\dot{q}^a$ of a dynamical equation is absorbed into a time-dependent force of the form $\omega(t)Q^a(q)$, we also study the non-linear differential equation $\ddot{x} = -\omega(t)x^\mu + \phi(t)\dot{x}$ ($\mu \neq -1$) and compute the relation between the coefficients $\omega(t)$, $\phi(t)$ for which QFIs are admitted. It is found that a family of 'frequencies' $\bar{\omega}(s)$ is admitted which for $\mu = 0, 1, 2$ is parameterized with functions whereas for $\mu \neq -1, 0, 1, 2$ is parameterized with constants. As a further application, we study the integrability of the well-known generalized Lane–Emden equation.

The structure of the paper is as follows. In Section 2, we determine the system of PDEs resulting from the condition $dI/dt = 0$. In Section 3, we assume that the KT is proportional to the kinetic metric and derive the point Noether FIs of the time-dependent dynamical system (2). In Section 4, we consider the polynomial method and define the general forms of the KT $K_{ab}$ and the vector $K_a$ which lead to a new form of the system of PDEs. In Section 5, we assume that $\omega(t)$ is a general polynomial in $t$ and we find that the resulting time-dependent system admits two independent QFIs as stated in Theorem 1. In Section 6, we discuss some special cases of the QFI $I_n$ of Theorem 1. In Section 7, we consider the basis method. In Section 8, we find a basis for the KTs in $E^3$ in order to

apply the basis method to 3d Newtonian systems. In Sections 9–13, we study the time-dependent generalized Kepler potential and find the functions $\omega(t)$ for which it admits QFIs. In particular, in Section 13, we study a special class of time-dependent oscillators with frequency $\omega_{3O}(t)$ as given in Equation (123). We collect our results for the several values of $\nu$ in Table 2 of Section 14. In Section 15, we use the independent LFIs $I_{41i}$, $I_{42i}$ given in Equations (125) and (126) to integrate the equations of the time-dependent oscillators defined in Section 13, and the FIs $L_i$, $E_2$, $A_i$ determined in Section 11.1 to integrate the time-dependent Kepler potential with $\omega(t) = \frac{k}{b_0 + b_1t}$ where $kb_1 \neq 0$. In Section 16, we consider the second-order non-linear time-dependent differential Equation (154) and show that it is integrable with an associated QFI given in Equation (175) iff the functions $\omega(t)$, $\phi(t)$ are related as shown in Equation (174). For the special values $\mu = 0, 1, 2$ we find also that there exist additional relations between $\omega(t)$, $\phi(t)$ for which the resulting differential equation admits a QFI. For $\mu = 1$, Equation (154) admits the general solution (166) provided that condition (165) is satisfied. We apply these results in Section 16.1 where we study the properties of the well-known generalized Lane–Emden equation. Finally, in Section 17, we draw our conclusions and, in Appendix A, we give the proof of Theorem 1.

#### **2. The System of Equations**

We consider the dynamical system

$$\ddot{q}^a = -\Gamma^a\_{bc}\dot{q}^b\dot{q}^c - \omega(t)Q^a(q) \tag{2}$$

where $\Gamma^a_{bc}$ are the Riemannian connection coefficients determined by the kinetic metric $\gamma_{ab}$ (kinetic energy) of the system and $-\omega(t)Q^a(q)$ are the time-dependent generalized forces. The Einstein summation convention is assumed and the metric $\gamma_{ab}$ is used for lowering and raising the indices.
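To make "connection coefficients computed from the kinetic metric" concrete, the following sketch (our own illustration, not an example from the paper) computes $\Gamma^a_{bc}$ with sympy for the free particle in polar coordinates, whose kinetic energy $T = \frac{1}{2}(\dot{r}^2 + r^2\dot{\theta}^2)$ gives the kinetic metric $\gamma_{ab} = \mathrm{diag}(1, r^2)$:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
q = [r, th]
# kinetic metric of a unit-mass particle in polar coordinates:
# T = (1/2)(rdot^2 + r^2 thetadot^2)  =>  gamma_ab = diag(1, r^2)
g = sp.Matrix([[1, 0], [0, r**2]])
ginv = g.inv()

def Gamma(a, b, c):
    """Riemannian connection coefficient Gamma^a_bc of the kinetic metric."""
    return sp.simplify(sp.Rational(1, 2)*sum(
        ginv[a, d]*(sp.diff(g[d, b], q[c]) + sp.diff(g[d, c], q[b]) - sp.diff(g[b, c], q[d]))
        for d in range(2)))

# the non-zero coefficients reproduce the familiar free equations of motion
# rddot = r*thetadot^2  and  thetaddot = -(2/r)*rdot*thetadot
assert sp.simplify(Gamma(0, 1, 1) + r) == 0      # Gamma^r_{theta theta} = -r
assert sp.simplify(Gamma(1, 0, 1) - 1/r) == 0    # Gamma^theta_{r theta} = 1/r
```

With the generalized force $-\omega(t)Q^a(q)$ appended to the geodesic part, these coefficients produce exactly Equation (2) for this coordinate system.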

We next consider a function $I(t, q^a, \dot{q}^a)$ of the form

$$I = K\_{ab}(t,q)\dot{q}^a \dot{q}^b + K\_a(t,q)\dot{q}^a + K(t,q) \tag{3}$$

where *Kab* is a symmetric tensor, *K<sup>a</sup>* is a vector and *K* is an invariant. We demand *I* be a FI of (2) by imposing the condition

$$\frac{dI}{dt} = 0.\tag{4}$$

Using the dynamical Equation (2) to replace $\ddot{q}^a$ whenever it appears, we find the system of equations

$$K_{(ab;c)} = 0 \tag{5}$$

$$K_{ab,t} + K_{(a;b)} = 0 \tag{6}$$

$$-2\omega K_{ab}Q^b + K_{a,t} + K_{,a} = 0 \tag{7}$$

$$K_{,t} - \omega K_aQ^a = 0 \tag{8}$$

$$K_{a,tt} + \omega\left(K_bQ^b\right)_{,a} - 2\omega_{,t}K_{ab}Q^b - 2\omega K_{ab,t}Q^b = 0 \tag{9}$$

$$K_{[a;b],t} - 2\omega\left(K_{[a|c|}Q^c\right)_{;b]} = 0 \tag{10}$$

where the last two Equations (9) and (10) express the integrability conditions $K_{,[at]} = 0$ and $K_{,[ab]} = 0$, respectively, for the scalar $K$. We also note that round and square brackets indicate symmetrization and antisymmetrization, respectively, of the enclosed indices; indices enclosed between vertical lines are overlooked by the symmetrization or antisymmetrization symbols; a comma indicates partial derivative and a semicolon Riemannian covariant derivative.

Equation (5) implies that *Kab* is a KT of order 2 (possibly zero) of the kinetic metric *γab*.
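The Killing tensor condition (5) is easy to verify directly. The sketch below (our own illustration, not from the paper) checks it in the Euclidean plane, where $\Gamma^a_{bc} = 0$ so the covariant derivatives reduce to partial ones, for the KT built as the symmetrized product of the rotational KV with itself:

```python
import sympy as sp

x, y = sp.symbols('x y')
q = [x, y]

# rotational Killing vector of the Euclidean plane: X = (-y, x)
X = [-y, x]
# a KT of order 2 as the symmetrized product K_ab = X_a X_b
K = [[X[a]*X[b] for b in range(2)] for a in range(2)]

def K_sym_grad(a, b, c):
    """K_(ab;c) up to normalization; in Cartesian coordinates ';' is a partial derivative."""
    return sp.simplify(sp.diff(K[a][b], q[c]) + sp.diff(K[b][c], q[a]) + sp.diff(K[c][a], q[b]))

# condition (5): every fully symmetrized derivative vanishes
assert all(K_sym_grad(a, b, c) == 0
           for a in range(2) for b in range(2) for c in range(2))
```

In Euclidean space every second-order KT arises as a sum of such symmetrized products of KVs, which is what makes the basis method of Section 7 workable.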

The solution of the system requires the function $\omega(t)$ and the quantities $Q^a(q)$, both of which are characteristic of the given dynamical system. There are two ways to proceed.


However, before continuing with this kind of consideration, we first proceed with the simple geometric choice $K_{ab} = N(t)\gamma_{ab}$ where $N(t)$ is an arbitrary smooth function. With the KT $K_{ab}$ specified in this way, both the function $\omega(t)$ and the quantities $Q^a(q)$ stay unspecified and act as constraints.

#### **3. The Point Noether FIs of the Time-Dependent Dynamical System (2)**

We consider the simplest choice

$$K\_{ab} = N(t)\gamma\_{ab} \tag{11}$$

where $N(t)$ is an arbitrary smooth function. This choice is purely geometric; therefore, the function $\omega(t)$ and the quantities $Q^a(q)$ are unspecified and act as constraints, whereas the vector $K_a$ is identified with a collineation of the kinetic metric. With this $K_{ab}$, the system of Equations (5)–(10) becomes (Equation (5) vanishes trivially)

$$N_{,t}\gamma_{ab} + K_{(a;b)} = 0 \tag{12}$$

$$-2\omega NQ_a + K_{a,t} + K_{,a} = 0 \tag{13}$$

$$K_{,t} - \omega K_aQ^a = 0 \tag{14}$$

$$K_{a,tt} + \omega\left(K_bQ^b\right)_{,a} - 2\omega_{,t}NQ_a - 2\omega N_{,t}Q_a = 0 \tag{15}$$

$$K_{[a;b],t} - 2\omega NQ_{[a;b]} = 0. \tag{16}$$

We consider the following cases.

*3.1. Case $K_a = K_a(q)$ Is the HV of $\gamma_{ab}$ with Homothety Factor $\psi$*

In this case, *Ka*,*<sup>t</sup>* = 0 and *K*(*a*;*b*) = *ψγab* where *ψ* is an arbitrary constant.

$$N\_{\!\!\!\!-} = -\psi \implies N = -\psi t + c$$

where *c* is an arbitrary constant.

Equation (12) gives

Equation (16) implies that (take *ω* 6= 0)

$$Q\_{[a;b]} = 0 \implies Q\_a = V\_{\mu}$$

where $V = V(q)$ is an arbitrary potential. Replacing in (13) we find that

$$K_{,a} = 2\omega(-\psi t + c)V_{,a} \implies K = 2\omega(-\psi t + c)V + M(t)$$

where *M*(*t*) is an arbitrary function.

Substituting the function *K*(*t*, *q*) in (14) we get

$$\omega K_aV^{,a} - 2\omega_{,t}(-\psi t + c)V + 2\omega\psi V - M_{,t} = 0. \tag{17}$$

The remaining condition (15) is just the partial derivative of (17), and hence is satisfied trivially.

Moreover, since $\omega \neq 0$, Equation (17) can be written in the form

$$K_aV^{,a} - 2(\ln\omega)_{,t}(-\psi t + c)V + 2\psi V - \frac{M_{,t}}{\omega} = 0 \tag{18}$$

which implies that

$$2(\ln\omega)_{,t}(-\psi t + c) = c_1 \tag{19}$$

$$M_{,t} = c_2\omega \tag{20}$$

where $c_1$, $c_2$ are arbitrary constants.

Therefore, Equation (18) becomes

$$K\_a V^{,a} + (2\psi - c\_1)V - c\_2 = 0.\tag{21}$$

The QFI is

$$I_1 = (-\psi t + c)\gamma_{ab}\dot{q}^a\dot{q}^b + K_a(q)\dot{q}^a + 2\omega(-\psi t + c)V + M(t) \tag{22}$$

where $Q_a = V_{,a}$ and the quantities $\omega(t)$, $M(t)$, $V(q)$, $K_a(q)$ satisfy the conditions (19)–(21).
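As a numerical sanity check of (22), consider a one-dimensional toy model of our own choosing (not an example from the paper): take $\gamma = 1$, the HV $K_a = x$ ($\psi = 1$), constant $\omega = 1$ (allowed by (19) with $c_1 = 0$), $c = 0$ and $M = 0$. Condition (21) with $c_2 = 0$ then reads $xV' + 2V = 0$, i.e., $V = 1/x^2$, so $I_1 = -t\dot{x}^2 + x\dot{x} - 2t/x^2$ should be conserved along $\ddot{x} = -V'(x) = 2/x^3$:

```python
def accel(t, x):
    # qddot = -omega(t) V'(x) with omega = 1 and V = 1/x^2 (toy choices, see text)
    return 2.0 / x**3

def qfi(t, x, v):
    # I_1 of (22) with gamma = 1, K_a = x (psi = 1), c = 0, M = 0, V = 1/x^2
    return -t*v*v + x*v - 2.0*t/x**2

def rk4_step(t, x, v, h):
    # one classical Runge-Kutta step for the system (xdot, vdot) = (v, accel)
    def f(t, x, v):
        return v, accel(t, x)
    k1x, k1v = f(t, x, v)
    k2x, k2v = f(t + h/2, x + h/2*k1x, v + h/2*k1v)
    k3x, k3v = f(t + h/2, x + h/2*k2x, v + h/2*k2v)
    k4x, k4v = f(t + h, x + h*k3x, v + h*k3v)
    return (x + h/6*(k1x + 2*k2x + 2*k3x + k4x),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

t, x, v, h = 0.0, 1.0, 1.0, 1.0e-3
I_start = qfi(t, x, v)                 # = 1.0 for these initial data
for _ in range(1000):                  # integrate up to t = 1
    x, v = rk4_step(t, x, v, h)
    t += h
drift = abs(qfi(t, x, v) - I_start)    # stays at the integrator's error level
```

For this step size the drift stays many orders of magnitude below the size of $I_1$ itself; breaking any one of the conditions (19)–(21), e.g. making $\omega$ time-dependent without adjusting $V$, destroys the conservation immediately.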

*3.2. Case $K_a = -M(t)S_{,a}(q)$ Where $S_{,a}$ Is the Gradient HV of $\gamma_{ab}$*

In this case $S_{;ab} = \psi\gamma_{ab}$ and $M(t) \neq 0$ is an arbitrary function. Equation (12) implies $N_{,t} = \psi M$.

From Equation (16) we find that there exists a potential function *V*(*q*) such that *Q<sup>a</sup>* = *V*,*a*.

Replacing the above results in (13) we obtain

$$K_{,a} = 2\omega NV_{,a} + M_{,t}S_{,a} \implies K = 2\omega NV + M_{,t}S + C(t)$$

where *C*(*t*) is an arbitrary function.

Substituting in (14) we get (take $\omega M \neq 0$)

$$\omega MS_{,a}V^{,a} + 2\omega_{,t}NV + 2\omega\psi MV + M_{,tt}S + C_{,t} = 0 \implies$$

$$S_{,a}V^{,a} + 2\psi V + \frac{2(\ln\omega)_{,t}N}{M}V + \frac{M_{,tt}}{\omega M}S + \frac{C_{,t}}{\omega M} = 0$$

which implies that

$$\frac{2(\ln\omega)_{,t}N}{M} = d_1 \tag{23}$$

$$\frac{M_{,tt}}{\omega M} = m \tag{24}$$

$$\frac{C_{,t}}{\omega M} = k \tag{25}$$

$$S_{,a}V^{,a} + (2\psi + d_1)V + mS + k = 0 \tag{26}$$

where $d_1$, $m$, $k$ are arbitrary constants. The remaining condition (15) is satisfied identically. The QFI is

$$I_2 = N\gamma_{ab}\dot{q}^a\dot{q}^b - MS_{,a}\dot{q}^a + 2\omega NV + M_{,t}S + C(t) \tag{27}$$

where $Q_a = V_{,a}$, $N_{,t} = \psi M$ and the conditions (23)–(26) must be satisfied.

*3.3. Case $Q_a = V_{,a}$ and $K_a = -M(t)V_{,a}(q)$ Where $V_{,a}$ Is the Gradient HV of $\gamma_{ab}$*

Equation (12) implies $N_{,t} = \psi M$ where $\psi$ is the homothety factor of $V_{,a}$. From Equation (13), we obtain

$$K_{,a} = 2\omega NV_{,a} + M_{,t}V_{,a} \implies K = 2\omega NV + M_{,t}V + C(t)$$

where *C*(*t*) is an arbitrary function.

Substituting in (14) we get (take $\omega M \neq 0$)

$$\omega MV_{,a}V^{,a} + 2\omega_{,t}NV + 2\omega\psi MV + M_{,tt}V + C_{,t} = 0 \implies$$

$$V_{,a}V^{,a} + 2\psi V + \frac{2(\ln\omega)_{,t}N}{M}V + \frac{M_{,tt}}{\omega M}V + \frac{C_{,t}}{\omega M} = 0$$

which implies that

$$\frac{M_{,tt}}{\omega M} + \frac{2(\ln\omega)_{,t}N}{M} = d_2 \tag{28}$$

$$\frac{C_{,t}}{\omega M} = k \tag{29}$$

$$V_{,a}V^{,a} + (2\psi + d_2)V + k = 0 \tag{30}$$

where $d_2$, $k$ are arbitrary constants. The remaining conditions are satisfied identically. The QFI is

$$I_3 = N\gamma_{ab}\dot{q}^a\dot{q}^b - MV_{,a}\dot{q}^a + (2\omega N + M_{,t})V + C \tag{31}$$

where $Q_a = V_{,a}$, $N_{,t} = \psi M$ and the conditions (28)–(30) must be satisfied.

The above results reproduce Theorem 2 of [12] which states that the point Noether symmetries of the time-dependent potentials of the form *ω*(*t*)*V*(*q*) are generated by the homothetic algebra of the kinetic metric (provided the Lagrangian is regular).

It is interesting to observe that the QFIs (22), (27) and (31) produced by point Noether symmetries can also be produced by generalized (gauged) Noether symmetries using the inverse Noether theorem. This proves that a Noether FI may not be associated with a unique Noether symmetry.

#### **4. The Polynomial Method for Computing the QFIs**

In the polynomial approach, one assumes a polynomial form in $t$ for the KT $K_{ab}(t,q)$ and the vector $K_a(t,q)$ and solves the resulting system for given $\omega(t)$, $Q^a(q)$. One application of this method can be found in [27] where a general theorem is given which allows the finding of the QFIs of an autonomous conservative dynamical system. In the present work, we generalize the considerations made in [27] and assume that the quantity $K_{ab}(t,q)$ has the form

$$K\_{ab}(t,q) = \mathcal{C}\_{(0)ab}(q) + \sum\_{N=1}^{n} \mathcal{C}\_{(N)ab}(q)\frac{t^N}{N} \tag{32}$$

where *C*(*N*)*ab*, *N* = 0, 1, ..., *n*, is a sequence of arbitrary KTs of order 2 of the kinetic metric *γab*.

This choice of *Kab* and Equation (6) indicate that we set

$$K\_a(t,q) = \sum\_{M=0}^{m} L\_{(M)a}(q)t^M \tag{33}$$

where $L_{(M)a}(q)$, $M = 0, 1, \ldots, m$, are arbitrary vectors.

We note that both powers *n*, *m* in the above polynomial expressions may be infinite. Substituting (32) and (33) in the system of Equations (5)–(10) (Equation (5) is identically zero since *C*(*N*)*ab* are KTs) we obtain the system of equations

$$0 = C_{(1)ab} + C_{(2)ab}t + \ldots + C_{(n)ab}t^{n-1} + L_{(0)(a;b)} + L_{(1)(a;b)}t + \ldots + L_{(m)(a;b)}t^m \tag{34}$$

$$0 = -2\omega C_{(0)ab}Q^b - 2\omega C_{(1)ab}Q^bt - \ldots - 2\omega C_{(n)ab}Q^b\frac{t^n}{n} + L_{(1)a} + 2L_{(2)a}t + \ldots + mL_{(m)a}t^{m-1} + K_{,a} \tag{35}$$

$$0 = K_{,t} - \omega L_{(0)a}Q^a - \omega L_{(1)a}Q^at - \ldots - \omega L_{(m)a}Q^at^m \tag{36}$$

$$\begin{aligned} 0 = {} & \left(-2C_{(0)ab}Q^b - 2C_{(1)ab}Q^bt - \ldots - 2C_{(n)ab}Q^b\frac{t^n}{n}\right)\omega_{,t} - 2\omega C_{(1)ab}Q^b - 2\omega C_{(2)ab}Q^bt - \ldots - 2\omega C_{(n)ab}Q^bt^{n-1} \\ & + 2L_{(2)a} + 6L_{(3)a}t + \ldots + m(m-1)L_{(m)a}t^{m-2} + \omega\left(L_{(0)b}Q^b\right)_{,a} + \omega\left(L_{(1)b}Q^b\right)_{,a}t + \ldots + \omega\left(L_{(m)b}Q^b\right)_{,a}t^m \end{aligned} \tag{37}$$

$$\begin{aligned} 0 = {} & 2\omega\left(C_{(0)[a|c|}Q^c\right)_{;b]} + 2\omega\left(C_{(1)[a|c|}Q^c\right)_{;b]}t + \ldots + 2\omega\left(C_{(n)[a|c|}Q^c\right)_{;b]}\frac{t^n}{n} - L_{(1)[a;b]} \\ & - 2L_{(2)[a;b]}t - \ldots - mL_{(m)[a;b]}t^{m-1}. \end{aligned} \tag{38}$$

In this system of PDEs the pair $\omega(t)$, $Q^a(q)$ is not specified. As we explained in the introduction, we shall fix a general form of $\omega$ and find the admitted QFIs in terms of the (unspecified) $Q^a$. In the following section, we choose $\omega(t)$ to be a general polynomial in $t$; however, any other choice is possible.

**5. The Case** $\omega(t) = b_0 + b_1t + \cdots + b_\ell t^\ell$ **with** $b_\ell \neq 0$, $\ell \geq 1$

We assume that

$$
\omega(t) = b\_0 + b\_1 t + \dots + b\_\ell t^\ell, \ b\_\ell \neq 0, \ \ell \ge 1\tag{39}
$$

where $\ell$ is the degree of the polynomial. Substituting the function (39) in the system of Equations (34)–(38) we find that there are two independent QFIs, as given in Theorem 1 (the proof of Theorem 1 is in Appendix A).

**Theorem 1.** *The independent QFIs of the time-dependent dynamical system (2) where $\omega(t) = b_0 + b_1t + \ldots + b_\ell t^\ell$ with $b_\ell \neq 0$ and $\ell \geq 1$ are the following:*

*Integral 1.*

$$I_n = \left(C_{(0)ab} + \sum_{k=1}^{n}\frac{t^k}{k}C_{(k)ab}\right)\dot{q}^a\dot{q}^b + \sum_{k=0}^{n}t^kL_{(k)a}\dot{q}^a + \sum_{k=0}^{n}\sum_{r=0}^{\ell}\left(L_{(k)a}Q^ab_r\frac{t^{k+r+1}}{k+r+1}\right) + G(q)$$

*where $n = 0, 1, 2, \ldots$, $C_{(0)ab}$ is a KT, the KTs $C_{(N)ab} = -L_{(N-1)(a;b)}$ for $N = 1, \ldots, n$, $L_{(n)a}$ is a KV, $G(q)$ is an arbitrary function defined by the condition*

$$\mathcal{G}\_{,a} = 2b\_0 \mathcal{C}\_{(0)ab} \mathcal{Q}^b - L\_{(1)a} \tag{40}$$

*s is an arbitrary constant defined by the condition*

$$L\_{(n)a} \mathbb{Q}^a = s \tag{41}$$

*and the following conditions are satisfied*

$$\sum_{s=0}^{\ell-1}\left[-\frac{2(r+s)b_{(r+s\leq\ell)}}{n-s}C_{(n-s\geq0)ab}Q^b - 2b_{(r+s\leq\ell)}C_{(n-s>0)ab}Q^b + b_{(r+s\leq\ell)}\left(L_{(n-s-1\geq0)b}Q^b\right)_{,a}\right] = 0, \ r = 1, 2, \ldots, \ell \tag{42}$$

$$-\sum_{s=1}^{\ell}\left[\frac{2sb_s}{n-s}C_{(n-s\geq0)ab}Q^b\right] + \sum_{s=0}^{\ell}\left[-2b_sC_{(n-s>0)ab}Q^b + b_s\left(L_{(n-s-1\geq0)b}Q^b\right)_{,a}\right] = 0 \tag{43}$$

$$k(k-1)L_{(k)a} - \sum_{s=1}^{\ell}\left[\frac{2sb_s}{k-s-1}C_{(k-s-1\geq0)ab}Q^b\right] + \sum_{s=0}^{\ell}\left[-2b_sC_{(k-s-1>0)ab}Q^b + b_s\left(L_{(k-s-2\geq0)b}Q^b\right)_{,a}\right] = 0 \tag{44}$$

*with $k = 2, 3, \ldots, n$.*

*Integral 2.*

$$I_e = I_e(\ell = 1) = -e^{\lambda t}L_{(a;b)}\dot{q}^a\dot{q}^b + \lambda e^{\lambda t}L_a\dot{q}^a + \left(b_0 - \frac{b_1}{\lambda}\right)e^{\lambda t}L_aQ^a + b_1te^{\lambda t}L_aQ^a$$

*where $L_{(a;b)}$ is a KT, $\left(L_bQ^b\right)_{,a} = \frac{\lambda^3}{b_1}L_a$ and $\lambda^3L_a = -2b_1L_{(a;b)}Q^b$. We note that the FI $I_e$ exists only when $\omega(t) = b_0 + b_1t$, that is, for $\ell = 1$.*

#### **6. Special Cases of the QFI** *I<sup>n</sup>*

The parameter $n$ in the case Integral 1 of Theorem 1 runs over all non-negative integers, i.e., $n = 0, 1, 2, \ldots$. This results in a sequence of QFIs $I_0, I_1, I_2, \ldots$, one QFI $I_n$ for each value of $n$. A significant characteristic of this sequence is that $I_k < I_{k+1}$, that is, each QFI $I_k$ where $k = 0, 1, 2, \ldots$ can be derived from the next QFI $I_{k+1}$ as a subcase.

In the following, we consider some special cases of the QFI *I<sup>n</sup>* for small values of *n*.

*6.1. The QFI I*<sup>0</sup>

For *n* = 0 we have

$$I_0 = C_{(0)ab}\dot{q}^a\dot{q}^b + L_{(0)a}\dot{q}^a + b_\ell s\frac{t^{\ell+1}}{\ell+1} + \ldots + b_1s\frac{t^2}{2} + b_0st$$

where $C_{(0)ab}$ is a KT, $L_{(0)a}$ is a KV, $L_{(0)a}Q^a = s$ and $C_{(0)ab}Q^b = 0$. This QFI consists of the independent FIs

$$I_{0a} = C_{(0)ab}\dot{q}^a\dot{q}^b, \quad I_{0b} = L_{(0)a}\dot{q}^a + b_\ell s\frac{t^{\ell+1}}{\ell+1} + \ldots + b_1s\frac{t^2}{2} + b_0st.$$
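The LFI $I_{0b}$ is easy to verify symbolically. In the sketch below (our own minimal instance, not from the paper) we take a one-dimensional system with $Q = 1$, so that the translational KV $L = \partial_x$ gives $L_aQ^a = s = 1$, and a quadratic $\omega(t)$ ($\ell = 2$):

```python
import sympy as sp

t = sp.symbols('t')
b0, b1, b2 = sp.symbols('b0 b1 b2')
omega = b0 + b1*t + b2*t**2            # polynomial omega(t) of (39) with ell = 2

x = sp.Function('x')(t)
# 1d system with Q(x) = 1: the KV L = d/dx gives L_a Q^a = s = 1
eom = -omega                           # qddot = -omega(t) * 1

# I_0b = xdot + s*(b_2 t^3/3 + b_1 t^2/2 + b_0 t) with s = 1
I0b = x.diff(t) + b2*t**3/3 + b1*t**2/2 + b0*t

# substitute the equation of motion into dI/dt; the result must vanish identically
dI = I0b.diff(t).subs(x.diff(t, 2), eom)
assert sp.expand(dI) == 0
```

The check works for any degree $\ell$ because the polynomial tail of $I_{0b}$ is precisely the antiderivative of $s\,\omega(t)$.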

*6.2. The QFI I*<sup>1</sup>

For *n* = 1 the conditions (41)–(44) become

$$L_{(1)a}Q^a = s \tag{45}$$

$$\left(L_{(0)b}Q^b\right)_{,a} = -2(\ell+1)L_{(0)(a;b)}Q^b \tag{46}$$

$$kb_kC_{(0)ab}Q^b = -(\ell-k+1)b_{k-1}L_{(0)(a;b)}Q^b, \ k = 1, \ldots, \ell. \tag{47}$$

Since $b_\ell \neq 0$, the last condition for $k = \ell$ gives

$$\mathcal{C}\_{(0)ab}\mathcal{Q}^b = -\frac{b\_{\ell-1}}{\ell b\_{\ell}}L\_{(0)(a;b)}\mathcal{Q}^b$$

and the remaining equations become

$$\left[ (\ell - k + 1)b\_{k-1} - \frac{kb\_kb\_{\ell-1}}{\ell b\_{\ell}} \right] L\_{(0)(a;b)} \mathcal{Q}^b = 0, \ k = 1, \ldots, \ell - 1.$$

The last set of equations exists only for $\ell \geq 2$. From these equations, using mathematical induction, we prove after successive substitutions that

$$\left(b\_0 - \frac{b\_{\ell-1}^{\ell}}{\ell^{\ell} b\_{\ell}^{\ell-1}}\right) L\_{(0)(a;b)} \mathbb{Q}^b = 0.$$

The QFI is ($I_0$ is a subcase of $I_1$)

$$\begin{aligned} I_1 = {} & \left(-tL_{(0)(a;b)} + C_{(0)ab}\right)\dot{q}^a\dot{q}^b + tL_{(1)a}\dot{q}^a + L_{(0)a}\dot{q}^a + sb_\ell\frac{t^{\ell+2}}{\ell+2} + \left(sb_{\ell-1} + b_\ell L_{(0)a}Q^a\right)\frac{t^{\ell+1}}{\ell+1} + \ldots \\ & + \left(sb_0 + b_1L_{(0)a}Q^a\right)\frac{t^2}{2} + b_0L_{(0)a}Q^at + G(q) \end{aligned}$$

where $C_{(0)ab}$, $L_{(0)(a;b)}$ are KTs, $L_{(1)a}$ is a KV, $L_{(1)a}Q^a = s$, $\left(L_{(0)b}Q^b\right)_{,a} = -2(\ell+1)L_{(0)(a;b)}Q^b$, $C_{(0)ab}Q^b = -\frac{b_{\ell-1}}{\ell b_\ell}L_{(0)(a;b)}Q^b$, $\left[(\ell-k+1)b_{k-1} - \frac{kb_kb_{\ell-1}}{\ell b_\ell}\right]L_{(0)(a;b)}Q^b = 0$ where $k = 1, \ldots, \ell-1$ and $G_{,a} = 2b_0C_{(0)ab}Q^b - L_{(1)a}$.

For some values of the degree $\ell$ of the polynomial $\omega(t)$ we have:

(1) For $\ell = 1$.

We have $\omega = b_0 + b_1t$ and the QFI is

$$I_1 = \left(-tL_{(0)(a;b)} + C_{(0)ab}\right)\dot{q}^a\dot{q}^b + tL_{(1)a}\dot{q}^a + L_{(0)a}\dot{q}^a + sb_1\frac{t^3}{3} + \left(sb_0 + b_1L_{(0)a}Q^a\right)\frac{t^2}{2} + b_0L_{(0)a}Q^at + G(q)$$
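For $\ell = 1$ the conditions above can be satisfied by a concrete one-dimensional model of our own choosing (an illustration, not an example from the paper): take $L_{(0)} = x$ (so $L_{(0)(a;b)} = 1$ is the KT), for which condition (46) gives $(xQ)' = -4Q$, i.e., $Q = x^{-5}$; (47) with $k = 1$ gives $C_{(0)} = -b_0/b_1$; $L_{(1)} = 0$ forces $s = 0$; and $G_{,x} = 2b_0C_{(0)}Q$ integrates to $G = \frac{b_0^2}{2b_1}x^{-4}$. With $b_0 = b_1 = 1$ the QFI reads $I_1 = -(t+1)\dot{x}^2 + x\dot{x} + (t^2/2 + t + 1/2)x^{-4}$, conserved along $\ddot{x} = -(1+t)x^{-5}$:

```python
def accel(t, x):
    # qddot = -(b0 + b1 t) Q(x) with b0 = b1 = 1 and Q = x^(-5)
    return -(1.0 + t) / x**5

def qfi(t, x, v):
    # I_1 for ell = 1 with L_(0) = x, C_(0) = -1, s = 0, G = x^(-4)/2
    return -(t + 1.0)*v*v + x*v + (0.5*t*t + t + 0.5) / x**4

def rk4_step(t, x, v, h):
    # one classical Runge-Kutta step for (xdot, vdot) = (v, accel)
    def f(t, x, v):
        return v, accel(t, x)
    k1x, k1v = f(t, x, v)
    k2x, k2v = f(t + h/2, x + h/2*k1x, v + h/2*k1v)
    k3x, k3v = f(t + h/2, x + h/2*k2x, v + h/2*k2v)
    k4x, k4v = f(t + h, x + h*k3x, v + h*k3v)
    return (x + h/6*(k1x + 2*k2x + 2*k3x + k4x),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

t, x, v, h = 0.0, 1.0, 1.0, 1.0e-3
I_start = qfi(t, x, v)                 # = 0.5 for these initial data
for _ in range(1000):                  # integrate up to t = 1
    x, v = rk4_step(t, x, v, h)
    t += h
drift = abs(qfi(t, x, v) - I_start)    # stays at the integrator's error level
```

Unlike the autonomous energy integral, this $I_1$ is genuinely time-dependent: each of its polynomial-in-$t$ coefficients is needed to compensate the time dependence of $\omega$.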


(2) For $\ell = 2$.

We have $\omega = b_0 + b_1t + b_2t^2$ and the QFI is

$$\begin{aligned} I_1 = {} & \left(-tL_{(0)(a;b)} + C_{(0)ab}\right)\dot{q}^a\dot{q}^b + tL_{(1)a}\dot{q}^a + L_{(0)a}\dot{q}^a + sb_2\frac{t^4}{4} + \left(sb_1 + b_2L_{(0)a}Q^a\right)\frac{t^3}{3} \\ & + \left(sb_0 + b_1L_{(0)a}Q^a\right)\frac{t^2}{2} + b_0L_{(0)a}Q^at + G(q) \end{aligned}$$

where $C\_{(0)ab}$, $L\_{(0)(a;b)}$ are KTs, $L\_{(1)a}$ is a KV, $L\_{(1)a}Q^a = s$, $\left(L\_{(0)b}Q^b\right)\_{,a} = -6L\_{(0)(a;b)}Q^b$, $C\_{(0)ab}Q^b = -\frac{b\_1}{2b\_2}L\_{(0)(a;b)}Q^b$, $\left(b\_0 - \frac{b\_1^2}{4b\_2}\right)L\_{(0)(a;b)}Q^b = 0$ and $G\_{,a} = 2b\_0C\_{(0)ab}Q^b - L\_{(1)a}$.

(3) For $\ell = 3$.

We have $\omega = b\_0 + b\_1t + b\_2t^2 + b\_3t^3$ and the QFI is

$$\begin{aligned} I\_1 &= \left(-tL\_{(0)(a;b)} + C\_{(0)ab}\right)\dot{q}^a\dot{q}^b + tL\_{(1)a}\dot{q}^a + L\_{(0)a}\dot{q}^a + sb\_3\frac{t^5}{5} + \left(sb\_2 + b\_3L\_{(0)a}Q^a\right)\frac{t^4}{4} + \left(sb\_1 + b\_2L\_{(0)a}Q^a\right)\frac{t^3}{3} \\ &\quad + \left(sb\_0 + b\_1L\_{(0)a}Q^a\right)\frac{t^2}{2} + b\_0L\_{(0)a}Q^a t + G(q) \end{aligned}$$

where $C\_{(0)ab}$, $L\_{(0)(a;b)}$ are KTs, $L\_{(1)a}$ is a KV, $L\_{(1)a}Q^a = s$, $\left(L\_{(0)b}Q^b\right)\_{,a} = -8L\_{(0)(a;b)}Q^b$, $C\_{(0)ab}Q^b = -\frac{b\_2}{3b\_3}L\_{(0)(a;b)}Q^b$, $\left(b\_0 - \frac{b\_1b\_2}{9b\_3}\right)L\_{(0)(a;b)}Q^b = 0$, $\left(b\_1 - \frac{b\_2^2}{3b\_3}\right)L\_{(0)(a;b)}Q^b = 0$ and $G\_{,a} = 2b\_0C\_{(0)ab}Q^b - L\_{(1)a}$.

#### **7. The Basis Method for Computing QFIs**

As explained in the introduction, in the basis method, instead of considering the KT *Kab* to be given as a polynomial in *t* with coefficients which are arbitrary KTs (see Equation (32)), one defines the KT *Kab*(*t*, *q*) by the requirement

$$K\_{ab}(t,q) = \sum\_{N=1}^{m} \alpha\_N(t)\mathbb{C}\_{(N)ab}(q) \tag{48}$$

where *αN*(*t*) are arbitrary smooth functions and the *m* linearly independent KTs *C*(*N*)*ab*(*q*) constitute a basis of the space of KTs of the kinetic metric *γab*(*q*). In this case, one does not assume a form for the vector *Ka*(*t*, *q*) which is determined from the resulting system of Equations (5)–(10).

The basis method has been used previously by Katzin and Levine in [17] in order to determine the QFIs for the time-dependent Kepler potential. As we shall apply the basis method to 3d Newtonian systems, we need a basis of KTs (and other collineations) of the Euclidean space *E*<sup>3</sup>.

#### **8. The Geometric Quantities of** *E*<sup>3</sup>

In *E*<sup>3</sup>, the general KT of order 2 has independent components

$$\begin{array}{rcl} \mathbb{C}\_{11} &=& \frac{a\_6}{2}y^2 + \frac{a\_1}{2}z^2 + a\_4yz + a\_5y + a\_2z + a\_3\\ \mathbb{C}\_{12} &=& \frac{a\_{10}}{2}z^2 - \frac{a\_6}{2}xy - \frac{a\_4}{2}xz - \frac{a\_{14}}{2}yz - \frac{a\_5}{2}x - \frac{a\_{15}}{2}y + a\_{16}z + a\_{17} \\ \mathbb{C}\_{13} &=& \frac{a\_{14}}{2}y^2 - \frac{a\_4}{2}xy - \frac{a\_1}{2}xz - \frac{a\_{10}}{2}yz - \frac{a\_2}{2}x + a\_{18}y - \frac{a\_{11}}{2}z + a\_{19} \\ \mathbb{C}\_{22} &=& \frac{a\_6}{2}x^2 + \frac{a\_7}{2}z^2 + a\_{14}xz + a\_{15}x + a\_{12}z + a\_{13} \\ \mathbb{C}\_{23} &=& \frac{a\_4}{2}x^2 - \frac{a\_{14}}{2}xy - \frac{a\_{10}}{2}xz - \frac{a\_7}{2}yz - (a\_{16} + a\_{18})x - \frac{a\_{12}}{2}y - \frac{a\_8}{2}z + a\_{20} \\ \mathbb{C}\_{33} &=& \frac{a\_1}{2}x^2 + \frac{a\_7}{2}y^2 + a\_{10}xy + a\_{11}x + a\_8y + a\_9 \end{array} \tag{49}$$

where $a\_I$ with $I = 1, 2, \dots, 20$ are arbitrary real constants. The vector $L^a$ generating the KT $C\_{ab} = L\_{(a;b)}$ is

$$L\_{a} = \begin{pmatrix} -a\_{15}y^2 - a\_{11}z^2 + a\_5xy + a\_2xz + 2(a\_{16} + a\_{18})yz + a\_3x + 2a\_4y + 2a\_1z + a\_6 \\ -a\_5x^2 - a\_8z^2 + a\_{15}xy - 2a\_{18}xz + a\_{12}yz + 2(a\_{17} - a\_4)x + a\_{13}y + 2a\_7z + a\_{14} \\ -a\_2x^2 - a\_{12}y^2 - 2a\_{16}xy + a\_{11}xz + a\_8yz + 2(a\_{19} - a\_1)x + 2(a\_{20} - a\_7)y + a\_9z + a\_{10} \end{pmatrix} \tag{50}$$

and the generated KT is

$$C\_{ab} = \begin{pmatrix} a\_5y + a\_2z + a\_3 & -\frac{a\_5}{2}x - \frac{a\_{15}}{2}y + a\_{16}z + a\_{17} & -\frac{a\_2}{2}x + a\_{18}y - \frac{a\_{11}}{2}z + a\_{19} \\ -\frac{a\_5}{2}x - \frac{a\_{15}}{2}y + a\_{16}z + a\_{17} & a\_{15}x + a\_{12}z + a\_{13} & -(a\_{16} + a\_{18})x - \frac{a\_{12}}{2}y - \frac{a\_8}{2}z + a\_{20} \\ -\frac{a\_2}{2}x + a\_{18}y - \frac{a\_{11}}{2}z + a\_{19} & -(a\_{16} + a\_{18})x - \frac{a\_{12}}{2}y - \frac{a\_8}{2}z + a\_{20} & a\_{11}x + a\_8y + a\_9 \end{pmatrix} \tag{51}$$

which is a subcase of the general KT (49) for *a*<sup>1</sup> = *a*<sup>4</sup> = *a*<sup>6</sup> = *a*<sup>7</sup> = *a*<sup>10</sup> = *a*<sup>14</sup> = 0.

We note that the covariant expression of the most general KT $M\_{ij}$ of order 2 of *E*<sup>3</sup> is (see [32,33])

$$M\_{\rm ij} = (\varepsilon\_{\rm ikm}\varepsilon\_{\rm jln} + \varepsilon\_{\rm jkm}\varepsilon\_{\rm iln})A^{mn}q^{k}q^{l} + (B\_{\rm (i}^{l}\varepsilon\_{\rm j)kl} + \lambda\_{(i}\delta\_{\rm j)k} - \delta\_{\rm ij}\lambda\_{k})q^{k} + D\_{\rm ij} \tag{52}$$

where $A^{mn}$, $B\_i^l$, $D\_{ij}$ are constant tensors, all being symmetric and $B\_i^l$ also being traceless; $\lambda^k$ is a constant vector; $\varepsilon\_{ijk}$ is the 3d Levi-Civita symbol. This result is obtained from the solution of the Killing tensor equation in the Euclidean space.

Observe that $A^{mn}$ and $D\_{ij}$ each have six independent components; $B\_i^l$ has five independent components; $\lambda^k$ has three independent components. Therefore, $M\_{ij}$ depends on $6 + 6 + 5 + 3 = 20$ arbitrary real constants, a result which is in accordance with the one given above in Equation (49).
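
The component form (49) can be checked mechanically. The following sympy sketch (illustrative, not part of the paper) verifies that (49) satisfies the Killing tensor equation $C\_{(ab,c)} = 0$ of flat $E^3$ and that it carries exactly 20 free constants:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
a = sp.symbols('a1:21')   # the 20 constants a_1, ..., a_20 of Equation (49)
(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10,
 a11, a12, a13, a14, a15, a16, a17, a18, a19, a20) = a

C = sp.zeros(3, 3)
C[0, 0] = a6/2*y**2 + a1/2*z**2 + a4*y*z + a5*y + a2*z + a3
C[0, 1] = a10/2*z**2 - a6/2*x*y - a4/2*x*z - a14/2*y*z - a5/2*x - a15/2*y + a16*z + a17
C[0, 2] = a14/2*y**2 - a4/2*x*y - a1/2*x*z - a10/2*y*z - a2/2*x + a18*y - a11/2*z + a19
C[1, 1] = a6/2*x**2 + a7/2*z**2 + a14*x*z + a15*x + a12*z + a13
C[1, 2] = a4/2*x**2 - a14/2*x*y - a10/2*x*z - a7/2*y*z - (a16 + a18)*x - a12/2*y - a8/2*z + a20
C[2, 2] = a1/2*x**2 + a7/2*y**2 + a10*x*y + a11*x + a8*y + a9
C[1, 0], C[2, 0], C[2, 1] = C[0, 1], C[0, 2], C[1, 2]

q = (x, y, z)
# Killing tensor equation in E^3 (flat metric, so ';' reduces to ','): C_(ab,c) = 0
for i in range(3):
    for j in range(3):
        for k in range(3):
            cyc = sp.diff(C[i, j], q[k]) + sp.diff(C[j, k], q[i]) + sp.diff(C[k, i], q[j])
            assert sp.expand(cyc) == 0

# all 20 constants actually appear, matching the count below Equation (52)
assert len(C.free_symbols - {x, y, z}) == 20
```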

#### **9. The Time-Dependent Newtonian Generalized Kepler Potential**

The time-dependent Newtonian generalized Kepler potential is $V = -\frac{\omega(t)}{r^\nu}$ where $\nu$ is a non-zero real constant and $r = (x^2 + y^2 + z^2)^{1/2}$. This potential contains (among others) the 3d time-dependent oscillator [8,10,13–15] for $\nu = -2$, the time-dependent Kepler potential [12,16–18] for $\nu = 1$ and the Newton–Cotes potential [31] for $\nu = 2$. The integrability of these systems has been studied in numerous works over the years using various methods, mainly Noether symmetries. Our purpose is to recover the results of these works, together with new ones, using the basis method.

The Lagrangian of the system is

$$L = \frac{1}{2}(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) + \frac{\omega(t)}{r^\nu} \tag{53}$$

and the corresponding Euler–Lagrange equations are

$$\ddot{x} = -\frac{\nu\omega(t)}{r^{\nu+2}}x, \quad \ddot{y} = -\frac{\nu\omega(t)}{r^{\nu+2}}y, \quad \ddot{z} = -\frac{\nu\omega(t)}{r^{\nu+2}}z. \tag{54}$$

For this system $Q^a = \frac{\nu q^a}{r^{\nu+2}}$ where $q^a = (x, y, z)$, whereas $\omega(t)$ is unspecified. We shall determine those $\omega(t)$ for which the resulting FIs are not combinations of the angular momentum.
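
As a quick consistency check (an illustration, not part of the paper), the Euler–Lagrange equations (54) can be reproduced symbolically from the Lagrangian (53):

```python
import sympy as sp

t, nu = sp.symbols('t nu')
w = sp.Function('omega')(t)
x, y, z = (sp.Function(s)(t) for s in ('x', 'y', 'z'))
r2 = x**2 + y**2 + z**2          # r^2

# Lagrangian (53): L = (1/2)(xdot^2 + ydot^2 + zdot^2) + omega(t)/r^nu
L = sp.Rational(1, 2)*(x.diff(t)**2 + y.diff(t)**2 + z.diff(t)**2) + w*r2**(-nu/2)

for q in (x, y, z):
    # Euler-Lagrange equation d/dt(dL/dqdot) - dL/dq = 0 gives (54):
    # qddot = -nu*omega(t)*q/r^(nu+2)
    el = sp.diff(L.diff(q.diff(t)), t) - L.diff(q)
    assert sp.simplify(el - (q.diff(t, 2) + nu*w*q*r2**(-nu/2 - 1))) == 0
```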

The LFIs and the QFIs of the autonomous generalized Kepler potential, that is, *ω*(*t*) = *k* = *const*, have been determined in [27] using the direct method and are listed in Table 1.


**Table 1.** The LFIs/QFIs of the autonomous generalized Kepler potential for *ω*(*t*) = *k* = *const*.

In Table 1, *H<sup>ν</sup>* is the Hamiltonian of the system, *L<sup>i</sup>* are the components of the angular momentum, *R<sup>i</sup>* are the components of the Runge–Lenz vector and *Bij* are the components of the Jauch–Hill–Fradkin tensor.

Using $Q^a = \frac{\nu q^a}{r^{\nu+2}}$, conditions (5)–(10) become (see [17])

$$K\_{(ab;c)} = 0 \tag{55}$$

$$K\_{(a;b)} + K\_{ab,t} = 0 \tag{56}$$

$$K\_{,a} - \frac{2\nu\omega}{r^{\nu+2}}K\_{ab}q^b + K\_{a,t} = 0 \tag{57}$$

$$K\_{,t} - \frac{\nu\omega}{r^{\nu+2}}K\_aq^a = 0 \tag{58}$$

$$K\_{a,tt} + \nu\omega\left(\frac{K\_bq^b}{r^{\nu+2}}\right)\_{,a} - \frac{2\nu\dot{\omega}}{r^{\nu+2}}K\_{ab}q^b - \frac{2\nu\omega}{r^{\nu+2}}K\_{ab,t}q^b = 0 \tag{59}$$

$$K\_{[a;b],t} - 2\nu\omega\left(\frac{K\_{[a|c|}q^c}{r^{\nu+2}}\right)\_{;b]} = 0. \tag{60}$$

From the Lagrangian (53), we infer that the kinetic metric is *δij* = *diag*(1, 1, 1).

According to the basis approach, the KT *Kab*(*t*, *q*) of (55) is the KT given by (49) but the 20 arbitrary constants *a<sup>I</sup>* are assumed to be time-dependent functions *aI*(*t*).

Condition (56) gives

$$K\_{a,b} + K\_{b,a} = -2K\_{ab,t} \implies$$

$$K\_{1,1} = -K\_{11,t} \tag{61}$$

$$\mathbf{K\_{2,2}} \quad = \ -\mathbf{K\_{22,t}} \tag{62}$$

$$\mathbf{K}\_{\mathbf{3},3} \quad = \quad -\mathbf{K}\_{\mathbf{3}\mathbf{3},t} \tag{63}$$

$$\mathbf{K}\_{1,2} + \mathbf{K}\_{2,1} \quad = \quad -2\mathbf{K}\_{12,t} \tag{64}$$

$$\mathbf{K}\_{1,3} + \mathbf{K}\_{3,1} \quad = \quad -2\mathbf{K}\_{13,t} \tag{65}$$

$$\mathbf{K\_{2,3}} + \mathbf{K\_{3,2}} \quad = \quad -\mathbf{2}\mathbf{K\_{23,t}}.\tag{66}$$

From the first three conditions (61)–(63) we find

$$\begin{array}{rcl} K\_1 &=& -\frac{\dot{a}\_6}{2}xy^2 - \frac{\dot{a}\_1}{2}xz^2 - \dot{a}\_4xyz - \dot{a}\_5xy - \dot{a}\_2xz - \dot{a}\_3x + A(y, z, t) \\ K\_2 &=& -\frac{\dot{a}\_6}{2}x^2y - \frac{\dot{a}\_7}{2}yz^2 - \dot{a}\_{14}xyz - \dot{a}\_{15}xy - \dot{a}\_{12}yz - \dot{a}\_{13}y + B(x, z, t) \\ K\_3 &=& -\frac{\dot{a}\_1}{2}x^2z - \frac{\dot{a}\_7}{2}y^2z - \dot{a}\_{10}xyz - \dot{a}\_{11}xz - \dot{a}\_8yz - \dot{a}\_9z + C(x, y, t) \end{array}$$

where *A*, *B*, *C* are arbitrary functions. Substituting these results in (64)–(66) we obtain

$$0 = \dot{a}\_{10}z^2 - 3\dot{a}\_6xy - 2\dot{a}\_4xz - 2\dot{a}\_{14}yz - 2\dot{a}\_5x - 2\dot{a}\_{15}y + 2\dot{a}\_{16}z + 2\dot{a}\_{17} + A\_{,2} + B\_{,1} \tag{67}$$

$$0 = \dot{a}\_{14}y^2 - 2\dot{a}\_4xy - 3\dot{a}\_1xz - 2\dot{a}\_{10}yz - 2\dot{a}\_2x + 2\dot{a}\_{18}y - 2\dot{a}\_{11}z + 2\dot{a}\_{19} + A\_{,3} + C\_{,1} \tag{68}$$

$$0 = \dot{a}\_4x^2 - 2\dot{a}\_{14}xy - 2\dot{a}\_{10}xz - 3\dot{a}\_7yz - 2(\dot{a}\_{16} + \dot{a}\_{18})x - 2\dot{a}\_{12}y - 2\dot{a}\_8z + 2\dot{a}\_{20} + B\_{,3} + C\_{,2} \tag{69}$$

By taking the second partial derivatives of (67) with respect to (wrt) *x*, *y*, of (68) wrt *x*, *z* and of (69) wrt *y*, *z* we find that

$$a\_1 = c\_1, \quad a\_6 = c\_2, \quad a\_7 = c\_3$$

are arbitrary constants.

Then, Equations (67)–(69) become

$$0 = \dot{a}\_{10}z^2 - 2\dot{a}\_4xz - 2\dot{a}\_{14}yz - 2\dot{a}\_5x - 2\dot{a}\_{15}y + 2\dot{a}\_{16}z + 2\dot{a}\_{17} + A\_{,2} + B\_{,1} \tag{70}$$

$$0 = \dot{a}\_{14}y^2 - 2\dot{a}\_4xy - 2\dot{a}\_{10}yz - 2\dot{a}\_2x + 2\dot{a}\_{18}y - 2\dot{a}\_{11}z + 2\dot{a}\_{19} + A\_{,3} + C\_{,1} \tag{71}$$

$$0 = \dot{a}\_4x^2 - 2\dot{a}\_{14}xy - 2\dot{a}\_{10}xz - 2(\dot{a}\_{16} + \dot{a}\_{18})x - 2\dot{a}\_{12}y - 2\dot{a}\_8z + 2\dot{a}\_{20} + B\_{,3} + C\_{,2} \tag{72}$$

By suitable differentiations of the above equations, we obtain

$$\begin{array}{rcl} A\_{,22} & = & 2\dot{a}\_{14}z + 2\dot{a}\_{15} \\ A\_{,33} & = & 2\dot{a}\_{10}y + 2\dot{a}\_{11} \\ B\_{,11} & = & 2\dot{a}\_{4}z + 2\dot{a}\_{5} \\ B\_{,33} & = & 2\dot{a}\_{10}x + 2\dot{a}\_{8} \\ C\_{,11} & = & 2\dot{a}\_{4}y + 2\dot{a}\_{2} \\ C\_{,22} & = & 2\dot{a}\_{14}x + 2\dot{a}\_{12} .\end{array}$$

Then,

$$\begin{array}{rcl} A &=& \dot{a}\_{14}y^2z + \dot{a}\_{10}yz^2 + \dot{a}\_{15}y^2 + \dot{a}\_{11}z^2 + \sigma\_1(t)yz + \sigma\_2(t)y + \sigma\_3(t)z + \sigma\_4(t) \\ B &=& \dot{a}\_4x^2z + \dot{a}\_{10}xz^2 + \dot{a}\_5x^2 + \dot{a}\_8z^2 + \tau\_1(t)xz + \tau\_2(t)x + \tau\_3(t)z + \tau\_4(t) \\ C &=& \dot{a}\_4x^2y + \dot{a}\_{14}xy^2 + \dot{a}\_2x^2 + \dot{a}\_{12}y^2 + \eta\_1(t)xy + \eta\_2(t)x + \eta\_3(t)y + \eta\_4(t) \end{array}$$

where *σ<sup>k</sup>* (*t*), *τ<sup>k</sup>* (*t*), *η<sup>k</sup>* (*t*) for *k* = 1, 2, 3, 4 are arbitrary functions. Substituting in (70)–(72) we find

$$\begin{aligned} (70) \implies & a\_{10} = c\_4, \quad \sigma\_1 = -\tau\_1 - 2\dot{a}\_{16}, \quad \sigma\_2 = -\tau\_2 - 2\dot{a}\_{17} \\ (71) \implies & a\_{14} = c\_5, \quad \eta\_1 = -\sigma\_1 - 2\dot{a}\_{18}, \quad \eta\_2 = -\sigma\_3 - 2\dot{a}\_{19} \\ (72) \implies & a\_4 = c\_6, \quad \tau\_1 = -\eta\_1 + 2(\dot{a}\_{16} + \dot{a}\_{18}), \quad \tau\_3 = -\eta\_3 - 2\dot{a}\_{20} \end{aligned}$$

from which we finally have

$$a\_{10} = c\_4, \quad a\_{14} = c\_5, \quad a\_4 = c\_6, \quad \tau\_1 = 2\dot{a}\_{18}, \quad \eta\_1 = 2\dot{a}\_{16}, \quad \sigma\_1 = -2(\dot{a}\_{16} + \dot{a}\_{18}),$$

$$\tau\_2 = -\sigma\_2 - 2\dot{a}\_{17}, \quad \eta\_2 = -\sigma\_3 - 2\dot{a}\_{19}, \quad \eta\_3 = -\tau\_3 - 2\dot{a}\_{20}$$

where *c*4, *c*5, *c*<sup>6</sup> are arbitrary constants. Therefore, the KT *Kab* is

$$\begin{array}{rcl} K\_{11} &=& \frac{c\_2}{2}y^2 + \frac{c\_1}{2}z^2 + c\_6yz + a\_5y + a\_2z + a\_3\\ K\_{12} &=& \frac{c\_4}{2}z^2 - \frac{c\_2}{2}xy - \frac{c\_6}{2}xz - \frac{c\_5}{2}yz - \frac{a\_5}{2}x - \frac{a\_{15}}{2}y + a\_{16}z + a\_{17} \\ K\_{13} &=& \frac{c\_5}{2}y^2 - \frac{c\_6}{2}xy - \frac{c\_1}{2}xz - \frac{c\_4}{2}yz - \frac{a\_2}{2}x + a\_{18}y - \frac{a\_{11}}{2}z + a\_{19} \\ K\_{22} &=& \frac{c\_2}{2}x^2 + \frac{c\_3}{2}z^2 + c\_5xz + a\_{15}x + a\_{12}z + a\_{13} \\ K\_{23} &=& \frac{c\_6}{2}x^2 - \frac{c\_5}{2}xy - \frac{c\_4}{2}xz - \frac{c\_3}{2}yz - (a\_{16} + a\_{18})x - \frac{a\_{12}}{2}y - \frac{a\_8}{2}z + a\_{20} \\ K\_{33} &=& \frac{c\_1}{2}x^2 + \frac{c\_3}{2}y^2 + c\_4xy + a\_{11}x + a\_8y + a\_9 \end{array} \tag{73}$$

and the vector *K<sup>a</sup>* is

$$\begin{array}{rcl} K\_1 &=& \dot{a}\_{15}y^2 + \dot{a}\_{11}z^2 - \dot{a}\_5xy - \dot{a}\_2xz - 2(\dot{a}\_{16} + \dot{a}\_{18})yz - \dot{a}\_3x + \sigma\_2y + \sigma\_3z + \sigma\_4 \\ K\_2 &=& \dot{a}\_5x^2 + \dot{a}\_8z^2 - \dot{a}\_{15}xy + 2\dot{a}\_{18}xz - \dot{a}\_{12}yz - (\sigma\_2 + 2\dot{a}\_{17})x - \dot{a}\_{13}y + \tau\_3z + \tau\_4 \\ K\_3 &=& \dot{a}\_2x^2 + \dot{a}\_{12}y^2 + 2\dot{a}\_{16}xy - \dot{a}\_{11}xz - \dot{a}\_8yz - (\sigma\_3 + 2\dot{a}\_{19})x - (\tau\_3 + 2\dot{a}\_{20})y - \dot{a}\_9z + \eta\_4 \end{array} \tag{74}$$

Replacing the above results in the constraint (60) we find the following set of equations:

$$a\_2 = a\_{12}, \ a\_5 = a\_8, \ a\_{11} = a\_{15}, \ a\_{16} = a\_{18} = 0 \tag{75}$$

$$(\nu - 1)a\_2 = 0, \ (\nu - 1)a\_5 = 0, \ (\nu - 1)a\_{11} = 0\tag{76}$$

$$(\nu + 2)a\_{17} = 0, \ (\nu + 2)a\_{19} = 0, \ (\nu + 2)a\_{20} = 0, \ (\nu + 2)(a\_3 - a\_9) = 0, \ (\nu + 2)(a\_3 - a\_{13}) = 0 \tag{77}$$

$$
\ddot{a}\_2 = \ddot{a}\_5 = \ddot{a}\_{11} = 0, \ \dot{\sigma}\_2 = -\ddot{a}\_{17}, \ \dot{\sigma}\_3 = -\ddot{a}\_{19}, \ \dot{\tau}\_3 = -\ddot{a}\_{20}.\tag{78}
$$

We consider three cases depending on the value of $\nu$: the general case, which holds for arbitrary $\nu$, the time-dependent Kepler potential ($\nu = 1$) and the time-dependent oscillator ($\nu = -2$).

The Newton–Cotes potential ($\nu = 2$) is contained as a subcase of the general case.

#### **10. The General Case**

This case holds for any value of *ν* and conditions (75)–(78) give

$$a\_2 = a\_5 = a\_8 = a\_{11} = a\_{12} = a\_{15} = a\_{16} = a\_{17} = a\_{18} = a\_{19} = a\_{20} = 0$$

$$a\_3 = a\_9 = a\_{13}, \ \sigma\_2 = c\_7, \ \sigma\_3 = c\_8, \ \tau\_3 = c\_9$$

where *c*7, *c*8, *c*<sup>9</sup> are arbitrary constants.

Substituting in the constraint (59), we find that

$$
\dddot{a}\_3 = 0, \ (\nu - 2)\omega \mathfrak{d}\_3 - 2\dot{\omega}a\_3 = 0\tag{79}
$$

$$
\dot{\sigma}\_4 = \dot{\tau}\_4 = \dot{\eta}\_4 = 0, \ \omega\sigma\_4 = \omega\tau\_4 = \omega\eta\_4 = 0 \implies \sigma\_4 = \tau\_4 = \eta\_4 = 0.
$$

Therefore, the KT *Kab* becomes

$$K\_{ab} = \begin{pmatrix} \frac{c\_2}{2}y^2 + \frac{c\_1}{2}z^2 + c\_6yz + a\_3 & \frac{c\_4}{2}z^2 - \frac{c\_2}{2}xy - \frac{c\_6}{2}xz - \frac{c\_5}{2}yz & \frac{c\_5}{2}y^2 - \frac{c\_6}{2}xy - \frac{c\_1}{2}xz - \frac{c\_4}{2}yz \\ \frac{c\_4}{2}z^2 - \frac{c\_2}{2}xy - \frac{c\_6}{2}xz - \frac{c\_5}{2}yz & \frac{c\_2}{2}x^2 + \frac{c\_3}{2}z^2 + c\_5xz + a\_3 & \frac{c\_6}{2}x^2 - \frac{c\_5}{2}xy - \frac{c\_4}{2}xz - \frac{c\_3}{2}yz \\ \frac{c\_5}{2}y^2 - \frac{c\_6}{2}xy - \frac{c\_1}{2}xz - \frac{c\_4}{2}yz & \frac{c\_6}{2}x^2 - \frac{c\_5}{2}xy - \frac{c\_4}{2}xz - \frac{c\_3}{2}yz & \frac{c\_1}{2}x^2 + \frac{c\_3}{2}y^2 + c\_4xy + a\_3 \end{pmatrix} \tag{80}$$

and the vector

$$K\_a = \begin{pmatrix} -\dot{a}\_3x + c\_7y + c\_8z \\ -c\_7x - \dot{a}\_3y + c\_9z \\ -c\_8x - c\_9y - \dot{a}\_3z \end{pmatrix}. \tag{81}$$

Since the ten parameters *a*3(*t*) and *c<sup>A</sup>* where *A* = 1, 2, . . . , 9 are independent (i.e., they generate different FIs) we consider the following two cases.

*10.1. a*3(*t*) = 0

In this case, the conditions (79) are satisfied identically leaving the function *ω*(*t*) free. Therefore, the KT (80) becomes

$$K\_{ab} = \begin{pmatrix} \frac{c\_2}{2}y^2 + \frac{c\_1}{2}z^2 + c\_6yz & \frac{c\_4}{2}z^2 - \frac{c\_2}{2}xy - \frac{c\_6}{2}xz - \frac{c\_5}{2}yz & \frac{c\_5}{2}y^2 - \frac{c\_6}{2}xy - \frac{c\_1}{2}xz - \frac{c\_4}{2}yz \\ \frac{c\_4}{2}z^2 - \frac{c\_2}{2}xy - \frac{c\_6}{2}xz - \frac{c\_5}{2}yz & \frac{c\_2}{2}x^2 + \frac{c\_3}{2}z^2 + c\_5xz & \frac{c\_6}{2}x^2 - \frac{c\_5}{2}xy - \frac{c\_4}{2}xz - \frac{c\_3}{2}yz \\ \frac{c\_5}{2}y^2 - \frac{c\_6}{2}xy - \frac{c\_1}{2}xz - \frac{c\_4}{2}yz & \frac{c\_6}{2}x^2 - \frac{c\_5}{2}xy - \frac{c\_4}{2}xz - \frac{c\_3}{2}yz & \frac{c\_1}{2}x^2 + \frac{c\_3}{2}y^2 + c\_4xy \end{pmatrix}$$

and the vector (81) becomes the general non-gradient KV

$$K\_a = \begin{pmatrix} c\_7y + c\_8z \\ -c\_7x + c\_9z \\ -c\_8x - c\_9y \end{pmatrix}.$$

Then, the constraint (58) implies (since $K\_aq^a = 0$) that $K = G(x, y, z)$, which when replaced in (57) gives (since $K\_{ab}q^b = 0$) $G\_{,a} = 0$. Hence $K = const \equiv 0$.

The QFI $I = K\_{ab}\dot{q}^a\dot{q}^b + K\_a\dot{q}^a$ leads only to the three components $L\_i$ of the angular momentum. We note that $I$ contains nine independent parameters, each of them defining an FI: (a) $c\_7$, $c\_8$, $c\_9$ lead to the components $L\_1 = y\dot{z} - z\dot{y}$, $L\_2 = z\dot{x} - x\dot{z}$, $L\_3 = x\dot{y} - y\dot{x}$ of the angular momentum (LFIs); (b) $c\_1, c\_2, c\_3, c\_4, c\_5, c\_6$ lead to the products (QFIs depending on $L\_i$) $L\_1^2$, $L\_2^2$, $L\_3^2$, $L\_1L\_2$, $L\_1L\_3$ and $L\_2L\_3$.

We have the following result.

**Proposition 1.** *The time-dependent generalized Kepler potential $V(t, q) = -\frac{\omega(t)}{r^\nu}$ for a general smooth function $\omega(t)$ admits only the LFIs of the angular momentum $L\_i$. Independent QFIs in general do not exist; they are all quadratic combinations of the $L\_i$.*
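
Proposition 1 can be illustrated numerically. The sketch below (not from the paper; the chosen $\omega(t)$, $\nu$ and initial data are arbitrary) integrates Equations (54) for a non-special $\omega(t)$ and confirms that the angular momentum components stay constant:

```python
# For a generic omega(t) only the angular momentum L = q x qdot is conserved
# along solutions of qddot = -nu*omega(t)*q/r^(nu+2).
import numpy as np
from scipy.integrate import solve_ivp

nu = 1.0
omega = lambda t: 1.0 + 0.3*np.sin(t)     # a non-special smooth omega(t)

def rhs(t, s):
    q, v = s[:3], s[3:]
    r = np.linalg.norm(q)
    return np.concatenate([v, -nu*omega(t)*q/r**(nu + 2)])

s0 = np.array([1.0, 0.2, -0.1, 0.1, 1.0, 0.3])
sol = solve_ivp(rhs, (0.0, 10.0), s0, rtol=1e-11, atol=1e-11, dense_output=True)

L0 = np.cross(s0[:3], s0[3:])
for t in np.linspace(0.0, 10.0, 25):
    q, v = sol.sol(t)[:3], sol.sol(t)[3:]
    assert np.allclose(np.cross(q, v), L0, atol=1e-7)   # the LFIs L_i are conserved
```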

*10.2. c<sup>A</sup>* = 0 *where A* = 1, 2, . . . , 9

In this case, the conditions (79) imply that $a\_3(t) = b\_0 + b\_1t + b\_2t^2$ and

$$
\omega\_{\left(\nu\right)}(t) = k \left( b\_0 + b\_1 t + b\_2 t^2 \right)^{\frac{\nu - 2}{2}} \tag{82}
$$

where *k*, *b*0, *b*1, *b*<sup>2</sup> are arbitrary constants and the index (*ν*) denotes the dependence of *ω*(*t*) on the value of *ν*.

Since *c<sup>A</sup>* = 0 the quantities (80) and (81) become

$$K\_{ab} = a\_3\delta\_{ab}, \quad K\_a = -\dot{a}\_3q\_a.$$

Substituting in the remaining constraints (57) and (58), we find

$$K = b\_2 r^2 - \frac{2k(b\_0 + b\_1 t + b\_2 t^2)^{\nu/2}}{r^{\nu}}.$$

The QFI is

$$J\_{\nu} = \left(b\_0 + b\_1 t + b\_2 t^2\right) \left[\frac{\dot{q}^i \dot{q}\_i}{2} - \frac{k(b\_0 + b\_1 t + b\_2 t^2)^{\frac{\nu - 2}{2}}}{r^{\nu}}\right] - \frac{b\_1 + 2b\_2 t}{2} q^i \dot{q}\_i + \frac{b\_2 r^2}{2}.\tag{83}$$
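
As an illustration (not part of the paper; the values of $\nu$, $k$, $b\_0$, $b\_1$, $b\_2$ and the initial data are arbitrary choices), one can integrate the equations of motion (54) with $\omega = \omega\_{(\nu)}$ of (82) and check numerically that $J\_\nu$ of (83) is conserved:

```python
# Integrate qddot = -nu*omega(t)*q/r^(nu+2) with omega = k*b(t)**((nu-2)/2),
# b(t) = b0 + b1*t + b2*t**2, and verify that J_nu of Equation (83) is constant.
import numpy as np
from scipy.integrate import solve_ivp

nu, k = 1.5, 0.8
b0, b1, b2 = 1.0, 0.2, 0.1
b = lambda t: b0 + b1*t + b2*t**2

def rhs(t, s):
    q, v = s[:3], s[3:]
    r = np.linalg.norm(q)
    omega = k*b(t)**((nu - 2)/2)
    return np.concatenate([v, -nu*omega*q/r**(nu + 2)])

def J(t, s):
    # the QFI J_nu of Equation (83)
    q, v = s[:3], s[3:]
    r = np.linalg.norm(q)
    return (b(t)*(v @ v/2 - k*b(t)**((nu - 2)/2)/r**nu)
            - (b1 + 2*b2*t)/2*(q @ v) + b2*r**2/2)

s0 = np.array([1.0, 0.1, -0.2, 0.2, 0.9, 0.1])
sol = solve_ivp(rhs, (0.0, 8.0), s0, rtol=1e-11, atol=1e-11, dense_output=True)

J0 = J(0.0, s0)
for t in np.linspace(0.0, 8.0, 20):
    assert abs(J(t, sol.sol(t)) - J0) < 1e-7   # J_nu conserved along the motion
```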

We note that the resulting time-dependent generalized Kepler potential

$$V = -\frac{\omega\_\nu(t)}{r^\nu}, \ \omega\_\nu = k \left( b\_0 + b\_1 t + b\_2 t^2 \right)^{\frac{\nu - 2}{2}} \tag{84}$$

is a subcase of the Case III potential of [18] if we set the function

$$\mathcal{U}\left(\frac{r}{\phi}\right) = k\_1 \frac{r^2}{\phi^2} - \frac{k\phi^\nu}{r^\nu}$$

with

$$
\phi = \sqrt{b\_0 + b\_1 t + b\_2 t^2}, \quad k\_1 = \frac{b\_0 b\_2}{2} - \frac{b\_1^2}{8}.
$$

Then, the associated QFI (3.13) of [18] (for *K*<sup>1</sup> = *K*<sup>2</sup> = 0) reduces to the QFI *Jν*. For some values of *ν*, we have the following results:


For $\nu = 1$, $\omega\_{(1)}(t) = k\left(b\_0 + b\_1t + b\_2t^2\right)^{-1/2}$ and the QFI $J\_1 = E\_3$ (see Section 11.2 below).

For $\nu = 2$, $\omega\_{(2)}(t) = k$ and the QFI is


$$\begin{aligned} J\_2 &= (b\_0 + b\_1t + b\_2t^2)\left(\frac{\dot{q}^i\dot{q}\_i}{2} - \frac{k}{r^2}\right) - \frac{b\_1 + 2b\_2t}{2}q^i\dot{q}\_i + \frac{b\_2}{2}r^2 \\ &= b\_0H\_2 - b\_1I\_2 - b\_2I\_1. \end{aligned}$$

This expression contains the independent QFIs

$$H\_2 = \frac{\dot{q}^i\dot{q}\_i}{2} - \frac{k}{r^2}, \quad I\_1 = -t^2H\_2 + tq^i\dot{q}\_i - \frac{r^2}{2}, \quad I\_2 = -tH\_2 + \frac{q^i\dot{q}\_i}{2}$$

where *H*<sup>2</sup> is the Hamiltonian of the system. These are the FIs found in [27] (see also Table 1) in the case of the autonomous generalized Kepler potential for *ν* = 2.


For $\nu = -2$, $\omega\_{(-2)}(t) = k(b\_0 + b\_1t + b\_2t^2)^{-2}$ and the QFI is

$$J\_{-2} = (b\_0 + b\_1t + b\_2t^2)\left[\frac{\dot{q}^i\dot{q}\_i}{2} - \frac{k}{(b\_0 + b\_1t + b\_2t^2)^2}r^2\right] - \frac{b\_1 + 2b\_2t}{2}q^i\dot{q}\_i + \frac{b\_2r^2}{2}.$$

This is the trace of the QFIs (111) found below for $a\_3(t) = b\_0 + b\_1t + b\_2t^2$. Substituting this $a\_3(t)$ in (110) and (111) we find, respectively, that $\omega = \omega\_{(-2)}$ with constant $k = -\frac{1}{8}\left(b\_1^2 - 4b\_2b\_0 + 2c\_0\right)$ and the QFIs are

$$I\_{ij} = \Lambda\_{ij}(a\_3 = b\_0 + b\_1t + b\_2t^2) = (b\_0 + b\_1t + b\_2t^2)\left(\dot{q}\_i\dot{q}\_j - 2\omega q\_iq\_j\right) - (b\_1 + 2b\_2t)q\_{(i}\dot{q}\_{j)} + b\_2q\_iq\_j. \tag{85}$$

Therefore, the trace $Tr[I\_{ij}] = I\_{11} + I\_{22} + I\_{33} = 2J\_{-2}$. Note that $r^2 = q^iq\_i$.

We infer the following new general result which includes the time-dependent Kepler potential and the time-dependent oscillator as subcases.

**Proposition 2** (3d time-dependent generalized Kepler potentials which admit FIs)**.** *For all functions $\omega(t)$ the time-dependent generalized Kepler potential $V(t, q) = -\frac{\omega(t)}{r^\nu}$ admits the LFIs of the angular momentum and QFIs which are products of the components of the angular momentum. However, for the function $\omega(t) = \omega\_{(\nu)}(t) = k\left(b\_0 + b\_1t + b\_2t^2\right)^{\frac{\nu-2}{2}}$, the resulting time-dependent generalized Kepler potential admits the additional QFI $J\_\nu$ given by (83).*

#### **11. The Time-Dependent Kepler Potential**

In this case, *ν* = 1 and conditions (75)–(78) give

$$a\_{16} = a\_{17} = a\_{18} = a\_{19} = a\_{20} = 0, \ a\_5 = a\_8, \ a\_2 = a\_{12}, \ a\_3 = a\_9 = a\_{13}, \ a\_{11} = a\_{15}$$

$$\ddot{a}\_2 = \ddot{a}\_5 = \ddot{a}\_{11} = 0$$

$$\sigma\_2 = c\_7, \ \sigma\_3 = c\_8, \ \tau\_3 = c\_9.$$


Then, constraint (59) gives

$$
\dddot{a}\_3 = 0, \ \sigma\_4 = \tau\_4 = \eta\_4 = 0
$$

and

$$a\_3\omega^2 = c\_{10}, \quad a\_2\omega = c\_{11}, \quad a\_5\omega = c\_{12}, \quad a\_{11}\omega = c\_{13}$$

where *c*10, *c*11, *c*12, *c*<sup>13</sup> are arbitrary constants. Finally, we have

$$\begin{array}{rcl} K\_{11} &=& \frac{c\_2}{2}y^2 + \frac{c\_1}{2}z^2 + c\_6yz + a\_5y + a\_2z + a\_3 \\ K\_{12} &=& \frac{c\_4}{2}z^2 - \frac{c\_2}{2}xy - \frac{c\_6}{2}xz - \frac{c\_5}{2}yz - \frac{a\_5}{2}x - \frac{a\_{11}}{2}y \\ K\_{13} &=& \frac{c\_5}{2}y^2 - \frac{c\_6}{2}xy - \frac{c\_1}{2}xz - \frac{c\_4}{2}yz - \frac{a\_2}{2}x - \frac{a\_{11}}{2}z \\ K\_{22} &=& \frac{c\_2}{2}x^2 + \frac{c\_3}{2}z^2 + c\_5xz + a\_{11}x + a\_2z + a\_3 \\ K\_{23} &=& \frac{c\_6}{2}x^2 - \frac{c\_5}{2}xy - \frac{c\_4}{2}xz - \frac{c\_3}{2}yz - \frac{a\_2}{2}y - \frac{a\_5}{2}z \\ K\_{33} &=& \frac{c\_1}{2}x^2 + \frac{c\_3}{2}y^2 + c\_4xy + a\_{11}x + a\_5y + a\_3 \end{array}$$

and

$$\begin{array}{rcl} K\_1 &=& \dot{a}\_{11}y^2 + \dot{a}\_{11}z^2 - \dot{a}\_5xy - \dot{a}\_2xz - \dot{a}\_3x + c\_7y + c\_8z \\ K\_2 &=& \dot{a}\_5x^2 + \dot{a}\_5z^2 - \dot{a}\_{11}xy - \dot{a}\_2yz - c\_7x - \dot{a}\_3y + c\_9z \\ K\_3 &=& \dot{a}\_2x^2 + \dot{a}\_2y^2 - \dot{a}\_{11}xz - \dot{a}\_5yz - c\_8x - c\_9y - \dot{a}\_3z \end{array}$$

where

$$
\ddot{a}\_2 = \ddot{a}\_5 = \ddot{a}\_{11} = 0, \ \dddot{a}\_3 = 0, \ a\_3\omega^2 = c\_{10}, \ a\_2\omega = c\_{11}, \ a\_5\omega = c\_{12}, \ a\_{11}\omega = c\_{13}.\tag{86}
$$

From the last conditions, it follows that, in order for QFIs to be admitted, the function $\omega(t)$ can have only three possible forms:


$$\omega(t) = \omega\_{2K}(t) = \frac{c\_{11}}{b\_0 + b\_1 t} \text{ where } c\_{11} b\_1 \neq 0;$$

 $\omega(t) = \omega\_{3K}(t) = \frac{k}{(b\_0 + b\_1t + b\_2t^2)^{1/2}}$   $\text{where } k \neq 0 \text{ and } b\_1^2 - 4b\_2b\_0 \neq 0.$ 

This result confirms the results found previously in [12,17,18]. We note that the time-dependent Kepler potential $V = -\frac{\omega\_{2K}(t)}{r}$ is a subcase of the Case II potential of [18] for $\mu\_0 = c\_{11}$ and $\phi = b\_0 + b\_1t$, whereas the potential $V = -\frac{\omega\_{3K}(t)}{r}$ is a subcase of the Case III potential of [18] (see Section 10.2).

In the following, we discuss the cases for the special functions *ω*2*K*(*t*) and *ω*3*K*(*t*) because the case for a general function *ω*(*t*) reproduces the results of Section 10.1.

*11.1.* $\omega(t) = \omega\_{2K}(t) = \frac{c\_{11}}{b\_0 + b\_1t}$, $c\_{11}b\_1 \neq 0$

In that case, conditions (86) give

$$a\_2 = b\_0 + b\_1 t, \ a\_3 = \frac{c\_{10}}{c\_{11}^2} (b\_0 + b\_1 t)^2, \ a\_5 = \frac{c\_{12}}{c\_{11}} (b\_0 + b\_1 t), \ a\_{11} = \frac{c\_{13}}{c\_{11}} (b\_0 + b\_1 t).$$

Substituting the resulting vector *K<sup>a</sup>* and the KT *Kab* in (58) we find the solution

$$K(q,t) = -\frac{2c\_{10}b\_1t}{c\_{11}r} + G(q).$$

Replacing this solution in the remaining constraint (57) we find

$$G(x, y, z) = -\frac{2c\_{10}b\_0}{c\_{11}r} - \frac{c\_{13}x + c\_{12}y + c\_{11}z}{r} + \frac{c\_{10}b\_1^2}{c\_{11}^2}r^2.$$

Therefore,

$$K(x, y, z, t) = \frac{c\_{10}b\_1^2r^2}{c\_{11}^2} - \frac{2c\_{10}(b\_0 + b\_1t)}{c\_{11}r} - \frac{c\_{13}x + c\_{12}y + c\_{11}z}{r}.$$

The QFI is

$$I = \frac{c\_3}{2}L\_1^2 + \frac{c\_1}{2}L\_2^2 + \frac{c\_2}{2}L\_3^2 - c\_4L\_1L\_2 - c\_5L\_1L\_3 - c\_6L\_2L\_3 - c\_9L\_1 + c\_8L\_2 - c\_7L\_3 + \frac{2c\_{10}}{c\_{11}^2}E\_2 + \frac{c\_{13}}{c\_{11}}A\_1 + \frac{c\_{12}}{c\_{11}}A\_2 + A\_3$$

where $\omega\_{2K}(t) = \frac{c\_{11}}{b\_0 + b\_1t}$ and

$$L\_i \equiv q\_{i+1}\dot{q}\_{i+2} - q\_{i+2}\dot{q}\_{i+1} \tag{87}$$

$$E\_2 \equiv (b\_0 + b\_1t)^2\left[\frac{\dot{q}^i\dot{q}\_i}{2} - \frac{c\_{11}}{r(b\_0 + b\_1t)}\right] - b\_1(b\_0 + b\_1t)q^i\dot{q}\_i + \frac{b\_1^2r^2}{2} \tag{88}$$

$$\tilde{R}\_i \equiv (\dot{q}^j\dot{q}\_j)q\_i - (\dot{q}^jq\_j)\dot{q}\_i - \frac{c\_{11}}{r(b\_0 + b\_1t)}q\_i \tag{89}$$

$$A\_i \equiv (b\_0 + b\_1t)\tilde{R}\_i + b\_1(q\_{i+2}L\_{i+1} - q\_{i+1}L\_{i+2}). \tag{90}$$

We note that $i = 1, 2, 3$, $q\_i = (x, y, z)$ and $q\_i \equiv q\_{i+3k}$ for all $k \in \mathbb{N}$, that is

$$x = q\_1 = q\_4 = q\_7 = \dots, \ y = q\_2 = q\_5 = q\_8 = \dots, \ z = q\_3 = q\_6 = q\_9 = \dots$$

The QFI $I$ contains the already found LFIs $L\_i$ of the angular momentum; the QFI $E\_2$, which for $b\_1 = 0$ reduces to the Hamiltonian of the Kepler potential $V = -\frac{c\_{11}}{b\_0r}$; and the QFIs $A\_i$, which may be considered as a generalization of the Runge–Lenz vector $R\_i$ of $k = \frac{c\_{11}}{b\_0}$ to the time-dependence $\omega\_{2K}(t) = \frac{c\_{11}}{b\_0 + b\_1t}$. Indeed, we have $A\_i(b\_1 = 0) = b\_0R\_i$ with $k = \frac{c\_{11}}{b\_0}$. The expressions (88)–(90) are written compactly as follows

$$E\_2 \equiv c\_{11}^2\left[\frac{1}{\omega\_{2K}^2}\left(\frac{\dot{q}^i\dot{q}\_i}{2} - \frac{\omega\_{2K}}{r}\right) - \frac{1}{2}\frac{d}{dt}\left(\frac{1}{\omega\_{2K}}\right)^2q^i\dot{q}\_i + \frac{d^2}{dt^2}\left(\frac{1}{\omega\_{2K}}\right)^2\frac{r^2}{4}\right] \tag{91}$$

$$\tilde{R}\_i \equiv (\dot{q}^j\dot{q}\_j)q\_i - (\dot{q}^jq\_j)\dot{q}\_i - \frac{\omega\_{2K}}{r}q\_i \tag{92}$$

$$A\_i \equiv c\_{11}\left[\frac{1}{\omega\_{2K}}\tilde{R}\_i - \frac{\dot{\omega}\_{2K}}{\omega\_{2K}^2}(q\_{i+2}L\_{i+1} - q\_{i+1}L\_{i+2})\right] \tag{93}$$

where $\omega\_{2K}(t) = \frac{c\_{11}}{b\_0 + b\_1t}$.

We remark that only five of the seven FIs *E*2, *L<sup>i</sup>* , *A<sup>i</sup>* are functionally independent because they are related as follows

$$\mathbf{A} \cdot \mathbf{L} = 0,\ 2E\_2\mathbf{L}^2 + c\_{11}^2 = \mathbf{A}^2. \tag{94}$$
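
The relations (94) hold pointwise in phase space, so they can be checked at a single arbitrary state. A small numpy sketch (illustrative, not part of the paper; the constants and the state are arbitrary choices):

```python
# Check the relations (94): A.L = 0 and 2*E2*L^2 + c11^2 = A^2.
import numpy as np

b0, b1, c11, t = 1.3, 0.4, 0.9, 2.0
phi = b0 + b1*t
q = np.array([0.7, -0.4, 1.1])
v = np.array([0.5, 0.8, -0.2])
r = np.linalg.norm(q)

L = np.cross(q, v)                                                     # (87)
E2 = phi**2*(v @ v/2 - c11/(r*phi)) - b1*phi*(q @ v) + b1**2*r**2/2    # (88)
Rt = (v @ v)*q - (v @ q)*v - c11/(r*phi)*q                             # (89)
# (90), using q_{i+2}L_{i+1} - q_{i+1}L_{i+2} = -(q x L)_i
A = phi*Rt - b1*np.cross(q, L)

assert abs(A @ L) < 1e-12
assert abs(2*E2*(L @ L) + c11**2 - A @ A) < 1e-10
```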

For $b\_1 = 0$, $b\_0 \neq 0$ we have $\omega\_{2K} = \frac{c\_{11}}{b\_0} \equiv k = const$, $E\_2 = b\_0^2 H$, $\tilde{R}\_i = R\_i$ and $A\_i = b\_0 R\_i$, where $H$ is the Hamiltonian and $R\_i$ the Runge–Lenz vector for the Kepler potential $V = -\frac{k}{r}$. Then, as expected, Equation (94) reduces to the well-known relation

$$2H\mathbf{L}^2 + k^2 = \mathbf{R}^2.$$
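These relations lend themselves to a direct numerical check. The following sketch (illustrative only, not part of the original derivation; it integrates the equations of motion $\ddot{q}^a = -\omega\_{2K}(t)\, q^a/r^3$ with a hand-rolled fourth-order Runge–Kutta step, writing $k$ for $c\_{11}$) verifies that $L\_i$, $E\_2$ and $A\_i$ stay constant along an orbit and that the relation (94) holds:

```python
import numpy as np

# Illustrative check: for V = -omega_2K(t)/r with omega_2K = k/(b0 + b1 t),
# the quantities L, E_2 (88) and A_i (90) remain constant along an orbit.
k, b0, b1 = 1.0, 1.0, 0.3

def acc(t, q):                      # equations of motion: qdd = -omega_2K(t) q / r^3
    r = np.linalg.norm(q)
    return -(k / (b0 + b1 * t)) * q / r**3

def rk4_step(t, q, v, h):           # one classical 4th-order Runge-Kutta step
    k1q, k1v = v, acc(t, q)
    k2q, k2v = v + h/2*k1v, acc(t + h/2, q + h/2*k1q)
    k3q, k3v = v + h/2*k2v, acc(t + h/2, q + h/2*k2q)
    k4q, k4v = v + h*k3v, acc(t + h, q + h*k3q)
    return q + h/6*(k1q + 2*k2q + 2*k3q + k4q), v + h/6*(k1v + 2*k2v + 2*k3v + k4v)

def invariants(t, q, v):
    B, r = b0 + b1 * t, np.linalg.norm(q)
    L = np.cross(q, v)
    E2 = B**2 * (v @ v / 2 - k / (r * B)) - b1 * B * (q @ v) + b1**2 * r**2 / 2
    Rtilde = (v @ v) * q - (q @ v) * v - (k / (r * B)) * q      # eq. (89) with c11 = k
    A = B * Rtilde + b1 * np.cross(L, q)                        # eq. (90) in vector form
    return L, E2, A

t, h = 0.0, 1e-3
q, v = np.array([1.0, 0.0, 0.2]), np.array([0.1, 1.0, 0.0])
L0, E20, A0 = invariants(t, q, v)
for _ in range(2000):
    q, v = rk4_step(t, q, v, h)
    t += h
L1, E21, A1 = invariants(t, q, v)
drift = max(np.max(np.abs(L1 - L0)), abs(E21 - E20), np.max(np.abs(A1 - A0)))
print(drift < 1e-8, abs(A1 @ L1) < 1e-8, abs(2*E21*(L1 @ L1) + k**2 - A1 @ A1) < 1e-6)
```

The relation (94) in fact holds pointwise in phase space, so it is satisfied to machine precision; only the drift of the FIs depends on the integration error.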

#### *11.2.* $\omega(t) = \omega\_{3K}(t) = \frac{k}{(b\_0 + b\_1 t + b\_2 t^2)^{1/2}}$, $k \neq 0$, $b\_1^2 - 4b\_2 b\_0 \neq 0$

In that case (observe that if $b\_1^2 - 4b\_2 b\_0 = 0$, this case reduces to that of Section 11.1, because the equation $b\_0 + b\_1 t + b\_2 t^2 = 0$ has a double root $t\_0$ and can be factored in the form $b\_2(t - t\_0)^2$), conditions (86) give

$$a\_2 = a\_5 = a\_{11} = 0, \ c\_{11} = c\_{12} = c\_{13} = 0, \ a\_3 = \frac{c\_{10}}{k^2} (b\_0 + b\_1 t + b\_2 t^2).$$

Substituting the *K<sup>a</sup>* and *Kab* of that case in (58) we find the solution

$$K(q, t) = -\frac{2c\_{10}}{r\omega\_{3K}} + G(q).$$

When this solution is introduced in the remaining constraint (57), we find $G(x, y, z) = \frac{b\_2 c\_{10}}{k^2} r^2$. Therefore,

$$K(x, y, z, t) = \frac{b\_2 c\_{10}}{k^2} r^2 - \frac{2c\_{10}}{r \omega\_{3K}}.$$

The QFI is

$$I = \frac{c\_3}{2}L\_1^2 + \frac{c\_1}{2}L\_2^2 + \frac{c\_2}{2}L\_3^2 - c\_4L\_1L\_2 - c\_5L\_1L\_3 - c\_6L\_2L\_3 - c\_9L\_1 + c\_8L\_2 - c\_7L\_3 + \frac{2c\_{10}}{k^2}E\_3$$

where

$$E\_3 \equiv (b\_0 + b\_1 t + b\_2 t^2) \left[ \frac{\dot{q}^i \dot{q}\_i}{2} - \frac{k}{r(b\_0 + b\_1 t + b\_2 t^2)^{1/2}} \right] - \frac{b\_1 + 2b\_2 t}{2} q^i \dot{q}\_i + \frac{b\_2 r^2}{2} \tag{95}$$

is the only new independent QFI. This QFI is written equivalently

$$E\_3 = k^2 \left[ \frac{1}{\omega\_{3K}^2} \left( \frac{\dot{q}^i \dot{q}\_i}{2} - \frac{\omega\_{3K}}{r} \right) - \frac{1}{2} \frac{d}{dt} \left( \frac{1}{\omega\_{3K}} \right)^2 q^i \dot{q}\_i + \frac{d^2}{dt^2} \left( \frac{1}{\omega\_{3K}} \right)^2 \frac{r^2}{4} \right]. \tag{96}$$

For $b\_1 = b\_2 = 0$, $E\_3$ reduces to the well-known Hamiltonian of the time-independent Kepler potential.
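As an illustrative consistency check (not part of the original text), the conservation of $E\_3$ can be confirmed symbolically with sympy by imposing the equations of motion $\ddot{q}^a = -\omega\_{3K}(t)\, q^a/r^3$:

```python
import sympy as sp

t = sp.symbols('t')
k, b0, b1, b2 = sp.symbols('k b_0 b_1 b_2')
x, y, z = (sp.Function(n)(t) for n in ('x', 'y', 'z'))
q = sp.Matrix([x, y, z])
qd = q.diff(t)
r = sp.sqrt(q.dot(q))
B = b0 + b1*t + b2*t**2            # so omega_3K = k/sqrt(B)

# E_3 as in (95)
E3 = B*(qd.dot(qd)/2 - k/(r*sp.sqrt(B))) - (b1 + 2*b2*t)/2*q.dot(qd) + b2*r**2/2

# impose the equations of motion qdd^a = -omega_3K q^a / r^3
eom = {qi.diff(t, 2): -k/sp.sqrt(B)*qi/r**3 for qi in (x, y, z)}
res = sp.simplify(E3.diff(t).subs(eom))
print(res)  # -> 0
```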

We note also that the QFIs (88) and (95) can be written compactly as (see Equation (2.86) in [17])

$$E\_{\mu} = k^2 \left[ \frac{1}{\omega\_{\mu K}^2} \left( \frac{\dot{q}^i \dot{q}\_i}{2} - \frac{\omega\_{\mu K}}{r} \right) - \frac{1}{2} \frac{d}{dt} \left( \frac{1}{\omega\_{\mu K}} \right)^2 q^i \dot{q}\_i + \frac{d^2}{dt^2} \left( \frac{1}{\omega\_{\mu K}} \right)^2 \frac{r^2}{4} \right] \tag{97}$$
 

where $\mu = 2, 3$, $\omega\_{2K}(t) = \frac{k}{b\_0 + b\_1 t}$ and $\omega\_{3K}(t) = \frac{k}{(b\_0 + b\_1 t + b\_2 t^2)^{1/2}}$.

**Proposition 3** (Time-dependent Kepler potentials which admit additional FIs [17])**.** *The time-dependent Kepler potential $V(t, q) = -\frac{\omega(t)}{r}$ for the function $\omega\_{2K}(t) = \frac{c\_{11}}{b\_0 + b\_1 t}$ with $c\_{11} b\_1 \neq 0$, and for the function $\omega\_{3K}(t) = \frac{k}{(b\_0 + b\_1 t + b\_2 t^2)^{1/2}}$ where $k \neq 0$ and $b\_1^2 - 4b\_2 b\_0 \neq 0$, admits the additional QFIs given by (88), (90) and (95), respectively.*

#### **12. The 3d Time-Dependent Oscillator**

In this case, *ν* = −2 and conditions (75)–(78) give

$$a\_2 = a\_5 = a\_8 = a\_{11} = a\_{12} = a\_{15} = a\_{16} = a\_{18} = 0$$

and

$$
\dot{\sigma}\_2 = -\ddot{a}\_{17}, \; \dot{\sigma}\_3 = -\ddot{a}\_{19}, \; \dot{\tau}\_3 = -\ddot{a}\_{20}. \tag{98}
$$

Then, the constraint (59) implies that

$$
\ddot{\sigma}\_4 - 2\omega\sigma\_4 = 0, \quad \ddot{\tau}\_4 - 2\omega\tau\_4 = 0, \quad \ddot{\eta}\_4 - 2\omega\eta\_4 = 0, \tag{99}
$$

$$
\dddot{a}\_3 - 8\omega \dot{a}\_3 - 4\dot{\omega} a\_3 = 0, \quad \dddot{a}\_9 - 8\omega \dot{a}\_9 - 4\dot{\omega} a\_9 = 0, \quad \dddot{a}\_{13} - 8\omega \dot{a}\_{13} - 4\dot{\omega} a\_{13} = 0, \tag{100}
$$

$$
\dddot{a}\_{17} - 8\omega \dot{a}\_{17} - 4\dot{\omega} a\_{17} = 0, \quad \dddot{a}\_{19} - 8\omega \dot{a}\_{19} - 4\dot{\omega} a\_{19} = 0, \quad \dddot{a}\_{20} - 8\omega \dot{a}\_{20} - 4\dot{\omega} a\_{20} = 0. \tag{101}
$$

Therefore,

$$\begin{array}{rcl} K\_{11} &=& \frac{c\_2}{2}y^2 + \frac{c\_1}{2}z^2 + c\_6yz + a\_3\\ K\_{12} &=& \frac{c\_4}{2}z^2 - \frac{c\_2}{2}xy - \frac{c\_6}{2}xz - \frac{c\_5}{2}yz + a\_{17} \\ K\_{13} &=& \frac{c\_5}{2}y^2 - \frac{c\_6}{2}xy - \frac{c\_1}{2}xz - \frac{c\_4}{2}yz + a\_{19} \\ K\_{22} &=& \frac{c\_2}{2}x^2 + \frac{c\_3}{2}z^2 + c\_5xz + a\_{13} \\ K\_{23} &=& \frac{c\_6}{2}x^2 - \frac{c\_5}{2}xy - \frac{c\_4}{2}xz - \frac{c\_3}{2}yz + a\_{20} \\ K\_{33} &=& \frac{c\_1}{2}x^2 + \frac{c\_3}{2}y^2 + c\_4xy + a\_9 \end{array} \tag{102}$$

and

$$\begin{array}{rcl} K\_1 &=& -\dot{a}\_3 x + \sigma\_2 y + \sigma\_3 z + \sigma\_4 \\ K\_2 &=& -(\sigma\_2 + 2\dot{a}\_{17}) x - \dot{a}\_{13} y + \tau\_3 z + \tau\_4 \\ K\_3 &=& -(\sigma\_3 + 2\dot{a}\_{19}) x - (\tau\_3 + 2\dot{a}\_{20}) y - \dot{a}\_9 z + \eta\_4. \end{array} \tag{103}$$

Before we proceed with considering various subcases it is important that we discuss the ordinary differential equations (ODEs) (100) and (101).

#### *12.1. The Lewis Invariant*

Equations of the form

$$
\dddot{a} - 8\omega \dot{a} - 4\dot{\omega} a = 0 \tag{104}
$$

where *a* = *a*(*t*) can be written as follows

$$a\ddot{a} - \frac{1}{2}\dot{a}^2 - 4\omega a^2 = c\_0 = const.\tag{105}$$

By putting $a = -\rho^2$ where $\rho = \rho(t)$, Equation (105) becomes

$$
\ddot{\rho} - 2\omega\rho - \frac{c\_0}{2\rho^3} = 0. \tag{106}
$$

For $2\omega(t) = -\psi^2(t)$, Equation (106) is written

$$
\ddot{\rho} + \psi^2 \rho - \frac{c\_0}{2\rho^3} = 0. \tag{107}
$$

Equation (107) is the auxiliary Equation (see [8,34,35]) that should be introduced in order to derive the Lewis invariant for the one-dimensional (1d) time-dependent oscillator

$$
\ddot{x} + \psi^2 x = 0. \tag{108}
$$

By eliminating $\psi^2$ using (108) and multiplying with the factor $x\dot{\rho} - \rho\dot{x}$, Equation (107) gives

$$\ddot{\rho} - \frac{\rho}{x}\ddot{x} - \frac{c\_0}{2\rho^3} = 0 \implies \left[ \frac{1}{2} (x\dot{\rho} - \rho\dot{x})^2 + \frac{c\_0}{4} \left( \frac{x}{\rho} \right)^2 \right]^{\cdot} = 0 \implies$$

$$I \equiv \frac{1}{2} (x\dot{\rho} - \rho\dot{x})^2 + \frac{c\_0}{4} \left( \frac{x}{\rho} \right)^2 = const \tag{109}$$

which is the well-known Lewis invariant for the 1d time-dependent harmonic oscillator or, equivalently, a FI for the two-dimensional (2d) time-dependent system with equations of motion (107) and (108).
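The computation above can be replayed with a computer algebra system. The following sympy sketch (illustrative; symbol names follow the text) confirms that the invariant (109) is constant along solutions of (107) and (108):

```python
import sympy as sp

t, c0 = sp.symbols('t c_0')
x, rho, psi = (sp.Function(n)(t) for n in ('x', 'rho', 'psi'))

# Lewis invariant (109)
I = sp.Rational(1, 2)*(x*rho.diff(t) - rho*x.diff(t))**2 + c0/4*(x/rho)**2

# impose the auxiliary equation (107) and the oscillator equation (108)
eom = {rho.diff(t, 2): -psi**2*rho + c0/(2*rho**3),
       x.diff(t, 2): -psi**2*x}
res = sp.simplify(I.diff(t).subs(eom))
print(res)  # -> 0
```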

#### *12.2. The System of Equations (98)–(101)*

The conditions (99) are not involved in the conditions (98), (100) and (101). This means that the parameters $\sigma\_4$, $\tau\_4$, $\eta\_4$ give independent FIs, different from those generated by the remaining parameters $a\_3$, $a\_9$, $a\_{13}$, $a\_{17}$, $a\_{19}$, $a\_{20}$. Therefore, without loss of generality, they can be treated separately. This leads to the following two cases.

#### 12.2.1. $a\_3 \neq 0$, $\sigma\_4 = \tau\_4 = \eta\_4 = 0$

Because the ODEs (100) and (101) are independent (i.e., each one leads to a different FI) and are of the same form, without loss of generality we assume

$$a\_{9} = k\_{1}a\_{3}, \ a\_{13} = k\_{2}a\_{3}, \ a\_{17} = k\_{3}a\_{3}, \ a\_{19} = k\_{4}a\_{3}, \ a\_{20} = k\_{5}a\_{3}$$

where *k*1, *k*2, *k*3, *k*4, *k*<sup>5</sup> are arbitrary constants.

From the discussion of Section 12.1 and the assumption $a\_3 \neq 0$, condition (100) concerning $a\_3(t)$ becomes (see Equation (9.2) in [8])

$$\dddot{a}\_3 - 8\omega \dot{a}\_3 - 4\dot{\omega} a\_3 = 0 \implies a\_3\ddot{a}\_3 - \frac{1}{2}\dot{a}\_3^2 - 4\omega a\_3^2 = c\_0 \implies \omega(t) = \frac{\ddot{a}\_3}{4a\_3} - \frac{1}{8}\left(\frac{\dot{a}\_3}{a\_3}\right)^2 - \frac{c\_0}{4a\_3^2} \tag{110}$$

where *c*<sup>0</sup> is an arbitrary constant and *a*3(*t*) is an arbitrary non-zero function.

Moreover, conditions (98) become

$$
\sigma\_2 = -\dot{a}\_{17}, \; \sigma\_3 = -\dot{a}\_{19}, \; \tau\_3 = -\dot{a}\_{20}
$$

because any additional constant (in general $\sigma\_2 = -\dot{a}\_{17} + m\_1$ where $m\_1$ is a constant) leads to the usual LFIs of the angular momentum.

Then the KT (102) and the vector (103) become (we set *c*<sup>1</sup> = ... = *c*<sup>6</sup> = 0 because they generate the already-found FIs of the angular momentum)

$$K\_{ab} = a\_3 \left( \begin{array}{ccc} 1 & k\_3 & k\_4 \\ k\_3 & k\_2 & k\_5 \\ k\_4 & k\_5 & k\_1 \end{array} \right), \quad K\_a = -\dot{a}\_3 \left( \begin{array}{c} x + k\_3 y + k\_4 z \\ k\_3 x + k\_2 y + k\_5 z \\ k\_4 x + k\_5 y + k\_1 z \end{array} \right).$$

Substituting in the constraints (57) and (58) we find

$$K = \frac{\dot{a}\_3^2 + 2c\_0}{4a\_3} \left( x^2 + k\_2 y^2 + k\_1 z^2 + 2k\_3 xy + 2k\_4 xz + 2k\_5 yz \right).$$

Using Equation (110) we can write $\frac{\dot{a}\_3^2 + 2c\_0}{4a\_3} = \frac{\ddot{a}\_3}{2} - 2\omega a\_3$. The QFI is

$$\begin{split} I &= a\_3 \left( \dot{x}^2 + k\_2 \dot{y}^2 + k\_1 \dot{z}^2 + 2k\_3 \dot{x}\dot{y} + 2k\_4 \dot{x}\dot{z} + 2k\_5 \dot{y}\dot{z} \right) - \dot{a}\_3 (x + k\_3 y + k\_4 z) \dot{x} - \\ &- \dot{a}\_3 (k\_3 x + k\_2 y + k\_5 z) \dot{y} - \dot{a}\_3 (k\_4 x + k\_5 y + k\_1 z) \dot{z} + \\ &+ \left( \frac{\ddot{a}\_3}{2} - 2\omega a\_3 \right) \left( x^2 + k\_2 y^2 + k\_1 z^2 + 2k\_3 xy + 2k\_4 xz + 2k\_5 yz \right). \end{split}$$

This expression contains six QFIs which are the components of the symmetric tensor (see Equations (1.4) and (6.24) in [8])

$$
\Lambda\_{ij} = a\_3 \left( \dot{q}\_i \dot{q}\_j - 2\omega q\_i q\_j \right) - \dot{a}\_3 q\_{(i} \dot{q}\_{j)} + \frac{\ddot{a}\_3}{2} q\_i q\_j. \tag{111}
$$

This tensor for $a\_3 = const \neq 0$ reduces to the Jauch–Hill–Fradkin tensor $B\_{ij}$ for $\omega = -\frac{c\_0}{4a\_3^2} = const$.

If we make the transformation (see Section 12.1) $a\_3(t) = -\rho^2(t)$ and $2\omega(t) = -\psi^2(t)$, Equation (54) becomes

$$
\ddot{q}^a - 2\omega q^a = 0 \implies \ddot{q}^a + \psi^2 q^a = 0 \tag{112}
$$

and the QFIs (111) give

$$
\Lambda\_{ij} = -(\rho \dot{q}\_i - \dot{\rho} q\_i) \left( \rho \dot{q}\_j - \dot{\rho} q\_j \right) - \frac{c\_0}{2} \rho^{-2} q\_i q\_j \tag{113}
$$

where the condition (110) takes the form (107).

The symmetric tensor (113) may be thought of as a 3d generalization of the 1d Lewis invariant (109). Moreover, Equation (113) coincides with Equation (8) in [14] and Equation (1.4) in [8] when *c*<sup>0</sup> = 2.

#### 12.2.2. $a\_3 = a\_9 = a\_{13} = a\_{17} = a\_{19} = a\_{20} = 0$, $\sigma\_4 \neq 0$

In this case, the conditions (100) and (101) vanish identically; the conditions (98) imply that *σ*<sup>2</sup> = *c*7, *σ*<sup>3</sup> = *c*<sup>8</sup> and *τ*<sup>3</sup> = *c*9.

Since the remaining ODEs (99) are all independent (i.e., each one generates an independent FI) and of the same form, without loss of generality we assume

$$
\tau\_4 = k\_1 \sigma\_4, \quad \eta\_4 = k\_2 \sigma\_4
$$

where *k*1, *k*<sup>2</sup> are arbitrary constants.

From (99) for $\sigma\_4 \neq 0$ we get

$$
\omega(t) = \frac{\ddot{\sigma}\_4}{2\sigma\_4}. \tag{114}
$$

The parameters *c<sup>A</sup>* where *A* = 1, 2, ..., 9 produce the FIs of the angular momentum and we fix them to zero. Therefore

$$K\_{ab} = 0, \ K\_a = \sigma\_4(1, k\_1, k\_2).$$

Substituting in the remaining constraints (57) and (58) we find

$$K = -\dot{\sigma}\_4 (x + k\_1 y + k\_2 z).$$

The QFI is

$$I = \sigma\_4 \dot{x} - \dot{\sigma}\_4 x + k\_1 (\sigma\_4 \dot{y} - \dot{\sigma}\_4 y) + k\_2 (\sigma\_4 \dot{z} - \dot{\sigma}\_4 z)$$

which contains the irreducible LFIs (see Equation (6.25) in [8])

$$I\_{4i} = f \dot{q}\_i - \dot{f} q\_i \tag{115}$$

where $f(t)$ is an arbitrary non-zero function satisfying (114). We note that the LFIs (115) can be derived directly from the equations of motion for $\omega(t) = \frac{\ddot{f}}{2f}$.

From the above two cases, we arrive at the following conclusion.

**Proposition 4** (3d time-dependent oscillators which admit additional FIs)**.** *For the function $\omega(t) = \frac{\ddot{a}\_3}{4a\_3} - \frac{1}{8}\left(\frac{\dot{a}\_3}{a\_3}\right)^2 - \frac{c\_0}{4a\_3^2}$, where $a\_3(t) \neq 0$ and $c\_0$ is an arbitrary constant, and for the function $\omega(t) = \frac{\ddot{f}}{2f}$, where $f(t) \neq 0$, the resulting 3d time-dependent oscillator $V(t, q) = -\omega(t) r^2$ admits the QFIs (111) and the LFIs (115), respectively.*
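The second statement of the proposition admits a one-line symbolic check (an illustrative sketch, not from the original): for $\omega = \frac{\ddot{f}}{2f}$ the equations of motion give $\ddot{q}\_i = (\ddot{f}/f)\, q\_i$, under which the LFIs (115) are conserved identically.

```python
import sympy as sp

t = sp.symbols('t')
f, q = sp.Function('f')(t), sp.Function('q')(t)

# LFI (115): I = f*qdot - fdot*q, with qdd = 2*omega*q and omega = fdd/(2f)
I = f*q.diff(t) - f.diff(t)*q
res = sp.simplify(I.diff(t).subs(q.diff(t, 2), f.diff(t, 2)/f*q))
print(res)  # -> 0
```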

#### **13. A Special Class of Time-Dependent Oscillators**

In Proposition 4, it has been shown that the time-dependent oscillator (*ν* = −2) for the frequency

$$
\omega\_{1O}(t) = \frac{\ddot{f}}{4f} - \frac{1}{8} \left( \frac{\dot{f}}{f} \right)^2 - \frac{c\_0}{4f^2} \tag{116}
$$

where *f*(*t*) is an arbitrary non-zero function admits the six QFIs

$$
\Lambda\_{ij} = f(t) \left( \dot{q}\_i \dot{q}\_j - 2\omega q\_i q\_j \right) - \dot{f} q\_{(i} \dot{q}\_{j)} + \frac{\ddot{f}}{2} q\_i q\_j \tag{117}
$$

and for the frequency

$$
\omega\_{2O}(t) = \frac{\ddot{g}}{2g} \tag{118}
$$

where *g*(*t*) is an arbitrary non-zero function admits the three LFIs

$$I\_{4i} = g(t) \dot{q}\_i - \dot{g} q\_i. \tag{119}$$

We consider the class of the 3d time-dependent oscillators for which *ω*1*O*(*t*) = *ω*2*O*(*t*). These oscillators admit both the six QFIs Λ*ij* and the three LFIs *I*4*<sup>i</sup>* .

The condition *ω*1*O*(*t*) = *ω*2*O*(*t*) relates the functions *f*(*t*), *g*(*t*) as follows

$$
\omega\_{3O}(t) = \frac{\ddot{f}}{4f} - \frac{1}{8} \left( \frac{\dot{f}}{f} \right)^2 - \frac{c\_0}{4f^2} = \frac{\ddot{g}}{2g}. \tag{120}
$$

It can be easily proved that

$$g = f^{1/2} \cos\theta, \ \dot{\theta} = \left( \frac{c\_0}{2} \right)^{1/2} f^{-1} \implies \theta(t) = \left( \frac{c\_0}{2} \right)^{1/2} \int \frac{dt}{f(t)} \tag{121}$$

and

$$g = f^{1/2} \sin\theta, \ \dot{\theta} = \left( \frac{c\_0}{2} \right)^{1/2} f^{-1} \implies \theta(t) = \left( \frac{c\_0}{2} \right)^{1/2} \int \frac{dt}{f(t)} \tag{122}$$

satisfy the requirement (120) for any non-zero function $f(t)$. In other words, all the time-dependent oscillators with frequency

$$
\omega\_{3O}(t) = \frac{\ddot{f}}{4f} - \frac{1}{8} \left( \frac{\dot{f}}{f} \right)^2 - \frac{c\_0}{4f^2} \tag{123}
$$

admit the six QFIs

$$
\Lambda\_{ij} = f(t) \left( \dot{q}\_i \dot{q}\_j - 2\omega q\_i q\_j \right) - \dot{f} q\_{(i} \dot{q}\_{j)} + \frac{\ddot{f}}{2} q\_i q\_j \tag{124}
$$

and the six LFIs

$$I\_{41i} = \left( \frac{c\_0}{2} \right)^{1/2} f^{-1/2} q\_i \sin\theta + \left( f^{1/2} \dot{q}\_i - \frac{\dot{f}}{2} f^{-1/2} q\_i \right) \cos\theta \tag{125}$$

$$I\_{42i} = -\left( \frac{c\_0}{2} \right)^{1/2} f^{-1/2} q\_i \cos\theta + \left( f^{1/2} \dot{q}\_i - \frac{\dot{f}}{2} f^{-1/2} q\_i \right) \sin\theta. \tag{126}$$

These are the LFIs $J\_3^k$, $J\_4^k$ derived in Equations (44) and (45) in [10] using Noether point symmetries and Noether's theorem.

We note that

$$\frac{dI\_{42i}}{d\theta} = I\_{41i} \tag{127}$$

and

$$
\Lambda\_{ij} = I\_{41i} I\_{41j} + I\_{42i} I\_{42j}. \tag{128}
$$

Next, we consider the LFIs of the angular momentum $L\_i = q\_{i+1} \dot{q}\_{i+2} - q\_{i+2} \dot{q}\_{i+1}$, which can be expressed equivalently as components of the totally antisymmetric tensor

$$L\_{ij} = q\_i \dot{q}\_j - q\_j \dot{q}\_i = \varepsilon\_{ijk} L^k \tag{129}$$

where $\varepsilon\_{ijk}$ is the 3d Levi-Civita symbol and $L^i = L\_i$ since the kinetic metric $\gamma\_{ij} = \delta\_{ij}$. Then (see Equation (51) in [10])

$$L\_{ij} = \left(\frac{2}{c\_0}\right)^{1/2} \left(I\_{41i}I\_{42j} - I\_{41j}I\_{42i}\right). \tag{130}$$

**Proposition 5.** *For the class of 3d time-dependent oscillators with potential $V(t, q) = -\omega(t) r^2$ where $\omega(t)$ is defined in terms of an arbitrary non-zero (smooth) function $f(t)$ as in (123), the only independent FIs are the LFIs $I\_{41i}$, $I\_{42i}$.*

In order to recover the results of [10], we assume a time-dependent oscillator with $\omega\_{3O}(t)$ given by (123) and we write the non-zero function $f(t)$ in the form $f(t) = \rho^2(t)$. Then Equation (123) becomes

$$
\omega\_{3O}(t) = \frac{\ddot{\rho}}{2\rho} - \frac{c\_0}{4\rho^4}. \tag{131}
$$

The relations (121) and (122) become

$$g = \rho \cos \theta, \text{ } \dot{\theta} = \left(\frac{c\_0}{2}\right)^{1/2} \rho^{-2} \implies \theta(t) = \left(\frac{c\_0}{2}\right)^{1/2} \int \frac{dt}{\rho^2} \tag{132}$$

$$g = \rho \sin \theta, \text{ } \dot{\theta} = \left(\frac{c\_0}{2}\right)^{1/2} \rho^{-2} \implies \theta(t) = \left(\frac{c\_0}{2}\right)^{1/2} \int \frac{dt}{\rho^2} \tag{133}$$

and the LFIs (125) and (126) take the form

$$I\_{41i} = \left( \frac{c\_0}{2} \right)^{1/2} \rho^{-1} q\_i \sin\theta + (\rho \dot{q}\_i - \dot{\rho} q\_i) \cos\theta \tag{134}$$

$$I\_{42i} = -\left( \frac{c\_0}{2} \right)^{1/2} \rho^{-1} q\_i \cos\theta + (\rho \dot{q}\_i - \dot{\rho} q\_i) \sin\theta. \tag{135}$$

These latter expressions for *c*<sup>0</sup> = 2 coincide with the independent LFIs (44) and (45) found in [10].

Finally, we note that if we consider in this special class of oscillators the simple case $f = 1$, we find $\omega\_{3O}(t) = const = -\frac{c\_0}{4} \equiv k$, which is the 3d autonomous oscillator (for $k < 0$). Then it can be shown that the exponential LFIs $I\_{3i\pm}$ (see Table 1) found in [27] can be written in terms of $I\_{41i}$, $I\_{42i}$. Indeed, we have $I\_{3i\pm}(k > 0) = I\_{41i} \mp i I\_{42i}$ and $I\_{3i\pm}(k < 0) = I\_{41i} \pm i I\_{42i}$.
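The conservation of the LFIs in the form (134) and (135) can be verified symbolically as well. The sketch below (illustrative; a single Cartesian component suffices because the components decouple) imposes the equations of motion $\ddot{q}\_i = 2\omega\_{3O} q\_i$ with $\omega\_{3O}$ from (131) and $\dot{\theta}$ from (132):

```python
import sympy as sp

t = sp.symbols('t')
c0 = sp.symbols('c_0', positive=True)
x, rho, th = (sp.Function(n)(t) for n in ('x', 'rho', 'theta'))

A = sp.sqrt(c0/2)
I41 = A*rho**-1*x*sp.sin(th) + (rho*x.diff(t) - rho.diff(t)*x)*sp.cos(th)   # (134)
I42 = -A*rho**-1*x*sp.cos(th) + (rho*x.diff(t) - rho.diff(t)*x)*sp.sin(th)  # (135)

omega = rho.diff(t, 2)/(2*rho) - c0/(4*rho**4)   # eq. (131)
eom = {x.diff(t, 2): 2*omega*x, th.diff(t): A/rho**2}

r41 = sp.simplify(I41.diff(t).subs(eom))
r42 = sp.simplify(I42.diff(t).subs(eom))
print(r41, r42)  # -> 0 0
```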

#### **14. Collection of Results**

We collect the results concerning the time-dependent generalized Kepler potential for all values of $\nu$ in Table 2. We note that for $\nu = -2, 1, 2$ the dynamical system is the time-dependent 3d oscillator, the time-dependent Kepler potential and the Newton–Cotes potential, respectively. Concerning notation, we have $q^i = (x, y, z)$, $q\_i \equiv q\_{i+3k}$ for all $k \in \mathbb{N}$ and $\tilde{R}\_i = (\dot{q}^j \dot{q}\_j) q\_i - (\dot{q}^j q\_j) \dot{q}\_i - \frac{k}{r(b\_0 + b\_1 t)} q\_i$.


**Table 2.** The LFIs/QFIs of the time-dependent generalized Kepler potential $V = -\frac{\omega(t)}{r^{\nu}}$.

#### **15. Integrating the Equations**

In this section, we use the independent LFIs $I\_{41i}$, $I\_{42i}$ to integrate the equations of the special class of 3d time-dependent oscillators ($\nu = -2$) defined in Section 13 with $\omega(t)$ given by (123). We also use the FIs $L\_i$, $E\_2$, $A\_i$ to integrate the time-dependent Kepler potential ($\nu = 1$) with $\omega(t) = \frac{k}{b\_0 + b\_1 t}$ where $kb\_1 \neq 0$ (see Section 11.1).

*15.1. The 3d Time-Dependent Oscillator with ω*(*t*) *Given by (123)*

Using the LFIs (125) and (126) we find

$$q\_i(t) = \left(\frac{2}{c\_0}\right)^{1/2} f^{1/2} \left(I\_{41i} \sin \theta - I\_{42i} \cos \theta\right) \tag{136}$$

where $I\_{41i}$, $I\_{42i}$, $i = 1, 2, 3$, are arbitrary constants (real or imaginary) and $\theta(t) = \left( \frac{c\_0}{2} \right)^{1/2} \int f^{-1} dt$.

The solution (136) coincides with the solution (52) in [10].

In the case of the 1d time-dependent oscillator, if we set $2\omega(t) = -\psi^2(t)$, $c\_0 = 2$ and $f(t) = \rho^2(t)$, Equation (54) and the defining relation (123) for $\omega(t)$ become

$$\ddot{x} = -\psi^2 x \tag{137}$$

$$\ddot{\rho} = -\psi^2 \rho + \rho^{-3}. \tag{138}$$

The LFIs (134) and (135) become

$$I\_{41} = \rho^{-1} x \sin\theta + (\rho \dot{x} - \dot{\rho} x) \cos\theta \tag{139}$$

$$I\_{42} = -\rho^{-1} x \cos\theta + (\rho \dot{x} - \dot{\rho} x) \sin\theta. \tag{140}$$

The general solution (136) is

$$\mathbf{x}(t) = \rho(t) \left( I\_{41} \sin \theta - I\_{42} \cos \theta \right) \tag{141}$$

where $\dot{\theta} = \rho^{-2}$ and $\rho(t)$ is a given non-zero function which defines $\psi(t)$ through (138). This is the 1d solution (9) in [10].

*15.2. The Solution of the Time-Dependent Kepler Potential with $\omega\_{2K}(t) = \frac{k}{b\_0 + b\_1 t}$ Where $kb\_1 \neq 0$*

In Section 11.1, it is shown that this system admits the following FIs:

$$L\_1 = y\dot{z} - z\dot{y},\ \ L\_2 = z\dot{x} - x\dot{z},\ \ L\_3 = x\dot{y} - y\dot{x}$$

$$E\_2 = (b\_0 + b\_1t)^2 \left[\frac{\dot{q}^i \dot{q}\_i}{2} - \frac{k}{r(b\_0 + b\_1t)}\right] - b\_1(b\_0 + b\_1t)q^i \dot{q}\_i + \frac{b\_1^2r^2}{2}$$

$$A\_i = (b\_0 + b\_1 t) \tilde{R}\_i + b\_1 (q\_{i+2} L\_{i+1} - q\_{i+1} L\_{i+2})$$

where $\tilde{R}\_i = (\dot{q}^j \dot{q}\_j) q\_i - (\dot{q}^j q\_j) \dot{q}\_i - \frac{k}{r(b\_0 + b\_1 t)} q\_i$. The components of the generalized Runge–Lenz vector are written

$$\begin{array}{rcl} A\_1 &=& (b\_0 + b\_1 t)(\dot{y}L\_3 - \dot{z}L\_2) + b\_1(zL\_2 - yL\_3) - \frac{k}{r}x \\\\ A\_2 &=& (b\_0 + b\_1 t)(\dot{z}L\_1 - \dot{x}L\_3) + b\_1(xL\_3 - zL\_1) - \frac{k}{r}y \\\\ A\_3 &=& (b\_0 + b\_1 t)(\dot{x}L\_2 - \dot{y}L\_1) + b\_1(yL\_1 - xL\_2) - \frac{k}{r}z. \end{array}$$

Since the angular momentum is an FI, the motion takes place on a plane. We choose, without loss of generality, the plane $z = 0$ and on it the polar coordinates $x = r\cos\theta$, $y = r\sin\theta$. Then,

$$L\_1 = L\_2 = 0, \ L\_3 = r^2 \dot{\theta}, \ E\_2 = (b\_0 + b\_1 t)^2 \left[ \frac{\dot{r}^2 + r^2 \dot{\theta}^2}{2} - \frac{k}{r(b\_0 + b\_1 t)} \right] - b\_1 (b\_0 + b\_1 t) r \dot{r} + \frac{b\_1^2 r^2}{2}$$

$$A\_1 = L\_3 \left[ (b\_0 + b\_1 t) \dot{r} - b\_1 r \right] \sin\theta + \left[ (b\_0 + b\_1 t) L\_3 r \dot{\theta} - k \right] \cos\theta$$

$$A\_2 = -L\_3 \left[ (b\_0 + b\_1 t) \dot{r} - b\_1 r \right] \cos\theta + \left[ (b\_0 + b\_1 t) L\_3 r \dot{\theta} - k \right] \sin\theta, \ A\_3 = 0.$$

Using the relation $\dot{\theta} = \frac{L\_3}{r^2}$ to replace $\dot{\theta}$, the above relations are written

$$E\_2 = (b\_0 + b\_1 t)^2 \left[ \frac{\dot{r}^2}{2} + \frac{L\_3^2}{2r^2} - \frac{k}{r(b\_0 + b\_1 t)} \right] - b\_1 (b\_0 + b\_1 t) r \dot{r} + \frac{b\_1^2 r^2}{2} \tag{142}$$

$$A\_1 = L\_3 \left[ (b\_0 + b\_1 t) \dot{r} - b\_1 r \right] \sin\theta + \left[ (b\_0 + b\_1 t) \frac{L\_3^2}{r} - k \right] \cos\theta \tag{143}$$

$$A\_2 = -L\_3 \left[ (b\_0 + b\_1 t) \dot{r} - b\_1 r \right] \cos\theta + \left[ (b\_0 + b\_1 t) \frac{L\_3^2}{r} - k \right] \sin\theta. \tag{144}$$

By multiplying Equation (143) with $\cos\theta$ and (144) with $\sin\theta$ and adding the results, we find that

$$\frac{1}{r} = \frac{k}{L\_3^2(b\_0 + b\_1t)}(1 + k\_1\cos\theta + k\_2\sin\theta) \implies r = \frac{L\_3^2(b\_0 + b\_1t)}{k(1 + k\_1\cos\theta + k\_2\sin\theta)}\tag{145}$$

where $k\_1 \equiv \frac{A\_1}{k}$ and $k\_2 \equiv \frac{A\_2}{k}$.

Applying the transformation *k*<sup>1</sup> = *α* cos *β* and *k*<sup>2</sup> = *α* sin *β*, Equation (145) is written (see also Section 5 in [17])

$$\frac{1}{r} = \frac{\omega\_{2K}}{L\_3^2} \left[ 1 + \alpha \cos(\theta - \beta) \right] \implies r = \frac{L\_3^2 \omega\_{2K}^{-1}}{1 + \alpha \cos(\theta - \beta)} \tag{146}$$

which for $\omega\_{2K}(t) = const$ (standard Kepler problem) reduces to the equation of a conic section in polar coordinates. In that case, $\alpha$ is the eccentricity.

It is also worthwhile mentioning that the relation (94) becomes

$$2E\_2 L\_3^2 + k^2 = \alpha^2 k^2 \implies 2E\_2 L\_3^2 = k^2 (\alpha^2 - 1).$$

Moreover, Equation (142) gives

$$\left[\frac{d}{dt}\left(\frac{r}{b\_0 + b\_1 t}\right)\right]^2 = -2(b\_0 + b\_1 t)^{-2} \left[\frac{L\_3^2}{2r^2} - \frac{k}{r(b\_0 + b\_1 t)} - \frac{E\_2}{(b\_0 + b\_1 t)^2}\right].$$

Finally, in the polar plane the equations of motion (54) for *ν* = 1 become

$$
\ddot{r} - r\dot{\theta}^2 + \frac{\omega\_{2K}}{r^2} = 0 \tag{147}
$$

$$
r\ddot{\theta} + 2\dot{r}\dot{\theta} = 0. \tag{148}
$$

Equation (148) implies the FI of the angular momentum $L\_3 = r^2 \dot{\theta}$. It can be easily checked that the solution (145) satisfies Equation (147) by replacing $\ddot{\theta}$ from (148) and $\dot{\theta}$ with $\frac{L\_3}{r^2}$. Substituting the solution (145) into the FI $L\_3$ gives

$$\int \frac{k^2 dt}{L\_3^3 (b\_0 + b\_1 t)^2} = \int \frac{d\theta}{\left(1 + k\_1 \cos \theta + k\_2 \sin \theta\right)^2} \implies \frac{k}{L\_3^2 (b\_0 + b\_1 t)} = -\frac{b\_1 L\_3}{k} \int \frac{d\theta}{\left(1 + k\_1 \cos \theta + k\_2 \sin \theta\right)^2}.\tag{149}$$

Substituting (149) in (145) we obtain

$$\frac{1}{r} = -\frac{b\_1 L\_3}{k} (1 + k\_1 \cos \theta + k\_2 \sin \theta) \int \frac{d\theta}{\left(1 + k\_1 \cos \theta + k\_2 \sin \theta\right)^2} \tag{150}$$

which coincides with Equation (5.17) in [17].

#### **16. A Class of 1d Non-Linear Time-Dependent Equations**

In this section, we use the well-known result [12] that the non-linear dynamical system

$$\ddot{q}^a = -\Gamma^a\_{bc} \dot{q}^b \dot{q}^c - \omega(t) Q^a(q) + \phi(t) \dot{q}^a \tag{151}$$

is equivalent to the linear dynamical system (without damping term)

$$\frac{d^2q^a}{ds^2} = -\Gamma^a\_{bc}\frac{dq^b}{ds}\frac{dq^c}{ds} - \bar{\omega}(s)Q^a(q) \tag{152}$$

where *φ*(*t*) is an arbitrary function such that

$$s(t) = \int e^{\int \phi(t)dt} dt, \ \bar{\omega}(s) = \omega(t(s)) \left( \frac{dt}{ds} \right)^2 \iff \omega(t) = \bar{\omega}(s(t)) e^{2\int \phi(t)dt}. \tag{153}$$

We apply this result to the following problem: *Consider the second order differential equation*

$$\ddot{x} = -\omega(t) x^{\mu} + \phi(t) \dot{x} \tag{154}$$

*where the constant $\mu \neq -1$, and determine the relation between the functions $\omega(t)$, $\phi(t)$ for which the equation admits a QFI and is, therefore, integrable.*

This problem has been considered previously in [36,37] (see Equation (28a) in [36] and Equation (17) in [37]) and has been answered partially using different methods. In [36], the author used the Hamiltonian formalism where one looks for a canonical transformation to bring the Hamiltonian in a time-separable form. In [37], the author used a direct method for constructing FIs by multiplying the equation with an integrating factor. In [37], it is shown that both methods are equivalent and that the results of [37] generalize those of [36]. In the following, we shall generalize the results of [37]; in addition, we discuss a number of applications.

Equation (154) is equivalent to the equation

$$\frac{d^2 x}{ds^2} = -\bar{\omega}(s) x^{\mu}, \ \mu \neq -1 \tag{155}$$

where the function $\bar{\omega}(s)$ is given by (153).

Replacing $Q^1 = x^{\mu}$ in the system of Equations (5)–(10) (in 1d Euclidean space, the KT condition (5) $K\_{(ab;c)} = 0$ becomes $K\_{11,1} = 0 \implies K\_{11} = K\_{11}(s)$, that is, it is an arbitrary function of $s$), we find that $K\_{11} = K\_{11}(s)$ and the following conditions

$$K\_1(s, x) = -\frac{dK\_{11}}{ds} x + b\_1(s) \tag{156}$$

$$K(s, x) = 2\bar{\omega} K\_{11} \frac{x^{\mu+1}}{\mu+1} + \frac{d^2 K\_{11}}{ds^2} \frac{x^2}{2} - \frac{db\_1}{ds} x + b\_2(s) \tag{157}$$

$$\begin{split} 0 &= \left( \frac{2\frac{d\bar{\omega}}{ds} K\_{11}}{\mu+1} + \frac{2\bar{\omega} \frac{dK\_{11}}{ds}}{\mu+1} + \bar{\omega} \frac{dK\_{11}}{ds} \right) x^{\mu+1} - \bar{\omega} b\_1 x^{\mu} + \\ &+ \frac{d^3 K\_{11}}{ds^3} \frac{x^2}{2} - \frac{d^2 b\_1}{ds^2} x + \frac{db\_2}{ds} \end{split} \tag{158}$$

where *b*1(*s*), *b*2(*s*) are arbitrary functions. Then, the general QFI (3) becomes

$$I = K\_{11}(s) \left(\frac{d\mathbf{x}}{ds}\right)^2 + K\_1(s,\mathbf{x})\frac{d\mathbf{x}}{ds} + K(s,\mathbf{x}).\tag{159}$$

We consider the solution of the system (156)–(158) for various values of *µ*.

As will be shown, for $\mu \neq -1$ there results a family of 'frequencies' $\bar{\omega}(s)$ parameterized by constants; however, for the specific values $\mu = 0, 1, 2$ there results a family of 'frequencies' $\bar{\omega}(s)$ parameterized by functions.

(1) Case *µ* = 0.

We find the QFI

$$I = K\_{11} \left(\frac{d\mathbf{x}}{ds}\right)^2 - \frac{d\mathbf{K}\_{11}}{ds}\mathbf{x}\frac{d\mathbf{x}}{ds} + b\_1(s)\frac{d\mathbf{x}}{ds} + c\_3\mathbf{x}^2 + 2\bar{\omega}(s)\mathbf{K}\_{11}\mathbf{x} - \frac{db\_1}{ds}\mathbf{x} + \int b\_1(s)\bar{\omega}(s)ds \tag{160}$$

where $K\_{11} = c\_1 + c\_2 s + c\_3 s^2$, $c\_1$, $c\_2$, $c\_3$ are arbitrary constants and the functions $b\_1(s)$, $\bar{\omega}(s)$ satisfy the condition

$$\frac{d^2 b\_1}{ds^2} = 2\frac{d\bar{\omega}}{ds}\mathcal{K}\_{11} + 3\bar{\omega}\frac{d\mathcal{K}\_{11}}{ds}.\tag{161}$$

Using the transformation (153), Equations (160) and (161) become

$$\begin{split} I &= \left[ c\_1 + c\_2 \int e^{\int \phi(t)dt} dt + c\_3 \left( \int e^{\int \phi(t)dt} dt \right)^2 \right] e^{-2\int \phi(t)dt} \dot{x}^2 - \left[ c\_2 + 2c\_3 \int e^{\int \phi(t)dt} dt \right] e^{-\int \phi(t)dt} x\dot{x} + \\ &+ b\_1(s(t)) e^{-\int \phi(t)dt} \dot{x} + c\_3 x^2 + 2\omega(t) \left[ c\_1 + c\_2 \int e^{\int \phi(t)dt} dt + c\_3 \left( \int e^{\int \phi(t)dt} dt \right)^2 \right] e^{-2\int \phi(t)dt} x - \\ &- \dot{b}\_1 e^{-\int \phi(t)dt} x + \int b\_1(s(t)) \omega(t) e^{-\int \phi(t)dt} dt \end{split} \tag{162}$$

and

$$\begin{split} \ddot{b}\_1 - \phi \dot{b}\_1 &= 2e^{-\int \phi(t)dt} (\dot{\omega} - 2\phi\omega) \left[ c\_1 + c\_2 \int e^{\int \phi(t)dt} dt + c\_3 \left( \int e^{\int \phi(t)dt} dt \right)^2 \right] + \\ &+ 3\omega \left[ c\_2 + 2c\_3 \int e^{\int \phi(t)dt} dt \right]. \end{split} \tag{163}$$

(2) Case *µ* = 1.

We again derive the results of the time-dependent oscillator (see Table 2 for *ν* = −2) in one dimension. Using the transformation (153), we deduce that the original equation

$$
\ddot{\mathbf{x}} = -\omega(t)\mathbf{x} + \phi(t)\dot{\mathbf{x}}\tag{164}
$$

for the frequency

$$
\omega(t) = -\rho^{-1}\ddot{\rho} + \phi (\ln \rho)^{\cdot} + \rho^{-4} e^{2\int \phi(t)dt} \tag{165}
$$

admits the general solution

$$x(t) = \rho(t)(A\sin\theta + B\cos\theta)\tag{166}$$

where $\rho(t) \equiv \rho(s(t))$ and $\theta(s(t)) = \int \rho^{-2}(t) e^{\int \phi(t)dt} dt$.
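This claim can be checked symbolically. In the sketch below (illustrative; the auxiliary function $P(t)$ stands for $\int \phi\, dt$, so $e^{P}$ plays the role of $e^{\int \phi(t)dt}$), the solution (166) is substituted into Equation (164) with $\omega(t)$ given by (165):

```python
import sympy as sp

t, A, B = sp.symbols('t A B')
rho, phi, P, th = (sp.Function(n)(t) for n in ('rho', 'phi', 'P', 'theta'))

F = sp.exp(P)                       # F = exp(int phi dt), so dP/dt = phi
x = rho*(A*sp.sin(th) + B*sp.cos(th))                             # eq. (166)
omega = -rho.diff(t, 2)/rho + phi*rho.diff(t)/rho + F**2/rho**4   # eq. (165)

residual = x.diff(t, 2) + omega*x - phi*x.diff(t)                 # eq. (164) rearranged

thdot = F/rho**2                    # theta' = rho^-2 exp(int phi dt)
residual = residual.subs(th.diff(t, 2), thdot.diff(t))
residual = residual.subs(th.diff(t), thdot)
residual = residual.subs(P.diff(t), phi)
res = sp.simplify(sp.expand(residual))
print(res)  # -> 0
```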

(3) Case *µ* = 2.

We find the function $\bar{\omega} = K\_{11}^{-5/2}$ and the QFI

$$I = K\_{11}(s) \left( \frac{dx}{ds} \right)^2 - \frac{dK\_{11}}{ds} x \frac{dx}{ds} + (c\_4 + c\_5 s) \frac{dx}{ds} + \frac{2}{3} K\_{11}^{-3/2} x^3 + \frac{d^2 K\_{11}}{ds^2} \frac{x^2}{2} - c\_5 x \tag{167}$$

where *c*4, *c*<sup>5</sup> are arbitrary constants and the function *K*11(*s*) is given by

$$\frac{d^3 K\_{11}}{ds^3} = 2(c\_4 + c\_5s)K\_{11}^{-5/2}.\tag{168}$$

Using the transformation (153), the above results become

$$
\omega(t) = K_{11}^{-5/2} e^{2\int \phi(t)dt} \tag{169}
$$

$$\begin{split} I &= K_{11} e^{-2\int \phi(t)dt} \dot{x}^2 - \dot{K}_{11} e^{-2\int \phi(t)dt} x \dot{x} + \left[ c_4 + c_5 \int e^{\int \phi(t)dt} dt \right] e^{-\int \phi(t)dt} \dot{x} + \frac{2}{3} K_{11}^{-3/2} x^3 + \\ &\quad + (\ddot{K}_{11} - \phi \dot{K}_{11}) e^{-2\int \phi(t)dt} \frac{x^2}{2} - c_5 x \end{split} \tag{170}$$

and

$$\dddot{K}_{11} - 3\phi\ddot{K}_{11} - \dot{\phi}\dot{K}_{11} + 2\phi^{2}\dot{K}_{11} = 2\left[c_{4} + c_{5}\int e^{\int \phi(t)dt}dt\right]e^{3\int \phi(t)dt}K_{11}^{-5/2} \tag{171}$$

where the function $K_{11} = K_{11}(s(t))$.
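The left-hand side of (171) is simply $d^3K_{11}/ds^3$ from (168) rewritten in $t$-derivatives. The operator identity behind this can be confirmed with sympy, assuming (as the transformed formulas indicate) that the reparameterization (153) acts as $d/ds = e^{-\int\phi dt}\,d/dt$:

```python
import sympy as sp

t = sp.symbols('t')
F = sp.Function('F')(t)    # F(t) stands for ∫ φ(t) dt, so φ = F'
K = sp.Function('K')(t)    # K stands for K_11 as a function of t
phi = sp.diff(F, t)

def d_ds(expr):
    # assumed form of the reparameterization: d/ds = e^{-F} d/dt
    return sp.exp(-F) * sp.diff(expr, t)

lhs = sp.exp(3 * F) * d_ds(d_ds(d_ds(K)))   # e^{3∫φdt} d³K/ds³
rhs = (sp.diff(K, t, 3) - 3 * phi * sp.diff(K, t, 2)
       - sp.diff(phi, t) * sp.diff(K, t) + 2 * phi**2 * sp.diff(K, t))
assert sp.simplify(lhs - rhs) == 0
```

The right-hand side here is exactly the combination of $t$-derivatives appearing in (171).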

We note that for $\mu = 2$ Equation (154), or more specifically its equivalent (155), arises in the solution of Einstein's field equations when the gravitational field is spherically symmetric and the matter source is a shear-free perfect fluid (see, e.g., [38–43]).

(4) Case $\mu \neq -1$.

In this case, $b_1 = b_2 = 0$, $K_{11} = c_1 + c_2 s + c_3 s^2$ and $\bar{\omega}(s) = (c_1 + c_2 s + c_3 s^2)^{-\frac{\mu+3}{2}}$, where $c_1$, $c_2$, $c_3$ are arbitrary constants.

The QFI (159) becomes

$$I = (c\_1 + c\_2s + c\_3s^2) \left(\frac{dx}{ds}\right)^2 - (c\_2 + 2c\_3s)x\frac{dx}{ds} + \frac{2}{\mu + 1}(c\_1 + c\_2s + c\_3s^2)^{-\frac{\mu + 1}{2}}x^{\mu + 1} + c\_3x^2 \tag{172}$$

and the function

$$
\omega(s) = (c\_1 + c\_2s + c\_3s^2)^{-\frac{\mu+3}{2}}.\tag{173}
$$

It can be checked that (172) and (173) for *µ* = 0, 1, 2 give results compatible with the ones we found for these values of *µ*.
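Relations (172) and (173) can also be verified directly with a computer algebra system: differentiate $I$ along solutions of $d^2x/ds^2 = -\omega(s)x^\mu$ and check that the result vanishes identically. A minimal sympy sketch (the symbol names are ours):

```python
import sympy as sp

s = sp.symbols('s', positive=True)
mu = sp.symbols('mu', positive=True)
c1, c2, c3 = sp.symbols('c1 c2 c3')
x = sp.Function('x')(s)

P = c1 + c2*s + c3*s**2
omega = P**(-(mu + 3)/2)                                    # Eq. (173)
I = (P*sp.diff(x, s)**2 - sp.diff(P, s)*x*sp.diff(x, s)
     + 2/(mu + 1)*P**(-(mu + 1)/2)*x**(mu + 1) + c3*x**2)   # Eq. (172)

# impose the equation of motion d²x/ds² = −ω(s) x^μ and check dI/ds = 0
dI = sp.diff(I, s).subs(sp.Derivative(x, (s, 2)), -omega*x**mu)
assert sp.simplify(dI) == 0
```

All terms cancel pairwise, the last cancellation using $d^2P/ds^2 = 2c_3$, which is why the $c_3 x^2$ term is needed in (172).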

Using the transformation (153), we deduce that the original system (154) is integrable iff the functions *ω*(*t*), *φ*(*t*) are related as follows

$$
\omega(t) = \left[c\_1 + c\_2 \int e^{\int \phi(t)dt} dt + c\_3 \left(\int e^{\int \phi(t)dt} dt\right)^2\right]^{-\frac{\mu+3}{2}} e^{2\int \phi(t)dt}.\tag{174}
$$

In this case, the associated QFI (172) is

$$\begin{split} I &= \left[ c_1 + c_2 \int e^{\int \phi(t)dt} dt + c_3 \left( \int e^{\int \phi(t)dt} dt \right)^2 \right] e^{-2\int \phi(t)dt} \dot{x}^2 - \left[ c_2 + 2c_3 \int e^{\int \phi(t)dt} dt \right] e^{-\int \phi(t)dt} x \dot{x} + \\ &\quad + \frac{2}{\mu+1} \left[ c_1 + c_2 \int e^{\int \phi(t)dt} dt + c_3 \left( \int e^{\int \phi(t)dt} dt \right)^2 \right]^{-\frac{\mu+1}{2}} x^{\mu+1} + c_3 x^2. \end{split} \tag{175}$$

These expressions generalize those given in [37]. Indeed, if we introduce the notation $\omega(t) \equiv \alpha(t)$, $\phi(t) \equiv -\beta(t)$, then Equations (174) and (175) for $c_3 = 0$ become Equations (25) and (26) of [37].

#### *16.1. The Generalized Lane–Emden Equation*

Consider the 1d generalized Lane–Emden Equation (see Equation (6) in [44])

$$\ddot{x} = -\omega(t)x^{\mu} - \frac{k}{t}\dot{x}\tag{176}$$

where $k$ is an arbitrary constant. This equation is well-known in the literature because of its many applications in astrophysical problems (see the references in [44]). In general, finding explicit analytic solutions of Equation (176) is a major task. For example, such solutions have been found only for the special values $\mu = 0, 1, 5$ in the case where $\omega(t) = 1$ and $k = 2$. New exact solutions, or at least the Liouville integrability, of Equation (176) are guaranteed if we find a way to determine its FIs. We observe that Equation (176) is a subcase of the original Equation (154) for $\phi(t) = -\frac{k}{t}$; therefore, we can apply the results found earlier in Section 16.
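For the classical case $\omega(t) = 1$, $k = 2$, $\mu = 5$ mentioned above, Equation (176) has the well-known exact solution $x(t) = (1 + t^2/3)^{-1/2}$, which provides a convenient numerical sanity check. The sketch below (plain RK4; the step count, start time, and tolerance are our choices) integrates (176) and compares against this solution:

```python
import numpy as np

# Lane-Emden, Eq. (176), with omega = 1, k = 2, mu = 5:
#   x'' = -x**5 - (2/t) x'
def rhs(t, y):
    x, v = y
    return np.array([v, -x**5 - (2.0 / t) * v])

# classical exact solution for this case, and its derivative
exact = lambda t: (1.0 + t**2 / 3.0) ** -0.5
dexact = lambda t: -(t / 3.0) * (1.0 + t**2 / 3.0) ** -1.5

# plain RK4; we start slightly after t = 0 to avoid the k/t singularity
t, t1, n = 0.01, 1.0, 2000
h = (t1 - t) / n
y = np.array([exact(t), dexact(t)])
for _ in range(n):
    k1 = rhs(t, y); k2 = rhs(t + h/2, y + h/2*k1)
    k3 = rhs(t + h/2, y + h/2*k2); k4 = rhs(t + h, y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h

err = abs(y[0] - exact(t1))
print(err)  # small discretization error
```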

In what follows, we discuss only the fourth case, where $\mu \neq -1$, in order to compare our results with those found in Table 1 of [44]. In particular, for $\phi(t) = -\frac{k}{t}$ the function (174) and the associated QFI (175) become

$$
\omega(t) = t^{-2k} \left( c\_1 + c\_2 M + c\_3 M^2 \right)^{-\frac{\mu+3}{2}} \tag{177}
$$

and

$$I = t^{2k} \left( c_1 + c_2 M + c_3 M^2 \right) \dot{x}^2 - t^k (c_2 + 2c_3 M) x \dot{x} + \frac{2}{\mu + 1} \left( c_1 + c_2 M + c_3 M^2 \right)^{-\frac{\mu + 1}{2}} x^{\mu + 1} + c_3 x^2 \tag{178}$$

where the function $M(t) = \int t^{-k}dt$.

Concerning the form of the function $M(t)$, there are two cases to consider: (a) $k = 1$; (b) $k \neq 1$.

(a) Case *k* = 1.

We have *M* = ln *t* and Equations (177) and (178) give

$$
\omega(t) = t^{-2} \left[ c\_1 + c\_2 \ln t + c\_3 (\ln t)^2 \right]^{-\frac{\mu + 3}{2}} \tag{179}
$$

and

$$I \quad = \quad t^2 \Big[c\_1 + c\_2 \ln t + c\_3 (\ln t)^2\Big] \dot{\mathbf{x}}^2 - t(c\_2 + 2c\_3 \ln t) \mathbf{x} \dot{\mathbf{x}} +$$

$$+ \frac{2}{\mu + 1} \Big[c\_1 + c\_2 \ln t + c\_3 (\ln t)^2\Big]^{-\frac{\mu + 1}{2}} \mathbf{x}^{\mu + 1} + c\_3 \mathbf{x}^2. \tag{180}$$

We consider the following subcases:


- $c_2 = c_3 = 0$, $c_1 \neq 0$.

Equations (179) and (180) give the function $\omega(t) = At^{-2}$ and the QFI (dividing $I$ by $2c_1$)

$$I = \frac{t^2}{2}\dot{\mathfrak{x}}^2 + \frac{A}{\mu + 1}\mathfrak{x}^{\mu + 1}$$

where the constant $A = c_1^{-\frac{\mu+3}{2}}$. This is Case 5 in Table 1 of [44].

- $c_1 = c_3 = 0$, $c_2 \neq 0$.

Equations (179) and (180) give the function $\omega(t) = At^{-2}(\ln t)^{-\frac{\mu+3}{2}}$ and the QFI (dividing $I$ by $2c_2$)

$$I = \frac{1}{2}t^2(\ln t)\dot{\mathfrak{x}}^2 - \frac{t}{2}\mathfrak{x}\dot{\mathfrak{x}} + \frac{A}{\mu+1}(\ln t)^{-\frac{\mu+1}{2}}\mathfrak{x}^{\mu+1}$$

where the constant $A = c_2^{-\frac{\mu+3}{2}}$. This is Case 6 in Table 1 of [44].

- $c_1 = c_2 = 0$, $c_3 \neq 0$.

Equations (179) and (180) give the function $\omega(t) = At^{-2}(\ln t)^{-\mu-3}$ and the QFI (dividing $I$ by $2c_3$)

$$I = \frac{1}{2}(t\ln t)^2 \dot{\mathbf{x}}^2 - t(\ln t)\mathbf{x}\dot{\mathbf{x}} + \frac{A}{\mu+1}(\ln t)^{-\mu-1}\mathbf{x}^{\mu+1} + \frac{\mathbf{x}^2}{2}$$

where the constant $A = c_3^{-\frac{\mu+3}{2}}$. This is Case 7 in Table 1 of [44].

(b) Case $k \neq 1$.

We have $M = \frac{t^{1-k}}{1-k}$ and Equations (177) and (178) give

$$
\omega(t) = t^{-2k} \left[ c\_1 + \frac{c\_2}{1-k} t^{1-k} + \frac{c\_3}{(1-k)^2} t^{2(1-k)} \right]^{-\frac{\mu+3}{2}} \tag{181}
$$

and

$$I = t^{2k}\left[c\_{1}+\frac{c\_{2}}{1-k}t^{1-k}+\frac{c\_{3}}{(1-k)^{2}}t^{2(1-k)}\right]\dot{\mathbf{x}}^{2}-t^{k}\left(c\_{2}+\frac{2c\_{3}}{1-k}t^{1-k}\right)\mathbf{x}\dot{\mathbf{x}}+$$

$$+\frac{2}{\mu+1}\left[c\_{1}+\frac{c\_{2}}{1-k}t^{1-k}+\frac{c\_{3}}{(1-k)^{2}}t^{2(1-k)}\right]^{-\frac{\mu+1}{2}}\mathbf{x}^{\mu+1}+c\_{3}\mathbf{x}^{2}.\tag{182}$$

We consider the following subcases:


- $c_2 = c_3 = 0$, $c_1 \neq 0$.

Equations (181) and (182) give the function $\omega(t) = At^{-2k}$ and the QFI (dividing $I$ by $2c_1$)

$$I = \frac{t^{2k}}{2} \dot{\mathfrak{x}}^2 + \frac{A}{\mu + 1} \mathfrak{x}^{\mu + 1}$$

where the constant $A = c_1^{-\frac{\mu+3}{2}}$. This is Case 2 in Table 1 of [44].
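Any of these first integrals can be checked numerically: integrating (176) and evaluating $I$ along the trajectory should give a constant. A short sketch for the first subcase, $\omega(t) = At^{-2k}$ (the values $k = 3/2$, $\mu = 3$, $A = 1$, the initial data, and the integration window are illustrative choices of ours):

```python
import numpy as np

k, mu, A = 1.5, 3.0, 1.0     # illustrative parameters, omega(t) = A*t**(-2*k)

def rhs(t, y):
    x, v = y
    return np.array([v, -A * t**(-2*k) * x**mu - (k / t) * v])

def qfi(t, y):               # I = t^(2k)/2 * xdot^2 + A/(mu+1) * x^(mu+1)
    x, v = y
    return t**(2*k) / 2 * v**2 + A / (mu + 1) * x**(mu + 1)

# RK4 integration from t = 1 to t = 2
t, y, h = 1.0, np.array([1.0, 0.0]), 0.001
I0 = qfi(t, y)
for _ in range(1000):
    k1 = rhs(t, y); k2 = rhs(t + h/2, y + h/2*k1)
    k3 = rhs(t + h/2, y + h/2*k2); k4 = rhs(t + h, y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h

drift = abs(qfi(t, y) - I0)
print(drift)  # should be at the level of the discretization error
```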


- $c_1 = c_3 = 0$, $c_2 \neq 0$.

Equations (181) and (182) give the function $\omega(t) = At^{\frac{1}{2}(k\mu-k-\mu-3)}$ and the QFI (multiplying $I$ by $\frac{1-k}{c_2}$)

$$I = t^{k+1} \dot{\mathfrak{x}}^2 + (k-1)t^k \mathfrak{x} \dot{\mathfrak{x}} + \frac{2A}{\mu+1} t^{\frac{1}{2}(\mu+1)(k-1)} \mathfrak{x}^{\mu+1}$$

where the constant $A = \left(\frac{c_2}{1-k}\right)^{-\frac{\mu+3}{2}}$. This is Case 3 in Table 1 of [44].

We note also that for $k = \frac{\mu+3}{\mu-1}$, where $\mu \neq 1$, the function $\omega(t) = A = const$. This reproduces the first subcase of Case 1 in Table 1 of [44], which is the Case 5.1 of [45].


- $c_1 = c_2 = 0$, $c_3 \neq 0$.

Equations (181) and (182) give the function $\omega(t) = At^{k\mu+k-\mu-3}$ and the QFI (multiplying $I$ by $\frac{(1-k)^2}{2c_3}$)

$$I = \frac{t^2}{2}\dot{\mathfrak{x}}^2 + (k-1)t\mathfrak{x}\dot{\mathfrak{x}} + \frac{A}{\mu+1}t^{(\mu+1)(k-1)}\mathfrak{x}^{\mu+1} + \frac{1}{2}(k-1)^2\mathfrak{x}^2$$

where the constant $A = \left(\frac{1-k}{\sqrt{c_3}}\right)^{\mu+3}$. This is Case 4 in Table 1 of [44].

We note also that for $k = \frac{\mu+3}{\mu+1}$ the function $\omega(t) = A = const$. This recovers the second subcase of Case 1 in Table 1 of [44], which is the Case 5.2 of [45].

We conclude that the seven Cases 1–7 found in Table 1 of [44] are just subcases of the above two general cases (a) and (b). To compare with these results, one must adopt the notation $\omega = f$, $k = n$ and $\mu = p$.

#### **17. Conclusions**

The purpose of the present work was to compute the QFIs of time-dependent dynamical systems of the form $\ddot{q}^a = -\Gamma^a_{bc}\dot{q}^b\dot{q}^c - \omega(t)Q^a(q)$, where the connection coefficients are computed from the kinetic metric, using the direct method instead of the Noether symmetries as is usually done. In the direct method, one assumes that the QFI is of the form $I = K_{ab}\dot{q}^a\dot{q}^b + K_a\dot{q}^a + K$ and demands that $dI/dt = 0$. This leads to a system of PDEs whose solution provides the QFIs. One key result is that the tensor $K_{ab}$ is a KT of the kinetic metric.

We have discussed the solution of the system of equations at two levels. The first level is purely geometric and concerns the KT $K_{ab}$; the second level is physical and concerns the quantities $\omega(t)$, $Q^a(q)$ defining the dynamical system.

Concerning the first level we have applied two different methods:


In both methods, the key point is to compute the scalar *K*. Concerning the dynamical quantities *ω*(*t*), *Q<sup>a</sup>* (*q*) we have chosen to work in two ways:


The last part of our considerations concerns the well-known proposition that, under a reparameterization, the linear damping term $\phi(t)\dot{q}^a$ can be absorbed into a time-dependent generalized force. We used this proposition in the case of a 1d non-linear second-order time-dependent differential equation: we determined the condition that the time-dependent coefficients of the equation must satisfy in order for a QFI to exist, and we computed this QFI. As an application, we studied the properties of the well-known generalized Lane–Emden equation.

We note that it is possible to consider other dynamical quantities and/or kinetic metrics and compute the corresponding QFIs. What remains the same in all cases is the method of work, which we hope we have presented adequately in the present paper.

**Author Contributions:** Methodology, M.T.; formal analysis, A.M.; investigation, A.M. and M.T.; writing—original draft preparation, A.M.; writing—review and editing, A.M. and M.T.; supervision, M.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

Substituting the polynomial function *ω*(*t*) given by (39) in the system of Equations (34)–(38) we have the following cases.

**I. Case n** = **m** (both *n*, *m* finite)

From Equation (34) we obtain

$$C_{(k)ab} = -L_{(k-1)(a;b)}, \quad k = 1, \ldots, n, \qquad L_{(n)(a;b)} = 0. \tag{A1}$$

Therefore, *L*(*n*)*<sup>a</sup>* is a KV of *γab*. Condition (37) gives

$$\begin{split} 0 &= -2\left(b_1 + 2b_2 t + \ldots + \ell b_\ell t^{\ell-1}\right)\left(C_{(0)ab}Q^b + C_{(1)ab}Q^b t + \ldots + C_{(n)ab}Q^b \frac{t^n}{n}\right) + 2L_{(2)a} + 6L_{(3)a}t + \ldots + \\ &\quad + n(n-1)L_{(n)a}t^{n-2} - 2\left(b_0 + b_1 t + \ldots + b_\ell t^\ell\right)\left(C_{(1)ab}Q^b + C_{(2)ab}Q^b t + \ldots + C_{(n)ab}Q^b t^{n-1}\right) + \\ &\quad + \left(b_0 + b_1 t + \ldots + b_\ell t^\ell\right)\left[\left(L_{(0)b}Q^b\right)_{,a} + \left(L_{(1)b}Q^b\right)_{,a}t + \ldots + \left(L_{(n-1)b}Q^b\right)_{,a}t^{n-1} + \left(L_{(n)b}Q^b\right)_{,a}t^n\right]. \end{split}$$

This is a polynomial of the general form $P_{(0)a}(q) + P_{(1)a}(q)t + \ldots + P_{(n+\ell)a}(q)t^{n+\ell} = 0$. The vanishing of the coefficients $P_{(k)a}(q)$ in the last polynomial implies that

$$L\_{\text{(n)}a}Q^a = s = \text{const} \tag{A2}$$

$$\sum_{s=0}^{\ell-1} \left[ -\frac{2(k+s)b_{(k+s\leq\ell)}}{n-s} C_{(n-s\geq 0)ab} Q^b - 2b_{(k+s\leq\ell)} C_{(n-s\geq 0)ab} Q^b + b_{(k+s\leq\ell)} \left( L_{(n-s-1\geq 0)b} Q^b \right)_{,a} \right] = 0 \tag{A3}$$

where $k = 1, 2, \ldots, \ell$,

$$-\sum_{s=1}^{\ell} \left[ \frac{2sb_s}{n-s} C_{(n-s\geq 0)ab} Q^b \right] + \sum_{s=0}^{\ell} \left[ -2b_s C_{(n-s>0)ab} Q^b + b_s \left( L_{(n-s-1\geq 0)b} Q^b \right)_{,a} \right] = 0 \tag{A4}$$

and

$$k(k-1)L_{(k)a} - \sum_{s=1}^{\ell} \left[ \frac{2sb_s}{k-s-1} C_{(k-s-1\geq 0)ab} Q^b \right] + \sum_{s=0}^{\ell} \left[ -2b_s C_{(k-s-1>0)ab} Q^b + b_s \left( L_{(k-s-2\geq 0)b} Q^b \right)_{,a} \right] = 0 \tag{A5}$$

where $k = 2, 3, \ldots, n$.

We note that in the $n + \ell + 1$ formulae (A3)–(A5), when the undefined quantity $\frac{C_{(0)ab}}{0}$ appears in the calculations, it must be replaced by $C_{(0)ab}$ in order to have a consistent result. We continue with the remaining constraints (35) and (36) in order to determine the scalar coefficient $K(t, q)$. The solution of (36) is

$$\begin{split} K_{,t} &= L_{(0)a}Q^a\left(b_0 + b_1 t + \ldots + b_\ell t^\ell\right) + L_{(1)a}Q^a\left(b_0 t + b_1 t^2 + \ldots + b_\ell t^{\ell+1}\right) + \ldots + \\ &\quad + L_{(n-1)a}Q^a\left(b_0 t^{n-1} + b_1 t^n + \ldots + b_\ell t^{n+\ell-1}\right) + s\left(b_0 t^n + b_1 t^{n+1} + \ldots + b_\ell t^{n+\ell}\right) \implies \\ K &= L_{(0)a}Q^a\left(b_0 t + b_1\frac{t^2}{2} + \ldots + b_\ell\frac{t^{\ell+1}}{\ell+1}\right) + L_{(1)a}Q^a\left(b_0\frac{t^2}{2} + b_1\frac{t^3}{3} + \ldots + b_\ell\frac{t^{\ell+2}}{\ell+2}\right) + \ldots \\ &\quad + L_{(n-1)a}Q^a\left(b_0\frac{t^n}{n} + b_1\frac{t^{n+1}}{n+1} + \ldots + b_\ell\frac{t^{n+\ell}}{n+\ell}\right) + s\left(b_0\frac{t^{n+1}}{n+1} + b_1\frac{t^{n+2}}{n+2} + \ldots + b_\ell\frac{t^{n+\ell+1}}{n+\ell+1}\right) + G(q). \end{split}$$

Replacing *K* in (35) and using the conditions (A2)–(A5) we find that

$$G\_{,a} = 2b\_0 \mathcal{C}\_{(0)ab} Q^b - L\_{(1)a}.$$

Condition (38) is satisfied trivially from the above solutions. The QFI is

$$\begin{split} I &= \left(\frac{t^n}{n}C_{(n)ab} + \ldots + tC_{(1)ab} + C_{(0)ab}\right)\dot{q}^a\dot{q}^b + t^n L_{(n)a}\dot{q}^a + \ldots + tL_{(1)a}\dot{q}^a + L_{(0)a}\dot{q}^a + \\ &\quad + L_{(0)a}Q^a\left(b_0 t + b_1\frac{t^2}{2} + \ldots + b_\ell\frac{t^{\ell+1}}{\ell+1}\right) + L_{(1)a}Q^a\left(b_0\frac{t^2}{2} + b_1\frac{t^3}{3} + \ldots + b_\ell\frac{t^{\ell+2}}{\ell+2}\right) + \ldots \\ &\quad + L_{(n-1)a}Q^a\left(b_0\frac{t^n}{n} + b_1\frac{t^{n+1}}{n+1} + \ldots + b_\ell\frac{t^{n+\ell}}{n+\ell}\right) + s\left(b_0\frac{t^{n+1}}{n+1} + b_1\frac{t^{n+2}}{n+2} + \ldots + b_\ell\frac{t^{n+\ell+1}}{n+\ell+1}\right) + G(q) \end{split}$$

where $C_{(0)ab}$ is a KT, the KTs $C_{(k)ab} = -L_{(k-1)(a;b)}$ for $k = 1, \ldots, n$, $L_{(n)a}$ is a KV such that $L_{(n)a}Q^a = s$, $G_{,a} = 2b_0 C_{(0)ab}Q^b - L_{(1)a}$, and the conditions (A3)–(A5) are satisfied.

**II. Case n ≠ m** (one of *n* or *m* may be infinite)

We find QFIs that are subcases of those found in **Case I** and **Case III** which follows.

#### **III. Both n, m are infinite.**

In this case, we consider the solution to have the form

$$K_{ab}(t,q) = g(t)C_{ab}(q), \quad K_a(t,q) = f(t)L_a(q)$$

where the functions *g*(*t*), *f*(*t*) are analytic so that they may be represented by polynomial functions as follows

$$g(t) = \sum\_{k=0}^{n} c\_k t^k = c\_0 + c\_1 t + \dots + c\_n t^n$$

$$f(t) = \sum\_{k=0}^{m} d\_k t^k = d\_0 + d\_1 t + \dots + d\_m t^m.$$

In the above expressions, the coefficients *c*0, *c*1, ..., *c<sup>n</sup>* and *d*0, *d*1, ..., *d<sup>m</sup>* are arbitrary constants. We find that only the following subcase gives a new independent FI. All other subcases give results already found.

$$\underline{\text{Subcase } (g = e^{\lambda t}, f = e^{\mu t}), \lambda \mu \neq \mathbf{0}.}$$

In this case, the system of Equations (34)–(37) (Equation (38) is satisfied trivially from the solutions found below) becomes:

$$
\lambda e^{\lambda t} \mathcal{C}\_{ab} + e^{\mu t} L\_{(a;b)} \quad = \quad \mathbf{0} \tag{A6}
$$

$$-2\left(b_0 + b_1t + \ldots + b_\ell t^\ell\right)e^{\lambda t}C_{ab}Q^b + \mu e^{\mu t}L_a + K_{,a} = 0 \tag{A7}$$

$$K_{,t} - (b_0 + b_1 t + \ldots + b_{\ell} t^{\ell}) e^{\mu t} L_a Q^a = 0 \tag{A8}$$

$$\begin{split} &-2\left(b_1 + 2b_2t + \ldots + \ell b_\ell t^{\ell - 1}\right)e^{\lambda t} C_{ab} Q^b - 2\lambda (b_0 + b_1t + \ldots + b_\ell t^\ell)e^{\lambda t} C_{ab} Q^b + \\ &\quad + \mu^2 e^{\mu t} L_a + (b_0 + b_1t + \ldots + b_\ell t^\ell)e^{\mu t} \left(L_b Q^b\right)_{,a} = 0. \end{split} \tag{A9}$$

We consider the following subcases.


From (A6) we have that $C_{ab} = -\frac{1}{\lambda} L_{(a;b)}$. Therefore, $L_{(a;b)}$ is a KT. We consider two cases according to the degree $\ell$ of the polynomial $\omega(t)$.


From (A9) we find that

$$\left(L_b Q^b\right)_{,a} = 2\lambda C_{ab} Q^b \tag{A10}$$

$$
\lambda^2 L_a + b_0 \left( L_b Q^b \right)_{,a} - 2(b_1 + \lambda b_0) C_{ab} Q^b = 0. \tag{A11}
$$

Replacing with *<sup>C</sup>ab* <sup>=</sup> <sup>−</sup> <sup>1</sup> *λ L*(*a*;*b*) and by substituting (A10) in (A11) we obtain

$$\left(L_b Q^b\right)_{,a} = -2L_{(a;b)} Q^b \tag{A12}$$

and

$$\lambda^3 L_a + 2b_1 L_{(a;b)} Q^b = 0. \tag{A13}$$

The solution of (A8) is

$$K = \left(\frac{b\_0}{\lambda} - \frac{b\_1}{\lambda^2}\right) e^{\lambda t} L\_a Q^a + \frac{b\_1}{\lambda} t e^{\lambda t} L\_a Q^a + G(q)$$

which, when replaced in (A7), gives $G_{,a} = 0$, that is, $G = const \equiv 0$. The QFI is

$$I(\ell=1) = -e^{\lambda t}L_{(a;b)}\dot{q}^{a}\dot{q}^{b} + \lambda e^{\lambda t}L_{a}\dot{q}^{a} + \left(b_{0} - \frac{b_{1}}{\lambda}\right)e^{\lambda t}L_{a}Q^{a} + b_{1}te^{\lambda t}L_{a}Q^{a} \tag{A14}$$

where $L_{(a;b)}$ is a KT, $\left(L_b Q^b\right)_{,a} = \frac{\lambda^3}{b_1} L_a$ and $\lambda^3 L_a = -2b_1 L_{(a;b)} Q^b$.


From (A9) we find that $\left(L_b Q^b\right)_{,a} = 2\lambda C_{ab} Q^b$, $C_{ab} Q^b = 0$ and $\lambda^2 L_a = 2b_1 C_{ab} Q^b$. Therefore, $L_a = 0$ and hence $C_{ab} = -\frac{1}{\lambda} L_{(a;b)} = 0$. We end up with a trivial FI, that is, $I = const$.

#### **References**


## *Article* **Closed-Loop Nash Equilibrium in the Class of Piecewise Constant Strategies in a Linear State Feedback Form for Stochastic LQ Games**

**Vasile Drăgan 1,2,†, Ivan Ganchev Ivanov 3,†, Ioan-Lucian Popa 4,\*,† and Ovidiu Bagdasar 5,†**


**Abstract:** In this paper, we examine a sampled-data Nash equilibrium strategy for a stochastic linear quadratic (LQ) differential game, in which admissible strategies are assumed to be constant on the interval between consecutive measurements. Our solution first involves transforming the problem into a linear stochastic system with finite jumps. This allows us to obtain necessary and sufficient conditions assuring the existence of a sampled-data Nash equilibrium strategy, extending earlier results to a general context with more than two players. Furthermore, we provide a numerical algorithm for calculating the feedback matrices of the Nash equilibrium strategies. Finally, we illustrate the effectiveness of the proposed algorithm by two numerical examples. As both situations highlight a stabilization effect, this confirms the efficiency of our approach.

**Keywords:** Nash equilibria; stochastic LQ differential games; sampled-data controls; equilibrium strategies; optimal trajectories

**MSC:** 91A23; 93E20; 49N10; 49N70

#### **1. Introduction**

Stochastic control problems governed by Itô's differential equations have been the subject of intensive research over the last decades. This generated a rich literature and fundamental results such as the *H*<sup>2</sup> and LQ robust sampled-data control problems under a unified framework studied in [1,2], classes of uncertain sampled-data systems with random jumping parameters characterized by finite state semi-Markov process analysed in [3], or stochastic differential games investigated in [4–7].

Dynamical games have been used to solve many real-life problems (see, e.g., [8]). For example, the concept of Nash equilibrium is very important for dynamical games, where for controlled systems the closed-loop and open-loop equilibrium strategies are of special interest. Various aspects of open-loop Nash equilibria are studied for an LQ differential game in [9], with other results reported in [10–12]. In addition, in [13] applications to gas network optimisation are studied via an open-loop sampled-data Nash equilibrium strategy. The framework in which state vector measurements for a class of differential games are available only at discrete times was first studied in [14]. There, a two-player differential game was considered, and necessary conditions for the sampled-data controls were obtained using a backward translation method starting at the last time interval and following the previous state measurements. This case was extended to a stochastic framework in [15], where the players have access to sampled-data state information with a given sampling interval. For other results dealing with closed-loop systems, see, e.g., [16]. Stochastic dynamical games are an important but more challenging framework. First introduced in [17], stochastic LQ problems have been studied extensively (see [18,19]).

**Citation:** Dragan, V.; Ivanov, I.G.; Popa, I.-L.; Bagdasar, O. Closed-Loop Nash Equilibrium in the Class of Piecewise Constant Strategies in a Linear State Feedback Form for Stochastic LQ Games. *Mathematics* **2021**, *9*, 2713. https://doi.org/10.3390/math9212713

Academic Editor: Ioannis Dassios

Received: 7 September 2021; Accepted: 20 October 2021; Published: 26 October 2021

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In the present paper, we consider stochastic differential games governed by Itô differential equations with state-multiplicative and control-multiplicative white noise perturbations. The original contributions of this work are the following. First, we analyze the design of a Nash equilibrium strategy in a state feedback form in the class of piecewise constant admissible strategies. It is assumed that the state measurements are available only at some discrete times. The original problem is transformed into an equivalent one that asks for existence conditions for a Nash equilibrium strategy in a state feedback form for an LQ stochastic differential game described by a system of Itô differential equations controlled by impulses. Necessary and sufficient conditions for the existence of a Nash equilibrium strategy for the new LQ differential game are obtained based on methods from [20,21]. The feedback matrices of the equilibrium strategies for the original dynamical game are obtained from the general result using the structure of the matrix coefficients of the system controlled by impulses. Another major contribution of this paper consists of the numerical methods for computing the feedback matrices of the Nash equilibrium strategy.

To our knowledge, in the stochastic framework there are few papers dealing with the problem of a sampled-data Nash equilibrium strategy in either open-loop or closed-loop form ([22,23]); the papers [13,14] mentioned before consider only the deterministic framework. In that case, the problem of a sampled-data Nash equilibrium strategy can be transformed in a natural way into a problem stated in a discrete-time framework. Such a transformation is not possible when the dynamical system contains state-multiplicative and control-multiplicative white noise perturbations. In [15], the stochastic character is due only to the presence of additive white noise perturbations, and in that case the approach is not essentially different from the one used in the deterministic case.

The paper is organized as follows. In Section 2, we formulate the problem, introducing the L-player Nash equilibrium concept. In Section 2.2, we state an equivalent form of the original problem and introduce a system of matrix linear differential equations with jumps and algebraic constraints which is involved in the derivation of the feedback matrices of the equilibrium strategies. Then, in Section 2.3, we provide necessary and sufficient conditions which guarantee the existence of a piecewise constant Nash equilibrium strategy. An algorithm implementing these developments is given in Section 3. The efficiency of the proposed algorithm is demonstrated by two numerical examples illustrating the behavior of the optimal trajectories generated by the equilibrium strategy. Section 4 is dedicated to conclusions.

#### **2. Problem Formulation**

#### *2.1. Model Description and Problem Setting*

Consider the controlled system having the state space representation described by

$$\begin{aligned} dx(t) &= [Ax(t) + \sum\_{k=1}^{L} B\_k u\_k(t)]dt + [\mathbb{C}x(t) + \sum\_{k=1}^{L} D\_k u\_k(t)]dw(t), \\ x(t\_0) &= x\_0, \ t \in [t\_0, t\_f], \end{aligned} \tag{1}$$

where $x(t) \in \mathbb{R}^n$ is the state vector, $L$ is a positive integer, $u_k(t) \in \mathbb{R}^{m_k}$, $k = 1, \ldots, L$ are control parameters, and $\{w(t)\}_{t\geq 0}$ is a 1-dimensional standard Wiener process defined on a probability space $(\Omega, \mathcal{F}, \mathcal{P})$.

In the controlled system there are $L$ players ($k = 1, 2, \ldots, L$) who change their behavior through their control functions $u_k(\cdot)$, $k = 1, \ldots, L$. The matrices of the system $A, C \in \mathbb{R}^{n\times n}$ and the matrices of the players $B_k, D_k \in \mathbb{R}^{n\times m_k}$, $k = 1, \ldots, L$, are known. In the field of game theory, the controls $u_k(\cdot)$ are called admissible strategies (or policies) for the players. The different classes of admissible strategies can be defined in various ways, depending on the available information.

Each player aims to minimize its own cost function (performance criterion), and for *k* = 1, . . . , *L* we have

$$J\_k(t\_0, \mathbf{x}\_0; u\_1, \dots, u\_L) = \mathbb{E}\left[\mathbf{x}\_u^T(t\_f)\mathbf{G}\_k\mathbf{x}\_u(t\_f) + \int\_{t\_0}^{t\_f} (\mathbf{x}\_u^T(t)M\_k\mathbf{x}\_u(t) + \sum\_{j=1}^L u\_j^T(t)R\_{kj}u\_j(t))dt\right]. \tag{2}$$

We make the following assumption regarding the weight matrices in (2): **H.** $G_k \geq 0$, $M_k \geq 0$, $R_{kk} > 0$, and $R_{kl} \geq 0$, with $k, l = 1, \ldots, L$ and $l \neq k$. Here we generalize Definition 2.1 given in [23].

**Definition 1.** *The L-tuple of admissible strategies* (*u*˜1(·), *u*˜2(·), . . . , *u*˜*L*(·)) *is said to achieve a Nash equilibrium for the differential game described by the controlled system (1), the cost function (2), and the class of the admissible strategies* U = U<sup>1</sup> × U<sup>2</sup> × · · · × U*L, if for all uk* (·) ∈ U*<sup>k</sup> , k* = 1, . . . , *L we have*

$$J_k(t_0, x_0; \tilde{u}_1, \tilde{u}_2, \ldots, \tilde{u}_L) \le J_k(t_0, x_0; \tilde{u}_1, \tilde{u}_2, \ldots, \tilde{u}_{k-1}, u_k, \tilde{u}_{k+1}, \ldots, \tilde{u}_L). \tag{3}$$

In this paper we consider a special class of closed-loop admissible strategies in which the states *x*(*t*) of the dynamical system are available for measurement at the discrete-times 0 ≤ *t*<sup>0</sup> < *t*<sup>1</sup> < . . . < *tN*−<sup>1</sup> < *t<sup>N</sup>* = *t<sup>f</sup>* , and the set of admissible strategies consists of piecewise constant stochastic processes of the form

$$u_k(t) = F_k(j)x(t_j), \ t_j \le t < t_{j+1}, \ j = 0, 1, \ldots, N-1,\tag{4}$$

where $F_k(j) \in \mathbb{R}^{m_k\times n}$ are arbitrary matrices.

Our aim is to investigate the problem of designing a Nash equilibrium strategy in the class of piecewise constant admissible strategies of type (4) (the closed-loop admissible strategies) for an LQ differential game described by a dynamical system of type (1) under the performance criteria (2). Moreover, we also present a method for the numerical computation of the feedback gains of the equilibrium strategy.

We denote by $\tilde{\mathcal{U}}^{pc} = \tilde{\mathcal{U}}_1^{pc} \times \tilde{\mathcal{U}}_2^{pc} \times \ldots \times \tilde{\mathcal{U}}_L^{pc}$ the set of the piecewise constant admissible strategies of type (4).
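A piecewise constant strategy of type (4) is straightforward to simulate: the control is recomputed from the sampled state at each $t_j$ and held constant on $[t_j, t_{j+1})$. The sketch below propagates a scalar instance of system (1) with one player by Euler–Maruyama; all numerical values (system matrices, gain, horizon, step counts) are illustrative choices of ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# illustrative scalar data (not from the paper): one player, n = m1 = 1
A, B1, C, D1 = -1.0, 1.0, 0.2, 0.1
F1 = -0.5                             # a constant feedback gain F_1(j)
t0, tf, N, sub = 0.0, 1.0, 10, 100    # N sampling intervals, Euler-Maruyama substeps
h = (tf - t0) / (N * sub)

x = 1.0
for j in range(N):
    u = F1 * x                        # u_1(t) = F_1(j) x(t_j), held on [t_j, t_{j+1})
    for _ in range(sub):
        dw = rng.normal(0.0, np.sqrt(h))
        x += (A * x + B1 * u) * h + (C * x + D1 * u) * dw
print(x)
```

Note that the multiplicative noise acts on both the state and the held control, which is exactly the feature that prevents a direct reduction to a discrete-time problem.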

#### *2.2. The Equivalent Problem*

Define $v_k : [t_0, t_f] \to \mathbb{R}^{m_k}$ by $v_k(t) = u_k(j)$, $t_j \le t < t_{j+1}$, $j = 0, 1, \ldots, N-1$, where $u_k(j)$ are arbitrary $m_k$-dimensional random vectors with finite second moments. If $x(t)$ is the solution of system (1) determined by the piecewise constant inputs $v_k(\cdot)$, we set $\xi(t) = (x^T(t) \ v_1^T(t) \ \ldots \ v_L^T(t))^T \in \mathbb{R}^{n+m}$, $m = \sum_{k=1}^L m_k$.

Direct calculations show that $\xi(t)$ is the solution of the initial value problem (IVP) associated with a linear stochastic system with finite jumps, often called a system controlled by impulses:

$$d\xi(t) = \mathbf{A}\xi(t)dt + \mathbf{C}\xi(t)dw(t), \ t\_j \le t < t\_{j+1} \tag{5a}$$

$$\xi(t\_j^+) = \mathbf{A}\_d \xi(t\_j) + \sum\_{k=1}^{L} \mathbf{B}\_{dk} u\_k(j), \quad j = 0, 1, \dots, N - 1,\tag{5b}$$

$$\xi(t\_0) = (x\_0^T \ \mathbf{0}^T \dots \mathbf{0}^T)^T,\tag{5c}$$

under the notations:

$$\begin{aligned} \mathbf{A} &= \begin{pmatrix} A & B\_1 & B\_2 & \cdots & B\_L \\ 0\_{mn} & 0\_{mm\_1} & 0\_{mm\_2} & \cdots & 0\_{mm\_L} \end{pmatrix}, \quad & \mathbf{C} &= \begin{pmatrix} C & D\_1 & D\_2 & \cdots & D\_L \\ 0\_{mn} & 0\_{mm\_1} & 0\_{mm\_2} & \cdots & 0\_{mm\_L} \end{pmatrix}, \\ \mathbf{A}\_d &= \begin{pmatrix} I\_n & 0\_{nm\_1} & 0\_{nm\_2} & \cdots & 0\_{nm\_L} \\ 0\_{mn} & 0\_{mm\_1} & 0\_{mm\_2} & \cdots & 0\_{mm\_L} \end{pmatrix}, \quad & \mathbf{B}\_{dk} &= \begin{pmatrix} 0\_{nm\_k}^T & 0\_{m\_1 m\_k}^T & \cdots & 0\_{m\_{k-1} m\_k}^T & I\_{m\_k} & 0\_{m\_{k+1} m\_k}^T & \cdots & 0\_{m\_L m\_k}^T \end{pmatrix}^T, \end{aligned} \tag{6}$$

where $0\_{pq}$ denotes the zero matrix of size $p \times q$.
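For concreteness, the block matrices in (6) can be assembled mechanically from the original coefficients. The following is a minimal numpy sketch; the helper name `build_augmented` is ours, not the paper's:

```python
import numpy as np

def build_augmented(A, B_list, C, D_list):
    """Assemble the augmented matrices of (6) for xi = (x, v_1, ..., v_L).
    Shapes: A, C are n x n; B_k, D_k are n x m_k."""
    n = A.shape[0]
    ms = [B.shape[1] for B in B_list]
    m = sum(ms)
    bA = np.zeros((n + m, n + m))
    bA[:n, :n] = A
    bA[:n, n:] = np.hstack(B_list)          # top block row (A  B_1 ... B_L)
    bC = np.zeros((n + m, n + m))
    bC[:n, :n] = C
    bC[:n, n:] = np.hstack(D_list)          # top block row (C  D_1 ... D_L)
    Ad = np.zeros((n + m, n + m))
    Ad[:n, :n] = np.eye(n)                  # state is kept, inputs reset at jumps
    Bd = []
    off = n
    for mk in ms:
        Bdk = np.zeros((n + m, mk))
        Bdk[off:off + mk, :] = np.eye(mk)   # identity in the k-th input block
        Bd.append(Bdk)
        off += mk
    return bA, bC, Ad, Bd
```

The jump equation (5b) then simply copies the state and loads the new inputs $u\_k(j)$ into the input components of $\xi$.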

The performance criterion (2) becomes

$$\begin{aligned} J\_k(t\_0, \xi\_0; \mathbf{u}\_1, \mathbf{u}\_2, \dots, \mathbf{u}\_L) &= \mathbb{E}\Big[\xi^T(t\_f)\mathbf{G}\_k\xi(t\_f) + \int\_{t\_0}^{t\_f} \xi^T(t)\mathbf{M}\_k\xi(t)dt\Big] \\ &\quad + \sum\_{j=0}^{N-1} \mathbb{E}\Big[\sum\_{i=1}^L u\_i^T(j)\mathbf{R}\_{ki}(j)u\_i(j)\Big], \end{aligned} \tag{7}$$

for all $\mathbf{u}\_k = (u\_k(0), \dots, u\_k(N-1))$, where $u\_k(j)$ are $m\_k$-dimensional, $\mathcal{F}\_{t\_j}$-measurable random vectors such that

$$\mathbb{E}[|u\_k(j)|^2] < \infty.$$

Throughout the paper, $\mathcal{F}\_t$ denotes the $\sigma$-algebra generated by the random variables $w(s)$, $0 \le s \le t$. The matrices in (7) can be written as

$$\begin{aligned} \mathbf{G}\_k &= \mathrm{diag}(G\_k, \ 0) \in \mathbb{R}^{(n+m)\times(n+m)} \\ \mathbf{M}\_k &= \mathrm{diag}(M\_k, \ 0) \in \mathbb{R}^{(n+m)\times(n+m)} \\ \mathbf{R}\_{ki}(j) &= (t\_{j+1} - t\_j) R\_{ki}. \end{aligned} \tag{8}$$

Let $\mathcal{U}^{sd} = \mathcal{U}^{sd}\_1 \times \mathcal{U}^{sd}\_2 \times \dots \times \mathcal{U}^{sd}\_L$ be the set of inputs in sampled-data linear state feedback form, i.e., $\mathbf{u} = (\mathbf{u}\_1, \mathbf{u}\_2, \dots, \mathbf{u}\_L) \in \mathcal{U}^{sd}$ if and only if $\mathbf{u}\_k = (u\_k(0), \dots, u\_k(N-1))$ with

$$u\_k(j) = \mathbf{F}\_k(j)\xi(t\_j), \ 0 \le j \le N-1,\tag{9}$$

where $\mathbf{F}\_k(j) \in \mathbb{R}^{m\_k \times (n+m)}$ are arbitrary matrices and $\xi(t\_j)$ are the values at the time instants $t\_j$ of the solution of the following IVP:

$$d\xi(t) = \mathbf{A}\xi(t)dt + \mathbf{C}\xi(t)dw(t), \ t\_j < t \le t\_{j+1} \tag{10a}$$

$$\xi(t\_j^+) = \Big(\mathbf{A}\_d + \sum\_{k=1}^L \mathbf{B}\_{dk}\mathbf{F}\_k(j)\Big)\xi(t\_j), \quad j = 0, 1, \dots, N - 1,\tag{10b}$$

$$\xi(t\_0) = \xi\_0 \in \mathbb{R}^{n+m}.\tag{10c}$$

Let Φ*<sup>k</sup>* be a matrix valued sequence of the form

$$\Phi\_k = (\mathbf{F}\_k(0), \mathbf{F}\_k(1), \dots, \mathbf{F}\_k(N-1)), \tag{11}$$

where $\mathbf{F}\_k(i) \in \mathbb{R}^{m\_k \times (n+m)}$ are arbitrary matrices. We consider the set

$$\mathcal{U}\_{\Phi}^{\mathrm{sd}} = \{ (\Phi\_1, \Phi\_2, \dots, \Phi\_L) : \Phi\_k \text{ are arbitrary sequences defined as in (11)} \}. \tag{12}$$

**Remark 1.** *By (9) and (10), there is a one-to-one correspondence between the sets $\mathcal{U}^{sd}$ and $\mathcal{U}^{sd}\_\Phi$. Each $\mathbf{u}\_k$ from $\mathcal{U}^{sd}\_k$ can be identified with the sequence $\Phi\_k = (\mathbf{F}\_k(0), \mathbf{F}\_k(1), \dots, \mathbf{F}\_k(N-1))$ of its feedback matrices.*

Based on this remark we can rewrite the performance criterion (7) as:

$$\begin{aligned} \mathcal{J}\_k(t\_0, \xi\_0; \Phi\_1, \Phi\_2, \dots, \Phi\_L) &= \mathbb{E}\Big[\xi^T(t\_f)\mathbf{G}\_k\xi(t\_f) + \int\_{t\_0}^{t\_f} \xi^T(t)\mathbf{M}\_k\xi(t)dt\Big] \\ &\quad + \sum\_{j=0}^{N-1} \mathbb{E}\Big[\sum\_{i=1}^L \xi^T(t\_j)\mathbf{F}\_i^T(j)\mathbf{R}\_{ki}(j)\mathbf{F}\_i(j)\xi(t\_j)\Big], \end{aligned} \tag{13}$$

for all $(\Phi\_1, \Phi\_2, \dots, \Phi\_L) \in \mathcal{U}^{sd}\_\Phi$.

Similarly to Definition 1, one can define a Nash equilibrium strategy for the LQ differential game described by the controlled system (5), the performance criterion (13) and the class of admissible strategies $\mathcal{U}^{sd}\_\Phi$ described by (12).

**Definition 2.** *The L-tuple of admissible strategies $(\widetilde{\Phi}\_1, \widetilde{\Phi}\_2, \dots, \widetilde{\Phi}\_L)$ is said to achieve a Nash equilibrium for the differential game described by the controlled system (5), the cost function (13), and the class of admissible strategies $\mathcal{U}^{sd}\_\Phi$, if for all $(\Phi\_1, \Phi\_2, \dots, \Phi\_L) \in \mathcal{U}^{sd}\_\Phi$ we have*

$$\mathcal{J}\_k(t\_0, \xi\_0; \widetilde{\Phi}\_1, \dots, \widetilde{\Phi}\_L) \le \mathcal{J}\_k(t\_0, \xi\_0; \widetilde{\Phi}\_1, \dots, \widetilde{\Phi}\_{k-1}, \Phi\_k, \widetilde{\Phi}\_{k+1}, \dots, \widetilde{\Phi}\_L), \quad 1 \le k \le L. \tag{14}$$

**Remark 2.** *Consider feedback matrices in (9) of the special form*

$$\mathbf{F}\_k(j) = (F\_k(j) \quad 0\_{m\_k m}), \tag{15}$$

*where $F\_k(j) \in \mathbb{R}^{m\_k \times n}$. Hence, some admissible strategies (9) are of type (4). Consequently, if the feedback matrices of the Nash equilibrium strategy $(\widetilde{\Phi}\_1, \widetilde{\Phi}\_2, \dots, \widetilde{\Phi}\_L)$ have the structure given in (15), then the strategy of type (9) with these feedback matrices provides the Nash equilibrium strategy for the LQ differential game described by (1), (2) and (4).*

To obtain explicit formulae for the feedback matrices of a Nash equilibrium strategy of type (9) (or, equivalently (11), (12)), we use the following system of matrix linear differential equations (MLDEs) with jumps and algebraic constraints:

$$-\dot{\mathbf{P}}\_k(t) = \mathbf{A}^T\mathbf{P}\_k(t) + \mathbf{P}\_k(t)\mathbf{A} + \mathbf{C}^T\mathbf{P}\_k(t)\mathbf{C} + \mathbf{M}\_k, \quad t\_j \le t < t\_{j+1} \tag{16a}$$

$$\begin{aligned} \mathbf{P}\_k(t\_j^-) &= \mathbf{A}\_{[-k]}^T(j)\mathbf{P}\_k(t\_j)\mathbf{A}\_{[-k]}(j) - \mathbf{A}\_{[-k]}^T(j)\mathbf{P}\_k(t\_j)\mathbf{B}\_{dk} \\ &\quad \times \big(\mathbf{R}\_{kk}(j) + \mathbf{B}\_{dk}^T\mathbf{P}\_k(t\_j)\mathbf{B}\_{dk}\big)^\dagger \mathbf{B}\_{dk}^T\mathbf{P}\_k(t\_j)\mathbf{A}\_{[-k]}(j) + \mathbf{M}\_{[-k]}(j) \end{aligned} \tag{16b}$$

$$\begin{aligned} \sum\_{i=1}^{k-1} \mathbf{B}\_{dk}^T\mathbf{P}\_k(t\_j)\mathbf{B}\_{di}\mathbf{F}\_i(j) &+ \big(\mathbf{R}\_{kk}(j) + \mathbf{B}\_{dk}^T\mathbf{P}\_k(t\_j)\mathbf{B}\_{dk}\big)\mathbf{F}\_k(j) \\ &+ \sum\_{i=k+1}^{L} \mathbf{B}\_{dk}^T\mathbf{P}\_k(t\_j)\mathbf{B}\_{di}\mathbf{F}\_i(j) = -\mathbf{B}\_{dk}^T\mathbf{P}\_k(t\_j)\mathbf{A}\_d \end{aligned} \tag{16c}$$

$$\mathbf{P}\_k(t\_N^-) = \mathbf{G}\_k, \quad k = 1, \dots, L, \tag{16d}$$

where we have denoted

$$\mathbf{A}\_{[-k]}(j) = \mathbf{A}\_d + \sum\_{i=1, i \neq k}^{L} \mathbf{B}\_{di} \mathbf{F}\_i(j) \tag{17}$$

and

$$\mathbf{M}\_{[-k]}(j) = \sum\_{i=1, i \neq k}^{L} \mathbf{F}\_i^T(j) \mathbf{R}\_{ki}(j) \mathbf{F}\_i(j), \tag{18}$$

while the superscript † denotes the generalized inverse of a matrix.


**Remark 3.** *A solution of the terminal value problem (TVP) with algebraic constraints (16) is a 2L-tuple of the form $(P\_1(\cdot), P\_2(\cdot), \dots, P\_L(\cdot); \mathbf{F}\_1(\cdot), \mathbf{F}\_2(\cdot), \dots, \mathbf{F}\_L(\cdot))$ where, for each $1 \le k \le L$, $P\_k(\cdot)$ is a solution of the TVP (16a), (16b), (16d) and $\mathbf{F}\_k(j) \in \mathbb{R}^{m\_k \times (n+m)}$, $0 \le j \le N-1$. On the interval $[t\_{N-1}, t\_N]$, $P\_k(\cdot)$ is the solution of the TVP described by the perturbed Lyapunov-type equation (16a) and the terminal value given in (16d). On each interval $[t\_{j-1}, t\_j)$, $j \le N-1$, the terminal value $P\_k(t\_j^-)$ of $P\_k(\cdot)$ is computed via (16b) together with (17) and (18), provided that $(\mathbf{F}\_1(j), \mathbf{F}\_2(j), \dots, \mathbf{F}\_L(j))$ is obtained as a solution of (16c). Thus, the TVPs solved by $P\_k(\cdot)$, $1 \le k \le L$, are interconnected via (16c).*

To facilitate the statement of the main result of this section, we rewrite (16c) in a compact form as:

$$\Pi\_d(P\_1(t\_j), \dots, P\_L(t\_j), j)\,\mathbb{F}(j) = -\Gamma\_d(P\_1(t\_j), \dots, P\_L(t\_j)), \tag{19}$$

where $\mathbb{F}(j) = (\mathbf{F}\_1^T(j)\ \mathbf{F}\_2^T(j)\ \dots\ \mathbf{F}\_L^T(j))^T$ and the matrices $\Pi\_d(P\_1(t\_j), \dots, P\_L(t\_j), j)$ and $\Gamma\_d(P\_1(t\_j), \dots, P\_L(t\_j))$ are obtained from the block components of (16c).

#### *2.3. Sampled Data Nash Equilibrium Strategy*

First we derive a necessary and sufficient condition for the existence of an equilibrium strategy of type (9) for the LQ differential game given by the controlled system (5), the performance criterion (7) and the set of admissible strategies $\mathcal{U}^{sd}$. To this end, we adapt the argument used in the proof of ([22], Theorem 4).

We prove:

#### **Theorem 1.** *Under the assumption H. the following are equivalent:*

*(i) the LQ differential game defined by the dynamical system controlled by impulses (5), the performance criterion (7) and the class of admissible strategies of type (9) has a Nash equilibrium strategy*

$$u\_k(j) = \widetilde{\mathbf{F}}\_k(j)\xi(t\_j),\ 0 \le j \le N-1,\ 1 \le k \le L. \tag{20}$$

*(ii) the TVP with constraints (16) has a solution $(\widetilde{P}\_1(\cdot), \widetilde{P}\_2(\cdot), \dots, \widetilde{P}\_L(\cdot); \widetilde{\mathbf{F}}\_1(\cdot), \widetilde{\mathbf{F}}\_2(\cdot), \dots, \widetilde{\mathbf{F}}\_L(\cdot))$ defined on the whole interval $[t\_0, t\_f]$ and satisfying the condition below for $0 \le j \le N-1$:*

$$\Pi\_d(\widetilde{P}\_1(t\_j), \dots, \widetilde{P}\_L(t\_j), j)\,\Pi\_d(\widetilde{P}\_1(t\_j), \dots, \widetilde{P}\_L(t\_j), j)^\dagger\,\Gamma\_d(\widetilde{P}\_1(t\_j), \dots, \widetilde{P}\_L(t\_j)) = \Gamma\_d(\widetilde{P}\_1(t\_j), \dots, \widetilde{P}\_L(t\_j)). \tag{21}$$

*If condition (21) holds, then the feedback matrices of a Nash equilibrium strategy of type (9) are the matrix components of the solution of the TVP (16) and are given by*

$$(\widetilde{\mathbf{F}}\_1^T(j)\ \widetilde{\mathbf{F}}\_2^T(j)\ \dots\ \widetilde{\mathbf{F}}\_L^T(j))^T = -\Pi\_d(\widetilde{P}\_1(t\_j), \dots, \widetilde{P}\_L(t\_j), j)^\dagger\,\Gamma\_d(\widetilde{P}\_1(t\_j), \dots, \widetilde{P}\_L(t\_j)), \quad 0 \le j \le N-1. \tag{22}$$

*The minimal value of the cost of the k-th player is $\xi\_0^T \widetilde{P}\_k(t\_0^-)\xi\_0$.*
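Condition (21) and formula (22) translate directly into a numerical recipe: the linear system (19) is solvable exactly when $\Gamma\_d$ lies in the range of $\Pi\_d$, and a solution is then produced by the Moore-Penrose inverse. A minimal numpy sketch (the function name `nash_gains` is ours, not from the paper):

```python
import numpy as np

def nash_gains(Pi, Gamma, tol=1e-9):
    """Solve Pi * F = -Gamma as in (19)/(22) via the Moore-Penrose inverse.
    Returns the stacked gain matrix F, or None if condition (21) fails."""
    Pi_dag = np.linalg.pinv(Pi)
    # Condition (21): Pi Pi^+ Gamma = Gamma, i.e. Gamma is in the range of Pi.
    if not np.allclose(Pi @ Pi_dag @ Gamma, Gamma, atol=tol):
        return None
    return -Pi_dag @ Gamma
```

When $\Pi\_d$ is invertible, `pinv` coincides with the ordinary inverse and the solution is unique, which matches Remark 4 below.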

**Proof.** From (14) and Remarks 1 and 2, one can see that a strategy of type (9) defines a Nash equilibrium strategy for the linear differential game described by the controlled system (5) and the performance criterion (7) (or equivalently (13)) if and only if, for each $1 \le k \le L$, the optimal control problem described by the controlled system

$$d\xi(t) = \mathbf{A}\xi(t)dt + \mathbf{C}\xi(t)dw(t), \ t\_j < t \le t\_{j+1} \tag{23a}$$

$$\xi(t\_j^+) = \widetilde{\mathbf{A}}\_{[-k]}(j)\xi(t\_j) + \mathbf{B}\_{dk}u\_k(j), \ j = 0, 1, \dots, N - 1,\tag{23b}$$

$$\xi(t\_0) = \xi\_0 \in \mathbb{R}^{n+m}, \tag{23c}$$

and the quadratic functional

$$\begin{aligned} \mathcal{I}\_{[-k]}(t\_0, \xi\_0; \mathbf{u}\_k) &= \mathbb{E}\Big[\xi^T(t\_f)\mathbf{G}\_k\xi(t\_f) + \int\_{t\_0}^{t\_f} \xi^T(t)\mathbf{M}\_k\xi(t)dt\Big] \\ &\quad + \sum\_{j=0}^{N-1} \mathbb{E}\big[\xi^T(t\_j)\widetilde{\mathbf{M}}\_{[-k]}(j)\xi(t\_j) + u\_k^T(j)\mathbf{R}\_{kk}(j)u\_k(j)\big], \end{aligned} \tag{24}$$

has an optimal control in state feedback form. The controlled system (23) and the performance criterion (24) are obtained by substituting $\tilde{u}\_\ell(j) = \widetilde{\mathbf{F}}\_\ell(j)\xi(t\_j)$, $1 \le k, \ell \le L$, $\ell \neq k$, in (5) and (7), respectively. $\widetilde{\mathbf{A}}\_{[-k]}$ and $\widetilde{\mathbf{M}}\_{[-k]}$ are computed as in (17) and (18), respectively, but with $\mathbf{F}\_i(j)$ replaced by $\widetilde{\mathbf{F}}\_i(j)$.

To obtain necessary and sufficient conditions for the existence of the optimal control in a linear state feedback form we employ the results proved in [20]. First, notice that in the case of the optimal control problem (23)–(24), the TVP (16a), (16b), (16d) plays the role of the TVP (19)–(23) from [20].

Using Theorem 3 in [20] for the optimal control problem described by (23) and (24), we deduce that the existence of a Nash equilibrium strategy of the form (9) for the differential game described by the controlled system (5) and the performance criterion (7) (or its equivalent form (13)) is equivalent to the solvability of the TVP described by (16). The feedback matrix $\widetilde{\mathbf{F}}\_k(j)$ of the optimal control solves the equation:

$$(\mathbf{R}\_{kk}(j) + \mathbf{B}\_{dk}^T \tilde{P}\_k(t\_j) \mathbf{B}\_{dk}) \tilde{\mathbf{F}}\_k(j) = -\mathbf{B}\_{dk}^T \tilde{P}\_k(t\_j) \tilde{\mathbf{A}}\_{[-k]}(j). \tag{25}$$

Substituting the formulae of **A**˜ [−*k*] in (25) we deduce that the feedback matrices of the Nash equilibrium strategy solve an equation of the form (16c) written for **F**˜ *k* (*j*) instead of **F***k* (*j*). This equation may be written in the compact form:

$$\Pi\_d(\widetilde{P}\_1(t\_j), \dots, \widetilde{P}\_L(t\_j), j)\,\widetilde{\mathbb{F}}(j) = -\Gamma\_d(\widetilde{P}\_1(t\_j), \dots, \widetilde{P}\_L(t\_j)),\tag{26}$$

where $\widetilde{\mathbb{F}}(j) = (\widetilde{\mathbf{F}}\_1^T(j)\ \widetilde{\mathbf{F}}\_2^T(j)\ \dots\ \widetilde{\mathbf{F}}\_L^T(j))^T$.

By Lemma 2.7 in [21], Equation (26) has a solution if and only if condition (21) holds. A solution of Equation (26) is given by (22). The minimal value of the cost for the $k$-th player is obtained from Theorem 1 in [20] applied to the optimal control problem described by (23) and (24). Thus the proof is complete.

**Remark 4.** *When the matrices $\Pi\_d(\widetilde{P}\_1(t\_j), \dots, \widetilde{P}\_L(t\_j), j)$ are invertible, the conditions (21) are satisfied automatically. In this case, the feedback matrices $\widetilde{\mathbf{F}}\_k(j)$ of a Nash equilibrium strategy of type (20) are obtained as the unique solution of Equation (26), given by (22), because the generalized inverse of each matrix $\Pi\_d(\widetilde{P}\_1(t\_j), \dots, \widetilde{P}\_L(t\_j), j)$, $0 \le j \le N-1$, is the usual inverse.*

Combining (6) and (16c), we deduce that the matrices $\widetilde{\mathbf{F}}\_k(j)$ provided by (22) have the structure $\widetilde{\mathbf{F}}\_k(j) = (\widetilde{F}\_k(j) \quad 0\_{m\_k m})$. Hence, the Nash equilibrium strategy of the differential game described by the dynamical system (5), the performance criteria (7) and the admissible strategies of type (9) has the form

$$u\_k(j) = (\widetilde{F}\_k(j) \quad 0\_{m\_k m})\,\xi(t\_j) = \widetilde{F}\_k(j)x(t\_j), \ 0 \le j \le N - 1.$$

We can now state the Nash equilibrium strategy of the original differential game.

**Theorem 2.** *Assume that the conditions* **H.** *and (ii) in Theorem 1 are satisfied. Then, a Nash equilibrium strategy in state feedback form with sampled measurements of type (4) for the differential game described by the dynamical system (1) and the performance criteria (2) is given by:*

$$u\_k(t) = \widetilde{F}\_k(j)x(t\_j), \ t\_j \le t < t\_{j+1}, \ 0 \le j \le N-1, \ 1 \le k \le L. \tag{27}$$

*The feedback matrices $\widetilde{F}\_k(j)$ in (27) are given by the first $n$ columns of the matrices $\widetilde{\mathbf{F}}\_k(j)$, which are obtained as solutions of Equation (26). In (27), $x(t\_j)$ are the values measured at the times $t\_j$, $0 \le j \le N-1$, of the solution of the closed-loop system obtained when (27) is plugged into (1). The minimal value of the cost (2) associated with the k-th player is given by*

$$(x\_0^T \quad 0\_{1m})\,\widetilde{P}\_k(t\_0^-)\,(x\_0^T \quad 0\_{1m})^T.$$

In the next section, we present an algorithm which allows the numerical computation of the matrices *F*˜ *k* (*j*) arising in (27) for an LQ differential game with two players.

#### **3. Numerical Computations and the Algorithm**

In what follows we assume that $L = 2$ and $t\_{j+1} - t\_j = h > 0$, $0 \le j \le N-1$. We propose a numerical approach to compute the optimal strategies

$$u\_k(j) = \widetilde{F}\_k(j)x(t\_j), \ j = 0, 1, \dots, N - 1. \tag{28}$$

The algorithm consists of two steps:

• We first compute the feedback matrices $\widetilde{F}\_k(j)$, $j = 0, 1, \dots, N-1$, $k = 1, 2$, of the Nash equilibrium strategy, based on the solutions $\widetilde{P}\_1(\cdot)$, $\widetilde{P}\_2(\cdot)$ of:

$$-\dot{P}\_k(t) = \mathbf{A}^T P\_k(t) + P\_k(t)\mathbf{A} + \mathbf{C}^T P\_k(t)\mathbf{C} + \mathbf{M}\_k, \quad t\_j \le t < t\_{j+1}. \tag{29}$$

**STEP 1.A.** We take $P\_k(t\_N^-) = \mathbf{G}\_k$, $k = 1, 2$, and compute

$$\widetilde{P}\_k(t\_{N-1}) = e^{\mathcal{L}^\*h}[\mathbf{G}\_k] + \mathbb{M}\_k, \quad k = 1, 2, \text{ where}\tag{30}$$

$$\mathbb{M}\_k = h\mathbf{M}\_k + \frac{h^2}{2}\mathcal{L}^\*[\mathbf{M}\_k] + \frac{h^3}{6}(\mathcal{L}^\*)^2[\mathbf{M}\_k] + \dots + \frac{h^p}{p!}(\mathcal{L}^\*)^{p-1}[\mathbf{M}\_k] \tag{31}$$

$$e^{\mathcal{L}^\*h}[\mathbf{X}] \simeq \sum\_{\ell=0}^q \frac{h^\ell}{\ell!}(\mathcal{L}^\*)^\ell[\mathbf{X}] = \mathbf{X} + h\mathcal{L}^\*[\mathbf{X}] + \frac{h^2}{2}(\mathcal{L}^\*)^2[\mathbf{X}] + \dots + \frac{h^q}{q!}(\mathcal{L}^\*)^q[\mathbf{X}],$$

with $p \ge 1$ and $q \ge 1$ sufficiently large. For the operator $\mathcal{L}^\*[X]$ we have

$$\mathcal{L}^\*[X] = \mathbf{A}^T X + X\mathbf{A} + \mathbf{C}^T X \mathbf{C} \tag{32}$$

for all $X = X^T \in \mathbb{R}^{(n+m\_1+m\_2)\times(n+m\_1+m\_2)}$. The iterates $(\mathcal{L}^\*)^\ell[\mathbf{X}]$ are computed from:

$$\mathcal{L}^{\ast \ell}[\mathbf{X}] = \mathbf{A}^{T} \mathcal{L}^{\ast (\ell - 1)}[\mathbf{X}] + \mathcal{L}^{\ast (\ell - 1)}[\mathbf{X}] \mathbf{A} + \mathbf{C}^{T} \mathcal{L}^{\ast (\ell - 1)}[\mathbf{X}] \mathbf{C} \tag{33}$$

for $\ell \ge 1$, with $(\mathcal{L}^\*)^0[\mathbf{X}] = \mathbf{X}$, where $\mathbf{X} = P\_k(t\_{j+1}^-)$ or $\mathbf{X} = \mathbf{M}\_k$, respectively. We compute the feedback matrices $\widetilde{F}\_k(N-1) \in \mathbb{R}^{m\_k \times n}$ as solutions of the linear equation

$$\begin{pmatrix} R\_{11} + h^{-1}\widetilde{P}\_{1,11}(t\_{N-1}) & h^{-1}\widetilde{P}\_{1,12}(t\_{N-1}) \\ h^{-1}\widetilde{P}\_{2,12}^T(t\_{N-1}) & R\_{22} + h^{-1}\widetilde{P}\_{2,22}(t\_{N-1}) \end{pmatrix} \begin{pmatrix} \widetilde{F}\_1(N-1) \\ \widetilde{F}\_2(N-1) \end{pmatrix} = -\begin{pmatrix} h^{-1}\widetilde{P}\_{1,01}^T(t\_{N-1}) \\ h^{-1}\widetilde{P}\_{2,02}^T(t\_{N-1}) \end{pmatrix} \tag{34}$$
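The series evaluations in (30)-(33) are straightforward to implement: each term is obtained from the previous one by one application of $\mathcal{L}^\*$ and a scalar factor. A minimal numpy sketch (the names `lyap_star`, `exp_lyap` and `Mk_series` are ours; the truncation orders `p`, `q` are user choices):

```python
import numpy as np

def lyap_star(bA, bC, X):
    """L*[X] = A^T X + X A + C^T X C, cf. (32)."""
    return bA.T @ X + X @ bA + bC.T @ X @ bC

def exp_lyap(bA, bC, X, h, q=10):
    """Truncated Taylor series for e^{L* h}[X], cf. the expansion after (31)."""
    term, out = X.copy(), X.copy()
    for ell in range(1, q + 1):
        term = lyap_star(bA, bC, term) * h / ell   # h^ell/ell! (L*)^ell [X]
        out = out + term
    return out

def Mk_series(bA, bC, Mk, h, p=10):
    """Series (31): h M_k + h^2/2 L*[M_k] + ... + h^p/p! (L*)^{p-1}[M_k]."""
    term = Mk.copy() * h
    out = term.copy()
    for ell in range(2, p + 1):
        term = lyap_star(bA, bC, term) * h / ell
        out = out + term
    return out
```

With these helpers, (30) reads `P_prev = exp_lyap(bA, bC, Gk, h) + Mk_series(bA, bC, Mk, h)`, and the linear system (34) can then be solved with `np.linalg.solve` on the indicated blocks of $\widetilde{P}\_1$ and $\widetilde{P}\_2$.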

**STEP 1.B.** We set

$$\widetilde{\mathbf{F}}\_k(N-1) = (\widetilde{F}\_k(N-1) \quad 0 \quad 0) \in \mathbb{R}^{m\_k \times (n+m\_1+m\_2)}, \ k = 1, 2.$$

Next, we compute $\widetilde{P}\_k(t\_{N-1}^-)$, $k = 1, 2$:

$$\begin{aligned} \widetilde{P}\_1(t\_{N-1}^-) &= (\mathbf{A}\_d + \mathbf{B}\_{d2}\widetilde{\mathbf{F}}\_2(N-1))^T \widetilde{P}\_1(t\_{N-1}) (\mathbf{A}\_d + \mathbf{B}\_{d2}\widetilde{\mathbf{F}}\_2(N-1)) \\ &\quad - (\mathbf{A}\_d + \mathbf{B}\_{d2}\widetilde{\mathbf{F}}\_2(N-1))^T \widetilde{P}\_1(t\_{N-1}) \mathbf{B}\_{d1} \big(hR\_{11} + \mathbf{B}\_{d1}^T \widetilde{P}\_1(t\_{N-1}) \mathbf{B}\_{d1}\big)^{-1} \\ &\quad \cdot \mathbf{B}\_{d1}^T \widetilde{P}\_1(t\_{N-1}) (\mathbf{A}\_d + \mathbf{B}\_{d2}\widetilde{\mathbf{F}}\_2(N-1)) + h \widetilde{\mathbf{F}}\_2^T(N-1) R\_{12} \widetilde{\mathbf{F}}\_2(N-1) \end{aligned} \tag{35}$$

and

$$
\begin{split}
\tilde{P}\_{2}(t\_{N-1}^{-}) &= (\mathbf{A}\_{d} + \mathbf{B}\_{d1}\tilde{\mathbf{F}}\_{1}(N-1))^{T}\tilde{P}\_{2}(t\_{N-1})(\mathbf{A}\_{d} + \mathbf{B}\_{d1}\tilde{\mathbf{F}}\_{1}(N-1)) \\ &- (\mathbf{A}\_{d} + \mathbf{B}\_{d1}\tilde{\mathbf{F}}\_{1}(N-1))^{T}\tilde{P}\_{2}(t\_{N-1})\mathbf{B}\_{d2}(h\mathbf{R}\_{22} + \mathbf{B}\_{d2}^{T}\tilde{P}\_{2}(t\_{N-1})\mathbf{B}\_{d2})^{-1} \\ &\cdot \mathbf{B}\_{d2}^{T}\tilde{P}\_{2}(t\_{N-1})(\mathbf{A}\_{d} + \mathbf{B}\_{d1}\tilde{\mathbf{F}}\_{1}(N-1)) + h\tilde{\mathbf{F}}\_{1}^{T}(N-1)\mathbf{R}\_{21}\tilde{\mathbf{F}}\_{1}(N-1). \end{split}
\tag{36}
$$

**STEP 2.A.** Fix $j \le N-2$. Assuming that $\widetilde{P}\_k(t\_{j+1}^-)$, $k = 1, 2$, have already been computed, we compute

$$\widetilde{P}\_k(t\_j) = e^{\mathcal{L}^\*h}[\widetilde{P}\_k(t\_{j+1}^-)] + \mathbb{M}\_k, \quad k = 1, 2,\tag{37}$$

where $\mathbb{M}\_k$ is computed as in (31).

We compute the feedback gains $\widetilde{F}\_k(j) \in \mathbb{R}^{m\_k \times n}$ as the solution of the linear equation

$$\begin{pmatrix} R\_{11} + h^{-1}\widetilde{P}\_{1,11}(t\_j) & h^{-1}\widetilde{P}\_{1,12}(t\_j) \\ h^{-1}\widetilde{P}\_{2,12}^T(t\_j) & R\_{22} + h^{-1}\widetilde{P}\_{2,22}(t\_j) \end{pmatrix} \begin{pmatrix} \widetilde{F}\_1(j) \\ \widetilde{F}\_2(j) \end{pmatrix} = -\begin{pmatrix} h^{-1}\widetilde{P}\_{1,01}^T(t\_j) \\ h^{-1}\widetilde{P}\_{2,02}^T(t\_j) \end{pmatrix} \tag{38}$$

**STEP 2.B.** Setting $\widetilde{\mathbf{F}}\_k(j) = (\widetilde{F}\_k(j) \quad 0 \quad 0) \in \mathbb{R}^{m\_k \times (n+m\_1+m\_2)}$, $k = 1, 2$, we compute $\widetilde{P}\_k(t\_j^-)$ by the formulae below

$$
\begin{split}
\tilde{P}\_{1}(t\_{j}^{-}) &= (\mathbf{A}\_{d} + \mathbf{B}\_{d2}\tilde{\mathbf{F}}\_{2}(j))^{T}\tilde{P}\_{1}(t\_{j})(\mathbf{A}\_{d} + \mathbf{B}\_{d2}\tilde{\mathbf{F}}\_{2}(j)) \\ &- (\mathbf{A}\_{d} + \mathbf{B}\_{d2}\tilde{\mathbf{F}}\_{2}(j))^{T}\tilde{P}\_{1}(t\_{j})\mathbf{B}\_{d1}(h\mathbf{R}\_{11} + \mathbf{B}\_{d1}^{T}\tilde{P}\_{1}(t\_{j})\mathbf{B}\_{d1})^{-1} \\ &\cdot \mathbf{B}\_{d1}^{T}\tilde{P}\_{1}(t\_{j})(\mathbf{A}\_{d} + \mathbf{B}\_{d2}\tilde{\mathbf{F}}\_{2}(j)) + h\tilde{\mathbf{F}}\_{2}^{T}(j)\mathbf{R}\_{12}\tilde{\mathbf{F}}\_{2}(j) \end{split} \tag{39}
$$

and

$$\begin{aligned} \widetilde{P}\_2(t\_j^-) &= (\mathbf{A}\_d + \mathbf{B}\_{d1}\widetilde{\mathbf{F}}\_1(j))^T \widetilde{P}\_2(t\_j)(\mathbf{A}\_d + \mathbf{B}\_{d1}\widetilde{\mathbf{F}}\_1(j)) \\ &\quad - (\mathbf{A}\_d + \mathbf{B}\_{d1}\widetilde{\mathbf{F}}\_1(j))^T \widetilde{P}\_2(t\_j)\mathbf{B}\_{d2}\big(hR\_{22} + \mathbf{B}\_{d2}^T \widetilde{P}\_2(t\_j)\mathbf{B}\_{d2}\big)^{-1} \\ &\quad \cdot \mathbf{B}\_{d2}^T \widetilde{P}\_2(t\_j)(\mathbf{A}\_d + \mathbf{B}\_{d1}\widetilde{\mathbf{F}}\_1(j)) + h\widetilde{\mathbf{F}}\_1^T(j) R\_{21} \widetilde{\mathbf{F}}\_1(j). \end{aligned} \tag{40}$$
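The four jump updates (35), (36), (39) and (40) share the same algebraic shape, so a single helper covers all of them. This is a sketch under the assumption that the indicated inverse exists (cf. Remark 4); the function and argument names are ours:

```python
import numpy as np

def jump_update(P, Ad, Bd_own, Bd_other, F_other, R_own, R_cross, h):
    """One Riccati-type jump update of the shape shared by (35), (36), (39), (40):
    returns P(t_j^-) from P(t_j), given the other player's feedback gain."""
    Acl = Ad + Bd_other @ F_other                  # A_d + B_d,other F_other
    S = h * R_own + Bd_own.T @ P @ Bd_own          # assumed invertible (Remark 4)
    K = np.linalg.solve(S, Bd_own.T @ P @ Acl)
    return (Acl.T @ P @ Acl
            - Acl.T @ P @ Bd_own @ K
            + h * F_other.T @ R_cross @ F_other)
```

For player 1, `Bd_own` is $\mathbf{B}\_{d1}$, `Bd_other` is $\mathbf{B}\_{d2}$, `R_own` is $R\_{11}$ and `R_cross` is $R\_{12}$; for player 2, the roles are exchanged.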

• In the second step, the computation of the optimal trajectory $\tilde{x}(t)$ involves the initial vector $x\_0$ and the equilibrium strategy values $\tilde{u}\_k(j)$, $k = 1, 2$. We then illustrate the mean squares $\mathbb{E}[|\tilde{x}(t)|^2]$ of the optimal trajectory and $\mathbb{E}[|\tilde{u}\_k(t)|^2]$, $k = 1, 2$, of the equilibrium strategy. We set $\tilde{\xi}(t) = (\tilde{x}^T(t)\ \tilde{u}\_1^T(t)\ \tilde{u}\_2^T(t))^T$ and define $X(t) = \mathbb{E}[\tilde{\xi}(t)\tilde{\xi}^T(t)]$.

The function $t \mapsto X(t)$ solves the forward linear differential equation with finite jumps:

$$
\dot{X}(t) = LX(t), \ t\_j \le t < t\_{j+1}. \tag{41}
$$

For *t<sup>j</sup>* = *jh* we write:

$$X(t\_j^+) = \left(\mathbf{A}\_d + \mathbf{B}\_{d1}\tilde{\mathbf{F}}\_1(j) + \mathbf{B}\_{d2}\tilde{\mathbf{F}}\_2(j)\right)X(t\_j) \cdot \left(\mathbf{A}\_d + \mathbf{B}\_{d1}\tilde{\mathbf{F}}\_1(j) + \mathbf{B}\_{d2}\tilde{\mathbf{F}}\_2(j)\right)^T \tag{42}$$

0 ≤ *j* ≤ *N* − 1, *t<sup>j</sup>* = *jh*, where

$$LX = \mathbf{A}X + X\mathbf{A}^T + \mathbf{C}X\mathbf{C}^T.\tag{43}$$

We then used these values to produce the plots:

$$\begin{aligned} \mathbb{E}[|\tilde{x}(i\delta + jh)|^2] &= \operatorname{Tr}[X\_{11}(i\delta + jh)] \\ \mathbb{E}[|\tilde{u}\_1(i\delta + jh)|^2] &= \operatorname{Tr}[X\_{22}(i\delta + jh)] \\ \mathbb{E}[|\tilde{u}\_2(i\delta + jh)|^2] &= \operatorname{Tr}[X\_{33}(i\delta + jh)] \end{aligned}$$

where

$$X(i\delta + jh) = \begin{pmatrix} X\_{11}(i\delta + jh) & X\_{12}(i\delta + jh) & X\_{13}(i\delta + jh) \\ X\_{12}^T(i\delta + jh) & X\_{22}(i\delta + jh) & X\_{23}(i\delta + jh) \\ X\_{13}^T(i\delta + jh) & X\_{23}^T(i\delta + jh) & X\_{33}(i\delta + jh) \end{pmatrix}$$

such that $X\_{11}(i\delta + jh) \in \mathbb{R}^{n \times n}$, $X\_{22}(i\delta + jh) \in \mathbb{R}^{m\_1 \times m\_1}$ and $X\_{33}(i\delta + jh) \in \mathbb{R}^{m\_2 \times m\_2}$.
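The second step above can be sketched with a simple explicit-Euler discretization of the flow (41) combined with the jump (42); the sub-step count `n_sub` and the function name are our choices, not the authors':

```python
import numpy as np

def propagate_moments(bA, bC, Ad, Bd1, F1, Bd2, F2, X0, n, h, N, n_sub=100):
    """Propagate X(t) = E[xi(t) xi(t)^T] by explicit Euler on the flow (41),
    dX/dt = A X + X A^T + C X C^T, applying the jump (42) at each t_j = j h.
    Records Tr X_11 = E|x|^2 (the upper-left n x n block) after every sub-step."""
    X = X0.copy()
    delta = h / n_sub
    traces = []
    for j in range(N):
        Acl = Ad + Bd1 @ F1[j] + Bd2 @ F2[j]
        X = Acl @ X @ Acl.T                      # jump (42) at t_j
        for _ in range(n_sub):                   # flow (41) on (t_j, t_{j+1})
            X = X + delta * (bA @ X + X @ bA.T + bC @ X @ bC.T)
            traces.append(np.trace(X[:n, :n]))   # E|x|^2; other blocks give E|u_k|^2
    return X, traces
```

In practice a higher-order integrator (or the series expansion of STEP 1.A applied to the adjoint operator $L$) would be preferable; the Euler sketch only illustrates the structure flow-then-jump.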

This algorithm enables us to compute the equilibrium strategy values $\tilde{u}\_k(j)$ of the players. The experiments illustrate that the optimal strategies are piecewise constant, which seems to indicate a stabilization effect.

Further, we consider two examples for the LQ differential game described by the dynamical system (1), the performance criteria (2) and the class of piecewise constant admissible strategies of type (28).

**Example 1.** *We consider the controlled system (1) in the special case $n = m\_1 = m\_2 = 2$. The coefficient matrices $A$, $B\_k$, $C$, $D\_k$, $M\_k$, $G\_k$, $R\_{kk}$, $R\_{k\ell}$, $k, \ell = 1, 2$, $k \neq \ell$, are defined as*

$$A = \begin{pmatrix} 1.5 & 0.17 \\ 0.07 & -1.4 \end{pmatrix} \qquad B\_1 = \begin{pmatrix} 1.5 & 0.7 \\ 0.3 & 0.4 \end{pmatrix} \qquad B\_2 = \begin{pmatrix} 1.2 & 0.95 \\ 0.8 & 0.7 \end{pmatrix}$$

$$C = \begin{pmatrix} 0.7 & 0.19 \\ 0.24 & 0.9 \end{pmatrix} \qquad D\_1 = \begin{pmatrix} 0.2 & 0.04 \\ 0.4 & 0.5 \end{pmatrix} \qquad D\_2 = \begin{pmatrix} 0.1 & 0.06 \\ 0.2 & 0.3 \end{pmatrix}$$

$$M\_1 = \begin{pmatrix} 0.8 & 0.7 \\ 0.7 & 0.95 \end{pmatrix} \qquad M\_2 = \begin{pmatrix} 0.09 & 0.04 \\ 0.04 & 0.08 \end{pmatrix}$$

$$G\_1 = \begin{pmatrix} 1.2 & 0.45 \\ 0.45 & 1.5 \end{pmatrix} \qquad G\_2 = \begin{pmatrix} 0.95 & 0.8 \\ 0.8 & 1.15 \end{pmatrix}$$

$$R\_{11} = \begin{pmatrix} 0.6 & 0.25 \\ 0.25 & 0.8 \end{pmatrix} \qquad R\_{22} = \begin{pmatrix} 0.3 & 0.15 \\ 0.15 & 0.4 \end{pmatrix}$$

$$R\_{12} = \begin{pmatrix} 0.05 & 0.04 \\ 0.04 & 0.08 \end{pmatrix} \qquad R\_{21} = \begin{pmatrix} 0.06 & 0.07 \\ 0.07 & 0.09 \end{pmatrix}$$

*The evolution of the mean square values $\mathbb{E}[|\tilde{x}(t)|^2]$ and $\mathbb{E}[|u\_{opt}(t)|^2]$ of the optimal trajectory $\tilde{x}(t)$ (with the initial point $x\_0^T = (0.03 \ \ 0.01)$) and the equilibrium strategies $u\_{1,opt}(t)$ and $u\_{2,opt}(t)$ is depicted in Figure 1 on the interval $[0, 1]$ and in Figure 2 on $[0, 2]$, respectively. The values of the optimal trajectory $\tilde{x}(t)$ and the equilibrium strategies of both players are very close to zero in both the short and long term.*

**Figure 1.** (**left**) E[|*x*˜(*t*)| 2 ]; Interval [*t*0, *τ*] = [0, 1]; (**right**) E[|*u*1,*opt*(*t*)| 2 ] and E[|*u*2,*opt*(*t*)| 2 ]; Interval [*t*0, *τ*] = [0, 1].

**Figure 2.** (**left**) E[|*x*˜(*t*)| 2 ]; Interval [*t*0, *τ*] = [0, 2]; (**right**) E[|*u*1,*opt*(*t*)| 2 ] and E[|*u*2,*opt*(*t*)| 2 ]; Interval [*t*0, *τ*] = [0, 2].

**Example 2.** *We consider the controlled system (1) in the special case $n = 4$ and $m\_1 = m\_2 = 2$. We define the matrix coefficients $A$, $B\_k$, $C$, $D\_k$, $M\_k$, $G\_k$, $R\_{kk}$, $R\_{k\ell}$, $k, \ell = 1, 2$, $k \neq \ell$, as follows:*

$$A = \begin{pmatrix} 0.5 & 0.17 & 0.07 & 0.9 \\ 0.07 & 0.54 & 0.2 & 0.25 \\ 0.6 & 0.8 & 0.92 & 0.06 \\ 0.35 & 0.45 & 0.04 & -0.99 \end{pmatrix} \qquad B\_1 = \begin{pmatrix} 4.05 & -0.4 \\ 0.4 & -0.8 \\ 1 & 0.9 \\ 0 & -0.8 \end{pmatrix} \qquad B\_2 = \begin{pmatrix} 0.4 & 0.05 \\ 0.05 & -0.07 \\ 0 & 0.07 \\ 0.3 & -0.05 \end{pmatrix}$$

$$C = \begin{pmatrix} 0.07 & 0.19 & 0.8 & 0 \\ 0.4 & 0.18 & 0.24 & 0.7 \\ 0.06 & 0.3 & 0.15 & 0.4 \\ 0.45 & 0.37 & 0.09 & 0.08 \end{pmatrix} \qquad D\_1 = \begin{pmatrix} 0.15 & 0 \\ -0.2 & 0.25 \\ 0 & 0.035 \\ 0.04 & -0.2 \end{pmatrix} \qquad D\_2 = \begin{pmatrix} 0.25 & 0.525 \\ 1.25 & -0.025 \\ 0.35 & -0.75 \\ 0.25 & -0.9 \end{pmatrix}$$

$$M\_1 = \mathrm{diag}(0.78, \ 0.82, \ 0.6, \ 0.5) \qquad M\_2 = \mathrm{diag}(0.6, \ 0.8, \ 0.48, \ 1.05)$$

$$G\_1 = \begin{pmatrix} 0.9 & 0.05 & 0.25 & 0.35 \\ 0.05 & 1 & 0.2 & 0.07 \\ 0.25 & 0.2 & 1.05 & 0.3 \\ 0.35 & 0.07 & 0.3 & 0.9 \end{pmatrix} \qquad G\_2 = \begin{pmatrix} 1.25 & 0.75 & 0.21 & 0.65 \\ 0.75 & 0.88 & 0.45 & 0.76 \\ 0.21 & 0.45 & 1 & 0.87 \\ 0.65 & 0.76 & 0.87 & 0.99 \end{pmatrix}$$

$$R\_{11} = \begin{pmatrix} 1.26 & 0.25 & 0.25 & 0.8 \\ 0.25 & 0.95 & 0.15 & 0.4 \\ 0.25 & 0.15 & 0.96 & 0.3 \\ 0.8 & 0.4 & 0.3 & 0.88 \end{pmatrix} \qquad R\_{22} = \begin{pmatrix} 0.6 & 0.15 & 0.15 & 0.4 \\ 0.15 & 0.85 & 0.36 & 0.4 \\ 0.15 & 0.36 & 0.4 & 0.25 \\ 0.4 & 0.4 & 0.25 & 0.87 \end{pmatrix}$$

$$R\_{12} = \begin{pmatrix} 0.98 & 0.04 & 0.36 & 0.4 \\ 0.04 & 0.8 & 0.36 & 0.45 \\ 0.36 & 0.36 & 0.64 & 0.1 \\ 0.4 & 0.45 & 0.1 & 0.89 \end{pmatrix} \qquad R\_{21} = \begin{pmatrix} 0.6 & 0.07 & 0.35 & 0.28 \\ 0.07 & 0.8 & 0.39 & 0.25 \\ 0.35 & 0.39 & 1.2 & 0.48 \\ 0.28 & 0.25 & 0.48 & 1.01 \end{pmatrix}.$$

*The evolution of the mean square values $\mathbb{E}[|\tilde{x}(t)|^2]$ and $\mathbb{E}[|u\_{opt}(t)|^2]$ of the optimal trajectory $\tilde{x}(t)$ (with the initial point $x\_0^T = (0.15 \ \ 0.01 \ \ 0.02 \ \ 0.03)$) and the equilibrium strategies $u\_{1,opt}(t)$ and $u\_{2,opt}(t)$ is depicted on the intervals $[0, 1]$ (Figure 3) and $[0, 5]$ (Figure 4), respectively. The values of the optimal trajectory $\tilde{x}(t)$ and the equilibrium strategies of both players are very close to zero in both the short and long term.*

**Figure 3.** (**left**) E[|*x*˜(*t*)| 2 ]; Interval [*t*0, *τ*] = [0, 1]; (**right**) E[|*u*1,*opt*(*t*)| 2 ] and E[|*u*2,*opt*(*t*)| 2 ]; Interval [*t*0, *τ*] = [0, 1].

**Figure 4.** (**left**) E[|*x*˜(*t*)| 2 ]; Interval [*t*0, *τ*] = [0, 5]; (**right**) E[|*u*1,*opt*(*t*)| 2 ] and E[|*u*2,*opt*(*t*)| 2 ]; Interval [*t*0, *τ*] = [0, 5].

#### **4. Concluding Remarks**

In this paper, we have investigated conditions for the existence of Nash equilibrium strategies in state feedback form within the class of piecewise constant admissible strategies. These conditions are expressed through the solvability of the algebraic Equation (26), whose solutions provide the feedback matrices of the desired Nash equilibrium strategy. To obtain such conditions for the existence of a sampled-data Nash equilibrium strategy, we transformed the original problem into an equivalent one that requires finding a Nash equilibrium strategy in state feedback form for a stochastic differential game whose dynamics are described by Itô-type differential equations controlled by impulses. Unlike the deterministic case, in which the problem of finding a sampled-data Nash equilibrium strategy can be transformed into an equivalent discrete-time problem, in the stochastic framework, when the controlled system is described by Itô-type differential equations, such a transformation to the discrete-time case is not possible. The developments in the present work clarify and extend the results of Section 5 of [23], where only the particular case $L = 2$ was considered. The key ingredient for obtaining the feedback matrices of the Nash equilibrium strategy via Equation (26) is the solution $\widetilde{P}\_k(\cdot)$, $1 \le k \le L$, of the TVP (16). On each interval $(t\_{j-1}, t\_j)$, $1 \le j \le N$, (16a) consists of $L$ uncoupled backward linear differential equations. The boundary values $\widetilde{P}\_k(t\_j^-)$ are computed via (16d) for $j = N$ and via (16b) for $j \le N-1$. Finally, we gave an algorithm for calculating the equilibrium strategies of the players, and the numerical experiments suggest a stabilization effect.

**Author Contributions:** Conceptualization, V.D., I.G.I., I.-L.P. and O.B.; methodology, V.D., I.G.I., I.-L.P. and O.B.; software, V.D., I.G.I., I.-L.P. and O.B.; validation, V.D., I.G.I., I.-L.P. and O.B.; investigation, V.D., I.G.I., I.-L.P. and O.B.; resources, V.D., I.G.I., I.-L.P. and O.B.; writing—original draft preparation, V.D., I.G.I., I.-L.P. and O.B.; writing—review and editing, V.D., I.G.I., I.-L.P. and O.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by "1 Decembrie 1918" University of Alba Iulia through scientific research funds.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.



## *Article* **High-Order Filtered PID Controller Tuning Based on Magnitude Optimum**

**Damir Vrančić <sup>1,</sup>\* and Mikuláš Huba <sup>2</sup>**


**Abstract:** The paper presents a tuning method for PID controllers with higher-order derivatives and higher-order controller filters (HO-PID), where the controller and filter orders can be arbitrarily chosen by the user. The controller and filter parameters are tuned according to the magnitude optimum criterion and the specified noise gain of the controller. The advantages of the proposed approach are twofold. First, all parameters can be obtained from the process transfer function or from the measured input and output time responses of the process during steady-state changes. Second, the a priori defined controller noise gain limits the amount of HO-PID output noise. Therefore, the method can be successfully applied in practice. The work shows that the HO-PID controllers can significantly improve the control performance of various process models compared to the standard PID controllers. Of course, the increased efficiency is limited by the selected noise gain. The proposed tuning method is illustrated on several process models and compared with two other tuning methods for higher-order controllers.

**Keywords:** higher-order controllers; PID controller; magnitude optimum; controller tuning; noise attenuation

#### **1. Introduction**

PID controllers are widespread in many industries and are frequently included in embedded solutions [1–4]. This is not surprising, since the basic PID control algorithm is very simple and the control performance, when the controller is tuned appropriately, is usually very good. However, the control performance can be improved by increasing the controller order. The improvement depends on the process order. While a first-order process can be efficiently controlled by a PI controller and a second-order process by a PID controller, the control efficiency for higher-order processes can be improved by increasing the controller order beyond PID control.

In practice, PI controllers are used more often than PID controllers, since the latter significantly amplify the controller output noise. Naturally, the problem becomes aggravated with higher controller orders. Therefore, an appropriate higher-order controller filter is indispensable in practical applications.

For easier classification of the HO-PID controllers according to the controller (*m*) and filter (*n*) order, let us denote them as PID<sup>m</sup><sub>n</sub>. A general PID<sup>m</sup><sub>n</sub> controller transfer function *G<sub>CF</sub>*(*s*) can be defined as follows:

$$\begin{aligned} G_{CF}(s) &= G_C(s)G_F(s), \quad \text{where} \\ G_C(s) &= K_{-1}s^{-1} + K_0 + K_1 s + \dots + K_m s^m \\ G_F(s) &= \frac{1}{(1 + T_F s)^n} \end{aligned} \tag{1}$$

where *K*<sub>−1</sub>, *K*<sub>0</sub>, *K*<sub>1</sub>, . . ., *K<sub>m</sub>* are the controller gains, and *G<sub>F</sub>*(*s*) is the binomial filter with filter time constant *T<sub>F</sub>*. In practical applications, in order to limit the higher-frequency controller

**Citation:** Vrančić, D.; Huba, M. High-Order Filtered PID Controller Tuning Based on Magnitude Optimum. *Mathematics* **2021**, *9*, 1340. https://doi.org/10.3390/math9121340

Academic Editor: Ioannis Dassios

Received: 6 May 2021 Accepted: 5 June 2021 Published: 9 June 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

output noise, *n* ≥ *m*. Note that PID<sup>0</sup><sub>0</sub> denotes the PI controller (*K<sub>I</sub>* = *K*<sub>−1</sub>, *K<sub>P</sub>* = *K*<sub>0</sub>) and PID<sup>1</sup><sub>1</sub> denotes the PID controller (*K<sub>I</sub>* = *K*<sub>−1</sub>, *K<sub>P</sub>* = *K*<sub>0</sub>, *K<sub>D</sub>* = *K*<sub>1</sub>) with the first-order filter (*n* = 1).
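As a quick numerical aid (not part of the original paper; the function name and argument layout are ours), the controller structure (1) can be evaluated at a given frequency with a few lines of Python:

```python
def pid_mn_response(gains, Tf, n, w):
    """Evaluate G_CF(jw) = G_C(jw) * G_F(jw) from Equation (1).

    gains -- [K_-1, K_0, K_1, ..., K_m], i.e. the m + 2 controller gains
    Tf    -- binomial filter time constant T_F
    n     -- filter order (n >= m in practice)
    w     -- angular frequency in rad/s (w != 0, because of the K_-1/s term)
    """
    s = 1j * w
    Gc = sum(K * s ** (k - 1) for k, K in enumerate(gains))  # K_-1/s + K_0 + K_1*s + ...
    Gf = 1.0 / (1.0 + Tf * s) ** n                           # binomial filter G_F(jw)
    return Gc * Gf
```

For example, with gains [1, 2] (a PI controller, *K<sub>I</sub>* = 1, *K<sub>P</sub>* = 2) and no filter (*n* = 0), the response at ω = 1 rad/s is 2 − *j*.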

Several tuning methods for HO-PID controllers have been proposed so far. The majority of them are designed for proportional-integrative-derivative-accelerative (PIDA) controllers (PID<sup>2</sup><sub>n</sub>). The controller structure is either 1-degree-of-freedom (1-DOF) [5–20] or 2-degrees-of-freedom (2-DOF) [21–23], the latter of which can optimise the tracking and the control performance independently.

The tuning methods for the mentioned PIDA controllers are derived either for the first-order process with delay [20,22,23], the third-order process [5,11,13,14,21], the first-order double integrating process [5,11,16], the second-order integrating process [8,9,11,16], the double integrating process with time delay [10], the fourth-order system [18], for different types of process models [6,17,18], or for the automatic voltage regulator (AVR) in the generator excitation system [7,15,19]. Unfortunately, only a few of the mentioned PIDA controller tuning methods take the controller filter into account in the controller design stage [5–7,10,15,23]. Therefore, the practical implementation of the other PIDA tuning methods remains questionable.

Besides PIDA controllers, some other higher-order controller tuning methods also exist [24–27]. The tuning method for the PID<sup>3</sup><sub>0</sub> controller (the controller filter is not considered), where the controlled process is a model of a ship power plant including a heat exchanger, is given in [24]. The method optimises the integral of absolute error (IAE) for disturbance rejection while limiting the peak of the closed-loop amplitude frequency response.

Tuning methods for even higher-order controllers (*m* > 3) were developed for the integrating process model with a time delay (IPTD) [25–27]. Although this type of process model seems limited, it should be mentioned that many stable process models can be modelled as IPTD processes [1]. For HO-PID control of stable time-delayed processes, a new method was also proposed by generalising Skogestad's SIMC method [28]. The basic version is based on the approximation of processes by transfer functions with multiple time constants (obtained, for example, by an appropriate identification method); however, a suitable model can also be obtained from a more general description of the process reduced by the modified "half-rule" method [29]. Although not specifically designed for HO-PID controllers, we should also mention the tuning approach based on the design of multiple dominant closed-loop poles for delayed processes, applied to PI and PID controllers [30], which can easily be extended to HO-PID controllers.

The developed tuning methods for the PID<sup>m</sup><sub>n</sub> controllers reveal that HO-PID controllers can be much more efficient than ordinary PID controllers without a significant increase in the controller output noise.

This paper presents a PID<sup>m</sup><sub>n</sub> controller and filter tuning method which is based on a parametric or non-parametric process description. This means that the process can be given either by a general transfer function (of arbitrary order, with time delay) or by the process input and output time responses during a steady-state change of the process. The only user-defined parameters are the controller order (*m*), the filter order (*n*) and the desired high-frequency gain of the controller. As will be shown later, the controller parameters are calculated analytically.

Therefore, the main advantages of the proposed method are the flexibility of the process description (the process model is not required), simple specifications by the user and simple calculation of the controller and filter parameters.

The content of the paper is as follows. The tuning method for the PID<sup>m</sup><sub>n</sub> controllers is covered in Section 2. The calculation of the controller parameters and the controller filter time constant, according to the desired closed-loop high-frequency gain, is derived in Section 3. The comparison with some other tuning methods is carried out in Section 4. The paper concludes with Section 5.

#### **2. HO-PID Controller Tuning**

The HO-PID controller parameters will be derived according to the magnitude optimum multiple integration (MOMI) tuning method, which is based on the magnitude optimum (MO) criterion [31–37]. The main advantages of the MOMI method are that it combines the frequency-domain MO tuning criterion (providing a fast and non-oscillatory closed-loop process output response) with the time-domain method of moments (the calculation of the process characteristic areas directly from the process time responses).

#### The process

The general order process transfer function with time delay is defined by the following expression:

$$G\_P(s) = \frac{K\_{PR} \left(1 + b\_1s + b\_2s^2 + \dots + b\_rs^r\right)}{1 + a\_1s + a\_2s^2 + \dots + a\_ps^p} e^{-sT\_{del}} \tag{2}$$

where *K<sub>PR</sub>* is the process gain, *T<sub>del</sub>* is the process time delay, and *a*<sub>1</sub> to *a<sub>p</sub>* and *b*<sub>1</sub> to *b<sub>r</sub>* are the process dynamic parameters. To simplify the derivation, let us assume that the process transfer function is developed into an infinite Taylor series around *s* = 0:

$$G\_P(s) = G\_{P0} + G\_{P1}s + \frac{G\_{P2}}{2!}s^2 + \frac{G\_{P3}}{3!}s^3 + \dotsb \tag{3}$$

where *GPk* are the *k*-th derivatives of the *GP*(*s*) over s around *s* = 0. The moments can be calculated from the process impulse response *h*(*t*) in the following way [1,38]:

$$G\_P^{(k)}(0) = G\_{Pk} = (-1)^k \int\_0^\infty t^k h(t) dt\tag{4}$$

Besides measuring the process impulse response, the moments can also be calculated from the process steady-state change by measuring the process input and output time responses [32,34]. By integrating the process input and output time responses, the so-called characteristic areas *A<sup>k</sup>* are obtained, which are related to the process moments as follows:

$$A\_k = \frac{(-1)^k}{k!} G\_{Pk} \tag{5}$$

The process transfer function, based on the characteristic areas, can be derived from (3) and (5) as follows:

$$G\_P(\mathbf{s}) = A\_0 - A\_1 \mathbf{s} + A\_2 \mathbf{s}^2 - A\_3 \mathbf{s}^3 + \dotsb \,\tag{6}$$

Since the calculation of the mentioned areas from the process input and output time responses, during arbitrary steady-state change, are already covered in detail in [36], it will not be repeated herein.

The process moments (4) and, therefore, the characteristic areas *A<sup>k</sup>* (5), can also be calculated from the process transfer function (2) by calculating the derivatives of *GP*(*s*) over *s* around *s* = 0. The result is the following [32,34]:

$$\begin{aligned} A_0 &= K_{PR} \\ A_1 &= K_{PR}(a_1 - b_1 + T_{del}) \\ A_2 &= A_1 a_1 + K_{PR}\left( b_2 - a_2 - T_{del}b_1 + \frac{T_{del}^2}{2!} \right) \\ &\;\;\vdots \\ A_k &= \sum_{i=1}^{k-1} (-1)^{k+i-1} A_i a_{k-i} + (-1)^{k+1} K_{PR}(a_k - b_k) + K_{PR} \sum_{i=1}^{k} \frac{(-1)^{k+i}}{i!} T_{del}^i\, b_{k-i} \end{aligned} \tag{7}$$

Therefore, the characteristic areas in expression (6) can be calculated either from the process time response or from the process transfer function. This is a very important advantage, since the actual process model can be used, but is not required.
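Since the areas are plain Taylor coefficients of *G<sub>P</sub>*(*s*) (up to sign), they can also be obtained with a few lines of code. The following self-contained Python sketch (ours, not from the paper) works on polynomial coefficient lists ordered from *s*<sup>0</sup> upward and computes *A<sub>k</sub>* of expression (6), including the delay term:

```python
import math

def series_mul(a, b, nterms):
    """Cauchy product of two truncated power series (coefficient lists)."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)
                if i < len(a) and k - i < len(b)) for k in range(nterms)]

def series_div(num, den, nterms):
    """Coefficients c of num/den, solving den * c = num term by term (den[0] != 0)."""
    c = []
    for k in range(nterms):
        acc = num[k] if k < len(num) else 0.0
        acc -= sum(den[i] * c[k - i] for i in range(1, k + 1) if i < len(den))
        c.append(acc / den[0])
    return c

def characteristic_areas(num, den, Tdel, nterms):
    """A_k of Equation (6): G_P(s) = sum_k (-1)^k A_k s^k, delay e^(-s*Tdel) included."""
    delay = [(-Tdel) ** k / math.factorial(k) for k in range(nterms)]
    coeffs = series_mul(series_div(num, den, nterms), delay, nterms)
    return [(-1) ** k * c for k, c in enumerate(coeffs)]
```

For instance, for *G<sub>P</sub>*(*s*) = e<sup>−0.5s</sup>/(1 + *s*) the first three areas come out as [1, 1.5, 1.625], which agrees with (7) for *K<sub>PR</sub>* = 1, *a*<sub>1</sub> = 1, *T<sub>del</sub>* = 0.5.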

In order to simplify the derivation of the controller parameters, the controller binomial filter *GF*(*s*) (1) will be considered as a part of the process. Since the above areas are calculated for the process without the filter, the areas *A<sup>i</sup>* should be modified, accordingly. If the filter *GF*(*s*) (1) is added to the process (2) and developed into a Taylor series, it can be derived that the new areas, denoted by *AiF*, can be simply calculated as:

$$A_{VF} = M_F^n A_V, \quad \text{where}$$

$$M_F = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & \cdots \\ T_F & 1 & 0 & 0 & 0 & \cdots \\ T_F^2 & T_F & 1 & 0 & 0 & \cdots \\ T_F^3 & T_F^2 & T_F & 1 & 0 & \cdots \\ T_F^4 & T_F^3 & T_F^2 & T_F & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}, \quad A_V = \begin{bmatrix} A_0 \\ A_1 \\ A_2 \\ A_3 \\ A_4 \\ \vdots \end{bmatrix}, \quad A_{VF} = \begin{bmatrix} A_{0F} \\ A_{1F} \\ A_{2F} \\ A_{3F} \\ A_{4F} \\ \vdots \end{bmatrix} \tag{8}$$

Note that *n* is the binomial filter order (1). Naturally, the chosen size of the matrix and the vectors depends on the number of the required areas.

Note that the characteristic areas with the included controller filter can be obtained a-posteriori, when the process areas *A<sup>i</sup>* are already measured either from the process time response (5) or calculated from the process transfer function (7).

For further reference, please note that the process areas with the included controller binomial filter are denoted with index *F* (*AiF*) and the process areas without the controller filter are denoted without index *F* (*A<sup>i</sup>* ).
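A minimal sketch of the area transformation (8), assuming plain Python lists and no external libraries (the function name is ours), applies the lower-triangular matrix *M<sub>F</sub>* once per filter order:

```python
def filtered_areas(areas, Tf, n):
    """A_iF from Equation (8): multiply the area vector by M_F n times.

    M_F is lower triangular with entry Tf**(i - j) in row i, column j,
    which is the Taylor-series effect of one filter stage 1/(1 + T_F s).
    """
    out = list(areas)
    for _ in range(n):                       # one pass per filter order
        out = [sum(Tf ** (i - j) * out[j] for j in range(i + 1))
               for i in range(len(out))]
    return out
```

For *T<sub>F</sub>* = 0.1 and the areas [1, 1.5, 1.625] of a first-order process with delay, one filter stage gives [1, 1.6, 1.785].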

The closed-loop transfer function

In the paper, the process and the HO-PID controller (1) will be considered, as shown in Figure 1. Signals *r*, *r<sup>f</sup>* , *e*, *u*, *d*, *n* and *y* stand for the reference, filtered reference, control error, controller output, process input disturbance, process output noise and the process output, respectively. Block *GFR* represents the second order filter for the reference signal in order to reduce excessive controller output change on reference changes:

$$G\_{FR}(s) = \frac{1}{\left(1 + T\_{FR}s\right)^2},\tag{9}$$

where *T<sub>FR</sub>* denotes the reference filter time constant. For simplicity, the filter order in (9) is fixed. However, note that the filter order may be increased along with the controller order, so as to additionally attenuate the swings of the signal *u* for step-like changes of the reference signal *r*.

**Figure 1.** The control loop with HO-PID controller and the process.

Let us now calculate the process closed-loop transfer function *G<sub>CL</sub>*(*s*) from the filtered reference (*r<sub>f</sub>*) to the process output (*y*). The closed-loop transfer function is then defined as:

$$G_{CL}(s) = \frac{Y(s)}{R_F(s)} = \frac{G_C G_P}{1 + G_C G_P}, \tag{10}$$

where *Y*(*s*) and *R<sub>F</sub>*(*s*) are the Laplace transforms of the process output and the filtered reference signals, respectively.

When applying the process (6) and the controller (1) transfer functions to (10), and considering that the controller binomial filter is a part of the process (in the process transfer function (6) the areas *A<sup>i</sup>* are replaced by *AiF*), the closed-loop transfer function becomes:

$$\begin{aligned} G_{CL}(s) &= \frac{G_{OL}(s)}{1 + G_{OL}(s)} \\ G_{OL}(s) &= A_{0F}K_{-1}s^{-1} + (A_{0F}K_0 - A_{1F}K_{-1}) + s(A_{0F}K_1 - A_{1F}K_0 + A_{2F}K_{-1}) \\ &\quad + s^2(A_{0F}K_2 - A_{1F}K_1 + A_{2F}K_0 - A_{3F}K_{-1}) + \cdots + s^k \sum_{i=0}^{k+1} (-1)^i A_{iF}K_{k-i} + \cdots \end{aligned} \tag{11}$$

where *G<sub>OL</sub>*(*s*) denotes the open-loop transfer function *G<sub>OL</sub>*(*s*) = *G<sub>C</sub>*(*s*)*G<sub>P</sub>*(*s*).

The MO criterion

According to [35], the MO tuning criterion states that the closed-loop amplitude (magnitude) should be 1 in as wide a frequency bandwidth as possible (starting from frequency ω = 0). This can be achieved if the open-loop transfer function *GOL*(*j*ω), in the Nyquist diagram, follows the vertical line with the real value −0.5 (according to M and N circles in control theory).

Replacing s with complex frequency *j*ω in *GOL*(*s*) (11) yields:

$$\begin{aligned} G_{OL}(j\omega) &= -jA_{0F}K_{-1}\omega^{-1} + (A_{0F}K_0 - A_{1F}K_{-1}) + j\omega(A_{0F}K_1 - A_{1F}K_0 + A_{2F}K_{-1}) \\ &\quad - \omega^2(A_{0F}K_2 - A_{1F}K_1 + A_{2F}K_0 - A_{3F}K_{-1}) + \cdots + (j\omega)^k \sum_{i=0}^{k+1} (-1)^i A_{iF}K_{k-i} + \cdots \end{aligned} \tag{12}$$

where *j* denotes the imaginary unit, $j = \sqrt{-1}$.

Since only the real part of the open-loop transfer function is required, only the even powers of frequency in (12) are needed. Therefore:

$$\begin{aligned} \text{Re}\{G_{OL}(j\omega)\} &= (A_{0F}K_0 - A_{1F}K_{-1}) - \omega^2(A_{0F}K_2 - A_{1F}K_1 + A_{2F}K_0 - A_{3F}K_{-1}) + \cdots \\ &\quad + (-1)^q \omega^{2q} \sum_{i=0}^{2q+1} (-1)^i A_{iF}K_{2q-i} + \cdots \end{aligned} \tag{13}$$

In order to achieve Re{*G<sub>OL</sub>*(*j*ω)} = −0.5 over as wide a frequency range as possible, the following conditions should be fulfilled:

$$\begin{array}{c} -A\_{1F}K\_{-1} + A\_{0F}K\_{0} = -0.5\\ -A\_{3F}K\_{-1} + A\_{2F}K\_{0} - A\_{1F}K\_{1} + A\_{0F}K\_{2} = 0\\ -A\_{5F}K\_{-1} + A\_{4F}K\_{0} - A\_{3F}K\_{1} + A\_{2F}K\_{2} - A\_{1F}K\_{3} + A\_{0F}K\_{4} = 0\\ \vdots \end{array} \tag{14}$$

or in matrix form:

$$\begin{aligned} &M K_V = C, \text{ where} \\ &M = \begin{bmatrix} -A_{1F} & A_{0F} & 0 & 0 & 0 & \cdots \\ -A_{3F} & A_{2F} & -A_{1F} & A_{0F} & 0 & \cdots \\ -A_{5F} & A_{4F} & -A_{3F} & A_{2F} & -A_{1F} & \cdots \\ -A_{7F} & A_{6F} & -A_{5F} & A_{4F} & -A_{3F} & \cdots \\ -A_{9F} & A_{8F} & -A_{7F} & A_{6F} & -A_{5F} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}, \quad K_V = \begin{bmatrix} K_{-1} \\ K_0 \\ K_1 \\ K_2 \\ K_3 \\ \vdots \end{bmatrix}, \quad C = \begin{bmatrix} -0.5 \\ 0 \\ 0 \\ 0 \\ 0 \\ \vdots \end{bmatrix} \end{aligned} \tag{15}$$

Note that the matrix and vector dimensions depend on the number of controller parameters (*m* + 2):

$$M_{(m+2)\times(m+2)}, \quad K_{V\,(m+2)\times 1}, \quad C_{(m+2)\times 1} \tag{16}$$

The controller parameters (gains) can then be simply calculated from (15):

$$K_V = M^{-1} C \tag{17}$$
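To make the structure of (15)–(17) concrete, here is a hedged Python sketch (ours; the original authors provide MATLAB/Octave scripts [39] instead) that builds *M* and *C* from the filtered areas and solves for the gain vector with plain Gaussian elimination:

```python
def mo_gains(areas_f, m):
    """Solve Equation (15), M K_V = C, for K_V = [K_-1, K_0, ..., K_m].

    areas_f -- filtered characteristic areas A_0F, A_1F, ... (at least 2m + 4 values)
    """
    size = m + 2
    A = areas_f
    # row q, column j of M holds (-1)**(j + 1) * A_(2q+1-j)F, zero for negative indices
    M = [[(-1) ** (j + 1) * A[2 * q + 1 - j] if 2 * q + 1 - j >= 0 else 0.0
          for j in range(size)] for q in range(size)]
    C = [-0.5] + [0.0] * (size - 1)
    # Gaussian elimination with partial pivoting
    for col in range(size):
        piv = max(range(col, size), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        C[col], C[piv] = C[piv], C[col]
        for r in range(col + 1, size):
            f = M[r][col] / M[col][col]
            for c in range(col, size):
                M[r][c] -= f * M[col][c]
            C[r] -= f * C[col]
    K = [0.0] * size
    for r in range(size - 1, -1, -1):
        K[r] = (C[r] - sum(M[r][c] * K[c] for c in range(r + 1, size))) / M[r][r]
    return K
```

For the process 1/(1 + *s*)², whose areas are *A<sub>k</sub>* = *k* + 1, and *m* = 0 (a PI controller, no filter), this returns *K*<sub>−1</sub> = 0.75 and *K*<sub>0</sub> = 1, which indeed satisfy the first two conditions in (14).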

The calculation of the controller and filter parameters is straightforward. However, to make it even simpler, we have provided online MATLAB/Octave scripts via the OctaveOnline Bucket website [39]. The provided scripts calculate all the controller parameters for the given process transfer function and the filter time constant. The website layout is shown in Figure 2. The calculation procedure proceeds as follows:


The script then calculates the characteristic areas, the controller and the filter parameters. The results are then shown on the right panel of the website.

Illustrative example 1

The PID<sup>1</sup><sub>1</sub>, PID<sup>2</sup><sub>2</sub> and PID<sup>3</sup><sub>3</sub> controller parameters will be calculated for the following processes:

$$G_{P1}(s) = \frac{1}{(1+s)^4}, \qquad G_{P2}(s) = \frac{e^{-0.5s}}{(1+s)^2} \tag{18}$$

The chosen controller filter time constant is *T<sub>F</sub>* = 0.1. The characteristic areas without (7) and with (8) the controller filter are given in Table 1.


**Table 1.** The calculated areas for the processes (18) without and with the controller filter.

The calculated controller parameters (17) are given in Table 2.

**Table 2.** The calculated controller parameters.


In order to reduce the excessive swing of the controller output when changing the reference, the following reference filter time constant (9) is used for both processes:

$$T\_{\rm FR} = 0.5\tag{19}$$

Note that the second order reference filter is used (9).

The closed-loop responses for the processes *G<sub>P1</sub>*(*s*) and *G<sub>P2</sub>*(*s*), for all three types of controllers, are shown in Figures 3 and 4. It is clear that the tracking and control performance increase with the controller order. Note that the controller output response of the PID<sup>3</sup><sub>3</sub> controller is not shown entirely, in order to see the responses of the PID<sup>1</sup><sub>1</sub> and PID<sup>2</sup><sub>2</sub> controllers more clearly.

When comparing the process output responses obtained with the controllers PID<sup>1</sup><sub>1</sub> and PID<sup>3</sup><sub>3</sub> in Figures 3 and 4, it can be seen that the relative difference in performance is larger for the higher-order process *G<sub>P1</sub>*(*s*). This is expected, since lower-order processes can already be optimally controlled by lower-order controllers (e.g., the first-order process with PID<sup>0</sup><sub>0</sub> and the second-order process with the PID<sup>1</sup><sub>1</sub> controller). Since the second-order process *G<sub>P2</sub>*(*s*) has an additional delay, the closed-loop performance can still be slightly increased with the PID<sup>3</sup><sub>3</sub> controller.

**Figure 3.** The closed-loop responses for the process *G<sub>P1</sub>*(*s*) when using controllers PID<sup>1</sup><sub>1</sub>, PID<sup>2</sup><sub>2</sub> and PID<sup>3</sup><sub>3</sub>.

**Figure 4.** The closed-loop responses for the process *G<sub>P2</sub>*(*s*) when using controllers PID<sup>1</sup><sub>1</sub>, PID<sup>2</sup><sub>2</sub> and PID<sup>3</sup><sub>3</sub>.

According to the closed-loop responses, it can be concluded that HO-PID controllers can significantly improve the closed-loop performance, especially for higher-order processes. The only required parameter from the user is the controller filter time constant *TF*. Namely, the amplification of the process output measurement noise depends on the chosen *TF*. However, the relation between *T<sup>F</sup>* and the actual amplification of the high-frequency (HF) noise depends on several other controller parameters and is a rather complex function. Therefore, in practice, it would be more appropriate to define the desired HF noise amplification than the controller filter time constant.

#### **3. HO-PID Controller with Specified HF Noise Amplification**

As mentioned in the previous section, the *n*-th order controller filter *G<sub>F</sub>*(*s*) (1) is primarily used to decrease the controller output noise due to the measurement noise (in addition to making the entire controller transfer function proper or strictly proper and, therefore, realisable in practice). High controller amplification of the measurement noise is never desired, since it may also cause large swings of the control output signals and thus may decrease the actuator's life span. In order to limit the amplification of the process measurement noise, the user can try different values of *T<sub>F</sub>* until the desired amplification (attenuation) of the noise is achieved. In practice, this may take too long, since the function between *T<sub>F</sub>* and the noise amplification is complex and non-linear. Therefore, from the user's perspective, it is easier to define the desired noise amplification of the controller than to select the filter time constant *T<sub>F</sub>*.

The process output noise (*ny*) is amplified by the controller (1) in the closed-loop configuration as follows:

$$U_N = G_{CN}(s)N_y = \frac{G_{CF}(s)}{1 + G_{CF}(s)G_P(s)}N_y = \left(G_P(s) + \frac{1}{G_{CF}(s)}\right)^{-1} N_y \tag{20}$$

where *N<sup>y</sup>* and *U<sup>N</sup>* are Laplace transforms of the measurement noise and the controller output noise, respectively. The negative sign is omitted to simplify the derivation. From (20) it can be seen that at lower frequencies, the transfer function *GCN*(*s*) is mostly dominated by the process transfer function *GP*(*s*), while at higher frequencies, it is mostly dominated by the controller transfer function *GCF*(*s*). At lower frequencies, the process can be approximated by its gain *KPR*, while at higher frequencies the controller gains *K*−<sup>1</sup> and *K*<sup>0</sup> can be neglected. Therefore, *GCN*(*s*) can be approximated by the following transfer function:

$$G_{CN}(s) \approx \frac{K_{PR}^{-1} + K_1 s + K_2 s^2 + \dots + K_m s^m}{(1 + T_F s)^n}. \tag{21}$$

On the other hand, the desired controller output noise (*UND*) should be similar to:

$$U_{ND} = K_{HF} N_y \tag{22}$$

where *K<sub>HF</sub>* is a chosen noise amplification factor. Since the amplitudes *U<sub>N</sub>* and *U<sub>ND</sub>* cannot be compared directly due to their different frequency characteristics, it is easier to compare the noise powers of both signals in some chosen frequency bandwidth (from *ω*<sub>1</sub> to *ω*<sub>2</sub>). Namely, due to Parseval's theorem, the power of the controller output signal (*P<sub>UN</sub>*) is proportional to:

$$P_{UN} \propto \int_{\omega_1}^{\omega_2} \left| G_{CN}(\omega) N_y(\omega) \right|^2 d\omega. \tag{23}$$

The desired noise power is, according to (22), proportional to:

$$P_{UND} \propto \int_{\omega_1}^{\omega_2} \left| K_{HF} N_y(\omega) \right|^2 d\omega. \tag{24}$$

When considering *N<sub>y</sub>* as white noise with unit amplitude over frequency, *N<sub>y</sub>*(*ω*) = 1, the powers *P<sub>UN</sub>* and *P<sub>UND</sub>* become the same when:

$$\begin{aligned} &\int_{\omega_1}^{\omega_2} \frac{F_0 + F_1\omega^2 + F_2\omega^4 + \dots + F_m\omega^{2m}}{\left(1 + T_F^2\omega^2\right)^n} d\omega = K_{HF}^2(\omega_2 - \omega_1), \text{ where } K_0 = K_{PR}^{-1} \text{ and} \\ &F_0 = K_0^2 \\ &F_1 = K_1^2 - 2K_0K_2 \\ &\;\;\vdots \\ &F_k = K_k^2 + 2\sum_{i=0}^{k-1} (-1)^{k+i} K_i K_{2k-i} \\ &\;\;\vdots \\ &F_m = K_m^2 \end{aligned} \tag{25}$$
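The coefficients *F<sub>k</sub>* in (25) are a simple quadratic form of the controller gains. A short Python sketch (ours; *K*<sub>0</sub> is replaced by 1/*K<sub>PR</sub>* as in (21), and gains above *K<sub>m</sub>* are treated as zero) computes them:

```python
def noise_poly_coeffs(gains_high, Kpr):
    """F_0 ... F_m of Equation (25), the numerator coefficients of |G_CN(jw)|^2.

    gains_high -- [K_1, ..., K_m]; K_0 is replaced by 1/K_PR as in Equation (21)
    """
    K = [1.0 / Kpr] + list(gains_high)       # K[0] = K_PR^-1, K[k] = K_k
    m = len(K) - 1
    F = []
    for k in range(m + 1):
        f = K[k] ** 2
        for i in range(k):                   # cross terms 2*(-1)^(k+i)*K_i*K_(2k-i)
            if 2 * k - i <= m:               # gains beyond K_m are zero
                f += 2 * (-1) ** (k + i) * K[i] * K[2 * k - i]
        F.append(f)
    return F
```

With *K<sub>PR</sub>* = 1, *K*<sub>1</sub> = 2, *K*<sub>2</sub> = 3 this yields F = [1, −2, 9], matching |1 + 2*j*ω + 3(*j*ω)²|² = 1 − 2ω² + 9ω⁴.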

However, the solution of the integral in (25), due to the denominator (the controller filter), becomes very complex and highly non-linear with respect to *T<sub>F</sub>*. Therefore, some search algorithm (optimization) would have to be applied for each calculation of *T<sub>F</sub>*.

This would seriously impact the simplicity of the proposed method. Therefore, we decided to simplify the function inside the above integral. Since at higher frequencies the most dominant controller term becomes the one with the highest derivative (*K<sub>m</sub>*), the function can be simplified as follows:

$$\frac{F\_0 + F\_1 \omega^2 + F\_2 \omega^4 + \dots + F\_m \omega^{2m}}{\left(1 + T\_F^2 \omega^2\right)^n} \approx \begin{cases} F\_m \omega^{2m} & ; \omega \le \frac{1}{T\_F} \\ \frac{F\_m \omega^{2(m-n)}}{T\_F^{2n}} & ; \omega > \frac{1}{T\_F} \end{cases} \tag{26}$$

According to (26), the expression (25) simplifies into:

$$\begin{aligned} &\int_{\omega_1}^{\omega_F} F_m \omega^{2m} d\omega + \int_{\omega_F}^{\omega_2} F_m \omega^{2(m-n)} \omega_F^{2n} d\omega \approx K_{HF}^2(\omega_2 - \omega_1) \\ &\int_{\omega_1}^{\omega_F} F_m \omega^{2m} d\omega = \frac{F_m\left(\omega_F^{2m+1} - \omega_1^{2m+1}\right)}{2m+1} \\ &\int_{\omega_F}^{\omega_2} F_m \omega^{2(m-n)} \omega_F^{2n} d\omega = \begin{cases} \frac{F_m \omega_F^{2n}\left(\omega_2^{2(m-n)+1} - \omega_F^{2(m-n)+1}\right)}{2(m-n)+1} & ; n > m \\ F_m \omega_F^{2n}(\omega_2 - \omega_F) & ; n = m \end{cases} \\ &\omega_F = \frac{1}{T_F} \end{aligned} \tag{27}$$

The contribution of noise power in the frequency region below *ω<sup>F</sup>* is usually much smaller than at higher frequencies. Therefore, in order to even further simplify the derivation, we can choose *ω*<sup>1</sup> = 0 without making any significant error in the calculation. Selection of the upper frequency (*ω*2) in the integral, due to the Shannon theorem, depends on the controller sampling frequency. Without loss of generality, the upper frequency can be selected as:

$$\omega_2 = \omega_S = \frac{2\pi}{T_S}. \tag{28}$$

where *ω<sub>S</sub>* is the controller sampling frequency (in rad/s) and *T<sub>S</sub>* is the controller sampling time. By taking into account that:

$$\omega_S \gg \omega_F \tag{29}$$

and *ω*<sup>1</sup> = 0, the expression (27) simplifies even further:

$$\begin{aligned} &\int_0^{\omega_F} F_m \omega^{2m} d\omega = \frac{F_m \omega_F^{2m+1}}{2m+1} \\ &\int_{\omega_F}^{\omega_S} \frac{F_m \omega^{2(m-n)}}{T_F^{2n}} d\omega \approx \begin{cases} \frac{F_m \omega_F^{2m+1}}{2(n-m)-1} & ; n > m \\ F_m \omega_F^{2n} \omega_S & ; n = m \end{cases} \end{aligned} \tag{30}$$

Therefore, the final expression, when taking into account that *F<sub>m</sub>* = *K<sub>m</sub>*<sup>2</sup>, reads as:

$$\begin{aligned} K_m^2 \omega_F^{2m+1}\left(\frac{1}{2m+1} + \frac{1}{2(n-m)-1}\right) &\approx K_{HF}^2 \omega_S & ; n > m \\ K_m^2 \omega_F^{2m} &\approx K_{HF}^2 & ; n = m \end{aligned} \tag{31}$$

Therefore, the filter time constant (*T<sub>F</sub>* = 1/*ω<sub>F</sub>*) can be estimated as follows:

$$\begin{aligned} T_F &\approx \sqrt[2m+1]{\frac{K_m^2\left(\frac{1}{2m+1} + \frac{1}{2(n-m)-1}\right)}{K_{HF}^2\,\omega_S}} & ; n > m \\ T_F &\approx \sqrt[m]{\frac{|K_m|}{K_{HF}}} & ; n = m \end{aligned} \tag{32}$$

Note that the above derivation of the filter time constant takes into account the approximations (26) and (29). This means that the final output noise power of the controller may differ from that defined by the selected high-frequency gain *K<sub>HF</sub>*. However, if the above approximations were not used, the final expressions for the calculation of *ω<sub>F</sub>* in (31) would become higher-order equations without an analytic solution for *T<sub>F</sub>*. This would significantly complicate the filter calculation.
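The two branches of (32) reduce to one small function. The following Python sketch (ours; the argument names are assumptions) estimates the filter time constant from the highest controller gain, the chosen noise gain and the sampling frequency:

```python
def filter_time_constant(Km, Khf, m, n, ws):
    """Estimate T_F from Equation (32).

    Km  -- highest-order controller gain K_m
    Khf -- chosen high-frequency noise gain K_HF
    m,n -- controller and filter order (the derivation assumes n >= m)
    ws  -- sampling frequency in rad/s (only used for n > m)
    """
    if n == m:
        return (abs(Km) / Khf) ** (1.0 / m)
    if n > m:
        val = Km ** 2 * (1.0 / (2 * m + 1) + 1.0 / (2 * (n - m) - 1)) / (Khf ** 2 * ws)
        return val ** (1.0 / (2 * m + 1))
    raise ValueError("Equation (32) assumes n >= m")
```

For *n* = *m* the result is (|*K<sub>m</sub>*|/*K<sub>HF</sub>*)<sup>1/m</sup>; for example, *K<sub>m</sub>* = 2, *K<sub>HF</sub>* = 20 and *m* = *n* = 1 give *T<sub>F</sub>* = 0.1.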

The entire procedure for the calculation of the controller parameters for a given process is given in Figure 5.


**Figure 5.** Calculation of the filter and controller parameters.

As shown in Figure 5, the calculation of the controller and filter parameters is straightforward. However, to make it even simpler, as mentioned before, we have provided online MATLAB/Octave scripts via OctaveOnline Bucket website [39]. The website layout is shown in Figure 6. The calculation procedure proceeds as follows:


**Figure 6.** The layout of the OctaveOnline Bucket website (function test\_HO\_filt.m).

The script then calculates the filter time constant, the characteristic areas and the controller parameters. The results are shown on the right panel of the website.

Illustrative example 2

Consider the following fourth-order process transfer function *G<sub>P3</sub>*(*s*):

$$G\_{P3}(s) = \frac{e^{-0.2s}}{\left(1+s\right)^4} \tag{33}$$

The initially chosen controller filter time constant is *T<sub>F</sub>* = 0.1. For all the experiments in this section, the chosen sampling time is *T<sub>S</sub>* = 0.002 s. In order to retain the clarity of the derivations, the characteristic areas of the process are not listed herein; however, they can be calculated (along with all the controller and filter parameters) on the aforementioned website [39].

a. Changing the parameter *KHF*

According to the procedure given in Figure 5, when choosing the parameter *K<sub>HF</sub>* and the controller PID<sup>3</sup><sub>4</sub>, and repeating steps 3–5 a few times (in our case, three times), the calculated filter time constants and controller parameters (17) are given in Table 3.

**Table 3.** The calculated filter time constants *T<sup>F</sup>* and controller parameters at different noise gains *KHF*.


Again, in order to reduce the excessive swing of the controller output when changing the reference, the following second-order reference filter time constant (9) is used:

$$T\_{\rm FR} = 0.2 \tag{34}$$

The closed-loop responses for different values of *KHF* are given in Figure 7.

**Figure 7.** The closed-loop responses for the process *GP*<sup>3</sup>(*s*) when using controllers *PID*<sup>3</sup><sub>4</sub>, at different *KHF*.

As expected, the speed of the closed-loop response and the controller output noise increase when the noise gain factor *KHF* is increased. However, the improvement in the closed-loop speed is not significant at the highest factors *KHF*, while the controller output noise keeps increasing. As expected, there is a trade-off between the closed-loop speed and the amount of controller output noise. Therefore, in practice, the allowed noise gain should be chosen wisely, according to the amount of noise present in the system.

The actual "amplification" of the measurement noise (the actually achieved noise gain *KHF*) is measured by dividing the standard deviations of the controller output signal (*u*) and the process output (*y*) when the process is in the steady state:

$$
\sigma\_{rel} = \frac{\sigma\_u}{\sigma\_y}.\tag{35}
$$

The actual amplifications of the measurement noise signals are given in Table 3. It is obvious that the actual gains of the noise (*σrel*) are very similar to the desired ones (*KHF*).
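As a quick numeric illustration of (35), the ratio of standard deviations can be estimated directly from steady-state records of *u* and *y*. The signals below are synthetic (an assumed noise level and an assumed amplification of 10), not measurements from the paper's experiments:

```python
import random
import statistics

random.seed(0)

# Synthetic steady-state records (assumed for illustration, not the paper's data):
# measurement noise on the process output y, amplified by the controller into
# its output u with an effective gain of about K_HF = 10.
y = [0.01 * random.gauss(0.0, 1.0) for _ in range(10_000)]
u = [10.0 * v + 0.5 for v in y]  # constant operating point plus amplified noise

# Actual noise gain (35): ratio of the standard deviations in steady state.
sigma_rel = statistics.stdev(u) / statistics.stdev(y)
print(round(sigma_rel, 3))  # -> 10.0
```

On real signals the bias term (the operating point) cancels out of the standard deviation, so only the amplified noise component contributes to *σrel*.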

b. Changing the filter order (*n*)

On the other hand, for the same *KHF* and controller order (*m*), the speed of the closed-loop response can also be altered by changing the filter order. In this regard, we tested the performance of the controllers *PID*<sup>3</sup><sub>*n*</sub>, where *n* varies from 3 to 6.

According to the procedure given in Figure 5, when choosing *KHF* = 10 and repeating steps 3–5 three times, the calculated filter time constants and controller parameters (17) are given in Table 4.

**Table 4.** The calculated filter time constants *T<sup>F</sup>* and controller parameters at different *n*.


The closed-loop responses for different controller filter orders (*n* = 3 to 6) are given in Figure 8.

**Figure 8.** The closed-loop responses for the process *GP*<sup>3</sup>(*s*) when using controllers with different filter orders *PID*<sup>3</sup><sub>*n*</sub>, at *KHF* = 10.

As can be seen, the speed of the closed-loop response is the highest for controller *PID*<sup>3</sup><sub>4</sub>. Controllers with higher-order filters (*n* > 4) are slightly slower. The speed of response for *n* > 4 does not improve, since a higher-order filter also adds complexity to the closed-loop transfer function, which, in turn, may result in lower closed-loop speeds.

The practical question is how to find the optimal controller filter order in advance, before performing the closed-loop experiment on the process. This can be answered by calculating the integral of the control error (*IE*), which can be considered a measure of the closed-loop speed:

$$\text{IE} = \int\_{t=0}^{\infty} (r - y)dt\tag{36}$$

If the closed-loop responses have small overshoots, higher values of IE indicate slower closed-loop responses. For such responses, the IE is a useful measure of the closed-loop speed. The IE value can be calculated relatively easily by transforming Equation (36) into the Laplace domain. It can be shown that:

$$IE = \frac{1}{K\_{PR}K\_{-1}}.\tag{37}$$

Therefore, for processes with the same steady-state gain *KPR* (2), the closed-loop speed is inversely proportional to the integrating gain (*K*−1) of the controller, so the controller with the highest gain *K*−1 will produce the fastest closed-loop response (provided that the closed-loop responses have small or negligible overshoots). Indeed, from Table 4 it is evident that the highest gain *K*−1 is calculated for controller *PID*<sup>3</sup><sub>4</sub>. This corresponds to our previous observations.
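The relation (37) is easy to verify numerically. The sketch below simulates an illustrative loop (a first-order process under a PI-type controller with assumed gains, not one of the paper's designs) and compares the integrated control error with 1/(*KPR·K*−1):

```python
# Numerical check of IE = 1/(K_PR * K_-1), Equation (37), on an illustrative
# loop: a first-order process K_PR/(1 + sT) under a PI controller. The gains
# below are assumed for the example, not taken from the paper's tables.
K_PR, T = 2.0, 1.0   # process gain and time constant
K0, Km1 = 1.0, 0.5   # proportional and integrating controller gains

dt, t_end = 1e-3, 50.0
y = 0.0              # process output
e_int = 0.0          # integral of the control error (I-term state)
IE = 0.0             # integral of (r - y), Equation (36)
r = 1.0              # unit reference step at t = 0

t = 0.0
while t < t_end:
    e = r - y
    e_int += e * dt
    u = K0 * e + Km1 * e_int      # PI control action
    y += dt * (K_PR * u - y) / T  # Euler step of the process
    IE += e * dt
    t += dt

print(round(IE, 2), 1.0 / (K_PR * Km1))  # both close to 1.0
```

Note that for zero steady-state error the integrator state must settle at *u*/*K*−1 with *u* = *r*/*KPR*, which is exactly the value predicted by (37).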

The actual amplifications of the measurement noise signals, according to (35), are given in Table 4. The actual noise gains (*σrel*) are very similar to the desired ones (*KHF*) for filter orders 3 and 4, while for higher-order filters the actual noise gain is lower. This is due to various assumptions (simplifications) made when deriving the filter time constant (32).

c. Changing the controller order (*m*)

As is already known, the speed of the closed-loop response can also be altered by changing the controller order. In this regard, we tested the performance of the controllers *PID*<sup>*m*</sup><sub>*n*</sub>, where *m* varies from 1 to 4. In all cases, the controller filter is chosen to be one order higher than the controller order (*n* = *m* + 1). The desired noise gain remains the same as in the previous experiment (*KHF* = 10).

The calculated filter time constants and the controller parameters (17) are given in Table 5.


**Table 5.** The calculated filter time constants *T<sup>F</sup>* and controller parameters at different controller order *m* (*n* = *m* + 1).

The closed-loop responses for different controller orders (*m* = 1 to 4) are given in Figure 9.

**Figure 9.** The closed-loop responses for the process *GP*<sup>3</sup>(*s*) when using controllers with different controller orders *PID*<sup>*m*</sup><sub>*n*</sub>, at *KHF* = 10 and *n* = *m* + 1.

As can be seen, the closed-loop speed increases with the controller order, similar to the results in Figures 3 and 4. The difference is that now the controller output noise is under control (*KHF* = 10), so the level of control noise is similar for all of the controllers. The fastest responses are obtained with *PID*<sup>4</sup><sub>5</sub>. In a similar manner to the previous case, the speed of the responses can be estimated by comparing the values of the calculated integrating gains (*K*−1) in Table 5. Indeed, *K*−1 is the highest for *PID*<sup>4</sup><sub>5</sub>.

The actual amplifications of the measurement noise signals, according to (35), are given in Table 5. The actual gains of the noise (*σrel*) are very similar to the desired ones (*KHF*) for all controller orders.

#### **4. Robustness**

The proposed design of HO-PID controllers results in relatively fast and non-oscillatory responses. In addition, the controller noise is kept under control by choosing the parameter *KHF*. However, the designed closed-loop system may still not be robust enough to process variations. Namely, due to nonlinearity or time variations of the process, its characteristics (gain, delay, time constants, etc.) can vary with the operating point or over time.

The robustness of a stable closed-loop system is usually measured by maximum sensitivity (*MS*) [1,38]. Maximum sensitivity is related to the distance of the open-loop transfer function *GC*(*jω*)*GP*(*jω*) from the critical point (−1+j0). Namely, *M<sup>S</sup>* is the inverse of the minimum distance between the open-loop transfer function and the critical point. Generally, a smaller value of *M<sup>S</sup>* denotes a more robust closed-loop system to process variations. Usual values of *M<sup>S</sup>* for stable processes are between 1.4 and 2.0 [1,38].
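For a given loop, *M<sup>S</sup>* can be evaluated directly from the open-loop frequency response by sweeping a frequency grid. A minimal sketch, using an illustrative open loop *L*(*s*) = 0.5/(*s*(1 + *s*)) rather than one of the paper's designs:

```python
# Maximum sensitivity M_S = max over w of |1 / (1 + L(jw))|, evaluated on a
# logarithmic frequency grid. The open loop L(s) = 0.5 / (s(1 + s)) is an
# assumed example, not one of the paper's designed loops.
def L(s: complex) -> complex:
    return 0.5 / (s * (1.0 + s))

Ms = 0.0
w = 1e-3
while w < 1e3:
    Ms = max(Ms, abs(1.0 / (1.0 + L(1j * w))))  # distance to (-1+j0), inverted
    w *= 1.02  # 2% frequency steps
print(round(Ms, 2))  # -> 1.27, i.e. a robust loop (M_S well below 1.4)
```

The same sweep, applied to *GC*(*jω*)*GP*(*jω*) of each designed controller, reproduces the kind of *M<sup>S</sup>* values reported in Table 6.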

The robustness of the closed-loop system for the proposed HO-PID controllers has been tested on the following third-order process with delay:

$$G\_{P4}(s) = \frac{K\_{PR}e^{-T\_{del}s}}{\left(1+sT\right)^3},\tag{38}$$

where the nominal values are *KPR* = 1, *Tdel* = 1 and *T* = 1. Three different HO-PID controllers are selected: *PID*<sup>2</sup><sub>3</sub>, *PID*<sup>3</sup><sub>4</sub>, and *PID*<sup>4</sup><sub>5</sub>. The calculated controller parameters, when choosing *KHF* = 10 and *T<sup>S</sup>* = 0.002 s according to the proposed tuning method, are shown in Table 6. The calculated values of the maximum sensitivity *M<sup>S</sup>* for all three controllers are shown in the same table. It can be seen that the *M<sup>S</sup>* values slightly increase with the controller order. However, the differences are not large, and all the values are below 2.0.

**Table 6.** The calculated filter time constants *T<sup>F</sup>* and controller parameters at different controller orders *m* (*n* = *m* + 1) for *GP*<sup>4</sup> (*s*).


Besides calculating the *M<sup>S</sup>* values, we also simulated the closed-loop responses using all three controllers on the nominal process and on the perturbed process (±10% change of the process gain *KPR*, time delay *Tdel* and time constant *T*). The closed-loop responses are shown in Figures 10–12. It is evident that the closed-loop responses under perturbed parameters remain stable, without significant oscillations. When comparing Figures 10 and 12, it can be noticed that the perturbed parameters with controller *PID*<sup>4</sup><sub>5</sub> result in slightly more deviation from the nominal response than with controller *PID*<sup>2</sup><sub>3</sub>. This is all in accordance with the calculated values of *M<sup>S</sup>* in Table 6.

**Figure 10.** The closed-loop responses for the process *GP*<sup>4</sup>(*s*), using controller *PID*<sup>2</sup><sub>3</sub> at *KHF* = 10 for nominal (solid line), 10% increased (dashed line) and 10% decreased (dash-dotted line) process parameters.

**Figure 11.** The closed-loop responses for the process *GP*<sup>4</sup>(*s*), using controller *PID*<sup>3</sup><sub>4</sub> at *KHF* = 10 for nominal (solid line), 10% increased (dashed line) and 10% decreased (dash-dotted line) process parameters.

**Figure 12.** The closed-loop responses for the process *GP*<sup>4</sup>(*s*), using controller *PID*<sup>4</sup><sub>5</sub> at *KHF* = 10 for nominal (solid line), 10% increased (dashed line) and 10% decreased (dash-dotted line) process parameters.

#### **5. Comparison with Other Tuning Methods**

The proposed tuning method was compared with some other tuning methods for PIDA controllers. The chosen methods, which were tested on particular process models, are from Lurang and Puangdownreong [21] (denoted as the Lurang method from here on) and Jung and Dorf [11] (denoted as the Jung method from here on). The Lurang method calculates the PIDA controller parameters by optimizing the tracking and disturbance rejection responses under several constraints on the rise time, overshoot, settling time, steady-state error and similar. The optimization is carried out with a modified bat algorithm proposed by the authors. The Jung method analytically calculates the PIDA controller parameters for a third-order process according to the desired overshoot and settling time. Neither method takes the controller filter into account; therefore, the actual implementation in practice could be questionable if the filter dynamics become slower.

#### Case 1

The following process model has been selected, according to [21]:

$$G\_{P5}(s) = \frac{1}{(1+s)(1+0.5s)\left(1+\frac{s}{5}\right)}\tag{39}$$

The Lurang method suggests the following PIDA controller parameters:

$$\text{K}\_{-1} = 2.20, \text{ K}\_{0} = 3.60, \text{ K}\_{1} = 1.60, \text{ K}\_{2} = 0.06\tag{40}$$

The chosen controller filter time constant was very low (*T<sup>F</sup>* = 0.01), since we did not want to spoil the closed-loop response of the Lurang method. Namely, as already mentioned, the Lurang method does not take into account the controller filter in the design phase.

For comparison, we chose the controller structure with the lowest possible controller filter order *n*: *PID*<sup>2</sup><sub>2</sub>. For illustrative purposes, the one-order-higher controller structure (*PID*<sup>3</sup><sub>3</sub>) was also tested. Note that the closed-loop results of the proposed method, for the same level of controller noise, can be improved by using *n* > *m*.

The calculated controller parameters, for the given process and controller filter were the following:

$$\begin{array}{c} PID\_2^2: \ K\_{-1} = 25.06, \ K\_0 = 45.95, \ K\_1 = 25.07, \ K\_2 = 4.18\\ PID\_3^3: \ K\_{-1} = 37.53, \ K\_0 = 69.42, \ K\_1 = 38.88, \ K\_2 = 6.88, \ K\_3 = 0.104 \end{array} \tag{41}$$

We tested, separately, the tracking response and the disturbance rejection when using all three controllers. For tracking response, the reference (*r*) changed from 0 to 1 at *t* = 1 s and for disturbance response the process input disturbance (*d*) changed from 0 to 1 at *t* = 1 s.

The closed-loop responses are shown in Figure 13.

As can be seen, the responses of the proposed method with the *PID*<sup>2</sup><sub>2</sub> controller are superior to those of the Lurang method. Naturally, the one-order-higher controller (*PID*<sup>3</sup><sub>3</sub>) achieves an even better result.

For a more objective comparison, the integral of the squared error (ISE) was calculated for all three controllers. The results are shown in Table 7. It is obvious that the ISE values for the *PID*<sup>2</sup><sub>2</sub> controller are much lower than those for the Lurang controller.
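The ISE index itself is straightforward to compute from a simulated error signal. A minimal sketch for an illustrative error *e*(*t*) = e<sup>−*t*</sup>, whose exact ISE is 1/2 (this is an assumed test signal, not one of the responses in Figure 13):

```python
import math

# ISE = integral of e(t)^2 dt, the index reported in Table 7, computed here
# for an illustrative error signal e(t) = exp(-t). Exact value: 1/2.
dt, t_end = 1e-4, 20.0
ise, t = 0.0, 0.0
while t < t_end:
    e = math.exp(-t)
    ise += e * e * dt  # rectangle-rule accumulation of e(t)^2
    t += dt
print(round(ise, 3))  # -> 0.5
```

In the comparisons below, the same accumulation is simply applied to the simulated error records of each closed loop.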

**Table 7.** The ISE values for all three controllers in tracking and disturbance rejection.


**Figure 13.** The comparison of the closed-loop responses for the process *GP*<sup>5</sup>(*s*) when using *PID*<sup>2</sup><sub>2</sub>, *PID*<sup>3</sup><sub>3</sub> and the Lurang controller.

#### Case 2

The second process model has been selected according to [11]:

$$G\_{P6}(s) = \frac{0.0556}{(1+s)\left(1+\frac{s}{3}\right)\left(1+\frac{s}{6}\right)}\tag{42}$$

The Jung method suggests the following PIDA controller parameters:

$$\text{K}\_{-1} = 529.8, \text{ K}\_{0} = 516.5, \text{ K}\_{1} = 179.2, \text{ K}\_{2} = 26.3 \tag{43}$$

As before, the controller filter time constant was chosen very low (*T<sup>F</sup>* = 0.005), since we wanted to preserve the closed-loop response of the Jung method, which was obtained without the controller filter.

The calculated *PID*<sup>2</sup><sub>2</sub> and *PID*<sup>3</sup><sub>3</sub> controller parameters, for the given process and controller filter, were the following:

$$\begin{array}{c} PID\_2^2: \ K\_{-1} = 902, \ K\_0 = 1353, \ K\_1 = 501, \ K\_2 = 50.1\\ PID\_3^3: \ K\_{-1} = 1351, \ K\_0 = 2037, \ K\_1 = 767, \ K\_2 = 81.3, \ K\_3 = 0.6 \end{array} \tag{44}$$

As in the previous case, the closed-loop responses were tested on the tracking response and the disturbance rejection. The closed-loop responses are shown in Figure 14.

**Figure 14.** The comparison of the closed-loop responses for the process *GP*<sup>6</sup>(*s*) when using *PID*<sup>2</sup><sub>2</sub>, *PID*<sup>3</sup><sub>3</sub> and the Jung controller.

Again, the responses of the proposed method with the *PID*<sup>2</sup><sub>2</sub> and *PID*<sup>3</sup><sub>3</sub> controllers are superior to those of the Jung method. The comparison of the ISE values in Table 8 shows that the *PID*<sup>2</sup><sub>2</sub> controller has lower values than the Jung controller. However, note that the disturbance rejection settling time is the best with the Jung method.

**Table 8.** The ISE values for all three controllers in tracking and disturbance rejection.


#### Case 3

The fourth-order process model has been selected according to [6]:

$$G\_{P7}(s) = \frac{1}{(1+s)\left(1+\frac{s}{2}\right)\left(1+\frac{s}{4}\right)\left(1+\frac{s}{8}\right)}\tag{45}$$

The Puangdownreong method suggests the following PIDA controller parameters:

$$K\_{-1} = 1.647, \ K\_0 = 2.684, \ K\_1 = 1.105, \ K\_2 = -2.65 \cdot 10^{-3} \tag{46}$$

The method calculated the following controller filter:

$$G\_F(s) = \frac{1}{1 + 0.0132s + 5.26 \cdot 10^{-5}s^2} \tag{47}$$

which was also used in the design of the proposed *PID*<sup>2</sup><sub>2</sub> controller parameters. For the given process and controller filter transfer function, the following controller parameters were calculated:

$$PID\_2^2 \text{: } K\_{-1} = 4.36, \ K\_0 = 7.735, \ K\_1 = 3.97, \ K\_2 = 0.60 \tag{48}$$

As in the previous case, the closed-loop responses were tested on the tracking response and the disturbance rejection. The closed-loop responses are shown in Figure 15.

**Figure 15.** The comparison of the closed-loop responses for the process *GP*<sup>7</sup>(*s*) when using *PID*<sup>2</sup><sub>2</sub> and the Puangdownreong controller.

Again, the responses of the proposed method with the *PID*<sup>2</sup><sub>2</sub> controller are superior to those of the Puangdownreong method. The comparison of the ISE values in Table 9 shows that the *PID*<sup>2</sup><sub>2</sub> controller has lower values than the Puangdownreong controller.

**Table 9.** The ISE values for both controllers in tracking and disturbance rejection.


#### **6. Conclusions**

In this paper, a method for tuning the parameters of the *m*-th order controller with the *n*-th order binomial filter has been presented. The proposed tuning method is based on the MO criterion, which aims to produce non-oscillatory and fast closed-loop reference step responses. The calculation of the controller parameters is analytical and does not require any kind of optimization. An additional advantage of the proposed method is that the process can be described either by a process model or by the process time responses during a steady-state change.

To keep the noise gain of the controller under control, the filter time constant of the controller can also be calculated according to the specified noise gain. The calculation procedure is still analytical, and the results confirm that the level of controller noise is consistent with the given noise gain. The only exception is the use of larger relative degrees between the controller and the filter order, which is the consequence of some simplifications in the calculation of the filter time constant.

The proposed method was tested on six different process models (second- to fourth-order process models, with or without time delay). The results confirmed that the control performance can be improved by increasing the controller order or by selecting the filter order appropriately, without increasing the controller output noise. The study shows that increasing the filter order improves the performance only up to a certain level, after which the performance starts to decrease. The optimal controller and filter orders can easily be determined from the value of the integrating gain of the controller.

The tuning method was compared with three other tuning methods for PIDA controllers (the Lurang [21], Jung [11] and Puangdownreong [6] methods). On the same process models as used in the aforementioned methods, the proposed method resulted in better control performance.

Therefore, the proposed higher-order controller design is efficient, and the controller output noise gain is under control. However, this does not mean that the proposed method cannot be improved. Indeed, the proposed method is based on optimizing the reference tracking performance. In our further research, we plan to improve the disturbance rejection performance as well. Namely, the article shows that the MO controller design leads to a strong asymmetry between the dynamics of the tracking and disturbance rejection behaviour. While the HO-PID controller design leads to an increase in the number of pulses of the control signal after reference step changes, the responses of the control signal after a change of the disturbance remain monotonic. This motivates us to modify the MO controller design with regard to a faster response to disturbances.

Moreover, we also plan to design a method that will find the optimal controller and filter orders for a given process and noise amplification, considering the complexity of the controller and filter. We also plan to calculate the optimal parameters of the reference filter to control the change of the controller output signal when the reference signal is changed.

Another planned modification is adding a user-defined parameter for changing the speed of the closed-loop control. Slowing down the control speed would further increase the robustness of the system.

**Author Contributions:** Writing-original draft preparation, D.V. and M.H. Simulations, D.V. and M.H. Editing, D.V. and M.H. Project administration, D.V. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the grants P2-0001 financed by the Slovenian Research Agency, APVV SK-IL-RD-18-0008 platoon modelling and control for mixed autonomous and conventional vehicles: a laboratory experimental analysis, and VEGA 1/0745/19 control and modelling of mechatronic systems in e-mobility.

**Acknowledgments:** Supported by Slovenská e-akadémia, n. o.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Feedforward of Measurable Disturbances to Improve Multi-Input Feedback Control**

**Javier Rico-Azagra \*,† and Montserrat Gil-Martínez †**

Control Engineering Research Group, Electrical Engineering Department, University of La Rioja, 26004 Logroño, Spain; montse.gil@unirioja.es

**\*** Correspondence: javier.rico@unirioja.es; Tel.: +34-941-299-479

† These authors contributed equally to this work.

**Abstract:** The availability of multiple inputs (plants) can improve output performance by conveniently allocating the control bandwidth among them. Beyond that, the intervention of only the useful plants at each frequency implies the minimum control action at each input. Secondly, in single input control, the addition of feedforward loops from measurable external inputs has been demonstrated to reduce the amount of feedback and, subsequently, palliate its sideband effects of noise amplification. Thus, one part of the action calculated by feedback is now provided by feedforward. This paper takes advantage of both facts for the problem of robust rejection of measurable disturbances by employing a set of control inputs; a previous work did the same for the case of robust reference tracking. Then, a control architecture is provided that includes feedforward elements from the measurable disturbance to each control input and feedback control elements that link the output error to each control input. A methodology is developed for the robust design of the named control elements that distribute the control bandwidth among the cheapest inputs and simultaneously assures the prescribed output performance to correct the disturbed output for a set of possible plant cases (model uncertainty). The minimum necessary feedback gains are used to fight plant uncertainties at the control bandwidth, while feedforward gains achieve the nominal output response. Quantitative feedback theory (QFT) principles are employed. An example illustrates the method and its benefits versus a control architecture with only feedback control elements, which have much more gain beyond the control bandwidth than when feedforward is employed.

**Keywords:** mid-ranging; valve position control; input resetting control; parallel control; MISO; robust control; QFT; frequency domain; feedforward

#### **1. Introduction**

Uncertainties such as nonmeasurable disturbances or unavoidable simplifications in plant modelling justify feedback control loops, which, by permanently supervising the output, can correct its deviation from the reference or track reference changes. Better performance of the output response is linked to larger control bandwidths, which are provided by larger gains of the feedback controllers (magnitude frequency response). Limited actuator ranges usually constrain the bandwidth and performance. Even for unlimited linear ranges or very powerful actuators, sensor noise amplification at the control inputs imposes an important constraint on the bandwidth to avoid fatigue or even saturation of actuators (Horowitz [1] labelled this fact 'the cost of feedback'). With this in mind, Quantitative Feedback Theory (QFT) [1–3] proposes incorporating feedforward controllers when external inputs are available (reference or measurable disturbance inputs [4]), reducing the feedback gain to only that strictly necessary to compensate for the uncertainties. However, reduction of the feedback gain increases the feedforward gain required to maintain a specific performance that is linked with the chosen closed-loop bandwidth. Then, the control action is univocally conditioned by the plant frequency response and the desired performance, but its convenient allocation between feedback and feedforward control actions can prevent the excessive sensor noise amplification linked with feedback gain. These facts, which are common knowledge in single-input control, have not yet been fully exploited in multi-input control.

**Citation:** Rico-Azagra, J.; Gil-Martínez, M. Feedforward of Measurable Disturbances to Improve Multi-Input Feedback Control. *Mathematics* **2021**, *9*, 2114. https://doi.org/10.3390/math9172114

Academic Editor: Dimplekumar N. Chalishajar

Received: 30 June 2021; Accepted: 27 August 2021; Published: 1 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The use of multiple control inputs can undoubtedly improve closed-loop performance. A great variety of control structures and design methods are available in the scientific literature. Some works focused on widening the range of operating points for the output [5,6]. Other works focused on improving the output dynamic performance and simultaneously searched for a profitable combination of control inputs, branding them as input (valve) position control, mid-ranging control [7], or input resetting control [8]; their foundation [9] has inspired a set of works in the robust framework of QFT with the named missions of feedback and feedforward [10–13]. Thus, the robustly designed control elements determine the intervention or inhibition of the plants (one for each input) along the frequency band. Let us consider the following facts: (i) some plants could provide the performance using less control action than others, e.g., those plants of larger magnitude, considering that magnitude dominance can change over the frequency band; (ii) plants that do not significantly contribute to the performance at certain frequencies should be inhibited; (iii) the collaboration of productive plants can reduce the control action that is needed at their inputs, since the virtual total control effort is divided among them. The frequency inhibition of unproductive plants is relevant for several reasons. It prevents high-frequency signals from exciting the actuators of plants that are useless at high frequencies [10]. Similarly, it also avoids inconvenient steady-state displacements of the operating point of plants that are useless at low frequencies [11]; applied works such as [14,15] highlight the relevance of resetting the steady-state points of high-frequency intervention plants. Finally, stability issues become critical when plants are out of phase, even though their magnitude contributions may be large and nearly the same [16,17]; Reference [12] presented an appropriate intervention of the magnitude frequency response of plants that were not minimum phase.

Structures with exclusively feedback controllers at the control inputs are the only possibility for rejecting nonmeasurable disturbances at the output. In [10,11], robust design methods for the feedback controllers distributed the frequency band among the most favourable plants to minimise the control action at each input (any number of inputs was possible) while achieving the desired performance of the output response. The control architecture of [10] allowed the collaboration of plants over the same frequency band, while the architecture of [11] required separated work-bands in favour of an easier design method; unstable, nonminimum phase, or delayed plants were investigated in [12].

The reference tracking problem admits feedforward elements that can reduce the gain of feedback controllers to palliate noise amplification at control inputs [13]. Beyond that, the priority of the method in [13] was achieving correct distribution of the bandwidth among the inputs (plants) to obtain the performance using the minimum possible control action at each input.

The fact that disturbances are sometimes measurable variables opens the possibility of connecting feedforward paths to the control inputs, which this work exploits. A control architecture with feedback and feedforward elements will be presented for robust disturbance rejection. The method will distribute the frequency band among the most favourable inputs (those that demand less control action). Finally, a robust design of the control elements will guarantee the prescribed performance and stability for a set of possible plant models. Feedforward will reduce feedback, reporting important benefits with regard to the excessive sensor noise amplification at the control inputs that could saturate actuators and spoil the expected performance in feedback-only control structures. Whenever external disturbances are measurable, the contribution of this work can be of importance in the great variety of fields where multi-input control has been successfully applied. Remarkable application fields include bioprocesses [14,15], thermal systems [18], medical systems [19,20], scanner imaging [21,22], massive data storage devices [23,24], fuel engines [25] and electrical vehicles [26], robotics [27–29], and unmanned aerial vehicles [30].

#### **2. Control Architecture and Robust Design Method**

Figure 1 depicts the multiple-input single-output (MISO) system and the proposed control architecture. The *y* output deviation is modelled by the influence of the control inputs *u<sub>i</sub>*, *i* = 1, . . . , *n*, and a *d* disturbance input, yielding a vector of (*n* + 1) transfer functions (plants) *P*(*s*) = [*p*<sub>1</sub>(*s*), . . . , *p<sub>n</sub>*(*s*), *p<sub>d</sub>*(*s*)]. Let us consider a total number *z* of uncertain parameters in these dynamical models. By defining *q<sub>l</sub>* as a vector of those uncertain parameters in a set of all possible values Q in R<sup>*z*</sup>, the MISO uncertain system can be formally defined as

$$\mathcal{P} = \{ P(s; q\_l) : q\_l \in \mathcal{Q} \}. \tag{1}$$

Henceforth, labels *p<sup>i</sup>* or *p<sup>d</sup>* denote plant models of delimited uncertainty.

**Figure 1.** Feedback–feedforward control structure for robust rejection of measurable disturbances in multi-input systems.

An appropriate control must compensate for the output deviation from the constant reference, *e* = *r* − *y*, when a *d* disturbance occurs; reference tracking problems were discussed in [13]. Measurable disturbances are considered in the new design method. In such a case, the robust control specification is posed in the frequency domain as

$$\left| \frac{e}{d} \right| = \left| \frac{p\_d + \left(\sum\_{i=1}^n p\_i g\_{d\_i}\right) g\_{d\_m}}{1 + \sum\_{i=1}^n p\_i c\_i} \right| \le W\_d; \quad \forall P \in \mathcal{P},\ \forall \omega, \tag{2}$$

where *W<sup>d</sup>* is an upper tolerance on the set of |*e*/*d*| frequency responses.
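In practice, a specification such as (2) is checked over a frequency grid for every plant case in the uncertainty set. The sketch below does this by brute force for an assumed two-input example; the plants, controllers and tolerance *W<sub>d</sub>* are invented for illustration (not the paper's design), and the feedforward elements *g<sub>di</sub>*, *g<sub>dm</sub>* are set to zero for brevity:

```python
import itertools

# Brute-force check of the robust specification (2) over a frequency grid and
# a gridded uncertainty set Q. All numbers below are assumed for illustration:
# two control inputs with uncertain gains, feedforward switched off.
W_d = 0.5                                        # upper tolerance on |e/d|
freqs = [0.01 * 1.05 ** k for k in range(250)]   # ~0.01 ... ~1900 rad/s

worst = 0.0
for k1, k2 in itertools.product([0.8, 1.2], repeat=2):  # plant cases in Q
    for w in freqs:
        s = 1j * w
        p1 = k1 / (1 + s)        # plant from input u1 to y
        p2 = k2 / (1 + 0.1 * s)  # plant from input u2 to y
        pd = 1 / (1 + s)         # disturbance plant
        c1, c2 = 5 / s, 2.0      # feedback controllers
        # |e/d| from (2) with g_di = g_dm = 0:
        worst = max(worst, abs(pd / (1 + p1 * c1 + p2 * c2)))

print(worst < W_d)  # -> True: the specification holds on the grid
```

A QFT design would instead translate (2) into bounds on the Nichols chart, but the grid check above is the underlying validation step.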

As demonstrated in [10], when *d* is nonmeasurable (i.e., *g<sub>di</sub>* = *g<sub>dm</sub>* = 0), the parallel structure of feedback controllers *c<sub>i</sub>*, *i* = 1, . . . , *n*, allows any distribution of frequencies for the participation of *u<sub>i</sub>* to fulfil |*e*/*d*| ≤ *W<sub>d</sub>*. Several *p<sub>i</sub>* plants can even collaborate over the same frequencies to reduce each *u<sub>i</sub>*. On the other hand, a series structure of feedback controllers [11] imposes a predefined location of the plants inside the structure and requires separated frequency work-bands for the *u<sub>i</sub>* inputs. In spite of this, a method was provided to sort the plants and assign a convenient frequency band to each input so that the least *u<sub>i</sub>* possible is used.

Beyond those solutions, feedforward loops from the external input *d* to the control inputs *u_i* are now added. The individual elements *g_{d_i}* allow the frequency band distribution among the *u_i* inputs with regard to feedforward tasks, while the feedforward master *g_{d_m}* locates the *e* responses, taking advantage of the measurable information *d*; as long as there is a set of possible plants (1), there is a set of responses. The dispersion of frequency responses is constrained by feedback, which can be freely distributed among the *u_i* by the controllers *c_i*. In summary, a total feedforward

$$l_{g_d} = \sum_{i=1}^{n} l_{g_{d_i}} = \left(\sum_{i=1}^{n} p_i g_{d_i}\right) g_{d_m} = p_{d_m} g_{d_m} \tag{3}$$

is contributed by the individual feedforward channels *l_{g_{d_i}}*, which supply *u_{ff_i}*, and a total feedback

$$l\_t = \sum\_{i=1}^{n} l\_i = \sum\_{i=1}^{n} p\_i c\_i \tag{4}$$

is contributed by the individual feedback loops *l_i*, which supply *u_{fb_i}*. Both components *u_{ff_i}* and *u_{fb_i}* build the control action *u_i*, which, for *d* handling, can be written and overbounded as

$$\left|\frac{u_i}{d}\right| = \left|\frac{u_{fb_i}}{d} + \frac{u_{ff_i}}{d}\right| = \left|-\frac{p_d + l_{g_d}}{1 + l_t}c_i + g_{d_i}g_{d_m}\right| \le W_d|c_i| + |g_{d_i}g_{d_m}|; \quad \forall P \in \mathcal{P},\ \forall \omega. \tag{5}$$
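The overbound in (5) follows from the triangle inequality together with the performance specification (2), which caps |(*p_d* + *l_{g_d}*)/(1 + *l_t*)| at *W_d*. As a minimal sanity check, the inequality can be verified with arbitrary complex samples (all numeric values below are hypothetical, not taken from the article):

```python
import cmath
import random

def check_overbound(trials=1000, W_d=0.5, seed=0):
    """Worst ratio |u_i/d| / (W_d|c_i| + |g_di g_dm|) over random samples.

    By the triangle inequality this ratio can never exceed 1 whenever the
    sensitivity term has magnitude <= W_d, as specification (2) enforces.
    """
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        # sensitivity term (p_d + l_gd)/(1 + l_t), magnitude capped at W_d
        s_term = cmath.rect(rng.uniform(0.0, W_d), rng.uniform(0.0, 2 * cmath.pi))
        c_i = cmath.rect(rng.uniform(0.0, 10.0), rng.uniform(0.0, 2 * cmath.pi))
        g = cmath.rect(rng.uniform(0.0, 10.0), rng.uniform(0.0, 2 * cmath.pi))  # g_di*g_dm
        u_over_d = abs(-s_term * c_i + g)   # left-hand side of (5)
        bound = W_d * abs(c_i) + abs(g)     # right-hand side of (5)
        if bound > 0:
            worst = max(worst, u_over_d / bound)
    return worst
```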

In SISO control (*i* = *n* = 1), the desired performance for *e*/*d* univocally fixes the only control action *u_1*/*d*, which can be distributed as desired between feedback and feedforward components. QFT prioritises feedforward to reduce feedback, and hence its aforementioned drawbacks, as much as possible; [4] provided a design solution inside a tracking error structure such as ours, which pursues the smallest *p_1 c_1* gain that guarantees the existence of a *p_1 g_d* meeting (2); in this way, the amplification of sensor noise *v* at the control input *u_1*, which depends on the *c_1* gain, is also reduced as much as possible. However, evaluating (5), a reduction of *u_{fb_1}*/*d* occurs at the expense of an increase in *u_{ff_1}*/*d*, since *u_1* alone must provide the performance, i.e., the gain of *g_d* = *g_{d_1} g_{d_m}* increases.

On the other hand, multi-input availability offers many more possibilities. Note that whatever distribution between *l_t* and *l_{g_d}* is selected to achieve the performance (2), infinitely many combinations of *l_i* (4) and *l_{g_{d_i}}* (3) could build them. The goal is to find the solution that uses the smallest set of control inputs *u_i*/*d*. The authors of [13] provided a method for the problem of robust reference tracking (*r* ≠ constant). The key observation was that the greater the gain of a plant, the less control action it needs to contribute to the performance, which motivated directing the inputs towards the plants with higher gains at each frequency.

In the current case, let us suppose a single input *u_i* (plant *p_i*) participates in the disturbance rejection. If the plant models were perfectly known, the control action *u_i* = −*p_d*/*p_i* would cancel the influence of the *d* disturbance on the *y* output. Here, the whole set of plant uncertainties is considered in the robust design. Then, the frequency response

$$k_i = \frac{p_{d,\max}}{p_{i,\min}}, \quad \forall P \in \mathcal{P},\ \forall \omega \tag{6}$$

is a rough approximation of the least favourable *u_i* if only *p_i* participates in the *d* disturbance rejection; *p_{i,min}* is the plant *p_i* of least magnitude at a particular *ω* inside the uncertain set |*p_i*(*jω*)|, and *p_{d,max}* is the plant *p_d* of largest magnitude at *ω* inside |*p_d*(*jω*)|.

Next, the *k_i* frequency responses of all inputs *i* = 1, ..., *n* are compared at each frequency to decide which inputs are of sufficient interest for participation; those that yield the smallest *k_i* magnitude are considered. At any frequency, the contribution of as many inputs as possible is desired, provided it yields a total plant

$$p_{d_m} = \sum_{i=1}^{n} p_i g_{d_i}, \tag{7}$$

with *significantly* greater magnitude than the individual *p_i* (let us anticipate that the *g_{d_i}* will be designed as filters with unitary gain in the pass-band). Thus, the potential collaboration of plants would reduce the individual feedforward actuations |*u_{ff_i}*/*d*| because the virtual need of total feedforward |∑ *u_{ff_i}*/*d*| ≈ |*p_d*/*p_{d_m}*| would be significantly reduced. A two-by-two comparison of the *k_i* is advised. As a rule of thumb, a difference in *k_i*(*jω*) magnitude greater than 20 log 2 ≈ 6 dB makes the plant associated with the larger *k_i*(*jω*) magnitude useless. When a plant cannot report benefits at a certain frequency, its disconnection is recommended to avoid useless signals reaching the actuators. A second relevant point is to check that the *k_i*(*jω*) phase-shift between plants that are likely to collaborate is less than 90°, since the vector sum of plants in counter-phase would reduce the total magnitude of *p_{d_m}* (7). The disconnection of useless plants in counter-phase is a priority for stability issues [12].
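As an illustration of the comparison (6), the *k_i* responses can be approximated by gridding the uncertain parameters of the example plants (12) and (13) given in Section 3; the grid density and the two sample frequencies below are arbitrary choices for the sketch:

```python
def k_responses(w, n_grid=9):
    """Rough k_i = max|p_d| / min|p_i| at frequency w (rad/s), gridding
    the uncertain parameters of plants (12) and the disturbance plant (13)."""
    s = 1j * w
    grid = lambda lo, hi: [lo + (hi - lo) * j / (n_grid - 1) for j in range(n_grid)]
    p1 = [a1 / ((a2 / a1) * s + 1) ** 2
          for a1 in grid(1.60, 2.40) for a2 in grid(0.17, 18.00)]
    p2 = [b1 / (b2 * s + 1)
          for b1 in grid(0.98, 1.02) for b2 in grid(0.33, 1.00)]
    pd = [c1 / (s + c2)
          for c1 in grid(2.00, 3.00) for c2 in grid(1.00, 2.00)]
    pd_max = max(abs(p) for p in pd)
    return pd_max / min(abs(p) for p in p1), pd_max / min(abs(p) for p in p2)

k1_lo, k2_lo = k_responses(0.01)   # low frequency: k1 < k2, so p1 is preferred
k1_hi, k2_hi = k_responses(10.0)   # high frequency: k2 < k1, so p2 is preferred
```

This coarse gridding already reproduces the band allocation of Table 1: *p_1* is the cheaper input at low frequencies and *p_2* at high frequencies.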

The *k_i* comparisons decide the smallest *u_i* input at each frequency, i.e., the desired frequency band allocation among inputs. Then, the design of *g_{d_i}* and *c_i* must attain the planned distribution and, simultaneously, *g_{d_i}*, *c_i*, and *g_{d_m}* must achieve the specification (2). The design method is as follows. First, the *g_{d_i}* are designed as filters with unitary gain over the pass-band. This yields a convenient plant *p_{d_m}* (7) that selects the most powerful *p_i* plants at each frequency for feedforward tasks. Subsequently, feedback *l_t* must reduce the influence of the *p_{d_m}* uncertainty in the |*e*/*d*| deviations around zero only to the extent that a master feedforward *g_{d_m}* can further position the magnitude frequency responses inside the tolerance ±*W_d*. The required amount of feedback *l_t* could be provided by several combinations of *c_i*, but the one according to the planned distribution saves control action by using the most powerful plants at each frequency for feedback tasks too. The set of controllers *c_i* is designed via loop-shaping of *l_i*(*jω*) to satisfy the bounds *β_{l_i}*(*ω*) at a discrete set of frequencies Ω = {*ω*}. The QFT bounds *β_{l_i}* translate the closed-loop specification into restrictions on the nominal *l_i* = *c_i p_i* at specific frequencies *ω* that are conveniently selected according to the plant and specifications; the bounds are depicted on a mod-arg plot [2,3]. During the shaping of *l_i*(*jω*), when it lies exactly on the bounds, the minimum gain of the *c_i* controller that achieves the specification for the whole set of plant cases is guaranteed. A sequential process over the loops *i* = 1, ..., *n* is arranged. Thus, if at some point the controller *c_i* is to be adjusted and the other controllers *c_{k≠i}* take known values in the sequence, the robust disturbance rejection specification (2) can be rewritten as

$$\left| \frac{e}{d} \right| = \left| \frac{p_d + g_{d_m} p_{d_m}}{1 + \sum_{k \neq i} p_k c_k + p_i c_i} \right| \le W_d; \quad \forall P \in \mathcal{P},\ \forall \omega, \tag{8}$$

and their representative *β_{l_i}* bounds can be computed by choosing *A* = *p_{d_m}*, *B* = *p_d*, *C* = 1 + ∑_{k≠i} *p_k c_k*, *D* = *p_i*, *G* = *c_i*, *G_f* = *g_{d_m}*, and *W* = *W_d* in the solution given to


$$\left| \frac{AG_f + B}{C + DG} \right| \le W \tag{9}$$

in [13]; that work provided the formulation to make the designs of *c_i* and *g_{d_m}* independent. After the bound computation, the essence of loop-shaping is that *c_i* reaches the necessary gain at the frequencies where the *p_i* plant must work and filters out (gain below 0 dB) those frequencies where *p_i* must not work. Special attention must be paid to the frequencies where several inputs must collaborate. A detailed explanation of the global procedure is given in [10].

The full achievement of (2) ends with the design of the master feedforward *g_{d_m}*. The specification format can now be adapted to |(*A* + *BG*)/(*C* + *DG*)| ≤ *W* of the function *genbnds* in the QFT toolbox [31]. Choosing *A* = *p_d*, *B* = *p_{d_m}*, *C* = 1 + *l_t*, *D* = 0, *G* = *g_{d_m}*, and *W* = *W_d* determines the regions permitted for *g_{d_m}* on a mod-arg plot; the loop-shaping of *g_{d_m}*(*jω*) is conducted on these bounds.

Considering the whole set of external inputs, the output error responds to

$$e = -\frac{p_d + l_{g_d}}{1 + l_t}d + \frac{1}{1 + l_t}(r - v) - \sum_{i=1}^{n} \frac{p_i}{1 + l_t} r_{u_i}, \tag{10}$$

and the control inputs are

$$u_i = \left[ -\frac{p_d + l_{g_d}}{1 + l_t} c_i + g_{d_i} g_{d_m} \right] d + \frac{c_i}{1 + l_t} (r - v) + \frac{1 + l_{-i}}{1 + l_t} r_{u_i} - \sum_{k \neq i} \frac{c_i p_k}{1 + l_t} r_{u_k}, \tag{11}$$

where *l_{−i}* = *l_t* − *l_i*. Two benefits are worth noting. The availability of multiple inputs made it possible to select the intervention of the more favourable plants *p_i* at each frequency to achieve |*e*/*d*| < *W_d* using the minimum |*u_i*/*d*|. The individual feedforward *g_{d_i}* and individual feedback *c_i* connect or disconnect the commanded inputs at each frequency; integrators or differentiators are recommended to connect or disconnect plants at low frequency so as to fully eliminate steady-state errors [13]. Further, *c_i* and *g_{d_m}* were in charge of providing |*e*/*d*| < *W_d*; let us recall that the *g_{d_i}* were filters of unitary gain in the pass-band. The use of the feedforward *g_{d_m}* allows reducing the amount of feedback |*c_i*|; in fact, the formal QFT method pursues the minimum set of |*c_i*| for which *g_{d_m}* exists. As |*c_i*| reduces, |*u_i*/*v*| (11) also reduces in comparison with *g_{d_m}* = 0 solutions (feedback-only control structures are the only option when disturbances are nonmeasurable, as in [10,12]).

An additional flexibility of multi-input systems is the possibility of moving the system operating point *u_i*(*t* = ∞) by changing the input resetting point *r_{u_i}*(*t*) of the plants that do not work at low frequencies [9,12,32].

The output reference *r*(*t*) is considered constant in the present work. For tracking control problems, feedforwarding *r*(*t*) can achieve important benefits; a control architecture and design method were provided in [13].

#### **3. Example**

The following theoretical example illustrates the new method of designing feedback and feedforward elements. In addition to analysing how the specification of robust disturbance rejection is achieved with the minimum set of control actions, a comparison with a feedback-only control structure is conducted, demonstrating the superiority of the feedback–feedforward structure. References [10,12] collected other examples with only feedback elements.

A system with two control inputs obeys the following models *y*/*u<sup>i</sup>* , *i* = 1, 2, with parametric uncertainty:

$$\begin{aligned} p\_1(s) &= \frac{a\_1}{\left(\frac{a\_2}{a\_1}s + 1\right)^2}, & a\_1 &\in [1.60, \ 2.40], \quad a\_2 \in [0.17, \ 18.00];\\ p\_2(s) &= \frac{b\_1}{b\_2s + 1}, & b\_1 &\in [0.98, \ 1.02], \quad b\_2 \in [0.33, \ 1.00]. \end{aligned} \tag{12}$$

The *d* disturbance input influence follows the uncertain parametric model *y*/*d*:

$$p\_d(s) = \frac{c\_1}{s + c\_2}, \quad c\_1 \in [2.00, 3.00], \quad c\_2 \in [1.00, 2.00]. \tag{13}$$

Robust stability specifications

$$|T_i(j\omega)| = \left|\frac{l_i(j\omega)}{1 + l_i(j\omega)}\right| \le W_{s_i},\ i = 1, 2; \quad \forall P \in \mathcal{P},\ \forall \omega, \tag{14}$$

seek minimum phase margins of 40° for both feedback loops *i* = 1, 2 by taking

$$W_{s_i} = \left| \frac{0.5}{\cos(\pi (180 - PM) / 360)} \right|, \quad PM = 40^{\circ}. \tag{15}$$

Their representative bounds will delimit forbidden regions around the critical point that cannot be violated by *l_i* = *c_i p_i*/(1 + ∑_{j≠i} *l_j*) at any *ω*-frequency during loop-shaping [11,12,32]. These bounds can be computed with traditional CAD tools in the QFT toolbox [31].

The specification for the robust rejection of measurable *d*-disturbances (2) adopts the tolerance

$$W_d(\omega) = \left| \frac{0.2s}{(0.5s+1)^2} \right|_{s=j\omega}, \tag{16}$$

whose magnitude frequency response is plotted together with the magnitude frequency responses of the plants in Figure 2a. The error tolerance *W_d*(*ω*) will allow reducing the feedback to zero as soon as possible over *ω* > 4. However, as *W_d*(*ω*) < |*p_d*(*jω*)|, a small feedforward action will be needed at those frequencies. The problem arises when only feedback action is used, since feedback can never be neglected at high frequencies. Thus, a new specification

$$W_{dfb}(\omega) = \left| \frac{0.2s(0.1s+1)}{(0.5s+1)^2} \right|_{s=j\omega} \tag{17}$$

is defined for the comparison of the new control architecture with the feedback-only architecture. Let us remark that the time-domain performance will not be appreciably altered between the use of (16) or (17), since their differences occur beyond the control bandwidth (see Figure 2a).
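This closeness can be checked numerically: the two tolerances (16) and (17) essentially coincide inside the control bandwidth and only diverge at high frequency. A small sketch:

```python
def W_d(w):
    """Magnitude of the tolerance (16) at frequency w (rad/s)."""
    s = 1j * w
    return abs(0.2 * s / (0.5 * s + 1) ** 2)

def W_dfb(w):
    """Magnitude of the relaxed tolerance (17) used for the feedback-only design."""
    s = 1j * w
    return abs(0.2 * s * (0.1 * s + 1) / (0.5 * s + 1) ** 2)

# nearly identical at w = 1 (inside the bandwidth), about 10x apart at w = 100
ratio_low = W_dfb(1.0) / W_d(1.0)
ratio_high = W_dfb(100.0) / W_d(100.0)
```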

**Figure 2.** Magnitude frequency responses of plants: (**a**) performance specifications; (**b**) outcome plant after individual feedforward prefiltering.

The set of discrete frequencies

$$
\Omega = \{0.01, \ 0.1, \ 0.2, \ 0.4, \ 0.8, \ 1, \ 4, \ 8, \ 10, \ 20\} \text{[rad/s]} \tag{18}
$$

will be used for the assignment of working frequencies to inputs (plants) for bound calculations and to guide loop-shaping in the QFT framework. These have been selected considering the frequency response of plants and of the open-loop and closed-loop transfer functions.

#### *3.1. Design Methodology*

The frequency band allocation that minimises *u_i* is founded on the *k_i* frequency responses (6), which are depicted in Figure 3. The criteria argued in Section 2 advise that *p_1* works over *ω* < 0.2 and *p_2* works over *ω* > 1.0, since their respective *k_i* magnitudes are the smaller over those frequencies. Additionally, the collaboration of both plants over 0.2 ≤ *ω* ≤ 1.0 is advised, since the difference between *k_i* magnitudes is less than 6 dB and the difference between *k_i* phases is less than 90°. Table 1 summarises all these conclusions.

**Table 1.** Frequency-band allocation that minimises *u<sup>i</sup>* .


**Figure 3.** Frequency responses of *k*1,2 (6).

According to the desired frequency band allocation (Table 1), the design of the individual feedforward elements yields

$$g_{d_1}(s) = \frac{1}{s + 1}, \quad g_{d_2}(s) = \frac{s}{s + 0.2}. \tag{19}$$

The low-pass filter *g_{d_1}* has a cut-off frequency of *ω_c* = 1, and the high-pass filter *g_{d_2}* has a cut-off frequency of *ω_c* = 0.2. The use of the individual filters *g_{d_i}*, *i* = 1, 2, modifies the outcome plant *p_{d_m}* (7) to be handled by the feedback *c_i* and the remaining feedforward *g_{d_m}*. Figure 2b shows how *p_{d_m}* selects the more powerful part of the plants *p_1* and *p_2*, which will minimise *u_{ff_i}* while satisfying the performance specification (2) and (16).
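The stated cut-offs and the unitary pass-band gain of (19) are easy to verify numerically (|*g*| = 1/√2 ≈ 0.707 at the cut-off frequency):

```python
def g_d1(w):
    """Low-pass individual feedforward filter of (19), evaluated at s = jw."""
    s = 1j * w
    return 1 / (s + 1)

def g_d2(w):
    """High-pass individual feedforward filter of (19), evaluated at s = jw."""
    s = 1j * w
    return s / (s + 0.2)

passband_1 = abs(g_d1(0.01))   # close to 1 well below w = 1
passband_2 = abs(g_d2(100.0))  # close to 1 well above w = 0.2
cutoff_1 = abs(g_d1(1.0))      # about 0.707 at the cut-off
cutoff_2 = abs(g_d2(0.2))      # about 0.707 at the cut-off
```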

For the design of the feedback controllers *c_i*, the bounds *β_{l_i}* that represent the performance (2) and (16) and stability (14) and (15) specifications are computed. Then, each nominal *l_i*, *l_{i_o}* = *p_{i_o} c_i*, is shaped to meet the bounds considering the frequency band allocation that minimises *u_{fb_i}* (see Table 1). Figure 4 depicts the bounds and loop-shapings; the nominal plants *p_{i_o}* correspond to parameters *a* = 0.16, *b* = 1.36, *c* = 0.98, *d* = 1.00 in (12). The resulting controllers are

$$c\_1(s) = \frac{1.575(s + 0.08)}{s(s + 0.12)(s + 1.5)}, \quad c\_2(s) = \frac{1.5(s + 0.6)}{(s + 0.3)^2}.\tag{20}$$

Finally, the bounds on the feedforward master are computed, and the loop-shaping (see Figure 4) yields

$$g_{d_m}(s) = \frac{-1.1849(s + 0.1)(s + 0.175)(s + 2)}{(s + 0.052)(s^2 + 0.8563s + 0.2072)}. \tag{21}$$

If no feedforward loops are employed (*g_{d_1}* = *g_{d_2}* = *g_{d_m}* = 0), the feedback controllers *c_1* and *c_2* must do the whole job. In such a case, after computing the bounds that represent the specifications of robust disturbance rejection (2) and (17) and robust stability (14) and (15), the shaping of *l_{i_o}*, *i* = 1, 2, yields

$$c\_1 = \frac{108(s+0.1)}{s(s+10)(s+0.4)}, \quad c\_2 = \frac{964.49(s+2.5)^2(s+0.2)}{(s+74)(s+7.8)(s^2+0.45s+0.0625)}.\tag{22}$$
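A quick numeric comparison of the two *c_2* designs, (20) versus (22), anticipates the noise-amplification argument of Section 3.2: beyond its work-band, the feedback-only controller keeps a much larger gain. A sketch (the sample frequency is an arbitrary point beyond the control bandwidth):

```python
def c2_ff(s):
    """c_2 of the feedback-feedforward design (20)."""
    return 1.5 * (s + 0.6) / (s + 0.3) ** 2

def c2_fb(s):
    """c_2 of the feedback-only design (22)."""
    return (964.49 * (s + 2.5) ** 2 * (s + 0.2)
            / ((s + 74) * (s + 7.8) * (s ** 2 + 0.45 * s + 0.0625)))

w = 10.0  # rad/s, beyond the specified control bandwidth
gain_ratio = abs(c2_fb(1j * w)) / abs(c2_ff(1j * w))  # on the order of 70x
```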

**Figure 4.** Bounds and loop-shaping for (north west) *c*<sup>1</sup> , (north east) *c*2, and (south west) *gd<sup>m</sup>* .

#### *3.2. Analysis and Comparatives*

Figure 5 shows several magnitude frequency responses of interest; the feedback–feedforward solution is depicted in blue and the feedback-only solution in red; where applicable, several plant cases (12) are depicted.

In particular, the closed-loop frequency responses of subplots (a) and (b) in Figure 5 prove the fulfilment of the robust specifications on disturbance rejection and stability, respectively. A tight achievement of the performance tolerance in the control bandwidth (*ω* ≤ 4) can be noticed because some plant cases are close to or on the tolerance *W_d*(*ω*); to achieve this, observe how *l_{1o}* lies on *β_{l_1}* at *ω* = {0.01, 0.1, 0.2} and *l_{2o}* on *β_{l_2}* at *ω* = {0.2, 0.4, 0.8, 1} in Figure 4, which requires a relatively high order of the controllers (20) and (22). In addition, the Table 1 planning has been executed successfully: *l_1*-*l_{g_1}* of the feedback–feedforward solution and *l_1* of the feedback-only solution work along the low-frequency band, and *l_2*-*l_{g_2}* of the feedback–feedforward and *l_2* of the feedback-only solution work along the high-frequency band (see subplots (e) and (f) in Figure 5).

The expected benefit of the above is using the minimum |*u_i*/*d*| over *ω* ≤ 4 (see subplot (g) in Figure 5). Regarding the frequency band distribution among plants, note, e.g., that if *p_2* were forced to work at low frequency instead of *p_1*, |*u_2*/*d*(*j*0)| would be |*p_1*/*p_2*(*j*0)| times larger than the current |*u_1*/*d*(*j*0)|. Regarding the tight achievement of bounds for each input design, it seeks the strictly necessary |*u_i*/*d*| to achieve the specification; observe how |*u_1*/*d*| along *ω* ≤ 1 and |*u_2*/*d*| along 0.2 ≤ *ω* ≤ 4 are very similar for both solutions. The minimum effort in the control bandwidth pursues that |*u_i*/*v*| can be reduced as soon as possible at high frequencies (see subplot (h) in Figure 5). Moreover, the collaboration of feedback *l_i* and feedforward *l_{g_i}* to build *u_i*/*d* yields smaller magnitudes |*l_i*| than when only feedback intervenes (see subplot (e) in Figure 5). Then, the feedback gains |*c_i*| are smaller not only in the control bandwidth but also beyond the work-band of each input (see subplot (c) in Figure 5). The end effect is that |*u_1*/*v*| over *ω* > 1 is smaller for the feedback–feedforward solution than for the feedback-only solution, and the same occurs for |*u_2*/*v*| over *ω* > 4 (see subplot (h) in Figure 5). In fact, a huge noise amplification is expected at the second control input in the feedback-only solution.

**Figure 5.** Magnitude frequency responses: (**a**) Output error, (**b**) stability, (**c**) feedback controllers, (**d**) feedforward elements, (**e**) feedback open-loops, (**f**) feedforward open-loops, (**g**) control inputs for disturbance rejection, (**h**) sensor noise at the control inputs.

Figure 6 shows the time-domain behaviour. External inputs are a unit step change of disturbance *d*(*t*) at *t* = 1 s and a sensor noise *v*(*t*) that is built with a band-limited, white-noise source of Simulink® (power of 0.00005 and sample time of 0.01 s); the reference input *r*(*t*) is constant and equal to zero. Blue and red colours distinguish the responses of feedback–feedforward and feedback-only solutions, respectively. Several plant cases are represented.

As expected, the output response *y*(*t*) is built by the faster response *y_2*(*t*) of the plant *p_2*, which works at high frequencies, and by the slower response *y_1*(*t*) of the plant *p_1*, which progressively takes control of the steady state. Ignoring the noise, the control actions *u_1*(*t*) and *u_2*(*t*), which command the plants *p_1* and *p_2*, corroborate the same. Let us also remark how the input *u_2*, which does not work at steady state, recovers the initial operating point *r_{u_2}* = 0. Further, observe that *y*(*t*) finally recovers the initial steady state of zero. Both steady-state conditions, *y* = *r* and *u_2* = *r_{u_2}*, require an integrator in *c_1* (20) and (22) and a differentiator in *g_{d_2}* (19). Regarding the control actions, *u_i*(*t*) is built from *u_{ff_i}*(*t*) and *u_{fb_i}*(*t*) in the feedback–feedforward solution, while *u_i*(*t*) = *u_{fb_i}*(*t*) in the feedback-only solution. The main difference between both solutions is the *v*(*t*) noise amplification at the output and, mainly, at the control inputs. The huge noise amplification at *u_2*(*t*) of the feedback-only solution would cause fatigue of the actuator or might saturate it and spoil the theoretical performance. In such a case, a more conservative specification for the disturbance rejection *e*/*d* would be indicated in real-life control (higher tolerance *W_d* in the control bandwidth). All of these observations corroborate the superiority of feedback–feedforward schemes.

**Figure 6.** Time-domain responses.

#### **4. Conclusions**

A new control architecture and design methodology have been proposed for the robust rejection of measurable disturbances when multiple control inputs are available to correct the output deviation. The multi-input character allowed selecting the most favourable plants (inputs) at each frequency to provide the performance. Thus, individual feedback and feedforward controllers for each input allowed distributing the control bandwidth as desired among the inputs; the allocation criterion was minimising the control action required to provide the performance. Beyond that, the main benefit of the new structure is the presence of feedforward loops. These allowed reducing the amount of feedback and, consequently, the sensor noise amplification at the output and, mainly, at the control inputs. The advantages would be notable in real-life systems, since excessive noise at the actuators could sacrifice the achievement of an aggressive output performance. Finally, it is important to recall the robust character of the control system, which guarantees the expected performance for the whole set of possible plant models.

**Author Contributions:** Conceptualization, J.R.-A. and M.G.-M.; software, J.R.-A.; data curation, J.R.-A.; writing—original draft preparation, M.G.-M.; writing—review and editing, J.R.-A. and M.G.-M.; funding acquisition, M.G.-M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the University of La Rioja under grant REGI 2020/23.

**Data Availability Statement:** The data used to support the findings of this study are available from the corresponding author upon request.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **References**


## *Article* **Leveraging Elasticity to Uncover the Role of Rabinowitsch Suspension through a Wavelike Conduit: Consolidated Blood Suspension Application**

**Sara I. Abdelsalam 1,2,\* and Abdullah Z. Zaher <sup>3</sup>**


**Abstract:** The present work presents a mathematical investigation of a Rabinowitsch suspension fluid through elastic walls with heat transfer under the effect of electroosmotic forces (EOFs). The governing equations contain the empirical stress–strain equations of the Rabinowitsch fluid model and the equations of fluid motion along with heat transfer. It is of interest in this work to study the effects of the EOFs, of the rigid spherical particles that are suspended in the Rabinowitsch fluid, and of the Grashof parameter, heat source, and elasticity on the shear stress of the Rabinowitsch fluid model and the flow quantities. The solutions are achieved by taking the long-wavelength approximation with the creeping flow regime. A comparison is set between the effects of pseudoplasticity and dilatation on the behaviour of the shear stress, axial velocity, and pressure rise. The physical behaviours have been discussed graphically. It was found that the Rabinowitsch and electroosmotic parameters enhance the shear stress while they reduce the pressure gradient. A biomedical application of the problem is presented. The present analysis is particularly important in biomedicine and physiology.

**Keywords:** elasticity; electroosmotic forces; heat transfer; Rabinowitsch fluid; suspension

#### **1. Introduction**

The movement of blood-like liquids is an important subject for the mathematical simulation of medical applications. The Rabinowitsch fluid is one of the fluids that simulate blood movement, because the Rabinowitsch model effectively captures the effect of lubricant additives over a wide range of shear rates and fits their experimental data. Over the past decades, scientists have made active efforts to strengthen the features of non-Newtonian lubricants by adding very small amounts of long-chain polymer solutions. A very important consequence is that such additives reduce the lubricant's sensitivity; additionally, a non-linear relationship appears between the shear rate and the shear stress. Building on the Rabinowitsch model, Akbar and Butt [1] studied the flow of the Rabinowitsch model driven by cilia located on the wall. Moreover, Singh et al. [2] studied the movement of a Rabinowitsch fluid through peristaltic flow. In addition, Vaidya [3] investigated the movement of a Rabinowitsch fluid through the oblique wall of a channel, while Sadaf and Nadeem [4] studied the Rabinowitsch model through a non-uniform conduit with peristalsis. Choudhari et al. [5] also studied the effect of slip on the oscillating transmission of a Rabinowitsch model in a non-uniform channel.

**Citation:** Abdelsalam, S.I.; Zaher, A.Z. Leveraging Elasticity to Uncover the Role of Rabinowitsch Suspension through a Wavelike Conduit: Consolidated Blood Suspension Application. *Mathematics* **2021**, *9*, 2008. https://doi.org/10.3390/math9162008

Academic Editor: Efstratios Tzirtzilakis

Received: 28 June 2021; Accepted: 18 August 2021; Published: 22 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In recent years, microfluidic systems have been developed through the use of the Electric Double Layer (EDL). This increased interest is reflected in references [6–8]. Electrical osmosis is defined as the movement of a liquid relative to a fixed surface due to the presence of an externally applied electric field. One of the first studies of the application of these external forces is by Ross [9]. The idea is that when the aqueous electrolytic solution comes into contact with a solid, a relative electrical charge is generated on the surface. Ions of the opposite charge are attracted towards this charged surface while like ions are repelled from it, which forms the double layer; thus, the surface becomes electrically charged. As a result of this phenomenon, the migrating ions accelerate the liquid, and the resulting flow is called electroosmotic flow.

The study of the movement of suspended particles inside a fluid is considered a most important medical application. The movement of a fluid that contains particles is similar to the movement of blood plasma, since blood consists of solid materials suspended in a liquid. In that sense, many blood types have been studied, such as sickle cell (Hb SS), plasma cell dyscrasias, normal blood, controlled hypertension, uncontrolled hypertension, and polycythaemia. Each of these types is characterised by a specific haematocrit, i.e., *C* = 0.248, *C* = 0.28, *C* = 0.426, *C* = 0.4325, *C* = 0.4331, and *C* = 0.632, respectively [10]. In addition, the study of the movement of suspended particles inside fluids is very interesting because they resemble the white blood cells, red blood cells, and/or platelets that move inside the blood. Many experimental and analytical studies have focused on suspended particles because of their great importance in improving the understanding of blood flow and the distribution of proteins within it [11–13].

The geometrical shape of the flow domain plays an important role in understanding the various properties of different fluid flows, such as blood flow, among other important applications. Most studies that have discussed fluid movement have relied on rigid ducts and tubes [14–24]. However, biological flows occur through walls of a flexible nature, so the flow of Newtonian and non-Newtonian fluids through elastic walls carries many important medical applications, such as blood flow through the arteries, small blood vessels, heart systems, and others; indeed, some studies have revealed that the velocity of the blood is greatly affected by the elastic displacement of the walls. Some of the works that have discussed flow through walls of an elastic nature can be found in refs. [25–31].

Accordingly, this work attempts to fill the gap concerning the movement of a particulate suspension under the effects of electroosmotic forces using a Rabinowitsch fluid. An analytical solution is used to obtain the physical parameters of the problem subject to appropriate boundary conditions. The impact of the relevant parameters is discussed graphically.

#### **2. The Mathematical Model and the Rabinowitsch Fluid Equation**

Consider a particulate suspension swimming in a Rabinowitsch fluid through the elastic peristaltic walls of a channel with amplitude *a* and half width *b*. In addition, consider that the deformation of the wall is *α*, as shown in Figure 1. Furthermore, the inlet pressure is defined as *p<sub>i</sub>* and the outlet pressure as *p<sub>o</sub>*, as shown in Figure 1. The effect of the electroosmotic forces on the Rabinowitsch fluid through the elastic peristaltic walls is taken into account. The velocity components of the particle and fluid phases are denoted by (*U<sub>P</sub>*, *V<sub>P</sub>*) and (*U<sub>f</sub>*, *V<sub>f</sub>*), respectively. The mathematical geometry of the channel wall is given by

$$H(\overline{X}, t) = \pm \left( d + a \sin \frac{2\pi}{\lambda} (\overline{X} - c \, t) \right), \tag{1}$$

Here, *d* is the radius of the artery channel, *a* is the amplitude of the wall, *λ* is the wavelength of the peristaltic wave, and *c* is the blood velocity.
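As a quick numerical illustration of Equation (1), the sketch below evaluates the upper wall position and checks that the peristaltic wave translates at speed *c* without changing shape. The parameter values are illustrative placeholders, not data from this study.

```python
import numpy as np

# Illustrative sketch of the wall geometry of Equation (1); the values of
# d, a, lam (= lambda) and c below are placeholders, not the paper's data.
def wall_position(X, t, d=1.0, a=0.2, lam=2.0, c=1.0):
    """Upper wall: d + a*sin(2*pi/lambda * (X - c*t))."""
    return d + a * np.sin(2.0 * np.pi / lam * (X - c * t))

X = np.linspace(0.0, 4.0, 401)       # two wavelengths of the channel
H0 = wall_position(X, t=0.0)         # wall shape at t = 0

# After a time dt, the whole wave has simply translated a distance c*dt.
dt = 0.5
assert np.allclose(wall_position(X + 1.0 * dt, dt), H0)
```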

The isotropic rheological equation of a Rabinowitsch fluid takes the following form:

$$
\overline{\tau}\_{\overline{XY}} + \mu\_o \overline{\tau}\_{\overline{XY}}{}^3 = \mu\_S(\mathbb{C}) \frac{\partial \overline{U}}{\partial \overline{Y}} \,\tag{2}
$$

where the coefficient *µ<sub>o</sub>* represents the pseudo-plasticity of the fluid, which plays a fundamental role in determining the nature of the fluid; *µ<sub>S</sub>*(*C*) is the viscosity of the suspension; *τ<sub>XY</sub>* is the stress tensor; *U* is the velocity component; and *C* is the volume fraction. The model represents a pseudoplastic fluid for *µ<sub>o</sub>* > 0, a Newtonian fluid for *µ<sub>o</sub>* = 0, and a dilatant fluid for *µ<sub>o</sub>* < 0.

*Mathematics* **2021**, *9*, 2008
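The three regimes can be read off directly from Equation (2): writing the shear rate as (*τ* + *µ<sub>o</sub>τ*³)/*µ<sub>S</sub>* gives an apparent viscosity *µ<sub>S</sub>*/(1 + *µ<sub>o</sub>τ*²), which falls, stays constant, or rises with stress according to the sign of *µ<sub>o</sub>*. A minimal sketch, with illustrative parameter values:

```python
# Minimal sketch of the Rabinowitsch relation of Equation (2):
# tau + mu_o*tau**3 = mu_S * du/dy.  Parameter values are illustrative.
def shear_rate(tau, mu_o, mu_S=1.0):
    return (tau + mu_o * tau**3) / mu_S

def apparent_viscosity(tau, mu_o, mu_S=1.0):
    return tau / shear_rate(tau, mu_o, mu_S)   # = mu_S / (1 + mu_o*tau**2)

# mu_o > 0: pseudoplastic (shear-thinning), viscosity falls with stress
assert apparent_viscosity(2.0, mu_o=0.1) < apparent_viscosity(1.0, mu_o=0.1)
# mu_o = 0: Newtonian, viscosity independent of stress
assert apparent_viscosity(2.0, mu_o=0.0) == apparent_viscosity(1.0, mu_o=0.0)
# mu_o < 0: dilatant (shear-thickening), viscosity rises with stress
assert apparent_viscosity(2.0, mu_o=-0.1) > apparent_viscosity(1.0, mu_o=-0.1)
```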

**Figure 1.** (**a**) Physical modelling of the problem. (**b**) Example for extracellular fluid that contains plasma.


The momentum and continuity equations for both the particle and fluid phases are given in the following form [32].


Model of Fluid Phase

$$
\frac{\partial \overline{U}\_f}{\partial \overline{X}} + \frac{\partial \overline{V}\_f}{\partial \overline{Y}} = 0,\tag{3}
$$


$$\rho\_f C\_{PH} \left( \frac{\partial \overline{U}\_f}{\partial t} + \overline{U}\_f \frac{\partial \overline{U}\_f}{\partial \overline{X}} + \overline{V}\_f \frac{\partial \overline{U}\_f}{\partial \overline{Y}} \right) = -C\_{PH} \frac{\partial \overline{P}}{\partial \overline{X}} + C\_{PH} \left[ \frac{\partial \overline{\tau}\_{\overline{X}\overline{X}}}{\partial \overline{X}} + \frac{\partial \overline{\tau}\_{\overline{X}\overline{Y}}}{\partial \overline{Y}} \right] + \rho\_e E\_x + \rho\_f \gamma \mathrm{g} \left( T - T\_0 \right) - C\, S \left( \overline{U}\_f - \overline{U}\_P \right), \tag{4}$$

$$\rho\_f C\_{PH} \left( \frac{\partial \overline{V}\_f}{\partial t} + \overline{U}\_f \frac{\partial \overline{V}\_f}{\partial \overline{X}} + \overline{V}\_f \frac{\partial \overline{V}\_f}{\partial \overline{Y}} \right) = -C\_{PH} \frac{\partial \overline{P}}{\partial \overline{Y}} + C\_{PH} \left[ \frac{\partial \overline{\tau}\_{\overline{Y}\overline{X}}}{\partial \overline{X}} + \frac{\partial \overline{\tau}\_{\overline{Y}\overline{Y}}}{\partial \overline{Y}} \right] - C\, S \left( \overline{V}\_f - \overline{V}\_P \right), \tag{5}$$

$$(\rho C)\_f \left( \frac{\partial T\_f}{\partial t} + \overline{U}\_f \frac{\partial T\_f}{\partial \overline{X}} + \overline{V}\_f \frac{\partial T\_f}{\partial \overline{Y}} \right) = k \left( \frac{\partial^2 T}{\partial \overline{X}^2} + \frac{\partial^2 T}{\partial \overline{Y}^2} \right) + H\_S, \tag{6}$$

Model of Particle Phase

$$\frac{\partial \overline{U}\_P}{\partial \overline{X}} + \frac{\partial \overline{V}\_P}{\partial \overline{Y}} = 0,\tag{7}$$


$$\rho\_P C\_{PH} \left( \frac{\partial \overline{U}\_P}{\partial t} + \overline{U}\_P \frac{\partial \overline{U}\_P}{\partial \overline{X}} + \overline{V}\_P \frac{\partial \overline{U}\_P}{\partial \overline{Y}} \right) = -C\_{PH} \frac{\partial \overline{P}}{\partial \overline{X}} + C\, S \left( \overline{U}\_f - \overline{U}\_P \right), \tag{8}$$
 

$$\rho\_P C\_{PH} \left( \frac{\partial \overline{V}\_P}{\partial t} + \overline{U}\_P \frac{\partial \overline{V}\_P}{\partial \overline{X}} + \overline{V}\_P \frac{\partial \overline{V}\_P}{\partial \overline{Y}} \right) = -C\_{PH} \frac{\partial \overline{P}}{\partial \overline{Y}} + C\, S \left( \overline{V}\_f - \overline{V}\_P \right),\tag{9}$$


where *C<sub>PH</sub>* = 1 − *C*; *S* is the drag coefficient; *µ<sub>S</sub>*(*C*) is the viscosity of the suspension; *ρ<sub>f</sub>* and *ρ<sub>P</sub>* are the fluid and particle densities; *ρ<sub>e</sub>* is the electrical charge density; *E<sub>x</sub>* is the axial electric field; *γ* is the thermal expansion coefficient; g is the gravitational acceleration; *k* is the thermal conductivity; and *H<sub>S</sub>* is the constant heat absorption or generation. The empirical relations for *S* and *µ<sub>S</sub>*(*C*) can be described as

$$\begin{split} \mu\_S &= 1/(1 - m\, C), \; m = 0.07\, \mathrm{Exp}\left[ 2.49\, C + \frac{1107}{273}\, \mathrm{Exp}[-1.69\, C] \right], \\ S &= \frac{9\mu\_0}{2\varepsilon^2} \gamma(C), \; \gamma(C) = \frac{4 + 3\sqrt{8C - 3C^2} + 3C}{(2 - 3C)^2}. \end{split} \tag{10}$$

Here, *µ*<sub>0</sub> is the viscosity of the suspending medium, and ∈ is the radius of a particle.
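A short numerical sketch of the empirical relations in Equation (10), assuming the usual plus sign inside the exponent of the fit for *m* and the reference temperature 273 K, shows the relative suspension viscosity climbing with the haematocrit values quoted in the introduction:

```python
import math

# Sketch of the suspension relations of Equation (10); the plus sign in
# the exponent of m follows the usual empirical fit, with T = 273 K.
def m_coeff(C, T=273.0):
    return 0.07 * math.exp(2.49 * C + (1107.0 / T) * math.exp(-1.69 * C))

def suspension_viscosity(C):
    """Relative viscosity mu_S(C) = 1/(1 - m C)."""
    return 1.0 / (1.0 - m_coeff(C) * C)

def drag_factor(C):
    """gamma(C) appearing in the drag coefficient S."""
    return (4.0 + 3.0 * math.sqrt(8.0 * C - 3.0 * C**2) + 3.0 * C) / (2.0 - 3.0 * C) ** 2

# Haematocrits from the introduction: viscosity grows monotonically with C
haematocrits = [0.248, 0.28, 0.426, 0.4325, 0.4331, 0.632]
viscosities = [suspension_viscosity(C) for C in haematocrits]
assert all(v1 < v2 for v1, v2 in zip(viscosities, viscosities[1:]))
```

At *C* = 0.4 this gives a relative viscosity of roughly 2.5, in line with typical whole-blood values at that haematocrit.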

Now, we use the convenient transformation to convert from fixed frame to wave frame as follows:

$$
\overline{\mathbf{x}} = \overline{\mathbf{X}} - \mathbf{c}t, \ \overline{\mathbf{y}} = \overline{\mathbf{Y}}, \ \overline{\mathbf{u}} = \overline{\mathbf{U}} - \mathbf{c}, \ p = P.
\tag{11}
$$

Then, the mathematical formulation and Rabinowitsch fluid Equations (1)–(9) take the following form:

Rabinowitsch fluid equation

$$
\overline{\tau}\_{\overline{\mathfrak{X}}\overline{\mathfrak{Y}}} + \mu\_o \overline{\tau}\_{\overline{\mathfrak{X}}\overline{\mathfrak{Y}}}^3 = \mu\_s(\mathbb{C}) \frac{\partial \overline{u}\_f}{\partial \overline{\mathfrak{Y}}},\tag{12}
$$

Model of fluid phase

$$\rho\_f C\_{PH} \left( \overline{u}\_f \frac{\partial \overline{u}\_f}{\partial \overline{x}} + \overline{v}\_f \frac{\partial \overline{u}\_f}{\partial \overline{y}} \right) = -C\_{PH} \frac{\partial \overline{p}}{\partial \overline{x}} + C\_{PH} \left[ \frac{\partial \overline{\tau}\_{\overline{x}\overline{x}}}{\partial \overline{x}} + \frac{\partial \overline{\tau}\_{\overline{x}\overline{y}}}{\partial \overline{y}} \right] + \rho\_e E\_x + \rho\_f \gamma \mathrm{g} \left( T - T\_0 \right) - C\, S \left( \overline{u}\_f - \overline{u}\_P \right), \tag{13}$$

$$\rho\_f C\_{PH} \left( \overline{u}\_f \frac{\partial \overline{v}\_f}{\partial \overline{x}} + \overline{v}\_f \frac{\partial \overline{v}\_f}{\partial \overline{y}} \right) = -C\_{PH} \frac{\partial \overline{p}}{\partial \overline{y}} + C\_{PH} \left[ \frac{\partial \overline{\tau}\_{\overline{y}\overline{x}}}{\partial \overline{x}} + \frac{\partial \overline{\tau}\_{\overline{y}\overline{y}}}{\partial \overline{y}} \right] - C\, S \left( \overline{v}\_f - \overline{v}\_P \right), \tag{14}$$

$$(\rho C)\_f \left( \overline{u}\_f \frac{\partial T\_f}{\partial \overline{x}} + \overline{v}\_f \frac{\partial T\_f}{\partial \overline{y}} \right) = k \left( \frac{\partial^2 T}{\partial \overline{x}^2} + \frac{\partial^2 T}{\partial \overline{y}^2} \right) + H\_S, \tag{15}$$

Model of particle phase

$$\rho\_P C\_{PH} \left( \overline{u}\_P \frac{\partial \overline{u}\_P}{\partial \overline{x}} + \overline{v}\_P \frac{\partial \overline{u}\_P}{\partial \overline{y}} \right) = -C\_{PH} \frac{\partial \overline{p}}{\partial \overline{x}} + C\, S \left( \overline{u}\_f - \overline{u}\_P \right), \tag{16}$$

$$\rho\_P C\_{PH} \left( \overline{u}\_P \frac{\partial \overline{v}\_P}{\partial \overline{x}} + \overline{v}\_P \frac{\partial \overline{v}\_P}{\partial \overline{y}} \right) = -C\_{PH} \frac{\partial \overline{p}}{\partial \overline{y}} + C\, S \left( \overline{v}\_f - \overline{v}\_P \right), \tag{17}$$

#### **3. Electroosmotic Flow**

The Poisson–Boltzmann equation:

$$
\nabla^2 \overline{\varphi} = \frac{-\rho\_e}{\varepsilon}, \tag{18}
$$

where *ρ<sub>e</sub>* is the charge density, *ε* is the electric permittivity, and *ϕ* is the electroosmotic potential function.

The charge density *ρ<sup>e</sup>* of the fluid in a unit volume is given by:

$$\rho\_{\varepsilon} = \varepsilon \, e \left( n^{+} - n^{-} \right) = -2\varepsilon \, e \, n\_{0} \, \sinh \left\{ \frac{\varepsilon \, e \overline{\varphi}}{k\_{B} T\_{av}} \right\} \, \tag{19}$$

$$n^{-} = n\_0 \, e^{\frac{\varepsilon \cdot c \overline{\Psi}}{k\_B T a \nu}}, \; n^{+} = n\_0 \, e^{\frac{-\varepsilon \cdot c \overline{\Psi}}{k\_B T a \nu}},\tag{20}$$

where *ε*, *n*<sup>+</sup>, *n*<sup>−</sup>, *e*, *k<sub>B</sub>*, and *T<sub>av</sub>* are the valence of the ions, the number densities of positive and negative ions, the electric charge, Boltzmann's constant, and the local absolute temperature of the electrolytic solution, respectively, and *n*<sub>0</sub> is the bulk volume concentration of positive or negative ions. In addition, using the Debye–Hückel linearisation principle, i.e., *ε e ϕ*/(*k<sub>B</sub>T<sub>av</sub>*) ≪ 1, Equation (19) reduces to

$$
\rho\_e = \frac{-\varepsilon}{\Gamma^2} \overline{\varphi}, \tag{21}
$$

where *Γ* = (*ε e*)<sup>−1</sup>(*e k<sub>B</sub>T<sub>av</sub>*/2*n*<sub>0</sub>)<sup>1/2</sup> is the Debye–Hückel parameter, which describes the thickness of the EDL. The solution for the distribution of the electroosmotic potential can easily be obtained from the Poisson–Boltzmann equation:

$$\frac{\partial^2 \overline{\varphi}}{\partial \overline{x}^2} + \frac{\partial^2 \overline{\varphi}}{\partial \overline{y}^2} = \frac{1}{\Gamma^2} \ \overline{\varphi} \, \tag{22}$$
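In the long-wavelength limit the axial variation drops out and Equation (22) reduces to an ordinary differential equation in *y* with the potential equal to the wall (zeta) value on both walls; its standard solution is a ratio of hyperbolic cosines. A quick finite-difference check, with *m* standing for the inverse Debye length 1/*Γ* and illustrative values:

```python
import numpy as np

# Sketch check that phi(y) = cosh(m*y)/cosh(m*h) solves the linearised
# Poisson-Boltzmann equation phi'' = m**2 * phi with phi(+-h) = 1.
# Here m stands for the inverse Debye length 1/Gamma; values illustrative.
m, h = 2.0, 1.0
y = np.linspace(-h, h, 2001)
phi = np.cosh(m * y) / np.cosh(m * h)

d2phi = np.gradient(np.gradient(phi, y), y)   # central-difference phi''
inner = slice(5, -5)                          # skip one-sided end stencils
assert np.allclose(d2phi[inner], m**2 * phi[inner], rtol=1e-3)
assert np.isclose(phi[0], 1.0) and np.isclose(phi[-1], 1.0)
```

The potential decays towards the channel centre, which is why the electroosmotic body force in the momentum balance peaks near the walls.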

#### **4. Non-Dimensional Physical Parameters**

The dimensionless quantities are introduced in the following expressions:

$$\begin{array}{c} u\_{P,f} = \frac{\overline{u}\_{P,f}}{c}, \; y = \frac{\overline{y}}{a}, \; v\_{P,f} = \frac{\overline{v}\_{P,f}}{\delta c}, \; p = \frac{a^2}{\lambda c \mu\_0} \overline{p}, \; \delta = \frac{a}{\lambda}, \; \vartheta = \frac{\overline{\varphi}}{\zeta}, \; R\_e = \frac{\rho\_f c\, a}{\mu\_0}, \; G\_r = \frac{\rho\_f \gamma \mathrm{g}\, a^2 T\_0}{\mu\_0 c}, \\ \theta = \frac{T - T\_0}{T\_0}, \; \overline{\mu} = \frac{\mu\_s(C)}{\mu\_0}, \; \tau = \frac{a}{c \mu\_0} \overline{\tau}, \; U\_{HS} = -\frac{E\_x \varepsilon \zeta}{\mu\_0 c}, \; K = \frac{\mu\_o c^2 \mu\_0^2}{a^2}, \; m = \frac{a}{\Gamma}, \; M = \frac{S a^2}{\mu\_S(C)(1 - C)}, \\ Q = \frac{H\_S a^2}{k\, T\_o}, \; p\_r = \frac{\mu\_0 C\_f}{k}, \; h = \frac{H}{d}, \; \phi = \frac{a}{d} \end{array} \tag{23}$$

where *K*, *m*, *Q*, *p<sup>r</sup>* , *UHS*, *Gr*, and *Re* are, respectively, the Rabinowitsch fluid parameter, electroosmotic parameter, heat source, Prandtl number, electroosmotic velocity, Grashof number, and Reynolds number. The non-dimensional formulation of the mathematical geometry for the channel wall is given by

$$h(\mathbf{x}) = \pm (1 + \phi \sin 2\pi \,\mathbf{x}),$$

where *φ* is the amplitude ratio.

After using the non-dimensional physical parameters given by Equation (23) in the governing Equations (12)–(17) and in Equation (22), we find:

Non-dimensional Rabinowitsch fluid equations

$$
\tau\_{xy} + K \,\tau\_{xy}{}^3 = \overline{\mu} \frac{\partial u\_f}{\partial y}, \tag{24}
$$

Non-dimensional model of fluid phase

$$R\_e\, \delta\, C\_{PH} \left( u\_f \frac{\partial u\_f}{\partial x} + v\_f \frac{\partial u\_f}{\partial y} \right) = -C\_{PH} \frac{\partial p}{\partial x} + C\_{PH} \left[ \delta \frac{\partial \tau\_{xx}}{\partial x} + \frac{\partial \tau\_{xy}}{\partial y} \right] + m^2 U\_{HS} \frac{\cosh(m y)}{\cosh(m h)} + G\_r\, \theta - C\, C\_{PH}\, \overline{\mu}\, M \left( u\_f - u\_P \right), \tag{25}$$

$$\mathcal{R}\_{\mathbf{c}} \, \delta \, \mathcal{C}\_{PH} \left( u\_f \frac{\partial v\_f}{\partial \mathbf{x}} + v\_f \frac{\partial v\_f}{\partial y} \right) = -\mathcal{C}\_{PH} \frac{\partial p}{\partial y} + \mathcal{C}\_{PH} \left[ \frac{\partial \tau\_{yx}}{\partial \mathbf{x}} + \frac{\partial \tau\_{yy}}{\partial y} \right] \\ - \mathcal{C}\_{PH} \, \overline{\mu} \, M \left( v\_f - v\_P \right), \tag{26}$$

$$R\_e\, p\_r\, \delta \left( u\_f \frac{\partial \theta}{\partial x} + v\_f \frac{\partial \theta}{\partial y} \right) = \left( \frac{\partial^2 \theta}{\partial x^2} + \frac{\partial^2 \theta}{\partial y^2} \right) + Q, \tag{27}$$

Non-dimensional model of particle phase

$$\frac{\rho\_P}{\rho\_f}\, C R\_e \delta \left( u\_P \frac{\partial u\_P}{\partial x} + v\_P \frac{\partial u\_P}{\partial y} \right) = -C \frac{\partial p}{\partial x} + C\, C\_{PH}\, \overline{\mu}\, M \left( u\_f - u\_P \right), \tag{28}$$

$$\frac{\rho\_P}{\rho\_f} \, \text{CR}\_\epsilon \delta \left( u\_P \frac{\partial v\_P}{\partial \mathbf{x}} + v\_P \frac{\partial v\_P}{\partial y} \right) = -\mathbb{C} \frac{\partial p}{\partial y} + \mathbb{C} \, \text{C}\_{PH} \, \overline{\mu} \, M \, \delta \left( v\_f - v\_P \right), \tag{29}$$

with dimensionless boundary conditions

$$u = -1, \quad \theta = 0, \quad \vartheta = 1 \quad \text{at} \quad y = \pm h(x), \qquad \tau\_{xy} = 0 \;\; \text{at} \;\; y = 0. \tag{30}$$

#### **5. Methodology**

Taking the long-wavelength approximation and a creeping-flow regime, i.e., *δ* ≪ 1, the solution of Equations (24)–(29) takes the following form

$$\theta(y) = \frac{1}{2}Q(h-y)(h+y),\tag{31}$$

$$\tau\_{xy} = \frac{6\, P\, y + G\_r\, Q\, y \left( -3h^2 + y^2 \right) - 6\, m\, U\_{HS}\, \frac{\sinh(m y)}{\cosh(m h)}}{6\, C\_{PH}}, \tag{32}$$

$$\begin{array}{ll} u(y) = c\_0 \{ c\_1 + c\_2\, y^{10} + c\_3\, y^2 + c\_4\, y^8 + c\_5\, y^6 + c\_6 \cosh(3my) + c\_7\, y \sinh(2my) + c\_8\, P\_1(y) \\ \quad + \cosh(2my)\left( c\_9 + c\_{10}\, P\_2(y) \right) + y \sinh(my)\left( c\_{16} + c\_{12}\, P\_1(y)P\_2(y) + c\_{13}\, P\_3(y) \right) + c\_{17} \\ \quad + \cosh(my)\left( c\_{18} + c\_{19}\, P\_4(y) + c\_{20}\, y^4 + c\_{21}\, y^2 + \left( c\_{23} - c\_{14}\, y^2 (P\_1(y))^2 \right) \right) \}. \end{array} \tag{33}$$

where *P*, *P<sup>j</sup>* (*j* = 1 → 4) and the constants *c<sup>i</sup>* (*i* = 1 → 23) are given in the Appendix A.
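The temperature solution (31) is easy to verify: *θ* = *Q*(*h*² − *y*²)/2 gives *θ*″ = −*Q*, which is exactly the *δ* ≪ 1 limit of Equation (27), and it vanishes on both walls. A small check with illustrative *Q* and *h*:

```python
# Check (illustrative Q and h) that Equation (31),
# theta(y) = Q*(h - y)*(h + y)/2, satisfies theta'' + Q = 0 and
# theta(+-h) = 0, the delta << 1 limit of the heat equation (27).
Q, h = 1.5, 1.0

def theta(y):
    return 0.5 * Q * (h - y) * (h + y)

dy = 1e-4
y0 = 0.3
d2 = (theta(y0 + dy) - 2.0 * theta(y0) + theta(y0 - dy)) / dy**2
assert abs(d2 + Q) < 1e-6        # second derivative is -Q everywhere
assert theta(h) == 0.0 and theta(-h) == 0.0
```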

#### **6. Theoretical Determination of Pressure Gradient and Pressure Rise Application in Blood Flows**

In this section, the deformation in the walls that is defined by elasticity in the channel walls is taken into account, which appears from the pressure shown in Figure 1. According to Rubinow and Keller [28], the flow rate and pressure gradient are related by the following expression:

$$Q = -\sigma (p\_i - p\_o) \frac{\partial p}{\partial x}, \tag{34}$$

The flow rate is defined as

$$Q = \int\_0^h u(y) dy \,\tag{35}$$

Following the hypothesis of elastic walls according to Rubinow and Keller [28], and using Equations (33)–(35), the flow rate is found to take the following form:

$$Q = \sigma\_1 (p\_i - p\_o) \left( -\frac{\partial p}{\partial \mathbf{x}} \right)^3 + \sigma\_2 (p\_i - p\_o) \left( -\frac{\partial p}{\partial \mathbf{x}} \right)^2 + \sigma\_3 (p\_i - p\_o) \left( -\frac{\partial p}{\partial \mathbf{x}} \right) + c\_{24} \tag{36}$$

such that *c*<sup>24</sup> is given in the Appendix A.

where

$$
\sigma\_1(p\_i - p\_o) = \frac{\alpha(x)^5 K}{5 A M^3}, \tag{37}
$$

$$\sigma\_2(p\_i - p\_o) = \frac{1}{86407\, m^6 M^3} \left( \begin{array}{l} \frac{13824}{7} G\_r\, \alpha(x)^7 K m^6 Q + 103680\, \alpha(x) K m^4 U\_{HS} \\ + 25920\, \alpha(x) K m^4 \left( 2 + \alpha(x)^2 m^2 \right) U\_{HS} - 155520\, K m^3 U\_{HS} \tanh(\alpha(x) m) \\ - 77760\, \alpha(x)^2 K m^5 U\_{HS} \tanh(\alpha(x) m) \end{array} \right), \tag{38}$$

$$\sigma\_3(p\_i - p\_o) = \frac{1}{86407\, m^4 M^3} \left( \begin{array}{l} -2880\, \alpha(x)^2 m^6 M^2 - \frac{5721}{72} G\_r\, \alpha(x)^7 K m^6 Q^2 \\ + 4320\, \alpha(x)^3 K m^6 U\_{HS}\, \mathrm{sech}^2(\alpha(x) m) + 414720\, G\_r\, \alpha(x) K\, Q\, U\_{HS} \\ - 17280\, G\_r\, \alpha(x)^2 K m^6 Q\, U\_{HS} + 17280\, G\_r\, \alpha(x) K m^2 \left( 24 - \alpha(x) \right) \cdots \end{array} \right), \tag{39}$$

Here, *α*(*x*) = *h*(*x*) + *α*′, where *h*(*x*) and *α*′ are the radii of the channel for peristalsis and elasticity, respectively. Additionally, the pressure rise is defined as

$$
\Delta p = \int\_0^1 \left(\frac{dp}{d\mathbf{x}}\right) d\mathbf{x}.\tag{40}
$$
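Numerically, Equations (36) and (40) amount to solving a cubic for *P* = −*dp*/*dx* at each station *x* and then integrating over one wavelength. A sketch of that procedure; the *σ<sub>i</sub>*, *c*<sub>24</sub>, and flux profile below are illustrative placeholders, not the expressions of Equations (37)–(39):

```python
import numpy as np

# Sketch of Equations (36) and (40): invert the cubic
#   sigma1*P**3 + sigma2*P**2 + sigma3*P + c24 = Q_flow,  P = -dp/dx,
# then integrate dp/dx over x in [0, 1] for the pressure rise.
# The sigma_i and c24 here are illustrative constants only.
def pressure_gradient(Q_flow, s1, s2, s3, c24):
    roots = np.roots([s1, s2, s3, c24 - Q_flow])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return -real.max()               # dp/dx = -P for the largest real root

x = np.linspace(0.0, 1.0, 201)
Q_flow = 2.0 + 0.5 * np.sin(2.0 * np.pi * x)     # placeholder flux profile
dpdx = np.array([pressure_gradient(q, 0.1, 0.0, 1.0, 0.0) for q in Q_flow])

# Equation (40): pressure rise as the trapezoidal integral of dp/dx
delta_p = np.sum(0.5 * (dpdx[1:] + dpdx[:-1]) * np.diff(x))
assert delta_p < 0.0     # a favourable gradient drives the positive flux
```

Picking which real root of the cubic is physical is part of the modelling choice; the sketch simply takes the largest one.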

#### **7. Graphical Results and Discussion**

The goal of this section is to study the effect of the pertinent parameters on the resulting physical expressions. To this end, the Mathematica program is used to investigate the impact of the Rabinowitsch parameter *K*, Prandtl number *P<sub>r</sub>*, heat source *Q*, electroosmotic parameter *m*, volume fraction *C*, Grashof number *G<sub>r</sub>*, maximum electroosmotic velocity *U<sub>HS</sub>*, and elastic channel radius *α*′ on the shear stress *τ<sub>xy</sub>*, axial velocity *U*(*y*), pressure gradient *dp*/*dx*, and pressure rise ∆*p*. A graphical comparison between pseudoplastic and dilatant fluids is also provided.

Figures 2–9 are plotted to investigate the impact of *U<sub>HS</sub>*, *C*, *G<sub>r</sub>*, *m*, *K,* and *α*′ on *τ<sub>xy</sub>* for sundry values of the parameters of interest. It is observed from Figures 2–7 that the Rabinowitsch shear stress increases prominently with all of these parameters, and with increasing curviness of the conduit, in both the lower and upper halves of the channel. Figures 8 and 9 compare the impact of pseudoplasticity and dilatancy on the shear stress profile along the *x* and *y* axes, respectively. It is notable from the latter figures that for a pseudoplastic fluid, *τ<sub>xy</sub>* is enhanced along the conduit in the *x* direction, whereas for dilatant fluids a reverse effect is observed. *τ<sub>xy</sub>* also behaves differently along the *y*-axis: for pseudoplastic fluids, it decays near the lower wall of the channel and improves with an increase in the curviness of the channel, while the exact opposite behaviour is seen for dilatant fluids, as shown in Figure 9.

Figures 10–16 illustrate the impact of *K*, *U<sub>HS</sub>*, *G<sub>r</sub>*, *C*, *m,* and *α*′ on *U*(*y*) for various values of the pertinent parameters. It is noticed that *K*, *U<sub>HS</sub>*, and *α*′ play a distinguished role in lessening the fluid velocity, as seen in Figures 10, 11 and 15. It is also depicted that *G<sub>r</sub>*, *C,* and *m* disturb the velocity profile significantly, as observed in Figures 12–14. The latter parameters barely affect *U*(*y*) near the walls of the channel, whereas they enhance the flow in the central part of the channel. In general, *U*(*y*) has a parabolic shape along the conduit for all the parameters under consideration. Figure 16 is plotted to spot the difference in the behaviour of *U*(*y*) for pseudoplastic and dilatant fluids. For pseudoplastic fluids, *U*(*y*) is not disturbed at all near the walls of the conduit, whereas for dilatant fluids the flow is decelerated at the centre of the channel.

**Figure 2.** Display of shear stress profile for different values of *U<sub>HS</sub>*.


**Figure 3.** Display of shear stress profile for different values of *C*.

**Figure 4.** Display of shear stress profile for different values of *Gr*.

**Figure 5.** Display of shear stress profile for different values of *m*.

**Figure 6.** Display of shear stress profile for different values of *K*.


**Figure 7.** Display of shear stress profile for different values of *α*′.

**Figure 8.** Display of shear stress profile via *x* for pseudoplastic and dilatant fluids.


**Figure 9.** Display of shear stress profile via *y* for pseudoplastic and dilatant fluids.


**Figure 10.** Display of axial velocity for different values of *K*.


**Figure 11.** Display of axial velocity for different values of *U<sub>HS</sub>*.


**Figure 12.** Display of axial velocity for different values of *Gr*.

**Figure 13.** Display of axial velocity for different values of *C*.

**Figure 14.** Display of axial velocity for different values of *m*.


**Figure 15.** Display of axial velocity for different values of *α*′.

**Figure 16.** Display of axial velocity for pseudoplastic and dilatant fluids.

Figures 17–22 are prepared in order to see the behaviour of *dp*/*dx* along the axis of the conduit under the effect of *K*, *C*, *U<sub>HS</sub>*, *G<sub>r</sub>*, *m,* and *α*′. It is seen that *K*, *C*, *m,* and *α*′ serve to reduce *dp*/*dx* for all values of the pertinent parameters, as noticed in Figures 17, 18, 21 and 22. It is also noticed from Figures 19 and 20 that *dp*/*dx* grows for greater values of *U<sub>HS</sub>* and *G<sub>r</sub>*. It is also observed that for *x* ∈ [0, 2] and [3.9, 6], the pressure gradient is small, whereas the large pressure gradient occurs for *x* ∈ [2.1, 4].

Figures 23–28 are prepared in order to spot the variation of ∆*p*, which is portrayed against the dimensionless time-averaged flux across one wavelength, *Q*, for several values of the parameters under consideration. The contributions of *K*, *G<sub>r</sub>,* and *m* to ∆*p* are displayed in Figures 23, 25 and 26, where it is noticed that ∆*p* decays near the lower wall of the channel and grows afterwards with an increase in the channel curviness. It is also shown in Figures 24 and 27 that ∆*p* attains smaller values as the channel curviness increases away from the wall of the conduit. Finally, Figure 28 displays the behaviour of ∆*p* for dilatant and pseudoplastic fluids. It is seen that ∆*p* is generally higher for dilatant fluids than for pseudoplastic ones. It is also observed that ∆*p* decreases for dilatant fluids all the way along, whereas it decreases for pseudoplastic fluids only until a specific value (*Q* = 1) away from the wall, from which point the behaviour is reversed.

**Figure 17.** Display of pressure gradient for different values of *K*.


**Figure 18.** Display of pressure gradient for different values of *C*.


**Figure 19.** Display of pressure gradient for different values of *U<sub>HS</sub>*.


**Figure 20.** Display of pressure gradient for different values of *Gr*.

**Figure 21.** Display of pressure gradient for different values of *m*.


.

Figures 23–28 are prepared in order to spot the variation of ∆ that is portrayed against the dimensionless time-averaged flux across one wavelength, *Q*, for several values of the parameters under consideration. The contributions of *K*, *Gr,* and *m* for ∆ are displayed in Figures 23, 25 and 26, where it is noticed that ∆ decays near the lower wall of the channel and grows afterwards with an increase in the channel curviness. It is also shown from Figures 24 and 27 that ∆ attains smaller values as the channel curviness increases away from the wall of the conduit. Finally, Figure 28 displays the behaviour of ∆ in case of dilatation and pseudoplasticity of fluids. It is seen that ∆ is generally

only until a specific value (*Q* = 1) away from the wall from which the behaviour is re-

**Figure 22.** Display of pressure gradient for different values of <sup>ᇱ</sup> **Figure 22.** Display of pressure gradient for different values of *α* 0 .

versed.

**Figure 23.** Display of pressure rise vs. volume flow rate for different values of *K*.

**Figure 24.** Display of pressure rise vs. volume flow rate for different values of *C*.

**Figure 25.** Display of pressure rise vs. volume flow rate for different values of *Gr*.

**Figure 26.** Display of pressure rise vs. volume flow rate for different values of *m*.

**Figure 27.** Display of pressure rise vs. volume flow rate for different values of *α*′.

**Figure 28.** Display of pressure rise vs. volume flow rate for pseudoplastic and dilatant fluids.

#### **8. Biomedical Application of the Problem**

Shear stress of fluid circulation is an important diagnostic aspect for evaluating the properties of blood supply through the arteries. The evolution of shear stress in the consolidated system, combined with the dynamic rheology of the blood, describes the reduction of the circular region of the system over time. Wall shear stress plays a significant part in reshaping the arterial wall, which can contribute to arterial thickening. Table 1 illustrates the non-dimensional shear stresses of the Rabinowitsch fluid, *τ*, through an artery for various values of the haematocrit, *C*, for diseased blood. It is noticed that as *C* increases, *τ* increases.


**Table 1.** Rabinowitsch shear stress through an artery for various values of *C*.

#### **9. Deductions**

In this article, the impact of a Rabinowitsch suspension fluid through elastic walls with heat transfer under the effect of electroosmotic forces is investigated. The solutions of the fluid model are obtained by taking a long wavelength approximation. A comparison is made between the effects of pseudoplasticity and dilatation on the behaviour of the shear stress, axial velocity, and pressure rise. The impacts of all the pertinent parameters are discussed graphically. The main observations are as follows:


**Author Contributions:** Conceptualization, S.I.A.; methodology, A.Z.Z.; software, A.Z.Z.; validation, S.I.A.; formal analysis, A.Z.Z.; investigation, S.I.A.; resources, A.Z.Z.; data curation, S.I.A. and A.Z.Z.; writing—original draft preparation, S.I.A. and A.Z.Z.; writing—review and editing, S.I.A.; visualization, A.Z.Z.; supervision, S.I.A.; project administration, S.I.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** Not applicable.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** Figure 1b is used by courtesy of Encyclopedia Britannica.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

The constants given in Equations (32), (33) and (36) are defined as:

$$P = \frac{dp}{dx}, \qquad P_1(y) = 6P + QG_r\left(-3h^2 + y^2\right), \qquad P_2(y) = 2P + QG_r\left(-h^2 + y^2\right),$$

$$P_3(y) = 12P + QG_r\left(-6h^2 + 5y^2\right), \qquad P_4(y) = -4P + QG_r\left(2h^2 - 5y^2\right),$$

and, writing $h \equiv h(x)$,

$$c_0 = \frac{1}{8640}\,\mu\,m^6 C^3 P_H,$$

$$\begin{aligned}
c_1 = {}& m^6\Big(-8640\,\mu\,C^3 P_H
+ h^2\big(540\,K m^2 U_{HS}^2\operatorname{sech}^2(hm)\,(12P - 5h^2 QG_r) \\
&+ 360\,C^2 P_H(-12P + 5h^2 QG_r)
+ h^2 K\big({-2160}P^3 + 2520\,h^2 QG_r P^2 - 990\,h^4 Q^2G_r^2 P + 131\,h^6 Q^3G_r^3\big)\big) \\
&+ 90y^4\big(4QC^2 P_H G_r + 3K\big({-2}m^2 Q U_{HS}^2\operatorname{sech}^2(hm)\,G_r + (2P - h^2 QG_r)^3\big)\big)\Big),
\end{aligned}$$

$$c_2 = 4Km^6 Q^3 G_r^3, \qquad
c_3 = 1080\,m^6\big({-3}Km^2 U_{HS}^2\operatorname{sech}^2(hm) + 2C^2 P_H\big)(2P - h^2 QG_r),$$

$$c_4 = -45\,Km^6 Q^2 G_r^2(-2P + h^2 QG_r), \qquad
c_5 = 180\,Km^6 QG_r(-2P + h^2 QG_r)^2,$$

$$c_6 = -720\,Km^8 U_{HS}^3\operatorname{sech}^3(hm), \qquad
c_7 = 1620\,Km^5 U_{HS}^2 QG_r\operatorname{sech}^2(hm), \qquad
c_8 = 180\,m^2 U_{HS}\operatorname{sech}(hm),$$

$$c_9 = c_{11} QG_r, \qquad
c_{10} = 2m^2 c_{11}, \qquad
c_{11} = -810\,Km^4 U_{HS}^2\operatorname{sech}^2(hm),$$

$$c_{12} = c_{15}\,m^4, \qquad
c_{13} = 4m^2 QG_r c_{15}, \qquad
c_{14} = 90\,U_{HS}\,m^6 K\operatorname{sech}(hm), \qquad
c_{15} = 4320\,Km\,U_{HS}\operatorname{sech}(hm), \qquad
c_{16} = 120\,Q^2 G_r^2,$$

$$\begin{aligned}
c_{17} = {}& -720\,U_{HS}\Big(9Km^8 U_{HS}^2\operatorname{sech}^2(hm)
+ 4\big({-9}Km^4(2 + h^2m^2)P^2 - 3m^6 C^2 P_H \\
&+ 6Km^2(-12 - 3h^2m^2 + h^4m^4)QG_rP
- K(180 + 54h^2m^2 - 6h^4m^4 + h^6m^6)Q^2G_r^2\big)\Big) \\
&+ Km\Big(8m^7 U_{HS}^2\cosh(3hm)\operatorname{sech}^2(hm)
+ 9m^3 U_{HS}\cosh(2hm)\operatorname{sech}(hm)(4m^2P + QG_r) \\
&+ 6h\big(m^4 U_{HS}\operatorname{sech}(hm)\sinh(2hm)({-12}m^2P + (-3 + 4h^2m^2)QG_r) \\
&+ 32\sinh(hm)({-3}m^4P^2 + m^2(-12 + h^2m^2)QG_rP + (-30 + h^2m^2)Q^2G_r^2)\big)\Big),
\end{aligned}$$

$$c_{18} = 90\,U_{HS}\operatorname{sech}(hm)\big(9Km^8 U_{HS}^2\operatorname{sech}^2(hm) - 720KQ^2G_r^2\big) + c_{22},$$

$$c_{19} = 6480\,Km^2 QG_r U_{HS}\operatorname{sech}(hm), \qquad
c_{20} = -2700\,Q^2G_r^2 m^4 K U_{HS}\operatorname{sech}(hm),$$

$$c_{21} = 6480\,Km^4 QG_r(-2P + h^2QG_r)\,U_{HS}\operatorname{sech}(hm), \qquad
c_{22} = 1620\,Km^4 U_{HS}(-2P + h^2QG_r)^2\operatorname{sech}(hm),$$

$$c_{23} = -1080\,C^2 P_H\,m^6 U_{HS}\operatorname{sech}(hm),$$

$$\begin{aligned}
c_{24} = {}& \frac{1}{8640\,\mu m^6 M^3}\Big[
-8640\,\mu\,h\,m^6 M^3 + 1152\,G_r h^5 m^6 M^2 Q + \tfrac{7552}{77}\,G_r^3 h^{11} K m^6 Q^3 - 1728\,c_1^2 G_r h^5 K m^8 Q\,U_{HS}^2 \\
&+ 8640\,h\,m^6 M^2 U_{HS} + 1555200\,G_r^2 h K Q^2 U_{HS} - 34560\,G_r^2 h^3 K m^2 Q^2 U_{HS} - 34560\,G_r^2 h K(-45 + h^2m^2)Q^2 U_{HS} \\
&+ 2880\,G_r^2 h K(180 + 54h^2m^2 - 6h^4m^4 + h^6m^6)Q^2 U_{HS} - 6480\,h K m^8 U_{HS}^3\operatorname{sech}^2(hm) \\
&+ 3240\,G_r h K m^4 Q U_{HS}^2\cosh(2hm)\operatorname{sech}^2(hm) - 1080\,G_r h^3 K m^6 Q U_{HS}^2\cosh(2hm)\operatorname{sech}^2(hm) \\
&+ 720\,h K m^8 U_{HS}^3\cosh(3hm)\operatorname{sech}^3(hm) - 8640\,m^5 M^2 U_{HS}\tanh(hm) - \frac{2073600\,G_r^2 K Q^2 U_{HS}\tanh(hm)}{m} \\
&- 466560\,G_r^2 h^2 K m Q^2 U_{HS}\tanh(hm) + G_r^2 h^4 K m^3 Q^2 U_{HS}\tanh(hm)\big(34560 - 2880\,G_r^2 h^2 m^2\big) \\
&+ 17280\,G_r^2 h^2 K m(-30 + h^2m^2)Q^2 U_{HS}\tanh(hm) - \frac{17280\,G_r^2 K(90 + 18h^2m^2 - h^4m^4)Q^2 U_{HS}\tanh(hm)}{m} \\
&+ 6480\,K m^7 U_{HS}^3\tanh(hm)\operatorname{sech}^2(hm) - 1620\,G_r K m^3 Q U_{HS}^2\tanh(hm) - 1620\,G_r h^2 K m^5 Q U_{HS}^2\tanh(hm) \\
&+ 2160\,G_r h^4 K m^7 Q U_{HS}^2\tanh(hm) - 240\,K m^7 U_{HS}^3\sinh(3hm)\operatorname{sech}^3(hm)\Big].
\end{aligned}$$

#### **References**


## *Article* **A Comparative Study among New Hybrid Root Finding Algorithms and Traditional Methods**

**Elsayed Badr 1,2,\* , Sultan Almotairi 3,\* and Abdallah El Ghamry <sup>4</sup>**


**Abstract:** In this paper, we propose a novel blended algorithm that has the advantages of the trisection method and the false position method. Numerical results indicate that the proposed algorithm outperforms the secant, the trisection, the Newton–Raphson, the bisection and the regula falsi methods, as well as the hybrid of the last two methods proposed by Sabharwal, with regard to the number of iterations and the average running time.

**Keywords:** hybrid method; trisection; bisection; false position; Newton–Raphson; secant; dynamical systems

#### **1. Introduction**

There are many fields (mathematics, computer science, dynamical systems in engineering, agriculture, biomedicine, etc.) that require finding the roots of non-linear equations. When there is no analytic solution, we try to determine a numerical solution. There is no single algorithm that solves every non-linear equation efficiently.

There are several classes of methods for solving such problems: pure, metaheuristic and blended methods. Pure methods include classical techniques such as the bisection method, the false position method, the secant method and the Newton–Raphson method. Metaheuristic methods use metaheuristic algorithms such as particle swarm optimization, firefly and ant colony algorithms for root finding, whereas blended methods are hybrid combinations of two classical methods.

In general, more details about the classical methods can be found in [1–4], and especially for the bisection and Newton–Raphson methods in [5–8]. Other problems such as minimization, target shooting, etc. are discussed in [9–14].

Sabharwal [15] proposed a novel blended method that is a dynamic hybrid of the bisection and false position methods. He deduced that his algorithm outperformed the pure methods (bisection and false position). He also observed that his algorithm outperformed the secant and Newton–Raphson methods according to the number of iterations. Sabharwal did not analyze his algorithm according to the running time; he considered the number of iterations only. Perhaps there is a method that has a small number of iterations but a large execution time, and vice versa. For this reason, the number of iterations and the running time are both important metrics for evaluating the algorithms. Unfortunately, most researchers have not paid attention to the details of measuring the running time. Furthermore, they did not discuss or answer the following question: why does the running time change from one run to another with the software package used?

**Citation:** Badr, E.; Almotairi, S.; Ghamry, A.E. A Comparative Study among New Hybrid Root Finding Algorithms and Traditional Methods. *Mathematics* **2021**, *9*, 1306. https:// doi.org/10.3390/math9111306

Academic Editor: Ioannis Dassios

Received: 29 April 2021 Accepted: 3 June 2021 Published: 7 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The genetic algorithm was used for comparisons among the classical methods [9–11] based on the fitness ratio of the equations. The authors deduced that the genetic algorithm is more efficient than the classical algorithms for solving the functions *x*<sup>2</sup> − *x* − 2 [12] and *x*<sup>2</sup> + 2*x* − 7 [11]. Mansouri et al. [12] presented a new iterative method to determine the fixed point of a nonlinear function, combining ideas from the artificial bee colony algorithm [13] and the bisection method [14]. They illustrated this method with four benchmark functions and compared the results with other methods, such as artificial bee colony (ABC), particle swarm optimization (PSO), the genetic algorithm (GA) and firefly algorithms.

For more details about the classical methods, hybrid methods and the metaheuristic approaches, the reader can refer to [16,17].

In this work, we propose a novel blended algorithm that has the advantages of the trisection method and the false position algorithm. The computational results show that the proposed algorithm outperforms the trisection and regula falsi methods. On the other hand, the introduced algorithm outperforms the bisection, Newton–Raphson and secant methods according to the iteration number and the average of running time. Finally, the implementation results show the superiority of the proposed algorithm on the blended bisection and false position algorithm, which was proposed by Sabharwal [15]. The results presented in this paper open the way for presenting new methods that compete with traditional methods and may replace them in software packages.

The rest of this work is organized as follows: The pure methods for determining the roots of non-linear equations are introduced in Section 2. The blended algorithms for finding the roots of non-linear equations are presented in Section 3. In Section 4, the numerical results analysis and statistical test among the pure methods and the blended algorithms are provided. Finally, conclusions are drawn in Section 5.

#### **2. Pure Methods**

In this section, we introduce five pure methods for finding the roots of non-linear equations: the bisection method, the trisection method, the false position method, the secant method and the Newton–Raphson method. We contribute an implementation of the trisection algorithm with equal subintervals that outperforms the bisection algorithm on fifteen benchmark equations, as shown in Section 4. The trisection algorithm also partially outperforms the false position, secant and Newton–Raphson methods, as shown in Section 4.

#### *2.1. Bisection Method*

We assume that the function *f*(*x*) is defined and continuous on the closed interval [*a*, *b*], where the signs of *f*(*x*) at the endpoints *a* and *b* are different. We divide the interval [*a*, *b*] into two halves at *x* = (*a* + *b*)/2; if *f*(*x*) = 0, then *x* is a solution of the equation *f*(*x*) = 0. Otherwise (*f*(*x*) ≠ 0), we choose the subinterval [*a*, *x*] or [*x*, *b*] whose endpoint values of *f* have different signs. We repeat dividing the new subinterval into two halves until we reach the exact solution *x* where *f*(*x*) = 0, or an approximate solution with *f*(*x*) ≈ 0 within a tolerance, *eps*. The value of *eps* is close to zero, as shown in Algorithm 1 and the other algorithms.

The size of the interval is reduced by half at each iteration. Therefore, the value of *eps* is determined from the following formula:

$$eps = \frac{b - a}{2^n} \tag{1}$$

where *n* is the number of iterations. From (1), the number of iterations is found by

$$n = \left\lceil \log\_2(\frac{b-a}{eps}) \right\rceil \tag{2}$$
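For a quick sanity check of Equations (1) and (2), the required iteration count can be computed directly. The snippet below is illustrative and not part of the original paper:

```python
import math

def bisection_iterations(a, b, eps):
    # Smallest n with (b - a) / 2**n <= eps, i.e. Equation (2).
    return math.ceil(math.log2((b - a) / eps))

# For [a, b] = [1, 2] and eps = 1e-14 (the tolerance used in Section 4):
n = bisection_iterations(1.0, 2.0, 1e-14)  # 47 iterations suffice
```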

```
Algorithm 1. Bisection(f, a, b, eps).
```

```
Input: The function f(x),
       The interval [a, b] where the root lies in,
       The absolute error (eps).
Output: The root (x),
       The value of f(x)
       Numbers of iterations (n),
       The interval [a, b] where the root lies in
n := 0
while true do
     n := n + 1
     x := (a + b)/2
     if |f(x)| <= eps
       return x, f(x), n, a, b
     else if f(a) * f(x) < 0
       b := x
     else
       a := x
end (while)
```
The bisection method is a bracketing method, so it brackets the root in the interval [*a*, *b*], and at each iteration, the size of the interval [*a*, *b*] is halved. Accordingly, it reduces the error between the approximation root and the exact root for any iteration. On the other hand, the bisection method works quickly if the approximate root is far from the endpoint of the interval; otherwise, it needs more iterations to reach the root [17].

#### Advantages and Disadvantages of the Bisection Method

The bisection method is simple to implement, and its convergence is guaranteed. On the other hand, it has a relatively slow convergence, it needs different signs for the function values of the endpoints, and the test for checking this affects the complexity in the number of operations.
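Algorithm 1 translates directly into Python. The following sketch mirrors the pseudocode above; the bracketing check at entry and the test function are our additions:

```python
def bisection(f, a, b, eps):
    """Bisection following Algorithm 1: returns (x, f(x), n, a, b)."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    n = 0
    while True:
        n += 1
        x = (a + b) / 2
        if abs(f(x)) <= eps:
            return x, f(x), n, a, b
        if f(a) * f(x) < 0:
            b = x  # root lies in [a, x]
        else:
            a = x  # root lies in [x, b]

# Example: f(x) = x^2 - x - 2 has the root x = 2 in [1, 4].
root, fx, n, a, b = bisection(lambda x: x**2 - x - 2, 1.0, 4.0, 1e-14)
```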

#### *2.2. Trisection Method*

The trisection method is like the bisection method, except that it divides the interval [*a*, *b*] into three subintervals, whereas the bisection method divides the interval [*a*, *b*] into two. Algorithm 2 divides the interval [*a*, *b*] into three equal subintervals and searches for the root in the subinterval whose endpoint function values have different signs. If the condition of termination is true, then the iteration has finished its task; otherwise, the algorithm repeats the calculations.

In order to divide the interval [*a*, *b*] into three equal parts by *x*1 and *x*2, we need to know the locations of *x*1 and *x*2. As shown in Figure 1, since

$$x\_1 - a = b - x\_2 \tag{3}$$

$$x\_2 - x\_1 = x\_1 - a \tag{4}$$

**Figure 1.** How to divide the interval [*a*, *b*] into three subintervals.

By solving Equations (3) and (4), we get

$$x\_1 = \frac{2a+b}{3}$$

and

$$x\_2 = \frac{2b+a}{3}$$

The size of the interval [*a*, *b*] decreases to a third with each repetition. Therefore, the value *eps* is determined from the following formula:

$$
eps = \frac{b - a}{\mathfrak{Z}^n} \tag{5}$$

where *n* is the number of iterations. From (5) the number of iterations is found by

$$n = \left\lceil \log\_3(\frac{b-a}{eps}) \right\rceil \tag{6}$$

When we compare Equations (2) and (6), we conclude that the iterations number of the trisection algorithm is less than the iterations number of the bisection algorithm. We might think that the trisection algorithm is better than the bisection algorithm since it requires a few iterations. However, it might be the case that one iteration of the trisection algorithm has an execution time greater than the execution time of one iteration of the bisection algorithm. Therefore, we will consider both execution time and the number of iterations to evaluate the different algorithms.


```
Algorithm 2. Trisection(f, a, b, eps).
```

```
Input: The function f(x),
       The interval [a, b] where the root lies in,
       The absolute error (eps).
Output: The root (x),
       The value of f(x)
       Numbers of iterations (n),
       The interval [a, b] where the root lies in
n := 0
while true do
     n := n + 1
     x1 := (b + 2*a)/3
     x2 := (2*b + a)/3
     if |f(x1)| < |f(x2)|
          x := x1
     else
          x := x2
     if |f(x)| <= eps
          return x, f(x), n, a, b
     else if f(a) * f(x1) < 0
          b := x1
     else if f(x1) * f(x2) < 0
          a := x1
          b := x2
     else
          a := x2
end (while)
```
Advantages and Disadvantages of the Trisection Method

The trisection method has the same advantages and disadvantages of the bisection method, in addition to being faster than it, as shown in Tables 1–9.
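A Python rendering of Algorithm 2, analogous to the bisection sketch above; the function and test problem are ours:

```python
def trisection(f, a, b, eps):
    """Trisection following Algorithm 2: returns (x, f(x), n, a, b)."""
    n = 0
    while True:
        n += 1
        x1 = (2 * a + b) / 3
        x2 = (a + 2 * b) / 3
        # Candidate with the smaller residual, as in Algorithm 2.
        x = x1 if abs(f(x1)) < abs(f(x2)) else x2
        if abs(f(x)) <= eps:
            return x, f(x), n, a, b
        if f(a) * f(x1) < 0:
            b = x1         # root in [a, x1]
        elif f(x1) * f(x2) < 0:
            a, b = x1, x2  # root in [x1, x2]
        else:
            a = x2         # root in [x2, b]

# Example: f(x) = x^3 - 2 has the root 2^(1/3) in [0, 2].
root, fr, n, a, b = trisection(lambda x: x**3 - 2, 0.0, 2.0, 1e-12)
```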

#### *2.3. False Position (Regula Falsi) Method*

There is no unique method suitable for finding the roots of all nonlinear functions; each method has advantages and disadvantages. The false position method is a dynamic and fast method when the function is close to linear. The function *f*(*x*), whose root lies in the interval [*a*, *b*], must be continuous, and the values of *f*(*x*) at the endpoints of the interval [*a*, *b*] must have different signs. The false position method uses the two endpoints of the interval [*a*, *b*] as initial values (*r*0 = *a*, *r*1 = *b*). The line connecting the two points (*r*0, *f*(*r*0)) and (*r*1, *f*(*r*1)) intersects the *x*-axis at the next estimate, *r*2. The successive estimates, *r<sub>n</sub>*, are then determined from the following relationship

$$r\_n = r\_{n-1} - \frac{f(r\_{n-1})(r\_{n-1} - r\_{n-2})}{f(r\_{n-1}) - f(r\_{n-2})} \tag{7}$$

for *n* ≥ 2.

Remark: The regula falsi method is very similar to the bisection method. However, the next iteration point is not the midpoint of the interval but the intersection of the *x*-axis with a secant through (*a*, *f*(*a*)) and (*b*, *f*(*b*)).

Algorithm 3 uses the relation (7) to get the successive approximations by the false position method.


Advantages and Disadvantages of the Regula Falsi Method

It is guaranteed to converge, and it is fast when the function is linear. On the other hand, we cannot determine the iterations number needed for convergence. It is very slow when the function is not linear.
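Relation (7) translates directly into code. The sketch below keeps the bracket as in bisection but uses the chord intercept instead of the midpoint; the iteration cap is our safeguard, not part of the method:

```python
def false_position(f, a, b, eps, max_iter=10_000):
    """Regula falsi via relation (7), kept inside the bracket [a, b]."""
    n = 0
    x = a
    while n < max_iter:
        n += 1
        # Intercept of the chord through (a, f(a)) and (b, f(b)) with the x-axis.
        x = a - f(a) * (b - a) / (f(b) - f(a))
        if abs(f(x)) <= eps:
            break
        if f(a) * f(x) < 0:
            b = x
        else:
            a = x
    return x, f(x), n

root, fr, n = false_position(lambda x: x**2 - x - 2, 1.0, 4.0, 1e-12)
```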

#### *2.4. Newton–Raphson Method*

This method depends on a chosen initial point *x*0, which plays an important role for the Newton–Raphson method: depending on the choice of *x*0, the method may converge to the root or diverge. The first estimate is determined from the following relation.

$$\mathbf{x}\_1 = \mathbf{x}\_0 - \frac{f(\mathbf{x}\_0)}{f'(\mathbf{x}\_0)} \tag{8}$$

The successive approximations for the Newton–Raphson method can be found from the following relation:

$$\mathbf{x}\_{i+1} = \mathbf{x}\_i - \frac{f(\mathbf{x}\_i)}{f'(\mathbf{x}\_i)} \tag{9}$$

where *f*′(*x<sub>i</sub>*) is the first derivative of the function *f*(*x*) at the point *x<sub>i</sub>*.

Algorithm 4 uses the relation (9) to get the successive approximations by the Newton– Raphson method.


```
This function implements Newton's method.
Input: The function (f),
       An initial root xi,
       The absolute error (eps).
Output: The root (x),
       The value of f(x)
       Numbers of iterations (n),
g(x) := f'(x)
n := 0
while true do
    n := n + 1
    xi := xi − f(xi)/g(xi)
    if |f(xi)| <= eps
       return xi, f(xi), n
end (while)
```
Advantages and Disadvantages of the Newton–Raphson Method

It is very fast compared to other methods, but it sometimes fails, meaning that there is no guarantee of its convergence.
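A minimal sketch of relation (9); here the derivative is supplied by the caller rather than computed symbolically, and the iteration cap is our safeguard against divergence:

```python
def newton_raphson(f, df, x0, eps, max_iter=100):
    """Newton-Raphson via relation (9); df is the derivative f'."""
    x, n = x0, 0
    while n < max_iter:
        n += 1
        x = x - f(x) / df(x)
        if abs(f(x)) <= eps:
            return x, f(x), n
    raise RuntimeError("no convergence from this starting point x0")

# f(x) = x^2 - 2 with x0 = 1 converges quadratically to sqrt(2).
root, fr, n = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, 1.0, 1e-14)
```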

#### *2.5. Secant Method*

Just as the Newton method may fail, the secant method may also fail. The Newton method uses relation (9) to find the successive approximations, whereas the secant method uses the following relation:

$$\mathbf{x}\_{i+1} = \mathbf{x}\_i - \frac{\mathbf{x}\_i - \mathbf{x}\_{i-1}}{f(\mathbf{x}\_i) - f(\mathbf{x}\_{i-1})} f(\mathbf{x}\_i) \tag{10}$$

Algorithm 5 uses the relation (10) to get the successive approximations by the secant method.


Advantages and Disadvantages of the Secant Method

It is very fast compared to other methods, but it sometimes fails, meaning that there is no guarantee of its convergence.
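Relation (10) in code form; note that, unlike the bracketing methods, nothing confines the iterates to the initial interval, which is why convergence is not guaranteed:

```python
def secant(f, x0, x1, eps, max_iter=100):
    """Secant method via relation (10): Newton with f' replaced by a
    difference quotient through the last two iterates."""
    n = 0
    while n < max_iter:
        n += 1
        x2 = x1 - (x1 - x0) * f(x1) / (f(x1) - f(x0))
        if abs(f(x2)) <= eps:
            return x2, f(x2), n
        x0, x1 = x1, x2
    raise RuntimeError("no convergence from these starting points")

root, fr, n = secant(lambda x: x**2 - 2, 1.0, 2.0, 1e-12)
```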

#### **3. Hybrid Algorithms**

In this section, instead of pure methods such as the bisection, trisection, false position, secant and Newton–Raphson methods, we propose a new hybrid root-finding algorithm (trisection–false position), which outperforms the bisection–false position algorithm that was proposed by Sabharwal [15].

#### *3.1. Blended Bisection and False Position*

Sabharwal [15] proposed a new algorithm that has the advantages of both the bisection and the false position methods. He built a novel hybrid method, Algorithm 6, which overcame the pure methods (bisection and false position).


Advantages and Disadvantages of the Blended Algorithm

It is guaranteed to converge, and it is more efficient than the classical methods, but it sometimes takes a long time to reach the root.
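Algorithm 6's listing is not reproduced above, so the following sketch is only one plausible reading of the blend in [15]: compute both the midpoint and the false-position point, keep whichever has the smaller residual, and re-bracket with it. Treat the details as our assumption rather than Sabharwal's exact algorithm:

```python
def blend_bf(f, a, b, eps, max_iter=10_000):
    """Hypothetical sketch of a bisection/false-position blend: at each
    step, keep the better of the midpoint and the chord intercept."""
    n = 0
    x = a
    while n < max_iter:
        n += 1
        xB = (a + b) / 2                         # bisection candidate
        xF = a - f(a) * (b - a) / (f(b) - f(a))  # false-position candidate
        x = xB if abs(f(xB)) < abs(f(xF)) else xF
        if abs(f(x)) <= eps:
            break
        if f(a) * f(x) < 0:
            b = x
        else:
            a = x
    return x, f(x), n

root, fr, n = blend_bf(lambda x: x**3 + 4 * x**2 - 10, 1.0, 2.0, 1e-12)
```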

#### *3.2. Blended Trisection and False Position*

We exploit the superiority of the trisection over the bisection method (as shown in Section 4) in order to present a new hybrid method (Algorithm 7) that overcomes the hybrid method presented by Sabharwal [15]. The blended method (trisection–false position) is based on calculating the segment line point in the false position method and also calculating two points that divide the interval [*a*, *b*] in the trisection method and then choosing the best of them, which converges to the approximating root. The number of iterations *n*(*eps*) of the proposed hybrid method is less than or equal to *min*{*n<sup>f</sup>* (*eps*), *nt*(*eps*)}, where *n<sup>f</sup>* (*eps*) and *nt*(*eps*) are the number of iterations of the false position method and the trisection method, respectively. Algorithm 7 outperforms all the classical methods (Tables 1–9).

```
Algorithm 7. blendTF(f, a, b, eps).
```

```
This function implements the blended method of trisection and false position methods.
     Input: The function (f); The interval [a, b] where the root lies in,
             The absolute error (eps).
     Output: The root (x), The value of f(x), Numbers of iterations (n),
             The interval [a, b] where the root lies in
     n = 0; a1 := a; a2 := a; b1 := b, b2 := b
     while true do
        n := n + 1
        xT1 := (b + 2*a)/3
        xT2 := (2*b + a)/3
        xF := a − (f(a)*(b − a))/(f(b) − f(a))
        x := xT1
        fx := f(xT1)
        if |f(xT2)| < |f(x)|
             x := xT2
        if |f(xF)| < |f(x)|
             x := xF
        if |f(x)| <= eps
             return x, f(x), n, a, b
        if f(a) * f(xT1) < 0
             b1 := xT1
        else if f(xT1) * f(xT2) < 0
             a1 := xT1
             b1 := xT2
        else
             a1 := xT2
        if f(a) * f(xF) < 0
             b2 := xF
        else
             a2 := xF
        a := max(a1, a2) ; b := min(b1, b2)
     end (while)
```
Advantages and Disadvantages of the Blended Algorithm (The Proposed Algorithm)

It is guaranteed to converge, and it is more efficient than the classical methods and the blended algorithm that was proposed in [15], as shown in Tables 1–9.
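For readers who prefer runnable code, the following is a minimal Python sketch of the blended trisection–false position iteration described above. The function and variable names (`blend_tf`, `max_iter`) are ours, and the sketch assumes `f(a)` and `f(b)` have strictly opposite signs; it is an illustration, not the authors' MATLAB implementation.

```python
def blend_tf(f, a, b, eps=1e-12, max_iter=200):
    """Blended trisection-false position sketch: at each step, take the
    best of the two trisection points and the false-position point."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    n = 0
    while n < max_iter:
        n += 1
        a1, b1, a2, b2 = a, b, a, b
        xt1 = (2 * a + b) / 3          # first trisection point
        xt2 = (a + 2 * b) / 3          # second trisection point
        xf = a - f(a) * (b - a) / (f(b) - f(a))  # false-position point
        # choose the candidate with the smallest residual |f(x)|
        x = min((xt1, xt2, xf), key=lambda p: abs(f(p)))
        if abs(f(x)) <= eps:
            return x, n
        # trisection bracketing update
        if f(a) * f(xt1) < 0:
            b1 = xt1
        elif f(xt1) * f(xt2) < 0:
            a1, b1 = xt1, xt2
        else:
            a1 = xt2
        # false-position bracketing update
        if f(a) * f(xf) < 0:
            b2 = xf
        else:
            a2 = xf
        # intersect the two brackets, as in Algorithm 7
        a, b = max(a1, a2), min(b1, b2)
    return x, n
```

For example, `blend_tf(lambda v: v*v - 4, 1.0, 3.0)` converges to the root 2 of *x*² − 4 on [1, 3].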

#### **4. Computational Study**

We present the numerical results of the pure methods: the bisection, trisection, false position, secant and Newton–Raphson methods. In addition, we present the computational results for the hybrid methods bisection–false position and trisection–false position. We compare the pure methods and the existing hybrid method with the proposed hybrid method according to the number of iterations and the CPU time. We used fifteen benchmark problems for this comparison, as shown in Table 1. We ran each problem ten times and then computed the average CPU time and the average number of iterations.


**Table 1.** Fifteen benchmark problems.

**Table 2.** Comparison among pure methods and blended algorithms according to iterations, AppRoot, error and interval bounds.


**Table 3.** Solutions of fifteen problems by the bisection method.



**Table 4.** Solutions of fifteen problems by the trisection method.

**Table 5.** Solutions of fifteen problems by the false position method.


**Table 6.** Solutions of fifteen problems by Newton's method.




**Table 7.** Solutions of fifteen problems by the secant method.

**Table 8.** Solutions of fifteen problems by the hybrid method bisection–false position.


**Table 9.** Solutions of fifteen problems by the hybrid method trisection-false position.


We used the MATLAB v7.01 software package to implement all the codes. All codes were run under the 64-bit Windows 8.1 operating system with a Core(TM) i5 CPU M 460 @ 2.53 GHz and 4.00 GB of memory.

#### *Dataset and Evaluation Metrics*

There are different ways to terminate numerical algorithms, such as reaching a given absolute error (eps) or a maximum number of iterations. In this paper, we used the absolute error (*eps* = 10<sup>−14</sup>) to terminate all the algorithms. A method may need few iterations and yet have a long execution time, and vice versa. For this reason, both the iteration number and the running time are important metrics for evaluating the algorithms. Unfortunately, most researchers did not pay attention to the details of measuring the running time. Furthermore, they did not discuss the following question: why does the running time change from one run to another with the same software package? Therefore, we ran every algorithm ten times and calculated the average running time to obtain an accurate measurement and avoid operating-system effects.
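The averaging procedure described above can be sketched in Python (an illustration of the methodology only; `average_runtime` and its parameters are our own names, not the authors' MATLAB code):

```python
import time

def average_runtime(func, runs=10):
    """Run func several times and return the mean wall-clock time,
    smoothing out operating-system scheduling noise."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()  # high-resolution monotonic clock
        func()
        total += time.perf_counter() - start
    return total / runs
```

Using a monotonic, high-resolution clock and averaging over several runs is what makes the reported CPU times reproducible across runs.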

In Table 2, the abbreviations AppRoot, Error, LowerB and UpperB denote the approximate root, the difference between two successive roots, the lower bound and the upper bound, respectively. Table 2 shows the performance of all classical methods and blended algorithms for solving Problem 4. It is clear that both the trisection method and the proposed blended algorithm (trisection–false position) outperformed the other algorithms. Because it is not accurate enough to draw a conclusion from one function, we used fifteen benchmark functions (Table 1) to evaluate the proposed algorithm.

Ali Demir [23] proved that the trisection method with *k*-Lucas numbers works faster than the bisection method. From Tables 3 and 4 and Figure 3, it is clear that the trisection method is better than the bisection method with respect to the running time for all problems except problem 9. Moreover, the trisection method determined the exact root (2.0000000000000000) of problem 4 after one iteration, whereas the bisection method found the approximate root (2.0000000000000284) after 45 iterations. Figure 2 shows that the trisection method always requires fewer iterations than the bisection method; indeed, the number of iterations is *n* = ⌈log<sub>3</sub>((*b* − *a*)/*eps*)⌉ for the trisection method and *n* = ⌈log<sub>2</sub>((*b* − *a*)/*eps*)⌉ for the bisection method. The authors of [6,11] explained that the secant method is better than the bisection and Newton–Raphson methods for problem 8. Since it is not accurate to draw a conclusion from one function [15], we experimented on fifteen benchmark functions. From Table 7, it is clear that the secant method failed to solve problem 11.
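The iteration-count formulas for the two bracketing methods are easy to evaluate numerically. The sketch below (our own illustration, not from the paper) computes both bounds for a unit interval and the tolerance *eps* = 10⁻¹⁴ used in the computational study:

```python
import math

def iters_bisection(a, b, eps):
    # n = ceil(log2((b - a) / eps)): halve the bracket each step
    return math.ceil(math.log2((b - a) / eps))

def iters_trisection(a, b, eps):
    # n = ceil(log3((b - a) / eps)): bracket shrinks by a factor of 3
    return math.ceil(math.log((b - a) / eps, 3))

print(iters_bisection(0, 1, 1e-14))   # 47
print(iters_trisection(0, 1, 1e-14))  # 30
```

The factor-of-three reduction per step is exactly why the trisection method needs markedly fewer iterations than bisection at the same tolerance.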

**Figure 2.** A comparison among 7 methods on 15 problems according to the number of iterations.

**Figure 3.** A comparison among 7 methods on 15 problems according to the CPU time.


From Tables 5–7, we deduce that the proposed hybrid algorithm (trisection–false position) is better than the Newton–Raphson, false position and secant methods. The Newton–Raphson method failed to solve problems P6, P9 and P11, and the secant method failed to solve P11.

From Figure 4 and Tables 8 and 9, it is clear that the proposed blended algorithm (trisection–false position) has fewer iterations than the blended algorithm (bisection–false position) [15] on all the problems except problem 5 (i.e., according to the number of iterations, the proposed algorithm won on 93.3% of the fifteen problems, while Sabharwal's algorithm won on 6.6%).

**Figure 4.** A comparison among 7 methods on 15 problems according to the number of iterations.

From Figure 5 and Tables 8 and 9, it is clear that the proposed blended algorithm (trisection–false position) outperforms the blended algorithm (bisection–false position) [15] on eight problems versus seven (i.e., according to the CPU time, the proposed algorithm won on 53.3% of the fifteen problems, while Sabharwal's algorithm won on 46.6%). On the other hand, the trisection method determined the exact root (1.0000000000000000) of problem 4 after nine iterations, but the bisection method found the approximate root (0.9999999999999999) after 12 iterations.

**Figure 5.** A comparison among 7 methods on 15 problems according to the CPU time.

#### **5. Conclusions**

In this work, we proposed a novel blended algorithm that combines the advantages of the trisection method and the false position method. The computational results show that the proposed algorithm outperforms the trisection and regula falsi methods. It also outperforms the bisection, Newton–Raphson and secant methods according to the iteration number and the average running time. Finally, the implementation results show the superiority of the proposed algorithm over the blended bisection–false position algorithm proposed by Sabharwal [15]. In future work, we will carry out further numerical studies on benchmark functions to evaluate the proposed algorithm and ensure that it competes with the traditional algorithms, so that it can replace them in software packages such as MATLAB and Python. We will also propose other hybrid algorithms that may be better than the proposed one, such as the bisection–Newton–Raphson and trisection–Newton–Raphson methods.

**Author Contributions:** Conceptualization, E.B.; methodology, S.A.; software, A.E.G.; validation, E.B.; formal analysis, E.B.; investigation, S.A.; resources, A.E.G.; data curation, A.E.G.; writing—original draft preparation, E.B.; writing—review and editing, E.B.; visualization, E.B.; supervision, E.B.; project administration, E.B.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** The authors extend their appreciation to the Deanship of Scientific Research at Majmaah University for funding this work under project number (R-2021-140).

**Acknowledgments:** The help from Higher Technological Institute, 10th of Ramadan City, Egypt for publishing is sincerely and greatly appreciated. We also thank the referees for suggestions to improve the presentation of this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **On a Riemann–Liouville Type Implicit Coupled System via Generalized Boundary Conditions †**

**Usman Riaz <sup>1</sup> , Akbar Zada <sup>1</sup> , Zeeshan Ali <sup>2</sup> , Ioan-Lucian Popa 3,\* , Shahram Rezapour 4,5,\* and Sina Etemad <sup>4</sup>**


**Abstract:** We study a coupled system of implicit differential equations with fractional-order differential boundary conditions and the Riemann–Liouville derivative. The existence and uniqueness of solutions are established by applying the Banach contraction principle, and the existence of at least one solution by the Leray–Schauder fixed point theorem. Furthermore, Hyers–Ulam type stabilities are discussed. An example is presented to illustrate our main result. The suggested system is a generalization of fourth-order ordinary differential equations with anti-periodic, classical, and initial boundary conditions.

**Keywords:** Riemann–Liouville fractional derivative; coupled system; fractional order boundary conditions; Green function; existence theory; Ulam stability

**MSC:** 26A33; 34B27; 45M10

#### **1. Introduction**

The generalization of ordinary derivatives leads us to the theory of fractional derivatives. The concept of fractional derivatives was established in 1695, after the well-known conversation of Leibniz and L'Hospital [1]. Mathematicians like Riemann, Liouville, Caputo, Hadamard, Fourier, and Laplace contributed a lot and made the area more interesting for researchers. A fractional-order derivative is a global operator, which may act as a tool to model different physical phenomena in control theory [2], dynamical processes [3], electro-chemistry [4], mathematical biology [5], image and signal processing [6], etc. For more applications of fractional differential equations (FDES), we refer the reader to the works in [7–11]. Furthermore, the theory of coupled systems of differential equations is an important theory in the applied sciences, covering different areas of biochemistry, ecology, biology, and the classical fields of the physical sciences and engineering. For details, see [12–14].

The theory regarding the existence of solutions of FDES drew significant attention of researchers working on different boundary conditions, e.g., classical, integral, multipoint, non-local, periodic, and anti-periodic [15–18]. Among the qualitative properties of FDES, the stability property of the solution is the central one, particularly Hyers–Ulam (HU) stability [19–26]. Stability theory in the sense of HU was first discussed by Ulam [27] in the form of a question in 1940, and the following year, Hyers [28] answered his question

**Citation:** Riaz, U.; Zada, A.; Ali, Z.; Popa, I.-L.; Rezapour, S.; Etemad, S. On a Riemann–Liouville Type Implicit Coupled System via Generalized Boundary Conditions. *Mathematics* **2021**, *9*, 1205. https:// doi.org/10.3390/math9111205

Academic Editor: Ioannis Dassios

Received: 16 April 2021 Accepted: 24 May 2021 Published: 26 May 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

in the context of Banach spaces. Recently, generalized HU stability was discussed by Alqifiary et al. [29] for linear differential equations. Razaei et al. [30] presented Laplace transform and HU stability of linear differential equations. Wang et al. [31] studied HU stability for two types of linear FDES. Shen et al. [32] worked on the HU stability of linear FDES with constant coefficients using Laplace transform method. Liu et al. [33] proved the HU stability of linear Caputo–Fabrizio FDES. Liu et al. [34] studied the HU stability of linear Caputo–Fabrizio FDES with the Mittag–Leffler kernel by Laplace transform method.

The works above motivate us to study the coupled implicit FDES with fractional-order differential boundary conditions:

$$\begin{cases} \mathfrak{D}^{\alpha}\mathbf{v}(\mathbf{t}) - \chi\_{1}(\mathbf{t}, \mathbf{u}(\mathbf{t}), \mathfrak{D}^{\alpha}\mathbf{v}(\mathbf{t})) = 0; \; \mathbf{t} \in \mathfrak{J}, \\ \mathfrak{D}^{\kappa}\mathbf{u}(\mathbf{t}) - \chi\_{2}(\mathbf{t}, \mathbf{v}(\mathbf{t}), \mathfrak{D}^{\kappa}\mathbf{u}(\mathbf{t})) = 0; \; \mathbf{t} \in \mathfrak{J}, \\ \mathfrak{D}^{\alpha-4}\mathbf{v}(0) = \eta\_{1}\mathfrak{D}^{\alpha-4}\mathbf{v}(\sigma), \; \mathfrak{D}^{\alpha-3}\mathbf{v}(0) = \eta\_{2}\mathfrak{D}^{\alpha-3}\mathbf{v}(\sigma), \\ \mathfrak{D}^{\alpha-2}\mathbf{v}(0) = \eta\_{3}\mathfrak{D}^{\alpha-2}\mathbf{v}(\sigma), \; \mathfrak{D}^{\alpha-1}\mathbf{v}(0) = \eta\_{4}\mathfrak{D}^{\alpha-1}\mathbf{v}(\sigma), \\ \mathfrak{D}^{\kappa-4}\mathbf{u}(0) = \eta\_{5}\mathfrak{D}^{\kappa-4}\mathbf{u}(\sigma), \; \mathfrak{D}^{\kappa-3}\mathbf{u}(0) = \eta\_{6}\mathfrak{D}^{\kappa-3}\mathbf{u}(\sigma), \\ \mathfrak{D}^{\kappa-2}\mathbf{u}(0) = \eta\_{7}\mathfrak{D}^{\kappa-2}\mathbf{u}(\sigma), \; \mathfrak{D}^{\kappa-1}\mathbf{u}(0) = \eta\_{8}\mathfrak{D}^{\kappa-1}\mathbf{u}(\sigma), \end{cases} \tag{1}$$

where 3 < *α*, *κ* ≤ 4, J = [0, *σ*], *σ* > 0 and *η*<sub>i</sub> ≠ 1 for i = 1, 2, . . . , 8. Here, D*<sup>α</sup>* and D*<sup>κ</sup>* denote the Riemann–Liouville fractional derivatives of orders *α* and *κ*, respectively, and *χ*1, *χ*2 : J × R × R → R are continuous functions.

Higher-order ordinary differential equations (ODES) can be used to model problems arising from the fields of applied sciences and engineering [35,36]. System (1) generalizes fourth-order ODES, which are recovered when *α* = *κ* = 4. Fourth-order differential equations have important applications in mechanics and have thus attracted considerable attention over the last three decades. The problem of the static deflection of a uniform beam, which can be modeled as a fourth-order initial value problem, is a good example of a real problem in engineering [37,38].

This problem has been extensively analyzed; some new techniques were developed, and numerous general and impressive results regarding the existence of solutions were established in [39–42]. Sometimes, the mathematical modeling of various physical phenomena may lead to a coupled system of the foregoing ODES. Furthermore, for *η*<sub>i</sub> = −1 (i = 1, 2, . . . , 8), we obtain anti-periodic boundary conditions, which are applicable in several mathematical models; some are given in [43,44].

The manuscript is organized as follows. In Section 2, we establish the basic notations, definitions, and lemmas needed for our main results. In Section 3, we present the existence and uniqueness of solutions of system (1), as well as the existence of at least one solution, by applying the Banach contraction fixed point theorem and the Leray–Schauder fixed point theorem. In Section 4, we discuss definitions of HU-type stabilities, which help us show that system (1) has HU-type stabilities, by two different approaches. In Section 5, by a particular example of system (1), we show that our results are applicable.

#### **2. Background Materials**

In this section, we present basic notations, the Banach spaces used, definitions of the considered derivative and integral, and a lemma, all of which will be utilized in the next sections.

Suppose C(J) is a Banach space with the norm ‖v‖ = sup<sub>t∈J</sub> |v(t)|. For t ∈ J, we define v<sub>*r*</sub>(t) = t*<sup>r</sup>*v(t), *r* ≥ 0. Let S<sub>1</sub> = C<sub>*r*</sub>(J) ⊂ C(J) be the space of all functions v such that v<sub>*r*</sub> ∈ C(J); then S<sub>1</sub> is a Banach space when endowed with the norm

$$\|\mathbf{v}\|\_{\mathfrak{S}\_1} = \max\Big\{ \sup\_{\mathbf{t}\in\mathfrak{J}} \mathbf{t}^r |\mathbf{v}(\mathbf{t})|, \; \sup\_{\mathbf{t}\in\mathfrak{J}} \mathbf{t}^r |\mathfrak{D}^{\alpha} \mathbf{v}(\mathbf{t})| \Big\}.$$

Similarly, ‖(v, u)‖<sub>S</sub> = ‖v‖<sub>S1</sub> + ‖u‖<sub>S2</sub> is the norm defined on the product space S = S<sub>1</sub> × S<sub>2</sub>. Obviously, (S, ‖(v, u)‖<sub>S</sub>) is a Banach space.
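As a purely numerical illustration of the weighted supremum norm above (our own sketch; the discretization, function names, and grid are not part of the paper), one can approximate sup<sub>t∈J</sub> t*<sup>r</sup>*|v(t)| on a finite grid:

```python
def weighted_sup_norm(v, r, grid):
    """Discretized approximation of sup_{t in J} t**r * |v(t)|
    over a finite grid of points in J."""
    return max(t ** r * abs(v(t)) for t in grid)

# J = [0, 1] discretized into 1000 subintervals
grid = [i / 1000 for i in range(1001)]
# for v(t) = t and r = 1, the supremum of t * |t| on [0, 1] is 1
print(weighted_sup_norm(lambda t: t, 1, grid))  # 1.0
```

The weight t*<sup>r</sup>* damps the behavior of v near t = 0, which is what makes the space C<sub>*r*</sub>(J) suitable for Riemann–Liouville problems whose solutions may be singular at the origin.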

**Definition 1.** *[45] For a continuous function* <sup>v</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>*, the Riemann–Liouville integral of order α* > 0 *is defined as*

$$\mathfrak{J}^{\alpha}\mathfrak{v}(\mathfrak{t}) = \frac{1}{\Gamma(\alpha)} \int\_{0}^{\mathfrak{t}} \frac{\mathfrak{v}(\tau)}{(\mathfrak{t}-\tau)^{1-\alpha}} \, d\tau,$$

*such that the integral is pointwise defined on* R+*.*

**Definition 2.** *[45] For a continuous function* <sup>v</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>*, the Riemann–Liouville derivative of order α* > 0 *is defined as*

$$\mathfrak{D}^{\alpha}\mathsf{v}(\mathfrak{t}) = \frac{1}{\Gamma(n-\alpha)} \left(\frac{d}{d\mathfrak{t}}\right)^{n} \int\_{0}^{\mathfrak{t}} \frac{\mathsf{v}(\tau)}{(\mathfrak{t}-\tau)^{\alpha-n+1}} \, d\tau,$$

*where* [*α*] *represents the integer part of α and n* = [*α*] + 1*. We note that for ϱ* > −1, *ϱ* ≠ *α* − 1, *α* − 2, . . . , *α* − *n*, *we have*

$$
\mathfrak{D}^{\mathfrak{a}}\mathfrak{t}^{\varrho} = \frac{\Gamma(\varrho + 1)}{\Gamma(\varrho - \mathfrak{a} + 1)} \mathfrak{t}^{\varrho - \mathfrak{a}}.
$$

*and*

$$
\mathfrak{D}^{\mathfrak{a}} \mathfrak{t}^{\mathfrak{a}-i} = 0, \; i = 1, 2, 3, \dots, n.
$$
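As a quick numerical sanity check of the power rule above (our own illustration using Python's `math.gamma`; the function name is hypothetical and not part of the paper):

```python
import math

def rl_derivative_power(rho, alpha, t):
    """Riemann-Liouville derivative of t**rho of order alpha via the
    power rule: D^alpha t^rho = Gamma(rho+1)/Gamma(rho-alpha+1) * t^(rho-alpha)."""
    return math.gamma(rho + 1) / math.gamma(rho - alpha + 1) * t ** (rho - alpha)

# the half-derivative (alpha = 1/2) of v(t) = t at t = 1 equals 2/sqrt(pi)
print(rl_derivative_power(1.0, 0.5, 1.0))
```

Here Γ(2)/Γ(3/2) = 2/√π ≈ 1.1284, matching the classical value of the half-derivative of t at t = 1.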

**Lemma 1.** *[45] Solution of the following Riemann–Liouville* FDE *of order n* − 1 < *α* ≤ *n*

$$
\mathfrak{D}^{\alpha}\mathbf{v}(\mathbf{t}) = \theta(\mathbf{t}),
$$

*is*

$$\mathbf{v}(\mathfrak{t}) = \mathfrak{J}^{\alpha}\theta(\mathfrak{t}) + k\_0 \mathfrak{t}^{\alpha-n} + k\_1 \mathfrak{t}^{\alpha-n+1} + \dots + k\_{n-2} \mathfrak{t}^{\alpha-2} + k\_{n-1} \mathfrak{t}^{\alpha-1},$$

*where k<sub>i</sub>* (*i* = 0, 1, . . . , *n* − 1) *are unknown constants.*

#### **3. Existence Theory**

This section is devoted to the equivalent integral form of the proposed problem.

**Lemma 2.** *Let ϑ* ∈ C(J)*. The following* FDE *of order α* ∈ (3, 4] *with boundary conditions*

$$\begin{cases} \mathfrak{D}^{\alpha}\mathbf{v}(\mathbf{t}) = \theta(\mathbf{t}); \ \mathbf{t} \in \mathfrak{J}, \\ \mathfrak{D}^{\alpha-4}\mathbf{v}(0) = \eta\_1 \mathfrak{D}^{\alpha-4}\mathbf{v}(\sigma), \ \mathfrak{D}^{\alpha-3}\mathbf{v}(0) = \eta\_2 \mathfrak{D}^{\alpha-3}\mathbf{v}(\sigma), \\ \mathfrak{D}^{\alpha-2}\mathbf{v}(0) = \eta\_3 \mathfrak{D}^{\alpha-2}\mathbf{v}(\sigma), \ \mathfrak{D}^{\alpha-1}\mathbf{v}(0) = \eta\_4 \mathfrak{D}^{\alpha-1}\mathbf{v}(\sigma) \end{cases} \tag{2}$$

*has the solution*

$$\mathbf{v}(\mathbf{t}) = \int\_0^{\sigma} \mathbf{G}\_{\alpha}(\mathbf{t}, \tau) \theta(\tau) d\tau,$$

*where*

$$\mathbf{G}\_{\alpha}(\mathbf{t}, \tau) = \begin{cases} \dfrac{(\mathbf{t}-\tau)^{\alpha-1}}{\Gamma(\alpha)} + \mathbf{W}\_{\alpha}(\mathbf{t}, \tau), & 0 \leq \tau < \mathbf{t} \leq \sigma, \\[1ex] \mathbf{W}\_{\alpha}(\mathbf{t}, \tau), & 0 \leq \mathbf{t} < \tau \leq \sigma, \end{cases} \tag{3}$$

*where, for compactness,*

$$\begin{split} \mathbf{W}\_{\alpha}(\mathbf{t}, \tau) &= \frac{\eta\_{1}\mathbf{t}^{\alpha-4}(\sigma-\tau)^{3}}{6(1-\eta\_{1})\Gamma(\alpha-3)} + \frac{\big[(1-\eta\_{1})\eta\_{2}\mathbf{t}^{\alpha-3} + \eta\_{1}\eta\_{2}\sigma \mathbf{t}^{\alpha-4}(\alpha-3)\big](\sigma-\tau)^{2}}{2(1-\eta\_{1})(1-\eta\_{2})\Gamma(\alpha-2)} \\ &\quad + \frac{\eta\_{3}\mathbf{t}^{\alpha-2}(\sigma-\tau)}{(1-\eta\_{3})\Gamma(\alpha-1)} + \frac{\eta\_{2}\eta\_{3}\sigma \mathbf{t}^{\alpha-3}(\sigma-\tau)}{(1-\eta\_{2})(1-\eta\_{3})\Gamma(\alpha-2)} + \frac{\eta\_{1}(1+\eta\_{2})\eta\_{3}\sigma^{2}\mathbf{t}^{\alpha-4}(\sigma-\tau)}{2(1-\eta\_{1})(1-\eta\_{2})(1-\eta\_{3})\Gamma(\alpha-3)} \\ &\quad + \frac{\eta\_{4}\mathbf{t}^{\alpha-1}}{(1-\eta\_{4})\Gamma(\alpha)} + \frac{\eta\_{3}\eta\_{4}\sigma \mathbf{t}^{\alpha-2}}{(1-\eta\_{3})(1-\eta\_{4})\Gamma(\alpha-1)} + \frac{\eta\_{2}(1+\eta\_{3})\eta\_{4}\sigma^{2}\mathbf{t}^{\alpha-3}}{2(1-\eta\_{2})(1-\eta\_{3})(1-\eta\_{4})\Gamma(\alpha-2)} \\ &\quad + \frac{\eta\_{1}\big((1+\eta\_{2})(1+\eta\_{3})+\eta\_{2}+\eta\_{3}\big)\eta\_{4}\sigma^{3}\mathbf{t}^{\alpha-4}}{6(1-\eta\_{1})(1-\eta\_{2})(1-\eta\_{3})(1-\eta\_{4})\Gamma(\alpha-3)}. \end{split}$$

**Proof.** Using Lemma 1 on FDE (2), we have

$$\mathbf{v}(\mathbf{t}) = \frac{1}{\Gamma(\mathfrak{a})} \int\_0^\mathfrak{t} (\mathfrak{t} - \mathfrak{r})^{\mathfrak{a} - 1} \theta(\mathfrak{r}) \mathfrak{d}\mathfrak{r} + k\_3 \mathfrak{t}^{\mathfrak{a} - 1} + k\_2 \mathfrak{t}^{\mathfrak{a} - 2} + k\_1 \mathfrak{t}^{\mathfrak{a} - 3} + k\_0 \mathfrak{t}^{\mathfrak{a} - 4}. \tag{4}$$

Applying the boundary conditions of (2) to (4), we obtain the unknowns

$$\begin{split} k\_{0} &= \frac{\eta\_{1}}{(1-\eta\_{1})\Gamma(\alpha-3)} \Big[ \frac{1}{6} \int\_{0}^{\sigma} (\sigma-\tau)^{3} \theta(\tau) d\tau + \frac{\eta\_{2}\sigma}{2(1-\eta\_{2})} \int\_{0}^{\sigma} (\sigma-\tau)^{2} \theta(\tau) d\tau \\ &\quad + \frac{(1+\eta\_{2})\eta\_{3}\sigma^{2}}{2(1-\eta\_{2})(1-\eta\_{3})} \int\_{0}^{\sigma} (\sigma-\tau)\theta(\tau) d\tau + \frac{((1+\eta\_{2})(1+\eta\_{3})+\eta\_{2}+\eta\_{3})\eta\_{4}\sigma^{3}}{6(1-\eta\_{2})(1-\eta\_{3})(1-\eta\_{4})} \int\_{0}^{\sigma} \theta(\tau) d\tau \Big], \\ k\_{1} &= \frac{\eta\_{2}}{(1-\eta\_{2})\Gamma(\alpha-2)} \Big[ \frac{1}{2} \int\_{0}^{\sigma} (\sigma-\tau)^{2} \theta(\tau) d\tau + \frac{\eta\_{3}\sigma}{(1-\eta\_{3})} \int\_{0}^{\sigma} (\sigma-\tau)\theta(\tau) d\tau \\ &\quad + \frac{(1+\eta\_{3})\eta\_{4}\sigma^{2}}{2(1-\eta\_{3})(1-\eta\_{4})} \int\_{0}^{\sigma} \theta(\tau) d\tau \Big], \\ k\_{2} &= \frac{\eta\_{3}}{(1-\eta\_{3})\Gamma(\alpha-1)} \Big[ \int\_{0}^{\sigma} (\sigma-\tau)\theta(\tau) d\tau + \frac{\eta\_{4}\sigma}{(1-\eta\_{4})} \int\_{0}^{\sigma} \theta(\tau) d\tau \Big], \\ k\_{3} &= \frac{\eta\_{4}}{(1-\eta\_{4})\Gamma(\alpha)} \int\_{0}^{\sigma} \theta(\tau) d\tau. \end{split}$$

Putting the values of *k*<sub>0</sub>, *k*<sub>1</sub>, *k*<sub>2</sub> and *k*<sub>3</sub> into Equation (4), we obtain

$$\begin{split} \mathbf{v}(\mathbf{t}) &= \frac{1}{\Gamma(\alpha)} \int\_{0}^{\mathbf{t}} (\mathbf{t}-\tau)^{\alpha-1} \theta(\tau) d\tau + \frac{\eta\_{1}\mathbf{t}^{\alpha-4}}{6(1-\eta\_{1})\Gamma(\alpha-3)} \int\_{0}^{\sigma} (\sigma-\tau)^{3} \theta(\tau) d\tau \\ &\quad + \Big[\frac{\eta\_{2}\mathbf{t}^{\alpha-3}}{2(1-\eta\_{2})\Gamma(\alpha-2)} + \frac{\eta\_{1}\eta\_{2}\sigma \mathbf{t}^{\alpha-4}}{2(1-\eta\_{1})(1-\eta\_{2})\Gamma(\alpha-3)}\Big] \int\_{0}^{\sigma} (\sigma-\tau)^{2} \theta(\tau) d\tau \\ &\quad + \Big[\frac{\eta\_{3}\mathbf{t}^{\alpha-2}}{(1-\eta\_{3})\Gamma(\alpha-1)} + \frac{\eta\_{2}\eta\_{3}\sigma \mathbf{t}^{\alpha-3}}{(1-\eta\_{2})(1-\eta\_{3})\Gamma(\alpha-2)} + \frac{\eta\_{1}(1+\eta\_{2})\eta\_{3}\sigma^{2}\mathbf{t}^{\alpha-4}}{2(1-\eta\_{1})(1-\eta\_{2})(1-\eta\_{3})\Gamma(\alpha-3)}\Big] \int\_{0}^{\sigma} (\sigma-\tau) \theta(\tau) d\tau \\ &\quad + \Big[\frac{\eta\_{4}\mathbf{t}^{\alpha-1}}{(1-\eta\_{4})\Gamma(\alpha)} + \frac{\eta\_{3}\eta\_{4}\sigma \mathbf{t}^{\alpha-2}}{(1-\eta\_{3})(1-\eta\_{4})\Gamma(\alpha-1)} + \frac{\eta\_{2}(1+\eta\_{3})\eta\_{4}\sigma^{2}\mathbf{t}^{\alpha-3}}{2(1-\eta\_{2})(1-\eta\_{3})(1-\eta\_{4})\Gamma(\alpha-2)} \\ &\qquad + \frac{\eta\_{1}\big((1+\eta\_{2})(1+\eta\_{3})+\eta\_{2}+\eta\_{3}\big)\eta\_{4}\sigma^{3}\mathbf{t}^{\alpha-4}}{6(1-\eta\_{1})(1-\eta\_{2})(1-\eta\_{3})(1-\eta\_{4})\Gamma(\alpha-3)}\Big] \int\_{0}^{\sigma} \theta(\tau) d\tau \\ &= \int\_{0}^{\sigma} \mathbf{G}\_{\alpha}(\mathbf{t},\tau) \theta(\tau) d\tau, \end{split} \tag{5}$$

where **G***α*(t, *τ*) is given by (3).

**Remark 1.** *Let µ* ∈ C(J)*. The following* FDE *of order κ* ∈ (3, 4] *with boundary conditions*

$$\begin{cases} \mathfrak{D}^{\kappa}\mathfrak{u}(\mathfrak{t}) = \mu(\mathfrak{t}); \; \mathfrak{t} \in \mathfrak{J}, \\ \mathfrak{D}^{\kappa-4}\mathfrak{u}(0) = \eta\_{5}\mathfrak{D}^{\kappa-4}\mathfrak{u}(\sigma), \; \mathfrak{D}^{\kappa-3}\mathfrak{u}(0) = \eta\_{6}\mathfrak{D}^{\kappa-3}\mathfrak{u}(\sigma), \\ \mathfrak{D}^{\kappa-2}\mathfrak{u}(0) = \eta\_{7}\mathfrak{D}^{\kappa-2}\mathfrak{u}(\sigma), \; \mathfrak{D}^{\kappa-1}\mathfrak{u}(0) = \eta\_{8}\mathfrak{D}^{\kappa-1}\mathfrak{u}(\sigma) \end{cases}$$

*has the solution*

$$\mathfrak{u}(\mathfrak{t}) = \int\_0^{\sigma} \mathbf{G}\_{\kappa}(\mathfrak{t}, \tau) \mu(\tau) d\tau,$$

*where* **G***κ*(t, *τ*) *is obtained from* **G***α*(t, *τ*) *in* (3) *by replacing α with κ and η*1, *η*2, *η*3, *η*4 *with η*5, *η*6, *η*7, *η*8*, respectively.*

**Remark 2.** *Putting α* = 4 *and η*<sup>1</sup> = *η*<sup>2</sup> = *η*<sup>3</sup> = *η*<sup>4</sup> = −1 *in* (3)*, gives Green's function* **G***α*(t, *τ*) *of fourth-order* ODE *with anti-periodic boundary conditions.*

**Remark 3.** *Putting α* = 4 *and η*<sup>1</sup> = *η*<sup>2</sup> = *η*<sup>3</sup> = *η*<sup>4</sup> = 0 *in* (5)*, gives the solution of fourth-order* ODE *having initial conditions.*

For convenience, we set the following notation:

$$\begin{split} \mathfrak{Q}\_{\alpha} = \max\Bigg\{ &\frac{\sigma^{4}}{\Gamma(\alpha+1)} + \left| \frac{\eta\_{1}\sigma^{4}}{24(1-\eta\_{1})\Gamma(\alpha-3)} \right| + \left| \frac{\eta\_{2}(1-\eta\_{1})\sigma^{4} + \eta\_{1}\eta\_{2}\sigma^{4}(\alpha-3)}{6(1-\eta\_{1})(1-\eta\_{2})\Gamma(\alpha-2)} \right| \\ &+ \left| \frac{\eta\_{3}(1-\eta\_{2})\sigma^{4} + \eta\_{2}\eta\_{3}\sigma^{4}(\alpha-2)}{2(1-\eta\_{2})(1-\eta\_{3})\Gamma(\alpha-1)} \right| + \left| \frac{\eta\_{1}(1+\eta\_{2})\eta\_{3}\sigma^{4}}{2(1-\eta\_{1})(1-\eta\_{2})(1-\eta\_{3})\Gamma(\alpha-3)} \right| \\ &+ \left| \frac{(1-\eta\_{3})\eta\_{4}\sigma^{4} + \eta\_{3}\eta\_{4}\sigma^{4}(\alpha-1)}{(1-\eta\_{3})(1-\eta\_{4})\Gamma(\alpha)} \right| + \left| \frac{\eta\_{2}(1+\eta\_{3})\eta\_{4}\sigma^{4}}{2(1-\eta\_{2})(1-\eta\_{3})(1-\eta\_{4})\Gamma(\alpha-2)} \right| \\ &+ \left| \frac{\eta\_{1}((1+\eta\_{2})(1+\eta\_{3})+\eta\_{2}+\eta\_{3})\eta\_{4}\sigma^{4}}{6(1-\eta\_{1})(1-\eta\_{2})(1-\eta\_{3})(1-\eta\_{4})\Gamma(\alpha-3)} \right| \Bigg\} \end{split} \tag{6}$$

and

$$\begin{split} \mathfrak{Q}\_{\kappa} = \max\Bigg\{ &\frac{\sigma^{4}}{\Gamma(\kappa+1)} + \left| \frac{\eta\_{5}\sigma^{4}}{24(1-\eta\_{5})\Gamma(\kappa-3)} \right| + \left| \frac{\eta\_{6}(1-\eta\_{5})\sigma^{4} + \eta\_{5}\eta\_{6}\sigma^{4}(\kappa-3)}{6(1-\eta\_{5})(1-\eta\_{6})\Gamma(\kappa-2)} \right| \\ &+ \left| \frac{\eta\_{7}(1-\eta\_{6})\sigma^{4} + \eta\_{6}\eta\_{7}\sigma^{4}(\kappa-2)}{2(1-\eta\_{6})(1-\eta\_{7})\Gamma(\kappa-1)} \right| + \left| \frac{\eta\_{5}(1+\eta\_{6})\eta\_{7}\sigma^{4}}{2(1-\eta\_{5})(1-\eta\_{6})(1-\eta\_{7})\Gamma(\kappa-3)} \right| \\ &+ \left| \frac{(1-\eta\_{7})\eta\_{8}\sigma^{4} + \eta\_{7}\eta\_{8}\sigma^{4}(\kappa-1)}{(1-\eta\_{7})(1-\eta\_{8})\Gamma(\kappa)} \right| + \left| \frac{\eta\_{6}(1+\eta\_{7})\eta\_{8}\sigma^{4}}{2(1-\eta\_{6})(1-\eta\_{7})(1-\eta\_{8})\Gamma(\kappa-2)} \right| \\ &+ \left| \frac{\eta\_{5}((1+\eta\_{6})(1+\eta\_{7})+\eta\_{6}+\eta\_{7})\eta\_{8}\sigma^{4}}{6(1-\eta\_{5})(1-\eta\_{6})(1-\eta\_{7})(1-\eta\_{8})\Gamma(\kappa-3)} \right| \Bigg\} \end{split} \tag{7}$$

If (v, u) is a solution of system (1) and t ∈ J, then

$$\begin{split} v(t) ={}& \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1}\, \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big)\,d\tau + \frac{\eta_{1} t^{\alpha-4}}{6(1-\eta_{1})\Gamma(\alpha-3)} \int_{0}^{\sigma} (\sigma-\tau)^{3}\, \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big)\,d\tau \\ &+ \bigg[ \frac{\eta_{2} t^{\alpha-3}}{2(1-\eta_{2})\Gamma(\alpha-2)} + \frac{\eta_{1}\eta_{2}\sigma t^{\alpha-4}}{2(1-\eta_{1})(1-\eta_{2})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)^{2}\, \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big)\,d\tau \\ &+ \bigg[ \frac{\eta_{3} t^{\alpha-2}}{(1-\eta_{3})\Gamma(\alpha-1)} + \frac{\eta_{2}\eta_{3}\sigma t^{\alpha-3}}{(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-2)} + \frac{\eta_{1}(1+\eta_{2})\eta_{3}\sigma^{2} t^{\alpha-4}}{2(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)\, \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big)\,d\tau \\ &+ \bigg[ \frac{\eta_{4} t^{\alpha-1}}{(1-\eta_{4})\Gamma(\alpha)} + \frac{\eta_{3}\eta_{4}\sigma t^{\alpha-2}}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-1)} + \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{2} t^{\alpha-3}}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \\ &\qquad + \frac{\eta_{1}\big((1+\eta_{2})(1+\eta_{3})+\eta_{2}+\eta_{3}\big)\eta_{4}\sigma^{3} t^{\alpha-4}}{6(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big)\,d\tau \\ ={}& \int_{0}^{\sigma} \mathbf{G}_{\alpha}(t,\tau)\, \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big)\,d\tau, \end{split}$$

and

$$\begin{split} u(t) ={}& \frac{1}{\Gamma(\kappa)} \int_{0}^{t} (t-\tau)^{\kappa-1}\, \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big)\,d\tau + \frac{\eta_{5} t^{\kappa-4}}{6(1-\eta_{5})\Gamma(\kappa-3)} \int_{0}^{\sigma} (\sigma-\tau)^{3}\, \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big)\,d\tau \\ &+ \bigg[ \frac{\eta_{6} t^{\kappa-3}}{2(1-\eta_{6})\Gamma(\kappa-2)} + \frac{\eta_{5}\eta_{6}\sigma t^{\kappa-4}}{2(1-\eta_{5})(1-\eta_{6})\Gamma(\kappa-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)^{2}\, \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big)\,d\tau \\ &+ \bigg[ \frac{\eta_{7} t^{\kappa-2}}{(1-\eta_{7})\Gamma(\kappa-1)} + \frac{\eta_{6}\eta_{7}\sigma t^{\kappa-3}}{(1-\eta_{6})(1-\eta_{7})\Gamma(\kappa-2)} + \frac{\eta_{5}(1+\eta_{6})\eta_{7}\sigma^{2} t^{\kappa-4}}{2(1-\eta_{5})(1-\eta_{6})(1-\eta_{7})\Gamma(\kappa-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)\, \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big)\,d\tau \\ &+ \bigg[ \frac{\eta_{8} t^{\kappa-1}}{(1-\eta_{8})\Gamma(\kappa)} + \frac{\eta_{7}\eta_{8}\sigma t^{\kappa-2}}{(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-1)} + \frac{\eta_{6}(1+\eta_{7})\eta_{8}\sigma^{2} t^{\kappa-3}}{2(1-\eta_{6})(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-2)} \\ &\qquad + \frac{\eta_{5}\big((1+\eta_{6})(1+\eta_{7})+\eta_{6}+\eta_{7}\big)\eta_{8}\sigma^{3} t^{\kappa-4}}{6(1-\eta_{5})(1-\eta_{6})(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-3)} \bigg] \int_{0}^{\sigma} \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big)\,d\tau \\ ={}& \int_{0}^{\sigma} \mathbf{G}_{\kappa}(t,\tau)\, \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big)\,d\tau. \end{split}$$

We use the following notations for convenience:

$$\begin{aligned} Y(t) &= \chi_{1}\big(t, u(t), \mathfrak{D}^{\alpha} v(t)\big) = \chi_{1}\big(t, u(t), Y(t)\big), \\ X(t) &= \chi_{2}\big(t, v(t), \mathfrak{D}^{\kappa} u(t)\big) = \chi_{2}\big(t, v(t), X(t)\big). \end{aligned}$$

Now we transform system (1) into a fixed point problem. Let F : S → S be the operator defined by

$$\mathsf{F}(v,u)(t) = \begin{pmatrix} \displaystyle\int_{0}^{\sigma} \mathbf{G}_{\alpha}(t,\tau)\, \chi_{1}\big(\tau, u(\tau), Y(\tau)\big)\,d\tau \\[2ex] \displaystyle\int_{0}^{\sigma} \mathbf{G}_{\kappa}(t,\tau)\, \chi_{2}\big(\tau, v(\tau), X(\tau)\big)\,d\tau \end{pmatrix} = \begin{pmatrix} \mathsf{F}_{\alpha}(v)(t) \\[1ex] \mathsf{F}_{\kappa}(u)(t) \end{pmatrix}. \tag{8}$$

Then the fixed points of F coincide with the solutions of system (1); explicitly,

$$\begin{split} \mathsf{F}_{\alpha}(v)(t) ={}& \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1}\, Y(\tau)\,d\tau + \frac{\eta_{1} t^{\alpha-4}}{6(1-\eta_{1})\Gamma(\alpha-3)} \int_{0}^{\sigma} (\sigma-\tau)^{3}\, Y(\tau)\,d\tau \\ &+ \bigg[ \frac{\eta_{2} t^{\alpha-3}}{2(1-\eta_{2})\Gamma(\alpha-2)} + \frac{\eta_{1}\eta_{2}\sigma t^{\alpha-4}}{2(1-\eta_{1})(1-\eta_{2})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)^{2}\, Y(\tau)\,d\tau \\ &+ \bigg[ \frac{\eta_{3} t^{\alpha-2}}{(1-\eta_{3})\Gamma(\alpha-1)} + \frac{\eta_{2}\eta_{3}\sigma t^{\alpha-3}}{(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-2)} + \frac{\eta_{1}(1+\eta_{2})\eta_{3}\sigma^{2} t^{\alpha-4}}{2(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)\, Y(\tau)\,d\tau \\ &+ \bigg[ \frac{\eta_{4} t^{\alpha-1}}{(1-\eta_{4})\Gamma(\alpha)} + \frac{\eta_{3}\eta_{4}\sigma t^{\alpha-2}}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-1)} + \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{2} t^{\alpha-3}}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \\ &\qquad + \frac{\eta_{1}\big((1+\eta_{2})(1+\eta_{3})+\eta_{2}+\eta_{3}\big)\eta_{4}\sigma^{3} t^{\alpha-4}}{6(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} Y(\tau)\,d\tau \end{split}$$

and

$$\begin{split} \mathsf{F}_{\kappa}(u)(t) ={}& \frac{1}{\Gamma(\kappa)} \int_{0}^{t} (t-\tau)^{\kappa-1}\, X(\tau)\,d\tau + \frac{\eta_{5} t^{\kappa-4}}{6(1-\eta_{5})\Gamma(\kappa-3)} \int_{0}^{\sigma} (\sigma-\tau)^{3}\, X(\tau)\,d\tau \\ &+ \bigg[ \frac{\eta_{6} t^{\kappa-3}}{2(1-\eta_{6})\Gamma(\kappa-2)} + \frac{\eta_{5}\eta_{6}\sigma t^{\kappa-4}}{2(1-\eta_{5})(1-\eta_{6})\Gamma(\kappa-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)^{2}\, X(\tau)\,d\tau \\ &+ \bigg[ \frac{\eta_{7} t^{\kappa-2}}{(1-\eta_{7})\Gamma(\kappa-1)} + \frac{\eta_{6}\eta_{7}\sigma t^{\kappa-3}}{(1-\eta_{6})(1-\eta_{7})\Gamma(\kappa-2)} + \frac{\eta_{5}(1+\eta_{6})\eta_{7}\sigma^{2} t^{\kappa-4}}{2(1-\eta_{5})(1-\eta_{6})(1-\eta_{7})\Gamma(\kappa-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)\, X(\tau)\,d\tau \\ &+ \bigg[ \frac{\eta_{8} t^{\kappa-1}}{(1-\eta_{8})\Gamma(\kappa)} + \frac{\eta_{7}\eta_{8}\sigma t^{\kappa-2}}{(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-1)} + \frac{\eta_{6}(1+\eta_{7})\eta_{8}\sigma^{2} t^{\kappa-3}}{2(1-\eta_{6})(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-2)} \\ &\qquad + \frac{\eta_{5}\big((1+\eta_{6})(1+\eta_{7})+\eta_{6}+\eta_{7}\big)\eta_{8}\sigma^{3} t^{\kappa-4}}{6(1-\eta_{5})(1-\eta_{6})(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-3)} \bigg] \int_{0}^{\sigma} X(\tau)\,d\tau. \end{split}$$

In the following, we use the Banach contraction theorem to prove the uniqueness of the solution of system (1).

**Theorem 1.** *Let the functions* $\chi_{1}, \chi_{2} : J \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ *be continuous and satisfy the following hypothesis:*

**H**$_1$: *For every* $t \in J$ *and* $u, v, Y, X, \bar{u}, \bar{v}, \bar{Y}, \bar{X} : J \to \mathbb{R}$*, there are constants* $\mathcal{L}_{\chi_1}, \mathcal{L}_{\chi_2}, \overline{\mathcal{L}}_{\chi_1}, \overline{\mathcal{L}}_{\chi_2}$*, such that*

$$\left|\chi_{1}\big(t, u(t), Y(t)\big) - \chi_{1}\big(t, \bar{u}(t), \bar{Y}(t)\big)\right| \leq \mathcal{L}_{\chi_1}|u(t) - \bar{u}(t)| + \overline{\mathcal{L}}_{\chi_1}|Y(t) - \bar{Y}(t)|,$$

$$\left|\chi_{2}\big(t, v(t), X(t)\big) - \chi_{2}\big(t, \bar{v}(t), \bar{X}(t)\big)\right| \leq \mathcal{L}_{\chi_2}|v(t) - \bar{v}(t)| + \overline{\mathcal{L}}_{\chi_2}|X(t) - \bar{X}(t)|.$$

*In addition, suppose that*

$$\frac{\mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1}(1-\overline{\mathcal{L}}_{\chi_2}) + \mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}(1-\overline{\mathcal{L}}_{\chi_1})}{(1-\overline{\mathcal{L}}_{\chi_2})(1-\overline{\mathcal{L}}_{\chi_1})} < 1,$$

*where* Q*<sup>α</sup> and* Q*<sup>κ</sup> are defined by Equations* (6) *and* (7)*, respectively. Furthermore,* 0 ≤ L*χ*<sup>1</sup> *,* L*χ*<sup>2</sup> < 1 *(through out the paper). Then, the solution of system* (1) *is unique.*

**Proof.** Set $\sup_{t \in J} |\chi_{1}(t, 0, 0)| = \Phi^{*} < \infty$ and $\sup_{t \in J} |\chi_{2}(t, 0, 0)| = \Psi^{*} < \infty$, and choose $r$ such that

$$r \geq \frac{2\mathfrak{Q}_{\alpha}\Phi^{*}(1-\overline{\mathcal{L}}_{\chi_2}) + 2\mathfrak{Q}_{\kappa}\Psi^{*}(1-\overline{\mathcal{L}}_{\chi_1})}{2(1-\overline{\mathcal{L}}_{\chi_1})(1-\overline{\mathcal{L}}_{\chi_2}) - \mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1} - \mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}}.$$

We show that F(B*r*) ⊂ B*<sup>r</sup>* , where

$$\mathcal{B}\_r = \left\{ (\mathbf{v}, \mathbf{u}) \in \mathbb{S} : \| (\mathbf{v}, \mathbf{u}) \|\_{\mathbb{S}} \le r, \ \| \mathbf{v} \| \le \frac{r}{2}, \ \| \mathbf{u} \| \le \frac{r}{2} \right\}.$$

For (v, u) ∈ B*<sup>r</sup>* , we have

$$\begin{split} t^{4-\alpha}\big|\mathsf{F}_{\alpha}(v)(t)\big| \leq{}& \frac{t^{4-\alpha}}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1} \Big( \big|\chi_{1}(\tau, u(\tau), Y(\tau)) - \chi_{1}(\tau,0,0)\big| + \big|\chi_{1}(\tau,0,0)\big| \Big)\,d\tau \\ &+ \frac{\eta_{1}}{6(1-\eta_{1})\Gamma(\alpha-3)} \int_{0}^{\sigma} (\sigma-\tau)^{3} \Big( \big|\chi_{1}(\tau, u(\tau), Y(\tau)) - \chi_{1}(\tau,0,0)\big| + \big|\chi_{1}(\tau,0,0)\big| \Big)\,d\tau \\ &+ \bigg[ \frac{\eta_{2} t}{2(1-\eta_{2})\Gamma(\alpha-2)} + \frac{\eta_{1}\eta_{2}\sigma}{2(1-\eta_{1})(1-\eta_{2})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)^{2} \Big( \big|\chi_{1}(\tau, u(\tau), Y(\tau)) - \chi_{1}(\tau,0,0)\big| + \big|\chi_{1}(\tau,0,0)\big| \Big)\,d\tau \\ &+ \bigg[ \frac{\eta_{3} t^{2}}{(1-\eta_{3})\Gamma(\alpha-1)} + \frac{\eta_{2}\eta_{3} t\sigma}{(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-2)} + \frac{\eta_{1}(1+\eta_{2})\eta_{3}\sigma^{2}}{2(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau) \Big( \big|\chi_{1}(\tau, u(\tau), Y(\tau)) - \chi_{1}(\tau,0,0)\big| + \big|\chi_{1}(\tau,0,0)\big| \Big)\,d\tau \\ &+ \bigg[ \frac{\eta_{4} t^{3}}{(1-\eta_{4})\Gamma(\alpha)} + \frac{\eta_{3}\eta_{4}\sigma t^{2}}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-1)} + \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{2} t}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \\ &\qquad + \frac{\eta_{1}\big((1+\eta_{2})(1+\eta_{3})+\eta_{2}+\eta_{3}\big)\eta_{4}\sigma^{3}}{6(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} \Big( \big|\chi_{1}(\tau, u(\tau), Y(\tau)) - \chi_{1}(\tau,0,0)\big| + \big|\chi_{1}(\tau,0,0)\big| \Big)\,d\tau. \end{split} \tag{9}$$

Consider

$$\begin{aligned} |Y(t)| &\leq \big|\chi_{1}(t, u(t), Y(t)) - \chi_{1}(t, 0, 0)\big| + \big|\chi_{1}(t, 0, 0)\big| \\ &\leq |\chi_{1}(t, 0, 0)| + \mathcal{L}_{\chi_1}|u(t)| + \overline{\mathcal{L}}_{\chi_1}|Y(t)| \\ &\leq \frac{|\chi_{1}(t, 0, 0)| + \mathcal{L}_{\chi_1}|u(t)|}{1-\overline{\mathcal{L}}_{\chi_1}}. \end{aligned} \tag{10}$$

Substituting (10) in (9), we get

$$\begin{split} \|\mathsf{F}_{\alpha}(v)\| \leq{}& \bigg[ \frac{\sigma^{4}}{\Gamma(\alpha+1)} + \left| \frac{\eta_{1}\sigma^{4}}{24(1-\eta_{1})\Gamma(\alpha-3)} \right| + \left| \frac{\eta_{2}(1-\eta_{1})\sigma^{4} + \eta_{1}\eta_{2}\sigma^{4}(\alpha-3)}{6(1-\eta_{1})(1-\eta_{2})\Gamma(\alpha-2)} \right| \\ &+ \left| \frac{\eta_{3}(1-\eta_{2})\sigma^{4} + \eta_{2}\eta_{3}\sigma^{4}(\alpha-2)}{2(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-1)} \right| + \left| \frac{\eta_{1}(1+\eta_{2})\eta_{3}\sigma^{4}}{2(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-3)} \right| \\ &+ \left| \frac{(1-\eta_{3})\eta_{4}\sigma^{4} + \eta_{3}\eta_{4}\sigma^{4}(\alpha-1)}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha)} \right| + \left| \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{4}}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \right| \\ &+ \left| \frac{\eta_{1}\big((1+\eta_{2})(1+\eta_{3})+\eta_{2}+\eta_{3}\big)\eta_{4}\sigma^{4}}{6(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-3)} \right| \bigg] \frac{2\Phi^{*} + \mathcal{L}_{\chi_1} r}{2(1-\overline{\mathcal{L}}_{\chi_1})}. \end{split}$$

Therefore,

$$\|\mathsf{F}_{\alpha}(v)\| \leq \mathfrak{Q}_{\alpha}\, \frac{2\Phi^{*} + \mathcal{L}_{\chi_1} r}{2(1-\overline{\mathcal{L}}_{\chi_1})}. \tag{11}$$

In the same way, we can write

$$\|\mathsf{F}_{\kappa}(u)\| \leq \mathfrak{Q}_{\kappa}\, \frac{2\Psi^{*} + \mathcal{L}_{\chi_2} r}{2(1-\overline{\mathcal{L}}_{\chi_2})}. \tag{12}$$

Inequalities (11) and (12) combined give

$$\|\mathsf{F}(\mathsf{v},\mathsf{u})\|\_{\mathsf{S}} \leq r.$$

For any t ∈ J, and (v1, u1),(v2, u2) ∈ S, we get

$$\begin{split} t^{4-\alpha}\big|\mathsf{F}_{\alpha}(v_{1})(t) - \mathsf{F}_{\alpha}(v_{2})(t)\big| \leq{}& \frac{t^{4-\alpha}}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1} \big|\chi_{1}(\tau, u_{1}(\tau), Y_{1}(\tau)) - \chi_{1}(\tau, u_{2}(\tau), Y_{2}(\tau))\big|\,d\tau \\ &+ \frac{\eta_{1}}{6(1-\eta_{1})\Gamma(\alpha-3)} \int_{0}^{\sigma} (\sigma-\tau)^{3} \big|\chi_{1}(\tau, u_{1}(\tau), Y_{1}(\tau)) - \chi_{1}(\tau, u_{2}(\tau), Y_{2}(\tau))\big|\,d\tau \\ &+ \bigg[ \frac{\eta_{2} t}{2(1-\eta_{2})\Gamma(\alpha-2)} + \frac{\eta_{1}\eta_{2}\sigma}{2(1-\eta_{1})(1-\eta_{2})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)^{2} \big|\chi_{1}(\tau, u_{1}(\tau), Y_{1}(\tau)) - \chi_{1}(\tau, u_{2}(\tau), Y_{2}(\tau))\big|\,d\tau \\ &+ \bigg[ \frac{\eta_{3} t^{2}}{(1-\eta_{3})\Gamma(\alpha-1)} + \frac{\eta_{2}\eta_{3} t\sigma}{(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-2)} + \frac{\eta_{1}(1+\eta_{2})\eta_{3}\sigma^{2}}{2(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau) \big|\chi_{1}(\tau, u_{1}(\tau), Y_{1}(\tau)) - \chi_{1}(\tau, u_{2}(\tau), Y_{2}(\tau))\big|\,d\tau \\ &+ \bigg[ \frac{\eta_{4} t^{3}}{(1-\eta_{4})\Gamma(\alpha)} + \frac{\eta_{3}\eta_{4}\sigma t^{2}}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-1)} + \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{2} t}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \\ &\qquad + \frac{\eta_{1}\big((1+\eta_{2})(1+\eta_{3})+\eta_{2}+\eta_{3}\big)\eta_{4}\sigma^{3}}{6(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} \big|\chi_{1}(\tau, u_{1}(\tau), Y_{1}(\tau)) - \chi_{1}(\tau, u_{2}(\tau), Y_{2}(\tau))\big|\,d\tau \\ \leq{}& \bigg[ \frac{\sigma^{4}}{\Gamma(\alpha+1)} + \left| \frac{\eta_{1}\sigma^{4}}{24(1-\eta_{1})\Gamma(\alpha-3)} \right| + \left| \frac{\eta_{2}(1-\eta_{1})\sigma^{4} + \eta_{1}\eta_{2}\sigma^{4}(\alpha-3)}{6(1-\eta_{1})(1-\eta_{2})\Gamma(\alpha-2)} \right| + \left| \frac{\eta_{3}(1-\eta_{2})\sigma^{4} + \eta_{2}\eta_{3}\sigma^{4}(\alpha-2)}{2(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-1)} \right| \\ &+ \left| \frac{\eta_{1}(1+\eta_{2})\eta_{3}\sigma^{4}}{2(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-3)} \right| + \left| \frac{(1-\eta_{3})\eta_{4}\sigma^{4} + \eta_{3}\eta_{4}\sigma^{4}(\alpha-1)}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha)} \right| + \left| \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{4}}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \right| \\ &+ \left| \frac{\eta_{1}\big((1+\eta_{2})(1+\eta_{3})+\eta_{2}+\eta_{3}\big)\eta_{4}\sigma^{4}}{6(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-3)} \right| \bigg] \frac{\mathcal{L}_{\chi_1}}{1-\overline{\mathcal{L}}_{\chi_1}} \|u_{1} - u_{2}\| \end{split}$$

and thus we get

$$\|\mathsf{F}_{\alpha}(v_{1}) - \mathsf{F}_{\alpha}(v_{2})\| \leq \frac{\mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1}}{1-\overline{\mathcal{L}}_{\chi_1}} \|u_{1} - u_{2}\|. \tag{13}$$

Similarly,

$$\|\mathsf{F}_{\kappa}(u_{1}) - \mathsf{F}_{\kappa}(u_{2})\| \leq \frac{\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}}{1-\overline{\mathcal{L}}_{\chi_2}} \|v_{1} - v_{2}\|. \tag{14}$$

From the inequalities (13) and (14), we get that

$$\|\mathsf{F}(v_{1},u_{1}) - \mathsf{F}(v_{2},u_{2})\|_{\mathsf{S}} \leq \frac{\mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1}(1-\overline{\mathcal{L}}_{\chi_2}) + \mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}(1-\overline{\mathcal{L}}_{\chi_1})}{(1-\overline{\mathcal{L}}_{\chi_2})(1-\overline{\mathcal{L}}_{\chi_1})} \|(v_{1},u_{1}) - (v_{2},u_{2})\|_{\mathsf{S}}.$$

Therefore, F is a contraction operator, so by Banach's fixed point theorem F has a unique fixed point; hence the solution of problem (1) is unique.
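The mechanism behind this proof is the method of successive approximations: since F is a contraction, iterating it from any starting point converges to its unique fixed point. The following minimal sketch illustrates this on a hypothetical scalar coupled system written as a fixed point problem; the map and starting points are assumptions chosen only so the contraction condition holds, not data from the paper.

```python
import math

def picard(f, x0, tol=1e-12, max_iter=500):
    """Successive approximations x_{n+1} = f(x_n); converges when f is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if max(abs(a - b) for a, b in zip(x_next, x)) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# Hypothetical coupled system as a fixed point problem (v, u) = F(v, u):
#   v = 0.3*sin(u) + 1,   u = 0.25*cos(v)
# The partial derivatives are bounded by 0.3 and 0.25, so F is a contraction.
F = lambda p: (0.3 * math.sin(p[1]) + 1.0, 0.25 * math.cos(p[0]))

sol_a = picard(F, (0.0, 0.0))    # start at the origin
sol_b = picard(F, (5.0, -7.0))   # start far away: same limit, by uniqueness
```

Both iterations converge to the same point, mirroring the uniqueness conclusion of Theorem 1.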

The next result is based on the following Leray–Schauder alternative theorem.

**Theorem 2.** *[46] Let* F : S → S *be a completely continuous operator (i.e., the restriction of* F *to any bounded set in* S *is compact). Let*

$$\mathcal{B}(\mathsf{F}) = \{ v \in \mathsf{S} : v = \lambda \mathsf{F}(v), \ \lambda \in [0, 1] \}.$$

*Then, either the operator* F *has at least one fixed point or the set* B(F) *is unbounded.*

**Theorem 3.** *Suppose the functions χ*1, *χ*<sup>2</sup> : J × R × R → R *are continuous and satisfy the following hypothesis:*

**H**$_2$: *For every* $t \in J$ *and* $u, Y : J \to \mathbb{R}$*, there are* $\phi_{i}\ (i = 1, 2, 3) : J \to \mathbb{R}_{+}$*, such that*

$$\big|\chi_{1}\big(t, u(t), Y(t)\big)\big| \leq \phi_{1}(t) + \phi_{2}(t)|u(t)| + \phi_{3}(t)|Y(t)|.$$

*Similarly, for every* $t \in J$ *and* $v, X : J \to \mathbb{R}$*, there are* $\wp_{i}\ (i = 1, 2, 3) : J \to \mathbb{R}_{+}$*, such that*

$$\big|\chi_{2}\big(t, v(t), X(t)\big)\big| \leq \wp_{1}(t) + \wp_{2}(t)|v(t)| + \wp_{3}(t)|X(t)|,$$

*with* $\sup_{t \in J} \phi_{i}(t) = \phi_{i}^{*}$, $\sup_{t \in J} \wp_{i}(t) = \wp_{i}^{*}$ $(i = 1, 2, 3)$*. In addition, it is assumed that*

$$\mathfrak{Q}_{0} = \max\left\{ \frac{\mathfrak{Q}_{\kappa}\wp_{2}^{*}}{1-\wp_{3}^{*}}, \frac{\mathfrak{Q}_{\alpha}\phi_{2}^{*}}{1-\phi_{3}^{*}} \right\} < 1 \quad\text{and}\quad 0 \leq \phi_{3}^{*}, \wp_{3}^{*} < 1. \tag{15}$$

*Then, the system* (1) *has at least one solution.*

**Proof.** First, we prove that F is completely continuous. In view of the continuity of *χ*1 and *χ*2, the operator F is continuous. For any (v, u) ∈ B*<sup>r</sup>*, we have

$$\begin{split} t^{4-\alpha}\big|\mathsf{F}_{\alpha}(v)(t)\big| \leq{}& \frac{t^{4-\alpha}}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1} |Y(\tau)|\,d\tau + \frac{\eta_{1}}{6(1-\eta_{1})\Gamma(\alpha-3)} \int_{0}^{\sigma} (\sigma-\tau)^{3} |Y(\tau)|\,d\tau \\ &+ \bigg[ \frac{\eta_{2} t}{2(1-\eta_{2})\Gamma(\alpha-2)} + \frac{\eta_{1}\eta_{2}\sigma}{2(1-\eta_{1})(1-\eta_{2})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)^{2} |Y(\tau)|\,d\tau \\ &+ \bigg[ \frac{\eta_{3} t^{2}}{(1-\eta_{3})\Gamma(\alpha-1)} + \frac{\eta_{2}\eta_{3} t\sigma}{(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-2)} + \frac{\eta_{1}(1+\eta_{2})\eta_{3}\sigma^{2}}{2(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau) |Y(\tau)|\,d\tau \\ &+ \bigg[ \frac{\eta_{4} t^{3}}{(1-\eta_{4})\Gamma(\alpha)} + \frac{\eta_{3}\eta_{4}\sigma t^{2}}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-1)} + \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{2} t}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \\ &\qquad + \frac{\eta_{1}\big((1+\eta_{2})(1+\eta_{3})+\eta_{2}+\eta_{3}\big)\eta_{4}\sigma^{3}}{6(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} |Y(\tau)|\,d\tau. \end{split} \tag{16}$$

Now by **H**2, we have

$$\begin{aligned} |Y(t)| &= \big|\chi_{1}\big(t, u(t), Y(t)\big)\big| \\ &\leq \phi_{1}(t) + \phi_{2}(t)|u(t)| + \phi_{3}(t)|Y(t)| \\ &\leq \frac{\phi_{1}(t) + \phi_{2}(t)|u(t)|}{1-\phi_{3}(t)}. \end{aligned}$$

Therefore, (16) implies

$$\begin{split} \|\mathsf{F}_{\alpha}(v)\| \leq{}& \bigg[ \frac{\sigma^{4}}{\Gamma(\alpha+1)} + \left| \frac{\eta_{1}\sigma^{4}}{24(1-\eta_{1})\Gamma(\alpha-3)} \right| + \left| \frac{\eta_{2}(1-\eta_{1})\sigma^{4} + \eta_{1}\eta_{2}\sigma^{4}(\alpha-3)}{6(1-\eta_{1})(1-\eta_{2})\Gamma(\alpha-2)} \right| + \left| \frac{\eta_{3}(1-\eta_{2})\sigma^{4} + \eta_{2}\eta_{3}\sigma^{4}(\alpha-2)}{2(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-1)} \right| \\ &+ \left| \frac{\eta_{1}(1+\eta_{2})\eta_{3}\sigma^{4}}{2(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-3)} \right| + \left| \frac{(1-\eta_{3})\eta_{4}\sigma^{4} + \eta_{3}\eta_{4}\sigma^{4}(\alpha-1)}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha)} \right| + \left| \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{4}}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \right| \\ &+ \left| \frac{\eta_{1}\big((1+\eta_{2})(1+\eta_{3})+\eta_{2}+\eta_{3}\big)\eta_{4}\sigma^{4}}{6(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-3)} \right| \bigg] \frac{2\phi_{1}^{*} + \phi_{2}^{*} r}{2(1-\phi_{3}^{*})}, \end{split} \tag{17}$$

which implies that

$$\|\mathsf{F}_{\alpha}(v)\| \leq \mathfrak{Q}_{\alpha}\, \frac{2\phi_{1}^{*} + \phi_{2}^{*} r}{2(1-\phi_{3}^{*})}. \tag{18}$$

Similarly, we get

$$\|\mathsf{F}_{\kappa}(u)\| \leq \mathfrak{Q}_{\kappa}\, \frac{2\wp_{1}^{*} + \wp_{2}^{*} r}{2(1-\wp_{3}^{*})}. \tag{19}$$

Thus, it follows from the inequalities (18) and (19) that F is uniformly bounded. Next, we prove that F is equicontinuous. Let 0 ≤ t<sub>2</sub> ≤ t<sub>1</sub> ≤ σ. Then, we have

$$\begin{split} &\big| t_{1}^{4-\alpha}\mathsf{F}_{\alpha}(v)(t_{1}) - t_{2}^{4-\alpha}\mathsf{F}_{\alpha}(v)(t_{2}) \big| \\ ={}& \bigg| \frac{1}{\Gamma(\alpha)} \int_{0}^{t_{1}} \Big[ t_{1}^{4-\alpha}(t_{1}-\tau)^{\alpha-1} - t_{2}^{4-\alpha}(t_{2}-\tau)^{\alpha-1} \Big] Y(\tau)\,d\tau - \frac{1}{\Gamma(\alpha)} \int_{t_{1}}^{t_{2}} t_{2}^{4-\alpha}(t_{2}-\tau)^{\alpha-1}\, Y(\tau)\,d\tau \\ &+ \bigg[ \frac{\eta_{4}(t_{1}^{3}-t_{2}^{3})}{(1-\eta_{4})\Gamma(\alpha)} + \frac{\eta_{3}\eta_{4}\sigma(t_{1}^{2}-t_{2}^{2})}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-1)} + \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{2}(t_{1}-t_{2})}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \bigg] \int_{0}^{\sigma} Y(\tau)\,d\tau \\ &+ \bigg[ \frac{\eta_{3}(t_{1}^{2}-t_{2}^{2})}{(1-\eta_{3})\Gamma(\alpha-1)} + \frac{\eta_{2}\eta_{3}\sigma(t_{1}-t_{2})}{(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-2)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)\, Y(\tau)\,d\tau + \frac{\eta_{2}(t_{1}-t_{2})}{2(1-\eta_{2})\Gamma(\alpha-2)} \int_{0}^{\sigma} (\sigma-\tau)^{2}\, Y(\tau)\,d\tau \bigg|. \end{split}$$

Therefore, we get

$$\begin{split} &\big| t_{1}^{4-\alpha}\mathsf{F}_{\alpha}(v)(t_{1}) - t_{2}^{4-\alpha}\mathsf{F}_{\alpha}(v)(t_{2}) \big| \\ \leq{}& \bigg[ \frac{1}{\Gamma(\alpha)} \int_{0}^{t_{1}} \Big| t_{1}^{4-\alpha}(t_{1}-\tau)^{\alpha-1} - t_{2}^{4-\alpha}(t_{2}-\tau)^{\alpha-1} \Big|\,d\tau + \frac{1}{\Gamma(\alpha)} \bigg| \int_{t_{1}}^{t_{2}} t_{2}^{4-\alpha}(t_{2}-\tau)^{\alpha-1}\,d\tau \bigg| \\ &+ \frac{\eta_{4}\sigma(t_{1}^{3}-t_{2}^{3})}{(1-\eta_{4})\Gamma(\alpha)} + \frac{\eta_{3}\eta_{4}\sigma^{2}(t_{1}^{2}-t_{2}^{2})}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-1)} + \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{3}(t_{1}-t_{2})}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \\ &+ \frac{\eta_{3}\sigma^{2}(t_{1}^{2}-t_{2}^{2})}{2(1-\eta_{3})\Gamma(\alpha-1)} + \frac{\eta_{2}\eta_{3}\sigma^{3}(t_{1}-t_{2})}{2(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-2)} + \frac{\eta_{2}\sigma^{3}(t_{1}-t_{2})}{6(1-\eta_{2})\Gamma(\alpha-2)} \bigg] \frac{\phi_{1}^{*} + \phi_{2}^{*}\|u\|}{1-\phi_{3}^{*}} \longrightarrow 0 \quad \text{as } t_{1} \longrightarrow t_{2}. \end{split}$$

Similarly

$$\begin{split} &\big| t_{1}^{4-\kappa}\mathsf{F}_{\kappa}(u)(t_{1}) - t_{2}^{4-\kappa}\mathsf{F}_{\kappa}(u)(t_{2}) \big| \\ \leq{}& \bigg[ \frac{1}{\Gamma(\kappa)} \int_{0}^{t_{1}} \Big| t_{1}^{4-\kappa}(t_{1}-\tau)^{\kappa-1} - t_{2}^{4-\kappa}(t_{2}-\tau)^{\kappa-1} \Big|\,d\tau + \frac{1}{\Gamma(\kappa)} \bigg| \int_{t_{1}}^{t_{2}} t_{2}^{4-\kappa}(t_{2}-\tau)^{\kappa-1}\,d\tau \bigg| \\ &+ \frac{\eta_{8}\sigma(t_{1}^{3}-t_{2}^{3})}{(1-\eta_{8})\Gamma(\kappa)} + \frac{\eta_{7}\eta_{8}\sigma^{2}(t_{1}^{2}-t_{2}^{2})}{(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-1)} + \frac{\eta_{6}(1+\eta_{7})\eta_{8}\sigma^{3}(t_{1}-t_{2})}{2(1-\eta_{6})(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-2)} \\ &+ \frac{\eta_{7}\sigma^{2}(t_{1}^{2}-t_{2}^{2})}{2(1-\eta_{7})\Gamma(\kappa-1)} + \frac{\eta_{6}\eta_{7}\sigma^{3}(t_{1}-t_{2})}{2(1-\eta_{6})(1-\eta_{7})\Gamma(\kappa-2)} + \frac{\eta_{6}\sigma^{3}(t_{1}-t_{2})}{6(1-\eta_{6})\Gamma(\kappa-2)} \bigg] \frac{\wp_{1}^{*} + \wp_{2}^{*}\|v\|}{1-\wp_{3}^{*}} \longrightarrow 0 \quad \text{as } t_{1} \longrightarrow t_{2}. \end{split}$$

Therefore, F(v, u) is equicontinuous. We have shown that the operator F(v, u) is continuous, uniformly bounded, and equicontinuous; hence, by the Arzelà–Ascoli theorem, F(v, u) is compact and therefore completely continuous.

Finally, we check that B = {(v, u) ∈ S : (v, u) = *λ*F(v, u), *λ* ∈ [0, 1]} is bounded. Suppose (v, u) ∈ B; then (v, u) = *λ*F(v, u), and for t ∈ J we have

$$v(t) = \lambda \mathsf{F}_{\alpha}(v)(t), \quad u(t) = \lambda \mathsf{F}_{\kappa}(u)(t).$$

Then,

$$\begin{split} t^{4-\alpha}|v(t)| \leq{}& \bigg[ \frac{\sigma^{4}}{\Gamma(\alpha+1)} + \left| \frac{\eta_{1}\sigma^{4}}{24(1-\eta_{1})\Gamma(\alpha-3)} \right| + \left| \frac{\eta_{2}(1-\eta_{1})\sigma^{4} + \eta_{1}\eta_{2}\sigma^{4}(\alpha-3)}{6(1-\eta_{1})(1-\eta_{2})\Gamma(\alpha-2)} \right| \\ &+ \left| \frac{\eta_{3}(1-\eta_{2})\sigma^{4} + \eta_{2}\eta_{3}\sigma^{4}(\alpha-2)}{2(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-1)} \right| + \left| \frac{\eta_{1}(1+\eta_{2})\eta_{3}\sigma^{4}}{2(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-3)} \right| \\ &+ \left| \frac{(1-\eta_{3})\eta_{4}\sigma^{4} + \eta_{3}\eta_{4}\sigma^{4}(\alpha-1)}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha)} \right| + \left| \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{4}}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \right| \\ &+ \left| \frac{\eta_{1}\big((1+\eta_{2})(1+\eta_{3})+\eta_{2}+\eta_{3}\big)\eta_{4}\sigma^{4}}{6(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-3)} \right| \bigg] \frac{\phi_{1}^{*} + \phi_{2}^{*}\|u\|}{1-\phi_{3}^{*}} \end{split} \tag{20}$$

and

$$\begin{split} t^{4-\kappa}|u(t)| \leq{}& \bigg[ \frac{\sigma^{4}}{\Gamma(\kappa+1)} + \left| \frac{\eta_{5}\sigma^{4}}{24(1-\eta_{5})\Gamma(\kappa-3)} \right| + \left| \frac{\eta_{6}(1-\eta_{5})\sigma^{4} + \eta_{5}\eta_{6}\sigma^{4}(\kappa-3)}{6(1-\eta_{5})(1-\eta_{6})\Gamma(\kappa-2)} \right| \\ &+ \left| \frac{\eta_{7}(1-\eta_{6})\sigma^{4} + \eta_{6}\eta_{7}\sigma^{4}(\kappa-2)}{2(1-\eta_{6})(1-\eta_{7})\Gamma(\kappa-1)} \right| + \left| \frac{\eta_{5}(1+\eta_{6})\eta_{7}\sigma^{4}}{2(1-\eta_{5})(1-\eta_{6})(1-\eta_{7})\Gamma(\kappa-3)} \right| \\ &+ \left| \frac{(1-\eta_{7})\eta_{8}\sigma^{4} + \eta_{7}\eta_{8}\sigma^{4}(\kappa-1)}{(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa)} \right| + \left| \frac{\eta_{6}(1+\eta_{7})\eta_{8}\sigma^{4}}{2(1-\eta_{6})(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-2)} \right| \\ &+ \left| \frac{\eta_{5}\big((1+\eta_{6})(1+\eta_{7})+\eta_{6}+\eta_{7}\big)\eta_{8}\sigma^{4}}{6(1-\eta_{5})(1-\eta_{6})(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-3)} \right| \bigg] \frac{\wp_{1}^{*} + \wp_{2}^{*}\|v\|}{1-\wp_{3}^{*}}. \end{split} \tag{21}$$

Therefore, from (20) and (21), we have

$$\|v\| \leq \mathfrak{Q}_{\alpha}\, \frac{\phi_{1}^{*} + \phi_{2}^{*}\|u\|}{1-\phi_{3}^{*}}$$

and

$$\|u\| \leq \mathfrak{Q}_{\kappa}\, \frac{\wp_{1}^{*} + \wp_{2}^{*}\|v\|}{1-\wp_{3}^{*}},$$

which imply that

$$\|v\| + \|u\| \leq \frac{\mathfrak{Q}_{\alpha}\phi_{1}^{*}}{1-\phi_{3}^{*}} + \frac{\mathfrak{Q}_{\kappa}\wp_{1}^{*}}{1-\wp_{3}^{*}} + \frac{\mathfrak{Q}_{\kappa}\wp_{2}^{*}\|v\|}{1-\wp_{3}^{*}} + \frac{\mathfrak{Q}_{\alpha}\phi_{2}^{*}\|u\|}{1-\phi_{3}^{*}}.$$

Consequently, we get

$$\|(v, u)\|_{\mathsf{S}} \leq \frac{\mathfrak{Q}_{\alpha}\phi_{1}^{*}(1-\wp_{3}^{*}) + \mathfrak{Q}_{\kappa}\wp_{1}^{*}(1-\phi_{3}^{*})}{(1-\phi_{3}^{*})(1-\wp_{3}^{*})(1-\mathfrak{Q}_{0})},$$

where $\mathfrak{Q}_{0}$ is defined by (15). This shows that B is bounded. Therefore, by Theorem 2, F has at least one fixed point, and thus the system (1) has at least one solution.

#### **4. Stability Results**

Let us recall some definitions related to HU stability:

Suppose the functions $\Theta_{\alpha}, \Theta_{\kappa} : J \to \mathbb{R}_{+}$ are nondecreasing and $\epsilon_{\alpha}, \epsilon_{\kappa} > 0$. Consider the inequalities given below:

$$\begin{cases} \big| \mathfrak{D}^{\alpha} v(t) - \chi_{1}\big(t, u(t), \mathfrak{D}^{\alpha} v(t)\big) \big| \leq \epsilon_{\alpha}, & t \in J, \\ \big| \mathfrak{D}^{\kappa} u(t) - \chi_{2}\big(t, v(t), \mathfrak{D}^{\kappa} u(t)\big) \big| \leq \epsilon_{\kappa}, & t \in J, \end{cases} \tag{22}$$

$$\begin{cases} \big| \mathfrak{D}^{\alpha} v(t) - \chi_{1}\big(t, u(t), \mathfrak{D}^{\alpha} v(t)\big) \big| \leq \Theta_{\alpha}(t)\,\epsilon_{\alpha}, & t \in J, \\ \big| \mathfrak{D}^{\kappa} u(t) - \chi_{2}\big(t, v(t), \mathfrak{D}^{\kappa} u(t)\big) \big| \leq \Theta_{\kappa}(t)\,\epsilon_{\kappa}, & t \in J, \end{cases} \tag{23}$$

$$\begin{cases} \big| \mathfrak{D}^{\alpha} v(t) - \chi_{1}\big(t, u(t), \mathfrak{D}^{\alpha} v(t)\big) \big| \leq \Theta_{\alpha}(t), & t \in J, \\ \big| \mathfrak{D}^{\kappa} u(t) - \chi_{2}\big(t, v(t), \mathfrak{D}^{\kappa} u(t)\big) \big| \leq \Theta_{\kappa}(t), & t \in J. \end{cases} \tag{24}$$

**Definition 3.** *[47] System* (1) *is* HU *stable if there is* $\mathbf{C}_{\alpha,\kappa} = \max(\mathbf{C}_{\alpha}, \mathbf{C}_{\kappa}) > 0$ *such that, for some* $\epsilon = \max(\epsilon_{\alpha}, \epsilon_{\kappa}) > 0$ *and for each solution* (v, u) ∈ S *of the inequality* (22)*, there is a solution* (w, *ζ*) ∈ S *of system* (1) *with*

$$\big\| (v, u)(t) - (w, \zeta)(t) \big\| \leq \mathbf{C}_{\alpha,\kappa}\,\epsilon, \ t \in J. \tag{25}$$

**Definition 4.** *[47] System* (1) *is generalized* HU *stable, if there is* <sup>Φ</sup>*α*,*<sup>κ</sup>* <sup>∈</sup> <sup>C</sup>(R+, <sup>R</sup>+) *with* Φ*α*,*κ*(0) = 0*, such that for each solution* (v, u) ∈ S *of* (22)*, there is a solution* (w, *ζ*) ∈ S *of problem* (1)*, which satisfies*

$$\big\| (v, u)(t) - (w, \zeta)(t) \big\| \leq \Phi_{\alpha,\kappa}(\epsilon), \ t \in J. \tag{26}$$

**Definition 5.** *[47] System* (1) *is* HU*–Rassias stable with respect to* $\Theta_{\alpha,\kappa} = \max(\Theta_{\alpha}, \Theta_{\kappa}) \in C(J, \mathbb{R})$ *if there are constants* $\mathbf{C}_{\Theta_{\alpha},\Theta_{\kappa}} = \max(\mathbf{C}_{\Theta_{\alpha}}, \mathbf{C}_{\Theta_{\kappa}}) > 0$ *such that, for some* $\epsilon = \max(\epsilon_{\alpha}, \epsilon_{\kappa}) > 0$ *and for each solution* (v, u) ∈ S *of the inequality* (23)*, there is a solution* (w, *ζ*) ∈ S *with*

$$\big\| (v, u)(t) - (w, \zeta)(t) \big\| \leq \mathbf{C}_{\Theta_{\alpha},\Theta_{\kappa}}\,\Theta_{\alpha,\kappa}(t)\,\epsilon, \ t \in J. \tag{27}$$

**Definition 6.** *[47] System* (1) *is generalized* HU*–Rassias stable with respect to* $\Theta_{\alpha,\kappa} = \max(\Theta_{\alpha}, \Theta_{\kappa}) \in C(J, \mathbb{R})$ *if there is a constant* $\mathbf{C}_{\Theta_{\alpha},\Theta_{\kappa}} = \max(\mathbf{C}_{\Theta_{\alpha}}, \mathbf{C}_{\Theta_{\kappa}}) > 0$ *such that, for each solution* (v, u) ∈ S *of the inequality* (24)*, there is a solution* (w, *ζ*) ∈ S *of* (1) *which satisfies*

$$\big\| (v, u)(t) - (w, \zeta)(t) \big\| \leq \mathbf{C}_{\Theta_{\alpha},\Theta_{\kappa}}\,\Theta_{\alpha,\kappa}(t), \ t \in J. \tag{28}$$

**Remark 4.** *We say that* (v, u) ∈ S *is a solution of the inequality* (22) *if there are* $\Psi_{\chi_1}, \Psi_{\chi_2} \in C(J, \mathbb{R})$*, which depend on* v, u*, respectively, such that*

$(A_1)$: $|\Psi_{\chi_1}(t)| \leq \epsilon_{\alpha}$, $|\Psi_{\chi_2}(t)| \leq \epsilon_{\kappa}$, $t \in J$;
$(A_2)$:
$$\begin{cases} \mathfrak{D}^{\alpha} v(t) = \chi_{1}\big(t, u(t), \mathfrak{D}^{\alpha} v(t)\big) + \Psi_{\chi_1}(t), & t \in J, \\ \mathfrak{D}^{\kappa} u(t) = \chi_{2}\big(t, v(t), \mathfrak{D}^{\kappa} u(t)\big) + \Psi_{\chi_2}(t), & t \in J. \end{cases}$$

**Lemma 3.** *Let* (v, u) ∈ S *be the solution of inequality* (22)*, then we have*

$$\begin{cases} \|v - m_{1}\| \leq \mathfrak{Q}_{\alpha}\epsilon_{\alpha}, & t \in J, \\ \|u - m_{2}\| \leq \mathfrak{Q}_{\kappa}\epsilon_{\kappa}, & t \in J. \end{cases}$$

**Proof.** By (*A*2) of Remark 4 and for t ∈ J, we have

$$\begin{cases} \mathfrak{D}^{\alpha} v(t) = \chi_{1}\big(t, u(t), \mathfrak{D}^{\alpha} v(t)\big) + \Psi_{\chi_1}(t), \\ \mathfrak{D}^{\kappa} u(t) = \chi_{2}\big(t, v(t), \mathfrak{D}^{\kappa} u(t)\big) + \Psi_{\chi_2}(t), \\ \mathfrak{D}^{\alpha-4} v(0) = \eta_{1}\mathfrak{D}^{\alpha-4} v(\sigma), \ \mathfrak{D}^{\alpha-3} v(0) = \eta_{2}\mathfrak{D}^{\alpha-3} v(\sigma), \\ \mathfrak{D}^{\alpha-2} v(0) = \eta_{3}\mathfrak{D}^{\alpha-2} v(\sigma), \ \mathfrak{D}^{\alpha-1} v(0) = \eta_{4}\mathfrak{D}^{\alpha-1} v(\sigma), \\ \mathfrak{D}^{\kappa-4} u(0) = \eta_{5}\mathfrak{D}^{\kappa-4} u(\sigma), \ \mathfrak{D}^{\kappa-3} u(0) = \eta_{6}\mathfrak{D}^{\kappa-3} u(\sigma), \\ \mathfrak{D}^{\kappa-2} u(0) = \eta_{7}\mathfrak{D}^{\kappa-2} u(\sigma), \ \mathfrak{D}^{\kappa-1} u(0) = \eta_{8}\mathfrak{D}^{\kappa-1} u(\sigma). \end{cases} \tag{29}$$

By Lemma 1, the solution of (29) can be written as

$$\begin{split} v(t) ={}& \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1} \Big[ \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big) + \Psi_{\chi_1}(\tau) \Big]\,d\tau + \frac{\eta_{1} t^{\alpha-4}}{6(1-\eta_{1})\Gamma(\alpha-3)} \int_{0}^{\sigma} (\sigma-\tau)^{3} \Big[ \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big) + \Psi_{\chi_1}(\tau) \Big]\,d\tau \\ &+ \bigg[ \frac{\eta_{2} t^{\alpha-3}}{2(1-\eta_{2})\Gamma(\alpha-2)} + \frac{\eta_{1}\eta_{2}\sigma t^{\alpha-4}}{2(1-\eta_{1})(1-\eta_{2})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)^{2} \Big[ \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big) + \Psi_{\chi_1}(\tau) \Big]\,d\tau \\ &+ \bigg[ \frac{\eta_{3} t^{\alpha-2}}{(1-\eta_{3})\Gamma(\alpha-1)} + \frac{\eta_{2}\eta_{3}\sigma t^{\alpha-3}}{(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-2)} + \frac{\eta_{1}(1+\eta_{2})\eta_{3}\sigma^{2} t^{\alpha-4}}{2(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau) \Big[ \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big) + \Psi_{\chi_1}(\tau) \Big]\,d\tau \\ &+ \bigg[ \frac{\eta_{4} t^{\alpha-1}}{(1-\eta_{4})\Gamma(\alpha)} + \frac{\eta_{3}\eta_{4}\sigma t^{\alpha-2}}{(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-1)} + \frac{\eta_{2}(1+\eta_{3})\eta_{4}\sigma^{2} t^{\alpha-3}}{2(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-2)} \\ &\qquad + \frac{\eta_{1}\big((1+\eta_{2})(1+\eta_{3})+\eta_{2}+\eta_{3}\big)\eta_{4}\sigma^{3} t^{\alpha-4}}{6(1-\eta_{1})(1-\eta_{2})(1-\eta_{3})(1-\eta_{4})\Gamma(\alpha-3)} \bigg] \int_{0}^{\sigma} \Big[ \chi_{1}\big(\tau, u(\tau), \mathfrak{D}^{\alpha} v(\tau)\big) + \Psi_{\chi_1}(\tau) \Big]\,d\tau, \end{split}$$

$$\begin{split} u(t) ={}& \frac{1}{\Gamma(\kappa)} \int_{0}^{t} (t-\tau)^{\kappa-1} \Big[ \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big) + \Psi_{\chi_2}(\tau) \Big]\,d\tau + \frac{\eta_{5} t^{\kappa-4}}{6(1-\eta_{5})\Gamma(\kappa-3)} \int_{0}^{\sigma} (\sigma-\tau)^{3} \Big[ \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big) + \Psi_{\chi_2}(\tau) \Big]\,d\tau \\ &+ \bigg[ \frac{\eta_{6} t^{\kappa-3}}{2(1-\eta_{6})\Gamma(\kappa-2)} + \frac{\eta_{5}\eta_{6}\sigma t^{\kappa-4}}{2(1-\eta_{5})(1-\eta_{6})\Gamma(\kappa-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau)^{2} \Big[ \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big) + \Psi_{\chi_2}(\tau) \Big]\,d\tau \\ &+ \bigg[ \frac{\eta_{7} t^{\kappa-2}}{(1-\eta_{7})\Gamma(\kappa-1)} + \frac{\eta_{6}\eta_{7}\sigma t^{\kappa-3}}{(1-\eta_{6})(1-\eta_{7})\Gamma(\kappa-2)} + \frac{\eta_{5}(1+\eta_{6})\eta_{7}\sigma^{2} t^{\kappa-4}}{2(1-\eta_{5})(1-\eta_{6})(1-\eta_{7})\Gamma(\kappa-3)} \bigg] \int_{0}^{\sigma} (\sigma-\tau) \Big[ \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big) + \Psi_{\chi_2}(\tau) \Big]\,d\tau \\ &+ \bigg[ \frac{\eta_{8} t^{\kappa-1}}{(1-\eta_{8})\Gamma(\kappa)} + \frac{\eta_{7}\eta_{8}\sigma t^{\kappa-2}}{(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-1)} + \frac{\eta_{6}(1+\eta_{7})\eta_{8}\sigma^{2} t^{\kappa-3}}{2(1-\eta_{6})(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-2)} \\ &\qquad + \frac{\eta_{5}\big((1+\eta_{6})(1+\eta_{7})+\eta_{6}+\eta_{7}\big)\eta_{8}\sigma^{3} t^{\kappa-4}}{6(1-\eta_{5})(1-\eta_{6})(1-\eta_{7})(1-\eta_{8})\Gamma(\kappa-3)} \bigg] \int_{0}^{\sigma} \Big[ \chi_{2}\big(\tau, v(\tau), \mathfrak{D}^{\kappa} u(\tau)\big) + \Psi_{\chi_2}(\tau) \Big]\,d\tau. \end{split} \tag{30}$$

From the first equation of (30), we have

$$\begin{aligned}
t^{4-\alpha}\big|\mathrm{v}(t)-m_1(t)\big| \le{}& \frac{t^{4-\alpha}}{\Gamma(\alpha)}\int_0^{t}(t-\tau)^{\alpha-1}\Psi_{\chi_1}(\tau)\,d\tau
+\frac{\eta_1}{6(1-\eta_1)\Gamma(\alpha-3)}\int_0^{\sigma}(\sigma-\tau)^{3}\Psi_{\chi_1}(\tau)\,d\tau \\
&+\Big[\frac{\eta_2 t}{2(1-\eta_2)\Gamma(\alpha-2)}+\frac{\eta_1\eta_2\sigma}{2(1-\eta_1)(1-\eta_2)\Gamma(\alpha-3)}\Big]\int_0^{\sigma}(\sigma-\tau)^{2}\Psi_{\chi_1}(\tau)\,d\tau \\
&+\Big[\frac{\eta_3 t^{2}}{(1-\eta_3)\Gamma(\alpha-1)}+\frac{\eta_2\eta_3 t\sigma}{(1-\eta_2)(1-\eta_3)\Gamma(\alpha-2)}+\frac{\eta_1(1+\eta_2)\eta_3\sigma^{2}}{2(1-\eta_1)(1-\eta_2)(1-\eta_3)\Gamma(\alpha-3)}\Big]\int_0^{\sigma}(\sigma-\tau)\Psi_{\chi_1}(\tau)\,d\tau \\
&+\Big[\frac{\eta_4 t^{3}}{(1-\eta_4)\Gamma(\alpha)}+\frac{\eta_3\eta_4\sigma t^{2}}{(1-\eta_3)(1-\eta_4)\Gamma(\alpha-1)}+\frac{\eta_2(1+\eta_3)\eta_4\sigma^{2} t}{2(1-\eta_2)(1-\eta_3)(1-\eta_4)\Gamma(\alpha-2)} \\
&\qquad+\frac{\eta_1\big[(1+\eta_2)(1+\eta_3)+\eta_2+\eta_3\big]\eta_4\sigma^{3}}{6(1-\eta_1)(1-\eta_2)(1-\eta_3)(1-\eta_4)\Gamma(\alpha-3)}\Big]\int_0^{\sigma}\Psi_{\chi_1}(\tau)\,d\tau,
\end{aligned}\tag{31}$$

where $m_1(t)$ collects the terms that are free of $\Psi_{\chi_1}$. Using (6) and (*A*1) of Remark 4, (31) becomes

$$\|\mathrm{v} - m_1\| \le \mathfrak{Q}_{\alpha}\varepsilon_{\alpha}.$$

Similarly, from the second equation of (30), we obtain

$$\|\mathrm{u} - m_2\| \le \mathfrak{Q}_{\kappa}\varepsilon_{\kappa}.$$

*4.1. Method (I)*

**Theorem 4.** *If hypothesis* **H**<sup>1</sup> *and*

$$\Lambda = 1 - \frac{\mathfrak{Q}_{\alpha}\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_1}\mathcal{L}_{\chi_2}}{(1 - \mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})(1 - \mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})} > 0 \tag{32}$$

*hold, with* $0 \le \mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1},\ \mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2} < 1$*. Then system* (1) *is* HU *stable.*

**Proof.** Let (v, u) ∈ S be the solution of (22) and (w, *ζ*) ∈ S be the solution of the following system:

$$\begin{cases}
\mathfrak{D}^{\alpha}\mathrm{w}(t) - \chi_1(t,\zeta(t),\mathfrak{D}^{\alpha}\mathrm{w}(t)) = 0, & t \in \mathrm{J}, \\
\mathfrak{D}^{\kappa}\zeta(t) - \chi_2(t,\mathrm{w}(t),\mathfrak{D}^{\kappa}\zeta(t)) = 0, & t \in \mathrm{J}, \\
\mathfrak{D}^{\alpha-4}\mathrm{w}(0) = \eta_1\mathfrak{D}^{\alpha-4}\mathrm{w}(\sigma), \quad \mathfrak{D}^{\alpha-3}\mathrm{w}(0) = \eta_2\mathfrak{D}^{\alpha-3}\mathrm{w}(\sigma), \\
\mathfrak{D}^{\alpha-2}\mathrm{w}(0) = \eta_3\mathfrak{D}^{\alpha-2}\mathrm{w}(\sigma), \quad \mathfrak{D}^{\alpha-1}\mathrm{w}(0) = \eta_4\mathfrak{D}^{\alpha-1}\mathrm{w}(\sigma), \\
\mathfrak{D}^{\kappa-4}\zeta(0) = \eta_5\mathfrak{D}^{\kappa-4}\zeta(\sigma), \quad \mathfrak{D}^{\kappa-3}\zeta(0) = \eta_6\mathfrak{D}^{\kappa-3}\zeta(\sigma), \\
\mathfrak{D}^{\kappa-2}\zeta(0) = \eta_7\mathfrak{D}^{\kappa-2}\zeta(\sigma), \quad \mathfrak{D}^{\kappa-1}\zeta(0) = \eta_8\mathfrak{D}^{\kappa-1}\zeta(\sigma).
\end{cases}\tag{33}$$

Then in view of Lemma 1, for t ∈ J the solution of (33) is given by:

$$\begin{aligned}
\mathrm{w}(t) ={}& \frac{1}{\Gamma(\alpha)}\int_0^{t}(t-\tau)^{\alpha-1}\chi_1(\tau,\zeta(\tau),\mathfrak{D}^{\alpha}\mathrm{w}(\tau))\,d\tau
+\frac{\eta_1 t^{\alpha-4}}{6(1-\eta_1)\Gamma(\alpha-3)}\int_0^{\sigma}(\sigma-\tau)^{3}\chi_1(\tau,\zeta(\tau),\mathfrak{D}^{\alpha}\mathrm{w}(\tau))\,d\tau \\
&+\Big[\frac{\eta_2 t^{\alpha-3}}{2(1-\eta_2)\Gamma(\alpha-2)}+\frac{\eta_1\eta_2\sigma t^{\alpha-4}}{2(1-\eta_1)(1-\eta_2)\Gamma(\alpha-3)}\Big]\int_0^{\sigma}(\sigma-\tau)^{2}\chi_1(\tau,\zeta(\tau),\mathfrak{D}^{\alpha}\mathrm{w}(\tau))\,d\tau \\
&+\Big[\frac{\eta_3 t^{\alpha-2}}{(1-\eta_3)\Gamma(\alpha-1)}+\frac{\eta_2\eta_3\sigma t^{\alpha-3}}{(1-\eta_2)(1-\eta_3)\Gamma(\alpha-2)}+\frac{\eta_1(1+\eta_2)\eta_3\sigma^{2}t^{\alpha-4}}{2(1-\eta_1)(1-\eta_2)(1-\eta_3)\Gamma(\alpha-3)}\Big]\int_0^{\sigma}(\sigma-\tau)\chi_1(\tau,\zeta(\tau),\mathfrak{D}^{\alpha}\mathrm{w}(\tau))\,d\tau \\
&+\Big[\frac{\eta_4 t^{\alpha-1}}{(1-\eta_4)\Gamma(\alpha)}+\frac{\eta_3\eta_4\sigma t^{\alpha-2}}{(1-\eta_3)(1-\eta_4)\Gamma(\alpha-1)}+\frac{\eta_2(1+\eta_3)\eta_4\sigma^{2}t^{\alpha-3}}{2(1-\eta_2)(1-\eta_3)(1-\eta_4)\Gamma(\alpha-2)} \\
&\qquad+\frac{\eta_1\big[(1+\eta_2)(1+\eta_3)+\eta_2+\eta_3\big]\eta_4\sigma^{3}t^{\alpha-4}}{6(1-\eta_1)(1-\eta_2)(1-\eta_3)(1-\eta_4)\Gamma(\alpha-3)}\Big]\int_0^{\sigma}\chi_1(\tau,\zeta(\tau),\mathfrak{D}^{\alpha}\mathrm{w}(\tau))\,d\tau, \\
\zeta(t) ={}& \frac{1}{\Gamma(\kappa)}\int_0^{t}(t-\tau)^{\kappa-1}\chi_2(\tau,\mathrm{w}(\tau),\mathfrak{D}^{\kappa}\zeta(\tau))\,d\tau
+\frac{\eta_5 t^{\kappa-4}}{6(1-\eta_5)\Gamma(\kappa-3)}\int_0^{\sigma}(\sigma-\tau)^{3}\chi_2(\tau,\mathrm{w}(\tau),\mathfrak{D}^{\kappa}\zeta(\tau))\,d\tau \\
&+\Big[\frac{\eta_6 t^{\kappa-3}}{2(1-\eta_6)\Gamma(\kappa-2)}+\frac{\eta_5\eta_6\sigma t^{\kappa-4}}{2(1-\eta_5)(1-\eta_6)\Gamma(\kappa-3)}\Big]\int_0^{\sigma}(\sigma-\tau)^{2}\chi_2(\tau,\mathrm{w}(\tau),\mathfrak{D}^{\kappa}\zeta(\tau))\,d\tau \\
&+\Big[\frac{\eta_7 t^{\kappa-2}}{(1-\eta_7)\Gamma(\kappa-1)}+\frac{\eta_6\eta_7\sigma t^{\kappa-3}}{(1-\eta_6)(1-\eta_7)\Gamma(\kappa-2)}+\frac{\eta_5(1+\eta_6)\eta_7\sigma^{2}t^{\kappa-4}}{2(1-\eta_5)(1-\eta_6)(1-\eta_7)\Gamma(\kappa-3)}\Big]\int_0^{\sigma}(\sigma-\tau)\chi_2(\tau,\mathrm{w}(\tau),\mathfrak{D}^{\kappa}\zeta(\tau))\,d\tau \\
&+\Big[\frac{\eta_8 t^{\kappa-1}}{(1-\eta_8)\Gamma(\kappa)}+\frac{\eta_7\eta_8\sigma t^{\kappa-2}}{(1-\eta_7)(1-\eta_8)\Gamma(\kappa-1)}+\frac{\eta_6(1+\eta_7)\eta_8\sigma^{2}t^{\kappa-3}}{2(1-\eta_6)(1-\eta_7)(1-\eta_8)\Gamma(\kappa-2)} \\
&\qquad+\frac{\eta_5\big[(1+\eta_6)(1+\eta_7)+\eta_6+\eta_7\big]\eta_8\sigma^{3}t^{\kappa-4}}{6(1-\eta_5)(1-\eta_6)(1-\eta_7)(1-\eta_8)\Gamma(\kappa-3)}\Big]\int_0^{\sigma}\chi_2(\tau,\mathrm{w}(\tau),\mathfrak{D}^{\kappa}\zeta(\tau))\,d\tau.
\end{aligned}\tag{34}$$

Consider

$$t^{4-\alpha}\big|\mathrm{v}(t) - \mathrm{w}(t)\big| \le t^{4-\alpha}\big|\mathrm{v}(t) - m_1(t)\big| + t^{4-\alpha}\big|m_1(t) - \mathrm{w}(t)\big|. \tag{35}$$

Applying Lemma 3 in (35), we get

$$\begin{aligned}
t^{4-\alpha}\big|\mathrm{v}(t)-\mathrm{w}(t)\big| \le{}& \mathfrak{Q}_{\alpha}\varepsilon_{\alpha}
+\frac{t^{4-\alpha}}{\Gamma(\alpha)}\int_0^{t}(t-\tau)^{\alpha-1}\big|\chi_1(\tau,\mathrm{u}(\tau),\mathfrak{D}^{\alpha}\mathrm{v}(\tau))-\chi_1(\tau,\zeta(\tau),\mathfrak{D}^{\alpha}\mathrm{w}(\tau))\big|\,d\tau \\
&+\frac{\eta_1}{6(1-\eta_1)\Gamma(\alpha-3)}\int_0^{\sigma}(\sigma-\tau)^{3}\big|\chi_1(\tau,\mathrm{u}(\tau),\mathfrak{D}^{\alpha}\mathrm{v}(\tau))-\chi_1(\tau,\zeta(\tau),\mathfrak{D}^{\alpha}\mathrm{w}(\tau))\big|\,d\tau \\
&+\Big[\frac{\eta_2 t}{2(1-\eta_2)\Gamma(\alpha-2)}+\frac{\eta_1\eta_2\sigma}{2(1-\eta_1)(1-\eta_2)\Gamma(\alpha-3)}\Big]\int_0^{\sigma}(\sigma-\tau)^{2}\big|\chi_1(\tau,\mathrm{u}(\tau),\mathfrak{D}^{\alpha}\mathrm{v}(\tau))-\chi_1(\tau,\zeta(\tau),\mathfrak{D}^{\alpha}\mathrm{w}(\tau))\big|\,d\tau \\
&+\Big[\frac{\eta_3 t^{2}}{(1-\eta_3)\Gamma(\alpha-1)}+\frac{\eta_2\eta_3 t\sigma}{(1-\eta_2)(1-\eta_3)\Gamma(\alpha-2)}+\frac{\eta_1(1+\eta_2)\eta_3\sigma^{2}}{2(1-\eta_1)(1-\eta_2)(1-\eta_3)\Gamma(\alpha-3)}\Big]\int_0^{\sigma}(\sigma-\tau)\big|\chi_1(\tau,\mathrm{u}(\tau),\mathfrak{D}^{\alpha}\mathrm{v}(\tau))-\chi_1(\tau,\zeta(\tau),\mathfrak{D}^{\alpha}\mathrm{w}(\tau))\big|\,d\tau \\
&+\Big[\frac{\eta_4 t^{3}}{(1-\eta_4)\Gamma(\alpha)}+\frac{\eta_3\eta_4\sigma t^{2}}{(1-\eta_3)(1-\eta_4)\Gamma(\alpha-1)}+\frac{\eta_2(1+\eta_3)\eta_4\sigma^{2} t}{2(1-\eta_2)(1-\eta_3)(1-\eta_4)\Gamma(\alpha-2)} \\
&\qquad+\frac{\eta_1\big[(1+\eta_2)(1+\eta_3)+\eta_2+\eta_3\big]\eta_4\sigma^{3}}{6(1-\eta_1)(1-\eta_2)(1-\eta_3)(1-\eta_4)\Gamma(\alpha-3)}\Big]\int_0^{\sigma}\big|\chi_1(\tau,\mathrm{u}(\tau),\mathfrak{D}^{\alpha}\mathrm{v}(\tau))-\chi_1(\tau,\zeta(\tau),\mathfrak{D}^{\alpha}\mathrm{w}(\tau))\big|\,d\tau \\
\le{}& \mathfrak{Q}_{\alpha}\varepsilon_{\alpha} + \Big[\frac{\sigma^{4}}{\Gamma(\alpha+1)} + \frac{\eta_1\sigma^{4}}{24(1-\eta_1)\Gamma(\alpha-3)} + \frac{\eta_2(1-\eta_1)\sigma^{4}+\eta_1\eta_2\sigma^{4}(\alpha-3)}{6(1-\eta_1)(1-\eta_2)\Gamma(\alpha-2)} \\
&\qquad+ \frac{\eta_3(1-\eta_2)\sigma^{4}+\eta_2\eta_3\sigma^{4}(\alpha-2)}{2(1-\eta_2)(1-\eta_3)\Gamma(\alpha-1)} + \frac{\eta_1(1+\eta_2)\eta_3\sigma^{4}}{2(1-\eta_1)(1-\eta_2)(1-\eta_3)\Gamma(\alpha-3)} \\
&\qquad+ \frac{(1-\eta_3)\eta_4\sigma^{4}+\eta_3\eta_4\sigma^{4}(\alpha-1)}{(1-\eta_3)(1-\eta_4)\Gamma(\alpha)} + \frac{\eta_2(1+\eta_3)\eta_4\sigma^{4}}{2(1-\eta_2)(1-\eta_3)(1-\eta_4)\Gamma(\alpha-2)} \\
&\qquad+ \frac{\eta_1\big[(1+\eta_2)(1+\eta_3)+\eta_2+\eta_3\big]\eta_4\sigma^{4}}{6(1-\eta_1)(1-\eta_2)(1-\eta_3)(1-\eta_4)\Gamma(\alpha-3)}\Big]\Big(\mathcal{L}_{\chi_1}\|\mathrm{u}-\zeta\| + \overline{\mathcal{L}}_{\chi_1}\|\mathfrak{D}^{\alpha}\mathrm{v}-\mathfrak{D}^{\alpha}\mathrm{w}\|\Big).
\end{aligned}\tag{36}$$

Using **H**<sup>1</sup> of Theorem 1 and (6) in (36), we have

$$\|\mathrm{v} - \mathrm{w}\| \le \frac{\mathfrak{Q}_{\alpha}\varepsilon_{\alpha}}{1 - \mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1}} + \frac{\mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1}}{1 - \mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1}}\|\mathrm{u} - \zeta\|. \tag{37}$$

Similarly, we can get

$$\|\mathrm{u} - \zeta\| \le \frac{\mathfrak{Q}_{\kappa}\varepsilon_{\kappa}}{1 - \mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2}} + \frac{\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}}{1 - \mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2}}\|\mathrm{v} - \mathrm{w}\|. \tag{38}$$

We write (37) and (38) as

$$\begin{aligned}
\|\mathrm{v}-\mathrm{w}\| - \frac{\mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1}}{1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1}}\|\mathrm{u}-\zeta\| &\le \frac{\mathfrak{Q}_{\alpha}\varepsilon_{\alpha}}{1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1}}, \\
\|\mathrm{u}-\zeta\| - \frac{\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}}{1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2}}\|\mathrm{v}-\mathrm{w}\| &\le \frac{\mathfrak{Q}_{\kappa}\varepsilon_{\kappa}}{1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2}},
\end{aligned}$$

i.e.,

$$\begin{bmatrix} 1 & -\dfrac{\mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1}}{1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1}} \\ -\dfrac{\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}}{1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2}} & 1 \end{bmatrix}
\begin{bmatrix} \|\mathrm{v}-\mathrm{w}\| \\ \|\mathrm{u}-\zeta\| \end{bmatrix}
\le
\begin{bmatrix} \dfrac{\mathfrak{Q}_{\alpha}\varepsilon_{\alpha}}{1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1}} \\ \dfrac{\mathfrak{Q}_{\kappa}\varepsilon_{\kappa}}{1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2}} \end{bmatrix}.$$

From the above, we get

$$\begin{bmatrix} \|\mathrm{v}-\mathrm{w}\| \\ \|\mathrm{u}-\zeta\| \end{bmatrix}
\le
\begin{bmatrix} \dfrac{1}{\Lambda} & \dfrac{\mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1}}{\Lambda(1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})} \\ \dfrac{\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}}{\Lambda(1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})} & \dfrac{1}{\Lambda} \end{bmatrix}
\begin{bmatrix} \dfrac{\mathfrak{Q}_{\alpha}\varepsilon_{\alpha}}{1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1}} \\ \dfrac{\mathfrak{Q}_{\kappa}\varepsilon_{\kappa}}{1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2}} \end{bmatrix},$$

where

$$\Lambda = 1 - \frac{\mathfrak{Q}_{\alpha}\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_1}\mathcal{L}_{\chi_2}}{(1 - \mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})(1 - \mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})} > 0.$$

Further simplification gives

$$\begin{aligned}
\|\mathrm{v}-\mathrm{w}\| &\le \frac{\mathfrak{Q}_{\alpha}\varepsilon_{\alpha}}{\Lambda(1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})} + \frac{\mathfrak{Q}_{\alpha}\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_1}\varepsilon_{\kappa}}{\Lambda(1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})(1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})}, \\
\|\mathrm{u}-\zeta\| &\le \frac{\mathfrak{Q}_{\kappa}\varepsilon_{\kappa}}{\Lambda(1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})} + \frac{\mathfrak{Q}_{\alpha}\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}\varepsilon_{\alpha}}{\Lambda(1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})(1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})},
\end{aligned}$$

from which we have

$$\begin{aligned}
\|\mathrm{v}-\mathrm{w}\| + \|\mathrm{u}-\zeta\| \le{}& \frac{\mathfrak{Q}_{\alpha}\varepsilon_{\alpha}}{\Lambda(1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})} + \frac{\mathfrak{Q}_{\kappa}\varepsilon_{\kappa}}{\Lambda(1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})} + \frac{\mathfrak{Q}_{\alpha}\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_1}\varepsilon_{\kappa}}{\Lambda(1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})(1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})} \\
&+ \frac{\mathfrak{Q}_{\alpha}\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}\varepsilon_{\alpha}}{\Lambda(1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})(1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})}.
\end{aligned}\tag{39}$$

Let $\varepsilon = \max\{\varepsilon_{\alpha}, \varepsilon_{\kappa}\}$; then from (39) we have

$$\|(\mathrm{v},\mathrm{u}) - (\mathrm{w},\zeta)\|_{\mathbb{S}} \le \mathbf{C}_{\alpha,\kappa}\,\varepsilon, \tag{40}$$

where

$$\begin{aligned}
\mathbf{C}_{\alpha,\kappa} ={}& \frac{\mathfrak{Q}_{\alpha}}{\Lambda(1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})} + \frac{\mathfrak{Q}_{\kappa}}{\Lambda(1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})} + \frac{\mathfrak{Q}_{\alpha}\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_1}}{\Lambda(1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})(1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})} \\
&+ \frac{\mathfrak{Q}_{\alpha}\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}}{\Lambda(1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})(1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})}.
\end{aligned}$$
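The 2×2 matrix step in this proof can be sanity-checked numerically. The sketch below uses hypothetical placeholder values for $\mathfrak{Q}_{\alpha}$, $\mathfrak{Q}_{\kappa}$, the Lipschitz constants, and $\varepsilon_{\alpha}$, $\varepsilon_{\kappa}$ (none taken from the text) and verifies that the determinant of the coefficient matrix equals $\Lambda$ and that inverting it reproduces the closed-form bounds:

```python
# Sanity check of the matrix inversion behind Theorem 4's bounds.
# All constants here are hypothetical placeholders, not the paper's values.
import numpy as np

Qa, Qk = 0.8, 0.9            # hypothetical Q_alpha, Q_kappa
L1, L2 = 0.1, 0.12           # hypothetical L_chi1, L_chi2
L1b, L2b = 0.1, 0.12         # hypothetical Lbar_chi1, Lbar_chi2
ea, ek = 0.01, 0.02          # hypothetical epsilon_alpha, epsilon_kappa

a = Qa * L1 / (1 - Qa * L1b)         # off-diagonal coefficients
b = Qk * L2 / (1 - Qk * L2b)
M = np.array([[1.0, -a], [-b, 1.0]])
rhs = np.array([Qa * ea / (1 - Qa * L1b), Qk * ek / (1 - Qk * L2b)])

# Lambda from (32); it equals det(M), which is why the inverse has 1/Lambda factors.
Lam = 1 - Qa * Qk * L1 * L2 / ((1 - Qa * L1b) * (1 - Qk * L2b))
assert Lam > 0                       # hypothesis of Theorem 4

bounds = np.linalg.solve(M, rhs)     # [||v - w||, ||u - zeta||] bounds
inv = (1 / Lam) * np.array([[1.0, a], [b, 1.0]])   # closed-form inverse
assert np.isclose(np.linalg.det(M), Lam)
assert np.allclose(bounds, inv @ rhs)
```

The assertions confirm the text's inversion: the determinant of the coefficient matrix is exactly $\Lambda$, so the inverse carries the $1/\Lambda$ factors appearing in the displayed bound.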

**Remark 5.** *By setting* Φ*α*,*κ*(*ε*) = **C***α*,*κ* *ε and* Φ*α*,*κ*(0) = 0 *in* (40)*, Definition 4 shows that problem* (1) *is generalized* HU *stable.*

**H**<sup>3</sup>: Let the functions $\Theta_{\alpha}, \Theta_{\kappa} : \mathrm{J} \to \mathbb{R}^{+}$ be nondecreasing. Then there exist $\zeta_{\Theta_{\alpha}}, \zeta_{\Theta_{\kappa}} > 0$ such that, for every $t \in \mathrm{J}$, the inequalities

$$\mathfrak{I}^{\alpha}\Theta_{\alpha}(t) \le \zeta_{\Theta_{\alpha}}\Theta_{\alpha}(t) \quad \text{and} \quad \mathfrak{I}^{\kappa}\Theta_{\kappa}(t) \le \zeta_{\Theta_{\kappa}}\Theta_{\kappa}(t)$$

hold.
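For a concrete illustration (our own choice, not from the text): $\Theta(t) = t$ is nondecreasing on $\mathrm{J} = [0,1]$, and since $\mathfrak{I}^{\alpha} t = t^{\alpha+1}/\Gamma(\alpha+2) \le t/\Gamma(\alpha+2)$ on $[0,1]$, hypothesis **H**<sup>3</sup> holds with $\zeta_{\Theta} = 1/\Gamma(\alpha+2)$. This can be checked numerically by midpoint quadrature of the Riemann–Liouville integral:

```python
# Check H3 for the illustrative choice Theta(t) = t on J = [0, 1].
# alpha = 10/3 matches the order used later in Example 1; Theta and zeta
# are our own sample choices, not taken from the text.
import numpy as np
from math import gamma

alpha = 10 / 3

def frac_integral(f, t, a, n=20000):
    """Riemann-Liouville integral I^a f(t) via midpoint quadrature."""
    s = np.linspace(0, t, n, endpoint=False) + t / (2 * n)  # midpoints
    return np.sum((t - s) ** (a - 1) * f(s)) * (t / n) / gamma(a)

theta = lambda s: s
zeta = 1 / gamma(alpha + 2)          # candidate constant zeta_Theta

for t in [0.1, 0.25, 0.5, 0.75, 1.0]:
    exact = t ** (alpha + 1) / gamma(alpha + 2)   # I^alpha t in closed form
    approx = frac_integral(theta, t, alpha)
    assert np.isclose(approx, exact, rtol=1e-5)   # quadrature matches closed form
    assert exact <= zeta * theta(t)               # the H3 inequality
```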

**Remark 6.** *Lemma 3 and Theorem 4 give that system* (1) *is* HU*–Rassias and generalized* HU*–Rassias stable if ε<sup>α</sup>* = Θ*α*(t)*ε<sup>α</sup> and ε<sup>κ</sup>* = Θ*κ*(t)*ε<sup>κ</sup>, with* **H**<sup>3</sup> *and* Λ > 0*.*

*4.2. Method (II)*

**Theorem 5.** *Under hypothesis* **H**<sup>1</sup>*, if*
$$\Lambda^{*} = 1 - \Big[\frac{\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}}{1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2}} + \frac{\mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1}}{1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1}}\Big] > 0,$$
*then system* (1) *is* HU *stable.*

**Proof.** From inequality (37) and (38), we have

$$\begin{aligned}
&\|\mathrm{v}-\mathrm{w}\| + \|\mathrm{u}-\zeta\| \\
&\quad\le \frac{\mathfrak{Q}_{\alpha}\varepsilon_{\alpha}}{1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1}} + \frac{\mathfrak{Q}_{\kappa}\varepsilon_{\kappa}}{1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2}} + \frac{\mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}}{1-\mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2}}\|\mathrm{v}-\mathrm{w}\| + \frac{\mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1}}{1-\mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1}}\|\mathrm{u}-\zeta\|.
\end{aligned}\tag{41}$$

Let $\varepsilon = \max\{\varepsilon_{\alpha}, \varepsilon_{\kappa}\}$; then from (41) we obtain

$$\|(\mathrm{v},\mathrm{u}) - (\mathrm{w},\zeta)\|_{\mathbb{S}} \le \mathbf{C}_{\alpha,\kappa}\,\varepsilon, \tag{42}$$

where

$$\mathbf{C}_{\alpha,\kappa} = \Big[\frac{\mathfrak{Q}_{\alpha}}{\Lambda^{*}(1 - \mathfrak{Q}_{\alpha}\overline{\mathcal{L}}_{\chi_1})} + \frac{\mathfrak{Q}_{\kappa}}{\Lambda^{*}(1 - \mathfrak{Q}_{\kappa}\overline{\mathcal{L}}_{\chi_2})}\Big].$$

**Remark 7.** *With the help of Remark 5, we can obtain the generalized* HU *stability of system* (1)*.*

**Remark 8.** *Lemma 3 and Theorem 5 give that system* (1) *is* HU*–Rassias and generalized* HU*–Rassias stable if ε<sup>α</sup>* = Θ*α*(t)*ε<sup>α</sup> and ε<sup>κ</sup>* = Θ*κ*(t)*ε<sup>κ</sup>, with* **H**<sup>3</sup> *and* Λ<sup>∗</sup> > 0*.*

**Remark 9.** *The results for the coupled system of fourth-order nonlinear* FDES *yield the results for a fourth-order nonlinear system of* ODES *(when α* = *κ* = 4*) with anti-periodic and initial conditions when η*<sup>i</sup> = −1 (i = 1, 2, . . . , 8) *and η*<sup>i</sup> = 0 (i = 1, 2, . . . , 8)*, respectively.*

#### **5. Example**

**Example 1.** *Consider the following coupled system of* FDES*:*

$$\begin{cases}
\mathfrak{D}^{\alpha}\mathrm{v}(t) - \Big[\dfrac{1}{4(t+2)^{2}}\dfrac{|\mathfrak{D}^{\alpha}\mathrm{v}(t)|}{1+|\mathfrak{D}^{\alpha}\mathrm{v}(t)|} + \dfrac{1}{16}\sin^{2}\mathrm{u}(t)\Big] = 0, & t\in[0,1], \\
\mathfrak{D}^{\kappa}\mathrm{u}(t) - \Big[\dfrac{1}{32\pi}\sin(2\pi\mathrm{v}(t)) + \dfrac{|\mathfrak{D}^{\kappa}\mathrm{u}(t)|}{16\big(1+|\mathfrak{D}^{\kappa}\mathrm{u}(t)|\big)} + \dfrac{1}{2}\Big] = 0, & t\in[0,1], \\
\mathfrak{D}^{\alpha-4}\mathrm{v}(0) = \eta_1\mathfrak{D}^{\alpha-4}\mathrm{v}(\sigma), \quad \mathfrak{D}^{\alpha-3}\mathrm{v}(0) = \eta_2\mathfrak{D}^{\alpha-3}\mathrm{v}(\sigma), \\
\mathfrak{D}^{\alpha-2}\mathrm{v}(0) = \eta_3\mathfrak{D}^{\alpha-2}\mathrm{v}(\sigma), \quad \mathfrak{D}^{\alpha-1}\mathrm{v}(0) = \eta_4\mathfrak{D}^{\alpha-1}\mathrm{v}(\sigma), \\
\mathfrak{D}^{\kappa-4}\mathrm{u}(0) = \eta_5\mathfrak{D}^{\kappa-4}\mathrm{u}(\sigma), \quad \mathfrak{D}^{\kappa-3}\mathrm{u}(0) = \eta_6\mathfrak{D}^{\kappa-3}\mathrm{u}(\sigma), \\
\mathfrak{D}^{\kappa-2}\mathrm{u}(0) = \eta_7\mathfrak{D}^{\kappa-2}\mathrm{u}(\sigma), \quad \mathfrak{D}^{\kappa-1}\mathrm{u}(0) = \eta_8\mathfrak{D}^{\kappa-1}\mathrm{u}(\sigma).
\end{cases}\tag{43}$$

*From system* (43)*, we see that* $\alpha = \kappa = \frac{10}{3}$, $\sigma = 1$, $\eta_1 = \eta_5 = \frac{1}{2}$, $\eta_2 = \eta_6 = \frac{1}{3}$, $\eta_3 = \eta_7 = -1$, *and* $\eta_4 = \eta_8 = -1$*. Moreover, we have*

$$\begin{aligned}
\big|\chi_1(t,\mathrm{u}_1(t),\mathfrak{D}^{\alpha}\mathrm{v}_1(t)) - \chi_1(t,\mathrm{u}_2(t),\mathfrak{D}^{\alpha}\mathrm{v}_2(t))\big| &\le \frac{1}{16}\big|\mathrm{u}_1(t)-\mathrm{u}_2(t)\big| + \frac{1}{16}\big|\mathfrak{D}^{\alpha}\mathrm{v}_1(t)-\mathfrak{D}^{\alpha}\mathrm{v}_2(t)\big|, \\
\big|\chi_2(t,\mathrm{v}_1(t),\mathfrak{D}^{\kappa}\mathrm{u}_1(t)) - \chi_2(t,\mathrm{v}_2(t),\mathfrak{D}^{\kappa}\mathrm{u}_2(t))\big| &\le \frac{1}{16}\big|\mathrm{v}_1(t)-\mathrm{v}_2(t)\big| + \frac{1}{16}\big|\mathfrak{D}^{\kappa}\mathrm{u}_1(t)-\mathfrak{D}^{\kappa}\mathrm{u}_2(t)\big|.
\end{aligned}$$

*Therefore, we get* $\mathcal{L}_{\chi_1} = \overline{\mathcal{L}}_{\chi_1} = \mathcal{L}_{\chi_2} = \overline{\mathcal{L}}_{\chi_2} = \frac{1}{16}$*. Hence,*

$$\frac{\mathfrak{Q}_{\alpha}\mathcal{L}_{\chi_1}(1-\overline{\mathcal{L}}_{\chi_2}) + \mathfrak{Q}_{\kappa}\mathcal{L}_{\chi_2}(1-\overline{\mathcal{L}}_{\chi_1})}{(1-\overline{\mathcal{L}}_{\chi_2})(1-\overline{\mathcal{L}}_{\chi_1})} \approx 0.75141 < 1.$$

*Thus, the solution of* (43) *is unique. Moreover, system* (43) *is* HU*, generalized* HU*,* HU*–Rassias, and generalized* HU*–Rassias stable by two different approaches under the conditions of Theorems 4 and 5, i.e.,* Λ > 0 *and* Λ<sup>∗</sup> > 0*.*
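The Lipschitz constant $\frac{1}{16}$ can be spot-checked numerically. The sketch below implements $\chi_1$ from (43), reading the derivative term as $|\mathfrak{D}^{\alpha}\mathrm{v}|/(1+|\mathfrak{D}^{\alpha}\mathrm{v}|)$, and samples random arguments; this is a sanity check, not a proof:

```python
# Random-sampling spot check of the Lipschitz bound for chi_1 in Example 1.
# The analytic bound 1/16 follows from 1/(4(t+2)^2) <= 1/16 on [0, 1] and
# |sin^2 u1 - sin^2 u2| <= |u1 - u2|.
import numpy as np

def chi1(t, u, d):
    # chi_1(t, u(t), D^alpha v(t)) from system (43)
    return (1 / (4 * (t + 2) ** 2)) * abs(d) / (1 + abs(d)) + np.sin(u) ** 2 / 16

rng = np.random.default_rng(0)
for _ in range(10000):
    t = rng.uniform(0, 1)
    u1, u2 = rng.uniform(-10, 10, 2)
    d1, d2 = rng.uniform(-10, 10, 2)
    lhs = abs(chi1(t, u1, d1) - chi1(t, u2, d2))
    rhs = abs(u1 - u2) / 16 + abs(d1 - d2) / 16
    assert lhs <= rhs + 1e-12   # the claimed Lipschitz inequality
```

The same sampling scheme applies verbatim to $\chi_2$, whose sine and saturation terms obey the same $\frac{1}{16}$ bounds.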

#### **6. Conclusions**

In this paper, we established that the solution of the coupled implicit FDES (1) exists and is unique, using the Banach contraction theorem and the Leray–Schauder fixed point theorem. Under some assumptions, the aforesaid coupled system has at least one solution. Besides this, the considered coupled system is HU, generalized HU, HU–Rassias, and generalized HU–Rassias stable. An example is presented to illustrate the obtained results. The proposed system (1) yields the following well-known system of ODES, which has wide applications in applied sciences [5]


**Author Contributions:** Conceptualization, U.R., A.Z., Z.A., I.-L.P., S.R., S.E.; investigation, U.R., A.Z., Z.A., I.-L.P., S.R., S.E.; writing—original draft preparation, U.R., A.Z., Z.A., I.-L.P., S.R., S.E.; writing—review and editing, U.R., A.Z., Z.A., I.-L.P., S.R., S.E. All authors have read and agreed to the published version of the manuscript.

**Funding:** Not applicable.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The fifth author was supported by Azarbaijan Shahid Madani University.

**Conflicts of Interest:** The authors declare no conflict of interest.

**Sample Availability:** Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

#### **References**


## *Article* **Stability Analysis and Optimal Control of a Fractional Order Synthetic Drugs Transmission Model**

**Meghadri Das <sup>1</sup> , Guruprasad Samanta <sup>1</sup> and Manuel De la Sen 2,\***


**Abstract:** In this work, a fractional-order synthetic drugs transmission model with psychological addicts has been proposed, along with psychological treatment. The effects of synthetic drugs are deadly and sometimes even violent. We have studied the local and global stability of the model with different criteria. The existence and uniqueness criteria, along with positivity and boundedness of the solutions, have also been established. The local and global stabilities are decided by the basic reproduction number *R*0. We have also analyzed the sensitivity of parameters. An optimal control problem has been formulated by controlling psychological addiction and analyzed with the help of the Pontryagin maximum principle. These results are verified by numerical simulations.

**Keywords:** Caputo fractional differential equation; synthetic drugs; stability; optimal control

**Citation:** Das, M.; Samanta, G.P.; De la Sen, M. Stability Analysis and Optimal Control of a Fractional Order Synthetic Drugs Transmission Model. *Mathematics* **2021**, *9*, 703. https:// doi.org/10.3390/math9070703

Academic Editor: Ioannis Dassios

Received: 20 February 2021 Accepted: 19 March 2021 Published: 24 March 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **1. Introduction**

Synthetic drugs, also referred to as designer or club drugs, are chemically created in a lab to mimic another group of drugs such as marijuana, cocaine, or morphine. There are between 200 and 300 identified synthetic drug compounds, many of them cocaine, methamphetamine, and marijuana compounds [1,2]. The effects of synthetic drugs include anxiety, aggressive behavior, paranoia, seizures, loss of consciousness, nausea, vomiting, and even coma or death [3]. Synthetic drugs are powerful, and when mixed with unknown chemical compounds they are extremely dangerous and can cause overdose very quickly. If an overdose has occurred, immediate medical care is required. More recently, new designer drugs have emerged with vigorous addictive potentials, such as synthetic cathinones ("Bath Salts"), also labeled as Bliss, Vanilla Sky, and Ivory Wave. These synthetic drugs stimulate the central nervous system by inhibiting the reuptake of norepinephrine and dopamine, leading to serious adverse effects on the Central Nervous System (CNS) or even death [1]. Moreover, many infectious diseases such as hepatitis and AIDS can easily infect drug users due to the rampant use of shared needles [4,5]. Drugs like amphetamine are mostly used in specific regions like Goa and Ahmedabad in India. A recent study shows that drug use in India continues to grow rapidly and, more disturbingly, that heroin has replaced the natural opioids (opium and poppy husk). An epidemiological study from Punjab has revealed that the use of other synthetic drugs and cocaine has also increased significantly [6]. Most synthetic drugs are manufactured in illegal laboratories, and no safety measures are used in their manufacture. When an addicted person attempts to quit, he/she may experience very uncomfortable withdrawal symptoms which can lower their resolve to maintain abstinence and otherwise complicate early recovery.
Professional detoxification programs are needed for synthetic drug addicts to withdraw safely from synthetic drugs. Behavioral therapies and counseling are effective tools for changing negative behavior and thought patterns, and they may help addicts get the mental health support they need.

Ma et al. [7] have developed different forms of heroin epidemic models to study the transmission of heroin epidemics. Sharomi and Gumel [8] have formulated different smoking models for giving up smoking. Similarly, mathematical modeling can also be used to describe the spread of synthetic drugs. Nyabadza et al. [9] have studied the methamphetamine transmission model in South Africa. Liu et al. [10] have formulated a synthetic drug transmission model with treatment and studied global stability and backward bifurcation of the model. Saha and Samanta [11] have also studied the stability of a synthetic drug transmission model with optimal control. Many works have been performed on fractional-order epidemiological systems because a fractional-order system has a memory effect [12]. Fractional calculus is often utilized to generalize the order of a system, replacing integer order with fractional order [13]. Systematic study has noted that the integer-order model may be viewed as a special case of the fractional-order model, since the solution of the fractional-order system converges to the solution of the integer-order system as the order approaches one [14]. In many fields, fractional-order systems are more suitable than integer-order systems. Phenomena that are connected with memory and affected by heredity cannot be expressed by an integer-order system [15]. It is observed that data collected from real-life phenomena fit better with fractional-order systems. Diethelm [16] has compared the numerical solutions of a fractional-order system and an integer-order system, and concluded that the fractional-order system gives a more relevant interpretation than the integer-order system. Many systems [17–22] have been studied recently in the fractional-order framework. In epidemiology, the Ebola virus model was studied in a Caputo differential equation framework in 2015 [23].
Agarwal [24] first studied the optimal control problem in a fractional-order system in 2004. In 2018, Kheiri and Jafari [25] also worked on fractional-order optimal control for HIV/AIDS.

#### *Motivation and Brief Overview*

There are some relevant advantages of Caputo fractional differentiations and differential equations.


Motivated by the aforementioned works and the advantages of Caputo fractional-order differential equations, a fractional synthetic drug transmission model with psychological drug addicts has been formulated in this work using Caputo fractional-order differential equations (Section 2). We have analyzed the drug transmission model in the fractional-order framework, and the effect of psychological treatment through awareness campaigns has also been studied by formulating a fractional-order optimal control problem.

This work is presented in two different parts. In the first part (Section 3), we first carry out a basic analysis, covering the existence, uniqueness, non-negativity, and boundedness of the solutions of the proposed system of equations. Dynamical behaviors of the different equilibrium points are established in the same section. Though our main aim is to study the system in the fractional-order framework, a fractional-order control problem has also been framed in Section 4 to study the control effect of treatment on the psychological addict class, which may enhance our research.

At the beginning of our work, we recall some basic definitions and theories of fractional-order differential equations (Section 3), followed by calculating equilibrium points (Section 3.1). Next, we discuss whether the solution of the proposed system is unique (Section 3.2). We also discuss the boundedness and feasibility conditions of the solutions of the system (Section 3.3). Transfer dynamics is discussed with the help of the reproduction number (Section 3.5). We also study sensitivity analysis (Section 3.4) of the model, along with local and global stability of the equilibrium points (both disease-free and endemic) systematically (Section 3.6). Then, we present our system as an optimal control problem with psychological treatment as the control variable and derive optimality conditions (Section 4). Finally, numerical simulations are performed (Section 5), followed by some conclusions on the whole work (Section 6).

#### **2. Model Formulation**

We have formulated a fractional-order synthetic drugs transmission model with psychological addicts by taking the susceptible (*S*), psychological addicts (*P*1), physiological addicts (*P*2), and treatment classes as four compartments.

$$\begin{aligned}
{}^{C}_{t_0}D^{\varepsilon}_{t}x(t) &= A^{\varepsilon} - \delta^{\varepsilon}x - \beta_1^{\varepsilon}xy - \beta_2^{\varepsilon}xz, & x(t_0) &= x_0 > 0, \\
{}^{C}_{t_0}D^{\varepsilon}_{t}y(t) &= \beta_1^{\varepsilon}xy + \beta_2^{\varepsilon}xz - (k^{\varepsilon}+\delta^{\varepsilon}+\phi^{\varepsilon})y, & y(t_0) &= y_0 > 0, \\
{}^{C}_{t_0}D^{\varepsilon}_{t}z(t) &= k^{\varepsilon}y + \gamma^{\varepsilon}r - \xi^{\varepsilon}z - \delta^{\varepsilon}z, & z(t_0) &= z_0 > 0, \\
{}^{C}_{t_0}D^{\varepsilon}_{t}r(t) &= \phi^{\varepsilon}y + \xi^{\varepsilon}z - \gamma^{\varepsilon}r - \delta^{\varepsilon}r, & r(t_0) &= r_0 > 0,
\end{aligned}\tag{1}$$

where $0 < \varepsilon < 1$ is the order of the derivative, ${}^{C}_{t_0}D^{\varepsilon}_{t}$ denotes the Caputo fractional derivative, and $t_0 = 0$ is the initial time. Here, $x(t)$, $y(t)$, $z(t)$, and $r(t)$ represent the respective sizes of the susceptible population, the psychologically addicted population, the physiologically addicted population, and the class of addicts in treatment. From a survey on synthetic drugs, it is evident that a large number of the young population are in the susceptible class, which roughly corresponds to the recruitment rate $A$ of the susceptible class, assumed to be constant [26]. After contact with an addict, a susceptible individual will first pass into the class of psychological addicts, while after taking many drugs, a psychological addict will become a physiological addict. Broadly speaking, a susceptible individual is more likely to initiate drug abuse on contact with a physiological addict than with a psychological addict. We denote the corresponding contact rates by $\beta_1^{\varepsilon}$ and $\beta_2^{\varepsilon}$. Once psychological and physiological addicts accept treatment and rehabilitation, they will enter the treatment compartment. The treatment rates are denoted by $\phi^{\varepsilon}$ and $\xi^{\varepsilon}$, respectively. In addition, some drug users in treatment may escape and reenter the physiologically addicted compartment at rate $\gamma^{\varepsilon}$. The parameters $k^{\varepsilon}$ and $\delta^{\varepsilon}$ are the escalation rate from psychological addicts to physiological addicts and the natural death rate, respectively. All parameters $A^{\varepsilon}$, $\gamma^{\varepsilon}$, $\delta^{\varepsilon}$, $\beta_1^{\varepsilon}$, $\beta_2^{\varepsilon}$, $\phi^{\varepsilon}$, $\xi^{\varepsilon}$, $k^{\varepsilon}$ are assumed to be positive constants (briefly described in Table 1). A schematic diagram of system (3) is shown in Figure 1.

It is observed that the time dimension of system (1) is consistent because both sides of each equation of system (1) have dimension $(time)^{-\varepsilon}$ [27]. Next, let us take $t\_0 = 0$, omit the superscript $\varepsilon$ from all parameters, and rewrite system (1) as

$$\begin{aligned} \,^C\_0 D\_t^\varepsilon x(t) &= A - \delta x - \beta\_1 xy - \beta\_2 xz, x(0) = x\_0 > 0, \\\\ \,^C\_0 D\_t^\varepsilon y(t) &= \beta\_1 xy + \beta\_2 xz - (k + \delta + \phi) y, y(0) = y\_0 > 0, \\\\ \,^C\_0 D\_t^\varepsilon z(t) &= ky + \gamma r - \xi z - \delta z, z(0) = z\_0 > 0, \\\\ \,^C\_0 D\_t^\varepsilon r(t) &= \phi y + \xi z - \gamma r - \delta r, r(0) = r\_0 > 0, \end{aligned} \tag{2}$$

We have considered $N(t)$ to be the total human population, so that $N(t) = x(t) + y(t) + z(t) + r(t)$. In the first phase, a susceptible individual becomes a psychological addict after coming into contact with a drug addict. However, after becoming accustomed to the persistent presence and influence of the drug, the individual is likely to become a physiological addict. A psychological or physiological addict enters the treatment compartment upon taking treatment and rehabilitation. It is shown in Section 3.3 that the total human population is bounded above; let $N = \inf\_{t \in [0, \infty)} \{ M \in \mathbb{R}\_+ : N(t) \le M \}$. Therefore, we can assume that the total population $N(t)$ is constant ($N$) on a large time scale ($t \to \infty$). Let us scale the state variables with respect to the total population $N$:

$$S(t) = \frac{x(t)}{N}, \quad P\_1(t) = \frac{y(t)}{N}, \quad P\_2(t) = \frac{z(t)}{N}, \quad R(t) = \frac{r(t)}{N}, \quad \Lambda = \frac{A}{N}.$$

Therefore, system (2) becomes

$$\begin{aligned} \,^C\_0 D\_t^\varepsilon S(t) &= \Lambda - \delta S - \beta\_1 S P\_1 - \beta\_2 S P\_2, \ S(0) = S\_0 > 0, \\\\ \,^C\_0 D\_t^\varepsilon P\_1(t) &= \beta\_1 S P\_1 + \beta\_2 S P\_2 - (k + \delta + \phi) P\_1, \ P\_1(0) = P\_{1,0} > 0, \\\\ \,^C\_0 D\_t^\varepsilon P\_2(t) &= k P\_1 + \gamma R - \xi P\_2 - \delta P\_2, \ P\_2(0) = P\_{2,0} > 0, \\\\ \,^C\_0 D\_t^\varepsilon R(t) &= \phi P\_1 + \xi P\_2 - \gamma R - \delta R, \ R(0) = R\_0 > 0. \end{aligned} \tag{3}$$

**Figure 1.** Schematic diagram of system (3).


#### **Table 1.** Parameters of system (3).

#### **3. Preliminaries**

**Definition 1** ([28])**.** *The Caputo fractional derivative of order $\varepsilon > 0$ for a function $g \in C^n([t\_0, +\infty), \mathbb{R})$ is denoted and defined as*

$$\,\_{t\_0}^{\mathbb{C}}D\_t^{\varepsilon}g(t) = \begin{cases} \frac{1}{\Gamma(n-\varepsilon)} \int\_{t\_0}^t \frac{g^{(n)}(s)}{(t-s)^{\varepsilon-n+1}}ds, \varepsilon \in (n-1, n), n \in \mathbb{N}, \\\\ \frac{d^n}{dt^n}g(t), \varepsilon = n. \end{cases}$$

*where* Γ(·) *is the Gamma function, t* ≥ *t*<sup>0</sup> *and n is a natural number. In particular, for ε* ∈ (0, 1)*:*

$$\,\_{t\_0}^{\mathcal{C}}D\_t^{\varepsilon}g(t) = \frac{1}{\Gamma(1-\varepsilon)} \int\_{t\_0}^t \frac{g'(s)}{\left(t-s\right)^{\varepsilon}}ds$$

**Lemma 1.** *(Generalized Mean Value Theorem [29]). Let $0 < \varepsilon \le 1$, $\psi(t) \in C[a, b]$, and let $\,^C\_0 D\_t^\varepsilon \psi(t)$ be continuous in $(a, b]$. Then*

$$\psi(x) = \psi(a) + \frac{1}{\Gamma(\varepsilon)} (x - a)^{\varepsilon} \,^C\_0 D\_t^\varepsilon \psi(\zeta),$$

*where* 0 ≤ *ζ* ≤ *x,* ∀*x* ∈ (*a*, *b*]*.*

**Remark 1.** *If $\,^C\_0 D\_t^\varepsilon \psi(t) \ge 0$ (respectively, $\,^C\_0 D\_t^\varepsilon \psi(t) \le 0$) for $t \in (a, b)$, then $\psi(t)$ is a non-decreasing (respectively, non-increasing) function for $t \in [a, b]$.*

**Definition 2** ([13])**.** *One-parametric and two-parametric Mittag–Leffler functions are described as follows:*

$$E\_{\varepsilon}(w) = \sum\_{k=0}^{\infty} \frac{w^k}{\Gamma(\varepsilon k + 1)} \text{ and } E\_{\varepsilon\_1, \varepsilon\_2}(w) = \sum\_{k=0}^{\infty} \frac{w^k}{\Gamma(\varepsilon\_1 k + \varepsilon\_2)}, \text{ where } \varepsilon, \varepsilon\_1, \varepsilon\_2 \in \mathbb{R}\_+.$$
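Since no elementary closed form exists in general, both series can be evaluated by truncation. A minimal sketch (the function name `mittag_leffler` and the truncation length are our choices, not from the paper); the classical special cases $E\_{1,1}(w) = e^w$ and $E\_{2,1}(w) = \cosh\sqrt{w}$ serve as sanity checks:

```python
import math

def mittag_leffler(w, eps1, eps2=1.0, terms=50):
    """Truncated series for the two-parameter Mittag-Leffler function
    E_{eps1,eps2}(w); terms=50 is ample for moderate |w|."""
    return sum(w**k / math.gamma(eps1 * k + eps2) for k in range(terms))

# Sanity checks against classical special cases:
print(mittag_leffler(1.0, 1.0))   # E_{1,1}(1) = e ~ 2.71828...
print(mittag_leffler(4.0, 2.0))   # E_{2,1}(4) = cosh(2) ~ 3.76220...
```

Note that `math.gamma` overflows for arguments above roughly 171, so the truncation length should not be increased carelessly for large `eps1`.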

**Theorem 1** ([30])**.** *Let $\varepsilon > 0$, $n - 1 < \varepsilon < n$, $n \in \mathbb{N}$. Assume that $g(t)$ is continuously differentiable up to order $(n-1)$ on $[t\_0, \infty)$ and that the $n$-th derivative of $g(t)$ is of exponential order. If $\,^C\_{t\_0} D\_t^\varepsilon g(t)$ is piecewise continuous on $[t\_0, \infty)$, then*

$$\mathcal{L}\left\{ \,^C\_{t\_0} D\_t^\varepsilon g(t) \right\} = s^{\varepsilon} F(s) - \sum\_{j=0}^{n-1} s^{\varepsilon - j - 1} g^{(j)}(t\_0),$$

*where F*(*s*) = L {*g*(*t*)} *denotes the Laplace transform of g*(*t*)*.*

**Theorem 2** ([31])**.** *Let* C *be the complex plane. For any ε*1,*ε*<sup>2</sup> ∈ R<sup>+</sup> *and M* ∈ C*, then*

$$\mathcal{L}\left\{ t^{\varepsilon\_2 - 1} E\_{\varepsilon\_1, \varepsilon\_2}(M t^{\varepsilon\_1}) \right\} = \frac{s^{\varepsilon\_1 - \varepsilon\_2}}{s^{\varepsilon\_1} - M}$$

*for* $\Re(s) > |M|^{\frac{1}{\varepsilon\_1}}$, *where* $\Re(s)$ *represents the real part of the complex number s, and $E\_{\varepsilon\_1, \varepsilon\_2}$ is the Mittag–Leffler function.*

**Theorem 3** ([28])**.** *Consider the following fractional-order system:*

$$\,^C\_{t\_0} D\_t^\varepsilon X(t) = \Psi(X), \quad X\_{t\_0} = (x\_{t\_0}^1, x\_{t\_0}^2, \dots, x\_{t\_0}^n), \quad x\_{t\_0}^i > 0, \ i = 1, 2, \dots, n$$

*with $0 < \varepsilon < 1$, $X(t) = (x^1(t), x^2(t), \dots, x^n(t))$, and $\Psi(X) : [t\_0, \infty) \to \mathbb{R}^{n \times n}$. The equilibrium points of this system are evaluated by solving the system of equations $\Psi(X) = 0$. These equilibrium points are locally asymptotically stable iff each eigenvalue $\lambda\_i$ of the Jacobian matrix $J(X) = \frac{\partial(\Psi\_1, \Psi\_2, \dots, \Psi\_n)}{\partial(x^1, x^2, \dots, x^n)}$ calculated at the equilibrium points satisfies $|\arg(\lambda\_i)| > \frac{\varepsilon \pi}{2}$.*
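The eigenvalue-argument test of Theorem 3 is straightforward to automate: compute the eigenvalues of the Jacobian and check the argument condition. A hedged sketch (the helper name `matignon_stable` and the sample matrices are illustrative, not from the paper):

```python
import numpy as np

def matignon_stable(J, eps):
    """Matignon-type criterion (Theorem 3): every eigenvalue of the
    Jacobian J must satisfy |arg(lambda)| > eps*pi/2."""
    lams = np.linalg.eigvals(np.asarray(J, dtype=float))
    return bool(np.all(np.abs(np.angle(lams)) > eps * np.pi / 2))

# Eigenvalues -1 and -2 have |arg| = pi, so stability holds for any 0 < eps < 1.
print(matignon_stable([[-1.0, 0.0], [0.0, -2.0]], 0.9))   # True
# A positive real eigenvalue has arg = 0 and always violates the condition.
print(matignon_stable([[1.0, 0.0], [0.0, -2.0]], 0.9))    # False
```

The condition is weaker than classical Hurwitz stability: eigenvalues with negative real part always pass, and for small $\varepsilon$ even some eigenvalues with positive real part may pass.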

#### *3.1. Equilibria of System (3)*

The equilibria of system (3) can be obtained by solving the system:

$$\begin{aligned} \Lambda - \delta S^\* - \beta\_1 S^\* P\_1^\* - \beta\_2 S^\* P\_2^\* &= 0 \\\\ \beta\_1 S^\* P\_1^\* + \beta\_2 S^\* P\_2^\* - (k + \delta + \phi) P\_1^\* &= 0 \\\\ k P\_1^\* + \gamma R^\* - \xi P\_2^\* - \delta P\_2^\* &= 0 \\\\ \phi P\_1^\* + \xi P\_2^\* - \gamma R^\* - \delta R^\* &= 0 \end{aligned} \tag{4}$$

System (3) has two types of equilibrium points: the drug-free equilibrium $E\_0 = \left(\frac{\Lambda}{\delta}, 0, 0, 0\right)$ and the drug-addiction equilibrium $E\_1(S^\*, P\_1^\*, P\_2^\*, R^\*)$, where


$$\begin{split} S^\* &= \frac{(k+\delta+\phi)P\_1^\*}{\beta\_1 P\_1^\* + \beta\_2 P\_2^\*} \\ P\_1^\* &= \frac{\Lambda \beta\_1 \delta(\gamma + \delta + \xi) + \Lambda \beta\_2 (k\delta + k\gamma + \phi\gamma) - \delta^2 (k+\delta+\phi)(\gamma+\delta+\xi)}{(k+\delta+\phi)[\beta\_1\delta(\gamma+\delta+\xi) + \beta\_2(k\delta+k\gamma+\phi\gamma)]} \\ P\_2^\* &= \frac{(k\delta+k\gamma+\phi\gamma)P\_1^\*}{\delta(\gamma+\delta+\xi)} \\ R^\* &= \frac{\xi P\_2^\* + \phi P\_1^\*}{\delta+\gamma} .\end{split} \tag{5}$$

For the drug-addiction equilibrium $E\_1$ to exist in the feasible region $\mathbb{R}^4\_+$, it is necessary and sufficient that $\Lambda \beta\_1 \delta(\gamma + \delta + \xi) + \Lambda \beta\_2 (k\delta + k\gamma + \phi\gamma) \ge \delta^2 (k + \delta + \phi)(\gamma + \delta + \xi)$.

*3.2. Existence and Uniqueness*

**Lemma 2** ([32])**.** *Consider the system:*

$$\,\_{t\_0}^{\mathbb{C}}D\_t^{\varepsilon} \mathbf{x}(t) = \mathbf{g}(t, \mathfrak{x}), t\_0 > 0 \tag{6}$$

*with initial condition $x(t\_0) = x\_{t\_0}$, where $\varepsilon \in (0, 1]$, $g : [t\_0, \infty) \times \Omega \to \mathbb{R}^n$, $\Omega \subseteq \mathbb{R}^n$. If the local Lipschitz condition is satisfied by $g(t, x)$ with respect to $x$, then there exists a unique solution of (6) on $[t\_0, \infty) \times \Omega$.*

To study the existence and uniqueness of solutions of system (3), let us consider the region $\Omega \times [t\_0, \gamma]$, where $\Omega = \{(S, P\_1, P\_2, R) \in \mathbb{R}^4 : \max(|S|, |P\_1|, |P\_2|, |R|) \le M\}$ and $\gamma < +\infty$. Denote $X = (S, P\_1, P\_2, R)$ and $\bar{X} = (\bar{S}, \bar{P}\_1, \bar{P}\_2, \bar{R})$. Consider a mapping $L(X) = (L\_1(X), L\_2(X), L\_3(X), L\_4(X))$, where

$$\begin{aligned} L\_1(X) &= \Lambda - \delta S - \beta\_1 S P\_1 - \beta\_2 S P\_2 \\\\ L\_2(X) &= \beta\_1 S P\_1 + \beta\_2 S P\_2 - (k + \delta + \phi) P\_1 \\\\ L\_3(X) &= k P\_1 + \gamma R - \xi P\_2 - \delta P\_2 \\\\ L\_4(X) &= \phi P\_1 + \xi P\_2 - \gamma R - \delta R \end{aligned}$$

For any $X, \bar{X} \in \Omega$:

$$\begin{aligned} \|L(X) - L(\bar{X})\| &= |L\_1(X) - L\_1(\bar{X})| + |L\_2(X) - L\_2(\bar{X})| + |L\_3(X) - L\_3(\bar{X})| + |L\_4(X) - L\_4(\bar{X})| \\\\ &= |\Lambda - \delta S - \beta\_1 S P\_1 - \beta\_2 S P\_2 - \Lambda + \delta \bar{S} + \beta\_1 \bar{S} \bar{P}\_1 + \beta\_2 \bar{S} \bar{P}\_2| \\\\ &\quad + |\beta\_1 S P\_1 + \beta\_2 S P\_2 - (k + \delta + \phi) P\_1 - \beta\_1 \bar{S} \bar{P}\_1 - \beta\_2 \bar{S} \bar{P}\_2 + (k + \delta + \phi) \bar{P}\_1| \\\\ &\quad + |k P\_1 + \gamma R - \xi P\_2 - \delta P\_2 - k \bar{P}\_1 - \gamma \bar{R} + \xi \bar{P}\_2 + \delta \bar{P}\_2| \\\\ &\quad + |\phi P\_1 + \xi P\_2 - \gamma R - \delta R - \phi \bar{P}\_1 - \xi \bar{P}\_2 + \gamma \bar{R} + \delta \bar{R}| \\\\ &\le \delta |S - \bar{S}| + 2\beta\_1 |S P\_1 - \bar{S} \bar{P}\_1| + 2\beta\_2 |S P\_2 - \bar{S} \bar{P}\_2| + (\delta + 2\phi + 2k)|P\_1 - \bar{P}\_1| \\\\ &\quad + (\delta + 2\xi)|P\_2 - \bar{P}\_2| + (2\gamma + \delta)|R - \bar{R}| \\\\ &\le (\delta + 2\beta\_1 M + 2\beta\_2 M)|S - \bar{S}| + (2\beta\_1 M + 2k + 2\phi + \delta)|P\_1 - \bar{P}\_1| \\\\ &\quad + (2\beta\_2 M + 2\xi + \delta)|P\_2 - \bar{P}\_2| + (2\gamma + \delta)|R - \bar{R}| \\\\ &\le H\_1 |S - \bar{S}| + H\_2 |P\_1 - \bar{P}\_1| + H\_3 |P\_2 - \bar{P}\_2| + H\_4 |R - \bar{R}| \\\\ &\le H \|X - \bar{X}\|, \end{aligned}$$

where $H = \max\{H\_1, H\_2, H\_3, H\_4\}$,

and

$$\begin{aligned} H\_1 &= (\delta + 2\beta\_1 M + 2\beta\_2 M) \\\\ H\_2 &= (2\beta\_1 M + 2k + 2\phi + \delta) \\\\ H\_3 &= (2\beta\_2 M + 2\xi + \delta) \\\\ H\_4 &= (2\gamma + \delta) \end{aligned}$$

Therefore, $L(X)$ satisfies the Lipschitz condition with respect to $X$, and Lemma 2 confirms that there exists a unique solution $X(t)$ of system (3) with initial condition $X(0) = (S\_0, P\_{1,0}, P\_{2,0}, R\_0)$. The following theorem is a consequence of this result.

**Theorem 4.** *There exists a unique solution X*(*t*) ∈ Ω *of system (3) for all t* ≥ 0 *with initial condition X*(0) = (*S*0, *P*1,0, *P*2,0, *R*0) ∈ Ω*.*
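As an illustration of Theorem 4, system (3) can be integrated with an explicit fractional (Caputo) Euler product-rectangle scheme. The sketch below is a hedged illustration only: the parameter values, step size, and initial data are our choices (the paper's Table 3 values are not reproduced here), and this standard discretization is not the paper's own numerical method:

```python
import math

def simulate(eps=0.95, h=0.05, steps=400,
             Lam=0.4, delta=0.4, b1=0.3, b2=0.5,
             k=0.2, phi=0.4, xi=0.6, gam=0.1,
             X0=(0.8, 0.1, 0.05, 0.05)):
    """Explicit fractional Euler scheme for the Caputo system (3):
    X_{n+1} = X_0 + h^eps/Gamma(eps+1)
              * sum_{j<=n} [(n+1-j)^eps - (n-j)^eps] F(X_j)."""
    def F(S, P1, P2, R):
        return (Lam - delta*S - b1*S*P1 - b2*S*P2,
                b1*S*P1 + b2*S*P2 - (k + delta + phi)*P1,
                k*P1 + gam*R - xi*P2 - delta*P2,
                phi*P1 + xi*P2 - gam*R - delta*R)

    hist, rhs = [X0], [F(*X0)]
    c = h**eps / math.gamma(eps + 1.0)
    for n in range(steps):
        # memory weights of the product-rectangle rule
        w = [(n + 1 - j)**eps - (n - j)**eps for j in range(n + 1)]
        X = tuple(X0[i] + c * sum(w[j] * rhs[j][i] for j in range(n + 1))
                  for i in range(4))
        hist.append(X)
        rhs.append(F(*X))
    return hist

traj = simulate()
# With Lambda = delta the scaled total population S+P1+P2+R stays at 1,
# consistent with Theorems 5 and 6 (non-negativity and boundedness).
print(abs(sum(traj[-1]) - 1.0) < 1e-8)   # True
```

Because the right-hand sides of (3) sum to $\Lambda - \delta N$, the discrete total population is preserved exactly by the linear scheme, which gives a cheap consistency check on any implementation.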

#### *3.3. Non-Negativity and Boundedness*

In this section, we establish the criterion for feasibility of the solutions of system (3). Suppose $\mathbb{R}\_+$ stands for the set of all non-negative real numbers and $\Gamma\_+ = \left\{(S, P\_1, P\_2, R) \in \mathbb{R}^4\_+\right\}$ represents the non-negative orthant.

**Theorem 5. (Non-negativity):** *The solutions X*(*t*) = (*S*, *P*1, *P*2, *R*) *of system (3) remain in* Γ<sup>+</sup> *if X*(0) = (*S*0, *P*1,0, *P*2,0, *R*0) ∈ Γ+*.*

**Proof.**

$$\left. \,^C\_{t\_0} D\_t^\varepsilon S(t) \right|\_{S(t)=0} = \Lambda > 0 \tag{7}$$

$$\left. \,^C\_0 D\_t^\varepsilon P\_1(t) \right|\_{P\_1(t)=0} = \beta\_2 S P\_2 \tag{8}$$

$$\left. \,^C\_0 D\_t^\varepsilon P\_2(t) \right|\_{P\_2(t)=0} = k P\_1 + \gamma R \tag{9}$$

$$\left. \,^C\_0 D\_t^\varepsilon R(t) \right|\_{R(t)=0} = \xi P\_2 + \phi P\_1 \tag{10}$$

From (7), we have

$$\left. \,^C\_{t\_0} D\_t^\varepsilon S(t) \right|\_{S(t)=0} = \Lambda > 0.$$

From Lemma 1, we can say that $S(t)$ is increasing in a neighborhood of any time $t$ where $S(t) = 0$, so $S(t)$ cannot cross the axis $S(t) = 0$. Therefore, $S(t) > 0$ for all $t \ge 0$. Now, we claim that the solution $P\_1(t)$ starting in $\Gamma\_+$ remains non-negative. If not, then there exists $\tau\_1$ such that $P\_1(t)$ crosses the $P\_1(t) = 0$ axis at $t = \tau\_1$ for the first time, and the following conditions hold:

$$\begin{cases} \begin{array}{l} P\_1(t) > 0, \text{ for } 0 \le t < \tau\_1, \\ P\_1(\tau\_1) = 0, \\ P\_1(\tau\_1^+) < 0. \end{array} \end{cases}$$

From (8), we have $\left. \,^C\_0 D\_t^\varepsilon P\_1(t) \right|\_{P\_1(\tau\_1)=0} = \beta\_2 S(\tau\_1) P\_2(\tau\_1)$. Now, we have two cases.

**Case 1:** If $P\_2(\tau\_1) \ge 0$, then by Remark 1 of Lemma 1, $P\_1(t)$ is non-decreasing in a neighborhood of $t = \tau\_1$, which implies $P\_1(\tau\_1^+) \ge 0$. Therefore, we have arrived at a contradiction.

**Case 2:** If *P*2(*τ*1) < 0, then there exists a *τ*<sup>2</sup> such that 0 < *τ*<sup>2</sup> < *τ*<sup>1</sup> with

$$\begin{cases} P\_2(t) > 0, \text{ for } 0 \le t < \tau\_2, \\\ P\_2(\tau\_2) = 0, \\\ P\_2(\tau\_2^+) < 0. \end{cases} $$

From (9), we have

$$\left. \,^C\_0 D\_t^\varepsilon P\_2(t) \right|\_{P\_2(\tau\_2)=0} = k P\_1(\tau\_2) + \gamma R(\tau\_2)$$

Now we have two sub-cases.

**Sub-case 1:** If *kP*1(*τ*2) + *γR*(*τ*2) ≥ 0, then *P*2(*τ*<sup>2</sup> <sup>+</sup>) ≮ 0 and it contradicts our assumption.

**Sub-case 2:** If *kP*1(*τ*2) + *γR*(*τ*2) < 0, then *P*1(*τ*2) > 0 as 0 < *τ*<sup>2</sup> < *τ*<sup>1</sup> and *R*(*τ*2) must be negative. In this case, we can find a *τ*<sup>3</sup> such that 0 < *τ*<sup>3</sup> < *τ*<sup>2</sup> < *τ*<sup>1</sup> with

$$\begin{cases} R(t) > 0, \text{ for } 0 \le t < \tau\_3, \\\\ R(\tau\_3) = 0, \\\\ R(\tau\_3^+) < 0. \end{cases}$$

From (10), we have

$$\left. \,^C\_0 D\_t^\varepsilon R(t) \right|\_{R(\tau\_3)=0} = \xi P\_2(\tau\_3) + \phi P\_1(\tau\_3) > 0$$

which contradicts our assumption that *R*(*τ* + 3 ) < 0. Therefore, we have *P*1(*t*) ≥ 0, ∀*t* ∈ [0, ∞).

Again from (9), we have $\left. \,^C\_0 D\_t^\varepsilon P\_2(t) \right|\_{P\_2(t)=0} = k P\_1 + \gamma R$. If $R(t) > 0$ then $P\_2(t)$ is non-decreasing (Remark 1 of Lemma 1) and $P\_2(t) > 0$, $t \in [0, \infty)$. If possible, let $R(t)$ cross the $R(t) = 0$ axis for the first time at $t = t\_1$. Then, we have

$$\begin{cases} R(t) > 0, \text{ for } 0 \le t < t\_1, \\\\ R(t\_1) = 0, \\\\ R(t\_1^+) < 0. \end{cases}$$

From (10), we have

$$\left. \,^C\_0 D\_t^\varepsilon R(t) \right|\_{R(t\_1)=0} = \xi P\_2(t\_1) + \phi P\_1(t\_1) > 0$$

which contradicts our assumption $R(t\_1^+) < 0$. Hence $P\_2(t) > 0$, $t \in [0, \infty)$. Again from (10), it is evident that $\left. \,^C\_0 D\_t^\varepsilon R(t) \right|\_{R(t)=0} = \xi P\_2 + \phi P\_1 > 0$, which ensures that $R(t) > 0$ as well, $t \in [0, \infty)$.

Thus, all solutions of system (3) (and thus system (2)) starting in Γ+ are confined in the region Γ+.

**Theorem 6. (Boundedness):** *Solutions X*(*t*) = (*x*, *y*, *z*,*r*) *of system (2) are uniformly bounded.*

**Proof.** From the first equation of (2), it is noted that

$$\,^C\_0 D\_t^\varepsilon x(t) \le A - \delta x$$

Taking Laplace transforms on both sides, we have

$$\begin{aligned} &s^{\varepsilon} \mathcal{L}\{x(t)\} - s^{\varepsilon-1} x(0) + \delta \mathcal{L}\{x(t)\} \le \frac{A}{s}, \text{ where } \mathcal{L}\{\cdot\} \text{ is the Laplace transform operator,} \\\\ &\Rightarrow \mathcal{L}\{x(t)\} \le A \frac{s^{\varepsilon-(1+\varepsilon)}}{s^{\varepsilon}+\delta} + x(0) \frac{s^{\varepsilon-1}}{s^{\varepsilon}+\delta} \end{aligned}$$

Taking inverse Laplace transforms (using Theorem 2),

$$x(t) \le x(0) E\_{\varepsilon,1}(-\delta t^{\varepsilon}) + A t^{\varepsilon} E\_{\varepsilon,\varepsilon+1}(-\delta t^{\varepsilon}) \tag{11}$$

$$\Rightarrow x(t) \le M\_1 [E\_{\varepsilon,1}(-\delta t^{\varepsilon}) + \delta t^{\varepsilon} E\_{\varepsilon,\varepsilon+1}(-\delta t^{\varepsilon})] = \frac{M\_1}{\Gamma(1)} = M\_1,$$

where $M\_1 = \max\left\{\frac{A}{\delta}, x(0)\right\}$ and, as follows from the properties of the Mittag–Leffler function [33],

$$E\_{\mathfrak{a},\mathfrak{\beta}}(z) = zE\_{\mathfrak{a},\mathfrak{a}+\mathfrak{\beta}}(z) + \frac{1}{\Gamma(\mathfrak{\beta})}$$

In this case

$$E\_{\varepsilon,1}(-\delta t^{\varepsilon}) = (-\delta t^{\varepsilon})E\_{\varepsilon,\varepsilon+1}(-\delta t^{\varepsilon}) + \frac{1}{\Gamma(1)}\tag{12}$$
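Identity (12) can be checked numerically with truncated Mittag–Leffler series; the helper `ml` and the sample order and argument below are our choices, not the paper's:

```python
import math

def ml(w, e1, e2, terms=60):
    # truncated two-parameter Mittag-Leffler series
    return sum(w**k / math.gamma(e1 * k + e2) for k in range(terms))

eps, z = 0.8, -1.5   # sample order and argument (our choice)
lhs = ml(z, eps, 1.0)
rhs = z * ml(z, eps, eps + 1.0) + 1.0 / math.gamma(1.0)
print(abs(lhs - rhs) < 1e-10)   # True: the identity holds term by term
```

Term by term, $z \cdot z^k / \Gamma(\varepsilon k + \varepsilon + 1) = z^{k+1}/\Gamma(\varepsilon(k+1) + 1)$, so the right-hand side reproduces the left-hand series shifted by one index plus the $k = 0$ term $1/\Gamma(1)$.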

Let *N*(*t*) = *x*(*t*) + *y*(*t*) + *z*(*t*) + *r*(*t*) represent the total population, then

$$\begin{aligned} \,^C\_0 D\_t^\varepsilon N(t) &= \,^C\_0 D\_t^\varepsilon x(t) + \,^C\_0 D\_t^\varepsilon y(t) + \,^C\_0 D\_t^\varepsilon z(t) + \,^C\_0 D\_t^\varepsilon r(t) \\\\ &= A - \{\delta x(t) + \delta y(t) + \delta z(t) + \delta r(t)\} \\\\ &= A - \delta N(t). \end{aligned}$$

Therefore,

$$\,\_0^C D\_t^\varepsilon N(t) + \delta N(t) = A$$

Applying Laplace transformation, we have (using Theorem 1):

$$s^{\varepsilon}F(s) - s^{\varepsilon - 1}N(0) + \delta F(s) = \frac{A}{s}, \text{ where } F(s) = \mathcal{L}\{N(t)\}$$

$$\Rightarrow F(s) = A\frac{s^{-1}}{s^{\varepsilon} + \delta} + \frac{N(0)s^{\varepsilon - 1}}{s^{\varepsilon} + \delta} = \frac{s^{\varepsilon - 1}N(0)}{s^{\varepsilon} + \delta} + \frac{As^{\varepsilon - (1 + \varepsilon)}}{s^{\varepsilon} + \delta}$$

Taking inverse Laplace transforms (using Theorem 2),

$$N(t) = N(0)E\_{\varepsilon,1}(-\delta t^{\varepsilon}) + At^{\varepsilon}E\_{\varepsilon,\varepsilon+1}(-\delta t^{\varepsilon})\tag{13}$$

From (12) and (13), we get

$$N(t) \le M\_2 [E\_{\varepsilon,1}(-\delta t^{\varepsilon}) + \delta t^{\varepsilon} E\_{\varepsilon,\varepsilon+1}(-\delta t^{\varepsilon})] = \frac{M\_2}{\Gamma(1)} = M\_2,$$

where $M\_2 = \max\left\{\frac{A}{\delta}, N(0)\right\}$.

Thus, $x(t)$ and $N(t)$ are bounded, and hence (using Theorem 5) the solutions $X(t) = (x(t), y(t), z(t), r(t))$ are uniformly bounded in $\{(x(t), y(t), z(t), r(t)) \mid x + y + z + r \le M\_2;\ x \le M\_1\}$ for $t \in [0, \infty)$.

#### *3.4. Reproduction Number and Sensitivity Analysis*

The basic reproduction number is defined as the number of newly addicted individuals produced by a single addicted individual, during the infectious period, when introduced into the susceptible compartment ($R\_0 = 2$ means that a person with synthetic drug addiction transmits it to an average of 2 other people). The reproduction number $R\_0$ of system (3) for $\varepsilon = 1$ can be calculated as the maximum eigenvalue of the next-generation matrix $FV^{-1}$ computed at the drug-free equilibrium [34]. Here,

$$F = \begin{bmatrix} \beta\_1 \frac{\Lambda}{\delta} & \beta\_2 \frac{\Lambda}{\delta} \\\\ 0 & 0 \end{bmatrix}; V = \begin{bmatrix} \delta + \phi + k & 0 \\\\ -k & \delta + \xi \\\\ \end{bmatrix} \tag{14}$$

Thus, we get

$$R\_0 = \frac{\Lambda[\beta\_1(\xi + \delta) + k\beta\_2]}{\delta(\xi + \delta)(\delta + \phi + k)} = \frac{\beta\_1(\xi + \delta)\Lambda}{\delta(\xi + \delta)(\delta + \phi + k)} + \frac{k\beta\_2\Lambda}{\delta(\xi + \delta)(\delta + \phi + k)} \tag{15}$$

The first part is due to the psychologically addicted people and the second part is due to the physiologically addicted people.
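The next-generation computation in (14) and (15) can be cross-checked numerically: the spectral radius of $FV^{-1}$ should coincide with the closed form. The parameter values below are illustrative assumptions only, not the paper's Table 3:

```python
import numpy as np

# Illustrative parameter values (assumptions; the paper's Table 3 is not shown here).
Lam, delta, b1, b2, k, phi, xi = 0.4, 0.4, 0.3, 0.5, 0.2, 0.4, 0.6

F = np.array([[b1 * Lam / delta, b2 * Lam / delta],
              [0.0,              0.0]])
V = np.array([[delta + phi + k, 0.0],
              [-k,              delta + xi]])

# Spectral radius of the next-generation matrix F V^{-1}
R0_ngm = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
# Closed form (15)
R0_formula = Lam * (b1 * (xi + delta) + k * b2) / (delta * (xi + delta) * (delta + phi + k))
print(abs(R0_ngm - R0_formula) < 1e-12)   # True: the two computations agree
```

Since the second row of $F$ is zero, $FV^{-1}$ has one zero eigenvalue and the nonzero eigenvalue is exactly the closed form in (15).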

The drug-addiction equilibrium $E\_1(S^\*, P\_1^\*, P\_2^\*, R^\*)$ of system (3) can be rewritten as

$$\begin{aligned} S^\* &= \frac{(k + \delta + \phi)P\_1^\*}{\beta\_1 P\_1^\* + \beta\_2 P\_2^\*} \\\\ P\_1^\* &= \frac{B\_0(R\_0 - 1) + B\_1}{B\_2}, \text{where} \\\\ B\_0 &= \left[\delta^2 (k + \delta + \phi)(\delta + \xi) + \frac{\gamma}{\delta + \xi}\right] \\\\ B\_1 &= \frac{\gamma}{\delta + \xi} \left[\Lambda \beta\_2 k \xi + \Lambda \beta\_2 \phi(\delta + \xi)\right] \\\\ B\_2 &= (k + \delta + \phi) \left[\beta\_1 \delta (\gamma + \delta + \xi) + \beta\_2 (k\gamma + k\delta + \phi\gamma)\right] \\\\ P\_2^\* &= \frac{(k\delta + k\gamma + \phi\gamma)P\_1^\*}{\delta(\gamma + \delta + \xi)} \\\\ R^\* &= \frac{\xi P\_2^\* + \phi P\_1^\*}{\delta + \gamma} \end{aligned} \tag{16}$$

Therefore, if $R\_0 > 1$, the drug-addiction equilibrium $E\_1$ exists.

The basic reproduction number (*R*0) of system (3) relies upon seven parameters: per capita contact rates *β*1, *β*2, rate of recruitment Λ (of *S*), escalation rate from psychological to physiological addicts (*k*), per capita treatment rates for psychological and physiological addicts respectively (*φ* , *ξ*), and natural death rate (*δ*). Among these parameters, we cannot control the parameters Λ, *k*, and *δ*. Therefore, the basic reproduction number (*R*0) mainly depends on *ξ*, *φ*, *β*1, *β*<sup>2</sup> and the value of *R*<sup>0</sup> = 0.0266 according to Table 2. To examine the sensitivity of *R*<sup>0</sup> to any parameter (say, *θ*), normalized forward sensitivity index with respect to each parameter has been computed as [11,34]

$$\chi\_{\theta} = \frac{\partial \mathcal{R}\_0}{\partial \theta} \frac{\theta}{\mathcal{R}\_0}$$

The sensitivity index may depend on several system parameters, but it can also be constant, i.e., independent of some parameters. These values are important for identifying which parameters must be estimated cautiously, since a small perturbation in a highly sensitive parameter produces large quantitative changes. Conversely, the estimation of a parameter with a low sensitivity index demands less caution, because a small perturbation in that parameter causes only small changes. In this context, we have examined the sensitivity of $R\_0$ to the parameters $\beta\_1$, $\beta\_2$, $\phi$, and $\xi$; the corresponding normalized forward sensitivity indices, computed at the parameter values of Table 3, are listed in Table 2.

$$\frac{\partial R\_0}{\partial \phi} = -\frac{\Lambda \left[\beta\_1 (\xi + \delta) + k \beta\_2\right]}{\delta (\delta + \xi) (k + \delta + \phi)^2}$$

$$\chi\_{\Phi} = \frac{\phi}{R\_0} \frac{\partial R\_0}{\partial \phi} = -\frac{\phi}{k + \delta + \phi}$$

$$\begin{aligned} \frac{\partial R\_0}{\partial \xi} &= -\frac{\Lambda k \beta\_2}{\delta(\delta + \xi)^2 (k + \delta + \phi)} \\\\ \chi\_{\xi} &= \frac{\xi}{R\_0} \frac{\partial R\_0}{\partial \xi} = -\frac{k \beta\_2 \xi}{(\delta + \xi)[\beta\_1(\xi + \delta) + k \beta\_2]} \\\\ \frac{\partial R\_0}{\partial \beta\_1} &= \frac{\Lambda}{\delta(k + \delta + \phi)} \\\\ \chi\_{\beta\_1} &= \frac{\beta\_1}{R\_0} \frac{\partial R\_0}{\partial \beta\_1} = \frac{\beta\_1 (\delta + \xi)}{[\beta\_1(\xi + \delta) + k \beta\_2]} \\\\ \frac{\partial R\_0}{\partial \beta\_2} &= \frac{\Lambda k}{\delta(k + \delta + \phi)((\delta + \xi))} \\\\ \chi\_{\beta\_2} &= \frac{\beta\_2}{R\_0} \frac{\partial R\_0}{\partial \beta\_2} = \frac{k \beta\_2}{[\beta\_1(\xi + \delta) + k \beta\_2]} \end{aligned}$$
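These closed-form indices can be validated against finite-difference derivatives of $R\_0$. A sketch under assumed parameter values (the paper's Table 3 is not reproduced here); the helper `chi` is our naming, not the paper's:

```python
# Assumed illustrative values for the uncontrollable parameters.
Lam, delta, k = 0.4, 0.4, 0.2
p = {'b1': 0.3, 'b2': 0.5, 'phi': 0.4, 'xi': 0.6}

def R0(b1, b2, phi, xi):
    # closed form (15)
    return Lam * (b1 * (xi + delta) + k * b2) / (delta * (xi + delta) * (delta + phi + k))

def chi(name, h=1e-7):
    """Normalized forward sensitivity index chi_theta = (dR0/dtheta)*(theta/R0),
    approximated by a central finite difference."""
    up = dict(p); up[name] += h
    dn = dict(p); dn[name] -= h
    deriv = (R0(**up) - R0(**dn)) / (2 * h)
    return deriv * p[name] / R0(**p)

print(chi('phi'), -p['phi'] / (k + delta + p['phi']))   # both ~ -0.4
```

The numerical index for `phi` matches the closed form $\chi\_\phi = -\phi/(k+\delta+\phi)$ to finite-difference accuracy, and the same check works for $\beta\_1$, $\beta\_2$, and $\xi$.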

If *β*<sup>1</sup> = *bβ*; *β*<sup>2</sup> = *β*, where *b* is a nonzero real number, then

$$\begin{aligned} \frac{\partial R\_0}{\partial \boldsymbol{\beta}} &= \frac{\Lambda[b(\boldsymbol{\xi} + \boldsymbol{\delta}) + \boldsymbol{k}]}{\delta(\boldsymbol{\xi} + \boldsymbol{\delta})(\boldsymbol{\delta} + \boldsymbol{\phi} + \boldsymbol{k})} \\\\ \chi\_{\boldsymbol{\beta}} &= \frac{\boldsymbol{\beta}}{R\_0} \frac{\partial R\_0}{\partial \boldsymbol{\beta}} = \frac{\boldsymbol{\beta}}{\frac{\beta \Lambda[b(\boldsymbol{\xi} + \boldsymbol{\delta}) + \boldsymbol{k}]}{\delta(\boldsymbol{\xi} + \boldsymbol{\delta})(\boldsymbol{\delta} + \boldsymbol{\phi} + \boldsymbol{k})}} \frac{\Lambda[b(\boldsymbol{\xi} + \boldsymbol{\delta}) + \boldsymbol{k}]}{\delta(\boldsymbol{\xi} + \boldsymbol{\delta})(\boldsymbol{\delta} + \boldsymbol{\phi} + \boldsymbol{k})} = 1 \end{aligned}$$

Here, $\chi\_{\beta\_1}$, $\chi\_{\beta\_2}$, $\chi\_\xi$, $\chi\_\phi$ are the sensitivity indices corresponding to the respective parameters $\beta\_1$, $\beta\_2$, $\xi$, $\phi$. Therefore, it is clear that the basic reproduction number ($R\_0$) is most sensitive to changes in $\beta$ ($\chi\_\beta = 1$), where $\beta\_1 = b\beta$, $\beta\_2 = \beta$, and $b$ is a nonzero real number, i.e., to the probability of transmission from susceptible individuals to drug addicts (both psychological and physiological).

**Table 2.** Sensitivity indices of different parameters of system (3) corresponding to Table 3.


**Table 3.** Parameter values used in system (3) when *E*<sup>0</sup> = (1, 0, 0, 0) and *R*<sup>0</sup> = 0.3151.


If *β*1, *β*<sup>2</sup> increases, *R*<sup>0</sup> also increases, whereas *R*<sup>0</sup> decreases when *φ*, *ξ* increases, or vice versa. However, the increase in *φ*, i.e., the treatment rate for psychological addicts, cannot help as much as the treatment rate for physiological addicts *ξ*. In this way, it is smarter to concentrate either *β*1, *β*<sup>2</sup> (the contact rates ) and *φ*, treatment rate for mental addicts. It is also noticeable that *R*<sup>0</sup> is more sensitive to *β*<sup>1</sup> rather than *β*<sup>2</sup> according Table 2.

#### *3.5. Local Stability*

To analyze the local stability of disease free and endemic equilibrium points, we need the following.

**Definition 3** ([37])**.** *The discriminant $\nabla(f)$ of a polynomial $f(x) = x^n + \alpha\_1 x^{n-1} + \alpha\_2 x^{n-2} + \dots + \alpha\_n$ is defined by*

$$
\nabla(f) = (-1)^{\frac{n(n-1)}{2}} |S\_n(f, f')|.
$$

*where $S\_n(f, g)$ is the Sylvester matrix of $f(x)$ and $g(x)$, of order $(n + l) \times (n + l)$, and $g(x) = x^l + \beta\_1 x^{l-1} + \beta\_2 x^{l-2} + \dots + \beta\_l$.*

For $n = 3$, we have $f(x) = x^3 + a\_1 x^2 + a\_2 x + a\_3$ and $f'(x) = 3x^2 + 2a\_1 x + a\_2$, so that

$$|S\_3(f, f')| = \begin{vmatrix} 1 & a\_1 & a\_2 & a\_3 & 0 \\\\ 0 & 1 & a\_1 & a\_2 & a\_3 \\\\ 3 & 2a\_1 & a\_2 & 0 & 0 \\\\ 0 & 3 & 2a\_1 & a\_2 & 0 \\\\ 0 & 0 & 3 & 2a\_1 & a\_2 \end{vmatrix} = -18a\_1a\_2a\_3 - (a\_1a\_2)^2 + 4a\_1^3a\_3 + 4a\_2^3 + 27a\_3^2$$

Therefore, $\nabla(f) = -|S\_3(f, f')| = 18a\_1a\_2a\_3 + (a\_1a\_2)^2 - 4a\_1^3a\_3 - 4a\_2^3 - 27a\_3^2$.
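The sign conventions can be verified numerically: the discriminant obtained from the Sylvester determinant must agree with the standard closed form $18a\_1a\_2a\_3 + (a\_1a\_2)^2 - 4a\_1^3a\_3 - 4a\_2^3 - 27a\_3^2$. A small sketch (the helper name and sample coefficients are ours):

```python
import numpy as np

def disc_cubic(a1, a2, a3):
    """Discriminant of f(x) = x^3 + a1*x^2 + a2*x + a3 computed as
    (-1)^{n(n-1)/2} * det S_3(f, f') with n = 3, per Definition 3."""
    S = np.array([
        [1.0, a1,   a2,   a3,   0.0],
        [0.0, 1.0,  a1,   a2,   a3 ],
        [3.0, 2*a1, a2,   0.0,  0.0],
        [0.0, 3.0,  2*a1, a2,   0.0],
        [0.0, 0.0,  3.0,  2*a1, a2 ],
    ])
    return -np.linalg.det(S)

# f(x) = x^3 - x has roots 0, 1, -1, so its discriminant is
# (0-1)^2 (0-(-1))^2 (1-(-1))^2 = 4.
print(round(disc_cubic(0.0, -1.0, 0.0), 6))   # 4.0
```

A positive discriminant indicates three distinct real roots, which is what the Routh–Hurwitz-type conditions of Lemma 3 distinguish from the complex-root case.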

**Lemma 3.** *(Routh–Hurwitz conditions for fractional calculus [38]). If $\nabla(P)$ is the discriminant of the characteristic equation $P(\lambda) = \lambda^n + a\_1 \lambda^{n-1} + a\_2 \lambda^{n-2} + \dots + a\_n$ of the Jacobian matrix of system (1) evaluated at an equilibrium point, then for $n = 3$ the system is asymptotically stable if any of the following conditions hold:*


To study the local stability of the system (3), we need to compute Jacobian matrices at the equilibrium points *E*<sup>0</sup> and *E*1. At the drug-free equilibrium point *E*0:

$$J\left\{ \left( \frac{\Lambda}{\delta}, 0, 0, 0 \right) \right\} = \begin{bmatrix} -\delta & -\beta\_1 \frac{\Lambda}{\delta} & -\beta\_2 \frac{\Lambda}{\delta} & 0\\ 0 & \beta\_1 \frac{\Lambda}{\delta} - (k + \delta + \phi) & \beta\_2 \frac{\Lambda}{\delta} & 0\\ 0 & k & -(\xi + \delta) & \gamma\\ 0 & \phi & \xi & -(\gamma + \delta) \end{bmatrix}$$

The eigenvalues of the system are *λ*<sup>1</sup> = −*δ*, and the other three eigenvalues can be found from the equation *Q*(*λ*) ≡ *λ* <sup>3</sup> + *c*1*λ* <sup>2</sup> + *c*2*λ* + *c*<sup>3</sup> = 0, where

$$\begin{aligned} c\_1 &= -(K\_1 + K\_5 + K\_9) \\ c\_2 &= K\_1 K\_5 + K\_1 K\_9 + K\_5 K\_9 - K\_2 K\_4 - K\_3 K\_7 - K\_6 K\_8 \\ c\_3 &= -K\_1 K\_5 K\_9 + K\_1 K\_6 K\_8 + K\_2 K\_4 K\_9 - K\_2 K\_6 K\_7 - K\_3 K\_4 K\_8 + K\_3 K\_7 K\_5 \\ K\_1 &= \beta\_1 \frac{\Lambda}{\delta} - (k + \delta + \phi) \\ K\_2 &= \beta\_2 \frac{\Lambda}{\delta} \\ K\_3 &= 0 \\ K\_4 &= k \\ K\_5 &= -(\xi + \delta) \\ K\_6 &= \gamma \\ K\_7 &= \phi \\ K\_8 &= \xi \\ K\_9 &= -(\gamma + \delta) \end{aligned} \tag{17}$$

Suppose $\nabla(Q) = 18 c\_1 c\_2 c\_3 + (c\_1 c\_2)^2 - 4 c\_1^3 c\_3 - 4 c\_2^3 - 27 c\_3^2$; then, by the Routh–Hurwitz conditions for fractional differential equations, the drug-free equilibrium point $E\_0$ is locally asymptotically stable if any of the following conditions hold:


$$J(E\_1) = \begin{bmatrix} -\delta - \beta\_1 P\_1^\* - \beta\_2 P\_2^\* & -\beta\_1 S^\* & -\beta\_2 S^\* & 0 \\\\ \beta\_1 P\_1^\* + \beta\_2 P\_2^\* & \beta\_1 S^\* - (k + \delta + \phi) & \beta\_2 S^\* & 0 \\\\ 0 & k & -(\xi + \delta) & \gamma \\\\ 0 & \phi & \xi & -(\gamma + \delta) \end{bmatrix}$$

Characteristic equation of this matrix is *P*(*λ*) ≡ *λ* <sup>3</sup> + *a*1*λ* <sup>2</sup> + *a*2*λ* + *a*<sup>3</sup> = 0, where

$$a\_1 = \frac{e\_{23}e\_{32} + e\_{12}e\_{22} - e\_{22}e\_{33} - e\_{22}e\_{44} - e\_{11}e\_{12}}{e\_{22}}$$

$$\begin{aligned} a\_2 = [&e\_{11} e\_{22} e\_{33} + e\_{11} e\_{22} e\_{44} - e\_{11} e\_{23} e\_{32} + e\_{22} e\_{33} e\_{44} - e\_{22} e\_{34} e\_{43} - e\_{23} e\_{32} e\_{44} \\\\ &+ e\_{34} e\_{23} e\_{42} - e\_{22} e\_{12} e\_{33} - e\_{22} e\_{12} e\_{44} + e\_{32} e\_{13} e\_{21}] / e\_{22} \end{aligned} \tag{18}$$

$$\begin{aligned} a\_3 &= \left[ e\_{11} e\_{22} e\_{34} e\_{43} - e\_{11} e\_{22} e\_{33} e\_{44} + e\_{11} e\_{23} e\_{32} e\_{44} - e\_{11} e\_{23} e\_{34} e\_{44} + e\_{12} e\_{21} e\_{33} e\_{44} \right] \\ &- e\_{12} e\_{21} e\_{34} e\_{43} - e\_{21} e\_{13} e\_{32} e\_{44} \Big] / e\_{22} \end{aligned}$$

and *eij*, *i*, *j* = 1, 2, 3, 4 are as follows:

$$\begin{aligned} e\_{11} &= -\delta - \beta\_1 P\_1^\* - \beta\_2 P\_2^\* \\\\ e\_{12} &= -\beta\_1 S^\* \\\\ e\_{13} &= -\beta\_2 S^\* \\\\ e\_{14} &= 0 \\\\ e\_{21} &= \beta\_1 P\_1^\* + \beta\_2 P\_2^\* \\\\ e\_{22} &= \beta\_1 S^\* - (k + \delta + \phi) \\\\ e\_{23} &= \beta\_2 S^\* \\\\ e\_{24} &= 0 \\\\ e\_{31} &= 0 \\\\ e\_{32} &= k \\\\ e\_{33} &= -(\xi + \delta) \\\\ e\_{34} &= \gamma \\\\ e\_{41} &= 0 \\\\ e\_{42} &= \phi \\\\ e\_{43} &= \xi \\\\ e\_{44} &= -(\gamma + \delta) \end{aligned} \tag{19}$$

Therefore, $\lambda_i$, $i = 1, 2, 3$, can be found from this equation. Suppose $\nabla(P) = 18a_1a_2a_3 + (a_1a_2)^2 - 4a_1^3a_3 - 4a_2^3 - 27a_3^2$; then, by the Routh–Hurwitz conditions for fractional differential equations, the endemic equilibrium point $E_1$ is locally asymptotically stable if any of the following conditions hold:


The following theorems are the consequence of these discussions.

**Theorem 7.** *The drug-free equilibrium E*<sup>0</sup> *of system (2) is locally asymptotically stable if any of the following conditions holds with (17):*


**Theorem 8.** *The endemic equilibrium E*<sup>1</sup> *of system (2) is locally asymptotically stable if any of the following conditions holds with (18) and (19):*
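Theorems 7 and 8 reduce to sign checks on the discriminant and the polynomial coefficients. As a quick numerical illustration (not from the paper), such a check for a cubic characteristic polynomial can be sketched as follows; the conditions encoded in `rh_fractional_stable` are one common formulation of the fractional Routh–Hurwitz criteria and are an assumption here, not a quotation of the theorems:

```python
def disc(a1, a2, a3):
    """Discriminant of P(lam) = lam^3 + a1*lam^2 + a2*lam + a3."""
    return 18*a1*a2*a3 + (a1*a2)**2 - 4*a1**3*a3 - 4*a2**3 - 27*a3**2

def rh_fractional_stable(a1, a2, a3, eps):
    """Sufficient Routh-Hurwitz-type stability conditions for a fractional
    system of order eps in (0, 1) (a commonly used formulation; assumed)."""
    D = disc(a1, a2, a3)
    if D > 0:
        # classical Routh-Hurwitz case: all roots in the left half-plane
        return a1 > 0 and a3 > 0 and a1*a2 > a3
    if D < 0 and a1 >= 0 and a2 >= 0 and a3 > 0:
        # fractional relaxation: stability for sufficiently small order
        return eps < 2/3
    return False

# P(lam) = (lam+1)(lam+2)(lam+3): roots -1, -2, -3, stable for any order
print(disc(6, 11, 6))                        # 4
print(rh_fractional_stable(6, 11, 6, 0.95))  # True
```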


#### *3.6. Global Asymptotic Stability*

We need the following lemmas on the Lyapunov direct method, related to the global stability of equilibrium points in fractional-order models.

**Lemma 4** ([32])**.** *Let $u(t) \in \mathbb{R}^+$ be a continuous and differentiable function. Then, for any moment of time $t > 0$,* $${}^C_0D^\varepsilon_t\left[u(t) - u^* - u^*\ln\frac{u(t)}{u^*}\right] \le \left(1 - \frac{u^*}{u(t)}\right){}^C_0D^\varepsilon_t u(t), \quad u^* \in \mathbb{R}^+,\ \forall \varepsilon \in (0,1).$$

**Lemma 5.** *(Uniform Asymptotic Stability Theorem) [39]: Consider the non-autonomous system*

$$\,_0^C D_t^\varepsilon x(t) = f(t, x), \; x \in \Omega \subseteq \mathbb{R}^n \tag{20}$$


*Let $x^*$ be an equilibrium point of system (20) ($x^* \in \Omega \subseteq \mathbb{R}^n$) and $\Phi(t, x(t)) : [0, \infty) \times \Omega \to \mathbb{R}$ be a continuously differentiable function such that*

$$\begin{aligned} {}^C_0D^\varepsilon_t \Phi(t, x(t)) &\le -\Theta_3(x),\\ \Theta_1(x) &\le \Phi(t, x(t)) \le \Theta_2(x), \quad \forall \varepsilon \in (0,1),\ \forall x(t) \in \Omega \end{aligned}$$

*where $\Theta_i$, $i = 1, 2, 3$, are continuous positive definite functions on $\Omega$. Then, the equilibrium point $x^*$ of system (20) is globally asymptotically stable.*

**Theorem 9.** *If* $1 > \frac{[k\gamma\xi + \gamma\phi(\xi + \delta)]\Lambda\beta_2}{\delta^2(k+\phi+\delta)(\xi+\delta+\gamma)(\xi+\delta)}$*, then the drug-free equilibrium $E_0$ of system (3) is globally asymptotically stable when*

$$R_0 \le 1 - \frac{[k\gamma\xi + \gamma\phi(\xi + \delta)]\Lambda\beta_2}{\delta^2(k+\phi+\delta)(\xi+\delta+\gamma)(\xi+\delta)}.$$

**Proof.** We have considered a positive definite function:

$$L = \frac{1}{M}P_1 + \frac{\beta_2(\gamma + \delta)}{\delta(\xi + \delta + \gamma)}P_2 + \frac{\beta_2\gamma}{\delta(\xi + \delta + \gamma)}R, \text{ where } M = \frac{\Lambda}{\delta}.$$

Clearly, $L \ge 0$ and $L = 0$ only when $P_1 = 0$, $P_2 = 0$, and $R = 0$.

Taking the $\varepsilon$-order Caputo derivative ${}^C_0D^\varepsilon_t$ of $L$ along the solutions of system (3), we have (for large time $t$)

$$\begin{aligned} {}^C_0D^\varepsilon_t L &= \frac{1}{M}\,{}^C_0D^\varepsilon_t P_1 + \frac{\beta_2(\gamma+\delta)}{\delta(\xi+\delta+\gamma)}\,{}^C_0D^\varepsilon_t P_2 + \frac{\beta_2\gamma}{\delta(\xi+\delta+\gamma)}\,{}^C_0D^\varepsilon_t R \\ &= \frac{1}{M}[\beta_1SP_1 + \beta_2SP_2 - kP_1 - (\delta+\phi)P_1] + \frac{\beta_2(\gamma+\delta)}{\delta(\xi+\delta+\gamma)}[kP_1 + \gamma R - \xi P_2 - \delta P_2] \\ &\quad + \frac{\beta_2\gamma}{\delta(\xi+\delta+\gamma)}[\phi P_1 + \xi P_2 - \gamma R - \delta R] \\ &\le \frac{1}{M}\beta_1MP_1 - \frac{1}{M}(k+\delta+\phi)P_1 + \frac{\beta_2(\gamma+\delta)}{\delta(\xi+\delta+\gamma)}kP_1 + \frac{\beta_2\gamma\phi}{\delta(\xi+\delta+\gamma)}P_1 \\ &= \left[\frac{\delta R_0}{\Lambda}(k+\delta+\phi) - \frac{\beta_2k}{\xi+\delta} + \frac{\beta_2k(\gamma+\delta) + \beta_2\gamma\phi}{\delta(\xi+\delta+\gamma)} - \frac{k+\delta+\phi}{M}\right]P_1 \\ &= \frac{\delta(k+\delta+\phi)}{\Lambda}[R_0 - L_0]P_1 \end{aligned}$$

where

$$L_0 = 1 + \frac{\Lambda\beta_2 k}{\delta(\xi+\delta)(k+\phi+\delta)} - \frac{\Lambda}{\delta^2}\,\frac{\beta_2k(\gamma+\delta) + \beta_2\gamma\phi}{(k+\phi+\delta)(\xi+\delta+\gamma)}$$

$$= 1 - \frac{[k\gamma\xi + \gamma\phi(\xi+\delta)]\Lambda\beta_2}{\delta^2(k+\phi+\delta)(\xi+\delta+\gamma)(\xi+\delta)} \le 1$$

Therefore, ${}^C_0D^\varepsilon_t L \le 0$ if $R_0 \le L_0$. Hence, using Lemma 5:

$$\lim_{t \to \infty}P_1(t) = \lim_{t \to \infty}P_2(t) = \lim_{t \to \infty}R(t) = 0.$$

Thus, in the limit, $S(t)$ is given by the solutions of ${}^C_0D^\varepsilon_tS(t) = \Lambda - \delta S$. As $S(0) > 0$, the theorem follows.
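The algebraic reduction of $L_0$ from its defining expression to the closed form used in Theorem 9 can be spot-checked with exact rational arithmetic; a minimal sketch (the parameter values are arbitrary positive rationals, not fitted to any data):

```python
from fractions import Fraction as F

def L0_first_form(Lam, b2, k, phi, d, g, xi):
    # L0 as first written: 1 + term - term (d = delta, g = gamma)
    return (1 + Lam*b2*k / (d*(xi + d)*(k + phi + d))
              - Lam * (b2*k*(g + d) + b2*g*phi) / (d**2*(k + phi + d)*(xi + d + g)))

def L0_second_form(Lam, b2, k, phi, d, g, xi):
    # L0 after simplification, as in the statement of Theorem 9
    return 1 - (k*g*xi + g*phi*(xi + d))*Lam*b2 / (d**2*(k + phi + d)*(xi + d + g)*(xi + d))

params = [F(p) for p in (2, 3, 5, 7, 11, 13, 17)]
print(L0_first_form(*params) == L0_second_form(*params))  # True: identity holds exactly
```

Using `Fraction` avoids floating-point noise, so the two forms agree exactly whenever the identity is correct.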

**Theorem 10.** *If $R_0 > 1$, then the endemic equilibrium $E_1(S^*, P_1^*, P_2^*, R^*)$ of system (3) is globally asymptotically stable.*

**Proof.** Consider a positive definite function:

$$\begin{aligned} V &= \left(S - S^* - S^*\ln\frac{S}{S^*}\right) + \left(P_1 - P_1^* - P_1^*\ln\frac{P_1}{P_1^*}\right) \\ &\quad + \frac{\beta_2(\gamma+\delta)}{\delta(\xi+\delta+\gamma)}\left(P_2 - P_2^* - P_2^*\ln\frac{P_2}{P_2^*}\right) + \frac{\beta_2\gamma}{\delta(\xi+\delta+\gamma)}\left(R - R^* - R^*\ln\frac{R}{R^*}\right) \end{aligned} \tag{21}$$

It is observed that $V \ge 0$ and $V = 0$ only at $E_1$. Taking the $\varepsilon$-order Caputo derivative ${}^C_0D^\varepsilon_t$ of $V$ and using Lemma 4, we have

$$\begin{aligned} {}^C_0D^\varepsilon_t(V) \le\ & \left(1 - \frac{S^*}{S}\right){}^C_0D^\varepsilon_tS + \left(1 - \frac{P_1^*}{P_1}\right){}^C_0D^\varepsilon_tP_1 \\ & + \frac{\beta_2(\gamma+\delta)}{\delta(\xi+\delta+\gamma)}\left(1 - \frac{P_2^*}{P_2}\right){}^C_0D^\varepsilon_tP_2 + \frac{\beta_2\gamma}{\delta(\xi+\delta+\gamma)}\left(1 - \frac{R^*}{R}\right){}^C_0D^\varepsilon_tR \end{aligned} \tag{22}$$

From the steady-state of equilibrium point (4), we have

$$\begin{aligned} \Lambda &= \delta S^* + \beta_1S^*P_1^* + \beta_2S^*P_2^* \\ \frac{\beta_1S^*P_1^* + \beta_2S^*P_2^*}{P_1^*} &= k + \delta + \phi \\ \frac{kP_1^* + \gamma R^*}{P_2^*} &= \xi + \delta \\ \frac{\phi P_1^* + \xi P_2^*}{R^*} &= \gamma + \delta \end{aligned} \tag{23}$$

Let $a = \frac{S}{S^*}$, $b = \frac{P_1}{P_1^*}$, $c = \frac{P_2}{P_2^*}$, $d = \frac{R}{R^*}$. From (22) and (23), we have

$$\begin{aligned} {}^C_0D^\varepsilon_t(V) \le\ & \frac{(S-S^*)}{S}\Big[-\delta(S-S^*) - \beta_1(P_1S - P_1^*S^*) - \beta_2(P_2S - P_2^*S^*)\Big] \\ & + \left(1 - \frac{P_1^*}{P_1}\right)\Big[\beta_1SP_1 + \beta_2SP_2 - (\beta_1S^*P_1^* + \beta_2S^*P_2^*)\frac{P_1}{P_1^*}\Big] \\ & + \frac{\beta_2(\gamma+\delta)}{\delta(\xi+\delta+\gamma)}\left(1 - \frac{P_2^*}{P_2}\right)\Big[kP_1 + \gamma R - P_2\,\frac{kP_1^* + \gamma R^*}{P_2^*}\Big] \\ & + \frac{\beta_2\gamma}{\delta(\xi+\delta+\gamma)}\left(1 - \frac{R^*}{R}\right)\Big[\phi P_1 + \xi P_2 - R\,\frac{\phi P_1^* + \xi P_2^*}{R^*}\Big] \\ =\ & -\frac{\delta}{S}(S-S^*)^2 + \beta_1P_1^*S^*\Big[(1-ab)\Big(1-\frac{1}{a}\Big) + \Big(1-\frac{1}{b}\Big)ab - b\Big(1-\frac{1}{b}\Big)\Big] \\ & + \beta_2P_2^*S^*\Big[(1-ac)\Big(1-\frac{1}{a}\Big) + \Big(1-\frac{1}{b}\Big)ac - b\Big(1-\frac{1}{b}\Big)\Big] \\ & + \frac{\beta_2(\gamma+\delta)}{\delta(\xi+\delta+\gamma)}kP_1^*\Big(1-\frac{1}{c}\Big)(b-c) + \frac{\beta_2(\gamma+\delta)}{\delta(\xi+\delta+\gamma)}\gamma R^*\Big(1-\frac{1}{c}\Big)(d-c) \\ & + \frac{\beta_2\gamma}{\delta(\xi+\delta+\gamma)}\phi P_1^*\Big(1-\frac{1}{d}\Big)(b-d) + \frac{\beta_2\gamma}{\delta(\xi+\delta+\gamma)}\xi P_2^*\Big(1-\frac{1}{d}\Big)(c-d) \\ =\ & -\frac{\delta}{S}(S-S^*)^2 + \beta_1P_1^*S^*\Big(2 - \frac{1}{a} - a\Big) \\ & + \frac{\beta_2(k\gamma + k\delta + \phi\gamma)}{\delta(\gamma+\delta+\xi)}P_1^*\Big(2 - \frac{1}{a} + c - \frac{ac}{b} - b\Big) \quad \Big(\text{using } P_2^* = \frac{k\gamma + k\delta + \phi\gamma}{\delta(\gamma+\delta+\xi)}P_1^*\Big) \\ & + \frac{\beta_2(\gamma+\delta)}{\delta(\xi+\delta+\gamma)}kP_1^*\Big(b - c - \frac{b}{c} + 1\Big) \\ & + \frac{\beta_2\gamma}{\delta(\xi+\delta+\gamma)}(\phi P_1^* + \xi P_2^*)\Big(d - c - \frac{d}{c} + 1\Big) \quad \Big(\text{using } R^*(\delta+\gamma) = \phi P_1^* + \xi P_2^*\Big) \\ & + \frac{\beta_2\gamma}{\delta(\xi+\delta+\gamma)}\phi P_1^*\Big(b - d - \frac{b}{d} + 1\Big) + \frac{\beta_2\gamma}{\delta(\xi+\delta+\gamma)}\xi P_2^*\Big(c - d - \frac{c}{d} + 1\Big) \end{aligned}$$

$$\begin{aligned} =\ & -\frac{\delta}{S}(S - S^*)^2 + \beta_1 P_1^* S^* \left(2 - \frac{1}{a} - a\right) \\ & + \frac{\beta_2(\gamma + \delta)}{\delta(\xi + \delta + \gamma)} k P_1^* \left(3 - \frac{1}{a} - \frac{ac}{b} - \frac{b}{c}\right) \\ & + \frac{\beta_2 \phi \gamma}{\delta(\xi + \delta + \gamma)} P_1^* \left(4 - \frac{1}{a} - \frac{ac}{b} - \frac{d}{c} - \frac{b}{d}\right) \end{aligned} \tag{24}$$

Using the inequality A.M. $\ge$ G.M., we have $2 - \frac{1}{a} - a \le 0$; $3 - \frac{1}{a} - \frac{ac}{b} - \frac{b}{c} \le 0$; and $4 - \frac{1}{a} - \frac{ac}{b} - \frac{d}{c} - \frac{b}{d} \le 0$. From relation (24), it is clear that ${}^C_0D^\varepsilon_t(V) \le 0$, and thus ${}^C_0D^\varepsilon_t(V)$ is negative definite with respect to $E_1$. Thus, $E_1$ is globally asymptotically stable by Lemma 5.
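The three AM–GM inequalities invoked above (each group of terms has product 1, so its sum is at least the number of terms) are easy to sanity-check numerically; a small sketch, with an arbitrary sampling range:

```python
import random

random.seed(1)
ok = True
for _ in range(10_000):
    # a, b, c, d are ratios of positive population levels, hence positive
    a, b, c, d = (random.uniform(0.01, 100) for _ in range(4))
    # (1/a)*a = 1, so 1/a + a >= 2
    ok &= (2 - 1/a - a) <= 1e-9
    # (1/a)*(ac/b)*(b/c) = 1, so the three terms sum to >= 3
    ok &= (3 - 1/a - a*c/b - b/c) <= 1e-9
    # (1/a)*(ac/b)*(d/c)*(b/d) = 1, so the four terms sum to >= 4
    ok &= (4 - 1/a - a*c/b - d/c - b/d) <= 1e-9
print(ok)  # True
```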

#### **4. Fractional Optimal Control Problem**

Applications of fractional-order optimal control problems (FOCPs) have grown in recent decades. Agrawal introduced the general form of FOCPs in the Riemann–Liouville sense and suggested a numerical method to solve them using the Lagrange multiplier technique [24]. In traditional integer-order optimal control problems, the calculus of variations is the common method. Pontryagin's principle is one of the most useful approaches to solving optimal control problems. There are several works where these methods are employed in fractional-order optimal control problems [25,40].

Let $x$ be the pseudo-state vector, $u = [u^1, u^2, \ldots, u^m] \in U \subseteq \mathbb{R}^m$ the input vector, and $U$ the set of admissible controls of the dynamical system ${}^C_0D^\varepsilon_tx = f(x, u, t)$, $x(0) = x_0$. The system's pseudo-state is supposed to reach the final condition $x_f$ at the unknown final time $T_f < \infty$. The control $u \in U$ must be chosen for all $t \in [0, T_f]$ to minimize the objective functional $J$, which is defined by the application and can be abstracted as

$$J = \Theta(\mathfrak{x}(T\_f)) + \int\_0^{T\_f} F(\mathfrak{x}(t), \mathfrak{u}(t))dt$$

The constraints on the system dynamics can be adjoined to the Lagrangian $F$ by introducing a time-varying Lagrange multiplier vector $\lambda$, whose elements are called the co-states of the system. This motivates the construction of the Hamiltonian $H$, defined for all $t \in [0, T_f]$ by

$$H(\mathbf{x}(t), \boldsymbol{\mu}(t), \boldsymbol{\lambda}(t)) = \boldsymbol{\lambda}^T(t) f(\mathbf{x}(t), \boldsymbol{\mu}(t)) + F(\mathbf{x}(t), \boldsymbol{\mu}(t)).$$

where $\lambda^T$ stands for the transpose of $\lambda$. Pontryagin's minimum principle states that the optimal state trajectory $x^*$, optimal control $u^*$, and corresponding Lagrange multiplier vector $\lambda^*$ must minimize the Hamiltonian $H$, so that [41]

1. $H(x^*(t), u^*(t), \lambda^*(t)) \le H(x^*(t), u(t), \lambda^*(t))$

$$\text{2. } \quad \frac{\partial \Theta(x)}{\partial T_f}\Big|_{x=x(T_f)} + H(T_f) = 0$$

$$\text{3. } \quad {}^{RL}_tD^\varepsilon_{T_f}\lambda^T = \frac{\partial H}{\partial x}\Big|_{x=x^*}$$

$$\text{4. } \quad \frac{\partial H}{\partial u}\Big|_{u=u^*} = 0 \text{ and } \frac{\partial^2 H(x^*(t), u^*(t), \lambda^*(t))}{\partial u^2} \ge 0,$$

where

$${}^{RL}_tD^\varepsilon_Tf(t) = \frac{-1}{\Gamma(1-\varepsilon)}\frac{d}{dt}\int_t^T(\tau - t)^{-\varepsilon}f(\tau)\,d\tau, \quad \forall t \in [0, T]$$

is the right Riemann–Liouville derivative of order $\varepsilon$; the notation "RL" stands for right Riemann–Liouville. These four conditions are necessary, but not sufficient, for optimal control.
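To make the right Riemann–Liouville derivative concrete, here is a small numerical sketch using the standard right-sided Grünwald–Letnikov approximation (the discretization, test function, and step size are our choices for illustration, not taken from the paper):

```python
import math

def right_rl_deriv(f, t, T, eps, h=1e-3):
    """Grunwald-Letnikov approximation of the right Riemann-Liouville
    derivative tD_T^eps f at time t (a sketch; first-order accurate in h)."""
    N = int(round((T - t) / h))
    w, acc = 1.0, f(t)              # w_0 = 1
    for j in range(1, N + 1):
        w *= (j - 1 - eps) / j      # w_j = (-1)^j * binom(eps, j)
        acc += w * f(t + j * h)     # right-sided: samples toward T
    return acc / h**eps

# Known closed form: tD_T^eps (T-t)^1 = Gamma(2)/Gamma(2-eps) * (T-t)^(1-eps)
eps, T, t = 0.5, 1.0, 0.5
exact = math.gamma(2) / math.gamma(2 - eps) * (T - t) ** (1 - eps)
print(abs(right_rl_deriv(lambda s: T - s, t, T, eps) - exact) < 1e-2)  # True
```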

Our aim is to limit the number of synthetic drug addicts by considering the impact of awareness programs, psychological counseling, and other preventive measures as a control strategy. We consider system (3) with this control. Conducting awareness campaigns and counseling programs on a regular basis can induce behavioral change among psychological addicts. Awareness campaigns not only discourage the population from taking drugs but also inform them about the repercussions of consuming synthetic drugs. Considering this, a treatment rate term $c\eta P_1$ has been introduced into system (3) to obtain system (26). Here, $c$ represents the treatment (counseling) rate together with the impact of awareness campaigns, and $\eta$ is the intensity of treatment. Counseling involves various costs, such as diagnosis, medicines, and other expenses. Thus, $\eta$ can be used as a potential instrument to produce a positive outcome for psychological addicts, with $0 \le \eta \le 1$: the value 0 denotes no improvement over the counseling period, while 1 represents full improvement. Consequently, the control intensity $\eta$ depends entirely on the effort of psychological addicts to keep themselves from consuming synthetic drugs.

In the following, we focus on determining the optimal treatment via counseling with minimum cost by implementing the control. From the previous discussion, we deduce that the admissible set for the control variable $\eta(t)$ is

$$\Theta = \{ \eta(t) | \eta(t) \in [0, 1], t \in [0, T\_f] \}.$$

where $T_f$ represents the final time up to which the control policy is implemented. It is assumed that the control function $\eta(t)$ is measurable.

Our main objective is to minimize the objective function $J$, which represents the cost involved in counseling and awareness programs over the time interval $[0, T_f]$, by finding the optimal control $\eta^*$ as follows:

$$J(\eta^*) = \min_{\eta(t) \in \Theta} J(\eta). \tag{25}$$

Here,

$$J(\eta) = \int_0^{T_f}\left[\omega_1 P_1(t) + \frac{\omega_2}{2}\eta^2(t)\right]dt$$

(where $\omega_1 \neq 0$ and $\omega_2 \neq 0$ are the cost of treatment of the psychological class and the cost of implementation of the control strategy, respectively)

subject to

$$\begin{aligned} {}^C_0D^\varepsilon_tS(t) &= \Lambda - \delta S - \beta_1SP_1 - \beta_2SP_2, & S(0) &> 0,\\ {}^C_0D^\varepsilon_tP_1(t) &= \beta_1SP_1 + \beta_2SP_2 - (k + \delta + \phi)P_1 - c\eta P_1, & P_1(0) &> 0,\\ {}^C_0D^\varepsilon_tP_2(t) &= kP_1 + \gamma R - \xi P_2 - \delta P_2, & P_2(0) &> 0,\\ {}^C_0D^\varepsilon_tR(t) &= \phi P_1 + \xi P_2 - \gamma R - \delta R + c\eta P_1, & R(0) &> 0 \end{aligned} \tag{26}$$

The existence of the optimal control $\eta^*$ is established in the next theorem.

**Theorem 11.** *Let the control function $\eta \in \Theta$ be measurable on $[0, T_f]$ with the value of $\eta(t)$ lying in $[0, 1]$. Then, there exist adjoint variables $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ and an optimal control $\eta^*$ minimizing the objective function $J(\eta)$ subject to (26) and satisfying*

$$\begin{aligned} \, \_t^{RL}D\_{T\_f}^{\varepsilon} \lambda\_1(t) &= \lambda\_1(\delta + \beta\_1 P\_1 + \beta\_2 P\_2) - \lambda\_2(\beta\_1 P\_1 + \beta\_2 P\_2) \\\\ \_t^{RL}D\_{T\_f}^{\varepsilon} \lambda\_2(t) &= \lambda\_1 \beta\_1 S - \lambda\_2[\beta\_1 S - (k + \delta + \phi + c\eta)] - \lambda\_3 k - \lambda\_4(\phi + c\eta) - \omega\_1 \\\\ \_t^{RL}D\_{T\_f}^{\varepsilon} \lambda\_3(t) &= \lambda\_1 \beta\_2 S - \lambda\_2 \beta\_2 S + \lambda\_3(\xi + \delta) - \lambda\_4 \xi \\\\ \_t^{RL}D\_{T\_f}^{\varepsilon} \lambda\_4(t) &= -\lambda\_3 \gamma + \lambda\_4(\gamma + \delta) \end{aligned}$$

*with transversality conditions $\lambda_i(T_f) = 0$ ($i = 1, 2, 3, 4$) and*

$$\eta^* = \max\{\min\{\overline{\eta}, 1\}, 0\}, \quad \overline{\eta} = \frac{cP_1(t)(\lambda_2(t) - \lambda_4(t))}{\omega_2} \tag{27}$$

*where $S^*, P_1^*, P_2^*, R^*$ are the corresponding optimal state solutions of (26) associated with the control variable $\eta$.*

**Proof.** We have constructed the Hamiltonian as

$$\begin{aligned} H &= \omega\_1 P\_1(t) + \frac{\omega\_2}{2} \eta^2(t) \\\\ &+ \lambda\_1 \left\{ \Lambda - \delta S - \beta\_1 S P\_1 - \beta\_2 S P\_2 \right\} \\\\ &+ \lambda\_2 \left\{ \beta\_1 S P\_1 + \beta\_2 S P\_2 - (k + \delta + \phi) P\_1 - c\eta P\_1 \right\} \\\\ &+ \lambda\_3 \left\{ k P\_1 + \gamma R - \xi P\_2 - \delta P\_2 \right\} + \lambda\_4 \{ \phi P\_1 + \xi P\_2 - \gamma R - \delta R + c\eta P\_1 \} \end{aligned} \tag{28}$$

with (*λ*1, *λ*2, *λ*3, *λ*4) being the associated adjoint variables with *λi*(*T<sup>f</sup>* ) = 0 (*i* = 1, 2, 3, 4), which satisfy the following canonical equations:

$$\begin{aligned} \, \_{l}^{RL}D\_{T\_f}^{\varepsilon} \lambda\_1(t) &= -\frac{\partial H}{\partial S} = \lambda\_1(\delta + \beta\_1 P\_1 + \beta\_2 P\_2) - \lambda\_2(\beta\_1 P\_1 + \beta\_2 P\_2) \\\\ \, \_{l}^{RL}D\_{T\_f}^{\varepsilon} \lambda\_2(t) &= -\frac{\partial H}{\partial P\_1} = \lambda\_1 \beta\_1 S - \lambda\_2 [\beta\_1 S - (k + \delta + \phi + c\eta)] - \lambda\_3 k - \lambda\_4 (\phi + c\eta) - \omega\_1 \\\\ \, \_{l}^{RL}D\_{T\_f}^{\varepsilon} \lambda\_3(t) &= -\frac{\partial H}{\partial P\_2} = \lambda\_1 \beta\_2 S - \lambda\_2 \beta\_2 S + \lambda\_3 (\xi + \delta) - \lambda\_4 \xi \\\\ &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \end{aligned} (29)$$

$$\lambda\_t^{RL} D\_{T\_f}^{\varepsilon} \lambda\_4(t) = -\frac{\partial H}{\partial R} = -\lambda\_3 \gamma + \lambda\_4(\gamma + \delta)$$

Therefore, the problem of finding $\eta^*$ that minimizes $J$ subject to (26) is converted into minimizing the Hamiltonian with respect to the control. Then, by Pontryagin's principle, we obtain the optimality condition:

$$\frac{\partial H}{\partial \eta} = \omega\_2 \eta - \lambda\_2 c P\_1 + \lambda\_4 c P\_1 = 0 \tag{30}$$

which can be solved in terms of the state and adjoint variables to give

$$\overline{\eta} = \frac{cP\_1(t)(\lambda\_2(t) - \lambda\_4(t))}{\omega\_2} \tag{31}$$

For the optimal control $\eta^*$, considering the constraints on the control and the sign of $\frac{\partial H}{\partial \eta}$, we have

$$\eta^* = \begin{cases} 0, & \text{if } \frac{\partial H}{\partial \eta} > 0 \text{ at } \eta = 0 \\ \overline{\eta}, & \text{if } \frac{\partial H}{\partial \eta} = 0 \\ 1, & \text{if } \frac{\partial H}{\partial \eta} < 0 \text{ at } \eta = 1 \end{cases} \tag{32}$$

and

$$\eta^\* = \max\{\min\{\overline{\eta}, 1\}, 0\}, \text{ where } \overline{\eta} = \frac{cP\_1(t)(\lambda\_2(t) - \lambda\_4(t))}{\omega\_2}. \tag{33}$$

The optimal state can be found by substituting *η* ∗ into the system (26).

#### **5. Numerical Simulations**

An analytical study is incomplete without numerical verification of the results. In this section, we present numerical simulations of system (3) and of the fractional-order optimal control problem (26). We use the FDE12 MATLAB function by Roberto Garrappa [42], which implements the predictor–corrector scheme based on the Adams–Bashforth–Moulton algorithm used by Diethelm [16,43]. We use the FDE12 function for system (3) directly, just like ODE45 or ODE23.
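For readers without access to FDE12, the predictor–corrector method it builds on can be sketched in a few lines. The following is a minimal scalar version of the Diethelm–Ford–Freed Adams–Bashforth–Moulton scheme (our own sketch under that assumption, not the FDE12 source):

```python
import math

def fde_abm(f, x0, eps, T, h):
    """Adams-Bashforth-Moulton predictor-corrector for the scalar Caputo
    FDE  C0D_t^eps x = f(t, x),  x(0) = x0,  on [0, T] with step h."""
    N = int(round(T / h))
    t = [j * h for j in range(N + 1)]
    x = [x0] + [0.0] * N
    fv = [f(t[0], x0)] + [0.0] * N
    g1, g2 = math.gamma(eps + 1), math.gamma(eps + 2)
    for n in range(N):
        # predictor: fractional rectangle rule over the full history
        pred = x0 + h**eps / g1 * sum(
            ((n + 1 - j)**eps - (n - j)**eps) * fv[j] for j in range(n + 1))
        # corrector: fractional trapezoidal rule with the predicted endpoint
        a0 = n**(eps + 1) - (n - eps) * (n + 1)**eps
        s = a0 * fv[0] + sum(
            ((n - j + 2)**(eps + 1) + (n - j)**(eps + 1)
             - 2 * (n - j + 1)**(eps + 1)) * fv[j] for j in range(1, n + 1))
        x[n + 1] = x0 + h**eps / g2 * (s + f(t[n + 1], pred))
        fv[n + 1] = f(t[n + 1], x[n + 1])
    return t, x

# sanity check: for eps = 1 the scheme reduces to the classical trapezoidal
# predictor-corrector, and the solution of x' = -x, x(0) = 1 is exp(-t)
t, x = fde_abm(lambda t, x: -x, 1.0, 1.0, 1.0, 0.01)
print(abs(x[-1] - math.exp(-1.0)) < 1e-3)  # True
```

Each step costs O(n) because the whole history enters the sums; this is the memory effect of the Caputo derivative, and it is why fractional solvers are slower than their integer-order counterparts.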

We have also used an iterative scheme (Euler forward and backward) in the MATLAB interface to solve the fractional-order optimal control problem. The process is briefly described below. The optimality system constitutes a two-point boundary value problem involving a set of fractional-order differential equations: the state system (26) is an initial value problem, while the adjoint system (29) is a boundary value problem. The state system is solved by a forward iteration method and the co-state system by a backward iteration method, using the following algorithm in MATLAB.

State system (26) is solved using the iterative scheme below:

$$\begin{aligned} S(i) &= [\Lambda - \delta S(i-1) - \beta_1S(i-1)P_1(i-1) - \beta_2S(i-1)P_2(i-1)]h^\varepsilon - \sum_{j=1}^{i}c(j)S(i-j) \\ P_1(i) &= [\beta_1S(i-1)P_1(i-1) + \beta_2S(i-1)P_2(i-1) - (k+\delta+\phi)P_1(i-1) - c\eta P_1(i-1)]h^\varepsilon \\ &\quad - \sum_{j=1}^{i}c(j)P_1(i-j) \\ P_2(i) &= [kP_1(i-1) + \gamma R(i-1) - \xi P_2(i-1) - \delta P_2(i-1)]h^\varepsilon - \sum_{j=1}^{i}c(j)P_2(i-j) \\ R(i) &= [\phi P_1(i-1) + \xi P_2(i-1) - \gamma R(i-1) - \delta R(i-1) + c\eta P_1(i-1)]h^\varepsilon - \sum_{j=1}^{i}c(j)R(i-j) \end{aligned}$$

where $c(0) = 1$, $c(j) = \left(1 - \frac{1+\varepsilon}{j}\right)c(j-1)$ for $j \ge 1$, and $h$ is the time step length. Here, $S(i)$ is the value of $S(t)$ at the $i$th iteration. The last term of each of the above equations represents memory. The adjoint system (29) is solved by the backward iteration method with terminal conditions $\lambda_i(T_f) = 0$, $i = 1, 2, 3, 4$, using the following iterative scheme:

$$\begin{aligned} \lambda_1(i) &= [\lambda_1(i-1)(\delta + \beta_1P_1(i) + \beta_2P_2(i)) - \lambda_2(i-1)(\beta_1P_1(i) + \beta_2P_2(i))]h^\varepsilon - \sum_{j=1}^{i}c(j)\lambda_1(i-j) \\ \lambda_2(i) &= [\lambda_1(i)\beta_1S(i) - \lambda_2(i-1)\{\beta_1S(i) - (k+\delta+\phi+c\eta)\} - \lambda_3(i-1)k \\ &\quad - \lambda_4(i-1)(\phi + c\eta) - \omega_1]h^\varepsilon - \sum_{j=1}^{i}c(j)\lambda_2(i-j) \\ \lambda_3(i) &= [\lambda_1(i)\beta_2S(i) - \lambda_2(i)\beta_2S(i) + \lambda_3(i-1)(\xi+\delta) - \lambda_4(i-1)\xi]h^\varepsilon - \sum_{j=1}^{i}c(j)\lambda_3(i-j) \\ \lambda_4(i) &= [-\lambda_3(i)\gamma + \lambda_4(i-1)(\gamma+\delta)]h^\varepsilon - \sum_{j=1}^{i}c(j)\lambda_4(i-j) \end{aligned}$$
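The memory coefficients $c(j)$ shared by the two schemes above can be generated from the stated recursion and checked against the Grünwald–Letnikov binomial weights $(-1)^j\binom{\varepsilon}{j}$ they correspond to (this identification is our observation, not stated in the text):

```python
def gl_weights(eps, n):
    """Coefficients from the text: c(0) = 1, c(j) = (1 - (1+eps)/j) c(j-1).
    These equal the Grunwald-Letnikov weights (-1)^j * binom(eps, j)."""
    c = [1.0]
    for j in range(1, n + 1):
        c.append((1 - (1 + eps) / j) * c[-1])
    return c

eps = 0.95
c = gl_weights(eps, 4)
print(abs(c[1] + eps) < 1e-12)                    # c(1) = -eps
print(abs(c[2] - eps * (eps - 1) / 2) < 1e-12)    # c(2) = binom(eps, 2)
```

Since $|c(j)|$ decays as $j$ grows, the older history enters the iteration with smaller weight, which is exactly the fading-memory behavior the fractional order $\varepsilon$ is meant to capture.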

The optimal control is updated by the scheme below.

$$\eta^\* = \max\{\min\{\overline{\eta}, 1\}, 0\}, \text{ where } \overline{\eta} = \frac{cP\_1(i)(\lambda\_2(i-1) - \lambda\_4(i-1))}{\omega\_2}.$$

We have developed MATLAB code based on the above algorithm and chosen $h = 0.02$ throughout the numerical simulations. In fitting test data of memory phenomena from different fields, it has been found that the fractional order can be physically interpreted as an index of memory: the higher the order $\varepsilon$, the slower the forgetting, and most epidemic transmission dynamics depend on memory of previous stages [15]. The order of the fractional derivative $\varepsilon$ therefore needs to be close to 1. Theoretically, we may study the fractional-order system for any value between 0 and 1, but it is better to choose a value close to 1. In some cases, interesting results appear when the order of the derivative is reduced, but for very small values of $\varepsilon$ (less than 0.5) the MATLAB code becomes erroneous. Therefore, the order must be chosen carefully; in our context, we choose the value 0.95 (any value from 0.9 to 0.99 would serve) for the numerical simulations. The value of the order can be estimated by the least-squares method of curve fitting with real data from a field survey or by a graphical study [21].

In this section, we present some time series of system (3) and the variation of $R_0$ with respect to $\beta_1$, $\beta_2$, $\xi$, $\phi$. Next, we discuss the effect of the control intervention. Figure 2 represents the situation when the drug-free equilibrium $E_0 = (1, 0, 0, 0)$ is asymptotically stable, corresponding to Table 2. Next, let us consider the following three cases:


Figure 3 depicts the time series and phase portrait of system (3) (case 1), when the drug-addiction equilibrium is $E_1 = (0.1254, 0.0519, 0.5429, 0.0783)$ and $R_0 = 1.6246$. Figures 4 and 5 represent cases 2 and 3, with corresponding equilibrium points

$(0.0567, 0.0572, 0.5984, 0.0864)$ and $(0.051, 0.0576, 0.6081, 0.0871)$,

respectively. Figure 6 represents the variation of the time series of the state variables as $\varepsilon$ varies, with the other parameters fixed as in Table 4. Figures 7–10 depict the change in $R_0$ with respect to the parameters $\beta_1$, $\beta_2$, $\phi$, $\xi$, respectively. Figures 2 and 3 justify Theorems 7 and 8, respectively. Figure 11 depicts the variation of the time series with the control parameter $\eta$.

Now, let us consider Table 7 for simulating the optimal control problem (26). We have used the forward-backward iterative scheme to solve this optimal control problem [44]. For $\eta = 0$, the drug-free equilibrium point is $E_0 = (0.83, 0, 0, 0)$ and $R_0 = 0.508$. We have considered the final time $T_f = 20$ days with a step of 1 day. Note that there are more addicted people in the physiological state than in the psychological state. Now, we shall discuss the effect of the control intervention. The positive weights have been taken as $\omega_1 = 1.6$, $\omega_2 = 10$.

Figure 11 shows the variation of the time series of the state variables when the control parameter $\eta$ changes. Figure 12 represents the time series of the state variables of the optimal control problem (26). Figure 13 represents the time series of the optimal control variable ($\eta^*$) and the optimal cost function ($J^*$). Figure 14 depicts the case when no control is applied. A significant number of psychologically and physiologically addicted people are present in the scenario $\eta = 0$, which creates an economic burden in terms of loss of productivity, morbidity, mortality, and the cost of protective measures (Figure 14). It has been found from Figures 13 and 14 that if the control strategy is applied, then the numbers of psychological addicts and of addicts in the treatment class decrease, but the number of physiological addicts increases. The values of $S$, $P_1$, $P_2$, $R$ without control after 20 days are 0.5823, 0.003934, 0.01343, and 0.023, respectively; after applying the control, those values change to 0.5823, 0.003917, 0.01345, and 0.02297. Though the change is small in fraction, it is significant in highly populated countries such as India and China. In Figure 13, it is observed that the value of the optimal control increases between days 0 and 8 and then decreases. A certain time is required to persuade a psychologically addicted person that ingesting drugs frequently is harmful and can even cause physical damage. However, once a person begins to understand these deadly effects, it becomes easier for them to take medicines and do whatever else is needed to free themselves of this addiction. We have performed the cost design analysis for the optimal control policy mentioned in Figure 13.

**Figure 2.** Time series of system (3) corresponds to Table 2 when *E*<sup>0</sup> = (1, 0, 0, 0) and *R*<sup>0</sup> = 0.3151.


**Table 4.** Parametric values used in system (3) when *β*<sup>1</sup> > *β*<sup>2</sup> , *E*<sup>1</sup> = (0.1254, 0.0519, 0.5429, 0.0783) and *R*<sup>0</sup> = 1.6246.

**Figure 3.** Time series of system (3) corresponds to Table 3 when *E*<sup>1</sup> = (0.1254, 0.0519, 0.5429, 0.0783) and *R*<sup>0</sup> = 1.6246.


**Figure 4.** Time series of system (3) corresponds to Table 4 when *E*<sup>1</sup> = (0.0567, 0.0572, 0.5984, 0.0864) and *R*<sup>0</sup> = 2.2154.

**Figure 5.** Time series of system (3) corresponds to Table 5 when *E*<sup>1</sup> = (0.051, 0.0576, 0.6081, 0.0871) and *R*<sup>0</sup> = 1.4277.

**Figure 6.** Variation of time series of system (3) with *ε* corresponds to Table 4 when *R*<sup>0</sup> = 2.2154.



**Figure 7.** Variation of *R*<sup>0</sup> of system (3) with respect to *β*<sup>1</sup> while values of other parameters are taken from Table 3.

**Figure 8.** Variation of *R*<sup>0</sup> of system (3) with respect to *β*<sup>2</sup> while values of other parameters are taken from Table 3.

**Figure 9.** Variation of *R*<sup>0</sup> of system (3) with respect to *φ* while values of other parameters are taken from Table 3.

**Figure 10.** Variation of *R*<sup>0</sup> of system (3) with respect to *ξ* while values of other parameters are taken from Table 3.


**Figure 11.** Variation of time series of system (26) with different control *η* corresponds to Table 4.

**Table 7.** Parametric values used in system (26).

**Figure 12.** Time series of state variables of system (26) for Table 6 when *ε* = 0.95, *ω*<sup>1</sup> = 1.6, *ω*<sup>2</sup> = 10.

**Figure 13.** Time series of optimal control *η* ∗ and optimal cost *J* ∗ of system (26) for Table 6 when *ε* = 0.95, *ω*<sup>1</sup> = 1.6, *ω*<sup>2</sup> = 10.

**Figure 14.** Time series of state variables of system (26) for Table 6 when *ε* = 0.95 and *η* = 0.

#### **6. Conclusions**

Fractional calculus plays an important role in modeling dynamical processes. It gives us an extra parameter, $\varepsilon$, with which we can tune our model properly. Here, we have studied a fractional-order synthetic drug transmission model with psychological addicts, incorporating memory effects. We have observed that the dynamics of system (3) depend on the strength of the memory effect, controlled by the order of the fractional derivative $\varepsilon$ [13].

In our work, we have framed a model in the Caputo fractional formalism in which people are addicted to drugs both psychologically and physiologically. By the next-generation matrix method, we have found the basic reproduction number $R_0$, which yields (or is consistent with) the local and global stability conditions of the drug-free and drug-addiction equilibria. It has been observed from numerical examples that if $R_0 < 1$, the system has only the drug-free equilibrium, and this equilibrium is stable (Figure 2). If $R_0 > 1$, the drug-addiction equilibrium persists and is locally stable (Figures 3–5). By analyzing the sensitivity of the parameters $\beta_1$, $\beta_2$, $\xi$, $\phi$, we have reached the conclusion that controlling the transmission of synthetic drugs is better than providing treatment to the addicts. Therefore, we have designed a control strategy to prevent drug transmission. From Figure 6, it has also been found that by lowering the value of the fractional order, the susceptible and psychologically addicted populations decrease, but the physiologically addicted population and the population in the treatment class increase.

In the second part of this work, we have discussed an optimal control problem related to the drug abuse epidemic model, in which we have tried to minimize the drug-addicted population along with the cost of treatment. We have reformulated our model by considering the effect of "counseling and awareness campaigns" as the control variable and calculated the total cost. Analytically, we have used Pontryagin's principle for fractional calculus to determine the value of the optimal control parameter [45]. The analytical results and numerical simulations agree well, and from the numerical computations we can deduce the observations discussed earlier.

Nowadays, an enormous part of the population, particularly the young, is exposed to the world of drugs for various reasons. For counseling purposes, we aim to focus on these groups: by viewing them as a vulnerable population, it becomes easier to evaluate, through the model, how best to introduce regular counseling to the psychological addicts in society. Educational institutions and families should remind adolescents of the importance of health education, and the Government needs to take responsibility for building awareness among individuals. Through campaigns and social programs, people may come to understand the human impact of synthetic drugs and reduce demand, which could lead to a lower contact rate. The proposed model shows the effect of counseling psychological addicts through numerical simulations. Moreover, an optimal response to counseling can minimize both the cost and the number of addicted people, thereby limiting the overall economic burden. In this circumstance, we advocate a proper control strategy that is effective from both the epidemiological and the economic points of view.

**Author Contributions:** All the authors have participate equally in all the aspects of this paper: conceptualization, methodology, investigation, formal analysis, writing—original draft preparation, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Spanish Government through grant RTI2018-094336-B-100 (MCIU/AEI/FEDER, UE) and by the Basque Government through grant IT1207-19.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data used to support the findings of this study are included in the references within the article.

**Acknowledgments:** The authors are grateful to the anonymous referees and Nemo Guan, Special Issue Editor, for their careful reading, valuable comments, and helpful suggestions, which have helped them to improve the presentation of this work significantly. The third author (Manuel De la Sen) is grateful to the Spanish Government for its support through grant RTI2018-094336-B-100 (MCIU/AEI/FEDER, UE) and to the Basque Government for its support through grant IT1207-19.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

