# **Calculus of Variations, Optimal Control, and Mathematical Biology**

**A Themed Issue Dedicated to Professor Delfim F. M. Torres on the Occasion of His 50th Birthday** 

Edited by

Natália Martins, Ricardo Almeida, Cristiana João Soares da Silva and Moulay Rchid Sidi Ammi

Printed Edition of the Special Issue Published in *Axioms*

www.mdpi.com/journal/axioms

## **Calculus of Variations, Optimal Control, and Mathematical Biology: A Themed Issue Dedicated to Professor Delfim F. M. Torres on the Occasion of His 50th Birthday**


Editors

**Natália Martins, Ricardo Almeida, Cristiana João Soares da Silva, Moulay Rchid Sidi Ammi**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editors*

Natália Martins, University of Aveiro, Aveiro, Portugal

Ricardo Almeida, University of Aveiro, Aveiro, Portugal

Cristiana João Soares da Silva, Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal

Moulay Rchid Sidi Ammi, Moulay Ismail University, Errachidia, Morocco

*Editorial Office*

MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Axioms* (ISSN 2075-1680) (available at: https://www.mdpi.com/journal/axioms/special_issues/delfim).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-6856-0 (Hbk) ISBN 978-3-0365-6857-7 (PDF)**

© 2023 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**



## **About the Editors**

#### **Natália Martins**

Natália Martins, PhD in Mathematics (2006, University of Aveiro, Portugal), is an Associate Professor in the Department of Mathematics at the University of Aveiro (UA), Portugal. She has worked in the Department of Mathematics of UA since 1992 and was Vice-Director of the department from 2008 to 2019. She was also a member of the Pedagogical Council of UA from 2009 to 2016, where she was elected Vice-President, a position she held for more than 5 years between 2011 and 2016. Since 2022, she has been the scientific coordinator of the Systems and Control Group of the research unit CIDMA (Center for Research and Development in Mathematics and Applications), a research team hosted at the Department of Mathematics of the University of Aveiro. Her research interests include time scale calculus, fractional calculus, the calculus of variations, and optimal control theory. She is the author of more than 55 papers in international journals, conference proceedings, and book chapters. Natália Martins belongs to the Editorial Boards of *Fractal and Fractional*, *Axioms*, and the *International Journal of Dynamical Systems and Differential Equations*.

#### **Ricardo Almeida**

Ricardo Almeida received his bachelor's and master's degrees in Mathematics from the University of Porto, Portugal, and his Ph.D. degree in Mathematics from the University of Aveiro, Portugal. He is currently an Assistant Professor at the University of Aveiro. His research interests include fractional calculus, the calculus of variations, and optimal control theory. He has published 5 books and more than 90 papers in international journals. He is an Associate Editor of the journal *Fractal and Fractional* and an Editor for *AIMS Mathematics* and *Axioms*. He has been the adjunct coordinator of the Center for Research and Development in Mathematics and Applications (CIDMA) since 2019.

#### **Cristiana João Soares da Silva**

Cristiana João Soares da Silva received her Ph.D. degree in Mathematics from the University of Aveiro, in Portugal, and from the University of Orléans, in France, in 2010. From 2011 to 2018, she was a post-doctoral researcher at the Center for Research and Development in Mathematics and Applications (CIDMA) of the University of Aveiro and, between 2019 and 2022, she held a Researcher position at CIDMA. Since September 2022, she has been an Assistant Professor at Iscte—Instituto Universitário de Lisboa, in Lisbon, Portugal. Her research interests include optimal control theory, optimization methods, and the mathematical modeling of infectious diseases.

#### **Moulay Rchid Sidi Ammi**

Moulay Rchid Sidi Ammi is a Full Professor of Mathematics at Moulay Ismail University (UMI), Morocco, and is responsible for the R&D Unit AMNEA and the Laboratory MAIS. His main research areas are PDEs, the calculus of variations, optimal control, optimization, fractional derivatives and integrals, dynamic equations on time scales, and mathematical biology. He has written outstanding scientific and pedagogical publications in international journals ranked in the 1st quartile by ISI Web of Science and Scopus. He has strong experience in graduate and post-graduate student supervision and teaching in mathematics. Moreover, he has been a team leader and member in several national and international R&D projects.

## **Preface to "Calculus of Variations, Optimal Control, and Mathematical Biology: A Themed Issue Dedicated to Professor Delfim F. M. Torres on the Occasion of His 50th Birthday"**

This publication is a Special Issue of the journal *Axioms* entitled "Calculus of Variations, Optimal Control and Mathematical Biology: A Themed Issue Dedicated to Professor Delfim F. M. Torres on the Occasion of His 50th Birthday". This Special Issue is dedicated to Professor Delfim F. M. Torres on his 50th birthday, as recognition of his significant contributions to Mathematics, in particular regarding the calculus of variations, optimal control, and mathematical biology. Professor Torres is a distinguished University Professor, a highly cited researcher in Mathematics (in the top 1% for Mathematics in the Web of Science in 2015, 2016, 2017, and 2019), and a lifetime member of the American Mathematical Society. He is one of the founders of fractional variational analysis and fractional optimal control, and has contributed tremendously to the theory of variational analysis, with applications to many other fields such as optimization, optimal control, time-scale analysis, and mathematical epidemiology and biology. Professor Torres is the recipient of several international awards, including the 325 Years of Fractional Calculus Award, which is a testament to the high regard in which his achievements in the area of fractional calculus and its applications are held. He has been included in the World's Top 2% Scientists by Stanford University in 2020, 2021, and 2022, both as a career-long highly cited researcher and as a single-calendar-year highly cited researcher. He is considered by Thomson Reuters as one of the World's Most Influential Scientific Minds, has won a Publons Peer Review Award as one of the world's top peer reviewers, and was recognized in the top 1% of reviewers. He has also won a Sentinel of Science Award. Many of his research works have been considered top papers in their area, served as research highlights, and been awarded prizes. Besides being a great scholar, he is also a great teacher who has already supervised twenty-four Ph.D.
students from all over the world. This Special Issue covers many of Professor Torres' research interests, which include several areas of pure and applied mathematical sciences, such as approximations and expansions, biology and other natural sciences, calculus of variations and optimal control, optimization, difference and functional equations, fluid mechanics, functional analysis, game theory, economics, social and behavioral sciences, general measure and integration, the mechanics of deformable solids, number theory, numerical analysis, operations research, mathematical programming, ordinary differential equations, partial differential equations, quantum theory, real functions, systems and control theory, fractional calculus and its applications, integral equations and transforms, higher transcendental functions and their applications, q-series and q-polynomials, inventory modeling and optimization, dynamic equations on time scales, and mathematical modeling.

This book comprises an editorial and fifteen original research papers that have been carefully reviewed.

#### **Natália Martins, Ricardo Almeida, Cristiana João Soares da Silva, and Moulay Rchid Sidi Ammi** *Editors*

#### *Editorial*

## **Editorial for the Special Issue of Axioms "Calculus of Variations, Optimal Control and Mathematical Biology: A Themed Issue Dedicated to Professor Delfim F. M. Torres on the Occasion of His 50th Birthday"**

**Natália Martins 1,\*,†, Ricardo Almeida 1,†, Cristiana J. Silva 1,2,† and Moulay Rchid Sidi Ammi 3,†**


#### **1. Introduction**

This publication is an editorial for the Special Issue of Axioms "Calculus of Variations, Optimal Control and Mathematical Biology: A Themed Issue Dedicated to Professor Delfim F. M. Torres on the Occasion of His 50th Birthday". This Special Issue is dedicated to Professor Delfim F. M. Torres on the occasion of his 50th birthday, as recognition of his significant contributions to Mathematics, in particular in the calculus of variations, optimal control, and mathematical biology. Professor Torres is a distinguished University Professor, a highly cited researcher in Mathematics (in the top 1% for Mathematics in the Web of Science in the years 2015, 2016, 2017 and 2019), and a lifetime member of the American Mathematical Society. He is one of the founders of fractional variational analysis and fractional optimal control, and has made tremendous contributions to the theory of variational analysis, with applications to many other fields such as optimization, optimal control, time-scale analysis, and mathematical epidemiology and biology. Professor Torres is the recipient of several international awards, including the 325 Years of Fractional Calculus Award, which is a testament to the high regard in which his achievements in the area of fractional calculus and its applications are held. He has been included in the World's Top 2% Scientists by Stanford University in the years 2020, 2021 and 2022, both as a career-long highly cited researcher and as a single-calendar-year highly cited researcher. He is considered by Thomson Reuters as one of the World's Most Influential Scientific Minds, has won a Publons Peer Review Award as one of the world's top peer reviewers, and was recognized in the top 1% of reviewers. He has also won a Sentinel of Science Award. Many of his research works have been considered top papers in their area, served as research highlights and been awarded prizes. Besides being a great scholar, he is also a great teacher who has already supervised twenty-four Ph.D.
students from all over the world. This Special Issue covers many of Professor Torres' research interests, which include several areas of pure and applied mathematical sciences, such as approximations and expansions, biology and other natural sciences, calculus of variations and optimal control, optimization, difference and functional equations, fluid mechanics, functional analysis, game theory, economics, social and behavioral sciences, general measure and integration, the mechanics of deformable solids, number theory, numerical analysis, operations research, mathematical programming, ordinary differential equations, partial differential equations, quantum theory, real functions, systems and control theory, fractional calculus and its applications, integral equations and transforms, higher transcendental functions and their applications, *q*-series and *q*-polynomials, inventory modeling and optimization, dynamic equations on time scales, and mathematical modeling.

**Citation:** Martins, N.; Almeida, R.; Silva, C.J.; Sidi Ammi, M.R. Editorial for the Special Issue of Axioms "Calculus of Variations, Optimal Control and Mathematical Biology: A Themed Issue Dedicated to Professor Delfim F. M. Torres on the Occasion of His 50th Birthday". *Axioms* **2023**, *12*, 110. https://doi.org/10.3390/axioms12020110

Received: 13 January 2023; Accepted: 17 January 2023; Published: 20 January 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

This Special Issue comprises fifteen original research papers that have been carefully reviewed. In the next section, we briefly describe the main contributions of these fifteen papers. We finalize this editorial with a brief biography of Delfim F. M. Torres.

#### **2. Contributions**

In this section, we provide an overview of the scientific contributions of the papers that constitute this Special Issue, gathering them into five mathematical research areas: calculus of variations, optimal control, mathematical biology, fractional calculus, and differential geometry. We remark that some of the papers fit into more than one of these research topics.

#### *2.1. Calculus of Variations*

In the paper *On a Non–Newtonian Calculus of Variations*, Delfim F. M. Torres presents, for the first time in the literature, a non-Newtonian calculus of variations that involves the minimization of a functional defined by a non-Newtonian integral, with a Lagrangian depending on the non-Newtonian derivative. The main result of this paper is a first-order necessary optimality condition of Euler–Lagrange type that every solution of a non-Newtonian variational problem, with admissible functions taking positive values only, must satisfy. The non-Newtonian calculus of variations introduced in this paper provides a natural framework for dealing with the multiplicative functions that arise in economics, physics, and biology.
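For orientation, the basic objects of the multiplicative (geometric) brand of non-Newtonian calculus can be sketched as follows; this is a standard background sketch only, and the precise variational problem and the exact form of the Euler–Lagrange condition are as stated in the paper itself:

```latex
% Multiplicative (geometric) derivative of a positive function f:
f^{*}(t) \;=\; \lim_{h \to 0} \left( \frac{f(t+h)}{f(t)} \right)^{1/h}
        \;=\; e^{(\ln f)'(t)} \;=\; e^{f'(t)/f(t)},
% and the corresponding multiplicative integral over [a,b]:
\int_{a}^{b} f(t)^{\,dt} \;=\; \exp\!\left( \int_{a}^{b} \ln f(t)\, dt \right).
```

These definitions explain why admissible functions are restricted to positive values: both the logarithm and the exponentiated quotient require $f > 0$.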

In the article *Weighted Generalized Fractional Integration by Parts and the Euler–Lagrange Equation*, Zine et al. introduce the right-weighted generalized fractional derivative in the Riemann–Liouville sense, and also introduce its associated integral operator, proving their main properties, and, in particular, their integration by parts formula. With the new general integration by parts formula, the authors obtain an appropriate weighted Euler– Lagrange equation for dynamic optimization, extending those existing in the literature. The paper finishes with an illustration of the obtained theoretical results in the quantum mechanics setting.
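For context (this is the classical, unweighted identity that results of this type generalize, not the paper's weighted formula), the integration by parts rule for Riemann–Liouville fractional integrals reads, under suitable integrability assumptions:

```latex
% Riemann--Liouville fractional integrals of order \alpha > 0:
(I_{a+}^{\alpha} f)(x) = \frac{1}{\Gamma(\alpha)} \int_{a}^{x} (x-t)^{\alpha-1} f(t)\, dt,
\qquad
(I_{b-}^{\alpha} g)(x) = \frac{1}{\Gamma(\alpha)} \int_{x}^{b} (t-x)^{\alpha-1} g(t)\, dt;
% classical fractional integration by parts:
\int_{a}^{b} g(x)\, (I_{a+}^{\alpha} f)(x)\, dx
  \;=\; \int_{a}^{b} f(x)\, (I_{b-}^{\alpha} g)(x)\, dx .
```

Note how the left-sided operator acting on $f$ is exchanged for the right-sided operator acting on $g$; this is the structural reason why right-sided (and right-weighted) operators appear in fractional Euler–Lagrange equations.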

#### *2.2. Optimal Control*

The article *Maximum Principle and Second-Order Optimality Conditions in Control Problems with Mixed Constraints*, by Arutyunov et al., concerns optimality conditions for a smooth optimal control problem with endpoint and mixed constraints. Under a normality assumption, second-order necessary optimality conditions based on the Robinson stability theorem are derived. The main novelty is that only local regularity with respect to the mixed constraints is required, instead of the conventional, stronger global regularity hypothesis. This affects the maximum condition: the normal set of Lagrange multipliers in question satisfies the maximum principle, albeit with a modified maximum condition in which the maximum is taken over a reduced feasible set. In the second part of this work, the case of abnormal minimizers, that is, when the full-rank condition on the controllability matrix does not hold, is addressed. The same type of reduced maximum condition is obtained.

In the paper *Minimum Energy Problem in the Sense of Caputo for Fractional Neutral Evolution Systems in Banach Spaces*, by Ech-chaffani et al., a class of fractional neutral evolution equations on Banach spaces involving Caputo derivatives is investigated. Conditions for the controllability of the fractional-order system and for the existence of a solution to an optimal control problem of minimum energy are established. The main results are proved with the help of fixed-point and semigroup theories.

#### *2.3. Mathematical Biology*

In the paper *Global Stability Condition for the Disease-Free Equilibrium Point of Fractional Epidemiological Models*, Almeida et al. present a new method to study the global asymptotic stability of dynamical systems described by fractional differential equations. The usual approach involves the determination of an appropriate Lyapunov function, very laborious computations, and the application of LaSalle's invariance principle. The method proposed by the authors uses only basic results from matrix theory and some well-known results on fractional-order differential equations. To illustrate the applicability of the theoretical results, the authors exemplify the procedure with three epidemiological models: a fractional-order SEIR model with a classical incidence function, a fractional-order SIRS model with a general incidence function, and a fractional-order model for HIV/AIDS.
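Although the paper's actual method is more refined, the flavor of combining matrix eigenvalues with fractional-order stability results can be illustrated with Matignon's classical criterion for a linear(ized) system $D^{\alpha}x = Ax$; the 2×2 matrices and the order $\alpha$ below are illustrative placeholders:

```python
import cmath
import math


def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the quadratic formula."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2


def matignon_stable(eigs, alpha):
    """Matignon's criterion: the zero solution of D^alpha x = A x,
    with 0 < alpha < 1, is asymptotically stable iff every eigenvalue
    lambda of A satisfies |arg(lambda)| > alpha * pi / 2."""
    return all(abs(cmath.phase(lam)) > alpha * math.pi / 2 for lam in eigs)


# Purely imaginary eigenvalues (+/- i): unstable for alpha = 1,
# but asymptotically stable for alpha = 0.8, since pi/2 > 0.4*pi.
rotation_eigs = eigenvalues_2x2(0.0, 1.0, -1.0, 0.0)
```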

In the research paper *Hybrid Method for Simulation of a Fractional COVID-19 Model with Real Case Application*, by Din et al., a mathematical analysis of the novel coronavirus responsible for COVID-19 is provided. The fractional-order analysis is carried out using a non-singular kernel type operator known as the Atangana–Baleanu–Caputo (ABC) derivative. The model is parametrized using available information about the disease from Pakistan for the period from 9 April to 2 June 2020. The authors obtain the required solution with the help of a hybrid method, which is a combination of the decomposition method and the Laplace transform. Furthermore, a sensitivity analysis is carried out to evaluate the parameters to which the basic reproduction number of the model is most sensitive. The obtained results are compared with real data from Pakistan, and numerical plots are presented at various fractional orders.

In the paper *Fractional Dynamics of a Measles Epidemic Model*, Abboubakar et al. propose a fractional mathematical model for the transmission dynamics of measles, considering a Caputo fractional derivative. The main goal of this work is to compare the dynamics of a measles epidemic model with integer and Caputo fractional derivatives. The epidemic model considers vaccination and the hospitalization of infected individuals. A local and global stability analysis of the equilibrium points is carried out, and the theoretical results are illustrated through numerical simulations using the Adams-type predictor–corrector iterative scheme. The numerical simulations demonstrate that the fractional model shows quantitative behavior different from that of the model with integer derivatives.

The paper *Pattern Formation Induced by Fuzzy Fractional-Order Model of COVID-19*, by Alnahdi et al., proposes a mathematical model for COVID-19 using a fuzzy fractional evolution equation, stated in Caputo's sense for order *α* ∈ (1, 2). The model considers six compartments: susceptible, exposed, totally infected, asymptomatically infected, and totally recovered individuals, and a reservoir. The existence and uniqueness of the solution of the model are proved using Schauder's fixed point theorem. Moreover, the dynamic behavior of the model is studied by combining the fuzzy Laplace approach with the Adomian decomposition transform.

In the paper *Modeling the Impact of the Imperfect Vaccination of the COVID-19 with Optimal Containment Strategy*, Benahmadi et al. propose a mathematical model applied to COVID-19, aiming to investigate the impact of vaccination on the control of the disease's spread. The compartmental model is given by a system of nine ordinary differential equations. After computing the basic reproduction number, a local and global stability analysis of the disease-free equilibrium is presented. An optimal control problem is then proposed, wherein the aim is to find the optimal control strategy under imperfect vaccination. Three controls are introduced in the model, representing awareness of taking the vaccine, movement restrictions for susceptible and vaccinated individuals, and improvement of the vaccine's efficacy, respectively. In the numerical simulations, the model is calibrated to real data from a vaccination campaign in Morocco between 1 February 2021 and 25 March 2021. The numerical solutions show that, to reduce the impact of imperfect vaccination, a longer awareness campaign is needed to engage the population in vaccination. On the other hand, restrictions on population mobility should not be long lasting. Moreover, to ensure the full protection of the population's health, the vaccine's efficacy must be increased by 30% in the first 50 days.

In the paper *Determining COVID-19 Dynamics Using Physics Informed Neural Networks*, by Malinzi et al., the physics-informed neural networks (PINNs) framework is applied to COVID-19 epidemics, using a compartmental SIRD (susceptible–infected–recovered–deceased) mathematical model. The main goal of this work is to find patterns in the transmission dynamics of the disease, which involves predicting the infection, recovery, and death rates, and therefore the numbers of actively infected, totally recovered, susceptible, and deceased individuals at any given time. First, the application of the PINNs framework to the SIRD model is validated through numerical simulations using *Mathematica*; afterwards, the SIRD model is tested and validated using a real COVID-19 dataset.
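The SIRD dynamics underlying that study can be sketched numerically as follows; the parameter values are illustrative placeholders, not those fitted in the paper, and a plain forward-Euler scheme stands in for the PINNs machinery:

```python
def simulate_sird(beta, gamma, mu, s0, i0, days, dt=0.1):
    """Forward-Euler integration of the SIRD compartmental model:

        dS/dt = -beta*S*I/N            (new infections)
        dI/dt =  beta*S*I/N - (gamma + mu)*I
        dR/dt =  gamma*I               (recoveries)
        dD/dt =  mu*I                  (disease-induced deaths)

    Parameter values used by callers are illustrative, not fitted to data.
    """
    n = s0 + i0
    s, i, r, d = s0, i0, 0.0, 0.0
    history = [(s, i, r, d)]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n
        ds, di = -new_inf, new_inf - (gamma + mu) * i
        dr, dd = gamma * i, mu * i
        s, i, r, d = s + dt * ds, i + dt * di, r + dt * dr, d + dt * dd
        history.append((s, i, r, d))
    return history
```

Since the four right-hand sides sum to zero, the total population S + I + R + D is conserved along the trajectory, which is a convenient sanity check for any numerical scheme applied to this model.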

In the paper by Francesca Acotto and Ezio Venturino, entitled *A Note on an Epidemic Model with Cautionary Response in the Presence of Asymptomatic Individuals*, the authors study an epidemiological model that takes into consideration some demographic features, as well as the case in which the illness appears in two forms, asymptomatic and symptomatic. Because of the presence of asymptomatic individuals, fear drives a reduction in social contact, lowering the overall transmission rate. Numerical simulations show that, as information regarding the propagation of the disease increases, the number of infected individuals decreases.

#### *2.4. Fractional Calculus*

In *Riemann–Liouville Fractional Sobolev and Bounded Variation Spaces*, by Leaci and Tomarelli, a study of fractional derivatives on Sobolev spaces is carried out. The fractional operators considered are given by the mean value of the left and right fractional operators. Several problems, such as embedding and compactness properties, the Abel equation, and semigroup properties, are considered. These methods can be applied to fractional variational models for image analysis.

In the article *On Periodic Fractional* (*p*, *q*)*-Integral Boundary Value Problems for Sequential Fractional* (*p*, *q*)*-Integrodifference Equations*, by Soontharanon and Sitthiwirattham, questions on the existence and uniqueness of solutions to a fractional (*p*, *q*)-integrodifference equation with a periodic fractional (*p*, *q*)-integral boundary condition are studied. The proofs are based on the Banach and Schauder fixed point theorems. Furthermore, some properties of the fractional (*p*, *q*)-integral are obtained. The paper ends with some illustrative examples.

The paper *Existence Results for a Multipoint Fractional Boundary Value Problem in the Fractional Derivative Banach Space*, by Boucenna et al., deals with a class of nonlinear implicit fractional differential equations subject to nonlocal boundary conditions, expressed in terms of nonlinear integro-differential equations. Using the Krasnoselskii fixed-point theorem, via the Kolmogorov–Riesz criterion, the existence of solutions in a specific fractional derivative Banach space is established. Two numerical examples are then given to illustrate the theoretical results.

#### *2.5. Differential Geometry*

The paper *Local Structure of Convex Surfaces near Regular and Conical Points*, by Plakhov, deals with the limiting behavior of a part of a surface when it is cut by a plane. More precisely, given a point on a convex surface and a plane of support Π to the surface at this point, the author considers a plane parallel to Π cutting off a part of the surface. Two cases are studied: when the point is regular, and when it is a singular conical point. This work follows on from a 1995 conjecture by Buttazzo, Ferone, and Kawohl in "Minimum problems over sets of concave functions and related questions", published in *Math. Nachr.*, already solved by the author under some assumptions and continued in the present work.

#### **3. Short Biography of Delfim F. M. Torres**

Delfim Fernando Marado Torres was born on 16 August 1971 in Nampula, Mozambique. Since March 2015, he has been a Full Professor of Mathematics at the University of Aveiro (UA), Director of the Research Unit CIDMA, the largest Portuguese research center in Mathematics, and Coordinator of its Systems and Control Group. He obtained a PhD in Mathematics from UA in 2002, and Habilitation in Mathematics from UA in 2011. His main research areas are the calculus of variations and optimal control, optimization, fractional derivatives and integrals, dynamic equations on time scales, and mathematical biology. Torres has written outstanding scientific and pedagogical publications. In particular, he has co-authored two books with Imperial College Press [1,2], three books with Springer [3–5], and edited several research books, such as [6–8]. He has strong experience in graduate and postgraduate student supervision and teaching in mathematics. Moreover, he has been a team leader and member in several national and international R&D projects, including EU projects and networks. Since 2013, he has been the Director of the Doctoral Programme Consortium in Mathematics and Applications (MAP-PDMA) of the Universities of Minho, Aveiro, and Porto. Prof. Torres has been married since 2003, and has one daughter and two sons.

**Author Contributions:** Conceptualization, N.M., R.A., C.J.S. and M.R.S.A.; methodology, N.M., R.A., C.J.S. and M.R.S.A.; writing—original draft preparation, N.M., R.A., C.J.S. and M.R.S.A.; writing review and editing, N.M., R.A., C.J.S. and M.R.S.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was partially funded by Portuguese funds through the CIDMA—Center for Research and Development in Mathematics and Applications, and the Portuguese Foundation for Science and Technology (FCT—Fundação para a Ciência e a Tecnologia), reference UIDB/04106/2020.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors deeply thank Axioms (ISSN 2075-1680) and Luna Shen, Managing Editor of Axioms, for all the support given to the Special Issue "Calculus of Variations, Optimal Control and Mathematical Biology: A Themed Issue Dedicated to Professor Delfim F. M. Torres on the Occasion of His 50th Birthday". The authors of this editorial are also deeply grateful to all colleagues and friends of Delfim F. M. Torres for submitting high-quality articles to this Special Issue.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **A Note on an Epidemic Model with Cautionary Response in the Presence of Asymptomatic Individuals**

**Francesca Acotto † and Ezio Venturino \*,†,‡**

Dipartimento di Matematica "Giuseppe Peano", Università di Torino, Via Carlo Alberto 10, 10123 Torino, Italy

**\*** Correspondence: ezio.venturino@unito.it

† These authors contributed equally to this work.

‡ Member of the INdAM research group GNCS.

**Abstract:** We analyse a simple disease transmission model accounting for demographic features and an illness appearing in two forms, asymptomatic and symptomatic. Its main feature is the epidemic-induced fear of the population, due to which contacts are reduced in response to increasing symptomatic numbers. We find that, in the presence of asymptomatic individuals, if the progression rate to the symptomatic stage is high, protection measures may prevent the whole population from becoming infected. The results also elucidate the importance of assessing transmission rates as quickly as possible.

**Keywords:** population dynamics; vertical transmission; epidemic fear

**MSC:** 92D30; 92D25

#### **1. Introduction**

Mathematical models for the spread of a communicable disease date back almost a century, to the original work of Kermack and McKendrick. From the early classical models with no demographics, where the population at risk was fixed in size, the models evolved to encompass populations of varying size [1,2]. A thorough review of infection models, mainly for diseases affecting humans, is provided by [3]. Other reviews with more recent developments in the field appear in [4,5]. In addition, stochastic models for these situations can be developed [6]. These types of models have also been adapted to various situations [7], possibly including the spread of epidemics among interacting populations, from which originated the so-called ecoepidemic models; see the fairly recent review [8].

The first model to incorporate the human behavioral response to an epidemic's spread is [9], an SIR-type system [10]. It models the fact that when the epidemic is spreading, people react by reducing contacts, in order not to become infected, e.g., by using protective means or distancing [11]. The use of vaccines, when available, would be another option. However, more recently, other issues have arisen, such as the anti-vaccination attitude of parts of populations. Some studies investigating this phenomenon have been undertaken [12,13].

The still ongoing COVID-19 epidemic [14] has highlighted another feature of these transmissible diseases, namely, the relevant role played by asymptomatics. In fact, in the earlier phases of this pandemic, its spread was mainly due to contacts between susceptibles and asymptomatic infected individuals. A similar situation is exemplified by the Spanish flu of the twentieth century [15], or the more recent SARS. In the latter, viral shedding occurs only in the advanced stages of the disease, when respiratory symptoms are present, but for SARS-CoV-2, the infected can also spread the disease in the early stages, when they are asymptomatic [16]. There are also other diseases that do not show symptoms promptly. For instance, a measles-infected person is contagious in the very first days after contracting the disease (the average latent period is 14 days), while the symptoms appear later, with an average infectious period of one week [3]. In the case of pneumonic plague, experiments with mice indicate that in the initial 36 h of infection there is fast bacterial replication in the lungs, but no host immune responses or obvious disease symptoms appear [17]. In addition, primary septicemic plague, the second most common form, starts with no palpable lymph nodes but with bacteremia [18]. For COVID-19, many models are now available; see, e.g., [19], where a highly detailed SPEIQHRD model is presented and validated using data from various countries; [20], which introduces an SEIRD model identified for different US states; or [21], which empirically considers the chaotic and cyclic behavior of the epidemics, verified against actual data.

**Citation:** Acotto, F.; Venturino, E. A Note on an Epidemic Model with Cautionary Response in the Presence of Asymptomatic Individuals. *Axioms* **2023**, *12*, 62. https://doi.org/10.3390/axioms12010062

Academic Editor: Valery Y. Glizer

Received: 16 November 2022; Revised: 22 December 2022; Accepted: 30 December 2022; Published: 6 January 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

However, we stress that this paper is not concerned with this specific epidemic. Rather, we have mentioned COVID-19, as well as measles, just as paradigms for diseases in which asymptomatic people appear. In summary, the salient feature of the epidemics that we consider here is the presence of asymptomatic people who are able to spread the disease. The present paper is a theoretical investigation; therefore, the use of real data is not our concern here. The goal of the paper is people's possible response to the spread of a generic epidemic; in this sense, it represents a model for possible human behaviors. Therefore, empirical data on specific contagious diseases are not needed, as they are not used. Nor are other possible numerical schemes for forecasting disease incidence relevant for our purposes.

Along these lines, we would like to investigate a model in which people respond to higher numbers of the infected who are recognized as disease carriers, i.e., symptomatic individuals. However, the disease is essentially transmitted by the asymptomatic people, because it is assumed that the symptomatic ones, once discovered, are isolated so as not to become vehicles of propagation. We propose a model for this effect, which also incorporates demographic features of the population. Some other specific properties of the system introduced here are the following. Essentially, it is a variation of the classical SI infection model in which asymptomatic individuals are also accounted for, by splitting the infected (the class *I* of the SI model) into asymptomatics and symptomatics. The latter, however, are assumed to be isolated; therefore, the system can also be interpreted as an SIR model in which the *R* class denotes the set of individuals removed from circulation, who are, therefore, no longer able to spread the disease, rather than those recovered from it. Two variants are proposed: without and with vertical transmission. The models are fully analysed, determining all their possible equilibria and their feasibility and stability. Their explicit coordinate expressions are determined, except for the coexistence equilibrium, for which we provide sufficient conditions for feasibility. Their transcritical bifurcations are investigated both analytically and numerically, assessing the critical parameter values for which they occur. Implications for people's behavior are discussed in the final sections.

The most important result appears to be that, in the presence of a high progression rate from asymptomatic to symptomatic, the SAI model proposed here is able to preserve some susceptibles from the contagion, in spite of being a variant of the SI model, in which everyone ultimately becomes infected.

The main findings of this investigation show that, in the presence of asymptomatic individuals, people's voluntary measures to reduce their own possible contagion must be applied more strictly, by suitably lowering the overall transmission term, than in the case where the epidemic's symptoms manifest immediately. Our simulations also allow the quantification, if sufficient information on the spread of the disease is available, of the number of susceptibles that remain unaffected by the epidemic. They also show the importance of the prompt broadcasting of this information by the authorities.

#### **2. Materials and Methods**

As mentioned in the Introduction, we consider here a human population which reproduces and experiences an epidemic. Thus, it is divided into susceptibles *S* and infected individuals, the latter in turn subdivided into those that do not show symptoms but can still spread the disease, *A*, and the symptomatic ones, *I*, who are isolated as recognized virus carriers. The latter, thus, do not contribute to the diffusion of the pathogenic agent among the population. We also assume that the susceptibles can react to the presence of the disease by reducing their contacts. However, this behavior is influenced by the number of people recognized as diseased, i.e., the *I*s; the higher this number, the lower the contact rate must be. The model contains only three compartments, because in the end its behavior will be compared with another classical model concerned with people's cautionary response, where asymptomatic individuals are absent, as elaborated at length in Section 5.

The demographic features incorporate reproduction, natural mortality and competition for resources. All these involve, in principle, the three subsets into which the total population is partitioned.

Two possibilities arise, considering reproduction: namely, that the disease is or is not passed onto the offspring. In the first model, we do not consider vertical transmission, so asymptomatic individuals reproduce, but their offspring are healthy and, therefore, are accounted for among the new recruits of susceptibles. Alternatively, vertical transmission hypothesizes that newborns from asymptomatic people appear in the asymptomatic class as well. The two models are constructed and analysed in the following subsections.

#### *2.1. The SAI Model without Vertical Transmission*

Model Equations

Using the notation introduced in the above preamble, the system reads:

$$\frac{dS}{dt} = r(S+A) - mS - c_{SS}S^2 - c_{SA}SA - c_{SI}SI - \frac{\alpha SA}{1+\beta I} \tag{1}$$

$$\frac{dA}{dt} = -mA - c_{AA}A^2 - c_{AS}AS - c_{AI}AI + \frac{\alpha SA}{1+\beta I} - \pi A$$

$$\frac{dI}{dt} = -(m+\mu)I - c_{II}I^2 - c_{IS}IS - c_{IA}IA + \pi A$$

The first equation for susceptibles contains reproduction at rate *r*, which is related to both the susceptible and asymptomatic classes, first term. Here, we implicitly assume that the disease does not affect the asymptomatic reproduction rate. Susceptibles are subject to natural mortality *m*, second term, as well as intraspecific competition, the next three terms, due to other susceptible individuals, at rate *cSS*, to asymptomatic ones, at rate *cSA*, or to infected individuals, at rate *cSI*. The last term is the epidemiological one, which accounts for the disease spread. In view of our assumptions, it has two main features. The disease transmission is modeled via a mass action term in the numerator, accounting for the "successful" contacts between the susceptibles and the unrecognized disease carriers, i.e., the asymptomatic ones. The denominator, instead, accounts for the preventive measures, so that susceptibles tend to reduce their intermingling with other people when more and more diseased individuals are identified, i.e., it must decrease with an increasing number of symptomatic individuals *I*. The transmission parameter *α* measures how many contacts occur per unit time, as well as how many of them yield a new case of the infection. Instead, *β* can be considered the weight and relevance that susceptibles give to the information about the disease spread.

The second equation models the asymptomatic dynamics. The first term contains the natural mortality, assuming that at this stage the disease does not cause deaths. The next three terms denote the intracompartment competition and the corresponding ones with the other two population classes. The susceptible individuals that become infected in the process described in the first equation appear here as new recruits, while the last term denotes transition toward a more serious form of the disease, and, thereby, showing symptoms.

This very same term, the last one in the third equation, appears in the infected dynamics, as the only input in this class. Here, in addition to the natural mortality, the first term also contains the disease-related one, *μ*. The following three terms again represent the competition between individuals of this class as well as with the other two compartments.
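To make the construction concrete, system (1) can be integrated numerically with a classical fourth-order Runge–Kutta scheme. The sketch below is illustrative only: all parameter values are hypothetical, chosen merely to satisfy the standing assumptions (in particular *r* > *m*), and are not fitted to any real disease.

```python
# Minimal numerical sketch of the SAI system (1); all parameters hypothetical.

def rhs(state, p):
    """Right-hand side of (1): returns (dS/dt, dA/dt, dI/dt)."""
    S, A, I = state
    inc = p["alpha"] * S * A / (1.0 + p["beta"] * I)  # cautioned incidence term
    dS = (p["r"] * (S + A) - p["m"] * S - p["cSS"] * S * S
          - p["cSA"] * S * A - p["cSI"] * S * I - inc)
    dA = (-p["m"] * A - p["cAA"] * A * A - p["cAS"] * A * S
          - p["cAI"] * A * I + inc - p["pi"] * A)
    dI = (-(p["m"] + p["mu"]) * I - p["cII"] * I * I - p["cIS"] * I * S
          - p["cIA"] * I * A + p["pi"] * A)
    return (dS, dA, dI)

def rk4(state, p, dt, steps):
    """Classical fourth-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = rhs(state, p)
        k2 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(state, k1)), p)
        k3 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(state, k2)), p)
        k4 = rhs(tuple(x + dt * k for x, k in zip(state, k3)), p)
        state = tuple(x + dt / 6.0 * (a + 2 * b + 2 * c + d)
                      for x, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

params = dict(r=1.0, m=0.2, mu=0.1, alpha=0.4, beta=2.0, pi=0.3,
              cSS=0.02, cSA=0.01, cSI=0.01, cAA=0.02, cAS=0.01,
              cAI=0.01, cII=0.02, cIS=0.01, cIA=0.01)
final = rk4((30.0, 1.0, 0.0), params, dt=0.01, steps=5000)
print(final)
```

With these hypothetical values the trajectory remains non-negative and bounded; varying `beta` illustrates how a stronger cautionary response damps the incidence term.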

Tables 1 and 2 summarize the meaning of the parameters, all assumed to be non-negative.


**Table 1.** Interpretation and dimensions of demographic parameters.

**Table 2.** Interpretation and dimensions of epidemiological parameters.


#### *2.2. Boundedness*

Note, first of all, that if *r* < *m*, by summing the three equations of (1) and dropping most of the negative terms, we obtain

$$\frac{dT}{dt} = \frac{d(S+A+I)}{dt} \le (r-m)(S+A) - (m+\mu)I,$$

which entails that the total population $T = S + A + I$ will eventually vanish, implying also that each one of its subclasses does as well, as they are necessarily non-negative. Hence, $S \to 0^+$, $A \to 0^+$ and $I \to 0^+$. Unless otherwise stated, from now on, we assume

$$r > m.\tag{2}$$

On the other hand, even in the case where (2) holds, the system trajectories are bounded. Indeed, considering again the total population *T*, summing the equations in (1), adding *ηT* to both sides for an arbitrary *η* > 0 and retaining the quadratic terms, we obtain

$$\frac{dT}{dt} + \eta T \le \Pi_S(S) + \Pi_A(A) + \Pi_I(I),$$

where the functions on the right hand side are concave parabolae:

$$\Pi_S(S) = [r + \eta - c_{SS}S]S, \quad \Pi_A(A) = [r + \eta - c_{AA}A]A, \quad \Pi_I(I) = [\eta - c_{II}I]I.$$

By replacing the latter with their maxima, namely, evaluating each one of them, respectively, at the abscissae

$$S_m = \frac{r+\eta}{2c_{SS}}, \quad A_m = \frac{r+\eta}{2c_{AA}}, \quad I_m = \frac{\eta}{2c_{II}},$$

so that

$$\Pi_S(S) \le \Pi_S(S_m) = \frac{(r+\eta)^2}{4c_{SS}}, \quad \Pi_A(A) \le \Pi_A(A_m) = \frac{(r+\eta)^2}{4c_{AA}}, \quad \Pi_I(I) \le \Pi_I(I_m) = \frac{\eta^2}{4c_{II}},$$

we obtain the bound for the differential inequality

$$\frac{dT}{dt} + \eta T \le M, \quad M = \frac{(r+\eta)^2}{4c_{SS}} + \frac{(r+\eta)^2}{4c_{AA}} + \frac{\eta^2}{4c_{II}}.$$

It then follows that

$$T(t) \le \max\left\{ T(0), \frac{M}{\eta} \right\},$$

which is the desired result; since the total population is bounded, each subpopulation must be bounded as well because it cannot have negative values.
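The differential-inequality argument can be checked numerically: integrate (1) by forward Euler and verify that the total population never exceeds max{*T*(0), *M*/*η*}. A minimal sketch, with hypothetical parameter values:

```python
# Numerical check of the bound T(t) <= max{T(0), M/eta}; parameters hypothetical.
r, m, mu, alpha, beta, pi_ = 1.0, 0.2, 0.1, 0.4, 2.0, 0.3
cSS = cAA = cII = 0.02
cSA = cSI = cAS = cAI = cIS = cIA = 0.01

eta = 0.5
M = (r + eta) ** 2 / (4 * cSS) + (r + eta) ** 2 / (4 * cAA) + eta ** 2 / (4 * cII)

S, A, I = 30.0, 1.0, 0.0
bound = max(S + A + I, M / eta)
dt, ok = 0.001, True
for _ in range(100_000):  # forward-Euler steps of system (1)
    inc = alpha * S * A / (1.0 + beta * I)
    dS = r * (S + A) - m * S - cSS * S * S - cSA * S * A - cSI * S * I - inc
    dA = -m * A - cAA * A * A - cAS * A * S - cAI * A * I + inc - pi_ * A
    dI = -(m + mu) * I - cII * I * I - cIS * I * S - cIA * I * A + pi_ * A
    S, A, I = S + dt * dS, A + dt * dA, I + dt * dI
    ok = ok and (S + A + I <= bound + 1e-9)

print(ok, round(bound, 2))
```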

#### *2.3. Model Equilibria*

The model allows only three possible equilibria. Two are easily found: the population collapse *E*<sup>0</sup> = (0, 0, 0) and the disease-free point,

$$E_S = \left(\frac{r-m}{c_{SS}}, 0, 0\right),$$

which are always admissible in view of the assumption (2).
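As a sanity check, one can verify directly that the disease-free point annihilates the right-hand side of (1); the parameter values below are hypothetical.

```python
# Verify that E_S = ((r - m)/cSS, 0, 0) is an equilibrium of system (1).
r, m, mu, alpha, beta, pi_ = 1.0, 0.2, 0.1, 0.4, 2.0, 0.3
cSS, cSA, cSI = 0.02, 0.01, 0.01
cAA, cAS, cAI = 0.02, 0.01, 0.01
cII, cIS, cIA = 0.02, 0.01, 0.01

S, A, I = (r - m) / cSS, 0.0, 0.0   # the disease-free point E_S
inc = alpha * S * A / (1.0 + beta * I)
dS = r * (S + A) - m * S - cSS * S * S - cSA * S * A - cSI * S * I - inc
dA = -m * A - cAA * A * A - cAS * A * S - cAI * A * I + inc - pi_ * A
dI = -(m + mu) * I - cII * I * I - cIS * I * S - cIA * I * A + pi_ * A
print(dS, dA, dI)  # all three derivatives vanish (up to roundoff)
```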

#### 2.3.1. Endemic Coexistence

The final allowed equilibrium point is coexistence of the three population classes, with the disease becoming endemic. To assess this point, we need to study the three equilibrium equations of (1). They give three surfaces in the *S*, *A*, *I* phase space. To understand their shape, we intersect them with planes parallel to the coordinate planes.

**The first surface** Σ(1)

The first surface Σ(1),

$$\Sigma^{(1)}:\quad r(S+A) - mS - c_{SS}S^2 - c_{SA}SA - c_{SI}SI - \frac{\alpha SA}{1+\beta I} = 0,\tag{3}$$

arises from the corresponding equilibrium equation of (1). On the plane *S* = 0, this surface intersects the first quadrant of the *A*-*I* plane only on the *I* axis.

On the plane *A* = 0, in addition to *S* = 0, i.e. the *I* axis, the intersection is the straight line

$$c_{SS}S + c_{SI}I = r - m.\tag{4}$$

There are two intersection points with the coordinate axes:

$$I\_0 = \frac{r - m}{c\_{SI}} , \quad S\_0 = \frac{r - m}{c\_{SS}} ,$$

both are strictly positive in view of (2).

On the generic plane *I* = *h* > 0, the intersection is the conic

$$r(S+A) - mS - c_{SS}S^2 - c_{SA}SA - c_{SI}Sh - \tilde{\alpha}SA = 0, \quad \tilde{\alpha} = \frac{\alpha}{1+\beta h}.\tag{5}$$

To study this, it should be observed that the matrix of this quadratic form is

$$\mathbf{M}_h^{\Sigma^{(1)}} = \begin{bmatrix} -c_{SS} & -\frac{1}{2}(c_{SA} + \tilde{\alpha}) & \frac{1}{2}(r - m - c_{SI}h) \\ -\frac{1}{2}(c_{SA} + \tilde{\alpha}) & 0 & \frac{r}{2} \\ \frac{1}{2}(r - m - c_{SI}h) & \frac{r}{2} & 0 \end{bmatrix},$$

whose determinant is

$$\Delta_h^{\Sigma^{(1)}} = \frac{r}{4}\left(r c_{SS} - (c_{SA} + \tilde{\alpha})(r - m - c_{SI}h)\right).$$

The conic is nondegenerate whenever $\Delta_h^{\Sigma^{(1)}} \neq 0$, i.e., if and only if

$$r \neq \frac{(m + c\_{SI}h)(c\_{SA} + \tilde{\alpha})}{c\_{SA} + \tilde{\alpha} - c\_{SS}},\tag{6}$$

a condition that we now assume. The principal minor of order two is always negative

$$\delta_h^{\Sigma^{(1)}} = \begin{vmatrix} -c_{SS} & -\frac{1}{2}(c_{SA} + \tilde{\alpha}) \\ -\frac{1}{2}(c_{SA} + \tilde{\alpha}) & 0 \end{vmatrix} = -\frac{1}{4}(c_{SA} + \tilde{\alpha})^2 < 0$$

so that the conic is a hyperbola. It intersects the *A* axis only at the origin, while the intersection with the *S* axis is given by the point

$$S\_0^h = \frac{r - m - c\_{SI}h}{c\_{SS}}$$

which is positive if and only if $h < I_0$. Thus, in such a case, the two intersections with the coordinate axes are $(0, 0)$ and $(S_0^h, 0)$. On the other hand, the origin is the only intersection when $h \ge I_0$.

The conic (5) can be explicitly written as

$$A_h(S) = \frac{(r - m - c_{SS}S - c_{SI}h)S}{(c_{SA} + \tilde{\alpha})S - r},\tag{7}$$

so that its vertical asymptote is

$$S = S_\infty^h = \frac{r}{c_{SA} + \tilde{\alpha}} = \frac{r(1+\beta h)}{c_{SA}(1+\beta h) + \alpha}.$$

Since $S_\infty^h$ is strictly positive, for our only case of interest $h > 0$, there are two possible situations for the conic (5):

- If $S_0^h < S_\infty^h$, then $A_h(S) > 0$ if and only if $S_0^h < S < S_\infty^h$. The conic is a hyperbola which is positive and increasing in the $S$–$A$ plane at $I = h$, only from the zero $S_0^h$ up to the asymptote.
- If $S_\infty^h < S_0^h$, then $A_h(S) > 0$ if and only if $S_\infty^h < S < S_0^h$. The conic is a hyperbola which is positive and decreasing, only from the asymptote down to $(S_0^h, 0)$.
- Finally, if $S_0^h = S_\infty^h$, then the conic is degenerate because (6) fails to hold, having, in this case,

$$r = \frac{(m + c_{SI}h)(c_{SA} + \tilde{\alpha})}{c_{SA} + \tilde{\alpha} - c_{SS}}.$$

Further, in the positive cone of the $S$–$I$ plane, as $h$ increases, $S_0^h$ decreases linearly, moving along the line segment (4), starting from $S_0$ and vanishing when $h = I_0$, while $S_\infty^h$ increases, starting from $S_\infty = r(c_{SA}+\alpha)^{-1}$ and tending asymptotically to $r c_{SA}^{-1}$.

Thus, in the plane $S$–$I$, the trajectories of $S_0^h$ and $S_\infty^h$ never intersect if $S_0 < S_\infty$, while if $S_0 \ge S_\infty$, they intersect on the line segment (4) at a point on the $I = h$ plane with

$$h = \frac{-\left(c_{SI}(c_{SA} + \alpha) - \beta c_{SA}(r - m) + \beta c_{SS}r\right) + \sqrt{D}}{2\beta c_{SI}c_{SA}},$$

with

$$D = \left(c_{SI}(c_{SA} + \alpha) - \beta c_{SA}(r - m) + \beta c_{SS}r\right)^2 + 4\beta c_{SI}c_{SA}\left((r - m)(c_{SA} + \alpha) - c_{SS}r\right).$$
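The algebra above can be spot-checked numerically: the explicit branch (7) must satisfy the conic Equation (5) identically on each plane *I* = *h*. A small sketch, with hypothetical parameter values:

```python
# Spot-check that the explicit form (7) solves the conic (5) on I = h.
# Parameter values are hypothetical.
r, m, alpha, beta = 1.0, 0.2, 0.4, 2.0
cSS, cSA, cSI = 0.02, 0.01, 0.01
h = 3.0
at = alpha / (1.0 + beta * h)       # alpha-tilde of (5)

def A_h(S):
    """Explicit branch (7) of the conic."""
    return (r - m - cSS * S - cSI * h) * S / ((cSA + at) * S - r)

def conic(S, A):
    """Left-hand side of (5); zero on the curve."""
    return (r * (S + A) - m * S - cSS * S * S
            - cSA * S * A - cSI * S * h - at * S * A)

residuals = [conic(S, A_h(S)) for S in (1.0, 5.0, 25.0)]
print(residuals)  # all vanish up to roundoff
```

The vertical asymptote is exactly the value of *S* at which the denominator of (7) vanishes.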

**The second surface** Θ(1)

The second surface Θ(1), from the corresponding equilibrium equation of (1)

$$\Theta^{(1)}: \quad -m - c_{AA}A - c_{AS}S - c_{AI}I + \frac{\alpha S}{1+\beta I} - \pi = 0,\tag{8}$$

meets the plane *S* = 0 on the segment with negative intersections with the coordinate axes

$$c_{AA}A + c_{AI}I = -m - \pi, \quad \hat{I}_0 = -\frac{m + \pi}{c_{AI}} < 0, \quad \hat{A}_0 = -\frac{m + \pi}{c_{AA}} < 0,\tag{9}$$

so that no feasible portion exists.

On the plane *A* = 0, the intersection is

$$m + m\beta I + c_{AS}S + c_{AS}\beta SI + c_{AI}I + c_{AI}\beta I^2 - \alpha S + \pi + \pi\beta I = 0.\tag{10}$$

The matrix of this conic is

$$\mathbf{M}^{\Theta^{(1)}} = \begin{bmatrix} 0 & \frac{1}{2}c_{AS}\beta & \frac{1}{2}(c_{AS} - \alpha) \\ \frac{1}{2}c_{AS}\beta & c_{AI}\beta & \frac{1}{2}(\beta(m+\pi) + c_{AI}) \\ \frac{1}{2}(c_{AS} - \alpha) & \frac{1}{2}(\beta(m+\pi) + c_{AI}) & m + \pi \end{bmatrix},$$

with determinant

$$\Delta^{\Theta^{(1)}} = \frac{1}{4}\alpha\beta\left(c_{AS}c_{AI} - c_{AI}\alpha - \beta c_{AS}(m + \pi)\right).$$

It is nondegenerate for $\Delta^{\Theta^{(1)}} \neq 0$, i.e., if and only if

$$\alpha \neq c_{AS}\left(1 - \frac{\beta(m + \pi)}{c_{AI}}\right).\tag{11}$$

If (11) holds, since

$$
\delta^{\Theta^{(1)}} = \begin{vmatrix} 0 & \frac{1}{2} c\_{AS} \beta \\\\ \frac{1}{2} c\_{AS} \beta & c\_{AI} \beta \end{vmatrix} = -\frac{1}{4} c\_{AS}^2 \beta^2 < 0,
$$

it is a hyperbola. Its intersections with the coordinate axes are

$$\hat{S}_0 = \frac{m+\pi}{\alpha - c_{AS}},\tag{12}$$

which is positive only if

$$\alpha > c_{AS},\tag{13}$$

and the roots of the quadratic

$$c_{AI}\beta I^2 + ((m+\pi)\beta + c_{AI})I + m + \pi = 0,$$

which, following Descartes' rule, are both negative. Writing the hyperbola explicitly as

$$S(I) = -\frac{c_{AI}\beta I^2 + [(m+\pi)\beta + c_{AI}]I + m + \pi}{c_{AS}(1+\beta I) - \alpha},\tag{14}$$

we find its vertical asymptote

$$\hat{I}_\infty = \frac{\alpha - c_{AS}}{\beta c_{AS}} > 0$$

in view of (13). In such a case, the hyperbola is positive if and only if $0 \le I < \hat{I}_\infty$. Then, the hyperbola increases from $\hat{S}_0$ at $I = 0$ up to the asymptote $I = \hat{I}_\infty$. If (13) fails to hold, no portion of the conic lies in the first quadrant.

On the plane *I* = *h*, recalling (5), the surface Θ(1) gives the straight line

$$c_{AA}A + (c_{AS} - \tilde{\alpha})S = -m - \pi - c_{AI}h,\tag{15}$$

with the following intersections with the coordinate axes

$$\hat{A}_0^h = -\frac{m + \pi + c_{AI}h}{c_{AA}}, \quad \hat{S}_0^h = \frac{m + \pi + c_{AI}h}{\tilde{\alpha} - c_{AS}}.$$

The former is always negative, while the latter is positive if and only if $\tilde{\alpha} > c_{AS}$, which is equivalent to

$$h < \hat{I}_\infty.\tag{16}$$

Consequently, the surface Θ(1) does not intersect the plane $I = h$ if $h \ge \hat{I}_\infty$, while, if $h < \hat{I}_\infty$, it crosses the positive cone along the portion of the straight line (15) joining the point $(\hat{S}_0^h, 0)$ with $(0, \hat{A}_0^h)$.

Further, as $h$ increases, $\hat{A}_0^h$ decreases linearly to $-\infty$, while $\hat{S}_0^h$ grows along the hyperbola (10) in the first quadrant of the $S$–$I$ plane, starting from $\hat{S}_0$, recall (12), and tending asymptotically to $\hat{I}_\infty$.

**The third surface** Γ

Here, we have

$$\Gamma: \quad -(m+\mu)I - c\_{II}I^2 - c\_{IS}IS - c\_{IA}IA + \pi A = 0,\tag{17}$$

On the plane $S = \ell$, we obtain the conic

$$c_{II}I^2 + c_{IA}IA + (m + \mu + c_{IS}\ell)I - \pi A = 0,\tag{18}$$

whose matrix is

$$\mathbf{M}\_{\ell}^{\Gamma} = \begin{bmatrix} c\_{II} & \frac{c\_{IA}}{2} & \frac{1}{2}(m + \mu + c\_{IS}\ell) \\ \frac{c\_{IA}}{2} & 0 & -\frac{\pi}{2} \\ \frac{1}{2}(m + \mu + c\_{IS}\ell) & -\frac{\pi}{2} & 0 \end{bmatrix},$$

with

$$\Delta_\ell^{\Gamma} = -\frac{\pi}{4}\left(\pi c_{II} + c_{IA}(m + \mu + c_{IS}\ell)\right) < 0.$$

Since Δ<sup>Γ</sup> is always negative, the conic is always nondegenerate. Further, the principal minor of order two is always negative, indicating that the conic is a hyperbola:

$$
\delta\_\ell^\Gamma = \begin{vmatrix} c\_{II} & \frac{c\_{IA}}{2} \\ \frac{c\_{IA}}{2} & 0 \end{vmatrix} = -\frac{1}{4}c\_{IA}^2.
$$

This hyperbola intersects the axes only at the origin in the feasible range. We can write it explicitly as

$$A\_{\ell}(I) = \frac{(m + \mu + c\_{II}I + c\_{IS}\ell)I}{\pi - c\_{IA}I}.\tag{19}$$

Its vertical asymptote is independent of $\ell$,

$$\tilde{I}_\infty = \frac{\pi}{c_{IA}} > 0.$$

Hence, in the positive cone of the $I$–$A$ plane, $A_\ell(I) > 0$ if and only if $0 < I < \tilde{I}_\infty$. Thus, the conic (18) rises from the origin up to the asymptote, for every value of $\ell$.

It is also easily seen that on *A* = 0, the surface Γ intersects the first quadrant of the *S*–*I* plane only on the *S* horizontal axis.

**The possible intersections of** Σ(1)**,** Θ(1) **and** Γ

Thus, on the $I = 0$ coordinate plane, Σ(1) and Θ(1) meet at a point $Q_{I=0}$ if the condition

$$\hat{S}_0 < S_0\tag{20}$$

is satisfied. In such a case, on the plane $A = 0$, the segment joining $S_0$ and $I_0$ meets, at the point $Q_{A=0}$, the hyperbola generated by the intersection of the surface Θ(1) with $A = 0$. The intersection of Σ(1) and Θ(1) always exists, provided the condition (20) holds, because Σ(1) rises up to the vertical asymptote, while Θ(1) has a positive slope on the $I = h$ planes. The curve $\rho = \Sigma^{(1)} \cap \Theta^{(1)}$ thus joins the points $Q_{I=0}$ and $Q_{A=0}$. However, the former has a positive value of $A$ and lies on the coordinate plane $I = 0$, while the latter lies on the plane $A = 0$; thus, the former lies in the upper half space into which Γ partitions the phase space $S$–$A$–$I$, and the latter in the lower one. Hence, the curve $\rho$ intersects Γ, and this intersection point represents the endemic coexistence equilibrium.

#### *2.4. Equilibria Stability*

Local Stability

The Jacobian matrix associated with the system (1) is

$$\mathbf{J} = \begin{bmatrix} J_{1,1} & r - c_{SA}S - \frac{\alpha S}{1+\beta I} & -c_{SI}S + \frac{\alpha\beta SA}{(1+\beta I)^2} \\ -c_{AS}A + \frac{\alpha A}{1+\beta I} & J_{2,2} & -c_{AI}A - \frac{\alpha\beta SA}{(1+\beta I)^2} \\ -c_{IS}I & -c_{IA}I + \pi & J_{3,3} \end{bmatrix},$$

where

$$\begin{aligned} J_{1,1} &= r - m - 2c_{SS}S - c_{SA}A - c_{SI}I - \frac{\alpha A}{1+\beta I}, \\ J_{2,2} &= -m - 2c_{AA}A - c_{AS}S - c_{AI}I + \frac{\alpha S}{1+\beta I} - \pi, \\ J_{3,3} &= -m - \mu - 2c_{II}I - c_{IS}S - c_{IA}A. \end{aligned}$$


For point *E*0, we immediately obtain the eigenvalues *r* − *m*, −*m* − *π* < 0 and −*m* − *μ* < 0, so that the ecosystem collapses if

$$r < m\tag{21}$$

in line with the earlier considerations.

In addition, at the disease-free point *ES*, the eigenvalues can all be determined analytically:

$$-r + m, \quad -m - \frac{(r - m)(c_{AS} - \alpha)}{c_{SS}} - \pi, \quad -m - \mu - \frac{c_{IS}}{c_{SS}}(r - m).$$

In view of (2), the first and third eigenvalues are negative, so the stability condition reduces to

$$\pi + m > \frac{(r - m)(\alpha - c_{AS})}{c_{SS}}.\tag{22}$$
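Because $A = I = 0$ at $E_S$, the Jacobian there satisfies $J_{2,1} = J_{2,3} = J_{3,1} = 0$, so its eigenvalues reduce to the diagonal entries. A short check, with hypothetical parameter values, confirms that these coincide with the analytic expressions above.

```python
# Check the disease-free eigenvalues: at E_S the Jacobian has
# J21 = J23 = J31 = 0, so its eigenvalues are the diagonal entries.
# Parameter values are hypothetical.
r, m, mu, alpha, pi_ = 1.0, 0.2, 0.1, 0.4, 0.3
cSS, cAS, cIS = 0.02, 0.01, 0.01

S = (r - m) / cSS            # E_S with A = I = 0
J11 = r - m - 2 * cSS * S    # all terms in A and I vanish
J22 = -m - cAS * S + alpha * S - pi_
J33 = -m - mu - cIS * S

# Analytic expressions quoted in the text:
lam1 = -r + m
lam2 = -m - (r - m) * (cAS - alpha) / cSS - pi_
lam3 = -m - mu - (cIS / cSS) * (r - m)
print(J11 - lam1, J22 - lam2, J33 - lam3)  # all vanish up to roundoff
```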

For the coexistence endemic equilibrium *ESAI*, we rely on numerical simulations. Observe that, using the equilibrium equations, the diagonal terms of the Jacobian simplify, becoming

$$J_{11} = -\frac{rA}{S} - c_{SS}S, \quad J_{22} = -c_{AA}A, \quad J_{33} = -\frac{\pi A}{I} - c_{II}I.$$

Thus, the trace is negative. The remaining two Routh–Hurwitz conditions are too complicated to shed any further analytical light and are not pursued further.

Table 3 summarizes the information gathered on the three equilibrium points and their local stability.

**Table 3.** Summary of equilibria and local stability for model (1).


#### *2.5. SAI Model with Vertical Transmission*

The model is:

$$\frac{dS}{dt} = rS - mS - c_{SS}S^2 - c_{SA}SA - c_{SI}SI - \frac{\alpha SA}{1+\beta I} \tag{23}$$

$$\frac{dA}{dt} = rA - mA - c_{AA}A^2 - c_{AS}AS - c_{AI}AI + \frac{\alpha SA}{1+\beta I} - \pi A,$$

$$\frac{dI}{dt} = -(m + \mu)I - c_{II}I^2 - c_{IS}IS - c_{IA}IA + \pi A.$$

The variables and parameters retain the same meanings as in (1), recalled in Tables 1 and 2. Thus, the description of (1) holds in this case as well. The only change with respect to model (1) is that the reproduction of asymptomatic individuals leads to new offspring in the very same class.

#### *2.6. Preliminary Analysis*

In view of the fact that (1) and (23) differ only in one term, the considerations that lead to the system disappearance if (2) is not satisfied, as well as the boundedness of the trajectories, can be repeated using the same steps and show that these results hold here as well. Thus, in order to ensure that the model solutions do not vanish, the condition (2) must be imposed here as well.

Further, the equilibria *E*<sup>0</sup> and *ES* are the same as the corresponding ones of (1), with the same feasibility condition for the latter, namely, (2).

#### 2.6.1. Equilibria

In addition to endemic coexistence, we also find the points $E_{AI}^{\pm}$, where the susceptibles vanish. To investigate the latter, the second equilibrium equation yields

$$A = \frac{1}{c_{AA}}(r - m - c_{AI}I - \pi).\tag{24}$$

Substituting (24) into the third equilibrium equation, we find the following quadratic equation in *I*:

$$\begin{aligned} C_2 I^2 + C_1 I + C_0 &= 0, \quad C_2 = c_{II} - \frac{c_{IA}c_{AI}}{c_{AA}}, \\ C_1 &= m + \mu + \frac{c_{IA}r - c_{IA}m - c_{IA}\pi + c_{AI}\pi}{c_{AA}}, \quad C_0 = -\frac{\pi}{c_{AA}}(r - m - \pi). \end{aligned}$$

Its roots are

$$I\_{\pm} = \frac{c\_{AA}}{2(c\_{II}c\_{AA} - c\_{IA}c\_{AI})} \left( -m - \mu - \frac{c\_{IA}r - c\_{IA}m - c\_{IA}\pi + c\_{AI}\pi}{c\_{AA}} \pm \sqrt{\Delta} \right), \tag{25}$$

with

$$\Delta = \left(m + \mu + \frac{c_{IA}r - c_{IA}m - c_{IA}\pi + c_{AI}\pi}{c_{AA}}\right)^2 + \frac{4\pi}{c_{AA}}\left(c_{II} - \frac{c_{IA}c_{AI}}{c_{AA}}\right)(r - m - \pi).\tag{26}$$

Thus, there are two possible equilibria

$$E\_{AI}^{\pm} = \left(0, \frac{r - m - c\_{AI}I\_{\pm} - \pi}{c\_{AA}}, I\_{\pm}\right)$$

with feasibility conditions

$$r > m + c_{AI}I_{\pm} + \pi\tag{27}$$

and

$$\frac{1}{c_{II}c_{AA} - c_{IA}c_{AI}}\left(-m - \mu - \frac{c_{IA}r - c_{IA}m - c_{IA}\pi + c_{AI}\pi}{c_{AA}} \pm \sqrt{\Delta}\right) > 0.\tag{28}$$
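The quadratic in *I* and its roots (25) can be cross-checked numerically; the parameter values below are hypothetical and were chosen only so that the discriminant (26) is positive.

```python
# Cross-check of the roots (25) of the quadratic C2*I^2 + C1*I + C0 = 0.
# Parameter values are hypothetical.
import math

r, m, mu, pi_ = 1.0, 0.2, 0.1, 0.3
cAA, cAI, cIA, cII = 0.02, 0.01, 0.01, 0.02

C2 = cII - cIA * cAI / cAA
C1 = m + mu + (cIA * r - cIA * m - cIA * pi_ + cAI * pi_) / cAA
C0 = -pi_ / cAA * (r - m - pi_)

disc = C1 * C1 - 4 * C2 * C0            # coincides with Delta in (26)
roots = [(-C1 + s * math.sqrt(disc)) / (2 * C2) for s in (+1, -1)]
residuals = [C2 * x * x + C1 * x + C0 for x in roots]
print(roots, residuals)  # residuals vanish up to roundoff
```

With these particular values, C2 > 0 and C0 < 0, so the two roots have opposite signs and only one of the equilibria can be feasible.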

#### 2.6.2. The Endemic Coexistence Equilibrium

Again, we study this point through the intersection of suitable surfaces, arising from the equilibrium equations of (23). Note that since the last equation in (23) is unchanged with respect to the same one in (1), the resulting surface is once again represented by the function Γ, already investigated at the end of Section 2.3.1.

**The first surface** Σ(2)

From the first equilibrium equation of (23), we have

$$\Sigma^{(2)}: \quad r - m - c_{SS}S - c_{SA}A - c_{SI}I - \frac{\alpha A}{1+\beta I} = 0.\tag{29}$$

On the plane *A* = 0, we obtain

$$c_{SS}S + c_{SI}I = r - m,\tag{30}$$

which meets the coordinate axes at the points given by

$$S\_0 = \frac{r - m}{c\_{SS}}, \quad I\_0 = \frac{r - m}{c\_{SI}}.\tag{31}$$

Thus, the intersection with the coordinate plane *A* = 0 is the line segment joining the points (0, *I*0) and (*S*0, 0).

Similarly, on the plane *I* = 0, we find the point *A*<sup>0</sup> and the line

$$A_0 = \frac{r - m}{c_{SA} + \alpha}, \quad c_{SS}S + (c_{SA} + \alpha)A = r - m,\tag{32}$$

whose feasible portion joins the points (0, *A*0) and (*S*0, 0).

On the plane $S = \ell$, we obtain the conic

$$(r - m - c_{SS}\ell) + (\beta(r - m - c_{SS}\ell) - c_{SI})I - (c_{SA} + \alpha)A - \beta c_{SA}AI - \beta c_{SI}I^2 = 0,\tag{33}$$

whose coefficient matrix is

$$\mathbf{M}\_{\ell}^{\Sigma^{(2)}} = \begin{bmatrix} -\mathbf{c}\_{SI}\boldsymbol{\beta} & -\frac{1}{2}\mathbf{c}\_{SA}\boldsymbol{\beta} & \frac{1}{2}(\beta(r-m-\mathbf{c}\_{SS}\boldsymbol{\ell})-\mathbf{c}\_{SI})\\ -\frac{1}{2}\mathbf{c}\_{SA}\boldsymbol{\beta} & 0 & -\frac{1}{2}(\mathbf{c}\_{SA}+\boldsymbol{\alpha})\\ \frac{1}{2}(\beta(r-m-\mathbf{c}\_{SS}\boldsymbol{\ell})-\mathbf{c}\_{SI}) & -\frac{1}{2}(\mathbf{c}\_{SA}+\boldsymbol{\alpha}) & r-m-\mathbf{c}\_{SS}\boldsymbol{\ell} \end{bmatrix},$$

with determinant

$$\Delta_\ell^{\Sigma^{(2)}} = \frac{1}{4}\alpha\beta\left(\alpha c_{SI} + c_{SI}c_{SA} + \beta c_{SA}(r - m - c_{SS}\ell)\right).$$

Now, if $\ell \le S_0$, we have that $\Delta_\ell^{\Sigma^{(2)}}$ is always positive and the conic is nondegenerate. Since

$$\delta_\ell^{\Sigma^{(2)}} = \begin{vmatrix} -c_{SI}\beta & -\frac{1}{2}c_{SA}\beta \\ -\frac{1}{2}c_{SA}\beta & 0 \end{vmatrix} = -\frac{1}{4}c_{SA}^2\beta^2 < 0$$

the conic is a hyperbola. The intersections on the plane $S = \ell$ with the coordinate axes are the points

$$A\_0^\ell = \frac{r - m - c\_{SS}\ell}{c\_{SA} + \alpha}$$

positive for

$$r > m + c\_{SS}\ell \tag{34}$$

and the roots of the quadratic

$$\beta c\_{SI} I^2 + (c\_{SI} - (r - m - c\_{SS} \ell) \beta) I - (r - m - c\_{SS} \ell) = 0 \,,$$

namely,

$$I_0^{\ell,\pm} = \frac{(r - m - c_{SS}\ell)\beta - c_{SI} \pm \sqrt{\Delta_I^{\ell}}}{2\beta c_{SI}},$$

with

$$
\Delta\_I^\ell = \left(\beta(r - m - c\_{SS}\ell) + c\_{SI}\right)^2.
$$

Recalling (31), the roots explicitly are

$$I_0^{\ell+} = \frac{r - m - c_{SS}\ell}{c_{SI}}, \quad I_0^{\ell-} = -\frac{1}{\beta} = I_0^- < 0.$$

Note that if (34) is not satisfied, no portion of the conic lies in the feasible cone. In addition, both $A_0^\ell$ and $I_0^{\ell+}$ are positive if and only if $\ell < S_0$. As $\ell$ increases, both $I_0^{\ell+}$ and $A_0^\ell$ decrease linearly, respectively, along the segments (30) and (32), starting, respectively, from $I_0$ and $A_0$, and coalescing into the origin when $\ell = S_0$.

Writing the conic in explicit form

$$A_\ell(I) = -\frac{\beta c_{SI}I^2 + (c_{SI} - (r - m - c_{SS}\ell)\beta)I - (r - m - c_{SS}\ell)}{c_{SA}(1+\beta I) + \alpha},\tag{35}$$

the hyperbola is seen to have a vertical asymptote at $I = I\_{\infty} = -(c\_{SA} + \alpha)(c\_{SA}\beta)^{-1} < 0$, which is always negative and independent of $\ell$. Thus, if $\ell < S\_0$, the hyperbola is positive, with a feasible branch on the plane $S = \ell$ joining the points

$$\left(I\_0^{\ell+},0\right), \quad \left(0,A\_0^{\ell}\right)$$

on the coordinate axes. This branch is concave because

$$I\_0^- = -\frac{1}{\beta} > -\frac{1}{\beta}\left(1 + \frac{\alpha}{c\_{SA}}\right) = I\_{\infty} < 0\,.$$

**The second surface** Θ(2)

This surface has the expression

$$\Theta^{(2)}: \quad r - m - \pi - c\_{AA}A - c\_{AS}S - c\_{AI}I + \frac{\alpha S}{1 + \beta I} = 0. \tag{36}$$

On the plane *S* = 0, we obtain the straight line

$$
c\_{AA}A + c\_{AI}I = r - m - \pi\,, \tag{37}
$$

with intersections with the axes at the points

$$
\widehat{I}\_0 = \frac{r - m - \pi}{c\_{AI}}, \quad \widehat{A}\_0 = \frac{r - m - \pi}{c\_{AA}}\,,
$$

giving the segment joining $(\widehat{I}\_0, 0)$ and $(0, \widehat{A}\_0)$. These are both positive if

$$
r > m + \pi\,. \tag{38}
$$

If this condition does not hold, there is no feasible intersection.

Recalling (5), the intersection with the plane *I* = *h* gives

$$
c\_{AA}A + (c\_{AS} - \widetilde{\alpha})S = r - m - \pi - c\_{AI}h\,.\tag{39}
$$

Assuming (38), the intersections with the coordinate axes are

$$
\widehat{A}\_{h} = \frac{r - m - \pi - c\_{AI}h}{c\_{AA}}, \quad \widehat{S}\_{h} = \frac{r - m - \pi - c\_{AI}h}{c\_{AS} - \widetilde{\alpha}}.
$$

Note that for $h = 0$, these intersections become $\widehat{A}\_0$, found above, and $\widehat{S}\_0 = (r - m - \pi)(c\_{AS} - \alpha)^{-1}$. The latter is positive provided

$$
c\_{AS} > \alpha\,. \tag{40}
$$

On the plane *A* = 0 we once again obtain a conic section

$$
\beta c\_{AI}I^2 + \beta c\_{AS}SI + (c\_{AI} - \beta(r - m - \pi))I + (c\_{AS} - \alpha)S - (r - m - \pi) = 0\,,\tag{41}
$$

whose matrix is

$$\mathbf{M}^{\Theta^{(2)}} = \begin{bmatrix} \beta c\_{AI} & \frac{1}{2}\beta c\_{AS} & \frac{1}{2}(c\_{AI} - \beta(r - m - \pi)) \\ \frac{1}{2}\beta c\_{AS} & 0 & \frac{1}{2}(c\_{AS} - \alpha) \\ \frac{1}{2}(c\_{AI} - \beta(r - m - \pi)) & \frac{1}{2}(c\_{AS} - \alpha) & -(r - m - \pi) \end{bmatrix}\,,$$

with determinant

$$
\Delta^{\Theta^{(2)}} = \frac{1}{4}\alpha\beta\left(c\_{AS}\beta(r - m - \pi) + c\_{AS}c\_{AI} - c\_{AI}\alpha\right).
$$

It is nondegenerate if and only if

$$\alpha \neq c\_{AS} \left( 1 + \frac{\beta(r - m - \pi)}{c\_{AI}} \right). \tag{42}$$

Since

$$
\delta^{\Theta^{(2)}} = \begin{vmatrix}
\beta c\_{AI} & \frac{1}{2}\beta c\_{AS} \\
\frac{1}{2}\beta c\_{AS} & 0
\end{vmatrix} = -\frac{1}{4}c\_{AS}^2 \beta^2 < 0
$$

the conic section is again a hyperbola. Its intersections with the axes are the points

$$\begin{aligned} \hat{S}\_0 &= \frac{r - m - \pi}{c\_{AS} - \alpha}, \quad \hat{I}\_0^{\pm} = \frac{\beta(r - m - \pi) - c\_{AI} \pm \sqrt{\Delta\_I}}{2\beta c\_{AI}}, \\ \Delta\_I &= (c\_{AI} - \beta(r - m - \pi))^2 + 4\beta c\_{AI}(r - m - \pi) = \left(\beta(r - m - \pi) + c\_{AI}\right)^2, \end{aligned}$$

the latter being the roots of the quadratic

$$
\beta c\_{AI}I^2 + (c\_{AI} - \beta(r - m - \pi))I - (r - m - \pi) = 0\,.
$$

Note that these roots are in fact

$$
\hat{I}\_0^+ = \hat{I}\_0, \quad \hat{I}\_0^- = -\frac{1}{\beta}.
$$

Writing this conic explicitly as

$$S(I) = -\frac{\beta c\_{AI}I^2 + (c\_{AI} - \beta(r - m - \pi))I - (r - m - \pi)}{c\_{AS}(1 + \beta I) - \alpha},\tag{43}$$

we observe that it has a vertical asymptote at

$$
\widehat{I}\_{\infty} = \frac{\alpha - c\_{AS}}{c\_{AS}\beta},
$$

which is positive if (40) does not hold.

**The possible intersections of** Σ(2)**,** Θ(2) **and** Γ

We now briefly discuss, for this case as well, sufficient conditions for an intersection. An intersection between Σ(2) and Θ(2) can be guaranteed if the corresponding intersections with the coordinate axes are suitably arranged, imposing interlacing properties on the corresponding coordinates.

More specifically, assuming all intersections with the coordinate axes for both Σ(2) and Θ(2) to be positive, imposing

$$S\_0 > \widehat{S}\_0, \quad A\_0 < \widehat{A}\_0, \quad I\_0 < \widehat{I}\_0, \tag{44}$$

the two surfaces meet on the *S*–*A* coordinate plane at the point *W*<sup>−</sup> = (*S*−, *A*−, 0),

$$S\_{-} = \frac{1}{\delta\_{-}}\left[(r-m)c\_{AA} - (r-m-\pi)(c\_{SA}+\alpha)\right],$$

$$A\_{-} = \frac{1}{\delta\_{-}}\left[c\_{SS}(r-m-\pi) - (c\_{AS}-\alpha)(r-m)\right],$$

$$\delta\_{-} = c\_{SS}c\_{AA} - (c\_{AS}-\alpha)(c\_{SA}+\alpha)\,,$$
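The point $W\_-$ is simply the solution of a 2×2 linear system: on the plane $I = 0$, the two surfaces reduce to the straight lines $c\_{SS}S + (c\_{SA}+\alpha)A = r - m$ and $(c\_{AS}-\alpha)S + c\_{AA}A = r - m - \pi$. Solving this system symbolically reproduces the expressions above; a sketch in Python/SymPy (our illustration, under this reading of the on-axis restrictions, not the authors' code):

```python
import sympy as sp

S, A = sp.symbols('S A')
r, m, pi_, al = sp.symbols('r m pi alpha', positive=True)
cSS, cSA, cAS, cAA = sp.symbols('c_SS c_SA c_AS c_AA', positive=True)

# Restrictions of Sigma^(2) and Theta^(2) to the coordinate plane I = 0.
eq1 = sp.Eq(cSS*S + (cSA + al)*A, r - m)
eq2 = sp.Eq((cAS - al)*S + cAA*A, r - m - pi_)

sol = sp.solve([eq1, eq2], [S, A], dict=True)[0]

# Cramer's rule gives S_-, A_- and the determinant delta_- stated above.
delta = cSS*cAA - (cAS - al)*(cSA + al)
S_minus = (cAA*(r - m) - (cSA + al)*(r - m - pi_))/delta
A_minus = (cSS*(r - m - pi_) - (cAS - al)*(r - m))/delta

assert sp.simplify(sol[S] - S_minus) == 0
assert sp.simplify(sol[A] - A_minus) == 0
```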

and also at a point on the *S*–*I* coordinate plane, say $E\_+ = (S\_+, 0, I\_+)$, with

$$S\_{+} = \frac{1}{c\_{SS}}\left[(r-m) - c\_{SI}I\_{+}\right],$$

where $I\_+$ is a root of the quadratic

$$\kappa\_2 I^2 + \kappa\_1 I + \kappa\_0 = 0, \quad \kappa\_0 = \frac{1}{c\_{SS}}(r - m)(c\_{AS} - \alpha) - (r - m - \pi),$$

$$\kappa\_1 = \frac{1}{c\_{SS}}\left[\beta c\_{AS}(r - m) - c\_{SI}(c\_{AS} - \alpha)\right] + c\_{AI} - \beta(r - m - \pi), \quad \kappa\_2 = \beta\left(c\_{AI} - \frac{c\_{AS}c\_{SI}}{c\_{SS}}\right),$$

and clearly also on the line *ρ*<sup>1</sup> joining these two points. In addition, imposing instead the reverse inequalities

$$S\_0 < \widehat{S}\_0, \quad A\_0 > \widehat{A}\_0, \quad I\_0 > \widehat{I}\_0, \tag{45}$$

Σ(2) and Θ(2) intersect each other again on the line *ρ*1. Both these sets of conditions (44) and (45) represent sufficient intersection conditions, and more cases could arise and also lead to other intersection lines; however, we do not examine them further.

In addition, the intersection line *ρ*<sup>1</sup> meets Γ because *W*<sup>−</sup> = (*S*−, *A*−, 0) lies in the upper semispace generated by Γ, while *E*+ = (*S*+, 0, *I*+) lies in the lower one. Hence, since the intersection exists, it provides the population values of the endemic equilibrium for (23). Table 4 summarizes these findings.

#### 2.6.3. Local Stability

The Jacobian here has slight differences with respect to the one of (1), namely, it is

$$
\widehat{J} = \begin{bmatrix} \widehat{J}\_{1,1} & -c\_{SA}S - \frac{\alpha S}{1+\beta I} & -c\_{SI}S + \frac{\alpha\beta SA}{\left(1+\beta I\right)^2} \\ -c\_{AS}A + \frac{\alpha A}{1+\beta I} & \widehat{J}\_{2,2} & -c\_{AI}A - \frac{\alpha\beta SA}{\left(1+\beta I\right)^2} \\ -c\_{IS}I & \pi - c\_{IA}I & \widehat{J}\_{3,3} \end{bmatrix},
$$

with

$$
\widehat{J}\_{1,1} = r - m - 2c\_{SS}S - c\_{SA}A - c\_{SI}I - \frac{\alpha A}{1 + \beta I},
$$

$$
\widehat{J}\_{2,2} = r - m - 2c\_{AA}A - c\_{AS}S - c\_{AI}I + \frac{\alpha S}{1 + \beta I} - \pi
$$

and

$$
\widehat{J}\_{3,3} = -m - \mu - 2c\_{II}I - c\_{IS}S - c\_{IA}A\,.
$$

For equilibrium $E\_0$ the stability condition is unchanged with respect to model (1), namely (21). For $E\_S$, instead, the second eigenvalue changes, so that, instead of (22), the stability condition becomes

$$
\pi > \left( 1 - \frac{c\_{AS} - \alpha}{c\_{SS}} \right) (r - m). \tag{46}
$$

For the equilibria $E\_{AI}^{\pm}$, one eigenvalue factorizes to give the first stability condition

$$
r < m + c\_{SA}A\_{\pm} + c\_{SI}I\_{\pm} + \frac{\alpha A\_{\pm}}{1 + \beta I\_{\pm}}\,.\tag{47}
$$

Applying the Routh–Hurwitz conditions to the remaining minor, we obtain from the condition on the trace

$$
r < 2m + \pi + \mu + 2c\_{AA}A\_{\pm} + 2c\_{II}I\_{\pm} + c\_{AI}I\_{\pm} + c\_{IA}A\_{\pm}\,,\tag{48}
$$

while the one on the determinant provides the last stability condition

$$c\_{AI}A\_{\pm}(-c\_{IA}I\_{\pm} + \pi) > (r - m - 2c\_{AA}A\_{\pm} - c\_{AI}I\_{\pm} - \pi)(m + \mu + 2c\_{II}I\_{\pm} + c\_{IA}A\_{\pm})\ . \tag{49}$$

For the coexistence equilibrium, using the equilibrium equations we can simplify the diagonal terms of the Jacobian, which become

$$
\widehat{j}\_{11} = -c\_{SS}S, \quad \widehat{j}\_{22} = -c\_{AA}A, \quad \widehat{j}\_{33} = -\frac{\pi A}{I} - c\_{II}I,
$$

immediately showing that the trace is negative. Stability then hinges on the remaining two Routh–Hurwitz conditions, which are rather involved and will be neither stated nor investigated.

Table 5 summarizes these findings.

**Table 4.** Summary of equilibria and their feasibility for model (23).


**Table 5.** Summary of equilibria and their local stability for model (23).


#### **3. Results**

We proposed two models for the phenomenon of contact reduction during the spread of an epidemic. The novelty here is that the fear inducing the reduction in individuals' intermingling is based on the number of symptomatic cases, not simply on the "infected", which would also include the asymptomatic individuals.

The numerical experiments were all performed using our own codes written in Matlab, using the built-in function ode45 (or ode15s) for the integration of the differential equations. In the simulations, we always took the following hypothetical initial conditions:

$$S(0) = 10{,}000, \quad A(0) = 1, \quad I(0) = 0. \tag{50}$$

In addition, the very same hypothetical competition rates are used, namely

$$\begin{aligned} c\_{SS} &= 0.0004, \quad c\_{AA} = 0.0005, \quad c\_{II} = 0.0006, \quad c\_{SA} = 0.0008, \quad c\_{SI} = 0.0003, \\ c\_{AS} &= 0.0001, \quad c\_{AI} = 0.0002, \quad c\_{IS} = 0.0009, \quad c\_{IA} = 0.0007. \end{aligned} \tag{51}$$

Two models are presented, differing in the way disease transmission occurs between parents and offspring. In the former, vertical transmission is not allowed; it is, instead, present in the second model.

We present the simulation results from these models in Figures 1–7.

In Figure 1, the disease-free point is attained for model (1) and the hypothetical parameters

$$
\alpha = 0.005, \quad \beta = 0.006, \quad \mu = 0.06, \quad \pi = 7, \quad m = 0.2, \quad r = 0.7. \tag{52}
$$
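The experiments themselves were run in Matlab; an equivalent open-source sketch in Python is given below. The right-hand sides are our reconstruction of model (1) from the Jacobian entries reported above (no vertical transmission, so all births $r(S+A)$ enter the susceptible class); it is an illustration of the setup (50)–(52), not the authors' code.

```python
from scipy.integrate import solve_ivp

# Competition rates (51) and the parameter set (52) used for Figure 1.
cSS, cAA, cII = 0.0004, 0.0005, 0.0006
cSA, cSI, cAS = 0.0008, 0.0003, 0.0001
cAI, cIS, cIA = 0.0002, 0.0009, 0.0007
al, be, mu, pi_, m, r = 0.005, 0.006, 0.06, 7.0, 0.2, 0.7

def rhs(t, y):
    """Model (1) as reconstructed here: fear-modulated transmission
    al*S*A/(1+be*I), progression pi_*A from asymptomatic to symptomatic,
    and no vertical transmission (all newborns are susceptible)."""
    S, A, I = y
    fear = al * S * A / (1.0 + be * I)
    dS = r*(S + A) - m*S - cSS*S**2 - cSA*S*A - cSI*S*I - fear
    dA = -(m + pi_)*A - cAA*A**2 - cAS*S*A - cAI*A*I + fear
    dI = pi_*A - (m + mu)*I - cII*I**2 - cIS*S*I - cIA*A*I
    return [dS, dA, dI]

# Initial conditions (50); LSODA plays the role of ode15s on stiff stretches.
sol = solve_ivp(rhs, (0.0, 600.0), [10000.0, 1.0, 0.0],
                method='LSODA', rtol=1e-8, atol=1e-10)
S_end, A_end, I_end = sol.y[:, -1]
# With these parameters the trajectory should settle on the disease-free
# point E_S = ((r - m)/c_SS, 0, 0), as in Figure 1.
```

Swapping in the values of (53) should instead drive the reconstructed system toward a coexistence state, as in Figure 2.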

For model (1), in Figure 2, coexistence is obtained using the hypothetical parameter values

$$
\alpha = 0.5, \quad \beta = 0.6, \quad \mu = 0.3, \quad \pi = 1, \quad m = 0.4, \quad r = 3. \tag{53}
$$

In Figure 3, the hypothetical parameters are

$$
\alpha = 0.5, \quad \beta = 6, \quad \mu = 0.5, \quad \pi = 1, \quad m = 0.4, \quad r = 3. \tag{54}
$$

The hypothetical parameters of Figure 4 are

$$
\alpha = 0.5, \quad \beta = 0.6, \quad \mu = 0.3, \quad \pi = 4, \quad m = 2, \quad r = 3. \tag{55}
$$

Note that in the former case (Figure 2), the asymptomatics attain the highest value, followed by the susceptibles. In the second case (Figure 3), instead, the disease seems to affect less of the population: the most populated compartment is that of the susceptibles, followed by the asymptomatic individuals. Figure 4 shows an instance in which the largest population is represented by the symptomatic individuals, the second largest being the asymptomatics.

**Figure 1.** Disease-free point for model (1) obtained with parameter values (52), (51) and initial conditions (50).

**Figure 4.** Coexistence equilibrium for model (23) obtained with parameter values (55), (51) and initial conditions (50).

For the model (23) with vertical transmission, we also show three instances of the endemic equilibrium. The hypothetical parameters of Figure 5 are (51) and

$$
\alpha = 0.5, \quad \beta = 6, \quad \mu = 0.5, \quad \pi = 2, \quad m = 0.4, \quad r = 3,\tag{56}
$$

while those of Figure 6, in addition to (51), are

$$
\alpha = 0.5, \quad \beta = 0.6, \quad \mu = 0.5, \quad \pi = 2.5, \quad m = 0.4, \quad r = 3, \tag{57}
$$

and those of Figure 7, in which the asymptomatics represent the most numerous class, are

$$
\alpha = 1.6, \quad \beta = 0.6, \quad \mu = 0.3, \quad \pi = 4, \quad m = 0.4, \quad r = 3. \tag{58}
$$

The disease-free point is also attained with the following hypothetical parameters

$$
\alpha = 0.5, \quad \beta = 0.6, \quad \mu = 0.3, \quad \pi = 4, \quad m = 0.4, \quad r = 3.
$$

The resulting figure is extremely close to Figure 1 and, therefore, it is not shown.

In the case of Figure 5, the most populated compartment is the susceptibles, followed by the asymptomatic individuals. In Figure 6, the asymptomatics prevail, followed by the symptomatic individuals, so that in this case the disease has been contracted by a larger proportion of the population. Figure 7 shows, instead, the case in which the asymptomatics represent the largest class.

**Figure 5.** Coexistence equilibrium for model (23) obtained with parameter values (56), (51) and initial conditions (50).

**Figure 6.** Coexistence equilibrium for model (23) obtained with parameter values (57), (51) and initial conditions (50).

**Figure 7.** Coexistence equilibrium for model (23) obtained with parameter values (58), (51) and initial conditions (50).

#### **4. Global Behavior of the Systems**

On the basis of the previous results, comparing the feasibility and stability conditions of the various equilibria, we can conjecture the existence of bifurcations relating them. We show their presence analytically, and we can go even further, completely assessing the models' behavior.

The first step in this direction is to recall that both models' trajectories are confined to a compact set, as discussed in Sections 2.2 and 2.6. In the second place, using Sotomayor's theorem [22], we show that a chain of transcritical bifurcations ties the various systems' equilibria.

Model (1) implies that no two such equilibria may simultaneously be stable and feasible; therefore, when locally asymptotically stable, they are also globally asymptotically stable. These systems thus move from "disappearance" to the susceptible-only, i.e., disease-free, point and, finally, to the endemic case.

In order to apply Sotomayor's theorem, some preliminary calculations are needed. We slightly change our notation, denoting now by $f(S, A, I) = (f\_1(S, A, I), f\_2(S, A, I), f\_3(S, A, I))^T$ the right-hand sides of both systems (1) and (23), and by $Df$ their Jacobian.

#### *4.1. Application of Sotomayor's Theorem for Model (1)*

We now determine the second partial derivatives of *f* with respect to the variables *S*, *A* and *I*, i.e., the elements of *D*<sup>2</sup> *f* :

$$\begin{aligned}
&\frac{\partial^2 f\_1}{\partial S^2} = -2c\_{SS}\,, \quad
\frac{\partial^2 f\_1}{\partial I^2} = -\frac{2\alpha\beta^2 SA}{(1+\beta I)^3}\,, \quad
\frac{\partial^2 f\_1}{\partial S\partial A} = \frac{\partial^2 f\_1}{\partial A\partial S} = -c\_{SA} - \frac{\alpha}{1+\beta I}\,, \\
&\frac{\partial^2 f\_1}{\partial S\partial I} = \frac{\partial^2 f\_1}{\partial I\partial S} = -c\_{SI} + \frac{\alpha\beta A}{(1+\beta I)^2}\,, \quad
\frac{\partial^2 f\_1}{\partial A\partial I} = \frac{\partial^2 f\_1}{\partial I\partial A} = \frac{\alpha\beta S}{(1+\beta I)^2}\,, \quad
\frac{\partial^2 f\_2}{\partial A^2} = -2c\_{AA}\,, \\
&\frac{\partial^2 f\_2}{\partial I^2} = \frac{2\alpha\beta^2 SA}{(1+\beta I)^3}\,, \quad
\frac{\partial^2 f\_2}{\partial S\partial A} = \frac{\partial^2 f\_2}{\partial A\partial S} = -c\_{AS} + \frac{\alpha}{1+\beta I}\,, \quad
\frac{\partial^2 f\_2}{\partial S\partial I} = \frac{\partial^2 f\_2}{\partial I\partial S} = -\frac{\alpha\beta A}{(1+\beta I)^2}\,, \\
&\frac{\partial^2 f\_2}{\partial A\partial I} = \frac{\partial^2 f\_2}{\partial I\partial A} = -c\_{AI} - \frac{\alpha\beta S}{(1+\beta I)^2}\,, \quad
\frac{\partial^2 f\_3}{\partial I^2} = -2c\_{II}\,, \quad
\frac{\partial^2 f\_3}{\partial S\partial I} = \frac{\partial^2 f\_3}{\partial I\partial S} = -c\_{IS}\,, \\
&\frac{\partial^2 f\_3}{\partial A\partial I} = \frac{\partial^2 f\_3}{\partial I\partial A} = -c\_{IA}\,, \quad
\frac{\partial^2 f\_1}{\partial A^2} = \frac{\partial^2 f\_2}{\partial S^2} = \frac{\partial^2 f\_3}{\partial S^2} = \frac{\partial^2 f\_3}{\partial A^2} = \frac{\partial^2 f\_3}{\partial S\partial A} = \frac{\partial^2 f\_3}{\partial A\partial S} = 0\,.
\end{aligned}$$

Furthermore, by differentiating the components of *f* with respect to *r*, we find

$$f\_r = \left[\frac{\partial f\_1}{\partial r}, \frac{\partial f\_2}{\partial r}, \frac{\partial f\_3}{\partial r}\right]^T = \left[S + A, 0, 0\right]^T,$$

whose Jacobian, differentiating with respect to the variables *S*, *A* and *I*, is

$$Df\_r = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} . \tag{59}$$

#### 4.1.1. Transcritical Bifurcation *E*<sup>0</sup> → *ES*

Consider the equilibrium point *E*<sup>0</sup> and choose *r* as the bifurcation parameter. Evaluating the Jacobian at the equilibrium *E*0, we obtain

$$J = Df(E\_0, r) = \begin{bmatrix} r - m & r & 0 \\ 0 & -m - \pi & 0 \\ 0 & \pi & -m - \mu \end{bmatrix},$$

having the eigenvalues *λ*<sup>1</sup> = *r* − *m*, *λ*<sup>2</sup> = −*m* − *π* and *λ*<sup>3</sup> = −*m* − *μ*; thus, two eigenvalues have a negative real part and the first one vanishes by taking as the critical bifurcation value

$$r\_0 = m.\tag{60}$$

Its right *v* and left *w* eigenvectors corresponding to the zero eigenvalue are, therefore

$$v = \left[1, 0, 0\right]^T, \quad w = \left[\frac{m+\pi}{m}, 1, 0\right]^T.$$

The only nonvanishing terms in *D*<sup>2</sup> *f*(*E*0, *m*) are

$$\begin{aligned} \frac{\partial^2 f\_1}{\partial S^2} &= -2c\_{SS}, & \frac{\partial^2 f\_1}{\partial S\partial A} = \frac{\partial^2 f\_1}{\partial A\partial S} &= -c\_{SA} - \alpha, & \frac{\partial^2 f\_1}{\partial S\partial I} = \frac{\partial^2 f\_1}{\partial I\partial S} &= -c\_{SI}, \\ \frac{\partial^2 f\_2}{\partial A^2} &= -2c\_{AA}, & \frac{\partial^2 f\_2}{\partial S\partial A} = \frac{\partial^2 f\_2}{\partial A\partial S} &= -c\_{AS} + \alpha, & \frac{\partial^2 f\_2}{\partial A\partial I} = \frac{\partial^2 f\_2}{\partial I\partial A} &= -c\_{AI}, \\ \frac{\partial^2 f\_3}{\partial I^2} &= -2c\_{II}, & \frac{\partial^2 f\_3}{\partial S\partial I} = \frac{\partial^2 f\_3}{\partial I\partial S} &= -c\_{IS}, & \frac{\partial^2 f\_3}{\partial A\partial I} = \frac{\partial^2 f\_3}{\partial I\partial A} &= -c\_{IA}. \end{aligned} \tag{61}$$

Furthermore, recalling (59), we have

$$f\_r(E\_0, m) = \left[0, 0, 0\right]^T, \quad Df\_r(E\_0, m) = Df\_r\,.$$

Finally, the components of *D*<sup>2</sup> *f*(*E*0, *m*)(*v*, *v*) are

$$D^2 f\_1(E\_0, m)(v, v) = -2c\_{SS}v\_1^2 - 2(c\_{SA} + \alpha)v\_1v\_2 - 2c\_{SI}v\_1v\_3\,,$$

$$D^2 f\_2(E\_0, m)(v, v) = -2c\_{AA}v\_2^2 - 2(c\_{AS} - \alpha)v\_1v\_2 - 2c\_{AI}v\_2v\_3\,,$$

$$D^2 f\_3(E\_0, m)(v, v) = -2c\_{II}v\_3^2 - 2c\_{IS}v\_1v\_3 - 2c\_{IA}v\_2v\_3\,.$$

Thus, the three conditions required by Sotomayor's Theorem for a transcritical bifurcation are met; indeed

$$\begin{aligned} w^T f\_r(E\_0, m) &= 0, \quad w^T[Df\_r(E\_0, m)v] = w\_1 v\_1 = \frac{m+\pi}{m} \neq 0, \\ w^T\left[D^2 f(E\_0, m)(v, v)\right] &= -2c\_{SS}w\_1v\_1^2 = -\frac{2c\_{SS}(m+\pi)}{m} \neq 0. \end{aligned}$$

Hence, at $r = r\_0 = m$, there is a transcritical bifurcation through which $E\_0$ becomes $E\_S$.
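These checks can be automated. The following SymPy sketch rebuilds model (1)'s right-hand side from the derivatives listed above (our reconstruction, not the authors' code) and re-derives the three Sotomayor quantities at $E\_0$ for $r\_0 = m$:

```python
import sympy as sp

S, A, I = sp.symbols('S A I')
r, m, mu, pi_, al, be = sp.symbols('r m mu pi alpha beta', positive=True)
(cSS, cSA, cSI, cAA, cAS,
 cAI, cII, cIS, cIA) = sp.symbols('cSS cSA cSI cAA cAS cAI cII cIS cIA',
                                  positive=True)

# Model (1) right-hand side as reconstructed here (no vertical transmission).
f = sp.Matrix([
    r*(S + A) - m*S - cSS*S**2 - cSA*S*A - cSI*S*I - al*S*A/(1 + be*I),
    -(m + pi_)*A - cAA*A**2 - cAS*S*A - cAI*A*I + al*S*A/(1 + be*I),
    pi_*A - (m + mu)*I - cII*I**2 - cIS*S*I - cIA*A*I,
])
var = [S, A, I]
E0 = {S: 0, A: 0, I: 0}

# Zero-eigenvalue eigenvectors of the Jacobian at E0 for r = r0 = m.
J0 = f.jacobian(var).subs(E0).subs(r, m)
v = sp.Matrix([1, 0, 0])
w = sp.Matrix([(m + pi_)/m, 1, 0])
assert J0*v == sp.zeros(3, 1)
assert (w.T*J0).applyfunc(sp.simplify) == sp.zeros(1, 3)

# The three Sotomayor quantities for a transcritical bifurcation.
fr = f.diff(r)
Dfr = fr.jacobian(var)
assert fr.subs(E0) == sp.zeros(3, 1)                 # w^T f_r(E0, m) = 0
assert (w.T*Dfr*v)[0] == (m + pi_)/m                 # nonzero
D2fvv = sp.Matrix([(v.T*sp.hessian(f[i], var)*v)[0] for i in range(3)])
assert sp.simplify((w.T*D2fvv.subs(E0))[0] + 2*cSS*(m + pi_)/m) == 0  # nonzero
```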

4.1.2. Transcritical Bifurcation *ES* → *ESAI*

Now consider the equilibrium *E<sub>S</sub>* and choose *r* as the bifurcation parameter.

The Jacobian evaluated at *ES* is

$$J = Df(E\_S, r) = \begin{bmatrix} -\frac{c\_{SS}(m+\pi)}{\alpha - c\_{AS}} & J\_{1,2} & -\frac{c\_{SI}(m+\pi)}{\alpha - c\_{AS}} \\ 0 & J\_{2,2} & 0 \\ 0 & \pi & -\frac{(\alpha - c\_{AS})(m+\mu) + c\_{IS}(m+\pi)}{\alpha - c\_{AS}} \end{bmatrix},$$

with

$$J\_{1,2} = \frac{(c\_{SS} - c\_{AS} - c\_{SA})m + (c\_{SS} - c\_{SA} - \alpha)\pi}{\alpha - c\_{AS}}, \quad J\_{2,2} = -m - \pi + (\alpha - c\_{AS})\frac{r - m}{c\_{SS}}.$$

Its eigenvalues are

$$
\lambda\_1 = -\frac{c\_{SS}(m+\pi)}{\alpha - c\_{AS}}, \quad \lambda\_2 = J\_{2,2}, \quad \lambda\_3 = -\frac{(\alpha - c\_{AS})(m+\mu) + c\_{IS}(m+\pi)}{\alpha - c\_{AS}}.
$$

Now, *λ*<sup>2</sup> vanishes by choosing the critical value

$$r^\dagger = m + \frac{c\_{SS}(m+\pi)}{\alpha - c\_{AS}},\tag{62}$$

while the remaining ones are negative, if $\alpha > c\_{AS}$; otherwise, if $\alpha < c\_{AS}$, the first one is positive. The left eigenvector is $w = [0, 1, 0]^T$, while the right one is

$$v = \begin{bmatrix} \frac{1}{c\_{SS}(m+\pi)}\left(\frac{((c\_{SS} - c\_{AS} - c\_{SA})m + (c\_{SS} - c\_{SA} - \alpha)\pi)((\alpha - c\_{AS})(m+\mu) + c\_{IS}(m+\pi))}{(\alpha - c\_{AS})\pi} - c\_{SI}(m+\pi)\right) \\ \frac{(\alpha - c\_{AS})(m+\mu) + c\_{IS}(m+\pi)}{(\alpha - c\_{AS})\pi} \\ 1 \end{bmatrix}.$$

Recalling (59), we further have

$$f\_r\left(E\_S, r^{\dagger}\right) = \left[\frac{m+\pi}{\alpha - c\_{AS}}, 0, 0\right]^T, \quad Df\_r\left(E\_S, r^{\dagger}\right) = Df\_r\,.$$

However, in spite of having $w^T f\_r\left(E\_S, r^{\dagger}\right) = 0$, Sotomayor's Theorem is inconclusive because the second condition is not satisfied, namely $w^T\left[Df\_r\left(E\_S, r^{\dagger}\right)v\right] = 0$.

**Remark 1.** *This calculation is very interesting, because in spite of the fact that the analysis is undecided concerning the existence of the transcritical bifurcation, the simulations below will show that it does indeed take place. Thus, it is an example of the fact that the conditions in Sotomayor's Theorem are sufficient but not necessary.*

#### *4.2. Application of Sotomayor's Theorem for Model (23)*

The proof closely follows that of model (1); we outline only the basic changes. Once again, we use the same notation for *f* and *D f*, which here denote the right-hand side and the Jacobian of (23). The only changes in the Jacobian are the elements

$$\frac{\partial f\_1}{\partial A} = -c\_{SA}S - \frac{\alpha S}{1 + \beta I}, \quad \frac{\partial f\_2}{\partial A} = r - m - 2c\_{AA}A - c\_{AS}S - c\_{AI}I + \frac{\alpha S}{1 + \beta I} - \pi\,.$$

It further turns out that *D*<sup>2</sup> *f* is the same as the one of model (1). Here, instead, we find

$$f\_r = \left[\frac{\partial f\_1}{\partial r}, \frac{\partial f\_2}{\partial r}, \frac{\partial f\_3}{\partial r}\right]^T = \left[S, A, 0\right]^T,$$

whose Jacobian—differentiating with respect to the variables *S*, *A* and *I*—is now

$$Df\_r = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

#### 4.2.1. Transcritical Bifurcation *E*<sup>0</sup> → *ES*

We consider the equilibrium point *E*<sup>0</sup> and we choose *r* as the bifurcation parameter. By evaluating the Jacobian, we obtain

$$J = Df(E\_0, r) = \begin{bmatrix} r - m & 0 & 0 \\ 0 & r - m - \pi & 0 \\ 0 & \pi & -m - \mu \end{bmatrix},$$

having the eigenvalues $\lambda\_1 = r - m$, $\lambda\_2 = r - m - \pi$ and $\lambda\_3 = -m - \mu$, the last two being negative for $r < m + \pi$. The first one vanishes by taking, again, $r\_0 = m$ as in (60), the same critical bifurcation threshold as for model (1). The right and left eigenvectors are, respectively

$$v = \left[1, 0, 0\right]^T, \quad w = \left[1, 0, 0\right]^T.$$

The elements of *D*<sup>2</sup> *f*(*E*<sub>0</sub>, *r*<sub>0</sub>) are the same as for the first model, because *D*<sup>2</sup> *f* is the same; consequently, so are the components of

$$D^2f(E\_0, r\_0)(v, v) = \left[D^2f\_1(E\_0, r\_0)(v, v), D^2f\_2(E\_0, r\_0)(v, v), D^2f\_3(E\_0, r\_0)(v, v)\right]^T.$$

Furthermore, recalling (59), we have

$$f\_r(E\_0, r\_0) = \begin{bmatrix} 0, 0, 0 \end{bmatrix}^T, \quad Df\_r(E\_0, r\_0) = Df\_r \ .$$

Thus, the three conditions required by Sotomayor's Theorem are met; indeed

$$\begin{aligned} w^T f\_r(E\_0, m) &= 0, \quad w^T[Df\_r(E\_0, m)v] = w\_1 v\_1 = 1 \neq 0, \\ w^T\left[D^2 f(E\_0, m)(v, v)\right] &= -2c\_{SS}w\_1v\_1^2 = -2c\_{SS} \neq 0. \end{aligned}$$

Hence, at *r* = *r*<sup>0</sup> = *m*, there is a transcritical bifurcation for which *E*<sup>0</sup> becomes *ES*.

4.2.2. Transcritical Bifurcation *ES* → *ESAI*

We consider the equilibrium point *ES* and we choose *r* as the bifurcation parameter. From the evaluation of the Jacobian, for

$$
c\_{AS} \neq c\_{SS} + \alpha \tag{63}
$$

we obtain

$$J = Df(E\_S, r) = \begin{bmatrix} -\frac{c\_{SS}\pi}{c\_{SS} - c\_{AS} + \alpha} & -\frac{(c\_{AS} + \alpha)\pi}{c\_{SS} - c\_{AS} + \alpha} & -\frac{c\_{SI}\pi}{c\_{SS} - c\_{AS} + \alpha} \\ 0 & J\_{2,2} & 0 \\ 0 & \pi & -m - \mu - \frac{c\_{IS}\pi}{c\_{SS} - c\_{AS} + \alpha} \end{bmatrix},$$

having the eigenvalues

$$
\lambda\_1 = -\frac{c\_{SS}\pi}{c\_{SS} - c\_{AS} + \alpha}, \quad \lambda\_2 = J\_{2,2}, \quad \lambda\_3 = -m - \mu - \frac{c\_{IS}\pi}{c\_{SS} - c\_{AS} + \alpha}\,.
$$

The second eigenvalue vanishes by taking as the critical bifurcation threshold

$$
r^\ddagger = m + \frac{c\_{SS}\pi}{c\_{SS} - c\_{AS} + \alpha} > m, \tag{64}
$$

the latter inequality following by requiring *λ*<sup>1</sup> < 0. Note that *r*† for model (1) and *r*‡ for model (23) differ; compare (62) and (64). The right and left eigenvectors are, respectively

$$v = \begin{bmatrix} \frac{c\_{SI}}{c\_{SS}} - \frac{c\_{AS} + \alpha}{c\_{SS}}\left(\frac{m + \mu}{\pi} + \frac{c\_{IS}}{c\_{SS} - c\_{AS} + \alpha}\right) \\ \frac{m + \mu}{\pi} + \frac{c\_{IS}}{c\_{SS} - c\_{AS} + \alpha} \\ 1 \end{bmatrix}, \quad w = \left[0, 1, 0\right]^T.$$

The nonvanishing elements of $D^2 f\left(E\_S, r^{\ddagger}\right)$ are exactly those of (61), with the exception of

$$\frac{\partial^2 f\_1}{\partial A\partial I} = \frac{\partial^2 f\_1}{\partial I\partial A} = \frac{\alpha\beta\pi}{c\_{SS} - c\_{AS} + \alpha}, \quad \frac{\partial^2 f\_2}{\partial A\partial I} = \frac{\partial^2 f\_2}{\partial I\partial A} = -c\_{AI} - \frac{\alpha\beta\pi}{c\_{SS} - c\_{AS} + \alpha}\,.$$

Further, recalling once again (59), we have

$$f\_r\left(E\_S, r^{\ddagger}\right) = \left[\frac{\pi}{c\_{SS} - c\_{AS} + \alpha}, 0, 0\right]^T, \quad Df\_r\left(E\_S, r^{\ddagger}\right) = Df\_r\,.$$

Finally, the components of

$$D^2 f\left(E\_S, r^{\ddagger}\right)(v, v) = \left[D^2 f\_1\left(E\_S, r^{\ddagger}\right)(v, v), D^2 f\_2\left(E\_S, r^{\ddagger}\right)(v, v), D^2 f\_3\left(E\_S, r^{\ddagger}\right)(v, v)\right]^T$$

are

$$\begin{split} D^2 f\_1\left(E\_S, r^{\ddagger}\right)(v, v) &= -2c\_{SS}v\_1^2 - 2(c\_{SA} + \alpha)v\_1v\_2 - 2c\_{SI}v\_1v\_3 + \frac{2\alpha\beta\pi}{c\_{SS} - c\_{AS} + \alpha}v\_2v\_3, \\ D^2 f\_2\left(E\_S, r^{\ddagger}\right)(v, v) &= -2c\_{AA}v\_2^2 - 2(c\_{AS} - \alpha)v\_1v\_2 - 2\left(c\_{AI} + \frac{\alpha\beta\pi}{c\_{SS} - c\_{AS} + \alpha}\right)v\_2v\_3, \\ D^2 f\_3\left(E\_S, r^{\ddagger}\right)(v, v) &= -2c\_{II}v\_3^2 - 2c\_{IS}v\_1v\_3 - 2c\_{IA}v\_2v\_3. \end{split}$$

The first condition required by Sotomayor's Theorem, $w^T f\_r\left(E\_S, r^{\ddagger}\right) = 0$, is satisfied. We would also need the nonvanishing of the following quantities:

$$w^T\left[Df\_r\left(E\_S, r^{\ddagger}\right)v\right] = v\_2 = \frac{m + \mu}{\pi} + \frac{c\_{IS}}{c\_{SS} - c\_{AS} + \alpha}, \tag{65}$$

$$\begin{split} w^T\left[D^2 f\left(E\_S, r^{\ddagger}\right)(v, v)\right] &= D^2 f\_2\left(E\_S, r^{\ddagger}\right)(v, v) = -2c\_{AA}\left(\frac{m + \mu}{\pi} + \frac{c\_{IS}}{c\_{SS} - c\_{AS} + \alpha}\right)^2 \\ &\quad - 2(c\_{AS} - \alpha)\left(\frac{c\_{SI}}{c\_{SS}} - \frac{c\_{AS} + \alpha}{c\_{SS}}\left(\frac{m + \mu}{\pi} + \frac{c\_{IS}}{c\_{SS} - c\_{AS} + \alpha}\right)\right)\left(\frac{m + \mu}{\pi} + \frac{c\_{IS}}{c\_{SS} - c\_{AS} + \alpha}\right) \\ &\quad - 2\left(c\_{AI} + \frac{\alpha\beta\pi}{c\_{SS} - c\_{AS} + \alpha}\right)\left(\frac{m + \mu}{\pi} + \frac{c\_{IS}}{c\_{SS} - c\_{AS} + \alpha}\right). \end{split} \tag{66}$$

The first one is satisfied and shows that either a transcritical or a pitchfork bifurcation is possible. The remaining ones are needed for a transcritical bifurcation. Thus, if these quantities are different from 0 for *r* = *r*‡, there is a transcritical bifurcation from *ES* to *ESAI*.

If (66) vanishes, we investigate the situation further. We observe that (65) is zero if and only if

$$
\pi = -\frac{(m+\mu)(c\_{SS} - c\_{AS} + \alpha)}{c\_{IS}},
\tag{67}
$$

while (66) is zero if and only if either (67) holds or

$$\begin{split} c\_{AA} &= \frac{\pi(c\_{SS} - c\_{AS} + \alpha)}{(m + \mu)(c\_{SS} - c\_{AS} + \alpha) + \pi c\_{IS}} \\ &\quad \times \left\{(\alpha - c\_{AS})\left[\frac{c\_{SI}}{c\_{SS}} - \frac{c\_{AS} + \alpha}{c\_{SS}}\left(\frac{m + \mu}{\pi} + \frac{c\_{IS}}{c\_{SS} - c\_{AS} + \alpha}\right)\right] - c\_{AI} - \frac{\alpha\beta\pi}{c\_{SS} - c\_{AS} + \alpha}\right\}, \end{split} \tag{68}$$

and, in this second case, we must assume

$$
\pi \neq -\frac{(m+\mu)(c\_{SS} - c\_{AS} + \alpha)}{c\_{IS}}.\tag{69}
$$

Now, because *w* = [0, 1, 0] *<sup>T</sup>*, the third derivatives simplify, namely

$$w^T\left[D^3 f(E\_S, r^{\ddagger})(v, v, v)\right] = D^3 f\_2(E\_S, r^{\ddagger})(v, v, v)\,.$$

The only nonvanishing third partial derivatives of *f*<sup>2</sup> are

$$\frac{\partial^3 f\_2}{\partial I^3} = -\frac{6\alpha\beta^3 SA}{(1 + \beta I)^4}\,,$$

$$\frac{\partial^3 f\_2}{\partial I^2\partial S} = \frac{\partial^3 f\_2}{\partial S\partial I^2} = \frac{\partial^3 f\_2}{\partial I\partial S\partial I} = \frac{2\alpha\beta^2 A}{(1+\beta I)^3}, \quad \frac{\partial^3 f\_2}{\partial I^2\partial A} = \frac{\partial^3 f\_2}{\partial A\partial I^2} = \frac{\partial^3 f\_2}{\partial I\partial A\partial I} = \frac{2\alpha\beta^2 S}{(1+\beta I)^3}$$

and

$$\frac{\partial^3 f\_2}{\partial S \partial A \partial I} = \frac{\partial^3 f\_2}{\partial S \partial I \partial A} = \frac{\partial^3 f\_2}{\partial A \partial S \partial I} = \frac{\partial^3 f\_2}{\partial A \partial I \partial S} = \frac{\partial^3 f\_2}{\partial I \partial S \partial A} = \frac{\partial^3 f\_2}{\partial I \partial A \partial S} = -\frac{a\beta}{(1+\beta I)^2}.$$
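These third partial derivatives all stem from the incidence term *aSA*/(1 + *βI*) in *f*<sub>2</sub>. As a sanity check, the mixed partial ∂<sup>3</sup> *f*<sub>2</sub>/∂*S*∂*A*∂*I* can be verified by central finite differences on just that term (a sketch, with illustrative parameter values):

```python
a, beta = 0.5, 1.0  # illustrative values for the contact rate and fear weight

def g(S, A, I):
    # incidence term of f2 responsible for the nonvanishing third derivatives
    return a * S * A / (1.0 + beta * I)

def d3_SAI(S, A, I, h=1e-3):
    # nested central finite differences for d^3 g / dS dA dI
    def dI(S, A):
        return (g(S, A, I + h) - g(S, A, I - h)) / (2 * h)
    def dAI(S):
        return (dI(S, A + h) - dI(S, A - h)) / (2 * h)
    return (dAI(S + h) - dAI(S - h)) / (2 * h)

S0, A0, I0 = 2.0, 1.5, 0.7
numeric = d3_SAI(S0, A0, I0)
exact = -a * beta / (1.0 + beta * I0) ** 2   # closed form from the text
print(numeric, exact)
```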

Of these, upon evaluation at (*ES*,*r*‡), the only nonvanishing ones are those in the last two groups, namely

$$\frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial I^2 \partial A} = \frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial A \partial I^2} = \frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial I \partial A \partial I} = \frac{2a\beta^2 \pi}{c_{SS}-c_{AS}+a}$$

and

$$\begin{split} \frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial S \partial A \partial I} &= \frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial S \partial I \partial A} = \frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial A \partial S \partial I} = \frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial A \partial I \partial S} \\ &= \frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial I \partial S \partial A} = \frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial I \partial A \partial S} = -a\beta. \end{split}$$

Consequently

$$\begin{split} w^{T}\left[D^{3}f(E_S, r^{\ddagger})(v,v,v)\right] &= D^{3}f_{2}(E_S, r^{\ddagger})(v,v,v) = \sum_{j_1,j_2,j_3=1}^{3} \frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial x_{j_1}\partial x_{j_2}\partial x_{j_3}} v_{j_1} v_{j_2} v_{j_3} \\ &= 3\frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial A \partial I^2} v_2 v_3^2 + 6\frac{\partial^3 f_2(E_S, r^{\ddagger})}{\partial S \partial A \partial I} v_1 v_2 v_3 = 6a\beta v_2 v_3\left(\frac{\beta\pi}{c_{SS}-c_{AS}+a} v_3 - v_1\right). \end{split}$$

Substituting the components of *v* into this expression, we explicitly obtain

$$\begin{split} w^{T}\left[D^{3}f(E_S, r^{\ddagger})(v,v,v)\right] &= 6a\beta\left(\frac{m+\mu}{\pi} + \frac{c_{IS}}{c_{SS}-c_{AS}+a}\right) \\ &\quad\times\left[\frac{\beta\pi}{c_{SS}-c_{AS}+a} - \frac{c_{SI}}{c_{SS}} + \frac{c_{AS}+a}{c_{SS}}\left(\frac{m+\mu}{\pi} + \frac{c_{IS}}{c_{SS}-c_{AS}+a}\right)\right]. \end{split} \tag{70}$$

Still assuming (63), we observe that (70) is zero if and only if (67) holds or, alternatively,

$$\beta = \frac{c_{SI}c_{SS}\pi - (c_{AS}-a)c_{SI}\pi - (m+\mu)(c_{AS}+a)(c_{SS}-c_{AS}+a) - (c_{AS}+a)c_{IS}\pi}{c_{SS}\pi^2}. \tag{71}$$

In summary, for the case (63), since Sotomayor's Theorem gives only sufficient conditions, we cannot conclude that the bifurcation from *ES* at *r* = *r*‡ is a saddle-node. Instead, a sufficient condition for a transcritical bifurcation is that neither (67) nor (68) holds; alternatively, a pitchfork bifurcation occurs if neither (67) nor (71) holds but (68) is verified.

We finally also investigate the case in which (63) does not hold, i.e., *cSS* − *cAS* + *α* = 0. The Jacobian evaluated at *ES* gives *J*<sub>22</sub> = −*π*. Taking now *π* as the bifurcation parameter, with threshold value *π*<sub>0</sub> = 0, we find the right and left eigenvectors *v* = [*v*<sub>1</sub>, *v*<sub>2</sub>, 0]<sup>T</sup> and *w* = [0, 1, 0]<sup>T</sup>, with

$$v\_1 = -\frac{c\_{SS}}{c\_{SA} + \alpha} < 0, \quad v\_2 = 1 > 0.$$

In addition, calculating the derivative with respect to the bifurcation parameter gives *f<sub>π</sub>* = [0, −*A*, *A*]<sup>T</sup>, so that *f<sub>π</sub>*(*ES*, *π*) = [0, 0, 0]<sup>T</sup> and *w*<sup>T</sup> *f<sub>π</sub>*(*ES*, *π*) = 0. Upon evaluation of the Jacobian *D f<sub>π</sub>*, it also follows that *w*<sup>T</sup>*D f<sub>π</sub>*(*ES*, *π*)*v* = [0, −1, 0]<sup>T</sup>*v* = −*v*<sub>2</sub> < 0. Since *w*<sub>1</sub> = *w*<sub>3</sub> = 0, it is enough to calculate just the partial derivatives of the second component of *f*:

$$w^T D^2 f(v, v) = \frac{\partial^2 f\_2}{\partial S^2} v\_1^2 + 2 \frac{\partial^2 f\_2}{\partial A \partial S} v\_1 v\_2 + \frac{\partial^2 f\_2}{\partial A^2} v\_2^2 = -2 c\_{AS} v\_1 v\_2 - 2 c\_{AA} A v\_2^2$$

so that

$$w^T D^2 f(v, v)|\_{\left(E\_S, \pi\right)} = -2c\_{AS} v\_1 v\_2 > 0$$

and, therefore, since the threshold value *π*<sub>0</sub> = 0 lies on the boundary of the biologically feasible range *π* > 0, also in this case no transcritical bifurcation actually occurs.

**Remark 2.** *Note that if we try to apply the same technique to the situation α − cAS = 0 for (1), we obtain J<sub>22</sub> = −m − π, so that in this case the threshold value for the bifurcation parameter π would be negative, π<sub>0</sub><sup>(1)</sup> = −m < 0, and, therefore, not biologically feasible.*

#### *4.3. Numerical Simulations for the Bifurcations*

We also show numerically the two transcritical bifurcation diagrams, indicating in particular that the coexistence equilibrium of both models originates from the disease-free equilibrium, which, in turn, arises from the origin when the population reproduction rate overcomes its mortality rate, as also found in the theoretical analysis. Note that Figures 8 and 9 appear qualitatively the same, but their vertical axes differ considerably.

**Figure 8.** (**Left**): transcritical bifurcations for model (1) obtained with parameter values (72), (55) and initial conditions (50). (**Right**): zoom of the left image showing two bifurcations as *r* changes; the first from *E*<sup>0</sup> to *ES* when *r* = *r*<sup>0</sup> = 2 and the second from *ES* to *ESAI* when *r* = *r*† = 2.4898.

**Figure 9.** (**Left**): transcritical bifurcations for model (23) obtained with parameter values (72), (55) and initial conditions (50). (**Right**): zoom of the left image showing two bifurcations as *r* changes; the first one from *E*<sup>0</sup> to *ES* when *r* = *r*<sup>0</sup> = 2 and the second from *ES* to *ESAI* when *r* = *r*‡ = 2.3019.

The parameter values used for the transcritical bifurcation diagrams of both models (1) and (23), still using (55), are:

$$\begin{aligned} c_{SS} = 0.04, \quad c_{AA} = 0.05, \quad c_{II} = 0.06, \quad c_{SA} = 0.08, \quad c_{SI} = 0.03, \\ c_{AS} = 0.01, \quad c_{AI} = 0.02, \quad c_{IS} = 0.09, \quad c_{IA} = 0.07. \end{aligned} \tag{72}$$

#### **5. Discussion**

This investigation has been prompted by the recent COVID-19 pandemic, in which, at least at its outbreak, the role of asymptomatic individuals was apparently fundamental. We consider the general population response to such an event. The now classical Capasso–Serio model [9] was the first to encompass this feature. Although more generally formulated, it also contains three compartments: susceptibles, infected and removed.

The specific form for the model [9] that we adopt in our simulations for comparison purposes is the following:

$$\begin{array}{rcl} \frac{dS}{dt} &=& -\frac{\alpha SI}{1 + \beta I^2} \\ \frac{dI}{dt} &=& \frac{\alpha SI}{1 + \beta I^2} - \gamma I \\ \frac{dR}{dt} &=& \gamma I. \end{array} \tag{73}$$
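For reference, (73) can be integrated with a classical fourth-order Runge–Kutta scheme; the following sketch is self-contained, with illustrative parameter and initial values (not those of the figures):

```python
def capasso_serio(alpha=0.5, beta=1.0, gamma=0.05, S0=0.97, I0=0.03, R0=0.0,
                  t_end=100.0, dt=0.01):
    """Integrate model (73) with classical RK4; returns the final (S, I, R)."""
    def rhs(y):
        S, I, R = y
        inc = alpha * S * I / (1.0 + beta * I**2)   # saturated incidence of [9]
        return (-inc, inc - gamma * I, gamma * I)

    y = (S0, I0, R0)
    for _ in range(int(t_end / dt)):
        k1 = rhs(y)
        k2 = rhs(tuple(y[i] + 0.5 * dt * k1[i] for i in range(3)))
        k3 = rhs(tuple(y[i] + 0.5 * dt * k2[i] for i in range(3)))
        k4 = rhs(tuple(y[i] + dt * k3[i] for i in range(3)))
        y = tuple(y[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return y

S, I, R = capasso_serio()
print(S, I, R)  # total population is conserved
```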

As already mentioned in the Introduction, the SAI models (1) and (23) proposed here are, from the epidemiological viewpoint, an extension of the classical SI model, or of an SIR model in which *R* stands for removed rather than recovered. Since removed individuals do not appear in the SI case, (1) and (23) share, in a sense, its properties. Introducing the asymptomatics *A* in (1) and (23), the infected are split between *A* and the symptomatics *I*. System (1) or (23) can be compared with the Capasso–Serio model by means of the following matches:

*S* : (1) → *S* : (73) (the susceptible classes),
*A* : (1) → *I* : (73) (the classes that can transmit the disease),
*I* : (1) → *R* : (73) (removed from circulation and unable to transmit the disease).

In (1) and (23), the *I* are recognized as disease carriers. In order to possibly avoid the disease, the susceptibles take the size of the symptomatic class *I* as an index by which to measure the reduction in their contacts with all other individuals. In (73), instead, both *I* and *R* are recognized as disease carriers, but the *R* are isolated and cannot produce new cases of the disease; here, the contact reductions are based on the size of the infected class *I*. Both *I* for (1) and (23), as well as *R* for (73), represent sinks for the dynamical system. In the comparison between (1) (or (23)) and (73), only the people that can spread the disease are relevant, respectively *A* and *I*. However, the important point is that people can only react to the infected they see, i.e., *I* in both models. Note, indeed, that in spite of the above matching among compartments, we cannot completely identify the asymptomatics *A* of our case with the infected in [9], nor our symptomatic individuals *I* with the removed of [9], because the "fear" response function in our model depends inversely on the symptomatics, while in [9] it depends on the infected class. In [9], this dependence has to be quadratic to push the disease transmission to zero, meaning a large contact reduction for a large number of infected. There would (almost) be a symmetry if, in [9], the fear were induced by the removed. However, it is clearly assumed in [9] that in such a case the infected are recognizable as disease carriers, in contrast to the asymptomatics of the model (1) (or (23)), and, therefore, are seen by the susceptibles as potential contagion sources.

We now compare the two models, ours and [9], in several situations. Firstly, the full cases, respectively SAI and SIR. In the following figures, we show comparisons between these compartments in [9] and in our models. Note that, to make the comparison fair, in our models we set all demographic parameters of type *cEB*, with *E*, *B* ∈ {*S*, *I*, *A*}, to zero, because in [9] no demographics are present. In this case, the models (1) and (23) coincide, so from now on we can refer to just one of them. The remaining hypothetical reference parameter values are the following:

$$
\alpha = 0.5, \quad \beta = 1, \quad \mu = 0, \quad m = 0, \quad r = 0. \tag{74}
$$
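The comparison runs can be reproduced, in sketch form, with a simple Euler integration. Note that the right-hand side below is the no-demographics reduction reconstructed from the text (incidence *αSA*/(1 + *βI*), progression *πA*), not systems (1) and (23) copied verbatim:

```python
def sai_no_demographics(alpha=0.5, beta=1.0, pi=5.0,
                        S0=0.97, A0=0.03, I0=0.0, t_end=200.0, dt=0.01):
    """Euler integration of the no-demographics SAI reduction (an assumption
    reconstructed from the text, not the full systems (1)/(23)):
    S' = -alpha*S*A/(1+beta*I),  A' = alpha*S*A/(1+beta*I) - pi*A,  I' = pi*A."""
    S, A, I = S0, A0, I0
    for _ in range(int(t_end / dt)):
        inc = alpha * S * A / (1.0 + beta * I)  # fear-damped incidence
        S, A, I = S - dt * inc, A + dt * (inc - pi * A), I + dt * pi * A
    return S, A, I

# pi > alpha: a sizeable fraction of susceptibles is preserved
S_hi, A_hi, I_hi = sai_no_demographics(alpha=0.5, pi=5.0)
# alpha > pi: the susceptibles are essentially depleted
S_lo, A_lo, I_lo = sai_no_demographics(alpha=5.0, pi=0.5)
print(S_hi, S_lo)
```

The two runs mirror the regimes of Figures 10 and 12: a progression rate above the contact rate shields most susceptibles, while the reverse setting depletes them.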

In both Figures 10 and 11, while in the classical model (73) ultimately almost the whole population is affected, a good number of susceptibles are preserved in the SAI model (1). For the (73) model, the higher the removal rate *γ*, the faster the disease affects the population, Figure 10. Instead, for the SAI model, the higher the progression rate *π*, the higher the number of susceptibles that are preserved from the disease. A similar effect is noted if the disease contact rate *α* increases, Figure 11.

**Figure 10.** Here, the disease transmission rate is fixed and lower than the progression to the symptomatic/removed classes, with *α* = 0.5 < *π* = *γ* ∈ [1, ... , 10]. Comparison between (1) (or equivalently (23)) on the left column, and the Capasso–Serio model (73) on the right column, in terms of the progression from asymptomatic to symptomatic *π* for (1) (as well as (23)) and of the removal rate *γ* for (73), with parameter values (74) and initial conditions (50). Left frame: the simulations to show the settling of the systems. Right frame: the blow up of the initial instants to better show the transients.

**Figure 11.** Here, *π* = *γ* = 5. Comparison between (1) (or, equivalently, (23)) on the left column, and the Capasso–Serio model (73) on the right column, in terms of the disease transmission rate *α* ∈ [0.1, ... , 1.0] so that, again, it is below the progression to symptomatic or removal rates, *α* < *π* = *γ* with parameter values (74) and initial conditions (50). Left frame: the simulations to show the settling of the systems. Right frame: the blow up of the initial instants to better show the transients.

Figure 12 contains the simulation of the opposite conditions: a transmission rate higher than the progression/removal rates. In this situation, the susceptibles of the SAI model (1) are quickly depleted and the whole population quickly becomes symptomatic. For (73), this occurs much more slowly, and all the more slowly the smaller the transmission rate.

We next compare the two types of models as different versions of the SI model, with just asymptomatics in (1) and recognizable infected for (73). We, thus, take *π* = *γ* = 0 to prevent progression respectively to symptomatics or removed. Figure 13 contains the results of the simulations in this case. In the case of model (73), the *R* compartment is initially empty and, in this case, clearly remains empty throughout the simulation. For the SAI model, there is no progression to symptomatics and, also, this compartment is empty. It is interesting to note that the lower values of *α* once again have a delaying effect on the epidemics' propagation for (73), the more marked the lower the value of the rate *α*. However, in the SAI model, everyone is very quickly affected and the susceptibles are quickly depleted. The important remark is that the behavior remains in agreement with the ones found in the former, Figure 12, since *π* = 0 < *α*.

We also compared the two models (1) and (73), interpreting the former as an extension of the SI model, allowing in it two classes of individuals affected by the disease: asymptomatic and symptomatic. This is performed by allowing progression from *A* to *I*, while there is no removal in the classical model. Hence, *π* ≠ 0 and *γ* = 0. Figure 14 shows the results. It is clearly seen that for *α* < *π*, the SAI model again preserves part of the susceptibles, while this does not occur for (73), and for *α* > *π* in both models the whole population is affected.

We finally consider a less relevant simulation, for the reverse case *π* = 0 and *γ* = 0.05. This is not a fair comparison as on one hand we have an SA model (the SI model with all asymptomatics) versus a full SIR model. In the former case, everybody becomes asymptomatic and in the latter, almost everyone also contracts the disease, within a timespan that is longer the lower the contact rate. The results are not shown, as they coincide with Figure 13.

**Figure 12.** Here, we illustrate the case 0 = *π* < *α* ∈ [0.1 ... , 1.0], *γ* = 0.05. In this case, in the SAI model, the whole population is quickly affected, while for (73), the disease propagates at a lower speed with a lower transmission rate. In the long run, here as well, the susceptible class is depleted. Left frame: the simulations to show the settling of the systems. Right frame: the time interval is shorter to better show the transients in the SAI model. Other parameter values are (74) and initial conditions (50).

**Figure 13.** Here, *π* = *γ* = 0, so that both systems become SI models. Comparison between (1) (or, equivalently, (23)) on the left column, and the Capasso–Serio model (73), on the right column. Left frame: *α* ∈ [10, ... , 100]. Right frame: *α* ∈ [0.1, ... , 1.0]. The other parameter values are given in (74) and initial conditions (50).

**Figure 14.** Here, *π* = 5, *γ* = 0 and *α* ∈ [1.0, 10]. It is clearly seen that for *α* < 5 in the SAI model the susceptibles are preserved, while for transmission rates higher than this threshold, *α* > *π* = 5, all individuals eventually become symptomatic. The same occurs in the (73) model independently of *α*, at a lower pace. Other parameter values are (74) and initial conditions (50).

The simulations for model (1), or equivalently (23), show a much higher impact of the disease. For an increasing disease contact rate, the susceptibles are greatly reduced and the symptomatic people rise to high values, Figure 11; instead, the asymptomatic ones quickly vanish. Higher progression rates from asymptomatic to symptomatic are beneficial, because they increase the number of susceptibles and considerably reduce the symptomatic individuals.

#### **6. Conclusions**

In this paper, we analysed a simple disease transmission system with some demographic features. The illness is assumed to develop at first in an asymptomatic form. The model accounts for epidemic-induced fear in the population, through which measures are taken to reduce contacts. The main novelty is that the susceptibles respond not to the number of asymptomatic infected, but only to the size of the symptomatic compartment.

The demographic part of the model accounts for inter- and intraspecific pressures among the various compartments, but this is not symmetric. Specifically, such pressure could be reduced in the class of symptomatic infected *I*, because they are known to be sick and are, therefore, supposedly being cared for. Alternatively, it could be higher, meaning that, being debilitated by the disease, they feel the competition of the other compartments more strongly. They, in turn, exert a pressure on the other classes; this could be interpreted, e.g., as a burden, namely the costs for the society to hospitalize them. This is very approximate and could be modified and expanded, if needed.

However, our main focus lies on the epidemiology. The susceptibles become infected by the asymptomatics in a mild form, migrating, indeed, into the asymptomatic class. This could be criticized and improved, but it is essential in order to compare the results with the classical model (73): considering other infection mechanisms, leading directly from susceptibles to symptomatic individuals, would significantly change the model and make the comparison less fair. In this way, both in (1) and in (23), because in the absence of demographics they coincide, the transition between compartments is the same as that used in (73). Thus, the simulations' results can be compared in an adequate way. There is only a minor mathematical change of a technical nature: the response function depends inversely on 1 + *βI*<sup>2</sup> in (73), which is necessary to push it to zero for large values of *I*, while for the SAI case it is enough for it to be inversely proportional to 1 + *βI*, in view of the fact that no *I* appears in the numerator.

The main finding in the simulations is that the SAI model introduced here, in spite of being an SI-type model with the infected individuals split between two classes, asymptomatic and symptomatic, may in the proper situation prevent some susceptibles from contracting the disease, in contrast to the classical SI model. Specifically, this occurs if the progression rate from asymptomatic to symptomatic is above the contact rate. The progression rate thus plays a fundamental role. The explanation lies in the fact that, for a high progression rate, the asymptomatic class is quickly depleted, so that the symptomatic class reaches high numbers quickly. As a consequence, the transmission rate in the SAI model quickly approaches zero, because the denominator grows and the numerator is reduced. Specifically, for the single susceptible, the infected washout rate should exceed their recruitment rate, i.e.,

$$\frac{\alpha I}{1+\beta I} < \pi.$$

Thus, the minimum weight that the individuals should give to the information about the symptomatics can be assessed by finding the critical threshold *β*†:

$$
\beta^{\dagger} = \frac{\alpha}{\pi} - \frac{1}{I},
$$

where *I* is the number of observed symptomatics. In this way, for *β* > *β*† the ratio of transmission to progression rates falls below one, and a number of susceptibles are preserved from getting the disease, the more so the farther *β* lies above the critical threshold. The transmission rate of model (73), instead, contains the infected *I* also in the numerator. Thus, although the denominator grows quickly because it contains the term 1 + *βI*<sup>2</sup>, the transmission rate reaches zero at a slower pace than in the SAI model. This phenomenon represents an instance of the well-known fact that diseases that severely affect individuals preserve the whole community, while they significantly impact the population if they are mild at the individual level.
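The threshold formula follows by solving *αI*/(1 + *βI*) = *π* for *β*; a minimal numerical sketch with illustrative values:

```python
alpha, pi, I = 0.6, 0.3, 10.0         # illustrative rates and observed symptomatics

beta_dagger = alpha / pi - 1.0 / I    # critical weight from the text
rate_at_threshold = alpha * I / (1.0 + beta_dagger * I)
print(beta_dagger, rate_at_threshold)  # the transmission rate equals pi at the threshold

# any weight above the threshold pushes the transmission rate below pi
assert alpha * I / (1.0 + (beta_dagger + 0.5) * I) < pi
```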

This analysis shows that determining whether a disease is asymptomatic and assessing the progression and transmission rates prove fundamental for its possible containment. In addition, in the presence of asymptomatic diseased individuals, the individual protection measures should have a higher impact, measured by a larger weight coefficient *β*, when just a few symptomatics appear in the population. The simulations of Figures 10 and 11 also help in quantifying the disease impact on the population, provided a reliable measure of the contact rate *α*, as well as of the transition rate to symptomatic *π*, exists.

Overall, this investigation reveals the importance of properly assessing these rates as soon as possible. It also stresses that accounting for asymptomatics in the individual response as well as in the epidemic control is of utmost importance.

**Author Contributions:** Both authors have contributed to the paper in each and every one of its parts. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was partially supported by the local research project "Metodi numerici per l'approssimazione e le scienze della vita" of the Dipartimento di Matematica "Giuseppe Peano", Università di Torino.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors dedicate the paper to, and congratulate, Delfim F. M. Torres on his 50th birthday. The authors thank the referees for their valuable comments.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Minimum Energy Problem in the Sense of Caputo for Fractional Neutral Evolution Systems in Banach Spaces**

**Zoubida Ech-chaffani 1,†, Ahmed Aberqi 2,†, Touria Karite 3,\*,† and Delfim F. M. Torres 4,†**


**Abstract:** We investigate a class of fractional neutral evolution equations on Banach spaces involving Caputo derivatives. Main results establish conditions for the controllability of the fractional-order system and conditions for existence of a solution to an optimal control problem of minimum energy. The results are proved with the help of fixed-point and semigroup theories.

**Keywords:** fractional control systems; neutral evolution systems; controllability; optimal control; minimum energy; Banach fixed point theorem

**MSC:** 26A33; 34K40; 49J30; 93B05

#### **1. Introduction**

A neutral system is a system where time-delays play an important role. Precisely, such delays appear in both state variables and their derivatives. A delay in the derivative is called "neutral", which makes the system more complex than a classical one where the delays only occur in the state. Neutral delays do not only occur in physical systems, but they also appear in control systems, where they are sometimes added to improve the performance. For instance, a wide range of neutral-type control systems are expressed by

$$\frac{d}{dt}[y(t) - Ky\_t] = Ly\_t + Bu(t), \quad t \ge 0, \quad y\_0(\cdot) = f\_0(\cdot), \tag{1}$$

where *y<sub>t</sub>* : [−1, 0] → C<sup>*n*</sup> is defined by *y<sub>t</sub>*(*s*) = *y*(*t* + *s*); for *f* ∈ *H*<sup>1</sup>([−1, 0], C<sup>*n*</sup>), the difference operator *K* is given by *K f* = *A*<sub>−1</sub> *f*(−1), with *A*<sub>−1</sub> a constant *n* × *n* matrix. The delay operator *L* is defined by

$$Lf = \int\_{-1}^{0} \left[ A\_2(\theta) f'(\theta) + A\_3(\theta) f(\theta) \right] d\theta$$

with *A*<sub>2</sub> and *A*<sub>3</sub> being *n* × *n* matrices whose elements belong to *L*<sup>2</sup>(−1, 0); *B* is a constant *n* × *r* matrix; and the control *u* is an *L*<sup>2</sup>-function [1].

Nowadays, many researchers have investigated neutral differential equations in Banach spaces [2–4]. This interest is explained by the fact that neutral-argument differential equations have interesting applications in real-life problems: they appear, e.g., while

**Citation:** Ech-chaffani, Z.; Aberqi, A.; Karite, T.; Torres, D.F.M. Minimum Energy Problem in the Sense of Caputo for Fractional Neutral Evolution Systems in Banach Spaces. *Axioms* **2022**, *11*, 379. https:// doi.org/10.3390/axioms11080379

Academic Editor: Valery Y. Glizer

Received: 6 July 2022 Accepted: 29 July 2022 Published: 31 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

modeling networks containing lossless transmission lines or in supercomputers. Moreover, second-order neutral equations play an important role in automatic control and in aeromechanical systems, where inertia is central [5–7].

Controllability plays a crucial role in finite- and infinite-dimensional systems, being one of the primary concepts in control theory, along with observability and stability. This concept has attracted many authors; see, for instance, [8–10].

In the last two decades, several researchers have been interested in exploring the concept of controllability for fractional systems [11–13]. This is natural, because fractional differential equations are considered a valuable tool in modeling various real-world dynamic systems in physics, biology, socio-economics, chemistry and engineering [14–16].

It turns out that system (1) can also be studied in the fractional sense, e.g., being expressed by

$$\begin{cases} \ ^{\mathbb{C}}D\_t^q[y(t) - Ky\_t] = Ly(t) + Bu(t), \quad t \in [0, T],\\ y\_0(\cdot) = f\_0(\cdot), \end{cases}$$

where <sup>*C*</sup>*D*<sup>*q*</sup><sub>*t*</sub> denotes the Caputo fractional derivative of order *q*. The existence of solutions to fractional differential equations for neutral systems involving Caputo or other fractional operators, like Riemann–Liouville fractional derivatives, has received much attention [17–19]. Recently, some achievements regarding the existence and uniqueness of mild solutions to fractional stochastic neutral differential systems in a finite-dimensional space have been made [20]. Other works are devoted to demonstrating the existence of a mild solution for neutral fractional inclusions of the Sobolev type [21].

In [22], Sakthivel et al. examined the exact controllability of fractional differential neutral systems by establishing sufficient conditions via a fixed-point analysis approach. Later on, Sakthivel et al. investigated the weak controllability of fractional dynamical systems of order 1 < *q* < 2 using sectorial operators and Krasnoselskii's fixed-point theorem [23]. Using the same techniques as the previous authors, Qin et al. studied the controllability and optimal control of fractional dynamical systems of order 1 < *q* < 2 in Banach spaces [24]. Yan and Jia used stochastic analysis theory and fixed-point theorems with the strongly continuous *α*-order cosine family to study an optimal control problem for a class of stochastic fractional equations of order *α* ∈ (1, 2] in Hilbert spaces [25]. In 2021, Zhou and He obtained, via the contraction principle and Schauder's fixed-point theorem, a set of sufficient conditions for the exact controllability of a class of fractional systems [26]. More recently, Xi et al. studied the approximate controllability of fractional neutral hyperbolic systems using Sadovskii's fixed-point theorem while constructing a Cauchy sequence and a control function [27]. Dineshkumar et al. addressed the problem of approximate controllability for neutral stochastic fractional systems in the sense of Hilfer, treating the problem using Schauder's fixed-point theorem and extending the obtained results to the case of nonlocal conditions [28]. In [29], Ma et al. analyzed the weak controllability of a fractional neutral differential inclusion of the Hilfer type in Hilbert spaces using Bohnenblust–Karlin's fixed-point theorem. The concept of complete controllability is studied in [30] by Wen and Xi, where they establish sufficient conditions to assure this type of controllability.

Here, we let (*X*, |·|) be a Banach space, and we denote by C(0, *T*; *X*) the Banach space of continuous functions with the norm |*x*| = sup<sub>*t*∈[0,*T*]</sub> |*x*(*t*)|. Our main goal is to explore the concepts of controllability and optimal control for the following general evolution fractional system:

$$\begin{cases} {}^{C}D_t^{\nu}[\mathbf{x}(t) - h(t, \mathbf{x}_t)] = \mathcal{A}\mathbf{x}(t) + \mathcal{B}u(t), & t \in (0, T],\\ \mathbf{x}(0) = \mathbf{x}_0 \in D(\mathcal{A}), \end{cases} \tag{2}$$

where <sup>*C*</sup>*D*<sup>*ν*</sup><sub>*t*</sub> denotes the fractional derivative of order *ν* ∈ (0, 1) in the sense of Caputo, *h* : [0, *T*] × C(0, *T*; *X*) → *X* is a given continuous function, and the dynamic of the system A : *D*(A) ⊆ *X* → *X* is a linear, closed operator with dense domain *D*(A) generating

a compact and uniformly bounded *C*<sub>0</sub> semigroup {T(*t*)}<sub>*t*≥0</sub> on *X*. The control function *u*(·) is given in *L*<sup>2</sup>(0, *T*; *U*), with *U* a reflexive Banach space, and the control operator B ∈ L(*U*, *X*) is a linear continuous bounded operator, i.e., there exists a constant *M*<sub>1</sub> > 0 such that

$$|\mathcal{B}| \leq M_1. \tag{3}$$

Our main aim is to obtain a set of sufficient conditions assuring the controllability of system (2) and, afterwards, to consider an associated optimal control problem and prove the existence of a solution.

The rest of this paper is organized as follows. In Section 2, the definitions of the Caputo fractional derivative and of mild solutions for system (2) are recalled. Our main result on the controllability of (2) is proved in Section 3. In Section 4, we prove the existence of a control giving minimum energy on a closed convex set of admissible controls. Section 5 is devoted to the analysis of a concrete example, illustrating the applicability of our main results. We end with Section 6, which contains conclusions and points out some possible future directions of research.

#### **2. Background**

In this section, basic definitions, notations, and lemmas are introduced to be used throughout the paper. In particular, we recall the main properties of fractional calculus [31,32] and useful properties of semigroup theory [33].

Throughout the paper, let A be the infinitesimal generator of a compact and uniformly bounded *C*<sub>0</sub> semigroup {T(*t*)}<sub>*t*≥0</sub>. Let 0 ∈ *ρ*(A), where *ρ*(A) denotes the resolvent set of A. Then, for 0 ≤ *μ* ≤ 1, the fractional power A<sup>*μ*</sup> is defined as a closed linear operator on its domain *D*(A<sup>*μ*</sup>). For a compact semigroup {T(*t*)}<sub>*t*≥0</sub>, the following properties are useful in this paper:

(i) There exists *MT* ≥ 1 such that

$$M\_T = \sup\_{t \ge 0} |\mathcal{T}(t)|;\tag{4}$$

(ii) For any *<sup>μ</sup>* ∈ (0, 1], there exists L*<sup>μ</sup>* > 0 such that

$$|\mathcal{A}^{\mu}\mathcal{T}(t)| \le \frac{\mathbb{L}\_{\mu}}{t^{\mu}}, \quad 0 \le t \le T. \tag{5}$$

Now we recall the notion of a Caputo fractional derivative.

**Definition 1** (See [32])**.** *The left-sided Caputo fractional derivative of order ν* > 0 *of a function <sup>z</sup>* <sup>∈</sup> *<sup>L</sup>*1([0, *<sup>T</sup>*]) *is*

$$\, \_0^C D\_t^{\nu} z(t) = \frac{1}{\Gamma(\kappa - \nu)} \int\_0^t (t - s)^{\kappa - \nu - 1} \frac{d^\kappa}{ds^\kappa} z(s) ds,\tag{6}$$

*where t* ≥ <sup>0</sup>*, <sup>κ</sup>* − <sup>1</sup> < *<sup>ν</sup>* < *<sup>κ</sup>, <sup>κ</sup>* ∈ N*, and* <sup>Γ</sup>(·) *is the gamma function.*
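Although not part of the original analysis, Definition 1 with 0 < *ν* < 1 (so *κ* = 1) is easy to approximate numerically with the classical L1 finite-difference scheme; the sketch below is purely illustrative.

```python
import math

def caputo_l1(z, t, nu, n=2000):
    """Approximate the left-sided Caputo derivative of order nu in (0, 1)
    of a smooth function z at time t > 0, using the L1 scheme on a
    uniform grid with n steps."""
    h = t / n
    zs = [z(j * h) for j in range(n + 1)]
    total = 0.0
    for j in range(n):
        # weight multiplying the increment z(t_{j+1}) - z(t_j)
        w = (n - j) ** (1 - nu) - (n - j - 1) ** (1 - nu)
        total += w * (zs[j + 1] - zs[j])
    return total / (h ** nu * math.gamma(2 - nu))
```

As a sanity check, the Caputo derivative of *z*(*t*) = *t*² of order 1/2 is 2*t*<sup>3/2</sup>/Γ(5/2), and the Caputo derivative of any constant function is zero.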

Using the probability density function and its Laplace transform [34] (see also [35,36]), we recall the definition of a mild solution for system (2).

**Definition 2** (See [34])**.** *Let u* ∈ *U for t* ∈]0, *T*]*. A function x* ∈ C(0, *T*; *X*) *is said to be a mild solution of system* (2) *if*

$$\begin{split} \mathbf{x}(t,\boldsymbol{u}) &= \mathcal{S}\_{\boldsymbol{\nu}}(t)[\mathbf{x}\_{0} - h(\mathbf{0}, \mathbf{x}\_{0})] + h(t, \mathbf{x}\_{t}) + \int\_{0}^{t} (t-s)^{\nu-1} \mathcal{A} \mathcal{K}\_{\boldsymbol{\nu}}(t-s) h(s, \mathbf{x}\_{s}) ds \\ &+ \int\_{0}^{t} (t-s)^{\nu-1} \mathcal{K}\_{\boldsymbol{\nu}}(t-s) \mathcal{B} u(s) ds, \end{split} \tag{7}$$

*where Sν*(·) *and Kν*(·) *are the characteristic solution operators defined by*

$$S\_{\nu}(t) = \int\_{0}^{\infty} \phi\_{\nu}(\Theta) \mathcal{T}(t^{\nu}\Theta) \,d\Theta \quad and \quad K\_{\nu}(t) = \nu \int\_{0}^{\infty} \Theta \phi\_{\nu}(\Theta) \mathcal{T}(t^{\nu}\Theta) \,d\Theta$$

*with*

$$\phi\_{\boldsymbol{\nu}}(\boldsymbol{\Theta}) = \frac{1}{\nu} \boldsymbol{\Theta}^{-1-\frac{1}{\boldsymbol{\nu}}} \boldsymbol{\psi}\_{\boldsymbol{\nu}} \left( \boldsymbol{\Theta}^{-\frac{1}{\boldsymbol{\nu}}} \right).$$

*and*

$$\psi\_{\nu}(\Theta) = \frac{1}{\pi} \sum\_{n=1}^{\infty} (-1)^{n-1} \Theta^{-\nu n - 1} \frac{\Gamma(n\nu + 1)}{n!} \sin(n\pi\nu), \quad \Theta \in (0, \infty),$$

*the probability density. In addition, we have*

$$\int\_0^\infty \psi\_\nu(\Theta) d\Theta = 1 \text{ and } \int\_0^\infty \Theta^\Lambda \phi\_\nu(\Theta) d\Theta = \frac{\Gamma(1+\Lambda)}{\Gamma(1+\nu\Lambda)}, \quad \Lambda \in [0,1].$$
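The series defining *ψν* converges for every Θ > 0 when 0 < *ν* < 1, so both densities can be evaluated directly. The sketch below (illustrative, not part of the paper) does this and, for *ν* = 1/2, checks against the closed forms *ψ*1/2(*x*) = *x*<sup>−3/2</sup>e<sup>−1/(4*x*)</sup>/(2√π) and *φ*1/2(Θ) = e<sup>−Θ²/4</sup>/√π.

```python
import math

def psi(nu, theta, nmax=80):
    """One-sided density psi_nu(theta) via its convergent series (0 < nu < 1)."""
    s = 0.0
    for n in range(1, nmax + 1):
        s += ((-1) ** (n - 1) * theta ** (-nu * n - 1)
              * math.gamma(n * nu + 1) / math.factorial(n)
              * math.sin(n * math.pi * nu))
    return s / math.pi

def phi(nu, theta, nmax=80):
    """phi_nu(theta) = (1/nu) * theta^(-1 - 1/nu) * psi_nu(theta^(-1/nu))."""
    return theta ** (-1 - 1 / nu) * psi(nu, theta ** (-1 / nu), nmax) / nu
```

For example, `phi(0.5, 0.8)` agrees with e<sup>−0.16</sup>/√π to near machine precision.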

**Remark 1.** *The solution x*(*t*, *u*) *of* (2) *is considered in the weak sense, and, when there are no ambiguities, it is denoted by xu*(*t*)*. We denote by xu*(*T*) *the mild solution of system* (2) *at the final time T.*

The following properties of *Sν*(·) and *Kν*(·) will be used throughout the paper.

**Lemma 1** (See [34])**.**

*1. For any t* ≥ 0*, the operators Sν*(*t*) *and Kν*(*t*) *are linear and bounded, i.e.,*

$$|\mathcal{S}\_{\nu}(t)y| \leq M\_T|y| \quad \text{and} \quad |\mathcal{K}\_{\nu}(t)y| \leq \frac{\nu M\_T}{\Gamma(1+\nu)}|y|$$

*for any y* ∈ *X, where MT is given by* (4)*.*

*2. For t* > 0*, if* T (*t*) *is compact, then Sν*(*t*) *and Kν*(*t*) *are both compact operators.*

**Lemma 2** (See [34])**.** *For any x* ∈ *X, ς* ∈ (0, 1) *and μ* ∈ (0, 1] *we have*

*(i)* $$\mathcal{A}K\_{\nu}(t)x = \mathcal{A}^{1-\varsigma}K\_{\nu}(t)\mathcal{A}^{\varsigma}x, \quad 0 \le t \le a;$$

*(ii)* $$|\mathcal{A}^{\mu} K\_{\nu}(t)| \le \frac{\nu \mathbb{L}\_{\mu}}{t^{\nu\mu}} \frac{\Gamma(2 - \mu)}{\Gamma(1 + \nu(1 - \mu))}, \quad 0 < t \le a.$$

#### **3. Controllability**

Following [37], let us define the meaning of controllability for our system (2).

**Definition 3.** *System* (2) *is said to be controllable in X on* [0, *T*] *if for any given initial state <sup>x</sup>*<sup>0</sup> <sup>∈</sup> *<sup>X</sup> and any desired final state xd* <sup>∈</sup> *X, there exists a control <sup>u</sup>*(·) <sup>∈</sup> *<sup>L</sup>*2(0, *<sup>T</sup>*; *<sup>U</sup>*) *such that the mild solution x* ∈ C(0, *T*; *X*) *of system* (2) *satisfies xu*(*T*) = *xd.*

To prove controllability, we make use of the following assumptions:

(*A*1) A is the infinitesimal generator of a compact semi-group {T (*t*)}*t*≥0 satisfying (4) and (5);

(*A*2) there exist constants *ς* ∈ (0, 1) and *H*, *H*1 > 0 such that, for all *t* ∈ [0, *T*] and all *z*, *y*,

$$|\mathcal{A}^{\varsigma}h(t,z) - \mathcal{A}^{\varsigma}h(t,y)| \le H||z-y||\tag{8}$$

and

$$|\mathcal{A}^{\varsigma}h(t,z)| \le H\_1(\|z\|+1). \tag{9}$$

Let *<sup>H</sup><sup>ν</sup>* : *<sup>L</sup>*2(0, *<sup>T</sup>*; *<sup>U</sup>*) <sup>→</sup> *<sup>X</sup>* be the linear operator defined by

$$H\_{\nu}u = \int\_{0}^{T} (T - s)^{\nu - 1} K\_{\nu}(T - s) \mathcal{B}u(s) ds.$$

By construction, this operator is invertible when regarded on the quotient space *L*2(0, *T*; *U*)/ker *Hν*: on that quotient it is injective, and it is surjective because *L*2(0, *T*; *U*)/ker *H<sub>ν</sub>* ≅ Im *Hν* (see [38,39]). The inverse operator *H*−<sup>1</sup> *<sup>ν</sup>* takes values in *L*2(0, *T*; *U*)/ker *Hν*. Thus, there exists a constant *M*<sup>2</sup> > 0 such that

$$\left| H\_{\nu}^{-1} \right|\_{\mathcal{L}\left(X, L^{2}(0, T; \mathcal{U}) / \ker H\_{\nu} \right)} \leq M\_{2}. \tag{10}$$

Let *r* > 0. Note that *Br* = {*x* ∈ C(0, *T*; *X*) : ‖*x*‖ ≤ *r*} is a bounded, closed, and convex subset of C(0, *T*; *X*).

**Theorem 1.** *If* (*A*1) *and* (*A*2) *are fulfilled, then the evolution system* (2) *is controllable in* [0, *T*] *provided*

$$\left[|\mathcal{A}^{-\varsigma}| + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}T^{\nu\varsigma} + \frac{MM\_TM\_1}{\Gamma(1+\nu)}T^{\nu}\left(|\mathcal{A}^{-\varsigma}| + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}T^{\nu\varsigma}\right)\right]H < 1. \tag{11}$$
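Condition (11) is a purely arithmetic check once the constants are known. The following sketch evaluates its left-hand side; all numerical values are hypothetical placeholders, chosen only to illustrate the test.

```python
import math

def condition_11(a_norm, L, M, M_T, M_1, H, nu, sig, T):
    """Left-hand side of condition (11).  Here a_norm stands for |A^{-sigma}|
    and L for L_{1-sigma}; every numerical value used below is a
    hypothetical placeholder, not taken from the paper."""
    b = L * math.gamma(1 + sig) * T ** (nu * sig) / (sig * math.gamma(1 + nu * sig))
    c = M * M_T * M_1 * T ** nu / math.gamma(1 + nu)
    return (a_norm + b + c * (a_norm + b)) * H

# Controllability (Theorem 1) requires the value to be smaller than 1.
value = condition_11(a_norm=0.2, L=0.1, M=1, M_T=1, M_1=1,
                     H=0.5, nu=0.5, sig=0.5, T=1.0)
```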

**Proof.** For any function *x*, we define the control

$$\begin{split} u\_{x}(t) &= H\_{\nu}^{-1}\Big[x\_{d} - \mathcal{S}\_{\nu}(T)[x\_{0} - h(0, x\_{0})] - h(T, x\_{T}) \\ &\quad - \int\_{0}^{T} (T-s)^{\nu-1} \mathcal{A}\mathcal{K}\_{\nu}(T-s) h(s, x\_{s})\, ds\Big](t). \end{split} \tag{12}$$

We shall prove that *G* : C(0, *T*; *X*) → C(0, *T*; *X*), defined by

$$\begin{split} (Gx)(t) &= \mathcal{S}\_{\nu}(t)[x\_{0} - h(0, x\_{0})] + h(t, x\_{t}) + \int\_{0}^{t} (t-s)^{\nu-1} \mathcal{A}\mathcal{K}\_{\nu}(t-s) h(s, x\_{s})\, ds \\ &\quad + \int\_{0}^{t} (t-s)^{\nu-1} \mathcal{K}\_{\nu}(t-s) \mathcal{B}u\_{x}(s)\, ds, \quad t \in [0, T], \end{split} \tag{13}$$

has a fixed point *x* for the control *ux* steering system (2) from *x*<sup>0</sup> to *xd* in time *T*. From (3), (10), Lemma 1 and (i) of Lemma 2, we have

$$\begin{split} |\mathcal{B}u\_{x}(t)| &\leq MM\_{1}\Big(|x\_{d}| + M\_{T}\big[|x\_{0}| + |h(0, x\_{0})|\big] + |h(T, x\_{T})| \\ &\quad + \int\_{0}^{T} (T-s)^{\nu-1} \big|\mathcal{A}^{1-\varsigma}\mathcal{K}\_{\nu}(T-s)\mathcal{A}^{\varsigma}h(s, x\_{s})\big|\, ds\Big). \end{split}$$

In view of (9) and (ii) of Lemma 2, it follows that

$$\begin{split} |\mathcal{B}u\_{x}(t)| &\leq MM\_{1}\Big(|x\_{d}| + M\_{T}\big[|x\_{0}| + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}|\big] + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}| \\ &\quad + \frac{\nu\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\Gamma(1+\nu\varsigma)}H\_{1}(r+1)\int\_{0}^{T}(T-s)^{\nu\varsigma-1}ds\Big) \\ &\leq MM\_{1}\Big(|x\_{d}| + M\_{T}\big[|x\_{0}| + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}|\big] + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}| \\ &\quad + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}H\_{1}(r+1)T^{\nu\varsigma}\Big). \end{split}$$

Let

$$\mathcal{Y} = MM\_{1}\Big(|x\_{d}| + M\_{T}\big[|x\_{0}| + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}|\big] + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}| + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}H\_{1}(r+1)T^{\nu\varsigma}\Big).$$

It follows that

$$|\mathcal{B}u\_x(t)| \le \mathcal{Y}.\tag{14}$$

In order to show that *G* has a unique fixed point on *Br*, we proceed in two steps.

Step I: *Gx* ∈ *Br* whenever *x* ∈ *Br*. For any fixed *x* ∈ *Br* and 0 ≤ *t* ≤ *T*, we have

$$\begin{split} |(Gx)(t)| &\leq |\mathcal{S}\_{\nu}(t)[x\_{0} - h(0, x\_{0})]| + |h(t, x\_{t})| + \int\_{0}^{t} (t-s)^{\nu-1}\big|\mathcal{A}\mathcal{K}\_{\nu}(t-s)h(s, x\_{s})\big|\, ds \\ &\quad + \int\_{0}^{t} (t-s)^{\nu-1}|\mathcal{K}\_{\nu}(t-s)\mathcal{B}u\_{x}(s)|\, ds. \end{split}$$

From Lemma 1, (9), and (i) of Lemma 2, it results that

$$\begin{split} |(Gx)(t)| &\leq M\_{T}\big[r + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}|\big] + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}| \\ &\quad + \int\_{0}^{t}(t-s)^{\nu-1}\big|\mathcal{A}^{1-\varsigma}\mathcal{K}\_{\nu}(t-s)\mathcal{A}^{\varsigma}h(s, x\_{s})\big|\, ds \\ &\quad + \frac{\nu M\_{T}}{\Gamma(1+\nu)}\int\_{0}^{t}(t-s)^{\nu-1}|\mathcal{B}u\_{x}(s)|\, ds. \end{split}$$

Now, by using (ii) of Lemma 2, we get

$$\begin{split} |(Gx)(t)| &\leq M\_{T}\big[r + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}|\big] + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}| \\ &\quad + \frac{\nu\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\Gamma(1+\nu\varsigma)}H\_{1}(r+1)\int\_{0}^{t}(t-s)^{\nu\varsigma-1}ds \\ &\quad + \frac{\nu M\_{T}}{\Gamma(1+\nu)}\int\_{0}^{t}(t-s)^{\nu-1}|\mathcal{B}u\_{x}(s)|\, ds. \end{split}$$

According to (14), one has

$$\begin{split} |(Gx)(t)| &\leq M\_{T}\big[r + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}|\big] + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}| \\ &\quad + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}H\_{1}(r+1)T^{\nu\varsigma} + \frac{M\_{T}}{\Gamma(1+\nu)}\mathcal{Y}T^{\nu}. \end{split}$$

By choosing *r* such that

$$\begin{aligned} r &= M\_{T}\big[r + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}|\big] + (r+1)H\_{1}|\mathcal{A}^{-\varsigma}| \\ &\quad + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}H\_{1}(r+1)T^{\nu\varsigma} + \frac{M\_{T}}{\Gamma(1+\nu)}\mathcal{Y}T^{\nu}, \end{aligned}$$

we get that *Gx* ∈ *Br* whenever *x* ∈ *Br*.

Step II: *G* is a contraction on *Br*. For any *v*, *w* ∈ *Br* and 0 ≤ *t* ≤ *T*, in accordance with (12), we obtain

$$\begin{split} |(Gv)(t) - (Gw)(t)| &\leq |h(t, v\_{t}) - h(t, w\_{t})| \\ &\quad + \int\_{0}^{t}(t-s)^{\nu-1}\big|\mathcal{A}\mathcal{K}\_{\nu}(t-s)\big(h(s, v(s)) - h(s, w(s))\big)\big|\, ds \\ &\quad + \int\_{0}^{t}(t-s)^{\nu-1}\Big|\mathcal{K}\_{\nu}(t-s)\mathcal{B}H\_{\nu}^{-1}\Big[h(T, v\_{T}) - h(T, w\_{T}) \\ &\qquad + \int\_{0}^{T}(T-\tau)^{\nu-1}\mathcal{A}\mathcal{K}\_{\nu}(T-\tau)\big(h(\tau, v(\tau)) - h(\tau, w(\tau))\big)\, d\tau\Big](s)\Big|\, ds. \end{split}$$

Considering Lemma 2 and (*A*2), we get

$$\begin{split} |(Gv)(t) - (Gw)(t)| &\leq H|\mathcal{A}^{-\varsigma}||v - w| + \frac{\nu\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\Gamma(1+\nu\varsigma)}H|v - w|\int\_{0}^{t}(t-s)^{\nu\varsigma-1}ds \\ &\quad + \frac{\nu MM\_{T}M\_{1}}{\Gamma(1+\nu)}\int\_{0}^{t}(t-s)^{\nu-1}\Big[|h(T, v\_{T}) - h(T, w\_{T})| \\ &\qquad + \int\_{0}^{T}(T-\tau)^{\nu-1}\big|\mathcal{A}^{1-\varsigma}\mathcal{K}\_{\nu}(T-\tau)\mathcal{A}^{\varsigma}\big(h(\tau, v(\tau)) - h(\tau, w(\tau))\big)\big|\, d\tau\Big]ds. \end{split}$$

From (8), we obtain that

$$\begin{split} |(Gv)(t) - (Gw)(t)| &\leq H|\mathcal{A}^{-\varsigma}||v - w| + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}H|v - w|T^{\nu\varsigma} \\ &\quad + \frac{\nu MM\_{T}M\_{1}}{\Gamma(1+\nu)}\int\_{0}^{t}(t-s)^{\nu-1}\Big[H|\mathcal{A}^{-\varsigma}||v - w| \\ &\qquad + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}H|v - w|T^{\nu\varsigma}\Big]ds \\ &\leq H|\mathcal{A}^{-\varsigma}||v - w| + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}H|v - w|T^{\nu\varsigma} \\ &\quad + \frac{MM\_{T}M\_{1}}{\Gamma(1+\nu)}T^{\nu}\Big[|\mathcal{A}^{-\varsigma}| + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}T^{\nu\varsigma}\Big]H|v - w| \\ &= \Big[|\mathcal{A}^{-\varsigma}| + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}T^{\nu\varsigma} + \frac{MM\_{T}M\_{1}}{\Gamma(1+\nu)}T^{\nu}\Big(|\mathcal{A}^{-\varsigma}| + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}T^{\nu\varsigma}\Big)\Big]H|v - w|. \end{split}$$

From hypothesis (11), we have

$$\left[|\mathcal{A}^{-\varsigma}| + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}T^{\nu\varsigma} + \frac{MM\_{T}M\_{1}}{\Gamma(1+\nu)}T^{\nu}\left(|\mathcal{A}^{-\varsigma}| + \frac{\mathbb{L}\_{1-\varsigma}\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}T^{\nu\varsigma}\right)\right]H < 1;$$

it follows that

$$|(Gv)(t) - (Gw)(t)| < |v - w|,$$

that is, *G* is a contraction on *Br*. We conclude from the Banach fixed-point theorem that *G* has a unique fixed point *x* in *Br*. Then, substituting *ux* into (7), we have

$$\begin{split} x\_{u\_{x}}(T) &= \mathcal{S}\_{\nu}(T)[x\_{0} - h(0, x\_{0})] + h(T, x\_{T}) + \int\_{0}^{T}(T-s)^{\nu-1}\mathcal{A}\mathcal{K}\_{\nu}(T-s)h(s, x\_{s})\, ds \\ &\quad + \int\_{0}^{T}(T-s)^{\nu-1}\mathcal{K}\_{\nu}(T-s)\mathcal{B}u\_{x}(s)\, ds \\ &= \mathcal{S}\_{\nu}(T)[x\_{0} - h(0, x\_{0})] + h(T, x\_{T}) + \int\_{0}^{T}(T-s)^{\nu-1}\mathcal{A}\mathcal{K}\_{\nu}(T-s)h(s, x\_{s})\, ds \\ &\quad + H\_{\nu}H\_{\nu}^{-1}\Big[x\_{d} - \mathcal{S}\_{\nu}(T)[x\_{0} - h(0, x\_{0})] - h(T, x\_{T}) \\ &\qquad - \int\_{0}^{T}(T-s)^{\nu-1}\mathcal{A}\mathcal{K}\_{\nu}(T-s)h(s, x\_{s})\, ds\Big] \\ &= x\_{d}, \end{split}$$

and system (2) is exactly controllable, which completes the proof.

We have shown, under assumptions (*A*1) and (*A*2), and with the help of the Banach fixed-point theorem, that the neutral system (2) is controllable when condition (11) holds. It would be interesting to clarify whether the obtained control is unique, in the sense that any control that allows reaching the state *xd* is such that the associated state *x* is a fixed point of the operator *G*. This uniqueness question is relevant but remains open.
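The mechanism behind the proof, Picard iteration of a contraction, can be demonstrated on a toy scalar map. The snippet below is illustrative only; the map cos *x* plays the role of *G* here.

```python
import math

def picard(G, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = G(x_k); for a contraction the iterates converge
    geometrically to the unique fixed point (Banach fixed-point theorem)."""
    x = x0
    for _ in range(max_iter):
        x_new = G(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# cos is a contraction on [0, 1] (|sin x| <= sin 1 < 1 there), so the
# iteration converges to the unique solution of x = cos x.
fixed = picard(math.cos, 0.5)
```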

#### **4. Optimal Control**

Now, we consider the problem of steering system (2) from the state *x*<sup>0</sup> to a target state *xd* in time *T* with minimum energy. We prove the existence of a solution to such an optimal control problem when the set of admissible controls is closed and convex.

Let U*ad* be the nonempty set of admissible controls defined by

$$\mathcal{U}\_{\mathrm{ad}} = \left\{ u \in L^2(0, T; U) \,:\, x\_{u}(T) = x\_{d} \right\}.$$

We shall prove that U*ad* is closed. For that, let us consider a sequence *un* in U*ad* such that *un* <sup>→</sup> *<sup>u</sup>* strongly in *<sup>L</sup>*2(0, *<sup>T</sup>*; *<sup>U</sup>*), so

$$\begin{split} x\_{u\_{n}}(T) &= \mathcal{S}\_{\nu}(T)[x\_{0} - h(0, x\_{0})] + h(T, x\_{T}) + \int\_{0}^{T}(T-s)^{\nu-1}\mathcal{A}\mathcal{K}\_{\nu}(T-s)h(s, x\_{s})\, ds \\ &\quad + \int\_{0}^{T}(T-s)^{\nu-1}\mathcal{K}\_{\nu}(T-s)\mathcal{B}u\_{n}(s)\, ds. \end{split}$$

Put

$$\mathcal{Q}u = \int\_0^T (T-s)^{\nu-1}\mathcal{A}\mathcal{K}\_{\nu}(T-s)h(s, x\_{s})\, ds + \int\_0^T (T-s)^{\nu-1}\mathcal{K}\_{\nu}(T-s)\mathcal{B}u(s)\, ds.$$

Since Q is continuous, Q*un* → Q*u* strongly in *X*. We also have that *h* : [0, *T*] × C(0, *T*; *X*) → *X* is continuous; then *xun* (*T*) → *xu*(*T*) in *X*. But *xun* (*T*) ∈ {*xd*}, which is closed; therefore, *xu*(*T*) ∈ {*xd*}, which means that *u* ∈ U*ad*. Hence, U*ad* is closed.

For a desired state *xd*, our optimal control problem consists of finding within U*ad* a control minimizing the functional

$$J(u) = \frac{\varsigma}{2} \int\_0^T |x\_{u}(t) - x\_{d}|\_X^2\, dt + \frac{\varepsilon}{2} \int\_0^T |u(t)|\_{U}^2\, dt,$$

where *xu*(·) is the mild solution of system (2) associated with *u*. The parameters *ε* and *ς* are non-negative constants. Precisely, our optimal control problem is:

$$\left\{ \begin{array}{c} \inf\_{u \in \mathcal{U}\_{\text{ad}}} J(u), \\ \text{s.t. (2).} \end{array} \right. \tag{15}$$
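In numerical experiments, the functional *J* is evaluated by quadrature along a discretized trajectory. The sketch below is illustrative only (scalar-valued samples of *xu* and *u*, trapezoidal rule) and is not taken from the paper.

```python
def cost_J(x_vals, u_vals, x_d, h, sig=1.0, eps=1.0):
    """Trapezoidal approximation of
    J(u) = (sig/2) int |x_u - x_d|^2 dt + (eps/2) int |u|^2 dt
    from samples on a uniform grid of step h (scalar toy setting)."""
    def trapz(vals):
        return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    state = trapz([(x - x_d) ** 2 for x in x_vals])
    energy = trapz([u * u for u in u_vals])
    return 0.5 * sig * state + 0.5 * eps * energy
```

For instance, with *xu*(*t*) = *t*, *u* ≡ 1 on [0, 1], *xd* = 0, and both weights equal to 1, the exact value is 1/6 + 1/2 = 2/3, which the quadrature reproduces on a fine grid.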

The following result gives sufficient conditions for the existence of an optimal control for our minimum energy problem.

**Theorem 2.** *Let* <sup>U</sup>*ad be closed and convex. If* <sup>1</sup> <sup>−</sup> *<sup>H</sup>*|A−*ς*<sup>|</sup> <sup>&</sup>gt; <sup>0</sup>*, then there exists a solution u*∗ ∈ U*ad of the optimal control problem* (15)*.*

**Proof.** Let (*up*)*p*∈<sup>N</sup> ⊂ U*ad* be a minimizing sequence for *J*. Since ‖*up*‖<sup>2</sup> ≤ (2/*ε*) *J*(*up*), the sequence (*up*)*p*∈<sup>N</sup> is bounded in *L*2(0, *T*; *U*). Then there exists a subsequence, still denoted (*up*)*p*∈N, that converges weakly to a limit *u*∗. Because U*ad* is closed and convex, it is also closed for the weak topology, which implies that *u*∗ ∈ U*ad*. Let *xp* be the unique solution of system (2) associated with *up*, and let *x*∗ be the unique solution of system (2) associated with *u*∗. Then,

$$\begin{split} |x\_p(t) - x^\*(t)| &\leq |h(t, x\_p(t)) - h(t, x^\*(t))| \\ &\quad + \left| \int\_0^t (t-s)^{\nu-1}\mathcal{A}\mathcal{K}\_{\nu}(t-s)[h(s, x\_p(s)) - h(s, x^\*(s))]\, ds \right| \\ &\quad + \left| \int\_0^t (t-s)^{\nu-1}\mathcal{K}\_{\nu}(t-s)\mathcal{B}[u\_p(s) - u^\*(s)]\, ds \right| \\ &\leq H|\mathcal{A}^{-\varsigma}|\, |x\_p(t) - x^\*(t)| \\ &\quad + \int\_0^t (t-s)^{\nu-1}\left|\mathcal{A}^{1-\varsigma}\mathcal{K}\_{\nu}(t-s)[\mathcal{A}^{\varsigma}h(s, x\_p(s)) - \mathcal{A}^{\varsigma}h(s, x^\*(s))]\right| ds \\ &\quad + \left| \int\_0^t (t-s)^{\nu-1}\mathcal{K}\_{\nu}(t-s)\mathcal{B}[u\_p(s) - u^\*(s)]\, ds \right|. \end{split} \tag{16}$$

This leads us to

$$\begin{split} \left(1 - H|\mathcal{A}^{-\varsigma}|\right)|x\_p(t) - x^\*(t)| &\leq \frac{\nu \Gamma(1+\varsigma)}{\Gamma(1+\nu\varsigma)}\mathbb{L}\_{1-\varsigma}\int\_0^t (t-s)^{\nu\varsigma-1}H|x\_p(s) - x^\*(s)|\, ds \\ &\quad + \left| \int\_0^t (t-s)^{\nu-1}\mathcal{K}\_{\nu}(t-s)\mathcal{B}[u\_p(s) - u^\*(s)]\, ds \right|, \end{split} \tag{17}$$

*t* ∈ [0, *T*]. Set K′ = 1/(1 − *H*|A−*ς*|). Then,

$$\begin{split} |x\_p(t) - x^\*(t)| &\leq \mathcal{K}'\frac{\nu \Gamma(1+\varsigma)}{\Gamma(1+\nu\varsigma)}\mathbb{L}\_{1-\varsigma}\int\_0^t (t-s)^{\nu\varsigma-1}H|x\_p(s) - x^\*(s)|\, ds \\ &\quad + \mathcal{K}'\left| \int\_0^t (t-s)^{\nu-1}\mathcal{K}\_{\nu}(t-s)\mathcal{B}[u\_p(s) - u^\*(s)]\, ds \right|, \quad t \in [0, T]. \end{split} \tag{18}$$

Using the Gronwall lemma, we obtain that

$$\begin{split} |x\_p(t) - x^\*(t)| &\leq \mathcal{K}'\left| \int\_0^t (t-s)^{\nu-1}\mathcal{K}\_{\nu}(t-s)\mathcal{B}[u\_p(s) - u^\*(s)]\, ds \right| \\ &\qquad \times \exp\left( \mathcal{K}'\frac{\nu \Gamma(1+\varsigma)}{\Gamma(1+\nu\varsigma)}\mathbb{L}\_{1-\varsigma}H\int\_0^t (t-s)^{\nu\varsigma-1}ds \right) \\ &\leq \mathcal{K}'\left| \int\_0^t (t-s)^{\nu-1}\mathcal{K}\_{\nu}(t-s)\mathcal{B}[u\_p(s) - u^\*(s)]\, ds \right| \\ &\qquad \times \exp\left( \mathcal{K}'\frac{\Gamma(1+\varsigma)}{\varsigma\Gamma(1+\nu\varsigma)}\mathbb{L}\_{1-\varsigma}HT^{\nu\varsigma} \right). \end{split} \tag{19}$$

Now, since *up* ⇀ *u*∗ weakly in *L*2(0, *T*; *U*), from Lemma 1 we obtain that

$$\begin{aligned} \left| \int\_0^t (t-s)^{\nu-1} K\_\nu(t-s) \mathcal{B}[u\_P(s) - u^\*(s)] \mathrm{d}s \right| \\ &\leq \frac{\nu M\_T M\_1}{\Gamma(1+\nu)} \int\_0^t (t-s)^{\nu-1} |u\_P(s) - u^\*(s)|\_{L^2(0,T,\mathcal{U})} \mathrm{d}s, \end{aligned} \tag{20}$$

from which *xp* <sup>→</sup> *<sup>x</sup>*∗ strongly in *<sup>L</sup>*2(0, *<sup>T</sup>*; *<sup>X</sup>*). Hence,

$$\lim\_{p \to \infty} \int\_0^T \left| x\_p(t) - x\_d \right|\_X^2 dt = \int\_0^T \left| x^\*(t) - x\_d \right|\_X^2 dt.$$

Using the lower semi-continuity of the norm, the weak convergence of (*up*)*p*∈<sup>N</sup> gives

$$|u^\*| \le \liminf\_{p \to \infty} |u\_p|.$$

Therefore, *<sup>J</sup>*(*u*) <sup>≤</sup> lim*n*→<sup>∞</sup> inf *<sup>J</sup>*(*up*), leading to *<sup>J</sup>*(*u*) = inf *u*∈U*ad J*(*up*), which establishes the optimality of *u*.

We have just proved the existence of an optimal control for a closed convex set of admissible controls. In Section 5, our main results are illustrated with the help of an example.

#### **5. An Application**

In this section, we illustrate the results of Theorems 1 and 2. Let *X* = *L*2((0, 1); R) and consider the fractional differential system

$$\begin{cases} \ ^C D\_t^{1/2} \left( y(t, z) - h(t, y\_t) \right) = \Delta y(t, z) + \mathcal{B}u(t, z), & t \in [0, 1],\\ y(t, 0) = y(t, 1) = 0, & t \in [0, 1], \end{cases} \tag{21}$$

where the order *ν* of the fractional derivative is equal to 1/2, and the function *h* : [0, 1] × C → *X* is given by

$$h(t, y\_t)(x) = \int\_0^1 \mathcal{F}(x, z)\, y\_t(v, z)\, dz,\tag{22}$$

where F is assumed to satisfy the following conditions:

(a) The function F(*x*, *z*), *x*, *z* ∈ [0, 1], is measurable and

$$\int\_{0}^{1} \int\_{0}^{1} \mathcal{F}^2(x, z)\, dz\, dx < \infty;$$

(b) The function *∂x*F(*x*, *z*) is measurable, F(0, *z*) = F(1, *z*) = 0, and

$$\left(\int\_0^1 \int\_0^1 \left(\partial\_x \mathcal{F}(x, z)\right)^2 dz\, dx\right)^{1/2} < \infty.$$

Let A : *D*(A) ⊆ *X* → *X* be defined by A*x* = *x*′′ with the domain

$$D(\mathcal{A}) = \left\{ x(\cdot) \in X : x,\ x' \text{ absolutely continuous},\ x'' \in X,\ x(0) = x(1) = 0 \right\}.$$

We begin by proving that assumption (*A*1) holds. Indeed, the operator A is self-adjoint, has a compact resolvent, and generates an analytic compact semi-group T (*t*). Furthermore, the eigenvalues of −A are Λ*p* = *p*²π², *p* ∈ N, with corresponding normalized eigenvectors *ep*(*z*) = √2 sin(*pπz*), and {*ep*}*p*≥1 forms an orthonormal basis of *X*. Then,

$$\mathcal{A}x = -\sum\_{p=1}^{\infty} \Lambda\_p(x, e\_p) e\_p, \quad x \in D(\mathcal{A}),$$

and

$$\mathcal{T}(t)x(s) = \sum\_{p=1}^{\infty} \exp(-\Lambda\_p t)(x, e\_p)e\_p(s), \quad x \in X.$$

Note that T (·) is a uniformly stable semi-group and ‖T (*t*)‖*L*2[0,1] ≤ exp(−*t*). The following properties hold:

$$\text{(i)}\qquad \mathcal{A}^{-\frac{1}{2}}x = \sum\_{p=1}^{\infty} \frac{1}{p\pi} (x, e\_p) e\_p;$$

(ii) The operator A<sup>1/2</sup> is given by

$$\mathcal{A}^{\frac{1}{2}}x = \sum\_{p=1}^{\infty} p\pi(x, e\_p) e\_p, \quad \text{with } D(\mathcal{A}^{\frac{1}{2}}) = \left\{ x(\cdot) \in X : \sum\_{p=1}^{\infty} p\pi(x, e\_p) e\_p \in X \right\}.$$

Clearly, (4), (5), and (*A*1) are satisfied.
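The spectral formulas above can be verified numerically: truncating the eigenexpansion yields a concrete approximation of T (*t*). The following sketch (illustrative only; pure-Python trapezoidal quadrature) applies the truncated semigroup to *x* = *e*1, for which T (*t*)*e*1 = e<sup>−π²*t*</sup>*e*1 exactly.

```python
import math

def e_p(p, z):
    """Normalized Dirichlet eigenfunctions on (0, 1)."""
    return math.sqrt(2.0) * math.sin(p * math.pi * z)

def semigroup(x, t, z, n_modes=50, n_quad=2000):
    """Truncated eigenexpansion T(t)x(z) = sum_p e^{-p^2 pi^2 t} (x, e_p) e_p(z)."""
    h = 1.0 / n_quad
    out = 0.0
    for p in range(1, n_modes + 1):
        # trapezoidal rule for the L^2(0, 1) inner product (x, e_p)
        vals = [x(j * h) * e_p(p, j * h) for j in range(n_quad + 1)]
        inner = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
        out += math.exp(-(p * math.pi) ** 2 * t) * inner * e_p(p, z)
    return out
```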

Under our assumptions (a) and (b) on F, conditions (8) and (9) are satisfied, and hence assumption (*A*2) also holds.

Let *U* be a reflexive Banach space. We consider the control operator B : *U* → *X* defined by

$$\mathcal{B}u = \sum\_{p=1}^{\infty} \Lambda\_p(\bar{u}, e\_p) e\_p,$$

where

$$\bar{u} = \begin{cases} u\_p, & p = 1, 2, \ldots, N, \\ 0, & p = N+1, N+2, \ldots \end{cases}$$

We see that B is a bounded continuous operator, with *M*1 = *N*Λ*N*. For *N* ∈ N, consider *H*1/2 : *L*2([0, 1], *U*) → *X* given by

$$H\_{1/2}u = \int\_0^1 (1-s)^{-1/2} K\_{1/2}(1-s) \mathcal{B}u(s)\, ds,$$

we have

$$\begin{split} H\_{1/2}u &= \int\_0^1 (1-s)^{-1/2} \frac{1}{2} \int\_0^\infty \Theta \phi\_{1/2}(\Theta) \mathcal{T}\big((1-s)^{1/2}\Theta\big) \mathcal{B}u(s)\, d\Theta\, ds \\ &= \int\_0^1 (1-s)^{-1/2} \frac{1}{2} \int\_0^\infty \Theta \phi\_{1/2}(\Theta) \sum\_{i=1}^{\infty} \exp\big(-\Lambda\_i (1-s)^{1/2}\Theta\big) (\mathcal{B}u(s), e\_i) e\_i\, d\Theta\, ds \\ &= \int\_0^1 (1-s)^{-1/2} \sum\_{i=1}^\infty \int\_0^\infty \frac{1}{2} \Theta \phi\_{1/2}(\Theta) \sum\_{j=0}^\infty \frac{\big(-\Lambda\_i (1-s)^{1/2}\Theta\big)^j}{j!} (\mathcal{B}u(s), e\_i) e\_i\, d\Theta\, ds \\ &= \int\_0^1 \sum\_{i=1}^\infty \sum\_{j=0}^\infty \frac{(-\Lambda\_i)^j}{\Gamma\big(\frac{1}{2} + \frac{1}{2}j\big)} (1-s)^{\frac{j-1}{2}} (\mathcal{B}u(s), e\_i) e\_i\, ds \\ &= \sum\_{i=1}^\infty \sum\_{j=0}^\infty \frac{(-\Lambda\_i)^j}{\Gamma\big(\frac{1}{2} + \frac{1}{2}j\big)} \int\_0^1 (1-s)^{\frac{j-1}{2}} (\mathcal{B}u(s), e\_i)\, ds\; e\_i. \end{split}$$

Applying Theorem 1, we deduce that the fractional differential system (21) is controllable. Moreover, for the function *h* defined in (22) with Lipschitz constant *H* < 1/|A−1/2|, we conclude from Theorem 2 that there exists a control steering the system, in one unit of time, from a given initial state to a given terminal state with minimum energy.

#### **6. Conclusions**

Using the Banach fixed-point theorem, we have obtained a set of sufficient conditions for the controllability of a class of fractional neutral evolution equations involving the Caputo fractional derivative of order *ν* ∈]0, 1[ (cf. Theorem 1). The result is proved in two major steps: (i) first, we proved that the operator *G* defined by (13) maps the bounded, closed, and convex subset *Br* into itself; (ii) second, we proved that *G* is a contraction on the same subset *Br*. Moreover, we formulated a minimum energy optimal control problem and proved conditions assuring the existence of a solution for the optimal control problem inf*u*∈U*ad J*(*u*) subject to (2) (cf. Theorem 2). An example was given illustrating the two main results.

Our work can be extended in several directions: (i) to a case of enlarged controllability using different fractional derivatives; (ii) by developing methods to determine the control predicted by our existence theorem, e.g., by using RHUM and penalization approaches [10,40,41]; (iii) or by giving applications of neutral systems to epidemiological problems [42,43]. Many other questions remain open, as is the case of regional controllability and regional discrete controllability for problems of the type considered here. A strong motivation for investigating neutral evolution systems, such as (2), comes from physics, since they describe well various physical phenomena, for example fractional diffusion. However, neutral systems are difficult to study, since such control systems contain time-delays not only in the state but also in the velocity variables, which makes them intrinsically more complicated. A limitation of the method proposed here is that we are not able to provide conditions under which the optimal control is unique. Additionally, we do not have an explicit form for it.

**Author Contributions:** Conceptualization, A.A.; methodology, Z.E.-c., A.A., T.K. and D.F.M.T.; validation, Z.E.-c., A.A., T.K. and D.F.M.T.; formal analysis, Z.E.-c., A.A., T.K. and D.F.M.T.; investigation, Z.E.-c., A.A., T.K. and D.F.M.T.; writing—original draft preparation, Z.E.-c., A.A., T.K. and D.F.M.T.; writing—review and editing, Z.E.-c., A.A., T.K. and D.F.M.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** T.K. and D.F.M.T. were partially funded by FCT, project UIDB/04106/2020 (CIDMA).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** This research is part of Ech-chaffani's Ph.D., which is being carried out at Sidi Mohamed Ben Abdellah University, Fez, under the scientific supervision of Aberqi. It was essentially finished during a one-month visit of Karite to the Department of Mathematics of the University of Aveiro, Portugal, in April and May 2022. The authors are very grateful to two anonymous referees for many suggestions and invaluable comments.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


### *Article* **Fractional Dynamics of a Measles Epidemic Model**

**Hamadjam Abboubakar 1,2,\*, Rubin Fandio 2, Brandon Satsa Sofack <sup>2</sup> and Henri Paul Ekobena Fouda <sup>2</sup>**


**Abstract:** In this work, we replaced the integer derivative with the Caputo derivative to model the transmission dynamics of measles in an epidemic situation. We began by recalling some results on the local and global stability of the measles-free equilibrium point, as well as the local stability of the endemic equilibrium point. We computed the basic reproduction number of the fractional model and found that it is equal to the one of the integer model when the fractional order *ν* = 1. We then performed a sensitivity analysis using the global method. Indeed, we computed the partial rank correlation coefficient (PRCC) between each model parameter and the basic reproduction number *R*<sup>0</sup>, as well as each variable state. We then demonstrated that the fractional model admits a unique solution and that it is globally stable using the Ulam–Hyers stability criterion. Simulations using the Adams-type predictor–corrector iterative scheme were conducted to validate our theoretical results and to see the impact of the variation of the fractional order on the quantitative disease dynamics.

**Keywords:** measles; mathematical model; global sensitivity analysis; partial rank correlation coefficient (PRCC); fractional derivative; Caputo derivative; Ulam–Hyers stability

**MSC:** 92D30; 26A33

#### **1. Introduction**

Measles, also called rubeola or morbilli, is an infectious illness caused by the *Morbillivirus* of the family *Paramyxoviridae* [1,2]. It principally affects children below five years of age and has high mortality [2,3]. Despite the availability of a vaccine against the measles virus, this illness remains a health problem of concern to the World Health Organization (WHO). Indeed, in 2017, about 110,000 people died from measles, particularly children below the age of 6 [2,4]. Table 1 depicts the 10 countries that have mainly been affected by a global measles outbreak, while Figure 1 shows the global repartition of measles.

Several works based on mathematical modeling have been proposed to study the transmission dynamics of various diseases. These works are mainly based on SIR-type compartmental modeling [5–12]. The interest in using mathematical modeling to study measles transmission dynamics and to find control measures that make it possible to prevent or stop measles outbreaks is increasing [13]. Many authors have proposed and analyzed different compartmental mathematical models of measles [2,14–22]. Among these works, only [2] took into account the effect of hospitalization of infected individuals; indeed, most of the others consider traditional SEIR compartmental models.

**Citation:** Abboubakar, H.; Fandio, R.; Sofack, B.S.; Ekobena Fouda, H.P. Fractional Dynamics of a Measles Epidemic Model. *Axioms* **2022**, *11*, 363. https://doi.org/10.3390/ axioms11080363

Academic Editor: Natália Martins

Received: 27 June 2022 Accepted: 21 July 2022 Published: 26 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).


**Table 1.** Top 10 countries with global measles outbreaks [23].

**Figure 1.** World repartition of measles in 2019 [24].

Fractional calculus has been a useful tool to model, predict, and forecast epidemic outbreaks for the last 20 years. Indeed, although Leibniz predicted that fractional calculus would lead to paradoxes, it has become a central interest for many researchers in various fields, such as the engineering sciences [25], mathematical epidemiology [26,27], physics [28], and economics [29]. The most commonly used fractional operators are the Caputo derivative and its variants [30–38], the Caputo–Fabrizio derivative [39], the Atangana–Baleanu derivative [40,41], and the piecewise derivative [42]. The kernels of these operators have different characteristics: the Caputo operator is defined with a power-law-type kernel (nonlocal but singular), the Caputo–Fabrizio operator has an exponentially decaying (nonsingular) kernel, while the Atangana–Baleanu operator in the Caputo sense has a Mittag–Leffler-type kernel [43]. In a recent work [44], Atangana formulated and studied a compartmental model that could be used to depict the survival of fractional calculus. The facts that the Caputo operator has a memory effect and that the Caputo derivative of a constant function is equal to zero [45] make this derivative the most widely used.

Concerning the transmission dynamics of measles, few authors have used fractional derivatives [22,46–49]. In [46], Farman et al. employed a fractional Caputo operator on an SEIR epidemic model to control measles in infected populations. Ogunmiloro et al. [47] studied a mathematical model describing the transmission dynamics of measles with a double vaccination dose, treatment, and two groups of measles-infected and measles-induced encephalitis-infected humans with relapse, under the fractional Atangana–Baleanu–Caputo (ABC) operator. Qureshi, in [22], proposed a new epidemiological system for the measles epidemic using the Caputo fractional derivative with a memory effect.

The objective of this work was to compare, from a quantitative point of view, the dynamics of an epidemic model of measles with integer derivatives and fractional derivatives (in the sense of Caputo). To achieve our goal, we extended the model by Olumuyiwa et al. [2], which consists of a six-compartmental model integrating vaccinated and hospitalized individuals, by replacing the integer derivative with the Caputo derivative. The theoretical analysis of the fractional model was performed by a classical method and consists of the algebraic determination of the basic reproduction number *R*0, which depends on the fractional order *ν*, the proof of the local and global stabilities of the disease-free equilibrium, as well as the local stability of the endemic equilibrium. To determine the model parameters that

have a great influence on the measles epidemic in Nigeria, we performed a global sensitivity analysis of the model by computing the partial rank correlation coefficients between the basic reproduction number (as well as state variables) and the model parameters. We proved the existence and uniqueness of the solutions of the fractional model as well as its global stability using the Ulam–Hyers method. We then constructed a numerical scheme based on the Adams-type predictor–corrector iterative scheme [50,51], and finally, performed a numerical simulation to see the impact of the variation of the fractional order on the disease dynamics.

The paper is organized as follows: Section 2 is devoted to the model formulation and basic results. Section 3 is devoted to the global sensitivity analysis. In Section 4, we recall some definitions and useful results concerning fractional calculus; we also formulate the fractional measles model with the Caputo derivative and study the asymptotic stability of its equilibrium points. We then provide the proofs of the existence and uniqueness of the solution as well as of the global stability of the fractional model. The numerical scheme is also presented in this section. Section 5 is devoted to the numerical simulations. A conclusion rounds off the paper.

#### **2. Model Formulation and Basic Results**

In [2], Olumuyiwa et al. proposed and studied the following SVEIHR compartmental model with the integer derivative

$$\begin{cases} \frac{d}{dt}\mathcal{S}(t) = \Lambda + \alpha\mathcal{V} - \beta\mathcal{S}\mathcal{I} - \overbrace{(c+d)}^{k\_1}\mathcal{S}, \\ \frac{d}{dt}\mathcal{V}(t) = c\mathcal{S} - \overbrace{(d+\alpha)}^{k\_2}\mathcal{V}, \\ \frac{d}{dt}\mathcal{E}(t) = \beta\mathcal{S}\mathcal{I} - \overbrace{(d+\gamma)}^{k\_3}\mathcal{E}, \\ \frac{d}{dt}\mathcal{I}(t) = \gamma\mathcal{E} - \overbrace{(d+a+\phi)}^{k\_4}\mathcal{I}, \\ \frac{d}{dt}\mathcal{H}(t) = \phi\mathcal{I} - \overbrace{(\sigma+a+d)}^{k\_5}\mathcal{H}, \\ \frac{d}{dt}\mathcal{R}(t) = \sigma\mathcal{H} - d\mathcal{R}, \end{cases} \tag{1}$$

to model the transmission dynamics of measles in Nigeria. In Equation (1), S denotes the susceptible population, V is the vaccinated population, E is the total number of latent persons (infected but not infectious), I is the total number of infected persons, H is the total number of hospitalized persons, and R is the total number of recovered persons. The description of the model parameters and their values are given in Table 2.

**Table 2.** Biological description of model parameters and their numerical values [2].


The following subset of R<sup>6</sup>

$$\Sigma = \left\{ (\mathcal{S}, \mathcal{V}, \mathcal{E}, \mathcal{I}, \mathcal{H}, \mathcal{R})^t \in \mathbb{R}\_+^6 : \mathcal{N} := \mathcal{S} + \mathcal{V} + \mathcal{E} + \mathcal{I} + \mathcal{H} + \mathcal{R} \le \frac{\Lambda}{d} \right\}$$

is positively invariant for system Equation (1), which thus defines a dynamical system. Since the state variable R only appears in the last equation of Equation (1), it is sufficient to study the following reduced system

$$\begin{cases} \frac{d}{dt}\mathcal{S}(t) = \Lambda + \alpha\mathcal{V} - \beta\mathcal{S}\mathcal{I} - \overbrace{(c+d)}^{k\_1}\mathcal{S}, \\ \frac{d}{dt}\mathcal{V}(t) = c\mathcal{S} - \overbrace{(d+\alpha)}^{k\_2}\mathcal{V}, \\ \frac{d}{dt}\mathcal{E}(t) = \beta\mathcal{S}\mathcal{I} - \overbrace{(d+\gamma)}^{k\_3}\mathcal{E}, \\ \frac{d}{dt}\mathcal{I}(t) = \gamma\mathcal{E} - \overbrace{(d+a+\phi)}^{k\_4}\mathcal{I}, \\ \frac{d}{dt}\mathcal{H}(t) = \phi\mathcal{I} - \overbrace{(\sigma+a+d)}^{k\_5}\mathcal{H}. \end{cases} \tag{2}$$

The model Equation (2) admits two nonnegative equilibrium points: the disease-free equilibrium
$$\mathcal{E}^0 = (\mathcal{S}\_0, \mathcal{V}\_0, 0, 0, 0, 0)^t = \left( \frac{(d+\alpha)\Lambda}{(d+\alpha+c)d}, \frac{c\Lambda}{(d+\alpha+c)d}, 0, 0, 0, 0 \right)^t$$
and a unique endemic equilibrium $\mathcal{E}^1 = (\mathcal{S}\_1, \mathcal{V}\_1, \mathcal{E}\_1, \mathcal{I}\_1, \mathcal{H}\_1, \mathcal{R}\_1)^t$, where

$$\begin{cases} \mathcal{I}\_{1} = \frac{\Lambda \gamma}{R\_{0} k\_{3} k\_{4}} (R\_{0} - 1), \\ \mathcal{S}\_{1} = \frac{(d+\alpha)\Lambda}{k\_{2}(k\_{1} + \beta \mathcal{I}\_{1}) - c\alpha}, \\ \mathcal{V}\_{1} = \frac{c}{k\_{2}} \mathcal{S}\_{1}, \\ \mathcal{E}\_{1} = \frac{\beta}{k\_{3}} \mathcal{S}\_{1} \mathcal{I}\_{1}, \\ \mathcal{H}\_{1} = \frac{\phi}{k\_{5}} \mathcal{I}\_{1}, \\ \mathcal{R}\_{1} = \frac{\sigma}{d} \mathcal{H}\_{1}, \end{cases} \tag{3}$$

where *R*<sup>0</sup> denotes the basic reproduction number, expressed as follows:

$$R\_0 = \frac{\beta \gamma}{k\_3 k\_4} \frac{(d+\alpha)\Lambda}{(d+\alpha+c)d}. \tag{4}$$

From Equation (3), it follows that:

**Proposition 1.** *The model Equation* (2) *admits a unique endemic equilibrium point* E<sup>1</sup> = (S1, V1, E1, I1, H1, R1) *<sup>t</sup> if and only if R*<sup>0</sup> > 1*.*
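A quick numerical sanity check of the expressions in Equations (3) and (4) is to substitute the endemic equilibrium into the right-hand side of system (1) and verify that every component vanishes. The Python sketch below does this with hypothetical parameter values (chosen only so that *R*<sup>0</sup> > 1); they are not the fitted Nigeria values of Table 2.

```python
import numpy as np

# Hypothetical parameter values (chosen only so that R0 > 1); they are NOT
# the fitted Nigeria values of Table 2.
Lam, beta, gamma, alpha, a, c, d, phi, sigma = (
    50.0, 1e-3, 0.5, 0.1, 0.01, 0.3, 0.02, 0.2, 0.1)
k1, k2, k3, k4, k5 = c + d, d + alpha, d + gamma, d + a + phi, sigma + a + d

# Basic reproduction number, Equation (4)
R0 = beta * gamma * (d + alpha) * Lam / (k3 * k4 * (d + alpha + c) * d)

# Endemic equilibrium, Equation (3)
I1 = Lam * gamma * (R0 - 1.0) / (R0 * k3 * k4)
S1 = (d + alpha) * Lam / (k2 * (k1 + beta * I1) - c * alpha)
V1 = c * S1 / k2
E1 = beta * S1 * I1 / k3
H1 = phi * I1 / k5
R1 = sigma * H1 / d

def rhs(S, V, E, I, H, R):
    """Right-hand side of system (1)."""
    return np.array([
        Lam + alpha * V - beta * S * I - k1 * S,
        c * S - k2 * V,
        beta * S * I - k3 * E,
        gamma * E - k4 * I,
        phi * I - k5 * H,
        sigma * H - d * R,
    ])
```

With these values *R*<sup>0</sup> ≈ 2.99 > 1, and all six components of the equilibrium are positive, in agreement with Proposition 1.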


**Remark 1.** *As suggested in [44], the epidemic spread can also be evaluated by computing the so-called threshold "strength number". Indeed, the strength number makes it possible to know whether, during an epidemic period, a renewal process can occur [44]. In the case of the measles model Equation* (9)*, this threshold is equal to zero, which implies that the spread does not have a renewal process.*

#### **3. Uncertainty and Global Sensitivity Analysis**

In [2], the authors performed a local sensitivity analysis by computing the sensitivity indices of *R*<sup>0</sup> with respect to the model parameters. The disadvantage of this kind of sensitivity analysis (SA) is that each sensitivity index is calculated by varying only one parameter while the remaining parameters are kept fixed. To account for the combined variability of all input parameters simultaneously, we performed a global sensitivity analysis, which examines the model's response to parameter variation over the whole parameter space. To this aim, we computed the partial rank correlation coefficient (PRCC) between *R*<sup>0</sup> (as well as each state variable of the model) and each model parameter. The PRCC is a reliable sensitivity measure when the relationship between a parameter and the model output is nonlinear but monotonic [52,53]. Latin hypercube sampling (LHS) was used as the sampling technique [54], with 5000 runs. Each model parameter was assumed to be a uniformly distributed random variable whose mean value is listed in Table 2. A parameter is considered influential when its PRCC is less than −0.5 or greater than +0.5 [53]. The results of the SA are depicted in Figures 2–4.
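The LHS-plus-PRCC procedure described above can be sketched as follows. The `latin_hypercube` and `prcc` functions below are our own minimal implementations (not the code of [54]), and the uniform parameter ranges are illustrative stand-ins for the Table 2 setup: each sample is rank-transformed, the linear effect of the other parameters is regressed out, and the residuals are correlated.

```python
import numpy as np

def latin_hypercube(n, bounds, rng):
    """One stratified sample per row; bounds is a list of (low, high) pairs."""
    d = len(bounds)
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    u = (strata + rng.random((n, d))) / n          # one point per stratum
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    def ranks(v):
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(1, len(v) + 1)
        return r
    Xr = np.column_stack([ranks(col) for col in X.T])
    yr = ranks(y)
    n, k = Xr.shape
    coeffs = np.empty(k)
    for j in range(k):
        # regress out the (rank-transformed) remaining parameters
        Z = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - Z @ np.linalg.lstsq(Z, Xr[:, j], rcond=None)[0]
        ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
        coeffs[j] = np.corrcoef(rx, ry)[0, 1]
    return coeffs

# Output: R0 of Equation (4); the uniform ranges below are hypothetical.
names = ["beta", "gamma", "alpha", "a", "c", "d", "phi", "Lam"]
bounds = [(5e-4, 2e-3), (0.3, 0.7), (0.05, 0.15), (0.005, 0.02),
          (0.2, 0.4), (0.01, 0.03), (0.1, 0.3), (30.0, 70.0)]
rng = np.random.default_rng(0)
P = latin_hypercube(5000, bounds, rng)
beta, gamma, alpha, a, c, d, phi, Lam = P.T
R0 = (beta * gamma * (d + alpha) * Lam
      / ((d + gamma) * (d + a + phi) * (d + alpha + c) * d))
coeffs = dict(zip(names, prcc(P, R0)))
```

As expected from Equation (4), *β* comes out with a strongly positive PRCC, while *φ* and *c* come out negative.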

From Figure 2, it is clear that the parameters *β*, *a*, and *φ* have the highest influence on *R*0. This suggests that individual protection combined with efficient treatment may potentially be the most effective strategy to reduce the basic reproduction number.

From Figure 3, the parameters with the highest influence on S are *β*, *a*, and *φ*; the ones with the highest influence on V are *β*, *α*, and *c*, while the ones with the highest influence on R are *a*, *φ*, and *σ*.

From Figure 4, the parameters with the highest influence on E are Λ and *γ*; the ones with the highest influence on I are Λ, *a*, and *φ*, while the ones with the highest influence on H are *a*, *φ*, and *σ*. In general, mass vaccination combined with personal protection and care as soon as the first symptoms appear would make it possible to effectively fight measles.

**Figure 2.** Partial rank correlation coefficients between the basic reproduction number *R*<sup>0</sup> and model parameters.

**Figure 3.** Partial rank correlation coefficients between uninfected state variables of the model and model parameters.

**Figure 4.** Partial rank correlation coefficients between infected state variables of the model and model parameters.

#### **4. The Fractional Model and Its Analysis**

*4.1. Primarily Definition and Results of Fractional Calculus*

Now, we present some main properties of the fractional-order differential equations.

**Definition 1** ([55,56])**.** *The Caputo fractional derivative of a function <sup>f</sup>* ∈ C([*a*, *<sup>b</sup>*], R) *can be written as follows:*

$${}^{C}\_{0}\mathbb{D}\_t^{\nu} f(t) = \chi^{\mu-\nu} f^{(\mu)}(t), \quad \nu > 0,\tag{5}$$

*where <sup>C</sup> <sup>0</sup> <sup>D</sup>*<sup>ν</sup> *is the Caputo operator,* 0 < *ν* ≤ 1 *is the fractional order, μ is the smallest integer greater than or equal to ν, and χ<sup>φ</sup> is the Riemann–Liouville integral operator of order φ, defined as follows*

$$\chi^{\phi}\mathcal{Y}(t) = \frac{1}{\Gamma(\phi)} \int\_0^t (t-\tau)^{\phi-1} \mathcal{Y}(\tau) d\tau, \quad \phi > 0. \tag{6}$$

**Theorem 2** ([55,57])**.** *Consider the m-dimensional system*

$$\begin{cases} {}^{C}\_{0}\mathbb{D}^{\nu} z(t) = \omega z, \\ z(0) = z\_{0}, \end{cases} \tag{7}$$

*where ω is a constant square matrix of order m* × *m, and* 0 < *ν* < 1*. Let us denote by γi, i* = 1, 2, . . . , *m, the eigenvalues of ω. Then, the zero solution of system Equation* (7) *is asymptotically stable if and only if* |*arg*(*γi*)| > *νπ*/2 *for all i.*


**Theorem 3** ([55,57])**.** *Let us consider the following system:*

$$\begin{cases} {}^{C}\_{0}\mathbb{D}^{\nu} z(t) = \Psi(z), \\ z(0) = z\_0, \end{cases} \quad \text{with } 0 < \nu < 1 \text{ and } z \in \mathbb{R}^m. \tag{8}$$

*Solving the equation* Ψ(*z*) = 0 *yields the equilibrium points of system Equation* (8)*. If all the eigenvalues <sup>γ</sup><sup>i</sup> of the Jacobian <sup>J</sup>* <sup>=</sup> *∂*Ψ(*z*∗)*/∂<sup>z</sup> satisfy* |*arg*(*γi*)| > *νπ*/2*, then the equilibrium point z*<sup>∗</sup> *is locally asymptotically stable (LAS).*
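The eigenvalue condition of Theorem 3 is easy to check numerically. The sketch below uses a hypothetical 2 × 2 Jacobian (not the Jacobian of model (9)) and evaluates the condition |arg(*γi*)| > *νπ*/2; it also illustrates that shrinking *ν* enlarges the stability sector, so a matrix that is unstable for *ν* = 1 can satisfy the condition for a smaller fractional order.

```python
import numpy as np

def matignon_stable(J, nu):
    """Return True when every eigenvalue gamma_i of J satisfies
    |arg(gamma_i)| > nu * pi / 2 (the condition of Theorem 3)."""
    eigvals = np.linalg.eigvals(np.asarray(J, dtype=float))
    return bool(np.all(np.abs(np.angle(eigvals)) > nu * np.pi / 2))

# Hypothetical Jacobian with eigenvalues 1 +/- 5i, i.e. |arg| ~ 1.373 rad:
# unstable for nu = 1 (threshold pi/2 ~ 1.571) but stable for nu = 0.8
# (threshold 0.4*pi ~ 1.257).
J = [[1.0, -5.0], [5.0, 1.0]]
```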

#### *4.2. The Fractional Model with Caputo Operator*

By replacing the integer derivative with the Caputo fractional derivative in system (1), we obtain the following system:

$$\begin{cases} {}^{C}\_{0}\mathbb{D}^{\nu}\mathcal{S}(t) = \Lambda^{\nu} + \alpha^{\nu}\mathcal{V} - \beta^{\nu}\mathcal{S}\mathcal{I} - (c^{\nu} + d^{\nu})\mathcal{S}, \\ {}^{C}\_{0}\mathbb{D}^{\nu}\mathcal{V}(t) = c^{\nu}\mathcal{S} - (d^{\nu} + \alpha^{\nu})\mathcal{V}, \\ {}^{C}\_{0}\mathbb{D}^{\nu}\mathcal{E}(t) = \beta^{\nu}\mathcal{S}\mathcal{I} - (d^{\nu} + \gamma^{\nu})\mathcal{E}, \\ {}^{C}\_{0}\mathbb{D}^{\nu}\mathcal{I}(t) = \gamma^{\nu}\mathcal{E} - (d^{\nu} + a^{\nu} + \phi^{\nu})\mathcal{I}, \\ {}^{C}\_{0}\mathbb{D}^{\nu}\mathcal{H}(t) = \phi^{\nu}\mathcal{I} - (\sigma^{\nu} + a^{\nu} + d^{\nu})\mathcal{H}, \\ {}^{C}\_{0}\mathbb{D}^{\nu}\mathcal{R}(t) = \sigma^{\nu}\mathcal{H} - d^{\nu}\mathcal{R}. \end{cases} \tag{9}$$

For this model, the corresponding basic reproduction number is given by

$$R\_0 = \frac{\beta^\nu \gamma^\nu (d^\nu + \alpha^\nu) \Lambda^\nu}{(d^\nu + \gamma^\nu)(d^\nu + a^\nu + \phi^\nu)(d^\nu + \alpha^\nu + c^\nu)d^\nu}. \tag{10}$$

From Equation (10), we note that for *ν* = 1, the basic reproduction number coincides for both models.
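This coincidence at *ν* = 1 is easy to confirm numerically. In the sketch below, both formulas are evaluated side by side; the parameter values used in the check are arbitrary placeholders rather than the Table 2 estimates.

```python
def R0_fractional(nu, Lam, beta, gamma, alpha, a, c, d, phi):
    """Basic reproduction number of the Caputo model, Equation (10):
    every parameter enters through its nu-th power."""
    L, b, g, al, av, cv, dv, ph = (x ** nu for x in
                                   (Lam, beta, gamma, alpha, a, c, d, phi))
    return (b * g * (dv + al) * L
            / ((dv + g) * (dv + av + ph) * (dv + al + cv) * dv))

def R0_integer(Lam, beta, gamma, alpha, a, c, d, phi):
    """Basic reproduction number of the integer-order model, Equation (4)."""
    return (beta * gamma * (d + alpha) * Lam
            / ((d + gamma) * (d + a + phi) * (d + alpha + c) * d))
```

For *ν* = 1 the two functions return the same value; for *ν* < 1 they differ.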

As in the case of the model with the integer derivative Equation (2), the fractional model Equation (9) has only one endemic equilibrium $\mathcal{E}^1 = (\mathcal{S}\_1, \mathcal{V}\_1, \mathcal{E}\_1, \mathcal{I}\_1, \mathcal{H}\_1, \mathcal{R}\_1)^t$, where

$$\begin{cases} \mathcal{I}\_{1} = \frac{\Lambda^{\nu}\gamma^{\nu}}{R\_{0} k\_{3} k\_{4}}(R\_{0}-1), \quad \mathcal{S}\_{1} = \frac{(d^{\nu}+\alpha^{\nu})\Lambda^{\nu}}{k\_{2}(k\_{1}+\beta^{\nu}\mathcal{I}\_{1}) - c^{\nu}\alpha^{\nu}}, \\ \mathcal{V}\_{1} = \frac{c^{\nu}}{k\_{2}}\mathcal{S}\_{1}, \quad \mathcal{E}\_{1} = \frac{\beta^{\nu}}{k\_{3}}\mathcal{S}\_{1}\mathcal{I}\_{1}, \quad \mathcal{H}\_{1} = \frac{\phi^{\nu}}{k\_{5}}\mathcal{I}\_{1}, \quad \mathcal{R}\_{1} = \frac{\sigma^{\nu}}{d^{\nu}}\mathcal{H}\_{1}, \end{cases} \tag{11}$$

where *k*<sup>1</sup> = *c<sup>ν</sup>* + *dν*, *k*<sup>2</sup> = *d<sup>ν</sup>* + *αν*, *k*<sup>3</sup> = *d<sup>ν</sup>* + *γν*, *k*<sup>4</sup> = *d<sup>ν</sup>* + *a<sup>ν</sup>* + *φν*, and *k*<sup>5</sup> = *σ<sup>ν</sup>* + *a<sup>ν</sup>* + *dν*.

4.2.1. Asymptotic Stability of the Disease-Free Equilibrium

The following theorems are proved in the same way as in [2].

**Theorem 4.** *The disease-free equilibrium point* E<sup>0</sup> *is locally asymptotically stable if R*<sup>0</sup> < 1*; otherwise, it is unstable.*

**Theorem 5.** *The disease-free equilibrium point* (E0) *is globally asymptotically stable whenever R*<sup>0</sup> < 1*.*

**Theorem 6.** *The endemic equilibrium point* (E1) *is locally stable, when R*<sup>0</sup> > 1*.*

4.2.2. Existence and Uniqueness of Solution

In this section, we present the results of the existence and uniqueness of the solution of the fractional differential Equation (9).

For this purpose, let *η* = C([0, *a*], R) denote the Banach space of continuous functions from [0, *a*] to R, endowed with the norm
$$\|\mathcal{X}\|\_{\eta} = \sup\_{t\in[0,a]} |\mathcal{X}(t)|.$$

The system Equation (9) can be rewritten in the following compact form:

$${}^{C}\_{0}\mathbb{D}^{\nu}\mathcal{X}(t) = \xi(t, \mathcal{X}(t)), \quad \mathcal{X}(0) = \mathcal{X}\_0 \ge 0, \quad t \in [0, a], \quad \nu \in (0, 1), \tag{12}$$

where X = (S, V, E, I, H, R)*<sup>t</sup>* and *ξ* = (*ξ*1, *ξ*2, *ξ*3, *ξ*4, *ξ*5, *ξ*6)*<sup>t</sup>* with

$$\begin{cases} \xi\_{1} = \Lambda^{\nu} + \alpha^{\nu}\mathcal{V} - \beta^{\nu}\mathcal{S}\mathcal{I} - (c^{\nu} + d^{\nu})\mathcal{S}, \\ \xi\_{2} = c^{\nu}\mathcal{S} - (d^{\nu} + \alpha^{\nu})\mathcal{V}, \\ \xi\_{3} = \beta^{\nu}\mathcal{S}\mathcal{I} - (d^{\nu} + \gamma^{\nu})\mathcal{E}, \\ \xi\_{4} = \gamma^{\nu}\mathcal{E} - (d^{\nu} + a^{\nu} + \phi^{\nu})\mathcal{I}, \\ \xi\_{5} = \phi^{\nu}\mathcal{I} - (\sigma^{\nu} + a^{\nu} + d^{\nu})\mathcal{H}, \\ \xi\_{6} = \sigma^{\nu}\mathcal{H} - d^{\nu}\mathcal{R}. \end{cases}$$

Since *ξi* ∈ *η* for all *i* = 1, . . . , 6, the following operator

$$(\mathcal{P}\mathcal{X})(t) = \mathcal{X}\_0 + \frac{1}{\Gamma(\nu)} \int\_0^t (t-\epsilon)^{\nu-1} \xi(\epsilon, \mathcal{X}(\epsilon)) d\epsilon,\tag{13}$$

is well-defined. The following result is valid:

**Lemma 1.** *Let* X = (S, V, E, I, H, R)*<sup>t</sup>. The function ξ* = (*ξi*) *defined above satisfies*
$$\left\| \xi(t, \mathcal{X}(t)) - \xi(t, \overline{\mathcal{X}}(t)) \right\|\_{\eta} \le \mathbf{N}\_{\xi} \left\| \mathcal{X} - \overline{\mathcal{X}} \right\|\_{\eta}$$
*for some* **N***<sup>ξ</sup>* > 0*.*

**Proof.** We proceed as follows for the first component of *ξ*:

$$\begin{split} & \left| \xi\_{1}(t, \mathcal{X}(t)) - \xi\_{1}(t, \overline{\mathcal{X}}(t)) \right| \\ &= \left| \alpha^{\nu}(\mathcal{V}(t) - \overline{\mathcal{V}}(t)) - \beta^{\nu}(\mathcal{S}(t)\mathcal{I}(t) - \overline{\mathcal{S}}(t)\overline{\mathcal{I}}(t)) - h\_{1}(\mathcal{S}(t) - \overline{\mathcal{S}}(t)) \right| \\ & \le h\_{1} \left| \mathcal{S}(t) - \overline{\mathcal{S}}(t) \right| + \alpha^{\nu} \left| \mathcal{V}(t) - \overline{\mathcal{V}}(t) \right| + \beta^{\nu} \left| \mathcal{S}(t)\mathcal{I}(t) - \overline{\mathcal{S}}(t)\overline{\mathcal{I}}(t) \right|. \end{split} \tag{14}$$

However,

$$\begin{aligned} \left| \mathcal{S}(t)\mathcal{I}(t) - \overline{\mathcal{S}}(t)\overline{\mathcal{I}}(t) \right| &= \left| \mathcal{I}(t) \left( \mathcal{S}(t) - \overline{\mathcal{S}}(t) \right) + \overline{\mathcal{S}}(t) \left( \mathcal{I}(t) - \overline{\mathcal{I}}(t) \right) \right| \\ &\leq \left| \mathcal{I}(t) \right| \left| \mathcal{S}(t) - \overline{\mathcal{S}}(t) \right| + \left| \overline{\mathcal{S}}(t) \right| \left| \mathcal{I}(t) - \overline{\mathcal{I}}(t) \right|. \end{aligned}$$

Thus, we obtain

$$\begin{split} & \left| \xi\_{1}(t, \mathcal{X}(t)) - \xi\_{1}(t, \overline{\mathcal{X}}(t)) \right| \\ & \le h\_{1} \left| \mathcal{S}(t) - \overline{\mathcal{S}}(t) \right| + \alpha^{\nu} \left| \mathcal{V}(t) - \overline{\mathcal{V}}(t) \right| + \beta^{\nu} \left( \left| \mathcal{I}(t) \right| \left| \mathcal{S}(t) - \overline{\mathcal{S}}(t) \right| + \left| \overline{\mathcal{S}}(t) \right| \left| \mathcal{I}(t) - \overline{\mathcal{I}}(t) \right| \right) \\ & = \left( h\_{1} + \beta^{\nu} \left| \mathcal{I}(t) \right| \right) \left| \mathcal{S}(t) - \overline{\mathcal{S}}(t) \right| + \alpha^{\nu} \left| \mathcal{V}(t) - \overline{\mathcal{V}}(t) \right| + \beta^{\nu} \left| \overline{\mathcal{S}}(t) \right| \left| \mathcal{I}(t) - \overline{\mathcal{I}}(t) \right| \\ & \leq \mathbf{N}\_{1} \left( \left| \mathcal{S}(t) - \overline{\mathcal{S}}(t) \right| + \left| \mathcal{V}(t) - \overline{\mathcal{V}}(t) \right| + \left| \mathcal{I}(t) - \overline{\mathcal{I}}(t) \right| \right), \end{split} \tag{15}$$

where
$$\mathbf{N}\_1 = h\_1 + \alpha^{\nu} + \max\_{t\in[0,a]}\left\{\beta^{\nu}|\mathcal{I}(t)| + \beta^{\nu}\left|\overline{\mathcal{S}}(t)\right|\right\},$$
with *h*<sup>1</sup> = *c<sup>ν</sup>* + *dν*. Similarly, for *ξ*2, we also have:

$$\begin{split} \left| \xi\_{2}(t, \mathcal{X}(t)) - \xi\_{2}(t, \overline{\mathcal{X}}(t)) \right| &= \left| c^{\nu}(\mathcal{S}(t) - \overline{\mathcal{S}}(t)) - h\_{2}(\mathcal{V}(t) - \overline{\mathcal{V}}(t)) \right| \\ &\leq c^{\nu} \left| \mathcal{S}(t) - \overline{\mathcal{S}}(t) \right| + h\_{2} \left| \mathcal{V}(t) - \overline{\mathcal{V}}(t) \right| \\ &\leq \mathbf{N}\_{2} \left( \left| \mathcal{S}(t) - \overline{\mathcal{S}}(t) \right| + \left| \mathcal{V}(t) - \overline{\mathcal{V}}(t) \right| \right), \end{split} \tag{16}$$

where **N**<sup>2</sup> = *c<sup>ν</sup>* + *h*2, with *h*<sup>2</sup> = *d<sup>ν</sup>* + *αν*.

Based on analogous reasoning, we obtain, for the remaining *ξi*, *i* = 3, . . . , 6:

$$\begin{split} \left| \xi\_{3}(t, \mathcal{X}(t)) - \xi\_{3}(t, \overline{\mathcal{X}}(t)) \right| &\leq \mathbf{N}\_{3} \left(|\mathcal{S}(t) - \overline{\mathcal{S}}(t)| + |\mathcal{I}(t) - \overline{\mathcal{I}}(t)| + |\mathcal{E}(t) - \overline{\mathcal{E}}(t)|\right), \\ \left| \xi\_{4}(t, \mathcal{X}(t)) - \xi\_{4}(t, \overline{\mathcal{X}}(t)) \right| &\leq \mathbf{N}\_{4} \left(|\mathcal{E}(t) - \overline{\mathcal{E}}(t)| + |\mathcal{I}(t) - \overline{\mathcal{I}}(t)|\right), \\ \left| \xi\_{5}(t, \mathcal{X}(t)) - \xi\_{5}(t, \overline{\mathcal{X}}(t)) \right| &\leq \mathbf{N}\_{5} \left(|\mathcal{I}(t) - \overline{\mathcal{I}}(t)| + |\mathcal{H}(t) - \overline{\mathcal{H}}(t)|\right), \\ \left| \xi\_{6}(t, \mathcal{X}(t)) - \xi\_{6}(t, \overline{\mathcal{X}}(t)) \right| &\leq \mathbf{N}\_{6} \left(|\mathcal{H}(t) - \overline{\mathcal{H}}(t)| + |\mathcal{R}(t) - \overline{\mathcal{R}}(t)|\right), \end{split} \tag{17}$$

where
$$\mathbf{N}\_3 = h\_3 + \max\_{t\in[0,a]}\left\{\beta^{\nu}|\mathcal{I}(t)| + \beta^{\nu}\left|\overline{\mathcal{S}}(t)\right|\right\}, \quad \mathbf{N}\_4 = \gamma^{\nu} + h\_4, \quad \mathbf{N}\_5 = \phi^{\nu} + h\_5, \quad \mathbf{N}\_6 = \sigma^{\nu} + d^{\nu},$$
with *h*<sup>3</sup> = *d<sup>ν</sup>* + *γν*, *h*<sup>4</sup> = *d<sup>ν</sup>* + *a<sup>ν</sup>* + *φν*, and *h*<sup>5</sup> = *σ<sup>ν</sup>* + *a<sup>ν</sup>* + *dν*. Therefore, we finally obtain

$$\begin{aligned} \left\| \xi(t,\mathcal{X}(t)) - \xi(t,\overline{\mathcal{X}}(t)) \right\|\_{\eta} &= \max\_{1\le i\le 6} \sup\_{t\in[0,a]} \left|\xi\_{i}(t,\mathcal{X}(t)) - \xi\_{i}(t,\overline{\mathcal{X}}(t))\right| \\ &\leq \overbrace{\left(\mathbf{N}\_{1} + \mathbf{N}\_{2} + \mathbf{N}\_{3} + \mathbf{N}\_{4} + \mathbf{N}\_{5} + \mathbf{N}\_{6}\right)}^{\mathbf{N}\_{\eta}} \left\| \mathcal{X} - \overline{\mathcal{X}} \right\|\_{\eta}, \end{aligned}$$
so that Lemma 1 holds with **N***<sup>ξ</sup>* = **N***<sup>η</sup>*.

**Theorem 7.** *Let the result of Lemma 1 hold. If* (*a<sup>ν</sup>*/Γ(*ν* + 1))**N***<sup>η</sup>* < 1*, then there exists a unique solution of the model Equation* (12) *on* [0, *a*]*, which is uniformly Lyapunov-stable.*

**Proof.** The function *ξ* : [0, *a*] × R<sup>6</sup> → R<sup>6</sup> is clearly continuous on its domain. Thus, the existence of solutions to Equations (9) and (12) follows from (Theorem 3.1, [58]).

The Banach contraction mapping principle applied to the operator P (see Equation (13)) will be used in the following to prove the uniqueness of the solution of the fractional model Equation (12). By definition, sup*t*∈[0,*a*] |*ξ*(*t*, 0)| = Λ*ν*. Let us now choose
$$\zeta \ge \frac{|\mathcal{X}\_0| + \frac{a^{\nu}}{\Gamma(\nu+1)}\Lambda^{\nu}}{1 - \frac{a^{\nu}}{\Gamma(\nu+1)}\mathbf{N}\_{\eta}}$$
and define the closed convex set $\mathcal{Z}\_{\zeta} = \left\{ \mathcal{X} \in \eta : \|\mathcal{X}\|\_{\eta} \le \zeta \right\}$. For the self-map property, it suffices to show that PZ*<sup>ζ</sup>* ⊆ Z*<sup>ζ</sup>*. Let X ∈ Z*<sup>ζ</sup>*; we have

$$\begin{split} \|\mathcal{P}\mathcal{X}\|\_{\eta} &= \sup\_{t \in [0,a]} \left| \mathcal{X}\_{0} + \frac{1}{\Gamma(\nu)} \int\_{0}^{t} (t-\epsilon)^{\nu-1} \xi(\epsilon, \mathcal{X}(\epsilon)) d\epsilon \right| \\ &\leq |\mathcal{X}\_{0}| + \frac{1}{\Gamma(\nu)} \sup\_{t \in [0,a]} \left\{ \int\_{0}^{t} (t-\epsilon)^{\nu-1} (|\xi(\epsilon, \mathcal{X}(\epsilon)) - \xi(\epsilon, 0)| + |\xi(\epsilon, 0)|) d\epsilon \right\} \\ &\leq |\mathcal{X}\_{0}| + \frac{\mathbf{N}\_{\eta} \|\mathcal{X}\|\_{\eta} + \Lambda^{\nu}}{\Gamma(\nu)} \sup\_{t \in [0,a]} \left\{ \int\_{0}^{t} (t-\epsilon)^{\nu-1} d\epsilon \right\} \\ &\leq |\mathcal{X}\_{0}| + \frac{a^{\nu}}{\Gamma(\nu+1)} (\mathbf{N}\_{\eta} \zeta + \Lambda^{\nu}) \\ &\leq \zeta. \end{split}$$

Then, PZ*<sup>ζ</sup>* ⊆ Z*<sup>ζ</sup>*, and P is indeed a self-map. It remains to show that P is a contraction. Let $\mathcal{X}$ and $\overline{\mathcal{X}}$ be two solutions of Equation (12). Using the result of Lemma 1, we obtain

$$\begin{split} \|\mathcal{P}\mathcal{X} - \mathcal{P}\overline{\mathcal{X}}\|\_{\eta} &= \sup\_{t \in [0,a]} \left| (\mathcal{P}\mathcal{X})(t) - (\mathcal{P}\overline{\mathcal{X}})(t) \right| \\ &\leq \frac{1}{\Gamma(\nu)} \sup\_{t \in [0,a]} \left\{ \int\_{0}^{t} (t-\epsilon)^{\nu-1} \left| \xi(\epsilon, \mathcal{X}(\epsilon)) - \xi(\epsilon, \overline{\mathcal{X}}(\epsilon)) \right| d\epsilon \right\} \\ &\leq \frac{\mathbf{N}\_{\eta}}{\Gamma(\nu)} \sup\_{t \in [0,a]} \left\{ \int\_{0}^{t} (t-\epsilon)^{\nu-1} |\mathcal{X}(\epsilon) - \overline{\mathcal{X}}(\epsilon)| d\epsilon \right\} \\ &\leq \frac{a^{\nu}}{\Gamma(\nu+1)} \mathbf{N}\_{\eta} \left\| \mathcal{X} - \overline{\mathcal{X}} \right\|\_{\eta}. \end{split}$$

The condition (*a<sup>ν</sup>*/Γ(*ν* + 1))**N***<sup>η</sup>* < 1 ensures that P is a contraction mapping. Thus, by the Banach contraction mapping principle, P has a unique fixed point on [0, *a*], which is the unique solution of Equation (12). Theorem 3.2 in [58] then ensures the uniform Lyapunov stability of this solution.
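The fixed-point argument above can be illustrated numerically by iterating the operator P of Equation (13) on a scalar test problem. The sketch below uses the hypothetical right-hand side *ξ*(*t*, *y*) = −*y* and a product-rectangle quadrature for the Riemann–Liouville integral; the sup-norm differences between successive Picard iterates are observed to shrink toward zero, mirroring the contraction behavior (this is an illustration, not part of the proof).

```python
import math
import numpy as np

def picard_iterates(nu, y0, a, n_grid=400, iters=10):
    """Iterate (P y)(t) = y0 + (1/Gamma(nu)) * int_0^t (t-s)^(nu-1) xi(s, y(s)) ds
    for the scalar test right-hand side xi(s, y) = -y, discretized with a
    product-rectangle rule on a uniform grid over [0, a]."""
    t = np.linspace(0.0, a, n_grid + 1)
    h = a / n_grid
    y = np.full(n_grid + 1, float(y0))     # initial guess: the constant y0
    sup_diffs = []
    for _ in range(iters):
        y_new = np.empty_like(y)
        y_new[0] = y0
        for i in range(1, n_grid + 1):
            s = t[:i]                      # left endpoints of the subintervals
            # exact integral of (t_i - s)^(nu-1) over each subinterval
            w = ((t[i] - s) ** nu
                 - np.maximum(t[i] - s - h, 0.0) ** nu) / math.gamma(nu + 1)
            y_new[i] = y0 + np.dot(w, -y[:i])
        sup_diffs.append(float(np.max(np.abs(y_new - y))))
        y = y_new
    return y, sup_diffs
```

For *ν* = 1 the fixed point approximates the classical solution *e*<sup>−*t*</sup> of *y*′ = −*y*, which gives an easy accuracy check.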

#### *4.3. Global Stability of the Fractional Model*

In what follows, we will perform the global stability of the fractional-order model Equation (9) in the sense of Ulam–Hyers [59,60]. To this aim, we introduce the following inequality:

$$\left| {}^{C}\_{0}\mathbb{D}^{\nu} \overline{\mathcal{X}}(t) - \xi(t, \overline{\mathcal{X}}(t)) \right| \le b, \quad t \in [0, a]. \tag{18}$$

A function $\overline{\mathcal{X}} \in \eta$ is a solution of Equation (18) if there exists ℵ ∈ *η* with |ℵ(*t*)| ≤ *b* for all *t* ∈ [0, *a*], satisfying
$${}^{C}\_{0}\mathbb{D}^{\nu}\overline{\mathcal{X}}(t) = \xi(t, \overline{\mathcal{X}}(t)) + \aleph(t), \quad t \in [0, a].$$


Since $\overline{\mathcal{X}} \in \eta$ is a solution of Equation (18), it is also a solution of the following integral inequality

$$\left| \overline{\mathcal{X}}(t) - \overline{\mathcal{X}}(0) - \frac{1}{\Gamma(\nu)} \int\_0^t (t - \epsilon)^{\nu - 1} \xi(\epsilon, \overline{\mathcal{X}}(\epsilon)) d\epsilon \right| \leq \frac{a^{\nu}}{\Gamma(\nu+1)} b. \tag{19}$$

We claim the following result:

**Theorem 8.** *Assume that the result of Lemma 1 holds and that* 1 − (*a<sup>ν</sup>*/Γ(*ν* + 1))**N***<sup>η</sup>* > 0*. Then, the fractional-order model Equation* (9) *(and, equivalently, Equation* (12)*) is Ulam–Hyers-stable and, consequently, generalized Ulam–Hyers-stable.*

**Proof.** Let X be the unique solution of Equation (12), and let $\overline{\mathcal{X}}$ satisfy Equation (18). For *b* > 0 and *t* ∈ [0, *a*], we have

$$\begin{split} \|\overline{\mathcal{X}} - \mathcal{X}\|\_{\eta} &= \sup\_{t \in [0,a]} |\overline{\mathcal{X}}(t) - \mathcal{X}(t)| \\ &= \sup\_{t \in [0,a]} \left| \overline{\mathcal{X}}(t) - \mathcal{X}(0) - \frac{1}{\Gamma(\nu)} \int\_{0}^{t} (t - \epsilon)^{\nu - 1} \xi(\epsilon, \mathcal{X}(\epsilon)) d\epsilon \right| \\ &\leq \sup\_{t \in [0,a]} \left| \overline{\mathcal{X}}(t) - \overline{\mathcal{X}}(0) - \frac{1}{\Gamma(\nu)} \int\_{0}^{t} (t - \epsilon)^{\nu - 1} \xi(\epsilon, \overline{\mathcal{X}}(\epsilon)) d\epsilon \right| \\ &\quad + \sup\_{t \in [0,a]} \frac{1}{\Gamma(\nu)} \int\_{0}^{t} (t - \epsilon)^{\nu - 1} |\xi(\epsilon, \overline{\mathcal{X}}(\epsilon)) - \xi(\epsilon, \mathcal{X}(\epsilon))| d\epsilon \\ &\leq \frac{a^{\nu}}{\Gamma(\nu+1)} b + \frac{\mathbf{N}\_{\eta}}{\Gamma(\nu)} \sup\_{t \in [0,a]} \int\_{0}^{t} (t - \epsilon)^{\nu - 1} |\overline{\mathcal{X}}(\epsilon) - \mathcal{X}(\epsilon)| d\epsilon \\ &\leq \frac{a^{\nu}}{\Gamma(\nu+1)} b + \frac{a^{\nu}}{\Gamma(\nu+1)} \mathbf{N}\_{\eta} \|\overline{\mathcal{X}} - \mathcal{X}\|\_{\eta}, \end{split}$$

which implies that
$$\|\overline{\mathcal{X}} - \mathcal{X}\|\_{\eta} \le \frac{\frac{a^{\nu}}{\Gamma(\nu+1)} b}{1 - \frac{a^{\nu}}{\Gamma(\nu+1)}\mathbf{N}\_{\eta}}.$$
Thus, from (Definitions 4.5 and 4.6, [59]), we conclude that the fractional-order model Equation (9) (and, equivalently, Equation (12)) is Ulam–Hyers-stable and, consequently, generalized Ulam–Hyers-stable. This ends the proof.


#### *4.4. Numerical Scheme*

In this section, we provide the numerical solution of the nonlinear fractional model using an appropriate iterative scheme, which is very important in mathematical modeling. We use the Adams-type predictor–corrector iterative scheme [50,51] to numerically solve our fractional-order model, Equation (9). Let us consider a uniform discretization of [0, *a*] given by *tn* = *nχ*, *n* = 0, 1, 2, . . . , *N*, where *χ* = *a*/*N* denotes the step size. Now, given any approximation,

$$\begin{split} \mathcal{Y}\_{\chi}(t\_{i}) &= (\mathcal{S}\_{\chi}(t\_{i}), \mathcal{V}\_{\chi}(t\_{i}), \mathcal{E}\_{\chi}(t\_{i}), \mathcal{Z}\_{\chi}(t\_{i}), \mathcal{H}\_{\chi}(t\_{i}), \mathcal{R}\_{\chi}(t\_{i})) \\ &\approx \mathcal{Y}(t\_{i}) = (\mathcal{S}(t\_{i}), \mathcal{V}(t\_{i}), \mathcal{E}(t\_{i}), \mathcal{Z}(t\_{i}), \mathcal{H}(t\_{i}), \mathcal{R}(t\_{i})), \end{split}$$

we obtain the next approximation Y*χ*(*ti*+1) using the Adams-type predictor–corrector iterative scheme, as follows:

$$\begin{split} \mathcal{S}\_{n+1} &= \mathcal{S}\_0 + \frac{\chi^{\nu}}{\Gamma(\nu + 2)} \Big( \Lambda^{\nu} + a^{\nu} \mathcal{V}\_{n+1}^{p} - \beta^{\nu} \mathcal{S}\_{n+1}^{p} \mathcal{Z}\_{n+1}^{p} - k\_1 \mathcal{S}\_{n+1}^{p} \Big) \\ &+ \frac{\chi^{\nu}}{\Gamma(\nu + 2)} \sum\_{j=0}^{n} \theta\_{j,n+1} \Big( \Lambda^{\nu} + a^{\nu} \mathcal{V}\_{j} - \beta^{\nu} \mathcal{S}\_{j} \mathcal{Z}\_{j} - k\_1 \mathcal{S}\_{j} \Big), \end{split} \tag{20}$$

$$\mathcal{V}\_{n+1} = \mathcal{V}\_0 + \frac{\chi^\nu}{\Gamma(\nu+2)} \left\{ \mathbf{c}^\nu \mathcal{S}\_{n+1}^p - k\_2 \mathcal{V}\_{n+1}^p + \sum\_{j=0}^n \theta\_{j,n+1} \left( \mathbf{c}^\nu \mathcal{S}\_j - k\_2 \mathcal{V}\_j \right) \right\},\tag{21}$$

$$\mathcal{E}\_{n+1} = \mathcal{E}\_0 + \frac{\chi^{\nu}}{\Gamma(\nu + 2)} \left\{ \beta^{\nu} \mathcal{S}\_{n+1}^{p} \mathcal{Z}\_{n+1}^{p} - k\_3 \mathcal{E}\_{n+1}^{p} + \sum\_{j=0}^{n} \theta\_{j,n+1} \left( \beta^{\nu} \mathcal{S}\_{j} \mathcal{Z}\_{j} - k\_3 \mathcal{E}\_{j} \right) \right\}, \tag{22}$$

$$\mathcal{Z}\_{n+1} = \mathcal{Z}\_0 + \frac{\chi^\nu}{\Gamma(\nu+2)} \left\{ \gamma^\nu \mathcal{E}\_{n+1}^p - k\_4 \mathcal{Z}\_{n+1}^p + \sum\_{j=0}^n \theta\_{j,n+1} \left( \gamma^\nu \mathcal{E}\_j - k\_4 \mathcal{Z}\_j \right) \right\},\tag{23}$$

$$\mathcal{H}\_{n+1} = \mathcal{H}\_0 + \frac{\chi^\nu}{\Gamma(\nu + 2)} \left\{ \phi^\nu \mathcal{Z}\_{n+1}^p - k\_5 \mathcal{H}\_{n+1}^p + \sum\_{j=0}^n \theta\_{j,n+1} \left( \phi^\nu \mathcal{Z}\_j - k\_5 \mathcal{H}\_j \right) \right\},\tag{24}$$

$$\mathcal{R}\_{n+1} = \mathcal{R}\_0 + \frac{\chi^\nu}{\Gamma(\nu+2)} \left\{ \sigma^\nu \mathcal{H}\_{n+1}^p - d^\nu \mathcal{R}\_{n+1}^p + \sum\_{j=0}^n \theta\_{j,n+1} \left( \sigma^\nu \mathcal{H}\_j - d^\nu \mathcal{R}\_j \right) \right\},\tag{25}$$

where

$$\begin{aligned} \mathcal{S}\_{n+1}^{p} &= \mathcal{S}\_{0} + \frac{1}{\Gamma(\nu)} \sum\_{j=0}^{n} b\_{j,n+1} (\Lambda^{\nu} + a^{\nu} \mathcal{V}\_{j} - \beta^{\nu} \mathcal{S}\_{j} \mathcal{Z}\_{j} - k\_{1} \mathcal{S}\_{j}), \\ \mathcal{V}\_{n+1}^{p} &= \mathcal{V}\_{0} + \frac{1}{\Gamma(\nu)} \sum\_{j=0}^{n} b\_{j,n+1} (c^{\nu} \mathcal{S}\_{j} - k\_{2} \mathcal{V}\_{j}), \\ \mathcal{E}\_{n+1}^{p} &= \mathcal{E}\_{0} + \frac{1}{\Gamma(\nu)} \sum\_{j=0}^{n} b\_{j,n+1} (\beta^{\nu} \mathcal{S}\_{j} \mathcal{Z}\_{j} - k\_{3} \mathcal{E}\_{j}), \\ \mathcal{Z}\_{n+1}^{p} &= \mathcal{Z}\_{0} + \frac{1}{\Gamma(\nu)} \sum\_{j=0}^{n} b\_{j,n+1} (\gamma^{\nu} \mathcal{E}\_{j} - k\_{4} \mathcal{Z}\_{j}), \\ \mathcal{H}\_{n+1}^{p} &= \mathcal{H}\_{0} + \frac{1}{\Gamma(\nu)} \sum\_{j=0}^{n} b\_{j,n+1} (\phi^{\nu} \mathcal{Z}\_{j} - k\_{5} \mathcal{H}\_{j}), \\ \mathcal{R}\_{n+1}^{p} &= \mathcal{R}\_{0} + \frac{1}{\Gamma(\nu)} \sum\_{j=0}^{n} b\_{j,n+1} (\sigma^{\nu} \mathcal{H}\_{j} - d^{\nu} \mathcal{R}\_{j}), \end{aligned} \tag{26}$$

with

$$\theta\_{i,n+1} = \begin{cases} n^{\nu+1} - (n-\nu)(n+1)^{\nu}, & \text{if} \quad i = 0, \\ (n-i+2)^{\nu+1} - 2(n-i+1)^{\nu+1} + (n-i)^{\nu+1}, & \text{if} \quad 1 \le i \le n, \\ 1, & \text{if} \quad i = n+1, \end{cases}$$

and

$$b\_{i,n+1} = \frac{\chi^\nu}{\nu}((n-i+1)^\nu - (n-i)^\nu).$$
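To make the scheme concrete, the following sketch implements the predictor–corrector iteration for a single Caputo fractional equation *D^ν x = f(t, x)*; the six model equations are handled componentwise in exactly the same way. The function name `adams_pc` and the test right-hand sides are illustrative, not taken from the paper.

```python
import math

def adams_pc(f, x0, nu, a, N):
    """Adams-type predictor-corrector for the Caputo equation D^nu x = f(t, x),
    x(0) = x0, on [0, a] with N uniform steps of size chi = a / N."""
    chi = a / N
    t = [n * chi for n in range(N + 1)]
    x = [x0] + [0.0] * N
    for n in range(N):
        # predictor: product rectangle rule with weights
        # b_{j,n+1} = (chi^nu / nu) * ((n - j + 1)^nu - (n - j)^nu)
        xp = x0 + sum(
            (chi**nu / nu) * ((n - j + 1)**nu - (n - j)**nu) * f(t[j], x[j])
            for j in range(n + 1)
        ) / math.gamma(nu)

        # corrector weights theta_{j,n+1} (the weight for j = n + 1 equals 1)
        def theta(j):
            if j == 0:
                return n**(nu + 1) - (n - nu) * (n + 1)**nu
            return ((n - j + 2)**(nu + 1) - 2 * (n - j + 1)**(nu + 1)
                    + (n - j)**(nu + 1))

        x[n + 1] = x0 + chi**nu / math.gamma(nu + 2) * (
            f(t[n + 1], xp) + sum(theta(j) * f(t[j], x[j]) for j in range(n + 1))
        )
    return t, x
```

For *ν* = 1 the scheme reduces to the classical one-step Adams–Bashforth–Moulton method, and for a constant right-hand side the corrector reproduces the exact solution *x(t) = x₀ + t^ν/Γ(ν + 1)*, which gives a convenient check of the weight formulas.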

**Theorem 9.** *The numerical scheme given by Equations* (20)*–*(26) *is stable.*

**Proof.** The proof follows that of Theorem 3.3 in [61] (see also Theorem 5.1 in [62]).

#### **5. Numerical Simulations**

In this section, we present numerical simulations of the model using the parameter values listed in Table 2, which were estimated using real data from the measles outbreak in Nigeria [2]. From Equation (10), it is clear that the basic reproduction number varies according to the value of the fractional order *ν*. Thus, for fractional orders *ν* ∈ {1, 0.9, 0.8, 0.7, 0.6}, the corresponding values of the basic reproduction number *R*<sup>0</sup> are, respectively, 3.13, 2.60, 2.15, 1.76, and 1.44. From Figure 5, it is clear that *R*<sup>0</sup> is an increasing function of the fractional order *ν*.

**Figure 5.** The basic reproduction number *R*<sup>0</sup> versus the fractional order *ν*. All other parameter values are listed in Table 2.

Figure 6a shows the combined effect of the fractional order and vaccination on the basic reproduction number, while Figure 6b shows the combined effect of the fractional order and hospitalization on the basic reproduction number. We see that *R*<sup>0</sup> decreases as the vaccination rate *c* increases. Note that for *ν* = 1 and *c* ∈ {10<sup>−6</sup>, 10<sup>−3</sup>, 10<sup>−2</sup>, 10<sup>−1</sup>}, the basic reproduction number is *R*<sup>0</sup> ∈ {3.13, 2.45, 0.83, 0.11}; for *ν* = 0.9 and *c* ∈ {10<sup>−6</sup>, 10<sup>−3</sup>, 10<sup>−2</sup>, 10<sup>−1</sup>}, the basic reproduction number is *R*<sup>0</sup> ∈ {2.60, 1.99, 0.76, 0.13}; and for *ν* = 0.8 and *c* ∈ {10<sup>−6</sup>, 10<sup>−3</sup>, 10<sup>−2</sup>, 10<sup>−1</sup>}, the basic reproduction number is *R*<sup>0</sup> ∈ {2.15, 1.61, 0.69, 0.15}. This shows that the measles epidemic can be controlled if the vaccination coverage is very high (indeed, from *c* = 0.01 onward, *R*<sup>0</sup> is less than one). Moreover, effectively caring for the sick makes it possible to reduce the number of infected. Indeed, *R*<sup>0</sup> decreases when the hospitalization rate *φ* increases. Figure 7 combines mass vaccination with efficient treatment in the basic reproduction number, while the combined effect of individual protection and effective healthcare is depicted in Figure 8. It is clear that when these two control strategies are at their high levels, the basic reproduction number is less than one, which implies the end of the epidemic.

The effect of the fractional order on the dynamics of the measles model is depicted in Figures 9 and 10. It is clear that susceptible (as well as vaccinated) individuals decrease according to the decrease of the fractional order *ν*, while recovered individuals increase (Figure 9). The populations of all infected classes (E, I, and H) increase when the fractional order decreases (Figure 10). Nevertheless, we can observe in Figure 10 that the epidemic peak occurs earlier as the fractional order *ν* decreases. For example, the total number of infected individuals in the latent stage *E* reached its maximum value (11,000,000) at the 160th week after the beginning of the epidemic for *ν* = 1, while this peak time was shifted back to the 15th week.

The effects of the two control measures, namely, vaccination and effective medical care, are depicted in Figures 11 and 12. As in the case of the basic reproduction number, it is clear from Figure 11 that a high level of vaccination coverage can decrease the number of new cases of measles. Similarly, we see in Figure 12 that better care for patients (consisting of systematic hospitalization of new cases from the first symptoms of the disease) will allow for better control of the epidemic. Thus, the combination of these two control measures will help in the fight against the disease spread.

**Figure 6.** The basic reproduction number *R*<sup>0</sup> versus (**a**) the vaccination rate *c* and the fractional order *ν*; (**b**) the hospitalized rate *φ* and the fractional order *ν*.

**Figure 7.** The basic reproduction number *R*<sup>0</sup> versus the vaccination rate *c* and the hospitalization rate *φ*.

**Figure 8.** The basic reproduction number *R*<sup>0</sup> versus the transmission rate *β* and the recovered rate *σ*.

**Figure 9.** The time series of non-infected state variables of the model Equation (9) with the parameter values listed in Table 2 and different values for the fractional order derivatives *ν*.

**Figure 10.** The time series of infected state variables of the model Equation (9) with the parameter values listed in Table 2 and different values for the fractional order derivatives *ν*.

**Figure 11.** The time series of the total infected population *E*(*t*) + *I*(*t*) with the parameter values listed in Table 2, except the vaccination rate *c* and the fractional order derivative *ν*, which vary.

**Figure 12.** The time series of the total infected population E(*t*) + I(*t*) with the parameter values listed in Table 2, except the hospitalized rate *φ* and the fractional order derivative *ν*, which vary.

#### **6. Conclusions and Perspectives**

The main objective of this work was to compare the quantitative dynamics of an epidemic model of measles with the integer derivative and with the fractional derivative (in the sense of Caputo). Thus, we replaced the integer derivative with the Caputo derivative in an existing measles compartmental model, which took into account vaccination and hospitalized individuals. We performed a global sensitivity analysis by computing the partial rank correlation coefficient between the model parameters, the basic reproduction number, and each state variable of the model. For the fractional model, we computed the basic reproduction number, which is a function of the model parameters and the fractional-order parameter. We proved the local and global stability of the disease-free equilibrium, as well as the local stability of the endemic equilibrium point. The existence and uniqueness of the solutions were established, and the stability of the fractional model was studied using the Ulam–Hyers stability method. Simulations via the Adams-type predictor–corrector iterative scheme were conducted to validate our theoretical results and to assess the impact of the variation of the fractional order on the disease dynamics. Indeed, the simulation results reveal that the model with the Caputo fractional derivative has a different quantitative behavior than the model with the integer derivative. This suggests that, depending on the fractional order, we can forecast the number of total cases in a given interval of time.

It is important to note that some other aspects, such as limited medical resources, mass vaccination, and other fractional derivative operators (Caputo–Fabrizio, Atangana–Baleanu, piecewise derivatives), represent direct perspectives of this work. Moreover, there is evidence that the measles vaccine is imperfect [63]. Thus, another future direction of this work would be to consider that some individuals who received the measles vaccine may still become infected.

**Author Contributions:** Conceptualization, H.A., R.F. and B.S.S.; methodology, H.A., R.F. and B.S.S.; software, H.A. and R.F.; validation, H.A., R.F., B.S.S. and H.P.E.F.; formal analysis, H.A. and R.F.; investigation, H.A. and R.F.; resources, H.A.; data curation, H.A., R.F. and B.S.S.; writing—original draft preparation, H.A.; writing—review and editing, H.A., R.F., B.S.S. and H.P.E.F.; visualization, H.A.; supervision, H.A. and H.P.E.F.; project administration, H.A. and H.P.E.F.; funding acquisition, H.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors thank the Editor and the reviewers for their comments and suggestions, which helped us improve the work.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Local Structure of Convex Surfaces near Regular and Conical Points**

**Alexander Plakhov 1,2**


**Abstract:** Consider a point on a convex surface in R*d*, *d* ≥ 2, and a plane of support Π to the surface at this point. Draw a plane parallel with Π cutting off a part of the surface. We study the limiting behavior of this part of the surface when the plane approaches the point, remaining always parallel with Π. More precisely, we study the limiting behavior of the normalized surface area measure in *Sd*−<sup>1</sup> induced by this part of the surface. In this paper, we consider two cases: (a) when the point is regular and (b) when it is singular conical, that is, the tangent cone at the point does not contain straight lines. In Case (a), the limit is the atom located at the outward normal vector to Π, and in Case (b), the limit is equal to the measure induced by the part of the tangent cone cut off by a plane.

**Keywords:** convex surfaces; surface area measure of a convex body; Newton's problem of minimal resistance

**MSC:** 52A20; 26B25

#### **1. Introduction**

Consider a convex compact set *C* with a nonempty interior in Euclidean space R*d*, *d* ≥ 2. Let *r*<sup>0</sup> ∈ *∂C* be a point on its boundary, and let Π be a plane of support to *C* at *r*0. Consider the part of the boundary *∂C* containing *r*<sup>0</sup> and bounded by a plane parallel with Π. We are interested in studying the limiting properties of this part of the boundary when the bounding plane approaches Π.

In what follows, a convex compact set with a nonempty interior will be called a convex body.

The point *r*<sup>0</sup> ∈ *∂C* is called *regular* if the plane of support at this point is unique and *singular* otherwise. It is well known that regular points form a full-measure set in *∂C*.

Let *e* denote the outward unit normal vector to Π. Take *t* > 0, and let Π*<sup>t</sup>* be the plane parallel with Π at the distance *t* from it, on the side opposite to the normal vector. Thus, the plane Π = Π<sup>0</sup> is given by the equation ⟨*r* − *r*0, *e*⟩ = 0 and Π*<sup>t</sup>* by the equation ⟨*r* − *r*0, *e*⟩ = −*t*. The body *C* is contained in the closed half-space {*r* : ⟨*r* − *r*0, *e*⟩ ≤ 0}. Here and in what follows, ⟨· , ·⟩ means the scalar product.

Consider the convex body:

$$\mathcal{C}\_t = \mathcal{C} \cap \{ r : \langle r - r\_{0}, e \rangle \ge -t \}.$$

In other words, *Ct* is the part of *C* cut off by the plane Π*t*. The boundary of *Ct* is the union of the convex set of codimension 1:

$$B\_t = \mathbb{C} \cap \{ r : \langle r - r\_{0}, e \rangle = -t \} \tag{1}$$

and the convex surface:

$$S\_t = \partial \mathbb{C} \cap \{ r : \langle r - r\_0, e \rangle \ge -t \}; \tag{2}$$

**Citation:** Plakhov, A. Local Structure of Convex Surfaces near Regular and Conical Points. *Axioms* **2022**, *11*, 356. https://doi.org/10.3390/ axioms11080356

Academic Editor: Hans J. Haubold

Received: 20 June 2022 Accepted: 19 July 2022 Published: 23 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

thus, *∂Ct* = *Bt* ∪ *St*.

In what follows, we will denote as |A|*<sup>m</sup>* the *m*-dimensional Hausdorff measure of the Borel set A ⊂ <sup>R</sup>*d*. By default, |·| means |·|*d*−1.

Let *nr* denote the outward unit normal to *C* at a regular point *r* ∈ *∂C*, and let *S* be a Borel subset of *∂C*. *The surface area measure induced by S* is the Borel measure *ν<sup>S</sup>* defined in *Sd*−<sup>1</sup> satisfying

$$\nu\_{\mathbb{S}}(\mathcal{A}) := |\{ r \in \mathbb{S} : n\_r \in \mathcal{A} \}|$$

for any Borel subset A ⊂ *<sup>S</sup>d*−1. In the case when *<sup>S</sup>* coincides with *<sup>∂</sup>C*, we obtain the wellknown measure *ν∂<sup>C</sup>* called the *surface area measure of the convex body C*. For this measure, the following well-known relation takes place:

$$\int\_{S^{d-1}} n \,\nu\_{\partial \mathbb{C}}(dn) = \vec{0}.\tag{3}$$

Denote by *ν<sup>t</sup>* the normalized measure induced by the surface *St*; more precisely,

$$\nu\_t := \frac{1}{|B\_t|} \,\nu\_{\mathbb{S}\_t}.$$

That is, for any Borel set A ⊂ *<sup>S</sup>d*−1, it holds

$$\nu\_{t}(\mathcal{A}) = \frac{1}{|B\_{t}|} |\{ r \in S\_{t} : n\_{r} \in \mathcal{A} \}|.$$

The surface area measure of *∂Ct* equals *ν∂Ct* = |*Bt*|*δ*−*<sup>e</sup>* + |*Bt*|*νt*; hence,

$$\int\_{S^{d-1}} n \, d\nu\_{\partial \mathbb{C}\_t}(n) = |B\_t|\Big({-e} + \int\_{S^{d-1}} n \, \nu\_t(dn)\Big).$$

Here and in what follows, *δ<sup>e</sup>* means the unit atom supported at *e*. Applying Formula (3) to *∂Ct*, one obtains

$$\int\_{S^{d-1}} n \,\nu\_{t}(dn) = e.\tag{4}$$

We say that *ν<sup>t</sup> weakly converges* to *ν*<sup>∗</sup> as *t* → 0 and denote lim*t*→<sup>0</sup> *ν<sup>t</sup>* = *ν*∗, if for any continuous function *f* on *Sd*−1, it holds

$$\lim\_{t \to 0} \int\_{S^{d-1}} f(n) \, \nu\_t(dn) = \int\_{S^{d-1}} f(n) \, \nu\_\*(dn).$$

Similarly, we say that *ν*<sup>∗</sup> is a *weak partial limit* of the measure *νt*, if there exists a sequence of positive numbers *ti*, *<sup>i</sup>* ∈ N converging to 0 such that, for any continuous function *f* on *Sd*−1, it holds

$$\lim\_{i \to \infty} \int\_{S^{d-1}} f(n) \, \nu\_{t\_i}(dn) = \int\_{S^{d-1}} f(n) \, \nu\_\*(dn).$$

In this article, we are going to study the limiting properties of the measure *ν<sup>t</sup>* as *t* → 0. One such property is derived immediately. Let *ν*<sup>∗</sup> be a weak limit or a weak partial limit of *νt*. Passing to the limit *t* → 0 or to the limit *ti* → 0 in Formula (4), one obtains

$$\int\_{S^{d-1}} n \,\nu\_\*(dn) = e.\tag{5}$$
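Relation (4) can be checked directly in dimension *d* = 2, where *C* is a convex polygon and *ν<sup>t</sup>* is a finite sum of atoms at the edge normals. The sketch below (all names are illustrative) clips a counterclockwise polygon by the half-plane ⟨*r* − *r*0, *e*⟩ ≥ −*t* and verifies that the edges of *St*, weighted by length and divided by |*Bt*|, sum to *e*.

```python
import math

def clip(poly, r0, e, t):
    """Sutherland-Hodgman clip of a CCW polygon by the half-plane <r - r0, e> >= -t."""
    g = lambda p: (p[0] - r0[0]) * e[0] + (p[1] - r0[1]) * e[1] + t
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        gp, gq = g(p), g(q)
        if gp >= 0:
            out.append(p)
        if (gp >= 0) != (gq >= 0):
            s = gp / (gp - gq)  # intersection with the cutting line
            out.append((p[0] + s * (q[0] - p[0]), p[1] + s * (q[1] - p[1])))
    return out

def mean_normal(poly, r0, e, t):
    """Integral of n over nu_t: sum of length * outward normal over S_t, over |B_t|."""
    cut = clip(poly, r0, e, t)
    bt, sx, sy = 0.0, 0.0, 0.0
    for i in range(len(cut)):
        p, q = cut[i], cut[(i + 1) % len(cut)]
        dx, dy = q[0] - p[0], q[1] - p[1]
        length = math.hypot(dx, dy)
        if length < 1e-12:
            continue
        nx, ny = dy / length, -dx / length  # outward normal (CCW orientation)
        if nx * e[0] + ny * e[1] < -1 + 1e-7:  # normal = -e: the edge lies in B_t
            bt += length
        else:
            sx, sy = sx + length * nx, sy + length * ny
    return sx / bt, sy / bt
```

For the unit square cut near a point of its top edge (a regular point), and for a triangle cut near its apex (a conical point), the weighted normals return exactly *e* = (0, 1), in agreement with (4).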

The *tangent cone* to *C* at *r*<sup>0</sup> ∈ *∂C* is the closure of the union of all rays with vertex at *r*<sup>0</sup> that intersect *C* \ {*r*0}. Equivalently, the tangent cone at *r*<sup>0</sup> is the smallest closed cone with the vertex at *r*<sup>0</sup> that contains *C*; see Figure 1.

**Figure 1.** The tangent cone and the normal cone to a convex body *C*.

If the tangent cone at *r*<sup>0</sup> is a half-space, then the point *r*<sup>0</sup> is regular, and vice versa.

The *normal cone* to *C* at *r*<sup>0</sup> is the union of all rays with vertex at *r*<sup>0</sup> whose director vector is the outward normal to a plane of support at *r*0. It is denoted as *N*(*r*0). An equivalent definition is the following: the normal cone at *r*<sup>0</sup> is the set of points *r* that satisfy ⟨*r* − *r*0, *r*′ − *r*0⟩ ≤ 0 for all *r*′ ∈ *C*. The normal cone to a convex body does not contain straight lines. Both tangent and normal cones are, of course, convex sets.

If the dimension of *N*(*r*0) equals *d* (equivalently, if the tangent cone does not contain straight lines), then *r*<sup>0</sup> is called a *conical point* of *C*. If the dimension of *N*(*r*0) equals 1, then *r*<sup>0</sup> is regular, and vice versa. In the intermediate case, that is if the dimension of *N*(*r*0) is greater than 1, but smaller than *d*, *r*<sup>0</sup> is called a *ridge point*. This notation goes back to Pogorelov [1].

The motivation for this study comes, to a great extent, from extremal problems in classes of convex bodies and, in particular, from Newton's problem of least resistance for convex bodies [2]. It is natural to try to develop a geometric method of the small variation of convex bodies for such problems, and perhaps, the simplest way would be cutting a small part of the body by a plane. This method proved itself to be effective in the case of Newton's problem. Let us describe this problem in some detail.

The problem in a class of radially symmetric bodies was first stated and solved by Newton himself in 1687 in [3]. The more general version of the problem was posed by Buttazzo and Kawohl in 1993 in [2]. This general problem can be formulated in the functional form as follows:

Find the smallest value of the functional

$$\int\!\!\int\_{\Omega} \frac{1}{1 + |\nabla u(x, y)|^2}\, dx\, dy \tag{6}$$

in the class of convex functions *<sup>u</sup>* : <sup>Ω</sup> <sup>→</sup> <sup>R</sup> satisfying 0 <sup>≤</sup> *<sup>u</sup>* <sup>≤</sup> *<sup>M</sup>*, where <sup>Ω</sup> <sup>⊂</sup> <sup>R</sup><sup>2</sup> is a planar convex body and *M* > 0.

The physical meaning of this problem is as follows: find the optimal streamlined shape of a convex body moving downwards through an extremely rarefied medium, provided that the body–particle collisions are perfectly elastic.

Problem (6) (along with its further generalizations) has been studied in various papers including [4–13], but has not been solved completely until now.

It was conjectured in 1995 in [6] that the slope of the graph of an optimal function near the zero level set *L*<sup>0</sup> = {(*x*, *y*) : *u*(*x*, *y*) = 0} equals 1. This conjecture was numerically disproven by Wachsmuth (personal communication) in the case when *L*<sup>0</sup> has an empty interior and, therefore, is a line segment. Moreover, numerical simulation shows that the infimum of |∇*u*| in the complement of *L*<sup>0</sup> is strictly greater than 1.

On the other hand, this conjecture was proven by the author in [14] in the case when *L*<sup>0</sup> has a nonempty interior. More precisely, it was proven that if *u* minimizes functional (6), then for almost all (*x*, *y*) ∈ *∂L*0, it holds

$$\lim\_{(x', y')\, (\notin L\_0) \to (x, y)} |\nabla u(x', y')| = 1.$$

The proof is based on the results concerning local properties of convex surfaces near ridge points in the case *d* = 3. These results were formulated, with the proofs being briefly outlined, in [14].

**Remark 1.** *The limiting behavior of ν<sup>t</sup> in the case d* = 2 *is quite simple. In this case, the tangent cone is an angle, which degenerates to a half-plane if the point is regular. We will call it the* tangent angle*. Let the tangent angle to C* <sup>⊂</sup> <sup>R</sup><sup>2</sup> *at r*<sup>0</sup> *be given by*

$$\langle r - r\_{0}, e\_1 \rangle \le 0, \quad \langle r - r\_{0}, e\_2 \rangle \le 0, \quad |e\_1| = 1, \ |e\_2| = 1,$$

*and e be given by*

*e* = *λ*1*e*<sup>1</sup> + *λ*2*e*2, *λ*<sup>1</sup> ≥ 0, *λ*<sup>2</sup> ≥ 0, |*e*| = 1.

*Thus, e*<sup>1</sup> *and e*<sup>2</sup> *are the outward unit normals to the sides of the angle, and e is the outward unit normal to a line of support at r*0*. Then, the limiting measure is the sum of two atoms:*

$$\lim\_{t \to 0} \nu\_t = \lambda\_1 \delta\_{e\_1} + \lambda\_2 \delta\_{e\_2}.$$

*The proof of this relation is simple and is left to the reader.*

*Note that if the point r*<sup>0</sup> *is regular, then e*<sup>1</sup> = *e*<sup>2</sup> = *e. It may also happen that the point is singular, that is, e*<sup>1</sup> ≠ *e*2*, and e coincides with one of the vectors e*<sup>1</sup> *and e*2*. In both cases, the limiting measure is an atom:*

$$\lim\_{t \to 0} \nu\_t = \delta\_{e}.$$

The limiting behavior of *ν<sup>t</sup>* is different for different kinds of points:

(a) If the point *r*<sup>0</sup> is regular, then the limiting measure is an atom.

(b) If *r*<sup>0</sup> is a conical point, then the limiting measure coincides with the measure induced by the part of the boundary of the tangent cone cut off by a plane Π*t*, *t* > 0 (note that all the induced measures with *t* > 0 are proportional).

(c) The case of ridge points is the most interesting. In this case, the limiting measure may not exist, and the characterization of all possible partial limits is a difficult task.

Still, the study is nontrivial also in Cases (a) and (b). In this paper, we restrict ourselves to these cases, while Case (c) is postponed to the future. The main results of the paper are contained in the following Theorems 1 and 2.

**Theorem 1.** *If r*<sup>0</sup> *is a regular point of ∂C, then*

$$\lim\_{t \to 0} \nu\_t = \delta\_e. \tag{7}$$

Let *r*<sup>0</sup> be a conical point, *K* be the tangent cone at *r*0, *S*ˆ *<sup>t</sup>* be the part of *∂K* containing *r*<sup>0</sup> cut off by the plane Π*t*, and *B*ˆ*<sup>t</sup>* be the intersection of the cone with the cutting plane Π*t*, *t* > 0, that is

$$
\hat{S}\_t = \partial K \cap \{ r : \langle r - r\_{0}, e \rangle \ge -t \} \quad \text{and} \quad \hat{B}\_t = K \cap \{ r : \langle r - r\_{0}, e \rangle = -t \}.
$$

Let *Kt* = *K* ∩ {*r* : ⟨*r* − *r*0, *e*⟩ ≥ −*t*} be the part of the cone cut off by the plane Π*t*; its boundary is *∂Kt* = *S*ˆ*<sup>t</sup>* ∪ *B*ˆ*t*.

All measures induced by *S*ˆ *t* are proportional, that is the measure:

$$\nu\_{\*} := \frac{1}{|\hat{B}\_t|} \,\nu\_{\hat{S}\_t}$$

does not depend on *t*.

**Theorem 2.** *If r*<sup>0</sup> *is a conical point of ∂C, then*

$$\lim\_{t \to 0} \nu\_t = \nu\_\*. \tag{8}$$

#### **2. Proof of Theorem 1**

The proof is based on several propositions.

Consider a convex set *<sup>D</sup>* <sup>⊂</sup> <sup>R</sup>*d*−1, and let *<sup>A</sup>* <sup>=</sup> <sup>|</sup>*D*<sup>|</sup> be its (*<sup>d</sup>* <sup>−</sup> <sup>1</sup>)-dimensional volume and *<sup>P</sup>* = |*∂D*|*d*−<sup>2</sup> be the (*<sup>d</sup>* − <sup>2</sup>)-dimensional volume of its boundary.

**Proposition 1.** *If D contains a circle of radius a, then*

$$P \le \frac{d-1}{a}A.\tag{9}$$

**Proof.** Let *ds* ⊂ *∂D* be an infinitesimal element of the boundary of *D* and denote by *p*(*ds*) its (*d* − 2)-dimensional volume. Consider the pyramid with the vertex at the center *O* of the circle and with the base *ds*, that is the union of line segments joining *O* with the points of *ds*. Let *A*(*ds*) be the element of the (*d* − 1)-dimensional volume of this pyramid; see Figure 2. Then, we have

$$A(ds) \ge \frac{a}{d-1} \, p(ds),$$

and therefore,

$$A = \int\_{\partial D} A(ds) \ge \frac{a}{d-1} \int\_{\partial D} p(ds) = \frac{a}{d-1} P.$$

From here follows Inequality (9).

**Figure 2.** The convex set *D* containing a circle of radius *a*.
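As a quick sanity check of Inequality (9) in the planar case *d* − 1 = 2 (so *P* is the perimeter and *A* the area), one can verify *P* ≤ (2/*a*)*A* numerically for polygons containing a disc of radius *a*; for a regular polygon circumscribed about the disc the bound is attained with equality, since its apothem equals *a*. The helper names below are illustrative.

```python
import math

def perimeter_area(poly):
    """Perimeter and (shoelace) area of a simple CCW polygon."""
    P, A2 = 0.0, 0.0
    for i in range(len(poly)):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % len(poly)]
        P += math.hypot(x1 - x0, y1 - y0)
        A2 += x0 * y1 - x1 * y0  # twice the signed area
    return P, A2 / 2.0

def circumscribed_ngon(n, a):
    """Regular n-gon circumscribed about a disc of radius a centered at the origin."""
    R = a / math.cos(math.pi / n)  # circumradius; the apothem equals a
    return [(R * math.cos(2 * math.pi * (k + 0.5) / n),
             R * math.sin(2 * math.pi * (k + 0.5) / n)) for k in range(n)]
```

For every circumscribed regular polygon, *P* = (2/*a*)*A* exactly (the pyramids in the proof all have height exactly *a*), while a polygon containing a strictly smaller disc gives a strict inequality.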

Consider Euclidean space <sup>R</sup>*<sup>d</sup>* with the coordinates (*x*, *<sup>z</sup>*), *<sup>x</sup>* = (*x*1, ... , *xd*−1), and fix *t* > 0 and 0 < *ϕ* < *π*/2.

**Proposition 2.** *Let a convex body <sup>C</sup>* <sup>⊂</sup> <sup>R</sup>*<sup>d</sup> be contained between the planes <sup>z</sup>* <sup>=</sup> <sup>0</sup> *and <sup>z</sup>* <sup>=</sup> *t, <sup>t</sup>* <sup>&</sup>gt; <sup>0</sup>*. Let <sup>D</sup> be the image of <sup>C</sup> under the natural projection of* <sup>R</sup>*<sup>d</sup> on the x-plane,* (*x*, *<sup>z</sup>*) → *x, and let <sup>P</sup>* = |*∂D*|*d*−<sup>2</sup> *be the* (*<sup>d</sup>* − <sup>2</sup>)*-dimensional volume of <sup>∂</sup>D. Let a domain* U ⊂ *<sup>∂</sup><sup>C</sup> be such that the outward normal nr* = (*nr*,1, ... *nr*,*d*−1, *nr*,*d*) *at each regular point <sup>r</sup>* ∈ U *satisfies* |*nr*,*d*| ≤ cos *<sup>ϕ</sup>. (In other words, the angles between nr for r* ∈ U *and the vectors* ±(0, . . . , 0, 1) *are* ≥ *ϕ.) Then,*

$$|\mathcal{U}| \le \frac{2tP}{\sin \varphi}.$$

**Proof.** The body *C* is bounded below by the graph of a convex function, say *u*1, and above by the graph of a concave function, say *u*2; see Figure 3. Both functions are defined on *D*. That is, we have

$$C = \{ (x, z) : x \in D, \ u\_1(x) \le z \le u\_2(x) \}.$$

**Figure 3.** The body *C* between two parallel planes *z* = 0 and *z* = *t* is shown. Here, U is represented by the union of two curves bounded by the points.

Let U*<sup>i</sup>* (*i* = 1, 2) denote the intersection of U with the graph of *ui*. Clearly, if a point (*x*, *ui*(*x*)) is regular and belongs to U*i*, then |∇*ui*(*x*)| ≥ tan *ϕ*.

For 0 ≤ *z* ≤ *t*, denote by *Pz* the (*d* − 2)-dimensional volume of the set:

$$L\_z = \{ x : u\_1(x) = z \ \text{and} \ (x, u\_1(x)) \in \mathcal{U}\_1 \}.$$

One clearly has *Pz* ≤ *P*. Let *s* be the (*d* − 2)-dimensional parameter in *Lz*, and let *ds* be the element of the (*d* − 2)-dimensional volume in *Lz*. Denote by *x*(*z*,*s*) the point in *Lz* corresponding to the parameter *s*. Then, the (*d* − 1)-dimensional volume of U<sup>1</sup> equals

$$|\mathcal{U}\_1| = \int\_0^t dz \int\_{L\_z} \sqrt{1 + \frac{1}{|\nabla u\_1(x(z,s))|^2}} \, ds \le \int\_0^t P\_z \sqrt{1 + \cot^2 \varphi} \, dz \le \frac{tP}{\sin \varphi}.$$

The same argument holds for U2. It follows that |U| = |U1| + |U2| ≤ 2*tP*/ sin *ϕ*.

**Proposition 3.** *If a convex set in* <sup>R</sup>*d*−<sup>1</sup> *contains <sup>d</sup>* <sup>−</sup> <sup>1</sup> *mutually orthogonal line segments of length 1, then it also contains a ball of radius c* = 1/2(*d* − 1)*.*

**Proof.** Denote the convex set by *D* and the segments by [*A*<sup>0</sup>*<sup>i</sup>*, *A*<sup>1</sup>*<sup>i</sup>*], *i* = 1, ... , *d* − 1. Since all the points *A<sup>j</sup><sup>i</sup>* lie in *D*, each convex combination of the form

$$P\_J = \frac{1}{d-1} \sum\_{i=1}^{d-1} A\_i^{J(i)},$$

where *J* denotes a map {1, ... , *d* − 1} → {0, 1}, also lies in *D*. The convex hull of the set of points *PJ* is a hypercube with side length 1/(*d* − 1) and contains the ball of radius 1/2(*d* − 1) with the center at the hypercube's center.

**Proposition 4.** *Bt contains d* − 1 *mutually orthogonal line segments of length βt, where βt*/*t* → ∞ *as t* → 0*.*

**Proof.** Take a unit vector *e*′ orthogonal to *e*, and consider the 2-dimensional plane Π′ through *r*<sup>0</sup> parallel with *e* and *e*′. The intersection Π′ ∩ *C* =: *C*′ is a 2-dimensional convex body, and *r*<sup>0</sup> is a regular point on its boundary; the intersection Π′ ∩ Π*<sup>t</sup>* = *l*′*<sup>t</sup>* is a line orthogonal to *e* at the distance *t* from *r*0; the intersection Π′ ∩ *Bt* is a line segment (maybe degenerating to a point or the empty set). Equivalently, this segment is the intersection of the body *C*′ with the line *l*′*<sup>t</sup>*. Since the point *r*<sup>0</sup> ∈ *∂C*′ is regular, we conclude that the length of this segment *β*′*<sup>t</sup>* satisfies *β*′*<sup>t</sup>*/*t* → ∞ as *t* → 0.

Now, choose unit vectors *e*1, ... , *ed*−<sup>1</sup> in such a way that the set of vectors *e*1, ... , *ed*−1, *e* forms an orthonormal system in R*d*. For each *i* = 1, ... , *d* − 1, draw the 2-dimensional plane Π*<sup>i</sup>* through *r*<sup>0</sup> parallel with *e* and *ei*. The intersections Π*<sup>i</sup>* ∩ *Bt* are line segments parallel with *ei*, and therefore, they are mutually orthogonal. The lengths of these segments *β<sup>i</sup><sup>t</sup>* satisfy *β<sup>i</sup><sup>t</sup>*/*t* → ∞ as *t* → 0. Taking *βt* = min1≤*i*≤*d*−<sup>1</sup> *β<sup>i</sup><sup>t</sup>*, one comes to the statement of the proposition.

Recall that *St* is the intersection of *∂C* with the half-space {*r* : ⟨*r* − *r*0, *e*⟩ ≥ −*t*} and Π*<sup>t</sup>* is the plane of the equation ⟨*r* − *r*0, *e*⟩ = −*t*. For *ϕ* ∈ (0, *π*/2), denote by *St*,*<sup>ϕ</sup>* the part of *St* containing the regular points *r* satisfying ⟨*nr*, *e*⟩ ≤ cos *ϕ*. In other words, *St*,*<sup>ϕ</sup>* is the set of regular points *r* in *St* such that the angle between *e* and *nr* is greater than or equal to *ϕ*.

**Proposition 5.** *We have*

$$\frac{|S\_{t,\varphi}|}{|B\_t|} \to 0 \quad \text{as} \ t \to 0.$$

**Proof.** Consider a coordinate system $(x, z)$, $x = (x\_1, \ldots, x\_{d-1})$, such that the $x$-plane coincides with $\Pi\_t$ and the $z$-axis is directed along the vector $e$. For $t\_0 > 0$ sufficiently small, the intersection of $\Pi\_t$ with the interior of $C$ is nonempty for all $t \le t\_0$. The angle between $-e$ and the outward normal at each regular point of $S\_t$, $t \le t\_0$, is greater than some positive value $\varphi\_0$; that is, for any regular point $r \in S\_t$, it holds that $\langle n\_r, e\rangle \ge -\cos\varphi\_0$. Without loss of generality, one can take $\varphi < \varphi\_0$, and then, for all regular points $r \in S\_{t,\varphi}$, it holds that $|\langle n\_r, e\rangle| \le \cos\varphi$.

In the chosen coordinate system, $C\_t$ is contained between the planes $z = 0$ and $z = t$. Denote by $D\_t$ the image of $C\_t$ under the natural projection $(x, z) \mapsto x$. The domain $D\_t$ contains $B\_t$ and is contained in the $(t \cot\varphi)$-neighborhood of $B\_t$; hence, the $(d-2)$-dimensional volume of its boundary does not exceed $P\_t = |\partial B\_t|\_{d-2} + s\_{d-2}(t \cot\varphi)^{d-2}$, where $s\_{d-2} = |S^{d-2}|\_{d-2}$ denotes the area of the $(d-2)$-dimensional unit sphere.

Applying Proposition 2 to the body $C = C\_t$ and the domain $U = S\_{t,\varphi}$, one obtains

$$|S\_{t,\varphi}| \le \frac{2tP\_t}{\sin\varphi} = 2t \frac{|\partial B\_t|\_{d-2} + s\_{d-2}(t\cot\varphi)^{d-2}}{\sin\varphi}.$$

By Propositions 3 and 4, $B\_t$ contains a ball of radius $c\beta\_t$, and therefore, by Proposition 1,

$$|\partial B\_t|\_{d-2} \le \frac{d-1}{c\beta\_t} \left| B\_t \right|\_{d}$$

and additionally, $|B\_t| \ge b\_{d-1}(c\beta\_t)^{d-1}$, where $b\_{d-1}$ denotes the volume of the unit ball in $\mathbb{R}^{d-1}$. Hence,

$$\frac{|S\_{t,\varphi}|}{|B\_t|} \le 2t\,\frac{\frac{d-1}{c\beta\_t}|B\_t| + s\_{d-2}(t\cot\varphi)^{d-2}}{\sin\varphi\,|B\_t|} \le \frac{2(d-1)}{c\sin\varphi}\,\frac{t}{\beta\_t} + \frac{2 s\_{d-2}\cot^{d-2}\varphi}{c^{d-1}\, b\_{d-1}\sin\varphi}\left(\frac{t}{\beta\_t}\right)^{d-1} \to 0 \quad \text{as } t \to 0.$$

Let us now finish the proof of Theorem 1.

Recall that $\nu\_S$ is the surface area measure induced by $S$. For all $\varphi \in (0, \pi/2)$, one has

$$\nu\_t = \frac{1}{|B\_t|}\,\nu\_{S\_{t,\varphi}} + \frac{1}{|B\_t|}\,\nu\_{S\_t \setminus S\_{t,\varphi}}.$$

Proposition 5 implies that the measure $\frac{1}{|B\_t|}\,\nu\_{S\_{t,\varphi}}$ converges to 0 as $t \to 0$. Indeed, for any continuous function $f$ on $S^{d-1}$,

$$\int\_{S^{d-1}} f(n)\,\frac{1}{|B\_t|}\,\nu\_{S\_{t,\varphi}}(dn) \le \max|f|\,\frac{|S\_{t,\varphi}|}{|B\_t|} \to 0 \quad \text{as } t \to 0.$$

On the other hand, the measure $\frac{1}{|B\_t|}\,\nu\_{S\_t \setminus S\_{t,\varphi}}$ is supported in the subset of $S^{d-1}$ containing all points whose radius vector forms an angle $\le \varphi$ with $e$. It follows that each partial limit of $\frac{1}{|B\_t|}\,\nu\_{S\_t \setminus S\_{t,\varphi}}$, and therefore each partial limit of $\nu\_t$, is supported in this set. Since $\varphi > 0$ can be made arbitrarily small, one concludes that each partial limit of $\nu\_t$ is proportional to $\delta\_e$. Finally, utilizing Equality (5), which is true for each partial limit $\nu\_\*$, one concludes that the limit of $\nu\_t$ exists and is equal to $\delta\_e$.

#### **3. Proof of Theorem 2**

In the proof, we will use the well-known fact that the surface area measure is continuous with respect to the Hausdorff topology in the space of convex bodies.

More precisely, we say that a family of convex bodies $C\_t$, $t > 0$, in $\mathbb{R}^d$ converges to a convex body $C \subset \mathbb{R}^d$ as $t \to 0$ in the sense of Hausdorff, and write $C\_t \xrightarrow[t \to 0]{} C$, if for any $\varepsilon > 0$ there exists $t\_0 > 0$ such that for all $t \le t\_0$, $C\_t$ is contained in the $\varepsilon$-neighborhood of $C$ and $C$ is contained in the $\varepsilon$-neighborhood of $C\_t$.
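As a concrete aside to this definition, the two-sided max–min formula behind Hausdorff convergence can be computed directly; the sketch below uses finite point sets standing in for convex bodies, with inscribed polygons converging to the circle they approximate:

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in R^d."""
    h_ab = max(min(math.dist(a, b) for b in B) for a in A)
    h_ba = max(min(math.dist(a, b) for a in A) for b in B)
    return max(h_ab, h_ba)

def ngon(n):
    """Vertices of the regular n-gon inscribed in the unit circle."""
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

# Refining the polygon shrinks the Hausdorff distance to the finer
# approximation, mirroring C_t -> C in the sense of Hausdorff.
dists = [hausdorff(ngon(n), ngon(4 * n)) for n in (8, 16, 32)]
assert dists[0] > dists[1] > dists[2]
```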

It is well known that if $C\_t \xrightarrow[t \to 0]{} C$, then $\nu\_{\partial C\_t} \to \nu\_{\partial C}$ as $t \to 0$.

Choose $\sigma > 0$ so that $|\hat B\_\sigma| = 1$, and therefore,

$$\nu\_{\hat S\_\sigma} = \nu\_\*.\tag{10}$$

Let the origin coincide with the point $r\_0$, that is, $r\_0 = 0$; then, the homothety of a set $A$ with the center at $r\_0$ and ratio $k$ is $kA$. See Figure 4.

**Figure 4.** The tangent cone at $r\_0$, the cutting planes $\Pi\_t$ and $\Pi\_\sigma$, and the sets $B\_t$, $B\_\sigma$, and $\hat B\_\sigma$ in the case when the point $r\_0$ is conical.

**Proposition 6.** $\frac{\sigma}{t}\,B\_t \xrightarrow[t \to 0]{} \hat B\_\sigma$.

**Proof.** Note that for all positive *t*<sup>1</sup> and *t*2,

$$\frac{t\_1}{t\_2}\,\Pi\_{t\_2} = \Pi\_{t\_1} \quad \text{and} \quad \frac{t\_1}{t\_2}\,\hat B\_{t\_2} = \hat B\_{t\_1}.$$

Additionally, since the tangent cone $K$ contains $C$, $\hat B\_t$ contains $B\_t$, and so,

$$\frac{\sigma}{t}\,B\_t \subset \frac{\sigma}{t}\,\hat B\_t = \hat B\_\sigma.$$

Let now $0 < t\_1 \le t\_2$. Since $0$ and $B\_{t\_2}$ belong to $C$, so does their convex combination,

$$\frac{t\_1}{t\_2}\,B\_{t\_2} = \left(1 - \frac{t\_1}{t\_2}\right)\cdot 0 + \frac{t\_1}{t\_2}\,B\_{t\_2} \subset C.$$

On the other hand, $\frac{t\_1}{t\_2}\,B\_{t\_2} \subset \frac{t\_1}{t\_2}\,\Pi\_{t\_2} = \Pi\_{t\_1}$. It follows that $\frac{t\_1}{t\_2}\,B\_{t\_2} \subset C \cap \Pi\_{t\_1} = B\_{t\_1}$. We conclude that

$$\frac{\sigma}{t\_2}\,B\_{t\_2} \subset \frac{\sigma}{t\_1}\,B\_{t\_1},$$

that is, the sets $\frac{\sigma}{t}\,B\_t$, $t > 0$, form a nested family contained in $\hat B\_\sigma$. Suppose that $\frac{\sigma}{t}\,B\_t$ does not converge to $\hat B\_\sigma$. This implies that the closure of the union

$$\overline{\bigcup\_{t>0} \frac{\sigma}{t}\,B\_t} =: \tilde B\_{\sigma}$$

is contained in, but does not coincide with, $\hat B\_\sigma$.

The union:

$$\bigcup\_{t>0} \frac{t}{\sigma}\,\tilde B\_{\sigma} =: \tilde K$$

is a cone with the vertex at $r\_0$; it is contained in the tangent cone $K$, but does not coincide with it. On the other hand,

$$C = \bigcup\_{t \ge 0} B\_t \subset \bigcup\_{t \ge 0} \frac{t}{\sigma}\,\tilde B\_{\sigma} = \tilde K,$$

that is, $C$ is contained in the cone $\tilde K$, which is smaller than the tangent cone $K$. This contradiction proves our proposition.

From Proposition 6, it follows, in particular, that

$$\lim\_{t \to 0} \left| \frac{\sigma}{t}\,B\_t \right| = |\hat B\_\sigma| = 1,\tag{11}$$

and therefore,

$$\nu\_{\frac{\sigma}{t} B\_t} = \left| \frac{\sigma}{t}\,B\_t \right| \delta\_{-e} \longrightarrow |\hat B\_\sigma|\,\delta\_{-e} = \nu\_{\hat B\_\sigma} \quad \text{as } t \to 0. \tag{12}$$

Denote

$$\Sigma^{t}\_{\sigma} := \operatorname{conv}\left( \frac{\sigma}{t}\,B\_t \cup \{r\_0\} \right).$$

Since the convex body $\frac{\sigma}{t}\,C\_t$ contains both $r\_0$ and $\frac{\sigma}{t}\,B\_t$, we have $\Sigma^{t}\_{\sigma} \subset \frac{\sigma}{t}\,C\_t$.

Recall that $K\_\sigma$ is the part of the tangent cone cut off by the plane $\Pi\_\sigma$. We have $K\_\sigma = \operatorname{conv}(\hat B\_\sigma \cup \{r\_0\})$. Since, by Proposition 6, $\frac{\sigma}{t}\,B\_t \xrightarrow[t \to 0]{} \hat B\_\sigma$, we conclude that $\operatorname{conv}\left(\frac{\sigma}{t}\,B\_t \cup \{r\_0\}\right) \xrightarrow[t \to 0]{} \operatorname{conv}(\hat B\_\sigma \cup \{r\_0\})$, that is,

$$
\Sigma\_{\sigma}^t \xrightarrow[t \to 0]{} K\_{\sigma}.
$$

Using this relation and the double inclusion:

$$
\Sigma\_{\sigma}^{t} \subset \frac{\sigma}{t}\,C\_{t} \subset K\_{\sigma},
$$

one concludes that $\frac{\sigma}{t}\,C\_t$ converges to $K\_\sigma$ in the sense of Hausdorff, and therefore,

$$\nu\_{\frac{\sigma}{t}\,\partial C\_t} \to \nu\_{\partial K\_\sigma} \quad \text{as} \quad t \to 0.$$

Using that $\frac{\sigma}{t}\,\partial C\_t = \frac{\sigma}{t}\,S\_t \cup \frac{\sigma}{t}\,B\_t$ and $\partial K\_\sigma = \hat S\_\sigma \cup \hat B\_\sigma$, and using (12), one obtains

$$\nu\_{\frac{\sigma}{t}\,S\_t} \to \nu\_{\hat S\_\sigma} \quad \text{as} \quad t \to 0,$$

and taking account of (11), one obtains

$$\lim\_{t \to 0} \nu\_t = \lim\_{t \to 0} \frac{1}{|B\_t|}\,\nu\_{S\_t} = \frac{1}{\lim\_{t \to 0} \left|\frac{\sigma}{t}\,B\_t\right|}\,\lim\_{t \to 0} \nu\_{\frac{\sigma}{t}\,S\_t} = \nu\_{\hat S\_\sigma} = \nu\_\*.$$

Theorem 2 is proven.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** This work was supported by the Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT), within Projects UIDB/04106/2020 and UIDP/04106/2020.

**Conflicts of Interest:** The author declares no conflict of interest.

## *Article* **Pattern Formation Induced by Fuzzy Fractional-Order Model of COVID-19**

**Abeer S. Alnahdi 1,†, Ramsha Shafqat 2,\*,†, Azmat Ullah Khan Niazi 2,† and Mdi Begum Jeelani 1,\*,†**


**Abstract:** A novel coronavirus infection system is established for the analytical and computational aspects of this study, using a fuzzy fractional evolution equation (FFEE) stated in Caputo's sense for order (1, 2). The model consists of six components illustrating the coronavirus outbreak: the susceptible population **K**(*ω*), the exposed population **L**(*ω*), the total infected strength **C**(*ω*), the asymptomatically infected population **M**(*ω*), the total number of recovered humans **E**(*ω*), and the reservoir **Q**(*ω*). Numerical results, obtained using the fuzzy Laplace approach in combination with the Adomian decomposition transform, are developed to better understand the dynamical structures of the physical behavior of COVID-19. For the controlling model, such behavior on the generic characteristics of RNA in COVID-19 is also examined. The findings show that the proposed technique is effective for addressing the uncertainty issue in a pandemic situation.

**Keywords:** approximation solution; fuzzy number; fuzzy fractional order derivative; coronavirus infection system; Adomian decomposition method

**MSC:** 26A33; 34K37

#### **1. Introduction**

Recently, the entire world has been afflicted by a novel coronavirus pandemic known as the "novel coronavirus 2019", abbreviated as "nCOVID-19", which was initially reported in Wuhan, central China [1]. It has been discovered that nCOVID-19 is transmitted from animals to humans; several afflicted people claimed to have contracted the virus after visiting a local fish and wild animal market in Wuhan on 28 November [2]. Following that, other researchers confirmed that transmission can also occur from one person to another [3]. According to World Health Organization data, the number of reported laboratory-confirmed human infections in 187 countries, territories, or places around the world reached 292,142 on 21 March 2020, with 12,784 mortality cases [4]. The death rate was as high as 0.0666 in some nations, such as Italy and Spain. This confirms the severity and high infectivity of nCOVID-19. Most patients infected with nCOVID-19 have mild to moderate respiratory symptoms, such as shortness of breath, low fever, nausea, and cough. Other symptoms have been described, including gastroenteritis and neurological illnesses of varying severity [5]. nCOVID-19 is primarily transmitted by droplets from the nose when an infected person coughs or sneezes. A person is in danger of catching the virus if he or she inhales droplets from infected people in the air. As a result, avoiding gatherings and contact with other individuals is the best way to avoid contracting the virus.

To manage the flow and movement of people, the Chinese government shut down Wuhan and reduced or restricted the country's transportation system,

**Citation:** Alnahdi, A.S.; Shafqat, R.; Niazi, A.U.K.; Jeelani, M.B. Pattern Formation Induced by Fuzzy Fractional-Order Model of COVID-19. *Axioms* **2022**, *11*, 313. https:// doi.org/10.3390/axioms11070313

Academic Editors: Natália Martins, Ricardo Almeida, Cristiana João Soares da Silva and Moulay Rchid Sidi Ammi

Received: 30 May 2022 Accepted: 23 June 2022 Published: 27 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

including airplanes, trains, buses, and private cars, among other things. People have had to stay at home and have their body temperature taken every day. If they have to go outside, they are advised to wear respirators. With the spread of nCOVID-19 around the world, more governments have entered the antivirus fight in the footsteps of the Chinese government. It was reported that an increasing number of governments have begun to issue restrictions prohibiting international travel, as well as closing schools, shopping malls, and businesses. The nCOVID-19 pandemic has caused significant economic loss throughout the world, as well as huge hardship for country administrations and even all human beings. A large number of doctors and researchers also dedicated themselves to the anti-pandemic fight and conducted research based on their knowledge. They studied nCOVID-19 from a variety of perspectives, including microbiology, virology, sociology, veterinary sciences, infectious diseases, public environmental occupational health, political economy, media studies, and so on. The main countries in nCOVID-19 research include China, the United States, and Korea, as a result of the virus's early epidemic, which prompted them to begin relevant research right away. The origins of nCOVID-19 were investigated by a group of scientists. Initially, bats were thought to be the source of nCOVID-19, which is comparable to SARS (severe acute respiratory syndrome), a worldwide epidemic that began in China and other parts of the world in 2003 [6,7].

Following that, some studies linked nCOVID-19 to the 2003 SARS and 2012 MERS (Middle East respiratory syndrome) pandemics to show that there are lessons to be learned from the two earlier outbreaks. According to Lu [8], SARS-COVID, MERS-COVID, and nCOVID-19 all belong to the same family of Betacoronaviruses. Previous studies, according to Zhou, suggest that nCOVID-19 has a significant degree of resemblance to SARS-COVID and, as evidenced by [9], has the same predicted cell entry mechanism and human cell receptor use based on full-length genome phylogenetic research. Xiaolong and Mose also analyzed the high RBD (receptor binding domain) identity between nCOVID-19 and SARS-COVID and proposed that the SARS-COVID-specific human antibody CR3022 might bind potently to the nCOVID-19 RBD, with a KD of 6.3 nM. Cross-reactivity of neutralizing antibodies between SARS-COVID and nCOVID-19 remains an important issue, and novel monoclonal antibodies that bind selectively to the nCOVID-19 RBD are still needed [10]. Syed et al. determined SARS-COVID-derived B-lymphocyte epitopes and T-cell epitopes experimentally, based on previous studies of SARS-COVID immunological systems and structures, and discovered that they are similar and contain no mutation within the available nCOVID-19 sequences, which is critical for narrowing down the hunt for potent targets for an efficient vaccine against nCOVID-19. Some researchers are concentrating on the transmission and identification of the nCOVID-19 virus in people. Human-to-human transmission is widely acknowledged as a factor in the rapid spread of illnesses. Ahmed said that viral strains from the area's affected persons had been analyzed, but that there was limited genetic difference, meaning that they all descended from a single ancestor [11].
Zhou, on the other hand, claimed that the sequences of the seven conserved viral replicase domains in ORF 1ab of nCOVID-19 and SARS-COVID are 94.6 percent similar [9]. Chaudhury et al. demonstrated that, using accurate, physics-based energy functions, computational protein–protein docking can disclose the native-like, low-energy protein–protein complex from the unbound structures of two separate, interacting protein components [12]. In this paper, we attempt to mathematically examine the nCOVID-19 infection mechanism. The numerical findings are obtained using the fuzzy Laplace transform based on Adomian decomposition, which can be useful in understanding the dynamical structures of the physical behavior of nCOVID-19. In the form of nonlinear fractional-order differential equations (FODEs), we define the system of six equations illustrating the coronavirus outbreak, involving the susceptible people $\mathbf{A}(\omega)$, the exposed population $\mathbf{B}(\omega)$, the total infected strength $\mathbf{C}(\omega)$, the asymptomatically infected population $\mathbf{D}(\omega)$, the total number of humans recovered $\mathbf{E}(\omega)$, and the reservoir $\mathbf{F}(\omega)$, which are listed in the following order [13]:

$$\begin{aligned} D^{\vartheta}\_{\omega}\mathbf{A}(\omega) &= r - \imath\,\mathbf{A} - s\,\mathbf{A}(\mathbf{C} + \lambda\,\mathbf{D}) - s\_l\,\mathbf{A}\,\mathbf{F},\\ D^{\vartheta}\_{\omega}\mathbf{B}(\omega) &= s\,\mathbf{A}(\mathbf{C} + \lambda\,\mathbf{D}) + s\_l\,\mathbf{A}\,\mathbf{F} - (1-\theta)\mu\,\mathbf{B} - \theta\mu'\,\mathbf{B} - \imath\,\mathbf{B},\\ D^{\vartheta}\_{\omega}\mathbf{C}(\omega) &= (1-\theta)\mu\,\mathbf{B} - (\gamma + \imath)\,\mathbf{C},\\ D^{\vartheta}\_{\omega}\mathbf{D}(\omega) &= \theta\mu'\,\mathbf{B} - (\gamma' + \imath)\,\mathbf{D},\\ D^{\vartheta}\_{\omega}\mathbf{E}(\omega) &= \gamma\,\mathbf{C} + \gamma'\,\mathbf{D} - \imath\,\mathbf{E},\\ D^{\vartheta}\_{\omega}\mathbf{F}(\omega) &= \varsigma\,\mathbf{C} + \psi\,\mathbf{D} - \varpi\,\mathbf{F}, \end{aligned}\tag{1}$$

where $r$ is the birth rate, $\imath$ is the death rate, $s$ is the transmission coefficient, $s\_l$ is the disease transmission coefficient, $\lambda$ is the transmissibility multiple, and $\theta$ is the fraction of asymptomatic infection. The symbols $\mu$ and $\mu'$ signify the incubation periods. The recovery rates of $\mathbf{C}$ and $\mathbf{D}$ are represented by $\gamma$ and $\gamma'$, respectively; $\varsigma$ and $\psi$ reflect the virus's influence from $\mathbf{C}$ and $\mathbf{D}$ to $\mathbf{F}$, respectively; and $\varpi$ indicates the virus's removal rate from $\mathbf{F}$. The parameters are included in Table 1 for convenience. In recent years, fuzzy calculus and FODEs [14–17] have been added to classical calculus and DEs, respectively. Then, FODEs were expanded to fuzzy FODEs [18–20]. Many academics have researched FODEs and fuzzy integral equations in order to establish the existence–uniqueness theory of solutions [21–26]. It is extremely time-consuming to compute precise solutions to each fuzzy FODE. Mathematicians have put forth a lot of effort to solve fuzzy FODEs using various approaches, such as perturbation methods, integral transform methods, and spectral techniques [27–32]. Some researchers examined the stability of fuzzy DEs [33]. Niazi et al. [34], Iqbal et al. [35], Shafqat et al. [36], and Abuasbeh et al. [37] investigated the existence and uniqueness of the FFEE. Ahmad et al. [38] worked on Model (2), the fuzzy fractional-order model of the novel coronavirus:

$$\begin{aligned} D^{\vartheta}\_{\omega}\mathbf{L}(\omega) &= \hat r - \hat\imath\,\mathbf{L} - \hat s\,\mathbf{L}(\mathbf{N} + \hat\lambda\,\mathbf{P}) - \hat s\_l\,\mathbf{L}\,\mathbf{R},\\ D^{\vartheta}\_{\omega}\mathbf{M}(\omega) &= \hat s\,\mathbf{L}(\mathbf{N} + \hat\lambda\,\mathbf{P}) + \hat s\_l\,\mathbf{L}\,\mathbf{R} - (1-\hat\theta)\hat\mu\,\mathbf{M} - \hat\theta\hat\mu'\,\mathbf{M} - \hat\imath\,\mathbf{M},\\ D^{\vartheta}\_{\omega}\mathbf{N}(\omega) &= (1-\hat\theta)\hat\mu\,\mathbf{M} - (\hat\vartheta + \hat\imath)\,\mathbf{N},\\ D^{\vartheta}\_{\omega}\mathbf{P}(\omega) &= \hat\theta\hat\mu'\,\mathbf{M} - (\hat\vartheta' + \hat\imath)\,\mathbf{P},\\ D^{\vartheta}\_{\omega}\mathbf{Q}(\omega) &= \hat\vartheta\,\mathbf{N} + \hat\vartheta'\,\mathbf{P} - \hat\imath\,\mathbf{Q},\\ D^{\vartheta}\_{\omega}\mathbf{R}(\omega) &= \hat\varsigma\,\mathbf{N} + \hat\psi\,\mathbf{P} - \hat\varpi\,\mathbf{R}. \end{aligned}\tag{2}$$

Inspired by the above, Model (3), with a fuzzy fractional-order derivative, is investigated here by using a mild solution, with uncertainty in the initial data. For $1 < \vartheta \le 2$,

$$\begin{aligned} {}^{c}\_{0}D^{\vartheta}\_{\omega}\mathbf{K}(\omega) &= \hat r - \hat\imath\,\mathbf{K} - \hat s\,\mathbf{K}(\mathbf{C} + \hat\lambda\,\mathbf{M}) - \hat s\_l\,\mathbf{K}\,\mathbf{Q},\\ {}^{c}\_{0}D^{\vartheta}\_{\omega}\mathbf{L}(\omega) &= \hat s\,\mathbf{K}(\mathbf{C} + \hat\lambda\,\mathbf{M}) + \hat s\_l\,\mathbf{K}\,\mathbf{Q} - (1-\hat\theta)\hat\mu\,\mathbf{L} - \hat\theta\hat\mu'\,\mathbf{L} - \hat\imath\,\mathbf{L},\\ {}^{c}\_{0}D^{\vartheta}\_{\omega}\mathbf{C}(\omega) &= (1-\hat\theta)\hat\mu\,\mathbf{L} - (\hat\vartheta + \hat\imath)\,\mathbf{C},\\ {}^{c}\_{0}D^{\vartheta}\_{\omega}\mathbf{M}(\omega) &= \hat\theta\hat\mu'\,\mathbf{L} - (\hat\vartheta' + \hat\imath)\,\mathbf{M},\\ {}^{c}\_{0}D^{\vartheta}\_{\omega}\mathbf{E}(\omega) &= \hat\vartheta\,\mathbf{C} + \hat\vartheta'\,\mathbf{M} - \hat\imath\,\mathbf{E},\\ {}^{c}\_{0}D^{\vartheta}\_{\omega}\mathbf{Q}(\omega) &= \hat\varsigma\,\mathbf{C} + \hat\psi\,\mathbf{M} - \hat\varpi\,\mathbf{Q}, \end{aligned}\tag{3}$$

associated with the fuzzy initial conditions, for $\Xi \in (0, 1)$,

$$\begin{split} \mathbf{K}(0,\Xi) &= (\underline{\bf K}(0,\Xi),\overline{\bf K}(0,\Xi)) + (-1)\underline{\chi}(\mathbf{K}), \quad \mathbf{K}'(0,\Xi) = (\underline{\bf K}(0,\Xi),\overline{\bf K}(0,\Xi)), \\ \mathbf{L}(0,\Xi) &= (\underline{\bf L}(0,\Xi),\overline{\mathbf{L}}(0,\Xi)) + (-1)\underline{\chi}(\mathbf{L}), \quad \mathbf{L}'(0,\Xi) = (\underline{\bf L}(0,\Xi),\overline{\mathbf{L}}(0,\Xi)), \\ \mathbf{C}(0,\Xi) &= (\underline{\bf C}(0,\Xi),\overline{\bf C}(0,\Xi)) + (-1)\underline{\chi}(\mathbf{C}), \quad \mathbf{C}'(0,\Xi) = (\underline{\bf C}(0,\Xi),\overline{\bf C}(0,\Xi)), \\ \mathbf{M}(0,\Xi) &= (\underline{\bf M}(0,\Xi),\overline{\bf M}(0,\Xi)) + (-1)\underline{\chi}(\mathbf{M}), \quad \mathbf{M}'(0,\Xi) = (\underline{\bf M}(0,\Xi),\overline{\bf M}(0,\Xi)), \\ \mathbf{E}(0,\Xi) &= (\underline{\bf E}(0,\Xi),\overline{\mathbf{E}}(0,\Xi)) + (-1)\underline{\chi}(\mathbf{E}), \quad \mathbf{E}'(0,\Xi) = (\underline{\bf E}(0,\Xi),\overline{\mathbf{E}}(0,\Xi)), \\ \mathbf{Q}(0,\Xi) &= (\underline{\bf Q}(0,\Xi),\overline{\mathbf{Q}}(0,\Xi)) + (-1)\underline{\chi}(\mathbf{Q}), \quad \mathbf{Q}'(0,\Xi) = (\underline{\bf Q}(0,\Xi),\overline{\mathbf{Q}}(0,\Xi)). \end{split}$$

where $\hat r$ is the birth rate, $\hat\imath$ is the death rate, $\hat s$ is the transmission coefficient, $\hat s\_l$ is the disease transmission coefficient, and $\hat\lambda$ is the transmissibility multiple. The symbols $\hat\mu$ and $\hat\mu'$ signify the incubation periods. The recovery rates of $\mathbf{C}$ and $\mathbf{M}$ are represented by $\hat\vartheta$ and $\hat\vartheta'$, respectively; $\hat\varsigma$ and $\hat\psi$ reflect the virus's influence from $\mathbf{C}$ and $\mathbf{M}$ to $\mathbf{Q}$, respectively; and $\hat\varpi$ indicates the virus's removal rate from $\mathbf{Q}$.
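As a rough plausibility check of the compartmental structure just described, one can integrate a crisp, integer-order (order 1, no fuzziness) analogue of System (3) with a forward Euler scheme. The parameter values below are purely illustrative placeholders, not the fitted values from Table 1:

```python
def simulate(days=100.0, dt=0.1):
    """Forward-Euler integration of a crisp, integer-order analogue of Model (3).
    All parameter values are assumed placeholders for illustration only."""
    r, i, s, sl, lam = 107.0, 0.013, 1e-8, 1e-9, 0.02   # birth, death, transmission
    theta, mu, mu2 = 0.12, 0.2, 0.1                      # asymptomatic split, incubation
    v1, v2 = 0.08, 0.05                                  # recovery rates of C and M
    sig, psi, w = 0.05, 0.05, 0.1                        # shedding into / removal from Q
    K, L, C, M, E, Q = 8e6, 1e4, 200.0, 200.0, 0.0, 5e4  # initial compartments
    for _ in range(int(days / dt)):
        dK = r - i*K - s*K*(C + lam*M) - sl*K*Q
        dL = s*K*(C + lam*M) + sl*K*Q - (1 - theta)*mu*L - theta*mu2*L - i*L
        dC = (1 - theta)*mu*L - (v1 + i)*C
        dM = theta*mu2*L - (v2 + i)*M
        dE = v1*C + v2*M - i*E
        dQ = sig*C + psi*M - w*Q
        K, L, C, M, E, Q = (K + dt*dK, L + dt*dL, C + dt*dC,
                            M + dt*dM, E + dt*dE, Q + dt*dQ)
    return K, L, C, M, E, Q

# Sanity check: every compartment remains nonnegative over the horizon.
assert all(v >= 0 for v in simulate())
```

This crisp sketch only exercises the sign structure of the right-hand sides; the paper's actual scheme replaces the time derivative by the Caputo operator of order (1, 2) and propagates fuzzy (interval-valued) states.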


**Table 1.** Description of the parameters of Model (3).

We use a fuzzy fractional-order model of order (1, 2) by using a fuzzy mild solution of the novel coronavirus model with nonlocal conditions. Owing to this, our model's graphs are more accurate. The theory of fuzzy sets continues to gain scholars' attention because of its huge range of applications in domains such as engineering, mechanics, robotics, electrical and control systems, thermal systems, and signal processing. In light of the foregoing arguments, and to meet the current uncertain scenario, we suggest a new coronavirus infection strategy based on fuzzy fractional calculus. By studying the basic properties of RNA in COVID-19, we ensure that the proposed model is closer to the true behavior of a growing infection system, which also improves the physical behavior of such a system. Our major goal is to obtain the existence–uniqueness result for a COVID-19 model of order (1, 2). Applying mild solutions to this COVID-19 model becomes more complicated. However, via rigorous analysis, we show that the suggested function deduces a novel representation of solution operators and then provides a new idea of mild solutions. As a result, the study of system (3) differs greatly from earlier studies on the COVID-19 model. The rest of this paper is organized as follows. Section 2 discusses the definitions. Section 3 introduces the existence–uniqueness of the solution to the succeeding fuzzy model; a general method is also shown there for using the fuzzy Laplace transform to determine the solution of the examined system. In Section 4, numerical results and discussion are presented. Finally, in Section 5, a conclusion is given.

#### **2. Preliminaries**

**Definition 1** ([39,40])**.** *Let $\rho : \mathbb{R}^m \to [0, 1]$ be a fuzzy set on the real line meeting the properties below:*


*Then, it is known as a fuzzy number.*

**Definition 2** ([39])**.** *On a fuzzy number ρ, the* ℘*-level set is defined by*

$$[\rho]^{\wp} = \{x \in \mathbb{R}^m : \rho(x) \ge \wp\},$$

*where* $\wp \in (0, 1]$ *and* $x \in \mathbb{R}^m$*.*

**Definition 3** ([39,40])**.** *Suppose* [*ρ*(*θ*), *ρ*(*θ*)] *is the parametric form of a fuzzy number ρ, where* 0 ≤ *θ* ≤ 1 *and the below properties are satisfied:*


**Definition 4** ([30])**.** *Let $\upsilon : \mathbb{E}^m \times \mathbb{E}^m \to \mathbb{R}$ be a mapping, and let $\varrho = (\underline{\varrho}(\theta), \overline{\varrho}(\theta))$ and $\mu = (\underline{\mu}(\theta), \overline{\mu}(\theta))$ be two fuzzy numbers in their parametric form, with level sets $A$ and $B$, respectively. The Hausdorff distance between $\varrho$ and $\mu$ is defined as*

$$\upsilon(\varrho,\mu) = \max\Big\{ \sup\_{a \in A} \inf\_{b \in B} \|a - b\|,\ \sup\_{b \in B} \inf\_{a \in A} \|a - b\| \Big\}.$$

In $\mathbb{E}^m$, the metric $\upsilon$ has the properties below:


**Definition 5** ([30])**.** *Let $\tau\_1, \tau\_2 \in \mathbb{E}^m$. If there exists $\tau\_3 \in \mathbb{E}^m$ such that $\tau\_1 = \tau\_2 + \tau\_3$, then $\tau\_3$ is said to be the H-difference of $\tau\_1$ and $\tau\_2$, denoted by $\tau\_1 \ominus \tau\_2$.*

**Definition 6** ([30])**.** *Let $\Theta : \mathbb{R}^m \to \mathbb{E}^m$ be a fuzzy mapping. Then, $\Theta$ is called continuous if, for any $\epsilon > 0$, there exists $\delta > 0$ such that, for a fixed value $\lambda\_0 \in [\zeta\_1, \zeta\_2]$, we have*

$$v(\Theta(\lambda), \Theta(\lambda\_0)) < \epsilon,$$

*whenever*

$$|\lambda - \lambda\_0| < \delta.$$

**Definition 7** ([27,30])**.** *Let $\phi$ be a continuous fuzzy function on $[0, b] \subseteq \mathbb{R}^m$; the fuzzy fractional integral in the Riemann–Liouville (RL) sense corresponding to $\omega$ is defined by*

$$I^{\lambda}\phi(\omega) = \frac{1}{\Gamma(\lambda)}\int\_0^{\omega} (\omega - \zeta)^{\lambda - 1}\phi(\zeta)d\zeta,\text{ where }\lambda,\zeta \in (0,\infty).$$

*If $\phi \in C^{F}[0, b] \cap L^{F}[0, b]$, where $C^{F}[0, b]$ and $L^{F}[0, b]$ are the spaces of fuzzy continuous functions and fuzzy Lebesgue integrable functions, respectively, then the fuzzy fractional integral is defined as*

$$[I^{\lambda}\phi(\omega)]\_{\wp} = [I^{\lambda}\underline{\phi}\_{\wp}(\omega),\ I^{\lambda}\overline{\phi}\_{\wp}(\omega)], \quad 0 \le \wp \le 1,$$

*where*

$$\begin{aligned} I^{\lambda} \underline{\phi}\_{\wp}(\omega) &= \frac{1}{\Gamma(\lambda)} \int\_0^{\omega} (\omega - \zeta)^{\lambda - 1} \underline{\phi}\_{\wp}(\zeta)\,d\zeta, \quad \lambda, \zeta \in (0, \infty),\\ I^{\lambda} \overline{\phi}\_{\wp}(\omega) &= \frac{1}{\Gamma(\lambda)} \int\_0^{\omega} (\omega - \zeta)^{\lambda - 1} \overline{\phi}\_{\wp}(\zeta)\,d\zeta, \quad \lambda, \zeta \in (0, \infty). \end{aligned}$$
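The lower and upper integrals above are ordinary Riemann–Liouville integrals, so they can be checked numerically against the classical closed form $I^{\lambda}[\zeta^{p}](\omega) = \frac{\Gamma(p+1)}{\Gamma(p+1+\lambda)}\,\omega^{p+\lambda}$. A minimal sketch (midpoint quadrature, which sidesteps the integrable endpoint singularity at $\zeta = \omega$):

```python
import math

def rl_integral(f, omega, lam, n=200_000):
    """Riemann-Liouville fractional integral I^lam f(omega), midpoint rule."""
    h = omega / n
    total = 0.0
    for k in range(n):
        zeta = (k + 0.5) * h
        total += (omega - zeta) ** (lam - 1.0) * f(zeta)
    return total * h / math.gamma(lam)

# Check against I^lam[zeta^p](omega) = Gamma(p+1)/Gamma(p+1+lam) * omega^(p+lam).
omega, lam, p = 2.0, 0.5, 1.0
exact = math.gamma(p + 1) / math.gamma(p + 1 + lam) * omega ** (p + lam)
approx = rl_integral(lambda z: z ** p, omega, lam)
assert abs(approx - exact) / exact < 1e-2
```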

**Definition 8** ([30])**.** *If a fuzzy function $\phi \in C^{F}[0, b] \cap L^{F}[0, b]$ is such that $\phi = [\underline{\phi}\_{\wp}(\omega), \overline{\phi}\_{\wp}(\omega)]$, $0 \le \wp \le 1$, and $\omega\_0 \in (0, b)$, then the fuzzy fractional Caputo derivative is defined as*

$$[D^{\beta}\phi(\omega\_0)]\_{\wp} = [D^{\beta} \underline{\phi}\_{\wp}(\omega\_0),\ D^{\beta} \overline{\phi}\_{\wp}(\omega\_0)], \quad 0 \le \wp \le 1,$$

*where*

$$\begin{aligned} D^{\beta} \underline{\phi}\_{\wp}(\omega\_{0}) &= \frac{1}{\Gamma(n-\beta)} \Big[ \int\_{0}^{\omega} (\omega-\zeta)^{n-\beta-1} \frac{d^{n}}{d\zeta^{n}} \underline{\phi}\_{\wp}(\zeta)\,d\zeta \Big]\_{\omega=\omega\_{0}},\\ D^{\beta} \overline{\phi}\_{\wp}(\omega\_{0}) &= \frac{1}{\Gamma(n-\beta)} \Big[ \int\_{0}^{\omega} (\omega-\zeta)^{n-\beta-1} \frac{d^{n}}{d\zeta^{n}} \overline{\phi}\_{\wp}(\zeta)\,d\zeta \Big]\_{\omega=\omega\_{0}}, \end{aligned}$$

*whenever the integrals on the right-hand sides converge, where $n = \lceil \beta \rceil$.*
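For $0 < \beta < 1$ (so $n = 1$), the Caputo derivative is just an RL-type integral of the first derivative, and it can be validated against the power-function rule $D^{\beta}[\omega^{2}](\omega\_0) = \frac{\Gamma(3)}{\Gamma(3-\beta)}\,\omega\_0^{2-\beta}$. A hedged numerical sketch:

```python
import math

def caputo(fprime, omega0, beta, n=200_000):
    """Caputo derivative D^beta f(omega0) for 0 < beta < 1 (so n = ceil(beta) = 1),
    computed from f' by the midpoint rule."""
    h = omega0 / n
    total = 0.0
    for k in range(n):
        zeta = (k + 0.5) * h
        total += (omega0 - zeta) ** (-beta) * fprime(zeta)
    return total * h / math.gamma(1.0 - beta)

# For f(omega) = omega^2: D^beta f(omega0) = Gamma(3)/Gamma(3-beta) * omega0^(2-beta).
omega0, beta = 2.0, 0.5
exact = math.gamma(3) / math.gamma(3 - beta) * omega0 ** (2 - beta)
assert abs(caputo(lambda z: 2 * z, omega0, beta) - exact) / exact < 1e-2
```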

**Definition 9** ([29–31])**.** *Suppose $\phi$ is a continuous fuzzy-valued function and that $\phi(\chi)\,e^{-s\chi}$ is improper fuzzy Riemann-integrable on $[0, \infty)$; then, its fuzzy Laplace transform is*

$$L[\phi(\chi)] = \int\_0^\infty \phi(\chi) \, e^{-s\chi} d\chi.$$

*For* 0 ≤ *r* ≤ 1*, the parametric form of φ*(*χ*) *is represented by*

$$\int\_0^\infty \phi(\chi, r)\,e^{-s\chi} d\chi = \left[ \int\_0^\infty \underline{\phi}(\chi, r)\,e^{-s\chi} d\chi,\ \int\_0^\infty \overline{\phi}(\chi, r)\,e^{-s\chi} d\chi \right].$$

*Hence,*

$$L[\phi(\chi, r)] = [L[\underline{\phi}(\chi, r)],\ L[\overline{\phi}(\chi, r)]].$$

**Theorem 1** ([31])**.** *If $\phi \in C^{F}[0, b] \cap L^{F}[0, b]$, then the Laplace transform of the fuzzy fractional derivative in Caputo's form is given, for $0 \le \wp \le 1$ and $0 < \beta \le 1$, by*

$$L[(D^\beta \phi(\omega))\_\wp] = \mathfrak{s}^\beta L[\phi(\omega)] - \mathfrak{s}^{\beta - 1}[\phi(0)].$$
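This operational identity can be spot-checked numerically. For $\phi(\omega) = \omega$ one has $\phi(0) = 0$ and $D^{\beta}\phi(\omega) = \omega^{1-\beta}/\Gamma(2-\beta)$, so both sides should equal $s^{\beta-2}$; the sketch below uses a truncated numerical Laplace transform (the truncation horizon is chosen so the neglected tail is negligible):

```python
import math

def laplace(f, s, T=40.0, n=200_000):
    """Truncated numerical Laplace transform: integral of f(w) e^{-s w} over [0, T]."""
    h = T / n
    return sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h)
               for k in range(n)) * h

beta, s = 0.5, 2.0
# phi(w) = w, phi(0) = 0, and D^beta phi(w) = w^{1-beta} / Gamma(2-beta).
lhs = laplace(lambda w: w ** (1 - beta) / math.gamma(2 - beta), s)
rhs = s ** beta * laplace(lambda w: w, s) - s ** (beta - 1) * 0.0
assert abs(lhs - rhs) / rhs < 1e-3  # both sides approximate s^(beta - 2)
```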

**Theorem 2** (Schauder Fixed Point Theorem)**.** *Let $(X, \|\cdot\|)$ be a Banach space and $S \subset X$ be nonempty, compact, and convex. Then, any continuous operator $A : S \to S$ has at least one fixed point.*
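Schauder's theorem is non-constructive, but in the simplest case $S = [0, 1] \subset \mathbb{R}$ a fixed point of a continuous self-map can be located by bisection on $g(x) = f(x) - x$ (a side illustration only, not the operator-theoretic setting used below):

```python
import math

def fixed_point(f, a=0.0, b=1.0, tol=1e-12):
    """Locate a fixed point of a continuous map f : [a, b] -> [a, b] by
    bisection on g(x) = f(x) - x; g(a) >= 0 >= g(b) since f maps into [a, b]."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# cos maps [0, 1] into itself, so it has a fixed point (the Dottie number).
x = fixed_point(math.cos)
assert abs(math.cos(x) - x) < 1e-9
```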

#### **3. Main Result**

The existence and uniqueness of solutions to the succeeding fuzzy fractional model (FFM) are examined in the following section, and we show how to find a semi-analytic solution to Model (3) via the fuzzy Laplace transform by using the mild solution.

#### *3.1. Existence–Uniqueness*

The existence–uniqueness of the succeeding FFM is addressed in this section using fixed point theory. Consider the right-hand side of Model (3):


> where $\psi$, $\varphi$, $\Phi$, $\Psi$, $\Upsilon$, and $\Lambda$ are fuzzy functions. Thus, for $1 < \vartheta \le 2$, Model (3) is

$$\begin{aligned} D^{\vartheta}\_{\omega}\mathbf{K}(\omega) &= \psi(\omega, \mathbf{K}(\omega), \mathbf{L}(\omega), \mathbf{C}(\omega), \mathbf{M}(\omega), \mathbf{E}(\omega), \mathbf{Q}(\omega)),\\ D^{\vartheta}\_{\omega}\mathbf{L}(\omega) &= \varphi(\omega, \mathbf{K}(\omega), \mathbf{L}(\omega), \mathbf{C}(\omega), \mathbf{M}(\omega), \mathbf{E}(\omega), \mathbf{Q}(\omega)),\\ D^{\vartheta}\_{\omega}\mathbf{C}(\omega) &= \Phi(\omega, \mathbf{K}(\omega), \mathbf{L}(\omega), \mathbf{C}(\omega), \mathbf{M}(\omega), \mathbf{E}(\omega), \mathbf{Q}(\omega)),\\ D^{\vartheta}\_{\omega}\mathbf{M}(\omega) &= \Psi(\omega, \mathbf{K}(\omega), \mathbf{L}(\omega), \mathbf{C}(\omega), \mathbf{M}(\omega), \mathbf{E}(\omega), \mathbf{Q}(\omega)),\\ D^{\vartheta}\_{\omega}\mathbf{E}(\omega) &= \Upsilon(\omega, \mathbf{K}(\omega), \mathbf{L}(\omega), \mathbf{C}(\omega), \mathbf{M}(\omega), \mathbf{E}(\omega), \mathbf{Q}(\omega)),\\ D^{\vartheta}\_{\omega}\mathbf{Q}(\omega) &= \Lambda(\omega, \mathbf{K}(\omega), \mathbf{L}(\omega), \mathbf{C}(\omega), \mathbf{M}(\omega), \mathbf{E}(\omega), \mathbf{Q}(\omega)), \end{aligned}\tag{5}$$

with fuzzy initial conditions

$$\begin{aligned} \mathbf{K}(0, \Xi) &= (\underline{\mathbf{K}}(0, \Xi), \overline{\mathbf{K}}(0, \Xi)) + (-1)g(\mathbf{K}), \quad \mathbf{K}'(0, \Xi) = (\underline{\mathbf{K}}(0, \Xi), \overline{\mathbf{K}}(0, \Xi)),\\ \mathbf{L}(0, \Xi) &= (\underline{\mathbf{L}}(0, \Xi), \overline{\mathbf{L}}(0, \Xi)) + (-1)g(\mathbf{L}), \quad \mathbf{L}'(0, \Xi) = (\underline{\mathbf{L}}(0, \Xi), \overline{\mathbf{L}}(0, \Xi)),\\ \mathbf{C}(0, \Xi) &= (\underline{\mathbf{C}}(0, \Xi), \overline{\mathbf{C}}(0, \Xi)) + (-1)g(\mathbf{C}), \quad \mathbf{C}'(0, \Xi) = (\underline{\mathbf{C}}(0, \Xi), \overline{\mathbf{C}}(0, \Xi)),\\ \mathbf{M}(0, \Xi) &= (\underline{\mathbf{M}}(0, \Xi), \overline{\mathbf{M}}(0, \Xi)) + (-1)g(\mathbf{M}), \quad \mathbf{M}'(0, \Xi) = (\underline{\mathbf{M}}(0, \Xi), \overline{\mathbf{M}}(0, \Xi)),\\ \mathbf{E}(0, \Xi) &= (\underline{\mathbf{E}}(0, \Xi), \overline{\mathbf{E}}(0, \Xi)) + (-1)g(\mathbf{E}), \quad \mathbf{E}'(0, \Xi) = (\underline{\mathbf{E}}(0, \Xi), \overline{\mathbf{E}}(0, \Xi)),\\ \mathbf{Q}(0, \Xi) &= (\underline{\mathbf{Q}}(0, \Xi), \overline{\mathbf{Q}}(0, \Xi)) + (-1)g(\mathbf{Q}), \quad \mathbf{Q}'(0, \Xi) = (\underline{\mathbf{Q}}(0, \Xi), \overline{\mathbf{Q}}(0, \Xi)). \end{aligned}$$

Now, applying the mild solution and using the initial conditions, we obtain

$$\mathbf{K}(\omega) = C_{\vartheta}(\omega)\big(\mathbf{K}(0,\Xi)+(-1)g(\mathbf{K})\big) + K_{\vartheta}(\omega)\mathbf{K}'(0,\Xi) + \frac{1}{\Gamma(\vartheta)}\int_{0}^{\omega}(\omega-\wedge)^{\vartheta-1}P_{\vartheta}(\omega-\wedge)\,\Theta_{1}(\wedge,\mathbf{K}(\wedge),\mathbf{L}(\wedge),\mathbf{C}(\wedge),\mathbf{M}(\wedge),\mathbf{E}(\wedge),\mathbf{Q}(\wedge))\,d\wedge,\tag{6}$$

and analogously for **L**(*ω*), **C**(*ω*), **M**(*ω*), **E**(*ω*) and **Q**(*ω*), with Θ<sub>2</sub>, Φ, Ψ, Υ and Λ in place of Θ<sub>1</sub>.

Let us define a Banach space **G** = **G**<sup>1</sup> × **G**<sup>2</sup> equipped with a suitable fuzzy norm.


Equation (6) becomes

$$\mathbf{N}(\omega) = C_{\vartheta}(\omega)\mathbf{N}(0,\Xi) + K_{\vartheta}(\omega)\mathbf{N}'(0,\Xi) + \frac{1}{\Gamma(\vartheta)}\int_{0}^{\omega}(\omega-\wedge)^{\vartheta-1}P_{\vartheta}(\omega-\wedge)\,\Theta(\wedge,\mathbf{N}(\wedge))\,d\wedge,\tag{7}$$

where

$$\Theta(\omega,\mathbf{N}(\omega)) = \big(\Theta_{1},\,\Theta_{2},\,\Phi,\,\Psi,\,\Upsilon,\,\Lambda\big)(\omega,\mathbf{K}(\omega),\mathbf{L}(\omega),\mathbf{C}(\omega),\mathbf{M}(\omega),\mathbf{E}(\omega),\mathbf{Q}(\omega)).$$
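The fractional integral term in the mild solution (7) is a Riemann–Liouville integral of order *ϑ*, and it can be sanity-checked numerically. A minimal sketch (an illustrative assumption only: we take *P<sub>ϑ</sub>* ≡ 1 and a constant forcing Θ ≡ 1, in which case the integral reduces to *ω*<sup>*ϑ*</sup>/Γ(*ϑ* + 1)):

```python
import math

def frac_integral(f, omega, theta, n=2000):
    """Trapezoidal approximation of
    (1/Gamma(theta)) * integral_0^omega (omega - s)**(theta - 1) * f(s) ds."""
    h = omega / n
    total = 0.0
    for i in range(n + 1):
        s = i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * (omega - s) ** (theta - 1) * f(s)
    return h * total / math.gamma(theta)

theta = 1.5  # fractional order, 1 < theta <= 2, so the kernel is not singular
approx = frac_integral(lambda s: 1.0, 1.0, theta)
exact = 1.0 / math.gamma(theta + 1)  # omega**theta / Gamma(theta + 1) at omega = 1
print(approx, exact)
```

For 1 < *ϑ* ≤ 2 the kernel (*ω* − *s*)<sup>*ϑ*−1</sup> vanishes at *s* = *ω*, so plain trapezoidal quadrature is adequate here.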

On the nonlinear function <sup>Θ</sup> : **<sup>G</sup>** → **G**, we impose the following assumptions:

**Assumption 1** (*H*1)**.** *There exists a constant K***<sup>N</sup>** > 0 *such that, for each* **N**<sub>1</sub>(*ω*), **N**<sub>2</sub>(*ω*) ∈ **G***,*

$$|\Theta(\omega, \mathbf{N}_{1}(\omega)) - \Theta(\omega, \mathbf{N}_{2}(\omega))| \le K_{\mathbf{N}}\,|\mathbf{N}_{1}(\omega) - \mathbf{N}_{2}(\omega)|.$$

**Assumption 2** (*H*2)**.** *There exist constants F***<sup>N</sup>**, *N***<sup>N</sup>** > 0 *such that*

$$|\Theta(\omega, \mathbf{N}(\omega))| \le F_{\mathbf{N}}\,|\mathbf{N}(\omega)| + N_{\mathbf{N}}.$$

**Theorem 3.** *Under Assumption 2, the considered Model (5) has at least one solution.*

**Proof.** Suppose *A* = {**N**(*ω*) ∈ **G** : ||**N**(*ω*)|| ≤ *r*} ⊂ **G** is a closed, convex fuzzy set, and let *φ* : *A* → *A* be the mapping defined by

$$\varphi(\mathbf{N}(\omega)) = C_{\vartheta}(\omega)\mathbf{N}(0,\Xi) + K_{\vartheta}(\omega)\mathbf{N}'(0,\Xi) + \frac{1}{\Gamma(\vartheta)}\int_{0}^{\omega}(\omega-\wedge)^{\vartheta-1}P_{\vartheta}(\omega-\wedge)\,\Theta(\wedge,\mathbf{N}(\wedge))\,d\wedge.\tag{8}$$
 
First, we show that *φ*(*A*) ⊂ *A*. For any **N**(*ω*) ∈ *A*, we have

$$\begin{aligned}
||\varphi(\mathbf{N}(\omega))|| &= \max_{\omega\in[0,\tau]}\left|C_{\vartheta}(\omega)\mathbf{N}(0,\Xi) + K_{\vartheta}(\omega)\mathbf{N}'(0,\Xi) + \frac{1}{\Gamma(\vartheta)}\int_{0}^{\omega}(\omega-\wedge)^{\vartheta-1}P_{\vartheta}(\omega-\wedge)\,\Theta(\wedge,\mathbf{N}(\wedge))\,d\wedge\right|\\
&\le |C_{\vartheta}(\omega)\mathbf{N}(0,\Xi) + K_{\vartheta}(\omega)\mathbf{N}'(0,\Xi)| + \frac{1}{\Gamma(\vartheta)}\int_{0}^{\omega}(\omega-\wedge)^{\vartheta-1}P_{\vartheta}(\omega-\wedge)\,|\Theta(\wedge,\mathbf{N}(\wedge))|\,d\wedge\\
&\le |C_{\vartheta}(\omega)\mathbf{N}(0,\Xi) + K_{\vartheta}(\omega)\mathbf{N}'(0,\Xi)| + \frac{1}{\Gamma(\vartheta)}\int_{0}^{\omega}(\omega-\wedge)^{\vartheta-1}P_{\vartheta}(\omega-\wedge)\,\big[F_{\mathbf{N}}|\mathbf{N}(\wedge)| + N_{\mathbf{N}}\big]\,d\wedge\\
&\le |\mathbf{N}(0,\Xi)| + \frac{\tau^{\vartheta}}{\Gamma(\vartheta+1)}\big[F_{\mathbf{N}}|\mathbf{N}(\omega)| + N_{\mathbf{N}}\big].
\end{aligned}$$

The previous inequality gives *φ*(*A*) ⊂ *A*, so the operator *φ* is bounded. We next show that *φ* is completely continuous. Let *ω*<sub>1</sub>, *ω*<sub>2</sub> ∈ [0, *τ*] be such that *ω*<sub>1</sub> < *ω*<sub>2</sub>; then

$$\begin{aligned}
||\varphi(\mathbf{N}(\omega))(\omega_{2}) - \varphi(\mathbf{N}(\omega))(\omega_{1})|| &= \left|\frac{1}{\Gamma(\vartheta)}\int_{0}^{\omega_{2}}(\omega_{2}-\wedge)^{\vartheta-1}P_{\vartheta}(\omega_{2}-\wedge)\,\Theta(\wedge,\mathbf{N}(\wedge))\,d\wedge\right.\\
&\qquad\left.-\,\frac{1}{\Gamma(\vartheta)}\int_{0}^{\omega_{1}}(\omega_{1}-\wedge)^{\vartheta-1}P_{\vartheta}(\omega_{1}-\wedge)\,\Theta(\wedge,\mathbf{N}(\wedge))\,d\wedge\right|\\
&\le \big[\omega_{2}^{\vartheta} - \omega_{1}^{\vartheta}\big]\,\frac{F_{\mathbf{N}}|\mathbf{N}(\omega)| + N_{\mathbf{N}}}{\Gamma(\vartheta+1)}.
\end{aligned}$$

The right-hand side of this inequality goes to zero as *ω*<sub>2</sub> → *ω*<sub>1</sub>. Hence,

$$||\varphi(\mathbf{N}(\omega))(\omega_{2}) - \varphi(\mathbf{N}(\omega))(\omega_{1})|| \to 0 \quad \text{as } \omega_{2}\to\omega_{1}.$$

As a result, *φ* is an equicontinuous operator. Since *φ* is also bounded, the Arzelà–Ascoli theorem shows that *φ* is completely continuous. Hence, by Schauder's fixed point theorem, System (5) has at least one solution.
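The key estimate behind equicontinuity is that the bound [*ω*<sub>2</sub><sup>*ϑ*</sup> − *ω*<sub>1</sub><sup>*ϑ*</sup>](*F*<sub>**N**</sub>|**N**| + *N*<sub>**N**</sub>)/Γ(*ϑ* + 1) shrinks to zero as *ω*<sub>2</sub> → *ω*<sub>1</sub>. A minimal numerical sketch, with purely hypothetical constants chosen for illustration:

```python
import math

def equicontinuity_bound(omega1, omega2, theta, F_N, r, N_N):
    """Upper bound from the proof of Theorem 3:
    (omega2**theta - omega1**theta) * (F_N * r + N_N) / Gamma(theta + 1)."""
    return (omega2 ** theta - omega1 ** theta) * (F_N * r + N_N) / math.gamma(theta + 1)

# Hypothetical constants, for illustration only (not from the model).
theta, F_N, r, N_N = 1.5, 0.4, 2.0, 0.1
bounds = [equicontinuity_bound(1.0, 1.0 + d, theta, F_N, r, N_N)
          for d in (0.1, 0.01, 0.001)]
print(bounds)  # shrinks monotonically as omega2 -> omega1
```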

**Theorem 4.** *If τ<sup>ϑ</sup>K***<sup>N</sup>** < Γ(*ϑ* + 1) *and Assumption 1 holds, the investigated System (5) has a unique solution.*

**Proof.** Let **N**<sub>1</sub>(*ω*), **N**<sub>2</sub>(*ω*) ∈ **G**; then

$$\begin{aligned}
||\varphi(\mathbf{N}_{1}(\omega)) - \varphi(\mathbf{N}_{2}(\omega))|| &= \max_{\omega\in[0,\tau]}\left|\frac{1}{\Gamma(\vartheta)}\int_{0}^{\omega}(\omega-\wedge)^{\vartheta-1}P_{\vartheta}(\omega-\wedge)\,\Theta(\wedge,\mathbf{N}_{1}(\wedge))\,d\wedge\right.\\
&\qquad\left.-\,\frac{1}{\Gamma(\vartheta)}\int_{0}^{\omega}(\omega-\wedge)^{\vartheta-1}P_{\vartheta}(\omega-\wedge)\,\Theta(\wedge,\mathbf{N}_{2}(\wedge))\,d\wedge\right|\\
&\le \frac{\tau^{\vartheta}}{\Gamma(\vartheta+1)}\,K_{\mathbf{N}}\,|\mathbf{N}_{1}(\omega) - \mathbf{N}_{2}(\omega)|.
\end{aligned}$$

Hence, *φ* is a contraction, and by the Banach contraction principle, System (5) has a unique solution.
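The Banach contraction principle invoked above guarantees that Picard iteration converges to the unique fixed point. A generic, self-contained illustration (not the model itself): the map *T*(*x*) = cos *x* is a contraction on [0, 1] with Lipschitz constant sin 1 < 1, so iterating it converges to the unique solution of *x* = cos *x*.

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Picard iteration x_{n+1} = T(x_n); converges when T is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction on [0, 1] (Lipschitz constant sin(1) < 1), so the
# iteration converges to the unique fixed point x = cos(x).
x_star = fixed_point(math.cos, 0.5)
print(x_star)
```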

#### *3.2. Procedure for Solution*

We now present a general method for determining the solution of the examined system via the fuzzy Laplace transform.

Applying the fuzzy Laplace transform to (5) and using the initial conditions, we obtain

*L*[ *c* 0*D<sup>ϑ</sup> ω***K**-(*ω*)] = *L*[-(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *L*[ *c* 0*D<sup>ϑ</sup> ω***L**-(*ω*)] = *L*[(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *L*[ *c* 0*D<sup>ϑ</sup> ω***C**-(*ω*)] = *L*[Φ(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *L*[ *c* 0*D<sup>ϑ</sup> ω***M**-(*ω*)] = *L*[Ψ(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *L*[ *c* 0*D<sup>ϑ</sup> ω***E**-(*ω*)] = *L*[Υ(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *L*[ *c* 0*D<sup>ϑ</sup> ω***Q**-(*ω*)] = *L*[Λ(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *s <sup>ϑ</sup>L*[ *c* 0*D<sup>ϑ</sup> ω***K**-(*ω*)] = *s <sup>ϑ</sup>*−1**K**(0, Ξ) + *s <sup>ϑ</sup>*−1**K** (0, Ξ) + *L*[-(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *s <sup>ϑ</sup>L*[ *c* 0*D<sup>ϑ</sup> ω***L**-(*ω*)] = *s <sup>ϑ</sup>*−1**L**(0, Ξ) + *s <sup>ϑ</sup>*−1**L** (0, Ξ) + *L*[(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **MM**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *s <sup>ϑ</sup>L*[ *c* 0*D<sup>ϑ</sup> ω***C**-(*ω*)] = *s <sup>ϑ</sup>*−1**C**(0, Ξ) + *s <sup>ϑ</sup>*−1**C** (0, Ξ) + *L*Φ(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *s <sup>ϑ</sup>L*[ *c* 0*D<sup>ϑ</sup> ω***M**-(*ω*)] = *s <sup>ϑ</sup>*−1**M**(0, Ξ) + *s <sup>ϑ</sup>*−1**D** (0, Ξ) + *L*[Ψ(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *s <sup>ϑ</sup>L*[ *c* 0*D<sup>ϑ</sup> ω***E**-(*ω*)] = *s <sup>ϑ</sup>*−1**E**(0, Ξ) + *s <sup>ϑ</sup>*−1**E** (0, Ξ) + *L*[Υ(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *s <sup>ϑ</sup>L*[ *c* 0*D<sup>ϑ</sup> ω***Q**-(*ω*)] = *s <sup>ϑ</sup>*−1**Q**(0, Ξ) + *s <sup>ϑ</sup>*−1**Q** (0, Ξ) + *L*[Λ(*ω*, 
**K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *L*[ *c* 0*D<sup>ϑ</sup> ω***K**-(*ω*)] = <sup>1</sup> *s* **<sup>K</sup>**(0, <sup>Ξ</sup>) + <sup>1</sup> *s* **K** (0, <sup>Ξ</sup>) + <sup>1</sup> *<sup>s</sup><sup>ϑ</sup> <sup>L</sup>*[-(*ω*, **K**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *L*[ *c* 0*D<sup>ϑ</sup> ω***L**-(*ω*)] = <sup>1</sup> *s* **<sup>L</sup>**(0, <sup>Ξ</sup>) + <sup>1</sup> *s* **L** (0, <sup>Ξ</sup>) + <sup>1</sup> *<sup>s</sup><sup>ϑ</sup> <sup>L</sup>*[(*ω*, **<sup>K</sup>**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*)))], *L*[ *c* 0*D<sup>ϑ</sup> ω***C**-(*ω*)] = <sup>1</sup> *s* **<sup>C</sup>**(0, <sup>Ξ</sup>) + <sup>1</sup> *s* **C** (0, <sup>Ξ</sup>) + <sup>1</sup> *<sup>s</sup><sup>ϑ</sup> <sup>L</sup>*[Φ(*ω*, **<sup>K</sup>**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *L*[ *c* 0*D<sup>ϑ</sup> ω***M**-(*ω*)] = <sup>1</sup> *s* **<sup>M</sup>**(0, <sup>Ξ</sup>) + <sup>1</sup> *s* **M** (0, <sup>Ξ</sup>) + <sup>1</sup> *<sup>s</sup><sup>ϑ</sup> <sup>L</sup>*[Ψ(*ω*, **<sup>K</sup>**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *L*[ *c* 0*D<sup>ϑ</sup> ω***E**-(*ω*)] = <sup>1</sup> *s* **<sup>E</sup>**(0, <sup>Ξ</sup>) + <sup>1</sup> *s* **E** (0, <sup>Ξ</sup>) + <sup>1</sup> *<sup>s</sup><sup>ϑ</sup> <sup>L</sup>*[Υ(*ω*, **<sup>K</sup>**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))], *L*[ *c* 0*D<sup>ϑ</sup> ω***Q**-(*ω*)] = <sup>1</sup> *s* **<sup>Q</sup>**(0, <sup>Ξ</sup>) + <sup>1</sup> *s* **Q** (0, <sup>Ξ</sup>) + <sup>1</sup> *<sup>s</sup><sup>ϑ</sup> <sup>L</sup>*[Λ(*ω*, **<sup>K</sup>**-(*ω*), **L**-(*ω*), **C**-(*ω*), **M**-(*ω*), **E**-(*ω*), **Q**-(*ω*))].

The infinite series solution is

$$\begin{aligned}
\mathbf{K}(\omega) &= \sum_{n=0}^{\infty}\mathbf{K}_{n}(\omega), & \mathbf{L}(\omega) &= \sum_{n=0}^{\infty}\mathbf{L}_{n}(\omega), & \mathbf{C}(\omega) &= \sum_{n=0}^{\infty}\mathbf{C}_{n}(\omega),\\
\mathbf{M}(\omega) &= \sum_{n=0}^{\infty}\mathbf{M}_{n}(\omega), & \mathbf{E}(\omega) &= \sum_{n=0}^{\infty}\mathbf{E}_{n}(\omega), & \mathbf{Q}(\omega) &= \sum_{n=0}^{\infty}\mathbf{Q}_{n}(\omega),\\
\mathbf{K}(\omega)\mathbf{C}(\omega) &= \sum_{n=0}^{\infty}\mathbf{X}_{1,n}, & \mathbf{K}(\omega)\mathbf{M}(\omega) &= \sum_{n=0}^{\infty}\mathbf{X}_{2,n}, & \mathbf{K}(\omega)\mathbf{Q}(\omega) &= \sum_{n=0}^{\infty}\mathbf{X}_{3,n},
\end{aligned}$$

where **X**<sub>1,*n*</sub>, **X**<sub>2,*n*</sub> and **X**<sub>3,*n*</sub> are Adomian polynomials representing the nonlinear terms. The transformed equations then become

$$L\left[\sum_{n=0}^{\infty}\mathbf{K}_{n}(\omega)\right] = \frac{1}{s}\mathbf{K}(0,\Xi) + \frac{1}{s^{2}}\mathbf{K}'(0,\Xi) + \frac{1}{s^{\vartheta}}L[\Theta_{1}(\omega,\mathbf{K}(\omega),\mathbf{L}(\omega),\mathbf{C}(\omega),\mathbf{M}(\omega),\mathbf{E}(\omega),\mathbf{Q}(\omega))],$$

$$\begin{aligned}
L\left[\sum_{n=0}^{\infty}\mathbf{L}_{n}(\omega)\right] &= \frac{1}{s}\mathbf{L}(0,\Xi) + \frac{1}{s^{2}}\mathbf{L}'(0,\Xi) + \frac{1}{s^{\vartheta}}L[\Theta_{2}(\omega,\mathbf{K}(\omega),\ldots,\mathbf{Q}(\omega))],\\
L\left[\sum_{n=0}^{\infty}\mathbf{C}_{n}(\omega)\right] &= \frac{1}{s}\mathbf{C}(0,\Xi) + \frac{1}{s^{2}}\mathbf{C}'(0,\Xi) + \frac{1}{s^{\vartheta}}L[\Phi(\omega,\mathbf{K}(\omega),\ldots,\mathbf{Q}(\omega))],\\
L\left[\sum_{n=0}^{\infty}\mathbf{M}_{n}(\omega)\right] &= \frac{1}{s}\mathbf{M}(0,\Xi) + \frac{1}{s^{2}}\mathbf{M}'(0,\Xi) + \frac{1}{s^{\vartheta}}L[\Psi(\omega,\mathbf{K}(\omega),\ldots,\mathbf{Q}(\omega))],\\
L\left[\sum_{n=0}^{\infty}\mathbf{E}_{n}(\omega)\right] &= \frac{1}{s}\mathbf{E}(0,\Xi) + \frac{1}{s^{2}}\mathbf{E}'(0,\Xi) + \frac{1}{s^{\vartheta}}L[\Upsilon(\omega,\mathbf{K}(\omega),\ldots,\mathbf{Q}(\omega))],\\
L\left[\sum_{n=0}^{\infty}\mathbf{Q}_{n}(\omega)\right] &= \frac{1}{s}\mathbf{Q}(0,\Xi) + \frac{1}{s^{2}}\mathbf{Q}'(0,\Xi) + \frac{1}{s^{\vartheta}}L[\Lambda(\omega,\mathbf{K}(\omega),\ldots,\mathbf{Q}(\omega))].
\end{aligned}$$
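For product nonlinearities such as **K**(*ω*)**C**(*ω*), the Adomian polynomials reduce to the Cauchy-product form **X**<sub>1,*n*</sub> = Σ<sub>*k*=0</sub><sup>*n*</sup> **K**<sub>*k*</sub>**C**<sub>*n*−*k*</sub>. A minimal numerical sketch with toy coefficient sequences (the numbers are illustrative only):

```python
def adomian_product(K_terms, C_terms, n):
    """Adomian polynomial X_n for the product nonlinearity K*C:
    X_n = sum_{k=0}^{n} K_k * C_{n-k} (Cauchy-product form)."""
    return sum(K_terms[k] * C_terms[n - k] for k in range(n + 1))

# Toy coefficient sequences (illustrative numbers only).
K_terms = [1.0, 2.0, 0.5]
C_terms = [3.0, 4.0, 1.0]
X = [adomian_product(K_terms, C_terms, n) for n in range(3)]
print(X)  # [3.0, 10.0, 10.5]
```

For example, **X**<sub>1,2</sub> = K₀C₂ + K₁C₁ + K₂C₀ = 1·1 + 2·4 + 0.5·3 = 10.5.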

Applying the inverse Laplace transform, we obtain

$$\begin{aligned}
\sum_{n=0}^{\infty}\mathbf{K}_{n}(\omega) &= \mathbf{K}(0,\Xi) + \omega\,\mathbf{K}'(0,\Xi) + L^{-1}\left[\frac{1}{s^{\vartheta}}L[\Theta_{1}(\omega,\mathbf{K}(\omega),\mathbf{L}(\omega),\mathbf{C}(\omega),\mathbf{M}(\omega),\mathbf{E}(\omega),\mathbf{Q}(\omega))]\right],\\
\sum_{n=0}^{\infty}\mathbf{L}_{n}(\omega) &= \mathbf{L}(0,\Xi) + \omega\,\mathbf{L}'(0,\Xi) + L^{-1}\left[\frac{1}{s^{\vartheta}}L[\Theta_{2}(\omega,\mathbf{K}(\omega),\ldots,\mathbf{Q}(\omega))]\right],\\
\sum_{n=0}^{\infty}\mathbf{C}_{n}(\omega) &= \mathbf{C}(0,\Xi) + \omega\,\mathbf{C}'(0,\Xi) + L^{-1}\left[\frac{1}{s^{\vartheta}}L[\Phi(\omega,\mathbf{K}(\omega),\ldots,\mathbf{Q}(\omega))]\right],\\
\sum_{n=0}^{\infty}\mathbf{M}_{n}(\omega) &= \mathbf{M}(0,\Xi) + \omega\,\mathbf{M}'(0,\Xi) + L^{-1}\left[\frac{1}{s^{\vartheta}}L[\Psi(\omega,\mathbf{K}(\omega),\ldots,\mathbf{Q}(\omega))]\right],\\
\sum_{n=0}^{\infty}\mathbf{E}_{n}(\omega) &= \mathbf{E}(0,\Xi) + \omega\,\mathbf{E}'(0,\Xi) + L^{-1}\left[\frac{1}{s^{\vartheta}}L[\Upsilon(\omega,\mathbf{K}(\omega),\ldots,\mathbf{Q}(\omega))]\right],\\
\sum_{n=0}^{\infty}\mathbf{Q}_{n}(\omega) &= \mathbf{Q}(0,\Xi) + \omega\,\mathbf{Q}'(0,\Xi) + L^{-1}\left[\frac{1}{s^{\vartheta}}L[\Lambda(\omega,\mathbf{K}(\omega),\ldots,\mathbf{Q}(\omega))]\right].
\end{aligned}$$

Comparing the terms on both sides, we evaluate the first two terms of the series:

$$\begin{aligned}
\underline{\mathbf{K}}_{0}(\omega) &= \underline{\mathbf{K}}(0,\Xi), & \underline{\mathbf{K}}'_{0}(\omega) &= \underline{\mathbf{K}}'(0,\Xi), & \overline{\mathbf{K}}_{0}(\omega) &= \overline{\mathbf{K}}(0,\Xi), & \overline{\mathbf{K}}'_{0}(\omega) &= \overline{\mathbf{K}}'(0,\Xi),\\
\underline{\mathbf{L}}_{0}(\omega) &= \underline{\mathbf{L}}(0,\Xi), & \underline{\mathbf{L}}'_{0}(\omega) &= \underline{\mathbf{L}}'(0,\Xi), & \overline{\mathbf{L}}_{0}(\omega) &= \overline{\mathbf{L}}(0,\Xi), & \overline{\mathbf{L}}'_{0}(\omega) &= \overline{\mathbf{L}}'(0,\Xi),\\
\underline{\mathbf{C}}_{0}(\omega) &= \underline{\mathbf{C}}(0,\Xi), & \underline{\mathbf{C}}'_{0}(\omega) &= \underline{\mathbf{C}}'(0,\Xi), & \overline{\mathbf{C}}_{0}(\omega) &= \overline{\mathbf{C}}(0,\Xi), & \overline{\mathbf{C}}'_{0}(\omega) &= \overline{\mathbf{C}}'(0,\Xi),\\
\underline{\mathbf{M}}_{0}(\omega) &= \underline{\mathbf{M}}(0,\Xi), & \underline{\mathbf{M}}'_{0}(\omega) &= \underline{\mathbf{M}}'(0,\Xi), & \overline{\mathbf{M}}_{0}(\omega) &= \overline{\mathbf{M}}(0,\Xi), & \overline{\mathbf{M}}'_{0}(\omega) &= \overline{\mathbf{M}}'(0,\Xi),\\
\underline{\mathbf{E}}_{0}(\omega) &= \underline{\mathbf{E}}(0,\Xi), & \underline{\mathbf{E}}'_{0}(\omega) &= \underline{\mathbf{E}}'(0,\Xi), & \overline{\mathbf{E}}_{0}(\omega) &= \overline{\mathbf{E}}(0,\Xi), & \overline{\mathbf{E}}'_{0}(\omega) &= \overline{\mathbf{E}}'(0,\Xi),\\
\underline{\mathbf{Q}}_{0}(\omega) &= \underline{\mathbf{Q}}(0,\Xi), & \underline{\mathbf{Q}}'_{0}(\omega) &= \underline{\mathbf{Q}}'(0,\Xi), & \overline{\mathbf{Q}}_{0}(\omega) &= \overline{\mathbf{Q}}(0,\Xi), & \overline{\mathbf{Q}}'_{0}(\omega) &= \overline{\mathbf{Q}}'(0,\Xi),
\end{aligned}\tag{9}$$

$$\begin{aligned}
\underline{\mathbf{K}}_{1}(\omega) &= L^{-1}\Big[\frac{1}{s^{\vartheta}}L\big[\hat{r} - \hat{\imath}\,\underline{\mathbf{K}}_{0} - \hat{\imath}\,\underline{\mathbf{K}}'_{0} - \hat{s}\,\underline{\mathbf{K}}_{0}(\underline{\mathbf{C}}_{0} + \lambda\underline{\mathbf{M}}_{0}) - \hat{s}\,\underline{\mathbf{K}}'_{0}(\underline{\mathbf{C}}'_{0} + \lambda\underline{\mathbf{M}}'_{0}) - \hat{s}\,\underline{\mathbf{K}}_{0}\underline{\mathbf{Q}}_{0} - \hat{s}\,\underline{\mathbf{K}}'_{0}\underline{\mathbf{Q}}'_{0}\big]\Big],\\
\overline{\mathbf{K}}_{1}(\omega) &= L^{-1}\Big[\frac{1}{s^{\vartheta}}L\big[\hat{r} - \hat{\imath}\,\overline{\mathbf{K}}_{0} - \hat{\imath}\,\overline{\mathbf{K}}'_{0} - \hat{s}\,\overline{\mathbf{K}}_{0}(\overline{\mathbf{C}}_{0} + \lambda\overline{\mathbf{M}}_{0}) - \hat{s}\,\overline{\mathbf{K}}'_{0}(\overline{\mathbf{C}}'_{0} + \lambda\overline{\mathbf{M}}'_{0}) - \hat{s}\,\overline{\mathbf{K}}_{0}\overline{\mathbf{Q}}_{0} - \hat{s}\,\overline{\mathbf{K}}'_{0}\overline{\mathbf{Q}}'_{0}\big]\Big].
\end{aligned}\tag{10}$$

The remaining terms can be found in the same way.

As a result, the examined system's series solution is

$$\begin{aligned}
\underline{\mathbf{K}}(\omega) &= \underline{\mathbf{K}}_{0}(\omega) + \underline{\mathbf{K}}_{1}(\omega) + \underline{\mathbf{K}}_{2}(\omega) + \cdots, & \overline{\mathbf{K}}(\omega) &= \overline{\mathbf{K}}_{0}(\omega) + \overline{\mathbf{K}}_{1}(\omega) + \overline{\mathbf{K}}_{2}(\omega) + \cdots,\\
\underline{\mathbf{L}}(\omega) &= \underline{\mathbf{L}}_{0}(\omega) + \underline{\mathbf{L}}_{1}(\omega) + \underline{\mathbf{L}}_{2}(\omega) + \cdots, & \overline{\mathbf{L}}(\omega) &= \overline{\mathbf{L}}_{0}(\omega) + \overline{\mathbf{L}}_{1}(\omega) + \overline{\mathbf{L}}_{2}(\omega) + \cdots,\\
\underline{\mathbf{C}}(\omega) &= \underline{\mathbf{C}}_{0}(\omega) + \underline{\mathbf{C}}_{1}(\omega) + \underline{\mathbf{C}}_{2}(\omega) + \cdots, & \overline{\mathbf{C}}(\omega) &= \overline{\mathbf{C}}_{0}(\omega) + \overline{\mathbf{C}}_{1}(\omega) + \overline{\mathbf{C}}_{2}(\omega) + \cdots,\\
\underline{\mathbf{M}}(\omega) &= \underline{\mathbf{M}}_{0}(\omega) + \underline{\mathbf{M}}_{1}(\omega) + \underline{\mathbf{M}}_{2}(\omega) + \cdots, & \overline{\mathbf{M}}(\omega) &= \overline{\mathbf{M}}_{0}(\omega) + \overline{\mathbf{M}}_{1}(\omega) + \overline{\mathbf{M}}_{2}(\omega) + \cdots,\\
\underline{\mathbf{E}}(\omega) &= \underline{\mathbf{E}}_{0}(\omega) + \underline{\mathbf{E}}_{1}(\omega) + \underline{\mathbf{E}}_{2}(\omega) + \cdots, & \overline{\mathbf{E}}(\omega) &= \overline{\mathbf{E}}_{0}(\omega) + \overline{\mathbf{E}}_{1}(\omega) + \overline{\mathbf{E}}_{2}(\omega) + \cdots,\\
\underline{\mathbf{Q}}(\omega) &= \underline{\mathbf{Q}}_{0}(\omega) + \underline{\mathbf{Q}}_{1}(\omega) + \underline{\mathbf{Q}}_{2}(\omega) + \cdots, & \overline{\mathbf{Q}}(\omega) &= \overline{\mathbf{Q}}_{0}(\omega) + \overline{\mathbf{Q}}_{1}(\omega) + \overline{\mathbf{Q}}_{2}(\omega) + \cdots.
\end{aligned}\tag{11}$$
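For a linear test problem, this recursion can be carried out in closed form and checked numerically. A minimal sketch (an illustrative scalar problem, not the model itself): for ᶜ*D*<sup>*ϑ*</sup>*y* = *y*, *y*(0) = 1, *y*′(0) = 0, the Laplace–Adomian recursion gives *y*<sub>*n*</sub>(*ω*) = *ω*<sup>*nϑ*</sup>/Γ(*nϑ* + 1), whose sum is the Mittag-Leffler function *E*<sub>*ϑ*</sub>(*ω*<sup>*ϑ*</sup>); for *ϑ* = 2 this equals cosh *ω*.

```python
import math

def lat_series(omega, theta, n_terms=20):
    """Truncated Laplace-Adomian series for the linear test problem
    cD^theta y = y, y(0) = 1, y'(0) = 0:
    y_n(omega) = omega**(n*theta) / Gamma(n*theta + 1)."""
    return sum(omega ** (n * theta) / math.gamma(n * theta + 1)
               for n in range(n_terms))

# For theta = 2 the exact solution is the Mittag-Leffler function
# E_2(omega**2) = cosh(omega).
approx = lat_series(1.0, 2.0)
print(approx, math.cosh(1.0))
```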

#### **4. Numerical Results and Discussion**

Table 1 lists the model's parameter values. Consider the following initial conditions for the proposed Model (3); using them, we applied the proposed procedure to (3):

$$\begin{aligned}
\underline{\mathbf{K}}_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{K}}_{0}(\omega,\Xi) &= 1-2\Xi, & \underline{\mathbf{K}}'_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{K}}'_{0}(\omega,\Xi) &= 1-2\Xi,\\
\underline{\mathbf{L}}_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{L}}_{0}(\omega,\Xi) &= 1-2\Xi, & \underline{\mathbf{L}}'_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{L}}'_{0}(\omega,\Xi) &= 1-2\Xi,\\
\underline{\mathbf{C}}_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{C}}_{0}(\omega,\Xi) &= 1-2\Xi, & \underline{\mathbf{C}}'_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{C}}'_{0}(\omega,\Xi) &= 1-2\Xi,\\
\underline{\mathbf{M}}_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{M}}_{0}(\omega,\Xi) &= 1-2\Xi, & \underline{\mathbf{M}}'_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{M}}'_{0}(\omega,\Xi) &= 1-2\Xi,\\
\underline{\mathbf{E}}_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{E}}_{0}(\omega,\Xi) &= 1-2\Xi, & \underline{\mathbf{E}}'_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{E}}'_{0}(\omega,\Xi) &= 1-2\Xi,\\
\underline{\mathbf{Q}}_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{Q}}_{0}(\omega,\Xi) &= 1-2\Xi, & \underline{\mathbf{Q}}'_{0}(\omega,\Xi) &= 2\Xi-1, & \overline{\mathbf{Q}}'_{0}(\omega,\Xi) &= 1-2\Xi.
\end{aligned}$$

The second term of the series solution is

$$\begin{aligned}
\underline{\mathbf{K}}_{1}(\omega,\Xi) &= \big[\hat{r} - \hat{\imath}(2\Xi-1) - \hat{s}(2\Xi-1)^{2} - \lambda\hat{s}(2\Xi-1)^{2} - \hat{s}(2\Xi-1)^{2}\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\underline{\mathbf{K}}'_{1}(\omega,\Xi) &= \big[\hat{r}' - \hat{\imath}'(2\Xi-1) - \hat{s}'(2\Xi-1)^{2} - \lambda\hat{s}'(2\Xi-1)^{2} - \hat{s}'(2\Xi-1)^{2}\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\overline{\mathbf{K}}_{1}(\omega,\Xi) &= \big[\hat{r} - \hat{\imath}(1-2\Xi) - \hat{s}(1-2\Xi)^{2} - \lambda\hat{s}(1-2\Xi)^{2} - \hat{s}(1-2\Xi)^{2}\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\overline{\mathbf{K}}'_{1}(\omega,\Xi) &= \big[\hat{r}' - \hat{\imath}'(1-2\Xi) - \hat{s}'(1-2\Xi)^{2} - \lambda\hat{s}'(1-2\Xi)^{2} - \hat{s}'(1-2\Xi)^{2}\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},
\end{aligned}\tag{12}$$

$$\begin{aligned}
\underline{\mathbf{L}}_{1}(\omega,\Xi) &= \big[\hat{s}(2\Xi-1)^{2} + \lambda\hat{s}(2\Xi-1)^{2} + \hat{s}(2\Xi-1)^{2} - (1-\hat{\mathfrak{d}})\hat{\mu}(2\Xi-1) - \hat{\mathfrak{d}}\hat{\mu}(2\Xi-1) - \hat{\imath}(2\Xi-1)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\underline{\mathbf{L}}'_{1}(\omega,\Xi) &= \big[\hat{s}'(2\Xi-1)^{2} + \lambda\hat{s}'(2\Xi-1)^{2} + \hat{s}'(2\Xi-1)^{2} - (1-\hat{\mathfrak{d}})\hat{\mu}'(2\Xi-1) - \hat{\mathfrak{d}}\hat{\mu}'(2\Xi-1) - \hat{\imath}'(2\Xi-1)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\overline{\mathbf{L}}_{1}(\omega,\Xi) &= \big[\hat{s}(1-2\Xi)^{2} + \lambda\hat{s}(1-2\Xi)^{2} + \hat{s}(1-2\Xi)^{2} - (1-\hat{\mathfrak{d}})\hat{\mu}(1-2\Xi) - \hat{\mathfrak{d}}\hat{\mu}(1-2\Xi) - \hat{\imath}(1-2\Xi)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\overline{\mathbf{L}}'_{1}(\omega,\Xi) &= \big[\hat{s}'(1-2\Xi)^{2} + \lambda\hat{s}'(1-2\Xi)^{2} + \hat{s}'(1-2\Xi)^{2} - (1-\hat{\mathfrak{d}})\hat{\mu}'(1-2\Xi) - \hat{\mathfrak{d}}\hat{\mu}'(1-2\Xi) - \hat{\imath}'(1-2\Xi)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},
\end{aligned}\tag{13}$$

$$\begin{aligned}
\underline{\mathbf{C}}_{1}(\omega,\Xi) &= \big[(1-\hat{S})\hat{\mu}(2\Xi-1) - (\hat{\vartheta}_{p}+\hat{\imath})(2\Xi-1)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\underline{\mathbf{C}}'_{1}(\omega,\Xi) &= \big[(1-\hat{S})\hat{\mu}'(2\Xi-1) - (\hat{\vartheta}'_{p}+\hat{\imath}')(2\Xi-1)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\overline{\mathbf{C}}_{1}(\omega,\Xi) &= \big[(1-\hat{S})\hat{\mu}(1-2\Xi) - (\hat{\vartheta}_{p}+\hat{\imath})(1-2\Xi)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\overline{\mathbf{C}}'_{1}(\omega,\Xi) &= \big[(1-\hat{S})\hat{\mu}'(1-2\Xi) - (\hat{\vartheta}'_{p}+\hat{\imath}')(1-2\Xi)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},
\end{aligned}\tag{14}$$

$$\begin{aligned}
\underline{\mathbf{M}}_{1}(\omega,\Xi) &= \big[\hat{\mathfrak{d}}\hat{\mu}(2\Xi-1) - (\hat{\vartheta}-\hat{\imath})(2\Xi-1)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, & \underline{\mathbf{M}}'_{1}(\omega,\Xi) &= \big[\hat{\mathfrak{d}}\hat{\mu}'(2\Xi-1) - (\hat{\vartheta}'-\hat{\imath}')(2\Xi-1)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\overline{\mathbf{M}}_{1}(\omega,\Xi) &= \big[\hat{\mathfrak{d}}\hat{\mu}(1-2\Xi) - (\hat{\vartheta}-\hat{\imath})(1-2\Xi)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, & \overline{\mathbf{M}}'_{1}(\omega,\Xi) &= \big[\hat{\mathfrak{d}}\hat{\mu}'(1-2\Xi) - (\hat{\vartheta}'-\hat{\imath}')(1-2\Xi)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},
\end{aligned}\tag{15}$$

$$\begin{aligned}
\underline{\mathbf{E}}_{1}(\omega,\Xi) &= (2\Xi-1)\big[\hat{\vartheta}_{p}+\hat{\vartheta}-\hat{\imath}\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, & \underline{\mathbf{E}}'_{1}(\omega,\Xi) &= (2\Xi-1)\big[\hat{\vartheta}'_{p}+\hat{\vartheta}'-\hat{\imath}'\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\overline{\mathbf{E}}_{1}(\omega,\Xi) &= (1-2\Xi)\big[\hat{\vartheta}_{p}+\hat{\vartheta}-\hat{\imath}\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, & \overline{\mathbf{E}}'_{1}(\omega,\Xi) &= (1-2\Xi)\big[\hat{\vartheta}'_{p}+\hat{\vartheta}'-\hat{\imath}'\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},
\end{aligned}\tag{16}$$

$$\begin{aligned}
\underline{\mathbf{Q}}_{1}(\omega,\Xi) &= (2\Xi-1)\big[\hat{\varsigma}+\hat{\psi}-\hat{\mathfrak{j}}\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, & \underline{\mathbf{Q}}'_{1}(\omega,\Xi) &= (2\Xi-1)\big[\hat{\varsigma}'+\hat{\psi}'-\hat{\mathfrak{j}}'\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)},\\
\overline{\mathbf{Q}}_{1}(\omega,\Xi) &= (1-2\Xi)\big[\hat{\varsigma}+\hat{\psi}-\hat{\mathfrak{j}}\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, & \overline{\mathbf{Q}}'_{1}(\omega,\Xi) &= (1-2\Xi)\big[\hat{\varsigma}'+\hat{\psi}'-\hat{\mathfrak{j}}'\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}.
\end{aligned}\tag{17}$$
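The lower and upper branches of the two-term truncation **K** ≈ **K**<sub>0</sub> + **K**<sub>1</sub> from (12) can be evaluated directly. A minimal sketch; all parameter values below (`r_hat`, `i_hat`, `s_hat`, `lam`, `theta`) are hypothetical, chosen only to illustrate the evaluation, not taken from Table 1:

```python
import math

def K_two_term(xi, omega, theta, r_hat, i_hat, s_hat, lam, lower=True):
    """Two-term truncation K ~= K0 + K1 following (12):
    K0 = 2*xi - 1 (lower branch) or 1 - 2*xi (upper branch),
    K1 = [r - i*K0 - s*K0^2 - lam*s*K0^2 - s*K0^2] * omega**theta / Gamma(theta+1)."""
    K0 = 2 * xi - 1 if lower else 1 - 2 * xi
    bracket = (r_hat - i_hat * K0 - s_hat * K0 ** 2
               - lam * s_hat * K0 ** 2 - s_hat * K0 ** 2)
    return K0 + bracket * omega ** theta / math.gamma(theta + 1)

# Hypothetical parameters, for illustration only.
theta, r_hat, i_hat, s_hat, lam = 1.5, 0.1, 0.05, 0.02, 0.5
lo = K_two_term(0.2, 1.0, theta, r_hat, i_hat, s_hat, lam, lower=True)
hi = K_two_term(0.2, 1.0, theta, r_hat, i_hat, s_hat, lam, lower=False)
print(lo, hi)  # lower branch stays below the upper branch here
```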

Assume that

$$\begin{aligned}
I_{1} &= \hat{r} - \hat{\imath}(2\Xi-1) - \hat{s}(2\Xi-1)^{2} - \lambda\hat{s}(2\Xi-1)^{2} - \hat{s}(2\Xi-1)^{2},\\
\overline{I}_{1} &= \hat{r} - \hat{\imath}(1-2\Xi) - \hat{s}(1-2\Xi)^{2} - \lambda\hat{s}(1-2\Xi)^{2} - \hat{s}(1-2\Xi)^{2},\\
I_{2} &= \hat{s}(2\Xi-1)^{2} + \lambda\hat{s}(2\Xi-1)^{2} + \hat{s}(2\Xi-1)^{2} - (1-\hat{\mathfrak{d}})\hat{\mu}(2\Xi-1) - \hat{\mathfrak{d}}\hat{\mu}(2\Xi-1) - \hat{\imath}(2\Xi-1),\\
\overline{I}_{2} &= \hat{s}(1-2\Xi)^{2} + \lambda\hat{s}(1-2\Xi)^{2} + \hat{s}(1-2\Xi)^{2} - (1-\hat{\mathfrak{d}})\hat{\mu}(1-2\Xi) - \hat{\mathfrak{d}}\hat{\mu}(1-2\Xi) - \hat{\imath}(1-2\Xi),\\
I_{3} &= (1-\hat{S})\hat{\mu}(2\Xi-1) - (\hat{\vartheta}_{p}+\hat{\imath})(2\Xi-1), & \overline{I}_{3} &= (1-\hat{S})\hat{\mu}(1-2\Xi) - (\hat{\vartheta}_{p}+\hat{\imath})(1-2\Xi),\\
I_{4} &= \hat{\mathfrak{d}}\hat{\mu}(2\Xi-1) - (\hat{\vartheta}-\hat{\imath})(2\Xi-1), & \overline{I}_{4} &= \hat{\mathfrak{d}}\hat{\mu}(1-2\Xi) - (\hat{\vartheta}-\hat{\imath})(1-2\Xi),\\
I_{5} &= (2\Xi-1)\big[\hat{\vartheta}_{p}+\hat{\vartheta}-\hat{\imath}\big], & \overline{I}_{5} &= (1-2\Xi)\big[\hat{\vartheta}_{p}+\hat{\vartheta}-\hat{\imath}\big],\\
I_{6} &= (2\Xi-1)\big[\hat{\varsigma}+\hat{\psi}-\hat{\mathfrak{j}}\big], & \overline{I}_{6} &= (1-2\Xi)\big[\hat{\varsigma}+\hat{\psi}-\hat{\mathfrak{j}}\big],
\end{aligned}$$

and let the primed quantities $I'_{k}$, $\overline{I}'_{k}$ ($k = 1,\dots,6$) be defined in the same way with each parameter replaced by its primed counterpart.

We now compute the third term of the series.

$$\begin{aligned} \underline{\mathbf{K}}\_2(\omega, \Xi) &= \hat{r}\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)} - \hat{\imath}\,\underline{I}\_1\frac{\omega^{2\vartheta}}{\Gamma(2\vartheta+1)} - \big[\hat{s}(2\Xi - 1)\underline{I}\_3 + \underline{I}\_1\big]\frac{\omega^{2\vartheta}}{\Gamma(2\vartheta+1)} - \big[\hat{s}(2\Xi - 1)(\underline{I}\_6 + \underline{I}\_1)\big]\frac{\omega^{2\vartheta}}{\Gamma(2\vartheta+1)}, \\ \overline{\mathbf{K}}\_2(\omega, \Xi) &= \hat{r}\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)} - \hat{\imath}\,\overline{I}\_1\frac{\omega^{2\vartheta}}{\Gamma(2\vartheta+1)} - \big[\hat{s}(1 - 2\Xi)\overline{I}\_3 + \overline{I}\_1\big]\frac{\omega^{2\vartheta}}{\Gamma(2\vartheta+1)} - \big[\hat{s}(1 - 2\Xi)(\overline{I}\_6 + \overline{I}\_1)\big]\frac{\omega^{2\vartheta}}{\Gamma(2\vartheta+1)}, \\ \underline{\mathbf{L}}\_2(\omega, \Xi) &= \big[\hat{s}(2\Xi - 1)(\underline{I}\_3 + \underline{I}\_1)\big]\frac{\omega^{2\vartheta}}{\Gamma(2\vartheta+1)} + \big[\lambda\hat{s}(2\Xi - 1)^2 + \hat{s}(2\Xi - 1)^2 - (1 - \hat{\varpi})\hat{\mu}(2\Xi - 1) - \hat{\varpi}\hat{\mu}(2\Xi - 1) - \hat{\imath}(2\Xi - 1)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, \\ \overline{\mathbf{L}}\_2(\omega, \Xi) &= \big[\hat{s}(1 - 2\Xi)(\overline{I}\_3 + \overline{I}\_1)\big]\frac{\omega^{2\vartheta}}{\Gamma(2\vartheta+1)} + \big[\lambda\hat{s}(1 - 2\Xi)^2 + \hat{s}(1 - 2\Xi)^2 - (1 - \hat{\varpi})\hat{\mu}(1 - 2\Xi) - \hat{\varpi}\hat{\mu}(1 - 2\Xi) - \hat{\imath}(1 - 2\Xi)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)} \end{aligned}\tag{18}$$

$$\begin{aligned} \underline{\mathbf{C}}\_1(\omega, \Xi) &= \big[(1 - \hat{\varpi})\hat{\mu}(2\Xi - 1) - (\hat{\vartheta}\_p + \hat{\imath})(2\Xi - 1)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, \quad \overline{\mathbf{C}}\_1(\omega, \Xi) = \big[(1 - \hat{\varpi})\hat{\mu}(1 - 2\Xi) - (\hat{\vartheta}\_p + \hat{\imath})(1 - 2\Xi)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, \\ \underline{\mathbf{M}}\_1(\omega, \Xi) &= \big[\hat{\varpi}\hat{\mu}(2\Xi - 1) - (\hat{\vartheta} - \hat{\imath})(2\Xi - 1)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, \quad \overline{\mathbf{M}}\_1(\omega, \Xi) = \big[\hat{\varpi}\hat{\mu}(1 - 2\Xi) - (\hat{\vartheta} - \hat{\imath})(1 - 2\Xi)\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, \\ \underline{\mathbf{E}}\_2(\omega, \Xi) &= \big[\hat{\vartheta}\,\underline{I}\_3 - \hat{\vartheta}\,\underline{I}\_4 - \hat{\imath}\,\underline{I}\_5\big]\frac{\omega^{2\vartheta}}{\Gamma(2\vartheta+1)}, \quad \overline{\mathbf{E}}\_2(\omega, \Xi) = \big[\hat{\vartheta}\,\overline{I}\_3 - \hat{\vartheta}\,\overline{I}\_4 - \hat{\imath}\,\overline{I}\_5\big]\frac{\omega^{2\vartheta}}{\Gamma(2\vartheta+1)}, \\ \underline{\mathbf{Q}}\_2(\omega, \Xi) &= \big[\hat{\varsigma}\,\underline{I}\_3 + \hat{\psi}\,\underline{I}\_4 - \hat{\epsilon}\,\underline{I}\_6\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}, \quad \overline{\mathbf{Q}}\_2(\omega, \Xi) = \big[\hat{\varsigma}\,\overline{I}\_3 + \hat{\psi}\,\overline{I}\_4 - \hat{\epsilon}\,\overline{I}\_6\big]\frac{\omega^{\vartheta}}{\Gamma(\vartheta+1)}. \end{aligned}\tag{19}$$

Here, we present the computational results based on the numerical scheme discussed above. Figures 1–6 compare approximate fuzzy and approximate normal solutions of the considered model at various fractional orders for the given uncertainty. Figure 1 presents the susceptible population versus days; Figure 2, the exposed population; Figure 3, the infected population; Figure 4, the recovered population; Figure 5, the asymptomatically infected population; and Figure 6, the reservoir population, each for a fuzzy and a normal solution at fractional orders 1.25, 1.50 and 1.75. Figure 7 shows the recovered population versus days and Figure 8 the asymptomatically infected population versus days for a fuzzy and a normal solution at fractional orders 1.00, 1.25, 1.50, 1.75 and 2.00. As the susceptible class decreases, the exposed population grows, and the infection spreads at different rates due to the changing fractional orders. Similarly, as the number of death cases rises, the recovered class expands, the asymptomatically infected class expands, and the virus population in the reservoir expands. We can see from the figures that fuzziness, in combination with fractional order in (1, 2), captures the global dynamics of nonlinear problems where the data are uncertain.
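The role of the fractional order in such truncated series solutions can be illustrated with a short computation. The following Python sketch evaluates a generic three-term series of the form ∑_k c_k ω^(kϑ)/Γ(kϑ + 1), which is the structure of the scheme above; the coefficients in `coeffs` are illustrative stand-ins, not the model's I_k-dependent values.

```python
from math import gamma

def series_term(omega, k, order):
    """Weight of the k-th term: omega**(k*order) / Gamma(k*order + 1)."""
    return omega ** (k * order) / gamma(k * order + 1)

def truncated_series(omega, coeffs, order):
    """Evaluate sum_k c_k * omega**(k*order) / Gamma(k*order + 1); a
    three-entry coeffs list mirrors the three-term truncation used above."""
    return sum(c * series_term(omega, k, order) for k, c in enumerate(coeffs))

# Illustrative coefficients standing in for the I_k-dependent factors.
coeffs = [100.0, -2.5, 0.08]
for order in (1.25, 1.50, 1.75):
    # Profiles at a few time points; the order changes the spreading rate.
    print(order, [round(truncated_series(w, coeffs, order), 4) for w in (0.5, 1.0, 2.0)])
```

Varying `order` over (1, 2) reproduces the qualitative effect seen in Figures 1–6: the same coefficients yield visibly different decay/growth profiles.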

**Figure 1.** Illustration of approximate fuzzy and normal susceptible compartment solutions for three terms at given uncertainty levels *ϑ* ∈ [1, 2] versus fractional order.

**Figure 2.** Illustration of approximate fuzzy and normal exposed compartment solutions for three terms at given uncertainty levels *ϑ* ∈ [1, 2] versus fractional order.

**Figure 3.** Illustration of approximate fuzzy and normal infected compartment solutions for three terms at given uncertainty levels *ϑ* ∈ [1, 2] versus fractional order.

**Figure 4.** Illustration of approximate fuzzy and normal recovered compartment solutions for three terms at given uncertainty levels *ϑ* ∈ [1, 2] versus fractional order.

**Figure 5.** Illustration of approximate fuzzy and normal asymptomatically infected compartment solutions for three terms at given uncertainty levels *ϑ* ∈ [1, 2] versus fractional order.

**Figure 6.** Illustration of approximate fuzzy and normal reservoir compartment solutions for three terms at given uncertainty levels *ϑ* ∈ [1, 2] versus fractional order.

**Figure 7.** Illustration of approximate fuzzy and normal recovered compartment solutions for five terms at given uncertainty levels *ϑ* ∈ [1, 2] versus fractional order.

**Figure 8.** Illustration of approximate fuzzy and normal asymptomatically infected compartment solutions for five terms at given uncertainty levels *ϑ* ∈ [1, 2] versus fractional order.

**Remark 1.** *According to the results, the lower bound is an increasing set-valued function, while the upper bound is a decreasing one, indicating that the solutions are fuzzy numbers. It is also worth noting that, under fuzzy differentiability, identical findings can be derived in general circumstances.*

**Remark 2.** *Given that stochastic and random parameters are more difficult to address, and that uncertainty might contribute to an increase in computation costs, using fuzzy notions to model such real-world systems may be the best option.*

#### **5. Conclusions**

In this paper, we presented an analytical investigation of the fractional-order COVID-19 model (3), covering existence–uniqueness results as well as numerical simulations. We used Schauder's fixed-point theorem to prove that the solution to the fuzzy fractional-order model of COVID-19 exists and is unique. Using the fuzzy Laplace transform and the Adomian decomposition approach, we established a reasonable strategy for obtaining an approximate solution for the recommended model. We compared fuzzy and normal results for up to three terms to demonstrate the utility of this method. We found that fuzziness combined with a fractional calculus technique yields outstanding global dynamics in instances where data uncertainty exists. For future research, we recommend that readers revisit the problem for stochastic differential operators, optimal control, and sensitivity analysis. Furthermore, the provided results can be compared with simulations for different fractional derivatives.

**Author Contributions:** Formal analysis, A.U.K.N.; Project administration, M.B.J.; Supervision, A.S.A. and A.U.K.N.; Writing–original draft, R.S.; Writing–review & editing, M.B.J. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Imam Mohammad Ibn Saud Islamic University, grant number 21-13-18-069, and the APC was funded by Imam Mohammad Ibn Saud Islamic University.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** No new data were created in this study.

**Acknowledgments:** The authors extend their appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding this work through Research Group no. 21-13-18-069.

**Conflicts of Interest:** The authors declare that they have no known competing financial interest or personal relationships that could have appeared to influence the work reported in this paper.



## *Article* **Existence Results for a Multipoint Fractional Boundary Value Problem in the Fractional Derivative Banach Space**

**Djalal Boucenna 1,†, Amar Chidouh 2,† and Delfim F. M. Torres 3,\*,†**

	- Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal

**Abstract:** We study a class of nonlinear implicit fractional differential equations subject to nonlocal boundary conditions expressed in terms of nonlinear integro-differential equations. Using the Krasnosel'skii fixed-point theorem we prove, via the Kolmogorov–Riesz criteria, the existence of solutions. The existence results are established in a specific fractional derivative Banach space and they are illustrated by two numerical examples.

**Keywords:** fractional differential equations; boundary value problems; Kolmogorov–Riesz theorem; fixed point theorems; Nemytskii operator

**MSC:** 34B10; 34K37; 45J05

#### **1. Introduction**

It is noticeable, in recent years, that the field of fractional calculus has been swept for research by many mathematicians, due to its effectiveness in describing many physical phenomena, see, e.g., [1–7].

A fractional derivative is a generalization of the ordinary one. Despite the emergence of several definitions of fractional derivative, the content is one that depends entirely on Volterra integral equations and their kernel, which facilitates the description of each phenomenon as a temporal lag, such as rheological phenomena [8–10].

The study of differential equations is considered of primary importance in mathematics. In applications, differential equations serve as mathematical models for all natural phenomena. Regardless of their type (ordinary, partial, or fractional), the study of differential equations is developed in three directions: existence, uniqueness, and stability of solutions. Therefore, to investigate boundary value problems is always a central question in mathematics [11–16].

Often, it is of central importance to know the behavior of any solution, of the equation under study, at the boundary of the domain, because that makes it easier to find the solution. In 2009, Ahmad and Nieto considered the following boundary value problem [17]:

$$\begin{aligned} ^C D^\alpha y(t) &= f\left(t, y(t), \int\_0^t q(t, s) y(s) ds\right), \quad 1 < \alpha < 2, \\ a y(0) + b y'(0) &= \int\_0^1 q\_1(y(s)) ds, \\ a y(1) + b y'(1) &= \int\_0^1 q\_2(y(s)) ds. \end{aligned} \tag{1}$$

**Citation:** Boucenna, D.; Chidouh, A.; Torres, D.F.M. Existence Results for a Multipoint Fractional Boundary Value Problem in the Fractional Derivative Banach Space. *Axioms* **2022**, *11*, 295. https://doi.org/ 10.3390/axioms11060295

Academic Editor: Chris Goodrich

Received: 2 May 2022 Accepted: 14 June 2022 Published: 16 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Their results of existence are obtained via Krasnosel'skii fixed-point theorem in the space of continuous functions. For that, they apply Ascoli's theorem in order to provide the compactness of the first part of the Krasnosel'skii operator.

The pioneering work of Ahmad and Nieto of 2009 [17] gave rise to several different investigations. These include: inverse source problems for fractional integrodifferential equations [18]; the study of positive solutions for singular fractional boundary value problems with coupled integral boundary conditions [19]; the expression and properties of Green's function for nonlinear boundary value problems of fractional order with impulsive differential equations [20]; existence of solutions to several kinds of differential equations using the coincidence theory [21]; existence and uniqueness of solution for fractional differential equations with Riemann–Liouville fractional integral boundary conditions [22]; sufficient conditions for the existence and uniqueness of solutions for a class of terminal value problems of fractional order on an infinite interval [23]; existence of solutions, and stability, for fractional integro-differential equations involving a general form of Hilfer fractional derivative with respect to another function [24]; existence of solutions for a boundary value problem involving mixed generalized fractional derivatives of Riemann– Liouville and Caputo, supplemented with nonlocal multipoint boundary conditions [25]; existence conditions to fractional order hybrid differential equations [26]; and an existence analysis for a nonlinear implicit fractional differential equation with integral boundary conditions [27]. Motivated by all these existence results, we consider here a more general multipoint fractional boundary value problem in the fractional derivative Banach space.

Let 1 < *p* < ∞ and 1 ≥ *γ* > 1/*p*, and consider the following fractional boundary value problem:

$$\begin{aligned} ^C D^\alpha y(t) &= f\left(t, y(t), ^C D^\gamma y(t)\right) + ^C D^{\alpha - 2} g(t, y(t)), \quad 2 < \alpha < 3, \\ y(0) + y'(0) &= \int\_0^1 q\_1(y(s)) ds, \\ y(1) + y'(1) &= \int\_0^1 q\_2(y(s)) ds, \\ y''(0) &= 0, \end{aligned} \tag{2}$$

where *CD<sup>α</sup>* is the standard Caputo derivative, *<sup>f</sup>* : [0, 1] <sup>×</sup> <sup>R</sup> <sup>×</sup> <sup>R</sup> <sup>→</sup> <sup>R</sup>, and *<sup>g</sup>* : [0, 1] <sup>×</sup> R → R and *<sup>q</sup>*1, *<sup>q</sup>*<sup>2</sup> : R → R are given functions such that *<sup>g</sup>*(*t*, 0)= *<sup>g</sup>*(0, *<sup>y</sup>*) = *<sup>q</sup>*1(0) = *<sup>q</sup>*2(0) = 0 for any (*t*, *<sup>y</sup>*) ∈ [0, 1] × R. Our problem (2) generalizes (1) and finds applications in viscoelasticity, where the fractional operators are associated with delay kernels that make the fractional differential equations the best models for several rheological Maxwell phenomena. In particular, for *α* ∈ (1, 2), we can model oscillatory processes with fractional damping [28].

We prove existence of a solution to problem (2) in the special Banach space *Eγ*,*<sup>p</sup>* that is known in the literature as the fractional derivative space [29]. This Banach space is equipped with the norm

$$\|u\|\_{\gamma,p} = \left(\int\_0^T |u(t)|^p\,dt + \int\_0^T \left| {}^C D\_0^\gamma u(t) \right|^p dt \right)^{\frac{1}{p}}.\tag{3}$$

The paper is organized as follows. In Section 2, we recall some useful definitions and lemmas to prove our main results. The original contributions are then given in Section 3. The main result is Theorem 1, which establishes the existence of solutions to the fractional boundary value problem (2) using Krasnosel'skii fixed point theorem. Two illustrative examples are given. We end with Section 4, discussing the obtained existence result.

#### **2. Preliminaries**

For the convenience of the reader, and to facilitate the analysis of problem (2), we begin by recalling the necessary background from the theory of fractional calculus [30,31].

**Definition 1.** *The Riemann–Liouville fractional integral of order α* > 0 *of a function f* : (0, +∞) → R *is given by*

$$I\_0^\alpha f(t) = \frac{1}{\Gamma(\alpha)} \int\_0^t (t-s)^{\alpha-1} f(s)ds.$$

**Definition 2.** *The Caputo fractional derivative of order <sup>α</sup>* > 0 *of a function <sup>f</sup>* : (0, +∞) → R *is given by*

$${}^{C}D\_{0}^{\alpha}f(t) = \frac{1}{\Gamma(n-\alpha)} \int\_{0}^{t} \frac{f^{(n)}(s)}{(t-s)^{\alpha-n+1}}ds = I\_{0}^{n-\alpha}f^{(n)}(t),$$

*where n* = [*α*] + 1*, with* [*α*] *denoting the integer part of α.*
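Definitions 1 and 2 can be checked numerically. The Python sketch below (an illustration with our own function names, not part of the paper) approximates the Caputo derivative of f(t) = t² for 0 < α < 1 by a midpoint quadrature of the defining integral and compares it with the closed form Γ(3)/Γ(3 − α) t^(2−α).

```python
from math import gamma

def caputo_t_pow(alpha, m, t):
    """Closed form: C_D^alpha t**m = Gamma(m+1)/Gamma(m+1-alpha) * t**(m-alpha)."""
    return gamma(m + 1) / gamma(m + 1 - alpha) * t ** (m - alpha)

def caputo_quadrature(alpha, f_prime, t, n=20000):
    """Definition-based approximation for 0 < alpha < 1 (so that n = 1 in
    Definition 2): (1/Gamma(1-alpha)) * int_0^t f'(s) (t-s)**(-alpha) ds,
    computed by the midpoint rule (midpoints avoid the endpoint singularity)."""
    h = t / n
    acc = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        acc += f_prime(s) * (t - s) ** (-alpha) * h
    return acc / gamma(1 - alpha)

alpha, t = 0.5, 1.0
print(caputo_quadrature(alpha, lambda s: 2 * s, t), caputo_t_pow(alpha, 2, t))
```

The two printed values agree to a few parts in a thousand; the gap shrinks as `n` grows because the kernel's singularity at s = t is integrable.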

**Lemma 1** (See [17])**.** *Let α* > 0*. Then, the fractional differential equation CD<sup>α</sup>* <sup>0</sup><sup>+</sup> *u*(*t*) = 0 *has*

$$u(t) = c\_0 + c\_1t + c\_2t^2 + \dots + c\_{n-1}t^{n-1}, \quad c\_i \in \mathbb{R}, \quad i = 0, 1, \dots, n-1,$$

*as solution.*

**Definition 3.** *A map f* : [0, 1] × R × R → R *is said to be Carathéodory if t* → *f*(*t*, *u*, *v*) *is measurable for every* (*u*, *v*) ∈ R × R*, and* (*u*, *v*) → *f*(*t*, *u*, *v*) *is continuous for almost every t* ∈ [0, 1]*.*


**Definition 4.** *Let <sup>J</sup> be a measurable subset of* <sup>R</sup> *and let <sup>f</sup>* : *<sup>J</sup>* <sup>×</sup> <sup>R</sup>*d*<sup>1</sup> <sup>→</sup> <sup>R</sup>*d*<sup>2</sup> *satisfy the Carathéodory condition. By a generalized Nemytskii operator we mean the mapping Nf taking a (measurable) vector function u* = (*u*1,..., *ud*<sup>1</sup>) *to the function Nf u*(*t*) = *f*(*t*, *u*(*t*))*, t* ∈ *J.*

The following lemma is concerned with the continuity of the operator *Nf* with *d*<sup>1</sup> = 2 and *d*<sup>2</sup> = 1.

**Lemma 2** (See [32])**.** *Consider the same data of Definition 4. Assume there exists <sup>w</sup>* <sup>∈</sup> *<sup>L</sup>*1([0, 1])*, with* 1 ≤ *p* < ∞*, and a constant c* > 0 *such that* | *f*(*t*, *u*, *v*)| ≤ *w*(*t*) + *c*(|*u*| *<sup>p</sup>* <sup>+</sup> <sup>|</sup>*v*<sup>|</sup> *<sup>p</sup>*) *for almost all t* ∈ [0, 1] *and u*, *<sup>v</sup>* ∈ R*. Then, the Nemytskii operator*

$$N\_f u(t) = f(t, u(t)), \quad u = (u\_1, u\_2) \in L^p(0, 1) \times L^p(0, 1), \quad t \in [0, 1] \text{ a.e.} $$

*is continuous from Lp*([0, 1]) <sup>×</sup> *<sup>L</sup>p*([0, 1]) *to L*1(0, 1)*.*

**Lemma 3** (See [33])**.** *Let* <sup>F</sup> *be a bounded set in Lp*([0, 1]) *with* <sup>1</sup> <sup>≤</sup> *<sup>p</sup>* <sup>&</sup>lt; <sup>∞</sup>*. Assume that*

$$\lim\_{|\mathbf{h}| \to 0} \|\pi\_{\mathbf{h}}f - f\|\_{p} = 0 \text{ uniformly on } \mathcal{F}.$$

*Then,* <sup>F</sup> *is relatively compact in Lp*([0, 1])*.*

For any 1 ≤ *p* < ∞, we denote

$$||u||\_{L^{p}[0,T]} := \left(\int\_{0}^{T} |u(t)|^{p}\,dt\right)^{\frac{1}{p}}, \quad ||u||\_{\infty} := \max\_{t \in [0,T]} |u(t)|.$$

Now, we give the definition and some properties of *Eγ*,*p*. For more details about the following lemmas, see [29,34] and references therein.

**Definition 5.** *Let* <sup>0</sup> <sup>&</sup>lt; *<sup>γ</sup>* <sup>≤</sup> <sup>1</sup> *and* <sup>1</sup> <sup>&</sup>lt; *<sup>p</sup>* <sup>&</sup>lt; <sup>∞</sup>*. The fractional derivative space <sup>E</sup>γ*,*<sup>p</sup> is defined by the closure of C*∞([0, *T*]) *with respect to the norm*

$$\|u\|\_{\gamma,p} = \left(\int\_0^T |u(t)|^p\,dt + \int\_0^T \left| {}^C D\_0^\gamma u(t) \right|^p dt \right)^{\frac{1}{p}}.\tag{4}$$

**Lemma 4** (See [29,34])**.** *Let* <sup>0</sup> <sup>&</sup>lt; *<sup>γ</sup>* <sup>≤</sup> <sup>1</sup> *and* <sup>1</sup> <sup>&</sup>lt; *<sup>p</sup>* <sup>&</sup>lt; <sup>∞</sup>*. The fractional derivative space <sup>E</sup>γ*,*<sup>p</sup> is a reflexive and separable Banach space.*

**Lemma 5** (See [29,34])**.** *Let* <sup>0</sup> <sup>&</sup>lt; *<sup>γ</sup>* <sup>≤</sup> <sup>1</sup> *and* <sup>1</sup> <sup>&</sup>lt; *<sup>p</sup>* <sup>&</sup>lt; <sup>∞</sup>*. For all u* <sup>∈</sup> *<sup>E</sup>γ*,*p, we have*

$$\|u\|\_{L^p} \le \frac{T^{\gamma}}{\Gamma(\gamma+1)} \left\|{}^C D\_0^\gamma u\right\|\_{L^p}.\tag{5}$$

*Moreover, if γ* > <sup>1</sup> *<sup>p</sup> and* <sup>1</sup> *<sup>p</sup>* <sup>+</sup> <sup>1</sup> *<sup>q</sup>* = 1*, then*

$$\|u\|\_{\infty} \le \frac{T^{\gamma - \frac{1}{p}}}{\Gamma(\gamma)((\gamma - 1)q + 1)^{\frac{1}{q}}} \left\|{}^{C}D\_{0}^{\gamma}u\right\|\_{L^{p}}.\tag{6}$$

According to the inequality (5), we can also consider the space *Eγ*,*<sup>p</sup>* with respect to the equivalent norm

$$\|u\|\_{\gamma,p} = \left\|{}^{C}D\_{0}^{\gamma}u\right\|\_{L^{p}} = \left(\int\_{0}^{T} \left|{}^{C}D\_{0}^{\gamma}u(t)\right|^{p}dt\right)^{\frac{1}{p}}, \quad u \in E\_{\gamma,p}.$$
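As a concrete check of this equivalent norm, take u(t) = t, for which the Caputo derivative is t^(1−γ)/Γ(2 − γ). The following Python sketch (illustrative, with hypothetical function names) compares a midpoint-rule evaluation of the norm with its closed form.

```python
from math import gamma

def caputo_of_t(t, gam):
    """C_D^gamma applied to u(t) = t gives t**(1-gam) / Gamma(2-gam)."""
    return t ** (1 - gam) / gamma(2 - gam)

def norm_midpoint(T, gam, p, n=100000):
    """Midpoint-rule value of ||u||_{gamma,p} = (int_0^T |C_D^gamma u|^p dt)^(1/p)."""
    h = T / n
    total = sum(caputo_of_t((i + 0.5) * h, gam) ** p * h for i in range(n))
    return total ** (1 / p)

def norm_exact(T, gam, p):
    """Closed form of the same norm for u(t) = t."""
    e = (1 - gam) * p + 1
    return (T ** e / (e * gamma(2 - gam) ** p)) ** (1 / p)

print(norm_midpoint(1.0, 0.6, 2.0), norm_exact(1.0, 0.6, 2.0))
```

The agreement of the two values illustrates that the equivalent norm is computable directly from the Caputo derivative alone, as inequality (5) suggests.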

#### **3. Main Results**

We begin by considering a linear problem and obtain its solution in terms of a Green function.

**Lemma 6.** *Assume h*, *k* ∈ *C*([0, 1])*, k*(0) = 0 *and α* ∈ (2, 3)*. Then, the solution to the boundary value problem*

$$\begin{aligned} \,^C D^\alpha y(t) &= h(t) + ^C D^{\alpha-2} k(t), \quad t \in (0, 1), \\ y''(0) &= 0, \\ y(0) + y'(0) &= \int\_0^1 \eta\_1(s) ds, \\ y(1) + y'(1) &= \int\_0^1 \eta\_2(s) ds, \end{aligned} \tag{7}$$

*is given by*

$$y(t) = \int\_0^1 G(t,s)h(s)ds + \int\_0^1 H(t,s)k(s)ds + (2-t)\int\_0^1 \eta\_1(s)ds + (t-1)\int\_0^1 \eta\_2(s)ds, \quad t \in [0, 1],$$

*where*

$$G(t,s) = \begin{cases} \frac{(t-s)^{\alpha-1} + (1-t)(1-s)^{\alpha-1}}{\Gamma(\alpha)} + \frac{(1-t)(1-s)^{\alpha-2}}{\Gamma(\alpha-1)}, & 0 \le s \le t \le 1, \\ \frac{(1-t)(1-s)^{\alpha-1}}{\Gamma(\alpha)} + \frac{(1-t)(1-s)^{\alpha-2}}{\Gamma(\alpha-1)}, & 0 \le t \le s \le 1, \end{cases} \tag{8}$$

*and*

$$H(t,s) = \begin{cases} \ (t-s) + (1-t)(2-s), & 0 \le s \le t \le 1, \\\ \ (1-t)(2-s), & 0 \le t \le s \le 1. \end{cases} \tag{9}$$

**Proof.** Let *y* be a solution of problem (7). By Lemma 1, we have

$$y(t) = c\_0 + c\_1 t + c\_2 t^2 + \frac{1}{\Gamma(\alpha)} \int\_0^t (t - s)^{\alpha - 1} h(s) ds + I\_0^2 k(t).$$

Taking the conditions (7) into account, it follows that

$$\mathbf{c}\_2 = \mathbf{0},$$

$$y(0) + y'(0) = \mathbf{c}\_0 + \mathbf{c}\_1 = \int\_0^1 \eta\_1(s) ds,$$

and

$$\begin{aligned} y(1) + y'(1) &= c\_0 + 2c\_1 + \frac{1}{\Gamma(\alpha)} \int\_0^1 (1-s)^{\alpha-1} h(s) ds + \int\_0^1 (1-s) k(s) ds \\ &+ \frac{1}{\Gamma(\alpha-1)} \int\_0^1 (1-s)^{\alpha-2} h(s) ds + \int\_0^1 k(s) ds \\ &= \int\_0^1 \eta\_2(s) ds, \end{aligned}$$

which implies

$$\begin{aligned} c\_0 &= \frac{1}{\Gamma(\alpha)} \int\_0^1 (1-s)^{\alpha-1} h(s) ds + \frac{1}{\Gamma(\alpha-1)} \int\_0^1 (1-s)^{\alpha-2} h(s) ds \\ &+ \int\_0^1 (2-s) k(s) ds + 2 \int\_0^1 \eta\_1(s) ds - \int\_0^1 \eta\_2(s) ds, \end{aligned}$$

and

$$\begin{aligned} c\_1 &= -\frac{1}{\Gamma(\alpha)} \int\_0^1 (1-s)^{\alpha-1} h(s) ds - \frac{1}{\Gamma(\alpha-1)} \int\_0^1 (1-s)^{\alpha-2} h(s) ds \\ &- \int\_0^1 (2-s) k(s) ds + \int\_0^1 \eta\_2(s) ds - \int\_0^1 \eta\_1(s) ds. \end{aligned}$$

Hence, the solution of problem (7) is

$$\begin{split} y(t) &= \int\_{0}^{t} \left( \frac{(t-s)^{\alpha-1} + (1-t)(1-s)^{\alpha-1}}{\Gamma(\alpha)} + \frac{(1-t)(1-s)^{\alpha-2}}{\Gamma(\alpha-1)} \right) h(s) ds \\ &+ \int\_{t}^{1} \left( \frac{(1-t)(1-s)^{\alpha-1}}{\Gamma(\alpha)} + \frac{(1-t)(1-s)^{\alpha-2}}{\Gamma(\alpha-1)} \right) h(s) ds \\ &+ \int\_{0}^{t} ((t-s) + (1-t)(2-s)) k(s) ds + \int\_{t}^{1} (1-t)(2-s) k(s) ds \\ &+ (2-t) \int\_{0}^{1} \eta\_{1}(s) ds + (t-1) \int\_{0}^{1} \eta\_{2}(s) ds \\ &= \int\_{0}^{1} G(t,s) h(s) ds + \int\_{0}^{1} H(t,s) k(s) ds \\ &+ (2-t) \int\_{0}^{1} \eta\_{1}(s) ds + (t-1) \int\_{0}^{1} \eta\_{2}(s) ds. \end{split}$$

The proof is complete.

**Lemma 7.** *The functions G, H, ∂<sup>γ</sup>/∂t G and ∂<sup>γ</sup>/∂t H are continuous on* [0, 1] × [0, 1] *and satisfy, for all t*,*s* ∈ [0, 1]*: 1.* |*G*(*t*,*s*)| ≤ 3/Γ(*α* − 1)*,* |*H*(*t*,*s*)| ≤ 3*.*

$$2. \qquad \left| \frac{\partial^{\gamma}}{\partial t} G(t, s) \right| \leq \frac{\Gamma(\alpha)}{\Gamma(\alpha - \gamma)} + \frac{2}{\Gamma(2 - \gamma)\Gamma(\alpha - 1)}, \quad \left| \frac{\partial^{\gamma}}{\partial t} H(t, s) \right| \leq \frac{3}{\Gamma(2 - \gamma)}.$$

**Proof.** We have

$${}^{C}D\_{0}^{\gamma}(1-t) = I\_{0}^{1-\gamma}(1-t)^{\prime} = -\frac{1}{\Gamma(2-\gamma)}t^{1-\gamma},\tag{10}$$

and

$$\frac{\partial^{\gamma}}{\partial t}(t-s)^{\alpha-1} = I\_0^{1-\gamma} \frac{\partial}{\partial t}(t-s)^{\alpha-1} = (\alpha-1)I\_0^{1-\gamma}(t-s)^{\alpha-2}.$$

Thus, for 0 ≤ *s* ≤ *t* ≤ 1, we get *∂<sup>γ</sup>/∂t* (*t* − *s*)<sup>*α*−1</sup> ≥ 0 and

$$\frac{\partial^{\gamma}}{\partial t}(t-s)^{\alpha-1} \le {}^{C} D\_0^{\gamma} t^{\alpha-1} = \frac{\Gamma(\alpha)}{\Gamma(\alpha-\gamma)} t^{\alpha-\gamma-1}.\tag{11}$$

On the other hand, we have Γ(*α* − 1) ≤ Γ(*α*) for 2 ≤ *α* ≤ 3. Now, we give the bounds of the functions |*G*(*t*,*s*)| and |*∂<sup>γ</sup>/∂t G*(*t*,*s*)|. From the definition of the function *G* and (10) and (11), we obtain:

• For 0 ≤ *s* ≤ *t* ≤ 1,

$$\begin{array}{rcl} |G(t,s)| &=& \frac{(t-s)^{\alpha-1} + (1-t)(1-s)^{\alpha-1}}{\Gamma(\alpha)} + \frac{(1-t)(1-s)^{\alpha-2}}{\Gamma(\alpha-1)} \\ &\leq& \frac{(1-s)^{\alpha-1}(1+(1-t))}{\Gamma(\alpha)} + \frac{(1-t)(1-s)^{\alpha-2}}{\Gamma(\alpha-1)} \\ &\leq& \frac{1+(1-t)}{\Gamma(\alpha)} + \frac{1-t}{\Gamma(\alpha-1)} \\ &\leq& \frac{3}{\Gamma(\alpha-1)} \end{array}$$

and

$$\begin{array}{rcl} \left| \frac{\partial^{\gamma}}{\partial t} G(t, s) \right| &\leq& \left| \frac{\Gamma(\alpha)}{\Gamma(\alpha - \gamma)} t^{\alpha - \gamma - 1} \right| + \left| \frac{t^{1 - \gamma}}{\Gamma(2 - \gamma)} \left( \frac{(1 - s)^{\alpha - 1}}{\Gamma(\alpha)} + \frac{(1 - s)^{\alpha - 2}}{\Gamma(\alpha - 1)} \right) \right| \\ &\leq& \frac{\Gamma(\alpha)}{\Gamma(\alpha - \gamma)} + \frac{1}{\Gamma(2 - \gamma)} \left( \frac{1}{\Gamma(\alpha)} + \frac{1}{\Gamma(\alpha - 1)} \right) \\ &\leq& \frac{\Gamma(\alpha)}{\Gamma(\alpha - \gamma)} + \frac{2}{\Gamma(2 - \gamma)\Gamma(\alpha - 1)}. \end{array}$$

• For 0 ≤ *t* ≤ *s* ≤ 1,

$$\begin{array}{rcl} \left| G(t,s) \right| &=& \frac{(1-t)(1-s)^{\alpha-1}}{\Gamma(\alpha)} + \frac{(1-t)(1-s)^{\alpha-2}}{\Gamma(\alpha-1)}\\ &\leq& \frac{(1-t)}{\Gamma(\alpha)} + \frac{(1-t)}{\Gamma(\alpha-1)}\\ &\leq& \frac{2}{\Gamma(\alpha-1)}. \end{array}$$

and

$$\begin{array}{rcl} \left| \frac{\partial^{\gamma}}{\partial t} G(t, s) \right| &=& \left| -\frac{t^{1-\gamma}}{\Gamma(2-\gamma)} \left( \frac{(1-s)^{\alpha-1}}{\Gamma(\alpha)} + \frac{(1-s)^{\alpha-2}}{\Gamma(\alpha-1)} \right) \right| \\ &\leq& \frac{1}{\Gamma(2-\gamma)} \left( \frac{1}{\Gamma(\alpha)} + \frac{1}{\Gamma(\alpha-1)} \right) \\ &\leq& \frac{2}{\Gamma(2-\gamma)\Gamma(\alpha-1)}. \end{array}$$

By using the same calculations as above, we obtain the estimates of |*H*(*t*,*s*)| and |*∂<sup>γ</sup>/∂t H*(*t*,*s*)|. The proof is complete.
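The bounds in Lemma 7 can also be observed numerically. The Python sketch below (an illustration, not part of the paper) implements the Green functions (8) and (9) for a sample α ∈ (2, 3) and checks estimate 1 on a uniform grid.

```python
from math import gamma

def G(t, s, alpha):
    """Green function (8); the (t-s) term is present only for s <= t."""
    val = (1 - t) * (1 - s) ** (alpha - 1) / gamma(alpha) \
        + (1 - t) * (1 - s) ** (alpha - 2) / gamma(alpha - 1)
    if s <= t:
        val += (t - s) ** (alpha - 1) / gamma(alpha)
    return val

def H(t, s):
    """Green function (9)."""
    return (t - s) + (1 - t) * (2 - s) if s <= t else (1 - t) * (2 - s)

alpha = 2.5  # sample order in (2, 3)
grid = [i / 50 for i in range(51)]
g_max = max(abs(G(t, s, alpha)) for t in grid for s in grid)
h_max = max(abs(H(t, s)) for t in grid for s in grid)
print(g_max, 3 / gamma(alpha - 1))  # Lemma 7, item 1, first bound
print(h_max, 3)                     # Lemma 7, item 1, second bound
```

On the grid, both maxima sit strictly below the stated bounds, consistent with the (non-sharp) estimates of Lemma 7.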

In the sequel, we denote

$$G\_{\gamma}(t, s) := \frac{\partial^{\gamma}}{\partial t} G(t, s), \quad (t, s) \in [0, 1] \times [0, 1].$$

Moreover, we also use the following notations: *<sup>G</sup>*<sup>∗</sup> := max*t*,*s*∈[0,1]×[0,1]|*G*(*t*,*s*)| and

$$G^\*\_\gamma := \max\_{t,s \in [0,1] \times [0,1]} |G\_\gamma(t,s)|.$$

**Theorem 1.** *Assume that the following four hypotheses hold:*

*(H1) <sup>f</sup>* : [0, 1] × R × R → R *satisfies the Carathéodory condition.*

*(H2) There exist w* <sup>∈</sup> *<sup>L</sup>*1(0, 1) *and c* <sup>&</sup>gt; <sup>0</sup> *such that*

$$|f(t, u, v)| \le w(t) + c(|u|^p + |v|^p) \text{ for } t \in (0, 1) \text{ and } u, v \in \mathbb{R}.\tag{12}$$

*(H3) There exist two strictly positive constants <sup>k</sup>*<sup>1</sup> *and <sup>k</sup>*<sup>2</sup> *and a function <sup>ϕ</sup>*<sup>1</sup> <sup>∈</sup> *<sup>L</sup>q*((0, 1), <sup>R</sup>+)*,* 1 *<sup>p</sup>* <sup>+</sup> <sup>1</sup> *<sup>q</sup>* = <sup>1</sup>*, such that for all t* ∈ [0, 1] *and x*, *<sup>y</sup>* ∈ R*, we have*

$$\begin{aligned} |g(t, x) - g(t, y)| &\le \varphi\_1(t)|x - y|, \\ |q\_1(x) - q\_1(y)| &\le k\_1|x - y|, \\ |q\_2(x) - q\_2(y)| &\le k\_2|x - y|. \end{aligned}$$

*(H4) There exists a real number R* > 0 *such that*

$$\frac{R\left[3\|\,\varphi\_{1}\|\_{q} + k\_{1} + k\_{2}\right]}{\Gamma(2-\gamma)\Gamma(1+\gamma)} + G\_{\gamma}^{\*}\left(\|w\|\_{1} + c\left(1 + \left(\frac{1}{\Gamma(\gamma+1)}\right)^{p}\right)R^{p}\right) \le R.\tag{13}$$

*Then, if*

$$\frac{3\|\varphi\_1\|\_q + k\_1 + k\_2}{\Gamma(2-\gamma)\Gamma(1+\gamma)} < 1,\tag{14}$$

*the boundary value problem* (2) *has a solution in Eγ*,*p.*

**Proof.** We transform problem (2) into a fixed-point problem. Define two operators *F*, *L* : *<sup>E</sup>γ*,*<sup>p</sup>* <sup>→</sup> *<sup>E</sup>γ*,*<sup>p</sup>* by

$$Fy(t) = \int\_0^1 G(t, s) f(s, y(s), D^\gamma y(s)) ds,$$

and

$$L y(t) = \int\_0^1 H(t, s) g(s, y(s)) ds + (2 - t) \int\_0^1 q\_1(y(s)) ds + (t - 1) \int\_0^1 q\_2(y(s)) ds.$$

Then, *y* is a solution of problem (2) if, and only if, *y* is a fixed point of *F* + *L*. We define the set *BR* as follows:

$$B_R := \left\{ u \in E_{\gamma,p} : \|u\|_{E_{\gamma,p}} \le R \right\},$$

where $R$ is the constant from (*H*4). It is clear that $B_R$ is a convex, closed, and bounded subset of $E_{\gamma,p}$. We shall show that $F$ and $L$ satisfy the assumptions of the Krasnosel'skii fixed-point theorem. The proof is given in several steps.

(i) We prove that *<sup>F</sup>* is continuous. Let (*yn*)*n*∈<sup>N</sup> be a sequence such that *yn* <sup>→</sup> *<sup>y</sup>* in *<sup>E</sup>γ*,*p*. From (12) and Lemma 2, and for each *t* ∈ [0, 1], we obtain

$$\begin{aligned} &\left| \left( {}^{C}D_{0}^{\gamma}Fy_{n} \right)(t) - \left( {}^{C}D_{0}^{\gamma}Fy \right)(t) \right| \\ &\le \int_{0}^{1} |G_{\gamma}(t,s)| \left| f(s,y_{n}(s),D^{\gamma}y_{n}(s)) - f(s,y(s),D^{\gamma}y(s)) \right| ds \\ &\le G_{\gamma}^{*} \left\| N_{f}y_{n} - N_{f}y \right\|_{1}. \end{aligned}$$

Applying the $L^p$ norm, we obtain that $\|Fy_n - Fy\|_{E_{\gamma,p}} \to 0$ when $y_n \to y$ in $E_{\gamma,p}$. Thus, the operator $F$ is continuous.

(ii) Now, we prove that $Fx + Ly \in B_R$ for $x, y \in B_R$. Let $x, y \in B_R$ and $t \in (0,1)$. In view of hypothesis (*H*2), we obtain

$$\begin{aligned} \left| {}^C D_0^\gamma Fy(t) \right| &\le \int_0^1 |G_\gamma(t,s)| |f(s,y(s),D^\gamma y(s))| ds \\ &\le G_\gamma^* \left( \|w\|_1 + c \left( \|y\|_p^p + \left\| {}^C D_0^\gamma y \right\|_p^p \right) \right) \\ &\le G_\gamma^* \left( \|w\|_1 + c \left( 1 + \left( \frac{1}{\Gamma(\gamma+1)} \right)^p \right) R^p \right). \end{aligned}$$

Applying the *L<sup>p</sup>* norm, we obtain that

$$\|Fy\|_{E_{\gamma,p}} \le G_\gamma^* \left( \|w\|_1 + c \left( 1 + \left( \frac{1}{\Gamma(\gamma+1)} \right)^p \right) R^p \right). \tag{15}$$

Also,

$$\begin{aligned} \left| {}^{C}D_{0}^{\gamma}L(x)(t) \right| &\le \frac{3}{\Gamma(2-\gamma)} \int_{0}^{1} |g(s,x(s))| ds + \frac{1}{\Gamma(2-\gamma)} \int_{0}^{1} |q_{1}(x(s))| ds + \frac{1}{\Gamma(2-\gamma)} \int_{0}^{1} |q_{2}(x(s))| ds \\ &\le \frac{3}{\Gamma(2-\gamma)} \int_{0}^{1} \varphi_{1}(s)|x(s)| ds + \frac{1}{\Gamma(2-\gamma)} \int_{0}^{1} k_{1}|x(s)| ds + \frac{1}{\Gamma(2-\gamma)} \int_{0}^{1} k_{2}|x(s)| ds. \end{aligned}$$

Applying again the $L^p$ norm, we obtain from Hölder's inequality that

$$\|L(x)\|_{E_{\gamma,p}} \le \frac{3}{\Gamma(2-\gamma)} \|\varphi_1\|_{q} \|x\|_{p} + \frac{k_1}{\Gamma(2-\gamma)} \|x\|_{p} + \frac{k_2}{\Gamma(2-\gamma)} \|x\|_{p}.$$

In view of (5), we obtain

$$\|L(x)\|_{E_{\gamma,p}} \le \left[ \frac{3\|\varphi_1\|_q}{\Gamma(2-\gamma)\Gamma(1+\gamma)} + \frac{k_1}{\Gamma(2-\gamma)\Gamma(1+\gamma)} + \frac{k_2}{\Gamma(2-\gamma)\Gamma(1+\gamma)} \right] \|x\|_{E_{\gamma,p}}.$$

Then,

$$\|L(x)\|_{E_{\gamma,p}} \le \frac{R\left[3\|\varphi_1\|_q + k_1 + k_2\right]}{\Gamma(2-\gamma)\Gamma(1+\gamma)}.\tag{16}$$

From (13), (15) and (16), we conclude that *Fx* + *Ly* ∈ *BR* whenever *x*, *y* ∈ *BR*.

(iii) Let us prove that $F(B_R) = \{F(u) : u \in B_R\}$ is relatively compact in $E_{\gamma,p}$. Let $t \in (0,1)$ and $h > 0$ with $t + h \le 1$, and let $u \in B_R$. From (12), we obtain that

$$\begin{aligned} & \left| {}^C D_0^\gamma Fu(t+h) - {}^C D_0^\gamma Fu(t) \right| \\ &\quad \le \int_0^1 |G_\gamma(t+h,s) - G_\gamma(t,s)| |f(s,u(s),D^\gamma u(s))| \, ds \\ &\quad \le \int_0^1 |G_\gamma(t+h,s) - G_\gamma(t,s)| \left[ w(s) + c(|u(s)|^p + |D^\gamma u(s)|^p) \right] ds \\ &\quad \le \sup_{t \in [0,1]} \left[ \sup_{s \in [0,1]} |G_\gamma(t+h,s) - G_\gamma(t,s)| \right] \left( \|w\|_1 + c \left( 1 + \left( \frac{1}{\Gamma(\gamma+1)} \right)^p \right) R^p \right). \end{aligned}$$

Therefore,

$$\|Fu(\cdot + h) - Fu(\cdot)\|_{E_{\gamma,p}} \le \sup_{t \in [0,1]} \left[ \sup_{s \in [0,1]} |G_{\gamma}(t+h,s) - G_{\gamma}(t,s)| \right] \left( \|w\|_1 + c\left(1 + \left(\frac{1}{\Gamma(\gamma+1)}\right)^p\right) R^p \right).\tag{17}$$

Then, $\|Fu(\cdot + h) - Fu(\cdot)\|_{E_{\gamma,p}} \to 0$ as $h \to 0$ for any $u \in B_R$, since $G_{\gamma}$ is a continuous function on $[0,1] \times [0,1]$. From Lemma 3, we conclude that $F : B_R \to B_R$ is compact.

(iv) Finally, we prove that $L$ is a contraction. Let $x, y \in B_R$ and $t \in (0,1)$. Then,

$$\begin{aligned} \left| {}^C D_0^\gamma L(x)(t) - {}^C D_0^\gamma L(y)(t) \right| &\le \frac{3}{\Gamma(2-\gamma)} \int_0^1 |g(s,x(s)) - g(s,y(s))| ds \\ &\quad + \frac{1}{\Gamma(2-\gamma)} \int_0^1 |q_1(x(s)) - q_1(y(s))| ds + \frac{1}{\Gamma(2-\gamma)} \int_0^1 |q_2(x(s)) - q_2(y(s))| ds \\ &\le \frac{3}{\Gamma(2-\gamma)} \int_0^1 \varphi_1(s)|x(s) - y(s)| ds \\ &\quad + \frac{k_1}{\Gamma(2-\gamma)} \int_0^1 |x(s) - y(s)| ds + \frac{k_2}{\Gamma(2-\gamma)} \int_0^1 |x(s) - y(s)| ds. \end{aligned}$$

Applying the *L<sup>p</sup>* norm and Holder's inequality, we obtain that

$$\|L(x) - L(y)\|_{E_{\gamma,p}} \le \frac{3}{\Gamma(2-\gamma)} \|\varphi_1\|_q \|x - y\|_p + \frac{k_1}{\Gamma(2-\gamma)} \|x - y\|_p + \frac{k_2}{\Gamma(2-\gamma)} \|x - y\|_p.$$

Then, from (5), we obtain

$$\|L(x) - L(y)\|_{E_{\gamma,p}} \le \frac{3\|\varphi_1\|_q + k_1 + k_2}{\Gamma(2-\gamma)\Gamma(1+\gamma)} \|x - y\|_{E_{\gamma,p}}.$$

From (14), the operator *L* is a contraction.

As a consequence of (i)–(iv), we conclude that $F : B_R \to B_R$ is continuous and compact and that $L$ is a contraction. As a consequence of the Krasnosel'skii fixed-point theorem, we deduce that $F + L$ has a fixed point $y \in B_R \subset E_{\gamma,p}$, which is a solution to problem (2).

We now illustrate our Theorem 1 with two examples.

**Example 1.** *Consider the fractional boundary value problem* (2) *with*

$$\begin{aligned} \alpha &= 2.5, \quad \gamma = 0.5, \quad p = 3, \quad q = \frac{3}{2}, \\ f(t,x,y) &= \frac{\exp(-t)}{5} - \frac{1}{164\pi} \arctan\left(x^3 + y^3\right), \\ g(t,x) &= \frac{1}{10} t^{\frac{2}{3}} x, \\ q_1(x) &= q_2(x) = \frac{1}{20} x, \end{aligned}$$

*which we denote by* (*P*1)*. Hypotheses* (*H*1)*–*(*H*3) *are satisfied for*

$$w(t) = \frac{\exp(-t)}{5} \in L^1(0,1), \quad c = \frac{1}{164\pi}, \quad \varphi_1(t) = \frac{t^{\frac{2}{3}}}{10} \quad \text{and} \quad k_1 = k_2 = \frac{1}{20}.$$

*Moreover, we have*

$$\frac{3\|\varphi_{1}\|_{q} + k_{1} + k_{2}}{\Gamma(2-\gamma)\Gamma(1+\gamma)} = \frac{\frac{3}{10}\left(\frac{1}{2}\right)^{\frac{2}{3}} + \frac{1}{10}}{\left(\Gamma\left(\frac{3}{2}\right)\right)^{2}} \simeq 0.368 < 1.$$

*If we choose R* = 2*, then we obtain*

$$\begin{aligned} &\frac{R\left[3\|\varphi_{1}\|_{q} + k_{1} + k_{2}\right]}{\Gamma(2-\gamma)\Gamma(1+\gamma)} + G^{*}_{\gamma}\left(\|w\|_{1} + c\left(1 + \left(\frac{1}{\Gamma(\gamma+1)}\right)^{p}\right)R^{p}\right) - R \\ &\le \frac{2\left[\frac{3}{10}\left(\frac{1}{2}\right)^{\frac{2}{3}} + \frac{1}{10}\right]}{\left(\Gamma\left(\frac{3}{2}\right)\right)^{2}} + 4.047\left(\frac{1}{5} + \frac{1}{164\pi}\left(1 + \left(\frac{1}{\Gamma\left(\frac{3}{2}\right)}\right)^{3}\right)2^{3}\right) - 2 \\ &\simeq -0.301. \end{aligned}$$

*Since all conditions of our Theorem 1 are satisfied, we conclude that the fractional boundary value problem* (*P*1) *has a solution in Eγ*,*p.*
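The arithmetic in Example 1 is easy to re-check numerically. The following sketch takes the value $G^{*}_{\gamma} \simeq 4.047$ and the bound $\|w\|_1 \le 1/5$ from the text (they are not recomputed here) and reproduces the constants $0.368$ and $\simeq -0.301$.

```python
from math import gamma, pi

# Numerical re-check of Example 1 (a sketch: G*_gamma ~ 4.047 and the bound
# ||w||_1 <= 1/5 are taken from the text, not recomputed here).
gam, p = 0.5, 3
k1 = k2 = 1 / 20
phi1_q = (1 / 10) * (1 / 2) ** (2 / 3)        # ||phi_1||_{3/2} for phi_1(t) = t^(2/3)/10
denom = gamma(2 - gam) * gamma(1 + gam)        # Gamma(3/2)^2
contraction = (3 * phi1_q + k1 + k2) / denom   # constant in condition (14)
c, G_star, R, w1_bound = 1 / (164 * pi), 4.047, 2, 1 / 5
lhs = R * (3 * phi1_q + k1 + k2) / denom \
    + G_star * (w1_bound + c * (1 + (1 / gamma(gam + 1)) ** p) * R ** p) - R
print(contraction, lhs)  # ~ 0.368 and ~ -0.301, as in the text
```

Since `contraction` is below 1 and `lhs` is negative, conditions (13) and (14) indeed hold for $R = 2$.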

**Example 2.** *Consider the fractional boundary value problem* (2) *with*

$$\begin{aligned} \alpha &= 2.7, \quad \gamma = 0.7, \quad p = 4, \quad q = \frac{4}{3}, \\ f(t,x,y) &= \frac{1}{10}\sin(t) + \frac{1}{200}\cos\left(x^4 + y^4\right), \\ g(t,x) &= \frac{1}{9\pi} t^{\frac{3}{4}} \arctan(x), \\ q_1(x) &= q_2(x) = \frac{1}{10}\sin(x), \end{aligned}$$

*which we denote by* (*P*2)*. Hypotheses* (*H*1)*–*(*H*3) *are satisfied for*

$$w(t) = \frac{1}{10}\sin(t) \in L^1(0,1), \quad c = \frac{1}{200}, \quad \varphi_1(t) = \frac{t^{\frac{3}{4}}}{9\pi} \quad \text{and} \quad k_1 = k_2 = \frac{1}{10}.$$

*Moreover, we have*

$$\frac{3\|\varphi_1\|_{q} + k_1 + k_2}{\Gamma(2-\gamma)\Gamma(1+\gamma)} = \frac{\frac{1}{3\pi}\left(\frac{1}{2}\right)^{\frac{3}{4}} + \frac{1}{5}}{\Gamma(1.3)\Gamma(1.7)} \simeq 0.323 < 1.$$

*If we choose R* = 2*, then we obtain*

$$\begin{aligned} &\frac{R\left[3\|\varphi_{1}\|_{q}+k_{1}+k_{2}\right]}{\Gamma(2-\gamma)\Gamma(1+\gamma)} + G_{\gamma}^{*}\left(\|w\|_{1}+c\left(1+\left(\frac{1}{\Gamma(\gamma+1)}\right)^{p}\right)R^{p}\right) - R \\ &\le \frac{2\left[\frac{1}{3\pi}\left(\frac{1}{2}\right)^{\frac{3}{4}}+\frac{1}{5}\right]}{\Gamma(1.3)\Gamma(1.7)} + 3.9995\left(\frac{1}{10}+\frac{1}{200}\left(1+\left(\frac{1}{\Gamma(1.7)}\right)^{4}\right)2^{4}\right) - 2 \\ &\simeq -0.163. \end{aligned}$$

*Since all conditions of our Theorem 1 are satisfied, we conclude that the fractional boundary value problem* (*P*2) *has a solution in Eγ*,*p.*
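The same numerical sketch applies to Example 2, again taking $G^{*}_{\gamma} \simeq 3.9995$ and the bound $\|w\|_1 \le 1/10$ from the text; the small gap between the value obtained here ($\simeq -0.165$) and the quoted $\simeq -0.163$ comes from rounding of the intermediate constants.

```python
from math import gamma, pi

# Numerical re-check of Example 2 (G*_gamma ~ 3.9995 and ||w||_1 <= 1/10
# are taken from the text, not recomputed here).
gam, p = 0.7, 4
k1 = k2 = 1 / 10
phi1_q = (1 / (9 * pi)) * (1 / 2) ** (3 / 4)   # ||phi_1||_{4/3} for phi_1(t) = t^(3/4)/(9*pi)
denom = gamma(2 - gam) * gamma(1 + gam)         # Gamma(1.3) * Gamma(1.7)
contraction = (3 * phi1_q + k1 + k2) / denom    # constant in condition (14)
c, G_star, R, w1_bound = 1 / 200, 3.9995, 2, 1 / 10
lhs = R * (3 * phi1_q + k1 + k2) / denom \
    + G_star * (w1_bound + c * (1 + (1 / gamma(gam + 1)) ** p) * R ** p) - R
print(contraction, lhs)  # ~ 0.323 and a negative value close to the quoted -0.163
```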

#### **4. Discussion**

The celebrated existence result of Ahmad and Nieto [17] for problem (1) is obtained via the Krasnosel'skii fixed-point theorem in the space of continuous functions. For that, they needed to apply Ascoli's theorem in order to establish the compactness of the first part of the Krasnosel'skii operator. Here, we proved existence for the more general problem (2) in the fractional derivative Banach space $E_{\gamma,p}$, equipped with the norm (3). From norm (3), it is natural to deal with a subspace of $L^p \times L^p$. Since Ascoli's theorem is limited to the space of continuous functions, we had to make use of a different approach to ensure the existence of a solution in the fractional derivative space $E_{\gamma,p}$. Our tool was the Kolmogorov–Riesz compactness theorem, which turned out to be a powerful tool to address the problem. To the best of our knowledge, the use of the Kolmogorov–Riesz compactness theorem to prove existence results for boundary value problems involving nonlinear integrodifferential equations of fractional order with integral boundary conditions is a completely new approach. In this direction, we are only aware of the work [35], where a necessary and sufficient condition for pre-compactness in variable exponent Lebesgue spaces is established and, as an application, the existence of solutions to a fractional Cauchy problem is obtained in the Lebesgue space of variable exponent. As future work, we intend to generalize our existence result to the variable-order case [36].

**Author Contributions:** Conceptualization, D.B., A.C. and D.F.M.T.; validation, D.B., A.C. and D.F.M.T.; formal analysis, D.B., A.C. and D.F.M.T.; investigation, D.B., A.C. and D.F.M.T.; writing—original draft preparation, D.B., A.C. and D.F.M.T.; writing—review and editing, D.B., A.C. and D.F.M.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was partially funded by FCT, grant number UIDB/04106/2020 (CIDMA).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data sharing not applicable.

**Acknowledgments:** The authors are grateful to the referees for their comments and remarks.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Weighted Generalized Fractional Integration by Parts and the Euler–Lagrange Equation**

**Houssine Zine 1,†, El Mehdi Lotfi 2,†, Delfim F. M. Torres 1,\*,†, and Noura Yousfi 2,†**


**Abstract:** Integration by parts plays a crucial role in mathematical analysis, e.g., during the proof of necessary optimality conditions in the calculus of variations and optimal control. Motivated by this fact, we construct a new, right-weighted generalized fractional derivative in the Riemann– Liouville sense with its associated integral for the recently introduced weighted generalized fractional derivative with Mittag–Leffler kernel. We rewrite these operators equivalently in effective series, proving some interesting properties relating to the left and the right fractional operators. These results permit us to obtain the corresponding integration by parts formula. With the new general formula, we obtain an appropriate weighted Euler–Lagrange equation for dynamic optimization, extending those existing in the literature. We end with the application of an optimization variational problem to the quantum mechanics framework.

**Keywords:** weighted generalized fractional calculus; integration by parts formula; Euler–Lagrange equation; quantum mechanics; calculus of variations

**MSC:** 26A33; 49K05

#### **1. Introduction**

In the last decade, fractional calculus played an important role in the theoretical study of dynamical systems by showing significant results in many natural fields and engineering domains [1,2]. For this reason, mathematicians are paying more attention to the generalization of several important formulas in the integral theory of Mathematical Analysis, namely, the Newton–Leibniz formula, the Green formula, and the Gauss and Stokes formulas [3,4]. Some are central tools that enable mathematicians to extend other theories, such as the integration by parts formula, Taylor's formula, the Euler–Lagrange equation, Grönwall's inequality, Lyapunov theorems and LaSalle's invariance principle [5,6].

Often, memory effects are fractionally modeled with Riemann–Liouville and Caputo derivatives [7,8]. However, the fact that the Mittag–Leffler function is a generalization of the exponential function naturally gives rise to new definitions of fractional operators [9,10]. In 2020, Hattaf [11] proposed a new left-weighted generalized fractional derivative, in both the Caputo and Riemann–Liouville senses, together with its associated integral operator; see also [12]. Motivated by their applications in mechanics, where the introduction of the correct operator is needed [8,13], here we introduce the right-weighted generalized fractional derivative and its associated integral operator, proving their main properties and, in particular, their integration by parts formula.

It is worth emphasizing that integration by parts is of great interest in integral calculus and mathematical analysis. For example, it represents a strong tool to develop the calculus

**Citation:** Zine, H.; Lotfi, E.M.; Torres, D.F.M.; Yousfi, N. Weighted Generalized Fractional Integration by Parts and the Euler–Lagrange Equation. *Axioms* **2022**, *11*, 178. https://doi.org/10.3390/ axioms11040178

Academic Editor: Chris Goodrich

Received: 21 March 2022 Accepted: 13 April 2022 Published: 15 April 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

of variations through the so-called Euler–Lagrange equation, which is the central result of dynamic optimization [8]. In recent years, the development of some theoretical practices using fractional derivatives has drawn the attention of several researchers. In 2012, Almeida, Malinowska and Torres [14] reviewed some recent results of the fractional variational calculus and discussed the necessary optimality conditions of Euler–Lagrange type for functionals with a Lagrangian containing left and right Caputo derivatives. In 2017, Abdeljawad and Baleanu obtained an adequate integration by parts formula and the corresponding Euler–Lagrange equations using the nonlocal fractional derivative with Mittag–Leffler kernel. In 2019, Abdeljawad et al. [15] developed a fractional integration by parts formula for Riemann–Liouville, Liouville–Caputo, Caputo–Fabrizio and Atangana–Baleanu fractional derivatives. In 2020, Zine and Torres [16] introduced a stochastic fractional calculus and obtained a stochastic fractional Euler–Lagrange equation. Motivated by these works, particularly [14–17], and with the help of our weighted generalized fundamental integration by parts formula, we extend the available Euler–Lagrange equations.

The main purpose of our work is to compute a new integration by parts formula for the weighted generalized fractional derivative and to discuss the associated necessary optimality conditions of Euler–Lagrange type. To do this, we organize the paper as follows. In Section 2, we recall some necessary results from the literature. We proceed with Section 3, introducing the right-weighted generalized fractional derivative and its associated integral and studying their well-posedness. Integration by parts is investigated in Section 4, followed by Section 5, where the weighted generalized fractional Euler–Lagrange equation is rigorously proved. We end with Section 6, illustrating the obtained theoretical results with their application in the quantum mechanics framework.

#### **2. Preliminaries**

In this section, we present some definitions and properties from the fractional calculus literature, which will help us to prove our main results. In the text, *<sup>f</sup>* <sup>∈</sup> *<sup>H</sup>*1(*a*, *<sup>b</sup>*) is a sufficiently smooth function on [*a*, *<sup>b</sup>*] with *<sup>a</sup>*, *<sup>b</sup>* ∈ R. In addition, we adopt the following notations:

$$\phi(\alpha) := \frac{1-\alpha}{B(\alpha)}, \quad \psi(\alpha) := \frac{\alpha}{B(\alpha)},$$

where 0 ≤ *α* < 1 and *B*(*α*) is a normalization function obeying *B*(0) = *B*(1) = 1. In the paper, we denote

$$
\mu\_{\alpha} := \frac{\alpha}{1 - \alpha}.
$$

**Lemma 1** (See [18])**.** *Let $\alpha > 0$, $p \ge 1$, $q \ge 1$ and $\frac{1}{p} + \frac{1}{q} \le 1 + \alpha$ ($p \neq 1$ and $q \neq 1$ in the case $\frac{1}{p} + \frac{1}{q} = 1 + \alpha$). If $f \in L^p(a,b)$ and $g \in L^q(a,b)$, then*

$$\int_{a}^{b} f(x) \, {}^{RL}_{a,1}I^{\alpha} g(x) dx = \int_{a}^{b} g(x) \, {}^{RL}I^{\alpha}_{b,1} f(x) dx,$$

*where RL <sup>a</sup>*,1 *<sup>I</sup><sup>α</sup> is the left standard Riemann–Liouville fractional integral of order <sup>α</sup> given by*

$${}^{RL}_{a,1}I^{\alpha} f(x) = \frac{1}{\Gamma(\alpha)} \int_a^{x} (x-s)^{\alpha-1} f(s) ds, \quad x > a,\tag{1}$$

*and RL I<sup>α</sup> <sup>b</sup>*,1 *is the right standard Riemann–Liouville fractional integral of order α given by*

$${}^{RL}I_{b,1}^{\alpha}f(x) = \frac{1}{\Gamma(\alpha)} \int_{x}^{b} (s-x)^{\alpha-1} f(s) ds, \quad x < b. \tag{2}$$
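Lemma 1 can be illustrated numerically. The sketch below is an assumed discretization (midpoint rule, which tames the weak endpoint singularity of the kernels (1) and (2)) comparing the two sides of the identity for $f(x) = x$ and $g \equiv 1$ on $[0,1]$ with $\alpha = 0.6$; both sides equal $\frac{1}{2.6\,\Gamma(1.6)} \approx 0.43$ analytically.

```python
from math import gamma

# A numerical sketch of Lemma 1 via midpoint quadrature (discretization is ours).
def rl_left(f, a, x, alpha, n=1000):
    """Left Riemann-Liouville integral (1), midpoint rule."""
    h = (x - a) / n
    return h / gamma(alpha) * sum(
        (x - (a + (i + 0.5) * h)) ** (alpha - 1) * f(a + (i + 0.5) * h) for i in range(n))

def rl_right(f, x, b, alpha, n=1000):
    """Right Riemann-Liouville integral (2), midpoint rule."""
    h = (b - x) / n
    return h / gamma(alpha) * sum(
        ((i + 0.5) * h) ** (alpha - 1) * f(x + (i + 0.5) * h) for i in range(n))

a, b, alpha = 0.0, 1.0, 0.6
f, g = (lambda x: x), (lambda x: 1.0)
n = 200
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]
left = h * sum(f(x) * rl_left(g, a, x, alpha) for x in xs)    # int f * (I^a g)
right = h * sum(g(x) * rl_right(f, x, b, alpha) for x in xs)  # int g * (I^a_b f)
print(left, right)  # both close to (1/2.6)/Gamma(1.6)
```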

**Definition 1** (See [11])**.** *Let* 0 ≤ *α* < 1 *and β* > 0*. The left-weighted generalized fractional derivative of order α of function f , in the Riemann–Liouville sense, is defined by*

$${}^{R}_{a,w}D^{\alpha,\beta}f(x) = \frac{1}{\phi(\alpha)} \frac{1}{w(x)} \frac{d}{dx} \int_{a}^{x} (wf)(s) E_{\beta}\left[-\mu_{\alpha}(x-s)^{\beta}\right] ds,\tag{3}$$

*where E<sup>β</sup> denotes the Mittag–Leffler function of parameter β defined by*

$$E_{\beta}(z) = \sum_{j=0}^{\infty} \frac{z^j}{\Gamma(\beta j + 1)}, \quad z \in \mathbb{C}, \tag{4}$$
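Since the series (4) is the workhorse behind the kernel $E_{\beta}[-\mu_{\alpha}(x-s)^{\beta}]$, a quick sanity check of a truncated version is easy; for $\beta = 1$ it must reduce to the exponential (the truncation level below is an assumption for illustration).

```python
from math import gamma, exp

# A sketch of the truncated series (4); 80 terms is an illustrative choice.
def mittag_leffler(beta, z, terms=80):
    """Truncated Mittag-Leffler series E_beta(z) = sum_{j>=0} z^j / Gamma(beta*j + 1)."""
    return sum(z ** j / gamma(beta * j + 1) for j in range(terms))

# Sanity checks: E_1(z) = exp(z), and E_{1/2}(-1) = e * erfc(1) ~ 0.4276.
print(mittag_leffler(1.0, 0.5), exp(0.5))
print(mittag_leffler(0.5, -1.0))
```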

*and $w \in C^1([a,b])$ with $w, w' > 0$. The corresponding fractional integral is defined by*

$${}_{a,w}I^{\alpha,\beta} f(x) = \phi(\alpha) f(x) + \psi(\alpha) \, {}^{RL}_{a,w}I^{\beta} f(x), \tag{5}$$

*where RL <sup>a</sup>*,*<sup>w</sup> I<sup>β</sup> is the standard weighted Riemann–Liouville fractional integral of order β given by*

$${}^{RL}_{a,w}I^{\beta} f(x) = \frac{1}{\Gamma(\beta)} \frac{1}{w(x)} \int_{a}^{x} (x-s)^{\beta-1} w(s) f(s) ds, \quad x > a. \tag{6}$$

#### **3. Well-Posedness of the Right-Weighted Fractional Operators**

We denote the right-weighted generalized fractional derivative of order $\alpha$ in the Riemann–Liouville sense by ${}^{R}D^{\alpha,\beta}_{b,w}$, and we define it so that the following identity holds:

$$\left(Q\left({}^{R}_{a,w}D^{\alpha,\beta}f\right)\right)(x) = \left({}^{R}D_{b,w}^{\alpha,\beta}Qf\right)(x)$$

with *Q* being the *reflection operator*, that is, (*Q f*)(*x*) = *f*(*a* + *b* − *x*) with function *f* defined on the interval [*a*, *b*].
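The reflection operator drives all the manipulations of this section; in particular, $Q$ is an involution and intertwines the left and right Riemann–Liouville integrals, $Q\left({}^{RL}_{a,1}I^{\alpha} f\right) = {}^{RL}I^{\alpha}_{b,1}(Qf)$. A small numerical sketch (our own midpoint discretization) of both facts:

```python
from math import gamma

# Sketch of the reflection identities used below; discretization is ours.
a, b, alpha = 0.0, 1.0, 0.7

def Q(f):
    """Reflection operator: (Qf)(x) = f(a + b - x)."""
    return lambda x: f(a + b - x)

def rl_left(f, x, n=4000):
    h = (x - a) / n
    return h / gamma(alpha) * sum(
        (x - (a + (i + 0.5) * h)) ** (alpha - 1) * f(a + (i + 0.5) * h) for i in range(n))

def rl_right(f, x, n=4000):
    h = (b - x) / n
    return h / gamma(alpha) * sum(
        ((i + 0.5) * h) ** (alpha - 1) * f(x + (i + 0.5) * h) for i in range(n))

f = lambda x: x ** 2
x0 = 0.3
lhs = rl_left(Q(f), a + b - x0)   # (Q(RL_{a,1} I^alpha (Qf)))(x0)
rhs = rl_right(f, x0)             # (RL I^alpha_{b,1} f)(x0)
print(lhs, rhs)  # the two values agree up to rounding
```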

**Definition 2** (right-weighted generalized fractional derivative)**.** *Let* 0 ≤ *α* < 1 *and β* > 0*. The right-weighted generalized fractional derivative of order α of function f , in the Riemann– Liouville sense, is defined by*

$${}^{R}D_{b,w}^{\alpha,\beta}f(x) = \frac{-1}{\phi(\alpha)} \frac{1}{w(x)} \frac{d}{dx} \int_{x}^{b} (wf)(s) E_{\beta}\left[-\mu_{\alpha}(s-x)^{\beta}\right] ds,\tag{7}$$

*where $w \in C^1([a,b])$ with $w, w' > 0$.*

To properly define the new right-weighted fractional integral, we need to solve the equation ${}^{R}D^{\alpha,\beta}_{b,w} f(x) = u(x)$. We have

$${}^{R}D_{b,w}^{\alpha,\beta}f(x) = {}^{R}D_{b,w}^{\alpha,\beta}QQf(x) = Q\,{}^{R}_{a,w}D^{\alpha,\beta}Qf(x) = u(x).$$

Then,

$${}^{R}_{a,w}D^{\alpha,\beta}Qf(x) = Qu(x),$$

and thus,

$$Qf(x) = \phi(\alpha)Qu(x) + \psi(\alpha)\,{}^{RL}_{a,w}I^{\beta}Qu(x) = \phi(\alpha)Qu(x) + \psi(\alpha)\,Q\,{}^{RL}I^{\beta}_{b,w}u(x),$$

where *RL I β <sup>b</sup>*,*<sup>w</sup>* is the right-weighted standard Riemann–Liouville fractional integral of order *β* given by

$${}^{RL}I\_{b,w}^{\beta}f(\mathbf{x}) = \frac{1}{\Gamma(\beta)} \frac{1}{w(\mathbf{x})} \int\_{\mathbf{x}}^{b} (\mathbf{s} - \mathbf{x})^{\beta - 1} w(\mathbf{s}) f(\mathbf{s}) d\mathbf{s}, \quad \mathbf{x} < b. \tag{8}$$

Applying $Q$ to both sides of the last equality, we obtain

$$f(x) = \phi(\alpha)u(x) + \psi(\alpha)\,{}^{RL}I_{b,w}^{\beta}u(x).$$

Moreover,

$$\begin{aligned} {}_{a,w}I^{\alpha,\beta}Qf(x) &= \phi(\alpha)Qf(x) + \psi(\alpha)\,{}^{RL}_{a,w}I^{\beta}Qf(x) \\ &= \phi(\alpha)Qf(x) + \psi(\alpha)\,Q\,{}^{RL}I^{\beta}_{b,w}f(x) \\ &= Q\left[\phi(\alpha)f(x) + \psi(\alpha)\,{}^{RL}I^{\beta}_{b,w}f(x)\right]. \end{aligned}$$

We are now in the position to introduce the concept of the right-weighted generalized fractional integral.

**Definition 3** (right-weighted generalized fractional integral)**.** *Let* 0 ≤ *α* < 1 *and β* > 0*. The right-weighted generalized fractional integral of order α of function f is given by*

$$I_{b,w}^{\alpha,\beta}f(x) = \phi(\alpha)f(x) + \psi(\alpha)\,{}^{RL}I_{b,w}^{\beta}f(x),\tag{9}$$

*where $w \in C^1([a,b])$ with $w, w' > 0$.*

Our next result provides a series representation to the left- and right-weighted generalized fractional derivatives.

**Theorem 1.** *Let* 0 ≤ *α* < 1 *and β* > 0*. The left- and right-weighted generalized fractional derivatives of order α of function f can be written, respectively, as*

$${}^{R}_{a,w}D^{\alpha,\beta}f(x) = \frac{1}{\phi(\alpha)}\sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}_{a,w}I^{\beta j}f(x) \tag{10}$$

*and*

$${}^{R}D_{b,w}^{\alpha,\beta}f(x) = \frac{-1}{\phi(\alpha)}\sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}I^{\beta j}_{b,w}f(x). \tag{11}$$

**Proof.** The Mittag–Leffler function *Eβ*(*x*) is an entire series of *x*. Since the series (4) locally and uniformly converges in the whole complex plane, the left-weighted generalized fractional derivative can be rewritten as

$$\begin{aligned} {}^{R}_{a,w}D^{\alpha,\beta}f(x) &= \frac{1}{\phi(\alpha)}\frac{1}{w(x)}\frac{d}{dx}\int_{a}^{x}(wf)(s)\sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\frac{(x-s)^{\beta j}}{\Gamma(\beta j+1)}ds \\ &= \frac{1}{\phi(\alpha)}\frac{1}{w(x)}\sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\frac{1}{\Gamma(\beta j+1)}\frac{d}{dx}\int_{a}^{x}(wf)(s)(x-s)^{\beta j}ds \\ &= \frac{1}{\phi(\alpha)}\frac{1}{w(x)}\sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\frac{1}{\Gamma(\beta j)}\int_{a}^{x}(wf)(s)(x-s)^{\beta j-1}ds \\ &= \frac{1}{\phi(\alpha)}\sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}_{a,w}I^{\beta j}f(x). \end{aligned}$$

The computation above establishes (10). From Definition 2, and using the same steps, one easily obtains equality (11) for the new right-weighted generalized fractional derivative.

**Theorem 2.** *Let* 0 ≤ *α* < 1 *and β* > 0*. The left- and right-weighted generalized fractional derivative and their associated integrals satisfy the following formulas:*

$$\left({}_{a,w}I^{\alpha,\beta}\left({}^{R}_{a,w}D^{\alpha,\beta}f\right)\right)(x) = {}^{R}_{a,w}D^{\alpha,\beta}\left({}_{a,w}I^{\alpha,\beta}f\right)(x) = f(x)\tag{12}$$

*and*

$$\left(I_{b,w}^{\alpha,\beta}\left({}^{R}D_{b,w}^{\alpha,\beta}f\right)\right)(x) = {}^{R}D_{b,w}^{\alpha,\beta}\left(I^{\alpha,\beta}_{b,w}f\right)(x) = -f(x).\tag{13}$$

**Proof.** We note that

$$\begin{aligned} \left({}_{a,w}I^{\alpha,\beta}\left({}^{R}_{a,w}D^{\alpha,\beta}f\right)\right)(x) &= \phi(\alpha)\left({}^{R}_{a,w}D^{\alpha,\beta}f\right)(x) + \psi(\alpha)\left({}^{RL}_{a,w}I^{\beta}\,{}^{R}_{a,w}D^{\alpha,\beta}f\right)(x) \\ &= \sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}_{a,w}I^{\beta j}f(x) + \mu_{\alpha}\,{}^{RL}_{a,w}I^{\beta}\left(\sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}_{a,w}I^{\beta j}f\right)(x) \\ &= \sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}_{a,w}I^{\beta j}f(x) - \sum_{j=0}^{\infty}(-\mu_{\alpha})^{j+1}\,{}^{RL}_{a,w}I^{\beta+\beta j}f(x) \\ &= f(x). \end{aligned}$$

Then,

$$\begin{aligned} {}^{R}_{a,w}D^{\alpha,\beta}\left({}_{a,w}I^{\alpha,\beta}f\right)(x) &= \frac{1}{\phi(\alpha)}\sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}_{a,w}I^{\beta j}\left({}_{a,w}I^{\alpha,\beta}f\right)(x) \\ &= \frac{1}{\phi(\alpha)}\sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}_{a,w}I^{\beta j}\left[\phi(\alpha)f(x) + \psi(\alpha)\,{}^{RL}_{a,w}I^{\beta}f(x)\right] \\ &= \sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}_{a,w}I^{\beta j}f(x) + \mu_{\alpha}\sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}_{a,w}I^{\beta j+\beta}f(x) \\ &= \sum_{j=0}^{\infty}(-\mu_{\alpha})^{j}\,{}^{RL}_{a,w}I^{\beta j}f(x) - \sum_{j=0}^{\infty}(-\mu_{\alpha})^{j+1}\,{}^{RL}_{a,w}I^{\beta j+\beta}f(x) \\ &= f(x) \end{aligned}$$

and equality (12) holds true. The proof of equality (13) is similar.
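The telescoping in the proof above can be checked numerically in a special case. The sketch below assumes $B(\alpha) \equiv 1$ (admissible, since $B(0) = B(1) = 1$, so $\phi = 1-\alpha$, $\psi = \alpha$), weight $w \equiv 1$, and $f \equiv 1$ on $[0,1]$; then ${}^{RL}I^{\beta j}f(x) = x^{\beta j}/\Gamma(\beta j + 1)$ in closed form, and applying (5) term by term to the truncated series (10) should return $f$.

```python
from math import gamma

# Assumptions for this sketch: B(alpha) = 1, w = 1, f = 1 on [0, 1];
# then RL I^{beta j} f(x) = x^(beta j) / Gamma(beta j + 1) in closed form.
alpha, beta, N = 0.3, 0.8, 60   # N = series truncation (illustrative)
phi, psi = 1 - alpha, alpha
mu = alpha / (1 - alpha)

def series_D(x):
    """Series (10) applied to f = 1 (left derivative, weight 1)."""
    return sum((-mu) ** j * x ** (beta * j) / gamma(beta * j + 1) for j in range(N)) / phi

def I_of_D(x):
    """Integral (5) applied to series_D term by term: RL I^beta shifts beta*j to beta*(j+1)."""
    s1 = sum((-mu) ** j * x ** (beta * j) / gamma(beta * j + 1) for j in range(N))
    s2 = sum((-mu) ** j * x ** (beta * (j + 1)) / gamma(beta * (j + 1) + 1) for j in range(N))
    return s1 + mu * s2  # = phi * series_D + psi * (RL I^beta series_D); telescopes to 1

for x in (0.2, 0.5, 0.9):
    print(x, series_D(x), I_of_D(x))  # last column close to f(x) = 1, as (12) predicts
```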

#### **4. Integration by Parts**

Our formulas of integration by parts are proved in suitable function spaces.

**Definition 4** (See [19])**.** *For α* > 0*, β* > 0 *and* 1 ≤ *p* ≤ ∞*, the following function spaces are defined:*

$${}_{a,w}I^{\alpha,\beta}(L_p) := \left\{ f : f = {}_{a,w}I^{\alpha,\beta}(\eta), \ \eta \in L_p(a,b) \right\}$$

*and*

$$I_{b,w}^{\alpha,\beta}(L_p) := \left\{ f : f = I_{b,w}^{\alpha,\beta}(\theta), \ \theta \in L_p(a,b) \right\}.$$

**Theorem 3** (integration by parts without the weighted function)**.** *Let $0 \le \alpha < 1$, $\beta > 0$, $p \ge 1$, $q \ge 1$ and $\frac{1}{p} + \frac{1}{q} \le 1 + \alpha$ ($p \neq 1$ and $q \neq 1$ in the case $\frac{1}{p} + \frac{1}{q} = 1 + \alpha$).* • *If $f \in L^p(a,b)$ and $g \in L^q(a,b)$, then*

$$\int_{a}^{b} f(x) ({}_{a,1}I^{\alpha,\beta}g)(x) dx = \int_{a}^{b} g(x) (I^{\alpha,\beta}_{b,1}f)(x) dx.\tag{14}$$


• *If $f \in I^{\alpha,\beta}_{b,1}(L_p)$ and $g \in {}_{a,1}I^{\alpha,\beta}(L_q)$, then*

$$\int_{a}^{b} f(x) ({}^{R}_{a,1}D^{\alpha,\beta}g)(x) dx = \int_{a}^{b} g(x) ({}^{R}D_{b,1}^{\alpha,\beta}f)(x) dx.\tag{15}$$

**Proof.** First, we prove equality (14). Since,

$$\begin{aligned} \int_{a}^{b} f(x)({}_{a,1}I^{\alpha,\beta}g)(x)dx &= \int_{a}^{b} f(x)\left[\phi(\alpha)g(x) + \psi(\alpha)\,{}^{RL}_{a,1}I^{\beta}g(x)\right]dx \\ &= \phi(\alpha)\int_{a}^{b} f(x)g(x)dx + \psi(\alpha)\int_{a}^{b} f(x)\,{}^{RL}_{a,1}I^{\beta}g(x)dx, \end{aligned}$$

it follows from Lemma 1 that

$$\begin{aligned} \int_{a}^{b} f(x)({}_{a,1}I^{\alpha,\beta}g)(x)dx &= \phi(\alpha)\int_{a}^{b} f(x)g(x)dx + \psi(\alpha)\int_{a}^{b} g(x)\,{}^{RL}I_{b,1}^{\beta}f(x)dx \\ &= \int_{a}^{b} g(x)\left[\phi(\alpha)f(x) + \psi(\alpha)\,{}^{RL}I_{b,1}^{\beta}f(x)\right]dx \\ &= \int_{a}^{b} g(x)(I_{b,1}^{\alpha,\beta}f)(x)dx. \end{aligned}$$

Now, we prove (15). Writing $f = I^{\alpha,\beta}_{b,1}\theta$ and $g = {}_{a,1}I^{\alpha,\beta}\eta$, we have

$$\begin{aligned} \int_{a}^{b} f(x)\left({}^{R}_{a,1}D^{\alpha,\beta}g\right)(x)\,dx &= \int_{a}^{b} I^{\alpha,\beta}_{b,1}\theta(x)\left({}^{R}_{a,1}D^{\alpha,\beta}\left({}_{a,1}I^{\alpha,\beta}\eta\right)\right)(x)\,dx \\ &= \int_{a}^{b} \eta(x)\, I^{\alpha,\beta}_{b,1}\theta(x)\,dx \quad \text{(from Theorem 2)} \\ &= \int_{a}^{b} \theta(x)\,{}_{a,1}I^{\alpha,\beta}\eta(x)\,dx \quad \text{(from equality (14))} \\ &= \int_{a}^{b} g(x)\left({}^{R}D^{\alpha,\beta}_{b,1}f\right)(x)\,dx \quad \text{(from Theorem 2)}. \end{aligned}$$

The proof is complete.
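As a numerical sanity check of (14), restricted to the classical Riemann–Liouville component ${}^{RL}I^{\beta}$ of the generalized operator (the pointwise component is trivially symmetric under the integral), one can verify the duality by quadrature. All choices below (sample functions, $\beta = 1.5$, trapezoid rule) are illustrative:

```python
import numpy as np
from math import gamma

def trapz(y, t):
    # composite trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * (t[1:] - t[:-1])) / 2)

a, b, beta = 0.0, 1.0, 1.5   # beta > 1 keeps the kernel bounded
n = 1501
x = np.linspace(a, b, n)
f = x**2                     # sample f
g = np.cos(x)                # sample g

def left_I(h):
    # left Riemann-Liouville integral (a I^beta h)(x_i)
    out = np.zeros(n)
    for i in range(1, n):
        s = x[:i + 1]
        out[i] = trapz((x[i] - s)**(beta - 1) * h[:i + 1], s) / gamma(beta)
    return out

def right_I(h):
    # right Riemann-Liouville integral (I_b^beta h)(x_i)
    out = np.zeros(n)
    for i in range(n - 1):
        s = x[i:]
        out[i] = trapz((s - x[i])**(beta - 1) * h[i:], s) / gamma(beta)
    return out

lhs = trapz(f * left_I(g), x)   # int f * (a I^beta g)
rhs = trapz(g * right_I(f), x)  # int g * (I_b^beta f)
assert abs(lhs - rhs) < 1e-4
```

The two sides agree up to quadrature error, which is the discrete counterpart of the Fubini argument behind the classical fractional integration by parts.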

**Theorem 4** (weighted generalized integration by parts)**.** *Let* $0 \leq \alpha < 1$*,* $\beta > 0$*,* $p \geq 1$*,* $q \geq 1$ *and* $\frac{1}{p} + \frac{1}{q} \leq 1 + \alpha$ *($p \neq 1$ and $q \neq 1$ in the case* $\frac{1}{p} + \frac{1}{q} = 1 + \alpha$*). If* $f \in L_p(a,b)$ *and* $g \in L_q(a,b)$*, then*

$$\int_{a}^{b} f(x)\left({}_{a,w}I^{\alpha,\beta}g\right)(x)\,dx = \int_{a}^{b} w(x)^2 g(x)\left(I^{\alpha,\beta}_{b,w}\left(\frac{f}{w^2}\right)\right)(x)\,dx\tag{16}$$

*and*

$$\int_{a}^{b} f(x)\left({}^{R}_{a,w}D^{\alpha,\beta}g\right)(x)\,dx = \int_{a}^{b} w(x)^2 g(x)\left({}^{R}D^{\alpha,\beta}_{b,w}\left(\frac{f}{w^2}\right)\right)(x)\,dx.\tag{17}$$

**Proof.** We have

$$\begin{aligned} \int_{a}^{b} f(x)\left({}_{a,w}I^{\alpha,\beta}g\right)(x)\,dx &= \int_{a}^{b} w(x)\frac{f(x)}{w(x)}\left({}_{a,w}I^{\alpha,\beta}\left(\frac{gw}{w}\right)\right)(x)\,dx \\ &= \int_{a}^{b} \frac{f(x)}{w(x)}\left({}_{a,1}I^{\alpha,\beta}(gw)\right)(x)\,dx \\ &= \int_{a}^{b} w(x)g(x)\left(I^{\alpha,\beta}_{b,1}\left(\frac{f}{w}\right)\right)(x)\,dx \quad \text{(from Theorem 3)} \\ &= \int_{a}^{b} g(x)w(x)^2\left(I^{\alpha,\beta}_{b,w}\left(\frac{f}{w^2}\right)\right)(x)\,dx. \end{aligned}$$

Therefore, equality (16) is true. Similarly, we have

$$\begin{aligned} \int_{a}^{b} f(x)\left({}^{R}_{a,w}D^{\alpha,\beta}g\right)(x)\,dx &= \int_{a}^{b} w(x)\frac{f(x)}{w(x)}\left({}^{R}_{a,w}D^{\alpha,\beta}\left(\frac{gw}{w}\right)\right)(x)\,dx \\ &= \int_{a}^{b} \frac{f(x)}{w(x)}\left({}^{R}_{a,1}D^{\alpha,\beta}(gw)\right)(x)\,dx \\ &= \int_{a}^{b} w(x)g(x)\left({}^{R}D^{\alpha,\beta}_{b,1}\left(\frac{f}{w}\right)\right)(x)\,dx \quad \text{(from Theorem 3)} \\ &= \int_{a}^{b} g(x)w(x)^2\left({}^{R}D^{\alpha,\beta}_{b,w}\left(\frac{f}{w^2}\right)\right)(x)\,dx \end{aligned}$$

and equality (17) holds.
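The weighted identity (16) can be checked the same way for the Riemann–Liouville component, using the conjugation relations ${}_{a,w}I^{\beta}h = \frac{1}{w}\,{}_{a}I^{\beta}(wh)$ and $I^{\beta}_{b,w}h = \frac{1}{w}\,I^{\beta}_{b}(wh)$. The weight $w(x) = 1 + x$, the sample functions, and the discretization are all illustrative choices:

```python
import numpy as np
from math import gamma

def trapz(y, t):
    # composite trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * (t[1:] - t[:-1])) / 2)

a, b, beta = 0.0, 1.0, 1.5
n = 1501
x = np.linspace(a, b, n)
w = 1.0 + x          # sample positive weight function
f = np.sin(x)
g = x

def left_I(h):
    # classical left RL integral (a I^beta h)(x_i)
    out = np.zeros(n)
    for i in range(1, n):
        s = x[:i + 1]
        out[i] = trapz((x[i] - s)**(beta - 1) * h[:i + 1], s) / gamma(beta)
    return out

def right_I(h):
    # classical right RL integral (I_b^beta h)(x_i)
    out = np.zeros(n)
    for i in range(n - 1):
        s = x[i:]
        out[i] = trapz((s - x[i])**(beta - 1) * h[i:], s) / gamma(beta)
    return out

# weighted operators via conjugation: a,w I = (1/w) aI(w .), I_b,w = (1/w) I_b(w .)
lhs = trapz(f * (left_I(w * g) / w), x)
rhs = trapz(w**2 * g * (right_I(w * (f / w**2)) / w), x)
assert abs(lhs - rhs) < 1e-4
```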

**Remark 1.** *When w*(*t*) = 1 *and α* = *β, we can obtain from our Theorem 4 the integration by parts formulas [17] associated with the Atangana–Baleanu operators:*

$$\begin{aligned} \int_{a}^{b} f(x)\left({}^{AB}_{a}I^{\alpha}g\right)(x)\,dx &= \int_{a}^{b} g(x)\left({}^{AB}I^{\alpha}_{b}f\right)(x)\,dx, \\ \int_{a}^{b} f(x)\left({}^{ABR}_{a}D^{\alpha}g\right)(x)\,dx &= \int_{a}^{b} g(x)\left({}^{ABR}D^{\alpha}_{b}f\right)(x)\,dx. \end{aligned}$$

From (16) and (17), we obtain the following consequence.

**Corollary 1.** *Let* $0 \leq \alpha < 1$*,* $\beta > 0$*,* $p \geq 1$*,* $q \geq 1$ *and* $\frac{1}{p} + \frac{1}{q} \leq 1 + \alpha$ *($p \neq 1$ and $q \neq 1$ in the case* $\frac{1}{p} + \frac{1}{q} = 1 + \alpha$*). If* $f \in L_p(a,b)$ *and* $g \in L_q(a,b)$*, then*

$$\begin{aligned} \int_{a}^{b} f(x)\left(I^{\alpha,\beta}_{b,w}g\right)(x)\,dx &= \int_{a}^{b} w(x)^2 g(x)\left({}_{a,w}I^{\alpha,\beta}\left(\frac{f}{w^2}\right)\right)(x)\,dx, \\ \int_{a}^{b} f(x)\left({}^{R}D^{\alpha,\beta}_{b,w}g\right)(x)\,dx &= \int_{a}^{b} w(x)^2 g(x)\left({}^{R}_{a,w}D^{\alpha,\beta}\left(\frac{f}{w^2}\right)\right)(x)\,dx. \end{aligned}$$

For a symmetric view of weighted generalized integration by parts, we propose the following corollary of Theorem 4.

**Corollary 2.** *Let* $0 \leq \alpha < 1$*,* $\beta > 0$*,* $p \geq 1$*,* $q \geq 1$ *and* $\frac{1}{p} + \frac{1}{q} \leq 1 + \alpha$ *($p \neq 1$ and $q \neq 1$ in the case* $\frac{1}{p} + \frac{1}{q} = 1 + \alpha$*). If* $f \in L_p(a,b)$ *and* $g \in L_q(a,b)$*, then*

$$\int_{a}^{b} w(x)f(x)\left({}_{a,w}I^{\alpha,\beta}\frac{g}{w}\right)(x)\,dx = \int_{a}^{b} w(x)g(x)\left(I^{\alpha,\beta}_{b,w}\frac{f}{w}\right)(x)\,dx,\tag{18}$$

$$\int_{a}^{b} w(x)f(x)\left({}^{R}_{a,w}D^{\alpha,\beta}\frac{g}{w}\right)(x)\,dx = \int_{a}^{b} w(x)g(x)\left({}^{R}D^{\alpha,\beta}_{b,w}\frac{f}{w}\right)(x)\,dx.\tag{19}$$

#### **5. The Weighted Generalized Fractional Euler–Lagrange Equation**

Let us denote by $AC(I \to \mathbb{R})$ the set of absolutely continuous functions $X$, where $I = [a,b]$, such that the left and right Riemann–Liouville weighted generalized fractional derivatives of $X$ exist, endowed with the norm

$$\|X\| = \sup_{t \in I}\left(|X(t)| + \left|{}^{RL}_{a,w}D^{\alpha,\beta}X(t)\right| + \left|{}^{RL}D^{\alpha,\beta}_{b,w}X(t)\right|\right).$$

Let *<sup>L</sup>* <sup>∈</sup> *<sup>C</sup>*1(*<sup>I</sup>* <sup>×</sup> <sup>R</sup> <sup>×</sup> <sup>R</sup> <sup>×</sup> <sup>R</sup> <sup>→</sup> <sup>R</sup>) and consider the following minimization problem:

$$J[X] = \int_{a}^{b} L\left(t, X(t), {}^{RL}_{a,w}D^{\alpha,\beta}X(t), {}^{RL}D^{\alpha,\beta}_{b,w}X(t)\right)dt \longrightarrow \min\tag{20}$$

subject to the boundary conditions

$$X(a) = X_a, \quad X(b) = X_b.\tag{21}$$

Under appropriate general conditions, one can prove that the minimum of *J*[·] exists [20]. Here, we are interested in showing the usefulness of our Theorem 4 to prove the necessary optimality conditions for problem (20) and (21). With the help of weighted generalized fractional integration by parts, we obtain the following Euler–Lagrange necessary optimality condition for the fundamental weighted generalized fractional problem of the calculus of variations (20) and (21).

**Theorem 5** (the weighted generalized fractional Euler–Lagrange equation)**.** *If* $L \in C^1(I \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \to \mathbb{R})$ *and* $X \in AC([a,b] \to \mathbb{R})$ *is a minimizer of* (20) *subject to the fixed end points* (21)*, then X satisfies the following weighted generalized fractional Euler–Lagrange equation:*

$$\partial_2 L + w(t)^2\, {}^{R}D^{\alpha,\beta}_{b,w}\left(\frac{\partial_3 L}{w(t)^2}\right) + w(t)^2\, {}^{R}_{a,w}D^{\alpha,\beta}\left(\frac{\partial_4 L}{w(t)^2}\right) = 0,$$

*where* $\partial_i L$ *denotes the partial derivative of the Lagrangian L with respect to its ith argument, evaluated at* $\left(t, X(t), {}^{RL}_{a,w}D^{\alpha,\beta}X(t), {}^{RL}D^{\alpha,\beta}_{b,w}X(t)\right)$*.*

**Proof.** Let *<sup>J</sup>*[*X*] = *<sup>b</sup> a L t*, *X*(*t*), *R <sup>a</sup>*,*<sup>w</sup> <sup>D</sup>α*,*βX*(*t*), *<sup>R</sup> Dα*,*<sup>β</sup> <sup>b</sup>*,*wX*(*t*) *dt* and assume that *X*∗ is the optimal solution of problem (20) and (21). Set

$$X = X^\* + \varepsilon \eta\_{\prime\prime}$$

where $\eta, X \in AC([a,b] \to \mathbb{R})$, with $\eta(a) = \eta(b) = 0$ so that the boundary conditions (21) are preserved, and $\varepsilon$ is a small real parameter. By linearity of the weighted generalized fractional derivative, we obtain

$${}^{R}_{a,w}D^{\alpha,\beta}X(t) = {}^{R}_{a,w}D^{\alpha,\beta}X^*(t) + \varepsilon\left({}^{R}_{a,w}D^{\alpha,\beta}\eta(t)\right)$$

and

$${}^{R}D^{\alpha,\beta}_{b,w}X(t) = {}^{R}D^{\alpha,\beta}_{b,w}X^*(t) + \varepsilon\left({}^{R}D^{\alpha,\beta}_{b,w}\eta(t)\right).$$

Now, consider the following function:

$$J(\varepsilon) = \int_a^b L\left(t,\; X^*(t) + \varepsilon\eta(t),\; {}^{R}_{a,w}D^{\alpha,\beta}X^*(t) + \varepsilon\left({}^{R}_{a,w}D^{\alpha,\beta}\eta(t)\right),\; {}^{R}D^{\alpha,\beta}_{b,w}X^*(t) + \varepsilon\left({}^{R}D^{\alpha,\beta}_{b,w}\eta(t)\right)\right)dt.$$

Fermat's theorem asserts that $\frac{d}{d\varepsilon}J(\varepsilon)\Big|_{\varepsilon=0} = 0$, and we deduce, by the chain rule, that

$$\int_{a}^{b}\left(\partial_2 L\cdot\eta + \partial_3 L\cdot{}^{R}_{a,w}D^{\alpha,\beta}\eta + \partial_4 L\cdot{}^{R}D^{\alpha,\beta}_{b,w}\eta\right)dt = 0.$$

Using Theorem 4 of weighted fractional integration by parts, we obtain that

$$\int_{a}^{b}\eta\left(\partial_2 L + w(t)^2\,{}^{R}D^{\alpha,\beta}_{b,w}\left(\frac{\partial_3 L}{w(t)^2}\right) + w(t)^2\,{}^{R}_{a,w}D^{\alpha,\beta}\left(\frac{\partial_4 L}{w(t)^2}\right)\right)dt = 0.$$

The result follows by the fundamental theorem of the calculus of variations.

#### **6. An Application**

Let us consider the weighted generalized fractional variational problem (20) and (21) with

$$\begin{aligned} &L\left(t, X(t), {}^{R}_{a,w}D^{\alpha,\beta}X(t), {}^{R}D^{\alpha,\beta}_{b,w}X(t)\right) \\ &\quad= \frac{1}{2}\left(\frac{1}{2}m\left|{}^{R}_{a,w}D^{\alpha,\beta}X(t)\right|^2 + \frac{1}{2}m\left|{}^{R}D^{\alpha,\beta}_{b,w}X(t)\right|^2\right) - V(X(t)), \end{aligned}$$

where *<sup>X</sup>* is an absolutely continuous function on [*a*, *<sup>b</sup>*] and *<sup>V</sup>* maps *<sup>C</sup>*1(*<sup>I</sup>* <sup>→</sup> <sup>R</sup>) to <sup>R</sup>. Note that

$$\frac{1}{2}\left(\frac{1}{2}m\left|{}^{R}_{a,w}D^{\alpha,\beta}X(t)\right|^2 + \frac{1}{2}m\left|{}^{R}D^{\alpha,\beta}_{b,w}X(t)\right|^2\right)$$

can be viewed as a weighted generalized kinetic energy in the quantum mechanics framework. By applying our Theorem 5 to the current variational problem, we obtain that

$$\frac{1}{2}m\left[w(t)^2\,{}^{R}D^{\alpha,\beta}_{b,w}\left(\frac{{}^{R}_{a,w}D^{\alpha,\beta}X(t)}{w(t)^2}\right) + w(t)^2\,{}^{R}_{a,w}D^{\alpha,\beta}\left(\frac{{}^{R}D^{\alpha,\beta}_{b,w}X(t)}{w(t)^2}\right)\right] = V'(X(t)),\tag{22}$$

where $V'$ denotes the derivative of the potential energy of the system. We observe that relation (22) generalizes Newton's dynamical law $m\ddot{X}(t) = V'(X(t))$.

#### **7. Conclusions**

In this work, some definitions and properties of a recent class of fractional operators defined by general integral operators, with and without singular kernels, are recalled. A new definition of a right-weighted generalized fractional operator in the Riemann–Liouville sense is then proposed, serving as a prerequisite for the establishment of a new weighted generalized integration by parts formula, which exhibits a duality relation with the existing left-weighted generalized fractional operator in the Riemann–Liouville–Hattaf sense [11]. In the context of the fractional calculus of variations, we have investigated weighted generalized Euler–Lagrange equations, which were then used to produce an effective application in the quantum mechanics setting, after a proper definition of the kinetic energy.

**Author Contributions:** Conceptualization, H.Z., E.M.L., D.F.M.T. and N.Y.; validation, H.Z., E.M.L., D.F.M.T. and N.Y.; formal analysis, H.Z., E.M.L., D.F.M.T. and N.Y.; investigation, H.Z., E.M.L., D.F.M.T. and N.Y.; writing—original draft preparation, H.Z., E.M.L., D.F.M.T. and N.Y.; writing—review and editing, H.Z., E.M.L., D.F.M.T. and N.Y.; supervision, D.F.M.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Fundação para a Ciência e a Tecnologia (FCT) grant number UIDB/04106/2020 (CIDMA).

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors would like to express their gratitude to two anonymous reviewers, for their constructive comments and suggestions, which helped them to enrich the paper.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

## *Article* **Modeling the Impact of the Imperfect Vaccination of the COVID-19 with Optimal Containment Strategy**

**Lahbib Benahmadi 1, Mustapha Lhous 1, Abdessamad Tridane 2,\*, Omar Zakary <sup>3</sup> and Mostafa Rachik <sup>3</sup>**


**Abstract:** Since the beginning of the COVID-19 pandemic, vaccination has been the main strategy to contain the spread of the coronavirus. However, with the administration of many types of vaccines and the constant mutation of the virus, the question of how effective these vaccines are in protecting the population arises. This work aimed to present a mathematical model that investigates imperfect vaccination and finds the additional measures needed to help reduce the burden of the disease. We determine the R<sup>0</sup> threshold of disease spread and use stability analysis to determine the condition that will result in disease eradication. We also fit our model to COVID-19 data from Morocco to estimate the parameters of the model. A sensitivity analysis of the basic reproduction number with respect to the parameters of the model is simulated for the four possible scenarios of the disease progress. Finally, we investigate the optimal containment measures that could be implemented with vaccination. To illustrate our results, we perform numerical simulations of the optimal control.

**Keywords:** COVID-19; vaccination; basic reproduction number; stability; Lyapunov function; optimal control

#### **1. Introduction**

Since the beginning of the ongoing COVID-19 pandemic, the world has been racing to develop a vaccine that helps protect populations around the world and bring human life back to normal. This race to find a vaccine was challenged not only by the fast spread of the disease, but also by the high mutation rate of COVID-19. As a result, we have witnessed many vaccine types with different biotechnological approaches and different efficacies [1]. These efficacies are based on clinical trials that might have some limitations, as their samples do not necessarily cover a wide population from different parts of the world. These facts make the question of vaccine efficacy a legitimate one that needs to be investigated.

This problem has been investigated using mathematical modeling to study the possible measures that need to be implemented to reduce the impact of an imperfect vaccine.

The mathematical modeling of the imperfect vaccination of infectious diseases started with the work of Arino et al. [2]. Many papers have followed up on this work, examining imperfect vaccination for various diseases. For example, the work of Abu-Raddad [3] studied a mathematical model of a possible HIV vaccination, where the authors investigated the impact of the imperfection of the vaccine on the progress of the disease. Liu et al. [4] studied a general SIR model with an added vaccination compartment and imperfect vaccination. This study showed how vaccination reduced the infected

**Citation:** Benahmadi, L.; Lhous, M.; Tridane, A.; Zakary, O.; Rachik, M. Modeling the Impact of the Imperfect Vaccination of the COVID-19 with Optimal Containment Strategy. *Axioms* **2022**, *11*, 124. https:// doi.org/10.3390/axioms11030124

Academic Editors: Natalia Martins, Cristiana João Soares da Silva, Moulay Rchid Sidi Ammi and Ricardo Almeida

Received: 16 January 2022 Accepted: 25 February 2022 Published: 10 March 2022



population but could not eradicate the infection. In fact, eradication required an additional necessary condition; if vaccine efficacy improves, this condition may be alleviated. A mathematical model with the imperfect vaccination of birds in the case of avian influenza was studied in [5]. This model considered an age-since-vaccination structure and symptomatically infected birds. This study showed that the only way to eradicate the disease was by full coverage of the bird population or by full efficacy. A time-delay model of imperfect vaccination with a possible loss of immunity was studied in [6]. The study showed the existence of a critical vaccination coverage needed to eliminate the infection. In the case of imperfect vaccination, the authors showed that a critical proportion of the population needed vaccination. A model with distributed delay was studied in [7], and a delay model with a generalized incidence function in [8].

Regarding the ongoing pandemic, there are some studies that have investigated imperfect vaccination in the USA ([9,10]) and the UK [11] but without finding optimal measures that could help contain the pandemic, as the use of an imperfect vaccine cannot achieve the low endemicity of COVID-19. On the other hand, many studies (see [12–22]) used optimal control to find the optimal way to allocate vaccination and the best strategy to vaccinate the population, depending on the age or comorbidity of the population. The goal of this paper was to investigate a mathematical model of the imperfect vaccination of COVID-19. The aim was to study the dynamics of this model and present the possible control measures that need to be implemented in order to reduce the impact of the vaccine's imperfection.

To our knowledge, our work is the only one to date to have studied the potential dynamics of imperfect vaccination together with the optimal use of other public health measures that help to reduce the effect of administering an imperfect vaccine. The only other work combining these two problems considered the case of a possible malaria vaccination [23].

The structure of this paper is summarized as follows. In Section 2, the mathematical model is formulated and the existence conditions of the system are verified. Section 3 is devoted to the basic reproduction number. Sections 4 and 5 analyze the local and global stability at the disease-free equilibrium point, respectively. The optimal control problem of vaccination and additional measures to reduce the disease spread is presented in Section 6. In Section 7, we fit our model to data from Morocco to estimate the parameters of the model. We also discuss four possible scenarios of the dynamics of the model via the elasticity of the basic reproduction number, and we illustrate the optimal solution via numerical simulations. The conclusion and discussion of the results are given in Section 8.

#### **2. The Mathematical Model**

Nine epidemiological compartments are recognized in the population: susceptible (*S*); vaccinated (*V*); exposed (*E*) (asymptomatic); infected with mild symptoms (*I*); quarantined at home with mild symptoms (*Q*); hospitalized (*H*); quarantined in a hospital with complications (*C*) (i.e., isolated in a hospital with breathing assistance); recovered (*R*); and deceased due to disease (*D*):

$$\begin{aligned} \frac{dS}{dt} &= -\beta_1\frac{SE}{N} - \beta_2\frac{SI}{N} - \lambda_1 S \\ \frac{dV}{dt} &= \lambda_1 S - \beta_3\frac{VE}{N} - \beta_4\frac{VI}{N} - \lambda_2 V \\ \frac{dE}{dt} &= \beta_1\frac{SE}{N} + \beta_2\frac{SI}{N} + \beta_3\frac{VE}{N} + \beta_4\frac{VI}{N} - \theta E \\ \frac{dI}{dt} &= \theta E - (\gamma_1+\gamma_2+\gamma_3)I \\ \frac{dQ}{dt} &= \gamma_1 I - (\sigma_1+\delta_1)Q \\ \frac{dH}{dt} &= \gamma_2 I + \sigma_1 Q - (\sigma_2+\delta_2)H \\ \frac{dC}{dt} &= \gamma_3 I + \sigma_2 H - (\mu+\delta_3)C \\ \frac{dR}{dt} &= \delta_1 Q + \delta_2 H + \delta_3 C + \lambda_2 V \\ \frac{dD}{dt} &= \mu C \end{aligned}\tag{1}$$

In this model, we assume that vaccination does not provide complete protection against COVID-19. The rate of being vaccinated is *λ*<sup>1</sup>, and *λ*<sup>2</sup> is the rate at which vaccinated persons recover and develop immunity. We suppose that each infectious sub-population (*E* and *I*) infects the healthy population at varying densities, where *β<sup>i</sup>* (*i* = 1, 2) is the per capita infection density for the susceptible and *β<sup>i</sup>* (*i* = 3, 4) for the vaccinated. In reality, our major assumption about the vaccine's imperfection is that some vaccinated persons may only receive partial protection and may become sick if they are exposed to multiple infections. The imperfection can be due to the mutation of the virus. In fact, when people are vaccinated, they tend to relax their guard and take fewer protection measures against the virus. *θ* is the proportion of infected individuals. Some people show mild symptoms, with per capita rate *γ*<sup>1</sup>, and can stay at home with treatment, whilst others develop severe symptoms and must be monitored in the hospital, with per capita rate *γ*<sup>2</sup>, or enter a critical condition that requires breathing assistance, with per capita rate *γ*<sup>3</sup>. The parameter *δ<sup>i</sup>*, with *i* = 1, 2, 3, represents the recovery rates of quarantined, hospitalized and critically infected persons, respectively. Average quarantine and hospitalization times are 1/*σ*<sup>1</sup> and 1/*σ*<sup>2</sup>, respectively. Finally, *μ* represents the mortality rate due to disease. The flow chart of the model is given in Figure 1. When the nine equations in (1) are summed, the total population size *N* remains constant.
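As a sketch of how system (1) behaves, the model can be integrated with forward Euler; the parameter values below are purely illustrative (they are not the estimates fitted to the Morocco data in Section 7). Summing the nine right-hand sides gives zero, so the total population is conserved along the trajectory:

```python
import numpy as np

# illustrative parameter values (not the fitted Morocco estimates)
beta1, beta2, beta3, beta4 = 0.35, 0.45, 0.10, 0.15
lam1, lam2 = 0.01, 0.05               # vaccination / immunity rates
theta = 0.20                           # E -> I transition rate
gam1, gam2, gam3 = 0.10, 0.05, 0.02
sig1, sig2 = 0.03, 0.04
del1, del2, del3 = 0.07, 0.06, 0.05
mu = 0.02
N = 1e6

def rhs(y):
    S, V, E, I, Q, H, C, R, D = y
    lamS = beta1*S*E/N + beta2*S*I/N   # force of infection on S
    lamV = beta3*V*E/N + beta4*V*I/N   # force of infection on V
    return np.array([
        -lamS - lam1*S,
        lam1*S - lamV - lam2*V,
        lamS + lamV - theta*E,
        theta*E - (gam1+gam2+gam3)*I,
        gam1*I - (sig1+del1)*Q,
        gam2*I + sig1*Q - (sig2+del2)*H,
        gam3*I + sig2*H - (mu+del3)*C,
        del1*Q + del2*H + del3*C + lam2*V,
        mu*C,
    ])

y = np.array([N - 100, 0, 80, 20, 0, 0, 0, 0, 0])
dt = 0.05
for _ in range(int(300 / dt)):         # forward Euler over 300 days
    y = y + dt * rhs(y)

assert abs(y.sum() - N) < 1e-6 * N     # total population N is conserved
assert (y >= -1e-9).all()              # compartments stay non-negative
```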

#### *2.1. Positivity and Boundedness*

This part is dedicated to establishing the positivity and boundedness of the solutions of model (1) under non-negative initial conditions.

**Theorem 1.** *If S*(0) ≥ 0*, V*(0) ≥ 0*, E*(0) ≥ 0*, I*(0) ≥ 0*, Q*(0) ≥ 0*, H*(0) ≥ 0*, C*(0) ≥ 0*, R*(0) ≥ 0 *and D*(0) ≥ 0*, then the solutions S*(*t*)*, V*(*t*)*, E*(*t*)*, I*(*t*)*, Q*(*t*)*, H*(*t*)*, C*(*t*)*, R*(*t*)*, D*(*t*) *of system (1) are non-negative and exist in* Ω *for all t* ≥ 0*.*

The proof of positivity follows the standard argument, as can be seen, for example, in [24].

The solution of the model (1) exists in the positively invariant region:

$$\Omega = \Big\{(S(t),V(t),E(t),I(t),Q(t),H(t),C(t),R(t),D(t)) \in \mathbb{R}^9_+ :\; S(t)+V(t)+E(t)+I(t)+Q(t)+H(t)+C(t)+R(t)+D(t) = N \leq N(0)\Big\}.$$

This follows since all the parameters and sub-populations of the system are non-negative and *N*(*t*) is constant for all *t* ≥ 0. Straightforwardly, we obtain *S*(*t*) ≤ *N*(*t*) ≤ *N*(0) for all *t* ≥ 0, and the other variables yield the same result. As a result, the overall population is finitely upper bounded and, for system (1), the region Ω is positively invariant.

**Figure 1.** Flow chart of the model.

*2.2. Existence and Uniqueness of Solutions*

**Theorem 2.** *The system (1) with a given initial condition* (*S*(0)*, V*(0)*, E*(0)*, I*(0)*, Q*(0)*, H*(0)*, C*(0)*, R*(0)*, D*(0)) *has a unique solution.*

**Proof.** The system (1) may be expressed as follows:

$$
\dot{X} = AX + B(X) \tag{2}
$$

where *X*(*t*) = [*S*(*t*), *V*(*t*), *E*(*t*), *I*(*t*), *Q*(*t*), *H*(*t*), *C*(*t*), *R*(*t*), *D*(*t*)]<sup>T</sup>,

$$A = \begin{bmatrix} -\lambda\_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \lambda\_1 & -\lambda\_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -\theta & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & \theta & -\gamma\_1 - \gamma\_2 - \gamma\_3 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \gamma\_1 & -\sigma\_1 - \delta\_1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \gamma\_2 & \sigma\_1 & -\sigma\_2 - \delta\_2 & 0 & 0 & 0\\ 0 & 0 & 0 & \gamma\_3 & 0 & \sigma\_2 & -\mu - \delta\_3 & 0 & 0\\ 0 & \lambda\_2 & 0 & 0 & \delta\_1 & \delta\_2 & \delta\_3 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & \mu & 0 & 0 \end{bmatrix},\tag{3}$$

$$B(X) = \begin{bmatrix} -\beta_1\frac{SE}{N} - \beta_2\frac{SI}{N} \\ -\beta_3\frac{VE}{N} - \beta_4\frac{VI}{N} \\ \beta_1\frac{SE}{N} + \beta_2\frac{SI}{N} + \beta_3\frac{VE}{N} + \beta_4\frac{VI}{N} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}.$$

Equation (2) is a non-linear system that can be written as

$$
\Sigma(X) = AX + B(X).
$$

We have:

$$\begin{aligned} |B(X_1)-B(X_2)| &\leq \frac{2\beta_1}{N}|S_2E_2-S_1E_1| + \frac{2\beta_2}{N}|S_2I_2-S_1I_1| + \frac{2\beta_3}{N}|V_2E_2-V_1E_1| + \frac{2\beta_4}{N}|V_2I_2-V_1I_1| \\ &\leq \frac{2\beta_1}{N}|S_1(E_2-E_1)+E_2(S_2-S_1)| + \frac{2\beta_2}{N}|S_1(I_2-I_1)+I_2(S_2-S_1)| \\ &\quad+ \frac{2\beta_3}{N}|V_1(E_2-E_1)+E_2(V_2-V_1)| + \frac{2\beta_4}{N}|V_1(I_2-I_1)+I_2(V_2-V_1)| \\ &\leq 2\beta_1(|E_2-E_1|+|S_2-S_1|) + 2\beta_2(|I_2-I_1|+|S_2-S_1|) \\ &\quad+ 2\beta_3(|E_2-E_1|+|V_2-V_1|) + 2\beta_4(|I_2-I_1|+|V_2-V_1|) \\ &\leq 2(\beta_1+\beta_3)|E_2-E_1| + 2(\beta_1+\beta_2)|S_2-S_1| + 2(\beta_2+\beta_4)|I_2-I_1| + 2(\beta_3+\beta_4)|V_2-V_1| \\ &\leq 4M\big(|S_2-S_1|+|V_2-V_1|+|E_2-E_1|+|I_2-I_1|\big), \quad \text{where } M=\max\{\beta_1,\beta_2,\beta_3,\beta_4\}, \\ &\leq 4M\|X_2-X_1\|; \end{aligned}$$

then, we obtain ‖Σ(*X*<sup>1</sup>) − Σ(*X*<sup>2</sup>)‖ ≤ *M*<sup>∗</sup>‖*X*<sup>1</sup> − *X*<sup>2</sup>‖, where *M*<sup>∗</sup> = max{4*M*, ‖*A*‖} < ∞. As a result, the function Σ is uniformly Lipschitz continuous, and the restriction *S*(*t*) ≥ 0, *V*(*t*) ≥ 0, *E*(*t*) ≥ 0, *I*(*t*) ≥ 0, *Q*(*t*) ≥ 0, *H*(*t*) ≥ 0, *C*(*t*) ≥ 0, *R*(*t*) ≥ 0, and *D*(*t*) ≥ 0 holds. Thus, a solution to the system (2) exists [25].
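The bound |*B*(*X*<sup>1</sup>) − *B*(*X*<sup>2</sup>)| ≤ 4*M*‖*X*<sup>1</sup> − *X*<sup>2</sup>‖ derived above can be probed numerically for random states in Ω. The β values below are illustrative, the ℓ¹ norm is used, and only the three nonzero rows of B(X) contribute:

```python
import numpy as np

N = 1.0
betas = np.array([0.35, 0.45, 0.10, 0.15])   # illustrative beta_1..beta_4
M = betas.max()

def B_nonzero(S, V, E, I):
    # the three nonzero rows of B(X); the remaining six rows are zero
    b1 = -betas[0]*S*E/N - betas[1]*S*I/N
    b2 = -betas[2]*V*E/N - betas[3]*V*I/N
    return np.array([b1, b2, -(b1 + b2)])

rng = np.random.default_rng(1)
for _ in range(1000):
    x1 = rng.dirichlet(np.ones(9)) * N       # random states in Omega (components sum to N)
    x2 = rng.dirichlet(np.ones(9)) * N
    lhs = np.abs(B_nonzero(*x1[:4]) - B_nonzero(*x2[:4])).sum()
    rhs = 4 * M * np.abs(x1 - x2).sum()      # Lipschitz bound from the proof
    assert lhs <= rhs + 1e-12
```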

#### **3. The Basic Reproduction Number**

The basic reproduction number R<sup>0</sup> is the average number of persons in a susceptible population that one person infected with COVID-19 is expected to infect, and it is calculated using the next-generation matrix approach [26]. The disease compartments are thus *E* and *I*, and the disease-free equilibrium point is E<sup>0</sup> = (*N*, 0, 0, 0, 0, 0, 0).

$$\mathcal{F} = \begin{pmatrix} \beta_1\frac{S^*E^*}{N} + \beta_2\frac{S^*I^*}{N} + \beta_3\frac{V^*E^*}{N} + \beta_4\frac{V^*I^*}{N} \\ 0 \end{pmatrix}, \quad \mathcal{V} = \mathcal{V}^- - \mathcal{V}^+ = \begin{pmatrix} \theta E^* \\ (\gamma_1+\gamma_2+\gamma_3)I^* - \theta E^* \end{pmatrix}.$$

The Jacobian matrices of F and V computed at E<sup>0</sup> are provided by *F* and *V*, respectively, such that:

$$F = \begin{pmatrix} \beta\_1 & \beta\_2 \\ 0 & 0 \end{pmatrix}, \; V = \begin{pmatrix} \theta & 0 \\ -\theta & (\gamma\_1 + \gamma\_2 + \gamma\_3) \end{pmatrix}.$$

The inverse of *V* is given by

$$V^{-1} = \begin{pmatrix} \frac{1}{\theta} & 0\\ \frac{1}{\left(\gamma\_1 + \gamma\_2 + \gamma\_3\right)} & \frac{1}{\left(\gamma\_1 + \gamma\_2 + \gamma\_3\right)} \end{pmatrix} \text{ and } FV^{-1} = \begin{pmatrix} \frac{\beta\_1}{\theta} + \frac{\beta\_2}{\gamma\_1 + \gamma\_2 + \gamma\_3} & \frac{\beta\_2}{\gamma\_1 + \gamma\_2 + \gamma\_3} \\\ 0 & 0 \end{pmatrix}.$$

Therefore, the dominant eigenvalue of *FV*<sup>−1</sup> gives

$$\mathcal{R}\_0 = \rho (FV^{-1}) = \frac{\beta\_1}{\theta} + \frac{\beta\_2}{(\gamma\_1 + \gamma\_2 + \gamma\_3)} \tag{4}$$
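Formula (4) can be cross-checked numerically against the spectral radius of *FV*<sup>−1</sup> for sample parameter values (illustrative only):

```python
import numpy as np

# sample parameter values (illustrative only)
beta1, beta2, theta = 0.35, 0.45, 0.20
gamma = 0.10 + 0.05 + 0.02             # gamma1 + gamma2 + gamma3

F = np.array([[beta1, beta2],          # new infections in (E, I) at the DFE
              [0.0,   0.0]])
V = np.array([[theta,  0.0],           # transitions out of / between (E, I)
              [-theta, gamma]])

K = F @ np.linalg.inv(V)               # next-generation matrix F V^{-1}
R0 = max(abs(np.linalg.eigvals(K)))    # spectral radius

assert abs(R0 - (beta1/theta + beta2/gamma)) < 1e-12
```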

#### **4. Local Stability Analysis at Disease-Free Equilibrium (DFE)** *E***<sup>0</sup>**

The DFE's local stability is investigated as follows.

The Jacobian matrix of the system (1) at E<sup>0</sup> = (*N*, 0, 0, 0, 0, 0, 0) is:

$$J^*(\mathcal{E}_0) = \begin{pmatrix} -\lambda_1 & 0 & -\beta_1 & -\beta_2 & 0 & 0 & 0 \\ \lambda_1 & -\lambda_2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \beta_1-\theta & \beta_2 & 0 & 0 & 0 \\ 0 & 0 & \theta & -(\gamma_1+\gamma_2+\gamma_3) & 0 & 0 & 0 \\ 0 & 0 & 0 & \gamma_1 & -(\sigma_1+\delta_1) & 0 & 0 \\ 0 & 0 & 0 & \gamma_2 & \sigma_1 & -(\sigma_2+\delta_2) & 0 \\ 0 & 0 & 0 & \gamma_3 & 0 & \sigma_2 & -(\mu+\delta_3) \end{pmatrix}.\tag{5}$$

The eigenvalues of the Jacobian matrix *J*∗(E0) are the roots of the following characteristic equation:

$$(-\lambda\_1 - \lambda)(-\lambda\_2 - \lambda)(-\sigma\_1 - \delta\_1 - \lambda)(-\sigma\_2 - \delta\_2 - \lambda)(-\mu - \delta\_3 - \lambda)(\lambda^2 + a\_1\lambda + a\_0) = 0 \tag{6}$$

where:

$$\begin{aligned} a_0 &= (\theta-\beta_1)(\gamma_1+\gamma_2+\gamma_3) - \theta\beta_2 \\ a_1 &= (\theta-\beta_1) + (\gamma_1+\gamma_2+\gamma_3) \end{aligned}\tag{7}$$

The roots of $\lambda^2 + a_1\lambda + a_0 = 0$ are given by

$$\begin{aligned} \lambda_1 &= \frac{\beta_1-\theta-(\gamma_1+\gamma_2+\gamma_3) - \sqrt{(\beta_1-\theta+\gamma_1+\gamma_2+\gamma_3)^2 + 4\theta\beta_2}}{2} = \frac{\alpha_1-\gamma-\sqrt{(\alpha_1+\gamma)^2+4\theta\beta_2}}{2}, \\ \lambda_2 &= \frac{\beta_1-\theta-(\gamma_1+\gamma_2+\gamma_3) + \sqrt{(\beta_1-\theta+\gamma_1+\gamma_2+\gamma_3)^2 + 4\theta\beta_2}}{2} = \frac{\alpha_1-\gamma+\sqrt{(\alpha_1+\gamma)^2+4\theta\beta_2}}{2}, \end{aligned}$$

with $\alpha\_1 = \beta\_1 - \theta$ and $\gamma = \gamma\_1 + \gamma\_2 + \gamma\_3$.

It is straightforward that if $\alpha\_1 - \gamma < 0$, then $\mathcal{R}^\* = \frac{\beta\_1}{\theta + \gamma} < 1$ and $\lambda\_1 < 0$. For the case of $\lambda\_2$, we write the equation:

$$1 - \mathcal{R}\_0 = \frac{-\alpha\_1 \gamma - \theta \beta\_2}{\gamma \theta} \tag{8}$$

From the previous Equation (8), if R<sup>0</sup> < 1, then:

$$\begin{array}{l} \alpha\_{1}\gamma + \theta\beta\_{2} < 0\\ \Rightarrow \quad 4\alpha\_{1}\gamma + 4\theta\beta\_{2} < 0\\ \Rightarrow \quad 2\alpha\_{1}\gamma + 4\theta\beta\_{2} < -2\alpha\_{1}\gamma\\ \Rightarrow \quad \alpha\_{1}^{2} + 2\alpha\_{1}\gamma + 4\theta\beta\_{2} + \gamma^{2} < \alpha\_{1}^{2} - 2\alpha\_{1}\gamma + \gamma^{2} \\ \Rightarrow \quad (\alpha\_{1} + \gamma)^{2} + 4\theta\beta\_{2} < (\gamma - \alpha\_{1})^{2} \\ \Rightarrow \quad \sqrt{(\alpha\_{1} + \gamma)^{2} + 4\theta\beta\_{2}} < \gamma - \alpha\_{1} \\ \Rightarrow \quad \alpha\_{1} - \gamma + \sqrt{(\alpha\_{1} + \gamma)^{2} + 4\theta\beta\_{2}} < 0 \\ \Rightarrow \quad \lambda\_{2} < 0 \end{array}$$

Using the same steps as in Equation (8), if R<sup>0</sup> > 1, then:

$$\begin{array}{l} \alpha\_{1}\gamma + \theta\beta\_{2} > 0 \\ \Rightarrow \ 4\alpha\_{1}\gamma + 4\theta\beta\_{2} > 0 \\ \Rightarrow \ 2\alpha\_{1}\gamma + 4\theta\beta\_{2} > -2\alpha\_{1}\gamma \\ \Rightarrow \ \alpha\_{1}^{2} + 2\alpha\_{1}\gamma + 4\theta\beta\_{2} + \gamma^{2} > \alpha\_{1}^{2} - 2\alpha\_{1}\gamma + \gamma^{2} \\ \Rightarrow \ (\alpha\_{1} + \gamma)^{2} + 4\theta\beta\_{2} > (\gamma - \alpha\_{1})^{2} \\ \Rightarrow \ \sqrt{(\alpha\_{1} + \gamma)^{2} + 4\theta\beta\_{2}} > \gamma - \alpha\_{1} \\ \Rightarrow \ \alpha\_{1} - \gamma + \sqrt{(\alpha\_{1} + \gamma)^{2} + 4\theta\beta\_{2}} > 0 \\ \Rightarrow \ \lambda\_{2} > 0 \end{array}$$

Notice that $\mathcal{R}\_0 < 1$ implies $\frac{\beta\_1}{\theta} < 1$, and since $\mathcal{R}^\* < \frac{\beta\_1}{\theta}$, we also have $\mathcal{R}^\* < 1$. We conclude that it is enough to have $\mathcal{R}\_0 < 1$ to ensure that both roots of $\lambda^2 + a\_1\lambda + a\_0 = 0$ are negative. On the other hand, if $\mathcal{R}^\* > 1$, then $\mathcal{R}\_0 > 1$. As a result, we have just proven the following theorem:

**Theorem 3.** *If* $\mathcal{R}\_0 < 1$*, the disease-free equilibrium* $\mathcal{E}\_0$ *of the system (1) is locally asymptotically stable, but unstable if* $\mathcal{R}^\* > 1$*, where* $\mathcal{R}^\* = \frac{\beta\_1}{\theta + \gamma}$*.*
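Theorem 3 can be checked numerically: for a parameter set with $\mathcal{R}\_0 < 1$, both roots of the quadratic factor in (6), with coefficients (7), should be negative. A minimal sketch, using illustrative parameter values (assumed here, not the fitted estimates of Section 7):

```python
import numpy as np

# Illustrative parameter values (assumed; not the fitted estimates of Section 7)
beta1, beta2 = 0.05, 0.02
theta = 0.1
gamma = 0.06 + 0.03 + 0.01          # gamma1 + gamma2 + gamma3

# Coefficients of the quadratic factor in (6), as given in (7)
a1 = (theta - beta1) + gamma
a0 = (theta - beta1) * gamma - theta * beta2

# Basic reproduction number (4)
R0 = beta1 / theta + beta2 / gamma
roots = np.roots([1.0, a1, a0])
print(f"R0 = {R0:.3f}, roots = {roots}")

# Theorem 3: R0 < 1 forces both roots to be negative
assert R0 < 1 and all(r.real < 0 for r in roots)
```

Varying the parameters so that $\mathcal{R}^\* > 1$ makes the larger root positive, in agreement with the instability part of the theorem.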

#### **5. Global Stability Analysis at Disease-Free Equilibrium**

In this section, the global stability of the disease-free equilibrium point $\mathcal{E}\_0$ is established by constructing a Lyapunov function as follows:

**Theorem 4.** *The disease-free equilibrium* $\mathcal{E}\_0$ *of the model (1) is globally asymptotically stable whenever* $\mathcal{R}\_0^{Total} \leq 1$*, where* $\mathcal{R}\_0^{Total} = \mathcal{R}\_0 + \mathcal{R}\_0^{v}$*.*

**Proof.** Consider the Lyapunov function $L$ at the equilibrium point $\mathcal{E}\_0$, with non-negative coefficients $g\_1$ and $g\_2$:

$$L \quad = \ \text{g}\_1 E + \text{g}\_2 I \tag{9}$$

Differentiating Equation (9) with respect to time $t$ and substituting $\frac{dE}{dt}$ and $\frac{dI}{dt}$ from system (1) yields:

$$\begin{array}{rcl} \dot{L} &=& g\_1 \dot{E} + g\_2 \dot{I} \\ &=& g\_1 \beta\_1 \frac{SE}{N} + g\_1 \beta\_2 \frac{SI}{N} + g\_1 \beta\_3 \frac{VE}{N} + g\_1 \beta\_4 \frac{VI}{N} - g\_1 \theta E + g\_2 \theta E - g\_2(\gamma\_1 + \gamma\_2 + \gamma\_3)I \end{array} \tag{10}$$

Simplifying Equation (10) by collecting the terms in $E$ and $I$, and then solving for the coefficients $g\_1$ and $g\_2$, yields:

Using $S \leq N$ and $V \leq N$:

$$\begin{array}{rcl} \dot{L} &\leq& g\_1\beta\_1 E + g\_1\beta\_3 E + g\_2\theta E - g\_1\theta E + g\_1\beta\_2 I + g\_1\beta\_4 I - g\_2(\gamma\_1 + \gamma\_2 + \gamma\_3)I \\ &\leq& g\_1\theta\left(\frac{\beta\_1 + \beta\_3}{\theta} + \frac{g\_2}{g\_1} - 1\right)E + \left(g\_1(\beta\_2 + \beta\_4) - g\_2(\gamma\_1 + \gamma\_2 + \gamma\_3)\right)I \\ &\leq& \left(g\_1(\beta\_2 + \beta\_4) - g\_1\left(1 - \frac{\beta\_1 + \beta\_3}{\theta}\right)(\gamma\_1 + \gamma\_2 + \gamma\_3)\right)I \\ &\leq& g\_1(\gamma\_1 + \gamma\_2 + \gamma\_3)\left(\frac{\beta\_2 + \beta\_4}{\gamma\_1 + \gamma\_2 + \gamma\_3} - 1 + \frac{\beta\_1 + \beta\_3}{\theta}\right)I \\ &\leq& (\gamma\_1 + \gamma\_2 + \gamma\_3)(\mathcal{R}\_0 + \mathcal{R}\_0^{v} - 1)I \end{array} \tag{11}$$

where $g\_1 = 1$, $\frac{g\_2}{g\_1} = 1 - \frac{\beta\_1 + \beta\_3}{\theta}$, and

$$\mathcal{R}\_0^{Total} = \frac{\beta\_2}{\gamma\_1 + \gamma\_2 + \gamma\_3} + \frac{\beta\_1}{\theta} + \frac{\beta\_4}{\gamma\_1 + \gamma\_2 + \gamma\_3} + \frac{\beta\_3}{\theta} = \mathcal{R}\_0 + \mathcal{R}\_0^{v}. \tag{12}$$

Since $\frac{g\_2}{g\_1} > 0$, we have $\frac{\beta\_1 + \beta\_3}{\theta} < 1$. Therefore, $\dot{L}$ is negative if $\mathcal{R}\_0^{Total} < 1$. Furthermore, $\dot{L} = 0$ if and only if $I = 0$. It can be verified that the singleton $\{\mathcal{E}\_0\}$ is the largest compact invariant set for the model (1). Thus, by LaSalle's invariance principle [27], the DFE is globally asymptotically stable in a region $\Omega$ around $\mathcal{E}\_0$.

**Remark 1.** *The above result shows that* $\mathcal{R}\_0^{Total}$ *and* $\mathcal{R}\_0$ *can be reduced below unity so that the disease disappears. Since* $\mathcal{R}\_0 < \mathcal{R}\_0^{Total}$*, having* $\mathcal{R}\_0^{Total} < 1$ *guarantees the complete eradication of the disease.*

#### **6. The Optimal Imperfect Vaccination**

When an imperfect vaccine is administered to a population, one needs to find the optimal way to use it in order to reduce the burden of the disease. The goal of this section is to design the best possible control strategy in the situation of an imperfect vaccination. Three types of control are used for this purpose. The first control $u\_1$ represents awareness campaigns promoting vaccination via the media, building knowledge of the positive effects of vaccination in order to reach herd immunity in the population. The second control $u\_2$ represents movement restrictions for susceptible and vaccinated individuals: adherence to a preventative protocol that avoids exposing vaccinated people to the coronavirus via non-pharmaceutical measures. The third control $u\_3$ seeks to improve the efficacy of the vaccine.

Therefore, the model with control strategies is given by

$$\begin{array}{rcl} \frac{dS}{dt} &=& -\frac{S}{N}(1-u\_2)(\beta\_1 E + \beta\_2 I) - (\lambda\_1 + u\_1)S \\ \frac{dV}{dt} &=& (\lambda\_1 + u\_1)S - \frac{V}{N}(1-u\_2)(\beta\_3 E + \beta\_4 I) - (\lambda\_2 + u\_3)V \\ \frac{dE}{dt} &=& \frac{S}{N}(1-u\_2)(\beta\_1 E + \beta\_2 I) + \frac{V}{N}(1-u\_2)(\beta\_3 E + \beta\_4 I) - \theta E \\ \frac{dI}{dt} &=& \theta E - (\gamma\_1 + \gamma\_2 + \gamma\_3)I \\ \frac{dQ}{dt} &=& \gamma\_1 I - (\sigma\_1 + \delta\_1)Q \\ \frac{dH}{dt} &=& \gamma\_2 I + \sigma\_1 Q - (\sigma\_2 + \delta\_2)H \\ \frac{dC}{dt} &=& \gamma\_3 I + \sigma\_2 H - (\mu + \delta\_3)C \\ \frac{dR}{dt} &=& \delta\_1 Q + \delta\_2 H + \delta\_3 C + (\lambda\_2 + u\_3)V \\ \frac{dD}{dt} &=& \mu C \end{array} \tag{13}$$

With:

$$(u\_1(t), u\_2(t), u\_3(t)) \in \mathcal{U}\_{ad}^T \tag{14}$$

where $\mathcal{U}\_{ad}^T$ is the set of admissible controls defined by

$$\mathcal{U}\_{ad}^{T} = \left\{ (u\_1(t), u\_2(t), u\_3(t)) \text{ measurable} \;:\; 0 \le u\_1(t) \le 1 - \lambda\_1,\; 0 \le u\_2(t) \le 1,\; 0 \le u\_3(t) \le 1 - \lambda\_2,\; t \in [0, T] \right\} \tag{15}$$

The objective function to minimize is:

$$J(u\_1(t), u\_2(t), u\_3(t)) = \int\_0^T [-A\_1V(t) + A\_2I(t) - A\_3R(t) + \frac{1}{2}\tau\_1u\_1^2(t) + \frac{1}{2}\tau\_2u\_2^2(t) + \frac{1}{2}\tau\_3u\_3^2(t)]dt\tag{16}$$

The positive weight constants $A\_1$, $A\_2$ and $A\_3$ balance the sizes of $V(t)$, $I(t)$ and $R(t)$, respectively. The positive weight parameters $\tau\_1$, $\tau\_2$ and $\tau\_3$ are associated with the controls $u\_1(t)$, $u\_2(t)$ and $u\_3(t)$ in the objective functional.
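In discrete form, the objective functional (16) is simply a quadrature of the running cost. The sketch below evaluates it by the trapezoidal rule; the trajectories, weights and control costs are synthetic placeholders chosen for illustration only (in practice $V$, $I$ and $R$ come from integrating the controlled system (13)):

```python
import numpy as np

# Time grid over [0, T]
T, n = 50.0, 501
t = np.linspace(0.0, T, n)
A1, A2, A3 = 1.0, 2.0, 1.0           # assumed weight constants
tau1, tau2, tau3 = 10.0, 10.0, 10.0  # assumed control cost weights

# Placeholder trajectories standing in for solutions of (13)
V = 2.0e5 * np.exp(0.01 * t)   # vaccinated
I = 1.3e4 * np.exp(-0.05 * t)  # infected
R = 4.5e5 + 1.0e3 * t          # recovered
u1 = np.full(n, 0.5)
u2 = np.full(n, 0.3)
u3 = np.full(n, 0.2)

# Running cost of (16), then trapezoidal quadrature over [0, T]
integrand = (-A1 * V + A2 * I - A3 * R
             + 0.5 * tau1 * u1**2 + 0.5 * tau2 * u2**2 + 0.5 * tau3 * u3**2)
J = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
print(f"J = {J:.4e}")
```

The negative signs on the $V$ and $R$ terms reward large vaccinated and recovered populations, so minimizing $J$ trades those rewards against the infected burden and the quadratic control costs.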

To solve the problem, we first compute the Lagrangian and the Hamiltonian of the optimal control problem (13)–(16) in order to identify an optimal solution. The Lagrangian of the optimal problem is:

$$L = -A\_1 V(t) + A\_2 I(t) - A\_3 R(t) + \frac{1}{2} \tau\_1 u\_1^2(t) + \frac{1}{2} \tau\_2 u\_2^2(t) + \frac{1}{2} \tau\_3 u\_3^2(t). \tag{17}$$

For the control problem, we may define the Hamiltonian *H* as follows:

$$\overline{H} = L + \zeta\_1(t)\frac{dS}{dt} + \zeta\_2(t)\frac{dV}{dt} + \zeta\_3(t)\frac{dE}{dt} + \zeta\_4(t)\frac{dI}{dt} + \zeta\_5(t)\frac{dQ}{dt} + \zeta\_6(t)\frac{dH}{dt} + \zeta\_7(t)\frac{dC}{dt} + \zeta\_8(t)\frac{dR}{dt} + \zeta\_9(t)\frac{dD}{dt} \tag{18}$$

where $\zeta\_1, \ldots, \zeta\_9$ are the adjoint functions to be determined. We have the following existence result:

**Theorem 5.** *The optimal control problem defined by Equations (13)–(16) has a solution* $(u\_1^\*, u\_2^\*, u\_3^\*)$ *that satisfies*

$$J(u\_1^\*, u\_2^\*, u\_3^\*) = \min\_{(u\_1, u\_2, u\_3) \in \mathcal{U}\_{ad}^T} J(u\_1, u\_2, u\_3)$$

**Proof.** We use the result of [28] to show that an optimal control exists. The control and the state variables are both non-negative, and this minimization problem satisfies the convexity requirement of the objective functional.

The control space defined in (15) is both convex and closed by definition. For the optimal control to exist, the optimal system must be compact, and the required compactness follows from the boundedness of the optimal system. Additionally, the integrand of the functional (16) is convex in the control $(u\_1(t), u\_2(t), u\_3(t))$. Finally, there exist a constant $\rho > 1$ and positive constants $w\_1$ and $w\_2$ such that $J(u\_1, u\_2, u\_3) \geq -w\_2 + w\_1\left(|u\_1|^2 + |u\_2|^2 + |u\_3|^2\right)^{\rho/2}$. This leads us to conclude that an optimal control exists.

#### *Characterization of the Optimal Control*

We now investigate the necessary optimality conditions. For this purpose, Pontryagin's maximum principle [29] is applied to the Hamiltonian (18).

**Theorem 6.** *Let* $S^\*(t)$*,* $V^\*(t)$*,* $E^\*(t)$*,* $I^\*(t)$*,* $Q^\*(t)$*,* $H^\*(t)$*,* $C^\*(t)$*,* $R^\*(t)$ *and* $D^\*(t)$ *be optimal state solutions with associated optimal control variables* $(u\_1^\*(t), u\_2^\*(t), u\_3^\*(t))$ *for the optimal control problem (13)–(16). Then, there exist adjoint variables* $\zeta\_1, \ldots, \zeta\_9$ *satisfying:*

$$\begin{array}{rcl} \dot{\zeta}\_1(t) &=& (1-u\_2)\left(\beta\_1\frac{E(t)}{N} + \beta\_2\frac{I(t)}{N}\right)(\zeta\_1(t) - \zeta\_3(t)) + (\lambda\_1 + u\_1)(\zeta\_1(t) - \zeta\_2(t)) \\ \dot{\zeta}\_2(t) &=& A\_1 + (1-u\_2)\left(\beta\_3\frac{E(t)}{N} + \beta\_4\frac{I(t)}{N}\right)(\zeta\_2(t) - \zeta\_3(t)) + (\lambda\_2 + u\_3)(\zeta\_2(t) - \zeta\_8(t)) \\ \dot{\zeta}\_3(t) &=& (1-u\_2)\beta\_1\frac{S(t)}{N}(\zeta\_1(t) - \zeta\_3(t)) + (1-u\_2)\beta\_3\frac{V(t)}{N}(\zeta\_2(t) - \zeta\_3(t)) + \theta(\zeta\_3(t) - \zeta\_4(t)) \\ \dot{\zeta}\_4(t) &=& -A\_2 + (1-u\_2)\beta\_2\frac{S(t)}{N}(\zeta\_1(t) - \zeta\_3(t)) + (1-u\_2)\beta\_4\frac{V(t)}{N}(\zeta\_2(t) - \zeta\_3(t)) \\ && +\, \gamma\_1(\zeta\_4(t) - \zeta\_5(t)) + \gamma\_2(\zeta\_4(t) - \zeta\_6(t)) + \gamma\_3(\zeta\_4(t) - \zeta\_7(t)) \\ \dot{\zeta}\_5(t) &=& \sigma\_1(\zeta\_5(t) - \zeta\_6(t)) + \delta\_1(\zeta\_5(t) - \zeta\_8(t)) \\ \dot{\zeta}\_6(t) &=& \sigma\_2(\zeta\_6(t) - \zeta\_7(t)) + \delta\_2(\zeta\_6(t) - \zeta\_8(t)) \\ \dot{\zeta}\_7(t) &=& \mu(\zeta\_7(t) - \zeta\_9(t)) + \delta\_3(\zeta\_7(t) - \zeta\_8(t)) \\ \dot{\zeta}\_8(t) &=& A\_3 \\ \dot{\zeta}\_9(t) &=& 0 \end{array} \tag{19}$$

*with the transversality conditions*

$$\begin{array}{l} \zeta\_1(T) = \zeta\_3(T) = \zeta\_5(T) = \zeta\_6(T) = \zeta\_7(T) = \zeta\_9(T) = 0, \\ \zeta\_2(T) = -A\_1, \; \zeta\_4(T) = A\_2, \; \zeta\_8(T) = -A\_3. \end{array} \tag{20}$$

*Moreover, the optimal control* $(u\_1^\*, u\_2^\*, u\_3^\*)$ *is given by*

$$\begin{array}{rcl} u\_1^\*(t) &=& \max\left\{\min\left\{\frac{S(t)}{\tau\_1}(\zeta\_1(t) - \zeta\_2(t)),\, 1 - \lambda\_1\right\}, 0\right\} \\ u\_2^\*(t) &=& \max\left\{\min\left\{\frac{S(t)}{\tau\_2 N}(\beta\_1 E(t) + \beta\_2 I(t))(\zeta\_3(t) - \zeta\_1(t)) + \frac{V(t)}{\tau\_2 N}(\beta\_3 E(t) + \beta\_4 I(t))(\zeta\_3(t) - \zeta\_2(t)),\, 1\right\}, 0\right\} \\ u\_3^\*(t) &=& \max\left\{\min\left\{\frac{V(t)}{\tau\_3}(\zeta\_2(t) - \zeta\_8(t)),\, 1 - \lambda\_2\right\}, 0\right\} \end{array} \tag{21}$$

**Proof.** By using the Hamiltonian (18) and Pontryagin's maximum principle, and setting $S(t) = S^\*(t)$, $V(t) = V^\*(t)$, $E(t) = E^\*(t)$, $I(t) = I^\*(t)$, $Q(t) = Q^\*(t)$, $H(t) = H^\*(t)$, $C(t) = C^\*(t)$, $R(t) = R^\*(t)$ and $D(t) = D^\*(t)$, we obtain the following:

$$\begin{array}{rcl} \frac{d\zeta\_1}{dt} &=& (1-u\_2)\left(\beta\_1\frac{E(t)}{N} + \beta\_2\frac{I(t)}{N}\right)(\zeta\_1(t) - \zeta\_3(t)) + (\lambda\_1 + u\_1)(\zeta\_1(t) - \zeta\_2(t)) \\ \frac{d\zeta\_2}{dt} &=& A\_1 + (1-u\_2)\left(\beta\_3\frac{E(t)}{N} + \beta\_4\frac{I(t)}{N}\right)(\zeta\_2(t) - \zeta\_3(t)) + (\lambda\_2 + u\_3)(\zeta\_2(t) - \zeta\_8(t)) \\ \frac{d\zeta\_3}{dt} &=& (1-u\_2)\beta\_1\frac{S(t)}{N}(\zeta\_1(t) - \zeta\_3(t)) + (1-u\_2)\beta\_3\frac{V(t)}{N}(\zeta\_2(t) - \zeta\_3(t)) + \theta(\zeta\_3(t) - \zeta\_4(t)) \\ \frac{d\zeta\_4}{dt} &=& -A\_2 + (1-u\_2)\beta\_2\frac{S(t)}{N}(\zeta\_1(t) - \zeta\_3(t)) + (1-u\_2)\beta\_4\frac{V(t)}{N}(\zeta\_2(t) - \zeta\_3(t)) \\ && +\, \gamma\_1(\zeta\_4(t) - \zeta\_5(t)) + \gamma\_2(\zeta\_4(t) - \zeta\_6(t)) + \gamma\_3(\zeta\_4(t) - \zeta\_7(t)) \end{array}$$

$$\begin{array}{rcl} \frac{d\zeta\_5}{dt} &=& \sigma\_1(\zeta\_5(t) - \zeta\_6(t)) + \delta\_1(\zeta\_5(t) - \zeta\_8(t)) \\\\ \frac{d\zeta\_6}{dt} &=& \sigma\_2(\zeta\_6(t) - \zeta\_7(t)) + \delta\_2(\zeta\_6(t) - \zeta\_8(t)) \\\\ \frac{d\zeta\_7}{dt} &=& \mu(\zeta\_7(t) - \zeta\_9(t)) + \delta\_3(\zeta\_7(t) - \zeta\_8(t)) \\\\ \frac{d\zeta\_8}{dt} &=& A\_3 \\\\ \frac{d\zeta\_9}{dt} &=& 0 \end{array}$$

Using optimality conditions, we conclude that:

$$\begin{array}{rcl} \frac{d\overline{H}}{du\_1(t)} &=& \tau\_1 u\_1^\*(t) + S(t)(\zeta\_2(t) - \zeta\_1(t)) \\ \frac{d\overline{H}}{du\_2(t)} &=& \tau\_2 u\_2^\*(t) + \frac{S(t)}{N}(\beta\_1 E(t) + \beta\_2 I(t))(\zeta\_1(t) - \zeta\_3(t)) + \frac{V(t)}{N}(\beta\_3 E(t) + \beta\_4 I(t))(\zeta\_2(t) - \zeta\_3(t)) \\ \frac{d\overline{H}}{du\_3(t)} &=& \tau\_3 u\_3^\*(t) + V(t)(\zeta\_8(t) - \zeta\_2(t)). \end{array}$$

Hence:

$$\begin{array}{rcl} \frac{d\overline{H}}{du\_1(t)} = 0 &\Rightarrow& u\_1^\*(t) = \frac{S(t)}{\tau\_1}(\zeta\_1(t) - \zeta\_2(t)), \\ \frac{d\overline{H}}{du\_2(t)} = 0 &\Rightarrow& u\_2^\*(t) = \frac{S(t)}{\tau\_2 N}(\beta\_1 E(t) + \beta\_2 I(t))(\zeta\_3(t) - \zeta\_1(t)) + \frac{V(t)}{\tau\_2 N}(\beta\_3 E(t) + \beta\_4 I(t))(\zeta\_3(t) - \zeta\_2(t)), \\ \frac{d\overline{H}}{du\_3(t)} = 0 &\Rightarrow& u\_3^\*(t) = \frac{V(t)}{\tau\_3}(\zeta\_2(t) - \zeta\_8(t)). \end{array}$$

By applying the control space property, we obtain that:

$$\begin{cases} u\_1^\* = 0 & \text{if } \frac{S(t)}{\tau\_1}(\zeta\_1(t) - \zeta\_2(t)) \le 0 \\ u\_1^\* = \frac{S(t)}{\tau\_1}(\zeta\_1(t) - \zeta\_2(t)) & \text{if } 0 < \frac{S(t)}{\tau\_1}(\zeta\_1(t) - \zeta\_2(t)) < 1 - \lambda\_1 \\ u\_1^\* = 1 - \lambda\_1 & \text{if } \frac{S(t)}{\tau\_1}(\zeta\_1(t) - \zeta\_2(t)) \ge 1 - \lambda\_1. \end{cases}$$

Similarly, for $u\_2^\*$:

$$\begin{cases} u\_2^\* = 0 & \text{if } \omega^\* \le 0 \\ u\_2^\* = \omega^\* & \text{if } 0 < \omega^\* < 1 \\ u\_2^\* = 1 & \text{if } \omega^\* \ge 1, \end{cases}$$

where

$$\omega^\* = \frac{S(t)}{\tau\_2 N}(\beta\_1 E(t) + \beta\_2 I(t))(\zeta\_3(t) - \zeta\_1(t)) + \frac{V(t)}{\tau\_2 N}(\beta\_3 E(t) + \beta\_4 I(t))(\zeta\_3(t) - \zeta\_2(t)).$$

We have:

$$\begin{cases} u\_3^\* = 0 & \text{if } \frac{V(t)}{\tau\_3}(\zeta\_2(t) - \zeta\_8(t)) \le 0 \\ u\_3^\* = \frac{V(t)}{\tau\_3}(\zeta\_2(t) - \zeta\_8(t)) & \text{if } 0 < \frac{V(t)}{\tau\_3}(\zeta\_2(t) - \zeta\_8(t)) < 1 - \lambda\_2 \\ u\_3^\* = 1 - \lambda\_2 & \text{if } \frac{V(t)}{\tau\_3}(\zeta\_2(t) - \zeta\_8(t)) \ge 1 - \lambda\_2. \end{cases}$$

Thus, the optimal control is given by

$$\begin{array}{rcl} u\_1^\* &=& \max\left\{\min\left\{\frac{S(t)}{\tau\_1}(\zeta\_1(t) - \zeta\_2(t)),\, 1 - \lambda\_1\right\}, 0\right\}, \\ u\_2^\* &=& \max\left\{\min\left\{\frac{S(t)}{\tau\_2 N}(\beta\_1 E(t) + \beta\_2 I(t))(\zeta\_3(t) - \zeta\_1(t)) + \frac{V(t)}{\tau\_2 N}(\beta\_3 E(t) + \beta\_4 I(t))(\zeta\_3(t) - \zeta\_2(t)),\, 1\right\}, 0\right\}, \\ u\_3^\* &=& \max\left\{\min\left\{\frac{V(t)}{\tau\_3}(\zeta\_2(t) - \zeta\_8(t)),\, 1 - \lambda\_2\right\}, 0\right\}. \end{array}$$

#### **7. Numerical Simulation**

The goal of this section is to show how the control strategies can be used to improve the outcomes of vaccination campaigns in Morocco. After fitting the model to the data, the estimated parameters are used to perform a sensitivity analysis and to determine the optimal control.
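Numerically, the optimality system (13), (19)–(21) is typically solved by a forward-backward sweep: integrate the states forward, integrate the adjoints backward from the transversality conditions (20), then update the controls by the projection formulas (21) and iterate. The sketch below follows that scheme; all parameter values, weights, step sizes and iteration counts are illustrative placeholders (not the fitted values), and the explicit Euler steps are a sketch rather than a production-quality integrator.

```python
import numpy as np

# Illustrative parameters (placeholders, not the fitted estimates)
b1, b2, b3, b4 = 0.2, 0.1, 0.21, 0.21
theta, mu = 7.4e-2, 1.5e-3
lam1, lam2 = 3.2e-3, 1.5e-4
sig1, sig2 = 4.0e-2, 5.2e-2
d1, d2, d3 = 2.0e-2, 1.0e-2, 2.0e-3
g1, g2, g3 = 9.8e-2, 1.2e-2, 1.0e-4
A1, A2, A3 = 1.0, 2.0, 1.0           # assumed objective weights
tau1, tau2, tau3 = 50.0, 50.0, 50.0  # assumed control costs

T, n = 50.0, 1001
dt = T / (n - 1)
x0 = np.array([3.62e7, 2.0e5, 2.55e4, 1.31e4, 9.8e3, 1.57e3, 131.0, 4.5e5, 8.3e3])
N = x0.sum()

def f(x, u):
    """Right-hand side of the controlled model (13)."""
    S, V, E, I, Q, H, C, R, D = x
    u1, u2, u3 = u
    fS = (1 - u2) * (b1 * E + b2 * I) * S / N
    fV = (1 - u2) * (b3 * E + b4 * I) * V / N
    return np.array([
        -fS - (lam1 + u1) * S,
        (lam1 + u1) * S - fV - (lam2 + u3) * V,
        fS + fV - theta * E,
        theta * E - (g1 + g2 + g3) * I,
        g1 * I - (sig1 + d1) * Q,
        g2 * I + sig1 * Q - (sig2 + d2) * H,
        g3 * I + sig2 * H - (mu + d3) * C,
        d1 * Q + d2 * H + d3 * C + (lam2 + u3) * V,
        mu * C])

def adj(z, x, u):
    """Right-hand side of the adjoint system (19)."""
    S, V, E, I = x[:4]
    z1, z2, z3, z4, z5, z6, z7, z8, z9 = z
    u1, u2, u3 = u
    return np.array([
        (1 - u2) * (b1 * E + b2 * I) / N * (z1 - z3) + (lam1 + u1) * (z1 - z2),
        A1 + (1 - u2) * (b3 * E + b4 * I) / N * (z2 - z3) + (lam2 + u3) * (z2 - z8),
        (1 - u2) * b1 * S / N * (z1 - z3) + (1 - u2) * b3 * V / N * (z2 - z3)
            + theta * (z3 - z4),
        -A2 + (1 - u2) * b2 * S / N * (z1 - z3) + (1 - u2) * b4 * V / N * (z2 - z3)
            + g1 * (z4 - z5) + g2 * (z4 - z6) + g3 * (z4 - z7),
        sig1 * (z5 - z6) + d1 * (z5 - z8),
        sig2 * (z6 - z7) + d2 * (z6 - z8),
        mu * (z7 - z9) + d3 * (z7 - z8),
        A3,
        0.0])

X, Z, U = np.zeros((n, 9)), np.zeros((n, 9)), np.zeros((n, 3))
zT = np.array([0.0, -A1, 0.0, A2, 0.0, 0.0, 0.0, -A3, 0.0])  # transversality (20)

for sweep in range(10):
    X[0] = x0
    for k in range(n - 1):                       # states forward (Euler)
        X[k + 1] = X[k] + dt * f(X[k], U[k])
    Z[-1] = zT
    for k in range(n - 1, 0, -1):                # adjoints backward (Euler)
        Z[k - 1] = Z[k] - dt * adj(Z[k], X[k], U[k])
    S, V, E, I = X[:, 0], X[:, 1], X[:, 2], X[:, 3]
    new = np.column_stack([                      # projection formulas (21)
        np.clip(S / tau1 * (Z[:, 0] - Z[:, 1]), 0.0, 1.0 - lam1),
        np.clip(S / (tau2 * N) * (b1 * E + b2 * I) * (Z[:, 2] - Z[:, 0])
                + V / (tau2 * N) * (b3 * E + b4 * I) * (Z[:, 2] - Z[:, 1]), 0.0, 1.0),
        np.clip(V / tau3 * (Z[:, 1] - Z[:, 7]), 0.0, 1.0 - lam2)])
    U = 0.5 * (U + new)                          # relaxed control update

print("controls at t = 0:", U[0])
```

The relaxed update (averaging old and new controls) is a standard way to stabilize the sweep; the `np.clip` calls enforce the box constraints of the admissible set (15).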

#### *7.1. Parameter Estimation*

We used data from the vaccination campaign in Morocco between 1 February 2021 and 25 March 2021 to validate our findings. We consider the COVID-19 data from [30], and the initial conditions within the values of available data on 1 February 2021 are $(S\_0, V\_0, E\_0, I\_0, Q\_0, H\_0, C\_0, R\_0, D\_0) =$ (36,202,000, 200,081, 25,543, 13,099, 9824, 1572, 131, 450,052, 8287). These initial values were estimated from the data, apart from $E\_0$ and $Q\_0$, which were assumed. To obtain the best fitting curve for the actual data, we applied the least-squares fitting technique [31]. The parameter values of the model estimated from this fitting are: $\lambda\_1 = 3.23 \times 10^{-3}$, $\lambda\_2 = 1.5 \times 10^{-4}$, $\theta = 7.4385 \times 10^{-2}$, $\sigma\_1 = 3.9601 \times 10^{-2}$, $\sigma\_2 = 5.1954 \times 10^{-2}$, $\delta\_1 = 2.04 \times 10^{-2}$, $\delta\_2 = 1.01 \times 10^{-2}$, $\delta\_3 = 2.03 \times 10^{-3}$, $\beta\_1 = 9.813 \times 10^{-3}$, $\beta\_2 = 2.4 \times 10^{-3}$, $\beta\_3 = 0.21$, $\beta\_4 = 0.21$, $\gamma\_1 = 9.81 \times 10^{-2}$, $\gamma\_2 = 1.2 \times 10^{-2}$, $\gamma\_3 = 1.02 \times 10^{-4}$ and $\mu = 1.501 \times 10^{-3}$. The resulting value of the basic reproduction number is $\mathcal{R}\_0 = 0.1536999$. The fit of the number of individuals infected with COVID-19 in Morocco is shown in Figure 2.

**Figure 2.** Fitting the infected population with real data from 1 February 2021 to 25 March 2021.
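As a quick sanity check, the reported value of $\mathcal{R}\_0$ can be reproduced by plugging the fitted parameters above into formula (4); the small discrepancy comes from the rounding of the listed parameter values:

```python
# Basic reproduction number (4) from the fitted parameter values
theta = 7.4385e-2
beta1, beta2 = 9.813e-3, 2.4e-3
gamma1, gamma2, gamma3 = 9.81e-2, 1.2e-2, 1.02e-4

R0 = beta1 / theta + beta2 / (gamma1 + gamma2 + gamma3)
print(f"R0 = {R0:.7f}")  # close to the reported value 0.1536999
```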

#### *7.2. Sensitivity Analysis*

In this part, we study the sensitivity of $\mathcal{R}\_0^{Total}$ with respect to the model parameters. The goal is to determine the impact of these parameters on the endemicity of the disease. The sensitivity analysis of this outbreak threshold demonstrates the importance of each parameter in the spread of COVID-19, allowing us to measure how sensitive $\mathcal{R}\_0^{Total}$ is with respect to a specific parameter $\rho$, via the sensitivity index defined by

$$\zeta\_{\rho}^{\mathcal{R}\_0^{\text{Total}}} = \frac{\partial \mathcal{R}\_0^{\text{Total}}}{\partial \rho} \frac{\rho}{\mathcal{R}\_0^{\text{Total}}}.\tag{22}$$

Using the previous definition:

$$\begin{array}{rclrcl} \zeta\_{\beta\_1}^{\mathcal{R}\_0^{Total}} &=& \dfrac{1}{\theta}\dfrac{\beta\_1}{\mathcal{R}\_0^{Total}}, & \zeta\_{\beta\_2}^{\mathcal{R}\_0^{Total}} &=& \dfrac{1}{\gamma\_1 + \gamma\_2 + \gamma\_3}\dfrac{\beta\_2}{\mathcal{R}\_0^{Total}}, \\[2ex] \zeta\_{\beta\_3}^{\mathcal{R}\_0^{Total}} &=& \dfrac{1}{\theta}\dfrac{\beta\_3}{\mathcal{R}\_0^{Total}}, & \zeta\_{\beta\_4}^{\mathcal{R}\_0^{Total}} &=& \dfrac{1}{\gamma\_1 + \gamma\_2 + \gamma\_3}\dfrac{\beta\_4}{\mathcal{R}\_0^{Total}}, \\[2ex] \zeta\_{\gamma\_1}^{\mathcal{R}\_0^{Total}} &=& \dfrac{-(\beta\_2 + \beta\_4)}{(\gamma\_1 + \gamma\_2 + \gamma\_3)^2}\dfrac{\gamma\_1}{\mathcal{R}\_0^{Total}}, & \zeta\_{\gamma\_2}^{\mathcal{R}\_0^{Total}} &=& \dfrac{-(\beta\_2 + \beta\_4)}{(\gamma\_1 + \gamma\_2 + \gamma\_3)^2}\dfrac{\gamma\_2}{\mathcal{R}\_0^{Total}}, \\[2ex] \zeta\_{\gamma\_3}^{\mathcal{R}\_0^{Total}} &=& \dfrac{-(\beta\_2 + \beta\_4)}{(\gamma\_1 + \gamma\_2 + \gamma\_3)^2}\dfrac{\gamma\_3}{\mathcal{R}\_0^{Total}}, & \zeta\_{\theta}^{\mathcal{R}\_0^{Total}} &=& \dfrac{-(\beta\_1 + \beta\_3)}{\theta^2}\dfrac{\theta}{\mathcal{R}\_0^{Total}}. \end{array} \tag{23}$$

Each parameter's sensitivity index with respect to the basic reproductive number $\mathcal{R}\_0^{Total}$ was computed and displayed in Table 1, and the corresponding bar graphs are shown in Figure 3. The sensitivity indices indicate the importance of each parameter in disease transmission and prevalence. For example, $\zeta\_{\beta\_1}^{\mathcal{R}\_0^{Total}} = 0.931$ means that if $\beta\_1$ increases (or decreases) by 10%, then $\mathcal{R}\_0^{Total}$ is likely to increase (or decrease) by 9.31%. Similarly, $\zeta\_{\theta}^{\mathcal{R}\_0^{Total}} = -0.8202$ means that a 10% increase (or decrease) in the parameter $\theta$ will decrease (or increase) $\mathcal{R}\_0^{Total}$ by 8.202%.
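The indices (23) are straightforward to evaluate numerically. The sketch below uses the fitted parameter values of Section 7.1 (the scenario-specific values behind Table 1 differ, so these numbers only illustrate the computation). Note that because $\mathcal{R}\_0^{Total}$ is homogeneous of degree one in the transmission rates, the four $\beta$-elasticities must sum to exactly 1:

```python
# Fitted parameter values from Section 7.1
theta = 7.4385e-2
betas = dict(beta1=9.813e-3, beta2=2.4e-3, beta3=0.21, beta4=0.21)
gamma = 9.81e-2 + 1.2e-2 + 1.02e-4   # gamma1 + gamma2 + gamma3

R0_total = (betas['beta1'] + betas['beta3']) / theta \
         + (betas['beta2'] + betas['beta4']) / gamma

# Elasticity indices (23)
idx = {
    'beta1': betas['beta1'] / theta / R0_total,
    'beta2': betas['beta2'] / gamma / R0_total,
    'beta3': betas['beta3'] / theta / R0_total,
    'beta4': betas['beta4'] / gamma / R0_total,
    'theta': -(betas['beta1'] + betas['beta3']) / theta / R0_total,
}
for name, value in idx.items():
    print(f"elasticity w.r.t. {name}: {value:+.4f}")

# Degree-1 homogeneity of R0_total in the betas: elasticities sum to 1
assert abs(sum(idx[k] for k in ('beta1', 'beta2', 'beta3', 'beta4')) - 1.0) < 1e-9
```

The homogeneity check is a useful guard against transcription errors in the formulas.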

Our goal is to simulate the elasticity of $\mathcal{R}\_0^{Total}$ with respect to the model parameters in four scenarios that represent the different regimes of the dynamics of the model, as follows:


**Figure 3.** Scenarios of $\mathcal{R}\_0^{Total}$ sensitivity with respect to the parameters.


**Table 1.** Sensitivity index of each parameter that has a direct correlation to $\mathcal{R}\_0^{Total}$.

All of the sensitivity scenarios described above demonstrate that the basic reproductive number $\mathcal{R}\_0^{Total}$ is more sensitive to some parameters than others, particularly $\beta\_1$, $\beta\_3$, $\gamma\_1$ and $\theta$. Scenario 1 has no persistent disease, whereas Scenario 3 has persistent disease with significant endemicity among the vaccinated, as can be seen from the sensitivity indices of $\gamma\_1$ (the rate at which infected individuals are isolated) and $\theta$ (the rate of leaving the incubation stage). The same observation applies to the persistence of disease with low endemicity in Scenario 2 and the persistence of disease with high endemicity among the vaccinated in Scenario 4. The sensitivity of these parameters is higher when endemicity is low among the non-vaccinated population. Our simulations revealed that the value of the sensitivity index changes depending on the scenario (1, 2, 3, 4), with $\beta\_3$ having the highest value in Scenarios 1 and 2, whereas the index of $\beta\_1$ dominates in Scenario 4.

#### *7.3. Simulation of Optimal Control*

The time series of variables in the model without and with optimal control are shown in the figures below. The goal is to compare the effect of control on the different variables of the model.

Figure 4 shows three variables, susceptible, infected and recovered, without and with the optimal control effect. These simulations show that the optimal control increases the number of susceptible and recovered people while decreasing the number of infected individuals. The effect of the control on the susceptible and recovered populations is clearly more significant, since these populations rose four-fold in 50 days while the infected population declined over the same period. This means that the control is very effective for all three of these compartments.

The simulation of the time series of the variables representing the vaccinated, exposed and deceased individuals is shown in Figure 5. Clearly, the vaccinated population benefits more from the optimal control than the exposed population since the control aims to improve vaccination effectiveness. The control strategy, on the other hand, has no obvious effect on the number of deaths.

Similarly, as seen in Figure 6, the control strategy reduces the number of isolated individuals (at home or in hospital). However, there is more benefit in controlling the population with mild symptoms compared to people in the hospital with severe or critical symptoms. When applying the control to all three categories of quarantined, hospitalized, and critical cases of COVID-19, the benefit of the control is clear.

The three types of optimal controls are given in Figures 7–9. These figures show the intensity of each measure needed in the case of imperfect vaccination. The awareness campaign should be maintained at maximum intensity for as long as 7 weeks. At the same time, non-pharmaceutical measures must take place, including mobility restrictions or lockdown for at least 24 days. With regard to the third control measure, the efficacy of the vaccine should be improved as much as possible during the first week of vaccination and stay above 30% within the first 50 days of vaccination. The outcome of this control approach shows that these three measures must be implemented simultaneously to deal with imperfect vaccination.

**Figure 4.** Time series of the states S, I and R without and with control.

**Figure 5.** The time series of states V, E and D without and with control.

**Figure 6.** Time series of the states Q, H and C with and without control.

**Figure 7.** Evolutionary dynamics of the control *u*1(*t*).

**Figure 8.** Evolutionary dynamics of the control *u*2(*t*).

**Figure 9.** Evolutionary dynamics of the control *u*3(*t*).

#### **8. Conclusions**

COVID-19 is still taking a toll on people's lives all over the world. Countries are rushing to implement vaccination to gain herd immunity, to contain the spread of the disease, and to bring the fatality rate of the disease to the lowest possible level. However, the administered vaccines have different levels of efficacy among the same population, which means that relying on vaccination alone to control the pandemic would not provide total protection for the population against further waves of COVID-19. In this work, we aimed to present a mathematical model of the imperfect vaccination of COVID-19 and to study the dynamics of this model. One further element of this model is that susceptible and vaccinated people have different infection rates, and the fatality rate of the disease (the rate of death due to the infection) is related to the percentage of the isolated population that is in critical condition. We derived a threshold $\mathcal{R}\_0$, which is the basic reproduction number of disease transmission among the population. We showed that this threshold does not give us sharp epidemiological properties of the model. In fact, we proved, via the Lyapunov method, that the disease-free equilibrium is globally asymptotically stable if $\mathcal{R}\_0^{Total} \leq 1$, where $\mathcal{R}\_0^{Total}$ is the sum of $\mathcal{R}\_0$ and $\mathcal{R}\_0^{v}$, the threshold of transmission of the disease among the vaccinated population. This finding shows that an increase in the efficacy of the vaccination should lead to protecting the population from infection (low $\beta\_3$ and $\beta\_4$), which will help control the pandemic.

To make our analysis more realistic, we estimated the parameters of our model using data from Morocco between 1 February 2021 and 25 March 2021. Within the range of the estimated parameters, we performed a sensitivity analysis of R<sub>0</sub><sup>*Total*</sup> with respect to the parameters of the model to find the elasticity index of each parameter. Depending on the disease status, our simulations of model (3) produced four outcomes. Scenario 1: the disease does not persist. Scenario 2: the disease persists with low threshold values. Scenario 3: the disease persists with high endemicity among vaccinated people and low endemicity among non-vaccinated people. Scenario 4: the disease persists with low endemicity among the vaccinated and high endemicity among the unvaccinated.

To further investigate possible additional measures that complement vaccination, we introduced an optimal control problem with the goals of increasing awareness of vaccination, limiting the probability of infection through adherence to a preventative protocol, and increasing the efficacy of the vaccine. Public health authorities can easily implement these measures. In fact, as the virus mutates, many governments are pushing their populations to get vaccinated and asking people to reduce their contacts and wear masks. Moreover, there is a constant effort to increase vaccine efficacy by producing new vaccines or boosters.

Our optimal control solution showed that, to reduce the impact of imperfect vaccination, a longer awareness campaign is needed to engage the population in vaccination. On the other hand, the restriction on population mobility need not be long, since our simulation showed a drop of *u*<sub>2</sub> from 1 (full restriction) after 24 days. To ensure full protection of the population's health, vaccination efficacy must increase by 30% in the first 50 days.

In conclusion, our work showed that, in the face of imperfect COVID-19 vaccination, we mainly have to focus on two measures. The first is to increase awareness of the importance of vaccination, which will increase the number of people vaccinated. The second is to develop vaccines with high efficacy that protect the population from infection rather than merely making symptoms less severe. Alongside these two measures, our study showed that population mobility restrictions have the lowest impact on controlling the spread of the virus.

**Author Contributions:** Conceptualization, L.B. and A.T.; methodology, M.L. and A.T.; software, L.B. and M.L.; validation, A.T., M.L. and O.Z.; formal analysis, L.B. and A.T.; investigation, L.B.; resources, M.L. and M.R.; data curation, L.B. and M.R.; writing—original draft preparation, L.B. and A.T.; writing—review and editing, L.B. and A.T.; visualization, L.B. and M.L.; supervision, M.L., A.T. and M.R. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors would like to thank the reviewers for their comments and inputs that helped improve the quality of our work.

**Conflicts of Interest:** The authors declare no conflict of interest.


## *Article* **Determining COVID-19 Dynamics Using Physics Informed Neural Networks**

**Joseph Malinzi 1,2,\*, Simanga Gwebu <sup>1</sup> and Sandile Motsa 1,3**


**Abstract:** The Physics Informed Neural Networks framework is applied to understanding the dynamics of COVID-19. To provide the governing system of equations used by the framework, the Susceptible–Infected–Recovered–Death mathematical model is used. This study focused on finding patterns in the dynamics of the disease, which involves predicting the infection, recovery, and death rates and thus predicting the numbers of active infections, total recoveries, susceptible individuals, and deceased at any required time. The study used data on the dynamics of COVID-19 collected in the Kingdom of Eswatini between March 2020 and September 2021. The obtained results could be used for making future forecasts on COVID-19 in Eswatini.

**Keywords:** Physics Informed Neural Networks; mathematical modeling; data analysis; COVID-19

**Citation:** Malinzi, J.; Gwebu, S.; Motsa, S. Determining COVID-19 Dynamics Using Physics Informed Neural Networks. *Axioms* **2022**, *11*, 121. https://doi.org/ 10.3390/axioms11030121

Academic Editors: Natália Martins, Ricardo Almeida, Cristiana João Soares da Silva and Moulay Rchid Sidi Ammi

Received: 18 November 2021 Accepted: 24 January 2022 Published: 10 March 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **1. Introduction and Background**

#### *1.1. Introduction*

The Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) virus causes Coronavirus Disease 2019 (COVID-19) [1]. On 11 March 2020, the World Health Organization (WHO) declared it a pandemic after it spread globally and wreaked havoc [2,3]. This virus is a member of the Beta coronavirus family, whose members are quite likely to cause severe symptoms and potentially death [4]. The massive impact of the viral disease demonstrates the need for an urgent understanding of its dynamics [3,5–8].

In order to curtail the spread of the disease, many countries have executed partial to full lockdowns, South Africa being among the first countries to do so and the Kingdom of Eswatini following suit a day later. The pandemic has forced these countries to periodically close down most economic activity, and this has had serious ramifications for their citizens, including declines in the value of their currencies, the rand and emalangeni respectively, along with rises in commodity prices [9]. It is therefore imperative that a solution to the pandemic be found quickly.

The spread of diseases such as COVID-19 has been modelled using systems of ordinary differential equations (ODEs), amongst other mathematical approaches. Mathematical modeling has already been used to derive several useful insights about COVID-19 [10–22]. Several aspects have been investigated, ranging from determining epidemic curves [11,17,20–22] and investigating the role played by asymptomatic cases in the spread of the disease [15] to determining the efficacy of wearing masks [12] and the efficacy of different control measures [13,16,19].

Data analysis tools such as machine learning (ML) have also been used to better understand the distribution patterns of COVID-19 [8,23]. The viability of applying artificial intelligence (AI) technologies to solve ODEs has been questioned because these equations are governed by scientific rules that are never imposed during the training process [24]. A novel framework called Physics Informed Neural Networks (PINNs) has been developed to address this problem. It can also make very accurate predictions from very small datasets [25].

Various studies on COVID-19 are gradually utilizing the Physics Informed Neural Networks framework. In one investigation, PINNs were used to evaluate the spread of COVID-19 after quarantine restrictions were implemented; as the governing system of equations, the researchers used the Susceptible–Exposed–Infected–Removed model, and the study primarily sought to understand the benefits of implementing COVID-19 restrictions [6]. A time-varying set of parameters was employed in another study, whose governing system of equations was based on the Susceptible–Infected–Recovered–Deceased (SIRD) model; there, however, a recurrent neural network was used to build the model [8].

The PINNs framework is an Artificial Neural Network (ANN) architecture that exposes the generated neural network to datasets and governing laws during the training process. The governing laws are provided to the model in the form of ODEs or PDEs [24–26]. Understanding the dynamics of COVID-19 is critical for minimizing the virus's consequences. Although AI models have been built, the majority of them require a large amount of training data to obtain high accuracy. However, because COVID-19 was only recently discovered, only small datasets are available, so these AI models are not viable. Other models merely fit the data provided, making future predictions less accurate. This necessitates the creation of models that can generate accurate predictions of the dynamics of disease spread using small datasets.

The goal of this research is to determine the dynamics of COVID-19 using the Physics Informed Neural Networks architecture. The SIRD model was employed as the governing mathematical model in this investigation. The dynamics that this study aims to determine are the virus's average rates of infection, recovery, and mortality. It further uses these rates to predict the number of active infections, the number of people who have recovered, how many are susceptible, and how many are deceased at any point in time. To train the neural network, the research utilized data collected from the Kingdom of Eswatini between March 2020 and September 2021.

The rest of the document is divided into four parts, the first of which is a literature review. This section provides a review of some previous studies which have utilized the PINNs framework. The methodology section follows, which examines the mathematical and physics informed neural network framework and its development. The results and simulations part follows, which includes the analysis as well as the results and the errors obtained. The last section is the conclusion, which summarizes the research findings and provides recommendations for further research.

#### *1.2. Background*

Artificial Neural Networks (ANNs) are the building blocks of deep learning, a branch of AI and machine learning [27]. They are computer models that try to combine the capabilities of human brains and computers [25,28]. Nodes are placed in a layer format and interlinked by connectors in the widely used ANN structure [29]. The input layer receives data in vector format and transmits the dot product of the connector weights and the received data to the next node layer. The activation function is then applied to the node's dot product [30]. The activation function is a mathematical function that converts linear input values into a non-linear format [31]. This method is known as the feedforward process. Backpropagation is the technique of taking the error and adjusting the weights during the training phase.

**Definition 1.** *A feedforward neural network with a total of N neurons arranged in a single layer is a function y* : R<sup>*d*</sup> → R *of the form:*

$$y(t) = \sum\_{i=1}^{N} \alpha\_i \sigma(w\_i^T t + b\_i),$$

*where t* ∈ R<sup>*d*</sup> *and α<sub>i</sub>, b<sub>i</sub>* ∈ R*. σ is the activation function and w<sub>i</sub>* ∈ R<sup>*d*</sup> *are the weights of neuron i, multiplied by the input value t. The α<sub>i</sub> are the neural network output weights, applied to the output of each neuron in the layer, and b<sub>i</sub> is the bias of each neuron.*
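A single-layer feedforward network of this form can be sketched directly in NumPy. This is our own illustration of Definition 1; the weights, sizes, and input below are arbitrary demonstration values, not anything from the article:

```python
import numpy as np

# A single-hidden-layer feedforward network, per Definition 1.
def feedforward(t, W, alpha, b):
    """y(t) = sum_i alpha_i * tanh(w_i^T t + b_i).

    t: input of shape (d,); W: (N, d) weights, one row per neuron;
    alpha: (N,) output weights; b: (N,) biases.
    """
    return float(alpha @ np.tanh(W @ t + b))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 1))        # N = 4 neurons, d = 1 input
alpha = rng.normal(size=4)
b = rng.normal(size=4)
y = feedforward(np.array([0.5]), W, alpha, b)
```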

There are numerous activation functions used in neural networks. This study employs the hyperbolic tangent function (tanh).

**Definition 2.** *A tangent hyperbolic activation function is a function σ* : R → R *such that:*

$$
\sigma(t) \to \begin{cases} \quad 1 & as \quad t \to \infty, \\ -1 & as \quad t \to -\infty. \end{cases}
$$

The ability of neural networks to alter internal variables during training so that they can tackle any given problem with some degree of precision is one of their most important features. By definition, neural networks are discriminatory functions, and as a result of this attribute, a neural network is a universal approximator.

**Theorem 1.** *If the σ in the neural network definition is continuous and discriminatory, then the set of all neural networks is dense in the space C*(*In*) *of continuous functions on In, where In is the n-dimensional unit cube.*

**Proof.** Let N ⊂ *C*(*In*) be the set of neural networks; N is a linear subspace of *C*(*In*). To show that N is dense in *C*(*In*), we show that its closure is *C*(*In*). By contradiction, suppose the closure of N is not *C*(*In*). Then the closure of N is a closed proper subspace of *C*(*In*).

The mathematical modeling approach and the data-based approach are the two primary methods for making predictions. The benefits and drawbacks of these two models are distinct. The majority of mathematical models used are generated from the underlying processes. As a result, these models follow governing laws, resulting in a directed output that, when given the correct initial values, always yields accurate results. However, one of the most significant drawbacks is that mathematical models do not account for any unanticipated changes, which is a flaw in real-time process analysis [32].

Data models that include machine learning algorithms identify patterns in incoming data and produce the desired output. Larger datasets are required to fully comprehend these trends. This means that if a limited data collection is available, other datasets that are similarly relevant can be utilised. The margin of error is increased by this compromise. The necessity for a huge amount of data and an extensive training procedure necessitates the use of a lot of computing power, which is expensive. Data can also be compressed to fit the processing power available, compromising results.

#### **2. Review of Studies on Physics Informed Neural Networks**

Multiple world problems have been analyzed and simulated using the recently established Physics Informed Neural Networks framework. This section summarizes some research on Physics Informed Neural Networks. COVID-19-related research is among the studies included.

#### *2.1. Physics Informed Deep Learning for Traffic State Estimation*

Real-time traffic states were analyzed using the Physics Informed Neural Networks framework. The method of estimating traffic variables using partial data is known as traffic state estimation. The traffic variables employed in the analysis are *q*, which stands for the traffic flow rate, *v*, which stands for the average vehicle speed, and *ρ*, which stands for vehicle density. The goal of traffic state analysis is to improve road planning and comprehension. This includes the early detection of traffic jams and high transit demand. A dramatic decline in the average speed *v*, for example, could signal significant traffic congestion or an accident [24].

The methods used to conduct these traffic estimations are primarily mathematical or data-driven. This research takes a data-driven approach. However, data-driven technologies such as machine learning necessitate a large amount of data. This is a significant disadvantage because it necessitates the deployment of a large number of sensors and other costly equipment. This forces transportation planners to collect data primarily in cost-effective locations, resulting in noisy or error-filled data. To address these issues, the researchers used a physics informed neural network technique [24].

When developing the mathematical model, they set the variables based on the acquired data: *q* is the number of cars traversing a certain area at a specific time, the average speed *v* is the mean speed of vehicles, and the vehicle density *ρ* is the number of vehicles in a given road distance. The cumulative traffic flow *N*(*x*, *t*) is defined as the total number of vehicles passing through a specific point *x* up to time *t*. The flow *q*(*x*, *t*) is the partial derivative of the cumulative flow with respect to time *t*, and the density *ρ*(*x*, *t*) is the negative partial derivative of the cumulative flow with respect to *x*. The mathematical representations are:

$$q(\mathbf{x}, t) = \frac{\partial N(\mathbf{x}, t)}{\partial t},\tag{1}$$

$$
\rho(\mathbf{x},t) = \frac{-\partial N(\mathbf{x},t)}{\partial \mathbf{x}}.\tag{2}
$$

The conservation law states:

$$\frac{\partial q(\mathbf{x},t)}{\partial \mathbf{x}} + \frac{\partial \rho(\mathbf{x},t)}{\partial t} = 0. \tag{3}$$

The relationship between the stated variables is:

$$v(\rho) = v\_f \left(1 - \frac{\rho}{\rho\_m}\right),\tag{4}$$

$$q(\rho) = \rho v\_f \left( 1 - \frac{\rho}{\rho\_m} \right),\tag{5}$$

where *v<sub>f</sub>* is the free-flow speed and *ρ<sub>m</sub>* is the maximum (jam) density.

The mean square error (MSE) of the *N* outputs at point *x* and time *t* is used to construct the cost function *JDL*, which is utilized to increase the accuracy of the neural network. The neural network's forecast is *ρ*∗(*x*, *t*), while the true value is *ρ*(*x*, *t*). The MSE *JPHY* is defined in relation to the stated conservation laws as a result of the deployment of the physics informed neural network.

$$J\_{DL} = \frac{1}{N} \sum\_{i=1}^{N} |\rho(\mathbf{x}, t) - \rho^\*(\mathbf{x}, t)|^2,\tag{6}$$

$$J\_{PHY} = \frac{1}{N} \sum\_{i=1}^{N} \left|v\_f\left(1 - \frac{2\rho(\mathbf{x}, t)}{\rho\_m}\right)\frac{\partial \rho(\mathbf{x}, t)}{\partial \mathbf{x}} + \frac{\partial \rho(\mathbf{x}, t)}{\partial t}\right|^2. \tag{7}$$

The neural network is then optimized using both losses: a parameter *μ* is added to weight the contributions of the data and physics terms, giving the PINNs implementation equation.

$$J = \mu J\_{DL} + (1 - \mu) J\_{PHY}.\tag{8}$$

The Frobenius norm is then used to calculate the accuracy of the neural network. The model was put to the test with various data sizes and collection locations, with positive results [24].
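The weighted objective (8) can be sketched numerically. Everything below (grid sizes, parameter values, and the synthetic density fields) is our own illustration rather than the setup of [24], and the derivatives in (7) are approximated by finite differences instead of the automatic differentiation a PINN would use:

```python
import numpy as np

# Synthetic illustration of the data loss (6), physics loss (7), and
# combined objective (8); all values here are our own placeholders.
v_f, rho_m, mu = 30.0, 0.2, 0.5    # free-flow speed, jam density, weight

x = np.linspace(0.0, 1000.0, 201)  # positions along the road
t = np.linspace(0.0, 60.0, 121)    # observation times
X, T = np.meshgrid(x, t, indexing="ij")

rho_true = 0.1 + 0.05 * np.sin(2 * np.pi * X / 1000.0)        # "measured"
rho_pred = rho_true + 0.001 * np.cos(2 * np.pi * T / 60.0)    # "network"

# Data loss (6): mean squared gap between prediction and measurements.
J_DL = np.mean(np.abs(rho_true - rho_pred) ** 2)

# Physics loss (7): residual of the conservation law with Greenshields flux.
rho_x = np.gradient(rho_pred, x, axis=0)
rho_t = np.gradient(rho_pred, t, axis=1)
residual = v_f * (1.0 - 2.0 * rho_pred / rho_m) * rho_x + rho_t
J_PHY = np.mean(np.abs(residual) ** 2)

J = mu * J_DL + (1.0 - mu) * J_PHY   # combined objective (8)
```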

Another study used a physics informed neural network to analyze electricity generation. Generators, powered by diverse energy sources such as wind and water, are used in the power generation process. The analysis and comprehension of real-time power generation is critical in determining the amount of power the generators produce [32]. It is not new to utilize data models to analyze power production or mathematical models to produce estimations. However, both have disadvantages. For example, using data and machine learning models demands a large amount of data, which must be analyzed by professionals before use in order to eliminate noise. This data analysis paradigm also entails the creation of sophisticated neural network designs [32].

As a result, the study introduces the usage of a physics-informed neural network to construct a training process that is data and physics-based. The study employs a single machine infinite bus (SMIB) system, which is a single-generator model. The inertia constant *m*<sub>1</sub>, the damping coefficient *d*<sub>1</sub>, and the bus susceptance *B*<sub>12</sub> are all parameters in the equation. The power supplied by the generator is *P*<sub>1</sub>, the voltage magnitudes of buses 1 and 2 are *V*<sub>1</sub> and *V*<sub>2</sub>, and the voltage angle behind reactance is *σ*; the angular frequency of the generator is its time derivative *σ̇*. As a result, the final function is:

$$f\_{\sigma}(t, P\_1) = m\_1 \ddot{\sigma} + d\_1 \dot{\sigma} + B\_{12} V\_1 V\_2 \sin(\sigma) - P\_1. \tag{9}$$

Equation (9) is used as the governing equation in the implementation of the physics informed neural network. The model adjusts *σ*, *σ̇* and *P*<sub>1</sub>, with *P*<sub>1</sub> ranging over [*Pmin*, *Pmax*], during the learning process. The model was simulated using datasets created with computer models and showed very positive results [32].

#### *2.2. Neural Network Aided Quarantine Control Model Estimation of Global COVID-19 Spread*

Two deep learning models are presented in the paper to approximate the parameters of COVID-19 spread. Forecasts are made using statistics from the United States, China (Wuhan), Italy, and South Korea. Both deep learning models employed are physics informed neural networks. PINNs are used to address problems that arise with conventional machine learning and neural network models: overfitting of the data, the need for high processing capacity, and the need for additional data from the spread of other diseases, such as MERS and SARS in this case. Because artificial neural networks are so complicated, it is difficult to understand how the final approximation is achieved. PINNs, on the other hand, simplify the process, making it easier to comprehend and analyze COVID-19's spread. The study's main goal is to determine the advantages of implementing COVID-19 restrictions [6].

The first model employs an SEIR system of ODEs. In the model, *S* stands for the number of susceptible people in the population, *E* for the number of exposed individuals, *I* for the number of infected people, and *R* for the number of individuals who have been removed. This first model ignores the potential consequences of COVID-19 control rules.

Common mathematical models, however, do not account for these in the predictions they make, making it hard to account for variables or elements such as overcrowding, social distancing, and other policies which may have been implemented by the different countries. The main policies highlighted by the authors include the use of police to enforce proper social distancing at traffic crossings, shops, and other places, as well as the shutdown of public transport, trains, and airports. Thus, to account for these multiple policies and obtain a better prediction, the study uses real data and estimations. The study also estimates the effective reproduction rate. The first model is:

$$\frac{dS(t)}{dt} = -\frac{\beta S(t)I(t)}{N},\tag{10}$$

$$\frac{dE(t)}{dt} = \frac{\beta S(t)I(t)}{N} - \sigma E(t),\tag{11}$$

$$\frac{dI(t)}{dt} = \sigma E(t) - \gamma I(t),\tag{12}$$

$$\frac{dR(t)}{dt} = \gamma I(t),\tag{13}$$

subject to the initial conditions *S* = *S*<sub>0</sub>, *I* = *I*<sub>0</sub>, *R* = *R*<sub>0</sub> and *E* = *E*<sub>0</sub>.

The second model used in the study accounts for quarantine control. The model thus introduces a time-dependent variable *T*(*t*) = *Q*(*t*) × *I*(*t*). This also changes the effective reproduction rate to *R<sub>t</sub>* = *β*/(*γ* + *Q*(*t*)). The parameter *Q*(*t*) is determined using a separate neural network, which takes the time, susceptible, exposed, infected, and recovered data as input. This network processes the data in 2 layers with 10 nodes per layer and uses a ReLU activation function (*NN*(*W*, *U*)). The determined *Q*(*t*) is then put into the Physics Informed Neural Network, which uses the model below to make its approximations.

$$\frac{dS(t)}{dt} = -\frac{\beta S(t)I(t)}{N},\tag{14}$$

$$\frac{dI(t)}{dt} = \frac{\beta S(t)I(t)}{N} - (\gamma + Q(t))I(t),\tag{15}$$

$$\frac{dI(t)}{dt} = \frac{\beta S(t)I(t)}{N} - (\gamma + \text{NN}(\mathcal{W}, \mathcal{U}))I(t),\tag{16}$$

$$\frac{dR(t)}{dt} = \gamma I(t),\tag{17}$$

$$\frac{dT(t)}{dt} = Q(t)I(t) = \text{NN}(\mathcal{W}, \mathcal{U})I(t). \tag{18}$$

The model is subject to the initial conditions *S* = *S*<sub>0</sub>, *I* = *I*<sub>0</sub>, *R* = *R*<sub>0</sub> and *T* = *T*<sub>0</sub>.

The results attained by the study showed that the first model, which does not account for imposed restrictions, produced approximations larger than the real values; that is, it predicted that the virus would be more catastrophic. The second model achieved a better fit, showing that the imposed restrictions have had a positive impact on limiting the spread of COVID-19. The model was also comprehensible, providing parameters which can be used to make future predictions [6].

#### *2.3. Identification and Prediction of Time-Varying Parameters of COVID-19 Model: A Data-Driven Deep Learning Approach*

This study focused on finding time-varying parameters of an SIRD model rather than average parameters [8]. It also used a deep learning model, specifically a Physics Informed Neural Network. The virus spreading model employed in that study is an SIRD, where *S* represents the number of susceptible people, *I* the infected people or active cases, *R* the number of recovered people, and *D* the number of deaths; *β* is the spreading rate, *γ* is the recovery rate, and *δ* is the death rate [8].

$$\frac{dS(t)}{dt} = -\frac{\beta S(t)I(t)}{N},\tag{19}$$

$$\frac{dI(t)}{dt} = \frac{\beta S(t)I(t)}{N} - \gamma I(t) - \delta I(t),\tag{20}$$

$$\frac{d\mathcal{R}(t)}{dt} = \gamma I(t),\tag{21}$$

$$\frac{dD(t)}{dt} = \quad \delta I(t). \tag{22}$$

The model is subject to the initial values *S* = *S*<sub>0</sub>, *I* = *I*<sub>0</sub>, *R* = *R*<sub>0</sub> and *D* = *D*<sub>0</sub>.

#### **3. Problem Formulation and Methodology**

The governing laws of the Physics Informed Neural Networks framework are provided as mathematical equations. This section covers the development and evaluation of the key mathematical models of focus. The mathematical model serves as the assumed physics laws that the network should adhere to.

The Susceptible–Infected–Recovered–Deceased (SIRD) model used assumes that the population can assume four states: Susceptible (S), Infected (I), Recovered (R) and Deceased (D). The susceptible population is the group that can contract the virus, and this contraction occurs at the rate *β*. The infected population is the group that has contracted the virus while the infection is still active. Infected individuals are removed either to the recovered population at the rate *γ* or to the deceased population at the rate *δ*. This means *δ* is the death rate, *β* is the infection rate, and *γ* is the recovery rate. Figure 1 shows the resulting COVID-19 transmission SIRD flow diagram.

From the flow diagram in Figure 1 we obtain the system:

$$\frac{dS(t)}{dt} = -\frac{\beta S(t)I(t)}{N},\tag{23}$$

$$\frac{dI(t)}{dt} = \frac{\beta S(t)I(t)}{N} - \gamma I(t) - \delta I(t),\tag{24}$$

$$\frac{d\mathcal{R}(t)}{dt} = \gamma I(t),\tag{25}$$

$$\frac{dD(t)}{dt} = \delta I(t). \tag{26}$$

The model is subject to the initial values *S* = *S*<sub>0</sub>, *I* = *I*<sub>0</sub>, *R* = *R*<sub>0</sub> = 0 and *D* = *D*<sub>0</sub> = 0.
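The SIRD system (23)–(26) can be integrated numerically to see the qualitative behaviour it describes. The sketch below is our own, with illustrative parameter values and an assumed population size rather than the rates estimated in the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (not the rates estimated in the study).
N = 1_160_000                      # assumed total population
beta, gamma, delta = 0.25, 0.1, 0.01

def sird(t, u):
    """Right-hand side of the SIRD system (23)-(26)."""
    S, I, R, D = u
    new_infections = beta * S * I / N
    return [-new_infections,
            new_infections - gamma * I - delta * I,
            gamma * I,
            delta * I]

# Initial values: a few infections, no recoveries or deaths yet.
u0 = [N - 100.0, 100.0, 0.0, 0.0]
sol = solve_ivp(sird, (0.0, 300.0), u0, rtol=1e-10, atol=1e-8)
S, I, R, D = sol.y
```

Because the four right-hand sides sum to zero, the total population S + I + R + D is conserved along the trajectory, which is a useful sanity check on any implementation.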

The system reflects the mathematical behaviour of the virus. Equation (23) gives the change in the number of susceptible individuals *S*(*t*) with respect to time *t*: a reduction by the product of the spreading rate *β*, the susceptible population *S*(*t*) and the infected population *I*(*t*), divided by the total population *N*. Equation (24) gives the change in the number of active infections *I*(*t*) with respect to time *t*: an increase by the amount by which the susceptible population was reduced, minus the product of the recovery rate *γ* and the infected population *I*(*t*) and the product of the death rate *δ* and the infected population *I*(*t*). Equation (25) shows that the change in the number of recoveries is the product of the recovery rate *γ* and the infected population *I*(*t*). Equation (26) shows that the change in the number of deaths is the product of the death rate *δ* and the infected population *I*(*t*). The model also assumes that initially there are no recoveries or deaths and that the infected and susceptible populations are greater than zero.

Studies and implementations of neural networks have shown that using numbers less than one improves accuracy and optimisation. As a result, we use the nondimensionalisation technique to rescale the provided data to values between 0 and 1.

We let *w* = *S*/*N*, *x* = *I*/*N*, *y* = *R*/*N*, *z* = *D*/*N*, *t* = *q*.

To rescale the SIRD model, we substitute *S* = *wN*, *I* = *xN*, *R* = *yN*, *D* = *zN* into the SIRD model, obtaining:

$$\frac{d(wN)}{dq} = -\frac{\beta (wN)(xN)}{N},\tag{27}$$

$$\frac{d(\mathbf{x}N)}{dq} = \frac{\beta(wN)(\mathbf{x}N)}{N} - \gamma \mathbf{x}N - \delta \mathbf{x}N,\tag{28}$$

$$\frac{d(yN)}{dq} = \gamma xN,\tag{29}$$

$$\frac{d(zN)}{dq} = \delta x N.\tag{30}$$

Hence the resulting system is:

$$\frac{dw}{dq} = -\beta w \mathbf{x},\tag{31}$$

$$\frac{d\mathbf{x}}{dq} = \beta w\mathbf{x} - \gamma \mathbf{x} - \delta \mathbf{x},\tag{32}$$

$$\frac{dy}{dq} = \gamma x,\tag{33}$$

$$\frac{dz}{dq} = \delta x.\tag{34}$$
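A property worth checking in the rescaled system (31)–(34) is that the four fractions always sum to one, since the right-hand sides cancel. A minimal forward-Euler sketch, in which the step size and rate values are our own illustrative choices:

```python
# Forward-Euler integration of the rescaled system (31)-(34).
# Rates and step size are illustrative choices, not fitted values.
beta, gamma, delta = 0.25, 0.1, 0.01
h, steps = 0.05, 6000              # h in days, total horizon 300 days

w, x, y, z = 0.999, 0.001, 0.0, 0.0   # population fractions, summing to 1
for _ in range(steps):
    dw = -beta * w * x
    dx = beta * w * x - gamma * x - delta * x
    dy = gamma * x
    dz = delta * x
    w, x, y, z = w + h * dw, x + h * dx, y + h * dy, z + h * dz

total = w + x + y + z   # dw + dx + dy + dz = 0, so this stays 1
```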

#### *3.1. The Neural Network*

The neural network we create consequently takes a single input value, the time *t*. The input is processed through the layers with weights *Wi*,*j*, where *i* is the start node position and *j* is the finishing node position. At every node, the activation function *tanh*, denoted by *σ*, is applied to the product of the weight and the input, forming a matrix. The model's output nodes, which make up the output layer, are *S*(*t*), *I*(*t*), *R*(*t*), and *D*(*t*).

$$
\sigma(\mathbf{x}) = \frac{\mathbf{e}^{\mathbf{x}} - \mathbf{e}^{-\mathbf{x}}}{\mathbf{e}^{\mathbf{x}} + \mathbf{e}^{-\mathbf{x}}}.\tag{35}
$$
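The explicit exponential form (35) can be checked against NumPy's built-in `tanh` and against the saturation limits of Definition 2; this check is our own illustration:

```python
import numpy as np

# The explicit form (35) of the tanh activation, compared with NumPy's
# built-in implementation.
def sigma(x):
    x = np.asarray(x, dtype=float)
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

grid = np.linspace(-5.0, 5.0, 101)
matches_builtin = bool(np.allclose(sigma(grid), np.tanh(grid)))  # True
```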

This gives a representation of the neural network as a matrix with *m* layers and *n* nodes per layer.

#### 3.1.1. Residual of Model's Equations

The residual error of an ODE is the difference between its right- and left-hand sides. In the construction of PINNs, the residual error is utilized in the neural network's loss function. We obtain four residual error functions from the SIRD model. From Equation (23) we get *ResS*, the residual error of the susceptible population, which is the margin by which the mathematically predicted change in the susceptible population is wrong. From Equation (24) we get *ResI*, the residual error of the infected population, the margin of error in the mathematically estimated change in active infections. The residual error of the recovered population, *ResR*, is given by Equation (25). From Equation (26) we get *ResD*, the residual error of the deceased population, the margin by which the mathematically predicted change in the deceased population is wrong.

$$Res\_S = \frac{dS(t)}{dt} + \frac{\beta S(t)I(t)}{N},\tag{36}$$

$$Res\_I = \frac{dI(t)}{dt} - \frac{\beta S(t)I(t)}{N} + \gamma I(t) + \delta I(t),\tag{37}$$

$$\text{Res}\_{\mathcal{R}} = \frac{d\mathcal{R}(t)}{dt} - \gamma I(t), \tag{38}$$

$$\text{Res}\_D = \frac{dD(t)}{dt} - \delta I(t). \tag{39}$$
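The residuals (36)–(39) can be sanity-checked numerically: evaluated on an (almost) exact trajectory of the SIRD system, they should be close to zero. The sketch below approximates the time derivatives with finite differences rather than the automatic differentiation a PINN would use, and the parameter values are our own:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters; derivatives in (36)-(39) are approximated
# with finite differences instead of automatic differentiation.
N, beta, gamma, delta = 1.0e6, 0.25, 0.1, 0.01

def sird(t, u):
    S, I, R, D = u
    inf = beta * S * I / N
    return [-inf, inf - (gamma + delta) * I, gamma * I, delta * I]

t = np.linspace(0.0, 200.0, 2001)
sol = solve_ivp(sird, (0.0, 200.0), [N - 50.0, 50.0, 0.0, 0.0],
                t_eval=t, rtol=1e-10, atol=1e-8)
S, I, R, D = sol.y

# Residuals (36)-(39): near zero on a trajectory satisfying the ODEs.
res_S = np.gradient(S, t) + beta * S * I / N
res_I = np.gradient(I, t) - beta * S * I / N + gamma * I + delta * I
res_R = np.gradient(R, t) - gamma * I
res_D = np.gradient(D, t) - delta * I
```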

#### 3.1.2. The Loss Function

A loss function must first be constructed before backpropagation can be used to optimize a neural network. We derive the loss function *lossT* for the PINNs we constructed as the sum of two loss functions, *loss*<sub>1</sub> and *loss*<sub>2</sub>. *loss*<sub>1</sub> is the sum of the mean square errors between the network predictions and the data for each compartment: *MSESoutput* for the susceptible population, *MSEIoutput* for the infected population, *MSERoutput* for the recovered population and *MSEDoutput* for the deceased population. Each is the mean square of the gap between the actual and predicted population sizes; for instance, *MSEDoutput* is the mean square error of the difference between the predicted deceased *D*∗(*ti*) and the actual data value *Di*.

The sum of the mean square errors of the susceptible population residual error *MSESres*, the mean square errors of the infected population residual error *MSEIres*, the mean square errors of the recovered population residual error *MSERres*, and the mean square errors of the deceased population residual error *MSEDres* is *loss*2.

$$loss\_1 = MSE\_{Soutput} + MSE\_{Ioutput} + MSE\_{Routput} + MSE\_{Doutput}, \tag{40}$$

$$loss\_2 = MSE\_{Sres} + MSE\_{Ires} + MSE\_{Rres} + MSE\_{Dres}, \tag{41}$$

$$loss\_T = loss\_1 + loss\_2, \tag{42}$$

$$MSE\_{Sres} = \frac{1}{M} \sum\_{i=1}^{M} |Res\_S|^2,\tag{43}$$

$$MSE\_{I\text{res}} = \frac{1}{M} \sum\_{i=1}^{M} |Res\_{I}|^2,\tag{44}$$

$$MSE\_{Rres} = \frac{1}{M} \sum\_{i=1}^{M} |Res\_{R}|^2,\tag{45}$$

$$MSE\_{\rm Dres} = \frac{1}{M} \sum\_{i=1}^{M} \left| Res\_D \right|^2,\tag{46}$$

$$MSE\_{Soutput} = \frac{1}{M} \sum\_{i=1}^{M} |S^\*(t\_i) - S\_i|^2,\tag{47}$$

$$MSE\_{Ioutput} = \frac{1}{M} \sum\_{i=1}^{M} \left| I^\*(t\_i) - I\_i \right|^2,\tag{48}$$

$$MSE\_{Routput} = \frac{1}{M} \sum\_{i=1}^{M} |R^\*(t\_i) - R\_i|^2,\tag{49}$$

$$MSE\_{Doutput} = \frac{1}{M} \sum\_{i=1}^{M} |D^\*(t\_i) - D\_i|^2. \tag{50}$$
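Putting Equations (40)–(50) together, the total loss is a plain sum of data-fitting and physics-residual mean square errors. A small numpy sketch (function names are illustrative, not from the paper's code):

```python
import numpy as np

def mse(x):
    """Mean square error, as in Eqs. (43)-(50): (1/M) * sum |x_i|^2."""
    return np.mean(np.abs(x) ** 2)

def pinn_loss(pred, data, residuals):
    """Total loss (42): loss1 (40), the data-fitting MSEs, plus
    loss2 (41), the physics-residual MSEs. `pred` and `data` each hold
    the four compartments S, I, R, D sampled at M points; `residuals`
    holds Res_S, Res_I, Res_R, Res_D at the same points."""
    loss1 = sum(mse(p - d) for p, d in zip(pred, data))   # Eqs. (47)-(50)
    loss2 = sum(mse(r) for r in residuals)                # Eqs. (43)-(46)
    return loss1 + loss2
```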

We get a neural network that looks like Figure 2 using the time input, the layer matrix, the output layer, and the residual functions.

**Figure 2.** A schematic representation of the physics-informed neural network, which takes time (*t*) as input and outputs the Susceptible (*S*), Infected (*I*), Recovered (*R*) and Deceased (*D*) populations. The outputs are subjected to the PINN residual constraints.

#### *3.2. Basic Model Properties*

The analysis of the mathematical model is presented next. This part examines the model's features and expected behaviour, such as the reproduction number, which is the minimal number of transmissions required for a pandemic to occur, and the sensitivity analysis of the ODE system.

#### 3.2.1. Basic Reproduction Number

To comprehend COVID-19, which has now become a pandemic, we must first estimate the minimal rate of secondary infections required for a pandemic to arise. The reproduction number *R*<sup>0</sup> is also the threshold below which the spread dies out. Its derivation is as follows:

$$0 < \beta S\_0 I\_0 - (\gamma + \delta) I\_0, \tag{51}$$

$$0 < \beta S\_0 - (\gamma + \delta),\tag{52}$$

$$(\gamma + \delta) < \beta S\_0,\tag{53}$$

$$R\_0 = \frac{\beta S\_0}{\gamma + \delta}.\tag{54}$$

The change described by the right-hand side of Equation (51), obtained after non-dimensionalization, must be strictly greater than 0 for a pandemic to occur. Equation (52) is produced by dividing both sides by *I*0, and Equation (53) is obtained by moving the removal term to the left-hand side. From Equation (53) we deduce that a pandemic requires the ratio *R*0 of Equation (54), the spreading rate divided by the removal rate, to exceed 1.
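Equation (54) amounts to a one-line computation. A sketch using the synthetic-data rates quoted later in Section 3.3 (the normalized initial susceptible fraction S0 = 1 is an assumption for illustration):

```python
def reproduction_number(beta, gamma, delta, S0):
    """Basic reproduction number, Equation (54); an outbreak can only
    grow while R0 > 1."""
    return beta * S0 / (gamma + delta)

# beta = 0.14, gamma = 0.037, delta = 0.005 are the rates from Section 3.3.
R0 = reproduction_number(beta=0.14, gamma=0.037, delta=0.005, S0=1.0)
```

With these rates, R0 ≈ 3.33, well above the epidemic threshold of 1.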

#### 3.2.2. SIRD Model Analysis

The sensitivity analysis of the mathematical model also reveals some of its essential aspects, such as the estimated maximum number of infections *Imax*. Equations (23)–(26) of the SIRD model are used to calculate the maximum number of infected individuals that can occur at any given time.

$$\frac{dI(t)}{dS(t)} = -1 + \frac{\gamma + \delta}{\beta S}.\tag{55}$$

To obtain this, we first divide Equation (24) by Equation (23), which gives Equation (55), and integrate it to obtain Equation (56).

$$I + S - \frac{\gamma + \delta}{\beta} (\ln S) = I\_0 + S\_0 - \frac{\gamma + \delta}{\beta} (\ln S\_0). \tag{56}$$

To obtain the maximum possible value of infection, we find the point where the derivative in Equation (55) is equal to zero. This occurs when *S* = (*γ* + *δ*)/*β*.

$$I + S - \frac{\gamma + \delta}{\beta} (\ln S) = I\_0 + S\_0 - \frac{\gamma + \delta}{\beta} (\ln S\_0), \tag{57}$$

$$I\_{\max} + S - \frac{\gamma + \delta}{\beta} (\ln S) = I\_0 + S\_0 - \frac{\gamma + \delta}{\beta} (\ln S\_0), \tag{58}$$

$$I\_{\max} + \frac{\gamma + \delta}{\beta} - \frac{\gamma + \delta}{\beta} \ln\left(\frac{\gamma + \delta}{\beta}\right) = I\_0 + S\_0 - \frac{\gamma + \delta}{\beta} \ln(S\_0), \tag{59}$$

$$I\_{\max} = I\_0 + S\_0 - \frac{\gamma + \delta}{\beta}\left[1 + \ln\left(\frac{\beta S\_0}{\gamma + \delta}\right)\right],\tag{60}$$

$$I\_{\max} = I\_0 + S\_0 - \frac{\gamma + \delta}{\beta}\left[1 + \ln(R\_0)\right].\tag{61}$$

In Equation (59), we substitute the peak value of *S*. In Equation (60), the expression is simplified and rearranged, and in Equation (61) the value *R*<sup>0</sup> is substituted where possible. We now have the estimated maximum number of simultaneously infected persons. Individuals infected with the virus either recover (*Rend*) or die (*Dend*), thus we calculate the predicted number of persons who will either recover or die.
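The closed form in Equation (61) can be checked against a direct numerical integration of the nondimensionalized model (compartments as fractions, so the factor *N* drops out). A sketch with illustrative parameter values (assumptions, not fitted rates):

```python
import numpy as np

def i_max(beta, gamma, delta, S0, I0):
    """Peak infected fraction, Equation (61); valid when R0 > 1."""
    k = (gamma + delta) / beta
    R0 = beta * S0 / (gamma + delta)
    return I0 + S0 - k * (1.0 + np.log(R0))

# Cross-check by forward-Euler integration of the nondimensional system
# dS/dt = -beta*S*I,  dI/dt = beta*S*I - (gamma + delta)*I.
beta, gamma, delta, S0, I0 = 0.5, 0.15, 0.05, 0.99, 0.01
S, I, dt, peak = S0, I0, 0.001, I0
for _ in range(100_000):
    S, I = S - dt * beta * S * I, I + dt * (beta * S * I - (gamma + delta) * I)
    peak = max(peak, I)
```

The numerically observed peak of `I` agrees with `i_max(...)` to within the integration error.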

$$\overline{R}\_{end} = R\_{end} + D\_{end}, \tag{62}$$

$$\overline{R}\_{end} = S\_0 + I\_0 - S\_{end}. \tag{63}$$

According to the constructed SIRD model, every person who becomes infected eventually either recovers or dies, therefore the total number of persons affected equals the sum of the recoveries *Rend* and the deceased *Dend*, as illustrated in Equation (62). We estimate the total infected at the conclusion of the virus spreading period to be equal to the sum of the original susceptible and infected populations minus the susceptible population at the end of the period, as shown in Equation (63).

Setting *I* = *Iend* in Equation (56) and noting that *Iend* = 0 at the end of the spread gives the final-size relation for *Send*:

$$S\_{end} + I\_{end} - \frac{\gamma + \delta}{\beta} \ln(S\_{end}) = I\_0 + S\_0 - \frac{\gamma + \delta}{\beta} \ln(S\_0),\tag{64}$$

$$S\_{end} - \frac{\gamma + \delta}{\beta} \ln(S\_{end}) = I\_0 + S\_0 - \frac{\gamma + \delta}{\beta} \ln(S\_0),\tag{65}$$

where *S*, *I*, *R* and *D*, respectively, represent susceptible, infectious, recovered and deceased individuals and *N* = *S* + *I* + *R* + *D* is the total population. The parameters *β*, *γ* and *δ*, respectively, represent the infection, recovery and death rates. Since recovered and deceased individuals have the same effect on the model, we group them as *R*, representing removed individuals, with removal rate (*γ* + *δ*); such that we have,

$$\frac{dS(t)}{dt} = -\frac{\beta S(t)I(t)}{N},\tag{66}$$

$$\frac{dI(t)}{dt} = \frac{\beta S(t)I(t)}{N} - (\gamma + \delta)I(t),\tag{67}$$

$$\frac{dR(t)}{dt} = (\gamma + \delta)I(t). \tag{68}$$

This system can be rewritten as:

$$\frac{dS(t)}{dt} = -\frac{\beta S(t)I(t)}{N},\tag{69}$$

$$\frac{dI(t)}{dt} = \frac{\beta S(t)I(t)}{N} - (\gamma + \delta)I(t),\tag{70}$$

$$R = N - I - S.\tag{71}$$
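The final quantity *Send* in Equations (64) and (65) is defined only implicitly, but the relation can be solved by fixed-point iteration. A sketch (the rearrangement below follows from taking logarithms in the final-size relation; the parameter values are illustrative assumptions):

```python
import numpy as np

def s_end(beta, gamma, delta, S0, I0, tol=1e-12):
    """Solve  S_end - k*ln(S_end) = I0 + S0 - k*ln(S0),
    with k = (gamma + delta)/beta, via the equivalent fixed point
    S_end = S0 * exp((S_end - S0 - I0) / k). Starting below the
    threshold k converges to the relevant root when R0 > 1."""
    k = (gamma + delta) / beta
    x = 0.0
    for _ in range(1000):
        x_new = S0 * np.exp((x - S0 - I0) / k)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Remaining susceptible fraction and, via Equation (63), total ever infected.
send = s_end(beta=0.5, gamma=0.15, delta=0.05, S0=0.99, I0=0.01)
rend = 0.99 + 0.01 - send
```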

The results of the simulations of the Physics Informed Neural Networks framework are presented in this section, together with a full analysis of the results and of the changes in accuracy as parameters such as data size change. The information was gathered through national daily updates and a Google Data Studio analysis website created by the University of Eswatini and Wits Ithemba Labs [33]. Due to location/resource constraints, the model was constructed using Python 3 on a Spyder interface running in offline mode. Numpy, Matplotlib, and Tensorflow were the main packages utilized.

#### *3.3. Simulation Using Mathematica Generated Data*

To test the model and validate the PINNs, we used Mathematica to create synthetic data for an SIRD model. The benefit of this type of data was that it was less noisy. We started the model with a 100,000-person susceptible population, 0 recoveries and deaths, and five infections. In order to acquire the data shown in Figure 3, the average infection rate was set to 0.14, the average recovery rate to 0.037, and the average mortality rate to 0.005.
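The same kind of synthetic dataset can be reproduced without Mathematica. A forward-Euler sketch (one step per day and a 365-day horizon are assumptions; only the rates and initial sizes come from the text):

```python
import numpy as np

def generate_sird(S0=100_000, I0=5, beta=0.14, gamma=0.037, delta=0.005,
                  days=365):
    """Synthetic SIRD trajectories with the rates quoted in the text:
    infection 0.14, recovery 0.037, mortality 0.005."""
    N = float(S0 + I0)
    S, I, R, D = [float(S0)], [float(I0)], [0.0], [0.0]
    for _ in range(days - 1):
        s, i = S[-1], I[-1]
        new_inf = beta * s * i / N            # Eq. (23): flow S -> I
        S.append(s - new_inf)
        I.append(i + new_inf - (gamma + delta) * i)
        R.append(R[-1] + gamma * i)           # Eq. (25)
        D.append(D[-1] + delta * i)           # Eq. (26)
    return tuple(np.array(x) for x in (S, I, R, D))
```

By construction, S + I + R + D stays equal to the total population at every step.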

**Figure 3.** A Mathematica generated graph simulation of an example SIRD model. The green represents Susceptible population, blue represents the recoveries, red is the active infected population and orange is the deceased population.

The traditional behaviour of an SIR model is seen in Figure 3, where the size of the susceptible population decreases as the number of active infections increases. The sizes of the recoveries and deaths then increase until they reach a maximum or stabilize.

PINNs Model of Mathematica Results

The data produced by the aforementioned model were fitted with a PINN of three layers, each with 30 nodes.
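The network itself is small. The paper's implementation uses TensorFlow; for a dependency-free illustration, here is a numpy forward pass of the same shape (three hidden layers of 30 tanh nodes, time in, four compartments out; the initialization scheme and activation are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(layers=(1, 30, 30, 30, 4)):
    """He-style random weights for a 3-hidden-layer, 30-node network
    (input: time t; outputs: S*, I*, R*, D*)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(layers[:-1], layers[1:])]

def forward(params, t):
    """tanh MLP forward pass; `t` has shape (batch, 1)."""
    x = t
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b        # shape (batch, 4)
```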

After training, the predicted susceptible population was compared to the susceptible data that had also been used as the training dataset; the two plots align closely, leaving only a small error. Figure 4 depicts the corresponding comparison for active infections between the training dataset and the values obtained by the trained model. This graph fits well, indicating a low degree of error. The graph in Figure 5 compares the recovered dataset used for training with the values produced after training, again with a strong match and hence small errors. Figure 6 shows the outcome of comparing the actual deceased with the results acquired after training; this graph fits well but is less accurate than the others.

**Figure 4.** The resulting graph of the predicted values of the Infected population and the actual values of the infected population from the Mathematica generated data.

**Figure 5.** The graphs shows the results of the predicted values of the Recovered and the actual values of the recovered from the Mathematica generated data.

**Figure 6.** The resulting graph of the predicted values of the Deceased population and the actual values of the deceased population from the Mathematica generated data.

#### *3.4. PINNs Simulations of Alabama State Data*

We used a dataset from the American state of Alabama to test the SIRD model for further validation. The dataset spanned roughly 300 days, and the simulation used a three-layer neural network with 30 nodes per layer, with 1,000,000 iterations.

Figure 7 shows the outcome of data fitting, comparing the susceptible population in the real data to that obtained by the trained PINNs model; the fit is good, implying a small error. The resulting graph of the data fitting for the infected population is shown in Figure 8; it also fits well, implying a small error. The graph in Figure 9 compares the values produced by the trained model to the original recovery data, likewise with a strong match and small errors. Figure 10 is the last graph, comparing the trained model's output to the actual data of the deceased population; this graph has a good match, but it is less accurate than the others.

**Figure 7.** This graph shows a comparison of the predicted values of susceptible population and the actual data of susceptible population for the State of Alabama.

**Figure 8.** The graph shows a comparison of the predicted infected population values and the actual data infected population for the State of Alabama.

**Figure 9.** This graph shows a comparison of the predicted values of recovered population and the actual data of recovered population for the State of Alabama.

**Figure 10.** The graph shows a comparison of the predicted deceased population values and the actual data deceased population for the State of Alabama.

#### *3.5. PINNs Simulation of a Model Using 170 Data Points*

To put the model to the test and further probe the potential of the PINNs model, a simulation using a smaller dataset was conducted. The dataset was derived from the existing data so as to stretch over the full period while making up only 30% of the available data.

The resulting graph in Figure 11 compares the values produced by the trained model to the actual data of the susceptible population and finds a good match, resulting in a small error. The obtained graph of data fitting for the infected population is shown in Figure 12; it has a good fit, indicating a minimal error. The graph in Figure 13 shows the recovered population results, comparing the trained model results with the real data, with a strong match and small errors. Figure 14 is the resulting graph of the deceased population, comparing the trained model results and the actual data; it has a good fit, but a larger error than the other graphs. The overall result reveals that, while the fitting errors are small, they are larger than in the scenarios where more data were employed.

**Figure 11.** This graph shows a comparison of the predicted values of the susceptible population and the actual data of the susceptible population for 130 data points.

**Figure 12.** The graph shows a comparison of the predicted infected population values and the actual data of the infected population for 130 data points.

**Figure 13.** This graph shows a comparison of the predicted values of the recovered population and the actual data of the recovered population for 130 data points.

**Figure 14.** The graph shows a comparison of the predicted deceased population values and the actual data of the deceased population for 130 data points.

#### *3.6. PINNs Simulation of a Model Using All Available Data Points at the Time (576 Data Points)*

The simulation was carried out with 5,000,000 iterations and four layers, each with 30 nodes. The dataset used was 576 days long, which was the maximum number of days accessible at the time.

In this situation, the best simulation is carried out utilizing all of the data available; the training data account for 70% of the data, while the testing sample accounts for the remaining 30% and is chosen at random. The resulting graph for data fitting of the susceptible population is shown in Figure 15, with a small error and a good fit. The obtained graph of data fitting for the infected population is shown in Figure 16; it has a good fit, indicating a minimal error. The graph in Figure 17 depicts the recovered findings, with a strong fit and few errors. The resulting graph of the deceased is shown in Figure 18; it has a decent fit, although a larger error than the other graphs. The overall result demonstrates that, while the fitting errors are small, they are larger than in circumstances when additional data were employed.

**Figure 15.** This graph shows a comparison of the predicted values of the susceptible population and the actual data of the susceptible population for 530 data points.

**Figure 16.** The graph shows a comparison of the predicted infected population values and the actual data of the infected population for 530 data points.

**Figure 17.** This graph shows a comparison of the predicted values of the recovered population and the actual data of the recovered population for 530 data points.

**Figure 18.** The graph shows a comparison of the predicted deceased population values and the actual data of the deceased population for 530 data points.

#### *3.7. PINNs Simulation Forecasting 30 Days*

Simulations using three layers of 30 nodes per layer were conducted, with 5,000,000 iterations made during the training.

Figure 19 is a result graph of the forecast of the susceptible population for the next 30 days; the numbers appear to be declining, as expected. The resulting graph in Figure 20 provides the anticipated active infections, in a curved shape. The results of the anticipated recoveries are depicted in Figure 21, and the resulting graph of the anticipated deceased population is shown in Figure 22. The prediction also determined the total predicted recoveries *Rend*, the maximum expected infections *Imax*, and the susceptible population expected at the end of the disease's spread *Send*. Using the estimated parameter values, we obtained *Imax* = 72,121, *Send* = 1,094,719, and *Rend* = 70,274.

**Figure 19.** This graph shows a comparison of the predicted values of susceptible population and the actual data of susceptible population for a SIRD model with future predictions.

**Figure 20.** The graph shows a comparison of the predicted infected population values and the actual data infected population for a SIRD model with future predictions.

**Figure 21.** This graph shows a comparison of the predicted values of recovered population and the actual data of recovered population for a SIRD model with future predictions.

**Figure 22.** The graph shows a comparison of the predicted deceased population values and the actual data deceased population for a SIRD model with future predictions.

#### *3.8. Deep Learning Sensitivity Analysis*

The study's Physics Informed Neural Network framework is primarily influenced by four elements: the number of iterations conducted during training, the quantity of training data utilized, the total number of layers in the model, and the number of nodes in each layer. To begin the sensitivity analysis, the default model is set up with the following parameters: number of iterations 200,000, amount of training data 400 points, number of layers 3, and number of nodes per layer 30.

Then, many simulations were run with all model variables set to default and only two parameters changed, recording all mean square errors. Each model setup is subjected to only a single simulation. Because the beginning values for each scenario are set at random, there are certain uncontrollable margins of error; in such instances, a contingency allows only one further simulation trial.
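The experimental protocol just described (defaults everywhere, two parameters swept, one MSE recorded per cell) can be sketched as a simple grid driver; `train_fn` below is a hypothetical stand-in for the full PINN training run:

```python
import itertools

def sensitivity_grid(train_fn, layers_grid=(2, 4, 8),
                     iters_grid=(100_000, 200_000, 400_000, 800_000)):
    """One simulation per (layers, iterations) cell, all other settings
    left at their defaults; returns the recorded mean square errors
    keyed by the varied parameter pair."""
    return {(n_layers, n_iters): train_fn(n_layers, n_iters)
            for n_layers, n_iters in itertools.product(layers_grid, iters_grid)}
```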

The number of layers in the neural network and the number of performed iterations were the parameters varied in Table 1. The number of layers was varied between two, four and eight, while the number of iterations was varied between 100,000, 200,000, 400,000, and 800,000. The simulation results show that, for the same number of layers, increasing the number of iterations reduces the error. When the number of iterations is kept constant, increasing the number of layers reduces the margin of error as well. This means that, as the number of iterations and layers increases, the accuracy improves.

**Table 1.** Results of the mean square error analysis of varying the number of iterations and the number of layers in the simulations.


Table 2 compares the correlated impacts of the number of nodes per layer and the number of layers in the neural network. The numbers of nodes utilized in this test were 10, 20, 40, and 80, and the numbers of layers employed were two, four and eight. The data obtained, also displayed in the table, show that increasing the number of nodes reduces the margin of error for a constant number of layers. Likewise, increasing the number of layers for a given number of nodes reduces the margin of error. This means that the lowest margin of error for the physics informed neural network model established in this study is achieved as the number of layers and nodes per layer are raised.

**Table 2.** Results of the mean square error analysis of varying the number of nodes in a layer and the number of layers in the simulations.


The results of an analysis and simulations studying the correlation of the data size and the number of layers are shown in Table 3. The data sizes were 100, 150, 200, and 350, while the numbers of layers tested were two, four and eight. The results show that increasing the number of layers while keeping the data size constant reduces the margin of error. Varying the data size for a fixed number of layers reveals that the values do not change in a predictable way, but rather shift within a tiny margin. These findings reveal that the data size has little effect on the error, whereas increasing the number of layers does reduce the margin of error.


**Table 3.** Results of the mean square error analysis of varying the size of the dataset and the number of layers in the simulations.

The results of an investigational analysis on the number of nodes per layer and the number of iterations are shown in Table 4. The iteration counts tested were 10,000, 400,000, and 800,000, and the node counts were 10, 20, 40, and 80. The results show that, while the number of nodes remains constant, increasing the number of iterations reduces the error. The findings also reveal that increasing or decreasing the number of nodes has little effect on the error in this experiment. As a result, increasing the number of iterations reduces the error, while the number of nodes has little effect here.

**Table 4.** Results of the mean square error analysis of varying the number of iterations and the number of nodes per layer in the simulations.


The results of simulations undertaken to determine the correlation between the data size used during training and the number of iterations are shown in Table 5. The experiment included data sizes of 100, 150, 200, and 350, as well as 100,000, 400,000, and 800,000 iterations. The results show that increasing the number of iterations while keeping the training data size constant minimizes the margin of error. The results also show that changing the data size while maintaining a constant number of iterations has no noticeable effect. As a result, increasing the number of iterations decreases the error, whereas changing the size of the training data has no effect.

**Table 5.** Results of the mean square error analysis of varying the number of iterations and the data size in the simulations.


The correlation between the data size used during training and the number of nodes per layer is shown in Table 6. The experiment used data sizes of 100, 150, 200, and 350 while varying the number of nodes per layer. The results suggest that increasing the number of nodes while keeping the training data size constant lowers the margin of error. The results also reveal that changing the data size while keeping the number of nodes fixed has no noticeable effect. As a result, increasing the number of nodes lowers the error, while changing the size of the training data has no effect.

**Table 6.** Results of the mean square error analysis of varying the size of the dataset and the number of nodes per layer in the simulations.


#### **4. Discussion**

The results obtained showed high accuracy, especially in data fitting, compared to the mathematical method's accuracy [34]. Another feature of this model is that it anticipates the wave behaviour of the active infected population as it makes forecasts. In comparison to other PINNs techniques, the model achieved roughly similar results, while a convolutional neural network time-varying model [6,8] outperformed it somewhat. The method's major flaw is that it relies on previous data. This reduces its efficiency in forecasting, especially if the anticipated time is extended, because other previously unforeseeable factors, such as new variants, are introduced.

#### **5. Conclusions**

The study's goal was to examine current data in order to determine COVID-19's behavioural dynamics. The Physics Informed Neural Networks framework was used to accomplish this analysis and to estimate the dynamics of COVID-19. The focus of the research was to establish the numbers of susceptible, infected, recovered, and deceased patients in a timely manner. Based on this information, the study also intended to establish the disease's average dissemination, death, and recovery rates. The study sought to take advantage of neural networks' ability to uncover hidden patterns in data, such as prospective rises and falls in the dynamics.

The Physics Informed Neural Networks framework used in the study is an ANN model that trains a neural network by exposing it to both the training data and the governing equations of the underlying problem. The Susceptible–Infected–Recovered–Deceased model was the mathematical model introduced as the governing equations of the PINNs training model. The underlying data for the simulations were collected between March 2020 and September 2021 in the Kingdom of Eswatini. The main advantage of adopting the PINNs model was that it outperformed the other data analysis models even when given minimal quantities of training data, which was important given the disease's newness and the lack of data and knowledge [8].

The generated PINNs model was used to run simulations, and the results were presented in the form of tables and graphs. The study's first dataset was created artificially and had the advantage of being both accurate and covering a disease spread that lasted the entire life of the simulated outbreak. The acquired result had a modest margin of error, with the sole exception of the forecasts for the deceased population, which had a larger margin of error. This demonstrated that the proposed model could produce accurate findings and that it could be used to assess the dynamics over their entire life cycle. Another experiment was carried out with data from the state of Alabama in the United States. With a minimal margin of error, the constructed model was able to produce reliable results. This test was primarily performed to check for overfitting; overfitting occurs when a created ANN can only handle a problem in the context of that specific data. As a result, while the study focused on using data from Eswatini, the results demonstrated that the method can be transferred and used with data from any country, region, continent, or other sample.

Only 170 data points were included in the first simulation results obtained with the main Eswatini dataset. The goal of this simulation was to see whether the model could produce viable findings with limited amounts of data. This would be particularly useful for formulating predictions about the virus's behaviour in circumstances where data were collected sparingly or where large gaps of missing data existed. The results revealed some positive outcomes with few faults. However, they were less accurate than those obtained when bigger datasets were employed. There were also substantially larger margins in the death forecasts, a problem that persisted across all simulations. This contradicts the conclusions of the sensitivity analysis, which found that increasing the size of the training dataset produced only a minimal margin of change.

To build the training and testing datasets, a bigger dataset encompassing all of the data available at the time was also employed. The results were similar to those obtained during the training on the smaller dataset, but with a far higher level of precision. This larger dataset was also used to create forecasts for the future. These predictions were found to be quite accurate, but as the number of days forecasted increased, the accuracy declined. Even while making these future predictions, the determined dynamics demonstrated that they can account for the formation of a wave, which was a critical and extremely unusual aspect of the findings. This suggests that the model was able to adjust the dynamics both upward and downward without any external help, based on the data patterns.

The model was created over a long period of time, with early tests and simulations carried out when few data were available. Although it is not displayed in the findings, it was discovered that the model was unable to forecast the major changes in the wave, namely the crest and trough, during the first wave. However, as the data grew larger, the model began to recognize these patterns, and the results improved, eventually leading to those reported here. The results produced during these simulations and tests were more accurate than those obtained in research that used the mathematical model [35,36] to conduct tests. This is because, while average rates are employed in both cases, the generated PINNs model gradually learned to modify these rates, or dynamics, based on hidden patterns in the data. In comparison to [6,8], the results had similar margins of error, primarily in data fitting, because no future predictions were offered in those studies.

The results show that the model is well adapted to making value predictions within the training period. As a result, the model is highly suited to data fitting in situations where data were not gathered, were incorrectly entered, or were lost. Another advantage of the architecture is that it returns the spreading rate, death rate and recovery rate, which were set up to function as modifying variables for the neural network's PINNs component. However, as the number of projected days grows larger, the accuracy of the forecast decreases. This happens because the projections are based on the spreading rate, mortality rate and recovery rate trends, all of which are based on past data. The wave pattern produced by active infections is also predicted by the model. As a result, while the model is best suited to making short-term predictions, it may also be used to make long-term predictions with a manageable margin of error.

The study had the misfortune of only lasting a few weeks. As a result, the model was unable to make a number of leaps, including forecasting when a potential wave would occur. There have also been some new developments, such as the discovery that some infected individuals can become re-infected. As a result, a model accounting for reinfection would be more accurate than the SIRD model currently in use. Thus, future research into these areas, which are currently understudied, is necessary.

The lack of data and processing power was a major stumbling block during the project. As a result, we advocate developing a model that groups each of the SIRD populations by age for future research, as it has been proven that different age groups are impacted by the disease differently. Each of the metrics or rates can then be defined per age group using the well segmented data. We also suggest testing a model identical to the one employed in the study, but on a larger scale with more processing capacity.

**Author Contributions:** Conceptualization, J.M., S.G. and S.M.; methodology, J.M., S.G. and S.M.; software, S.G.; validation, S.G.; formal analysis, S.G.; investigation, S.G.; resources, data curation, S.G.; writing—original draft preparation, S.G.; writing—review and editing, J.M., S.G. and S.M.; visualization, J.M., S.G. and S.M.; supervision, J.M. and S.M.; project administration, J.M.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Research Foundation of South Africa, Grant number 131604 and the School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, South Africa.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The dataset used in this study can be obtained from: https://datastudio. google.com/reporting/b847a713-0793-40ce-8196-e37d1cc9d720/page/2a0LB (accessed on 1 October 2021).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Maximum Principle and Second-Order Optimality Conditions in Control Problems with Mixed Constraints**

**Aram V. Arutyunov 1,†, Dmitry Yu. Karamzin 2,\*,† and Fernando Lobo Pereira 3,\*,†**


**Abstract:** This article concerns the optimality conditions for a smooth optimal control problem with an endpoint and mixed constraints. Under the normality assumption, which corresponds to the full-rank condition of the associated controllability matrix, a simple proof of the second-order necessary optimality conditions based on the Robinson stability theorem is derived. The main novelty of this approach compared to the known results in this area is that only a local regularity with respect to the mixed constraints, that is, a regularity in an *ε*-tube about the minimizer, is required instead of the conventional stronger global regularity hypothesis. This affects the maximum condition. Therefore, the normal set of Lagrange multipliers in question satisfies the maximum principle, albeit with a modified maximum condition, in which the maximum is taken over a reduced feasible set. In the second part of this work, we address the case of abnormal minimizers, that is, when the full-rank condition of the controllability matrix is not valid. The same type of reduced maximum condition is obtained.

**Keywords:** optimal control; maximum principle; mixed constraints

#### **1. Introduction**

In this article, second-order necessary optimality conditions for an optimal control problem with mixed equality and inequality constraints are investigated. Under the normality condition, which is ensured by the full rank of the controllability matrix, a rather simple proof of the optimality conditions is proposed based on Robinson's theorem on the metric regularity for set-valued mappings. For the case in which the normality condition is violated, the second-order conditions are derived based on the index approach. This means that some reduced cone of Lagrange multipliers is invoked, which is defined by using the index of the quadratic form of the Lagrange function; see, e.g., [1–3].

In [4], the two notions of strong and weak regularity of an admissible trajectory with respect to mixed constraints were considered. Strong regularity means that the constraint qualification, or the so-called Robinson condition, is satisfied for all time points and for all admissible control values. This corresponds to the regularity condition in the classical sense. Weak regularity means that this condition is satisfied merely in some neighborhood of the optimal process. By their nature, these two concepts correspond, respectively, to global and local regularity settings. Under weak regularity, a refined maximum condition of Pontryagin's type has been obtained, in which the maximum is taken over the closure of the regular points of the feasible set, rather than over the entire feasible set. In this article, the results of [4] are carried over to the second-order conditions in the case of a global minimum.

**Citation:** Arutyunov, A.V.; Karamzin, D.Y.; Pereira, F.L. Maximum Principle and Second-Order Optimality Conditions in Control Problems with Mixed Constraints. *Axioms* **2022**, *11*, 40. https://doi.org/10.3390/axioms11020040

Academic Editor: Natália Martins

Received: 28 December 2021. Accepted: 15 January 2022. Published: 20 January 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The literature on optimality conditions for optimal control problems with mixed constraints is extensive. In the context of this research related to the study of mixed constraints, we note the works [5–12]. Regarding the second-order conditions in mixed constrained problems, one may consult, e.g., [3,13,14] and the bibliography cited therein. At the same time, these selective lists of publications are far from exhaustive.

This work is organized as follows. In the next section, the problem formulation is presented, together with main definitions and notation. In Section 3, the issue of normality is discussed. In Section 4, the main result of this work—the normal maximum principle and second-order optimality conditions—is formulated and proved. In Section 5, the abnormal situation is taken into consideration, and the result of the previous section is refined. Section 6 concludes the work with a short summary.

#### **2. Problem Formulation**

Consider the following optimal control problem on the fixed time interval [0, 1]:

$$\begin{cases} \text{Minimize} & \varphi(p) \\ \text{subject to} & \dot{x}(t) = f(x(t), u(t), t) \ \text{for a.a. } t, \\ & e_1(p) \le 0, \ e_2(p) = 0, \\ & u(t) \in \mathcal{U}(x(t), t) \ \text{for a.a. } t, \end{cases} \tag{1}$$

where $p = (x_0, x_1)$, $x_0 = x(0)$, $x_1 = x(1)$, $t \in [0, 1]$, and

$$\mathcal{U}(x, t) := \{ u \in \mathbb{R}^m : r_1(x, u, t) \le 0, \ r_2(x, u, t) = 0 \}.$$

The mappings $\varphi : \mathbb{R}^{2n} \to \mathbb{R}$, $e_i : \mathbb{R}^{2n} \to \mathbb{R}^{k_i}$, $f : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^1 \to \mathbb{R}^n$, $r_i : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^1 \to \mathbb{R}^{q_i}$, $i = 1, 2$, satisfy the following hypothesis.

**Hypothesis 1** (**H1**)**.** *The mappings $\varphi, e_1, e_2, f, r_1, r_2$ are twice continuously differentiable.*

The vector $p = (x_0, x_1)$ is termed the endpoint, and the constraints given by the mappings $e_1, e_2$ are termed the endpoint constraints. The scalar function $\varphi(p)$ defines the functional to be minimized. The mappings $r_1, r_2$ define the mixed constraints, which are imposed on both the state and control variables.

A pair of functions $(x, u)$ is designated a control process if $x(\cdot)$ is absolutely continuous, $u(\cdot)$ is measurable and essentially bounded, and $\dot{x}(t) = f(x(t), u(t), t)$ for a.a. $t \in [0, 1]$. A control process is feasible provided that the endpoint, control, and state constraints are satisfied. A feasible process $(\bar{x}, \bar{u})$ is termed optimal if, for any feasible process $(x, u)$, $\varphi(\bar{p}) \le \varphi(p)$, where $\bar{p} = (\bar{x}(0), \bar{x}(1))$.

This concept of the minimum is known as a *global strong minimum*. The purpose of this work is to derive the second-order necessary optimality conditions for this type of minimum under the normality assumption, that is, to find a set of Lagrange multipliers that simultaneously satisfies the maximum principle and Legendre's condition, and for which $\lambda_0 > 0$. Such a set of multipliers must be unique upon normalization. The abnormal situation is also examined after the normal case.

Consider the reference control process $(\bar{x}, \bar{u})$, which can be optimal, extremal, regular, or normal in what follows. Denote by $r = (r_1, r_2)$ the joint mapping acting into $\mathbb{R}^q$, where $q = q_1 + q_2$. Let $J(x, u, t) := \{ j : r^j(x, u, t) = 0 \}$ be the set of active indices, where the upper index specifies the vector component. Set $J(u, t) := J(\bar{x}(t), u, t)$. Let $\mathcal{U}(\cdot)$ designate the closure of the function $\bar{u}(\cdot)$ w.r.t. the Lebesgue measure; that is, for a given $t \in [0, 1]$, the set $\mathcal{U}(t)$ consists of the essential values of $\bar{u}(\cdot)$ at the point $t$, [8]. Recall that a vector $a$ is said to be an essential value of a function $u(\cdot)$ at a point $\tau$ provided that $\mu(\{ t \in [\tau - \varepsilon, \tau + \varepsilon] : u(t) \in B_\varepsilon(a) \}) > 0$ for all $\varepsilon > 0$, where $B_\varepsilon(a)$ is the closed ball centered at $a$ with radius $\varepsilon$, and $\mu$ designates the Lebesgue measure on $\mathbb{R}$.

The main regularity concept is as follows.

**Definition 1.** *The control process $(\bar{x}, \bar{u})$ is said to be regular w.r.t. the mixed constraints provided that, for all $t \in [0, 1]$ and for all $u \in \mathcal{U}(t)$, the active gradients $(r^j)'_u(\bar{x}(t), u, t)$, $j \in J(u, t)$, are linearly independent.*

The following proposition represents an equivalent reformulation of the introduced regularity concept. For *ε* ≥ 0, define the set

$$J_\varepsilon(x, u, t) := \left\{ j \in \{1, \ldots, q_1\} : r^j(x, u, t) \in [-\varepsilon, 0] \right\} \cup \{q_1 + 1, \ldots, q\},$$

which is subject to the same conventions as the mapping $J(x, u, t)$. It is clear that $J \subseteq J_\varepsilon$, and $J_0 = J$.

**Proposition 1.** *Let the control process $(\bar{x}, \bar{u})$ be regular w.r.t. the mixed constraints. Then, there exists a number $\varepsilon_0 > 0$ such that, for all $t \in [0, 1]$ and for almost all $s \in [0, 1]$ such that $|s - t| \le \varepsilon_0$, the $\varepsilon_0$-active gradients $(r^j)'_u(\bar{x}(s), \bar{u}(s), s)$, $j \in J_{\varepsilon_0}(s)$, are linearly independent. Moreover, the number $\varepsilon_0$ can be chosen such that the modulus of surjectivity for this set of gradients is not lower than $\varepsilon_0$.*

The proof is based on a simple contradiction argument.
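Although the paper's setting is infinite-dimensional, the pointwise regularity test behind Proposition 1 is easy to illustrate numerically: stack the $\varepsilon$-active gradients into a matrix and compute its smallest singular value, which plays the role of the modulus of surjectivity. A minimal sketch (not from the paper; the Jacobian and constraint values below are hypothetical):

```python
import numpy as np

def active_indices(r_vals, q1, eps=0.0):
    """Indices j with r^j(x,u,t) in [-eps, 0] among the q1 inequality
    rows, plus all equality rows (cf. the definition of J_eps)."""
    q = len(r_vals)
    ineq = [j for j in range(q1) if -eps <= r_vals[j] <= 0.0]
    return ineq + list(range(q1, q))

def surjectivity_modulus(grad_u, r_vals, q1, eps=0.0):
    """Smallest singular value of the matrix of eps-active gradients
    (r^j)'_u; it is positive iff the gradients are linearly independent."""
    J = active_indices(r_vals, q1, eps)
    if not J:
        return np.inf          # no active constraints: nothing to check
    G = grad_u[J, :]           # |J| x m matrix of active rows
    return np.linalg.svd(G, compute_uv=False)[-1]

# hypothetical data: q = 3 mixed constraints (q1 = 2 inequalities), m = 3
grad_u = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.5, 0.0],
                   [0.0, 0.0, 2.0]])      # rows are (r^j)'_u
r_vals = np.array([0.0, -0.5, 0.0])      # constraint values at (x, u, t)
print(surjectivity_modulus(grad_u, r_vals, q1=2, eps=0.0))   # 1.0
print(surjectivity_modulus(grad_u, r_vals, q1=2, eps=1.0))   # 0.5
```

Enlarging $\varepsilon$ can only add rows, so the modulus is non-increasing in $\varepsilon$, in line with the proposition's choice of a uniform $\varepsilon_0$.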

The point $u \in \mathcal{U}(x, t)$ is termed regular provided that the gradients $(r^j)'_u(x, u, t)$, $j \in J(x, u, t)$, are positively linearly independent. The subset of all regular points of $\mathcal{U}(x, t)$ is denoted by $\mathcal{U}_R(x, t)$. Denote

$$\Theta(x, t) := \operatorname{clos} \mathcal{U}_R(x, t).$$

It is clear that, for the regular process $(\bar{x}, \bar{u})$, one has $\mathcal{U}(t) \subseteq \Theta(\bar{x}(t), t) \neq \emptyset$ for all $t \in [0, 1]$. Consider the Hamilton–Pontryagin function

$$\mathcal{H}(x, u, \psi, t) := \langle \psi, f(x, u, t) \rangle,$$

and the Lagrangian

$$\mathcal{L}(p, \lambda) := \lambda_0 \varphi(p) + \langle \lambda_1, e_1(p) \rangle + \langle \lambda_2, e_2(p) \rangle.$$

Here, $\psi \in (\mathbb{R}^n)^*$ and $\lambda = (\lambda_0, \lambda_1, \lambda_2) \in (\mathbb{R}^{1+k_1+k_2})^*$ are the conjugate variables.

**Definition 2.** *The control process $(\bar{x}, \bar{u})$ is said to satisfy the maximum principle provided that there exist a vector $\lambda = (\lambda_0, \lambda_1, \lambda_2) \in (\mathbb{R}^{1+k_1+k_2})^*$, where $\lambda_0 \ge 0$ and $\lambda_1 \ge 0$, an absolutely continuous vector-valued function $\psi \in W^{1,\infty}([0, 1]; (\mathbb{R}^n)^*)$, and a measurable, essentially bounded, vector-valued function $\nu \in L^\infty([0, 1]; (\mathbb{R}^q)^*)$ of which the $j$-th component is nonnegative for $j = 1, \ldots, q_1$, such that $\lambda \neq 0$, and, on $[0, 1]$, it holds that:*

$$\dot{\psi}(t) = -\mathcal{H}'_x(\bar{x}(t), \bar{u}(t), \psi(t), t) + \nu(t)\, r'_x(\bar{x}(t), \bar{u}(t), t) \ \text{for a.a. } t, \tag{2}$$

$$\psi(a) = (-1)^a\, \mathcal{L}'_{x_a}(\bar{p}, \lambda) \ \text{for } a = 0, 1, \tag{3}$$

$$\max_{u \in \Theta(\bar{x}(t), t)} \mathcal{H}(\bar{x}(t), u, \psi(t), t) = \mathcal{H}(\bar{x}(t), \bar{u}(t), \psi(t), t) \ \text{for a.a. } t, \tag{4}$$

$$\mathcal{H}'_u(\bar{x}(t), \bar{u}(t), \psi(t), t) - \nu(t)\, r'_u(\bar{x}(t), \bar{u}(t), t) = 0 \ \text{for a.a. } t, \tag{5}$$

$$\langle \lambda_1, e_1(\bar{p}) \rangle = 0, \ \text{and } \int_0^1 \langle \nu(t), r(\bar{x}(t), \bar{u}(t), t) \rangle\, dt = 0. \tag{6}$$

Here, Condition (2) is the co-state equation, that is, the differential equation for the conjugate variable $\psi$. Equalities (3) are the transversality conditions. Equality (4) is the maximum condition. Equality (5) is the so-called Euler–Lagrange equation. Equalities (6) are known as the complementary slackness conditions. Furthermore, $(\lambda, \psi, \nu)$ are known as the Lagrange multipliers.

Under the regularity condition given in Definition 1, the multipliers $\psi$ and $\nu$ are uniquely defined by the vector $\lambda$, where $(\lambda, \psi, \nu)$ is the set of Lagrange multipliers corresponding to $(\bar{x}, \bar{u})$ in view of the maximum principle. This assertion follows simply from the Euler–Lagrange equation. Then, denote by $\Lambda = \Lambda(\bar{x}, \bar{u})$ the set of vectors $\lambda \in (\mathbb{R}^{1+k_1+k_2})^*$ for which there exist $(\psi, \nu)$ such that the corresponding set of Lagrange multipliers $(\lambda, \psi, \nu)$ generated by $\lambda$ satisfies the maximum principle.

#### **3. Normality Condition**

Let us introduce the notion of normality. This notion is based on the concept of linearization of the control problem and the corresponding variational differential system. Consider the reference control process $(\bar{x}, \bar{u})$, and a pair $(\delta x_0, \delta u) \in \mathcal{X} := \mathbb{R}^n \times L^2([0, 1]; \mathbb{R}^m)$. Denote by $\delta x(\cdot)$ the solution to the variational differential equation on the time interval $[0, 1]$ which corresponds to $(\delta x_0, \delta u)$, that is,

$$\delta\dot{x}(t) = f'_x(\bar{x}(t), \bar{u}(t), t)\, \delta x(t) + f'_u(\bar{x}(t), \bar{u}(t), t)\, \delta u(t), \tag{7}$$

where $\delta x(0) = \delta x_0$. Such a solution exists on the entire time interval $[0, 1]$ and, as soon as $\delta u$ is an $L^2$-function, one finds that $\delta x \in W^{1,2}([0, 1]; \mathbb{R}^n)$.
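For intuition, the sketch below (illustrative only; the dynamics, reference control, and variation are hypothetical choices, and plain Euler steps stand in for an ODE solver) integrates the variational equation (7) along a reference trajectory and checks that its solution matches the finite-difference quotient of the perturbed flow:

```python
import numpy as np

# hypothetical toy dynamics f(x, u) = (x2, -sin(x1) + u); n = 2, m = 1
def f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u])

def integrate(x0, u_of_t, N=2000):
    """Euler integration of x' = f(x, u(t)) on [0, 1]; returns all states."""
    h = 1.0 / N
    x, xs = np.array(x0, dtype=float), []
    xs.append(x.copy())
    for k in range(N):
        x = x + h * f(x, u_of_t(k * h))
        xs.append(x.copy())
    return np.array(xs)

# reference process (xbar, ubar) and a variation (dx0, du)
xbar0, ubar = np.array([0.3, 0.0]), (lambda t: 0.5)
dx0, du = np.array([1.0, 0.0]), (lambda t: np.cos(2.0 * np.pi * t))

# equation (7): (dx)' = f'_x dx + f'_u du along the reference trajectory
N = 2000
h = 1.0 / N
xs = integrate(xbar0, ubar, N)
z = dx0.copy()
for k in range(N):
    fx = np.array([[0.0, 1.0], [-np.cos(xs[k][0]), 0.0]])   # f'_x at xbar(t)
    fu = np.array([0.0, 1.0])                               # f'_u
    z = z + h * (fx @ z + fu * du(k * h))

# compare with the finite-difference quotient (x_tau(1) - xbar(1)) / tau
tau = 1e-4
xs_tau = integrate(xbar0 + tau * dx0, lambda t: ubar(t) + tau * du(t), N)
fd = (xs_tau[-1] - xs[-1]) / tau
print(np.linalg.norm(z - fd))   # small: (7) linearizes the flow
```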

In what follows, it is not restrictive to set $e_1(\bar{p}) = 0$. Thus, all the endpoint constraints of the inequality type are assumed to be active. Consider the two following subspaces in $\mathcal{X}$:

$$\begin{aligned} \mathcal{N}_e &:= \left\{ (\delta x_0, \delta u) \in \mathcal{X} : e'(\bar{p})\, \delta p = 0 \right\}, \\ \mathcal{N}_r &:= \left\{ (\delta x_0, \delta u) \in \mathcal{X} : D(t) \left[ r'_x(\bar{x}(t), \bar{u}(t), t)\, \delta x(t) + r'_u(\bar{x}(t), \bar{u}(t), t)\, \delta u(t) \right] = 0 \right\}. \end{aligned}$$

Here, $e$ is the joint mapping of $e_1, e_2$; $\delta p = (\delta x_0, \delta x_1)$, where $\delta x_1 = \delta x(1)$; and $D(t)$ is the diagonal $q \times q$ matrix which has $1$ in the position $(j, j)$ iff $j \in J(t)$ and $0$ otherwise.

Consider the matrix

$$R(t) = r'_u(\bar{x}(t), \bar{u}(t), t)^*\, D(t)\, r'_u(\bar{x}(t), \bar{u}(t), t).$$

Set

$$M(t) := R(t)^+\, r'_u(\bar{x}(t), \bar{u}(t), t)^*\, D(t)\, r'_x(\bar{x}(t), \bar{u}(t), t).$$

Here, $A^+$ stands for the generalized inverse [15]. The generalized inverse $R(t)^+$ can be computed as follows. Let $T(t)$ be a non-singular orthogonal linear transform which maps the subspace $\ker R(t)^\perp = \operatorname{im} R(t)$ onto the subspace of $\mathbb{R}^m$ with the first $m - q(t)$ coordinates vanishing, where $q(t) = |J(t)|$ is the number of active indices. Then, $R(t)^+ = T^{-1}(t)\left[ T(t) R(t) T^{-1}(t) \right]^{-1} T(t)$, where the inverse of a block matrix of the form $\begin{pmatrix} 0 & 0 \\ 0 & A \end{pmatrix}$ is understood as $\begin{pmatrix} 0 & 0 \\ 0 & A^{-1} \end{pmatrix}$.
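For the symmetric positive semidefinite matrix $R(t)$, this construction inverts $R(t)$ on its image and annihilates its kernel, which coincides with the Moore–Penrose pseudo-inverse. A short numerical check (illustrative; the $2 \times 3$ Jacobian below is hypothetical), with an eigendecomposition playing the role of the transform $T(t)$:

```python
import numpy as np

# hypothetical Jacobian r'_u: two active constraints, m = 3 controls
ru = np.array([[1.0, 2.0, 0.0],
               [0.0, 1.0, 0.0]])
D = np.diag([1.0, 1.0])           # both rows active
R = ru.T @ D @ ru                 # 3 x 3 symmetric, rank 2

# invert R on im R = (ker R)^perp and set it to zero on ker R
w, V = np.linalg.eigh(R)          # spectral analogue of the T-transform
w_plus = np.array([1.0 / x if x > 1e-12 else 0.0 for x in w])
R_plus = V @ np.diag(w_plus) @ V.T

print(np.allclose(R_plus, np.linalg.pinv(R)))   # True: Moore-Penrose
print(np.allclose(R @ R_plus @ R, R))           # True: defining identity
```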

Consider the matrix differential system

$$\dot{\Phi}(t) = f'_x(\bar{x}(t), \bar{u}(t), t)\, \Phi(t) - f'_u(\bar{x}(t), \bar{u}(t), t)\, M(t)\, \Phi(t), \tag{8}$$

where $\Phi(0) = I$. Let $\Phi(t)$ be the solution to (8), and let $P(t)$ be the matrix of orthogonal projection onto $\ker R(t)$.

It is clear that, by virtue of the construction, any element $(\delta x_0, \delta u) \in \mathcal{N}_r$ can be represented as

$$(\delta x_0, \delta u) = (\delta x_0, P \delta u + \mathcal{V}[\delta x_0, \delta u]), \tag{9}$$

where $\mathcal{V}[\delta x_0, \delta u](t) = -M(t)\, \delta x(t)$, whereas

$$\delta x(t) = \Phi(t) \Big( \delta x_0 + \int_0^t \Phi^{-1}(s)\, f'_u(\bar{x}(s), \bar{u}(s), s)\, P(s)\, \delta u(s)\, ds \Big). \tag{10}$$

Conversely, any $\delta x_0 \in \mathbb{R}^n$ and $\delta v \in L^2([0, 1]; \mathbb{R}^m)$ with $\delta v(t) \in \ker R(t)$ a.e. yield an element of $\mathcal{N}_r$ as $(\delta x_0, \delta v + \mathcal{V}[\delta x_0, \delta v]) \in \mathcal{N}_r$. Therefore, there is a one-to-one correspondence between $\mathcal{N}_r$ and the space of the above-specified elements $(\delta x_0, \delta v)$. At the same time, the formula for the solution $\delta x$ in $\mathcal{N}_r$ is given by (10).

Let us proceed with the construction of the controllability matrix. Define the $\mathbb{R}^{k \times n}$-matrix $A$ as

$$A = e'_{x_0}(\bar{p}) + e'_{x_1}(\bar{p})\, \Phi(1),$$

with the $\mathbb{R}^{k \times m}$-matrix $B(t)$ given as

$$B(t) = e'_{x_1}(\bar{p})\, \Phi(1)\, \Phi^{-1}(t)\, f'_u(\bar{x}(t), \bar{u}(t), t).$$

Now, the controllability matrix $Q$ is introduced as the $\mathbb{R}^{k \times k}$-matrix:

$$Q = A A^* + \int_0^1 B(t)\, P(t)\, B^*(t)\, dt.$$

**Definition 3.** *The regular control process $(\bar{x}, \bar{u})$ is said to be normal provided that $Q > 0$, or equivalently, $\operatorname{rank} Q = k$.*
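The pipeline $\Phi \to A, B \to Q$ can be traced end-to-end on a toy problem. The sketch below is illustrative only: it assumes a double integrator with no active mixed constraints (so $M = 0$ and $P = I$) and the hypothetical endpoint map $e(p) = x(1)$, giving $k = 2$; equation (8) is integrated by a Runge–Kutta scheme and the integral in $Q$ by the trapezoidal rule:

```python
import numpy as np

# toy double integrator on [0, 1]: x1' = x2, x2' = u (hypothetical data)
fx = np.array([[0.0, 1.0], [0.0, 0.0]])   # f'_x
fu = np.array([[0.0], [1.0]])             # f'_u

# no active mixed constraints: M = 0, P = I, so (8) reads Phi' = f'_x Phi
N = 1000
h = 1.0 / N
Phi = [np.eye(2)]
for _ in range(N):                         # classical RK4 steps
    P0 = Phi[-1]
    k1 = fx @ P0
    k2 = fx @ (P0 + 0.5 * h * k1)
    k3 = fx @ (P0 + 0.5 * h * k2)
    k4 = fx @ (P0 + h * k3)
    Phi.append(P0 + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4))

# endpoint map e(p) = x(1): e'_{x0} = 0, e'_{x1} = I, hence A = Phi(1)
A = Phi[-1]
# B(t) = e'_{x1} Phi(1) Phi(t)^{-1} f'_u, and Q = A A^* + int_0^1 B B^* dt
Bs = [A @ np.linalg.inv(Phi[i]) @ fu for i in range(N + 1)]
wts = np.full(N + 1, h)
wts[0] = wts[-1] = h / 2.0                 # trapezoidal weights
Q = A @ A.T + sum(w * (B @ B.T) for w, B in zip(wts, Bs))

print(np.linalg.matrix_rank(Q))            # 2: rank Q = k, process normal
```

Here $Q \approx \begin{pmatrix} 7/3 & 3/2 \\ 3/2 & 2 \end{pmatrix}$, so $\operatorname{rank} Q = k = 2$ and this toy process is normal in the sense of Definition 3.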

#### **4. Main Result**

In this section, the second-order necessary optimality conditions are addressed. Consider the two cones

$$C_e := \left\{ y \in \mathbb{R}^k : y^j \le 0 \ \text{for } j = 1, \ldots, k_1, \ \text{and } y^j = 0 \ \text{for } j = k_1 + 1, \ldots, k \right\},$$

$$C_r := \left\{ y \in \mathbb{R}^q : y^j \le 0 \ \text{for } j = 1, \ldots, q_1, \ \text{and } y^j = 0 \ \text{for } j = q_1 + 1, \ldots, q \right\}.$$

Define the cone

$$\mathcal{K} := \left\{ (\delta x_0, \delta u) \in \mathcal{X} : e'(\bar{p})\, \delta p \in C_e, \ D(t) \left[ r'_x(\bar{x}(t), \bar{u}(t), t)\, \delta x(t) + r'_u(\bar{x}(t), \bar{u}(t), t)\, \delta u(t) \right] \in C_r \right\}.$$

On the space $\mathcal{X}$, consider the quadratic form

$$\begin{split} \Omega_\lambda[(\delta x_0, \delta u)]^2 &= \mathcal{L}''_{pp}(\bar{p}, \lambda)[\delta p]^2 - \int_0^1 \mathcal{H}''_{ww}(\bar{x}(t), \bar{u}(t), \psi(t), t)[\delta w(t)]^2\, dt \\ &\quad + \int_0^1 \langle \nu(t), r''_{ww}(\bar{x}(t), \bar{u}(t), t)[\delta w(t)]^2 \rangle\, dt. \end{split}$$

Here and in what follows, for convenience of notation, $w = (x, u)$ and $\delta w(t) = (\delta x(t), \delta u(t))$. The main result of this section is the following theorem.

**Theorem 1.** *Let $(\bar{x}, \bar{u})$ be an optimal control process in Problem (1). Suppose that this process is normal.*

*Then, $\Lambda \neq \emptyset$. Moreover, $\dim \operatorname{span}(\Lambda) = 1$, and, for $\lambda = (\lambda_0, \lambda_1, \lambda_2) \in \Lambda$, it holds that $\lambda_0 > 0$, and*

$$\Omega_\lambda[(\delta x_0, \delta u)]^2 \ge 0 \quad \forall\, (\delta x_0, \delta u) \in \mathcal{K}. \tag{11}$$

The proof is preceded by the following auxiliary assertion.

**Lemma 1.** *Consider linear bounded operators $A$ and $A_i$, $i = 1, 2, \ldots$, acting in a given Hilbert space $X$, such that $A_i \to A$ pointwise. Assume that the spaces $\operatorname{im} A_i$ and $\operatorname{im} A_i^*$ are closed and that $\operatorname{im} A \subseteq \operatorname{im} A_i$ for all $i$. Assume also that the sequence of norms $\|(A_i A_i^*)^{-1}\|_{\operatorname{im} A_i}$ is uniformly bounded. Let $C \subseteq X$ be a closed and convex set. Then,*

$$A^{-1}(C) = \operatorname*{Limsup}_{i \to \infty} A_i^{-1}(C). \tag{12}$$

**Proof.** Let $\xi_i \in A_i^{-1}(C)$ and $\xi_i \to \xi_0$ as $i \to \infty$. By virtue of the uniform boundedness principle, $A_i \xi_i \to A \xi_0$. Thus, $A \xi_0 \in C$, and the embedding '⊇' is proven.

Let us confirm the inverse embedding. Given $\xi_0 \in A^{-1}(C)$, it is necessary to indicate a sequence of elements $\xi_i \in A_i^{-1}(C)$ such that $\xi_i \to \xi_0$.

Consider the extremal problem

$$\|\xi - \xi_0\|^2 \to \min, \quad A_i \xi = A \xi_0.$$

Denote the solution to this problem by $\xi_i$. The solution exists since $\operatorname{im} A \subseteq \operatorname{im} A_i$ and since the quadratic functional is weakly lower semi-continuous, whereas the closed convex set $A_i^{-1}(A \xi_0)$ is weakly closed. Since the image of $A_i$ is closed, one can apply the Lagrange multiplier rule as follows. There exists a vector $\lambda_i \in \operatorname{im} A_i$ such that

$$\xi_i - \xi_0 + A_i^* \lambda_i = 0.$$

Applying $A_i$ and taking into account that $A_i \xi_i = A \xi_0$, the multiplier is expressed as follows:

$$\lambda_i = (A_i A_i^*)^{-1} (A_i \xi_0 - A_i \xi_i).$$

Therefore,

$$\xi_i = \left( I - A_i^* (A_i A_i^*)^{-1} (A_i - A) \right) \xi_0.$$

Note that $\|A_i^* (A_i A_i^*)^{-1}\| \le \|A_i\| \cdot \|(A_i A_i^*)^{-1}\| \le \text{const}$ by the assumption of the lemma. However, $A_i \xi_0 \to A \xi_0$, and thus, $\xi_i \to \xi_0$.
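The convergence mechanism in this proof can be observed directly in finite dimensions, where $A_i \to A$ in norm and surjectivity makes $A_i A_i^*$ invertible. A purely illustrative sketch, with random matrices standing in for the operators:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))       # a surjective operator on R^6
xi0 = rng.standard_normal(6)

errs = []
for i in [1, 10, 100, 1000]:
    Ai = A + rng.standard_normal((3, 6)) / i       # A_i -> A
    # lambda_i = (A_i A_i^*)^{-1}(A_i xi0 - A xi0); xi_i = xi0 - A_i^* lambda_i
    lam = np.linalg.solve(Ai @ Ai.T, Ai @ xi0 - A @ xi0)
    xi = xi0 - Ai.T @ lam
    assert np.allclose(Ai @ xi, A @ xi0)           # constraint A_i xi = A xi0
    errs.append(np.linalg.norm(xi - xi0))

print(errs[-1] < errs[0])     # True: xi_i -> xi0 as A_i -> A
```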

**Proof of Theorem 1.** By virtue of Theorem 3.5 in [4] and the regularity of the process $(\bar{x}, \bar{u})$, there exists a set of multipliers $(\lambda, \psi, \nu)$ satisfying the maximum principle, such that $\lambda \neq 0$. Firstly, we prove that the given $\lambda$ satisfies the following Lagrange multiplier rule:

$$\lambda_0 \left( \langle \varphi'_{x_0}(\bar{p}), \delta x_0 \rangle + \langle \varphi'_{x_1}(\bar{p}), \delta x_1 \rangle \right) + (\lambda_1, \lambda_2) \left( e'_{x_0}(\bar{p})\, \delta x_0 + e'_{x_1}(\bar{p})\, \delta x_1 \right) = 0 \tag{13}$$

for all $(\delta x_0, \delta u) \in \mathcal{N}_r$.

Due to the maximum principle, one has

$$\begin{split}
\langle \psi(1), \delta x(1) \rangle &= \langle \psi(0), \delta x(0) \rangle + \int_0^1 \Big( \langle \dot{\psi}(t), \delta x(t) \rangle + \langle \psi(t), \delta\dot{x}(t) \rangle \Big)\, dt \\
&= \langle \psi(0), \delta x(0) \rangle + \int_0^1 \Big( \langle -\psi(t) f'_x(\bar{x}(t), \bar{u}(t), t) + \nu(t) r'_x(\bar{x}(t), \bar{u}(t), t), \delta x(t) \rangle \\
&\qquad + \langle \psi(t), f'_x(\bar{x}(t), \bar{u}(t), t)\, \delta x(t) + f'_u(\bar{x}(t), \bar{u}(t), t)\, \delta u(t) \rangle \Big)\, dt \\
&= \langle \mathcal{L}'_{x_0}(\bar{p}, \lambda), \delta x_0 \rangle + \int_0^1 \Big( \nu(t) r'_x(\bar{x}(t), \bar{u}(t), t)\, \delta x(t) + \psi(t) f'_u(\bar{x}(t), \bar{u}(t), t)\, \delta u(t) \Big)\, dt.
\end{split}$$

Let us add and subtract the term $\nu(t) r'_u(\bar{x}(t), \bar{u}(t), t)\, \delta u(t)$ under the integral. Then, by virtue of (5), and also by taking into account that

$$\nu(t) \left( r'_x(t)\, \delta x(t) + r'_u(t)\, \delta u(t) \right) = 0 \ \text{for a.a. } t$$

when $(\delta x_0, \delta u) \in \mathcal{N}_r$, we derive $\langle \psi(1), \delta x(1) \rangle = \langle \mathcal{L}'_{x_0}(\bar{p}, \lambda), \delta x_0 \rangle$. Then, from (3),

$$\langle -\mathcal{L}'_{x_1}(\bar{p}, \lambda), \delta x_1 \rangle = \langle \psi(1), \delta x(1) \rangle = \langle \mathcal{L}'_{x_0}(\bar{p}, \lambda), \delta x_0 \rangle.$$

Hence,

$$\langle \mathcal{L}'_{x_0}(\bar{p}, \lambda), \delta x_0 \rangle + \langle \mathcal{L}'_{x_1}(\bar{p}, \lambda), \delta x_1 \rangle = 0,$$

and therefore, Condition (13) is proven.

Consider the endpoint constraint operator

$$\mathcal{E}(\delta x_0, \delta u) := e'(\bar{p})\, \delta p,$$

acting from $\mathcal{X}$ to $\mathbb{R}^k$. It is a straightforward task to derive that the condition $Q > 0$ implies that $\mathcal{E}(\mathcal{N}_r) = \mathbb{R}^k$. Then, using $\lambda \neq 0$, Equation (13) yields that $\lambda_0 > 0$. Moreover, the multiplier $\lambda$ is unique upon normalization.

Let us proceed to the proof of the second-order condition (11). Take a number $\varepsilon > 0$. Let $D_\varepsilon(t)$ designate the diagonal $q \times q$ matrix defined as $D(t)$ but, now, with the set $J(t)$ replaced by $J_\varepsilon(t)$. Define the cone $\mathcal{K}_\varepsilon$ in the same way as $\mathcal{K}$, but with the matrix $D(t)$ replaced by $D_\varepsilon(t)$. It is clear that $\mathcal{K}_\varepsilon \subseteq \mathcal{K}$ for all $\varepsilon > 0$. Firstly, we prove (11) for the reduced cone $\mathcal{K}_\varepsilon$. Consider the space $\mathcal{X}_\infty = \mathbb{R}^n \times L^\infty([0, 1]; \mathbb{R}^m)$ as the $L^\infty$-analogue of $\mathcal{X}$, and the following image space $\mathcal{Y}_\varepsilon := \prod_{j=1}^q L^\infty(T^\varepsilon_j; \mathbb{R}) \times \mathbb{R}^k$, where $T^\varepsilon_j = \{ t \in [0, 1] : j \in J_\varepsilon(t) \}$, $j = 1, \ldots, q$.

Define the mapping $\mathcal{F}_\varepsilon : \mathcal{X}_\infty \to \mathcal{Y}_\varepsilon$ as follows:

$$\mathcal{F}_\varepsilon(x_0, u(\cdot)) = \left( r^1(x(\cdot), u(\cdot), \cdot)|_{T^\varepsilon_1}, \ldots, r^q(x(\cdot), u(\cdot), \cdot)|_{T^\varepsilon_q}, e^1(p), e^2(p), \ldots, e^k(p) \right).$$

Set $\bar{y}_\varepsilon = \mathcal{F}_\varepsilon(\bar{x}_0, \bar{u}(\cdot))$. Let $\mathcal{C}_\varepsilon \subset \mathcal{Y}_\varepsilon$ be the closed cone such that, for all pairs $(\xi(\cdot), \gamma) \in \mathcal{C}_\varepsilon$, one has $\xi(t) \in C_r$ for a.a. $t \in [0, 1]$ and $\gamma \in C_e$. Here, $\xi^j(t) = 0$ when $t \notin T^\varepsilon_j$.

Consider the inclusion

$$\mathcal{F}_\varepsilon(x_0, u(\cdot)) \in \bar{y}_\varepsilon + \mathcal{C}_\varepsilon, \quad (x_0, u(\cdot)) \in \mathcal{X}_\infty. \tag{14}$$

The Fréchet derivative $\mathcal{F}'_\varepsilon(\bar{x}_0, \bar{u}(\cdot))$ is the linear mapping $(\mathcal{A}_\varepsilon, \mathcal{B}) : \mathcal{X}_\infty \to \mathcal{Y}_\varepsilon$, where $\mathcal{A}_\varepsilon = (\mathcal{A}^1_\varepsilon, \ldots, \mathcal{A}^q_\varepsilon)$,

$$\mathcal{A}^j_\varepsilon(\delta x_0, \delta u) := (r^j)'_x(\bar{x}(t), \bar{u}(t), t)\, \delta x(t) + (r^j)'_u(\bar{x}(t), \bar{u}(t), t)\, \delta u(t), \quad t \in T^\varepsilon_j,$$

and $\mathcal{B}(\delta x_0, \delta u) := e'(\bar{p})\, \delta p$. The proof of this fact involves a standard argument.

Firstly, consider this derivative as the extended linear mapping acting from $\mathcal{X}$ to $\prod_{j=1}^q L^2(T^\varepsilon_j; \mathbb{R}) \times \mathbb{R}^k$, that is, in Hilbert spaces. Let us prove its surjectivity. Since the linear mapping $\mathcal{A}_\varepsilon$ is surjective due to the regularity w.r.t. the mixed and state constraints (this is simple to ensure by solving the corresponding Volterra equation and using Proposition 1), it is sufficient to show that the linear mapping $\mathcal{B}$ is surjective on $\ker \mathcal{A}_\varepsilon$. Let $Q_\varepsilon$ be the matrix constructed as $Q$, however, with the matrix $D(t)$ replaced by $D_\varepsilon(t)$. It is clear that $Q_\varepsilon \to Q$ as $\varepsilon \to 0$. Therefore, one has that $Q_\varepsilon > 0$ for all sufficiently small $\varepsilon$. At the same time, this condition implies that $\mathcal{E}(\ker \mathcal{A}_\varepsilon) = \mathbb{R}^k$. Therefore, it is simple to conclude that $(\mathcal{A}_\varepsilon, \mathcal{B})$ is a surjective linear mapping for all sufficiently small $\varepsilon$.

The surjectivity of $(\mathcal{A}_\varepsilon, \mathcal{B})$ as the linear mapping from $\mathcal{X}_\infty$ to $\mathcal{Y}_\varepsilon$ results from the following simple argument. Firstly, notice that, in the space $\mathcal{X}$, one has the relation

$$\operatorname{clos}(\ker \mathcal{A}_\varepsilon \cap \mathcal{X}_\infty) = \ker \mathcal{A}_\varepsilon, \tag{15}$$

which is clear due to Formulas (9) and (10), as these still hold when $D(t)$ is replaced by $D_\varepsilon(t)$ for a sufficiently small $\varepsilon$. Then, simply, $\mathcal{N}_r = \mathcal{N}_r(\varepsilon) = \ker \mathcal{A}_\varepsilon$. At the same time, the linear operator $\mathcal{A}_\varepsilon$ is surjective as the mapping from $\mathcal{X}_\infty$ to $\prod_{j=1}^q L^\infty(T^\varepsilon_j; \mathbb{R})$ by virtue of the same arguments involving the solution to a Volterra equation. However, the image of $\mathcal{B}$ is finite-dimensional, whereas, as has already been confirmed, $\mathcal{B}$ is surjective on the space $\ker \mathcal{A}_\varepsilon$. Therefore, by virtue of (15), one finds that $\mathcal{B}$ is surjective on the subspace $\ker \mathcal{A}_\varepsilon \cap \mathcal{X}_\infty$. Thus, the derivative $\mathcal{F}'_\varepsilon(\bar{x}_0, \bar{u}(\cdot))$ is surjective and, thereby, $(\bar{x}_0, \bar{u}(\cdot))$ is a normal point for the mapping $\mathcal{F}_\varepsilon$.

The Robinson theorem (see Theorem 1 in [16]) asserts the existence of a neighbourhood $O_\varepsilon$ of the point $((\bar{x}_0, \bar{u}(\cdot)), \bar{y}_\varepsilon) \in \mathcal{X}_\infty \times \mathcal{Y}_\varepsilon$ such that

$$\operatorname{dist}\left( (x_0, u(\cdot)), \mathcal{F}_\varepsilon^{-1}(y + \mathcal{C}_\varepsilon) \right) \le \varepsilon\, \operatorname{dist}(\mathcal{F}_\varepsilon(x_0, u(\cdot)), y + \mathcal{C}_\varepsilon) \quad \forall\, ((x_0, u(\cdot)), y) \in O_\varepsilon. \tag{16}$$

Consider an arbitrary element $h = (\delta x_0, \delta u) \in \mathcal{K}_\varepsilon \cap \mathcal{X}_\infty$, and a number $\tau > 0$. By substituting in (16) the value $\xi(\tau) = (\bar{x}_0, \bar{u}(\cdot)) + \tau h$ for $(x_0, u(\cdot))$, and $\bar{y}_\varepsilon$ for $y$, one obtains

$$\operatorname{dist}\left( \xi(\tau), \mathcal{F}_\varepsilon^{-1}(\bar{y}_\varepsilon + \mathcal{C}_\varepsilon) \right) \le o(\tau),$$

and, thus, the set $\mathcal{K}_\varepsilon \cap \mathcal{X}_\infty$ is tangent to the solution set $\mathcal{F}_\varepsilon^{-1}(\bar{y}_\varepsilon + \mathcal{C}_\varepsilon)$; see Corollary 2 in [16]. This means that, for every small $\tau$, there exists a vector $\omega(\tau) = (a(\tau), v(\cdot; \tau)) \in \mathcal{X}_\infty$ such that $\frac{\omega(\tau)}{\tau} \to 0$ as $\tau \to 0$, whereas $\mathcal{F}_\varepsilon(\xi(\tau) + \omega(\tau)) \in \bar{y}_\varepsilon + \mathcal{C}_\varepsilon$.

Let us set $x_0(\tau) := \bar{x}_0 + \tau\, \delta x_0 + a(\tau)$, and $u(t; \tau) := \bar{u}(t) + \tau\, \delta u(t) + v(t; \tau)$. Then, one may verify that the control pair $(x_0(\tau), u(\cdot; \tau)) \in \mathcal{X}_\infty$ is admissible for Problem (1) for all sufficiently small $\tau$ due to the construction of the mapping $\mathcal{F}_\varepsilon$. At this point, it is essential that $\varepsilon > 0$. Let $x(\cdot; \tau)$ be the trajectory corresponding to the pair $(x_0(\tau), u(\cdot; \tau))$, and $p(\tau) = (x(0; \tau), x(1; \tau))$. Take the multiplier $\lambda = (1, \lambda_1, \lambda_2) \in \Lambda(\bar{x}, \bar{u})$, and the corresponding multiplier $\nu$ entailed by $\lambda$. Without loss of generality, let $\varphi(\bar{p}) = 0$.

Consider the inequality

$$\mathcal{L}(p(\tau), \lambda) + \int_0^1 \langle \nu(t), r(x(t; \tau), u(t; \tau), t) \rangle\, dt \ge 0, \tag{17}$$

which results from the condition of minimum and from the fact that the control process (*x*(·; *τ*), *u*(·; *τ*)) is admissible.

Consider the second-order variational system

$$\begin{cases} \delta\dot{x}_{(1)}(t; \tau) = f'_x(\bar{x}(t), \bar{u}(t), t)\, \delta x_{(1)}(t; \tau) + f'_u(\bar{x}(t), \bar{u}(t), t)\, \delta u(t; \tau), \\ \delta\dot{x}_{(2)}(t; \tau) = f'_x(\bar{x}(t), \bar{u}(t), t)\, \delta x_{(2)}(t; \tau) + \frac{1}{2} f''_{ww}(\bar{x}(t), \bar{u}(t), t)\left[ (\delta x_{(1)}(t; \tau), \delta u(t; \tau)) \right]^2, \\ \delta x_{(1)}(0; \tau) = \delta x_0(\tau), \ \delta x_{(2)}(0; \tau) = 0. \end{cases}$$

$$\text{Here, } \delta x_0(\tau) = \delta x_0 + \frac{a(\tau)}{\tau}, \ \text{ and } \ \delta u(t;\tau) = \delta u(t) + \frac{v(t;\tau)}{\tau}. \ \text{ It is clear that}$$

$$x(t;\tau) = \bar{x}(t) + \tau\, \delta x_{(1)}(t;\tau) + \tau^2 \delta x_{(2)}(t;\tau) + o(\tau^2).$$

Therefore, expanding (17) in a Taylor series, one has

$$\begin{split} o(\tau^2) \le{} & \left\langle \mathcal{L}_{x_0}'(\bar{p},\lambda), \tau\, \delta x_0(\tau) \right\rangle + \left\langle \mathcal{L}_{x_1}'(\bar{p},\lambda), \tau\, \delta x_{(1)}(1;\tau) + \tau^2 \delta x_{(2)}(1;\tau) \right\rangle \\ & + \frac{\tau^2}{2} \mathcal{L}_{pp}''(\bar{p},\lambda) \left[ (\delta x_0(\tau), \delta x_{(1)}(1;\tau)) \right]^2 \\ & + \int_0^1 \Big\langle \nu(t),\, r_x'(\bar{x}(t), \bar{u}(t), t) \left[ \tau\, \delta x_{(1)}(t;\tau) + \tau^2 \delta x_{(2)}(t;\tau) \right] + \tau\, r_u'(\bar{x}(t), \bar{u}(t), t)\, \delta u(t;\tau) \\ & \qquad + \frac{\tau^2}{2} r_{ww}''(\bar{x}(t), \bar{u}(t), t) \left[ (\delta x_{(1)}(t;\tau), \delta u(t;\tau)) \right]^2 \Big\rangle\, dt. \end{split}$$

Using the adjoint equation, one has

$$\begin{aligned} \frac{d}{dt} \left\langle \psi(t), \delta x_{(1)}(t;\tau) \right\rangle &= \nu(t)\, r_x'(t)\, \delta x_{(1)}(t;\tau) + \psi(t)\, f_u'(t)\, \delta u(t;\tau), \\ \frac{d}{dt} \left\langle \psi(t), \delta x_{(2)}(t;\tau) \right\rangle &= \nu(t)\, r_x'(t)\, \delta x_{(2)}(t;\tau) + \frac{1}{2} \psi(t)\, f_{ww}''(t) \left[ (\delta x_{(1)}(t;\tau), \delta u(t;\tau)) \right]^2. \end{aligned}$$

Here, and from now on, the dependence on the optimal process is omitted for simplicity. Therefore, using these relations and the transversality conditions (3), and by gathering the terms in $\tau$ and $\tau^2$ into two groups, we obtain

$$\begin{split} o(\tau^2) \le{} & -\tau \int_0^1 [\mathcal{H}_u'(t) - \nu(t) r_u'(t)]\, \delta u(t;\tau)\, dt + \frac{\tau^2}{2} \Big( \mathcal{L}_{pp}''(\bar{p},\lambda) [(\delta x_0(\tau), \delta x_{(1)}(1;\tau))]^2 \\ & - \int_0^1 \big( \psi(t) f_{ww}''(t) + \nu(t) r_{ww}''(t) \big) [(\delta x_{(1)}(t;\tau), \delta u(t;\tau))]^2\, dt \Big). \end{split}$$

Now, as an implication of (5), we obtain (11) for the given $(\delta x_0, \delta u) \in \mathcal{K}_\varepsilon \cap \mathcal{X}_\infty$. Then, Estimate (11) is proven on the cone $\mathcal{K}_\varepsilon$ by a simple passage to the limit in $\mathcal{X}$.

Let us pass to the limit as $\varepsilon \to 0$ and prove (11) on the entire cone $\mathcal{K}$. Take $h \in \mathcal{K}$. One needs to justify that, for every $\varepsilon > 0$, there exists $h_\varepsilon \in \mathcal{K}_\varepsilon$ such that $h_\varepsilon \to h$ in $\mathcal{X}$; then (11) follows by a simple passage to the limit. The existence of such $h_\varepsilon$ is guaranteed by Lemma 1. Indeed, it is a straightforward task to verify that the derivative operator $\mathcal{F}_\varepsilon'$ satisfies all the assumptions of that assertion: it obviously converges pointwise to $\mathcal{F}_0'$ as $\varepsilon \to 0$, while its image is closed, as was confirmed above. The image of the adjoint operator is closed due to the regularity of the reference control process with respect to the mixed constraints, which is merely a technical step to ensure. Another technical step is to assert that $\mathcal{F}_\varepsilon' [\mathcal{F}_\varepsilon']^*$ is positive due to normality. Moreover, the covering constant does not depend on $\varepsilon$. It is also a straightforward task to verify that the rest of the assumptions hold if we consider $C^0$ as $C$.

The proof is complete.

#### **5. Abnormal Case**

In this section, we consider the case when $\operatorname{rank} Q < k$. This case, in which the normality condition is not satisfied, is called abnormal. Then, as a simple example can show, Theorem 1 fails to hold. Firstly, the normalized multiplier $\lambda$ is no longer unique and, moreover, there may not exist a multiplier from the cone $\Lambda$ for which Estimate (11) remains valid everywhere on $\mathcal{K}$. Therefore, Theorem 1 requires a certain refinement in the abnormal case. Let us formulate the "abnormal" version of this statement. In this enterprise, we follow the method based on the so-called index approach.

Consider the reduced cone of Lagrange multipliers $\Lambda_a = \Lambda_a(\bar{x}, \bar{u})$, which contains the multipliers $\lambda \in \Lambda$ such that

$$\operatorname{ind}_{\mathcal{N}} \Omega_\lambda = k - \operatorname{rank} Q.$$

Here, the notation $\operatorname{ind}_X$ stands for the index of a quadratic form over the space $X$, and $\mathcal{N} = \mathcal{N}_e \cap \mathcal{N}_r$. Consider also the following extra hypotheses.
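For intuition, in the finite-dimensional case the index of a quadratic form (the maximal dimension of a subspace on which it is negative definite) equals the number of negative eigenvalues of its matrix. The following minimal numerical sketch for a hypothetical $2 \times 2$ form is purely illustrative and is not part of the paper's argument:

```python
import math

def index_of_form(q11, q12, q22, tol=1e-12):
    """Index of the quadratic form Q[h]^2 = q11*h1^2 + 2*q12*h1*h2 + q22*h2^2
    over R^2, i.e., the number of negative eigenvalues of the symmetric
    matrix [[q11, q12], [q12, q22]]."""
    # Eigenvalues of a symmetric 2x2 matrix in closed form.
    mean = 0.5 * (q11 + q22)
    radius = math.hypot(0.5 * (q11 - q22), q12)
    eigs = (mean - radius, mean + radius)
    return sum(1 for e in eigs if e < -tol)

# The form h1^2 - h2^2 has index 1; -(h1^2 + h2^2) has index 2;
# a positive definite form has index 0.
print(index_of_form(1.0, 0.0, -1.0))   # 1
print(index_of_form(-1.0, 0.0, -1.0))  # 2
print(index_of_form(1.0, 0.0, 2.0))    # 0
```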

**Hypothesis 2** (**H2**)**.** *The mappings $f$ and $r_2$ are affine w.r.t. $u$, while $r_1$ is convex w.r.t. $u$.*

**Hypothesis 3** (**H3**)**.** *Mixed constraints are globally regular, that is, $U(x,t) = U_R(x,t)$ for all $x$ and $t$. Moreover, the set-valued mapping $U(x,t)$ is uniformly bounded.*

The main result of this section is as follows.

**Theorem 2.** *Let* (*x*¯, *u*¯) *be an optimal control process in Problem (1). Suppose that this process is regular with respect to the mixed constraints.*

*Then,* $\Lambda_a \neq \emptyset$*. Moreover, under (H2) and (H3), one has*

$$\max_{\lambda \in \Lambda_a} \Omega_\lambda [(\delta x_0, \delta u)]^2 \ge 0 \quad \forall\, (\delta x_0, \delta u) \in \mathcal{K}. \tag{18}$$

In the case of a local weak minimum, Estimate (18) has been proven in [14]. Here, our task is to prove it in the case of a global strong minimum, or a minimum of Pontryagin's type. In [3], the condition that the cone $\Lambda_a$ is non-empty has been proven in the class of generalized controls. Note that, under the normality condition on the optimal control process, Estimate (18) implies (11), since the normalized multiplier is unique. Thus, Theorem 2, in essence, represents a stronger assertion than Theorem 1, albeit under some extra assumptions such as (H2) and (H3). These two assumptions are meant to simplify the presentation. Note that it suffices to impose (H3) on the optimal trajectory only.

**Proof.** The proof of the theorem is divided into two stages.

STAGE 1. In this stage, we prove that $\Lambda_a \neq \emptyset$. In the beginning, suppose that Hypothesis (H3) is valid. Under (H1) and (H3), it is convenient to assume that $f(x,u,t)$ and $r(x,u,t)$ are constant with respect to $(x,u)$ and $t$ outside of some sufficiently large ball. This can be obtained by a simple problem reduction. In what follows, it will also not be restrictive to consider that $\varphi(\bar{p}) = 0$ and, for simplicity of exposition, that all the constraints are scalar-valued, i.e., $k_1 = k_2 = q_1 = 1$, while $q_2 = 0$.

Let *a*, *b* be non-negative numbers. Consider the mapping

$$\Delta(a,b) := \begin{cases} ab^{-4} & \text{if } b > 0, \\ 1 & \text{if } a > 0, \, b = 0, \\ 0 & \text{if } a = b = 0. \end{cases}$$

This function is lower semi-continuous. It will serve as a penalty function in the method applied below.
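As a quick illustration (a sketch that is not part of the proof), the snippet below implements $\Delta$ for non-negative arguments and shows the behaviour that makes it lower semi-continuous but not continuous: along $b \to 0$ with $a > 0$ fixed, the values blow up, while the value at $(a, 0)$ itself is only $1$.

```python
def delta(a, b):
    """Delta(a, b) from the text, for a, b >= 0:
    a * b**-4 if b > 0;  1 if a > 0 and b == 0;  0 if a == b == 0."""
    if b > 0:
        return a * b ** -4
    return 1.0 if a > 0 else 0.0

print(delta(1.0, 0.1))  # ~1e4: blows up as b -> 0 with a > 0 fixed
print(delta(1.0, 0.0))  # 1.0: the value at the limit point is smaller (lsc)
print(delta(0.0, 0.0))  # 0.0
```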

Take a pair $(x_0, u) \in \mathcal{X}$, and consider the unique solution to the Cauchy problem $\dot{x}(t) = f(x(t), u(t), t)$, $x(0) = x_0$, which exists on the entire time interval $[0,1]$ due to the above assumptions. Set $p = (x_0, x_1)$, where $x_1 = x(1)$. Note that $p$ depends on $(x_0, u)$. Let $\{\varepsilon_i\}$ be an arbitrary sequence of positive numbers converging to zero. Consider the mapping $\varphi_i^+(p) = (\varphi(p) + \varepsilon_i)^+$, where $a^+ = \max\{a, 0\}$ for $a \in \mathbb{R}$. Thus, the following functional over the space $\mathcal{X}$ is well-defined:

$$F_i(x_0, u) := \varphi_i^+(p) + \Delta\left( (e_1^+(p))^2 + |e_2(p)|^2 + \int_0^1 (r(x, u, t)^+)^2\, dt,\ \varphi_i^+(p) \right).$$

The functional $F_i$ is lower semi-continuous, which is a straightforward exercise to verify due to the assumptions made above regarding the mappings $f$ and $r$. At the same time, this functional is positive everywhere: $F_i > 0$.

Consider the following problem

$$\text{Minimize } F_i(x_0, u), \quad (x_0, u) \in \mathcal{X}.$$

Note that $F_i(\bar{x}_0, \bar{u}) = \varepsilon_i$. By applying the smooth variational principle (see, e.g., [17]), for each $i$, there exist an element $(x_{0,i}, u_i) \in \mathcal{X}$ and a sequence of elements $(\tilde{x}_j, \tilde{u}_j) \in \mathcal{X}$, $j = 1, 2, \ldots$, converging to $(x_{0,i}, u_i)$ such that

$$F_i(x_{0,i}, u_i) \le F_i(\bar{x}_0, \bar{u}) = \varepsilon_i, \tag{19}$$

$$|x_{0,i} - \bar{x}_0|^2 + \int_0^1 |u_i(t) - \bar{u}(t)|^2\, dt \le \sqrt[3]{\varepsilon_i^2}, \tag{20}$$

and the pair (*x*0,*i*, *ui*) is the unique solution to the following problem:

$$\text{Minimize } F_i(x_0, u) + \sqrt[3]{\varepsilon_i} \sum_{j=1}^{\infty} 2^{-j} \left( |x_0 - \tilde{x}_j|^2 + \int_0^1 |u(t) - \tilde{u}_j(t)|^2\, dt \right), \quad (x_0, u) \in \mathcal{X}.$$

Suppose that $\varphi_i^+(p_i) = 0$. Then, $\varphi(p_i) < 0$ and, in view of optimality, taking into account that $\varphi(\bar{p}) = 0$, it follows that some of the constraints in (1): $e_1$, or $e_2$, or $r$, are violated. Therefore, by the definition of $\Delta$, one has $F_i(x_{0,i}, u_i) \ge 1$. However, this contradicts (19) for all sufficiently large $i$. Thus, $\varphi_i^+(p_i) > 0$. Consider a number $\delta_i > 0$ such that $\varphi_i^+(p) > 0$ for all $p$ with $|p - p_i| \le \delta_i$. Then, again by the definition of $\Delta$, the pair $(x_{0,i}, u_i)$ is the unique global minimum of the following control problem:

$$\begin{aligned} \text{Minimize} \quad & z_0 + z_0^{-4}\left( (e_1(p)^+)^2 + |e_2(p)|^2 \right) + \int_0^1 z^{-4} (r(x, u, t)^+)^2\, dt \\ & + \sqrt[3]{\varepsilon_i} \sum_{j=1}^{\infty} 2^{-j} \left( |x_0 - \tilde{x}_j|^2 + \int_0^1 |u - \tilde{u}_j(t)|^2\, dt \right), \\ \text{subject to} \quad & \dot{x} = f(x, u, t), \\ & \dot{z} = 0 \quad \text{for a.a. } t \in [0, 1], \\ & |p - p_i| \le \delta_i, \quad z_0 = \varphi_i^+(p). \end{aligned} \tag{21}$$

Denote by $x_i$, $z_i$ the solution to (21), that is, the trajectory corresponding to the pair $(x_{0,i}, u_i(\cdot))$. Note that the function $z_i(\cdot)$ is constant, and thus it can be treated simply as a number $z_i \in \mathbb{R}$.

Problem (21) is, as a matter of fact, unconstrained. Consider the first- and second-order necessary optimality conditions for this problem.

The first-order conditions are stated as follows. There exist a number $\lambda_i^0 > 0$ and absolutely continuous conjugate functions $\psi_i$ and $\sigma_i$, corresponding to $x_i$ and $z_i$, respectively, such that, for a.a. $t \in [0,1]$,

$$\begin{split} \dot{\psi}_i(t) &= -\mathcal{H}_x'(x_i(t), u_i(t), \psi_i(t), t) + 2\lambda_i^0 z_i^{-4}\, r^+(x_i(t), u_i(t), t)\, r_x'(x_i(t), u_i(t), t), \\ \dot{\sigma}_i(t) &= -4\lambda_i^0 z_i^{-5} \left( r(x_i(t), u_i(t), t)^+ \right)^2, \end{split} \tag{22}$$

$$\begin{split} \psi_i(s) = (-1)^s \lambda_i^0 \Big( 2 z_i^{-4} \Big( e_1^+(p_i) \frac{\partial e_1}{\partial x_s}(p_i) + e_2(p_i) \frac{\partial e_2}{\partial x_s}(p_i) \Big) + (1-s) \sqrt[3]{\varepsilon_i}\, \omega_{1,i}'(x_{0,i}) \Big) \\ - (-1)^s \rho_i \frac{\partial \varphi}{\partial x_s}(p_i), \quad s = 0, 1, \\ \sigma_i(0) = \lambda_i^0 \Big( 1 - 4 z_i^{-5} (e_1^+(p_i))^2 - 4 z_i^{-5} |e_2(p_i)|^2 \Big) + \rho_i, \\ \sigma_i(1) = 0, \end{split} \tag{23}$$

$$\begin{split} & \max_{u \in \mathbb{R}^n} \left( \mathcal{H}(x_i(t), u, \psi_i(t), t) - \lambda_i^0 z_i^{-4} (r(x_i(t), u, t)^+)^2 - \lambda_i^0 \sqrt[3]{\varepsilon_i}\, \omega_{2,i}(u, t) \right) \\ & \quad = \mathcal{H}(x_i(t), u_i(t), \psi_i(t), t) - \lambda_i^0 z_i^{-4} (r(x_i(t), u_i(t), t)^+)^2 - \lambda_i^0 \sqrt[3]{\varepsilon_i}\, \omega_{2,i}(u_i(t), t). \end{split} \tag{24}$$

Here, $\rho_i \in \mathbb{R}$ is the multiplier corresponding to the constraint $z_0 = \varphi_i^+(p)$,

$$\omega_{1,i}(x) := \sum_{j=1}^{\infty} 2^{-j} |x - \tilde{x}_j|^2, \quad \text{and} \quad \omega_{2,i}(u, t) := \sum_{j=1}^{\infty} 2^{-j} |u - \tilde{u}_j(t)|^2.$$

Conditions (22)–(24) are the first-order optimality conditions in the form of the maximum principle. Consider the second-order optimality conditions for Problem (21).

Take an element (*δx*0, *δu*) ∈ X . Consider the variational differential equation related to (21), that is,

$$\begin{aligned} \dot{\delta x}_i(t) &= \frac{\partial f}{\partial x}(x_i(t), u_i(t), t)\, \delta x_i(t) + \frac{\partial f}{\partial u}(x_i(t), u_i(t), t)\, \delta u(t), \\ \dot{\delta z}_i(t) &= 0, \end{aligned} \tag{25}$$

for a.a. *t* ∈ [0, 1], where

$$\begin{aligned} \delta x_i(0) &= \delta x_0, \\ \delta z_i(0) &= \varphi'(p_i)\, \delta p_i. \end{aligned}$$

Here, *δpi* := (*δx*0, *δxi*(1)).

The solution to (25) exists and is unique on the entire time interval $[0,1]$ due to the assumptions made above. The function $\delta z_i(\cdot)$ is obviously constant; thus, it is treated just as a number $\delta z_i$ in what follows.
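To make the role of the variational equation concrete, here is a small numerical sketch (entirely illustrative; the toy scalar dynamic $f(x,u,t) = -x^2 + u$ is not from the paper): the Euler-propagated linearization $\dot{\delta x} = f_x'\, \delta x + f_u'\, \delta u$ agrees with a finite difference of the flow of the Cauchy problem.

```python
def integrate(x0, u, n=1000):
    """Forward Euler for the toy Cauchy problem x' = -x**2 + u(t), x(0) = x0,
    on [0, 1]; returns x(1)."""
    dt, x = 1.0 / n, x0
    for k in range(n):
        x += dt * (-x * x + u(k * dt))
    return x

def integrate_variational(x0, u, dx0, du, n=1000):
    """Euler for the pair (x, delta_x), where the variational equation is
    delta_x' = f_x * delta_x + f_u * delta_u, with f_x = -2x and f_u = 1
    for this toy f; returns delta_x(1)."""
    dt, x, dx = 1.0 / n, x0, dx0
    for k in range(n):
        t = k * dt
        x, dx = x + dt * (-x * x + u(t)), dx + dt * (-2.0 * x * dx + du(t))
    return dx

u, du, h = (lambda t: 0.5 * t), (lambda t: 1.0), 1e-6
fd = (integrate(1.0 + 0.3 * h, lambda t: u(t) + h * du(t)) - integrate(1.0, u)) / h
var = integrate_variational(1.0, u, 0.3, du)
print(abs(fd - var))  # small: the variational solution tracks the flow
```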

On the space X , consider the quadratic form

$$\begin{split} \Omega_i[(\delta x_0, \delta u)]^2 ={} & \lambda_i^0 2 z_i^{-4} e_1(p_i)^+ e_1''(p_i)[\delta p_i]^2 + 2 z_i^{-4} e_2(p_i)\, e_2''(p_i)[\delta p_i]^2 \\ & + 20 z_i^{-6}\Big( (e_1(p_i)^+)^2 + |e_2(p_i)|^2 \Big)\delta z_i^2 - \Big( \rho_i\, \varphi''(p_i) + \lambda_i^0 \sqrt[3]{\varepsilon_i}\, \omega_{1,i}''(p_i) \Big)[\delta p_i]^2 \\ & - \int_0^1 \mathcal{H}_{ww}''(x_i(t), u_i(t), \psi_i(t), t)[(\delta x_i(t), \delta u(t))]^2\, dt \\ & + \lambda_i^0 \int_0^1 2 z_i^{-4}\, r(x_i(t), u_i(t), t)^+ r_{ww}''(x_i(t), u_i(t), t)[(\delta x_i(t), \delta u(t))]^2\, dt \\ & + \int_0^1 20 z_i^{-6} (r(x_i(t), u_i(t), t)^+)^2 \delta z_i^2\, dt + \lambda_i^0 \sqrt[3]{\varepsilon_i} \int_0^1 \omega_{2,i}''(u_i(t), t)[\delta u(t)]^2\, dt. \end{split}$$

Consider the closed subspace N*<sup>i</sup>* ⊆ X of pairs (*δx*0, *δu*) such that


Then, the second-order necessary optimality condition is given by the inequality

$$
\Omega_i[(\delta x_0, \delta u)]^2 \ge 0 \quad \forall\, (\delta x_0, \delta u) \in \mathcal{N}_i. \tag{26}
$$

(Note that the functional $F_i(x_0, u)$ is not twice continuously differentiable. At the same time, the scalar function $F_i(x_{0,i} + \tau \delta x_0, u_i + \tau \delta u)$ of $\tau$ possesses a second derivative w.r.t. $\tau$ at $\tau = 0$, provided that $(\delta x_0, \delta u) \in \mathcal{N}_i$. Using this fact, and the fact that Problem (21) is unconstrained, it is simple to derive (26) by applying direct variational arguments.)

The next step is to pass to the limit as *i* → ∞ in the obtained optimality conditions. Firstly, it follows from (20) that *x*0,*<sup>i</sup>* → *x*¯0, and *ui*(*t*) → *u*¯(*t*) strongly in *L*2, and, thereby, *xi*(*t*) ⇒ *x*¯(*t*) uniformly on [0, 1]. Then, *zi* → 0. Define

$$\begin{aligned} \lambda\_i^1 &:= 2\lambda\_i^0 z\_i^{-4} e\_1(p\_i)^+; \\ \lambda\_i^2 &:= 2\lambda\_i^0 z\_i^{-4} e\_2(p\_i); \\ \nu\_i(t) &:= 2\lambda\_i^0 z\_i^{-4} r(x\_i(t), u\_i(t), t)^+, \end{aligned}$$

and consider the following normalization for the multipliers

$$|\lambda\_i| + |\psi\_i(0)| + ||\nu\_i||\_{L\_2} = 1,\tag{27}$$

where $\lambda_i = (\lambda_i^0, \lambda_i^1, \lambda_i^2)$.

Let us show that *σi*(0) → 0. Indeed, one has

$$\sigma_i(0) = 4 \int_0^1 \lambda_i^0 z_i^{-5} \big( r(x_i(t), u_i(t), t)^+ \big)^2\, dt = 2 \int_0^1 \nu_i(t)\, z_i^{-1}\, r(x_i(t), u_i(t), t)^+\, dt.$$

However, due to (19), one has $\| z_i^{-1} r(x_i(t), u_i(t), t)^+ \|_{L_2} \to 0$. This, together with (27), implies that $\sigma_i(0) \to 0$. Then, the transversality condition and, again, (19) and (27) simply yield that $\lambda_i^0 - \rho_i \to 0$.

By passing to a subsequence, in view of a compactness argument, one may assume from (27) that $\lambda_i \to \lambda$, $\psi_i(t) \to \psi(t)$ uniformly, and $\nu_i \to \nu$ weakly in $L_2$ as $i \to \infty$, for some multipliers $\lambda = (\lambda^0, \lambda^1, \lambda^2)$, $\psi$ and $\nu$. Then, $\rho_i \to \lambda^0$. It is also clear that, by passing to a subsequence, one can assert that $\lambda_i^0 z_i^{-4} \to \infty$; indeed, otherwise all the multipliers converge to zero, contradicting (27). By virtue of the regularity of the optimal control process with respect to the mixed constraints, for each $i$, there exists a control function $\zeta_i$ such that $\zeta_i(t) \in U(x_i(t), t)$ a.e., and $\zeta_i \to \bar{u}$ in $L_\infty$. Thus, from the maximum condition (24), it follows that $r(x_i(t), u_i(t), t)^+ \to 0$ uniformly. Since the set $U(\bar{x}(t), t)$ is uniformly bounded, this implies, again due to regularity, that the control functions $u_i$ are essentially bounded uniformly with respect to $i$, that is, $\| u_i \|_{L_\infty} \le \text{const}$.

From (24), one derives that

$$\nu_i(t)\, r_u'(x_i(t), u_i(t), t) = \mathcal{H}_u'(x_i(t), u_i(t), \psi_i(t), t) - \lambda_i^0 \sqrt[3]{\varepsilon_i}\, \frac{\partial \omega_{2,i}}{\partial u}(u_i(t), t).$$

Whence, using regularity and the above obtained facts, one has

$$|\nu\_i(t)| \le \text{const} \left( |\psi\_i(t)| + \lambda\_i^0 \right) \quad \forall i, \ t \in [0, 1]. \tag{28}$$

Using the facts and estimates obtained above, one can simply pass to the limit in (22)–(24) and prove that the set of multipliers $(\lambda, \psi, \nu)$ satisfies the maximum principle. At the same time, the fact that $\lambda \neq 0$ follows from (28).

Now, let us pass to the limit in (26). Take numbers *ε* > 0 and *σ* > 0. Restricting to a subsequence, one can state that *ui*(*t*) → *u*¯(*t*) for a.a. *t* ∈ [0, 1]. Due to Egorov's theorem, there is a subset *Eσ*, the measure of which equals 1 − *σ*, such that *ui*(*t*) ⇒ *u*¯(*t*) uniformly on *Eσ*. Denote *T<sup>ε</sup>* := {*t* ∈ [0, 1] : *r*(*x*¯(*t*), *u*¯(*t*), *t*) ≥ −*ε*}, *Tε*(*σ*) := *T<sup>ε</sup>* ∩ *Eσ*.

Consider the bounded linear operator A*<sup>i</sup>* : X (*Eσ*) → L2(*Tε*(*σ*); R) such that

$$\mathcal{A}_i(\delta x_0, \delta u) = r_x'(x_i(t), u_i(t), t)\, \delta x_i(t) + r_u'(x_i(t), u_i(t), t)\, \delta u(t)\, \big|_{T_\varepsilon(\sigma)},$$

where X (*Eσ*) is defined as X , but *δu*(*t*) = 0 for a.a. *t* ∈/ *Eσ*. It is a simple matter to show that, due to the uniform convergence, one has A*<sup>i</sup>* → A strongly, where

$$\mathcal{A}(\delta x_0, \delta u) = r_x'(\bar{x}(t), \bar{u}(t), t)\, \delta x(t) + r_u'(\bar{x}(t), \bar{u}(t), t)\, \delta u(t)\, \big|_{T_\varepsilon(\sigma)}.$$

Then, as is known, $\ker \mathcal{A}_i \to \mathcal{X}_\varepsilon(\sigma) := \ker \mathcal{A} \subseteq \mathcal{X}(E_\sigma)$. It is clear that $\mathcal{X}_\varepsilon(\sigma) \to \mathcal{X}_\varepsilon := \mathcal{X}_\varepsilon(0)$ as $\sigma \to 0$, by virtue of its definition and the regularity condition. One needs to use the solution to a corresponding Volterra equation to prove this simple fact. Then, Lemma 1 yields that $\mathcal{X}_\varepsilon \to \mathcal{N}_r$ as $\varepsilon \to 0$ if we consider $C = \{0\}$. Here, when treating the convergence of spaces, the symbol '$\to$' stands for Limsup.

Let $\Pi_i \subseteq \mathcal{X}$ denote the kernel of the endpoint operator $e'(p_i)\delta p_i$. It is clear that $\operatorname{codim} \Pi_i \le k$. Then, it is a simple exercise to ensure the existence of a subspace $\Pi \subseteq \mathcal{N}_e$ such that $\operatorname{codim} \Pi \le k$ and

$$
\Pi \cap \mathcal{N}_r \subseteq \operatorname{Limsup}\, (\Pi_i \cap \ker \mathcal{A}_i),
$$

where Limsup is total: firstly as $i \to \infty$, then as $\sigma \to 0$ and, finally, as $\varepsilon \to 0$. At the same time, note that $T_i \cap E_\sigma \subseteq T_\varepsilon(\sigma)$ for all large $i$. Therefore, one has the embedding $\ker \mathcal{A}_i \cap \Pi_i \subseteq \mathcal{N}_i \cap \mathcal{X}(E_\sigma)$, and then the passage to the limit in (26) gives the condition $\Lambda_a \neq \emptyset$. In the latter deduction, Proposition 1 of [14] has been used, as well as the fact that the terms with $\delta z_i^2$ in $\Omega_i$ converge to zero in view of (19) and (27).

Now, it is necessary to remove the extra assumptions imposed in (H3) regarding the boundedness and global regularity. However, this can be done following precisely the same method as presented in [4]. Take $c > 0$, and consider the additional control constraint $|u| \le c$. For each $\varepsilon > 0$, there will be $N$ specifically constructed regular selectors of $U(x,t)$, surrounded by $N$ $\varepsilon$-tubes as in the above-cited source. Then, the passage to the limit, firstly as $\varepsilon \to 0$, then as $N \to \infty$ and, at the end, as $c \to \infty$, completes the proof of Stage 1.

The full proof of the next stage is rather lengthy. Therefore, let us present it schematically, in sketch form, exposing the main idea.

STAGE 2. Here, under (H2) and (H3), we prove Estimate (18). For this purpose, the notion of *χ*-problem is used. Take any *ε*, *δ* > 0 and (*δx*0, *δu*) ∈ K. It is not restrictive to assume that the minimum in (1) is absolute.

Consider the problem

$$\begin{cases} \text{Minimize} \quad \varphi(p) - \chi \varphi(\bar{p}_\varepsilon) + (\max\{0, \chi - 1\})^4 + \delta\, |x_0 - \bar{x}_0 - \varepsilon \delta x_0|^2 \\ \text{subject to} \quad \dot{x}(t) = f(x(t), u(t), t) \ \text{for a.a. } t, \\ \qquad e(p) - \chi e(\bar{p}_\varepsilon) \in C_e, \quad |x_0 - \bar{x}_0| \le \delta, \\ \qquad r(x(t), u(t), t) - \chi\, r(\bar{x}_\varepsilon(t), \bar{u}(t) + \varepsilon \delta u(t), t) \in C_r \quad \text{for a.a. } t, \\ \qquad \chi \ge 0. \end{cases} \tag{29}$$

Here, *x*¯*ε*(·) is the trajectory corresponding to the perturbed pair (*x*¯0 + *εδx*0, *u*¯(*t*) + *εδu*(*t*)), whereas *p*¯*<sup>ε</sup>* is the corresponding endpoint vector.

Note that the infimum in Problem (29) is finite due to the imposed assumptions. Moreover, it is not greater than zero, since the process $(\bar{x}_\varepsilon(t), \bar{u}(t) + \varepsilon \delta u(t))$, $\chi = 1$ is feasible, whereas the value of the cost equals zero. At the same time, when $\chi = 0$, the infimum over $\mathcal{X}$ is positive due to the absolute optimality in (1) and since $\varphi(\bar{p}) = 0$. This suggests the application of the smooth variational principle, albeit in the version from Ref. [18], so that the finite-dimensional variable $\chi$ is not subject to perturbation. That is, the addition to the cost due to the variational principle acts within the space $\mathcal{X}$ only. Then, for any sufficiently small $\alpha > 0$, one can assert the existence of a solution $(x_{0,\alpha}, u_\alpha(\cdot))$, $\chi_\alpha$ to the perturbed problem such that $\chi_\alpha > 0$. Indeed, otherwise there is a contradiction with the range of the infimum.

The remaining arguments are somewhat standard: the results of Stage 1 are applied to the $\alpha$-solutions, which are regular due to (H3). By taking $\alpha = \alpha(\varepsilon)$ appropriately small according to the given $\varepsilon$, one can prove the convergence of this solution to the optimal solution $(\bar{x}_0, \bar{u}(t))$ as $\varepsilon \to 0$. (In this step, Hypothesis (H2) is essentially used, together with the weak sequential compactness of the controls $u_{\alpha(\varepsilon)}$, obtained by a standard technique. One also needs the form of the minimizing functional in (29) and the above-obtained fact that the infimum is not greater than zero, in order to prove the strong convergence of these controls.) Then, it is necessary to pass to the limit in the obtained conditions, firstly as $\alpha \to 0$, then as $\varepsilon \to 0$ and, finally, as $\delta \to 0$. At the same time, the transversality condition with respect to the $\chi$-variable yields the desired Estimate (18) by virtue of the Taylor series expansion.

The proof is complete.

#### **6. Conclusions**

In this article, second-order necessary conditions in the form of Estimate (18) have been derived for both the normal and abnormal cases. The notion of normality is defined as the condition of full rank for the corresponding controllability matrix. In the normal case, the Lagrange multiplier is unique upon normalization, while the multiplier $\lambda^0$ is positive. In the abnormal case, it is essential that the reduced cone of Lagrange multipliers $\Lambda_a$ is considered and that it has been proven non-empty. Along with the second-order necessary conditions, a refined version of the maximum condition in the form (4) has been obtained. The principal feature of the obtained result is that the maximum is taken over the reduced feasible set $\Theta(x,t)$, which is the closure of the set of regular points.

**Author Contributions:** Conceptualization and methodology, A.V.A.; writing—original draft preparation, D.Y.K.; writing—review and editing, D.Y.K. and F.L.P.; supervision and project administration, F.L.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Ministry of Science and Higher Education of the Russian Federation, project no 075-15-2020-799.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The support of SYSTEC UID/EEA/00147, Ref. POCI-01-0145-FEDER-006933; SNAP project, Ref. NORTE-01-0145-FEDER-000085; and MAGIC project, Ref. POCI-01-0145-FEDER-032485; all funded by ERDF | NORTE 2020, PT2020, FEDER | COMPETE2020 | POCI | PIDDAC | FCT/MCTES, are acknowledged. Theorem 1 is obtained by A.V. Arutyunov under financial support of Russian Science Foundation (Project No 22-21-00863). Theorem 2 was obtained by A.V. Arutyunov under financial support of Russian Science Foundation (Project No 20-11-20131). The work of D.Yu. Karamzin was carried out according to the State Assignment of Russian Federation (State registration number AAAA-A19-119092390082-8, Topic 0063-2019-0010). The useful remarks of the anonymous reviewers are also acknowledged.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Riemann–Liouville Fractional Sobolev and Bounded Variation Spaces †**

**Antonio Leaci 1,\* and Franco Tomarelli <sup>2</sup>**


**Abstract:** We establish some properties of the bilateral Riemann–Liouville fractional derivative *Ds*. We set the notation, and study the associated Sobolev spaces of fractional order *s*, denoted by *Ws*,1(*a*, *b*), and the fractional bounded variation spaces of fractional order *s*, denoted by *BVs*(*a*, *b*). Examples, embeddings and compactness properties related to these spaces are addressed, aiming to set a functional framework suitable for fractional variational models for image analysis.

**Keywords:** fractional derivatives; distributional derivatives; Sobolev spaces; bounded variation functions; embeddings; compactness; calculus of variations; Abel equation

#### **1. Introduction**

Among several different available definitions for fractional derivatives and corresponding functional spaces, this paper focuses the analysis on some classical pointwise defined or distributional fractional derivatives connected to integral-convolution operators. Precisely, we refer to bilateral definitions of Riemann–Liouville fractional derivatives and related Sobolev and bounded variation spaces that we introduced in [1]: here, we show some compactness and embedding properties of these spaces.

First, we recall the classical Riemann–Liouville left and right fractional derivatives $(d/dx)^s_+$ and $(d/dx)^s_-$ and introduce the distributional Riemann–Liouville left and right fractional derivatives $D^s_+$, $D^s_-$, together with their bilateral even and odd versions, respectively $D^s_e$ and $D^s_o$, all of them defined for non-integer orders $s$, $0 < s < 1$ (see Definition 4).

Second, we provide the definitions of the fractional Sobolev spaces $W^{s,1}$ and fractional bounded variation spaces $BV^s$, associated to these bilateral derivatives (see Definitions 9 and 10). These function spaces are studied here (see Theorem 6, Examples 2–5, 6 and 8) in comparison with their non-bilateral counterparts ([2–8]).

The spaces $W^{s,1}$ and $BV^s$ turn out to be the natural setting for data of Abel integral equations, making them well-posed problems in the distributional framework too: see Propositions 2 and 3, which show that if $f \in BV^s(a,b)$ with $-\infty < a \le b \le +\infty$, then the distributional Abel integral equation $I^s_{a+}[u] = f$ admits a unique solution, and which provide an explicit resolvent formula. Corollaries 1 and 2 state analogous results for backward equations. This approach provides an alternative formulation of classical $L^1$ representability (see [9]); precisely, it leads to a straightforward extension of solvability for the Abel integral equation under conditions weaker than $L^1$ representability, namely with data possibly belonging to $BV^s(a,b)$.
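As a numerical illustration (our sketch, not part of the article), the classical left Riemann–Liouville fractional integral $I^s_{a+}[u](x) = \frac{1}{\Gamma(s)} \int_a^x (x-t)^{s-1} u(t)\, dt$ appearing in the Abel equation can be approximated by product integration: $u$ is frozen at the midpoint of each subinterval and the weakly singular kernel is integrated exactly there. For $u \equiv 1$ this reproduces the closed form $(x-a)^s / \Gamma(s+1)$.

```python
import math

def rl_fractional_integral(u, s, a, x, n=2000):
    """Left Riemann-Liouville fractional integral
        I^s_{a+}[u](x) = 1/Gamma(s) * integral_a^x (x - t)**(s - 1) u(t) dt,
    for 0 < s < 1, via product integration (midpoint freezing of u,
    exact integration of the singular kernel on each subinterval)."""
    h = (x - a) / n
    total = 0.0
    for k in range(n):
        t0, t1 = a + k * h, a + (k + 1) * h
        # exact integral of (x - t)**(s - 1) over [t0, t1]
        kernel = ((x - t0) ** s - (x - t1) ** s) / s
        total += u(0.5 * (t0 + t1)) * kernel
    return total / math.gamma(s)

# Check against the closed form I^s_{a+}[1](x) = (x - a)**s / Gamma(s + 1):
s, a, x = 0.5, 0.0, 2.0
print(rl_fractional_integral(lambda t: 1.0, s, a, x))  # ~ x**s / Gamma(1.5)
print(x ** s / math.gamma(s + 1.0))
```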

Basic properties of the functional spaces introduced in the present article (the weak compactness property stated by Theorems 3 and 11, together with the comparison embeddings and strict embeddings stated in Theorems 6 and 8 and by (92) and (93)), namely

$$BV(a,b) \subsetneq \bigcap_{\sigma \in (0,1)} W^{\sigma,1}(a,b) \subsetneq W^{s,1}(a,b) \subsetneq BV^s_+(a,b) \qquad \forall s \in (0,1),$$

**Citation:** Leaci, A.; Tomarelli, F. Riemann–Liouville Fractional Sobolev and Bounded Variation Spaces. *Axioms* **2022**, *11*, 30. https://doi.org/10.3390/ axioms11010030

Academic Editors: Natália Martins, Ricardo Almeida, Moulay Rchid Sidi Ammi and Cristiana João Soares da Silva

Received: 27 November 2021 Accepted: 11 January 2022 Published: 14 January 2022



$$W^{s,1}(a,b) \;\subsetneq\; BV^{s}_{+}(a,b)\,, \qquad W^{s,1}(a,b) \;\subsetneq\; BV^{s}_{-}(a,b)\,, \qquad \forall s\in(0,1)\,,$$

are studied with the aim of providing a functional framework suitable for fractional variational models for image analysis ([10–17]), which are the object of a forthcoming paper [18]. The present preliminary study deals with the one-dimensional case only.

We thank an anonymous referee for useful remarks and for pointing us to the recent article [19], which contains a different approach to the Sonin–Abel equation in weighted Lebesgue spaces.

#### **2. Bilateral Fractional Integral and Derivative**

In this paper, $(a,b)\subset\mathbb{R}$ is a nonempty (possibly unbounded) open interval, $u$ is a real function of one variable and $0<s<1$. The support of a function $u$ is denoted by $\operatorname{spt} u$. The notation $d/dx$ stands for the classical pointwise derivative; $D_x$, or shortly $D$, denotes the distributional derivative with respect to the variable $x$. For every open interval $A\subset\mathbb{R}$, we denote by $AC(A)$ the set of absolutely continuous functions on the interval $A$, which coincides ([20]) with the Gagliardo–Sobolev space $W^{1,1}_G(A)=\{u\in L^1(A)\mid Du\in L^1(A)\}$ when both are endowed with the standard norm $\|u\|_{L^1(A)}+\|Du\|_{L^1(A)}$. Moreover, we set $L^1_{loc}(A)$ for the set of measurable functions which are Lebesgue integrable on every compact subset of $A$, $AC_{loc}(A)=W^{1,1}_{G,loc}(A)=\{u\in L^1_{loc}(A)\mid Du\in L^1_{loc}(A)\}$ and $BV(A)=\{u\in L^1(A)\mid Du\in\mathcal{M}(A)\}$, where $\mathcal{M}(A)$ denotes the set of measures whose total variation on $A$ is bounded. We denote by $\mathcal{D}'(A)$ and $\mathcal{S}'(A)$, respectively, the space of distributions and the space of tempered distributions on the open set $A$. We denote by $C^{0,\alpha}(K)$ the space of Hölder continuous functions on the set $K$.

For the reader's convenience, we recall the definition of Gagliardo's fractional Sobolev spaces $W^{s,1}_G$ ([21,22]). For any $s\in(0,1)$, we set

$$W^{s,1}_G = \left\{ u\in L^1(a,b) \;:\; \frac{|u(x)-u(y)|}{|x-y|^{1+s}} \in L^1([a,b]\times[a,b]) \right\},\tag{1}$$

which is a Banach space endowed with the norm

$$\|u\|_{W^{s,1}_G} = \int_a^b |u(x)|\,dx \;+\; \int_a^b\!\!\int_a^b \frac{|u(x)-u(y)|}{|x-y|^{1+s}}\,dx\,dy\,,$$

and we recall also the definition of the Riemann–Liouville fractional integral and derivative of order *s* for *L*1-functions, whose standard references can be found in the book by Samko et al. [9].

In the sequel, *H* denotes the Heaviside function *H*(*x*) =1 if *x*≥0, *H*(*x*) =0 if *x*<0, while sign denotes the sign function sign(*x*) =1 if *x*>0, sign(*x*) =−1 if *x*<0, sign(0) =0.

#### **Definition 1.** *(Riemann–Liouville fractional integral)*

Assume $u\in L^1(a,b)$ and $s>0$.

The left-side and right-side Riemann–Liouville fractional integrals ${}_{RL}I^s_{a+}$ and ${}_{RL}I^s_{b-}$ are defined by setting, respectively,

$$ {}_{RL}I^s_{a+}[u](x) \;=\; \frac{1}{\Gamma(s)}\int_a^x \frac{u(t)}{(x-t)^{1-s}}\,dt\,, \qquad x\in[a,b]\,,\tag{2}$$

$$ {}_{RL}I^s_{b-}[u](x) \;=\; \frac{1}{\Gamma(s)}\int_x^b \frac{u(t)}{(t-x)^{1-s}}\,dt\,, \qquad x\in[a,b]\,.\tag{3}$$

Here, Γ denotes the Euler gamma function [23].

Notice that ${}_{RL}I^1_{a+}[u](x) = \int_a^x u(t)\,dt$ and, in general, for every strictly positive integer value $s=n\in\mathbb{N}$, ${}_{RL}I^n_{a+}[u]$ coincides with the $n$-th order primitive of $u$ vanishing at $x=a$ together with all its derivatives up to order $n-1$.
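This remark can be sanity-checked numerically: for $s=2$, ${}_{RL}I^2_{0+}[\cos]$ should coincide with the second primitive $1-\cos x$. The sketch below does this with a midpoint quadrature (the quadrature rule, the test function $\cos$ and the sample point $x=1.2$ are our own choices, not taken from the article):

```python
import math

def rl_integral(u, a, x, s, n=20_000):
    """Midpoint-rule approximation of the Riemann-Liouville integral
    I^s_{a+}[u](x) = (1/Gamma(s)) * integral_a^x u(t) (x-t)^(s-1) dt."""
    h = (x - a) / n
    acc = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        acc += u(t) * (x - t) ** (s - 1)
    return acc * h / math.gamma(s)

# For s = 2 the kernel is (x-t)/Gamma(2) = x-t, so I^2_{0+}[cos](x)
# equals the second primitive of cos vanishing at 0 together with its
# first derivative, namely 1 - cos(x).
x = 1.2
print(rl_integral(math.cos, 0.0, x, 2), 1 - math.cos(x))
```

For $s=1$ the same routine returns the plain primitive, e.g. `rl_integral(math.cos, 0.0, x, 1)` approximates $\sin x$.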

Both ${}_{RL}I^s_{a+}[u]$ and ${}_{RL}I^s_{b-}[u]$ are absolutely continuous functions if $s\ge 1$, since they are primitives of $L^1$ functions, whereas we can only say that they are $L^q$ functions if $0<s<1$, $1\le q<1/(1-s)$ (see [9]): indeed, jump discontinuities are allowed if $0<s<1$, as shown by the next example.

**Example 1.** Set $(a,b)=(-1,1)$. Then for every $0<s<1$ there is $u\in L^p(-1,1)$, $1\le p<1/s$, such that $I^s_{(-1)+}[u]$ is discontinuous. For instance, consider $u(x)=H(x)\,x^{-s}$; thus, exploiting the Euler beta function $B(\nu,\mu)=\int_0^1 y^{\nu-1}(1-y)^{\mu-1}\,dy=\frac{\Gamma(\nu)\,\Gamma(\mu)}{\Gamma(\nu+\mu)}$, one gets

$$ {}_{RL}I^s_{(-1)+}[u](x) \;=\; \frac{1}{\Gamma(s)}\int_{-1}^x \frac{H(t)\,t^{-s}}{(x-t)^{1-s}}\,dt \;=\; H(x)\,\frac{B(s,\,1-s)}{\Gamma(s)} \;=\; \begin{cases} 0 & \text{if } -1<x\le 0\,,\\[2pt] \Gamma(1-s) & \text{if } 0<x<1\,. \end{cases}$$

Thus, $I^s_{(-1)+}[u]$ is a piecewise constant function on $(-1,1)$ with a jump at $x=0$.
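Example 1 can also be checked numerically; the sketch below (our own choices: $s=0.4$, sample points $x=\pm 0.5$ and a plain midpoint rule) approximates $I^s_{(-1)+}[u]$ on both sides of the jump:

```python
import math

def rl_integral(u, a, x, s, n=400_000):
    """Midpoint-rule approximation of I^s_{a+}[u](x)."""
    h = (x - a) / n
    acc = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        acc += u(t) * (x - t) ** (s - 1)
    return acc * h / math.gamma(s)

s = 0.4
u = lambda t: t ** (-s) if t > 0 else 0.0   # u(x) = H(x) x^(-s) from Example 1

# Example 1 predicts the constant value Gamma(1-s) for x > 0 and 0 for
# x < 0, i.e. a jump at x = 0, although u is merely L^p with p < 1/s.
print(rl_integral(u, -1.0, 0.5, s), math.gamma(1 - s))   # approximately equal
print(rl_integral(u, -1.0, -0.5, s))                     # 0.0
```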

Next, we recall the classical definition of left and right Riemann–Liouville fractional derivatives as in [9,24–27].

#### **Definition 2.** *(Classical Riemann–Liouville fractional derivative)*

Assume $u\in L^1(a,b)$ and $0<s<1$.

*The left Riemann–Liouville derivative of u at x* ∈ [*a*, *b*] *is defined by*

$$ {}_{RL}\Big(\frac{d}{dx}\Big)^{s}_{a+}[u](x) \;=\; \frac{d}{dx}\, {}_{RL}I^{1-s}_{a+}[u](x) \;=\; \frac{1}{\Gamma(1-s)}\,\frac{d}{dx}\int_a^x \frac{u(t)}{(x-t)^{s}}\,dt \tag{4}$$

*for every value of x such that this derivative exists.*

*Similarly, we may define the right Riemann–Liouville derivative of u at x* ∈ [*a*, *b*] *as*

$$ {}_{RL}\Big(\frac{d}{dx}\Big)^{s}_{b-}[u](x) \;=\; -\frac{d}{dx}\, {}_{RL}I^{1-s}_{b-}[u](x) \;=\; \frac{-1}{\Gamma(1-s)}\,\frac{d}{dx}\int_x^b \frac{u(t)}{(t-x)^{s}}\,dt \tag{5}$$

*for every value of x such that this derivative exists.*

Then we introduce the distributional Riemann–Liouville fractional derivative as in [25]: a refinement of the Riemann–Liouville fractional derivative, obtained by plainly substituting the pointwise classical derivative with the distributional derivative.

#### **Definition 3.** *(Distributional Riemann–Liouville fractional derivative)*

Assume $u\in L^1(a,b)$ and $0<s<1$. The distributional left Riemann–Liouville derivative of $u$, ${}_{RL}D^s_{a+}[u]\in\mathcal{D}'(a,b)$, is defined by

$$ {}_{RL}D^s_{a+}[u](x) \;=\; D_x\, {}_{RL}I^{1-s}_{a+}[u](x) \;=\; \frac{1}{\Gamma(1-s)}\, D_x \int_a^x \frac{u(t)}{(x-t)^{s}}\,dt\,.\tag{6}$$

Similarly, we may define the distributional right Riemann–Liouville derivative of $u$, ${}_{RL}D^s_{b-}[u]\in\mathcal{D}'(a,b)$, as

$$ {}_{RL}D^s_{b-}[u](x) \;=\; -D_x\, {}_{RL}I^{1-s}_{b-}[u](x) \;=\; \frac{-1}{\Gamma(1-s)}\, D_x \int_x^b \frac{u(t)}{(t-x)^{s}}\,dt\,.\tag{7}$$

**Remark 1.** The distributional Riemann–Liouville fractional derivatives $D^s_\pm$ provide a suitable refinement of the classical ones $(d/dx)^s_\pm$ for the purposes of the present paper. However, we emphasize that they coincide on every $L^1$ function $u$ such that $I^{1-s}[u]$ is absolutely continuous, as was always the case in the classical applications of fractional derivatives ([28–30]).

In Lemma 5 below, we examine the case when the pointwise derivative $d/dx$ exists a.e. and defines an $L^1$ function coinciding with the distributional derivative $D$ of ${}_{RL}I^{1-s}_{a+}[u]$ and of ${}_{RL}I^{1-s}_{b-}[u]$, respectively.

In the sequel, we omit the prefix $RL$ without loss of information, since in this paper we do not consider any fractional derivative other than the Riemann–Liouville one; we also omit the endpoint suffixes $a+$ and $b-$ whenever they are clearly established.

Therefore, we will write shortly $I^s_+[u]$, $I^s_-[u]$, $D^s_+[u]$, $D^s_-[u]$, $(d/dx)^s_+[u]$ and $(d/dx)^s_-[u]$, respectively, in place of ${}_{RL}I^s_{a+}[u]$, ${}_{RL}I^s_{b-}[u]$, ${}_{RL}D^s_{a+}[u]$, ${}_{RL}D^s_{b-}[u]$, ${}_{RL}(d/dx)^s_{a+}[u]$ and ${}_{RL}(d/dx)^s_{b-}[u]$.

One of the disadvantages of the one-sided Riemann–Liouville derivative and integral, as defined above, is the fact that only one endpoint of the interval plays a role (see (56) and (57)), since they are "anisotropic" definitions (see [31] and Lemma 6). On the other hand, if we aim to exploit such definitions in a variational context, we have to deal with boundary conditions, so that both interval endpoints must play a role ([32]). Therefore, we introduce the bilateral fractional integral and derivative, keeping separate their "even" and "odd" parts:

**Definition 4.** For every $u\in L^1(a,b)$ we set the even and odd versions of the bilateral fractional integrals and derivatives:

$$I^s_e[u](x) \;:=\; \frac{1}{2}\big(I^s_+[u](x) + I^s_-[u](x)\big) \;=\; \frac{1}{2\,\Gamma(s)}\int_a^b \frac{u(t)}{|x-t|^{1-s}}\,dt \;=\; \frac{\big(u * 1/|t|^{1-s}\big)(x)}{2\,\Gamma(s)}\,,\tag{8}$$

$$D^s_e[u](x) \;:=\; D_x\, I^{1-s}_e[u](x) \;=\; \frac{1}{2}\big(D^s_+[u](x) - D^s_-[u](x)\big) \;=\; D_x\, \frac{\big(u * 1/|t|^{s}\big)(x)}{2\,\Gamma(1-s)}\,,\tag{9}$$

$$I^s_o[u](x) \;:=\; \frac{1}{2}\big(I^s_+[u](x) - I^s_-[u](x)\big) \;=\; \frac{1}{2\,\Gamma(s)}\int_a^b u(t)\,\frac{\operatorname{sign}(x-t)}{|x-t|^{1-s}}\,dt \;=\; \frac{\big(u * \operatorname{sign}(t)/|t|^{1-s}\big)(x)}{2\,\Gamma(s)}\,,\tag{10}$$

$$D^s_o[u](x) \;:=\; D_x\, I^{1-s}_o[u](x) \;=\; \frac{1}{2}\big(D^s_+[u](x) + D^s_-[u](x)\big) \;=\; D_x\, \frac{\big(u * \operatorname{sign}(t)/|t|^{s}\big)(x)}{2\,\Gamma(1-s)}\,.\tag{11}$$

so that

$$I^s_+[u] \;=\; I^s_e[u] + I^s_o[u]\,, \qquad D^s_+[u] \;=\; D^s_e[u] + D^s_o[u]\,,\tag{12}$$

$$I^s_-[u] \;=\; I^s_e[u] - I^s_o[u]\,, \qquad D^s_-[u] \;=\; D^s_o[u] - D^s_e[u]\,.\tag{13}$$

Whenever $(a,b)\neq\mathbb{R}$, the convolution in (8)–(11) has to be understood, without relabeling, as the convolution of the trivial extension of $u$ (still an $L^1(\mathbb{R})$ function, with support in $[a,b]$) with either $1/|t|^{1-s}$ or $\operatorname{sign}(t)/|t|^{1-s}$ (both belonging to $L^1_{loc}(\mathbb{R})$). Also $I^s_\pm[u](x)$, $I^s_e[u](x)$, $I^s_o[u](x)$ have to be understood, without relabeling, as the natural extensions for $x\in\mathbb{R}\setminus[a,b]$, provided by the convolution of the trivial extension of $u$ with the corresponding kernels (here, $H$ denotes the Heaviside function):

$$I^s_+[u] = u * \frac{H(x)}{\Gamma(s)\,|x|^{1-s}}\,, \qquad I^s_-[u] = u * \frac{H(-x)}{\Gamma(s)\,|x|^{1-s}}\,, \qquad \text{for every } x\in\mathbb{R}\,,\tag{14}$$

$$I^s_e[u] = u * \frac{1}{2\,\Gamma(s)\,|x|^{1-s}}\,, \qquad I^s_o[u] = u * \frac{\operatorname{sign}(x)}{2\,\Gamma(s)\,|x|^{1-s}}\,, \qquad \text{for every } x\in\mathbb{R}\,,\tag{15}$$

namely

$$I^s_+[u](x) = \frac{1}{\Gamma(s)}\int_a^b \frac{u(t)\,H(x-t)}{|x-t|^{1-s}}\,dt \qquad \text{for every } x\in\mathbb{R}\,,\tag{16}$$

$$I^s_-[u](x) = \frac{1}{\Gamma(s)}\int_a^b \frac{u(t)\,H(t-x)}{|x-t|^{1-s}}\,dt \qquad \text{for every } x\in\mathbb{R}\,,\tag{17}$$

$$I^s_e[u](x) = \frac{1}{2\,\Gamma(s)}\int_a^b \frac{u(t)}{|x-t|^{1-s}}\,dt \qquad \text{for every } x\in\mathbb{R}\,,\tag{18}$$

$$I^s_o[u](x) = \frac{1}{2\,\Gamma(s)}\int_a^b \frac{u(t)\,\operatorname{sign}(x-t)}{|x-t|^{1-s}}\,dt \qquad \text{for every } x\in\mathbb{R}\,.\tag{19}$$

Moreover

$$\operatorname{spt} I^s_+[u] \subset [a,+\infty)\,, \qquad \operatorname{spt} I^s_-[u] \subset (-\infty,b]\,.\tag{20}$$
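The kernel formulas (16)–(19), the decompositions (12)–(13) and the support property (20) can be illustrated with a small numerical sketch (the test function $e^t$, the interval $(0,1)$ and the sample points are our own choices):

```python
import math

def kernel_int(u, a, b, x, s, weight, n=50_000):
    """Midpoint approximation of
    (1/Gamma(s)) * integral_a^b u(t) * weight(x-t) * |x-t|^(s-1) dt."""
    h = (b - a) / n
    acc = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        acc += u(t) * weight(x - t) * abs(x - t) ** (s - 1)
    return acc * h / math.gamma(s)

H = lambda r: 1.0 if r >= 0 else 0.0   # Heaviside function
a, b, s = 0.0, 1.0, 0.7
u = lambda t: math.exp(t)              # test function (our choice)
x = 0.4                                # interior sample point

Ip = kernel_int(u, a, b, x, s, H)                                      # (16)
Im = kernel_int(u, a, b, x, s, lambda r: H(-r))                        # (17)
Ie = kernel_int(u, a, b, x, s, lambda r: 0.5)                          # (18)
Io = kernel_int(u, a, b, x, s, lambda r: 0.5 * math.copysign(1.0, r))  # (19)

# The even/odd splitting reproduces the one-sided integrals, cf. (12)-(13):
print(abs(Ie - 0.5 * (Ip + Im)), abs(Io - 0.5 * (Ip - Im)))  # both tiny
# Support property (20): I^s_-[u] vanishes to the right of b.
print(kernel_int(u, a, b, 1.5, s, lambda r: H(-r)))          # 0.0
```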

**Remark 2.** In (9) and (11), $D_x$ denotes the distributional derivative in $\mathcal{D}'(\mathbb{R})$, but obviously its restriction as a distribution on the open set $(a,b)$ is understood whenever one works in the bounded interval $(a,b)$.

Up to a normalization constant (see (25)), $I^s_e[u]$ is called the Riesz potential of $u$ ([1,9]). These fractional integrals $I^s_+[u]$, $I^s_-[u]$, $I^s_e[u]$, $I^s_o[u]$ turn out to be in $L^p_{loc}(\mathbb{R})$ (thus in $L^p(I)$ on every bounded interval $I$) for every $1\le p<1/(1-s)$, since they are convolutions of $u\in L^1(\mathbb{R})$ with an $L^p_{loc}(\mathbb{R})$ kernel. Moreover, we have the next result.

**Lemma 1.** If $-\infty<a<b<+\infty$, $u\in L^\infty(\mathbb{R})$, $\operatorname{spt}(u)\subset[a,b]$ and $0<s<1$, then $I^s_+[u]$, $I^s_-[u]$, $I^s_e[u]$, $I^s_o[u]$ belong to $L^\infty(\mathbb{R})\cap C^{0,s}$.

**Proof.** See Lemmas 2.5 and 3.6 (iii) in [1].

The behavior of all the above operators, as $s\to 0^+$ or $s\to 1^-$, is clarified by the subsequent lemmas of the present section, whose proofs can be found in [1].

Notice that both $1/|x|^s$ and $\operatorname{sign}(x)/|x|^s$ belong to $L^1_{loc}(\mathbb{R})$ for $0<s<1$; hence, the convolution with any $L^1$ function is well defined and belongs to $L^1_{loc}(\mathbb{R})$; moreover, $\operatorname{sign}(x)/|x|^s \to \mathrm{p.v.}\,\frac{1}{x}$ in $\mathcal{S}'$ as $s\to 1^-$, while $1/|x|^s$ has no limit in $\mathcal{S}'$ as $s\to 1^-$, where $\mathcal{S}'$ denotes the space of tempered distributions.

Fractional derivatives degenerate developing singularities as *s* → 1−; nevertheless, they can be made convergent to meaningful limits by suitable normalization.

**Lemma 2.** Assume $0<s<1$, $u\in W^{1,2}_G(\mathbb{R})$, and choose the constants in the Fourier transform such that $\hat{u}(\xi) = \int_{\mathbb{R}} \exp(-i\xi x)\,u(x)\,dx$. Then

$$\frac{D^s_e[u]}{\sin(s\,\pi/2)} \;\longrightarrow\; \mathcal{F}^{-1}\big\{\, i\,\xi\,\hat{u}(\xi)\,\big\} = Du \qquad \text{in } L^2(\mathbb{R}) \quad \text{as } s\to 1^-\,,\tag{21}$$

$$\frac{D^s_o[u]}{\cos(s\,\pi/2)} \;\longrightarrow\; \mathcal{F}^{-1}\big\{\, |\xi|\,\hat{u}(\xi)\,\big\} \qquad \text{in } L^2(\mathbb{R}) \quad \text{as } s\to 1^-\,,\tag{22}$$

$$D^s_+[u] \;\longrightarrow\; Du \qquad \text{in } L^2(\mathbb{R}) \quad \text{as } s\to 1^-\,,\tag{23}$$

$$D^s_-[u] \;\longrightarrow\; -Du \qquad \text{in } L^2(\mathbb{R}) \quad \text{as } s\to 1^-\,.\tag{24}$$

**Remark 3.** Notice that relations (21), (23) and (24) tell us that, as $s\to 1^-$, both $D^s_+[u]$ (left Riemann–Liouville fractional derivative of order $s$ of $u$) and $D^s_e[u]$ (even Riemann–Liouville fractional derivative of order $s$ of $u$) converge in $L^2$ to the distributional derivative $Du$, while $D^s_-[u]$ converges in $L^2$ to $-Du$.

On the other hand, relationship (22) means that $D^s_o[u]$ (odd Riemann–Liouville fractional derivative of order $s$ of $u$) fades as $s\to 1^-$ but, when suitably normalized as $D^s_o[u]/\cos(s\,\pi/2)$, it converges in $L^2$ to the Gagliardo fractional derivative of order 1 of $u$, say $(-\Delta)^{1/2}u := \mathcal{F}^{-1}\big\{\,|\xi|\,\hat{u}(\xi)\,\big\}$.

Fractional integrals degenerate, developing singularities, as $s\to 0^+$; indeed, the convolution kernel fulfills $|x|^{s-1}/\big(2\,\Gamma(s)\cos(s\pi/2)\big) \to \delta$ in $\mathcal{S}'$ as $s\to 0^+$; nevertheless, fractional integrals converge to meaningful limits after suitable normalization.

**Lemma 3.** Assume $0<s<1$, $u\in L^1(\mathbb{R})$ with $\hat{u}\in L^1(\mathbb{R})$, and set the constants in the Fourier transform such that $\hat{u}(\xi) = \int_{\mathbb{R}} \exp(-i\xi x)\,u(x)\,dx$. Then

$$\frac{1}{\cos(s\,\pi/2)}\; I^s_e[u](x) \;\longrightarrow\; u(x) \qquad \text{uniformly in } \mathbb{R} \quad \text{as } s\to 0^+\,,\tag{25}$$

$$\frac{\pi}{\sin(s\,\pi/2)}\; I^s_o[u](x) \;\longrightarrow\; (\mathrm{p.v.}\,1/x) * u \qquad \text{in } \mathcal{S}'(\mathbb{R}) \quad \text{as } s\to 0^+\,.\tag{26}$$

**Lemma 4.** Assume $0<s<1$, $u\in L^1(\mathbb{R})$. If $I^{1-s}_o[u]\in AC_{loc}(\mathbb{R})$, then

$$\frac{1}{\big(\cos(s\,\pi/2)\big)^2}\; I^s_e\big[\, D^s_o[u]\,\big] \;=\; u\,.\tag{27}$$

If $I^{1-s}_e[u]\in AC_{loc}(\mathbb{R})$, then

$$\frac{1}{\big(\sin(s\,\pi/2)\big)^2}\; I^s_o\big[\, D^s_e[u]\,\big] \;=\; u\,.\tag{28}$$

If $I^{1-s}_o\big[I^s_e[u]\big]\in AC_{loc}(\mathbb{R})$, then

$$\frac{1}{\big(\cos(s\,\pi/2)\big)^2}\; D^s_o\big[\, I^s_e[u]\,\big] \;=\; u\,.\tag{29}$$

If $I^{1-s}_e\big[I^s_o[u]\big]\in AC_{loc}(\mathbb{R})$, then

$$\frac{1}{\big(\sin(s\,\pi/2)\big)^2}\; D^s_e\big[\, I^s_o[u]\,\big] \;=\; u\,.\tag{30}$$

**Lemma 5.** Assume $0<s<1$, $u\in L^1(\mathbb{R})$. Then

$$D^s_+\big[\, I^s_+[u]\,\big] \;=\; u\,,\tag{31}$$

$$D^s_-\big[\, I^s_-[u]\,\big] \;=\; u\,.\tag{32}$$

If in addition $I^{1-s}_+[u]\in AC_{loc}(\mathbb{R})$, then

$$I^s_+\big[\, D^s_+[u]\,\big] \;=\; u\,.\tag{33}$$

If in addition $I^{1-s}_-[u]\in AC_{loc}(\mathbb{R})$, then

$$I^s_-\big[\, D^s_-[u]\,\big] \;=\; u\,.\tag{34}$$

**Remark 4.** The statements of Lemmas 3–5, which are proved in [1] for the classical fractional derivatives $(d/dx)^s_\pm$, still hold true in the present formulation with the corresponding distributional derivatives $D^s_\pm$ (left, right, even, and odd) by exactly the same proofs, since the assumptions ensure that all derivatives are evaluated on locally absolutely continuous functions.

**Remark 5.** Notice that, when $\mathbb{R}$ is replaced by a bounded interval, the identities (27), (28), (33) and (34) require an additional correction term taking boundary values into account (see (52) and (53) in Theorem 1), whereas (31) and (32) remain true (see (136), (137)).
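Identity (31) admits a quick numerical check in the simplest case $u\equiv 1$: by Definition 3, $D^s_+[I^s_+[1]] = D\, I^{1-s}_+[I^s_+[1]]$, so it suffices to verify that $I^{1-s}_+[I^s_+[1]](x) = x-a$, whose derivative is $1$. The sketch below does this with a midpoint rule (the order $s=0.3$ and the interval $(0,1)$ are our own choices; note that $I^s_{0+}[1](t) = t^s/\Gamma(s+1)$ directly from Definition 1):

```python
import math

def rl_int(u, a, x, s, n=400_000):
    """Midpoint-rule approximation of I^s_{a+}[u](x)."""
    h = (x - a) / n
    acc = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        acc += u(t) * (x - t) ** (s - 1)
    return acc * h / math.gamma(s)

a, s, x = 0.0, 0.3, 1.0
# By Definition 1, I^s_{0+}[1](t) = t^s / Gamma(s+1).
v = lambda t: t ** s / math.gamma(s + 1)

# I^{1-s}_{0+}[ I^s_{0+}[1] ](x) should equal the plain primitive x - a = x,
# whose distributional derivative is 1: this is identity (31) for u = 1.
print(rl_int(v, a, x, 1 - s), x)   # approximately equal
```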

Symmetries of even or odd functions are inherited neither by fractional integrals nor by fractional derivatives. Nevertheless, the next lemma holds true.

**Lemma 6.** For every $s\in(0,1)$, $0<a\le+\infty$ and $v\in L^1(-a,a)$, by setting

$$\check{v}(x) \;=\; v(-x)\,,\tag{35}$$

we obtain

$$I^s_+[\check{v}](x) \;=\; I^s_-[v](-x) \qquad \text{on } (-a,a)\,,\tag{36}$$

$$D^s_+[\check{v}](x) \;=\; D^s_-[v](-x) \qquad \text{on } (-a,a)\,.\tag{37}$$

For every $s\in(0,1)$, $0<a\le+\infty$ and every even function $v\in L^1(-a,a)$, we get

$$I^s_+[v](x) \;=\; I^s_+[\check{v}](x) \;=\; I^s_-[v](-x) \qquad \text{on } (-a,a)\,,\tag{38}$$

$$D^s_+[v](x) \;=\; D^s_+[\check{v}](x) \;=\; D^s_-[v](-x) \qquad \text{on } (-a,a)\,.\tag{39}$$

For every $s\in(0,1)$ and every odd function $v\in L^1(-a,a)$, we obtain

$$I^s_+[v](x) \;=\; -I^s_+[\check{v}](x) \;=\; -I^s_-[v](-x) \qquad \text{on } (-a,a)\,,\tag{40}$$

$$D^s_+[v](x) \;=\; -D^s_+[\check{v}](x) \;=\; -D^s_-[v](-x) \qquad \text{on } (-a,a)\,.\tag{41}$$

**Proof.**

$$I^s_+[\check{v}](x) \;=\; \frac{1}{\Gamma(s)}\int_{-a}^{x} \frac{v(-t)}{(x-t)^{1-s}}\,dt \;=\; \frac{1}{\Gamma(s)}\int_{-a}^{x} \frac{v(-t)}{\big(-t-(-x)\big)^{1-s}}\,dt \;\overset{\sigma=-t}{=}$$
$$=\; -\frac{1}{\Gamma(s)}\int_{a}^{-x} \frac{v(\sigma)}{\big(\sigma-(-x)\big)^{1-s}}\,d\sigma \;=\; \frac{1}{\Gamma(s)}\int_{-x}^{a} \frac{v(\sigma)}{\big(\sigma-(-x)\big)^{1-s}}\,d\sigma \;=\; I^s_-[v](-x)\,.$$

By inserting $1-s$ in place of $s$ in (36), we obtain (37) via

$$D^s_+[\check{v}](x) \;=\; D_x\, I^{1-s}_+[\check{v}](x) \;=\; D_x\Big( I^{1-s}_-[v](-x) \Big) \;=\; -\big(D\, I^{1-s}_-[v]\big)(-x) \;=\; D^s_-[v](-x)\,,$$

where the last equality follows from (7).

Even $v$ entails $v(x)=v(-x)$, $v=\check{v}$, $I^s_+[v](x)=I^s_+[\check{v}](x)$ and $D^s_+[v](x)=D^s_+[\check{v}](x)$; hence, (36) and (37) entail, respectively, (38) and (39).

Odd $v$ entails $v(x)=-v(-x)$, $v=-\check{v}$, $I^s_+[v](x)=-I^s_+[\check{v}](x)$ and $D^s_+[v](x)=-D^s_+[\check{v}](x)$; hence, (36) and (37) entail, respectively, (40) and (41).
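The reflection identity (36) can be verified numerically for a non-symmetric $v$ (a sketch; the choices $v(t)=e^t$, $s=0.6$, $(-a,a)=(-1,1)$ and the sample point are ours):

```python
import math

def rl_left(u, a, x, s, n=300_000):
    """Midpoint approximation of I^s_{a+}[u](x)."""
    h = (x - a) / n
    acc = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        acc += u(t) * (x - t) ** (s - 1)
    return acc * h / math.gamma(s)

def rl_right(u, b, x, s, n=300_000):
    """Midpoint approximation of I^s_{b-}[u](x)."""
    h = (b - x) / n
    acc = 0.0
    for k in range(n):
        t = x + (k + 0.5) * h
        acc += u(t) * (t - x) ** (s - 1)
    return acc * h / math.gamma(s)

aa, s = 1.0, 0.6                 # interval (-aa, aa) = (-1, 1)
v = lambda t: math.exp(t)        # deliberately non-symmetric v
vcheck = lambda t: v(-t)         # reflected function, cf. (35)

x = 0.3
lhs = rl_left(vcheck, -aa, x, s)   # I^s_+[v-check](x)
rhs = rl_right(v, aa, -x, s)       # I^s_-[v](-x)
print(lhs, rhs)                    # equal up to rounding, as (36) predicts
```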

The results listed above (mainly Lemmas 3 and 4, proved in [1]) lead to the natural definition of the operators representing the bilateral version of the Riemann–Liouville fractional derivatives and integrals, as stated below. Results similar to the ones in Lemma 6 can also be found in [33].

**Definition 5.** *(Bilateral Riemann–Liouville fractional integral of order s)*

$$I^s[u] \;=\; \frac{1}{\cos(s\,\pi/2)}\; I^s_e[u] \;=\; \frac{1}{2\,\cos(s\pi/2)}\,\big( I^s_+[u] + I^s_-[u] \big) \;=\; \frac{1}{2\,\Gamma(s)\,\cos(s\pi/2)} \int_a^b \frac{u(t)}{|x-t|^{1-s}}\,dt\,.$$

**Definition 6.** *(Bilateral Riemann–Liouville fractional derivative of order s)*

$$D^s[u] \;=\; \frac{1}{\cos(s\,\pi/2)}\; D^s_o[u] \;=\; \frac{1}{2\,\cos(s\pi/2)}\,\big( D^s_+[u] + D^s_-[u] \big)\,.$$

#### **3. The Bilateral Fractional Sobolev Space**

From now on, we consider only functions defined on a bounded interval $(a,b)$. As already mentioned in [25], possible naïve definitions of bilateral fractional Sobolev spaces could be set as $\mathcal{U}^{s,1} = \mathcal{U}^{s,1}_+ \cap \mathcal{U}^{s,1}_-$, where $s\in(0,1)$ and

$$\begin{aligned} \mathcal{U}\_+^{s,1} &= \{ u \in L^1(a,b) \mid (d/dx)\_+^s \,\, u \in L^1(a,b) \}, \\ \mathcal{U}\_-^{s,1} &= \{ u \in L^1(a,b) \mid (d/dx)\_-^s \,\, u \in L^1(a,b) \}, \end{aligned}$$

that is, a definition which refers to $L^1$-functions whose classical Riemann–Liouville fractional derivative of prescribed order $s\in(0,1)$ exists finite almost everywhere and belongs to $L^1(a,b)$.

Actually, if the classical Riemann–Liouville fractional derivative $(d/dx)^s_+[u](x)$ of $u$ exists for a.e. $x$ for some $s\in(0,1)$, then $I^{1-s}_{a+}[u]$ is differentiable almost everywhere; nevertheless, such an a.e. derivative does not provide complete information about the distributional derivative of the fractional integral $I^{1-s}_{a+}[u]$ when $I^{1-s}_{a+}[u]$ is not an absolutely continuous function. Thus, the differential properties are not completely described by the pointwise fractional derivative, though it exists almost everywhere in $(a,b)$. This shows that the previous definitions $\mathcal{U}^{s,1}_+$ and $\mathcal{U}^{s,1}_-$ are not suitable for obtaining an integration by parts formula, whereas the appropriate ones refer to the distributional Riemann–Liouville fractional derivatives $D^s_\pm[u]$ of Definition 3; namely, they are given by

$$\mathfrak{U}^{s,1}_+ = \{\, u\in L^1(a,b) \mid D^s_+ u \in L^1(a,b) \,\}\,,\tag{42}$$

$$\mathfrak{U}^{s,1}_- = \{\, u\in L^1(a,b) \mid D^s_- u \in L^1(a,b) \,\}\,.\tag{43}$$
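A concrete instance of this phenomenon can be obtained from Example 1, applied with $1-s$ in place of $s$ (this worked example is our illustration, not taken from [25]):

```latex
% Take (a,b) = (-1,1) and  u(x) = H(x)\, x^{s-1} \in L^1(-1,1).
% Example 1, with 1-s in place of s, gives
I^{1-s}_{(-1)+}[u](x) \;=\; \Gamma(s)\, H(x)\,,
% a piecewise constant function with a jump at x = 0. Hence the classical
% pointwise derivative exists a.e. and vanishes:
\Big(\frac{d}{dx}\Big)^{s}_{+}[u] \;=\; 0 \;\in\; L^1(-1,1)
\qquad\Longrightarrow\qquad u \in \mathcal{U}^{s,1}_{+}\,,
% while the distributional derivative is a Dirac mass:
D^{s}_{+}[u] \;=\; \Gamma(s)\,\delta_{0} \;\notin\; L^1(-1,1)
\qquad\Longrightarrow\qquad u \notin \mathfrak{U}^{s,1}_{+}\,.
```

Thus the naïve space sees no obstruction, while the distributional definition correctly detects the jump of $I^{1-s}_{(-1)+}[u]$.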

Therefore, to develop a satisfactory theory of fractional Sobolev spaces, we introduced in [25] a more effective function space, by defining the fractional Sobolev spaces related to one-sided fractional derivatives, which are recalled in Definition 7 below, where we confine ourselves to the case $p=1$.

**Definition 7.** *We recall the definitions of Riemann–Liouville fractional Sobolev spaces related to one-sided fractional derivatives, as introduced in [25]:*

$$W^{s,1}_+(a,b) := \{\, u\in L^1(a,b) \mid I^{1-s}_+[u] \in W^{1,1}_G(a,b) \,\} \;=\; \mathfrak{U}^{s,1}_+\,,\tag{44}$$

$$W^{s,1}_-(a,b) := \{\, u\in L^1(a,b) \mid I^{1-s}_-[u] \in W^{1,1}_G(a,b) \,\} \;=\; \mathfrak{U}^{s,1}_-\,.\tag{45}$$

Explicitly, the properties $u\in W^{s,1}_\pm(a,b)$ entail, respectively, that the distributional derivatives $D\,I^{1-s}_\pm[u]$ belong to $L^1(a,b)$; thus $W^{s,1}_+(a,b) = \mathfrak{U}^{s,1}_+$ and $W^{s,1}_-(a,b) = \mathfrak{U}^{s,1}_-$.

Here, we introduce also the "even" and "odd" fractional Sobolev spaces.

**Definition 8.** *The even/odd Riemann–Liouville fractional Sobolev spaces are*

$$W^{s,1}_e(a,b) := \{\, u\in L^1(a,b) \mid I^{1-s}_e[u] \in W^{1,1}_G(a,b) \,\}\,,\tag{46}$$

$$W^{s,1}_o(a,b) := \{\, u\in L^1(a,b) \mid I^{1-s}_o[u] \in W^{1,1}_G(a,b) \,\}\,.\tag{47}$$

Eventually, we define the bilateral Riemann–Liouville fractional Sobolev spaces, with the aim of achieving a symmetric framework.

#### **Definition 9.** *The (Bilateral) Riemann–Liouville Fractional Sobolev spaces.*

For every $s\in(0,1)$, we set $W^{s,1}(a,b) = W^{s,1}_+(a,b) \cap W^{s,1}_-(a,b)$, that is,

$$\mathcal{W}^{s,1}(a,b) := \{ u \in L^1(a,b) \mid I\_+^{1-s}[u] \in \mathcal{W}\_G^{1,1}(a,b) \text{ and } I\_-^{1-s}[u] \in \mathcal{W}\_G^{1,1}(a,b) \}. \tag{48}$$

*Notice that, concerning Definition 9, by exploiting* (12) *and* (13)*, we get also*

$$W^{s,1}(a,b) = W^{s,1}_+(a,b) \cap W^{s,1}_-(a,b) \;=\; W^{s,1}_o(a,b) \cap W^{s,1}_e(a,b)\,.\tag{49}$$

**Theorem 1.** *Assume* 0 < *s* < 1 *and* (*a*, *b*) *is bounded. Then, the (bilateral) Riemann–Liouville fractional Sobolev space Ws*,1(*a*, *b*) *(Definition 9) is a normed space when endowed with the natural norm*

$$\|u\|_{W^{s,1}} := \|u\|_{L^1(a,b)} + \|D^s_{a+}[u]\|_{L^1(a,b)} + \|D^s_{b-}[u]\|_{L^1(a,b)}\,.\tag{50}$$

The set $W^{s,1}(a,b)$ is a Banach space and, for every $q\in[1,\,1/(1-s))$, there is $C=C(s,q,a,b)$ such that

$$\|u\|_{L^q(a,b)} \le C(s,q,a,b)\, \|u\|_{\mathcal{W}^{s,1}(a,b)}\,. \tag{51}$$

*Every $u \in \mathcal{W}^{s,1}(a,b)$ can be represented by both*

$$u(x) = I_{a+}^s\big[D_{a+}^s[u]\big](x) + \frac{I_{a+}^{1-s}[u](a)}{\Gamma(s)}\,(x-a)^{s-1} \qquad \text{a.e. } x \in (a,b), \tag{52}$$

*and*

$$u(x) = I_{b-}^s\big[D_{b-}^s[u]\big](x) + \frac{I_{b-}^{1-s}[u](b)}{\Gamma(s)}\,(b-x)^{s-1} \qquad \text{a.e. } x \in (a,b). \tag{53}$$

**Proof.** The map $u \mapsto \|u\|_{\mathcal{W}^{s,1}}$ is a norm on $\mathcal{W}^{s,1}(a,b)$. Indeed,

$\|u\|_{L^1(a,b)} + \|D_{a+}^s[u]\|_{L^1(a,b)}$ is equivalent to the norm $\|I_{a+}^{1-s}[u]\|_{W_G^{1,1}(a,b)}$, since $I_{a+}^{1-s}[u]$ belongs to $W_G^{1,1}$, $D_{a+}^s[u] = \frac{d}{dx}\, I_{a+}^{1-s}[u] = D\, I_{a+}^{1-s}[u]$ and $\|I_{a+}^{1-s}[u]\|_{L^1(a,b)} \le C\, \|u\|_{L^1(a,b)}$; analogously, $\|u\|_{L^1(a,b)} + \|D_{b-}^s[u]\|_{L^1(a,b)}$ is a norm for $I_{b-}^{1-s}[u]$, due to $I_{b-}^{1-s}[u] \in W_G^{1,1}$, $D_{b-}^s[u] = \frac{d}{dx}\, I_{b-}^{1-s}[u] = D\, I_{b-}^{1-s}[u]$ and $\|I_{b-}^{1-s}[u]\|_{L^1(a,b)} \le C\, \|u\|_{L^1(a,b)}$.

The completeness of $\mathcal{W}^{s,1}(a,b)$ with respect to this norm when $(a,b)$ is bounded follows from the completeness of $L^1$ and $W_G^{1,1}$, together with the fact that $u_k$ is a Cauchy sequence in the norm of $\mathcal{W}^{s,1}(a,b)$ if and only if $u_k$ is a Cauchy sequence in $L^1(a,b)$ and $I_\pm^{1-s}[u_k]$ are Cauchy sequences in $W_G^{1,1}$.

Estimate (51) and representations (52) and (53) follow from (129), (130) and (135) of Proposition 12, which is shown in Section 5. □

**Remark 6.** *Thanks to $\|I_{a+}^{1-s}[u]\|_{L^1(a,b)} \le C\,\|u\|_{L^1(a,b)}$ and $\|I_{b-}^{1-s}[u]\|_{L^1(a,b)} \le C\,\|u\|_{L^1(a,b)}$, we have replaced the terms $\|I_\pm^{1-s}[u]\|_{W^{1,1}(a,b)}$ in the norm (50) by $\|D\, I_\pm^{1-s}[u]\|_{L^1(a,b)}$, where $D$ denotes the distributional derivative.*

*Obviously, the terms $\|D_+^s[u]\|_{L^1(a,b)}$ and $\|D_-^s[u]\|_{L^1(a,b)}$ in the norm (50) could alternatively be replaced by $\|D_e^s[u]\|_{L^1(\mathbb{R})}$ and $\|D_o^s[u]\|_{L^1(\mathbb{R})}$, referring to (8), (16) and (17), still achieving an equivalent norm on $\mathcal{W}^{s,1}(a,b)$.*

**Example 2.** *For every $s \in (0,1)$, the constant functions and $v(x) = x(1-x)$ both belong to the space $\mathcal{W}^{s,1}(a,b)$. The spaces $C_0^\infty(a,b)$ (test functions on $(a,b)$) and $C^1([a,b])$ (continuously differentiable functions) are contained in $\mathcal{W}^{s,1}(a,b)$.*

**Example 3.** *For every $0 < s < 1$, the discontinuous piecewise constant Heaviside function $H(x)$ belongs to $\mathcal{W}^{s,1}(-1,1) \setminus W_G^{1,1}(-1,1)$.*

*Indeed, both $I_+^{1-s}[H](x) = H(x)\, \dfrac{|x|^{1-s}}{\Gamma(2-s)}$ and $I_-^{1-s}[H](x) = \dfrac{H(x)\,(1-x)^{1-s} + H(-x)\big(|1-x|^{1-s} - |x|^{1-s}\big)}{\Gamma(2-s)}$ belong to $W_G^{1,1}(-1,1)$.*
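The two closed forms in Example 3 lend themselves to a quick numerical check. The sketch below is our own illustration (not from the paper): the helper `frac_int_plus` approximates the forward integral $I_+^{1-s}[u](x)$ with a composite midpoint rule, chosen here because the midpoint rule never evaluates the weakly singular kernel $(x-t)^{-s}$ at $t = x$; the grid size and tolerances are arbitrary choices.

```python
import math

def frac_int_plus(u, s, a, x, n=100_000):
    """Forward Riemann-Liouville integral I_+^{1-s}[u](x) on (a, x),
    approximated by the composite midpoint rule (which never evaluates
    the weakly singular kernel (x - t)^{-s} at t = x)."""
    h = (x - a) / n
    total = sum(u(a + (k + 0.5) * h) * (x - a - (k + 0.5) * h) ** (-s)
                for k in range(n))
    return total * h / math.gamma(1.0 - s)

H = lambda t: 1.0 if t > 0 else 0.0    # Heaviside function on (-1, 1)

s = 0.4
for x in (0.3, 0.7, -0.5):
    approx = frac_int_plus(H, s, -1.0, x)
    exact = H(x) * abs(x) ** (1 - s) / math.gamma(2 - s)
    print(x, approx, exact)
```

For $x < 0$ the approximation vanishes identically, matching $H(x)\,|x|^{1-s}/\Gamma(2-s) = 0$ there.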

**Example 4.** *For every $s \in (0,1)$ and $\beta \ge 0$, the function $x^\beta$ belongs to $\mathcal{W}^{s,1}(0,1)$. This claim is straightforward for $\beta = 0$ and $\beta \ge 1$; we refer to (97) and (98) for $0 < \beta < 1$.*

**Example 5.** *The function $x^{-\alpha}$ with $\alpha > 0$ belongs to $\mathcal{W}^{s,1}(0,1)$ if and only if $0 < \alpha < 1-s$. Indeed, $I_+^{1-s}[x^{-\alpha}](x) = \frac{\Gamma(1-\alpha)}{\Gamma(2-s-\alpha)}\, x^{1-s-\alpha} \in W_G^{1,1}(0,1)$ if $-s-\alpha > -1$, while $I_+^{1-s}[x^{s-1}](x) = \Gamma(s) \in W_G^{1,1}(0,1)$; summarizing, $x^{-\alpha}$ belongs to $\mathcal{W}_+^{s,1}(0,1)$ for $\alpha \le 1-s$. On the other hand, $I_-^{1-s}[x^{-\alpha}]$ belongs to $L^1(0,1)$ for $\alpha < 1$ and is bounded on $(0,1)$ if and only if $\alpha < 1-s$, due to*

$$I_-^{1-s}[x^{-\alpha}](x) = \frac{1}{\Gamma(1-s)} \int_x^1 t^{-\alpha}(t-x)^{-s}\, dt \overset{t=xy}{=} \frac{x^{1-s-\alpha}}{\Gamma(1-s)} \int_1^{1/x} y^{-\alpha}(y-1)^{-s}\, dy\,.$$

*Summarizing, and taking into account Example 4 for $\alpha \le 0$,*

$$x^{-\alpha} \in \mathcal{W}_+^{s,1}(0,1) \cap \mathcal{W}_-^{s,1}(0,1) \quad \text{if and only if} \quad \alpha < 1-s. \tag{54}$$

*In the particular case α* = 1 − *s, we recover*

$$x^{s-1} \in \mathcal{W}_+^{s,1}(0,1) \setminus \mathcal{W}_-^{s,1}(0,1) \tag{55}$$

*with $I_+^{1-s}[x^{s-1}](x) \equiv \Gamma(s)$ and $I_-^{1-s}[x^{s-1}](x)$ unbounded in a right neighborhood of $x = 0$.*
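The formulas of Example 5 can also be probed numerically. The sketch below is our own illustration (the helper `I_plus`, the sample points and the tolerances are assumptions, not part of the paper); the midpoint rule copes with the integrable endpoint singularities of the integrands.

```python
import math

def I_plus(u, sigma, x, n=200_000):
    """I_+^{sigma}[u](x) on (0, x): kernel (x-t)^{sigma-1}/Gamma(sigma),
    midpoint rule (it avoids both endpoint singularities)."""
    h = x / n
    total = sum(u((k + 0.5) * h) * (x - (k + 0.5) * h) ** (sigma - 1.0)
                for k in range(n))
    return total * h / math.gamma(sigma)

s = 0.6
# I_+^{1-s}[t^{s-1}] should equal the constant Gamma(s) on (0, 1):
for x in (0.25, 0.5, 0.9):
    print(x, I_plus(lambda t: t ** (s - 1), 1 - s, x), math.gamma(s))

# General power: I_+^{1-s}[t^{-alpha}](x) = Gamma(1-alpha)/Gamma(2-s-alpha) * x^{1-s-alpha}
alpha, s2, x = 0.2, 0.5, 0.7
approx = I_plus(lambda t: t ** (-alpha), 1 - s2, x)
exact = math.gamma(1 - alpha) / math.gamma(2 - s2 - alpha) * x ** (1 - s2 - alpha)
print(approx, exact)
```

The constancy of $I_+^{1-s}[t^{s-1}]$ across the three sample points is exactly the Beta-integral identity behind (55).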

**Theorem 2.** *(Integration by parts in $\mathcal{W}^{s,1}(a,b)$)*

*The following identities hold true for $0 < s < 1$, $-\infty < a < b < +\infty$:*

$$\begin{cases} \int\_{a}^{b} u(\mathbf{x}) \, D\_{+}^{s}[v](\mathbf{x}) \, d\mathbf{x} = -\int\_{a}^{b} \frac{d}{d\mathbf{x}} u(\mathbf{x}) \, I\_{+}^{1-s}[v](\mathbf{x}) \, d\mathbf{x} + u(b) \, I\_{+}^{1-s}[v](b) \\ \qquad \qquad \qquad \qquad \forall v \in \mathcal{W}\_{+}^{s,1}(a,b), \, \forall u \in \mathcal{W}\_{G}^{1,1}(a,b) \end{cases} \tag{56}$$

$$\begin{cases} \int\_{a}^{b} u(\mathbf{x}) \, D\_{-}^{s}[v](\mathbf{x}) \, d\mathbf{x} = + \int\_{a}^{b} \frac{d}{d\mathbf{x}} u(\mathbf{x}) \, I\_{-}^{1-s}[v](\mathbf{x}) \, d\mathbf{x} + u(a) \, I\_{-}^{1-s}[v](a) \\ \qquad \qquad \qquad \qquad \forall v \in \mathcal{W}\_{-}^{s,1}(a,b), \, \forall u \in \mathcal{W}\_{G}^{1,1}(a,b) \end{cases} \tag{57}$$

$$\begin{cases} \int_{a}^{b} u(x) \, D_{e}^{s}[v](x) \, dx = \\ \quad - \int_{a}^{b} \frac{d}{dx} u(x) \, I_{e}^{1-s}[v](x) \, dx + \frac{1}{2} \left( u(b) \, I_{+}^{1-s}[v](b) - u(a) \, I_{-}^{1-s}[v](a) \right) \\ \qquad \qquad \qquad \qquad \qquad \forall v \in \mathcal{W}^{s,1}(a,b), \, \forall u \in \mathcal{W}_{G}^{1,1}(a,b) \,, \end{cases} \tag{58}$$

$$\begin{cases} \int_{a}^{b} u(x) \, D_{o}^{s}[v](x) \, dx = \\ \quad - \int_{a}^{b} \frac{d}{dx} u(x) \, I_{o}^{1-s}[v](x) \, dx + \frac{1}{2} \left( u(b) \, I_{+}^{1-s}[v](b) + u(a) \, I_{-}^{1-s}[v](a) \right) \\ \qquad \qquad \qquad \qquad \qquad \forall v \in \mathcal{W}^{s,1}(a,b), \, \forall u \in \mathcal{W}_{G}^{1,1}(a,b) \,. \end{cases} \tag{59}$$

**Proof.** Identity (56) follows by (2), (5) and

$$\begin{split} &\int_{a}^{b} u(x) \, D_{+}^{s}[v](x) \, dx = \frac{1}{\Gamma(1-s)} \int_{a}^{b} u(x) \left( \frac{d}{dx} \int_{a}^{x} \frac{v(t)}{(x-t)^{s}} \, dt \right) dx = \\ &= -\frac{1}{\Gamma(1-s)} \int_{a}^{b} \frac{d}{dx} u(x) \left( \int_{a}^{x} \frac{v(t)}{(x-t)^{s}} \, dt \right) dx + u(b) \, \frac{1}{\Gamma(1-s)} \int_{a}^{b} \frac{v(t)}{(b-t)^{s}} \, dt \,. \end{split}$$

Identity (57) follows by (3), (4) and similar computations. Identities (58) and (59) follow by the subtraction and sum of (56) and (57). □

**Remark 7.** *Notice that when $u$ is representable, e.g., under a slightly stronger condition, then we find a more symmetric formulation. For instance, (59) translates into*

$$\begin{cases} \int\_{a}^{b} I\_{o}^{1-s}[w](\mathbf{x}) \, D\_{o}^{s}[v](\mathbf{x}) \, d\mathbf{x} = \\ \quad - \int\_{a}^{b} D\_{o}^{s}[w](\mathbf{x}) \, I\_{o}^{1-s}[v](\mathbf{x}) \, d\mathbf{x} + \frac{1}{2} \left( I\_{o}^{1-s}[w](b) \, I\_{+}^{1-s}[v](b) + I\_{o}^{1-s}[w](a) \, I\_{-}^{1-s}[v](a) \right) \\ \qquad \qquad \qquad \qquad \qquad \qquad \forall \, v, w \in \mathcal{W}^{s,1}(a,b) \,. \end{cases} \tag{60}$$
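As a concrete illustration of Theorem 2, identity (56) can be verified numerically for a smooth pair. The sketch below is our own: it assumes $(a,b) = (0,1)$, $u(x) = \cos(3x)$ and $v(x) = x^2$, for which the fractional quantities have the classical closed forms $D_+^s[v](x) = 2x^{2-s}/\Gamma(3-s)$ and $I_+^{1-s}[v](x) = 2x^{3-s}/\Gamma(4-s)$ (so $I_+^{1-s}[v](0) = 0$ and no boundary term appears at $a$).

```python
import math

# Numerical sanity check of identity (56) on (a, b) = (0, 1) -- our own sketch.
# Test pair: u(x) = cos(3x), v(x) = x^2, with closed forms
#   D_+^s[v](x)     = 2 x^{2-s} / Gamma(3-s)
#   I_+^{1-s}[v](x) = 2 x^{3-s} / Gamma(4-s)

def midpoint(f, a, b, n=20_000):
    """Composite midpoint rule for (mildly singular) smooth integrands."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

s = 0.5
u = lambda x: math.cos(3.0 * x)
du = lambda x: -3.0 * math.sin(3.0 * x)
Dsv = lambda x: 2.0 * x ** (2.0 - s) / math.gamma(3.0 - s)   # D_+^s[v]
Iv = lambda x: 2.0 * x ** (3.0 - s) / math.gamma(4.0 - s)    # I_+^{1-s}[v]

lhs = midpoint(lambda x: u(x) * Dsv(x), 0.0, 1.0)
rhs = -midpoint(lambda x: du(x) * Iv(x), 0.0, 1.0) + u(1.0) * Iv(1.0)
print(lhs, rhs)   # the two sides of (56) agree
```

The agreement is just the classical integration by parts applied to $\int u \, (I_+^{1-s}[v])' \, dx$, which is how (56) is proved above.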

**Lemma 7.** *The strict embedding*

$$W_G^{1,1}(a,b) \subsetneq \mathcal{W}^{s,1}(a,b) \qquad 0 < s < 1 \tag{61}$$

*holds true, with the related uniform estimate: there is a constant $K = K(s,a,b)$ such that*

$$\|u\|_{\mathcal{W}^{s,1}(a,b)} \le K\, \|u\|_{W_G^{1,1}(a,b)}\,. \tag{62}$$

**Proof.** By the computations in Example 3, we know that the Heaviside function belongs to $\mathcal{W}^{s,1}(-1,1)\setminus W_G^{1,1}(-1,1)$; thus, if the embedding holds true, then it is strict.

Recalling the definition ([34]) of the right Caputo fractional derivative $^{C}D_+^s[u]$ and its relationship with the right Riemann–Liouville fractional derivative,

$$^{C}D_{+}^{s}[u](x) := I_{+}^{1-s}[u'](x) = \frac{1}{\Gamma(1-s)} \int_{a}^{x} \frac{u'(t)}{(x-t)^{s}} \, dt \qquad \forall u \in W_{G}^{1,1}(a,b),$$

$$\begin{aligned} ^{RL}D_+^s[u](x) &= {}^{C}D_+^s[u](x) + \frac{u(a)}{\Gamma(1-s)}\,(x-a)^{-s} = \\ &= I_+^{1-s}[u'](x) + \frac{u(a)}{\Gamma(1-s)}\,(x-a)^{-s} \qquad \forall u \in W_G^{1,1}(a,b), \end{aligned}$$

and taking into account

$$\|I_+^{1-s}[u']\|_{L^1(a,b)} \le K_1 \|u'\|_{L^1(a,b)} \le K_1 \|u\|_{W_G^{1,1}(a,b)}, \quad |u(a)| \le K_2 \|u\|_{W_G^{1,1}(a,b)},$$

we get (61) and (62). □
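The Riemann–Liouville/Caputo splitting used in the proof above can be illustrated numerically. The following sketch is our own; it assumes the test function $u(x) = 1+x$ on $(0,1)$, for which $^{RL}D_+^s[u](x) = x^{-s}/\Gamma(1-s) + x^{1-s}/\Gamma(2-s)$ in closed form, and approximates only the Caputo part by quadrature.

```python
import math

# Our own numerical illustration of the splitting
#   D_RL^s[u](x) = I_+^{1-s}[u'](x) + u(a) (x-a)^{-s} / Gamma(1-s)
# for the test function u(x) = 1 + x on (0, 1).

def caputo_part(du, s, a, x, n=100_000):
    """I_+^{1-s}[u'](x), approximated by the midpoint rule."""
    h = (x - a) / n
    total = sum(du(a + (k + 0.5) * h) * (x - a - (k + 0.5) * h) ** (-s)
                for k in range(n))
    return total * h / math.gamma(1.0 - s)

s, x = 0.3, 0.6
u0 = 1.0                       # u(a) with a = 0
du = lambda t: 1.0             # u'(t)
rl_exact = x ** (-s) / math.gamma(1 - s) + x ** (1 - s) / math.gamma(2 - s)
rl_split = caputo_part(du, s, 0.0, x) + u0 * x ** (-s) / math.gamma(1 - s)
print(rl_exact, rl_split)
```

The two numbers agree: the boundary term $u(a)(x-a)^{-s}/\Gamma(1-s)$ is exactly what separates the two notions of fractional derivative for functions with $u(a) \ne 0$.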

**Theorem 3.** [*Compactness in $\mathcal{W}^{s,1}(a,b)$*]

*Assume that the interval $(a,b)$ is bounded, the parameter $s$ fulfills $0 < s < 1$, and*

$$\|u_n\|_{\mathcal{W}^{s,1}(a,b)} \le C\,. \tag{63}$$

*Then there exist $u \in L^q(a,b)$, $\forall q \in [1,\, 1/(1-s))$, and a subsequence such that, without relabeling,*

$$\begin{cases} (i) & u_{n} \rightharpoonup u \quad \text{weakly in } L^{q}(a,b) \quad \forall q \in [1,\, 1/(1-s)), \\ (ii) & I_{+}^{1-s}[u_{n}] \to I_{+}^{1-s}[u] \quad \text{strongly in } L^{p}(a,b), \ \forall p \in [1, +\infty), \\ (iii) & I_{-}^{1-s}[u_{n}] \to I_{-}^{1-s}[u] \quad \text{strongly in } L^{p}(a,b), \ \forall p \in [1, +\infty), \\ & I_{+}^{1-s}[u_{n}] \rightharpoonup I_{+}^{1-s}[u]\,, \quad I_{-}^{1-s}[u_{n}] \rightharpoonup I_{-}^{1-s}[u] \quad \text{weakly in } BV(a,b). \end{cases} \tag{64}$$

**Proof.** Claim (i) follows by (51), (63) and the reflexivity of $L^q(a,b)$ for any fixed $1 < q_k < 1/(1-s)$; thus, by choosing a sequence $q_k \to 1/(1-s)$ and extracting a diagonal sequence, we get the claim for a unique subsequence and a unique $u$ valid for every $q$ fulfilling $1 < q < 1/(1-s)$. Moreover, such $u$ belongs to $L^1(a,b)$. Eventually, for $q = 1$, there is a measure $\mu$ such that $u_n \rightharpoonup \mu$ in $\mathcal{M}(a,b)$; but such $\mu$ must be equal to $u$, hence $u_n \rightharpoonup u$ in $L^1(a,b)$.

The compact embedding $W_G^{1,1}(a,b) \hookrightarrow L^p(a,b)$, valid for any $p \in [1, +\infty)$ (Rellich theorem), entails the existence of $z_+$ and $z_-$ in $L^p(a,b)$ fulfilling, up to subsequences,

$$I_+^{1-s}[u_n] \to z_+ \quad \text{strongly in } L^p(a,b), \ \forall p \in [1, +\infty), \tag{65}$$

$$I_-^{1-s}[u_n] \to z_- \quad \text{strongly in } L^p(a,b), \ \forall p \in [1, +\infty), \tag{66}$$

$$I_e^{1-s}[u_n] \to \tfrac{1}{2}(z_+ + z_-) \quad \text{strongly in } L^p(a,b), \ \forall p \in [1, +\infty), \tag{67}$$

$$I_o^{1-s}[u_n] \to \tfrac{1}{2}(z_+ - z_-) \quad \text{strongly in } L^p(a,b), \ \forall p \in [1, +\infty)\,. \tag{68}$$

By (i) and the Mazur theorem, there is a sequence of convex combinations $y_n$ which is strongly converging: precisely, $y_n \to u$ strongly in $L^q(a,b)$ for every $q \in [1,\, 1/(1-s))$, with $y_n = \sum_{j=1}^{n} c_{n,j}\, u_j$, $c_{n,j} \ge 0$, $\sum_{j=1}^{n} c_{n,j} = 1$. Hence, by (63),

$$\|y_n\|_{\mathcal{W}^{s,1}} \le \sum_{j=1}^{n} c_{n,j}\, \|u_j\|_{\mathcal{W}^{s,1}} \le C. \tag{69}$$

$I_\pm^{1-s}$ is a continuous map from $L^q$ to $L^r$, for $q \in [1,\, 1/(1-s))$ and $r \in \big[1,\, q/(1-(1-s)q)\big)$; hence, we obtain

$$I_\pm^{1-s}[y_n] \to I_\pm^{1-s}[u] \quad \text{strongly in } L^r(a,b), \ r \in \big[1,\, q/(1-(1-s)q)\big), \tag{70}$$

and hence in $\mathcal{D}'(a,b)$. Moreover, by (69), $I_\pm^{1-s}[y_n]$ is bounded in $W_G^{1,1}(a,b)$; then, there exist $w_\pm \in BV(a,b)$ such that, possibly up to subsequences,

$$I\_+^{1-s}[y\_n] \rightharpoonup w\_+, \quad I\_-^{1-s}[y\_n] \rightharpoonup w\_- \qquad \text{weakly in } BV(a,b). \tag{71}$$

Taking into account (70), (71) and the uniqueness of the limit in $\mathcal{D}'(a,b)$, we obtain

$$w_+ = I_+^{1-s}[u] \in BV(a,b), \quad w_- = I_-^{1-s}[u] \in BV(a,b), \quad I_\pm^{1-s}[y_n] \rightharpoonup I_\pm^{1-s}[u] \text{ in } BV(a,b). \tag{72}$$

Taking into account $u_n \in \mathcal{W}^{s,1}(a,b)$, we set $f_n = I_+^{1-s}[u_n]$, so $u_n$ solves the Abel integral equation $I_+^{1-s}[u_n] = f_n$. By the semigroup property, $I_+^s[f_n](x) = I_+^s\big[I_+^{1-s}[u_n]\big](x) = I_+^1[u_n](x) = \int_a^x u_n(t)\, dt$; hence, $I_+^s[f_n](a) = 0$. Therefore (by Proposition 2), the Abel integral equations have a unique solution in $L^1(a,b)$, given by $u_n = D_+^{1-s}[f_n] = D_+^{1-s}\big[I_+^{1-s}[u_n]\big]$, $n \in \mathbb{N}$.

Set $f = I_+^{1-s}[u] \in BV(a,b)$, with $u \in L^1(a,b)$. So $u$ solves the Abel equation $I_+^{1-s}[u] = f$. Moreover, by the semigroup property, $I_+^s[f](x) = I_+^s\big[I_+^{1-s}[u]\big](x) = I_+^1[u](x) = \int_a^x u(t)\, dt$; hence, $I_+^s[f](a+) = 0$. Therefore (Proposition 3), the Abel equation has a unique solution in $L^1(a,b)$, given by $u = D_+^{1-s}[f] = D_+^{1-s}\big[I_+^{1-s}[u]\big]$.

By (i), $u_n \rightharpoonup u$ weakly in $L^q$; hence,

$$f\_n = I\_+^{1-s}[u\_n] \to I\_+^{1-s}[u] = f \quad \text{strongly in } L^q, \ q \in \left[1, 1/(1-s)\right).$$

Then, by (65), $I_+^{1-s}[u] = f = z_+$. Hence, we have shown claims (ii) and (iii).

Moreover, the convergence is also in the sense of distributions and the sequence is bounded in $W_G^{1,1}$; therefore, $I_+^{1-s}[u]$ belongs to $BV(a,b)$ and, again up to subsequences,

$$f_n = I_+^{1-s}[u_n] \rightharpoonup I_+^{1-s}[u] = f \quad \text{weakly in } BV(a,b).$$

We can deal with $z_-$ by the same argument, exploiting Corollary 1 for the backward Abel integral equation $I_-^{1-s}[u_n] = g_n$, leading to $I_-^{1-s}[u] = g = z_-$. □

**Remark 8.** *The boundedness of $(a,b)$ is an essential assumption in the previous compactness theorem, not only to exploit the Rellich theorem, but also to avoid slow non-integrable decay at infinity of the fractional integral: indeed, even for an integrable compactly supported $u$, we may have $I_+^{1-s}[u](x) \sim (x-a)^{-s}$ at $+\infty$, e.g., if $u = \chi_{[a,b]}$.*
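For the particular case $u = \chi_{[a,b]}$ invoked in Remark 8, the fractional integral has a closed form beyond the support, and the slow decay can be made visible directly. This is our own sketch; it assumes $(a,b) = (0,1)$, where $I_+^{1-s}[u](x) = \big(x^{1-s} - (x-1)^{1-s}\big)/\Gamma(2-s)$ for $x > 1$, with asymptote $x^{-s}/\Gamma(1-s)$.

```python
import math

# Closed form for u = chi_[0,1] (assumed interval (a,b) = (0,1)); for x > 1,
#   I_+^{1-s}[u](x) = (x^{1-s} - (x-1)^{1-s}) / Gamma(2-s),
# and the remark's slow decay reads I_+^{1-s}[u](x) ~ x^{-s}/Gamma(1-s).

s = 0.7
I = lambda x: (x ** (1 - s) - (x - 1) ** (1 - s)) / math.gamma(2 - s)
asym = lambda x: x ** (-s) / math.gamma(1 - s)
for x in (10.0, 100.0, 1000.0):
    print(x, I(x) / asym(x))   # ratio approaches 1 as x grows
```

Since $x^{-s}$ is not integrable at $+\infty$ for $s \in (0,1)$, this confirms that $I_+^{1-s}[\chi_{[0,1]}] \notin L^1(0,+\infty)$, which is why the boundedness of $(a,b)$ cannot be dropped.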

**Remark 9.** *We emphasize that, in Theorem 3, we cannot improve (64), since $I_+^{1-s}[u]$ may belong to $BV(a,b) \setminus W_G^{1,1}(a,b)$.*

*Indeed, we can choose $f(x) = \operatorname{sign}(x)$ for $-1 < x < 1$, and $f_n(x) = -1$ if $-1 < x < -1/n$, $f_n(x) = n\,x$ if $-1/n < x < 1/n$, $f_n(x) = 1$ if $1/n < x < 1$. Thus, $f$ belongs to $BV(-1,1)\setminus W^{1,1}(-1,1)$ and $\|f_n\|_{W_G^{1,1}(-1,1)}$ is uniformly bounded. Solving the Abel equations $I_+^{1-s}[u_n] = f_n$ and $I_+^{1-s}[u] = f$ with Propositions 2 and 3 provides $u = D_+^s[f] \in \mathcal{M}(-1,1)\setminus L^1(-1,1)$, whereas $u_n = D_+^s[f_n]$ is uniformly bounded in $L^1$; hence, $u_n$ is uniformly bounded in $\mathcal{W}^{s,1}(-1,1)$, due to Lemma 7.*

We recall a well-known result ([9], Theorem 2.1) concerning the $L^1$-representability of functions.

**Theorem 4.** [*$L^1$-representability*] *Given $f \in L^1(a,b)$, then*

*$f \in I_+^s(L^1(a,b))$ for some $s \in (0,1)$ if and only if*

$$I_+^{1-s}[f] \in W^{1,1}(a,b) \quad \text{and} \quad I_+^{1-s}[f](a) = 0\,;$$

*$f \in I_-^s(L^1(a,b))$ for some $s \in (0,1)$ if and only if*

$$I_-^{1-s}[f] \in W^{1,1}(a,b) \quad \text{and} \quad I_-^{1-s}[f](b) = 0\,.$$

*Moreover, in the affirmative case, say, when there exists $u \in L^1(a,b)$ such that $f = I_+^s[u]$ (respectively, $f = I_-^s[u]$), we obtain*

$$u = D_+^s[f] \quad (\text{respectively, } u = D_-^s[f])\,. \tag{73}$$

In Section 5, we provide a self-contained proof of the above result, together with a discussion of the related forward and backward Abel equations in the distributional framework, even in the cases when $I_+^{1-s}[f](a) \ne 0$ or $I_-^{1-s}[f](b) \ne 0$ (see Propositions 2 and 3 and Corollaries 1 and 2).

Here, we show that the representability result has a natural extension to the bilateral case.

**Theorem 5.** *Assume* 0 < *s* < 1*. Then*

$$f \in L^1(a,b) \quad \text{and} \quad f \in I\_+^s(L^1(a,b)) \cap I\_-^s(L^1(a,b))$$

*if and only if*

$$f \in \mathcal{W}^{s,1}(a,b), \qquad 2\, I^{1-s}[f](a) - I_-^{1-s}[f](a) = 0 = 2\, I^{1-s}[f](b) - I_+^{1-s}[f](b),$$

*if and only if*

$$f \in \mathcal{W}^{s,1}(a,b), \qquad 2\, I_+^{1-s}[f](b) - I_-^{1-s}[f](a) = 0 = 2\, I_-^{1-s}[f](a) - I_+^{1-s}[f](b)\,.$$

**Proof.** Since

$$I^{1-s}[f](a) = \frac{1}{2\,\Gamma(1-s)} \int_a^b \frac{f(t)}{(t-a)^s}\, dt = I_+^{1-s}[f](b)\,,$$

$$I^{1-s}[f](b) = \frac{1}{2\,\Gamma(1-s)} \int_a^b \frac{f(t)}{(b-t)^s}\, dt = I_-^{1-s}[f](a)\,,$$

$$I_+^{1-s}[f](a) = 2\, I^{1-s}[f](a) - I_-^{1-s}[f](a) = 2\, I_+^{1-s}[f](b) - I_-^{1-s}[f](a)\,,$$

$$I_-^{1-s}[f](b) = 2\, I^{1-s}[f](b) - I_+^{1-s}[f](b) = 2\, I_-^{1-s}[f](a) - I_+^{1-s}[f](b)\,,$$

the claim follows by Definition 9, Theorem 4, Proposition 1 (semigroup property of the fractional integral), Propositions 2 and 3, and Corollaries 1 and 2. □

Next, we make explicit some embedding relationships between $\mathcal{W}_\pm^{s,1}$ and $\mathcal{U}_\pm^{s,1}$.

**Theorem 6.** *The following strict embeddings hold true:*

$$\mathcal{W}^{s,1} \subsetneq \mathcal{W}_+^{s,1} \subsetneq \mathcal{U}_+^{s,1}, \qquad \mathcal{W}^{s,1} \subsetneq \mathcal{W}_-^{s,1} \subsetneq \mathcal{U}_-^{s,1} \tag{74}$$

*where we refer to Definitions 7–9 for $\mathcal{W}^{s,1}(a,b)$, shortly denoted $\mathcal{W}^{s,1}$ here, versus the naïve definition of $\mathcal{U}_\pm^{s,1}$ at the beginning of the present Section 3.*

**Proof.** Without loss of generality, we assume (*a*, *b*)=(0, 1).

Strict embeddings of $\mathcal{W}^{s,1}$ in $\mathcal{W}_+^{s,1}$ and of $\mathcal{W}^{s,1}$ in $\mathcal{W}_-^{s,1}$ are shown, respectively, by $x^{s-1}$ and $(1-x)^{s-1}$: see (55) in Example 5.

Therefore, in order to show (74), it is sufficient to exhibit an example for the strict embedding $\mathcal{W}_+^{s,1} \subsetneq \mathcal{U}_+^{s,1}$: indeed, the proof of $\mathcal{W}_-^{s,1} \subsetneq \mathcal{U}_-^{s,1}$ is achieved by replacing the variable $t$ with $(1-t)$ in the counterexample showing the other strict embedding, exploiting the symmetry with respect to $x = 1/2$, analogous to the one with respect to $x = 0$ in (36) and (37).

We first note that $\mathcal{W}_+^{s,1} \subset \mathcal{U}_+^{s,1}$ follows by definition (4): the existence of a weak derivative in $L^1$ of $I_\pm^{1-s}[u]$ entails the existence of the fractional derivative $D_\pm^s[u]$, coincident with the almost everywhere defined fractional derivative $(d/dx)\,I_\pm^{1-s}[u](x)$.

The strict embeddings $\mathcal{W}_+^{s,1} \subsetneq \mathcal{U}_+^{s,1}$ and $\mathcal{W}_-^{s,1} \subsetneq \mathcal{U}_-^{s,1}$ follow by the subsequent argument, which, for any fixed $s \in (0,1)$, provides the existence of a function in $\mathcal{U}_+^{s,1}\setminus\mathcal{W}_+^{s,1}$ and a function in $\mathcal{U}_-^{s,1}\setminus\mathcal{W}_-^{s,1}$.

Given $s \in (1 - \ln 2/\ln 3,\, 1)$, we first show a function $z$ in $\mathcal{U}_+^{s,1}$ such that $z \notin \mathcal{W}_+^{s,1}$.

Precisely, denoting by $V$ the Cantor–Vitali function on $[0,1]$ ([21]), we claim that

$$z := D_+^{1-s}[V] \in \mathcal{U}_+^{s,1} \setminus \mathcal{W}_+^{s,1} \qquad s \in (1-\alpha,\, 1)\,.$$

Indeed, $V$ is $\alpha$-Hölder continuous with $\alpha := \ln 2/\ln 3$. So $I_+^s[V]$ belongs to $C^1(a,b)$ for every $s \in (1-\alpha, 1)$, by Theorem 3.1 in [9] and the fact that $V(0) = 0$. Therefore, $I_+^s[V] \in W_G^{1,1}$ (hence $V \in \mathcal{W}_+^{1-s,1}(0,1)$) for $s \in (1-\alpha, 1)$. Moreover, $I_+^s[V](0) = 0$ for $s \in (0,1)$: indeed, due to the continuity of $V$ on $[0,1]$, $I_+^s[V]$ is continuous on $[0,1]$ and we obtain

$$\Gamma(s)\,I_+^s[V](0) = \lim_{x \to 0} \int_0^x \frac{V(t)}{(x-t)^{1-s}}\, dt = \lim_{x \to 0} \left( \int_0^x \frac{V(t) - V(x)}{(x-t)^{1-s}}\, dt + \frac{x^s\, V(x)}{s} \right).$$

By Hölder continuity ($V \in C^{0,\alpha}(a,b)$), we obtain $|V(t) - V(x)| \le C\,|x-t|^{\alpha}$. Then

$$\left| \int_0^x \frac{V(t)}{(x-t)^{1-s}}\, dt \right| \le C \int_0^x (x-t)^{\alpha+s-1}\, dt + \frac{x^s\, V(x)}{s} = \frac{C\, x^{s+\alpha}}{s+\alpha} + \frac{x^s\, V(x)}{s}\,.$$

Therefore, the limit above is equal to $0$ as $x \to 0^+$, thus proving the claim $I_+^s[V](0) = 0$. Summarizing, $V \in BV(0,1)$, $I_+^s[V](0) = 0$ and $I_+^s[V] \in W^{1,1}(0,1)$ for $s \in (1-\alpha,\, 1)$. Therefore, we can consider the Abel integral equation in the distributional setting

$$\text{find } z \in L^1(0, 1): \qquad I_+^{1-s}[z] = V \qquad \text{on } (0, 1), \tag{75}$$

and solve it; by Proposition 3, the unique solution is given by $z = D_+^{1-s}[V] = D\,I_+^s[V]$, and it fulfills $V = I_+^{1-s}[z]$. Moreover, $(d/dx)_+^s[z] = (d/dx)\,I_+^{1-s}[z] = (d/dx)\,V = 0$ a.e. on $(0,1)$, whereas $D_+^s[z] = D\,I_+^{1-s}[z] = D\,V$, which is a nontrivial bounded measure. Explicitly, $z = D_+^{1-s}[V]$ fulfills $z \in \mathcal{U}_+^{s,1}\setminus\mathcal{W}_+^{s,1}$. So far, we have proved the first embedding chain in (74) for $s \in (1-\alpha,\, 1) = (1 - \ln 2/\ln 3,\, 1)$.

In the sequel, we show that, given any $\sigma \in (0,1)$, we can adapt the Cantor–Vitali construction in such a way that the resulting function is $s$-Hölder continuous for any $s \in (0,\sigma]$; hence, we recover the strict embedding for any $s$ in $(1-\sigma,\, 1)$ and, due to the generic choice of $\sigma$, for any $s$ in $(0,1)$.

Indeed, given $\tau \in (0,1)$, we can replace the construction of the Cantor $1/3$-middle set $\mathcal{C}$ (a set whose Hausdorff dimension is $\ln 2/\ln 3$, which leads to the $\alpha = \alpha(1/3) = \ln 2/\ln 3$ Hölder continuous Cantor–Vitali function $V_{1/3} := V$) with the Cantor-like $\tau$-middle set $\mathcal{C}_\tau$, with Hausdorff dimension $\dim(\mathcal{C}_\tau) = \ln 2/\big(\ln 2 - \ln(1-\tau)\big)$, which leads to the $\alpha(\tau)$-Hölder continuous generalized Cantor–Vitali function $V = V_\tau$, where

$$\alpha(\tau) = \dim(\mathcal{C}_\tau) = \ln 2 \,\big/\, \big( \ln 2 - \ln(1 - \tau) \big)\,.$$

Notice that $\alpha(\tau) \to 1^-$ as $\tau \to 0^+$ and $\alpha(\tau) \to 0^+$ as $\tau \to 1^-$, so that $\alpha(\tau)$ spans the interval $(0,1)$ as $\tau$ runs over $(0,1)$. Moreover, $V_\tau \in (BV \cap C^0)\setminus W^{1,1}$ for $\tau \in (0,1)$.

Again by Proposition 3, we get that $V_\tau$ is representable: namely, there exists a unique $z_\tau \in L^1(0,1)$ such that $z_\tau = D_+^{1-s}[V_\tau]$ and $V_\tau = I_+^{1-s}[z_\tau]$ for $s \in (1-\alpha(\tau),\, 1)$, and we claim that $V_\tau(0) = 0$, $I_+^s[V_\tau](0) = 0$ and $z_\tau \in \mathcal{U}_+^{s,1}\setminus\mathcal{W}_+^{s,1}$ for $s \in (1-\alpha(\tau),\, 1)$: indeed, these claims about the generalized Cantor–Vitali function $V_\tau$ can be proved by the same procedure used for $V = V_{1/3}$, as sketched below.

The function $V_\tau$ is of bounded variation since it is monotone, being the uniform limit of a sequence of monotone nondecreasing functions.

Continuity of $V_\tau$ follows from the uniform convergence of the standard iterative approximations by piecewise linear functions. The absolutely continuous part of the distributional derivative $D V_\tau$ is identically $0$, since $V_\tau$ is locally constant on an open set of Lebesgue measure $1$: indeed, this set is a union of open intervals, iteratively obtained by approximation with finite unions $A_n$ whose measure $\ell_n$ fulfills the recursive scheme $\ell_1 = \tau$, $\ell_{n+1} = \ell_n + 2^n\, \tau\, \frac{1-\ell_n}{2^n} = \ell_n + \tau(1-\ell_n)$, so that $\ell_n = 1 - (1-\tau)^n \to 1$ as $n \to \infty$.

The worst case for the difference quotients of the $n$-th approximation of $V_\tau$ is provided by $(1/2)^n \big/ \big((1-\ell_n)/2^n\big) = 1/(1-\tau)^n$, so that $\alpha(\tau)$ is the biggest real $\alpha$ such that

$$(1/2)^n \,\big/\, \big( (1 - \ell_n)/2^n \big)^{\alpha} = 2^{n(\alpha-1)} \big/ (1 - \tau)^{n\alpha}$$

is uniformly bounded for $n \in \mathbb{N}$, say

$$0 < \alpha \le \alpha(\tau) = \ln 2 \,\big/\, \big(\ln 2 - \ln(1 - \tau)\big)\,.$$

So, $I_+^s[V_\tau]$ belongs to $C^1(a,b)$ for every $s \in (1-\alpha(\tau),\, 1)$, by Theorem 3.1 of [9] and taking into account that $V_\tau(0) = 0$. Therefore, $I_+^{1-s}[V_\tau] \in W^{1,1}$, that is, $V_\tau \in \mathcal{W}_+^{s,1}(0,1)$ for $s \in (0, \alpha(\tau))$.

Moreover, $I_+^s[V_\tau](0) = 0$. Indeed, by the continuity of $V_\tau$ on $[0,1]$, we obtain

$$\Gamma(s)\, I_+^s[V_\tau](0) = \lim_{x \to 0} \int_0^x \frac{V_\tau(t)}{(x-t)^{1-s}}\, dt = \lim_{x \to 0} \left( \int_0^x \frac{V_\tau(t) - V_\tau(x)}{(x-t)^{1-s}}\, dt + \frac{x^s\, V_\tau(x)}{s} \right).$$

Since $V_\tau \in C^{0,\alpha(\tau)}(a,b)$, we get $|V_\tau(t) - V_\tau(x)| \le C\,|x-t|^{\alpha(\tau)}$. Thus,

$$\left| \int_0^x \frac{V_\tau(t)}{(x-t)^{1-s}}\, dt \right| \le C \int_0^x (x-t)^{\alpha(\tau)+s-1}\, dt + \frac{x^s\, V_\tau(x)}{s} = \frac{C\, x^{s+\alpha(\tau)}}{s+\alpha(\tau)} + \frac{x^s\, V_\tau(x)}{s}\,,$$

and both terms vanish as $x \to 0$.

Summarizing, if $V_\tau$ is the generalized Cantor–Vitali function and $U_\tau(x) = V_\tau(1-x)$,

$$z_\tau := D_+^{1-s}[V_\tau] \in BV_+^{s} \setminus W_+^{s,1}, \qquad s \in \left(1 - \alpha(\tau), 1\right), \tag{76}$$

since $D_+^s[z_\tau] = D\, V_\tau$ is a nontrivial Cantor measure with no atomic part, whereas $(d/dx)_+^s[z_\tau] = 0$ a.e.; moreover,

$$u_\tau := D_-^{1-s}[U_\tau] \in BV_-^{s} \setminus W_-^{s,1}, \qquad s \in \left(1 - \alpha(\tau), 1\right), \tag{77}$$

where, to achieve (77), we exploit Proposition 2 to solve the backward Abel integral equation in the distributional framework $I_-^{1-s}[u_\tau] = U_\tau$; indeed, $U_\tau \in BV(0,1)$ and $I_-^s[U_\tau](1) = 0$, so the unique solution $v \in L^1(0,1)$ of $I_-^{1-s}[v] = U_\tau$ is $v = u_\tau = D_-^{1-s}[U_\tau]$, which fulfills $I_-^{1-s}[u_\tau] = U_\tau$. Hence, by evaluating the distributional derivative $D$, we get $D_-^s[u_\tau] = -D\, U_\tau$, which is a nontrivial Cantor measure with no atomic part, whereas $(d/dx)_-^s[u_\tau] = 0$ a.e.

We list some properties concerning the comparison of the bilateral Riemann–Liouville fractional Sobolev spaces $W^{s,1}$ with classical spaces: the Gagliardo fractional Sobolev spaces $W_G^{s,1}$, the functions of bounded variation $BV(0,1)$, and $SBV(0,1)$, De Giorgi's space of special functions of bounded variation, whose derivatives have no Cantor part (see [21,35], for example).

**Theorem 7.** *Let $s, r \in (0,1)$ be such that $r > s$. Then*

$$\mathcal{W}^{r,1}\_G(0,1) \cap I^s\_+(L^1(a,b)) \cap I^s\_-(L^1(a,b)) \subset \mathcal{W}^{s,1}(0,1)$$

*with continuous injection, say,*

$$\|u\|_{W^{s,1}} := \|u\|_{L^1(a,b)} + \|I_+^{1-s}u\|_{W^{1,1}(0,1)} + \|I_-^{1-s}u\|_{W^{1,1}(0,1)} \le C\, \|u\|_{W_G^{s,1}(0,1)}$$

*for every $u \in W_G^{r,1}(0,1) \cap I_+^s(L^1(a,b)) \cap I_-^s(L^1(a,b))$.*

**Proof.** Straightforward consequence of Theorem 3.2 of [25] and Definition 9.

In [25], we compared $W_+^{s,1}(0,1)$ and $W_-^{s,1}(0,1)$ with $SBV(0,1)$, and proved

$$SBV(0,1) \subset \bigcap\_{s \in (0,1)} W^{s,1}(0,1).$$

This inclusion was refined by a recent result (Theorem 3.4 in [2]) showing

$$BV(a,b) \subset \bigcap\_{s \in (0,1)} \mathcal{W}^{s,1}(a,b). \tag{78}$$

On the other hand, for every $s \in (0,1)$, $W^{s,1}(a,b)$ is contained neither in $W_G^{1,1}(a,b)$ nor in $BV(a,b)$, due to remarkable examples of Weierstrass-type functions. Indeed, a Weierstrass function $w$ can be defined ([36]) so that $w$ belongs to $W^{s,1}$, but $w$ does not belong to $BV(0,1)$, since it is nowhere differentiable. Fix $q > 1$ and set

$$w(\mathbf{x}) = \sum\_{n=0}^{\infty} q^{-n} \left( \exp(iq^n \mathbf{x}) - \exp(iq^n a) \right). \tag{79}$$

Notice that the subtraction of the constant entails $w(a) = 0$, thus preventing a singularity of $D_{a+}^s[w](x)$ at $x = a$.

**Theorem 8.** *Let s*,*r* ∈ (0, 1) *be such that r* > *s. Then*

$$\mathcal{W}^{r,1}\_G(0,1) \cap I^s\_+\left(L^1(a,b)\right) \cap I^s\_-\left(L^1(a,b)\right) \subset \mathcal{W}^{s,1}(0,1)$$

*with continuous injection. Precisely,*

$$\begin{aligned} \|u\|_{W^{s,1}} &:= \|u\|_{L^1(a,b)} + \|I_+^{1-s}u\|_{W^{1,1}(0,1)} + \|I_-^{1-s}u\|_{W^{1,1}(0,1)} \le C\, \|u\|_{W_G^{s,1}(0,1)}, \\ &\qquad \forall u \in W_G^{r,1}(0,1) \cap I_+^s\left(L^1(a,b)\right) \cap I_-^s\left(L^1(a,b)\right). \end{aligned}$$

We emphasize that, in the case of an unbounded interval $(a,b)$, there is no possibility of a compactness statement analogous to Theorem 3 in $W^{s,1}(a,b)$, since the Rellich theorem cannot be applied.

On the other hand, the property $u \in W^{s,1}(\mathbb{R})$ entails a stronger qualitative condition on $u$ than $u \in W^{s,1}(a,b)$ with a bounded $(a,b)$, as clarified by the next remark.

**Remark 10.** *If $u \in W^{s,1}(\mathbb{R})$, $0<s<1$, then $\int_{\mathbb{R}} u(t)\,dt = 0$ and $|\xi|^{s-1}\,\widehat{u}(\xi)$ is bounded in a neighborhood of $\xi = 0$. The property $\int_a^b u(t)\,dt = 0$ may fail for $u \in W^{s,1}(a,b)$ if $(a,b) \neq \mathbb{R}$.*

*Indeed, $(a,b) = \mathbb{R}$ entails $u \in L^1(\mathbb{R})$, hence $\widehat{u} \in C^0 \cap L^\infty(\mathbb{R})$; moreover, $u \in W^{s,1}(\mathbb{R}) \subset W_e^{s,1}(\mathbb{R})$, hence*

$$I_e^{1-s}[u] = u * \frac{1}{2\,\Gamma(1-s)\,|t|^s} \in W_G^{1,1}(\mathbb{R}) \subset L^1(\mathbb{R}),$$

*then, exploiting the Fourier transform $\mathcal{F}$, $|\xi|^{s-1}\,\widehat{u}(\xi) \in C^0 \cap L^\infty(\mathbb{R})$, hence $|\xi|^{s-1}\,\widehat{u}(\xi)$ is bounded and $\int_{\mathbb{R}} u(t)\,dt = \widehat{u}(0) = 0$.*

*If $(a,b) \neq \mathbb{R}$, then, referring to (8) and (10), neither $I_e^{1-s}[u]$ nor $I_o^{1-s}[u]$ belongs to $L^1(\mathbb{R})$ (or even to $L^p(\mathbb{R})$ with $1 < p \le 2$, when $s > 1/2$); moreover, the summability of $I_o^{1-s}[u]$ may fail at infinity due to a decay of order $|x|^{-s}$; therefore, $\widehat{u}$ may be unbounded around $\xi = 0$.*

**Remark 11.** *Notwithstanding Remark 10 (which excludes nontrivial constant functions from the space $W^{s,1}(\mathbb{R})$), if we restrict to bounded intervals, then a constant function $u \equiv K$ belongs to $W^{s,1}(a,b)$ for every bounded interval $(a,b)$ and every value of $K$. Indeed,*

$$I_+^{1-s}[K](x) = \frac{K}{\Gamma(2-s)}\,(x-a)^{1-s} \in L^1(a,b),$$

$$D_+^s[K](x) = D_x\left[I_+^{1-s}[K]\right](x) = \frac{d}{dx}\, I_+^{1-s}[K](x) = \frac{K}{\Gamma(1-s)}\,(x-a)^{-s} \in L^1(a,b).$$
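These closed forms are easy to check numerically. The sketch below (an illustration of ours, not from the paper) evaluates $I_{a+}^{1-s}[K]$ by a midpoint rule, after a substitution that removes the kernel singularity, and compares it with $K(x-a)^{1-s}/\Gamma(2-s)$:

```python
import math

def rl_frac_integral(f, order, a, x, n=20_000):
    """Midpoint-rule evaluation of I_{a+}^{order}[f](x)
    = (1/Gamma(order)) * int_a^x f(t) (x - t)^{order-1} dt.
    The substitution x - t = w^{1/order} removes the endpoint singularity."""
    top = (x - a) ** order
    h = top / n
    acc = sum(f(x - ((k + 0.5) * h) ** (1.0 / order)) for k in range(n))
    return acc * h / (order * math.gamma(order))

K, s, a, x = 2.0, 0.4, 0.0, 0.7
numeric = rl_frac_integral(lambda t: K, 1 - s, a, x)   # I_{a+}^{1-s}[K](x)
closed = K / math.gamma(2 - s) * (x - a) ** (1 - s)    # closed form of Remark 11
print(numeric, closed)                                 # the two values agree
```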

#### **4. Bilateral Fractional Bounded Variation Space**

Possible naïve definitions could be provided, for *s* ∈ (0, 1), by

$$A\_+^s = \left\{ u \in L^1(a, b) \mid \left(\frac{d}{dx}\right)\_+^s u \text{ is a bounded measure} \right\},$$

$$A\_-^s = \left\{ u \in L^1(a, b) \mid \left(\frac{d}{dx}\right)\_-^s u \text{ is a bounded measure} \right\},$$

$$A^s = A_+^s \cap A_-^s,$$

which refer to *L*1-functions whose classical pointwise-defined Riemann–Liouville fractional derivative of prescribed order *s* ∈ (0, 1) is a bounded measure.

Actually, if the Riemann–Liouville fractional derivative $D^s[u](x)$ of $u$ exists for a.e. $x$ for some $s \in (0,1)$, then $I_{a+}^{1-s}[u]$ is differentiable almost everywhere; nevertheless, we have no information on the distributional derivative of the fractional integral $I_{a+}^{1-s}[u]$.

These differential properties are not completely described by the pointwise derivative, though it exists almost everywhere. This shows that the previous definitions of $A^s$, $A_+^s$ and $A_-^s$ are not suitable to obtain an integration by parts formula. Therefore, to develop a satisfactory theory of fractional bounded variation spaces, as we did for fractional Sobolev spaces in [25], we introduce a more suitable function space: the *bilateral fractional bounded variation space* $BV^s$, as defined in the sequel.

**Remark 12.** *We recall that, as long as these classical fractional derivatives are evaluated on absolutely continuous functions, as was done in all the previous sections, using the operators of the classical Definition 2 provides the same results as the distributional Definition 3; for this reason, we keep the usual classical notations ($^{RL}D_{a+}^s$, $^{RL}D_{b-}^s$ and the corresponding short forms $D_+^s$, $D_-^s$). However, in the present section, we evaluate fractional derivatives on functions of bounded variation, a setting where the two definitions provide different evaluations.*

Next, inspired by [2], where the nonsymmetric spaces are studied also in the case of higher order derivatives, we introduce the bilateral Riemann–Liouville bounded variation space, with the aim to achieve a symmetric framework.

**Definition 10.** *The (bilateral) Riemann–Liouville fractional bounded variation spaces. For every s* ∈ (0, 1)*, we set*

$$BV^{s} = BV^{s}_{+} \cap BV^{s}_{-}, \tag{80}$$

*where, referring to Definition 3,*

$$BV_+^s = \left\{ u \in L^1(a,b) \mid I_+^{1-s}[u] \in BV(a,b) \right\} = \left\{ u \in L^1(a,b) \mid D_+^s[u] \in \mathcal{M}(a,b) \right\},$$

$$BV_-^s = \left\{ u \in L^1(a,b) \mid I_-^{1-s}[u] \in BV(a,b) \right\} = \left\{ u \in L^1(a,b) \mid D_-^s[u] \in \mathcal{M}(a,b) \right\}.$$

**Theorem 9.** *Assume that the interval* (*a*, *b*) *is bounded and the parameter s fulfills* 0<*s*<1*. Then, the space BVs*(*a*, *b*) *is a normed space endowed with the norm*

$$\|u\|_{BV^s} := \|u\|_{L^1(a,b)} + \left\| D_{a+}^s[u] \right\|_{\mathcal{M}(a,b)} + \left\| D_{b-}^s[u] \right\|_{\mathcal{M}(a,b)}. \tag{81}$$

*The contribution $\| D_+^s[u] \|_{\mathcal{M}(a,b)} + \| D_-^s[u] \|_{\mathcal{M}(a,b)}$ in the norm (81) can be replaced by $\| D_e^s[u] \|_{\mathcal{M}(a,b)} + \| D_o^s[u] \|_{\mathcal{M}(a,b)}$.*

*Moreover, $BV^s(a,b)$ is a Banach space, and for every $q \in [1, 1/(1-s))$, there is $C = C(s,q,a,b)$ such that*

$$\|u\|_{L^q(a,b)} \le C(s,q,a,b)\, \|u\|_{BV^s(a,b)}. \tag{82}$$

*Every u* <sup>∈</sup> *BVs*(*a*, *<sup>b</sup>*) *can be represented by both*

$$u(x) = I_{a+}^{s}\left[ D_{a+}^{s}[u] \right](x) + \frac{I_{a+}^{1-s}[u](a_+)}{\Gamma(s)}\,(x-a)^{s-1} \qquad a.e.\; x \in (a,b), \tag{83}$$

*and*

$$u(x) = I_{b-}^{s}\left[ D_{b-}^{s}[u] \right](x) + \frac{I_{b-}^{1-s}[u](b_-)}{\Gamma(s)}\,(b-x)^{s-1} \qquad a.e.\; x \in (a,b). \tag{84}$$

**Proof.** We emphasize that here $I_{a+}^{1-s}[u](a_+)$ and $I_{b-}^{1-s}[u](b_-)$ replace, respectively, the values $I_{a+}^{1-s}[u](a)$ and $I_{b-}^{1-s}[u](b)$ which appeared in the representations (52) and (53) of $W^{s,1}$ functions, since in the present $BV$ setting there are no pointwise defined values, though there are well-defined finite right and left limits at every point in $(a,b)$.

The map $u \mapsto \|u\|_{BV^s}$ is a norm on $BV^s(a,b)$; indeed,

$\|u\|_{L^1(a,b)} + \|D_{a+}^s[u]\|_{\mathcal{M}(a,b)}$ is equivalent to the norm $\|I_{a+}^{1-s}[u]\|_{BV(a,b)}$, since $I_{a+}^{1-s}[u]$ belongs to $BV$, $D_{a+}^s[u] = D\, I_{a+}^{1-s}[u]$ and $\|I_{a+}^{1-s}[u]\|_{L^1(a,b)} \le C\, \|u\|_{L^1(a,b)}$; analogously, $\|u\|_{L^1(a,b)} + \|D_{b-}^s[u]\|_{\mathcal{M}(a,b)}$ is a norm for $I_{b-}^{1-s}[u]$, due to $I_{b-}^{1-s}[u] \in BV$, $D_{b-}^s[u] = D\, I_{b-}^{1-s}[u]$ and $\|I_{b-}^{1-s}[u]\|_{L^1(a,b)} \le C\, \|u\|_{L^1(a,b)}$. Therefore, the terms $\|I_\pm^{1-s}[u]\|_{BV(a,b)}$ can be replaced, respectively, by $\|D\, I_\pm^{1-s}[u]\|_{\mathcal{M}(a,b)}$ in the natural norm

$$\|u\|_{BV^s} := \|u\|_{L^1(a,b)} + \|I_+^{1-s}[u]\|_{BV(a,b)} + \|I_-^{1-s}[u]\|_{BV(a,b)}.$$

The other claims follow by the same proof as Theorem 1 in the fractional Sobolev setting, where actually only Proposition 2 and Corollary 1, about the Abel forward and backward integral equations, must be suitably tuned, as stated in Remark 17.

**Example 6.** *The constant functions and $v(x) = x(1-x)$ belong to the space $BV^s(0,1)$. In general, the space $C_0^\infty(a,b)$ of test functions on $(a,b)$ is contained in $BV^s(a,b)$.*

**Example 7.** *The Heaviside function $H$ belongs to $BV^s(-1,1)$, thanks to Example 3.*

**Example 8.** *The function $H(x)|x|^{s-1}$ belongs to $BV_+^s(-1,1)\setminus W_+^{s,1}(-1,1)$ if $0<s<1$, since $I_+^{1-s}[H(x)|x|^{s-1}] = \Gamma(s)\,H(x) \in BV(-1,1)$, due to Example 1.*

*Due to the unboundedness of $I_-^{1-s}[H(x)|x|^{s-1}](x)$ in a right neighborhood of $x = 0$ (due to Example 5), we obtain that $H(x)|x|^{s-1}$ does not belong to $BV_-^s(-1,1)$.*

*In general, for $0<s<1$, $H(x)|x|^{-\alpha}$ belongs to $BV_+^s(-1,1)\setminus BV_-^s(-1,1)$ if $0 < \alpha < 1-s$.*
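The unboundedness invoked above can also be observed numerically. The following sketch (ours, for illustration only) evaluates $I_-^{1-s}[H(t)\,t^{s-1}](x) = \frac{1}{\Gamma(1-s)}\int_x^1 t^{s-1}(t-x)^{-s}\,dt$ for $s = 1/2$ and shows the values growing as $x \to 0^+$ (the integral diverges logarithmically at $x = 0$):

```python
import math

def backward_frac_integral_at(x, s, n=50_000):
    """Midpoint-rule value of I_{1-}^{1-s}[H(t) t^{s-1}](x) for 0 < x < 1:
    (1/Gamma(1-s)) * int_x^1 t^{s-1} (t - x)^{-s} dt.
    The substitution t - x = w^{1/(1-s)} removes the singularity at t = x."""
    top = (1 - x) ** (1 - s)
    h = top / n
    acc = sum((x + ((k + 0.5) * h) ** (1.0 / (1 - s))) ** (s - 1) for k in range(n))
    return acc * h / ((1 - s) * math.gamma(1 - s))

s = 0.5
vals = [backward_frac_integral_at(x, s) for x in (0.1, 0.01, 0.001)]
print(vals)   # strictly increasing as x -> 0+
```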

#### **Theorem 10.** *(Integration by parts in $BV^s(a,b)$)*

*The next identities hold true for $0<s<1$, $-\infty<a<b<+\infty$:*

$$\begin{cases} \displaystyle\int_a^b u(x)\, d\, D_+^s[v](x) = -\int_a^b D_x u(x)\, I_+^{1-s}[v](x)\, dx + u(b)\, I_+^{1-s}[v](b) \\ \qquad\qquad\qquad\qquad \forall v \in BV_+^s(a,b),\; \forall u \in W_G^{1,1}(a,b), \end{cases} \tag{85}$$

$$\begin{cases} \displaystyle\int_a^b u(x)\, d\, D_-^s[v](x) = +\int_a^b D_x u(x)\, I_-^{1-s}[v](x)\, dx + u(a)\, I_-^{1-s}[v](a) \\ \qquad\qquad\qquad\qquad \forall v \in BV_-^s(a,b),\; \forall u \in W_G^{1,1}(a,b), \end{cases} \tag{86}$$

$$\begin{cases} \displaystyle\int_a^b u(x)\, d\, D_e^s[v](x) = \\ \qquad \displaystyle -\int_a^b D_x u(x)\, I_e^{1-s}[v](x)\, dx + \frac{1}{2}\left( u(b)\, I_+^{1-s}[v](b) - u(a)\, I_-^{1-s}[v](a) \right) \\ \qquad\qquad\qquad\qquad \forall v \in BV^s(a,b),\; \forall u \in W_G^{1,1}(a,b), \end{cases} \tag{87}$$

$$\begin{cases} \displaystyle\int_a^b u(x)\, d\, D_o^s[v](x) = \\ \qquad \displaystyle -\int_a^b D_x u(x)\, I_o^{1-s}[v](x)\, dx + \frac{1}{2}\left( u(b)\, I_+^{1-s}[v](b) + u(a)\, I_-^{1-s}[v](a) \right) \\ \qquad\qquad\qquad\qquad \forall v \in BV^s(a,b),\; \forall u \in W_G^{1,1}(a,b). \end{cases} \tag{88}$$

**Proof.** Exactly the same proof as Theorem 2, except that here the distributional derivative $D_x$ in $BV$ replaces the almost everywhere pointwise derivative $d/dx$ in $W_G^{1,1}$, and the integrals on the left-hand side are evaluated with respect to the measures $D_+^s[v]$, $D_-^s[v]$, $D_e^s[v]$ and $D_o^s[v]$, in place of the Lebesgue measure.

#### **Theorem 11.** *(Compactness in $BV^s(a,b)$)*

*Assume that* 0<*s*<1*, the interval* (*a*, *b*) *is bounded and*

$$\|u_n\|_{BV^s(a,b)} \le C. \tag{89}$$

*Then, there exist u* <sup>∈</sup> *<sup>L</sup>*1(*a*, *<sup>b</sup>*) *and a subsequence such that, without relabeling,*

$$\begin{array}{ll} (i) & u_n \rightharpoonup u \quad \text{weakly in } L^q(a,b), \;\; \forall q \in [1, 1/(1-s)), \\ (ii) & I_+^{1-s}[u_n] \to I_+^{1-s}[u] \quad \text{strongly in } L^p(a,b), \;\; \forall p < +\infty, \\ (iii) & I_-^{1-s}[u_n] \to I_-^{1-s}[u] \quad \text{strongly in } L^p(a,b), \;\; \forall p < +\infty, \end{array}$$

$$I_+^{1-s}[u_n] \rightharpoonup I_+^{1-s}[u], \qquad I_-^{1-s}[u_n] \rightharpoonup I_-^{1-s}[u] \qquad \text{weakly in } BV(a,b). \tag{90}$$

**Proof.** The proof can be achieved by exactly the same argument used in the proof of compactness in *Ws*,1(*a*, *b*) (Theorem 3).

**Remark 13.** *We emphasize that*

$$BV(a,b) \subsetneq BV^s(a,b) \qquad \forall s \in (0,1), \tag{91}$$

*since $BV \subsetneq W^{s,1}$ and $W^{s,1} \subset BV^s$. Moreover,*

$$BV(a,b) \subsetneq \bigcap_{\sigma \in (0,1)} W^{\sigma,1}(a,b) \subsetneq W^{s,1}(a,b) \subsetneq BV_+^s(a,b) \qquad \forall s \in (0,1). \tag{92}$$

*Indeed, the first embedding follows by (78) and is strict due to (79); the second embedding is obviously strict; about the third embedding, notice that $H(x)|x|^{s-1} \in BV_+^s(-1,1)\setminus W_+^{s,1}(-1,1)$ (see Example 8).*

*In addition, we can rewrite* (74) *as follows*

$$W^{s,1}(a,b) \subsetneq BV_+^s(a,b)\,, \qquad W^{s,1}(a,b) \subsetneq BV_-^s(a,b)\,, \qquad \forall s \in (0,1), \tag{93}$$

*since, referring to notations* (76) *and* (77) *in the proof of Theorem 6,*

$$\exists z\_{\tau} \in BV\_{+}^{s}(-1,1) \backslash W^{s,1}(-1,1) \,, \qquad s \in \left(1 - \ln 2 / \left(\ln 2 - \ln(1 - \tau)\right), 1\right), \tag{94}$$

$$\exists\, u_\tau \in BV_-^s(-1,1) \backslash W^{s,1}(-1,1), \qquad s \in \left(1 - \ln 2 / \left(\ln 2 - \ln(1-\tau)\right), 1\right). \tag{95}$$

#### **5. Abel Equation in** $\mathcal{D}'(\mathbb{R})$ **and Some Useful Relationships**

Here, for the reader's convenience, we first recall some basic algebra of fractional differential calculus; then we extend to the distributional setting some classical results about Abel integral equations: these suitably tuned claims are exploited in Sections 3 and 4 to prove the main properties of $W^{\alpha,1}(a,b)$ and $BV^\alpha(a,b)$, with $\alpha \in (0,1)$: Theorems 1, 3, 5, 6, 9 and 11.

All the results stated in this section are independent of the ones of previous sections.

To avoid confusion with the standard notation for the variable $s$ in the Laplace transform, here we label by $\alpha$, instead of $s$, the index of the fractional integral, the fractional derivative and the fractional Sobolev space.

Throughout this section: a *Laplace-transformable function* is a measurable function $v$ on $\mathbb{R}$ with support contained in $[0,+\infty)$ such that there exists $\lambda \in \mathbb{R}$ for which $e^{-\lambda x}v(x)$ is a Lebesgue-integrable function; a *Laplace-transformable distribution* is a distribution $v$ on $\mathbb{R}$ with support contained in $[0,+\infty)$ such that there exists $\lambda \in \mathbb{R}$ for which $e^{-\lambda x}v(x)$ is a tempered distribution; in all cases, $V = \mathcal{L}\{v\}$ denotes the Laplace transform of $v$.

First, we recall some relationships concerning fractional integrals of powers of $x$ in $(0,1)$:

$$I_{0+}^{1-\alpha}[x^\beta] = \frac{\Gamma(1+\beta)}{\Gamma(2+\beta-\alpha)}\, x^{1+\beta-\alpha} \qquad\qquad \alpha \in (0,1),\ \beta > -1, \tag{96}$$

$$I_{0+}^{\alpha}[x^\beta] = \frac{\Gamma(1+\beta)}{\Gamma(1+\beta+\alpha)}\, x^{\beta+\alpha} \qquad\qquad \alpha \in (0,1),\ \beta > -1, \tag{97}$$

$$I_{1-}^{1-\alpha}[x^\beta] = \left( \frac{\Gamma(\alpha-\beta-1)}{\Gamma(-\beta)} - \frac{B(x, \alpha-\beta-1, 1-\alpha)}{\Gamma(1-\alpha)} \right) x^{1+\beta-\alpha} \qquad \alpha, \beta \in (0,1), \tag{98}$$

where $B = B(x,\mu,\nu)$ denotes the incomplete Beta function: $B(x,\nu,\mu) = \int_0^x y^{\nu-1}(1-y)^{\mu-1}\,dy$.

Hence, since both conditions $I_{0+}^{1-\alpha}[x^\beta] \in W_G^{1,1}(0,1)$ and $I_{0+}^{1-\alpha}[x^\beta](0) = 0$ hold true when $\beta > \alpha - 1$, one obtains the fractional derivative of power functions of $x$ in $(0,1)$:

$$\begin{split} D_{0+}^{\alpha}[x^\beta] &= D_x\, I_{0+}^{1-\alpha}[x^\beta] = \frac{d}{dx}\, I_{0+}^{1-\alpha}[x^\beta] \\ &= \frac{\Gamma(1+\beta)}{(1+\beta-\alpha)\,\Gamma(1+\beta-\alpha)}\, \frac{d}{dx}\, x^{1+\beta-\alpha} \\ &= \frac{\Gamma(1+\beta)}{\Gamma(1+\beta-\alpha)}\, x^{\beta-\alpha} \qquad\qquad \alpha \in (0,1),\ \beta > \alpha - 1. \end{split} \tag{99}$$

Moreover,

$$I_{0+}^{1-\alpha}[x^{\alpha-1}] = \frac{1}{\Gamma(1-\alpha)} \int_0^x \frac{t^{\alpha-1}\, dt}{(x-t)^{\alpha}} = \frac{1}{\Gamma(1-\alpha)} \int_0^1 \frac{dy}{y^{1-\alpha}(1-y)^{\alpha}} = \frac{B(\alpha, 1-\alpha)}{\Gamma(1-\alpha)} = \Gamma(\alpha)$$

entails

$$D_{0+}^{\alpha}[x^{\alpha-1}] \equiv 0, \qquad\qquad \alpha \in (0,1). \tag{100}$$

In the particular case $\alpha = 1/2$, we obtain

$$I_{0+}^{1/2}[x^{-1/2}](x) \equiv \sqrt{\pi} \qquad \text{and} \qquad D_{0+}^{1/2}[x^{-1/2}] \equiv 0. \tag{101}$$

Thus, $D_{0+}^{1/2}$ has a nontrivial kernel, as is the case for all the linear operators $D_{0+}^{\alpha}$. More generally, by (100), we know that

$$D_{a+}^{\alpha}\left[ H(x-a)(x-a)^{\alpha-1} \right] \equiv 0 \qquad \forall \alpha \in (0,1),\ \forall a \in \mathbb{R},\ x \in \mathbb{R}. \tag{102}$$

The converse holds too (see Proposition 8).
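Identity (101) lends itself to a direct numerical check (an illustration of ours, not from the paper): after the symmetric substitution $t = w^2$, the integral $\frac{1}{\Gamma(1/2)}\int_0^x \frac{dt}{\sqrt{t(x-t)}}$ can be evaluated by a midpoint rule and is indeed the constant $\sqrt{\pi}$, independently of $x$:

```python
import math

def I_half_of_inv_sqrt(x, n=20_000):
    """I_{0+}^{1/2}[t^{-1/2}](x) = (1/Gamma(1/2)) * int_0^x dt / sqrt(t(x-t)),
    computed on (0, x/2] via t = w^2 (the half (x/2, x) is equal by symmetry)."""
    hi = math.sqrt(x / 2)
    h = hi / n
    acc = sum(2.0 / math.sqrt(x - ((k + 0.5) * h) ** 2) for k in range(n))
    return 2.0 * acc * h / math.gamma(0.5)

for x in (0.3, 0.7, 1.0):
    print(x, I_half_of_inv_sqrt(x))   # ~ sqrt(pi) = 1.7724... for every x
```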

#### **Proposition 1.** *(Semigroup property of the fractional integral $I_{a+}^{\alpha}$)*

*For every $0<\alpha<1$, $v \in L^1(\mathbb{R})$, $\operatorname{spt} v \subset [a,+\infty)$ with $a \in \mathbb{R}$, we have*

$$I_{a+}^{1-\alpha}\left[ I_{a+}^{\alpha}[v] \right](x) = I_{a+}^{1}[v](x) = \int_a^x v(t)\, dt \qquad x \in \mathbb{R}, \tag{103}$$

$$D_{a+}^{1-\alpha}\left[ D_{a+}^{\alpha}[v] \right] = D_x[v] \qquad \text{in } \mathcal{D}'(\mathbb{R}). \tag{104}$$

*In general, if $-\infty < a < b \le +\infty$, $\alpha, \beta \in (0,1)$, $\alpha+\beta < 1$, $v \in L^1(a,b)$, then*

$$I_{a+}^{\alpha}\left[ I_{a+}^{\beta}[v] \right](x) = I_{a+}^{\alpha+\beta}[v](x) \qquad x \in \mathbb{R}, \tag{105}$$

$$D_{a+}^{\alpha}\left[ D_{a+}^{\beta}[v] \right] = D_{a+}^{\alpha+\beta}[v] \qquad \text{in } \mathcal{D}'(\mathbb{R}); \tag{106}$$

*if $-\infty \le a < b < +\infty$, $\alpha, \beta \in (0,1)$, $\alpha+\beta < 1$, $v \in L^1(a,b)$, then*

$$I_{b-}^{\alpha}\left[ I_{b-}^{\beta}[v] \right](x) = I_{b-}^{\alpha+\beta}[v](x) \qquad x \in \mathbb{R}, \tag{107}$$

$$D_{b-}^{\alpha}\left[ D_{b-}^{\beta}[v] \right] = D_{b-}^{\alpha+\beta}[v] \qquad \text{in } \mathcal{D}'(\mathbb{R}). \tag{108}$$

**Proof.** Consider the trivial extension of *v* and the standard extension of related subsequent fractional integrals as defined by

$$I_{a+}^{\alpha}[v] = v * \frac{1}{\Gamma(\alpha)}\, \frac{H(x)}{|x|^{1-\alpha}} \qquad x \in \mathbb{R}.$$

We first assume $a = 0$. Denoting by $V$ the Laplace transform of $v$, and taking into account that $\mathcal{L}\{H(x)\, x^\beta\} = \Gamma(\beta+1)/s^{\beta+1}$ and (97), we obtain, for $\operatorname{Re} s > 0$,

$$\begin{aligned} \mathcal{L}\left\{ I_{0+}^{1-\alpha}\left[ I_{0+}^{\alpha}[v] \right](x) \right\} &= \frac{1}{\Gamma(1-\alpha)}\, \frac{1}{\Gamma(\alpha)}\, \mathcal{L}\left\{ v * \frac{H(x)}{|x|^{\alpha}} * \frac{H(x)}{|x|^{1-\alpha}} \right\} \\ &= \frac{1}{s}\, V(s) = \mathcal{L}\left\{ \int_{-\infty}^x v(t)\, dt \right\} = \mathcal{L}\left\{ \int_0^x v(t)\, dt \right\}, \end{aligned}$$

hence, claim (103) follows by the injectivity of the Laplace transform.

$$\begin{aligned} \mathcal{L}\left\{ D_{0+}^{1-\alpha}\left[ D_{0+}^{\alpha}[v] \right](x) \right\} &= \frac{1}{\Gamma(\alpha)}\, \frac{1}{\Gamma(1-\alpha)}\, \mathcal{L}\left\{ D_x\left( \frac{H(x)}{|x|^{1-\alpha}} * D_x\left( \frac{H(x)}{|x|^{\alpha}} * v \right) \right) \right\} \\ &= \frac{1}{\Gamma(\alpha)}\, \frac{1}{\Gamma(1-\alpha)}\; s\, \frac{\Gamma(\alpha)}{s^{\alpha}}\; s\, \frac{\Gamma(1-\alpha)}{s^{1-\alpha}}\; V(s) = s\, V(s) = \mathcal{L}\{D_x v\}, \end{aligned}$$

hence, claim (104) follows by the injectivity of the Laplace transform.

In general, we obtain, for Re *s*>0,

$$\begin{aligned} \mathcal{L}\left\{ I_{0+}^{\alpha}\left[ I_{0+}^{\beta}[v] \right](x) \right\} &= \frac{1}{\Gamma(\alpha)}\, \frac{1}{\Gamma(\beta)}\, \mathcal{L}\left\{ v * \frac{H(x)}{|x|^{1-\alpha}} * \frac{H(x)}{|x|^{1-\beta}} \right\} \\ &= \frac{1}{s^{\alpha+\beta}}\, V(s) = \frac{1}{\Gamma(\alpha+\beta)}\, \mathcal{L}\left\{ v * \frac{H(x)}{|x|^{1-(\alpha+\beta)}} \right\} = \mathcal{L}\left\{ I_{0+}^{\alpha+\beta}[v](x) \right\}, \end{aligned}$$

which proves (105). Identity (106) is achieved in the same way. The case of a general $a \in \mathbb{R}$ is achieved by translation. Moreover, given $v \in L^1(a,b)$ with $-\infty < a < b \le +\infty$, by considering the trivial extension of $v$ on $\mathbb{R}$, we obtain

$$\begin{split} I_{b-}^{\alpha}\left[ I_{b-}^{\beta}[v] \right] &\overset{(35)}{=} \left( I_{a+}^{\alpha}\left[ \left( I_{b-}^{\beta}[v] \right)^{\vee} \right] \right)^{\vee} = \left( I_{a+}^{\alpha}\left[ I_{a+}^{\beta}[v^{\vee}] \right] \right)^{\vee} \\ &\overset{(105)}{=} \left( I_{a+}^{\alpha+\beta}[v^{\vee}] \right)^{\vee} \overset{(35)}{=} \left( \left( I_{b-}^{\alpha+\beta}[v] \right)^{\vee} \right)^{\vee} = I_{b-}^{\alpha+\beta}[v]; \end{split}$$

hence, (107) is proved. Identity (108) is achieved in the same way.
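The semigroup identity (105) can be sanity-checked on monomials, where the closed form (97) makes it a telescoping of Gamma factors; a minimal sketch (names are ours):

```python
import math

def coeff(order, beta):
    """Coefficient in (97): I_{0+}^{order}[x^beta] = coeff * x^{beta + order}."""
    return math.gamma(1 + beta) / math.gamma(1 + beta + order)

alpha, beta, gam = 0.3, 0.45, 1.2   # orders alpha, beta with alpha + beta < 1; monomial x^gam
# I^alpha[I^beta[x^gam]] = coeff(beta, gam) * I^alpha[x^{gam + beta}]
lhs = coeff(beta, gam) * coeff(alpha, gam + beta)
rhs = coeff(alpha + beta, gam)      # coefficient of I^{alpha+beta}[x^gam]
print(lhs, rhs)                     # equal: the Gamma factors telescope
```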

**Proposition 2.** *Assume $\alpha \in (0,1)$, $-\infty < a < b \le +\infty$, $I_{a+}^{1-\alpha}[f](a) = 0$ and $f$ belongs to $W_+^{\alpha,1}(a,b) := \left\{ v \in L^1(a,b) \mid I_{a+}^{1-\alpha}[v] \in W_G^{1,1}(a,b) \right\}$.*

*Then the Abel integral equation*

$$I_{a+}^{\alpha}[u](x) = f(x) \qquad \text{for a.e. } x \text{ in the interval } (a,b) \tag{109}$$

*admits the solution u given by*

$$u(x) = D_{a+}^{\alpha}[f](x) \qquad \text{for a.e. } x \text{ in the interval } (a,b), \tag{110}$$

*which is unique among Laplace-transformable functions evaluated at the translated variable $x - a$.*

**Proof.** Whenever necessary, we consider the trivial extension (namely, with value 0) on $(-\infty,a)$ of every function, and if necessary on $(b,+\infty)$, without relabeling the function. Thus, every related fractional integral, defined as a function over $(a,b)$, has a trivial extension, which coincides on $(a,b)$ with the same fractional integral of the trivial extension; namely, it has support contained in $[a,+\infty)$.

First, we assume $a = 0$. In this case, $f$ is a Laplace-transformable function: we denote by $F(s) = \mathcal{L}\{f\}(s)$ and $U(s) = \mathcal{L}\{u\}(s)$ the Laplace transforms evaluated at the variable $s$. If a Laplace-transformable solution $u$ exists, then its Laplace transform $U$ must fulfill the transformed equation. We have

$$\begin{aligned} I_{0+}^{\alpha}[u](x) &= \frac{1}{\Gamma(\alpha)} \int_0^x \frac{u(t)\, dt}{(x-t)^{1-\alpha}} = \frac{1}{\Gamma(\alpha)} \left( H(x)\, x^{\alpha-1} \right) * u, \\ I_{0+}^{\alpha}[u](x) = f(x) \;\;&\Longrightarrow\;\; \frac{1}{\Gamma(\alpha)}\, \frac{\Gamma(\alpha)}{s^{\alpha}}\, U(s) = F(s) \;\;\Longrightarrow\;\; U(s) = s^{\alpha} F(s) = s\, \frac{F(s)}{s^{1-\alpha}}. \end{aligned}$$

We evaluate $u = \mathcal{L}^{-1}\{U\}$, recalling that $\mathcal{L}\{D_x w(x)\} = s\, \mathcal{L}\{w\}$, where $w$ is any Laplace-transformable distribution, and that here $D_x$ and $d/dx$ denote, respectively, the distributional derivative on the open set $\mathbb{R}$ and on $(a,+\infty)$. By taking into account that $I_+^{1-\alpha}[f]$ belongs to $W_G^{1,1}(0,b) \subset L^\infty(0,b) \cap C^0[0,b]$, we know that $I_+^{1-\alpha}[f](0_+)$ is a well-defined real value. Thus, by the formula $s\, G(s) = \mathcal{L}\{D_x g\} = \mathcal{L}\{(d/dx)g\} + g(0_+)$ applied to $g = I_{a+}^{1-\alpha}[f]$ under the assumption $I_{a+}^{1-\alpha}[f](0) = 0$, we obtain

$$\begin{split} u(x) &= \mathcal{L}^{-1}\{U(s)\} = \mathcal{L}^{-1}\Big\{s\,\frac{F(s)}{s^{1-\alpha}}\Big\} \\ &= D_x\left(f * \frac{1}{\Gamma(1-\alpha)\,x^{\alpha}}\right) = D_x\left(I_{+}^{1-\alpha}[f](x)\right) \\ &= \frac{d}{dx}\left(I_{+}^{1-\alpha}[f](x)\right) = \frac{1}{\Gamma(1-\alpha)}\frac{d}{dx}\int_{0}^{x}\frac{f(t)\,dt}{(x-t)^{\alpha}} = D_{+}^{\alpha}[f](x) \end{split} \tag{111}$$

where the four last equalities are understood in the a.e. sense on $(-\infty,b)$, coherently with the fact that $u \in L^1(-\infty,b)$, because $u$ coincides with the derivative of the function $I_{+}^{1-\alpha}[f] \in W_G^{1,1}(0,b)$ and vanishes on $(-\infty,0)$. Moreover, $u$ is unique due to the injectivity of the Laplace transform. Then, (110) is proved when $a = 0$.
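The key Laplace-transform pair used above, $\mathcal{L}\{H(x)\,x^{\alpha-1}\}(s) = \Gamma(\alpha)/s^{\alpha}$, can also be checked numerically. The following minimal Python sketch (not part of the original argument; the function name, truncation, and discretization are illustrative choices) desingularizes the integrand via the substitution $t = z^{1/\alpha}$ and compares a midpoint-rule quadrature with the closed form:

```python
import math

def laplace_power(alpha, s, n=200000, zmax=50.0):
    # L{H(x) x^(alpha-1)}(s) = integral_0^inf t^(alpha-1) e^(-s t) dt.
    # The substitution t = z^(1/alpha) removes the t^(alpha-1) singularity:
    # integral = (1/alpha) * integral_0^inf exp(-s z^(1/alpha)) dz (truncated at zmax).
    h = zmax / n
    total = sum(math.exp(-s * ((k + 0.5) * h) ** (1.0 / alpha)) for k in range(n))
    return total * h / alpha

alpha, s = 0.5, 2.0
print(laplace_power(alpha, s))          # quadrature value
print(math.gamma(alpha) / s ** alpha)   # closed form Gamma(alpha)/s^alpha
```

With these parameters, the two printed values agree to several decimal places, supporting the transform pair on which the inversion argument rests.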

If $a \neq 0$, we can exploit the solution Formula (111) proved in the case $a = 0$: assume $I_{a+}^{\alpha}[u] = f$ on $(a,b)$ and set $u(t) = v(t-a)$; then, $t \mapsto v(t) = u(t+a)$ and $t \mapsto f(t+a)$ have support on $[0,+\infty)$ and, hence, are Laplace-transformable functions.

$$I_{a+}^{\alpha}[u](x) = \frac{1}{\Gamma(\alpha)} \int_{a}^{x} \frac{u(t)\,dt}{(x-t)^{1-\alpha}} = \frac{1}{\Gamma(\alpha)} \int_{a}^{x} \frac{v(t-a)\,dt}{\left((x-a)-(t-a)\right)^{1-\alpha}}$$

$$= \frac{1}{\Gamma(\alpha)} \int_{0}^{x-a} \frac{v(r)\,dr}{\left((x-a)-r\right)^{1-\alpha}} = I_{0+}^{\alpha}[v](x-a) = I_{0+}^{\alpha}[u(t+a)](x-a)$$

Thus, we have the Abel equation $I_{0+}^{\alpha}[u(t+a)](x-a) = f(x)$, that is,

$$I_{0+}^{\alpha}[u(t+a)](x) = f(x+a)\,.$$

By (111), we get $u(x+a) = v(x) = D_{0+}^{\alpha}[f(t+a)](x)$, that is,

$$u(x) = D_{a+}^{\alpha}[f](x). \tag{112}$$

**Remark 14.** *At first glance, both technical assumptions in Proposition 2, namely $f \in W_{+}^{\alpha,1}(a,b)$ and $I_{a+}^{1-\alpha}[f](a) = 0$, may look strange or unnatural.*

*However, they cannot be circumvented: actually, they are both necessary conditions for the existence of a solution $u \in L^1(a,b)$ of Equation (109).*

*Let us check this claim: if such a solution $u \in L^1(a,b)$ exists, then $f = I_{a+}^{\alpha}[u]$ belongs to $L^1(a,b)$; moreover, due to the semigroup property of fractional integrals (see Proposition 1),*

$$I_{a+}^{1-\alpha}[f](x) = I_{a+}^{1-\alpha}\big[I_{a+}^{\alpha}[u]\big](x) = I_{a+}^{1}[u](x) = \int_{a}^{x} u(t)\,dt\,,\tag{113}$$

*hence, $I_{a+}^{1-\alpha}[f]$ is the primitive of an $L^1(a,b)$ function; thus, $I_{a+}^{1-\alpha}[f]$ belongs to $W_G^{1,1}(a,b)$ and $I_{a+}^{1-\alpha}[f](a) = 0$.*

**Remark 15.** *Condition $I_{a+}^{1-\alpha}[f](a) = 0$ may not be easy to check. However, it can be replaced by stronger conditions that are much easier to check. Indeed, if either there exists a finite value $f(a_+) := \lim_{x\to a_+} f(x)$ or $f$ is bounded in a neighborhood of $a$, then $I_{a+}^{1-\alpha}[f](a) = 0$.*

**Remark 16.** *For the unnormalized Abel equation $\Gamma(\alpha)\, I_{a+}^{\alpha}[u] = f$, namely,*

$$\int_{a}^{x} \frac{u(t)}{(x-t)^{1-\alpha}}\,dt = f(x) \qquad \text{for } x \text{ in the interval } (a,b),\tag{114}$$

*as a straightforward consequence of Proposition 2 and the Euler reflection formula, $\Gamma(z)\Gamma(1-z) = \pi/\sin(\pi z)$ $\forall z \in \mathbb{C}\setminus\mathbb{Z}$, under the assumption $f \in W_{+}^{\alpha,1}(a,b)$, we recover the following formula for the unique solution $u$ in $L^1(a,b)$:*

$$u(x) = \frac{1}{\Gamma(\alpha)}\, D_{a+}^{\alpha}[f](x) = \frac{1}{\Gamma(\alpha)\Gamma(1-\alpha)}\,\frac{d}{dx}\int_{a}^{x} \frac{f(t)\,dt}{(x-t)^{\alpha}} = \frac{\sin(\alpha\pi)}{\pi}\,\frac{d}{dx}\int_{a}^{x} \frac{f(t)\,dt}{(x-t)^{\alpha}}\tag{115}$$

*still under the requirement that the necessary conditions $f \in W_{+}^{\alpha,1}(a,b)$ and $I_{a+}^{1-\alpha}[f](a) = 0$ hold true.*
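The reflection-formula step in (115) lends itself to a quick numerical check. The following sketch (illustrative, not part of the original text) confirms that $1/(\Gamma(\alpha)\Gamma(1-\alpha)) = \sin(\alpha\pi)/\pi$ for a few values of $\alpha$:

```python
import math

# Euler reflection formula: Gamma(z) Gamma(1-z) = pi / sin(pi z),
# hence 1/(Gamma(alpha) Gamma(1-alpha)) = sin(alpha*pi)/pi, as used in (115).
for alpha in (0.1, 0.25, 0.5, 0.9):
    lhs = 1.0 / (math.gamma(alpha) * math.gamma(1.0 - alpha))
    rhs = math.sin(alpha * math.pi) / math.pi
    assert abs(lhs - rhs) < 1e-12
    print(f"alpha={alpha}: {lhs:.12f} == {rhs:.12f}")
```

This is why the normalization constant $1/(\Gamma(\alpha)\Gamma(1-\alpha))$ in (115) can be written in the classical Abel form $\sin(\alpha\pi)/\pi$.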

Now, we remove the assumption $I_{a+}^{1-\alpha}[v](a) = 0$ and look for solutions in $\mathcal{D}'(\mathbb{R})$.

**Proposition 3.** *Assume that $\alpha \in (0,1)$, $-\infty < a < b \le +\infty$ and $f$ belongs to the space $BV_{+}^{\alpha}(a,b) := \left\{ v \in L^1(a,b) \mid I_{a+}^{1-\alpha}[v] \in BV(a,b) \right\}$.*

*Then, the Abel integral equation in the distributional framework*

$$I_{a+}^{\alpha}[u] = f \qquad \text{in } \mathcal{D}'(\mathbb{R}), \tag{116}$$

*admits a unique solution u among Laplace transformable distributions evaluated at x* − *a (variable translation), which is the bounded measure on* R *with support contained in* [*a*, +∞) *given by*

$$u(x) = D_{a+}^{\alpha}[f](x) + I_{a+}^{1-\alpha}[f](a_{+})\,\delta(x-a) \quad \text{in } \mathcal{D}'(\mathbb{R}).\tag{117}$$

*In* (116)*, actually u denotes the trivial extension outside* (*a*, *b*)*, and*

$$I_{a+}^{\alpha}[u] = u \, * \, \frac{1}{\Gamma(\alpha)}\,\frac{H(x)}{|x|^{1-\alpha}}$$

*represents the distributional convolution whose evaluation, namely f , is identically* 0 *on* (−∞, *a*) *and possibly non-zero on* [*b*, +∞)*.*

**Proof.** Same proof as Proposition 2. Only the step in (111) with $a = 0$ has to be slightly modified: denoting by $\widetilde{D}_x$ and $D_x$ the distributional derivatives, respectively, in $\mathcal{D}'(\mathbb{R}\setminus\{0\})$ and $\mathcal{D}'(\mathbb{R})$, setting $F(s) = \mathcal{L}\{f\}$, $g(x) = I_{a+}^{1-\alpha}[f](x)$, $G(s) = \mathcal{L}\{g\} = F(s)/s^{1-\alpha}$, $\mathcal{L}\{D_x g\} = sG(s)$ and

$$D_x g = \widetilde{D}_x g + g(0_+)\,\delta(x)\,,$$

we exploit the fact that $I_{+}^{1-\alpha}[f](0_+)$ is a finite, well-defined value (since $f \in BV_{+}^{\alpha}(0,b)$ entails $I_{+}^{1-\alpha}[f] \in BV(0,b)$), and we replace (111) by

$$\begin{split} u(x) &= \mathcal{L}^{-1}\{U(s)\} = \mathcal{L}^{-1}\left\{s\,\frac{F(s)}{s^{1-\alpha}}\right\} = \mathcal{L}^{-1}\{s\,G(s)\} \\ &= D_x\, g = \frac{d}{dx}\, g + g(0_+)\,\delta(x) \\ &= \frac{d}{dx}\left(f * \frac{1}{\Gamma(1-\alpha)\,x^{\alpha}}\right) + I_{0+}^{1-\alpha}[f](0_+)\,\delta(x) \\ &= \frac{d}{dx}\left(I_{+}^{1-\alpha}[f](x)\right) + I_{0+}^{1-\alpha}[f](0_+)\,\delta(x) \\ &= D_{+}^{\alpha}[f](x) + I_{0+}^{1-\alpha}[f](0_+)\,\delta(x)\,. \end{split}$$

**Corollary 1.** *Assume $\alpha \in (0,1)$, $-\infty \le a < b < +\infty$, the value $f(b_-) := \lim_{x\to b_-} f(x)$ exists and is finite (possibly substituted by the weaker condition $I_{b-}^{1-\alpha}[f](b) = 0$), and $f$ belongs to $W_{-}^{\alpha,1}(a,b) := \left\{ v \in L^1(a,b) \mid I_{b-}^{1-\alpha}[v] \in W_G^{1,1}(a,b) \right\}$.*

*Then, the backward Abel integral equation*

$$I_{b-}^{\alpha}[u](x) = f(x) \qquad \text{for a.e. } x \text{ in the interval } (a,b)\tag{118}$$

*admits a solution u, unique among Laplace transformable functions evaluated at b* − *x (sign change and translation), which is given by*

$$u(x) = D_{b-}^{\alpha}[f](x) \qquad \text{for a.e. } x \text{ in the interval } (a,b).\tag{119}$$

**Proof.** Taking into account that $b \in \mathbb{R}$, set $v(t) = \check{u}(t) := u(-t)$; hence, $u : (a,b) \to \mathbb{R}$, $v : (-b,-a) \to \mathbb{R}$, and choose $x \in (a,b)$. Then,

$$\begin{split} f(x) &= I_{b-}^{\alpha}[u](x) = \frac{1}{\Gamma(\alpha)} \int_{x}^{b} \frac{u(r)\,dr}{(r-x)^{1-\alpha}} \\ &= \frac{1}{\Gamma(\alpha)} \int_{x-b}^{0} \frac{u(t+b)\,dt}{\left((b+t)-x\right)^{1-\alpha}} = \frac{1}{\Gamma(\alpha)} \int_{x-b}^{0} \frac{v(-(b+t))\,dt}{\left(-x+(b+t)\right)^{1-\alpha}} \\ &= \frac{1}{\Gamma(\alpha)} \int_{-b}^{-x} \frac{v(y)\,dy}{\left(-x-y\right)^{1-\alpha}} = I_{(-b)+}^{\alpha}[v](-x) \end{split}$$

Therefore, we can apply Proposition 2 to an Abel equation on (−*b*, −*a*):

$$I_{(-b)+}^{\alpha}[v](x) = f(-x)$$

$$I_{(-b)+}^{\alpha}[\check{u}](x) = \check{f}(x)$$

$$\check{u}(x) = D_{(-b)+}^{\alpha}[\check{f}](x)$$

where $\check{f}(x) := f(-x)$. Expanding the derivative and then using the chain rule,

$$\begin{split} u(-x) &= \frac{1}{\Gamma(1-\alpha)} \frac{d}{dx} \int_{-b}^{x} \frac{f(-t)\,dt}{(x-t)^{\alpha}} = \frac{-1}{\Gamma(1-\alpha)} \frac{d}{dx} \int_{b}^{-x} \frac{f(\tau)\,d\tau}{(x+\tau)^{\alpha}} \\ &= \frac{1}{\Gamma(1-\alpha)} \frac{d}{dx} \int_{-x}^{b} \frac{f(\tau)\,d\tau}{(x+\tau)^{\alpha}} = \frac{d}{dx}\left( I_{b-}^{1-\alpha}[f](-x) \right) \\ &= -\left(\frac{d}{dx}\, I_{b-}^{1-\alpha}[f]\right)(-x) = D_{b-}^{\alpha}[f](-x)\,, \qquad x \in (-b,-a)\,, \end{split}$$

that is,

$$u(x) := D_{b-}^{\alpha}[f](x)\,, \qquad x \in (a,b)\,. \quad \square$$

**Corollary 2.** *Assume that $\alpha \in (0,1)$, $-\infty \le a < b < +\infty$ and $f$ belongs to the space $BV_{-}^{\alpha}(a,b) := \left\{ v \in L^1(a,b) \mid I_{b-}^{1-\alpha}[v] \in BV(a,b) \right\}$.*

*Then the backward Abel integral equation in the distributional framework*

$$I_{b-}^{\alpha}[u](x) = f(x) \qquad \text{in } \mathcal{D}'(\mathbb{R}), \tag{120}$$

*admits a unique solution u among Laplace transformable distributions evaluated at b* − *x (say with sign change and translation), which is the bounded measure with support contained in* (−∞, *b*] *given by*

$$u(x) = D_{b-}^{\alpha}[f](x) + I_{b-}^{1-\alpha}[f](b_{-})\,\delta(x-b) \qquad \text{in } \mathcal{D}'(\mathbb{R}).\tag{121}$$

*In* (120)*, actually u denotes the trivial extension outside* (*a*, *b*)*, and*

$$I_{b-}^{\alpha}[u] = u \, * \, \frac{1}{\Gamma(\alpha)}\,\frac{H(-x)}{|x|^{1-\alpha}}$$

*represents the distributional convolution whose evaluation, namely f , is identically* 0 *on* [*b*, +∞) *and possibly non-zero on* (−∞, *a*)*.*

**Proof.** Same proof as Corollary 1, but exploiting Proposition 3 instead of Proposition 2. Notice that the trivial extension of a function in $L^1(a,b)$ has compact support and can be dealt with as a Laplace-transformable distribution evaluated at the variable $b-x$.

**Example 9.** *We mention some basic examples of solutions $u$ of the Abel integral equation $I_{0+}^{\alpha}[u] = f$ on $(0,b)$ with $0 < b \le +\infty$ and, more generally, of the distributional Abel integral equations $I_{a+}^{\alpha}[u] = f$ and $I_{b-}^{\alpha}[u] = f$ with support condition.*

1. *If $f = x^{\alpha}$, then $u = D_{0+}^{\alpha}[t^{\alpha}](x) = \Gamma(\alpha+1)\,H(x)$, for $\alpha \in (0,1)$, due to Proposition 2.*
2. *If $f = H(x)$, then $u = D_{0+}^{\alpha}[H](x) = \dfrac{x^{-\alpha}}{\Gamma(1-\alpha)}$, for $\alpha \in (0,1)$, due to Proposition 2.*
3. *If $f = x^{\beta}$, then $u = \dfrac{\Gamma(\beta+1)}{\Gamma(1+\beta-\alpha)}\, x^{\beta-\alpha}$, for $\alpha \in (0,1)$, $\beta > \alpha - 1$, due to Proposition 2.*

*These relationships are deduced from Proposition 2: in the first and second items, notice that $f \in L^{\infty}(a,b)$ entails $I_{0+}^{1-\alpha}[f](0) = 0$ (see Remark 15), while in the third item, $\beta > \alpha - 1$ entails both $I_{+}^{1-\alpha}[x^{\beta}](x) = \frac{\Gamma(1+\beta)}{\Gamma(2+\beta-\alpha)}\, x^{1+\beta-\alpha} \in W_G^{1,1}(0,1)$ and $I_{0+}^{1-\alpha}[x^{\beta}](0) = 0$. Thus, we get the three claims above by applying the relationships*

$$D_{0+}^{\alpha}\left[x^{\beta}\right] = \frac{\Gamma(1+\beta)}{\Gamma(1+\beta-\alpha)}\, x^{\beta-\alpha}, \qquad \alpha \in (0,1),\ \beta > \alpha - 1,\tag{122}$$

$$D_{0+}^{\alpha}\left[x^{\alpha-1}\right] = 0\,, \qquad \alpha \in (0,1).\tag{123}$$
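The closed form in item 3 (equivalently, relation (122)) can be verified numerically: if $u = \frac{\Gamma(\beta+1)}{\Gamma(1+\beta-\alpha)}\,x^{\beta-\alpha}$, then $I_{0+}^{\alpha}[u]$ should return $x^{\beta}$. A minimal Python sketch follows (the quadrature routine and parameter values are illustrative assumptions, not part of the original text); the substitution $x - t = z^{1/\alpha}$ removes the endpoint singularity of the kernel:

```python
import math

def frac_integral(u, alpha, x, n=100000):
    # I_{0+}^alpha[u](x) = (1/Gamma(alpha)) * integral_0^x u(t) (x-t)^(alpha-1) dt.
    # With x - t = z^(1/alpha): integral = (1/alpha) * integral_0^(x^alpha) u(x - z^(1/alpha)) dz.
    top = x ** alpha
    h = top / n
    total = sum(u(x - ((k + 0.5) * h) ** (1.0 / alpha)) for k in range(n))
    return total * h / (alpha * math.gamma(alpha))

alpha, beta = 0.5, 1.0
c = math.gamma(beta + 1) / math.gamma(1 + beta - alpha)  # coefficient from item 3 / (122)
u = lambda t: c * t ** (beta - alpha)                    # candidate solution of I^alpha[u] = x^beta
x = 1.0
print(frac_integral(u, alpha, x))  # should be close to x**beta = 1.0
```

The quadrature value agrees with $x^{\beta}$ to roughly three or four decimal places, consistent with $u$ solving the Abel equation with datum $f = x^{\beta}$.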

4. *If $f(x) = (x-a)^{\alpha-1}$, $x \in (a,b)$, $\alpha \in (0,1)$, then the solution $u$ with support on $[a,+\infty)$ of the distributional Abel equation $I_{a+}^{\alpha}[u] = (x-a)^{\alpha-1} H(x-a)$ is given by $u(x) = \Gamma(\alpha)\,\delta(x-a)$. Indeed, $I_{a+}^{1-\alpha}[(t-a)^{\alpha-1}](x) \equiv \Gamma(\alpha)$ and $D_{a+}^{\alpha}[(t-a)^{\alpha-1}](x) = D_x\, I_{a+}^{1-\alpha}[(t-a)^{\alpha-1}](x) \equiv 0$; thus, by Proposition 3, $u = D_{a+}^{\alpha}[(t-a)^{\alpha-1}](x) + I_{a+}^{1-\alpha}[(t-a)^{\alpha-1}](a_+)\,\delta(x-a) = \Gamma(\alpha)\,\delta(x-a)$. Then, $u$ solves the Abel equation since, by representation (14), we obtain $I_{a+}^{\alpha}[\Gamma(\alpha)\,\delta(x-a)] = \Gamma(\alpha)\,\delta(x-a) * \frac{H(x)\,x^{\alpha-1}}{\Gamma(\alpha)} = H(x-a)\,(x-a)^{\alpha-1}$ in $\mathcal{D}'(\mathbb{R})$.*


5. *If $f(x) = (b-x)^{\alpha-1}$, $x \in (a,b)$, $\alpha \in (0,1)$, the backward analog holds: by the substitution $y = (b-t)/(b-x)$,*

$$\begin{aligned} I_{b-}^{1-\alpha}[(b-t)^{\alpha-1}](x) &= \frac{1}{\Gamma(1-\alpha)} \int_{x}^{b} \frac{dt}{(b-t)^{1-\alpha}(t-x)^{\alpha}} \\ &= \frac{1}{\Gamma(1-\alpha)} \int_{0}^{1} \frac{dy}{y^{1-\alpha}(1-y)^{\alpha}} = \frac{B(\alpha,1-\alpha)}{\Gamma(1-\alpha)} = \Gamma(\alpha)\,, \end{aligned}$$

*so I* 1−*α <sup>b</sup>*<sup>−</sup> [(*<sup>b</sup>* <sup>−</sup> *<sup>t</sup>*)*α*−1](*x*) =Γ(*α*)*, <sup>D</sup><sup>α</sup> <sup>b</sup>*−[*<sup>t</sup> <sup>α</sup>*−1](*x*) =−*Dx <sup>I</sup>* 1−*α <sup>b</sup>*<sup>−</sup> [*<sup>t</sup> <sup>α</sup>*−1](*x*)≡<sup>0</sup> *so, by Corollary 2, u* = *D<sup>α</sup> <sup>b</sup>*−[(*<sup>b</sup>* <sup>−</sup> *<sup>t</sup>*)*α*−1](*x*)+*<sup>I</sup>* 1−*α <sup>b</sup>*<sup>−</sup> [(*<sup>b</sup>* <sup>−</sup> *<sup>t</sup>*)*α*−1](*b*−) *<sup>δ</sup>*(*<sup>b</sup>* <sup>−</sup> *<sup>x</sup>*) = <sup>Γ</sup>(*α*) *<sup>δ</sup>*(*<sup>b</sup>* <sup>−</sup> *<sup>x</sup>*). *Then, such u solves the Abel equation since, by representation* (14)*,*

$$I_{b-}^{\alpha}[\Gamma(\alpha)\,\delta(b-x)] = \Gamma(\alpha)\,\delta(b-x) * \frac{H(-x)\,|x|^{\alpha-1}}{\Gamma(\alpha)} = H(b-x)\,(b-x)^{\alpha-1} \quad \text{in } \mathcal{D}'(\mathbb{R}).$$
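The Beta-integral evaluation $\int_0^1 y^{\alpha-1}(1-y)^{-\alpha}\,dy = B(\alpha,1-\alpha) = \Gamma(\alpha)\Gamma(1-\alpha)$ used in item 5 can itself be checked numerically. In the sketch below (parameters are illustrative), the substitution $y = \sin^2\theta$ tames both endpoint singularities before a midpoint rule is applied:

```python
import math

alpha = 0.3
# B(alpha, 1-alpha) = integral_0^1 y^(alpha-1) (1-y)^(-alpha) dy;
# with y = sin^2(theta), the integrand becomes 2 sin(theta)^(2a-1) cos(theta)^(1-2a)
# on (0, pi/2), which has only mild (integrable) endpoint singularities.
n = 400000
h = (math.pi / 2.0) / n
total = 0.0
for k in range(n):
    th = (k + 0.5) * h
    total += 2.0 * math.sin(th) ** (2 * alpha - 1) * math.cos(th) ** (1 - 2 * alpha)
beta_num = total * h
print(beta_num / math.gamma(1 - alpha))  # should be close to Gamma(alpha)
print(math.gamma(alpha))
```

The two printed values agree to a few decimal places, confirming $B(\alpha,1-\alpha)/\Gamma(1-\alpha) = \Gamma(\alpha)$ as in the display above.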

**Lemma 8.** *Fix a value α* ∈ (0, 1)*.*

*If a Laplace transformable function u fulfils D<sup>α</sup>* <sup>0</sup>+[*u*] ≡ 0 *on the half-line* (0, +∞)*, then u*(*x*) = *C xα*−1*, for a suitable constant C.*

*If a function $u \in L^1(a,b)$, with $-\infty < a < b < +\infty$, fulfils $D_{a+}^{\alpha}[u] \equiv 0$ on $(a,b)$, then $u(x) = C\,(x-a)^{\alpha-1}$, for a suitable constant $C$.*

*If a function $u \in L^1(a,b)$, with $-\infty < a < b < +\infty$, fulfils $D_{b-}^{\alpha}[u] \equiv 0$ on $(a,b)$, then $u(x) = C\,(b-x)^{\alpha-1}$, for a suitable constant $C$.*

**Proof.** The property

$$D_x\, I_{a+}^{1-\alpha}[u] = D_{a+}^{\alpha}[u] \equiv 0$$

entails that $I_{a+}^{1-\alpha}[u]$ is constant. Thus, for a suitable constant function $K$, we have that $u$ fulfills the Abel integral equation $I_{a+}^{1-\alpha}[u] = K$; moreover, $I_{a+}^{\alpha}[K](a) = 0$ since $K \in L^{\infty}(a,b)$, and, due to (96) and the boundedness of $(a,b)$,

$$I_{a+}^{\alpha}[K] = K\,\frac{(x-a)^{\alpha}}{\alpha\,\Gamma(\alpha)} \in W_G^{1,1}(a,b) \qquad \forall \alpha \in (0,1).$$

Then, by Proposition 2, the solution $u$ of the Abel equation $I_{a+}^{1-\alpha}[u] = K$ is

$$u(x) = D_{a+}^{1-\alpha}[K] = D_x\, I_{a+}^{\alpha}[K] = \frac{K}{\Gamma(\alpha)}\,\frac{d}{dx}\,\frac{(x-a)^{\alpha}}{\alpha} = \frac{K}{\Gamma(\alpha)}\,(x-a)^{\alpha-1}\,.$$

This proves the first and second claims, since an $\mathcal{L}$-transformable function is an $L^1$ function on every bounded interval. The third claim follows in the same way, by applying Corollary 1 to the backward Abel equation $I_{b-}^{1-\alpha}[u] = K$.

Lemma 8 provides the inverse of (100). Hence, summarizing

$$u \in L^1(a,b) \text{ fulfills } D_{a+}^{\alpha}[u] \equiv 0 \text{ on } (a,b) \qquad \text{iff} \qquad u(x) = C\,(x-a)^{\alpha-1}.\tag{124}$$

**Lemma 9.** *Assume that the interval $(a,b)$ is bounded, $0 < \alpha < 1$, $v \in L^1(a,b)$, $I_{a+}^{1-\alpha}[v]$ belongs to $W_G^{1,1}(a,b)$ and $I_{a+}^{1-\alpha}[v](a_+) = 0$.*

*Then*

$$\exists \text{ unique } \mathcal{V} \in L^1(a,b): \quad v = I_{a+}^{\alpha}[\mathcal{V}]\,; \text{ and } \mathcal{V}(x) = D_{a+}^{\alpha}[v] = D_x\, I_{a+}^{1-\alpha}[v], \tag{125}$$

*and $v \in L^q(a,b)$ for every $q \in \left[1, 1/(1-\alpha)\right)$; moreover, there is $C = C(q)$ such that*

$$\|v\|_{L^q(a,b)} \le C(q)\left( \|v\|_{L^1(a,b)} + \big\|I_{a+}^{1-\alpha}[v]\big\|_{W_G^{1,1}(a,b)} \right). \tag{126}$$

*The same claims hold true when $I_{a+}^{\alpha}[v]$, $I_{a+}^{1-\alpha}[v]$ and $D_{a+}^{\alpha}[v]$ are replaced, respectively, by $I_{b-}^{\alpha}[v]$, $I_{b-}^{1-\alpha}[v]$ and $D_{b-}^{\alpha}[v]$ in the assumptions and the claims.*

**Proof.** By considering V as the unknown in the Abel integral equation

$$I_{a+}^{\alpha}[\mathcal{V}] = v \qquad \text{on } (a,b) \tag{127}$$

we know, by Proposition 2, that there is a solution $\mathcal{V} \in L^1(a,b)$ fulfilling the integral equation: such $\mathcal{V}$ is the unique solution in $L^1(a,b)$ and fulfills

$$\mathcal{V}(x) = D_{a+}^{\alpha}[v] = D_x\left(I_{a+}^{1-\alpha}[v]\right). \tag{128}$$

Thus, $\mathcal{V} \in L^1(a,b)$. Moreover, by (127), $I_{a+}^{1-\alpha}[v](a) = 0$ and the semigroup property of $s \mapsto I_{a+}^{s}$ (Proposition 1),

$$I_{a+}^{1-\alpha}[v](x) = I_{a+}^{1-\alpha}\big[I_{a+}^{\alpha}[\mathcal{V}]\big] = I_{a+}^{1}[\mathcal{V}] = \int_{a}^{x} \mathcal{V}(t)\,dt\,.$$

Summing up, $\mathcal{V} \in L^1(a,b)$, $I_{a+}^{1-\alpha}[\mathcal{V}](a_+) = 0$, $I_{a+}^{1-\alpha}[v](a_+) = 0$ and $I_{a+}^{1-\alpha}\big[v - I_{a+}^{\alpha}[\mathcal{V}]\big](a_+) \equiv 0$; then, $v = I_{a+}^{\alpha}[\mathcal{V}]$, $v \in I_{a+}^{\alpha}\big(L^1(a,b)\big)$, and (125) and (126) follow by standard embeddings of fractional integrals.

If we remove the assumption *I* <sup>1</sup>−*<sup>α</sup> <sup>a</sup>*<sup>+</sup> [*v*](*a*+) = 0 in Lemma 9, then we must add suitable corrections to both *v* and V, as stated by the next theorem.

**Theorem 12.** *Assume that $(a,b)$ is bounded, $0 < \alpha < 1$, $v \in L^1(a,b)$ and $I_{a+}^{1-\alpha}[v]$ belongs to $BV(a,b)$. Then,*

$$\exists\, \mathcal{V} \in \mathcal{M}(\mathbb{R}),\ \operatorname{spt}\mathcal{V} \subset [a,+\infty),\ \exists\, K \in \mathbb{R}: \ v = I_{a+}^{\alpha}[\mathcal{V}] + \frac{K}{\Gamma(\alpha)}(x-a)^{\alpha-1}\,; \ D_{a+}^{\alpha}[v] = \mathcal{V}(x), \tag{129}$$

*and $v \in L^q(a,b)$ for every $q \in \left[1, 1/(1-\alpha)\right)$; moreover, there is $B = B(q,K,\alpha)$ such that*

$$\|v\|_{L^q(a,b)} \le B(q,K,\alpha)\left( \|v\|_{L^1(a,b)} + \big\|I_{a+}^{1-\alpha}[v]\big\|_{W_G^{1,1}(a,b)} \right). \tag{130}$$

*Explicitly, for every given $\alpha \in (0,1)$, we have*

$$v = I_{a+}^{\alpha}\big[D_{a+}^{\alpha}[v]\big] + \frac{I_{a+}^{1-\alpha}[v](a_+)}{\Gamma(\alpha)}(x-a)^{\alpha-1}, \quad \text{a.e. } x \in (a,b),\ \forall v \in L^1(a,b): I_{a+}^{1-\alpha}[v] \in BV(a,b). \tag{131}$$

*The same claims hold true when $I_{a+}^{\alpha}[v]$, $I_{a+}^{1-\alpha}[v]$ and $D_{a+}^{\alpha}[v]$ are replaced, respectively, by $I_{b-}^{\alpha}[v]$, $I_{b-}^{1-\alpha}[v]$ and $D_{b-}^{\alpha}[v]$ in the assumptions and the claims.*

**Proof.** Since $I_{a+}^{1-\alpha}[v]$ belongs to $BV(a,b)$, it has a finite right value $I_{a+}^{1-\alpha}[v](a_+)$ at $x = a$, labeled $K$, say $K := I_{a+}^{1-\alpha}[v](a_+)$. By (97), $I_{a+}^{1-\alpha}[(x-a)^{\alpha-1}] \equiv \Gamma(\alpha)$, $0 < \alpha < 1$. We set

$$w(x) = v(x) - \frac{K}{\Gamma(\alpha)}(x-a)^{\alpha-1}\,;$$

then, $w \in L^1(a,b)$, $I_{a+}^{1-\alpha}[w] \in BV(a,b)$ and $I_{a+}^{1-\alpha}[w](a_+) = 0$. We know by Proposition 3 that there is a solution $\mathcal{W} \in \mathcal{M}(\mathbb{R})$ with $\operatorname{spt}\mathcal{W} \subset [a,+\infty)$ fulfilling the integral equation

$$w = I_{a+}^{\alpha}[\mathcal{W}] \qquad \text{in } \mathcal{D}'(\mathbb{R}), \tag{132}$$

such $\mathcal{W}$ is the unique solution with support on $[a,+\infty)$ and fulfills

$$\mathcal{W}(x) = D_{a+}^{\alpha}[w](x)\,. \tag{133}$$

Thus, $\mathcal{W} \in \mathcal{M}(\mathbb{R})$. By (132), $I_{a+}^{1-\alpha}[w](a_+) = 0$ and the semigroup property of $s \mapsto I_{a+}^{s}$,

$$I_{a+}^{1-\alpha}[w](x) = I_{a+}^{1-\alpha}\big[I_{a+}^{\alpha}[\mathcal{W}]\big] = I_{a+}^{1}[\mathcal{W}] = \int_{a}^{x} \mathcal{W}(t)\,dt.$$

Hence, by setting $\mathcal{V} = \mathcal{W} + I_{a+}^{1-\alpha}[v](a_+)\,\delta(x-a)$ and taking into account (99), we obtain

$$v(x) = w(x) + \frac{K}{\Gamma(\alpha)}(x-a)^{\alpha-1} = I_{a+}^{\alpha}[\mathcal{W}](x) + \frac{K}{\Gamma(\alpha)}(x-a)^{\alpha-1}\,, \tag{134}$$

$$D_{a+}^{\alpha}[v] = D_{a+}^{\alpha}[w] + D_{a+}^{\alpha}\left[\frac{K}{\Gamma(\alpha)}(x-a)^{\alpha-1}\right] = D_x\, I_{a+}^{1-\alpha}[v] = \mathcal{V}(x)\,. \tag{135}$$

The function $(x-a)^{\alpha-1}$ belongs to $L^q(a,b)$ for every $q \in \left[1, 1/(1-\alpha)\right)$, due to the boundedness of the interval. By standard embeddings of fractional integrals, the function $w = I_{a+}^{\alpha}[\mathcal{W}]$ belongs to $L^q(a,b)$ for every $q \in \left[1, 1/(1-\alpha)\right)$. Summarizing, $v \in L^q(a,b)$ for every $q \in \left[1, 1/(1-\alpha)\right)$, and (129) and (130) hold true.

**Remark 17.** *We emphasize that, in Theorem 12, the fractional integrals and derivatives $I_{a+}^{1-\alpha}$, $I_{b-}^{1-\alpha}$, $D_{a+}^{\alpha}$ and $D_{b-}^{\alpha}$ are understood in the distributional sense provided by Definitions 1 and 3. Referring to Definition 10, with $(a,b)$ bounded, (131) reads as follows:*

$$v(x) = I_{a+}^{\alpha}\big[D_{a+}^{\alpha}[v]\big](x) + \frac{I_{a+}^{1-\alpha}[v](a_+)}{\Gamma(\alpha)}(x-a)^{\alpha-1} \quad \text{a.e. } x \in (a,b),\ \forall v \in BV_{+}^{\alpha}(a,b),\ \alpha \in (0,1). \tag{136}$$

*Moreover, in a bounded interval* (*a*, *b*) *we have*

$$v = D_{a+}^{\alpha}\big[I_{a+}^{\alpha}[v]\big] \quad \text{a.e. on } (a,b),\ \forall v \in L^1(a,b),\ \forall \alpha \in (0,1), \tag{137}$$

*since $D_{a+}^{\alpha}\big[I_{a+}^{\alpha}[v]\big] = D_x\big[I_{a+}^{1-\alpha}[I_{a+}^{\alpha}[v]]\big] = D_x\big[I_{a+}^{1}[v]\big] = D_x\big[\int_a^x v(t)\,dt\big] = v$; whereas*

$$v = D_{a+}^{\alpha}\big[I_{a+}^{\alpha}[v]\big] + C\,\delta(x-a) \quad \text{in } \mathcal{D}'(\mathbb{R}),\ \forall v \in \mathcal{M}(\mathbb{R}),\ \operatorname{spt} v \subset [a,b],\ \forall \alpha \in (0,1), \tag{138}$$

*where $C = \lim_{x\to a_+} I_{a+}^{1}[v](x)$; indeed, by Lemma 8, the kernel of $D_{a+}^{\alpha}$ consists of functions of the form $K(x-a)^{\alpha-1}$, which all belong to $BV_{+}^{\alpha}(a,b)$ and fulfill, on $\mathbb{R}$,*

$$C\,\Gamma(\alpha)\,H(x-a)(x-a)^{\alpha-1} = C\,\Gamma(\alpha)\,\delta(x-a) * \frac{1}{\Gamma(\alpha)}\frac{H(x)}{x^{1-\alpha}} = I_{a+}^{\alpha}\big[C\,\Gamma(\alpha)\,\delta(x-a)\big].$$
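The identity (137) can be sanity-checked on power functions using only Gamma-function arithmetic: composing the standard mapping rule $I_{0+}^{\alpha}[x^{\beta}] = \frac{\Gamma(\beta+1)}{\Gamma(\beta+1+\alpha)}\,x^{\beta+\alpha}$ with (122) must return the original power. A small Python sketch (function names and sample parameters are illustrative, not from the original text):

```python
import math

def frac_integral_power(beta, alpha):
    # I_{0+}^alpha[x^beta] = Gamma(beta+1)/Gamma(beta+1+alpha) * x^(beta+alpha);
    # returns (coefficient, exponent) of the resulting power function.
    return math.gamma(beta + 1) / math.gamma(beta + 1 + alpha), beta + alpha

def frac_derivative_power(beta, alpha):
    # D_{0+}^alpha[x^beta] = Gamma(beta+1)/Gamma(beta+1-alpha) * x^(beta-alpha), Eq. (122).
    return math.gamma(beta + 1) / math.gamma(beta + 1 - alpha), beta - alpha

alpha, beta = 0.6, 1.25
c1, p1 = frac_integral_power(beta, alpha)   # I^alpha[x^beta]
c2, p2 = frac_derivative_power(p1, alpha)   # D^alpha applied to the result
print(c1 * c2, p2)  # coefficient ~1.0 and exponent ~beta: x^beta is recovered, as in (137)
```

The product of coefficients telescopes, $\frac{\Gamma(\beta+1)}{\Gamma(\beta+1+\alpha)}\cdot\frac{\Gamma(\beta+\alpha+1)}{\Gamma(\beta+1)} = 1$, reflecting the a.e. left-inverse property of $D_{a+}^{\alpha}$ on the range of $I_{a+}^{\alpha}$.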

#### **6. Conclusions**

We establish some properties of the bilateral Riemann–Liouville fractional derivative *D<sup>s</sup>* .

We set the notation and study the associated Sobolev spaces of fractional order *s*, denoted by *Ws*,1(*a*, *b*), and the fractional bounded variation spaces of fractional order *s*, denoted by *BVs*(*a*, *b*). The basic properties of these spaces are proved: weak compactness properties, and comparison embeddings and strict embeddings with several related spaces, namely,

$$BV(a,b) \underset{\neq}{\hookrightarrow} \bigcap_{\sigma \in (0,1)} W^{\sigma,1}(a,b) \underset{\neq}{\hookrightarrow} W^{s,1}(a,b) \underset{\neq}{\hookrightarrow} BV_{+}^{s}(a,b) \qquad \forall s \in (0,1),$$

$$W^{s,1}(a,b) \underset{\neq}{\hookrightarrow} BV_{+}^{s}(a,b)\,, \qquad W^{s,1}(a,b) \underset{\neq}{\hookrightarrow} BV_{-}^{s}(a,b) \qquad \forall s \in (0,1).$$

The spaces $W^{s,1}$ and $BV^{s}$ are the natural setting for the data of Abel integral equations, in order to make them well-posed problems in the distributional framework too.

**Author Contributions:** Conceptualization, A.L. and F.T.; methodology, A.L. and F.T.; formal analysis, A.L. and F.T.; writing—original draft preparation, A.L. and F.T.; writing—review and editing, A.L. and F.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). This research was partially funded by Italian M.U.R. PRIN: grant number 2017BTM7SN "Variational Methods for stationary and evolution problems with singularities and interfaces".

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We thank Maïtine Bergounioux for many helpful discussions.

**Conflicts of Interest:** The authors declare no conflict of interest.


## *Article* **Hybrid Method for Simulation of a Fractional COVID-19 Model with Real Case Application**

**Anwarud Din 1, Amir Khan 2, Anwar Zeb 3, Moulay Rchid Sidi Ammi 4,\*, Mouhcine Tilioua <sup>4</sup> and Delfim F. M. Torres <sup>5</sup>**


**Abstract:** In this research, we provide a mathematical analysis of the novel coronavirus responsible for COVID-19, which continues to be a major threat to humanity. Our fractional-order analysis is carried out using a non-singular kernel operator known as the Atangana-Baleanu-Caputo (ABC) derivative. We parametrize the model using available data on the disease in Pakistan for the period from 9 April to 2 June 2020. We obtain the required solution with the help of a hybrid method that combines the decomposition method with the Laplace transform. Furthermore, a sensitivity analysis is carried out to identify the parameters to which the basic reproduction number of the model is most sensitive. Our results are compared with the real data of Pakistan, and numerical plots are presented for various fractional orders.

**Keywords:** coronavirus disease 2019 (COVID-19); ABC derivative; hybrid method; existence analysis; semi-analytical solution

**MSC:** 34C60; 26A33; 92D30

#### **1. Introduction**

The novel coronavirus SARS-CoV-2, responsible for COVID-19 and a member of the family of Severe Acute Respiratory Syndrome (SARS) viruses, has been recognized as the most dangerous virus of this decade [1]. This virus is a new strain of the SARS family that had not previously been recognized in humans [2]. COVID-19 has not just affected humans: a number of animals have also been infected by the virus. The SARS-CoV-2 virus is transmitted from human to human, and similarly among animals, but its origin is still controversial [3]. Infected humans and different species of animals are recognized as active sources of spread of the virus [1]. In the past, similar viruses, like the Middle East Respiratory Syndrome Coronavirus (MERS-CoV), spread from camels to the human population, and for SARS-CoV-1 the civet was recognized as the source of transmission to humans. For COVID-19, the main route of spread is human-to-human interaction, where the virus is easily transmitted from an infected person to a susceptible one. Currently, thousands of research studies have been produced and many predictions have been given on COVID-19 dynamics; see [4–8] and references therein. Our paper is, however, different from those in the literature. In [4], special focus is given to the transmissibility of so-called superspreaders, with numerical simulations given for data of Galicia, Spain, and Portugal. It turns out that, for each

**Citation:** Din, A.; Khan, A.; Zeb, A.; Sidi Ammi, M.R.; Tilioua, M.; Torres, D.F.M. Hybrid Method for Simulation of a Fractional COVID-19 Model with Real Case Application. *Axioms* **2021**, *10*, 290. https:// doi.org/10.3390/axioms10040290

Academic Editors: Stevan Pilipović and Chris Goodrich

Received: 1 August 2021 Accepted: 28 October 2021 Published: 1 November 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

region, the order of the Caputo derivative takes a different value, always different from one, showing the relevance of considering fractional-order models to investigate COVID-19. The work in [5] studies the COVID-19 pandemic in Portugal until the end of the three states of emergency, describing well what happened in Portugal with respect to the evolution of active infected and hospitalized individuals. In [6,7], a non-fractional but stochastic time-delayed model for COVID-19 is given, with the aim of studying the situation of Morocco. In [8], the authors provide an *S*-*E*-*I*-*P*-*A*-*H*-*R*-*F* model, while here we propose a much simpler *P*-*I*-*Q* model (our model has only three state variables, while the model in [8] is much more complex, with eight state variables). In [8], the authors use the classical Caputo operator; here, instead, we use the more recent ABC derivative, whose non-singular kernel allows us to consider a much simpler model. While the main result in [8] is the proof of the global stability of the disease-free equilibrium, here we prove Ulam–Hyers stability. We also construct a practical algorithm to compute the solution of the model numerically (see Section 5), while such an algorithmic approach is not addressed in [8]. Moreover, we perform a sensitivity analysis of the parameters of the model, which is also not done in [8]. Finally, in contrast with the work in [8], which investigates the realities of Wuhan, Spain, and Portugal, we study the case of Pakistan.

COVID-19 is generally transferred by close human contact over a certain time period, with sneezing and coughing as the most common symptoms. Virus droplets remain on surfaces and, when they come into contact with a susceptible human, the infection is easily transferred to that individual. Such infected humans can pass the infection to others by touching their mouth, eyes, or nose. The virus can remain alive on different surfaces, like cardboard and copper, for many hours and up to some days. As time passes, the amount of virus decreases and might not remain in sufficient quantity to spread the infection. It has been recorded that the period between infection and the appearance of symptoms of COVID-19 lies between 1 and 14 days [1]. Several countries have prepared and implemented COVID-19 vaccination programs and are trying to protect their populations. However, to date, there is still no treatment available. At present, the most effective ways to protect ourselves from the virus remain quarantine or isolation, the effective use of masks, and following the guidelines issued by the governments of all countries along with the World Health Organization (WHO).

Modeling of infectious diseases has a rich literature, and a number of research articles have been developed using both classical dynamical systems and fractional models [4,8]. Fractional-order derivatives can be more useful and helpful than classical derivatives because the dynamics of real phenomena can be comprehensively understood through their special properties, namely heredity and memory [9–16]. For a comparison between classical (integer-order) and fractional-order models, see [4,8]. Roughly speaking, ordinary derivatives cannot distinguish the phenomenon at two distinct close points. To sort out this problem, generalized derivatives have been introduced in the framework of fractional calculus [17]. The first concept of a fractional-order derivative was given by Leibniz and L'Hôpital in 1695. Aiming at quantitative analysis, optimization, and numerical estimation, many attempts have been made employing fractional differential equations (FDEs) [1,2,18–38]. The growing interest in modeling complex real-world problems with FDEs is due to numerous properties that cannot be found in the ordinary sense. These characteristics allow FDEs to effectively model not only non-Markovian processes but also non-Gaussian phenomena [39]. Different non-classical fractional-order derivatives and different kinds of FDEs have been proposed [40–42]. Among them is the Atangana-Baleanu-Caputo (ABC) derivative, a nonlocal fractional derivative with a nonsingular kernel, connected with various applications. For a discussion of the ABC and related operators see [43,44]; for their use in contemporary modeling we refer the interested reader to [45–48].

The famous decomposition method was developed from the 1970s to the 1990s by George Adomian to handle nonlinear problems analytically. Since then, the Adomian decomposition method has become a powerful tool to obtain analytical or approximate solutions for various problems of an applied nature. Many mathematical models have been studied through applications of homotopy, the Laplace Adomian Decomposition Method (LADM), and variational methods [49–51]. To the best of our knowledge, no one has studied a variable-order epidemic model with ABC derivatives by the LADM. Motivated by this fact, here we study a fractional-order COVID-19 epidemic model with ABC derivatives by the Laplace Adomian decomposition algorithm. In particular, we use the Banach and Krassnoselskii fixed point theorems to derive sufficient conditions for the existence and uniqueness of solution. As stability is important for the estimated solution, we consider Ulam-type stability through nonlinear functional analysis. This kind of stability has been investigated for ordinary fractional derivatives in many research papers, see, e.g., [52–54], but research on Ulam-type stability regarding ABC derivatives is rare. At the end of the paper, our results are illustrated with real data based on Pakistan COVID-19 cases in March 2020.

The paper is organized as follows. Section 2 is devoted to the model formulation. Section 3 is concerned with some preliminary results on fractional differential equations. Existence and uniqueness are carried out in Section 4. Section 5 deals with the solution of the COVID-19 model using the LADM. Some plots are given in Section 6, showing the simplicity and reliability of the proposed algorithm. In Section 7, a sensitivity analysis is given to find the most sensitive parameter with respect to the basic reproduction number. We end with Section 8 of conclusions, including some possible future directions of research.

#### **2. Model Formulation**

Mathematical modeling plays a major role in investigating, and thus controlling, the dynamics of a disease, particularly in the absence of vaccination or at the initial phases of an epidemic. Several mathematical models can be found in [12–15]. We formulate a fractional COVID-19 epidemic model, similar to models of other diseases [49,50], and predict its future behavior. Inspired by FDEs using the ABC derivative, we aim to simulate COVID-19 transmission in the form of

$$\begin{cases} {}^{ABC}\mathbf{D}\_t^\theta P(t) = \lambda - \gamma P(t)I(t) - d\_0 P(t), \\ {}^{ABC}\mathbf{D}\_t^\theta I(t) = \gamma P(t)I(t) - (d\_0 + h + \eta)I(t) + \sigma Q(t), \\ {}^{ABC}\mathbf{D}\_t^\theta Q(t) = \eta I(t) - (d\_0 + \mu + \sigma)Q(t), \end{cases} \tag{1}$$

along with initial conditions

$$P(0) = P\_0, \quad I(0) = I\_0, \quad Q(0) = Q\_0, \tag{2}$$

where ${}^{ABC}\mathbf{D}\_t^{\theta}$ is the ABC fractional derivative of order $0 < \theta \le 1$ (see Definition 1 in Section 3). In this model, $P(t)$ represents the number of susceptible humans, $I(t)$ stands for the population of infected humans, and $Q(t)$ represents the population of quarantined humans at time $t$. The meaning of the parameters of Model (1) is given in Table 1. We make the following assumptions on the given system:


The basic reproduction number *R*0, which represents the secondary cases for the Model (1), is easily demonstrated to be given by

$$R\_0 = \frac{\gamma \lambda (d\_0 + \mu + \sigma)}{d\_0\left[(d\_0 + \mu + \sigma)(d\_0 + h) + \eta (d\_0 + \mu)\right]}.\tag{3}$$


**Table 1.** Parameters description defined in the given Model (1).

In addition, *I*(*t*) + *P*(*t*) + *Q*(*t*) = *N*(*t*), where *N* represents the total population.
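As a cross-check on formula (3), the basic reproduction number can also be computed as the spectral radius of the next-generation matrix $FV^{-1}$ of Model (1), without expanding the closed form. The sketch below is illustrative only: the parameter values are hypothetical placeholders, not the values of Table 2.

```python
from math import isclose

# Hypothetical parameter values, chosen only to illustrate the computation.
lam, gamma_, d0, h, eta, mu, sigma = 0.5, 0.3, 0.1, 0.05, 0.2, 0.02, 0.04

def r0(lam, gamma_, d0, h, eta, mu, sigma):
    """Basic reproduction number of Model (1) via the next-generation matrix.

    Infected compartments are (I, Q): F holds new infections, V the transfers.
    """
    p0 = lam / d0                        # disease-free susceptible level
    F = [[gamma_ * p0, 0.0],
         [0.0,         0.0]]
    V = [[d0 + h + eta, -sigma],
         [-eta, d0 + mu + sigma]]
    detV = V[0][0] * V[1][1] - V[0][1] * V[1][0]
    Vinv = [[ V[1][1] / detV, -V[0][1] / detV],
            [-V[1][0] / detV,  V[0][0] / detV]]
    # F has a single nonzero entry in its first row, so the spectral
    # radius of F @ V^{-1} is simply its (0, 0) entry.
    return F[0][0] * Vinv[0][0]

print(r0(lam, gamma_, d0, h, eta, mu, sigma))
```

With these placeholder values the next-generation computation agrees with the closed form $\gamma\lambda(d\_0+\mu+\sigma)/\{d\_0[(d\_0+h+\eta)(d\_0+\mu+\sigma)-\eta\sigma]\}$.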

#### **3. Preliminary Results**

For completeness, here we recall necessary definitions and results from the literature.

**Definition 1** (See [11,48])**.** *If x is an absolutely continuous function and* 0 < *θ* ≤ 1*, then the ABC derivative is given by*

$${}^{ABC}\mathbf{D}\_t^{\theta} x(t) = \frac{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}{1 - \theta} \int\_0^t \frac{d}{d\omega} x(\omega)\, \mathcal{M}\_\theta \left[ \frac{-\theta}{1 - \theta} (t - \omega)^\theta \right] d\omega,\tag{4}$$

*where* $\mathcal{A}\mathcal{B}\mathcal{C}(\theta)$ *is a normalization function such that* $\mathcal{A}\mathcal{B}\mathcal{C}(0) = \mathcal{A}\mathcal{B}\mathcal{C}(1) = 1$ *and* $\mathcal{M}\_\theta$ *is a special Mittag–Leffler function.*
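The definition above refers only to "a special Mittag–Leffler function" $\mathcal{M}\_\theta$. As an illustrative assumption (the text does not spell $\mathcal{M}\_\theta$ out), the sketch below evaluates the standard one-parameter Mittag–Leffler function $E\_\theta(z) = \sum\_{k\ge 0} z^k/\Gamma(\theta k + 1)$ by truncating its power series; note that $E\_1(z) = e^z$, and that the kernel in (4) evaluates it at negative arguments.

```python
from math import gamma, exp, isclose

def mittag_leffler(theta, z, n_terms=100):
    """One-parameter Mittag-Leffler function E_theta(z) = sum z^k / Gamma(theta*k + 1).

    Truncated power series; adequate for moderate |z| and theta in (0, 1].
    """
    return sum(z**k / gamma(theta * k + 1) for k in range(n_terms))

# Sanity checks: E_1(z) = exp(z), and since the ABC kernel argument is
# negative, the kernel decays from E_theta(0) = 1.
print(mittag_leffler(1.0, 1.0))   # ~ e
print(mittag_leffler(0.7, -0.5))
```

For large negative arguments a truncated series is numerically delicate; dedicated algorithms are preferable, but the series suffices to visualize the kernel on short time windows.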

**Remark 1.** *By replacing* $\mathcal{M}\_\theta\left[\frac{-\theta}{1-\theta}(t-\omega)^\theta\right]$ *with* $\mathcal{M}\_1 = \exp\left[\frac{-\theta}{1-\theta}(t-\omega)\right]$*, one obtains the so-called Caputo–Fabrizio derivative. Additionally, we have*

$${}^{ABC}\mathbf{D}\_0^{\theta}[\mathrm{constant}] = 0.$$

**Remark 2.** *Let* $x(t)$ *be a function having fractional ABC derivative. Then, the Laplace transform of* ${}^{ABC}\mathbf{D}\_0^{\theta}x(t)$ *is given by*

$$\mathcal{L}\left[{}^{ABC}\mathbf{D}\_{0}^{\theta}x(t)\right] = \frac{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}{s^{\theta}(1-\theta)+\theta} \left[s^{\theta}\mathcal{L}[x(t)] - s^{\theta-1}x(0)\right].$$

**Lemma 1** (See [52])**.** *The solution to*

$$\begin{aligned} {}^{ABC}\mathbf{D}\_0^\theta x(t) &= z(t), \quad t \in [0, T], \\ x(0) &= x\_0, \end{aligned}$$

$0 < \theta < 1$*, is given by*

$$x(t) = x\_0 + \frac{(1 - \theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} z(t) + \frac{\theta}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)\Gamma(\theta)} \int\_0^t (t - \omega)^{\theta - 1} z(\omega) d\omega.$$
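Lemma 1 can be checked numerically for a concrete $z(t)$: after the substitution $v = (t-\omega)^\theta$, the weakly singular integral becomes $\frac{1}{\theta}\int\_0^{t^\theta} z\big(t - v^{1/\theta}\big)\,dv$, which has a smooth integrand. A minimal sketch, assuming $\mathcal{A}\mathcal{B}\mathcal{C}(\theta) = 1$ for simplicity:

```python
from math import gamma, isclose

def abc_solution(theta, x0, z, t, abc=1.0, n=20000):
    """Numerically evaluate the solution formula of Lemma 1:

    x(t) = x0 + (1-theta)/ABC(theta) * z(t)
              + theta/(ABC(theta)*Gamma(theta)) * int_0^t (t-w)^(theta-1) z(w) dw.

    The singular kernel is removed via v = (t-w)^theta, and the resulting
    smooth integral is approximated by the midpoint rule.
    """
    upper = t**theta
    hstep = upper / n
    integral = sum(
        z(t - ((i + 0.5) * hstep)**(1.0 / theta)) for i in range(n)
    ) * hstep / theta
    return x0 + (1 - theta) / abc * z(t) + theta / (abc * gamma(theta)) * integral

# For z(t) = t the formula has the closed form
# x(t) = x0 + (1-theta)*t/ABC + theta*t^(theta+1)/(ABC*Gamma(theta+2)).
theta, x0, t = 0.8, 1.0, 2.0
exact = x0 + (1 - theta) * t + theta * t**(theta + 1) / gamma(theta + 2)
print(abc_solution(theta, x0, lambda w: w, t), exact)
```

The closed form follows from $\int\_0^t (t-\omega)^{\theta-1}\omega\, d\omega = \Gamma(\theta)\,t^{\theta+1}/\Gamma(\theta+2)$.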

**Theorem 1** (See [54])**.** *Let* **X** = *C*[0, *T*] *and consider the Banach space defined by* **Z** = **X** × **X** × **X** *with the norm* $\|\mathcal{A}\| = \|(P, I, Q)\| = \max\_{t\in[0,T]}\big[|P(t)| + |I(t)| + |Q(t)|\big]$*. Let* **B** *be a closed convex subset of* **Z** *and let* **F** *and* **G** *be operators such that (i)* **F***u* + **G***v* ∈ **B** *for all u*, *v* ∈ **B***; (ii)* **F** *is a contraction; (iii)* **G** *is continuous and compact.*

*Then,* **F***u* + **G***u* = *u possesses at least one solution.*

#### **4. Qualitative Analysis of the Proposed Model**

Here, we rewrite the right-hand sides of (1) as

$$\begin{aligned} \mathbf{f}\_1(t, P(t), I(t), Q(t)) &= -\gamma I(t)P(t) + \lambda - d\_0 P(t), \\ \mathbf{f}\_2(t, P(t), I(t), Q(t)) &= \gamma I(t)P(t) - (d\_0 + h + \eta)I(t) + \sigma Q(t), \\ \mathbf{f}\_3(t, P(t), I(t), Q(t)) &= \eta I(t) - (d\_0 + \sigma + \mu)Q(t). \end{aligned} \tag{5}$$
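For illustration, the right-hand sides (5) can be coded directly; summing them reproduces the balance $N' = \lambda - d\_0 N - hI - \mu Q$ for $N = P + I + Q$, a useful consistency check on the implementation. The parameter values below are hypothetical placeholders, not those of Table 1:

```python
# Hypothetical parameters for illustration only (Table 1's numerical
# values are not reproduced in this reprint).
lam, gamma_, d0, h, eta, mu, sigma = 0.5, 0.3, 0.1, 0.05, 0.2, 0.02, 0.04

def rhs(t, P, I, Q):
    """Right-hand sides f1, f2, f3 of system (5)."""
    f1 = -gamma_ * I * P + lam - d0 * P
    f2 = gamma_ * I * P - (d0 + h + eta) * I + sigma * Q
    f3 = eta * I - (d0 + sigma + mu) * Q
    return f1, f2, f3

# The total population N = P + I + Q satisfies N' = lam - d0*N - h*I - mu*Q,
# since the gamma*I*P and sigma*Q / eta*I transfer terms cancel pairwise.
P, I, Q = 0.6, 0.2, 0.2
f1, f2, f3 = rhs(0.0, P, I, Q)
print(f1 + f2 + f3, lam - d0 * (P + I + Q) - h * I - mu * Q)
```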

By using (5), we have

$$\begin{aligned} {}^{ABC}\mathbf{D}\_{+0}^{\theta} \mathcal{A}(t) &= \Phi(t, \mathcal{A}(t)), \quad t \in [0, \tau], \quad 0 < \theta \le 1, \\ \mathcal{A}(0) &= \mathcal{A}\_0. \end{aligned} \tag{6}$$

In view of Lemma 1, (6) yields

$$\begin{split} \mathcal{A}(t) = \mathcal{A}\_{0}(t) + \left[\Phi(t, \mathcal{A}(t)) - \Phi\_{0}(t)\right] \frac{(1-\theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \\ &+ \frac{\theta}{\Gamma(\theta)\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \int\_{0}^{t} (t-\omega)^{\theta-1} \Phi(\omega, \mathcal{A}(\omega)) d\omega, \end{split} \tag{7}$$

where

$$\mathcal{A}(t) = \begin{cases} P(t) \\ I(t) \\ Q(t) \end{cases}, \quad \mathcal{A}\_0(t) = \begin{cases} P\_0 \\ I\_0 \\ Q\_0 \end{cases},$$

$$\Phi(t, \mathcal{A}(t)) = \begin{cases} \mathbf{f}\_1(t, P, I, Q) \\ \mathbf{f}\_2(t, P, I, Q) \\ \mathbf{f}\_3(t, P, I, Q) \end{cases}, \quad \Phi\_0(t) = \begin{cases} \mathbf{f}\_1(0, P\_0, I\_0, Q\_0) \\ \mathbf{f}\_2(0, P\_0, I\_0, Q\_0) \\ \mathbf{f}\_3(0, P\_0, I\_0, Q\_0) \end{cases}. \tag{8}$$

Using (7) and (8), we define the two operators **F** and **G** as follows:

$$\begin{split} \mathbf{F}(\mathcal{A}) &= \mathcal{A}\_0(t) + \left[ \Phi(t, \mathcal{A}(t)) - \Phi\_0(t) \right] \frac{(1-\theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}, \\ \mathbf{G}(\mathcal{A}) &= \frac{\theta}{\Gamma(\theta)\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \int\_0^t (t-\omega)^{\theta-1} \Phi(\omega, \mathcal{A}(\omega)) d\omega. \end{split} \tag{9}$$

For existence and uniqueness, we assume some basic axioms and a Lipschitz hypothesis:

(H1) there exist constants $C\_\Phi, D\_\Phi > 0$ such that

$$|\Phi(t, \mathcal{A}(t))| \le C\_{\Phi} \|\mathcal{A}\| + D\_{\Phi};$$

(H2) there exists $L\_\Phi > 0$ such that, for all $\mathcal{A}, \bar{\mathcal{A}} \in \mathbf{Z}$, one has

$$|\Phi(t,\mathcal{A}) - \Phi(t,\bar{\mathcal{A}})| \le L\_{\Phi}\|\mathcal{A} - \bar{\mathcal{A}}\|.$$

**Theorem 2.** *Under hypotheses* (*H*1) *and* (*H*2)*, Equation* (7) *possesses at least one solution and, consequently, so does* (1)*, provided* $\frac{(1-\theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}L\_\Phi < 1$*.*

**Proof.** The theorem is proved in two steps, with the help of Theorem 1. (i) Consider $\bar{\mathcal{A}} \in \mathbf{B}$, where $\mathbf{B} = \{\mathcal{A} \in \mathbf{Z} : \|\mathcal{A}\| \le \rho,\ \rho > 0\}$ is a closed and convex set. Then, for **F** in (9), we have

$$\begin{split} \left\| \mathbf{F}(\mathcal{A}) - \mathbf{F}(\bar{\mathcal{A}}) \right\| &= \frac{(1-\theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \max\_{t \in [0,\tau]} \left| \Phi(t, \mathcal{A}(t)) - \Phi(t, \bar{\mathcal{A}}(t)) \right| \\ &\leq \frac{(1-\theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} L\_{\Phi} \left\| \mathcal{A} - \bar{\mathcal{A}} \right\|. \end{split} \tag{10}$$

Therefore, **F** is a contraction. (ii) We want **G** to be relatively compact. For that it suffices that **G** is equicontinuous and bounded. Obviously, **G** is continuous as Φ is continuous and for all A ∈ **B** one has

$$\begin{split} \|\mathbf{G}(\mathcal{A})\| &= \max\_{t \in [0, \tau]} \left| \frac{\theta}{\Gamma(\theta)\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \int\_{0}^{t} (t - \omega)^{\theta - 1} \Phi(\omega, \mathcal{A}(\omega)) d\omega \right| \\ &\leq \frac{\theta}{\Gamma(\theta)\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \int\_{0}^{\tau} (\tau - \omega)^{\theta - 1} |\Phi(\omega, \mathcal{A}(\omega))| d\omega \\ &\leq \frac{\tau^{\theta}}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)\Gamma(\theta)} [C\_{\Phi}\rho + D\_{\Phi}]. \end{split} \tag{11}$$

Thus, (11) shows the boundedness of **G**. For equicontinuity, take $t\_1 > t\_2$ with $t\_1, t\_2 \in [0, \tau]$, so that

$$\begin{split} & \left| \mathbf{G} (\mathcal{A}(t\_1)) - \mathbf{G} (\mathcal{A}(t\_2)) \right| \\ &= \frac{\theta}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)\Gamma(\theta)} \left| \int\_0^{t\_1} (t\_1 - \omega)^{\theta - 1} \Phi(\omega, \mathcal{A}(\omega)) d\omega - \int\_0^{t\_2} (t\_2 - \omega)^{\theta - 1} \Phi(\omega, \mathcal{A}(\omega)) d\omega \right| \\ & \leq \frac{[\mathsf{C}\_\Phi \rho + D\_\Phi]}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)\Gamma(\theta)} |t\_1^\theta - t\_2^\theta|. \end{split} \tag{12}$$

The right-hand side in (12) goes to zero as $t\_1 \to t\_2$. Since **G** is continuous,

$$|\mathbf{G}(\mathcal{A}(t\_1)) - \mathbf{G}(\mathcal{A}(t\_2))| \to 0 \text{ as } t\_1 \to t\_2.$$

Having the boundedness and continuity of **G**, we conclude that **G** is uniformly continuous and bounded. By the Arzelá–Ascoli theorem, **G** is relatively compact and therefore completely continuous. It follows from Theorem 1 that the integral Equation (7) has at least one solution.

Now, we show uniqueness.

**Theorem 3.** *Under hypotheses* (*H*1) *and* (*H*2)*, Equation* (7) *possesses a unique solution and, consequently, so does* (1)*, provided* $\frac{(1-\theta)L\_\Phi}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} + \frac{\tau^{\theta}L\_\Phi}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)\Gamma(\theta)} < 1$*.*

**Proof.** Let the operator **T** : **Z** → **Z** be defined by

$$\begin{split} \mathsf{T}\mathcal{A}(t) = \mathcal{A}\_{0}(t) &+ \left[\Phi(t, \mathcal{A}(t)) - \Phi\_{0}(t)\right] \frac{(1-\theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \\ &+ \frac{\theta}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)\Gamma(\theta)} \int\_{0}^{t} (t-\omega)^{\theta-1} \Phi(\omega, \mathcal{A}(\omega)) d\omega, \quad t \in [0, \tau]. \end{split} \tag{13}$$

Let $\mathcal{A}, \bar{\mathcal{A}} \in \mathbf{Z}$. Then, one can take

$$\begin{split} \|\mathbf{T}\mathcal{A} - \mathbf{T}\bar{\mathcal{A}}\| &\leq \frac{(1-\theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \max\_{t \in [0,\tau]} |\Phi(t, \mathcal{A}(t)) - \Phi(t, \bar{\mathcal{A}}(t))| \\ &\quad + \frac{\theta}{\Gamma(\theta)\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \max\_{t \in [0,\tau]} \left| \int\_{0}^{t} (t-\omega)^{\theta-1} \Phi(\omega, \mathcal{A}(\omega)) d\omega - \int\_{0}^{t} (t-\omega)^{\theta-1} \Phi(\omega, \bar{\mathcal{A}}(\omega)) d\omega \right| \\ &\leq \Xi \|\mathcal{A} - \bar{\mathcal{A}}\|, \end{split} \tag{14}$$

where

$$
\Xi = \frac{(1-\theta)L\_{\Phi}}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} + \frac{\tau^{\theta}L\_{\Phi}}{\Gamma(\theta)\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}.\tag{15}
$$

Thus, **T** is a contraction from (14). Therefore, (7) possesses a unique solution.

Next, in order to investigate the stability of our problem, we consider a small disturbance *φ* ∈ *C*[0, *T*], with *φ*(0) = 0, that depends only on the solution.

**Lemma 2.** *Let φ* ∈ *C*[0, *T*] *with φ*(0) = 0 *such that* |*φ*(*t*)| ≤ *ε for ε* > 0 *and consider the problem*

$$\begin{aligned} {}^{ABC}\mathbf{D}\_{+0}^{\theta} \mathcal{A}(t) &= \Phi(t, \mathcal{A}(t)) + \phi(t), \\ \mathcal{A}(0) &= \mathcal{A}\_0. \end{aligned} \tag{16}$$

*The solution of* (16) *satisfies the following relation:*

$$\begin{split} \bigg| \mathcal{A}(t) - \bigg( \mathcal{A}\_{0}(t) + \left[ \Phi(t, \mathcal{A}(t)) - \Phi\_{0}(t) \right] \frac{(1-\theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \\ + \frac{\theta}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)\Gamma(\theta)} \int\_{0}^{t} (t-\omega)^{\theta-1} \Phi(\omega, \mathcal{A}(\omega)) d\omega \bigg) \bigg| \\ \leq \frac{\Gamma(\theta) + \tau^{\theta}}{\Gamma(\theta)\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}\, \varepsilon = \Omega\_{\tau, \theta}. \end{split} \tag{17}$$

**Proof.** The proof is standard and is omitted here.

**Theorem 4.** *Consider hypotheses* (*H*1) *and* (*H*2) *along with* (17) *of Lemma 2. Then, the solution to Equation* (7) *is Ulam–Hyers stable if* Ξ < 1*, where* Ξ *is defined by* (15)*.*

**Proof.** Assume $\mathcal{A} \in \mathbf{Z}$ and let $\bar{\mathcal{A}} \in \mathbf{Z}$ be the unique solution of (7). Then,

$$\begin{split} \|\mathcal{A} - \bar{\mathcal{A}}\| &\leq \max\_{t\in[0,\tau]} \bigg| \mathcal{A}(t) - \bigg( \mathcal{A}\_{0}(t) + \left[ \Phi(t, \mathcal{A}(t)) - \Phi\_{0}(t) \right] \frac{(1-\theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} + \frac{\theta}{\Gamma(\theta)\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \int\_{0}^{t} (t-\omega)^{\theta-1} \Phi(\omega, \mathcal{A}(\omega)) d\omega \bigg) \bigg| \\ &\quad + \max\_{t\in[0,\tau]} \bigg| \left[ \Phi(t, \mathcal{A}(t)) - \Phi(t, \bar{\mathcal{A}}(t)) \right] \frac{(1-\theta)}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} + \frac{\theta}{\Gamma(\theta)\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \int\_{0}^{t} (t-\omega)^{\theta-1} \left[ \Phi(\omega, \mathcal{A}(\omega)) - \Phi(\omega, \bar{\mathcal{A}}(\omega)) \right] d\omega \bigg| \\ &\leq \Omega\_{\tau,\theta} + \frac{(1-\theta)L\_{\Phi}}{\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \|\mathcal{A} - \bar{\mathcal{A}}\| + \frac{\tau^{\theta}L\_{\Phi}}{\Gamma(\theta)\mathcal{A}\mathcal{B}\mathcal{C}(\theta)} \|\mathcal{A} - \bar{\mathcal{A}}\| \\ &\leq \Omega\_{\tau,\theta} + \Xi \|\mathcal{A} - \bar{\mathcal{A}}\|. \end{split} \tag{18}$$

From (18), we can write that

$$\|\mathcal{A} - \bar{\mathcal{A}}\| \le \frac{\Omega\_{\tau,\theta}}{1 - \Xi}.\tag{19}$$

The proof is complete.

#### **5. Construction of an Algorithm for Deriving the Solution of the Model**

Herein, we derive a general series-type solution for the proposed system with ABC derivatives. Taking the Laplace transform in Model (1), we transform both sides of each equation and we use the initial conditions to obtain that

$$\begin{cases} \mathcal{L}[P(t)] = \frac{P\_0}{s} + \frac{[s^{\theta}(1-\theta)+\theta]}{s^{\theta}\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}\mathcal{L}[\lambda - \gamma P(t)I(t) - d\_0P(t)],\\ \mathcal{L}[I(t)] = \frac{I\_0}{s} + \frac{[s^{\theta}(1-\theta)+\theta]}{s^{\theta}\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}\mathcal{L}[\gamma I(t)P(t) - (d\_0 + h + \eta)I(t) + \sigma Q(t)],\\ \mathcal{L}[Q(t)] = \frac{Q\_0}{s} + \frac{[s^{\theta}(1-\theta)+\theta]}{s^{\theta}\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}\mathcal{L}[\eta I(t) - (d\_0 + \mu + \sigma)Q(t)].\end{cases} \tag{20}$$

Now, considering each solution in the form of series,

$$P(t) = \sum\_{n=0}^{\infty} P\_n(t), \quad I(t) = \sum\_{n=0}^{\infty} I\_n(t), \quad Q(t) = \sum\_{n=0}^{\infty} Q\_n(t), \tag{21}$$

we separate the nonlinear term *P*(*t*)*I*(*t*) in terms of Adomian polynomials as

$$P(t)I(t) = \sum\_{n=0}^{\infty} H\_{n}(t), \text{ where } H\_{n}(t) = \frac{1}{n!} \frac{d^n}{d\lambda^n} \left[ \sum\_{k=0}^n \lambda^k P\_k(t) \sum\_{k=0}^n \lambda^k I\_k(t) \right] \Bigg|\_{\lambda=0}. \tag{22}$$
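For the bilinear term $P(t)I(t)$, formula (22) reduces to a Cauchy product, $H\_n = \sum\_{k=0}^{n} P\_k I\_{n-k}$, which is how the Adomian polynomials are computed in practice. A minimal sketch:

```python
def adomian_product(P_terms, I_terms, n):
    """Adomian polynomial H_n for the product nonlinearity P(t)*I(t).

    For this bilinear term, formula (22) reduces to the Cauchy product
    H_n = sum_{k=0}^{n} P_k * I_{n-k}.  P_terms[k] and I_terms[k] may be
    numbers (snapshots at a fixed t) or any objects supporting * and +.
    """
    return sum(P_terms[k] * I_terms[n - k] for k in range(n + 1))

# Example with scalar snapshots P_k, I_k at a fixed time t:
P_terms = [1.0, 2.0, 0.5]
I_terms = [3.0, 4.0, 1.0]
print([adomian_product(P_terms, I_terms, n) for n in range(3)])
# H_0 = P_0*I_0, H_1 = P_0*I_1 + P_1*I_0, H_2 = P_0*I_2 + P_1*I_1 + P_2*I_0
```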

Therefore, from (21) and (22), we obtain from (20) that

$$\begin{cases} \mathcal{L}\left[\sum\_{n=0}^{\infty} P\_n(t)\right] = \frac{P\_0}{s} + \frac{[s^\theta (1-\theta) + \theta]}{s^\theta \mathcal{A} \mathcal{B} \mathcal{C}(\theta)} \mathcal{L}\left[\lambda - \gamma \sum\_{n=0}^{\infty} H\_n(t) - d\_0 \sum\_{n=0}^{\infty} P\_n(t)\right], \\ \mathcal{L}\left[\sum\_{n=0}^{\infty} I\_n(t)\right] = \frac{I\_0}{s} + \frac{[s^\theta (1-\theta) + \theta]}{s^\theta \mathcal{A} \mathcal{B} \mathcal{C}(\theta)} \mathcal{L}\left[\gamma \sum\_{n=0}^{\infty} H\_n(t) - (d\_0 + h + \eta) \sum\_{n=0}^{\infty} I\_n(t) + \sigma \sum\_{n=0}^{\infty} Q\_n(t)\right], \\ \mathcal{L}\left[\sum\_{n=0}^{\infty} Q\_n(t)\right] = \frac{Q\_0}{s} + \frac{[s^\theta (1-\theta) + \theta]}{s^\theta \mathcal{A} \mathcal{B} \mathcal{C}(\theta)} \mathcal{L}\left[\eta \sum\_{n=0}^{\infty} I\_n(t) - (d\_0 + \mu + \sigma) \sum\_{n=0}^{\infty} Q\_n(t)\right]. \end{cases} \tag{23}$$


Now, comparing the terms on both sides of (23), one has

$$\begin{cases} \mathcal{L}[P\_0(t)] = \frac{P\_0}{s}, \quad \mathcal{L}[I\_0(t)] = \frac{I\_0}{s}, \quad \mathcal{L}[Q\_0(t)] = \frac{Q\_0}{s}, \\[4pt] \mathcal{L}[P\_{n+1}(t)] = \frac{[s^{\theta}(1-\theta)+\theta]}{s^{\theta}\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}\, \mathcal{L}[\lambda - \gamma H\_n(t) - d\_0 P\_n(t)], \quad n \ge 0, \\[4pt] \mathcal{L}[I\_{n+1}(t)] = \frac{[s^{\theta}(1-\theta)+\theta]}{s^{\theta}\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}\, \mathcal{L}[\gamma H\_n(t) - (d\_0 + h + \eta)I\_n(t) + \sigma Q\_n(t)], \quad n \ge 0, \\[4pt] \mathcal{L}[Q\_{n+1}(t)] = \frac{[s^{\theta}(1-\theta)+\theta]}{s^{\theta}\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}\, \mathcal{L}[\eta I\_n(t) - (d\_0 + \mu + \sigma)Q\_n(t)], \quad n \ge 0. \end{cases} \tag{24}$$

Applying the inverse Laplace transform to (24), we obtain that

$$\begin{cases} P\_0(t) = P\_0, \quad I\_0(t) = I\_0, \quad Q\_0(t) = Q\_0, \\[4pt] P\_{n+1}(t) = \mathcal{L}^{-1}\left[\frac{[s^{\theta}(1-\theta)+\theta]}{s^{\theta}\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}\, \mathcal{L}[\lambda - \gamma H\_n(t) - d\_0 P\_n(t)]\right], \quad n \ge 0, \\[4pt] I\_{n+1}(t) = \mathcal{L}^{-1}\left[\frac{[s^{\theta}(1-\theta)+\theta]}{s^{\theta}\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}\, \mathcal{L}[\gamma H\_n(t) - (d\_0 + h + \eta)I\_n(t) + \sigma Q\_n(t)]\right], \quad n \ge 0, \\[4pt] Q\_{n+1}(t) = \mathcal{L}^{-1}\left[\frac{[s^{\theta}(1-\theta)+\theta]}{s^{\theta}\mathcal{A}\mathcal{B}\mathcal{C}(\theta)}\, \mathcal{L}[\eta I\_n(t) - (d\_0 + \mu + \sigma)Q\_n(t)]\right], \quad n \ge 0. \end{cases} \tag{25}$$

#### **6. Numerical Interpretation and Discussion**

To illustrate the dynamical structure of our infectious disease model, we now consider a practical case study with several numerical simulations for given parameter values. The concrete parameter values we used are shown in Table 2.

**Table 2.** Numerical values for the parameters of Model (1).


We assume that the initial susceptible, infected, and isolated populations are 10, 0.01, and 0.0011 million, respectively. Among the selected population of 21,000, the density of the susceptible population is about 0.6 percent, that of the infected population is 0.2 percent, and that of the isolated population is 0.2 percent.

By using the parameter values in Table 2, we computed the first three terms of the general series solution (25) with ABC(*θ*) = 1 as

$$\begin{split} P(t) &= 0.6 + 1.99712 \left[ 1 - \theta + \frac{\theta t^{\theta}}{\Gamma(\theta)} \right] \\ &- 0.00681 \left[ (1 - \theta)^{2} t + \frac{\theta^{2} t^{2\theta}}{\Gamma(2\theta + 1)} + \frac{2\theta (1 - \theta) t^{\theta + 1}}{\Gamma(\theta + 2)} \right] + \cdots \\ I(t) &= 0.2 - 0.5976 \left[ 1 - \theta + \frac{\theta t^{\theta}}{\Gamma(\theta)} \right] \\ &- 0.003096 \left[ (1 - \theta)^{2} t + \frac{2\theta (1 - \theta) t^{\theta}}{\Gamma(\theta)} + \frac{\theta^{2} t^{2\theta}}{\Gamma(2\theta + 1)} \right] + \cdots \\ Q(t) &= 0.2 + 0.14 \left[ 1 - \theta + \frac{\theta t^{\theta}}{\Gamma(\theta)} \right] \\ &- 0.004058 \left[ (1 - \theta)^{2} t + \frac{2\theta (1 - \theta) t^{\theta}}{\Gamma(\theta)} + \frac{\theta^{2} t^{2\theta}}{\Gamma(2\theta + 1)} \right] + \cdots \ . \end{split} \tag{26}$$

We have utilized the numeric computing environment MATLAB, version 2016, and plotted the solution (25) in Figure 1 by considering the first fifteen terms of the series (21).
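For $\theta = 1$ and $\mathrm{ABC}(\theta) = 1$, the truncated expansions (26) can be evaluated directly. The following minimal Python sketch encodes the three expansions exactly as printed above (the numerical coefficients 1.99712, 0.5976, etc., are those of (26)); it is only an illustration of how the plotted curves arise, not the MATLAB code used by the authors.

```python
from math import gamma

# Truncated series solution (26) with ABC(θ) = 1, coded as printed:
# bracket_P follows the P(t) expansion, bracket_IQ the I(t)/Q(t) expansions.

def bracket_lin(theta, t):
    # [1 - θ + θ t^θ / Γ(θ)]
    return (1 - theta) + theta * t**theta / gamma(theta)

def bracket_P(theta, t):
    # (1-θ)² t + θ² t^{2θ}/Γ(2θ+1) + 2θ(1-θ) t^{θ+1}/Γ(θ+2)
    return ((1 - theta)**2 * t
            + theta**2 * t**(2 * theta) / gamma(2 * theta + 1)
            + 2 * theta * (1 - theta) * t**(theta + 1) / gamma(theta + 2))

def bracket_IQ(theta, t):
    # (1-θ)² t + 2θ(1-θ) t^θ/Γ(θ) + θ² t^{2θ}/Γ(2θ+1)
    return ((1 - theta)**2 * t
            + 2 * theta * (1 - theta) * t**theta / gamma(theta)
            + theta**2 * t**(2 * theta) / gamma(2 * theta + 1))

def P(theta, t):
    return 0.6 + 1.99712 * bracket_lin(theta, t) - 0.00681 * bracket_P(theta, t)

def I(theta, t):
    return 0.2 - 0.5976 * bracket_lin(theta, t) - 0.003096 * bracket_IQ(theta, t)

def Q(theta, t):
    return 0.2 + 0.14 * bracket_lin(theta, t) - 0.004058 * bracket_IQ(theta, t)
```

At $\theta = 1$ both brackets reduce to $t$ and $t^2/2$, so the initial values $P(0) = 0.6$ and $I(0) = Q(0) = 0.2$ are recovered.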

Figure 1 shows the dynamics of each of the state variables in the classical sense, when *θ* = 1 (green curves). Similarly, considering the model in the ABC sense, that is, for *θ* ∈ (0, 1), we plot in Figure 1 each of the state variables to analyze the changes in comparison with the classical case. From Figure 1a, we see that as we increase the order *θ* of the fractional ABC derivative, the susceptibility increases. Further, all fractional-order derivatives show no effect after about 60 days, i.e., the susceptible population stabilizes. Figure 1b shows that infected individuals tend to increase, at different rates, when we decrease the fractional order: the smaller the fractional order *θ*, the faster the increase rate, and vice versa. All obtained curves for infected individuals, for different values of the fractional order, approach a non-zero steady state, which shows that the disease will persist in the community if not properly managed. On the other hand, Figure 1c shows that, during the first month, the disease progresses with more and more people getting quarantined, irrespective of the order of the derivative. After that, however, the quarantined population tends to decline and, at the end, there will be no quarantined individuals in the community.

In Section 6.1, we show that the fractional Model (1) with ABC derivatives has the ability to describe effectively the dynamics of transmission of the current COVID-19 outbreak.

**Figure 1.** Dynamical nature of susceptible, infected and quarantined individuals of the fractional ABC Model (1) for different values of the fractional-order *θ*. (**a**) *P*(*t*)—susceptible individuals along time *t*; (**b**) *I*(*t*)—infected individuals along time *t*; (**c**) *Q*(*t*)—quarantined individuals along time *t*.

#### *6.1. Case Study with Real Data: Khyber Pakhtunkhawa (Pakistan)*

The Khyber Pakhtunkhawa Province, like the other provinces of Pakistan and the rest of the world, has also been affected by COVID-19. We calibrated our model with real COVID-19 data from Khyber Pakhtunkhawa, Pakistan, from 9 April to 2 June 2020. For that, we used the minimization method of MATLAB, taking the initial values

$$P(0) = 35,525,047, \quad I(0) = 10,485, \quad Q(0) = 18,000,$$

determined from the work in [55], and *θ* = 1, from which we arrived at the parameter values shown in Table 3.

Figure 2 shows the total number of individuals infected by COVID-19 as registered from 9 April to 2 June 2020, which corresponds to the period of one month and 24 days used to calibrate our model.

Figure 3 compares the actual/real data of COVID-19 with the curve of infected given by Model (1), clearly showing the appropriateness of our model to describe the COVID-19 outbreak.


**Table 3.** Parameter values for the case of Khyber Pakhtunkhawa, Pakistan.

**Figure 2.** Real data of infected individuals by COVID-19 from Khyber Pakhtunkhwa, Pakistan, from 9 April to 2 June 2020.

**Figure 3.** Comparison of infected individuals by COVID-19: Model (1) output (in blue) versus real data of Khyber Pakhtunkhawa, Pakistan, from 9 April to 2 June 2020 (in red).

Figure 4 projects the long-term behavior of the COVID-19 outbreak over a period of eight months. We can see that the model matches the data during the first 1.8 months and, additionally, we observe that the long-term behavior consists of a rise of infected individuals with time. This means that if the government does not apply proper strategies, the incidence could increase drastically in the coming months.

#### **7. Sensitivity Analysis**

Here, we conduct a sensitivity analysis to identify the parameters that are most effective in minimizing the propagation of the disease. Although its computation is tedious for complex biological models, forward sensitivity analysis is regarded as an important component of epidemic modeling: ecologists and epidemiologists gain a lot of insight from the sensitivity analysis of the basic reproduction number *R*<sup>0</sup> [56]. In Definition 2, we assume that the basic reproduction number *R*<sup>0</sup> is differentiable with respect to the parameter *ω*. Given (3), this means that Definition 2 makes sense for *ω* ∈ {*γ*, *λ*, *d*0, *μ*, *σ*, *h*, *η*}.

**Definition 2.** *The normalized forward sensitivity index of R*<sup>0</sup> *with respect to parameter ω is defined by*

$$S\_{\omega} = \frac{\omega}{R\_0} \frac{\partial R\_0}{\partial \omega}. \tag{27}$$

As we have an analytical form for the basic reproduction number, recall (3), we apply the direct differentiation process given in (27). Not only do the sensitivity indexes show us the impact of various factors associated with the spread of the infectious disease, but they also provide valuable information on the comparative change between *R*<sup>0</sup> and the parameters. Moreover, they assist in the design of control strategies [57].
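When an analytical expression is unavailable or cumbersome, the index (27) can also be checked numerically by finite differences. The sketch below uses a hypothetical reproduction number $R_0 = \gamma\lambda/(d_0(d_0 + h + \eta))$ purely for illustration — it is *not* the paper's expression (3) — together with made-up parameter values; only the sensitivity-index machinery is the point.

```python
# Normalized forward sensitivity index (27): S_ω = (ω / R0) ∂R0/∂ω,
# approximated by a central finite difference.
# R0_demo below is a HYPOTHETICAL reproduction number, not expression (3).

def sensitivity_index(R0, params, name, rel_step=1e-6):
    """Central-difference approximation of (ω/R0) ∂R0/∂ω at `params`."""
    base = R0(**params)
    omega = params[name]
    h = rel_step * omega
    up = dict(params, **{name: omega + h})
    dn = dict(params, **{name: omega - h})
    dR0 = (R0(**up) - R0(**dn)) / (2 * h)
    return (omega / base) * dR0

def R0_demo(gamma_, lam, d0, h_, eta):
    # illustrative form only: linear in γ and λ, decreasing in d0, h, η
    return gamma_ * lam / (d0 * (d0 + h_ + eta))

params = dict(gamma_=0.3, lam=0.5, d0=0.1, h_=0.2, eta=0.05)

# R0_demo is linear in γ, so S_γ = 1 exactly: a 10% rise in γ raises R0
# by 10%, matching the qualitative reading of Table 4.
print(sensitivity_index(R0_demo, params, "gamma_"))
```

For this demo form, the index of $d_0$ comes out negative, mirroring the sign pattern reported in Table 4.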

Table 4 demonstrates that the parameters *γ*, *h*, and *σ* have a positive effect on the basic reproduction number *R*0, which means that growth or decay of these parameters by 10% would increase or decrease the reproduction number by 10%, 6.36%, and 0.31%, respectively. On the other hand, the sensitivity indexes of *d*0, *μ*, and *η* indicate that increasing their values by 10% would decrease the basic reproduction number *R*<sup>0</sup> by 14.89%, 0.09%, and 1.68%, respectively.

**Table 4.** Sensitivity indexes of the basic reproduction number *R*<sup>0</sup> (3) (see Definition 2) for relevant parameters of Model (1).


The sensitivity of the basic reproduction number *R*<sup>0</sup> is also seen graphically in Figure 5.

**Figure 5.** Sensitivity of the basic reproduction number *R*<sup>0</sup> (3) for relevant parameters of Model (1). (**a**) *R*<sup>0</sup> versus *γ* and *d*; (**b**) *R*<sup>0</sup> versus *γ* and *μ*; (**c**) *R*<sup>0</sup> versus *γ* and *η*; (**d**) *R*<sup>0</sup> versus *h* and *d*; (**e**) *R*<sup>0</sup> versus *h* and *μ*; (**f**) *R*<sup>0</sup> versus *h* and *η*; (**g**) *R*<sup>0</sup> versus *d* and *σ*; (**h**) *R*<sup>0</sup> versus *d* and *η*.

#### **8. Conclusions and Future Work**

In this manuscript, we studied a COVID-19 disease model, providing a detailed qualitative analysis, and showed its usefulness with a case study of Khyber Pakhtunkhawa, Pakistan. Our sensitivity analysis shows that the transmission rate *γ* has a huge effect on the model as compared to the other parameters: the basic reproduction number varies directly with the transmission rate *γ*. The sensitivity analysis also showed that the death rate parameter *μ* has practically no effect on spreading the infection, which seems biologically correct. The transmission rate can be kept small by maintaining social distancing and self-quarantine, which causes a decrease in the infection. In this way, one can prevent the COVID-19 infection from spreading rapidly in the community. In the future, we plan to analyze optimal control techniques to reduce the population of infected individuals by adopting a number of control measures. A modification of the given model is also possible by introducing more parameters for analyzing the early outbreaks of COVID-19, after which the transmission and treatment aspects can be revisited. The given system can also be simulated by adding exposed and hospitalized classes and taking a stochastic fractional derivative. Here, we have provided a case study with real data from Pakistan, but other case studies can also be done.

**Author Contributions:** Conceptualization, M.R.S.A., A.K., A.Z., and D.F.M.T.; Formal analysis, M.R.S.A., A.D., A.K., and D.F.M.T.; Investigation, A.D. and A.Z.; Methodology, M.T. and A.Z.; Software, A.D. and A.K.; Supervision, D.F.M.T.; Validation, M.R.S.A. and D.F.M.T.; Writing—original draft, A.D., A.K., and A.Z.; Writing—review and editing, M.T., A.K., and D.F.M.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was partially funded by Fundação para a Ciência e a Tecnologia (FCT) grant number UIDB/04106/2020 (CIDMA).

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors are grateful to reviewers for their comments, questions, and suggestions, which helped them to improve the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


## *Article* **On Periodic Fractional (***p***,** *q***)-Integral Boundary Value Problems for Sequential Fractional (***p***,** *q***)-Integrodifference Equations**

**Jarunee Soontharanon <sup>1</sup> and Thanin Sitthiwirattham 2,\***


**Abstract:** We study the existence results of a fractional (*p*, *q*)-integrodifference equation with periodic fractional (*p*, *q*)-integral boundary condition by using Banach and Schauder's fixed point theorems. Some properties of (*p*, *q*)-integral are also presented in this paper as a tool for our calculations.

**Keywords:** fractional (*p*, *q*)-integral; fractional (*p*, *q*)-difference; periodic boundary value problems; existence

#### **1. Introduction**

Studies of quantum calculus with integer order appeared in the last three decades, and many researchers have extensively studied a calculus without limits that deals with sets of nondifferentiable functions, the so-called quantum calculus. Many types of quantum difference operators are employed in several areas of mathematics and its applications, such as the calculus of variations, particle physics, quantum mechanics, and the theory of relativity. The *q*-calculus, one type of quantum calculus, initiated by Jackson [1–5], has been employed in several fields of applied sciences and engineering, such as physical problems, dynamical systems, control theory, electrical networks, economics, and so on [6–14].

For fractional quantum calculus, Agarwal [15] and Al-Salam [16] proposed fractional *q*-calculus, and Díaz and Osler [17] proposed fractional difference calculus. In 2017, Brikshavana and Sitthiwirattham [18] introduced fractional Hahn difference calculus. In 2019, Patanarapeelert and Sitthiwirattham [19] studied fractional symmetric Hahn difference calculus.

Later, quantum calculus based on the two-parameter (*p*, *q*)-integer was presented. The (*p*, *q*)-calculus (postquantum calculus) was introduced by Chakrabarti and Jagannathan [20]. This calculus has been used in many fields, such as special functions, approximation theory, physical sciences, Lie groups, hypergeometric series, Bézier curves, and surfaces. For some recent papers on (*p*, *q*)-difference equations, we refer to [21–33] and the references therein. For example, the fundamental theorems of (*p*, *q*)-calculus and some (*p*, *q*)-Taylor formulas were studied in [21]. In [32], the (*p*, *q*)-Mellin transform and its applications were studied. The Picard and Gauss–Weierstrass singular integrals in (*p*, *q*)-calculus were introduced in [33]. Boundary value problems for (*p*, *q*)-difference equations were studied in [34–36]. For example, nonlocal boundary value problems for first-order (*p*, *q*)-difference equations were studied in [34]. Second-order (*p*, *q*)-difference equations with separated boundary conditions were studied in [35]. In [36], the authors studied first-order and second-order (*p*, *q*)-difference equations with impulses.

Recently, Soontharanon and Sitthiwirattham [37] introduced the fractional (*p*, *q*)-difference operators and their properties. This calculus has since been used in inequalities [38,39] and boundary value problems [40–42]. However, the study of boundary value problems for fractional (*p*, *q*)-difference equations is still at its beginning, and there is little literature on the subject. In [40], existence results for a fractional (*p*, *q*)-integrodifference

**Citation:** Soontharanon, J.; Sitthiwirattham, T. On Periodic Fractional (*p*, *q*)-Integral Boundary Value Problems for Sequential Fractional (*p*, *q*)-Integrodifference Equations. *Axioms* **2021**, *10*, 264. https://doi.org/10.3390/ axioms10040264

Academic Editor: Natália Martins

Received: 8 September 2021 Accepted: 18 October 2021 Published: 19 October 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

<sup>1</sup> Department of Mathematics, Faculty of Applied Science, King Mongkut's University of Technology North Bangkok, Bangkok 10800, Thailand; jarunee.s@sci.kmutnb.ac.th

equation with Robin boundary conditions were studied in 2020. In 2021 [41], the authors investigated the boundary value problem of a class of fractional (*p*, *q*)-difference Schrödinger equations. In the same year, existence results of solutions and positive solutions for the boundary value problem of a class of fractional (*p*, *q*)-difference equations involving the Riemann–Liouville fractional derivative were studied [42].

Motivated by the above papers, we seek to enrich the contributions to this new research area. In this paper, we introduce and study a boundary value problem involving a function *F* that depends on a fractional (*p*, *q*)-integral and a fractional (*p*, *q*)-difference, with a nonlocal boundary condition. Our problem is a sequential fractional (*p*, *q*)-integrodifference equation with periodic fractional (*p*, *q*)-integral boundary conditions of the form

$$\begin{aligned}
D^{\alpha}_{p,q}D^{\beta}_{p,q}u(t) &= F\!\left[t, u(t), \Psi^{\gamma}_{p,q}u(t), D^{\nu}_{p,q}u(t)\right], \quad t \in I^{T}_{p,q},\\
u(0) &= u\!\left(\frac{T}{p}\right),\\
\mathcal{I}^{\theta}_{p,q}\, g(\eta)\, u(\eta) &= \varphi(u), \quad \eta \in I^{T}_{p,q} - \left\{0, \frac{T}{p}\right\},
\end{aligned} \tag{1}$$

where $I^{T}_{p,q} := \left\{\left(\frac{q}{p}\right)^{k}\frac{T}{p} : k \in \mathbb{N}_0\right\} \cup \{0\}$; $0 < q < p \le 1$; $\alpha, \beta, \gamma, \nu, \theta \in (0,1]$; $F \in C\big(I^{T}_{p,q} \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}, \mathbb{R}\big)$ and $g \in C\big(I^{T}_{p,q}, \mathbb{R}^{+}\big)$ are given functions; $\varphi : C\big(I^{T}_{p,q}, \mathbb{R}\big) \to \mathbb{R}$ is a given functional; and, for $\phi \in C\big(I^{T}_{p,q} \times I^{T}_{p,q}, [0,\infty)\big)$, we define an operator of the (*p*, *q*)-integral of the product of the functions $\phi$ and $u$ as

$$\Psi^{\gamma}_{p,q}u(t) := \left(\mathcal{I}^{\gamma}_{p,q}\,\phi\, u\right)(t) = \frac{1}{p^{\binom{\gamma}{2}}\Gamma_{p,q}(\gamma)} \int_0^t (t-qs)^{\underline{\gamma-1}}_{p,q}\,\phi\!\left(t,\frac{s}{p^{\gamma-1}}\right) u\!\left(\frac{s}{p^{\gamma-1}}\right) d_{p,q}s.$$

We aim to establish existence results for problem (1). Firstly, we convert the given nonlinear problem (1) into a fixed point problem, by considering a linear variant of the problem at hand. Once the fixed point operator is available, we make use of the classical Banach and Schauder fixed point theorems to establish the existence results.
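The contraction principle behind Banach's theorem can be illustrated on a toy example. The sketch below iterates the simple contraction $T(u) = \cos(u)/2$ (Lipschitz constant $1/2 < 1$); the fixed point operator associated with problem (1) is of course far more involved, so this is only a schematic of the method of successive approximations.

```python
from math import cos

# Banach fixed point theorem on a toy contraction T(u) = cos(u)/2:
# successive approximations u_{n+1} = T(u_n) converge to the unique
# fixed point, independently of the starting point u0.

def picard_iterate(T, u0, tol=1e-12, max_iter=200):
    u = u0
    for _ in range(max_iter):
        u_next = T(u)
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    raise RuntimeError("no convergence within max_iter")

T = lambda u: cos(u) / 2
fixed = picard_iterate(T, u0=0.0)
print(fixed)  # the unique solution of u = cos(u)/2
```

Starting from any other initial guess yields the same limit, which is exactly the uniqueness statement that Banach's theorem delivers for (1) under a suitable Lipschitz condition on *F*.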

The paper is organized as follows: Section 2 contains some preliminary concepts related to our problem. We present the existence and uniqueness result in Section 3, and the existence of at least one solution in Section 4. To illustrate our results, we provide some examples in Section 5. Finally, Section 6 presents our conclusions.

#### **2. Preliminaries**

In this section, we provide some basic definitions, notations, and lemmas as follows. For 0 < *q* < *p* ≤ 1, we define

$$[k]_q := \begin{cases} \dfrac{1-q^{k}}{1-q}, & k \in \mathbb{N},\\[6pt] 1, & k = 0, \end{cases} \qquad [k]_{p,q} := \begin{cases} \dfrac{p^{k}-q^{k}}{p-q} = p^{k-1}[k]_{\frac{q}{p}}, & k \in \mathbb{N},\\[6pt] 1, & k = 0, \end{cases}$$

$$[k]_{p,q}! := \begin{cases} [k]_{p,q}[k-1]_{p,q}\cdots[1]_{p,q} = \displaystyle\prod_{i=1}^{k}\frac{p^{i}-q^{i}}{p-q}, & k \in \mathbb{N},\\[6pt] 1, & k = 0. \end{cases}$$
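For a quick numerical illustration of these definitions (using the $k = 0$ conventions as printed above), one may compute the (*p*, *q*)-numbers and factorial directly and verify the identity $[k]_{p,q} = p^{k-1}[k]_{\frac{q}{p}}$:

```python
# (p,q)-numbers and factorial, computed directly for 0 < q < p ≤ 1,
# following the k = 0 conventions of the definitions as printed.

def pq_number(k, p, q):
    # [k]_{p,q} = (p^k - q^k)/(p - q) for k ≥ 1, and 1 for k = 0
    return 1.0 if k == 0 else (p**k - q**k) / (p - q)

def pq_factorial(k, p, q):
    # [k]_{p,q}! = [k]_{p,q} [k-1]_{p,q} ... [1]_{p,q}
    out = 1.0
    for i in range(1, k + 1):
        out *= pq_number(i, p, q)
    return out

p, q = 0.9, 0.5
# identity [k]_{p,q} = p^{k-1} [k]_{q/p}, with [k]_r = (1 - r^k)/(1 - r)
k, r = 4, q / p
print(pq_number(k, p, q), p**(k - 1) * (1 - r**k) / (1 - r))
```

The two printed values coincide, confirming the reduction of the (*p*, *q*)-number to an ordinary *q*-number with ratio $q/p$.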

The (*p*, *q*)-forward jump and the (*p*, *q*)-backward jump operators are defined as

$$
\sigma\_{p,q}^k(t) := \left(\frac{q}{p}\right)^k t \quad \text{and} \quad \rho\_{p,q}^k(t) := \left(\frac{p}{q}\right)^k t, \quad \text{for} \ k \in \mathbb{N} \text{, respectively.}
$$

The *q*-analogue of the power function $(a-b)^{\underline{n}}_{q}$ with $n \in \mathbb{N}_0 := \{0, 1, 2, \ldots\}$ is given by

$$(a-b)^{\underline{0}}_{q} := 1, \qquad (a-b)^{\underline{n}}_{q} := \prod_{i=0}^{n-1}\left(a - bq^{i}\right), \qquad a, b \in \mathbb{R}.$$

The (*p*, *q*)-analogue of the power function $(a-b)^{\underline{n}}_{p,q}$ with $n \in \mathbb{N}_0$ is given by

$$(a-b)^{\underline{0}}_{p,q} := 1, \qquad (a-b)^{\underline{n}}_{p,q} := \prod_{k=0}^{n-1}\left(ap^{k} - bq^{k}\right), \qquad a, b \in \mathbb{R}.$$
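A direct computation confirms the elementary factorization $(a-b)^{\underline{n}}_{p,q} = p^{n(n-1)/2}\, a^{n} \prod_{i=0}^{n-1}\big(1 - \tfrac{b}{a}(\tfrac{q}{p})^{i}\big)$, which is what motivates the real-order extension:

```python
# (p,q)-power (a-b)^(n)_{p,q} = Π_{k<n} (a p^k - b q^k), checked against
# the factored form p^{n(n-1)/2} a^n Π_{i<n} (1 - (b/a)(q/p)^i).

def pq_power(a, b, n, p, q):
    out = 1.0
    for k in range(n):
        out *= a * p**k - b * q**k
    return out

a, b, p, q, n = 2.0, 0.7, 0.9, 0.5, 5
lhs = pq_power(a, b, n, p, q)
rhs = p**(n * (n - 1) // 2) * a**n
for i in range(n):
    rhs *= 1 - (b / a) * (q / p)**i
print(lhs, rhs)  # the two forms agree
```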

Generally, for $\alpha \in \mathbb{R}$, we define

$$(a-b)^{\underline{\alpha}}_{q} = a^{\alpha} \prod_{i=0}^{\infty} \frac{1-\left(\frac{b}{a}\right)q^{i}}{1-\left(\frac{b}{a}\right)q^{\alpha+i}}, \quad a \neq 0,$$

$$(a-b)^{\underline{\alpha}}_{p,q} = p^{\binom{\alpha}{2}}\,(a-b)^{\underline{\alpha}}_{\frac{q}{p}} = p^{\binom{\alpha}{2}}\, a^{\alpha} \prod_{i=0}^{\infty} \frac{1-\frac{b}{a}\left(\frac{q}{p}\right)^{i}}{1-\frac{b}{a}\left(\frac{q}{p}\right)^{i+\alpha}}, \quad a \neq 0.$$

In particular, $a^{\underline{\alpha}}_{q} = a^{\alpha}$, $a^{\underline{\alpha}}_{p,q} = p^{\binom{\alpha}{2}}\, a^{\alpha}$, and $(0)^{\underline{\alpha}}_{q} = (0)^{\underline{\alpha}}_{p,q} = 0$ for $\alpha > 0$.

The (*p*, *q*)-gamma and (*p*, *q*)-beta functions are defined by

$$\begin{aligned}
\Gamma_{p,q}(x) &:= \begin{cases} \dfrac{(p-q)^{\underline{x-1}}_{p,q}}{(p-q)^{x-1}} = \dfrac{\left(1-\frac{q}{p}\right)^{\underline{x-1}}_{\frac{q}{p}}}{\left(1-\frac{q}{p}\right)^{x-1}}, & x \in \mathbb{R}\setminus\{0,-1,-2,\ldots\},\\[8pt] [x-1]_{p,q}!, & x \in \mathbb{N}, \end{cases}\\[6pt]
B_{p,q}(x,y) &:= \int_0^1 t^{x-1}(1-qt)^{\underline{y-1}}_{p,q}\, d_{p,q}t = p^{\frac{1}{2}(y-1)(2x+y-2)}\,\frac{\Gamma_{p,q}(x)\,\Gamma_{p,q}(y)}{\Gamma_{p,q}(x+y)},
\end{aligned}$$

respectively.

**Definition 1.** *For* <sup>0</sup> < *<sup>q</sup>* < *<sup>p</sup>* ≤ <sup>1</sup> *and f* : [0, *<sup>T</sup>*] → R*, we define the* (*p*, *<sup>q</sup>*)*-difference of f as*

$$D_{p,q}f(t) := \begin{cases} \dfrac{f(pt)-f(qt)}{(p-q)\,t}, & \text{for } t \neq 0,\\[6pt] f'(0), & \text{for } t = 0, \end{cases}$$

*provided that f is differentiable at* 0*. f is called* (*p*, *q*)*-differentiable on* $I^{T}_{p,q}$ *if* $D_{p,q}f(t)$ *exists for all* $t \in I^{T}_{p,q}$*.*

Observe that the function $g(t) = D_{p,q}f(t)$ is defined on $[0, T/p]$.
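As a sanity check of Definition 1, note that for $f(t) = t^2$ one gets $D_{p,q}f(t) = \frac{(pt)^2 - (qt)^2}{(p-q)t} = (p+q)\,t = [2]_{p,q}\, t$ exactly; a minimal sketch:

```python
# (p,q)-difference of Definition 1: D_{p,q} f(t) = (f(pt) - f(qt)) / ((p-q) t),
# with f'(0) supplied separately at t = 0.

def pq_difference(f, t, p, q, dfdt0=None):
    if t == 0:
        return dfdt0  # f'(0) must be supplied at t = 0
    return (f(p * t) - f(q * t)) / ((p - q) * t)

p, q = 0.9, 0.5
f = lambda t: t**2
# for f(t) = t², D_{p,q} f(t) = (p + q) t
print(pq_difference(f, 2.0, p, q), (p + q) * 2.0)
```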

**Definition 2.** *Let <sup>I</sup> be any closed interval of* R *containing <sup>a</sup>*, *<sup>b</sup> and* 0*. Assuming that <sup>f</sup>* : *<sup>I</sup>* → R *is a given function, we define* (*p*, *q*)*-integral of f from a to b by*

$$\int_a^b f(t)\, d_{p,q}t := \int_0^b f(t)\, d_{p,q}t - \int_0^a f(t)\, d_{p,q}t,$$

*where*

$$\mathcal{I}_{p,q}f(x) = \int_0^x f(t)\, d_{p,q}t = (p-q)\,x \sum_{k=0}^{\infty} \frac{q^{k}}{p^{k+1}}\, f\!\left(\frac{q^{k}}{p^{k+1}}\,x\right), \quad x \in I,$$

*provided that the series converges at x* = *a and x* = *b. f is called* (*p*, *q*)*-integrable on I if it is* (*p*, *q*)*-integrable on* [*a*, *b*] *for all a*, *b* ∈ *I*.
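The defining series converges geometrically and can be evaluated numerically by truncation. As a check, for $f(t) = t$ one has $\int_0^x t\, d_{p,q}t = x^2/[2]_{p,q} = x^2/(p+q)$; a minimal sketch:

```python
# (p,q)-integral via its defining series, truncated once terms are negligible:
# I_{p,q} f(x) = (p - q) x Σ_{k≥0} (q^k / p^{k+1}) f((q^k / p^{k+1}) x).

def pq_integral(f, x, p, q, terms=200):
    s = 0.0
    for k in range(terms):
        node = (q**k / p**(k + 1)) * x
        s += (q**k / p**(k + 1)) * f(node)
    return (p - q) * x * s

p, q, x = 0.9, 0.5, 1.5
# for f(t) = t the exact value is x² / (p + q)
print(pq_integral(lambda t: t, x, p, q), x**2 / (p + q))
```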

An operator $\mathcal{I}^{N}_{p,q}$ is defined as

$$\mathcal{I}^{0}_{p,q}f(x) = f(x) \quad \text{and} \quad \mathcal{I}^{N}_{p,q}f(x) = \mathcal{I}_{p,q}\,\mathcal{I}^{N-1}_{p,q}f(x), \quad N \in \mathbb{N}.$$

The relations between (*p*, *q*)-difference and (*p*, *q*)-integral operators are given by

$$D_{p,q}\,\mathcal{I}_{p,q}f(x) = f(x) \quad \text{and} \quad \mathcal{I}_{p,q}\,D_{p,q}f(x) = f(x) - f(0).$$
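These two relations are easy to check numerically. The sketch below verifies $\mathcal{I}_{p,q}D_{p,q}f(x) = f(x) - f(0)$ for $f(t) = t^2 + 3$ (the two helper functions are repeated so the block is self-contained):

```python
# Check I_{p,q} D_{p,q} f(x) = f(x) - f(0) for f(t) = t² + 3:
# here D_{p,q} f(t) = (p + q) t, and integrating it back returns x².

def pq_difference(f, t, p, q):
    return (f(p * t) - f(q * t)) / ((p - q) * t)

def pq_integral(g, x, p, q, terms=200):
    s = 0.0
    for k in range(terms):
        node = (q**k / p**(k + 1)) * x  # nodes are never 0 for finite k
        s += (q**k / p**(k + 1)) * g(node)
    return (p - q) * x * s

p, q, x = 0.9, 0.5, 1.2
f = lambda t: t**2 + 3
Df = lambda t: pq_difference(f, t, p, q)
print(pq_integral(Df, x, p, q), f(x) - f(0))
```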

Fractional (*p*, *q*)-integral and fractional (*p*, *q*)-difference of Riemann–Liouville type are defined as follows.

**Definition 3.** *For* $\alpha > 0$, $0 < q < p \le 1$ *and f defined on* $I^{T}_{p,q}$*, the fractional* (*p*, *q*)*-integral is defined by*

$$\begin{aligned}
\mathcal{I}^{\alpha}_{p,q}f(t) &:= \frac{1}{p^{\binom{\alpha}{2}}\Gamma_{p,q}(\alpha)} \int_0^t (t-qs)^{\underline{\alpha-1}}_{p,q}\, f\!\left(\frac{s}{p^{\alpha-1}}\right) d_{p,q}s\\
&= \frac{(p-q)\,t}{p^{\binom{\alpha}{2}}\Gamma_{p,q}(\alpha)} \sum_{k=0}^{\infty} \frac{q^{k}}{p^{k+1}} \left(t-\left(\frac{q}{p}\right)^{k+1}t\right)^{\underline{\alpha-1}}_{p,q} f\!\left(\frac{q^{k}}{p^{k+\alpha}}\,t\right),
\end{aligned}$$

*and* $(\mathcal{I}^{0}_{p,q}f)(t) = f(t)$*.*
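The series form of Definition 3 can be evaluated numerically: real-order powers via the truncated infinite-product formula above (with the generalized binomial $p^{\alpha(\alpha-1)/2}$), and $\Gamma_{p,q}$ from its defining quotient. The sketch below is only a truncation-based illustration; for $\alpha = 1$ the operator must reduce to the plain (*p*, *q*)-integral, which is checked here.

```python
# Fractional (p,q)-integral of Definition 3 via its series form.

def pq_power_real(a, b, alpha, p, q, terms=400):
    # (a-b)^(α)_{p,q} ≈ p^{α(α-1)/2} a^α Π_i [1-(b/a)(q/p)^i]/[1-(b/a)(q/p)^{i+α}]
    r, ratio = b / a, q / p
    out = 1.0
    for i in range(terms):
        out *= (1 - r * ratio**i) / (1 - r * ratio**(i + alpha))
    return p**(alpha * (alpha - 1) / 2) * a**alpha * out

def pq_gamma(x, p, q):
    # Γ_{p,q}(x) = (p-q)^(x-1)_{p,q} / (p-q)^{x-1}
    return pq_power_real(p, q, x - 1, p, q) / (p - q)**(x - 1)

def pq_frac_integral(f, t, alpha, p, q, terms=200):
    # series form of Definition 3, truncated after `terms` summands
    c = (p - q) * t / (p**(alpha * (alpha - 1) / 2) * pq_gamma(alpha, p, q))
    s = 0.0
    for k in range(terms):
        w = pq_power_real(t, (q / p)**(k + 1) * t, alpha - 1, p, q)
        s += (q**k / p**(k + 1)) * w * f((q**k / p**(k + alpha)) * t)
    return c * s

p, q = 0.9, 0.5
# α = 1 reduction: I^1_{p,q} applied to f(s) = s must give 1/(p+q) at t = 1
print(pq_frac_integral(lambda s: s, 1.0, 1.0, p, q), 1 / (p + q))
```

Since $\Gamma_{p,q}(1) = \Gamma_{p,q}(2) = 1$ and the power of order $0$ equals $1$, the $\alpha = 1$ case collapses term-by-term to the series of Definition 2.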

**Definition 4.** *For* $\alpha > 0$, $0 < q < p \le 1$ *and f defined on* $I^{T}_{p,q}$*, the fractional* (*p*, *q*)*-difference operator of Riemann–Liouville type of order α is defined by*

$$\begin{aligned}
D^{\alpha}_{p,q}f(t) &:= D^{N}_{p,q}\,\mathcal{I}^{N-\alpha}_{p,q}f(t)\\
&= \frac{1}{p^{\binom{-\alpha}{2}}\Gamma_{p,q}(-\alpha)} \int_0^t (t-qs)^{\underline{-\alpha-1}}_{p,q}\, f\!\left(\frac{s}{p^{-\alpha-1}}\right) d_{p,q}s,
\end{aligned}$$

*and* $D^{0}_{p,q}f(t) = f(t)$*, where* $N-1 < \alpha < N$, $N \in \mathbb{N}$*.*

**Lemma 1** ([37])**.** *Let* $\alpha \in (N-1, N)$, $N \in \mathbb{N}$, $0 < q < p \le 1$ *and* $f : I^{T}_{p,q} \to \mathbb{R}$*. Then,*

$$\mathcal{I}^{\alpha}_{p,q}D^{\alpha}_{p,q}f(t) = f(t) + C_{1}t^{\alpha-1} + C_{2}t^{\alpha-2} + \cdots + C_{N}t^{\alpha-N}$$

*for some* $C_{i} \in \mathbb{R}$, $i = 1, 2, \ldots, N$*.*

**Lemma 2** ([37])**.** *Let* $0 < q < p \le 1$ *and* $f : I^{T}_{p,q} \to \mathbb{R}$ *be continuous at* 0*. Then,*

$$\int\_0^\chi \int\_0^s f(\tau) \, d\_{p,q}\tau \, d\_{p,q}s \, = \int\_0^{\frac{\chi}{p}} \int\_{pq\tau}^\chi f(\tau) \, d\_{p,q}s \, d\_{p,q}\tau.$$

**Lemma 3** ([37])**.** *Let α*, *β* > 0, 0 < *q* < *p* ≤ 1*. Then,*

$$\begin{aligned}
(a)\quad & \int_0^t (t-qs)^{\underline{\alpha-1}}_{p,q}\, s^{\beta}\, d_{p,q}s = t^{\alpha+\beta}\, B_{p,q}(\beta+1, \alpha),\\[4pt]
(b)\quad & \int_0^t\!\int_0^x (t-qx)^{\underline{\alpha-1}}_{p,q}\,(x-qs)^{\underline{\beta-1}}_{p,q}\, d_{p,q}s\, d_{p,q}x = \frac{B_{p,q}(\beta+1,\alpha)}{[\beta]_{p,q}}\, t^{\alpha+\beta}.
\end{aligned}$$

**Lemma 4** ([40])**.** *Let* $\alpha, \beta > 0$, $0 < q < p \le 1$ *and* $n \in \mathbb{Z}$*. Then,*

$$\begin{aligned}
(a)\quad & \int_0^t (t-qs)^{\underline{\alpha-1}}_{p,q}\, d_{p,q}s = p^{\binom{\alpha}{2}}\,\frac{\Gamma_{p,q}(\alpha)}{\Gamma_{p,q}(\alpha+1)}\, t^{\alpha},\\[4pt]
(b)\quad & \int_0^t\!\int_0^{\frac{x}{p^{\beta-1}}} (t-qx)^{\underline{\beta-1}}_{p,q}\left(\frac{x}{p^{\beta-1}}-qs\right)^{\underline{\alpha-1}}_{p,q} d_{p,q}s\, d_{p,q}x = p^{\binom{\alpha}{2}+\binom{\beta}{2}}\,\frac{\Gamma_{p,q}(\alpha)\,\Gamma_{p,q}(\beta)}{\Gamma_{p,q}(\alpha+\beta+1)}\, t^{\alpha+\beta},\\[4pt]
(c)\quad & \int_0^t (t-qs)^{\underline{-\beta-1}}_{p,q}\left(\frac{s}{p^{-\beta-1}}\right)^{\alpha-n} d_{p,q}s = p^{\binom{-\beta}{2}}\,\frac{\Gamma_{p,q}(\alpha-n+1)\,\Gamma_{p,q}(-\beta)}{\Gamma_{p,q}(\alpha-\beta-n+1)}\, t^{\alpha-\beta-n}.
\end{aligned}$$

**Lemma 5.** *Let* $\alpha, \beta, \theta > 0$, $0 < q < p \le 1$ *and* $n \in \mathbb{Z}$*. Then,*

$$\begin{aligned}
(a)\quad & \int_0^t\!\int_0^{\frac{x}{p^{\theta-1}}} (t-qx)^{\underline{\theta-1}}_{p,q}\left(\frac{x}{p^{\theta-1}}-qs\right)^{\underline{\beta-1}}_{p,q}\left(\frac{s}{p^{\alpha-1}}\right)^{\underline{\alpha-n}}_{p,q} d_{p,q}s\, d_{p,q}x\\
&\qquad = p^{\binom{\beta}{2}+\binom{\theta}{2}}\,\frac{\Gamma_{p,q}(\alpha-n+1)\,\Gamma_{p,q}(\beta)\,\Gamma_{p,q}(\theta)}{\Gamma_{p,q}(\alpha+\beta+\theta-n+1)}\, t^{\alpha+\beta+\theta-n},\\[4pt]
(b)\quad & \int_0^t\!\int_0^{\frac{y}{p^{\theta-1}}}\!\int_0^{\frac{x}{p^{\beta-1}}} (t-qy)^{\underline{\theta-1}}_{p,q}\left(\frac{y}{p^{\theta-1}}-qx\right)^{\underline{\beta-1}}_{p,q}\left(\frac{x}{p^{\beta-1}}-qs\right)^{\underline{\alpha-1}}_{p,q} d_{p,q}s\, d_{p,q}x\, d_{p,q}y\\
&\qquad = p^{\binom{\alpha}{2}+\binom{\beta}{2}+\binom{\theta}{2}}\,\frac{\Gamma_{p,q}(\alpha)\,\Gamma_{p,q}(\beta)\,\Gamma_{p,q}(\theta)}{\Gamma_{p,q}(\alpha+\beta+\theta+1)}\, t^{\alpha+\beta+\theta}.
\end{aligned}$$

**Proof.** By Lemmas 2–4 and the definition of the (*p*, *q*)-beta function, we have

$$\begin{aligned}
(a)\quad & \int_0^t\!\int_0^{\frac{x}{p^{\theta-1}}} (t-qx)^{\underline{\theta-1}}_{p,q}\left(\frac{x}{p^{\theta-1}}-qs\right)^{\underline{\beta-1}}_{p,q}\left(\frac{s}{p^{\alpha-1}}\right)^{\underline{\alpha-n}}_{p,q} d_{p,q}s\, d_{p,q}x\\
&\quad = \int_0^t (t-qx)^{\underline{\theta-1}}_{p,q}\left[\int_0^{\frac{x}{p^{\theta-1}}}\left(\frac{x}{p^{\theta-1}}-qs\right)^{\underline{\beta-1}}_{p,q}\left(\frac{s}{p^{\alpha-1}}\right)^{\underline{\alpha-n}}_{p,q} d_{p,q}s\right] d_{p,q}x\\
&\quad = \frac{p^{\binom{\beta}{2}}}{p^{(\theta-1)(\alpha+\beta-n)}}\cdot\frac{\Gamma_{p,q}(\alpha-n+1)\,\Gamma_{p,q}(\beta)}{\Gamma_{p,q}(\alpha+\beta-n+1)} \int_0^t (t-qx)^{\underline{\theta-1}}_{p,q}\, x^{\alpha+\beta-n}\, d_{p,q}x\\
&\quad = p^{\binom{\beta}{2}+\binom{\theta}{2}}\,\frac{\Gamma_{p,q}(\alpha-n+1)\,\Gamma_{p,q}(\beta)\,\Gamma_{p,q}(\theta)}{\Gamma_{p,q}(\alpha+\beta+\theta-n+1)}\, t^{\alpha+\beta+\theta-n},
\end{aligned}$$

$$\begin{aligned}
(b)\quad & \int_0^t\!\int_0^{\frac{y}{p^{\theta-1}}}\!\int_0^{\frac{x}{p^{\beta-1}}} (t-qy)^{\underline{\theta-1}}_{p,q}\left(\frac{y}{p^{\theta-1}}-qx\right)^{\underline{\beta-1}}_{p,q}\left(\frac{x}{p^{\beta-1}}-qs\right)^{\underline{\alpha-1}}_{p,q} d_{p,q}s\, d_{p,q}x\, d_{p,q}y\\
&\quad = \int_0^t\!\int_0^{\frac{y}{p^{\theta-1}}} (t-qy)^{\underline{\theta-1}}_{p,q}\left(\frac{y}{p^{\theta-1}}-qx\right)^{\underline{\beta-1}}_{p,q}\left[\int_0^{\frac{x}{p^{\beta-1}}}\left(\frac{x}{p^{\beta-1}}-qs\right)^{\underline{\alpha-1}}_{p,q} d_{p,q}s\right] d_{p,q}x\, d_{p,q}y\\
&\quad = p^{\binom{\alpha}{2}}\,\frac{\Gamma_{p,q}(\alpha)}{\Gamma_{p,q}(\alpha+1)} \int_0^t\!\int_0^{\frac{y}{p^{\theta-1}}} (t-qy)^{\underline{\theta-1}}_{p,q}\left(\frac{y}{p^{\theta-1}}-qx\right)^{\underline{\beta-1}}_{p,q}\left(\frac{x}{p^{\beta-1}}\right)^{\alpha} d_{p,q}x\, d_{p,q}y\\
&\quad = p^{\binom{\alpha}{2}+\binom{\beta}{2}+\binom{\theta}{2}}\,\frac{\Gamma_{p,q}(\alpha)\,\Gamma_{p,q}(\beta)\,\Gamma_{p,q}(\theta)}{\Gamma_{p,q}(\alpha+\beta+\theta+1)}\, t^{\alpha+\beta+\theta}.
\end{aligned}$$

The proof is complete.

The following lemma, dealing with a linear variant of problem (1), plays an important role in the forthcoming analysis.

**Lemma 6.** *Let* $\Omega \neq 0$*;* $\alpha, \beta, \theta \in (0,1]$*;* $0 < q < p \le 1$*; let* $h \in C\big(I^{T}_{p,q}, \mathbb{R}\big)$ *and* $g \in C\big(I^{T}_{p,q}, \mathbb{R}^{+}\big)$ *be given functions, and let* $\varphi : C\big(I^{T}_{p,q}, \mathbb{R}\big) \to \mathbb{R}$ *be a given functional. Then, the problem*

$$D^{\alpha}_{p,q}D^{\beta}_{p,q}u(t) = h(t), \quad t \in I^{T}_{p,q}, \tag{2}$$

$$u(0) = u\left(\frac{T}{p}\right), \tag{3}$$

$$\mathcal{I}^{\theta}_{p,q}\, g(\eta)\, u(\eta) = \varphi(u), \quad \eta \in I^{T}_{p,q} - \left\{0, \frac{T}{p}\right\}, \tag{4}$$

*has the unique solution:*

$$\begin{aligned}
u(t) &= \frac{1}{p^{\binom{\alpha}{2}+\binom{\beta}{2}}\Gamma_{p,q}(\alpha)\,\Gamma_{p,q}(\beta)} \int_0^t\!\int_0^{\frac{x}{p^{\beta-1}}} (t-qx)^{\underline{\beta-1}}_{p,q}\left(\frac{x}{p^{\beta-1}}-qs\right)^{\underline{\alpha-1}}_{p,q} h\!\left(\frac{s}{p^{\alpha-1}}\right) d_{p,q}s\, d_{p,q}x\\
&\quad - \frac{t^{\beta-1}}{\Omega}\left\{B_{\eta}\,\mathbb{P}[h] + A_{T}\big(\varphi(u) - \mathbb{Q}[h]\big)\right\}\\
&\quad + \frac{t^{\alpha+\beta-1}}{\Omega\,\Gamma_{p,q}(\alpha+\beta)}\left\{A_{\eta}\,\mathbb{P}[h] + \left(\frac{T}{p}\right)^{\beta-1}\big(\varphi(u) - \mathbb{Q}[h]\big)\right\}
\end{aligned} \tag{5}$$

*where the functionals* P[*h*] *and* Q[*h*] *are defined by*

$$\mathbb{P}[h] := \frac{1}{p^{\binom{\alpha}{2}+\binom{\beta}{2}}\Gamma_{p,q}(\alpha)\,\Gamma_{p,q}(\beta)} \int_0^{\frac{T}{p}}\!\int_0^{\frac{x}{p^{\beta-1}}} \left(\frac{T}{p}-qx\right)^{\underline{\beta-1}}_{p,q}\left(\frac{x}{p^{\beta-1}}-qs\right)^{\underline{\alpha-1}}_{p,q} h\!\left(\frac{s}{p^{\alpha-1}}\right) d_{p,q}s\, d_{p,q}x, \tag{6}$$

$$\mathbb{Q}[h] := \frac{1}{p^{\binom{\alpha}{2}+\binom{\beta}{2}+\binom{\theta}{2}}\Gamma_{p,q}(\alpha)\,\Gamma_{p,q}(\beta)\,\Gamma_{p,q}(\theta)} \int_0^{\eta}\!\int_0^{\frac{y}{p^{\theta-1}}}\!\int_0^{\frac{x}{p^{\beta-1}}} (\eta-qy)^{\underline{\theta-1}}_{p,q}\left(\frac{y}{p^{\theta-1}}-qx\right)^{\underline{\beta-1}}_{p,q}\left(\frac{x}{p^{\beta-1}}-qs\right)^{\underline{\alpha-1}}_{p,q}\, g\!\left(\frac{y}{p^{\theta-1}}\right) h\!\left(\frac{s}{p^{\alpha-1}}\right) d_{p,q}s\, d_{p,q}x\, d_{p,q}y, \tag{7}$$

*and the constants $\mathbf{A}\_T$, $\mathbf{A}\_{\eta}$, $\mathbf{B}\_{\eta}$ and $\Omega$ are defined by*

$$\mathbf{A}\_T := \frac{1}{p^{\binom{\beta}{2}}\Gamma\_{p,q}(\beta)} \int\_0^{\frac{T}{p}} \left( \frac{T}{p} - qs \right)\_{p,q}^{\beta-1} \left( \frac{s}{p^{\beta-1}} \right)^{\alpha-1} d\_{p,q}s = \frac{\left( \frac{T}{p} \right)^{\alpha+\beta-1}}{\Gamma\_{p,q}(\alpha+\beta)} \tag{8}$$

$$\mathbf{A}\_{\eta} := \frac{1}{p^{\binom{\theta}{2}}\Gamma\_{p,q}(\theta)} \int\_0^{\eta} (\eta - qs)\_{p,q}^{\theta-1}\, g\left( \frac{s}{p^{\theta-1}} \right) \left( \frac{s}{p^{\theta-1}} \right)^{\beta-1} d\_{p,q}s \tag{9}$$

$$\mathbf{B}\_{\eta} := \frac{1}{p^{\binom{\beta}{2}+\binom{\theta}{2}}\Gamma\_{p,q}(\beta)\Gamma\_{p,q}(\theta)} \int\_0^{\eta} \int\_0^{\frac{x}{p^{\theta-1}}} (\eta - qx)\_{p,q}^{\theta-1} \left( \frac{x}{p^{\theta-1}} - qs \right)\_{p,q}^{\beta-1} g\left( \frac{x}{p^{\theta-1}} \right) \left( \frac{s}{p^{\beta-1}} \right)^{\alpha-1} d\_{p,q}s \, d\_{p,q}x \tag{10}$$

$$\Omega := \left( \frac{T}{p} \right)^{\beta-1} \mathbf{B}\_{\eta} - \mathbf{A}\_T \mathbf{A}\_{\eta}\,. \tag{11}$$

**Proof.** Taking the fractional (*p*, *q*)-integral of order *α* of (2) and using Lemma 1, we obtain

$$\begin{split} D\_{p,q}^{\beta}u(t) &= C\_1 t^{\alpha-1} + \mathcal{I}\_{p,q}^{\alpha}h(t) \\ &= C\_1 t^{\alpha-1} + \frac{1}{p^{\binom{\alpha}{2}}\Gamma\_{p,q}(\alpha)} \int\_0^t (t-qs)\_{p,q}^{\alpha-1} h\left( \frac{s}{p^{\alpha-1}} \right) d\_{p,q}s. \end{split} \tag{12}$$

Next, taking the fractional (*p*, *q*)-integral of order *β* of (12), we obtain

$$\begin{split} u(t) &= C\_0 t^{\beta-1} + C\_1 \frac{t^{\alpha+\beta-1}}{\Gamma\_{p,q}(\alpha+\beta)} \\ &\quad + \frac{1}{p^{\binom{\alpha}{2}+\binom{\beta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)} \int\_0^t \int\_0^{\frac{x}{p^{\beta-1}}} (t-qx)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} h\left( \frac{s}{p^{\alpha-1}} \right) d\_{p,q}s \, d\_{p,q}x. \end{split} \tag{13}$$

Substituting *t* = 0 and *t* = *T*/*p* into (13) and employing condition (3), we obtain

$$C\_0 \left( \frac{T}{p} \right)^{\beta-1} + C\_1 \mathbf{A}\_T = -\mathbb{P}[h]. \tag{14}$$

Multiplying (13) by *g* and taking the fractional (*p*, *q*)-integral of order *θ*, we obtain

$$\begin{split} \mathcal{I}\_{p,q}^{\theta}g(t)u(t) &= \frac{C\_0}{p^{\binom{\theta}{2}}\Gamma\_{p,q}(\theta)} \int\_0^t (t-qs)\_{p,q}^{\theta-1}\, g\left( \frac{s}{p^{\theta-1}} \right) \left( \frac{s}{p^{\theta-1}} \right)^{\beta-1} d\_{p,q}s \\ &\quad + \frac{C\_1}{p^{\binom{\beta}{2}+\binom{\theta}{2}}\Gamma\_{p,q}(\beta)\Gamma\_{p,q}(\theta)} \int\_0^t \int\_0^{\frac{x}{p^{\theta-1}}} (t-qx)\_{p,q}^{\theta-1} \left( \frac{x}{p^{\theta-1}} - qs \right)\_{p,q}^{\beta-1} g\left( \frac{x}{p^{\theta-1}} \right) \left( \frac{s}{p^{\beta-1}} \right)^{\alpha-1} d\_{p,q}s \, d\_{p,q}x \\ &\quad + \frac{1}{p^{\binom{\alpha}{2}+\binom{\beta}{2}+\binom{\theta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)\Gamma\_{p,q}(\theta)} \int\_0^t \int\_0^{\frac{y}{p^{\theta-1}}} \int\_0^{\frac{x}{p^{\beta-1}}} (t-qy)\_{p,q}^{\theta-1} \left( \frac{y}{p^{\theta-1}} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} \\ &\qquad \times\, g\left( \frac{y}{p^{\theta-1}} \right) h\left( \frac{s}{p^{\alpha-1}} \right) d\_{p,q}s \, d\_{p,q}x \, d\_{p,q}y. \end{split} \tag{15}$$

From condition (4), we have

$$C\_0 \mathbf{A}\_{\eta} + C\_1 \mathbf{B}\_{\eta} = \varphi(u) - \mathbb{Q}[h]. \tag{16}$$

Solving the system of linear Equations (14) and (16), we obtain

$$C\_{0} = \frac{-\mathbf{B}\_{\eta}\mathbb{P}[h] - \mathbf{A}\_{T}\big(\varphi(u) - \mathbb{Q}[h]\big)}{\Omega} \quad \text{and} \quad C\_{1} = \frac{\left( \frac{T}{p} \right)^{\beta-1} \big(\varphi(u) - \mathbb{Q}[h]\big) + \mathbf{A}\_{\eta}\mathbb{P}[h]}{\Omega},$$

where $\mathbb{P}[h]$, $\mathbb{Q}[h]$, $\mathbf{A}\_T$, $\mathbf{A}\_{\eta}$, $\mathbf{B}\_{\eta}$ and $\Omega$ are defined by (6)–(11), respectively.

After substituting $C\_0$ and $C\_1$ into (13), we obtain (5). The converse follows by direct computation. The proof is complete.
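The algebra of the final step can be sanity-checked numerically. The following sketch uses arbitrary placeholder values for $(T/p)^{\beta-1}$, $\mathbf{A}\_T$, $\mathbf{A}\_{\eta}$, $\mathbf{B}\_{\eta}$, $\mathbb{P}[h]$ and $\varphi(u)-\mathbb{Q}[h]$ (none taken from the paper) and verifies that the stated $C\_0$ and $C\_1$ solve the linear system (14) and (16):

```python
# Check that the closed-form C0, C1 solve the 2x2 system
#   C0*(T/p)^(beta-1) + C1*A_T   = -P          (14)
#   C0*A_eta          + C1*B_eta = R,  R := phi(u) - Q[h]   (16)
# with Omega = (T/p)^(beta-1) * B_eta - A_T * A_eta.
# All constants below are illustrative placeholders, not values from the paper.

Tp_pow = 1.968                      # stands for (T/p)^(beta-1)
A_T, A_eta, B_eta = 2.063, 1.7, 0.9
P, R = 0.31, -0.42                  # P = P[h], R = phi(u) - Q[h]

Omega = Tp_pow * B_eta - A_T * A_eta
assert Omega != 0                   # the hypothesis of Lemma 6

C0 = (-B_eta * P - A_T * R) / Omega
C1 = (Tp_pow * R + A_eta * P) / Omega

# Both equations of the system hold up to round-off.
assert abs(C0 * Tp_pow + C1 * A_T + P) < 1e-12
assert abs(C0 * A_eta + C1 * B_eta - R) < 1e-12
print("C0, C1 solve the system")
```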

#### **3. Existence and Uniqueness Result**

In this section, we prove an existence and uniqueness result for problem (1) by using the Banach fixed point theorem.

**Lemma 7** ([43] Banach fixed point theorem)**.** *Let C be a nonempty closed subset of a Banach space X. Then every contraction mapping P of C into itself has a unique fixed point.*

Let $\mathcal{C} = C\left(I^T\_{p,q}, \mathbb{R}\right)$ be the Banach space of all functions *u* equipped with the norm

$$\|u\|\_{\mathcal{C}} = \max\left\{ \|u\|, \left\| D^{\nu}\_{p,q}u \right\| \right\},$$

where $\|u\| = \max\_{t \in I^T\_{p,q}} |u(t)|$ and $\left\| D^{\nu}\_{p,q}u \right\| = \max\_{t \in I^T\_{p,q}} \left| D^{\nu}\_{p,q}u(t) \right|$.

By Lemma 6, replacing $h(t)$ by $F\left[ t, u(t), \Psi^{\gamma}\_{p,q}u(t), D^{\nu}\_{p,q}u(t) \right]$, we define an operator $\mathcal{A} : \mathcal{C} \to \mathcal{C}$ by

$$\begin{split} (\mathcal{A}u)(t) &:= \frac{\varphi(u)}{\Omega} \left[ \frac{\left(\frac{T}{p}\right)^{\beta-1}}{\Gamma\_{p,q}(\alpha+\beta)}\,t^{\alpha+\beta-1} - \mathbf{A}\_{T}\,t^{\beta-1} \right] \\ &\quad + \frac{\mathbb{Q}^\*[F\_u]}{\Omega} \left[ \mathbf{A}\_{T}\,t^{\beta-1} - \frac{\left(\frac{T}{p}\right)^{\beta-1}}{\Gamma\_{p,q}(\alpha+\beta)}\,t^{\alpha+\beta-1} \right] \\ &\quad + \frac{\mathbb{P}^\*[F\_u]}{\Omega} \left[ \mathbf{A}\_{\eta}\,\frac{t^{\alpha+\beta-1}}{\Gamma\_{p,q}(\alpha+\beta)} - \mathbf{B}\_{\eta}\,t^{\beta-1} \right] \\ &\quad + \frac{1}{p^{\binom{\alpha}{2}+\binom{\beta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)} \int\_0^t \int\_0^{\frac{x}{p^{\beta-1}}} (t-qx)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} \\ &\qquad \times F\left[ \frac{s}{p^{\alpha-1}},\, u\left( \frac{s}{p^{\alpha-1}} \right),\, \Psi^{\gamma}\_{p,q}u\left( \frac{s}{p^{\alpha-1}} \right),\, D^{\nu}\_{p,q}u\left( \frac{s}{p^{\alpha-1}} \right) \right] d\_{p,q}s \, d\_{p,q}x \end{split} \tag{17}$$

where the functionals P∗[*Fu*] and Q∗[*Fu*] are defined by

$$\begin{split} \mathbb{P}^\*[F\_u] &:= \frac{1}{p^{\binom{\alpha}{2}+\binom{\beta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)} \int\_0^{\frac{T}{p}} \int\_0^{\frac{x}{p^{\beta-1}}} \left( \frac{T}{p} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} \\ &\qquad \times F\left[ \frac{s}{p^{\alpha-1}},\, u\left( \frac{s}{p^{\alpha-1}} \right),\, \Psi^{\gamma}\_{p,q}u\left( \frac{s}{p^{\alpha-1}} \right),\, D^{\nu}\_{p,q}u\left( \frac{s}{p^{\alpha-1}} \right) \right] d\_{p,q}s \, d\_{p,q}x \end{split} \tag{18}$$

$$\begin{split} \mathbb{Q}^\*[F\_u] &:= \frac{1}{p^{\binom{\alpha}{2}+\binom{\beta}{2}+\binom{\theta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)\Gamma\_{p,q}(\theta)} \int\_0^{\eta} \int\_0^{\frac{y}{p^{\theta-1}}} \int\_0^{\frac{x}{p^{\beta-1}}} (\eta - qy)\_{p,q}^{\theta-1} \left( \frac{y}{p^{\theta-1}} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} g\left( \frac{y}{p^{\theta-1}} \right) \\ &\qquad \times F\left[ \frac{s}{p^{\alpha-1}},\, u\left( \frac{s}{p^{\alpha-1}} \right),\, \Psi^{\gamma}\_{p,q}u\left( \frac{s}{p^{\alpha-1}} \right),\, D^{\nu}\_{p,q}u\left( \frac{s}{p^{\alpha-1}} \right) \right] d\_{p,q}s \, d\_{p,q}x \, d\_{p,q}y \end{split} \tag{19}$$

and the constants $\mathbf{A}\_T$, $\mathbf{A}\_{\eta}$, $\mathbf{B}\_{\eta}$ and $\Omega$ are defined by (8)–(11), respectively.

We see that problem (1) has a solution if and only if the operator $\mathcal{A}$ has a fixed point.

**Theorem 1.** *Assume that $F : I^T\_{p,q} \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is continuous, $\phi : I^T\_{p,q} \times I^T\_{p,q} \to [0,\infty)$ is continuous with $\phi\_0 = \max\left\{ \phi(t,s) : (t,s) \in I^T\_{p,q} \times I^T\_{p,q} \right\}$, and $\varphi : C\left(I^T\_{p,q}, \mathbb{R}\right) \to \mathbb{R}$ is a given functional. Suppose that the following conditions hold:*

(*H*1) *There exist positive constants $L\_1$, $L\_2$, $L\_3$ such that for each $t \in I^T\_{p,q}$ and $u\_i, v\_i \in \mathbb{R}$, $i = 1, 2, 3$,*

$$\left| F[t, u\_1, u\_2, u\_3] - F[t, v\_1, v\_2, v\_3] \right| \le L\_1 |u\_1 - v\_1| + L\_2 |u\_2 - v\_2| + L\_3 |u\_3 - v\_3|.$$

(*H*2) *There exists a positive constant $\omega$ such that for each $u, v \in \mathcal{C}$,*

$$|\varphi(u) - \varphi(v)| \le \omega \|u - v\|\_{\mathcal{C}}.$$

(*H*3) *For each $t \in I^T\_{p,q}$, $0 < g < g(t) < G$.*
(*H*4) $\mathcal{X} := \omega\mathbf{O}\_T + (\mathcal{L} + L\_3)\Theta < 1$*, where*

$$\mathcal{L} := L\_1 + \frac{L\_2\,\phi\_0 \left( \frac{T}{p} \right)^{\gamma}}{\Gamma\_{p,q}(\gamma+1)}, \tag{20}$$

$$\Theta := \frac{\mathbf{O}\_T\, G\, \eta^{\alpha+\beta+\theta}}{\Gamma\_{p,q}(\alpha+\beta+\theta+1)} + \frac{\left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)} \left( \mathbf{O}\_{\eta} + 1 \right), \tag{21}$$

$$\mathbf{O}\_T := \left[ \frac{\left( \frac{T}{p} \right)^{\alpha+\beta-1}}{\Gamma\_{p,q}(\alpha+\beta)} + \mathbf{A}\_T \right] \frac{\left( \frac{T}{p} \right)^{\beta-1}}{|\Omega|}, \tag{22}$$

$$\mathbf{O}\_{\eta} := \left[ \frac{\left( \frac{T}{p} \right)^{\alpha}}{\Gamma\_{p,q}(\alpha+\beta)} \left| \mathbf{A}\_{\eta} \right| + \left| \mathbf{B}\_{\eta} \right| \right] \frac{\left( \frac{T}{p} \right)^{\beta-1}}{|\Omega|}. \tag{23}$$

*Then, problem* (1) *has a unique solution on $I^T\_{p,q}$.*

**Proof.** For each $t \in I^T\_{p,q}$ and $u, v \in \mathcal{C}$,

$$\begin{split} \left| \Psi^{\gamma}\_{p,q}u(t) - \Psi^{\gamma}\_{p,q}v(t) \right| &\le \frac{\phi\_0}{p^{\binom{\gamma}{2}}\Gamma\_{p,q}(\gamma)} \int\_0^t (t-qs)\_{p,q}^{\gamma-1} \left| u\left( \frac{s}{p^{\gamma-1}} \right) - v\left( \frac{s}{p^{\gamma-1}} \right) \right| d\_{p,q}s \\ &\le \frac{\phi\_0}{p^{\binom{\gamma}{2}}\Gamma\_{p,q}(\gamma)}\, \|u-v\| \int\_0^{\frac{T}{p}} \left( \frac{T}{p} - qs \right)\_{p,q}^{\gamma-1} d\_{p,q}s \\ &= \frac{\phi\_0 \left( \frac{T}{p} \right)^{\gamma}}{\Gamma\_{p,q}(\gamma+1)}\, \|u-v\|. \end{split}$$

Denote

$$\mathcal{F}|u-v|(t) := \left| F\left[ t, u(t), \Psi^{\gamma}\_{p,q}u(t), D^{\nu}\_{p,q}u(t) \right] - F\left[ t, v(t), \Psi^{\gamma}\_{p,q}v(t), D^{\nu}\_{p,q}v(t) \right] \right|.$$

By using Lemma 5(a), we obtain

$$\begin{split} \left| \mathbb{P}^\*[F\_u] - \mathbb{P}^\*[F\_v] \right| &\le \frac{1}{p^{\binom{\alpha}{2}+\binom{\beta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)} \int\_0^{\frac{T}{p}} \int\_0^{\frac{x}{p^{\beta-1}}} \left( \frac{T}{p} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} \mathcal{F}|u-v|\left( \frac{s}{p^{\alpha-1}} \right) d\_{p,q}s \, d\_{p,q}x \\ &\le \frac{L\_1 \|u-v\| + L\_2 \left\| \Psi^{\gamma}\_{p,q}u - \Psi^{\gamma}\_{p,q}v \right\| + L\_3 \left\| D^{\nu}\_{p,q}u - D^{\nu}\_{p,q}v \right\|}{p^{\binom{\alpha}{2}+\binom{\beta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)} \int\_0^{\frac{T}{p}} \int\_0^{\frac{x}{p^{\beta-1}}} \left( \frac{T}{p} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} d\_{p,q}s \, d\_{p,q}x \\ &\le \left( \left[ L\_1 + \frac{L\_2\,\phi\_0 \left( \frac{T}{p} \right)^{\gamma}}{\Gamma\_{p,q}(\gamma+1)} \right] \|u-v\| + L\_3 \left\| D^{\nu}\_{p,q}u - D^{\nu}\_{p,q}v \right\| \right) \frac{\left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)} \\ &\le (\mathcal{L} + L\_3)\, \frac{\left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)}\, \|u-v\|\_{\mathcal{C}}, \end{split} \tag{24}$$

and by using Lemma 5(b), we have

$$\begin{split} \left| \mathbb{Q}^\*[F\_u] - \mathbb{Q}^\*[F\_v] \right| &\le \frac{G}{p^{\binom{\alpha}{2}+\binom{\beta}{2}+\binom{\theta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)\Gamma\_{p,q}(\theta)} \int\_0^{\eta} \int\_0^{\frac{y}{p^{\theta-1}}} \int\_0^{\frac{x}{p^{\beta-1}}} (\eta - qy)\_{p,q}^{\theta-1} \left( \frac{y}{p^{\theta-1}} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} \mathcal{F}|u-v|\left( \frac{s}{p^{\alpha-1}} \right) d\_{p,q}s \, d\_{p,q}x \, d\_{p,q}y \\ &\le \frac{G \left( L\_1 \|u-v\| + L\_2 \left\| \Psi^{\gamma}\_{p,q}u - \Psi^{\gamma}\_{p,q}v \right\| + L\_3 \left\| D^{\nu}\_{p,q}u - D^{\nu}\_{p,q}v \right\| \right)}{p^{\binom{\alpha}{2}+\binom{\beta}{2}+\binom{\theta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)\Gamma\_{p,q}(\theta)} \int\_0^{\eta} \int\_0^{\frac{y}{p^{\theta-1}}} \int\_0^{\frac{x}{p^{\beta-1}}} (\eta - qy)\_{p,q}^{\theta-1} \left( \frac{y}{p^{\theta-1}} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} d\_{p,q}s \, d\_{p,q}x \, d\_{p,q}y \\ &\le \left( \left[ L\_1 + \frac{L\_2\,\phi\_0 \left( \frac{T}{p} \right)^{\gamma}}{\Gamma\_{p,q}(\gamma+1)} \right] \|u-v\| + L\_3 \left\| D^{\nu}\_{p,q}u - D^{\nu}\_{p,q}v \right\| \right) \frac{G\, \eta^{\alpha+\beta+\theta}}{\Gamma\_{p,q}(\alpha+\beta+\theta+1)} \\ &\le \frac{G (\mathcal{L} + L\_3)\, \eta^{\alpha+\beta+\theta}}{\Gamma\_{p,q}(\alpha+\beta+\theta+1)}\, \|u-v\|\_{\mathcal{C}}. \end{split} \tag{25}$$

Then,

$$\begin{split} \left| (\mathcal{A}u)(t) - (\mathcal{A}v)(t) \right| &\le \omega \|u-v\|\_{\mathcal{C}}\, \frac{\left( \frac{T}{p} \right)^{\beta-1}}{|\Omega|} \left[ \frac{\left( \frac{T}{p} \right)^{\alpha+\beta-1}}{\Gamma\_{p,q}(\alpha+\beta)} + \mathbf{A}\_T \right] \\ &\quad + (\mathcal{L} + L\_3) \|u-v\|\_{\mathcal{C}}\, \frac{G\, \eta^{\alpha+\beta+\theta}}{\Gamma\_{p,q}(\alpha+\beta+\theta+1)} \cdot \frac{\left( \frac{T}{p} \right)^{\beta-1}}{|\Omega|} \left[ \frac{\left( \frac{T}{p} \right)^{\alpha+\beta-1}}{\Gamma\_{p,q}(\alpha+\beta)} + \mathbf{A}\_T \right] \\ &\quad + (\mathcal{L} + L\_3) \|u-v\|\_{\mathcal{C}}\, \frac{\left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)} \cdot \frac{\left( \frac{T}{p} \right)^{\beta-1}}{|\Omega|} \left[ \frac{\left( \frac{T}{p} \right)^{\alpha}}{\Gamma\_{p,q}(\alpha+\beta)} \left| \mathbf{A}\_{\eta} \right| + \left| \mathbf{B}\_{\eta} \right| \right] \\ &\quad + \frac{(\mathcal{L} + L\_3) \|u-v\|\_{\mathcal{C}}}{p^{\binom{\alpha}{2}+\binom{\beta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)} \int\_0^{\frac{T}{p}} \int\_0^{\frac{x}{p^{\beta-1}}} \left( \frac{T}{p} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} d\_{p,q}s \, d\_{p,q}x \\ &\le \left\{ \mathbf{O}\_T \left[ \omega + (\mathcal{L} + L\_3) \frac{G\, \eta^{\alpha+\beta+\theta}}{\Gamma\_{p,q}(\alpha+\beta+\theta+1)} \right] + \mathbf{O}\_{\eta} (\mathcal{L} + L\_3) \frac{\left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)} + (\mathcal{L} + L\_3) \frac{\left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)} \right\} \|u-v\|\_{\mathcal{C}} \\ &= \mathcal{X}\, \|u-v\|\_{\mathcal{C}}. \end{split} \tag{26}$$

Taking the fractional (*p*, *q*)-difference of order *ν* of (17), we obtain

$$\begin{split} (D^{\nu}\_{p,q}\mathcal{A}u)(t) &= \frac{\varphi(u)}{\Omega} \left[ \frac{\left( \frac{T}{p} \right)^{\beta-1}}{\Gamma\_{p,q}(\alpha+\beta-\nu)}\, t^{\alpha+\beta-\nu-1} - \mathbf{A}\_T\, \frac{\Gamma\_{p,q}(\beta)}{\Gamma\_{p,q}(\beta-\nu)}\, t^{\beta-\nu-1} \right] \\ &\quad + \frac{\mathbb{Q}^\*[F\_u]}{\Omega} \left[ \mathbf{A}\_T\, \frac{\Gamma\_{p,q}(\beta)}{\Gamma\_{p,q}(\beta-\nu)}\, t^{\beta-\nu-1} - \frac{\left( \frac{T}{p} \right)^{\beta-1}}{\Gamma\_{p,q}(\alpha+\beta-\nu)}\, t^{\alpha+\beta-\nu-1} \right] \\ &\quad + \frac{\mathbb{P}^\*[F\_u]}{\Omega} \left[ \mathbf{A}\_{\eta}\, \frac{t^{\alpha+\beta-\nu-1}}{\Gamma\_{p,q}(\alpha+\beta-\nu)} - \mathbf{B}\_{\eta}\, \frac{\Gamma\_{p,q}(\beta)}{\Gamma\_{p,q}(\beta-\nu)}\, t^{\beta-\nu-1} \right] \\ &\quad + \frac{1}{p^{\binom{\alpha}{2}+\binom{\beta}{2}+\binom{-\nu}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)\Gamma\_{p,q}(-\nu)} \int\_0^t \int\_0^{\frac{y}{p^{-\nu-1}}} \int\_0^{\frac{x}{p^{\beta-1}}} (t-qy)\_{p,q}^{-\nu-1} \left( \frac{y}{p^{-\nu-1}} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} \\ &\qquad \times F\left[ \frac{s}{p^{\alpha-1}},\, u\left( \frac{s}{p^{\alpha-1}} \right),\, \Psi^{\gamma}\_{p,q}u\left( \frac{s}{p^{\alpha-1}} \right),\, D^{\nu}\_{p,q}u\left( \frac{s}{p^{\alpha-1}} \right) \right] d\_{p,q}s \, d\_{p,q}x \, d\_{p,q}y. \end{split} \tag{27}$$

Thus,

$$\begin{split} \left| (D^{\nu}\_{p,q}\mathcal{A}u)(t) - (D^{\nu}\_{p,q}\mathcal{A}v)(t) \right| &\le \omega \|u-v\|\_{\mathcal{C}} \left( \frac{T}{p} \right)^{-\nu} \frac{\Gamma\_{p,q}(\alpha+\beta)}{\Gamma\_{p,q}(\alpha+\beta-\nu)}\, \mathbf{O}\_T \\ &\quad + (\mathcal{L} + L\_3) \|u-v\|\_{\mathcal{C}}\, \frac{G\, \eta^{\alpha+\beta+\theta}}{\Gamma\_{p,q}(\alpha+\beta+\theta+1)} \left( \frac{T}{p} \right)^{-\nu} \frac{\Gamma\_{p,q}(\alpha+\beta)}{\Gamma\_{p,q}(\alpha+\beta-\nu)}\, \mathbf{O}\_T \\ &\quad + (\mathcal{L} + L\_3) \|u-v\|\_{\mathcal{C}}\, \frac{\left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)} \left( \frac{T}{p} \right)^{-\nu} \frac{\Gamma\_{p,q}(\alpha+\beta)}{\Gamma\_{p,q}(\alpha+\beta-\nu)}\, \mathbf{O}\_{\eta} \\ &\quad + (\mathcal{L} + L\_3) \|u-v\|\_{\mathcal{C}}\, \frac{\left( \frac{T}{p} \right)^{\alpha+\beta-\nu}}{\Gamma\_{p,q}(\alpha+\beta-\nu+1)} \\ &< \mathcal{X}\, \|u-v\|\_{\mathcal{C}}. \end{split} \tag{28}$$

From (26) and (28), we have

$$\|\mathcal{A}u - \mathcal{A}v\|\_{\mathcal{C}} \le \mathcal{X} \|u - v\|\_{\mathcal{C}}.$$

By (*H*4), we conclude that $\mathcal{A}$ is a contraction. Thus, by the Banach fixed point theorem (Lemma 7), $\mathcal{A}$ has a fixed point, which is the unique solution of problem (1) on $I^T\_{p,q}$.
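The contraction mechanism invoked here can be illustrated with a short Picard iteration. The map below, $x \mapsto \tfrac{1}{2}\cos x$ (a contraction on $\mathbb{R}$ with constant $\tfrac{1}{2}$), is a toy example unrelated to problem (1); it only shows the geometric convergence guaranteed by Lemma 7:

```python
import math

def picard(f, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = f(x_n) until the update is smaller than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f is a contraction with constant 1/2 (|f'(x)| <= 1/2), so by the
# Banach fixed point theorem a unique fixed point exists.
f = lambda x: 0.5 * math.cos(x)
x_star = picard(f, x0=0.0)
assert abs(f(x_star) - x_star) < 1e-10
print(f"fixed point ≈ {x_star:.6f}")
```

The error after $n$ steps is bounded by $\mathcal{X}^n/(1-\mathcal{X})$ times the first update, which is why the hypothesis $\mathcal{X} < 1$ in (*H*4) is the decisive condition.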

#### **4. Existence of at Least One Solution**

In this section, we prove the existence of at least one solution to (1). The following lemmas, leading to Schauder's fixed point theorem, are also provided.

**Lemma 8** ([43] Arzelà–Ascoli theorem)**.** *A collection of functions in $C[a,b]$ with the sup norm is relatively compact if and only if it is uniformly bounded and equicontinuous on $[a,b]$.*

**Lemma 9** ([43])**.** *If a set is closed and relatively compact, then it is compact.*

**Lemma 10** ([44] Schauder's fixed point theorem)**.** *Let $(D, d)$ be a complete metric space, $U$ a closed convex subset of $D$, and $T : D \to D$ a map such that the set $\{Tu : u \in U\}$ is relatively compact in $D$. Then, the operator $T$ has at least one fixed point $u^\* \in U$; that is, $Tu^\* = u^\*$.*

**Theorem 2.** *Assume that $F : I^T\_{p,q} \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is continuous and $\varphi : C\left(I^T\_{p,q}, \mathbb{R}\right) \to \mathbb{R}$ is a given functional. Suppose that the following conditions hold:*

(*H*5) *There exists a positive constant $M$ such that for each $t \in I^T\_{p,q}$ and $u\_i \in \mathbb{R}$, $i = 1, 2, 3$,*

$$\left| F[t, u\_1, u\_2, u\_3] \right| \le M.$$

(*H*6) *There exists a positive constant $N$ such that for each $u \in \mathcal{C}$,*

$$|\varphi(u)| \le N.$$

*Then, problem* (1) *has at least one solution on $I^T\_{p,q}$.*

**Proof.** To prove this theorem, we proceed as follows.

**Step I.** Verify that $\mathcal{A}$ maps bounded sets into bounded sets in $B\_R = \{u \in \mathcal{C} : \|u\|\_{\mathcal{C}} \le R\}$. We show that for any $R > 0$ there exists a positive constant $L$ such that $\|\mathcal{A}u\|\_{\mathcal{C}} \le L$ for each $u \in B\_R$. By Lemma 5, for each $t \in I^T\_{p,q}$ and $u \in B\_R$, we have

$$\left| \mathbb{P}^\*[F\_u] \right| \le \frac{M}{p^{\binom{\alpha}{2}+\binom{\beta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)} \int\_0^{\frac{T}{p}} \int\_0^{\frac{x}{p^{\beta-1}}} \left( \frac{T}{p} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} d\_{p,q}s \, d\_{p,q}x \le \frac{M \left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)}, \tag{29}$$

$$\left| \mathbb{Q}^\*[F\_u] \right| \le \frac{GM}{p^{\binom{\alpha}{2}+\binom{\beta}{2}+\binom{\theta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)\Gamma\_{p,q}(\theta)} \int\_0^{\eta} \int\_0^{\frac{y}{p^{\theta-1}}} \int\_0^{\frac{x}{p^{\beta-1}}} (\eta - qy)\_{p,q}^{\theta-1} \left( \frac{y}{p^{\theta-1}} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} d\_{p,q}s \, d\_{p,q}x \, d\_{p,q}y \le \frac{GM\, \eta^{\alpha+\beta+\theta}}{\Gamma\_{p,q}(\alpha+\beta+\theta+1)}. \tag{30}$$

From (29) and (30), we have


$$\begin{split} \left| (\mathcal{A}u)(t) \right| &\le N\mathbf{O}\_T + \frac{GM\, \eta^{\alpha+\beta+\theta}}{\Gamma\_{p,q}(\alpha+\beta+\theta+1)}\, \mathbf{O}\_T + \frac{M \left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)}\, \mathbf{O}\_{\eta} \\ &\quad + \frac{M}{p^{\binom{\alpha}{2}+\binom{\beta}{2}}\Gamma\_{p,q}(\alpha)\Gamma\_{p,q}(\beta)} \int\_0^{\frac{T}{p}} \int\_0^{\frac{x}{p^{\beta-1}}} \left( \frac{T}{p} - qx \right)\_{p,q}^{\beta-1} \left( \frac{x}{p^{\beta-1}} - qs \right)\_{p,q}^{\alpha-1} d\_{p,q}s \, d\_{p,q}x \\ &\le N\mathbf{O}\_T + M \left[ \frac{\mathbf{O}\_T\, G\, \eta^{\alpha+\beta+\theta}}{\Gamma\_{p,q}(\alpha+\beta+\theta+1)} + \frac{\left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)} \left( \mathbf{O}\_{\eta} + 1 \right) \right] \\ &= N\mathbf{O}\_T + M\Theta := L. \end{split} \tag{31}$$

We find that

$$\begin{split} \left| \left( D\_{p,q}^{\nu}\mathcal{A}u \right)(t) \right| &\le N\mathbf{O}\_T \left( \frac{T}{p} \right)^{-\nu} \frac{\Gamma\_{p,q}(\alpha+\beta)}{\Gamma\_{p,q}(\alpha+\beta-\nu)} \\ &\quad + M \left[ \frac{\mathbf{O}\_T\, G\, \eta^{\alpha+\beta+\theta}}{\Gamma\_{p,q}(\alpha+\beta+\theta+1)} \left( \frac{T}{p} \right)^{-\nu} \frac{\Gamma\_{p,q}(\alpha+\beta)}{\Gamma\_{p,q}(\alpha+\beta-\nu)} + \frac{\left( \frac{T}{p} \right)^{\alpha+\beta}}{\Gamma\_{p,q}(\alpha+\beta+1)} \left( \mathbf{O}\_{\eta} + 1 \right) \left( \frac{T}{p} \right)^{-\nu} \frac{\Gamma\_{p,q}(\alpha+\beta)}{\Gamma\_{p,q}(\alpha+\beta-\nu)} \right] \\ &< L. \end{split} \tag{32}$$

Thus, (A*u*)C ≤ *<sup>L</sup>*, which implies that A is uniformly bounded.

**Step II.** Since *F* is continuous, we can conclude that the operator A is continuous on *BR*.

**Step III.** For any $t\_1, t\_2 \in I^T\_{p,q}$ with $t\_1 < t\_2$, we find that

$$\begin{split} \left| (\mathcal{A}u)(t\_1) - (\mathcal{A}u)(t\_2) \right| &\le \frac{\left| t\_2^{\beta-1} - t\_1^{\beta-1} \right|}{|\Omega|} \left[ \mathbf{A}\_T \big( N + \left| \mathbb{Q}^\*[F\_u] \right| \big) + \mathbf{B}\_{\eta} \left| \mathbb{P}^\*[F\_u] \right| \right] \\ &\quad + \frac{\left| t\_2^{\alpha+\beta-1} - t\_1^{\alpha+\beta-1} \right|}{|\Omega|\, \Gamma\_{p,q}(\alpha+\beta)} \left[ \left( \frac{T}{p} \right)^{\beta-1} \big( N + \left| \mathbb{Q}^\*[F\_u] \right| \big) + \mathbf{A}\_{\eta} \left| \mathbb{P}^\*[F\_u] \right| \right] \\ &\quad + \frac{M}{\Gamma\_{p,q}(\alpha+\beta+1)} \left| t\_2^{\alpha+\beta} - t\_1^{\alpha+\beta} \right|, \end{split} \tag{33}$$

and

$$\begin{split} \left| (D\_{p,q}^{\nu}\mathcal{A}u)(t\_2) - (D\_{p,q}^{\nu}\mathcal{A}u)(t\_1) \right| &\le \frac{\left| t\_2^{\beta-\nu-1} - t\_1^{\beta-\nu-1} \right| \Gamma\_{p,q}(\beta)}{|\Omega|\, \Gamma\_{p,q}(\beta-\nu)} \left[ \mathbf{A}\_T \big( N + \left| \mathbb{Q}^\*[F\_u] \right| \big) + \mathbf{B}\_{\eta} \left| \mathbb{P}^\*[F\_u] \right| \right] \\ &\quad + \frac{\left| t\_2^{\alpha+\beta-\nu-1} - t\_1^{\alpha+\beta-\nu-1} \right|}{|\Omega|\, \Gamma\_{p,q}(\alpha+\beta-\nu)} \left[ \left( \frac{T}{p} \right)^{\beta-1} \big( N + \left| \mathbb{Q}^\*[F\_u] \right| \big) + \mathbf{A}\_{\eta} \left| \mathbb{P}^\*[F\_u] \right| \right] \\ &\quad + \frac{M}{\Gamma\_{p,q}(\alpha+\beta-\nu+1)} \left| t\_2^{\alpha+\beta-\nu} - t\_1^{\alpha+\beta-\nu} \right|. \end{split} \tag{34}$$

We see that the right-hand sides of (33) and (34) tend to zero as $|t\_2 - t\_1| \to 0$. This implies that $\mathcal{A}(B\_R)$ is an equicontinuous set, and hence $\mathcal{A}$ is relatively compact on $B\_R$. By the Arzelà–Ascoli theorem (Lemma 8), Lemma 9, and the above steps, we see that $\mathcal{A} : \mathcal{C} \to \mathcal{C}$ is completely continuous. Hence, we conclude from Schauder's fixed point theorem (Lemma 10) that problem (1) has at least one solution.

#### **5. Examples**

In this section, to illustrate our results, we consider some examples.

**Example 1.** *Consider the following fractional* (*p*, *q*)*-integrodifference equation:*

$$\begin{split} D\_{\frac{2}{3},\frac{1}{2}}^{\frac{3}{4}} D\_{\frac{2}{3},\frac{1}{2}}^{\frac{1}{2}} u(t) &= \frac{1}{\left( 100e^2 + t^3 \right)\left( 1 + |u(t)| \right)} \left[ e^{-3t}\left( u^2 + 2|u| \right) + e^{-\left( \pi + \sin^2 \pi t \right)} \left| \Psi\_{\frac{2}{3},\frac{1}{2}}^{\frac{1}{3}} u(t) \right| + e^{-\left( 2\pi + \cos^2 \pi t \right)} \left| D\_{\frac{2}{3},\frac{1}{2}}^{\frac{2}{3}} u(t) \right| \right], \\ &\qquad t \in I\_{\frac{2}{3},\frac{1}{2}}^{10} = \left\{ \frac{10\left( \frac{1}{2} \right)^k}{\left( \frac{2}{3} \right)^{k+1}} : k \in \mathbb{N}\_0 \right\} \cup \{0\} \end{split} \tag{35}$$

*with periodic fractional* (*p*, *q*)*-integral boundary condition*

$$u(0) = u(15), \qquad \mathcal{I}\_{\frac{2}{3},\frac{1}{2}}^{\frac{2}{3}} \left( 2e + \sin\left( \frac{1215}{256} \right) \right)^2 u\left( \frac{1215}{256} \right) = \sum\_{i=0}^{\infty} \frac{C\_i |u(t\_i)|}{1 + |u(t\_i)|}, \quad t\_i = \sigma\_{\frac{2}{3},\frac{1}{2}}^{i}(10), \tag{36}$$

*where $C\_i$ are given constants with $\frac{1}{1000} \le \sum\_{i=0}^{\infty} C\_i \le \frac{e}{1000}$, and $\phi(t,s) = \frac{e^{-|t-s|}}{(t+e)^3}$.*
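The points of the time scale $I^{10}\_{\frac{2}{3},\frac{1}{2}}$ and the nonlocal point $\eta$ can be reproduced exactly with rational arithmetic. The sketch below assumes $t\_i = q^i T / p^{i+1}$, which is the formula generating the set in (35):

```python
from fractions import Fraction

p, q, T = Fraction(2, 3), Fraction(1, 2), Fraction(10)

def sigma(i):
    """i-th point of the (p,q) time scale: t_i = q^i * T / p^(i+1)."""
    return q**i * T / p**(i + 1)

# The first point is T/p = 15, matching the boundary condition u(0) = u(15).
assert sigma(0) == Fraction(15)
# The nonlocal point eta = sigma^4(10) = 1215/256.
assert sigma(4) == Fraction(1215, 256)
print([float(sigma(i)) for i in range(5)])
# → [15.0, 11.25, 8.4375, 6.328125, 4.74609375]
```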

Letting $\alpha = \frac{3}{4}$, $\beta = \frac{1}{2}$, $\gamma = \frac{1}{3}$, $\nu = \frac{2}{3}$, $\theta = \frac{2}{3}$, $p = \frac{2}{3}$, $q = \frac{1}{2}$, $T = 10$, $\eta = \sigma^4\_{\frac{2}{3},\frac{1}{2}}(10) = \frac{1215}{256}$, $g(t) = (2e + \sin t)^2$ and

$$F\left[ t, u(t), \Psi^{\gamma}\_{p,q}u(t), D^{\nu}\_{p,q}u(t) \right] = \frac{1}{\left( 100e^2 + t^3 \right)\left( 1 + |u(t)| \right)} \left[ e^{-3t}\left( u^2 + 2|u| \right) + e^{-\left( \pi + \sin^2 \pi t \right)} \left| \Psi\_{\frac{2}{3},\frac{1}{2}}^{\frac{1}{3}} u(t) \right| + e^{-\left( 2\pi + \cos^2 \pi t \right)} \left| D\_{\frac{2}{3},\frac{1}{2}}^{\frac{2}{3}} u(t) \right| \right].$$

Using the above values, we find that

$$\phi\_0 = 0.0498, \quad |\mathbf{A}\_T| = 2.06344, \quad |\mathbf{A}\_\eta| \le 264.588, \quad |\mathbf{B}\_\eta| \le 196.777 \text{ and } \ |\Omega| \ge 283.525.$$

For all $t \in I^{10}\_{\frac{2}{3},\frac{1}{2}}$ and $u, v \in \mathbb{R}$, we find that

$$\begin{split} &\left| F\left[ t, u, \Psi^{\gamma}\_{p,q}u, D^{\nu}\_{p,q}u \right] - F\left[ t, v, \Psi^{\gamma}\_{p,q}v, D^{\nu}\_{p,q}v \right] \right| \\ &\quad \le \frac{1}{100e^2} |u - v| + \frac{1}{100e^{2+\pi}} \left| \Psi^{\gamma}\_{p,q}u - \Psi^{\gamma}\_{p,q}v \right| + \frac{1}{100e^{2+2\pi}} \left| D^{\nu}\_{p,q}u - D^{\nu}\_{p,q}v \right|. \end{split}$$

Thus, (*H*1) holds with $L\_1 = 0.001353$, $L\_2 = 5.848 \times 10^{-5}$ and $L\_3 = 2.5273 \times 10^{-6}$, so $\mathcal{L} = 0.00136$.

For all *u*, *v* ∈ C,

$$\left| \varphi(u) - \varphi(v) \right| \le \frac{e}{1000} \|u - v\|\_{\mathcal{C}}.$$

Thus, (*H*2) holds with $\omega = 0.002718$. In addition, (*H*3) holds with $g = 19.6831$ and $G = 41.42935$. Since

$$\mathbf{O}\_T = 0.001885, \quad \mathbf{O}\_{\eta} = 2.10484 \quad \text{and} \quad \Theta = 89.5277,$$

therefore, (*H*4) holds with

$$\mathcal{X} = 0.121989 < 1.$$

Hence, by Theorem 1, this problem has a unique solution.
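Condition (*H*4) can be re-derived by direct arithmetic from the constants reported above; the sketch below simply recombines the rounded values of $\omega$, $\mathbf{O}\_T$, $\mathcal{L}$, $L\_3$ and $\Theta$ (so the result agrees with $\mathcal{X} = 0.121989$ only up to rounding of the inputs):

```python
# Constants reported in Example 1 (rounded values taken from the text).
omega, O_T = 0.002718, 0.001885
L_script, L3 = 0.00136, 2.5273e-6   # L_script stands for the constant L in (20)
Theta = 89.5277

# (H4): X = omega*O_T + (L + L3)*Theta must be smaller than 1.
X = omega * O_T + (L_script + L3) * Theta
assert X < 1
print(f"X = {X:.6f}")
```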

**Example 2.** *Consider the following fractional* (*p*, *q*)*-integrodifference equation:*

$$D\_{\frac{2}{3},\frac{1}{2}}^{\frac{3}{4}} D\_{\frac{2}{3},\frac{1}{2}}^{\frac{1}{2}} u(t) = \frac{1}{10}\left( t + \frac{1}{3} \right) e^{-\left( t + \frac{1}{10} \right)\left( |u(t)| + \left| \Psi\_{\frac{2}{3},\frac{1}{2}}^{\frac{1}{3}} u(t) \right| + \left| D\_{\frac{2}{3},\frac{1}{2}}^{\frac{1}{4}} u(t) \right| \right)}, \quad t \in I\_{\frac{2}{3},\frac{1}{2}}^{10} \tag{37}$$

*with periodic fractional* (*p*, *q*)*-integral boundary condition*

$$u(0) = u(15), \qquad \mathcal{I}\_{\frac{2}{3},\frac{1}{2}}^{\frac{2}{3}} \left( 2e + \sin\left( \frac{1215}{256} \right) \right)^2 u\left( \frac{1215}{256} \right) = \sum\_{i=0}^{\infty} D\_i\, e^{-|u(t\_i)|}, \quad t\_i = \sigma\_{\frac{2}{3},\frac{1}{2}}^{i}(10), \tag{38}$$

*where $D\_i$ are given constants with $\frac{1}{500} \le \sum\_{i=0}^{\infty} D\_i \le \frac{e}{500}$.*

Letting $\alpha = \frac{3}{4}$, $\beta = \frac{1}{2}$, $\gamma = \frac{1}{3}$, $\nu = \frac{1}{4}$, $\theta = \frac{2}{3}$, $p = \frac{2}{3}$, $q = \frac{1}{2}$, $T = 10$ and $\eta = \frac{1215}{256}$, it is clear that $\left| F\left[ t, u, \Psi^{\gamma}\_{p,q}u, D^{\nu}\_{p,q}u \right] \right| \le \frac{23}{15} = M$ for $t \in I^{10}\_{\frac{2}{3},\frac{1}{2}}$, and $|\varphi(u)| \le \frac{e}{500} = N$ for $u \in \mathcal{C}$. Thus, we conclude from Theorem 2 that our problem has at least one solution.

#### **6. Conclusions**

A fractional (*p*, *q*)-integrodifference equation with periodic fractional (*p*, *q*)-integral boundary condition (1) is studied. Our problem contains three fractional (*p*, *q*)-difference operators and two fractional (*p*, *q*)-integral operators. We establish conditions for the existence and uniqueness of a solution to problem (1) by using the Banach fixed point theorem; this result is given in Theorem 1. We also establish conditions for the existence of at least one solution by using Schauder's fixed point theorem; this result is given in Theorem 2. The choice between Theorems 1 and 2 depends on which assumptions are satisfied. The main results are illustrated by numerical examples. Some properties of the fractional (*p*, *q*)-integral needed in our study are also discussed. The results of the paper are new and enrich the subject of boundary value problems for fractional (*p*, *q*)-difference equations. In future work, we may extend this study to new boundary value problems.

**Author Contributions:** Conceptualization, J.S. and T.S.; methodology, J.S. and T.S.; formal analysis, J.S. and T.S.; funding acquisition, J.S. and T.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by King Mongkut's University of Technology North Bangkok, Contract no. KMUTNB-62-KNOW-22.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors would like to express their gratitude to anonymous referees for very helpful suggestions and comments which led to improvements of our original manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Global Stability Condition for the Disease-Free Equilibrium Point of Fractional Epidemiological Models †**

**Ricardo Almeida ‡, Natália Martins \*,‡ and Cristiana J. Silva ‡**

Center for Research and Development in Mathematics and Applications (CIDMA), Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal; ricardo.almeida@ua.pt (R.A.); cjoaosilva@ua.pt (C.J.S.)

**\*** Correspondence: natalia@ua.pt

† Dedicated to Professor Delfim F. M. Torres on the occasion of his 50th birthday.

‡ These authors contributed equally to this work.

**Abstract:** In this paper, we present a new result that allows for studying the global stability of the disease-free equilibrium point when the basic reproduction number is less than 1, in the fractional calculus context. The method only involves basic linear algebra and can be easily applied to study global asymptotic stability. After proving some auxiliary lemmas involving the Mittag–Leffler function, we present the main result of the paper. Under some assumptions, we prove that the disease-free equilibrium point of a fractional differential system is globally asymptotically stable. We then exemplify the procedure with some epidemiological models: a fractional-order SEIR model with classical incidence function, a fractional-order SIRS model with a general incidence function, and a fractional-order model for HIV/AIDS.

**Keywords:** epidemiology; mathematical modeling; fractional calculus; equilibrium; stability

#### **1. Introduction**

Fractional differential equations play an important role in modeling real-life phenomena. By replacing an integer-order derivative with a real-order fractional derivative, we can often fit the system of equations to real data more efficiently, because many dynamical systems cannot be completely described by ODEs. To mention a few applications, we refer to bioengineering [1,2], biology [3], Lévy motion [4], harmonic oscillators with damping [5], economy [6,7], and engineering [8,9]. In this work, we are particularly interested in applications of fractional calculus to epidemiological models. This topic has been intensively studied in the recent past, from the well-posedness of the problem and the existence of equilibrium points to the modeling and forecasting of epidemiological systems. For example, Refs. [10–12] proposed fractional epidemiological models to study the spread of COVID-19 in different countries, Refs. [13,14] investigated HIV infection, Ref. [15] considered a varicella outbreak in China, the spread of a dengue fever outbreak in the Cape Verde islands was studied in [16], and a fractional measles model was proposed in [17]. Stability studies were given in, e.g., [13,18–21] and numerical methods in, e.g., [22–24]. We also refer to [25], where a review of several fractional epidemiological models was carried out.

An important problem is the study of the global stability of the equilibrium points, in order to better understand the evolution of the disease over time: the system will evolve to the equilibrium point independently of the starting point. The study of local stability is a relatively simple matter, as it usually involves finding the eigenvalues of the Jacobian matrix and studying the sign of their real parts. However, the question of global stability is, in many cases, not simple to answer, as it usually involves constructing suitable Lyapunov-like functions, and there is no general routine for finding them. We emphasize here the fact that the use of Lyapunov stability theory to establish global asymptotic stability for fractional differential equations is more complicated than for ODEs (see, e.g., [26–29]).

**Citation:** Almeida, R.; Martins, N.; Silva, C.J. Global Stability Condition for the Disease-Free Equilibrium Point of Fractional Epidemiological Models. *Axioms* **2021**, *10*, 238. https://doi.org/10.3390/ axioms10040238

Academic Editor: Ioannis Dassios

Received: 23 July 2021 Accepted: 17 September 2021 Published: 25 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

This was the motivation to investigate a new method to study global asymptotic stability for dynamical systems described by fractional differential equations. In [30], a novel method was presented: by writing the system in matricial form and analyzing the matrices involved in such a procedure, under some simple assumptions, one can ensure that the equilibrium point is globally stable if the basic reproduction number $\mathcal{R}_0$ is less than 1. The aim of this work is to generalize the main result of [30] to the fractional setting. To our knowledge, this is the first work on global stability following this approach to the problem.

The paper is organized as follows. In Section 2, we present some concepts and known results needed for this work. Section 3 presents the new contributions of this paper. After deducing some auxiliary lemmas, we prove the main result of this work, Theorem 2. Under some assumptions that can be easily verified for a wide range of epidemiological models given by fractional differential systems, we prove that, if the basic reproduction number is less than 1, then the equilibrium point is globally asymptotically stable. Lastly, in Section 4, we present three examples to show the utility of our research.

#### **2. Preliminaries**

We begin this section with some basic definitions and results of the fractional calculus needed in this work. For more details, we refer the reader to [31,32].

Throughout the text, $\alpha \in \left]0, 1\right[$ and $\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t} \, dt$, $z > 0$, is the Gamma function.

**Definition 1.** *Let $f : \mathbb{R}_0^+ \to \mathbb{R}$ be an integrable function. The (left-sided) Riemann–Liouville fractional integral of function $f$ of order $\alpha$ is given by*

$$\mathbb{I}\_{0+}^{\alpha}f(t) := \frac{1}{\Gamma(\alpha)} \int\_{0}^{t} (t-\tau)^{\alpha-1} f(\tau)d\tau, \quad t > 0.$$

**Definition 2.** *The (left-sided) Caputo fractional derivative of order $\alpha$ of function $f \in C^1(\mathbb{R}_0^+, \mathbb{R})$ is defined by*

$$\, ^C \mathbb{D}\_{0+}^{\alpha} f(t) := \frac{1}{\Gamma(1-\alpha)} \int\_0^t (t-\tau)^{-\alpha} f'(\tau) \, d\tau, \quad t > 0.$$

Next, we recall the definition of the generalized Mittag–Leffler function, which is a special function that generalizes the standard exponential function. The Mittag–Leffler function is of great importance in fractional calculus because it arises naturally in the solution of fractional-order differential and integral equations.

**Definition 3.** *The Mittag–Leffler function with two parameters is defined by*

$$E_{\alpha, \beta}(t) := \sum_{k=0}^{\infty} \frac{t^k}{\Gamma(\alpha k + \beta)}, \quad t \in \mathbb{C},$$

*with α*, *β* ≥ 0*. When β* = 1*, we define the one parameter Mittag–Leffler function Eα*(*t*) := *Eα*,1(*t*)*.*

To understand the theory of fractional differential equations, one needs to know properties of these special functions. Its main properties and applications can be found, for example, in [33]. We emphasize here the fact that *Eα*,*β*(*t*) can take negative values (cf. [34]).
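As a quick numerical illustration of these special functions (a sketch of ours, not part of the original paper), the series in Definition 3 can be summed directly for moderate arguments; the classical identity $E_2(-t^2) = \cos(t)$ then exhibits the negative values of the Mittag–Leffler function mentioned above:

```python
import math

def mittag_leffler(alpha, beta, t, kmax=300, tol=1e-16):
    """Evaluate E_{alpha,beta}(t) = sum_{k>=0} t^k / Gamma(alpha*k + beta)
    by direct series summation (adequate only for moderate |t|)."""
    total = 0.0
    for k in range(kmax):
        x = alpha * k + beta
        if x > 170:  # math.gamma overflows beyond ~171
            break
        term = t**k / math.gamma(x)
        total += term
        if k > 0 and abs(term) < tol:
            break
    return total

# E_{1,1} is the ordinary exponential: E_{1,1}(1) = e
print(mittag_leffler(1.0, 1.0, 1.0))

# E_2(-t^2) = cos(t), so E_{2,1} takes negative values, e.g. at -pi^2:
print(mittag_leffler(2.0, 1.0, -math.pi**2))  # close to cos(pi) = -1
```

Direct summation is reliable only for moderate $|t|$; dedicated algorithms (such as those surveyed in [35] for the matrix case) are needed for large arguments.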

Recently, we have observed an increasing interest in the Mittag–Leffler function for matrix arguments, since the solution of many systems of differential equations of noninteger order can be expressed using this matrix function. For theoretical properties and a survey on numerical approximation of the matrix Mittag–Leffler function, we recommend the recent paper [35] and the references cited therein.

**Definition 4.** *Given $A \in \mathbb{C}^{n \times n}$, the matrix Mittag–Leffler function with two parameters is defined through the convergent series*

$$E\_{\alpha,\beta}(A) := \sum\_{k=0}^{\infty} \frac{A^k}{\Gamma(\alpha k + \beta)}$$

*where α*, *β* ≥ 0*. If β* = 1*, we define the one parameter matrix Mittag–Leffler function Eα*(*A*) := *Eα*,1(*A*)*.*

**Remark 1.** *For $\alpha = \beta = 1$, the matrix Mittag–Leffler function is the matrix exponential, that is, $E_{1,1}(A) = \exp(A) = \sum_{k=0}^{\infty} \frac{A^k}{k!}$. Unfortunately, as noticed in [36], there are several works where some properties of the matrix exponential were incorrectly extended to the matrix Mittag–Leffler function and then used to solve certain linear matrix fractional differential equations. One of the properties that cannot be extended to the matrix Mittag–Leffler function is the semigroup property: for given commuting matrices $A$ and $B$, in general, we have $E_{\alpha}(A + B) \neq E_{\alpha}(A) \cdot E_{\alpha}(B)$. We note, however, that, if matrices $A$ and $B$ commute and $\alpha \approx 1$, then $E_{\alpha}(A + B) \approx E_{\alpha}(A) \cdot E_{\alpha}(B)$.*

We recall now two properties of the matrix Mittag–Leffler function that are useful in the present work (see [35]):


To finalize this section, we review some concepts on matrix theory.

**Definition 5.** *We say that a square matrix A is an M-matrix if the off-diagonal entries are nonpositive and the real parts of all eigenvalues are nonnegative.*

Given a square matrix *A*, the set of eigenvalues of *A* is denoted by *σ*(*A*). The spectral bound of matrix *A* is defined as *m*(*A*) = max{Re(*λ*) : *λ* ∈ *σ*(*A*)}, where Re(*λ*) denotes the real part of *λ*, and the spectral radius of *A* is defined as *ρ*(*A*) = max{|*λ*| : *λ* ∈ *σ*(*A*)}.

The following result is a fundamental tool in the proof of Lemma 4.

**Lemma 1** ([36])**.** *Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ be a diagonalizable matrix of order 2 with eigenvalues $\lambda_1 = \frac{(a+d)-\Omega}{2}$ and $\lambda_2 = \frac{(a+d)+\Omega}{2}$, where $\Omega := \sqrt{(a-d)^2 + 4bc}$. Let $e_1 := E_{\alpha,\beta}(\lambda_1)$ and $e_2 := E_{\alpha,\beta}(\lambda_2)$. If $\Omega$ and $c$ are not zero, the matrix Mittag–Leffler function of matrix $A$ is*

$$E_{\alpha,\beta}(A) = \frac{1}{2\Omega} \begin{bmatrix} (d-a)(e_1 - e_2) + \Omega(e_1 + e_2) & -2b(e_1 - e_2) \\ -2c(e_1 - e_2) & -(d-a)(e_1 - e_2) + \Omega(e_1 + e_2) \end{bmatrix}.$$

We remark that, in Lemma 1, if Ω = 0 or *c* = 0, a simple formula for the Mittag–Leffler function of a diagonalizable matrix of order 2 can be easily obtained.
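The closed form of Lemma 1 can be cross-checked against the defining series of Definition 4. The sketch below is our own illustration, using a hypothetical test matrix with $\Omega, c \neq 0$ and plain-Python $2 \times 2$ arithmetic:

```python
import math

def ml_scalar(alpha, beta, t, kmax=300, tol=1e-16):
    """E_{alpha,beta}(t) by direct series summation (moderate |t| only)."""
    total = 0.0
    for k in range(kmax):
        x = alpha * k + beta
        if x > 170:  # math.gamma overflows beyond ~171
            break
        term = t**k / math.gamma(x)
        total += term
        if k > 0 and abs(term) < tol:
            break
    return total

def ml_2x2_lemma1(alpha, beta, a, b, c, d):
    """E_{alpha,beta}([[a, b], [c, d]]) via the closed form of Lemma 1
    (assumes a diagonalizable matrix with Omega != 0 and c != 0)."""
    omega = math.sqrt((a - d) ** 2 + 4 * b * c)
    e1 = ml_scalar(alpha, beta, ((a + d) - omega) / 2)
    e2 = ml_scalar(alpha, beta, ((a + d) + omega) / 2)
    f = 1.0 / (2 * omega)
    return [[f * ((d - a) * (e1 - e2) + omega * (e1 + e2)), f * (-2 * b * (e1 - e2))],
            [f * (-2 * c * (e1 - e2)), f * (-(d - a) * (e1 - e2) + omega * (e1 + e2))]]

def ml_2x2_series(alpha, beta, A, kmax=120):
    """Reference evaluation: truncated series sum_k A^k / Gamma(alpha*k + beta)."""
    total = [[0.0, 0.0], [0.0, 0.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]  # running power A^k
    for k in range(kmax):
        x = alpha * k + beta
        if x > 170:
            break
        g = math.gamma(x)
        for i in range(2):
            for j in range(2):
                total[i][j] += P[i][j] / g
        P = [[P[0][0] * A[0][0] + P[0][1] * A[1][0], P[0][0] * A[0][1] + P[0][1] * A[1][1]],
             [P[1][0] * A[0][0] + P[1][1] * A[1][0], P[1][0] * A[0][1] + P[1][1] * A[1][1]]]
    return total

A = [[-1.0, 2.0], [1.0, -3.0]]  # hypothetical test matrix (Omega and c nonzero)
print(ml_2x2_lemma1(0.9, 0.9, -1.0, 2.0, 1.0, -3.0))
print(ml_2x2_series(0.9, 0.9, A))  # the two evaluations should agree closely
```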

**Theorem 1** (cf. [37])**.** *Let $A \in \mathbb{R}^{n \times n}$. If the spectrum of $A$ satisfies the relation*

$$\sigma(A) \subseteq \left\{ \lambda \in \mathbb{C} \setminus \{0\} : |\arg(\lambda)| > \frac{\alpha \pi}{2} \right\},$$

*then* $\lim_{t \to \infty} E_{\alpha}(At^{\alpha}) = 0$.

#### **3. Main Results**

Suppose that the epidemiological model under study is described by the fractional differential system

$$\begin{cases} \ ^\mathsf{C}\mathbb{D}\_{0+}^{a}X(t) = F(X,I) \\\\ \ ^\mathsf{C}\mathbb{D}\_{0+}^{a}I(t) = G(X,I) \\\\ G(X,0) = 0 \end{cases} \tag{1}$$

with nonnegative initial conditions $X(0) = X_0 \in \mathbb{R}^m$ and $I(0) = I_0 \in \mathbb{R}^n$, where the components of the vector $X$ denote the number of uninfected individuals (e.g., susceptible, recovered, vaccinated, etc.) and the components of $I$ denote the number of infected and infectious individuals (the ones that can transmit the disease, such as the asymptomatic but infectious and the active infected). In addition, we assume that function $F$ is continuous, $G$ is of class $C^1$, and the fractional differential system (1) with initial conditions $X(0) = X_0$ and $I(0) = I_0$ admits a unique solution.

Throughout this paper, we denote by $U_0 = (X^{\star}, 0) \in \mathbb{R}^{m+n}$ the disease-free equilibrium (DFE) point of the system (1), that is, $F(X^{\star}, 0) = G(X^{\star}, 0) = 0$.

Let $A := \frac{\partial G}{\partial I}(X^{\star}, 0)$ and assume that matrix $A$ can be written in the form $A = M - D$, where $M$ and $D$ are two square matrices with $M \geq 0$ (all entries are nonnegative) and $D > 0$ a diagonal matrix. The following result was proven in [38]:

$$m(A) < 0 \quad \text{if and only if} \quad \rho(MD^{-1}) < 1,$$

or

$$m(A) > 0 \quad \text{if and only if} \quad \rho(MD^{-1}) > 1.$$

The value

$$\mathcal{R}\_0 := \rho(MD^{-1})$$

plays an important role in epidemiological models, and it is known as the basic reproduction number. This number gives the average number of secondary cases produced by one infected individual in a population where all individuals are susceptible to the infection.
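The equivalence between the sign of the spectral bound $m(A)$ and the position of $\rho(MD^{-1})$ relative to 1 is easy to check numerically on a toy decomposition $A = M - D$ (a sketch of ours, with hypothetical numbers chosen for illustration only):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]] (real-spectrum case)."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

# Hypothetical decomposition A = M - D:
M = [[0.0, 2.0], [1.0, 0.0]]   # M >= 0 entrywise
Ddiag = [1.0, 3.0]             # D > 0 diagonal
A = [[M[0][0] - Ddiag[0], M[0][1]],
     [M[1][0], M[1][1] - Ddiag[1]]]

# spectral bound m(A) = max of the (real) eigenvalues
m_A = max(eig2(A[0][0], A[0][1], A[1][0], A[1][1]))

# R0 = rho(M D^{-1}); right-multiplying by D^{-1} rescales the columns of M
MD = [[M[i][j] / Ddiag[j] for j in range(2)] for i in range(2)]
R0 = max(abs(x) for x in eig2(MD[0][0], MD[0][1], MD[1][0], MD[1][1]))

print(m_A, R0)  # m(A) < 0 occurs together with R0 < 1, as stated above
```

Here $m(A) = -2 + \sqrt{3} < 0$ and $\mathcal{R}_0 = \sqrt{2/3} < 1$, consistent with the equivalence quoted from [38].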

The following result is well known in the literature. For the convenience of the reader, we present here one possible proof that follows from the fact that the scalar Mittag–Leffler function is completely monotonic ([39,40]).

**Lemma 2.** *For $0 < \alpha < 1$, $E_{\alpha,\alpha}(t) \geq 0$, for all $t \in \mathbb{R}$.*

**Proof.** Clearly, *Eα*,*α*(*t*) ≥ 0, for all *t* ≥ 0. To prove that *Eα*,*α*(−*t*) ≥ 0, for all *t* > 0, we use the fact that the scalar Mittag–Leffler function *Eα*(−*t*), *t* ≥ 0, is completely monotonic, that is,

$$(-1)^m \frac{d^m}{dt^m} E_{\alpha}(-t) \geq 0, \quad \forall m \in \mathbb{N}. \tag{2}$$

Since

$$\alpha \frac{d}{dt} E_{\alpha}(-t) = -E_{\alpha,\alpha}(-t), \quad t \geq 0,$$

it follows from (2) that

$$E_{\alpha,\alpha}(-t) \geq 0, \quad t \geq 0.$$

This completes the proof.

The following result is also useful in this work.

**Lemma 3.** *For* <sup>0</sup> < *<sup>α</sup>* < <sup>1</sup>*, Eα*,*<sup>α</sup>* : R → R *is an increasing function.*

**Proof.** It is clear that $E_{\alpha,\alpha}$ is an increasing function on $\mathbb{R}_0^+$. Now, we prove that $\frac{d}{dt}E_{\alpha,\alpha}(t) \geq 0$, for all $t \in \mathbb{R}_0^-$. Since

$$\frac{d}{dt}E_{\alpha,\alpha}(t) = \alpha \frac{d^2}{dt^2}E_{\alpha}(t) = \sum_{k=1}^{\infty} \frac{k\,t^{k-1}}{\Gamma(k\alpha + \alpha)},$$

from (2), we conclude that $\frac{d}{dt}E_{\alpha,\alpha}(t) \geq 0$, proving the desired result.

Now, we prove the following lemma that shows the applicability of our main result (Theorem 2).

**Lemma 4.** *Let $A \in \mathbb{R}^{2 \times 2}$ be a matrix and $0 < \alpha < 1$. If matrix $A$ is diagonalizable and $-A$ is an M-matrix, then $E_{\alpha,\alpha}(A) \geq 0$.*

**Proof.** With our assumptions and using the notations from Lemma 1, we have that $b, c, \Omega \in \mathbb{R}_0^+$, $\lambda_1, \lambda_2 \in \mathbb{R}_0^-$, and $\lambda_2 \geq \lambda_1$. First, suppose that $\Omega \neq 0$ and $c \neq 0$. Hence, from Lemmas 2 and 3, we conclude that $e_2 := E_{\alpha,\alpha}(\lambda_2) \geq e_1 := E_{\alpha,\alpha}(\lambda_1) \geq 0$. It remains to be proved that

$$\Omega(e_1 + e_2) \geq -(d - a)(e_1 - e_2) \quad \text{and} \quad \Omega(e_1 + e_2) \geq (d - a)(e_1 - e_2).$$

Suppose that *d* ≥ *a* (the other case is similar). Then, we just need to prove the first inequality. Since both sides of the inequality are nonnegative, we have that

$$[(a-d)^2 + 4bc](e\_1 + e\_2)^2 \ge (d-a)^2(e\_1 - e\_2)^2,$$

which is equivalent to

$$\begin{aligned} (a-d)^2 e_1^2 + (a-d)^2 e_2^2 &+ 2(a-d)^2 e_1 e_2 + 4bc(e_1+e_2)^2 \\ &\geq (a-d)^2 e_1^2 + (a-d)^2 e_2^2 - 2(a-d)^2 e_1 e_2, \end{aligned}$$

proving the desired inequality. Now, we suppose that $c = 0$ and $a \neq d$. If $a < d$, then we get

$$E_{\alpha,\alpha}(A) = \begin{bmatrix} e_1 & \frac{b}{a-d}(e_1 - e_2) \\ 0 & e_2 \end{bmatrix} \geq 0,$$

and, if *a* > *d*, then

$$E_{\alpha,\alpha}(A) = \begin{bmatrix} e_2 & \frac{b}{a-d}(e_2 - e_1) \\ 0 & e_1 \end{bmatrix} \geq 0.$$

If *c* = 0 and *a* = *d*, then

$$E_{\alpha,\alpha}(A) = \begin{bmatrix} e_1 & 0 \\ 0 & e_1 \end{bmatrix} \geq 0,$$

since $b = 0$ (otherwise, $A$ is not diagonalizable). If $\Omega = 0$, the proof is trivial since, in this case, $A$ is diagonalizable if and only if $a = d$ and $b = c = 0$.

The following result is a fundamental tool in the proof of Theorem 2.

**Lemma 5.** *Let $B \in \mathbb{R}^{n \times n}$ be an invertible matrix and $H : \mathbb{R}^{m+n} \to \mathbb{R}^n$ be a continuous function. Suppose that the fractional differential equation*

$$^C\mathbb{D}\_{0+}^{\alpha}I(t) = B \cdot I(t) - H(X(t), I(t))$$

*with initial condition $I(0) = I_0 \in \mathbb{R}^n$, has a unique solution. Then, the solution of this initial value problem satisfies*

$$I(t) = E_{\alpha}(Bt^{\alpha}) \cdot I_0 - \int_0^t B^{-1} \cdot \frac{d}{ds} E_{\alpha}(Bs^{\alpha}) \cdot H(X(t-s), I(t-s)) \, ds.$$

**Proof.** The proof follows the ideas from [41] (Theorem 7.2). First, observe that

$${}^{C}\mathbb{D}_{0+}^{\alpha} \left( E_{\alpha}(Bt^{\alpha}) \cdot I_0 \right) = B \cdot E_{\alpha}(Bt^{\alpha}) \cdot I_0.$$

To compute

$${}^{C}\mathbb{D}_{0+}^{\alpha} \left( \int_0^t B^{-1} \cdot \frac{d}{ds} E_{\alpha}(Bs^{\alpha}) \cdot H(X(t-s), I(t-s)) \, ds \right),$$

let

$$\overline{y}(t) = \int_0^t B^{-1} \cdot \frac{d}{ds} E_{\alpha}(Bs^{\alpha}) \cdot H(X(t-s), I(t-s)) \, ds.$$

Since

$$B^{-1} \cdot \frac{d}{ds} E_{\alpha}(Bs^{\alpha}) = \alpha s^{\alpha-1} E_{\alpha}'(Bs^{\alpha}) = \alpha s^{\alpha-1} \sum_{k=1}^{\infty} \frac{k(Bs^{\alpha})^{k-1}}{\Gamma(k\alpha+1)} = \sum_{k=1}^{\infty} \frac{B^{k-1} s^{\alpha k - 1}}{\Gamma(k\alpha)},$$

we get the following:

$$\begin{split} \overline{y}(t) &= \sum_{k=1}^{\infty} \frac{B^{k-1}}{\Gamma(k\alpha)} \int_0^t s^{\alpha k-1} \, H(X(t-s), I(t-s)) \, ds \\ &= \sum_{k=1}^{\infty} \frac{B^{k-1}}{\Gamma(k\alpha)} \int_0^t (t-\tau)^{\alpha k-1} \, H(X(\tau), I(\tau)) \, d\tau \\ &= \sum_{k=1}^{\infty} B^{k-1} \cdot \mathbb{I}_{0+}^{\alpha k} H(X(t), I(t)). \end{split}$$

Thus,

$$\begin{split} {}^{C}\mathbb{D}_{0+}^{\alpha} \overline{y}(t) &= \sum_{k=1}^{\infty} B^{k-1} \cdot {}^{C}\mathbb{D}_{0+}^{\alpha} \mathbb{I}_{0+}^{\alpha k} H(X(t), I(t)) = \sum_{k=1}^{\infty} B^{k-1} \cdot \mathbb{I}_{0+}^{\alpha(k-1)} H(X(t), I(t)) \\ &= \sum_{k=0}^{\infty} B^{k} \cdot \mathbb{I}_{0+}^{\alpha k} H(X(t), I(t)) = H(X(t), I(t)) + \sum_{k=1}^{\infty} B^{k} \cdot \mathbb{I}_{0+}^{\alpha k} H(X(t), I(t)) \\ &= H(X(t), I(t)) + B \cdot \overline{y}(t). \end{split}$$

Hence, we may conclude that

$$\begin{aligned} {}^{C}\mathbb{D}_{0+}^{\alpha}I(t) &= B \cdot E_{\alpha}(Bt^{\alpha}) \cdot I_0 - H(X(t), I(t)) - B \cdot \overline{y}(t) \\ &= B \cdot \left( E_{\alpha}(Bt^{\alpha}) \cdot I_0 - \overline{y}(t) \right) - H(X(t), I(t)) \\ &= B \cdot I(t) - H(X(t), I(t)), \end{aligned}$$

proving the desired result.

We are now in a position to present a new global stability condition for the DFE of system (1) when $\mathcal{R}_0 < 1$. Knowing that the equilibrium point is globally asymptotically stable with respect to the system that describes the evolution of the uninfected individuals, and with some extra assumptions related to the matrices involved in the system associated with the infected and infectious individuals, we can conclude that the equilibrium point is in fact globally asymptotically stable with respect to the complete system. Although the result imposes some restrictions in order to be applied, for many epidemiological models it can be easily used, as we will illustrate in Section 4.

**Theorem 2.** *Suppose that*


*If $\mathcal{R}_0 < 1$, then the DFE, $U_0 = (X^{\star}, 0)$, is a globally asymptotically stable equilibrium of system (1), for all $0 < \alpha < 1$.*

**Proof.** First, observe that, since $\mathcal{R}_0 < 1$, then $m(A) < 0$ (see [38]), and so matrix $A$ is invertible. Since

$${}^{C}\mathbb{D}_{0+}^{\alpha}I(t) = G(X(t), I(t)) = A \cdot I(t) - \widehat{G}(X(t), I(t)),$$

then, by Lemma 5, we get

$$0 \leq I(t) = E_{\alpha}(At^{\alpha}) \cdot I_0 - \int_0^t A^{-1} \frac{d}{ds} E_{\alpha}(As^{\alpha}) \cdot \widehat{G}(X(t-s), I(t-s)) \, ds \leq E_{\alpha}(At^{\alpha}) \cdot I_0,$$

since

$$A^{-1} \frac{d}{ds} E_{\alpha}(As^{\alpha}) = \alpha s^{\alpha-1} E_{\alpha}'(As^{\alpha}) = s^{\alpha-1} E_{\alpha,\alpha}(As^{\alpha}) \geq 0,$$

Since the real parts of the eigenvalues of matrix *A* are negative, from Theorem 1, we get

$$\lim_{t \to \infty} \left\| E_{\alpha}(At^{\alpha}) \right\| = 0,$$

and hence

$$\lim\_{t \to \infty} I(t) = 0.$$

Since $X^{\star}$ is globally asymptotically stable with respect to ${}^{C}\mathbb{D}_{0+}^{\alpha}X(t) = F(X, 0)$, which in turn is the limiting system of ${}^{C}\mathbb{D}_{0+}^{\alpha}X(t) = F(X, I)$, then we get

$$\lim_{t \to \infty} X(t) = X^{\star},$$

which completes the proof.

**Remark 2.** *Note that the assumption $E_{\alpha,\alpha}(A) \geq 0$ in Theorem 2 is trivially satisfied if matrix $A$ has dimension 1 or 2 (by Lemmas 2 and 4, respectively). In addition, if $\alpha \approx 1$, since $E_{\alpha,\alpha}(A) \approx \exp(A)$, the above condition also holds for any matrix $A \in \mathbb{R}^{n \times n}$.*

#### **4. Examples**

In this section, we illustrate our main result, Theorem 2, by considering three Caputo fractional-order compartmental models and show that the disease-free equilibrium is globally stable in all cases, whenever $\mathcal{R}_0 < 1$. We stress that usually (see, e.g., [21]) the global stability of the disease-free equilibrium is proved by considering an appropriate Lyapunov function and LaSalle's invariance principle [42], which are often difficult to apply, especially when the model has a considerable number of variables and parameters. Our main result allows us to prove the global stability of the disease-free equilibrium in an easier and simpler way. For the numerical implementation of the fractional derivatives, we have used the Adams–Bashforth–Moulton scheme, implemented in the Matlab code fde12 by Garrappa [43].

The code implements the predictor–corrector PECE method of Adams–Bashforth–Moulton type described in [44]. We fixed a time step size of $h = 2^{-6}$ and consider, without loss of generality, the fractional-order derivatives $\alpha \in \{0.8, 0.85, 0.9, 0.95, 1.0\}$.
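For readers without access to Matlab, the same PECE scheme of [44] can be sketched in a few lines of Python for a scalar Caputo equation (this compact sketch is ours; fde12 [43] is the vectorized, production-quality implementation):

```python
import math

def fde_pece(alpha, f, y0, T, N):
    """Predictor-corrector Adams-Bashforth-Moulton (PECE) scheme for the
    scalar Caputo problem  D^alpha y(t) = f(t, y(t)),  y(0) = y0, on [0, T]."""
    h = T / N
    t = [i * h for i in range(N + 1)]
    y = [y0] * (N + 1)
    fy = [f(t[0], y0)]                      # history of f(t_j, y_j)
    c1 = h**alpha / math.gamma(alpha + 1)   # predictor weight scale
    c2 = h**alpha / math.gamma(alpha + 2)   # corrector weight scale
    for n in range(N):
        # predictor: fractional rectangle rule
        yp = y0 + c1 * sum(((n + 1 - j)**alpha - (n - j)**alpha) * fy[j]
                           for j in range(n + 1))
        # corrector: fractional trapezoidal rule
        acc = (n**(alpha + 1) - (n - alpha) * (n + 1)**alpha) * fy[0]
        for j in range(1, n + 1):
            acc += ((n - j + 2)**(alpha + 1) + (n - j)**(alpha + 1)
                    - 2 * (n - j + 1)**(alpha + 1)) * fy[j]
        y[n + 1] = y0 + c2 * (acc + f(t[n + 1], yp))
        fy.append(f(t[n + 1], y[n + 1]))
    return t, y

# Example: D^alpha y = -y, y(0) = 1, whose exact solution is E_alpha(-t^alpha)
ts, ys = fde_pece(0.9, lambda t, y: -y, 1.0, 1.0, 200)
print(ys[-1])
```

With $f(t, y) = -y$ and $y(0) = 1$, the computed solution approaches the exact $E_{\alpha}(-t^{\alpha})$ as the step size decreases; for systems, the same recursion is applied componentwise.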

#### *4.1. A Fractional SEIR Model with Traditional Incidence Rate*

We start by considering a Caputo fractional-order version of the classical SEIR model that has been applied to describe the transmission dynamics of infectious diseases where there exists a significant latency period during which the individuals are infected but not yet infectious. During this period, the individual is in the so-called *exposed* compartment *E*, see e.g., [45]. The other compartments of the model are *susceptible S*, *infected I*, and *recovered R*, and each of them denotes a fraction of the total population. The following assumptions are considered: the birth and death rates are assumed to be equal, and denoted by *μ*; the incidence rate is the traditional one, given by *βSI*, where *β* represents the transmission rate; the latent period is denoted by *ε*; infected individuals recover at a rate *γ* and remain recovered with permanent immunity. All parameters are assumed to be positive. The model is given by the following system:

$$\begin{cases} ^\mathbb{C}\mathbb{D}\_{0+}^{\alpha}S(t) = \mu - \mu S(t) - \beta S(t)I(t),\\ ^\mathbb{C}\mathbb{D}\_{0+}^{\alpha}E(t) = -(\varepsilon + \mu)E(t) + \beta S(t)I(t),\\ ^\mathbb{C}\mathbb{D}\_{0+}^{\alpha}I(t) = \varepsilon E(t) - (\gamma + \mu)I(t),\\ ^\mathbb{C}\mathbb{D}\_{0+}^{\alpha}R(t) = \gamma I(t) - \mu R(t). \end{cases} \tag{3}$$

The disease-free equilibrium of the model (3) is given by

$$
\Sigma_0 = \left( S^0, E^0, I^0, R^0 \right) = \left( 1, 0, 0, 0 \right).
$$

Following the notation from Section 3, we have

$$A = \begin{bmatrix} -\varepsilon - \mu & \beta \\ \varepsilon & -\gamma - \mu \end{bmatrix}.$$

The matrix *A* can be written as *A* = *M* − *D* with

$$M = \begin{bmatrix} 0 & \beta \\\\ \varepsilon & 0 \end{bmatrix}$$

and

$$D = \begin{bmatrix} \varepsilon + \mu & 0\\ 0 & \gamma + \mu \end{bmatrix}.$$

The point $X^{\star} = (1, 0)$ is globally asymptotically stable for the system of uninfected individuals:

$$\begin{cases} ^\mathbb{C}\mathbb{D}_{0+}^{\alpha}S(t) = \mu - \mu S(t),\\ ^\mathbb{C}\mathbb{D}_{0+}^{\alpha}R(t) = -\mu R(t). \end{cases} \tag{4}$$

It is easy to verify that the function

$$R(t) = R_0 \, E_{\alpha}(-\mu t^{\alpha})$$

satisfies the second equation of (4). From [41] (Theorem 7.2), the solution of the first equation of (4) is the function

$$S(t) = S_0 \, E_{\alpha}(-\mu t^{\alpha}) + \int_0^t \mu s^{\alpha-1} E_{\alpha,\alpha}(-\mu s^{\alpha}) \, ds.$$

Simple computations lead to

$$S(t) = S_0 E_{\alpha}(-\mu t^{\alpha}) - E_{\alpha}(-\mu t^{\alpha}) + 1.$$

Thus,

$$(S(t), R(t)) \to (1, 0) \quad \text{as} \quad t \to \infty.$$
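The "simple computations" above amount to the identity $\int_0^t \mu s^{\alpha-1} E_{\alpha,\alpha}(-\mu s^{\alpha}) \, ds = 1 - E_{\alpha}(-\mu t^{\alpha})$, which can also be verified numerically; a minimal sketch of ours, using the substitution $u = s^{\alpha}$ to remove the integrable singularity at $s = 0$:

```python
import math

def ml(alpha, beta, t, kmax=300, tol=1e-16):
    """E_{alpha,beta}(t) by direct series summation (moderate |t| only)."""
    total = 0.0
    for k in range(kmax):
        x = alpha * k + beta
        if x > 170:  # math.gamma overflows beyond ~171
            break
        term = t**k / math.gamma(x)
        total += term
        if k > 0 and abs(term) < tol:
            break
    return total

def lhs_integral(alpha, mu, t, n=2000):
    """int_0^t mu * s^(alpha-1) * E_{alpha,alpha}(-mu s^alpha) ds, computed
    after the substitution u = s^alpha, with composite Simpson's rule."""
    b = t**alpha
    h = b / n                     # n must be even for Simpson's rule
    g = lambda u: (mu / alpha) * ml(alpha, alpha, -mu * u)
    s = g(0.0) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3

alpha, mu, t = 0.9, 1 / 80, 5.0   # illustrative values of ours
print(lhs_integral(alpha, mu, t), 1 - ml(alpha, 1.0, -mu * t**alpha))
```

The two printed numbers should agree to quadrature accuracy, confirming the closed form of $S(t)$ above.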

In addition, by Lemma 4, $E_{\alpha,\alpha}(A)$ is nonnegative, and so, by Theorem 2, the disease-free equilibrium of the model (3) is globally asymptotically stable.

Consider the following parameter values: $\mu = 1/80$, $\beta = 0.05$, $\gamma = 1$, and $\varepsilon = 1$. Then, the eigenvalues of the matrix $A$ are $-0.7888$ and $-1.2361$; therefore, $m(A) < 0$. Moreover, we confirm that $\mathcal{R}_0 := \rho(MD^{-1}) = 0.2893 < 1$.

Through adequate numerical simulations, we illustrate the global stability of the disease-free equilibrium, whenever $\mathcal{R}_0 < 1$, considering different values of $\alpha$ and different initial conditions. In Figure 1, we consider different values for $\alpha$ and the initial condition $x_0$ from (5).

**Figure 1.** Stability of the disease-free equilibrium $\Sigma_0 = (1, 0, 0, 0)$, for the SEIR fractional model (3), considering different values of $\alpha \in \{0.8, 0.85, 0.9, 0.95, 1.0\}$. On the left: $S$. On the right: $E + I + R$.

The global stability of the disease-free equilibrium $\Sigma_0 = (1, 0, 0, 0)$ is illustrated in Figure 2, considering different initial conditions $x_i$, $i = 0, \dots, 7$, given by (5):

$$\begin{array}{l} x_0 = (S_{0,0}, E_{0,0}, I_{0,0}, R_{0,0}) = (0.3, 0.5, 0.1, 0.1), \\ x_1 = (S_{0,1}, E_{0,1}, I_{0,1}, R_{0,1}) = (0.4, 0.1, 0.3, 0.2), \\ x_2 = (S_{0,2}, E_{0,2}, I_{0,2}, R_{0,2}) = (0.5, 0.05, 0.4, 0.05), \\ x_3 = (S_{0,3}, E_{0,3}, I_{0,3}, R_{0,3}) = (0.6, 0.1, 0.2, 0.1), \\ x_4 = (S_{0,4}, E_{0,4}, I_{0,4}, R_{0,4}) = (0.7, 0.05, 0.1, 0.15), \\ x_5 = (S_{0,5}, E_{0,5}, I_{0,5}, R_{0,5}) = (0.8, 0.1, 0.1, 0.0), \\ x_6 = (S_{0,6}, E_{0,6}, I_{0,6}, R_{0,6}) = (0.85, 0.05, 0.1, 0.0), \\ x_7 = (S_{0,7}, E_{0,7}, I_{0,7}, R_{0,7}) = (0.95, 0.025, 0.025, 0.0). \end{array} \tag{5}$$

**Figure 2.** Global stability of the disease-free equilibrium $\Sigma_0 = (1, 0, 0, 0)$, for the fractional model (3), considering $\alpha = 0.9$ and different initial conditions $x_i$, $i = 0, \dots, 7$, from (5). On the left: $S$. On the right: $E + I + R$.

#### *4.2. A Fractional SIRS Model with General Incidence Rate*

In the second example, we consider the Caputo fractional-order version of the classical *SIRS* model from [21], given by the following system:

$$\begin{cases} ^C\mathbb{D}\_{0+}^{\mathfrak{a}}S(t) = \Lambda - \mu S(t) - \frac{\beta I(t)S(t)}{1 + k\_1 S(t) + k\_2 I(t) + k\_3 S(t)I(t)} + \lambda R(t), \\ ^C\mathbb{D}\_{0+}^{\mathfrak{a}}I(t) = \frac{\beta I(t)S(t)}{1 + k\_1 S(t) + k\_2 I(t) + k\_3 S(t)I(t)} - (\mu + r)I(t), \\ ^C\mathbb{D}\_{0+}^{\mathfrak{a}}R(t) = rI(t) - (\mu + \lambda)R(t). \end{cases} \tag{6}$$

The model considers a homogeneous population divided into three subgroups: susceptible individuals $S(t)$, infected and infectious individuals $I(t)$, and recovered individuals $R(t)$, at time $t$. The parameters $\Lambda$, $\beta$, $\mu$, and $r$ represent the recruitment rate of the population, the infection rate, the natural death rate, and the recovery rate of the infected individuals, respectively. The rate at which recovered individuals lose immunity and return to the susceptible class is represented by $\lambda$. While in contact with infected individuals, the susceptible become infected at the incidence rate

$$\frac{\beta SI}{1 + k_1 S + k_2 I + k_3 SI},$$

where *k*1, *k*2, and *k*<sup>3</sup> are nonnegative constants [21]. We remark that system (6) admits a unique positive solution (see [21] (Theorem 7)).

The disease-free equilibrium of the model (6) is given by

$$
\Sigma_0 = \left(S^0, I^0, R^0\right) = \left(\frac{\Lambda}{\mu}, 0, 0\right).
$$

In this case, following the notation from Section 3,

$$A = M - D = \left[\frac{\Lambda \beta}{\Lambda k_1 + \mu} - \mu - r\right]$$

with

$$M = \left[\frac{\Lambda \beta}{\Lambda k\_1 + \mu}\right] \quad \text{and} \quad D = [\mu + r].$$

Hence,

$$\mathcal{R}\_0 = \rho (MD^{-1}) = \frac{\Lambda \beta}{(\mu + r)(\Lambda k\_1 + \mu)}.$$

We easily conclude that *A* < 0 whenever R<sup>0</sup> < 1.

In what follows, we prove that the first condition of Theorem 2 holds, that is, *X* = (Λ/*μ*, 0) is globally asymptotically stable for the system of uninfected individuals:

$$\begin{cases} ^C\mathbb{D}\_{0+}^{\alpha}S(t) = \Lambda - \mu S(t) + \lambda R(t),\\ ^C\mathbb{D}\_{0+}^{\alpha}R(t) = -(\mu + \lambda)R(t). \end{cases} \tag{7}$$

The solutions of the system of fractional differential equations (7) are the functions

$$R(t) = R\_0 \, E\_{\alpha}(-(\mu + \lambda)t^{\alpha})$$

and

$$S(t) = S\_0 E\_{\alpha}(-\mu t^{\alpha}) + \int\_0^t \left[\Lambda + \lambda R\_0 E\_{\alpha}(-(\mu + \lambda)(t - s)^{\alpha})\right] s^{\alpha-1} E\_{\alpha, \alpha}(-\mu s^{\alpha}) \, ds.$$

Obviously *R*(*t*) → 0, as *t* goes to infinity. Now, we prove that *S*(*t*) → Λ/*μ*, as *t* → ∞. First, observe that

$$\begin{split} \int\_{0}^{t} s^{\alpha-1} E\_{\alpha, \alpha}(-\mu s^{\alpha}) \, ds &= \sum\_{k=0}^{\infty} \frac{(-\mu)^{k}}{\Gamma(k\alpha+\alpha)} \int\_{0}^{t} s^{k\alpha+\alpha-1} \, ds = \sum\_{k=0}^{\infty} \frac{(-\mu)^{k}}{\Gamma(k\alpha+\alpha+1)} t^{k\alpha+\alpha} \\ &= -\frac{1}{\mu} \sum\_{k=0}^{\infty} \frac{(-\mu)^{k+1}}{\Gamma((k+1)\alpha+1)} t^{(k+1)\alpha} = -\frac{1}{\mu} \left(E\_{\alpha}(-\mu t^{\alpha}) - 1\right). \end{split}$$
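This closed form can be checked numerically by truncating the Mittag–Leffler series; the helper names below are mine, and the truncation length and quadrature grid are ad hoc choices:

```python
import math

def ml(alpha, beta, z, terms=40):
    # Truncated two-parameter Mittag-Leffler series E_{alpha,beta}(z).
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

def lhs(alpha, mu, t, n=10_000):
    # Midpoint rule for the integral of s^(alpha-1) * E_{alpha,alpha}(-mu s^alpha).
    h = t / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += s ** (alpha - 1) * ml(alpha, alpha, -mu * s ** alpha)
    return total * h

alpha, mu, t = 0.9, 0.5, 1.0
rhs = (1.0 - ml(alpha, 1.0, -mu * t ** alpha)) / mu
print(lhs(alpha, mu, t), rhs)  # the two values agree to quadrature accuracy
```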

For the other term inside the integral, we get

$$\begin{aligned} \int\_0^t E\_{\alpha}(-(\mu+\lambda)(t-s)^{\alpha}) s^{\alpha-1} E\_{\alpha,\alpha}(-\mu s^{\alpha}) \, ds \\ &= \sum\_{m=0}^\infty \sum\_{k=0}^\infty \frac{(-(\mu+\lambda))^m (-\mu)^k}{\Gamma(m\alpha+1)\Gamma(k\alpha+\alpha)} \int\_0^t (t-s)^{m\alpha} s^{k\alpha+\alpha-1} \, ds. \end{aligned}$$

To evaluate this integral, we use the known formula involving the Beta function:

$$B(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x + y)}, \quad x, y > 0.$$

With the change of variable *u* = *s*/*t*, we get

$$\begin{split} \int\_{0}^{t} (t-s)^{m\alpha} s^{k\alpha+\alpha-1} \, ds &= t^{m\alpha} \int\_{0}^{t} (1-s/t)^{m\alpha} s^{k\alpha+\alpha-1} \, ds = t^{m\alpha+k\alpha+\alpha} \int\_{0}^{1} (1-u)^{m\alpha} u^{k\alpha+\alpha-1} \, du \\ &= t^{m\alpha+k\alpha+\alpha} B(k\alpha+\alpha, m\alpha+1) = t^{m\alpha+k\alpha+\alpha} \frac{\Gamma(k\alpha+\alpha) \Gamma(m\alpha+1)}{\Gamma(m\alpha+k\alpha+\alpha+1)}. \end{split}$$
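As a quick numerical sanity check of the Beta-function identity used here (helper names are mine, and the test arguments are arbitrary positive values):

```python
import math

def beta_numeric(x, y, n=100_000):
    # Midpoint rule for B(x, y) = integral over [0, 1] of u^(x-1) (1-u)^(y-1) du.
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** (x - 1) * (1 - (i + 0.5) * h) ** (y - 1)
               for i in range(n)) * h

def beta_gamma(x, y):
    # Right-hand side of the identity: Gamma(x) Gamma(y) / Gamma(x + y).
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

print(beta_numeric(2.8, 1.9), beta_gamma(2.8, 1.9))  # the two values agree
```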

Thus, we conclude that the solution *S*(·) is given by

$$S(t) = S\_0 E\_{\alpha}(-\mu t^{\alpha}) + \frac{\Lambda}{\mu} \left(1 - E\_{\alpha}(-\mu t^{\alpha})\right) + \lambda R\_0 \sum\_{m=0}^{\infty} \sum\_{k=0}^{\infty} \frac{(-(\mu + \lambda))^m (-\mu)^k}{\Gamma(m\alpha + k\alpha + \alpha + 1)} t^{m\alpha + k\alpha + \alpha}.$$

Observe that, as *t* goes to infinity,

$$S\_0 E\_{\alpha}(-\mu t^{\alpha}) \to 0 \quad \text{and} \quad \frac{\Lambda}{\mu} \left(1 - E\_{\alpha}(-\mu t^{\alpha})\right) \to \frac{\Lambda}{\mu}.$$

To sum up, it remains to prove that the double sum converges to zero. For that purpose, we recall the concept of the Mittag–Leffler function of two variables (cf. [46]) *Eα*,*β*(*x*, *y*, ·). With such notation, we can write

$$\sum\_{m=0}^{\infty} \sum\_{k=0}^{\infty} \frac{(-(\mu + \lambda))^m (-\mu)^k}{\Gamma(m\alpha + k\alpha + \alpha + 1)} t^{m\alpha + k\alpha + \alpha} = t^{\alpha} E\_{\alpha, \alpha}(-(\mu + \lambda) t^{\alpha}, -\mu t^{\alpha}, \alpha + 1),$$

which converges to zero, as *t* goes to infinity, by [46] (Theorem 3.1). This proves the desired conclusion. We also remark that, by Lemma 2, *Eα*,*α*(*A*) is nonnegative. Therefore, by Theorem 2, the disease-free equilibrium of the model (6) is globally asymptotically stable.

Considering the parameter values from [21], Λ = 0.8, *μ* = 0.1, *λ* = 0.5, *β* = 0.1, *r* = 0.5, *k*1 = 0.1, *k*2 = 0.02, and *k*3 = 0.003, we have R<sup>0</sup> = 0.7407 < 1. For the initial conditions, we consider the following values, chosen without any specific criteria:
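The value R<sup>0</sup> = 0.7407 follows directly from the formula derived above; note that *k*2 and *k*3 do not enter R<sup>0</sup>:

```python
# Direct evaluation of R0 = Lam*beta / ((mu + r)(Lam*k1 + mu))
# for the parameter values quoted from [21].
Lam, mu, lam, beta, r = 0.8, 0.1, 0.5, 0.1, 0.5
k1, k2, k3 = 0.1, 0.02, 0.003  # k2, k3 appear in the model but not in R0

R0 = Lam * beta / ((mu + r) * (Lam * k1 + mu))
print(round(R0, 4))  # 0.7407
```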

$$\begin{aligned} y\_0 &= (S\_{0,0}, I\_{0,0}, R\_{0,0}) = (10, 1, 1), & y\_1 &= (S\_{0,1}, I\_{0,1}, R\_{0,1}) = (100, 10, 5), \\ y\_2 &= (S\_{0,2}, I\_{0,2}, R\_{0,2}) = (200, 20, 10), & y\_3 &= (S\_{0,3}, I\_{0,3}, R\_{0,3}) = (300, 30, 20), \\ y\_4 &= (S\_{0,4}, I\_{0,4}, R\_{0,4}) = (400, 40, 50). \end{aligned} \tag{8}$$

The stability of the disease-free equilibrium of model (6) is illustrated in Figure 3.

**Figure 3.** Stability of the disease-free equilibrium Σ<sup>0</sup> = (Λ/*μ* = 8, 0, 0) for the SIRS fractional model (6). On the left: *I* + *R*, considering different values of *α* ∈ {0.8, 0.85, 0.9, 0.95, 1.0} and initial condition *y*<sup>1</sup> from (8). On the right: *S* and *I* + *R*, considering the initial conditions *yi*, *i* = 0, ... , 4, from (8) and fixed *α* = 0.9.
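Although simulating a genuinely fractional model requires a dedicated solver (e.g., a predictor–corrector scheme), the boundary case *α* = 1 of (6) is a classical ODE system, so a plain forward-Euler sketch can cross-check the convergence to Σ<sup>0</sup> = (8, 0, 0) shown in Figure 3; the step size and time horizon below are my own choices:

```python
# Forward-Euler integration of model (6) in the classical case alpha = 1,
# with the parameter values of Section 4.2.
Lam, mu, lam, beta, r = 0.8, 0.1, 0.5, 0.1, 0.5
k1, k2, k3 = 0.1, 0.02, 0.003

def simulate(S, I, R, dt=0.01, T=1000.0):
    for _ in range(int(T / dt)):
        inc = beta * I * S / (1 + k1 * S + k2 * I + k3 * S * I)
        dS = Lam - mu * S - inc + lam * R
        dI = inc - (mu + r) * I
        dR = r * I - (mu + lam) * R
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

S, I, R = simulate(100.0, 10.0, 5.0)  # initial condition y1 from (8)
print(S, I, R)  # approaches the disease-free equilibrium (8, 0, 0)
```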

#### *4.3. A Modified Fractional SICA Model for HIV/AIDS*

In this example, we consider a modified Caputo fractional-order model for HIV/AIDS, based on the model proposed in [14,47]. We show that this fractional model satisfies the conditions of Theorem 2 and, through some numerical simulations, we illustrate the global stability of the disease-free equilibrium when R<sup>0</sup> < 1.

In this model, the total population is assumed to be homogeneous and divided into four mutually exclusive compartments: susceptible individuals (*S*); HIV-infected individuals with no clinical symptoms of AIDS but able to transmit HIV to other individuals (*I*); HIV-infected individuals under antiretroviral (ART) treatment (the so-called chronic stage) with a viral load remaining low (*C*); and HIV-infected individuals with AIDS clinical symptoms (*A*). Analogously to the assumption made in [47], we consider that individuals in the chronic stage *C* have a very low viral load and do not transmit HIV infection [48]; but, differently from [47], we assume that individuals with AIDS *A*, due to their higher viral load, may transmit HIV at a rate *η<sup>A</sup>* *β* with *η<sup>A</sup>* > 1. Therefore, effective contact with people infected with HIV occurs at a rate *λ*, given by

$$
\lambda = \beta (I + \eta\_A A),
$$

where *β* is the effective contact rate for HIV transmission. We assume that the recruitment rate is equal to the natural death rate and is denoted by *μ*. The following assumptions are the same as in [14]. HIV-infected individuals with no AIDS symptoms *I* progress to the class of individuals with HIV infection under ART treatment *C*, at a rate *φ*, and HIV-infected individuals with AIDS symptoms are treated for HIV at rate *γ*. Individuals in the class *C* leave for the class *I*, at a rate *ω*. HIV-infected individuals with AIDS symptoms *A* that start treatment move to the class of HIV-infected individuals *I*, moving to the chronic class *C* only if the treatment is maintained. HIV-infected individuals with no AIDS symptoms *I* that do not take ART treatment progress to the AIDS class *A*, at rate *ρ*. Only HIV-infected individuals with AIDS symptoms *A* suffer from an AIDS induced death, at a rate *d*. The total population at time *t*, denoted by *N*(*t*), is given by *N*(*t*) = *S*(*t*) + *I*(*t*) + *C*(*t*) + *A*(*t*). The Caputo fractional-order system that describes the previous assumptions is given by

$$\begin{cases} {}^C\mathbb{D}\_{0+}^{\alpha}S(t) = \mu - \beta(I(t) + \eta\_A A(t))S(t) - \mu S(t), \\ {}^C\mathbb{D}\_{0+}^{\alpha}I(t) = \beta(I(t) + \eta\_A A(t))S(t) - (\rho + \phi + \mu)I(t) + \omega C(t) + \gamma A(t), \\ {}^C\mathbb{D}\_{0+}^{\alpha}C(t) = \phi I(t) - (\omega + \mu)C(t), \\ {}^C\mathbb{D}\_{0+}^{\alpha}A(t) = \rho I(t) - (\gamma + \mu + d)A(t). \end{cases} \tag{9}$$

The disease-free equilibrium of system (9) is given by

$$
\Sigma\_0 = \left(S^0, I^0, C^0, A^0\right) = (1, 0, 0, 0). \tag{10}
$$

Using the notation from Section 3, we have

$$A = \begin{bmatrix} \beta - \rho - \phi - \mu & \beta \eta\_A + \gamma \\ \rho & -\gamma - \mu - d \end{bmatrix}.$$

The matrix *A* can be written as *A* = *M* − *D* with

$$M = \begin{bmatrix} \beta & \beta \eta\_A + \gamma \\\\ \rho & 0 \end{bmatrix}$$

and

$$D = \begin{bmatrix} \rho + \phi + \mu & 0\\ 0 & \gamma + \mu + d \end{bmatrix}.$$

In this case, we will prove that *X* = (1, 0) is globally asymptotically stable for the system of uninfected individuals:

$$\begin{cases} {}^C\mathbb{D}\_{0+}^{\alpha}S(t) = \mu - \mu S(t),\\ {}^C\mathbb{D}\_{0+}^{\alpha}C(t) = -(\omega + \mu) C(t). \end{cases} \tag{11}$$


The solution of (11) is

$$C(t) = C\_0 E\_{\alpha}(-(\omega + \mu) t^{\alpha}),$$

and

$$S(t) = S\_0 E\_{\alpha}(-\mu t^{\alpha}) + \int\_0^t \mu s^{\alpha - 1} E\_{\alpha, \alpha}(-\mu s^{\alpha}) \, ds.$$
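For completeness, combining this expression with the value of the integral of *s*<sup>α−1</sup>*E*<sub>α,α</sub>(−*μs*<sup>α</sup>) computed earlier for the SIRS model gives the limit explicitly (here the recruitment rate equals *μ*, so the equilibrium value is 1):

```latex
S(t) = S_0\,E_\alpha(-\mu t^\alpha)
     + \mu\cdot\tfrac{1}{\mu}\bigl(1 - E_\alpha(-\mu t^\alpha)\bigr)
     = 1 + (S_0 - 1)\,E_\alpha(-\mu t^\alpha)
     \xrightarrow[t\to\infty]{} 1 = S^0 .
```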

Similarly to Section 4.1, we can prove that the disease-free equilibrium of the model (9) is globally asymptotically stable.

Let us consider the parameter values from Table 1.


**Table 1.** Parameter values of model (9) corresponding to R<sup>0</sup> = 0.1863 < 1.

Then, the eigenvalues of the matrix *A* are −0.9612 and −1.4474; therefore, *m*(*A*) < 0. Moreover, we confirm that R<sup>0</sup> := *ρ*(*MD*<sup>−1</sup>) = 0.1863 < 1.
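Since Table 1 is not reproduced in this text, the computation of *m*(*A*) and R<sup>0</sup> can still be illustrated for the 2 × 2 case with hypothetical parameter values (my placeholders, not the paper's; they do not reproduce the quoted numbers, only the sign relation):

```python
import math

# Hypothetical parameter values, chosen only to exercise the 2x2 computation.
beta, eta_A, mu, phi, rho, gamma, d, omega = 0.2, 1.35, 0.014, 1.0, 0.1, 0.33, 1.0, 0.09

M = [[beta, beta * eta_A + gamma], [rho, 0.0]]
D = [[rho + phi + mu, 0.0], [0.0, gamma + mu + d]]
A = [[M[0][0] - D[0][0], M[0][1]], [M[1][0], M[1][1] - D[1][1]]]

def eig2(m):
    # Eigenvalues of a 2x2 matrix via the characteristic polynomial.
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = tr * tr - 4 * det
    s = math.sqrt(abs(disc))
    if disc >= 0:
        return (tr - s) / 2, (tr + s) / 2
    return complex(tr / 2, -s / 2), complex(tr / 2, s / 2)

# D is diagonal, so (M D^{-1})_{ij} = M_{ij} / D_{jj}.
MD_inv = [[M[i][j] / D[j][j] for j in range(2)] for i in range(2)]
mA = max(e.real for e in eig2(A))          # spectral abscissa m(A)
R0 = max(abs(e) for e in eig2(MD_inv))     # spectral radius rho(M D^{-1})
print(mA, R0)  # m(A) < 0 exactly when R0 < 1
```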

We now show, using numerical simulations, that the global stability of the disease-free equilibrium holds for different values of *α* and different initial conditions, whenever R<sup>0</sup> < 1.

In Figure 4, we consider different values of *α* and the initial condition *x*<sup>0</sup> from (12).

**Figure 4.** Stability of the disease-free equilibrium Σ<sup>0</sup> = (1, 0, 0, 0) for the SICA fractional model (9), considering different values of *α* ∈ {0.8, 0.85, 0.9, 0.95, 1.0}. On the left: *S*. On the right: *I* + *C* + *A*.

The global stability of the disease-free equilibrium (10) is illustrated in Figure 5, considering different initial conditions *xi*, *i* = 0, . . . , 7, given by (12).

$$\begin{array}{l} \mathbf{x}\_{0} = (\mathbf{S}\_{0,0}, I\_{0,0}, \mathbf{C}\_{0,0}, A\_{0,0}) = (0.8, 0.1, 0.1, 0), \\ \mathbf{x}\_{1} = (\mathbf{S}\_{0,1}, I\_{0,1}, \mathbf{C}\_{0,1}, A\_{0,1}) = (0.4, 0.2, 0.2, 0.2), \\ \mathbf{x}\_{2} = (\mathbf{S}\_{0,2}, I\_{0,2}, \mathbf{C}\_{0,2}, A\_{0,2}) = (0.7, 0.1, 0.1, 0.1), \\ \mathbf{x}\_{3} = (\mathbf{S}\_{0,3}, I\_{0,3}, \mathbf{C}\_{0,3}, A\_{0,3}) = (0.5, 0.1, 0.2, 0.2), \\ \mathbf{x}\_{4} = (\mathbf{S}\_{0,4}, I\_{0,4}, \mathbf{C}\_{0,4}, A\_{0,4}) = (0.9, 0.05, 0.05, 0), \\ \mathbf{x}\_{5} = (\mathbf{S}\_{0,5}, I\_{0,5}, \mathbf{C}\_{0,5}, A\_{0,5}) = (0.6, 0.2, 0.1, 0.1), \\ \mathbf{x}\_{6} = (\mathbf{S}\_{0,6}, I\_{0,6}, \mathbf{C}\_{0,6}, A\_{0,6}) = (0.55, 0.25, 0.1, 0.1), \\ \mathbf{x}\_{7} = (\mathbf{S}\_{0,7}, I\_{0,7}, \mathbf{C}\_{0,7}, A\_{0,7}) = (0.75, 0.1, 0.1, 0.05). \end{array} \tag{12}$$

**Figure 5.** Global stability of the disease-free equilibrium Σ<sup>0</sup> = (1, 0, 0, 0) for the fractional model (9), considering *α* = 0.9 and different initial conditions *xi*, *i* = 0, ... , 7, from (12). On the *x*-axis: *S*; on the *y*-axis: *I* + *C* + *A*.

#### **5. Conclusions**

A new and simple result on the global stability of the disease-free equilibrium of fractional epidemiological models is presented. We highlight the fact that the approach available in the literature so far involves the determination of an appropriate Lyapunov function, very laborious computations, and, in the end, the application of LaSalle's invariance principle. Our new method uses only basic results from matrix theory and some well-known results on fractional-order differential equations.

We also remark that the applicability of our main result, Theorem 2, is only possible if the matrix *A* satisfies the condition *Eα*,*α*(*A*) ≥ 0. We proved that, under the assumptions of Theorem 2, this condition holds if the matrix *A* has dimension 1 or 2. It would be interesting to determine under what conditions we can guarantee that *Eα*,*α*(*A*) ≥ 0 when the matrix *A* has dimension greater than 2. Since many epidemiological models divide the population into subpopulations of epidemiological significance in which the number of infectious compartments is at most two, our main result can be applied to a wide variety of epidemiological models.

**Author Contributions:** Conceptualization, R.A., N.M., and C.J.S.; methodology, R.A., N.M., and C.J.S.; formal analysis, R.A., N.M., and C.J.S.; writing—original draft preparation, R.A., N.M., and C.J.S.; writing—review and editing, R.A., N.M., and C.J.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** Work is supported by Portuguese funds through the CIDMA-Center for Research and Development in Mathematics and Applications, and the Portuguese Foundation for Science and Technology (FCT-Fundação para a Ciência e a Tecnologia), within project UIDB/04106/2020. Cristiana J. Silva is also supported by FCT via the FCT Researcher Program CEEC Individual 2018 with reference CEECIND/00564/2018.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors are grateful to the two reviewers for their valuable comments, which improved the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **On a Non-Newtonian Calculus of Variations**

**Delfim F. M. Torres**

Center for Research and Development in Mathematics and Applications (CIDMA), Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal; delfim@ua.pt

**Abstract:** The calculus of variations is a field of mathematical analysis born in 1687 with Newton's problem of minimal resistance, which is concerned with the maxima or minima of integral functionals. Finding the solution of such problems leads to solving the associated Euler–Lagrange equations. The subject has found many applications over the centuries, e.g., in physics, economics, engineering and biology. Up to this moment, however, the theory of the calculus of variations has been confined to Newton's approach to calculus. As in many applications negative values of admissible functions are not physically plausible, we propose here to develop an alternative calculus of variations based on the non-Newtonian approach first introduced by Grossman and Katz in the period between 1967 and 1970. This approach provides a calculus defined, from the very beginning, for positive real numbers only, and is based on a (non-Newtonian) derivative that permits one to compare relative changes between a dependent positive variable and an independent variable that is also positive. In this way, the non-Newtonian calculus of variations we introduce here provides a natural framework for problems involving functions with positive images. Our main result is a first-order optimality condition of Euler–Lagrange type. The new calculus of variations complements the standard one in a nontrivial/multiplicative way, guaranteeing that the solution remains in the physically admissible positive range. An illustrative example is given.

**Keywords:** calculus of variations; non-Newtonian calculus; multiplicative integral functionals; multiplicative Euler–Lagrange equations; admissible positive functions

**MSC:** 26A24; 49K05

#### **1. Introduction**

A popular method of creating a new mathematical system is to vary the axioms of a known one. Non-Newtonian calculi provide alternative approaches to the usual calculus of Newton (1643–1727) and Leibniz (1646–1716), which were first introduced by Grossman and Katz (1933–2010) in the period between 1967 and 1970 [1]. The two most popular non-Newtonian calculi are the multiplicative and bigeometric calculi, which in fact are modifications of each other: in these calculi, the addition and subtraction are changed to multiplication and division [2]. Since such multiplicative calculi are variations on the usual calculus, the traditional one is sometimes called the additive calculus [3].

Recently, it has been shown that non-Newtonian/multiplicative calculi are more suitable than the ordinary Newtonian/additive calculus for some problems, e.g., in actuarial science, finance, economics, biology, demography, pattern recognition in images, signal processing, thermostatistics and quantum information theory [3–7]. This is explained by the fact that while the basis for the standard/additive calculus is the representation of a function as locally linear, the basis of a multiplicative calculus is the representation of a function as locally exponential [1,3,7]. In fact, the usefulness of product integration goes back to Volterra (1860–1940), who introduced in 1887 the notion of a product integral and used it to study solutions of differential equations [8,9]. For readers not familiar with product integrals, we refer to the book [10], which contains short biographical sketches of Volterra, Schlesinger and other mathematicians involved in the development of product integrals,

**Citation:** Torres, D.F.M. On a Non-Newtonian Calculus of Variations. *Axioms* **2021**, *10*, 171. https://doi.org/10.3390/ axioms10030171

Academic Editor: Giampiero Palatucci

Received: 12 June 2021 Accepted: 26 July 2021 Published: 29 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

and an extensive list of references, offering a gentle opportunity to become acquainted with the subject of non-Newtonian integration. For our purposes, it is enough to understand that a non-Newtonian calculus is a methodology that allows one to have a different look at problems that can be investigated via calculus: it provides differentiation and integration tools, based on multiplication instead of addition, and in some cases—mainly problems of price elasticity, multiplicative growth, etc.—the use of such multiplicative calculi is preferable to the traditional Newtonian calculus [11–14]. Moreover, a non-Newtonian calculus is a self-contained system, independent of any other system of calculus [15].

The main aim of our work was to obtain, for the first time in the literature, a non-Newtonian calculus of variations that involves the minimization of a functional defined by a non-Newtonian integral with an integrand/Lagrangian depending on the non-Newtonian derivative. The calculus of variations is a field of mathematical analysis that uses, as the name indicates, variations, which are small changes in functions, to find maxima and minima of the considered functionals: mappings from a set of functions to the real numbers. In the non-Newtonian framework, instead of the classical variations of the form *y*(⋅) + *h*(⋅), proposed by Lagrange (1736–1813) and still used nowadays in all recent formulations of the calculus of variations [16–18], for example, in the fractional calculus of variations [19,20], quantum variational calculus [21,22] and the calculus of variations on time scales [23,24], we propose here to use "multiplicative variations". More precisely, in contrast with the calculi of variations found in the literature, we show here, for the first time, how to consider variations of the form *y*(⋅) ⋅ ln *h*(⋅). The functionals of the calculus of variations are expressed as definite integrals, here in a non-Newtonian sense, involving functions and their derivatives, both understood in the non-Newtonian sense. The functions that maximize or minimize the functionals of the calculus of variations are found using the Euler–Lagrange equation, which we prove here in the non-Newtonian setting. Given the importance of the calculus of variations in applications, for example, in physics [25,26], economics [27,28] and biology [29,30], and the importance that non-Newtonian calculus already has in these areas, we trust that the calculus of variations initiated here will attract the attention of the research community.
We recall the quotation found in the 1972 book of Grossman and Katz [1]: "for each successive class of phenomena, a new calculus or a new geometry".

#### **2. Materials and Methods**

From 1967 until 1970, Grossman and Katz gave definitions of new kinds of derivatives and integrals, converting the roles of subtraction and addition into division and multiplication, respectively, and established a new family of calculi, called non-Newtonian calculi [1,31,32], which are akin to the classical calculus developed by Newton and Leibniz three centuries ago. Non-Newtonian calculi use different types of arithmetic and their generators. Let *α* be a bijection between subsets *X* and *Y* of the set of real numbers R, and endow *Y* with the induced sum and multiplication operations and the ordering given by the inverse map *α*<sup>−1</sup>. Then the *α*-arithmetic is a field with the order topology [33]. Concretely, given a bijection *α* ∶ *X* → *Y* ⊆ R, called a generator [15], we say that *α* defines an arithmetic if the following four operations are defined:

$$\begin{aligned} x \oplus y &= \alpha\left(\alpha^{-1}(x) + \alpha^{-1}(y)\right), \\ x \ominus y &= \alpha\left(\alpha^{-1}(x) - \alpha^{-1}(y)\right), \\ x \odot y &= \alpha\left(\alpha^{-1}(x) \cdot \alpha^{-1}(y)\right), \\ x \oslash y &= \alpha\left(\alpha^{-1}(x) / \alpha^{-1}(y)\right). \end{aligned} \tag{1}$$

If *α* is chosen to be the identity function and *X* = R, then (1) reduces to the four operators studied in school; i.e., one gets the standard arithmetic, from which the traditional (Newton–Leibniz) calculus is developed. For other choices of *α* and *X*, we can get an infinitude of other arithmetics, from which Grossman and Katz produced a series of non-Newtonian calculi, compiled in the seminal book of 1972 [1]. Among all such non-Newtonian calculi, great interest has recently been focused on the Grossman–Katz calculus obtained when we

fix *α*(*x*) = *e<sup>x</sup>*, *α*<sup>−1</sup>(*x*) = ln(*x*) and *X* = R<sup>+</sup>, the set of real numbers strictly greater than zero [7,12,14,15]. We shall concentrate here on one option, originally called by Grossman and Katz the geometric/exponential/bigeometric calculus [1,2,13,34–37], from which different terminologies and small variations of the original calculus have grown up in the literature, in particular, the multiplicative calculus [3–5,8,12,15,36,38–40] and, more recently, the proportional calculus [7,11,14,41], which is essentially the bigeometric calculus of [35]. Here we follow closely this last approach—in particular, the exposition of the non-Newtonian calculus as found in [7,14,35]—because it is appealing to scientists who seek ways to express laws in a scale-free form.

Throughout the text, we fix *<sup>α</sup>*(*x*) = *<sup>e</sup>x*, *<sup>α</sup>*−1(*x*) = ln(*x*), and *<sup>X</sup>* <sup>=</sup> <sup>R</sup>+. Then we get from (1) the following operations:

$$\begin{aligned} x \oplus y &= x \cdot y, \\ x \ominus y &= \frac{x}{y}, \\ x \odot y &= x^{\ln(y)}, \\ x \oslash y &= x^{1/\ln(y)}, \quad y \neq 1. \end{aligned} \tag{2}$$
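Since *α* = exp generates these operations, a small numerical sketch can confirm how they behave; the function names below are mine, not from the paper:

```python
import math

# The four operations in (2) for the generator alpha(x) = exp(x).
def oplus(x, y):  return x * y
def ominus(x, y): return x / y
def odot(x, y):   return x ** math.log(y)
def oslash(x, y): return x ** (1 / math.log(y))  # requires y != 1

# e plays the role of "one" (x odot e = x); 1 plays the role of "zero" (x oplus 1 = x).
x, y = 2.0, 3.0
print(odot(x, math.e), oplus(x, 1.0))  # both equal x
print(oslash(odot(x, y), y))           # oslash undoes odot: gives back x
```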

Let *a*, *b*, *c* ∈ R<sup>+</sup>. In the non-Newtonian arithmetic given by (2), the following properties of the ⊙ operation hold (cf. Proposition 2.1 of [14]): *a* ⊙ *b* = *b* ⊙ *a*; (*a* ⊙ *b*) ⊙ *c* = *a* ⊙ (*b* ⊙ *c*); *a* ⊙ *e* = *a*; *a* ⊙ 1 = 1; and *a* ⊙ (*b* ⊕ *c*) = (*a* ⊙ *b*) ⊕ (*a* ⊙ *c*).
We see that in this non-Newtonian algebra, *a* = 1 is the traditional "zero" (in the current arithmetic, 0 represents −∞); further properties of this arithmetic are given in Proposition 2.2 of [14].


Based on the mentioned properties, one easily proves that (R+, <sup>⊕</sup>, ⊙) is a field (see Theorem 2.3 of [14]). In this field, the following calculus has been developed [7,14].

**Definition 1** (absolute value)**.** *The absolute value of x* <sup>∈</sup> <sup>R</sup>+*, denoted by* [[*x*]]*, is given by*

$$[[x]] = \begin{cases} x & \text{if } x \ge 1, \\ 1 \ominus x & \text{if } x \in (0,1). \end{cases}$$

Let *<sup>x</sup>*, *<sup>y</sup>*, *<sup>z</sup>* be positive real numbers and define *<sup>d</sup>* <sup>∶</sup> <sup>R</sup><sup>+</sup> <sup>×</sup> <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> as

$$d(x, y) = [[x \ominus y]].$$
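With *α* = exp, this distance takes a concrete closed form, d(x, y) = max(x/y, y/x), equivalently exp(|ln x − ln y|), which makes its metric-like properties easy to check numerically; the helper names below are mine:

```python
import math

# d(x, y) = [[x ominus y]] for the generator alpha(x) = exp(x):
# [[x/y]] is x/y when x/y >= 1, and y/x otherwise, i.e. max(x/y, y/x).
def d(x, y):
    return max(x / y, y / x)

x, y, z = 2.0, 5.0, 0.3
print(d(x, x))                       # 1 plays the role of "zero" distance
print(d(x, y) == d(y, x))            # symmetry
print(d(x, z) <= d(x, y) * d(y, z))  # multiplicative triangle inequality
```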

The following properties are simple to prove: *d*(*x*, *y*) ≥ 1; *d*(*x*, *y*) = 1 if and only if *x* = *y*; *d*(*x*, *y*) = *d*(*y*, *x*); and *d*(*x*, *z*) ≤ *d*(*x*, *y*) ⊕ *d*(*y*, *z*).
We can now introduce the notion of limit.

**Definition 2** (limit)**.** *We write* lim<sub>*x*→*x*<sub>0</sub></sub> *f*(*x*) = *L*, *L* ∈ R<sup>+</sup>, *if for all ε* > 1 *there exists δ* > 1 *such that* 1 < *d*(*x*, *x*<sub>0</sub>) < *δ implies d*(*f*(*x*), *L*) < *ε.*

According to Definition 2, it is possible to give meaning to the equality lim<sub>*x*→*x*<sub>0</sub></sub> *f*(*x*) = *f*(*x*<sub>0</sub>), and therefore, to the notion of continuity in the non-Newtonian calculus.

**Definition 3** (continuity)**.** *We say that f is continuous at x*<sup>0</sup> *or*

$$\lim\_{x \to x\_0} f(x) = f(x\_0),$$

*if* ∀ *ε* > 1, ∃ *δ* > 1 *such that d*(*x*, *x*<sub>0</sub>) < *δ* ⇒ *d*(*f*(*x*), *f*(*x*<sub>0</sub>)) < *ε.*

We proceed by reviewing the essentials on non-Newtonian differentiation and integration.

#### *2.1. Derivatives*

The derivative of a function is introduced in the following terms.

**Definition 4** (derivative [7,14,35,41])**.** *A positive function f is differentiable at x*<sup>0</sup> *if*

$$\lim\_{x\to x\_{0}}\left[(f(x)\ominus f(x\_{0}))\oslash(x\ominus x\_{0})\right] = \lim\_{h\to 1}\left[(f(x\_{0}\oplus h)\ominus f(x\_{0}))\oslash h\right]$$

*exists. In this case, the limit is denoted by* ̃*<sup>f</sup>* (*x*0) *and receives the name of derivative of <sup>f</sup> at <sup>x</sup>*0*. Moreover, we say that f is differentiable if f is differentiable at x*<sup>0</sup> *for all x*<sup>0</sup> *in the domain of f .*

It is not difficult to prove that if *f* is differentiable at *x*0, then *f* is continuous at *x*0. Define

$$\begin{aligned} x^{\{0\}} &= e, \\ x^{\{n\}} &= \underbrace{x \odot \dots \odot x}\_{n \text{ times}}, \quad n \in \mathbb{N}. \end{aligned}$$

We have

$$\widetilde{x^{\{n\}}} = e^{n} \odot x^{\{n-1\}}, \quad n \in \mathbb{N}. \tag{3}$$

In particular, if *n* = 1 in (3), we get:

• If *f*(*x*) = *x*, then ̃*f*(*x*) = *e*.
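Definition 4 can be probed numerically: with the operations (2), the difference quotient becomes (f(x₀·h)/f(x₀))^(1/ln h) with h → 1. A sketch (the helper name and step size are mine):

```python
import math

# Numerical version of Definition 4 with h close to 1:
# the non-Newtonian difference quotient is (f(x0*h)/f(x0)) ** (1/ln h).
def nn_derivative(f, x0, h=1.0 + 1e-8):
    return (f(x0 * h) / f(x0)) ** (1.0 / math.log(h))

print(nn_derivative(lambda x: x, 2.0))  # approximately e, as stated above
print(nn_derivative(lambda x: x, 7.0))  # approximately e for every x0
```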

More examples of derivatives of a function in the sense of Definition 4 follow:


$$
\widetilde{\cos\_e(x)} = 1 \ominus \sin\_e(x) \quad \text{and} \quad \widetilde{\sin\_e(x)} = \cos\_e(x).
$$

The basic rules of differentiation (keep recalling that 1 is the "zero" of the non-Newtonian calculus) follow:


If ̃*<sup>f</sup>* (*x*) = 1 for all *<sup>x</sup>* ∈ (*a*, *<sup>b</sup>*), then *<sup>f</sup>* (*x*) = *<sup>c</sup>* for all *<sup>x</sup>* ∈ (*a*, *<sup>b</sup>*), where *<sup>c</sup>* is a constant. Moreover, if ̃*<sup>f</sup>* (*x*) = *<sup>g</sup>* ̃(*x*) for all *<sup>x</sup>* ∈ (*a*, *<sup>b</sup>*), then there exists a constant *<sup>c</sup>* such that *<sup>f</sup>* (*x*) = *cg*(*x*) for all *<sup>x</sup>* ∈ (*a*, *<sup>b</sup>*); that is, *<sup>f</sup>* (*x*) = *<sup>g</sup>*(*x*) ⊕ *<sup>c</sup>*.

Higher-order derivatives are defined as usual:

$$\begin{aligned} \widetilde{f}^{\{0\}}(x) &= f(x), \\ \widetilde{f}^{\{n\}}(x) &= \frac{\tilde{d}}{\tilde{d}x}\left[\widetilde{f}^{\{n-1\}}(x)\right], \quad n \in \mathbb{N}. \end{aligned}$$

In the sequel, we use the following notation:

$$\bigoplus\_{i=0}^{n} a\_i = a\_0 \oplus \dots \oplus a\_n.$$

**Theorem 1** (Taylor's theorem)**.** *Let <sup>f</sup> be a function such that* ̃*<sup>f</sup>* (*n*+1)(*x*) *exists for all <sup>x</sup> in a range that contains the number a. Then,*

$$f(\mathbf{x}) = P\_n(\mathbf{x}) \oplus R\_n(\mathbf{x})$$

*for all x, where*

$$P\_{n}(x) = \bigoplus\_{k=0}^{n} e^{\frac{1}{k!}} \odot \widetilde{f}^{\{k\}}(a) \odot (x \ominus a)^{\{k\}}$$

*is the Taylor polynomial of degree n and*

$$R\_n(x) = e^{\frac{1}{(n+1)!}} \odot \widetilde{f}^{\{n+1\}}(c) \odot (x \ominus a)^{\{n+1\}}$$

*is the remainder term in Lagrange form, for some number c between a and x.*

Suppose *f* is a function that has derivatives of all orders over an interval centered on *<sup>a</sup>*. If lim*n*→+∞ *Rn*(*x*) = 1 for all *<sup>x</sup>* in the interval, then the Taylor series is convergent and converges to *<sup>f</sup>* (*x*):

$$f(x) = \bigoplus\_{k=0}^{+\infty} e^{\frac{1}{k!}} \odot \widetilde{f}^{\{k\}}(a) \odot (x \ominus a)^{\{k\}}.$$

As examples of convergent series, one has:

$$\begin{aligned} e^{x} &= \bigoplus\_{k=0}^{+\infty} e^{\frac{1}{k!}} \odot x^{\{k\}},\\ \cos\_e(x) &= \bigoplus\_{k=0}^{+\infty} e^{\frac{(-1)^{k}}{(2k)!}} \odot x^{\{2k\}}. \end{aligned}$$
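For the exponential series, each term *e*<sup>1/k!</sup> ⊙ *x*<sup>{k}</sup> equals exp((ln *x*)<sup>k</sup>/k!), so the ⊕-sum is an ordinary product of exponentials and collapses to the classical *e*<sup>x</sup>. A short numerical check (the helper name is mine):

```python
import math

# Partial oplus-sums (ordinary products) of the series for e^x:
# prod_k exp((ln x)^k / k!) = exp(sum_k (ln x)^k / k!) = exp(exp(ln x)) = e^x.
def nn_exp_series(x, terms=30):
    prod = 1.0
    for k in range(terms):
        prod *= math.exp(math.log(x) ** k / math.factorial(k))
    return prod

print(nn_exp_series(2.0), math.e ** 2.0)  # the two values agree
```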

Taylor's theorem given by Theorem 1 has a natural extension to functions of several variables [6,42]. Here we proceed by briefly reviewing integration. For more on the *α*-arithmetic, its topology and analysis, we refer the reader to the literature. For example, mean value theorems can be found in the original book of Grossman and Katz of 1972 [1]; for a recent reference with detailed proofs, see [43].

#### *2.2. Integrals*

The notion of integral for the non-Newtonian calculus under consideration is a type of product integration [10]. As expected, a function *F*(*x*) is an antiderivative of a function *f*(*x*) on the interval *I* if ̃*F*(*x*) = *f*(*x*) for all *x* ∈ *I*. The indefinite integral of *f*(*x*) is denoted by

$$\int f(x)\,\tilde{d}x = F(x) \oplus c,$$

where *c* is a constant. Examples are (see [7,14,35]):


The definite integral of *<sup>f</sup>* on [*a*, *<sup>b</sup>*] is denoted by

$$\int\_{a}^{b} f(x)\,\tilde{d}x.$$

If *f* is positive and continuous on [*a*, *b*], then *f* is integrable on [*a*, *b*]. The following properties hold:

(i) $\int\_{a}^{b} f(x)\,\tilde{d}x = \int\_{a}^{c} f(x)\,\tilde{d}x \oplus \int\_{c}^{b} f(x)\,\tilde{d}x$;

(ii) $\int\_{a}^{b} (f(x) \oplus g(x))\,\tilde{d}x = \int\_{a}^{b} f(x)\,\tilde{d}x \oplus \int\_{a}^{b} g(x)\,\tilde{d}x$;

(iii) $\int\_{a}^{b} (f(x) \ominus g(x))\,\tilde{d}x = \int\_{a}^{b} f(x)\,\tilde{d}x \ominus \int\_{a}^{b} g(x)\,\tilde{d}x$;


If $f$ is positive and integrable on $[a,b]$, then the function $F$ defined on $[a,b]$ by

$$F(x) = \int_{a}^{x} f(t)\,\tilde{d}t$$

is continuous on $[a,b]$. Moreover, the fundamental theorems of integral calculus hold: if $f$ is continuous at $x \in [a,b]$, then $F$ is differentiable at $x$ with

$$\tilde{F}(x) = f(x);$$

if $f = \tilde{h}$ for some function $h$, then

$$\int_{a}^{b} f(x)\,\tilde{d}x = h(b) \ominus h(a).$$
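Both fundamental theorems admit a quick numerical check under the same assumed exponential generator, with the derivative computed as $\tilde{f}(x)=\exp\big(\frac{d\ln f}{d\ln x}\big)$ (again our interpretation; the helper names are ours).

```python
import math

# Assumed exp-generated derivative and integral (interpretation, not source):
def ntilde(f, x, h=1e-6):
    u = math.log(x)  # central difference of ln f in the variable u = ln x
    return math.exp((math.log(f(math.exp(u + h))) - math.log(f(math.exp(u - h)))) / (2 * h))

def nint(f, a, b, n=20000):
    ua, ub = math.log(a), math.log(b)
    h = (ub - ua) / n
    return math.exp(sum(math.log(f(math.exp(ua + (k + 0.5) * h))) for k in range(n)) * h)

h_fun = lambda x: math.exp(math.cos(math.log(x)))  # plays the role of h
f = lambda x: ntilde(h_fun, x)                     # f is the derivative of h
a, b = 1.0, math.e

# second fundamental theorem: the integral of f equals h(b) (-) h(a) = h(b)/h(a)
assert abs(nint(f, a, b) - h_fun(b) / h_fun(a)) < 1e-4

# first fundamental theorem: F(x) = integral of f from a to x has derivative f
F = lambda x: nint(f, a, x)
assert abs(ntilde(F, 2.0) - f(2.0)) < 1e-3
print("fundamental theorems hold numerically")
```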

For more on the *α*-arithmetic, its generalized real analysis, its fundamental topological properties related to non-Newtonian metric spaces, and its calculus, including non-Newtonian differential equations and their applications, see [7,33,44–48]. For a gentle, thorough, and modern introduction to the subject of non-Newtonian calculi, we also refer the reader to the recent book [49]. We now proceed with our original results.

#### **3. Results**

In order to develop a non-Newtonian calculus of variations (dynamic optimization), we begin by proving some necessary results of static optimization.

#### *3.1. Static Optimization*

Given $\epsilon > 1$, let

$$\mathcal{B}(\bar{x}, \epsilon) := \left\{ x \in \mathbb{R}^+ : d(x, \bar{x}) \le \epsilon \right\} = \left\{ x \in \mathbb{R}^+ : |x \ominus \bar{x}| \le \epsilon \right\}.$$

Note that for $a, b \in \mathbb{R}^+$, one has

$$a = b \iff \frac{a}{b} = 1 \left(\text{or } \frac{b}{a} = 1\right) \Leftrightarrow a \ominus b = 1 \text{ (or } b \ominus a = 1\text{)}.$$

Similarly for inequalities, for example,

$$a < b \Leftrightarrow \frac{a}{b} < 1 \Leftrightarrow a \ominus b < 1.$$

This means that

$$\mathcal{B}(\bar{x}, \epsilon) = \left\{ x \in \mathbb{R}^+ : d(x, \bar{x}) \ominus \epsilon \le 1 \right\}.$$

**Definition 5** (local minimizer)**.** *Let $f\colon(a,b)\to\mathbb{R}^+$ and consider the problem of minimizing $f(x)$, $x\in(a,b)$. We say that $\bar{x}\in(a,b)$ is a (local) minimizer of $f$ in $(a,b)$ if there exists $\epsilon>1$ such that $f(\bar{x})\le f(y)$ (i.e., $f(\bar{x})\ominus f(y)\le 1$) for all $y\in\mathcal{B}(\bar{x},\epsilon)\cap(a,b)$. In this case, we say that $f(\bar{x})$ is a (local) minimum.*

Another important concept in optimization is that of descent direction.

**Definition 6** (descent direction)**.** *A $d\in\mathbb{R}^+$ is said to be a descent direction of $f$ at $x$ if $f(x\oplus\epsilon\odot d)<f(x)$ for all $\epsilon>1$ sufficiently close to 1 or, equivalently, if $f(x\oplus\epsilon\odot d)\ominus f(x)<1$ for all $\epsilon>1$ sufficiently close to 1.*

**Remark 1.** *From the chain rule and other properties of Section 2, it follows that*

$$\frac{\tilde{d}}{\tilde{d}x}\left[f(x\oplus\epsilon\odot d)\right]=\tilde{f}(x\oplus\epsilon\odot d)$$

*and*

$$\frac{\tilde{d}}{\tilde{d}\epsilon}\left[f(x\oplus\epsilon\odot d)\right]=\tilde{f}(x\oplus\epsilon\odot d)\odot d.\tag{4}$$

*In particular, we get from* (4) *that*

$$\left.\frac{\tilde{d}}{\tilde{d}\epsilon}\left[f(x\oplus\epsilon\odot d)\right]\right|_{\epsilon=1}=\tilde{f}(x)\odot d.$$

Our first result allows us to identify a descent direction of $f$ at $x$ based on the derivative of $f$ at $x$.

**Theorem 2.** *Let $f$ be differentiable. If there exists $d\in\mathbb{R}^+$ such that $\tilde{f}(x)\odot d<1$, then $d$ is a descent direction of $f$ at $x$.*

**Proof.** We know from Taylor's theorem (Theorem 1) that

$$f(x\oplus\epsilon\odot d)=f(x)\oplus\epsilon\odot\tilde{f}(x)\odot d\oplus R_1(x\oplus\epsilon\odot d),\tag{5}$$

where

$$\begin{aligned} R_1(x\oplus\epsilon\odot d) &= \left(\tilde{f}^{(2)}(c)\right)^{\frac{1}{2}}\odot\left(\epsilon\odot d\right)^{\{2\}}\\ &= \epsilon^{\{2\}}\odot\left(\tilde{f}^{(2)}(c)\right)^{\frac{1}{2}}\odot d^{\{2\}} \end{aligned}$$

with $c$ in the interval between $x$ and $x\oplus\epsilon\odot d$. The equality (5) can be written in the following equivalent form:

$$f(x\oplus\epsilon\odot d)\ominus f(x)=\epsilon\odot\tilde{f}(x)\odot d\oplus R_1(x\oplus\epsilon\odot d).\tag{6}$$

Recalling that $a\odot b^{\{-1\}}=a\oslash b$, that $a\odot a^{\{-1\}}=e$, and that $\odot$ is distributive over $\oplus$, we get from (6) that

$$\begin{aligned} \left(f(x\oplus\epsilon\odot d)\ominus f(x)\right)\oslash\epsilon &= \tilde{f}(x)\odot d\oplus\epsilon^{\{-1\}}\odot R_1(x\oplus\epsilon\odot d)\\ &= \tilde{f}(x)\odot d\oplus\epsilon\odot\left(\tilde{f}^{(2)}(c)\right)^{\frac{1}{2}}\odot d^{\{2\}}. \end{aligned}\tag{7}$$

Now we note that, as $\epsilon\to 1$, one has $\epsilon\odot\left(\tilde{f}^{(2)}(c)\right)^{\frac{1}{2}}\odot d^{\{2\}}\to 1$, so that the right-hand side of (7) converges to $\tilde{f}(x)\odot d$. From the hypothesis $\tilde{f}(x)\odot d<1$ of our theorem, this means that, for $\epsilon>1$ sufficiently close to 1, the right-hand side of (7) is strictly less than one. Thus, for $\epsilon>1$ sufficiently close to 1,

$$\left(f(x\oplus\epsilon\odot d)\ominus f(x)\right)\oslash\epsilon<1.\tag{8}$$

Recalling that $a\oslash\epsilon<1\Leftrightarrow a^{1/\ln(\epsilon)}<1$, we conclude from (8) that, for $\epsilon$ sufficiently close to 1, we have $f(x\oplus\epsilon\odot d)\ominus f(x)<1$; that is, $d$ is a descent direction of $f$ at $x$.
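Theorem 2 can be illustrated concretely. In the sketch below (assuming the exponential generator; the test function is hypothetical), the hypothesis $\tilde{f}(x)\odot d<1$ holds and a step to $x\oplus\epsilon\odot d$ with $\epsilon$ close to 1 indeed decreases $f$.

```python
import math

odot = lambda a, b: math.exp(math.log(a) * math.log(b))  # assumed a (.) b

def ntilde(f, x, h=1e-6):  # assumed exp-generated derivative
    u = math.log(x)
    return math.exp((math.log(f(math.exp(u + h))) - math.log(f(math.exp(u - h)))) / (2 * h))

f = lambda x: math.exp(math.log(x) ** 2)  # hypothetical positive function, minimum at x = 1
x = 2.0
d = 1 / ntilde(f, x)                      # the direction 1 (-) f~(x)

assert odot(ntilde(f, x), d) < 1          # hypothesis of Theorem 2 holds
x_new = x * odot(1.01, d)                 # x (+) eps (.) d with eps = 1.01 > 1
assert f(x_new) < f(x)                    # d is indeed a descent direction
print("descent step:", f(x), "->", f(x_new))
```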

As a corollary of Theorem 2, we obtain Fermat's necessary optimality condition, which gives us a method to find local minimizers (or maximizers) of differentiable functions on open sets by showing that every local extremizer of the function is a stationary point (a point where the non-Newtonian derivative of the function is one).

**Theorem 3** (Fermat's theorem–stationary points)**.** *Let $f\colon(a,b)\to\mathbb{R}^+$ be differentiable. If $x\in(a,b)$ is a minimizer of $f$, then $\tilde{f}(x)=1$.*

**Proof.** We want to prove that for a minimizer $x$ we must have $\tilde{f}(x)=1\Leftrightarrow 1\ominus\tilde{f}(x)=1$. We do the proof by contradiction. Assume that $\tilde{f}(x)\neq 1$; that is, $1\ominus\tilde{f}(x)\neq 1$. Let $d=1\ominus\tilde{f}(x)$. Then,

$$\begin{aligned} \tilde{f}(x)\odot d &= \tilde{f}(x)\odot\left(1\ominus\tilde{f}(x)\right)=1\ominus\tilde{f}(x)^{\{2\}}\\ &= \frac{1}{\tilde{f}(x)\odot\tilde{f}(x)}=\left(\frac{1}{\tilde{f}(x)}\right)^{\ln\left(\tilde{f}(x)\right)} \end{aligned}$$

and since $g(y)=\left(\frac{1}{y}\right)^{\ln(y)}$ is a function with $0<g(y)<1$ for all $y\neq 1$, we conclude that $\tilde{f}(x)\odot d<1$. It follows from Theorem 2 that $d$ is a descent direction of $f$ at $x$ and, therefore, from the definition of descent direction, $x$ is not a local minimizer.
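The proof of Theorem 3 suggests a toy "multiplicative descent" iteration: repeatedly step along $d=1\ominus\tilde{f}(x)$ until $\tilde{f}(x)\approx 1$. A sketch, assuming the exponential generator and a hypothetical test function with stationary point at $x=e$:

```python
import math

odot = lambda a, b: math.exp(math.log(a) * math.log(b))  # assumed a (.) b

def ntilde(f, x, h=1e-6):  # assumed exp-generated derivative
    u = math.log(x)
    return math.exp((math.log(f(math.exp(u + h))) - math.log(f(math.exp(u - h)))) / (2 * h))

f = lambda x: math.exp((math.log(x) - 1) ** 2)  # hypothetical; minimizer at x = e, where f~ = 1

x = 2.0
for _ in range(200):
    d = 1 / ntilde(f, x)   # d = 1 (-) f~(x), as in the proof of Theorem 3
    x = x * odot(1.1, d)   # x <- x (+) eps (.) d with fixed eps = 1.1

assert abs(x - math.e) < 1e-3        # converged to the stationary point
assert abs(ntilde(f, x) - 1) < 1e-3  # where the non-Newtonian derivative is one
print(x)
```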

In the next section, we make use of Theorem 3 to prove the non-Newtonian Euler–Lagrange equation.

#### *3.2. Dynamic Optimization*

A central tool in dynamic optimization, both in the calculus of variations and optimal control [16], is integration by parts. In what follows, we use the following notation:

$$\psi(x)\Big|_{a}^{b}=\psi(b)\ominus\psi(a).$$

**Theorem 4** (integration by parts)**.** *Let $f\colon[a,b]\to\mathbb{R}^+$ and $g\colon[a,b]\to\mathbb{R}^+$ be differentiable. The following formula of integration by parts holds:*

$$\int_{a}^{b}\tilde{f}(x)\odot g(x)\,\tilde{d}x=f(x)\odot g(x)\Big|_{a}^{b}\ominus\int_{a}^{b}f(x)\odot\tilde{g}(x)\,\tilde{d}x.\tag{9}$$

**Proof.** From the derivative of a product, we know that

$$\frac{\tilde{d}}{\tilde{d}x}\left[f(x)\odot g(x)\right]=\tilde{f}(x)\odot g(x)\oplus f(x)\odot\tilde{g}(x).\tag{10}$$

On the other hand, the fundamental theorem of integral calculus tells us that

$$\int_{a}^{b}\frac{\tilde{d}}{\tilde{d}x}\left[f(x)\odot g(x)\right]\tilde{d}x=f(x)\odot g(x)\Big|_{a}^{b}.$$

Therefore, by integrating (10) from *a* to *b*, we conclude that

$$\int_{a}^{b}\left[\tilde{f}(x)\odot g(x)\right]\tilde{d}x\oplus\int_{a}^{b}\left[f(x)\odot\tilde{g}(x)\right]\tilde{d}x=f(x)\odot g(x)\Big|_{a}^{b},$$

which is equivalent to (9).
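The integration-by-parts formula (9) can also be checked numerically under the assumed exponential generator (the two test functions below are hypothetical):

```python
import math

odot = lambda a, b: math.exp(math.log(a) * math.log(b))  # assumed a (.) b

def ntilde(f, x, h=1e-6):  # assumed exp-generated derivative
    u = math.log(x)
    return math.exp((math.log(f(math.exp(u + h))) - math.log(f(math.exp(u - h)))) / (2 * h))

def nint(f, a, b, n=4000):  # assumed exp-generated integral
    ua, ub = math.log(a), math.log(b)
    h = (ub - ua) / n
    return math.exp(sum(math.log(f(math.exp(ua + (k + 0.5) * h))) for k in range(n)) * h)

f = lambda x: math.exp(math.sin(math.log(x)))  # hypothetical positive functions
g = lambda x: math.exp(math.log(x) ** 2)
a, b = 1.0, math.e

lhs = nint(lambda x: odot(ntilde(f, x), g(x)), a, b)
boundary = odot(f(b), g(b)) / odot(f(a), g(a))  # f (.) g evaluated from a to b
rhs = boundary / nint(lambda x: odot(f(x), ntilde(g, x)), a, b)
assert abs(lhs - rhs) < 1e-4
print("formula (9) holds numerically")
```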

We are now in a position to formulate the fundamental problem of the calculus of variations: to minimize the integral functional

$$\mathcal{F}[y]=\int_{a}^{b}L(x,y(x),\tilde{y}(x))\,\tilde{d}x$$

over all smooth functions $y$ on $[a,b]$ with fixed endpoints $y(a)=y_a$ and $y(b)=y_b$. The central result of the calculus of variations and classical mechanics is the celebrated Euler–Lagrange equation, whose solutions are stationary points of the given action functional. We restrict ourselves here to the classical framework of the calculus of variations, where both the Lagrangian $L$ and the admissible functions $y$ are smooth enough: typically, one considers $L\in C^2$ and $y\in C^2$, so that one can view the Euler–Lagrange equation as a second-order ordinary differential equation [17]. We adopt such assumptions here. We denote the problem by $(P)$.

Before proving the Euler–Lagrange equation (the necessary optimality condition for problem (*P*)), we first need to prove a non-Newtonian analogue of the fundamental lemma of the calculus of variations.

**Lemma 1** (fundamental lemma of the calculus of variations)**.** *If $f\colon[a,b]\to\mathbb{R}^+$ is a positive continuous function such that*

$$\int_{a}^{b}f(x)\odot h(x)\,\tilde{d}x=1\tag{11}$$

*for all functions $h(x)$ that are continuous for $a\le x\le b$ with $h(a)=h(b)=1$, then $f(x)=1$ for all $x\in[a,b]$.*

**Proof.** We do the proof by contradiction. Suppose the function $f$ is not identically one, say $f(x)>1$ at some point $x\in[a,b]$. Then, by continuity, $f(x)>1$ for all $x$ in some interval $[x_1,x_2]\subset[a,b]$. If we set

$$h(x)=\begin{cases}(x\ominus x_1)\odot(x_2\ominus x)&\text{if }x\in[x_1,x_2],\\ 1&\text{if }x\in[a,b]\setminus[x_1,x_2],\end{cases}$$

then $h(x)$ satisfies the assumptions of the lemma; i.e., $h(x)$ is continuous for $x\in[a,b]$ with $h(a)=1$ and $h(b)=1$. We have

$$\begin{aligned}\int_{a}^{b}f(x)\odot h(x)\,\tilde{d}x &= \int_{a}^{x_1}1\,\tilde{d}x\oplus\int_{x_1}^{x_2}f(x)\odot(x\ominus x_1)\odot(x_2\ominus x)\,\tilde{d}x\oplus\int_{x_2}^{b}1\,\tilde{d}x\\ &= \int_{x_1}^{x_2}f(x)\odot(x\ominus x_1)\odot(x_2\ominus x)\,\tilde{d}x.\end{aligned}\tag{12}$$

Let us analyze the integrand $\beta(x)$ of (12):

$$\begin{aligned}\beta(x) &= f(x)\odot\left[(x\ominus x_1)\odot(x_2\ominus x)\right]=f(x)\odot\left[\frac{x}{x_1}\odot\frac{x_2}{x}\right]\\ &= f(x)\odot\left[\left(\frac{x}{x_1}\right)^{\ln\left(\frac{x_2}{x}\right)}\right].\end{aligned}$$

Since $f(x)>1$ and $\left(\frac{x}{x_1}\right)^{\ln\left(\frac{x_2}{x}\right)}>1$ for any $x\in(x_1,x_2)$, we also have that $\beta(x)>1$ for any $x\in(x_1,x_2)$, with $\beta(x)=1$ at $x=x_1$ and $x=x_2$. It follows that

$$\int_{a}^{b}f(x)\odot h(x)\,\tilde{d}x=\int_{x_1}^{x_2}\beta(x)\,\tilde{d}x>1\odot(x_2\ominus x_1)=1.$$

This contradicts (11) and proves the lemma.
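The bump function built in the proof can be exercised numerically: for an $f$ exceeding one on $[x_1,x_2]$, the integral of $f\odot h$ is strictly greater than one, contradicting (11). A sketch under the assumed exponential generator, with hypothetical choices of $f$, $x_1$, and $x_2$:

```python
import math

odot = lambda a, b: math.exp(math.log(a) * math.log(b))  # assumed a (.) b

def nint(f, a, b, n=4000):  # assumed exp-generated integral
    ua, ub = math.log(a), math.log(b)
    h = (ub - ua) / n
    return math.exp(sum(math.log(f(math.exp(ua + (k + 0.5) * h))) for k in range(n)) * h)

a, b = 1.0, math.e ** 2
x1, x2 = math.e ** 0.5, math.e ** 1.5
f = lambda x: math.e  # hypothetical f with f(x) = e > 1 everywhere

def bump(x):  # (x (-) x1) (.) (x2 (-) x) on [x1, x2], and 1 elsewhere
    return odot(x / x1, x2 / x) if x1 <= x <= x2 else 1.0

assert nint(lambda x: odot(f(x), bump(x)), a, b) > 1 + 1e-6
print("the integral of f (.) h exceeds 1, as in the proof")
```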

Now we formulate and prove the analog of the Euler–Lagrange differential equation for our problem (*P*).

**Theorem 5** (Euler–Lagrange equation)**.** *If $y(x)$, $x\in[a,b]$, is a solution to problem $(P)$*

$$\mathcal{F}[y]=\int_{a}^{b}L(x,y(x),\tilde{y}(x))\,\tilde{d}x\longrightarrow\min_{y\in\mathcal{Y}(y_a;y_b)}$$

*with*

$$\mathcal{Y}(y_a;y_b):=\left\{y\in C^2([a,b];\mathbb{R}^+):y(a)=y_a,\ y(b)=y_b,\ y(x)>0\ \forall x\in[a,b]\right\},$$

*then y*(*x*) *satisfies the Euler–Lagrange equation*

$$\tilde{L}_{y}(x,y(x),\tilde{y}(x))=\frac{\tilde{d}}{\tilde{d}x}\tilde{L}_{\tilde{y}}(x,y(x),\tilde{y}(x))\tag{13}$$

*for all $x\in[a,b]$.*

**Proof.** Let $y(x)$, $x\in[a,b]$, be a minimizer of problem $(P)$. Then, the function $(y\oplus\epsilon\odot h)(x)$, $x\in[a,b]$, belongs to $\mathcal{Y}(y_a;y_b)$ for any function $h\in\mathcal{Y}(1;1)$ and for any $\epsilon$ in an open neighborhood of 1. Note that $(y\oplus\epsilon\odot h)(x)=y(x)$ for $\epsilon=1$. This means that, for any smooth function $h(x)$, $x\in[a,b]$, satisfying $h(a)=h(b)=1$, the function $\varphi(\epsilon)$ defined by

$$\begin{aligned}\varphi(\epsilon)=\mathcal{F}[y\oplus\epsilon\odot h] &= \int_{a}^{b}L\Big(x,(y\oplus\epsilon\odot h)(x),(\widetilde{y\oplus\epsilon\odot h})(x)\Big)\tilde{d}x\\ &= \int_{a}^{b}L\Big(x,y(x)\oplus\epsilon\odot h(x),\tilde{y}(x)\oplus\epsilon\odot\tilde{h}(x)\Big)\tilde{d}x\end{aligned}\tag{14}$$

has a minimizer at $\epsilon=1$. It follows from Fermat's theorem (Theorem 3) that

$$\tilde{\varphi}(1)=\left.\frac{\tilde{d}}{\tilde{d}\epsilon}\varphi(\epsilon)\right|_{\epsilon=1}=1.$$

By differentiating (14) with respect to $\epsilon$, and then setting $\epsilon=1$, we get from the chain rule and the relations

$$\frac{\tilde{d}}{\tilde{d}\epsilon}\left(y(x)\oplus\epsilon\odot h(x)\right)=h(x),\quad\frac{\tilde{d}}{\tilde{d}\epsilon}\left(\tilde{y}(x)\oplus\epsilon\odot\tilde{h}(x)\right)=\tilde{h}(x),$$

that

$$1=\int_{a}^{b}\left[\tilde{L}_{y}(x,y(x),\tilde{y}(x))\odot h(x)\oplus\tilde{L}_{\tilde{y}}(x,y(x),\tilde{y}(x))\odot\tilde{h}(x)\right]\tilde{d}x.\tag{15}$$

From integration by parts (Theorem 4), and the fact that $h(a)=1$ and $h(b)=1$, one has

$$\int_{a}^{b}\tilde{L}_{\tilde{y}}(x,y(x),\tilde{y}(x))\odot\tilde{h}(x)\,\tilde{d}x=h(x)\odot\tilde{L}_{\tilde{y}}(x,y(x),\tilde{y}(x))\Big|_{a}^{b}\ominus\int_{a}^{b}\frac{\tilde{d}}{\tilde{d}x}\Big[\tilde{L}_{\tilde{y}}(x,y(x),\tilde{y}(x))\Big]\odot h(x)\,\tilde{d}x,$$

that is,

$$\int_{a}^{b}\tilde{L}_{\tilde{y}}(x,y(x),\tilde{y}(x))\odot\tilde{h}(x)\,\tilde{d}x=1\ominus\int_{a}^{b}\frac{\tilde{d}}{\tilde{d}x}\Big[\tilde{L}_{\tilde{y}}(x,y(x),\tilde{y}(x))\Big]\odot h(x)\,\tilde{d}x.\tag{16}$$

Using equality (16) in the necessary condition (15), we get that

$$1=\int_{a}^{b}\tilde{L}_{y}(x,y(x),\tilde{y}(x))\odot h(x)\,\tilde{d}x\oplus 1\ominus\int_{a}^{b}\frac{\tilde{d}}{\tilde{d}x}\Big[\tilde{L}_{\tilde{y}}(x,y(x),\tilde{y}(x))\Big]\odot h(x)\,\tilde{d}x$$

$$\Leftrightarrow\int_{a}^{b}\left[\tilde{L}_{y}(x,y(x),\tilde{y}(x))\ominus\frac{\tilde{d}}{\tilde{d}x}\Big(\tilde{L}_{\tilde{y}}(x,y(x),\tilde{y}(x))\Big)\right]\odot h(x)\,\tilde{d}x=1.\tag{17}$$

The result follows from the fundamental lemma of the calculus of variations (Lemma 1) applied to (17): $\tilde{L}_{y}(x,y(x),\tilde{y}(x))\ominus\frac{\tilde{d}}{\tilde{d}x}\Big(\tilde{L}_{\tilde{y}}(x,y(x),\tilde{y}(x))\Big)=1$ for all $x\in[a,b]$.

To illustrate our main result, consider the following problem of the calculus of variations:

$$\begin{aligned} \mathcal{F}[y]=\sqrt{e}\odot\int_{1}^{e^{\pi/2}}\left[\tilde{y}^{\{2\}}(x)\ominus y^{\{2\}}(x)\right]\tilde{d}x &\longrightarrow \min,\\ y(1)=e,\quad y\big(e^{\pi/2}\big) &= e^{2}. \end{aligned}\tag{18}$$

Theorem 5 tells us that the solution of (18) must satisfy the Euler–Lagrange Equation (13). In this example, the Lagrangian $L$ is given by $L(x,y,\tilde{y})=\sqrt{e}\odot\left(\tilde{y}^{\{2\}}\ominus y^{\{2\}}\right)$, so that

$$\begin{aligned}\tilde{L}_{y}(x,y,\tilde{y}) &= \sqrt{e}\odot\left(1\ominus e^{2}\odot y\right),\\ \tilde{L}_{\tilde{y}}(x,y,\tilde{y}) &= \sqrt{e}\odot\left(e^{2}\odot\tilde{y}\ominus 1\right)=\sqrt{e}\odot\left(e^{2}\odot\tilde{y}\right).\end{aligned}\tag{19}$$

Noting that $\sqrt{e}=e\oslash e^{2}=\left(e^{2}\right)^{\{-1\}}$, the equalities (19) simplify to

$$\tilde{L}_{y}(x,y,\tilde{y})=1\ominus y,\qquad\tilde{L}_{\tilde{y}}(x,y,\tilde{y})=\tilde{y},$$

and the Euler–Lagrange Equation (13) takes the form

$$1\ominus y(x)=\tilde{y}^{(2)}(x)\Leftrightarrow\tilde{y}^{(2)}(x)\oplus y(x)=1.\tag{20}$$

The second-order differential Equation (20) has solutions of the form

$$y(x)=c_{1}\odot\cos_{e}(x)\oplus c_{2}\odot\sin_{e}(x),$$

where $c_1$ and $c_2$ are constants. Given the boundary conditions $y(1)=e$ and $y\big(e^{\pi/2}\big)=e^{2}$, we conclude that the Euler–Lagrange extremal for problem (18) is given by

$$y(x)=e\odot\cos_{e}(x)\oplus e^{2}\odot\sin_{e}(x)\Leftrightarrow y(x)=e^{2\sin\left(\ln(x)\right)+\cos\left(\ln(x)\right)}.$$
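As a sanity check, the extremal just found can be verified numerically against the Euler–Lagrange Equation (20), reading $\tilde{y}^{(2)}(x)\oplus y(x)=1$ as "the second non-Newtonian derivative times $y$ equals one" under the assumed exponential generator (the numerical helper is ours):

```python
import math

def ntilde(f, x, h=1e-5):  # assumed exp-generated derivative
    u = math.log(x)
    return math.exp((math.log(f(math.exp(u + h))) - math.log(f(math.exp(u - h)))) / (2 * h))

# the Euler-Lagrange extremal of problem (18)
y = lambda x: math.exp(2 * math.sin(math.log(x)) + math.cos(math.log(x)))

assert abs(y(1.0) - math.e) < 1e-12  # left boundary condition y(1) = e

# equation (20): second derivative (+) y equals 1, i.e., their product is 1
x0 = 2.0
second = ntilde(lambda t: ntilde(y, t), x0)
assert abs(second * y(x0) - 1) < 1e-3
print("the extremal satisfies (20) at x =", x0)
```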

#### **4. Discussion**

One can say that the calculus of variations began in 1687 with Newton's minimal resistance problem [25,50,51]. It immediately occupied the attention of Bernoulli (1655–1705), but it was Euler (1707–1783) who first elaborated mathematically on the subject, beginning in 1733. Lagrange (1736–1813) was influenced by Euler's work and contributed significantly to the theory, introducing a purely analytic approach to the subject based on additive variations $y+h$, whose essence we still follow today [17]. The calculus of variations is concerned with the minimization of integral functionals:

$$\int\_{a}^{b} L(x, y(x), y'(x)) dx \longrightarrow \min. \tag{21}$$

However, as observed in [52], there are other interesting problems arising in applications in which the functionals to be minimized are not of the form (21). For example, the planning problem of a firm that seeks to program its production and investment policies so as to reach a given production rate and to maximize its future market competitiveness at a given time horizon can be mathematically stated in the form

$$\left(\int\_{a}^{b} L\_1(\mathbf{x}, y(\mathbf{x}), y'(\mathbf{x})) d\mathbf{x}\right) \cdot \left(\int\_{a}^{b} L\_2(\mathbf{x}, y(\mathbf{x}), y'(\mathbf{x})) d\mathbf{x}\right) \longrightarrow \min \tag{22}$$

(see [52]). Another example, also given in [52], appears when dealing with the so-called "slope stability problem", which is described mathematically as minimizing a quotient functional:

$$\frac{\int_{a}^{b}L_{1}(x,y(x),y'(x))\,dx}{\int_{a}^{b}L_{2}(x,y(x),y'(x))\,dx}\longrightarrow\min.\tag{23}$$

Such multiplicative integral minimization problems that arise in different applications are nonstandard problems of the calculus of variations, but they can be naturally modeled in the non-Newtonian calculus of variations, and then solved in a rather standard way, using non-Newtonian Lagrange variations of the form $y\oplus\epsilon\odot h$, as we have proposed here. Therefore, we claim that the non-Newtonian calculus of variations just introduced may be useful for dealing with multiplicative functionals that arise in economics, physics, and biology.

In this paper, we have restricted ourselves to the central ideas and results of any calculus of variations: the celebrated Euler–Lagrange equation, which is a first-order necessary optimality condition. Of course, our results can be extended, for example, by relaxing the considered hypotheses and enlarging the space of admissible functions, which we have taken here to be $C^2$, or by considering vector functions instead of scalar ones. We leave such generalizations to the interested and curious reader. In fact, much remains to be done. As possible future research directions, we can mention: obtaining natural boundary conditions (sometimes also called transversality conditions) to be satisfied at a boundary point $a$ and/or $b$ when $y(a)$ and/or $y(b)$ are free or restricted to take values on a given curve; obtaining second-order necessary conditions; obtaining sufficient conditions; and investigating nonadditive isoperimetric problems.

#### **5. Conclusions**

In this work, a new calculus of variations was proposed, based on the non-Newtonian approach introduced by Grossman and Katz, thereby avoiding problems about nonnegativity. A new relation was proved, the multiplicative Euler–Lagrange differential Equation (13), which each solution of a non-Newtonian variational problem, with admissible functions taking positive values only, must satisfy. An example was provided for illustration purposes.

Grossman and Katz have shown that infinitely many calculi can be constructed independently [1]. Each of these calculi provides a different perspective for approaching many problems in science and engineering [53]. Additionally, a mathematical problem that is difficult or impossible to solve in one calculus can often be solved easily through another calculus [14,39].

Since the pioneering work of Grossman and Katz, non-Newtonian calculi have been a topic for new study areas in mathematics and its applications [15,54]. In particular, Stanley [55], Córdova-Lepe [11], Slavík [10], Pap [44] and Bashirov et al. [12] have called the attention of mathematicians to the topic. More recently, non-Newtonian calculi have become a hot topic in economics and finance [56], quantum calculus [54], complex analysis [57–59], numerical analysis [2,37,42], inequalities [40,60], biomathematics [7,61], and mathematical education [14].

Here we adopted the non-Newtonian calculus as originally introduced by Grossman and Katz [1,34,35] and recently developed by Córdova-Lepe [11,41] and collaborators [4]: see the recent reviews in [7,14]. Roughly speaking, the key to understanding such calculus, valid for positive functions, is a formal substitution, where one replaces addition and subtraction with multiplication and division, respectively; multiplication in standard calculus is replaced by exponentiation in the non-Newtonian case, and thus, division by exponentiation with the reciprocal exponent. Our main contribution here was to develop, for the first time in the literature, a suitable non-Newtonian calculus of variations that minimizes a non-Newtonian integral functional with a Lagrangian that depends on the non-Newtonian derivative. The main result is a first-order necessary optimality condition of Euler–Lagrange type.

We trust that the present paper marks the beginning of a fruitful road for Non-Newtonian (NN) mechanics, NN calculus of variations and NN optimal control, thereby calling attention to and serving as inspiration for a new generation of researchers. Currently, we are investigating the validity of Emmy Noether's principle in the NN/multiplicative calculus of variations here introduced.

**Funding:** This research was funded by The Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT – Fundação para a Ciência e a Tecnologia), grant number UIDB/04106/2020.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The author is grateful to Luna Shen, Managing Editor of Axioms, for proposing a volume/special issue dedicated to his 50th anniversary, and to Natália Martins, Ricardo Almeida, Cristiana J. Silva and M. Rchid Sidi Ammi, who kindly accepted the invitation of Axioms to lead the project.

**Conflicts of Interest:** The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript or in the decision to publish the results.

#### **References**


#### **Short Biography of Author**

**Delfim Fernando Marado Torres** is a Portuguese mathematician, born 16 August 1971 in Nampula, Portuguese Mozambique. He obtained a PhD in Mathematics from the University of Aveiro (UA) in 2002 and his habilitation in Mathematics from UA in 2011. He has been a full professor of mathematics since 9 March 2015. He has been the Director of the R&D Unit CIDMA, the largest Portuguese research center for mathematics, and Coordinator of its Systems and Control Group. His main research areas are calculus of variations and optimal control; optimization; fractional derivatives and integrals; dynamic equations on time scales; and mathematical biology. Torres has written outstanding scientific and pedagogical publications. In particular, he has co-authored two books with Imperial College Press and three books with Springer. He has strong experience in graduate and post-graduate student supervision and teaching in mathematics. Twenty PhD students in mathematics have successfully finished under his supervision. Moreover, he has been the leading member of several national and international R&D projects, including EU projects and networks. Professor Torres has been, since 2013, the Director of the Doctoral Programme Consortium in Mathematics and Applications (MAP-PDMA) of the Universities of Minho, Aveiro, and Porto. Delfim married in 2003 and has one daughter and two sons.


*Axioms* Editorial Office E-mail: axioms@mdpi.com www.mdpi.com/journal/axioms
