**Differential Equation Models in Applied Mathematics: Theoretical and Numerical Challenges**

Editor

**Fasma Diele**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editor* Fasma Diele Istituto per le Applicazioni del Calcolo M. Picone, CNR Italy

*Editorial Office* MDPI St. Alban-Anlage 66 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Mathematics* (ISSN 2227-7390) (available at: https://www.mdpi.com/journal/mathematics/special_issues/Differential_Equation_Models).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-3010-9 (Hbk) ISBN 978-3-0365-3011-6 (PDF)**

© 2022 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **About the Editor**

**Fasma Diele** (MSc in Mathematics, University of Bari, Italy, 1991) is a senior researcher in the 'Environmental Mathematics' Project Area of IAC-CNR. She is the author of more than 50 papers, mainly in the fields of Numerical Analysis and Applied Mathematics, with more than 400 citations and an H-index of 11. She has been involved in environmental modelling and numerical research activities within several EU-funded projects (BIO SOS, H2020 ECOPOTENTIAL, eLTER-Plus, and CHOECO). She has held an Italian national scientific qualification for Associate Professor in Numerical Analysis since 2012. She has been a member of the Faculty Board of the PhD in Mathematics, University of Bari (until 2017), an Editor for 'Abstract and Applied Analysis' by Hindawi (indexed by WOS) and other minor journals, an active member of the GNCS group of INdAM (Istituto Nazionale di Alta Matematica), and an elected member of the Giunta of the MSE group of UMI (Unione Matematica Italiana). She also serves as a scientific evaluator for ANVUR-VQR 2015-2019 (Valutazione della Qualità della Ricerca) and as a research project evaluator for the Italian Ministero dello Sviluppo Economico (MiSE), for the Italian Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR), and for the National Research Fund for Scientific & Technological Development (FONDECYT), Chile, in the Mathematics area. Additionally, she is a member of European Women in Mathematics (EWM).

## *Editorial* **Differential Equation Models in Applied Mathematics: Theoretical and Numerical Challenges**

**Fasma Diele**

Istituto per le Applicazioni del Calcolo 'M. Picone', CNR, Via Amendola 122/D, 70126 Bari, Italy; fasma.diele@cnr.it

#### **1. Motivations for the Special Issue**

This book collects the articles published in the Special Issue "Differential Equation Models in Applied Mathematics: Theoretical and Numerical Challenges" of the MDPI journal *Mathematics*. The Special Issue was intended to highlight old and new challenges in the formulation, solution, understanding, and interpretation of models based on differential equations (DEs) in different real-world applications. Indeed, differential equation models can describe complex mechanisms arising in a wide range of applications in many different sectors, such as ecology, health, biology, economics, and finance. Differential and difference equation models are tools for understanding dynamics and for carrying out forecasting and scenario analysis; in addition, they allow for the detection of optimal solutions according to selected criteria.

The technical topics covered in the seven articles published in this book include: asymptotic properties of high-order nonlinear DEs [1,2], the analysis of backward bifurcation [3], and the stability analysis of fractional-order differential systems [4]. Models oriented to real applications consider chemotaxis between cell species [5], the mechanism of on-off intermittency in food chain models [6], and the occurrence of hysteresis in marketing [3]. Numerical aspects deal with the preservation of mass and positivity [5] and the efficient solution of Boundary Value Problems (BVPs) arising from optimal control problems [7].

In the following, I summarize the main novel content of this book, distinguishing among contributions that concern:


#### *1.1. Theoretical Challenges of DEs*

Articles [1,2] focus on high-order differential equations. In [1], new oscillation theorems for fourth-order differential equations are established using the Riccati and integral averaging techniques. Article [2] investigates the inverse problem for a non-homogeneous, higher-order Sobolev-type equation with assigned Cauchy and overdetermined conditions. Using the theory of bounded polynomial operator pencils, the problem is first reduced to a regular and a singular part, and the solution is then reconstructed by the method of successive approximations. A theorem on the solvability of the original problem represents the main theoretical contribution of this paper to the literature on the subject.

The review article [4] presents systems of fractional-order DEs with Caputo derivatives. Due to the dependence on the order of the fractional derivatives, the linear stability analysis leads to properties fundamentally different from those of classical DEs: unlike in systems of integer order, the coefficients of the system are not sufficient to describe the stability properties of its solutions. By reviewing the asymptotic analysis of the Mittag–Leffler function and of its derivatives, and by examining systems with some specific structures, this paper contributes to the research on the stability analysis of multi-order, higher-dimensional systems, which nowadays represents an important theoretical challenge for fractional-order DEs.

**Citation:** Diele, F. Differential Equation Models in Applied Mathematics. *Mathematics* **2022**, *10*, 249. https://doi.org/10.3390/math10020249

Received: 4 January 2022 Accepted: 11 January 2022 Published: 14 January 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

#### *1.2. Numerical Challenges of DEs*

The transmission model for microfluidic chips presented in the featured paper [5] involves doubly parabolic DEs in 2D spatial domains connected with either doubly parabolic or hyperbolic-parabolic DEs in 1D domains. The important contribution of this paper is the development of novel positivity conditions and numerical methods, based on finite difference schemes, ensuring mass preservation at the external boundaries and at the interfaces between domains of different sizes. It has to be underlined that this is the first numerical work in which this new technique of switching the size of the domains and the type of partial differential equations, i.e., parabolic vs. hyperbolic, is introduced in the literature.

Paper [7] considers Hamiltonian boundary value problems for DEs arising from the application of the indirect method, based on Pontryagin's conditions, to optimal control problems. The main contribution of this paper is to show how to properly choose and use codes on popular scientific platforms (Fortran, Matlab, R) for solving some specific, challenging optimal control problems. The paper gives important indications on how to choose an initial mesh, how to handle the input parameters, and how to use a continuation technique for nonlinear problems in order to achieve accurate solutions via a BVP (boundary value problem) solver.

#### *1.3. Real-World Applications of DEs*

The featured paper [6] illustrates the power of DEs as leading mathematical tools for describing ecosystem dynamics. In particular, it describes some preliminary steps towards a conceptual description of population outbursts grounded in an environment-driven mechanism. The focus is on a three-species food chain represented by the Hastings–Powell model: by stochastically perturbing the value of some parameters, the authors show the emergence of on-off intermittency, i.e., an irregular alternation between stable phases and sudden bursts in population size. The strength of this paper lies in providing the first evidence of the possibility of on-off intermittent behavior in a food chain model.

The original point of view adopted in paper [3] illustrates the ability of DEs to model and analyze dynamical scenarios in different real-world applications. Here, the author exploits the analogy between the key mechanisms of contagion in the spread of an epidemic and in a referral marketing campaign, defined as viral because it involves person-to-person transmission. The theoretical concept of backward bifurcation, to be avoided in the epidemic context, could in viral marketing strengthen a campaign's chances of survival. However, the paper points out the possible introduction of a risk factor in the bistability range where, according to the chosen initial conditions, hysteresis-type behaviors can emerge.

#### **2. Conclusions**

I hope that this collection will be useful for those working in the area of modelling real-world applications through differential equations and for those who care about an accurate numerical approximation of their solutions. It is also addressed to readers who are willing to become familiar with differential equations which, due to their predictive abilities, represent the main mathematical tool for carrying out scenario analyses of our changing world [8].

#### **Funding:** This research received no external funding.

**Acknowledgments:** As the Guest Editor, I want to thank all the authors for contributing to the Special Issue with the interesting and valuable articles collected in this book. I would also like to thank all the referees for their thorough and timely reports on the submitted works. Finally, it is my pleasure to thank, in the person of Grace Du, all the editorial staff of the journal *Mathematics* for the pleasant cooperation during the preparation of the Special Issue and of this book.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Review* **Stability of Systems of Fractional-Order Differential Equations with Caputo Derivatives**

**Oana Brandibur 1, Roberto Garrappa 2,3 and Eva Kaslik 1,\***


<sup>3</sup> Member of the INdAM Research Group GNCS, Istituto Nazionale di Alta Matematica "Francesco Severi", Piazzale Aldo Moro 5, 00185 Rome, Italy

**\*** Correspondence: eva.kaslik@e-uvt.ro

**Abstract:** Systems of fractional-order differential equations present stability properties which differ in a substantial way from those of systems of integer order. In this paper, a detailed analysis of the stability of linear systems of fractional differential equations with Caputo derivative is proposed. Starting from the well-known Matignon results on the stability of single-order systems, for which a different proof is provided together with a clarification of a limit case, the investigation is then moved towards multi-order systems as well. Due to the key role played by the Mittag–Leffler function in representing the solution of linear systems of FDEs, a detailed analysis of the asymptotic behavior of this function and of its derivatives is also proposed. Some numerical experiments are presented to illustrate the main results.

**Keywords:** fractional differential equations; stability; linear systems; multi-order systems; Mittag– Leffler function

#### **1. Introduction**

The investigation of stability properties plays a prominent role in the qualitative theory of fractional-order systems, similarly as in the case of the classical theory of integer-order dynamical systems [1,2]. The classical Hartman–Grobman linearisation theorem, which states that the local behavior of a dynamical system in a neighborhood of a hyperbolic equilibrium is qualitatively equivalent to the behavior of its linearisation near the equilibrium, is extended to the case of fractional-order systems as well [3–5]. Consequently, linear stability analysis is of fundamental importance in the investigation of fractional-order systems, and, in particular, stability properties of linear autonomous systems of fractional-order differential equations play a key role in this context.

For single-order systems of fractional differential equations (FDEs), namely systems in which the FDEs have the same fractional order, the most important theoretical result, which may now be considered classical, is Matignon's stability theorem [6], recently generalized in [7] for the case when the fractional order belongs to the interval (0, 2).

Thus far, the investigation of stability properties of multi-order (incommensurate) fractional-order systems has unquestionably received less consideration. We refer to [8–11] for the stability analysis of incommensurate fractional-order systems with rational orders. Moreover, closely linked to this research topic, bounded input bounded output stability of systems with irrational transfer functions has been investigated in [12,13]. Very recently, the asymptotic properties of solutions of several classes of linear multi-order systems of fractional differential equations (such as systems with block triangular coefficient matrices) have been considered in [14].

The main difficulty in establishing necessary and sufficient conditions for the stability of multi-order linear systems of fractional differential equations (conceivably comparable to the classical Routh–Hurwitz conditions for integer-order systems) is due to the fact that a large number of parameters are involved: the system's coefficients, as well as multiple fractional orders. Undoubtedly, the complexity of the problem is positively correlated with the system's dimension.

**Citation:** Brandibur, O.; Garrappa, R.; Kaslik, E. Stability of Systems of Fractional-Order Differential Equations with Caputo Derivatives. *Mathematics* **2021**, *9*, 914. https://doi.org/10.3390/math9080914

Academic Editor: Fasma Diele

Received: 5 March 2021 Accepted: 19 April 2021 Published: 20 April 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The case of two-dimensional multi-order fractional-order systems has been fully investigated in [15–17]. On one hand, necessary and sufficient conditions for the asymptotic stability and instability of the fractional-order system have been obtained, in terms of the main diagonal elements and the determinant of the system's matrix, as well as the fractional orders of the Caputo derivatives. Moreover, necessary and sufficient fractional-order-independent conditions have also been presented, in terms of the main diagonal elements and the determinant of the system's matrix, which guarantee the asymptotic stability or instability of the considered two-dimensional system, regardless of the choice of the fractional orders considered in the system. These latter results prove to be especially useful in practical applications where the exact fractional orders of the Caputo derivatives are not precisely known.

It is important to note that multi-term fractional-order differential equations [18] and their qualitative properties are sharply linked to multi-order systems of fractional differential equations. We refer to [11] for a thorough presentation of the relationship between these two concepts. The investigation of stability properties of multi-term FDEs is so far limited to two-term and three-term fractional-order differential and difference equations, which have been recently studied in [19–22]. However, due to the increasing complexity of the problem, equations with four or more fractional terms have not yet been investigated.

This paper is organized as follows: Section 2 illustrates the statement of the problem and the main definitions. Due to its importance in the description of the solution of linear systems of FDEs, in Section 3 we provide a detailed description of the Mittag–Leffler function, of its derivatives, and of the corresponding asymptotic behavior. Section 4 investigates the stability properties of single-order systems of FDEs, by presenting the classical Matignon theorem and some simulations illustrating the different stability behaviors depending on the spectral properties of the system matrix. The stability analysis of multi-order systems is discussed in Section 5; since general results are far from being formulated in this case, we focus on some special cases and separately investigate two-dimensional systems, higher-dimensional systems with block-triangular structure, and higher-dimensional systems with some special fractional orders. Concluding remarks are finally provided in Section 6.

#### **2. Preliminaries**

Consider an *n*-dimensional fractional-order system with Caputo derivatives:

$${}^{\mathrm{C}}D^{\mathbf{q}}\mathbf{y}(t) = f(t, \mathbf{y}) \tag{1}$$

with $\mathbf{q} = (q_1, q_2, \ldots, q_n) \in (0,1]^n$, assuming that $f : [0,\infty) \times \mathbb{R}^n \to \mathbb{R}^n$ is a continuous function on its domain of definition, Lipschitz-continuous with respect to the second variable, and $\mathbf{y} : [0,\infty) \to \mathbb{R}^n$ is a vector-valued function. With ${}^{\mathrm{C}}D^{\mathbf{q}}\mathbf{y}(t)$, we denote the application of the Caputo derivative of order $0 < q_i \le 1$ to each component $y_i(t)$ of $\mathbf{y}(t)$, namely

$${}^{\mathrm{C}}D^{\mathbf{q}}\mathbf{y}(t) = \begin{pmatrix} {}^{\mathrm{C}}D^{q_1}y_1(t) \\ {}^{\mathrm{C}}D^{q_2}y_2(t) \\ \vdots \\ {}^{\mathrm{C}}D^{q_n}y_n(t) \end{pmatrix}, \quad {}^{\mathrm{C}}D^{q_i}y_i(t) := \frac{1}{\Gamma(1-q_i)} \int_0^t (t-\tau)^{-q_i}\, y_i'(\tau)\,d\tau.$$

Existence and uniqueness of the solution of initial value problems associated with system (1) is ensured by Corollary 2.4 from [14].

Whenever $q_1 = q_2 = \ldots = q_n$, system (1) is said to be single-order; otherwise, the term multi-order will be used.
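Since the componentwise Caputo derivative above is defined through a weakly singular integral, it can be approximated on a uniform grid. The following is a minimal Python sketch (not taken from the paper) of the classical L1 finite-difference scheme; the function name `caputo_l1` and the grid sizes are our own illustrative choices. It is checked against $y(t) = t$, whose exact Caputo derivative is $t^{1-q}/\Gamma(2-q)$.

```python
import math

def caputo_l1(y, t, q):
    """Approximate the Caputo derivative of order 0 < q < 1 at the points
    t[1], ..., t[-1] of a uniform grid, via the classical L1 scheme:
    D^q y(t_m) ~ h^(-q)/Gamma(2-q) * sum_k b_k * (y_{m-k} - y_{m-k-1}),
    with weights b_k = (k+1)^(1-q) - k^(1-q)."""
    h = t[1] - t[0]
    c = h ** (-q) / math.gamma(2.0 - q)
    out = []
    for m in range(1, len(t)):
        s = sum(((k + 1) ** (1 - q) - k ** (1 - q)) * (y[m - k] - y[m - k - 1])
                for k in range(m))
        out.append(c * s)
    return out

# For y(t) = t the exact Caputo derivative is t^(1-q) / Gamma(2-q);
# the L1 scheme reproduces it (up to round-off) on this linear function.
q, h, n = 0.5, 1e-3, 1000
t = [i * h for i in range(n + 1)]
approx = caputo_l1(t, t, q)      # y(t) = t, so the samples are t itself
exact = t[-1] ** (1 - q) / math.gamma(2 - q)
```

For smooth $y$, the L1 scheme is known to be first-order accurate in general; on piecewise-linear data, as here, the quadrature is exact, which makes the comparison a clean sanity check.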

Let us further assume that **y** = 0 is an equilibrium solution of system (1), i.e.,

$$f(t,0) = 0 \quad \text{for any } t \ge 0.$$

**Definition 1.** *Let $\alpha > 0$ and denote by $\phi(t, \mathbf{y}_0)$ the unique solution of (1) satisfying the initial condition $\mathbf{y}(0) = \mathbf{y}_0 \in \mathbb{R}^n$. Then:*


$$\|\phi(t,\mathbf{y}_0)\| = \mathcal{O}(t^{-\alpha}) \quad \text{as } t \to \infty.$$

**Remark 1.** *In the particular case of linear systems of fractional-order differential equations with constant coefficients, we say that* the system is stable/asymptotically stable/unstable *if and only if its trivial solution is stable/asymptotically stable/unstable.*

#### **3. Mittag–Leffler Functions, Derivatives and Asymptotic Behavior**

In the analysis of linear systems of FDEs, a crucial role is played by the Mittag–Leffler (ML) function [23]

$$E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + \beta)}, \quad \alpha > 0, \quad z \in \mathbb{C}, \tag{2}$$

where $\Gamma(x) = \int_0^{\infty} t^{x-1}\mathrm{e}^{-t}\,\mathrm{d}t$ is the Euler Gamma function. Since $\Gamma(k+1) = k!$, $k \in \mathbb{N}$, this function generalizes the exponential function when $\alpha = \beta = 1$, namely $E_{1,1}(z) = \mathrm{e}^z$. When $\beta = 1$, the notation $E_{\alpha}(z) := E_{\alpha,1}(z)$ is preferred.
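For moderate arguments, the series (2) can be summed directly. The snippet below is an illustrative truncated-series evaluator (our own sketch, not a robust algorithm; dedicated methods are required for large $|z|$), sanity-checked against the classical special cases $E_{1,1}(z) = \mathrm{e}^z$ and $E_{2,1}(-z^2) = \cos z$.

```python
import math

def mittag_leffler(z, alpha, beta=1.0, kmax=80):
    """Two-parameter Mittag-Leffler function via the truncated series (2).
    Illustrative only: reliable for moderate |z|, not for large arguments,
    where cancellation and slow convergence spoil the direct summation."""
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(kmax))

# Sanity checks against classical special cases:
e_val = mittag_leffler(1.0, 1.0)        # E_{1,1}(1) = e
cos_val = mittag_leffler(-4.0, 2.0)     # E_{2,1}(-z^2) = cos z, here z = 2
```

The truncation index `kmax = 80` is an arbitrary choice that keeps `math.gamma` well within double-precision range for the parameter values used here.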

For the purposes of this paper (the reasons will become clearer later on), it is convenient to introduce and study a further generalization of the ML function.

#### *3.1. The Prabhakar Function and Its Asymptotic Properties*

For three real parameters *α*, *β* and *γ*, the three-parameter Mittag–Leffler (ML) function, also known as the Prabhakar function [24], is defined by its series representation

$$E_{\alpha,\beta}^{\gamma}(z) = \frac{1}{\Gamma(\gamma)} \sum_{k=0}^{\infty} \frac{\Gamma(\gamma+k)\, z^k}{k!\,\Gamma(\alpha k + \beta)}, \quad \alpha > 0, \quad z \in \mathbb{C}.$$

This function is not only a generalization, to three parameters, of the two-parameter ML function $E_{\alpha,\beta}(z)$ (indeed, when $\gamma = 1$, it is $E^{1}_{\alpha,\beta}(z) = E_{\alpha,\beta}(z)$), but it also provides a simple and elegant way to represent derivatives of two-parameter ML functions, since

$$E_{\alpha,\beta}^{(m)}(z) := \frac{\mathrm{d}^m}{\mathrm{d}z^m} E_{\alpha,\beta}(z) = m!\, E_{\alpha,\alpha m+\beta}^{m+1}(z), \quad m = 0, 1, 2, \dots \tag{3}$$

as one can easily check after a term-by-term differentiation of (2).
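The term-by-term differentiation behind (3) can also be checked numerically: differentiating (2) $m$ times gives $\sum_{j\ge 0} \frac{(j+m)!}{j!}\frac{z^j}{\Gamma(\alpha(j+m)+\beta)}$, which should coincide with $m!\,E^{m+1}_{\alpha,\alpha m+\beta}(z)$. A small sketch (function names and the truncation `kmax` are our own choices):

```python
import math

def ml_deriv_series(z, alpha, beta, m, kmax=80):
    """m-th derivative of E_{alpha,beta} by term-by-term differentiation
    of the power series (2)."""
    s = 0.0
    for j in range(kmax):
        coeff = math.factorial(j + m) / math.factorial(j)  # k(k-1)...(k-m+1)
        s += coeff * z ** j / math.gamma(alpha * (j + m) + beta)
    return s

def prabhakar_series(z, alpha, beta, gamma_, kmax=80):
    """Three-parameter (Prabhakar) function E^gamma_{alpha,beta} by its series."""
    s = 0.0
    for k in range(kmax):
        s += (math.gamma(gamma_ + k) / math.gamma(gamma_)
              * z ** k / (math.factorial(k) * math.gamma(alpha * k + beta)))
    return s

# Identity (3): d^m/dz^m E_{alpha,beta}(z) = m! * E^{m+1}_{alpha, alpha*m+beta}(z)
alpha, beta, m, z = 0.7, 1.0, 2, -1.3
lhs = ml_deriv_series(z, alpha, beta, m)
rhs = math.factorial(m) * prabhakar_series(z, alpha, alpha * m + beta, m + 1)
```

The two truncated series agree term by term analytically, so `lhs` and `rhs` should match to within round-off.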

In order to introduce a result about the Laplace transform (LT), it is necessary to introduce what is known as the *Prabhakar kernel*

$$e_{\alpha,\beta}^{\gamma}(t;\lambda) = t^{\beta-1}\, E_{\alpha,\beta}^{\gamma}(t^{\alpha}\lambda), \quad t > 0, \quad \lambda \in \mathbb{C},$$

for which the following analytical representation of the LT is available:

$$\mathcal{E}_{\alpha,\beta}^{\gamma}(s;\lambda) := \mathcal{L}\left[e_{\alpha,\beta}^{\gamma}(t;\lambda); s\right] = \frac{s^{\alpha\gamma-\beta}}{(s^{\alpha}-\lambda)^{\gamma}}, \quad \Re(s) > 0, \quad |s| > |\lambda|^{\frac{1}{\alpha}}.$$

Having in mind the stability analysis of linear FDEs, whose solutions will be expressed in terms of Mittag–Leffler functions and their derivatives, it is of interest to recall some results about the asymptotic behavior of the Prabhakar function in the complex plane.

In particular, for large arguments and 0 < *α* ≤ 1, we first identify exponential and algebraic expansions, respectively given by

$$\begin{aligned} \mathcal{F}_{\alpha,\beta}^{\gamma}(z) &= \frac{1}{\Gamma(\gamma)}\, \mathrm{e}^{z^{1/\alpha}}\, z^{\frac{\gamma-\beta}{\alpha}}\, \frac{1}{\alpha^{\gamma}} \sum_{j=0}^{\infty} c_j\, z^{-\frac{j}{\alpha}} \\ \mathcal{A}_{\alpha,\beta}^{\gamma}(z) &= \frac{z^{-\gamma}}{\Gamma(\gamma)} \sum_{j=0}^{\infty} \frac{(-1)^j\, \Gamma(j+\gamma)}{j!\, \Gamma(\beta-\alpha(j+\gamma))}\, z^{-j} \end{aligned}$$

and, thanks to the results obtained by Paris [25,26], we know that

$$E_{\alpha,\beta}^{\gamma}(z) \sim \begin{cases} \mathcal{F}_{\alpha,\beta}^{\gamma}(z) + \mathcal{A}_{\alpha,\beta}^{\gamma}(z\mathrm{e}^{\mp\pi\mathrm{i}}) & |\arg z| < \frac{\alpha\pi}{2} \\ \mathcal{A}_{\alpha,\beta}^{\gamma}(z\mathrm{e}^{\mp\pi\mathrm{i}}) + \mathcal{F}_{\alpha,\beta}^{\gamma}(z) & \frac{\alpha\pi}{2} < |\arg z| < \alpha\pi \\ \mathcal{A}_{\alpha,\beta}^{\gamma}(z\mathrm{e}^{\mp\pi\mathrm{i}}) & \alpha\pi < |\arg z| \le \pi \end{cases}, \quad |z| \to \infty$$

with the sign in $\mathrm{e}^{\mp\pi\mathrm{i}}$ to be taken negative for $z$ in the upper complex half-plane and positive otherwise. Following the convention adopted in [27], in each sum we have indicated the dominant term first, namely the exponential term $\mathcal{F}_{\alpha,\beta}^{\gamma}(z)$ when $|\arg z| < \frac{\alpha\pi}{2}$ and the algebraic term $\mathcal{A}_{\alpha,\beta}^{\gamma}(z\mathrm{e}^{\mp\pi\mathrm{i}})$ when $\frac{\alpha\pi}{2} < |\arg z| < \alpha\pi$. The lines $|\arg z| = \alpha\pi$ and $|\arg z| = \frac{\alpha\pi}{2}$ are, respectively, the Stokes and anti-Stokes lines, where the asymptotic expansions change their behavior. The above result is graphically summarized in Figure 1.

**Figure 1.** Asymptotic behavior of the Prabhakar function in the complex plane.

We first recall that the coefficients $c_j$ in the asymptotic expansion $\mathcal{F}_{\alpha,\beta}^{\gamma}(z)$ are obtained [25] from the inverse factorial expansion, for $|s| \to \infty$ in $|\arg(s)| \le \pi - \epsilon$ and any arbitrarily small $\epsilon > 0$, of

$$F_{\alpha,\beta}^{\gamma}(s) := \frac{\Gamma(\gamma+s)\,\Gamma(\alpha s+\psi)}{\Gamma(s+1)\,\Gamma(\alpha s+\beta)} = \alpha^{1-\gamma}\left(1 + \sum_{j=1}^{\infty} \frac{c_j}{(\alpha s+\psi)_j}\right) \tag{4}$$

with $(x)_j = \Gamma(x+j)/\Gamma(x)$ the Pochhammer symbol and $\psi = 1 - \gamma + \beta$. They can be evaluated by means of a sophisticated algorithm introduced in [25] and also explained in [28]. The first few coefficients $c_k$ are available in [26].

Based on the asymptotic properties of the Prabhakar function, we obtain the asymptotic equivalence:

$$e_{\alpha,\beta}^{\gamma}(t;\lambda) \sim \begin{cases} \lambda^{\frac{\gamma-\beta}{\alpha}}\, t^{\gamma-1}\, \mathrm{e}^{t\lambda^{1/\alpha}} & \text{if } |\arg(\lambda)| \le \frac{\alpha\pi}{2} \\ \dfrac{\mathrm{e}^{\pm\gamma\pi\mathrm{i}}}{\lambda^{\gamma}\,\Gamma(\beta-\alpha\gamma)}\, t^{\beta-\alpha\gamma-1} & \text{if } |\arg(\lambda)| > \frac{\alpha\pi}{2} \end{cases} \tag{5}$$

as $t \to \infty$, where the sign in the term $\mathrm{e}^{\pm\gamma\pi\mathrm{i}}$ is positive if $\lambda$ is in the upper complex half-plane, and negative otherwise.

#### *3.2. Asymptotic Behavior of Derivatives of the ML Function*

Thanks to the relationship (3) between derivatives of the ML function and the Prabhakar function, the investigation of the asymptotic behavior of any *m*-th order derivative of *Eα*,*β*(*z*) is hence possible by applying the corresponding results for the Prabhakar function and afterwards replacing *β* with *αm* + *β* and *γ* with *m* + 1.

To this purpose, we first observe that, after these replacements, the function $F_{\alpha,\beta}^{\gamma}(s)$ in (4) becomes $F_{\alpha,\alpha m+\beta}^{m+1}(s) = (s+1)_m/(\alpha s+\psi)_m$, with $\psi = \alpha m + \beta - m$. Hence, the coefficients $c_j$ vanish for $j = m+1, m+2, \ldots$ and the exponential and algebraic expansions read

$$\begin{aligned} \mathcal{F}_{\alpha,\alpha m+\beta}^{m+1}(z) &= \frac{1}{m!}\, \mathrm{e}^{z^{1/\alpha}}\, z^{\frac{1-\alpha m-\beta}{\alpha}}\, \frac{1}{\alpha^{m+1}} \sum_{j=0}^{m} c_j\, z^{\frac{m-j}{\alpha}} \\ \mathcal{A}_{\alpha,\alpha m+\beta}^{m+1}(z) &= \frac{1}{m!} \sum_{j=0}^{\infty} \frac{(-1)^j\,(j+1)_m}{\Gamma(\beta-\alpha(j+1))}\, z^{-j-m-1} \end{aligned}$$

Therefore, by taking into account just the dominant expansions in each sector of the complex plane delimited by Stokes and anti-Stokes lines, and just leading terms in each expansion, we can describe the asymptotic behavior of derivatives of the ML function as

$$\frac{\mathrm{d}^m}{\mathrm{d}z^m} E_{\alpha,\beta}(z) \sim \begin{cases} \dfrac{1}{\alpha^{m+1}}\, \mathrm{e}^{z^{1/\alpha}}\, z^{\frac{m+1-\alpha m-\beta}{\alpha}} & |\arg z| < \frac{\alpha\pi}{2} \\ (-1)^{m+1}\, \dfrac{m!}{\Gamma(\beta-\alpha)}\, z^{-m-1} & \alpha\pi < |\arg z| \le \pi \end{cases}, \quad |z| \to \infty$$

#### *3.3. Behavior of Derivatives of the ML Function When $|\arg z| = \frac{\alpha\pi}{2}$*

It remains to investigate the behavior along the anti-Stokes line $|\arg z| = \frac{\alpha\pi}{2}$, where both the exponential and the algebraic terms are present. We therefore consider

$$z = \rho\, \mathrm{e}^{\pm\frac{\alpha\pi}{2}\mathrm{i}}, \quad \rho > 0$$

and, for large *ρ* = |*z*|, it is

$$\begin{split} \frac{\mathrm{d}^m}{\mathrm{d}z^m} E_{\alpha,\beta}(z) &\sim \mathrm{e}^{\pm\mathrm{i}\rho^{1/\alpha}}\, \frac{1}{\alpha^{m+1}} \sum_{j=0}^{m} c_j\, \rho^{\frac{1-\alpha m-\beta+m-j}{\alpha}}\, \mathrm{e}^{\pm(1-\alpha m-\beta+m-j)\frac{\pi}{2}\mathrm{i}} \, + \\ &+ \sum_{j=0}^{\infty} \frac{(-1)^j\,(j+1)_m}{\Gamma(\beta-\alpha(j+1))}\, \frac{1}{\rho^{m+1+j}}\, \mathrm{e}^{\mp\mathrm{i}(m+1+j)\left(\frac{\alpha}{2}-1\right)\pi}. \end{split}$$

Clearly, the second term asymptotically goes to zero as $\rho \to \infty$. The first term, instead, tends to zero in modulus only for suitable values of $\alpha$ and $\beta$ such that $1-\alpha m-\beta+m-j \le 0$ for any $j \in \{0, 1, \ldots, m\}$, namely, when $1-\alpha m-\beta+m \le 0$ or, equivalently, when

$$m \le \frac{\beta - 1}{1 - \alpha}.$$

When we consider the one-parameter ML function $E_{\alpha}(z)$, namely $\beta = 1$, which is the instance of the ML function involved in the stability analysis of linear FDEs, for $\arg z = \pm\frac{\alpha\pi}{2}$ only $E_{\alpha}(z)$ itself asymptotically converges to $1/\alpha$ as $|z| \to \infty$, while any $m$-th order derivative of $E_{\alpha}(z)$, with $m \ge 1$, is unbounded as $|z| \to \infty$.

This situation is illustrated in Figure 2, where we report the first derivatives of $E_{\alpha}(z)$ for $\alpha = 0.6$ and $\alpha = 0.8$, evaluated for $z$ along the anti-Stokes line $\arg z = \frac{\alpha\pi}{2}$ (results are similar when $\arg z = -\frac{\alpha\pi}{2}$).

**Figure 2.** Modulus of $E_{\alpha}(z)$ and of its first and second derivatives for $\arg z = \frac{\alpha\pi}{2}$, with $\alpha = 0.6$ (**left** plot) and $\alpha = 0.8$ (**right** plot).

**Remark 2.** *The behavior on the anti-Stokes line $|\arg z| = \frac{\alpha\pi}{2}$ of the $m$-th order derivatives of the ML function $E_{\alpha}(z)$, which are unbounded as $|z| \to \infty$ for $m \ge 1$, is quite different from that of the exponential function $\mathrm{e}^z$ (namely, the special instance of $E_{\alpha}(z)$ for $\alpha = 1$). Indeed, the derivatives of the exponential are never unbounded on the corresponding anti-Stokes line $|\arg z| = \frac{\pi}{2}$, since there $\left|\mathrm{d}^m/\mathrm{d}z^m\, \mathrm{e}^z\right| = 1$ for any $m = 0, 1, \ldots$*

#### **4. Stability of Linear Systems of Single-Order FDEs**

We first consider the following linear system of Caputo-type fractional-order differential equations of the same fractional order:

$${}^{\mathrm{C}}D^q y(t) = A y(t), \tag{6}$$

where $q \in (0,1]$ and $A \in \mathbb{R}^{n \times n}$, coupled with the initial condition $y(0) = y_0 \in \mathbb{R}^n$.

It is important to emphasize that system (6) is equivalent to the following system of weakly singular Volterra integral equations of convolution type (see, for example, [29,30]):

$$y(t) = y\_0 + A \int\_0^t \frac{(t - \tau)^{q-1}}{\Gamma(q)} y(\tau) d\tau. \tag{7}$$

For the most important advances regarding the general theory of linear Volterra integral equations, including the case when the convolution kernel is completely monotonic, we refer to [31–34].
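To make the equivalence with (7) concrete, a scalar instance can be discretized by a basic implicit product-rectangle rule, obtained by integrating the kernel $(t-s)^{q-1}/\Gamma(q)$ exactly over each subinterval. The sketch below is our own illustrative code (not one of the solvers discussed in the cited references); a convenient sanity check is that for $q = 1$ it reduces to the implicit Euler method.

```python
import math

def solve_scalar_fde(lam, q, T, n):
    """Solve the scalar version of the Volterra equation (7),
        y(t) = 1 + lam * int_0^t (t-s)^(q-1)/Gamma(q) * y(s) ds,
    by an implicit product-rectangle rule on a uniform grid with n steps.
    A basic first-order sketch, not a production solver."""
    h = T / n
    c = lam * h ** q / math.gamma(q + 1)
    y = [1.0]
    for m in range(1, n + 1):
        # weights (j^q - (j-1)^q) come from integrating (t_m - s)^(q-1) exactly
        s = sum((j ** q - (j - 1) ** q) * y[m - j + 1] for j in range(2, m + 1))
        y.append((1.0 + c * s) / (1.0 - c))  # implicit step in y_m
    return y

# Sanity check: for q = 1 the scheme is implicit Euler, so with lam = -1,
# T = 1 the final value should be close to exp(-1).
y = solve_scalar_fde(-1.0, 1.0, 1.0, 1000)
# Fractional case q = 0.6: the exact solution E_q(-t^q) decays, staying positive.
yf = solve_scalar_fde(-1.0, 0.6, 5.0, 400)
```

The implicit treatment of the newest value $y_m$ keeps the recursion stable for $\mathrm{Re}(\lambda) < 0$, mirroring the slow algebraic decay of the exact solution in the fractional case.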

The characteristic equation associated with system (6) is

$$\det(s^q I - A) = 0,\tag{8}$$

where, according to [35], the principal value (first branch) of the complex power function is considered. Therefore, it is easy to see that *s* is a root of the characteristic Equation (8) if and only if there exists an eigenvalue *λ* of the matrix *A* such that

$$s^q = \lambda. \tag{9}$$

Hence, this leads to the following characterization of the stability properties of system (6), in terms of the roots of its characteristic equation:

**Proposition 1.** *The linear system* (6) *is asymptotically stable if and only if*

$$
\sigma(A) \subset S\_q,
$$

*where σ*(*A*) *denotes the spectrum of the matrix A and*

$$S\_q = \left\{ \lambda \in \mathbb{C} \;:\; s^q \neq \lambda \;\; \forall\, \Re(s) \ge 0 \right\}.$$

With the aim of investigating the stability properties of system (6) by characterizing the *stability region* $S\_q$, and presenting a concise proof of Matignon's theorem [6], it is convenient to use the Jordan normal form of the matrix $A$. Indeed, let us consider a nonsingular matrix $P \in \mathbb{C}^{n \times n}$ such that

$$A = PJP^{-1}, \quad J = \begin{pmatrix} J\_1 & 0 & \dots & 0 \\ 0 & J\_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & J\_p \end{pmatrix},$$

where $J\_k$, $k = 1, \dots, p$, are Jordan blocks

$$J\_k = \begin{pmatrix} \lambda\_k & 1 & 0 & \dots & 0 & 0\\ 0 & \lambda\_k & 1 & \dots & 0 & 0\\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & 0 & \dots & \lambda\_k & 1\\ 0 & 0 & 0 & \dots & 0 & \lambda\_k \end{pmatrix}$$

and $\lambda\_k$ are eigenvalues of the matrix $A$. The size of the largest Jordan block $J\_k$ of $A$ associated with the eigenvalue $\lambda\_k$ is called the *index* of $\lambda\_k$ [36]. On the other hand, the total number of Jordan blocks associated with a given eigenvalue $\lambda\_k$ in the Jordan normal form of the matrix $A$ is the *geometric multiplicity* of the eigenvalue $\lambda\_k$. Moreover, the sum of the sizes of all Jordan blocks corresponding to $\lambda\_k$ is the *algebraic multiplicity* of $\lambda\_k$. Therefore, the index of an eigenvalue $\lambda\_k$ is equal to 1 if and only if its algebraic and geometric multiplicities are equal.

With these observations, we next give a slightly modified version of the classical result of Matignon, to fix a small imprecision in the second statement, related to the use of the *geometric multiplicity* instead of the *index* of an eigenvalue:

**Theorem 1** (Matignon, 1996 [6])**.** *The linear system* (6) *is*

*i. $O(t^{-q})$-asymptotically stable if and only if*

$$\sigma(A) \subset S\_q = \left\{ \lambda \in \mathbb{C} \; : \; |\arg(\lambda)| > \frac{q\pi}{2} \right\}.$$

*ii. stable if and only if $\sigma(A) \subset \overline{S\_q}$ and the eigenvalues of $A$ which satisfy $|\arg(\lambda)| = \frac{q\pi}{2}$ have index* 1*.*

**Proof.** With the notations introduced previously, denoting *z*(*t*) = *Py*(*t*), it is easy to verify that system (6) is equivalent to

$$^{C}D^{q}z(t) = Jz(t). \tag{10}$$

Applying the LT to the linear system (10) leads to the following formula for the LT of the vector function *z*(*t*):

$$Z(s) = s^{q-1}(s^qI - J)^{-1}z(0).\tag{11}$$

Since the Jordan normal form $J$ is a block diagonal matrix, the matrix $(s^q I - J)^{-1}$ is also block diagonal, and its blocks are upper triangular matrices of the form:

$$(s^q I - J\_k)^{-1} = \begin{pmatrix} (s^q - \lambda\_k)^{-1} & (s^q - \lambda\_k)^{-2} & \dots & (s^q - \lambda\_k)^{-d\_k} \\ 0 & (s^q - \lambda\_k)^{-1} & \dots & (s^q - \lambda\_k)^{-(d\_k - 1)} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & (s^q - \lambda\_k)^{-1} \end{pmatrix}$$

where $d\_k$ represents the dimension of the $k$-th Jordan block $J\_k$.

Correspondingly, the Laplace transform $Z(s)$ is made up of "blocks" (of size $d\_k$) of the form

$$Z\_k(s) = s^{q-1}(s^q I - J\_k)^{-1} z\_k(0) = \begin{pmatrix} \sum\_{j=1}^{d\_k} \frac{s^{q-1}}{(s^q - \lambda\_k)^j} z\_{k,j}(0) \\ \sum\_{j=2}^{d\_k} \frac{s^{q-1}}{(s^q - \lambda\_k)^{j-1}} z\_{k,j}(0) \\ \vdots \\ \frac{s^{q-1}}{s^q - \lambda\_k} z\_{k,d\_k}(0) \end{pmatrix}, \quad k = \overline{1, p}.$$

Applying the inverse LT, and taking into account that

$$\mathcal{L}^{-1}\left[\frac{s^{q-1}}{(s^q-\lambda\_k)^m};t\right] = t^{(m-1)q}E^{m}\_{q,(m-1)q+1}(t^q\lambda\_k) = \mathfrak{e}^{m}\_{q,(m-1)q+1}(t;\lambda\_k), \quad m \in \mathbb{N}^\*,$$

we obtain:

$$z\_k(t) = \begin{pmatrix} \sum\_{j=1}^{d\_k} \mathfrak{e}\_{q,(j-1)q+1}^j(t; \lambda\_k) z\_{k,j}(0) \\ \sum\_{j=2}^{d\_k} \mathfrak{e}\_{q,(j-2)q+1}^{j-1}(t; \lambda\_k) z\_{k,j}(0) \\ \vdots \\ \mathfrak{e}\_{q,1}^1(t; \lambda\_k) z\_{k,d\_k}(0) \end{pmatrix}, \quad k = \overline{1,p}.$$

Based on (5), we obtain the following asymptotic equivalence:

$$\mathfrak{e}\_{q,(m-1)q+1}^{m}(t;\lambda) = t^{(m-1)q} E\_{q,(m-1)q+1}^{m}(t^q \lambda) \sim \begin{cases} \frac{\lambda^{(m-1)(\frac{1}{q}-1)}}{(m-1)!\,q^m}\, t^{m-1} e^{\lambda^{1/q}t} & \text{if } |\arg(\lambda)| \le \frac{q\pi}{2} \\ \frac{(-1)^m}{\lambda^m \Gamma(1-q)}\, t^{-q} & \text{if } |\arg(\lambda)| > \frac{q\pi}{2} \end{cases}$$

as $t \to \infty$, where $m \in \mathbb{N}^\*$.

Therefore, the following conclusions can be drawn:


With the above observations, the conclusions of Matignon's theorem readily follow. We emphasize that, for the case of statement *ii.*, if there exists an eigenvalue of $A$ which satisfies $|\arg(\lambda)| = \frac{q\pi}{2}$, the solutions of (6) are bounded if and only if the size of the largest Jordan block associated with this critical eigenvalue is equal to 1, i.e., the index of the eigenvalue is 1.

**Remark 3.** *The above proof slightly differs from the one in [6]. Matignon's proof, indeed, makes use of derivatives of the ML function instead of the Prabhakar kernel $\mathfrak{e}^{\gamma}\_{\alpha,\beta}(t; \lambda)$ as in the proof of Theorem 1. A link between the two proofs can, however, be easily established in view of the relationship (3) between derivatives of the ML function and the Prabhakar function.*

**Remark 4.** *Matignon's theorem implies that, if* 0 < *q*<sup>1</sup> < *q*<sup>2</sup> ≤ 1 *and system* (6) *is asymptotically stable for q* = *q*2*, then it will be asymptotically stable for q* = *q*<sup>1</sup> *as well. In particular, if the classical integer-order system y*˙ = *Ay is asymptotically stable (i.e., all eigenvalues of A have negative real part), it follows that the fractional-order system* (6) *is asymptotically stable, for any fractional-order q* ∈ (0, 1)*.*
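Statement *i.* of Matignon's theorem, together with the monotonicity observation of Remark 4, translates directly into a numerical test on the spectrum. A minimal sketch follows (the function name is ours; the boundary case of statement *ii.*, which requires the Jordan index, is deliberately not handled):

```python
import numpy as np

def matignon_asymptotically_stable(A, q):
    """Spectral condition of Matignon's theorem (statement i.):
    every eigenvalue of A must satisfy |arg(lambda)| > q*pi/2.
    Boundary eigenvalues (|arg| = q*pi/2) are reported as unstable here,
    since deciding boundedness would require the Jordan index."""
    eigs = np.linalg.eigvals(np.asarray(A, dtype=float))
    return bool(np.all(np.abs(np.angle(eigs)) > q * np.pi / 2))

# For q = 1 this is the classical test Re(lambda) < 0:
A = np.array([[0.0, 1.0], [-1.0, -1.0]])        # eigenvalues (-1 +/- i*sqrt(3))/2
print(matignon_asymptotically_stable(A, q=1.0))   # -> True
print(matignon_asymptotically_stable(A, q=2/3))   # -> True, a fortiori (Remark 4)
```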

**Example 1.** *To present numerical evidence of the above results, we consider here the linear system of FDEs (6), with fractional order $q = 2/3$ and the coefficient matrix $A$ chosen as one of the following four matrices:*

$$A\_{1} = \begin{pmatrix} 1 & -\sqrt{3} & \frac{1}{4} & 0 \\ \sqrt{3} & 1 & 0 & \frac{1}{4} \\ 0 & 0 & 1 & -\sqrt{3} \\ 0 & 0 & \sqrt{3} & 1 \end{pmatrix}, \quad A\_{2} = \begin{pmatrix} 1 & -\sqrt{3} & 0 & 0 \\ \sqrt{3} & 1 & 0 & 0 \\ 0 & 0 & 1 & -\sqrt{3} \\ 0 & 0 & \sqrt{3} & 1 \end{pmatrix}$$

$$A\_{3} = \begin{pmatrix} 1 - \epsilon & -\sqrt{3} & \frac{1}{4} & 0 \\ \sqrt{3} & 1 - \epsilon & 0 & \frac{1}{4} \\ 0 & 0 & 1 - \epsilon & -\sqrt{3} \\ 0 & 0 & \sqrt{3} & 1 - \epsilon \end{pmatrix}, \quad A\_{4} = \begin{pmatrix} 1 + \epsilon & -\sqrt{3} & 0 & 0 \\ \sqrt{3} & 1 + \epsilon & 0 & 0 \\ 0 & 0 & 1 + \epsilon & -\sqrt{3} \\ 0 & 0 & \sqrt{3} & 1 + \epsilon \end{pmatrix}.$$

*The solution $y(t) = E\_q(t^q A)y\_0$ is computed by direct evaluation of the matrix ML function, thanks to the algorithm described in [37], with the initial condition $y\_0 = (1, -4, -2, 4)^T$. The value $\epsilon = 0.1$ is used in $A\_3$ and $A\_4$.*

*The asymptotic behavior of the solution y*(*t*) *depends on the spectral properties of the matrix. In particular, we observe that:*

*$A\_1$ has two eigenvalues $\lambda\_{1/2} = 2e^{\pm \frac{q\pi}{2}i}$, lying* on the border of the stability sector *$S\_q$, both having* index *2; according to Theorem 1, the system produces unbounded solutions, as clearly shown in the left plot of Figure 3;*

*$A\_2$ has the same two eigenvalues $\lambda\_{1/2} = 2e^{\pm \frac{q\pi}{2}i}$ as $A\_1$, lying* on the border of the stability sector *$S\_q$, but their* index *is now* 1*; the expected bounded solutions are shown in the right plot of Figure 3;*

*$A\_3$ has two eigenvalues $\lambda\_{1/2}$ with* index *2, as $A\_1$, but now they lie* inside the stability sector *$S\_q$; the asymptotically stable solutions are illustrated in the left plot of Figure 4;*

*$A\_4$ has two eigenvalues $\lambda\_{1/2}$ with* index *1, as $A\_2$, but lying* outside the stability sector *$S\_q$; the resulting unbounded solutions are illustrated in the right plot of Figure 4.*

**Figure 3.** Solutions of the linear system $^CD^q y(t) = Ay(t)$, with $q = 2/3$, for $A = A\_1$ (**left** plot) and $A = A\_2$ (**right** plot).

**Figure 4.** Solutions of the linear system $^CD^q y(t) = Ay(t)$, with $q = 2/3$, for $A = A\_3$ (**left** plot) and $A = A\_4$ (**right** plot).
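The spectral picture of Example 1 can be verified numerically. The following sketch (variable names are ours) only checks the position of the eigenvalues relative to the sector $S\_q$; the index, i.e., the Jordan structure, is not visible from the eigenvalues alone:

```python
import numpy as np

q = 2/3
s3 = np.sqrt(3.0)
B = np.array([[1.0, -s3], [s3, 1.0]])   # 2x2 block: eigenvalues 1 +/- i*sqrt(3)
Z = np.zeros((2, 2))
C = np.eye(2) / 4                       # coupling block of A1 and A3

A1 = np.block([[B, C], [Z, B]])         # border eigenvalues, index 2
A2 = np.block([[B, Z], [Z, B]])         # border eigenvalues, index 1

eps = 0.1
A3 = A1 - eps * np.eye(4)               # spectrum shifted inside the sector
A4 = A2 + eps * np.eye(4)               # spectrum shifted outside the sector

for name, M in [("A1", A1), ("A2", A2), ("A3", A3), ("A4", A4)]:
    args = np.abs(np.angle(np.linalg.eigvals(M)))
    print(name, np.round(args, 4), "threshold q*pi/2 =", round(q * np.pi / 2, 4))
```

For `A1` and `A2` the printed arguments coincide (up to rounding) with the threshold $q\pi/2 = \pi/3$; for `A3` they exceed it, and for `A4` they fall below it, matching Figures 3 and 4.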

#### **5. Stability of Linear Multi-Order Systems of FDEs**

Extending Matignon's theorem to the case of systems of FDEs with multiple fractional orders raises several technical difficulties, and, consequently, with the current state of the art, we are unable to present an exhaustive theory regarding this matter.

One of the technical difficulties that should be mentioned in this context is the fact that, for a general multi-order system of the form

$$^C D^q y(t) = A y(t),\tag{12}$$

where $A \in \mathbb{R}^{n \times n}$, $q = (q\_1, q\_2, \dots, q\_n) \in (0, 1]^n$ (such that not all $q\_i$ are equal), considering the Jordan normal form $J$ of the matrix $A$ and a nonsingular matrix $P \in \mathbb{C}^{n \times n}$ such that $A = PJP^{-1}$ (similarly as in the previous section), the transformation $z(t) = Py(t)$ does not lead to an equivalent system of the form

$${}^{C}D^{q}z(t) = Jz(t).$$

Therefore, different theoretical approaches should be used to tackle linear multi-order systems of FDEs.

Using another approach, namely the Laplace transform method, we first obtain the following system:

$$s^{q\_i}Y\_i(s) - s^{q\_i - 1}y\_i(0) = \sum\_{j=1}^n a\_{ij}Y\_j(s), \quad i = \overline{1, n}, \tag{13}$$

where $Y\_i(s)$ is the Laplace transform of the $i$-th component $y\_i(t)$ of the solution $y(t)$.

System (13) is equivalent to the following system:

$$
\Delta(s) \cdot \begin{pmatrix} Y\_1(s) \\ Y\_2(s) \\ \vdots \\ Y\_n(s) \end{pmatrix} = \begin{pmatrix} b\_1(s) \\ b\_2(s) \\ \vdots \\ b\_n(s) \end{pmatrix},
$$

where $b\_i(s) = s^{q\_i - 1} y\_i(0)$, for any $i = \overline{1, n}$, and

$$\Delta(s) = \mathrm{diag}(s^{q\_1}, s^{q\_2}, \dots, s^{q\_n}) - A.$$

Using standard properties of the Laplace transform [8,14,35], the following result holds:

**Theorem 2.** *The multi-order system* (12) *is asymptotically stable if all the roots of the characteristic equation*

$$\det \Delta(s) = 0 \tag{14}$$

*have negative real parts.*

It is important to point out that, for large-scale systems with many different fractional orders for the Caputo derivatives, the analysis of the roots of the characteristic Equation (14) is a very difficult and complex task.

Nevertheless, the case of two-dimensional linear multi-order systems has been fully analyzed in [17], and a summary of the main results will be presented in the next section.

#### *5.1. Stability of Two-Dimensional Systems of FDEs with Different Fractional Orders*

In the general case of a two-dimensional linear system of fractional-order differential equations:

$$\begin{cases} \ ^{\text{C}}D^{q\_1}y\_1(t) = a\_{11}y\_1(t) + a\_{12}y\_2(t) \\ \ ^{\text{C}}D^{q\_2}y\_2(t) = a\_{21}y\_1(t) + a\_{22}y\_2(t) \end{cases} \tag{15}$$

where $A = (a\_{ij}) \in \mathbb{R}^{2 \times 2}$ and $q\_1, q\_2 \in (0, 1]$, applying the LT leads to the following characteristic equation:

$$\det(\mathrm{diag}(s^{q\_1}, s^{q\_2}) - A) = 0,$$

which can be written as

$$s^{q\_1 + q\_2} - a\_{11}s^{q\_2} - a\_{22}s^{q\_1} + \det(A) = 0,\tag{16}$$

where $s^{q\_1}$ and $s^{q\_2}$ represent the principal values (first branches) of the corresponding complex power functions [35].

Employing asymptotic properties and the Final Value Theorem of the LT [12,35], the following result [16] holds:

#### **Proposition 2.**


In general, computing the roots of the characteristic Equation (16) is not a straightforward task. Thus, starting from Proposition 2, we seek necessary and sufficient conditions, involving the coefficients $a\_{11}$ and $a\_{22}$ of the main diagonal of the matrix $A$ as well as the determinant $\det(A)$, which guarantee the stability or instability of system (15).

We first concentrate our attention on *fractional-order-dependent stability and instability conditions*, as described below. The proof of the following results is rather elaborate, involving the root locus method, and has been presented in detail in [17]. Note that only the case $\det(A) > 0$ is discussed here, as $\det(A) < 0$ implies that system (15) is unstable for any fractional orders $(q\_1, q\_2) \in (0, 1]^2$ (in fact, it is trivial to show that, if $\det(A) < 0$, the characteristic Equation (16) has at least one positive real root).

**Lemma 1.** *Let $\delta > 0$, $q\_1, q\_2 \in (0, 1]$, and consider the smooth parametric curve in the $(a\_{11}, a\_{22})$ plane defined by*

$$\Gamma(\delta, q\_1, q\_2) \;:\quad \begin{cases} a\_{11} = \delta^{\frac{q\_1}{q\_1 + q\_2}}\, h(\omega, q\_1, q\_2) \\ a\_{22} = \delta^{\frac{q\_2}{q\_1 + q\_2}}\, h(-\omega, q\_1, q\_2) \end{cases} \quad \omega \in \mathbb{R},$$

*where:*

$$h(\omega, q\_1, q\_2) = \begin{cases} \rho\_2(q\_1, q\_2)e^{q\_1\omega} - \rho\_1(q\_1, q\_2)e^{-q\_2\omega}, & \text{if } q\_1 \neq q\_2\\ \cos\frac{q\pi}{2} - \omega, & \text{if } q\_1 = q\_2 := q \end{cases}$$

*with the functions $\rho\_1(q\_1, q\_2)$ and $\rho\_2(q\_1, q\_2)$ defined for $q\_1 \neq q\_2$ as*

$$\rho\_k(q\_1, q\_2) = \frac{\sin \frac{q\_k \pi}{2}}{\sin \frac{(q\_2 - q\_1)\pi}{2}} \quad \text{for } k = \overline{1, 2}.$$

*The following statements hold:*


**Theorem 3** (Fractional-order-dependent stability and instability results)**.**

*Let $\det(A) = \delta > 0$ and $q\_1, q\_2 \in (0, 1]$ be arbitrarily fixed. Consider the curve $\Gamma(\delta, q\_1, q\_2)$ and the function $\phi\_{\delta,q\_1,q\_2} : \mathbb{R} \to \mathbb{R}$ given by Lemma 1.*


$$a\_{22} < \phi\_{\delta,q\_1,q\_2}(a\_{11}).$$

*iii. If $a\_{22} > \phi\_{\delta,q\_1,q\_2}(a\_{11})$, system (15) is unstable.*

Theorem 3 provides a relatively simple algebraic criterion (in the form of inequalities comprising the elements of the main diagonal of the system's matrix *A* as well as its determinant and the fractional orders) that enables us to immediately decide the question of asymptotic stability or instability for a given two-dimensional multi-order system of fractional differential equations. In fact, Theorem 3 may be seen as a generalization of the Routh–Hurwitz stability criterion.

**Remark 5.** *If $q\_1 = q\_2 := q$, the curve $\Gamma(\delta, q\_1, q\_2)$ reduces to the straight line:*

$$a\_{11} + a\_{22} = 2\sqrt{\delta} \cos \frac{q \pi}{2}.$$

*Therefore, Theorem 3 provides that, for equal fractional orders, system* (15) *is asymptotically stable if and only if*

$$Tr(A) < 2\sqrt{\det(A)}\cos\frac{q\pi}{2}.\tag{17}$$

*The eigenvalues of the system's matrix A are*

$$\lambda\_{1,2} = \frac{\text{Tr}(A) \pm \sqrt{\text{Tr}(A)^2 - 4\det(A)}}{2}$$

*and, hence, inequality* (17) *is equivalent to $|\arg \lambda\_{1,2}| > \frac{q\pi}{2}$. Consequently, for two-dimensional systems, the conclusion of Matignon's theorem is recovered as a particular case of Theorem 3.*
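Since, for $\det(A) > 0$, inequality (17) and the Matignon eigenvalue condition are equivalent, the two tests can be cross-checked numerically. The sketch below (function names are ours, purely for illustration) compares them on random matrices:

```python
import numpy as np

def stable_by_trace_det(A, q):
    # Inequality (17): Tr(A) < 2*sqrt(det(A))*cos(q*pi/2); assumes det(A) > 0
    return np.trace(A) < 2 * np.sqrt(np.linalg.det(A)) * np.cos(q * np.pi / 2)

def stable_by_matignon(A, q):
    # Matignon: |arg(lambda)| > q*pi/2 for all eigenvalues
    return bool(np.all(np.abs(np.angle(np.linalg.eigvals(A))) > q * np.pi / 2))

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.normal(size=(2, 2))
    if np.linalg.det(A) <= 0:
        continue                      # (17) presumes det(A) > 0
    q = rng.uniform(0.01, 1.0)
    assert stable_by_trace_det(A, q) == stable_by_matignon(A, q)
print("trace-det test and Matignon test agree on all sampled matrices")
```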

**Remark 6.** *The asymptotic stability of the two-dimensional integer-order system $\dot{y} = Ay$ does not directly imply the asymptotic stability of system* (15) *for any fractional orders $(q\_1, q\_2) \in (0, 1]^2$. We can only state, based on Remark 4, that, if the integer-order system $\dot{y} = Ay$ is asymptotically stable, then so is system* (15) *with equal fractional orders $q\_1 = q\_2$.*

**Example 2.** *Let us consider the system*

$$\begin{cases} \,^c \mathbf{D}^{q\_1} y\_1(t) = a\_{11} y\_1(t) + a\_{12} y\_2(t) \\ \,^c \mathbf{D}^{q\_2} y\_2(t) = a\_{21} y\_1(t) + a\_{22} y\_2(t) \end{cases} \text{with } A = (a\_{ij}) = \begin{pmatrix} -2 & 0.5 \\ -5 & 1 \end{pmatrix} \tag{18}$$

*where $q\_1, q\_2 \in (0, 1]$. As $\mathrm{Tr}(A) = -1 < 0$ and $\det(A) = 0.5 > 0$, the Routh–Hurwitz stability test guarantees that, for $q\_1 = q\_2 = 1$, system* (18) *is asymptotically stable; the eigenvalues of the matrix $A$ are $\lambda\_{1,2} = -\frac{1}{2}(1 \pm i)$. Therefore, for equal fractional orders $q\_1 = q\_2 \in (0, 1)$, system* (18) *is also asymptotically stable (see left plot in Figure 5); this can also be verified by inequality* (17)*.*

**Figure 5.** Asymptotically stable solutions of system (18) when *q* = (0.8, 0.8) (**left** plot) and unstable solutions when *q* = (0.2, 1) (**right** plot).

*However, for $q\_1 = 0.2$ and $q\_2 = 1$, system* (18) *is unstable (see right plot in Figure 5). Indeed, applying Theorem 3, system* (18) *with $(q\_1, q\_2) = (0.2, 1)$ is unstable if*

$$a\_{22} > \phi\_{\delta,q\_1,q\_2}(a\_{11}),$$

*where $a\_{11} = -2$, $a\_{22} = 1$, $\delta = \det(A) = 0.5$ and, based on the notations from Lemma 1:*

$$\phi\_{\delta,q\_1,q\_2}(a\_{11}) = \delta^{\frac{q\_2}{q\_1+q\_2}}\, h(-\omega^\*, q\_1, q\_2),$$

*where ω*∗ *is the unique root of the equation*

$$a\_{11} = \delta^{\frac{q\_1}{q\_1 + q\_2}} h(\omega^\*, q\_1, q\_2).$$

*Numerically solving this algebraic equation, we compute $\omega^\* = -2.19664$ and, therefore, we also obtain $\phi\_{\delta,q\_1,q\_2}(a\_{11}) = 0.895383$. As $a\_{22} = 1$, it follows that the instability condition $a\_{22} > \phi\_{\delta,q\_1,q\_2}(a\_{11})$ is satisfied (see left plot of Figure 6).*

*Furthermore, it is important to emphasize that Theorem 3 can also be applied when at least one of the fractional orders is irrational. For example, choosing $q\_1 = \frac{1}{\pi}$ and $q\_2 = 1$, in a similar way as before, we compute $\phi\_{\delta,q\_1,q\_2}(a\_{11}) = 1.10307$, and hence $a\_{22} < \phi\_{\delta,q\_1,q\_2}(a\_{11})$, which means that system* (18) *with $q\_1 = \frac{1}{\pi}$ and $q\_2 = 1$ is asymptotically stable (see right plot of Figure 6).*

**Figure 6.** Position of the point $(a\_{11}, a\_{22}) = (-2, 1)$ (plotted in red) with respect to the curve $\Gamma(\delta, q\_1, q\_2)$ (shown in green) in the particular cases $(q\_1, q\_2) = (0.2, 1)$ (**left** plot) and $(q\_1, q\_2) = \left(\frac{1}{\pi}, 1\right)$ (**right** plot) from Example 2.

*Figure 7 shows the region of fractional orders $(q\_1, q\_2)$ for which system* (18) *is globally asymptotically stable.*
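The computation in Example 2 can be reproduced in a few lines of code. The sketch below uses our own function names and a plain bisection (valid here because $h$ is strictly increasing in $\omega$ when $q\_1 < q\_2$):

```python
import numpy as np

def h(w, q1, q2):
    # the function h of Lemma 1 (case q1 != q2)
    rho1 = np.sin(q1 * np.pi / 2) / np.sin((q2 - q1) * np.pi / 2)
    rho2 = np.sin(q2 * np.pi / 2) / np.sin((q2 - q1) * np.pi / 2)
    return rho2 * np.exp(q1 * w) - rho1 * np.exp(-q2 * w)

def phi(a11, delta, q1, q2):
    # Solve a11 = delta^(q1/(q1+q2)) * h(w*, q1, q2) by bisection
    # (h is strictly increasing in w for q1 < q2), then evaluate
    # phi = delta^(q2/(q1+q2)) * h(-w*, q1, q2).
    f = lambda w: delta**(q1 / (q1 + q2)) * h(w, q1, q2) - a11
    lo, hi = -50.0, 50.0
    for _ in range(200):              # bisection to machine precision
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return delta**(q2 / (q1 + q2)) * h(-0.5 * (lo + hi), q1, q2)

print(phi(-2.0, 0.5, 0.2, 1.0))       # ~0.895383: a22 = 1 exceeds it -> unstable
print(phi(-2.0, 0.5, 1/np.pi, 1.0))   # ~1.10307:  a22 = 1 is below it -> stable
```

The two printed values reproduce the thresholds quoted in Example 2 for $(q\_1, q\_2) = (0.2, 1)$ and $(q\_1, q\_2) = (1/\pi, 1)$, respectively.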

The next step is to seek necessary and sufficient conditions which ensure the asymptotic stability or instability of system (15) for any choice of the fractional orders. A complete investigation of the family of curves Γ(*δ*, *q*1, *q*2) leads to the following *fractional-order independent stability and instability results* [17]:

**Theorem 4** (Fractional-order independent instability results)**.**


$$a\_{11} + a\_{22} \ge \det(A) + 1 \quad \text{or} \quad a\_{11} > 0,\; a\_{22} > 0,\; a\_{11}a\_{22} \ge \det(A).$$

**Theorem 5** (Fractional-order-independent stability results)**.** *System (15) is asymptotically stable, regardless of the fractional orders q*1, *q*<sup>2</sup> ∈ (0, 1] *if and only if the following inequalities are satisfied:*

$$a\_{11} + a\_{22} < 0 < \det(A) \quad \text{and} \quad \max\{a\_{11}, a\_{22}\} < \min\{1, \det(A)\}.$$

The previous theorems provide easily verifiable necessary and sufficient conditions which ensure the asymptotic stability or instability of the two-dimensional system (15), for any choice of the fractional orders *q*1, *q*<sup>2</sup> ∈ (0, 1]. These conditions are expressed as simple inequalities involving the main diagonal elements *a*<sup>11</sup> and *a*<sup>22</sup> as well as the determinant det(*A*) of the system's matrix. On one hand, if det(*A*) < 0, Theorem 4 provides that system (15) is unstable, for any choice of the fractional orders *q*1, *q*<sup>2</sup> ∈ (0, 1]. Hence, we will focus our attention on the case *δ* = det(*A*) > 0. Let us denote by *Rs* and by *Ru* the *fractional-order independent stability and instability regions* given by Theorems 4 and 5:

$$\begin{aligned} R\_{\mathfrak{u}} &= \left\{ (a\_{11}, a\_{22}, \delta) \in \mathbb{R}^2 \times (0, \infty) : a\_{11} + a\_{22} \ge \delta + 1 \text{ or } a\_{11} > 0, \; a\_{22} > 0, \; a\_{11} a\_{22} \ge \delta \right\} \\ R\_{\mathfrak{s}} &= \left\{ (a\_{11}, a\_{22}, \delta) \in \mathbb{R}^2 \times (0, \infty) : a\_{11} + a\_{22} < 0 \text{ and } \max\{a\_{11}, a\_{22}\} < \min\{1, \delta\} \right\} \end{aligned}$$
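These membership tests are simple inequalities. A minimal sketch (function names are ours), applied to the matrix of system (18) and assuming $\delta = \det(A) > 0$:

```python
def in_Ru(a11, a22, delta):
    # fractional-order-independent instability region (Theorem 4), delta > 0
    # (recall that det(A) < 0 already implies instability for all orders)
    return a11 + a22 >= delta + 1 or (a11 > 0 and a22 > 0 and a11 * a22 >= delta)

def in_Rs(a11, a22, delta):
    # fractional-order-independent stability region (Theorem 5), delta > 0
    return delta > 0 and a11 + a22 < 0 and max(a11, a22) < min(1.0, delta)

# Matrix A of system (18): a11 = -2, a22 = 1, det(A) = 0.5.
print(in_Rs(-2.0, 1.0, 0.5), in_Ru(-2.0, 1.0, 0.5))   # -> False False
```

The point lies in neither region, so the stability of system (18) depends on the fractional orders, consistently with Examples 2 and 3.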

The regions *Ru* and *Rs* are plotted in Figure 8. The intersections of these regions with the *δ* = det(*A*) = 6 plane are shown in Figure 9. Moreover, it can be verified [17] that the union of all the curves Γ(*δ*, *q*1, *q*2) (for *δ* > 0 and *q*1, *q*<sup>2</sup> ∈ (0, 1]) represents the complement of *Rs* ∪ *Ru* (see Figure 9).

**Figure 8.** The *fractional-order-independent* stability (red) and instability (blue) regions *Rs* and *Ru* provided by Theorems 4 and 5 for system (15).

**Figure 9.** Curves $\Gamma(\delta, q\_1, q\_2)$ given by Lemma 1, for $\det(A) = \delta = 6$ and $q\_i \in \left\{\frac{k}{40} \,:\, k = \overline{1, 40}\right\}$, $i = 1, 2$ (1600 curves), color-coded from red to violet according to increasing values of $q\_1 q\_2$. The red/blue shaded regions represent the intersections of the *fractional-order-independent* stability and instability regions (see Figure 8) with the $\det(A) = 6$ plane.

**Remark 7.** *The classical Routh–Hurwitz stability test for two-dimensional systems of the form $\dot{y} = Ay$ provides that the system is asymptotically stable if and only if $\mathrm{Tr}(A) < 0$ and $\det(A) > 0$. However, based on Theorem 5, the additional inequality $\max\{a\_{11}, a\_{22}\} < \min\{1, \det(A)\}$ has to be verified in order to ensure the asymptotic stability of the fractional-order system* (15)*, regardless of the choice of fractional orders $q\_1$ and $q\_2$.*

**Example 3.** *It is easy to see that, if system* (18) *is considered, as $a\_{11} = -2$, $a\_{22} = 1$ and $\det(A) = 0.5$, even though the Routh–Hurwitz conditions $\mathrm{Tr}(A) < 0$ and $\det(A) > 0$ are fulfilled, the additional inequality $\max\{a\_{11}, a\_{22}\} < \min\{1, \det(A)\}$ does not hold, and hence*

*system* (18) *is not asymptotically stable for any choice of the fractional orders q*1, *q*<sup>2</sup> ∈ (0, 1]*. Indeed, as we have seen in Example 2, for q*<sup>1</sup> = 0.2 *and q*<sup>2</sup> = 1*, system* (18) *is unstable.*

In conclusion, based on the previously described results, the following steps should be undertaken for the stability analysis of a two-dimensional system of FDEs:


The results described in this section, particularly Theorems 3–5, give a comprehensive method to assess stability properties of two-dimensional fractional-order systems. However, the generalization of these results to higher dimensional fractional-order systems still remains an open question.

#### *5.2. Stability of Higher Dimensional Systems of FDEs with Specific Structures*

Consider that the matrix *A* of the linear system (12) has a block-triangular structure:

$$A = \begin{pmatrix} A\_{11} & A\_{12} & \dots & A\_{1p} \\ & A\_{22} & \dots & A\_{2p} \\ & & \ddots & \vdots \\ & & & A\_{pp} \end{pmatrix}$$

where $A\_{ii} \in \mathbb{R}^{d\_i \times d\_i}$ for $i = \overline{1, m}$ and $A\_{ii} \in \mathbb{R}^{2 \times 2}$ for $i = \overline{m+1, p}$, such that

$$\sum\_{i=1}^{m} d\_i + 2(p - m) = n.$$

We also assume that

$$q = (\underbrace{q\_1, q\_1, \dots, q\_1}\_{d\_1 \text{ times}}, \dots, \underbrace{q\_m, q\_m, \dots, q\_m}\_{d\_m \text{ times}}, q\_{m+1}^1, q\_{m+1}^2, \dots, q\_p^1, q\_p^2) \in (0, 1]^n.$$

In this case, the characteristic equation associated with system (12) is

$$\prod\_{i=1}^{m} \det(s^{q\_i}I - A\_{ii}) \cdot \prod\_{i=m+1}^{p} \det(\mathrm{diag}(s^{q\_i^1}, s^{q\_i^2}) - A\_{ii}) = 0. \tag{19}$$

Therefore, combining Matignon's theorem (Theorem 1) and Theorem 3, the following statements are obtained:

- System (12) is *asymptotically stable* if:
	- **–** $\sigma(A\_{ii}) \subset S\_{q\_i} = \left\{ \lambda \in \mathbb{C} \,:\, |\arg(\lambda)| > \frac{q\_i\pi}{2} \right\}$ for any $i = \overline{1, m}$, and
	- **–** $A\_{ii}^{22} < \phi\_{\delta\_i, q\_i^1, q\_i^2}(A\_{ii}^{11})$ for any $i = \overline{m+1, p}$, where $A\_{ii}^{11}$ and $A\_{ii}^{22}$ are the main diagonal elements of the matrix $A\_{ii}$, $\delta\_i = \det(A\_{ii})$, and $\phi$ is defined in Lemma 1.
- System (12) is *unstable* if:
	- **–** there exists $i \in \{1, 2, \dots, m\}$ such that the matrix $A\_{ii}$ has at least one eigenvalue $\lambda$ with $|\arg(\lambda)| < \frac{q\_i\pi}{2}$, or
	- **–** there exists $i \in \{m+1, \dots, p\}$ such that $A\_{ii}^{22} > \phi\_{\delta\_i, q\_i^1, q\_i^2}(A\_{ii}^{11})$, where $A\_{ii}^{11}$ and $A\_{ii}^{22}$ are the main diagonal elements of the matrix $A\_{ii}$, $\delta\_i = \det(A\_{ii})$, and $\phi$ is defined in Lemma 1.

#### *5.3. Stability of Higher Dimensional Systems of FDEs with Special Fractional Orders*

Let us consider the following *n*-dimensional linear multi-order system of fractional differential equations:

$$^C D^{q\_i} y\_i(t) = \sum\_{j=1}^n a\_{ij} y\_j(t), \quad i = \overline{1, n}, \tag{20}$$

where $q\_i \in (0, 1]$, $a\_{ij} \in \mathbb{R}$, and $n \ge 3$.

If the coefficient matrix of the system is not of a triangular or block-triangular form, as considered in the previous section, one cannot provide a comprehensive stability theory. Still, an approach that works under certain restrictions on the fractional orders of the Caputo derivatives has been developed in [14]. We next recall the general results obtained there.

Suppose $q\_j \in (0, 1]$ for any $j = \overline{1, n}$, and that there exist $q^\* \in (0, 1]$ and $\rho\_j \in \mathbb{Q}$ such that $q\_j = \rho\_j q^\*$. It follows that there exist $r\_j, s\_j \in \mathbb{N}$, $j = \overline{1, n}$, such that $\gcd(r\_j, s\_j) = 1$ and $\rho\_j = \frac{r\_j}{s\_j}$. Let $s$ be the least common multiple of the denominators $s\_j$. Then, for any $j$, there exists $\alpha\_j \in \mathbb{N}$ such that

$$q\_j = \frac{q^\* \alpha\_j}{s} \quad \left(\alpha\_j = \frac{s r\_j}{s\_j}\right).$$

We can rewrite the $j$-th equation of system (20) as an equivalent system of $\alpha\_j$ differential equations of order $\frac{q^\*}{s}$. It follows that system (20) can be expressed as a system of $n^\* = \sum\_{j=1}^{n} \alpha\_j$ equations of order $\frac{q^\*}{s}$:

$$^C D^{q^\*/s} y^\*(t) = A^\* y^\*(t),\tag{21}$$

where $A^\*$ has the following block structure

$$A^\* = \begin{pmatrix} A\_{11} & A\_{12} & \dots & A\_{1n} \\ A\_{21} & A\_{22} & \dots & A\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A\_{n1} & A\_{n2} & \dots & A\_{nn} \end{pmatrix}$$

with $A\_{jk} \in \mathbb{R}^{\alpha\_j \times \alpha\_k}$,

$$A\_{jj} = \begin{pmatrix} 0 & 1 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & 0 & 0 \\ 0 & 0 & 0 & \dots & 0 & 1 \\ a\_{jj} & 0 & 0 & \dots & 0 & 0 \end{pmatrix}, j = \overline{1, n}$$

and

$$A\_{jk} = \begin{pmatrix} 0 & 0 & \dots & 0 \\ \vdots & \vdots & \ddots & 0 \\ 0 & 0 & \dots & 0 \\ a\_{jk} & 0 & \dots & 0 \end{pmatrix}, \; j, k = \overline{1, n}, j \neq k.$$

Even though the dimension $n^\*$ of the system may be significantly higher than the dimension $n$ of the original system (20), resulting in higher computational costs, all the equations of the new system (21) now have the same fractional order, giving an advantage in studying the stability properties of the solutions of the system.

We now state the main result of this section, based on [14], which gives stability criteria involving the entries of the matrix $A^\*$.

**Theorem 6.** *Suppose that $q\_j \in (0, 1]$ for any $j$ and that there exist $q^\* \in (0, 1]$ and $\rho\_j \in \mathbb{Q}$ such that $q\_j = \rho\_j q^\*$, for all $j$. Then, all the solutions of system* (20) *converge to zero at infinity if the eigenvalues $\lambda\_j^\*$ of the associated system's coefficient matrix $A^\*$ satisfy*

$$|\arg \lambda\_j^\*| > \frac{\pi q^\*}{2s}, \quad \forall j = \overline{1, n^\*},$$

*with $s$ being the least common multiple of the denominators of $\rho\_j$.*

**Example 4.** *Again, we reconsider system* (18) *with $q\_1 = 0.2$ and $q\_2 = 1$. In this case, the matrix $A^\*$ given by the above procedure is*


*and system* (18) *is equivalent to a system of six fractional-order differential equations with the same order $q = q\_1 = 0.2$. The matrix $A^\*$ has a pair of complex conjugate eigenvalues $(\lambda, \bar{\lambda})$, $\lambda = 0.543842 + 0.133131\,i$, such that $|\arg(\lambda)| = 0.240076 < 0.1\pi = \frac{q\pi}{2}$. Hence, based on Matignon's theorem, system* (18) *is unstable for $q\_1 = 0.2$ and $q\_2 = 1$. This is in good agreement with the results obtained in Example 2, based on Theorem 3.*

*However, it is important to note that the case $q\_1 = \frac{1}{\pi}$ and $q\_2 = 1$ cannot be investigated using the technique provided by Theorem 6, since the ratio of the orders is irrational.*
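Following the construction of Section 5.3 (helper names below are ours), the $6 \times 6$ matrix $A^\*$ of Example 4 is assembled from the blocks $A\_{jj}$ and $A\_{jk}$ with $\alpha\_1 = 1$, $\alpha\_2 = 5$, $s = 5$, and the instability can be checked numerically:

```python
import numpy as np

# System (18): a11 = -2, a12 = 0.5, a21 = -5, a22 = 1; q1 = 0.2, q2 = 1.
# With q* = 1: rho1 = 1/5, rho2 = 1, hence s = 5, alpha1 = 1, alpha2 = 5.
a = [[-2.0, 0.5], [-5.0, 1.0]]
alpha = [1, 5]

def diag_block(ajj, m):
    # companion-type block A_jj: superdiagonal of ones, a_jj bottom-left
    B = np.diag(np.ones(m - 1), k=1)
    B[-1, 0] = ajj
    return B

def off_block(ajk, m, l):
    # off-diagonal block A_jk: a_jk in the bottom-left corner
    B = np.zeros((m, l))
    B[-1, 0] = ajk
    return B

blocks = [[diag_block(a[j][j], alpha[j]) if j == k
           else off_block(a[j][k], alpha[j], alpha[k])
           for k in range(2)] for j in range(2)]
A_star = np.block(blocks)

eigs = np.linalg.eigvals(A_star)
print(np.min(np.abs(np.angle(eigs))))  # ~0.240076 < 0.1*pi: unstable by Theorem 6
```

The characteristic polynomial of `A_star` is $\mu^6 + 2\mu^5 - \mu + 0.5$, i.e., the characteristic Equation (16) after the substitution $s = \mu^5$, which confirms that the two formulations agree.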

#### **6. Conclusions**

An extensive analysis of the stability properties of linear systems of FDEs has been provided. This analysis is important for describing the asymptotic behavior of physical systems modeled by means of FDEs. Both single-order and multi-order systems have been studied, reviewing the most important theoretical results obtained so far in the literature. The role of the Mittag–Leffler function, and of its derivatives, has been highlighted, and a presentation of their asymptotic behavior has been proposed. We have seen that, unlike in the integer-order case, the coefficients of the system are not sufficient to describe the stability properties of solutions, due to the tight dependence on the order of the fractional derivatives. This dependence becomes increasingly difficult to investigate in systems incorporating derivatives of different orders, as we have observed from the analysis of two-dimensional systems. The stability analysis of multi-order higher-dimensional systems is still an open problem which deserves further investigation; with this work, a first contribution has been provided by examining systems with specific structures, and we hope these results will stimulate the analysis of more general systems.

**Author Contributions:** Conceptualization, O.B., R.G., and E.K.; methodology, O.B. and E.K.; software, O.B., R.G., and E.K.; validation, O.B., R.G., and E.K.; formal analysis, O.B. and E.K.; investigation, O.B., E.K., and R.G.; writing—original draft preparation, O.B., R.G., and E.K.; writing—review and editing, O.B., R.G., and E.K.; visualization, O.B., E.K., and R.G.; supervision, E.K.; funding acquisition, R.G. and E.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by COST Action CA 15225, "Fractional-order systems: analysis, synthesis and their importance for future design". The work of R.G. has also been partially supported by an INdAM-GNCS 2020 project.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **References**


## *Article* **Handling Hysteresis in a Referral Marketing Campaign with Self-Information. Hints from Epidemics**

**Deborah Lacitignola**

Department of Electrical and Information Engineering, University of Cassino and Southern Lazio, Di Biasio Str., I-03043 Cassino, Italy; d.lacitignola@unicas.it

**Abstract:** In this study we show that the concept of backward bifurcation, borrowed from epidemics, can be fruitfully exploited to shed light on the mechanism underlying the occurrence of hysteresis in marketing and for the strategic planning of adequate tools for its control. We enrich the model introduced in (Gaurav et al., 2019) with a mechanism of self-information that accounts for information about the product performance based on consumers' experience in the recent past. We obtain conditions under which the model exhibits a forward or a backward phenomenology and evaluate the impact of self-information on both these scenarios. Our analysis suggests that, even if hysteretic dynamics in referral campaigns is intimately linked to the mechanism of referrals, an adequate level of self-information and a fairly high level of customer satisfaction can act as strategic tools to manage hysteresis and allow the campaign to spread under more controllable conditions.

**Keywords:** epidemic models; backward bifurcation; hysteresis; referral marketing; self-information

#### **1. Introduction and Motivations**

Over the last century, mathematical models based on differential equations have been fruitfully applied to describe phenomena belonging to extremely different disciplinary fields. As is well known in the literature [1], mathematical models can act essentially in two directions: those based on more sophisticated mathematical tools can give a great contribution in terms of quantitative predictions, while simpler qualitative models can be invaluable for shedding light on the constitutive mechanisms, highlighting their roles and reciprocal interactions.

Precisely because they go to the heart of the phenomena, simple mechanistic qualitative models are capable of creating bridges between apparently very distant worlds, ensuring that models and methodologies used in a certain context can be exploited to open the way to the understanding of phenomena that are similar in their underlying mechanisms. On this line, it is not surprising that the simple mechanistic model found by Volterra [2] to describe the interaction between prey and predators in the Italian Adriatic Sea displays the same mathematical structure as the one introduced in those same years by Lotka [3] in the context of chemical kinetics. And, again, it is not surprising at all that a discipline such as marketing has been able to benefit from the models and the modus operandi of mathematical epidemiology. In this case, the unifying factor is the idea of contagion, a key mechanism for those forms of marketing defined as viral. The term viral refers in particular to all those marketer-initiated consumer activities that spread a marketing message unaltered across a market or segment in a limited time period, mimicking an epidemic [4]. Terms from epidemiology have hence been widely used to explain such viral marketing processes [5,6].

This interconnection has become even more pronounced with the unchallenged emergence of new means of communication. With consumers showing increasing resistance to traditional forms of advertising, marketers have been forced to rely on alternative strategies. Among these are social networks, whose usage is growing markedly among marketing managers with the aim of promoting an idea, a product or a brand at no additional cost to the firm. If a marketer encourages consumers to share and spread a marketing message

**Citation:** Lacitignola, D. Handling Hysteresis in a Referral Marketing Campaign with Self-Information. Hints from Epidemics. *Mathematics* **2021**, *9*, 680. https://doi.org/ 10.3390/math9060680

Academic Editor: Dumitru Baleanu

Received: 6 February 2021 Accepted: 18 March 2021 Published: 22 March 2021


**Copyright:** © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

through their social contacts, this is called Referral Marketing [7]. In a few words, referral marketing spreads the word about a product or service through a business' existing customers, rather than through traditional advertising. This kind of marketing uses referrals or word-of-mouth to promote services or products, and businesses may control it through suitable strategies to make a referral campaign go viral. Strategic use of referral marketing can hence allow marketers to leverage the power of consumer recommendations in order to achieve the desired results. On this line, questions such as 'Which are the underlying interactions that ensure that a marketing message goes viral? Which parameters can allow an effective spread of a marketing message through a viral process?' become simply crucial. And since a viral marketing message involves person-to-person transmission spreading within a population just like an epidemic, it is not strange that the most enlightening answers could come from epidemic models.

In the classical models of epidemiology, the interaction between susceptible and infected individuals is a key factor for the spread of an epidemic, qualitatively defined as a situation in which the number of infected reaches a significant percentage at steady state. In the case of a viral marketing campaign, it can be thought of as a situation in which, because of the sharing mechanisms, the marketing message reaches and attracts a majority of its target consumers. Obviously, in epidemiology one aims to contain epidemics whereas, within the marketing framework, the main purpose is to maximize the spread.

In the context of online social networks and digital contagion, many efforts have recently been made to model such dynamics: reference [4] discussed viral marketing diffusion within the SIR and SEIAR epidemic framework, and [8] proposed a mathematical model borrowed from epidemiology to describe its spread. An extensive survey in reference [9] underlined that, along with the viral component, a particular focus on customer behaviors should be given to ensure the relevance and survival of a newly launched campaign. On this line, references [7,10] considered more realistic models for the viral campaign spread, where specific behavioral factors were introduced to take into account a customer's perspective on marketing messages, i.e., inherent aversion, brand trust, remembering and reminding.

In this paper we pursue this line, focusing on the interplay between two behavioral mechanisms that can be involved together in a referral campaign. In fact, if referral is obviously the key mechanism of a referral campaign, it is not the only one. The Nielsen Global Survey of Trust in Advertising [11] clearly supports the remarkable potential of referrals, showing that for the question 'To what extent do you trust the following form of advertising?', the answer 'Recommendations from people I know' gains the first position with 83%. But the answer 'Consumers' opinions posted online' is also on the podium with 66%, confirming that online reviews remain a trusted source of customer information. This means that, on average, two-thirds of consumers feel the need for 'self-information' and make purchases after inspecting customers' opinions posted online about a particular product or service. In this case, information comes from sources of reviews with no conflicts of interest, such as specific consumers' forums that collect opinions from those who bought particular products or experienced certain services.

Therefore, in a referral marketing campaign, the nature of the information for the potential consumer can be twofold: passive, when it is linked to the mechanism of recommendations by friends and acquaintances, or active, when it is linked to the self-information mechanism described above. Our aim is to elucidate under what conditions the interplay between 'passive' and 'active' information can strengthen or weaken the survival chances of a referral campaign. On this line, we enrich the model introduced in [10] with a mechanism of self-information that accounts for information about the product performance based on consumers' experience in the recent past. Such a mechanism, based on a kind of learning that a potential consumer can adopt during the referral campaign, is mathematically obtained by introducing a distributed lag in the population equations, which therefore become an integro-differential system, i.e., a delay differential model. The importance of considering such models is supported by the fact that the role of delays in

biological [12–15] as well as in economic models [16–20] is widely recognized, it being often appropriate for these kinds of problems to allow the rate of change of the system variables to depend in some sense on the previous history. We want to establish conditions under which the model exhibits a forward or a backward phenomenology and evaluate the impact of self-information on both these scenarios. The backward phenomenology, in particular, is connected to a situation of bistability between the campaign-free equilibrium and the campaign-standing equilibrium and can lead the system towards hysteresis-type behaviors. In a very qualitative way, the term hysteresis, related to the idea of "irreversibility", denotes the effects that persist after the causes that determined them have been removed. The relevance of hysteresis at the level of economic systems is well recognized, and marketing provides a generous framework to improve the understanding of this phenomenon in the economic sphere. In marketing, hysteresis is mainly considered in relation to consumer behaviour as well as to temporary or permanent changes of consumption patterns caused by specific marketing tools [21–23].

Its link with hysteresis is the reason why, in mathematical epidemiology, many papers have focused on backward bifurcation, e.g., [24–28]. In that context, the basic reproduction number *R*<sub>0</sub> is usually defined as the expected number of new infections produced by a single infective individual introduced into a disease-free population [29], and *R*<sub>0</sub> = 1 represents the threshold value that separates the stability and instability regimes of the disease-free equilibrium. There are two bifurcation scenarios commonly detectable at *R*<sub>0</sub> = 1: (i) forward bifurcation, which implies disease eradication below the threshold *R*<sub>0</sub> = 1; (ii) backward bifurcation, which includes a saddle-node (sn) bifurcation at *R*<sub>0</sub> = *R*<sub>0</sub><sup>*sn*</sup> < 1 along with a subcritical transcritical bifurcation at *R*<sub>0</sub> = 1; it involves a multiplicity of endemic equilibria and subcritical persistence of the disease. When a backward scenario is found, reducing *R*<sub>0</sub> below 1 is not sufficient to eradicate the disease, and a further effort must be made until *R*<sub>0</sub> is lowered below the critical value *R*<sub>0</sub><sup>*sn*</sup>. It is therefore obvious that, in epidemic models, detecting and managing the occurrence of backward bifurcations are two features of primary importance from the perspective of disease control. In viral marketing, however, the backward scenario may play a different role than in epidemics, since it could be seen as an opportunity for the firm to carry on the viral campaign even in adverse conditions, which in itself adds an interesting perspective to the problem. Also in this case, however, the backward scenario must still be carefully monitored because, in the bistability regime, too large displacements from the campaign-standing equilibrium can bring the system into the basin of attraction of the campaign-free equilibrium. That means a sudden collapse of the referral campaign.

The paper is structured as follows: In Section 2, we enrich the model introduced in [10] with the mechanism of self-information by means of a variable that summarizes information about the product performance based on consumers' experience in the recent past. In Section 3, we obtain conditions for the existence of a campaign-free and of a campaign-standing equilibrium and establish under which conditions, expressed as a function of the system parameters, the campaign spread goes towards stopping. In Section 4, a bifurcation analysis in the neighbourhood of the campaign-free equilibrium is performed, and conditions are obtained for the emergence of a forward or a backward scenario, which are also discussed with a view to improving the sustainability of the referral campaign. The effects of self-information on the bifurcation thresholds are shown in Section 5, where the role of the customer-satisfaction parameter is also elucidated. Concluding remarks, in Section 6, close the paper.

#### **2. A Referral Marketing Model with Self-Information**

To mimic referral dynamics, a model was introduced in [7] with the total population divided into three mutually exclusive subpopulations: Unaware, Broadcaster and Inert. The unaware class *U* is the target market, namely 'susceptible' people who have not yet received the message about a certain product but are exposed and have a chance of receiving it; the broadcaster class *B* is composed of individuals who have received the message earlier and have the potential to spread the message further to their social contacts; the inert class *I* is instead made of individuals who, willingly or unwillingly, do not take part in the campaign even if they have come across it at least once. This model is essentially based on contagion as the basic transition mechanism between the different subgroups.

To increase the degree of realism, the authors then proposed in [10] a more realistic model including some additional features raised in a survey campaign developed in [9]. Analyzing the surveys and the interactions between different people, they modeled the transitions between different subgroups taking into account some additional factors that more clearly reflect a customer's perspective on marketing messages: (a) inherent aversion, i.e., a portion of individuals could be strongly against the mechanisms of referral marketing in general; (b) brand trust, i.e., people need to 'trust' the person who is referring the product (for example, family or friends) as well as the brand names while participating in referral marketing; (c) remembering and reminding, i.e., strategically designed emails from the company or casual reminders from friends can tempt inert individuals to become broadcasters again. The following model was hence considered [10]:

$$\begin{array}{rcl} \dot{u} &=& \mu - \rho \, b \, u - \mu \, u \\\\ \dot{b} &=& p \, \rho \, b \, u - \sigma \, b + \alpha\_1 \, b \, i - \mu \, b + \lambda \, i \\\\ \dot{i} &=& (1 - p) \, \rho \, b \, u + \sigma \, b - \alpha\_1 \, b \, i - \lambda \, i - \mu \, i \end{array} \tag{1}$$

where *u*, *b* and *i* are the fractions of the unaware, broadcaster and inert classes normalized by the total population. In (1), it is assumed that a broadcaster spreads the message to a member of the unaware class at a rate *ρ* and, whenever a broadcaster sends the referral message to an unaware individual, this individual moves to the broadcaster class with probability *p* and to the inert class with probability (1 − *p*). The parameter *p* ∈ [0, 1] assumes a high value if the campaign comes from a trusted brand or the message comes from a trusted member, and can hence be interpreted as the 'trust' parameter. The term (1 − *p*) accounts for the fact that some individuals of the unaware class might decide to ignore the messages or not take part in the campaign, i.e., groups of individuals that are, for example, rigidly inert. Messages from less trustworthy brands or members increase the value of (1 − *p*).

Once the unawares have become broadcasters or inert, they can 'change their mind' by moving from one class to the other. In fact, broadcasters can stop sharing the message, hence moving to the inert class at a rate *σ*. On the other hand, inert people can move back to the broadcaster class following two different mechanisms: (i) independently of their interaction with other individuals (such as a reminder from the company) at a rate *λ*, or (ii) because of their interaction with another broadcaster (such as a reminder from a friend or a discussion with family members) at a rate *α*<sub>1</sub> = *α p*, where *α* is the original relapse rate and *p* is the trust parameter. Obviously, people can join or leave a particular social platform where the campaign is running. A constant input *μ* into the unaware class and a natural 'mortality rate' *μ* for each class are then assumed, so that a fixed population size is maintained.

The analysis carried out in [10] showed that brand loyalty and brand name are two important factors for creating a positive reaction of a person towards a campaign message. Moreover, model dynamics turned out to be critically affected by variations in the relapse rate *α*, which was recognized to be crucial to safeguard the survival of the campaign. In particular, sufficiently high values of the relapse rate *α* could drive the system towards a bistability situation between the campaign-free and the campaign-standing equilibria.

In [10], the involved information mechanism was essentially passive because the spread of the message is based on referrals. To investigate the role of active information in the spreading of the referral campaign, we equip model (1) with a self-information variable *m* that summarizes information about the product performance based on the customers' experiences in the recent past, i.e., online customer reviews. Because of this 'active' information process, we assume that unaware individuals can exit their class at a rate *γ*, moving to the broadcaster class with probability *q* and to the inert class with probability (1 − *q*). The parameter *q* ∈ [0, 1] assumes a high value if the online reviews of the product indicate an overall high level of satisfaction, and can hence be interpreted as a 'customer satisfaction' parameter. We hence consider the following model:

$$\begin{array}{rcl} \dot{u} &=& \mu - \rho \, b \, u - \mu \, u - \gamma \, m \, u \\\\ \dot{b} &=& p \, \rho \, b \, u - \sigma \, b + \alpha\_1 \, b \, i - \mu \, b + \lambda \, i + \gamma \, q \, m \, u \\\\ \dot{i} &=& (1 - p) \, \rho \, b \, u + \sigma \, b - \alpha\_1 \, b \, i - \lambda \, i - \mu \, i + \gamma \, (1 - q) \, m \, u \end{array} \tag{2}$$

where the self-information variable *m* is given by

$$m(t) = \int\_{-\infty}^{t} f(u(\tau), b(\tau), i(\tau)) \, K\_a(t - \tau) \, d\tau \tag{3}$$

The distributed lag (3) in the governing equations means that unaware, broadcaster and inert individuals at time *t* are affected by the state variables *u*, *b*, *i* at possibly all previous times *τ* ≤ *t*, in a way prescribed by the function *f*(*u*(*τ*), *b*(*τ*), *i*(*τ*)) and distributed in the past by the delay kernel *K*<sub>*a*</sub>(*t* − *τ*), which is also called the 'memory function'.

We assume here that *f*(*u*(*τ*), *b*(*τ*), *i*(*τ*)) = *k b*(*τ*), where *k* is a positive parameter. Among the possible types of delay kernels, we consider

$$K\_a(t) = a \, e^{-a \, t} \tag{4}$$

which qualitatively represents a weak delay, in the sense that the maximum (weighted) response of the growth rate is to the current population density, whereas past densities have an exponentially decreasing influence. Such a kernel therefore provides a reasonable model of short-term memory.

With (4) as delay kernel and by applying the *linear chain trick* [30], the set of delay differential Equations (2) and (3) turns out to be equivalent to the following set of ordinary differential equations that will be hereafter the object of our investigations:

$$\begin{array}{rcl} \dot{u} &=& \mu - \rho \, b \, u - \mu \, u - \gamma \, m \, u \\\\ \dot{b} &=& p \, \rho \, b \, u - \sigma \, b + \alpha\_1 \, b \, i - \mu \, b + \lambda \, i + \gamma \, q \, m \, u \\\\ \dot{i} &=& (1 - p) \, \rho \, b \, u + \sigma \, b - \alpha\_1 \, b \, i - \lambda \, i - \mu \, i + \gamma \, (1 - q) \, m \, u \\\\ \dot{m} &=& a \, k \, b - a \, m \end{array} \tag{5}$$

with *α*<sup>1</sup> = *α p*.
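The equivalence rests on a one-line computation: with *f* = *k b* and the exponential kernel (4), differentiating (3) with respect to *t* (the standard linear chain trick calculation, sketched here for completeness) gives

```latex
\dot{m}(t) = k\,b(t)\,K_a(0)
  + \int_{-\infty}^{t} k\,b(\tau)\,\frac{\partial}{\partial t}
    \Big[a\,e^{-a\,(t-\tau)}\Big]\,d\tau
  = a\,k\,b(t) - a\,m(t),
```

since *K*<sub>*a*</sub>(0) = *a* and the *t*-derivative of the kernel is −*a K*<sub>*a*</sub>(*t* − *τ*); this is exactly the last equation of (5).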

In the next section, we obtain conditions for the existence of a campaign-free and of a campaign-standing equilibrium and establish under which conditions, expressed as a function of the system parameters, the campaign goes viral or is forced to stop.

#### **3. The Campaign-Free and the Campaign-Standing Equilibria**

Model (5) always admits a *campaign-free* equilibrium *E*<sub>0</sub> = (1, 0, 0, 0) and, under suitable conditions on the system parameters, can admit one or two *campaign-standing* equilibria *E*<sup>∗</sup> = (*u*<sup>∗</sup>, *b*<sup>∗</sup>, *i*<sup>∗</sup>, *m*<sup>∗</sup>), where:

$$u^\* = \frac{\mu}{b^\* \left(\gamma k + \rho\right) + \mu}, \quad i^\* = \frac{b^\* \left[ b^\* \left(\gamma k + \rho\right)\sigma + \gamma k \,\mu \left(1 - q\right) + \mu \left(\sigma + \rho \left(1 - p\right)\right) \right]}{\left(b^\* \left(\gamma k + \rho\right) + \mu\right)\left(b^\* \alpha \, p + \lambda + \mu\right)}, \quad m^\* = k \, b^\*,\tag{6}$$

and *b*∗ is a positive solution of the following algebraic equation,

$$P\_2 \ b^2 + P\_1 \ b + P\_0 = 0,\tag{7}$$

with

$$\begin{aligned} P\_2 &= \alpha \, p \left( \gamma k + \rho \right), \\\\ P\_1 &= -p \left( \gamma k + \rho - \mu \right) \alpha + (\gamma k + \rho)(\lambda + \mu + \sigma) = p \left( \gamma k + \rho - \mu \right)(\alpha\_0 - \alpha), \\\\ P\_0 &= \left( \mu - \gamma k - \rho \right) \lambda + \mu^2 + \mu \sigma - \gamma k \, \mu \, q - \rho \, \mu \, p = \mu \left( \sigma - \sigma\_c \right), \end{aligned} \tag{8}$$

and

$$\alpha\_0 = \frac{(\gamma \, k + \rho)(\lambda + \mu + \sigma)}{p \, (\gamma \, k + \rho - \mu)}, \quad \sigma\_c = \frac{1}{\mu} \left[ \lambda \, (\gamma \, k + \rho - \mu) + \gamma \, k \, q \, \mu + \mu \, (p \, \rho - \mu) \right]. \tag{9}$$

By (6), it follows that *E*<sup>∗</sup> is a positive equilibrium provided *b*<sup>∗</sup> is a positive solution of (7). Moreover, since (7) is a second-order algebraic equation, we observe that, for certain ranges of the parameter values, model (5) could admit a multiplicity of campaign-standing equilibria.
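As a quick numerical consistency check (ours, not part of the original analysis), the factorization *P*<sub>0</sub> = *μ*(*σ* − *σ*<sub>*c*</sub>) in (8), with *σ*<sub>*c*</sub> as in (9), can be verified for random parameter draws:

```python
import random

# Consistency check: the expanded expression for P0 in (8) must coincide
# with the factorized form mu*(sigma - sigma_c), with sigma_c as in (9).
random.seed(1)
max_err = 0.0
for _ in range(100):
    mu, rho, lam, p, q, gam, k, sig = (random.uniform(0.01, 2.0) for _ in range(8))
    P0 = (mu - gam*k - rho)*lam + mu**2 + mu*sig - gam*k*mu*q - rho*mu*p
    sigma_c = (lam*(gam*k + rho - mu) + gam*k*q*mu + mu*(p*rho - mu)) / mu
    max_err = max(max_err, abs(P0 - mu*(sig - sigma_c)))
print("max deviation:", max_err)  # zero up to floating-point rounding
```

The two expressions agree up to rounding error, confirming the algebraic identity.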

In what follows, we assume *μ* ≤ *p ρ*, so that the natural "mortality rate" for each class is slow with respect to the marketing process. Under this condition, both *σ*<sub>*c*</sub> and *α*<sub>0</sub> are positive quantities. We now determine the conditions under which model (5) can admit feasible (i.e., positive) campaign-standing equilibria. To do so, we inspect the discriminant of the algebraic Equation (7), namely

$$
\Delta = P\_1^2 - 4 \, P\_2 \, P\_0 = p^2 \left( \gamma \, k + \rho - \mu \right)^2 \left( \alpha\_0 - \alpha \right)^2 - 4 \, \alpha \, p \left( \gamma \, k + \rho \right) \mu \left( \sigma - \sigma\_c \right) \tag{10}
$$

and observe that, if *σ* < *σ*<sub>*c*</sub>, then (10) is a positive quantity, so that, by Descartes' rule of signs, the algebraic Equation (7) admits only one positive real solution. On the contrary, if *σ* > *σ*<sub>*c*</sub>, then Δ < 0 ⇔ *α*<sub>1</sub> < *α* < *α*<sub>2</sub>, where

$$\alpha\_{1/2} = \alpha\_0 + \frac{Q\_1 \mp \sqrt{Q\_1^2 + 4 \, \alpha\_0 \, Q\_0 \, Q\_1}}{2\,Q\_0} \tag{11}$$

and

$$Q\_0 = p^2 \left(\gamma k + \rho - \mu\right)^2, \qquad Q\_1 = 4 \, p \left(\gamma \, k + \rho\right) \mu \left(\sigma - \sigma\_c\right).$$

For *σ* > *σ*<sub>*c*</sub>, *Q*<sub>1</sub> is a positive quantity, and it is also easy to prove that *α*<sub>1</sub> < *α*<sub>0</sub> < *α*<sub>2</sub>. We can hence conclude that: if *α*<sub>1</sub> < *α* < *α*<sub>2</sub>, then Equation (7) admits no real solutions; if *α* < *α*<sub>1</sub> then, by Descartes' rule of signs, the algebraic Equation (7) admits two negative real solutions; if *α* > *α*<sub>2</sub>, then it admits two positive real solutions.
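For illustration, the case *σ* > *σ*<sub>*c*</sub> with *α* > *α*<sub>2</sub> can be checked numerically; the following sketch uses the baseline parameters later adopted in Section 5, together with the assumed values *σ* = 0.8 and *α* = 2 (both are our own illustrative choices):

```python
import math

# Illustrative sketch: roots of P2*b^2 + P1*b + P0 = 0 from (7)-(9), in the
# regime sigma > sigma_c with alpha > alpha_2 (assumed sigma = 0.8, alpha = 2).
mu, rho, lam, p, q, gam, k = 0.05, 0.25, 0.02, 0.7, 0.8, 0.2, 2.0
sig, alpha = 0.8, 2.0
gk = gam * k

sigma_c = (lam*(gk + rho - mu) + gk*q*mu + mu*(p*rho - mu)) / mu  # 0.685

P2 = alpha * p * (gk + rho)
P1 = -p * (gk + rho - mu) * alpha + (gk + rho) * (lam + mu + sig)
P0 = mu * (sig - sigma_c)

disc = P1**2 - 4.0 * P2 * P0
b_minus = (-P1 - math.sqrt(disc)) / (2.0 * P2)
b_plus = (-P1 + math.sqrt(disc)) / (2.0 * P2)
print(b_minus, b_plus)  # two positive campaign-standing levels
```

Both roots come out positive (approximately 0.023 and 0.279), consistent with the existence of two campaign-standing equilibria in this regime.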

The above results can be summarized in the following theorems:

**Theorem 1.** *Let μ* ≤ *p ρ and σ* < *σc. Then model (5) admits the* campaign-free *equilibrium E*<sup>0</sup> = (1, 0, 0, 0) *and one positive* campaign-standing *equilibrium E*∗*.*

**Theorem 2.** *Let μ* ≤ *p ρ and σ* > *σc. Then model (5) admits the* campaign-free *equilibrium E*<sup>0</sup> = (1, 0, 0, 0) *and (i) if α* < *α*<sub>2</sub>*, no positive campaign-standing equilibrium exists; (ii) if α* > *α*<sub>2</sub>*, two positive campaign-standing equilibria exist.*

As far as the local stability properties of the campaign-free equilibrium *E*<sup>0</sup> = (1, 0, 0, 0) are concerned, we observe that the Jacobian matrix of model (5) when evaluated at *E*0, is given by

$$J(E\_0) = \begin{pmatrix} -\mu & -\rho & 0 & -\gamma \\ 0 & p\,\rho - \mu - \sigma & \lambda & \gamma \, q \\ 0 & (1 - p)\,\rho + \sigma & -\lambda - \mu & \gamma \,(1 - q) \\ 0 & a\,k & 0 & -a \end{pmatrix},$$

and admits *ω* = −*μ* as an eigenvalue. To determine the signs of the other three eigenvalues, we introduce the following matrices:

$$A = \begin{pmatrix} p\,\rho - \mu - \sigma & \lambda & \gamma \, q \\ (1 - p)\,\rho + \sigma & -\lambda - \mu & \gamma \,(1 - q) \\ a \, k & 0 & -a \end{pmatrix},$$

$$A\_1 = \begin{pmatrix} -\lambda - \mu & \gamma \,(1 - q) \\ 0 & -a \end{pmatrix}, \quad A\_2 = \begin{pmatrix} p\,\rho - \mu - \sigma & \gamma \, q \\ a \, k & -a \end{pmatrix}, \quad A\_3 = \begin{pmatrix} p\,\rho - \mu - \sigma & \lambda \\ (1 - p)\,\rho + \sigma & -\lambda - \mu \end{pmatrix},$$

and recall that the remaining three eigenvalues of *J*(*E*0) have negative real parts if and only if the following conditions hold:

$$
\det(A) < 0; \quad \text{tr}(A) < 0; \quad \sum\_{i=1}^3 \det(A\_i) > \det(A) / \text{tr}(A).
$$

We get det(*A*) = *μ a* (*σ*<sub>*c*</sub> − *σ*) and tr(*A*) = *p ρ* − *a* − *λ* − 2 *μ* − *σ*, so that:

$$\det(A) < 0 \Leftrightarrow \sigma > \sigma\_c, \qquad \operatorname{tr}(A) < 0 \Leftrightarrow a > a\_c,$$

where *σ<sup>c</sup>* is given in (9) and *ac* = *p ρ* − *λ* − 2 *μ* − *σ*. We also observe that

$$a\_c > 0 \Leftrightarrow \sigma < \tilde{\sigma} = p\,\rho - \lambda - 2\,\mu$$

and

$$\tilde{\sigma} < \sigma\_c \Leftrightarrow -\mu < \gamma \, k \, q + \frac{\lambda \, (\rho + \gamma \, k)}{\mu},$$

which is always verified. Therefore, for *σ* > *σ*<sub>*c*</sub> > *σ̃*, the threshold quantity *a*<sub>*c*</sub> is negative, so that tr(*A*) < 0 for every positive value of *a*. Moreover, straightforward algebra shows that, for *σ* > *σ*<sub>*c*</sub>, the inequality $\sum\_{i=1}^{3} \det(A\_i) > \det(A)/\operatorname{tr}(A)$ is always verified. We are hence in the position to state the following theorem:

**Theorem 3.** *Let μ* ≤ *p ρ. (i) If σ* < *σ<sup>c</sup> then the campaign-free equilibrium E*<sup>0</sup> *is unstable. (ii) If σ* > *σ<sup>c</sup> then the campaign-free equilibrium E*<sup>0</sup> *is locally asymptotically stable.*
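Theorem 3 can be verified numerically on *J*(*E*<sub>0</sub>); in the sketch below, the parameter values are those later used for Figure 1, and the two values of *σ* are our own choices on either side of *σ*<sub>*c*</sub> = 0.685:

```python
import numpy as np

# Numerical check of Theorem 3 (sketch): largest real part of the eigenvalues
# of J(E0), with the Figure 1 parameters and sigma chosen (as an assumption)
# on either side of sigma_c = 0.685.
mu, rho, lam, p, q, gam, k, a = 0.05, 0.25, 0.02, 0.7, 0.8, 0.2, 2.0, 0.5

def max_real_eig(sig):
    J = np.array([
        [-mu, -rho,               0.0,       -gam],
        [0.0,  p*rho - mu - sig,  lam,        gam*q],
        [0.0, (1 - p)*rho + sig, -lam - mu,   gam*(1 - q)],
        [0.0,  a*k,               0.0,       -a],
    ])
    return np.linalg.eigvals(J).real.max()

print(max_real_eig(0.8))  # sigma > sigma_c: negative, so E0 is locally stable
print(max_real_eig(0.5))  # sigma < sigma_c: positive, so E0 is unstable
```

The sign of the leading eigenvalue flips across *σ* = *σ*<sub>*c*</sub>, as the theorem states.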

In the following section, we analyze in more detail the *nature* of the transcritical bifurcation at *σ* = *σ*<sub>*c*</sub> and its impact on the sustainability of the referral campaign.

#### **4. Sustaining the Campaign: Forward or Backward Scenario?**

Within the epidemic framework, backward scenarios have mainly been detected by means of specific bifurcation approaches [31], with the aim of establishing the nature of the bifurcation at *R*<sub>0</sub> = 1. Once the backward scenario is detected, the subcritical persistence of the disease can be prevented by varying significant parameters in the system or by means of error-based methods such as the Z-type control approach [32–34].

In this section, we discuss the occurrence of the backward vs the forward phenomenology for model (5), by using the method proposed in [35] that provides simple and manageable conditions for monitoring both these scenarios.

As shown in the previous section, *σ* = *σ<sup>c</sup>* is a transcritical bifurcation threshold. We observe that all the coefficients in the equilibrium Equation (7) may be regarded as functions of the parameter *σ*. Moreover at *σ* = *σc*, *P*0(*σc*) = 0 so that Equation (7) becomes

$$P\_2(\sigma\_c)b^2 + P\_1(\sigma\_c)b = 0$$

and admits the roots *b* = 0 and *b* = −*P*<sub>1</sub>(*σ*<sub>*c*</sub>)/*P*<sub>2</sub>(*σ*<sub>*c*</sub>). The former is related to the campaign-free equilibrium, and the latter corresponds to a positive campaign-standing equilibrium only if *P*<sub>1</sub>(*σ*<sub>*c*</sub>) and *P*<sub>2</sub>(*σ*<sub>*c*</sub>) have opposite signs. Since *P*<sub>2</sub>(*σ*<sub>*c*</sub>) > 0, in order to have a positive campaign-standing equilibrium, *P*<sub>1</sub>(*σ*<sub>*c*</sub>) < 0 must hold. By implicit differentiation of Equation (7) with respect to *σ*, one obtains:

$$(2\,P\_2\,b + P\_1)\frac{db}{d\sigma} + \frac{dP\_2}{d\sigma}b^2 + \frac{dP\_1}{d\sigma}b + \frac{dP\_0}{d\sigma} = 0.$$

Now, looking at the equilibrium *b* = 0, at *σ* = *σ<sup>c</sup>* one has:

$$P\_1(\sigma\_c) \frac{db}{d\sigma}(\sigma\_c) = -\frac{dP\_0}{d\sigma} < 0,\tag{12}$$

since, recalling (8), it holds that *dP*<sub>0</sub>/*dσ* = *μ* > 0. Therefore, for inequality (12) to be verified, *P*<sub>1</sub>(*σ*<sub>*c*</sub>) and *db*/*dσ*(*σ*<sub>*c*</sub>) must have opposite signs. This means that the slope of the bifurcation curve at *b* = 0 must have the opposite sign with respect to the coefficient *P*<sub>1</sub>(*σ*<sub>*c*</sub>). Since, in our case, a forward scenario at *σ* = *σ*<sub>*c*</sub> is obtained when *db*/*dσ*(*σ*<sub>*c*</sub>) < 0 and a backward scenario when *db*/*dσ*(*σ*<sub>*c*</sub>) > 0, it hence follows that: (i) if *P*<sub>1</sub>(*σ*<sub>*c*</sub>) < 0, then a backward bifurcation occurs at *σ* = *σ*<sub>*c*</sub>; (ii) if *P*<sub>1</sub>(*σ*<sub>*c*</sub>) > 0, the system displays a forward bifurcation at *σ* = *σ*<sub>*c*</sub>.

For model (5), *P*<sub>1</sub>(*σ*<sub>*c*</sub>) < 0 is hence a necessary and sufficient condition for the occurrence of the backward bifurcation at *σ* = *σ*<sub>*c*</sub>. By (9), the threshold *α*<sub>0</sub> depends on the parameter *σ*. Therefore, by introducing

$$\alpha^\* = \alpha\_0(\sigma\_c) = \frac{\gamma \, k + \rho}{\gamma \, k + \rho - \mu} \, \bar{\alpha}, \tag{13}$$

where

$$
\bar{\alpha} = \frac{\gamma \, k \left( \mu \, q + \lambda \right) + \rho \left( \mu \, p + \lambda \right)}{\mu \, p},
\tag{14}
$$

the following result holds:

**Theorem 4.** *Let μ* < *p ρ. (i) If α* < *α*<sup>∗</sup> *then system (5) exhibits a forward bifurcation at σ* = *σc. (ii) If α* > *α*<sup>∗</sup> *then system (5) exhibits a backward bifurcation at σ* = *σc.*

**Proof.** It follows from (8) by direct computations.

**Remark 1.** *It is easy to prove by direct computation that the threshold α*<sub>2</sub> *defined in (11), when evaluated at σ* = *σ<sub>c</sub>, is such that*

$$
\alpha\_2(\sigma\_c) = \alpha\_0(\sigma\_c) = \alpha^\*,
$$

*so that results in Theorem 4 are in perfect agreement with the existence results provided in Theorem 2.*

To validate the results found in Theorem 4, we show the local dynamics in the neighbourhood of the bifurcation value *σ* = *σ*<sub>*c*</sub> by means of the bifurcation diagrams in the (*σ*, *b*<sup>∗</sup>) parameter space, Figure 1.

**Figure 1.** Bifurcation diagram in the plane (*σ*, *b*<sup>∗</sup>). The other parameters are *μ* = 0.05; *ρ* = 0.25; *λ* = 0.02; *p* = 0.7; *q* = 0.8; *a* = 0.5; *k* = 2; *γ* = 0.2, so that *α*<sup>∗</sup> = 1.1684 and *σ*<sub>*c*</sub> = 0.6850. The solid lines (-) denote stability; the dashed lines (- -) denote instability. (**Left**) Forward scenario, the case *α* = 0.4 < *α*<sup>∗</sup>: at *σ* = *σ*<sub>*c*</sub> = 0.6850, system (5) exhibits a forward bifurcation. (**Right**) Backward scenario, the case *α* = 2 > *α*<sup>∗</sup>: at *σ* = *σ*<sub>*c*</sub> = 0.6850, system (5) exhibits a backward bifurcation. The value *σ*<sub>*SN*</sub> = 0.9105 is the saddle-node bifurcation threshold.
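The bistable regime of the right panel can also be probed by direct simulation of model (5); in the following sketch (our own illustration: the value *σ* = 0.8, lying between *σ*<sub>*c*</sub> and *σ*<sub>*SN*</sub>, and the two initial states are assumptions), the system is integrated with a classical RK4 scheme from one state near the campaign-free equilibrium and one near the upper campaign-standing branch:

```python
import numpy as np

# Bistability sketch for model (5) in the backward regime (assumed values:
# sigma = 0.8, between sigma_c = 0.685 and sigma_SN = 0.9105, and alpha = 2).
mu, rho, lam, p, q, gam, k, a = 0.05, 0.25, 0.02, 0.7, 0.8, 0.2, 2.0, 0.5
sig, alpha = 0.8, 2.0
a1 = alpha * p  # alpha_1 = alpha * p

def rhs(x):
    u, b, i, m = x
    return np.array([
        mu - rho*b*u - mu*u - gam*m*u,
        p*rho*b*u - sig*b + a1*b*i - mu*b + lam*i + gam*q*m*u,
        (1 - p)*rho*b*u + sig*b - a1*b*i - lam*i - mu*i + gam*(1 - q)*m*u,
        a*k*b - a*m,
    ])

def simulate(x0, dt=0.1, T=2000.0):
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):          # classical fourth-order Runge-Kutta
        k1 = rhs(x); k2 = rhs(x + 0.5*dt*k1)
        k3 = rhs(x + 0.5*dt*k2); k4 = rhs(x + dt*k3)
        x = x + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return x

dying = simulate([0.999, 0.001, 0.0, 0.002])     # starts near E0
living = simulate([0.216, 0.279, 0.505, 0.558])  # starts near the upper branch
print(dying[1], living[1])  # final broadcaster fractions: ~0 vs ~0.279
```

The first campaign collapses while the second settles on the campaign-standing branch, illustrating the coexistence of the two local attractors.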

For the numerical investigations, we use the same parameters considered in [7,10], where a mathematical model was introduced based on data collected through an extensive questionnaire-based survey [9]. That survey recognized the dynamics of viral marketing propagation as a complicated nonlinear phenomenon that involves several interactions between the participants and is influenced by several intensive and extensive parameters. In [7,10], this conceptual framework was developed through a mathematical ODE epidemic model that in [7] contains only the essential features of the phenomenon and in [10] is enriched with more realistic behavioral factors. The sets of parameters used in those papers were chosen with the purposes of (i) illustrating the range of possible dynamics that the model can express and (ii) elucidating which parameters, and hence which mechanisms, can influence the overall dynamics. Their perspective is a qualitative one, and the model we develop in the present paper, which enriches [10] with the self-information mechanism, moves exactly in the same qualitative direction. Therefore, to better elucidate the role of self-information and to ease the comparison with the dynamics presented in [7,10], in the present study we have intentionally decided to consider the same set of parameters used there, namely: *μ* = 0.05; *ρ* = 0.25; *λ* = 0.02; *p* = 0.7. The parameters for the self-information mechanism are instead chosen so that the hypothesis of Theorem 4 is verified. We hence fix *q* = 0.8, *a* = 0.5, *k* = 2, *γ* = 0.2.

With this choice of the parameters, the assumption *μ* = 0.05 < *p ρ* = 0.1750 is verified. Moreover, *α*<sup>∗</sup> = 1.1684 and *σ<sup>c</sup>* = 0.6850.
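These threshold values can be reproduced in a few lines. The following snippet is our own illustrative sketch (not part of the paper's software): it evaluates *α*∗ and *σc*, written in terms of *ζ* = *γ k* as in (16), for the parameter set above.

```python
# Illustrative check (our own sketch) of the thresholds alpha* and sigma_c.
mu, rho, lam, p, q, k, gamma = 0.05, 0.25, 0.02, 0.7, 0.8, 2.0, 0.2
zeta = gamma * k                          # information parameter zeta = gamma*k

# alpha_bar from (14) and alpha* from (13), both written in terms of zeta
alpha_bar = (zeta * (mu * q + lam) + rho * (mu * p + lam)) / (mu * p)
alpha_star = (zeta + rho) / (zeta + rho - mu) * alpha_bar

# transcritical threshold sigma_c, cf. the expression in (16)
sigma_c = (lam * (zeta + rho - mu) + zeta * q * mu + mu * (p * rho - mu)) / mu

print(alpha_star, sigma_c)   # approximately 1.1684 and 0.6850, as quoted above
```

The printed values agree with the thresholds reported for Figure 1.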

In Figure 1 (left), the parameters are taken so as to verify condition (i) in Theorem 4, so that a forward scenario is obtained. In this case, *α* = 0.4 < *α*∗ = 1.1684: a forward bifurcation occurs at *σ* = *σ<sup>c</sup>* = 0.6850. For *σ* < *σc*, the campaign-standing equilibrium *E*<sup>∗</sup> is the only attractor for the system, *E*<sup>0</sup> being unstable in this range. By contrast, for *σ* > *σc*, the campaign-free equilibrium *E*<sup>0</sup> is the only attractor for the system, and increasing *σ* above the threshold *σ<sup>c</sup>* is sufficient to stop the campaign. In Figure 1 (right), we choose the parameter values so that condition (ii) in Theorem 4 is verified. In this case, *α* = 2 > *α*∗ = 1.1684: a backward bifurcation occurs at *σ* = *σ<sup>c</sup>* = 0.6850 and *σSN* = 0.91058 is the saddle-node bifurcation value. For *σ* < *σc*, the campaign-standing equilibrium *E*<sup>∗</sup> is the only attractor for the system since *E*<sup>0</sup> is unstable in this range. For *σ<sup>c</sup>* < *σ* < *σSN*, a bistability situation occurs, with the campaign-free equilibrium *E*<sup>0</sup> and the campaign-standing equilibrium *E*<sup>2</sup> as local attractors. For *σ* > *σSN*, the campaign-free equilibrium *E*<sup>0</sup> becomes the only attractor for the system. In this case, the value of the parameter *σ* should be increased above the saddle-node bifurcation threshold *σSN* in order to stop the campaign.

The above results clearly show that the sustainability of the referral campaign is linked to the interplay between the two parameters *α* and *σ* that regulate the reciprocal transitions between the broadcaster and inert classes. We recall that *α* is the relapse rate from the inert to the broadcaster class, whereas *σ* is the dropout rate from the broadcaster class in favor of the inert class. Therefore, when the impact of the relapse rate *α* is below a certain threshold, i.e., *α* < *α*∗, increasing the dropout rate *σ* above the threshold *σ<sup>c</sup>* has the effect of stopping the campaign. On the contrary, when the impact of the relapse rate *α* is much stronger, i.e., *α* > *α*∗, simply increasing the dropout rate *σ* above *σ<sup>c</sup>* is not enough to stop the campaign, and the value of *σ* must exceed the higher threshold *σSN* to make it end. This aspect would seem to suggest that a backward scenario could strengthen the campaign's chances of survival. However, in the bistability range *σ<sup>c</sup>* < *σ* < *σSN*, the dynamics of the system is highly dependent on the initial conditions, so that, within the backward scenario, a sudden stop of the campaign could likely occur.

In this latter case, a slight reduction of the dropout rate *σ* does not allow the spreading of the campaign to be restored. To this aim, it is in fact necessary to drastically reduce *σ* below the value *σ<sup>c</sup>*. This behavior is depicted in Figure 2 and clearly indicates a hysteretic phenomenology, since the functioning and the current state of the system can be understood in more detail with reference to its past. In this sense, the effects on the dynamics persist after the causes that determined them have been removed.

**Figure 2.** Graphical representation of a hysteresis cycle on the bifurcation diagram in the plane (*σ*, *b*∗) in the case *α* > *α*∗, where a backward scenario is obtained. The other parameters are as in Figure 1 (right). Here *α*<sup>∗</sup> = 1.1684, *σ<sup>c</sup>* = 0.6850, and the value *σSN* = 0.9105 is the saddle-node bifurcation threshold. The solid lines (-) denote stability; the dashed lines (- -) denote instability.

As a consequence, even if the backward phenomenology can represent an opportunity, it nevertheless introduces a risk factor and, for this reason, it must be detected and adequately managed. This suggests the need for a more accurate characterization of the bistability range delimited by the transcritical threshold *σ<sup>c</sup>* and by the saddle-node threshold *σSN*. To this aim, we derive the analytical expression of the saddle-node bifurcation threshold *σSN*. We first recall that the two campaign-standing equilibria *E*<sub>1</sub><sup>∗</sup> and *E*<sub>2</sub><sup>∗</sup> are such that:

$$E\_1^\* = (u\_1^\*, b\_1^\*, i\_1^\*, m\_1^\*), \qquad E\_2^\* = (u\_2^\*, b\_2^\*, i\_2^\*, m\_2^\*),$$

where *u*<sub>*i*</sub><sup>∗</sup>, *i*<sub>*i*</sub><sup>∗</sup> and *m*<sub>*i*</sub><sup>∗</sup> are defined in (6) and *b*<sub>*i*</sub><sup>∗</sup> are the two positive solutions of the algebraic Equation (7), whose coefficients are defined in (8). More precisely,

$$b\_{1/2} = -\frac{P\_1 \mp \sqrt{\Delta}}{2 \, P\_2}$$

with Δ defined in (10) and the quantities *α*<sub>0</sub> and *σ<sup>c</sup>* defined in (9). At *σ* = *σSN*, the two campaign-standing equilibria *E*<sub>1</sub><sup>∗</sup> (unstable) and *E*<sub>2</sub><sup>∗</sup> (stable) coalesce and disappear, so that, for *σ* > *σSN*, the campaign-free equilibrium is the only attractor for the system. The saddle-node bifurcation of the two campaign-standing equilibria can be detected by requiring that Δ = 0, so that *E*<sub>1</sub><sup>∗</sup> ≡ *E*<sub>2</sub><sup>∗</sup>. In this regard, it holds:

$$\Delta = 0 \Leftrightarrow \sigma\_{1/2} = \frac{1}{(\gamma \, k + \rho)} \left[ \alpha \, p \left( \gamma \, k + \mu + \rho \right) - (\gamma \, k + \rho)(\lambda + \mu) \mp 2\sqrt{\Delta^\*} \right]$$

where

$$
\Delta^\* = \mathfrak{a} \not\!p \left(\gamma k + \rho\right) \varprojlim p \left(\mathfrak{a} - \mathfrak{a}\right) \tag{15}
$$

with *ᾱ* defined in (14). By direct computation, it easily follows that if *α* > *ᾱ* then the *σ<sup>i</sup>* are real quantities and that, for *α* > *α*<sup>∗</sup> > *ᾱ*, the inequality *σ*<sup>1</sup> > *σ<sup>c</sup>* > 0 holds. Therefore,

$$
\sigma\_{SN} = \sigma\_1 = \frac{1}{\gamma \, k + \rho} \left[ \alpha \, p \left( \gamma \, k + \mu + \rho \right) - (\gamma \, k + \rho)(\lambda + \mu) - 2 \sqrt{\Delta^\*} \right],
$$

is the saddle-node bifurcation threshold and [*σc*, *σSN*] is the bistability range for model (5). In the next section, we show how these critical thresholds are affected by variations in the self-information level.
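As a consistency check, evaluating the expression above for the backward-scenario parameters of Figure 1 (right) recovers the saddle-node value quoted there. The snippet below is our own sketch, not the paper's code:

```python
import math

# Saddle-node threshold sigma_SN computed from Delta* in (15) for alpha = 2,
# the backward scenario of Figure 1 (right); our own illustrative sketch.
mu, rho, lam, p, q, k, gamma = 0.05, 0.25, 0.02, 0.7, 0.8, 2.0, 0.2
alpha = 2.0                              # relapse rate, with alpha > alpha*
zk = gamma * k                           # the product gamma*k

alpha_bar = (zk * (mu * q + lam) + rho * (mu * p + lam)) / (mu * p)   # (14)
delta_star = alpha * mu * p**2 * (zk + rho) * (alpha - alpha_bar)     # (15)

sigma_sn = (alpha * p * (zk + mu + rho)
            - (zk + rho) * (lam + mu)
            - 2.0 * math.sqrt(delta_star)) / (zk + rho)
print(sigma_sn)   # approximately 0.91058, the saddle-node value of Figure 1
```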

#### **5. Effects of Self-Information on the Bifurcation Thresholds**

Since *γ* and *k* are the parameters specifically related to the self-information mechanism, we introduce the *information parameter ζ* = *γ k* and consider the different bifurcation thresholds as functions of *ζ*, i.e.,

$$\begin{aligned} \alpha^\*(\zeta) &= \frac{\zeta + \rho}{\zeta + \rho - \mu} \frac{\zeta \left(\mu \, q + \lambda\right) + \rho \left(\mu \, p + \lambda\right)}{\mu \, p} \\\\ \sigma\_c(\zeta) &= \frac{1}{\mu} [\lambda \, (\zeta + \rho - \mu) + \zeta \, q \, \mu + \mu \left(p \, \rho - \mu\right)] \\\\ \sigma\_{SN}(\zeta) &= \frac{1}{(\zeta + \rho)} \left[ \alpha \, p \left(\zeta + \mu + \rho\right) - (\zeta + \rho)(\lambda + \mu) - 2\sqrt{\Delta^\*(\zeta)} \right] \end{aligned} \tag{16}$$

with Δ∗(*ζ*) as defined in (15). We observe that the saddle-node bifurcation threshold *σSN* is a real quantity provided that the information variable *ζ* is chosen in the range (0, *ζ*∗), where

$$\zeta^\* = \frac{\alpha \, \mu \, p - \rho \left(\mu \, p + \lambda\right)}{\mu \, q + \lambda}. \tag{17}$$

Moreover, since in the backward scenario *α* > *α*∗, the inequality

$$
\alpha > \frac{\rho \left(\mu \, p + \lambda\right)}{\mu \, p},
$$

is always verified and *ζ*∗ is a positive quantity. We also observe that the transcritical bifurcation threshold *σ<sup>c</sup>* is an increasing function of *ζ*, being

$$\frac{d\sigma\_c}{d\zeta} = \frac{\mu \, q + \lambda}{\mu}.$$

Therefore, a higher level of information increases the threshold *σc*, so that, both in the forward and in the backward regime, the range [0, *σc*] for which the campaign-standing equilibrium is the only attractor for the system becomes larger.
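These threshold properties are easy to check numerically. The sketch below is our own illustration (the value *α* = 2 is the backward-scenario assumption of Figure 1 (right)); it evaluates the feasibility bound *ζ*∗ from (17) and the constant slope *dσc*/*dζ*:

```python
# Feasibility bound zeta* from (17) and the slope of sigma_c(zeta);
# our own illustrative sketch, with alpha = 2 as in Figure 1 (right).
mu, rho, lam, p, q, alpha = 0.05, 0.25, 0.02, 0.7, 0.8, 2.0

zeta_star = (alpha * mu * p - rho * (mu * p + lam)) / (mu * q + lam)  # (17)
slope = (mu * q + lam) / mu     # d(sigma_c)/d(zeta), positive for all zeta

print(zeta_star, slope)   # approximately 0.9375 (as in Figure 3) and 1.2
```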

Moreover, Figure 3 also indicates that the length of the bistability range, *σSN* − *σc*, is not a monotone function of *ζ*: it increases for *ζ* in [0, *ζ*1) and (*ζ*2, *ζ*∗) and decreases for *ζ* in [*ζ*1, *ζ*2].


**Figure 3.** Thresholds (16) as functions of the information variable *ζ*. The other parameters are chosen as in Figure 1. (**Top-left**) The threshold *α*∗(*ζ*) as a function of *ζ*. (**Top-right**) The saddle-node bifurcation threshold *σSN*(*ζ*) as a function of *ζ*. The threshold *σSN* is feasible in the range (0, *ζ*∗], with *ζ*<sup>∗</sup> = 0.9375. (**Bottom**) The length of the bistability range, i.e., *σSN* − *σc*, within the backward scenario as a function of *ζ*. The bistability range is increasing for [0, *ζ*1) and (*ζ*2, *ζ*∗) and decreasing for [*ζ*1, *ζ*2]. Here *ζ*<sup>∗</sup> = 0.9375; *ζ*<sup>1</sup> = 0.1135; *ζ*<sup>2</sup> = 0.8861.

To give a more quantitative measure of the impact of the information parameter *ζ* on the bifurcation thresholds (16), we make use of sensitivity analysis, a useful tool to reveal how a certain parameter can influence the campaign transmission. The sensitivity of a variable with respect to the system parameters can be measured through a sensitivity index, which provides a quantitative measure of the relative change in a variable when a parameter changes. When the variable is a differentiable function of the parameter, the sensitivity index is defined as follows:

**Definition 1.** *[36] The normalized forward sensitivity index of a variable u, that depends differentiably on a parameter p, is defined as*

$$
\phi\_p^u = \frac{\partial u}{\partial p} \frac{p}{u}
$$

The normalized forward sensitivity index of a variable with respect to a parameter is therefore the ratio of the relative change in the variable to the relative change in the parameter.
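Definition 1 is straightforward to apply in practice. The sketch below is our own illustration: it approximates the index of *σc* with respect to *ζ* by a central finite difference and compares it with the value obtained from the analytic derivative *dσc*/*dζ* = (*μq* + *λ*)/*μ*:

```python
# Normalized forward sensitivity index (Definition 1) of sigma_c w.r.t. zeta,
# via a central finite difference; our own illustrative sketch.
mu, rho, lam, p, q = 0.05, 0.25, 0.02, 0.7, 0.8

def sigma_c(zeta):
    # transcritical threshold as a function of zeta, cf. (16)
    return (lam * (zeta + rho - mu) + zeta * q * mu + mu * (p * rho - mu)) / mu

def sensitivity(f, x, h=1e-6):
    # phi_p^u = (du/dp) * (p / u), with du/dp by central differences
    return (f(x + h) - f(x - h)) / (2.0 * h) * x / f(x)

zeta = 0.4   # illustrative value of the information parameter
phi_num = sensitivity(sigma_c, zeta)
phi_exact = (mu * q + lam) / mu * zeta / sigma_c(zeta)  # analytic derivative
print(phi_num, phi_exact)   # the two values coincide up to rounding
```

Since *σc*(*ζ*) is linear in *ζ*, the finite-difference value matches the analytic one essentially to machine precision.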

Figure 4 shows how the sensitivity indices of the thresholds *α*∗, *σ<sup>c</sup>* and *σSN* vary as the information parameter *ζ* varies. For both *α*∗ and *σc*, the sensitivity index is a saturating function of *ζ*, the former increasing more slowly than the latter. The sensitivity of *σSN* instead grows rapidly for sufficiently low and for sufficiently high values of the information parameter *ζ*; on the contrary, it grows very slowly for intermediate values of *ζ*. In Table 1, we show more quantitatively how variations in the information parameter *ζ* can affect the different thresholds (16).

**Figure 4.** Sensitivity indices of the thresholds *α*∗, *σ<sup>c</sup>* and *σSN* as functions of the information variable *ζ*. The other parameters are chosen as in Figure 1. (**Top-left**) Plot of the sensitivity *φ*<sub>*ζ*</sub><sup>*α*∗</sup> versus the information variable *ζ*; (**Top-right**) plot of the sensitivity *φ*<sub>*ζ*</sub><sup>*σc*</sup> versus *ζ*; (**Bottom**) plot of the sensitivity *φ*<sub>*ζ*</sub><sup>*σSN*</sup> versus *ζ*.


**Table 1.** Sensitivity indices of the thresholds *α*∗, *σ<sup>c</sup>* and *σSN* for three different levels of information: low, intermediate and high. The numerical values of the system parameters used for the computations are: *μ* = 0.05; *ρ* = 0.25; *λ* = 0.02; *p* = 0.7; *q* = 0.8. Here *ζ*<sup>∗</sup> = 0.9375; *ζ*<sup>1</sup> = 0.1135; *ζ*<sup>2</sup> = 0.8861.

It is interesting to observe that *ζ* affects such thresholds differently depending on the level of information we consider.

In the case of low information, *σ<sup>c</sup>* is the most affected threshold: in fact, *φ*<sub>*ζ*</sub><sup>*σc*</sup> = 0.38, which means that increasing (or decreasing) the parameter *ζ* by 10% increases (or decreases) the transcritical threshold *σ<sup>c</sup>* by 3.8%. The least affected threshold is instead *σSN*, being *φ*<sub>*ζ*</sub><sup>*σSN*</sup> = 0.1734. However, in this case, the sensitivity indices of the three thresholds have fairly low numerical values that are quite similar to each other. A similar situation, but with higher values of the sensitivity indices, is found for intermediate levels of information, for which the thresholds *α*∗ and *σ<sup>c</sup>* are influenced by variations in the information parameter *ζ* much more than the saddle-node bifurcation threshold *σSN*. Also in this case, *σ<sup>c</sup>* is the threshold most influenced by variations in the information parameter, being *φ*<sub>*ζ*</sub><sup>*σc*</sup> = 0.74; the least affected threshold is instead *σSN*, since *φ*<sub>*ζ*</sub><sup>*σSN*</sup> = 0.3450. The case of high levels of information presents a completely different scenario, *σSN* now being the most affected threshold, with *φ*<sub>*ζ*</sub><sup>*σSN*</sup> = 0.97: this means that increasing (or decreasing) the parameter *ζ* by 10% increases (or decreases) the saddle-node bifurcation threshold *σSN* by 9.7%. In this case, however, the thresholds *α*∗ and *σ<sup>c</sup>* are also significantly influenced by variations in the information parameter *ζ*.

These results seem to suggest that *intermediate* levels of information allow the campaign to spread in more controllable conditions. In fact, they seem to (i) favor a forward-type regime over a backward-type one, as can be observed from the significant increase in the threshold *α*∗; and (ii) favor the presence of a single campaign-standing attractor (significant increase in the threshold *σc*) over a bistability regime (weak impact on the threshold *σSN*). In this sense, intermediate levels of information are surely preferable to low ones. On the other hand, too high levels of information significantly impact the saddle-node threshold, favoring a bistability situation where the chances of the campaign's survival increase despite being exposed to the likely emergence of hysteretic dynamics.

The survival of the campaign obviously depends on the number of people who spread it and, in the bistability range, when *σ* tends to *σSN*, the level of broadcasters at the campaign-standing equilibrium tends to decrease, as can be seen from the bifurcation diagram in Figure 1.

It is therefore interesting to ask whether the level of customer satisfaction linked to the self-information process can act as a destabilizing factor for the survival of the campaign. Numerical simulations in Figure 5 (Top) show that, for low or intermediate values of the self-information parameter *ζ*, the campaign-standing equilibrium is rather resilient to variations in the level of customer satisfaction *q*. However, as the level of self-information increases from low to intermediate, the impact of *q* also increases, to the point that, for high values of *ζ*, a threshold value *q*∗ can be found below which the referral campaign is driven to a stop, Figure 5 (Bottom).

**Figure 5.** Impact of the customer satisfaction parameter *q* on the referral campaign in the bistability region, *σ* ∈ [*σc*, *σSN*], for different levels of self-information. Initial conditions are chosen in the neighbourhood of the campaign-standing equilibrium. The other parameters are as in Figure 1. (**Top-left**) Low level of the self-information parameter, i.e., *ζ* = 0.08 (*k* = 2; *γ* = 0.04) and *σ* = 0.4. (**Top-right**) Intermediate level of the self-information parameter, i.e., *ζ* = 0.5 (*k* = 2; *γ* = 0.25) and *σ* = 0.7. (**Bottom**) High level of the self-information parameter, i.e., *ζ* = 0.9 (*k* = 2; *γ* = 0.45) and *σ* = 0.9.

#### **6. Conclusions**


With this study we wanted to show that the concept of backward bifurcation, borrowed from epidemics, can be fruitfully exploited to shed light on the mechanism underlying the occurrence of hysteresis in marketing as well as for the strategic planning of adequate tools for its control.

In this paper, we considered a referral marketing model with self-information and evaluated how the interplay between a passive information mechanism (due to referrals) and an active one (due to self-information) impacts the sustainability of the viral campaign. We found that the emergence of a forward or a backward phenomenology is essentially linked to passive information mechanisms, since the occurrence of these scenarios depends on the suitable interplay between the two parameters that regulate the reciprocal transitions between the broadcaster and inert classes by means of referrals.

Differently from epidemics, in the viral marketing context a backward scenario could strengthen the campaign's chances of survival. While it can represent an opportunity on one side, on the other it introduces a risk factor because of the bistability range, where the system dynamics depend strongly on the initial conditions. In this range, hysteresis-type behaviors can hence emerge. Moreover, if in epidemics the main purpose is to 'avoid' a backward-type scenario, for viral marketing this aim becomes learning to tame and eventually manage the backward phenomenology. In the present study, this has been shown to be the role of self-information that, however, needs to be properly calibrated. According to the Latin saying 'in medio stat virtus', our analysis shows in fact that intermediate levels of self-information allow the campaign to spread in more controllable conditions by favoring the more reassuring forward-type regime over the backward one and, in both cases, by widening the range of parameters in which the campaign-standing equilibrium is the only attractor for the system. Too high levels of information can instead broaden the region of parameters in which bistability occurs and, although enlarging the chances of survival of the campaign, can be responsible for sudden collapses in its spread. Precisely in this case, the level of customer satisfaction turns out to have a certain weight, since a threshold customer-satisfaction value can be found below which small fluctuations from the campaign-standing equilibrium can lead the campaign to a sudden stop.

Therefore, even if hysteretic dynamics in referral campaigns may likely occur, because they are intimately linked to the mechanism of referrals, an adequate level of self-information and a fairly high level of customer satisfaction can be two weapons capable of controlling hysteresis, transforming a potential risk into an opportunity.

In conclusion, this study represents a qualitative step towards a better understanding of how self-information can impact the sustainability of a referral marketing campaign and, within such a qualitative dimension, there is no presumption of fitting the trend of a specific campaign. To provide further insight into the topic, two extensions are currently the subject of ongoing research: (i) giving the model a more quantitative dimension through validation against a practical experience and (ii) exploring the possible impact of multilayer or multiplex networks, which may lead to some hidden patterns of influence and interplay between the self-information mechanism and the viral spreading of the campaign.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Acknowledgments:** The present work has been performed under the auspices of the Italian National Group for Mathematical Physics (GNFM-Indam). The author wishes to thank the anonymous Referees and the handling Editor for their valuable comments and remarks.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **Mass-Preserving Approximation of a Chemotaxis Multi-Domain Transmission Model for Microfluidic Chips**

**Elishan Christian Braun 1,†, Gabriella Bretti 2,\*,† and Roberto Natalini 2,†**


**Abstract:** The present work is inspired by the recent developments in laboratory experiments made on chips, where the culturing of multiple cell species is possible. The model is based on coupled reaction-diffusion-transport equations with chemotaxis and takes into account the interactions among cell populations and the possibility of drug administration for testing drug effects. Our effort is devoted to the development of a simulation tool able to reproduce the chemotactic movement and the interactions between different cell species (immune and cancer cells) living in a microfluidic chip environment. The main issues faced in this work are the balancing of the incoming and outgoing fluxes passing through the interfaces between the 2D and 1D domains of the chip and the development of mass-preserving and positivity-preserving numerical conditions at the external boundaries and at these interfaces.

**Keywords:** multi-domain network; transmission conditions; finite difference schemes; chemotaxis; reaction-diffusion models

**MSC:** 65M06; 35L50; 92B05; 92C17; 92C42

#### **1. Introduction**

The aim of the present work is to study both the modelling and the numerical approximation of a chemotaxis-reaction-diffusion mathematical system describing the qualitative behavior of different cell species living in a confined environment. This work is inspired by laboratory experiments made on microfluidic chips [1], where several populations coexist and interact. In recent years, a new approach to biological studies has been developed, aimed at reconstructing organs and complex biological processes on a chip [2]. The fundamental idea is that the comprehension of the sophisticated physiology of organisms, based on the complex behavior and interaction of cell populations, tissues, and organs, needs interdisciplinary contributions from biology to mathematics.

Motivated by the laboratory setting of the experiment in microfluidic chips [1–3]—see also the short description of the experiments reported in Section 2.1.1—we introduce a model mimicking the interactions between two cell populations—namely, immune and cancer cells.

The mathematical model, proposed in Section 2.2, is a reaction-diffusion system with chemotaxis and describes birth and death processes, the migration of immune cells driven by chemical signals produced by tumor cells, and interactions between different cell species. We underline that, since the chemical gradients are not measured experimentally, by using the simulation algorithm based on the mathematical model proposed here, the chemical concentration gradients in the chip can be obtained by solving the inverse problem of minimizing the residuals between the measured trajectories and the simulated ones; see

**Citation:** Braun, E.C.; Bretti, G.; Natalini, R. Mass-Preserving Approximation of a Chemotaxis Multi-Domain Transmission Model for Microfluidic Chips. *Mathematics* **2021**, *9*, 688. https://doi.org/ 10.3390/math9060688

Academic Editor: Ioannis G. Stratis

Received: 3 March 2021 Accepted: 18 March 2021 Published: 23 March 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

also the discussion in Section 5 about the future developments of our work. From a mathematical point of view, we follow the framework of the classical macroscopic models of chemotaxis; see, for instance, [4], where the evolution of the density of cells is described by a parabolic equation and the concentration of a chemoattractant is given by a parabolic or elliptic equation, depending on the different regimes to be described and on the authors' choices. The choice of a continuous model to reproduce an experiment in a confined environment, with a relatively small number of cells, is motivated by the fact that we aim at developing a simulation tool able to describe the phenomena of immunosurveillance of cancer in tissues, where billions of cells are present. For this reason, a macroscopic model is more suitable than a particle model.

In the chambers, we consider a 2D doubly parabolic model, a modification of the Keller–Segel model [4] that takes into account the presence of two populations, both producing a chemical signal and interacting with each other. We remark that we consider only the 2D case, since the experimental data do not take into account the height of the chip. Clearly, in principle, our framework (model and numerical algorithm) could easily be extended to the third dimension, and we remark that, analogously to the 2D-1D case considered here, mass-preserving and positivity-preserving numerical conditions at the external boundaries and at the interfaces between the 3D domains and the channels still hold.

We consider the microchannels connecting the 2D chambers as 1D lines for modelling and computational reasons, as explained in Section 1.1. In order to model the dynamics on the microchannels, we choose between two different approaches: we can assign a 1D version of the doubly parabolic model used in the chambers; otherwise, we can assign a model derived from the 1D-GA model [5], which is characterized by the more realistic feature that the speed of propagation of cells in the channels is finite, which seems to be the dominant property at this scale. On the other hand, other models based on hyperbolic/kinetic equations for the evolution of the density of individuals, characterized by a finite speed of propagation, can be assigned [6–10].

#### *1.1. The Geometry of the Microfluidic Chip and of the Related Computational Domain*

The microfluidic chip is represented as a network of channels connecting two boxes (the microfluidic chambers); see Figure 1; a schematic picture of the related computational domain is depicted in Figure 2. Here, we refer to the experiment with two main culture chambers (a tumor and an immune cell compartment) connected via narrow capillary migration microchannels with, respectively, a width of 12 μm and a length of 500 μm. Moreover, the height of the channels is 10 μm; however, since in the video footage the experiment is recorded at a fixed height, the third spatial dimension is neglected in our framework. The cross-sectional dimensions of the culture chambers are 1 mm (width) × 100 μm (height).

A simplified schematization of the bounded surface where the experiment is performed is reported in Figure 2. We have two microfluidic chambers of the same size, one on the left and the other on the right, defined, respectively, as Ω*<sup>l</sup>* = [0, *Lx*] × [0, *Ly*] and Ω*<sup>r</sup>* := [*Lx* + *L*, 2*Lx* + *L*] × [0, *Ly*]. They are connected by microchannels, each of them schematized as a rectangle *R* = [0, *L*] × [*a*, *b*]. To ease the reading, we point out that in the sequel we approximate the rectangular microchannels as 1D intervals *C* = [0, *L*] with zero thickness for the following reasons:


Then, the link between the box on the left and the corridor is schematized as a junction (node 1*L*) and, analogously, the link between the corridor and the box on the right as node 2*L*. The two junctions are not really single points; therefore, they are parametrized as intervals for node 1*L* and node 2*L*, namely [*a*, *b*] of length *σ* := *b* − *a*.

We remark that, for the sake of simplicity, the numerical treatment is developed for a simpler geometry composed of 2D chambers connected through a single 1D channel. The extension to multiple 1D channels is carried out in Section 3.2.2.

**Figure 1.** Microfluidic chip environment: two chambers connected by multiple channels. Credits by Vacchelli et al. [1] edited by AAAS.

**Figure 2.** Simplified schematization of the chip geometry depicted in Figure 1.

#### *1.2. Original Contribution of the Present Paper*

From the mathematical and numerical viewpoint, here we deal with a challenging issue arising in the chemotaxis modelling of cell interaction. The problem involves doubly parabolic models in 2D domains (microfluidic chambers) that are connected with 1D domains represented by channels, where either a doubly parabolic or a hyperbolic-parabolic model can be assigned.

The classical doubly parabolic Keller–Segel (KS) model [4] of chemotaxis reads as:

$$\begin{cases} u\_t = \mathrm{div}(\nu \nabla u - \chi(\phi)\, u \nabla \phi), \\\ \phi\_t = D \Delta \phi + a\, u - b\, \phi, \end{cases} \tag{1}$$

with *u* the density of individuals in the considered medium, *ν* the diffusion rate of the organism according to Fick's Law, and *φ* the density of the chemoattractant. The positive constant *D* is the diffusion coefficient of the chemoattractant; the positive coefficients *a* and *b* are, respectively, its production and degradation rates; and *χ* is the chemotactic sensitivity, depending on the density of the considered quantities. In the 2D domains given by the microfluidic chambers, we apply a reaction-diffusion chemotaxis KS-like model inspired by (1) and described in Section 2.2.1.
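To fix ideas, a minimal 1D explicit discretization of a system of type (1) can be written in a few lines. The sketch below is our own toy illustration, NOT the scheme developed in this paper; all parameter values are arbitrary assumptions. Zero-flux boundaries make the total cell mass an invariant, the property at the heart of the conditions discussed next.

```python
import math

# Toy 1D explicit discretization of a doubly parabolic system of type (1);
# our own illustration, not the paper's scheme. Zero-flux boundaries, so the
# total cell mass sum(u)*dx stays constant in time.
nu, D, a, b, chi = 1e-3, 1e-3, 1.0, 0.5, 5e-3   # illustrative coefficients
N, dt, steps = 100, 1e-3, 100
dx = 1.0 / N

u = [1.0 + 0.1 * math.cos(math.pi * (i + 0.5) * dx) for i in range(N)]
phi = [0.0] * N

for _ in range(steps):
    # flux of u at the N+1 cell faces; zero at the two outer boundaries
    flux = [0.0] * (N + 1)
    for i in range(1, N):
        dudx = (u[i] - u[i - 1]) / dx
        dpdx = (phi[i] - phi[i - 1]) / dx
        flux[i] = -nu * dudx + chi * 0.5 * (u[i] + u[i - 1]) * dpdx
    u = [u[i] - dt * (flux[i + 1] - flux[i]) / dx for i in range(N)]
    # chemoattractant: diffusion + production a*u - degradation b*phi
    new_phi = []
    for i in range(N):
        left = phi[i - 1] if i > 0 else phi[0]
        right = phi[i + 1] if i < N - 1 else phi[N - 1]
        lap = (left - 2.0 * phi[i] + right) / dx ** 2
        new_phi.append(phi[i] + dt * (D * lap + a * u[i] - b * phi[i]))
    phi = new_phi

mass = sum(u) * dx   # conserved, because the boundary fluxes vanish
```

Because the interior fluxes telescope and the boundary fluxes are zero, the discrete mass is conserved up to rounding; this conservative finite-volume-style form is what the transmission conditions of the following sections generalize to the 2D-1D setting.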

In the 1D microfluidic channels, we use the one-dimensional version of the KS-like model used in the chambers, but we also study the behavior of individuals when a hyperbolic-parabolic model, characterized by a finite speed of propagation, is assigned. Such a hyperbolic-parabolic model, described in Section 2.2.1, is inspired by the Greenberg–Alt (GA) model, arising as a simple model for chemotaxis on a line:

$$\begin{cases} \partial\_t u + \partial\_x v = 0, \\\partial\_t v + \lambda^2 \partial\_x u = -v + \chi(\phi) u \partial\_x \phi, \\\partial\_t \phi = D \partial\_{xx} \phi + a u - b \phi. \end{cases} \tag{2}$$

Note that here *v* is the averaged flux. Let us underline that the flux *v* in model (2) corresponds to *v* = −*λ*<sup>2</sup>∇*u* + *χ*(*φ*)*u*∇*φ* for the KS system.

We remark that here we do not consider the GA model on the 2D domain, since in this case the derivation of the monotonicity condition, easily computed in 1D, is not straightforward, due to the oscillations brought by the 2D wave equation. One possibility to overcome this problem would be to consider the macroscopic hyperbolic model proposed by Preziosi in [9,11]. Alternatively, a kinetic 2D model of chemotaxis and its numerical approximation is studied in [12].

This system was analytically studied on the whole line and on bounded intervals in [13], while an effective numerical approximation, the Asymptotic High Order (AHO) scheme, was introduced in [14]—see also [15,16]—and extended to networks with general boundary conditions in [17,18].

Here, in the numerical computation of solutions, one has to take care of what happens at the interface when switching from 2D-doubly parabolic models to 1D-doubly parabolic or 1D-hyperbolic-parabolic ones and vice versa.

Since we aim at reproducing the numerical solutions of such models, we need to deal with a multi-domain problem given by the passage from a 2D domain represented by the chambers of the chip to 1D domains given by the channels. For this reason we need to develop ad hoc transmission conditions to ensure mass conservation at the 2D-1D interfaces. From the numerical viewpoint, here we consider numerical boundary conditions including in the stencil a ghost cell value taken from the neighbouring domain, as we will show in the numerical Section 3. The approximation of doubly parabolic chemotaxis models for the 1D-KS model (1) on networks was already considered in [19]. However, in that case the transmission conditions were between 1D–1D interfaces and on each arc of the network the same fully parabolic model was considered. We also underline that in that work the transmission conditions required the continuity of the densities of both the cells *u* and the chemoattractant *φ*, while we only impose the continuity of the fluxes, which seems more realistic when dealing with fluxes of individuals or molecules.

For the numerical approximation of the GA system (2), we refer to our previous paper [14] for a single line. In particular, the numerical treatment of the hyperbolic part of the system is based on the finite difference AHO scheme, with the development of mass-preserving numerical schemes at the outer boundaries, while the parabolic part is approximated by finite differences and the Crank–Nicolson scheme.

In [17,18] the GA system was solved on networks, thus making it necessary to develop mass-preserving transmission conditions at inner nodes (interfaces) and suitable boundary conditions at outer nodes. However, the transmission conditions considered there involved the mass exchange only between 1D–1D interfaces; moreover, on each arc of the network, the same model was considered. Furthermore, the second-order numerical approximation of the boundary conditions developed in those papers did not ensure the positivity-preserving property in the case of oscillating functions.

The numerical approximation of the permeability Kedem–Katchalsky [20] conditions describing the conservation of the flux through a node was already considered in [21], but we underline that in that paper the study was carried out for the finite element approximation of linear problems. For reaction-diffusion problems, the approximation of permeability conditions was studied in [22] for finite difference schemes and in [23] for discontinuous Galerkin methods. The numerical treatment of permeability conditions for chemotaxis problems was presented for the first time in [24] for the 1D parabolic–parabolic interface, and a finite difference approximation was developed without taking into account either the mass-preservation or the positivity-preservation properties at the interfaces.

Therefore, to the best of our knowledge, the present paper is the first numerical work where this technique of switching the size of the domains and the type of equations (parabolic vs. hyperbolic) is introduced in order to develop mass-preserving and positivity-preserving schemes. We point out that in the present paper the approximation method—based on finite difference schemes—involves the derivation of suitable zero-flux boundary conditions. A framework based on finite difference methods seems a natural choice when dealing with hyperbolic transmission models.

#### *1.3. Main Contents and Plan of the Paper*

In the present paper, a positivity-preserving and mass-preserving numerical discretization of Neumann boundary conditions at the corners and at the bottom and top boundaries of the 2D domain for a 2D-doubly parabolic reaction-diffusion problem is presented. Moreover, a positivity-preserving and mass-preserving numerical scheme at the interfaces of the network connecting the 2D chambers with the 1D channels (where the 1D-doubly parabolic or 1D-hyperbolic-parabolic problem can be assigned) is developed. To summarize the main contents of the present work, the mathematical issues faced in this study are categorized into two aspects:


Then, the numerical questions arising in the mentioned issues and here addressed are:


The plan of the paper is as follows. In Section 2 we describe the biological framework that inspired our study and introduce the mathematical formulation of the adopted models. Section 3 is devoted to the numerical techniques used to approximate the problem, and in Section 4 some numerical tests showing the qualitative behavior of cells in the designed environment are presented. Finally, in Section 5 a discussion of the results and of the future developments of our work is given.

#### **2. Materials and Methods**

The present Section is devoted to:


#### *2.1. Biological Framework*

The control of immune cell migration and interaction with tumor cells living inside the chambers of the microfluidic chip represents a new and attractive approach for the clinical management of tumor diseases. Furthermore, drug testing can also be carried out in the chip environment. Then, the quantitative assessment of the ability of the immune cells of each patient to recognize and attack the tumor cells could provide a new potential parameter predictive of patient outcomes in the future.

Migrating cells respond to complex chemical stimuli (as a mixture of growth factors, cytokines, and chemokines), representing a source of chemoattractants. These chemoattractants, through the interaction with their receptors, allow cells to acquire a polarized morphology and to perform the action of immunosurveillance.

The development of lab-on-chip technologies has made it possible to realize a reproducible tailoring of the cellular microenvironment, thus allowing the continuous monitoring of experiments and the accurate control of experimental parameters. Recently, the development of microengineering has provided the possibility to realize culturing of multiple cell types and made it possible to observe cell-cell interactions and to transpose in vivo studies to a second generation of in vitro smart environments. The main advantages of this new technological tool are a close control over local experimental conditions and lower costs with respect to the use of animals in laboratory experiments for efficacy and toxicity testing. Some results obtained with on-chip experiments are presented in [1,2,25–27].

Regarding the structure of microfluidic devices, they are designed to allow chemical and physical contacts between tumor cells and non-adherent immune cells (i.e., murine splenocytes or human peripheral blood mononuclear cells). The microfluidic co-culture platforms are fabricated in polydimethylsiloxane (PDMS, Silgard 184), a biocompatible optically transparent silicone elastomer.

#### 2.1.1. Setting of the Laboratory Experiments

Here, we shortly describe the laboratory experiments inspiring our work (see [1]) and carried out in the microfluidic chip environment; see Figure 1. Two populations, immune cells of wild type and cancer cells, are initially put into two separate chambers and can have physical and chemical contact through the microchannels. In more detail, the cells are loaded into the reservoirs as follows: the left chambers are filled with about 2 × 10⁶ human peripheral blood mononuclear cells and the right chambers with about 5 × 10⁴ breast cancer cells, pre-treated or not with doxorubicin hydrochloride, all immersed in a suitable culture medium. Time-lapse recordings are performed over a period of 72 h (1 microphotograph every 2 min) by means of a microscope placed directly inside the CO₂ incubator for the duration of the recording.

We consider two different scenarios:


The culture medium in both cases is neutral, thus meaning that no exogenous substances are introduced. Our aim is then to build a simulation algorithm based on the mathematical model, which is able to reproduce the main features of the observed phenomena in the two scenarios.

#### *2.2. Mathematical Framework*

For the development of the mathematical modelling explained in the next section, we remark that we neglect the third dimension, since we do not have laboratory measurements of the movement of cells in the vertical direction. For this reason we consider the microfluidic chambers as 2D objects. Nowadays, the mathematical analysis of biological phenomena has become an important tool to explore complex processes and to detect mechanisms that might not be evident to the experimenters. Although a mathematical model cannot replace a real experiment, it may represent a support tool to explain acquired biological data and it may allow us to gain a deeper understanding of the interactions between cancer cells and the immune system. More generally, mathematical models can describe a broad variety of biological phenomena, including cell dynamics and cancer [28–32].

The movement of bacteria under the effect of a chemical substance has been widely studied in the last few decades, and numerous mathematical models have been proposed. As shown in [33], chemotaxis is decisive in biological processes. For instance, the formation of cell aggregations (amoebae, bacteria, etc.) occurs during the response of the different species to the change of the chemical gradients in the environment. Moreover, it is possible to describe this biological phenomenon at different scales: particle models, hybrid (multiscale) models, and macroscopic models.

In the present paper, the population is described as a whole through its density; thus, macroscopic models of partial differential equations are considered. In particular, in order to describe the dynamics of cells in the 2D chambers we use a KS-like model, while in the microchannels we compare the behavior of two different models: a 1D KS-like model and a 1D GA-like model. The modelling applied here is described in the next Section 2.2.1.

#### 2.2.1. The Model

Here, we introduce a mathematical model that aims at describing the behavior of two populations of cells coexisting together: tumoral cells *T* and immune cells (macrophages) *M*. We underline that the setting here considered can be made more complex with the introduction of a greater number of cell species and with the presence of an exogenous substance in the environment.

The model consists of a reaction-diffusion system with chemotaxis that is able to describe birth and death processes, interactions with chemoattractants, and interactions and competition between different cell species. The microfluidic chip is schematized as a network of channels connecting two boxes (the microfluidic chambers); then, following the ideas in [17], ad hoc transmission conditions were introduced to ensure mass conservation. The parameters of the model, such as the velocity of the different cell populations, the turning rates, and the decay rates, will be calibrated with the observed data.

Cancer cells *T* produce chemical signals, called *ϕ*, activating the immune response of *M* and influencing their behavior. Moreover, we take into account the presence of cytokines *ω* (produced by *M*), acting as a chemical killer of cancer cells. Mainly referring to the KS model, the model here considered in the 2D chambers reads as:

$$\begin{cases} \partial_t T = D_T \Delta T - \lambda_T(\omega)\, T, \\ \partial_t M = D_M \Delta M - \operatorname{div}(\chi(\phi)\, M \nabla \phi), \\ \partial_t \phi = D_\phi \Delta \phi + \alpha_\phi T - \beta_\phi \phi, \\ \partial_t \omega = D_\omega \Delta \omega + \alpha_\omega M - \beta_\omega \omega. \end{cases} \tag{3}$$

The system above describes the dynamics of the two cell species and the diffusion of the chemoattractant, and it needs to be complemented with suitable initial conditions and boundary conditions for the cells and the chemoattractant concentrations, as will be specified in the following paragraphs.

In particular, bearing in mind the first scenario (treated tumor), the system above describes the following situation: tumor cells *T* produce a chemical substance *ϕ* attracting immune cells *M*, that start migrating towards them. In this case, the tumor does not proliferate, since it is treated by a chemotherapy medicine, thus we do not include this feature in the model. Immune cells do not seem to proliferate during the experiment, thus we neglect this feature in the model. Note that, although here we are neglecting the proliferation of tumor, a tumor growth is observed experimentally in the second scenario (untreated tumor). This feature can be easily added to the model by putting a linear or logistic source term in the equation for the tumor.

Immune cells also produce a chemical substance *ω* which should be responsible for tumor killing. Therefore, in the first equation of the system (3), besides the diffusion term we can find −*λT*(*ω*)*T*, representing the tumor suppression operated by immune cells. We underline that at the moment we have no information from the biologists about the real killing rate in the microchip environment induced by *ω*, but we decided to introduce it in order to include this phenomenon in the model qualitatively.

In the second equation, in addition to the diffusion term we have the chemotactic term *f* = *χ*(*ϕ*)∇*ϕ* due to the presence of the chemical substance *ϕ* produced by the tumor.

In order to define the action of the cytotoxin *ω* produced by immune cells (which determines the death of cancer cells), we introduce the function *λT*(*ω*):

$$
\lambda_T(\omega) = \frac{S\,\omega}{\gamma + \omega}, \tag{4}
$$

where *S* is the maximum secretion rate of the cytotoxin by the immune cells and *γ* the equivalent Michaelis constant associated with the production, as described in [33]. A lot of effort has been devoted to finding a biologically accurate expression for the chemotaxis function *χ*(*ϕ*) representing the chemotactic sensitivity of immune cells; here, we refer to the form suggested in [34] by Lapidus and Schiller:

$$\chi(\phi) = \frac{k_1}{(k_2 + \phi)^2}, \tag{5}$$

where $k_1$ represents the cellular drift velocity, while $k_2$ is the receptor dissociation constant, which indicates how many molecules are necessary to bind the receptors. Note that we mainly refer to [33] for the values of the parameters $k_1$ and $k_2$; all the parameters of the model are reported in Table 1.
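
For concreteness, the two response laws (4) and (5) are plain scalar functions and can be sketched directly; the parameter values below are placeholders of ours, not the calibrated values of Table 1:

```python
import numpy as np

def lambda_T(omega, S=1.0, gamma=0.5):
    """Killing rate (4): Michaelis-Menten response to the cytotoxin omega,
    saturating at the maximum secretion rate S."""
    return S * omega / (gamma + omega)

def chi(phi, k1=1.0, k2=0.5):
    """Chemotactic sensitivity (5): decays as the receptors saturate
    with increasing chemoattractant density phi."""
    return k1 / (k2 + phi) ** 2

omega = np.linspace(0.0, 10.0, 5)
print(lambda_T(omega))            # increasing in omega, bounded above by S
print(chi(np.array([0.0, 1.0])))  # [4.0, 0.444...] for k1 = 1, k2 = 0.5
```

Both functions act elementwise on NumPy arrays, so they can be evaluated on a whole discrete field at once inside a time-stepping loop.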

Now, in order to describe the dynamics of cells in the microchannels connecting the two boxes, we introduce the following 1D models, with the label *c* indicating the channels. We consider two possible models in the channels: a diffusive one:

$$\begin{cases} \partial_t T_c = D_T \partial_{xx} T_c - \lambda_T(\omega)\, T_c, \\ \partial_t M_c = D_M \partial_{xx} M_c - \partial_x (M_c f_c), \\ \partial_t \phi_c = D_\phi \partial_{xx} \phi_c + \alpha_\phi T_c - \beta_\phi \phi_c, \\ \partial_t \omega_c = D_\omega \partial_{xx} \omega_c + \alpha_\omega M_c - \beta_\omega \omega_c, \end{cases} \tag{6}$$

and a hyperbolic one:

$$\begin{cases} \partial_t T_c + \partial_x v_c^T = -\lambda_T(\omega)\, T_c, \\ \partial_t v_c^T + D_T \partial_x T_c = -v_c^T, \\ \partial_t M_c + \partial_x v_c^M = 0, \\ \partial_t v_c^M + D_M \partial_x M_c = M_c f_c - v_c^M, \\ \partial_t \phi_c = D_\phi \partial_{xx} \phi_c + \alpha_\phi T_c - \beta_\phi \phi_c, \\ \partial_t \omega_c = D_\omega \partial_{xx} \omega_c + \alpha_\omega M_c - \beta_\omega \omega_c, \end{cases} \tag{7}$$

with $T_c$ and $M_c$, respectively, the densities of tumor and immune cells; $f_c = \chi(\phi_c)\,\partial_x \phi_c$; and $v_c^T$ and $v_c^M$, respectively, the average fluxes of the tumor cells $T_c$ and of the immune cells $M_c$ in the channels. Note that the 1D-doubly parabolic model (6) is the one-dimensional version of the system (3), while (7) is the 1D-hyperbolic-parabolic model inspired by the GA model [5]. We remark that, for the hyperbolic-parabolic system (7), we also need to assign initial and boundary conditions for the flux *v*.

#### 2.2.2. The Simplified Model

To present the numerical scheme, we write a simpler model than (3) that shares its main features:

$$\begin{cases} \partial_t u = D \Delta u - \operatorname{div} \mathbf{F} + g(x,y,t,u), \\ \partial_t \phi = D_\phi \Delta \phi + a u - b \phi, \end{cases} \tag{8}$$

with *u* as the density of individuals, *φ* as the density of the chemoattractant, and with $\mathbf{F} = u\,\mathbf{f}$. From now on, the two components of the drift term $\mathbf{f} = \chi(\phi)\nabla\phi$ will be indicated as:

$$\mathbf{f}(x,y,t) := \begin{pmatrix} f^x(x,y,t) \\ f^y(x,y,t) \end{pmatrix}.$$

For the mono-dimensional channel, in order to make the explanation of the numerical approximation easier, we consider simpler models sharing the same characteristics as the models in (6) and (7), which read as:

$$\begin{cases} \partial_t u_c = D_{u_c} \partial_{xx} u_c - \partial_x F_c + g(x,t,u_c), \\ \partial_t \phi_c = D_{\phi_c} \partial_{xx} \phi_c + a_c u_c - b_c \phi_c, \end{cases} \tag{9}$$

or

$$\begin{cases} \partial_t u_c + \partial_x v_c = g(x,t,u_c), \\ \partial_t v_c + \lambda_c^2 \partial_x u_c = F_c - v_c, \\ \partial_t \phi_c = D_{\phi_c} \partial_{xx} \phi_c + a_c u_c - b_c \phi_c, \end{cases} \tag{10}$$

with $F_c = u_c f_c$. The systems above have to be complemented with smooth initial conditions for the unknowns $u_c$, $\phi_c$, and also $v_c$ for system (10); the initial data will be specified in Section 3.3. On the boundary, we consider homogeneous Neumann conditions for all the quantities, i.e., we assume no-flux boundary conditions.

Monotonicity conditions. We also mention at this point that model (10) requires an analytical monotonicity criterion; see [35]. For a linear convection term $A u_c$ and a linear source term $g = B u_c$, the sub-characteristic condition

$$\left| \frac{A}{\lambda_c} \right| - B \le 1,$$

must be satisfied in order for the quantity $u_c$ to be non-negative. Otherwise, a negative $u_c$ would lead to unphysical solutions.

With regard to our model (7), this means that the immune cell density *M* must satisfy the monotonicity condition:

$$\frac{k_1}{(k_2 + \phi_c)^2}\, |\partial_x \phi_c| \le \sqrt{D_M}. \tag{11}$$

This needs to be verified in the computational domain in order to ensure non-negative solutions.
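
In practice, condition (11) can be monitored on the discrete chemoattractant profile at every time step; a minimal sketch (the grid, the parameter values, and the use of a centered gradient are illustrative choices of ours):

```python
import numpy as np

def monotonicity_ok(phi_c, dx, k1=1.0, k2=0.5, D_M=1.0):
    """Check condition (11), chi(phi_c) * |d_x phi_c| <= sqrt(D_M),
    at every grid point of the channel."""
    dphi = np.gradient(phi_c, dx)                 # centered approximation of d_x phi_c
    lhs = k1 / (k2 + phi_c) ** 2 * np.abs(dphi)
    return bool(np.all(lhs <= np.sqrt(D_M)))

x = np.linspace(0.0, 1.0, 101)
print(monotonicity_ok(0.1 * x, x[1] - x[0]))   # gentle gradient: True
print(monotonicity_ok(10.0 * x, x[1] - x[0]))  # steep gradient: False
```

If the check fails during a simulation, the time step can be rejected or $\lambda_c$ increased, since the condition couples the chemotactic drift to the characteristic speed of (10).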

**Remark 1.** *We remark that the no-flux boundary conditions used in our simulations are needed to ensure the mass conservation of all the quantities. However, they are not realistic, since in the laboratory experiment there is an inflow of cells from the outer boundaries. We therefore aim at extending the no-flux boundary conditions to more general ones.*

In the following section, in order to discuss mass conservation, we restrict our study to a 2D closed rectangular domain named $\Omega_{nc}$ (i.e., the box without the channels) and to single lines not connected at the outer boundaries, indicated by $C_{nc}$.

#### *2.3. Outer Boundary and Interface Conditions for the Models with Null Source Term g*

From now on, we consider a further simplified version of the models presented above putting the source term *g* equal to zero to discuss mass conservation.

2.3.1. Boundary Conditions for the 2D Doubly Parabolic Model (8) with *g* = 0

Here, we consider the mass conservation of cell density *u* for zero-flux conditions at the outer boundaries of a rectangular closed domain Ω*nc* for the 2D model (8) with a null source term.

Indeed, since our model describes the migration of cells by both diffusion and chemoattractant effects, physically speaking the mass of cells must be preserved in the absence of the creation and destruction of cells.

For this reason, we assume a no-flux condition for the density *u* and the chemoattractant *φ*. Since the drift term **f** is a function of *φ*, we get the no-flux conditions:

$$\mathbf{F}(x,y,t)\cdot\mathbf{n}\,\big|_{\partial\Omega_{nc}} = 0, \qquad \nabla u(x,y,t)\cdot\mathbf{n}\,\big|_{\partial\Omega_{nc}} = 0, \qquad (x,y) \in \partial\Omega_{nc}. \tag{12}$$

These guarantee mass conservation.

2.3.2. Interface between 2D-1D Models in (8) and (9)

We remark that, for simplicity, the following description focuses on the left part of the domain—i.e., $\Omega_l$ and its connection with a single microchannel represented by the interval [0, *L*]. However, the numerical treatment holds analogously on the right side of the domain and also for multiple channels.

In the 2D left box $\Omega_l$, the node $1L$ is placed at $x = L_x$, $y \in [a, b]$, while for the 1D domain, represented by a given number of channels *C*, the node $1L$ is placed at $x = 0$ (the left endpoint of the channel); see Figure 2.

In order to ensure the conservation of the total mass when the mass-exchange occurs, we introduce suitable transmission conditions at the interface between 2D and 1D domains. Then, we consider the simplified models (9) or (10) for 1D channels, with *g* = 0.

In particular, we have to prescribe the flux conservation at the 2D-1D interfaces, since we cannot lose or gain any cells during the passage through a node.

The conservation condition for *u* reads as:

$$\frac{d}{dt}\int_{\Omega_l} u(x,y,t)\, d\Omega_l + \frac{d}{dt}\int_0^L u_c(x,t)\, dx = 0,$$

which can be rewritten as:

$$0 = \oint_{\partial\Omega_l} \left( D_u \nabla u(x,y,t) - \mathbf{F}(x,y,t) \right)\cdot\mathbf{n}\, dS + \left( D_{u_c} \partial_x u_c(x,t) - F_c(x,t) \right)\Big|_0^L,$$

by using the divergence theorem in the first integral. With our analytical boundary conditions, the integral vanishes, except at the boundary where the node is positioned.

We remark that attention has to be paid to *n* being the outer normal of the domain.

Thanks to the boundary conditions (12), we get the condition:

$$\int_a^b \left( D_u \partial_x u(L_x,y,t) - u(L_x,y,t)\, f^x(L_x,y,t) \right) dy = D_{u_c} \partial_x u_c(0,t) - F_c(0,t). \tag{13}$$

Note that in this case, the evaluation of the integrand at the right endpoint *L* is discarded, since we are only considering the junction connecting Ω*<sup>l</sup>* with the left endpoint *x* = 0 of the microchannel (see Figure 2).

Now, we impose Kedem–Katchalsky (KK) [20] conditions describing the conservation of the flux through a node (see also [21] for the numerical treatment of these conditions). In particular, at the interface between the left chamber and the channels, we have (on the left of node 1*L* in Figure 2):

$$D_u \partial_x u(L_x,y,t) - u(L_x,y,t)\, f^x(L_x,y,t) = K\left( u_c(0,t) - u(L_x,y,t) \right) \qquad \text{for } y \in [a,b]. \tag{14}$$

On the right of node 1*L*, we have:

$$D_{u_c} \partial_x u_c(0,t) - F_c(0,t) = K\left( u_c(0,t)\,(b-a) - \int_a^b u(L_x,y,t)\, dy \right). \tag{15}$$

Thanks to conditions (14) and (15), we are guaranteed to have the flux conservation (13), and we will use such conditions to obtain numerical boundary conditions for the boundary values at the nodes on both sides, as shown in Section 3 and in Section 3.1.2.
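
As a sanity check, integrating the right-hand side of (14) over [*a*, *b*] must return exactly the right-hand side of (15), which is what makes the flux balance (13) hold; a small numerical verification with illustrative interface data (all values below are hypothetical):

```python
import numpy as np

def trapz(f, y):
    """Composite trapezoidal rule (kept explicit for portability)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(y)) / 2.0)

a, b, K = 0.0, 1.0, 2.0
y = np.linspace(a, b, 101)
u_wall = 0.3 + 0.1 * np.sin(np.pi * y)   # u(L_x, y, t) along the interface
u_c0 = 0.5                                # u_c(0, t) at the channel endpoint

lhs = trapz(K * (u_c0 - u_wall), y)               # (14) integrated in y
rhs = K * (u_c0 * (b - a) - trapz(u_wall, y))     # right-hand side of (15)
print(abs(lhs - rhs) < 1e-12)   # True: (14) and (15) are mutually consistent
```

The agreement is exact up to round-off because the trapezoidal rule is linear and integrates the constant $u_c(0,t)$ exactly over $[a,b]$.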

#### 2.3.3. Interface between 2D-1D Models in (8)–(10)

In this section, we describe the coupling of the 2D parabolic model with a 1D hyperbolic model, in order to describe the dynamics in the channels with a hyperbolic model. Further care has to be taken in order to keep some important properties which ensure the consistency and non-negativity of the numerical solutions when connecting the two models.

Now, the transmission conditions for the switch from Ω*<sup>l</sup>* to *C* = [0, *L*] are derived in this case. For the mass conservation of *u*, we impose the condition:

$$\begin{split} 0 &= \frac{d}{dt}\int_{\Omega_l} u(x,y,t)\, d\Omega_l + \frac{d}{dt}\int_0^L u_c(x,t)\, dx \\ &= \int_{\Omega_l} \left( D_u \Delta u(x,y,t) - \operatorname{div}\mathbf{F}(x,y,t) \right) d\Omega_l - \int_0^L \partial_x v(x,t)\, dx \\ &\Longrightarrow \quad \oint_{\partial\Omega_l} \left( D_u \nabla u(x,y,t) - \mathbf{F}(x,y,t) \right)\cdot\mathbf{n}\, dS + v(0,t) = 0. \end{split}$$

Note that in the above formula, we have *v*(*L*, *t*) = 0 because we are looking at the left interface (node 1*L*). Then, we finally get:

$$\int_a^b \left( D_u \partial_x u(L_x,y,t) - u(L_x,y,t)\, f^x(L_x,y,t) \right) dy = -v(0,t). \tag{16}$$

Now, we impose the KK-condition at the interface:

$$D_u \partial_x u(L_x,y,t) - u(L_x,y,t)\, f^x(L_x,y,t) = K\left( u_c(0,t) - u(L_x,y,t) \right) \qquad \text{for } y \in [a,b].$$

Then, (16) reads as:

$$v(0,t) = -K(b-a)\, u_c(0,t) + K \int_a^b u(L_x,y,t)\, dy. \tag{17}$$

#### **3. Numerical Approximation**

In this section, we describe the numerical approximation of the adopted models: 2D-doubly parabolic, 1D-doubly parabolic, and 1D-hyperbolic-parabolic. We define equispaced $x_i := i\Delta x$, $y_j := j\Delta y$, and $t_n := n\Delta t$ with $\Delta x, \Delta y, \Delta t > 0$, and $i = 0, \dots, N_x+1$, $j = 0, \dots, N_y+1$; the channel $[0, L]$ is discretized as $x_i = i\Delta x$, with $i = 0, \dots, N$. For a more structured presentation, we introduce the operators:

$$\begin{aligned} \delta_x^2 u_{i,j}^n &:= u_{i+1,j}^n - 2u_{i,j}^n + u_{i-1,j}^n, & \delta_y^2 u_{i,j}^n &:= u_{i,j+1}^n - 2u_{i,j}^n + u_{i,j-1}^n, \\ \delta_x^0 u_{i,j}^n &:= u_{i+1,j}^n - u_{i-1,j}^n, & \delta_y^0 u_{i,j}^n &:= u_{i,j+1}^n - u_{i,j-1}^n, \\ \delta_x^1 u_{i,j}^n &:= u_{i+1,j}^n - u_{i,j}^n, & \delta_y^1 u_{i,j}^n &:= u_{i,j+1}^n - u_{i,j}^n. \end{aligned}$$
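
These stencil operators translate directly into array shifts; a minimal NumPy sketch (helper names are ours), where each slice returns the operator values at the interior points $i = 1, \dots, N_x$, $j = 1, \dots, N_y$ of a field of shape $(N_x+2) \times (N_y+2)$:

```python
import numpy as np

def d2x(u): return u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]  # delta_x^2
def d2y(u): return u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]  # delta_y^2
def d0x(u): return u[2:, 1:-1] - u[:-2, 1:-1]                        # delta_x^0
def d0y(u): return u[1:-1, 2:] - u[1:-1, :-2]                        # delta_y^0
def d1x(u): return u[2:, 1:-1] - u[1:-1, 1:-1]                       # delta_x^1
def d1y(u): return u[1:-1, 2:] - u[1:-1, 1:-1]                       # delta_y^1

# sanity check: on u = x^2 (constant in y, unit spacing) the second
# difference in x is exactly 2, and the centered y-difference vanishes
x = np.arange(6.0)
u = (x ** 2)[:, None] * np.ones((1, 6))
print(np.allclose(d2x(u), 2.0), np.allclose(d0y(u), 0.0))   # True True
```

Working on whole arrays at once avoids explicit double loops over the grid indices in the schemes below.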

**Remark 2.** *Note that special attention has to be paid also to the stiffness induced by the source term g*(*x*, *y*, *t*, *u*)*. To overcome this issue, implicit methods can be used, such as the Crank–Nicolson method, which is associated with a* Δ*t that is small enough.*

Mass-preserving and (numerically)-positivity-preserving approximations will be developed in the present section. In the following, we will neglect the label *c* to make the reading easier and we will make distinctions only when necessary.

#### *3.1. The Parabolic-Parabolic Case*

Here, we introduce a numerical scheme for the doubly parabolic systems (8) and (9) for *g* = 0.

For the discretization of the 2D system (8) at the interior points of the domain—i.e., for $i = 1, \dots, N_x$, $j = 1, \dots, N_y$—we define a finite difference discretization for both *u* and *φ*:

$$\begin{aligned} u_{i,j}^{n+1} &= u_{i,j}^{n} + D_u \frac{\Delta t}{2}\left[ \frac{\delta_x^2\left(u_{i,j}^{n} + u_{i,j}^{n+1}\right)}{\Delta x^2} + \frac{\delta_y^2\left(u_{i,j}^{n} + u_{i,j}^{n+1}\right)}{\Delta y^2} \right] \\ &\quad - \Delta t \left( \frac{\delta_x^0 F_{i,j}^{x,n}}{2\Delta x} + \frac{\delta_y^0 F_{i,j}^{y,n}}{2\Delta y} \right), \end{aligned} \tag{18}$$

$$\begin{aligned} \phi_{i,j}^{n+1} &= \phi_{i,j}^{n} + D_\phi \frac{\Delta t}{2}\left[ \frac{\delta_x^2\left(\phi_{i,j}^{n} + \phi_{i,j}^{n+1}\right)}{\Delta x^2} + \frac{\delta_y^2\left(\phi_{i,j}^{n} + \phi_{i,j}^{n+1}\right)}{\Delta y^2} \right] \\ &\quad + \frac{\Delta t}{2}\, a \left( u_{i,j}^{n} + u_{i,j}^{n+1} \right) - \frac{\Delta t}{2}\, b \left( \phi_{i,j}^{n} + \phi_{i,j}^{n+1} \right). \end{aligned} \tag{19}$$

Note that $F_{i,j}^{x,n} = \chi(\phi_{i,j}^n)\, u_{i,j}^n\, \delta_x^0 \phi_{i,j}^n/(2\Delta x)$, with $\delta_x^0 \phi_{i,j}^n/(2\Delta x)$ a centered second-order approximation of $\phi_x$. For the 1D system (9), at the interior points of the channel *C* we apply the Crank–Nicolson scheme, as above:

$$u_i^{c,n+1} = u_i^{c,n} + D_{u_c} \frac{\Delta t}{2}\, \frac{\delta_x^2\left(u_i^{c,n} + u_i^{c,n+1}\right)}{\Delta x^2} - \Delta t\, \frac{\delta_x^0 F_{c,i}^{n}}{2\Delta x}, \tag{20}$$

$$\begin{aligned} \phi_i^{c,n+1} &= \phi_i^{c,n} + D_{\phi_c} \frac{\Delta t}{2}\, \frac{\delta_x^2\left(\phi_i^{c,n} + \phi_i^{c,n+1}\right)}{\Delta x^2} \\ &\quad + \frac{\Delta t}{2}\, a_c \left( u_i^{c,n} + u_i^{c,n+1} \right) - \frac{\Delta t}{2}\, b_c \left( \phi_i^{c,n} + \phi_i^{c,n+1} \right). \end{aligned} \tag{21}$$
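
A single Crank–Nicolson step of the type (20) amounts to solving one tridiagonal linear system per unknown. The sketch below treats pure diffusion (*F* = 0) and, only to keep the example short, freezes the endpoint values; it is an illustrative analogue of ours, not the paper's full boundary treatment:

```python
import numpy as np

def cn_step_1d(u, D, dt, dx):
    """One Crank-Nicolson step for u_t = D u_xx at the interior points,
    i.e. scheme (20) with F = 0; endpoints are held fixed (sketch only)."""
    N = len(u)
    r = D * dt / (2.0 * dx ** 2)
    A = np.eye(N)                      # rows 0 and N-1 untouched: frozen endpoints
    b = u.copy()
    for i in range(1, N - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r
        b[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
    return np.linalg.solve(A, b)

u = np.zeros(11); u[5] = 1.0
u1 = cn_step_1d(u, D=1.0, dt=0.01, dx=0.1)
print(0.0 < u1[5] < 1.0)   # True: the peak diffuses
```

In production code the dense solve would be replaced by a Thomas-type tridiagonal solver, but the structure of the system is the same.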

**Remark 3.** *We remark that here, for simplicity, the upwinding terms needed to avoid mesh-size restrictions caused by a large Péclet number—see, for instance, [36]—are not included in the schemes (18) and (20). In this case, assuming a small* Δ*x and* Δ*y will be enough to avoid oscillations. The upwinding will be added in the implemented scheme described in Section 3.3.*

Stability condition. By applying the von Neumann stability analysis to the linearized problem, we derive the following condition for the Crank–Nicolson schemes above [37]:

$$D\_{\rm u} \frac{\triangle t}{\triangle x^2} \le 1 \qquad \text{for 1D,} \quad D\_{\rm u} \frac{\triangle t}{\triangle x^2} + D\_{\rm u} \frac{\triangle t}{\triangle y^2} \le 1 \qquad \text{for 2D.} \tag{22}$$
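The stability bound (22) is easy to encode as a mesh-design check; the small helper below is an assumption of this presentation, not code from the paper.

```python
def cn_stable(D, dt, dx, dy=None):
    """True when condition (22) holds: D*dt/dx**2 <= 1 in 1D,
    D*dt/dx**2 + D*dt/dy**2 <= 1 in 2D."""
    r = D * dt / dx ** 2
    if dy is not None:
        r += D * dt / dy ** 2
    return r <= 1.0
```

For instance, `cn_stable(0.1, 0.01, 0.1)` satisfies the bound, while `cn_stable(1.0, 1.0, 0.1)` violates it.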

In the following, we present the discretization of the boundary and transmission conditions to complete the numerical schemes.

3.1.1. Discretization of the Outer Boundary Conditions for the Doubly Parabolic Problem

Since a qualitative characteristic of the model is the preservation of the total mass for zero-flux boundary conditions, the first step is to ensure the mass preservation at each time step on a closed 1D line *Cnc* and, analogously, on a closed 2D chamber, namely Ω*nc*. To this aim, we have to choose discrete boundary conditions that are both consistent with the analytical boundary conditions and mass-preserving in the numerical method. We remark that we present the computations without the source term *g*; it will be added later to complete the equations.

Boundary conditions for the density of individuals *u* in 1D model (9).

Here, we consider Neumann boundary conditions $\frac{\partial u}{\partial x}(x, t) = 0$. If we discretize them with a (standard) forward finite difference scheme:

$$
\partial_x u(0) = \frac{u_1^{c,n} - u_0^{c,n}}{\triangle x},\tag{23}
$$

the mass will not be preserved over time, as shown in the numerical Example 1. Therefore, in order to have mass preservation, we use the central finite difference approximation with a ghost cell:

$$
\partial\_x u(0) = \frac{u\_1^{c,n} - u\_{-1}^{c,n}}{2\Delta x}.\tag{24}
$$
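The difference between the closures (23) and (24) can be seen numerically. The sketch below (an illustration with assumed parameters, using a simple explicit diffusion step rather than the paper's Crank–Nicolson scheme) shows that the ghost-cell closure keeps the trapezoidal mass constant to machine precision, while the forward-difference closure lets it drift.

```python
import math

def step(u, nu, ghost):
    """One explicit diffusion step with either boundary closure."""
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        new[i] = u[i] + nu * (u[i + 1] - 2 * u[i] + u[i - 1])
    if ghost:
        # central difference with ghost cells: u_{-1} = u_1, u_{N+1} = u_{N-1}
        new[0] = u[0] + nu * 2 * (u[1] - u[0])
        new[-1] = u[-1] + nu * 2 * (u[-2] - u[-1])
    else:
        # forward difference (23): copy the neighbouring value
        new[0] = new[1]
        new[-1] = new[-2]
    return new

def mass(u, dx):
    """Trapezoidal discrete mass."""
    return dx * (u[0] / 2 + sum(u[1:-1]) + u[-1] / 2)

dx, nu, n = 0.02, 0.2, 51
u = [math.exp(-((i * dx - 0.3) / 0.1) ** 2) for i in range(n)]
m0 = mass(u, dx)
ug, uf = u[:], u[:]
for _ in range(200):
    ug = step(ug, nu, ghost=True)
    uf = step(uf, nu, ghost=False)
drift_ghost = abs(mass(ug, dx) - m0)
drift_fwd = abs(mass(uf, dx) - m0)
```

With the ghost-cell closure, the boundary and interior contributions to the trapezoidal sum telescope exactly, so `drift_ghost` is at roundoff level, whereas `drift_fwd` is visibly nonzero.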

At the outer boundaries, i.e., at the first ($x_0 = 0$) and last ($x_{N+1} = L$) endpoints of the channels, we assign the numerical schemes:

$$u_0^{c,n+1} = u_0^{c,n} + D_{u_c} \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( u_0^{c,n} + u_0^{c,n+1} \right) - \frac{\triangle t}{\triangle x} \left( F_{c,1}^{n} + F_{c,0}^{n} \right) \tag{25}$$

and

$$u_{N+1}^{c,n+1} = u_{N+1}^{c,n} - D_{u_c} \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( u_N^{c,n} + u_N^{c,n+1} \right) + \frac{\triangle t}{\triangle x} \left( F_{c,N}^{n} + F_{c,N+1}^{n} \right), \tag{26}$$

with $F_{c,0}^{n} = 0$ in (25) and $F_{c,N+1}^{n} = 0$ in (26), since we are imposing zero flux at the boundaries. The boundary conditions above are mass-preserving by construction, as stated in the following proposition.

**Proposition 1.** *The 1D numerical scheme at the internal points introduced above, namely:*

$$u_i^{c,n+1} = u_i^{c,n} + D_{u_c} \frac{\triangle t}{2 \triangle x^2}\, \delta_x^2 \left( u_i^{c,n} + u_i^{c,n+1} \right) - \frac{\triangle t}{2 \triangle x}\, \delta_x^0 F_{c,i}^{n}, \tag{27}$$

*endowed with the boundary conditions (25) and (26), is mass-preserving by construction, since it is obtained by imposing $\mathcal{I}^{n+1} - \mathcal{I}^{n} = 0$.*

**Proof.** The mass conservation over time on the closed domain Ω*nc* reads as:

$$I(t) = \int\_{\Omega\_{\rm nc}} u(t, \mathbf{x}, \mathbf{y}) d\Omega\_{\rm nc} = \int\_{\Omega\_{\rm nc}} u(0, \mathbf{x}, \mathbf{y}) d\Omega\_{\rm nc} = I(0). \tag{28}$$

Now, applying a quadrature rule for the numerical integration:

$$
\mathcal{I}^n \approx \int_{\Omega_{\rm nc}} u(t, x, y)\, d\Omega_{\rm nc}.\tag{29}
$$

We need to ensure that $\mathcal{I}^{n+1} = \mathcal{I}^{n}$.

For the numerical integration, different quadrature formulas can be used; in particular, we applied closed Newton–Cotes formulas to take into account the values at the boundaries. By using the trapezoidal rule, with an integration error of $O(\triangle x^2)$, and imposing the equality $\mathcal{I}^{n+1} - \mathcal{I}^{n} = 0$ in the 1D case, one obtains:

$$
\triangle x \left( \frac{u_0^{c,n+1}}{2} - \frac{u_0^{c,n}}{2} + \sum_{i=1}^N \left( u_i^{c,n+1} - u_i^{c,n} \right) + \frac{u_{N+1}^{c,n+1}}{2} - \frac{u_{N+1}^{c,n}}{2} \right) = 0.\tag{30}
$$

Using the numerical scheme (20) for $u_i^{c,n+1}$ in (30) for $i = 1, \dots, N$, we have:

$$\triangle x \left( \frac{u_0^{c,n+1} - u_0^{c,n}}{2} + \frac{D_{u_c} \triangle t}{2 \triangle x^2} \sum_{i=1}^{N} \delta_x^2 \left( u_i^{c,n} + u_i^{c,n+1} \right) - \frac{\triangle t}{2 \triangle x} \sum_{i=1}^{N} \delta_x^0 F_{c,i}^{n} + \frac{u_{N+1}^{c,n+1} - u_{N+1}^{c,n}}{2} \right) = 0.$$

Now, using the definition of $\delta_x^0 F_{c,i}^{n}$ in the sum, the above formula becomes:

$$\begin{array}{l} u_0^{c,n+1} - u_0^{c,n} - D_{u_c} \frac{\triangle t}{\triangle x^2} \left( u_1^{c,n} + u_1^{c,n+1} - \left( u_0^{c,n} + u_0^{c,n+1} \right) \right) + \frac{\triangle t}{\triangle x} \left( F_{c,0}^{n} + F_{c,1}^{n} \right) \\ +\, u_{N+1}^{c,n+1} - u_{N+1}^{c,n} - D_{u_c} \frac{\triangle t}{\triangle x^2} \left( u_N^{c,n} + u_N^{c,n+1} - \left( u_{N+1}^{c,n} + u_{N+1}^{c,n+1} \right) \right) - \frac{\triangle t}{\triangle x} \left( F_{c,N}^{n} + F_{c,N+1}^{n} \right) = 0. \end{array}$$

We can now compute the values of both $u_0^{n+1}$ and $u_{N+1}^{n+1}$ so that this expression equals zero. By collecting values from nearby stencils together (otherwise we obtain an error of $O(\triangle x)$, which can be verified by Taylor expansion), at the outer boundaries of the 1D domain we obtain the schemes (25) and (26). Note that in the formula above, by imposing homogeneous Neumann boundary conditions, we have $F_{c,0}^{n} = 0$ and $F_{c,N+1}^{n} = 0$.

We also remark that, besides imposing the stability condition (22), $\triangle x$ has to be small enough to ensure the positivity of the scheme, taking into account the possibly negative contribution of $F_{c,1}^{n}$ or $F_{c,N}^{n}$ in (25) and (26), respectively.

**Remark 4.** *Although we do not prove it here, the scheme (27) is in practice second order in space up to the boundaries, since it can be equivalently obtained using the second-order approximation of the first derivative with a ghost cell reported in (24).*

We underline that for **f** = 0, even using formula (24), the approach with the discrete integral equation is necessary for ensuring mass preservation. Furthermore, by using a different numerical integration scheme, we can obtain different mass-preserving boundary conditions of higher order.
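Proposition 1 can be checked numerically. The sketch below (assumed parameters; the case F = 0 for brevity) advances the Crank–Nicolson scheme (27), closed with the boundary schemes (25) and (26), and verifies that the trapezoidal mass stays constant to machine precision.

```python
import math

def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step(u, D, dt, dx):
    """One step of (27) with F = 0, closed with (25) and (26)."""
    n = len(u)
    nu = D * dt / (2 * dx * dx)   # interior CN coefficient, as in (27)
    nub = D * dt / (dx * dx)      # boundary coefficient, as in (25)-(26)
    sub = [0.0] + [-nu] * (n - 2) + [-nub]
    diag = [1 + nub] + [1 + 2 * nu] * (n - 2) + [1 + nub]
    sup = [-nub] + [-nu] * (n - 2) + [0.0]
    rhs = ([u[0] + nub * (u[1] - u[0])]
           + [u[i] + nu * (u[i + 1] - 2 * u[i] + u[i - 1])
              for i in range(1, n - 1)]
           + [u[-1] - nub * (u[-1] - u[-2])])
    return solve_tridiagonal(sub, diag, sup, rhs)

def trapezoidal_mass(u, dx):
    return dx * (u[0] / 2 + sum(u[1:-1]) + u[-1] / 2)

dx, dt, D = 0.02, 1e-3, 0.1
u = [math.exp(-((i * dx - 0.4) / 0.1) ** 2) for i in range(51)]
m0 = trapezoidal_mass(u, dx)
for _ in range(50):
    u = step(u, D, dt, dx)
mass_drift = abs(trapezoidal_mass(u, dx) - m0)
```

The boundary rows double the interior diffusion coefficient, exactly as the telescoping argument of the proof requires, so the drift is pure roundoff.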

Boundary conditions for the density of individuals *u* in 2D model (8).

Let us assume no-flux conditions at the boundaries; in this case, we consider the closed 2D domain Ω*nc*.

Using the mass-preserving property argument, we compute boundary conditions for the corners and the top and bottom boundaries of Ω*nc*. By applying them, together with the numerical method (18), in $\mathcal{I}^{n+1} - \mathcal{I}^{n} = 0$, we obtain the expression:

$$\frac{\triangle x \triangle y}{4} \left( -4 \triangle t \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} \left( \delta_x^0 F_{i,j}^{x} + \delta_y^0 F_{i,j}^{y} \right) \right) = 0,$$


since the terms in *u* cancel. By plugging into it the expression of the central-in-space second-order finite difference scheme $\delta_x^0 F_{i,j}^{x,n}$ for $\mathrm{div}(\mathbf{F})$, we obtain:

$$\begin{split} & \frac{1}{\triangle y} \sum\_{i=1}^{N\_x} \left( F\_{i,N\_y+1}^{y,n} + F\_{i,N\_y}^{y,n} - F\_{i,1}^{y,n} - F\_{i,0}^{y,n} \right) \\ & + \frac{1}{\triangle x} \sum\_{j=1}^{N\_y} \left( F\_{N\_x+1,j}^{x,n} + F\_{N\_x,j}^{x,n} - F\_{1,j}^{x,n} - F\_{0,j}^{x,n} \right) = 0. \end{split}$$

Now, we can distribute the remaining values to the boundaries in the same way as above for the 1D-parabolic case. Therefore, we obtain the following mass-preserving boundary conditions for the corners:

$$\begin{cases}
u_{0,0}^{n+1} = u_{0,0}^{n} + D_u \frac{\triangle t}{2\triangle x^2}\, \delta_x^1 \left( u_{0,0}^{n} + u_{0,0}^{n+1} \right) + D_u \frac{\triangle t}{2\triangle y^2}\, \delta_y^1 \left( u_{0,0}^{n} + u_{0,0}^{n+1} \right), \\[4pt]
u_{N_x+1,0}^{n+1} = u_{N_x+1,0}^{n} - D_u \frac{\triangle t}{2\triangle x^2}\, \delta_x^1 \left( u_{N_x,0}^{n} + u_{N_x,0}^{n+1} \right) + D_u \frac{\triangle t}{2\triangle y^2}\, \delta_y^1 \left( u_{N_x+1,0}^{n} + u_{N_x+1,0}^{n+1} \right), \\[4pt]
u_{0,N_y+1}^{n+1} = u_{0,N_y+1}^{n} + D_u \frac{\triangle t}{2\triangle x^2}\, \delta_x^1 \left( u_{0,N_y+1}^{n} + u_{0,N_y+1}^{n+1} \right) - D_u \frac{\triangle t}{2\triangle y^2}\, \delta_y^1 \left( u_{0,N_y}^{n} + u_{0,N_y}^{n+1} \right), \\[4pt]
u_{N_x+1,N_y+1}^{n+1} = u_{N_x+1,N_y+1}^{n} - D_u \frac{\triangle t}{2\triangle x^2}\, \delta_x^1 \left( u_{N_x,N_y+1}^{n} + u_{N_x,N_y+1}^{n+1} \right) - D_u \frac{\triangle t}{2\triangle y^2}\, \delta_y^1 \left( u_{N_x+1,N_y}^{n} + u_{N_x+1,N_y}^{n+1} \right).
\end{cases}$$

For the edges of the box Ω*nc*, *i* = 1, . . . , *Nx* and *j* = 1, . . . , *Ny*, we have:

$$\begin{cases}
u_{i,0}^{n+1} = u_{i,0}^{n} + D_u \frac{\triangle t}{2\triangle x^2}\, \delta_x^2 \left( u_{i,0}^{n} + u_{i,0}^{n+1} \right) + D_u \frac{\triangle t}{\triangle y^2}\, \delta_y^1 \left( u_{i,0}^{n} + u_{i,0}^{n+1} \right) - \frac{\triangle t}{\triangle y} F_{i,1}^{y,n}, \\[4pt]
u_{i,N_y+1}^{n+1} = u_{i,N_y+1}^{n} + D_u \frac{\triangle t}{2\triangle x^2}\, \delta_x^2 \left( u_{i,N_y+1}^{n} + u_{i,N_y+1}^{n+1} \right) - D_u \frac{\triangle t}{\triangle y^2}\, \delta_y^1 \left( u_{i,N_y}^{n} + u_{i,N_y}^{n+1} \right) + \frac{\triangle t}{\triangle y} F_{i,N_y}^{y,n}, \\[4pt]
u_{0,j}^{n+1} = u_{0,j}^{n} + D_u \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( u_{0,j}^{n} + u_{0,j}^{n+1} \right) + D_u \frac{\triangle t}{2\triangle y^2}\, \delta_y^2 \left( u_{0,j}^{n} + u_{0,j}^{n+1} \right) - \frac{\triangle t}{\triangle x} F_{1,j}^{x,n}, \\[4pt]
u_{N_x+1,j}^{n+1} = u_{N_x+1,j}^{n} - D_u \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( u_{N_x,j}^{n} + u_{N_x,j}^{n+1} \right) + D_u \frac{\triangle t}{2\triangle y^2}\, \delta_y^2 \left( u_{N_x+1,j}^{n} + u_{N_x+1,j}^{n+1} \right) + \frac{\triangle t}{\triangle x} F_{N_x,j}^{x,n}
\end{cases} \tag{31}$$

We underline that in the above formulas the terms $F^{x,n}$ and $F^{y,n}$ at the boundary nodes of the domain cancel because of the homogeneous Neumann boundary conditions on $\phi_x$ and $\phi_y$.

Boundary conditions for the density of chemoattractant *φ* in 1D model (9).

For the computation of the conditions at the outer boundaries for the chemoattractant *φ<sup>c</sup>* in the 1D-doubly parabolic model we proceed as above, but neglect the source term *acuc* − *bcφ<sup>c</sup>* to obtain boundary conditions that are mass-preserving.

Proceeding as above, we achieve the following mass-preserving boundary conditions for the chemoattractant:

$$\begin{cases} \phi_0^{c,n+1} = \phi_0^{c,n} + D_{\phi_c} \frac{\triangle t}{\triangle x^2} \left( \delta_x^1 \phi_0^{c,n} + \delta_x^1 \phi_0^{c,n+1} \right), \\[4pt] \phi_{N+1}^{c,n+1} = \phi_{N+1}^{c,n} - D_{\phi_c} \frac{\triangle t}{\triangle x^2} \left( \delta_x^1 \phi_N^{c,n} + \delta_x^1 \phi_N^{c,n+1} \right). \end{cases} \tag{32}$$

This, under the CFL condition (22), is second-order accurate and positivity-preserving up to the boundaries. The parabolic equation in the interior points is solved using an implicit-explicit method:

$$
\phi_i^{c,n+1} = \phi_i^{c,n} + D_{\phi_c} \frac{\triangle t}{2 \triangle x^2}\, \delta_x^2 \left( \phi_i^{c,n} + \phi_i^{c,n+1} \right). \tag{33}
$$

Boundary conditions for the density of chemoattractant *φ* in 2D model (8).

Reasoning as above, by applying the numerical method (19) we obtain the following boundary condition for the chemoattractant at the corners:

$$\begin{cases}
\phi_{0,0}^{n+1} = \phi_{0,0}^{n} + D_\phi \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( \phi_{0,0}^{n} + \phi_{0,0}^{n+1} \right) + D_\phi \frac{\triangle t}{\triangle y^2}\, \delta_y^1 \left( \phi_{0,0}^{n} + \phi_{0,0}^{n+1} \right), \\[4pt]
\phi_{N_x+1,0}^{n+1} = \phi_{N_x+1,0}^{n} - D_\phi \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( \phi_{N_x,0}^{n} + \phi_{N_x,0}^{n+1} \right) + D_\phi \frac{\triangle t}{\triangle y^2}\, \delta_y^1 \left( \phi_{N_x+1,0}^{n} + \phi_{N_x+1,0}^{n+1} \right), \\[4pt]
\phi_{0,N_y+1}^{n+1} = \phi_{0,N_y+1}^{n} + D_\phi \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( \phi_{0,N_y+1}^{n} + \phi_{0,N_y+1}^{n+1} \right) - D_\phi \frac{\triangle t}{\triangle y^2}\, \delta_y^1 \left( \phi_{0,N_y}^{n} + \phi_{0,N_y}^{n+1} \right), \\[4pt]
\phi_{N_x+1,N_y+1}^{n+1} = \phi_{N_x+1,N_y+1}^{n} - D_\phi \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( \phi_{N_x,N_y+1}^{n} + \phi_{N_x,N_y+1}^{n+1} \right) - D_\phi \frac{\triangle t}{\triangle y^2}\, \delta_y^1 \left( \phi_{N_x+1,N_y}^{n} + \phi_{N_x+1,N_y}^{n+1} \right)
\end{cases} \tag{34}$$

and


$$\begin{cases}
\phi_{i,0}^{n+1} = \phi_{i,0}^{n} + D_\phi \frac{\triangle t}{2\triangle x^2}\, \delta_x^2 \left( \phi_{i,0}^{n} + \phi_{i,0}^{n+1} \right) + 2 D_\phi \frac{\triangle t}{\triangle y^2}\, \delta_y^1 \left( \phi_{i,0}^{n} + \phi_{i,0}^{n+1} \right), \\[4pt]
\phi_{i,N_y+1}^{n+1} = \phi_{i,N_y+1}^{n} + D_\phi \frac{\triangle t}{2\triangle x^2}\, \delta_x^2 \left( \phi_{i,N_y+1}^{n} + \phi_{i,N_y+1}^{n+1} \right) - 2 D_\phi \frac{\triangle t}{\triangle y^2}\, \delta_y^1 \left( \phi_{i,N_y}^{n} + \phi_{i,N_y}^{n+1} \right), \\[4pt]
\phi_{0,j}^{n+1} = \phi_{0,j}^{n} + D_\phi \frac{\triangle t}{2\triangle y^2}\, \delta_y^2 \left( \phi_{0,j}^{n} + \phi_{0,j}^{n+1} \right) + 2 D_\phi \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( \phi_{0,j}^{n} + \phi_{0,j}^{n+1} \right), \\[4pt]
\phi_{N_x+1,j}^{n+1} = \phi_{N_x+1,j}^{n} + D_\phi \frac{\triangle t}{2\triangle y^2}\, \delta_y^2 \left( \phi_{N_x+1,j}^{n} + \phi_{N_x+1,j}^{n+1} \right) - 2 D_\phi \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( \phi_{N_x,j}^{n} + \phi_{N_x,j}^{n+1} \right)
\end{cases} \tag{35}$$

We now have a complete numerical method to solve the system (8) on Ω*nc* and the 1D version of it (9) on *Cnc*.

Let us now turn to the complete domain depicted in Figure 2 in order to develop mass-preserving transmission conditions at the nodes of the network. We remark that for the sake of clarity, for the development of the numerical transmission conditions here we consider just the junction—indicated as node 1*L*—connecting the left box Ω*<sup>l</sup>* and a single channel parametrized as an interval *C* = [*a*, *b*].

3.1.2. Discretization of the Transmission Conditions for the 2D-1D Doubly Parabolic Case

The choice of suitable transmission conditions is crucial, since it should reflect the qualitative attributes of the analytical model.

Here, we use ghost values (24) in order to obtain the numerical boundary conditions, since in such a way mass-preserving and positivity-preserving properties are ensured.

By using the central approximation formula for *div f* in the condition (14) on the left of node 1*L*, we have:

$$D_u \partial_x u(L_x, y, t) - F^x(L_x, y, t) = K \left( u_c(0, t) - u(L_x, y, t) \right) \quad \text{for } y \in [a, b].$$

Then we have:

$$D_u \frac{u_{N_x+2,j}^{n} - u_{N_x,j}^{n}}{2 \triangle x} = K \left( u_0^{c,n} - u_{N_x+1,j}^{n} \right) + F_{N_x+1,j}^{x,n}$$

and we get:

$$u_{N_x+2,j}^{n} = u_{N_x,j}^{n} + K \frac{2\triangle x}{D_u} \left( u_0^{c,n} - u_{N_x+1,j}^{n} \right) + \frac{2\triangle x}{D_u} F_{N_x+1,j}^{x,n} \tag{36}$$

for *j* = *ja*,..., *jb*.
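The ghost value (36) is a simple pointwise formula; the helper below (names and argument conventions are assumptions of this illustration) evaluates it for one grid index *j*.

```python
def ghost_value_2d(u_nx_j, u_nx1_j, u_c0, F_x, K, Du, dx):
    """Ghost value u_{Nx+2,j}^n from (36): u_{Nx,j}^n plus the transmission
    and flux corrections scaled by 2*dx/Du."""
    return u_nx_j + K * 2.0 * dx / Du * (u_c0 - u_nx1_j) + 2.0 * dx / Du * F_x

# example evaluation with assumed values
g = ghost_value_2d(1.0, 2.0, 3.0, 0.0, K=0.5, Du=1.0, dx=0.1)
```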

Moreover, using the central approximation for $\partial_x u_c$ in (15), we finally get the formula:

$$u_{-1}^{c,n} = u_1^{c,n} - K \frac{2\triangle x}{D_{u_c}} (b - a)\, u_0^{c,n} + K \frac{2\triangle x}{D_{u_c}} \triangle y \sum_{j=j_a}^{j_b} \left( u_{N_x+1,j}^{n} + u_{N_x+1,j}^{n+1} \right) - \frac{2\triangle x}{D_{u_c}} F_{c,0}^{n}. \tag{37}$$

We now use the ansatz of plugging the ghost values into the 1D (27) and 2D (18) numerical schemes without a specific chemotactic approximation, and we use the discrete integral equation to determine the discretization of the chemotactic term.

Since we need to conserve the mass not only in each domain but also across the two connected ones, the discrete integral equation extended to the total mass over both domains is needed.

Plugging the ghost values (36) and (37), respectively, into the numerical schemes (18) and (27), we get the conditions at the interface (node 1*L*):

$$\begin{array}{rcl} u_{N_x+1,j}^{n+1} &=& u_{N_x+1,j}^{n} - D_u \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( u_{N_x,j}^{n} + u_{N_x,j}^{n+1} \right) + 2K \frac{\triangle t}{\triangle x} \left( u_0^{c,n} - u_{N_x+1,j}^{n} \right) \\ && +\, D_u \frac{\triangle t}{2 \triangle y^2}\, \delta_y^2 \left( u_{N_x+1,j}^{n} + u_{N_x+1,j}^{n+1} \right) \\ && +\, 2 \frac{\triangle t}{\triangle x} F_{N_x+1,j}^{x,n} - \triangle t\, \delta_x^0 F_{N_x+1,j}^{x,n} - \triangle t\, \delta_y^0 F_{N_x+1,j}^{y,n} \end{array} \tag{38}$$

and

$$\begin{array}{rcl} u_0^{c,n+1} &=& u_0^{c,n} + D_{u_c} \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( u_0^{c,n} + u_0^{c,n+1} \right) - 2K \frac{\triangle t}{\triangle x} (b - a)\, u_0^{c,n} \\ && +\, 2K \frac{\triangle t \triangle y}{\triangle x} \sum_{j=j_a}^{j_b} \left( u_{N_x+1,j}^{n} + u_{N_x+1,j}^{n+1} \right) \\ && -\, 2 \frac{\triangle t}{\triangle x} F_{c,0}^{n} - \frac{\triangle t}{2 \triangle x}\, \delta_x^0 F_{c,0}^{n}. \end{array} \tag{39}$$

In particular, the conservation of the discrete total mass reads as:

$$
\mathcal{I}_{\rm 1D}^{n+1} + \mathcal{I}_{\rm 2D}^{n+1} - \mathcal{I}_{\rm 1D}^{n} - \mathcal{I}_{\rm 2D}^{n} = 0,\tag{40}
$$

Now, applying the conditions (38) and (39) with the other boundary conditions (31) and (25), we get:

$$\begin{split} &\triangle x \Big( -K \frac{\triangle t}{\triangle x} (b - a)\, u_0^{c,n} + K \frac{\triangle t \triangle y}{\triangle x} \sum_{j=j_a}^{j_b} \left( u_{N_x+1,j}^{n} + u_{N_x+1,j}^{n+1} \right) \\ &\quad - \frac{\triangle t}{\triangle x} F_{c,0}^{n} - \frac{\triangle t}{2}\, \delta_x^0 F_{c,0}^{n} + \frac{\triangle t}{2 \triangle x} \left( F_{c,0}^{n} + F_{c,1}^{n} \right) \Big) \\ &+ \frac{\triangle x \triangle y}{4} \Big( 2 \sum_{j=j_a}^{j_b} \Big( 2 \frac{\triangle t}{\triangle x} K \left( u_0^{c,n} - u_{N_x+1,j}^{n} \right) + 2 \frac{\triangle t}{\triangle x} F_{N_x+1,j}^{x,n} - \triangle t\, \delta_x^0 F_{N_x+1,j}^{x,n} - \triangle t\, \delta_y^0 F_{N_x+1,j}^{y,n} \Big) \\ &\quad - \frac{2 \triangle t}{\triangle x} \sum_{j=j_a}^{j_b} \left( F_{N_x+1,j}^{x,n} + F_{N_x,j}^{x,n} \right) - \frac{2 \triangle t}{\triangle y} \sum_{j=j_a}^{j_b} \left( F_{N_x+1,j}^{y,n} + F_{N_x,j}^{y,n} \right) \Big) = 0. \end{split}$$

We can then obtain the following transmission conditions:

$$u_0^{c,n+1} = \underbrace{u_0^{c,n} + D_{u_c} \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( u_0^{c,n} + u_0^{c,n+1} \right) - \frac{\triangle t}{\triangle x} \left( F_{c,1}^{n} + F_{c,0}^{n} \right)}_{\text{same as for BC without transmission condition}} - 2K \frac{\triangle t}{\triangle x} (b - a)\, u_0^{c,n} + 2K \frac{\triangle t}{\triangle x} \triangle y \sum_{j=j_a}^{j_b} \left( u_{N_x+1,j}^{n} + u_{N_x+1,j}^{n+1} \right), \tag{41}$$

$$\begin{array}{rcl} u_{N_x+1,j}^{n+1} &=& u_{N_x+1,j}^{n} - D_u \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( u_{N_x,j}^{n} + u_{N_x,j}^{n+1} \right) + D_u \frac{\triangle t}{2 \triangle y^2}\, \delta_y^2 \left( u_{N_x+1,j}^{n} + u_{N_x+1,j}^{n+1} \right) \\ && +\, \frac{\triangle t}{\triangle x} \left( F_{N_x+1,j}^{x,n} + F_{N_x,j}^{x,n} \right) - \frac{\triangle t}{2 \triangle y} \left( F_{N_x+1,j+1}^{y,n} - F_{N_x+1,j-1}^{y,n} \right) \\ && \underbrace{+\, 2K \frac{\triangle t}{\triangle x} \left( u_0^{c,n} - u_{N_x+1,j}^{n} \right)}_{\text{additional term for transmission condition}}, \end{array} \tag{42}$$

for *j* = *ja*,..., *jb*.

Proceeding analogously, this approach leads to mass-preserving and positivity-preserving transmission conditions for the chemoattractant *φ* as well. In particular, we have, at the first and last endpoint, respectively:

$$\begin{array}{rcl} \phi_0^{c,n+1} &=& \phi_0^{c,n} + D_{\phi_c} \frac{\triangle t}{\triangle x^2} \left( \delta_x^1 \phi_0^{c,n} + \delta_x^1 \phi_0^{c,n+1} \right) + \triangle t\, a_c u_0^{c,n} - \triangle t\, b_c \phi_0^{c,n} \\ && -\, 2K \frac{\triangle t}{\triangle x} (b - a)\, \phi_0^{c,n} + 2K \frac{\triangle t}{\triangle x} \int_a^b \phi(L_x, y, t_n)\, dy \end{array}$$

and

$$\begin{array}{rcl} \phi_{N_x+1,j}^{n+1} &=& \phi_{N_x+1,j}^{n} + D_\phi \frac{\triangle t}{2 \triangle y^2}\, \delta_y^2 \left( \phi_{N_x+1,j}^{n} + \phi_{N_x+1,j}^{n+1} \right) \\ && -\, 2 D_\phi \frac{\triangle t}{\triangle x^2}\, \delta_x^1 \left( \phi_{N_x,j}^{n} + \phi_{N_x,j}^{n+1} \right) \\ && +\, a \triangle t\, u_{N_x+1,j}^{n} - b \triangle t\, \phi_{N_x+1,j}^{n} + 2K \frac{\triangle t}{\triangle x} \left( \phi_0^{c,n} - \phi_{N_x+1,j}^{n} \right). \end{array} \tag{43}$$

We have finally developed a complete numerical scheme to treat doubly parabolic systems of partial differential equations in two domains, 1D and 2D, connected through a node, which ensures mass conservation by construction, as in the original PDE. Numerically, the scheme also ensures the positivity-preserving property under the monotonicity conditions discussed in Section 3.3.1.

#### *3.2. The Hyperbolic-Parabolic Case*

The second-order AHO scheme on a line was introduced in [14] for the 1D hyperbolic system (2). Here, considering the presence of the source term *g* on the right hand side of the equation for the density of cells, the AHO scheme of second order reads as:

$$\begin{cases} u_i^{c,n+1} = u_i^{c,n} + \lambda \frac{\triangle t}{2\triangle x}\, \delta_x^2 u_i^{c,n} - \left( \frac{\triangle t}{2\triangle x} - \frac{\triangle t}{4\lambda} \right) \delta_x^0 v_i^{c,n} + \frac{\triangle t}{4\lambda}\, \delta_x^0 F_{c,i}^{n} + \frac{\triangle t}{4} \left( g_{i-1}^{n} + 2 g_i^{n} + g_{i+1}^{n} \right), \\[6pt] v_i^{c,n+1} = v_i^{c,n} - \lambda^2 \frac{\triangle t}{2\triangle x}\, \delta_x^0 u_i^{c,n} + \left( \lambda \frac{\triangle t}{2\triangle x} - \frac{\triangle t}{4} \right) \delta_x^2 v_i^{c,n} + \frac{\triangle t}{4}\, \delta_x^2 F_{c,i}^{n} + \lambda \frac{\triangle t}{4} \left( g_{i-1}^{n} - g_{i+1}^{n} \right), \end{cases} \tag{44}$$

with mass-preserving boundary conditions (including the additional source term *g*) at the external boundaries.
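A defining property of the scheme (44) is that, with F = g = 0, the interior update conserves the discrete mass of *u* exactly. The sketch below (an illustration with assumed parameters; the periodic closure replaces the boundary treatment of [14] only to isolate the interior stencil) checks this.

```python
import math

def aho_step(u, v, lam, dt, dx):
    """One interior step of the second-order AHO scheme (44) with F = g = 0,
    closed periodically for this illustration."""
    n = len(u)
    un, vn = [0.0] * n, [0.0] * n
    for i in range(n):
        ip, im = (i + 1) % n, (i - 1) % n
        un[i] = (u[i] + lam * dt / (2 * dx) * (u[ip] - 2 * u[i] + u[im])
                 - (dt / (2 * dx) - dt / (4 * lam)) * (v[ip] - v[im]))
        vn[i] = (v[i] - lam ** 2 * dt / (2 * dx) * (u[ip] - u[im])
                 + (lam * dt / (2 * dx) - dt / 4) * (v[ip] - 2 * v[i] + v[im]))
    return un, vn

n, dx, lam, dt = 50, 0.02, 1.0, 0.004
u = [math.exp(-((i * dx - 0.5) / 0.1) ** 2) for i in range(n)]
v = [0.0] * n
m0 = sum(u)
for _ in range(100):
    u, v = aho_step(u, v, lam, dt, dx)
mass_change = abs(sum(u) - m0)
```

Both the second difference of *u* and the centered difference of *v* telescope to zero over a periodic line, so the sum of *u* changes only by roundoff.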

We remark that for the hyperbolic-parabolic model, not only must the mass be preserved, as in the fully parabolic model, but the flux *v* also needs to converge towards the steady state *v* = 0. Since here the 1D domain is connected at both endpoints, we do not need numerical boundary conditions for the outer boundaries; however, for the details and the description of the AHO numerical scheme at the outer boundaries, see [14]. For this reason we use the so-called AHO (Asymptotic High Order) schemes (see [18] for the study of AHO schemes at interfaces, including mass-preserving transmission conditions) with source term *g*, for which the approximation of the stationary solutions is accurate up to third order and converges towards a numerical solution with *v* = 0, while preserving the mass.

3.2.1. Discretization of Transmission Conditions for the 2D-Doubly Parabolic and 1D-Hyperbolic-Parabolic Case

The first equation is the same as for the interface between the 2D-doubly parabolic and the 1D-doubly parabolic case. Hence, we derive the same transmission condition reported in (42) for $u_{N_x+1,j}^{n+1}$, for $j = j_a, \dots, j_b$.

For the flux, the transmission condition (17) gives us

$$v_0^{c,n+1} = -K (b - a)\, u_0^{c,n+1} + K \triangle y \sum_{j=j_a}^{j_b} \left( u_{N_x+1,j}^{n} + u_{N_x+1,j}^{n+1} \right) \tag{45}$$

This can be computed explicitly from the numerically computed values of $u_0^{c,n+1}$ and $u_{N_x+1,j}^{n+1}$. Then, imposing the mass conservation:

$$\mathcal{I}_{\rm 1D}^{n+1} + \mathcal{I}_{\rm 2D}^{n+1} - \mathcal{I}_{\rm 1D}^{n} - \mathcal{I}_{\rm 2D}^{n} = 0$$

we get:

$$\begin{split} &\frac{\triangle x}{2} \left[ u_0^{c,n+1} - u_0^{c,n} + \lambda \frac{\triangle t}{\triangle x} \left( u_0^{c,n+1} - u_1^{c,n+1} \right) - \left( \frac{\triangle t}{\triangle x} - \frac{\triangle t}{2\lambda} \right) \left( -v_0^{c,n+1} - v_1^{c,n+1} \right) \right] \\ &+ \frac{\triangle x \triangle t}{4 \lambda} \left( F_{c,0}^{n+1} + F_{c,1}^{n+1} \right) \\ &+ \frac{\triangle x \triangle y}{4} \left[ 4 \sum_{j=j_a}^{j_b} \frac{\triangle t\, K}{\triangle x} \left( u_0^{c,n+1} + u_0^{c,n} - u_{N_x+1,j}^{n+1} - u_{N_x+1,j}^{n} \right) \right] = 0. \end{split}$$

We thus obtain the mass-preserving transmission condition, to which we finally add the source term *g*:

$$\begin{array}{rcl} u_0^{c,n+1} &=& u_0^{c,n} + \lambda \frac{\triangle t}{\triangle x}\, \delta_x^1 u_0^{c,n+1} - \left( \frac{\triangle t}{\triangle x} - \frac{\triangle t}{2\lambda} \right) \left( v_0^{c,n+1} + v_1^{c,n+1} \right) \\ && -\, K \frac{\triangle t}{\triangle x} \triangle y \sum_{j=j_a}^{j_b} \left( u_0^{c,n+1} + u_0^{c,n} - u_{N_x+1,j}^{n+1} - u_{N_x+1,j}^{n} \right) \\ && -\, \frac{\triangle t}{2\lambda} \left( F_{c,0}^{n+1} + F_{c,1}^{n+1} \right) + \frac{\triangle t}{2} \left( g_0^{n+1} + g_1^{n+1} \right). \end{array} \tag{46}$$

**Proposition 2.** *The complete numerical scheme derived for the 2D-doubly parabolic-1D-hyperbolic-parabolic model is mass-preserving—by construction—across the transmission conditions in the absence of source terms.*

**Remark 5.** *Note that the chemoattractant equation is the same as for the 1D-doubly parabolic and 2D-doubly parabolic case. Hence, the numerical schemes (33) and (19) with boundary conditions (32) and (43) can be used.*

#### 3.2.2. Multiple Channels

In the previous paragraphs, we connected the two-dimensional domain $\Omega_l$ with a single one-dimensional channel *C* at $(L_x, y) \in \Omega_l$, with $y \in [a_1, b_1]$ and $j_{a_1}$, $j_{b_1}$ the positions of the endpoints of the corridor on the numerical grid. Of course, this can be extended to more channels.

Let *Cm*, with *m* = 1, ... , M, be M corridors connected to the two-dimensional domain Ω*<sup>l</sup>* at (*Lx*, *ym*) with *ym* ∈ [*am*, *bm*], where *a*<sup>1</sup> > 0, *bm* < *am*+1 for *m* = 1, ... , M − 1, and *b*<sup>M</sup> < *Ly* to avoid intersections of the corridors, all with equal width *k*Δ*y*, *k* ∈ N.

#### *3.3. Implemented Algorithm*

Before presenting the numerical tests in Section 4, we adapt the approximation scheme for the density *u*, now including the source term *g*, implemented to solve the problem in the 2D-1D domain. As underlined before, implicit schemes are necessary to handle the presence of stiff source terms. For this reason, for the time discretization we use the Crank–Nicolson (CN) method, a second-order implicit method, on the diffusion and source terms, and the explicit central method for the convection term. Moreover, since CN is only A-stable but not L-stable [38], we also need to choose a Δ*t* small enough to avoid spurious oscillations of the solution during transients.
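As a minimal illustration of this implicit-explicit splitting (a sketch, not the solver used in the paper), the following one-dimensional step treats diffusion and source with Crank–Nicolson and convection with an explicit central difference; the periodic closure, grid, and parameter values are illustrative assumptions:

```python
import numpy as np

def imex_step(u, g_old, g_new, D, a, dt, dx):
    """One IMEX step for u_t + a u_x = D u_xx + g on a periodic grid:
    Crank-Nicolson (implicit, second order) on diffusion and source,
    explicit central differences on the convection term."""
    n = u.size
    # periodic second-difference matrix
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1.0
    r = D * dt / dx**2
    A = np.eye(n) - 0.5 * r * L                       # implicit CN part
    conv = a * dt * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    rhs = u + 0.5 * r * L @ u - conv + 0.5 * dt * (g_old + g_new)
    return np.linalg.solve(A, rhs)

x = np.linspace(0, 1, 64, endpoint=False)
u0 = np.exp(-100 * (x - 0.5) ** 2)
u1 = imex_step(u0, np.zeros_like(x), np.zeros_like(x), D=1e-2, a=1.0,
               dt=1e-3, dx=x[1] - x[0])
```

Because the diffusion part is implicit, each step requires a linear solve; this is the price paid for removing the parabolic time step restriction.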

Because of the explicit term, we have numerical restrictions on the mesh grid and time step. Furthermore, as discussed previously, we introduce artificial viscosity to avoid oscillations caused by an unsuitable mesh size in convection-dominated regimes, which is often the case in chemotaxis models. Finally, the implicit-explicit numerical method used to compute the solutions for the density *u* in (8) inside the 2D domain Ω*<sup>l</sup>* is:

$$\begin{array}{rcl} u_{i,j}^{n+1} &=& u_{i,j}^{n} + D_u\frac{\triangle t}{2}\left[\frac{\delta_x^2\left(u_{i,j}^{n}+u_{i,j}^{n+1}\right)}{\triangle x^2}+\frac{\delta_y^2\left(u_{i,j}^{n}+u_{i,j}^{n+1}\right)}{\triangle y^2}\right]\\ && -\ \frac{\triangle t}{4}\left[\frac{\delta_x^0 F_{i,j}^{x,n}}{\triangle x}+\frac{\delta_y^0 F_{i,j}^{y,n}}{\triangle y}\right] + \frac{\triangle t}{2}\left(g_{i,j}^{n}+g_{i,j}^{n+1}\right)\\ && -\ \triangle t\underbrace{\left[\frac{\delta_x^2\theta_{i,j}^{n}}{2\triangle x}+\frac{\delta_y^2\theta_{i,j}^{n}}{2\triangle y}\right]}_{\text{artificial viscosity}} \end{array} \tag{47}$$

with *θ<sup>n</sup><sub>i,j</sub>* := *χ*(*φ<sup>n</sup><sub>i,j</sub>*) *u<sup>n</sup><sub>i,j</sub>* |∇*φ<sup>n</sup><sub>i,j</sub>*| for *i* = 1, ... , *Nx*, *j* = 1, ... , *Ny*. As can be seen, the function *θ* used for the artificial viscosity is almost identical to *f*, with the exception of using the absolute value of ∇*φ*. In this way, we increase the artificial viscosity only where the gradient of the chemoattractant is large. This relaxes the restriction on the mesh grid induced by the cell Péclet number condition [36]. The numerical transmission condition on the left of node 1*L* (*i* = *Nx* + 1, *j* = *ja*, ... , *jb*) is:

$$\begin{array}{rcl} u_{N_x+1,j}^{n+1} &=& u_{N_x+1,j}^{n} - D_u\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(u_{N_x,j}^{n}+u_{N_x,j}^{n+1}\right) + D_u\frac{\triangle t}{2\triangle y^2}\,\delta_y^2\left(u_{N_x+1,j}^{n}+u_{N_x+1,j}^{n+1}\right)\\ && +\ \frac{\triangle t}{\triangle x}\left(F_{N_x+1,j}^{x,n}+F_{N_x,j}^{x,n}\right) - \frac{\triangle t}{2\triangle y}\left(F_{N_x+1,j+1}^{y,n}-F_{N_x+1,j-1}^{y,n}\right)\\ && +\ \frac{\triangle t}{2}\left(g_{N_x+1,j}^{n}+g_{N_x+1,j}^{n+1}\right) - \triangle t\left(\frac{\delta_x^1\theta_{N_x,j}^{n}}{\triangle x}+\frac{\delta_y^2\theta_{N_x+1,j}^{n}}{2\triangle y}\right)\\ && +\ K\frac{\triangle t}{\triangle x}\left(u_0^{c,n}-u_{N_x+1,j}^{n}+u_0^{c,n+1}-u_{N_x+1,j}^{n+1}\right). \end{array} \tag{48}$$

The role of the permeability coefficient *K* in the positivity of (48) is discussed in Section 3.3.1.
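The viscosity weight *θ* introduced with scheme (47) can be assembled in a vectorized way. The sketch below assumes, for illustration only, a sensitivity of the form *χ*(*φ*) = *k*1/(*k*2 + *φ*)²; the essential ingredient is the factor |∇*φ*|, which activates the extra viscosity only where the chemoattractant gradient is significant:

```python
import numpy as np

def artificial_viscosity_theta(u, phi, dx, dy, k1=1.0, k2=1.0):
    """theta = chi(phi) * u * |grad phi| on interior nodes.
    chi(phi) = k1/(k2 + phi)**2 is an assumed sensitivity; the paper's
    actual chi may differ.  Central differences approximate grad phi."""
    dphidx = (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2 * dx)
    dphidy = (phi[1:-1, 2:] - phi[1:-1, :-2]) / (2 * dy)
    grad_norm = np.hypot(dphidx, dphidy)          # |grad phi|
    chi = k1 / (k2 + phi[1:-1, 1:-1]) ** 2
    return chi * u[1:-1, 1:-1] * grad_norm
```

Where *φ* is locally constant the weight vanishes, so no spurious diffusion is added away from the chemoattractant fronts.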

For the external boundaries (the edges of the chamber Ω*<sup>l</sup>* except at the junctions *j* = *ja*,..., *jb*), we use:

$$\left\{\begin{array}{l}
u_{i,0}^{n+1} = u_{i,0}^{n} + D_u\frac{\triangle t}{2\triangle x^2}\,\delta_x^2\left(u_{i,0}^{n}+u_{i,0}^{n+1}\right) + D_u\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(u_{i,0}^{n}+u_{i,0}^{n+1}\right) - \frac{\triangle t}{2\triangle x}\,\delta_x^0 F_{i,0}^{x,n} - \frac{\triangle t}{\triangle y}\left(F_{i,0}^{y,n}+F_{i,1}^{y,n}\right)\\
\qquad +\ \frac{\triangle t}{2}\left(g_{i,0}^{n}+g_{i,0}^{n+1}\right) - \triangle t\left(\frac{\delta_x^2\theta_{i,0}^{n}}{2\triangle x}+\frac{\delta_y^1\theta_{i,0}^{n}}{\triangle y}\right), \qquad i=1,\dots,N_x,\\[4pt]
u_{i,N_y+1}^{n+1} = u_{i,N_y+1}^{n} + D_u\frac{\triangle t}{2\triangle x^2}\,\delta_x^2\left(u_{i,N_y+1}^{n}+u_{i,N_y+1}^{n+1}\right) - D_u\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(u_{i,N_y}^{n}+u_{i,N_y}^{n+1}\right) - \frac{\triangle t}{2\triangle x}\,\delta_x^0 F_{i,N_y+1}^{x,n} + \frac{\triangle t}{\triangle y}\left(F_{i,N_y}^{y,n}+F_{i,N_y+1}^{y,n}\right)\\
\qquad +\ \frac{\triangle t}{2}\left(g_{i,N_y+1}^{n}+g_{i,N_y+1}^{n+1}\right) - \triangle t\left(\frac{\delta_x^2\theta_{i,N_y+1}^{n}}{2\triangle x}+\frac{\delta_y^1\theta_{i,N_y}^{n}}{\triangle y}\right), \qquad i=1,\dots,N_x,\\[4pt]
u_{0,j}^{n+1} = u_{0,j}^{n} + D_u\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(u_{0,j}^{n}+u_{0,j}^{n+1}\right) + D_u\frac{\triangle t}{2\triangle y^2}\,\delta_y^2\left(u_{0,j}^{n}+u_{0,j}^{n+1}\right) - \frac{\triangle t}{\triangle x}\left(F_{0,j}^{x,n}+F_{1,j}^{x,n}\right) - \frac{\triangle t}{2\triangle y}\,\delta_y^0 F_{0,j}^{y,n}\\
\qquad +\ \frac{\triangle t}{2}\left(g_{0,j}^{n}+g_{0,j}^{n+1}\right) - \triangle t\left(\frac{\delta_x^1\theta_{0,j}^{n}}{\triangle x}+\frac{\delta_y^2\theta_{0,j}^{n}}{2\triangle y}\right), \qquad j=1,\dots,N_y,\\[4pt]
u_{N_x+1,j}^{n+1} = u_{N_x+1,j}^{n} - D_u\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(u_{N_x,j}^{n}+u_{N_x,j}^{n+1}\right) + D_u\frac{\triangle t}{2\triangle y^2}\,\delta_y^2\left(u_{N_x+1,j}^{n}+u_{N_x+1,j}^{n+1}\right) + \frac{\triangle t}{\triangle x}\left(F_{N_x+1,j}^{x,n}+F_{N_x,j}^{x,n}\right) - \frac{\triangle t}{2\triangle y}\,\delta_y^0 F_{N_x+1,j}^{y,n}\\
\qquad +\ \frac{\triangle t}{2}\left(g_{N_x+1,j}^{n}+g_{N_x+1,j}^{n+1}\right) - \triangle t\left(\frac{\delta_x^1\theta_{N_x,j}^{n}}{\triangle x}+\frac{\delta_y^2\theta_{N_x+1,j}^{n}}{2\triangle y}\right), \qquad j=1,\dots,N_y,\ j\neq j_a,\dots,j_b.
\end{array}\right. \tag{49}$$

For the corners, we use the following boundary conditions:

$$\left\{\begin{array}{l}
u_{0,0}^{n+1} = u_{0,0}^{n} + D_u\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(u_{0,0}^{n}+u_{0,0}^{n+1}\right) + D_u\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(u_{0,0}^{n}+u_{0,0}^{n+1}\right) - \frac{\triangle t}{\triangle x}\left(F_{0,0}^{x,n}+F_{1,0}^{x,n}\right) - \frac{\triangle t}{\triangle y}\left(F_{0,0}^{y,n}+F_{0,1}^{y,n}\right)\\
\qquad +\ \frac{\triangle t}{2}\left(g_{0,0}^{n}+g_{0,0}^{n+1}\right) - \triangle t\left(\frac{\delta_x^1\theta_{0,0}^{n}}{\triangle x}+\frac{\delta_y^1\theta_{0,0}^{n}}{\triangle y}\right),\\[4pt]
u_{N_x+1,0}^{n+1} = u_{N_x+1,0}^{n} - D_u\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(u_{N_x,0}^{n}+u_{N_x,0}^{n+1}\right) + D_u\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(u_{N_x+1,0}^{n}+u_{N_x+1,0}^{n+1}\right) + \frac{\triangle t}{\triangle x}\left(F_{N_x+1,0}^{x,n}+F_{N_x,0}^{x,n}\right) - \frac{\triangle t}{\triangle y}\left(F_{N_x+1,0}^{y,n}+F_{N_x+1,1}^{y,n}\right)\\
\qquad +\ \frac{\triangle t}{2}\left(g_{N_x+1,0}^{n}+g_{N_x+1,0}^{n+1}\right) - \triangle t\left(\frac{\delta_x^1\theta_{N_x,0}^{n}}{\triangle x}+\frac{\delta_y^1\theta_{N_x+1,0}^{n}}{\triangle y}\right),\\[4pt]
u_{0,N_y+1}^{n+1} = u_{0,N_y+1}^{n} + D_u\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(u_{0,N_y+1}^{n}+u_{0,N_y+1}^{n+1}\right) - D_u\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(u_{0,N_y}^{n}+u_{0,N_y}^{n+1}\right) - \frac{\triangle t}{\triangle x}\left(F_{0,N_y+1}^{x,n}+F_{1,N_y+1}^{x,n}\right) + \frac{\triangle t}{\triangle y}\left(F_{0,N_y+1}^{y,n}+F_{0,N_y}^{y,n}\right)\\
\qquad +\ \frac{\triangle t}{2}\left(g_{0,N_y+1}^{n}+g_{0,N_y+1}^{n+1}\right) - \triangle t\left(\frac{\delta_x^1\theta_{0,N_y+1}^{n}}{\triangle x}+\frac{\delta_y^1\theta_{0,N_y}^{n}}{\triangle y}\right),\\[4pt]
u_{N_x+1,N_y+1}^{n+1} = u_{N_x+1,N_y+1}^{n} - D_u\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(u_{N_x,N_y+1}^{n}+u_{N_x,N_y+1}^{n+1}\right) - D_u\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(u_{N_x+1,N_y}^{n}+u_{N_x+1,N_y}^{n+1}\right) + \frac{\triangle t}{\triangle x}\left(F_{N_x+1,N_y+1}^{x,n}+F_{N_x,N_y+1}^{x,n}\right) + \frac{\triangle t}{\triangle y}\left(F_{N_x+1,N_y+1}^{y,n}+F_{N_x+1,N_y}^{y,n}\right)\\
\qquad +\ \frac{\triangle t}{2}\left(g_{N_x+1,N_y+1}^{n}+g_{N_x+1,N_y+1}^{n+1}\right) - \triangle t\left(\frac{\delta_x^1\theta_{N_x,N_y+1}^{n}}{\triangle x}+\frac{\delta_y^1\theta_{N_x+1,N_y}^{n}}{\triangle y}\right).
\end{array}\right. \tag{50}$$

Similarly, for the chemoattractant *φ*, we have the implicit-explicit scheme in the interior points of the 2D domain:

$$\begin{array}{rcl}\phi_{i,j}^{n+1} &=& \phi_{i,j}^{n} + D_\phi\frac{\triangle t}{2}\left[\frac{\delta_x^2\left(\phi_{i,j}^{n}+\phi_{i,j}^{n+1}\right)}{\triangle x^2}+\frac{\delta_y^2\left(\phi_{i,j}^{n}+\phi_{i,j}^{n+1}\right)}{\triangle y^2}\right]\\ && +\ \frac{\triangle t}{2}\,a\left(u_{i,j}^{n}+u_{i,j}^{n+1}\right) - \frac{\triangle t}{2}\,b\left(\phi_{i,j}^{n}+\phi_{i,j}^{n+1}\right). \end{array} \tag{51}$$

At the corners of the numerical scheme for *φ*, we use the conditions:

$$\left\{\begin{array}{rcl}
\phi_{0,0}^{n+1} &=& \phi_{0,0}^{n} + D_\phi\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(\phi_{0,0}^{n}+\phi_{0,0}^{n+1}\right) + D_\phi\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(\phi_{0,0}^{n}+\phi_{0,0}^{n+1}\right)\\
&&+\ \frac{a\triangle t}{2}\left(u_{0,0}^{n}+u_{0,0}^{n+1}\right) - \frac{b\triangle t}{2}\left(\phi_{0,0}^{n}+\phi_{0,0}^{n+1}\right),\\[4pt]
\phi_{N_x+1,0}^{n+1} &=& \phi_{N_x+1,0}^{n} - D_\phi\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(\phi_{N_x,0}^{n}+\phi_{N_x,0}^{n+1}\right) + D_\phi\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(\phi_{N_x+1,0}^{n}+\phi_{N_x+1,0}^{n+1}\right)\\
&&+\ \frac{a\triangle t}{2}\left(u_{N_x+1,0}^{n}+u_{N_x+1,0}^{n+1}\right) - \frac{b\triangle t}{2}\left(\phi_{N_x+1,0}^{n}+\phi_{N_x+1,0}^{n+1}\right),\\[4pt]
\phi_{N_x+1,N_y+1}^{n+1} &=& \phi_{N_x+1,N_y+1}^{n} - D_\phi\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(\phi_{N_x,N_y+1}^{n}+\phi_{N_x,N_y+1}^{n+1}\right) - D_\phi\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(\phi_{N_x+1,N_y}^{n}+\phi_{N_x+1,N_y}^{n+1}\right)\\
&&+\ \frac{a\triangle t}{2}\left(u_{N_x+1,N_y+1}^{n}+u_{N_x+1,N_y+1}^{n+1}\right) - \frac{b\triangle t}{2}\left(\phi_{N_x+1,N_y+1}^{n}+\phi_{N_x+1,N_y+1}^{n+1}\right),\\[4pt]
\phi_{0,N_y+1}^{n+1} &=& \phi_{0,N_y+1}^{n} + D_\phi\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(\phi_{0,N_y+1}^{n}+\phi_{0,N_y+1}^{n+1}\right) - D_\phi\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(\phi_{0,N_y}^{n}+\phi_{0,N_y}^{n+1}\right)\\
&&+\ \frac{a\triangle t}{2}\left(u_{0,N_y+1}^{n}+u_{0,N_y+1}^{n+1}\right) - \frac{b\triangle t}{2}\left(\phi_{0,N_y+1}^{n}+\phi_{0,N_y+1}^{n+1}\right).
\end{array}\right. \tag{52}$$

For the borders, we use:

$$\left\{\begin{array}{rcl}
\phi_{i,0}^{n+1} &=& \phi_{i,0}^{n} + D_\phi\frac{\triangle t}{2\triangle x^2}\,\delta_x^2\left(\phi_{i,0}^{n}+\phi_{i,0}^{n+1}\right) + D_\phi\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(\phi_{i,0}^{n}+\phi_{i,0}^{n+1}\right) + \frac{a\triangle t}{2}\left(u_{i,0}^{n}+u_{i,0}^{n+1}\right) - \frac{b\triangle t}{2}\left(\phi_{i,0}^{n}+\phi_{i,0}^{n+1}\right),\\[4pt]
\phi_{i,N_y+1}^{n+1} &=& \phi_{i,N_y+1}^{n} + D_\phi\frac{\triangle t}{2\triangle x^2}\,\delta_x^2\left(\phi_{i,N_y+1}^{n}+\phi_{i,N_y+1}^{n+1}\right) - D_\phi\frac{\triangle t}{\triangle y^2}\,\delta_y^1\left(\phi_{i,N_y}^{n}+\phi_{i,N_y}^{n+1}\right) + \frac{a\triangle t}{2}\left(u_{i,N_y+1}^{n}+u_{i,N_y+1}^{n+1}\right) - \frac{b\triangle t}{2}\left(\phi_{i,N_y+1}^{n}+\phi_{i,N_y+1}^{n+1}\right),\\[4pt]
\phi_{0,j}^{n+1} &=& \phi_{0,j}^{n} + D_\phi\frac{\triangle t}{2\triangle y^2}\,\delta_y^2\left(\phi_{0,j}^{n}+\phi_{0,j}^{n+1}\right) + D_\phi\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(\phi_{0,j}^{n}+\phi_{0,j}^{n+1}\right) + \frac{a\triangle t}{2}\left(u_{0,j}^{n}+u_{0,j}^{n+1}\right) - \frac{b\triangle t}{2}\left(\phi_{0,j}^{n}+\phi_{0,j}^{n+1}\right),\\[4pt]
\phi_{N_x+1,j}^{n+1} &=& \phi_{N_x+1,j}^{n} + D_\phi\frac{\triangle t}{2\triangle y^2}\,\delta_y^2\left(\phi_{N_x+1,j}^{n}+\phi_{N_x+1,j}^{n+1}\right) - D_\phi\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(\phi_{N_x,j}^{n}+\phi_{N_x,j}^{n+1}\right) + \frac{a\triangle t}{2}\left(u_{N_x+1,j}^{n}+u_{N_x+1,j}^{n+1}\right) - \frac{b\triangle t}{2}\left(\phi_{N_x+1,j}^{n}+\phi_{N_x+1,j}^{n+1}\right).
\end{array}\right. \tag{53}$$

Note that for *i* = *Nx* + 1, the last formula in (53) is applied for *j* = 1, ... , *Ny* with *j* ≠ *ja*, ... , *jb*.

**Remark 6.** *If we consider a two-dimensional domain* Ω*<sup>r</sup> connected to the right endpoint of the one-dimensional corridor C, the complete numerical scheme described above for the left domain* Ω*<sup>l</sup> carries over.*

*The main difference is that the roles at the interface between box and channel are mirrored: the transmission conditions, which for* Ω*<sup>l</sup> couple the right side of the box with the left end of the corridor, now couple the right end of the corridor with the left side of the box* Ω*<sup>r</sup>. In the numerical scheme, the change only affects the channel C, where we also have transmission conditions for u<sup>c,n</sup><sub>N+1</sub> (resp. v<sup>c,n</sup><sub>N+1</sub>). The same boundary condition can be used without transmission conditions; only the additional term derived from the KK-condition must then be added, for u<sup>c,n</sup><sub>0</sub> (resp. v<sup>c,n</sup><sub>0</sub>) as well.*

For the computation of solutions on the one-dimensional channel *C*, we have two different approximations depending on the choice of the model we assign to it. If we solve the doubly parabolic problem (9), the approximation scheme used is the Crank–Nicolson scheme, as above:

$$\begin{array}{rcl} u_i^{c,n+1} &=& u_i^{c,n} + D_{u_c}\frac{\triangle t}{2}\left[\frac{\delta_x^2\left(u_i^{c,n}+u_i^{c,n+1}\right)}{\triangle x^2}\right] - \frac{\triangle t}{2\triangle x}\,\delta_x^0 F_{c,i}^{n}\\ && +\ \frac{\triangle t}{2}\left(g_i^{n}+g_i^{n+1}\right) - \triangle t\left(\frac{\delta_x^2\theta_i^{n}}{2\triangle x}\right), \end{array} \tag{54}$$

with the transmission condition on the left of node 1*L* (*i* = 0) given by:

$$\begin{array}{rcl} u_0^{c,n+1} &=& u_0^{c,n} + D_{u_c}\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(u_0^{c,n}+u_0^{c,n+1}\right) - \frac{\triangle t}{\triangle x}\left(F_{c,0}^{n}+F_{c,1}^{n}\right) - \triangle t\,\frac{\delta_x^1\theta_0^{n}}{\triangle x}\\ && -\ K\frac{\triangle t}{\triangle x}\left\{(b-a)\left(u_0^{c,n}+u_0^{c,n+1}\right) - \triangle y\sum_{j=j_a}^{j_b}\left(u_{N_x+1,j}^{n}+u_{N_x+1,j}^{n+1}\right)\right\}\\ && +\ \frac{\triangle t}{2}\left(g_0^{n}+g_0^{n+1}\right). \end{array} \tag{55}$$

If, instead, we need to solve the hyperbolic-parabolic problem (7), an implicit version of the second-order AHO scheme is applied—see the scheme reported in (44)—in order to ensure the stability of numerical solutions in the channels.

The scheme (44) is endowed with transmission conditions (45) and (46).

#### 3.3.1. Stability at Interfaces

Note that, in order to ensure the positivity of the quantities in the above formulas deriving from the KK conditions, i.e., (48) for the 2D domain and (55) or (45) for the 1D domain, we also need to take care of the ratio between the KK coefficient *K* and the space discretization steps. In particular, for (48) and (45), one needs to ensure that *K*Δ*t*/Δ*x* and, respectively, *K*Δ*t*Δ*y*/Δ*x* are not too large, in order to damp possible high oscillations produced by the term in parentheses. Similarly, in (55) we need to check that *K*(Δ*t*/Δ*x*)(*b* − *a*) is small in order to prevent the growth of the negative term.

Moreover, as previously discussed, we need to check that the numerical monotonicity condition is satisfied:

$$\frac{k_1}{\left(k_2+\phi_{i,j}^{n}\right)^2}\,\left|\partial_x\phi_{i,j}^{n}\right| \le \sqrt{D_u} \tag{56}$$

in the computational domain in order to ensure non-negative solutions.
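A pointwise check of a monotonicity condition of the type (56) is straightforward to vectorize. In the sketch below, *χ*(*φ*) = *k*1/(*k*2 + *φ*)² is an assumed form of the sensitivity, and the one-dimensional version is shown for brevity:

```python
import numpy as np

def monotonicity_ok(phi, dx, D_u, k1=1.0, k2=1.0):
    """Check a condition of type (56) at every interior node:
    k1/(k2 + phi)**2 * |d phi / dx| <= sqrt(D_u).
    chi(phi) = k1/(k2 + phi)**2 is an assumed sensitivity form."""
    dphidx = (phi[2:] - phi[:-2]) / (2 * dx)          # central difference
    lhs = k1 / (k2 + phi[1:-1]) ** 2 * np.abs(dphidx)
    return bool(np.all(lhs <= np.sqrt(D_u)))
```

A gentle chemoattractant profile passes the check, while a steep gradient with small diffusion violates it, signalling a risk of negative densities.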

Now, we consider the interface between the 2D and 1D domains. If we assume *g* = 0 and *f* = *χ*(*φ*)∇*φ*, the first equation in the 2D parabolic system (8) can be rewritten as:

$$
\partial_t u = D_u \triangle u - \operatorname{div}(u\,f). \tag{57}
$$

The transmission condition (48) reads as:

$$\begin{array}{rcl}
u_{N_x+1,j}^{n+1} &=& u_{N_x+1,j}^{n} - D_u\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(u_{N_x,j}^{n}+u_{N_x,j}^{n+1}\right) + D_u\frac{\triangle t}{2\triangle y^2}\,\delta_y^2\left(u_{N_x+1,j}^{n}+u_{N_x+1,j}^{n+1}\right)\\
&&+\ \frac{\triangle t}{\triangle x}\left(f_{N_x,j}^{x,n}u_{N_x,j}^{n}+f_{N_x+1,j}^{x,n}u_{N_x+1,j}^{n}\right) - \frac{\triangle t}{2\triangle y}\left(f_{N_x+1,j+1}^{y,n}u_{N_x+1,j+1}^{n}-f_{N_x+1,j-1}^{y,n}u_{N_x+1,j-1}^{n}\right)\\
&&+\ \frac{\triangle t}{\triangle x}\left(|f_{N_x,j}^{x,n}|u_{N_x,j}^{n}-|f_{N_x+1,j}^{x,n}|u_{N_x+1,j}^{n}\right)\\
&&+\ \frac{\triangle t}{2\triangle y}\left(|f_{N_x+1,j+1}^{y,n}|u_{N_x+1,j+1}^{n}-2|f_{N_x+1,j}^{y,n}|u_{N_x+1,j}^{n}+|f_{N_x+1,j-1}^{y,n}|u_{N_x+1,j-1}^{n}\right)\\
&&+\ \underbrace{K\frac{\triangle t}{\triangle x}\left(u_0^{c,n}-u_{N_x+1,j}^{n}+u_0^{c,n+1}-u_{N_x+1,j}^{n+1}\right)}_{\text{KK-transmission term}}.
\end{array} \tag{58}$$

Then, the transmission condition (55) for *g* = 0 and *f* = *χ*(*φ*)*φ<sup>x</sup>* now reads as:

$$\begin{array}{rcl} u_0^{c,n+1} &=& u_0^{c,n} + D_{u_c}\frac{\triangle t}{\triangle x^2}\,\delta_x^1\left(u_0^{c,n}+u_0^{c,n+1}\right) - \frac{\triangle t}{\triangle x}\left(u_0^{c,n}f_0^{n}+u_1^{c,n}f_1^{n}\right)\\ && +\ \frac{\triangle t}{\triangle x}\left(u_1^{c,n}|f_1^{n}|-u_0^{c,n}|f_0^{n}|\right)\\ && +\ \underbrace{\frac{\triangle t}{\triangle x}K\left(\triangle y\sum_{j=j_a}^{j_b}\left(u_{N_x+1,j}^{n}+u_{N_x+1,j}^{n+1}\right)-(b-a)\left(u_0^{c,n}+u_0^{c,n+1}\right)\right)}_{\text{KK-transmission term}} \end{array} \tag{59}$$

for *j* = *ja*, ... , *jb*. For the transmission condition (58), we see that monotonicity is preserved when:

$$1 - D_u\frac{\triangle t}{\triangle x^2} - D_u\frac{\triangle t}{\triangle y^2} + \frac{\triangle t}{\triangle x}f_{N_x+1,j}^{x,n} - \frac{\triangle t}{\triangle x}\left|f_{N_x+1,j}^{x,n}\right| - \frac{\triangle t}{\triangle y}\left|f_{N_x+1,j}^{y,n}\right| - \frac{\triangle t}{\triangle x}K > 0, \tag{60}$$

which gives us the stability condition for the left side of the interface.

For the right side of the interface, if we assign the doubly parabolic 1D system (9), we have the stability condition:

$$1 - D_{u_c}\frac{\triangle t}{\triangle x^2} + \frac{\triangle t}{\triangle x}f_{c,0}^{n} - \frac{\triangle t}{\triangle x}\left|f_{c,0}^{n}\right| - \frac{\triangle t}{\triangle x}K(b-a) > 0. \tag{61}$$

We underline that (61) is not only influenced by the KK-constant *K* but also by the channel width *σ* := (*b* − *a*), which must be taken care of accordingly.

Analogously, we conduct the derivation of stability conditions for the transmission conditions in the case where we assign the 2D parabolic model (57) on the left part of the interface and the hyperbolic system (10) on the right. For the sake of clarity, we rewrite the hyperbolic part of the system (10) for the density flux of individuals when we assume *g* = 0:

$$\begin{cases} \partial_t u_c + \partial_x v_c = 0,\\ \partial_t v_c + \lambda^2\,\partial_x u_c = F_c. \end{cases} \tag{62}$$

The explicit versions of the KK-transmission conditions (46) and (45) in this case read as:

$$\left\{\begin{array}{rcl}
u_0^{c,n+1} &=& u_0^{c,n} + \lambda\frac{\triangle t}{\triangle x}\left(u_1^{c,n}-u_0^{c,n}\right) - \left(\frac{\triangle t}{\triangle x}-\frac{\triangle t}{2\lambda}\right)\left(v_0^{c,n}+v_1^{c,n}\right) - \frac{\triangle t}{2\lambda}\left(F_{c,0}^{n}+F_{c,1}^{n}\right)\\
&&-\ K\frac{\triangle t\,\triangle y}{\triangle x}\sum_{j=j_a}^{j_b}\left(u_0^{c,n}-u_{N_x+1,j}^{n}+u_0^{c,n+1}-u_{N_x+1,j}^{n+1}\right),\\[4pt]
u_{N+1}^{c,n+1} &=& u_{N+1}^{c,n} + \lambda\frac{\triangle t}{\triangle x}\left(u_N^{c,n}-u_{N+1}^{c,n}\right) + \left(\frac{\triangle t}{\triangle x}-\frac{\triangle t}{2\lambda}\right)\left(v_N^{c,n}+v_{N+1}^{c,n}\right) + \frac{\triangle t}{2\lambda}\left(F_{c,N}^{n}+F_{c,N+1}^{n}\right)\\
&&-\ K\frac{\triangle t\,\triangle y}{\triangle x}\sum_{j=j_a}^{j_b}\left(u_{N+1}^{c,n}-u_{0,j}^{r,n}+u_{N+1}^{c,n+1}-u_{0,j}^{r,n+1}\right),\\[4pt]
v_0^{c,n} &=& -K(b-a)\,u_0^{c,n} + K\triangle y\sum_{j=j_a}^{j_b}\left(u_{N_x+1,j}^{l,n}+u_{N_x+1,j}^{l,n+1}\right),\\[4pt]
v_{N+1}^{c,n} &=& -K(b-a)\,u_{N+1}^{c,n} + K\triangle y\sum_{j=j_a}^{j_b}\left(u_{0,j}^{r,n}+u_{0,j}^{r,n+1}\right).
\end{array}\right. \tag{63}$$

In order to establish the monotonicity condition, as in [39], we need to diagonalize the boundary conditions above in terms of the diagonal variables *w*<sup>*n*+1</sup><sub>0</sub>, *z*<sup>*n*+1</sup><sub>0</sub>, *w*<sup>*n*+1</sup><sub>*N*+1</sub>, *z*<sup>*n*+1</sup><sub>*N*+1</sub>, with the relations *u* = *w* + *z* and *v* = *λ*(*z* − *w*).

We have:

$$\begin{array}{rcl} v_0^{c,n+1} &=& \lambda\left(z_0^{c,n+1}-w_0^{c,n+1}\right) = -K(b-a)\left(z_0^{c,n+1}+w_0^{c,n+1}\right)\\ && +\ K\triangle y\sum_{j=j_a}^{j_b}\left(u_{N_x+1,j}^{l,n}+u_{N_x+1,j}^{l,n+1}\right). \end{array}$$

If we set  $\rho := \frac{\lambda-(b-a)K}{\lambda+(b-a)K}$  and  $\varsigma^{n+1} := \frac{K\triangle y}{\lambda+(b-a)K}\sum_{j=j_a}^{j_b}\left(u_{N_x+1,j}^{l,n}+u_{N_x+1,j}^{l,n+1}\right)$,  the relation above becomes  $z_0^{c,n+1} = \rho\,w_0^{c,n+1}+\varsigma^{n+1}$

and

$$\begin{array}{rcl} (1+\rho)\,w_0^{n+1}+\varsigma^{n+1} &=& (1+\rho)\,w_0^{n}+\varsigma^{n} + 2\lambda\frac{\triangle t}{\triangle x}\left(w_1^{n}-\rho\,w_0^{n}-\varsigma^{n}\right)\\ && +\ \frac{\triangle t}{2}\left((\rho-1)\,w_0^{n}+\varsigma^{n}+z_1^{n}-w_1^{n}\right) - \frac{\triangle t}{2\lambda}\left(F_{c,0}^{n}+F_{c,1}^{n}\right)\\ && -\ \frac{\triangle t}{\triangle x}K\left[(b-a)\left((1+\rho)\left(w_0^{n}+w_0^{n+1}\right)+\varsigma^{n}+\varsigma^{n+1}\right) - \triangle y\sum_{j=j_a}^{j_b}\left(u_{N_x+1,j}^{n}+u_{N_x+1,j}^{n+1}\right)\right]. \end{array}$$

Now, by applying the monotonicity condition we get the following inequality:

$$(1+\rho) - 2\lambda\frac{\triangle t}{\triangle x}\rho + \frac{\triangle t}{2}(\rho-1) - \frac{\triangle t}{\triangle x}K(b-a)(1+\rho) - \frac{\triangle t}{2\lambda}F_{c,0}^{n} > 0 \;\Longleftrightarrow\; \triangle t \le \frac{1+\rho}{\dfrac{2\lambda\rho}{\triangle x}+\dfrac{1-\rho}{2}+K\dfrac{(b-a)(1+\rho)}{\triangle x}+\dfrac{F_{c,0}^{n}}{2\lambda}} \tag{64}$$

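The diagonalization step above can be sanity-checked numerically: with the stated *ρ* and *ς*, the value *z*0 = *ρw*0 + *ς* must satisfy the boundary relation *λ*(*z*0 − *w*0) = −*K*(*b* − *a*)(*z*0 + *w*0) + *K*Δ*y S*, where *S* stands for the sum over the interface nodes. All values below are arbitrary illustrative numbers, not data from the paper:

```python
# Check the diagonalization: with rho = (lam - sigma*K)/(lam + sigma*K)
# and varsigma = K*dy*S/(lam + sigma*K), the value z0 = rho*w0 + varsigma
# satisfies lam*(z0 - w0) = -K*sigma*(z0 + w0) + K*dy*S.
lam, K, sigma, dy = 5.0, 0.8, 0.2, 0.1
w0, S = 1.3, 4.0           # arbitrary diagonal variable and 2D-side sum

rho = (lam - sigma * K) / (lam + sigma * K)
varsigma = K * dy * S / (lam + sigma * K)
z0 = rho * w0 + varsigma

lhs = lam * (z0 - w0)                       # v0 from the diagonal variables
rhs = -K * sigma * (z0 + w0) + K * dy * S   # v0 from the KK condition
assert abs(lhs - rhs) < 1e-12
```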

For the implicit AHO scheme, condition (64) becomes:

$$
\triangle t \le \frac{1+\rho}{K\dfrac{(b-a)(1+\rho)}{\triangle x}+\dfrac{F_{c,0}^{n}}{2\lambda}}.
$$

In Figure 3, the time step restriction (64) is depicted for a qualitative understanding of the effect of the Kedem–Katchalsky constant *K* and of the channel width *σ* on the time step Δ*t*. As expected, the time step Δ*t* must be chosen smaller when either *K* or *σ* increases. Furthermore, for *K* = 0 we recover the time step restriction of the *AHO*2-scheme, Δ*t* ≤ 4Δ*x*/(Δ*x* + 4*λ*) = 2 · 10<sup>−3</sup>. Since the values of *K* are typically of similar magnitude to the diffusion coefficients, the additional stability restrictions caused by the hyperbolic part of the transmission conditions are minimal.
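The bound (64) is easy to evaluate numerically. The sketch below implements the formula as reconstructed above, so it should be read as an approximation of the behavior in Figure 3 rather than the paper's actual code; for *K* = 0 and *F* = 0 it returns Δ*x*/*λ* = 2 · 10<sup>−3</sup>, consistent with the restriction quoted in the text:

```python
def dt_bound(K, sigma, dx=0.01, lam=5.0, F=0.0):
    """Upper bound on the time step from a monotonicity condition of
    type (64); sigma = b - a is the channel width.  Formula follows the
    reconstruction in the text, not the paper's source code."""
    rho = (lam - sigma * K) / (lam + sigma * K)
    denom = (2 * lam * rho / dx + (1 - rho) / 2
             + K * sigma * (1 + rho) / dx + F / (2 * lam))
    return (1 + rho) / denom

# For K = 0 (impermeable interface) the bound reduces to dx/lam = 2e-3.
```

Increasing either *K* or *σ* tightens the bound, matching the qualitative trend described for Figure 3.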

Finally, we also point out that the time step restriction for the transmission condition for the one-dimensional parabolic Equation (61) is much more severe than for the one-dimensional hyperbolic one (64), which can also be seen qualitatively in the steepness of Figures 3 and 4.

**Figure 3.** Time step restriction (64) on Δ*t* for the hyperbolic transmission condition with Δ*x* = 0.01, Δ*y* = 0.1, and *λ* = 5 for different *K* and channel widths (*b* − *a*), for the transmission between the two-dimensional parabolic Equation (57) and the one-dimensional hyperbolic Equation (62) with *f* = 0.

**Figure 4.** Time step restriction (61) on Δ*t* for the one-dimensional parabolic transmission condition with Δ*x* = 0.01, Δ*y* = 0.1, and *D* = 5 for different *K* and channel widths *σ* = *b* − *a*, *f* = 0. As expected, the time step Δ*t* must be chosen smaller when either *K* or the channel width *σ* increases.

Comparing the time step restrictions (61) and (64) with each other, it is evident that the restriction for the one-dimensional parabolic transmission condition (61) dominates the full model.

For the sake of completeness, we underline that at each time step a non-linear equation system must be solved, for which Newton–Krylov subspace methods [40] can be used, which take advantage of the mostly sparse structure of the Jacobian matrix.
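For instance, with SciPy one can advance one implicit step by handing the residual of the nonlinear system to a Jacobian-free Newton–Krylov solver; the model problem, discretization, and tolerances below are illustrative stand-ins, not the paper's actual system:

```python
import numpy as np
from scipy.optimize import newton_krylov

# One implicit step u^{n+1} solving the nonlinear system
# F(u) = u - u_n - dt*(u_xx - u**2) = 0 (a stand-in nonlinearity),
# via a Jacobian-free Newton-Krylov method: only residual evaluations
# are needed, and the sparse Jacobian structure is exploited implicitly.
n, dt = 50, 1e-3
dx = 1.0 / n
u_n = np.exp(-50 * (np.linspace(0, 1, n) - 0.5) ** 2)

def residual(u):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    uxx[0] = 2 * (u[1] - u[0]) / dx**2          # homogeneous Neumann
    uxx[-1] = 2 * (u[-2] - u[-1]) / dx**2
    return u - u_n - dt * (uxx - u**2)

u_next = newton_krylov(residual, u_n, f_tol=1e-10)
```

Starting the Newton iteration from the previous time level keeps the number of Krylov iterations per step small.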

#### **4. Numerical Tests and Results**

We start this section with a preliminary test on the mass-preserving properties at the boundaries. Indeed, in the absence of source terms, the masses of cells and chemical substances are preserved. In order to verify this property numerically, we consider the numerical approximation at the interface between 1D-1D models in the next numerical example.

**Example 1.** *In Figure 5, a comparison between the central mass-preserving (24) and standard finite difference (23) boundary conditions is depicted, for the 1D-doubly parabolic case on both sides of the interface (on the left) and for the 1D-doubly parabolic-1D-hyperbolic-parabolic interface (on the right). From this 1D numerical example, the necessity of developing modified boundary conditions which are consistent and preserve the mass correctly is evident. We also underline that for the discretization of the Neumann condition, the forward scheme is first-order accurate, while the central scheme is second-order accurate; for more details, see [41].*
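The discrete mass bookkeeping behind such a test can be reproduced in a few lines. The sketch below (an illustration, not the paper's scheme) uses Crank–Nicolson diffusion with zero-flux boundary rows, so the column sums of the discrete Laplacian vanish and the total mass Δ*x* Σ*ᵢ uᵢ* is conserved to machine precision over many steps:

```python
import numpy as np

# Crank-Nicolson diffusion with zero-flux (Neumann) boundary closure:
# the Laplacian's column sums are zero, so dx*sum(u) is invariant,
# mimicking the mass-preserving boundary conditions discussed above.
n, dx, dt, D = 100, 0.01, 1e-4, 1.0
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = -1.0          # zero-flux rows
r = D * dt / dx**2
A = np.eye(n) - 0.5 * r * L
B = np.eye(n) + 0.5 * r * L

u = np.exp(-((np.arange(n) * dx - 0.5) ** 2) / 0.005)
mass0 = dx * u.sum()
for _ in range(100):
    u = np.linalg.solve(A, B @ u)
# dx * u.sum() stays equal to mass0 up to rounding
```

A standard one-sided discretization of the Neumann condition would instead leak mass at first order, which is exactly the discrepancy visible in Figure 5.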

The remainder of this section is devoted to the presentation of the numerical tests; the parameters of the problem are reported in Table 1. Our aim is to show the ability of the simulation algorithm based on the model (3)–(7) to reproduce the qualitative behavior of the two populations sharing the same habitat, as observed in the videos of laboratory experiments [1,2,25].

We remark that we decided to perform numerical simulations of the chip geometry by assigning the 1D hyperbolic-parabolic model to the channels, since it seems more realistic.


**Figure 5.** (**left**) evolution of total mass for 1D-1D-doubly parabolic model with standard vs. mass-preserving boundary conditions. (**right**) evolution of total mass for 1D-1D-hyperbolic-parabolic model with standard vs. mass-preserving boundary conditions.

**Example 2.** *Before we numerically simulate the laboratory experiment with the algorithm, we conduct a simple numerical test in order to prove its accuracy. We assume the following setting: a left squared chamber* Ω*<sup>l</sup> with one corridor positioned in the middle and only one cell family with initial distribution* $u(x, y, 0) = 5e^{-\frac{1}{2}\left((x-0.5)^2+(y-0.5)^2\right)}$*. Since we do not have any analytical solution for this problem, we choose* Δ*t and* Δ*x* = Δ*y small enough to obtain reasonable error estimations. In this case, we use* Δ*t* = 10<sup>−4</sup> *and* Δ*x* = Δ*y* = 5 × 10<sup>−4</sup> *for the approximation u<sup>e</sup> at time t* = 100 *and calculate the error as the quantity u<sup>e</sup>* − *u*<sub>approx</sub> *in the L*<sup>1</sup>*-norm.*

*In order to confirm the order of our scheme, we use a log-log plot with constant and small enough* Δ*t (resp.* Δ*x) and decreasing* Δ*x (resp.* Δ*t). As shown in Figures 6 and 7, both the time order and the space order match a line with slope 2 in the log-log plot, which confirms that our scheme is of order 2 in time and space.*
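The order estimate read off the log-log plot amounts to fitting a line to the (log Δ*x*, log error) pairs. A minimal sketch, using synthetic second-order error data in place of the measured *L*<sup>1</sup> errors:

```python
import numpy as np

# Estimate the convergence order as the slope of the log-log error line.
# The error values are synthetic (err = C*h^2 with a small perturbation),
# standing in for the measured L1 errors of the scheme.
h = np.array([0.5, 0.1, 0.05, 0.001])            # decreasing step sizes
err = 0.3 * h**2 * (1 + 0.05 * np.array([0.3, -0.2, 0.1, 0.05]))

slope, _ = np.polyfit(np.log(h), np.log(err), 1)  # least-squares line fit
print(f"observed order ≈ {slope:.2f}")            # close to 2 for a 2nd-order scheme
```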

**Figure 6.** Log-log plot of the error, namely the quantity *u<sup>e</sup>* − *u*<sub>approx</sub> in *L*<sup>1</sup>-norm, as a function of the space step, with fixed Δ*t* = 10<sup>−3</sup> and decreasing Δ*x* = 0.5, 0.1, 0.05, 0.001 at time *t* = 100. We depict in blue the obtained error and in red a line with slope 2 for comparison.

**Figure 7.** Log-log plot of the error, namely the quantity *u<sup>e</sup>* − *u*<sub>approx</sub> in *L*<sup>1</sup>-norm, as a function of the time step, with fixed Δ*x* = 10<sup>−3</sup> and decreasing Δ*t* = 0.5, 0.1, 0.05, 0.001 at time *t* = 100. We depict in blue the obtained error and in red a line with slope 2 for comparison.

Now, we describe the simulation of the chip environment. All the simulations were performed in MATLAB. A simulation on the complete geometry until time *t* = 50,000 s took about 400 s on an Intel(R) Core(TM) i7-3630QM CPU at 2.4 GHz. The computational domain is schematized in Figure 2, with the two chambers and 5 corridors *C<sup>m</sup>* := [0, *L*], *m* = 1, ... , 5, all with the same width of 12 μm and equispaced from each other. The numerical method implemented is described in Section 3.3; for the 1D channels, the *AHO*2 scheme (of second order) is implemented, since there we are considering the hyperbolic-parabolic model. The discretization grid has time step size Δ*t* = 100 s and space sizes Δ*x* = 2.5 μm, Δ*y* = 25 μm.

For the examples below, we assume the following initial condition (time *t* = 0) for the tumor cells distribution on the chip for (*x*, *y*) ∈ Ω*l*:

$$T(x, y, 0) = 5e^{-10^{-4}\left(x^2 + (y - 500)^2\right)} + 5e^{-10^{-4}\left(x^2 + (y - 5)^2\right)} + 5e^{-10^{-4}\left(x^2 + (y - 1000)^2\right)},\tag{65}$$

whereas, in the corridors and the right chamber, no tumor cells are present. For the immune cells distribution on the chip for (*x*, *y*) ∈ Ω*r*, we assign:

$$M(x, y, 0) = 5, \text{ for } (x, y) \in \Omega\_r,\tag{66}$$

to mean that macrophages are initially distributed in the right chamber, whereas no immune cells are present in the left chamber or in the corridors.

For the chemoattractants, we set an identically null initial density for *ω* and *ϕ* in both chambers and in the channels.
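For illustration, the initial data (65)–(66) can be assembled on a uniform grid as follows. The chamber extent and the grid resolution used here are assumptions made for the sketch, not values taken from the text.

```python
import numpy as np

# Sketch of the initial data (65)-(66) on a uniform grid. The 1000 μm chamber
# extent and the 201x201 resolution are illustrative assumptions.
x = np.linspace(0.0, 1000.0, 201)
y = np.linspace(0.0, 1000.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")

def bump(yc):
    # one Gaussian term of Equation (65), centered at (0, yc)
    return 5.0 * np.exp(-1e-4 * (X**2 + (Y - yc)**2))

T_left = bump(500.0) + bump(5.0) + bump(1000.0)  # tumor cells, left chamber only
M_right = 5.0 * np.ones_like(X)                  # macrophages, right chamber only
phi0 = omega0 = np.zeros_like(X)                 # chemoattractants start at zero
```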

*The results are depicted in the following Figures 8 and 9.*

**Figure 8.** Treated case. Initial distribution for the model (3)–(7) at time *t* = 0.

*In Figures 8 and 9, we can see the density of the tumor cells T and macrophages M for different times t* = 0*, t* = 10,000 *s and t* = 50,000 *s. Note that at time t* = 0 *tumor cells are present in the left chamber only and immune cells are present in the right chamber only.*

*Since tumor cells were previously treated by a chemotherapy drug, they slowly diffuse around and stay confined in the left chamber* Ω*<sup>l</sup> during the whole simulation time; meanwhile, they produce the chemoattractant ϕ attracting immune cells. Immune cells M, instead, diffuse around in* Ω*r, cross the channels, and after a certain time they enter the left chamber* Ω*<sup>l</sup> while creating the chemoattractant ω.*

*This is due to the fact that the chemoattractant ϕ produced by cancer cells travels through the channels and induces a migration of the immune cells M towards the tumor cells T; the migration is strongest towards the center of the left chamber, where the initial distribution of tumor cells was closest to the channels, as we observe from the laboratory experiment.*

*At the final time t = 50,000 s, we can see that the quantity of tumor cells is decreasing under the action of immune cells producing chemokine ω.*

**Figure 9.** Treated case. Evolution of the model (3)–(7) at time *t* = 10,000 s (**top**) and at time *t* = 50,000 s (**bottom**).

**Example 4.** *Second scenario: untreated case. In this numerical test, we consider the second scenario where the tumor is not treated with any medicine. Therefore, in this case we assume a higher diffusion coefficient for the tumor, but the initial conditions are the same as those used above. The results are depicted in the following Figure 10.*

*In Figure 10, we can see the density of the tumor cells T and macrophages M for times t = 10,000 s and t = 50,000 s. Note that at time t* = 0 *tumor cells are present in the left chamber only and immune cells are present in the right chamber only.*

*Since, in the laboratory experiment, untreated tumor cells diffuse around, cross the channels, and enter the right chamber* Ω*<sup>r</sup> after some time, we try to reproduce such behavior by using a diffusion coefficient D<sup>T</sup> an order of magnitude higher with respect to the one used in the first scenario. Moreover, in this case the tumor cells do not produce the chemoattractant ϕ; for this reason, here we set αϕ* = 0*. Immune cells M diffuse around in* Ω*<sup>r</sup> and do not cross the channels, since the chemical stimulus is not secreted by T cells. Moreover, the production of the chemical substance ω is neglected in this case, thus tumor cells are not killed.*

We only mention that we tested the 1D-doubly parabolic model on the channels and compared it with the hyperbolic-parabolic model used in the previous examples. Using the same initial data as in the other examples, we notice that the doubly parabolic model seems to produce a pattern similar to that of the hyperbolic-parabolic model depicted above, but the scale of the quantities differs considerably between the two models.

In particular, for the doubly parabolic model the concentration of the tumor cells *T* is two or three orders of magnitude higher. This is due to the much slower movement of the immune cells through the channels. This also explains the much higher concentration of the chemoattractant *φ*, which follows from the much higher concentration of *T* compared to the other model.

**Figure 10.** Untreated case. Evolution of the model (3)–(7) at time *t* = 10,000 s (**top**) and at time *t* = 50,000 s (**bottom**).

In the following Figure 11, we represent the density of tumor cells and immune cells as particles, placed randomly according to their density. The higher the density at a given point, the more cells are distributed randomly around that area. If the density is lower than a chosen threshold at a certain point, no cells are represented around it.
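A possible implementation of this density-to-particles rendering is sketched below. The Poisson sampling and the helper name `density_to_particles` are our choices for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def density_to_particles(rho, dx, dy, scale=1.0, threshold=0.1):
    """Scatter particle positions according to a cell density field.

    For each grid cell, a number of particles proportional to the local
    density is placed uniformly at random inside the cell; cells below
    `threshold` stay empty. `scale` and `threshold` are visualization
    choices, not model data.
    """
    xs, ys = [], []
    for i, j in np.argwhere(rho >= threshold):
        n = rng.poisson(scale * rho[i, j])      # particle count ~ local density
        xs.append(i * dx + dx * rng.random(n))  # uniform positions in the cell
        ys.append(j * dy + dy * rng.random(n))
    return np.concatenate(xs or [[]]), np.concatenate(ys or [[]])

rho = np.zeros((10, 10))
rho[5, 5] = 50.0                                # a single dense spot
px, py = density_to_particles(rho, dx=1.0, dy=1.0)
print(len(px), "particles placed")              # all land inside cell (5, 5)
```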


**Figure 11.** Visualization of immune cells (blue dots) and tumor cells (red squares) for times t = 0, t = 5, and t = 50 using the density of each quantity and representing them as cells.

#### **5. Conclusions and Future Perspectives**

The principal feature of the present work has been the development of a simulation tool to describe cell movements and interactions inside a microfluidic chip environment. Our study focused on both the modelling and the numerical points of view. Indeed, schematizing the chip geometry as two 2D boxes connected by a network of 1D channels, the main issues were:


Furthermore, from the modelling point of view, we studied the dynamics in the channels in the case of a doubly parabolic model and of a hyperbolic-parabolic model. Since we obtained comparable asymptotic states, we decided to apply the hyperbolic-parabolic model in order to obtain a finite speed of propagation in the channels, which seems more realistic. In this framework, bearing in mind the laboratory experiments on a chip described in Section 2.1, it was possible to simulate the chip environment with two species of living cells moving in it. Moreover, we remark that we can simulate more complicated situations where more than two cell species are present.

As a further development of the present study, we will work on the calibration of the model against experimental data obtained from cell tracking in a microfluidic chip [1,2,25].

**Author Contributions:** Methodology, G.B. and R.N.; Software, E.C.B.; Supervision, G.B. and R.N.; Validation, E.C.B. and G.B.; Visualization, E.C.B.; Data curation, E.C.B. and G.B.; Conceptualization, R.N.; Investigation, E.C.B. and G.B. Writing, E.C.B. and G.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** All data are contained within the article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **On-Off Intermittency in a Three-Species Food Chain**

**Gabriele Vissio 1,\* and Antonello Provenzale <sup>2</sup>**


**Abstract:** The environment affects population dynamics through multiple drivers. Here we explore a simplified version of such influence in a three-species food chain, making use of the Hastings– Powell model. This represents an idealized resource–consumer–predator chain, or equivalently, a vegetation–host–parasitoid system. By stochastically perturbing the value of some parameters in this dynamical system, we observe dramatic modifications in the system behavior. In particular, we show the emergence of on–off intermittency, i.e., an irregular alternation between stable phases and sudden bursts in population size, which hints towards a possible conceptual description of population outbursts grounded into an environment-driven mechanism.

**Keywords:** on–off intermittency; dynamical systems; theoretical ecology; stochastic forcing; Hastings–Powell model; food chain

#### **1. Introduction**

When Batchelor and Townsend [1] observed a peculiar irregularity in a turbulent fluid, namely the alternation between sudden bursts of motion and a milder, non-turbulent activity, they used the word *intermittency* to describe it. Since then, the same term has been used to describe several types of switching behavior between different dynamical regimes. Here, we are especially interested in the phenomenon called *on–off intermittency* [2].

On-off intermittency has been observed in real systems, such as electronic circuits [3], earthquakes [4], solar cycles [5] and the electrodynamics of liquid crystals [6], and it has been studied theoretically through numerical approaches [2,7], with specific focus on discrete-time population dynamics models (i.e., maps) [8–10].

The goal of this work is to expand these studies to the case of a stochastically driven system of coupled ordinary differential equations (ODEs). To this end, we include the random variability of suitable model parameters to simulate environmental stochasticity in a system representing the population dynamics of three different species. In the autonomous case, a three-dimensional ODE system is the minimum requirement to allow chaotic dynamics owing to the Poincaré–Bendixson theorem [11].

Our choice here is the well-known Hastings–Powell model [12,13], a system that describes the evolution of three species, anonymously called *x*, *y*, *z*, which represent primary producers (resource), consumers (or host) and predators (or parasitoids). We stochastically perturb some of the system parameters, which measure species interactions or carrying capacity, showing that on–off intermittent behaviour can emerge. This feature could qualitatively explain the onset of outbreaks (also called irruptions) in the population size of some of the ecosystem components [14,15].

Section 2 describes the properties of on–off intermittency, summarizing results from previous studies that inspired this work. Section 3 introduces the Hastings–Powell model, describes its parameters and explains the numerical approach that is adopted. In Section 4, we illustrate the occurrence of on–off intermittency when one introduces environmentally driven—and stochastically simulated—parameters. Finally, Section 5 summarizes our results and outlines some possible future developments.

**Citation:** Vissio, G.; Provenzale, A. On-Off Intermittency in a Three-Species Food Chain. *Mathematics* **2021**, *9*, 1641. https://doi.org/ 10.3390/math9141641

Academic Editor: Sergei Petrovskii

Received: 24 May 2021 Accepted: 7 July 2021 Published: 12 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **2. On-Off Intermittency**

On-off intermittency is characterized by the alternation between *regular phases*, whose duration can span a rather wide range of orders of magnitude, and *burst phases*, where a sudden instability throws the system into (possibly) chaotic behavior. This kind of intermittency can appear in a dynamical system that has an invariant manifold (in the simplest case, a fixed point) whose stability properties depend on an external control parameter but whose phase-space position is only weakly dependent upon the same parameter.

When such control parameter has an irregular temporal variation, either stochastic or chaotic, the manifold alternates between stable and unstable conditions. In order to realize on–off intermittency, a system must keep its dynamics in the proximity of the manifold, which in the stable phases must be attractive enough to allow for long periods during which the system resides in the vicinity of the manifold. Lingering near this temporarily stable manifold, the system undergoes protracted regular phases, when suddenly the volatility of the control parameter induces the instability of the manifold and causes the system to burst away from it, leading to values which are quite different from its typical statistics.

In past years, on–off intermittency has raised some interest in the scientific community. After its basics were first explored in the work of Platt et al. [2], Heagy et al. [7] gave a mathematically sound demonstration of the power law underlying the duration of laminar phases for maps of the specific form *y*<sub>*n*+1</sub> = *z<sub>n</sub> f*(*y<sub>n</sub>*) (with the variable *z<sub>n</sub>* coming from a random or a chaotic process); Toniolo et al. [8] then further deepened this latter aspect, inspecting the occurrence of on–off intermittency in a stochastically driven logistic map. Due to the possibility of adopting this concept to qualitatively explain ecological outbreaks, in 2010 Metta et al. [9] and Moon [10] investigated Toniolo's framework in the context of coupled logistic equations. While the former focused on kurtosis as an index to identify on–off intermittency, the latter put the spotlight on the stability of the coupled system, employing the largest Lyapunov exponent to quantify the chaotic dynamics occurring with different coupling strengths of adjacent logistic systems. Here, we continue the exploration of on–off intermittency in the context of ecosystem dynamics and study its presence and characteristics in a system of coupled ordinary differential equations representing a three-layer food chain.

#### **3. Hastings–Powell Model**

Alan Hastings and Thomas Powell introduced a three-dimensional dynamical system [12] in order to illustrate chaotic behavior in a food web involving three trophic levels. They employed the type 2 functional response (i.e., a Michaelis–Menten functional form) presented in the 1975 paper by Murdoch and Oaten [16] to couple the different trophic levels of the system. The basic equations of the Hastings–Powell model are:

$$\frac{dX}{dT} = R\_0 X \left(1 - \frac{X}{K\_0}\right) - C\_1 \frac{A\_1 X}{B\_1 + X} Y \tag{1}$$

$$\frac{dY}{dT} = \frac{A\_1 X}{B\_1 + X} Y - \frac{A\_2 Y}{B\_2 + Y} Z - D\_1 Y \tag{2}$$

$$\frac{dZ}{dT} = C\_2 \frac{A\_2 Y}{B\_2 + Y} Z - D\_2 Z \tag{3}$$

where *X*, *Y*, *Z* represent the biomass of three species on three different trophic levels and *T* is time. Throughout the three equations, subscripts 0, 1, 2 indicate parameters referring to, respectively, *X*, *Y*, *Z*. *R*<sub>0</sub> and *K*<sub>0</sub> are, respectively, the growth rate and the carrying capacity of the species *X*. The constants *A*<sub>1</sub>, *A*<sub>2</sub>, *B*<sub>1</sub>, *B*<sub>2</sub> characterize the functional responses among the species, representing the saturation of the response; specifically, the *B*s are the prey populations that correspond to half the maximum value of the predation rate per unit prey. *C*<sub>1</sub><sup>−1</sup> and *C*<sub>2</sub> are the conversion rates from resource to consumer and from prey to predator, respectively, while, finally, *D*<sub>1</sub>, *D*<sub>2</sub> are constant death rates.

A suitable nondimensionalization leads to redefine the variables of the system:

$$\begin{aligned} x &= \frac{X}{K\_0} \\ y &= \frac{C\_1 Y}{K\_0} \\ z &= \frac{C\_1 Z}{C\_2 K\_0} \\ t &= R\_0 T \end{aligned} \tag{4}$$

Consequently, the nondimensional parameters are:

$$\begin{aligned} a\_1 &= \frac{K\_0 A\_1}{R\_0 B\_1} & b\_1 &= \frac{K\_0}{B\_1} & d\_1 &= \frac{D\_1}{R\_0} \\ a\_2 &= \frac{C\_2 A\_2 K\_0}{C\_1 R\_0 B\_2} & b\_2 &= \frac{K\_0}{C\_1 B\_2} & d\_2 &= \frac{D\_2}{R\_0} \end{aligned} \tag{5}$$

Thus, the final equations of Hastings–Powell model are:

$$\frac{dx}{dt} = x(1-x) - \frac{a\_1 x}{b\_1 x + 1}y \tag{6}$$

$$\frac{dy}{dt} = \frac{a\_1 x}{b\_1 x + 1}y - \frac{a\_2 y}{b\_2 y + 1}z - d\_1 y \tag{7}$$

$$\frac{dz}{dt} = \frac{a\_2 y}{b\_2 y + 1} z - d\_2 z \tag{8}$$

Hastings and Powell chose the model parameters to be, in their words, "biologically reasonable". For example, the parameter values associated with the consumer (*y*) are larger than those for the predator/parasitoid (*z*), so that *x* and *y* interact on a faster time scale with respect to *y* and *z*. We defer to the original work of Hastings and Powell for further discussions on parameter values.
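For reference, the nondimensional system (6)–(8) can be coded compactly. The sketch below uses as defaults the parameter values later listed in Equation (10), with *a*<sub>1</sub> = 2; the function name is ours.

```python
import numpy as np

# Right-hand side of the nondimensional Hastings-Powell system (6)-(8).
def hastings_powell(state, a1=2.0, b1=3.0, a2=0.1, b2=2.0, d1=0.4, d2=0.01):
    x, y, z = state
    f1 = a1 * x / (b1 * x + 1.0)        # type-2 response, x -> y coupling
    f2 = a2 * y / (b2 * y + 1.0)        # type-2 response, y -> z coupling
    return np.array([x * (1.0 - x) - f1 * y,
                     f1 * y - f2 * z - d1 * y,
                     f2 * z - d2 * z])

# At (1, 0, 0) all rates vanish: this is the fixed point around which the
# laminar phases of the intermittent dynamics are observed.
print(hastings_powell(np.array([1.0, 0.0, 0.0])))
```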

Note that Equation (8) is conceptually different from Equations (6) and (7): indeed, it is possible to factorize *z* on the right-hand side, leading to $\frac{dz}{dt} = \left(\frac{a\_2 y}{b\_2 y + 1} - d\_2\right) z$. A separation of variables allows one to retrieve the exact solution $z(t) = z(0) \exp\left(\int\_0^t \left(\frac{a\_2 y(s)}{b\_2 y(s) + 1} - d\_2\right) ds\right)$, which could replace the differential equation in the numerical simulation. This peculiarity makes Equation (8) quite different from the other two equations and, therefore, we expect it to react differently to stochastic forcing.

In Appendix A we provide a concise analysis of the fixed points in the Hastings– Powell model.

#### *Stochastic Parameters and Numerical Simulations*

To simulate how the environment affects the evolution of the three-species food chain in the Hastings–Powell model, we allow some of the model parameters to become random numbers. In particular, we allow either *a*<sub>1</sub> or *K*<sub>0</sub> in Equations (5)–(8) to vary stochastically with a uniform distribution between 0 and *α*. The stochastic term is computed at every time step of the numerical simulation, and the same value is fed to all the stages of the Runge–Kutta 4 scheme employed. The random number at each time step is independent of the previous value, that is, we force the system with white noise.

The different cases are run for 10<sup>8</sup> time units, after a spin-up time of 10<sup>6</sup> time units to eliminate the initial transient. Initial conditions for *x*<sub>0</sub>, *y*<sub>0</sub> are randomly and uniformly chosen between 0 and 1, while the initial value *z*<sub>0</sub> is randomly and uniformly chosen between 4 and 5. This choice for *z*<sub>0</sub> is related to the convenience of starting as close as possible to the system attractor, thus reducing the spin-up time. Laminar phases are defined as *x* > 1 − 0.001 or *y*, *z* < 0.001—i.e., a distance of 10<sup>−3</sup> from the stable fixed point.
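The procedure described above can be sketched as follows: one uniform draw per time step, held fixed across all four Runge–Kutta stages, plus the laminar-phase criterion. The step size and integration length below are illustrative, not the values of the production runs.

```python
import numpy as np

rng = np.random.default_rng(1)

def rhs(s, a1, b1=3.0, a2=0.1, b2=2.0, d1=0.4, d2=0.01):
    # nondimensional Hastings-Powell right-hand side, Equations (6)-(8)
    x, y, z = s
    f1 = a1 * x / (b1 * x + 1.0)
    f2 = a2 * y / (b2 * y + 1.0)
    return np.array([x * (1 - x) - f1 * y, f1 * y - f2 * z - d1 * y, f2 * z - d2 * z])

def rk4_step(s, dt, alpha=3.5):
    a1 = rng.uniform(0.0, alpha)        # one draw per step (white noise) ...
    k1 = rhs(s, a1)                     # ... reused by all four RK4 stages
    k2 = rhs(s + 0.5 * dt * k1, a1)
    k3 = rhs(s + 0.5 * dt * k2, a1)
    k4 = rhs(s + dt * k3, a1)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def is_laminar(s, eps=1e-3):
    # within eps of the fixed point (1, 0, 0), as defined in the text
    x, y, z = s
    return x > 1 - eps or y < eps or z < eps

s = np.array([0.8, 0.2, 4.5])           # initial condition in the stated ranges
for _ in range(1000):                   # short illustrative integration
    s = rk4_step(s, dt=0.01)
```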

#### **4. Results**

#### *4.1. Intensity of Grazing*

The parameter *a*<sub>1</sub> measures the intensity of grazing by the consumer (*y*) on the resource (*x*) in the coupling term between the equations for *x* and *y* in Equations (6) and (7). As a first test, we replace *a*<sub>1</sub> with the random number *ã*<sub>1</sub>, uniformly distributed between 0 and *α*. In this way, the time-averaged value of *ã*<sub>1</sub> becomes *ā*<sub>1</sub> = *α*/2. Therefore, the coupling between *x* and *y* becomes:

$$\frac{\tilde{a}\_1 x}{b\_1 x + 1} y \tag{9}$$

Here we use *α* = 3.5, which gives *ā*<sub>1</sub> = 1.75. For the other parameters we adopt the same values as in the original paper of Hastings and Powell, namely:

$$b\_1 = 3 \qquad \quad d\_1 = 0.4 \qquad \quad a\_2 = 0.1 \qquad \quad b\_2 = 2 \qquad \quad d\_2 = 0.01 \tag{10}$$

Figure 1 shows the time series of the three trophic levels (*x*, *y* and *z*) and a running mean of the instantaneous value of *ã*<sub>1</sub>. The time series of the resource *x* and of the herbivore *y* visually illustrate the occurrence of on–off intermittency, with the alternation of long laminar phases and irregularly spaced bursts. The laminar phases of *x* are centered on *x* = 1, corresponding to a fixed point of the system, and the bursts are towards lower values when the herbivore density suddenly increases. The *z* signal, instead, corresponds to a smoothed version of the intermittent signals and is slightly delayed with respect to the herbivore dynamics, as expected from the form of the equations.

**Figure 1.** Case *α* = 3.5 for stochastic *a*<sub>1</sub>. Time series of *x* (Panel **a**), *y* (Panel **b**), *z* (Panel **c**) and of the running mean of *ã*<sub>1</sub> computed on a window with width *τ* = 200—the dotted line is the mean value of *ã*<sub>1</sub> (Panel **d**).

As mentioned above, the simplest case of on–off intermittency appears when the stability of a fixed point of the system depends on an external parameter that varies irregularly in time, thus determining an alternation between stability and instability of the fixed point. To motivate our choice of *α* = 3.5, in Figure 2 we show the orbit diagram for *y* in the range 0 ≤ *a*<sub>1</sub> ≤ 5 (the orbit diagram for *x* is conceptually similar). In order to find on–off intermittency, we need to span a parameter range covering the interval between stability and chaos. The chosen value of *α* suits this well, forcing the instantaneous value of *a*<sub>1</sub> to vary between 0 and 3.5. From Figure 1, one sees an approximate correspondence between intermittent bursts and periods when the running average of *ã*<sub>1</sub> exceeds its average value which, in this case, approximately corresponds to the stability limit of the fixed point.

**Figure 2.** Orbit diagram depicting the attractors of *y* as a function of *a*1. Other parameter values as in the original Hastings–Powell model.

The first distinctive feature of on–off intermittent time series is the shape of the probability distribution of the off-phase durations—i.e., the number of time steps during which the system remains in the off (laminar) behaviour. It has been shown for a simplified type of discrete maps [7,8] that, for on–off intermittency, the distribution of the laminar phase durations *D* follows a power law, *D*<sup>−3/2</sup>. Figure 3 shows that, also for this continuous on–off intermittent system, the *x* and *y* signals display the same approximate power-law distribution of off phases, at least in a limited range of off-phase durations.
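Extracting the off-phase duration statistics from a simulated time series reduces to run-length encoding of a "near the fixed point" mask; a small sketch (the helper name and tolerance are our choices):

```python
import numpy as np

def laminar_durations(signal, eps=1e-3, fixed_point=1.0):
    """Lengths (in steps) of consecutive runs within eps of the fixed point."""
    off = np.abs(signal - fixed_point) < eps
    # run-length encode the boolean mask: pad with zeros and locate the edges
    padded = np.concatenate(([0], off.astype(np.int8), [0]))
    edges = np.flatnonzero(np.diff(padded))
    starts, ends = edges[::2], edges[1::2]
    return ends - starts

# toy signal: two bursts interrupting three laminar stretches
sig = np.ones(20)
sig[5:8] = 0.5
sig[15] = 0.3
print(laminar_durations(sig))    # -> [5 7 4]
```

A histogram of these durations over a long run is what is compared against the *D*<sup>−3/2</sup> line in Figure 3.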

Another characteristic of the intermittent signals is the broad distribution of the amplitudes. Figure 4 shows the distribution of maxima for on–off intermittency and for standard chaotic behavior with a fixed value *a*<sub>1</sub> = 5. An approximate power-law distribution of the maxima is evident for the intermittent dynamics.

**Figure 3.** Duration of the off (laminar) phases of the *x* (dashed) and *y* (solid) components with a stochastic *a*<sub>1</sub> parameter in the Hastings–Powell model. The dotted line indicates a dependence proportional to *D*<sup>−3/2</sup>.

**Figure 4.** Probability distribution of the maxima of 1 − *x* (Panel **a**), *y* (Panel **b**), *z* (Panel **c**), in case of non-intermittent dynamics (solid line) and for on–off intermittent behavior (dotted line).

Conceptually, inserting the stochastic term as done in Equation (9) is tantamount to randomly forcing *A*<sup>1</sup> in Equations (1) and (2). Thus, the results presented in this section indicate that the environmental fluctuations (represented by the stochastic term), randomly influencing the rate of successful consumption by *y* of the resource *x*, can cause on–off intermittency in both compartments.

#### *4.2. Carrying Capacity K<sub>0</sub>*

Another interesting option is to allow the environment to stochastically affect the system carrying capacity, *K*<sub>0</sub>. Even though *a*<sub>1</sub> and *K*<sub>0</sub> are mathematically related, their ecological meaning is different: the former is related to the interaction between *x* and *y*, and the latter only to the maximum value of *x* in the absence of consumers. Therefore, they deserve separate analyses from an ecological standpoint. Looking at Equation (5), we infer that to this end we must multiply *a*<sub>1</sub>, *b*<sub>1</sub>, *a*<sub>2</sub> and *b*<sub>2</sub> in Equations (6)–(8) by the same random number *ρ*, uniformly distributed in the interval 0 ≤ *ρ* ≤ *α*. The parameters chosen for this section are the same as those described in Equation (10), with *a*<sub>1</sub> = 2. Following the rationale that yielded Equation (9), the couplings in Equations (6)–(8) become:

$$\frac{\tilde{a}\_1 x}{\tilde{b}\_1 x + 1} y \tag{11}$$

$$\frac{\tilde{a}\_2 y}{\tilde{b}\_2 y + 1} z \tag{12}$$
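In practice, the stochastic carrying capacity can be realized by drawing one *ρ* per time step and scaling the four *K*<sub>0</sub>-dependent parameters together, as sketched below (the dictionary layout and function name are our choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Since a1, b1, a2 and b2 in (5) are all proportional to K0, a stochastic K0
# is realized by scaling these four parameters by the same draw rho ~ U(0, alpha)
# at every time step; the death rates d1, d2 do not depend on K0.
BASE = dict(a1=2.0, b1=3.0, a2=0.1, b2=2.0)     # a1 = 2 and Equation (10)

def randomized_params(alpha=1.4):
    rho = rng.uniform(0.0, alpha)               # one draw per time step
    p = {k: rho * v for k, v in BASE.items()}   # scale all K0-dependent terms
    p.update(d1=0.4, d2=0.01)                   # death rates stay fixed
    return p
```

Because all four parameters share the same factor *ρ*, their mutual ratios stay fixed, which is exactly what distinguishes this forcing from the independent perturbation of *a*<sub>1</sub> in Section 4.1.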

Choosing different values for *α* leads to different and peculiar behaviours.

From *α* = 0 to *α* ≈ 1.3, the system exhibits long-lasting stability at *x* = 1, *y* = 0, *z* = 0. For *α* = 1.4, we observe the occurrence of on–off intermittency for *x* and *y*, while after the transient the *z* species becomes extinct, that is, the predator (parasitoid) cannot control the consumer (host).

Figure 5 shows the time series of *x* and *y*, along with the probability distributions of the maxima and the moving average of the value of the random number *ρ* controlling *K*0. As in the case of stochastic variability in *a*1, the intermittency of the time series is matched by the fluctuations of the running mean of *K*0, with low values of the latter corresponding to laminar phases of the time series.

Figure 6 shows the probability distribution of the laminar phase durations, which matches a power law *D*<sup>−3/2</sup>.

**Figure 5.** Stochastic *K*<sup>0</sup> with *α* = 1.4. Time series of *x* (Panel **a**), *y* (Panel **b**) and of the running mean of the random variable *ρ* controlling *K*0, computed on a window with width *τ* = 2000—the dotted line is the mean value of *ρ* (Panel **d**). The probability distributions of the maxima of 1 − *x* and *y* are shown in (Panel **c**).

**Figure 6.** Laminar phase duration of the *x* (dashed) and *y* (solid) variables for stochastic variability of the carrying capacity *K*<sub>0</sub> in the case *α* = 1.4. The dotted line is proportional to *D*<sup>−3/2</sup>.

For *α* = 1.5, *x* and *y* show chaotic dynamics, but the most intriguing phenomenon is related to the apparent on–off intermittency of *z*, as shown in Figure 7. A close inspection of the laminar phase durations, however, shows that extended laminar periods are quite likely to occur, thus the curve is less steep than *D*<sup>−3/2</sup>.

**Figure 7.** (**Left** panel) Time series of *z* in the case *α* = 1.5; (**Right** panel) Laminar phase durations of the predator/parasitoid *z* (solid) for stochastic *K*<sub>0</sub> with *α* = 1.5. The dashed line is proportional to *D*<sup>−3/2</sup>.

Finally, we report that at larger values of *α*—here we employ *α* = 2.1—non-intermittent, chaotic dynamics for *z* is paired with approximately on–off intermittent behavior for *x* and *y*. Figure 8 shows a time series of *y* along with the laminar phase durations, which approximately follow the simple power law (even though less robustly than in Figure 6).

The implications of these results are intriguing. If the environment induces a random variability of the carrying capacity *K*<sub>0</sub>, allowing it to temporarily reach large enough values, on–off intermittency can emerge quite easily in both *x* and *y*—that is, in the primary producers and their consumers. By further increasing *α*, that is, the amplitude of the random variability of *K*<sub>0</sub>, a peculiar intermittency in the predator (parasitoid) *z* develops. This implies that sudden bursts in a population could be induced far from the trophic level that is directly affected by the environmental fluctuations. For even larger values of the fluctuations in *K*<sub>0</sub> (*α* = 2.1), the resource and the consumer again undergo approximate on–off behavior, while the predator behaves chaotically. Clearly, the complexity of the system behavior is huge, and a deeper exploration of the different dynamics and of their ecological implications is deferred to future works.

**Figure 8.** (**Left** panel) Time series of *y* in the case *α* = 2.1; (**Right** panel) Laminar phase durations of the *x* (dashed) and *y* (solid) variables for stochastic *K*<sub>0</sub> with *α* = 2.1. The dotted line is proportional to *D*<sup>−3/2</sup>.

#### **5. Discussion and Conclusions**

This paper conceptually extends the works of Platt et al. [2], Heagy et al. [7], Toniolo et al. [8] and Metta et al. [9] and it focuses on the emergence of on–off intermittency in idealized food chains. In our view, such dynamical behavior can be taken as a conceptual description of species outbreak events in different levels of the food chain.

To explore this issue, we used the Hastings–Powell model, a well-known system that allowed us to inspect a simple three-species food chain: resource, consumer and predator (or else, vegetation, pest host species and parasitoid). Environmental forcing was supposed to act on the resource dynamics, and it was represented as an imposed random variation in some of the controlling parameters.
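The deterministic backbone of such experiments can be integrated directly; a minimal sketch assuming the canonical nondimensional Hastings–Powell form with its widely used parameter values (the paper's exact configuration, including the stochastic forcing of the resource parameters, is not reproduced here):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Canonical nondimensional Hastings-Powell food chain (assumed standard form):
#   x' = x(1 - x) - f1(x) y
#   y' = f1(x) y - f2(y) z - d1 y
#   z' = f2(y) z - d2 z,        fi(u) = ai u / (1 + bi u)
a1, b1, a2, b2, d1, d2 = 5.0, 3.0, 0.1, 2.0, 0.4, 0.01

def hastings_powell(t, v):
    x, y, z = v
    f1 = a1 * x / (1.0 + b1 * x)
    f2 = a2 * y / (1.0 + b2 * y)
    return [x * (1.0 - x) - f1 * y,
            f1 * y - f2 * z - d1 * y,
            f2 * z - d2 * z]

sol = solve_ivp(hastings_powell, (0.0, 1000.0), [0.8, 0.2, 8.0],
                rtol=1e-8, atol=1e-10)
print(sol.y.shape[0])  # 3 state variables (resource, consumer, predator)
```

The stochastic variability of Sections 4.1 and 4.2 would then enter through the parameters of the consumption term or of the logistic growth at each time step.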

When the stochastic variability is inserted into the parameter controlling the intensity of resource consumption (i.e., the action of *y* on *x*, Section 4.1), on–off intermittency easily arises in both these variables, while the predator *z* displays smoother dynamics.

Stochastic variability of the carrying capacity (Section 4.2) leads to intriguing results; indeed, increasing the range of random variations to higher values sequentially generates different behaviours:


Ecologically, this suggests that a low carrying capacity for *x* implies that the species directly feeding on it (*y*) can endure, but on average it does not supply enough biomass for *z* to survive. A slightly larger value of *K*0 allows *z* to "jump start", while a higher average value of the carrying capacity is enough to fully support the predator species, bringing back on–off intermittency in the dynamics of *x* and *y*. Of course, this is just a heuristic representation that requires further exploration.

Several points remain open to investigation, such as:

• Would a deterministic, chaotic system representing the environment dynamics, in place of the stochastic process adopted here, allow for a more thorough mathematical analysis of the problem?


Such questions are, in our opinion, relevant to better understand bursting phenomena in ecology and will be a subject of future research, after the first demonstration of the possibility of on–off intermittent behavior in model food chains that was illustrated here.

**Author Contributions:** Conceptualization, G.V. and A.P.; methodology, software and validation, G.V.; writing—original draft preparation, G.V.; writing—review and editing, G.V. and A.P.; funding acquisition, A.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was partially funded by the EU H2020 project "EOTIST", Grant Agreement no. 952111.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Code used in numerical simulation can be found at https://figshare.com/articles/software/VissioProvenzale2021/14639556 (accessed on 8 July 2021).

**Acknowledgments:** The authors acknowledge the three reviewers for their insightful comments. In particular, the authors are grateful to Reviewer 1, whose observations led to the material included in Appendix A.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Fixed Points in Hastings–Powell Model**

A thorough analysis of the fixed points of the models is beyond the scope of this paper. Nevertheless, it is useful to briefly recap them, in order to give some perspective on the results obtained, especially on the population values typically attained during the laminar phases. Checking the fixed points in the system leads to four different results:


#### **References**


## *Article* **Inverse Problem for the Sobolev Type Equation of Higher Order**

**Alyona Zamyshlyaeva \* and Aleksandr Lut**

Department of Applied Mathematics and Programming, South Ural State University, 454080 Chelyabinsk, Russia; lutav@susu.ru

**\*** Correspondence: zamyshliaevaaa@susu.ru

**Abstract:** The article investigates the inverse problem for a complete, inhomogeneous, higher-order Sobolev type equation, together with the Cauchy and overdetermination conditions. This problem was reduced to two equivalent problems in the aggregate: regular and singular. For these problems, the theory of polynomially bounded operator pencils is used. The unknown coefficient of the original equation is restored using the method of successive approximations. The main result of this work is a theorem on the unique solvability of the original problem. This study continues and generalizes the authors' previous research in this area. All the obtained results can be applied to the mathematical modeling of various processes and phenomena that fit the problem under study.

**Keywords:** Sobolev type equation; inverse problem; high-order equation; method of successive approximations; polynomial boundedness of operator pencils

#### **1. Introduction**

Let $\mathcal{U}$, $\mathcal{F}$, $\mathcal{Y}$ be Banach spaces, let $A, B_0, B_1, \ldots, B_{n-1} \in \mathcal{L}(\mathcal{U};\mathcal{F})$, i.e., linear and continuous operators defined on $\mathcal{U}$ and acting to $\mathcal{F}$, with $\ker A \neq \{0\}$, let $C \in \mathcal{L}(\mathcal{U};\mathcal{Y})$, and let $\chi : [0,T] \to \mathcal{L}(\mathcal{Y};\mathcal{F})$, $f : [0,T] \to \mathcal{F}$, $\Psi : [0,T] \to \mathcal{Y}$ be given functions. Consider the following problem with $t \in [0,T]$

$$Av^{(n)}(t) = B\_{n-1}v^{(n-1)}(t) + \dots + B\_1v'(t) + B\_0v(t) + q(t)\chi(t) + f(t),\tag{1}$$

$$v(0) = v\_0, \ v'(0) = v\_{1\prime} \dots, v^{(n-1)}(0) = v\_{n-1\prime} \tag{2}$$

$$\mathcal{C}v(t) = \Psi(t). \tag{3}$$

The problem of finding a pair of functions $v(t) \in C^n([0,T];\mathcal{U})$ ($n$ times continuously differentiable) and $q(t) \in C^1([0,T];\mathcal{Y})$ (continuously differentiable) from relations (1)–(3) is called an inverse problem. At present, the authors have obtained such a result when studying the inverse problem, but only in the case of the second-order Sobolev type equation [1].

The degeneracy of the operator $A$ allows us to classify Equation (1) as a Sobolev type equation. Additionally, one can see that this equation is complete, since all the components $v(t), v'(t), \ldots, v^{(n)}(t)$ are present. In addition, the Cauchy condition (2) is posed. The overdetermination condition (3) arises due to the need to restore the parameter $q(t)$ of the equation.

The study of Sobolev type equations has been carried out repeatedly [1–12]. There are articles devoted to equations of the first [2–4], second [1,5,6], third [7], and higher [8–10] order. In [2], sufficient conditions for the existence of positive solutions to the Showalter–Sidorov and the Cauchy problem for an abstract linear equation of this type were presented. Linear representatives of Sobolev type equations, such as the Barenblatt–Zheltov–Kochina equation and the Hoff equation, are studied in [3]. The paper [7] contains a condition for the existence of a weak, local-in-time solution to the Cauchy problem for a model Sobolev type equation. In the study of the direct problem for a higher-order Sobolev type equation, the phase space method was used [10]. Papers [11,12] are among the first

**Citation:** Zamyshlyaeva, A.; Lut, A. Inverse Problem for the Sobolev Type Equation of Higher Order. *Mathematics* **2021**, *9*, 1647. https://doi.org/10.3390/ math9141647

Academic Editor: Fasma Diele

Received: 8 June 2021 Accepted: 9 July 2021 Published: 13 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

investigations of Sobolev type equations, and the recent works devoted to applications of Sobolev type equations to real-life models are as follows: [13,14].

The works [1,5,15–24] were devoted to the consideration of inverse problems. In [15], the process of unsteady flow of a viscous incompressible fluid in a pipe with a permeable wall was considered. The dependence on the choice of the boundary of the rectangular region and the unique solvability of the inverse problem were investigated in [16]. The uniqueness criterion for the Lavrent'ev–Bitsadze equation was established in [17]. The correctness, in Sobolev spaces, of the problem of determining the source function in the heat and mass transfer Navier–Stokes system was proved in [18]. In [19], the problem considered was that of finding the region where the vector of boundary displacements and forces is given in parametric form. In [20], the inverse boundary value problem for the heat equation was studied, and the error of the obtained approximate solution was estimated.

The article consists of four sections. The second section collects the necessary, previously obtained results of the theory of polynomially $A$-bounded operator pencils, formulated in the form of definitions, theorems and lemmas. The section "Results" has three subsections. The first one presents the result of applying the splitting theorem; thus, the original problem is divided into two problems, equivalent to it in the aggregate: regular and singular. In the second subsection, we study the unique solvability of the regular problem by reducing it to an equivalent problem of the first order and achieving the necessary smoothness of the required function $q$ using the method of successive approximations. The third subsection generalizes the result of the study of the singular problem obtained earlier in [9], thus yielding the theorem on the existence and uniqueness of the solution to problem (1)–(3). In the last section, the significance of the obtained results is discussed, both for the development of the theory and for practical applications.

#### **2. Preliminary Information**

To find a pair of functions $v(t)$ and $q(t)$, we use the results obtained in the research into higher-order Sobolev type equations [8]. Thus, we will apply the theory of polynomially $A$-bounded operator pencils. Denote by $\vec{\mathcal{B}}$ the pencil of operators $B_0, B_1, \ldots, B_{n-1}$.

**Definition 1.** *The sets*

$$\rho^A(\mathcal{B}) = \{ \mu \in \mathbb{C} : (\mu^n A - \mu^{n-1} B\_{n-1} - \dots - \mu B\_1 - B\_0)^{-1} \in \mathcal{L}(\mathcal{F}; \mathcal{U}) \}$$

*and* $\sigma^A(\vec{\mathcal{B}}) = \overline{\mathbb{C}} \setminus \rho^A(\vec{\mathcal{B}})$ *will be called the A-resolvent set and the A-spectrum of the pencil* $\vec{\mathcal{B}}$*, respectively.*

**Definition 2.** *The operator-function of a complex variable*

$$\mathcal{R}\_{\mu}^{A}(\mathcal{B}) = (\mu^{n}A - \mu^{n-1}B\_{n-1} - \dots - \mu B\_{1} - B\_{0})^{-1}$$

*with domain* $\rho^A(\vec{\mathcal{B}})$ *will be called the A-resolvent of the pencil* $\vec{\mathcal{B}}$*.*

**Definition 3.** *The pencil* $\vec{\mathcal{B}}$ *is called polynomially A-bounded if*

$$
\exists a \in \mathbb{R}\_{+} \,\,\forall \mu \in \mathbb{C} \,\,\left( |\mu| > a \right) \Rightarrow \left( R^{A}\_{\mu} (\vec{\mathcal{B}}) \in \mathcal{L} (\mathcal{F}; \mathcal{U}) \right).
$$

Let the pencil $\vec{\mathcal{B}}$ be polynomially $A$-bounded. Introduce an important condition

$$\int\_{\gamma} \mu^k R\_{\mu}^A(\vec{B}) d\mu \equiv \mathbb{O}, \; k = 0, 1, \dots, n - 2,\tag{4}$$

where $\gamma = \{\mu \in \mathbb{C} : |\mu| = r > a\}$.

**Lemma 1.** *Let the pencil* $\vec{\mathcal{B}}$ *be polynomially A-bounded and condition (4) be fulfilled. Then, the operators*

$$P = \frac{1}{2\pi i} \int\_{\gamma} R^A\_{\mu}(\vec{B}) \mu^{n-1} A d\mu \in \mathcal{L}(\mathcal{U}),$$

$$Q = \frac{1}{2\pi i} \int\_{\gamma} \mu^{n-1} A R^A\_{\mu}(\vec{B}) d\mu \in \mathcal{L}(\mathcal{F})$$

*are projectors.*

Put $\mathcal{U}^0 = \ker P$, $\mathcal{F}^0 = \ker Q$, $\mathcal{U}^1 = \operatorname{im} P$, $\mathcal{F}^1 = \operatorname{im} Q$. From the previous lemma, it follows that $\mathcal{U} = \mathcal{U}^0 \oplus \mathcal{U}^1$, $\mathcal{F} = \mathcal{F}^0 \oplus \mathcal{F}^1$. Let $A^k$ ($B_l^k$) denote the restriction of the operator $A$ ($B_l$) onto $\mathcal{U}^k$, $k = 0, 1$; $l = 0, 1, \ldots, n-1$.
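In finite dimensions, the projectors of Lemma 1 can be computed by discretizing the contour integral over $\gamma$; a minimal sketch for $n = 2$ with a degenerate $A$ (the matrices and contour radius below are hypothetical illustrations, not taken from the paper):

```python
import numpy as np

# Toy pencil (n = 2) with degenerate A: ker A = span{(0, 1)}.
A  = np.diag([1.0, 0.0])
B1 = np.zeros((2, 2))
B0 = np.diag([2.0, 1.0])

def projector_P(r=3.0, npts=4000):
    """P = (1/2pi i) contour integral of R_mu^A(B) mu^{n-1} A dmu over |mu| = r,
    approximated by the trapezoid rule on the circle."""
    th = np.linspace(0.0, 2 * np.pi, npts, endpoint=False)
    P = np.zeros((2, 2), dtype=complex)
    for mu in r * np.exp(1j * th):
        R = np.linalg.inv(mu**2 * A - mu * B1 - B0)  # A-resolvent of the pencil
        P += R @ (mu * A) * 1j * mu                  # dmu = i mu dtheta
    return P * (2 * np.pi / npts) / (2j * np.pi)

P = projector_P()
print(np.round(P.real, 6))    # projector onto U^1
print(np.allclose(P @ P, P))  # P is idempotent: True
```

Here the resolvent is diagonal, condition (4) holds, and the computed $P$ coincides with $\operatorname{diag}(1, 0)$, so $\mathcal{U}^1$ is the first coordinate axis.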

**Theorem 1.** *Let the pencil* $\vec{\mathcal{B}}$ *be polynomially A-bounded and condition (4) be fulfilled. Then, the actions of the operators split:*


**Definition 4.** *Define the family of operators* $\{K_q^1, K_q^2, \ldots, K_q^n\}$ *as follows*

$$K_1^1 = H_0, \quad K_1^2 = -H_1, \quad \ldots, \quad K_1^n = -H_{n-1},$$

$$K_{q+1}^1 = K_q^n H_0, \quad K_{q+1}^2 = K_q^1 - K_q^n H_1, \quad \ldots, \quad K_{q+1}^n = K_q^{n-1} - K_q^n H_{n-1}, \quad q = 1, 2, \ldots,$$

*where* $H_0 = (B_0^0)^{-1}A^0$, $H_1 = (B_0^0)^{-1}B_1^0$, $\ldots$, $H_{n-1} = (B_0^0)^{-1}B_{n-1}^0$.

**Definition 5.** *The point* ∞ *is called*


#### **3. Results**

#### *3.1. Reduction of the Initial Inverse Problem*

Let the pencil $\vec{\mathcal{B}}$ be polynomially $A$-bounded and condition (4) be fulfilled; then $v(t)$ can be represented as $v(t) = Pv(t) + (I-P)v(t) = u(t) + \omega(t)$. Suppose that $\mathcal{U}^0 \subset \ker C$. Then, by virtue of Theorem 1 and Lemma 1, problem (1)–(3) is equivalent to the problem of finding the functions $u \in C^n([0,T];\mathcal{U}^1)$, $\omega \in C^n([0,T];\mathcal{U}^0)$, $q \in C^1([0,T];\mathcal{Y})$ from the relations

$$u^{(n)}(t) = S_{n-1}u^{(n-1)}(t) + \ldots + S_1 u'(t) + S_0 u(t) + q(t)(A^1)^{-1}Q\chi(t) + (A^1)^{-1}Qf(t), \tag{5}$$

$$u(0) = u_0, \quad u'(0) = u_1, \quad \ldots, \quad u^{(n-1)}(0) = u_{n-1}, \tag{6}$$

$$Cu(t) = \Psi(t) \equiv Cv(t), \tag{7}$$

$$\begin{split}H\_0\omega^{(n)}(t) &= H\_{n-1}\omega^{(n-1)}(t) + \dots + H\_2\omega^{\prime\prime}(t) + H\_1\omega^{\prime}(t) + \omega(t) + \\ &+ q(t)(B\_0^0)^{-1}(I-Q)\chi(t) + (B\_0^0)^{-1}(I-Q)f(t),\end{split} \tag{8}$$

$$\omega(0) = \omega_0, \quad \omega'(0) = \omega_1, \quad \ldots, \quad \omega^{(n-1)}(0) = \omega_{n-1}, \tag{9}$$

where

$$S_0 = (A^1)^{-1}B_0^1, \quad S_1 = (A^1)^{-1}B_1^1, \quad \ldots, \quad S_{n-1} = (A^1)^{-1}B_{n-1}^1,$$

$$u_0 = Pv_0, \quad u_1 = Pv_1, \quad \ldots, \quad u_{n-1} = Pv_{n-1},$$

$$\omega_0 = (I-P)v_0, \quad \omega_1 = (I-P)v_1, \quad \ldots, \quad \omega_{n-1} = (I-P)v_{n-1}, \quad t \in [0,T].$$

The inverse problem (5)–(7) is called regular, and problem (8), (9) is called singular.

#### *3.2. Solution of the Regular Inverse Problem*

Rewrite problem (5)–(7) in the notation of [25]. Let $\mathcal{X} = \mathcal{U}^1$, operators $S_0, S_1, \ldots, S_{n-1} \in \mathrm{Cl}(\mathcal{X})$, $C \in \mathcal{L}(\mathcal{X};\mathcal{Y})$, operator-function $\Phi : [0,T] \to \mathcal{L}(\mathcal{Y};\mathcal{X})$, functions $h : [0,T] \to \mathcal{X}$, $\Psi : [0,T] \to \mathcal{Y}$:

$$u^{(n)}(t) = S\_{n-1}u^{(n-1)}(t) + \ldots + S\_1u'(t) + S\_0u(t) + q(t)\Phi(t) + h(t), \ t \in [0, T],\tag{10}$$

$$u(0) = u\_0, \ u'(0) = u\_1, \ \dots, \ u^{(n-1)}(0) = u\_{n-1}, \tag{11}$$

$$Cu(t) = \Psi(t). \tag{12}$$

**Theorem 2.** *Let the pencil* $\vec{\mathcal{B}}$ *be polynomially A-bounded and condition (4) be fulfilled; moreover, let* $C \in \mathcal{L}(\mathcal{X};\mathcal{Y})$, $\Phi \in C^1([0,T];\mathcal{L}(\mathcal{Y};\mathcal{X}))$, $h \in C^1([0,T];\mathcal{X})$, $\Psi \in C^{n+1}([0,T];\mathcal{Y})$*, for any* $t \in [0,T]$ *let the operator* $C\Phi(t)$ *be invertible, and* $(C\Phi)^{-1} \in C^1([0,T];\mathcal{L}(\mathcal{Y}))$*. If the compatibility condition* $Cu_{n-1} = \Psi^{(n-1)}(0)$ *is satisfied, then the solution to the inverse problem (10)–(12) exists and is unique in the class of functions* $q \in C^1([0,T];\mathcal{Y})$, $u \in C^n([0,T];\mathcal{X})$*.*

**Proof of Theorem 2.** Reduce problem (10)–(12) to the problem for the first-order equation

$$z'(t) = Az(t) + q(t)Q(t) + H(t), \; t \in [0, T], \tag{13}$$

$$z(0) = z\_{0\prime} \tag{14}$$

$$Bz(t) = \Psi(t),\tag{15}$$

where

$$z(t) = \begin{pmatrix} u(t) \\ u'(t) \\ \vdots \\ u^{(n-1)}(t) \end{pmatrix}, \quad A = \begin{pmatrix} \mathbb{O} & \mathbb{I} & \ldots & \mathbb{O} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbb{O} & \mathbb{O} & \ldots & \mathbb{I} \\ S_0 & S_1 & \ldots & S_{n-1} \end{pmatrix}, \quad Q(t) = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ \Phi(t) \end{pmatrix},$$

$$H(t) = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ h(t) \end{pmatrix}, \quad z_0 = \begin{pmatrix} u_0 \\ \vdots \\ u_{n-2} \\ u_{n-1} \end{pmatrix}, \quad B = \begin{pmatrix} C & \mathbb{O} & \ldots & \mathbb{O} \end{pmatrix}.$$
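The passage from (10) to (13) is the standard block companion construction; in finite dimensions it can be sketched as follows (the scalar blocks used in the check are a hypothetical illustration, not data from the paper):

```python
import numpy as np

def companion_blocks(S):
    """Block companion matrix A for u^(n) = S[n-1] u^(n-1) + ... + S[0] u,
    acting on the stacked vector z = (u, u', ..., u^(n-1))."""
    n, d = len(S), S[0].shape[0]
    A = np.zeros((n * d, n * d))
    for i in range(n - 1):                 # super-diagonal identity blocks
        A[i * d:(i + 1) * d, (i + 1) * d:(i + 2) * d] = np.eye(d)
    for j in range(n):                     # last block row: S_0, S_1, ..., S_{n-1}
        A[(n - 1) * d:, j * d:(j + 1) * d] = S[j]
    return A

# Scalar sanity check (d = 1, n = 2): u'' = -u gives z' = [[0, 1], [-1, 0]] z.
A = companion_blocks([np.array([[-1.0]]), np.array([[0.0]])])
print(A)
```

The same layout reproduces the operator matrix $A$ above, with $\mathbb{O}$ and $\mathbb{I}$ replaced by zero and identity blocks of the appropriate size.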

Put $R(t) = -(C\Phi(t))^{-1}$. Therefore, all the conditions of Theorem 6.2.3 from [25] are fulfilled, and the function $q(t)$ satisfies the integral equation

$$\begin{split} q(t) &= q_0(t) + R(t)\Big( CS_0 \int_0^t V_{1,n}(t-s)q(s)\Phi(s)ds + \\ &+ CS_1 \int_0^t V_{2,n}(t-s)q(s)\Phi(s)ds + \ldots + CS_{n-1} \int_0^t V_{n,n}(t-s)q(s)\Phi(s)ds \Big), \end{split} \tag{16}$$

where

$$\begin{split} q_0(t) = -R(t)\Big(&\Psi^{(n)}(t) - CS_0V_{1,1}(t)u_0 - CS_1V_{2,1}(t)u_0 - \ldots - CS_{n-1}V_{n,1}(t)u_0 - \\ &- CS_0V_{1,2}(t)u_1 - CS_1V_{2,2}(t)u_1 - \ldots - CS_{n-1}V_{n,2}(t)u_1 - \ldots - \\ &- CS_0V_{1,n}(t)u_{n-1} - CS_1V_{2,n}(t)u_{n-1} - \ldots - CS_{n-1}V_{n,n}(t)u_{n-1} - \\ &- CS_0\int_0^t V_{1,n}(t-s)h(s)ds - CS_1\int_0^t V_{2,n}(t-s)h(s)ds - \\ &- \ldots - CS_{n-1}\int_0^t V_{n,n}(t-s)h(s)ds - Ch(t)\Big). \end{split}$$

Thus, there exists a unique solution $q \in C^1([0,T];\mathcal{Y})$, $z \in C^1([0,T];\mathcal{X}^n)$ to the inverse problem (13)–(15). Hence, the solution to the regular inverse problem (10)–(12) exists and is unique, with $q \in C^1([0,T];\mathcal{Y})$, $u \in C^n([0,T];\mathcal{X})$.

In order to obtain a solution to the singular problem, we need greater smoothness of the function $q$ from the solution of the regular problem than the class $C^1([0,T];\mathcal{Y})$ provides. Next, we need the following lemma from [1].

**Lemma 2.** *Let* $l \in \mathbb{N}$, $V \in C^{l-1}([0,T];\mathcal{L}(\mathcal{X}))$, $g \in C^l([0,T];\mathcal{X})$*. Then*

$$\left(\int_0^t V(t-s)g(s)ds\right)^{(l)} = \sum_{k=0}^{l-1} V^{(l-k-1)}(t)g^{(k)}(0) + \int_0^t V(t-s)g^{(l)}(s)ds.$$
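The formula of Lemma 2 can be sanity-checked numerically for $l = 1$; a minimal sketch with hypothetical scalar data $V(t) = \cos t$, $g(t) = e^t$:

```python
import numpy as np

# For l = 1, Lemma 2 reduces to
#   d/dt int_0^t V(t-s) g(s) ds = V(t) g(0) + int_0^t V(t-s) g'(s) ds.
def conv(t, n=20000):
    """Trapezoid-rule approximation of int_0^t cos(t-s) e^s ds."""
    s = np.linspace(0.0, t, n + 1)
    v = np.cos(t - s) * np.exp(s)
    return (t / n) * (v[0] / 2 + v[1:-1].sum() + v[-1] / 2)

t, h = 1.3, 1e-5
lhs = (conv(t + h) - conv(t - h)) / (2 * h)  # centred finite difference
rhs = np.cos(t) * 1.0 + conv(t)              # here g' = g, so the same integral
print(abs(lhs - rhs) < 1e-6)  # True
```

The higher-order cases follow by iterating this $l = 1$ identity, which is how the boundary terms $V^{(l-k-1)}(t)g^{(k)}(0)$ accumulate.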

The following theorem provides sufficient conditions for the existence of a smoother (with $p \in \mathbb{N}$) solution $q \in C^{p+n}([0,T];\mathcal{Y})$ of the regular problem.

**Theorem 3.** *Let the pencil* $\vec{\mathcal{B}}$ *be polynomially A-bounded and condition (4) be fulfilled,* $p \in \mathbb{N}_0$*; moreover, let* $C \in \mathcal{L}(\mathcal{X};\mathcal{Y})$, $\Phi \in C^{p+n}([0,T];\mathcal{L}(\mathcal{Y};\mathcal{X}))$, $h \in C^{p+n}([0,T];\mathcal{X})$, $\Psi \in C^{p+2n}([0,T];\mathcal{Y})$*, for any* $t \in [0,T]$ *let the operator* $C\Phi(t)$ *be invertible, with* $(C\Phi)^{-1} \in C^{p+n}([0,T];\mathcal{L}(\mathcal{Y}))$*, and let the compatibility condition* $Cu_{n-1} = \Psi^{(n-1)}(0)$ *be satisfied for some* $u_{n-1} \in \mathcal{U}^1$*. Then there exists a unique solution of (10)–(12) with* $q \in C^{p+n}([0,T];\mathcal{Y})$*.*

**Proof of Theorem 3.** Write the propagators of the homogeneous Equation (10) in a matrix, denoting the resolving group of homogeneous Equation (13)

$$V(t) = \begin{pmatrix} V_{1,1}(t) & V_{1,2}(t) & \ldots & V_{1,n}(t) \\ V_{2,1}(t) & V_{2,2}(t) & \ldots & V_{2,n}(t) \\ \vdots & \vdots & \ddots & \vdots \\ V_{n,1}(t) & V_{n,2}(t) & \ldots & V_{n,n}(t) \end{pmatrix} = \frac{1}{2\pi i}\int_\gamma R_\mu^A(\vec{\mathcal{B}}) \times$$

$$\times \begin{pmatrix} \mu^{n-1}A - \mu^{n-2}B_{n-1} - \ldots - B_1 & \mu^{n-2}A - \mu^{n-3}B_{n-1} - \ldots - B_2 & \ldots & \mu A - B_{n-1} & \mathbb{I} \\ B_0 & \mu^{n-1}A - \mu^{n-2}B_{n-1} - \ldots - \mu B_2 & \ldots & \mu^2 A - \mu B_{n-1} & \mu\mathbb{I} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \mu^{n-3}B_0 & \mu^{n-3}B_1 + \mu^{n-4}B_0 & \ldots & \mu^{n-1}A - \mu^{n-2}B_{n-1} & \mu^{n-2}\mathbb{I} \\ \mu^{n-2}B_0 & \mu^{n-2}B_1 + \mu^{n-3}B_0 & \ldots & \mu^{n-2}B_{n-2} + \mu^{n-3}B_{n-3} + \ldots + B_0 & \mu^{n-1}\mathbb{I} \end{pmatrix} e^{\mu t} d\mu,$$

where $\mathbb{I}$ is the identity operator. Earlier, in the proof of Theorem 2, it was established that the function $q(t)$ satisfies the integral Equation (16). Take a natural number $l \leq p + n$. Assuming that $q \in C^l([0,T];\mathcal{Y})$, by Lemma 2 we obtain the equality

$$\begin{split} q^{(l)}(t) &= q\_0^{(l)}(t) + \sum\_{k=0}^{l-1} C\_l^k R^{(k)}(t) \mathbb{C}S\_0 \sum\_{m=0}^{l-k-1} V\_{1,n}^{(l-k-m-1)}(t) (q\Phi)^{(m)}(0) + \\ &+ \sum\_{k=0}^{l} \sum\_{m=0}^{l-k} C\_l^{k,m} R^{(k)}(t) \mathbb{C}S\_0 \int\_0^t V\_{1,n}(t-s) q^{(l-k-m)}(s) \Phi^{(m)}(s) ds + \\ &+ \sum\_{k=0}^{l-1} C\_l^k R^{(k)}(t) \mathbb{C}S\_1 \sum\_{m=0}^{l-k-1} V\_{2,n}^{(l-k-m-1)}(t) (q\Phi)^{(m)}(0) + \\ &+ \sum\_{k=0}^{l} \sum\_{m=0}^{l-k} C\_l^{k,m} R^{(k)}(t) \mathbb{C}S\_1 \int\_0^t V\_{2,n}(t-s) q^{(l-k-m)}(s) \Phi^{(m)}(s) ds + ...+ \\ &+ \sum\_{k=0}^{l-1} C\_l^k R^{(k)}(t) \mathbb{C}S\_{n-1} \sum\_{m=0}^{l-k-1} V\_{n,n}^{(l-k-m-1)}(t) (q\Phi)^{(m)}(0) + \\ &+ \sum\_{k=0}^{l} \sum\_{m=0}^{l-k} C\_l^{k,m} R^{(k)}(t) \mathbb{C}S\_{n-1} \int\_0^t V\_{n,n}(t-s) q^{(l-k-m)}(s) \Phi^{(m)}(s) ds, \end{split}$$

where $C_l^k = \frac{l!}{k!\,(l-k)!}$, $C_l^{k,m} = \frac{l!}{k!\,m!\,(l-k-m)!}$, and

$$\begin{split} q_0^{(l)}(t) = -\sum_{k=0}^{l} C_l^k R^{(k)}(t)\Big(&\Psi^{(l-k+n)}(t) - \\ &- CS_0V_{1,1}^{(l-k)}(t)u_0 - CS_1V_{2,1}^{(l-k)}(t)u_0 - \ldots - CS_{n-1}V_{n,1}^{(l-k)}(t)u_0 - \\ &- CS_0V_{1,2}^{(l-k)}(t)u_1 - CS_1V_{2,2}^{(l-k)}(t)u_1 - \ldots - CS_{n-1}V_{n,2}^{(l-k)}(t)u_1 - \ldots - \\ &- CS_0V_{1,n}^{(l-k)}(t)u_{n-1} - CS_1V_{2,n}^{(l-k)}(t)u_{n-1} - \ldots - CS_{n-1}V_{n,n}^{(l-k)}(t)u_{n-1} - \\ &- CS_0\int_0^t V_{1,n}(t-s)h^{(l-k)}(s)ds - CS_1\int_0^t V_{2,n}(t-s)h^{(l-k)}(s)ds - \ldots - \\ &- CS_{n-1}\int_0^t V_{n,n}(t-s)h^{(l-k)}(s)ds - Ch^{(l-k)}(t)\Big) + \\ + \sum_{k=0}^{l-1} C_l^k R^{(k)}(t)&\,CS_0\sum_{m=0}^{l-k-1} V_{1,n}^{(l-k-m-1)}(t)h^{(m)}(0) + \\ + \sum_{k=0}^{l-1} C_l^k R^{(k)}(t)&\,CS_1\sum_{m=0}^{l-k-1} V_{2,n}^{(l-k-m-1)}(t)h^{(m)}(0) + \ldots + \\ + \sum_{k=0}^{l-1} C_l^k R^{(k)}(t)&\,CS_{n-1}\sum_{m=0}^{l-k-1} V_{n,n}^{(l-k-m-1)}(t)h^{(m)}(0) \end{split}$$

exists under the conditions of this theorem for $l = 0, 1, \ldots, p + n$.

Let us show that $q \in C^{p+n}([0,T];\mathcal{Y})$; for this purpose, denote $r_0 = q_0(0)$ and, for $l = 1, 2, \ldots, p+n$, determine the following values

$$\begin{split} r_l = q_0^{(l)}(0) &+ \sum_{k=0}^{l-1} C_l^k R^{(k)}(0)CS_0\sum_{m=0}^{l-k-1} V_{1,n}^{(l-k-m-1)}(0)\sum_{j=0}^{m} C_m^j r_{m-j}\Phi^{(j)}(0) + \\ &+ \sum_{k=0}^{l-1} C_l^k R^{(k)}(0)CS_1\sum_{m=0}^{l-k-1} V_{2,n}^{(l-k-m-1)}(0)\sum_{j=0}^{m} C_m^j r_{m-j}\Phi^{(j)}(0) + \ldots + \\ &+ \sum_{k=0}^{l-1} C_l^k R^{(k)}(0)CS_{n-1}\sum_{m=0}^{l-k-1} V_{n,n}^{(l-k-m-1)}(0)\sum_{j=0}^{m} C_m^j r_{m-j}\Phi^{(j)}(0). \end{split}$$

Consider the system of integral equations

$$\begin{split} \widetilde{q}_0(t) &= q_0(t) + R(t)\Big(CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_0(s)\Phi(s)ds + CS_1\int_0^t V_{2,n}(t-s)\widetilde{q}_0(s)\Phi(s)ds + \\ &\quad + \ldots + CS_{n-1}\int_0^t V_{n,n}(t-s)\widetilde{q}_0(s)\Phi(s)ds\Big), \\ \widetilde{q}_l(t) &= q_0^{(l)}(t) + \sum_{k=0}^{l-1} C_l^k R^{(k)}(t)CS_0\sum_{m=0}^{l-k-1} V_{1,n}^{(l-k-m-1)}(t)\sum_{j=0}^{m} C_m^j r_{m-j}\Phi^{(j)}(0) + \\ &\quad + \sum_{k=0}^{l-1} C_l^k R^{(k)}(t)CS_1\sum_{m=0}^{l-k-1} V_{2,n}^{(l-k-m-1)}(t)\sum_{j=0}^{m} C_m^j r_{m-j}\Phi^{(j)}(0) + \ldots + \\ &\quad + \sum_{k=0}^{l-1} C_l^k R^{(k)}(t)CS_{n-1}\sum_{m=0}^{l-k-1} V_{n,n}^{(l-k-m-1)}(t)\sum_{j=0}^{m} C_m^j r_{m-j}\Phi^{(j)}(0) + \\ &\quad + \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m} R^{(k)}(t)CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_{l-k-m}(s)\Phi^{(m)}(s)ds + \\ &\quad + \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m} R^{(k)}(t)CS_1\int_0^t V_{2,n}(t-s)\widetilde{q}_{l-k-m}(s)\Phi^{(m)}(s)ds + \ldots + \\ &\quad + \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m} R^{(k)}(t)CS_{n-1}\int_0^t V_{n,n}(t-s)\widetilde{q}_{l-k-m}(s)\Phi^{(m)}(s)ds, \quad l = 1, 2, \ldots, p+n. \end{split} \tag{17}$$

Reduce (17) to the Volterra equation of the second kind

$$\mathbf{g}(t) = \mathbf{g}\_0(t) + \int\_0^t \mathbf{K}(t, s)\mathbf{g}(s)ds$$

on the space $(C([0,T];\mathcal{Y}))^{p+n+1}$ with a matrix operator function $K(t,s)$ given on the triangle $\Delta = \{(t,s) \in \mathbb{R}^2 : 0 \leq t \leq T,\ 0 \leq s \leq t\}$. By virtue of the continuity of all the data of system (17), it has a unique solution

$$(\widetilde{q}_0, \widetilde{q}_1, \ldots, \widetilde{q}_{p+n}) \in (C([0,T];\mathcal{Y}))^{p+n+1}.$$
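The successive-approximation argument behind this unique solvability can be illustrated on a scalar toy Volterra equation of the second kind (the kernel and data below are hypothetical, chosen so the exact solution is known):

```python
import numpy as np

# Toy scalar Volterra equation of the second kind:
#   q(t) = 1 + int_0^t q(s) ds,   whose exact solution is q(t) = e^t.
T, N = 1.0, 1000
t = np.linspace(0.0, T, N + 1)
dt = t[1] - t[0]

q = np.zeros_like(t)               # initial approximation q_0 == 0
for _ in range(60):                # successive (Picard) approximations
    running = np.concatenate(([0.0], np.cumsum((q[1:] + q[:-1]) / 2.0) * dt))
    q = 1.0 + running              # q_i = 1 + int_0^t q_{i-1}

print(abs(q[-1] - np.e) < 1e-3)  # True: iterates converge uniformly on [0, T]
```

Because the iteration is a contraction on $C([0,T])$, the remainder after $i$ steps is bounded by $T^i/i!$, which is the mechanism exploited in the proof.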

This solution will be the limit of the sequence of approximations

$$\begin{split} \widetilde{q}_{0,i}(t) &= q_0(t) + R(t)\Big(CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_{0,i-1}(s)\Phi(s)ds + \\ &\quad + CS_1\int_0^t V_{2,n}(t-s)\widetilde{q}_{0,i-1}(s)\Phi(s)ds + \ldots + CS_{n-1}\int_0^t V_{n,n}(t-s)\widetilde{q}_{0,i-1}(s)\Phi(s)ds\Big), \\ \widetilde{q}_{l,i}(t) &= q_0^{(l)}(t) + \sum_{k=0}^{l-1} C_l^k R^{(k)}(t)CS_0\sum_{m=0}^{l-k-1} V_{1,n}^{(l-k-m-1)}(t)\sum_{j=0}^{m} C_m^j r_{m-j}\Phi^{(j)}(0) + \\ &\quad + \sum_{k=0}^{l-1} C_l^k R^{(k)}(t)CS_1\sum_{m=0}^{l-k-1} V_{2,n}^{(l-k-m-1)}(t)\sum_{j=0}^{m} C_m^j r_{m-j}\Phi^{(j)}(0) + \ldots + \\ &\quad + \sum_{k=0}^{l-1} C_l^k R^{(k)}(t)CS_{n-1}\sum_{m=0}^{l-k-1} V_{n,n}^{(l-k-m-1)}(t)\sum_{j=0}^{m} C_m^j r_{m-j}\Phi^{(j)}(0) + \\ &\quad + \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m} R^{(k)}(t)CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_{l-k-m,i-1}(s)\Phi^{(m)}(s)ds + \\ &\quad + \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m} R^{(k)}(t)CS_1\int_0^t V_{2,n}(t-s)\widetilde{q}_{l-k-m,i-1}(s)\Phi^{(m)}(s)ds + \ldots + \\ &\quad + \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m} R^{(k)}(t)CS_{n-1}\int_0^t V_{n,n}(t-s)\widetilde{q}_{l-k-m,i-1}(s)\Phi^{(m)}(s)ds, \\ &\qquad l = 1, 2, \ldots, p+n; \; i \in \mathbb{N}, \end{split} \tag{18}$$

which, as $i \to \infty$, converge uniformly on the interval $[0,T]$ to the functions $\widetilde{q}_l$, $l = 0, 1, \ldots, p+n$. Set the initial approximation $\widetilde{q}_{l,0} \equiv 0$, $l = 0, 1, \ldots, p+n$; then $\widetilde{q}_{l+1,0} = \widetilde{q}\,'_{l,0}$, $l = 0, 1, \ldots, p+n-1$. In addition, from (18), it follows that

$$\widetilde{q}_{l,i}(0) = r_l; \quad l = 0, 1, \ldots, p+n; \; i \in \mathbb{N}. \tag{19}$$

Assume that for all $\tau = 1, 2, \ldots, i$ the equalities $\widetilde{q}_{l+1,\tau}(t) = \widetilde{q}\,'_{l,\tau}(t)$, $l = 0, 1, \ldots, p+n-1$, are true. Then, using Lemma 2 and equalities (18), we obtain

$$\begin{split} \frac{d}{dt}&\left(\sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m}R^{(k)}(t)CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_{l-k-m,i}(s)\Phi^{(m)}(s)ds\right) = \\ &= \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m}R^{(k+1)}(t)CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_{l-k-m,i}(s)\Phi^{(m)}(s)ds + \\ &\quad + \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m}R^{(k)}(t)CS_0V_{1,n}(t)\widetilde{q}_{l-k-m,i}(0)\Phi^{(m)}(0) + \\ &\quad + \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m}R^{(k)}(t)CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_{l-k-m,i}(s)\Phi^{(m+1)}(s)ds + \\ &\quad + \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m}R^{(k)}(t)CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}\,'_{l-k-m,i}(s)\Phi^{(m)}(s)ds = \\ &= \sum_{k=1}^{l+1}\sum_{m=0}^{l-k+1} C_l^{k-1,m}R^{(k)}(t)CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_{l-k-m+1,i}(s)\Phi^{(m)}(s)ds + \\ &\quad + \sum_{k=0}^{l} C_l^k R^{(k)}(t)CS_0V_{1,n}(t)\sum_{m=0}^{l-k} C_{l-k}^m r_{l-k-m}\Phi^{(m)}(0) + \\ &\quad + \sum_{k=0}^{l}\sum_{m=1}^{l-k+1} C_l^{k,m-1}R^{(k)}(t)CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_{l-k-m+1,i}(s)\Phi^{(m)}(s)ds + \\ &\quad + \sum_{k=0}^{l}\sum_{m=0}^{l-k} C_l^{k,m}R^{(k)}(t)CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_{l-k-m+1,i}(s)\Phi^{(m)}(s)ds. \end{split} \tag{20}$$

Denote

$$a_{k,m} = R^{(k)}(t)CS_0\int_0^t V_{1,n}(t-s)\widetilde{q}_{l-k-m+1,i}(s)\Phi^{(m)}(s)ds, \quad l = 2, 3, \ldots, p+n.$$

Taking into account the equalities

$$C_l^k + C_l^{k-1} = C_{l+1}^k, \quad C_l^{k,m} + C_l^{k-1,m} + C_l^{k,m-1} = C_{l+1}^{k,m},$$

we obtain

$$\begin{split} \sum_{k=0}^{l}\sum_{m=0}^{l-k}& C_l^{k,m}a_{k,m} + \sum_{k=1}^{l+1}\sum_{m=0}^{l-k+1} C_l^{k-1,m}a_{k,m} + \sum_{k=0}^{l}\sum_{m=1}^{l-k+1} C_l^{k,m-1}a_{k,m} = \\ &= \sum_{k=1}^{l}\sum_{m=1}^{l-k} C_l^{k,m}a_{k,m} + \sum_{k=1}^{l} C_l^{k,0}a_{k,0} + \sum_{m=0}^{l} C_l^{0,m}a_{0,m} + \\ &\quad + \sum_{k=1}^{l}\sum_{m=1}^{l-k} C_l^{k-1,m}a_{k,m} + \sum_{k=1}^{l} C_l^{k-1,0}a_{k,0} + \sum_{k=1}^{l} C_l^{k-1,l-k+1}a_{k,l-k+1} + C_l^{l,0}a_{l+1,0} + \\ &\quad + \sum_{k=1}^{l}\sum_{m=1}^{l-k} C_l^{k,m-1}a_{k,m} + \sum_{m=1}^{l+1} C_l^{0,m-1}a_{0,m} + \sum_{k=1}^{l} C_l^{k,l-k}a_{k,l-k+1} = \\ &= \sum_{k=1}^{l}\sum_{m=1}^{l-k} C_{l+1}^{k,m}a_{k,m} + \sum_{k=1}^{l} C_{l+1}^{k,0}a_{k,0} + \sum_{m=1}^{l} C_{l+1}^{0,m}a_{0,m} + \sum_{k=1}^{l} C_{l+1}^{k,0}a_{k,l-k+1} + \\ &\quad + C_l^{0,0}a_{0,0} + C_l^{0,l}a_{0,l+1} + C_l^{l,0}a_{l+1,0} = \sum_{k=0}^{l+1}\sum_{m=0}^{l-k+1} C_{l+1}^{k,m}a_{k,m}. \end{split} \tag{21}$$

For $l = 0, 1$, the fulfillment of (21) can be checked directly. From (20) and (21), it follows that

$$\begin{split} \frac{d}{dt}&\left(\sum\_{k=0}^{l}\sum\_{m=0}^{l-k}\mathcal{C}\_{l}^{k,m}R^{(k)}(t)CS\_0\int\_0^t V\_{1,n}(t-s)\tilde{q}\_{l-k-m,i}(s)\Phi^{(m)}(s)ds\right)=\\ &=\sum\_{k=0}^{l+1}\sum\_{m=0}^{l-k+1}\mathcal{C}\_{l+1}^{k,m}R^{(k)}(t)CS\_0\int\_0^t V\_{1,n}(t-s)\tilde{q}\_{l-k-m+1,i}(s)\Phi^{(m)}(s)ds+\\ &+\sum\_{k=0}^{l}\mathcal{C}\_{l}^{k}R^{(k)}(t)CS\_0V\_{1,n}(t)\sum\_{m=0}^{l-k}\mathcal{C}\_{l-k}^{m}r\_{l-k-m}\Phi^{(m)}(0).\end{split}\tag{22}$$

Similarly, we obtain the result for the subsequent integral element from (18)

$$\begin{split} \frac{d}{dt}&\left(\sum\_{k=0}^{l}\sum\_{m=0}^{l-k}\mathcal{C}\_{l}^{k,m}R^{(k)}(t)CS\_1\int\_0^t V\_{2,n}(t-s)\tilde{q}\_{l-k-m,i}(s)\Phi^{(m)}(s)ds\right)=\\ &=\sum\_{k=0}^{l+1}\sum\_{m=0}^{l-k+1}\mathcal{C}\_{l+1}^{k,m}R^{(k)}(t)CS\_1\int\_0^t V\_{2,n}(t-s)\tilde{q}\_{l-k-m+1,i}(s)\Phi^{(m)}(s)ds+\\ &+\sum\_{k=0}^{l}\mathcal{C}\_{l}^{k}R^{(k)}(t)CS\_1V\_{2,n}(t)\sum\_{m=0}^{l-k}\mathcal{C}\_{l-k}^{m}r\_{l-k-m}\Phi^{(m)}(0).\end{split}\tag{23}$$

Continuing the procedure for all subsequent integral elements of (18), we present the result for the last one:

$$\begin{split} \frac{d}{dt}&\left(\sum\_{k=0}^{l}\sum\_{m=0}^{l-k}\mathcal{C}\_{l}^{k,m}R^{(k)}(t)CS\_{n-1}\int\_0^t V\_{n,n}(t-s)\tilde{q}\_{l-k-m,i}(s)\Phi^{(m)}(s)ds\right)=\\ &=\sum\_{k=0}^{l+1}\sum\_{m=0}^{l-k+1}\mathcal{C}\_{l+1}^{k,m}R^{(k)}(t)CS\_{n-1}\int\_0^t V\_{n,n}(t-s)\tilde{q}\_{l-k-m+1,i}(s)\Phi^{(m)}(s)ds+\\ &+\sum\_{k=0}^{l}\mathcal{C}\_{l}^{k}R^{(k)}(t)CS\_{n-1}V\_{n,n}(t)\sum\_{m=0}^{l-k}\mathcal{C}\_{l-k}^{m}r\_{l-k-m}\Phi^{(m)}(0).\end{split}\tag{24}$$

Changing the summation indices and re-grading the sums, we obtain

$$\begin{split} \frac{d}{dt}&\left(\sum\_{k=0}^{l-1}\mathcal{C}\_{l}^{k}R^{(k)}(t)CS\_0\sum\_{m=0}^{l-k-1}V\_{1,n}^{(l-k-m-1)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)\right)=\\ &=\sum\_{k=0}^{l-1}\mathcal{C}\_{l}^{k}R^{(k)}(t)CS\_0\sum\_{m=0}^{l-k-1}V\_{1,n}^{(l-k-m)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)+\\ &+\sum\_{k=0}^{l-1}\mathcal{C}\_{l}^{k}R^{(k+1)}(t)CS\_0\sum\_{m=0}^{l-k-1}V\_{1,n}^{(l-k-m-1)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)=\\ &=\sum\_{k=0}^{l-1}\mathcal{C}\_{l}^{k}R^{(k)}(t)CS\_0\sum\_{m=0}^{l-k-1}V\_{1,n}^{(l-k-m)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)+\\ &+\sum\_{k=1}^{l}\mathcal{C}\_{l}^{k-1}R^{(k)}(t)CS\_0\sum\_{m=0}^{l-k}V\_{1,n}^{(l-k-m)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)=\\ &=\sum\_{k=1}^{l-1}\mathcal{C}\_{l}^{k}R^{(k)}(t)CS\_0\sum\_{m=0}^{l-k-1}V\_{1,n}^{(l-k-m)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)+\\ &+\mathcal{C}\_{l}^{0}R(t)CS\_0\sum\_{m=0}^{l-1}V\_{1,n}^{(l-m)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)+\\ &+\sum\_{k=1}^{l-1}\mathcal{C}\_{l}^{k-1}R^{(k)}(t)CS\_0\sum\_{m=0}^{l-k-1}V\_{1,n}^{(l-k-m)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)+\\ &+\sum\_{k=1}^{l-1}\mathcal{C}\_{l}^{k-1}R^{(k)}(t)CS\_0V\_{1,n}(t)\sum\_{j=0}^{l-k}\mathcal{C}\_{l-k}^{j}r\_{l-k-j}\Phi^{(j)}(0) + \mathcal{C}\_{l}^{l-1}R^{(l)}(t)CS\_0V\_{1,n}(t)\mathcal{C}\_{0}^{0}r\_{0}\Phi(0)=\\ &=\sum\_{k=0}^{l}\mathcal{C}\_{l+1}^{k}R^{(k)}(t)CS\_0\sum\_{m=0}^{l-k}V\_{1,n}^{(l-k-m)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)-\\ &-\sum\_{k=0}^{l}\mathcal{C}\_{l}^{k}R^{(k)}(t)CS\_0V\_{1,n}(t)\sum\_{m=0}^{l-k}\mathcal{C}\_{l-k}^{m}r\_{l-k-m}\Phi^{(m)}(0).\end{split}\tag{25}$$

Similarly, we obtain the result for the next non-integral element from (18)

$$\frac{d}{dt}\left(\sum\_{k=0}^{l-1}\mathcal{C}\_{l}^{k}R^{(k)}(t)\mathcal{C}S\_{1}\sum\_{m=0}^{l-k-1}V\_{2,n}^{(l-k-m-1)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)\right) = $$

$$=\sum\_{k=0}^{l}\mathcal{C}\_{l+1}^{k}R^{(k)}(t)\mathcal{C}S\_{1}\sum\_{m=0}^{l-k}V\_{2,n}^{(l-k-m)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0) - $$

$$-\sum\_{k=0}^{l}\mathcal{C}\_{l}^{k}R^{(k)}(t)\mathcal{C}S\_{1}V\_{2,n}(t)\sum\_{m=0}^{l-k}\mathcal{C}\_{l-k}^{m}r\_{l-k-m}\Phi^{(m)}(0). \tag{26}$$

Continuing the procedure for all subsequent non-integral elements of (18), we present the result for the last one:

$$\frac{d}{dt}\left(\sum\_{k=0}^{l-1}\mathcal{C}\_{l}^{k}R^{(k)}(t)CS\_{n-1}\sum\_{m=0}^{l-k-1}V\_{n,n}^{(l-k-m-1)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0)\right)=$$

$$=\sum\_{k=0}^{l}\mathcal{C}\_{l+1}^{k}R^{(k)}(t)\mathcal{C}S\_{n-1}\sum\_{m=0}^{l-k}V\_{n,n}^{(l-k-m)}(t)\sum\_{j=0}^{m}\mathcal{C}\_{m}^{j}r\_{m-j}\Phi^{(j)}(0) -$$

$$-\sum\_{k=0}^{l}C\_{l}^{k}R^{(k)}(t)\mathcal{C}S\_{n-1}V\_{n,n}(t)\sum\_{m=0}^{l-k}C\_{l-k}^{m}r\_{l-k-m}\Phi^{(m)}(0). \tag{27}$$

Differentiating (18) and using (22)–(27), we obtain the equalities $\tilde{q}\,'\_{l,i+1} = \tilde{q}\_{l+1,i+1}$, $l = 0, 1, \ldots, p+n-1$. Thus, the sequence $\tilde{q}\_{0,i}$ converges as $i \to \infty$ to the function $\tilde{q}\_0$ uniformly on the interval $[0,T]$, and the sequence $\tilde{q}\,'\_{0,i} = \tilde{q}\_{1,i}$ converges as $i \to \infty$ to the function $\tilde{q}\_1$ uniformly on $[0,T]$. Therefore, the function $\tilde{q}\_0$ is continuously differentiable and $\tilde{q}\,'\_0 = \tilde{q}\_1$. The equalities $\tilde{q}\,'\_l = \tilde{q}\_{l+1}$, $l = 1, 2, \ldots, p+n-1$, are proved in the same way, which implies that $\tilde{q}\_0 \equiv q \in C^{p+n}([0,T];\mathcal{Y})$ and, therefore, $q^{(l)} = \tilde{q}\_l$, $l = 1, 2, \ldots, p+n$.

#### *3.3. Solvability of the Original Inverse Problem*

**Theorem 4.** *Let the pencil $\vec{B}$ be polynomially $A$-bounded, let condition (4) be fulfilled, and let $\infty$ be a pole of order $p \in \mathbb{N}\_0$ of the $A$-resolvent of the pencil $\vec{B}$; let the operator $C \in \mathcal{L}(\mathcal{U};\mathcal{Y})$, $\mathcal{U}^0 \subset \ker C$, $\chi \in C^{p+n}([0,T];\mathcal{L}(\mathcal{Y};\mathcal{F}))$, $f \in C^{p+n}([0,T];\mathcal{F})$, $\Psi \in C^{p+2n}([0,T];\mathcal{Y})$; for any $t \in [0,T]$ let the operator $C(A\_1)^{-1}Q\chi$ be invertible, with $(C(A\_1)^{-1}Q\chi)^{-1} \in C^{p+n}([0,T];\mathcal{L}(\mathcal{Y}))$; let the condition $Cu\_{n-1} = \Psi^{(n-1)}(0)$ be satisfied at some initial value $u\_{n-1} \in \mathcal{U}^1$; and let the initial values $w\_k = (I-P)v\_k \in \mathcal{U}^0$ satisfy*

$$w\_k = -\sum\_{j=0}^p K\_j^n (B\_0^0)^{-1} \left.\frac{d^{j+k}}{dt^{j+k}}\left[(I-Q)(q(t)\chi(t)+f(t))\right]\right|\_{t=0}, \quad k = 0, 1, \dots, n-1.$$

*Then, there exists a unique solution $(v,q)$ of inverse problem (1)–(3), where $q \in C^{p+n}([0,T];\mathcal{Y})$ and $v = u + w$; here $u \in C^n([0,T];\mathcal{U}^1)$ is the solution of (5)–(7), and the function $w \in C^n([0,T];\mathcal{U}^0)$ is the solution of (8) and (9) given by*

$$w(t) = -\sum\_{j=0}^{p} K\_j^{n} (B\_0^0)^{-1} \frac{d^j}{dt^j}\left[(I-Q)(q(t)\chi(t)+f(t))\right]. \tag{28}$$

**Proof of Theorem 4.** The conditions of Theorems 2 and 3 are satisfied; therefore, there exists a unique solution $(q,u)$ of problem (5)–(7), where $q \in C^{p+n}([0,T];\mathcal{Y})$, $u \in C^n([0,T];\mathcal{U}^1)$.

Using the result of [9] and the required smoothness of the function $q$, we obtain that there exists a unique solution $w \in C^n([0,T];\mathcal{U}^0)$ of (8) and (9), given by (28).

#### **4. Discussion**

The results obtained in this article can be applied to various mathematical models, such as the model of oscillations of a rotating viscous fluid with a viscosity coefficient, the model of gravitational-gyroscopic and internal waves, and the model of sound waves in smectics, since these mathematical models can be reduced to higher-order Sobolev-type equations. One of the most typical examples of the application of the theory of Sobolev-type equations is the Boussinesq–Love model [5]:

$$
\sigma(\lambda - \Delta)v\_{tt} = \alpha(\Delta - \lambda')v\_t + \beta(\Delta - \lambda'')v + qf,\tag{29}
$$

with initial conditions

$$v(x,0) = v\_0(x), \quad v\_t(x,0) = v\_1(x),$$

boundary condition

$$v(x,t)|\_{\partial\Omega} = 0,$$

and overdetermination condition

$$\int\_{\Omega} v(\mathbf{x}, t) \mathcal{K}(\mathbf{x}) d\mathbf{x} = \Phi(t), \tag{30}$$

where $v\_0(x)$, $v\_1(x)$, $K(x)$, and $\Phi(t)$ are given functions, $v(x,t)$ is the sought function, and $\Omega \subset \mathbb{R}^n$ is a bounded domain with boundary $\partial\Omega$ of class $C^\infty$. Equation (29) describes longitudinal vibrations in a thin elastic rod, taking into account inertia and an external load. The coefficients $\lambda$, $\alpha$, $\lambda'$, $\beta$, $\lambda''$ characterize the properties of the rod material and relate such quantities as Young's modulus, Poisson's ratio, the material density, and the radius of gyration about the center of gravity; in addition, the function $f$ specifies the known part of the external load. The integral overdetermination condition (30) arises when, in addition to finding the function $v$, it is necessary to recover the component $q$ of the external load. We also plan to use the obtained results to develop numerical methods for finding approximate solutions of some of the models presented above.
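Since the article plans numerical methods as future work, a purely illustrative one-dimensional finite-difference sketch of (29) with homogeneous Dirichlet data may help fix ideas: at each time layer a linear system is solved for $v\_{tt}$. This is not the authors' scheme; every coefficient, grid parameter, initial datum, and the load below is an assumed value:

```python
import numpy as np

# Illustrative 1-D finite-difference sketch for the Boussinesq-Love
# equation (29): sigma*(lam - Lap) v_tt = alpha*(Lap - lam1) v_t
#                + beta*(Lap - lam2) v + f, with v = 0 on the boundary.
# All values below are assumptions made for this sketch.
sigma, lam, alpha, beta, lam1, lam2 = 1.0, 1.0, 0.1, 1.0, 0.5, 0.5
N, L, dt, steps = 100, np.pi, 1e-3, 500

x = np.linspace(0.0, L, N + 2)[1:-1]   # interior nodes (Dirichlet BC)
h = x[1] - x[0]
I = np.eye(N)
Lap = (np.diag(np.ones(N - 1), 1) - 2 * I + np.diag(np.ones(N - 1), -1)) / h**2

A = sigma * (lam * I - Lap)            # operator acting on v_tt (invertible)
f = np.zeros(N)                        # assumed known part of the load

v_prev = np.sin(x)                     # v(x, 0)
v = v_prev.copy()                      # v_t(x, 0) = 0

for _ in range(steps):
    vt = (v - v_prev) / dt
    rhs = alpha * (Lap @ vt - lam1 * vt) + beta * (Lap @ v - lam2 * v) + f
    a = np.linalg.solve(A, rhs)        # v_tt on the current layer
    v_prev, v = v, 2 * v - v_prev + dt**2 * a

print(float(np.max(np.abs(v))))        # amplitude remains bounded
```

The implicit "mass" operator $\sigma(\lambda - \Delta)$ keeps the effective frequencies bounded, which is why the explicit leapfrog update remains stable with a modest time step.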

**Author Contributions:** Conceptualization, A.Z. and A.L.; Methodology, A.Z.; Validation, A.Z. and A.L.; Formal Analysis, A.L.; Investigation, A.L.; Resources, A.Z.; Data Curation, A.L.; Writing— Original Draft Preparation, A.L.; Supervision, A.Z.; Project administration, A.Z.; Funding Acquisition, A.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** The reported study was funded by RFBR, project number 19-31-90137.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Important Criteria for Asymptotic Properties of Nonlinear Differential Equations**

**Ahmed AlGhamdi 1,†, Omar Bazighifan 2,3,† and Rami Ahmad El-Nabulsi 4,5,6,\*,†**


**Abstract:** In this article, we prove some new oscillation theorems for fourth-order differential equations. New oscillation results are established that complement related contributions to the subject. We use the Riccati technique and the integral averaging technique to prove our results. As proof of the effectiveness of the new criteria, we offer more than one practical example.

**Keywords:** fourth-order differential equations; neutral delay; oscillation

#### **1. Introduction**

In this manuscript, we are concerned with the asymptotic behavior of solutions to fourth-order differential equations:

$$\left(m(z)\Psi\_{r\_1}\left(\varsigma'''(z)\right)\right)' + \tilde{\omega}(z)\Psi\_{r\_2}\left(\delta(\alpha(z))\right) = 0,\tag{1}$$

where $\Psi\_{r\_i}[s] = |s|^{r\_i-1}s$, $i = 1,2$, $\varsigma(z) = \delta(z) + \tilde{y}(z)\delta(\tilde{\alpha}(z))$, $m, \tilde{y}, \tilde{\omega} \in C[z\_0,\infty)$, $m(z) > 0$, $m'(z) \ge 0$, $\tilde{\omega}(z) > 0$, $0 \le \tilde{y}(z) < \tilde{y}\_0 < \infty$, $\tilde{\alpha}, \alpha \in C[z\_0,\infty)$, $\tilde{\alpha}(z) \le z$, $\lim\_{z\to\infty}\tilde{\alpha}(z) = \lim\_{z\to\infty}\alpha(z) = \infty$; and $r\_1$ and $r\_2$ are quotients of odd positive integers, under the following assumption:

$$\int\_{z\_0}^{\infty} \frac{1}{m^{1/r\_1}(s)} ds = \infty. \tag{2}$$
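Condition (2) separates the canonical case from the noncanonical one and is easy to probe numerically for concrete coefficients; the sketch below (with $r\_1 = 1$, $z\_0 = 1$, and arbitrary truncation points of our own choosing) contrasts $m(z) = 1$, for which the integral diverges, with $m(z) = z^2$, for which it stays bounded:

```python
import numpy as np

# Truncated numerical probe of condition (2) with r1 = 1 and z0 = 1 for two
# sample coefficients: m(z) = 1 (canonical case) versus m(z) = z**2.
def trap(y, x):
    # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

for upper in [1e2, 1e4, 1e6]:
    s = np.geomspace(1.0, upper, 200_000)
    div = trap(1.0 / np.ones_like(s), s)   # m(z) = 1: grows like `upper`
    conv = trap(1.0 / s**2, s)             # m(z) = z^2: approaches 1
    print(f"upper={upper:.0e}  m=1: {div:.1f}   m=z^2: {conv:.6f}")
```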

The theory of the oscillation of delay of differential equations is a fertile study area and has attracted the attention of many authors recently. This is due to the existence of many important applications of this theory in neural networks, biology, social sciences, engineering, etc.; see [1,2].

The study of the behavior of solutions to higher-order differential equations has yielded far fewer results than that of lower-order equations, although higher-order equations, especially neutral delay differential equations, are of the utmost importance in many applications.

Many recent studies have been devoted to the oscillation of various classes of differential equations, using different techniques to establish sufficient conditions that ensure the oscillatory behavior of the solutions of (1); see [3–5].

**Citation:** AlGhamdi, A.; Bazighifan, O.; El-Nabulsi, R.A. Important Criteria for Asymptotic Properties of Nonlinear Differential Equations. *Mathematics* **2021**, *9*, 1659. https:// doi.org/10.3390/math9141659

Academic Editor: Fasma Diele

Received: 7 June 2021 Accepted: 10 July 2021 Published: 14 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The motivation for this article is provided by the results reported in [6,7]; we therefore discuss their findings and results below.

Xing et al. [6] presented criteria for oscillation of the equation as follows:

$$\left(m(z)\left(\varsigma^{(n-1)}(z)\right)^{r\_1}\right)' + \tilde{\omega}(z)\delta^{r\_1}(\alpha(z)) = 0,$$

under the conditions

$$\left(\tilde{\alpha}^{-1}(z)\right)' \ge \tilde{\alpha}\_0 > 0, \quad \alpha'(z) \ge \alpha\_0 > 0, \quad \tilde{\alpha}^{-1}(\alpha(z)) < z$$

and

$$\liminf\_{z \to \infty} \int\_{\tilde{\alpha}^{-1}(\alpha(z))}^{z} \frac{\tilde{\omega}\_{+}(s)}{m(s)}\left(s^{n-1}\right)^{r\_1} \mathrm{d}s > \left(\frac{1}{\alpha\_0} + \frac{\tilde{y}\_0^{r\_1}}{\alpha\_0\tilde{\alpha}\_0}\right)\frac{((n-1)!)^{r\_1}}{e},$$

where $0 \le \tilde{y}(z) < \tilde{y}\_0 < \infty$ and $\tilde{\omega}\_{+}(z) := \min\left\{\tilde{\omega}\left(\alpha^{-1}(z)\right), \tilde{\omega}\left(\alpha^{-1}(\tilde{\alpha}(z))\right)\right\}$. Moreover, the authors used the comparison method to obtain oscillation conditions for this equation.

Bazighifan et al. [7] presented oscillation results for the following fourth-order equation:

$$\left(m(z)\left(\varsigma'''(z)\right)^{r\_1}\right)' + \tilde{\omega}(z)\delta^{r\_1}(\alpha(z)) = 0,$$

under the conditions

$$\int\_{z\_0}^{\infty} \frac{1}{m^{1/r\_1}(s)} ds < \infty$$

using the Riccati technique.

Zhang et al. [8] established oscillation criteria for the following equation:

$$\left(m(z)\left(\varsigma^{(n-1)}(z)\right)^{r\_1}\right)' + \tilde{\omega}(z)f(\delta(\alpha(z))) = 0$$

and under the condition

$$\int\_{z\_0}^{\infty} \left( k\rho(s)E(s) - \frac{1}{4\lambda} \left( \frac{\rho'(s)}{\rho(s)} \right)^2 \eta(s) \right) \mathrm{d}s = \infty.$$

Chatzarakis et al. [9], by using the Riccati technique, established asymptotic behavior for the following neutral equation:

$$\left(m(z)\left(\varsigma''(z)\right)^{r\_1}\right)' + \int\_a^b \omega(z,s)f(\delta(\alpha(z,s)))\,\mathrm{d}s = 0.$$

The authors in [6,7] used the comparison technique that differs from the one we used in this article. Their approach is based on using these mentioned methods to reduce Equation (1) into a first-order equation, while in our article, we discuss the oscillatory properties of differential equations with a middle term and with a canonical operator of the neutral-type, and we employ a different approach based on using the integral averaging technique and the Riccati technique to reduce the main equation into a first-order inequality to obtain more effective oscillatory properties.

The purpose of this article is to establish new oscillation criteria for (1). The methods used in this paper simplify and extend some of the known results reported in the literature [6,7].

#### **2. Oscillation Criteria**

We next present the lemmas needed for the proofs of the main results:

**Lemma 1** ([10])**.** *If δ*(*i*)(*z*) > 0, *i* = 0, 1, . . . , *n*, *and δ*(*n*+1)(*z*) < 0, *then the following holds:*

$$n!\frac{\delta(z)}{z^n} \ge (n-1)!\frac{\delta'(z)}{z^{n-1}}.$$
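Lemma 1 can be spot-checked on a concrete function: with $n = 2$, the illustrative choice $\delta(z) = z^{1.8}$ (ours, not from the article) satisfies $\delta, \delta', \delta'' > 0$ and $\delta''' < 0$ on $(0,\infty)$:

```python
from math import factorial

# Spot check of Lemma 1 with n = 2 for the illustrative function
# delta(z) = z**p with 1 < p < 2, so that delta, delta', delta'' > 0
# and delta''' < 0 on (0, inf).
p = 1.8
delta = lambda z: z**p
ddelta = lambda z: p * z**(p - 1)   # delta'(z)

n = 2
for z in [0.5, 1.0, 3.0, 10.0, 100.0]:
    lhs = factorial(n) * delta(z) / z**n             # n! * delta(z) / z^n
    rhs = factorial(n - 1) * ddelta(z) / z**(n - 1)  # (n-1)! * delta'(z) / z^(n-1)
    assert lhs >= rhs, (z, lhs, rhs)
print("Lemma 1 inequality confirmed at all sampled points")
```

For this power function both sides are proportional to $z^{p-2}$, so the inequality reduces to $2 \ge p$.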

**Lemma 2** ([11])**.** *Let $\delta \in C^n([z\_0,\infty),(0,\infty))$. Assume that $\delta^{(n)}(z)$ is of fixed sign and not identically zero on $[z\_0,\infty)$ and that there exists a $z\_1 \ge z\_0$ such that $\delta^{(n-1)}(z)\delta^{(n)}(z) \le 0$ for all $z \ge z\_1$. If $\lim\_{z\to\infty}\delta(z) \neq 0$, then for every $\mu \in (0,1)$ there exists $z\_\mu \ge z\_1$ such that the following holds:*

$$\left|\delta(z)\right| \ge \frac{\mu}{(n-1)!} z^{n-1} \left|\delta^{(n-1)}(z)\right| \text{ for } z \ge z\_{\mu}.$$

**Lemma 3** ([12])**.** *Let a* ≥ 0*; then, the following holds:*

$$X\delta - Y\delta^{(a+1)/a} \le a^a (a+1)^{-(a+1)} Y^{-a} X^{a+1},$$

*where Y* > 0 *and X are constants.*
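Lemma 3 is a Young-type inequality; it can be spot-checked by maximizing the left-hand side over a grid of $\delta \ge 0$ for a few illustrative values of $a$, $X$, $Y$ (our own choices):

```python
import numpy as np

# Spot check of Lemma 3: for d >= 0, Y > 0,
#   X*d - Y*d**((a+1)/a) <= a**a * (a+1)**(-(a+1)) * Y**(-a) * X**(a+1).
# The parameter triples below are arbitrary illustrations.
for a, X, Y in [(1.0, 3.0, 2.0), (3.0, 1.5, 0.7), (0.5, 2.0, 5.0)]:
    d = np.linspace(0.0, 50.0, 200_001)
    lhs = X * d - Y * d ** ((a + 1) / a)
    bound = a**a * (a + 1) ** (-(a + 1)) * Y ** (-a) * X ** (a + 1)
    assert lhs.max() <= bound + 1e-9, (a, X, Y, float(lhs.max()), bound)
print("Lemma 3 bound holds on the sampled grids")
```

The maximizer is $\delta^* = \left(aX/((a+1)Y)\right)^a$, which lies inside the sampled grid for all three triples.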

**Lemma 4** ([13])**.** *Assume that δ*(*z*) *is an eventually positive solution of Equation (1). Then,*

$$\begin{array}{ll} \text{Case } (\mathbf{N}\_1): & \varsigma(z) > 0,\ \varsigma'(z) > 0,\ \varsigma''(z) > 0,\ \varsigma'''(z) > 0,\\ \text{Case } (\mathbf{N}\_2): & \varsigma(z) > 0,\ \varsigma'(z) > 0,\ \varsigma''(z) < 0,\ \varsigma'''(z) > 0. \end{array}$$

Here are the notations used for our study:

$$\begin{array}{rcl} E\_1(z) &=& \beta(z)\tilde{\omega}(z)(1-\tilde{y}\_0)^{r\_2}A\_1^{r\_2-r\_1}\left(\frac{\alpha(z)}{z}\right)^{3r\_2},\\ \Phi(z) &=& (1-\tilde{y}\_0)^{r\_2/r\_1}h(z)A\_2^{r\_2/r\_1-1}\int\_z^\infty \left(\frac{1}{m(u)}\int\_u^\infty \tilde{\omega}(s)\frac{\alpha^{r\_2}(s)}{s^{r\_2}}ds\right)^{1/r\_1}du \end{array}$$

and

$$
\Theta(z) = r\_1 \mu\_1 \frac{z^2}{2m^{1/r\_1}(z)\beta^{1/r\_1}(z)}.
$$

**Lemma 5.** *Let $\delta(z)$ be an eventually positive solution of Equation (1). Then,*

$$\left(m(z)\left(\varsigma'''(z)\right)^{r\_1}\right)' \le -G(z)\left(\varsigma'''(\alpha(z))\right)^{r\_2},\tag{3}$$

*where*

$$G(z) = \tilde{\omega}(z)(1 - \tilde{y}\_0)^{r\_2}\left(\frac{\mu}{6}(\alpha(z))^3\right)^{r\_2}.$$

**Proof.** Let $\delta(z)$ be an eventually positive solution of Equation (1). From the definition of $\varsigma(z) = \delta(z) + \tilde{y}(z)\delta(\tilde{\alpha}(z))$, we obtain the following:

$$\begin{array}{rcl}\delta(z) &\ge& \varsigma(z) - \tilde{y}\_0\delta(\tilde{\alpha}(z))\\ &\ge& \varsigma(z) - \tilde{y}\_0\varsigma(\tilde{\alpha}(z))\\ &\ge& (1-\tilde{y}\_0)\varsigma(z), \end{array}$$

which with (1), results in the following:

$$\left(m(z)\left(\varsigma'''(z)\right)^{r\_1}\right)' + \tilde{\omega}(z)(1-\tilde{y}\_0)^{r\_2}\varsigma^{r\_2}(\alpha(z)) \le 0. \tag{4}$$

Using Lemma 2, we see the following:

$$
\varsigma(z) \ge \frac{\mu}{6} z^3 \varsigma'''(z). \tag{5}
$$

Combining (4) and (5), we find the following:

$$\left(m(z)\left(\varsigma'''(z)\right)^{r\_1}\right)' + \tilde{\omega}(z)\left(1-\tilde{y}\_0\right)^{r\_2}\left(\frac{\mu}{6}(\alpha(z))^3\right)^{r\_2}\left(\varsigma'''(\alpha(z))\right)^{r\_2} \le 0.$$

Thus, (3) holds. This completes the proof.

**Lemma 6.** *Let $\delta(z)$ be an eventually positive solution of Equation (1). Then*

$$B'(z) \le \frac{\beta'(z)}{\beta(z)} B(z) - E\_1(z) - r\_1 \mu\_1 \frac{z^2}{2m^{1/r\_1}(z)\beta^{1/r\_1}(z)} B^{\frac{r\_1+1}{r\_1}}(z), \text{ if } \zeta \text{ satisfies } (\mathbf{N}\_1) \tag{6}$$

*and*

$$A'(z) \le -\Phi(z) + \frac{h'(z)}{h(z)}A(z) - \frac{1}{h(z)}A^2(z),\text{ if }\varsigma\text{ satisfies } (\mathbf{N}\_2),\tag{7}$$

*where*

$$B(z) := \beta(z)\frac{m(z)\left(\varsigma'''(z)\right)^{r\_1}}{\varsigma^{r\_1}(z)} > 0 \tag{8}$$

*and*

$$A(z) := h(z) \frac{\mathfrak{g}'(z)}{\mathfrak{g}(z)}, \; z \ge z\_1. \tag{9}$$

**Proof.** Let $\delta(z)$ be an eventually positive solution of Equation (1), and let $(\mathbf{N}\_1)$ hold. From (8) and (4), we find the following:

$$B'(z) \le \frac{\beta'(z)}{\beta(z)} B(z) - \beta(z) \tilde{\omega}(z) (1 - \bar{y}\_0)^{r\_2} \frac{\varsigma^{r\_2}(a(z))}{\varsigma^{r\_1}(z)} - r\_1 \beta(z) \frac{m(z) (\varsigma^{\prime\prime\prime}(z))^{r\_1}}{\varsigma^{r\_1 + 1}(z)} \varsigma'(z). \tag{10}$$

Using Lemma 1, we find

$$
\varsigma(z) \ge \frac{z}{3} \varsigma'(z)
$$

and hence,

$$\frac{\mathfrak{g}(a(z))}{\mathfrak{g}(z)} \ge \frac{a^3(z)}{z^3}.\tag{11}$$

It follows from Lemma 2 that

$$
\zeta'(z) \ge \frac{\mu\_1}{2} z^2 \zeta'''(z),
\tag{12}
$$

for all *μ*<sup>1</sup> ∈ (0, 1) and every sufficiently large *z*. Thus, by (10)–(12), we obtain the following:

$$\begin{array}{rcl}B'(z) &\le& \frac{\beta'(z)}{\beta(z)}B(z) - \beta(z)\tilde{\omega}(z)(1-\tilde{y}\_0)^{r\_2}\varsigma^{r\_2-r\_1}(z)\left(\frac{\alpha(z)}{z}\right)^{3r\_2}\\ && -\, r\_1\mu\_1\frac{z^2}{2m^{1/r\_1}(z)\beta^{1/r\_1}(z)}B^{\frac{r\_1+1}{r\_1}}(z). \end{array}$$

Since *ς* (*z*) > 0, there exist *z*<sup>2</sup> ≥ *z*<sup>1</sup> and *A*<sup>1</sup> > 0 such that the following holds:

$$
\varsigma(z) \ge A\_1. \tag{13}
$$

Thus, we obtain the following:

$$\begin{aligned} B'(z) &\le \frac{\beta'(z)}{\beta(z)}B(z) - \beta(z)\tilde{\omega}(z)(1-\tilde{y}\_0)^{r\_2}A\_1^{r\_2-r\_1}\left(\frac{\alpha(z)}{z}\right)^{3r\_2}\\ &\quad - r\_1\mu\_1\frac{z^2}{2m^{1/r\_1}(z)\beta^{1/r\_1}(z)}B^{\frac{r\_1+1}{r\_1}}(z), \end{aligned}$$

which yields the following:

$$B'(z) \le \frac{\beta'(z)}{\beta(z)} B(z) - E\_1(z) - r\_1 \mu\_1 \frac{z^2}{2m^{1/r\_1}(z)\beta^{1/r\_1}(z)} B^{\frac{r\_1+1}{r\_1}}(z).$$

Thus, (6) holds.

Let (**N**2) hold. Integrating (4) from *z* to *u*, we find the following:

$$m(u)\left(\varsigma'''(u)\right)^{r\_1} - m(z)\left(\varsigma'''(z)\right)^{r\_1} \le -\int\_z^u \tilde{\omega}(s)(1-\tilde{y}\_0)^{r\_2}\varsigma^{r\_2}(\alpha(s))ds.\tag{14}$$

From Lemma 1, we obtain the following:

$$
\varsigma(z) \ge z \varsigma'(z)
$$

and hence,

$$
\varsigma(\alpha(z)) \ge \frac{\alpha(z)}{z}\varsigma(z). \tag{15}
$$

Letting $u \to \infty$ in (14) and using (15), we obtain the following:

$$m(z)\left(\varsigma'''(z)\right)^{r\_1} \ge (1-\tilde{y}\_0)^{r\_2}\varsigma^{r\_2}(z)\int\_z^{\infty}\tilde{\omega}(s)\frac{\alpha^{r\_2}(s)}{s^{r\_2}}ds.\tag{16}$$

Integrating (16) from *z* to ∞, we find the following:

$$\varsigma''(z) \le -(1-\tilde{y}\_0)^{r\_2/r\_1}\varsigma^{r\_2/r\_1}(z)\int\_z^{\infty}\left(\frac{1}{m(u)}\int\_u^{\infty}\tilde{\omega}(s)\frac{\alpha^{r\_2}(s)}{s^{r\_2}}ds\right)^{1/r\_1}du.\tag{17}$$

From the definition of *A*(*z*), we see that *A*(*z*) > 0 for *z* ≥ *z*1, and using (13) and (17), we find the following:

$$\begin{aligned} A'(z) &= \frac{h'(z)}{h(z)}A(z) + h(z)\frac{\varsigma''(z)}{\varsigma(z)} - h(z)\left(\frac{\varsigma'(z)}{\varsigma(z)}\right)^2\\ &\le \frac{h'(z)}{h(z)}A(z) - \frac{1}{h(z)}A^2(z)\\ &\quad - (1-\tilde{y}\_0)^{r\_2/r\_1}h(z)\varsigma^{r\_2/r\_1-1}(z)\int\_z^\infty\left(\frac{1}{m(u)}\int\_u^\infty \tilde{\omega}(s)\frac{\alpha^{r\_2}(s)}{s^{r\_2}}ds\right)^{1/r\_1}du. \end{aligned}$$

Since *ς* (*z*) > 0, there exist *z*<sup>2</sup> ≥ *z*<sup>1</sup> and *A*<sup>2</sup> > 0 such that the following holds:

$$\varsigma(z) > A\_2.$$

Thus, we obtain the following:

$$A'(z) \le -\Phi(z) + \frac{h'(z)}{h(z)}A(z) - \frac{1}{h(z)}A^2(z).$$

Thus, (7) holds. This completes the proof.

**Definition 1.** *Let*

$$D = \{(z, s) \in \mathbb{R}^2 : z \ge s \ge z\_0\} \text{ and } D\_0 = \{(z, s) \in \mathbb{R}^2 : z > s \ge z\_0\}.$$

*The functions $G\_i \in C(D,\mathbb{R})$, $i = 1, 2$, fulfill the following conditions:*

*(i) $G\_i(z,z) = 0$ for $z \ge z\_0$, and $G\_i(z,s) > 0$ for $(z,s) \in D\_0$;*

*(ii) there are functions $\beta, h \in C^1([z\_0,\infty),(0,\infty))$ and $g\_i \in C(D\_0,\mathbb{R})$ such that*

$$\frac{\partial}{\partial s}G\_1(z,s) + \frac{\beta'(s)}{\beta(s)}G\_1(z,s) = g\_1(z,s)G\_1^{r\_1/(r\_1+1)}(z,s) \tag{18}$$

*and*

$$\frac{\partial}{\partial s}\mathcal{G}\_2(z,s) + \frac{h'(s)}{h(s)}\mathcal{G}\_2(z,s) = \mathcal{g}\_2(z,s)\sqrt{\mathcal{G}\_2(z,s)}.\tag{19}$$

Now, we present some Philos-type oscillation criteria for (1).

**Theorem 1.** *Let (2) hold. If there exist functions $\beta, h \in C^1([z\_0,\infty),\mathbb{R})$ such that*

$$\limsup\_{z\to\infty}\frac{1}{G\_1(z,z\_1)}\int\_{z\_1}^{z}\left(G\_1(z,s)E\_1(s) - \frac{g\_1^{r\_1+1}(z,s)G\_1^{r\_1}(z,s)}{(r\_1+1)^{r\_1+1}}\frac{2^{r\_1}m(s)\beta(s)}{\left(\mu\_1 s^2\right)^{r\_1}}\right)ds = \infty \tag{20}$$

*for some $\mu\_1 \in (0,1)$, and*

$$\limsup\_{z \to \infty} \frac{1}{G\_2(z, z\_1)} \int\_{z\_1}^{z} \left( G\_2(z, s) \Phi(s) - \frac{h(s) g\_2^2(z, s)}{4} \right) ds = \infty,\tag{21}$$

*then (1) is oscillatory.*

**Proof.** Let $\delta$ be a non-oscillatory solution of (1); without loss of generality, we may assume that $\delta > 0$. Assume that $(\mathbf{N}\_1)$ holds. Multiplying (6) by $G\_1(z,s)$ and integrating the resulting inequality from $z\_1$ to $z$, we obtain the following:

$$\begin{split} \int\_{z\_1}^{z}G\_1(z,s)E\_1(s)\mathrm{d}s &\le B(z\_1)G\_1(z,z\_1) + \int\_{z\_1}^{z}\left(\frac{\partial}{\partial s}G\_1(z,s) + \frac{\beta'(s)}{\beta(s)}G\_1(z,s)\right)B(s)\mathrm{d}s\\ &\quad - \int\_{z\_1}^{z}\Theta(s)G\_1(z,s)B^{\frac{r\_1+1}{r\_1}}(s)\mathrm{d}s. \end{split}$$

From (18), we obtain the following:

$$\begin{split} \int\_{z\_1}^{z}G\_1(z,s)E\_1(s)\mathrm{d}s &\le B(z\_1)G\_1(z,z\_1) + \int\_{z\_1}^{z}g\_1(z,s)G\_1^{r\_1/(r\_1+1)}(z,s)B(s)\mathrm{d}s\\ &\quad - \int\_{z\_1}^{z}\Theta(s)G\_1(z,s)B^{\frac{r\_1+1}{r\_1}}(s)\mathrm{d}s. \end{split} \tag{22}$$

Using Lemma 3 with $Y = \Theta(s)G\_1(z,s)$, $X = g\_1(z,s)G\_1^{r\_1/(r\_1+1)}(z,s)$ and $\delta = B(s)$, we obtain the following:

$$g\_1(z,s)G\_1^{r\_1/(r\_1+1)}(z,s)B(s) - \Theta(s)G\_1(z,s)B^{\frac{r\_1+1}{r\_1}}(s) \le \frac{g\_1^{r\_1+1}(z,s)G\_1^{r\_1}(z,s)}{(r\_1+1)^{r\_1+1}}\frac{2^{r\_1}m(s)\beta(s)}{\left(\mu\_1 s^2\right)^{r\_1}},$$

which, with (22) gives the following:

$$\frac{1}{G\_1(z,z\_1)}\int\_{z\_1}^{z}\left(G\_1(z,s)E\_1(s) - \frac{g\_1^{r\_1+1}(z,s)G\_1^{r\_1}(z,s)}{(r\_1+1)^{r\_1+1}}\frac{2^{r\_1}m(s)\beta(s)}{\left(\mu\_1 s^2\right)^{r\_1}}\right)ds \le B(z\_1),$$

which contradicts (20).

Assume that (**N**2) holds. Multiplying (7) by *G*2(*z*,*s*) and integrating the resulting inequality from *z*<sup>1</sup> to *z*, we find the following:

$$\begin{aligned} \int\_{z\_1}^z G\_2(z,s) \Phi(s) \mathrm{d}s &\quad \le \quad A(z\_1) G\_2(z, z\_1) \\ &\quad + \int\_{z\_1}^z \left( \frac{\partial}{\partial s} G\_2(z,s) + \frac{h'(s)}{h(s)} G\_2(z,s) \right) A(s) \mathrm{d}s \\ &\quad - \int\_{z\_1}^z \frac{1}{h(s)} G\_2(z,s) A^2(s) \mathrm{d}s. \end{aligned}$$

Thus,

$$\begin{aligned} \int\_{z\_1}^z G\_2(z,s) \Phi(s) \mathrm{d}s &\leq \quad A(z\_1) G\_2(z, z\_1) + \int\_{z\_1}^z g\_2(z,s) \sqrt{G\_2(z,s)} A(s) \mathrm{d}s \\ &- \int\_{z\_1}^z \frac{1}{h(s)} G\_2(z,s) A^2(s) \mathrm{d}s \\ &\leq \quad A(z\_1) G\_2(z, z\_1) + \int\_{z\_1}^z \frac{h(s) g\_2^2(z,s)}{4} \mathrm{d}s \end{aligned}$$

and so

$$\frac{1}{G\_2(z, z\_1)} \int\_{z\_1}^{z} \left( G\_2(z, s) \Phi(s) - \frac{h(s) g\_2^2(z, s)}{4} \right) ds \le A(z\_1),$$

which contradicts (21). This completes the proof of the theorem.

**Corollary 1.** *Let (2) hold. If there exist functions $\beta, h \in C^1([z\_0,\infty),\mathbb{R})$ such that*

$$\int\_{z\_0}^{\infty} \left( E\_1(s) - \frac{2^{r\_1}}{(r\_1+1)^{r\_1+1}} \frac{m(s) (\beta'(s))^{r\_1+1}}{\mu\_1^{r\_1} s^{2r\_1} \beta^{r\_1}(s)} \right) ds = \infty \tag{23}$$

*and*

$$\int\_{z\_0}^{\infty} \left( \Phi(s) - \frac{\left(h'(s)\right)^2}{4h(s)} \right) ds = \infty,\tag{24}$$

*for some μ*<sup>1</sup> ∈ (0, 1) *and every A*1, *A*<sup>2</sup> > 0*, then (1) is oscillatory.*

#### **3. Example**

This section presents some examples that examine the applicability of the theoretical results.

**Example 1.** *Consider the following equation:*

$$
\left(\delta + \frac{1}{2}\delta\left(\frac{1}{3}z\right)\right)^{(4)} + \frac{\tilde{\omega}\_0}{z^4}\delta\left(\frac{1}{2}z\right) = 0, \; z \ge 1, \tilde{\omega}\_0 > 0. \tag{25}
$$

*Let r*<sup>1</sup> = *r*<sup>2</sup> = 1, *m*(*z*) = 1, *y*˜(*z*) = 1/2, *α*˜(*z*) = *z*/3, *α*(*z*) = *z*/2 *and ω*˜(*z*) = *ω*˜ 0/*z*4*. Hence, it is easy to see that*

$$\int\_{z\_0}^{\infty} \frac{1}{m^{1/r\_1}(s)} ds = \infty, \quad E\_1(z) = \frac{\tilde{\omega}\_0}{16z}$$

*and*

$$
\Phi(z) := \frac{\tilde{\omega}\_0}{24}.
$$

*If we put $\beta(z) := z^3$ and $h(z) := z^2$, then we find the following:*

$$\begin{aligned} &\int\_{z\_0}^{\infty} \left( E\_1(s) - \frac{2^{r\_1}}{(r\_1+1)^{r\_1+1}} \frac{m(s) (\beta'(s))^{r\_1+1}}{\mu\_1^{r\_1} s^{2r\_1} \beta^{r\_1}(s)} \right) ds \\ &= \quad \int\_{z\_0}^{\infty} \left( \frac{\tilde{\omega}\_0}{16s} - \frac{9}{2\mu\_1 s} \right) ds \end{aligned}$$

*and*

$$\begin{aligned} &\int\_{z\_0}^{\infty} \left(\Phi(s) - \frac{\left(h'(s)\right)^2}{4h(s)}\right) \mathrm{d}s \\ &=\quad \int\_{z\_0}^{\infty} \left(\frac{\tilde{\omega}\_0}{24} - 1\right) \mathrm{d}s. \end{aligned}$$

*Thus,*

$$
\tilde{\omega}\_0 > 72 \tag{26}
$$

*and*

$$
\tilde{\omega}\_0 > 24. \tag{27}
$$

*From Corollary 1, Equation (25) is oscillatory if ω*˜ <sup>0</sup> > 72.

**Example 2.** *Consider the following equation:*

$$\left(z\left(\delta(z) + \tilde{y}\_0\delta(\gamma z)\right)'''\right)' + \frac{\tilde{\omega}\_0}{z^3}\delta(\eta z) = 0, \ z \ge 1,\tag{28}$$

*where $\tilde{y}\_0 \in [0, 1)$, $\gamma, \eta \in (0, 1)$ and $\tilde{\omega}\_0 > 0$. Let $r\_1 = r\_2 = 1$, $m(z) = z$, $\tilde{y}(z) = \tilde{y}\_0$, $\tilde{\alpha}(z) = \gamma z$, $\alpha(z) = \eta z$ and $\tilde{\omega}(z) = \tilde{\omega}\_0/z^3$. Hence, if we set $\beta(z) := z^2$ and $h(z) := z$, then we get*

$$E\_1(z) = \frac{\tilde{\omega}\_0 (1 - \tilde{y}\_0) \eta^3}{z}, \Phi(z) = \frac{\tilde{\omega}\_0 (1 - \tilde{y}\_0) \eta}{4z}.$$

*Thus, (23) and (24) become the following:*

$$\begin{aligned} &\int\_{z\_0}^{\infty} \left( E\_1(s) - \frac{2^{r\_1}}{(r\_1+1)^{r\_1+1}} \frac{m(s) (\beta'(s))^{r\_1+1}}{\mu\_1^{r\_1} s^{2r\_1} \beta^{r\_1}(s)} \right) \mathrm{d}s \\ &= \quad \int\_{z\_0}^{\infty} \left( \frac{\tilde{\omega}\_0 (1-\tilde{y}\_0) \eta^3}{s} - \frac{2}{\mu\_1 s} \right) \mathrm{d}s \end{aligned}$$

*and*

$$\begin{aligned} &\int\_{z\_0}^{\infty} \left(\Phi(s) - \frac{\left(h'(s)\right)^2}{4h(s)}\right) \mathrm{d}s \\ &= \quad \int\_{z\_0}^{\infty} \left(\frac{\tilde{\omega}\_0 (1 - \tilde{y}\_0)\eta}{4s} - \frac{1}{4s}\right) \mathrm{d}s. \end{aligned}$$

*So,*

$$
\tilde{\omega}\_0 > \frac{2}{(1 - \tilde{y}\_0)\eta^3} \tag{29}
$$

*and*

$$
\tilde{\omega}\_0 > \frac{1}{(1 - \tilde{y}\_0)\eta}.
$$

*From Corollary 1, Equation (28) is oscillatory if (29) holds*.

#### **4. Conclusions**

In this work, we prove some new oscillation theorems for (1) that complement related contributions to the subject. We use the Riccati technique and the integral averaging technique to obtain new results on the oscillation of Equation (1) under the condition $\int\_{z\_0}^{\infty} \frac{1}{m^{1/r\_1}(s)}\, ds = \infty$. In future work, we will study this type of equation under the following condition:

$$\int\_{z\_0}^{\infty} \frac{1}{m^{1/r\_1}(s)} ds < \infty.$$

We will also introduce some important oscillation criteria for fourth-order differential equations under the following setting:

$$
\xi(z) = \delta(z) + \overline{y}(z) \sum\_{i=1}^{j} \delta\_i(\overline{u}(z)).
$$

**Author Contributions:** Conceptualization, A.A., O.B. and R.A.E.-N.; methodology, A.A., O.B. and R.A.E.-N.; investigation, A.A., O.B. and R.A.E.-N.; resources, A.A., O.B. and R.A.E.-N.; data curation, A.A., O.B. and R.A.E.-N.; writing—original draft preparation, A.A., O.B. and R.A.E.-N.; writing—review and editing, A.A., O.B. and R.A.E.-N.; supervision, A.A., O.B. and R.A.E.-N.; project administration, A.A., O.B. and R.A.E.-N.; funding acquisition, A.A., O.B. and R.A.E.-N. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **BVPs Codes for Solving Optimal Control Problems**

**Francesca Mazzia 1,\*,† and Giuseppina Settanni 2,†**


**Abstract:** Optimal control problems arise in many applications and need suitable numerical methods to obtain a solution. Indirect methods are an interesting class of methods based on Pontryagin's minimum principle, which generates Hamiltonian Boundary Value Problems (BVPs). In this paper, we review some general-purpose codes for the solution of BVPs and show their efficiency in solving some challenging optimal control problems.

**Keywords:** optimal control; indirect methods; boundary value problems

#### **1. Introduction**

Many optimal control problems arise from an interest in observing the dynamic behavior of a state variable described by a dynamic equation, namely a differential equation, in several areas of application such as biology, chemistry, economics, physics, and engineering. For example, we can consider the development of a specific species of animals in an ecological preserve, the dynamical behavior of a chemical process, the evolution of the selling trend of a company, or the simulation of high-performance racing vehicles. The dynamical behavior of this kind of problem is influenced by the choice of control variables, such as incorporating the presence of predators in the ecological preserve. Moreover, both state and control variables must fulfil constraints and minimize or maximize an objective function.

Numerical methods for solving optimal control problems have been considered since the 1950s, when Bellman introduced dynamic programming [1], which requires solving a partial differential equation called the Hamilton–Jacobi–Bellman equation. Over time, the numerical approaches have come to be divided mainly into two classes: direct methods and indirect methods [2,3]. The first class, direct methods, is perhaps the most widely applied; it transforms the problem into a nonlinear optimization problem, or nonlinear programming problem, so it essentially relies on optimization techniques. The second class, indirect methods, transforms the original optimal control problem into a two-point boundary value problem, placing particular attention on numerical methods for solving systems of differential equations. This latter strategy is often considered disadvantageous for solving challenging optimal control problems.

Contrary to this last opinion, this work aims to review many of the available general-purpose codes for solving boundary value problems that are able to handle the optimal control problems arising from an indirect approach. The review also covers some numerical strategies that are useful, and sometimes necessary, for solving the problem numerically, such as continuation techniques associated with suitable penalty functions.

The most used solver for indirect methods has been the shooting method, based on guessing the value of the unknown boundary condition at one end of the interval, so that an initial value problem is solved to obtain the solution at the other end of the interval, where it is already known. Although the shooting method is simple to apply, it is not particularly advantageous when the boundary value problem is ill-conditioned or stiff, and

**Citation:** Mazzia, F.; Settanni, G. BVPs Codes for Solving Optimal Control Problems. *Mathematics* **2021**, *9*, 2618. https://doi.org/10.3390/ math9202618

Academic Editors: Fasma Diele and Janusz Brzdek

Received: 30 June 2021 Accepted: 12 October 2021 Published: 17 October 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

particularly when the optimal control problem is hypersensitive [4]. To overcome this matter, the multiple shooting method is considered: the time interval is partitioned into several subintervals and the shooting method is applied over each of them. Another widely used class of methods, being the most robust and fastest-converging, is the class of collocation methods, where piecewise polynomials are used to parametrize the state and control variables. Finally, the solution is computed by solving a nonlinear system by means of root-finding techniques.

In the literature there exist different general-purpose open-source codes for solving boundary value problems that are highly suitable for stiff and singular perturbation problems. Many of them have been implemented in Fortran, which has been for many years the preferred language for scientific computing. However, some effort has been made to make them also available in problem-solving environments such as Matlab or R. The first BVP codes, such as colsys/colnew [5,6], twpbvp [7], twpbvpl [8], acdc and colmod [9], coldae [10], mirkdc [11] and BVP\_M-2 [12,13], were written in Fortran/Fortran90. A collection of the latest releases of many of the cited Fortran codes, together with a driver that allows a common input definition and a list of numerical examples arising in several applications, is available on the website Test Set for BVP Solvers [14,15].

The Matlab environment offers two built-in functions, named bvp4c [16] and bvp5c [17], for solving BVPs. Other interesting codes usable in Matlab are bvptwp [18], TOM [19], HOFiD\_bvp [20] and bvpSuite2.0 [21], based on the code sbvp [22] for the solution of singular problems. The code bvpSuite2.0 can also be used for singular BVPs and differential algebraic problems of index 1. For the R community, the package bvpSolve is available instead, which allows many of the available Fortran codes to be run in R [4,23]. In Python, the package scipy.integrate includes the function solve\_bvp [24], a routine based on BVP\_M-2 and similar to the bvp4c Matlab code. All of them solve two-point boundary value problems; this means that, when applied to a second-order boundary value problem, they transform the original problem into a system of first-order differential equations with boundary conditions. The exceptions are the collocation codes colsys, colnew, colmod, coldae, bvpSuite, and the high-order finite difference code HOFiD\_bvp, since each of them can be applied directly to higher order problems.
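As a minimal illustration of the two-point BVP interface shared by these solvers, the sketch below uses SciPy's solve\_bvp on a toy linear problem ($y'' = -y$ with $y(0)=0$, $y(\pi/2)=1$, exact solution $\sin x$). The problem itself is an illustrative assumption, not one of the test problems considered later in the paper.

```python
import numpy as np
from scipy.integrate import solve_bvp

# y'' = -y rewritten as a first-order system: y1' = y2, y2' = -y1
def fun(x, y):
    return np.vstack([y[1], -y[0]])

# Separated boundary conditions y(0) = 0 and y(pi/2) = 1
def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0.0, np.pi / 2, 5)   # coarse initial mesh
y0 = np.zeros((2, x.size))           # trivial initial guess
sol = solve_bvp(fun, bc, x, y0)      # adaptive mesh, residual control
approx = sol.sol(np.pi / 4)[0]       # interpolated solution at pi/4
```

The solver refines the mesh until the residual meets the tolerance; `approx` should agree with $\sin(\pi/4) \approx 0.7071$ to roughly the default tolerance.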

Our aim is to apply some of the cited codes to the boundary value problems that arise when indirect methods are used for optimal control problems. Meanwhile, we will highlight some issues that can arise in handling BVP solvers, such as the choice of an initial mesh or the use of a continuation technique for nonlinear problems. To this aim, we show through some test problems how the proper use of these techniques and a good choice of the input parameters allow us to obtain a solution more efficiently than we could with default parameters. Since the aim of this paper is not to compare the selected codes, we do not show execution times, but we point out how the choice of a code depends on the problem.

We use the Matlab environment as the platform for our experiments, and we consider the codes available in the Matlab distribution, together with bvptwp and TOM. We do not present results for the collocation code bvpSuite2.0 because it does not output the same information as the other codes and it does not allow the use of a numerical Jacobian. For R users, all the examples could be solved using the codes available in the bvpSolve package. Since bvpSolve runs the Fortran codes by means of an interface, the results are the same as those obtained with the original Fortran codes.

The paper presents a few interesting problems; for other applications of the same codes considered here to more involved optimal control problems, we refer the reader to [25–28]. Moreover, we emphasize that it is not our aim to compare direct and indirect methods, but only to show the efficiency of indirect methods, which are often not taken into consideration because users do not know the potential of general-purpose codes for BVPs.

The paper is organized as follows: in Section 2 we briefly introduce the indirect methods; in Section 3 we review codes for solving boundary value problems (BVPs), classified by programming environment: Fortran codes in Section 3.1, Matlab codes in Section 3.2 and R codes in Section 3.3. In Sections 4–8, interesting optimal control problems are solved using indirect methods and the related BVPs. Finally, in Section 9 we give some conclusions, highlighting the potential of the BVP codes considered.

#### **2. Optimal Control Problems: Indirect Methods**

Given a non-empty compact time interval $[t\_0, t\_f] \subset \mathbb{R}$, with $t\_0 < t\_f$, an optimal control problem is defined as

$$\begin{aligned} \text{minimize} \quad & \varphi(t\_f, \mathbf{x}\_f) + \int\_{t\_0}^{t\_f} L(t, \mathbf{x}, \mathbf{u}) \, dt, \\ & \mathbf{x}' = f(t, \mathbf{x}, \mathbf{u}), \\ & \mathbf{b}(\mathbf{x}(t\_0), \mathbf{x}(t\_f)) = \mathbf{0}, \\ & \mathbf{u} \in \mathcal{U}, \end{aligned} \tag{1}$$

where $\varphi$ and $L$ are sufficiently smooth functions involved in the minimization of the objective function, $\mathbf{x}(t) \in \mathbb{R}^n$ is the state variable of the dynamical system, $\mathbf{u}(t) \in \mathcal{U} \subset \mathbb{R}^m$ is the control variable with $\mathcal{U}$ the set of admissible controls, $f$ is a regular function and $\mathbf{b}(\mathbf{x}(t\_0), \mathbf{x}(t\_f)) = \mathbf{0}$ are the general boundary conditions. Furthermore, Problem (1) might be subject to a path constraint that can be expressed by a mixed control-state constraint $\mathbf{c}(t, \mathbf{x}, \mathbf{u}) \le \mathbf{0}$ or a pure state constraint $s(t, \mathbf{x}) \le 0$.

There exist two main approaches for solving optimal control problems (1): direct methods and indirect methods [2,29]. Direct methods suitably discretize the infinite-dimensional optimal control problem, giving back a finite-dimensional optimization problem that can be solved using appropriate nonlinear programming methods, such as sequential quadratic programming. This approach proves robust and efficient when applied to many problems and, since it does not require deep knowledge of optimal control theory, it is highly advantageous to use.

Indirect methods, on the other hand, are related to Pontryagin's minimum principle [29], a necessary condition for optimality that transforms the original Problem (1) into a two-point boundary value problem for the state and the adjoint Lagrange multiplier functions, defined as

$$\begin{aligned} \mathbf{x}' &= f(t, \mathbf{x}, \mathbf{u}), \\ \boldsymbol{\lambda}' &= -H\_{\mathbf{x}}(t, \mathbf{x}, \mathbf{u}, \boldsymbol{\lambda}), \\ \mathbf{b}(\mathbf{x}(t\_0), \mathbf{x}(t\_f)) &= \mathbf{0}, \\ \mathbf{b}\_{\mathbf{x}(t\_0)}(\mathbf{x}(t\_0), \mathbf{x}(t\_f)) \boldsymbol{\omega} &= \boldsymbol{\lambda}(t\_0), \\ \mathbf{b}\_{\mathbf{x}(t\_f)}(\mathbf{x}(t\_0), \mathbf{x}(t\_f)) \boldsymbol{\omega} &= -\varphi\_{\mathbf{x}}(t\_f, \mathbf{x}\_f) - \boldsymbol{\lambda}(t\_f), \end{aligned} \tag{2}$$

where $H(t, \mathbf{x}, \mathbf{u}, \boldsymbol{\lambda}) = L(t, \mathbf{x}, \mathbf{u}) + \boldsymbol{\lambda} \cdot f(t, \mathbf{x}, \mathbf{u})$ is the Hamiltonian function and the optimal control $\mathbf{u}^{\*}(t)$ is obtained by a local optimization of the Hamiltonian, namely $\mathbf{u}^{\*}(t) = \arg\min\_{\mathbf{u} \in \mathcal{U}} H(t, \mathbf{x}, \mathbf{u}, \boldsymbol{\lambda})$. In favor of this approach is the possibility of computing an

accurate numerical solution; against it are some drawbacks, such as the need for a good initial guess for the solution of the generated nonlinear boundary value problem. To overcome this matter, we focus on the application of some well-known two-point boundary value codes that are considered extremely efficient and robust for solving the BVP (2).
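The reduction from (1) to (2) can be reproduced symbolically on a small instance. The sketch below is a hypothetical scalar example of our own (minimize $\frac{1}{2}\int (x^2+u^2)\,dt$ with $x' = u$): SymPy forms the Hamiltonian, the adjoint equation $\lambda' = -H\_x$ and the optimality condition $H\_u = 0$.

```python
import sympy as sp

x, u, lam = sp.symbols('x u lam')

L = (x**2 + u**2) / 2        # running cost
f = u                        # scalar dynamics x' = u
H = L + lam * f              # Hamiltonian H = L + lambda * f

adjoint_rhs = -sp.diff(H, x)                       # lambda' = -H_x
u_star = sp.solve(sp.Eq(sp.diff(H, u), 0), u)[0]   # H_u = 0

print(adjoint_rhs, u_star)   # -x and -lam
```

Substituting $u^\* = -\lambda$ back into the dynamics and the adjoint equation yields exactly a two-point BVP of the form (2).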

#### **3. Codes for BVPs**

Boundary value problems arise in many fields of application, so in the last 40 years a great effort has been made to develop efficient methods for solving this kind of problem. Many of these methods are applied to two-point boundary value problems, i.e., to systems of first-order ordinary differential equations with boundary conditions; others can be applied directly to second- or higher-order boundary value problems without any transformation of the original problem. Moreover, these codes are available in different programming environments, so in the following we give information about their characteristics.

#### *3.1. Fortran Codes*

The code colsys was written by U. Ascher, R. Mattheij and R. Russell [5]; it is based on the method of spline collocation at Gaussian points and solves mixed-order systems of multipoint BVPs, high-order equations, problems with non-separated boundary conditions and problems with singularities. The code computes the solution on a sequence of meshes that are refined by error equidistribution to satisfy the required input tolerance. The error estimate is obtained by roughly halving the mesh at each step. The components of the collocation solution are expressed in a B-spline basis, evaluated by de Boor's algorithms. A damped Newton method of quasilinearization is used for solving nonlinear problems.

The code colnew [6,30] is the descendant of colsys and, unlike its predecessor, uses a Runge–Kutta monomial representation for the piecewise polynomial solution instead of a B-spline basis. This change makes the code faster than the original colsys.

The codes twpbvp, twpbvpl and acdc were written by J. R. Cash and his collaborators. The code twpbvp [7], differently from colsys, uses mono-implicit Runge–Kutta (MIRK) formulae and a deferred correction method for solving two-point boundary value problems. The MIRK formulae are implemented within a deferred correction procedure, which allows the solution of a high-order method to be obtained using only low-order schemes. The code constructs a mesh refinement that is very well suited to singular perturbation problems.

The code twpbvpl, differently from twpbvp, is based on three Lobatto Runge–Kutta formulae of orders 4, 6 and 8, implemented within a suitable deferred correction scheme and solved with a damped Newton iteration. The code is devoted to efficiently solving nonlinear stiff two-point boundary value problems.

The code acdc [9] was developed from twpbvpl by including an automatic continuation strategy, implemented to suitably solve linear and nonlinear singular perturbation problems characterized by a small parameter $\epsilon$. The parameter $\epsilon$ often brings about stiffness in the problem, so that for a nonlinear problem a good initial solution is required to reach convergence of the Newton method. The continuation strategy is designed to overcome these difficulties: it consists of selecting an initial perturbation parameter $\epsilon\_0$, chosen so that the problem is not particularly stiff, usually $\epsilon\_0 \approx 1$, and solving it to a certain exit tolerance *tol*. The idea is to obtain an initial rough profile of the solution of the problem for the desired perturbation parameter $\epsilon$. Then, having chosen an integer $N\_{\epsilon}$, the interval $[\epsilon, \epsilon\_0]$ is discretized into $N\_{\epsilon}$ subintervals, so that

$$
\epsilon\_0 > \epsilon\_1 > \dots > \epsilon\_{N\_{\epsilon} - 1} > \epsilon\_{N\_{\epsilon}}.
$$

Now, $N\_{\epsilon} + 1$ boundary value problems satisfying an exit tolerance *tol* are iteratively computed, so that the solution obtained at iteration $i = 0, \dots, N\_{\epsilon} - 1$, for $\epsilon\_i$ on a mesh $\pi\_i$, is the initial solution of the next problem with perturbation parameter $\epsilon\_{i+1}$. A crucial point of this strategy is the selection of the initial parameter $\epsilon\_0$ and of the number of discretization steps $N\_{\epsilon}$; both depend on the problem. In codes such as acdc, $\epsilon\_0$ is set equal to 0.5 by default; however, the suggestion is not to take $\epsilon\_0$ extremely small, so that an accurate solution of the problem can be obtained for that value of the perturbation. The code acdc chooses the sequence of parameters and the total number of continuation steps automatically. It is however possible to implement a continuation strategy for the other codes; in this case, for $N\_{\epsilon}$ it would be convenient to start with a small integer and then double or increment it if the procedure does not converge.
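A bare-bones version of this warm-started continuation can be sketched with SciPy's solve\_bvp. The test problem ($\epsilon y'' = y$, $y(0)=1$, $y(1)=0$, with boundary layers at both ends) and the halving schedule are illustrative assumptions; acdc chooses its own sequence automatically.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Singularly perturbed model problem: eps * y'' = y, y(0) = 1, y(1) = 0
def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[0]])

x = np.linspace(0.0, 1.0, 16)      # initial mesh for the easiest problem
y = np.zeros((2, x.size))          # rough initial profile

eps = 0.5                          # eps_0: mildly stiff starting value
while eps > 1e-3:
    fun = lambda t, w, e=eps: np.vstack([w[1], w[0] / e])
    sol = solve_bvp(fun, bc, x, y, max_nodes=10000)
    x, y = sol.x, sol.y            # warm-start the next, stiffer problem
    eps /= 2.0
```

Each solve reuses the previous mesh and solution as the initial guess, which is what keeps the Newton iteration convergent as the problem gets stiffer.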

The code colmod [9] is a modified version of the code colsys using the same continuation strategy adopted in acdc.

The codes twpbvpc, twpbvplc and acdcc [14] are modified versions of the codes twpbvp, twpbvpl and acdc that implement a mesh selection strategy based on the estimation of the local error and of two conditioning parameters [31]. This hybrid mesh strategy was first used in the Matlab code TOM, described in the next section.

The code mirkdc, written by W. Enright and P. Muir [11], uses MIRK methods and controls the defect; BVP\_M-2, written by J.J. Boisvert, P. Muir and R. Spiteri [12], is also based on MIRK methods, but it controls the defect and/or the global error and, moreover, gives information about the conditioning constant.

Detailed information about all the numerical schemes and techniques related to the Fortran codes in this subsection can be found in [32] where a review of global methods for solving BVPs is presented.

#### *3.2. Matlab Codes*

The BVP codes officially available in the Matlab environment are bvp4c [33] and bvp5c [34]. The code bvp4c [16] is based on a collocation method with a $C^1$ piecewise cubic polynomial, or equivalently on an implicit Runge–Kutta formula with a continuous extension; namely, the collocation method is equivalent to a three-stage Lobatto IIIa implicit Runge–Kutta formula. This code implements a method of order four and solves a large class of BVPs, such as equations with non-separated boundary conditions, singular problems, and Sturm–Liouville problems. An advantage of this code is its ability to compute numerical partial derivatives and use a vectorized finite difference Jacobian. Differently from the other codes, the error estimation and the mesh selection are based on the residual. We recall that if $S(x)$ approximates the solution $y(x)$, then the residual in the differential equation $y'(x) = f(x, y(x))$ is given by $r(x) = |S'(x) - f(x, S(x))|$.
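The residual is easy to compute for any differentiable approximation. The sketch below is an illustrative stand-in for a residual estimator, not bvp4c's internal one: it builds a cubic spline $S$ through samples of the exact solution of $y' = y$ and evaluates $r(x) = |S'(x) - f(x, S(x))|$.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# ODE y' = f(x, y) with f(x, y) = y; exact solution y(x) = e^x
x = np.linspace(0.0, 1.0, 21)
S = CubicSpline(x, np.exp(x))        # piecewise-cubic approximation S(x)

xs = np.linspace(0.0, 1.0, 200)
residual = np.abs(S(xs, 1) - S(xs))  # r(x) = |S'(x) - f(x, S(x))|
```

On this mesh the residual is uniformly small; where the residual is large, a residual-controlled code inserts mesh points.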

The code bvp5c is based on the four-stage Lobatto IIIa formula, giving a method of order five. Contrary to bvp4c, bvp5c controls both the residual and the true approximate error. It is clear that if the BVP is well-conditioned, a small residual implies a small true error, but this does not hold if the BVP is ill-conditioned; hence the strategy of controlling both the residual and the true error is more efficient than the one applied in bvp4c.

The next two codes TOM and HOFiD\_bvp belong to the class of Boundary Value Methods [35], especially suitable for solving BVPs.

The code TOM [19], based on Top Order Methods and the BS methods of orders four, six, eight and ten, is distinguished by its use of conditioning in the mesh selection strategy. In [36] the authors analyzed how the conditioning and the stiffness of a problem depend on the estimation of the following conditioning parameters:


Specifically, the problem is: well-conditioned if $\kappa$, $\kappa\_1$, $\gamma\_1$ and $\sigma$ are of moderate size; stiff if $\sigma \gg 1$; ill-conditioned if $\kappa \gg 1$ and $\gamma\_1 \gg 1$; ill-posed if $\kappa\_2 > \kappa\_1$. A complete description of the parameters and of the algorithms used to compute their approximations is presented in [37]. The hybrid mesh selection algorithm controls the approximation of the conditioning parameters and chooses the mesh points so that the estimates of these discrete quantities are close to the continuous ones. Meanwhile, the code checks that the error of the computed solution is less than a prescribed tolerance. The error approximation is computed using a deferred correction technique with a higher order method; moreover, a

quasi-linearization technique is implemented to solve nonlinear problems. The release of May 2021, used for the numerical tests in this paper, offers the choice of two different mesh selection strategies, one suitable for regular problems and the other for stiff or singular perturbation problems.

The code HOFiD\_bvp [20] is based on high-order finite difference (HOFiD) schemes of orders four, six, eight and ten, and an upwind method. Each derivative in the high-order boundary value problem is approximated directly by these schemes, so no transformation of the problem into a system of first-order differential equations is required. The error estimate is computed by applying the deferred correction technique to two methods of consecutive order. The mesh selection is based on error equidistribution. For nonlinear problems, the code uses a continuation strategy, as explained previously, combined with an order variation strategy: a solution obtained with a lower order and tolerance can be used as the initial solution to run the code with a higher order and tighter tolerance. The adopted strategy makes the code suitable for solving high-order boundary value problems that can be singularly perturbed, singular, multipoint, or have discontinuous terms. Other versions of the code solve singular second-order initial value problems [38], Sturm–Liouville problems [39] and multi-parameter spectral problems [40].

An interesting code for solving high-order BVPs is the bvpSuite2.0 package, based on collocation methods. The collocation points can be chosen by the user among Gauss, Lobatto, uniform or user-defined points. The code solves implicit BVPs, eigenvalue problems and differential algebraic problems of index 1, and it is particularly suited for singular problems. BvpSuite2.0 [21] is the evolution of two previous versions of the code, with improved usability. The mesh selection strategy used is described in [41].

Finally, we consider the Matlab code bvptwp [18], an efficient translation of the Fortran codes twpbvp, twpbvpl and acdc into the Matlab environment, where they are named twpbvp\_m, twpbvp\_l and acdc. Moreover, the Matlab package also contains the translation of the Fortran versions of the same codes that use a hybrid mesh selection based on conditioning, similar to the one used in the code TOM, called twpbvpc\_m, twpbvpc\_l and acdcc. The code bvptwp is available on the CALGO website and on the web page Test Set for BVP Solvers [15]. The version used in this paper is the release of May 2021.

#### *3.3. R Codes*

In recent years, use of the open-source software R has been growing among problem-solving environments (PSEs), and although it is mainly used as software for statistics and visualization, several powerful methods for solving differential equations have been developed for it. In this regard we highlight the package bvpSolve [23], which, through an interface, makes available all the Fortran codes introduced in Section 3.1.

#### *3.4. Experiments*

Since our aim is to show the suitability and the efficiency of the BVP solvers in computing the solution of the Hamiltonian boundary value problems deriving from the application of the indirect method to optimal control problems, in the following sections we carry out some numerical tests. We run experiments using the Matlab codes bvp4c, bvp5c, and bvptwp. For the last solver we consider all the available codes, i.e., twpbvp\_m, twpbvp\_l, twpbvpc\_m, twpbvpc\_l, acdc and acdcc. We also add the results obtained with the new release of the code TOM (May 2021). This code allows the choice of a boundary value method of specific order and of a mesh variation strategy. For all the examples we choose the BS method of order 4, and we denote by tom the code run with the mesh variation for regular problems and by tomc the one implementing the mesh variation suited for stiff problems. For R users, all the examples could be solved by applying the codes included in the bvpSolve package. Since bvpSolve runs the Fortran codes through an interface, the obtained results are similar to those computed by the original Fortran codes. We also observe that some of the codes considered here for the numerical

tests are also present in the R package bvpSolve rel. 1.4.2. The R version of these codes on the same examples show comparable results.

In our tests we use an initial mesh of 16 equidistant points and an initial solution with zero elements, except in the examples where otherwise specified. Moreover, the maximum mesh size allowed has been set to $10^4$ and the function evaluations have been vectorized. In the tables we report the number of points in the final mesh *fM* (in reading this value, recall that the code TOM does not use any auxiliary steps, while all the other codes also need several intermediate steps depending on the order of the methods used), the total number of vectorized function evaluations *NVF*, and the mixed relative error on some significant components of the solution, defined for a generic component *x* by the following formula

$$\max\_{i} \frac{|\mathbf{x}\_{i} - \mathbf{x}(t\_{i})|}{(1 + |\mathbf{x}(t\_{i})|)}$$

where $x\_i$ is the numerical approximation of $x(t\_i)$. If the exact solution of the test problem is not available, the error is computed by running the code twpbvpc\_l using a doubled mesh and a halved input tolerance. For all the codes we give as input equal absolute and relative tolerances. If the codes twpbvp\_m/twpbvpc\_m, twpbvp\_l/twpbvpc\_l, acdc/acdcc give the same results, we report only one result in the tables. If a code cannot solve the problem, we put \* in the tables.
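For reference, the mixed relative error above is a one-liner in NumPy; the helper name `mixed_error` below is our own, not taken from any of the cited codes.

```python
import numpy as np

def mixed_error(num, ref):
    """max_i |x_i - x(t_i)| / (1 + |x(t_i)|) over one solution component."""
    num, ref = np.asarray(num, float), np.asarray(ref, float)
    return np.max(np.abs(num - ref) / (1.0 + np.abs(ref)))

err = mixed_error([1.1, 0.0], [1.0, 0.0])  # -> 0.05
```

The `1 +` in the denominator makes the measure behave like an absolute error where the solution is small and like a relative error where it is large.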

#### **4. Hypersensitive Optimal Control Problems**

The first class of examples we consider is the class of hypersensitive optimal control problems. Problems in this class are stiff and need a suitable mesh variation strategy when solved using either direct or indirect methods. Usually, they are considered extremely difficult to solve by indirect methods, because the solution is sensitive to changes in the initial conditions. In [42] the authors describe a dichotomic basis method inspired by the computation of the solution of singular perturbation problems for stiff initial value problems. In the following examples we show that general-purpose finite difference codes can solve this class of problems very efficiently. The codes can be applied for the numerical solution of completely hypersensitive problems, whose solution has fast rates in all directions, and of partially hypersensitive problems, with the fast rate in only one direction.

#### *4.1. Nonlinear Mass Spring System with Quadratic Cost*

As a first example we consider a hypersensitive nonlinear mass-spring system [43], where the mass position $x$ is defined such that the spring is unstretched when $x = 0$. The spring force is $F\_s(x) = -k\_1 x - k\_2 x^3$. The control is exerted on the mass by an external force denoted by $F(t)$, hence the control input is $u(t) = F(t)$. The equation of motion of the mass is $m x'' = F\_s(x) + F(t)$. We assume that $k\_1 = 1$, $k\_2 = 1$ and $m = 1$.

The optimal control problem is to determine the control $u$ on the fixed time interval $[0, T]$ such that

$$\begin{cases} \min\_{u} \frac{1}{2} \int\_{0}^{T} \left(\mathbf{x}^{2} + \upsilon^{2} + u^{2}\right) dt\\ \mathbf{x}' = \upsilon \\ \upsilon' = -\mathbf{x} - \mathbf{x}^{3} + u \\ \mathbf{x}(0) = 1, \upsilon(0) = 0, \, \mathbf{x}(T) = 0.75, \, \upsilon(T) = 0. \end{cases}$$

The associated Hamiltonian is

$$H(\mathbf{x}, \upsilon, \lambda, \mu, u) = \frac{1}{2}(\mathbf{x}^2 + \upsilon^2 + u^2) + \lambda\upsilon + \mu(-\mathbf{x} - \mathbf{x}^3 + u)$$

and the optimal control, obtained by imposing $\frac{\partial H}{\partial u} = 0$, is given by $u^{\*} = -\mu$. Therefore, applying the indirect method, the optimal control problem is equivalent to solving the following BVP

$$\begin{aligned} \mathbf{x}' &= \upsilon \\ \upsilon' &= -\mathbf{x} - \mathbf{x}^3 - \mu \\ \lambda' &= -\mathbf{x} + \mu(1 + 3\mathbf{x}^2) \\ \mu' &= -\upsilon - \lambda \\ \mathbf{x}(0) &= 1, \upsilon(0) = 0, \mathbf{x}(T) = 0.75, \upsilon(T) = 0. \end{aligned} \tag{3}$$
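BVP (3) can be fed directly to any general-purpose BVP solver. The sketch below uses SciPy's solve\_bvp with the same setup as in the text (16 equidistant points, zero initial solution) for $T = 20$; this is a stand-in for the Matlab codes actually benchmarked in the tables, not one of them.

```python
import numpy as np
from scipy.integrate import solve_bvp

T = 20.0

def fun(t, w):
    x, v, lam, mu = w
    return np.vstack([v,                        # x'   = v
                      -x - x**3 - mu,           # v'   = -x - x^3 - mu
                      -x + mu * (1 + 3*x**2),   # lam' = -x + mu(1 + 3x^2)
                      -v - lam])                # mu'  = -v - lam

def bc(wa, wb):
    # x(0) = 1, v(0) = 0, x(T) = 0.75, v(T) = 0
    return np.array([wa[0] - 1.0, wa[1], wb[0] - 0.75, wb[1]])

t = np.linspace(0.0, T, 16)        # 16 equidistant points
w0 = np.zeros((4, t.size))         # zero initial solution
sol = solve_bvp(fun, bc, t, w0, max_nodes=10000)
u_opt = -sol.sol(t)[3]             # recovered control u* = -mu
```

For larger $T$ the zero guess is no longer enough, which is exactly the failure mode discussed next.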

In Figure 1 we show the solution for $T = 20$ and $T = 40$. In Table 1 we present some results obtained by increasing the value of $T$ from $20$ to $T = 2 \times 10^6$. First, we choose an initial mesh of 16 equidistant points and try to run all the codes except acdc and acdcc, since for this formulation of the problem there is no parameter to be used for continuation. While for $T = 20$ all the methods converge to the solution, and for $T = 2 \times 10^4$ only the codes bvp4c and bvp5c fail, for $T = 2 \times 10^6$ none reaches convergence except the codes tom and tomc (see Table 2). Essentially, bvp4c and bvp5c run into trouble with a singular Jacobian, while the other codes hit the maximum number of mesh points allowed. In the latter case we could increase the maximum number of mesh points; however, we will try to overcome this matter differently and to debunk the idea that indirect methods are not as competitive as direct ones.


**Table 1.** Nonlinear Mass spring: final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *x*, *v*, *u*. The solution is computed starting from an initial mesh with 16 equidistant points.

**Table 2.** Nonlinear Mass spring, *T* = 2 × 10<sup>6</sup>: final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *x*, *v*, *u*; initial mesh with 16 equidistant points.


**Figure 1.** Mass spring: solution in time for the mass position *x* on the left and the control *u* on the right. Final time *T* = 20 (blue line) and *T* = 40 (red dash-dot line).

First, we point out that the results presented in Table 2 clearly show that the mesh selection based on conditioning allows the solution of the problem using a reduced number of mesh points and vectorized function evaluations. To obtain convergence for the other codes, it can be sufficient, in some cases, to increase the number of points in the initial mesh. To this aim, in Table 3 we show the numerical results obtained for bvp5c using 501 or 1001 initial equidistant points and *T* = 2 × 10<sup>4</sup>. This strategy is advantageous for bvp5c, but not for bvp4c, which needs an initial mesh of 2501 points to reach convergence. Conversely, we observe that bvp5c is not able to reach convergence if we use an initial mesh of 2501 points. For the other classes of methods, increasing the number of mesh points is not advantageous in terms of computational cost and execution time.

**Table 3.** Nonlinear Mass spring, initial mesh (IM) with 501, 1001 and 2501 equidistant points and *T* = 2 × 10<sup>4</sup>: final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *x*, *v*, *u*.


To improve the performance of all the considered codes, the BVP (3) is reformulated using a variable transformation. Letting *τ* = *t*/*T* with *τ* ∈ [0, 1], we solve the following BVP

$$\begin{aligned} x' &= Tv\\ v' &= -T(x + x^3 + \mu) \\ \lambda' &= T\left(-x + \mu(1 + 3x^2)\right) \\ \mu' &= -T(v + \lambda) \\ x(0) &= 1,\ v(0) = 0,\ x(1) = 0.75,\ v(1) = 0. \end{aligned} \tag{4}$$

Now, we set the perturbation parameter *ε* = 1/*T*, so that for values of the parameter less than 1 we can run the codes acdc and acdcc, which use an automatic continuation strategy. For all the other codes, we can adopt a continuation strategy starting with an initial value *ε*<sub>0</sub> that guarantees convergence; in our case we use *ε*<sub>0</sub> = 1/20 and then decrease this value until the required value is reached. To this aim we let the perturbation parameter vary between *ε*<sub>0</sub> and *ε* among the values 0.5 × 10<sup>−*j*</sup>, *j* = 2, ... , 6. This means that we discretize the interval with *N<sub>ε</sub>* = 3 and *N<sub>ε</sub>* = 5 steps for *T* = 2 × 10<sup>4</sup> and *T* = 2 × 10<sup>6</sup>, respectively. In Table 4 we show the results obtained by applying this successful continuation strategy. All the methods converge for all the values of *T* at a low computational cost. In this case, the codes based on the automatic continuation strategy are very efficient: using acdc/acdcc the user does not need to decide how to change the continuation parameter, even if in some cases the automatic continuation could fail to reach the final desired value.
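The manual continuation loop just described is straightforward to wrap around any solver. The sketch below (again using SciPy's `solve_bvp` as a stand-in for the Matlab codes, and stopping at *T* = 2 × 10<sup>3</sup> to keep the run short) solves the rescaled BVP (4), reusing each converged solution as the initial guess for the next, harder value of the parameter:

```python
import numpy as np
from scipy.integrate import solve_bvp

def bc(ya, yb):
    # x(0) = 1, v(0) = 0, x(1) = 0.75, v(1) = 0
    return np.array([ya[0] - 1.0, ya[1], yb[0] - 0.75, yb[1]])

def make_odes(T):
    def odes(tau, y):
        x, v, lam, mu = y
        return np.vstack([T * v,
                          -T * (x + x**3 + mu),
                          T * (-x + mu * (1.0 + 3.0 * x**2)),
                          -T * (v + lam)])
    return odes

tau = np.linspace(0.0, 1.0, 101)
y = np.zeros((4, tau.size))
y[0] = 1.0
for j in range(1, 4):                  # eps = 0.5e-1, 0.5e-2, 0.5e-3
    eps = 0.5 * 10.0 ** (-j)
    T = 1.0 / eps                      # T = 20, 200, 2000
    sol = solve_bvp(make_odes(T), bc, tau, y, tol=1e-6, max_nodes=100000)
    tau, y = sol.x, sol.y              # warm-start the next, stiffer problem
```

The values 0.5 × 10<sup>−*j*</sup> mirror the continuation sequence used in the experiments; pushing *j* further reaches the larger horizons of Table 4 at the price of longer runs.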

More information about this problem can be obtained by analyzing the conditioning parameters given in output by the codes twpbvpc\_m and tomc, reported in Table 5. As we can see, the stiffness parameter *σ* grows with the width of the interval and depends on it; moreover, *κ*<sub>2</sub> > *κ*<sub>1</sub> shows that the problem could be ill posed, and *γ*<sub>1</sub> tending to zero shows the presence of different time scales. The transformation of the time interval to [0, 1] does not change the stiffness of the problem, but the transformed problem is well posed (see Table 6).

#### *4.2. Completely Hypersensitive Control Problem*

This example is a hypersensitive optimal control problem implemented in ICLOCS2, described as "extremely difficult" to solve using an indirect method [42,44] and given by

$$\begin{cases} \min\limits_{x,u} \int_0^T (x^2 + u^2) \, dt \\ x' = -x^3 + u \\ x(0) = 1,\ x(T) = 1.5. \end{cases} \tag{5}$$

Considering the Hamiltonian *H*(*x*, *λ*, *u*) = *x*<sup>2</sup> + *u*<sup>2</sup> + *λ*(−*x*<sup>3</sup> + *u*), the first-order necessary conditions for optimality lead to the following boundary value problem (BVP)

$$\begin{aligned} x' &= -x^3 - \frac{\lambda}{2} \\ \lambda' &= -2x + 3\lambda x^2 \\ x(0) &= 1, \; x(T) = 1.5, \end{aligned} \tag{6}$$

where the optimal control is *u*<sup>∗</sup> = −*λ*/2. We choose *T* = 10<sup>4</sup>, *T* = 10<sup>6</sup> and an initial mesh of 11 equidistant points; the solution is plotted in Figure 2. The numerical results shown in Table 7 point out the good performance of all the codes except bvp4c and bvp5c, which are not suitable for stiff problems; indeed, we underline that they converge to the solution only up to *T* = 38 and *T* = 29, respectively.
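To make the structure of (6) concrete, here is a short hedged sketch with SciPy's `solve_bvp` (our illustration, not one of the compared codes) for a modest horizon, *T* = 20, well below the values used in Table 7:

```python
import numpy as np
from scipy.integrate import solve_bvp

T = 20.0  # modest horizon; Table 7 uses T = 1e4 and 1e6

def odes(t, y):
    # y = [x, lambda]; the control is eliminated via u* = -lambda/2
    x, lam = y
    return np.vstack([-x**3 - 0.5 * lam,
                      -2.0 * x + 3.0 * lam * x**2])

def bc(ya, yb):
    # x(0) = 1, x(T) = 1.5
    return np.array([ya[0] - 1.0, yb[0] - 1.5])

t0 = np.linspace(0.0, T, 11)          # 11 equidistant initial mesh points
y0 = np.ones((2, t0.size))            # constant initial guess
sol = solve_bvp(odes, bc, t0, y0, tol=1e-6, max_nodes=50000)
u_opt = -0.5 * sol.sol(sol.x)[1]      # recover the optimal control
```

For the horizons in Table 7 the mesh must resolve two narrow boundary layers around a long, flat interior plateau, which is exactly where conditioning-based mesh selection pays off.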

In Table 8 the approximations of the conditioning constants show the dependence of the stiffness on the width of the interval. Moreover, the numerical results underline the necessity of adopting a good mesh selection strategy for computing the solution.

**Figure 2.** Hypersensitive problem: solution in time for the state *x* on the left and the control *u* on the right, final time *T* = 10<sup>4</sup>.




**Table 5.** Nonlinear Mass spring: conditioning parameters computed using *tol* = 10<sup>−6</sup> and initial mesh with 11 equidistant points.

**Table 6.** Nonlinear Mass spring using the variable *τ* = *t*/*T*: conditioning parameters computed using *tol* = 10<sup>−6</sup> and initial mesh with 11 equidistant points.


**Table 7.** Hypersensitive problem solved with an initial mesh with 11 equidistant points: final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *x*, *v*, *u*.



**Table 8.** Hypersensitive problem: conditioning parameters computed using *tol* = 10<sup>−6</sup> and initial mesh with 11 equidistant points.

For the purpose of improving the performance and overcoming some drawbacks, we propose, as already done for the test problem in Section 4.1, to use the variable transformation *τ* = *t*/*T*, so that the BVP (6) can be reformulated for *τ* ∈ [0, 1] as

$$\begin{aligned} x' &= T\left(-x^3 - \frac{\lambda}{2}\right) \\ \lambda' &= T\left(-2x + 3\lambda x^2\right) \\ x(0) &= 1, \; x(1) = 1.5. \end{aligned} \tag{7}$$

The advantage of this formulation is that, considering *ε* = 1/*T* as a perturbation parameter, we can apply the continuation strategy to that parameter. In Table 9 we report the results obtained using the starting value *ε*<sub>0</sub> = 1/10 and changing the continuation parameter between *ε*<sub>0</sub> and *ε* among the values of the set 10<sup>−2</sup>, 10<sup>−3</sup>, 10<sup>−4</sup>. We recall that acdc and acdcc, which use an automatic continuation strategy, need only the desired value of *ε* and use the default value 0.5 as *ε*<sub>0</sub>. The numerical tests and the conditioning parameters in Tables 8 and 10 clearly show that, for this class of problems, if we cannot use a continuation on a parameter, the codes able to give a solution are the ones suited for stiff problems, which work even better if the mesh selection is also appropriate for this class of problems.

**Table 9.** Hypersensitive problem using the variable *τ* = *t*/*T*, initial mesh with 11 equidistant points and continuation strategy on *T*: final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *x*, *v*, *u*.



**Table 10.** Hypersensitive problem using the variable *τ* = *t*/*T*: conditioning parameters computed using *tol* = 10<sup>−6</sup> and initial mesh with 11 equidistant points.

#### **5. Bang-Bang Optimal Control Problem**

The bang-bang optimal control problem [45] is among the more challenging ones. It arises from a model in which a point unit mass *m* is subject to a limited force in one-dimensional space, i.e., *mx*″(*t*) = *u*(*t*) with |*u*(*t*)| ≤ 1. The problem of moving the mass from *x* = 0 to the maximum distance in one second can be formulated as follows

$$\begin{aligned} \min\ -x(1) &= \int_0^1 (-v)\,dt, \\ x' &= v, \\ v' &= u, \qquad t \in [0, 1], \\ x(0) &= v(0) = v(1) = 0, \\ |u| &\le 1. \end{aligned} \tag{8}$$

The associated Hamiltonian function is defined as

$$H(x, v, \lambda, \mu, u) = -v + \lambda v + \mu u$$

and the optimal control is given by

$$
u^* = \underset{|u| \le 1}{\arg\min} \, H(x, v, \lambda, \mu, u) = -\operatorname{sign}(\mu).
$$

Now, by applying the indirect method, the solution of the optimal control problem (8) is equivalent to the solution of the following BVP

$$\begin{aligned} x' &= v, \\ v' &= u, \\ \lambda' &= 0, \\ \mu' &= 1 - \lambda, \\ x(0) &= v(0) = v(1) = \lambda(1) = 0. \end{aligned} \tag{9}$$

We observe that the optimal control is defined as

$$u(t) = -\operatorname{sign}(\mu) = \begin{cases} 1 & \mu < 0, \\ -1 & \mu > 0, \\ \text{any value in } [-1,1] & \mu = 0, \end{cases}$$

and the exact solution is given by

$$x(t) = \begin{cases} \frac{t^2}{2} & t < 1/2,\\ t - \frac{t^2}{2} - \frac{1}{4} & t > 1/2, \end{cases} \qquad v(t) = \begin{cases} t & t < 1/2,\\ 1 - t & t > 1/2, \end{cases}$$

$$u(t) = \begin{cases} 1 & t < 1/2, \\ -1 & t > 1/2, \end{cases} \qquad \lambda(t) = 0, \qquad \mu(t) = t - \frac{1}{2}.$$
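The exact solution can be verified numerically. The short check below (our own sketch, independent of any of the BVP codes) confirms on a uniform grid that *x*′ = *v* for the bang-bang trajectory *x*(*t*) = *t*²/2 before the switch and *x*(*t*) = *t* − *t*²/2 − 1/4 after it, that the maximum distance reached is *x*(1) = 1/4, and that the costate *μ*(*t*) = *t* − 1/2 vanishes at the switching time:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
x = np.where(t < 0.5, 0.5 * t**2, t - 0.5 * t**2 - 0.25)
v = np.where(t < 0.5, t, 1.0 - t)
mu = t - 0.5                      # costate: lambda(t) = 0, so mu' = 1

# finite-difference check of x' = v on the whole grid
assert np.max(np.abs(np.gradient(x, t) - v)) < 1e-2
assert abs(x[-1] - 0.25) < 1e-12  # maximum distance x(1) = 1/4
assert abs(mu[500]) < 1e-12       # switch at t = 1/2, where mu changes sign
```

Both branches of *x* agree at *t* = 1/2 (value 1/8), so the finite-difference residual stays small across the switch as well.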

The discontinuity of the switching function is overcome by a smoothing technique that can be implemented through different strategies; we choose two of them in particular. The first strategy, given a small parameter *ε*, consists of using the approximation

$$\operatorname{sign}(\mu) \approx \frac{2}{\pi} \arctan \left( \frac{\mu \pi}{2\epsilon} \right).$$

The exact bang-bang solution is better approximated as *ε* becomes smaller; however, values of *ε* smaller than about 10<sup>−4</sup> can give rise to ill-conditioned problems. Table 11 contains all the results obtained using the Matlab codes; the solution is plotted in Figure 3. Only bvp5c fails, and to obtain the solution it is necessary to use the continuation strategy. In this regard we consider the initial perturbation parameter *ε*<sub>0</sub> = 1 and then change it choosing *N<sub>ε</sub>* = 10 logarithmically equispaced points between 1 and the required value *ε*. When *tol* = 10<sup>−4</sup>, bvp5c converges using 19 points for both *ε* = 10<sup>−3</sup> and *ε* = 10<sup>−6</sup>; when *tol* = 10<sup>−6</sup>, bvp5c obtains the solution with 36 and 28 points for *ε* = 10<sup>−3</sup> and *ε* = 10<sup>−6</sup>, respectively.
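The arctan-smoothed problem is simple enough to reproduce outside the Matlab environment. A sketch with SciPy's `solve_bvp` (our illustration, not one of the compared codes), with the smoothed control substituted directly into BVP (9) and the costate guess started near its known linear profile:

```python
import numpy as np
from scipy.integrate import solve_bvp

eps = 1e-3  # smoothing parameter

def u_smooth(mu):
    # smooth approximation of -sign(mu)
    return -(2.0 / np.pi) * np.arctan(mu * np.pi / (2.0 * eps))

def odes(t, y):
    x, v, lam, mu = y
    return np.vstack([v, u_smooth(mu), np.zeros_like(lam), 1.0 - lam])

def bc(ya, yb):
    # x(0) = v(0) = v(1) = lambda(1) = 0
    return np.array([ya[0], ya[1], yb[1], yb[2]])

t0 = np.linspace(0.0, 1.0, 16)
y0 = np.zeros((4, t0.size))
y0[3] = t0 - 0.5                      # costate guess near mu(t) = t - 1/2
sol = solve_bvp(odes, bc, t0, y0, tol=1e-6, max_nodes=50000)
```

The computed costate changes sign near *t* = 1/2, and *x*(1) approaches the exact maximum distance 1/4 as *ε* → 0.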

**Table 11.** Bang-Bang optimal control problem (9): final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *x*, *v*, *u*.


**Figure 3.** Bang-Bang, *ε* = 10<sup>−3</sup>: solution in time for the mass position *x* on the (**left**), the velocity *v* in the (**center**) and the control *u* on the (**right**).

The second smoothing technique adds a barrier or a penalty function. In this regard, we consider a piecewise quadratic penalty function defined as in [45]

$$P(u; \epsilon, \sigma) = \frac{\epsilon}{2}u^2 + \frac{1}{\sigma^2} \begin{cases} (|u| - 1 + \sigma)^2 & |u| > 1 - \sigma, \\ 0 & \text{otherwise,} \end{cases}$$

where the parameter *σ* gives the distance from the boundary within which the penalty changes rapidly. Consequently, the problem (8) is reformulated without the inequality constraint as follows

$$\begin{aligned} \min & \int_{0}^{1} P(u; \epsilon, \sigma) - v \, dt, \\ & x' = v, \\ & v' = u, \qquad t \in [0, 1], \\ & x(0) = v(0) = v(1) = 0. \end{aligned} \tag{10}$$

The optimal control *u*, obtained as a solution of the equation

$$P_u(u; \epsilon, \sigma) + \mu = 0,$$

is equal to

$$u = \begin{cases} \dfrac{2 - 2\sigma - \sigma^2 \mu}{\epsilon \sigma^2 + 2} & u \ge 1 - \sigma, \\ \dfrac{-2 + 2\sigma - \sigma^2 \mu}{\epsilon \sigma^2 + 2} & u < \sigma - 1, \\ -\dfrac{\mu}{\epsilon} & \text{otherwise.} \end{cases}$$

In Table 12 we show the numerical results obtained for *σ* = 10<sup>−4</sup> and *ε* = 10<sup>−4</sup>, 10<sup>−6</sup>, starting with an initial mesh of 16 equidistant points and a null initial solution. All the codes show a good performance; we do not report the results for bvp4c and bvp5c because they fail. To overcome this drawback, in Table 13 we consider the continuation strategy: the codes bvp4c and bvp5c are run for different values of *ε* starting from *ε*<sub>0</sub> = 10 up to the desired value *ε*. In particular, we choose *N<sub>ε</sub>* = 10 logarithmically equispaced values.
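The closed-form control obtained from the penalty smoothing can be checked directly: given the multiplier *μ*, the selected branch must satisfy the stationarity condition *P<sub>u</sub>*(*u*; *ε*, *σ*) + *μ* = 0 and respect the bound |*u*| ≤ 1. A small self-contained sketch (with *ε* and *σ* chosen larger than in the tables for numerical robustness, and the interior branch *u* = −*μ*/*ε* obtained from *εu* + *μ* = 0):

```python
import numpy as np

eps, sig = 1e-2, 1e-2

def P_u(u):
    # derivative of the piecewise quadratic penalty P(u; eps, sigma)
    inner = eps * u
    outer = np.where(np.abs(u) > 1.0 - sig,
                     (2.0 / sig**2) * (np.abs(u) - 1.0 + sig) * np.sign(u),
                     0.0)
    return inner + outer

def u_opt(mu):
    # closed-form solution of P_u(u) + mu = 0, branch selected a posteriori
    u1 = (2.0 - 2.0 * sig - sig**2 * mu) / (eps * sig**2 + 2.0)   # u near +1
    u2 = (-2.0 + 2.0 * sig - sig**2 * mu) / (eps * sig**2 + 2.0)  # u near -1
    u3 = -mu / eps                                                # interior
    return np.where(u1 > 1.0 - sig, u1, np.where(u2 < sig - 1.0, u2, u3))

mu = np.array([-1.0, 5e-3, 1.0])       # saturated +, interior, saturated -
u = u_opt(mu)
assert np.max(np.abs(P_u(u) + mu)) < 1e-10   # stationarity holds
assert np.max(np.abs(u)) <= 1.0              # control respects the bound
```

The same pattern of an a posteriori branch selection applies to the penalty-based controls of the vehicle, Goddard and train problems below.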


**Table 12.** Bang-Bang optimal control problem, solving (8) using a piecewise quadratic penalty function with *σ* = 10<sup>−4</sup>: final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *x*, *v*, *u*.

**Table 13.** Bang-Bang optimal control problem, solving (8) using a piecewise quadratic penalty function with *σ* = 10<sup>−4</sup> and the continuation strategy: final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *x*, *v*, *u*.


We also report the results of acdc and acdcc, which use an automatic continuation strategy. The results point out the suitability and efficiency of the strategy in solving this kind of problem, also for bvp4c and bvp5c when the nonlinear solution is approximated using a continuation strategy. The conditioning parameters reported in Tables 14 and 15 show that the problem is not stiff, since *σ* is of moderate size; indeed, the main difficulty is caused by the convergence of the nonlinear discretization schemes. In this regard, we highlight that the results of the codes twpbvpc\_m and twpbvpc\_l are the same as those obtained by twpbvp\_m and twpbvp\_l, confirming that a mesh selection strategy based on conditioning is not necessary for this non-stiff problem.


**Table 14.** Bang-Bang optimal control problem: conditioning parameters computed using *tol* = 10<sup>−6</sup>.

**Table 15.** Bang-Bang optimal control problem with penalty: conditioning parameters computed using *tol* = 10<sup>−6</sup>.


#### **6. Longitudinal Dynamics of a Vehicle**

We consider an example of a nonlinear optimal control problem derived from a model of the longitudinal dynamics of a vehicle with aerodynamic down-force [2]. In particular, a vehicle, modeled as a point mass, is moved in a fixed time *T* from an initial zero velocity to a final zero velocity

$$\begin{aligned} \min \{ x(0) - x(T) \} &= \min \left( -\int_0^T v \, dt \right), \\ x' &= v, \\ v' &= u - k_0 - k_1 v - k_2 v^2, \qquad t \in [0, T], \\ x(0) &= v(0) = v(T) = 0, \\ |u| &\le g + k_3 v^2. \end{aligned} \tag{11}$$

The Hamiltonian function associated with this problem is

$$H(x, v, \lambda, \mu, u) = -v + \lambda v + \mu \left(u - k_0 - k_1 v - k_2 v^2\right)$$

and the optimal control is given by

$$u^* = \underset{|u| \le g + k_3 v^2}{\arg\min} \, H(x, v, \lambda, \mu, u) = -(g + k_3 v^2)\operatorname{sign}(\mu).$$

Now, applying the indirect method, the global optimal control problem is reduced to the boundary value problem

$$\begin{aligned} x' &= v, \\ v' &= u - k_0 - k_1 v - k_2 v^2, \\ \lambda' &= 0, \\ \mu' &= 1 - \lambda + \mu (k_1 + 2 v k_2), \\ x(0) &= v(0) = v(T) = \lambda(T) = 0. \end{aligned} \tag{12}$$

We observe that the optimal control problem has a theoretical solution given by

$$
u = -\operatorname{sign}(\mu)(g + k_3 v^2),
$$

that can be approximated using a barrier function defined as

$$
u = -\frac{2}{\pi} (g + k_3 v^2) \arctan\left(\frac{2\mu}{\pi \epsilon}\right).
$$

Let *g*<sub>+</sub> = *g* + *k*<sub>0</sub> and *g*<sub>−</sub> = *g* − *k*<sub>0</sub>. If $t_s = \frac{1}{k_1} \ln \frac{g_- + g_+ e^{k_1 T}}{2g}$ is the switching time, then the solution for the optimal control is defined as

$$u(t) = \begin{cases} 1 & t \le t_s, \\ -1 & t > t_s. \end{cases}$$

Moreover, the exact solution for the space and the velocity is expressed by

$$x(t) = \begin{cases} k_1^{-2} g_- \left( k_1 t + e^{-k_1 t} - 1 \right) & t \le t_s, \\ k_1^{-2} \left( g_+ + e^{-k_1 t} \left( g_- - 2 g e^{k_1 t_s} \right) + k_1 (2 g t_s - t g_+) \right) & t > t_s, \end{cases}$$

and

$$v(t) = \begin{cases} k_1^{-1} g_- \left( 1 - e^{-k_1 t} \right) & t \le t_s, \\ k_1^{-1} g_+ \left( e^{k_1 (T - t)} - 1 \right) & t > t_s, \end{cases}$$

while the multipliers assume the form

$$
\lambda(t) = 0, \qquad \mu(t) = \frac{1}{k_1} \left( \frac{2 g e^{k_1(t-T)}}{g_- e^{-k_1 t} + g_+} - 1 \right).
$$
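These closed-form expressions can be sanity-checked numerically with the parameter values used in Tables 16 and 17. The sketch below (our own check, not part of the compared codes) verifies that the switching time falls inside [0, *T*] and that the exact velocity is continuous at *t<sub>s</sub>* and vanishes at both endpoints:

```python
import numpy as np

g, T = 9.81, 10.0
k0, k1 = 0.02 * g, 1e-5 * g           # k2 = k3 = 0 as in the experiments
gp, gm = g + k0, g - k0               # g_plus, g_minus

# switching time t_s = (1/k1) * ln((g_- + g_+ e^{k1 T}) / (2 g))
ts = np.log((gm + gp * np.exp(k1 * T)) / (2.0 * g)) / k1

def v(t):
    t = np.asarray(t, dtype=float)
    return np.where(t <= ts,
                    gm / k1 * (1.0 - np.exp(-k1 * t)),
                    gp / k1 * (np.exp(k1 * (T - t)) - 1.0))

assert 0.0 < ts < T                              # switch inside the horizon
assert abs(v(ts - 1e-9) - v(ts + 1e-9)) < 1e-6   # v continuous at t_s
assert abs(v(0.0)) < 1e-9 and abs(v(T)) < 1e-9   # boundary conditions
```

Continuity at *t<sub>s</sub>* follows from the definition of the switching time, since 2*g* e<sup>*k*<sub>1</sub>*t<sub>s</sub>*</sup> = *g*<sub>−</sub> + *g*<sub>+</sub> e<sup>*k*<sub>1</sub>*T*</sup>.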

In Table 16 we show all the numerical results obtained using all the considered Matlab codes, starting with an initial mesh of 11 equispaced points and a null initial approximation; the solution is plotted in Figure 4. For this problem only the codes of the bvptwp package are able to give a solution for *ε* = 10<sup>−6</sup>, so for the other codes we have used a continuation strategy with starting value *ε*<sub>0</sub> = 10<sup>−3</sup> and *N<sub>ε</sub>* = 10 logarithmically equispaced intermediate points. In Table 16 the symbol (c) in brackets labels the results computed using the continuation strategy. Moreover, the results emphasize that the automatic continuation is not always advantageous and cheaper from a computational point of view: the total number of vectorized function evaluations is much greater for acdc than for twpbvp\_m and twpbvp\_l, even though they use the same numerical scheme. The conditioning parameters in Table 17 are all of moderate size; hence, the problem is not stiff.


**Table 16.** Longitudinal dynamics of a vehicle, *T* = 10, *g* = 9.81, *k*<sub>0</sub> = 0.02 *g*, *k*<sub>1</sub> = 10<sup>−5</sup>*g*, *k*<sub>2</sub> = 0, *k*<sub>3</sub> = 0: final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *x*, *v*, *u*.

**Table 17.** Longitudinal dynamics of a vehicle, *T* = 10, *g* = 9.81, *k*<sub>0</sub> = 0.02 *g*, *k*<sub>1</sub> = 10<sup>−5</sup>*g*, *k*<sub>2</sub> = 0, *k*<sub>3</sub> = 0: conditioning parameters computed using *tol* = 10<sup>−6</sup>.


**Figure 4.** Longitudinal dynamics of a vehicle, *ε* = 10<sup>−3</sup>, *T* = 10, *g* = 9.81, *k*<sub>0</sub> = 0.02 *g*, *k*<sub>1</sub> = 10<sup>−5</sup>*g*, *k*<sub>2</sub> = 0, *k*<sub>3</sub> = 0: theoretical (dash-dot line) and numerical (dotted line) solution in time for the control *u*.

#### **7. Goddard Rocket**

Now, we consider an example of an optimal control problem with a singular arc [46]. A rocket of mass *m* lifts off vertically at time *t* = 0 with (normalized) altitude *h*(0) = 1 and velocity *v*(0) = 0. Given the initial mass, the fuel mass and the drag characteristics of the rocket, the aim is to choose the thrust *u*(*t*) and the final time *T* to maximize the altitude *h*(*T*) at the final time *T*. The optimal control problem is given by

$$\begin{aligned} \min_{T,u} & \int_0^T (-v)\,dt, \\ & h' = v, \\ & v' = \frac{u - D(h, v)}{m} - g(h), \\ & m' = -\frac{u}{c}, \\ & 0 \le u \le u_{\max}, \\ & h(0) = 1,\ v(0) = 0,\ m(0) = 1,\ m(T) = 0.6. \end{aligned} \tag{13}$$

Given the constants *D<sub>c</sub>* and *h<sub>c</sub>*, the aerodynamic drag is defined by

$$D(h,v) = D_c v^2 e^{-h_c\left(\frac{h - h(0)}{h(0)}\right)}.$$

Moreover, if *g*<sub>0</sub> is the gravitational force at the earth's surface, then the gravitational force is given by

$$g(h) = g_0\left(\frac{h(0)}{h}\right)^2.$$

The equations are scaled choosing the model parameters *m*(0), *h*(0) and *g*<sub>0</sub>, which allows working with dimension-free equations. As in [46], we consider

$$u_{\max} = 3.5\, g_0\, m(0), \qquad D_c = \frac{1}{2} v_c \frac{m(0)}{g_0}, \qquad c = \frac{1}{2} \left(g_0\, h(0)\right)^{1/2},$$

where *g*<sub>0</sub> = 1, *h<sub>c</sub>* = 500, *m<sub>c</sub>* = 0.6 and *v<sub>c</sub>* = 620.

Since the problem (13) has a free final time, we fix the time interval using the variable transformation *t*(*τ*) := *τT*, with *τ* ∈ [0, 1]. A new state variable *T* satisfying the differential constraint *T*′ = 0 is added to the problem, and a penalty function *P*(*u*; *ε*, *σ*¯) is used as a smoothing technique, so that the problem can be reformulated as follows

$$\begin{aligned} \min_{T,u} & \int_0^1 \left(-Tv + T P(u; \epsilon, \bar{\sigma})\right) d\tau, \\ & h' = Tv, \\ & v' = \frac{T}{m} \left( u - \frac{1}{2} v_c v^2 e^{h_c(1-h)} \right) - \frac{T}{h^2}, \\ & m' = -T\frac{u}{c}, \\ & T' = 0, \\ & h(0) = 1,\ v(0) = 0,\ m(0) = 1,\ m(1) = 0.6. \end{aligned} \tag{14}$$

As in Section 5, *P*(*u*; *ε*, *σ*¯) is a piecewise quadratic penalty function defined as

$$P(u; \epsilon, \bar{\sigma}) = \frac{\epsilon}{2} \left( u - \frac{u_{\max}}{2} \right)^2 + \frac{1}{\bar{\sigma}^2} \begin{cases} \left( u - u_{\max} + \bar{\sigma} \right)^2 & u > u_{\max} - \bar{\sigma}, \\ \left( \bar{\sigma} - u \right)^2 & u < \bar{\sigma}, \\ 0 & \text{otherwise.} \end{cases}$$

Now, the Hamiltonian formulation of the problem (14) results in the following BVP

$$\begin{aligned} h' &= Tv, \\ v' &= \frac{T}{m}\left(u - \frac{1}{2} v_c v^2 e^{h_c(1-h)}\right) - \frac{T}{h^2}, \\ m' &= -T\frac{u}{c}, \\ T' &= 0, \\ \lambda_1' &= -T\lambda_2 \left( \frac{1}{2m} h_c v_c v^2 e^{h_c(1-h)} + \frac{2}{h^3} \right), \\ \lambda_2' &= -T \left( \lambda_1 - \lambda_2 \frac{v_c v\, e^{h_c(1-h)}}{m} - 1 \right), \\ \lambda_3' &= \frac{T}{m^2} \lambda_2 \left( u - \frac{1}{2} v_c v^2 e^{h_c(1-h)} \right), \\ \lambda_4' &= v - P(u; \epsilon, \bar{\sigma}) - \lambda_1 v - \lambda_2 \left( \frac{u - \frac{1}{2} v_c v^2 e^{h_c(1-h)}}{m} - \frac{1}{h^2} \right) + \lambda_3 \frac{u}{c}, \\ h(0) &= 1,\ v(0) = 0,\ m(0) = 1,\ m(1) = 0.6, \\ \lambda_1(1) &= 0,\ \lambda_2(1) = 0,\ \lambda_4(0) = 0,\ \lambda_4(1) = 0, \end{aligned} \tag{15}$$

where the thrust *u*, computed by solving the equation

$$P_u(u; \epsilon, \bar{\sigma}) + \frac{\lambda_2}{m} - \frac{\lambda_3}{c} = 0,$$

is given by

$$u = \begin{cases} \dfrac{1}{\epsilon \bar{\sigma}^2 + 2} \left( \epsilon \bar{\sigma}^2 \dfrac{u_{\max}}{2} + 2u_{\max} - 2\bar{\sigma} - \bar{\sigma}^2 \left( \dfrac{\lambda_2}{m} - \dfrac{\lambda_3}{c} \right) \right) & u > u_{\max} - \bar{\sigma}, \\ \dfrac{\bar{\sigma}}{\epsilon \bar{\sigma}^2 + 2} \left( \epsilon \bar{\sigma} \dfrac{u_{\max}}{2} + 2 - \bar{\sigma} \left( \dfrac{\lambda_2}{m} - \dfrac{\lambda_3}{c} \right) \right) & u < \bar{\sigma}, \\ \dfrac{1}{\epsilon} \left( \epsilon \dfrac{u_{\max}}{2} - \dfrac{\lambda_2}{m} + \dfrac{\lambda_3}{c} \right) & \text{otherwise.} \end{cases}$$

Since the problem is highly nonlinear, the starting approximation of the solution is chosen as *h* = *λ*<sub>1</sub> = *λ*<sub>2</sub> = *λ*<sub>3</sub> = 1, *λ*<sub>4</sub> = 0, *v*(*τ*) = *τ*(1 − *τ*), *m*(*τ*) = (*m*(1) − *m*(0))*τ* + *m*(0) and *T* = 0.01. The choice of a good initial approximation is the main issue when the parameters of the penalty function *σ*¯ and *ε* become extremely small. In this case, it is helpful to apply a continuation strategy on the parameter *ε*, changing its value from *ε*<sub>0</sub> = 10<sup>−1</sup> to the desired value of *ε*. To highlight the advantages of this strategy, we solve the optimal control problem (15) choosing *σ*¯ = 10<sup>−4</sup> and two different values *ε* = 10<sup>−3</sup>, 10<sup>−6</sup>; the solution is plotted in Figure 5.

In Table 18 the results are computed without applying the continuation strategy: we observe that, while only the codes bvp5c, tom and tomc fail for *ε* = 10<sup>−3</sup>, none of the codes converges for *ε* = 10<sup>−6</sup>. Consequently, in Table 19 we run the codes using the continuation strategy. All the numerical tests use an initial mesh of 16 equidistant points. For the continuation strategy in Table 19, except for acdc and acdcc, the parameter *ε* is initially set to *ε*<sub>0</sub> = 10<sup>−1</sup> (*ε*<sub>0</sub> = 1 for tom and tomc) and then changed using *N<sub>ε</sub>* = 10 logarithmically equispaced values until the required value *ε* is reached. However, to obtain the convergence of bvp4c for *ε* = 10<sup>−6</sup>, we set *N<sub>ε</sub>* = 100 when *tol* = 10<sup>−4</sup> and *N<sub>ε</sub>* = 20 when *tol* = 10<sup>−6</sup>, and for tom/tomc we set *N<sub>ε</sub>* = 55. The conditioning parameters reported in Table 20 show that the problem is not stiff, but it is ill conditioned since *κ*<sub>1</sub> > *κ*<sub>2</sub>.

**Figure 5.** Goddard rocket, *ε* = 10<sup>−3</sup>, *σ*¯ = 10<sup>−4</sup>: from left to right, solutions in time for altitude *h* and mass *m* (on the **top**), and for velocity *v* and thrust *u* (on the **bottom**).

**Table 18.** Goddard Rocket problem (15) solved using a piecewise quadratic penalty function with *σ*¯ = 10<sup>−4</sup> and *ε* = 10<sup>−3</sup>: final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *h*, *v*, *m*, *T* and *u*.





**Table 20.** Goddard Rocket problem: conditioning parameters computed using *tol* = 10<sup>−6</sup>.

#### **8. Minimization of the Fuel Cost in the Operation of a Train**

As in [2,47], an optimal control problem in transportation is the minimization of the fuel cost in the operation of a train. For simplicity, the track is supposed to be straight. Let *x* be the position along the track measured from a fixed reference point and *v* the velocity of the train, so that the minimization problem is equivalent to the optimal control problem

$$\begin{aligned} \min_{u_d, u_b} & \int_0^{4.8} u_d \, v \, dt, \\ x' & = v, \\ v' & = h(x) - F(v) + u_d - u_b, \\ & 0 \le u_d \le 10, \quad 0 \le u_b \le 2, \\ x(0) &= v(0) = v(4.8) = 0,\ x(4.8) = 6, \end{aligned} \tag{16}$$

where *F*(*v*(*t*)) models the friction due to the rolling of the wheels and the air resistance, and *h*(*x*) is the active component of the gravitational force due to hill slopes; they are respectively defined as

$$\begin{aligned} h(x) &= \frac{2}{\pi} \left( \tan^{-1} \left( \frac{x - 2}{\delta} \right) + \tan^{-1} \left( \frac{x - 4}{\delta} \right) \right), & \delta &= 0.05, \\ F(v) &= 0.3 + 0.14|v| + 0.16v^2. \end{aligned}$$

Moreover, the control variables *u<sub>d</sub>* and *u<sub>b</sub>* represent, respectively, the acceleration provided by the engine and the deceleration from applying the brakes.

First, as a smoothing technique, let us consider piecewise quadratic penalty functions defined as

$$\begin{aligned} P^d(u_d; \epsilon, \tau) &= \frac{\epsilon}{2} (u_d - 5)^2 + \frac{1}{\tau^2} \begin{cases} (u_d - 10 + \tau)^2 & u_d > 10 - \tau, \\ (\tau - u_d)^2 & u_d < \tau, \\ 0 & \text{otherwise,} \end{cases} \\ P^b(u_b; \epsilon, \tau) &= \frac{\epsilon}{2} (u_b - 1)^2 + \frac{1}{\tau^2} \begin{cases} (u_b - 2 + \tau)^2 & u_b > 2 - \tau, \\ (\tau - u_b)^2 & u_b < \tau, \\ 0 & \text{otherwise,} \end{cases} \end{aligned}$$

so that the Problem (16) can be written as

$$\begin{aligned} \min_{u_d, u_b} & \int_0^{4.8} \left( u_d \, v + P^d(u_d; \epsilon, \tau) + P^b(u_b; \epsilon, \tau) \right) dt, \\ & x' = v, \\ & v' = h(x) - F(v) + u_d - u_b, \\ & x(0) = v(0) = v(4.8) = 0,\ x(4.8) = 6. \end{aligned} \tag{17}$$

From the Hamiltonian formulation we obtain the following BVP

$$\begin{aligned} x' &= v, \\ v' &= h(x) - F(v) + u_d - u_b, \\ \lambda' &= -\mu\, h_x(x), \\ \mu' &= -\lambda + \mu F_v(v) - u_d, \\ x(0) &= v(0) = v(4.8) = 0, \ x(4.8) = 6, \end{aligned} \tag{18}$$

where *u<sub>d</sub>* and *u<sub>b</sub>*, computed by solving the equations

$$P^d_{u_d}(u_d; \epsilon, \tau) + \mu + v = 0, \qquad P^b_{u_b}(u_b; \epsilon, \tau) - \mu = 0,$$

are respectively

$$u_d = \begin{cases} \dfrac{5\epsilon\tau^{2} + 20 - 2\tau - \tau^{2}(\mu + v)}{\epsilon\tau^{2} + 2} & u_d > 10 - \tau, \\ \dfrac{\tau\left(5\epsilon\tau + 2 - \tau(\mu + v)\right)}{\epsilon\tau^{2} + 2} & u_d < \tau, \\ \dfrac{5\epsilon - (\mu + v)}{\epsilon} & \text{otherwise,} \end{cases}$$

$$u_b = \begin{cases} \dfrac{\epsilon\tau^{2} + 4 - 2\tau + \tau^{2}\mu}{\epsilon\tau^{2} + 2} & u_b > 2 - \tau, \\ \dfrac{\tau(\epsilon\tau + 2 + \tau\mu)}{\epsilon\tau^{2} + 2} & u_b < \tau, \\ \dfrac{\epsilon + \mu}{\epsilon} & \text{otherwise.} \end{cases}$$

As shown in Table 21, all the methods, starting with an initial mesh of 16 equidistant points and initial solution *x* = *v* = *λ* = *μ* = 1, converge when *ε* = 1, 0.5 and *tol* = 10<sup>−4</sup>.

**Table 21.** Minimization of the fuel cost in the operation of a train (18) using a piecewise quadratic penalty function with *τ* = 10<sup>−2</sup>: final mesh (fM), total number of vectorized function evaluations (NVF) and mixed errors for *x*, *v*, *u<sub>d</sub>*, *u<sub>b</sub>*.


Now, decreasing the value of *ε*, all these methods fail, since Problem (18) is highly ill conditioned and strongly depends on perturbations. However, these methods can reach convergence using a continuation strategy on the parameter *ε*. As the initial value *ε*<sub>0</sub> we can choose 1 or 0.5, since we know that all the methods converge for those values. Moreover, we need to define the discretization for the perturbation parameter; namely, we consider *N<sub>ε</sub>* logarithmically equispaced points in the range [*ε*<sub>0</sub>, *ε*]. Since the continuation depends on the choice of *N<sub>ε</sub>* and the initial value *ε*<sub>0</sub>, in Table 22 we show the results obtained using *ε*<sub>0</sub> = 1 and *N<sub>ε</sub>* = 5, except for tom/tomc, which require *N<sub>ε</sub>* = 10 for convergence. Our interest is to analyze the performance of the codes for small perturbation parameters, such as *ε* = 10<sup>−2</sup>, 10<sup>−3</sup>, requiring an exit tolerance *tol* = 10<sup>−3</sup>. In Figure 6 we show the solution for *ε* = 10<sup>−2</sup>. The conditioning parameters in Table 23 suggest that the problem is ill conditioned but not stiff; in fact, *κ*, *κ*<sub>1</sub>, *κ*<sub>2</sub>, *γ*<sub>1</sub> are all much greater than 1. The condition number of the matrix of the last step of the integration procedure (last column of Table 23) is very high and confirms the ill conditioning of the problem.
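The continuation strategy described above can be sketched as follows. Here `solve_bvp_for_eps` stands in for any of the BVP codes under discussion (the function name and interface are hypothetical); each solve is warm-started from the solution of the previous, easier problem:

```python
import numpy as np

def solve_with_continuation(solve_bvp_for_eps, eps_target,
                            eps0=1.0, n_eps=5, guess=None):
    """Continuation on the perturbation parameter: solve a sequence of
    progressively harder problems, from eps0 down to eps_target over
    n_eps logarithmically equispaced values, warm-starting each solve
    from the previous solution."""
    eps_values = np.logspace(np.log10(eps0), np.log10(eps_target), n_eps)
    sol = guess
    for eps in eps_values:
        sol = solve_bvp_for_eps(eps, sol)  # warm start from previous solution
    return sol
```

With `eps0 = 1` and `n_eps = 5`, the solver is invoked at ε = 1, 10<sup>−0.5</sup>, 10<sup>−1</sup>, 10<sup>−1.5</sup>, 10<sup>−2</sup>; increasing `n_eps` (as needed for tom/tomc) spaces the intermediate problems more closely.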

**Table 22.** Minimization of the fuel cost in the operation of a train (18) using a piecewise quadratic penalty function with *τ* = 10<sup>−2</sup> and a continuation strategy: final mesh (fM), total number of vectorized function evaluations (NVF), and mixed errors for *x*, *v*, *u<sub>a</sub>*, *u<sub>b</sub>*.


**Figure 6.** Minimization of the fuel cost in the operation of a train, *ε* = 10<sup>−2</sup>: from left to right, the solutions in time for the position *x*, the velocity *v*, and the difference *u<sub>a</sub>* − *u<sub>b</sub>* between the control variables representing the acceleration and the deceleration.

**Table 23.** Minimization of the fuel cost in the operation of a train: conditioning parameters computed using *tol* = 10<sup>−6</sup>; cond is the condition number of the matrix associated with the last nonlinear iteration.


#### **9. Conclusions**

In this paper, after a review of general-purpose codes for solving boundary value problems, we have solved some challenging optimal control problems derived using the indirect method. The presented results show that this approach can be a good alternative to direct methods for the solution of this kind of problem, especially if the mesh selection strategy adopted is suitable for stiff problems, as needed for hypersensitive problems, or if an appropriate initial condition for the nonlinear iteration is computed using a continuation strategy. All these techniques can sometimes require the application of a regularization procedure, as in the presence of a singular arc. Our goal with this paper is to give some indications useful for handling the input parameters of a BVP code to achieve an accurate solution, since the default values usually work only for very simple regular problems. Moreover, some codes output information about the stiffness and the conditioning of the problem, which can be used in choosing the correct solution method.

**Author Contributions:** Writing—original draft, F.M. and G.S. Both authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

**Funding:** The research of Francesca Mazzia has been funded by the PON "Ricerca e Innovazione 2014-2020", project "RPASInAir: Integrazione dei Sistemi Aeromobili a Pilotaggio Remoto nello spazio aereo non segregato per servizi", n. ARS01_00820, and the research of Giuseppina Settanni by the INdAM-GNCS 2020 Research Project "Numerical algorithms in optimization, ODEs, and applications" (the authors are members of the INdAM Research group GNCS).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

MDPI St. Alban-Anlage 66 4052 Basel Switzerland Tel. +41 61 683 77 34 Fax +41 61 302 89 18 www.mdpi.com

*Mathematics* Editorial Office E-mail: mathematics@mdpi.com www.mdpi.com/journal/mathematics