Article

A Quasilinearization Approach for Identification Control Vectors in Fractional-Order Nonlinear Systems

by
Miglena N. Koleva
1,* and
Lubin G. Vulkov
2
1
Department of Mathematics, Faculty of Natural Sciences and Education, University of Ruse, 8 Studentska Str., 7017 Ruse, Bulgaria
2
Department of Applied Mathematics and Statistics, Faculty of Natural Sciences and Education, University of Ruse, 8 Studentska Str., 7017 Ruse, Bulgaria
*
Author to whom correspondence should be addressed.
Fractal Fract. 2024, 8(4), 196; https://doi.org/10.3390/fractalfract8040196
Submission received: 12 February 2024 / Revised: 20 March 2024 / Accepted: 26 March 2024 / Published: 28 March 2024
(This article belongs to the Special Issue Fractional Order Controllers for Non-linear Systems)

Abstract

This paper is concerned with the problem of identifying the control vector for a fractional multi-order system of nonlinear ordinary differential equations (ODEs). We describe a quasilinearization approach, based on minimization of a quadratic functional, to compute the values of the unknown parameter vector. A numerical algorithm, combining the method with an appropriate fractional derivative approximation on a graded mesh, is applied to SIS and SEIR problems to illustrate its efficiency and accuracy. Tikhonov regularization is implemented to improve the convergence. Results from computations with both noise-free and noisy data are provided and discussed. Simulations with real data are also performed.

1. Introduction

Many real processes are delayed in time, and the resulting memory effects require the use of fractional calculus to model them mathematically. For example, problems based on a time-fractional derivative and a set of coupled nonlinear ODEs describe the concentrations of species in so-called compartment problems [1]. The model posits that each compartment exhibits homogeneity within a well-mixed population. The coupling terms in the ODEs depict interactions among populations in distinct compartments. These terms could entail straightforward constant-rate removal processes, or they might signify reactions among various populations. For certain compartmental models, understanding the time of entry of an individual into a compartment is crucial. Fractional-order systems of ODEs are one of the most effective approaches for studying the transmission dynamics of epidemics [2,3,4,5,6,7,8,9,10], fractional pharmacokinetic processes [11], honeybee population dynamics [12,13], mechanics [14], etc. Recently, there has also been an increase in the number of studies based on COVID-19 fractional epidemiological modeling, see e.g., refs. [15,16,17]. One class of these models, describing the spread of an infectious disease, comprises the SIR (susceptible-infected-removed) compartment model, the SEIR (susceptible-exposed-infected-recovered) model, etc.
A numerical solution of the fractional SIR epidemic model of measles, with and without demographic effects, obtained by using the Laplace Adomian decomposition method, is presented in [2]. The effect of the fractional order was investigated. The homotopy analysis method is proposed in [3] for the SIR model that tracks the temporal dynamics of childhood illness in the face of a preventive vaccine. The authors of [8] provide some remarkable and helpful properties of the SIR epidemic model with nonlinear incidence and Caputo fractional derivative.
The SEIR fractional differential model with the Caputo operator is considered in [7]. The authors construct a numerical method in order to explain and comprehend influenza A (H1N1) epidemics. They conclude that the nonlinear fractional-order epidemic model is appropriate for the study of influenza and fits the real data.
In [10], the author proposes a SEIR measles epidemiological model with a Caputo fractional-order operator. Existence, uniqueness and positivity of the solution were proved. The problem is solved by the Adams technique. It is shown that the incorporation of memory features into dynamical systems designed for modeling infectious diseases reveals concealed dynamics of the infection that classical derivatives are unable to detect. Existence and uniqueness results for the Caputo fractional nonlinear ODE system describing the transmission dynamics of COVID-19 are obtained in [15,17]. In [13], the author presents an existence, uniqueness and stability analysis for a fractional multi-order honeybee colony population nonlinear ODE system with Caputo and Caputo–Fabrizio operators.
One of the most popular approaches for numerically solving nonlinear ODEs and systems of ODEs with the Caputo derivative is the fractional Adams method [18,19,20]. This method is applied, for example, in [10] for solving the Caputo fractional SEIR measles epidemiological problem. In [17], the authors construct a numerical method based on the Adams–Bashforth–Moulton approximation for solving the nonlinear ODE system with a Caputo operator.
The Newton–Kantorovich method for solving both classical and fractional nonlinear initial-boundary value and initial value problems is employed in many papers, see e.g., refs. [21,22,23,24]. In [25], the quasi-Newton method, based on the Fréchet derivative, is developed for solving nonlinear fractional-order differential equations.
The original concept of the quasilinearization method was developed by Bellman and Kalaba [26] and has recently been widely investigated and applied: to several highly nonlinear problems by Ben-Romdhane et al. [27], and in combination with other methods by Feng et al. [28] and by Sinha et al. [29]. By combining the technique of lower and upper solutions with the quasilinearization method and incorporating the Newton–Fourier idea, it becomes possible to simultaneously construct lower and upper bounding sequences that converge quadratically to the solution of the given problem [30]. This approach, referred to as generalized quasilinearization, has been further refined and theoretically extended to address a broad class of problems.
The quasilinearization method is a realization of Bellman–Kalaba quasilinearization, representing a generalization of the Newton–Raphson method. It is employed for solving either scalar or systems of nonlinear ordinary differential equations or nonlinear partial differential equations. One can also interpret quasilinearization as Newton’s method for the solution of a nonlinear differential operator equation.
In [31,32,33], existence, uniqueness and convergence results for the generalized quasilinearization method for the Caputo fractional nonlinear differential equation, represented as a Volterra fractional integral equation, are obtained.
Solutions of inverse or identification problems play a key role in many practical applications [34,35,36,37]. Such a problem is a process of reconstructing, from a set of observations/measurements, unknown parameters/coefficients, initial conditions or sources, i.e., quantities that cannot be directly measured. Such problems have attracted a lot of attention from many researchers during the last decade. In spite of many results on the existence, uniqueness and stability of the solution for linear ODEs and PDEs, the nonlinearity of the differential equation renders it challenging to solve. In general, inverse problems are highly ill-posed [34,35,36,37,38], and this makes them difficult to solve, even numerically. Small fluctuations (noisy measurements) in the input data can lead to significant changes in the outcome. Therefore, to achieve meaningful results, the inversion or reconstruction process requires stabilization, commonly known as regularization.
Numerical methods for solving inverse problems for fractional-order ODEs are proposed in [14,39,40]. The shooting method is employed in [14] for numerically solving terminal value problems for Caputo fractional differential equations. In [39], the authors develop a numerical method for recovering a right-hand side function of a nonlinear fractional-order ODE, using Chelyshkov wavelets. Numerical discretizations of inverse problems for Caputo fractional ODEs under interval uncertainty are constructed in [40].
Recently, a lot of research on inverse problems has concerned fractional-order nonlinear ODE systems. In [41], a block pulse function method for the identification of parameters and initial conditions in fractional-order systems is proposed. A time-domain determination inverse problem for fractional-order chaotic systems is solved in [42] by a hybrid particle swarm optimization-genetic algorithm method. An adaptive recursive least-squares method is applied in [43] for parameter estimation in fractional-order chaotic systems.
In [44], the authors developed a new numerical approach, based on the Adams–Bashforth–Moulton method, for solving the parameter estimation inverse problem for fractional-order nonlinear dynamical biological systems. Time fractional-order biological nonlinear ODE systems in the Caputo sense with uncertain parameters are considered in [4,5]. In particular, the solution of HIV-I infection and Hepatitis E virus infection models is investigated. In [45], the author investigates a fractional-order delay nonlinear ODE system to study tuberculosis transmission. A numerical method for recovering dynamic parameters of tuberculosis is developed.
In [16], the authors use the Monte Carlo method based on the genetic algorithm in order to estimate unknown parameters in fractional SIR models describing the spread of the COVID-19 disease. The authors of [46] solve the numerical parameter identification inverse problem for the Caputo fractional SIR model, in order to estimate the economic damage of the COVID-19 pandemic. They minimize a least-squares cost functional.
The parameter identification inverse problem for the nonlinear ODE system with Caputo differential operator describing honeybee colony collapse was solved numerically in [12]. The authors applied a gradient optimization method to minimize a quadratic cost functional.
The aim of this paper is to extend, to the fractional multi-order ($\alpha_i$) ODE system, the quasilinearization parameter identification method proposed in [47,48] for classical ODE systems. We develop a robust and convergent numerical method of order $2-\max_i \alpha_i$. The approach successfully recovers the unknown parameters even for noisy data. The Tikhonov regularization technique is applied, which increases the range of convergence with respect to the initial guess.
The remaining part of the paper is organized as follows. In the next section, we formulate the direct and inverse problems on the basis of a finite number of solution observations. In Section 3, the fractional-order quasilinearization inverse method is presented. The implementation of the method is described in Section 4. The usefulness of our approach is illustrated by numerical simulations for SIS and SEIR epidemiological models in Section 5. The paper ends with some conclusions.

2. Direct and Inverse Problems

First, we introduce the direct (forward) problem for the nonlinear fractional differential equation system with initial condition:
$$\frac{d^{\alpha} x(t)}{dt^{\alpha}} = f(t, x(t); p), \quad t_0 \le t \le T, \qquad x(t_0) = x_0,$$
where
$$\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_I), \quad x(t) = \big(x^1(t), \ldots, x^I(t)\big), \quad p = (p_1, \ldots, p_L) \in \mathbb{R}^L,$$
$\mathbb{R}^L$ is the real $L$-dimensional Euclidean space, and $d^{\alpha}/dt^{\alpha}$ is the left Caputo fractional derivative of order $0 < \alpha \le 1$, defined for an arbitrary function $y \in AC[t_0, T]$, i.e., $y$ absolutely continuous on $[t_0, T]$, by
$$\frac{d^{\alpha} y}{dt^{\alpha}} = \begin{cases} \dfrac{1}{\Gamma(1-\alpha)} \displaystyle\int_{t_0}^{t} \frac{1}{(t-\varsigma)^{\alpha}}\, \frac{dy(\varsigma)}{d\varsigma}\, d\varsigma, & 0 < \alpha < 1, \\[2ex] \dfrac{dy}{dt}, & \alpha = 1. \end{cases}$$
The vector function
$$f(t, x; p) = \big( f_1(t, x; p), \ldots, f_I(t, x; p) \big) : [t_0, T] \times \mathbb{R}^I \times \mathbb{R}^L \to \mathbb{R}^I,$$
where the symbol ';' underlines the dependence on the parameter $p$, is called the vector field. We will also use the notation $x^{\alpha} = \big( (x^1)^{\alpha_1}, (x^2)^{\alpha_2}, \ldots, (x^I)^{\alpha_I} \big)$.
The direct problem is to determine the solution $x^1(t), x^2(t), \ldots, x^I(t)$, when the parameters $p$, the right-hand side $f$ and the initial data $x_0$ in (1) are given.
We denote by $C^n(t_0, T)$ the set of functions $\varphi$ which are $n$-times differentiable on the interval $[t_0, T]$, endowed with the norm
$$\|\varphi\|_{C^n} = \max_{t \in [t_0, T]} \sum_{k=0}^{n} \left| \frac{d^k \varphi}{dt^k} \right|, \quad n = 0, 1, 2, \ldots, \qquad \|\varphi\|_{C^0} = \|\varphi\|_C.$$
Here, $|\cdot|$ is the Euclidean norm when the argument is a vector and the corresponding operator norm when the argument is a matrix.
Further, we will suppose that the vector-valued function $f$ from (1) has continuous partial derivatives up to second order,
$$\frac{\partial f}{\partial x} = \left( \frac{\partial f_i}{\partial x^j} \right), \quad i, j = 1, \ldots, I,$$
$$\frac{\partial f}{\partial p} = \left( \frac{\partial f_i}{\partial p_l} \right), \qquad \frac{\partial^2 f}{\partial p\, \partial x} = \left( \frac{\partial^2 f_i}{\partial p_l\, \partial x^j} \right), \quad i, j = 1, \ldots, I, \ l = 1, \ldots, L,$$
in a bounded convex region $D \subset \mathbb{R}^I$ and
$$P = \{ p \in \mathbb{R}^L : |p_l| < \widetilde{M}, \ l = 1, \ldots, L \}, \quad \widetilde{M} > 0.$$
We will use the following result of existence and uniqueness of the solution to the problem (1).
Theorem 1. 
Let $\alpha_i \in (0, 1)$, $i = 1, 2, \ldots, I$, and let the function $f(t, x)$ satisfy the conditions (3) and (4). Then, the problem (1) has a unique solution $x \in C^2[t_0, T]$, which is the solution of the nonlinear Volterra integral equation
$$x(t) = x_0 + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t - \varsigma)^{\alpha - 1} f(\varsigma, x(\varsigma); p)\, d\varsigma$$
and vice versa.
Proof. 
For the proof, we use the results of [49], as well as those of [19,50].
The condition (3) is sufficient for $f$ to be Lipschitz continuous for all $t \in [t_0, T]$ and $p \in P$. Also, $\left| \frac{\partial f_i}{\partial x^j}(t, x(t); p) \right| \le \mathrm{const}$, $i, j = 1, 2, \ldots, I$, $p \in P$, $t \in [t_0, T]$. Then, Theorem 2 from [49] and Theorem 3.1 from [19] assure the global existence of a unique solution $x \in C^2[t_0, T]$ to the problem (1), as well as the representation (5).    □
Further, since the system (1) depends on the parameter vector $p = (p_1, p_2, \ldots, p_L)$, we state the following assertion.
Corollary 1. 
Let the conditions of Theorem 1 and (4) be fulfilled. Then, the solution $x(t, p)$ has continuous partial derivatives with respect to $p = (p_1, p_2, \ldots, p_L)$.
Proof. 
Let us consider the auxiliary problem:
$$\frac{d^{\alpha} x(t)}{dt^{\alpha}} = f(t, x(t); p), \quad t_0 \le t \le T, \qquad x(t_0) = x_0, \qquad \frac{d^{\alpha} p}{dt^{\alpha}} = 0, \quad t_0 \le t \le T.$$
Then, an analysis of the system (6), similar to that of Theorem 1, provides the proof.    □
The inverse problem in this paper consists of identifying the parameter vector $p \in P$, where $P$ is an admissible set, from the observed behavior of the solution of the dynamical system (1).
The idea of the parameter identification inverse problem is to minimize the difference between a measured state $x_{\sigma}(t; p)$ of the system and the one calculated with the mathematical model. However, in real practice, the measurements are available at a finite number of times. So, let us denote by $x_{\sigma}(t_m; p)$, $m = 1, 2, \ldots, M$, the measured states of the dynamical system (1) and by $x(t_m; p)$, $m = 1, 2, \ldots, M$, those obtained by solving the fractional-order differential problem (1). We gather the residuals for the measurements to be calibrated,
$$r_m(p) = x(t_m; p) - x_{\sigma}(t_m; p), \quad m = 1, 2, \ldots, M,$$
in the residual vector $r(p) \in \mathbb{R}^M$, i.e.,
$$r(p) = [r_1(p), r_2(p), \ldots, r_M(p)]^{Tr},$$
where $Tr$ denotes the transpose operation.
We treat the inverse problem in the nonlinear least-square form
$$\min_{p \in P} F(p), \qquad F(p) := \frac{1}{2} \| r(p) \|^2 = \frac{1}{2} r^{Tr}(p)\, r(p).$$
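As a small illustration of this least-squares setting, the following Python sketch (our own, not part of the method's implementation) assembles a residual vector and the corresponding cost from hypothetical model outputs and measurements.

```python
import numpy as np

def least_squares_cost(model_values, measured_values):
    """Residual vector r(p) and cost F(p) = 0.5 * r^T r for given model and data values.

    model_values    : model states x(t_m; p) at the measurement times, shape (M,)
    measured_values : measured states x_sigma(t_m; p), shape (M,)
    """
    r = np.asarray(model_values) - np.asarray(measured_values)   # residuals r_m(p)
    return r, 0.5 * float(r @ r)                                  # r(p) and F(p)

# toy usage with made-up numbers
r, F = least_squares_cost([1.02, 0.97, 1.10], [1.00, 1.00, 1.00])
```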
This conception is developed in the next section, where a functional of the type $\int_{t_0}^{T} F(\delta p)\, dt$ is minimized, with $\delta p$ being the residual on the iterations of the quasilinearization of the problem (1).

3. Quasilinearization Optimization Approach to (1)

For clarity of the exposition, we describe the method for the simple case of two equations with three unknown parameters, i.e., I = 2 , L = 3 .
Let $x_0^1(t), x_0^2(t)$ be an initial approximation. Applying quasilinearization [26,51] to (1), at each iteration $k = 1, 2, \ldots$, we arrive at the linear Cauchy problem:
$$\frac{d^{\alpha_i} x_k^i}{dt^{\alpha_i}} = f_i(t, x_{k-1}^1, x_{k-1}^2; p) + \frac{\partial f_i}{\partial x^i}(t, x_{k-1}^1, x_{k-1}^2; p)\,(x_k^i - x_{k-1}^i) + \frac{\partial f_i}{\partial x^{3-i}}(t, x_{k-1}^1, x_{k-1}^2; p)\,(x_k^{3-i} - x_{k-1}^{3-i}), \quad i = 1, 2,$$
$$x_k^i(t_0) = x_0^i, \quad i = 1, 2.$$
Following [26], for a known parameter vector $p = (p_1, p_2, p_3)$, by solving the problem (10) and (11) one can construct a sequence $\{x_k(t)\}$ which converges to $x(t; p)$ with a quadratic rate of convergence. However, we do not know the exact value of the parameter $p \in P$, so, starting from an initial guess $p_0 \in P$, after each quasilinearization iteration we specify the parameter value $p_k = p_{k-1} + \delta p_k$ from the linear system (10) and (11) by the following sensitivity scheme.
In line with the theory of sensitivity functions [52], we represent the differences $\{ x^i(t) - x_k^i(t; p_k) \}$, $i = 1, 2$, with second-order accuracy as follows:
$$r^i = x^i(t; p) - x_k^i(t; p_k) = \underbrace{x^i(t; p) - x_k^i(t; p_{k-1})}_{\delta x_k^i} - \sum_{l=1}^{3} \frac{\partial x_k^i}{\partial p_l}(t; p_{k-1})\,(p_k^l - p_{k-1}^l) + o(|\delta p|) \quad \text{as } |\delta p| \to 0, \quad i = 1, 2.$$
The Jacobian $P_k$ with elements $(P_k^{il})$, $i = 1, 2$, $l = 1, 2, 3$, called the sensitivity matrix,
$$P_k \equiv (P_k^{il}) = \begin{pmatrix} \dfrac{\partial x^1}{\partial p_1}(t; p_{k-1}) & \dfrac{\partial x^1}{\partial p_2}(t; p_{k-1}) & \dfrac{\partial x^1}{\partial p_3}(t; p_{k-1}) \\[2ex] \dfrac{\partial x^2}{\partial p_1}(t; p_{k-1}) & \dfrac{\partial x^2}{\partial p_2}(t; p_{k-1}) & \dfrac{\partial x^2}{\partial p_3}(t; p_{k-1}) \end{pmatrix} = \big( P_k^1, P_k^2, P_k^3 \big),$$
can be found after solving the system (10) and (11). Here, $P_k^s$, $s = 1, 2, 3$, are the corresponding columns of the matrix $P_k$.
We suppose the existence of the continuous mixed derivatives $\dfrac{\partial^2 f_i}{\partial p_l\, \partial x^j}$, $i = 1, 2$, $l = 1, 2, 3$, $j = 1, 2$. Then, the sensitivity matrix $P_k$ is a solution of the following matrix equation, obtained by differentiation of the system (10) with respect to the vector $p$:
$$\frac{d^{\alpha_i} P_k}{dt^{\alpha_i}} = J(t, x_{k-1}^1, x_{k-1}^2, p_{k-1})\, P_k + Q_k + \frac{\partial f}{\partial p}(t, x_{k-1}^1, x_{k-1}^2, p_{k-1}), \qquad P_k(t_0) = 0,$$
where the Jacobian $J(t, x_{k-1}^1, x_{k-1}^2, p_{k-1})$ is defined as follows:
$$J(t, x_{k-1}^1, x_{k-1}^2, p_{k-1}) = \big( J^1(t, x_{k-1}^1, x_{k-1}^2, p_{k-1}),\ J^2(t, x_{k-1}^1, x_{k-1}^2, p_{k-1}) \big)^{Tr} = \big( J^1, J^2 \big)^{Tr}, \qquad J^i = \left( \frac{\partial f_i}{\partial x^i}(t, x_{k-1}^1, x_{k-1}^2; p),\ \frac{\partial f_i}{\partial x^{3-i}}(t, x_{k-1}^1, x_{k-1}^2; p) \right),$$
and $Q_k$ is a $2 \times 3$ matrix with elements
$$q_k^{il} = \sum_{j=1}^{2} \frac{\partial^2 f_i}{\partial p_l\, \partial x^j}(t, x_{k-1}^1, x_{k-1}^2, p_{k-1})\,(x_k^j - x_{k-1}^j), \quad i = 1, 2, \ l = 1, 2, 3.$$
The increment δ p k is obtained by minimization of the functional [47,48]:
$$J_k(\delta p) = \int_{t_0}^{T} \left[ \begin{pmatrix} \delta x_k^1 \\ \delta x_k^2 \end{pmatrix} - P_k\, \delta p \right]^{Tr} \left[ \begin{pmatrix} \delta x_k^1 \\ \delta x_k^2 \end{pmatrix} - P_k\, \delta p \right] dt.$$
It is easy to see that the functional $J_k(\delta p)$ is twice continuously differentiable, and we have the necessary extremum condition
$$J_k'(\delta p) = 2 \int_{t_0}^{T} \left[ P_k^{Tr} P_k\, \delta p - P_k^{Tr} \begin{pmatrix} \delta x_k^1 \\ \delta x_k^2 \end{pmatrix} \right] dt = 0$$
and
$$J_k''(\delta p) = 2 \int_{t_0}^{T} P_k^{Tr} P_k\, dt > 0.$$
It follows from (16) that the minimizer $\delta p_k$ satisfies the system of algebraic equations
$$C_k\, \delta p = D_k, \qquad C_k = \big( c_k^{lj} \big)_{l,j=1}^{3,3} = \int_{t_0}^{T} P_k^{Tr} P_k\, dt.$$
Therefore, $C_k$ is a $3 \times 3$ symmetric matrix with elements
$$c_k^{lj} = \int_{t_0}^{T} \begin{pmatrix} \dfrac{\partial x_k^1}{\partial p_l}(t; p_{k-1}) \\[1.5ex] \dfrac{\partial x_k^2}{\partial p_l}(t; p_{k-1}) \end{pmatrix}^{Tr} \begin{pmatrix} \dfrac{\partial x_k^1}{\partial p_j}(t; p_{k-1}) \\[1.5ex] \dfrac{\partial x_k^2}{\partial p_j}(t; p_{k-1}) \end{pmatrix} dt$$
and
$$D_k = \big( d_k^l \big)_{l=1}^{3} = \int_{t_0}^{T} P_k^{Tr} \begin{pmatrix} \delta x_k^1 \\ \delta x_k^2 \end{pmatrix} dt$$
is a 3-component vector:
$$d_k^l = \int_{t_0}^{T} \begin{pmatrix} \dfrac{\partial x_k^1}{\partial p_l}(t; p_{k-1}) \\[1.5ex] \dfrac{\partial x_k^2}{\partial p_l}(t; p_{k-1}) \end{pmatrix}^{Tr} \begin{pmatrix} x^1(t; p) - x^1(t; p_{k-1}) \\[0.5ex] x^2(t; p) - x^2(t; p_{k-1}) \end{pmatrix} dt.$$
The matrix $C_k$ is the Gram matrix of the vectors $P_k^j$ and
$$c_k^{lj} = \big( P_k^l, P_k^j \big)_{L_2(t_0, T)}.$$
It is shown in [47,48,53] that
$$\det(C_k) = \Gamma\big( P_k^1, P_k^2, P_k^3 \big) \ge 0,$$
and it is strictly positive, that is to say, $C_k$ is non-singular, if and only if the vectors $P_k^l$, $l = 1, 2, 3$, are linearly independent. In this case, we summarize the approach in the following steps.
Algorithm
Step 1. Choose an initial guess $p_0$, $\big( x_0^1(t), x_0^2(t) \big)$ and set $k = 1$.
Step 2. Solve the linear problem (10) and (11) to find $x_k^1(t; p_{k-1})$, $x_k^2(t; p_{k-1})$, and the system (14) to find $P_k$.
Step 3. Solve the linear algebraic Equations (17) to find the new parameter value $p_k = p_{k-1} + \delta p_k$.
Step 4. One of the following expressions can be used as a criterion to stop the iteration process:
$$\| p_k - p_{k-1} \|, \qquad J_k(\delta p),$$
$$\int_{t_0}^{T} \begin{pmatrix} x^1(t; p) - x_k^1(t; p_k) \\ x^2(t; p) - x_k^2(t; p_k) \end{pmatrix}^{Tr} \begin{pmatrix} x^1(t; p) - x_k^1(t; p_k) \\ x^2(t; p) - x_k^2(t; p_k) \end{pmatrix} dt,$$
when it reaches a sufficiently small value. Otherwise, set $k := k + 1$ and go to Step 2.
Now, following the results in [47,48], more specifically Section 3 in [47], we discuss the convergence of the quasilinearization method based on the Algorithm, for the general case $i = 1, 2, \ldots, I$, $l = 1, 2, \ldots, L$.
We suppose that the solution of the problem (1) satisfies
$$\| x \|_C \le X = \mathrm{const}, \qquad \max_{\| x \|_C \le X}\ \sup_{p \in P}\ \max\big\{ \| f \|_C,\ \| J(x) \|_C \big\} \le Y = \mathrm{const}.$$
From (5), (10) and (11) it follows that we have the integral representation
$$x_k(t) = x_0 + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t - \varsigma)^{\alpha - 1} \Big[ f(\varsigma, x_{k-1}; p_{k-1}) + J(\varsigma, x_{k-1}; p_{k-1})\,(x_k - x_{k-1}) \Big]\, d\varsigma,$$
which gives for $k = 1$:
$$\| x_1 \|_C \le \| x_0 \|_C + Y (X + 1)(T - t_0)^{\alpha} + Y (T - t_0)^{\alpha} \| x_1 \|_C.$$
If we take $T - t_0$ such that
$$(T - t_0)^{\alpha_i} \le \frac{X - \| x_0 \|_C}{Y (1 + 2X)} \quad \text{for all } i = 1, 2, \ldots, I,$$
then $\| x_1 \|_C \le X$. Further, when (19) holds, it follows by induction that $\| x_k \|_C \le X$ for $k \ge 1$.
Next, we assume that
$$\max_{\| x \|_C \le X}\ \sup_{p \in P}\ \max_{i, j, l} \left\| \frac{\partial^2 f_i}{\partial p_l\, \partial x^j} \right\|_C \le Y.$$
Also, we suppose that $\| P_k \|_C \le X$ for sufficiently small $T - t_0$ and any $k \ge 1$.
Further, using the representation (18) and following the line of consideration in Section 3 of [47], we derive
$$\| x_{k+1}(t; p_k) - x_k(t; p_{k-1}) \|_C \le \varepsilon^{2^k} X_1,$$
where $X_1 = \max_{1 \le i \le I} \dfrac{Y (T - t_0)^{\alpha_i}}{1 - Y (T - t_0)^{\alpha_i}}$ and $\varepsilon = X_1 \| x_1(t; p_0) - x_0(t) \|_C$.
Additionally, as in [47], one can deduce for the functional $J_k = J_k(\delta p_k)$, $\delta p_k = p_k - p_{k-1}$, of (12) the inequalities
$$0 \le J_{k+1} \le J_k + \mathrm{const} \cdot \varepsilon^{2^k}.$$
If we require $\varepsilon < 1$, then there exists a limit $J_* = \lim_{k \to \infty} J_k$.
Furthermore, if the functional $J_k(\delta p)$ is a strongly convex function with convexity constant $0.5 \varrho$, see [54], then, following the results of Section 3 in [47], one can show that
$$\frac{1}{2} \varrho\, | \delta p - \delta p_k |^2 \le J_k(\delta p) - J_k(\delta p_k)$$
and
$$0 \le \frac{1}{2} \varrho\, | p - p_k |^2 \le \mathrm{const} \cdot \varepsilon^{2^k}, \qquad 0 \le J_k \le \mathrm{const} \cdot \varepsilon^{2^k}.$$
Therefore, the next assertion holds.
Theorem 2. 
Assume that the vector-function $f$ and its second-order derivatives (3) and (4) are continuous and bounded for $(t, x; p) \in (t_0, T) \times D \times P$. Let $x^1(t; p), x^2(t; p), \ldots, x^I(t; p)$, $p \in P$, be the solution to the problem (1) and assume that the conditions (20)–(22) are fulfilled. Then, the Algorithm for the general case $i = 1, 2, \ldots, I$, $l = 1, 2, \ldots, L$ is convergent with a quadratic rate of convergence and the estimate (23) holds.

4. Numerical Implementation of the Method

In this section, we present the numerical approach and discuss the realization.

4.1. Numerical Discretization

For the numerical discretization of the fractional derivative, we apply the L1 formula on a non-uniform mesh [55,56]. We define the non-uniform temporal mesh $\bar{\omega}_{\tau}: t_0 < t_1 < \cdots < t_M = T$ with step sizes $\tau_{n+1} = t_{n+1} - t_n$, $n = 0, 1, \ldots, M-1$, $\tau_n < \tau_{n+1}$, and $\tau = \max_{n} \tau_n$, and denote $(x^i)^n = x^i(t_n)$, $i = 1, 2$. We assume that the number of measurements coincides with the number of grid nodes and that we have measurements at each grid node $t_n$.
The Caputo fractional derivative of the function at $(x^i)^{n+1}$ is approximated by
$$\frac{d^{\alpha_i} (x^i)^{n+1}}{dt^{\alpha_i}} \approx \frac{1}{\Gamma(1 - \alpha_i)} \sum_{s=0}^{n} \frac{(x^i)^{s+1} - (x^i)^{s}}{\tau_{s+1}} \int_{t_s}^{t_{s+1}} (t_{n+1} - \eta)^{-\alpha_i}\, d\eta = \sum_{s=0}^{n} \big[ (x^i)^{s+1} - (x^i)^{s} \big]\, \rho_{n,s}^{i}, \quad i = 1, 2,$$
where
$$\rho_{n,s}^{i} = \frac{(t_{n+1} - t_s)^{1 - \alpha_i} - (t_{n+1} - t_{s+1})^{1 - \alpha_i}}{\Gamma(2 - \alpha_i)\, \tau_{s+1}} \quad \text{and} \quad \rho_{n,n}^{i} = \frac{\tau_{n+1}^{-\alpha_i}}{\Gamma(2 - \alpha_i)}.$$
Further, we introduce the notation
$$G^n(x^i) := \sum_{s=1}^{n} \big( \rho_{n,s}^{i} - \rho_{n,s-1}^{i} \big)(x^i)^{s} + \rho_{n,0}^{i}\, (x^i)^{0}, \quad n = 0, 1, \ldots, M - 1, \ i = 1, 2.$$
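Both the weights $\rho_{n,s}^{i}$ and the history term $G^n(x^i)$ follow directly from these formulas. A minimal Python sketch of one possible implementation on an arbitrary non-uniform mesh (the function names are ours, not from the paper) is:

```python
import numpy as np
from math import gamma

def l1_weights(t, n, alpha):
    """L1 weights rho_{n,s}, s = 0, ..., n, for the mesh nodes t[0..M] and order alpha."""
    tau = np.diff(t)                                   # tau_{s+1} = t_{s+1} - t_s
    s = np.arange(n + 1)
    num = (t[n + 1] - t[s]) ** (1 - alpha) - (t[n + 1] - t[s + 1]) ** (1 - alpha)
    return num / (gamma(2 - alpha) * tau[s])

def history_term(x_hist, rho):
    """G^n(x) = sum_{s=1}^{n} (rho_{n,s} - rho_{n,s-1}) * x^s + rho_{n,0} * x^0."""
    return float(np.dot(rho[1:] - rho[:-1], x_hist[1:len(rho)]) + rho[0] * x_hist[0])
```

With these two helpers, the time stepping for (24) and (25) amounts to solving a small linear system for the unknowns at every new layer $t_{n+1}$.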
Therefore, the discretized problem (10) and (11) becomes
$$\rho_{n,n}^{i} x_k^i - f_i(t_{n+1}, x_{k-1}^1, x_{k-1}^2; p) - \frac{\partial f_i}{\partial x^i}(t_{n+1}, x_{k-1}^1, x_{k-1}^2; p)\,(x_k^i - x_{k-1}^i) - \frac{\partial f_i}{\partial x^{3-i}}(t_{n+1}, x_{k-1}^1, x_{k-1}^2; p)\,(x_k^{3-i} - x_{k-1}^{3-i}) = G^n(x^i), \quad i = 1, 2,$$
$$x_k^i(t_0) = x_0^i, \quad i = 1, 2,$$
and the discrete sensitivity problem corresponding to (14), in vector-matrix form, is
$$E_{\rho} P_k - J(t_{n+1}, x_{k-1}^1, x_{k-1}^2, p_{k-1})\, P_k - Q_k - \frac{\partial f}{\partial p}(t_{n+1}, x_{k-1}^1, x_{k-1}^2, p_{k-1}) = G^n(P), \qquad P_k(t_0) = 0, \quad l = 1, 2, 3, \ i = 1, 2,$$
where $q_k^{il}$ are computed at the time layer $t_{n+1}$, $E_{\rho} = \begin{pmatrix} \rho_{n,n}^{1} & 0 \\ 0 & \rho_{n,n}^{2} \end{pmatrix}$, and $G^n(P)$ is a $2 \times 3$ matrix with elements
$$G_n^{il} = G^n(P^{il}) = \sum_{s=1}^{n} \big( \rho_{n,s}^{i} - \rho_{n,s-1}^{i} \big) (P^{il})^{s} + \rho_{n,0}^{i}\, (P^{il})^{0}, \quad i = 1, 2, \ l = 1, 2, 3.$$
Here, $x_k^i$ and $P_k^{il}$ are the solutions $x^i$ and $P^{il}$, respectively, at the $k$-th iteration on the new time layer $t_{n+1}$, i.e., $x_k^i = x_k^i(t_{n+1})$ and $P_k^{il} = P_k^{il}(t_{n+1})$.
The regularization technique stabilizes the solution in the case of ill-conditioning and sensitivity of the solution to noise and perturbations. If the sensitivity vectors are not linearly independent, then the matrix $C_k$ may be ill-conditioned. In this case, to solve the system (17), we use Tikhonov regularization [57]. One regularization replaces the functional $J_k(\delta p)$ with
$$\left\| \begin{pmatrix} \delta x_k^1 \\ \delta x_k^2 \end{pmatrix} - P_k\, \delta p \right\|_{L_2(t_0, T)}^{2} + \epsilon\, \| \delta p \|^{2}.$$
This leads to the linear system
$$(C_k + \epsilon E)\, \delta p = D_k,$$
instead of (17), where E is the identity matrix, and ϵ is a regularization parameter.
Another regularization approach is to replace the functional J ( δ p ) with
$$\left\| \begin{pmatrix} \delta x_k^1 \\ \delta x_k^2 \end{pmatrix} - P_k\, \delta p \right\|_{L_2(t_0, T)}^{2} + \epsilon\, \| p_{k-1} + \delta p - \tilde{p} \|^{2},$$
where $\tilde{p}$ is a known vector expected to be close to the true value of the unknown parameter, i.e., the unknown parameter is sought in a neighborhood of $\tilde{p}$. This leads to the following linear algebraic system of equations, instead of (17):
$$(C_k + \epsilon E)\, \delta p = D_k + \epsilon\, (\tilde{p} - p_{k-1}).$$
To compute $C_k$ and $D_k$, the integrals are approximated by the trapezoidal rule.
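A minimal sketch of this step, assuming the sensitivity matrices $P_k(t_n)$ and the misfits $\delta x_k(t_n)$ are already stored on the mesh: it assembles $C_k$ and $D_k$ with the trapezoidal rule and solves the Tikhonov-regularized system (27). The function name and array layout are our own choices.

```python
import numpy as np

def parameter_increment(t, P, dx, eps):
    """Solve (C_k + eps*E) * dp = D_k for the parameter increment dp.

    t   : mesh nodes, shape (M+1,)
    P   : sensitivity matrices P_k(t_n), shape (M+1, I, L)
    dx  : misfits delta x_k(t_n), shape (M+1, I)
    eps : Tikhonov regularization parameter
    """
    C_nodes = np.einsum('nij,nil->njl', P, P)      # integrand of C_k = int P^T P dt
    D_nodes = np.einsum('nij,ni->nj', P, dx)       # integrand of D_k = int P^T dx dt
    w = np.diff(t)                                 # step sizes tau_{n+1}
    C = 0.5 * np.tensordot(w, C_nodes[:-1] + C_nodes[1:], axes=(0, 0))   # trapezoidal rule
    D = 0.5 * np.tensordot(w, D_nodes[:-1] + D_nodes[1:], axes=(0, 0))
    return np.linalg.solve(C + eps * np.eye(C.shape[0]), D)
```

For the variant (28), the same routine applies with the right-hand side replaced by $D_k + \epsilon(\tilde{p} - p_{k-1})$.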

4.2. Realization of the Numerical Discretizations

Now, we summarize the results above in the following Algorithms 1 and 2 for parameter identification inverse problem (PIIP).
Algorithm 1 PIIP with regularization (27)
Require: $p_0$, $\big( x_0^1(t), x_0^2(t) \big)$, $\epsilon$, $tol$
Ensure: $p$, $\big( x^1(t_n), x^2(t_n) \big)$, $n = 1, 2, \ldots, M$
$k \leftarrow 1$, $Q \leftarrow tol + 1$;
while $Q > tol$ do
    Find $x_k^1(t_n; p_{k-1})$, $x_k^2(t_n; p_{k-1})$, $n = 1, 2, \ldots, M$, solving (24) and (25);
    Determine $P_k(t_n)$, $n = 1, 2, \ldots, M$, from (26);
    Find $p_k = p_{k-1} + \delta p_k$, solving (27);
    $Q \leftarrow \| p_k - p_{k-1} \|$;
    $k \leftarrow k + 1$.
end while
Algorithm 2 PIIP with regularization (28)
Require: $p_0$, $\big( x_0^1(t), x_0^2(t) \big)$, $\epsilon$, $tol$, $\tilde{p}$
Ensure: $p$, $\big( x^1(t_n), x^2(t_n) \big)$, $n = 1, 2, \ldots, M$
$k \leftarrow 1$, $Q \leftarrow tol + 1$;
while $Q > tol$ do
    Find $x_k^1(t_n; p_{k-1})$, $x_k^2(t_n; p_{k-1})$, $n = 1, 2, \ldots, M$, solving (24) and (25);
    Determine $P_k(t_n)$, $n = 1, 2, \ldots, M$, from (26);
    Find $p_k = p_{k-1} + \delta p_k$, solving (28);
    $Q \leftarrow \| p_k - p_{k-1} \|$;
    $k \leftarrow k + 1$.
end while
Here, $\| \cdot \|$ is a discrete version of the $\| \cdot \|_C$ norm, namely $\| \nu \| = \max_{0 \le n \le M} | \nu^n |$.
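The overall structure of Algorithms 1 and 2 can be summarized by the following Python skeleton; solve_linearized_state, solve_sensitivity and increment stand for the discretizations (24)–(26) and for the regularized update (27) or (28), and are placeholders rather than part of the paper's code.

```python
import numpy as np

def piip(p0, solve_linearized_state, solve_sensitivity, increment, tol=1e-4, max_iter=100):
    """Quasilinearization loop for the parameter identification inverse problem.

    p0                     : initial guess for the parameter vector
    solve_linearized_state : p -> trajectory x_k(t_n; p), as in (24)-(25)
    solve_sensitivity      : (x_k, p) -> sensitivity matrices P_k(t_n), as in (26)
    increment              : (x_k, P_k, p) -> delta p_k, from (27) or (28)
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        x_k = solve_linearized_state(p)
        P_k = solve_sensitivity(x_k, p)
        dp = increment(x_k, P_k, p)
        p = p + dp
        if np.max(np.abs(dp)) <= tol:      # Q = ||p_k - p_{k-1}|| in the discrete norm above
            break
    return p, x_k
```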
We will clarify two points. The first one concerns the convergence. Although the quasilinearization is of second order of convergence in the sense of Theorem 2, since we apply an $O\big(M^{-(2 - \alpha)}\big)$ approximation of the fractional derivative, we may expect at most $\min_i \{ 2 - \alpha_i \} = 2 - \max_i \{ \alpha_i \}$ order of convergence of Algorithms 1 and 2.
The second point is related to the regularity of the solution. If the conditions of Theorem 1 are fulfilled, then we may use a uniform temporal mesh. Following the results in [55,58], we can conclude that the order of convergence of the numerical method for the direct problem is $2 - \max_i \{ \alpha_i \}$.
Consider the case t 0 = 0 . Because the fractional differential equation is a nonlocal problem and the derivative of the solution usually exhibits singularity at t 0 = 0 , it is unfeasible to employ high-order numerical methods with uniform meshes. Hence, it is reasonable to employ non-uniform meshes to capture the singularity near the initial time.
Suppose that there exists a positive constant $C$ such that
$$\left| \frac{d^s x^i(t)}{dt^s} \right| \le C \big( 1 + t^{\alpha_i - s} \big), \quad i = 1, 2, \ s = 0, 1, 2, \ t \in [0, T],$$
and consider the temporal grid defined as follows [55,58]:
$$t_n = T \left( \frac{n}{M} \right)^{r}, \quad r \ge 1, \ n = 0, 1, \ldots, M,$$
where for $r = 1$ the mesh is uniform; otherwise, the grid nodes are concentrated close to the origin $t_0 = 0$. Then, following the results in [55,58], the convergence rate of the numerical solution of the direct problem in the maximal norm is
$$\| x(t) - x(t_n) \| \le O\big( M^{-\min\{ 2 - \alpha_1,\, r \alpha_1 \}} + M^{-\min\{ 2 - \alpha_2,\, r \alpha_2 \}} \big),$$
or, in the general case of $I$ equations,
$$\| x(t) - x(t_n) \| \le O\left( \sum_{i=1}^{I} M^{-\min\{ 2 - \alpha_i,\, r \alpha_i \}} \right) = O\big( M^{-\min_i \min\{ 2 - \alpha_i,\, r \alpha_i \}} \big).$$
Therefore, in order to obtain optimal accuracy, we use the graded mesh (29), taking $r = (2 - \underline{\alpha}) / \underline{\alpha}$, where $\underline{\alpha} = \min_i \alpha_i$.
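For reference, the graded mesh (29) with this choice of $r$ can be generated as in the following small sketch (the function name is ours).

```python
import numpy as np

def graded_mesh(T, M, alphas):
    """Graded mesh t_n = T * (n / M)^r with r = (2 - min(alpha)) / min(alpha), n = 0, ..., M."""
    a = min(alphas)
    r = (2.0 - a) / a
    return T * (np.arange(M + 1) / M) ** r

t = graded_mesh(T=1.0, M=320, alphas=(0.7, 0.3))   # nodes clustered near t = 0
```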

5. Numerical Simulations

In this section, we illustrate the efficiency of the developed method for recovering three and four parameters in biological ODE systems with two and four equations, respectively. We give the absolute ($E_{p^l}$) and relative ($\mathcal{E}_{p^l}$) errors of the recovered parameters $p$ and the relative errors ($E_{x^i}$, $E_{x^i}^{2}$) of the solution in the maximal and $L_2$ norms, respectively:
$$E_{\nu} = E_{\nu}(M) = \frac{\| \nu^n - \nu_*^n \|}{\| \nu_*^n \|}, \quad E_{\nu}^{2} = E_{\nu}^{2}(M) = \frac{\| \nu^n - \nu_*^n \|_2}{\| \nu_*^n \|_2}, \quad \| \nu \|_2 = \left( \sum_{n=0}^{M} \tau_n\, (\nu^n)^2 \right)^{1/2}, \quad E_{p^l} = | p_k^l - p_*^l |, \quad \mathcal{E}_{p^l} = \frac{| p_k^l - p_*^l |}{| p_*^l |}, \quad l = 1, 2, \ldots, L,$$
for different fractional orders $\alpha_i$. Here, $\nu = x^i$; $p_*^l$ is the exact (true) value of the parameter $p^l$; $p_k^l$ and $\nu^n = (x^i)^n$ are the recovered parameter and solution at time layer $t_n$; and $\nu_*^n = (x^i)_*^n$ is the reference solution of the ODE system at time layer $t_n$, which can be the exact solution or the numerical solution of the direct problem with the exact values of the parameters, i.e., $p = p_*$.
In addition, we perform simulations to illustrate the model behavior with respect to parameters.
We will also investigate the case of noisy measurements, generated by
$$x_{\sigma}^{i} = x^i(t; p) + \sigma_{x^i}\, \mathcal{N},$$
where the measurements $x^i(t; p)$ are obtained from the direct problem, $\sigma_{x^i}$ is the noise level, and $\mathcal{N}$ is an $M$-dimensional random variable with standard normal distribution.
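In the numerical experiments, synthetic noisy observations of the form (30) can be generated, for instance, as in the following numpy sketch (the seed and names are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)                      # fixed seed for reproducibility

def add_noise(x_exact, sigma):
    """x_sigma = x(t; p) + sigma * N, with N an array of standard normal samples."""
    x_exact = np.asarray(x_exact, dtype=float)
    return x_exact + sigma * rng.standard_normal(x_exact.shape)

S_sigma = add_noise(np.linspace(0.8, 0.6, 321), sigma=0.065)   # e.g. the 10% noise level for S
```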
We take $t_0 = 0$, and the computations are conducted mainly for the more intricate scenario of a weakly singular solution on the graded mesh (29).

5.1. System of Two Equations with Three Unknown Parameters

First, we consider a simplified prototype of the SIS fractional-order epidemic model [2,5,8], describing the dynamics of susceptible $S(t)$ and infectious $I(t)$ individuals at any time $t$:
$$\frac{d^{\alpha_1} S}{dt^{\alpha_1}} = \Lambda - (\beta I + \mu) S, \quad S(0) = S_0, \qquad \frac{d^{\alpha_2} I}{dt^{\alpha_2}} = \beta S I - \gamma I, \quad I(0) = I_0.$$
Typically, in such models $\beta$, $\Lambda$, $\mu$ and $\gamma$ are constants, representing the transmission, birth, death and recovery rates, respectively.
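A minimal sketch of the right-hand side of (31), together with the state Jacobian and the parameter derivatives that enter the quasilinearization (10) and the sensitivity system (14); here $p = (\beta, \mu, \gamma)$ and $\Lambda$ is passed as a known constant (function names are ours).

```python
import numpy as np

def sis_rhs(S, I, p, Lam):
    """f(S, I; p) = (Lam - (beta*I + mu)*S, beta*S*I - gamma*I)."""
    beta, mu, gam = p
    return np.array([Lam - (beta * I + mu) * S, beta * S * I - gam * I])

def sis_jac_x(S, I, p):
    """Jacobian df/d(S, I) used in the quasilinearization step."""
    beta, mu, gam = p
    return np.array([[-(beta * I + mu), -beta * S],
                     [beta * I,          beta * S - gam]])

def sis_jac_p(S, I, p):
    """Derivatives df/d(beta, mu, gamma) entering the sensitivity equations."""
    return np.array([[-I * S, -S, 0.0],
                     [ S * I, 0.0, -I]])
```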
The inverse problem is formulated for recovering the parameters $p = (\beta, \mu, \gamma)$, and the computations are performed by Algorithm 1 with $tol = 10^{-4}$. We assume that the measurements are performed at the grid nodes.
Example 1. (Direct problem). First, we test the accuracy of the numerical discretization (24) and (25) for solving the direct problem (31), $T = 1$, in the case of a weakly singular solution. To this aim, we add residual functions $f_S(t)$ and $f_I(t)$ to the right-hand sides of the differential equations in (31), such that the exact solution is $S_* = 0.8 + E_{\alpha_1}(t^{\alpha_1})$, $I_* = 0.2 + E_{\alpha_2}(t^{\alpha_2})$, where $E_{\alpha_i}$, $i = 1, 2$, is the Mittag–Leffler function, namely $E_{\alpha}(z) = \sum_{n=0}^{\infty} \frac{z^n}{\Gamma(n\alpha + 1)}$.
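For moderate arguments, the Mittag–Leffler function in this manufactured solution can be evaluated by simply truncating the series; a small sketch of ours (with an ad hoc truncation length) is:

```python
from math import gamma

def mittag_leffler(alpha, z, terms=80):
    """Truncated series E_alpha(z) = sum_{n>=0} z^n / Gamma(n*alpha + 1).

    Adequate for the moderate arguments arising on [0, T] = [0, 1]; not a
    general-purpose evaluator for large |z|.
    """
    return sum(z ** n / gamma(n * alpha + 1) for n in range(terms))

S_star = 0.8 + mittag_leffler(0.5, 0.25 ** 0.5)    # S_*(t) at t = 0.25 for alpha_1 = 0.5
```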
The discretization corresponding to (24) and (25) for the modified system (31) with this exact solution is
$$\rho_{n,n}^{1} S_k + (\beta I_{k-1} + \mu) S_k + \beta S_{k-1} I_k = G^n(S) + \Lambda + \beta I_{k-1} S_{k-1} + f_S(t_{n+1}),$$
$$\rho_{n,n}^{2} I_k - \beta I_{k-1} S_k - \beta S_{k-1} I_k + \gamma I_k = G^n(I) + f_I(t_{n+1}) - \beta I_{k-1} S_{k-1},$$
where S k ( 0 ) = S 0 and I k ( 0 ) = I 0 .
We compute the solution of the direct problem from (32), initiating an iteration process at each time layer with stopping criterion $\max\{ \| S_k - S_{k-1} \|, \| I_k - I_{k-1} \| \} \le 10^{-6}$, for the exact values of the parameters $p_* = (\beta_*, \mu_*, \gamma_*) = (0.8, 0.40, 0.43)$ and $\Lambda = 0.4$ [3].
Note that in the inverse problem Algorithms 1 and 2, an iterative process with the same termination criterion is used to find the solution $x(t; p_{k-1})$.
In Table 1 we present the errors and the corresponding orders of convergence
$$CR_{\nu} = \log_2 \frac{E_{\nu}(M)}{E_{\nu}(2M)}, \qquad CR_{\nu}^{2} = \log_2 \frac{E_{\nu}^{2}(M)}{E_{\nu}^{2}(2M)},$$
for different and equal orders of the fractional derivatives. We observe that the accuracy of the solution $S$ is $O\big( M^{-(2 - \alpha_1)} \big)$ and that of the solution $I$ is $O\big( M^{-(2 - \alpha_2)} \big)$, both in the maximal and $L_2$ norms. Therefore, the convergence of the numerical solution of the system (31) is $E \le O\big( M^{-(2 - \max\{ \alpha_1, \alpha_2 \})} \big)$, where $E = \max\{ E_S, E_I \}$ or $E = \max\{ E_S^2, E_I^2 \}$.
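The observed orders reported in Table 1 follow directly from errors on consecutive meshes, e.g.:

```python
from math import log2

def observed_order(err_M, err_2M):
    """CR = log2(E(M) / E(2M)) for errors computed on meshes with M and 2M intervals."""
    return log2(err_M / err_2M)

cr = observed_order(2.552e-3, 1.018e-3)   # ~1.326, the first CR_S entry in Table 1
```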
Example 2. (Inverse problem: exact measurements). We consider the inverse problem for recovering the parameters $p$ in the ODE system (31) with the exact solution, modified just as in Example 1. We consider noise-free data ($\sigma = 0$), and the measurements $x(t; p)$ in (30) are equal to the numerical solution of the direct problem, computed by (24) and (25) for the true values $p_* = (0.8, 0.4, 0.43)$. The sensitivity problem (26) is
$$S_k^{\beta} \big( \rho_{n,n}^{1} + \beta_k I_{k-1} + \mu_k \big) + \beta_k S_{k-1} I_k^{\beta} = G^n(S^{\beta}) - S_{k-1} I_k - I_{k-1} (S_k - S_{k-1}),$$
$$S_k^{\beta} \big( -\beta_k I_{k-1} \big) + \big( \rho_{n,n}^{2} - \beta_k S_{k-1} + \gamma_k \big) I_k^{\beta} = G^n(I^{\beta}) + I_{k-1} S_k + S_{k-1} (I_k - I_{k-1}),$$
$$S_k^{\mu} \big( \rho_{n,n}^{1} + \beta_k I_{k-1} + \mu_k \big) + \beta_k S_{k-1} I_k^{\mu} = G^n(S^{\mu}) - S_k,$$
$$S_k^{\mu} \big( -\beta_k I_{k-1} \big) + \big( \rho_{n,n}^{2} - \beta_k S_{k-1} + \gamma_k \big) I_k^{\mu} = G^n(I^{\mu}),$$
$$S_k^{\gamma} \big( \rho_{n,n}^{1} + \beta_k I_{k-1} + \mu_k \big) + \beta_k S_{k-1} I_k^{\gamma} = G^n(S^{\gamma}),$$
$$S_k^{\gamma} \big( -\beta_k I_{k-1} \big) + \big( \rho_{n,n}^{2} - \beta_k S_{k-1} + \gamma_k \big) I_k^{\gamma} = G^n(I^{\gamma}) - I_k,$$
where $S_k^{p_l} = \dfrac{\partial S_k}{\partial p_l}$, $I_k^{p_l} = \dfrac{\partial I_k}{\partial p_l}$, $S_k^{p_l}(0) = 0$, $I_k^{p_l}(0) = 0$, $l = 1, 2, 3$.
The regularization parameter is determined experimentally, by multiple runs, so as to avoid singularity of the matrix $C_k$ (very small $\det(C_k)$) and to obtain a convergent iteration process. In Table 2, we give the regularization parameter, the recovered values of $p$, the errors of the recovered parameters and solution, and the number of iterations $k$ for different initial guesses $p_0$ and fractional orders, $M = 320$, $T = 1$. The reference value of the recovered solution $(S, I)$ is the exact solution $(S_*, I_*)$. The results show that, independently of the fractional order, the recovery of the parameters is almost exact, and the accuracy of the restored solution $(S, I)$ is similar to the accuracy of the discrete solution of the direct problem. Therefore, we conclude that the numerical method for solving the PIIP with noise-free data has the same order of convergence as the numerical method for the direct problem.
For comparison, we also give the results for the integer-order derivative. In this case, the L1 approximation is replaced by forward time stepping. We observe that the parameters are recovered with higher precision, and the obtained values of the solution $(S, I)$ have the same accuracy as the discrete solution of the direct problem. Note that for this case, regularization is not required.
Without regularization, the convergence of the method is limited with respect to the size of the convergence range for the initial guess $p_0$. Moreover, for this test example, a smaller $\alpha_1$ leads to stronger ill-conditioning of $C_k$, and convergence is achieved in more iterations. Nevertheless, applying the regularization approach, the parameters are successfully recovered. It is not a surprise that, if the initial guess $p_0$ approaches the true values $p_*$, the regularization parameter can be decreased, the iteration process requires a smaller number of iterations, and the accuracy improves.
For completeness, we illustrate the performance of the numerical approach for a smooth solution in the sense of Theorem 1. We repeat the same experiment as in Table 2, but now we take the exact solution to be $S_* = 0.8 + t^4 + 2t$, $I_* = 0.2 + t^5 + 3t$, and the mesh is uniform. The computational results are listed in Table 3. The parameter recovery is more accurate than in the case of the weakly singular solution, and for most sets of initial guesses, convergence is attained in a smaller number of iterations. Furthermore, in most cases, except for $\alpha_1 = 0.4$, $\alpha_2 = 0.6$ and initial guess $p_0 = (2, 3, 2)$, the recovery without regularization fails.
The computational results in this example show that if the system is not ill-conditioned, and thus the regularization is not used, the recovery of the parameters is almost exact. Otherwise, the use of regularization is strongly recommended in order to obtain a convergent iteration process, although this comes at the expense of the accuracy of the restored parameters. Nevertheless, the recovery is of good enough precision that the reconstructed solution has optimal accuracy.
Further, all experiments are performed for the case of a weakly singular solution.
Example 3. (Inverse problem: noisy measurements). The test problem is the same as in Example 2, but now we add noise, i.e., $\sigma_S > 0$ and $\sigma_I > 0$ in (30). We consider three levels of noise: $10\%$ ($\sigma_S = 0.065$, $\sigma_I = 0.03$), $5\%$ ($\sigma_S = 0.032$, $\sigma_I = 0.015$) and $1\%$ ($\sigma_S = 0.0065$, $\sigma_I = 0.003$).
The computations are performed for $p_0 = (2, 3, 2)$, $M = 320$ and different fractional orders. In the case $\alpha_1 = \alpha_2 = 0.5$, we take $\epsilon = 0.001$, while for $\alpha_1 = 0.7$, $\alpha_2 = 0.3$, we set $\epsilon = 0.0001$. In Table 4 we give the numerical results. The noise level has a bigger impact on the precision of the recovered parameters in the case of derivatives with widely different fractional orders than in the case of moderate and equal fractional orders, but the convergence is faster. For $\alpha_1 = \alpha_2 = 0.5$, the iteration process requires twice as many iterations as for the case $\alpha_1 = 0.7$, $\alpha_2 = 0.3$. In both scenarios, the precision of the determined parameters is optimal, such that the solution $(S, I)$ is sufficiently accurate.
In Figure 1, we illustrate the convergence of the iteration process; namely, we present the values of $| \delta p_k^j |$ at each iteration for $\alpha_1 = \alpha_2 = 0.5$ and $\alpha_1 = 0.7$, $\alpha_2 = 0.3$ and $10\%$ noise. For both sets of parameters, we observe fluctuations at the initial iterations; then, as was mentioned above, for $\alpha_1 = \alpha_2 = 0.5$, the convergence is attained at a small number of iterations compared to the case $\alpha_1 = 0.7$, $\alpha_2 = 0.3$. We believe that the reason is the smaller regularization parameter and the stronger grading of the mesh, which is prescribed by (29).

5.2. System of Four Equations with Four Unknown Parameters

We consider the following SEIR dynamical system, describing a measles epidemic [10]:
$$\frac{d^{\alpha} S}{dt^{\alpha}} = \Lambda^{\alpha} - (\beta^{\alpha} I + \mu^{\alpha}) S, \quad S(0) = S_0, \qquad \frac{d^{\alpha} E}{dt^{\alpha}} = \beta^{\alpha} S I - (\mu^{\alpha} + \lambda^{\alpha}) E, \quad E(0) = E_0,$$
$$\frac{d^{\alpha} I}{dt^{\alpha}} = \lambda^{\alpha} E - (\mu^{\alpha} + \gamma^{\alpha}) I, \quad I(0) = I_0, \qquad \frac{d^{\alpha} R}{dt^{\alpha}} = \gamma^{\alpha} I - \mu^{\alpha} R, \quad R(0) = R_0,$$
where $S(t)$, $E(t)$, $I(t)$, $R(t)$ are the susceptible, exposed, infectious and recovered individuals at time $t$, measured in months. The epidemiological model parameters describe the recruitment rate $\Lambda$, the transmission coefficient $\beta$, the natural death rate $\mu$, the rate of exposure to the epidemic $\lambda$ and the recovery rate $\gamma$. Let the total population be $N(t) = S(t) + E(t) + I(t) + R(t)$. If the birth rate $\Lambda^{\alpha}$ is equal to the death rate $\mu^{\alpha} N$, then $N$ is a constant, and the problem (33) reduces to a system of three equations. Namely, the function $R(t)$ is determined from the algebraic relation of total population conservation.
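A minimal sketch of the right-hand side of (33), with the parameters raised to the power $\alpha$ as in the model; $p = (\beta, \mu, \lambda, \gamma)$ and $\Lambda$ is passed separately (names are ours).

```python
import numpy as np

def seir_rhs(S, E, I, R, p, Lam, alpha):
    """Right-hand side of the SEIR system (33)."""
    beta, mu, lam, gam = p
    ba, ma, la, ga = beta ** alpha, mu ** alpha, lam ** alpha, gam ** alpha
    dS = Lam ** alpha - (ba * I + ma) * S
    dE = ba * S * I - (ma + la) * E
    dI = la * E - (ma + ga) * I
    dR = ga * I - ma * R
    return np.array([dS, dE, dI, dR])
```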
For the simulations, we use the data from [10] for real confirmed measles incidence cases for May–December 2018 in Pakistan,
$$\Lambda = 374125, \quad \beta = 4.569662 \times 10^{-11}, \quad \mu = 5.25 \times 10^{-4}, \quad \lambda = 2, \quad \gamma = 1.579, \quad \alpha = 8.368542 \times 10^{-1},$$
$$S(0) = 204989885, \quad E(0) = 3548, \quad I(0) = 6567, \quad R(0) = 0,$$
incorporated in the normalized model (33).
The inverse problem is to recover the vector $p = (\beta, \mu, \lambda, \gamma)$. We use the numerical solution of the direct problem (33) and (34) as a reference (exact) solution and as the measured solution $x^i(t; p)$ in (30).
The computations are performed for the system (33) and (34), involving the rescaling $\tilde{S} = S / N(0)$, $\tilde{E} = E / N(0)$, $\tilde{I} = I / N(0)$, $\tilde{R} = R / N(0)$, and, to reduce the computational time, we also use the transformation $\tilde{t} = t / T$.
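The rescaling can be written compactly as in the following sketch (names are ours):

```python
import numpy as np

def rescale(S, E, I, R, t, T):
    """S~ = S/N(0), E~ = E/N(0), I~ = I/N(0), R~ = R/N(0) and t~ = t/T."""
    S, E, I, R, t = map(np.asarray, (S, E, I, R, t))
    N0 = S[0] + E[0] + I[0] + R[0]                 # initial total population N(0)
    return S / N0, E / N0, I / N0, R / N0, t / T
```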
An additional challenge in this example is that we need to recover parameters of very different orders of magnitude. We apply both Algorithms 1 and 2 for the more general case $I = L = 4$ and $tol = 10^{-5}$.
Example 4. (Inverse problem: noisy measurements at each $t_n$). We compute the solution of the inverse problem for noisy data (30), $\sigma_S = 102500$, $\sigma_E = 246$, $\sigma_I = 205$, $\sigma_R = 1025$, measured at each grid node $t_n$. The final time is $T = 7$ months and $M = 320$. The level of the noise of $(S_{\sigma}, E_{\sigma}, I_{\sigma}, R_{\sigma})$ is illustrated in Figure 2.
We consider two choices of the initial guess, $p_0^1 = (7.42 \times 10^{-11}, 1.12 \times 10^{-3}, 2.82, 2.22)$ and $p_0^2 = (4 \times 10^{-10}, 2.75 \times 10^{-3}, 4.58, 3.61)$, and examine the performance of Algorithms 1 and 2.
The vector $\tilde{p}$ is chosen by applying one and the same relative deviation to each element of $p$. The values of this deviation, in percentages, are given in Table 5. The errors of the recovered parameters $p$ and of the solution $(S, E, I, R)$ for the different regularizations are presented in Table 5 as well.
We observe that the proposed numerical identification approach, realized by both algorithms, recovers the unknown parameters and the solution $(S, E, I, R)$ with optimal accuracy. As can be expected, the regularization (28) is successful even for an initial guess that differs significantly from the true values. The iteration process converges in four to five iterations.
In Figure 3 and Figure 4, we depict the exact and recovered solutions of the system (33) for the initial guesses $p_0^1$ and $p_0^2$, computed by Algorithms 1 and 2, respectively. The better fit of the recovered solution to the exact one is attained by Algorithm 2.
Example 5. (Inverse problem: noisy measurements at $\widetilde{M} < M$ grid nodes). We repeat the experiments from Example 4, with the only difference that now we take measurements at $\widetilde{M} = 65$ grid nodes. The observations are generated from the numerical solution of the direct problem at the $\widetilde{M}$ nodes, with the same noise levels as in Example 4. Then, we generate simulated measurements by interpolating these data on the time mesh. The results are listed in Table 6. We establish that the values of the recovered parameters and the corresponding errors are very close to the ones in Table 5.

6. Conclusions

In this work, we constructed a numerical quasilinearization regularization approach for solving a parameter identification inverse problem for a fractional multi-order ($\alpha_i$) nonlinear ODE system. The existence and uniqueness of the solution of the direct differential problem was discussed. We illustrated that the order of convergence of the numerical solution of the direct problem is $2 - \max_i \{\alpha_i\}$. In the case of noise-free data, the order of convergence of the numerical solution of the inverse problem is the same as that of the numerical solution of the direct problem, namely $2 - \max_i \{\alpha_i\}$. The precision of the recovered parameters is more significantly affected by the noise level in cases where the derivatives have widely varying fractional orders, compared to situations with moderate and equal fractional orders. The iteration process converges in a moderate number of iterations. Moreover, the precision of the determined parameters is optimal, such that the recovered solution is sufficiently accurate. The recovery is successful even for $10\%$ noise added to the simulated data and an initial guess that is not close to the true value vector, but the use of the regularization technique is essential; otherwise, the iteration process may not converge. The regularization substantially increases the range of convergence with respect to the initial guess and reduces the number of iterations.
In our future work, we plan to develop a numerical method for a parameter identification inverse problem for a fractional multi-order nonlinear ODE system, applying a generalized quasilinearization approach that incorporates the technique of lower and upper solutions. Moreover, we intend to extend the method to time-fractional semilinear parabolic systems and to improve the temporal order of convergence. Additionally, we intend to develop a numerical method for recovering the fractional orders of the derivatives in a fractional multi-order nonlinear ODE system.

Author Contributions

Conceptualization, M.N.K. and L.G.V.; methodology, M.N.K. and L.G.V.; investigation, M.N.K. and L.G.V.; resources, M.N.K. and L.G.V.; writing—original draft preparation, M.N.K. and L.G.V.; writing—review and editing, L.G.V.; validation, M.N.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Bulgarian National Science Fund under the Project KP-06-N 62/3 “Numerical methods for inverse problems in evolutionary differential equations with applications to mathematical finance, heat-mass transfer, honeybee population and environmental pollution”, 2022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article.

Acknowledgments

The authors are very grateful to the anonymous reviewers whose valuable comments and suggestions improved the quality of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Angstmann, C.N.; Erickson, A.M.; Henry, B.I.; McGann, A.V.; Murray, J.M.; Nichols, J.A. A general framework for fractional order compartment models. SIAM Rev. 2021, 63, 375–392. [Google Scholar] [CrossRef]
  2. Ahmad, A.; Farman, M.; Ahmad, M.O.; Raza, N.; Abdullah, M. Dynamical behavior of SIR epidemic model with non-integer time fractional derivatives: A mathematical analysis. Int. J. Adv. Appl. Sci. 2018, 5, 123–129. [Google Scholar] [CrossRef]
  3. Arafa, A.A.M.; Rida, S.Z.; Khalil, M. Solutions of fractional order model of childhood diseases with constant vaccination strategy. Math. Sci. Lett. 2012, 1, 17–23. [Google Scholar] [CrossRef]
  4. Chakraverty, S.; Jena, R.M.; Jena, S.K. Time-fractional model of HIV-I infection of CD4+ T lymphocyte cells in uncertain environment. In Time-Fractional Order Biological Systems with Uncertain Parameters; Synthesis Lectures on Mathematics & Statistics; Springer: Cham, Switzerland, 2020. [Google Scholar]
  5. Chakraverty, S.; Jena, R.M.; Jena, S.K. Time-Fractional Order Biological Systems with Uncertain Parameters; Springer: Cham, Switzerland, 2020; 144p. [Google Scholar]
  6. Demirci, E.; Unal, A.; Özalp, N. A fractional order SEIR model with density dependent death rate. Hacet. J. Math. Stat. 2011, 40, 287–295. [Google Scholar]
  7. González-Parra, G.; Arenas, A.J.; Chen-Charpentier, B.M. A fractional order epidemic model for the simulation of outbreaks of influenza A (H1N1). Math. Method Appl. 2014, 37, 2218–2226. [Google Scholar] [CrossRef]
  8. Goufo, E.F.D.; Maritz, R.; Munganga, J. Some properties of the Kermack-McKendrick epidemic model with fractional derivative and nonlinear incidence. Adv. Differ. Equ. 2014, 2014, 278. [Google Scholar] [CrossRef]
  9. Podlubny, I. Fractional Differential Equations; Academic Press: Cambridge, MA, USA, 1999. [Google Scholar]
  10. Qureshi, S. Real life application of Caputo fractional derivative for measles epidemiological autonomous dynamical system. Chaos Solitons Fractals 2020, 134, 109744. [Google Scholar] [CrossRef]
  11. Dokoumetzidis, A.; Magin, R.; Macheras, P. Fractional kinetics in multi-compartmental systems. J. Pharmacokinet. Pharmacodyn. 2010, 37, 507–524. [Google Scholar] [CrossRef]
  12. Georgiev, S.G.; Vulkov, L.G. Parameter identification approach for a fractional dynamics model of honeybee population. Lect. Notes Comput. Sci. 2022, 13127, 40–48. [Google Scholar]
  13. Yildiz, T.A. A fractional dynamical model for Honeybee colony population. Int. J. Biomath. 2018, 11, 1850063. [Google Scholar]
  14. Diethelm, K.; Uhlig, F. New approach to shooting methods for terminal value problems of fractional differential equations. J. Sci. Comput. 2023, 97, 38. [Google Scholar] [CrossRef]
  15. Ahmad, S.; Ullah, A.; Al-Mdallal, Q.M.; Khan, H.; Shah, K.; Khan, A. Fractional order mathematical modeling of COVID-19 transmission. Chaos Solitons Fractals 2020, 139, 110256. [Google Scholar] [CrossRef] [PubMed]
  16. Kozioł, K.; Stanisławski, R.; Bialic, G. Fractional-order SIR epidemic model for transmission prediction of COVID-19 disease. Appl. Sci. 2020, 10, 8316. [Google Scholar] [CrossRef]
  17. Tuan, N.H.; Mohammadi, H.; Rezapour, S. A mathematical model for COVID-19 transmission by using the Caputo fractional derivative. Chaos Solitons Fractals 2020, 140, 110107. [Google Scholar] [CrossRef] [PubMed]
  18. Diethelm, K. The Analysis of Fractional Differential Equations: An Application-Oriented Exposition Using Differential Operators of Caputo Type; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  19. Diethelm, K.; Ford, N.J. Analysis of Fractional Differential Equations: Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  20. Li, C.; Tao, C. On the fractional Adams method. Comput. Math. Appl. 2009, 58, 1573–1588. [Google Scholar] [CrossRef]
  21. Ascher, U.M.; Petzold, L.R. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations; SIAM: Philadelphia, PA, USA, 1997. [Google Scholar]
  22. Boichuk, A.A.; Chuiko, S.M. On approximate solutions of nonlinear boundary-value problems by the Newton-Kantorovich method. J. Math. Sci. 2021, 258, 594–617. [Google Scholar] [CrossRef]
  23. Filipov, S.M.; Faragó, I. Implicit Euler time discretization and FDM with Newton method in nonlinear heat transfer modeling. Int. Sci. J. Math. Model. 2018, 3, 94–98. [Google Scholar]
  24. Fan, L.; Yan, Y. A high order numerical method for solving nonlinear fractional differential equation with non-uniform meshes. In Numerical Methods and Applications (NMA 2018); Nikolov, G., Kolkovska, N., Georgiev, K., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; pp. 207–215. [Google Scholar]
  25. Xu, M.; Niu, J.; Lin, Y. An efficient method for fractional nonlinear differential equations by quasi-Newton’s method and simplified reproducing kernel method. Math. Methods Appl. Sci. 2017, 41, 5–14. [Google Scholar] [CrossRef]
  26. Bellman, R.; Kalaba, R. Quasilinearization and Nonlinear Boundary-Value Problems; Elsevier Publishing Company: New York, NY, USA, 1965. [Google Scholar]
  27. Ben-Romdhane, M.; Temimi, H.; Baccouch, M. An iterative finite difference method for approximating the two-branched solution of Bratu’s problem. Appl. Numer. Math. 2019, 139, 62–76. [Google Scholar] [CrossRef]
  28. Feng, H.; Yue, X.; Wang, X.; Zhang, Z. Decoupling and quasi-linearization methods for boundary value problems in relative orbital mechanics. Nonlinear Dyn. 2023, 111, 199–215. [Google Scholar] [CrossRef]
  29. Sinha, V.K.; Maroju, P. New development of variational iteration method using quasilinearization method for solving nonlinear problems. Mathematics 2023, 11, 935. [Google Scholar] [CrossRef]
  30. Lakshmikantham, V.; Vatsala, A. Generalized Quasilinearization for Nonlinear Problems; Mathematics and Its Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1998; Volume 440. [Google Scholar]
  31. Almuthaybiri, S.S.; Eloe, P.W.; Neugebauer, J.T. Quasilinearization and boundary value problems at resonance for Caputo fractional differential equations. Commun. Appl. Nonlinear Anal. 2019, 26, 80–100. [Google Scholar]
  32. Vasundhara Devi, J.; McRae, F.A.; Drici, Z. Generalized quasilinearization for fractional differential equations. Comput. Math. Appl. 2010, 59, 1057–1062. [Google Scholar] [CrossRef]
  33. Vasundhara Devi, J.; Suseela, C.H. Quasilinearization for fractional differential equations. Commun. Appl. Anal. 2008, 12, 407–418. [Google Scholar]
  34. Hasanov, A.H.; Romanov, V.G. Introduction to Inverse Problems for Differential Equations, 1st ed.; Springer: Cham, Switzerland, 2017; 261p. [Google Scholar] [CrossRef]
  35. Lesnic, D. Inverse Problems with Applications in Science and Engineering; CRC Press: Abingdon, UK, 2021; p. 349. [Google Scholar]
  36. Prilepko, A.I.; Orlovsky, D.G.; Vasin, I.A. Methods for Solving Inverse Problems in Mathematical Physics; Marcel Dekker: New York, NY, USA, 2000. [Google Scholar]
  37. Samarskii, A.A.; Vabishchevich, P.N. Numerical Methods for Solving Inverse Problems in Mathematical Physics; de Gruyter: Berlin, Germany, 2007; 438p. [Google Scholar]
  38. Kabanikhin, S.I. Inverse and Ill-Posed Problems; DeGruyer: Berlin, Germany, 2011. [Google Scholar]
  39. Erman, S.; Demir, A.; Ozbilge, E. Solving inverse non-linear fractional differential equations by generalized Chelyshkov wavelets. Alex. Eng. J. 2023, 66, 947–956. [Google Scholar] [CrossRef]
  40. Salahshour, S.; Ahmadian, A.; Pansera, B.A.; Ferrara, M. Uncertain inverse problem for fractional dynamical systems using perturbed collage theorem. Commun. Nonlinear Sci. Numer. Simulat. 2021, 94, 105553. [Google Scholar] [CrossRef]
  41. Lu, Y.; Tang, Y.; Zhang, X.; Wang, S. Parameter identification of fractional order systems with nonzero initial conditions based on block pulse functions. Measurement 2020, 158, 107684. [Google Scholar] [CrossRef]
  42. Kosari, M.; Teshnehlab, M. Non-linear fractional-order chaotic systems identification with approximated fractional-order derivative based on a hybrid particle swarm optimization-genetic algorithm method. J. AI Data Min. 2018, 6, 365–373. [Google Scholar]
  43. Behinfaraz, R.; Badamchizadeh, M.; Ghiasi, A.R. An adaptive method to parameter identification and synchronization of fractional-order chaotic systems with parameter uncertainty. Appl. Math. Model. 2016, 40, 4468–4479. [Google Scholar] [CrossRef]
  44. Liu, F.; Burrage, K. Novel techniques in parameter estimation for fractional dynamical models arising from biological systems. Comput. Math. Appl. 2011, 62, 822–833. [Google Scholar] [CrossRef]
  45. Georgiev, S. Mathematical identification analysis of a fractional-order delayed model for tuberculosis. Fractal Fract. 2023, 7, 538. [Google Scholar] [CrossRef]
  46. Georgiev, S.; Vulkov, L. Numerical coefficient reconstruction of time-depending integral-and fractional-order SIR models for economic analysis of Covid-19. Mathematics 2022, 10, 4247. [Google Scholar] [CrossRef]
  47. Abdullaev, U.G. Quasilinearization and inverse problems of nonlinear dynamics. J. Optim. Appl. 1995, 85, 509–523. [Google Scholar] [CrossRef]
  48. Abdulla, U.G.; Poteau, R. Identification of parameters in systems—Biology. Math. Biosc. 2018, 305, 133–145. [Google Scholar] [CrossRef]
  49. Kilbas, A.A.; Marzan, S.A. Nonlinear differential equations with the Caputo fractional derivative in the space of continuously differentiable functions. Diff. Equ. 2005, 41, 84–89. [Google Scholar] [CrossRef]
  50. Lin, W. Global existence theory and chaos control of fractional differential equations. J. Math. Anal. Appl. 2007, 332, 709–726. [Google Scholar] [CrossRef]
  51. Koleva, M.N.; Vulkov, L.G. Two-grid quasilinearization approach to ODEs with applications to model problems in physics and mechanics. Comput. Phys. Commun. 2010, 181, 663–670. [Google Scholar] [CrossRef]
  52. Tomović, R. Sensitivity Analysis of Dynamical Systems; Academic Press: New York, NY, USA, 1963. [Google Scholar]
  53. Beckenbach, E.F.; Bellman, R.E. Inequalities; Springer: Berlin, Germany; New York, NY, USA, 1962. [Google Scholar]
  54. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1969. [Google Scholar]
  55. Stynes, M.; O’Riordan, E.; Gracia, J.L. Error analysis of a finite difference method on graded meshes for a time-fractional diffusion equation. SIAM J. Numer. Anal. 2017, 55, 1057–1079. [Google Scholar] [CrossRef]
  56. Zhang, Y.; Sun, Z.; Liao, H. Finite difference methods for the time fractional diffusion equation on non-uniform meshes. J. Comput. Phys. 2014, 265, 195–210. [Google Scholar] [CrossRef]
  57. Tikhonov, A.; Arsenin, V. Solutions of Ill-Posed Problems; John Wiley & Sons: New York, NY, USA, 1977. [Google Scholar]
58. Zhou, Y.; Stynes, M. Optimal convergence rates in time-fractional discretisations: The L1, $\overline{\mathrm{L1}}$ and Alikhanov schemes. East Asian J. Appl. Math. 2022, 12, 503–520. [Google Scholar]
Figure 1. |δp_k^j| at each iteration, σ = 0.05, M = 320, for α_1 = α_2 = 0.5 (left) and α_1 = 0.7, α_2 = 0.3 (right), Example 3.
Figure 2. The noise (σ x N), added to the numerical solution of the direct problem in order to generate (S_σ, E_σ, I_σ, R_σ) in (30), Example 4.
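A minimal sketch of one common way to generate such perturbed observations is given below. It assumes multiplicative Gaussian noise of level σ scaled by the computed trajectory; this noise model, and the function and array names, are illustrative assumptions and may differ from the perturbation (σ x N) defined in (30).

```python
import numpy as np

def add_noise(x, sigma, rng=None):
    """Perturb a computed trajectory x with multiplicative Gaussian noise.

    Each sample is shifted by sigma * x[i] * N(0, 1).  This is only one
    common choice of noise model; the exact perturbation used to build
    the noisy observations may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    return x + sigma * x * rng.standard_normal(x.shape)

# Illustrative usage with a placeholder trajectory (not the SEIR solution itself).
t = np.linspace(0.0, 1.0, 321)
S = 100.0 * np.exp(-t)                  # placeholder for a computed compartment trajectory
S_sigma = add_noise(S, sigma=0.05, rng=np.random.default_rng(0))
print(float(np.max(np.abs(S_sigma - S))))
```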
Figure 3. Exact (solid line) and recovered (line with circles) solution, Algorithm 1, ϵ = 0.001, initial guess p_0^1, Example 4.
Figure 4. Exact (solid line) and recovered (line with circles) solution, Algorithm 2, ϵ = 0.001, 10% deviation of p̃, initial guess p_0^2, Example 4.
Table 1. Errors and convergence rate of the solution (S, I), direct problem, Example 1.

| α_1 | α_2 |        | M = 20 | M = 40 | M = 80 | M = 160 | M = 320 | M = 640 | M = 1280 |
| 0.5 | 0.5 | E_S    | 2.552 × 10^−3 | 1.018 × 10^−3 | 3.902 × 10^−4 | 1.454 × 10^−4 | 5.323 × 10^−5 | 1.928 × 10^−5 | 6.940 × 10^−6 |
|     |     | CR_S   |               | 1.326 | 1.383 | 1.424 | 1.450 | 1.465 | 1.474 |
|     |     | E_I    | 8.805 × 10^−3 | 3.067 × 10^−3 | 1.078 × 10^−3 | 3.809 × 10^−4 | 1.349 × 10^−4 | 4.782 × 10^−5 | 1.695 × 10^−5 |
|     |     | CR_I   |               | 1.521 | 1.508 | 1.501 | 1.498 | 1.496 | 1.497 |
|     |     | E_S^2  | 2.203 × 10^−3 | 7.655 × 10^−4 | 2.682 × 10^−4 | 9.462 × 10^−5 | 3.349 × 10^−5 | 1.187 × 10^−5 | 4.209 × 10^−6 |
|     |     | CR_S^2 |               | 1.525 | 1.513 | 1.503 | 1.498 | 1.496 | 1.496 |
|     |     | E_I^2  | 1.366 × 10^−2 | 4.726 × 10^−3 | 1.655 × 10^−3 | 5.838 × 10^−4 | 2.066 × 10^−4 | 7.320 × 10^−5 | 2.594 × 10^−5 |
|     |     | CR_I^2 |               | 1.531 | 1.514 | 1.503 | 1.499 | 1.497 | 1.497 |
| 0.7 | 0.3 | E_S    | 3.191 × 10^−3 | 1.418 × 10^−3 | 6.138 × 10^−4 | 2.608 × 10^−4 | 1.096 × 10^−4 | 4.571 × 10^−5 | 1.877 × 10^−5 |
|     |     | CR_S   |               | 1.170 | 1.209 | 1.235 | 1.251 | 1.262 | 1.284 |
|     |     | E_I    | 1.057 × 10^−2 | 4.060 × 10^−3 | 1.581 × 10^−3 | 6.213 × 10^−4 | 2.457 × 10^−4 | 9.847 × 10^−5 | 3.252 × 10^−5 |
|     |     | CR_I   |               | 1.381 | 1.361 | 1.348 | 1.339 | 1.319 | 1.599 |
|     |     | E_S^2  | 3.657 × 10^−3 | 1.623 × 10^−3 | 7.01 × 10^−4 | 2.983 × 10^−4 | 1.255 × 10^−4 | 5.251 × 10^−5 | 2.165 × 10^−5 |
|     |     | CR_S^2 |               | 1.172 | 1.210 | 1.233 | 1.249 | 1.257 | 1.278 |
|     |     | E_I^2  | 1.594 × 10^−2 | 5.925 × 10^−3 | 2.252 × 10^−3 | 8.693 × 10^−4 | 3.390 × 10^−4 | 1.325 × 10^−4 | 4.206 × 10^−5 |
|     |     | CR_I^2 |               | 1.428 | 1.396 | 1.373 | 1.358 | 1.355 | 1.656 |
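The convergence rates reported in Table 1 are consistent with the standard two-mesh estimate; for example, the first CR_S entry is recovered from the first two tabulated errors:

```latex
\mathrm{CR} \;=\; \log_2 \frac{E_M}{E_{2M}}, \qquad
\log_2 \frac{2.552 \times 10^{-3}}{1.018 \times 10^{-3}} \approx 1.326 .
```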
Table 2. Errors of the recovered parameters p and solution (S, I), Example 2.

|     | α_1 = α_2 = 0.5 | α_1 = α_2 = 0.5 | α_1 = 0.4, α_2 = 0.6 | α_1 = 0.4, α_2 = 0.6 | α_1 = 0.7, α_2 = 0.3 | α_1 = α_2 = 1 |
| p_0 | (1, 2, 1) | (2, 3, 2) | (1, 2, 1) | (2, 3, 2) | (2, 3, 2) | (3, 2, 3) |
| ϵ   | 0.0005 | 0.001 | 0.001 | 0.005 | 0.0001 | 0 |
| β_k | 0.800010 | 0.800025 | 0.800034 | 0.800262 | 0.800000 | 0.800000 |
| μ_k | 0.399992 | 0.399980 | 0.399972 | 0.399791 | 0.400000 | 0.400000 |
| γ_k | 0.430014 | 0.430043 | 0.430047 | 0.430354 | 0.430000 | 0.430000 |
| E_β | 1.0402 × 10^−5 | 2.4504 × 10^−5 | 3.4993 × 10^−5 | 2.6165 × 10^−4 | 2.9138 × 10^−7 | 1.1768 × 10^−14 |
| E_μ | 8.2974 × 10^−6 | 1.9546 × 10^−5 | 2.7937 × 10^−5 | 2.0888 × 10^−4 | 2.3914 × 10^−7 | 5.5511 × 10^−17 |
| E_γ | 1.4362 × 10^−5 | 3.3832 × 10^−5 | 4.7305 × 10^−5 | 3.5370 × 10^−4 | 4.1960 × 10^−7 | 1.2990 × 10^−14 |
| E_β | 1.3003 × 10^−5 | 3.0630 × 10^−5 | 4.3741 × 10^−5 | 3.2706 × 10^−4 | 3.6422 × 10^−7 | 1.4710 × 10^−14 |
| E_μ | 2.0743 × 10^−5 | 4.8865 × 10^−5 | 6.9842 × 10^−5 | 5.2222 × 10^−4 | 5.9785 × 10^−7 | 1.3878 × 10^−16 |
| E_γ | 3.3400 × 10^−5 | 7.8679 × 10^−5 | 1.1001 × 10^−4 | 8.2256 × 10^−4 | 9.7582 × 10^−7 | 3.0208 × 10^−14 |
| E_S | 5.3987 × 10^−5 | 5.3988 × 10^−5 | 1.3685 × 10^−4 | 1.3687 × 10^−4 | 1.4339 × 10^−4 | 2.5810 × 10^−4 |
| E_I | 2.5134 × 10^−4 | 2.5133 × 10^−4 | 5.4781 × 10^−4 | 5.4770 × 10^−4 | 4.4899 × 10^−4 | 1.6266 × 10^−3 |
| E_S^2 | 3.3493 × 10^−5 | 3.3494 × 10^−5 | 8.5864 × 10^−5 | 8.5882 × 10^−5 | 1.2549 × 10^−4 | 1.2412 × 10^−4 |
| E_I^2 | 2.0661 × 10^−4 | 2.0661 × 10^−4 | 3.5098 × 10^−4 | 3.5094 × 10^−4 | 3.3904 × 10^−4 | 1.1390 × 10^−3 |
| k   | 11 | 16 | 14 | 4 | 18 | 10 |
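The two blocks of parameter-error rows in Table 2 are consistent with absolute errors followed by the corresponding relative errors: dividing the first block by the values to which the recovered parameters converge, (β, μ, γ) = (0.8, 0.4, 0.43), reproduces the second block. For example, in the first column:

```latex
\frac{1.0402 \times 10^{-5}}{0.8} \approx 1.3003 \times 10^{-5}, \qquad
\frac{8.2974 \times 10^{-6}}{0.4} \approx 2.0743 \times 10^{-5}, \qquad
\frac{1.4362 \times 10^{-5}}{0.43} \approx 3.3400 \times 10^{-5} .
```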
Table 3. Errors of the recovered parameters p and smooth solution (S, I), Example 2.

|     | α_1 = α_2 = 0.5 | α_1 = α_2 = 0.5 | α_1 = 0.4, α_2 = 0.6 | α_1 = 0.4, α_2 = 0.6 | α_1 = 0.7, α_2 = 0.3 |
| p_0 | (1, 2, 1) | (2, 3, 2) | (1, 2, 1) | (2, 3, 2) | (2, 3, 2) |
| ϵ   | 0.0001 | 0.001 | 0 | 0.005 | 0.005 |
| β_k | 0.800000 | 0.800000 | 0.800000 | 0.800002 | 0.800000 |
| μ_k | 0.400000 | 0.400002 | 0.400000 | 0.399995 | 0.400000 |
| γ_k | 0.430000 | 0.429998 | 0.430000 | 0.430006 | 0.429999 |
| E_β | 1.7116 × 10^−8 | 8.0290 × 10^−7 | 2.0005 × 10^−12 | 2.3862 × 10^−6 | 1.3503 × 10^−7 |
| E_μ | 3.5581 × 10^−8 | 1.6727 × 10^−6 | 7.3495 × 10^−12 | 5.1874 × 10^−6 | 2.2071 × 10^−7 |
| E_γ | 4.1176 × 10^−8 | 1.9328 × 10^−6 | 7.8799 × 10^−12 | 5.7160 × 10^−6 | 3.2656 × 10^−7 |
| E_β | 2.1394 × 10^−8 | 1.0036 × 10^−6 | 2.5006 × 10^−12 | 2.9828 × 10^−6 | 1.6879 × 10^−7 |
| E_μ | 8.8951 × 10^−8 | 4.1817 × 10^−6 | 1.8374 × 10^−11 | 1.2968 × 10^−5 | 5.5178 × 10^−7 |
| E_γ | 9.5758 × 10^−8 | 4.4948 × 10^−6 | 1.8325 × 10^−11 | 1.3293 × 10^−5 | 7.5943 × 10^−7 |
| E_S | 1.0147 × 10^−4 | 1.0147 × 10^−4 | 1.5576 × 10^−4 | 1.5576 × 10^−4 | 2.6470 × 10^−4 |
| E_I | 2.2631 × 10^−4 | 2.2631 × 10^−4 | 2.8133 × 10^−4 | 2.8133 × 10^−4 | 8.3472 × 10^−4 |
| E_S^2 | 4.9615 × 10^−5 | 4.9615 × 10^−5 | 8.3308 × 10^−5 | 8.3308 × 10^−5 | 1.0833 × 10^−4 |
| E_I^2 | 1.4677 × 10^−4 | 1.4677 × 10^−4 | 1.8367 × 10^−4 | 1.8367 × 10^−4 | 4.8207 × 10^−4 |
| k   | 11 | 14 | 12 | 13 | 12 |
Table 4. Errors of the recovered parameters p and solution (S, I), Example 3.

|       | α_1 = α_2 = 0.5 | α_1 = α_2 = 0.5 | α_1 = α_2 = 0.5 | α_1 = 0.7, α_2 = 0.3 | α_1 = 0.7, α_2 = 0.3 | α_1 = 0.7, α_2 = 0.3 |
| Noise | 1% | 5% | 10% | 1% | 5% | 10% |
| β_k | 0.7992 | 0.7954 | 0.7900 | 0.7985 | 0.7919 | 0.7834 |
| μ_k | 0.4008 | 0.4045 | 0.4099 | 0.4011 | 0.4060 | 0.4126 |
| γ_k | 0.4281 | 0.4196 | 0.4081 | 0.4270 | 0.4144 | 0.3982 |
| E_β | 8.1197 × 10^−4 | 4.5403 × 10^−3 | 9.9972 × 10^−3 | 1.5330 × 10^−3 | 8.1016 × 10^−3 | 1.6634 × 10^−2 |
| E_μ | 8.5923 × 10^−4 | 4.5611 × 10^−3 | 9.9945 × 10^−3 | 1.1390 × 10^−3 | 6.0346 × 10^−3 | 1.2611 × 10^−2 |
| E_γ | 1.9371 × 10^−3 | 1.0318 × 10^−2 | 2.1993 × 10^−2 | 2.9961 × 10^−3 | 1.5574 × 10^−2 | 3.1774 × 10^−2 |
| E_β | 1.0152 × 10^−3 | 5.6730 × 10^−3 | 1.2469 × 10^−2 | 9.1622 × 10^−3 | 1.0127 × 10^−2 | 3.1528 × 10^−2 |
| E_μ | 2.0981 × 10^−3 | 1.1453 × 10^−2 | 2.4858 × 10^−2 | 2.8476 × 10^−3 | 1.5087 × 10^−2 | 2.0793 × 10^−2 |
| E_γ | 4.5050 × 10^−3 | 2.3994 × 10^−2 | 5.0916 × 10^−2 | 6.9677 × 10^−3 | 3.6219 × 10^−2 | 7.3892 × 10^−2 |
| E_S | 5.4023 × 10^−5 | 5.4173 × 10^−5 | 5.4314 × 10^−5 | 1.4354 × 10^−4 | 1.4424 × 10^−4 | 1.4516 × 10^−4 |
| E_I | 2.5187 × 10^−4 | 2.5406 × 10^−4 | 2.5714 × 10^−4 | 4.4962 × 10^−4 | 4.5218 × 10^−4 | 4.5540 × 10^−4 |
| E_S^2 | 3.3502 × 10^−5 | 3.3518 × 10^−5 | 3.3507 × 10^−5 | 1.2562 × 10^−4 | 1.2618 × 10^−4 | 1.2691 × 10^−4 |
| E_I^2 | 2.0686 × 10^−4 | 2.0787 × 10^−4 | 2.0923 × 10^−4 | 3.3926 × 10^−4 | 2.4010 × 10^−4 | 3.4111 × 10^−4 |
| k | 17 | 17 | 17 | 8 | 8 | 8 |
Table 5. Errors of the recovered parameters p and solution (S, E, I, R) for different regularization, ϵ = 0.001, Example 4.

| p_0 | p_0^1 | p_0^2 | p_0^2 |
| Algorithm | Algorithm 1 | Algorithm 2 | Algorithm 2 |
| deviation of p̃ |  | 5% | 10% |
| β_k | 7.418 × 10^−11 | 4.844 × 10^−11 | 5.121 × 10^−11 |
| μ_k | 5.270 × 10^−4 | 5.271 × 10^−4 | 5.273 × 10^−4 |
| λ_k | 2.820 | 2.120 | 2.241 |
| γ_k | 2.226 | 1.673 | 1.769 |
| E_β | 2.848 × 10^−11 | 2.743 × 10^−12 | 5.512 × 10^−12 |
| E_μ | 1.968 × 10^−6 | 2.182 × 10^−6 | 2.319 × 10^−6 |
| E_λ | 8.204 × 10^−1 | 1.120 × 10^−1 | 2.412 × 10^−1 |
| E_γ | 6.477 × 10^−1 | 9.479 × 10^−2 | 1.904 × 10^−1 |
| E_β | 6.234 × 10^−1 | 6.003 × 10^−2 | 1.206 × 10^−1 |
| E_μ | 3.750 × 10^−3 | 4.156 × 10^−3 | 4.416 × 10^−3 |
| E_λ | 4.102 × 10^−1 | 6.003 × 10^−2 | 1.206 × 10^−1 |
| E_γ | 4.102 × 10^−1 | 6.003 × 10^−2 | 1.206 × 10^−1 |
| E_S | 4.435 × 10^−5 | 4.724 × 10^−5 | 4.876 × 10^−5 |
| E_E | 6.307 × 10^−2 | 1.576 × 10^−2 | 3.077 × 10^−2 |
| E_I | 6.937 × 10^−2 | 1.628 × 10^−2 | 3.179 × 10^−2 |
| E_R | 9.305 × 10^−2 | 1.696 × 10^−2 | 3.312 × 10^−2 |
| E_S^2 | 2.823 × 10^−5 | 2.901 × 10^−5 | 3.029 × 10^−5 |
| E_E^2 | 6.041 × 10^−2 | 3.445 × 10^−2 | 6.643 × 10^−2 |
| E_I^2 | 6.431 × 10^−2 | 3.459 × 10^−2 | 6.670 × 10^−2 |
| E_R^2 | 3.821 × 10^−2 | 9.985 × 10^−3 | 1.927 × 10^−2 |
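Table 5 compares two regularization settings: Algorithm 1 started from p_0^1, and Algorithm 2 started from p_0^2 with a reference vector p̃ perturbed by 5% and 10%. As a point of reference, the sketch below shows one generic form of a Tikhonov-regularized least-squares update that penalizes deviation from a reference vector; it is illustrative only: the quadratic functional and its linearization in the paper may be formulated differently, and J, r, p_tilde and lam are placeholder names.

```python
import numpy as np

def tikhonov_step(J, r, p, p_tilde, lam):
    """One generic Tikhonov-regularized least-squares update for the increment dp.

    Minimizes ||J @ dp - r||^2 + lam * ||p + dp - p_tilde||^2, i.e. the data
    misfit plus a penalty on the deviation of the updated parameters from a
    reference vector p_tilde.  Illustrative sketch only.
    """
    n = J.shape[1]
    A = J.T @ J + lam * np.eye(n)
    b = J.T @ r + lam * (p_tilde - p)
    return np.linalg.solve(A, b)

# Illustrative usage with placeholder data (four parameters, e.g. beta, mu, lambda, gamma).
rng = np.random.default_rng(1)
J = rng.standard_normal((50, 4))       # placeholder sensitivity (Jacobian) matrix
r = rng.standard_normal(50)            # placeholder data-misfit residual
p = np.ones(4)                         # current parameter iterate (placeholder)
p_tilde = 1.05 * p                     # reference vector with a 5% deviation, as in Table 5
dp = tikhonov_step(J, r, p, p_tilde, lam=1e-3)
print(p + dp)
```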
Table 6. Errors of the recovered parameters p and solution (S, E, I, R) for different regularization, ϵ = 0.001, Example 5.

| p_0 | p_0^1 | p_0^2 | p_0^2 |
| Algorithm | Algorithm 1 | Algorithm 2 | Algorithm 2 |
| deviation of p̃ |  | 5% | 10% |
| β_k | 4.752 × 10^−11 | 4.844 × 10^−11 | 5.121 × 10^−11 |
| μ_k | 5.281 × 10^−4 | 5.278 × 10^−4 | 5.278 × 10^−4 |
| λ_k | 2.820 | 2.120 | 2.241 |
| γ_k | 2.227 | 1.674 | 1.769 |
| E_β | 1.826 × 10^−12 | 2.743 × 10^−12 | 5.512 × 10^−12 |
| E_μ | 3.104 × 10^−6 | 2.183 × 10^−6 | 2.827 × 10^−6 |
| E_λ | 8.205 × 10^−1 | 1.120 × 10^−1 | 2.413 × 10^−1 |
| E_γ | 6.478 × 10^−1 | 9.480 × 10^−2 | 1.905 × 10^−1 |
| E_β | 3.995 × 10^−2 | 6.003 × 10^−2 | 1.206 × 10^−1 |
| E_μ | 5.912 × 10^−3 | 5.395 × 10^−3 | 5.386 × 10^−3 |
| E_λ | 4.102 × 10^−1 | 6.003 × 10^−2 | 1.206 × 10^−1 |
| E_γ | 4.102 × 10^−1 | 6.004 × 10^−2 | 1.206 × 10^−1 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
