Article

Guaranteed Estimation of Solutions to the Cauchy Problem When the Restrictions on Unknown Initial Data Are Not Posed

by Oleksandr Nakonechnyi 1,2,†, Yuri Podlipenko 1,† and Yury Shestopalov 2,*
1 Faculty of Computer Science and Cybernetics, National Taras Shevchenko University of Kiev, 03680 Kiev, Ukraine
2 Faculty of Engineering and Sustainable Development, Academy of Technology and Environment, University of Gävle, 80176 Gävle, Sweden
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(24), 3218; https://doi.org/10.3390/math9243218
Submission received: 10 November 2021 / Revised: 3 December 2021 / Accepted: 10 December 2021 / Published: 13 December 2021
(This article belongs to the Special Issue Decision Making and Its Applications)

Abstract: The paper deals with Cauchy problems for first-order systems of linear ordinary differential equations with unknown data. It is assumed that the right-hand sides of the equations belong to certain bounded sets in the space of square-integrable vector-functions, and the information about the initial conditions is absent. From indirect noisy observations of solutions to the Cauchy problems on a finite system of points and intervals, the guaranteed mean square estimates of linear functionals on unknown solutions of the problems under consideration are obtained. Under an assumption that the statistical characteristics of noise in observations are not known exactly, it is proved that such estimates can be expressed in terms of solutions to well-defined boundary value problems for linear systems of impulsive ordinary differential equations.

1. Introduction

A general theory of guaranteed estimates of solutions to Cauchy problems for ordinary differential equations under uncertainty was constructed in [1]. These results were further developed in [2,3,4,5].
This paper focuses on elaborating methods for estimating the state of systems described by Cauchy problems for linear ordinary differential equations with incomplete data.
The formulations of the estimation problems under uncertainty considered in this article are new; research in this direction has not been carried out previously.
For solving these estimation problems, we use observations that are linear transformations of unknown solutions on a finite system of intervals and points perturbed by additive random noises. Such a type of observation is caused by the fact that in many practically important cases, unknown solutions cannot be observed in a direct manner.
From observations of the state of the systems, we find optimal, in a certain sense, estimates of functionals of the solutions to these problems under the condition that the information about the initial conditions is missing and that the right-hand sides of the equations and the correlation functions of the random noises in the observations are not known exactly; it is known only that they belong to certain given sets in the corresponding function spaces.
In such a situation, the minimax estimation method turns out to be applicable and preferable. In fact, choosing this approach, one can obtain optimal estimates not only of the unknown solutions but also of linear functionals of these solutions. In other words, the desired estimates, linear with respect to the observations, are such that the maximal mean square error, determined over the whole set of realizations of the perturbations from the sets under consideration, attains its minimal value. Traditionally, these kinds of estimates are referred to as guaranteed or minimax estimates.
We demonstrate that these problems can be reduced to the determination of minima of quadratic functionals on closed convex sets in Hilbert spaces. Expressions for the minimax estimates and for the estimation errors are determined as a result of the solution to this problem with the use of the Lagrange multipliers method. It is shown that such estimates are expressed in terms of solutions to certain well-defined uniquely solvable systems of differential equations.
This paper continues our research cycle begun in [6,7], where the guaranteed (minimax) estimation method was worked out for estimating linear functionals over the set of unknown solutions and data under the condition that the unknown right-hand sides of the equations and the initial conditions entering the statement of the Cauchy problems belong to a certain set in the corresponding Hilbert space (for details, see [8,9,10,11,12]).

2. Preliminaries

Let us first present the assertions and notations that will be frequently used in the text of the paper.
If vector-functions $f(t)\in\mathbb{R}^n$ and $g(t)\in\mathbb{R}^n$ are absolutely continuous on the closed interval $[t_1,t_2]$, then the following integration by parts formula is valid:
$$(f(t_2),g(t_2))_n-(f(t_1),g(t_1))_n=\int_{t_1}^{t_2}\Bigl[\Bigl(f(t),\frac{dg(t)}{dt}\Bigr)_n+\Bigl(g(t),\frac{df(t)}{dt}\Bigr)_n\Bigr]dt,\tag{1}$$
where by $(\cdot,\cdot)_n$ we denote the inner product in $\mathbb{R}^n$ here and later on (see [13]).
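As a quick sanity check (our illustration, not part of the original text), the following sketch verifies formula (1) numerically for an arbitrarily chosen pair of smooth vector-functions on $[0,1]$; the concrete choices of $f$ and $g$ are assumptions made only for the test.

```python
import numpy as np

# Numerical check of the integration by parts formula (1) on [0, 1].
t = np.linspace(0.0, 1.0, 20001)
f = np.vstack([np.sin(t), t**2]);  df = np.vstack([np.cos(t), 2*t])
g = np.vstack([np.exp(-t), np.cos(3*t)]);  dg = np.vstack([-np.exp(-t), -3*np.sin(3*t)])

lhs = f[:, -1] @ g[:, -1] - f[:, 0] @ g[:, 0]   # boundary terms
rhs = np.trapz((f * dg).sum(axis=0) + (g * df).sum(axis=0), t)
print(lhs, rhs)   # the two numbers agree up to quadrature error
```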
Lemma 1.
Suppose $Q$ is a bounded, positive (that is, $(Qf,f)_H>0$ when $f\neq 0$), Hermitian (self-adjoint) operator in a complex (real) Hilbert space $H$ with bounded inverse $Q^{-1}$. Then the generalized Cauchy–Schwarz inequality
$$|(f,g)_H|\le(Q^{-1}f,f)_H^{1/2}(Qg,g)_H^{1/2}\quad\forall f,g\in H\tag{2}$$
is valid. The equality sign in (2) is attained at the element
$$g=\frac{Q^{-1}f}{(Q^{-1}f,f)_H^{1/2}}.$$
For a proof, we refer to [14] (p. 186).
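As an illustration (ours, with a randomly generated symmetric positive definite matrix playing the role of $Q$), the following sketch verifies inequality (2) and its equality case numerically in $H=\mathbb{R}^n$:

```python
import numpy as np

# Numerical illustration of Lemma 1 in H = R^n.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)        # symmetric positive definite
Qinv = np.linalg.inv(Q)

f = rng.standard_normal(n)
g = rng.standard_normal(n)
assert abs(f @ g) <= np.sqrt(f @ Qinv @ f) * np.sqrt(g @ Q @ g)

# equality is attained at g* = Q^{-1} f / (Q^{-1} f, f)^{1/2}
g_star = Qinv @ f / np.sqrt(f @ Qinv @ f)
print(abs(f @ g_star), np.sqrt(f @ Qinv @ f) * np.sqrt(g_star @ Q @ g_star))
```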

3. Setting of the Minimax Estimation Problem

We consider the following estimation problem. Let the unknown vector-function $x(t)\in\mathbb{R}^n$ be a solution of the Cauchy problem
$$\frac{dx(t)}{dt}=A(t)x(t)+B(t)f(t),\quad t\in(t_0,T),\tag{3}$$
$$x(t_0)=Cx_0,\tag{4}$$
where $A(t)=[a_{ij}(t)]$ is an $n\times n$ matrix and $B(t)=[b_{ij}(t)]$ is an $n\times r$ matrix whose entries $a_{ij}(t)$ and $b_{ij}(t)$ are square-integrable and piecewise continuous. (Here and in what follows, a function is called piecewise continuous on an interval if the interval can be broken into a finite number of subintervals such that the function is continuous on each open subinterval, i.e., the subinterval without its endpoints, and has a finite limit at the endpoints of each subinterval.) Further, $C=[c_{ij}]$ is an $n\times k$ matrix with entries $c_{ij}\in\mathbb{R}$, $i=1,\dots,n$, $j=1,\dots,k$; $f(t)\in\mathbb{R}^r$ is a vector-function belonging to the space $(L^2(t_0,T))^r$; and $x_0\in\mathbb{R}^k$.
By a solution of this problem, we mean a function $x(t)\in(W_2^1(t_0,T))^n$ that satisfies Equation (3) almost everywhere (a.e.) on $(t_0,T)$ (i.e., except on a set of Lebesgue measure 0) and condition (4). Here, $W_2^1(t_0,T)$ is the space of functions absolutely continuous on the interval $[t_0,T]$ whose derivative, which exists almost everywhere on $(t_0,T)$, belongs to the space $L^2(t_0,T)$.
We suppose that the Cauchy data $(x_0,f(t))$ are unknown and satisfy the condition $(x_0,f(\cdot))\in G_1$, where by $G_1$ we denote the set
$$G_1:=\Bigl\{F=(x_0,f)\in\mathbb{R}^k\times(L^2(t_0,T))^r:\int_{t_0}^T(Q_1(t)(f(t)-f^0(t)),f(t)-f^0(t))_r\,dt\le\epsilon_1\Bigr\}.\tag{5}$$
Here, $Q_1(t)$ is a symmetric positive definite $r\times r$ matrix with real-valued piecewise continuous entries on $[t_0,T]$, $f^0\in(L^2(t_0,T))^r$ is a prescribed vector-function, and $\epsilon_1$ is a prescribed positive number.
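To make the setting concrete, here is a minimal sketch (ours) that integrates a toy instance of (3) and (4) with scipy; the matrices $A$, $B$, $C$ and the data $(x_0,f)$ are illustrative assumptions, not data from the paper. This toy problem is reused in the sketches that follow.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy instance of (3)-(4): n = 2, r = 1, k = 1 (all data assumed).
t0, T = 0.0, 1.0
A = lambda t: np.array([[0.0, 1.0], [-2.0, -0.5 * np.cos(t)]])
B = lambda t: np.array([[0.0], [1.0]])
C = np.array([[1.0], [0.5]])                  # n x k
f_true = lambda t: np.array([np.sin(2 * t)])  # one admissible r.h.s.
x0_true = np.array([3.0])                     # unknown initial datum

sol = solve_ivp(lambda t, x: A(t) @ x + B(t) @ f_true(t),
                (t0, T), (C @ x0_true).ravel(),
                dense_output=True, rtol=1e-9, atol=1e-12)
print(sol.sol(T))                             # x(T) for this (x0, f)
```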
The problem is to estimate the expression
$$l(x)=\int_{t_0}^T(l_0(t),x(t))_n\,dt+(a,x(T))_n\tag{6}$$
from observations of the form (here and below, we denote vectors and matrices by $y$ and $H$, and vector-functions and matrix-functions by $y(\cdot)$ and $H(\cdot)$)
$$y_j(t)=H_j(t)x(t)+\xi_j(t),\quad t\in\Omega_j,\ j=1,\dots,M,\tag{7}$$
$$y_i=H_ix(t_i)+\xi_i,\quad i=1,\dots,N,\tag{8}$$
in the class of estimates
$$\widehat{l(x)}=\sum_{j=1}^M\int_{\Omega_j}(y_j(t),u_j(t))_l\,dt+\sum_{i=1}^N(y_i,u_i)_m+c\tag{9}$$
linear with respect to observations (7) and (8); here, $x(t)$ is the state of the system described by the Cauchy problem (3) and (4), $l_0\in(L^2(t_0,T))^n$, $a\in\mathbb{R}^n$, $H_i$ are given $m\times n$ matrices, $H_j(t)$ are given $l\times n$ matrices whose elements are piecewise continuous functions on $\bar\Omega_j$, $u_i\in\mathbb{R}^m$, $u_j(t)$ are vector-functions belonging to $(L^2(\Omega_j))^l$, and $c\in\mathbb{R}$.
We suppose that
$$\xi:=(\xi_1(\cdot),\dots,\xi_M(\cdot),\xi_1,\dots,\xi_N)\in G_2,$$
where $\xi_i=(\xi_1^{(i)},\dots,\xi_m^{(i)})^T$ and $\xi_j(\cdot)=(\xi_1^{(j)}(\cdot),\dots,\xi_l^{(j)}(\cdot))^T$ are the observation errors in (7) and (8), respectively, which are realizations of random vectors $\xi_i=\xi_i(\omega)\in\mathbb{R}^m$ and random vector-functions $\xi_j(t)=\xi_j(\omega,t)\in\mathbb{R}^l$, and $G_2$ denotes the set of random elements $\xi$ whose components $\xi_i$ and $\xi_j(\cdot)$ are uncorrelated, that is,
$$E(\xi_i,v)_m(\xi_j(\cdot),v(\cdot))_{(L^2(\Omega_j))^l}=0\quad\forall v\in\mathbb{R}^m,\ v(\cdot)\in(L^2(\Omega_j))^l,\ i=1,\dots,N,\ j=1,\dots,M,$$
have zero means, $E\xi_i=0$ and $E\xi_j(\cdot)=0$, have finite second moments $E|\xi_i|^2$ and $E\|\xi_j(\cdot)\|^2_{(L^2(\Omega_j))^l}$, and have unknown correlation matrices $R_i=E\xi_i\xi_i^T$ and $R_j(t,s)=E\xi_j(t)\xi_j^T(s)$ satisfying the conditions
$$\sum_{j=1}^M\int_{\Omega_j}\operatorname{Tr}[D_j(t)R_j(t,t)]\,dt\le\epsilon_2\tag{10}$$
and
$$\sum_{i=1}^N\operatorname{Tr}[D_iR_i]\le\epsilon_3,\tag{11}$$
correspondingly ($\operatorname{Tr}D:=\sum_{i=1}^l d_{ii}$ denotes the trace of the matrix $D=\{d_{ij}\}_{i,j=1}^l$). Here, $D_i$, $i=1,\dots,N$, are symmetric positive definite $m\times m$ matrices with constant entries, $D_j(t)$, $j=1,\dots,M$, are symmetric positive definite $l\times l$ matrices whose entries are assumed to be piecewise continuous functions on $\bar\Omega_j$, and $\epsilon_i$, $i=2,3$, are prescribed positive numbers.
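Continuing the toy example, the next sketch (ours) generates synthetic observations of the forms (7) and (8) with one observation interval and one observation point ($M=N=1$); the observation matrices and the noise level are again assumptions, and the closing comment indicates how conditions (10) and (11) constrain them.

```python
import numpy as np

# Synthetic observations (7)-(8) for the toy system above (M = N = 1).
rng = np.random.default_rng(1)
H1 = np.array([[1.0, 0.0]])      # l x n, l = 1 (interval observation)
H1_pt = np.array([[0.0, 1.0]])   # m x n, m = 1 (point observation)
Omega1 = (0.2, 0.8)
t1_obs = 0.5
sigma_noise = 0.05               # i.i.d. noise => R_1(t, t) = sigma^2

ts = np.linspace(*Omega1, 200)
y1 = np.array([(H1 @ sol.sol(t))[0] + sigma_noise * rng.standard_normal()
               for t in ts])                    # samples of y_1(t)
y1_pt = H1_pt @ sol.sol(t1_obs) + sigma_noise * rng.standard_normal(1)

# With D_1 = 1, conditions (10)-(11) require
#   sigma^2 * |Omega_1| <= eps_2   and   sigma^2 <= eps_3,
# so this noise model is admissible for suitable eps_2, eps_3.
```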
Set
$$u:=(u_1(\cdot),\dots,u_M(\cdot),u_1,\dots,u_N)\in(L^2(\Omega_1))^l\times\cdots\times(L^2(\Omega_M))^l\times(\mathbb{R}^m)^N=:H.$$
The norm and inner product in the space $H$ are defined by
$$\|u\|_H=\Bigl(\sum_{j=1}^M\|u_j(\cdot)\|^2_{(L^2(\Omega_j))^l}+\sum_{i=1}^N\|u_i\|^2_{\mathbb{R}^m}\Bigr)^{1/2}$$
and
$$(u,v)_H=\sum_{j=1}^M(u_j(\cdot),v_j(\cdot))_{(L^2(\Omega_j))^l}+\sum_{i=1}^N(u_i,v_i)_m\quad\forall u,v\in H,$$
respectively.
Definition 1.
The estimate
$$\widehat{\widehat{l(x)}}=\sum_{j=1}^M\int_{\Omega_j}(y_j(t),\hat u_j(t))_l\,dt+\sum_{i=1}^N(y_i,\hat u_i)_m+\hat c,\tag{12}$$
in which the vector-functions $\hat u_j(\cdot)$, the vectors $\hat u_i$, and the number $\hat c$ are determined from the condition
$$\inf_{u\in H,\,c\in\mathbb{R}}\sigma(u,c)=\sigma(\hat u,\hat c),\tag{13}$$
where
$$\sigma(u,c)=\sup_{F\in G_1,\,\xi\in G_2}E|l(x)-\widehat{l(x)}|^2,\tag{14}$$
will be called the minimax estimate of expression (6).
The quantity
$$\sigma:=\{\sigma(\hat u,\hat c)\}^{1/2}$$
will be called the error of the minimax estimation of $l(x)$.
We see that a minimax estimate minimizes the maximal mean-square estimation error determined for the “worst” implementation of perturbations.

4. Representations for Minimax Estimates and Estimation Errors

In order to reduce the problem of determining the minimax estimate to a certain optimal control problem, we introduce, for any fixed $u\in H$, the vector-function $z(t;u)$ as the unique solution to the following problem (here and in what follows, we assume that a piecewise continuous function is continuous from the left):
$$\frac{dz(t;u)}{dt}=-A^T(t)z(t;u)-l_0(t)+\sum_{j=1}^M\chi_{\Omega_j}(t)H_j^T(t)u_j(t)\quad\text{for a.e. }t\in(t_0,T),\tag{15}$$
$$\Delta z(\cdot;u)|_{t=t_i}:=z(t_i+0;u)-z(t_i;u)=H_i^Tu_i,\quad i=1,\dots,N,\qquad z(T;u)=a,\tag{16}$$
where $\chi_\Omega(t)$ is the characteristic function of the set $\Omega$, and by $U$ we denote the set
$$U:=\{u\in H:\ C^Tz(t_0;u)=0\}.\tag{17}$$
It is easy to see that if $U\ne\emptyset$, then $U$ is a closed convex set in the space $H$. The following result is valid.
Lemma 2.
Let $U\ne\emptyset$ (in Appendix A, we give some sufficient conditions for the non-emptiness of the set $U$). Then the determination of the minimax estimate of $l(x)$ is equivalent to the problem of optimal control of the system governed by Equations (15) and (16) with the cost function
$$I(u)=\epsilon_1\int_{t_0}^T(\tilde Q_1(t)z(t;u),z(t;u))_n\,dt+\epsilon_2\sum_{j=1}^M\int_{\Omega_j}(D_j^{-1}(t)u_j(t),u_j(t))_l\,dt+\epsilon_3\sum_{i=1}^N(D_i^{-1}u_i,u_i)_m\to\inf_{u\in U},\tag{18}$$
where $\tilde Q_1(t)=B(t)Q_1^{-1}(t)B^T(t)$.
Proof. 
For each $i=1,\dots,N+1$, denote by $z_i(t;u)$ the restriction of the function $z(t;u)$ to the subinterval $(t_{i-1},t_i)$ of the interval $(t_0,T)$ (here $t_{N+1}:=T$), extended to the endpoints $t_{i-1}$ and $t_i$ by continuity. Then, due to (15) and (16),
$$\frac{dz_i(t;u)}{dt}=-A^T(t)z_i(t;u)-l_0(t)+\sum_{j=1}^M\chi_{\Omega_j}(t)H_j^T(t)u_j(t)\quad\text{for a.e. }t\in(t_{i-1},t_i),\ i=1,\dots,N+1,\tag{19}$$
$$z_{i+1}(t_i;u)=z_i(t_i;u)+H_i^Tu_i,\quad i=1,\dots,N,\qquad z_{N+1}(T;u)=a,\qquad C^Tz_1(t_0;u)=0.\tag{20}$$
Let $x$ be a solution to problem (3) and (4). From relations (6)–(8), (19) and (20), and the integration by parts formula (1) with $f(t)=x(t)$, $g(t)=z_i(t;u)$, we obtain
$$\begin{aligned}
l(x)-\widehat{l(x)}&=\sum_{i=1}^{N+1}\int_{t_{i-1}}^{t_i}(x(t),l_0(t))_n\,dt+(a,x(T))_n-\sum_{i=1}^N(y_i,u_i)_m-\sum_{j=1}^M\int_{\Omega_j}(y_j(t),u_j(t))_l\,dt-c\\
&=(x(T),a)_n-\sum_{i=1}^{N+1}\int_{t_{i-1}}^{t_i}\Bigl(x(t),\frac{dz_i(t;u)}{dt}+A^T(t)z_i(t;u)\Bigr)_n dt-\sum_{i=1}^N(x(t_i),H_i^Tu_i)_n\\
&\qquad-\sum_{i=1}^N(\xi_i,u_i)_m-\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt-c\\
&=(x(T),a)_n+\sum_{i=1}^{N+1}\bigl((x(t_{i-1}),z_i(t_{i-1};u))_n-(x(t_i),z_i(t_i;u))_n\bigr)+\sum_{i=1}^{N+1}\int_{t_{i-1}}^{t_i}\Bigl(\frac{dx(t)}{dt}-A(t)x(t),z_i(t;u)\Bigr)_n dt\\
&\qquad-\sum_{i=1}^N(x(t_i),z_{i+1}(t_i;u)-z_i(t_i;u))_n-\sum_{i=1}^N(\xi_i,u_i)_m-\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt-c\\
&=(x(T),a)_n+(x(t_0),z_1(t_0;u))_n-(x(t_1),z_1(t_1;u))_n+\sum_{i=2}^N\bigl((x(t_{i-1}),z_i(t_{i-1};u))_n-(x(t_i),z_i(t_i;u))_n\bigr)\\
&\qquad+(x(t_N),z_{N+1}(t_N;u))_n-(x(T),a)_n+\sum_{i=1}^{N+1}\int_{t_{i-1}}^{t_i}(B(t)f(t),z_i(t;u))_n\,dt\\
&\qquad-\sum_{i=1}^N(x(t_i),z_{i+1}(t_i;u)-z_i(t_i;u))_n-\sum_{i=1}^N(\xi_i,u_i)_m-\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt-c.
\end{aligned}$$
Taking into account that
$$\sum_{i=2}^N(x(t_{i-1}),z_i(t_{i-1};u))_n+(x(t_N),z_{N+1}(t_N;u))_n=\sum_{i=1}^{N-1}(x(t_i),z_{i+1}(t_i;u))_n+(x(t_N),z_{N+1}(t_N;u))_n=\sum_{i=1}^N(x(t_i),z_{i+1}(t_i;u))_n,$$
we obtain from the latter equalities
$$l(x)-\widehat{l(x)}=(x(t_0),z_1(t_0;u))_n+\sum_{i=1}^{N+1}\int_{t_{i-1}}^{t_i}(B(t)f(t),z_i(t;u))_n\,dt-\sum_{i=1}^N(\xi_i,u_i)_m-\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt-c\tag{21}$$
$$=(x_0,C^Tz(t_0;u))_k+\int_{t_0}^T(B(t)f(t),z(t;u))_n\,dt-\sum_{i=1}^N(\xi_i,u_i)_m-\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt-c.\tag{22}$$
The latter relationship yields
$$E[l(x)-\widehat{l(x)}]=(x_0,C^Tz(t_0;u))_k+\int_{t_0}^T(f(t),B^T(t)z(t;u))_r\,dt-c.\tag{23}$$
Since the vector $x_0$ in the first term on the right-hand side of (23) may be an arbitrary element of the space $\mathbb{R}^k$, the supremum over $F\in G_1$ of the quantity $|E[l(x)-\widehat{l(x)}]|$ is finite if and only if $u\in U$, that is, if and only if the first term on the right-hand side of (23) vanishes. Therefore, we will further assume that $u\in U$.
Taking into consideration the known relationship
$$D\eta=E|\eta|^2-[E\eta]^2\tag{24}$$
that couples the variance $D\eta=E[\eta-E\eta]^2$ of a random variable $\eta$ with its expectation $E\eta$, in which $\eta$ is determined by the right-hand side of the equality
$$l(x)-\widehat{l(x)}=\int_{t_0}^T(B(t)f(t),z(t;u))_n\,dt-\sum_{i=1}^N(\xi_i,u_i)_m-\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt-c=:\eta,$$
which follows from (22) for $u\in U$, and using the noncorrelatedness of $\xi_i=(\xi_1^{(i)},\dots,\xi_m^{(i)})^T$ and $\xi_j(\cdot)=(\xi_1^{(j)}(\cdot),\dots,\xi_l^{(j)}(\cdot))^T$, we find from the equalities (22) and (23) that
$$\begin{aligned}
E|l(x)-\widehat{l(x)}|^2&=\Bigl(\int_{t_0}^T(f(t),B^T(t)z(t;u))_r\,dt-c\Bigr)^2+E\Bigl(\sum_{i=1}^N(\xi_i,u_i)_m+\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt\Bigr)^2\\
&=\Bigl|\int_{t_0}^T(f(t)-f^0(t),B^T(t)z(t;u))_r\,dt+\int_{t_0}^T(f^0(t),B^T(t)z(t;u))_r\,dt-c\Bigr|^2\\
&\qquad+E\Bigl(\sum_{i=1}^N(\xi_i,u_i)_m\Bigr)^2+E\Bigl(\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt\Bigr)^2.
\end{aligned}\tag{25}$$
Thus,
$$\begin{aligned}
\inf_{c\in\mathbb{R}}\sigma(u,c)&=\inf_{c\in\mathbb{R}}\sup_{F\in G_1,\,\xi\in G_2}E|l(x)-\widehat{l(x)}|^2\\
&=\inf_{c\in\mathbb{R}}\sup_{F\in G_1}\Bigl(\int_{t_0}^T(f(t)-f^0(t),B^T(t)z(t;u))_r\,dt+\int_{t_0}^T(f^0(t),B^T(t)z(t;u))_r\,dt-c\Bigr)^2\\
&\qquad+\sup_{\xi\in G_2}\Bigl\{E\Bigl(\sum_{i=1}^N(\xi_i,u_i)_m\Bigr)^2+E\Bigl(\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt\Bigr)^2\Bigr\}.
\end{aligned}$$
Set
$$y:=\int_{t_0}^T(f(t)-f^0(t),B^T(t)z(t;u))_r\,dt,\qquad d:=c-\int_{t_0}^T(f^0(t),B^T(t)z(t;u))_r\,dt.\tag{26}$$
Then, for $F=(x_0,f)\in G_1$, the generalized Cauchy–Schwarz inequality (2) and (5) imply
$$|y|\le\Bigl(\int_{t_0}^T(Q_1^{-1}(t)B^T(t)z(t;u),B^T(t)z(t;u))_r\,dt\Bigr)^{1/2}\Bigl(\int_{t_0}^T(Q_1(t)(f(t)-f^0(t)),f(t)-f^0(t))_r\,dt\Bigr)^{1/2}\le\epsilon_1^{1/2}L,$$
where
$$L=\Bigl(\int_{t_0}^T(\tilde Q_1(t)z(t;u),z(t;u))_n\,dt\Bigr)^{1/2}.$$
A direct substitution shows that the last inequality turns into an equality at $F=(x_0,f)\in G_1$ with
$$f=f^0\pm\frac{\epsilon_1^{1/2}}{L}\,Q_1^{-1}B^T(\cdot)z(\cdot;u).$$
Taking into account that
$$\inf_{d\in\mathbb{R}}\ \sup_{|y|\le\epsilon_1^{1/2}L}|y-d|^2=\epsilon_1L^2,$$
we find
$$\inf_{c\in\mathbb{R}}\sup_{F\in G_1}\Bigl(\int_{t_0}^T(B^T(t)z(t;u),f(t)-f^0(t))_r\,dt+\int_{t_0}^T(B^T(t)z(t;u),f^0(t))_r\,dt-c\Bigr)^2=\epsilon_1L^2=\epsilon_1\int_{t_0}^T(\tilde Q_1(t)z(t;u),z(t;u))_n\,dt,\tag{27}$$
where the infimum over $c$ is attained at
$$c=\int_{t_0}^T(B^T(t)z(t;u),f^0(t))_r\,dt.$$
Let us calculate the last term on the right-hand side of (25). Applying Lemma 1, we have
$$E\Bigl(\sum_{i=1}^N(\xi_i,u_i)_m\Bigr)^2\le E\Bigl[\sum_{i=1}^N(D_i^{-1}u_i,u_i)_m\cdot\sum_{i=1}^N(D_i\xi_i,\xi_i)_m\Bigr]=\sum_{i=1}^N(D_i^{-1}u_i,u_i)_m\cdot E\sum_{i=1}^N(D_i\xi_i,\xi_i)_m.\tag{28}$$
Transform the last factor on the right-hand side of (28):
$$E\sum_{i=1}^N(D_i\xi_i,\xi_i)_m=\sum_{i=1}^N E\sum_{j=1}^m\sum_{k=1}^m d_{jk}^{(i)}\xi_k^{(i)}\xi_j^{(i)}=\sum_{i=1}^N\sum_{j=1}^m\sum_{k=1}^m d_{jk}^{(i)}r_{kj}^{(i)}=\sum_{i=1}^N\operatorname{Tr}[D_iR_i].$$
Similarly,
$$E\Bigl(\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt\Bigr)^2\le\sum_{j=1}^M\int_{\Omega_j}(D_j^{-1}(t)u_j(t),u_j(t))_l\,dt\cdot E\sum_{j=1}^M\int_{\Omega_j}(D_j(t)\xi_j(t),\xi_j(t))_l\,dt$$
and
$$E\sum_{j=1}^M\int_{\Omega_j}(D_j(t)\xi_j(t),\xi_j(t))_l\,dt=\sum_{j=1}^M\int_{\Omega_j}\operatorname{Tr}[D_j(t)R_j(t,t)]\,dt.$$
In view of (10) and (11), we deduce from (28) that
$$E\Bigl(\sum_{i=1}^N(u_i,\xi_i)_m\Bigr)^2+E\Bigl(\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt\Bigr)^2\le\epsilon_3\sum_{i=1}^N(D_i^{-1}u_i,u_i)_m+\epsilon_2\sum_{j=1}^M\int_{\Omega_j}(D_j^{-1}(t)u_j(t),u_j(t))_l\,dt.$$
It is not difficult to check that the equality sign here is attained at the element
$$\xi^{(0)}=(\xi_1^{(0)}(\cdot),\dots,\xi_M^{(0)}(\cdot),\xi_1^{(0)},\dots,\xi_N^{(0)})\in G_2$$
with
$$\xi_i^{(0)}=\epsilon_3^{1/2}\eta_1\frac{D_i^{-1}u_i}{\bigl(\sum_{i=1}^N(D_i^{-1}u_i,u_i)_m\bigr)^{1/2}},\quad i=1,\dots,N,$$
$$\xi_j^{(0)}(t)=\epsilon_2^{1/2}\eta_2\frac{D_j^{-1}(t)u_j(t)}{\bigl(\sum_{j=1}^M\int_{\Omega_j}(D_j^{-1}(t)u_j(t),u_j(t))_l\,dt\bigr)^{1/2}},\quad j=1,\dots,M,$$
where $\eta_1$ and $\eta_2$ are uncorrelated random variables such that $E\eta_i=0$ and $E|\eta_i|^2=1$, $i=1,2$. Hence,
$$\sup_{\xi\in G_2}\Bigl\{E\Bigl(\sum_{i=1}^N(\xi_i,u_i)_m\Bigr)^2+E\Bigl(\sum_{j=1}^M\int_{\Omega_j}(\xi_j(t),u_j(t))_l\,dt\Bigr)^2\Bigr\}=\epsilon_3\sum_{i=1}^N(D_i^{-1}u_i,u_i)_m+\epsilon_2\sum_{j=1}^M\int_{\Omega_j}(D_j^{-1}(t)u_j(t),u_j(t))_l\,dt.\tag{29}$$
From (25)–(27) and (29), we obtain
$$\inf_{c\in\mathbb{R}}\sup_{F\in G_1,\,\xi\in G_2}E|l(x)-\widehat{l(x)}|^2=I(u),$$
where $I(u)$ is defined by (18) and the infimum over $c$ is attained at
$$c=\int_{t_0}^T(B^T(t)z(t;u),f^0(t))_r\,dt.\qquad\Box$$
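With Lemma 2 in hand, one can experiment numerically. The following sketch (ours, not an algorithm from the paper) evaluates the cost $I(u)$ from (18) on the toy problem by integrating (15) backward from $t=T$ and applying the jump (16) at the observation point; $Q_1$, the $D$'s (taken as 1), the $\epsilon$'s, $l_0$, and $a$ are all illustrative assumptions, and membership $u\in U$ is not enforced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Evaluate I(u) of (18) for the toy problem above (M = N = 1).
eps1, eps2, eps3 = 1.0, 1.0, 1.0
Q1t = lambda t: np.array([[1.0]])                   # r x r, r = 1
Q1tilde = lambda t: B(t) @ np.linalg.inv(Q1t(t)) @ B(t).T
l0 = lambda t: np.zeros(2)
a = np.array([1.0, 0.0])
chi = lambda t: float(Omega1[0] <= t <= Omega1[1])  # indicator of Omega_1

def z_of_u(u_fun, u_pt):
    """z(.;u) from (15)-(16), integrated backward from z(T) = a."""
    rhs = lambda t, z: -A(t).T @ z - l0(t) + chi(t) * (H1.T @ u_fun(t))
    s1 = solve_ivp(rhs, (T, t1_obs), a, dense_output=True, rtol=1e-9)
    z_left = s1.y[:, -1] - (H1_pt.T @ u_pt)  # jump (16): z(t+0)-z(t) = H^T u
    s0 = solve_ivp(rhs, (t1_obs, t0), z_left, dense_output=True, rtol=1e-9)
    return lambda t: s0.sol(t) if t <= t1_obs else s1.sol(t)

def I_of_u(u_fun, u_pt, npts=400):
    z = z_of_u(u_fun, u_pt)
    tt = np.linspace(t0, T, npts)
    tO = np.linspace(*Omega1, npts)
    term1 = eps1 * np.trapz([z(t) @ Q1tilde(t) @ z(t) for t in tt], tt)
    term2 = eps2 * np.trapz([u_fun(t) @ u_fun(t) for t in tO], tO)  # D_1 = 1
    term3 = eps3 * (u_pt @ u_pt)                                    # D_1 = 1
    return term1 + term2 + term3

print(I_of_u(lambda t: np.array([0.1]), np.array([0.2])))
```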
As a result of solving the optimal control problem formulated in Lemma 2, we come to the following assertion.
Theorem 1.
There exists a unique minimax estimate $\widehat{\widehat{l(x)}}$ of $l(x)$, which can be represented as
$$\widehat{\widehat{l(x)}}=\sum_{j=1}^M\int_{\Omega_j}(y_j(t),\hat u_j(t))_l\,dt+\sum_{i=1}^N(y_i,\hat u_i)_m+\hat c=l(\hat x),\tag{30}$$
where
$$\hat u_j(t)=\epsilon_2^{-1}D_j(t)H_j(t)p(t),\quad j=1,\dots,M,\qquad\hat u_i=\epsilon_3^{-1}D_iH_ip(t_i),\quad i=1,\dots,N,\tag{31}$$
$$\hat c=\int_{t_0}^T(B^T(t)\hat z(t),f^0(t))_r\,dt,\tag{32}$$
and the functions $p$, $\hat z$, and $\hat x$ are found from the solutions of the systems of equations
$$\frac{d\hat z(t)}{dt}=-A^T(t)\hat z(t)-l_0(t)+\epsilon_2^{-1}\sum_{j=1}^M\chi_{\Omega_j}(t)H_j^T(t)D_j(t)H_j(t)p(t)\quad\mathit{for\ a.e.}\ t\in(t_0,T),\tag{33}$$
$$\Delta\hat z|_{t=t_i}=\epsilon_3^{-1}H_i^TD_iH_ip(t_i),\quad i=1,\dots,N,\qquad\hat z(T)=a,\qquad C^T\hat z(t_0)=0,\tag{34}$$
$$\frac{dp(t)}{dt}=A(t)p(t)+\epsilon_1\tilde Q_1(t)\hat z(t)\quad\mathit{for\ a.e.}\ t\in(t_0,T),\tag{35}$$
$$p(t_0)=C\lambda,\tag{36}$$
and
$$\frac{d\hat p(t)}{dt}=-A^T(t)\hat p(t)+\epsilon_2^{-1}\sum_{j=1}^M\chi_{\Omega_j}(t)H_j^T(t)D_j(t)[H_j(t)\hat x(t)-y_j(t)]\quad\mathit{for\ a.e.}\ t\in(t_0,T),\tag{37}$$
$$\Delta\hat p|_{t=t_i}=\epsilon_3^{-1}H_i^TD_i[H_i\hat x(t_i)-y_i],\quad i=1,\dots,N,\qquad\hat p(T)=0,\qquad C^T\hat p(t_0)=0,\tag{38}$$
$$\frac{d\hat x(t)}{dt}=A(t)\hat x(t)+\epsilon_1\tilde Q_1(t)\hat p(t)+B(t)f^0(t)\quad\mathit{for\ a.e.}\ t\in(t_0,T),\tag{39}$$
$$\hat x(t_0)=C\mu,\tag{40}$$
respectively, where $\lambda\in\mathbb{R}^k$ and $\mu\in\mathbb{R}^k$ are Lagrange multipliers. Problems (33)–(36) and (37)–(40) are uniquely solvable. Equations (37)–(40) are fulfilled with probability 1.
The estimation error σ is given by the expression
$$\sigma=[l(p)]^{1/2}.\tag{41}$$
Proof. 
Applying the same reasoning as in the proof of Theorem 1 from [8] and taking into account estimate (1.21) from [15], one can verify that the functional $I(u)$ is strictly convex and lower semicontinuous on $U$. Since
$$I(u)=\epsilon_1\int_{t_0}^T(\tilde Q_1(t)z(t;u),z(t;u))_n\,dt+\epsilon_2\sum_{j=1}^M\int_{\Omega_j}(D_j^{-1}(t)u_j(t),u_j(t))_l\,dt+\epsilon_3\sum_{i=1}^N(D_i^{-1}u_i,u_i)_m\ge c\|u\|_H^2\quad\forall u\in U,\ c=\text{const},$$
then, by Remark 1.2 to Theorem 1.1 of [16], there exists a unique element $\hat u\in U$ such that
$$I(\hat u)=\inf_{u\in U}I(u).$$
Applying the regularity condition (A1), we see that there exists a Lagrange multiplier $\lambda\in\mathbb{R}^k$ such that
$$\frac{d}{d\tau}I_\lambda(\hat u+\tau v)\Big|_{\tau=0}=0\quad\forall v\in H,$$
where by $I_\lambda$ we denote the Lagrange function of problem (15), (16), and (18) defined by
$$I_\lambda(u)=I(u)+2(\lambda,C^Tz(t_0;u))_k.$$
It follows from here that
$$0=(C\lambda,\tilde z(t_0;v))_n+\epsilon_1\int_{t_0}^T(\tilde Q_1z(t;\hat u),\tilde z(t;v))_n\,dt+\epsilon_2\sum_{j=1}^M\int_{\Omega_j}(D_j^{-1}(t)\hat u_j(t),v_j(t))_l\,dt+\epsilon_3\sum_{i=1}^N(D_i^{-1}\hat u_i,v_i)_m,$$
where $\tilde z(t;v)$ is the solution of problem (15) and (16) at $l_0(t)=0$, $a=0$, and $u=v$. Next, denote by $p(t)$ the unique solution to the following problem:
$$\frac{dp(t)}{dt}=A(t)p(t)+\epsilon_1\tilde Q_1(t)z(t;\hat u)\quad\text{for a.e. }t\in(t_0,T),\tag{42}$$
$$p(t_0)=C\lambda.\tag{43}$$
Then
$$\begin{aligned}
(C\lambda,\tilde z(t_0;v))_n&+\epsilon_1\int_{t_0}^T(\tilde Q_1z(t;\hat u),\tilde z(t;v))_n\,dt\\
&=(p(t_0),\tilde z(t_0;v))_n+\sum_{s=1}^{N+1}\int_{t_{s-1}}^{t_s}\Bigl(\frac{dp(t)}{dt}-A(t)p(t),\tilde z_s(t;v)\Bigr)_n dt\\
&=(p(t_0),\tilde z(t_0;v))_n+\sum_{s=1}^{N+1}\Bigl((p(t_s),\tilde z_s(t_s;v))_n-(p(t_{s-1}),\tilde z_s(t_{s-1};v))_n\\
&\qquad\qquad-\int_{t_{s-1}}^{t_s}\Bigl(\frac{d\tilde z_s(t;v)}{dt}+A^T(t)\tilde z_s(t;v),p(t)\Bigr)_n dt\Bigr)\\
&=-\sum_{i=1}^N(p(t_i),H_i^Tv_i)_n-\sum_{j=1}^M\int_{\Omega_j}(p(t),H_j^T(t)v_j(t))_n\,dt.
\end{aligned}$$
From (30), (42), and (43), it follows that representations (31) and (32) hold and that the pair of functions $(\hat z(t),p(t)):=(z(t;\hat u),p(t))$ is a unique solution of problem (33)–(36).
Similarly, one can prove the representation $\widehat{\widehat{l(x)}}=l(\hat x)$.
Let us prove (41). By virtue of the relations $C^T\hat z(t_0)=0$, (18), and (31), we obtain
$$\begin{aligned}
\sigma(\hat u,\hat c)=I(\hat u)&=(\lambda,C^T\hat z(t_0))_k+\epsilon_1\int_{t_0}^T(\tilde Q_1(t)\hat z(t),\hat z(t))_n\,dt\\
&\qquad+\epsilon_2^{-1}\sum_{j=1}^M\int_{\Omega_j}(H_j(t)p(t),D_j(t)H_j(t)p(t))_l\,dt+\epsilon_3^{-1}\sum_{i=1}^N(H_ip(t_i),D_iH_ip(t_i))_m
\end{aligned}$$
and
$$\begin{aligned}
(\lambda,C^T\hat z(t_0))_k&+\epsilon_1\int_{t_0}^T(\tilde Q_1(t)\hat z(t),\hat z(t))_n\,dt\\
&=(p(t_0),\hat z(t_0))_n+\int_{t_0}^T\Bigl(\frac{dp(t)}{dt}-A(t)p(t),\hat z(t)\Bigr)_n dt\\
&=(p(t_0),\hat z(t_0))_n+\sum_{s=1}^{N+1}\Bigl((p(t_s),\hat z_s(t_s))_n-(p(t_{s-1}),\hat z_s(t_{s-1}))_n-\int_{t_{s-1}}^{t_s}\Bigl(p(t),\frac{d\hat z_s(t)}{dt}+A^T(t)\hat z_s(t)\Bigr)_n dt\Bigr)\\
&=-\epsilon_3^{-1}\sum_{i=1}^N(p(t_i),H_i^TD_iH_ip(t_i))_n+(p(T),a)_n+\int_{t_0}^T(p(t),l_0(t))_n\,dt-\epsilon_2^{-1}\sum_{j=1}^M\int_{\Omega_j}\bigl(p(t),H_j^T(t)D_j(t)H_j(t)p(t)\bigr)_n\,dt.
\end{aligned}$$
From the two latter relations, (41) follows. □
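Theorem 1 reduces the computation of the minimax estimate to the impulsive two-point boundary value problem (33)–(36). As a purely illustrative numerical route (not an algorithm given in the paper), the linear structure suggests shooting: treat $p(T)=q$ as unknown, sweep backward, and invert the affine map $q\mapsto(C^T\hat z(t_0),\ \text{component of }p(t_0)\text{ orthogonal to }\operatorname{col}C)$. A sketch for the toy problem assembled in the earlier sketches:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Shooting solver for the impulsive BVP (33)-(36) on the toy problem
# (n = 2, k = 1, M = N = 1, D's = 1).  One of several reasonable
# numerical treatments of this linear BVP, offered as an assumption.
def rhs(t, w):
    z, p = w[:2], w[2:]                      # w = (z_hat, p)
    dz = -A(t).T @ z - l0(t) + eps2**-1 * chi(t) * (H1.T @ (H1 @ p))
    dp = A(t) @ p + eps1 * Q1tilde(t) @ z
    return np.concatenate([dz, dp])

def shoot(q):
    """Backward sweep T -> t0 with the z_hat jump of (34) at t1_obs."""
    s1 = solve_ivp(rhs, (T, t1_obs), np.concatenate([a, q]),
                   rtol=1e-10, atol=1e-12)
    w1 = s1.y[:, -1].copy()
    w1[:2] -= eps3**-1 * (H1_pt.T @ (H1_pt @ w1[2:]))  # undo jump (34)
    s0 = solve_ivp(rhs, (t1_obs, t0), w1, rtol=1e-10, atol=1e-12)
    return s0.y[:, -1]                       # (z_hat(t0), p(t0))

c_perp = np.array([-C[1, 0], C[0, 0]])       # spans col(C)^perp (k = 1)

def residual(q):
    w0 = shoot(q)
    # C^T z_hat(t0) = 0  and  p(t0) = C*lambda  <=>  c_perp . p(t0) = 0
    return np.array([(C.T @ w0[:2])[0], c_perp @ w0[2:]])

# residual is affine in q, so n + 1 sweeps determine it exactly
r0 = residual(np.zeros(2))
Jac = np.column_stack([residual(e) - r0 for e in np.eye(2)])
q_star = np.linalg.solve(Jac, -r0)           # p(T)
print("sigma^2 = l(p) =", a @ q_star)        # (41); here l0 = 0
```

Since the problem is linear, $n+1$ backward sweeps determine the affine residual exactly; for stiff dynamics or many observation points, a monolithic collocation discretization would likely be more robust.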

5. σ₁-Optimal Estimates of the Unknown Solution of the Cauchy Problem at the Moment T

In this section, we define an optimal, in a certain sense, estimate of the unknown solution $x(t)$ of the Cauchy problem (3) and (4) at the moment $T$ that is linear with respect to observations (7) and (8), and we show that this estimate of $x(T)$ coincides with the value at the moment $T$ of the function $\hat x(t)$ obtained from the solution to problem (37)–(40).
Let $x_e(T)$ be an estimate of $x(T)$, linear with respect to observations (7) and (8), of the form
$$x_e(T)=\sum_{j=1}^M\int_{\Omega_j}U_j(t)y_j(t)\,dt+\sum_{i=1}^N U_iy_i+C,\tag{44}$$
where $U_j(t)$ are $n\times l$ matrices whose entries are square-integrable functions on $\Omega_j$, $U_i$ are $n\times m$ matrices, and $C\in\mathbb{R}^n$.
Let $M_{n\,m}(\mathbb{R})$ be the set of $n\times m$ matrices with real elements, $M_{n\,l}(\mathbb{R})$ be the set of $n\times l$ matrices with real elements, and $L^2(\Omega_j,M_{n\,l})$ be the set of $M_{n\,l}$-valued square-integrable functions on $\Omega_j$. Set
$$U:=(U_1(\cdot),\dots,U_M(\cdot),U_1,\dots,U_N)\in\mathcal{H},$$
where $\mathcal{H}:=L^2(\Omega_1,M_{n\,l})\times\cdots\times L^2(\Omega_M,M_{n\,l})\times(M_{n\,m}(\mathbb{R}))^N$, and let $\{e_1,\dots,e_n\}$ be an orthonormal basis of $\mathbb{R}^n$. Let $\sigma_1(U,C)$ be the error functional of the estimate $x_e(T)$, defined by
$$\sigma_1(U,C)=\Bigl\{\sum_{s=1}^n\sup_{F\in G_1,\,\xi\in G_2}E|(x(T)-x_e(T),e_s)_n|^2\Bigr\}^{1/2}.$$
Definition 2.
An estimate
$$\hat x_e(T)=\sum_{j=1}^M\int_{\Omega_j}\hat U_j(t)y_j(t)\,dt+\sum_{i=1}^N\hat U_iy_i+\hat C,$$
for which the matrix-functions $\hat U_j(\cdot)$, the matrices $\hat U_i$, and the vector $\hat C$ are determined from the condition
$$\inf_{U\in\mathcal{H},\,C\in\mathbb{R}^n}\sigma_1(U,C)=\sigma_1(\hat U,\hat C),$$
will be called a $\sigma_1$-optimal estimate of the vector $x(T)$. The quantity
$$\sigma_1=\sigma_1(\hat U,\hat C)$$
will be called the error of $\sigma_1$-optimal estimation.
Let $(\hat z^{(s)}(t),p^{(s)}(t))$ be the solution of problem (33)–(36) at $a=e_s$, $s=1,\dots,n$, and $l_0(t)\equiv 0$.
Theorem 2.
The $\sigma_1$-optimal estimate of the vector $x(T)$ is determined by (44) with
$$\hat U_j(t)=\epsilon_2^{-1}\sum_{s=1}^n e_s\otimes\bigl((p^{(s)})^T(t)H_j^T(t)D_j(t)\bigr),\quad j=1,\dots,M,$$
$$\hat U_i=\epsilon_3^{-1}\sum_{s=1}^n e_s\otimes\bigl((p^{(s)})^T(t_i)H_i^TD_i\bigr),\quad i=1,\dots,N,\qquad\hat C=\sum_{s=1}^n\hat c^{(s)}e_s,\tag{45}$$
where $\hat c^{(s)}$ are defined by (32) at $\hat z(t)=\hat z^{(s)}(t)$, and the symbol $\otimes$ denotes the tensor product of a column vector and a row vector.
Proof. 
Obviously,
$$\sigma_1^2(U,C)\ge\sum_{s=1}^n\inf_{U\in\mathcal{H},\,C\in\mathbb{R}^n}\sup_{F\in G_1,\,\xi\in G_2}E|(x(T)-x_e(T),e_s)_n|^2\ge\sum_{s=1}^n\inf_{u^s\in H,\,c_s\in\mathbb{R}}\sup_{F\in G_1,\,\xi\in G_2}E\bigl|(x(T),e_s)_n-\widehat{(x(T),e_s)_n}\bigr|^2,$$
where $\widehat{(x(T),e_s)_n}$ is an estimate defined by (9) at $a=e_s$,
$$u^s:=\bigl((u_1^{(s)})^T(\cdot),\dots,(u_M^{(s)})^T(\cdot),(u_1^{(s)})^T,\dots,(u_N^{(s)})^T\bigr),$$
$u_j^{(s)}(\cdot)$ is the $s$-th row of the matrix $U_j(\cdot)$, $j=1,\dots,M$, $u_i^{(s)}$ is the $s$-th row of the matrix $U_i$, $i=1,\dots,N$, and $c_s$ is the $s$-th coordinate of the vector $C$, $s=1,\dots,n$.
By Theorem 1, we have
$$\inf_{u^s\in H,\,c_s\in\mathbb{R}}\sup_{F\in G_1,\,\xi\in G_2}E\bigl|(x(T),e_s)_n-\widehat{(x(T),e_s)_n}\bigr|^2=\sup_{F\in G_1,\,\xi\in G_2}E|(x(T),e_s)_n-(\hat x(T),e_s)_n|^2,$$
where $\hat x(t)$ is defined from the system of Equations (37)–(40).
Notice that the equality
$$\hat x(T)=\sum_{s=1}^n(\hat x(T),e_s)_n e_s=\sum_{s=1}^n\widehat{\widehat{(x(T),e_s)_n}}\,e_s$$
holds. However,
$$\widehat{\widehat{(x(T),e_s)_n}}=\sum_{j=1}^M\int_{\Omega_j}(y_j(t),\hat u_j^{(s)}(t))_l\,dt+\sum_{i=1}^N(y_i,\hat u_i^{(s)})_m+\hat c^{(s)},$$
where
$$\hat u_j^{(s)}(t)=\epsilon_2^{-1}D_j(t)H_j(t)p^{(s)}(t),\quad j=1,\dots,M,$$
$$\hat u_i^{(s)}=\epsilon_3^{-1}D_iH_ip^{(s)}(t_i),\quad i=1,\dots,N,$$
$$\hat c^{(s)}=\int_{t_0}^T(B^T(t)\hat z^{(s)}(t),f^0(t))_r\,dt.$$
Therefore,
$$\begin{aligned}
\sum_{s=1}^n(\hat x(T),e_s)_n e_s&=\sum_{s=1}^n\Bigl(\sum_{j=1}^M\int_{\Omega_j}(y_j(t),\hat u_j^{(s)}(t))_l\,dt+\sum_{i=1}^N(y_i,\hat u_i^{(s)})_m+\hat c^{(s)}\Bigr)e_s\\
&=\sum_{s=1}^n\Bigl(\sum_{j=1}^M\int_{\Omega_j}(y_j(t),\epsilon_2^{-1}D_j(t)H_j(t)p^{(s)}(t))_l\,dt+\sum_{i=1}^N(y_i,\epsilon_3^{-1}D_iH_ip^{(s)}(t_i))_m+\hat c^{(s)}\Bigr)e_s\\
&=\sum_{j=1}^M\int_{\Omega_j}\sum_{s=1}^n(\epsilon_2^{-1}H_j^T(t)D_j(t)y_j(t),p^{(s)}(t))_n e_s\,dt+\sum_{i=1}^N\sum_{s=1}^n(\epsilon_3^{-1}H_i^TD_iy_i,p^{(s)}(t_i))_n e_s+\sum_{s=1}^n\hat c^{(s)}e_s\\
&=\sum_{j=1}^M\int_{\Omega_j}\hat U_j(t)y_j(t)\,dt+\sum_{i=1}^N\hat U_iy_i+\hat C,
\end{aligned}$$
where $\hat U_j(t)$, $\hat U_i$, and $\hat C$ are defined by (45). It follows that the functional $\sigma_1^2(U,C)$ attains its minimum value at the matrices $\hat U_j(t)$, $j=1,\dots,M$, $\hat U_i$, $i=1,\dots,N$, and at the vector $\hat C$. This proves the theorem. □
Corollary 1.
Vector x ^ ( T ) is the σ 1 -optimal estimate of vector x ( T ) .
Corollary 2.
$$\sigma_1^2=\sum_{s=1}^n(p^{(s)}(T),e_s)_n.$$
Denote by $\bar\sigma(U,C)$ the quantity defined by
$$\bar\sigma(U,C)=\sup_{F\in G_1,\,\xi\in G_2}E\|x(T)-x_e(T)\|_n^2.$$
An estimate
$$\hat{\hat x}_e(T)=\sum_{i=1}^N\hat U_iy_i+\sum_{j=1}^M\int_{\Omega_j}\hat U_j(t)y_j(t)\,dt+\hat C,$$
for which the matrices $\hat U_i$, the matrix-functions $\hat U_j(\cdot)$, and the vector $\hat C$ are determined from the condition
$$\inf_{U\in\mathcal{H},\,C\in\mathbb{R}^n}\bar\sigma(U,C)=\bar\sigma(\hat U,\hat C),$$
is called an optimal mean square estimate of the vector $x(T)$. The quantity
$$\bar\sigma=[\bar\sigma(\hat U,\hat C)]^{1/2}$$
is called the error of the optimal mean square estimation.
Parseval's formula implies the inequality
$$\bar\sigma(U,C)\le\sigma_1^2(U,C).$$
Therefore, for the error $\bar\sigma$ of the optimal mean square estimation, the following estimate from above holds:
$$\bar\sigma\le\sigma_1=\Bigl\{\sum_{s=1}^n(p^{(s)}(T),e_s)_n\Bigr\}^{1/2}.$$
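Reusing the shooting solver from the sketch after Theorem 1 (which already has $l_0\equiv 0$, as Corollary 2 requires), one can tabulate $\sigma_1$ for the toy problem. Note that `shoot()` reads the terminal vector `a` as a module-level variable, so the sketch simply rebinds it to $e_s$ inside the loop:

```python
import numpy as np

# sigma_1 from Corollary 2 for the toy problem (n = 2).
sigma1_sq = 0.0
for s in range(2):
    a = np.eye(2)[s]                     # terminal data a = e_s in (34)
    r0 = residual(np.zeros(2))
    Jac = np.column_stack([residual(e) - r0 for e in np.eye(2)])
    p_sT = np.linalg.solve(Jac, -r0)     # p^{(s)}(T)
    sigma1_sq += np.eye(2)[s] @ p_sT     # (p^{(s)}(T), e_s)
print("sigma_1 =", np.sqrt(sigma1_sq))
```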

6. Conclusions

When elaborating the guaranteed estimation of solutions to the Cauchy problem in the absence of restrictions on unknown initial data, we have reduced the determination of the necessary minimax estimates to well-defined optimal control problems.
Using this approach, we have proved the existence of the unique minimax estimate and obtained its representation together with that of the estimation error in terms of solutions to the explicitly derived systems of impulsive ordinary differential equations.
The results and techniques of this study can be extended to a wider class of initial value problems and, after appropriate generalization, to the analysis of such estimation problems for linear partial differential equations of the parabolic and hyperbolic types that describe evolution processes.

Author Contributions

Methodology, investigation, conceptualization, O.N.; writing—original draft preparation, conceptualization, methodology, investigation, validation, Y.P.; validation, resources, writing—review and editing, funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Below, we provide some sufficient conditions for the non-emptiness of the set $U$. We begin with the following remarks. Define the mapping $D:H\to\mathbb{R}^k$ by $Du:=C^Tz(t_0;u)$. Since the solution of problem (15) and (16) can be represented as $z(t;u)=\tilde z(t;u)+z^0(t)$, where $\tilde z(t;u)$ is the solution of problem (15) and (16) at $l_0(t)=0$ and $a=0$, and $z^0(t)$ is the solution of this problem at $u=0$, the Fréchet derivative of the mapping $D$ is the linear operator $\tilde D\in\mathcal{L}(H;\mathbb{R}^k)$ defined by $\tilde Du=C^T\tilde z(t_0;u)$ (see Example 1 on page 47 of [17]).
Suppose that the condition
$$\operatorname{Im}\tilde D=\mathbb{R}^k,\tag{A1}$$
called the condition of regularity of the mapping $D$, is fulfilled. It is clear that the regularity of the mapping $D$ implies that $U$ is a non-empty set.
Remark A1.
Let the condition
$$U=\{u\in H:\ C^Tz(t_0;u)=0\}\ne\emptyset$$
be fulfilled. Then there exists $\hat u:=(\hat u_1(\cdot),\dots,\hat u_M(\cdot),\hat u_1,\dots,\hat u_N)\in U$ such that the equality
$$l(\tilde x)=\sum_{j=1}^M\int_{\Omega_j}(\hat y_j(t),\hat u_j(t))_l\,dt+\sum_{i=1}^N(\hat y_i,\hat u_i)_m\tag{A2}$$
holds for all those $x_0\in\mathbb{R}^k$ at which the vector-functions
$$\hat y_j(t)=H_j(t)\tilde x(t),\quad t\in\Omega_j,\ j=1,\dots,M,$$
and the vectors
$$\hat y_i=H_i\tilde x(t_i),\quad i=1,\dots,N,$$
are observed, where $\tilde x(t)$ solves the problem
$$\frac{d\tilde x(t)}{dt}=A(t)\tilde x(t),\quad t\in(t_0,T),\qquad\tilde x(t_0)=Cx_0.$$
Proof. 
Let $\hat u\in U$. Since, by (22),
$$l(\tilde x)-\sum_{j=1}^M\int_{\Omega_j}(\hat y_j(t),\hat u_j(t))_l\,dt-\sum_{i=1}^N(\hat y_i,\hat u_i)_m=(C^T\hat z(t_0),x_0)_k,$$
where $\hat z(t)=z(t;\hat u)$ is the solution of problem (15) and (16) at $u=\hat u$, the equality $C^T\hat z(t_0)=0$ implies (A2). □
Remark A2.
Let $j_0$ be a positive integer such that the system described by the equation
$$\frac{dz_{j_0}(t;u)}{dt}=-A^T(t)z_{j_0}(t;u)+\chi_{\Omega_{j_0}}(t)H_{j_0}^T(t)u(t)\tag{A3}$$
is controllable; that is, for all $t_1<t_2$ and for all $z_1,z_2\in\mathbb{R}^n$, there exists a vector-function $u(t)$ such that $z_{j_0}(t_1)=z_1$ and $z_{j_0}(t_2)=z_2$. Then the set $U$ is non-empty.
Proof.
Let $u(t)$ be such a function. Then it is possible to choose $z_1$ so that the conditions $z_{j_0}(t_{j_0})=z_1$ and $z_{j_0}(T)=a$ are fulfilled, where $\Omega_{j_0}=(t_{j_0},t_{j_0+1})$, $t_{j_0+1}>t_{j_0}$. Obviously, in this case, the element $u$ with components $u_j(t)=0$, $j=1,\dots,M$, $j\ne j_0$, $u_{j_0}(t)=u(t)$, and $u_i=0$, $i=1,\dots,N$, belongs to $U$, since the equalities $z(t_0)=0$ and $z(T)=a$ hold. □
Corollary A1.
If the matrices $A(t)$ and $H_{j_0}(t)$ are time-independent, then system (A3) is controllable if and only if the Kalman rank condition
$$\operatorname{rank}\bigl[H_{j_0}^T,\ A^TH_{j_0}^T,\ \dots,\ (A^T)^{n-1}H_{j_0}^T\bigr]=n$$
holds.
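A quick numerical check of this rank condition for an assumed time-invariant pair $(A,H_{j_0})$ (the matrices below are illustrative, not from the paper):

```python
import numpy as np

# Kalman rank test from Corollary A1.
A_const = np.array([[0.0, 1.0], [-2.0, -0.3]])
H_j0 = np.array([[1.0, 0.0]])          # l x n

n = A_const.shape[0]
blocks = [np.linalg.matrix_power(A_const.T, i) @ H_j0.T for i in range(n)]
K = np.hstack(blocks)                  # [H^T, A^T H^T, ..., (A^T)^{n-1} H^T]
print("controllable:", np.linalg.matrix_rank(K) == n)
```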
Now, we provide sufficient conditions for the non-emptiness of the set $U$ in terms of the problem data. Introduce the matrix-function $\Phi(t,s)$ as the unique solution to the problem
$$\frac{d\Phi(t,s)}{dt}=A(t)\Phi(t,s),\qquad\Phi(s,s)=E_n,\quad t>s,$$
where $E_n$ is the $n\times n$ identity matrix. Denote by $K_j(t)$ and $N_i$ the $k\times l$ and $k\times m$ matrices, respectively, such that $K_j^T(t)=H_j(t)\Phi(t,t_0)C$ and $N_i^T=H_i\Phi(t_i,t_0)C$.
Proposition A1.
The set $U$ is non-empty if $\det D_T\ne 0$, where
$$D_T=\sum_{j=1}^M\int_{\Omega_j}K_j(t)K_j^T(t)\,dt+\sum_{i=1}^N N_iN_i^T.\tag{A4}$$
Proof. 
Let $\det D_T\ne 0$. We show that there exist vector-functions $\hat u_j(t)$, $j=1,\dots,M$, and vectors $\hat u_i$, $i=1,\dots,N$, such that the equality $C^T\hat z(t_0)=0$ (or the equivalent equality $(C^T\hat z(t_0),x_0)_k=0$ for an arbitrary vector $x_0\in\mathbb{R}^k$) holds.
Notice that
$$(C^Tz(t_0;u),x_0)_k=l(\tilde x)-\sum_{j=1}^M\int_{\Omega_j}(\hat y_j(t),u_j(t))_l\,dt-\sum_{i=1}^N(\hat y_i,u_i)_m.$$
Introduce the vector-function $\bar z(t)$ as the unique solution to the problem
$$\frac{d\bar z(t)}{dt}=-A^T(t)\bar z(t)-l_0(t)\quad\text{for a.e. }t\in(t_0,T),\qquad\bar z(T)=a.$$
Then $l(\tilde x)=(C^T\bar z(t_0),x_0)_k$. It is easy to see that $\hat y_j(t)=H_j(t)\tilde x(t)=H_j(t)\Phi(t,t_0)Cx_0$, $j=1,\dots,M$, and $\hat y_i=H_i\Phi(t_i,t_0)Cx_0$, $i=1,\dots,N$. Hence,
$$(C^Tz(t_0;u),x_0)_k=\Bigl(C^T\bar z(t_0)-\sum_{j=1}^M\int_{\Omega_j}K_j(t)u_j(t)\,dt-\sum_{i=1}^N N_iu_i,\ x_0\Bigr)_k.$$
Then a necessary and sufficient condition for the existence of $u_j(t)$ and $u_i$ such that $(C^Tz(t_0;u),x_0)_k=0$ for all $x_0\in\mathbb{R}^k$ is that the equation
$$\sum_{j=1}^M\int_{\Omega_j}K_j(t)u_j(t)\,dt+\sum_{i=1}^N N_iu_i=C^T\bar z(t_0)$$
be solvable.
We look for a solution to this equation in the form $u_j(t)=K_j^T(t)d$, $u_i=N_i^Td$, where the vector $d$ is determined from the system of equations
$$D_Td=C^T\bar z(t_0).$$
Since $\det D_T\ne 0$, there exists a vector $\hat d=D_T^{-1}C^T\bar z(t_0)$. Therefore, the element $\hat u$ with components $\hat u_j(t)=K_j^T(t)D_T^{-1}C^T\bar z(t_0)$, $j=1,\dots,M$, and $\hat u_i=N_i^TD_T^{-1}C^T\bar z(t_0)$, $i=1,\dots,N$, belongs to the set $U$. □
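The quantity $D_T$ is directly computable. Here is a sketch for the toy data of the earlier sections ($k=1$, $M=N=1$); the columnwise construction of $\Phi(t,t_0)$ below is our assumption, not the paper's construction.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assemble D_T from (A4) and check det D_T != 0.
def Phi(t):
    """Fundamental matrix Phi(t, t0), built column by column."""
    cols = [solve_ivp(lambda s, v: A(s) @ v, (t0, t), e,
                      rtol=1e-10, atol=1e-12).y[:, -1]
            for e in np.eye(2)] if t > t0 else list(np.eye(2))
    return np.column_stack(cols)

K1 = lambda t: (H1 @ Phi(t) @ C).T        # K_j^T(t) = H_j(t) Phi(t,t0) C
N1 = (H1_pt @ Phi(t1_obs) @ C).T          # N_i^T = H_i Phi(t_i,t0) C

tt = np.linspace(*Omega1, 201)
D_T = np.trapz(np.array([K1(t) @ K1(t).T for t in tt]), tt, axis=0) \
      + N1 @ N1.T
print("det D_T =", np.linalg.det(D_T))    # nonzero => U is non-empty
```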
Proposition A2.
If $\det D_T\ne 0$, where $D_T$ is defined by (A4), then the regularity condition (A1) of the mapping $D$ is fulfilled.
Proof.
In fact, the previous reasoning shows that for the function $\tilde z(t;u)$, the equality
$$C^T\tilde z(t_0;u)=-\sum_{j=1}^M\int_{\Omega_j}K_j(t)u_j(t)\,dt-\sum_{i=1}^N N_iu_i$$
holds, and condition (A1) is fulfilled if, for any $g\in\mathbb{R}^k$, the equation
$$g=-\sum_{j=1}^M\int_{\Omega_j}K_j(t)u_j(t)\,dt-\sum_{i=1}^N N_iu_i$$
has a solution. It is easy to see that the element $u^0\in H$ with components $u_j^0(\cdot)=-K_j^T(\cdot)D_T^{-1}g$, $j=1,\dots,M$, and $u_i^0=-N_i^TD_T^{-1}g$, $i=1,\dots,N$, satisfies this equation. □

References

1. Kurzhanskii, A.B. Control and Observation under Uncertainties; Nauka: Moscow, Russia, 1977.
2. Chernousko, F.L. State Estimation for Dynamic Systems; CRC Press: Boca Raton, FL, USA, 1994.
3. Kurzhanskii, A.B.; Valyi, I. Ellipsoidal Calculus for Estimation and Control; Birkhäuser: Basel, Switzerland, 1997.
4. Kurzhanskii, A.B.; Varaiya, P. Dynamics and Control of Trajectory Tubes—Theory and Computation; Birkhäuser: Basel, Switzerland, 1997.
5. Kurzhanskii, A.B.; Daryin, A.N. Dynamic Programming for Impulse Feedback and Fast Controls; Springer: Berlin/Heidelberg, Germany, 2020.
6. Nakonechnyi, O.; Podlipenko, Y. Guaranteed recovery of unknown data from indirect noisy observations of their solutions on a finite system of points and intervals. Math. Model. Comput. 2019, 6, 179–191.
7. Nakonechnyi, O.; Podlipenko, Y. Optimal Estimation of Unknown Data of Cauchy Problem for First Order Linear Impulsive Systems of Ordinary Differential Equations from Indirect Noisy Observations of Their Solutions. Nonlinear Dyn. Syst. Theory 2021, submitted.
8. Nakonechniy, O.G.; Podlipenko, Y.K. The minimax approach to the estimation of solutions to first order linear systems of ordinary differential periodic equations with inexact data. arXiv 2018, arXiv:1810.07228v1.
9. Nakonechnyi, O.; Podlipenko, Y. Guaranteed Estimation of Solutions of First Order Linear Systems of Ordinary Differential Periodic Equations with Inexact Data from Their Indirect Noisy Observations. In Proceedings of the 2018 IEEE First International Conference on System Analysis and Intelligent Computing (SAIC), Kyiv, Ukraine, 8–12 October 2018.
10. Nakonechnyi, O.; Podlipenko, Y. Optimal Estimation of Unknown Right-Hand Sides of First Order Linear Systems of Periodic Ordinary Differential Equations from Indirect Noisy Observations of Their Solutions. In Proceedings of the 2020 IEEE 2nd International Conference on System Analysis & Intelligent Computing (SAIC), Kyiv, Ukraine, 5–9 October 2020.
11. Nakonechnyi, O.; Podlipenko, Y. Guaranteed a posteriori estimation of unknown right-hand sides of linear periodic systems of ODEs. Appl. Anal. 2021.
12. Nakonechnyi, O.; Podlipenko, Y.; Shestopalov, Y. Guaranteed a posteriori estimation of uncertain data in exterior Neumann problems for Helmholtz equation from inexact indirect observations of their solutions. Inverse Probl. Sci. Eng. 2021, 29, 525–535.
13. Alekseev, V.M.; Tikhomirov, V.M.; Fomin, S.V. Optimal Control; Springer Science+Business Media: New York, NY, USA, 1987.
14. Hutson, V.; Pym, J.; Cloud, M. Applications of Functional Analysis and Operator Theory, 2nd ed.; Elsevier: Amsterdam, The Netherlands, 2005.
15. Bainov, D.D.; Simeonov, P.S. Impulsive Differential Equations: Asymptotic Properties of the Solutions; World Scientific: Singapore, 1995.
16. Lions, J.L. Optimal Control of Systems Governed by Partial Differential Equations; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1971.
17. Ioffe, A.D.; Tikhomirov, V.M. Theory of Extremal Problems; North-Holland: Amsterdam, The Netherlands; New York, NY, USA; Oxford, UK, 1979.