1. Introduction
A general theory of guaranteed estimates of solutions to Cauchy problems for ordinary differential equations under uncertainty was constructed in [1]. These results were further developed in [2,3,4,5].
This paper focuses on developing methods for estimating the state of systems described by Cauchy problems for linear ordinary differential equations with incomplete data.
The formulations of the estimation problems under uncertainty considered in this article are new; research in this direction has not been carried out previously.
To solve these estimation problems, we use observations that are linear transformations of the unknown solutions on a finite system of intervals and points, perturbed by additive random noises. This type of observation arises because, in many practically important cases, the unknown solutions cannot be observed directly.
From observations of the state of the systems, we find optimal, in a certain sense, estimates for functionals of the solutions of these problems, under the condition that information about the initial conditions is missing and that the right-hand sides of the equations and the correlation functions of the random noises in the observations are not known exactly; it is only known that they belong to certain given sets in the corresponding function spaces.
In such a situation, the minimax estimation method turns out to be applicable and preferable. Choosing this approach, one can obtain optimal estimates not only for the unknown solutions but also for linear functionals of these solutions. In other words, the desired estimates, linear with respect to the observations, are such that the maximal mean-square error, determined over the whole set of realizations of perturbations from the sets under consideration, attains its minimal value. Traditionally, such estimates are referred to as guaranteed or minimax estimates.
We demonstrate that these problems can be reduced to the determination of minima of quadratic functionals on closed convex sets in Hilbert spaces. Expressions for the minimax estimates and the estimation errors are obtained from the solution of this minimization problem by the Lagrange multiplier method. It is shown that such estimates are expressed in terms of solutions to certain well-defined, uniquely solvable systems of differential equations.
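Since the minimax problems reduce to minimizing quadratic functionals on closed convex sets, a finite-dimensional sketch may help fix ideas. The following toy example is our own construction (the matrix Q, the vector b, and the box constraints are illustrative and not taken from this paper); it minimizes a convex quadratic over a box by projected gradient descent:

```python
import numpy as np

# Finite-dimensional analogue of minimizing a quadratic functional
#   I(u) = 0.5 * u^T Q u - b^T u
# over a closed convex set U (here, a box) by projected gradient descent.

def project_box(u, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (a closed convex set)."""
    return np.clip(u, lo, hi)

def minimize_quadratic_on_box(Q, b, lo, hi, iters=2000):
    step = 1.0 / np.linalg.norm(Q, 2)  # safe step size for a convex quadratic
    u = np.zeros(len(b))
    for _ in range(iters):
        grad = Q @ u - b               # gradient of I at u
        u = project_box(u - step * grad, lo, hi)
    return u

# Q is symmetric positive definite, so the constrained minimizer is unique.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
u_star = minimize_quadratic_on_box(Q, b, lo=0.0, hi=0.2)
```

Here the minimum lands on the boundary of the constraint set, which is exactly the situation the Lagrange multiplier machinery of the paper handles in the infinite-dimensional case.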
This paper continues our research cycle accomplished in [6,7], where the guaranteed (minimax) estimation method has been worked out for estimating linear functionals over the set of unknown solutions and data under the condition that the unknown right-hand sides of the equations and the initial conditions entering the statement of the Cauchy problems belong to a certain set in the corresponding Hilbert space (for details, see [8,9,10,11,12]).
2. Preliminaries
Let us first present the assertions and notations that will be used frequently throughout the paper.
If vector-functions and are absolutely continuous on the closed interval , then the following integration by parts formula is valid, where by we denote, here and later on, the inner product in (see [13]).
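The integration by parts formula (1) referred to above is, in all likelihood, the standard identity; with symbols $x$, $z$, $t_0$, $T$ of our choosing (the original notation was lost in extraction), it reads:

```latex
\int_{t_0}^{T} \bigl(\dot{x}(t), z(t)\bigr)\,dt
  = \bigl(x(T), z(T)\bigr) - \bigl(x(t_0), z(t_0)\bigr)
  - \int_{t_0}^{T} \bigl(x(t), \dot{z}(t)\bigr)\,dt,
```

where $(\cdot,\cdot)$ denotes the inner product in $\mathbb{R}^n$.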
Lemma 1. Suppose Q is a bounded positive Hermitian (self-adjoint) operator in a complex (real) Hilbert space H with a bounded inverse. Then the generalized Cauchy–Schwarz inequality (2) is valid, and the equality sign in (2) is attained at the element specified there. For a proof, we refer to [14] (p. 186).
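In its standard form, the generalized Cauchy–Schwarz inequality of Lemma 1 reads as follows (the symbols $u$, $v$, $\lambda$ are ours, since the original display formulas are missing from this version):

```latex
|(u, v)|^{2} \;\le\; (Qu, u)\,\bigl(Q^{-1}v, v\bigr), \qquad u, v \in H,
```

with the equality sign attained at an element of the form $u = \lambda Q^{-1} v$ for a scalar $\lambda$.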
3. Setting of the Minimax Estimation Problem
We consider the following estimation problem. Let the unknown vector-function be a solution of the Cauchy problem (3) and (4), where is an -matrix and is an -matrix with entries and , which are square-integrable and piecewise continuous. (Here and in what follows, a function is called piecewise continuous on an interval if the interval can be broken into a finite number of subintervals such that the function is continuous on each open subinterval (i.e., the subinterval without its endpoints) and has a finite limit at the endpoints of each subinterval.) Further, is an -matrix with entries , and is a vector-function belonging to the space .
By a solution of this problem, we mean a function that satisfies Equation (3) almost everywhere (a.e.) on (that is, except on a set of Lebesgue measure 0) and the conditions (4). Here, is the space of functions absolutely continuous on an interval for which the derivative, which exists almost everywhere on , belongs to the space .
We suppose that the Cauchy data are unknown and satisfy the condition (5), where by we denote the set introduced there. Here, is a symmetric positive definite matrix with real-valued piecewise continuous entries on , is a prescribed vector-function, and is a prescribed positive number.
The problem is to estimate the expression (6) from observations of the form (7) and (8) (here, we denote vectors and matrices by y and H, and vector-functions and matrix-functions by and ) in the class of estimates linear with respect to observations (7) and (8); here, is the state of a system described by the Cauchy problem (3) and (4), are given -matrices, are given -matrices whose elements are piecewise continuous functions on , and are vector-functions that belong to .
We suppose that , where and are the observation errors in (7) and (8), respectively, which are realizations of random vectors and random vector-functions , and where denotes the set of random elements whose components and are uncorrelated; that is, it is assumed that and have zero means, finite second moments and , and unknown correlation matrices and satisfying the conditions and , correspondingly ( denotes the trace of the matrix ). Here, are symmetric positive definite matrices with constant entries, are symmetric positive definite matrices whose entries are assumed to be piecewise continuous functions on , and and are prescribed positive numbers.
The norm and inner product in the space H are defined by and , respectively.
Definition 1. The estimate in which the vectors and the number are determined from the condition stated below will be called the minimax estimate of expression (6). The quantity will be called the error of the minimax estimation of . We see that a minimax estimate minimizes the maximal mean-square estimation error determined for the “worst” realization of the perturbations.
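Schematically, and in notation of our own choosing (the display formulas of Definition 1 are missing from this version), the minimax estimate solves a problem of the form

```latex
\widehat{l(x)} \;=\; \operatorname*{arg\,min}_{\widehat{l}\ \text{linear in the observations}}\;
\sup_{\substack{\text{admissible initial data}\\ \text{and noise correlations}}}
\mathbb{E}\,\bigl|\,l(x) - \widehat{l}\,\bigr|^{2},
```

i.e., the inner supremum ranges over the uncertainty sets described in Sections 2 and 3, and the outer minimum is taken over the class of linear estimates.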
4. Representations for Minimax Estimates and Estimation Errors
In order to reduce the problem of determining the minimax estimates to a certain optimal control problem, one can introduce, for any fixed vector-function , the unique solution to the problem (15) and (16) (here and in what follows, we assume that if a function is piecewise continuous, then it is continuous from the left), where is a characteristic function of the set , and where by U we denote the set introduced there. It is easy to see that if , then U is a closed and convex set in the space . The following result is valid.
Lemma 2. Let (in Appendix A, we give some sufficient conditions for the non-emptiness of the set U). Then the determination of the minimax estimate of is equivalent to the problem of optimal control of the system governed by Equations (15) and (16) with the cost functional (18), where . Proof. For each , denote by the restriction of the function to a subinterval of the interval , and extend it from this subinterval to the endpoints and by continuity. Then, due to (15) and (16), relations (19) and (20) hold.
Let x be a solution to the problem (3) and (4). From relations (6)–(8), (19) and (20), and the integration by parts formula (1) with , we obtain
Taking into account that , from the latter equalities we obtain (22), and the latter relationship yields (23). Since the vector in the first term on the right-hand side of (23) may be an arbitrary element of the space , the quantity will be finite if and only if the first term on the right-hand side of (23) vanishes; therefore, we will further assume that this is the case. Taking into consideration the known relationship that couples the variance of a random variable with its expectation, in which is determined by the right-hand side of the equality that follows from (22), and using the noncorrelatedness of and , from the equalities (22) and (23) we find
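The “known relationship” coupling the variance of a random variable with its expectation, invoked above, is presumably the standard identity (notation ours, since the display formula is missing here):

```latex
D\xi \;=\; \mathbb{E}\,\xi^{2} - \bigl(\mathbb{E}\,\xi\bigr)^{2}.
```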
Then the generalized Cauchy–Schwarz inequality and (5) imply (24), where . Direct substitution shows that the last inequality turns into an equality at , where . Taking into account that , we find (25), where the infimum over c is attained at . Let us calculate the last term on the right-hand side of (25). Applying Lemma 1, we have (28). Transforming the last factor on the right-hand side of (28), and in view of (10) and (11), we deduce from (28) the estimate (29). It is not difficult to check that here the equality sign is attained at the element with , where and are uncorrelated random variables such that and ; hence, the stated equality follows. From (25)–(27) and (29), we obtain (30), where is defined by (18) and where the infimum over c is attained at . □
As a result of solving the optimal control problem formulated in Lemma 2, we come to the following assertion.
Theorem 1. There exists a unique minimax estimate of , which can be represented as (31) and (32), where the functions and are found from the solution of the systems of Equations (33)–(36) and (37)–(40), respectively, in which and are Lagrange multipliers. Problems (33)–(36) and (37)–(40) are uniquely solvable, and Equations (37)–(40) are fulfilled with probability 1. The estimation error σ is given by expression (41). Proof. Applying the same reasoning as in the proof of Theorem 1 from [8] and taking into account estimate (1.21) from [15], one can verify that the functional is strictly convex and lower semicontinuous on . Since , then, by Remark 1.2 to Theorem 1.1 (see [16]), there exists a unique element such that
Applying the regularity condition (A1), we see that there exists a Lagrange multiplier such that (42) holds, where by we denote the Lagrange function of the problem (15), (16) and (18), defined by (43). It follows from here that , where is the solution of the problem (15) and (16) at and . Next, denote by the unique solution to the following problem. From (30), (42) and (43), it follows that (31) and (32) hold and that the pair of functions is the unique solution of problems (33)–(35).
Similarly, we can prove the representation . Let us now prove (41). By virtue of relations (18) and (32), we obtain and . From the two latter relations, (41) follows. □
5. -Optimal Estimates of Unknown Solution of the Cauchy Problem at the Moment T
In this section, we define an optimal, in a certain sense, estimate of the unknown solution of the Cauchy problem (3) and (4) at the moment T that is linear with respect to observations (7) and (8), and show that this estimate of coincides with the function obtained from the solution to problems (37)–(40) at the moment T.
Let be an estimate of linear with respect to observations (7) and (8), which has the form , where are -matrices with entries that are square-integrable functions on , and are -matrices, .
Let be the set of -matrices with real elements, be the set of -matrices with real elements, and be the set of -valued square-integrable functions on . Set , where , and let be an orthogonal basis of . Let be the error functional of the estimate , which has the form
Definition 2. An estimate for which the matrix-functions , the matrices , and the vector are determined from the condition stated below will be called a -optimal estimate of the vector . The quantity will be called the error of -optimal estimation. Let be the solution of problem (33)–(36) at and .
Theorem 2. The -optimal estimate of the vector is determined by (44) with (45), where are defined by (32) at , and the symbol ⊗ denotes the tensor product of a column vector and a row vector. Proof. Obviously,
where is the estimate defined by (9) at , is the s-th row of the matrix , is the s-th row of the matrix , and is the s-th coordinate of the vector . By Theorem 1, we have , where is defined from the system of Equations (37)–(40). Notice that the following equality holds. However, , where . Therefore, , where and are defined by (45). It follows from here that the functional attains its minimum value on the matrices and and on the vector . This proves the theorem. □
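The tensor product ⊗ of a column vector and a row vector appearing in Theorem 2 is the ordinary outer product; a minimal illustration (the array values are arbitrary, chosen by us):

```python
import numpy as np

# Outer (tensor) product of a column vector and a row vector:
# the result is a matrix with entries T[i, j] = col[i] * row[j].
col = np.array([[1.0], [2.0], [3.0]])   # 3x1 column vector
row = np.array([[4.0, 5.0]])            # 1x2 row vector
T = col @ row                           # 3x2 matrix
```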
Corollary 1. Vector is the -optimal estimate of vector
Denote by the quantity defined by . An estimate for which the matrices , matrix-functions , and vector are determined from the condition is called an optimal mean-square estimate of the vector . The quantity is called the error of the optimal mean-square estimation.
Parseval’s formula implies the inequality . Therefore, for the error of the optimal mean-square estimation , the following estimate from above holds:
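The Parseval identity underlying this bound, written for an orthonormal basis $\{e_s\}$ of the corresponding space (notation ours; the displayed inequality itself is missing from this version), is

```latex
\|v\|^{2} \;=\; \sum_{s=1}^{\infty} \bigl|(v, e_s)\bigr|^{2}.
```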
6. Conclusions
When elaborating the guaranteed estimation of solutions to the Cauchy problem in the absence of restrictions on unknown initial data, we have reduced the determination of the necessary minimax estimates to well-defined optimal control problems.
Using this approach, we have proved the existence of the unique minimax estimate and obtained its representation together with that of the estimation error in terms of solutions to the explicitly derived systems of impulsive ordinary differential equations.
The results and techniques of this study can be extended to a wider class of initial value problems and, after appropriate generalization, to the analysis of such estimation problems for linear partial differential equations of the parabolic and hyperbolic types that describe evolution processes.
Author Contributions
Methodology, investigation, conceptualization, O.N.; writing—original draft preparation, conceptualization, methodology, investigation, validation, Y.P.; validation, resources, writing—review and editing, funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
Below, we provide some sufficient conditions for the non-emptiness of the set U. To do this, we begin with the following remarks. Define in the space H the mapping by the formula given below.
Then, since the solution of this problem can be represented as , where is the solution of problem (15) and (16) at and , and is the solution of this problem at , the Fréchet derivative of the mapping is a linear operator defined by (see Example 1 on page 47 of [17]).
Suppose that the condition (A1), called the condition of regularity of the mapping , is fulfilled. It is clear that the condition of regularity of the mapping implies that U is a non-empty set.
Remark A1. Let the condition be fulfilled. Then there exists such that the equality holds for all those at which the vector-functions and the vectors indicated below are observed, where solves the stated problem. Proof. Let . Since , where is a solution of problem (15) and (16) at , the equality implies (A2). □
Remark A2. Let be a positive integer such that the system described by Equation (A3) is controllable; that is, for all and , there exists a vector-function such that and . Then, the set U is nonempty. Proof. Let be such a function. Then it is possible to choose so that the conditions and are fulfilled, where . Obviously, in this case the element u with components and belongs to U, since the corresponding equalities hold. □
Corollary A1. If the matrices and are time-independent, then system (A3) is controllable if and only if the Kalman rank condition holds. Now, we provide sufficient conditions for the non-emptiness of the set U.
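For time-invariant matrices, the Kalman rank condition of Corollary A1 can be checked numerically; the following sketch uses the standard criterion (the names A and B and the examples are ours, not the paper's notation):

```python
import numpy as np

# Kalman rank condition: the time-invariant pair (A, B) is controllable
# iff rank [B, AB, ..., A^{n-1}B] = n, where n is the state dimension.
def is_controllable(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    C = np.hstack(blocks)                # n x (n*m) controllability matrix
    return np.linalg.matrix_rank(C) == n

# Double integrator with a single input: a classic controllable pair.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
```

By contrast, a pair whose input cannot reach part of the state space (e.g. A the identity with a single-coordinate input) fails the rank test.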
Introduce the matrix-function as the unique solution to the problem stated below. Denote by and the - and -matrices, respectively, such that and .
Proposition A1. The set U is non-empty if , where . Proof. Let . We show that there exist vector-functions and vectors such that the equality (or the equivalent equality for an arbitrary vector ) holds.
Introduce the vector-function as the unique solution to the problem stated below. Then it is easy to see that and . Hence, a necessary and sufficient condition for the existence of and such that for all is that the equation be solvable. We look for a solution to this equation in the form , where the vector d is determined from the system of equations given below. Since , there exists a vector such that . Therefore, the element with components belongs to the set . □
Proposition A2. Under condition (A4), the regularity condition of the mapping in Equation (A1) is fulfilled. Proof. In fact, the previous reasoning leads to the conclusion that for the function , the equality holds, and condition (A1) is fulfilled if for any the system has a solution. It is easy to see that the element with components satisfies this equation. □
References
- Kurzhanskii, A.B. Control and Observation under Uncertainties; Nauka: Moscow, Russia, 1977. [Google Scholar]
- Chernousko, F.L. State Estimation for Dynamic Systems; CRC Press: Boca Raton, FL, USA, 1994. [Google Scholar]
- Kurzhanskii, A.B.; Valyi, I. Ellipsoidal Calculus for Estimation and Control; Birkhauser: Basel, Switzerland, 1997. [Google Scholar]
- Kurzhanskii, A.B.; Varaiya, P. Dynamics and Control of Trajectory Tubes—Theory and Computation; Birkhauser: Basel, Switzerland, 1997. [Google Scholar]
- Kurzhanskii, A.B.; Daryin, A.N. Dynamic Programming for Impulse Feedback and Fast Controls; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
- Nakonechnyi, O.; Podlipenko, Y. Guaranteed recovery of unknown data from indirect noisy observations of their solutions on a finite system of points and intervals. Math. Model. Comput. 2019, 6, 179–191. [Google Scholar] [CrossRef]
- Nakonechnyi, O.; Podlipenko, Y. Optimal Estimation of Unknown Data of Cauchy Problem for First Order Linear Impulsive Systems of Ordinary Differential Equations from Indirect Noisy Observations of Their Solutions. Nonlinear Dyn. Syst. Theory 2021. submitted. [Google Scholar]
- Nakonechniy, O.G.; Podlipenko, Y.K. The minimax approach to the estimation of solutions to first order linear systems of ordinary differential periodic equations with inexact data. arXiv 2018, arXiv:1810.07228V1. [Google Scholar]
- Nakonechnyi, O.; Podlipenko, Y. Guaranteed Estimation of Solutions of First Order Linear Systems of Ordinary Differential Periodic Equations with Inexact Data from Their Indirect Noisy Observations. In Proceedings of the 2018 IEEE First International Conference on System Analysis and Intelligent Computing (SAIC), Kyiv, Ukraine, 8–12 October 2018. [Google Scholar]
- Nakonechnyi, O.; Podlipenko, Y. Optimal Estimation of Unknown Right-Hand Sides of First Order Linear Systems of Periodic Ordinary Differential Equations from Indirect Noisy Observations of Their Solutions. In Proceedings of the 2020 IEEE 2nd International Conference on System Analysis & Intelligent Computing (SAIC), Kyiv, Ukraine, 5–9 October 2020. [Google Scholar] [CrossRef]
- Nakonechnyi, O.; Podlipenko, Y. Guaranteed a posteriori estimation of unknown right-hand sides of linear periodic systems of ODEs. Appl. Anal. 2021. [Google Scholar] [CrossRef]
- Nakonechnyi, O.; Podlipenko, Y.; Shestopalov, Y. Guaranteed a posteriori estimation of uncertain data in exterior Neumann problems for Helmholtz equation from inexact indirect observations of their solutions. Inverse Probl. Sci. Eng. 2021, 29, 525–535. [Google Scholar] [CrossRef]
- Alekseev, V.M.; Tikhomirov, V.M.; Fomin, S.V. Optimal Control; Springer Science + Business Media: New York, NY, USA, 1987. [Google Scholar]
- Hutson, V.; Pym, J.; Cloud, M. Applications of Functional Analysis and Operator Theory, 2nd ed.; Elsevier: Amsterdam, The Netherlands, 2005. [Google Scholar]
- Bainov, D.D.; Simeonov, P.S. Impulsive Differential Equations. Asymptotic Properties of the Solutions; World Scientific: Singapore, 1995. [Google Scholar]
- Lions, J.L. Optimal Control of Systems Described by Partial Differential Equations; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1971. [Google Scholar]
- Ioffe, A.D.; Tikhomirov, V.M. Theory of Extremal Problems; North-Holland Publishing Company: Amsterdam, The Netherlands; New York, NY, USA; Oxford, UK, 1979. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).