1. Introduction
The wavelet canonical expansion (WLCE) of stochastic processes (StPs) is built from the expansion coefficients of the covariance function over a two-dimensional orthonormal basis of compactly supported wavelets [1]. Methods for the linear analysis and synthesis of StPs in nonstationary linear observable stochastic systems (LOStSs) were developed in [2,3,4] for Bayes' criteria (BC). Computer experiments confirmed the high efficiency of these algorithms for a small number of terms in the WLCE. For scalar nonstationary nonlinear observable stochastic systems (NLOStSs), exact methods based on canonical expansions (CEs) were developed in [2,3,4]. In practice, quality analysis of NLOStSs based on CEs and WLCEs increases the computational flexibility and accuracy of the corresponding stochastic numerical technologies.
This article is devoted to optimization problems for nonstationary NLOStSs solved by the WLCE method. Section 2 states the problem for Bayes' criteria in terms of risk theory. The common BC algorithm of the WLCE method with compactly supported wavelets is developed in Section 3. In Section 4, particular algorithms for three BC (minimum mean square error, damage accumulation and probability of the error exiting given limits) are presented. An approximate algorithm based on the method of statistical linearization is given in Section 5. Section 6 contains three test examples illustrating the accuracy of the developed algorithms for nonlinear functions.
The described algorithms are very useful for the optimal BC quality analysis of complex NLOStSs in the presence of internal and external noises and of stochastic factors described by CEs and WLCEs. The corresponding comparative analysis is given in [2,3,4].
2. Problem Statement
Let the scalar real input StP be presented as the sum of the useful signal and a Gaussian random additive noise, Equation (1). Here the useful signal is a nonlinear function of time t and of a vector of random parameters with a known density. At the output we need to obtain an StP given by Equation (2), in which a known nonlinear transform of the useful signal is corrupted by a second Gaussian random additive noise. Both noises are independent of the vector of random parameters.
The choice of criteria for comparing alternative systems serving the same purpose is, like any question of criterion selection, largely a matter of common sense; it can often be approached by considering the operating conditions and the purpose of the concrete system.
Thus, we obtain the following general principle for estimating the quality of a system and selecting the criterion of optimality [2,3,4]. The quality of the solution in each actual case is estimated by a loss function whose value is determined by the actual realizations of the signal and of its estimator, where A is the optimal operator.
The criterion of the maximum probability that the error will not exceed particular limits can be represented in the form (3) if we take the loss function as the characteristic function of the corresponding set of error values. In applications connected with damage accumulation it is necessary to employ (3) with the loss function l taken in a suitable accumulation form.
The quality of the solution of the problem on average for a given realization of the signal, over all possible realizations of the estimator corresponding to that realization, is estimated by the conditional mathematical expectation of the loss function for the given realization of the signal. This quantity is called the conditional risk. The conditional risk depends on the operator defining the estimator and on the realization of the signal. Finally, the average quality of the solution over all possible realizations of the signal and of its estimator is characterized by the mathematical expectation of the conditional risk. This quantity is called the mean risk.
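As an illustration (not part of the original derivation), the mean risk can be estimated by Monte Carlo for a hypothetical quadratic loss and a toy signal model; the choices below (y = u² with Gaussian u, a constant estimator) are illustrative assumptions, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(y, y_hat):
    # Illustrative quadratic loss l(y, y_hat) = (y - y_hat)^2.
    return (y - y_hat) ** 2

# Toy model: useful signal y = u^2 with u ~ N(0, 1); constant estimator
# y_hat = E[y] = 1 (the unconditional mean).
u = rng.standard_normal(100_000)
y = u ** 2
y_hat = 1.0

# Mean risk = mathematical expectation of the loss over all realizations.
mean_risk = loss(y, y_hat).mean()
# For y = u^2 with u ~ N(0,1), the exact mean risk is Var(chi^2_1) = 2.
```

For this toy model the Monte Carlo estimate converges to the analytical value 2, the variance of a chi-square variable with one degree of freedom.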
All minimum-risk criteria corresponding to admissible loss functions or functionals, which may contain undetermined parameters, are known as Bayes' criteria.
So it is sufficient to find the system operators which minimize the conditional mathematical expectation of the loss function at the time moment t for each realization of the observed StP on the given time interval. For the solution of this problem we use the WLCE [1]. First, we find the conditional density of the useful output (or of the vector of random parameters and the random noise) relative to the observable StP.
3. Common Algorithm of the WLCE Method
As is known from [4], for the noises we use WLCEs with common random variables, Formulas (9) and (10), where, for all indices, the expansion random variables are uncorrelated with zero mean and known variances; the coordinate functions are defined for all indices; the real weighting functions satisfy Equation (12); and the indicator factor equals one inside the corresponding interval and zero otherwise.
For the input StP we construct the WLCE by Formulas (13)–(15). From (10) and (14) we obtain presentations showing that the input StP depends on the vector of random parameters and on all sets of the expansion random variables.
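The structure of a canonical expansion, a sum of uncorrelated random variables weighted by coordinate functions, can be illustrated with a small numerical sketch; the cosine coordinate functions and the variances below are illustrative assumptions, not the expansion constructed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 64)

# Hypothetical coordinate functions x_k(t) and variances D_k of the
# uncorrelated random variables V_k of a zero-mean canonical expansion
# X(t) = sum_k V_k x_k(t).
coord = np.array([np.cos(np.pi * k * t) for k in range(1, 5)])  # x_k(t)
D = np.array([1.0, 0.5, 0.25, 0.125])                           # Var V_k

# Covariance implied by the expansion: K(t1, t2) = sum_k D_k x_k(t1) x_k(t2).
K = (coord.T * D) @ coord

# Simulate realizations and compare the sample covariance with K.
n = 50_000
V = rng.standard_normal((n, 4)) * np.sqrt(D)
X = V @ coord
K_hat = (X.T @ X) / n

max_err = np.abs(K_hat - K).max()
```

The sample covariance of the simulated realizations reproduces the covariance implied by the expansion, which is the defining property of a canonical expansion.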
In the case of Gaussian independent noises, the WLCE of the output noise does not depend on the WLCE of the input noise. Consequently, the coordinate functions are expressed via the coefficients of the WLE of the input signal. Finally, the conditional density of the vector of random parameters relative to the observed StP coincides with its conditional density relative to the set of expansion random variables and, on the given time interval, is given by Formula (19), with parameters specified below.
Theorem 1. Let the following conditions hold for the stochastic system (1), (2):
- (1) The covariance function of the random noises is known and belongs to the corresponding function space;
- (2) The joint covariance function of the random noise and the useful signal is known and belongs to the corresponding function space;
- (3) The nonlinear function satisfies the required integrability condition relative to the time variable, the random vector being treated as a parameter.

Then the unknown parameters in (19) for the conditional density of the vector of random parameters relative to the observed StP are expressed in terms of the coefficients of the wavelet expansions of the given functions over the selected wavelet bases.
Proof of Theorem 1. In the underlying function space we fix an orthonormal wavelet basis with compact support [5,6], consisting of the scaling function, the mother wavelet and the wavelets of each resolution level up to the maximal level of resolution, Formulas (20) and (21). The wavelet basis may be rewritten in the form (22).
In the corresponding two-dimensional function space we fix the two-dimensional orthonormal wavelet basis as a tensor product, the construction being performed identically in the two variables. Then we construct the two-dimensional wavelet expansion (WLE) of the covariance function, Formula (23). The variances of the expansion random variables are calculated according to the recurrent Formulas (24) and (25) via auxiliary coefficients, and the parameters are expressed by means of the WLE coefficients of the covariance function. The remaining coefficient is obtained analogously.
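A two-dimensional WLE of a covariance function over an orthonormal Haar basis can be sketched numerically; the exponential covariance below and the discrete Haar-matrix construction are illustrative assumptions, not the paper's Formulas (23)–(25):

```python
import numpy as np

def haar_matrix(n):
    # Orthonormal discrete Haar basis matrix for n = 2^J samples on [0, 1):
    # rows are the scaling function and the wavelets of successive levels.
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # coarser-level functions
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # finest-level wavelets
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 16
t = (np.arange(n) + 0.5) / n
K = np.exp(-np.abs(t[:, None] - t[None, :]))  # example covariance function

H = haar_matrix(n)        # orthonormal: H @ H.T = I
C = H @ K @ H.T           # two-dimensional wavelet expansion coefficients
K_rec = H.T @ C @ H       # reconstruction from the coefficients

orth_err = np.abs(H @ H.T - np.eye(n)).max()
rec_err = np.abs(K_rec - K).max()
```

Because the basis is orthonormal, the full set of two-dimensional coefficients reconstructs the sampled covariance exactly; truncating `C` at a lower resolution level yields the approximations used in the WLCE method.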
The coordinate functions are defined by the recurrent Formulas (26) and (27), which use auxiliary functions. The real weighting functions satisfying conditions (11) and (12) are expressed in terms of the basic wavelet functions. The coordinate functions for the joint covariance function are defined by the same Formulas (26) and (27).
Considering u as a parameter, we obtain the WLE of the nonlinear function, Formulas (29) and (30), or, in the notation of (21), Formula (31). In the general case we thus obtain the final expressions for the parameters of the density (19). Theorem 1 is proved. □
In the case of the given density (19) we calculate the conditional risk, Formula (32). For optimal synthesis it is necessary to find the optimal output StP at a given time moment from the condition of the minimum of the integral (32). Let us consider this integral as a function of a scalar variable at a fixed time and a fixed realization of the observed StP. The value of the parameter at which the integral reaches its minimum defines the optimal operator in the case of Bayes' criterion (3). Replacing the fixed realizations by the corresponding random variables, we obtain the optimal operator, Formula (33). The quality of the optimal operator is numerically evaluated on the basis of the mean risk by the known formula from [2,3,4].
The common algorithm of the WLCE method for NLOStS synthesis using Bayes' criteria consists of four steps (Algorithm 1).
Algorithm 1 Common Synthesis of NLOStS by WLCE
- 1: Construction of WLCEs for the random noises according to Formulas (9) and (10):
  - – Specifying wavelet bases with compact support by Formulas (20)–(22);
  - – Two-dimensional WLE of the covariance functions according to Formula (23);
  - – Calculation of the variances of the independent random variables of the WLCE of the random noise by Formulas (24) and (25);
  - – Definition of the coordinate functions according to the recurrent Formulas (26) and (27).
- 2: Construction of the WLCE for the input StP according to Formulas (13)–(15):
  - – WLE of the nonlinear function relative to the time variable according to Formulas (29) and (30);
  - – Calculation of the density parameters according to Formula (31).
- 3: Finding the conditional density of the vector of random parameters relative to the observed StP by Formula (19) and the conditional risk by Formula (32).
- 4: Determination of the minimum of the integral in (32) as a function of the estimate for any time moment; construction of Bayes' optimal estimate according to Formula (33).
4. Synthesis of NLOStSs for Particular Optimal Criteria
Let us consider the following optimality criteria: (i) minimum mean square error; (ii) damage accumulation; (iii) probability of the error exiting given limits.
In case (i) the loss function and the conditional risk are described by the corresponding formulae, the conditional risk being the integral (36). For the optimal estimate it is necessary to minimize (36) with respect to the estimate parameter. The solution of the corresponding Euler equation [7] gives the explicit expression (39) for this parameter. The right-hand side of (39) is the conditional mathematical expectation of the useful output StP relative to the input StP; consequently, the optimal estimate of the output StP under the mean square error criterion is given by Formula (40).
For criterion (ii) we obtain analogous formulae, with the optimal estimate again given by (40).
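The characterization behind (39) and (40), namely that the conditional mathematical expectation minimizes the quadratic conditional risk, can be checked numerically on a discretized stand-in density; the Gaussian density below is an illustrative assumption:

```python
import numpy as np

# Discrete stand-in for a conditional density p(y | x) on a grid.
y = np.linspace(-4.0, 6.0, 2001)
dy = y[1] - y[0]
p = np.exp(-0.5 * (y - 1.0) ** 2)   # unnormalized Gaussian with mean 1
p /= p.sum() * dy                   # normalize on the grid

cond_mean = (y * p).sum() * dy      # conditional expectation E[Y | X]

# Conditional risk r(c) = integral of (y - c)^2 p(y|x) dy over candidates c.
c = np.linspace(-2.0, 4.0, 601)
risk = ((y[None, :] - c[:, None]) ** 2 * p).sum(axis=1) * dy
c_opt = c[risk.argmin()]            # numerical minimizer of the risk
```

Up to grid resolution, the minimizer of the conditional risk coincides with the conditional mean, in agreement with the Euler-equation argument.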
For criterion (iii) we obtain the corresponding formulae, and the equation for the optimal estimate takes the form (48). The solution of (48) gives the value of the estimate for which the conditional density takes equal values at the two boundary points of the admissible error interval. So the optimal operator is defined by (47).
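The equal-density boundary condition for criterion (iii) can likewise be verified numerically; the Gaussian conditional density and the band half-width `eps` below are illustrative assumptions:

```python
import numpy as np

eps = 0.5
y = np.linspace(-5.0, 5.0, 4001)
dy = y[1] - y[0]
p = np.exp(-0.5 * y ** 2) / np.sqrt(2 * np.pi)  # conditional density, mode 0

# Loss = 1 when the error leaves the band |y - c| > eps; conditional risk
# r(c) = 1 - P(|Y - c| <= eps).  Scan candidate estimates c for the minimum.
c = np.linspace(-2.0, 2.0, 801)
inside = np.abs(y[None, :] - c[:, None]) <= eps
risk = 1.0 - (p[None, :] * inside).sum(axis=1) * dy
c_opt = c[risk.argmin()]

# At the optimum the density takes (nearly) equal values at the band edges.
edge_gap = abs(np.interp(c_opt - eps, y, p) - np.interp(c_opt + eps, y, p))
```

For this symmetric density the minimizer sits at the mode, where the density values at the two band edges coincide, matching the boundary condition derived from (48).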
5. Approximate Algorithm Based on Statistical Linearization
Let us apply the method of statistical linearization (MSL) [2,3,4] to the NLOStS (1), (2) for a Gaussian random parameter vector with density (49), defined by the elements of the mean vector and by the elements of the inverse of the covariance matrix.
First, according to the MSL, we replace the nonlinear function in Equation (1) with the linear function (50), whose coefficients are determined from the mean square error approximation condition [2,3,4]. With this notation we replace Equation (1) with the linear Equation (52). Under this condition, using the WLCE method of linear synthesis [2,3,4], we obtain Equations (53) and (54), with the structural functions given by (57) and (58). So we obtain the WLE (59) or its equivalent in the notation of (21). As a result, Equations (53) and (54) may be rewritten in the form (55).
From Equations (10) and (55) we have the corresponding presentation for the noise. Taking Equation (2) into consideration, we conclude that, for Gaussian noises, the WLCE of the output does not depend on the WLCEs of the random noises.
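The mean-square-optimal linearization used in (50) replaces a nonlinearity φ(u) by φ₀ + k₁(u − m); for a Gaussian argument the optimal coefficients are the mean of φ(U) and the regression coefficient Cov(φ(U), U)/Var(U). A Monte Carlo sketch, in which the test function φ(u) = u³ and the names `phi0`, `k1` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def statistical_linearization(phi, m, sigma, n=400_000):
    # Mean-square-optimal linearization phi(u) ~ phi0 + k1 * (u - m)
    # for Gaussian U ~ N(m, sigma^2), estimated by Monte Carlo:
    #   phi0 = E[phi(U)],  k1 = Cov(phi(U), U) / Var(U).
    u = rng.normal(m, sigma, n)
    f = phi(u)
    phi0 = f.mean()
    k1 = np.cov(f, u)[0, 1] / u.var(ddof=1)
    return phi0, k1

# Example: phi(u) = u^3 with U ~ N(0, 1); exact values phi0 = 0, k1 = 3.
phi0, k1 = statistical_linearization(lambda u: u ** 3, 0.0, 1.0)
```

For φ(u) = u³ and a standard Gaussian argument the exact coefficients are φ₀ = 0 and k₁ = E[U⁴] = 3, which the Monte Carlo estimates reproduce.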
The conditional joint density of the random parameters relative to the expansion random variables (or to the observed StP) and the conditional mathematical expectation of Bayes' loss function are expressed by the known Formulas (65) and (66) [2,3,4].
Theorem 2. Let the following conditions hold for the stochastic system (1), (2):
- (1) The covariance function of the random noises is known and belongs to the corresponding function space;
- (2) The joint covariance function of the random noise and the useful signal is known and belongs to the corresponding function space;
- (3) The random parameter vector has the normal probability density (49);
- (4) The nonlinear function is approximated by a function linear in the random parameters according to the statistical linearization method, in the form (50);
- (5) The conditional probability density of the random parameters relative to the observed StP (or relative to the set of expansion random variables according to Formulas (53)–(55)) is approximated by Formulas (65) and (67).

Then we obtain an approximate optimal estimate of the output StP for the three criteria (minimum mean square error, damage accumulation and probability of the error exiting given limits), expressed via the conditional expectation and the conditional covariance matrix of the random parameters relative to the observed StP.
Proof of Theorem 2. As mentioned in Section 3, the value of the parameter at which the integral (66) reaches its minimum defines the optimal Bayes operator. Replacing the fixed realizations by the corresponding random variables, we obtain the optimal system operator. In Section 4 it was shown that, for the three criteria, the optimal estimate of the output (2) based on the observed input (1) equals the conditional mathematical expectation; so the corresponding expressions are valid.
Due to the Gaussian distribution of the variables, the conditional density is Gaussian. With suitable notation we obtain the approximate presentation (71) of (69), in which the conditional expectation and the conditional covariance matrix appear. In this notation, presentation (71) takes the form (72).
Equating to zero the partial derivative in (65), we obtain the equation determining the conditional mathematical expectation. The resulting system of linear algebraic equations may be rewritten in the matrix form (75). Solving Equation (75) and introducing suitable notation, we obtain the approximate optimal estimate (79). The remaining parameters are computed according to the known Formulas (77) and (80) [4,5,6].
Theorem 2 is proved. □
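The reduction of the approximate optimal estimate to a Gaussian conditional mean obtained by solving a linear system, as in (75), can be sketched as follows; all numerical values and the names `S_VV`, `S_VU` are illustrative assumptions:

```python
import numpy as np

# Jointly Gaussian (U, V): the approximate optimal Bayes estimate reduces
# to the Gaussian conditional mean, obtained from a linear system
#   Sigma_VV a = Sigma_VU,   E[U | V = v] = m_U + a . (v - m_V).
m_U = 1.0
m_V = np.array([0.0, 0.0])
S_VV = np.array([[2.0, 0.5],
                 [0.5, 1.0]])       # covariance of the observations V
S_VU = np.array([0.8, 0.3])         # cross-covariance of V with U

a = np.linalg.solve(S_VV, S_VU)     # solve the matrix-form linear system
v = np.array([1.0, -1.0])           # an observed realization
u_hat = m_U + a @ (v - m_V)         # conditional-mean estimate
```

Solving the system rather than inverting the covariance matrix is the standard numerically stable route; the same structure appears whenever the joint Gaussian approximation of condition (5) of Theorem 2 is in force.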
So we arrive at the following approximate algorithm for constructing the optimal operator on the basis of the MSL and WLCE methods (Algorithm 2).
Algorithm 2 Synthesis of NLOStS by MSL and WLCE
- 1: Approximation of the nonlinear function by the linear function (50) and presentation of the input StP in the form (52).
- 2: Specifying wavelet bases with compact support by Formulas (20)–(22).
- 3: Presentation of the structural functions in the forms (57) and (58) and two-dimensional WLE of the covariance functions according to Formula (23).
- 4: Calculation of the variances of the independent random variables of the WLCE of the random noise by Formulas (24) and (25).
- 5: Definition of the coordinate functions according to the recurrent Formulas (26) and (27).
- 6: Calculation according to Formula (59).
- 7: Fixation of the random variables using Formula (55).
- 8: Calculation of the conditional density parameters by Formulas (67) and (72).
- 9: Definition of the parameters by (77) and (80) for the concrete nonlinear function.
- 10: Construction of Bayes' optimal estimate according to Formula (79).
7. Conclusions
Algorithms for the optimal synthesis of nonstationary nonlinear StSs (Pugachev's Equations (1) and (2)) based on canonical expansions of stochastic processes are well developed and applied [4]. Algorithm 1 (Theorem 1) is oriented toward Gaussian noises and non-Gaussian parameters. Algorithm 2 (Theorem 2) is valid for Gaussian parameters and noises. Corresponding algorithms based on wavelet canonical expansions have so far been worked out only for observable linear StSs. For non-Gaussian parameters and noises we recommend the use of CEs with independent non-Gaussian components [4].
An important issue for the considered nonstationary stochastic systems is obtaining faster convergence. The structural functions and the covariance functions of the noises in our approaches require one- and two-dimensional function spaces. We use Haar wavelets because of their simplicity and analytical expressions. It is known that wavelet expansions based on Haar wavelets converge poorly compared to expansions based on, for example, Daubechies wavelets. We suggest two ways to improve the convergence speed: (i) increasing the maximal level of resolution of the fixed wavelet bases; (ii) choosing a different type of compactly supported basis. Computer experiments in Examples 1–3 confirm good engineering accuracy even for two resolution levels and eight terms of the canonical expansions.
For the described NLOStSs, the stochastic difference and differential equations corresponding to the algorithms are given in [2,3,4]. Applications include:
- – Approximate linear and equivalent linearized model building for observable nonstationary nonlinear stochastic systems;
- – Bayes' criterion optimization and the calibration of parameters in complex quality measurement and control systems;
- – Estimation of the potential efficiency of nonstationary nonlinear stochastic systems.
Directions of future generalizations and implementations include:
- – New models of scalar- and vector-observable stochastic systems (nonlinear, with additive and parametric noises, etc.);
- – New classes of Bayes' criteria (integral, functional, mixed);
- – Implementation of wavelet-integral canonical expansions for hereditary stochastic systems.
This research was carried out using the infrastructure of the Shared Research Facilities “High Performance Computing and Big Data” (CKP “Informatics”) of FRC CSC RAS (Moscow, Russia).