Article

Synthesis of Nonlinear Nonstationary Stochastic Systems by Wavelet Canonical Expansions

by Igor Sinitsyn 1,2, Vladimir Sinitsyn 1,2, Eduard Korepanov 1 and Tatyana Konashenkova 1,*
1 Federal Research Center “Computer Science and Control”, Russian Academy of Sciences (FRC CSC RAS), 119333 Moscow, Russia
2 Moscow Aviation Institute, National Research University, 125993 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2059; https://doi.org/10.3390/math11092059
Submission received: 20 March 2023 / Revised: 18 April 2023 / Accepted: 21 April 2023 / Published: 26 April 2023
(This article belongs to the Special Issue Mathematical Modeling, Optimization and Machine Learning)

Abstract: The article is devoted to Bayes optimization problems for nonlinear observable stochastic systems (NLOStSs) based on wavelet canonical expansions (WLCEs). The input and output stochastic processes (StPs) of the considered nonlinear StSs depend on random parameters and additive independent Gaussian noises. For stochastic synthesis we use the Bayes approach with a given loss function and the minimum-risk condition. WLCEs are formed by the covariance function expansion coefficients over a two-dimensional orthonormal basis of wavelets with a compact carrier. New results: (i) a common Bayes criteria synthesis algorithm for NLOStSs by WLCE is presented; (ii) particular synthesis algorithms for three Bayes criteria (minimum mean square error, damage accumulation and probability of error exit outside the limits) are given; (iii) an approximate algorithm based on statistical linearization is developed; (iv) three test examples are considered. Applications: wavelet optimization and parameter calibration in complex measurement and control systems. Some generalizations are formulated.

1. Introduction

The wavelet canonical expansion (WLCE) of a stochastic process (StP) is formed by the covariance function expansion coefficients over a two-dimensional orthonormal basis of wavelets with a compact carrier [1]. Methods of linear analysis and synthesis of StPs in nonstationary linear observable stochastic systems (LOStSs) were developed in [2,3,4] for Bayes’ criteria (BC). Computer experiments confirmed the high efficiency of the algorithms for a small number of terms in the WLCE. For scalar nonstationary nonlinear observable stochastic systems (NLOStSs), exact methods based on canonical expansions (CEs) were developed in [2,3,4]. In practice, the quality analysis of NLOStSs based on CEs and WLCEs increases the computational flexibility and accuracy of the corresponding stochastic numerical technologies.
This article is devoted to optimization problems for nonstationary NLOStSs by WLCE. Section 2 gives a problem statement for Bayes’ criteria in terms of risk theory. The common BC algorithm of the WLCE method with a compact carrier is developed in Section 3. In Section 4, particular algorithms for three BC (minimum mean square error, damage accumulation and probability of error exit outside the limits) are presented. An approximate algorithm based on the method of statistical linearization is given in Section 5. Section 6 contains three test examples illustrating the accuracy of the developed algorithms for nonlinear functions.
The described algorithms are very useful for the optimal BC quality analysis of complex NLOStSs in the presence of internal and external noises and of stochastic factors described by CEs and WLCEs. A corresponding comparative analysis is given in [2,3,4].

2. Problem Statement

Let the scalar real input StP $Z(t)$ be presented as the sum of the useful signal $\Phi(t,U)$ and Gaussian additive random noise $X(t)$:
$$Z(t) = \Phi(t,U) + X(t). \quad (1)$$
Here $\Phi(t,U)$ is a nonlinear function of time $t$ and of a vector of random parameters $U = [U_1, U_2, \dots, U_N]^T$ with density $f(u)$. At the output we need to obtain the StP $W(t)$ as follows:
$$W(t) = \Psi(t,U) + Y(t), \quad (2)$$
where $\Psi(t,U) = P_t\,\Phi(t,U)$ is the known nonlinear transform of the useful signal $\Phi(t,U)$, and $Y(t)$ is Gaussian additive random noise. The noises $X(t)$ and $Y(t)$ are independent of $U$.
The choice of a criterion for comparing alternative systems intended for the same purpose is, like any question of criterion choice, largely a matter of common sense; it can often be approached from consideration of the operating conditions and the purpose of the concrete system.
Thus, we obtain the following general principle for estimating the quality of a system and selecting the criterion of optimality [2,3,4]. The solution quality of the problem in each actual case is estimated by a loss function l ( W , W * ) , the value of which is determined by the actual realizations of the signal W and its estimator W * = A Z , where A is the optimal operator.
In particular, the criterion of the maximum probability that the error will not exceed a given value can be represented in the form of the minimum-risk condition
$$E[l(W, W^*)] = \min. \quad (3)$$
If we take the function $l$ to be the characteristic function of the corresponding set of error values, the loss function
$$l(W,W^*)=\begin{cases}1 & \text{at } |W^*-W|\ge w(t),\\ 0 & \text{at } |W^*-W|< w(t),\end{cases} \quad (4)$$
is obtained. In applications connected with damage accumulation it is necessary to employ (3) with the function $l$ in the form
$$l(W,W^*)=1-e^{-k^2(W^*-W)^2}. \quad (5)$$
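The two loss functions above can be written directly as code; a minimal sketch, in which the tolerance `w_t` and the damage parameter `k` are illustrative values, not taken from the article:

```python
import numpy as np

# The threshold loss and the damage-accumulation loss as functions; w_t and k
# below are illustrative stand-ins for w(t) and the article's parameter k.
def loss_threshold(W_star, W, w_t):
    """1 if the error leaves the band |W* - W| >= w(t), else 0."""
    return np.where(np.abs(np.asarray(W_star) - W) >= w_t, 1.0, 0.0)

def loss_damage(W_star, W, k):
    """Damage-accumulation loss 1 - exp(-k^2 (W* - W)^2)."""
    return 1.0 - np.exp(-k**2 * (np.asarray(W_star) - W)**2)

errors = np.array([0.0, 0.1, 0.5, 2.0])
hard = loss_threshold(errors, 0.0, 0.5)   # 0/1 indicator of leaving the band
soft = loss_damage(errors, 0.0, 1.0)      # smooth loss saturating at 1
```

Under the minimum-risk condition (3), the expectation of the first loss reproduces the probability-of-exit criterion, and of the second the damage-accumulation criterion.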
The quality of the solution of the problem on average for a given realization of the signal $W$, over all possible realizations of the estimator $W^*$ corresponding to it, is estimated by the conditional mathematical expectation of the loss function given the observation:
$$\rho(A\,|\,Z) = E[\,l(W, W^*)\,|\,Z\,]. \quad (6)$$
This quantity is called the conditional risk. The conditional risk depends on the operator $A$ defining the estimator $W^*$ and on the realization of the observation. Finally, the average quality of the solution over all possible realizations of $W$ and of its estimator $W^*$ is characterized by the mathematical expectation of the conditional risk,
$$R(A) = E[\rho(A\,|\,Z)] = E[l(W, W^*)]. \quad (7)$$
This quantity is called the mean risk.
All minimum-risk criteria corresponding to the possible loss functions or functionals (which may contain undetermined parameters) are known as Bayes’ criteria.
So it is sufficient to find the system operator $A$ which minimizes the conditional mathematical expectation of the loss function at the time moment $t$ for each realization $z(\tau)$ of the observed StP $Z(\tau)$ on the time interval $[t-T, t]$, $T > 0$:
$$E[\,l(W, W^*)\,|\,z\,] = \min. \quad (8)$$
To solve this problem we use the WLCE [1]. At first, we find the conditional density of the useful output $W(t)$ (or of the vector $U$ and the random noise $Y(t)$) relative to the observable StP $Z(\tau)$.

3. Common Algorithm of the WLCE Method

As is known [4], for the noises $X(\tau)$ and $Y(\tau)$ we use WLCEs with common random variables $V_\nu^x$:
$$X(\tau) = \sum_\nu V_\nu^x\, x_\nu(\tau), \quad (9)$$
$$Y(\tau) = \sum_\nu V_\nu^x\, y_\nu(\tau), \quad (10)$$
$$V_\nu^x = \int_{t-T}^{t} a_\nu(\tau)\, X(\tau)\, d\tau, \quad (11)$$
and
$$\int_{t-T}^{t} a_\mu(\tau)\, x_\nu(\tau)\, d\tau = \delta_{\nu\mu}, \quad (12)$$
where the $V_\nu^x$ are uncorrelated random variables with zero mean and variances $D_\nu^x$; $x_\nu(\tau)$ and $y_\nu(\tau)$ are coordinate functions; the $a_\nu(\tau)$ are real functions satisfying Equation (12); and $\delta_{\nu\mu} = 1$ if $\nu = \mu$, otherwise $\delta_{\nu\mu} = 0$.
For the input StP $Z(\tau)$ we construct the WLCE by the following formulae:
$$Z(\tau) = \sum_\nu Z_\nu\, x_\nu(\tau), \quad (13)$$
$$Z_\nu = \int_{t-T}^{t} a_\nu(\tau)\,\Phi(\tau,U)\, d\tau + V_\nu^x = \alpha_\nu(U) + V_\nu^x, \quad (14)$$
and
$$\int_{t-T}^{t} a_\nu(\tau)\,\Phi(\tau,U)\, d\tau = \alpha_\nu(U). \quad (15)$$
From (10) and (14) we obtain the following presentations:
$$Y(\tau, U) = \sum_\nu \big[Z_\nu - \alpha_\nu(U)\big]\, y_\nu(\tau), \quad (16)$$
$$W(\tau, U) = \Psi(\tau, U) + \sum_\nu \big[Z_\nu - \alpha_\nu(U)\big]\, y_\nu(\tau). \quad (17)$$
So the StP $W(\tau)$ depends upon $U$ and upon the whole set of the $Z_\nu$.
In the case of Gaussian independent noises $X(\tau)$ and $Y(\tau)$, the WLCE of $W(\tau)$ does not depend upon the WLCE of $Y(\tau)$. Consequently, the coordinate functions $y_\nu(\tau)$ are expressed via the coefficients of the WLE of $K_{XY}(\tau_1, \tau_2)$. So we obtain the following result:
$$W(\tau, U) = \Psi(\tau, U). \quad (18)$$
At last, the conditional density of the vector $U$ relative to the StP $Z(\tau)$ on the time interval $[t-T, t]$ coincides with the conditional density of $U$ relative to the $Z_\nu$ and is equal to
$$f_1(u\,|\,z_1, z_2, \dots, z_L) = \chi(z)\, f(u)\exp\Big[\sum_\nu \frac{z_\nu\alpha_\nu(u)}{D_\nu^x} - \frac12\sum_\nu \frac{\alpha_\nu^2(u)}{D_\nu^x}\Big], \quad (19)$$
where $\chi(z) = \Big\{\displaystyle\int_{-\infty}^{+\infty} f(u)\exp\Big[\sum_\nu \frac{z_\nu\alpha_\nu(u)}{D_\nu^x} - \frac12\sum_\nu \frac{\alpha_\nu^2(u)}{D_\nu^x}\Big] du\Big\}^{-1}$.
Theorem 1.
Let the following conditions hold for the stochastic system (1), (2):
(1) the covariance function $K_X(\tau_1,\tau_2)$ of the random noise $X(\tau)$ is known and belongs to the space $L_2([t-T,t]\times[t-T,t])$;
(2) the joint covariance function $K_{XY}(\tau_1,\tau_2)$ of the random noises $X(\tau)$ and $Y(\tau)$ is known and belongs to the space $L_2([t-T,t]\times[t-T,t])$;
(3) the function $\Phi(\tau,u)\in L_2[t-T,t]$ relative to the variable $\tau$, with $u$ as a parameter.
Then the unknown parameters $\alpha_\nu(u)$ and $D_\nu^x$ in (19) for the conditional density of the vector $U$ relative to the StP $Z(\tau)$ are expressed in terms of the coefficients of the wavelet expansions of the given functions $K_X(\tau_1,\tau_2)$, $K_{XY}(\tau_1,\tau_2)$, $\Phi(\tau,u)$ over the selected wavelet bases.
Proof of Theorem 1.
In the space $L_2[t-T,t]$ we fix an orthonormal wavelet basis with a compact carrier [5,6],
$$\varphi_{00}(\tau),\ \psi_{jk}(\tau), \quad (20)$$
where $\varphi_{00}(\tau)=\varphi(\tau)$ is the scaling function, $\psi_{00}(\tau)=\psi(\tau)$ is the mother wavelet, $\psi_{jk}(\tau)=2^{j/2}\psi(2^j\tau-k)$ are the wavelets of level $j$ ($j=1,2,\dots,J$; $k=0,1,\dots,2^j-1$), and $J$ is the maximal level of resolution.
The wavelet basis may be rewritten in the following form:
$$g_1(\tau)=\varphi_{00}(\tau),\quad g_2(\tau)=\psi_{00}(\tau),\quad g_\nu(\tau)=\psi_{jk}(\tau),$$
$$j=1,2,\dots,J;\quad k=0,1,\dots,2^j-1;\quad \nu=2^j+k+1;\quad \nu=3,4,\dots,L;\quad L=2^{J+1}. \quad (21)$$
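The enumeration (21) can be checked numerically. A minimal sketch for Haar wavelets (used in the article's examples and chosen here for simplicity; the interval length is an illustrative value), confirming that $L = 2^{J+1}$ functions are obtained and that they are orthonormal:

```python
import numpy as np

def haar_basis(J, a, b, n=4096):
    """Enumerate an orthonormal Haar basis g_1, ..., g_L on [a, b), with
    L = 2**(J + 1), following the indexing nu = 2**j + k + 1 of (21)."""
    t = np.linspace(a, b, n, endpoint=False)
    T = b - a
    s = (t - a) / T                          # rescale to [0, 1)
    phi = np.ones(n) / np.sqrt(T)            # scaling function phi_00

    def psi(j, k):
        x = 2**j * s - k
        w = np.where((0 <= x) & (x < 0.5), 1.0,
                     np.where((0.5 <= x) & (x < 1.0), -1.0, 0.0))
        return 2**(j / 2) * w / np.sqrt(T)   # orthonormal Haar wavelet psi_jk

    g = [phi, psi(0, 0)]                     # g_1 = phi_00, g_2 = psi_00
    for j in range(1, J + 1):
        for k in range(2**j):                # lands at list index nu - 1 = 2**j + k
            g.append(psi(j, k))
    return t, np.array(g)

t, g = haar_basis(J=2, a=0.0, b=8.0)         # illustrative interval of length 8
L = g.shape[0]                               # L = 2**(J + 1) = 8
dt = t[1] - t[0]
G = g @ g.T * dt                             # Gram matrix of the basis
```

The Gram matrix `G` equals the identity (here exactly, since Haar breakpoints fall on the dyadic grid), confirming the orthonormality assumed throughout the construction.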
In the space $L_2([t-T,t]\times[t-T,t])$ we fix the two-dimensional orthonormal wavelet basis in the form of tensor products, the dilation being performed identically in the two variables:
$$\Phi^A(\tau_1,\tau_2)=\varphi_{00}(\tau_1)\varphi_{00}(\tau_2),\qquad \Psi^H(\tau_1,\tau_2)=\varphi_{00}(\tau_1)\psi_{00}(\tau_2),$$
$$\Psi^B(\tau_1,\tau_2)=\psi_{00}(\tau_1)\varphi_{00}(\tau_2),\qquad \Psi^D_{jkn}(\tau_1,\tau_2)=\psi_{jk}(\tau_1)\psi_{jn}(\tau_2), \quad (22)$$
where $j=1,2,\dots,J$; $k,n=0,1,\dots,2^j-1$.
Then in the space $L_2([t-T,t]\times[t-T,t])$ we construct the following two-dimensional wavelet expansion (WLE) of the covariance function $K_X(\tau_1,\tau_2)$:
$$K_X(\tau_1,\tau_2)=a^x\Phi^A(\tau_1,\tau_2)+h^x\Psi^H(\tau_1,\tau_2)+b^x\Psi^B(\tau_1,\tau_2)+\sum_{j=0}^{J}\sum_{k=0}^{2^j-1}\sum_{n=0}^{2^j-1}d^x_{jkn}\Psi^D_{jkn}(\tau_1,\tau_2), \quad (23)$$
where
$$a^x=\int_{t-T}^{t}\!\!\int_{t-T}^{t}K_X(\tau_1,\tau_2)\Phi^A(\tau_1,\tau_2)\,d\tau_1d\tau_2,\qquad h^x=\int_{t-T}^{t}\!\!\int_{t-T}^{t}K_X(\tau_1,\tau_2)\Psi^H(\tau_1,\tau_2)\,d\tau_1d\tau_2,$$
$$b^x=\int_{t-T}^{t}\!\!\int_{t-T}^{t}K_X(\tau_1,\tau_2)\Psi^B(\tau_1,\tau_2)\,d\tau_1d\tau_2,\qquad d^x_{jkn}=\int_{t-T}^{t}\!\!\int_{t-T}^{t}K_X(\tau_1,\tau_2)\Psi^D_{jkn}(\tau_1,\tau_2)\,d\tau_1d\tau_2.$$
Here the variances $D_\nu^x$ ($\nu=1,2,\dots,L$) are calculated according to the recurrent formulae
$$c_{\nu1}=\frac{k_{\nu1}}{D_1^x}\ (\nu=2,3,\dots,L);\qquad c_{\nu\mu}=\frac{1}{D_\mu^x}\Big[k_{\nu\mu}-\sum_{\lambda=1}^{\mu-1}D_\lambda^x c_{\mu\lambda}c_{\nu\lambda}\Big]\ (\mu=2,3,\dots,\nu-1;\ \nu=3,4,\dots,L);$$
$$D_1^x=k_{11};\qquad D_\nu^x=k_{\nu\nu}-\sum_{\lambda=1}^{\nu-1}D_\lambda^x c_{\nu\lambda}^2\ (\nu=2,3,\dots,L), \quad (24)$$
where the $c_{\nu\mu}$ ($\nu=2,3,\dots,L$; $\mu=1,2,\dots,\nu-1$) are auxiliary coefficients. The parameters $k_{\nu\mu}$ ($\nu,\mu=1,2,\dots,L$) are expressed by means of the WLE coefficients of the covariance function $K_X(\tau_1,\tau_2)$:
$$k_{11}=a^x;\quad k_{12}=h^x;\quad k_{21}=b^x;\quad k_{22}=d^x_{000};\quad k_{\nu\mu}=d^x_{jkn}\ (\nu=2^j+k+1;\ \mu=2^j+n+1;\ j=1,2,\dots,J;\ k,n=0,1,\dots,2^j-1). \quad (25)$$
The remaining coefficients $k_{\nu\mu}$ are equal to zero.
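The recurrence (24) is recognizable as a unit-lower-triangular $LDL^T$-type factorization of the matrix $[k_{\nu\mu}]$. A minimal sketch (the matrix below is a made-up symmetric positive-definite stand-in for the WLE coefficient matrix):

```python
import numpy as np

def ldl_recurrence(K):
    """Variances D_nu and auxiliary coefficients c_{nu,mu} computed by the
    recurrent formulae: a factorization K = C diag(D) C^T with C unit lower
    triangular (C[nu, mu] = c_{nu, mu}, zero-based indices)."""
    n = K.shape[0]
    C = np.eye(n)
    D = np.zeros(n)
    D[0] = K[0, 0]
    for nu in range(1, n):
        for mu in range(nu):
            s = sum(D[lam] * C[mu, lam] * C[nu, lam] for lam in range(mu))
            C[nu, mu] = (K[nu, mu] - s) / D[mu]
        D[nu] = K[nu, nu] - sum(D[lam] * C[nu, lam]**2 for lam in range(nu))
    return C, D

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
K = A @ A.T + 6.0 * np.eye(6)      # made-up SPD stand-in for [k_{nu,mu}]
C, D = ldl_recurrence(K)
```

Reconstructing `C @ diag(D) @ C.T` recovers `K`, which is exactly the property that makes the $V_\nu^x$ uncorrelated with variances $D_\nu^x$.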
The coordinate functions $x_\nu(\tau)$ are defined by the following recurrent formulae:
$$x_1(\tau)=\frac{1}{D_1^x}h_1^x(\tau);\qquad x_\nu(\tau)=\frac{1}{D_\nu^x}\Big[\sum_{\lambda=1}^{\nu-1}d_{\nu\lambda}h_\lambda^x(\tau)+h_\nu^x(\tau)\Big];$$
$$d_{\nu\lambda}=c_{\nu\lambda}+\sum_{\mu=\lambda+1}^{\nu-1}c_{\nu\mu}d_{\mu\lambda};\qquad d_{\nu,\nu-1}=c_{\nu,\nu-1};\qquad \lambda=1,2,\dots,\nu-2;\ \nu=2,3,\dots,L. \quad (26)$$
The auxiliary functions used here are
$$h_1^x(\tau)=a^x\varphi_{00}(\tau)+b^x\psi_{00}(\tau);\qquad h_2^x(\tau)=h^x\varphi_{00}(\tau)+d^x_{000}\psi_{00}(\tau);$$
$$h_\nu^x(\tau)=\sum_{k=0}^{2^j-1}d^x_{jkn}\psi_{jk}(\tau);\qquad \nu=3,4,\dots,L;\ \nu=2^j+n+1;\ n=0,1,\dots,2^j-1;\ j=1,2,\dots,J. \quad (27)$$
The real functions $a_\nu(\tau)$ satisfying the conditions (11) and (12) are expressed in terms of the basic wavelet functions:
$$a_1(\tau)=g_1(\tau),\qquad a_\nu(\tau)=\sum_{\lambda=1}^{\nu-1}d_{\nu\lambda}g_\lambda(\tau)+g_\nu(\tau)\quad(\nu=2,3,\dots,L). \quad (28)$$
The coordinate functions $y_\nu(\tau)$ are defined by (26) and (27) applied to the joint covariance function $K_{XY}(\tau_1,\tau_2)$.
Considering $u$ as a parameter, we obtain the following WLE of the function $\Phi(\tau,u)$:
$$\Phi(\tau,u)=a^\Phi(u)\varphi_{00}(\tau)+\sum_{j=0}^{J}\sum_{k=0}^{2^j-1}d^\Phi_{jk}(u)\psi_{jk}(\tau),$$
$$a^\Phi(u)=\int_{t-T}^{t}\Phi(\tau,u)\varphi_{00}(\tau)\,d\tau,\qquad d^\Phi_{jk}(u)=\int_{t-T}^{t}\Phi(\tau,u)\psi_{jk}(\tau)\,d\tau, \quad (29)$$
or, in the notation of (21),
$$\Phi(\tau,u)=c_1^\Phi(u)g_1(\tau)+\sum_{\nu=2}^{L}c_\nu^\Phi(u)g_\nu(\tau),\qquad c_1^\Phi(u)=a^\Phi(u),\qquad c_\nu^\Phi(u)=d^\Phi_{jk}(u)$$
$$(\nu=2^j+k+1;\ j=0,1,\dots,J;\ k=0,1,\dots,2^j-1). \quad (30)$$
So, in the general case, we obtain the following final expressions:
$$\alpha_1(u)=c_1^\Phi(u);\qquad \alpha_\nu(u)=\sum_{\lambda=1}^{\nu-1}d_{\nu\lambda}c_\lambda^\Phi(u)+c_\nu^\Phi(u)\quad(\nu=2,3,\dots,L). \quad (31)$$
Theorem 1 is proved. □
For the given density (19) we calculate the conditional risk:
$$\rho(A|Z)=E[l(W,W^*)|Z]=\chi(z)\int_{-\infty}^{+\infty}l(W,W^*)f(u)\exp\Big[\sum_{\nu=1}^{L}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu=1}^{L}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du. \quad (32)$$
For optimal synthesis it is necessary to find the optimal output StP $W^*(t)$ at a given time moment $t$ from the condition of the minimum of the integral in (32). Let us consider this integral as a function of the variable $P=W^*(t)$ at fixed $z_1, z_2, \dots, z_L$ and $t$. The value of the parameter $P=P_0(t,z_1,z_2,\dots,z_L)$ at which the integral reaches its minimal value defines the optimal operator for the Bayes criterion (3). Replacing in $P_0(t,z_1,z_2,\dots,z_L)$ the variables $z_1,z_2,\dots,z_L$ by the random variables $Z_1,Z_2,\dots,Z_L$, we obtain the optimal operator:
$$W^*(t)=AZ=P_0(t,Z_1,Z_2,\dots,Z_L). \quad (33)$$
The quality of the optimal operator is numerically characterized by the mean risk, computed by the known formula [2,3,4]:
$$R(A)=E[\rho(A|Z)]=E[l(W,W^*)]=\int_{-\infty}^{+\infty}\!\!\cdots\!\!\int_{-\infty}^{+\infty}l\Big(\Psi(t,u)+\sum_{\nu=1}^{L}[z_\nu-\alpha_\nu(u)]y_\nu(t),\,P_0\Big)f_2(z_1,\dots,z_L|u)f(u)\,dz_1\cdots dz_L\,du, \quad (34)$$
$$f_2(z_1,\dots,z_L|u)=\frac{1}{\sqrt{(2\pi)^L D_1^x D_2^x\cdots D_L^x}}\exp\Big[-\frac12\sum_{\nu=1}^{L}\frac{1}{D_\nu^x}\Big(z_\nu-\sum_{r=1}^{N}\alpha_{\nu r}u_r\Big)^2\Big].$$
The common algorithm of the WLCE method for NLOStS synthesis using Bayes’ criteria consists of four Steps (Algorithm 1).
Algorithm 1 Common Synthesis of NLOStS by WLCE
1: 
Construction of WLCE for random noises X ( τ ) and Y ( τ ) according to Formulas (9) and (10):
Specifying wavelet bases with compact carrier by Formulas (20)–(22);
Two-dimensional WLE of covariance functions K X ( t 1 , t 2 ) , K X Y ( t 1 , t 2 ) according to Formula (23);
Calculation of variance D ν x of the independent random variable V ν x of WLCE for random noise X ( τ ) by Formulas (24) and (25);
Definition of coordinate functions x ν ( τ ) , y ν ( τ ) according to recurrent Formulas (26) and (27).
2: 
Construction of the WLCE for the input StP $Z(\tau)$ according to Formulas (13)–(15):
WLE of the function $\Phi(\tau,u)$ relative to the variable $\tau$ according to Formulas (29) and (30);
Calculation of $\alpha_\nu(u)$ according to Formula (31).
3: 
Finding the conditional density of the vector of random parameters U = U 1 , U 2 , , U N T relative to StP Z ( τ ) by the Formula (19) and conditional risk by the Formula (32).
4: 
Determination of the minimum of the integral in (32) as a function of the optimal estimate $W^*(t)$ for any time moment $t$. Construction of the Bayes optimal estimate $W^*(t)$ according to Formula (33).

4. Synthesis of NLOStSs for Particular Optimal Criteria

Let us consider the following optimality criteria: (i) minimum mean square error; (ii) damage accumulation; (iii) probability of error exit outside the limits.
In case (i) the loss function and the conditional risk are described by the formulae
$$l(W,W^*)=(W^*-W)^2, \quad (35)$$
$$\rho(A|Z)=\chi(z)\int_{-\infty}^{+\infty}\Big\{W^*-\Psi(t,u)-\sum_{\nu=1}^{L}[z_\nu-\alpha_\nu(u)]y_\nu(t)\Big\}^2 f(u)\exp\Big[\sum_{\nu}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du. \quad (36)$$
For the optimal estimate $W^*$ it is necessary to find the minimum of (36) over the parameter $P=W^*$, i.e., to minimize the integral
$$I(P)=\int_{-\infty}^{+\infty}\Big\{P-\Psi(t,u)-\sum_{\nu=1}^{L}[z_\nu-\alpha_\nu(u)]y_\nu(t)\Big\}^2 f(u)\exp\Big[\sum_{\nu}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du. \quad (37)$$
The solution of the Euler equation [7],
$$\frac{\partial I(P)}{\partial P}=2\int_{-\infty}^{+\infty}\Big\{P-\Psi(t,u)-\sum_{\nu=1}^{L}[z_\nu-\alpha_\nu(u)]y_\nu(t)\Big\} f(u)\exp\Big[\sum_{\nu}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du=0, \quad (38)$$
gives the explicit expression for the parameter $P_0$:
$$P_0=\frac{\displaystyle\int_{-\infty}^{+\infty}\Big\{\Psi(t,u)+\sum_{\nu=1}^{L}[z_\nu-\alpha_\nu(u)]y_\nu(t)\Big\} f(u)\exp\Big[\sum_{\nu}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du}{\displaystyle\int_{-\infty}^{+\infty} f(u)\exp\Big[\sum_{\nu}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du}. \quad (39)$$
The right-hand side of (39) is the conditional mathematical expectation of the useful output StP $W$ relative to the input StP $Z$; consequently, the optimal estimate of the output StP for the mean square error criterion is defined by the formula
$$W^*=E[W(t)\,|\,Z]. \quad (40)$$
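The fact that the ratio of integrals (39) minimizes the quadratic integral $I(P)$ is easy to check numerically. A sketch for a scalar parameter $u$, in which the function `psi` and the exponential weight are hypothetical stand-ins for $\Psi(t,u)+\sum_\nu[z_\nu-\alpha_\nu(u)]y_\nu(t)$ and the likelihood factor:

```python
import numpy as np

# Scalar numerical sketch: the minimizer P0 of the quadratic integral I(P)
# is the ratio of integrals (39), i.e. a weighted conditional mean.
u = np.linspace(-8.0, 8.0, 20001)
du = u[1] - u[0]
f = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)   # prior density f(u)
psi = u + 0.3 * u**3                              # stand-in for Psi + sum [z - a] y
weight = f * np.exp(1.2 * u - 0.4 * u**2)         # stand-in for f(u) * exp[...]

P0 = np.sum(psi * weight) / np.sum(weight)        # ratio of integrals (39)

def I(P):
    """Quadratic integral of the squared error, by a Riemann sum."""
    return np.sum((P - psi)**2 * weight) * du
```

Perturbing `P0` in either direction strictly increases `I`, since `I` is quadratic in `P` with its exact minimum at the weighted mean.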
For criterion (ii) we have the following formulae:
$$l(W,W^*)=1-e^{-k^2(W^*-W)^2}, \quad (41)$$
$$\rho(A|Z)=\chi(z)\int_{-\infty}^{+\infty}\Big\{1-e^{-k^2\big(W^*-\Psi(t,u)-\sum_{\nu=1}^{L}[z_\nu-\alpha_\nu(u)]y_\nu(t)\big)^2}\Big\} f(u)\exp\Big[\sum_{\nu}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du, \quad (42)$$
$$\int_{-\infty}^{+\infty}e^{-k^2\big(P-\Psi(t,u)-\sum_{\nu=1}^{L}[z_\nu-\alpha_\nu(u)]y_\nu(t)\big)^2}\Big\{P-\Psi(t,u)-\sum_{\nu=1}^{L}[z_\nu-\alpha_\nu(u)]y_\nu(t)\Big\} f(u)\exp\Big[\sum_{\nu}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du=0. \quad (43)$$
In the particular case when
$$e^{-k^2\big(P-\Psi(t,u)-\sum_{\nu=1}^{L}[z_\nu-\alpha_\nu(u)]y_\nu(t)\big)^2}=1, \quad (44)$$
Equation (43) coincides with (38), and the optimal estimate is again given by (40).
For criterion (iii) we obtain the following formulae:
$$l(W,W^*)=\begin{cases}1 & \text{at } |W^*-W|\ge w(t),\\0 & \text{at } |W^*-W|<w(t),\end{cases} \quad (45)$$
$$\rho(A|Z)=E[l(W,W^*)\,|\,Z]=P\big(|W^*-W|\ge w(t)\big), \quad (46)$$
and
$$I(P)=\int_{-\infty}^{u(P-w(t))} f(u)\exp\Big[\sum_{\nu}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du+\int_{u(P+w(t))}^{+\infty} f(u)\exp\Big[\sum_{\nu}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du. \quad (47)$$
The equation for $P$ takes the form
$$f\big(u(P-w(t))\big)\exp\Big[\sum_{\nu=1}^{L}\frac{z_\nu\alpha_\nu(u(P-w(t)))}{D_\nu^x}-\frac12\sum_{\nu=1}^{L}\frac{\alpha_\nu^2(u(P-w(t)))}{D_\nu^x}\Big]=f\big(u(P+w(t))\big)\exp\Big[\sum_{\nu=1}^{L}\frac{z_\nu\alpha_\nu(u(P+w(t)))}{D_\nu^x}-\frac12\sum_{\nu=1}^{L}\frac{\alpha_\nu^2(u(P+w(t)))}{D_\nu^x}\Big]. \quad (48)$$
The solution of (48) for $P$ gives the value $P_0$ for which the density $f_1(u(P_0-w(t))\,|\,z_1,\dots,z_L)$, taken at the values $u=u(P_0-w(t))$ satisfying the equality $\Psi(t,u)+\sum_{\nu=1}^{L}[Z_\nu-\alpha_\nu(u)]y_\nu(t)=P_0-w(t)$, is equal to the density $f_1(u(P_0+w(t))\,|\,z_1,\dots,z_L)$ at the values $u=u(P_0+w(t))$ satisfying the equality $\Psi(t,u)+\sum_{\nu=1}^{L}[Z_\nu-\alpha_\nu(u)]y_\nu(t)=P_0+w(t)$. So the optimal operator is defined by (47).

5. Approximate Algorithm Based on Statistical Linearization

Let us apply the method of statistical linearization (MSL) [2,3,4] to the NLOStSs (1) and (2) with the Gaussian random vector $U=[U_1,U_2,\dots,U_N]^T$ with density
$$f(u_1,\dots,u_N)=\big[(2\pi)^N|K|\big]^{-1/2}\exp\Big[-\frac12\sum_{p,q=1}^{N}C_{pq}(u_p-m_p)(u_q-m_q)\Big], \quad (49)$$
where the $m_p$ ($p=1,2,\dots,N$) are the elements of the mean vector $m=[m_1,m_2,\dots,m_N]^T$ and the $C_{pq}$ ($p,q=1,2,\dots,N$) are the elements of the inverse matrix $C=[C_{pq}]=K^{-1}$ of the covariance matrix $K=[K_{pq}]$.
At first, according to the MSL, we replace the nonlinear function in Equation (1) with the linear one
$$\Phi(\tau,U)\approx\Phi_0(\tau,m,K)+\sum_{r=1}^{N}\xi_r(\tau,m,K)(U_r-m_r), \quad (50)$$
where $\Phi_0=\Phi_0(\tau,m,K)$ and $\xi_r=\xi_r(\tau,m,K)$ are determined from the mean-square-error approximation condition [2,3,4]. Using the notation
$$\xi_0(\tau,m,K)=\Phi_0(\tau,m,K)-\sum_{r=1}^{N}\xi_r(\tau,m,K)m_r, \quad (51)$$
we replace Equation (1) with the linear one
$$Z(\tau)=\xi_0(\tau,m,K)+\sum_{r=1}^{N}U_r\xi_r(\tau,m,K)+X(\tau). \quad (52)$$
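For a scalar Gaussian parameter, the mean-square condition determining the coefficients of (50) reduces to $\Phi_0=E[\Phi(U)]$ and $\xi_1=E[\Phi(U)(U-m)]/D$. A sketch using Gauss–Hermite quadrature; the cubic nonlinearity and the numbers are illustrative only:

```python
import numpy as np

# Statistical-linearization coefficients for a scalar Gaussian U ~ N(m, D):
# Phi0 = E[Phi(U)], xi = E[Phi(U) (U - m)] / D (mean-square condition).
def msl_coeffs(phi, m, D, n_quad=60):
    # probabilists' Gauss-Hermite nodes/weights, weight function exp(-x^2/2)
    x, w = np.polynomial.hermite_e.hermegauss(n_quad)
    u = m + np.sqrt(D) * x
    w = w / w.sum()                      # normalize to a probability measure
    phi0 = np.sum(w * phi(u))            # E[Phi(U)]
    xi = np.sum(w * phi(u) * (u - m)) / D
    return phi0, xi

m, D = 0.4, 1.5                          # illustrative mean and variance
phi0, xi1 = msl_coeffs(lambda u: u**3, m, D)
```

For the cubic, the quadrature reproduces the closed forms $E[U^3]=m^3+3mD$ and $\mathrm{Cov}(U^3,U)/D=3(D+m^2)$, which is a useful sanity check on the construction.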
Under the condition $\xi_r(\tau)\in L_2[t-T,t]$ ($r=1,2,\dots,N$), denote the centered observation $\tilde Z(\tau)=Z(\tau)-\xi_0(\tau)$. Using the WLCE method of linear synthesis [2,3,4] we have the following equations:
$$\tilde Z(\tau)=\sum_{r=1}^{N}U_r\xi_r(\tau)+X(\tau), \quad (53)$$
$$Z(\tau)=\xi_0(\tau)+\tilde Z(\tau), \quad (54)$$
$$\tilde Z_\nu=\int_{t-T}^{t}a_\nu(\tau)\tilde Z(\tau)\,d\tau=\sum_{r=1}^{N}\alpha_{\nu r}U_r+V_\nu^x, \quad (55)$$
where
$$a_1(\tau)=g_1(\tau),\quad a_\nu(\tau)=\sum_{\lambda=1}^{\nu-1}d_{\nu\lambda}g_\lambda(\tau)+g_\nu(\tau)\ (\nu=2,3,\dots,L),\quad \alpha_{\nu r}=\int_{t-T}^{t}a_\nu(\tau)\xi_r(\tau)\,d\tau. \quad (56)$$
So we obtain the WLE
$$\xi_r(\tau)=a_r^\xi\varphi_{00}(\tau)+\sum_{j=0}^{J}\sum_{k=0}^{2^j-1}d^\xi_{rjk}\psi_{jk}(\tau)\quad(r=1,\dots,N),$$
$$a_r^\xi=\int_{t-T}^{t}\xi_r(\tau)\varphi_{00}(\tau)\,d\tau,\qquad d^\xi_{rjk}=\int_{t-T}^{t}\xi_r(\tau)\psi_{jk}(\tau)\,d\tau, \quad (57)$$
or, in the notations of (21),
$$\xi_r(\tau)=c_{r1}^\xi g_1(\tau)+\sum_{\nu=2}^{L}c_{r\nu}^\xi g_\nu(\tau)\quad(r=1,\dots,N),\qquad c_{r1}^\xi=a_r^\xi,\qquad c_{r\nu}^\xi=d^\xi_{rjk}$$
$$(\nu=2^j+k+1;\ j=0,1,\dots,J;\ k=0,1,\dots,2^j-1), \quad (58)$$
and
$$\alpha_{1r}=c_{r1}^\xi;\qquad \alpha_{\nu r}=\sum_{\lambda=1}^{\nu-1}d_{\nu\lambda}c_{r\lambda}^\xi+c_{r\nu}^\xi\quad(\nu=2,3,\dots,L). \quad (59)$$
As a result, Equations (53) and (54) may be rewritten, for the centered observation $\tilde Z(\tau)=Z(\tau)-\xi_0(\tau)$ with the coefficients $\tilde Z_\nu$ from (55), in the form
$$\tilde Z(\tau)=\sum_{\nu=1}^{L}\tilde Z_\nu x_\nu(\tau), \quad (60)$$
$$Z(\tau)=\xi_0(\tau)+\sum_{\nu=1}^{L}\tilde Z_\nu x_\nu(\tau). \quad (61)$$
From Equations (10) and (55), for the noise we have the following presentation:
$$Y(t)=\sum_{\nu=1}^{L}\Big[\tilde Z_\nu-\sum_{r=1}^{N}\alpha_{\nu r}U_r\Big]y_\nu(t). \quad (62)$$
Taking Equation (2) into consideration,
$$W=W(t,U)=\Psi(t,U)+\sum_{\nu=1}^{L}\Big[\tilde Z_\nu-\sum_{r=1}^{N}\alpha_{\nu r}U_r\Big]y_\nu(t)=\Psi(t,U)+\sum_{\nu=1}^{L}\tilde Z_\nu y_\nu(t)-\sum_{r=1}^{N}U_r\sum_{\nu=1}^{L}\alpha_{\nu r}y_\nu(t). \quad (63)$$
We conclude that, for Gaussian noises $X(t)$ and $Y(t)$, the WLCE of the output $W(t)$ does not depend upon the WLCE of the random noise $Y(t)$, and
$$W(t,U)=\Psi(t,U). \quad (64)$$
The conditional joint density of $U$ relative to the $Z_\nu$ (or the StP $Z$) and the conditional mathematical expectation of the Bayes loss function are expressed by the known formulae [2,3,4]:
$$f_1(u|z_1,\dots,z_L)=\chi(z)\sqrt{\frac{|C|}{(2\pi)^N}}\exp\Big[\sum_{p=1}^{N}\eta_p u_p-\frac12\sum_{p,q=1}^{N}\big(b_{pq}u_pu_q+C_{pq}(u_p-m_p)(u_q-m_q)\big)\Big], \quad (65)$$
$$\rho(A|Z)=E[l(W,W^*)|Z]=\chi(z)\int_{-\infty}^{+\infty}l(W,W^*)f_1(u|z_1,\dots,z_L)\,du. \quad (66)$$
Here
$$\chi(z)=\Big\{\int_{-\infty}^{+\infty}f(u)\exp\Big[\sum_{\nu=1}^{L}\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_{\nu=1}^{L}\frac{\alpha_\nu^2(u)}{D_\nu^x}\Big]du\Big\}^{-1},\qquad \alpha_\nu(u)=\sum_{r=1}^{N}\alpha_{\nu r}u_r,$$
$$b_{pq}=\sum_{\nu=1}^{L}\frac{\alpha_{\nu p}\alpha_{\nu q}}{D_\nu^x};\qquad \eta_p=\eta_p(z_1,\dots,z_L)=\sum_{\nu=1}^{L}\frac{\alpha_{\nu p}z_\nu}{D_\nu^x};\qquad p,q=1,\dots,N. \quad (67)$$
Theorem 2.
Let the following conditions hold for the stochastic system (1), (2):
(1) the covariance function $K_X(\tau_1,\tau_2)$ of the random noise $X(\tau)$ is known and belongs to the space $L_2([t-T,t]\times[t-T,t])$;
(2) the joint covariance function $K_{XY}(\tau_1,\tau_2)$ of the random noises $X(\tau)$ and $Y(\tau)$ is known and belongs to the space $L_2([t-T,t]\times[t-T,t])$;
(3) the random vector $U=[U_1,U_2,\dots,U_N]^T$ has the normal probability density (49);
(4) the nonlinear function $\Phi(\tau,u)$ is approximated by a linear one with respect to the random parameters $u_1,\dots,u_N$ according to the statistical linearization method in the form (50);
(5) the conditional probability density of $U$ relative to the StP $Z$ (or relative to the set of random variables $Z_\nu$ according to Formulas (53)–(55)) is approximated by Formulas (65) and (67).
Then the approximate optimal estimate of the output StP $W$ for the three criteria (minimum mean square error, damage accumulation and probability of error exit outside the limits) is
$$W^*=\Psi_0(t,\Lambda,K_{UZ})-\sum_{r=1}^{N}\lambda_r b_{r0}+\eta_0, \quad (68)$$
where $\Lambda=[\lambda_1,\lambda_2,\dots,\lambda_N]^T$ and $K_{UZ}$ are the conditional expectation and the conditional covariance matrix of $U$ relative to the StP $Z$, $\Psi_0(t,\Lambda,K_{UZ})=E[\Psi(t,U)\,|\,Z]$, $b_{r0}=\sum_{\nu=1}^{L}\alpha_{\nu r}y_\nu(t)$, and $\eta_0=\sum_{\nu=1}^{L}z_\nu y_\nu(t)$.
Proof of Theorem 2.
As was mentioned in Section 3, the value of the parameter $P=P_0(t,z_1,\dots,z_L)$ at which the integral (66) reaches its minimum defines the optimal Bayes operator. Replacing in $P_0(t,z_1,\dots,z_L)$ the variables $z_1,\dots,z_L$ by the random variables $Z_1,\dots,Z_L$, we obtain the optimal system operator
$$W^*(t)=AZ=P_0(t,Z_1,Z_2,\dots,Z_L). \quad (69)$$
In Section 4 it was found that, for the three criteria, the optimal estimate of the output (2) based on the observed input (1) equals the conditional mathematical expectation, $W^*=E[W(t)\,|\,Z]$. So the following expression is valid: $W^*=E[W|Z]=E[W(U)|Z]$.
Due to the Gaussian distribution of the variables $U$, $X$, $Y$, the density $f_1(u|z_1,\dots,z_L)$ is Gaussian. Denoting
$$\Lambda=E[U|Z]=[\lambda_r],\quad \lambda_r=E[U_r|Z]=E[U_r|Z_1,Z_2,\dots,Z_L],\quad K_{UZ}=E[(U-\Lambda)(U-\Lambda)^T|Z],\quad \Psi_0(t,\Lambda,K_{UZ})=E[\Psi(t,U)|Z], \quad (70)$$
we obtain the following approximate presentation of (69):
$$W^*=\Psi_0(t,\Lambda,K_{UZ})+\sum_{\nu=1}^{L}Z_\nu y_\nu(t)-\sum_{r=1}^{N}\lambda_r\sum_{\nu=1}^{L}\alpha_{\nu r}y_\nu(t), \quad (71)$$
where $\Lambda$ and $K_{UZ}$ are the conditional expectation and the conditional covariance matrix. In the notations
$$b_{r0}=\sum_{\nu=1}^{L}\alpha_{\nu r}y_\nu(t),\qquad \eta_0=\eta_0(z_1,\dots,z_L)=\sum_{\nu=1}^{L}z_\nu y_\nu(t), \quad (72)$$
the presentation (71) takes the form
$$W^*=\Psi_0(t,\Lambda,K_{UZ})-\sum_{r=1}^{N}\lambda_r b_{r0}+\eta_0. \quad (73)$$
Equating to zero the partial derivatives with respect to $u_1,\dots,u_N$ of the exponent in (65), we obtain the equations for the conditional mathematical expectations $\lambda_r$:
$$\sum_{p=1}^{N}\lambda_p\big(C_{pq}+b_{pq}\big)=\eta_q+\sum_{p=1}^{N}C_{pq}m_p\qquad(q=1,\dots,N). \quad (74)$$
This system of linear algebraic equations may be rewritten in the matrix form
$$C_1\Lambda=A_1^TZ_1+Cm, \quad (75)$$
where
$$C_1=\big[C_{ij}+b_{ij}\big]_{i,j=1}^{N},\qquad A_1=\Big[\frac{\alpha_{\nu j}}{D_\nu^x}\Big]_{\nu,j=1}^{L,N},\qquad Z_1=[z_1,\dots,z_L]^T. \quad (76)$$
Solving Equation (75),
$$\Lambda=C_1^{-1}\big(A_1^TZ_1+Cm\big), \quad (77)$$
and using the notations
$$B_1=[b_{10}\ \cdots\ b_{N0}]^T,\quad Y_1=[y_1(t)\ \cdots\ y_L(t)]^T,\quad A=Y_1^T-B_1^TC_1^{-1}A_1^T,\quad B=B_1^TC_1^{-1}Cm, \quad (78)$$
we obtain the approximate optimal estimate
$$W^*=\Psi_0(t,\Lambda,K_{UZ})+AZ_1-B. \quad (79)$$
The parameter $K_{UZ}$ is computed according to the known formula [4,5,6]:
$$K_{UZ}=\int_{-\infty}^{+\infty}(u-\Lambda)(u-\Lambda)^T f_1(u|z_1,\dots,z_L)\,du. \quad (80)$$
Theorem 2 is proved. □
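The finite-dimensional step (75)–(77) is plain linear algebra. A sketch in which the dimensions and all numerical values are made-up stand-ins for the quantities produced by the synthesis procedure:

```python
import numpy as np

# Conditional mean Lambda from C1 Lambda = A1^T Z1 + C m; values are made up.
rng = np.random.default_rng(2)
N, L = 3, 8
Dx = np.full(L, 1.5)                      # variances D_nu^x
alpha = rng.standard_normal((L, N))       # coefficients alpha_{nu r}
K = np.diag([1.0, 2.0, 0.5])              # covariance matrix of U
C = np.linalg.inv(K)                      # C = K^{-1}
m = np.array([0.1, -0.2, 0.3])            # mean vector of U
z1 = rng.standard_normal(L)               # observed realizations z_nu

A1 = alpha / Dx[:, None]                  # A1 = [alpha_{nu j} / D_nu^x]
b = alpha.T @ A1                          # b_pq = sum_nu alpha_np alpha_nq / D_nu^x
C1 = C + b                                # C1 = [C_ij + b_ij]
Lam = np.linalg.solve(C1, A1.T @ z1 + C @ m)   # conditional mean
```

Using `np.linalg.solve` instead of forming $C_1^{-1}$ explicitly is the numerically preferable way to evaluate the same expression.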
So we come to the following approximate algorithm for the construction of the optimal operator on the basis of the MSL and WLCE methods (Algorithm 2).
Algorithm 2 Synthesis of NLOStS by MSL and WLCE
1: 
Approximation of the nonlinear function Φ ( t , U ) by function (50) and presentation of StP Z ( τ ) in form (52).
2: 
Specifying wavelet bases with a compact carrier by Formulas (20)–(22).
3: 
Presentations of structural functions ξ 1 ( τ ) , , ξ N ( τ ) in the form (57) and (58) and two dimensional WLE of covariance functions K X ( t 1 , t 2 ) , K X Y ( t 1 , t 2 ) according to Formula (23).
4: 
Calculation of variance D ν x of the independent random variable V ν x for ν = 1 , 2 , , L of WLCE for random noise X ( τ ) by Formulas (24) and (25).
5: 
Definition of the coordinate functions $x_\nu(\tau)$, $y_\nu(\tau)$ ($\nu=1,2,\dots,L$) according to the recurrent Formulas (26) and (27).
6: 
Calculation of $\alpha_{\nu r}$ ($\nu=1,2,\dots,L$; $r=1,2,\dots,N$) according to Formula (59).
7: 
Fixation of the realizations $z_\nu$ ($\nu=1,2,\dots,L$) of the random variables $Z_\nu$ using Formula (55).
8: 
Calculation of $b_{pq}$, $b_{p0}$, $\eta_0$, $\eta_p$ ($p,q=1,2,\dots,N$) by Formulas (67) and (72).
9: 
Definition of the parameters $\Lambda$, $K_{UZ}$ by (77) and (80) and of the function $\Psi_0(t,\Lambda,K_{UZ})$ for the concrete nonlinear function $\Psi$.
10: 
Construction of Bayes’ optimal estimate W * ( t ) according to Formula (79).

6. Test Examples

6.1. Example 1

Let us consider the BC optimal filter for the reproduction of the nonlinear signal $W(t)=a\,\mathrm{sgn}\,U+Y(t)$ using the observation $Z(\tau)=U+X(\tau)$ for $\tau\in[t-T,t]$, where $t\ge T$. The random noise $X(t)$ has zero mathematical expectation and the covariance function $K_X(t_1,t_2)=D_X\exp(-\alpha_X|t_1-t_2|)$. The random noises $X(t)$ and $Y(t)$ are independent. The scalar random parameter $U$ is normal with mathematical expectation $m_u=0$ and variance $D_u=1$. According to Section 4, for the three criteria we have $W^*(t)=a\,\mathrm{sgn}\,E[U|Z]$.
After calculation, with $\Phi(\tau,u)=u\,\xi(\tau)$ at $\xi(\tau)=1$, we obtain, according to (15), (16), (29) and (30), the following formulae:
$$\Phi(\tau,u)=u\Big[c_1^\xi g_1(\tau)+\sum_{\nu=2}^{L}c_\nu^\xi g_\nu(\tau)\Big],\qquad c_1^\Phi(u)=u\,c_1^\xi,\qquad c_\nu^\Phi(u)=u\,c_\nu^\xi.$$
Using the notations
$$\bar\alpha_1=c_1^\xi;\quad \bar\alpha_\nu=\sum_{\lambda=1}^{\nu-1}d_{\nu\lambda}c_\lambda^\xi+c_\nu^\xi\ (\nu=2,3,\dots,L),\quad c_1^\xi=a^\xi,\quad c_\nu^\xi=d^\xi_{jk}\ (\nu=2^j+k+1;\ j=0,1,\dots,J;\ k=0,1,\dots,2^j-1;\ \nu=2,3,\dots,L),$$
$$\alpha_\nu(u)=u\,\bar\alpha_\nu\ (\nu=1,2,\dots,L),\qquad E[U|Z]=\lambda,$$
we come to the required result:
$$W^*=a\,\mathrm{sgn}\,\lambda,\qquad \lambda=\Big[\frac{1}{D_u}+\sum_{\nu=1}^{L}\frac{\bar\alpha_\nu^2}{D_\nu^x}\Big]^{-1}\sum_{\nu=1}^{L}\frac{z_\nu\bar\alpha_\nu}{D_\nu^x}.$$
For the input data
$$J=2,\quad a=1,\quad D_X=1,\quad \alpha_X=1,\quad T=8,\quad t\in[8,18],\quad l(W,W^*)=1-e^{-k^2(W^*-W)^2}\ (k=0.5;\ 1),$$
the results of computer experiments are shown in Figure 1 and Figure 2. Figure 1 corresponds to the case when the signal value and its estimate coincide, whereas in Figure 2 the signal value and the estimate differ at one random point.

6.2. Example 2

Consider the conditions
$$W(t)=3U^2+Y(t),\qquad Z(\tau)=U^3+X(\tau)$$
with the input data of Section 6.1. Using the results of Section 4, according to (40) we have
$$W^*(t)=3E[U^2|Z]=\frac{\displaystyle\int_{-\infty}^{+\infty}3u^2\exp\Big[\sum_\nu\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_\nu\frac{\alpha_\nu^2(u)}{D_\nu^x}-\frac{(u-m_u)^2}{2D_u}\Big]du}{\displaystyle\int_{-\infty}^{+\infty}\exp\Big[\sum_\nu\frac{z_\nu\alpha_\nu(u)}{D_\nu^x}-\frac12\sum_\nu\frac{\alpha_\nu^2(u)}{D_\nu^x}-\frac{(u-m_u)^2}{2D_u}\Big]du}.$$
Denoting $\Phi(\tau,u)=u^3\xi(\tau)$, $\xi(\tau)=1$, according to (15), (16), (29) and (30), and using the formulae
$$\Phi(\tau,u)=u^3\Big[c_1^\xi g_1(\tau)+\sum_{\nu=2}^{L}c_\nu^\xi g_\nu(\tau)\Big],\qquad c_1^\Phi(u)=u^3c_1^\xi,\qquad c_\nu^\Phi(u)=u^3c_\nu^\xi,$$
$$\bar\alpha_1=c_1^\xi;\quad \bar\alpha_\nu=\sum_{\lambda=1}^{\nu-1}d_{\nu\lambda}c_\lambda^\xi+c_\nu^\xi\ (\nu=2,3,\dots,L),\quad c_1^\xi=a^\xi,\quad c_\nu^\xi=d^\xi_{jk}\ (\nu=2^j+k+1;\ j=0,1,\dots,J;\ k=0,1,\dots,2^j-1;\ \nu=2,3,\dots,L),$$
we obtain $\alpha_\nu(u)=u^3\bar\alpha_\nu\ (\nu=1,2,\dots,L)$.
The results of computer experiments based on Algorithm 1 for the signal, the signal estimate, the squared error and the damage accumulation over the time interval [9, 18] are given in Figure 3 and Figure 4. The standard MATLAB R2019a software procedures with step 0.01 were used.

6.3. Example 3

Let us consider the application of the MSL (Section 5) to the system of the test example of Section 6.2. After calculations we have the following formulae:
$$\Phi(t,u)\approx\Phi_0(t,m_u,D_u)+\xi_1(t,m_u,D_u)(u-m_u),$$
$$\Phi_0(t,m_u,D_u)=m_uD_u\Big(3+\frac{m_u^2}{D_u}\Big)=m_u\big(3D_u+m_u^2\big),$$
$$\xi_1(t)=\xi_1(t,m_u,D_u)=3D_u\Big(1+\frac{m_u^2}{D_u}\Big)=3\big(D_u+m_u^2\big),$$
$$\xi_0(t)=\xi_0(t,m,K)=m_u\big(3D_u+m_u^2\big)-3\big(D_u+m_u^2\big)m_u=-2m_u^3,$$
$$Z(\tau)=\xi_0(t)+\xi_1(t)U+X(\tau)$$
and
$$E[U|Z]=\lambda=\frac{\eta_1+C_{11}m_u}{C_{11}+b_{11}},\qquad D[U|Z]=D_{uz}=\frac{1}{C_{11}+b_{11}},$$
where
$$b_{11}=\sum_{\nu=1}^{L}\frac{\alpha_{\nu1}^2}{D_\nu^x},\qquad \eta_1=\sum_{\nu=1}^{L}\frac{\alpha_{\nu1}z_\nu}{D_\nu^x},\qquad C_{11}=\frac{1}{D_u}.$$
So we obtain the following approximate result:
$$W^*(t)=3E[U^2|Z]=3\big(D_{uz}+\lambda^2\big).$$
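The scalar posterior update behind this example can be sketched directly; the coefficients $\alpha_{\nu 1}$, variances $D_\nu^x$ and observations $z_\nu$ below are made-up stand-ins for quantities produced by Algorithm 2:

```python
import numpy as np

# Scalar conditional mean and variance of U, then the approximate estimate
# W* = 3 E[U^2 | Z] = 3 (D_uz + lambda^2); all numbers are illustrative.
rng = np.random.default_rng(1)
L = 8
alpha1 = rng.standard_normal(L)           # stand-in for alpha_{nu 1}
Dx = np.full(L, 2.0)                      # stand-in for D_nu^x
z = rng.standard_normal(L)                # stand-in for observed z_nu
m_u, D_u = 0.0, 1.0                       # prior mean and variance of U

C11 = 1.0 / D_u
b11 = np.sum(alpha1**2 / Dx)
eta1 = np.sum(alpha1 * z / Dx)
lam = (eta1 + C11 * m_u) / (C11 + b11)    # conditional mean E[U | Z]
D_uz = 1.0 / (C11 + b11)                  # conditional variance D[U | Z]
W_star = 3.0 * (D_uz + lam**2)            # approximate optimal estimate
```

A useful cross-check: integrating the unnormalized Gaussian posterior density numerically reproduces the same conditional mean `lam`.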
The results of computer experiments based on Algorithm 2 for the signal, the signal estimate, the absolute error, the squared error, the damage accumulation, the conditional mathematical expectation and the relative error $\varepsilon=|W-W^*|/|W|$ (in %) are given in Figure 5, Figure 6 and Figure 7. The conditional variance is constant and equal to 0.00196.
Comparing the test examples of Section 6.2 and Section 6.3, we conclude that the MSL provides good engineering accuracy, with relative errors of about 20–30%.

7. Conclusions

Algorithms for the optimal synthesis of nonstationary nonlinear StSs (Pugachev’s Equations (1) and (2)) based on canonical expansions of stochastic processes are well developed and applied [4]. Algorithm 1 (Theorem 1) is oriented toward Gaussian noises and non-Gaussian parameters. Algorithm 2 (Theorem 2) is valid for Gaussian parameters and noises. The corresponding algorithms based on wavelet canonical expansions are well worked out only for observable linear StSs. For non-Gaussian parameters and noises we recommend the use of CEs with independent non-Gaussian components [4].
An important issue for the considered nonstationary stochastic systems is obtaining a faster convergence speed. The structural functions and the covariance functions of the noises in our approach are required to belong to one- and two-dimensional $L_2$-spaces. We use Haar wavelets due to their simplicity and analytical expressions. It is known that wavelet expansions based on Haar wavelets have poor convergence compared to wavelet expansions based on, for example, Daubechies wavelets. We suggest two ways to improve the convergence speed: (i) increasing the maximal level of resolution of the fixed wavelet basis; (ii) choosing another type of wavelet with a compact carrier. The computer experiments in Examples 1–3 confirm good engineering accuracy even for two resolution levels and eight terms of the canonical expansions.
For the NLOStSs described, the stochastic difference and differential equations corresponding to the algorithms are given in [2,3,4].
Applications:
Approximate linear and equivalent linearized model building for observable nonstationary and nonlinear stochastic systems;
Bayes' criterion optimization and the calibration of parameters in complex quality measurement and control systems;
Estimation of the potential efficiency of nonstationary nonlinear stochastic systems.
Directions of future generalizations and implementations:
New models of scalar- and vector-observable stochastic systems (nonlinear, with additive and parametric noises, etc.);
New classes of Bayes' criteria (integral, functional, mixed);
Implementation of wavelet-integral canonical expansions for hereditary stochastic systems.
This research was carried out using the infrastructure of the Shared Research Facilities “High Performance Computing and Big Data” (CKP “Informatics”) of FRC CSC RAS (Moscow, Russia).

Author Contributions

Conceptualization, I.S.; methodology, I.S., V.S. and T.K.; software, E.K. and T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were generated or analyzed during this study.

Acknowledgments

The authors thank I.V. Sinitsyna for translating the text.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

BC: Bayes' criteria
CE: Canonical expansion
LOStS: Linear observable stochastic system
MSL: Method of statistical linearization
NLOStS: Nonlinear observable stochastic system
StP: Stochastic process
StS: Stochastic system
WLCE: Wavelet canonical expansion
Basic Designations
$X(t)$: random function, noise
$Y(t)$: random function, noise
$EX(t)$: mathematical expectation of random function $X(t)$
$Z(t)$: input stochastic process
$W(t)$: output stochastic process
$W^*(t)$: estimator of $W(t)$
$l(W, W^*)$: loss function
$A$: system operator
$\rho(A \mid W)$: conditional risk
$R(A)$: mean risk
$U$: random vector parameter
$V_\nu^x$: random variable of canonical expansion of random function $X(t)$
$x_\nu(t)$: coordinate function of canonical expansion of random function $X(t)$
$y_\nu(t)$: coordinate function of canonical expansion of random function $Y(t)$
$D_\nu^x$: variance of random variable $V_\nu^x$
$K_X(t_1, t_2)$: covariance function of random function $X(t)$
$K_{XY}(t_1, t_2)$: joint covariance function of random functions $X(t)$ and $Y(t)$
$Z_\nu$: random variable of canonical expansion of StP
$f(u)$: probability density of random vector $U = (U_1, U_2, \ldots, U_N)^T$
$f_1(u \mid z_1, z_2, \ldots)$: conditional probability density of random vector $U$ relative to random variables $Z_\nu$
$f_2(z_1, z_2, \ldots \mid u)$: conditional probability density of random variables $Z_\nu$ relative to $U$
$\varphi_{00}(t)$: Haar scaling function
$\psi_{00}(t)$: Haar mother wavelet

References

  1. Sinitsyn, I.; Sinitsyn, V.; Korepanov, E.; Konashenkova, T. Bayes Synthesis of Linear Nonstationary Stochastic Systems by Wavelet Canonical Expansions. Mathematics 2022, 10, 1517. [Google Scholar] [CrossRef]
  2. Pugachev, V.S. Theory of Random Functions and Its Applications to Control Problems; Pergamon Press Ltd.: Oxford, UK, 1965. [Google Scholar]
  3. Pugachev, V.S.; Sinitsyn, I.N. Stochastic Systems. Theory and Applications; World Scientific: Singapore, 2001. [Google Scholar]
  4. Sinitsyn, I. Canonical Expansion of Random Functions. In Theory and Applications, 2nd ed.; Torus Press: Moscow, Russia, 2023. (In Russian) [Google Scholar]
  5. Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992. [Google Scholar]
  6. Chui, C.K. An Introduction to Wavelets; Academic Press: Boston, MA, USA, 1992. [Google Scholar]
  7. Fonseca, I.; Leoni, G. Modern Methods in the Calculus of Variations: Lp Spaces; SMM; Springer: New York, NY, USA, 2007. [Google Scholar]
Figure 1. Example 1. Case 1.
Figure 2. Example 1. Case 2.
Figure 3. Results of synthesis of NLOStS (81) based on Algorithm 1: (a) graphs of signal extrapolation $W$ and estimate extrapolation $W^*$; (b) graph of error module $|W^* - W|$.
Figure 4. Results of synthesis of NLOStS (81) based on Algorithm 1: (a) graph of error square $(W^* - W)^2$; (b) graphs of accumulation of damage $l(W, W^*) = 1 - e^{-k^2 (W^* - W)^2}$ for $k = 0.5$ and $k = 1$.
Figure 5. Results of synthesis of NLOStS (81) based on Algorithm 2: (a) graphs of signal extrapolation $W$ and estimate extrapolation $W^*$; (b) graph of error module $|W^* - W|$.
Figure 6. Results of synthesis of NLOStS (81) based on Algorithm 2: (a) graph of error square $(W^* - W)^2$; (b) graphs of accumulation of damage $l(W, W^*) = 1 - e^{-k^2 (W^* - W)^2}$ for $k = 0.5$ and $k = 1$.
Figure 7. Graphs of: (a) conditional mathematical expectation $E[U \mid Z = \lambda]$; (b) relative error in %.