Article

Axiomatic Foundations of Anisotropy-Based and Spectral Entropy Analysis: A Comparative Study

by Victor A. Boichenko, Alexey A. Belov and Olga G. Andrianova *
V.A. Trapeznikov Institute of Control Sciences of RAS, 117997 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(12), 2751; https://doi.org/10.3390/math11122751
Submission received: 15 April 2023 / Revised: 12 June 2023 / Accepted: 15 June 2023 / Published: 17 June 2023
(This article belongs to the Section Dynamical Systems)

Abstract: An axiomatic development of control systems theory can systematize important concepts. The current research article is dedicated to the investigation and comparison of two axiomatic approaches to the analysis of discrete linear time-invariant systems affected by external random disturbances. The main goal of this paper is to explore the axiomatics of the anisotropy-based theory in detail, in comparison with the axiomatics of the spectral entropy approach. It is demonstrated that the spectral entropy approach is mathematically rigorous, which allows one to prove that the minimal disturbance attenuation level in terms of the anisotropy-based control theory provides the desired performance not only for ergodic signals. As a result, the axiomatics of the spectral entropy approach allows one to rigorously prove that anisotropy-based controllers guarantee the desired disturbance attenuation level not only for stationary random sequences, but also for a wider set of input random signals.

1. Introduction

As a rule, classic automatic control theory is based on the application of mathematical tools to analyze and design the control law. On the one hand, this approach allows for deriving transparent methods and numerical algorithms of analysis and control. On the other hand, it allows for stating predefined quality criteria for the closed-loop system performance in mathematical terms. The use of mathematical tools requires specifying a set of axioms and deriving rules for investigation. An axiomatic approach in automatic control theory can be found in [1,2]. For instance, the paper [1] applies an axiomatic approach to investigate the Lyapunov stability of nonlinear differential equations. The axiomatic approach is also widely used in engineering research; for example, in the paper [3], the axiomatic approach is suggested for the design of complex engineering systems that involve hardware, software and human interface units. Meanwhile, an axiomatic approach to the design of cyberphysical systems is discussed in the paper [4].
The current research article is dedicated to the investigation and comparison of two axiomatic approaches to the analysis and control of discrete linear time-invariant systems affected by external random disturbances. The first one is called the anisotropy-based control theory; the second one is known as the spectral entropy (σ-entropy) approach. The anisotropy-based control theory appeared in the mid-1990s in the papers [5,6,7,8,9]. The anisotropy-based approach made it possible to unify the $H_2$ and $H_\infty$ control theories within a common framework. Like the $H_2$ and $H_\infty$ control theories, the anisotropy-based control theory deals with the attenuation of external disturbances acting on the control system. It is supposed that external disturbances belong to a certain set of signals that is specified by a non-negative scalar value, called the mean anisotropy level. The mean anisotropy level is the Kullback–Leibler information divergence from the probability distribution function (p.d.f.) of the signal to the p.d.f. of white Gaussian noise. Zero mean anisotropy corresponds to the set of white Gaussian noises, while any positive mean anisotropy defines a wider set of signals. Thus, the $H_2$ and $H_\infty$ control theories are limiting cases of the anisotropy-based control theory. The anisotropy-based control theory is actively developing at present; many fruitful results can be found in [10,11,12,13].
Despite significant advances in anisotropy-based control, there still exist some unsolved problems. It is worth mentioning that the anisotropy-based control theory guarantees the desired disturbance attenuation level only for ergodic processes, while real external disturbances are not necessarily stationary. Thus, it is not clear how effective anisotropy-based controllers are for real systems. To answer this question, a new approach, named the spectral entropy method, was proposed [14,15]. The first results on the analysis of linear time-invariant systems appeared in [15]. It is demonstrated that spectral entropy allows for the description of a wider set of signals, including non-stationary and fading signals.
The main goal of this paper is to explore the axiomatics of the anisotropy-based theory in detail, in comparison with the axiomatics of the spectral entropy approach. It is demonstrated that the spectral entropy approach is mathematically rigorous and allows one to prove that the minimal disturbance attenuation level, in terms of the anisotropy-based control theory, provides the desired performance not only for ergodic signals.
The main contributions of this paper can be summarized as follows:
  • The axiomatics of the two approaches, anisotropy-based and spectral entropy, are investigated and compared for the first time.
  • Different axiomatics are demonstrated to extend the set of input signals acting on the control plant substantially.
  • The problem of the spectral entropy analysis is proven to have the same solution as the anisotropy-based analysis. Thus, all the results obtained for the anisotropy-based analysis and control can be directly applied to the spectral entropy analysis.
This paper is organized as follows. In Section 2, a detailed description of anisotropy-based approach axiomatics is given. Section 3 deals with axiomatics of the spectral entropy control theory. In Section 4, a discussion of the new spectral entropy approach application is presented.

2. Axiomatics of Anisotropy-Based Control Theory

As mentioned in the Introduction, the anisotropy-based control theory deals with the problem of external random disturbance attenuation. In contrast to the classical $H_2$ and $H_\infty$ control theories, the anisotropy-based control theory takes into consideration the correlation structure of the input random signals. The disturbance attenuation level is calculated as a power norm ratio between the output and the random input disturbance. The anisotropy-based analysis and control has been successfully developed for linear time-invariant as well as linear time-varying systems [16,17,18]. The analysis and control of linear time-varying systems applies the theory of matrices and matrix norms and is not considered here. To characterize the set of possible disturbances, the term “mean anisotropy” is introduced and used. The mean anisotropy level defines a set of stationary random disturbances with an unknown covariance and bounded “spectral color” [9]. It allows one to tune a controller with better disturbance attenuation properties. However, the anisotropy-based approach deals only with ergodic input signals.
Consider axiomatics of the anisotropy-based approach in detail, starting from the main six axioms:
  • Linear dynamical system;
  • Infinite horizon;
  • Anisotropy of a random vector;
  • Sequence vectorization and mean anisotropy of a random sequence;
  • Kolmogorov–Szego theorem and transfer to a frequency domain;
  • The quality criterion is the gain on a subset of signals whose anisotropy is less than or equal to a.
In the sequel, the anisotropy-based theory statements will be reformulated in terms of:
  • Equivalence relation on the set of input signals;
  • The induced norm on a subset of signals, the power of which is bounded by the equivalence relation.
Consider a stable linear discrete-time system given in the state space in the following form [10]:
$$x_{k+1} = A x_k + B w_k, \quad x_0 = 0, \qquad y_k = C x_k + D w_k, \quad k = 0, 1, 2, \ldots, \qquad (1)$$
where $x_k \in \mathbb{R}^n$ is a state vector, $W = \{w_k\}_{k \in \mathbb{Z}}$ is a stationary Gaussian sequence of $m$-dimensional vectors with zero mean and bounded mean anisotropy level $\bar{A}(W) \le a$ ($a \ge 0$), and $y_k \in \mathbb{R}^p$ is an output of the system. The transfer function of system (1) is $F(z) = C(zI - A)^{-1}B + D$.
This is the standard beginning of the problem statement in an anisotropy-based context, from which the first two axioms immediately follow:
  • Linear discrete-time system;
  • Infinite horizon.
At the next step, we need to describe the set of signals acting on the system. The theories closely related to the anisotropy-based control theory, namely LQR and $H_\infty$, deal with continuous-time deterministic signals with finite energy (i.e., finite $L_2$ norm), while LQG control deals with white Gaussian noise with a known covariance. In the anisotropy-based theory, a more detailed description of the stationary sequence of random $m$-dimensional vectors is introduced by means of the anisotropy of a single vector and the mean anisotropy of the sequence composed of random vectors. We introduce the definitions of anisotropy and mean anisotropy following [10].
Recall that $L_2^m$ is the class of $\mathbb{R}^m$-valued random vectors that are absolutely continuously distributed and have a finite second moment.
Denote by $p_{m,\lambda}$, with $\lambda > 0$, the probability density function of a Gaussian vector with zero mean and scalar covariance matrix $\lambda I_m$, i.e.,
$$p_{m,\lambda}(x) = (2\pi\lambda)^{-m/2} \exp\left(-\frac{|x|^2}{2\lambda}\right), \quad x \in \mathbb{R}^m. \qquad (2)$$
For any $w \in L_2^m$ with probability density function $f: \mathbb{R}^m \to \mathbb{R}_+$, its relative entropy with respect to (2) takes the form [19]:
$$D(f \,\|\, p_{m,\lambda}) = \mathbf{E} \ln \frac{f(w)}{p_{m,\lambda}(w)} = -h(w) + \frac{m}{2}\ln(2\pi\lambda) + \frac{\mathbf{E}|w|^2}{2\lambda},$$
where
$$h(w) = -\mathbf{E} \ln f(w) = -\int_{\mathbb{R}^m} f(w) \ln f(w)\, dw$$
is the differential entropy of the random vector $w$.
Definition 1.
Anisotropy $A(w)$ of a random vector $w \in L_2^m$ is defined as the minimal information deviation of its distribution from the Gaussian distributions on $\mathbb{R}^m$ with zero mean and scalar covariance matrices:
$$A(w) = \min_{\lambda > 0} D(f \,\|\, p_{m,\lambda}). \qquad (3)$$
A direct calculation demonstrates that the minimum in (3) over all $\lambda > 0$ is achieved at $\lambda = \mathbf{E}|w|^2/m$; hence,
$$A(w) = \min_{\lambda > 0} D(f \,\|\, p_{m,\lambda}) = \frac{m}{2} \ln\left(\frac{2\pi e}{m}\, \mathbf{E}|w|^2\right) - h(w).$$
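As a quick numerical check of this closed form (a sketch, not from the paper): for a Gaussian vector $w \sim \mathcal{N}(0, S)$, the differential entropy is $h(w) = \frac{m}{2}\ln(2\pi e) + \frac{1}{2}\ln\det S$ and $\mathbf{E}|w|^2 = \operatorname{tr} S$, so the anisotropy reduces to $A(w) = \frac{m}{2}\ln\frac{\operatorname{tr} S}{m} - \frac{1}{2}\ln\det S$. The helper names below are hypothetical:

```python
# Compare the closed-form anisotropy of a Gaussian vector with a
# brute-force minimization of the KL divergence over lambda.
import numpy as np

def kl_to_scaled_identity(S, lam):
    """KL divergence D(N(0,S) || N(0, lam*I_m)), lam > 0 (hypothetical helper)."""
    m = S.shape[0]
    return 0.5 * (np.trace(S) / lam - m + m * np.log(lam)
                  - np.log(np.linalg.det(S)))

def anisotropy_gaussian(S):
    """Closed form: A(w) = (m/2) ln(tr(S)/m) - (1/2) ln det S."""
    m = S.shape[0]
    return 0.5 * m * np.log(np.trace(S) / m) - 0.5 * np.log(np.linalg.det(S))

S = np.array([[2.0, 0.5], [0.5, 1.0]])       # example covariance matrix
lams = np.linspace(0.1, 10.0, 100_000)       # brute-force search over lambda
numeric = np.min(kl_to_scaled_identity(S, lams))
closed = anisotropy_gaussian(S)
print(closed, numeric)                        # the two values coincide
```

The grid minimum over $\lambda$ agrees with the closed form, and the resulting anisotropy is non-negative, vanishing only for a scalar covariance matrix.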
Before formulating the third axiom of the anisotropy-based theory, note that the anisotropy of a random vector, being a relative entropy, has the following properties:
  • $D(p\,\|\,q) \ge 0$, and $D(p\,\|\,q) = 0$ only if $p(x) = q(x)$ for any $x$;
  • In the general case, $D(p\,\|\,q) \ne D(q\,\|\,p)$.
The second property means that the relative entropy is not symmetric with respect to the rearrangement of $p$ and $q$, and therefore it cannot serve directly as a proximity measure between two random vectors. In the anisotropy-based theory, this drawback is overcome in the following way: the probability density functions of random vectors are compared not with each other, but with a reference vector, and the results of such a comparison are used further in the theory. A random Gaussian vector with zero mean and scalar covariance matrix $\lambda I_m$ is chosen as the reference, and since there is a whole family of such vectors, the minimum is calculated over all $\lambda > 0$. Thus, the third axiom of the anisotropy-based theory is
  • Anisotropy of a random vector.
Note that for each $w \in L_2^m$, its anisotropy satisfies $A(w) \ge 0$, with $A(w) = 0$ if and only if $w$ is a Gaussian vector with zero mean and scalar covariance matrix.
Control theory does not operate with isolated random vectors; it operates with input and output signals, either discrete or continuous. In the case of the anisotropy-based theory, the input signal is an infinite sequence of random vectors, so the definition of anisotropy given above is not directly applicable. Instead, the mean anisotropy of a random sequence is used, defined via the limit, as $N \to +\infty$, of the anisotropy of a vectorized fragment of $N$ elements of the sequence $W$.
Select from the sequence $W$ a fragment of dimension $Nm$ [10]:
$$W_{0:N-1} = \begin{bmatrix} w_0 \\ \vdots \\ w_{N-1} \end{bmatrix},$$
where each vector $w_i \in \mathbb{R}^m$ for $i = \overline{0, N-1}$.
Definition 2.
Mean anisotropy of the sequence $W$ is defined as
$$\bar{A}(W) = \lim_{N \to +\infty} \frac{A(W_{0:N-1})}{N}.$$
Hence, the fourth axiom is
  • Mean anisotropy of a random sequence.
The term “mean anisotropy” is used to characterize a set of possible disturbances. The mean anisotropy level defines a set of stationary random disturbances with unknown covariances and bounded “spectral color”.
The definition of the mean anisotropy $\bar{A}(W)$ of a random sequence is given in a probability-theoretic context, while control theory works either in the time domain or in the frequency domain, i.e., it uses the apparatus of classical function theory. The transition from a probability-theoretic description to a functional one is provided by the Kolmogorov–Szegő theorem [20]. This theorem relates the one-step prediction error of a random sequence to its spectral density. However, the theorem imposes a strict limitation: the random signal must be a centered stationary process. From this limitation, the following axiom arises:
  • Stationary random sequence.
Finally, the quality criterion in the classical anisotropy-based theory, known as the anisotropic norm of the system, is defined in the following way [10].
Denote the output sequence of system (1) by $Y = \{y_k\}_{k\in\mathbb{Z}}$. Define the power norm of the sequence $Y$ as
$$\|Y\|_{\mathcal{P}} = \left(\lim_{N\to\infty}\frac{1}{2N+1}\sum_{k=-N}^{N}\mathbf{E}|y_k|^2\right)^{1/2}.$$
Supposing that $\|Y\|_{\mathcal{P}}$ and $\|W\|_{\mathcal{P}}$ are finite, for a given system $F$ with input signal $W = \{w_k\}_{k\in\mathbb{Z}}$, the root mean square gain is defined as
$$Q(F, W) = \frac{\|Y\|_{\mathcal{P}}}{\|W\|_{\mathcal{P}}}.$$
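The power norm and the gain $Q(F, W)$ can be estimated directly from simulated trajectories. Below is a minimal sketch (not from the paper) for the hypothetical scalar plant $x_{k+1} = 0.5\,x_k + w_k$, $y_k = x_k$, driven by white Gaussian noise, for which the stationary output variance is $1/(1 - 0.25) = 4/3$:

```python
# Monte-Carlo estimate of the power norms and the RMS gain Q(F, W).
import numpy as np

rng = np.random.default_rng(0)
N = 500_000
w = rng.standard_normal(N)            # white Gaussian input, unit variance
x = np.zeros(N)
for k in range(N - 1):                # x_{k+1} = 0.5 x_k + w_k, y_k = x_k
    x[k + 1] = 0.5 * x[k] + w[k]

def power(s):                         # empirical power norm of a sequence
    return np.sqrt(np.mean(s ** 2))

gain = power(x) / power(w)            # estimate of Q(F, W)
print(gain)                           # close to sqrt(4/3) ~ 1.155
```

For white-noise input the estimated gain approaches $\|F\|_2/\sqrt{m}$, here $\sqrt{4/3}$, which is the $a = 0$ limiting case of the anisotropic norm discussed below.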
Definition 3.
Let $W$ be a stationary Gaussian sequence with bounded mean anisotropy $\bar{A}(W) \le a$, with $a \ge 0$ being a known scalar value. Then the anisotropic norm of the system $F$ is defined as
$$\|F\|_a = \sup_{\bar{A}(W) \le a} Q(F, W). \qquad (4)$$
Thus, the anisotropic norm $\|F\|_a$ describes the stochastic gain of the input signal $W$ by the system $F$, which leads to the last (sixth) axiom of the anisotropy-based theory:
  • Anisotropic norm of the system.
Consequently, we obtain the following six axioms of the anisotropy-based control theory:
  • Linear discrete-time system;
  • Infinite horizon;
  • Anisotropy of a random vector;
  • Mean anisotropy of a random sequence;
  • Stationary random sequence;
  • Anisotropic norm of the system.
We now point out a property of the mean anisotropy that, to the best of our knowledge, has not been mentioned elsewhere: from a set-theoretic point of view, the mean anisotropy generates an equivalence relation on the set of input random sequences. Indeed, for any pair of sequences, the mean anisotropy allows one to introduce a binary equivalence relation $\sim$ on the set of input sequences $\mathcal{W}$ and the term “equianisotropic”, which means that the sequences have equal mean anisotropies. The equianisotropy relation introduced in this way satisfies all the equivalence conditions [21], i.e., for any sequences $\{w_k\}, \{v_k\}, \{u_k\} \in \mathcal{W}$, this relation is
  • Reflexive, as
    $\{w_k\} \sim \{w_k\}$;
  • Symmetric, as
    $\{w_k\} \sim \{v_k\} \Rightarrow \{v_k\} \sim \{w_k\}$;
  • Transitive, as
    $\{w_k\} \sim \{v_k\} \wedge \{v_k\} \sim \{u_k\} \Rightarrow \{w_k\} \sim \{u_k\}$.
Like any equivalence relation, the equianisotropy relation on the set of input sequences $\mathcal{W}$ generates a partition of the set $\mathcal{W}$ into disjoint equivalence classes, and each class $K_a$ is characterized by its mean anisotropy level $a$.
The set of all equivalence classes $K_a$ of the set $\mathcal{W}$ with respect to the equianisotropy relation $\sim$ forms a factor set $\mathcal{W}/\sim$. Hence, the anisotropic norm (4) of the system is an induced norm on a subset of the factor set $\mathcal{W}/\sim$, and the “power” of this subset is determined by the condition $\bar{A}(K_a) \le a$.
Noting that axioms 3–5 serve as a detailed and consistent description of signals in the anisotropy-based theory, these three axioms can be grouped and reformulated as a single axiom: an equivalence relation on the set of input signals. Taking into account the properties of the mean anisotropy and keeping in mind the goal of expanding the anisotropy-based theory, the axiomatics of the anisotropy-based theory can be reformulated in the following way:
  • Linear dynamics;
  • Infinite horizon;
  • Equivalence relation on the set of input signals;
  • Induced norm on a subset of the equivalence classes factor set.
Based on these axioms, frequency-domain and state-space approaches to the analysis of linear discrete-time systems appeared in [7,9]. Finally, the state-space formula of the anisotropic norm is given by the following theorem [7].
Theorem 1.
Let the asymptotically stable system (1) satisfy the inequality
$$\frac{\|F\|_2^2}{m} < \|F\|_\infty^2$$
with $\|F\|_2$ and $\|F\|_\infty$ being the $H_2$ and $H_\infty$ norms of the system $F$.
Then, for $a > 0$, there exists a unique pair $(q, R)$ of the scalar parameter $q \in \left(0, \|F\|_\infty^{-2}\right)$ and the admissible solution $R$ of the Riccati equation
$$R = A^T R A + q C^T C + L^T \Sigma^{-1} L,$$
$$\Sigma = \left(I - q D^T D - B^T R B\right)^{-1},$$
$$L = \Sigma\left(B^T R A + q D^T C\right),$$
such that
$$-\frac{1}{2} \ln \det \frac{m \Sigma}{\operatorname{tr}\left(L P L^T + \Sigma\right)} = a,$$
where $P$ is a solution of the Lyapunov equation
$$P = (A + BL)\, P\, (A + BL)^T + B \Sigma B^T.$$
The anisotropic norm (4) of system (1) is given by
$$\|F\|_a^2 = \frac{1}{q}\left(1 - \frac{m}{\operatorname{tr}\left(L P L^T + \Sigma\right)}\right).$$
The state-space formulae given in the above theorem can easily be utilized for the numerical computation of the anisotropic norm using Newton iterations [9].
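The Newton iterations themselves are not reproduced in the paper. Below is a minimal numerical sketch of how Theorem 1 can be evaluated, with assumed helper names and with simple fixed-point iterations plus bisection over $q$ in place of Newton's method, for the scalar plant $x_{k+1} = 0.5\,x_k + w_k$, $y_k = x_k$, for which $\|F\|_2 = \sqrt{4/3}$ and $\|F\|_\infty = 2$:

```python
# Anisotropic norm via Riccati/Lyapunov fixed points and bisection over q.
import numpy as np

def aniso_quantities(A, B, C, D, q, iters=400):
    """Evaluate a(q) and ||F||_a^2 for a fixed q (hypothetical helper)."""
    n, m = B.shape
    R = np.zeros((n, n))
    for _ in range(iters):                          # Riccati fixed point
        Sg = np.linalg.inv(np.eye(m) - q * (D.T @ D) - B.T @ R @ B)
        L = Sg @ (B.T @ R @ A + q * (D.T @ C))
        R = A.T @ R @ A + q * (C.T @ C) + L.T @ np.linalg.inv(Sg) @ L
    P, Acl = np.zeros((n, n)), A + B @ L
    for _ in range(iters):                          # Lyapunov fixed point
        P = Acl @ P @ Acl.T + B @ Sg @ B.T
    t = np.trace(L @ P @ L.T + Sg)
    a_q = -0.5 * np.log(np.linalg.det(m * Sg / t))  # log-det condition value
    return a_q, (1.0 - m / t) / q                   # (a(q), ||F||_a^2)

def aniso_norm(A, B, C, D, a, qmax, steps=60):
    """Bisection over q in (0, qmax): a(q) is strictly increasing in q."""
    lo, hi = 0.0, qmax
    for _ in range(steps):
        q = 0.5 * (lo + hi)
        aq, n2 = aniso_quantities(A, B, C, D, q)
        lo, hi = (q, hi) if aq < a else (lo, q)
    return np.sqrt(n2)

A = np.array([[0.5]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])
norm_a = aniso_norm(A, B, C, D, a=0.3, qmax=0.25)   # qmax = ||F||_inf^{-2}
print(norm_a)   # strictly between ||F||_2 ~ 1.155 and ||F||_inf = 2
```

The computed value lies strictly between $\|F\|_2/\sqrt{m}$ and $\|F\|_\infty$, as expected from the limiting cases $a \to 0$ and $a \to \infty$.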

3. σ -Entropy Analysis Method

In this section, we provide the axiomatics of the spectral entropy approach to the analysis of linear time-invariant systems.

3.1. Basic Definitions of σ -Entropy Theory

Consider a plant described by a linear discrete stationary system with a zero initial condition:
$$x_{k+1} = A x_k + B w_k, \quad x_0 = 0, \qquad z_k = C x_k + D w_k, \quad k = 0, 1, 2, \ldots, \qquad (10)$$
where $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$, $C \in \mathbb{R}^{p\times n}$, $D \in \mathbb{R}^{p\times m}$, $x_k \in \mathbb{R}^n$ is a state vector, $z_k \in \mathbb{R}^p$ is an output vector, and $w_k \in \mathbb{R}^m$ is a random input disturbance. The transfer function of system (10) is given by $F(z) = C(zI - A)^{-1}B + D$. System (10) is assumed to be asymptotically stable (i.e., the spectral radius of the matrix $A$ satisfies $\varrho(A) < 1$), controllable, and observable. The system is affected by a random sequence $W = \{w_k\}_{k\in\mathbb{Z}}$ with zero mean,
$$\mathbf{E} w_k = 0, \qquad (11)$$
and finite $\ell_2$ norm $\|W\|_2$,
$$\|W\|_2^2 = \sum_{k=-\infty}^{+\infty} \mathbf{E}|w_k|^2 < \infty, \qquad (12)$$
or finite power norm $\|W\|_{\mathcal{P}}$,
$$\|W\|_{\mathcal{P}}^2 = \lim_{N\to+\infty} \frac{1}{2N+1}\sum_{k=-N}^{N} \mathbf{E}|w_k|^2 < \infty. \qquad (13)$$
Hereinafter, $|w_k|$ is the Euclidean norm of the vector $w_k \in \mathbb{R}^m$.
Obviously, the analyses of system (10) with the input signals (12) and (13) are two different problems, and they have been solved separately in [15]; however, there is a lot in common between them. To demonstrate this, following the paper [14], we unify the description of the input signals and introduce the $\mathcal{N}$ norm of the signal in the form
$$\|W\|_{\mathcal{N}}^2 = \mathcal{N}\left[w_k^T w_k\right],$$
where $\mathcal{N}$ is a linear operator that transforms the Euclidean norm $|w_k|^2 = w_k^T w_k$ into the $\ell_2$ norm or into the power norm of the stochastic signal according to the following rule:
$$\mathcal{N}[\,\cdot\,] = \begin{cases} \displaystyle\sum_{k=-\infty}^{+\infty} \mathbf{E}[\,\cdot\,] & \text{for the } \ell_2 \text{ norm}, \\[2ex] \displaystyle\lim_{N\to\infty} \frac{1}{2N+1} \sum_{k=-N}^{N} \mathbf{E}[\,\cdot\,] & \text{for the } \mathcal{P} \text{ norm}. \end{cases}$$
Define the correlation convolution $K(l)$ of the input sequence $\{w_k\}$ via the $\mathcal{N}$ operator as follows:
$$K(l) = \mathcal{N}\left[w_{k+l}\, w_k^T\right].$$
The Fourier transform of the correlation convolution provides
$$S_w(\omega) = \frac{1}{2\pi} \sum_{l=-\infty}^{+\infty} K(l)\, e^{-i\omega l}.$$
The matrix $S_w(\omega)$, being a spectral density matrix of the sequence $W$, is Hermitian and positive definite:
$$S_w(\omega) = S_w^*(\omega) > 0, \qquad (15)$$
with $*$ denoting Hermitian conjugation.
The inverse Fourier transform allows us to express the correlation convolution $K(l)$ in a standard way via the spectral density matrix $S_w(\omega)$:
$$K(l) = \int_{-\pi}^{+\pi} S_w(\omega)\, e^{i\omega l}\, d\omega.$$
Since the $\mathcal{N}$ norm of the input sequence $W$ equals
$$\|W\|_{\mathcal{N}}^2 = \operatorname{tr} K(0),$$
the $\mathcal{N}$ norm of the sequence $W$ can be expressed in terms of the spectral density matrix $S_w(\omega)$ as
$$\|W\|_{\mathcal{N}}^2 = \int_{-\pi}^{+\pi} \operatorname{tr} S_w(\omega)\, d\omega.$$
Similarly, the $\mathcal{N}$ norm of the output sequence $Z$ is
$$\|Z\|_{\mathcal{N}}^2 = \int_{-\pi}^{+\pi} \operatorname{tr} S_z(\omega)\, d\omega,$$
where $S_z(\omega)$ is the spectral density of the output sequence $Z$, expressed in terms of the transfer matrix $G(z) = C(zI - A)^{-1}B + D$ of system (10) and the spectral density of the input signal $S_w(\omega)$ as follows [22]:
$$S_z(\omega) = G(e^{i\omega})\, S_w(\omega)\, G^*(e^{i\omega}).$$
Consequently, the $\mathcal{N}$ norm of the output $Z$ equals
$$\|Z\|_{\mathcal{N}}^2 = \int_{-\pi}^{+\pi} \operatorname{tr}\left[G^*(e^{i\omega})\, G(e^{i\omega})\, S_w(\omega)\right] d\omega.$$
Now define the gain $\Theta$ of system (10), with the input sequence $W$ satisfying either condition (12) or (13), as the ratio of the $\mathcal{N}$ norm of the output sequence $Z$ to the $\mathcal{N}$ norm of the input sequence $W$:
$$\Theta^2 = \frac{\|Z\|_{\mathcal{N}}^2}{\|W\|_{\mathcal{N}}^2} = \frac{\int_{-\pi}^{+\pi} \operatorname{tr}\left[\Lambda(\omega)\, S_w(\omega)\right] d\omega}{\int_{-\pi}^{+\pi} \operatorname{tr} S_w(\omega)\, d\omega}, \qquad (17)$$
where $\Lambda(\omega) = G^*(e^{i\omega})\, G(e^{i\omega})$.
Now, following the paper [14], introduce an integral characteristic of the input signal, namely the σ-entropy:
$$\mathcal{S}(W) = -\frac{\alpha}{2} \int_{-\pi}^{+\pi} \ln \det \frac{\beta\, S_w(\omega)}{\int_{-\pi}^{+\pi} \operatorname{tr} S_w(\omega)\, d\omega}\, d\omega. \qquad (18)$$
The concept of the σ-entropy is introduced axiomatically: only the functional relationship between $\mathcal{S}$ and $S_w$ is strictly postulated, while the coefficients in front of and under the integral should be determined from some additional assumptions. Let us briefly discuss these coefficients. First of all, the minus sign in front of the integral has a physical sense: entropy should be a non-negative value, and the argument of $\ln(\cdot)$ is less than one, so the minus sign is necessary. The scalar parameters $\alpha$ and $\beta$ should be positive; their specific values are not important here, but they become meaningful when calculating the total entropy of a system that contains the control plant (10).
Define the σ-entropy norm $\|F\|_s$ of system (10) for a random input sequence (12) or (13) as the maximal gain (17) over all input sequences whose σ-entropy (18) does not exceed a given non-negative value $s$:
$$\|F\|_s^2 = \sup_{\mathcal{S}(W)\le s} \Theta^2 = \sup_{\mathcal{S}(W)\le s} \frac{\int_{-\pi}^{+\pi} \operatorname{tr}\left[\Lambda(\omega)\, S_w(\omega)\right] d\omega}{\int_{-\pi}^{+\pi} \operatorname{tr} S_w(\omega)\, d\omega}. \qquad (19)$$
The inequality $\mathcal{S}(W) \le s$, with $\mathcal{S}(W)$ given by (18), defines the set of all possible random signals $W$ satisfying (11)–(13) whose σ-entropy does not exceed the scalar value $s \ge 0$.
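As a quick illustration (a sketch, not from the paper, using the coefficient choice $\alpha = 1/(2\pi)$, $\beta = 2\pi m$ adopted later in Section 3.3): for white noise the spectral density is constant, $S_w(\omega) = c I_m$, and the σ-entropy (18) evaluates to zero, mirroring the zero mean anisotropy of white Gaussian noise:

```python
# Sigma-entropy (18) of white noise with constant spectral density c*I_m.
import numpy as np

m, c = 3, 0.7                               # hypothetical dimension and level
alpha, beta = 1 / (2 * np.pi), 2 * np.pi * m
Sw = c * np.eye(m)                          # constant spectral density matrix
tr_int = 2 * np.pi * np.trace(Sw)           # integral of tr S_w over [-pi, pi]
logdet = np.log(np.linalg.det(beta * Sw / tr_int))  # constant in omega
S = -0.5 * alpha * 2 * np.pi * logdet       # integral of a constant over [-pi, pi]
print(S)                                    # sigma-entropy of white noise vanishes
```

With $\beta = 2\pi m$, the normalized density $\beta S_w / \int \operatorname{tr} S_w\, d\omega$ equals $I_m$, so the log-determinant, and hence the σ-entropy, is exactly zero regardless of the noise level $c$.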

3.2. σ -Entropy Norm Computation in the Frequency Domain

Before calculating the σ-entropy norm (19), consider a property of the σ-entropy (18) that will allow us to calculate the norm of the system in two stages. From a set-theoretic point of view, the σ-entropy generates an equivalence relation on the set of input sequences. Indeed, the σ-entropy allows one to introduce, for any pair of sequences, the binary equivalence relation $\sim$ of being “isoentropic” (i.e., having equal σ-entropies) on the set of input sequences $\mathcal{W}$. The isoentropy relation introduced in this way satisfies all the equivalence conditions [21], i.e., for any input signals $\{w_k\}, \{v_k\}, \{u_k\} \in \mathcal{W}$, this relation is
  • Reflexive, as
    $\{w_k\} \sim \{w_k\}$;
  • Symmetric, as
    $\{w_k\} \sim \{v_k\} \Rightarrow \{v_k\} \sim \{w_k\}$;
  • Transitive, as
    $\{w_k\} \sim \{v_k\} \wedge \{v_k\} \sim \{u_k\} \Rightarrow \{w_k\} \sim \{u_k\}$.
Like any equivalence relation, the isoentropy relation on the set of input sequences $\mathcal{W}$ generates a partition of the set $\mathcal{W}$ into disjoint equivalence classes, and each class $K_s$ is characterized by its σ-entropy value $s$. Define the σ-entropy gain of system (10) as the maximal gain (17) $\Theta_s$ on the isoentropic class $K_s$ of input sequences whose σ-entropy (18) equals the given value $s$:
$$\Theta_s^2 = \sup_{\mathcal{S}(W)=s} \Theta^2 = \sup_{\mathcal{S}(W)=s} \frac{\int_{-\pi}^{+\pi} \operatorname{tr}\left[\Lambda(\omega)\, S_w(\omega)\right] d\omega}{\int_{-\pi}^{+\pi} \operatorname{tr} S_w(\omega)\, d\omega}. \qquad (20)$$
The set of all equivalence classes $K_s$ of the set $\mathcal{W}$ with respect to the isoentropy relation $\sim$ forms a factor set $\mathcal{W}/\sim$. Hence, the σ-entropy norm (19) of the system is an induced norm on a subset of the factor set $\mathcal{W}/\sim$; the “power” of this subset is defined by
$$\mathcal{S}(K_s) \le s.$$
Theorem 2.
For any $s \ge 0$, the σ-entropy gain (20) of system (10), whose input is a stochastic discrete-time signal with finite $\mathcal{N}$ norm, is calculated as
$$\Theta_s^2 = \frac{\int_{-\pi}^{+\pi} \operatorname{tr}\left[\Lambda(\omega)\left(I - q_s \Lambda(\omega)\right)^{-1}\right] d\omega}{\int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q_s \Lambda(\omega)\right)^{-1} d\omega}, \qquad (21)$$
where the parameter $q_s \in \left[0, \|F\|_\infty^{-2}\right)$ is the unique solution of the equation
$$-\frac{\alpha}{2} \int_{-\pi}^{+\pi} \ln \det \frac{\beta \left(I - q_s \Lambda(\omega)\right)^{-1}}{\int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q_s \Lambda(\omega)\right)^{-1} d\omega}\, d\omega = s. \qquad (22)$$
Proof. 
The definition of the σ-entropy gain $\Theta_s^2$ can be reformulated in the following form:
$$\Theta_s^2 = \sup_{S_w(\omega)} \left\{ \int_{-\pi}^{+\pi} \operatorname{tr}\left[\Lambda(\omega)\, S_w(\omega)\right] d\omega \,:\, -\frac{\alpha}{2}\int_{-\pi}^{+\pi} \ln\det \beta S_w(\omega)\, d\omega = s, \ \int_{-\pi}^{+\pi} \operatorname{tr} S_w(\omega)\, d\omega = 1 \right\}. \qquad (23)$$
Find the maximum of this expression by the method of indefinite Lagrange multipliers. In this case, the Lagrange function has the following form:
$$\mathcal{L}[S_w] = \int_{-\pi}^{+\pi} \operatorname{tr}\left[\Lambda(\omega)\, S_w(\omega)\right] d\omega + \lambda_1 \left( s + \frac{\alpha}{2}\int_{-\pi}^{+\pi} \ln\det \beta S_w(\omega)\, d\omega \right) + \lambda_2 \left( 1 - \int_{-\pi}^{+\pi} \operatorname{tr} S_w(\omega)\, d\omega \right),$$
where $\lambda_1, \lambda_2$ are Lagrange multipliers. The extremum-seeking problem reduces to finding a spectral density at which the variation $\delta\mathcal{L}[S_w]$ of the Lagrange function,
$$\delta\mathcal{L}[S_w] = \int_{-\pi}^{+\pi} \operatorname{tr}\left[\Lambda(\omega)\,\delta S_w(\omega)\right] d\omega + \frac{\alpha\lambda_1}{2}\int_{-\pi}^{+\pi} \operatorname{tr}\left[S_w^{-1}(\omega)\,\delta S_w(\omega)\right] d\omega - \lambda_2 \int_{-\pi}^{+\pi} \operatorname{tr}\delta S_w(\omega)\, d\omega,$$
equals zero:
$$\delta\mathcal{L}[S_w] = \int_{-\pi}^{+\pi} \operatorname{tr}\left\{\left[\Lambda(\omega) + \frac{\alpha\lambda_1}{2} S_w^{-1}(\omega) - \lambda_2 I\right] \delta S_w(\omega)\right\} d\omega = 0.$$
For $\delta\mathcal{L}[S_w] = 0$ to hold, it is necessary and sufficient that the expression in the square brackets equals zero:
$$\Lambda(\omega) + \frac{\alpha\lambda_1}{2} S_w^{-1}(\omega) - \lambda_2 I = 0.$$
Thus, the spectral density matrix is equal to
$$S_w(\omega) = \frac{\alpha\lambda_1}{2\lambda_2}\left(I - \frac{1}{\lambda_2}\Lambda(\omega)\right)^{-1},$$
or, equivalently,
$$S_w(\omega) = \alpha p \left(I - q\Lambda(\omega)\right)^{-1}, \qquad (24)$$
where the parameters $p = \frac{\lambda_1}{2\lambda_2}$ and $q = \frac{1}{\lambda_2}$ can be found from the following conditions:
$$\mathcal{S}(W) = s, \qquad (25)$$
$$\int_{-\pi}^{+\pi} \operatorname{tr} S_w(\omega)\, d\omega = 1. \qquad (26)$$
According to Equation (24), the spectral density matrix $S_w(\omega)$ commutes with the matrix $\Lambda(\omega)$:
$$\Lambda(\omega)\, S_w(\omega) - S_w(\omega)\, \Lambda(\omega) = 0.$$
In addition, the matrices $\Lambda(\omega)$ and $S_w(\omega)$ are Hermitian and, respectively, non-negatively and positively defined, as $\Lambda(\omega)$ has the form $\Lambda(\omega) = G^*(e^{i\omega})\, G(e^{i\omega})$, and the matrix $S_w(\omega)$ satisfies condition (15). It leads to
$$\Lambda^*(\omega) = \Lambda(\omega) \ge 0, \qquad S_w^*(\omega) = S_w(\omega) > 0.$$
Hence, there exists a unitary matrix $U^*(\omega) = U^{-1}(\omega)$ that brings these two matrices to diagonal form simultaneously [23], and the eigenvalues $\lambda_i(\omega)$ and $s_i(\omega)$ of the matrices $\Lambda(\omega)$ and $S_w(\omega)$ are connected by the relation
$$s_i(\omega) = \frac{\alpha p}{1 - q\,\lambda_i(\omega)}.$$
For all the eigenvalues $s_i(\omega)$ of the matrix $S_w(\omega)$ to be positive, it is necessary that the parameter $q$ be bounded:
$$0 \le q < \left(\max_\omega \lambda_{\max}(\omega)\right)^{-1} = \|F\|_\infty^{-2},$$
where $\lambda_{\max}(\omega)$ is the maximal eigenvalue of the matrix $\Lambda(\omega)$.
To find the parameter $p$, apply the normalization condition (26) for the spectral density of the input signal:
$$\int_{-\pi}^{+\pi} \operatorname{tr} S_w(\omega)\, d\omega = 1.$$
Substituting the expression (24) for the spectral density $S_w(\omega)$ into this equality, we straightforwardly obtain
$$p = \frac{1}{\alpha \int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q\Lambda(\omega)\right)^{-1} d\omega}.$$
Hence, the spectral density on which gain (23) reaches its maximal value is equal to
$$S_w(q, \omega) = \frac{\left(I - q\Lambda(\omega)\right)^{-1}}{\int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q\Lambda(\omega)\right)^{-1} d\omega}. \qquad (27)$$
Notice that the spectral density matrix $S_w(q,\omega)$ is proportional to the resolvent $R(\mu) = (\Lambda - \mu I)^{-1}$ of the linear operator $\Lambda$:
$$S_w(q, \omega) \propto \left[\Lambda(\omega) - \frac{1}{q} I\right]^{-1}.$$
Since the resolvent is an analytical function on each connected component of the resolvent set of the operator $\Lambda$ [24], the spectral density $S_w(q)$ is also an analytical function of $q \in \left[0, \|F\|_\infty^{-2}\right)$, and the variable $q$ parametrizes the set of spectral density matrices.
Substitute the expression (27) for the spectral density into the definition (18) of the σ-entropy:
$$\mathcal{S}(q) = -\frac{\alpha}{2} \int_{-\pi}^{+\pi} \ln \det \frac{\beta \left(I - q\Lambda(\omega)\right)^{-1}}{\int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q\Lambda(\omega)\right)^{-1} d\omega}\, d\omega.$$
As the function $\mathcal{S}(q)$ is analytic, strictly increasing, and convex [10,25] on the half-interval $\left[0, \|F\|_\infty^{-2}\right)$, Equation (25) has a unique solution
$$q_s = \mathcal{S}^{-1}(s).$$
Substituting expression (27) into the formula (20) for the σ-entropy gain, one can obtain
$$\Theta_s^2 = \frac{\int_{-\pi}^{+\pi} \operatorname{tr}\left[\Lambda(\omega)\left(I - q_s\Lambda(\omega)\right)^{-1}\right] d\omega}{\int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q_s\Lambda(\omega)\right)^{-1} d\omega}$$
with the parameter $q_s \in \left[0, \|F\|_\infty^{-2}\right)$ being the unique solution of
$$-\frac{\alpha}{2} \int_{-\pi}^{+\pi} \ln \det \frac{\beta\left(I - q_s\Lambda(\omega)\right)^{-1}}{\int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q_s\Lambda(\omega)\right)^{-1} d\omega}\, d\omega = s,$$
which finishes the proof of Theorem 2. □
Thus, the σ-entropy gain $\Theta_s$, i.e., the maximal gain on the isoentropic class $K_s$ of input sequences whose σ-entropy equals a given value $s$, can be calculated by Formula (21), while the log-determinant Equation (22) uniquely connects the parameter $q_s$ with the σ-entropy value $\mathcal{S}(K_s) = s$ of the isoentropic class $K_s$. Transforming (21) and (22) into
$$\Theta^2(q) = \frac{\int_{-\pi}^{+\pi} \operatorname{tr}\left[\Lambda(\omega)\left(I - q\Lambda(\omega)\right)^{-1}\right] d\omega}{\int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q\Lambda(\omega)\right)^{-1} d\omega}$$
and
$$s_q = -\frac{\alpha}{2} \int_{-\pi}^{+\pi} \ln \det \frac{\beta\left(I - q\Lambda(\omega)\right)^{-1}}{\int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q\Lambda(\omega)\right)^{-1} d\omega}\, d\omega,$$
we make sure that the variable $q$ parametrizes the σ-entropy gain $\Theta(q)$ of the isoentropy classes $K_q$, the σ-entropy $s_q$ of the classes $K_q$, and the set of isoentropy classes itself, i.e., the factor set $\mathcal{W}/\sim$. Moreover, both of these dependencies are strictly increasing functions of the variable $q$ [10,25]. This fact, together with the partition of the set of input signals into equivalence classes, allows for calculating the σ-entropy norm (19) of the system, $\|F\|_s^2 = \sup_{\mathcal{S}(W)\le s} \Theta^2$, in two steps:
  • Find the maximum gain on the equivalence class (this problem is solved by Theorem 2);
  • Determine the maximal value of the σ-entropy gains $\Theta(q)$ on the subset $\mathcal{W}_{q_s}$ of the factor set $\mathcal{W}/\sim$, where the subset $\mathcal{W}_{q_s}$ is given by the condition $0 \le q \le q_s$:
$$\|F\|_s^2 = \sup_{0\le q\le q_s} \Theta^2(q).$$
The function $\Theta(q)$ is strictly increasing and reaches its maximum at the right end of the interval; hence,
$$\|F\|_s^2 = \frac{\int_{-\pi}^{+\pi} \operatorname{tr}\left[\Lambda(\omega)\left(I - q_s\Lambda(\omega)\right)^{-1}\right] d\omega}{\int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q_s\Lambda(\omega)\right)^{-1} d\omega} \qquad (28)$$
with the parameter $q_s \in \left[0, \|F\|_\infty^{-2}\right)$ being the unique solution of
$$-\frac{\alpha}{2} \int_{-\pi}^{+\pi} \ln \det \frac{\beta\left(I - q_s\Lambda(\omega)\right)^{-1}}{\int_{-\pi}^{+\pi} \operatorname{tr}\left(I - q_s\Lambda(\omega)\right)^{-1} d\omega}\, d\omega = s. \qquad (29)$$
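These two formulae can be evaluated by direct quadrature. Below is a numerical sketch (assumed helper names, not the authors' code) for the scalar plant $F(z) = 1/(z - 0.5)$, so that $\Lambda(\omega) = 1/(1.25 - \cos\omega)$, $m = 1$, with the coefficients $\alpha = 1/(2\pi)$, $\beta = 2\pi m$ used in Section 3.3; $q_s$ is found by bisection, since the σ-entropy is strictly increasing in $q$:

```python
# Frequency-domain sigma-entropy norm (28)-(29) by trapezoidal quadrature.
import numpy as np

w = np.linspace(-np.pi, np.pi, 200001)
Lam = 1.0 / (1.25 - np.cos(w))                  # Lambda(w) = |F(e^{iw})|^2
alpha, beta = 1 / (2 * np.pi), 2 * np.pi        # beta = 2*pi*m with m = 1

def integ(f):                                   # trapezoidal rule, uniform grid
    return np.sum(0.5 * (f[1:] + f[:-1])) * (w[1] - w[0])

def s_and_norm(q):
    inv = 1.0 / (1.0 - q * Lam)                 # tr (I - q*Lambda)^{-1}, m = 1
    T = integ(inv)
    s = -0.5 * alpha * integ(np.log(beta * inv / T))   # sigma-entropy value
    return s, integ(Lam * inv) / T              # (s(q), squared norm)

def sigma_norm(s_level, qmax=0.25, steps=60):   # bisection: s(q) is increasing
    lo, hi = 0.0, qmax                          # qmax = ||F||_inf^{-2}
    for _ in range(steps):
        q = 0.5 * (lo + hi)
        s, n2 = s_and_norm(q)
        lo, hi = (q, hi) if s < s_level else (lo, q)
    return np.sqrt(n2)

print(s_and_norm(1e-9)[1])   # ~ 4/3 = ||F||_2^2: the s -> 0 limiting case
Fs = sigma_norm(0.3)
print(Fs)                    # strictly between ||F||_2 and ||F||_inf = 2
```

At $s \to 0$ the squared norm tends to $\|F\|_2^2/m = 4/3$, and for $s > 0$ the norm lies strictly between $\|F\|_2/\sqrt{m}$ and $\|F\|_\infty = 2$.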

3.3. σ -Entropy Norm Computation in the State Space

Before proving the theorem on the σ-entropy norm of a linear discrete-time stationary stochastic system, we formulate the lemma on the inner discrete-time system [26,27] and prove the lemma on the worst-case spectral density of the input signal.
Definition 4.
A system $\Upsilon(z) = C(zI - A)^{-1}B + D$ is called inner if $\Upsilon^*(z)\,\Upsilon(z) = I$.
Lemma 1.
Let $\Upsilon(z) = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right] \in \mathcal{RH}_\infty$, and let $X$ be an observability Gramian of the system $\Upsilon$. The system $\Upsilon$ is inner if and only if the following three conditions are satisfied:
$$A^T X A - X + C^T C = 0, \qquad B^T X A + D^T C = 0, \qquad B^T X B + D^T D = I,$$
where $\mathcal{RH}_\infty$ denotes the set of stable proper rational matrices.
Lemma 2.
Let $F(z) = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]$ be a transfer matrix of a discrete-time system $F$ and $0 \le q < \|F\|_\infty^{-2}$. Then the matrix
$$G(z) = \left[\begin{array}{c|c} A + BL & B M^{1/2} \\ \hline L & M^{1/2} \end{array}\right]$$
factorizes the matrix $S_*(\omega) = \left(I - q F^*(e^{i\omega})\, F(e^{i\omega})\right)^{-1}$ in the following form:
$$S_*(\omega) = G(e^{i\omega})\, G^*(e^{i\omega}), \qquad (30)$$
where the matrices $L$ and $M$ correspond to the admissible solution $R$ of the Riccati equation
$$R = A^T R A + q C^T C + L^T M^{-1} L, \qquad (31)$$
$$L = M\left(B^T R A + q D^T C\right), \qquad (32)$$
$$M = \left(I - B^T R B - q D^T D\right)^{-1}. \qquad (33)$$
Proof. 
The condition $0 \le q < \|F\|_\infty^{-2}$ means that $I - qF^*(e^{i\omega})F(e^{i\omega}) > 0$ for any $\omega \in [-\pi, \pi]$. The positive definiteness of the matrix $I - qF^*(e^{i\omega})F(e^{i\omega})$ and of its inverse guarantees the existence of the factorization $S_*(\omega) = G(e^{i\omega})G^*(e^{i\omega})$. Consequently, Equation (30) may be rewritten in the form $G^{-*}(e^{i\omega})\,G^{-1}(e^{i\omega}) = I - qF^*(e^{i\omega})F(e^{i\omega})$, where $G^{-*} = (G^*)^{-1}$. Denoting $\Upsilon(e^{i\omega}) = \begin{bmatrix} \sqrt{q}\,F(e^{i\omega}) \\ G^{-1}(e^{i\omega}) \end{bmatrix}$, the equation $\Upsilon^*(e^{i\omega})\,\Upsilon(e^{i\omega}) = I$ is satisfied, which means that the matrix $\Upsilon(e^{i\omega})$ is inner.
Choose the matrix $G(z)$ in the form
$$G(z) = \left[\begin{array}{c|c} A + BL & B M^{1/2} \\ \hline L & M^{1/2} \end{array}\right],$$
where $L$ and $M = M^T > 0$ are arbitrary matrices of the corresponding dimensions. By direct calculation, the inverse matrix $G^{-1}(z)$ is defined as
$$G^{-1}(z) = \left[\begin{array}{c|c} A & -B \\ \hline M^{-1/2} L & M^{-1/2} \end{array}\right].$$
It is natural to consider the matrix $\Upsilon(z)$ as the transfer matrix of the system composed of two parallel subsystems, $\sqrt{q}\,F(z)$ and $G^{-1}(z)$; then this transfer matrix can be calculated using the formula [27]
$$\Upsilon(z) = \begin{bmatrix} \sqrt{q}\,F(z) \\ G^{-1}(z) \end{bmatrix} = \left[\begin{array}{cc|c} A & 0 & B \\ 0 & A & -B \\ \hline \sqrt{q}\,C & 0 & \sqrt{q}\,D \\ 0 & M^{-1/2}L & M^{-1/2} \end{array}\right].$$
Since the dynamic parts of the two subsystems are identical, the state space realization of system Υ ( z ) has the following form:
Υ ( z ) = A B q C q D M 1 / 2 L M 1 / 2 .
Applying Lemma 1 to the system $\Upsilon(z)$, Riccati Equations (31)–(33) follow immediately. Consequently, the system $G(e^{i\omega})$ factorizes the matrix $S_*(\omega)=\left(I-qF^*(e^{i\omega})F(e^{i\omega})\right)^{-1}$, while the matrices $L$ and $M$ correspond to an admissible solution $R$ of Riccati Equations (31)–(33). □
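The factorization of Lemma 2 lends itself to a direct numerical check. In the sketch below, the system matrices and the value of $q$ are illustrative assumptions, Riccati Equations (31)–(33) are solved by a simple fixed-point iteration (an assumed numerical scheme, not prescribed by the lemma), and a Cholesky factor replaces $M^{1/2}$, which leaves the product $GG^*$ unchanged:

```python
import numpy as np

# Hypothetical stable scalar system F(z) = C(zI - A)^{-1}B + D.
A = np.array([[0.3]]); B = np.array([[1.0]]); C = np.array([[0.8]]); D = np.array([[0.1]])
q = 0.5  # must satisfy 0 <= q < ||F||_inf^{-2} (here ||F||_inf^{-2} is about 0.65)

# Fixed-point iteration of Riccati Equations (31)-(33), starting from R = 0.
R = np.zeros_like(A)
for _ in range(500):
    M = np.linalg.inv(np.eye(1) - B.T @ R @ B - q * D.T @ D)
    L = M @ (B.T @ R @ A + q * D.T @ C)
    R = A.T @ R @ A + q * C.T @ C + L.T @ np.linalg.inv(M) @ L

# Any factor N with N N^T = M may replace M^{1/2} in G without changing G G*;
# a Cholesky factor is used here.
N = np.linalg.cholesky(M)
for w in np.linspace(-np.pi, np.pi, 7):
    z = np.exp(1j * w)
    Fz = C @ np.linalg.inv(z * np.eye(1) - A) @ B + D
    Gz = L @ np.linalg.inv(z * np.eye(1) - (A + B @ L)) @ B @ N + N
    S = np.linalg.inv(np.eye(1) - q * Fz.conj().T @ Fz)
    assert np.allclose(S, Gz @ Gz.conj().T)  # S_*(w) = G G* holds at this frequency
print("factorization S_* = G G* verified on a frequency grid")
```

The assertion inside the loop checks $S_*(\omega)=G(e^{i\omega})G^*(e^{i\omega})$ pointwise on a small grid of frequencies.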
Theorem 3.
Let $F$ be a system with the state-space realization (10). Let $W$ be a stochastic input disturbance with bounded σ-entropy, i.e., $\mathcal{S}(S)=s$ with $s\geqslant 0$. Then, the σ-entropy norm of system (10) is calculated as
$$\|F\|_s^2=\frac{1}{q}\left(1-\frac{m}{\operatorname{tr}\left(LPL^{\mathrm T}+M\right)}\right),$$
where P satisfies the Lyapunov equation
$$P=(A+BL)P(A+BL)^{\mathrm T}+BMB^{\mathrm T},$$
matrices L and M correspond to an admissible solution R of Riccati Equations (31)–(33) and the special type equation
$$-\frac{1}{2}\ln\det\frac{mM}{\operatorname{tr}\left(LPL^{\mathrm T}+M\right)}=s.$$
Proof. 
Before obtaining the main result of the theorem, we first define the parameters $\alpha$ and $\beta$. For the convenience of calculation, set
$$\alpha=\frac{1}{2\pi},\qquad\beta=2\pi m.$$
The σ-entropy norm, defined in the form (28), can be transformed as follows:
$$\|F\|_s^2=\frac{\displaystyle\int_{-\pi}^{+\pi}\operatorname{tr}\left[\frac{1}{q}\bigl(I-(I-q\Lambda(\omega))\bigr)\left(I-q\Lambda(\omega)\right)^{-1}\right]d\omega}{\displaystyle\int_{-\pi}^{+\pi}\operatorname{tr}\left(I-q\Lambda(\omega)\right)^{-1}d\omega}=\frac{1}{q}-\frac{2\pi m}{q}\cdot\frac{1}{\displaystyle\int_{-\pi}^{+\pi}\operatorname{tr}\left(I-qF^*(e^{i\omega})F(e^{i\omega})\right)^{-1}d\omega}.$$
Application of Lemma 2 to the denominator of (37) leads to
$$\int_{-\pi}^{+\pi}\operatorname{tr}\left(I-qF^*(e^{i\omega})F(e^{i\omega})\right)^{-1}d\omega=\int_{-\pi}^{+\pi}\operatorname{tr}\left(G^*(e^{i\omega})G(e^{i\omega})\right)d\omega,$$
where $F(z)=\left[\begin{array}{c|c}A&B\\\hline C&D\end{array}\right]$ and $G(z)=\left[\begin{array}{c|c}A+BL&BM^{1/2}\\\hline L&M^{1/2}\end{array}\right]$. The integral on the right-hand side of Equation (38) is, up to a factor of $2\pi$, equal to the squared $\mathcal{H}_2$ norm of the matrix $G$:
$$\int_{-\pi}^{+\pi}\operatorname{tr}\left(G^*(e^{i\omega})G(e^{i\omega})\right)d\omega=2\pi\|G\|_2^2=2\pi\operatorname{tr}\left(LPL^{\mathrm T}+M\right),$$
where the matrix $P$ is the controllability Gramian that satisfies Lyapunov Equation (35). Substituting (39) and (38) into (37), one obtains (34).
Now consider the equation of the form (22) and transform its left-hand side:
$$\int_{-\pi}^{+\pi}\ln\det\frac{2\pi m\left(I-q\Lambda(\omega)\right)^{-1}}{\int_{-\pi}^{+\pi}\operatorname{tr}\left(I-q\Lambda(\omega)\right)^{-1}d\omega}\,d\omega=2\pi m\ln\frac{2\pi m}{\int_{-\pi}^{+\pi}\operatorname{tr}\left(I-q\Lambda(\omega)\right)^{-1}d\omega}+\int_{-\pi}^{+\pi}\ln\det\left(I-q\Lambda(\omega)\right)^{-1}d\omega.$$
According to (38) and (39), the integral in the denominator of the first term is equal to $\int_{-\pi}^{+\pi}\operatorname{tr}\left(I-q\Lambda(\omega)\right)^{-1}d\omega=2\pi\operatorname{tr}\left(LPL^{\mathrm T}+M\right)$, while the second term, following [6], equals $2\pi\ln\det M$; therefore,
$$\frac{1}{2\pi}\int_{-\pi}^{+\pi}\ln\det\frac{2\pi m\left(I-q\Lambda(\omega)\right)^{-1}}{\int_{-\pi}^{+\pi}\operatorname{tr}\left(I-q\Lambda(\omega)\right)^{-1}d\omega}\,d\omega=\ln\det\frac{mM}{\operatorname{tr}\left(LPL^{\mathrm T}+M\right)},$$
which means that the special type Equation (36) coincides with (22), completing the proof. □
Note that the multiplier $\beta$ is selected as $\beta=2\pi m$ to satisfy $\sigma(W)=0$ for $q=0$, i.e., to deal with white Gaussian noise. The multiplier $\alpha$ is set equal to $\frac{1}{2\pi}$ in order to provide matching between the analytical expressions of the mean anisotropy and the spectral entropy in the state-space domain.
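The computation prescribed by Theorem 3 can be sketched as follows. The scalar system, the chosen values of $q$, and the fixed-point solution of Riccati Equations (31)–(33) are illustrative assumptions; for a target entropy level $s$, one would additionally solve Equation (36) for $q$ (e.g., by bisection, since $s$ grows monotonically with $q$):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical scalar system F(z) = C(zI - A)^{-1}B + D.
A = np.array([[0.3]]); B = np.array([[1.0]]); C = np.array([[0.8]]); D = np.array([[0.1]])
m = B.shape[1]  # input dimension

def sigma_entropy_point(q, iters=500):
    """For a given q, return (||F||_s^2, s) via Equations (31)-(36)."""
    R = np.zeros_like(A)
    for _ in range(iters):  # fixed-point iteration of Riccati Equations (31)-(33)
        M = np.linalg.inv(np.eye(m) - B.T @ R @ B - q * D.T @ D)
        L = M @ (B.T @ R @ A + q * D.T @ C)
        R = A.T @ R @ A + q * C.T @ C + L.T @ np.linalg.inv(M) @ L
    # Lyapunov Equation (35): P = (A+BL) P (A+BL)^T + B M B^T
    P = solve_discrete_lyapunov(A + B @ L, B @ M @ B.T)
    t = np.trace(L @ P @ L.T + M)
    norm_sq = (1.0 - m / t) / q                   # Equation (34)
    s = -0.5 * np.log(np.linalg.det(m * M / t))   # Equation (36)
    return norm_sq, s

# Both the norm and the entropy level grow as q approaches ||F||_inf^{-2}.
for q in (1e-6, 0.3, 0.6):
    norm_sq, s = sigma_entropy_point(q)
    print(f"q={q:g}: ||F||_s^2 = {norm_sq:.4f}, s = {s:.4f}")
```

As $q\to 0$ the pair $(\|F\|_s^2,\,s)$ tends to the white-noise values $\left(\|F\|_2^2/m,\,0\right)$, matching the remark on the choice of $\beta$ above.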

4. Discussion

The axioms underlying the anisotropy-based and spectral entropy approaches to the analysis of linear time-invariant systems have been considered above. It was demonstrated that the anisotropic and σ-entropy norms have similar mathematical formulae and are defined as the supremum of the ratio between the power norm of the output and the power norm of the input over the set of input signals bounded by the mean anisotropy or the spectral entropy level, respectively. The substantial difference lies in the axioms used to define the anisotropy or spectral entropy level. In the anisotropy-based approach, the relative entropy of information theory and the Kolmogorov–Szegő theorem are used to carry the description of the random disturbance from the time domain to the frequency domain. Consequently, some significant limitations have to be taken into account. Firstly, the relative entropy requires the use of white Gaussian noise as a reference signal. Secondly, the Kolmogorov–Szegő formula is valid only for stationary signals [20]. Thus, a substantial drawback of the anisotropy-based control theory is that it guarantees a given disturbance attenuation level only for stationary random sequences. In the spectral entropy approach, there is no reference signal, and the spectral entropy level can be calculated for all signals that have a Fourier transform. This means that the set of input signals with bounded spectral entropy (36) is wider than the set of signals with bounded mean anisotropy (8), although both formulae are similar. The frequency and time domain formulae for the calculation of the anisotropic and spectral entropy norms of linear time-invariant systems are identical. Consequently, the axiomatics of the spectral entropy approach allows one to rigorously prove that anisotropy-based controllers guarantee the desired disturbance attenuation level not only for stationary random sequences but also for a wider set of input signals.
Some results on the optimal and suboptimal anisotropy-based control design can be found in [10].
It should also be noted that postulating the spectral entropy in the form (36) allows one to develop spectral entropy control for continuous-time linear systems. The first results on the analysis of linear continuous-time systems are given in [14,28,29,30].

Author Contributions

Conceptualization, V.A.B. and A.A.B.; methodology, V.A.B.; formal analysis, V.A.B., A.A.B. and O.G.A.; investigation, V.A.B., A.A.B. and O.G.A.; writing—original draft preparation, V.A.B. and O.G.A.; writing—review and editing, A.A.B. and O.G.A.; supervision, A.A.B. and O.G.A.; project administration, A.A.B. and O.G.A.; funding acquisition, A.A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by Russian Science Foundation grant number 23-21-00306, https://rscf.ru/project/23-21-00306/, accessed on 27 January 2023.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Roxin, E. Axiomatic foundation of the theory of control systems. IFAC Proc. Vol. 1963, 1, 640–644.
2. Adams, K.; Hester, P.; Bradley, J.; Meyers, T.; Keating, C. Systems Theory as the Foundation for Understanding Systems. Syst. Eng. 2014, 17, 112–123.
3. Suh, N.P. Axiomatic Design Theory for Systems. Res. Eng. Des. 1998, 10, 189–209.
4. Yoong, C.; Palleti, V.; Maiti, R.; Silva, A.; Poskitt, C. Deriving invariant checkers for critical infrastructure using axiomatic design principles. Cybersecurity 2021, 4.
5. Semyonov, A.; Vladimirov, I.; Kurdyukov, A. Stochastic approach to H∞-optimization. In Proceedings of the 33rd IEEE Conference on Decision and Control, Lake Buena Vista, FL, USA, 14–16 December 1994; Volume 3, pp. 2249–2250.
6. Vladimirov, I.; Kurdyukov, A.; Semyonov, A. Anisotropy of signals and entropy of linear stationary systems. Dokl. Math. 1995, 51, 388–390.
7. Vladimirov, I.; Kurdyukov, A.; Semyonov, A. On computing the anisotropic norm of linear discrete-time-invariant systems. IFAC Proc. Vol. 1996, 29, 179–184.
8. Vladimirov, I.; Kurdyukov, A.; Semyonov, A. Asymptotics of the anisotropic norm of linear time-independent systems. Autom. Remote Control 1999, 60, 359–366.
9. Diamond, P.; Vladimirov, I.; Kurdyukov, A.; Semyonov, A. Anisotropy-based performance analysis of linear discrete time invariant control systems. Int. J. Control 2001, 74, 28–42.
10. Kurdyukov, A.; Andrianova, O.; Belov, A.; Gol’din, D. In Between the LQG/H2- and H∞-Control Theories. Autom. Remote Control 2021, 82, 565–618.
11. Tchaikovsky, M.; Timin, V.; Kurdyukov, A. Synthesis of Anisotropic Suboptimal PID Controller for Linear Discrete Time-Invariant System with Scalar Control Input and Measured Output. Autom. Remote Control 2019, 80, 1681–1693.
12. Stoica, A.; Yaesh, I. Static Output Feedback Design in an Anisotropic Norm Setup. Ann. Acad. Rom. Sci. Ser. Math. Its Appl. 2020, 12, 425–438.
13. Belov, A. Robust Pole Placement and Random Disturbance Rejection for Linear Polytopic Systems with Application to Grid-Connected Converters. Eur. J. Control 2021, 63, 116–125.
14. Kurdyukov, A.; Boichenko, V. A Spectral Method of the Analysis of Linear Control Systems. Int. J. Appl. Math. Comput. Sci. 2019, 29, 667–679.
15. Boichenko, V.; Kurdyukov, A.; Kustov, A. From the Anisotropy-based Theory towards the σ-entropy Theory. In Proceedings of the 2018 15th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 5–7 September 2018; pp. 1–6.
16. Tchaikovsky, M.M.; Timin, V.N. Synthesis of Anisotropic Suboptimal Control for Linear Time-Varying Systems on Finite Time Horizon. Autom. Remote Control 2017, 78, 1203–1217.
17. Maximov, E.; Kurdyukov, A.; Vladimirov, I. Anisotropic Norm Bounded Real Lemma for Linear Discrete Time Varying Systems. IFAC Proc. Vol. 2011, 18, 4701–4706.
18. Vladimirov, I.; Diamond, P.; Kloeden, P. Anisotropy-based robust performance analysis of finite horizon linear discrete time varying systems. Autom. Remote Control 2006, 67, 1265–1282.
19. Pinsker, M. The Amount of Information about a Gaussian Random Process Contained in the Second Process Stationarily Associated with It. Dokl. Akad. Nauk SSSR 1954, 98, 213–216. (In Russian)
20. Bulinski, A.; Shiryaev, A. Theory of Random Processes; Fizmatlit: Moscow, Russia, 2005. (In Russian)
21. Maltsev, A. Algebraic Systems; Nauka: Moscow, Russia, 1970. (In Russian)
22. Zhou, K.; Glover, K.; Bodenheimer, B.; Doyle, J. Mixed H2 and H∞ Performance Objectives I: Robust Performance Analysis. IEEE Trans. Autom. Control 1994, 39, 1564–1574.
23. Gantmacher, F. The Theory of Matrices; AMS Chelsea Publishing: New York, NY, USA, 1960.
24. Plesner, A. Spectral Theory of Linear Operators; Nauka: Moscow, Russia, 1967. (In Russian)
25. Kurdyukov, A.; Maximov, E.; Tchaikovsky, M. Anisotropy-Based Bounded Real Lemma. In Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems, Budapest, Hungary, 5–9 July 2010.
26. Tsai, M. On discrete spectral factorizations—A unified approach. IEEE Trans. Autom. Control 1993, 38, 1563–1567.
27. Bernstein, D. Matrix Mathematics; Princeton University Press: Princeton, NJ, USA, 2005.
28. Boichenko, V.; Belov, A. On σ-entropy Analysis of Linear Stochastic Systems in State Space. Syst. Theory Control Comput. J. 2021, 1, 30–35.
29. Belov, A.; Boichenko, V. σ-entropy Analysis of LTI Continuous-Time Systems with Stochastic External Disturbance in Time Domain. In Proceedings of the 2020 24th International Conference on System Theory, Control and Computing (ICSTCC), Sinaia, Romania, 8–10 October 2020; pp. 184–189.
30. Boichenko, V.; Belov, A. On Calculation of σ-entropy Norm of Continuous Linear Time-Invariant Systems. In Proceedings of the 2020 15th International Conference on Stability and Oscillations of Nonlinear Control Systems (Pyatnitskiy’s Conference) (STAB), Moscow, Russia, 3–5 June 2020; pp. 1–4.