In this section, we sequentially apply two different constrained equivalence transformations to the system. In the process, part of the observation equation is combined with the state equation so that the state equation becomes a square singular system that is both regular and observable. The system is thereby transformed into one that can be estimated using state estimation methods for square singular systems. We therefore first introduce the two constrained equivalence transformation methods that will be used.
3.2. The System Transformation Based on Constrained Equivalence Transformation
We have obtained Equation (3) through the first type of restricted equivalence transformation. Let ; then Equation (3) can be decomposed into:
where , .
Theorem 1. The matrix is a matrix with full row rank.
Proof. According to Assumption 2, , and both matrix U and matrix V are orthogonal. Recall that multiplying a matrix by an invertible matrix leaves its rank unchanged. Therefore, we have:
Hence is of full row rank, which means that . Thus, the matrix has full row rank. □
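The rank argument used in the proof, that multiplying by an invertible (in particular, orthogonal) matrix preserves rank, can be checked numerically. The sketch below uses generic placeholder matrices, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic "fat" matrix standing in for the block produced by the
# transformation; its shape and entries are illustrative placeholders.
M = rng.standard_normal((3, 5))

# An orthogonal (hence invertible) matrix, obtained here via QR.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))

rank_before = np.linalg.matrix_rank(M)
rank_after = np.linalg.matrix_rank(M @ Q)  # multiply by invertible matrix

print(rank_before, rank_after)  # the two ranks coincide
```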
Equation (9) can be transformed into:
Substituting , into Equation (2), it transforms into:
Lemma 1. The constrained equivalence transformation method does not change the observability of the system.
Matrix U and matrix V are both orthogonal matrices; therefore:
Hence, the system formed by Equations (3) and (13) remains observable.
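Lemma 1 can be illustrated numerically: an orthogonal change of state coordinates leaves the rank of the observability matrix unchanged. A minimal sketch on generic placeholder matrices (the names A, C, T below are assumptions, not the paper's symbols):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 4, 2

A = rng.standard_normal((n, n))   # placeholder state matrix
C = rng.standard_normal((p, n))   # placeholder output matrix

def obsv_rank(A, C):
    """Rank of the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.linalg.matrix_rank(np.vstack(blocks))

# An orthogonal change of coordinates, standing in for the U, V of the text.
T, _ = np.linalg.qr(rng.standard_normal((n, n)))
A2 = T @ A @ T.T          # transformed state matrix
C2 = C @ T.T              # transformed output matrix

print(obsv_rank(A, C), obsv_rank(A2, C2))  # equal ranks
```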
According to Equation (14) and given , the matrix is composed of at least linearly independent row vectors to ensure that . Without loss of generality, these row vectors come from the first rows of matrix , denoted as , ; then we have:
Let , , , , . Based on , Equation (13) can be rewritten as:
and Equation (16) can be divided into two parts:
Theorem 2. Matrix is an invertible matrix.
Proof. According to Equation (15), , where is a diagonal matrix. Thus, elementary transformations can be performed on matrix as:
Therefore, , and we can conclude that given . Hence, matrix is invertible. □
Theorem 3. In matrix , there exist at least row vectors, denoted as , such that .
Proof. According to Theorem 2, . According to Theorem 1, . An equivalent statement of Theorem 3 is: there are at least row vectors in the matrix that cannot be linearly represented by the row vectors of matrix . Suppose, for contradiction, that the negation holds: at most ( ) row vectors in the matrix cannot be linearly represented by the row vectors of matrix . Equivalently, there are at least l row vectors in the matrix that can be linearly represented by the row vectors of the matrix . From this and , we have:
Equation (19) contradicts . Therefore, there are at least row vectors in the matrix that cannot be linearly represented by the row vectors of matrix , i.e., there exist at least row vectors, denoted as , in matrix such that:
□
Without loss of generality, matrix comes from the first rows of matrix . Correspondingly, the first rows of are denoted as , . Based on Theorem 3, we can infer:
Matrix is a part of ; thus, let , , , , . Based on , Equation (17) can be rewritten as:
and Equation (21) can be divided into two parts:
Substituting Equation (12) into Equation (22):
Combining Equations (7) and (19):
where , . Let .
Theorem 4. Equation (25) is regular; in other words, there exists a complex number such that .
Proof. According to , is a matrix with full row rank. Therefore, regardless of the value of matrix , there exists a complex number such that:
According to Theorem 3, , which means is a matrix with full row rank. Therefore, regardless of the value of matrix , there exists a complex number such that:
In summary, regardless of the value of , there exists a complex number such that:
Therefore, . □
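Regularity of a pencil means det(sE − A) is not identically zero as a polynomial in s; since that determinant has degree at most n, a nonzero value at one (generic) point already certifies regularity. A numerical sketch with a placeholder pair (E, A), not the paper's matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Placeholder descriptor pair: E is singular, yet the pencil can be regular.
E = np.diag([1.0, 1.0, 1.0, 0.0])
A = rng.standard_normal((n, n))

# det(sE - A) is a polynomial of degree <= n in s; if it is nonzero at a
# randomly chosen complex point, it is not identically zero -> regular.
s = rng.standard_normal() + 1j * rng.standard_normal()
regular = abs(np.linalg.det(s * E - A)) > 1e-9

print(regular)
```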
Combining Equations (18) and (23), we have:
Let , , , , ; then Equations (25) and (26) can be rewritten, respectively, as:
Theorem 5. .
Proof. The following equation holds:
According to Theorems 4 and 5, the system composed of Equations (27) and (28) is observable and regular.
According to the second type of restricted equivalence transformation, there exist orthogonal matrices , , such that:
where , , , and is a nilpotent matrix with degree h. Let , ; then Equation (27) can be transformed into:
Let ; then Equation (30) can be divided into two parts:
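Numerically, factoring a matrix pair with orthogonal transformations on both sides, as the second restricted equivalence transformation does, is closely related to the QZ (generalized Schur) decomposition. A hedged sketch using SciPy on generic placeholder matrices (the names E, A are assumptions, not the paper's symbols):

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(3)
n = 4

E = rng.standard_normal((n, n))  # placeholder pair
A = rng.standard_normal((n, n))

# qz returns quasi-triangular factors EE, AA and orthogonal Q, Z with
# E = Q @ EE @ Z.T and A = Q @ AA @ Z.T.
EE, AA, Q, Z = qz(E, A, output="real")

err_E = np.linalg.norm(E - Q @ EE @ Z.T)
err_A = np.linalg.norm(A - Q @ AA @ Z.T)
print(err_E < 1e-8, err_A < 1e-8)
```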
From Equation (32), we have:
Substituting , into Equation (23):
Equation (33) contains an unknown term at the current time, . This prevents us from estimating the system state from the currently available information. Therefore, further transformations are needed to convert the system into a known system that contains no unknown terms.
3.3. Transforming the System into a Known System
In Section 3.2, after performing a constrained equivalence transformation on the system, we obtained the system composed of Equations (3) and (13). We then combined part of the observation equation with the state equation and performed a second constrained equivalence transformation, resulting in the system composed of Equations (31), (33) and (34). In this section, we transform the system into a known system through a series of transformations that eliminate the unknown terms.
Performing a simple row transformation on Equation (25):
In Equation (35), matrices and are both rectangular matrices with more columns than rows. We can rewrite each of them as a combination of a square matrix and several column vectors:
If we perform the second type of constrained equivalence transformation on the matrix pair , the transformation matrices can be denoted as and , respectively. Let ; then matrices and can be transformed into:
where , , , , and is a nilpotent matrix with degree . Let:
For the nilpotent matrix , all eigenvalues are 0. Therefore, if matrix is transformed into a Jordan matrix, all diagonal elements of that Jordan matrix are 0. Consequently, there exists an invertible matrix such that can be transformed into a Jordan matrix, denoted as , as follows:
The degree of matrix is ; thus, in Equation (38), the first rows contain the number 1.
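The Jordan-form step can be illustrated symbolically: a nilpotent matrix has only zero eigenvalues, so its Jordan form has a zero diagonal with the ones on the superdiagonal. A small sketch with a placeholder nilpotent matrix (not the paper's), using SymPy:

```python
import sympy as sp

# A nilpotent placeholder matrix of degree 3: N**3 = 0 but N**2 != 0.
N = sp.Matrix([
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
])

# jordan_form returns (P, J) with N = P * J * P**-1; for a nilpotent
# matrix every eigenvalue is 0, so J has a zero diagonal.
P, J = N.jordan_form()

print(J)
print(sp.simplify(P * J * P.inv() - N) == sp.zeros(3, 3))
```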
Left-multiplying matrices and by matrix , and right-multiplying them by :
Let ; expanding and , we obtain:
In Equation (41), all elements in rows to are zero. Therefore, we perform the same elementary column transformations on the matrix blocks and , respectively:
Denoting the final results of Equations (43) and (44) as a combination of a square matrix and several column vectors:
where ; note that matrix is non-invertible.
If we perform the second type of constrained equivalence transformation on matrices and , the transformation matrices can be denoted as and , respectively. Matrices and can be transformed as follows:
where , , , , and is a nilpotent matrix with degree .
The matrix blocks and in Equation (37) undergo the following transformations through Equations (37)–(48), where the entire process consists of elementary transformations or multiplications by invertible matrices:
Applying a transformation process similar to Equations (37)–(48) to the matrix blocks and in Equation (49), we obtain an expression similar to Equation (49):
We refer to Equation (49) as the first transformation and Equation (50) as the second transformation; the result of the ith transformation is then:
where , . When the tth transformation results in or , two situations arise. If , the degree of matrix at this point is , and Equation (36) finally transforms into:
If , Equation (36) finally transforms into:
If the transformation takes the form of Equation (52), the system is non-causal; if it takes the form of Equation (53), the system is causal. Let .
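The iteration above terminates once the nilpotent block vanishes, which is governed by the nilpotency degree, the smallest h with N^h = 0. A small illustrative helper (placeholder names, not the paper's notation):

```python
import numpy as np

def nilpotency_degree(N, tol=1e-12):
    """Return the smallest h with N^h = 0, or None if N is not nilpotent."""
    n = N.shape[0]
    P = np.eye(n)
    for h in range(1, n + 1):   # a nilpotent n x n matrix satisfies N^n = 0
        P = P @ N
        if np.linalg.norm(P) < tol:
            return h
    return None

# Strictly upper-triangular matrices are nilpotent.
N = np.triu(np.ones((4, 4)), k=1)
print(nilpotency_degree(N))          # 4
print(nilpotency_degree(np.eye(4)))  # None: the identity is not nilpotent
```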
The matrix blocks and in Equation (52) can be transformed similarly to Equation (38); we denote the transformation matrix as . The result of Equation (52) can then be further transformed as follows:
where matrix is similar in form to Equation (38) and can be denoted as follows:
In Equation (55), the first row vectors of matrix contain the number 1.
Take the non-causal system represented by Equations (52) and (54) as an example. Since the entire transformation process represented by Equations (52) and (54) is a sequence of elementary transformations or multiplications by invertible matrices, the whole process can be viewed as left-multiplication by one invertible matrix and right-multiplication by another. Denote the left-multiplication matrix as , the right-multiplication matrix as , and let . Then Equation (35) can be transformed into:
Let , , , , , , , ; substituting these into Equation (56) gives:
In Equation (57), may not be invertible, which could produce unknown terms similar to those in Equation (33) in subsequent transformations of Equation (57). Theorem 6 is therefore given to resolve this problem.
Theorem 6. In matrix , there exist row vectors, denoted as , such that when matrix undergoes the right-multiplication transformation described in Equation (58) (equivalent to a column transformation), the resulting matrix is invertible, where , , .
Proof. Left-multiplying matrices and by matrix , and right-multiplying them by , we obtain:
From Equation (15), we know that:
The transformation in Equation (59) does not change the rank of ; thus, from Equation (59) we can observe the following:
Therefore, matrix is invertible, and all column vectors of matrix are linearly independent. Hence, there exist at least linearly independent row vectors in matrix , denoted as matrix , and we can infer that matrix is invertible. Similarly, we denote matrix , which comes from matrix , as the matrix corresponding to matrix . □
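Theorem 6 asserts that enough rows can be selected to make the assembled matrix invertible. Numerically, such a selection can be made with a rank-revealing QR factorization with pivoting; the sketch below uses placeholder data and names, not the paper's symbols:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(4)

# A tall placeholder matrix whose rows we must choose from.
M = rng.standard_normal((6, 3))

# QR with column pivoting applied to M.T ranks the ROWS of M by linear
# independence; the first r pivots index r independent rows.
_, _, piv = qr(M.T, pivoting=True)
r = np.linalg.matrix_rank(M)
rows = piv[:r]

selected = M[rows, :]   # an r x 3 submatrix built from the selected rows
print(np.linalg.matrix_rank(selected) == r)
```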
Let , ; replacing and with and , respectively, Equation (57) can be rewritten as:
Based on , let . Since row vectors have been selected from matrix to replace , the observation Equation (28) is correspondingly rewritten as:
where .
The system formed by Equations (61) and (62) is still regular and observable (the proof process is similar to Theorems 4 and 5).
Equations (1) and (2) undergo a series of transformations and are ultimately transformed into Equations (61) and (62). During this process, before the measurement equation is combined with the state equation, all transformations involving the measurement equation are column transformations; in other words, the measurement noises are not mixed with one another. Therefore, the noises in Equation (61) and the noises in Equation (62) are uncorrelated, and the classical Kalman filter can be used for the system formed by Equations (61) and (62).
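Since the process and measurement noises of the final system are uncorrelated, a textbook Kalman filter applies. A minimal predict/update sketch on a generic system (the matrices A, C, Q, R below are placeholders, not the paper's transformed system):

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of the classical Kalman filter."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Tiny smoke run on a 2-state, 1-output system.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([1.0]), A, C, Q, R)
print(x.shape, P.shape)
```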
Through the transformations and decompositions of Equations (36)–(61), the entire system becomes a solvable known system that contains no unknown terms at the current time. For causal systems represented by Equation (53), the transformation process is similar to that of non-causal systems and likewise yields a solvable known system. Therefore, in the subsequent analysis we focus mainly on non-causal systems represented by Equation (52). Next, we perform state estimation on the system in a form similar to standard Kalman filtering.