Article

Conjugate Gradient Algorithm for Least-Squares Solutions of a Generalized Sylvester-Transpose Matrix Equation

by
Kanjanaporn Tansri
and
Pattrawut Chansangiam
*,†
Department of Mathematics, School of Science, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2022, 14(9), 1868; https://doi.org/10.3390/sym14091868
Submission received: 2 August 2022 / Revised: 30 August 2022 / Accepted: 2 September 2022 / Published: 7 September 2022
(This article belongs to the Section Mathematics)

Abstract

We derive a conjugate-gradient type algorithm to produce approximate least-squares (LS) solutions for an inconsistent generalized Sylvester-transpose matrix equation. The algorithm is always applicable for any given initial matrix and will arrive at an LS solution within finite steps. When the matrix equation has many LS solutions, the algorithm can search for the one with minimal Frobenius-norm. Moreover, given a matrix Y, the algorithm can find a unique LS solution closest to Y. Numerical experiments show the relevance of the algorithm for square/non-square dense/sparse matrices of medium/large sizes. The algorithm works well in both the number of iterations and the computation time, compared to the direct Kronecker linearization and well-known iterative methods.

1. Introduction

Sylvester-type matrix equations are closely related to ordinary differential equations (ODEs) and can be adapted to several problems in control engineering and information sciences; see e.g., the monographs [1,2,3]. The Sylvester matrix equation AX + XB = E and its famous special case, the Lyapunov equation AX + XA^T = E, have several applications in numerical methods for ODEs, control and system theory, signal processing, and image restoration; see e.g., [4,5,6,7,8,9]. The Sylvester-transpose equation AX + X^T B = C is utilized in eigenstructure assignment in descriptor systems [10], pole assignment [3], and fault identification in dynamic systems [11]. In addition, if we require the solution X to be symmetric, then the Sylvester-transpose equation coincides with the Sylvester one. A generalized Sylvester equation AXB + CXD = E can be applied to implicit ODEs and to general linear dynamical models for vibration and structural analysis, robot control, and spacecraft control; see e.g., [12,13].
The mentioned matrix equations are special cases of a generalized Sylvester-transpose matrix equation:
$$A X B + C X^{T} D = E, \tag{1}$$
or more generally
$$\sum_{i=1}^{s} A_i X B_i + \sum_{j=1}^{t} C_j X^{T} D_j = E. \tag{2}$$
A direct algebraic method to find a solution of Equation (2) is the Kronecker linearization, which transforms the matrix equation into an equivalent linear system; see e.g., [14] (Ch. 4). The same technique, together with the notion of the weighted Moore–Penrose inverse, was adapted to solve coupled inconsistent Sylvester-type matrix equations [15] for least-squares (LS) solutions. Another algebraic method is to apply a generalized Sylvester mapping [13], so that the solution is expressed in terms of polynomial matrices. However, when the coefficient matrices are of moderate or large size, it is impractical to use matrix factorizations or other traditional methods, since they require a large amount of memory to compute an exact solution. Thus, the Kronecker linearization and other algebraic methods are only suitable for small matrices. This is why it is important to find solutions that are cheap to compute, and it has led many researchers to develop algorithms that reduce the time and memory needed to solve large matrix equations.
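To see the scale involved, note that the Kronecker linearization of Equation (1) (recalled in Section 3) works with a coefficient matrix of size mq-by-np. For instance, for square matrices with m = n = p = q = 1000, this matrix has 10^6 × 10^6 entries; storing it densely in double precision alone would require about 8 × 10^12 bytes, i.e., 8 TB, which already rules out forming it explicitly.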
In the literature, there are two notable techniques for deriving iterative procedures to solve linear matrix equations; see the monograph [16] for more information. The conjugate gradient (CG) technique constructs a sequence of approximate solutions whose residual matrices form an orthogonal basis, so that the desired solution is obtained in the final iteration. In the last decade, many authors developed CG-type algorithms for Equation (2) and its special cases, e.g., BiCG [17], BiCR [17], CGS [18], GCD [19], GPBiCG [20], and GMRES-type methods [21]. The second technique, known as the gradient-based iterative (GI) technique, constructs a sequence of approximate solutions from the gradient of the associated norm-error function. If the parameters of a GI algorithm are set carefully, the generated sequence converges to the desired solution. In the last five years, many GI algorithms have been introduced; see e.g., GI [22,23], relaxed GI [24], accelerated GI [25], accelerated Jacobi GI [26], modified Jacobi GI [27], the gradient-descent algorithm [28], and the global generalized Hessenberg algorithm [29]. For LS solutions of Sylvester-type matrix equations, there are also iterative solvers; see e.g., [30,31].
Recently, the work [32] developed an effective gradient-descent iterative algorithm to produce approximate LS solutions of Equation (2). When Equation (2) is consistent, a CG-type algorithm was derived in [33] that obtains a solution within finitely many steps. This work is a continuation of [33]: we consider Equation (1) with rectangular coefficient matrices and a rectangular unknown matrix X. Suppose that Equation (1) is inconsistent. We propose a CG-type algorithm to approximate LS solutions, which solves the following problems:
Problem 1.
Find a matrix $\hat{X} \in \mathbb{R}^{n \times p}$ that minimizes $\| E - A X B - C X^{T} D \|_F$.
Let L be the set of least-squares solutions of Equation (1). The second problem is to find an LS solution with the minimal norm:
Problem 2.
Find the matrix X * such that
$$\| X^{*} \|_F = \min_{\hat{X} \in \mathcal{L}} \| \hat{X} \|_F. \tag{3}$$
The last one is to find an LS solution closest to a given matrix:
Problem 3.
Let $Y \in \mathbb{R}^{n \times p}$. Find the matrix $\check{X}$ such that
$$\| \check{X} - Y \|_F = \min_{\hat{X} \in \mathcal{L}} \| \hat{X} - Y \|_F. \tag{4}$$
Moreover, we extend our studies to the matrix Equation (2). We verify the results from theoretical and numerical points of view.
The organization of this article is as follows. In Section 2, we recall preliminary results from matrix theory that will be used in later discussions. In Section 3, we explain how the Kronecker linearization can transform Equation (1) into an equivalent linear system to obtain LS solutions. In Section 4, we propose a CG-type algorithm to solve Problem 1 and verify the theoretical capability of the algorithm. After that, Problems 2 and 3 are investigated in Section 5 and Section 6, respectively. To verify the theory, we provide numerical experiments in Section 7 to show the applicability and efficiency of the algorithm, compared to the Kronecker linearization and recent iterative algorithms. We summarize the whole work in the last section.

2. Auxiliary Results from Matrix Theory

Throughout, let us denote by R^{m×n} the set of all m-by-n real matrices. Recall that the standard (Frobenius) inner product of A, B ∈ R^{m×n} is defined by
$$\langle A , B \rangle := \operatorname{tr} ( B^{T} A ) = \operatorname{tr} ( A B^{T} ). \tag{5}$$
If $\langle A , B \rangle = 0$, we say that A is orthogonal to B. A well-known property of the inner product is that
$$\langle A , B C D \rangle = \langle B^{T} A D^{T} , C \rangle, \tag{6}$$
for any matrices A, B, C, D with appropriate dimensions. The Frobenius norm of a matrix A ∈ R^{m×n} is defined by
$$\| A \|_F := \sqrt{ \operatorname{tr} ( A^{T} A ) }.$$
The Kronecker product $A \otimes B$ of $A = [ a_{ij} ] \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$ is defined to be the mp-by-nq matrix whose $(i,j)$-th block is $a_{ij} B$.
Lemma 1
([14]). For any real matrices A and B, we have $( A \otimes B )^{T} = A^{T} \otimes B^{T}$.
The vector operator Vec(·) transforms a matrix $A = [ a_{ij} ] \in \mathbb{R}^{m \times n}$ to the vector
$$\operatorname{Vec} A := \begin{bmatrix} a_{11} & \cdots & a_{m1} & a_{12} & \cdots & a_{m2} & \cdots & a_{1n} & \cdots & a_{mn} \end{bmatrix}^{T} \in \mathbb{R}^{mn}.$$
The vector operator is bijective, linear, and related to the usual matrix multiplication as follows.
Lemma 2
([14]). For any $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$ and $C \in \mathbb{R}^{p \times q}$, we have
$$\operatorname{Vec} ( A B C ) = ( C^{T} \otimes A ) \operatorname{Vec} B.$$
For each m , n N , we define a commutation matrix
$$P(m,n) := \sum_{i=1}^{m} \sum_{j=1}^{n} E_{ij} \otimes E_{ij}^{T} \in \mathbb{R}^{mn \times mn},$$
where the $(i,j)$-th entry of $E_{ij} \in \mathbb{R}^{m \times n}$ is 1 and all other entries are 0. Indeed, $P(m,n)$ acts on a vector by permuting its entries as follows.
Lemma 3
([14]). For any matrix $A \in \mathbb{R}^{m \times n}$, we have
$$\operatorname{Vec} ( A^{T} ) = P(m,n) \operatorname{Vec} ( A ).$$
Moreover, commutation matrices permute the entries of A B as follows.
Lemma 4
([14]). For any $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$, we have
$$B \otimes A = P(m,p)^{T} ( A \otimes B ) P(n,q).$$
The next result will be used in the later discussions.
Lemma 5
([14]). For any matrices A , B , C , D with appropriate dimensions, we get
$$\operatorname{tr} ( A^{T} D^{T} B C ) = ( \operatorname{Vec} D )^{T} ( A \otimes B ) \operatorname{Vec} C.$$
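To make these identities concrete, the following small numerical check is a sketch in Python/NumPy (the helper names commutation_matrix and vec are ours, not from the paper); it builds P(m,n) directly from its definition and verifies Lemmas 3–5 on random matrices.

```python
import numpy as np

def commutation_matrix(m, n):
    # P(m,n) = sum over i,j of E_ij ⊗ E_ij^T, where E_ij is the m-by-n matrix
    # with a 1 in position (i, j) and zeros elsewhere.
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            E = np.zeros((m, n))
            E[i, j] = 1.0
            P += np.kron(E, E.T)
    return P

def vec(A):
    # Column-stacking vector operator Vec(.) used throughout the paper.
    return A.reshape(-1, order="F")

rng = np.random.default_rng(0)
m, n, p, q = 3, 4, 2, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))

# Lemma 3: Vec(A^T) = P(m,n) Vec(A)
assert np.allclose(vec(A.T), commutation_matrix(m, n) @ vec(A))

# Lemma 4: B ⊗ A = P(m,p)^T (A ⊗ B) P(n,q)
assert np.allclose(np.kron(B, A),
                   commutation_matrix(m, p).T @ np.kron(A, B) @ commutation_matrix(n, q))

# Lemma 5: tr(A^T D^T B C) = (Vec D)^T (A ⊗ B) Vec C
C = rng.standard_normal((q, n))   # shapes chosen so that A^T D^T B C is square
D = rng.standard_normal((p, m))
assert np.isclose(np.trace(A.T @ D.T @ B @ C), vec(D) @ np.kron(A, B) @ vec(C))
print("Lemmas 3-5 verified numerically.")
```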

3. Least-Squares Solutions via the Kronecker Linearization

From now on, we investigate the generalized Sylvester-transpose matrix Equation (1) with coefficient matrices A ∈ R^{m×n}, B ∈ R^{p×q}, C ∈ R^{m×p}, D ∈ R^{n×q}, E ∈ R^{m×q}, and a rectangular unknown matrix X ∈ R^{n×p}. We focus our attention on the case when Equation (1) is inconsistent. In this case, we seek its LS solution, that is, a matrix X^* that solves the following minimization problem:
$$\min_{X \in \mathbb{R}^{n \times p}} \| E - A X B - C X^{T} D \|_F.$$
A traditional algebraic way to solve a linear matrix equation is the Kronecker linearization: the matrix equation is transformed into an equivalent linear system using the vector operator and Kronecker products. Indeed, applying the vector operator to Equation (1) and using Lemmas 2 and 3 yields
$$\operatorname{Vec} E = \operatorname{Vec} ( A X B + C X^{T} D ) = ( B^{T} \otimes A ) \operatorname{Vec} X + ( D^{T} \otimes C ) \operatorname{Vec} X^{T} = ( B^{T} \otimes A ) \operatorname{Vec} X + ( D^{T} \otimes C ) P(n,p) \operatorname{Vec} X.$$
Let us denote x = Vec X , e = Vec E , and
$$M = ( B^{T} \otimes A ) + ( D^{T} \otimes C ) P(n,p).$$
Thus, a matrix X is an LS solution of Equation (1) if and only if x is an LS solution of the linear system M x = e , or equivalently, a solution of the associated normal equation
$$M^{T} M x = M^{T} e. \tag{14}$$
The linear system (14) is always consistent, i.e., Equation (1) always has an LS solution. From the normal Equation (14) and Lemmas 2 and 3, we can deduce:
Lemma 6
([34]). Problem 1 is equivalent to the following consistent matrix equation
$$A^{T} ( A X B + C X^{T} D ) B^{T} + D ( B^{T} X^{T} A^{T} + D^{T} X C^{T} ) C = A^{T} E B^{T} + D E^{T} C. \tag{15}$$
Moreover, the normal Equation (14) has a unique solution if and only if the matrix M is of full-column rank, i.e., M^T M is invertible. In this case, the unique solution is given by x^* = (M^T M)^{-1} M^T e, and the LS error can be computed as follows:
$$\| M x^{*} - e \|^{2} = \| e \|^{2} - e^{T} M x^{*}. \tag{16}$$
If M is not of full-column rank (i.e., the kernel of M is nontrivial), then the system Mx = e has many LS solutions. In this case, the LS solutions appear in the form x^* = M^† e + u, where M^† is the Moore–Penrose inverse of M and u is an arbitrary vector in the kernel of M. Among all these solutions,
$$x^{*} = M^{\dagger} e \tag{17}$$
is the unique one having minimal norm.
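As a concrete illustration of this section, the following Python/NumPy sketch (helper names such as kron_linearization are ours, and the small random instance is purely illustrative; it reuses commutation_matrix and vec from the sketch in Section 2) assembles the matrix M for Equation (1) and computes the minimal-norm LS solution through the Moore–Penrose inverse, Equation (17).

```python
import numpy as np
# Reuses commutation_matrix(m, n) and vec(X) from the sketch in Section 2.

def kron_linearization(A, B, C, D):
    # M = (B^T ⊗ A) + (D^T ⊗ C) P(n,p), so that Vec(AXB + CX^T D) = M Vec(X).
    n, p = A.shape[1], B.shape[0]
    return np.kron(B.T, A) + np.kron(D.T, C) @ commutation_matrix(n, p)

# A small (generically inconsistent) random instance of Equation (1).
rng = np.random.default_rng(1)
m, n, p, q = 4, 3, 2, 5
A = rng.standard_normal((m, n)); B = rng.standard_normal((p, q))
C = rng.standard_normal((m, p)); D = rng.standard_normal((n, q))
E = rng.standard_normal((m, q))

M, e = kron_linearization(A, B, C, D), vec(E)
x_star = np.linalg.pinv(M) @ e                 # minimal-norm LS solution, Equation (17)
X_star = x_star.reshape((n, p), order="F")     # fold the vector back into a matrix

# The LS error of the linear system equals the Frobenius LS error of Equation (1).
print(np.linalg.norm(M @ x_star - e))
print(np.linalg.norm(E - A @ X_star @ B - C @ X_star.T @ D, "fro"))
```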

4. Least-Squares Solution via a Conjugate Gradient Algorithm

In this section, we propose a CG-type algorithm to solve Problem 1. We do not impose any assumption on the matrix M, so that LS solutions of Equation (1) may not be unique.
We shall adapt the conjugate-gradient technique to solve the equivalent matrix Equation (15). Recall that the set of LS solutions of Equation (1) is denoted by L . From Lemma 6, observe that the residual of a matrix X R n × p according to Equation (1) is given by
$$R_X := A^{T} E B^{T} + D E^{T} C - A^{T} ( A X B + C X^{T} D ) B^{T} - D ( B^{T} X^{T} A^{T} + D^{T} X C^{T} ) C. \tag{18}$$
Lemma 6 states that X ∈ L if and only if R_X = 0. From this, we propose the following algorithm. Indeed, the next approximate solution X_{r+1} is obtained from the current approximation X_r by moving along a search direction U_{r+1} with a suitable step size.
Algorithm 1: A conjugate gradient iterative algorithm for Equation (1)
(Algorithm 1 is presented as a pseudocode image in the original article.)
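Since the pseudocode itself is only available as an image, the following Python sketch records our reading of the iteration, assembled from the relations used in the proofs below: U_1 = R_0, the step X_{r+1} = X_r + (‖R_r‖_F²/α_{r+1}) U_{r+1} with α_{r+1} = tr(H_{r+1}^T U_{r+1}), the residual recursion of Lemma 7, and the direction update U_{r+1} = R_r + (‖R_r‖_F²/‖R_{r−1}‖_F²) U_r. It is an assumption-laden reconstruction, not a verbatim transcription of Algorithm 1.

```python
import numpy as np

def cg_sylvester_transpose_ls(A, B, C, D, E, X0=None, tol=1e-8, max_iter=None):
    # Our reading of Algorithm 1: a CG-type (CGLS-style) iteration for
    # min_X ||E - A X B - C X^T D||_F.  Here L(X) = AXB + CX^T D and
    # Ls(Y) = A^T Y B^T + D Y^T C is its adjoint for the Frobenius inner product,
    # so R = Ls(E - L(X)) is exactly the residual R_X of Equation (18).
    L  = lambda X: A @ X @ B + C @ X.T @ D
    Ls = lambda Y: A.T @ Y @ B.T + D @ Y.T @ C

    n, p = A.shape[1], B.shape[0]
    X = np.zeros((n, p)) if X0 is None else X0.copy()
    max_iter = n * p if max_iter is None else max_iter   # finite-termination bound (Theorem 1)

    R = Ls(E - L(X))            # R_0
    U = R.copy()                # U_1 = R_0
    for _ in range(max_iter):
        if np.linalg.norm(R, "fro") <= tol:
            break
        H = Ls(L(U))                                   # H_{r+1}
        alpha = np.sum(H * U)                          # alpha_{r+1} = tr(H_{r+1}^T U_{r+1})
        step = np.linalg.norm(R, "fro") ** 2 / alpha
        X = X + step * U                               # X_{r+1}
        R_new = R - step * H                           # residual recursion of Lemma 7
        beta = np.linalg.norm(R_new, "fro") ** 2 / np.linalg.norm(R, "fro") ** 2
        U = R_new + beta * U                           # U_{r+2}
        R = R_new
    return X
```

In vectorized form this is the CGLS/CGNR iteration applied to the normal Equation (14), which is why the residuals R_r here are residuals of the equivalent Equation (15) rather than of Equation (1) itself.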
Remark 1.
To terminate the algorithm, one can alternatively set the stopping rule to be $\| R_r \|_F \le \delta \epsilon$, where $\delta := \| M x^{*} - e \|$ is the positive square root of the LS error described in Equation (16) and $\epsilon > 0$ is a small tolerance.
For any given initial matrix X_0, we will show that Algorithm 1 generates a sequence of approximate solutions X_r of Equation (1) such that the residual matrices R_r are mutually orthogonal. It follows that an LS solution is obtained within finitely many steps.
Lemma 7.
Assume that the sequences { R r } and { H r } are generated by Algorithm 1. We get
$$R_{r+1} = R_r - \frac{\| R_r \|_F^2}{\alpha_{r+1}} H_{r+1}, \quad \text{for } r = 1, 2, \ldots \tag{19}$$
Proof. 
From Algorithm 1, we have that for any r,
$$\begin{aligned}
R_{r+1} ={}& A^{T} E B^{T} + D E^{T} C - A^{T} ( A X_{r+1} B + C X_{r+1}^{T} D ) B^{T} - D ( B^{T} X_{r+1}^{T} A^{T} + D^{T} X_{r+1} C^{T} ) C \\
={}& A^{T} E B^{T} + D E^{T} C - A^{T} \Big( A \big( X_r + \tfrac{\| R_r \|_F^2}{\alpha_{r+1}} U_{r+1} \big) B + C \big( X_r + \tfrac{\| R_r \|_F^2}{\alpha_{r+1}} U_{r+1} \big)^{T} D \Big) B^{T} \\
& - D \Big( B^{T} \big( X_r + \tfrac{\| R_r \|_F^2}{\alpha_{r+1}} U_{r+1} \big)^{T} A^{T} + D^{T} \big( X_r + \tfrac{\| R_r \|_F^2}{\alpha_{r+1}} U_{r+1} \big) C^{T} \Big) C \\
={}& A^{T} E B^{T} + D E^{T} C - A^{T} ( A X_r B + C X_r^{T} D ) B^{T} - D ( B^{T} X_r^{T} A^{T} + D^{T} X_r C^{T} ) C \\
& - \tfrac{\| R_r \|_F^2}{\alpha_{r+1}} \Big[ A^{T} ( A U_{r+1} B + C U_{r+1}^{T} D ) B^{T} + D ( B^{T} U_{r+1}^{T} A^{T} + D^{T} U_{r+1} C^{T} ) C \Big] \\
={}& R_r - \tfrac{\| R_r \|_F^2}{\alpha_{r+1}} H_{r+1}.
\end{aligned}$$
   □
Lemma 8.
The sequences { U r } and { H r } generated by Algorithm 1 satisfy
$$\operatorname{tr} ( U_m^{T} H_n ) = \operatorname{tr} ( H_m^{T} U_n ), \quad \text{for any } m, n. \tag{20}$$
Proof. 
Using the properties of the Kronecker product and the vector operator in Lemmas 1–5, we have
$$\begin{aligned}
\operatorname{tr} ( H_m^{T} U_n ) ={}& ( \operatorname{Vec} H_m )^{T} \operatorname{Vec} U_n \\
={}& \big[ \operatorname{Vec} \big( A^{T} ( A U_m B + C U_m^{T} D ) B^{T} + D ( B^{T} U_m^{T} A^{T} + D^{T} U_m C^{T} ) C \big) \big]^{T} \operatorname{Vec} U_n \\
={}& [ \operatorname{Vec} ( A^{T} A U_m B B^{T} ) ]^{T} \operatorname{Vec} U_n + [ \operatorname{Vec} ( A^{T} C U_m^{T} D B^{T} ) ]^{T} \operatorname{Vec} U_n \\
& + [ \operatorname{Vec} ( D B^{T} U_m^{T} A^{T} C ) ]^{T} \operatorname{Vec} U_n + [ \operatorname{Vec} ( D D^{T} U_m C^{T} C ) ]^{T} \operatorname{Vec} U_n \\
={}& ( \operatorname{Vec} U_m )^{T} ( B B^{T} \otimes A^{T} A ) \operatorname{Vec} U_n + ( \operatorname{Vec} U_m^{T} )^{T} ( D B^{T} \otimes C^{T} A ) \operatorname{Vec} U_n \\
& + ( \operatorname{Vec} U_m^{T} )^{T} ( A^{T} C \otimes B D^{T} ) \operatorname{Vec} U_n + ( \operatorname{Vec} U_m )^{T} ( C^{T} C \otimes D D^{T} ) \operatorname{Vec} U_n \\
={}& \operatorname{tr} ( B B^{T} U_m^{T} A^{T} A U_n ) + \operatorname{tr} ( A^{T} C U_m^{T} D B^{T} U_n^{T} ) + \operatorname{tr} ( D B^{T} U_m^{T} A^{T} C U_n^{T} ) + \operatorname{tr} ( C^{T} C U_m^{T} D D^{T} U_n ) \\
={}& ( \operatorname{Vec} ( B B^{T} U_n^{T} A^{T} A ) )^{T} \operatorname{Vec} U_m^{T} + ( \operatorname{Vec} ( C^{T} A U_n B D^{T} ) )^{T} \operatorname{Vec} U_m^{T} \\
& + ( \operatorname{Vec} ( B D^{T} U_n C^{T} A ) )^{T} \operatorname{Vec} U_m^{T} + ( \operatorname{Vec} ( C^{T} C U_n^{T} D D^{T} ) )^{T} \operatorname{Vec} U_m^{T} \\
={}& [ \operatorname{Vec} ( A^{T} A U_n B B^{T} ) ]^{T} \operatorname{Vec} U_m + [ \operatorname{Vec} ( D B^{T} U_n^{T} A^{T} C ) ]^{T} \operatorname{Vec} U_m \\
& + [ \operatorname{Vec} ( A^{T} C U_n^{T} D B^{T} ) ]^{T} \operatorname{Vec} U_m + [ \operatorname{Vec} ( D D^{T} U_n C^{T} C ) ]^{T} \operatorname{Vec} U_m \\
={}& \big[ \operatorname{Vec} \big( A^{T} ( A U_n B + C U_n^{T} D ) B^{T} + D ( B^{T} U_n^{T} A^{T} + D^{T} U_n C^{T} ) C \big) \big]^{T} \operatorname{Vec} U_m \\
={}& ( \operatorname{Vec} H_n )^{T} \operatorname{Vec} U_m = \operatorname{tr} ( H_n^{T} U_m ) = \operatorname{tr} ( U_m^{T} H_n ).
\end{aligned}$$
   □
Lemma 9.
The sequences { R r } , { U r } and { H r } generated by Algorithm 1 satisfy
$$\operatorname{tr} ( R_r^{T} R_{r-1} ) = 0 \quad \text{and} \quad \operatorname{tr} ( U_{r+1}^{T} H_r ) = 0, \quad \text{for any } r. \tag{21}$$
Proof. 
We use the induction principle to prove (21). In order to calculate related terms, we utilize Lemmas 7 and 8. For r = 1 , we get
$$\begin{aligned}
\operatorname{tr} ( R_1^{T} R_0 ) &= \operatorname{tr} ( R_0^{T} R_0 ) - \frac{\| R_0 \|_F^2}{\alpha_1} \operatorname{tr} ( H_1^{T} R_0 ) = 0, \\
\operatorname{tr} ( U_2^{T} H_1 ) &= \operatorname{tr} ( R_1^{T} H_1 ) + \frac{\| R_1 \|_F^2}{\| R_0 \|_F^2} \operatorname{tr} ( U_1^{T} H_1 ) = - \frac{\alpha_1}{\| R_0 \|_F^2} \operatorname{tr} ( R_1^{T} R_1 ) + \frac{\alpha_1 \| R_1 \|_F^2}{\| R_0 \|_F^2} = 0.
\end{aligned}$$
These imply that (21) holds for r = 1. Now, we proceed to the inductive step by assuming that tr(R_r^T R_{r-1}) = 0 and tr(U_{r+1}^T H_r) = 0. Then
$$\begin{aligned}
\operatorname{tr} ( R_{r+1}^{T} R_r ) &= \operatorname{tr} ( R_r^{T} R_r ) - \frac{\| R_r \|_F^2}{\alpha_{r+1}} \operatorname{tr} \Big[ H_{r+1}^{T} \Big( U_{r+1} - \frac{\| R_r \|_F^2}{\| R_{r-1} \|_F^2} U_r \Big) \Big] = \| R_r \|_F^2 - \frac{\| R_r \|_F^2}{\alpha_{r+1}} \operatorname{tr} ( H_{r+1}^{T} U_{r+1} ) = 0, \\
\operatorname{tr} ( U_{r+2}^{T} H_{r+1} ) &= \operatorname{tr} \Big[ R_{r+1}^{T} \, \frac{\alpha_{r+1}}{\| R_r \|_F^2} ( R_r - R_{r+1} ) \Big] + \frac{\| R_{r+1} \|_F^2}{\| R_r \|_F^2} \alpha_{r+1} \\
&= \frac{\alpha_{r+1}}{\| R_r \|_F^2} \operatorname{tr} \big[ ( R_{r+1}^{T} R_r ) - ( R_{r+1}^{T} R_{r+1} ) \big] + \frac{\| R_{r+1} \|_F^2}{\| R_r \|_F^2} \alpha_{r+1} = 0.
\end{aligned}$$
Hence, Equation (21) holds for any r.    □
Lemma 10.
Suppose the sequences { R r } , { U r } and { H r } are constructed from Algorithm 1. Then
$$\operatorname{tr} ( R_r^{T} R_0 ) = 0, \quad \operatorname{tr} ( U_{r+1}^{T} H_1 ) = 0, \quad \text{for any } r. \tag{22}$$
Proof. 
The initial step r = 1 holds due to Lemma 9. Now, assume that Equation (22) is valid for all r = 1, …, k. From Lemmas 7 and 8, we get
$$\begin{aligned}
\operatorname{tr} ( R_{k+1}^{T} R_0 ) &= \operatorname{tr} \Big[ \Big( R_k - \frac{\| R_k \|_F^2}{\alpha_{k+1}} H_{k+1} \Big)^{T} R_0 \Big] = \operatorname{tr} ( R_k^{T} R_0 ) - \frac{\| R_k \|_F^2}{\alpha_{k+1}} \operatorname{tr} ( H_{k+1}^{T} R_0 ) \\
&= - \frac{\| R_k \|_F^2}{\alpha_{k+1}} \operatorname{tr} ( H_{k+1}^{T} U_1 ) = - \frac{\| R_k \|_F^2}{\alpha_{k+1}} \operatorname{tr} ( U_{k+1}^{T} H_1 ) = 0,
\end{aligned}$$
and
$$\operatorname{tr} ( U_{k+2}^{T} H_1 ) = \operatorname{tr} ( H_{k+2}^{T} U_1 ) = \operatorname{tr} \Big[ \frac{\alpha_{k+2}}{\| R_{k+1} \|_F^2} ( R_{k+1} - R_{k+2} )^{T} U_1 \Big] = \frac{\alpha_{k+2}}{\| R_{k+1} \|_F^2} \big[ \operatorname{tr} ( R_{k+1}^{T} R_0 ) - \operatorname{tr} ( R_{k+2}^{T} R_0 ) \big] = 0.$$
Hence, Equation (22) holds for any r.    □
Lemma 11.
Suppose the sequences { R r } , { U r } and { H r } are constructed from Algorithm 1. Then for any integers m and n such that m n , we have
$$\operatorname{tr} ( R_{m-1}^{T} R_{n-1} ) = 0 \quad \text{and} \quad \operatorname{tr} ( U_m^{T} H_n ) = 0. \tag{23}$$
Proof. 
According to Lemma 8 and the fact that tr(R_{m-1}^T R_{n-1}) = tr(R_{n-1}^T R_{m-1}) for any integers m and n, it suffices to prove the two equalities in (23) for m, n with m > n. For m = n + 1, the two equalities hold by Lemma 9. For m = n + 2, we have
$$\begin{aligned}
\operatorname{tr} ( R_{n+2}^{T} R_n ) &= \operatorname{tr} \Big[ \Big( R_{n+1} - \frac{\| R_{n+1} \|_F^2}{\alpha_{n+2}} H_{n+2} \Big)^{T} R_n \Big] = - \frac{\| R_{n+1} \|_F^2}{\alpha_{n+2}} \operatorname{tr} \Big[ H_{n+2}^{T} \Big( U_{n+1} - \frac{\| R_n \|_F^2}{\| R_{n-1} \|_F^2} U_n \Big) \Big] \\
&= \frac{\| R_{n+1} \|_F^2}{\alpha_{n+2}} \frac{\| R_n \|_F^2}{\| R_{n-1} \|_F^2} \operatorname{tr} \Big[ \Big( R_{n+1} + \frac{\| R_{n+1} \|_F^2}{\| R_n \|_F^2} U_{n+1} \Big)^{T} H_n \Big] \\
&= \frac{\| R_{n+1} \|_F^2}{\alpha_{n+2}} \frac{\| R_n \|_F^2}{\| R_{n-1} \|_F^2} \frac{\alpha_n}{\| R_{n-1} \|_F^2} \operatorname{tr} ( R_{n+1}^{T} R_{n-1} ),
\end{aligned}$$
and, similarly, we have
$$\begin{aligned}
\operatorname{tr} ( R_{n+1}^{T} R_{n-1} ) &= \frac{\| R_n \|_F^2}{\alpha_{n+1}} \frac{\| R_{n-1} \|_F^2}{\| R_{n-2} \|_F^2} \frac{\alpha_{n-1}}{\| R_{n-2} \|_F^2} \operatorname{tr} ( R_n^{T} R_{n-2} ), \\
\operatorname{tr} ( R_n^{T} R_{n-2} ) &= \frac{\| R_{n-1} \|_F^2}{\alpha_n} \frac{\| R_{n-2} \|_F^2}{\| R_{n-3} \|_F^2} \frac{\alpha_{n-2}}{\| R_{n-3} \|_F^2} \operatorname{tr} ( R_{n-1}^{T} R_{n-3} ).
\end{aligned}$$
Moreover,
$$\begin{aligned}
\operatorname{tr} ( U_{n+2}^{T} H_n ) &= \operatorname{tr} \Big[ \Big( R_{n+1} + \frac{\| R_{n+1} \|_F^2}{\| R_n \|_F^2} U_{n+1} \Big)^{T} H_n \Big] = \operatorname{tr} \Big[ R_{n+1}^{T} \, \frac{\alpha_n}{\| R_{n-1} \|_F^2} ( R_{n-1} - R_n ) \Big] \\
&= \frac{\alpha_n}{\| R_{n-1} \|_F^2} \operatorname{tr} \Big[ \Big( R_n - \frac{\| R_n \|_F^2}{\alpha_{n+1}} H_{n+1} \Big)^{T} R_{n-1} \Big] \\
&= - \frac{\alpha_n}{\| R_{n-1} \|_F^2} \frac{\| R_n \|_F^2}{\alpha_{n+1}} \Big[ \operatorname{tr} ( H_{n+1}^{T} U_n ) - \frac{\| R_{n-1} \|_F^2}{\| R_{n-2} \|_F^2} \operatorname{tr} ( H_{n+1}^{T} U_{n-1} ) \Big] \\
&= \frac{\alpha_n}{\| R_{n-1} \|_F^2} \frac{\| R_n \|_F^2}{\alpha_{n+1}} \frac{\| R_{n-1} \|_F^2}{\| R_{n-2} \|_F^2} \operatorname{tr} ( U_{n+1}^{T} H_{n-1} ),
\end{aligned}$$
and, similarly,
$$\operatorname{tr} ( U_{n+1}^{T} H_{n-1} ) = \frac{\alpha_{n-1}}{\| R_{n-2} \|_F^2} \frac{\| R_{n-1} \|_F^2}{\alpha_n} \frac{\| R_{n-2} \|_F^2}{\| R_{n-3} \|_F^2} \operatorname{tr} ( U_n^{T} H_{n-2} ).$$
In a similar way, we can write tr(R_{n+1}^T R_{n-1}) and tr(U_{n+2}^T H_n) in terms of tr(R_n^T R_{n-2}) and tr(U_{n+1}^T H_{n-1}), respectively. Continuing this process, we eventually reach the terms tr(R_2^T R_0) and tr(U_3^T H_1). By Lemma 10, we get tr(R_{n+1}^T R_{n-1}) = 0 and tr(U_{n+2}^T H_n) = 0. Similarly, we have tr(R_m^T R_{n-1}) = 0 and tr(U_m^T H_n) = 0 for m = n + 3, …, k.
Suppose that tr(R_{m-1}^T R_{n-1}) = tr(U_m^T H_n) = 0 for m ∈ {n + 1, …, k}. Then for m = k + 1, we have
$$\begin{aligned}
\operatorname{tr} ( R_k^{T} R_{n-1} ) &= \operatorname{tr} ( R_{k-1}^{T} R_{n-1} ) - \frac{\| R_{k-1} \|_F^2}{\alpha_k} \operatorname{tr} ( H_k^{T} R_{n-1} ) = - \frac{\| R_{k-1} \|_F^2}{\alpha_k} \operatorname{tr} \Big[ H_k^{T} \Big( U_n - \frac{\| R_{n-1} \|_F^2}{\| R_{n-2} \|_F^2} U_{n-1} \Big) \Big] \\
&= - \frac{\| R_{k-1} \|_F^2}{\alpha_k} \Big[ \operatorname{tr} ( H_k^{T} U_n ) - \frac{\| R_{n-1} \|_F^2}{\| R_{n-2} \|_F^2} \operatorname{tr} ( H_k^{T} U_{n-1} ) \Big] = 0,
\end{aligned}$$
and
$$\begin{aligned}
\operatorname{tr} ( U_{k+1}^{T} H_n ) &= \operatorname{tr} ( R_k^{T} H_n ) + \frac{\| R_k \|_F^2}{\| R_{k-1} \|_F^2} \operatorname{tr} ( U_k^{T} H_n ) = \operatorname{tr} \Big[ R_k^{T} \, \frac{\alpha_n}{\| R_{n-1} \|_F^2} ( R_{n-1} - R_n ) \Big] \\
&= - \frac{\alpha_n}{\| R_{n-1} \|_F^2} \operatorname{tr} ( R_k^{T} R_n - R_k^{T} R_{n-1} ) = 0.
\end{aligned}$$
Hence, tr ( R m 1 T R n 1 ) = 0 and tr ( U m T H n ) = 0 for any m , n such that m n .    □
Theorem 1.
Algorithm 1 solves Problem 1 within finite steps. More precisely, for any given initial matrix X 0 R n × p , the sequence { X r } constructed from Algorithm 1 converges to an LS solution of Equation (1) in at most n p iterations.
Proof. 
Assume that R_r ≠ 0 for r = 0, 1, …, np − 1, and suppose that R_{np} ≠ 0. By Lemma 11, the set {R_0, R_1, …, R_{np}} of residual matrices is orthogonal in R^{n×p} with respect to the Frobenius inner product (5). Therefore, the set {R_0, R_1, …, R_{np}} of np + 1 elements is linearly independent. This contradicts the fact that the dimension of R^{n×p} is np. Thus, R_{np} = 0, and X_{np} satisfies Equation (15) in Lemma 6. Hence X_{np} is an LS solution of Equation (1).    □
We adapt the same idea as that for Algorithm 1 to derive an algorithm for Equation (2) as follows:
Algorithm 2: A conjugate gradient iterative algorithm for Equation (2)
(Algorithm 2 is presented as a pseudocode image in the original article.)
The stopping rule of Algorithm 2 may be described as $\| R_r \|_F \le \delta \epsilon$, where δ is the positive square root of the associated LS error and ϵ > 0 is a small tolerance.
Theorem 2.
Consider Equation (2), where A_i ∈ R^{m×n}, B_i ∈ R^{p×q}, C_j ∈ R^{m×p}, D_j ∈ R^{n×q}, and E ∈ R^{m×q} are given constant matrices and X ∈ R^{n×p} is an unknown matrix. Assume that the matrix
$$M := \sum_{i=1}^{s} ( B_i^{T} \otimes A_i ) + \sum_{j=1}^{t} ( D_j^{T} \otimes C_j ) P(n,p)$$
is of full-column rank. Then, for any given initial matrix X 0 R n × p , the sequence { X r } constructed from Algorithm 2 converges to a unique LS solution.
Proof. 
The proof is similar to that of Theorem 1.   □

5. Minimal-Norm Least-Squares Solution via Algorithm 1

In this section, we investigate Problem 2. That is, we consider the case when the matrix M may not have full-column rank, so that Equation (1) may have many LS solutions. We shall seek an element of L with minimal Frobenius norm.
Lemma 12.
Assume X ^ L . Then, any arbitrary element X ˜ L can be expressed as X ^ + Z for some matrix Z R n × p satisfying
$$A^{T} ( A Z B + C Z^{T} D ) B^{T} + D ( B^{T} Z^{T} A^{T} + D^{T} Z C^{T} ) C = 0. \tag{25}$$
Proof. 
Let us denote the residuals of the LS solutions $\hat{X}$ and $\tilde{X}$, according to Equation (18), by $R_{\hat{X}}$ and $R_{\tilde{X}}$, respectively. We consider the difference $Z := \tilde{X} - \hat{X}$. Now, we compute
$$\begin{aligned}
R_{\tilde{X}} &= A^{T} E B^{T} + D E^{T} C - A^{T} \big( A ( \hat{X} + Z ) B + C ( \hat{X} + Z )^{T} D \big) B^{T} - D \big( B^{T} ( \hat{X} + Z )^{T} A^{T} + D^{T} ( \hat{X} + Z ) C^{T} \big) C \\
&= R_{\hat{X}} - \big[ A^{T} ( A Z B + C Z^{T} D ) B^{T} + D ( B^{T} Z^{T} A^{T} + D^{T} Z C^{T} ) C \big].
\end{aligned}$$
Since X ^ , X ˜ L , by Lemma 6 we have R X ^ = R X ˜ = 0 . It follows that Equation (25) holds as desired.   □
Theorem 3.
Algorithm 1 solves Problem 2 in at most n p iterations by starting with the initial matrix
$$X_0 = A^{T} ( A V_0 B + C V_0^{T} D ) B^{T} + D ( B^{T} V_0^{T} A^{T} + D^{T} V_0 C^{T} ) C, \tag{26}$$
where V 0 R n × p is an arbitrary matrix, or especially X 0 = 0 .
Proof. 
If we run Algorithm 1 starting with (26), then the resulting solution X^* of Problem 2 can be written in the form
$$X^{*} = A^{T} ( A V^{*} B + C V^{*T} D ) B^{T} + D ( B^{T} V^{*T} A^{T} + D^{T} V^{*} C^{T} ) C,$$
for some matrix V * R n × p . Now, assume that X ˜ is an arbitrary element in L . By Lemma 12, there is a matrix Z R n × p such that X ˜ = X * + Z and
$$A^{T} ( A Z B + C Z^{T} D ) B^{T} + D ( B^{T} Z^{T} A^{T} + D^{T} Z C^{T} ) C = 0.$$
Using the property (6), we get
$$\begin{aligned}
\langle X^{*} , Z \rangle &= \big\langle A^{T} ( A V^{*} B + C V^{*T} D ) B^{T} + D ( B^{T} V^{*T} A^{T} + D^{T} V^{*} C^{T} ) C , \, Z \big\rangle \\
&= \big\langle V^{*} , \, A^{T} ( A Z B + C Z^{T} D ) B^{T} + D ( B^{T} Z^{T} A^{T} + D^{T} Z C^{T} ) C \big\rangle = 0.
\end{aligned}$$
Since X * is orthogonal to Z, it follows from the Pythagorean theorem that
$$\| \tilde{X} \|_F^2 = \| X^{*} + Z \|_F^2 = \| X^{*} \|_F^2 + \| Z \|_F^2 \ge \| X^{*} \|_F^2.$$
This implies that X * is the minimal-norm solution.   □
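As a quick numerical check of Theorem 3 (reusing the matrices A, B, C, D, E and the helper functions from the earlier sketches; the setup is ours), starting the Algorithm 1 sketch from X_0 = 0, a special case of (26), should reproduce the minimal-norm LS solution x^* = M^† e of Equation (17):

```python
# Theorem 3, sanity check: CG started from X0 = 0 vs. the Moore-Penrose solution.
X_min = cg_sylvester_transpose_ls(A, B, C, D, E, X0=np.zeros((n, p)), tol=1e-12)
x_dag = np.linalg.pinv(kron_linearization(A, B, C, D)) @ vec(E)
print(np.allclose(vec(X_min), x_dag))   # expected: True
```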
Theorem 4.
Consider the sequence { X r } generated by Algorithm 2 starting with the initial matrix
$$X_0 = \sum_{k=1}^{s} A_k^{T} \Big( \sum_{i=1}^{s} A_i V_0 B_i + \sum_{j=1}^{t} C_j V_0^{T} D_j \Big) B_k^{T} + \sum_{l=1}^{t} D_l \Big( \sum_{i=1}^{s} B_i^{T} V_0^{T} A_i^{T} + \sum_{j=1}^{t} D_j^{T} V_0 C_j^{T} \Big) C_l,$$
where V_0 ∈ R^{n×p} is an arbitrary matrix, or in particular X_0 = 0 ∈ R^{n×p}. Then the sequence {X_r} converges to the minimal-norm LS solution of Equation (2) in at most np iterations.
Proof. 
The proof is similar to that of Theorem 3.   □

6. Least-Squares Solution Closest to a Given Matrix

In this section, we investigate Problem 3. In this case, Equation (1) may have many LS solutions. We shall seek the one closest to a given matrix with respect to the Frobenius norm.
Theorem 5.
Algorithm 1 solves Problem 3 by substituting E with $E_1 = E - ( A Y B + C Y^{T} D )$, and choosing the initial matrix to be
$$W_0 = A^{T} ( A V B + C V^{T} D ) B^{T} + D ( B^{T} V^{T} A^{T} + D^{T} V C^{T} ) C, \tag{27}$$
where V R n × p is arbitrary, or specially W 0 = 0 R n × p .
Proof. 
Let Y R n × p be given. We can translate Problem 3 into Problem 2 as follows:
$$\begin{aligned}
\min_{X \in \mathbb{R}^{n \times p}} \| A X B + C X^{T} D - E \|_F
&= \min_{X \in \mathbb{R}^{n \times p}} \| A X B + C X^{T} D - E - A Y B - C Y^{T} D + A Y B + C Y^{T} D \|_F \\
&= \min_{X \in \mathbb{R}^{n \times p}} \| A ( X - Y ) B + C ( X - Y )^{T} D - E + A Y B + C Y^{T} D \|_F.
\end{aligned}$$
Now, substituting $E_1 = E - ( A Y B + C Y^{T} D )$ and $W = X - Y$, we see that the solution $\check{X}$ of Problem 3 is equal to $W^{*} + Y$, where $W^{*}$ is the minimal-norm LS solution of the equation
$$A W B + C W^{T} D = E_1,$$
in the unknown W. By Theorem 3, the matrix $W^{*}$ can be computed by Algorithm 1 with the initial matrix (27), where $V \in \mathbb{R}^{n \times p}$ is an arbitrary matrix, or in particular $W_0 = 0$.   □
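The reduction in the proof translates directly into code; the following short sketch (again reusing the helpers and the matrices A, B, C, D, E, n, p from the earlier examples, with an arbitrary illustrative Y) computes the LS solution of Equation (1) closest to Y:

```python
# Theorem 5: shift the right-hand side, solve for the minimal-norm W*, then shift back.
Y = 0.1 * np.ones((n, p))                          # any given target matrix
E1 = E - (A @ Y @ B + C @ Y.T @ D)                 # E_1 = E - (AYB + CY^T D)
W_star = cg_sylvester_transpose_ls(A, B, C, D, E1, X0=np.zeros((n, p)), tol=1e-12)
X_check = W_star + Y                               # LS solution of Equation (1) closest to Y
```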
Theorem 6.
Suppose that the matrix Equation (2) is inconsistent. Let Y R n × p be given. Consider Algorithm 2 when we replace the matrix E by
$$E_1 = E - \sum_{i=1}^{s} A_i Y B_i - \sum_{j=1}^{t} C_j Y^{T} D_j,$$
and choose the initial matrix
$$W_0 = \sum_{k=1}^{s} A_k^{T} \Big( \sum_{i=1}^{s} A_i F B_i + \sum_{j=1}^{t} C_j F^{T} D_j \Big) B_k^{T} + \sum_{l=1}^{t} D_l \Big( \sum_{i=1}^{s} B_i^{T} F^{T} A_i^{T} + \sum_{j=1}^{t} D_j^{T} F C_j^{T} \Big) C_l,$$
where F R n × p is arbitrary, or W 0 = 0 R n × p . Then, the sequence { X r } obtained by Algorithm 2 converges to the LS solution of (2) closest to Y within n p iterations.
Proof. 
The proof of the theorem is similar to that of Theorem 5.   □

7. Numerical Experiments

In this section, we provide numerical results to show the efficiency and effectiveness of Algorithm 2 (denoted by CG), which is an extension of Algorithm 1. We perform experiments in which the coefficients of a given matrix equation are dense/sparse rectangular matrices of moderate/large sizes. We denote by ones(m,n) the m-by-n matrix whose entries are all 1. Each random matrix rand(m,n) has all entries in the interval (0,1). Each experiment contains comparisons of Algorithm 2 with the direct Kronecker linearization as well as with well-known iterative algorithms. All iterations were performed in MATLAB R2021a on a Mac (M1 chip, 8-core CPU, 8-core GPU, 8 GB RAM, 512 GB storage). The performance of the algorithms is investigated through the number of iterations, the norm of the residual matrices, and the CPU time; the latter is measured in seconds using the MATLAB functions tic and toc.
The next is an example of Problem 1.
Example 1.
Consider a generalized Sylvester-transpose matrix equation
$$A_1 X B_1 + A_2 X B_2 + A_3 X B_3 + C_1 X^{T} D_1 + C_2 X^{T} D_2 = E,$$
where the coefficient matrices are given by
A_1 = 0.5 ones(m,n) − rand(m,n) ∈ R^{50×50},
A_2 = 0.5 ones(m,n) − rand(m,n) ∈ R^{50×50},
A_3 = 0.5 ones(m,n) − rand(m,n) ∈ R^{50×50},
B_1 = 0.5 ones(p,q) − rand(p,q) ∈ R^{40×50},
B_2 = 0.5 ones(p,q) − rand(p,q) ∈ R^{40×50},
B_3 = 0.5 ones(p,q) − rand(p,q) ∈ R^{40×50},
C_1 = 0.5 ones(m,p) − rand(m,p) ∈ R^{50×40},
C_2 = 0.5 ones(m,p) − rand(m,p) ∈ R^{50×40},
D_1 = 0.5 ones(n,q) − rand(n,q) ∈ R^{50×50},
D_2 = 0.5 ones(n,q) − rand(n,q) ∈ R^{50×50},
E = 0.5 ones(m,q) − rand(m,q) ∈ R^{50×50}.
In fact, we have rank M = 2000 < 2001 = rank[M e], i.e., the matrix equation does not have an exact solution. However, M is of full-column rank, so this equation has a unique LS solution. We run Algorithm 2 using the initial matrix X_0 = 0 ∈ R^{50×40} and a tolerance error ϵ = ‖Mx^* − e‖ = 6.4812, where x^* = (M^T M)^{−1} M^T e. It turns out that Algorithm 2 takes 20 iterations to obtain a least-squares solution, consuming around 0.2 s, while the direct method consumes around 7 s; that is, Algorithm 2 requires about 35 times less computational time than the direct method. We compare the performance of Algorithm 2 with other well-known iterative algorithms: the GI method [31], the LSI method [31], and the TAUOpt method [32]. The numerical results are shown in Table 1 and Figure 1. We see that after running 20 iterations, Algorithm 2 consumes slightly more CPU time than the other methods, but its residual norm ‖R_r‖_F is smaller than those of the others. Hence, Algorithm 2 is applicable and performs well.
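The inconsistency and full-column-rank claims above can be checked directly from the Kronecker linearization. The sketch below (reusing commutation_matrix from the Section 2 sketch; the coefficient construction 0.5·ones − rand follows our reading of the listing above and is therefore an assumption) compares rank(M) with rank([M e]):

```python
import numpy as np
rng = np.random.default_rng(2)
m, n, p, q = 50, 50, 40, 50
As = [0.5 * np.ones((m, n)) - rng.random((m, n)) for _ in range(3)]
Bs = [0.5 * np.ones((p, q)) - rng.random((p, q)) for _ in range(3)]
Cs = [0.5 * np.ones((m, p)) - rng.random((m, p)) for _ in range(2)]
Ds = [0.5 * np.ones((n, q)) - rng.random((n, q)) for _ in range(2)]
E  = 0.5 * np.ones((m, q)) - rng.random((m, q))

# M for Equation (2): sum_i (B_i^T ⊗ A_i) + (sum_j D_j^T ⊗ C_j) P(n,p)
P = commutation_matrix(n, p)
M = sum(np.kron(B.T, A) for A, B in zip(As, Bs)) + sum(np.kron(D.T, C) for C, D in zip(Cs, Ds)) @ P
e = E.reshape(-1, order="F")
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(np.column_stack([M, e])))
# Generically prints 2000 and 2001: M has full column rank but the system is inconsistent.
```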
The next is an example of Problem 2.
Example 2.
Consider a generalized Sylvester-transpose matrix equation
$$A_1 X B_1 + C_1 X^{T} D_1 + C_2 X^{T} D_2 = E, \tag{29}$$
where
A_1 = 0.08 × ones(m,n) ∈ R^{30×25},
B_1 = tridiag(0.11, 0.61, 0.29) ∈ R^{30×30},
C_1 = tridiag(0.03, 0.22, 0.1) ∈ R^{30×30},
C_2 = tridiag(0.38, 0.29, 0.41) ∈ R^{30×30},
D_1 = 0.13 × ones(n,q) ∈ R^{25×30},
D_2 = 0.04 × ones(n,q) ∈ R^{25×30},
E = 0.01 × I_30 ∈ R^{30×30}.
In this case, Equation (29) is inconsistent and the associated matrix M is not of full-column rank. Thus, Equation (29) has many LS solutions. The direct method based on the Moore–Penrose inverse (17) takes 0.627019 s to obtain the minimal-norm LS solution. Alternatively, the MNLS method [35] can also be applied to this kind of problem. However, some of the coefficient matrices contain many zero entries, which causes the MNLS algorithm to diverge without producing an answer. Therefore, we apply Algorithm 2 using a tolerance error ϵ = 10^{−5}. According to Theorem 4, we choose three different matrices V_0 to generate the initial matrix X_0. The numerical results are shown in Table 2 and Figure 2.
Figure 2 shows that the residual norms ‖R_r‖_F for the CG algorithm, with the three initial matrices, decrease rapidly to zero (note the logarithmic scale). All of them consume around 0.037 s to arrive at the desired solution, which is about 16 times less than the direct method.
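For readers reproducing this example, tridiag(a, b, c) is read here as the constant tridiagonal matrix with sub-diagonal a, main diagonal b, and super-diagonal c (our interpretation of the notation); a minimal helper:

```python
import numpy as np

def tridiag(a, b, c, size):
    # Constant tridiagonal matrix: sub-diagonal a, main diagonal b, super-diagonal c.
    return a * np.eye(size, k=-1) + b * np.eye(size) + c * np.eye(size, k=1)

B1 = tridiag(0.11, 0.61, 0.29, 30)   # e.g., B_1 from Example 2
```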
The following is an example of Problem 3.
Example 3.
Consider a generalized Sylvester-transpose matrix equation
$$A_1 X B_1 + C_1 X^{T} D_1 + C_2 X^{T} D_2 = E, \tag{30}$$
where
A_1 = 0.2 × ones(m,n) ∈ R^{50×40},
B_1 = tridiag(0.2, 0.3, 0.3) ∈ R^{50×50},
C_1 = tridiag(0.4, 0.2, 0.1) ∈ R^{50×50},
C_2 = tridiag(0.7, 0.2, 0.3) ∈ R^{50×50},
D_1 = 0.2 × ones(n,q) ∈ R^{40×50},
D_2 = 0.1 × ones(n,q) ∈ R^{40×50},
E = I_50 ∈ R^{50×50}.
In fact, Equation (30) is inconsistent and has many LS solutions. The first task is to find the LS solution of Equation (30) closest to Y = 0.1 × ones(n,p) ∈ R^{40×50}. According to Theorem 6, we apply Algorithm 2 with two different matrices V to construct the initial matrix W_0. The runs of Algorithm 2 with V = 0 and V = 0.19 × I are denoted in Figure 3 by CG1 and CG2, respectively.
The second task is to solve Problem 3 when we are given Y = I (see Table 3 and Figure 4). Similarly, we use two different matrices, V = 0 and V = 0.02 × ones(n,p) ∈ R^{40×50}, to construct the initial matrix.
Figure 3. The logarithm of the relative error for Example 3 with Y = 0.1 × ones(n,p).
Figure 4. The logarithm of the relative error for Example 3 with Y = I.
Table 3. Relative error and computational time for Example 3.

Y                   Initial V      Iterations   CPU (s)    ‖R_r‖_F    ‖X* − Y‖_F
0.1 × ones(n,p)     0              18           0.104135   0.000006   4.3116
0.1 × ones(n,p)     0.19 × I       20           0.108153   0.000005   4.3116
I                   0              18           0.113960   0.000009   0.8580
I                   0.02 × ones    20           0.108499   0.000006   0.8580
We apply Algorithm 2 with a tolerance error ϵ = 10^{−5}. The numerical results in Figure 3, Figure 4, and Table 3 illustrate that, in each case, the relative error converges rapidly to zero within 20 iterations, consuming around 0.1 s. Thus, Algorithm 2 performs well in both the number of iterations and the computational time. Moreover, changing the initial matrix and the target matrix Y does not significantly affect the performance of the algorithm.

8. Conclusions

We propose CG-type iterative algorithms, namely Algorithms 1 and 2, to generate approximate solutions of the generalized Sylvester-transpose matrix Equations (1) and (2), respectively. When the matrix equation is inconsistent, the algorithm arrives at an LS solution within np iterations in the absence of round-off errors. When the matrix equation has many LS solutions, the algorithm can search for the one with minimal Frobenius norm within np steps. Moreover, given a matrix Y, the algorithm can find the LS solution closest to Y within np steps. The numerical simulations validate the relevance of the algorithm for square/non-square matrices of medium/large sizes. The algorithm is applicable for any given initial matrix and any given matrix Y. It performs well in both the number of iterations and the computational time, compared to the direct Kronecker linearization and well-known iterative methods.

Author Contributions

Writing—original draft preparation, K.T.; writing—review and editing, P.C.; data curation, K.T.; supervision, P.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research project is supported by the National Research Council of Thailand (NRCT), grant number N41A640234.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare there is no conflict of interest.

References

  1. Geir, E.D.; Fernando, P. A Course in Robust Control Theory: A Convex Approach; Springer: New York, NY, USA, 1999.
  2. Lewis, F. A survey of linear singular systems. Circ. Syst. Signal Process. 1986, 5, 3–36.
  3. Dai, L. Singular Control Systems; Springer: Berlin, Germany, 1989.
  4. Enright, W.H. Improving the efficiency of matrix operations in the numerical solution of stiff ordinary differential equations. ACM Trans. Math. Softw. 1978, 4, 127–136.
  5. Aliev, F.A.; Larin, V.B. Optimization of Linear Control Systems: Analytical Methods and Computational Algorithms; Stability Control Theory, Methods Applications; CRC Press: Boca Raton, FL, USA, 1998.
  6. Calvetti, D.; Reichel, L. Application of ADI iterative methods to the restoration of noisy images. SIAM J. Matrix Anal. Appl. 1996, 17, 165–186.
  7. Duan, G.R. Eigenstructure assignment in descriptor systems via output feedback: A new complete parametric approach. Int. J. Control 1999, 72, 345–364.
  8. Duan, G.R. Parametric approaches for eigenstructure assignment in high-order linear systems. Int. J. Control Autom. Syst. 2005, 3, 419–429.
  9. Kim, Y.; Kim, H.S. Eigenstructure assignment algorithm for second order systems. J. Guid. Control Dyn. 1999, 22, 729–731.
  10. Fletcher, L.R.; Kuatsky, J.; Nichols, N.K. Eigenstructure assignment in descriptor systems. IEEE Trans. Autom. Control 1986, 31, 1138–1141.
  11. Frank, P.M. Fault diagnosis in dynamic systems using analytical and knowledge-based redundancy: A survey and some new results. Automatica 1990, 26, 459–474.
  12. Epton, M. Methods for the solution of AXD − BXC = E and its applications in the numerical solution of implicit ordinary differential equations. BIT Numer. Math. 1980, 20, 341–345.
  13. Zhou, B.; Duan, G.R. On the generalized Sylvester mapping and matrix equations. Syst. Control Lett. 2008, 57, 200–208.
  14. Horn, R.; Johnson, C. Topics in Matrix Analysis; Cambridge University Press: New York, NY, USA, 1991.
  15. Kilicman, A.; Al Zhour, Z.A. Vector least-squares solutions for coupled singular matrix equations. Comput. Appl. Math. 2007, 206, 1051–1069.
  16. Simoncini, V. Computational methods for linear matrix equations. SIAM Rev. 2016, 58, 377–441.
  17. Hajarian, M. Developing BiCG and BiCR methods to solve generalized Sylvester-transpose matrix equations. Int. J. Autom. Comput. 2014, 11, 25–29.
  18. Hajarian, M. Matrix form of the CGS method for solving general coupled matrix equations. Appl. Math. Lett. 2014, 34, 37–42.
  19. Hajarian, M. Generalized conjugate direction algorithm for solving the general coupled matrix equations over symmetric matrices. Numer. Algorithms 2016, 73, 591–609.
  20. Dehghan, M.; Mohammadi-Arani, R. Generalized product-type methods based on Bi-conjugate gradient (GPBiCG) for solving shifted linear systems. Comput. Appl. Math. 2017, 36, 1591–1606.
  21. Zadeh, N.A.; Tajaddini, A.; Wu, G. Weighted and deflated global GMRES algorithms for solving large Sylvester matrix equations. Numer. Algorithms 2019, 82, 155–181.
  22. Kittisopaporn, A.; Chansangiam, P.; Lewkeeratiyukul, W. Convergence analysis of gradient-based iterative algorithms for a class of rectangular Sylvester matrix equation based on Banach contraction principle. Adv. Differ. Equ. 2021, 2021, 17.
  23. Boonruangkan, N.; Chansangiam, P. Convergence analysis of a gradient iterative algorithm with optimal convergence factor for a generalized Sylvester-transpose matrix equation. AIMS Math. 2021, 6, 8477–8496.
  24. Zhang, X.; Sheng, X. The relaxed gradient based iterative algorithm for the symmetric (skew symmetric) solution of the Sylvester equation AX + XB = C. Math. Probl. Eng. 2017, 2017.
  25. Xie, Y.J.; Ma, C.F. The accelerated gradient based iterative algorithm for solving a class of generalized Sylvester transpose matrix equation. Appl. Math. Comp. 2016, 273, 1257–1269.
  26. Tian, Z.; Tian, M.; Gu, C.; Hao, X. An accelerated Jacobi-gradient based iterative algorithm for solving Sylvester matrix equations. Filomat 2017, 31, 2381–2390.
  27. Sasaki, N.; Chansangiam, P. Modified Jacobi-gradient iterative method for generalized Sylvester matrix equation. Symmetry 2020, 12, 1831.
  28. Kittisopaporn, A.; Chansangiam, P. Gradient-descent iterative algorithm for solving a class of linear matrix equations with applications to heat and Poisson equations. Adv. Differ. Equ. 2020, 2020, 324.
  29. Heyouni, M.; Saberi-Movahed, F.; Tajaddini, A. On global Hessenberg based methods for solving Sylvester matrix equations. Comp. Math. Appl. 2018, 2019, 77–92.
  30. Hajarian, M. Extending the CGLS algorithm for least squares solutions of the generalized Sylvester-transpose matrix equations. J. Franklin Inst. 2016, 353, 1168–1185.
  31. Xie, L.; Ding, J.; Ding, F. Gradient based iterative solutions for general linear matrix equations. Comput. Math. Appl. 2009, 58, 1441–1448.
  32. Kittisopaporn, A.; Chansangiam, P. Approximated least-squares solutions of a generalized Sylvester-transpose matrix equation via gradient-descent iterative algorithm. Adv. Differ. Equ. 2021, 2021, 266.
  33. Tansri, K.; Choomklang, S.; Chansangiam, P. Conjugate gradient algorithm for consistent generalized Sylvester-transpose matrix equations. AIMS Math. 2022, 7, 5386–5407.
  34. Wang, M.; Cheng, X. Iterative algorithms for solving the matrix equation AXB + CX^TD = E. Appl. Math. Comput. 2007, 187, 622–629.
  35. Chen, X.; Ji, J. The minimum-norm least-squares solution of a linear system and symmetric rank-one updates. Electron. J. Linear Algebra 2011, 22, 480–489.
Figure 1. The logarithm of the relative error ‖R_r‖_F for Example 1.
Figure 2. The logarithm of the relative error for Example 2.
Table 1. Relative error and computational time for Example 1.

Method    Iterations   CPU (s)    ‖R_r‖_F
CG        20           0.199308   6.407766
GI        20           0.129715   10.907665
LSI       20           0.179449   14.390460
TAUOpt    20           0.073866   7.806273
Direct    –            7.048632   0
Table 2. Relative error and computational time for Example 2.

V_0            Iterations   CPU (s)    ‖R_r‖_F
0              6            0.036523   0.000008
0.02 × ones    11           0.036540   0.000009
−0.01 × I      10           0.038425   0.000009
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
