PHSS Iterative Method for Solving Generalized Lyapunov Equations †

Department of Mathematics, College of Sciences, Northeastern University, Shenyang 110819, China
*
Author to whom correspondence should be addressed.
Project supported by the National Natural Science Foundation of China (No. 11371081) and the Natural Science Foundation of Liaoning Province (No. 20170540323).
Mathematics 2019, 7(1), 38; https://doi.org/10.3390/math7010038
Submission received: 16 November 2018 / Revised: 18 December 2018 / Accepted: 26 December 2018 / Published: 3 January 2019

Abstract:
Based on previous research results, we propose a new preconditioned HSS iteration method (PHSS) for the generalized Lyapunov equation. We also give the corresponding inexact PHSS algorithm (IPHSS) from the viewpoint of application. A convergence proof is given for each of the new methods. Numerical experiments comparing the new method with existing methods show an obvious improvement. The feasibility and effectiveness of the proposed method are thus demonstrated both theoretically and computationally.

1. Introduction

We consider the system of large sparse linear equations
$$A x = b, \qquad (1)$$
where $A \in \mathbb{C}^{n \times n}$ is a non-Hermitian positive definite matrix and $x, b \in \mathbb{C}^{n}$. The practical background of such problems can be found in [1,2,3,4,5,6,7] and the references therein. For (1), Bai, Golub and Ng put forward the HSS iteration method in 2003 [8].
Any matrix can be decomposed into the sum of a Hermitian matrix and a skew-Hermitian matrix, so we can write
$$A = H(A) + S(A) = (\alpha I + H(A)) - (\alpha I - S(A)) = (\alpha I + S(A)) - (\alpha I - H(A)),$$
where $\alpha$ is a positive constant, $H(A) = \frac{1}{2}(A + A^{*})$, $S(A) = \frac{1}{2}(A - A^{*})$, and $H(A), S(A) \in \mathbb{C}^{n \times n}$. Based on this splitting, the HSS iterative scheme proposed by Bai et al. is:
Let $x^{(0)} \in \mathbb{C}^{n}$ be an initial guess. For $k = 0, 1, 2, \ldots$, until the sequence of iterates $\{x^{(k)}\}$ converges, compute the next iterate $x^{(k+1)}$ through the following procedure:
$$\begin{cases} (\alpha I + H(A))\, x^{(k+\frac{1}{2})} = (\alpha I - S(A))\, x^{(k)} + b, \\ (\alpha I + S(A))\, x^{(k+1)} = (\alpha I - H(A))\, x^{(k+\frac{1}{2})} + b, \end{cases}$$
where $\alpha$ is a positive constant. Bai et al. proved in [8] that this iteration converges unconditionally to the unique solution of (1).
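As a concrete illustration, the two half-steps above can be sketched in a few lines of NumPy (a minimal dense-matrix sketch; the test matrix, the choice $\alpha = 2$ and the tolerances are our own illustrative assumptions, not taken from [8]):

```python
import numpy as np

def hss(A, b, alpha, tol=1e-10, max_iter=500):
    """HSS iteration for A x = b: alternate solves with the shifted
    Hermitian part H(A) and the shifted skew-Hermitian part S(A)."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2           # Hermitian part H(A)
    S = (A - A.conj().T) / 2           # skew-Hermitian part S(A)
    I = np.eye(n)
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x, k

# toy non-Hermitian positive definite system (illustrative only)
A = np.array([[4.0, 1.0], [-1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = hss(A, b, alpha=2.0)
```

The point of the splitting is visible here: each sweep only requires solving one shifted Hermitian system and one shifted skew-Hermitian system.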
In order to speed up the HSS iteration method, Bai et al. put forward the PHSS iteration method [9,10,11]. Splitting the coefficient matrix $A$ with respect to a Hermitian positive definite matrix $P(A) \in \mathbb{C}^{n \times n}$ gives
$$A = (\alpha P(A) + H(A)) - (\alpha P(A) - S(A)) = (\alpha P(A) + S(A)) - (\alpha P(A) - H(A)), \qquad (2)$$
which leads to the PHSS iterative scheme:
$$\begin{cases} (\alpha P(A) + H(A))\, x^{(k+\frac{1}{2})} = (\alpha P(A) - S(A))\, x^{(k)} + b, \\ (\alpha P(A) + S(A))\, x^{(k+1)} = (\alpha P(A) - H(A))\, x^{(k+\frac{1}{2})} + b, \end{cases}$$
where $\alpha$ is a positive constant. Bai et al. proved in [10] that this iteration converges unconditionally to the unique solution of (1).

2. The PHSS Iterative Method for the Generalized Lyapunov Equation

Many methods for solving the standard Lyapunov equation have been put forward in [12,13,14,15,16,17,18,19]. In [12], Xu and Dai put forward the HSS iterative solution of the generalized Lyapunov equation. Inspired by this work, this paper proposes the PHSS iterative solution of the generalized Lyapunov equation.
Consider the generalized Lyapunov equation
$$A X + X A^{T} + \sum_{j=1}^{m} N_j X N_j^{T} + C = 0, \qquad (3)$$
where $A, N_j, C \in \mathbb{R}^{n \times n}$, $A$ is a nonsymmetric positive definite matrix, $C$ is a symmetric matrix and $\|N_j\|_2 \le \|A\|_2$ $(j = 1, 2, \ldots, m)$. When all $N_j = 0$, Equation (3) degenerates to the standard Lyapunov equation.
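For intuition, Equation (3) can be checked numerically through its Kronecker (vec) form, $(I \otimes A + A \otimes I + \sum_j N_j \otimes N_j)\,\mathrm{vec}(X) = -\mathrm{vec}(C)$. Below is a minimal NumPy sketch; the small matrices are our own illustrative choices, not data from the paper:

```python
import numpy as np

# illustrative data: A nonsymmetric positive definite, C symmetric, one small N
A = np.array([[3.0, 0.4, 0.0],
              [-0.4, 3.5, 0.2],
              [0.1, -0.2, 4.0]])
Ns = [np.array([[0.05, 0.01, 0.0],
                [0.0, 0.05, 0.01],
                [0.0, 0.0, 0.05]])]
C = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.3],
              [0.0, 0.3, 1.5]])

n = A.shape[0]
I = np.eye(n)
# vec(AX + XA^T + sum_j N_j X N_j^T) = (I(x)A + A(x)I + sum_j N_j(x)N_j) vec(X)
K = np.kron(I, A) + np.kron(A, I) + sum(np.kron(N, N) for N in Ns)
x = np.linalg.solve(K, -C.flatten(order="F"))   # column-stacking vec
X = x.reshape((n, n), order="F")
residual = A @ X + X @ A.T + sum(N @ X @ N.T for N in Ns) + C
```

Such a direct Kronecker solve is only feasible for tiny $n$, which is exactly why iterative methods like PHSS are of interest.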
We now apply the PHSS iterative method to the generalized Lyapunov Equation (3). Let $\alpha$ be a positive constant; then $A$ and $A^{T}$ admit splittings similar to (2):
$$A = (\alpha P(A) + H(A)) - (\alpha P(A) - S(A)) = (\alpha P(A) + S(A)) - (\alpha P(A) - H(A)),$$
$$A^{T} = (\alpha P(A) + H(A)) - (\alpha P(A) + S(A)) = (\alpha P(A) - S(A)) - (\alpha P(A) - H(A)).$$
The following iterative scheme is then obtained:
$$\begin{cases} (\alpha P(A) + H(A)) X_{k+\frac{1}{2}} + X_{k+\frac{1}{2}} (\alpha P(A) + H(A)) = (\alpha P(A) - S(A)) X_k + X_k (\alpha P(A) + S(A)) - \sum_{j=1}^{m} N_j X_k N_j^{T} - C, \\ (\alpha P(A) + S(A)) X_{k+1} + X_{k+1} (\alpha P(A) - S(A)) = (\alpha P(A) - H(A)) X_{k+\frac{1}{2}} + X_{k+\frac{1}{2}} (\alpha P(A) - H(A)) - \sum_{j=1}^{m} N_j X_k N_j^{T} - C. \end{cases} \qquad (4)$$
By the properties of the Kronecker product, this is equivalent to
$$\begin{cases} \big(I \otimes (\alpha P(A) + H(A)) + (\alpha P(A) + H(A)) \otimes I\big) x_{k+\frac{1}{2}} = \big(I \otimes (\alpha P(A) - S(A)) + (\alpha P(A) - S(A)) \otimes I\big) x_k - \sum_{j=1}^{m} (N_j \otimes N_j)\, x_k - c, \\ \big(I \otimes (\alpha P(A) + S(A)) + (\alpha P(A) + S(A)) \otimes I\big) x_{k+1} = \big(I \otimes (\alpha P(A) - H(A)) + (\alpha P(A) - H(A)) \otimes I\big) x_{k+\frac{1}{2}} - \sum_{j=1}^{m} (N_j \otimes N_j)\, x_k - c, \end{cases}$$
where $x_k = \mathrm{vec}(X_k)$ and $c = \mathrm{vec}(C)$. This can be written compactly as
$$\begin{cases} (\alpha P + H)\, x_{k+\frac{1}{2}} = (\alpha P - S)\, x_k - \sum_{j=1}^{m} (N_j \otimes N_j)\, x_k - c, \\ (\alpha P + S)\, x_{k+1} = (\alpha P - H)\, x_{k+\frac{1}{2}} - \sum_{j=1}^{m} (N_j \otimes N_j)\, x_k - c, \end{cases} \qquad (5)$$
where
$$P = I \otimes P(A) + P(A) \otimes I, \quad H = I \otimes H(A) + H(A) \otimes I, \quad S = I \otimes S(A) + S(A) \otimes I.$$
The convergence of the iterative scheme (4) is equivalent to the convergence of the iterative scheme (5) and their convergence factors are the same.
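The scheme (5) can be prototyped directly in its vec form. The dense sketch below is for small $n$ only; the toy data, the choice $P(A) = \mathrm{diag}(A)$ and $\alpha = 1$ are our own illustrative assumptions:

```python
import numpy as np

def phss_gen_lyap(A, Ns, C, P_A, alpha, tol=1e-10, max_iter=2000):
    """PHSS iteration (5) on the Kronecker form of
    A X + X A^T + sum_j N_j X N_j^T + C = 0."""
    n = A.shape[0]
    I = np.eye(n)
    HA, SA = (A + A.T) / 2, (A - A.T) / 2
    P = np.kron(I, P_A) + np.kron(P_A, I)
    H = np.kron(I, HA) + np.kron(HA, I)
    S = np.kron(I, SA) + np.kron(SA, I)
    NN = sum(np.kron(N, N) for N in Ns)
    c = C.flatten(order="F")
    F = H + S + NN                     # full system matrix: F x = -c
    x = np.zeros(n * n)
    for k in range(1, max_iter + 1):
        x_half = np.linalg.solve(alpha * P + H,
                                 (alpha * P - S) @ x - NN @ x - c)
        x = np.linalg.solve(alpha * P + S,
                            (alpha * P - H) @ x_half - NN @ x - c)
        if np.linalg.norm(F @ x + c) <= tol * np.linalg.norm(c):
            break
    return x.reshape((n, n), order="F"), k

A = np.array([[3.0, 0.4, 0.0], [-0.4, 3.5, 0.2], [0.1, -0.2, 4.0]])
Ns = [0.05 * np.eye(3)]
C = np.array([[1.0, 0.5, 0.0], [0.5, 2.0, 0.3], [0.0, 0.3, 1.5]])
X, iters = phss_gen_lyap(A, Ns, C, P_A=np.diag(np.diag(A)), alpha=1.0)
residual = A @ X + X @ A.T + Ns[0] @ X @ Ns[0].T + C
```

Note that the second half-step reuses $\sum_j (N_j \otimes N_j)\, x_k$ from the previous iterate, exactly as in (5).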
Theorem 1.
Let us suppose that $A \in \mathbb{R}^{n \times n}$ is a nonsymmetric positive definite matrix, let $K = \|P^{-1}\|_2 \big\| \sum_{j=1}^{m} N_j \otimes N_j \big\|_2$, and let $\lambda_{\max}$ and $\lambda_{\min}$ be the maximum and minimum eigenvalues of the matrix $P^{-1} H$, respectively. Then the convergence factor of the PHSS iterative method (4) is the spectral radius of the matrix
$$G = (\alpha P + S)^{-1} (\alpha P - H)(\alpha P + H)^{-1} (\alpha P - S) - 2\alpha\, (\alpha P + S)^{-1} P (\alpha P + H)^{-1} \sum_{j=1}^{m} (N_j \otimes N_j).$$
Its upper bound is
$$\sigma_0(\alpha) = \max_{\lambda_i \in \lambda(P^{-1}H)} \left| \frac{\alpha - \lambda_i}{\alpha + \lambda_i} \right| + \frac{2K}{\alpha + \lambda_{\min}}.$$
When $\lambda_{\min} > K$ and $\tilde{\alpha} = \sqrt{\lambda_{\min}\lambda_{\max}}$, $\sigma_0(\tilde{\alpha})$ attains the minimum, namely
$$\sigma_0(\tilde{\alpha}) = \frac{\sqrt{\lambda_{\min}\lambda_{\max}} - \lambda_{\min} + 2K}{\sqrt{\lambda_{\min}\lambda_{\max}} + \lambda_{\min}} < 1.$$
Therefore, the PHSS iterative method for solving the generalized Lyapunov equation is convergent.
Proof. 
Substituting the first equation of the scheme (5) into the second, we obtain the iteration matrix
$$G = (\alpha P + S)^{-1} (\alpha P - H)(\alpha P + H)^{-1} (\alpha P - S) - 2\alpha\, (\alpha P + S)^{-1} P (\alpha P + H)^{-1} \sum_{j=1}^{m} (N_j \otimes N_j).$$
Then the convergence factor of the iterative scheme (5) is $\rho(G)$, which is the same as that of the iterative scheme (4).
Because $P$ is a symmetric positive definite matrix, we can define
$$\tilde{H} = P^{-\frac{1}{2}} H P^{-\frac{1}{2}}, \qquad \tilde{S} = P^{-\frac{1}{2}} S P^{-\frac{1}{2}}.$$
Because $G$ is similar to
$$G_1 = (\alpha P + S)\, G\, (\alpha P + S)^{-1} = (\alpha P - H)(\alpha P + H)^{-1}(\alpha P - S)(\alpha P + S)^{-1} - 2\alpha\, P (\alpha P + H)^{-1} \sum_{j=1}^{m} (N_j \otimes N_j)\, (\alpha P + S)^{-1}$$
and $G_1$ is similar to
$$G_2 = P^{-\frac{1}{2}} G_1 P^{\frac{1}{2}} = (\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1}(\alpha I - \tilde{S})(\alpha I + \tilde{S})^{-1} - 2\alpha\, (\alpha I + \tilde{H})^{-1} P^{-\frac{1}{2}} \sum_{j=1}^{m} (N_j \otimes N_j)\, P^{-\frac{1}{2}} (\alpha I + \tilde{S})^{-1},$$
we can get that
$$\rho(G) = \rho(G_1) = \rho(G_2) \le \|G_2\|_2 \le \big\|(\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1}(\alpha I - \tilde{S})(\alpha I + \tilde{S})^{-1}\big\|_2 + 2\alpha \big\|(\alpha I + \tilde{H})^{-1}\big\|_2 \big\|P^{-\frac{1}{2}}\big\|_2^2 \Big\| \sum_{j=1}^{m} (N_j \otimes N_j) \Big\|_2 \big\|(\alpha I + \tilde{S})^{-1}\big\|_2 = \big\|(\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1}(\alpha I - \tilde{S})(\alpha I + \tilde{S})^{-1}\big\|_2 + 2\alpha \big\|(\alpha I + \tilde{H})^{-1}\big\|_2 \|P^{-1}\|_2 \Big\| \sum_{j=1}^{m} (N_j \otimes N_j) \Big\|_2 \big\|(\alpha I + \tilde{S})^{-1}\big\|_2. \qquad (6)$$
Because $H$ is a positive definite matrix and $S$ is a positive semidefinite matrix, for any non-zero column vector $x \in \mathbb{R}^{n}$ we have
$$x^{T} H x > 0, \qquad x^{T} S x \ge 0.$$
Since $P$ is symmetric positive definite, so is $P^{-1}$, and it is easy to show by contradiction that $P^{-\frac{1}{2}} x$ is a non-zero column vector. Then
$$x^{T} \tilde{H} x = x^{T} P^{-\frac{1}{2}} H P^{-\frac{1}{2}} x = (P^{-\frac{1}{2}} x)^{T} H (P^{-\frac{1}{2}} x) > 0, \qquad x^{T} \tilde{S} x = x^{T} P^{-\frac{1}{2}} S P^{-\frac{1}{2}} x = (P^{-\frac{1}{2}} x)^{T} S (P^{-\frac{1}{2}} x) \ge 0.$$
Therefore, $\tilde{H}$ is a positive definite matrix and $\tilde{S}$ is a positive semidefinite matrix.
$H$ is a real symmetric matrix and $S$ is a skew-symmetric matrix, so we see that
$$\tilde{H}^{T} = (P^{-\frac{1}{2}} H P^{-\frac{1}{2}})^{T} = P^{-\frac{1}{2}} H^{T} P^{-\frac{1}{2}} = P^{-\frac{1}{2}} H P^{-\frac{1}{2}} = \tilde{H}, \qquad \tilde{S}^{T} = (P^{-\frac{1}{2}} S P^{-\frac{1}{2}})^{T} = P^{-\frac{1}{2}} S^{T} P^{-\frac{1}{2}} = -P^{-\frac{1}{2}} S P^{-\frac{1}{2}} = -\tilde{S}.$$
Therefore, $\tilde{H}$ is a symmetric positive definite matrix and $\tilde{S}$ is a skew-symmetric matrix. Meanwhile, since
$$P^{-\frac{1}{2}} \tilde{H} P^{\frac{1}{2}} = P^{-\frac{1}{2}} P^{-\frac{1}{2}} H P^{-\frac{1}{2}} P^{\frac{1}{2}} = P^{-1} H, \qquad P^{-\frac{1}{2}} \tilde{S} P^{\frac{1}{2}} = P^{-\frac{1}{2}} P^{-\frac{1}{2}} S P^{-\frac{1}{2}} P^{\frac{1}{2}} = P^{-1} S,$$
we conclude that $\tilde{H}$ is similar to $P^{-1} H$ and $\tilde{S}$ is similar to $P^{-1} S$.
Let us suppose that $Q = (\alpha I - \tilde{S})(\alpha I + \tilde{S})^{-1}$. Then
$$Q Q^{T} = (\alpha I - \tilde{S})(\alpha I + \tilde{S})^{-1} \big((\alpha I - \tilde{S})(\alpha I + \tilde{S})^{-1}\big)^{T} = (\alpha I - \tilde{S})(\alpha I + \tilde{S})^{-1} (\alpha I - \tilde{S})^{-1} (\alpha I + \tilde{S}) = (\alpha I - \tilde{S})(\alpha I - \tilde{S})^{-1} (\alpha I + \tilde{S})^{-1} (\alpha I + \tilde{S}) = I.$$
It is just as easy to deduce that $Q^{T} Q = I$, so $Q Q^{T} = Q^{T} Q = I$ and $Q$ is an orthogonal matrix. Multiplying by $Q$ does not change the spectral norm, so
$$\big\|(\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1}(\alpha I - \tilde{S})(\alpha I + \tilde{S})^{-1}\big\|_2 = \big\|(\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1}\big\|_2. \qquad (7)$$
Let us suppose that $L = (\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1}$. Then
$$L L^{T} = (\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1} \big((\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1}\big)^{T} = (\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1}(\alpha I + \tilde{H})^{-1}(\alpha I - \tilde{H}) = (\alpha I + \tilde{H})^{-1}(\alpha I - \tilde{H})(\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1} = L^{T} L.$$
Therefore, $L$ is a normal matrix, and by Formula (7) we deduce that
$$\big\|(\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1}(\alpha I - \tilde{S})(\alpha I + \tilde{S})^{-1}\big\|_2 = \big\|(\alpha I - \tilde{H})(\alpha I + \tilde{H})^{-1}\big\|_2 = \max_{\lambda_i \in \lambda(\tilde{H})} \left| \frac{\alpha - \lambda_i}{\alpha + \lambda_i} \right|. \qquad (8)$$
It is easy to see that
$$(\alpha I + \tilde{H})^{-1} \big((\alpha I + \tilde{H})^{-1}\big)^{T} = (\alpha I + \tilde{H})^{-1} (\alpha I + \tilde{H})^{-1} = \big((\alpha I + \tilde{H})^{-1}\big)^{T} (\alpha I + \tilde{H})^{-1},$$
$$(\alpha I + \tilde{S})^{-1} \big((\alpha I + \tilde{S})^{-1}\big)^{T} = (\alpha I + \tilde{S})^{-1} (\alpha I - \tilde{S})^{-1} = (\alpha I - \tilde{S})^{-1} (\alpha I + \tilde{S})^{-1} = \big((\alpha I + \tilde{S})^{-1}\big)^{T} (\alpha I + \tilde{S})^{-1},$$
so both $(\alpha I + \tilde{H})^{-1}$ and $(\alpha I + \tilde{S})^{-1}$ are normal matrices. Because $\tilde{H}$ is positive definite and the eigenvalues $\mu_j$ of the skew-symmetric matrix $\tilde{S}$ are purely imaginary, it is easy to deduce that
$$\big\|(\alpha I + \tilde{H})^{-1}\big\|_2 = \max_{\lambda_i \in \lambda(\tilde{H})} \left| \frac{1}{\alpha + \lambda_i} \right| = \frac{1}{\alpha + \lambda_{\min}}, \qquad \big\|(\alpha I + \tilde{S})^{-1}\big\|_2 = \max_{\mu_j \in \lambda(\tilde{S})} \left| \frac{1}{\alpha + \mu_j} \right| \le \frac{1}{\alpha}. \qquad (9)$$
Through the Formula (6), (8) and (9), we can see that
ρ ( G ) max λ i λ ( H ˜ ) | α λ i α + λ i | + 2 K α + λ min .
The following proves that when $\lambda_{\min} > K$ and $\alpha = \tilde{\alpha} = \sqrt{\lambda_{\min}\lambda_{\max}}$, $\sigma_0(\alpha)$ reaches the minimum and $\sigma_0(\alpha)$ is less than 1 at that point.
In fact, for fixed $\alpha$, the maximum of $\left|\frac{\alpha - \lambda}{\alpha + \lambda}\right|$ over $\lambda \in [\lambda_{\min}, \lambda_{\max}]$ is attained at an endpoint, so
$$\sigma_0(\alpha) = \max\left\{ \left|\frac{\alpha - \lambda_{\min}}{\alpha + \lambda_{\min}}\right| + \frac{2K}{\alpha + \lambda_{\min}},\ \left|\frac{\alpha - \lambda_{\max}}{\alpha + \lambda_{\max}}\right| + \frac{2K}{\alpha + \lambda_{\min}} \right\}.$$
It is easy to see that, when $\lambda_{\min} > K$, the first expression decreases monotonically on $(0, \lambda_{\min})$ and increases monotonically on $(\lambda_{\min}, +\infty)$, while the second expression decreases monotonically on $(0, \lambda_{\max})$ and increases monotonically on $(\lambda_{\max}, +\infty)$. Therefore, $\sigma_0(\alpha)$ reaches its minimum at the point $\lambda_{\min} < \alpha < \lambda_{\max}$ where
$$\frac{\alpha - \lambda_{\min}}{\alpha + \lambda_{\min}} + \frac{2K}{\alpha + \lambda_{\min}} = \frac{\lambda_{\max} - \alpha}{\alpha + \lambda_{\max}} + \frac{2K}{\alpha + \lambda_{\min}},$$
that is, at $\alpha = \sqrt{\lambda_{\min}\lambda_{\max}} \equiv \tilde{\alpha}$, and then
$$\sigma_0(\tilde{\alpha}) = \frac{\sqrt{\lambda_{\min}\lambda_{\max}} - \lambda_{\min} + 2K}{\sqrt{\lambda_{\min}\lambda_{\max}} + \lambda_{\min}} < 1.$$
From the expression of $\sigma_0(\alpha)$ we see that, on the one hand, $\sigma_0(\alpha)$ increases monotonically on $(\tilde{\alpha}, +\infty)$ and $\sigma_0(\alpha) \to 1$ as $\alpha \to +\infty$, so $\sigma_0(\alpha) < 1$ for all $\alpha \ge \tilde{\alpha}$; on the other hand, $\sigma_0(\alpha)$ decreases monotonically on $(0, \tilde{\alpha})$, so $\sigma_0(\alpha) < 1$ also holds for all $\alpha$ in an interval to the left of $\tilde{\alpha}$. Summing up, the PHSS iterative method for the generalized Lyapunov equation is convergent, and the upper bound of the convergence factor is $\sigma_0(\alpha)$, which depends only on $K = \|P^{-1}\|_2 \big\|\sum_{j=1}^{m} (N_j \otimes N_j)\big\|_2$ and the eigenvalues of the matrix $P^{-1}H$. In addition, when $\alpha = \tilde{\alpha}$, the upper bound $\sigma_0(\alpha)$ of the convergence factor of the PHSS iterative method for the generalized Lyapunov Equation (3) is minimal, but the convergence factor $\rho(G)$ does not necessarily reach its minimum there; that is to say, when $\alpha = \tilde{\alpha}$, the PHSS iteration does not necessarily converge fastest. How to obtain the optimal parameter needs further study.
In practice, the iterative parameter $\alpha$ may be taken as $\alpha = \tilde{\alpha}$. Because $H = I \otimes H(A) + H(A) \otimes I$, we get
$$\lambda_{\min} = 2\lambda_{\min}(H(A)), \qquad \lambda_{\max} = 2\lambda_{\max}(H(A)).$$
Therefore,
$$\tilde{\alpha} = \sqrt{\lambda_{\min}\lambda_{\max}} = \sqrt{2\lambda_{\min}(H(A)) \cdot 2\lambda_{\max}(H(A))} = 2\sqrt{\lambda_{\min}(H(A))\,\lambda_{\max}(H(A))}.$$
To sum up, the PHSS iterative method is convergent for any generalized Lyapunov Equation (3) satisfying the stated conditions. □
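In code, this practical choice of the parameter reduces to two extreme eigenvalues of the symmetric part of $A$ (a sketch; the matrix below is an arbitrary illustrative example of our own):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [-1.0, 5.0, 0.5],
              [0.0, -0.5, 6.0]])
HA = (A + A.T) / 2                    # symmetric part H(A)
eigs = np.linalg.eigvalsh(HA)         # eigenvalues, sorted ascending
# alpha_tilde = 2 * sqrt(lambda_min(H(A)) * lambda_max(H(A)))
alpha_tilde = 2.0 * np.sqrt(eigs[0] * eigs[-1])
```

For this matrix, $H(A)$ is diagonal with eigenvalues 4, 5, 6, so the formula gives $\tilde{\alpha} = 2\sqrt{24}$.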

3. Inexact PHSS (IPHSS) Iterative Algorithm

In order to reduce the computational cost of the HSS iterative method for the generalized Lyapunov equation, Xu and Dai proposed an IHSS iteration method in [12]. Similarly, an IPHSS iteration method for the generalized Lyapunov equation can be derived from the PHSS iteration method.
Taking $X_k$ as the initial value, the following Lyapunov equation is solved approximately by an iterative method to obtain $X_{k+\frac{1}{2}}$:
$$(\alpha P(A) + H(A)) X_{k+\frac{1}{2}} + X_{k+\frac{1}{2}} (\alpha P(A) + H(A)) \approx (\alpha P(A) - S(A)) X_k + X_k (\alpha P(A) + S(A)) - \sum_{j=1}^{m} N_j X_k N_j^{T} - C. \qquad (10)$$
Because the coefficient matrix of the Lyapunov Equation (10) is symmetric and positive definite, the approximate solution can be obtained by the CG algorithm.
Next, using $X_{k+\frac{1}{2}}$ as the initial value, we solve the following Lyapunov equation approximately to get $X_{k+1}$:
$$(\alpha P(A) + S(A)) X_{k+1} + X_{k+1} (\alpha P(A) - S(A)) \approx (\alpha P(A) - H(A)) X_{k+\frac{1}{2}} + X_{k+\frac{1}{2}} (\alpha P(A) - H(A)) - \sum_{j=1}^{m} N_j X_{k+\frac{1}{2}} N_j^{T} - C. \qquad (11)$$
For Lyapunov Equation (11), the approximate solution can be obtained by CGNE algorithm. Similar to the inexact HSS iterative method for solving the generalized Lyapunov equation in the literature [12], the inexact PHSS iteration method for solving the generalized Lyapunov equation can be summarized as Algorithm 1 as follow:
Algorithm 1. (Inexact PHSS Algorithm)
Given an initial value $X_0 \in \mathbb{R}^{n \times n}$, for $k = 0, 1, \ldots$ compute $X_{k+1}$ until the accuracy requirement is met.
(i)
Approximately solve
$$(\alpha P(A) + H(A)) Z_{k+\frac{1}{2}} + Z_{k+\frac{1}{2}} (\alpha P(A) + H(A)) = -R_k,$$
where
$$R_k = A X_k + X_k A^{T} + \sum_{j=1}^{m} N_j X_k N_j^{T} + C,$$
until $Z_{k+\frac{1}{2}}$ makes the corresponding residual
$$P_{k+\frac{1}{2}} = -R_k - (\alpha P(A) + H(A)) Z_{k+\frac{1}{2}} - Z_{k+\frac{1}{2}} (\alpha P(A) + H(A))$$
satisfy $\|P_{k+\frac{1}{2}}\|_2 \le \varepsilon_k \|R_k\|_2$.
(ii)
Approximately solve
$$(\alpha P(A) + S(A)) Z_{k+1} + Z_{k+1} (\alpha P(A) - S(A)) = 2\alpha \big(P(A) Z_{k+\frac{1}{2}} + Z_{k+\frac{1}{2}} P(A)\big),$$
until $Z_{k+1}$ makes the corresponding residual
$$Q_{k+1} = 2\alpha \big(P(A) Z_{k+\frac{1}{2}} + Z_{k+\frac{1}{2}} P(A)\big) - (\alpha P(A) + S(A)) Z_{k+1} - Z_{k+1} (\alpha P(A) - S(A))$$
satisfy $\|Q_{k+1}\|_2 \le 2\alpha \eta_k \big\|P(A) Z_{k+\frac{1}{2}} + Z_{k+\frac{1}{2}} P(A)\big\|_2$.
(iii)
Compute $X_{k+1} = X_k + Z_{k+1}$.
In Algorithm 1, $\varepsilon_k$ and $\eta_k$ are used to control the accuracy of the inner iterations; the stopping criterion of step (ii) is chosen only to make the following convergence theorem more concise. In fact, the criterion can be changed to $\|q_{k+1}\|_2 \le \eta_k \|P z_{k+\frac{1}{2}}\|_2$.
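Algorithm 1 can be prototyped in its vec form with hand-rolled CG and CGNE inner solvers. This is a dense sketch for small problems only; the toy data, the fixed tolerances $\varepsilon_k = \eta_k = 0.01$ and the choice $P(A) = \mathrm{diag}(A)$ are our own illustrative assumptions:

```python
import numpy as np

def cg(M, rhs, tol, max_iter=500):
    """Conjugate gradients for a symmetric positive definite system M x = rhs."""
    x = np.zeros_like(rhs)
    r = rhs.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) <= tol * np.linalg.norm(rhs):
            break
        Mp = M @ p
        a = rs / (p @ Mp)
        x += a * p
        r -= a * Mp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def cgne(M, rhs, tol, max_iter=500):
    """CG on the normal equations M^T M x = M^T rhs (CGNE-style, nonsymmetric M)."""
    return cg(M.T @ M, M.T @ rhs, tol, max_iter)

def iphss(A, Ns, C, P_A, alpha, eps=0.01, eta=0.01, tol=1e-8, max_iter=500):
    n = A.shape[0]
    I = np.eye(n)
    HA, SA = (A + A.T) / 2, (A - A.T) / 2
    P = np.kron(I, P_A) + np.kron(P_A, I)   # alpha*P + H is SPD -> CG
    H = np.kron(I, HA) + np.kron(HA, I)
    S = np.kron(I, SA) + np.kron(SA, I)     # alpha*P + S is nonsymmetric -> CGNE
    F = H + S + sum(np.kron(N, N) for N in Ns)
    c = C.flatten(order="F")
    x = np.zeros(n * n)
    for _ in range(max_iter):
        r = F @ x + c                        # outer residual r_k
        if np.linalg.norm(r) <= tol * np.linalg.norm(c):
            break
        z_half = cg(alpha * P + H, -r, eps)                     # step (i)
        z = cgne(alpha * P + S, 2 * alpha * (P @ z_half), eta)  # step (ii)
        x = x + z                            # step (iii)
    return x.reshape((n, n), order="F")

A = np.array([[3.0, 0.4, 0.0], [-0.4, 3.5, 0.2], [0.1, -0.2, 4.0]])
Ns = [0.05 * np.eye(3)]
C = np.array([[1.0, 0.5, 0.0], [0.5, 2.0, 0.3], [0.0, 0.3, 1.5]])
X = iphss(A, Ns, C, P_A=np.diag(np.diag(A)), alpha=1.0)
residual = A @ X + X @ A.T + Ns[0] @ X @ Ns[0].T + C
```

Here the inner systems are solved only loosely, which is the point of the inexact variant: the outer iteration still contracts as long as the tolerances satisfy the condition of the next theorem.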
Theorem 2.
Let us suppose that $A \in \mathbb{R}^{n \times n}$ is a nonsymmetric positive definite matrix and that $\alpha$ is chosen according to Theorem 1 so that the PHSS iterative method converges. Let $\{X_k\}$ be the iterative sequence generated by Algorithm 1 and $X^{*}$ the exact solution of the generalized Lyapunov equation. Then
$$|||x_{k+1} - x^{*}||| \le \Big[\sigma_0(\alpha) + \frac{2\|P^{-1}\|_2\|F\|_2}{\alpha + \lambda_{\min}}\big(\varepsilon_k + \eta_k(1+\varepsilon_k)\big)\Big]\, |||x_k - x^{*}|||,$$
where $x_k = \mathrm{vec}(X_k)$, $x^{*} = \mathrm{vec}(X^{*})$, and the vector norm $|||\cdot|||$ is defined by $|||y||| = \|(\alpha I + P^{-1}S)\, y\|_2$ for any vector $y$.
In particular, if
$$\sigma_0(\alpha) + \frac{2\|P^{-1}\|_2\|F\|_2}{\alpha + \lambda_{\min}}\big(\varepsilon_{\max} + \eta_{\max}(1+\varepsilon_{\max})\big) < 1,$$
then the iterative sequence $\{x_k\}$ converges to $x^{*}$, that is, $\{X_k\}$ converges to $X^{*}$, where $\varepsilon_{\max} = \max\{\varepsilon_k\}$ and $\eta_{\max} = \max\{\eta_k\}$.
Proof. 
By the properties of the Kronecker product, the IPHSS iteration method is equivalent to
$$\begin{cases} \big[I \otimes (\alpha P(A) + H(A)) + (\alpha P(A) + H(A)) \otimes I\big] z_{k+\frac{1}{2}} = -r_k, \\ \big[I \otimes (\alpha P(A) + S(A)) + (\alpha P(A) + S(A)) \otimes I\big] z_{k+1} = 2\alpha \big(I \otimes P(A) + P(A) \otimes I\big) z_{k+\frac{1}{2}}, \end{cases}$$
where $z_{k+\frac{1}{2}} = \mathrm{vec}(Z_{k+\frac{1}{2}})$ and $r_k = \mathrm{vec}(R_k)$. The above iteration scheme is equivalent to
$$\begin{cases} (\alpha P + H)\, z_{k+\frac{1}{2}} = -r_k, \\ (\alpha P + S)\, z_{k+1} = 2\alpha P z_{k+\frac{1}{2}}, \end{cases}$$
where $P = I \otimes P(A) + P(A) \otimes I$, $H = I \otimes H(A) + H(A) \otimes I$ and $S = I \otimes S(A) + S(A) \otimes I$. Setting $p_{k+\frac{1}{2}} = \mathrm{vec}(P_{k+\frac{1}{2}})$ and $q_{k+1} = \mathrm{vec}(Q_{k+1})$, we see that
$$\begin{cases} p_{k+\frac{1}{2}} = -r_k - (\alpha P + H)\, z_{k+\frac{1}{2}}, \\ q_{k+1} = 2\alpha P z_{k+\frac{1}{2}} - (\alpha P + S)\, z_{k+1} \end{cases}$$
satisfy $\|p_{k+\frac{1}{2}}\|_2 \le \varepsilon_k \|r_k\|_2$ and $\|q_{k+1}\|_2 \le 2\alpha \eta_k \|P z_{k+\frac{1}{2}}\|_2$. Then we conclude that
$$x_{k+1} = x_k + z_{k+1} = x_k + (\alpha P + S)^{-1}\big(2\alpha P z_{k+\frac{1}{2}} - q_{k+1}\big) = x_k - 2\alpha (\alpha P + S)^{-1} P (\alpha P + H)^{-1} \big(r_k + p_{k+\frac{1}{2}}\big) - (\alpha P + S)^{-1} q_{k+1} = x_k - 2\alpha (\alpha I + P^{-1}S)^{-1} (\alpha I + P^{-1}H)^{-1} P^{-1} \big(r_k + p_{k+\frac{1}{2}}\big) - (\alpha P + S)^{-1} q_{k+1}. \qquad (12)$$
Because
$$r_k = F x_k + c = \Big(H + S + \sum_{j=1}^{m} N_j \otimes N_j\Big) x_k + c, \qquad (13)$$
substituting Formula (13) into Formula (12) yields
$$x_{k+1} = (\alpha I + P^{-1}S)^{-1} (\alpha I + P^{-1}H)^{-1} \Big[(\alpha I - P^{-1}H)(\alpha I - P^{-1}S) - 2\alpha P^{-1} \sum_{j=1}^{m} N_j \otimes N_j\Big] x_k - 2\alpha (\alpha I + P^{-1}S)^{-1} (\alpha I + P^{-1}H)^{-1} P^{-1} \big(c + p_{k+\frac{1}{2}}\big) - (\alpha P + S)^{-1} q_{k+1}.$$
Let $X^{*}$ be the exact solution of the generalized Lyapunov equation; then $x^{*}$ is the exact solution of the following two equations:
$$\begin{cases} (\alpha P + H)\, x = (\alpha P - S)\, x - \sum_{j=1}^{m} (N_j \otimes N_j)\, x - c, \\ (\alpha P + S)\, x = (\alpha P - H)\, x - \sum_{j=1}^{m} (N_j \otimes N_j)\, x - c. \end{cases} \qquad (14)$$
From the first equation in Formula (14), we see that
$$x^{*} = (\alpha P + H)^{-1} (\alpha P - S)\, x^{*} - (\alpha P + H)^{-1} \Big( \sum_{j=1}^{m} (N_j \otimes N_j)\, x^{*} + c \Big). \qquad (15)$$
We can bring the Formula (15) into the second equation of the Formula (14) and see that
x * = ( α I + P 1 S ) 1 ( α I + P 1 H ) 1 [ ( α I P 1 H ) ( α I P 1 S ) 4 α P 1 j = 1 m N j N j ] x * 2 α ( α I + P 1 S ) 1 ( α I + P 1 H ) 1 P 1 c .
As a result, we conclude that
$$x_{k+1} - x^{*} = (\alpha I + P^{-1}S)^{-1} (\alpha I + P^{-1}H)^{-1} \Big[(\alpha I - P^{-1}H)(\alpha I - P^{-1}S) - 2\alpha P^{-1} \sum_{j=1}^{m} N_j \otimes N_j\Big] (x_k - x^{*}) - 2\alpha (\alpha I + P^{-1}S)^{-1} (\alpha I + P^{-1}H)^{-1} P^{-1} p_{k+\frac{1}{2}} - (\alpha I + P^{-1}S)^{-1} P^{-1} q_{k+1}.$$
Define the vector norm $|||y||| = \|(\alpha I + P^{-1}S)\, y\|_2$ and the induced matrix norm
$$|||Y||| = \big\|(\alpha I + P^{-1}S)\, Y\, (\alpha I + P^{-1}S)^{-1}\big\|_2.$$
Because $(\alpha I + P^{-1}H)$ and $(\alpha I - P^{-1}H)$ commute, $(\alpha I + P^{-1}H)^{-1}$ and $(\alpha I - P^{-1}H)$ also commute. As a result, we conclude that
$$|||x_{k+1} - x^{*}||| \le \Big\|(\alpha I - P^{-1}H)(\alpha I + P^{-1}H)^{-1}(\alpha I - P^{-1}S)(\alpha I + P^{-1}S)^{-1} - 2\alpha (\alpha I + P^{-1}H)^{-1} P^{-1} \sum_{j=1}^{m} (N_j \otimes N_j)\, (\alpha I + P^{-1}S)^{-1}\Big\|_2\, |||x_k - x^{*}||| + 2\alpha \big\|(\alpha I + P^{-1}H)^{-1} P^{-1} p_{k+\frac{1}{2}}\big\|_2 + \big\|P^{-1} q_{k+1}\big\|_2 \le \sigma_0(\alpha)\, |||x_k - x^{*}||| + 2\alpha \|P^{-1}\|_2 \big\|(\alpha I + P^{-1}H)^{-1}\big\|_2 \|p_{k+\frac{1}{2}}\|_2 + \|P^{-1}\|_2 \|q_{k+1}\|_2,$$
where the first term is bounded by $\sigma_0(\alpha)$ exactly as in the proof of Theorem 1, after the similarity transformation by $P^{\frac{1}{2}}$.
Because $\|r_k\|_2 = \|F(x_k - x^{*})\|_2 \le \|F\|_2 \big\|(\alpha I + P^{-1}S)^{-1}\big\|_2\, |||x_k - x^{*}|||$, we see that
$$\|p_{k+\frac{1}{2}}\|_2 \le \varepsilon_k \|r_k\|_2 \le \varepsilon_k \|F\|_2 \big\|(\alpha I + P^{-1}S)^{-1}\big\|_2\, |||x_k - x^{*}|||,$$
$$\|q_{k+1}\|_2 \le 2\alpha \eta_k \|P z_{k+\frac{1}{2}}\|_2 = 2\alpha \eta_k \big\|P (\alpha P + H)^{-1} (r_k + p_{k+\frac{1}{2}})\big\|_2 \le 2\alpha \eta_k \big\|(\alpha I + P^{-1}H)^{-1}\big\|_2 \big(\|r_k\|_2 + \|p_{k+\frac{1}{2}}\|_2\big) \le 2\alpha \eta_k (1 + \varepsilon_k) \big\|(\alpha I + P^{-1}H)^{-1}\big\|_2 \|F\|_2 \big\|(\alpha I + P^{-1}S)^{-1}\big\|_2\, |||x_k - x^{*}|||.$$
Through the Formula (9), we can see that
| x k + 1 x * | σ 0 ( α ) | x k x * | + 2 α P 1 2 ( α I + P 1 H ) 1 2 p k + 1 2 2 + P 1 2 q k + 1 2 [ σ 0 ( α ) + 2 α ( ε k + η k ( 1 + ε k ) ) P 1 2 ( α I + P 1 H ) 1 2 F 2 ( α I + P 1 S ) 1 2 ] | x k x * | [ σ 0 ( α ) + 2 α ( ε k + η k ( 1 + ε k ) ) P 1 2 P 1 2 ( α I + P 1 2 H P 1 2 ) 1 P 1 2 2 F 2 P 1 2 ( α I + P 1 2 S P 1 2 ) 1 P 1 2 2 ] | x k x * | [ σ 0 ( α ) + 2 α ( ε k + η k ( 1 + ε k ) ) P 1 2 ( α I + H ˜ ) 1 2 F 2 ( α I + S ˜ ) 1 2 ] | x k x * | [ σ 0 ( α ) + 2 α ( ε k + η k ( 1 + ε k ) ) P 1 2 1 α + λ min F 2 1 α ] | x k x * | [ σ 0 ( α ) + 2 P 1 2 F 2 α + λ min ( ε k + η k ( 1 + ε k ) ) ] | x k x * | .
If we solved the Lyapunov Equations (10) and (11) exactly, the corresponding $\{\varepsilon_k\}$ and $\{\eta_k\}$ would be zero, so both $\varepsilon_{\max}$ and $\eta_{\max}$ would be zero; in that case, the convergence factor of the IPHSS iteration method is the same as that of the PHSS iteration method. Theorem 2 shows that, in order to guarantee the convergence of the IPHSS iterative method, we only need the condition
$$\sigma_0(\alpha) + \frac{2\|P^{-1}\|_2 \|F\|_2}{\alpha + \lambda_{\min}} \big(\varepsilon_k + \eta_k(1+\varepsilon_k)\big) < 1$$
to be satisfied; we do not need $\{\varepsilon_k\}$ and $\{\eta_k\}$ to tend to zero as $k$ increases. Therefore, when the generalized Lyapunov equation is solved, $\{\varepsilon_k\}$ and $\{\eta_k\}$ should be selected to keep the computational work as small as possible, while keeping the iterative factor of the IPHSS method as close as possible to the convergence factor of the PHSS method. □

4. Numerical Experiments

In this section, we test the IPHSS algorithm for solving the generalized Lyapunov equation on numerical examples.
We begin with a constructed example for a simple test of the numerical performance of the algorithm:
Example 1.
Now, we consider the generalized Lyapunov equation
$$A X + X A^{T} + N X N^{T} + C = 0,$$
where $N$ is a random matrix satisfying the condition of Theorem 1 and
$$A = I \otimes R + Q \otimes I,$$
where $\otimes$ is the Kronecker product and
$$R = \mathrm{tridiag}(2 - h,\ 8,\ 2 + h) \quad \text{and} \quad Q = \mathrm{tridiag}(2 - 2h,\ 8,\ 2 + 2h)$$
are tridiagonal matrices with $h = 1/n$. The initial guess $x^{(0)} = \mathrm{vec}(X^{(0)})$ is taken as the zero vector, and the program is executed in Matlab. The order of the coefficient matrix $A$ is $n$. The relative error is $\mathrm{Res} = \|r^{(k)}\|_2 / \|b\|_2$, the stopping criterion is $\mathrm{Res} < 10^{-6}$, $\mathrm{Iter}$ is the number of iterations and $\mathrm{CPU}$ is the iteration time. The parameter of the IHSS method is taken as $\alpha = 0.8$; the parameters of the IPHSS method are taken as $\alpha = \beta = 0.2$. The preconditioner $P(A)$ is selected as the diagonal part of the coefficient matrix $A$. The results of the IHSS and IPHSS algorithms are reported in Table 1.
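The coefficient matrix of Example 1 can be assembled as follows (a sketch; we assume the usual tridiag(sub, main, super) ordering for the reconstructed stencil values $2-h$, $8$, $2+h$, and we read the grid parameter as $h = 1/n_0$ with $A$ of order $n = n_0^2$):

```python
import numpy as np

def tridiag(n, sub, main, sup):
    """Tridiagonal matrix with constant sub-, main and super-diagonals."""
    return (np.diag(np.full(n - 1, sub), -1)
            + np.diag(np.full(n, main))
            + np.diag(np.full(n - 1, sup), 1))

n0 = 8                       # one-dimensional grid size; A has order n0**2
h = 1.0 / n0
R = tridiag(n0, 2 - h, 8.0, 2 + h)
Q = tridiag(n0, 2 - 2 * h, 8.0, 2 + 2 * h)
A = np.kron(np.eye(n0), R) + np.kron(Q, np.eye(n0))   # A = I (x) R + Q (x) I
HA = (A + A.T) / 2           # symmetric part, should be positive definite
```

With this construction the symmetric part of $A$ is positive definite, as required by Theorem 1.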
The numerical results in Table 1 show that, for this example, the IPHSS method converges faster and is more stable than the IHSS method.
The following example tests the numerical performance of the algorithm on a problem from practice:
Example 2.
Consider a finite-element discretization of a heat conduction problem [20]:
$$\dot{x} = A x + N x u + B u.$$
To compute its controllability Gramian, we need to solve the following generalized Lyapunov equation:
$$A X + X A^{T} + N X N^{T} + C = 0,$$
where
$$A = (a_{ij})_{n \times n}, \quad a_{ij} = \begin{cases} 1.6, & i = j, \\ 0.3, & |i - j| = 1, \\ 0, & \text{else}, \end{cases} \qquad N = (n_{ij})_{n \times n}, \quad n_{ij} = \begin{cases} 0.05, & i = j, \\ 0.01, & |i - j| = 1, \\ 0, & \text{else}, \end{cases}$$
$$B = A^{-1} \begin{pmatrix} O_{n_1 \times n_1} & O_{n_1 \times n_2} \\ O_{n_2 \times n_1} & I_{n_2} \end{pmatrix} A^{-1}, \qquad n_2 = n/100, \quad n_1 = n - n_2.$$
The parameter of the IHSS method is taken as $\alpha = 0.9$; the parameters of the IPHSS method are taken as $\alpha = \beta = 0.2$. The results of the IHSS and IPHSS algorithms are reported in Table 2.
The numerical results in Table 2 show that the iteration counts of both the IHSS and the IPHSS method vary little with $n$, indicating that both methods are very stable. However, the iteration count and CPU time of the IPHSS iteration are far smaller than those of the IHSS iteration, and its relative error is also smaller. Moreover, the gap in CPU time between the two methods widens as the order of the matrix grows. Thus, the IPHSS iterative method for solving the generalized Lyapunov equation is more effective than the IHSS iteration.

5. Conclusions

In this paper, a new PHSS iterative method for solving the generalized Lyapunov equation is proposed and its convergence is proved. An IPHSS algorithm for the generalized Lyapunov equation is then put forward, and its convergence is also proved. Finally, numerical experiments comparing the new method with existing methods show that the IPHSS iteration method yields an obvious improvement over the IHSS iteration method.

Author Contributions

Conceptualization, H.-L.S. and S.-Y.L.; Methodology, H.-L.S.; Software, S.-Y.L.; Validation, H.-L.S., S.-Y.L. and X.-H.S.; Formal Analysis, H.-L.S.; Data Curation, S.-Y.L.; Writing-Original Draft Preparation, H.-L.S.; Writing-Review & Editing, S.-Y.L.; Project Administration, X.-H.S.; Funding Acquisition, H.-L.S.

Funding

This research was funded by the National Natural Science Foundation of China (No. 11371081) and the Natural Science Foundation of Liaoning Province (No. 20170540323).

Acknowledgments

The authors would like to express their gratitude to the referees for their comments and constructive suggestions, which were valuable in improving the quality of the original paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bai, Z.Z.; Benzi, M.; Chen, F. Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 2010, 87, 93–111. [Google Scholar] [CrossRef]
  2. Bai, Z.Z.; Benzi, M.; Chen, F. On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algorithms 2011, 56, 297–317. [Google Scholar] [CrossRef]
  3. Bertaccini, D. Efficient solvers for sequences of complex symmetric linear systems. Electron. Trans. Numer. Anal. 2004, 18, 49–64. [Google Scholar]
  4. Feriani, A.A.; Perotti, F.; Simoncini, V. Iterative system solvers for the frequency analysis of linear mechanical systems. Comput. Methods Appl. Mech. Eng. 2000, 190, 1719–1739. [Google Scholar] [CrossRef]
  5. Wu, S.L.; Li, C.X. A splitting method for complex symmetric indefinite linear system. J. Comput. Appl. Math. 2017, 313, 343–354. [Google Scholar] [CrossRef]
  6. Wu, S.L.; Li, C.X. Modified complex-symmetric and skew-Hermitian splitting iteration method for a class of complex-symmetric indefinite linear systems. Numer. Algorithms 2017, 76, 93–107. [Google Scholar] [CrossRef]
  7. Wu, S.L.; Li, C.X. A splitting iterative method for the discrete dynamic linear systems. J. Comput. Appl. Math. 2014, 267, 49–60. [Google Scholar] [CrossRef]
  8. Bai, Z.Z.; Golub, G.H.; Ng, M.K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 2003, 24, 603–626. [Google Scholar] [CrossRef]
  9. Bai, Z.Z.; Golub, G.H.; Pan, J. Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numer. Math. 2004, 98, 1–32. [Google Scholar] [CrossRef]
  10. Bai, Z.Z.; Golub, G.H.; Li, C. Convergence properties of preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite matrices. Math. Comput. 2007, 76, 287–298. [Google Scholar] [CrossRef]
  11. Yang, A.; An, J.; Yu, J. A generalized preconditioned HSS method for non-Hermitian positive definite linear systems. Appl. Math. Comput. 2010, 216, 1715–1722. [Google Scholar] [CrossRef]
  12. Xu, Q.Q.; Dai, H. HSS iterative method for generalized Lyapunov equation. J. Appl. Math. Comput. Math. 2015, 29, 383–394. [Google Scholar]
  13. Bartels, R.H.; Stewart, G.W. Solution of the matrix equation AX+XB=C. Commun. ACM 1972, 15, 820–826. [Google Scholar] [CrossRef]
  14. Golub, G.H.; Nash, S.; Van Loan, C.F. A Hessenberg-Schur method for the matrix problem AX+XB=C. IEEE Trans. Autom. Control 1979, 24, 909–913. [Google Scholar] [CrossRef]
  15. Lu, A.; Wachspress, E.L. Solution of Lyapunov equations by alternating direction implicit iteration. Comput. Math. Appl. 1991, 21, 43–58. [Google Scholar] [CrossRef]
  16. Hu, D.Y.; Reichel, L. Krylov-Subspace methods for the Sylvester equation. Linear Algebra Its Appl. 1992, 172, 283–313. [Google Scholar] [CrossRef]
  17. Penzl, T. A cyclic low-rank Smith method for large sparse Lyapunov equations. SIAM J. Sci. Comput. 1999, 21, 1401–1418. [Google Scholar] [CrossRef]
  18. Li, R.J.; White, J. Low-rank solution of Lyapunov equations. SIAM J. Matrix Anal. Appl. 2004, 46, 693–713. [Google Scholar] [CrossRef]
  19. Simoncini, V. A new iterative method for solving large-scale Lyapunov matrix equations. SIAM J. Sci. Comput. 2007, 29, 1268–1288. [Google Scholar] [CrossRef]
  20. Ozisik, M.N. Boundary Value Problem of Heat Conduction; Minola Dover Publications: Mineola, NY, USA, 2013. [Google Scholar]
Table 1. Comparison of calculation results between the inexact HSS (IHSS) and inexact PHSS (IPHSS) methods for Example 1.

| n    | IHSS Iter | IHSS CPU | IHSS Res      | IPHSS Iter | IPHSS CPU | IPHSS Res     |
|------|-----------|----------|---------------|------------|-----------|---------------|
| 4    | 269       | 0.215    | 9.6357 × 10⁻⁷ | 4          | 0.070     | 7.4966 × 10⁻⁸ |
| 16   | 267       | 0.559    | 8.0967 × 10⁻⁷ | 4          | 0.077     | 1.6256 × 10⁻⁷ |
| 64   | 259       | 1.763    | 9.8830 × 10⁻⁷ | 5          | 0.169     | 8.1159 × 10⁻⁸ |
| 144  | 266       | 43.010   | 9.9025 × 10⁻⁷ | 5          | 0.792     | 1.5784 × 10⁻⁷ |
| 256  | 255       | 260.165  | 9.6540 × 10⁻⁷ | 5          | 4.029     | 2.0429 × 10⁻⁷ |
| 529  | 283       | 2502.291 | 9.2328 × 10⁻⁷ | 5          | 39.945    | 2.3134 × 10⁻⁷ |
| 1024 | -         | -        | -             | 5          | 383.907   | 1.7833 × 10⁻⁷ |
Table 2. Comparison of calculation results between the IHSS and IPHSS methods for Example 2.

| n    | IHSS Iter | IHSS CPU | IHSS Res      | IPHSS Iter | IPHSS CPU | IPHSS Res     |
|------|-----------|----------|---------------|------------|-----------|---------------|
| 64   | 24        | 0.413    | 6.9964 × 10⁻⁶ | 4          | 0.092     | 2.1691 × 10⁻⁸ |
| 128  | 21        | 1.757    | 9.0778 × 10⁻⁶ | 4          | 0.382     | 4.6206 × 10⁻⁸ |
| 256  | 20        | 12.432   | 9.3266 × 10⁻⁶ | 4          | 2.290     | 1.0216 × 10⁻⁷ |
| 512  | 19        | 109.701  | 9.8608 × 10⁻⁶ | 4          | 21.917    | 2.8474 × 10⁻⁷ |
| 1024 | 19        | 1382.280 | 8.3162 × 10⁻⁶ | 4          | 291.388   | 3.8898 × 10⁻⁷ |

Citation: Li, S.-Y.; Shen, H.-L.; Shao, X.-H. PHSS Iterative Method for Solving Generalized Lyapunov Equations. Mathematics 2019, 7, 38. https://doi.org/10.3390/math7010038