Article

On Some Extended Block Krylov Based Methods for Large Scale Nonsymmetric Stein Matrix Equations

by Abdeslem Hafid Bentbib 1, Khalide Jbilou 2 and EL Mostafa Sadek 3,*

1 Laboratory LAMAI, University of Cadi Ayyad, Marrakesh 40000, Morocco
2 LMPA, 50 rue F. Buisson, ULCO Calais, Calais 62228, France
3 ENSA d’EL Jadida, University Chouaib Doukkali, EL Jadida 24002, Morocco
* Author to whom correspondence should be addressed.
Mathematics 2017, 5(2), 21; https://doi.org/10.3390/math5020021
Submission received: 22 December 2016 / Revised: 15 March 2017 / Accepted: 17 March 2017 / Published: 27 March 2017
(This article belongs to the Special Issue Numerical Linear Algebra with Applications)

Abstract:
In the present paper, we consider the large scale Stein matrix equation with a low-rank constant term $AXB - X + EF^{T} = 0$. These matrix equations appear in many applications, including discrete-time control problems, filtering and image restoration. The proposed methods are based on projection onto an extended block Krylov subspace with a Galerkin approach (GA) or with a minimization of the norm of the residual. We give some results on the residual and error norms and report some numerical experiments.

1. Introduction

In this paper, we are interested in the numerical solution of large scale nonsymmetric Stein matrix equations of the form:
$$AXB - X + EF^{T} = 0 \qquad (1)$$
where A and B are real, sparse and square matrices of size n × n and s × s , respectively, and E and F are matrices of size n × r and s × r , respectively.
Stein matrix equations play an important role in many problems in control and filtering theory for discrete-time large-scale dynamical systems, in each step of Newton’s method for discrete-time algebraic Riccati equations, model reduction problems, image restoration techniques and other problems [1,2,3,4,5,6,7,8,9,10].
Direct methods for solving the matrix Equation (1), such as those proposed by Bartels–Stewart [11] and the Hessenberg–Schur [12] algorithms, are attractive if the matrices are of small size. For a general overview of numerical methods for solving the Stein matrix equation [1,2,13].
The Stein matrix Equation (1) can be formulated as an n s × n s large linear system using the Kronecker formulation:
$$(B^{T} \otimes A - I_{s} \otimes I_{n})\,\mathrm{vec}(X) = -\,\mathrm{vec}(EF^{T}) \qquad (2)$$
where $\mathrm{vec}(X)$ is the vector obtained by stacking all the columns of the matrix X, $I_{n}$ is the n-by-n identity matrix, and the Kronecker product of two matrices A and B is defined by $A \otimes B = [a_{ij}B]$, where $A = [a_{ij}]$. This product satisfies the properties $(A \otimes B)(C \otimes D) = (AC \otimes BD)$, $(A \otimes B)^{T} = A^{T} \otimes B^{T}$ and $\mathrm{vec}(AXB) = (B^{T} \otimes A)\,\mathrm{vec}(X)$. The matrix Equation (1) then has a unique solution if and only if $\lambda\mu \neq 1$ for all $\lambda \in \sigma(A)$ and $\mu \in \sigma(B)$, where $\sigma(A)$ denotes the spectrum of the matrix A. Throughout the paper, we assume that this condition is satisfied. Moreover, if both A and B are Schur stable, i.e., $\sigma(A)$ and $\sigma(B)$ lie in the open unit disc, then the solution of Equation (1) can be expressed as the following infinite matrix series:
$$X = \sum_{i=0}^{\infty} A^{i}\, E\, F^{T}\, B^{i} \qquad (3)$$
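To make the Kronecker formulation (2) and the series (3) concrete, here is a minimal NumPy sketch (ours, not the authors' code; the sizes and the scaling of A and B are illustrative) that solves a small dense Stein equation directly and checks the result against the truncated series:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, r = 6, 5, 2
# Scale A and B so their spectra lie well inside the open unit disc (Schur stable)
A = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
B = 0.3 * rng.standard_normal((s, s)) / np.sqrt(s)
E = rng.standard_normal((n, r))
F = rng.standard_normal((s, r))

# Kronecker formulation (2): (B^T (x) A - I) vec(X) = -vec(E F^T)
K = np.kron(B.T, A) - np.eye(n * s)
x = np.linalg.solve(K, -(E @ F.T).flatten(order="F"))  # order="F" stacks columns
X = x.reshape((n, s), order="F")

# Truncated series (3): X = sum_{i>=0} A^i E F^T B^i
X_series, T = np.zeros((n, s)), E @ F.T
for _ in range(200):
    X_series += T
    T = A @ T @ B

print(np.linalg.norm(A @ X @ B - X + E @ F.T))  # residual of the direct solve, ~0
print(np.linalg.norm(X - X_series))             # agreement with the series, ~0
```

The dense Kronecker system has size $ns \times ns$, which is exactly what becomes infeasible for large n and s and motivates the projection methods discussed next.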
To solve large linear matrix equations, several Krylov subspace projection methods have been proposed (see, e.g., [1,13,14,15,16,17,18,19,20,21,22,23,24] and the references therein). The main idea behind these methods is to use a block Krylov subspace or an extended block Krylov subspace and to project the original large matrix equation onto it via a Galerkin condition or a minimization property of the residual. We will use these two procedures to compute approximate solutions of the Stein matrix Equation (1). The rest of the paper is organized as follows. In the next section, we recall the extended block Krylov subspace and the extended block Arnoldi (EBA) algorithm with some of its properties. In Section 3, we apply the Galerkin approach (GA) to Stein matrix equations by using extended Krylov subspaces. In Section 4, we define the minimal residual (MR) method for Stein matrix equations by using extended Krylov subspaces. We finally present some numerical experiments in Section 5.

2. The Extended Block Krylov Subspace Algorithm

In this section, we recall the EBA algorithm applied to the pair (A, V), where $V \in \mathbb{R}^{n \times r}$. The block Krylov subspace associated with (A, V) is defined as:
$$\mathbb{K}_m(A, V) = \mathrm{Range}\{V, AV, A^{2}V, \ldots, A^{m-1}V\}$$
The extended block Krylov subspace associated with the pair ( A , V ) is given as:
$$\mathbb{K}_m^{e}(A, V) = \mathrm{Range}\{V, A^{-1}V, AV, A^{-2}V, A^{2}V, \ldots, A^{m-1}V, A^{-m}V\} = \mathbb{K}_m(A, V) + \mathbb{K}_m(A^{-1}, A^{-1}V)$$
The EBA Algorithm 1 is defined as follows [15,16,18,23]:
Algorithm 1. The Extended Block Arnoldi (EBA) Algorithm
(1) Inputs: A an $n \times n$ matrix, V an $n \times r$ matrix and m an integer.
(2) Compute the QR decomposition of $[V, A^{-1}V]$, i.e., $[V, A^{-1}V] = V_1 \Lambda$.
(3) Set $\mathbb{V}_0 = [\;]$.
(4) For $j = 1, 2, \ldots, m$:
(a) Set $V_j^{(1)}$: first r columns of $V_j$; $V_j^{(2)}$: second r columns of $V_j$.
(b) $\mathbb{V}_j = [\mathbb{V}_{j-1}, V_j]$; $\widehat{V}_{j+1} = [A\, V_j^{(1)}, A^{-1}\, V_j^{(2)}]$.
(c) Orthogonalize $\widehat{V}_{j+1}$ with respect to $\mathbb{V}_j$ to get $V_{j+1}$, i.e.:
for $i = 1, 2, \ldots, j$
$H_{i,j} = V_i^T\, \widehat{V}_{j+1}$;
$\widehat{V}_{j+1} = \widehat{V}_{j+1} - V_i\, H_{i,j}$
end for
(d) Compute the QR decomposition of $\widehat{V}_{j+1}$, i.e., $\widehat{V}_{j+1} = V_{j+1}\, H_{j+1,j}$.
(5) End for
This algorithm constructs an orthonormal matrix $\mathbb{V}_m = [V_1, V_2, \ldots, V_m]$ whose columns form a basis of the extended block Krylov subspace $\mathbb{K}_m^e(A, V)$. The restriction of the matrix A to this subspace is given by $\mathcal{T}_m = \mathbb{V}_m^T A \mathbb{V}_m$.
Let $\overline{\mathcal{T}}_m = \mathbb{V}_{m+1}^T A \mathbb{V}_m$. Then, we have the following relations [25]:
$$A \mathbb{V}_m = \mathbb{V}_{m+1} \overline{\mathcal{T}}_m = \mathbb{V}_m \mathcal{T}_m + V_{m+1}\, T_{m+1,m}\, E_m^T$$
where $E_m = [0_{2r \times 2(m-1)r}, I_{2r}]^T$ is the matrix of the last 2r columns of the identity matrix $I_{2mr}$ [23,25].
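To fix the notation, the following NumPy sketch implements Algorithm 1 under simplifying assumptions of our own: a dense LU factorization stands in for the sparse solves with A one would use in practice, and no deflation or reorthogonalization is performed. The function name and interface are illustrative.

```python
import numpy as np
import scipy.linalg as sla

def extended_block_arnoldi(A, V, m):
    """Run m EBA steps; return Vbig = [V_1,...,V_{m+1}] and Tbar = Vbig^T A V_m."""
    n, r = V.shape
    lu, piv = sla.lu_factor(A)                    # stands in for sparse solves with A
    V1, _ = np.linalg.qr(np.hstack([V, sla.lu_solve((lu, piv), V)]))  # step (2)
    blocks = [V1]                                 # each block V_j has 2r columns
    for _ in range(m):
        Vj = blocks[-1]
        # Step (b): A times the first r columns, A^{-1} times the second r columns
        W = np.hstack([A @ Vj[:, :r], sla.lu_solve((lu, piv), Vj[:, r:])])
        for Vi in blocks:                         # step (c): block Gram-Schmidt
            W -= Vi @ (Vi.T @ W)
        Vnew, _ = np.linalg.qr(W)                 # step (d): QR of the new block
        blocks.append(Vnew)
    Vbig = np.hstack(blocks)                      # orthonormal basis, 2(m+1)r columns
    Tbar = Vbig.T @ A @ Vbig[:, : 2 * m * r]      # T_bar_m, size 2(m+1)r x 2mr
    return Vbig, Tbar
```

The relation $A\mathbb{V}_m = \mathbb{V}_{m+1}\overline{\mathcal{T}}_m$ can then be checked numerically: `np.linalg.norm(A @ Vbig[:, :2*m*r] - Vbig @ Tbar)` should be at roundoff level. In the next section, we define the GA for solving Stein matrix equations.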

3. Galerkin-Based Methods

In this section, we will apply the Galerkin projection method to obtain low-rank approximate solutions of the nonsymmetric Stein matrix Equation (1). This approach has been applied to Lyapunov, Sylvester and Riccati matrix equations [1,14,15,19,20,21,23,25,26].

3.1. The Case: Both A and B Are Large Matrices

We consider here a nonsymmetric Stein matrix equation, where A and B are large and sparse matrices with $r \ll n$ and $r \ll s$. We project the initial problem by using the extended block Krylov subspaces $\mathbb{K}_m^e(A, E)$ and $\mathbb{K}_m^e(B^T, F)$ associated with the pairs (A, E) and $(B^T, F)$, respectively, and get the orthonormal bases $\{V_1, V_2, \ldots, V_m\}$ and $\{W_1, W_2, \ldots, W_m\}$. We then consider approximate solutions of the Stein matrix Equation (1) that have the low-rank form:
$$X_m^{GA} = \mathbb{V}_m\, Y_m^{GA}\, \mathbb{W}_m^T$$
where $\mathbb{V}_m = [V_1, V_2, \ldots, V_m]$ and $\mathbb{W}_m = [W_1, W_2, \ldots, W_m]$.
The matrix Y m G A is determined from the following Galerkin orthogonality condition:
$$\mathbb{V}_m^T\, R_m^{GA}\, \mathbb{W}_m = \mathbb{V}_m^T \left(A X_m^{GA} B - X_m^{GA} + E F^T\right) \mathbb{W}_m = 0 \qquad (4)$$
Now, replacing $X_m^{GA} = \mathbb{V}_m Y_m^{GA} \mathbb{W}_m^T$ in Equation (4), we obtain the reduced Stein matrix equation:
$$\mathcal{T}_m^A\, Y_m^{GA}\, (\mathcal{T}_m^B)^T - Y_m^{GA} + \widetilde{E}\widetilde{F}^T = 0 \qquad (5)$$
where $\widetilde{E} = \mathbb{V}_m^T E$, $\widetilde{F} = \mathbb{W}_m^T F$, $\mathcal{T}_m^A = \mathbb{V}_m^T A \mathbb{V}_m$ and $\mathcal{T}_m^B = \mathbb{W}_m^T B^T \mathbb{W}_m$.
Assuming that $\lambda_i(\mathcal{T}_m^A)\,\lambda_j(\mathcal{T}_m^B) \neq 1$ for all $i, j = 1, 2, \ldots, 2mr$, the solution $Y_m$ of the low-order Stein Equation (5) can be obtained by a direct method such as the Bartels–Stewart algorithm [11]. The following result on the norm of the residual $R_m$ allows us to stop the iterations without having to compute the approximation $X_m^{GA}$.
Theorem 1.
Let $X_m^{GA}$ be the approximation obtained at step m by the EBA algorithm. Then, the Frobenius norm of the residual $R_m^{GA}$ associated with the approximation $X_m^{GA}$ is given by:
$$\|R_m^{GA}\|_F = \sqrt{\alpha_m^2 + \beta_m^2 + \gamma_m^2} \qquad (6)$$
where $\alpha_m = \|\mathcal{T}_m^A\, Y_m E_m\, (T_{m+1,m}^B)^T\|_F$, $\beta_m = \|T_{m+1,m}^A E_m^T\, Y_m\, (\mathcal{T}_m^B)^T\|_F$ and:
$$\gamma_m = \|T_{m+1,m}^A E_m^T\, Y_m E_m\, (T_{m+1,m}^B)^T\|_F$$
Proof. 
The proof is similar to that of Proposition 6 in [17].  ☐
In the following result, we give an upper bound for the norm of the error X X m G A .
Theorem 2.
Assume that $\|A\|_2 < 1$ and $\|B\|_2 < 1$, let $Y_m^{GA}$ be the exact solution of the projected Stein matrix Equation (5) and let $X_m^{GA}$ be the approximate solution given by running m steps of the EBA algorithm. Then:
$$\|X - X_m^{GA}\|_2 \;\le\; \frac{\|A\|_2\,\|T_{m+1,m}^B\|_2 + \|B\|_2\,\|T_{m+1,m}^A\|_2 + \|T_{m+1,m}^A\|_2\,\|T_{m+1,m}^B\|_2}{1 - \|A\|_2\,\|B\|_2}\;\|Y_m\|_2$$
Proof. 
The proof is similar to that of Theorem 2 in [27].  ☐
The approximate solution $X_m^{GA}$ can be given as a product of two low-rank matrices. Consider the singular value decomposition of the $2mr \times 2mr$ matrix:
$$Y_m^{GA} = \widetilde{Y}_1\, \Sigma\, \widetilde{Y}_2^T \qquad (7)$$
where $\Sigma$ is the diagonal matrix of the singular values of $Y_m^{GA}$ sorted in decreasing order. Let $U_{1,l}$ and $U_{2,l}$ be the $2mr \times l$ matrices of the first l columns of $\widetilde{Y}_1$ and $\widetilde{Y}_2$, respectively, corresponding to the l singular values of magnitude greater than some tolerance. We obtain the truncated singular value decomposition:
$$Y_m^{GA} \approx U_{1,l}\, \Sigma_l\, U_{2,l}^T$$
where $\Sigma_l = \mathrm{diag}[\sigma_1, \ldots, \sigma_l]$. Setting $Z_{1,m} = \mathbb{V}_m\, U_{1,l}\, \Sigma_l^{1/2}$ and $Z_{2,m} = \mathbb{W}_m\, U_{2,l}\, \Sigma_l^{1/2}$, it follows that:
$$X_m^{GA} \approx Z_{1,m}\, Z_{2,m}^T \qquad (8)$$
This is very important for large problems, where one does not need to compute and store the approximation $X_m$ at each iteration.
The GA is given in Algorithm 2:
Algorithm 2. Galerkin Approach (GA) for the Stein Matrix Equations
(1) Inputs: A an $n \times n$ matrix, B an $s \times s$ matrix, E an $n \times r$ matrix and F an $s \times r$ matrix.
(2) Choose a tolerance tol > 0 and a maximum number of iterations itermax.
(3) For m = 1, 2, 3, ..., itermax:
(4) Compute $\mathbb{V}_m$, $\mathcal{T}_m^A$ by Algorithm 1 applied to (A, E).
(5) Compute $\mathbb{W}_m$, $\mathcal{T}_m^B$ by Algorithm 1 applied to $(B^T, F)$.
(6) Solve the low-order Stein Equation (5) and compute $\|R_m\|_F$ given by Equation (6).
(7) If $\|R_m\|_F \le$ tol, stop.
(8) Using Equation (8), the approximate solution is given by $X_m^{GA} \approx Z_{1,m}\, Z_{2,m}^T$.
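For illustration, the following sketch (ours, not the authors' implementation) assembles Algorithm 2 around the `extended_block_arnoldi` function sketched in Section 2. The small projected Equation (5) is solved here through its Kronecker form, as a stand-in for the Bartels–Stewart solver used in the paper, and the stopping quantity is the residual norm (6) of Theorem 1:

```python
import numpy as np

def stein_galerkin(A, B, E, F, m, svd_tol=1e-12):
    r = E.shape[1]
    V, TbarA = extended_block_arnoldi(A, E, m)     # basis of K_m^e(A, E)
    W, TbarB = extended_block_arnoldi(B.T, F, m)   # basis of K_m^e(B^T, F)
    k = 2 * m * r
    TA, TB = TbarA[:k, :], TbarB[:k, :]            # T_m^A and T_m^B
    Et, Ft = V[:, :k].T @ E, W[:, :k].T @ F
    # Reduced Equation (5): TA Y TB^T - Y + Et Ft^T = 0, via its Kronecker
    # form (fine at this small projected size)
    Kmat = np.kron(TB, TA) - np.eye(k * k)
    y = np.linalg.solve(Kmat, -(Et @ Ft.T).flatten(order="F"))
    Y = y.reshape((k, k), order="F")
    # Residual norm (6) of Theorem 1, using only small blocks of Tbar
    hA, hB = TbarA[k:, k - 2*r:], TbarB[k:, k - 2*r:]   # T_{m+1,m}^A, T_{m+1,m}^B
    alpha = np.linalg.norm(TA @ Y[:, k - 2*r:] @ hB.T)
    beta  = np.linalg.norm(hA @ Y[k - 2*r:, :] @ TB.T)
    gamma = np.linalg.norm(hA @ Y[k - 2*r:, k - 2*r:] @ hB.T)
    res_norm = np.sqrt(alpha**2 + beta**2 + gamma**2)
    # Low-rank factors (7)-(8) via a truncated SVD of Y
    U1, sig, U2t = np.linalg.svd(Y)
    l = max(1, int(np.sum(sig > svd_tol)))
    Z1 = V[:, :k] @ U1[:, :l] * np.sqrt(sig[:l])
    Z2 = W[:, :k] @ U2t[:l, :].T * np.sqrt(sig[:l])
    return Z1, Z2, res_norm                        # X_m ~ Z1 @ Z2.T
```

The returned `res_norm` is the quantity used in step (7) of Algorithm 2, computed without ever forming $X_m^{GA}$.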
In the next section, we consider the case where the matrix A is large while B has a moderate or a small size.

3.2. The Case: A Large and B Small

In this section, we consider the Stein matrix equation:
$$AXB - X + E = 0 \qquad (9)$$
where E is a matrix of size $n \times s$ with $s \ll n$.
In this case, we will consider approximations of the exact solution X of the form:
$$X_m = \mathbb{V}_m Y_m$$
where $\mathbb{V}_m$ is the orthonormal basis of the extended block Krylov subspace $\mathbb{K}_m^e(A, E)$ obtained by the EBA algorithm. The Galerkin orthogonality condition gives:
$$\mathbb{V}_m^T R_m = 0 \qquad (10)$$
where $R_m$ is the m-th residual given by $R_m = A X_m B - X_m + E$. Therefore, we obtain the projected Stein matrix equation:
$$\mathcal{T}_m^A\, Y_m^{GA}\, B - Y_m^{GA} + \widetilde{E} = 0 \qquad (11)$$
where $\mathcal{T}_m^A = \mathbb{V}_m^T A \mathbb{V}_m$ and $\widetilde{E} = \mathbb{V}_m^T E$.
The next result gives a useful expression of the norm of the residual.
Theorem 3.
Let $Y_m^{GA}$ be the exact solution of the reduced Stein matrix Equation (11) and let $X_m^{GA} = \mathbb{V}_m Y_m^{GA}$ be the approximate solution of Equation (9), with $R_m = R(X_m^{GA})$ the corresponding residual. Then:
$$\|R_m\|_F = \|T_{m+1,m}^A E_m^T\, Y_m^{GA}\, B\|_F \qquad (12)$$
Proof. 
The residual is given by $R_m = A X_m^{GA} B - X_m^{GA} + E$. Since E belongs to $\mathbb{K}_m^e(A, E)$, we have $\mathbb{V}_m \mathbb{V}_m^T E = E$. Using the relation $A\mathbb{V}_m = \mathbb{V}_{m+1}\overline{\mathcal{T}}_m^A$, we have:
$$\|R_m\|_F = \left\|A\mathbb{V}_m Y_m^{GA} B - \mathbb{V}_m Y_m^{GA} + E\right\|_F = \left\|\mathbb{V}_{m+1}\left(\overline{\mathcal{T}}_m^A\, Y_m^{GA}\, B - \begin{bmatrix} Y_m^{GA} \\ 0 \end{bmatrix} + \begin{bmatrix} \widetilde{E} \\ 0 \end{bmatrix}\right)\right\|_F$$
As the matrix $\mathbb{V}_{m+1}$ has orthonormal columns and $\overline{\mathcal{T}}_m^A = \begin{bmatrix} \mathcal{T}_m^A \\ T_{m+1,m}^A E_m^T \end{bmatrix}$, we have:
$$\|R_m\|_F = \left\|\begin{bmatrix} \mathcal{T}_m^A\, Y_m^{GA}\, B - Y_m^{GA} + \widetilde{E} \\ T_{m+1,m}^A E_m^T\, Y_m^{GA}\, B \end{bmatrix}\right\|_F$$
Since $Y_m^{GA}$ solves the reduced Equation (11), the first block row vanishes. Therefore:
$$\|R_m\|_F = \|T_{m+1,m}^A E_m^T\, Y_m^{GA}\, B\|_F \;\;\square$$
This result is very important because it allows us to compute the Frobenius norm of $R_m(X_m^{GA})$ without having to form the approximate solution $X_m^{GA}$ itself.
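In code, evaluating (12) involves only the last block row of $\overline{\mathcal{T}}_m^A$ and the trailing block rows of $Y_m^{GA}$; a small illustrative sketch (function name and interface are ours, with `r` the block size used in the EBA run):

```python
import numpy as np

def residual_norm_B_small(TbarA, Y, B, r):
    """||R_m||_F = ||T_{m+1,m}^A E_m^T Y B||_F, without forming X_m = V_m Y."""
    k = TbarA.shape[1]                  # k = 2mr
    h = TbarA[k:, k - 2 * r:]           # T_{m+1,m}^A, the last block row of Tbar
    return np.linalg.norm(h @ Y[k - 2 * r:, :] @ B)
```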
Next, we give a result showing that the error $X - X_m$ is the exact solution of a perturbed Stein matrix equation.
Theorem 4.
Let X m be the approximate solution of Equation (9) obtained after m iterations of the EBA algorithm. Then:
$$(A - F_m)\, X_m B - X_m = -E \qquad (13)$$
where $F_m = V_{m+1}\, T_{m+1,m}^A\, V_m^T$.
Proof. 
Multiplying Equation (11) on the left by $\mathbb{V}_m$ and using $\mathbb{V}_m \mathcal{T}_m^A = A\mathbb{V}_m - V_{m+1} T_{m+1,m}^A E_m^T$, we obtain:
$$\left[A\mathbb{V}_m - V_{m+1}\, T_{m+1,m}^A\, E_m^T\right] Y_m B - \mathbb{V}_m Y_m = -\mathbb{V}_m \widetilde{E}$$
As $\mathbb{V}_m \widetilde{E} = E$, we get:
$$(A - F_m)\, X_m B - X_m = -E$$
where:
$$F_m = V_{m+1}\, T_{m+1,m}^A\, V_m^T \;\;\square$$
We can now state the following result, which gives an upper bound for the norm of the error.
Theorem 5.
If $\|A\|_2 < 1$ and $\|B\|_2 < 1$, then we have:
$$\|X - X_m\|_2 \;\le\; \frac{\|T_{m+1,m}^A E_m^T\, Y_m B\|_2}{1 - \|A\|_2\, \|B\|_2}$$
Proof. 
By subtracting Equation (13) from Equation (9), we get:
$$A(X - X_m)B - (X - X_m) = -F_m X_m B$$
The error $X - X_m$ is thus the solution of a Stein matrix equation and can be expressed as:
$$X - X_m = \sum_{i=0}^{\infty} A^{i}\, [F_m X_m B]\, B^{i}$$
Therefore:
$$\|X - X_m\|_2 \;\le\; \sum_{i=0}^{\infty} \|A^{i}\|_2\, \|F_m X_m B\|_2\, \|B^{i}\|_2 \;\le\; \|F_m X_m B\|_2 \sum_{i=0}^{\infty} \left(\|A\|_2\, \|B\|_2\right)^{i} = \frac{\|F_m X_m B\|_2}{1 - \|A\|_2\, \|B\|_2} = \frac{\|T_{m+1,m}^A E_m^T\, Y_m B\|_2}{1 - \|A\|_2\, \|B\|_2} \;\;\square$$
In the next section, we present projection methods based on extended block Krylov subspaces and a minimal residual property.

4. Minimal Residual Method for Large Scale Stein Matrix Equations

In this section, we present an MR method for solving large scale Stein matrix equations. An MR method for solving large scale Lyapunov matrix equations is given in [22].

4.1. The Case: Both A and B Are Large

Instead of imposing a Galerkin condition as in the preceding section, we consider here approximate solutions $X_m^{MR} = \mathbb{V}_m Y_m^{MR} \mathbb{W}_m^T$ satisfying the following minimization property:
$$X_m^{MR} = \arg\min_{X_m = \mathbb{V}_m Y_m \mathbb{W}_m^T} \|A X_m B - X_m + E F^T\|_F$$
We have the following result.
Theorem 6.
The solution $X_m^{MR}$ of the minimization problem:
$$X_m^{MR} = \arg\min_{X_m = \mathbb{V}_m Y_m \mathbb{W}_m^T} \|A X_m B - X_m + E F^T\|_F$$
is given by:
$$X_m^{MR} = \mathbb{V}_m\, Y_m^{MR}\, \mathbb{W}_m^T$$
where $Y_m^{MR}$ solves the following low-dimensional minimization problem:
$$Y_m^{MR} = \arg\min_{Y_m} \left\| \overline{\mathcal{T}}_m^A\, Y_m\, (\overline{\mathcal{T}}_m^B)^T - \begin{bmatrix} I \\ 0 \end{bmatrix} Y_m \begin{bmatrix} I & 0 \end{bmatrix} + \begin{bmatrix} R_E R_F^T & 0 \\ 0 & 0 \end{bmatrix} \right\|_F \qquad (24)$$
with $E = V_1 R_E$ and $F = W_1 R_F$ the QR factorizations of E and F, respectively.
Proof. 
We have:
$$\min_{X = \mathbb{V}_m Y_m \mathbb{W}_m^T} \|A X B - X + E F^T\|_F = \min_{Y_m} \left\|A \mathbb{V}_m Y_m \mathbb{W}_m^T B - \mathbb{V}_m Y_m \mathbb{W}_m^T + V_1 R_E R_F^T W_1^T\right\|_F$$
$$= \min_{Y_m} \left\| \mathbb{V}_{m+1} \left( \overline{\mathcal{T}}_m^A\, Y_m\, (\overline{\mathcal{T}}_m^B)^T - \begin{bmatrix} I \\ 0 \end{bmatrix} Y_m \begin{bmatrix} I & 0 \end{bmatrix} + \begin{bmatrix} R_E R_F^T & 0 \\ 0 & 0 \end{bmatrix} \right) \mathbb{W}_{m+1}^T \right\|_F$$
$$= \min_{Y_m} \left\| \overline{\mathcal{T}}_m^A\, Y_m\, (\overline{\mathcal{T}}_m^B)^T - \begin{bmatrix} I \\ 0 \end{bmatrix} Y_m \begin{bmatrix} I & 0 \end{bmatrix} + \begin{bmatrix} R_E R_F^T & 0 \\ 0 & 0 \end{bmatrix} \right\|_F \;\;\square$$
One advantage of the minimization approach is that the projected problem (24) always has a solution, which is not the case when one uses a GA.
The main problem is now how to solve the reduced order minimization problem (24). One possibility is the use of the preconditioned global conjugate gradient (PGCG) method.

4.2. The Preconditioned Global CG Method for Solving the Reduced Minimization Problem

In this section, we adapt the preconditioned conjugate gradient (PCG) method [28,29] to solve the reduced minimization problem (24). The normal equation associated with (24) is given by:
$$\mathcal{L}_m^{*}(\mathcal{L}_m(Y)) = \mathcal{L}_m^{*}(C) \qquad (25)$$
where:
$$\mathcal{L}_m(Y) = \overline{\mathcal{T}}_m^A\, Y\, (\overline{\mathcal{T}}_m^B)^T - \begin{bmatrix} I \\ 0 \end{bmatrix} Y \begin{bmatrix} I & 0 \end{bmatrix}$$
the adjoint $\mathcal{L}_m^{*}$ of the linear operator $\mathcal{L}_m$ with respect to the Frobenius inner product is given by:
$$\mathcal{L}_m^{*}(Z) = (\overline{\mathcal{T}}_m^A)^T\, Z\, \overline{\mathcal{T}}_m^B - \begin{bmatrix} I & 0 \end{bmatrix} Z \begin{bmatrix} I \\ 0 \end{bmatrix}$$
and:
$$C = -\begin{bmatrix} R_E R_F^T & 0 \\ 0 & 0 \end{bmatrix}$$
We can decompose the matrices $\overline{\mathcal{T}}_m^A$ and $\overline{\mathcal{T}}_m^B$ as follows:
$$\overline{\mathcal{T}}_m^A = \begin{bmatrix} \mathcal{T}_m^A \\ h_m^A \end{bmatrix} \quad \text{and} \quad \overline{\mathcal{T}}_m^B = \begin{bmatrix} \mathcal{T}_m^B \\ h_m^B \end{bmatrix}$$
where $h_m^A$ and $h_m^B$ denote the last 2r rows of the matrices $\overline{\mathcal{T}}_m^A$ and $\overline{\mathcal{T}}_m^B$, respectively. Therefore, the normal Equation (25) can be written as:
$$(\overline{\mathcal{T}}_m^A)^T \overline{\mathcal{T}}_m^A\, Y\, (\overline{\mathcal{T}}_m^B)^T \overline{\mathcal{T}}_m^B + Y - (\mathcal{T}_m^A)^T\, Y\, \mathcal{T}_m^B - \mathcal{T}_m^A\, Y\, (\mathcal{T}_m^B)^T - \mathcal{L}_m^{*}(C) = 0 \qquad (26)$$
Considering the singular value decompositions (SVD) of the matrices $\overline{\mathcal{T}}_m^A$ and $\overline{\mathcal{T}}_m^B$:
$$\overline{\mathcal{T}}_m^A = \overline{U}_A\, \overline{\Sigma}_A\, \overline{V}_A^T, \qquad \overline{\mathcal{T}}_m^B = \overline{U}_B\, \overline{\Sigma}_B\, \overline{V}_B^T \qquad (27)$$
we get the eigendecompositions:
$$(\overline{\mathcal{T}}_m^A)^T \overline{\mathcal{T}}_m^A = Q_A D_A Q_A^T, \qquad (\overline{\mathcal{T}}_m^B)^T \overline{\mathcal{T}}_m^B = Q_B D_B Q_B^T \qquad (28)$$
where $Q_A = \overline{V}_A$, $Q_B = \overline{V}_B$, $D_A = \overline{\Sigma}_A^T \overline{\Sigma}_A$ and $D_B = \overline{\Sigma}_B^T \overline{\Sigma}_B$.
Let $\widetilde{Y} = Q_A^T Y Q_B$ and $\widetilde{C} = Q_A^T\, \mathcal{L}_m^{*}(C)\, Q_B$; then the normal Equation (26) can be expressed as:
$$D_A \widetilde{Y} D_B + \widetilde{Y} - \widetilde{\mathcal{T}}_m^A\, \widetilde{Y}\, (\widetilde{\mathcal{T}}_m^B)^T - (\widetilde{\mathcal{T}}_m^A)^T\, \widetilde{Y}\, \widetilde{\mathcal{T}}_m^B - \widetilde{C} = 0 \qquad (29)$$
where $\widetilde{\mathcal{T}}_m^A = Q_A^T \mathcal{T}_m^A Q_A$ and $\widetilde{\mathcal{T}}_m^B = Q_B^T \mathcal{T}_m^B Q_B$. This expression suggests using the first part as a preconditioner, that is, the matrix operator:
$$\mathcal{P}(\widetilde{Y}) = D_A \widetilde{Y} D_B + \widetilde{Y} \qquad (30)$$
It can be seen that the expression (29) corresponds to the normal equation associated with the following matrix operator:
$$\widetilde{\mathcal{L}}_m(\widetilde{Y}) = \widetilde{\mathcal{T}}_m^A\, \widetilde{Y}\, (\widetilde{\mathcal{T}}_m^B)^T - \begin{bmatrix} Q_A \\ 0 \end{bmatrix} \widetilde{Y} \begin{bmatrix} Q_B^T & 0 \end{bmatrix} \qquad (31)$$
where now $\widetilde{\mathcal{T}}_m^A = \overline{\mathcal{T}}_m^A\, Q_A$ and $\widetilde{\mathcal{T}}_m^B = \overline{\mathcal{T}}_m^B\, Q_B$. Then, the preconditioned global CG algorithm is obtained by applying the preconditioner (30) to the normal equation associated with the matrix linear operator defined by Equation (31). This is summarized in Algorithm 3.
Algorithm 3. The Preconditioned Global Conjugate Gradient (PGCG) Algorithm
(1) Set $\widetilde{Y}_0 = 0$; compute $\widetilde{R}_0 = C - \widetilde{\mathcal{L}}_m(\widetilde{Y}_0)$, $S_0 = \widetilde{\mathcal{L}}_m^{*}(\widetilde{R}_0)$, $Z_0 = \mathcal{P}^{-1}(S_0)$; set $P_0 = Z_0$.
(2) For $j = 0, 1, 2, \ldots, j_{\max}$:
(a) $W_j = \widetilde{\mathcal{L}}_m(P_j)$
(b) $\alpha_j = \langle S_j, Z_j \rangle_F / \|W_j\|_F^2$
(c) $\widetilde{Y}_{j+1} = \widetilde{Y}_j + \alpha_j P_j$
(d) $\widetilde{R}_{j+1} = \widetilde{R}_j - \alpha_j W_j$
(e) If $\|\widetilde{R}_{j+1}\|_F$ is small enough, then stop; else
(f) $S_{j+1} = \widetilde{\mathcal{L}}_m^{*}(\widetilde{R}_{j+1})$
(g) $Z_{j+1} = \mathcal{P}^{-1}(S_{j+1})$
(h) $\beta_j = \langle S_{j+1}, Z_{j+1} \rangle_F / \langle S_j, Z_j \rangle_F$
(i) $P_{j+1} = Z_{j+1} + \beta_j P_j$
(3) End For
Notice that applying the preconditioner $\mathcal{P}$ requires the solution, at each iteration, of a Stein equation. As the matrices $D_A$ and $D_B$ of this Stein equation are diagonal, it can be solved entrywise at negligible cost.
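The following NumPy sketch of Algorithm 3 is illustrative and makes interface assumptions of our own: `TbarA` and `TbarB` are the rectangular matrices $\overline{\mathcal{T}}_m^A$ and $\overline{\mathcal{T}}_m^B$ returned by the EBA runs, and `C` is the right-hand side of Equation (25). The entrywise division in `Pinv` is precisely the cheap diagonal Stein solve just mentioned:

```python
import numpy as np

def pgcg_stein(TbarA, TbarB, C, tol=1e-12, jmax=200):
    k = TbarA.shape[1]                                 # k = 2mr
    # Eigendecompositions (28) via the SVDs (27) of the rectangular Tbar matrices
    _, sa, VaT = np.linalg.svd(TbarA)
    _, sb, VbT = np.linalg.svd(TbarB)
    Qa, Qb, dA, dB = VaT.T, VbT.T, sa**2, sb**2
    Ta, Tb = TbarA @ Qa, TbarB @ Qb                    # operators of Equation (31)
    Pa = np.vstack([Qa, np.zeros((TbarA.shape[0] - k, k))])
    Pb = np.vstack([Qb, np.zeros((TbarB.shape[0] - k, k))])
    L    = lambda Y: Ta @ Y @ Tb.T - Pa @ Y @ Pb.T     # operator (31)
    Lad  = lambda Z: Ta.T @ Z @ Tb - Pa.T @ Z @ Pb     # its Frobenius adjoint
    Pinv = lambda S: S / (np.outer(dA, dB) + 1.0)      # P^{-1} of (30), entrywise
    Y = np.zeros((k, k))
    R = C - L(Y)
    S = Lad(R); Z = Pinv(S); P = Z
    for _ in range(jmax):
        W = L(P)
        alpha = np.sum(S * Z) / np.sum(W * W)          # <S,Z>_F / ||W||_F^2
        Y += alpha * P
        R -= alpha * W
        if np.linalg.norm(R) < tol:
            break
        S_new = Lad(R); Z_new = Pinv(S_new)
        beta = np.sum(S_new * Z_new) / np.sum(S * Z)
        S, Z = S_new, Z_new
        P = Z + beta * P
    return Qa @ Y @ Qb.T                               # back to Y_m coordinates
```

Note that only small $2(m+1)r \times 2mr$ matrices enter this routine, so its cost is independent of n and s.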
The MR method for the Stein matrix equations is summarized in Algorithm 4:
Algorithm 4. The Minimal Residual (MR) Method for Nonsymmetric Stein Matrix Equations
(1) Choose a tolerance tol > 0 and a maximum number of iterations itermax.
(2) For m = 1, 2, 3, ..., itermax:
(3) Update $\mathbb{V}_m$, $\overline{\mathcal{T}}_m^A$ by Algorithm 1 (EBA) applied to (A, E).
(4) Update $\mathbb{W}_m$, $\overline{\mathcal{T}}_m^B$ by Algorithm 1 (EBA) applied to $(B^T, F)$.
(5) Solve the low-order problem (24).
(6) If $\|R_m\|_F \le$ tol, stop.
(7) Using Equation (8), the approximate solution is given by $X_m^{MR} \approx Z_{1,m}\, Z_{2,m}^T$.

4.3. The Case: A Large and B Small

In this subsection, we apply the MR method to the nonsymmetric Stein Equation (9) in the case where A is large and B is small. The approximate solution is given by:
$$X_m^{MR} = \mathbb{V}_m\, Y_m^{MR}$$
with:
$$X_m^{MR} = \arg\min_{X_m = \mathbb{V}_m Y_m} \|A X_m B - X_m + E\|_F$$
We have the following result, which is not difficult to prove.
Theorem 7.
The solution of the minimization problem:
$$X_m^{MR} = \arg\min_{X_m = \mathbb{V}_m Y_m} \|A X_m B - X_m + E\|_F$$
is given by:
$$X_m^{MR} = \mathbb{V}_m\, Y_m^{MR}$$
where:
$$Y_m^{MR} = \arg\min_{Y_m} \left\| \overline{\mathcal{T}}_m^A\, Y_m\, B - \begin{bmatrix} Y_m \\ 0 \end{bmatrix} + \begin{bmatrix} R_E \\ 0 \end{bmatrix} \right\|_F \qquad (33)$$
with $E = V_1 R_E$ being the QR decomposition of E.
The reduced minimization problem (33) can also be solved by using the preconditioned global CG method (PGCG), as we did for the problem (24).
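Since B is small in this case, the reduced problem (33) can alternatively be solved directly as a linear least squares problem in $\mathrm{vec}(Y_m)$; the sketch below is ours (PGCG remains the approach described above) and uses this vectorized form:

```python
import numpy as np

def solve_reduced_min_B_small(TbarA, B, RE):
    """min_Y || TbarA @ Y @ B - [Y; 0] + [RE; 0] ||_F via vectorized least squares."""
    M, k = TbarA.shape                   # M = 2(m+1)r, k = 2mr
    s = B.shape[0]
    P = np.vstack([np.eye(k), np.zeros((M - k, k))])   # embeds Y as [Y; 0]
    # vec(TbarA Y B - P Y) = (B^T kron TbarA - I_s kron P) vec(Y)
    Kmat = np.kron(B.T, TbarA) - np.kron(np.eye(s), P)
    rhs = -np.vstack([RE, np.zeros((M - RE.shape[0], RE.shape[1]))]).flatten(order="F")
    y = np.linalg.lstsq(Kmat, rhs, rcond=None)[0]
    return y.reshape((k, s), order="F")
```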

5. Numerical Experiments

In this section, we present some numerical experiments with large and sparse Stein matrix equations. We compared the EBA-MR and EBA-GA methods. For the GA, at each iteration m we solved the projected Stein matrix equation by using the Bartels–Stewart algorithm [11]. When solving the reduced minimization problem by the PGCG, we stopped the iterations when the relative norm of the residual was less than $tol = 10^{-12}$ or when a maximum of $k_{max} = 200$ iterations was reached. The algorithms were coded in Matlab 8.0 (2014). The stopping criterion used for EBA-MR and EBA-GA was $\|R(X_m)\|_F < 10^{-7}$ or a maximum of $m_{max} = 100$ iterations.
In all of the examples, the coefficients of the matrices E and F were random values uniformly distributed on [ 0 , 1 ] .
Example 1.
In this first example, the matrices A and B are obtained from the centered finite difference discretization of the operators:
$$L_A(u) = \Delta u + f_1(x,y)\frac{\partial u}{\partial x} + f_2(x,y)\frac{\partial u}{\partial y} + f(x,y)u$$
$$L_B(u) = \Delta u + g_1(x,y)\frac{\partial u}{\partial x} + g_2(x,y)\frac{\partial u}{\partial y} + g(x,y)u$$
on the unit square $[0,1] \times [0,1]$ with homogeneous Dirichlet boundary conditions. The number of inner grid points in each direction was $n_0$ for the operator $L_A$ and $s_0$ for $L_B$, so that the matrices A and B obtained from the discretizations have dimensions $n = n_0^2$ and $s = s_0^2$, respectively. Both matrices were generated with the command fdm_2d_matrix from the Lyapack package [30]; we write A = fdm($n_0$, $f_1(x,y)$, $f_2(x,y)$, $f(x,y)$) and B = fdm($s_0$, $g_1(x,y)$, $g_2(x,y)$, $g(x,y)$). In this example, n = 10,000 and s = 4900, with $f_1(x,y) = e^{xy}$, $f_2(x,y) = \sin(xy)$, $f(x,y) = y^2$, $g_1(x,y) = 100e^{x}$, $g_2(x,y) = 12xy$ and $g(x,y) = x^2 + y^2$. For this experiment, we used r = 3.
In Figure 1, we plotted the Frobenius norm of the residual versus the number of iterations for the MR method and the GA.
In Table 1, we compared the performance of the MR method and the GA. For both methods, we listed the residual norms, the number of iterations and the corresponding execution times.
Example 2.
For the second set of experiments, we considered matrices from the University of Florida Sparse Matrix Collection [31] and from the Harwell Boeing Collection (http://math.nist.gov/MatrixMarket).
In Figure 2, we used the matrices A = pde2961 and B = fdm($s_0$, $100e^{x}$, $12xy$, $x^2 + y^2$) with dimensions n = 2961 and s = 3600, respectively, and r = 3.
In Figure 3, we used the matrices A = Thermal and B = fdm($s_0$, $e^{xy}$, $\sin(xy)$, $x^2 y^2$) with dimensions n = 3456 and s = 6400, respectively, and r = 3.
In Table 2, we compared the performance of the MR method and the GA. For both methods, we listed the residual norms, the number of iterations and the corresponding execution times.

6. Conclusions

We presented in this paper two iterative methods for computing numerical solutions for large scale Stein matrix equations with low rank right-hand sides. The proposed methods are based on projection onto extended block Krylov subspaces with a Galerkin or a minimal residual approach. The approximate solutions are given as products of two low rank matrices and allow for saving memory for large problems. The numerical experiments show that the proposed Krylov-based methods are effective for large and sparse matrices.

Author Contributions

The authors contributed equally to the mathematical content, the editorial work and the numerical experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bouhamidi, A.; Heyouni, M.; Jbilou, K. Block Arnoldi-based methods for large scale discrete-time algebraic Riccati equations. J. Comput. Appl. Math. 2011, 236, 1531–1542.
  2. Bouhamidi, A.; Jbilou, K. Sylvester Tikhonov-regularization methods in image restoration. J. Comput. Appl. Math. 2007, 206, 86–98.
  3. Zhou, B.; Lam, J.; Duan, G.-R. On Smith-type iterative algorithms for the Stein matrix equation. Appl. Math. Lett. 2009, 22, 1038–1044.
  4. Zhou, B.; Lam, J.; Duan, G.-R. Toward solution of matrix equation X = Af(X)B + C. Linear Algebra Appl. 2011, 435, 1370–1398.
  5. Zhou, B.; Duan, G.-R.; Lam, J. Positive definite solutions of the nonlinear matrix equation. Appl. Math. Comput. 2013, 219, 7377–7391.
  6. Li, Z.-Y.; Zhou, B.; Lam, J. Towards positive definite solutions of a class of nonlinear matrix equations. Appl. Math. Comput. 2014, 237, 546–559.
  7. Van Dooren, P. Gramian Based Model Reduction of Large-Scale Dynamical Systems. In Numerical Analysis; Chapman and Hall/CRC Press: London, UK, 2000; pp. 231–247.
  8. Datta, B.N. Numerical Methods for Linear Control Systems; Academic Press: New York, NY, USA, 2003.
  9. Datta, B.N.; Datta, K. Theoretical and computational aspects of some linear algebra problems in control theory. In Computational and Combinatorial Methods in Systems Theory; Byrnes, C.I., Lindquist, A., Eds.; Elsevier: Amsterdam, The Netherlands, 1986; pp. 201–212.
  10. Calvetti, D.; Levenberg, N.; Reichel, L. Iterative methods for X − AXB = C. J. Comput. Appl. Math. 1997, 86, 73–101.
  11. Bartels, R.H.; Stewart, G.W. Algorithm 432: Solution of the matrix equation AX + XB = C. Commun. ACM 1972, 15, 820–826.
  12. Golub, G.H.; Nash, S.; van Loan, C. A Hessenberg–Schur method for the problem AX + XB = C. IEEE Trans. Autom. Control 1979, 24, 909–913.
  13. Simoncini, V. Computational methods for linear matrix equations. SIAM Rev. 2016, 58, 377–441.
  14. Agoujil, S.; Bentbib, A.H.; Jbilou, K.; Sadek, E.M. A minimal residual norm method for large-scale Sylvester matrix equations. Elect. Trans. Numer. Anal. 2014, 43, 45–59.
  15. Bentbib, A.H.; Jbilou, K.; Sadek, E.M. On some Krylov subspace based methods for large-scale nonsymmetric algebraic Riccati problems. Comput. Math. Appl. 2015, 2555–2565.
  16. Druskin, V.; Knizhnerman, L. Extended Krylov subspaces: Approximation of the matrix square root and related functions. SIAM J. Matrix Anal. Appl. 1998, 19, 755–771.
  17. El Guennouni, A.; Jbilou, K.; Riquet, A.J. Block Krylov subspace methods for solving large Sylvester equations. Numer. Algorithms 2002, 29, 75–96.
  18. Heyouni, M. Extended Arnoldi methods for large low-rank Sylvester matrix equations. Appl. Numer. Math. 2010, 60, 1171–1182.
  19. Jaimoukha, I.M.; Kasenally, E.M. Krylov subspace methods for solving large Lyapunov equations. SIAM J. Numer. Anal. 1994, 31, 227–251.
  20. Jbilou, K. Low-rank approximate solution to large Sylvester matrix equations. Appl. Math. Comput. 2006, 177, 365–376.
  21. Jbilou, K.; Riquet, A.J. Projection methods for large Lyapunov matrix equations. Linear Algebra Appl. 2006, 415, 344–358.
  22. Lin, Y.; Simoncini, V. Minimal residual methods for large scale Lyapunov equations. Appl. Numer. Math. 2013, 72, 52–71.
  23. Simoncini, V. A new iterative method for solving large-scale Lyapunov matrix equations. SIAM J. Sci. Comput. 2007, 29, 1268–1288.
  24. Jagels, C.; Reichel, L. Recursion relations for the extended Krylov subspace method. Linear Algebra Appl. 2011, 434, 1716–1732.
  25. Heyouni, M.; Jbilou, K. An extended block Arnoldi algorithm for large-scale solutions of the continuous-time algebraic Riccati equation. Elect. Trans. Numer. Anal. 2009, 33, 53–62.
  26. Saad, Y. Numerical solution of large Lyapunov equations. In Signal Processing, Scattering, Operator Theory and Numerical Methods; Kaashoek, M.A., van Schuppen, J.H., Ran, A.C., Eds.; Birkhäuser: Boston, MA, USA, 1990; pp. 503–511.
  27. Bouhamidi, A.; Hached, M.; Heyouni, M.; Jbilou, K. A preconditioned block Arnoldi method for large Sylvester matrix equations. Numer. Linear Algebra Appl. 2011, 20, 208–219.
  28. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2003.
  29. Saad, Y.; Yeung, M.; Erhel, J.; Guyomarc'h, F. A deflated version of the conjugate gradient algorithm. SIAM J. Sci. Comput. 2000, 21, 1909–1926.
  30. Penzl, T. LYAPACK: A MATLAB Toolbox for Large Lyapunov and Riccati Equations, Model Reduction Problems, and Linear-Quadratic Optimal Control Problems. Available online: http://www.tu-chemintz.de/sfb393/lyapack (accessed on 10 June 2016).
  31. Davis, T. The University of Florida Sparse Matrix Collection, NA Digest, Volume 97, No. 23, 7 June 1997. Available online: http://www.cise.ufl.edu/research/sparse/matrices (accessed on 10 June 2016).
Figure 1. Galerkin approach (GA): dashed line, minimal residual (MR): solid line.
Figure 2. GA: dashed line, MR: solid line.
Figure 3. GA: dashed line, MR: solid line.
Table 1. Results for Example 1.

Test Problem                   Method   Iterations   Residual Norm   Time (s)
n = 8100, s = 3600, r = 2      GA       43           7.56 × 10^−8    4.80
                               MR       3            1.46 × 10^−8    1.87
n = 10,000, s = 4900, r = 4    GA       45           4.99 × 10^−8    26.52
                               MR       3            6.28 × 10^−8    3.75
n = 12,100, s = 7900, r = 3    GA       49           8.93 × 10^−8    12.96
                               MR       3            4.98 × 10^−8    3.63
Table 2. Results for Example 2.

Test Problem                                   Method   Iterations   Residual Norm   Time (s)
n = 2961, s = 3600, r = 2, A = pde2961,        GA       45           9.10 × 10^−9    3.7440
B = fdm(s0, 100e^x, 12xy, x^2 + y^2)           MR       7            1.54 × 10^−9    1.0296
n = 3456, s = 8100, r = 3, A = Thermal,        GA       40           3.27 × 10^−8    10.1245
B = fdm(e^(xy), sin(xy), x^2 y^2)              MR       8            7.29 × 10^−9    7.3008
