Article

Matrix Pencil Optimal Iterative Algorithms and Restarted Versions for Linear Matrix Equation and Pseudoinverse

1
Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2
Department of Mechanical Engineering, National United University, Miaoli 36063, Taiwan
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(11), 1761; https://doi.org/10.3390/math12111761
Submission received: 9 May 2024 / Revised: 30 May 2024 / Accepted: 3 June 2024 / Published: 5 June 2024
(This article belongs to the Special Issue Nonlinear Functional Analysis: Theory, Methods, and Applications)

Abstract
We derive a double-optimal iterative algorithm (DOIA) in an m-degree matrix pencil Krylov subspace to solve a rectangular linear matrix equation. Expressing the iterative solution in a matrix pencil and using two optimization techniques, we determine the expansion coefficients explicitly, by inverting an m × m positive definite matrix. The DOIA is a fast, convergent, iterative algorithm. Some properties and the estimation of residual error of the DOIA are given to prove the absolute convergence. Numerical tests demonstrate the usefulness of the double-optimal solution (DOS) and DOIA in solving square or nonsquare linear matrix equations and in inverting nonsingular square matrices. To speed up the convergence, a restarted technique with frequency m is proposed, namely, DOIA(m); it outperforms the DOIA. The pseudoinverse of a rectangular matrix can be sought using the DOIA and DOIA(m). The Moore–Penrose iterative algorithm (MPIA) and MPIA(m) based on the polynomial-type matrix pencil and the optimized hyperpower iterative algorithm OHPIA(m) are developed. They are efficient and accurate iterative methods for finding the pseudoinverse, especially the MPIA(m) and OHPIA(m).

1. Introduction

Since the pioneering works [1,2], Krylov subspace methods have been studied and developed extensively for the iterative solution of linear equation systems [3,4,5,6,7,8,9], like the minimum residual algorithm [10], the generalized minimal residual method (GMRES) [6,7], the quasiminimal residual method [8], the biconjugate gradient method [11], the conjugate gradient squared method [12], and the biconjugate gradient stabilized method [13]. For more discussion on the Krylov subspace methods, one can refer to the review papers [5,14,15] and textbooks [16,17].
In this paper, we use the matrix pencil Krylov subspace method to derive some novel numerical algorithms to solve the following linear matrix equation:
A Z = F ,
where $Z \in \mathbb{R}^{n \times p}$ is an unknown matrix. We consider two possible cases:
$\text{Case 1}: \; A \in \mathbb{R}^{n \times n}, \; F \in \mathbb{R}^{n \times p},$
$\text{Case 2}: \; A \in \mathbb{R}^{q \times n}, \; F \in \mathbb{R}^{q \times p}.$
We assume $\mathrm{rank}(A) = \mathrm{rank}[A \; F]$ for consistency in Equation (1).
The problem of solving a matrix equation is one of the most important topics in computational mathematics and has been widely studied in many different fields. As shown by Jbilou et al. [18], there are many methods for solving square linear matrix equations. Many iterative algorithms based on the block Krylov subspace method have been developed to solve matrix equations [19,20,21]. For a multiple right-hand-side system with A being nonsingular, the block Krylov subspace methods for computing $A^{-1} F$ in Equation (1) comprise an even larger body of the literature [22,23,24]. There are some novel algorithms for solving Equation (1) with nonsquare A and square Z in [25,26,27,28]. Having an effective and fast solution to matrix equations plays a fundamental role in numerous fields of science. To reduce the computational complexity and increase the accuracy, a more efficient method is still required.
One of the applications of Equation (1) is finding the solution to the following linear equation system:
A x = b .
It is noted that for the solution to Equation (4), we need to find the left inversion of A, which is denoted by $A_L^{-1}$. When we take $F = I_n$, Equation (1) is used to find the right inversion of A, which is denoted by $A_R^{-1}$. Mathematically, $A_L^{-1} = A_R^{-1}$ for nonsingular square matrices; however, from the numerical outputs, $A_L^{-1} \ne A_R^{-1}$, especially for ill-conditioned matrices.
In order to find the left inversion of A, we can solve
$X A = I_n;$
a transpose leads to
$A^T X^T = I_n.$
If we use $A^T$ to replace A, then we supply an optimal solution and optimal iterative scheme to seek $X^T$; inserting its transpose into Equation (4), we can obtain the solution of x with
x = X b .
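As a minimal illustration of this route (our own numpy sketch, not the solver used in this paper; the test matrix and seed are arbitrary), the left inverse can be obtained by solving the transposed system and then recovering $x = X b$:

import numpy as np

# Minimal sketch: obtain the left inverse X of a nonsingular A by solving
# A^T X^T = I_n, then recover x = X b for A x = b.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)
XT = np.linalg.solve(A.T, np.eye(n))              # solves A^T X^T = I_n
X = XT.T                                          # left inverse: X A = I_n
x = X @ b
print(np.allclose(A @ x, b))                      # True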
There are many algorithms for finding $A_L^{-1}$ or $A_R^{-1}$ for nonsingular matrices. Liu et al. [29] modified the conjugate gradient (CG)-type method into the matrix conjugate gradient method (MCGM), which performs well in finding the inverse matrix. Other iterative schemes encompass the Newton–Schultz method [30], the Chebyshev method [31], the Homeier method [32], the method by Petkovic and Stanimirovic [33], and some cubic and higher-order convergence methods [34,35,36,37].
We can apply Equation (1) to find the pseudoinverse $A^{\dagger}$ with dimensions $n \times q$ of a rectangular matrix A with dimensions $q \times n$:
$A A^{\dagger} = I_q,$
where $I_q$ is the q-dimensional identity matrix. This is a special Case 2 in Equation (3) with $p = q$ and $F = I_q$.
For wide application, the pseudoinverse and the Moore–Penrose inverse matrices have been investigated and computed using many methods [33,34,35,36,37,38,39,40,41,42,43,44,45]. More higher-order iterative schemes for the Moore–Penrose inverse were recently analyzed by Sayevand et al. [46].
In order to compare different solvers for systems of nonlinear equations, some novel goodness and qualification criteria are defined in [47]. They include convergence order, number of function evaluations, number of iterations, CPU time, etc., for evaluating the quality of a proposed iterative method.
From Equation (1),
$B = F - A Z_0$
is an initial residual matrix, if an initial guess $Z_0$ is given. Let
$X = Z - Z_0.$
Then, Equation (1) is equivalent to
$A X = B,$
which is used to search a descent direction X after giving an initial residual. It follows from Equations (9) and (10) that
$R = F - A Z = B - A X$
is a residual matrix. Below, we will develop a novel method to solve Equation (11), and hence Equation (1), if we replace B by F and Z by X. Since we cannot find the exact solutions of Equations (1) and (11) in general, the residual R in Equation (12) is not zero.

1.1. Notation

Throughout this paper, uppercase letters denote matrices, while lowercase letters denote vectors or scalars. If two matrices A and B have the same order, their inner product is defined by $A : B = \mathrm{tr}(A^T B)$, where $\mathrm{tr}$ is the trace of a square matrix, and the superscript T signifies the transpose. As a consequence, the Frobenius norm of A is given by $\|A\|^2 := \mathrm{tr}(A^T A) = A : A$. The component of a vector a is written as $a_i$, while the component of a matrix A is written as $A_{ij}$. A single subscript k in a matrix $U_k$ means that it is the kth matrix in a matrix pencil.
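As a small check of this notation (an illustrative numpy snippet of ours, not part of the paper), the inner product and the Frobenius norm can be evaluated as follows:

import numpy as np

# The matrix inner product A:B = tr(A^T B) and the induced Frobenius norm.
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
inner = np.trace(A.T @ B)              # A:B = 5.0
frob = np.sqrt(np.trace(A.T @ A))      # ||A|| = sqrt(A:A)
print(inner, np.isclose(frob, np.linalg.norm(A)))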

1.2. Main Contribution

In GMRES [16], an m-dimensional matrix Krylov subspace is supposed,
$\mathbb{K}_m(A, B) := \mathrm{Span}\{B, A B, \ldots, A^{m-1} B\}.$
The Petrov–Galerkin method searches $X \in \mathbb{K}_m$ via a perpendicular property:
$B - A X \perp \mathcal{L}_m = A \mathbb{K}_m.$
The descent matrix $X \in \mathbb{K}_m$ can be achieved by minimizing the residual [16]:
$\min \|R\|^2 = \min \|B - A X\|^2.$
Equation (14) is equivalent to the projection of B on $\mathcal{L}_m$, and R is perpendicular to $\mathcal{L}_m$. To seek a fast convergent iterative algorithm, we must keep the orthogonality and simultaneously maximize the projection quantity:
$\max \frac{[B : (A X)]^2}{\|A X\|^2}.$
The main contribution of this paper is that we seek the best descent matrix X simultaneously satisfying (15) and (16). Some excellent properties, including the absolute convergence of the proposed iterative algorithm, are proven. Although the matrix Krylov subspace method in Equation (13) is not suitable for the Case 2 problem, we can treat both Case 1 and Case 2 problems of linear matrix equations in a unified manner. Our problem is to construct double-optimal iterative algorithms with an explicit expansion form to solve the linear matrix Equation (1), which involves inverting a low-dimensional m × m positive definite matrix. This problem is more difficult than using (15) alone to derive iterative algorithms of the GMRES type. For the pseudoinverse problem of a rectangular matrix, higher-order polynomial methods and hyperpower iterative algorithms are unified into the frame of a double-optimal matrix pencil method but with different matrix pencils.

1.3. Outline

We start from an m-degree matrix pencil Krylov subspace to express the solution to the linear matrix equation in Section 2; two cases of linear matrix equations are considered. In Section 3, two merit functions are optimized for the determination of the expansion coefficients. More importantly, an explicit form double-optimal solution (DOS) to Equation (11) is created, for which we needed to invert an m × m positive definite matrix. In Section 4, we propose an iterative algorithm updated using the DOS method, which provides the best descent direction used in the double-optimal iterative algorithm (DOIA). A restarted version with frequency m, namely, the DOIA(m) is proposed. The DOS can be viewed as a single-step DOIA. Linear matrix equations are solved by the DOS, DOIA, and DOIA(m) methods in Section 5 to display some advantages of the presented methodology to find the approximate solution of Equation (1). The Moore–Penrose inverse of a rectangular matrix is addressed in Section 6, proposing two optimized methods based on a new matrix pencil of polynomials and a new modification of the hyperpower method. Numerical testing of the Moore–Penrose inverse of rectangular matrix is carried out in Section 7. Finally, the conclusions are drawn in Section 8.

2. The Matrix Pencil Krylov Subspace Method

This section constructs suitable bases for the Case 1 and Case 2 problems of the linear matrix Equation (1). We denote the expansion space by $\mathrm{Span}\{U_0, U\}$. For Case 1,
$U_0 = B,$
$U := [U_1, \ldots, U_m] = [A U_0, \ldots, A^m U_0],$
while, for Case 2,
$U_0 = A^T B,$
$U := [U_1, \ldots, U_m] = [A^T A U_0, \ldots, (A^T A)^m U_0].$
In $U$, the matrices $U_1, \ldots, U_m$ are orthonormalized by using the modified Gram–Schmidt process, such that $U_i : U_j = \delta_{ij}$, where : denotes the inner product of $U_i$ and $U_j$.
In the space $\mathrm{Span}\{U_0, U\}$, X in Equation (11) can be expanded as
$X = \alpha_0 U_0 + \sum_{k=1}^m \alpha_k U_k = \alpha_0 U_0 + U \alpha,$
where $\alpha := (\alpha_1, \ldots, \alpha_m)^T$. The coefficients $\alpha_0$ and $\alpha$ are determined optimally from a combination of $U_0 \in \mathbb{R}^{n \times p}$ and the m matrices $U_k \in \mathbb{R}^{n \times p}$, $k = 1, \ldots, m$, in a degree-m matrix pencil Krylov subspace.
Both Case 1 and Case 2 can be treated in a unified manner as shown below, but with different $U_0$ and $U$. $U$ is a matrix pencil consisting of m matrices in the matrix Krylov subspace, and $U\alpha$ is the product of the pencil with the vector $\alpha \in \mathbb{R}^m$. For the frequent use of the latter, we give a definition of $U\alpha$ as
$U \alpha := \sum_{j=1}^m \alpha_j U_j.$
In Equation (21), X is spanned by $\{U_0, U\}$, and we may write it as $X \in \mathrm{Span}\{U_0, U\}$. So, we call our expansion method a degree-$(m+1)$ matrix-pencil Krylov subspace method. It is different from the fixed affine matrix Krylov subspace $X \in U_0 + \mathrm{Span}\{U\}$. The concept of fixed and varying affine Krylov subspaces was elaborated in [48].

3. Double-Optimal Solution

In this section, we seek the best descent matrix X simultaneously satisfying (15) and (16). The properties including the orthogonality of residual and absolute convergence of the proposed iterative algorithm are proven. We must emphasize that the derived iterative algorithms have explicit expansion forms to solve the linear matrix Equation (11), by inverting an m × m positive definite matrix.

3.1. Two Minimizations

To simplify the notation, let
Y : = A X ;
we consider the orthogonal projection of B to Y, measured by the error matrix:
$E := B - \frac{B:Y}{\|Y\|^2} Y.$
The best approximation of X in Equation (11) can be found when $Y = A X$ minimizes
$\|E\|^2 = \|B\|^2 - \frac{(B:Y)^2}{\|Y\|^2} \ge 0,$
or maximizes the orthogonal projection of B to Y:
$\max_{Y} \frac{(B:Y)^2}{\|Y\|^2}.$
We can solve $A X = B$ by an approximation of X from the above optimization technique, but $B - A X$ is not exactly equal to zero, since $Y = A X$ involves the unknown matrix X, and Y is not exactly equal to B.
Let
$f := \frac{\|Y\|^2}{(B:Y)^2} \ge \frac{1}{\|B\|^2},$
which is the reciprocal of the merit function in Equation (26).
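A quick numerical check of the identity in Equation (25) (our own numpy snippet; the random test matrices are arbitrary):

import numpy as np

# For Y != 0, E = B - (B:Y)/||Y||^2 * Y is the error of projecting B onto span{Y},
# and ||E||^2 = ||B||^2 - (B:Y)^2/||Y||^2, as in Equation (25).
rng = np.random.default_rng(0)
B, Y = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))
ip = lambda P, Q: np.trace(P.T @ Q)
E = B - ip(B, Y) / np.linalg.norm(Y)**2 * Y
print(np.isclose(np.linalg.norm(E)**2,
                 np.linalg.norm(B)**2 - ip(B, Y)**2 / np.linalg.norm(Y)**2))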
Let
Y j = A U j .
Then, Equation (23), with the aid of Equation (21), can be written as
Y = Y 0 + Y α ,
where
Y 0 : = α 0 A U 0 ,
Y : = [ Y 1 , , Y m ] = A U = [ A U 1 , , A U m ] ;
Y is an m-degree matrix-pencil. We suppose that for each Y j , j = 1 , , m , rank ( Y j ) = min [ n , p ] for Case 1, and rank ( Y j ) = min [ q , p ] for Case 2. We take rank ( Y j ) = n 0 , where n 0 = min [ n , p ] for Case 1, and n 0 = min [ q , p ] for Case 2.
Inserting Equation (29) for Y into Equation (27), we encounter the following minimization problem:
$\min_{\alpha} f = \frac{\|Y\|^2}{(B:Y)^2} = \frac{\|Y_0 + Y\alpha\|^2}{[B:Y_0 + B:(Y\alpha)]^2};$
however, in Y there is still an unknown scalar $\alpha_0$ in $Y_0 = \alpha_0 A U_0$. Another minimization, of the residual, is supplemented:
$\min_{\alpha_0} \{\|R\|^2 = \|B - A X\|^2 = \|B - Y\|^2\}.$
The solution of these two optimization problems is the major task of the next subsection.

3.2. Two Main Theorems

The first theorem determines the expansion coefficients α j , j = 0 , 1 , , m in Equation (21).
Theorem 1.
For $X \in \mathrm{Span}\{U_0, U\}$, the optimal solution of Equation (11) derived from the optimizations in (32) and (33) is given by
$X = \alpha_0 [U_0 + \lambda_0 U D v - U D u],$
where
$C = [C_{ij}], \; C_{ij} = Y_i : Y_j, \; D = C^{-1}, \; v = [v_i], \; v_i = B : Y_i, \; u = [u_i], \; u_i = (A U_0) : Y_i, \; \lambda_0 = \frac{\|A U_0\|^2 - u^T D u}{B : (A U_0) - v^T D u}, \; W = \lambda_0 Y D v - Y D u + A U_0, \; \alpha_0 = \frac{W : B}{\|W\|^2}.$
We suppose that $\mathrm{rank}(C) = m < n_0$, such that $C^{-1}$ exists.
Proof. 
With the help of Equation (29), B : Y in (32) can be written as
B : Y = B : Y 0 + v T α ,
where the components of v R m are given by
v i : = B : Y i .
Taking the squared norms of Equation (29) yields
$\|Y\|^2 = \|Y_0 + Y\alpha\|^2 = \|Y_0\|^2 + 2 (Y_0 : Y)^T \alpha + \alpha^T (Y:Y) \alpha,$
where $Y:Y = [Y_i : Y_j]$, $i, j = 1, \ldots, m$, is an m × m matrix. We can derive
$\|Y\|^2 = \|Y_0\|^2 + 2 w^T \alpha + \alpha^T C \alpha,$
where
w i : = Y 0 : Y i ,
C i j = Y i : Y j ;
$w \in \mathbb{R}^m$, and $C \in \mathbb{R}^{m \times m}$ is a symmetric positive definite matrix.
The minimality condition of f is
$\frac{\partial}{\partial \alpha} \frac{\|Y\|^2}{(B:Y)^2} = 0 \;\Rightarrow\; (B:Y)^2 y_2 - 2 (B:Y) \|Y\|^2 y_1 = 0,$
where
$y_1 := \frac{\partial}{\partial \alpha} (B:Y) = v,$
$y_2 := \frac{\partial}{\partial \alpha} \|Y\|^2 = 2 w + 2 C \alpha;$
$y_1$ and $y_2$ are m-vectors, while $C > 0 \in \mathbb{R}^{m \times m}$ with $\mathrm{rank}(C) = m$. Thus, the equation to determine $\alpha$ is
$(B:Y)\, y_2 - 2 \|Y\|^2 y_1 = 0.$
We can observe from Equation (44) that y 2 is proportional to y 1 , which is supposed to be
y 2 = 2 λ y 1 ,
where λ is a multiplier to be determined from
$\|Y\|^2 = \lambda\, B:Y.$
Then, it follows from Equations (42), (43) and (45) that
$w + C \alpha = \lambda v,$
$\alpha = \lambda D v - D w,$
where
$D := C^{-1} > 0 \in \mathbb{R}^{m \times m}.$
Inserting Equation (48) into Equations (36) and (38), we have
$B:Y = B:Y_0 + \lambda v^T D v - v^T D w,$
$\|Y\|^2 = \lambda^2 v^T D v + \|Y_0\|^2 - w^T D w.$
Inserting Equations (50) and (51) into Equation (46) yields
$\lambda^2 v^T D v + \|Y_0\|^2 - w^T D w = \lambda B:Y_0 + \lambda^2 v^T D v - \lambda v^T D w.$
Cancelling $\lambda^2 v^T D v$ on both sides, we can derive
$\|Y_0\|^2 - w^T D w = \lambda [B:Y_0 - v^T D w],$
which renders
$\lambda = \frac{\|Y_0\|^2 - w^T D w}{B:Y_0 - v^T D w}.$
Inserting it into Equation (48), α is obtained as follows:
$\alpha = \frac{\|Y_0\|^2 - w^T D w}{B:Y_0 - v^T D w} D v - D w.$
Let
u i : = ( A U 0 ) : Y i .
Inserting Equation (30) for Y 0 = α 0 A U 0 into Equation (39), and comparing it to Equation (55), we have
$w_i = \alpha_0 u_i \;\Rightarrow\; w = \alpha_0 u.$
By inserting α in Equation (48) into Equation (21) and using $Y_0 = \alpha_0 A U_0$, we can obtain
$X = \alpha_0 U_0 + \lambda U D v - U D w = \alpha_0 U_0 + \alpha_0 U z,$
where
$z := \lambda_0 D v - D u,$
$\lambda_0 := \frac{\|A U_0\|^2 - u^T D u}{B:(A U_0) - v^T D u},$
$\lambda = \frac{\|Y_0\|^2 - w^T D w}{B:Y_0 - v^T D w} = \alpha_0 \lambda_0.$
Upon letting
V : = U 0 + U z ,
Equation (57) can be expressed as
X = α 0 V .
Now α 0 can be determined by (33). Inserting Equation (62) into (33) yields
$\|B - A X\|^2 = \|B - \alpha_0 A V\|^2 = \alpha_0^2 \|W\|^2 - 2 \alpha_0 W:B + \|B\|^2,$
where
$W := A V = A U_0 + Y z = \lambda_0 Y D v - Y D u + A U_0$
is derived according to Equations (31) and (58).
Differentiating Equation (63) with respect to $\alpha_0$ and equating it to zero generates
$\alpha_0 = \frac{W:B}{\|W\|^2};$
by Equation (62),
$X = \frac{W:B}{\|W\|^2} V.$
The proof of Theorem 1 is complete.   □
To prove Theorem 2, we need the following two lemmas.
Lemma 1.
In terms of the matrix-pencil Y in Equation (31), we have
B : ( Y D v ) = v T D v ,
B : ( Y D u ) = v T D u .
Proof. 
By using the definition in Equation (22), the left-hand side of Equation (67) can be written as
$B:(Y D v) = B : \sum_{j=1}^m (D v)_j Y_j = \sum_{j=1}^m (D v)_j\, B:Y_j = \sum_{j=1}^m v_j (D v)_j,$
where $v_j$ and $(Dv)_j$ denote, respectively, the jth components of v and Dv, and Equation (37) is used in the last equality. In vector form, the last term is just the right-hand side of Equation (67). Similarly, Equation (68) can be proved in the same manner, i.e.,
$B:(Y D u) = B : \sum_{j=1}^m (D u)_j Y_j = \sum_{j=1}^m (D u)_j\, B:Y_j = \sum_{j=1}^m v_j (D u)_j.$
This ends the proof of Lemma 1.   □
Lemma 2.
In terms of the matrix-pencil Y in Equation (31), we have
( A U 0 ) : ( Y z ) = u T z .
Proof. 
By using the definition in Equation (22), the left-hand side of Equation (69) can be written as
$(A U_0):(Y z) = (A U_0) : \sum_{j=1}^m z_j Y_j = \sum_{j=1}^m z_j\, (A U_0):Y_j = \sum_{j=1}^m z_j u_j,$
where $z_j$ and $u_j$ denote, respectively, the jth components of z and u, and Equation (55) is used in the last equality. In vector form, the last term is just the right-hand side of Equation (69). This ends the proof of Lemma 2.   □
Theorem 2.
In Theorem 1, the two parameters α 0 and λ 0 satisfy the following reciprocal relation:
λ = α 0 λ 0 = 1 .
Proof. 
Taking the inner product of B with Equation (64) and by Lemma 1, leads to
$B:W = \lambda_0 v^T D v + B:(A U_0) - v^T D u.$
With the aid of Equation (59), we have
$B:W = \lambda_0 v^T D v + \frac{1}{\lambda_0} [\|A U_0\|^2 - u^T D u].$
Then, after multiplying the above equation by $\lambda_0$ and using Equation (65), we can obtain
$\alpha_0 \lambda_0 \|W\|^2 = \|A U_0\|^2 + \lambda_0^2 v^T D v - u^T D u.$
Next, we prove that Equation (73) is equal to $\|W\|^2$. From Equation (64) and with the aid of Equations (40), (58) and (69), it follows that
$\|W\|^2 = \|A U_0\|^2 + 2 (A U_0):(Y z) + z^T C z = \|A U_0\|^2 + 2 u^T z + z^T C z = \|A U_0\|^2 + 2 \lambda_0 u^T D v - 2 u^T D u + (\lambda_0 v^T D - u^T D) C (\lambda_0 D v - D u) = \|A U_0\|^2 + \lambda_0^2 v^T D v - u^T D u,$
where D C D = D was used in view of Equation (49). Then by comparing Equations (73) and (74), we have proven Equation (70). This ends the proof of α 0 λ 0 = 1 . Then by Equation (60), λ = 1 is proven.   □
A more direct proof of λ = 1 is available by comparing Equation (46) to Equation (96), which is derived in the proof of Theorem 4.

3.3. Estimating Residual Error

To estimate the residual error we begin with the following lemma.
Lemma 3.
In terms of the matrix-pencil Y in Equation (31), we have
( A U 0 ) : ( Y D v ) = u T D v ,
( A U 0 ) : ( Y D u ) = u T D u .
Proof. 
By using the definition in Equations (22) and (55), the left-hand side of Equation (75) can be written as
$(A U_0):(Y D v) = (A U_0) : \sum_{j=1}^m (D v)_j Y_j = \sum_{j=1}^m (D v)_j\, (A U_0):Y_j = \sum_{j=1}^m u_j (D v)_j,$
where $u_j$ and $(Dv)_j$ denote, respectively, the jth components of u and Dv. In vector form, the last term is just the right-hand side of Equation (75). Similarly, Equation (76) can be proved in the same manner, i.e.,
$(A U_0):(Y D u) = (A U_0) : \sum_{j=1}^m (D u)_j Y_j = \sum_{j=1}^m (D u)_j\, (A U_0):Y_j = \sum_{j=1}^m u_j (D u)_j.$
This ends the proof of Lemma 3.   □
Lemma 4.
In terms of the matrix-pencil Y in Equation (31), we have
( Y D u ) : ( Y D u ) = u T D u ,
( Y D v ) : ( Y D v ) = v T D v ,
( Y D u ) : ( Y D v ) = u T D v .
Proof. 
By using the definition in Equation (22), we have
( Y D u ) : ( Y D u ) = ( ( D u ) i Y i ) : ( ( D u ) j Y j ) = ( D u ) i ( D u ) j Y i : Y j .
The above two repeated indices i and j are summed automatically from 1 to m according to the convention of Einstein [49]. Then with the aid of Equation (40), we have
( Y D u ) : ( Y D u ) = ( ( D u ) i Y i ) : ( ( D u ) j Y j ) = ( D u ) i C i j ( D u ) j = u i D i j u j ,
where D = C 1 was used. In terms of vector form the above equation leads to Equation (77). Similarly, we have
( Y D v ) : ( Y D v ) = ( D v ) i C i j ( D v ) j = v i D i j v j ,
( Y D u ) : ( Y D v ) = ( D u ) i C i j ( D v ) j = u i D i j v j .
In terms of vector form the above two equations reduce to Equations (78) and (79). This ends the proof of Lemma 4.   □
Theorem 3.
For $X \in \mathrm{Span}\{U_0, U\}$, the residual error of the optimal solution in Equation (57) satisfies:
$\|B - A X\|^2 = \|B\|^2 - v^T D v - \frac{[B:(A U_0) - v^T D u]^2}{\|A U_0\|^2 - u^T D u}.$
Proof. 
According to Equation (70), we can refine X in Equation (34) to
$X = \alpha_0 [U_0 - U D u] + U D v,$
where
$\alpha_0 = \frac{B:(A U_0) - v^T D u}{\|A U_0\|^2 - u^T D u}.$
Let us check the residual square:
$\|B - A X\|^2 = \|B - Y\|^2 = \|Y\|^2 - 2 B:Y + \|B\|^2,$
where
$Y = A X = \alpha_0 A U_0 - \alpha_0 Y D u + Y D v$
is obtained from Equation (85), in which Y = A U .
By using Lemmas 1, 3 and 4, it follows from Equation (88) that
$\|Y\|^2 = \alpha_0^2 (\|A U_0\|^2 - u^T D u) + v^T D v,$
$B:Y = \alpha_0 B:(A U_0) - \alpha_0 v^T D u + v^T D v.$
Equation (90) is easily derived by taking the inner product of B with Equation (88):
$B:Y = \alpha_0 B:(A U_0) - \alpha_0 B:(Y D u) + B:(Y D v),$
and using Lemma 1. Taking the squared norm of Equation (88), we have
$\|Y\|^2 = \|\alpha_0 A U_0 - \alpha_0 Y D u + Y D v\|^2 = \alpha_0^2 \|A U_0\|^2 + \alpha_0^2 \|Y D u\|^2 + \|Y D v\|^2 - 2 \alpha_0^2 (A U_0):(Y D u) + 2 \alpha_0 (A U_0):(Y D v) - 2 \alpha_0 (Y D u):(Y D v);$
by using Lemmas 3 and 4, it is simplified to
$\|Y\|^2 = \alpha_0^2 \|A U_0\|^2 + \alpha_0^2 u^T D u + v^T D v - 2 \alpha_0^2 u^T D u + 2 \alpha_0 u^T D v - 2 \alpha_0 u^T D v = \alpha_0^2 \|A U_0\|^2 - \alpha_0^2 u^T D u + v^T D v.$
This yields Equation (89).
Then, inserting the above two equations into Equation (87), we have
$\|B - A X\|^2 = \alpha_0^2 (\|A U_0\|^2 - u^T D u) + v^T D v - 2 \alpha_0 B:(A U_0) + 2 \alpha_0 v^T D u - 2 v^T D v + \|B\|^2 = \alpha_0^2 (\|A U_0\|^2 - u^T D u) + 2 \alpha_0 [v^T D u - B:(A U_0)] + \|B\|^2 - v^T D v.$
Consequently, inserting Equation (86) for α 0 into the above equation, yields Equation (84).
   □
As a consequence, we can prove that the two merit functions $\|E\|^2 = \|B\|^2 - (B:Y)^2/\|Y\|^2$ and $\|B - A X\|^2$ are the same.
Theorem 4.
In the optimal solution of $X \in \mathrm{Span}\{U_0, U\}$, the values of the two merit functions are the same, i.e.,
$\|E\|^2 = \|B - A X\|^2 = \|R\|^2,$
where $\|E\|^2 = \|B\|^2 - (B:Y)^2/\|Y\|^2$. Moreover, we have
$\|R\|^2 < \|B\|^2.$
Proof. 
Inserting Equation (86) for α 0 into Equations (89) and (90), we can obtain
$\|Y\|^2 = \frac{(B:(A U_0) - v^T D u)^2}{\|A U_0\|^2 - u^T D u} + v^T D v,$
$B:Y = \frac{(B:(A U_0) - v^T D u)^2}{\|A U_0\|^2 - u^T D u} + v^T D v.$
It is remarkable that
$\|Y\|^2 = B:Y.$
From Equations (84) and (94), we can derive
$\|B - A X\|^2 = \|B\|^2 - \|Y\|^2.$
Inserting Equation (96) for B:Y into Equation (25), it follows that
$\|E\|^2 = \|B\|^2 - \|Y\|^2.$
This ends the proof of Equation (92).
By using the definition of the residual matrix $R = B - A X$ and from Equation (97), we have
$\|R\|^2 = \|B\|^2 - \|Y\|^2.$
Because of $\|Y\|^2 > 0$, Equation (93) follows readily.   □

3.4. Orthogonal Projection

Equation (92) indicates that $\|E\|^2$ and $\|R\|^2$ have the same values when the double-optimality conditions are achieved, of which the key Equation (70) plays a dominant role; it renders $\|Y\|^2 = B:Y$, and the equality in Equation (92) holds. The residual error is absolutely decreased in Equation (93); Equation (84) gives the estimation of the residual error.
Theorem 5.
For $X \in \mathrm{Span}\{U_0, U\}$, $Y = A X$ can be orthogonally decomposed into
$Y = \alpha_0 [A U_0 - Y D u] + Y D v,$
where $A U_0 - Y D u$ and $Y D v$ are orthogonal.
Proof. 
It follows from Equation (88) that
$Y = \alpha_0 [A U_0 - Y D u] + Y D v.$
From Lemma 3, we have
( A U 0 ) : ( Y D v ) = u T D v ,
and from Lemma 4, we have
( Y D u ) : ( Y D v ) = u T D v .
Subtracting the above two equations, we have
$(A U_0 - Y D u) : (Y D v) = 0.$
This ends the proof.   □
Furthermore, Y D v can be written as
$Y D v = \sum_{i=1}^m (D v)_i Y_i = \sum_{i,j=1}^m D_{ij} v_j Y_i = \sum_{i,j=1}^m D_{ij} (Y_j : B) Y_i,$
with the aid of Equations (22) and (37). Now, we can introduce the projection operator P by
$P(B) = \sum_{i,j=1}^m D_{ij} Y_i Y_j (B) = \sum_{i,j=1}^m D_{ij} Y_i (Y_j : B),$
which acts on B and results in $\sum_{i,j=1}^m D_{ij} Y_i (Y_j : B)$. Accordingly, we can define the projection operator P as
$P = \sum_{i,j=1}^m D_{ij} Y_i Y_j.$
Theorem 6.
For $X \in \mathrm{Span}\{U_0, U\}$, the optimal solution (34) is an exact solution of
P ( A X ) = P ( B ) ,
which is a projection of Equation (11) into the affine Krylov subspace.
Proof. 
From Equations (100) and (23), we have
$A X = \alpha_0 [A U_0 - Y D u] + Y D v.$
Applying the projection operator P to the above equation yields
$P(A X) = \alpha_0 P(A U_0 - Y D u) + P(Y D v).$
We need to prove
$P(A U_0 - Y D u) = 0,$
$P(Y D v) = P(B).$
From Equations (103) and (55), we have
P ( A U 0 ) = D i j Y i u j .
On the other hand, from Equations (103), (22), (40), and (49), we have
$P(Y D u) = D_{ij} Y_i (D u)_k (Y_j : Y_k) = D_{ij} Y_i D_{k\ell} u_\ell C_{jk} = D_{ij} C_{jk} D_{k\ell} Y_i u_\ell = D_{i\ell} Y_i u_\ell = D_{ij} Y_i u_j.$
Subtracting the above two equations, we can prove Equation (107). From Equations (103), (22), (40), (49) and (37), we have
$P(Y D v) = D_{ij} Y_i (D v)_k (Y_j : Y_k) = D_{ij} Y_i D_{k\ell} v_\ell C_{jk} = D_{ij} C_{jk} D_{k\ell} Y_i v_\ell = D_{i\ell} Y_i v_\ell = D_{ij} Y_i v_j = D_{ij} Y_i (Y_j : B) = P(B).$
The proof is complete.   □
Corollary 1.
In Theorem 3,
$\|A U_0\|^2 - u^T D u > 0;$
hence,
$\|B - A X\|^2 < \|B\|^2.$
Proof. 
First, we need to prove
$\|A U_0\|^2 - u^T D u > 0.$
By using Equations (103) and (55), we have
$(A U_0) : P(A U_0) = u_i D_{ij} u_j,$
which can be further written as
$u^T D u = u_i D_{ij} u_j = (A U_0) : P^2(A U_0) = \|P(A U_0)\|^2,$
because P is a projection operator. Thus, we have
$\|A U_0\|^2 - u^T D u = (A U_0) : (I - P)(A U_0) > 0,$
because $I - P$ is also a projection operator, where I is an identity operator. In view of Equations (84) and (112), the result in Equation (113) is straightforward.   □
Remark 1.
For the Case 1 problem, the conventional generalized minimal residual method (GMRES) [16,18] only takes the minimization in (15) into account. Therefore, α 0 = 0 in the GMRES. To accelerate the convergence of the iterative algorithm, the maximal projection as mentioned in (16) must be considered. Therefore, in the developed iterative algorithm DOIA, we consider both optimization problems in (15) and (16), and simultaneously α 0 and α i , i = 1 , , m are derived explicitly.
Remark 2.
If we directly solve the optimization problem in (32) for $\alpha_i$, $i = 1, \ldots, m+1$, in a larger $(m+1)$-degree matrix pencil with
$X = \sum_{k=1}^{m+1} \alpha_k U_k,$
then, via Equation (29), we have
$Y = A X = \sum_{k=1}^{m+1} \alpha_k Y_k,$
where Y k = A U k ; it can be seen that Y 0 = 0 . Consequently, w = 0 in Equation (39), and λ in Equation (53) cannot be defined owing to Y 0 = 0 and w = 0 .

4. Double-Optimal Iterative Algorithm

According to Theorem 1, the double-optimal iterative algorithm (DOIA) to solve Equation (1) reads as follows (Algorithm 1).
Algorithm 1 DOIA
1: Select m and give an initial value of Z 0
2: Do k = 0 , 1 ,
3:    $U_0^k = R_k$  (Case 1),  $U_0^k = A^T R_k$   (Case 2)
4:    $U_j^k = A^j U_0^k$  (Case 1),  $U_j^k = (A^T A)^j U_0^k$  (Case 2),   $j = 1, \ldots, m$
5:    $U_k = [U_1^k, \ldots, U_m^k]$   (orthonormalization)
6:    $Y_k = A U_k$
7:    $C_{ij}^k = Y_i^k : Y_j^k$,  $C_k = [C_{ij}^k]$
8:    $D_k = C_k^{-1}$
9:    $v_i^k = R_k : Y_i^k$,  $v_k = [v_i^k]$
10:   $u_i^k = (A U_0^k) : Y_i^k$,  $u_k = [u_i^k]$
11:   $\lambda_k = \frac{\|A U_0^k\|^2 - u_k^T D_k u_k}{R_k : (A U_0^k) - v_k^T D_k u_k}$
12:   $W_k = \lambda_k Y_k D_k v_k + A U_0^k - Y_k D_k u_k$
13:   $\alpha_k = \frac{R_k : W_k}{\|W_k\|^2}$
14:   $X_k = \alpha_k [U_0^k + \lambda_k U_k D_k v_k - U_k D_k u_k]$
15:   $Z_{k+1} = Z_k + X_k$
16: Enddo, if $\|R_k\| < \varepsilon$
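For concreteness, the following is a compact numpy transcription of Algorithm 1 (our own illustrative sketch, not the authors' Fortran 77 code; the function name doia, the zero initial guess, and the loop limits are assumptions):

import numpy as np

def doia(A, F, m=3, case=1, tol=1e-8, maxit=200):
    # One DOIA pass per loop iteration (steps 3-15 of Algorithm 1).
    ip = lambda P, Q: np.trace(P.T @ Q)          # Frobenius inner product
    Z = np.zeros((A.shape[1], F.shape[1]))       # initial guess Z_0 = 0
    for _ in range(maxit):
        R = F - A @ Z                            # residual matrix R_k
        if np.linalg.norm(R) < tol:
            break
        U0 = R if case == 1 else A.T @ R
        M = A if case == 1 else A.T @ A
        Us, T = [], U0
        for _ in range(m):                       # pencil U_j = M^j U_0
            T = M @ T
            Us.append(T)
        Q = []                                   # modified Gram-Schmidt
        for U in Us:
            for V in Q:
                U = U - ip(V, U) * V
            Q.append(U / np.linalg.norm(U))
        Ys = [A @ U for U in Q]
        AU0 = A @ U0
        C = np.array([[ip(Yi, Yj) for Yj in Ys] for Yi in Ys])
        D = np.linalg.inv(C)
        v = np.array([ip(R, Yi) for Yi in Ys])
        u = np.array([ip(AU0, Yi) for Yi in Ys])
        lam = (np.linalg.norm(AU0)**2 - u @ D @ u) / (ip(R, AU0) - v @ D @ u)
        Dv, Du = D @ v, D @ u
        YDv = sum(c * Y for c, Y in zip(Dv, Ys))
        YDu = sum(c * Y for c, Y in zip(Du, Ys))
        W = lam * YDv - YDu + AU0
        a = ip(R, W) / np.linalg.norm(W)**2
        UDv = sum(c * U for c, U in zip(Dv, Q))
        UDu = sum(c * U for c, U in zip(Du, Q))
        Z = Z + a * (U0 + lam * UDv - UDu)       # Z_{k+1} = Z_k + X_k
    return Z

The restarted DOIA(m) of the next subsection is obtained by simply letting m cycle through $m_0, \ldots, m_1$ inside this loop.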

4.1. Crucial Properties of DOIA

In this section, we prove the crucial properties of the DOIA, including the absolute convergence and the orthogonality of the residual.
Corollary 2.
In the DOIA, Theorem 4 guarantees that the residual is decreased step-by-step, i.e.,
$\|R_{k+1}\|^2 = \|R_k\|^2 - \|A X_k\|^2,$
$\|R_{k+1}\| < \|R_k\|.$
Proof. 
Taking R = R k + 1 , B = R k and Y = A X k in Equation (99), we can derive Equation (118) readily. From Equation (84), we have
$\|R_{k+1}\|^2 = \|R_k\|^2 - v_k^T D_k v_k - \frac{[B : (A U_0^k) - v_k^T D_k u_k]^2}{\|A U_0^k\|^2 - u_k^T D_k u_k}.$
Then, from Equation (114), we can prove Equation (119).   □
Corollary 3.
In the DOIA, the convergence rate is given by
$\frac{\|R_k\|}{\|R_{k+1}\|} = \frac{1}{\sin\theta} > 1, \quad 0 < \theta < \pi, \; \theta \ne \frac{\pi}{2},$
where θ is the intersection angle between R k and Y k .
Proof. 
From Equations (99), (96), and (118), we have
$\|R_{k+1}\|^2 = \|R_k\|^2 - \|R_k\| \|Y_k\| \cos\theta = \|R_k\|^2 - \|Y_k\|^2, \quad \theta \ne \frac{\pi}{2},$
where $Y_k = A X_k$, and $0 < \theta < \pi$ is the intersection angle between $R_k$ and $Y_k$, which can be written as
$\cos\theta = \frac{\|Y_k\|}{\|R_k\|},$
with the help of Equation (96). Then, Equation (122) can be further reduced to
$\|R_{k+1}\|^2 = \|R_k\|^2 [1 - \cos^2\theta] = \|R_k\|^2 \sin^2\theta.$
Taking the square roots of both sides, we can obtain Equation (121).   □
Corollary 4.
In the DOIA, the residual vector R k + 1 is A orthogonal to the descent direction X k , i.e.,
R k + 1 : ( A X k ) = 0 .
Proof. 
From $R = F - A Z$ and Equations (12) and (23), we have
$R = B - Y.$
Taking the inner product with Y and using Equations (96) and (23), it follows that
R : Y = R : ( A X ) = 0 ,
in which R is perpendicular to Y. Letting R = R k + 1 and X = X k , Equation (125) is proven.   □
The DOIA can provide a good approximation of Equation (11) with a better descent direction X k in the matrix pencil of the matrix Krylov subspace. Under this situation, we can prove the following corollary, which guarantees that the present algorithm quickly converges to the true solution.
Corollary 5.
In the DOIA, two consecutive residual matrices R k and R k + 1 are orthogonal by
$R_{k+1} : (R_{k+1} - R_k) = 0.$
Proof. 
From the last equation in the DOIA, we have
$R_{k+1} = R_k - A X_k.$
Taking the inner product with $R_{k+1}$ and using Lemma 2 yields
$R_{k+1} : R_{k+1} = R_{k+1} : R_k - R_{k+1} : (A X_k) = R_{k+1} : R_k,$
which can be rearranged into Equation (128).   □

4.2. Restarted DOIA(m)

In the DOIA, we fix the dimension m of the Krylov subspace. An alternative version of the DOIA can perform well by varying m in the range $[m_0, m_1]$; like GMRES(m), it is named DOIA(m); $m = m_1 - m_0 + 1$ is the frequency of the restart (Algorithm 2).
Algorithm 2 DOIA(m)
1: Select m 0 and m 1 , and give Z 0
2: Do i = 1 ,
3: Do $m = m_0, \ldots, m_1$,  $k = m - m_0$
4:    $U_0^k = R_k$  (Case 1),  $U_0^k = A^T R_k$   (Case 2)
5:    $U_j^k = A^j U_0^k$  (Case 1),  $U_j^k = (A^T A)^j U_0^k$  (Case 2),   $j = 1, \ldots, m$
6:    $U_k = [U_1^k, \ldots, U_m^k]$   (orthonormalization)
7:    $Y_k = A U_k$
8:    $C_{ij}^k = Y_i^k : Y_j^k$,  $C_k = [C_{ij}^k]$
9:    $D_k = C_k^{-1}$
10:   $v_i^k = R_k : Y_i^k$,  $v_k = [v_i^k]$
11:   $u_i^k = (A U_0^k) : Y_i^k$,  $u_k = [u_i^k]$
12:   $\lambda_k = \frac{\|A U_0^k\|^2 - u_k^T D_k u_k}{R_k : (A U_0^k) - v_k^T D_k u_k}$
13:   $W_k = \lambda_k Y_k D_k v_k + A U_0^k - Y_k D_k u_k$
14:   $\alpha_k = \frac{R_k : W_k}{\|W_k\|^2}$
15:   $X_k = \alpha_k [U_0^k + \lambda_k U_k D_k v_k - U_k D_k u_k]$
16:   $Z_{k+1} = Z_k + X_k$
17: Enddo of (3), if  $\|R_{k+1}\| < \varepsilon$
18: Otherwise,  Z 0 = Z k + 1 , go to (2)

5. Numerical Examples

To demonstrate the efficiency and accuracy of the presented iterative algorithms DOIA and DOIA(m), several examples were examined. All the numerical computations were carried out with Fortran 77 in Microsoft Developer Studio on an Intel Core i7-3770 CPU (2.80 GHz) with 8 GB memory. The precision was $10^{-16}$.

5.1. Example 1

We solve Equation (1) with
$A = [A_{ij}] = [i + j/2 - 1]_{n \times n},$
$Z = [Z_{ij}] = [3 i + 2 j]_{n \times p}.$
Matrix F can be computed by inserting the above matrices into Equation (1).
This problem belongs to Case 1. When we apply the iterative algorithm in Section 4 to solve this problem with $n = 50$ and $p = 20$, we fix $m = 3$, and the convergence criterion is $\varepsilon = 10^{-8}$. As shown in Figure 1a, the DOIA converges very fast, within only three steps. In order to compare the numerical results with the exact solution, the matrix elements are vectorized along each row, and the number of components is given sequentially. As shown in Figure 1b, the numerical and exact solutions are almost coincident, with the numerical error shown in Figure 1c. The maximum error (ME) is $1.9 \times 10^{-12}$, and the residual error is $\|A Z - F\| = 1.04 \times 10^{-11}$.
When the problem dimension is raised to $n = 150$ and $p = 50$, the original DOIA does not converge within 100 steps. However, the restarted DOIA(m) with $m_0 = 2$ and $m_1 = 3$ requires only three steps under $\varepsilon = 10^{-6}$, obtaining a highly accurate solution with ME $= 3.92 \times 10^{-11}$ and $\|A Z - F\| = 1.04 \times 10^{-9}$. The CPU time is 1.06 s.
Table 1 compares the results obtained by the DOIA(m) for different n in Equations (131) and (132): for $n = 300$, we take $m_0 = 3$, $m_1 = 5$ and $\varepsilon = 10^{-5}$; for $n = 500$, we take $m_0 = 4$, $m_1 = 6$ and $\varepsilon = 10^{-4}$.
Next, we consider Equation (1) with the A given in Equation (131), and F given by $F_{ij} = 1$, $i = 1, \ldots, n$, $j = 1, \ldots, p$. With $n = 150$, $p = 50$ and $\varepsilon = 10^{-6}$, the restarted DOIA(m) with $m_0 = 2$ and $m_1 = 3$ converges in two steps, obtaining a highly accurate solution with $\max |(A Z_N)_{ij} - F_{ij}| = 5.08 \times 10^{-10}$, where $Z_N$ is the numerical solution of Z in Equation (1).

5.2. Example 2

In Equation (1), we consider a cyclic matrix A. The first row is given by $(1, \ldots, N)$, where $N = \max(n, q)$. The construction is given as follows (Algorithm 3).
Algorithm 3 For cyclic matrix
1: Give N
2: Do i = 1 : N
3:    Do j = 1 : N
4:    If i = 1 , then S i , j = j ; otherwise
5:    S i , j = S i 1 , j + 1
6:    If S i , j > N , then S i , j = S i , j N
7:    Enddo of j
8: Enddo of i
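A small numpy sketch of this construction (our illustration; the function name cyclic_matrix is hypothetical):

import numpy as np

def cyclic_matrix(N):
    # Algorithm 3: first row (1, ..., N); each later row is the previous row
    # shifted by one, wrapping back to 1 after N.
    S = np.empty((N, N), dtype=int)
    S[0] = np.arange(1, N + 1)
    for i in range(1, N):
        S[i] = S[i - 1] % N + 1
    return S

print(cyclic_matrix(4))
# [[1 2 3 4]
#  [2 3 4 1]
#  [3 4 1 2]
#  [4 1 2 3]]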
The nonsquare matrix A is obtained by taking the first q rows from S if n > q or the first n columns from S if n < q . Matrix Z in Equation (1) is given by
$Z = [Z_{ij}] = [i + j - 1]_{n \times p},$
where we fix n = 5 , p = 15 , and q = 30 . Matrix F can be computed by inserting the above matrices into Equation (1).
This problem belongs to Case 2. When we apply the iterative algorithm in Section 4 to find the solution, we fix $m = 2$, and the convergence criterion is $\varepsilon = 10^{-12}$. As shown in Figure 2a, the DOIA converges quickly, in 47 steps. As shown in Figure 2b, the numerical and exact solutions are almost coincident, with the numerical error shown in Figure 2c. The ME is $7.11 \times 10^{-15}$. Then, the DOIA(m), with $m_0 = 1$ and $m_1 = 2$, obtains ME $= 7.11 \times 10^{-15}$ in 43 steps.
We consider the inverse matrix obtained in Case 2 with q = p = n and F = I n . Let X be the inverse matrix of A; the Newton–Schultz iterative method is
$X_{k+1} = X_k (2 I_n - A X_k).$
For the initial value $X_0 = A^T / \|A\|^2$, the Newton–Schultz iterative method can converge very quickly.
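A minimal numpy sketch of this iteration with the stated initial value (our illustration; the function name and tolerances are assumptions):

import numpy as np

def newton_schultz_inverse(A, tol=1e-12, maxit=200):
    # X_{k+1} = X_k (2 I_n - A X_k), started from X_0 = A^T / ||A||^2.
    n = A.shape[0]
    X = A.T / np.linalg.norm(A)**2
    for _ in range(maxit):
        X = X @ (2.0 * np.eye(n) - A @ X)
        if np.linalg.norm(A @ X - np.eye(n)) < tol:
            break
    return X

A = np.array([[4., 1.], [2., 3.]])
print(np.allclose(newton_schultz_inverse(A) @ A, np.eye(2)))   # True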
We consider the cyclic matrix constructed from Algorithm 3 and apply the DOIA(m) to find $A^{-1}$ with the same initial value $X_0 = A^T / \|A\|^2$. Table 2 compares $\|A X - I_n\|$ and the iteration number (IN) obtained by the DOIA(m) and the Newton–Schultz iterative method (NSIM) with $\varepsilon = 10^{-15}$. The DOIA(m) can find a more accurate inverse matrix with a lower IN.
For $n = 7$, the CPU time of the DOIA(m) is 0.41 s. Although n is raised to $n = 50$, the DOIA(m), with $m_0 = 2$ and $m_1 = 3$, is still applicable, with $\|A X - I_n\| = 7.72 \times 10^{-13}$ obtained in 149 steps and a CPU time of 6.85 s.

5.3. Example 3

We consider
$A = \begin{bmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 3 & 4 & 5 & 6 & 1 \\ 3 & 4 & 5 & 6 & 1 & 2 \\ 4 & 5 & 6 & 1 & 2 & 3 \\ 5 & 6 & 1 & 2 & 3 & 4 \\ 6 & 1 & 2 & 3 & 4 & 5 \end{bmatrix}.$
We first compute A 1 and then the solution of the corresponding linear system. This problem belongs to Case 1.
We take $m = 4$ and apply the DOS to find the inverse matrix of A by setting $B = I_6$ in Equation (11). The value of $\|A X - I_6\|$ obtained from the DOS shows that the accuracy in finding the inverse matrix $A^{-1}$ is only about two orders; the residual error is $\|A X - I_6\| = 0.159$. In the solution to Equation (4), when we compare the numerical solution with the exact solution $x_i = 1$, $i = 1, \ldots, 6$, we find that the maximum error is $1.088 \times 10^{-4}$. Since the DOS is just a single-step DOIA, to improve the accuracy, we can apply the DOIA to solve the considered problem.
When we apply the DOIA to solve this problem with $m = 3$, the accuracy can be raised to $\|A X - I_6\| = 9.77 \times 10^{-9}$, while in the solution to Equation (4), ME $= 5.73 \times 10^{-11}$. In Figure 3a, we show the residual error obtained by the DOIA, which converges in 67 steps under $\varepsilon = 10^{-8}$; Figure 3b compares the errors of the elements of the inverse matrix, using the components of $A X - I_6$. It can be seen that the DOIA is very effective and is much more accurate than the DOS method; the DOS is a single-step DOIA without iteration.
For the DOIA(m) with $m_0 = 2$ and $m_1 = 4$, it converges in 28 steps under $\varepsilon = 10^{-8}$; the accuracy can be raised to $\|A X - I_6\| = 9.73 \times 10^{-9}$, and ME $= 2.13 \times 10^{-11}$. The CPU time of the DOIA(m) is 0.33 s, and it converges faster than the DOIA.
To test the stability of the DOIA, we consider random noise with intensity s to disturb the coefficients in Equation (133). We take the same values of the parameters in the DOIA. Table 3 compares $\|A X - I_6\|$, ME, and the iteration number (IN). Essentially, the noise does not influence the accuracy of $\|A X - I_6\|$; however, ME and IN worsen when s increases.

5.4. Example 4

To test the performance of the DOS on the solution of linear equation systems, we consider the following convex quadratic programming problem with equality constraints:
$\min f = \frac{1}{2} x^T P x + q^T x,$
C x = b 0 ,
where P is an n 1 × n 1 matrix, C is an m 1 × n 1 matrix, and b 0 is an m 1 vector, which means that Equation (135) provides m 1 linear constraints. According to Lagrange theory, we need to solve the linear system (4) with the following b and A:
$b := \begin{bmatrix} -q \\ b_0 \end{bmatrix}, \quad A = \begin{bmatrix} P & C^T \\ C & 0_{m_1 \times m_1} \end{bmatrix}.$
For a definite solution, we take $n_1 = 3$ and $m_1 = 2$ with
$\min \{ f = x_1^2 + 2 x_2^2 + x_3^2 - 2 x_1 x_2 + x_3 \}, \quad x_1 + x_2 + x_3 = 4, \quad 2 x_1 - x_2 + x_3 = 2.$
The dimension of A is $n \times n$, where $n = n_1 + m_1 = 5$. This problem belongs to Case 1. When we take $m = 4$ and employ the DOS, $\lambda_0 = 0.3716283$ and $\alpha_0 = 2.690861$, which meet $\alpha_0 \lambda_0 = 1$. With $x = A^{-1} b$, the solution is $\min f = 3.977273$ at $(x_1, x_2, x_3) = (1.909091, 1.954545, 0.136364)$ with $\|A x - b\| = 3.729 \times 10^{-15}$; it is almost an exact solution obtained without any iteration, because the DOS can be viewed as a single-step DOIA.

5.5. Example 5: Solving Ill-Conditioned Hilbert Matrix Equation

The Hilbert matrix is highly ill-conditioned:
$A_{ij} = \frac{1}{i + j - 1}.$
We consider an exact solution with $x_j = 1$, $j = 1, \ldots, n$, and $b_i$ is given by
$b_i = \sum_{j=1}^n \frac{1}{i + j - 1}.$
The Hilbert matrix is highly ill-conditioned. For $n = 5$, the condition number is $4.81 \times 10^5$, and, for $n = 10$, the condition number is $1.19 \times 10^9$. It can be proved that the asymptotic behavior of the condition number of the Hilbert matrix is [50]
$O\!\left(\frac{(1 + \sqrt{2})^{4n+4}}{\sqrt{n}}\right).$
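A quick numpy check of this growth (our illustration; the helper hilbert and the printed 2-norm condition numbers are not part of the paper):

import numpy as np

def hilbert(n):
    # Hilbert matrix A_ij = 1/(i + j - 1), with 1-based indices i and j
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1.0)

for n in (5, 10, 15):
    print(n, np.linalg.cond(hilbert(n)))   # grows roughly like the quoted asymptotic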
This problem belongs to Case 1. We solve it by using the DOS. For $n = 20$, we let m run from 2 to 9 and pick the best m with the minimum error of
$\|A X - I_n\|.$
In Figure 4a, we plot the above residual with respect to m, where we can observe that the value of α 0 λ 0 is almost equal to 1, as shown in Figure 4b. We take m = 4 , and the maximum error of x is 0.05 .
Below, we solve the inverse of a five-dimensional Hilbert matrix with $n = 5$ by using the DOS method, where we take $m = 3$. It is interesting that $X = A^{-1}$ is also symmetric. The residual error is $\|A X - I_5\| = 1.443$. However, when we apply the DOIA to solve this problem with $m = 3$, the accuracy can be raised to $\|A X - I_5\| = 0.912$. In Figure 5a, we show the residual error obtained by the DOIA, which does not converge within 100 steps under $\varepsilon = 10^{-8}$. Figure 5b compares the errors of the elements of the inverse matrix by using the components of $A X - I_5$. Due to the highly ill-conditioned nature of the Hilbert matrix, the results obtained by the DOS and DOIA are quite different. ME $= 1.51 \times 10^{-2}$ is obtained for $x_i$.
For the DOIA(m) with $m_0 = 2$ and $m_1 = 4$, it converges in 62 steps under $\varepsilon = 10^{-8}$; the accuracy can be raised to $\|A X - I_5\| = 6.83 \times 10^{-9}$, and ME $= 1.48 \times 10^{-9}$. Obviously, the DOIA(m) can significantly improve the convergence speed and accuracy.
Then, we compare the numerical results with those obtained with the QR method. For $n = 10$, we apply the DOIA to solve this problem with $m = 7$, and the accuracy is $\|A X - I_{10}\| = 1.371$, while that obtained with the QR method is $\|A X - I_{10}\| = 1.574$. In Figure 6a, we compare the solutions of the linear system (4), where the exact solution is $x_i = 1$, $i = 1, \ldots, 10$. Whereas the DOIA provides quite accurate solutions with ME $= 2.117 \times 10^{-3}$, the QR method fails, as the diagonal elements of matrix R are very small for the ill-conditioned Hilbert matrix. The errors of the inverse matrix obtained with these two methods are comparable, as shown in Figure 6b.
The DOIA(m) with $m_0 = 2$ and $m_1 = 6$ does not converge within 100 steps under $\varepsilon = 10^{-8}$; however, the accuracy can be raised to ME $= 1.24 \times 10^{-5}$.
We consider the Hilbert matrix in Equation (136) and apply the DOIA(m) to find $A^{-1}$ with the initial value $X_0 = A^T / \|A\|^2$. Because the Newton–Schultz iterative method (NSIM) converges slowly, we set the upper bound IN = 500. Table 4 compares $\|A X - I_n\|$ and IN obtained with the DOIA(m) with $\varepsilon = 10^{-10}$ and the NSIM. The DOIA(m) can find an accurate inverse matrix in fewer iterations.

5.6. Example 6

In this example, we apply the DOS with $m = 5$ to solve a six-dimensional matrix Equation (11) with the coefficient matrix A given in Equation (133), where the solution is the Hilbert matrix given in Equation (136) with $n = 6$. First, we find the right inversion of A from $A X = I_6$ by using the DOS, and then the solution of the matrix equation is given by $X^T B$. The numerical solution of the matrix elements with $i = 1, \ldots, 36$, obtained by arranging the matrix elements from left to right and top to bottom with consecutive indices, is compared with the exact one in Figure 7a; the numerical error, shown in Figure 7b, is smaller than 0.008. The residual error $\|A X - B\|$ is 0.0157, which is quite accurate.
For the DOIA(m) with $m_0 = 1$ and $m_1 = 2$, it converges in 28 steps under $\varepsilon = 10^{-10}$; the accuracy is raised to $\|A X - B\| = 1.47 \times 10^{-11}$, and ME $= 9.24 \times 10^{-13}$. The convergence can be accelerated to 14 steps when we take $m_0 = 1$ and $m_1 = 5$; the accuracy is slightly decreased to ME $= 4.35 \times 10^{-12}$ and $\|A X - B\| = 4.96 \times 10^{-11}$.

5.7. Example 7

Consider the mixed boundary value problem of the Laplace equation:
$\Delta u = u_{rr} + \frac{1}{r} u_r + \frac{1}{r^2} u_{\theta\theta} = 0, \quad u(\rho, \theta) = h(\theta), \; 0 \le \theta \le \pi, \quad u_n(\rho, \theta) = g(\theta), \; \pi \le \theta \le 2\pi.$
The method of fundamental solutions is taken as
$u(x) = \sum_{j=1}^n c_j U(x, s_j), \quad s_j \in \Omega^c,$
where
$U(x, s_j) = \ln r_j, \quad r_j = \|x - s_j\|.$
We consider
$u(x, y) = \cos x \cosh y + \sin x \sinh y, \quad \rho(\theta) = \exp(\sin\theta) \sin^2(2\theta) + \exp(\cos\theta) \cos^2(2\theta).$
This problem is a special Case 2 with $p = 1$. We take $q = 50$ and $n = 30$. ME $= 3.19 \times 10^{-2}$ is obtained in 174 steps by the DOIA(m) with $m_0 = 4$ and $m_1 = 5$. The CPU time is 0.65 s. It can also be treated as a special Case 1 with $p = 1$. We take $n = 50$. ME $= 5.5 \times 10^{-2}$ is obtained within 300 steps by the DOIA(m) with $m_0 = 2$ and $m_1 = 3$. The CPU time is 0.62 s. To improve the accuracy, we can develop a vector form of the DOIA(m), which is, however, another issue not reported here.
According to [6], we apply the GMRES(m) with $m = 50$ to solve this problem. ME $= 4.8 \times 10^{-2}$ is obtained in 2250 steps, and the CPU time is 2.19 s.

6. Pseudoinverse of Rectangular Matrix

The Moore–Penrose pseudoinverse of A, denoted as $A^{\dagger}$, is the most famous inverse of a rectangular matrix, satisfying the following Penrose equations:
$A A^{\dagger} A = A,$
$A^{\dagger} A A^{\dagger} = A^{\dagger},$
$(A A^{\dagger})^T = A A^{\dagger},$
$(A^{\dagger} A)^T = A^{\dagger} A.$
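These four conditions are easy to verify numerically; the following snippet (our illustration, using numpy's built-in pseudoinverse of an arbitrary random rectangular matrix) checks them:

import numpy as np

# Check the four Penrose equations (137)-(140) for a reference pseudoinverse.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
Ap = np.linalg.pinv(A)
print(np.allclose(A @ Ap @ A, A),          # A A' A = A
      np.allclose(Ap @ A @ Ap, Ap),        # A' A A' = A'
      np.allclose((A @ Ap).T, A @ Ap),     # (A A')^T = A A'
      np.allclose((Ap @ A).T, Ap @ A))     # (A' A)^T = A' A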

6.1. A New Matrix Pencil

We rewrite Equation (8) as
A X = I q ;
upon giving an initial value X 0 , we attempt to seek the next step solution X with
$X = \alpha_0 X_0 + \sum_{k=1}^m \alpha_k U_k,$
where U k = X 0 ( A X 0 ) k . The new matrix pencil with degree m is given by
U = [ U 1 , , U m ] = [ X 0 A X 0 , , X 0 ( A X 0 ) m ] ,
which consists of an m-degree polynomial of A X 0 . The modified Gram–Schmidt process is employed to orthogonalize and normalize U .
The iterative form of Equation (142) is
$X_{k+1} = \alpha_0^k X_k + \alpha_1^k X_k A X_k + \alpha_2^k X_k (A X_k)^2 + \cdots + \alpha_m^k X_k (A X_k)^m.$
Notice that it includes several iterative algorithms as special cases: the Newton–Schultz method [30], the Chebyshev method [31], the Homeier method [32], the PS method (PSM) by Petkovic and Stanimirovic [38], and the KKRJ [36]:
$X_{k+1} = 2 X_k - X_k A X_k \quad (\text{Newton–Schultz method}),$
$X_{k+1} = 3 X_k - 3 X_k A X_k + X_k (A X_k)^2 \quad (\text{Chebyshev method}),$
$X_{k+1} = \tfrac{7}{2} X_k - \tfrac{9}{2} X_k A X_k + \tfrac{5}{2} X_k (A X_k)^2 - \tfrac{1}{2} X_k (A X_k)^3 \quad (\text{Homeier method}),$
$X_{k+1} = (1 + \beta) X_k - \beta X_k A X_k \quad (\text{PS method}),$
$X_{k+1} = (3 + \beta) X_k - (3 + 3\beta) X_k A X_k + (1 + 3\beta) X_k (A X_k)^2 - \beta X_k (A X_k)^3 \quad (\text{KKRJ method}).$
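To make the common structure explicit, the following numpy sketch (our illustration; pencil_step and the coefficient tuples are hypothetical names) evaluates one step of Equation (144) for a given coefficient vector; the fixed choices $(2,-1)$, $(3,-3,1)$, and $(7/2,-9/2,5/2,-1/2)$ reproduce the Newton–Schultz, Chebyshev, and Homeier schemes, whereas the MPIA below instead optimizes the coefficients at every step:

import numpy as np

def pencil_step(X, A, alphas):
    # One step of Eq. (144): X_next = sum_j alphas[j] * X (A X)^j,
    # evaluated Horner-style in the matrix A X.
    AX = A @ X
    out = alphas[-1] * X
    for a in reversed(alphas[:-1]):
        out = a * X + out @ AX
    return out

# Newton-Schultz coefficients (2, -1) recover the inverse of a nonsingular matrix.
A = np.array([[3., 1.], [0., 2.]])
X = A.T / np.linalg.norm(A)**2
for _ in range(30):
    X = pencil_step(X, A, (2.0, -1.0))
print(np.allclose(A @ X, np.eye(2)))   # True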
In Equation (144), two optimization methods are used to determine the coefficients α 0 k , α 1 k , , α m k .
The results in Theorem 1 are also applicable to problem (141). Hence, we propose the following iterative algorithm, namely, the Moore–Penrose iterative algorithm (MPIA).
The MPIA is a new algorithm based on the newly developed DOIA in Section 4; the MPIA uses the new matrix pencil in Equation (143) with $U_0 = X_0$. The motivation for developing the MPIA using $U_k = X_0 (A X_0)^k$ in the matrix pencil is that we can generalize the methods in Equations (145)–(149) to methods of order m and provide a theoretical foundation for determining the expansion coefficients from the double-optimization technique (Algorithm 4).
Algorithm 4 MPIA
1: Select m and give X 0
2: Do k = 0 , 1 ,
3:    $U_j^k = X_k (A X_k)^j$, $j = 1, \ldots, m$
4:    $U_k = [U_1^k, \ldots, U_m^k]$  (orthonormalization)
5:    $Y_k = [Y_1^k, \ldots, Y_m^k] = [A U_1^k, \ldots, A U_m^k]$
6:    $C_{ij}^k = Y_i^k : Y_j^k$,  $C_k = [C_{ij}^k]$
7:    $D_k = C_k^{-1}$
8:    $v_i^k = I_q : Y_i^k$,  $v_k = [v_i^k]$
9:    $u_i^k = (A X_k) : Y_i^k$,  $u_k = [u_i^k]$
10:   $\lambda_k = \frac{\|A X_k\|^2 - u_k^T D_k u_k}{I_q : (A X_k) - v_k^T D_k u_k}$
11:   $W_k = \lambda_k Y_k D_k v_k + A X_k - Y_k D_k u_k$
12:   $\alpha_k = \frac{I_q : W_k}{\|W_k\|^2}$
13:   $X_{k+1} = \alpha_k [X_k + \lambda_k U_k D_k v_k - U_k D_k u_k]$
14: Enddo, if $\|X_{k+1} - X_k\| < \varepsilon$
Like DOIA(m), the MPIA(m) can be constructed similarly.
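A compact numpy sketch of Algorithm 4 (our illustrative transcription, not the authors' code; the initial iterate $X_0 = A^T/\|A\|^2$, the function name mpia, and the stopping limits are assumptions):

import numpy as np

def mpia(A, m=3, tol=1e-10, maxit=500):
    # Moore-Penrose iterative algorithm (Algorithm 4), one pass per loop step.
    ip = lambda P, Q: np.trace(P.T @ Q)
    q = A.shape[0]
    X = A.T / np.linalg.norm(A)**2          # assumed initial iterate
    for _ in range(maxit):
        AX = A @ X
        Us, T = [], X
        for _ in range(m):                  # pencil U_j = X (A X)^j
            T = T @ AX
            Us.append(T)
        Q = []                              # modified Gram-Schmidt
        for U in Us:
            for V in Q:
                U = U - ip(V, U) * V
            Q.append(U / np.linalg.norm(U))
        Ys = [A @ U for U in Q]
        C = np.array([[ip(Yi, Yj) for Yj in Ys] for Yi in Ys])
        D = np.linalg.inv(C)
        v = np.array([np.trace(Yi) for Yi in Ys])          # I_q : Y_i
        u = np.array([ip(AX, Yi) for Yi in Ys])
        lam = (np.linalg.norm(AX)**2 - u @ D @ u) / (np.trace(AX) - v @ D @ u)
        Dv, Du = D @ v, D @ u
        UDv = sum(c * U for c, U in zip(Dv, Q))
        UDu = sum(c * U for c, U in zip(Du, Q))
        YDv = sum(c * Y for c, Y in zip(Dv, Ys))
        YDu = sum(c * Y for c, Y in zip(Du, Ys))
        W = lam * YDv - YDu + AX
        a = np.trace(W) / np.linalg.norm(W)**2             # I_q : W / ||W||^2
        X_new = a * (X + lam * UDv - UDu)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

A = np.random.default_rng(3).standard_normal((4, 6))   # full-row-rank test matrix
X = mpia(A)
print(np.linalg.norm(A @ X @ A - A))                    # first Penrose residual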

6.2. Optimized Hyperpower Method

Pan et al. [51] proposed the following hyperpower method:
$X_{k+1} = X_k \sum_{i=0}^{p-1} (I_q - A X_k)^i;$
also refer to [52,53]. Equation (150) includes the Newton–Schultz method [30], and the Chebyshev method [31] as special cases.
Let $m = p - 1$; Equation (150) is rewritten as
$X_{k+1} = X_k + \sum_{i=1}^m (I_q - A X_k)^i;$
we generalize it to
$X_{k+1} = \alpha_0^k X_k + \sum_{i=1}^m \alpha_i^k (I_q - A X_k)^i.$
Equation (152) is used to find the optimized descent matrix X in the current-step residual equation:
$A X = R_k = I_q - A Z_k.$
Two optimization methods are used to determine the coefficients α 0 k , α 1 k , , α m k . The results in Theorem 1 are also applicable to problem (153). Hence, we propose the following iterative algorithm, namely, the optimized hyperpower iterative algorithm (OHPIA(m)); we take the restarted technique into consideration. Since the matrix pencil consists of the powers of the residual matrices, the orthonormalization for U k is not suggested (Algorithm 5).
Algorithm 5 OHPIA(m)
1: Select m 0 and m 1 , and give Z 0
2: Do i = 1 ,
3: Do m = m 0 , , m 1 , k = m m 0
4:    U 0 k = X k
5:    $U_j^k = X_k R_k^j = X_k (I_q - A Z_k)^j$, $j = 1, \ldots, m$
6:    U k = [ U 1 k , , U m k ]
7:    Y k = A U k
8:    C i j k = Y i k : Y j k , C k = [ C i j k ]
9:    D k = C k 1
10:   v i k = R k : Y i k , v k = [ v i k ]
11:   u i k = ( A U 0 k ) : Y i k , u k = [ u i k ]
12:   $\lambda_k = \frac{\|A U_0^k\|^2 - u_k^T D_k u_k}{R_k : (A U_0^k) - v_k^T D_k u_k}$
13:   $W_k = \lambda_k Y_k D_k v_k + A U_0^k - Y_k D_k u_k$
14:   $\alpha_k = \frac{R_k : W_k}{\|W_k\|^2}$
15:   $X_k = \alpha_k [U_0^k + \lambda_k U_k D_k v_k - U_k D_k u_k]$
16:   $Z_{k+1} = Z_k + X_k$
17: Enddo of (3), if $\|R_{k+1}\| < \varepsilon$
18: Otherwise, Z 0 = Z k + 1 , go to (2)
The OHPIA is a new algorithm based on the DOIA newly developed in Section 4; the OHPIA uses a new matrix pencil derived from Equation (152):
$U = [U_1, \ldots, U_m] = [I_q - A X_k, \ldots, (I_q - A X_k)^m].$
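For reference, the following numpy sketch (our illustration) implements only the baseline hyperpower iteration of Equation (150), without the coefficient optimization and restart of the OHPIA(m); the function name and the random seed are assumptions:

import numpy as np

def hyperpower(A, p=3, tol=1e-12, maxit=200):
    # X_{k+1} = X_k (I + R + ... + R^{p-1}) with R = I_q - A X_k,
    # started from X_0 = A^T / ||A||^2 (Eq. (150), un-optimized).
    q = A.shape[0]
    Iq = np.eye(q)
    X = A.T / np.linalg.norm(A)**2
    for _ in range(maxit):
        R = Iq - A @ X
        S, P = Iq.copy(), Iq.copy()
        for _ in range(p - 1):
            P = P @ R
            S = S + P
        X = X @ S
        if np.linalg.norm(A @ X @ A - A) < tol:
            break
    return X

A = np.random.default_rng(4).standard_normal((5, 8))
print(np.allclose(hyperpower(A), np.linalg.pinv(A)))   # agrees with numpy's pinv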

7. Numerical Testing of Rectangular Matrix

7.1. Example 8

We find the Moore–Penrose inverse of the following rank-deficient matrix [54]:
A = 1 0 1 2 1 1 0 1 0 1 1 3 0 1 1 3 1 1 0 1 1 0 1 2 .
This problem belongs to Case 2. We apply the iterative method of the DOIA to find the Moore–Penrose inverse, which has an exact solution:
A = 1 102 15 18 3 3 18 15 8 13 5 5 13 8 7 5 2 2 5 7 6 3 9 9 3 6 .
Under the same convergence criterion $\varepsilon = 10^{-9}$ as used by Xia et al. [54], the iteration process of the DOIA with $m = 1$ converges very fast, in three steps. By comparison with Equation (154), ME $= 7.3 \times 10^{-9}$ is obtained.
In Table 5, we compare the DOIA ($m = 1$ and $\varepsilon = 10^{-15}$) with other numerical methods specified by Xia et al. [54], Petkovic and Stanimirovic [38], and Kansal et al. [36], to assess the performance of the DOIA measured using four numerical errors of the Penrose equations (137)–(140) and the iteration number (IN).
Recently, Kansal et al. [36] proposed the iterative scheme in Equation (149). For this problem, we take $\beta = 0.5$. When we take $\varepsilon = 10^{-15}$, it overflows; hence, we take $\varepsilon = 10^{-13}$.
In the above, Alg. is the abbreviation for Algorithm, IP represents the initial point, IN is the iteration number, and $\mathbf{1} = (1, \ldots, 1)^T$. The first two algorithms were reported by Xia et al. [54]. It can be seen that the DOIA converges much faster and is more accurate than the other algorithms.
In the computation of the pseudoinverse, the last two singular values close to zero can be prevented from appearing in the matrix $C \in \mathbb{R}^{m \times m}$ if m is small enough. For example, we take $m = 1$ in Table 5, such that $D = C^{-1}$ can be computed accurately, without the nearly zero singular values appearing in the denominator and enlarging the rounding error.

7.2. Example 9: Inverting the Ill-Conditioned Rectangular Hilbert Matrix

We want to find the Moore–Penrose inverse of the Hilbert matrix:
$A_{ij} = \frac{1}{i + j - 1}, \quad 1 \le i \le q, \; 1 \le j \le n.$
This problem belongs to Case 2 and is more difficult than the previous example. Here, we fix $q = 50$, $n = 3$. The numerical errors of the Penrose Equations (137)–(140) are compared in Table 6. In the DOIA(m) and OHPIA(m), we take $m_0 = 1$, $m_1 = 2$ and $\varepsilon = 10^{-10}$. The DOIA(m) and OHPIA(m) converge faster than the methods in [36,38].
Next, we find the Moore–Penrose inverse of the Hilbert matrix with $n = 10$ and $q = 15$. For the MPIA, $m = 3$; $(m_0, m_1) = (1, 6)$ for the MPIA(m). The method in [36] does not converge with $\varepsilon = 10^{-6}$ and is thus not applicable for finding the inverse of a highly ill-conditioned Hilbert matrix with $q = 15$ and $n = 10$ (Table 7). Like the KKRJ, the OHPIA(m) is weak for highly ill-conditioned Hilbert matrices; the errors of $\|A A^{\dagger} A - A\|_2$ and $\|(A A^{\dagger})^T - A A^{\dagger}\|_2$ rise to the order of $10^{-11}$.

7.3. Example 10

We find the Moore–Penrose inverse of the cyclic matrix in Example 2, as shown in Table 8, where we take n = 50 and q = 60 , m = 3 for MPIA, and ( m 0 , m 1 ) = ( 2 , 6 ) for MPIA(m) and OHPIA(m). It is remarkable that the OHPIA(m) is much better than the other methods.

7.4. Example 11

We find the Moore–Penrose inverse of a full-rank matrix given by $A_{ij} = 1$, $i = 1, \ldots, 100$, $j = 1, \ldots, 10$, but with $A_{ij} = 0$ for $i = 1, \ldots, 10$, $j > i$. We take $(m_0, m_1) = (2, 6)$ and $\varepsilon = 10^{-9}$ for the MPIA(m); $(m_0, m_1) = (5, 6)$ is taken for the OHPIA(m). The method in Equation (145) is denoted as the NSM. Table 9 compares the errors and IN for the different methods; the MPIA(m) outperforms the other methods, and the OHPIA(m) is better than the MPIA(m).

7.5. Example 12

We find the Moore–Penrose inverse of a randomly generated real matrix of size $100 \times 50$ with $-2 < A_{ij} < 2$. We take $(m_0, m_1) = (3, 6)$ and $\varepsilon = 10^{-12}$ for the MPIA(m). Table 10 compares the errors and IN of the different methods.

8. Conclusions

In the m-degree matrix pencil Krylov subspace, an explicit solution (34) of the linear matrix equation was obtained by optimizing the two merit functions in (32) and (33). Then, we derived an optimal iterative algorithm, the DOIA, to solve square or nonsquare linear matrix equations. The DOIA possesses an A-orthogonality property and absolute convergence, and it has good computational efficiency and accuracy in solving linear matrix equations. The restarted version DOIA(m) was shown to speed up the convergence. The Moore–Penrose pseudoinverses of rectangular matrices were also derived by using the DOIA, DOIA(m), MPIA, and MPIA(m). The proposed polynomial pencil method includes the Newton–Schultz method, the Chebyshev method, the Homeier method, and the KKRJ method as special cases; importantly, in the proposed iterative MPIA and MPIA(m), the coefficients are optimized using two minimization techniques. We also proposed a new modification of the hyperpower method, namely, the optimized hyperpower iterative algorithm OHPIA(m), which, through two optimizations, becomes the most powerful iterative algorithm for quickly computing the Moore–Penrose pseudoinverses of rectangular matrices. However, the OHPIA(m), like the KKRJ, is weak in inverting ill-conditioned rectangular matrices.
The idea of varying the affine matrix Krylov subspace is novel for finding a better iterative solution to linear matrix equations based on a dual optimization. The limitations are that several matrix multiplications are needed to construct the projection operator P in the matrix Krylov subspace, and the computational cost of the inversion $C^{-1}$ of the m × m matrix C is high.

Author Contributions

Conceptualization, C.-S.L. and C.-W.C.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L., C.-W.C. and C.-L.K.; Validation, C.-S.L., C.-W.C. and C.-L.K.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L., C.-W.C. and C.-L.K.; Resources, C.-S.L. and C.-W.C.; Data curation, C.-S.L., C.-W.C. and C.-L.K.; Writing—original draft, C.-S.L. and C.-W.C.; Writing—review & editing, C.-W.C.; Visualization, C.-S.L., C.-W.C. and C.-L.K.; Supervision, C.-S.L. and C.-W.C.; Project administration, C.-W.C.; Funding acquisition, C.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Science and Technology Council [grant numbers: NSTC 112-2221-E-239-022].

Data Availability Statement

The data presented in this study are available on request from the corresponding authors.

Acknowledgments

The authors would like to express their thanks to the reviewers, who supplied feedback to improve the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hestenes, M.R.; Stiefel, E.L. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 1952, 49, 409–436.
  2. Lanczos, C. Solution of systems of linear equations by minimized iterations. J. Res. Nat. Bur. Stand. 1952, 49, 33–53.
  3. Liu, C.S. An optimal multi-vector iterative algorithm in a Krylov subspace for solving the ill-posed linear inverse problems. CMC Comput. Mater. Contin. 2013, 33, 175–198.
  4. Dongarra, J.; Sullivan, F. Guest editors’ introduction to the top 10 algorithms. Comput. Sci. Eng. 2000, 2, 22–23.
  5. Simoncini, V.; Szyld, D.B. Recent computational developments in Krylov subspace methods for linear systems. Numer. Linear Algebra Appl. 2007, 14, 1–59.
  6. Saad, Y.; Schultz, M.H. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7, 856–869.
  7. Saad, Y. Krylov subspace methods for solving large unsymmetric linear systems. Math. Comput. 1981, 37, 105–126.
  8. Freund, R.W.; Nachtigal, N.M. QMR: A quasi-minimal residual method for non-Hermitian linear systems. Numer. Math. 1991, 60, 315–339.
  9. van Den Eshof, J.; Sleijpen, G.L.G. Inexact Krylov subspace methods for linear systems. SIAM J. Matrix Anal. Appl. 2004, 26, 125–153.
  10. Paige, C.C.; Saunders, M.A. Solution of sparse indefinite systems of linear equations. SIAM J. Numer. Anal. 1975, 12, 617–629.
  11. Fletcher, R. Conjugate gradient methods for indefinite systems. In Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1976; Volume 506, pp. 73–89.
  12. Sonneveld, P. CGS: A fast Lanczos-type solver for nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1989, 10, 36–52.
  13. van der Vorst, H.A. Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1992, 13, 631–644.
  14. Saad, Y.; van der Vorst, H.A. Iterative solution of linear systems in the 20th century. J. Comput. Appl. Math. 2000, 123, 1–33.
  15. Bouyghf, F.; Messaoudi, A.; Sadok, H. A unified approach to Krylov subspace methods for solving linear systems. Numer. Algorithms 2024, 96, 305–332.
  16. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003.
  17. van der Vorst, H.A. Iterative Krylov Methods for Large Linear Systems; Cambridge University Press: New York, NY, USA, 2003.
  18. Jbilou, K.; Messaoudi, A.; Sadok, H. Global FOM and GMRES algorithms for matrix equations. Appl. Numer. Math. 1999, 31, 49–63.
  19. El Guennouni, A.; Jbilou, K.; Riquet, A.J. Block Krylov subspace methods for solving large Sylvester equations. Numer. Algorithms 2002, 29, 75–96.
  20. Frommer, A.; Lund, K.; Szyld, D.B. Block Krylov subspace methods for functions of matrices. Electron. Trans. Numer. Anal. 2017, 47, 100–126.
  21. Frommer, A.; Lund, K.; Szyld, D.B. Block Krylov subspace methods for functions of matrices II: Modified block FOM. SIAM J. Matrix Anal. Appl. 2020, 41, 804–837.
  22. El Guennouni, A.; Jbilou, K.; Riquet, A.J. The block Lanczos method for linear systems with multiple right-hand sides. Appl. Numer. Math. 2004, 51, 243–256.
  23. Kubínová, M.; Soodhalter, K.M. Admissible and attainable convergence behavior of block Arnoldi and GMRES. SIAM J. Matrix Anal. Appl. 2020, 41, 464–486.
  24. Lund, K. Adaptively restarted block Krylov subspace methods with low-synchronization skeletons. Numer. Algorithms 2023, 93, 731–764.
  25. Konghua, G.; Hu, X.; Zhang, L. A new iteration method for the matrix equation AX = B. Appl. Math. Comput. 2007, 187, 1434–1441.
  26. Meng, C.; Hu, X.; Zhang, L. The skew-symmetric orthogonal solutions of the matrix equation AX = B. Linear Algebra Appl. 2005, 402, 303–318.
  27. Peng, Z.; Hu, X. The reflexive and anti-reflexive solutions of the matrix equation AX = B. Linear Algebra Appl. 2003, 375, 147–155.
  28. Zhang, J.C.; Zhou, S.Z.; Hu, X. The (P,Q) generalized reflexive and anti-reflexive solutions of the matrix equation AX = B. Appl. Math. Comput. 2009, 209, 254–258.
  29. Liu, C.S.; Hong, H.K.; Atluri, S.N. Novel algorithms based on the conjugate gradient method for inverting ill-conditioned matrices, and a new regularization method to solve ill-posed linear systems. Comput. Model. Eng. Sci. 2010, 60, 279–308.
  30. Higham, N.J. Functions of Matrices: Theory and Computation; SIAM: Philadelphia, PA, USA, 2008.
  31. Amat, S.; Ezquerro, J.A.; Hernandez-Veron, M.A. Approximation of inverse operators by a new family of high-order iterative methods. Numer. Linear Algebra Appl. 2014, 21, 629.
  32. Homeier, H.H.H. On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 2005, 176, 425–432.
  33. Petkovic, M.D.; Stanimirovic, P.S. Iterative method for computing Moore–Penrose inverse based on Penrose equations. J. Comput. Appl. Math. 2011, 235, 1604–1613.
  34. Dehdezi, E.K.; Karimi, S. GIBS: A general and efficient iterative method for computing the approximate inverse and Moore–Penrose inverse of sparse matrices based on the Schultz iterative method with applications. Linear Multilinear Algebra 2023, 71, 1905–1921.
  35. Cordero, A.; Soto-Quiros, P.; Torregrosa, J.R. A general class of arbitrary order iterative methods for computing generalized inverses. Appl. Math. Comput. 2021, 409, 126381.
  36. Kansal, M.; Kaur, M.; Rani, L.; Jantschi, L. A cubic class of iterative procedures for finding the generalized inverses. Mathematics 2023, 11, 3031.
  37. Cordero, A.; Segura, E.; Torregrosa, J.R.; Vassileva, M.P. Inverse matrix estimations by iterative methods with weight functions and their stability analysis. Appl. Math. Lett. 2024, 155, 109122.
  38. Petkovic, M.D.; Stanimirovic, P.S. Two improvements of the iterative method for computing Moore–Penrose inverse based on Penrose equations. J. Comput. Appl. Math. 2014, 267, 61–71.
  39. Katsikis, V.N.; Pappas, D.; Petralias, A. An improved method for the computation of the Moore–Penrose inverse matrix. Appl. Math. Comput. 2011, 217, 9828–9834.
  40. Stanimirovic, I.; Tasic, M. Computation of generalized inverse by using the LDL* decomposition. Appl. Math. Lett. 2012, 25, 526–531.
  41. Sheng, X.; Wang, T. An iterative method to compute Moore–Penrose inverse based on gradient maximal convergence rate. Filomat 2013, 27, 1269–1276.
  42. Toutounian, F.; Ataei, A. A new method for computing Moore–Penrose inverse matrices. J. Comput. Appl. Math. 2009, 228, 412–417.
  43. Soleimani, F.; Stanimirovic, P.S.; Soleymani, F. Some matrix iterations for computing generalized inverses and balancing chemical equations. Algorithms 2015, 8, 982–998.
  44. Baksalary, O.M.; Trenkler, G. The Moore–Penrose inverse: A hundred years on a frontline of physics research. Eur. Phys. J. 2021, 46, 9.
  45. Pavlikova, S.; Sevcovic, D. On the Moore–Penrose pseudo-inversion of block symmetric matrices and its application in the graph theory. Linear Algebra Appl. 2023, 673, 280–303.
  46. Sayevand, K.; Pourdarvish, A.; Machado, J.A.T.; Erfanifar, R. On the calculation of the Moore–Penrose and Drazin inverses: Application to fractional calculus. Mathematics 2021, 9, 2501.
  47. AL-Obaidi, R.H.; Darvishi, M.T. A comparative study on qualification criteria of nonlinear solvers with introducing some new ones. J. Math. 2022, 2022, 4327913.
  48. Liu, C.S.; Kuo, C.L.; Chang, C.W. Solving least-squares problems via a double-optimal algorithm and a variant of Karush–Kuhn–Tucker equation for over-determined system. Algorithms 2024, 17, 211.
  49. Einstein, A. The foundation of the general theory of relativity. Ann. Phys. 1916, 49, 769–822.
  50. Todd, J. The condition of finite segments of the Hilbert matrix. In The Solution of Systems of Linear Equations and the Determination of Eigenvalues; Taussky, O., Ed.; National Bureau of Standards: Applied Mathematics Series; National Bureau of Standards: Gaithersburg, MD, USA, 1954; Volume 39, pp. 109–116.
  51. Pan, Y.; Soleymani, F.; Zhao, L. An efficient computation of generalized inverse of a matrix. Appl. Math. Comput. 2018, 316, 89–101.
  52. Climent, J.J.; Thome, N.; Wei, Y. A geometrical approach on generalized inverses by Neumann-type series. Linear Algebra Appl. 2001, 332–334, 533–540.
  53. Soleymani, F.; Stanimirovic, P.S.; Haghani, F.K. On hyperpower family of iterations for computing outer inverses possessing high efficiencies. Linear Algebra Appl. 2015, 484, 477–495.
  54. Xia, Y.; Chen, T.; Shan, J. A novel iterative method for computing generalized inverse. Neural Comput. 2014, 26, 449–465.
Figure 1. For Example 1 solved by the DOIA: (a) residual, (b) comparison of numerical and exact solutions, and (c) numerical error; (b,c) with the same x axis.
Figure 2. For Example 2 solved by the DOIA: (a) residual, (b) comparison of numerical and exact solutions, and (c) numerical error; (b,c) with the same x axis.
Figure 3. For Example 3 solved by the DOS and DOIA, showing (a) residual and (b) comparison of the error of the inverse matrix.
Figure 4. For Example 5 solved by the DOS, showing (a) residual, (b) α_0 λ_0, and (c) error of x_i; (a,b) with the same x axis.
Figure 5. For Example 5 solved by the DOS, DOIA, and DOIA(m), showing (a) residuals and (b) comparison of the error of the inverse matrix.
Figure 6. For Example 5 solved by the DOIA, DOIA(m), and QR, comparing errors of (a) x_i and (b) the inverse matrix.
Figure 7. For Example 6 solved by the DOS: (a) comparison of numerical and exact solutions and (b) numerical error of matrix elements; (a,b) with the same x axis.
Table 1. For Example 1, comparing ME, iteration number (IN), and CPU time obtained by DOIA(m) for different n.
n   | 60            | 100           | 150           | 300           | 500
ME  | 1.06 × 10^-11 | 1.83 × 10^-11 | 3.92 × 10^-11 | 2.38 × 10^-11 | 6.89 × 10^-11
IN  | 3             | 3             | 3             | 3             | 3
CPU | 0.45          | 0.69          | 1.06          | 8.32          | 29.22
Table 2. For Example 2, (‖AX - I_n‖, IN) obtained by DOIA(m) and the Newton–Schultz iterative method (NSIM) for different n.
n       | 4                   | 5                   | 6                   | 7
DOIA(m) | (3.82 × 10^-16, 4)  | (4.60 × 10^-16, 4)  | (8.37 × 10^-16, 7)  | (8.12 × 10^-16, 14)
NSIM    | (8.42 × 10^-16, 11) | (3.66 × 10^-16, 12) | (1.68 × 10^-16, 12) | (1.67 × 10^-15, 13)
Table 3. For Example 3, ‖AX - I_6‖, ME, and IN obtained by DOIA for different noise s values.
s          | 0             | 0.01          | 0.03          | 0.05          | 0.1
‖AX - I_6‖ | 9.77 × 10^-9  | 9.74 × 10^-9  | 9.61 × 10^-9  | 9.82 × 10^-9  | 9.96 × 10^-9
ME         | 5.73 × 10^-11 | 8.24 × 10^-3  | 2.48 × 10^-2  | 4.15 × 10^-2  | 8.37 × 10^-2
IN         | 6712484246622
Table 4. For Example 5, the ‖AX - I_n‖ and IN obtained by DOIA(m) and the Newton–Schultz iterative method (NSIM) for different n.
n       | 4                    | 5                    | 6
DOIA(m) | (1.41 × 10^-13, 5)   | (9.70 × 10^-12, 9)   | (8.92 × 10^-10, 148)
NSIM    | (3.27 × 10^-13, 500) | (4.81 × 10^-12, 500) | (1.80 × 10^-10, 500)
Table 5. Computed results of a rank-deficient matrix in Example 8.
Alg. | IP        | ‖AA^†A - A‖_2 | ‖A^†AA^† - A^†‖_2 | ‖(AA^†)^T - AA^†‖_2 | ‖(A^†A)^T - A^†A‖_2 | IN
[54] | 10^-5 A^T | 8.92 × 10^-16 | 3.36 × 10^-16     | 7.85 × 10^-17       | 1.97 × 10^-15       | 50
[54] | 0.1 A^T   | 9.17 × 10^-20 | 6.99 × 10^-16     | 7.85 × 10^-17       | 2.86 × 10^-15       | 60
PSM  | 0.1 A^T   | 1.53 × 10^-14 | 9.21 × 10^-17     | 4.13 × 10^-31       | 1.17 × 10^-30       | 165
KKRJ | 0.025 A^T | 3.01 × 10^-30 | 2.25 × 10^-29     | 2.28 × 10^-31       | 1.53 × 10^-30       | 6
NSIM | A^T/‖A‖^2 | 4.01 × 10^-31 | 2.60 × 10^-31     | 1.29 × 10^-31       | 2.20 × 10^-32       | 9
DOIA | X_0 = 0   | 2.80 × 10^-31 | 4.83 × 10^-33     | 6.16 × 10^-33       | 1.78 × 10^-32       | 3
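The four error columns reported in Tables 5–10 are the residuals of the Penrose conditions evaluated at the computed pseudoinverse X ≈ A^†. Assuming they denote matrix norms of these residuals (the column headers are partly garbled in the source), they can be checked with a few lines of NumPy; the function name here is ours.

```python
import numpy as np

def penrose_residuals(A, X, ord=2):
    """Residual norms of the four Penrose conditions for X approximating pinv(A)."""
    AX, XA = A @ X, X @ A
    return (np.linalg.norm(A @ X @ A - A, ord),   # A X A = A
            np.linalg.norm(X @ A @ X - X, ord),   # X A X = X
            np.linalg.norm(AX.T - AX, ord),       # (A X)^T = A X
            np.linalg.norm(XA.T - XA, ord))       # (X A)^T = X A
```

Evaluating penrose_residuals(A, np.linalg.pinv(A)) for a well-conditioned test matrix gives values at round-off level, which is the order of magnitude reached by the best-performing rows of the tables.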
Table 6. Computed results of the Hilbert matrix with q = 50, n = 3 in Example 9.
Alg.     | IP            | ‖AA^†A - A‖_2 | ‖A^†AA^† - A^†‖_2 | ‖(AA^†)^T - AA^†‖_2 | ‖(A^†A)^T - A^†A‖_2 | IN
PSM      | 0.8 A^T       | 1.4 × 10^-28  | 3.7 × 10^-21      | 4.1 × 10^-30        | 3.5 × 10^-27        | 34
KKRJ     | 0.38 A^T      | 1.01 × 10^-28 | 9.45 × 10^-27     | 2.52 × 10^-26       | 5.77 × 10^-27       | 12
NSIM     | A^T/‖A‖^2     | 5.65 × 10^-29 | 1.40 × 10^-26     | 1.69 × 10^-27       | 2.60 × 10^-27       | 20
DOIA(m)  | X_0 = 0_(n×q) | 1.35 × 10^-29 | 1.99 × 10^-25     | 1.54 × 10^-26       | 7.32 × 10^-27       | 3
OHPIA(m) | A^T/‖A‖^2     | 1.61 × 10^-29 | 7.52 × 10^-26     | 2.23 × 10^-26       | 9.58 × 10^-28       | 3
Table 7. Computed results of the Hilbert matrix with q = 15 and n = 10.
Alg.    | IP        | ‖AA^†A - A‖_2 | ‖A^†AA^† - A^†‖_2 | ‖(AA^†)^T - AA^†‖_2 | ‖(A^†A)^T - A^†A‖_2 | IN
KKRJ    | A^T/‖A‖^2 | 2.58 × 10^-9  | 1.44 × 10^-12     | 1.17 × 10^-6        | 2.78 × 10^-11       | >500
MPIA    | A^T/‖A‖^2 | 2.34 × 10^-6  | 2.84 × 10^-2      | 1.49 × 10^-3        | 8.66 × 10^-4        | 50
MPIA(m) | A^T/‖A‖^2 | 1.85 × 10^-9  | 1.28 × 10^-2      | 4.20 × 10^-5        | 5.45 × 10^-6        | 7
Table 8. Computed results of the cyclic matrix with q = 60 and n = 50 in Example 10.
Alg.     | IP        | ‖AA^†A - A‖_2 | ‖A^†AA^† - A^†‖_2 | ‖(AA^†)^T - AA^†‖_2 | ‖(A^†A)^T - A^†A‖_2 | IN
KKRJ     | A^T/‖A‖^2 | 3.39 × 10^-20 | 2.27 × 10^-31     | 4.98 × 10^-26       | 6.73 × 10^-26       | 11
MPIA     | A^T/‖A‖^2 | 3.87 × 10^-21 | 3.35 × 10^-29     | 3.57 × 10^-22       | 7.92 × 10^-28       | 24
MPIA(m)  | A^T/‖A‖^2 | 2.64 × 10^-21 | 4.24 × 10^-29     | 5.34 × 10^-21       | 9.34 × 10^-28       | 17
OHPIA(m) | A^T/‖A‖^2 | 2.42 × 10^-24 | 9.93 × 10^-32     | 2.72 × 10^-27       | 3.25 × 10^-28       | 6
Table 9. Computed results of a full-rank matrix in Example 11 with q = 100 and n = 10.
Alg.     | IP        | ‖AA^†A - A‖_2 | ‖A^†AA^† - A^†‖_2 | ‖(AA^†)^T - AA^†‖_2 | ‖(A^†A)^T - A^†A‖_2 | IN
[54]     | 10^-1 A^T | 3.24 × 10^-14 | 1.89 × 10^-15     | 1.10 × 10^-15       | 5.99 × 10^-13       | 130
NSM      | 10^-8 A^T | 1.29 × 10^-12 | 2.13 × 10^-15     | 1.78 × 10^-13       | 9.28 × 10^-15       | 27
KKRJ     | A^T/‖A‖^2 | 2.77 × 10^-24 | 3.58 × 10^-26     | 5.19 × 10^-25       | 3.23 × 10^-26       | 11
MPIA(m)  | A^T/‖A‖^2 | 8.19 × 10^-23 | 1.75 × 10^-24     | 7.21 × 10^-27       | 3.91 × 10^-28       | 7
OHPIA(m) | A^T/‖A‖^2 | 7.23 × 10^-28 | 2.85 × 10^-31     | 7.41 × 10^-20       | 1.90 × 10^-30       | 5
Table 10. Computed results of a random matrix in Example 12 with q = 100 and n = 50.
Alg.     | IP             | ‖AA^†A - A‖_2 | ‖A^†AA^† - A^†‖_2 | ‖(AA^†)^T - AA^†‖_2 | ‖(A^†A)^T - A^†A‖_2 | IN
KKRJ     | A^T/‖A‖^2      | 1.18 × 10^-26 | 7.35 × 10^-31     | 8.51 × 10^-28       | 1.64 × 10^-28       | 10
MPIA(m)  | A^T/‖A‖^2      | 1.31 × 10^-25 | 1.18 × 10^-29     | 1.28 × 10^-24       | 1.37 × 10^-28       | 6
OHPIA(m) | 0.01 A^T/‖A‖^2 | 1.14 × 10^-27 | 1.25 × 10^-31     | 7.95 × 10^-23       | 9.76 × 10^-30       | 6

