Article

Randomized Block Kaczmarz Methods for Inner Inverses of a Matrix

Lili Xing, Wendi Bao, Ying Lv, Zhiwei Guo and Weiguo Li
College of Science, China University of Petroleum, Qingdao 266580, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(3), 475; https://doi.org/10.3390/math12030475
Submission received: 10 December 2023 / Revised: 25 January 2024 / Accepted: 30 January 2024 / Published: 2 February 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract:
In this paper, two randomized block Kaczmarz methods for computing inner inverses of an arbitrary rectangular matrix A are presented. These are iterative methods that involve no matrix–matrix multiplications, and their convergence is proved. Numerical results show that the proposed methods are more efficient than iterative methods involving matrix multiplications for high-dimensional matrices.
MSC:
65F10; 65F45; 65H10

1. Introduction

Consider the linear matrix equation
$$A X A = A, \qquad (1)$$
where $A\in\mathbb{C}^{m\times n}$ and $X\in\mathbb{C}^{n\times m}$. A solution X of (1) is called an inner inverse of A. For an arbitrary $X^{(0)}\in\mathbb{C}^{n\times m}$, all the inner inverses of A can be expressed as
$$X_0 = X^{(0)} + A^\dagger - A^\dagger A X^{(0)} A A^\dagger,$$
where $A^\dagger$ is the Moore–Penrose generalized inverse of A.
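Indeed, a direct check using only the defining identity $AA^\dagger A=A$ shows that every matrix of this form satisfies (1):
$$AX_0A = AX^{(0)}A + AA^\dagger A - AA^\dagger A\,X^{(0)}\,AA^\dagger A = AX^{(0)}A + A - AX^{(0)}A = A.$$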
Inner inverses play a role in solving systems of linear equations, finding solutions to least squares problems, and characterizing the properties of linear transformations [1]. They are also useful in various areas of engineering, such as robotics, big data analysis, network learning, sensory fusion, and so on [2,3,4,5,6].
To calculate an inner inverse of a matrix, various tools can be used, such as the Moore–Penrose pseudoinverse, the singular value decomposition (SVD), or the method of partitioned matrices [1]. To our knowledge, few works discuss numerical methods for computing all the inner inverses of a matrix. In [7], the authors designed a gradient-based iterative method (GBMC) to solve the matrix Equation (1), which has the following iterative formula:
$$X^{(k+1)} = X^{(k)} + \mu A^*\big(A - AX^{(k)}A\big)A^*,\qquad k=0,1,2,\ldots.$$
Here, $0<\mu<2/\|A\|_2^4$ is called the convergence factor. If the initial matrix is $X^{(0)}=A^*$, then the sequence $X^{(k)}$ converges to $A^\dagger$. Recently, various nonlinear and linear recurrent neural network (RNN) models have been developed for computing the pseudoinverse of an arbitrary rectangular matrix (for more details, see [8,9,10,11]). The gradient-based neural network (GNN), whose derivation is based on the gradient of a nonnegative energy function, is an alternative for calculating Moore–Penrose generalized inverses [12,13,14,15]. These methods for computing inner inverses and other generalized inverses of a matrix rely heavily on matrix–matrix products and therefore consume a lot of computing time. In this paper, we explore two block Kaczmarz methods that solve (1) using only matrix–vector products.
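For later comparison in Section 4, a minimal MATLAB sketch of the GBMC update displayed above is given below. It is an illustration rather than the code of [7]; the choice mu = 1/normest(A)^4 and the fixed iteration budget maxit are our assumptions (the analysis only requires 0 < mu < 2/||A||_2^4).

    function X = gbmc_sketch(A, X0, maxit)
        % Illustrative sketch of the GBMC iteration above (not the authors' code).
        mu = 1 / normest(A)^4;                        % any value in (0, 2/||A||_2^4)
        X  = X0;
        for k = 1:maxit
            X = X + mu * (A' * (A - A * X * A) * A'); % gradient step with matrix-matrix products
        end
    end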
Throughout this paper, we denote by $A^*$, $A^\dagger$, $A^-$, $\|A\|_2$, $\|A\|_F$ and $\langle A,B\rangle_F=\operatorname{trace}(A^*B)$ the conjugate transpose, the Moore–Penrose pseudoinverse, an inner inverse, the 2-norm, the Frobenius norm of A and the inner product of two matrices A and B, respectively. In addition, for a given matrix $A=(a_{ij})\in\mathbb{C}^{m\times n}$, we use $A_{i,:}$, $A_{:,j}$, $R(A)$, $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ to denote its $i$th row, its $j$th column, the column space of A, the maximum singular value and the smallest nonzero singular value of A, respectively. Recall that $\sigma_{\max}(A)=\|A\|_2$ and $\sigma_{\min}(A^\dagger)=1/\|A\|_2$. For an integer $m>1$, let $[m]=\{1,2,\ldots,m\}$. Let $\mathbb{E}_k$ denote the expected value conditional on the first k iterations; that is,
$$\mathbb{E}_k[\,\cdot\,]=\mathbb{E}\big[\,\cdot\mid j_0,j_1,\ldots,j_{k-1}\big],$$
where $j_s$ ($s=0,1,\ldots,k-1$) is the index chosen at the $s$th iteration.
The rest of this paper is organized as follows. In Section 2, we derive the projected randomized block Kaczmarz method (MII-PRBK) for finding the inner inverses of a matrix and give its theoretical analysis. In Section 3, we discuss the randomized average block Kaczmarz method (MII-RABK) and its convergence results. In Section 4, some numerical examples are provided to illustrate the effectiveness of the proposed methods. Finally, some brief concluding remarks are described in Section 5.

2. Projected Randomized Block Kaczmarz Method for Inner Inverses of a Matrix

The classical Kaczmarz method is a row-action iterative scheme for solving a consistent linear system $Ax=b$ that requires only $O(n)$ cost and storage per iteration and has a linear rate of convergence [16]. At each step, the method projects the current iterate onto the solution space of a single constraint. In this section, we are concerned with a randomized Kaczmarz method for solving the matrix Equation (1). At the $k$th iteration, we seek the next iterate $X^{(k+1)}$ that is closest to the current iterate $X^{(k)}$ in the Frobenius norm subject to the $i$th condition $A_{i,:}XA=A_{i,:}$. Hence, $X^{(k+1)}$ is the optimal solution of the following constrained optimization problem:
$$\min_{X\in\mathbb{C}^{n\times m}} \frac{1}{2}\|X-X^{(k)}\|_F^2 \quad \text{s.t.}\quad A_{i,:}XA=A_{i,:},\quad i\in[m]. \qquad (2)$$
Using the Lagrange multiplier method, (2) can be turned into the unconstrained optimization problem
$$\min_{X\in\mathbb{C}^{n\times m},\,Y\in\mathbb{C}^{n\times 1}} L(X,Y)=\min_{X\in\mathbb{C}^{n\times m},\,Y\in\mathbb{C}^{n\times 1}} \frac{1}{2}\|X-X^{(k)}\|_F^2 + \big\langle Y,\ (A_{i,:}XA-A_{i,:})^*\big\rangle_F. \qquad (3)$$
Differentiating the Lagrangian function $L(X,Y)$ with respect to X and setting the derivative to zero gives $X^{(k+1)}=X^{(k)}-A_{i,:}^*Y^*A^*$. Substituting this into (3) and differentiating $L(X,Y)$ with respect to Y, we obtain $Y^*=-\frac{1}{\|A_{i,:}\|_2^2}\big(A_{i,:}-A_{i,:}X^{(k)}A\big)(A^*A)^\dagger$. Since $(A^*A)^\dagger A^*=A^\dagger$, the projected randomized block Kaczmarz method for solving $AXA=A$ iterates as
$$X^{(k+1)} = X^{(k)} + \frac{A_{i,:}^*}{\|A_{i,:}\|_2^2}\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^\dagger,\qquad k=0,1,2,\ldots, \qquad (4)$$
where $i\in[m]$ is selected with probability $p_i=\|A_{i,:}\|_2^2/\|A\|_F^2$. We describe this method as Algorithm 1, which is called the MII-PRBK algorithm.
Algorithm 1 Projected randomized block Kaczmarz method for matrix inner inverses (MII-PRBK)
Input: $A\in\mathbb{C}^{m\times n}$, $X^{(0)}\in\mathbb{C}^{n\times m}$
1: for $k=0,1,2,\ldots$ do
2:   Pick $i$ with probability $p_i(A)=\|A_{i,:}\|_2^2/\|A\|_F^2$
3:   Compute $X^{(k+1)} = X^{(k)} + \dfrac{A_{i,:}^*}{\|A_{i,:}\|_2^2}\big(A_{i,:}-(A_{i,:}X^{(k)})A\big)A^\dagger$
4: end for
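A minimal MATLAB sketch of Algorithm 1 follows. It is an illustrative implementation under our own choices (a fixed iteration budget maxit and row sampling via an inverse-CDF draw), not the code used for the experiments in Section 4.

    function X = mii_prbk_sketch(A, X0, maxit)
        % Illustrative sketch of Algorithm 1 (MII-PRBK).
        Adag   = pinv(full(A));                  % Moore-Penrose pseudoinverse A^dagger
        rownrm = full(sum(abs(A).^2, 2));        % ||A_{i,:}||_2^2 for every row
        cp     = cumsum(rownrm / sum(rownrm));   % cumulative p_i = ||A_{i,:}||_2^2 / ||A||_F^2
        X = X0;
        for k = 1:maxit
            i = find(cp >= rand, 1, 'first');    % pick row i with probability p_i
            r = A(i,:) - (A(i,:) * X) * A;       % residual row A_{i,:} - A_{i,:} X^(k) A
            X = X + (A(i,:)' / rownrm(i)) * (r * Adag);   % projection step (4)
        end
    end

Note that the pinv(full(A)) call is exactly the expense that Section 3 removes; the per-iteration work consists only of products between one row of A and the matrices X, A and A^dagger.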
The following lemmas will be used extensively in this paper. Their proofs are straightforward.
Lemma 1. 
Let $A\in\mathbb{C}^{m\times n}$ be given. For any vector $u\in R(A^*)$, it holds that
$$\|Au\|_2^2 \ge \sigma_{\min}^2(A)\,\|u\|_2^2.$$
Lemma 2 
([17]). Let $A\in\mathbb{C}^{m\times n}$ be given. For any matrix $B\in\mathbb{C}^{n\times m}$, if $B_{:,j}\in R(A^*)$, $j=1,2,\ldots,m$, it holds that
$$\|AB\|_F^2 \ge \sigma_{\min}^2(A)\,\|B\|_F^2.$$
Using Lemma 2 twice, together with the fact that $\|AB\|_F^2=\|B^*A^*\|_F^2$, we obtain Lemma 3.
Lemma 3. 
Let $A\in\mathbb{C}^{m\times n}$ be given. For any matrix $B\in\mathbb{C}^{n\times m}$, if $B_{:,j}\in R(A^*)$, $j=1,2,\ldots,m$, and $(B_{i,:})^*\in R(A)$, $i=1,2,\ldots,n$, it holds that
$$\|ABA\|_F^2 \ge \sigma_{\min}^4(A)\,\|B\|_F^2.$$
Remark 1. 
Lemma 3 can be seen as a special case of Lemma 1 in [18]. That is, let $\mathcal{B}=\{B\in\mathbb{C}^{n\times m}\mid \exists\,Y\in\mathbb{C}^{m\times n}\ \text{s.t.}\ B=A^*YA^*\}$. For any matrix $B\in\mathcal{B}$, it holds that $\|ABA\|_F^2\ge\sigma_{\min}^4(A)\,\|B\|_F^2$. Notice that $\mathcal{B}$ is well defined because $0\in\mathcal{B}$ and $A^\dagger\in\mathcal{B}$.
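As a quick numerical sanity check of Lemma 3 and of the set $\mathcal{B}$ (our illustration, not part of the paper), one can draw a random member of $\mathcal{B}$ and compare the two sides of the inequality; the threshold used below to discard zero singular values is an arbitrary choice.

    % Spot check of Lemma 3 for a random rank-deficient A and a random B in the set B.
    m = 80; n = 60;
    A = randn(m, 40) * randn(40, n);        % rank-deficient A (rank at most 40)
    Y = randn(m, n);
    B = A' * Y * A';                        % an element of B = { A^* Y A^* }
    s = svd(A);
    smin = min(s(s > 1e-10 * max(s)));      % smallest nonzero singular value of A
    lhs = norm(A * B * A, 'fro')^2;
    rhs = smin^4 * norm(B, 'fro')^2;
    fprintf('||ABA||_F^2 = %.3e >= sigma_min^4 ||B||_F^2 = %.3e : %d\n', lhs, rhs, lhs >= rhs);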
Theorem 1. 
The sequence $\{X^{(k)}\}$ generated by the MII-PRBK method starting from any initial matrix $X^{(0)}\in\mathbb{C}^{n\times m}$ converges linearly to $X_0$ in the mean square sense. Moreover, the solution error in expectation for the iteration sequence $X^{(k)}$ obeys
$$\mathbb{E}\,\|X^{(k)}-X_0\|_F^2 \le \rho^{\,k}\,\|X^{(0)}-X_0\|_F^2,\qquad k=1,2,\ldots,$$
where $\rho = 1-\dfrac{\sigma_{\min}^4(A)}{\|A\|_F^2\,\|A\|_2^2}$.
Proof. 
For $k=0,1,2,\ldots$, by (4) and $AA^\dagger A=A$, we can obtain
$$AX_0A = A\big(X^{(0)}+A^\dagger-A^\dagger AX^{(0)}AA^\dagger\big)A = A \qquad (9)$$
and
$$A_{i,:}X^{(k+1)}A = A_{i,:}\Big(X^{(k)}+\frac{A_{i,:}^*}{\|A_{i,:}\|_2^2}\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^\dagger\Big)A = A_{i,:}X^{(k)}A + \big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^\dagger A = A_{i,:}. \qquad (10)$$
Combining (9), (10) and $(A^\dagger)^*=A(A^*A)^\dagger$, it follows from
$$\begin{aligned}
\big\langle X^{(k+1)}-X^{(k)},\ X^{(k+1)}-X_0\big\rangle_F
&= \Big\langle \frac{1}{\|A_{i,:}\|_2^2}A_{i,:}^*\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^\dagger,\ X^{(k+1)}-X_0\Big\rangle_F\\
&= \frac{1}{\|A_{i,:}\|_2^2}\operatorname{trace}\!\Big((A^\dagger)^*\big(A_{i,:}-A_{i,:}X^{(k)}A\big)^*A_{i,:}\big(X^{(k+1)}-X_0\big)\Big)\\
&= \frac{1}{\|A_{i,:}\|_2^2}\operatorname{trace}\!\Big(\big(A_{i,:}-A_{i,:}X^{(k)}A\big)^*A_{i,:}\big(X^{(k+1)}-X_0\big)(A^\dagger)^*\Big)\\
&= 0 \qquad \big(\text{by } A_{i,:}X^{(k+1)}A = A_{i,:}X_0A = A_{i,:}\big)
\end{aligned}$$
and
$$\begin{aligned}
\|X^{(k+1)}-X^{(k)}\|_F^2
&= \frac{1}{\|A_{i,:}\|_2^4}\big\|A_{i,:}^*\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^\dagger\big\|_F^2\\
&= \frac{1}{\|A_{i,:}\|_2^4}\operatorname{trace}\!\Big((A^\dagger)^*\big(A_{i,:}-A_{i,:}X^{(k)}A\big)^*A_{i,:}A_{i,:}^*\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^\dagger\Big)\\
&= \frac{1}{\|A_{i,:}\|_2^2}\big\|(A^\dagger)^*\big(A_{i,:}-A_{i,:}X^{(k)}A\big)^*\big\|_2^2\\
&\ge \frac{\sigma_{\min}^2(A^\dagger)}{\|A_{i,:}\|_2^2}\big\|A_{i,:}-A_{i,:}X^{(k)}A\big\|_2^2 \qquad (\text{by Lemma 1})
\end{aligned}$$
that
$$\|X^{(k+1)}-X_0\|_F^2 = \|X^{(k)}-X_0\|_F^2 - \|X^{(k+1)}-X^{(k)}\|_F^2 \le \|X^{(k)}-X_0\|_F^2 - \frac{\sigma_{\min}^2(A^\dagger)}{\|A_{i,:}\|_2^2}\big\|A_{i,:}-A_{i,:}X^{(k)}A\big\|_2^2.$$
By taking the conditional expectation, we have
$$\begin{aligned}
\mathbb{E}_k\|X^{(k+1)}-X_0\|_F^2 &\le \|X^{(k)}-X_0\|_F^2 - \sigma_{\min}^2(A^\dagger)\,\mathbb{E}_k\!\left[\frac{\|A_{i,:}-A_{i,:}X^{(k)}A\|_2^2}{\|A_{i,:}\|_2^2}\right]\\
&= \|X^{(k)}-X_0\|_F^2 - \sigma_{\min}^2(A^\dagger)\sum_{i=1}^{m}\frac{\|A_{i,:}\|_2^2}{\|A\|_F^2}\cdot\frac{\|A_{i,:}-A_{i,:}X^{(k)}A\|_2^2}{\|A_{i,:}\|_2^2}\\
&= \|X^{(k)}-X_0\|_F^2 - \frac{\sigma_{\min}^2(A^\dagger)}{\|A\|_F^2}\big\|A-AX^{(k)}A\big\|_F^2\\
&= \|X^{(k)}-X_0\|_F^2 - \frac{\sigma_{\min}^2(A^\dagger)}{\|A\|_F^2}\big\|A\big(X^{(k)}-X_0\big)A\big\|_F^2\\
&\le \|X^{(k)}-X_0\|_F^2 - \frac{\sigma_{\min}^4(A)\,\sigma_{\min}^2(A^\dagger)}{\|A\|_F^2}\,\|X^{(k)}-X_0\|_F^2
= \Big(1-\frac{\sigma_{\min}^4(A)}{\|A\|_F^2\,\|A\|_2^2}\Big)\|X^{(k)}-X_0\|_F^2.
\end{aligned}$$
The second inequality follows from Lemma 3, because $X^{(0)}-X_0 = A^\dagger AX^{(0)}AA^\dagger - A^\dagger \in\mathcal{B}$ and hence $X^{(k)}-X_0\in\mathcal{B}$ by induction.
By the law of total expectation, we have
$$\mathbb{E}\,\|X^{(k)}-X_0\|_F^2 \le \rho\,\mathbb{E}\,\|X^{(k-1)}-X_0\|_F^2 \le \cdots \le \rho^{\,k}\,\|X^{(0)}-X_0\|_F^2,\qquad k=1,2,\ldots.$$
This completes the proof.    □

3. Randomized Average Block Kaczmarz Method for Inner Inverses of a Matrix

In practice, the main drawback of (4) is that each iteration is expensive and difficult to parallelize, since the Moore–Penrose inverse $A^\dagger$ must be computed. In addition, in some practical problems $A^\dagger$ is unknown or too large to store. It is therefore necessary to develop pseudoinverse-free methods to compute the inner inverses of large-scale matrices. In this section, we extend the average block Kaczmarz method [16,19] for linear systems to matrix equations.
At each iteration, the PRBK method (4) performs an orthogonal projection of the current estimate $X^{(k)}$ onto the corresponding hyperplane $H_i=\{X\in\mathbb{C}^{n\times m}\mid A_{i,:}XA=A_{i,:}\}$. Instead of projecting onto $H_i$, we consider the approximate solution $X^{(k+1)}$ obtained by projecting the current estimate $X^{(k)}$ onto the hyperplane $H_{i,j}=\{X\in\mathbb{C}^{n\times m}\mid A_{i,:}XA_{:,j}=A_{i,j}\}$. That is,
$$X^{(k+1)} = X^{(k)} + \frac{A_{i,:}^*\big(A_{i,j}-A_{i,:}X^{(k)}A_{:,j}\big)A_{:,j}^*}{\|A_{i,:}\|_2^2\,\|A_{:,j}\|_2^2}.$$
Then, we take a convex combination of these directions over all columns $A_{:,j}$ (with weights $\|A_{:,j}\|_2^2/\|A\|_F^2$) together with a stepsize $\lambda>0$, and obtain the following average block Kaczmarz method:
$$X^{(k+1)} = X^{(k)} + \frac{\lambda}{\|A\|_F^2}\,\frac{A_{i,:}^*}{\|A_{i,:}\|_2^2}\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^*.$$
Setting $\alpha=\lambda/\|A\|_F^2>0$, we obtain the following randomized block Kaczmarz iteration
$$X^{(k+1)} = X^{(k)} + \frac{\alpha}{\|A_{i,:}\|_2^2}\,A_{i,:}^*\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^*,\qquad k=0,1,2,\ldots, \qquad (11)$$
where $i\in[m]$ is selected with probability $p_i=\|A_{i,:}\|_2^2/\|A\|_F^2$. The cost of each iteration of this method is $8mn+n-2m$ flops if the squared row norms of A have been computed in advance. We describe this method as Algorithm 2, which is called the MII-RABK algorithm.
Algorithm 2 Randomized average block Kaczmarz method for matrix inner inverses (MII-RABK)
Input: $A\in\mathbb{C}^{m\times n}$, $X^{(0)}\in\mathbb{C}^{n\times m}$ and $\alpha\in\mathbb{R}$
1: for $k=0,1,2,\ldots$ do
2:   Pick $i$ with probability $p_i(A)=\|A_{i,:}\|_2^2/\|A\|_F^2$
3:   Compute $X^{(k+1)} = X^{(k)} + \dfrac{\alpha}{\|A_{i,:}\|_2^2}A_{i,:}^*\big(A_{i,:}-(A_{i,:}X^{(k)})A\big)A^*$
4: end for
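A matching MATLAB sketch of Algorithm 2 is given below. Again it is our illustration, not the experimental code; the stepsize alpha = 1.6/normest(A)^2 is borrowed from the choice reported for Example 1, and the fixed iteration budget maxit is an assumption.

    function X = mii_rabk_sketch(A, X0, maxit)
        % Illustrative sketch of Algorithm 2 (MII-RABK): pseudoinverse-free.
        alpha  = 1.6 / normest(A)^2;             % stepsize in (0, 2/||A||_2^2)
        rownrm = full(sum(abs(A).^2, 2));        % ||A_{i,:}||_2^2 for every row
        cp     = cumsum(rownrm / sum(rownrm));   % cumulative p_i = ||A_{i,:}||_2^2 / ||A||_F^2
        X = X0;
        for k = 1:maxit
            i = find(cp >= rand, 1, 'first');    % pick row i with probability p_i
            r = A(i,:) - (A(i,:) * X) * A;       % residual row A_{i,:} - A_{i,:} X^(k) A
            X = X + (alpha / rownrm(i)) * (A(i,:)' * (r * A'));   % averaged step (11)
        end
    end

Each pass touches one row of A plus one product with A and one with A^*, which gives the O(mn) per-iteration cost discussed above.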
In the following theorem, using the idea of the RK method [20], we show that the iteration (11) converges linearly to the matrix $X_0 = X^{(0)}+A^\dagger-A^\dagger AX^{(0)}AA^\dagger$ for any initial matrix $X^{(0)}$.
Theorem 2. 
Assume $0<\alpha<2/\|A\|_2^2$. The sequence $\{X^{(k)}\}$ generated by the MII-RABK method starting from any initial matrix $X^{(0)}\in\mathbb{C}^{n\times m}$ converges linearly to $X_0$ in the mean square sense. Moreover, the solution error in expectation for the iteration sequence $X^{(k)}$ obeys
$$\mathbb{E}\,\|X^{(k)}-X_0\|_F^2 \le \hat{\rho}^{\,k}\,\|X^{(0)}-X_0\|_F^2,\qquad k=1,2,\ldots, \qquad (12)$$
where $\hat{\rho}=1-\dfrac{2\alpha-\alpha^2\|A\|_2^2}{\|A\|_F^2}\,\sigma_{\min}^4(A)$.
Proof. 
For $k=0,1,2,\ldots$, by (11) and $AX_0A=A$, we have
$$X^{(k+1)}-X_0 = X^{(k)} + \frac{\alpha}{\|A_{i,:}\|_2^2}A_{i,:}^*\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^* - X_0 = \big(X^{(k)}-X_0\big) - \frac{\alpha}{\|A_{i,:}\|_2^2}A_{i,:}^*A_{i,:}\big(X^{(k)}-X_0\big)AA^*,$$
and then
$$\|X^{(k+1)}-X_0\|_F^2 = \|X^{(k)}-X_0\|_F^2 + \frac{\alpha^2}{\|A_{i,:}\|_2^4}\big\|A_{i,:}^*A_{i,:}(X^{(k)}-X_0)AA^*\big\|_F^2 - \frac{2\alpha}{\|A_{i,:}\|_2^2}\Big\langle X^{(k)}-X_0,\ A_{i,:}^*A_{i,:}(X^{(k)}-X_0)AA^*\Big\rangle_F.$$
It follows from
$$\begin{aligned}
\frac{\alpha^2}{\|A_{i,:}\|_2^4}\big\|A_{i,:}^*A_{i,:}(X^{(k)}-X_0)AA^*\big\|_F^2
&= \frac{\alpha^2}{\|A_{i,:}\|_2^2}\big\|A_{i,:}(X^{(k)}-X_0)AA^*\big\|_2^2 \quad\big(\text{by } \operatorname{trace}(uu^*)=\|u\|_2^2 \text{ for any vector } u\big)\\
&\le \frac{\alpha^2\|A\|_2^2}{\|A_{i,:}\|_2^2}\big\|A_{i,:}(X^{(k)}-X_0)A\big\|_2^2 \quad\big(\text{by } \|u^*A^*\|_2=\|Au\|_2\le\|A\|_2\|u\|_2\big)
\end{aligned}$$
and
$$\begin{aligned}
\frac{2\alpha}{\|A_{i,:}\|_2^2}\Big\langle X^{(k)}-X_0,\ A_{i,:}^*A_{i,:}(X^{(k)}-X_0)AA^*\Big\rangle_F
&= \frac{2\alpha}{\|A_{i,:}\|_2^2}\operatorname{trace}\!\Big(A^*(X^{(k)}-X_0)^*A_{i,:}^*A_{i,:}(X^{(k)}-X_0)A\Big) \quad\big(\text{by } \operatorname{trace}(MN)=\operatorname{trace}(NM)\big)\\
&= \frac{2\alpha}{\|A_{i,:}\|_2^2}\big\|A_{i,:}(X^{(k)}-X_0)A\big\|_2^2
\end{aligned}$$
that
$$\|X^{(k+1)}-X_0\|_F^2 \le \|X^{(k)}-X_0\|_F^2 - \frac{2\alpha-\alpha^2\|A\|_2^2}{\|A_{i,:}\|_2^2}\big\|A_{i,:}(X^{(k)}-X_0)A\big\|_2^2.$$
By taking the conditional expectation, we have
$$\begin{aligned}
\mathbb{E}_k\|X^{(k+1)}-X_0\|_F^2 &\le \|X^{(k)}-X_0\|_F^2 - \mathbb{E}_k\!\left[\frac{2\alpha-\alpha^2\|A\|_2^2}{\|A_{i,:}\|_2^2}\big\|A_{i,:}(X^{(k)}-X_0)A\big\|_2^2\right]\\
&= \|X^{(k)}-X_0\|_F^2 - \sum_{i=1}^m\frac{\|A_{i,:}\|_2^2}{\|A\|_F^2}\cdot\frac{2\alpha-\alpha^2\|A\|_2^2}{\|A_{i,:}\|_2^2}\big\|A_{i,:}(X^{(k)}-X_0)A\big\|_2^2\\
&= \|X^{(k)}-X_0\|_F^2 - \frac{2\alpha-\alpha^2\|A\|_2^2}{\|A\|_F^2}\big\|A(X^{(k)}-X_0)A\big\|_F^2.
\end{aligned}$$
Noting that $X^{(0)}-X_0 = A^\dagger AX^{(0)}AA^\dagger - A^\dagger\in\mathcal{B}$ and $\frac{\alpha}{\|A_{i,:}\|_2^2}A_{i,:}^*A_{i,:}(X^{(k)}-X_0)AA^*\in\mathcal{B}$, we have $X^{(k+1)}-X_0\in\mathcal{B}$ by induction. Then, by Lemma 3 and $0<\alpha<2/\|A\|_2^2$, we can obtain
$$\mathbb{E}_k\|X^{(k+1)}-X_0\|_F^2 \le \|X^{(k)}-X_0\|_F^2 - \frac{2\alpha-\alpha^2\|A\|_2^2}{\|A\|_F^2}\,\sigma_{\min}^4(A)\,\|X^{(k)}-X_0\|_F^2 = \Big(1-\frac{2\alpha-\alpha^2\|A\|_2^2}{\|A\|_F^2}\,\sigma_{\min}^4(A)\Big)\|X^{(k)}-X_0\|_F^2. \qquad (13)$$
Finally, by (13) and induction on the iteration index k, we obtain the estimate (12). This completes the proof. □
Remark 2. 
If $X^{(0)}=0$ or $X^{(0)}=A^*$, then $X_0=A^\dagger$, which is the unique minimum norm least squares solution of (1). Theorems 1 and 2 imply that the sequence $X^{(k)}$ generated by the MII-PRBK or MII-RABK method with $X^{(0)}=0$ or $A^*$ converges linearly to $A^\dagger$.
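For instance, taking $X^{(0)}=0$ in the sketch of Algorithm 2 given earlier yields a pseudoinverse-free approximation of $A^\dagger$. The call below is purely illustrative: the function name mii_rabk_sketch refers to that sketch, and the matrix size and iteration count are arbitrary choices.

    A   = randn(5000, 50);                               % tall test matrix as in Example 1
    Xp  = mii_rabk_sketch(A, zeros(50, 5000), 2000);     % X^(0) = 0, so the limit X_0 = A^dagger
    res = norm(A * Xp * A - A, 'fro') / norm(A, 'fro');  % how well Xp acts as an inner inverse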
Remark 3. 
Noting that
$$1-\frac{\sigma_{\min}^4(A)}{\|A\|_F^2\,\|A\|_2^2} \le 1-\frac{2\alpha-\alpha^2\|A\|_2^2}{\|A\|_F^2}\,\sigma_{\min}^4(A),\qquad \alpha\in\Big(0,\ \frac{2}{\|A\|_2^2}\Big),$$
the convergence factor of the MII-PRBK method is smaller than that of the MII-RABK method. However, in practice it is very expensive to compute the pseudoinverse of a large-scale matrix.
Remark 4. 
Replacing $A^*$ in (11) with $A^\dagger$, we obtain the following relaxed projected randomized block Kaczmarz method (MII-PRBKr):
$$X^{(k+1)} = X^{(k)} + \frac{\alpha}{\|A_{i,:}\|_2^2}\,A_{i,:}^*\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^\dagger,\qquad k=0,1,2,\ldots, \qquad (14)$$
where $0<\alpha<2$ is the stepsize and $i$ is selected with probability $p_i=\|A_{i,:}\|_2^2/\|A\|_F^2$. By an approach similar to that used in the proof of Theorem 2, one can prove that the iterates $X^{(k)}$ satisfy
$$\mathbb{E}\,\|X^{(k)}-X_0\|_F^2 \le \tilde{\rho}^{\,k}\,\|X^{(0)}-X_0\|_F^2,$$
where $\tilde{\rho}=1-\dfrac{(2\alpha-\alpha^2)\,\sigma_{\min}^4(A)}{\|A\|_2^2\,\|A\|_F^2}$. Obviously, when $\alpha=1$, the MII-PRBKr iteration (14) reduces to the MII-PRBK iteration (4).

4. Numerical Experiments

In this section, we present some experimental results of the proposed algorithms for computing inner inverses and compare them with GBMC [7] on rectangular matrices. All experiments were carried out in MATLAB (version R2020a) on a personal computer with an Intel(R) Core(TM) i7-4712MQ CPU @ 2.30 GHz, 8 GB of RAM and Windows 10.
All computations start from a random matrix $X^{(0)}$ and are terminated once the relative error (RE) of the solution, defined by
$$RE=\frac{\|X^{(k)}-X_0\|_F}{\|X_0\|_F}$$
at the current iterate $X^{(k)}$, satisfies $RE<10^{-6}$ or the number of iterations exceeds the maximum $K=10^{6}$. We report the average number of iterations (denoted “IT”) and the average computing time in seconds (denoted “CPU”) over 10 repeated trials of the MII-PRBK and MII-RABK methods. For clarity, we restate the three methods as follows.
  • GBMC ([7]):  $X^{(k+1)} = X^{(k)} + \mu A^*\big(A-AX^{(k)}A\big)A^*$, where $0<\mu<2/\|A\|_2^4$.
  • MII-PRBK (Algorithm 1):  $X^{(k+1)} = X^{(k)} + \dfrac{A_{i,:}^*}{\|A_{i,:}\|_2^2}\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^\dagger$, where $p_i=\|A_{i,:}\|_2^2/\|A\|_F^2$.
  • MII-RABK (Algorithm 2):  $X^{(k+1)} = X^{(k)} + \dfrac{\alpha A_{i,:}^*}{\|A_{i,:}\|_2^2}\big(A_{i,:}-A_{i,:}X^{(k)}A\big)A^*$, where $0<\alpha<2/\|A\|_2^2$ and $p_i=\|A_{i,:}\|_2^2/\|A\|_F^2$.
We underscore once again the difference between the two algorithms; that is, Algorithm 1 needs the Moore–Penrose generalized inverse $A^\dagger$, whereas Algorithm 2 replaces $A^\dagger$ with $A^*$ (which is easier to implement) and adds a stepsize parameter $\alpha$.
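In outline, a single trial of this protocol could be organized as in the following sketch (our reconstruction, not the authors' script; the MII-RABK choice, the variable names and the test size are illustrative): $X_0$ is formed from the random start via the pseudoinverse, and the run stops once $RE<10^{-6}$ or $k$ reaches $K=10^{6}$.

    % One illustrative trial of the experimental protocol described above (MII-RABK).
    m = 1000; n = 50;
    A = randn(m, n);                               % Example 1 test matrix
    X0init = randn(n, m);                          % random starting matrix X^(0)
    Adag = pinv(A);
    Xstar = X0init + Adag - Adag * A * X0init * A * Adag;   % target inner inverse X_0
    alpha = 1.6 / normest(A)^2;                    % stepsize used for Example 1
    rownrm = sum(abs(A).^2, 2);
    cp = cumsum(rownrm / sum(rownrm));
    X = X0init; K = 1e6; tol = 1e-6; k = 0;
    RE = norm(X - Xstar, 'fro') / norm(Xstar, 'fro');
    while RE >= tol && k < K
        i = find(cp >= rand, 1, 'first');          % sample a row index
        r = A(i,:) - (A(i,:) * X) * A;             % residual row
        X = X + (alpha / rownrm(i)) * (A(i,:)' * (r * A'));   % MII-RABK step (11)
        k = k + 1;
        RE = norm(X - Xstar, 'fro') / norm(Xstar, 'fro');
    end
    fprintf('MII-RABK: IT = %d, RE = %.2e\n', k, RE);

In timed comparisons one would evaluate RE only every few iterations, since the error itself costs O(mn) to compute.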
Example 1. 
For given m, n, the entries of A are generated from a standard normal distribution by the MATLAB built-in function randn, i.e., A = randn(m, n), or A1 = randn(m/2, n/2), A = [A1, A1; A1, A1].
Firstly, we test the impact of α on the performance of the MII-RABK method. To do this, we vary λ from 0.1 to 1.9 in steps of 0.1, where $\alpha=\lambda/\|A\|_2^2$ satisfies $0<\alpha<2/\|A\|_2^2$ as required in Theorem 2. Figure 1 plots IT and CPU versus λ for the matrices of Table 1. From Figure 1, it can be seen that the number of iteration steps and the running time first decrease and then increase as λ grows, and nearly attain their minimum for $\lambda\in[1.5,1.7]$ for all matrices. Therefore, we set $\alpha=1.6/\|A\|_2^2$ in this example.
The results of the numerical experiments are listed in Table 1 and Table 2. From these tables, it can be seen that the GBMC method requires the fewest iteration steps, whereas the MII-PRBK method requires the least running time. Figure 2 plots the iteration counts and running times of the different methods for the matrices A = randn(m, 25) (top) and A = randn(25, n) (bottom). The leftmost points of the left plots (i.e., A = randn(100, 25) and A = randn(25, 100)) show that the MII-RABK method requires a very large number of iteration steps there, which is related to Kaczmarz's anomaly [21]: the MII-RABK method enjoys a faster rate of convergence when m is considerably smaller or larger than n, whereas the closer m and n are, the slower the convergence. As the number of rows or columns increases, the iteration counts and runtimes of the MII-PRBK and MII-RABK methods grow relatively slowly, while the runtime of the GBMC method grows dramatically (see the right plots in Figure 2). Therefore, our proposed methods are more suitable for large matrices. The curves of the relative error $\log_{10}(RE)$ versus the number of iterations (IT) and the running time (CPU) produced by the GBMC, MII-PRBK and MII-RABK methods for A1 = randn(500, 50), A = [A1, A1; A1, A1] are shown in Figure 3. From Figure 3, it can be seen that the relative error of the GBMC method decays fastest with respect to the iteration count, while the relative error of MII-PRBK decays fastest with respect to the running time. This is because each GBMC iteration requires matrix–matrix products costing about $4mn^2+4m^2n-m^2-n^2$ flops, whereas each MII-PRBK or MII-RABK iteration costs only about $8mn+n-2m$ flops.
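As a rough order-of-magnitude illustration of these per-iteration counts (our arithmetic, using leading terms only): for $m=10{,}000$ and $n=100$,
$$4mn^2+4m^2n-m^2-n^2 \approx 4\times10^{8}+4\times10^{10} \approx 4.04\times10^{10}\ \text{flops per GBMC iteration},\qquad 8mn+n-2m \approx 8\times10^{6}\ \text{flops per Kaczmarz iteration},$$
so one GBMC step costs roughly as much as five thousand MII-PRBK/MII-RABK steps, which explains why GBMC can lose in running time despite needing far fewer iterations.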
Example 2. 
For given m, n, d, rc, the sparse matrix A is generated by the MATLAB built-in function sprandn(m, n, d, rc), with approximately d·m·n normally distributed nonzero entries. The input parameters d and rc are the percentage of nonzeros and the reciprocal of the condition number, respectively. In this example, A = sprandn(m, n, d, rc) or A1 = sprandn(m/2, n/2, d, rc), A = [A1, A1; A1, A1].
In this example, we set $\alpha=1.9/\|A\|_2^2$. The numerical results for IT and CPU are listed in Table 3 and Table 4, and the curves of the relative error versus IT and CPU are drawn in Figure 4. From Table 3 and Table 4, we observe that the MII-PRBK method requires the fewest iteration steps and the least running time. The computational efficiency of the MII-PRBK and MII-RABK methods is at least 33 and 7 times higher, respectively, than that of the GBMC method. Moreover, the advantages of the proposed methods become more pronounced as the matrix size increases.
Example 3. 
Consider dense Toeplitz matrices. For given m, n, let c = randn(m, 1), r = randn(n, 1), A = toeplitz(c, r), or c = randn(m/2, 1), r = randn(n/2, 1), A1 = toeplitz(c, r), A = [A1, A1; A1, A1].
In this example, we set $\alpha=1.5/\|A\|_2^2$. The numerical results for IT and CPU are listed in Table 5, and the curves of the relative error versus IT and CPU are drawn in Figure 5. We observe the same phenomenon as in Example 1.
Example 4. 
Consider sparse Toeplitz matrices. For given m, n, d, let c = sprandn(m, 1, d), r = sprandn(n, 1, d), A = toeplitz(c, r), or c = sprandn(m/2, 1, d), r = sprandn(n/2, 1, d), A1 = toeplitz(c, r), A = [A1, A1; A1, A1].
In this example, we set $\alpha=1.5/\|A\|_2^2$. The numerical results for IT and CPU are listed in Table 6, and the curves of the relative error versus IT and CPU are drawn in Figure 6. Again, we can draw the same conclusion as in Example 1.

5. Conclusions

In this paper, we have proposed two randomized block Kaczmarz algorithms to compute inner inverses of an arbitrary rectangular matrix A, whether A is of full rank or rank deficient. Convergence results are provided to guarantee the convergence of the proposed methods theoretically, and numerical examples are given to illustrate their effectiveness. Since the proposed algorithms only require one row of A at each iteration and involve no matrix–matrix products, they are suitable for scenarios in which the matrix A is too large to fit in memory or matrix multiplication is prohibitively expensive. In addition, if the MII-RABK method is implemented in parallel, the running time can be reduced greatly. Therefore, in some practical applications, the MII-RABK method is more feasible when the Moore–Penrose generalized inverse is unknown or too expensive to compute. Due to limitations in hardware and in our research area, we did not perform numerical experiments on very large-scale practical problems; this will be one of our future works. In addition, we will extend the Kaczmarz method to other generalized inverses of rectangular matrices. Moreover, providing feasible principles for selecting the parameters and designing a randomized block Kaczmarz algorithm with an adaptive stepsize are also left for future work.

Author Contributions

Conceptualization, L.X. and W.L.; methodology, L.X. and W.B.; validation, L.X. and W.B.; writing—original draft preparation, L.X.; writing—review and editing, L.X. and W.L.; software, L.X. and Y.L.; visualization, L.X. and Z.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors are thankful to the referees for their constructive comments and valuable suggestions, which have greatly improved the original manuscript of this paper.

Conflicts of Interest

The authors declare no conflicts of interest and no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; Canadian Mathematical Society; Springer: New York, NY, USA, 2002. [Google Scholar]
  2. Cichocki, A.; Unbehauen, R. Neural Networks for Optimization and Signal Processing; John Wiley: West Sussex, UK, 1993. [Google Scholar]
  3. Guo, D.; Xu, F.; Yan, L. New pseudoinverse-based path-planning scheme with PID characteristic for redundant robot manipulators in the presence of noise. IEEE Trans. Control Syst. Technol. 2018, 26, 2008–2019. [Google Scholar] [CrossRef]
  4. Mihelj, M.; Munih, M.; Podobnik, J. Sensory fusion of magneto-inertial data sensory fusion based on Kinematic model with Jacobian weighted-left-pseudoinverse and Kalman-adaptive gains. IEEE Trans. Instrum. Meas. 2019, 68, 2610–2620. [Google Scholar] [CrossRef]
  5. Zhang, W.; Wu, J.; Yang, Y.; Akilan, T. Multimodel feature reinforcement framework using Moore-Penrose inverse for big data analysis. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 5008–5021. [Google Scholar] [CrossRef] [PubMed]
  6. Zhuang, H.; Lin, Z.; Toh, K.A. Blockwise recursive Moore-Penrose inverse for network learning. IEEE Trans. Syst. Man, Cybern. Syst. 2022, 52, 3237–3250. [Google Scholar] [CrossRef]
  7. Sheng, X.; Wang, T. An iterative method to compute Moore-Penrose inverse based on gradient maximal convergence rate. Filomat 2013, 27, 1269–1276. [Google Scholar] [CrossRef]
  8. Wang, J. Recurrent neural networks for computing pseudoinverses of rank-deficient matrices. SIAM J. Sci. Comput. 1997, 18, 1479–1493. [Google Scholar] [CrossRef]
  9. Wei, Y. Recurrent neural networks for computing weighted Moore-Penrose inverse. Appl. Math. Comput. 2000, 116, 279–287. [Google Scholar] [CrossRef]
  10. Wang, X.Z.; Ma, H.F.; Stanimirovic, P.S. Nonlinearly activated recurrent neural network for computing the Drazin inverse. Neural Process. Lett. 2017, 46, 195–217. [Google Scholar] [CrossRef]
  11. Zhang, N.M.; Zhang, T. Recurrent neural networks for computing the Moore-Penrose inverse with momentum learning. Chin. J. Electron. 2020, 28, 1039–1045. [Google Scholar] [CrossRef]
  12. Zhang, Y.; Guo, D.; Li, Z. Common nature of learning between backpropagation and Hopfield-type neural networks for generalized matrix inversion with simplified models. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 579–592. [Google Scholar] [CrossRef]
  13. Stanimirovic, P.; Petkovic, M.; Gerontitis, D. Gradient neural network with nonlinear activation for computing inner inverses and the Drazin inverse. Neural Process Lett. 2018, 48, 109–133. [Google Scholar] [CrossRef]
  14. Lv, X.; Xiao, L.; Tan, Z.; Yang, Z.; Yuan, J. Improved gradient neural networks for solving Moore-Penrose inverse of full-rank matrix. Neural Process. Lett. 2019, 50, 1993–2005. [Google Scholar] [CrossRef]
  15. Zhang, Y.; Zhang, J.; Weng, J. Dynamic Moore-Penrose inversion with unknown derivatives: Gradient neural network approach. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 10919–10929. [Google Scholar] [CrossRef] [PubMed]
  16. Necoara, I. Faster randomized block Kaczmarz algorithms. SIAM J. Matrix Anal. Appl. 2019, 40, 1425–1452. [Google Scholar]
  17. Xing, L.L.; Li, W.G.; Bao, W.D. Some results for Kaczmarz method to solve Sylvester matrix equations. J. Frankl. Inst. 2023, 360, 7457–7461. [Google Scholar]
  18. Du, K.; Ruan, C.C.; Sun, X.H. On the convergence of a randomized block coordinate descent algorithm for a matrix least squares problem. Appl. Math. Lett. 2022, 124, 107689. [Google Scholar] [CrossRef]
  19. Du, K.; Si, W.T.; Sun, X.H. Randomized extended average block Kaczmarz for solving least squares. SIAM J. Sci. Comput. 2020, 42, A3541–A3559. [Google Scholar] [CrossRef]
  20. Strohmer, T.; Vershynin, R. A randomized Kaczmarz algorithm with exponential convergence. J. Fourier Anal. Appl. 2009, 15, 262–278. [Google Scholar] [CrossRef]
  21. Dax, A. Kaczmarz’s anomaly: A surprising feature of Kaczmarz’s method. Linear Algebra Appl. 2023, 662, 136–162. [Google Scholar] [CrossRef]
Figure 1. IT (left) and CPU (right) for different values of α in the MII-RABK method with A = randn(m, n) from Example 1.
Figure 2. IT (left) and CPU (right) of the different methods with the matrices A = randn(m, 25) (top) and A = randn(25, n) (bottom) from Example 1.
Figure 3. The relative error of the different methods for A1 = randn(500, 50), A = [A1, A1; A1, A1].
Figure 4. The relative error of the different methods for A1 = sprandn(50, 500, 10%, 0.1), A = [A1, A1; A1, A1] from Example 2.
Figure 5. The relative error of the different methods for c = randn(50, 1), r = randn(1000, 1), A1 = toeplitz(c, r), A = [A1, A1; A1, A1].
Figure 6. The relative error of the different methods for c = sprandn(50, 1), r = sprandn(1000, 1), A1 = toeplitz(c, r), A = [A1, A1; A1, A1].
Table 1. The numerical results of IT and CPU for A = randn(m, n) from Example 1.

m        n             GBMC      MII-PRBK    MII-RABK
50       1000     IT     29         321.0       812.3
                  CPU    0.22         0.05        0.13
50       5000     IT     80         198.4       734.6
                  CPU    8.47         0.41        1.40
100      10,000   IT     55         407.5      1398.2
                  CPU   27.80         4.35       13.71
1000     50       IT     26         774.7      1092.1
                  CPU    0.20         0.15        0.16
5000     50       IT     60        1173.6      1341.4
                  CPU    6.36         2.16        2.50
10,000   100      IT     68        2276.4      2637.6
                  CPU   34.36        21.13       24.64
Table 2. The numerical results of IT and CPU for A1 = randn(m/2, n/2), A = [A1, A1; A1, A1] from Example 1.

m        n             GBMC      MII-PRBK    MII-RABK
50       1000     IT     29         158.3       410.5
                  CPU    0.19         0.02        0.05
50       5000     IT     61          92.3       353.4
                  CPU    0.55         0.17        0.17
100      10,000   IT     64         175.7       898.0
                  CPU   32.07         1.89        6.64
1000     50       IT     26         554.8       612.4
                  CPU    0.19         0.08        0.08
5000     50       IT     62         491.5       593.4
                  CPU    6.62         0.94        1.07
10,000   100      IT     56        1040.6      1266.2
                  CPU   27.99         9.79       11.23
Table 3. The numerical results of IT and CPU for A = sprandn(m, n, 10%, 0.1) from Example 2.

m        n             GBMC      MII-PRBK    MII-RABK
50       1000     IT   3285         605.4    17,184.3
                  CPU  17.00          0.09        3.56
100      1000     IT   3294        1617.6    39,508.0
                  CPU  22.56          0.39        9.18
50       5000     IT   2984        1337.5    21,342.2
                  CPU 321.43          3.12       41.16
1000     50       IT   4271         300.7      6822.5
                  CPU  21.47          0.05        1.12
1000     100      IT   4261         741.3    10,456.3
                  CPU  26.91          0.79        3.05
5000     50       IT   3079         240.7      8018.8
                  CPU 358.35          0.50       15.68
Table 4. The numerical results of IT and CPU for A1 = sprandn(m/2, n/2, 10%, 0.1), A = [A1, A1; A1, A1] from Example 2.

m        n             GBMC      MII-PRBK    MII-RABK
50       1000     IT   3038         240.2      5321.5
                  CPU  15.56          0.04        0.87
100      1000     IT   4010         582.5    15,165.4
                  CPU  26.65          0.12        3.54
50       5000     IT   2848         517.9      8767.0
                  CPU 334.17          1.17       20.19
1000     50       IT   3780          99.8      3846.0
                  CPU  18.96          0.02        0.59
1000     100      IT   3603         218.4      7623.5
                  CPU  28.41          0.10        2.55
5000     50       IT   2768         113.0      3930.4
                  CPU 339.72          0.24        7.51
Table 5. The numerical results of IT and CPU for c = randn(m, 1), r = randn(n, 1), A = toeplitz(c, r) from Example 3.

m        n             GBMC      MII-PRBK    MII-RABK
50       1000     IT     46         179.5       416.0
                  CPU    0.23         0.02        0.06
50       5000     IT     46         125.6       386.3
                  CPU    4.87         0.24        0.72
100      10,000   IT     45         276.8       808.3
                  CPU   22.34         2.74        8.07
1000     50       IT     45         289.0       719.2
                  CPU    0.19         0.04        0.09
5000     50       IT     48         237.6       540.7
                  CPU    5.12         0.44        1.11
10,000   100      IT     46         460.5      1286.4
                  CPU   22.98         4.36       12.06
Table 6. The numerical results of IT and CPU for c = sprandn(m, 1, 0.1), r = sprandn(n, 1, 0.1), A = toeplitz(c, r) from Example 4.

m        n             GBMC      MII-PRBK    MII-RABK
50       1000     IT     47         180.2       438.6
                  CPU    0.19         0.02        0.06
50       5000     IT     47         134.5       405.3
                  CPU    4.87         0.23        0.74
100      10,000   IT     45         275.4       805.8
                  CPU   21.46         2.73        7.84
1000     50       IT     46         350.5       581.8
                  CPU    0.19         0.05        0.07
5000     50       IT     49         287.4       580.5
                  CPU    5.16         0.55        0.98
10,000   100      IT     43         596.5      1301.4
                  CPU   21.46         6.59       12.11
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
