Article

Conditioning Theory for Generalized Inverse CA and Their Estimations

Mahvish Samar 1, Xinzhong Zhu 1,* and Abdul Shakoor 2
1 College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321004, China
2 Department of Mathematics, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2111; https://doi.org/10.3390/math11092111
Submission received: 24 March 2023 / Revised: 25 April 2023 / Accepted: 25 April 2023 / Published: 29 April 2023
(This article belongs to the Section Algebra, Geometry and Topology)

Abstract: The conditioning theory of the generalized inverse C_A^† is considered in this article. First, we introduce three kinds of condition numbers for the generalized inverse C_A^†, i.e., normwise, mixed and componentwise ones, and present their explicit expressions. Then, using the intermediate result, namely the derivative of C_A^†, we recover the explicit condition number expressions for the solution of the equality constrained indefinite least squares problem. Furthermore, using the augmented system, we investigate the componentwise perturbation analysis of the solution and residual of the equality constrained indefinite least squares problem. To estimate these condition numbers with high reliability, we choose the probabilistic spectral norm estimator to devise the first algorithm and the small-sample statistical condition estimation method for the other two algorithms. In the end, numerical examples illustrate the obtained results.

1. Introduction

Throughout this paper, R^{m×n} denotes the set of real m×n matrices. For a matrix A ∈ R^{m×n}, A^T is the transpose of A, rank(A) denotes the rank of A, ‖A‖_2 is the spectral norm of A, and ‖A‖_F is the Frobenius norm of A. For a vector a, ‖a‖_∞ is its ∞-norm and ‖a‖_2 its 2-norm. The notation |A| denotes the matrix whose components are the absolute values of the corresponding components of A. For any matrix A, the following four equations uniquely define the Moore–Penrose inverse A^† of A [1]:

A A^† A = A,   A^† A A^† = A^†,   (A A^†)^T = A A^†,   (A^† A)^T = A^† A.   (1)
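As a quick numerical illustration (a minimal MATLAB sketch of ours, not from the original paper), the four Penrose conditions can be verified directly with pinv:

```matlab
% Sketch: verify the four Penrose conditions (1) for a random matrix.
A  = randn(6, 4);                 % arbitrary rectangular test matrix
Ad = pinv(A);                     % Moore-Penrose inverse A^+
norm(A*Ad*A - A)                  % (i)   A A^+ A = A
norm(Ad*A*Ad - Ad)                % (ii)  A^+ A A^+ = A^+
norm((A*Ad)' - A*Ad)              % (iii) (A A^+)^T = A A^+
norm((Ad*A)' - Ad*A)              % (iv)  (A^+ A)^T = A^+ A
```

All four residual norms should be of the order of machine precision.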
The generalized inverse C_A^† is defined by

C_A^† = (I − (PQP)^† Q) C^†,   (2)

where Q = A^T J A with A ∈ R^{(p+q)×n} the weight matrix, P = I − C^† C is the orthogonal projection onto the null space of C (here C ∈ R^{s×n} may not have full rank), and J is a signature matrix defined by

J = [ I_p  0 ; 0  −I_q ],   p + q = m.
The generalized inverse C_A^† originated from the equality constrained indefinite least squares (EILS) problem, which is stated as follows [2,3,4,5]:

EILS:   min_{‖C x − h‖_2 = min} (g − A x)^T J (g − A x),   (3)

where g ∈ R^m and h ∈ R^s. The EILS problem has the unique solution

x = C_A^† h + (PQP)^† A^T J g   (4)

under the following condition:

rank(C) = s,   x^T Q x > 0 for all nonzero x ∈ null(C).   (5)
The above condition implies

p ≥ n − s,   rank( [A ; C] ) = n,

and (5) then ensures the existence and uniqueness of the generalized inverse C_A^† (see [2,6]). The generalized inverse C_A^† has significant applications in the study of EILS algorithms, the analysis of large-scale structures, error analysis, perturbation theory, and the solution of the EILS problem [2,3,4,5,7,8,9,10]. The EILS problem was first studied by Bojanczyk et al. [5]. We now review some related work on the perturbation analysis of this problem. The perturbation theory of the EILS problem was discussed by Wang [11] and extended by Shi and Liu [8] based on the hyperbolic MGS elimination method. Diao and Zhou [12] derived a linearized estimate of the backward error of this problem. Later, Li et al. [13] investigated the componentwise condition numbers for the EILS problem. Recently, Wang and Meng [14] studied the condition numbers and normwise perturbation analysis of the EILS problem.
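Before proceeding, here is a minimal MATLAB sketch (our own illustration, with hypothetical sizes) that forms C_A^† from (2) and checks the solution formula (4) through the first-order optimality conditions; for a random pair of this shape, condition (5) holds generically:

```matlab
% Sketch: form C_A^+ as in (2) and check the EILS solution formula (4).
p = 8; q = 2; n = 6; s = 3; m = p + q;
J = blkdiag(eye(p), -eye(q));           % signature matrix
A = randn(m, n); C = randn(s, n);       % (5) holds generically when p >> q
g = randn(m, 1); h = randn(s, 1);
Q = A'*J*A;                             % Q = A^T J A
P = eye(n) - pinv(C)*C;                 % orthogonal projector onto null(C)
T = pinv(P*Q*P);                        % (PQP)^+
CAd = (eye(n) - T*Q)*pinv(C);           % C_A^+ from (2)
x = CAd*h + T*A'*J*g;                   % solution formula (4)
norm(C*x - h)                           % feasibility: C x = h
norm(P*A'*J*(g - A*x))                  % stationarity of the residual on null(C)
```

Both residuals should be of the order of machine precision; if the second is not, the random pair failed condition (5) and a re-run is needed.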
Componentwise perturbation analysis has received significant attention in recent years; for references, see [15,16,17,18,19]. The motivation for studying it is that, if the perturbation in the input data is measured componentwise rather than by norm, the sensitivity of a function can be captured more accurately [15], which improves the assessment of the accuracy of the computed EILS solution. Componentwise perturbation analyses have been carried out for many problems, including the least squares problem [16] and the weighted least squares problem [17]. In this article, we continue this line of research for the EILS problem. With the intermediate result, we can also recover the componentwise perturbation bounds of the indefinite least squares problem.
The generalized inverse C_A^† reduces to the K-weighted pseudoinverse L_K^† when q = 0 and C has full row rank. This pseudoinverse was extended to the MK-weighted pseudoinverse L_MK^† by Wei and Zhang [6], who described its structure and uniqueness conditions. An algorithm for it was developed by Eldén [20]. Wei [21] investigated the expression of L_K^† based on the GSVD. A perturbation equation for L_K^† was given by Gulliksson et al. [22]. The condition numbers for the K-weighted pseudoinverse L_K^† and their statistical estimation were recently provided by Samar et al. [23].
The condition number is a well-known research topic in numerical analysis; it measures the worst-case sensitivity of a problem's solution to small perturbations in the input data (see [24,25,26] and the references therein). The normwise condition number [25] has the disadvantage of ignoring the scaling structure of both input and output data. To address this issue, mixed and componentwise condition numbers were introduced [26]. Mixed condition numbers use componentwise error analysis for the input data and normwise error analysis for the output data, whereas componentwise condition numbers use componentwise error analysis for both. In fact, due to rounding errors and data storage constraints, it is often more realistic to measure input errors componentwise rather than normwise. However, the condition numbers of the generalized inverse C_A^† have not been discussed so far. Motivated by this, we present the explicit expressions of the normwise, mixed and componentwise condition numbers for the generalized inverse C_A^†, as well as their statistical estimation, given their importance in EILS research.
The rest of this manuscript is organized as follows. Section 2 provides some preliminaries that will be helpful in the subsequent discussions. In Section 3, with the intermediate result, i.e., the derivative of C_A^†, we recover the explicit expressions of the condition numbers for the solution of the EILS problem. Section 4 presents the componentwise perturbation analysis for the EILS problem. In Section 5, we propose the first two algorithms for the normwise condition number, using the probabilistic spectral norm estimator [27] and the small-sample statistical condition estimation (SSCE) method [28], and construct a third algorithm for the mixed and componentwise condition numbers, also based on the SSCE method [28]. To check the efficiency of these algorithms, we demonstrate them through numerical experiments in Section 6.

2. Preliminaries

In this part, we introduce some definitions and important results, which will be used in the upcoming sections.
Firstly, we define the entrywise division between two vectors v = (v_1, …, v_p)^T ∈ R^p and w = (w_1, …, w_p)^T ∈ R^p by v ⊘ w = (η_1, …, η_p)^T with

η_i = v_i / w_i  if w_i ≠ 0,   η_i = v_i  if w_i = 0.

Following [1,26,29], the componentwise distance between v and w is defined by

d(v, w) = ‖(v − w) ⊘ w‖_∞ = max_{i=1,…,p} |v_i − w_i| / |w_i| = |v_{i*} − w_{i*}| / |w_{i*}|  if w_{i*} ≠ 0,  and  |v_{i*} − w_{i*}|  if w_{i*} = 0,

where i* is an index attaining the maximum. Note that when w_{i*} ≠ 0, d(v, w) gives the relative distance from v to w with respect to w, while it gives the absolute distance when w_{i*} = 0. We describe the distance between matrices V, W ∈ R^{m×n} by

d(V, W) = d(vec(V), vec(W)).

In order to define the mixed and componentwise condition numbers, we also need the sets B⁰(v, ε) = { u ∈ R^p : |u_i − v_i| ≤ ε |v_i|, i = 1, …, p } and B(v, ε) = { u ∈ R^p : ‖u − v‖_2 ≤ ε ‖v‖_2 } for a given ε > 0.
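As a small numerical illustration of the componentwise distance (our example), note how relative and absolute errors are mixed:

```matlab
% Sketch: componentwise distance d(v, w) with a zero entry in w.
v = [1.1; 0.3; -2.0]; w = [1.0; 0; -2.5];
eta = v - w; nz = (w ~= 0);
eta(nz) = eta(nz)./w(nz);     % relative where w_i ~= 0, absolute otherwise
d = norm(eta, inf)            % max(0.1, 0.3, 0.2) = 0.3
```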
Definition 1 ([29]). Let ℵ: R^p → R^q be a continuous mapping defined on an open set Dom(ℵ) ⊆ R^p, and let v ∈ Dom(ℵ), v ≠ 0, be such that ℵ(v) ≠ 0.
(i) The normwise condition number of ℵ at v is given by

n(ℵ, v) = lim_{ε→0} sup_{u ∈ B(v,ε), u ≠ v} ( ‖ℵ(u) − ℵ(v)‖_2 / ‖ℵ(v)‖_2 ) / ( ‖u − v‖_2 / ‖v‖_2 ).

(ii) The mixed condition number of ℵ at v is given by

m(ℵ, v) = lim_{ε→0} sup_{u ∈ B⁰(v,ε), u ≠ v} ( ‖ℵ(u) − ℵ(v)‖_∞ / ‖ℵ(v)‖_∞ ) · 1/d(u, v).

(iii) The componentwise condition number of ℵ at v is given by

c(ℵ, v) = lim_{ε→0} sup_{u ∈ B⁰(v,ε), u ≠ v} d(ℵ(u), ℵ(v)) / d(u, v).
When the map ℵ in Definition 1 is Fréchet differentiable, the following lemma given in [29] makes the computation of the condition numbers easier.
Lemma 1 ([29]). Under the assumptions of Definition 1, and supposing ℵ is Fréchet differentiable at v, we have

n(ℵ, v) = ‖dℵ(v)‖_2 ‖v‖_2 / ‖ℵ(v)‖_2,   m(ℵ, v) = ‖ |dℵ(v)| |v| ‖_∞ / ‖ℵ(v)‖_∞,   c(ℵ, v) = ‖ ( |dℵ(v)| |v| ) ⊘ |ℵ(v)| ‖_∞,

where dℵ(v) stands for the Fréchet derivative of ℵ at v.
To obtain the explicit expressions of the above condition numbers, we need some properties of the Kronecker product ⊗ [30]:

vec(Y Z X) = (X^T ⊗ Y) vec(Z),   (6)
vec(Y^T) = Π_{mn} vec(Y),   (7)
‖Y ⊗ X‖_2 = ‖Y‖_2 ‖X‖_2,   (8)

where Y ∈ R^{m×n}, the matrix Z has suitable dimensions, and Π_{mn} ∈ R^{mn×mn} is the vec-permutation matrix, which depends only on the dimensions m and n.
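These identities are easy to confirm numerically (a sketch of ours; the sparse commutation-matrix construction is one standard way to realize Π_{mn}):

```matlab
% Sketch: numerical check of (6)-(8) for random matrices.
m = 3; n = 4; k = 5;                      % local sizes for this check only
Y = randn(m, k); Z = randn(k, n); X = randn(n, n);
vec = @(M) M(:);
norm(vec(Y*Z*X) - kron(X', Y)*vec(Z))     % (6)
Pi = sparse(1:m*k, vec(reshape(1:m*k, m, k)'), 1, m*k, m*k);
norm(vec(Y') - Pi*vec(Y))                 % (7), Pi = vec-permutation matrix
abs(norm(kron(Y, X)) - norm(Y)*norm(X))   % (8)
```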
Now, we present the following two lemmas, which will be helpful for obtaining condition numbers and their upper bounds.
Lemma 2 ([31], p. 174, Theorem 5). Let S be an open subset of R^{n×q}, and let ℵ: S → R^{m×p} be a matrix function defined and k ≥ 1 times (continuously) differentiable on S. If rank(ℵ(X)) is constant on S, then ℵ^†: S → R^{p×m} is k times (continuously) differentiable on S, and

dℵ^† = −ℵ^† (dℵ) ℵ^† + ℵ^† ℵ^{†T} (dℵ)^T (I_m − ℵ ℵ^†) + (I_p − ℵ^† ℵ) (dℵ)^T ℵ^{†T} ℵ^†.   (9)
Lemma 3 ([1]). For any matrices E, F, G, H, U and V with dimensions making the expressions

[E ⊗ F + (G ⊗ H) Π] vec(U),   ( [E ⊗ F + (G ⊗ H) Π] vec(U) ) ⊘ vec(V),   F U E^T  and  H U^T G^T

well defined, we have

| [E ⊗ F + (G ⊗ H) Π] | vec(|U|) ≤ vec( |F| |U| |E|^T + |H| |U|^T |G|^T )

and

( | [E ⊗ F + (G ⊗ H) Π] | vec(|U|) ) ⊘ vec(|V|) ≤ vec( |F| |U| |E|^T + |H| |U|^T |G|^T ) ⊘ vec(|V|).

3. Condition Numbers

First, we define a mapping φ: R^{mn+sn} → R^{ns} by

φ(u) = vec(C_A^†).

Here, u = (vec(A)^T, vec(C)^T)^T, Δu = (vec(ΔA)^T, vec(ΔC)^T)^T, and for a matrix X = (x_{ij}) we write ‖X‖_F = ‖vec(X)‖_2 and ‖X‖_max = ‖vec(X)‖_∞ = max_{i,j} |x_{ij}|.
Then, using Definition 1, we present the definitions of the normwise, mixed, and componentwise condition numbers for the generalized inverse C_A^† as given in [32]:

n(A, C) = n(φ, u) := lim_{ε→0} sup_{‖[ΔA, ΔC]‖_F ≤ ε ‖[A, C]‖_F} ( ‖(C+ΔC)^†_{A+ΔA} − C_A^†‖_F / ‖C_A^†‖_F ) / ( ‖[ΔA, ΔC]‖_F / ‖[A, C]‖_F ),   (11)

m(A, C) = m(φ, u) := lim_{ε→0} sup_{‖ΔA ⊘ A‖_max ≤ ε, ‖ΔC ⊘ C‖_max ≤ ε} ( ‖(C+ΔC)^†_{A+ΔA} − C_A^†‖_max / ‖C_A^†‖_max ) · 1/d(u+Δu, u),   (12)

c(A, C) = c(φ, u) := lim_{ε→0} sup_{‖ΔA ⊘ A‖_max ≤ ε, ‖ΔC ⊘ C‖_max ≤ ε} ‖( (C+ΔC)^†_{A+ΔA} − C_A^† ) ⊘ C_A^†‖_max · 1/d(u+Δu, u).   (13)

With the help of the vec operator and the Frobenius, spectral and max norms, these definitions can be rewritten as

n(A, C) = lim_{ε→0} sup_{‖(vec(ΔA); vec(ΔC))‖_2 ≤ ε ‖(vec(A); vec(C))‖_2} ( ‖vec((C+ΔC)^†_{A+ΔA} − C_A^†)‖_2 / ‖vec(C_A^†)‖_2 ) / ( ‖(vec(ΔA); vec(ΔC))‖_2 / ‖(vec(A); vec(C))‖_2 ),

m(A, C) = lim_{ε→0} sup_{‖vec(ΔA) ⊘ vec(A)‖_∞ ≤ ε, ‖vec(ΔC) ⊘ vec(C)‖_∞ ≤ ε} ( ‖vec((C+ΔC)^†_{A+ΔA} − C_A^†)‖_∞ / ‖vec(C_A^†)‖_∞ ) · 1/d(u+Δu, u),

c(A, C) = lim_{ε→0} sup_{‖vec(ΔA) ⊘ vec(A)‖_∞ ≤ ε, ‖vec(ΔC) ⊘ vec(C)‖_∞ ≤ ε} ‖vec((C+ΔC)^†_{A+ΔA} − C_A^†) ⊘ vec(C_A^†)‖_∞ · 1/d(u+Δu, u).
In the following, we find the expression of the Fréchet derivative of φ at u.
Lemma 4. Let the mapping φ be continuous. Then φ is Fréchet differentiable at u with derivative

φ′(u) = [W(A), W(C)],

where

W(A) = −[ (C_A^{†T} ⊗ (PQP)^† A^T J) + ((J A C_A^†)^T ⊗ (PQP)^†) Π_{mn} ],
W(C) = −[ (C_A^{†T} ⊗ C_A^†) − ((I − C C^†)^T ⊗ C_A^† C^{†T}) Π_{sn} − ((C^{†T} Q C_A^†)^T ⊗ (PQP)^†) Π_{sn} ].   (18)
Proof. Differentiating both sides of (2), we obtain

d(C_A^†) = d[(I − (PQP)^† Q) C^†].   (19)

From ([3], Theorem 2.2), we obtain

(PQP)^† = P (PQP)^† = (PQP)^† P = P (PQP)^† P,   (20)
P (I − (PQP)^† Q) P = 0,   (PQP)^† Q P = P.   (21)

Thus, substituting (20) into (19) and differentiating both sides of the equation, we can deduce

d(C_A^†) = d[(I − P (PQP)^† Q) C^†] = dC^† − d(P (PQP)^† Q C^†)
 = (I − P (PQP)^† Q) dC^† − dP (PQP)^† Q C^† − P d((PQP)^†) Q C^† − P (PQP)^† dQ C^†.

Further, using (9), we have

d(C_A^†) = (I − P (PQP)^† Q)[ −C^† dC C^† + C^† C^{†T} dC^T (I − C C^†) + (I − C^† C) dC^T C^{†T} C^† ]
  − d(I − C^† C)(PQP)^† Q C^†
  + P[ −(PQP)^† d(PQP)(PQP)^† + (PQP)^†(PQP)^{†T} d(PQP)^T (I − (PQP)(PQP)^†)
  + (I − (PQP)^†(PQP)) d(PQP)^T (PQP)^{†T}(PQP)^† ] Q C^†
  − P (PQP)^† dQ C^†.

Noting (20), (2), and (I − P (PQP)^† Q)(I − C^† C) = P (I − (PQP)^† Q P), the previous equation may be expressed as

d(C_A^†) = −C_A^† dC C^† + C_A^† C^{†T} dC^T (I − C C^†) + P (I − (PQP)^† Q P) dC^T C^{†T} C^†
  + dC^† C (PQP)^† Q C^† + C^† dC (PQP)^† Q C^†
  − (PQP)^† d(PQP)(PQP)^† Q C^†
  + (PQP)^†(PQP)^{†T} d(PQP)^T (I − (PQP)(PQP)^†) Q C^†
  + P (I − (PQP)^† Q P) d(PQP)^T (PQP)^{†T}(PQP)^† Q C^†
  − P (PQP)^† dQ C^†.

Further, by the fact P Q = (Q P)^T, (21), and

C P (PQP)^† = C (PQP)^† = 0,   (22)

the above equation may be simplified as follows:

d(C_A^†) = −C_A^† dC C^† + C_A^† C^{†T} dC^T (I − C C^†) + C^† dC (PQP)^† Q C^† − (PQP)^† dQ C^†
  + (PQP)^† dQ P (PQP)^† Q C^† + (PQP)^† Q dP (PQP)^† Q C^†
  − (PQP)^†(PQP)^{†T} dQ^T P^T (I − (PQP)(PQP)^†) Q C^†
  − (PQP)^†(PQP)^{†T} Q^T dP^T (I − (PQP)(PQP)^†) Q C^†
 = −C_A^† dC C^† + C_A^† C^{†T} dC^T (I − C C^†) + C^† dC (PQP)^† Q C^†
  − (PQP)^† dA^T J A C^† − (PQP)^† A^T J dA C^†
  + (PQP)^† P dA^T J A (PQP)^† Q C^† + (PQP)^† P A^T J dA (PQP)^† Q C^†
  − (PQP)^†(PQP)^{†T} dQ^T P (I − Q P (PQP)^†) Q C^†
  + (PQP)^† Q dP (PQP)^† Q C^†
  − (PQP)^†(PQP)^{†T} Q^T dP^T (I − Q P (PQP)^†) Q C^†.   (23)

Considering P Q = (Q P)^T, we obtain

P (I − Q P (PQP)^†) = 0,   i.e.,   P Q P (PQP)^† = P.

Substituting this fact into (23) implies

d(C_A^†) = −C_A^† dC C^† + C_A^† C^{†T} dC^T (I − C C^†) + C^† dC (PQP)^† Q C^†
  − (PQP)^† A^T J dA (I − P (PQP)^† Q) C^† − (PQP)^† Q C^† dC (PQP)^† Q C^†
  − (PQP)^† dA^T J A (I − (PQP)^† Q) C^† + (PQP)^† dC^T C^{†T} Q (I − (PQP)^† Q) C^†.   (24)

We can rewrite the above equation by using (2) and (20) as

d(C_A^†) = −C_A^† dC C_A^† + C_A^† C^{†T} dC^T (I − C C^†) + (PQP)^† dC^T C^{†T} Q C_A^† − (PQP)^† A^T J dA C_A^† − (PQP)^† dA^T J A C_A^†.   (25)

By applying the "vec" operator to (25), and using (6) and (7), we obtain

vec(d(C_A^†)) = −(C_A^{†T} ⊗ (PQP)^† A^T J) vec(dA) − ((J A C_A^†)^T ⊗ (PQP)^†) vec(dA^T)
  − (C_A^{†T} ⊗ C_A^†) vec(dC) + ((I − C C^†)^T ⊗ C_A^† C^{†T}) vec(dC^T) + ((C^{†T} Q C_A^†)^T ⊗ (PQP)^†) vec(dC^T)   (by (6))
 = −[ (C_A^{†T} ⊗ (PQP)^† A^T J) + ((J A C_A^†)^T ⊗ (PQP)^†) Π_{mn} ] vec(dA)
  − [ (C_A^{†T} ⊗ C_A^†) − ((I − C C^†)^T ⊗ C_A^† C^{†T}) Π_{sn} − ((C^{†T} Q C_A^†)^T ⊗ (PQP)^†) Π_{sn} ] vec(dC)   (by (7))
 = [ W(A), W(C) ] ( vec(dA) ; vec(dC) ).

That is,

d(vec(C_A^†)) = [W(A), W(C)] du.

Thus, we have obtained the required result by using the definition of the Fréchet derivative.   □
Remark 1.
Setting C = L, A = K, q = 0, and taking C of full row rank, we have C_A^† = L_K^† and

W̃(A) = −[ (C_A^{†T} ⊗ (A P)^†) + ((A C_A^†)^T ⊗ (A P)^† (A P)^{†T}) Π_{mn} ],
W̃(C) = −[ (C_A^{†T} ⊗ C_A^†) − (((A C^†)^T A C_A^†)^T ⊗ (A P)^† (A P)^{†T}) Π_{sn} ],

which is just the result of ([23], Lemma 3.1); with it we can recover the condition numbers for the K-weighted pseudoinverse L_K^† [23].
Combining Lemmas 1 and 4, we directly obtain the following condition number expressions for C_A^†.
Theorem 1.
The normwise, mixed and componentwise condition numbers for C_A^† defined in (11)–(13) are

n(A, C) = ‖[W(A), W(C)]‖_2 ‖(vec(A); vec(C))‖_2 / ‖vec(C_A^†)‖_2,   (26)

m(A, C) = ‖ |W(A)| vec(|A|) + |W(C)| vec(|C|) ‖_∞ / ‖vec(C_A^†)‖_∞,   (27)

c(A, C) = ‖ ( |W(A)| vec(|A|) + |W(C)| vec(|C|) ) ⊘ |vec(C_A^†)| ‖_∞.   (28)
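The following MATLAB sketch (our own; it relies on our reading of the signs in Lemma 4 and therefore ends with a finite-difference sanity check) assembles [W(A), W(C)] explicitly and evaluates (26)–(28):

```matlab
% Sketch: evaluate the condition numbers (26)-(28) via Lemma 4 and Theorem 1.
p = 8; q = 2; n = 6; s = 3; m = p + q;
J = blkdiag(eye(p), -eye(q)); A = randn(m, n); C = randn(s, n);
Q = A'*J*A; Cd = pinv(C); P = eye(n) - Cd*C; T = pinv(P*Q*P);
CAd = (eye(n) - T*Q)*Cd;                       % C_A^+ from (2)
vec  = @(M) M(:);
comm = @(r, c) sparse(1:r*c, vec(reshape(1:r*c, r, c)'), 1, r*c, r*c);
WA = -(kron(CAd', T*A'*J) + kron((J*A*CAd)', T)*comm(m, n));
WC = -kron(CAd', CAd) ...
     + kron((eye(s) - C*Cd)', CAd*Cd')*comm(s, n) ...   % vanishes if C has full row rank
     + kron((Cd'*Q*CAd)', T)*comm(s, n);
W  = [WA, WC];
ncond = norm(full(W))*norm([vec(A); vec(C)])/norm(vec(CAd));                       % (26)
mcond = norm(abs(WA)*abs(vec(A)) + abs(WC)*abs(vec(C)), inf)/norm(vec(CAd), inf);  % (27)
ccond = norm((abs(WA)*abs(vec(A)) + abs(WC)*abs(vec(C)))./abs(vec(CAd)), inf);     % (28)
% Finite-difference check of the derivative in a random direction:
dA = 1e-6*randn(m, n); dC = 1e-6*randn(s, n);
Ap = A + dA; Cp = C + dC; Qp = Ap'*J*Ap; Pp = eye(n) - pinv(Cp)*Cp;
CAdp = (eye(n) - pinv(Pp*Qp*Pp)*Qp)*pinv(Cp);
norm(vec(CAdp - CAd) - W*[vec(dA); vec(dC)])   % should be O(1e-12)
```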
Next, we provide more easily computable upper bounds, which reduce the cost of evaluating the above condition numbers. The tightness of these upper bounds will be examined in the numerical experiments of Section 6.
Corollary 1.
The normwise, mixed and componentwise condition numbers of C_A^† are bounded above as follows:

n(A, C) ≤ n_upper(A, C) = [ ‖C_A^†‖_2 ‖(PQP)^† A^T J‖_2 + ‖J A C_A^†‖_2 ‖(PQP)^†‖_2 + ‖C_A^†‖_2^2 + ‖I − C C^†‖_2 ‖C_A^† C^{†T}‖_2 + ‖C^{†T} Q C_A^†‖_2 ‖(PQP)^†‖_2 ] ‖[A, C]‖_F / ‖C_A^†‖_F,

m(A, C) ≤ m_upper(A, C) = ‖ |(PQP)^† A^T J| |A| |C_A^†| + |(PQP)^†| |A^T| |J A C_A^†| + |C_A^†| |C| |C_A^†| + |C_A^† C^{†T}| |C^T| |I − C C^†| + |(PQP)^†| |C^T| |C^{†T} Q C_A^†| ‖_max / ‖C_A^†‖_max,

c(A, C) ≤ c_upper(A, C) = ‖ [ |(PQP)^† A^T J| |A| |C_A^†| + |(PQP)^†| |A^T| |J A C_A^†| + |C_A^†| |C| |C_A^†| + |C_A^† C^{†T}| |C^T| |I − C C^†| + |(PQP)^†| |C^T| |C^{†T} Q C_A^†| ] ⊘ |C_A^†| ‖_max.
Proof. For any two matrices X and Y, it is well known that ‖[X, Y]‖_2 ≤ ‖X‖_2 + ‖Y‖_2. With the help of Theorem 1 and (8), we obtain

n(A, C) ≤ [ ‖(C_A^{†T} ⊗ (PQP)^† A^T J) + ((J A C_A^†)^T ⊗ (PQP)^†) Π_{mn}‖_2 + ‖(C_A^{†T} ⊗ C_A^†) − ((I − C C^†)^T ⊗ C_A^† C^{†T}) Π_{sn} − ((C^{†T} Q C_A^†)^T ⊗ (PQP)^†) Π_{sn}‖_2 ] ‖[A, C]‖_F / ‖C_A^†‖_F
 ≤ [ ‖C_A^{†T} ⊗ (PQP)^† A^T J‖_2 + ‖(J A C_A^†)^T ⊗ (PQP)^†‖_2 + ‖C_A^{†T} ⊗ C_A^†‖_2 + ‖(I − C C^†)^T ⊗ C_A^† C^{†T}‖_2 + ‖(C^{†T} Q C_A^†)^T ⊗ (PQP)^†‖_2 ] ‖[A, C]‖_F / ‖C_A^†‖_F
 = [ ‖C_A^†‖_2 ‖(PQP)^† A^T J‖_2 + ‖J A C_A^†‖_2 ‖(PQP)^†‖_2 + ‖C_A^†‖_2^2 + ‖I − C C^†‖_2 ‖C_A^† C^{†T}‖_2 + ‖C^{†T} Q C_A^†‖_2 ‖(PQP)^†‖_2 ] ‖[A, C]‖_F / ‖C_A^†‖_F.

Secondly, by using Lemma 3 and Theorem 1, we obtain

m(A, C) = ‖ |W(A)| vec(|A|) + |W(C)| vec(|C|) ‖_∞ / ‖vec(C_A^†)‖_∞
 ≤ ‖ |(PQP)^† A^T J| |A| |C_A^†| + |(PQP)^†| |A^T| |J A C_A^†| + |C_A^†| |C| |C_A^†| + |C_A^† C^{†T}| |C^T| |I − C C^†| + |(PQP)^†| |C^T| |C^{†T} Q C_A^†| ‖_max / ‖C_A^†‖_max,

and, in the same way, performing the entrywise division by |C_A^†| before taking the max norm yields the bound for c(A, C).   □
Remark 2.
Consider the GHQR factorization [3] of A and C in (2) and (5):

H^T A Q = [ L_11  0 ; L_21  L_22 ],   U^T C Q = [ K_11  0 ; 0  0 ],

where U ∈ R^{s×s} and Q ∈ R^{n×n} are orthogonal, H ∈ R^{(p+q)×(p+q)} is J-orthogonal (i.e., H J H^T = J), and L_22 and K_11 are lower triangular and non-singular. Writing U = (U_1, U_2) and H = (H_1, H_2), where U_1 and H_1 are, respectively, the submatrices of U and H obtained by taking the first r columns (r = rank(C)), one obtains block expressions such as

C_A^† = Q [ I ; −L_22^{−1} L_21 ] K_11^{−1} U_1^T,   (PQP)^† A^T J = Q [ 0 ; L_22^{−1} ] H_2^T,   (PQP)^† = Q [ 0  0 ; 0  (L_22^T L_22)^{−1} ] Q^T,

together with analogous block forms for C_A^† C^{†T}, C^{†T} Q C_A^†, J A C_A^† and C C^†. Putting all these terms into (18) yields block forms W_1(A) and W_1(C) of the derivative that can be evaluated directly from the GHQR factors, without forming any pseudoinverse explicitly.
Remark 3.
We can obtain dx from the expression for d(C_A^†), where x in (4) is the solution of the EILS problem (3). By differentiating (4), we obtain

dx = d( C_A^† h + (PQP)^† A^T J g ).

Thus, using (20), we obtain

dx = d( C_A^† h + P (PQP)^† A^T J g )
 = d(C_A^†) h + C_A^† dh + dP (PQP)^† A^T J g + P d((PQP)^†) A^T J g + P (PQP)^† dA^T J g + P (PQP)^† A^T J dg.

Substituting (25) into the above equation and using (9), we have

dx = [ −C_A^† dC C_A^† + C_A^† C^{†T} dC^T (I − C C^†) + (PQP)^† dC^T C^{†T} Q C_A^† − (PQP)^† A^T J dA C_A^† − (PQP)^† dA^T J A C_A^† ] h
  + d(I − C^† C)(PQP)^† A^T J g
  + P[ −(PQP)^† d(PQP)(PQP)^† + (PQP)^†(PQP)^{†T} d(PQP)^T (I − (PQP)(PQP)^†) + (I − (PQP)^†(PQP)) d(PQP)^T (PQP)^{†T}(PQP)^† ] A^T J g
  + P (PQP)^† dA^T J g + P (PQP)^† A^T J dg + C_A^† dh,

which together with (20)–(22) gives

dx = −C_A^† dC C_A^† h + C_A^† C^{†T} dC^T (I − C C^†) h + (PQP)^† dC^T C^{†T} Q C_A^† h − (PQP)^† A^T J dA C_A^† h − (PQP)^† dA^T J A C_A^† h
  − C^† dC (PQP)^† A^T J g − (PQP)^† dQ P (PQP)^† A^T J g − (PQP)^† Q dP (PQP)^† A^T J g
  + (PQP)^† (PQP) (PQP)^† dP^T (I − Q P (PQP)^†) A^T J g + (PQP)^†(PQP)^{†T} dQ^T P (I − Q P (PQP)^†) A^T J g
  + P (PQP)^† dA^T J g + P (PQP)^† A^T J dg + C_A^† dh.

Noting P Q P (PQP)^† = P, the above equation can be rewritten as

dx = −C_A^† dC C_A^† h + C_A^† C^{†T} dC^T (I − C C^†) h + (PQP)^† dC^T C^{†T} A^T J A C_A^† h − (PQP)^† A^T J dA C_A^† h − (PQP)^† dA^T J A C_A^† h
  − C^† dC (PQP)^† A^T J g + (PQP)^† dA^T J (g − A (PQP)^† A^T J g) − (PQP)^† A^T J dA (PQP)^† A^T J g
  + (PQP)^† Q C^† dC (PQP)^† A^T J g − (PQP)^† dC^T C^{†T} A^T J (g − A (PQP)^† A^T J g)
  + (PQP)^† A^T J dg + C_A^† dh.

Further, by (20) and (4), we have

dx = −C_A^† dC ( C_A^† h + (PQP)^† A^T J g ) + C_A^† C^{†T} dC^T (I − C C^†) h − (PQP)^† A^T J dA ( C_A^† h + (PQP)^† A^T J g )
  − (PQP)^† dC^T C^{†T} A^T J ( g − A ( C_A^† h + (PQP)^† A^T J g ) ) + (PQP)^† dA^T J ( g − A ( C_A^† h + (PQP)^† A^T J g ) )
  + (PQP)^† A^T J dg + C_A^† dh   (by (20))
 = −C_A^† dC x + C_A^† C^{†T} dC^T β − (PQP)^† A^T J dA x − (PQP)^† dC^T C^{†T} A^T s + (PQP)^† dA^T s + (PQP)^† A^T J dg + C_A^† dh,   (30)   (by (4))

where s = J r = J (g − A x) and β = (I − C C^†) h. By applying the "vec" operator to (30), and using (6) and (7), we obtain

dx = −(x^T ⊗ (PQP)^† A^T J) vec(dA) + (s^T ⊗ (PQP)^†) vec(dA^T) − (x^T ⊗ C_A^†) vec(dC) + (β^T ⊗ C_A^† C^{†T}) vec(dC^T) − ((C^{†T} A^T s)^T ⊗ (PQP)^†) vec(dC^T) + (PQP)^† A^T J dg + C_A^† dh   (by (6))
 = [ −(x^T ⊗ (PQP)^† A^T J) + (s^T ⊗ (PQP)^†) Π_{mn} ] vec(dA) − [ (x^T ⊗ C_A^†) − (β^T ⊗ C_A^† C^{†T}) Π_{sn} + ((C^{†T} A^T s)^T ⊗ (PQP)^†) Π_{sn} ] vec(dC) + (PQP)^† A^T J dg + C_A^† dh   (by (7))
 = [ −(x^T ⊗ (PQP)^† A^T J) + (s^T ⊗ (PQP)^†) Π_{mn},  −(x^T ⊗ C_A^†) + (β^T ⊗ C_A^† C^{†T}) Π_{sn} − ((C^{†T} A^T s)^T ⊗ (PQP)^†) Π_{sn},  (PQP)^† A^T J,  C_A^† ] ( vec(dA) ; vec(dC) ; dg ; dh ).

From the above result, we can recover the condition numbers of the EILS problem provided in [3,13,14]. Further, observing that r = g − A( C_A^† h + (PQP)^† A^T J g ), the same procedure yields dr and the condition numbers for the residual of the EILS problem.

4. Componentwise Perturbation Analysis

In this section, we derive a componentwise perturbation analysis of the augmented system for the EILS problem.
Let the perturbations dA ∈ R^{(p+q)×n}, dC ∈ R^{s×n}, dg ∈ R^m and dh ∈ R^s satisfy |dA| ≤ ε|A|, |dC| ≤ ε|C|, |dg| ≤ ε|g| and |dh| ≤ ε|h| for a small ε, and let s = J r. Suppose that the perturbed augmented system is

[ 0  0  C+dC ; 0  J  A+dA ; (C+dC)^T  (A+dA)^T  0 ] [ λ+dλ ; s+ds ; x+dx ] = [ h+dh ; g+dg ; 0 ].
Denoting

S = [ 0  0  C ; 0  J  A ; C^T  A^T  0 ],   u = [ h ; g ; 0 ],   v = [ λ ; s ; x ],

and the perturbations

dS = [ 0  0  dC ; 0  0  dA ; dC^T  dA^T  0 ],   du = [ dh ; dg ; 0 ],   dv = [ dλ ; ds ; dx ],

when A has full column rank and C has full row rank, S is invertible, and it can be verified that

S^{−1} = [ C_A^{†T} Q C_A^†   −(J A C_A^†)^T   C_A^{†T} ;
           −J A C_A^†   J − J A (PQP)^† A^T J   J A (PQP)^† ;
           C_A^†   (PQP)^† A^T J   −(PQP)^† ].
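As a sanity check (a sketch of ours, under our sign reading of the block formula), this expression can be verified numerically:

```matlab
% Sketch: numerical check of the block expression for S^{-1}.
p = 8; q = 2; n = 6; s = 3; m = p + q;
J = blkdiag(eye(p), -eye(q)); A = randn(m, n); C = randn(s, n);
Q = A'*J*A; P = eye(n) - pinv(C)*C; T = pinv(P*Q*P);
CAd = (eye(n) - T*Q)*pinv(C);
S = [zeros(s, s+m), C; zeros(m, s), J, A; C', A', zeros(n)];
Sinv = [ CAd'*Q*CAd, -(J*A*CAd)', CAd';
         -J*A*CAd,   J - J*A*T*A'*J, J*A*T;
         CAd,        T*A'*J,        -T ];
norm(S*Sinv - eye(s+m+n))        % of the order of machine precision
```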
If the spectral radius satisfies

ρ( |S^{−1}| |dS| ) < 1,   (31)

then I_{m+n+s} + S^{−1} dS is invertible. Clearly, the condition

ε < ρ^{−1}( N ),   where
N = [ |C_A^{†T}| |C|^T   |C_A^{†T}| |A|^T   |C_A^{†T} Q C_A^†| |C| + |(J A C_A^†)^T| |A| ;
      |J A (PQP)^†| |C|^T   |J A (PQP)^†| |A|^T   |J A C_A^†| |C| + |J − J A (PQP)^† A^T J| |A| ;
      |(PQP)^†| |C|^T   |(PQP)^†| |A|^T   |C_A^†| |C| + |(PQP)^† A^T J| |A| ],   (32)

implies (31).
implies (31). The following results [24] are important for Theorem 2.
Lemma 5.
Consider the perturbed counterpart of a linear system S v = u:

(S + dS)(v + dv) = u + du,

where v + dv is the solution to the perturbed system. When the perturbations dS and du are sufficiently small that S + dS is invertible, the perturbation dv in the solution v satisfies

dv = ( I + S^{−1} dS )^{−1} S^{−1} ( du − dS v ),

which implies

|dv| ≤ | ( I + S^{−1} dS )^{−1} | |S^{−1}| ( |du| + |dS| |v| ).

Furthermore, when the spectral radius ρ( |S^{−1}| |dS| ) < 1, we have

|dv| ≤ ( I − |S^{−1}| |dS| )^{−1} |S^{−1}| ( |du| + |dS| |v| ) = ( I + O(|S^{−1}| |dS|) ) |S^{−1}| ( |du| + |dS| |v| ).   (33)
Now, we have the following bounds for the perturbations in the equality constrained indefinite least squares solution and residual.
Theorem 2.
Under the above assumptions, for any ε > 0 satisfying condition (32), when the componentwise perturbations satisfy |dA| ≤ ε|A|, |dC| ≤ ε|C|, |dg| ≤ ε|g| and |dh| ≤ ε|h|, the error in the solution is bounded by

|dx| ≤ ε ( |C_A^†| (|h| + |C||x|) + |(PQP)^† A^T J| (|g| + |A||x|) + |(PQP)^†| (|C|^T |λ| + |A|^T |r|) ) + O(ε²)   (34)

and the error in the residual is bounded by

|dr| ≤ ε ( |J A C_A^†| (|h| + |C||x|) + |J − J A (PQP)^† A^T J| (|g| + |A||x|) + |J A (PQP)^†| (|C|^T |λ| + |A|^T |r|) ) + O(ε²).   (35)
Proof. Since condition (32) implies (31), applying (33) in Lemma 5, we obtain

( |dλ| ; |ds| ; |dx| ) ≤ ( I + O(|S^{−1}| |dS|) ) |S^{−1}| ( |dh| + |dC||x| ; |dg| + |dA||x| ; |dC|^T |λ| + |dA|^T |r| ).

Finally, using the conditions |dA| ≤ ε|A|, |dC| ≤ ε|C|, |dg| ≤ ε|g| and |dh| ≤ ε|h|, the explicit form of S^{−1}, and the fact that |ds| = |dr| (since s = J r with J a signature matrix), the upper bounds (34) and (35) can be obtained.   □
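The first-order bound (34) can be probed numerically as follows (a sketch of ours; the multiplier λ is recovered from the third block row of the augmented system):

```matlab
% Sketch: empirical check of the componentwise bound (34).
p = 8; q = 2; n = 6; s = 3; m = p + q; eps0 = 1e-8;
J = blkdiag(eye(p), -eye(q)); A = randn(m, n); C = randn(s, n);
g = randn(m, 1); h = randn(s, 1);
Q = A'*J*A; P = eye(n) - pinv(C)*C; T = pinv(P*Q*P);
CAd = (eye(n) - T*Q)*pinv(C);
x = CAd*h + T*A'*J*g; r = g - A*x;
lam = -pinv(C')*(A'*J*r);                      % from C^T lam + A^T J r = 0
dA = eps0*(2*rand(m,n)-1).*A; dC = eps0*(2*rand(s,n)-1).*C;  % |dA| <= eps0|A|, etc.
dg = eps0*(2*rand(m,1)-1).*g; dh = eps0*(2*rand(s,1)-1).*h;
Ap = A+dA; Cp = C+dC; Qp = Ap'*J*Ap; Pp = eye(n) - pinv(Cp)*Cp; Tp = pinv(Pp*Qp*Pp);
xp = (eye(n) - Tp*Qp)*pinv(Cp)*(h+dh) + Tp*Ap'*J*(g+dg);     % perturbed solution
bound = eps0*( abs(CAd)*(abs(h)+abs(C)*abs(x)) + abs(T*A'*J)*(abs(g)+abs(A)*abs(x)) ...
             + abs(T)*(abs(C')*abs(lam) + abs(A')*abs(r)) );
max(abs(xp - x) - bound)                       % <= O(eps0^2), essentially nonpositive
```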
Furthermore, we can obtain componentwise perturbation bounds for the (unconstrained) indefinite least squares solution and its residual.
Remark 4.
Assume that C is a zero matrix, λ = 0, and h = 0. With the above notation, for any ε > 0, if the componentwise perturbations satisfy |dA| ≤ ε|A| and |dg| ≤ ε|g|, then the error in the solution is bounded by

|dx| ≤ ε ( |(A^T J A)^{−1} A^T J| (|g| + |A||x|) + |(A^T J A)^{−1}| |A|^T |r| ) + O(ε²)

and the error in the residual is bounded by

|dr| ≤ ε ( |J − J A (A^T J A)^{−1} A^T J| (|g| + |A||x|) + |J A (A^T J A)^{−1}| |A|^T |r| ) + O(ε²).

5. Statistical Condition Estimates

This section proposes three algorithms for estimating the normwise, mixed and componentwise condition numbers of the generalized inverse C_A^†. Algorithm 1 is based on the probabilistic spectral norm estimator [27], which has been used to estimate the normwise condition number of the K-weighted pseudoinverse L_K^† [23], the ILS problem [33], the constrained and weighted least squares problem [34], and the Tikhonov regularization of the total least squares problem [35]. Based on the SSCE method [28], we develop Algorithm 2 to estimate the normwise condition number; for details, see [23,33,36,37,38].
Algorithm 1: Probabilistic condition estimator for the normwise condition number
  • Compute the derivative dφ(u) = [W(A), W(C)], and choose a starting vector u_0 uniformly and randomly from the unit sphere S^{t−1}, t being the dimension of the space on which dφ(u) acts (t = mn + sn).
  • Using the probabilistic spectral norm estimator [27], compute the certain lower bound α_1 and the probabilistic upper bound α_2 of ‖dφ(u)‖_2.
  • Compute the normwise condition number via (26):
    n_p(A, C) = ν_p ‖[A, C]‖_F / ‖C_A^†‖_F,   with   ν_p = (α_1 + α_2)/2.
Algorithm 2: Small-sample statistical condition estimation (SSCE) for the normwise condition number
  • Generate matrix pairs [dA_1, dC_1], [dA_2, dC_2], …, [dA_k, dC_k] with each entry drawn from N(0, 1), and orthonormalize the columns of

    [ vec(dA_1)  vec(dA_2)  ⋯  vec(dA_k) ; vec(dC_1)  vec(dC_2)  ⋯  vec(dC_k) ]

    by the modified Gram–Schmidt process to obtain τ_1, τ_2, …, τ_k. Convert each τ_i back into a pair of matrices [dA_i, dC_i] by the unvec operation.
  • Let p̃ = mn + sn. Approximate ω_{p̃} and ω_k by

    ω_j ≈ sqrt( 2 / (π (j − 1/2)) ).   (36)

  • For i = 1, 2, …, k, compute

    θ_i = −C_A^† dC_i C_A^† + C_A^† C^{†T} dC_i^T (I − C C^†) + (PQP)^† dC_i^T C^{†T} Q C_A^† − (PQP)^† A^T J dA_i C_A^† − (PQP)^† dA_i^T J A C_A^†.

  • Compute the absolute condition vector

    κ_abs := (ω_k / ω_{p̃}) sqrt( θ_1² + θ_2² + ⋯ + θ_k² ),

    where the squares and the square root are applied componentwise to the θ_i.
  • Estimate the normwise condition number (26) by

    n_s(A, C) = N_SCE ‖[A, C]‖_F / ‖C_A^†‖_F,   where   N_SCE := (ω_k / ω_{p̃}) sqrt( ‖θ_1‖_F² + ‖θ_2‖_F² + ⋯ + ‖θ_k‖_F² ) = ‖κ_abs‖_F.
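A compact MATLAB sketch of Algorithm 2's core follows (ours; QR is used in place of modified Gram–Schmidt for the orthonormalization, which is mathematically equivalent here):

```matlab
% Sketch: SSCE estimate of the normwise condition number (Algorithm 2, k = 2).
p = 8; q = 2; n = 6; s = 3; m = p + q; k = 2;
J = blkdiag(eye(p), -eye(q)); A = randn(m, n); C = randn(s, n);
Q = A'*J*A; Cd = pinv(C); P = eye(n) - Cd*C; T = pinv(P*Q*P);
CAd = (eye(n) - T*Q)*Cd; vec = @(M) M(:);
omega = @(j) sqrt(2/(pi*(j - 0.5)));            % Wallis-factor approximation (36)
[Z, ~] = qr(randn(m*n + s*n, k), 0);            % orthonormal random directions
th = zeros(n*s, k);
for i = 1:k
    dA = reshape(Z(1:m*n, i), m, n); dC = reshape(Z(m*n+1:end, i), s, n);
    th(:, i) = vec(-CAd*dC*CAd + CAd*Cd'*dC'*(eye(s) - C*Cd) ...
                   + T*dC'*Cd'*Q*CAd - T*A'*J*dA*CAd - T*dA'*J*A*CAd);
end
kabs  = omega(k)/omega(m*n + s*n)*sqrt(sum(th.^2, 2));  % absolute condition vector
n_sce = norm(kabs)*norm([A, C], 'fro')/norm(CAd, 'fro');
```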
To estimate the mixed and componentwise condition numbers, we use the same SSCE method, which is from [28] and has been applied to many problems (see, e.g., [23,32,33,34,35]); the resulting procedure is given as Algorithm 3 below.

6. Numerical Experiments

In this section, we present two examples. The first compares the normwise, mixed and componentwise condition numbers with their upper bounds; the second demonstrates the efficiency of the statistical condition estimators.
Example 1.
In this example, we first compute the condition numbers and their upper bounds for the matrix pairs below, and then demonstrate the reliability of Algorithms 1–3. All numerical experiments were performed in MATLAB R2018a. We examine 200 matrix pairs, generated by repeatedly drawing the matrices A ∈ R^{m×n} from [33] and C ∈ R^{s×n} as follows:

A = [ U_p  0 ; 0  U_q ] [ D ; 0 ] V,   U_p = I_p − 2 u_p u_p^T,   U_q = I_q − 2 u_q u_q^T,   V = I_n − 2 v v^T,

where u_p ∈ R^p, u_q ∈ R^q and v ∈ R^n are random unit vectors (obtained from the MATLAB function randn(·, 1) and normalized), and D = n^{−l} diag(n^l, (n−1)^l, …, 1^l). It is simple to determine that the condition number of A, i.e., κ(A) = ‖A‖_2 ‖A^†‖_2, equals n^l. We take C = [C_1, 0], where C_1 is a nonsymmetric Gaussian random Toeplitz matrix generated by the MATLAB function toeplitz(c, r) with c = randn(s, 1) and r = randn(s, 1). Table 1 reports the ratios

ω_1 = n_upper(A, C) / n(A, C),   ω_2 = m_upper(A, C) / m(A, C),   ω_3 = c_upper(A, C) / c(A, C).
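For reference, the generation just described can be transcribed as follows (a sketch of ours):

```matlab
% Sketch: generate one test pair (A, C) of Example 1.
p = 25; q = 15; n = 20; s = 10; m = p + q; l = 1;  % l controls kappa(A) = n^l
up = randn(p, 1); up = up/norm(up); Up = eye(p) - 2*(up*up');
uq = randn(q, 1); uq = uq/norm(uq); Uq = eye(q) - 2*(uq*uq');
v  = randn(n, 1); v  = v/norm(v);   V  = eye(n) - 2*(v*v');
D  = n^(-l)*diag((n:-1:1).^l);                     % singular values 1 down to n^(-l)
A  = blkdiag(Up, Uq)*[D; zeros(m-n, n)]*V;         % kappa(A) = n^l
C  = [toeplitz(randn(s, 1), randn(s, 1)), zeros(s, n-s)];   % C = [C1, 0]
```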
To show the efficiency of the three algorithms discussed above, we run some numerical tests, choosing the parameters δ = 0.01 and ε = 0.001 for Algorithm 1 and k = 2 for Algorithms 2 and 3. The ratios between the exact condition numbers and their estimated values are

r_p = n_p(A, C) / n(A, C),   r_s = n_s(A, C) / n(A, C),   r_m = m_s(A, C) / m(A, C),   r_c = c_s(A, C) / c(A, C),

where r_p is the ratio between the exact normwise condition number and the estimate of Algorithm 1, r_s the corresponding ratio for Algorithm 2, and r_m and r_c the ratios between the exact mixed and componentwise condition numbers and the estimates of Algorithm 3.
The results in Table 2 demonstrate that Algorithms 1–3 reliably estimate the condition numbers in most situations, supporting the statement in ([39], Chapter 15) that an estimate of the condition number that is correct to within a factor of 10 is usually acceptable, because it is the magnitude of an error bound that is of interest, not its precise value. For the normwise condition number, Algorithm 1 works particularly effectively and stably.
Algorithm 3: Small-sample statistical condition estimation (SSCE) for the mixed and componentwise condition numbers
  • Generate matrix pairs [d̃A_1, d̃C_1], …, [d̃A_k, d̃C_k] with each entry drawn from N(0, 1), and orthonormalize the columns of

    [ vec(d̃A_1)  ⋯  vec(d̃A_k) ; vec(d̃C_1)  ⋯  vec(d̃C_k) ]

    by the modified Gram–Schmidt process to obtain τ_1, …, τ_k. Convert each τ_i back into a pair [d̃A_i, d̃C_i] by the unvec operation, and set dA_i and dC_i to be d̃A_i and d̃C_i multiplied componentwise by A and C, respectively.
  • Let p̃ = mn + sn. Approximate ω_{p̃} and ω_k by (36).
  • For i = 1, 2, …, k, compute

    θ_i = −C_A^† dC_i C_A^† + C_A^† C^{†T} dC_i^T (I − C C^†) + (PQP)^† dC_i^T C^{†T} Q C_A^† − (PQP)^† A^T J dA_i C_A^† − (PQP)^† dA_i^T J A C_A^†.

    Using the approximations of ω_{p̃} and ω_k, compute the absolute condition vector

    κ_sce = (ω_k / ω_{p̃}) sqrt( θ_1² + θ_2² + ⋯ + θ_k² )   (squares and square root applied componentwise).

  • Estimate the mixed and componentwise condition numbers by

    m_s(A, C) = ‖κ_sce‖_∞ / ‖vec(C_A^†)‖_∞,   c_s(A, C) = ‖ κ_sce ⊘ vec(C_A^†) ‖_∞.
Example 2.
Following patterns similar to [2,3,5], we generate the matrices A and C through the GHQR factorization

H^T A Q = [ L_11  0 ; L_21  L_22 ],   U^T C Q = [ K_11  0 ],

where H ∈ R^{(p+q)×(p+q)} is J-orthogonal, i.e., H J H^T = J, Q is orthogonal, and L_22 ∈ R^{(n−s)×(n−s)} and K_11 ∈ R^{s×s} are lower triangular and non-singular. In our experiment, we let L_11 and L_21 be random matrices. H is a random J-orthogonal matrix with a specified condition number, generated by the method described in [40]. Q ∈ R^{n×n} and U ∈ R^{s×s} are random orthogonal matrices (generated by MATLAB's gallery('qmult', …)), while L_22 and K_11 are obtained from QR factorizations of random matrices with specified condition numbers and pre-assigned singular value distributions (generated via MATLAB's gallery('randsvd', …)). To examine the performance of the above algorithms, we use 500 matrix pairs, vary the condition numbers of A and C, and set p = 50, q = 30, n = 40, and s = 20. The ratios between the exact condition numbers and their estimates are

r_p = n_p(A, C) / n(A, C),   r_s = n_s(A, C) / n(A, C),   r_m = m_s(A, C) / m(A, C),   r_c = c_s(A, C) / c(A, C),

where the parameters δ, ε, k and the ratios r_p, r_s, r_m and r_c are the same as in Example 1. We present these numerical results and the CPU times in Figure 1, Figure 2 and Table 3. The time ratios are defined by

t_p := t_1/t,   t_s := t_2/t,   t_m := t_3/t,   t_c := t_4/t,

where t is the CPU time of computing the generalized inverse C_A^† by the GHQR decomposition [20], t_1 is the CPU time of Algorithm 1, t_2 that of Algorithm 2, and t_3 and t_4 those of Algorithm 3. From Figure 1 and Figure 2, we can see that the three algorithms estimate the condition numbers very efficiently. However, Table 3 shows that the CPU times of Algorithms 1 and 2 are smaller than those of Algorithm 3.
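For readers who wish to reproduce such pairs, the following sketch (ours) assembles (A, C) from assumed GHQR factors; for simplicity it builds H as a product of random hyperbolic rotations rather than by the method of [40], so its condition number is not prescribed:

```matlab
% Sketch: build (A, C) from GHQR factors; H is J-orthogonal by construction.
p = 50; q = 30; n = 40; s = 20; m = p + q;
J = blkdiag(eye(p), -eye(q)); H = eye(m);
for t = 1:q                                     % random hyperbolic rotations
    i = randi(p); j = p + randi(q); c = cosh(0.5*randn); sh = sqrt(c^2 - 1);
    G = eye(m); G([i j], [i j]) = [c sh; sh c]; H = H*G;   % G*J*G' = J
end
Qo = gallery('qmult', n); U = gallery('qmult', s);      % random orthogonal factors
[~, R] = qr(gallery('randsvd', n-s, 1e2)); L22 = R';    % lower triangular, cond ~ 1e2
[~, R] = qr(gallery('randsvd', s,   1e2)); K11 = R';
L11 = randn(m-n+s, s); L21 = randn(n-s, s);
Lmat = [L11, zeros(m-n+s, n-s); L21, L22];
A = (J*H*J)*Lmat*Qo';                           % since H^{-T} = J*H*J
C = U*[K11, zeros(s, n-s)]*Qo';
```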

7. Conclusions

In this paper, we provided explicit expressions and upper bounds for the normwise, mixed, and componentwise condition numbers of the generalized inverse C_A^†. The corresponding results for the K-weighted pseudoinverse L_K^† follow as a special case. We also showed how to recover the known condition numbers of the EILS solution from those of the generalized inverse C_A^†, and we developed a componentwise perturbation analysis of the EILS problem. Moreover, we designed three algorithms that efficiently estimate the normwise, mixed, and componentwise condition numbers of C_A^† using the probabilistic spectral norm estimator and the small-sample statistical condition estimation method. Finally, numerical results demonstrated the performance of these algorithms. In the future, we will continue our research on the MK-weighted generalized inverse.

Author Contributions

Methodology, M.S.; Investigation, M.S. and X.Z.; writing—original draft, M.S.; review and editing, A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Zhejiang Normal University Postdoctoral Research Fund (Grant No. ZC304022938), the Natural Science Foundation of China (Project No. 61976196) and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LZ22F030003.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cucker, F.; Diao, H.; Wei, Y. On mixed and componentwise condition numbers for Moore–Penrose inverse and linear least squares problems. Math. Comput. 2007, 76, 947–963.
2. Liu, Q.; Pan, B.; Wang, Q. The hyperbolic elimination method for solving the equality constrained indefinite least squares problem. Int. J. Comput. Math. 2010, 87, 2953–2966.
3. Liu, Q.; Wang, M. Algebraic properties and perturbation results for the indefinite least squares problem with equality constraints. Int. J. Comput. Math. 2010, 87, 425–434.
4. Bojanczyk, A.; Higham, N.J.; Patel, H. Solving the indefinite least squares problem by hyperbolic QR factorization. SIAM J. Matrix Anal. Appl. 2003, 24, 914–931.
5. Bojanczyk, A.; Higham, N.J.; Patel, H. The equality constrained indefinite least squares problem: Theory and algorithms. BIT Numer. Math. 2003, 43, 505–517.
6. Wei, M.; Zhang, B. Structures and uniqueness conditions of MK-weighted pseudoinverses. BIT Numer. Math. 1994, 34, 437–450.
7. Bojanczyk, A. Algorithms for indefinite linear least squares problems. Linear Algebra Appl. 2021, 623, 104–127.
8. Shi, C.; Liu, Q. A hyperbolic MGS elimination method for solving the equality constrained indefinite least squares problem. Commun. Appl. Math. Comput. 2011, 25, 65–73.
9. Mastronardi, N.; Van Dooren, P. An algorithm for solving the indefinite least squares problem with equality constraints. BIT Numer. Math. 2014, 54, 201–218.
10. Mastronardi, N.; Van Dooren, P. A structurally backward stable algorithm for solving the indefinite least squares problem with equality constraints. IMA J. Numer. Anal. 2015, 35, 107–132.
11. Wang, Q. Perturbation analysis for generalized indefinite least squares problems. J. East China Norm. Univ. Nat. Sci. Ed. 2009, 4, 47–53.
12. Diao, H.; Zhou, T. Linearised estimate of the backward error for equality constrained indefinite least squares problems. East Asian J. Appl. Math. 2019, 9, 270–279.
13. Li, H.; Wang, S.; Yang, H. On mixed and componentwise condition numbers for indefinite least squares problem. Linear Algebra Appl. 2014, 448, 104–129.
14. Wang, S.; Meng, L. A contribution to the conditioning theory of the indefinite least squares problems. Appl. Numer. Math. 2022, 177, 137–159.
15. Skeel, R.D. Scaling for numerical stability in Gaussian elimination. J. ACM 1979, 26, 494–526.
16. Björck, Å. Component-wise perturbation analysis and error bounds for linear least squares solutions. BIT Numer. Math. 1991, 31, 238–244.
17. Diao, H.; Liang, L.; Qiao, S. A condition analysis of the weighted linear least squares problem using dual norms. Linear Multilinear Algebra 2018, 66, 1085–1103.
18. Diao, H.; Zhou, T. Backward error and condition number analysis for the indefinite linear least squares problem. Int. J. Comput. Math. 2019, 96, 1603–1622.
19. Diao, H. Condition numbers for a linear function of the solution of the linear least squares problem with equality constraints. J. Comput. Appl. Math. 2018, 344, 640–656.
20. Eldén, L. A weighted pseudoinverse, generalized singular values, and constrained least squares problems. BIT Numer. Math. 1982, 22, 487–502.
21. Wei, M. Algebraic properties of the rank-deficient equality-constrained and weighted least squares problem. Linear Algebra Appl. 1992, 161, 27–43.
22. Gulliksson, M.E.; Wedin, P.Å.; Wei, Y. Perturbation identities for regularized Tikhonov inverses and weighted pseudoinverses. BIT Numer. Math. 2000, 40, 513–523.
23. Samar, M.; Li, H.; Wei, Y. Condition numbers for the K-weighted pseudoinverse L_K^† and their statistical estimation. Linear Multilinear Algebra 2021, 69, 752–770.
24. Bürgisser, P.; Cucker, F. Condition: The Geometry of Numerical Algorithms; Grundlehren der Mathematischen Wissenschaften, Volume 349; Springer: Heidelberg, Germany, 2013.
25. Rice, J. A theory of condition. SIAM J. Numer. Anal. 1966, 3, 287–310.
26. Gohberg, I.; Koltracht, I. Mixed, componentwise, and structured condition numbers. SIAM J. Matrix Anal. Appl. 1993, 14, 688–704.
27. Hochstenbach, M.E. Probabilistic upper bounds for the matrix two-norm. J. Sci. Comput. 2013, 57, 464–476.
28. Kenney, C.S.; Laub, A.J. Small-sample statistical condition estimates for general matrix functions. SIAM J. Sci. Comput. 1994, 15, 36–61.
29. Xie, Z.; Li, W.; Jin, X. On condition numbers for the canonical generalized polar decomposition of real matrices. Electron. J. Linear Algebra 2013, 26, 842–857.
30. Horn, R.A.; Johnson, C.R. Topics in Matrix Analysis; Cambridge University Press: New York, NY, USA, 1991.
31. Magnus, J.R.; Neudecker, H. Matrix Differential Calculus with Applications in Statistics and Econometrics, 3rd ed.; John Wiley and Sons: Chichester, UK, 2007.
32. Diao, H.; Xiang, H.; Wei, Y. Mixed, componentwise condition numbers and small sample statistical condition estimation of Sylvester equations. Numer. Linear Algebra Appl. 2012, 19, 639–654.
33. Li, H.; Wang, S. On the partial condition numbers for the indefinite least squares problem. Appl. Numer. Math. 2018, 123, 200–220.
34. Samar, M. Condition numbers for a linear function of the solution to the constrained and weighted least squares problem and their statistical estimation. Taiwan. J. Math. 2021, 25, 717–741.
35. Samar, M.; Lin, F. Perturbation and condition numbers for the Tikhonov regularization of total least squares problem and their statistical estimation. J. Comput. Appl. Math. 2022, 411, 114230.
36. Diao, H.; Wei, Y.; Xie, P. Small sample statistical condition estimation for the total least squares problem. Numer. Algorithms 2017, 75, 435–455.
37. Samar, M.; Zhu, X. Structured conditioning theory for the total least squares problem with linear equality constraint and their estimation. AIMS Math. 2023, 8, 11350–11372.
38. Baboulin, M.; Gratton, S.; Lacroix, R.; Laub, A.J. Statistical estimates for the conditioning of linear least squares problems. In Parallel Processing and Applied Mathematics (PPAM 2013); Lecture Notes in Computer Science, Volume 8384; Springer: Berlin/Heidelberg, Germany, 2014; pp. 124–133.
39. Higham, N.J. Accuracy and Stability of Numerical Algorithms, 2nd ed.; SIAM: Philadelphia, PA, USA, 2002.
40. Higham, N.J. J-orthogonal matrices: Properties and generation. SIAM Rev. 2003, 45, 504–519.
Figure 1. Efficiency of the normwise condition estimators and CPU times of Algorithms 1 and 2.
Figure 2. Efficiency of the mixed and componentwise condition estimators and CPU time of Algorithm 3.
Table 1. Comparison of the condition numbers and their upper bounds for different values of p, q, n and s; each cell reports Mean / Max over the test pairs, grouped by κ(A) = n^l.

κ(A) = n^1
(p, q, n, s)        ω_1 (Mean / Max)           ω_2 (Mean / Max)           ω_3 (Mean / Max)
(25, 15, 20, 10)    1.0763e+00 / 4.8422e+00    1.0647e+00 / 2.8373e+00    1.1538e+00 / 3.9657e+00
(50, 30, 40, 20)    1.3146e+00 / 6.7089e+00    1.0861e+00 / 4.4630e+00    1.2845e+00 / 5.9123e+00
(75, 45, 60, 30)    1.7422e+00 / 1.6402e+01    1.1965e+00 / 1.2847e+01    1.0784e+00 / 1.5766e+01
(100, 60, 80, 40)   2.6043e+00 / 1.9461e+01    1.4574e+00 / 1.6783e+01    1.7540e+00 / 1.8452e+01

κ(A) = n^2
(25, 15, 20, 10)    1.4032e+00 / 5.7654e+00    1.2433e+00 / 4.6501e+00    1.3601e+00 / 5.3752e+00
(50, 30, 40, 20)    1.7341e+00 / 8.2074e+00    1.5623e+00 / 6.4738e+00    1.7320e+00 / 7.2004e+00
(75, 45, 60, 30)    2.5254e+00 / 2.8732e+01    1.8510e+00 / 1.6062e+01    2.0653e+00 / 2.2903e+01
(100, 60, 80, 40)   2.7034e+00 / 3.9543e+01    2.0312e+00 / 2.0106e+01    2.3871e+00 / 2.4803e+01

κ(A) = n^3
(25, 15, 20, 10)    1.7301e+00 / 7.9662e+00    1.4607e+00 / 6.8606e+00    1.5296e+00 / 8.0651e+00
(50, 30, 40, 20)    1.9674e+00 / 3.7649e+01    1.7065e+00 / 8.5963e+00    1.8472e+00 / 9.7063e+00
(75, 45, 60, 30)    2.7055e+00 / 5.6570e+01    2.0276e+00 / 3.2613e+01    2.3601e+00 / 4.6904e+01
(100, 60, 80, 40)   2.9867e+00 / 7.1601e+01    2.2760e+00 / 4.9013e+01    2.5935e+00 / 5.9721e+01

κ(A) = n^4
(25, 15, 20, 10)    1.8271e+00 / 2.3021e+01    1.6354e+00 / 1.4032e+01    1.7925e+00 / 1.5102e+01
(50, 30, 40, 20)    2.3064e+00 / 3.7632e+01    1.9642e+00 / 1.5210e+01    1.9862e+00 / 1.6082e+01
(75, 45, 60, 30)    2.8063e+00 / 7.4310e+01    2.0513e+00 / 5.0471e+01    2.6743e+00 / 6.0437e+01
(100, 60, 80, 40)   2.9887e+00 / 8.6501e+01    2.3810e+00 / 7.1089e+01    2.7011e+00 / 7.4810e+01
Table 2. Results of Algorithms 1–3 for different values of p, q, n and s; each cell reports Mean / Variance, grouped by κ(A) = n^l.

κ(A) = n^1
(p, q, n, s)        r_p (Mean / Var)           r_s (Mean / Var)           r_m (Mean / Var)           r_c (Mean / Var)
(25, 15, 20, 10)    1.0000e+00 / 5.3577e-11    1.0322e+00 / 1.2063e-01    1.0067e+00 / 1.3505e-02    1.2785e+00 / 1.0431e-02
(50, 30, 40, 20)    1.0000e+00 / 7.0635e-09    1.1439e+00 / 3.5027e-01    1.0134e+00 / 3.9054e-02    1.3744e+00 / 3.6397e-02
(75, 45, 60, 30)    1.0001e+00 / 1.5165e-11    1.2906e+00 / 4.6021e-01    1.1075e+00 / 4.1653e-02    1.5043e+00 / 3.9428e-02
(100, 60, 80, 40)   1.0001e+00 / 1.7940e-12    1.3482e+00 / 5.7803e-01    1.2306e+00 / 4.9563e-02    1.8732e+00 / 4.6543e-02

κ(A) = n^2
(25, 15, 20, 10)    1.0000e+00 / 6.5102e-09    1.2654e+00 / 2.7360e-01    1.3405e+00 / 3.4605e-02    1.2765e+00 / 2.6123e-02
(50, 30, 40, 20)    1.0000e+00 / 7.4738e-11    1.4783e+00 / 4.4925e-01    1.7169e+00 / 4.8543e-02    1.5063e+00 / 4.3326e-02
(75, 45, 60, 30)    1.0001e+00 / 1.6062e-09    1.6295e+00 / 6.8732e-01    1.8206e+00 / 6.4890e-02    1.7422e+00 / 5.0542e-02
(100, 60, 80, 40)   1.0001e+00 / 2.5106e-13    1.8693e+00 / 7.9543e-01    2.1456e+00 / 7.4293e-02    2.0361e+00 / 6.3702e-02

κ(A) = n^3
(25, 15, 20, 10)    1.0000e+00 / 1.7029e-08    1.2063e+00 / 4.2083e-01    1.6710e+00 / 5.7862e-02    1.3722e+00 / 4.7031e-02
(50, 30, 40, 20)    1.0000e+00 / 2.4771e-11    1.7033e+00 / 7.2035e-01    1.8041e+00 / 6.0165e-02    1.5760e+00 / 5.7402e-02
(75, 45, 60, 30)    1.0002e+00 / 6.1041e-12    2.0654e+00 / 7.5293e-01    2.2054e+00 / 8.3014e-02    2.0113e+00 / 7.2461e-02
(100, 60, 80, 40)   1.0003e+00 / 5.6854e-13    2.1976e+00 / 8.2063e-01    2.2593e+00 / 8.6458e-02    2.1263e+00 / 7.9432e-02

κ(A) = n^4
(25, 15, 20, 10)    1.0000e+00 / 5.6321e-07    1.6305e+00 / 6.2092e-01    1.9455e+00 / 6.7402e-02    1.8240e+00 / 6.0461e-02
(50, 30, 40, 20)    1.0000e+00 / 6.0573e-09    1.7002e+00 / 8.0210e-01    1.9822e+00 / 8.0549e-02    1.9701e+00 / 7.4322e-02
(75, 45, 60, 30)    1.0003e+00 / 8.6021e-11    2.1533e+00 / 9.0425e-01    2.4003e+00 / 9.3614e-02    2.2764e+00 / 8.4681e-02
(100, 60, 80, 40)   1.0004e+00 / 2.8543e-12    2.4187e+00 / 9.2054e-01    2.6005e+00 / 9.5370e-02    2.5711e+00 / 9.4502e-02
Table 3. CPU times for Algorithms 1–3 for different values of p, q, n and s (time ratios t_p, t_s, t_m, t_c as defined in Example 2).

(p, q, n, s)         t_p      t_s      t_m      t_c
(25, 15, 20, 10)     0.1065   0.2742   0.7601   0.4643
(75, 45, 60, 30)     0.3784   0.5204   1.3644   1.1677
(100, 60, 80, 40)    0.4842   0.6032   1.4569   1.2658
(120, 80, 100, 50)   0.5643   0.7411   1.6345   1.5403

