Article

Componentwise Perturbation Analysis of the Singular Value Decomposition of a Matrix

1 Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
2 Department of Engineering Sciences, Bulgarian Academy of Sciences, 1040 Sofia, Bulgaria
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(4), 1417; https://doi.org/10.3390/app14041417
Submission received: 13 January 2024 / Revised: 1 February 2024 / Accepted: 3 February 2024 / Published: 8 February 2024

Abstract
A rigorous perturbation analysis is presented for the singular value decomposition (SVD) of a real matrix with full column rank. It is proved that the SVD perturbation problem is well posed only when the singular values are distinct. The analysis involves the solution of symmetric coupled systems of linear equations. It produces asymptotic (local) componentwise perturbation bounds on the entries of the orthogonal matrices participating in the decomposition of the given matrix and on its singular values. Local bounds are derived for the sensitivity of the singular subspaces measured by the angles between the unperturbed and perturbed subspaces. Determining the asymptotic bounds of the orthogonal matrices and the sensitivity of singular subspaces requires knowing only the norm of the perturbation of the given matrix. An iterative scheme is described to find global bounds on the respective perturbations, and results from numerical experiments are presented.

1. Introduction

As is known ([1], Ch. 2), ([2], Ch. 1), ([3], Ch. 2), each m × n, m ≥ n, matrix A ∈ R^{m×n} can be represented by the singular value decomposition (SVD) in the factorized form
A = U Σ V^T,
where the m × m matrix U and the n × n matrix V are orthogonal and the m × n matrix Σ is diagonal:
Σ = [Σ_n; 0_{(m−n)×n}],  Σ_n = diag(σ_1, σ_2, …, σ_n).
The numbers σ_i ≥ 0 are called singular values of the matrix A. The columns of
U := [u_1, u_2, …, u_m],  u_j ∈ R^m,
are called left singular vectors, and the columns of
V := [v_1, v_2, …, v_n],  v_j ∈ R^n,
are the right singular vectors. The subspaces spanned by sets of left and right singular vectors are called left and right singular subspaces, respectively.
The singular value decomposition has a long and interesting history described in [4].
The SVD has many properties that make it an invaluable tool in matrix analysis and matrix computations; see the references cited above. Among them are the facts that the rank r of A equals the number of its nonzero singular values and that ‖A‖_2 = σ_max. The usual assumption is that, by an appropriate ordering of the columns u_j and v_j, the singular values appear in the order
σ_max := σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_n =: σ_min.
If some of the singular values are equal to zero, then A has r < n linearly independent columns, and the matrix Σ in (1) can be represented as
Σ = [Σ_r, 0_{r×(n−r)}; 0_{(m−r)×r}, 0_{(m−r)×(n−r)}],  Σ_r = diag(σ_1, σ_2, …, σ_r).
Further, we shall consider the case r = n , i.e., assume that the matrix A is of full column rank.
This paper concerns the case when the matrix A is subject to an additive perturbation δA. In such a case, there exists another pair of orthogonal matrices Ũ and Ṽ and a diagonal matrix Σ̃ such that
Ã := A + δA = Ũ Σ̃ Ṽ^T.
The perturbation analysis of the singular value decomposition consists in determining the changes in the quantities related to the elements of the decomposition due to the perturbation δA. This includes determining bounds on the changes of the entries of the orthogonal matrices that reduce the original matrix to diagonal form and bounds on the perturbations of the singular values. Hence, the analysis aims to find bounds on the sizes of δU = Ũ − U, δV = Ṽ − V, and δΣ = Σ̃ − Σ as functions of the size of δA. It should be emphasized that problems of this kind arise in most singular value decomposition applications. The most important application of the perturbation analysis is to assess the accuracy of the computed SVD, since each algorithm for its determination produces the SVD of A + δA, not of A, where the size of δA depends on the properties of the floating point arithmetic used and the corresponding algorithm. Knowing the size of δA, we may use the perturbation bounds to estimate the difference between the actual and computed elements of the decomposition. For a deep and systematic presentation of the matrix perturbation theory and its use in accuracy estimation, the reader is referred to the books of Stewart [2], Stewart and Sun [5], and Wilkinson [6].
According to Weyl’s theorem ([2], Ch. 1), we have that
|δσ_i| = |σ̃_i − σ_i| ≤ ‖δA‖_2,  i = 1, 2, …, n,
which shows that the singular values are perturbed by no more than the 2-norm of the perturbation of A; i.e., the singular values are always well conditioned. The SVD perturbation analysis is well defined if the matrix A is of full column rank n, i.e., σ_min ≠ 0, since otherwise the corresponding left singular vector is undetermined.
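As a quick numerical illustration of (3), the following minimal MATLAB sketch (the randomly generated test matrix and perturbation are assumptions of the sketch) compares the singular value changes with ‖δA‖_2:

```matlab
% Numerical illustration of Weyl's bound (3):
% |sigma_i(A + dA) - sigma_i(A)| <= ||dA||_2 for all i.
m = 6; n = 4;
A  = randn(m, n);               % test matrix (assumed data)
dA = 1e-8 * randn(m, n);        % small additive perturbation
s  = svd(A);                    % unperturbed singular values
st = svd(A + dA);               % perturbed singular values
fprintf('max |dsigma| = %g  <=  ||dA||_2 = %g\n', ...
        max(abs(st - s)), norm(dA, 2));
```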
The size of the perturbations δ A , δ U , δ V , and δ Σ is usually measured by using some of the matrix norms, which leads to the so-called normwise perturbation analysis. In several cases, we are interested in the size of the perturbations of the individual entries of δ U , δ V , and δ Σ , so that it is necessary to implement a componentwise perturbation analysis [7]. This analysis has an advantage when the individual components of δ U , δ V , and δ Σ differ significantly in magnitude, and the normwise estimates do not produce tight bounds on the perturbations.
The literature on the perturbation theory of the singular value decomposition is significant. The first results on this topic are obtained by Wedin [8] and Stewart [9], who developed estimates of the sensitivity of pairs of singular subspaces (see also ([5], Ch. V)). Other results concerning the sensitivity of the singular vectors and singular subspaces are obtained in [10,11], and a survey on the perturbation theory of the SVD until 1990 can be found in [12]. Using the technique of perturbation expansions for invariant subspaces, Sun [13] derived perturbation bounds for a pair of singular subspaces that improved the bounds obtained in [9] and contained as a special case the bounds presented in [11]. A perturbation theory for the singular values and singular subspaces of diagonally dominant matrices, including graded matrices, is presented in [14], and optimal perturbation bounds for the case of structured perturbations are derived in [15]. The seminal paper by Demmel and his co-authors [16] contains a perturbation theory that provides relative perturbation bounds for the singular values and singular subspaces for different classes of matrices. The problem of backward error analysis of SVD algorithms is discussed in [17]. High-accuracy algorithms for computing SVD are proposed by Drmač and Veselić in [18,19] and implemented in the LAPACK package [20]. An improvement in the results of [8] is presented in [21]. Several results concerning the sensitivity of the SVD are summarized in the survey [22], and a rich bibliography on the accurate computation of SVD can be found in [23]. Finally, some recent applications of SVD are described in [24,25]. It should be pointed out that the available SVD perturbation theory provides bounds on the sensitivity of the singular vectors and singular subspaces but does not provide perturbation bounds on the individual entries of the matrices U and V. Such bounds are important in several applications, and this fact justifies the observation that, despite the large number of results about the sensitivity of the singular values and singular vectors, a complete componentwise perturbation analysis of the SVD has not been available up to now.
This paper presents a rigorous perturbation analysis of the orthogonal matrices, singular subspaces, and singular values of a real matrix of full column rank. It is proved that the SVD perturbation problem is well posed only in the case of distinct (simple) singular values. The analysis produces asymptotic (local) componentwise perturbation bounds on the entries of the orthogonal matrices U and V and on the singular values of the given matrix. Local bounds are derived for the sensitivity of a pair of singular subspaces measured by the angles between the unperturbed and perturbed subspaces. An iterative scheme is described to find global bounds on the respective perturbations, and the results of numerical experiments are presented. The analysis performed in the paper implements the same methodology as the one used previously in [26] to determine componentwise perturbation bounds of the QR decomposition of a matrix. However, the SVD perturbation analysis has some distinctive features, making it a problem in its own right.
The paper is organized as follows. In Section 2, we derive the basic nonlinear algebraic equations used to perform the perturbation analysis of the SVD. After introducing in Section 3 the perturbation parameters that determine the perturbations of the matrices U and V, we derive a symmetric system of coupled equations for these parameters in Section 4. The solution to the equations for the first-order terms of the perturbation parameters allows us to find asymptotic bounds on the parameters in Section 5, on the singular values in Section 6, and on the perturbations of the matrices U and V in Section 7. Using the bounds on the perturbation parameters in Section 8, we derive bounds on the sensitivity of the singular subspaces. In Section 9, we develop an iterative scheme for finding global bounds on the perturbations, and in Section 10 we present the results of some numerical experiments illustrating the proposed analysis. Some conclusions are drawn in Section 11.

2. Basic Equations

The perturbed singular value decomposition of A (2) can be written as
Ũ^T Ã Ṽ = Σ̃,
where
Ũ = U + δU := [ũ_1, ũ_2, …, ũ_m], ũ_j ∈ R^m,  δU := [δu_1, δu_2, …, δu_m], δu_j ∈ R^m,  ũ_j = u_j + δu_j, j = 1, 2, …, m,
Ṽ = V + δV := [ṽ_1, ṽ_2, …, ṽ_n], ṽ_j ∈ R^n,  δV := [δv_1, δv_2, …, δv_n], δv_j ∈ R^n,  ṽ_j = v_j + δv_j, j = 1, 2, …, n,
and
Σ̃ = Σ + δΣ = [Σ̃_n; 0],  Σ̃_n = Σ_n + δΣ_n = diag(σ̃_1, σ̃_2, …, σ̃_n),  δΣ_n = diag(δσ_1, δσ_2, …, δσ_n),  σ̃_i = σ_i + δσ_i.
Equation (4) is rewritten as
δU^T A V + U^T A δV + δF = [δΣ_n; 0_{(m−n)×n}] + Δ_0,
where δF = U^T δA V and the matrix
Δ_0 = −δU^T A δV − U^T δA δV − δU^T δA V − δU^T δA δV ∈ R^{m×n}
contains only higher-order terms in the elements of δA, δU, and δV.
Let the matrices U and δU be partitioned as U = [U_1, U_2], U_1 ∈ R^{m×n}, and δU = [δU_1, δU_2], δU_1 ∈ R^{m×n}, respectively. Since the matrix A can be represented as A = U_1 Σ_n V^T, the matrix U_2 is not well determined but should satisfy the orthogonality condition [U_1, U_2]^T [U_1, U_2] = I_m. The perturbation δU_2 is also undefined, so that we can bound only the perturbations of the entries in the first n columns of U, i.e., the entries of δU_1. Further on, we shall use (5) to determine componentwise bounds on δU_1, δV, and δΣ_n.

3. Perturbed Orthogonal Matrices and Perturbation Parameters

In the perturbation analysis of the SVD, it is convenient first to find componentwise bounds on the entries of the matrices δW_U := U^T δU_1 and δW_V := V^T δV, which are related to the corresponding perturbations δU_1 and δV by orthogonal transformations. The implementation of the matrices δW_U and δW_V allows us to find bounds on
δU_1 = U δW_U
and
δV = V δW_V
using orthogonal transformations without increasing the norms of δW_U and δW_V. This helps to determine bounds on δU_1 and δV that are as tight as possible.
First, consider the matrix
δW_U = U^T δU_1 =
[ u_1^T δu_1      u_1^T δu_2      ⋯  u_1^T δu_n
  u_2^T δu_1      u_2^T δu_2      ⋯  u_2^T δu_n
  ⋮               ⋮                   ⋮
  u_n^T δu_1      u_n^T δu_2      ⋯  u_n^T δu_n
  u_{n+1}^T δu_1  u_{n+1}^T δu_2  ⋯  u_{n+1}^T δu_n
  ⋮               ⋮                   ⋮
  u_m^T δu_1      u_m^T δu_2      ⋯  u_m^T δu_n ] ∈ R^{m×n}.
Further on, we shall use the vector of the subdiagonal entries of the matrix δ W U ,
x := [x_1, x_2, …, x_p]^T ∈ R^p
  = [u_2^T δu_1, u_3^T δu_1, …, u_m^T δu_1 (m−1 entries), u_3^T δu_2, …, u_m^T δu_2 (m−2 entries), …, u_{n+1}^T δu_n, …, u_m^T δu_n (m−n entries)]^T,
where
p = (m−1) + (m−2) + ⋯ + (m−n) = n(n−1)/2 + (m−n)n = n(2m − n − 1)/2.
As will become clear later on, together with the orthogonality condition
[U_1 + δU_1]^T [U_1 + δU_1] = I_n,
the vector x contains all the information necessary to find the perturbation δU_1. This vector may be expressed as
x = vec(Low(δW_U)),
or, equivalently,
x = Ω_x vec(δW_U),
where
Ω_x := diag(ω_1, ω_2, …, ω_n) ∈ R^{p×mn},  ω_i := [0_{(m−i)×i}, I_{m−i}] ∈ R^{(m−i)×m},  i = 1, 2, …, n,  Ω_x Ω_x^T = I_p,  ‖Ω_x‖_2 = 1,
is a matrix that “pulls out” the p elements of x from the m·n elements of δW_U (we consider 0_{0×i} an empty matrix). If, for instance, m = 6 and n = 4, then
Low(δW_U) =
[ δw_u21
  δw_u31  δw_u32
  δw_u41  δw_u42  δw_u43
  δw_u51  δw_u52  δw_u53  δw_u54
  δw_u61  δw_u62  δw_u63  δw_u64 ],
vec(δW_U) = [δw_u11, δw_u21, δw_u31, δw_u41, δw_u51, δw_u61 | δw_u12, δw_u22, δw_u32, δw_u42, δw_u52, δw_u62 | δw_u13, δw_u23, δw_u33, δw_u43, δw_u53, δw_u63 | δw_u14, δw_u24, δw_u34, δw_u44, δw_u54, δw_u64]^T,
and the matrix Ω_x = diag(ω_1, ω_2, ω_3, ω_4) ∈ R^{14×24} is the block diagonal 0–1 matrix with blocks ω_i = [0_{(6−i)×i}, I_{6−i}], which gives the relationship between the subdiagonal entries of the matrix δW_U and the parameter vector x,
x = Ω_x vec(δW_U) = [δw_u21, δw_u31, δw_u41, δw_u51, δw_u61 | δw_u32, δw_u42, δw_u52, δw_u62 | δw_u43, δw_u53, δw_u63 | δw_u54, δw_u64]^T.
We have that
x_k = u_i^T δu_j,  k = i + (j−1)m − j(j+1)/2,  1 ≤ j ≤ n,  j < i ≤ m,  1 ≤ k ≤ p.
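To make the indexing concrete, the following MATLAB sketch forms δW_U from the computed decompositions of A and A + δA and extracts x column by column; the sign correction of the perturbed singular vectors is an assumption of the sketch, needed because the SVD determines the singular vectors only up to sign:

```matlab
% Sketch: form dWU = U'*dU1 and extract x = vec(Low(dWU)) as in (8).
m = 6; n = 4;
A  = randn(m, n);  dA = 1e-8 * randn(m, n);
[U,  S,  V ] = svd(A);
[Ut, St, Vt] = svd(A + dA);
% resolve the sign ambiguity of the perturbed singular vectors
d = sign(diag(U(:,1:n)' * Ut(:,1:n)));
Ut(:,1:n) = Ut(:,1:n) .* d';
dWU = U' * (Ut(:,1:n) - U(:,1:n));     % the m-by-n matrix dW_U
p = n*(2*m - n - 1)/2;
x = zeros(p, 1);  k = 0;
for j = 1:n                            % subdiagonal entries, column by column
    for i = j+1:m                      % k = i + (j-1)*m - j*(j+1)/2
        k = k + 1;
        x(k) = dWU(i, j);              % x_k = u_i' * du_j
    end
end
```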
In a similar way, we introduce the vector of the subdiagonal entries of the matrix δW_V = V^T δV (note that V is a square matrix),
y = [y_1, y_2, …, y_q]^T ∈ R^q
  = [v_2^T δv_1, v_3^T δv_1, …, v_n^T δv_1 (n−1 entries), v_3^T δv_2, …, v_n^T δv_2 (n−2 entries), …, v_n^T δv_{n−1} (1 entry)]^T,
where q = n(n−1)/2. It holds that
y = vec(Low(δW_V))
or, equivalently,
y = Ω_y vec(δW_V),
where
Ω_y := diag(ω_1, ω_2, …, ω_n) ∈ R^{q×n²},  ω_i := [0_{(n−i)×i}, I_{n−i}] ∈ R^{(n−i)×n},  i = 1, 2, …, n,  Ω_y Ω_y^T = I_q,  ‖Ω_y‖_2 = 1.
In this case,
y_ℓ = v_i^T δv_j,  ℓ = i + (j−1)n − j(j+1)/2,  1 ≤ j ≤ n − 1,  j < i ≤ n,  1 ≤ ℓ ≤ q.
Further on, the quantities x_k, k = 1, 2, …, p, and y_ℓ, ℓ = 1, 2, …, q, will be referred to as perturbation parameters since they determine the perturbations δU_1 and δV as well as the sensitivity of the singular values and singular subspaces.
First, consider the matrix
δW_U = U^T δU_1 := [δw_u1, δw_u2, …, δw_un],  δw_uj ∈ R^m.
Lemma 1.
The matrix
Applsci 14 01417 i001
is the linear (asymptotic) approximation of the matrix δW_U; i.e., for sufficiently small ‖δA‖_2, it is fulfilled that δQ_U ≈ δW_U.
Proof. 
Using the vector x of the perturbation parameters, the matrix δ W U is written in the form
Applsci 14 01417 i002
where the diagonal and superdiagonal entries have to be determined.
First, determine the entries of the superdiagonal part of δW_U. Since Ũ^T Ũ = I_m, it follows that
U^T δU = −δU^T U − δU^T δU
and
δu_i^T u_j = −u_i^T δu_j − δu_i^T δu_j,  1 ≤ i ≤ m,  i < j ≤ m.
According to the orthogonality condition (10), the entries of the strictly upper triangular part of δW_U can be represented as
u_i^T δu_j = −u_j^T δu_i − δu_i^T δu_j = −x_k − δu_i^T δu_j,  k = j + (i−1)m − i(i+1)/2,  1 ≤ i ≤ n,  i < j ≤ n.
Now consider how to determine the diagonal entries of the matrix δW_U,
δd_ujj := u_j^T δu_j,  j = 1, 2, …, n,
from the elements of x. Since δu_j^T u_j = u_j^T δu_j, according to (10) we have that
2 u_j^T δu_j = −δu_j^T δu_j,  j = 1, 2, …, n,
or
δd_ujj = −‖δu_j‖_2² / 2.
Since
δw_uj = U^T δu_j,
it follows that
δd_ujj = −‖δw_uj‖_2² / 2.
The above expression shows that δd_ujj is always negative and that the entries |δd_ujj|, j = 1, 2, …, n, depend quadratically on the entries of δW_U. On the other hand, in a linear setting, we have for the jth column of δW_U that
[δw_u1j, …, δw_u{j−1,j}, δw_ujj, δw_u{j+1,j}, …, δw_umj]^T = [δq_u1j, …, δq_u{j−1,j}, 0, δq_u{j+1,j}, …, δq_umj]^T + δd_ujj e_j,  j = 1, 2, …, n,
where e_j is the jth column of I_m. Hence,
‖δw_uj‖_2² = ‖δq_uj + δd_ujj e_j‖_2² = ‖δq_uj‖_2² + δd_ujj².
From (12) and (11), we obtain the quadratic equation
δd_ujj² + 2 δd_ujj + ‖δq_uj‖_2² = 0.
From the two possible solutions to this equation, we take the root
δd_ujj = −‖δq_uj‖_2² / (1 + √(1 − ‖δq_uj‖_2²)),  j = 1, 2, …, n,
since in this case δd_ujj → 0 as ‖δq_uj‖_2 → 0. The expression (14) allows us to find an approximation of δd_ujj from the entries of the matrix δQ_U. For a small perturbation δA (small values of ‖δq_uj‖_2), we have the estimate (also following from (11))
|δd_ujj| ≈ δd_ujj^lin = ‖δq_uj‖_2² / 2.
So, for small perturbations, the quantity |δd_ujj|, j = 1, 2, …, n, depends quadratically on ‖δq_uj‖_2.
Thus, the matrix δW_U can be represented as the sum
δW_U = δQ_U + δD_U − δN_U,
where, according to (8), the matrix δQ_U has entries depending only on the perturbation parameters x,
Applsci 14 01417 i003
and the matrix
Applsci 14 01417 i004
contains second-order terms in δu_j, j = 1, 2, …, n.
Since the matrices δD_U and δN_U contain only second-order terms in the entries of δU, taking into account that x → 0 and consequently δU → 0 as ‖δA‖_2 → 0, it follows from (15) that the matrix δQ_U is the linear approximation of δW_U.    □
Similarly, for the matrix
Applsci 14 01417 i005
as in the case of δW_U, it is possible to show that
δW_V = δQ_V + δD_V − δN_V,
where
Applsci 14 01417 i006
has elements depending only on the perturbation parameters y,
Applsci 14 01417 i007
δd_vjj = v_j^T δv_j,  j = 1, 2, …, n, and the matrix
Applsci 14 01417 i008
contains second-order terms in δv_j, j = 1, 2, …, n. The diagonal entries of δD_V are determined as in the case of the matrix δD_U.
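As a small numerical check of the diagonal correction (14) against its linear estimate (11), one may assume a value for the norm of the jth column of δQ_U (the value below is an arbitrary assumption):

```matlab
% Diagonal correction (14) versus its linear estimate (11).
nq    = 1e-4;                              % assumed ||dq_uj||_2
dd    = -nq^2 / (1 + sqrt(1 - nq^2));      % root (14) of the quadratic (13)
ddlin = -nq^2 / 2;                         % linear estimate
% dd -> 0 as nq -> 0, and dd is close to ddlin for small nq
fprintf('dd = %g, ddlin = %g\n', dd, ddlin);
```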

4. Equations for the Perturbation Parameters

In this section, we derive exact nonlinear equations for the perturbation parameters x and y and equations for their linear approximations. At this stage, we assume that the perturbation δ A is known, but in Section 5 we show how to use these equations to find asymptotic approximations of x and y knowing only the perturbation norm.
The elements of the perturbation parameter vectors x and y can be determined from Equation (5). For this aim, it is appropriate to transform this equation as follows. Taking into account that A V = U Σ = U_1 Σ_n and U^T A = Σ V^T, the equation is represented as
δU^T U_1 Σ_n + [Σ_n; 0_{(m−n)×n}] V^T δV + δF = [Σ̃_n − Σ_n; 0_{(m−n)×n}] + Δ_0,
where
Applsci 14 01417 i009
After transposing (9), we obtain that
δU^T U_1 = −U^T δU_1 − δU^T δU_1.
Substituting in (17) the term δU^T U_1 with the expression on the right-hand side of (18), we obtain
−U^T δU_1 Σ_n + [Σ_n; 0_{(m−n)×n}] V^T δV + δF = [Σ̃_n − Σ_n; 0_{(m−n)×n}] + Δ,
where
Δ = Δ_0 + δU^T δU_1 Σ_n = δU^T δU_1 Σ_n − δU^T A δV − U^T δA δV − δU^T δA V − δU^T δA δV
contains higher-order terms in the entries of δA, δU, and δV.
Replacing the matrices U^T δU_1 and V^T δV by δW_U and δW_V, respectively, (19) is rewritten as
−δW_U Σ_n + [Σ_n; 0_{(m−n)×n}] δW_V + δF = [Σ̃_n − Σ_n; 0_{(m−n)×n}] + Δ
or
−δQ_U Σ_n + [Σ_n; 0_{(m−n)×n}] δQ_V + δF = [Σ̃_n − Σ_n; 0_{(m−n)×n}] + (δD_U − δN_U) Σ_n − Σ (δD_V − δN_V) + Δ.
Note that the matrices δD_U, δN_U, δD_V, δN_V, and Δ contain only higher-order terms in the entries of δA, δU, and δV.
Equation (21) is the basic equation in the perturbation analysis of the SVD performed in this paper. This equation represents a diagonal system of linear equations with respect to the unknown entries of δQ_U and δQ_V, allowing us to solve it efficiently even for high-order matrices. Neglecting the higher-order terms in this equation, we obtain in Section 5 asymptotic bounds for the elements of the SVD. The approximation of these terms makes it possible to determine global perturbation bounds in Section 9.
The entries of the matrices δQ_U and δQ_V can be substituted by the corresponding elements x_k, k = i + (j−1)m − j(j+1)/2, and y_ℓ, ℓ = i + (j−1)n − j(j+1)/2, of the vectors x and y, as shown in the previous section. This leads to the representation of Equation (21) as two matrix equations with respect to two groups of the entries of x,
Applsci 14 01417 i010
and
Applsci 14 01417 i011
where
Δ_1 = δU_1^T δU_1 Σ_n − δU_1^T A δV − U_1^T δA δV − δU_1^T δA V − δU_1^T δA δV,
Δ_2 = δU_2^T δU_1 Σ_n − δU_2^T A δV − U_2^T δA δV − δU_2^T δA V − δU_2^T δA δV,
Δ := [Δ_1; Δ_2],  Δ_1 ∈ R^{n×n},  Δ_2 ∈ R^{(m−n)×n}.
We note that the estimation of Δ_2 requires knowing an estimate of δU_2, which is undetermined.
Equations (22) and (23) are the basic equations of the SVD perturbation analysis. They can be used to obtain asymptotic as well as global perturbation bounds on the elements of the vectors x and y.
Let us introduce the vectors
x^(1) = Ω_1 x  and  x^(2) = Ω_2 x,
where
Ω_1 := diag(ω_1, ω_2, …, ω_n) ∈ R^{q×p},  ω_i := [I_{n−i}, 0_{(n−i)×(m−n)}] ∈ R^{(n−i)×(m−i)},  i = 1, 2, …, n,  Ω_1 Ω_1^T = I_q,  ‖Ω_1‖_2 = 1,
Ω_2 := diag(ω_1, ω_2, …, ω_n) ∈ R^{(m−n)n×p},  ω_i := [0_{(m−n)×(n−i)}, I_{m−n}] ∈ R^{(m−n)×(m−i)},  i = 1, 2, …, n,  Ω_2 Ω_2^T = I_{(m−n)n},  ‖Ω_2‖_2 = 1.
The vector x^(1) contains the elements of the unknown vector x participating in (22), while x^(2) contains the elements of x participating in (23). It is easy to prove that
x = Ω_1^T x^(1) + Ω_2^T x^(2).
Taking into account that
Low((δD_U1 − δN_U1) Σ_n − Σ_n (δD_V − δN_V)) = 0,  Up(δD_U1 Σ_n − Σ_n δD_V) = 0,
the strictly lower part of (22) can be represented column-wise as the following system of linear equations with respect to the unknown vectors x^(1) and y,
−S_1 x^(1) + S_2 y = −f + vec(Low(Δ_1)),
where
S_1 = diag(σ_1, σ_1, …, σ_1 (n−1 times), σ_2, …, σ_2 (n−2 times), …, σ_{n−1} (1 time)),
S_2 = diag(σ_2, σ_3, …, σ_n (n−1 entries), σ_3, …, σ_n (n−2 entries), …, σ_n (1 entry)),  S_i ∈ R^{q×q},  i = 1, 2,
and
f = vec(Low(δF_1)) = Ω_3 vec(δF_1) ∈ R^q,  Ω_3 := diag(ω_1, ω_2, …, ω_n) ∈ R^{q×n²},  ω_i := [0_{(n−i)×i}, I_{n−i}] ∈ R^{(n−i)×n},  i = 1, 2, …, n,  Ω_3 Ω_3^T = I_q,  ‖Ω_3‖_2 = 1.
Similarly, the strictly upper part of (22) is represented row-wise as the system of equations
S_2 x^(1) − S_1 y = −g − vec((Up(δN_U1 Σ_n − Σ_n δN_V − Δ_1))^T),
where
g = vec((Up(δF_1))^T) = Ω_4 vec(δF_1) ∈ R^q, and Ω_4 ∈ R^{q×n²} is the 0–1 matrix that extracts the entries of the strictly upper triangular part of an n × n matrix row by row from its vec representation, Ω_4 Ω_4^T = I_q,  ‖Ω_4‖_2 = 1.
It should be noted that the operators Low and Up , used in (26) and (29), respectively, take only the entries of the strict lower and strict upper part of the corresponding matrix. These entries are then arranged by the operator vec column by column, excluding the zeros above or below the diagonal. For instance, if n = 4 , the elements of the vectors f and g satisfy
Applsci 14 01417 i012
In this way, the solution to (22) reduces to the solution of the two symmetric coupled Equations (26) and (29) with diagonal matrices of size q × q. Equation (23) can be solved independently, yielding
x^(2) = vec((δF_2 − Δ_2) Σ_n^{−1}).
Note that the elements of x^(1) depend on the elements of y and vice versa, while x^(2) depends neither on x^(1) nor on y.
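The structure of the coupled system is easy to reproduce. The sketch below builds S_1 and S_2 from the singular values and solves the first-order version of (26) and (29) (higher-order terms dropped; the right-hand sides f and g are placeholders, and the sign convention is the one adopted above):

```matlab
% Sketch: assemble S1, S2 of (27), (28) and solve the first-order
% coupled system for x^(1) and y (higher-order terms neglected).
sigma = [25.4601869; 7.6843429; 1.2487762; 0.0177092];  % from Example 1
n = numel(sigma);  q = n*(n-1)/2;
s1 = zeros(q,1);  s2 = zeros(q,1);  k = 0;
for j = 1:n-1
    for i = j+1:n
        k = k + 1;
        s1(k) = sigma(j);    % sigma_j repeated n-j times
        s2(k) = sigma(i);    % sigma_i over the pairs i > j
    end
end
S1 = diag(s1);  S2 = diag(s2);
f = 1e-8 * ones(q,1);  g = 1e-8 * ones(q,1);   % placeholder right-hand sides
sol = [-S1, S2; S2, -S1] \ [-f; -g];           % first-order (26), (29)
x1 = sol(1:q);  y = sol(q+1:2*q);
```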

5. Asymptotic Bounds on the Perturbation Parameters

In this section, we determine linear (asymptotic) bounds on the perturbation parameter vectors x and y using only the norm of the perturbation δA.
Equations (26) and (29) can be used to determine asymptotic approximations of the vectors x^(1) and y. The exact solution to these equations satisfies
x^(1) = S_xf (−f + vec(Low(Δ_1))) + S_xg (−g − vec((Up(δN_U1 Σ_n − Σ_n δN_V − Δ_1))^T)),
y = S_yf (−f + vec(Low(Δ_1))) + S_yg (−g − vec((Up(δN_U1 Σ_n − Σ_n δN_V − Δ_1))^T)),
where, taking into account that S_1 and S_2 commute, we have that
S_xf = (S_2 − S_1 S_2^{−1} S_1)^{−1} S_1 S_2^{−1} = (S_2² − S_1²)^{−1} S_1 ∈ R^{q×q},
S_xg = (S_2 − S_1 S_2^{−1} S_1)^{−1} = (S_2² − S_1²)^{−1} S_2 ∈ R^{q×q},
S_yf = S_xg,  S_yg = S_xf.
Exploiting these expressions, it is possible to show that
S_xf = diag(σ_1/(σ_2² − σ_1²), σ_1/(σ_3² − σ_1²), …, σ_1/(σ_n² − σ_1²) (n−1 entries), σ_2/(σ_3² − σ_2²), …, σ_2/(σ_n² − σ_2²) (n−2 entries), …, σ_{n−1}/(σ_n² − σ_{n−1}²) (1 entry)),
S_xg = diag(σ_2/(σ_2² − σ_1²), σ_3/(σ_3² − σ_1²), …, σ_n/(σ_n² − σ_1²) (n−1 entries), σ_3/(σ_3² − σ_2²), …, σ_n/(σ_n² − σ_2²) (n−2 entries), …, σ_n/(σ_n² − σ_{n−1}²) (1 entry)).
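A direct way to evaluate (33) and (34) is to run over the pairs i > j in the same order as in S_1 and S_2; the following sketch reproduces the diagonal entries of S_xf and S_xg for the data of Example 1 below:

```matlab
% Diagonal entries of Sxf and Sxg in (33), (34).
sigma = [25.4601869; 7.6843429; 1.2487762; 0.0177092];
n = numel(sigma);
sxf = [];  sxg = [];
for j = 1:n-1
    for i = j+1:n
        den = sigma(i)^2 - sigma(j)^2;    % negative for decreasing sigmas
        sxf(end+1,1) = sigma(j) / den;    % entry of Sxf
        sxg(end+1,1) = sigma(i) / den;    % entry of Sxg
    end
end
```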
Let us consider the conditions for the existence of a solution to the Equations (26) and (29).
Theorem 1.
The Equations (26) and (29) have a unique solution if and only if the singular values of A are distinct.
Proof. 
Equations (26) and (29) have a unique solution for x^(1) and y if and only if the square symmetric block matrix
[−S_1  S_2;  S_2  −S_1]
is nonsingular or, equivalently, the matrices S_1, S_2, and S_2² − S_1² are nonsingular. The matrices S_1 and S_2 are nonsingular since the matrix A has nonzero singular values. (For the solution of linear systems of equations with block matrices, see ([27], Ch. II).)
In turn, a condition for nonsingularity of the matrix S_2² − S_1² can be found taking into account the structure of the matrices S_xf and S_xg. Clearly, the denominators of the first n − 1 diagonal entries of S_xf and S_xg will be different from zero if σ_1 is distinct from σ_2, σ_3, …, σ_n. Similarly, the denominators of the next group of n − 2 diagonal entries will be different from zero if σ_2 is distinct from σ_3, …, σ_n, and so on. Finally, σ_{n−1} should be different from σ_n. Thus, we conclude that the matrices S_xf and S_xg will exist and Equations (26) and (29) will have a unique solution if and only if the singular values of A are distinct.    □
We note that Theorem 1 is in accordance with the results obtained in [14,28]. Such a result should not come as a surprise since U is the matrix of the transformation of A A T to Schur (diagonal) form U Σ Σ T U T and V is the matrix of the transformation of A T A to diagonal form V Σ T Σ V T . On the other hand, the perturbation problem for the Schur form is well posed only when the matrix eigenvalues (the diagonal elements of Σ Σ T or Σ T Σ ) are distinct.
Neglecting the higher-order terms in (31) and (32) and bounding each element of f and g by the perturbation norm ‖δA‖_2, we obtain the following result.
Lemma 2.
The linear approximations of the vectors x^(1) and y satisfy the componentwise bounds
|x^(1)| ≤ x^(1)_lin,  |y| ≤ y^lin,
x^(1)_lin = (|S_xf| + |S_xg|) h,
y^lin = (|S_yf| + |S_yg|) h,
where S_yg = S_xf and S_yf = S_xg are given by (33) and (34), respectively, and
h = [1, 1, …, 1]^T × ‖δA‖_2 ∈ R^q.
Clearly, if the matrices |S_xf|, |S_xg|, |S_yf|, |S_yg| have large diagonal entries, then the estimates of the perturbation parameters will be large. Using the expressions for S_xf and S_xg, we may show that
‖S_xf‖_2 = max_{i,j} σ_i / |σ_j² − σ_i²|,  i = 1, 2, …, n − 1,  j = i + 1, i + 2, …, n,
‖S_xg‖_2 = max_{i,j} σ_j / |σ_j² − σ_i²|.
Note that the norms of S_xf and S_xg can be considered as condition numbers of the vectors x^(1) and y with respect to the changes in δA.
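Continuing the previous sketch, the bounds (36) and (37) require only ‖δA‖_2 (the value used below is an assumption):

```matlab
% Asymptotic bounds (36), (37), reusing sxf, sxg from the previous sketch.
ndA   = 2.08e-7;                           % assumed ||dA||_2
q     = numel(sxf);
h     = ndA * ones(q, 1);                  % h = [1,...,1]^T * ||dA||_2
x1lin = (abs(sxf) + abs(sxg)) .* h;        % componentwise bound for |x^(1)|
ylin  = (abs(sxg) + abs(sxf)) .* h;        % Syf = Sxg, Syg = Sxf
```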
An asymptotic estimate of the vector x^(2) is obtained by neglecting the higher-order term Δ_2 and approximating the elements of x^(2) according to (30).
Lemma 3.
The linear approximation of the vector x^(2) satisfies
x^(2)_lin = vec(Z),  Z = ‖δA‖_2 ×
[ 1/σ_1  1/σ_2  ⋯  1/σ_n
  1/σ_1  1/σ_2  ⋯  1/σ_n
  ⋮      ⋮          ⋮
  1/σ_1  1/σ_2  ⋯  1/σ_n ] ∈ R^{(m−n)×n}.
Equation (38) shows that the group of m − n elements of x^(2)_lin corresponding to a given column of Z will be large if the singular value associated with that column is small. The presence of large elements in the vector x leads to large entries in δW_U and consequently in the estimate of δU. This observation aligns with the well-known fact that the sensitivity of a singular subspace is inversely proportional to the smallest singular value associated with that subspace.
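A sketch of (38), again needing only ‖δA‖_2 and the singular values (continuing the previous sketches):

```matlab
% Bound (38): each column of Z repeats ||dA||_2 / sigma_j, (m-n) times.
m = 6;  n = 4;
Z = ndA * ones(m-n, 1) * (1 ./ sigma');    % (m-n)-by-n matrix
x2lin = Z(:);                              % x^(2)_lin = vec(Z)
```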
As a result of determining the linear estimates (36)–(38), we obtain an asymptotic approximation of the vector x as
x^lin = Ω_1^T x^(1)_lin + Ω_2^T x^(2)_lin.
It should be emphasized that the determination of the linear bounds x^lin and y^lin requires knowing only the norm of the perturbation δA.
Thanks to the diagonal structure of the matrices S_1, S_2, and Σ, the solutions to Equations (26)–(30) are computed efficiently and with high accuracy. In fact, computing the diagonal elements of the matrices S_xf, S_xg, S_yf, S_yg requires 10q floating point operations (flops), determining x^(1)_lin and y^lin according to (36) and (37) requires 4q flops, and obtaining x^(2)_lin by (38) needs 2(m−n)n flops. Thus, computing x^lin and y^lin requires 14n(n−1)/2 + 2(m−n)n flops. Also, the solution of the diagonal systems of equations is performed very accurately in floating point arithmetic.
Example 1.
Consider the 6 × 4 matrix
A =
[ 3  3  3   6
  3  1  1   8
  3  1  0   9
  3  1  2  11.1
  6  4  6  11.9
  3  1  1  10.1 ]
and assume that it is perturbed by
δA = 10^{−c} × A_0,  A_0 =
[ 5  3  1  2
  7  4  9  5
  3  8  5  4
  2  6  3  7
  5  3  1  9
  4  8  2  6 ],
where c is a varying parameter. (For convenience, the entries of the matrix A_0 are taken as integers.)
The singular value decompositions of the matrices A and A + δ A are computed by the function svd of MATLAB®[29]. The singular values of A are
σ_1 = 25.460186918120350,  σ_2 = 7.684342946021752,  σ_3 = 1.248776186923002,  σ_4 = 0.017709242587950.
In the given case, the matrices S_1 and S_2 in Equations (27) and (28) are
S_1 = diag(25.4601869, 25.4601869, 25.4601869, 7.6843429, 7.6843429, 1.2487762),
S_2 = diag(7.6843429, 1.2487762, 0.0177092, 1.2487762, 0.0177092, 0.0177092),
and the matrices participating in (31) and (32) are
S_xf = diag(−0.0432135, −0.0393717, −0.0392770, −0.1336647, −0.1301354, −0.8009451),
S_xg = diag(−0.0130426, −0.0019311, −0.0000273, −0.0217217, −0.0002999, −0.0113584),
S_yf = S_xg,  S_yg = S_xf.
The matrix
[−S_1  S_2;  S_2  −S_1],
which determines the solution for x^(1) and y, has a condition number with respect to the 2-norm equal to 26.9234179. The exact parameters x_k and their linear approximations x_k^lin computed by using (36) and (38) are shown to eight decimal digits for two perturbation sizes in Table 1. The differences between the values of x_k^lin and x_k are due to the bounding of the elements of the vectors f and g by the value of ‖δA‖_2 and to taking the terms in (36)–(38) with positive signs. Both approximations are necessary to ensure that |x| ≤ x^lin for an arbitrary small-size perturbation.
Similarly, in Table 2, we show for the same perturbations of A the exact perturbation parameters y and their linear approximations obtained from (37).

6. Bounding the Perturbations of the Singular Values

Equation (22) can also be used to determine linear and nonlinear estimates of the perturbations of the singular values. Considering the diagonal elements of this equation (highlighted in green boxes) and taking into account that diag(δN_U1) = 0_n and diag(δN_V) = 0_n, we obtain
δΣ_n = diag(δF_1) − diag(δD_U1 Σ_n − Σ_n δD_V + Δ_1)
or
δσ_i = δf_ii − (δD_U1 Σ_n − Σ_n δD_V + Δ_1)_ii,  i = 1, 2, …, n,
where (·)_ii denotes the ith diagonal element of (·). Neglecting the higher-order terms, we determine the componentwise asymptotic estimate
|δσ_i| ≤ δσ_i^lin,  δσ_i^lin := |δf_ii|,  i = 1, 2, …, n.
Bounding each diagonal element |δf_ii| by ‖δA‖_2, we find the normwise estimate of δσ_i,
|δσ_i| ≤ δσ̂_i,  δσ̂_i := ‖δA‖_2,
which is in accordance with Weyl’s theorem (see (3)).
From (40), we also have that
√(δσ_1² + δσ_2² + ⋯ + δσ_n²) ≤ ‖δF_1‖_F + ‖δD_U1 Σ_n − Σ_n δD_V‖_F + ‖Δ_1‖_F.
In Table 3, we show the exact perturbations of the singular values of the matrix A of Example 1 along with the normwise bound δσ̂_i = ‖δA‖_2 and the asymptotic estimate δσ_i^lin obtained from (41) under the assumption that δA is known. The exact perturbations and their linear bounds are very close.
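When δA itself is available (as in the experiments reported here), the estimates (41) and (42) can be evaluated directly; a minimal sketch, reusing U, V, dA, and n from the earlier sketches:

```matlab
% Estimates (41) and (42) for the singular value perturbations.
dF    = U' * dA * V;                   % dF = U'*dA*V
dslin = abs(diag(dF(1:n, 1:n)));       % |df_ii|: asymptotic estimate (41)
dshat = norm(dA, 2) * ones(n, 1);      % normwise (Weyl-type) bound (42)
```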

7. Asymptotic Bounds on the Perturbations of U 1 and V

Having componentwise estimates for the elements of x and y, it is easy to find asymptotic bounds on the entries of the matrices δU_1 and δV.
Theorem 2.
The asymptotic bound on the matrix |δU_1| is given by
|δU_1| ≤ |U| |U^T δU_1| ≤ δU_1^lin = |U| δW_U^lin,
where  
Applsci 14 01417 i013
and the parameters x_k^lin ≥ 0, k = 1, 2, …, p, are determined by (39).
Proof. 
The proof follows directly from Lemma 1 and Equation (6).    □
The linear approximation δ U 1 l i n gives bounds on the perturbations of the individual elements of the orthogonal transformation matrix U. Note that (42) is strictly valid only for infinitesimally small perturbation δ A .
Similarly, we have that the linear approximation of the matrix δ W V is given by
Applsci 14 01417 i014
From (7), we obtain that
|δV| ≤ δV^lin = |V| δW_V^lin.
Hence, the entries of the matrix δV^lin give asymptotic bounds on the perturbations of the entries of V.
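A sketch of how the bound (42) is assembled in practice, continuing the previous sketches; the interleaving of x^(1)_lin and x^(2)_lin follows (39), and the treatment of the upper triangle (mirroring the corresponding x^lin values, cf. Lemma 1 and (8)) is an assumption of this sketch:

```matlab
% Assemble the full bound vector xlin of (39) from x1lin and x2lin,
% then bound dU1 entrywise as in (42).
p = n*(2*m - n - 1)/2;
xlin = zeros(p, 1);  k = 0;  k1 = 0;  k2 = 0;
for j = 1:n
    for i = j+1:m
        k = k + 1;
        if i <= n
            k1 = k1 + 1;  xlin(k) = x1lin(k1);   % pair within first n rows
        else
            k2 = k2 + 1;  xlin(k) = x2lin(k2);   % pair involving rows n+1:m
        end
    end
end
dWUlin = zeros(m, n);  k = 0;
for j = 1:n
    for i = j+1:m
        k = k + 1;
        dWUlin(i, j) = xlin(k);
        if i <= n, dWUlin(j, i) = xlin(k); end   % mirrored upper bound
    end
end
dU1lin = abs(U) * dWUlin;                        % componentwise bound (42)
```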
Consider the volume of operations necessary to determine the perturbation bounds δU_1^lin and δV^lin provided that the SVD of the matrix A is already computed. According to (42) and (43), the computation of the bounds δU_1^lin and δV^lin requires 2m²n and 2n³ flops, respectively. Adding the 14n(n−1)/2 + 2(m−n)n flops necessary to determine x^lin and y^lin, we find that obtaining the asymptotic componentwise estimates of δU and δV requires altogether
2(m²n + n³ + mn) + 5n² − 7n ≈ 2(m²n + n³)
flops. Thus, the perturbation analysis requires O(m³) operations.
For the matrix A of Example 1, we obtain that the absolute values of the exact changes of the entries of the matrix U_1 for the perturbation δA = 10^{−5} A_0 satisfy
|δU_1| = 10^{−3} ×
[ 0.0020130  0.0013238  0.0330402  0.2407168
  0.0009548  0.0070093  0.0356216  0.9425346
  0.0006562  0.0004107  0.0576633  0.1185295
  0.0000996  0.0028839  0.0020370  0.1517173
  0.0004584  0.0024637  0.0316821  0.3649600
  0.0004974  0.0014734  0.0630623  0.1877213 ]
and their asymptotic componentwise estimates found by using (42) are
δU_1^lin = 10^{−2} ×
[ 0.0018494  0.0052303  0.0215558  0.9587781
  0.0012398  0.0044262  0.0221410  1.4004629
  0.0014125  0.0045112  0.0219559  0.5375840
  0.0017421  0.0041696  0.0159680  0.5367878
  0.0015062  0.0033948  0.0117823  0.6368282
  0.0016843  0.0050287  0.0180170  0.9570612 ].
Also, for the same δA, we have that
|δV| = 10^{−4} ×
[ 0.0188135  0.0293793  0.2621967  0.0365476
  0.0348050  0.0032323  0.0065525  0.2572914
  0.0153096  0.0121733  0.0836556  0.0994065
  0.0036567  0.0106224  0.0446413  0.0002219 ]
and, according to (43),
δV^lin = 10^{−3} ×
[ 0.0110821  0.0341292  0.1616122  0.0371114
  0.0130708  0.0300489  0.0179235  0.1608765
  0.0161344  0.0250809  0.0803601  0.1020482
  0.0056686  0.0191882  0.0672177  0.0164358 ].
It is seen that the magnitudes of the entries of δU_1^lin and δV^lin correctly reflect the magnitudes of the corresponding entries of |δU_1| and |δV|, respectively. Note that the perturbations of the columns of U_1 and V tend to increase with increasing column number.

8. Sensitivity of Singular Subspaces

The sensitivity of the left singular subspace
U_r = span(u_1, u_2, …, u_r),  r ≤ min{m − 1, n},
or the right singular subspace
V_r = span(v_1, v_2, …, v_r),  r ≤ n − 1,
of dimension r is measured by the canonical angles between the corresponding unperturbed and perturbed subspaces ([2], Ch. 4), ([5], Ch. V), [30].
Let the unperturbed left singular subspace corresponding to the first r singular values be denoted by U_r and its perturbed counterpart by Ũ_r, and let U^(r) and Ũ^(r) be the orthonormal bases for U_r and Ũ_r, respectively. Further on, the sensitivity of the singular subspace U_r will be characterized by the maximum canonical angle between U_r and Ũ_r, defined as
cos(Φ_max(U_r, Ũ_r)) = σ_min(U^(r)T Ũ^(r)).
The expression (44) has the disadvantage that if Φ_max is small, then cos(Φ_max) ≈ 1 and the angle Φ_max is not well determined. To avoid this difficulty, instead of cos(Φ_max(U_r, Ũ_r)), it is preferable to work with sin(Φ_max(U_r, Ũ_r)). Let U_⊥^(r) be the orthogonal complement of U^(r), U_⊥^(r)T U^(r) = 0. Then, it is possible to show that [31]
sin(Φ_max(U_r, Ũ_r)) = σ_max(U_⊥^(r)T Ũ^(r)).
Since
Ũ^(r) = U^(r) + δU^(r),
we have that
sin(Φ_max(U_r, Ũ_r)) = σ_max(U_⊥^(r)T δU^(r)).
Equation (46) shows that the sensitivity of the left singular subspace U_r is related to the values of the perturbation parameters x_k = u_i^T δu_j, k = i + (j−1)m − j(j+1)/2, i > r, j = 1, 2, …, r. In particular, for r = 1, the sensitivity of the first column of U (the left singular vector corresponding to σ_1) is determined by
sin(Φ_max(U_1, Ũ_1)) = σ_max(δW_U(2:m, 1)),
for r = 2 one has
sin(Φ_max(U_2, Ũ_2)) = σ_max(δW_U(3:m, 1:2)),
and so on (see Figure 1), where the matrices U_⊥^(r)T δU^(r) = δW_U(r+1:m, 1:r) for different values of r are highlighted in boxes.
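In MATLAB terms, (45) and (46) amount to one singular value computation; a sketch reusing U and Ũ from the earlier sketches:

```matlab
% Largest canonical angle between the r-dimensional left singular
% subspaces of A and A + dA, cf. (45), (46).
r      = 2;
Uperp  = U(:, r+1:end);                  % orthogonal complement of U^(r)
sinPhi = norm(Uperp' * Ut(:, 1:r), 2);   % sigma_max(Uperp^T * Utilde^(r))
Phi    = asin(min(1, sinPhi));           % guard against rounding above 1
```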
Similarly, utilizing the matrix δW_V, it is possible to find the sine of the maximum angle between the unperturbed right singular subspace V_r and its perturbed counterpart Ṽ_r,
sin(Θ_max(V_r, Ṽ_r)) = σ_max(δW_V(r+1:n, 1:r))
(see Figure 2). Hence, if the perturbation parameters are determined, it is possible to find sensitivity estimates of the nested singular subspaces
U_1 = span(U^(1)) = span(u_1),  U_2 = span(U^(2)) = span(u_1, u_2),  …,  U_r = span(U^(r)) = span(u_1, u_2, …, u_r),  r = min{m − 1, n},
and
V_1 = span(V^(1)) = span(v_1),  V_2 = span(V^(2)) = span(v_1, v_2),  …,  V_{n−1} = span(V^(n−1)) = span(v_1, v_2, …, v_{n−1}).
Specifically, as
δW_U =
[ *        *            *            ⋯  *
  x_1      *            *            ⋯  *
  x_2      x_m          *            ⋯  *
  ⋮        ⋮            ⋮                ⋮
  x_{n−1}  x_{m+n−3}    x_{2m+n−6}   ⋯  *
  ⋮        ⋮            ⋮                ⋮
  x_{m−1}  x_{2m−3}     x_{3m−6}     ⋯  x_p ] ∈ R^{m×n},
we have that the exact maximum angle between the unperturbed and perturbed left singular subspaces of dimension r is given by
Φ_max(U_r, Ũ_r) = arcsin(σ_max(δW_U(r+1:m, 1:r))).
Similarly, the maximum angle between the unperturbed and perturbed right singular subspaces of dimension r is
Θ_max(V_r, Ṽ_r) = arcsin(σ_max(δW_V(r+1:n, 1:r))).
Thus, we obtain the following result.
Theorem 3.
The asymptotic estimate of the angle Φ_max(U_r, Ũ_r) between the unperturbed and perturbed left singular subspaces of dimension r satisfies
|Φ_max(U_r, Ũ_r)| ≤ Φ_max^lin(U_r, Ũ_r),  Φ_max^lin(U_r, Ũ_r) = arcsin(‖δW_U^lin(r+1:m, 1:r)‖_2),
where
δW_U^lin =
[ 0            *               *                ⋯  *
  x_1^lin      0               *                ⋯  *
  x_2^lin      x_m^lin         0                ⋯  *
  ⋮            ⋮               ⋮                    ⋮
  x_{n−1}^lin  x_{m+n−3}^lin   x_{2m+n−6}^lin   ⋯  0
  ⋮            ⋮               ⋮                    ⋮
  x_{m−1}^lin  x_{2m−3}^lin    x_{3m−6}^lin     ⋯  x_p^lin ] ∈ R^{m×n}
and the parameters x_k^lin, k = 1, 2, …, p, are determined from (39).
In particular, for the sensitivity of the range R(A) of A, we obtain that
sin(Φ_max^lin(U_n, Ũ_n)) = ‖δW_U^lin(n+1:m, 1:n)‖_2.
Similarly, for the angles between the unperturbed and perturbed right singular subspaces of dimension r, we obtain the linear estimates
|Θ_max(V_r, Ṽ_r)| ≤ Θ_max^lin(V_r, Ṽ_r),  Θ_max^lin(V_r, Ṽ_r) = arcsin(‖δW_V^lin(r+1:n, 1:r)‖_2),
where
δW_V^lin =
[ 0            *              *              ⋯  *
  y_1^lin      0              *              ⋯  *
  y_2^lin      y_n^lin        0              ⋯  *
  ⋮            ⋮              ⋮                  ⋮
  y_{n−1}^lin  y_{2n−3}^lin   y_{3n−6}^lin   ⋯  0 ] ∈ R^{n×n}
and the parameters y_ℓ^lin, ℓ = 1, 2, …, q, are determined from (37).
We note that using separate x and y parameters decouples the SVD perturbation problem and makes it possible to determine the sensitivity estimates of the left and right singular subspaces independently. This is important when the left or right subspace in a pair of singular subspaces is much more sensitive than its counterpart.
Consider as an example the same perturbed matrix A as in Example 1. Computing the matrices δW_U^lin and δW_V^lin, it is possible to estimate the sensitivity of all four pairs of singular subspaces of dimensions r = 1, 2, 3, and 4 corresponding to the chosen ordering of the singular values. In Table 4, we show the actual values of the sensitivity of the left and right singular subspaces and the computed asymptotic estimates (49) and (50) of this sensitivity. To determine the sensitivity of other singular subspaces, it is possible to reorder the singular values in the initial decomposition so that the desired subspace appears in the set of nested singular subspaces. Note that asymptotic estimates of the canonical angles between an arbitrary unperturbed singular subspace U_q, q ≤ min{m − 1, n}, spanned by specific singular vectors combined in the matrix U_q, and its perturbed counterpart can be determined by computing the singular values of the matrix U_⊥q^T δU_q^lin, where U_⊥q is the orthogonal complement of U_q, consisting of all singular vectors that do not participate in U_q, and δU_q^lin is the linear estimate obtained by using (42). Similarly, it is possible to obtain asymptotic perturbation bounds for a desired right singular subspace.

9. Global Perturbation Bounds

Since analytical expressions for the global perturbation bounds of the singular value decomposition are not known at present, we present an iterative procedure for finding estimates of these bounds based on the asymptotic analysis presented above. This procedure is similar to the corresponding iterative schemes proposed in [26] but is more complicated, since the determination of bounds on the parameter vectors x and y must be performed simultaneously because the equations for these parameters are coupled.

9.1. Perturbation Bounds of the Entries of U 2

The main difficulty in determining global bounds of x and y is to find an appropriate approximation of the higher-order term Δ_2 in (23). As is seen from (25), the determination of such an estimate requires knowing the perturbation δU_2, which is not well determined since it contains the columns of the matrix δU_2 = Ũ_2 − U_2. This perturbation satisfies the equations
(U_1 + δU_1)^T (U_2 + δU_2) = 0,
(U_2 + δU_2)^T (U_2 + δU_2) = I_{m−n},
which follow from the orthogonality of the matrix
Ũ = [U_1 + δU_1, U_2 + δU_2].
An estimate of δU_2 can be found based on a suitable approximation of
X = U^T δU_2 =: [X_1; X_2].
As shown in [26], a first-order approximation of the matrix X can be determined using the estimates
X_1^appr = −(I_n + δW_1^T)^{−1} δW_2^T ∈ R^{n×(m−n)},
X_2^appr = −X_1^apprT X_1^appr / 2 ∈ R^{(m−n)×(m−n)},
where δW_1 = U_1^T δU_1, δW_2 = U_2^T δU_1, and, for a sufficiently small perturbation δU_1, the matrix I_n + δW_1^T is nonsingular. (Note that δW_U = [δW_1^T, δW_2^T]^T is already estimated.) Thus, we have that
δU_2^appr = U X^appr.

9.2. Iterative Procedure for Finding Global Bounds of x and y

Global componentwise perturbation bounds of the matrices U and V can be found using nonlinear estimates of the matrices δW_U and δW_V, determined by (15) and (16), respectively. Such estimates are found by correcting the linear estimates of the perturbation parameters x_k = u_i^T δu_j and y_ℓ = v_i^T δv_j on each iteration step. This approach is used in [9], ([32], Ch. 4), and [33] in the normwise perturbation analysis of invariant subspaces and is related to the solution of the nonlinear equation
T x = g + ϕ(x),
where T is a linear operator corresponding to the asymptotic perturbation estimate and ϕ(x) reflects the higher-order terms corresponding to the nonlinear correction of the estimate. In finding the normwise global perturbation bounds, the above operator equation can be solved analytically using the contraction mapping theorem. Unfortunately, in determining componentwise estimates, it is necessary to solve a system of complicated nonlinear equations, and in such a case the determination of analytical nonlinear bounds is difficult. That is why, in finding global bounds, we shall implement an iterative technique similar to the one presented in [26]. Like the asymptotic estimates, the global bounds can be found knowing only the norm of δA. This is performed by approximating in the best possible way the higher-order terms in (15) and (16), using on each step the current approximations of the parameter vectors x and y. We note that the convergence of the iteration is established only experimentally, and the derivation of a convergence proof is a matter for further investigation. Thus, the proposed iterative procedure should be considered an illustrative one.
Consider the case of estimating the matrix δW_U. It is convenient to substitute the terms containing the perturbations δu_j in (15) with the quantities
δw_uj = U^T δu_j,  j = 1, 2, …, n,
which have the same magnitude as δu_j. Since
δu_i^T δu_j = δu_i^T U U^T δu_j = δw_ui^T δw_uj,
the absolute value of the matrix δW_U ∈ R^{m×n} (15) can be bounded as
|δW_U^nonl| = |U^T δU_1| = [|δw_u1|, |δw_u2|, …, |δw_un|] ≤ |δQ_U| + |δD_U| + |δN_U|,
where
Applsci 14 01417 i015
Applsci 14 01417 i016
Applsci 14 01417 i017
The diagonal entries δd_ujj, j = 1, 2, …, n, can be found from the entries of δQ_U by using the approximation (14). Since the unknown column estimates |δw_uj| participate in both sides of (56), it is possible to obtain them as follows. The first column of δW_U is determined from
|δw_u1| = |δq_u1| + |δd_u1|,
where |δq_u1|, |δd_u1| are the first columns of |δQ_U|, |δD_U|, respectively. Then, the next column estimates |δw_uj|, j = 2, 3, …, n, can be determined recursively from
|δw_uj| = |δq_uj| + |δd_uj| + [ |δw_u1^T| |δw_uj| ; |δw_u2^T| |δw_uj| ; … ; |δw_u{j−1}^T| |δw_uj| ; 0 ; … ; 0 ],
which is equivalent to solving the linear system
|S_Uj| |δw_uj| = |δq_uj| + |δd_uj|,
where
|S_Uj| = [ e_1^T − |δw_u1^T| ; e_2^T − |δw_u2^T| ; … ; e_{j−1}^T − |δw_u{j−1}^T| ; e_j^T ; … ; e_m^T ] ∈ R^{m×m}
and e_j is the jth column of I_m. The matrix |S_Uj| is upper triangular with unit diagonal, and if |δw_ui|, i = 1, 2, …, j − 1, have small norms, then the matrix |S_Uj| is diagonally dominant. Hence, it is very well conditioned, with a condition number close to 1.
As a result, we obtain that
|δw_uj| ≤ |S_Uj|^{−1} (|δq_uj| + |δd_uj|),
which produces the jth column of |δW_U^nonl|.
A similar recursive procedure can be used to determine the quantities |δw_vj|, whose entries are |v_i^T δv_j|. In this case, for each j it is necessary to solve the nth-order linear system
|S_Vj| |δw_vj| = |δq_vj| + |δd_vj|.
The estimates of |δw_uj|, j = 1, 2, …, n, and |δw_vj|, j = 1, 2, …, n, thus obtained are used to bound the absolute values of the higher-order terms Δ_1 and Δ_2 given in (24) and (25), respectively. Utilizing the approximation of U^T δU_2, it is possible to find an approximation of the matrix U^T δU as
Z = [δW_U^nonl, δU_2^appr] := [Z_1, Z_2],  Z_1 ∈ R^{m×n},  Z_2 ∈ R^{m×(m−n)},
where δU_2^appr = U X^appr,
X^appr = [X_1^appr; X_2^appr],
and X_1^appr, X_2^appr are given by (53) and (54). Then, the elements of Δ_1, Δ_2 are bounded according to (24) and (25) as
|Δ_1ij^nonl| ≤ σ_j |Z_1i|^T |Z_1j| + |Z_1i|^T Σ |Y_j| + (‖Z_1i‖_2 + ‖Y_j‖_2 + ‖Z_1i‖_2 ‖Y_j‖_2) ‖δA‖_2,  i = 1, 2, …, n,  j = 1, 2, …, n,
|Δ_2ij^nonl| ≤ σ_j |Z_2i|^T |Z_1j| + |Z_2i|^T Σ |Y_j| + (‖Z_2i‖_2 + ‖Y_j‖_2 + ‖Z_2i‖_2 ‖Y_j‖_2) ‖δA‖_2,  i = 1, 2, …, m − n,  j = 1, 2, …, n,
where Z_1i and Z_2i denote the ith columns of Z_1 and Z_2, and Y_j = |δw_vj|.
Utilizing (31) and (32), the nonlinear corrections of the vectors x^(1) and y can be determined from
δx^(1) = |S_xf| vec(Low(|Δ_1^nonl|)) + |S_xg| vec((Up(|δN_U1| Σ_n + Σ_n |δN_V| + |Δ_1^nonl|))^T),
δy = |S_yf| vec(Low(|Δ_1^nonl|)) + |S_yg| vec((Up(|δN_U1| Σ_n + Σ_n |δN_V| + |Δ_1^nonl|))^T),
where |δN_U1| is estimated by using the corresponding expression in (56) and |δN_V| is estimated by using a similar expression.
The nonlinear correction of x^(2) is found from
δx^(2) = vec(Z),  Z = ‖Δ_2^nonl‖_2 ×
[ 1/σ_1  1/σ_2  ⋯  1/σ_n
  1/σ_1  1/σ_2  ⋯  1/σ_n
  ⋮      ⋮          ⋮
  1/σ_1  1/σ_2  ⋯  1/σ_n ],
and the total correction vector is determined from
δx = Ω_1^T δx^(1) + Ω_2^T δx^(2).
Now, the nonlinear estimates of the vectors x and y are found from
x^nonl = x^lin + δx,
y^nonl = y^lin + δy.
In this way, we obtain an iterative scheme for finding simultaneously nonlinear estimates of the coupled perturbation parameter vectors x and y involving Equations (56)–(59) and (60)–(65). In the numerical experiments presented below, the initial conditions are chosen as x_0^nonl = eps × [1, 1, …, 1]^T and y_0^nonl = eps × [1, 1, …, 1]^T, where eps is the value returned by the MATLAB® function eps, eps = 2^{−52}. The stopping criteria for the x- and y-iterations are taken as
err_x = ‖x_s^nonl − x_{s−1}^nonl‖_2 / ‖x_{s−1}^nonl‖_2 < tol,  err_y = ‖y_s^nonl − y_{s−1}^nonl‖_2 / ‖y_{s−1}^nonl‖_2 < tol,
where tol = 10 × eps. The scheme converges for perturbations δA of restricted size. It is possible that y converges while x does not.
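The overall structure of the scheme can be summarized by the following skeleton, where correction() is a hypothetical helper standing for the updates (56)–(63); it is not part of the M-files mentioned in Section 10:

```matlab
% Skeleton of the iterative refinement of Section 9.2 (illustrative only).
xs = eps * ones(p, 1);  ys = eps * ones(q, 1);   % starting values
tol = 10 * eps;  maxit = 50;
for s = 1:maxit
    [dx, dy] = correction(xs, ys);   % hypothetical helper: (56)-(63)
    xnew = xlin + dx;  ynew = ylin + dy;         % corrections (64), (65)
    errx = norm(xnew - xs) / norm(xs);
    erry = norm(ynew - ys) / norm(ys);
    xs = xnew;  ys = ynew;
    if errx < tol && erry < tol
        break
    end
end
```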
The nonlinear estimate of the higher-order term Δ_1 can be used to obtain nonlinear corrections of the singular value perturbations. Based on (40), a nonlinear correction of each singular value can be determined as
δσ_i^corr = (|δD_U1| Σ_n + Σ_n |δD_V| + |Δ_1|)_ii,
so that the corresponding singular value perturbation is estimated as
δσ_i^nonl = δσ_i^lin + δσ_i^corr.
Note that δσ_i^lin = |δf_ii| is known only when the entries of the perturbation δA are known, and usually this is not fulfilled in practice. Nevertheless, the nonlinear correction (66) can be useful in estimating the sensitivity of a given singular value.
In Table 5, we present the number of iterations necessary to find the global bound x^nonl for the problem considered in Example 1 with perturbations δA = 10^{−c} × A_0, c = 10, 9, …, 3. In the last two columns of the table, we give the norm of the exact higher-order term Δ and the norm ‖Δ^nonl‖_2 of its approximation computed according to (58) and (59) (the approximation is given for the last iteration). In particular, for the perturbation δA = 10^{−5} A_0, the exact higher-order term Δ, found using (24) and (25), is
|Δ| = 10^{−7} ×
[ 0.0045563  0.0006839  0.0146195  0.0257942
  0.0012737  0.0008692  0.0080543  0.0003708
  0.0000993  0.0015257  0.0086067  0.0085598
  0.0019107  0.0007767  0.0258012  0.0170781
  0.0032494  0.0006701  0.0423789  0.2046100
  0.0064032  0.0004567  0.0384765  0.0071094 ],  ‖Δ‖_2 = 2.12746 × 10^{−8}.
Implementing the described iterative procedure, after 10 iterations we obtain the nonlinear bound
Δ^nonl = 10^{−5} ×
[ 0.0019716  0.0021343  0.0049111  0.0047911
  0.0041273  0.0053422  0.0069174  0.0069309
  0.0189857  0.0186412  0.0222135  0.0179503
  0.8671799  0.8680880  0.8664831  0.8607933
  0.5146400  0.5162827  0.5207565  0.2618103
  0.5146400  0.5162827  0.5207565  0.2618103 ],  ‖Δ^nonl‖_2 = 2.16338 × 10^{−5},
computed according to (58) and (59) on the basis of the nonlinear bound x^nonl.
The global bounds ‖x^nonl‖_2 and ‖y^nonl‖_2, found for different perturbations along with the values of ‖x‖_2 and ‖y‖_2, are shown in Table 6. The results confirm that the global estimates of x and y are close to the corresponding asymptotic estimates.
In Figure 3 and Figure 4, we show the convergence of the relative errors
err_x = ‖x_s^nonl − x_{s−1}^nonl‖ / ‖x_{s−1}^nonl‖
and
err_y = ‖y_s^nonl − y_{s−1}^nonl‖ / ‖y_{s−1}^nonl‖,
respectively, at step s of the iterative process for different perturbations δA = 10^{−c} × A_0. (The value of the relative error at each step is shown by a circle.) As seen from the figures, for the given example, the convergence of y is close to the convergence of x. For all values of c except c = 3, the relative error begins with values err_x, err_y ≫ 1 and ends with values err_x, err_y < 10^{−13}. With increasing perturbation size, the convergence becomes slower, and for c = 3 (‖δA‖_2 = 2.08198 × 10^{−2}), the iteration diverges. This demonstrates the restricted usefulness of the nonlinear estimates, which are valid only for limited perturbation magnitudes.
In Table 7, we give normwise perturbation bounds of the singular values along with the actual singular value perturbations and their global bounds found for two perturbations of A under the assumption that the linear bounds of all singular values are known. As can be seen from the table, the nonlinear estimates of the singular values are very tight.

9.3. Global Perturbation Bounds of δ U 1 and δ V

Having nonlinear bounds on x, y, |δW_U|, and |δW_V|, we may find nonlinear bounds on the perturbations of the entries of U_1 and V according to the relationships
δU_1^nonl = |U| δW_U^nonl,
δV^nonl = |V| δW_V^nonl.
For the perturbations of the orthogonal matrices of Example 1 in the case of δA = 10^{−5} A_0, we obtain the nonlinear componentwise bounds
δU_1^nonl = 10^{−2} ×
[ 0.0018797  0.0053307  0.0221710  0.9714224
  0.0012678  0.0045195  0.0227162  1.4182521
  0.0014492  0.0046330  0.0227031  0.5447168
  0.0017643  0.0042429  0.0164150  0.5441097
  0.0015182  0.0034345  0.0120243  0.6452928
  0.0017076  0.0051060  0.0184868  0.9697071 ]
and
δV^nonl = 10^{−3} ×
[ 0.0110869  0.0341506  0.1618907  0.0371700
  0.0130756  0.0300673  0.0179387  0.1611512
  0.0161396  0.0250967  0.0804756  0.1022096
  0.0056705  0.0191966  0.0673203  0.0164507 ].
These bounds are close to the respective linear estimates δU_1^lin and δV^lin obtained in Section 7.
Based on (47) and (48), global estimates of the maximum angles between the unperturbed and perturbed singular subspaces of dimension r can be obtained using the nonlinear bounds δW_U^nonl and δW_V^nonl of the matrices δW_U and δW_V, respectively. For the pair of left and right singular subspaces of dimension r, we obtain that
Φ_max^nonl(U_r, Ũ_r) = arcsin(‖δW_U^nonl(r+1:m, 1:r)‖_2),  r ≤ min{m − 1, n},
Θ_max^nonl(V_r, Ṽ_r) = arcsin(‖δW_V^nonl(r+1:n, 1:r)‖_2),  r ≤ n − 1.
In Table 8, we give the exact angles between the unperturbed and perturbed left and right singular subspaces of different dimensions and their nonlinear bounds computed using (69) and (70) for the matrix A from Example 1 and two perturbations δA = 10^{−c} × A_0, c = 10, 5. The comparison with the corresponding linear bounds in Table 4 shows that the two types of bounds produce close results. As in estimating the other elements of the singular value decomposition, the global perturbation bounds are slightly larger than the corresponding asymptotic estimates but give guaranteed bounds on the changes of the respective elements, although for a limited size of δA.

10. Numerical Experiments

In this section, we present the results of some numerical experiments illustrating the properties of the asymptotic and global estimates obtained in the paper. The computations are performed with MATLAB®Version 9.9 (R2020b) [29] using IEEE double-precision arithmetic and are verified by using GNU Octave, v. 5.2.0. The M-files implementing the linear and nonlinear SVD perturbation estimates along the example files are available from the authors.
Example 2.
This example illustrates the ill-conditioning of the singular subspaces in the case of close singular values of the given matrix.
Consider a 10 × 3 matrix A obtained as

A = U_0 Σ_0 V_0^T,

where

U_0 = I - 2ee^T/10,  e = [1, 1, …, 1]^T,  V_0 = I - 2hh^T/3,  h = [1, 1, 1]^T

are orthogonal and symmetric matrices (elementary reflectors, ([2], Ch. 4)) and

diag(Σ_0) = (2, 1 + τ, 1),

where the parameter τ varies between 10^{-8} and 10^{-1}.
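A minimal MATLAB/Octave sketch of this construction (the variable names, the particular value of τ, and the size of the perturbation are our illustrative choices):

    m = 10; n = 3; tau = 1e-4;             % tau is varied between 1e-8 and 1e-1
    e = ones(m,1); h = ones(n,1);
    U0 = eye(m) - 2*(e*e')/m;              % orthogonal and symmetric reflector
    V0 = eye(n) - 2*(h*h')/n;
    Sigma0 = [diag([2, 1+tau, 1]); zeros(m-n,n)];
    A = U0*Sigma0*V0';                     % sigma_2 and sigma_3 differ by tau
    dA = 1e-10*rand(m,n);                  % an illustrative small perturbation
    [U,~,~]  = svd(A);
    [Ut,~,~] = svd(A + dA);
    Phi2 = subspace(U(:,1:2), Ut(:,1:2));  % largest principal angle between U_2 and its perturbation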
In Figure 5, we present the actual values of ‖δU_1‖_2, ‖x‖_2 and the angle |Φ_max^{(2)}| along with the asymptotic normwise estimate ‖δU_1^{lin}‖_2 as functions of the difference τ = σ_2 - σ_3 = 10^{-8}, …, 10^{-1}. Similarly, in Figure 6, we show the values of ‖δV‖_2, ‖y‖_2 and the angle |Θ_max^{(2)}| together with the asymptotic estimate ‖δV^{lin}‖_2 as functions of τ. Note that Φ_max^{(2)} is the larger angle between the subspaces U^{(2)} = span(u_1, u_2) and Ũ^{(2)} = span(ũ_1, ũ_2), while Θ_max^{(2)} = Θ_max(V^{(2)}, Ṽ^{(2)}).
The results shown in Figure 5 and Figure 6 confirm that the sensitivity of the SVD increases as the distance between the singular values decreases. According to (33), (34), (36) and (37), as τ decreases, the conditioning of the singular subspaces worsens and the norms of the perturbations δU_1, δV increase. Hence, a potential source of ill-conditioning of the singular subspaces is the closeness of the singular values of the matrix. Another cause of ill-conditioning may be the presence of small singular values which, according to (38), leads to large elements of the vector x^{(2)} and large norms of δU_1, δV. The separation of the parameter vector x into two parts, x^{(1)} and x^{(2)}, reveals the independent importance of these two causes.
Example 3.
In this example, we compare the perturbation bounds of the singular subspaces derived in this paper with some known bounds from the literature.
For the matrix A and the perturbation δA given in the previous example, we first compare the sensitivity of the singular vector v_3, associated with the minimum singular value of A, with the respective estimate of sin Θ_3(v_3, ṽ_3) presented in ([5], Example 2, p. 267).
In Figure 7, we show the exact value of |sin Θ_3|, the estimate given in [5], and the linear estimate sin Θ_3^{lin} derived in this paper as functions of the parameter τ. (The estimate from [5] is valid for τ > ‖δA‖_2.) Both estimates are very close for all values of τ.
We now compare the sensitivity of the left U_2 and right V_2 singular subspaces of the same matrix with the bounds derived in [13] and the nonlinear estimates obtained in this paper. Since the estimates in [13] require knowing the norms of parts of the exact perturbation δA, these norms are replaced by ‖δA‖_2 for a fair comparison.
In Figure 8, we show the exact value of ‖tan Φ‖_F = sqrt(tan^2 Φ_1 + tan^2 Φ_2), the respective estimate from [13], and the estimate ‖tan Φ^{nonl}‖_F = sqrt((tan Φ_1^{nonl})^2 + (tan Φ_2^{nonl})^2), where Φ_1 and Φ_2 are the canonical angles between U_2 and Ũ_2. The corresponding comparison of the exact value of ‖tan Θ‖_F = |tan Θ_1| with the estimate from [13] and the estimate ‖tan Θ^{nonl}‖_F = tan Θ_1^{nonl} is given in Figure 9. It is seen from the figures that for values of τ between 9 × 10^{-9} and 10^{-6} the estimates from [13] produce slightly better results, but for values of τ less than 9 × 10^{-9} these estimates significantly exceed the nonlinear estimates obtained in this paper. (Note that the estimates derived in [13] are valid for τ ≥ 6.498 × 10^{-9}.) The comparison shows that for ill-conditioned problems (small values of τ) the bound presented in this paper is less conservative than the estimate given in [13].
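The canonical angles entering these quantities can be computed from the SVD of the cross-product of orthonormal bases, in the spirit of [31]; a sketch reusing U and Ut from the previous code fragment (for very small angles a sine-based formula is preferable):

    r = 2;
    cosines = svd(U(:,1:r)'*Ut(:,1:r));   % cosines of the canonical angles
    cosines = min(max(cosines, 0), 1);    % guard against rounding outside [0,1]
    Phi = acos(cosines);                  % canonical angles between U_2 and its perturbation
    tanPhi_F = norm(tan(Phi));            % ||tan Phi||_F as plotted in Figure 8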
Example 4.
This example illustrates the properties of the linear and nonlinear perturbation bounds obtained in the paper.
Consider a 100 × 80 matrix taken as

A = U_0 [ Σ_0 ; 0 ] V_0^T,

where Σ_0 = diag(1, 1, …, 1) and the matrices U_0 and V_0 are constructed as proposed in [34]:

U_0 = M_2 S_U M_1,  M_1 = I_m - 2ee^T/m,  M_2 = I_m - 2ff^T/m,
e = [1, 1, 1, …, 1]^T,  f = [1, -1, 1, …, (-1)^{m-1}]^T,
S_U = diag(1, σ, σ^2, …, σ^{m-1}),
V_0 = N_2 S_V N_1,  N_1 = I_n - 2gg^T/n,  N_2 = I_n - 2hh^T/n,
g = [1, 1, 1, …, 1]^T,  h = [1, -1, 1, …, (-1)^{n-1}]^T,
S_V = diag(1, τ, τ^2, …, τ^{n-1}),

and the matrices M_1, M_2, N_1, N_2 are Householder reflections. The condition numbers of U_0 and V_0 with respect to inversion are controlled by the variables σ and τ and are equal to σ^{m-1} and τ^{n-1}, respectively. In the given case, σ = 1.05, τ = 1.1, and cond(U_0) = 125.2, cond(V_0) = 186.2. The minimum singular value of the matrix A is σ_min(A) = 0.103068025192609. The perturbation of A is taken as δA = 10^c × A_0, where c is a negative number and A_0 is a matrix with random entries generated by the MATLAB® function rand.
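A sketch of this test-matrix generation in MATLAB/Octave (the variable names and the particular value c = -8 are our illustrative choices):

    m = 100; n = 80; sigma = 1.05; tau = 1.1;
    e = ones(m,1); f = (-1).^(0:m-1)';     % f = [1, -1, 1, ..., (-1)^(m-1)]'
    g = ones(n,1); h = (-1).^(0:n-1)';
    M1 = eye(m) - 2*(e*e')/m; M2 = eye(m) - 2*(f*f')/m;  % Householder reflections
    N1 = eye(n) - 2*(g*g')/n; N2 = eye(n) - 2*(h*h')/n;
    U0 = M2*diag(sigma.^(0:m-1))*M1;       % cond(U0) = sigma^(m-1)
    V0 = N2*diag(tau.^(0:n-1))*N1;         % cond(V0) = tau^(n-1)
    A  = U0*[eye(n); zeros(m-n,n)]*V0';    % 100-by-80 test matrix
    dA = 10^(-8)*rand(m,n);                % random perturbation delta A = 10^c * A_0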
In Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15, we show several results related to the perturbations of the singular value decomposition of A for 30 values of c between -12 and -5. As particular examples, in Figure 10 we display the perturbations of the entry U_{60,20}, which is an element of the matrix U_1, and in Figure 11, we display the perturbations of the entry V_{60,20}, both as functions of ‖δA‖_2. The componentwise linear bounds correctly reflect the behavior of the actual perturbations and are valid over wide changes in the perturbation size. Note that this holds for all elements of U and V. The global (nonlinear) bounds practically coincide with the linear bounds but do not exist for perturbations whose size is larger than 1.5 × 10^{-4}.
In Figure 12 and Figure 13, we show the angles between the perturbed and unperturbed left

U_{50} = span(u_1, u_2, …, u_{50})

and right

V_{50} = span(v_1, v_2, …, v_{50})

singular subspaces of dimension 50. Again, the linear bounds on the angles |Φ_max(U_{50}, Ũ_{50})| and |Θ_max(V_{50}, Ṽ_{50})| are valid for perturbation magnitudes from ‖δA‖_2 = 10^{-10} to ‖δA‖_2 = 10^{-4}, and this also holds for singular subspaces of other dimensions. Note that for sufficiently large ‖δA‖_2, the linear estimates also become invalid.
In Figure 14 and Figure 15, we show the perturbations of the singular values and their nonlinear bounds for two different perturbations with ‖δA‖_2 = 2.7282784 × 10^{-10} and ‖δA‖_2 = 4.3932430 × 10^{-4}. While in the first case the nonlinear bound δσ_i^{nonl} is close to the actual change δσ_i of the singular values, in the second case the bound becomes significantly greater than the actual change due to the overestimation of the higher-order term Δ_1.
From the results shown in Figure 10, Figure 11, Figure 12 and Figure 13, it follows that for the 100 × 80 matrix in the example under consideration, the overestimates of the perturbations of U and V are approximately 200 times, while the overestimates of Φ_max and Θ_max are approximately 100 and 170 times, respectively. In Figure 16 and Figure 17, we show the ratios δU_{ij}^{lin}/|δU_{ij}|, i = 1, 2, …, m; j = 1, 2, …, n, and δV_{ij}^{lin}/|δV_{ij}|, i = 1, 2, …, n; j = 1, 2, …, n, respectively, along with the corresponding mean values E(δU_{ij}^{lin}/|δU_{ij}|) and E(δV_{ij}^{lin}/|δV_{ij}|) of the ratios for δA = 10^{-8} × A_0 (‖δA‖_2 = 4.4679 × 10^{-7}). The overestimates are mainly due to overestimating the perturbation parameters x^{(1)}, y, and x^{(2)} in Equations (36)-(38). In these equations, each entry δA_{ij} is replaced by the quantity ‖δA‖_2, leading to an overestimate approximately proportional to m in the case of random perturbations. To illustrate this point, in Figure 18 and Figure 19 we show the entries of the matrices |δU_1| and |δV|, respectively, along with the corresponding componentwise estimates δU^{lin} and δV^{lin} for δA = 10^{-8} × A_0 (‖δA‖_2 = 4.4679 × 10^{-7}), where the asymptotic estimates are computed for the exact (non-approximated) values of the perturbation parameters x_k = u_i^T δu_j and y_k = v_i^T δv_j, obtained using the exact δA. The estimates are very close to the exact values, which confirms their good quality. The small differences between the actual quantities and their estimates are caused by taking the absolute values of U, V and δW_U, δW_V in finding the asymptotic bounds δU^{lin} = |U| δW_U^{lin} and δV^{lin} = |V| δW_V^{lin}. We note that such overestimates are inevitable if we want guaranteed asymptotic perturbation bounds of the SVD elements; they can be significantly reduced only by using probabilistic perturbation bounds [28].
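For completeness, the exact perturbation parameters used in this comparison can be sampled directly from the two singular value decompositions once the sign ambiguity of the singular vectors is resolved. A sketch reusing A and dA from the construction above (the sign-alignment convention is our assumption about how the comparison is organized):

    [U,S,V]    = svd(A);
    [Ut,St,Vt] = svd(A + dA);
    % Align the signs of the perturbed singular vectors with the unperturbed ones.
    s = sign(diag(U(:,1:n)'*Ut(:,1:n)))';  % 1-by-n sign pattern
    Ut(:,1:n) = Ut(:,1:n).*s;  Vt = Vt.*s;
    dU = Ut(:,1:n) - U(:,1:n); dV = Vt - V;
    Xexact = U'*dU;                        % (i,j) entry equals u_i'*(delta u_j)
    Yexact = V'*dV;                        % (i,j) entry equals v_i'*(delta v_j)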
Example 5.
This example visualizes the componentwise estimates of the orthogonal matrices and the singular values for a 200 × 150 matrix.
Consider a matrix A with m = 200, n = 150, constructed as in the previous example for σ = 1.05 and τ = 1.0. The perturbation of A is taken as δA = 10^{-9} × A_0, where A_0 is a matrix with random entries.
In Figure 20, Figure 21 and Figure 22, we show the entries of the matrix |δU_1|, the absolute values of the exact changes of all 150 singular values, and the entries of the matrix |δV|, respectively, along with the corresponding componentwise estimates δU^{lin}, δσ_i^{nonl}, and δV^{lin}. The computation of the linear bounds requires the solution of a system of 2q = 22350 linear equations for x^{(1)} and y and (m - n)n = 7500 equations for x^{(2)}. The nonlinear bounds of δU_1 and δV are found in only 7 iterations and are visually indistinguishable from the corresponding linear bounds.
The perturbation bounds derived were also tested on higher-order examples, including a 500 × 300 example.
The examples presented in this section confirm that the new componentwise perturbation bounds of the singular value decomposition can be used efficiently in the analysis of high-order problems and may compare favorably with known bounds in some particular cases, for instance in the sensitivity analysis of the singular subspaces. The componentwise bounds of the perturbations in the orthogonal matrices U and V in the last example clearly show that the perturbation magnitudes of specific entries may differ by a factor of 10^5, so that normwise perturbation bounds are not informative for such examples. It should also be emphasized that the asymptotic estimates, although not global, provide valid perturbation bounds for sufficiently large perturbations of the given matrix.

11. Conclusions

The paper presents new results related to the perturbation analysis of the singular value decomposition of a real rectangular matrix of full column rank. New asymptotic componentwise perturbation bounds are derived for the orthogonal matrices participating in the decomposition, and an alternative method for computing the sensitivity of the singular subspaces is proposed. The possibility of finding non-local (global) bounds is illustrated using a simple iterative procedure.
A potential disadvantage of the proposed perturbation bounds is their conservatism, i.e., the large difference between the bounds and the corresponding actual perturbations, especially for large values of m and n. This is due to the necessity of replacing the entries of the actual perturbation in the derived bounds by its 2-norm, which leads to pessimistic estimates. This conservatism can be reduced using probabilistic perturbation bounds, a matter of further research.
The singular value decomposition perturbation analysis presented in this paper has some peculiarities that make it a challenging problem. On the one hand, the SVD analysis is simpler than that of other problems, such as the perturbation analysis of the orthogonal decomposition to triangular form (the QR decomposition). This is due to the diagonal form of the decomposed matrix, which, among other things, allows the equations for the perturbation parameters to be solved easily, avoiding the use of the Kronecker product. On the other hand, the presence of two orthogonal matrices in the decomposition requires the introduction of two different parameter vectors, which are mutually dependent due to the relationship between the perturbations of the two orthogonal matrices. This makes it necessary to solve a coupled system of equations for the parameter vectors, complicating the analysis.
The analysis performed in the paper reveals two causes of ill-conditioning of the singular subspaces of a matrix. The first is the closeness of some singular values of A, which leads to large elements of the vector |x^{(1)}| and, consequently, to large entries of the perturbation |δU_1|. The second is the presence of small singular values of A, which is reflected in large elements of the vector |x^{(2)}| and also leads to large values of the respective entries of |δU_1|. Significantly, these two causes are independent of each other.

Author Contributions

Conceptualization, P.P.; methodology, V.A. and P.P.; software, P.P.; validation, V.A.; writing—original draft preparation, P.P.; writing—review and editing, V.A. and P.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated during the current study are available from the authors upon reasonable request. The data are not publicly available due to technical reasons.

Acknowledgments

The authors are grateful to the reviewers for carefully reading the manuscript and the remarks and suggestions that improved the presentation.

Conflicts of Interest

The authors declare no conflicts of interest.

Notation

R  the set of real numbers;
R^{m×n}  the space of real m × n matrices (R^n = R^{n×1});
R(A)  the range of A;
span(u_1, u_2, …, u_n)  the subspace spanned by the vectors u_1, u_2, …, u_n;
X^⊥  the orthogonal complement of the subspace X;
|A|  the matrix of the absolute values of the elements of A;
A^T  the transpose of A;
A^{-1}  the inverse of A;
a_j  the jth column of A;
A_{i,1:n}  the ith row of the m × n matrix A;
A_{i_1:i_2, j_1:j_2}  the part of the matrix A from row i_1 to row i_2 and from column j_1 to column j_2;
δA  the perturbation of A;
O(‖δA‖^2)  a quantity of second order of magnitude with respect to ‖δA‖;
0_{m×n}  the zero m × n matrix;
I_n  the unit n × n matrix;
e_j  the jth column of I_n;
σ_i(A)  the ith singular value of A;
σ_min(A), σ_max(A)  the minimum and maximum singular values of A, respectively;
≤  relation of partial order: if a, b ∈ R^n, then a ≤ b means a_i ≤ b_i, i = 1, 2, …, n;
Low(A)  the strictly lower triangular part of A ∈ R^{n×n};
Up(A)  the strictly upper triangular part of A ∈ R^{n×n};
vec(A)  the vec mapping of A ∈ R^{m×n}: if A is partitioned column-wise as A = [a_1, a_2, …, a_n], then vec(A) = [a_1^T, a_2^T, …, a_n^T]^T;
‖x‖  the Euclidean norm of x ∈ R^n;
‖A‖_2  the spectral norm of A;
Θ_max(X, Y)  the maximum angle between the subspaces X and Y.

References

1. Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: Cambridge, UK, 2013; ISBN 978-0-521-83940-2.
2. Stewart, G.W. Matrix Algorithms: Volume I: Basic Decompositions; SIAM: Philadelphia, PA, USA, 1998; ISBN 0-89871-414-1.
3. Golub, G.H.; Van Loan, C.F. Matrix Computations, 4th ed.; The Johns Hopkins University Press: Baltimore, MD, USA, 2013; ISBN 978-1-4214-0794-4.
4. Stewart, G.W. On the early history of the singular value decomposition. SIAM Rev. 1993, 35, 551–566.
5. Stewart, G.W.; Sun, J.-G. Matrix Perturbation Theory; Academic Press: New York, NY, USA, 1990; ISBN 978-0126702309.
6. Wilkinson, J.H. The Algebraic Eigenvalue Problem; Clarendon Press: Oxford, UK, 1965; ISBN 0-198-53418-3.
7. Gohberg, I.; Koltracht, I. Mixed, componentwise, and structured condition numbers. SIAM J. Matrix Anal. Appl. 1993, 14, 688–704.
8. Wedin, P.-A. Perturbation bounds in connection with singular value decomposition. BIT Numer. Math. 1972, 12, 99–111.
9. Stewart, G.W. Error and perturbation bounds for subspaces associated with certain eigenvalue problems. SIAM Rev. 1973, 15, 727–764.
10. Sun, J.-G. Perturbation expansions for invariant subspaces. Lin. Alg. Appl. 1991, 153, 85–97.
11. Vaccaro, R.J. A second-order perturbation expansion for the SVD. SIAM J. Matrix Anal. Appl. 1994, 15, 661–671.
12. Stewart, G.W. Perturbation Theory for the Singular Value Decomposition; Technical Report CS-TR 2539; University of Maryland: College Park, MD, USA, 1990.
13. Sun, J.-G. Perturbation analysis of singular subspaces and deflating subspaces. Numer. Math. 1996, 73, 235–263.
14. Barlow, J.; Demmel, J. Computing accurate eigensystems of scaled diagonally dominant matrices. SIAM J. Numer. Anal. 1990, 27, 762–791.
15. Barlow, J.; Slapničar, I. Optimal perturbation bounds for the Hermitian eigenvalue problem. Lin. Alg. Appl. 2000, 309, 19–43.
16. Demmel, J.; Gu, M.; Eisenstat, S.; Slapničar, I.; Veselić, K.; Drmač, Z. Computing the singular value decomposition with high relative accuracy. Lin. Alg. Appl. 1999, 299, 21–80.
17. Dopico, F.M.; Moro, J. A note on multiplicative backward errors of accurate SVD algorithms. SIAM J. Matrix Anal. Appl. 2004, 25, 1021–1031.
18. Drmač, Z.; Veselić, K. New fast and accurate Jacobi SVD algorithm: I. SIAM J. Matrix Anal. Appl. 2008, 29, 1322–1342.
19. Drmač, Z.; Veselić, K. New fast and accurate Jacobi SVD algorithm: II. SIAM J. Matrix Anal. Appl. 2008, 29, 1343–1362.
20. Anderson, E.; Bai, Z.; Bischof, C.; Blackford, S.; Demmel, J.; Dongarra, J.; Sorensen, D. LAPACK Users' Guide, 3rd ed.; SIAM: Philadelphia, PA, USA, 1999.
21. Nakatsukasa, Y. Algorithms and Perturbation Theory for Matrix Eigenvalue Problems and the Singular Value Decomposition. Ph.D. Thesis, University of California, Davis, CA, USA, 2011.
22. Li, R.-C. Matrix perturbation theory. In Handbook of Linear Algebra, 2nd ed.; Hogben, L., Ed.; CRC Press: Boca Raton, FL, USA, 2014.
23. Drmač, Z. Computing eigenvalues and singular values to high relative accuracy. In Handbook of Linear Algebra, 2nd ed.; Hogben, L., Ed.; CRC Press: Boca Raton, FL, USA, 2014; pp. 59-1–59-21.
24. Harcha, H.; Chakrone, O.; Tsouli, N. On the nonlinear eigenvalue problems involving the fractional p-Laplacian operator with singular weight. J. Nonlinear Funct. Anal. 2022, 2022, 40.
25. Tu, Z.; Guo, L.; Pan, H.; Lu, J.; Xu, C.; Zou, Y. Multitemporal image cloud removal using group sparsity and nonconvex low-rank approximation. J. Nonlinear Var. Anal. 2023, 7, 527–548.
26. Petkov, P. Componentwise perturbation analysis of the QR decomposition of a matrix. Mathematics 2022, 10, 4687.
27. Gantmacher, F. Theory of Matrices; AMS Chelsea Publishing: New York, NY, USA, 1959; Volume 1; reprinted by the AMS: Providence, RI, USA, 2000; ISBN 0-8218-1376-5.
28. Stewart, G.W. Stochastic perturbation theory. SIAM Rev. 1990, 32, 579–610.
29. The MathWorks, Inc. MATLAB, Version 9.9.0.1538559 (R2020b); The MathWorks, Inc.: Natick, MA, USA, 2020. Available online: http://www.mathworks.com (accessed on 5 February 2024).
30. Drmač, Z. On principal angles between subspaces of Euclidean space. SIAM J. Matrix Anal. Appl. 2000, 22, 173–194.
31. Björck, A.; Golub, G. Numerical methods for computing angles between linear subspaces. Math. Comp. 1973, 27, 579–594.
32. Stewart, G.W. Matrix Algorithms: Volume II: Eigensystems; SIAM: Philadelphia, PA, USA, 2001; ISBN 0-89871-503-2.
33. Konstantinov, M.M.; Petkov, P.H. Perturbation Methods in Matrix Analysis and Control; NOVA Science Publishers, Inc.: New York, NY, USA, 2020; ISBN 978-1-53617-470-0.
34. Bavely, C.; Stewart, G.W. An algorithm for computing reducing subspaces by block diagonalization. SIAM J. Numer. Anal. 1979, 16, 359–367.
Figure 1. Sensitivity estimates of the left singular subspaces.
Figure 2. Sensitivity estimates of the right singular subspaces.
Figure 3. Relative errors in the iterative determination of the global bounds of x.
Figure 4. Relative errors in the iterative determination of the global bounds of y.
Figure 5. Norms of δU_1, x, δU_1^{lin} and the angle Φ_max^{(2)} as functions of the difference τ = σ_2 - σ_3.
Figure 6. Norms of δV, y, δV^{lin} and the angle Θ_max^{(2)} as functions of the difference τ = σ_2 - σ_3.
Figure 7. Exact value of |sin Θ_3| and its perturbation bounds as functions of τ.
Figure 8. Exact value of ‖tan Φ‖_F and its perturbation bounds as functions of τ.
Figure 9. Exact value of ‖tan Θ‖_F and its perturbation bounds as functions of τ.
Figure 10. Exact values of |δU_{60,20}| and the corresponding linear and nonlinear estimates as functions of the perturbation norm.
Figure 11. Exact values of |δV_{60,20}| and the corresponding linear and nonlinear estimates as functions of the perturbation norm.
Figure 12. Exact values of Φ_max(U_{50}, Ũ_{50}) and the corresponding linear and nonlinear estimates as functions of the perturbation norm.
Figure 13. Exact values of Θ_max(V_{50}, Ṽ_{50}) and the corresponding linear and nonlinear estimates as functions of the perturbation norm.
Figure 14. Perturbations of the singular values and their nonlinear bounds for ‖δA‖_2 = 4.1517085 × 10^{-10}.
Figure 15. Perturbations of the singular values and their nonlinear bounds for ‖δA‖_2 = 2.7913420 × 10^{-5}.
Figure 16. The ratio δU_{ij}^{lin}/|δU_{ij}| and its mean value for δA = 10^{-8} × A_0 (‖δA‖_2 = 4.4679 × 10^{-7}).
Figure 17. The ratio δV_{ij}^{lin}/|δV_{ij}| and its mean value for δA = 10^{-8} × A_0 (‖δA‖_2 = 4.4679 × 10^{-7}).
Figure 18. Values of |δU_{ij}| and the corresponding linear estimates for the exact values of x_k.
Figure 19. Values of |δV_{ij}| and the corresponding linear estimates for the exact values of y_k.
Figure 20. Values of |δU_{ij}| and the corresponding linear estimates for a 200 × 150 matrix.
Figure 21. Perturbations and perturbation bounds of the singular values for a 200 × 150 matrix.
Figure 22. Values of |δV_{ij}| and the corresponding linear estimates for a 200 × 150 matrix.
Table 1. Exact perturbation parameters x_k related to the matrix δW_U and their linear estimates.

c | ‖δA‖_2 | x_k = u_i^T δu_j | |x_k| | x_k^{lin}
-10 | 2.0819813 × 10^{-9} | x_1 = u_2^T δu_1 | 5.9651314 × 10^{-12} | 1.1712418 × 10^{-10}
| | x_2 = u_3^T δu_1 | 1.2358232 × 10^{-11} | 8.5991735 × 10^{-11}
| | x_3 = u_4^T δu_1 | 2.5843633 × 10^{-12} | 8.1830915 × 10^{-11}
| | x_4 = u_5^T δu_1 | 7.5119150 × 10^{-13} | 8.1773996 × 10^{-11}
| | x_5 = u_6^T δu_1 | 1.9764954 × 10^{-11} | 8.1773996 × 10^{-10}
| | x_6 = u_3^T δu_2 | 2.5769606 × 10^{-11} | 3.2351171 × 10^{-10}
| | x_7 = u_4^T δu_2 | 6.7934677 × 10^{-12} | 2.7156394 × 10^{-10}
| | x_8 = u_5^T δu_2 | 7.1177269 × 10^{-11} | 2.7093810 × 10^{-10}
| | x_9 = u_6^T δu_2 | 3.0802793 × 10^{-11} | 2.7093810 × 10^{-10}
| | x_10 = u_4^T δu_3 | 4.9634458 × 10^{-10} | 1.6912007 × 10^{-9}
| | x_11 = u_5^T δu_3 | 6.0967501 × 10^{-10} | 1.6672173 × 10^{-9}
| | x_12 = u_6^T δu_3 | 6.6936869 × 10^{-10} | 1.6672173 × 10^{-9}
| | x_13 = u_5^T δu_4 | 1.0685519 × 10^{-8} | 1.1756467 × 10^{-7}
| | x_14 = u_6^T δu_4 | 1.0010811 × 10^{-9} | 1.1756467 × 10^{-7}
-5 | 2.0819812 × 10^{-4} | x_1 = u_2^T δu_1 | 5.9650702 × 10^{-7} | 1.1712418 × 10^{-5}
| | x_2 = u_3^T δu_1 | 1.2358210 × 10^{-6} | 8.5991735 × 10^{-6}
| | x_3 = u_4^T δu_1 | 2.5843445 × 10^{-7} | 8.1830915 × 10^{-6}
| | x_4 = u_5^T δu_1 | 7.5103866 × 10^{-8} | 8.1773996 × 10^{-6}
| | x_5 = u_6^T δu_1 | 1.9765187 × 10^{-6} | 8.1773996 × 10^{-6}
| | x_6 = u_3^T δu_2 | 2.5769484 × 10^{-6} | 3.2351171 × 10^{-5}
| | x_7 = u_4^T δu_2 | 6.7937087 × 10^{-7} | 2.7156394 × 10^{-5}
| | x_8 = u_5^T δu_2 | 7.1177424 × 10^{-6} | 2.7093809 × 10^{-5}
| | x_9 = u_6^T δu_2 | 3.0802821 × 10^{-6} | 2.7093809 × 10^{-5}
| | x_10 = u_4^T δu_3 | 4.9636511 × 10^{-5} | 1.6912007 × 10^{-4}
| | x_11 = u_5^T δu_3 | 6.0971010 × 10^{-5} | 1.6672173 × 10^{-4}
| | x_12 = u_6^T δu_3 | 6.6939951 × 10^{-5} | 1.6672173 × 10^{-4}
| | x_13 = u_5^T δu_4 | 1.0673940 × 10^{-3} | 1.1756467 × 10^{-2}
| | x_14 = u_6^T δu_4 | 1.0014883 × 10^{-4} | 1.1756467 × 10^{-2}
Note: The elements x_4, x_5, x_8, x_9, x_11, x_12, x_13, and x_14 (the entries of x^{(2)}, for which i > n) are obtained from Equation (38).
Table 2. Exact perturbation parameters y related to the matrix δW_V and their linear estimates.

c | ‖δA‖_2 | y_k = v_i^T δv_j | |y_k| | y_k^{lin}
-10 | 2.0819813 × 10^{-9} | y_1 = v_2^T δv_1 | 5.9724939 × 10^{-13} | 1.1712418 × 10^{-10}
| | y_2 = v_3^T δv_1 | 4.0539576 × 10^{-11} | 8.5991735 × 10^{-11}
| | y_3 = v_4^T δv_1 | 1.3009739 × 10^{-11} | 8.1830915 × 10^{-11}
| | y_4 = v_3^T δv_2 | 3.5278691 × 10^{-12} | 3.2351171 × 10^{-10}
| | y_5 = v_4^T δv_2 | 3.3493810 × 10^{-11} | 2.7156394 × 10^{-10}
| | y_6 = v_4^T δv_3 | 2.7589325 × 10^{-10} | 1.6912007 × 10^{-9}
-5 | 2.0819812 × 10^{-4} | y_1 = v_2^T δv_1 | 5.9734613 × 10^{-8} | 1.1712418 × 10^{-5}
| | y_2 = v_3^T δv_1 | 4.0539817 × 10^{-6} | 8.5991735 × 10^{-6}
| | y_3 = v_4^T δv_1 | 1.3009948 × 10^{-6} | 8.1830915 × 10^{-6}
| | y_4 = v_3^T δv_2 | 3.5285443 × 10^{-7} | 3.2351171 × 10^{-5}
| | y_5 = v_4^T δv_2 | 3.3493454 × 10^{-6} | 2.7156394 × 10^{-5}
| | y_6 = v_4^T δv_3 | 2.7590797 × 10^{-5} | 1.6912007 × 10^{-4}
Table 3. Perturbations of the singular values and their linear estimates.

c | δσ̂_i = ‖δA‖_2 | |δσ_i| | δσ_i^{lin} = |δf_{ii}|
-10 | 2.0819812 × 10^{-9} | δσ_1 = 1.5439774 × 10^{-9} | δσ_1^{lin} = 1.5439778 × 10^{-9}
| | δσ_2 = 4.2996717 × 10^{-11} | δσ_2^{lin} = 4.2995882 × 10^{-11}
| | δσ_3 = 6.1140315 × 10^{-10} | δσ_3^{lin} = 6.1140303 × 10^{-10}
| | δσ_4 = 1.7363173 × 10^{-10} | δσ_4^{lin} = 1.7363178 × 10^{-10}
-5 | 2.0819812 × 10^{-4} | δσ_1 = 1.5439748 × 10^{-4} | δσ_1^{lin} = 1.5439778 × 10^{-4}
| | δσ_2 = 4.2998913 × 10^{-6} | δσ_2^{lin} = 4.2995882 × 10^{-6}
| | δσ_3 = 6.1133265 × 10^{-5} | δσ_3^{lin} = 6.1140303 × 10^{-5}
| | δσ_4 = 1.7375078 × 10^{-5} | δσ_4^{lin} = 1.7363178 × 10^{-4}
Table 4. Sensitivity of the singular subspaces.

c | ‖δA‖_2 | |Φ_max(U_r, Ũ_r)| | Φ_max^{lin}(U_r, Ũ_r)
-10 | 2.0819813 × 10^{-9} | Φ_1 = 0.0024212 × 10^{-8} | Φ_1 = 0.0002029 × 10^{-6}
| | Φ_2 = 0.0082136 × 10^{-8} | Φ_2 = 0.0005938 × 10^{-6}
| | Φ_3 = 0.0103290 × 10^{-7} | Φ_3 = 0.0029428 × 10^{-6}
| | Φ_4 = 0.1074643 × 10^{-7} | Φ_4 = 0.1662787 × 10^{-6}
-5 | 2.0819812 × 10^{-4} | Φ_1 = 0.0024212 × 10^{-3} | Φ_1 = 0.0020294 × 10^{-2}
| | Φ_2 = 0.0082136 × 10^{-3} | Φ_2 = 0.0059380 × 10^{-2}
| | Φ_3 = 0.1032951 × 10^{-3} | Φ_3 = 0.0294279 × 10^{-2}
| | Φ_4 = 0.1073496 × 10^{-2} | Φ_4 = 1.6628641 × 10^{-2}

c | ‖δA‖_2 | |Θ_max(V_r, Ṽ_r)| | Θ_max^{lin}(V_r, Ṽ_r)
-10 | 2.0819813 × 10^{-9} | Θ_1 = 0.0425801 × 10^{-9} | Θ_1 = 0.0166760 × 10^{-8}
| | Θ_2 = 0.0438355 × 10^{-9} | Θ_2 = 0.0438688 × 10^{-8}
| | Θ_3 = 0.2782232 × 10^{-9} | Θ_3 = 0.1714819 × 10^{-8}
-5 | 2.0819812 × 10^{-4} | Θ_1 = 0.0425804 × 10^{-4} | Θ_1 = 0.0166760 × 10^{-3}
| | Θ_2 = 0.0438356 × 10^{-4} | Θ_2 = 0.0438688 × 10^{-3}
| | Θ_3 = 0.2782378 × 10^{-4} | Θ_3 = 0.1714819 × 10^{-3}
Table 5. Convergence of the global bounds and higher-order terms.

c | ‖δA‖_F | Number of Iterations | ‖Δ‖_2 | ‖Δ^{nonl}‖_2
-10 | 2.08198 × 10^{-9} | 4 | 2.12806 × 10^{-18} | 2.09530 × 10^{-15}
-9 | 2.08198 × 10^{-8} | 4 | 2.12806 × 10^{-16} | 2.09531 × 10^{-13}
-8 | 2.08198 × 10^{-7} | 5 | 2.12806 × 10^{-14} | 2.09537 × 10^{-11}
-7 | 2.08198 × 10^{-6} | 5 | 2.12806 × 10^{-12} | 2.09595 × 10^{-9}
-6 | 2.08198 × 10^{-5} | 7 | 2.12800 × 10^{-10} | 2.10185 × 10^{-7}
-5 | 2.08198 × 10^{-4} | 10 | 2.12746 × 10^{-8} | 2.16338 × 10^{-5}
-4 | 2.08198 × 10^{-3} | 29 | 2.12189 × 10^{-6} | 3.23896 × 10^{-3}
-3 | 2.08198 × 10^{-2} | No convergence | – | –
Table 6. Global bounds of x and y.

c | ‖x‖_2 | ‖x^{nonl}‖_2 | ‖y‖_2 | ‖y^{nonl}‖_2
-10 | 1.0782203 × 10^{-8} | 1.6628799 × 10^{-7} | 2.8118399 × 10^{-10} | 1.7511069 × 10^{-9}
-9 | 1.0782176 × 10^{-7} | 1.6628817 × 10^{-6} | 2.8118379 × 10^{-9} | 1.7511072 × 10^{-8}
-8 | 1.0782167 × 10^{-6} | 1.6628999 × 10^{-5} | 2.8118393 × 10^{-8} | 1.7511098 × 10^{-7}
-7 | 1.0782065 × 10^{-5} | 1.6630816 × 10^{-4} | 2.8118405 × 10^{-7} | 1.7511358 × 10^{-6}
-6 | 1.0781037 × 10^{-4} | 1.6649057 × 10^{-3} | 2.8118536 × 10^{-6} | 1.7513967 × 10^{-5}
-5 | 1.0770771 × 10^{-3} | 1.6837998 × 10^{-2} | 2.8119844 × 10^{-5} | 1.7540888 × 10^{-4}
-4 | 1.0668478 × 10^{-2} | 1.9743124 × 10^{-1} | 2.8132907 × 10^{-4} | 1.7960922 × 10^{-3}
Table 7. Perturbations of the singular values and their nonlinear estimates.

c | δσ̂_i = ‖δA‖_2 | |δσ_i| | δσ_i^{nonl}
-10 | 2.0819812 × 10^{-9} | δσ_1 = 0.1543977 × 10^{-8} | δσ_1^{nonl} = 0.1543978 × 10^{-8}
| | δσ_2 = 0.4299672 × 10^{-10} | δσ_2^{nonl} = 0.4299589 × 10^{-10}
| | δσ_3 = 0.6114032 × 10^{-9} | δσ_3^{nonl} = 0.6114031 × 10^{-9}
| | δσ_4 = 0.1736317 × 10^{-9} | δσ_4^{nonl} = 0.1736329 × 10^{-9}
-5 | 2.0819812 × 10^{-4} | δσ_1 = 0.1543975 × 10^{-3} | δσ_1^{nonl} = 0.1544264 × 10^{-3}
| | δσ_2 = 0.4299891 × 10^{-5} | δσ_2^{nonl} = 0.4373954 × 10^{-5}
| | δσ_3 = 0.6113327 × 10^{-4} | δσ_3^{nonl} = 0.6143758 × 10^{-4}
| | δσ_4 = 0.1737508 × 10^{-4} | δσ_4^{nonl} = 0.2848085 × 10^{-4}
Table 8. Nonlinear sensitivity estimates of the singular subspaces.

c | ‖δA‖_2 | |Φ_max(U_r, Ũ_r)| | Φ_max^{nonl}(U_r, Ũ_r)
-10 | 2.0819813 × 10^{-9} | Φ_1 = 0.0024212 × 10^{-8} | Φ_1 = 0.0002029 × 10^{-6}
| | Φ_2 = 0.0082136 × 10^{-8} | Φ_2 = 0.0005938 × 10^{-6}
| | Φ_3 = 0.1032901 × 10^{-8} | Φ_3 = 0.0029428 × 10^{-6}
| | Φ_4 = 1.0746434 × 10^{-8} | Φ_4 = 0.1662788 × 10^{-6}
-5 | 2.0819812 × 10^{-4} | Φ_1 = 0.0024212 × 10^{-3} | Φ_1 = 0.0020601 × 10^{-2}
| | Φ_2 = 0.0082136 × 10^{-3} | Φ_2 = 0.0060635 × 10^{-2}
| | Φ_3 = 0.1032951 × 10^{-3} | Φ_3 = 0.0303251 × 10^{-2}
| | Φ_4 = 1.0734956 × 10^{-3} | Φ_4 = 1.6837810 × 10^{-2}

c | ‖δA‖_2 | |Θ_max(V_r, Ṽ_r)| | Θ_max^{nonl}(V_r, Ṽ_r)
-10 | 2.0819813 × 10^{-9} | Θ_1 = 0.0425801 × 10^{-9} | Θ_1 = 0.0166760 × 10^{-8}
| | Θ_2 = 0.0438355 × 10^{-9} | Θ_2 = 0.0438688 × 10^{-8}
| | Θ_3 = 0.2782232 × 10^{-9} | Θ_3 = 0.1714819 × 10^{-8}
-5 | 2.0819812 × 10^{-4} | Θ_1 = 0.0425804 × 10^{-4} | Θ_1 = 0.0166819 × 10^{-3}
| | Θ_2 = 0.0438356 × 10^{-4} | Θ_2 = 0.0438972 × 10^{-3}
| | Θ_3 = 0.2782378 × 10^{-4} | Θ_3 = 0.1718211 × 10^{-3}
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
