Article

An Inexact Noda Iteration for Computing the Smallest Eigenpair of a Large, Irreducible Monotone Matrix

Department of Applied Mathematics, National University of Kaohsiung, Kaohsiung 811, Taiwan
Mathematics 2024, 12(16), 2546; https://doi.org/10.3390/math12162546
Submission received: 15 July 2024 / Revised: 15 August 2024 / Accepted: 16 August 2024 / Published: 17 August 2024
(This article belongs to the Special Issue Numerical Methods for Scientific Computing)

Abstract

In this paper, we introduce an inexact Noda iteration method, featuring inner and outer iterations, for computing the smallest eigenvalue and the corresponding eigenvector of an irreducible monotone matrix. The method rests on two primary relaxation steps, each governed by a relaxation factor, and we examine how these factors affect the convergence of the outer iterations. By applying two distinct relaxation factors to the solution of the inner linear systems, we show that the convergence can be globally linear or superlinear, depending on the relaxation factor used; the relaxation factor also governs the rate of convergence. The proposed inexact Noda iterations are structure-preserving and guarantee the positivity of the approximate eigenvectors. Numerical examples demonstrate the practicality of the method and confirm that the positivity of the approximate eigenvectors is consistently preserved.

1. Introduction

Monotone matrices play a significant role in various mathematical applications, including stability analysis [1], eigenvalue bounds, and more [2,3]. A real matrix $A$ is termed monotone if its inverse $A^{-1}$ is non-negative [4,5]. This condition distinguishes monotone matrices from M-matrices, which can be expressed as $\sigma I - B$ with $B$ non-negative and $\sigma > \rho(B)$ [6]; for instance, $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is monotone (it is its own non-negative inverse) but, having positive off-diagonal entries, is not an M-matrix. For irreducible nonsingular M-matrices, the smallest eigenvalue $\lambda$ satisfies $\lambda = \sigma - \rho(B) > 0$, whereas for monotone matrices it is given by $\sigma_{\min}(A) = \rho(A^{-1})^{-1}$. Despite these differences, both types share important spectral properties (p. 487, [7]), such as the Perron root being the largest eigenvalue of $A^{-1}$, which is associated with a positive eigenvector.
Several methods [8,9,10,11,12,13,14,15,16] exist for computing the Perron vector of a non-negative matrix, B, but ensuring the strict positivity of approximations can be challenging, especially when the Perron vector has very small components. One effective approach is the Noda iteration (NI), introduced by Noda in 1971, which utilizes shifted Rayleigh quotient-like approximations [17]. This method has been adapted for irreducible nonsingular M-matrices as well [18,19]. For large, sparse non-negative matrices, Jia, Lin, and Liu proposed inexact Noda iteration (INI) strategies specifically designed to preserve the positivity of approximate eigenvectors while maintaining practicality and fast convergence [20].
In this paper, we introduce an inexact Noda iteration (INI) method tailored to finding the smallest eigenvalue and associated eigenvector of an irreducible monotone matrix $A$. Our main contributions are two relaxation steps that enhance computational efficiency while preserving structure. The first step uses $O(\gamma_k \min(x_k))$ as a stopping criterion for the inner iterations, where $0 < \gamma_k < 1$ and $x_k$ denotes the current positive approximate eigenvector. This criterion ensures that the iterations converge effectively while maintaining positivity constraints. The second step updates the approximate eigenvalues via the recurrence $\bar\lambda_{k+1} = \bar\lambda_k - (1 - \gamma_k)\min\frac{x_k}{y_{k+1}}$, where $y_{k+1}$ is the next positive approximate eigenvector prior to normalization. This update scheme not only preserves structure but also guarantees the global convergence of the INI algorithms.
The parameter $\gamma_k$, referred to as the "relaxation factor," plays a crucial role in the convergence theory of INI. We establish rigorous convergence results for INI with two different relaxation factors, showing that the algorithms exhibit globally linear and superlinear convergence rates corresponding to the choice of $\gamma_k$. Furthermore, because the linear systems encountered in the inner iterations become ill-conditioned as the approximate eigenvalues converge to $\rho(A^{-1})$ (or $\rho(B)$), we propose a modified Noda iteration (MNI) that employs rank-one updates within the inner iterations to control the condition number of these systems. We show that MNI is mathematically equivalent to NI and integrate it into our framework, termed the modified inexact Noda iteration (MINI). MINI combines the strengths of INI and MNI, significantly improving the condition number of the inner linear systems encountered when solving monotone matrix eigenvalue problems.
The paper is organized as follows: In Section 2, the Noda iteration and preliminary concepts are introduced. Section 3 presents the new strategy for the inexact Noda iteration and proves its basic properties. Section 4 establishes the convergence theory of the method and derives the asymptotic convergence factor. Section 5 details the integrated algorithm combining INI with MNI. In Section 6, numerical examples are presented to illustrate the convergence theory and demonstrate the effectiveness of INI. Finally, Section 7 provides concluding remarks.

2. Preliminaries and Notation

For any real matrix $B = [b_{ij}] \in \mathbb{R}^{n \times n}$, we write $B \geq 0$ ($B > 0$) if $b_{ij} \geq 0$ ($b_{ij} > 0$) for all $1 \leq i, j \leq n$, and we define $|B| = [\,|b_{ij}|\,]$. If $B \geq 0$, we say $B$ is a non-negative matrix, and if $B > 0$, we say $B$ is a positive matrix. For real matrices $B$ and $C$ of the same size, if $B - C$ is a non-negative matrix, we write $B \geq C$. A non-negative (positive) vector is similarly defined. A non-negative matrix $B$ is said to be reducible if it can be placed into block upper-triangular form through simultaneous row/column permutations; otherwise, it is irreducible. If $\mu$ is not an eigenvalue of $B$, the function $\mathrm{sep}(\mu, B)$ is defined as
$$\mathrm{sep}(\mu, B) = \|(\mu I - B)^{-1}\|^{-1}.$$
Throughout the paper, we use the 2-norm for vectors and matrices, and the superscript $T$ denotes the transpose.
We review some fundamental properties of non-negative matrices, monotone matrices, and M-matrices.
Definition 1. 
A matrix $A$ is said to be "monotone" if $Ax \geq 0$ implies $x \geq 0$. Furthermore, a monotone matrix $M$ is an M-matrix if $M = [a_{ij}]$ with $a_{ij} \leq 0$ for $i \neq j$.
Another characterization of monotone matrices is given in the following well-known theorem.
Theorem 1 
([21]). $A$ is monotone if and only if $A$ is non-singular and $A^{-1} \geq 0$.
Lemma 1 
([6]). Let M be a nonsingular M-matrix. Then, the following statements are equivalent:
1. 
$M = [a_{ij}]$ with $a_{ij} \leq 0$ for $i \neq j$, and $M^{-1} \geq 0$;
2. 
$M = \sigma I - B$ with some $B \geq 0$ and $\sigma > \rho(B)$.
For a pair of positive vectors $v$ and $w$, define
$$\max\frac{w}{v} = \max_i \frac{w(i)}{v(i)}, \qquad \min\frac{w}{v} = \min_i \frac{w(i)}{v(i)},$$
where $v = [v(1), v(2), \ldots, v(n)]^T$ and $w = [w(1), w(2), \ldots, w(n)]^T$.
Theorem 2 
([7]). Let $B$ be an irreducible non-negative matrix. Then, there exist $\lambda_* > 0$ and a unit vector $x_* > 0$ such that
$$Bx_* = \lambda_* x_*.$$
If $\lambda$ is an eigenvalue of $B$, then $|\lambda| \leq \lambda_*$. Denote $\lambda_*$ by $\rho(B)$. If $\lambda$ is an eigenvalue with a non-negative unit eigenvector $x$, then $\lambda = \rho(B)$ and $x = x_*$. Moreover, for any $v > 0$,
$$\min\frac{Bv}{v} \leq \rho(B) \leq \max\frac{Bv}{v}.$$
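As a quick numerical illustration of Theorem 2, the following Python sketch (with hypothetical random data; NumPy assumed) checks the bounds $\min\frac{Bv}{v} \leq \rho(B) \leq \max\frac{Bv}{v}$ together with the componentwise ratio notation defined above:

```python
import numpy as np

# Componentwise ratio extrema for positive vectors, as defined above.
def max_ratio(w, v):
    return np.max(w / v)

def min_ratio(w, v):
    return np.min(w / v)

rng = np.random.default_rng(0)
B = rng.random((5, 5))                  # a positive (hence irreducible) matrix
rho = max(abs(np.linalg.eigvals(B)))    # spectral radius rho(B)

v = rng.random(5) + 0.1                 # an arbitrary positive test vector
assert min_ratio(B @ v, v) <= rho <= max_ratio(B @ v, v)
```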

2.1. The Noda Iteration

The Noda iteration [17] is an inverse iteration adjusted by a Rayleigh quotient-like approximation of the Perron pair of an irreducible non-negative matrix $B$.
Given an initial vector $x_0 > 0$ with $\|x_0\| = 1$, the Noda iteration (NI) consists of three steps:
$$(\hat\lambda_k I - B)\, y_{k+1} = x_k, \quad x_{k+1} = y_{k+1}/\|y_{k+1}\|, \quad \hat\lambda_{k+1} = \max\frac{Bx_{k+1}}{x_{k+1}}. \tag{2}$$
The main goal is to compute a new approximation, $x_{k+1}$, to $x_*$ by solving the inner linear system in (2). From Theorem 2, we know that $\hat\lambda_k > \rho(B)$ if $x_k$ is not a scalar multiple of the eigenvector $x_*$. This result shows that $\hat\lambda_k I - B$ is an irreducible nonsingular M-matrix, and its inverse is an irreducible non-negative matrix. Consequently, $y_{k+1} > 0$ and $x_{k+1} > 0$; i.e., $x_{k+1}$ is always a positive vector if $x_k$ is. Through a variable transformation, $\hat\lambda_{k+1}$ is obtained from the following relation:
$$\hat\lambda_{k+1} = \hat\lambda_k - \min\frac{x_k}{y_{k+1}},$$
ensuring that { λ ^ k } is monotonically decreasing.
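For concreteness, a minimal Python sketch of the three steps in (2) is given below, using exact dense inner solves; it is an illustration of NI only, not the inexact variant analyzed later.

```python
import numpy as np

def noda_iteration(B, tol=1e-10, max_iter=100):
    """A minimal sketch of NI (2) for an irreducible non-negative matrix B."""
    n = B.shape[0]
    x = np.ones(n) / np.sqrt(n)                  # positive unit starting vector
    lam = np.max(B @ x / x)                      # lambda_0 > rho(B) by Theorem 2
    for _ in range(max_iter):
        y = np.linalg.solve(lam * np.eye(n) - B, x)   # inner linear system in (2)
        x = y / np.linalg.norm(y)                     # normalization
        lam = np.max(B @ x / x)                       # Rayleigh-quotient-like update
        if np.linalg.norm(B @ x - lam * x) < tol:     # outer residual
            break
    return lam, x   # approximations of rho(B) and the Perron vector
```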

2.2. The Inexact Noda Iteration

Based on the Noda iteration, the authors of [20] proposed an inexact Noda iteration (INI) for computing the spectral radius of a non-negative irreducible matrix $B$. In this paper, since $A$ is a monotone matrix, i.e., $A^{-1}$ is a non-negative matrix, we replace $B$ with $A^{-1}$ in (2); i.e.,
$$(\hat\lambda_k I - A^{-1})\, y_{k+1} = x_k. \tag{3}$$
When $A$ is large and sparse, we must resort to an iterative linear solver to obtain an approximate solution. To reduce the computational cost of (3), we solve for $y_{k+1}$ in (3) inexactly, so that it satisfies
$$(\hat\lambda_k I - A^{-1})\, y_{k+1} = x_k + A^{-1} f_k, \tag{4}$$
which is equivalent to
$$(\hat\lambda_k A - I)\, y_{k+1} = A x_k + f_k, \tag{5}$$
$$x_{k+1} = y_{k+1}/\|y_{k+1}\|, \tag{6}$$
where $f_k$ is the residual vector between $(\hat\lambda_k A - I)y_{k+1}$ and $Ax_k$. Here, the residual norm (inner tolerance) $\xi_k := \|f_k\|$ can be changed at each iteration step $k$.
Theorem 3 
([20]). Let $A$ be an irreducible monotone matrix and let $0 \leq \gamma < 1$ be a fixed constant. For the unit-length $x_k > 0$, if $x_k \neq x_*$ and $f_k$ in (5) satisfies
$$\|A^{-1} f_k\| \leq \gamma \min(x_k), \tag{7}$$
then $\{\hat\lambda_k\}$ is monotonically decreasing and $\lim_{k\to\infty}\hat\lambda_k = \rho(A^{-1})$. Moreover, the convergence of INI is at least globally linear.
Based on (5)–(7), we describe INI as Algorithm 1.
Algorithm 1 Inexact Noda Iteration (INI)
  • Given $\hat\lambda_0$, $x_0 > 0$ with $\|x_0\| = 1$, $0 \leq \gamma < 1$, and tol $> 0$.
  • for $k = 0, 1, 2, \ldots$
  •     Solve $(\hat\lambda_k A - I)\,y_{k+1} = Ax_k$ inexactly, such that the inner tolerance $\xi_k$ satisfies condition (7).
  •     Normalize the vector $x_{k+1} = y_{k+1}/\|y_{k+1}\|$.
  •     Compute $\hat\lambda_{k+1} = \max\frac{A^{-1}x_{k+1}}{x_{k+1}}$.
  • until convergence: Resi $= \|Ax_{k+1} - \hat\lambda_k^{-1}x_{k+1}\| <$ tol.
When Relation (4) is used, the eigenvalue update in Algorithm 1 can be rewritten as
$$\hat\lambda_{k+1} = \hat\lambda_k - \min\frac{x_k + A^{-1}f_k}{y_{k+1}}.$$
Unfortunately, $A^{-1}$ is not explicitly available; in other words, we would have to compute "$A^{-1}f_k$" exactly to obtain the required approximate eigenvalue $\hat\lambda_{k+1}$. Hence, in the next section, we propose a new strategy (see (9)) for estimating the approximate eigenvalues without increasing the computational cost. This strategy is practical and preserves the strictly decreasing property of the approximate eigenvalue sequence.

3. The Relaxation Strategy for INI and Some Basic Properties

In order to ensure that INI is correctly implemented, we now propose two main relaxation steps to define Algorithm 2:
  • The residual norm satisfies
    $$\xi_k = \|f_k\| \leq \gamma_k\, \mathrm{sep}(0, A)\, \min(x_k), \tag{8}$$
    where $0 \leq \gamma_k \leq \gamma < 1$ with a constant upper bound $\gamma$.
  • The update of the approximate eigenvalue satisfies
    $$\bar\lambda_{k+1} = \bar\lambda_k - (1 - \gamma_k)\min\frac{x_k}{y_{k+1}}. \tag{9}$$
Algorithm 2 Inexact Noda Iteration for monotone matrices (INI)
  • Given $\bar\lambda_0$, $x_0 > 0$ with $\|x_0\| = 1$, $0 \leq \gamma < 1$, and tol $> 0$.
  • for $k = 0, 1, 2, \ldots$
  •     Solve $(\bar\lambda_k A - I)\,y_{k+1} = Ax_k$ inexactly, such that the inner tolerance $\xi_k$ satisfies condition (8).
  •     Normalize the vector $x_{k+1} = y_{k+1}/\|y_{k+1}\|$.
  •     Compute $\bar\lambda_{k+1}$ satisfying condition (9).
  • until convergence: $\|Ax_{k+1} - \bar\lambda_k^{-1}x_{k+1}\| <$ tol.
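A minimal Python sketch of Algorithm 2 follows. It assumes a sparse symmetric monotone $A$ (so that MINRES applies to the shifted system), uses a fixed relaxation factor $\gamma_k \equiv \gamma$, and replaces the unknown $\mathrm{sep}(0, A)$ in (8) with the computable bound $\bar\lambda_k^{-1}$, as done in the experiments of Section 6; one exact sparse solve initializes $\bar\lambda_0$.

```python
import numpy as np
from scipy.sparse import eye as speye
from scipy.sparse.linalg import minres, spsolve

def ini_monotone(A, gamma=0.5, tol=1e-10, max_iter=200):
    """A sketch of Algorithm 2 (INI) for a sparse symmetric monotone matrix A."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)                   # x_0 > 0 with ||x_0|| = 1
    lam = np.max(spsolve(A.tocsc(), x) / x)       # lambda_0 = max(A^{-1}x_0 / x_0)
    for _ in range(max_iter):
        b = A @ x
        xi = max(gamma * x.min() / lam, 1e-13)    # inner tolerance (8), sep(0,A) ~ 1/lambda_k
        Ak = (lam * A - speye(n)).tocsr()         # shifted system (11)
        y, _ = minres(Ak, b, rtol=xi / np.linalg.norm(b))  # keyword is `tol` in older SciPy
        lam -= (1.0 - gamma) * np.min(x / y)      # eigenvalue update (9)
        x = y / np.linalg.norm(y)                 # normalization
        if np.linalg.norm(A @ x - x / lam) < tol: # outer residual of Algorithm 2
            break
    return 1.0 / lam, x   # smallest eigenvalue of A and its positive eigenvector
```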
Step 3 of Algorithm 2 leads to two equivalent inexact relations:
$$(\bar\lambda_k I - A^{-1})\, y_{k+1} = x_k + A^{-1} f_k, \tag{10}$$
$$(\bar\lambda_k A - I)\, y_{k+1} = A x_k + f_k. \tag{11}$$
We remark that $\bar\lambda_{k+1}$ in (9) is no longer equal to $\max\frac{A^{-1}x_{k+1}}{x_{k+1}}$; therefore, it is not immediately clear that $\bar\lambda_{k+1}$ stays above its lower bound $\rho(A^{-1})$. The following lemma ensures that $\rho(A^{-1})$ is still a lower bound of $\bar\lambda_k$.
Lemma 2. 
Let $A$ be an irreducible monotone matrix with the Perron pair $(\rho(A^{-1}), x_*)$. For the unit-length $x_k > 0$ with $x_k \neq x_*$ and the relaxation factor $\gamma_k \in [0, 1)$, if $\bar\lambda_k > \rho(A^{-1})$, $f_k$ in (11) satisfies condition (8), and the approximate eigenvalue satisfies (9), then the new approximation satisfies $x_{k+1} > 0$, and the sequence $\{\bar\lambda_k\}$ is monotonically decreasing and bounded below by $\rho(A^{-1})$; i.e.,
$$\bar\lambda_k > \bar\lambda_{k+1} > \rho(A^{-1}).$$
Proof. 
From (9) and $\gamma_k \in [0, 1)$, it is easy to see that $\{\bar\lambda_k\}$ is monotonically decreasing, i.e., $\bar\lambda_k > \bar\lambda_{k+1}$. From (8),
$$\|A^{-1} f_k\| \leq \|A^{-1}\|\,\|f_k\| \leq \gamma_k \min(x_k),$$
which implies
$$|A^{-1} f_k| \leq \|A^{-1} f_k\|\, e \leq \gamma_k \min(x_k)\, e \leq \gamma_k x_k, \tag{12}$$
where $e = [1, \ldots, 1]^T$. Thus, $x_k + A^{-1} f_k > 0$. Consequently, $\bar\lambda_k I - A^{-1}$ is a nonsingular M-matrix, and the vector $y_{k+1}$ satisfies
$$y_{k+1} = (\bar\lambda_k I - A^{-1})^{-1}\,(x_k + A^{-1} f_k) > 0.$$
This implies $x_{k+1} = y_{k+1}/\|y_{k+1}\| > 0$.
We now prove that $\bar\lambda_k$ is bounded below by $\rho(A^{-1})$. From (12), we have
$$(1 - \gamma_k)\, x_k \leq x_k + A^{-1} f_k \leq (1 + \gamma_k)\, x_k, \tag{13}$$
and
$$(1 - \gamma_k)\, \frac{x_k}{y_{k+1}} \leq \frac{x_k + A^{-1} f_k}{y_{k+1}} \leq (1 + \gamma_k)\, \frac{x_k}{y_{k+1}}.$$
Hence, we obtain
$$(1 - \gamma_k)\min\frac{x_k}{y_{k+1}} \leq \min\frac{x_k + A^{-1} f_k}{y_{k+1}} \leq (1 + \gamma_k)\min\frac{x_k}{y_{k+1}}.$$
Then,
$$\min\frac{x_k + A^{-1} f_k}{y_{k+1}} \geq \min\frac{x_k}{y_{k+1}} - \gamma_k \min\frac{x_k}{y_{k+1}}. \tag{14}$$
From (10), it follows that
$$\rho(A^{-1}) \leq \hat\lambda_{k+1} = \max\frac{A^{-1}x_{k+1}}{x_{k+1}} = \bar\lambda_k - \min\frac{x_k + A^{-1} f_k}{y_{k+1}}. \tag{15}$$
Combining (9), (15), and (14), we obtain
$$\bar\lambda_{k+1} = \bar\lambda_k - (1 - \gamma_k)\min\frac{x_k}{y_{k+1}} = \hat\lambda_{k+1} + \min\frac{x_k + A^{-1} f_k}{y_{k+1}} - (1 - \gamma_k)\min\frac{x_k}{y_{k+1}} \geq \hat\lambda_{k+1} + \big[(1 - \gamma_k) - (1 - \gamma_k)\big]\min\frac{x_k}{y_{k+1}} > \rho(A^{-1}).$$
By induction, $\bar\lambda_k$ is bounded below by $\rho(A^{-1})$; i.e.,
$$\bar\lambda_k > \rho(A^{-1}) \quad \text{for all } k.$$
   □
From Lemma 2, since $\{\bar\lambda_k\}$ is a monotonically decreasing and bounded sequence, we must have $\lim_{k\to\infty}\bar\lambda_k = \alpha \geq \rho(A^{-1})$.
We next investigate the case $\alpha > \rho(A^{-1})$ and present some basic results; these play an important role later in proving the convergence of INI.
Lemma 3. 
Assume that the sequence $\{\bar\lambda_k, x_k, y_k\}$ is generated via Algorithm 2, and let $\{x_{k_j}\}$ be any subsequence of $\{x_k\}$. If $x_{k_j} \to v$ as $j \to \infty$, then $v > 0$.
Proof. 
Let $S$ be the set of all indices $t$ such that $\lim_{j\to\infty} x_{k_j}(t) = v(t) = 0$. Since $\|x_{k_j}\| = 1$, $S$ is a proper subset of $\{1, 2, \ldots, n\}$. Suppose $S$ is nonempty. Then, according to the definitions of $\bar\lambda_k$ and $\hat\lambda_k$,
$$\bar\lambda_0 \geq \bar\lambda_{k_j-1} \geq \bar\lambda_{k_j-1} - \min\frac{x_{k_j-1} + A^{-1}f_{k_j-1}}{y_{k_j}} = \hat\lambda_{k_j} = \max\frac{A^{-1}x_{k_j}}{x_{k_j}} \geq \frac{(A^{-1}x_{k_j})(t)}{x_{k_j}(t)} \quad \text{for all } t = 1, 2, \ldots, n.$$
Since $\lim_{j\to\infty} x_{k_j}(t) = 0$ for $t \in S$, it holds that $\lim_{j\to\infty} (A^{-1}x_{k_j})(t) = (A^{-1}v)(t) = 0$ for $t \in S$. Thus, $(A^{-1})_{tp} = 0$ for all $t \in S$ and all $p \notin S$, which contradicts the irreducibility of $A$. Therefore, $S$ is empty, and thus $v > 0$.   □

4. Convergence Analysis for INI

In this section, we prove the global convergence of INI and establish its convergence rate. Furthermore, we derive the explicit linear convergence factor and the superlinear convergence order for different choices of $\gamma_k$.
For an irreducible non-negative matrix $A^{-1}$, recall that the largest eigenvalue $\rho(A^{-1})$ of $A^{-1}$ is simple. Let $x_*$ be the unit-length positive eigenvector corresponding to $\rho(A^{-1})$. Then, for any orthogonal matrix $[x_*\ V]$, it holds (cf. [22]) that
$$[x_*\ V]^T A^{-1} [x_*\ V] = \begin{bmatrix} \rho(A^{-1}) & c^T \\ 0 & L \end{bmatrix} \tag{16}$$
with $L = V^T A^{-1} V$, whose eigenvalues constitute the other eigenvalues of $A^{-1}$. Therefore, we now define
$$\varepsilon_k = \bar\lambda_k - \rho(A^{-1}), \qquad A_k = \bar\lambda_k I - A^{-1}.$$
Similar to (16), we also have the decomposition
$$[x_*\ V]^T A_k [x_*\ V] = \begin{bmatrix} \varepsilon_k & -c^T \\ 0 & L_k \end{bmatrix},$$
where $L_k = \bar\lambda_k I - L$. For $\bar\lambda_k \neq \rho(A^{-1})$, it is easy to verify that
$$[x_*\ V]^T A_k^{-1} [x_*\ V] = \begin{bmatrix} \varepsilon_k^{-1} & b_k^T \\ 0 & L_k^{-1} \end{bmatrix} \quad \text{with} \quad b_k^T = \frac{c^T L_k^{-1}}{\varepsilon_k},$$
from which we get
$$A_k^{-1}V = x_* b_k^T + V L_k^{-1} = \frac{x_* c^T L_k^{-1}}{\varepsilon_k} + V L_k^{-1}, \qquad A_k^{-1} = \varepsilon_k^{-1} x_* x_*^T + \varepsilon_k^{-1} x_* c^T L_k^{-1} V^T + V L_k^{-1} V^T, \tag{17}$$
and
$$x_*^T A_k^{-1} = \varepsilon_k^{-1} x_*^T + \varepsilon_k^{-1} c^T L_k^{-1} V^T.$$
Since $\xi_k = \|f_k\| \leq \gamma_k\,\mathrm{sep}(0, A)\min(x_k)$ in INI, it holds that $|A^{-1}f_k| \leq \gamma_k x_k$. Therefore, we have
$$(1 - \gamma_k)\, x_k \leq x_k + A^{-1}f_k \leq (1 + \gamma_k)\, x_k. \tag{18}$$
Since $A_k^{-1} \geq 0$, it follows from the above relation that
$$(1 - \gamma_k)\, A_k^{-1}x_k \leq y_{k+1} \leq (1 + \gamma_k)\, A_k^{-1}x_k. \tag{19}$$
Theorem 4. 
Let $A$ be an irreducible monotone matrix, and let the sequence $\{\bar\lambda_k, x_k\}$ be generated via Algorithm 2. If $0 \leq \gamma_k \leq \gamma < 1$, then the monotonically decreasing sequence $\{\bar\lambda_k\}$ converges to $\rho(A^{-1})$, and $x_k$ converges to the positive eigenvector $x_*$ corresponding to $\rho(A^{-1})$.
Proof. 
From Lemma 2, the sequence $\{\bar\lambda_k\}$ is bounded and monotonically decreasing, so the limit $\lim_{k\to\infty}\bar\lambda_k = \alpha$ exists. Next, we prove that $\alpha = \rho(A^{-1})$ must hold.
It follows that
$$\bar\lambda_k - \bar\lambda_{k+1} = (1 - \gamma_k)\min\frac{x_k}{y_{k+1}} \geq (1 - \gamma)\,\frac{\min(x_k)}{\|y_{k+1}\|} > 0. \tag{20}$$
Since the sequence $\{\bar\lambda_k\}$ converges, we obtain from (20) that
$$\lim_{k\to\infty}\frac{\min(x_k)}{\|y_{k+1}\|} = 0.$$
Suppose $\min(x_k)$ is not bounded below by a positive constant. Then there is a subsequence $\{k_j\}$ with $\lim_{j\to\infty}\min(x_{k_j}) = 0$. Since $\|x_{k_j}\| = 1$, we may assume that $\lim_{j\to\infty}x_{k_j} = v$ exists. Now, $\lim_{j\to\infty}\min(x_{k_j}) = \min(v) = 0$. However, $v > 0$ according to Lemma 3. The contradiction shows that $\min(x_k)$ is bounded below by a positive constant. Therefore, $\lim_{k\to\infty}\|y_{k+1}\|^{-1} = 0$.
From (10), we have
$$(\bar\lambda_{k_j} I - A^{-1})\, y_{k_j+1} = x_{k_j} + A^{-1}f_{k_j},$$
which can be rewritten as
$$(\bar\lambda_{k_j} I - A^{-1})\, x_{k_j+1} = \frac{x_{k_j} + A^{-1}f_{k_j}}{\|y_{k_j+1}\|}. \tag{21}$$
Taking limits on both sides of (21) along a subsequence with $x_{k_j+1} \to v$, we have
$$\lim_{j\to\infty}(\bar\lambda_{k_j} I - A^{-1})\, x_{k_j+1} = \lim_{j\to\infty}\frac{x_{k_j} + A^{-1}f_{k_j}}{\|y_{k_j+1}\|} = 0,$$
which implies $A^{-1}v = \alpha v$. Since $v > 0$ by Lemma 3, Theorem 2 gives $\alpha = \rho(A^{-1})$ and $v = x_*$. Hence, $\{\bar\lambda_k\}$ converges to $\rho(A^{-1})$ and $\{x_k\}$ converges to $x_*$.    □
Theorem 4 establishes the global convergence of INI, but the result is only qualitative: it does not tell us how fast the INI method converges. We now determine the convergence rate of INI for different relaxation factors $\gamma_k$. More precisely, we prove that INI converges at least linearly, with an asymptotic convergence factor bounded by $\frac{2\gamma}{1+\gamma}$, for $0 \leq \gamma_k \leq \gamma < 1$, and superlinearly for the decreasing choice $\gamma_k = \frac{\bar\lambda_{k-1} - \bar\lambda_k}{\bar\lambda_{k-1}}$.
From (9), we have
$$\varepsilon_{k+1} = \varepsilon_k\left(1 - (1 - \gamma_k)\min\frac{x_k}{\varepsilon_k\, y_{k+1}}\right) =: \varepsilon_k\, \rho_k. \tag{22}$$
Since $\bar\lambda_k - \bar\lambda_{k+1} < \bar\lambda_k - \rho(A^{-1})$, from (22) and (9) we have
$$\rho_k = 1 - (1 - \gamma_k)\min\frac{x_k}{\varepsilon_k\, y_{k+1}} = 1 - \frac{\bar\lambda_k - \bar\lambda_{k+1}}{\bar\lambda_k - \rho(A^{-1})} < 1.$$
Theorem 5. 
For INI, if $\gamma_k \leq \gamma < 1$, then $\lim_{k\to\infty}\rho_k \leq \frac{2\gamma}{1+\gamma} < 1$; i.e., the convergence of INI is at least globally linear. If $\lim_{k\to\infty}\gamma_k = 0$, then $\lim_{k\to\infty}\rho_k = 0$; that is, the convergence of INI is globally superlinear.
Proof. 
From (18) and (19), we have
$$\min\frac{x_k}{\varepsilon_k\, y_{k+1}} \geq \min\frac{x_k}{(1 + \gamma_k)\,\varepsilon_k A_k^{-1}x_k} = \frac{1}{1 + \gamma_k}\min\frac{x_k}{\varepsilon_k A_k^{-1}x_k}.$$
Then,
$$\rho_k = 1 - (1 - \gamma_k)\min\frac{x_k}{\varepsilon_k\, y_{k+1}} \leq 1 - \frac{1 - \gamma_k}{1 + \gamma_k}\min\frac{x_k}{\varepsilon_k A_k^{-1}x_k}.$$
From (17), we get
$$\varepsilon_k A_k^{-1}x_k = x_* x_*^T x_k + x_* c^T L_k^{-1} V^T x_k + \varepsilon_k V L_k^{-1} V^T x_k. \tag{23}$$
From Theorem 4, we know that $\lim_{k\to\infty}x_k = x_*$ and $\lim_{k\to\infty}\bar\lambda_k = \rho(A^{-1})$, from which it follows that $\varepsilon_k \to 0$ and $L_k^{-1} \to (\rho(A^{-1})I - L)^{-1}$. On the other hand, since $\|L_k^{-1}\|$ is bounded and $\lim_{k\to\infty}V^T x_k = V^T x_* = 0$, from (23) we get
$$\lim_{k\to\infty}\varepsilon_k A_k^{-1}x_k = x_*.$$
Consequently, we obtain
$$\lim_{k\to\infty}\min\frac{x_k}{\varepsilon_k\, y_{k+1}} \geq \lim_{k\to\infty}\frac{1}{1 + \gamma_k}\,\min\frac{x_*}{x_*} = \lim_{k\to\infty}\frac{1}{1 + \gamma_k} > 0,$$
leading to
$$\lim_{k\to\infty}\rho_k \leq 1 - \lim_{k\to\infty}\frac{1 - \gamma_k}{1 + \gamma_k} \leq \frac{2\gamma}{1 + \gamma} < 1. \tag{24}$$
   □
It can be seen from (24) that, if γ k is small, then INI must ultimately converge quickly. Although Theorem 5 has established the superlinear convergence of INI, it does not reveal the convergence order. Our next concern is to derive the precise convergence order of INI. This is more informative and instructive because it lets us understand how fast INI converges.
Theorem 6. 
If the inner tolerance $\xi_k$ in INI satisfies condition (8) with the relaxation factors
$$\gamma_k = \frac{\bar\lambda_{k-1} - \bar\lambda_k}{\bar\lambda_{k-1}}, \tag{25}$$
then INI converges quadratically (asymptotically), in the sense that
$$\bar\varepsilon_k \leq 2\,\bar\varepsilon_{k-1}^2$$
for $k$ large enough, where the relative error is $\bar\varepsilon_k = \varepsilon_k/\rho(A^{-1})$.
Proof. 
Since $\bar\lambda_{k-1} > \bar\lambda_k > \rho(A^{-1})$, we have
$$\gamma_k = \frac{\bar\lambda_{k-1} - \bar\lambda_k}{\bar\lambda_{k-1}} \leq \frac{\bar\lambda_{k-1} - \rho(A^{-1})}{\rho(A^{-1})} = \frac{\varepsilon_{k-1}}{\rho(A^{-1})}.$$
From (22), (24), and (25), we have, for $k$ large enough,
$$\varepsilon_k = \varepsilon_{k-1}\,\rho_{k-1} \leq \varepsilon_{k-1}\,\frac{2\gamma_k}{1 + \gamma_k} = \frac{2\varepsilon_{k-1}}{1 + \frac{1}{\gamma_k}} \leq \frac{2\varepsilon_{k-1}}{1 + \frac{\rho(A^{-1})}{\varepsilon_{k-1}}} = \frac{2\varepsilon_{k-1}^2}{\varepsilon_{k-1} + \rho(A^{-1})} \leq \frac{2}{\rho(A^{-1})}\,\varepsilon_{k-1}^2.$$
Dividing both sides of the above inequality by $\rho(A^{-1})$, we get
$$\bar\varepsilon_k = \frac{\varepsilon_k}{\rho(A^{-1})} \leq \frac{2}{\rho(A^{-1})^2}\,\varepsilon_{k-1}^2 = 2\,\bar\varepsilon_{k-1}^2.$$
   □

5. The Modified Inexact Noda Iteration

In this section, we propose a modified Noda iteration (MNI) for a non-negative matrix, and we show that MNI and NI are equivalent. Thus, by combining INI (Algorithm 2) with MNI, we can propose a modified inexact Noda iteration for a monotone matrix.

5.1. The Modified Noda Iteration

When $\hat\lambda_k I - B$ tends to a singular matrix, the Noda iteration requires solving the possibly ill-conditioned linear system (2). Hence, we propose a rank-one update technique for the ill-conditioned linear system (2); i.e.,
$$\begin{bmatrix} B - \hat\lambda_k I & x_k \\ x_k^T & 0 \end{bmatrix}\begin{bmatrix} \Delta y_k \\ \delta_k \end{bmatrix} = \begin{bmatrix} (\hat\lambda_k I - B)\,x_k \\ 0 \end{bmatrix}, \tag{26}$$
where $\Delta y_k = x_{k+1} - x_k$. Let $r_k = (\hat\lambda_k I - B)x_k$. In general, the bordered system (26) is well conditioned unless $B$ has a nontrivial Jordan block associated with its largest eigenvalue, which would contradict the Perron–Frobenius theorem.
From (26),
$$0 = (B - \hat\lambda_k I)(x_{k+1} - x_k) + \delta_k x_k - r_k = (B - \hat\lambda_k I)\,x_{k+1} - (B - \hat\lambda_k I)\,x_k + \delta_k x_k - r_k = (B - \hat\lambda_k I)\,x_{k+1} + \delta_k x_k.$$
Hence, we have the following linear system:
$$(\hat\lambda_k I - B)\,\frac{x_{k+1}}{\delta_k} = x_k,$$
or
$$(\hat\lambda_k I - B)\,y_{k+1} = x_k,$$
with $y_{k+1} = \frac{1}{\delta_k}\,x_{k+1}$. Thus, from (2) and (26), we have the new iterative vector
$$x_{k+1} = \frac{y_{k+1}}{\|y_{k+1}\|} = \frac{x_k + \Delta y_k}{\|x_k + \Delta y_k\|}. \tag{27}$$
This means the Noda iteration and the modified Noda iteration are mathematically equivalent. Based on (26) and (27), we state our algorithm (Algorithm 3) as follows.
Algorithm 3 Modified Noda Iteration (MNI)
  • Given $\hat\lambda_0$, $x_0 > 0$ with $\|x_0\| = 1$, and tol $> 0$.
  • for $k = 0, 1, 2, \ldots$
  •     if $\|Bx_{k+1} - \hat\lambda_k x_{k+1}\| >$ tol
  •         Solve $(\hat\lambda_k I - B)\,y_{k+1} = x_k$.
  •         Normalize the vector $x_{k+1} = y_{k+1}/\|y_{k+1}\|$.
  •     else
  •         Solve $\begin{bmatrix} B - \hat\lambda_k I & x_k \\ x_k^T & 0 \end{bmatrix}\begin{bmatrix} \Delta y_k \\ \delta_k \end{bmatrix} = \begin{bmatrix} \hat\lambda_k x_k - Bx_k \\ 0 \end{bmatrix}$.
  •         Normalize the vector $x_{k+1} = (x_k + \Delta y_k)/\|x_k + \Delta y_k\|$.
  •     end
  •     Compute $\hat\lambda_{k+1} = \max\frac{Bx_{k+1}}{x_{k+1}}$.
  • until convergence: $\|Bx_{k+1} - \hat\lambda_k x_{k+1}\| <$ tol.
Note that the sequence $\{\hat\lambda_k I - B\}$ tends to a singular matrix as $\{\hat\lambda_k\}$ converges to an eigenvalue of $B$, so (2) becomes an ill-conditioned linear system. Based on practical experiments, we propose monitoring the residual $\|Bx_{k+1} - \hat\lambda_k x_{k+1}\|$ and switching from (2) to (26) once it approaches tol.
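As an illustration, one MNI correction step, i.e., the bordered solve (26) followed by the update (27), can be sketched in Python with dense linear algebra as follows:

```python
import numpy as np

def mni_step(B, lam, x):
    """One modified-Noda step: solve the bordered system (26), update via (27)."""
    n = B.shape[0]
    r = lam * x - B @ x                            # r_k = (lam*I - B) x_k
    K = np.block([[B - lam * np.eye(n), x[:, None]],
                  [x[None, :],          np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([r, [0.0]]))
    dy = sol[:n]                                   # Delta y_k
    x_new = (x + dy) / np.linalg.norm(x + dy)      # update (27)
    lam_new = np.max(B @ x_new / x_new)            # Rayleigh-quotient-like update
    return lam_new, x_new
```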

5.2. The Modified Inexact Noda Iteration

For a monotone matrix $A$, we replace $B$ with $A^{-1}$ in (26). The linear system (26) can then be rewritten as
$$\begin{bmatrix} I - \hat\lambda_k A & Ax_k \\ x_k^T & 0 \end{bmatrix}\begin{bmatrix} \Delta y_k \\ \delta_k \end{bmatrix} = \begin{bmatrix} \hat\lambda_k Ax_k - x_k \\ 0 \end{bmatrix}. \tag{28}$$
Based on MNI, by combining INI (Algorithm 2) with Equation (28), we can propose a modified inexact Noda iteration for a monotone matrix, which is described as Algorithm 4.
Algorithm 4 Modified inexact Noda iteration (MINI)
  • Given $\bar\lambda_0$, $x_0 > 0$ with $\|x_0\| = 1$, and tol $> 0$.
  • for $k = 0, 1, 2, \ldots$
  •     if $\|Ax_{k+1} - \bar\lambda_k^{-1} x_{k+1}\| >$ tol
  •         Perform one INI step for the monotone matrix $A$ (Algorithm 2).
  •     else
  •         Solve $\begin{bmatrix} I - \bar\lambda_k A & Ax_k \\ x_k^T & 0 \end{bmatrix}\begin{bmatrix} \Delta y_k \\ \delta_k \end{bmatrix} = \begin{bmatrix} \bar\lambda_k Ax_k - x_k \\ 0 \end{bmatrix}$.
  •         Normalize the vector $x_{k+1} = (x_k + \Delta y_k)/\|x_k + \Delta y_k\|$.
  •         Compute $\bar\lambda_{k+1}$ satisfying condition (9).
  •     end
  • until convergence: $\|Ax_{k+1} - \bar\lambda_k^{-1} x_{k+1}\| <$ tol.
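In the monotone case, the bordered system (28) is assembled directly from $A$, so $A^{-1}$ never has to be formed; a sparse Python sketch of the "else" branch of Algorithm 4 (assuming $A$ is a scipy.sparse matrix) is given below.

```python
import numpy as np
from scipy.sparse import bmat, eye as speye
from scipy.sparse.linalg import spsolve

def mini_correction(A, lam, x):
    """One bordered-solve step of Algorithm 4: system (28), built from A only."""
    n = A.shape[0]
    Ax = A @ x
    K = bmat([[speye(n) - lam * A, Ax.reshape(-1, 1)],
              [x.reshape(1, -1),   None]], format='csc')   # None = zero block
    rhs = np.concatenate([lam * Ax - x, [0.0]])
    sol = spsolve(K, rhs)
    dy = sol[:n]                                           # Delta y_k
    return (x + dy) / np.linalg.norm(x + dy)               # new iterate x_{k+1}
```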

6. Numerical Experiments

In this section, we present numerical experiments to validate our theoretical results for INI and to demonstrate the effectiveness of the proposed MINI algorithm. All numerical tests were conducted on an Intel(R) Core(TM) i7-4770 CPU @ 3.4 GHz with 16 GB of memory, using Matlab R2013a with machine precision $\epsilon = 2.22 \times 10^{-16}$ on a Microsoft Windows 7 64-bit system.
$I_{\mathrm{outer}}$ denotes the number of outer iterations required to achieve convergence, and $I_{\mathrm{inner}}$ denotes the total number of inner iterations, which measures the overall efficiency of the INI and MINI methods. Consequently, the average number of inner iterations per outer iteration, $I_{\mathrm{ave}} = I_{\mathrm{inner}}/I_{\mathrm{outer}}$, is also reported for the tested algorithms. In the tables, "Positivity" indicates whether the converged Perron vector maintains strict positivity; if "No," the percentage in parentheses indicates the proportion of positive components in the converged Perron vector. Additionally, the CPU time for each algorithm is reported to measure overall efficiency.

6.1. INI for Computing the Smallest Eigenvalue of a Monotone Matrix

An example is presented to illustrate the numerical behavior of NI, INI_1, and INI_2 for monotone matrices. The minimal residual method is used to solve the inner linear systems, implemented via the standard Matlab function minres. The outer iteration begins with the normalized vector of $[1, \ldots, 1]^T$, and the stopping criterion for the outer iterations is defined as
$$\frac{\|Ax_k - \bar\lambda_k^{-1} x_k\|}{(\|A\|_1 \|A\|_\infty)^{1/2}} \leq 10^{-10},$$
where $\|\cdot\|_1$ and $\|\cdot\|_\infty$ are the one-norm and the infinity-norm of a matrix, respectively.
The approximate solution $y_{k+1}$ of the system
$$(\bar\lambda_k A - I)\,y_{k+1} = Ax_k + f_k$$
satisfies the required inner tolerance
$$\|f_k\| \leq \gamma_k\,\mathrm{sep}(0, A)\,\min(x_k)$$
with some $0 < \gamma_k < 1$.
This condition ensures that the eigenvector in Lemma 2 preserves the strict positivity property. However, the formula is not directly implementable, because it uses $\mathrm{sep}(0, A)$, which is unknown at the time it needs to be computed. Since $\mathrm{sep}(0, A) = \sigma_{\min}(A)$ when $A$ is symmetric, a practical implementation replaces $\mathrm{sep}(0, A)$ with $\bar\lambda_k^{-1}$; the quantity $\bar\lambda_k^{-1}$ serves as a lower bound for the smallest eigenvalue of $A$, i.e., $\sigma_{\min}(A) \geq \bar\lambda_k^{-1}$. For all examples, the stopping criterion for the inner iterations is set to
$$\|f_k\| \leq \max\left\{\gamma_k \min(x_k)/\bar\lambda_k,\ 10^{-13}\right\} \quad \text{for INI\_1}$$
and
$$\|f_k\| \leq \max\left\{\frac{\bar\lambda_{k-1} - \bar\lambda_k}{\bar\lambda_{k-1}\,\bar\lambda_k}\,\min(x_k),\ 10^{-13}\right\} \quad \text{for INI\_2}.$$
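In code, the two variants differ only in how the relaxation factor entering the inner tolerance is chosen; a small Python helper (illustrative names, floor $10^{-13}$ as above) might read:

```python
import numpy as np

def inner_tol(variant, x, lam, lam_prev=None, gamma=0.5):
    """Inner residual tolerance for the MINRES solve, floored at 1e-13."""
    if variant == 'INI_1':
        return max(gamma * x.min() / lam, 1e-13)          # fixed relaxation factor
    # INI_2: adaptive factor gamma_k = (lam_prev - lam)/lam_prev from (25),
    # so the tolerance tightens automatically as the outer iteration converges.
    return max((lam_prev - lam) / (lam_prev * lam) * x.min(), 1e-13)
```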
Example 1. 
We consider the finite-element discretization of the boundary value problem in [2] (Example 4.2.4)
$$-u_{xx} - u_{yy} = g(x, y) \ \text{in}\ \Omega = [0, a] \times [0, b],\ a, b > 0, \qquad u = f(x, y) \ \text{on}\ \partial\Omega,$$
using piecewise quadratic basis functions on a uniform mesh of $p \times m$ isosceles, right-angled triangles. This yields a matrix of order $n = (2p - 1)(2m - 1) = 127{,}041$ with $p = 400$ and $m = 80$.
For Example 1, it is observed that, for this monotone matrix eigenproblem, INI_1 with two different values of $\gamma_k$ (0.8 and 0.5) exhibits distinct convergence behaviors, requiring 51 and 18 outer iterations, respectively, to achieve the desired accuracy. As shown in Figure 1, NI and INI_2 typically converge superlinearly, while INI_1 with $\gamma_k = 0.8$ or $0.5$ converges linearly. This confirms our theoretical predictions and demonstrates that the results of our theorems are both realistic and significant.
From Table 1, it is observed that all the converged eigenvectors are positive and that INI_2 improves the overall efficiency of NI. Specifically, the INI_1 algorithm converges linearly and slowly, requiring two to four times the CPU time of INI_2, even though $I_{\mathrm{ave}}$ for INI_1 is only about half of $I_{\mathrm{ave}}$ for INI_2. There are two reasons for this. First, since the approximate eigenvalues are obtained from relation (9), the parameter $\gamma_k$ influences the convergence rate, as seen in Figure 1. Second, according to (8), INI_2 solves the inner linear systems increasingly accurately as $k$ increases, whereas the inner tolerance used by INI_1 remains fixed except for the factor $\min(x_k)$; this is why the average number of inner iterations for INI_1 is about half of that for INI_2.

6.2. MINI for Computing the Smallest Singular Value of an M-Matrix

In the previous subsection, INI_2 demonstrated significantly better overall efficiency than NI and INI_1. Therefore, in this subsection, the MINI algorithm (INI_2 combined with MNI) is used to find the smallest singular value of an M-matrix and the associated eigenvector of the augmented matrix, confirming the effectiveness of MINI and the theory presented in Section 3 and Section 4. For MINI, the stopping criteria for the inner and outer iterations are the same as those for monotone matrices. Additionally, MINI is compared with the JDQR [23] and JDSVD [24] algorithms, as well as the Matlab function svds; none of these algorithms preserve the positivity of approximate eigenvectors. The results show that the MINI algorithm consistently and reliably computes positive eigenvectors, whereas the other algorithms generally fail to do so.
Since JDQR and JDSVD use absolute residual norms to determine convergence, the stopping criterion for the outer iterations is set to "TOL = $10^{-10}(\|A\|_1\|A\|_\infty)^{1/2}$," matching the criterion used for MINI. The parameters are set to "sigma = SM" for JDQR and "opts.target = 0" for JDSVD, and the inner solver is set to "OPTIONS.LSolver = minres." All other options use default settings. No preconditioning is applied to the inner linear systems. For svds, the stopping criterion is set to "OPTS.tol = $10^{-10}(\|A\|_1\|A\|_\infty)^{1/2}$," with the maximum and minimum subspace dimensions set to 20 and 2 at each restart, respectively.
Suppose we want to compute the smallest singular value and the corresponding singular vectors of a real $n \times n$ M-matrix $M$. This partial SVD can be computed via an equivalent eigenvalue decomposition, specifically that of the augmented matrix
$$A = \begin{bmatrix} 0 & M \\ M^T & 0 \end{bmatrix}.$$
While such a matrix $A$ is not an M-matrix, it is indeed monotone.
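Assembling the augmented matrix is straightforward; the Python sketch below builds a small random M-matrix (hypothetical data) and forms $A$, whose inverse is $\begin{bmatrix} 0 & M^{-T} \\ M^{-1} & 0 \end{bmatrix} \geq 0$, confirming that $A$ is monotone.

```python
import numpy as np
from scipy.sparse import bmat, eye as speye, random as sprandom

# A small random M-matrix M = sigma*I - B with sigma > rho(B) (hypothetical data);
# for a non-negative B, rho(B) is at most the largest row sum, so sigma is valid.
B = sprandom(1000, 1000, density=0.01, format='csr', random_state=0)
sigma = 1.1 * B.sum(axis=1).max()
M = sigma * speye(1000) - B

# Symmetric augmented matrix A: not an M-matrix (its off-diagonal blocks are
# non-negative), but monotone, since A^{-1} = [[0, M^{-T}], [M^{-1}, 0]] >= 0.
A = bmat([[None, M], [M.T, None]], format='csr')
```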
Example 2. 
Consider a symmetric M-matrix of the form M = σ I B , where B is the non-negative matrix rgg_n_2_19_s0 from the DIMACS10 test set [25]. This matrix represents a random geometric graph with 2 19 vertices. Each vertex is a random point in the unit square, and edges connect vertices whose Euclidean distance is below 0.55 ( log ( n ) / n ). This threshold ensures that the graph is almost connected. The resulting matrix is binary, with n = 2 19 = 524,288 and 6,539,532 nonzero entries.
For this problem, the MINI algorithm performs exceptionally well, requiring only six outer iterations to achieve the desired accuracy. Additionally, it is both reliable and positivity-preserving. In contrast, while JDQR, JDSVD, and svds can compute the desired eigenvalue, the resulting eigenvectors are not positive. Specifically, Table 2 shows that, for these algorithms, approximately 50% of the components of each converged eigenvector are negative.
Regarding overall efficiency, MINI is the most efficient in terms of $I_{\mathrm{inner}}$, $I_{\mathrm{outer}}$, and CPU time. JDQR and svds require at least five times the CPU time of MINI; they are also more expensive than JDSVD in terms of CPU time.

7. Conclusions

We have proposed an inexact Noda iteration method for computing the smallest eigenpair of a large, irreducible monotone matrix, and we have analyzed the convergence of the (modified) inexact Noda iteration with two relaxation factors. We have proven that the convergence of INI is globally linear or superlinear, with the asymptotic convergence factor bounded by $\frac{2\gamma_k}{1+\gamma_k}$. More precisely, the inexact Noda iteration with inner tolerance $\xi_k = \|f_k\| \leq \gamma_k\,\mathrm{sep}(0, A)\min(x_k)$ converges at least linearly if the relaxation factors satisfy $\gamma_k \leq \gamma < 1$ and superlinearly if they satisfy $\gamma_k = \frac{\bar\lambda_{k-1} - \bar\lambda_k}{\bar\lambda_{k-1}}$. The results for INI clearly show how the accuracy of the inner iterations affects the convergence of the outer iterations.
In the experiments, we also compared MINI with Jacobi–Davidson-type methods (JDQR and JDSVD) and the implicitly restarted Arnoldi method (svds). The contribution of this paper is twofold. First, MINI consistently preserves the positivity of approximate eigenvectors, unlike the other three methods, which often fail in this regard. Second, the proposed MINI algorithms have proven to be practical and effective for large monotone matrix eigenvalue problems and M-matrix singular value problems.

Funding

This research was funded by National Science and Technology Council grant number 112-2628-M-390-001-MY4.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Olesky, D.D.; Driessche, P.V. Monotone positive stable matrices. Linear Algebra Appl. 1983, 48, 381–401. [Google Scholar] [CrossRef]
  2. Axelsson, O.; Kolotilina, L. Monotonicity and discretization error estimates. SIAM J. Numer. Anal. 1990, 27, 1591–1611. [Google Scholar] [CrossRef]
  3. Axelsson, O.; Gololobov, S.V. Monotonicity and Discretization Error Estimates for Convection-Diffusion Problems; Technical Report; Department of Mathematics University of Nijmegen: Nijmegen, The Netherlands, 2003. [Google Scholar]
  4. Schroeder, J. M-Matrices and generalizations using an operator theory approach. SIAM Rev. 1978, 20, 213–244. [Google Scholar] [CrossRef]
  5. Sivakumar, K.C. A new characterization of nonnegativity of Moore–Penrose inverses of Gram operators. Positivity 2009, 13, 277–286. [Google Scholar] [CrossRef]
  6. Berman, A.; Plemmons, R.J. Non-Negative Matrices in the Mathematical Sciences. In Classics in Applied Mathematics; SIAM: Philadelphia, PA, USA, 1994; Volume 9. [Google Scholar]
  7. Horn, R.A.; Johnson, C.R. Matrix Analysis; The Cambridge University Press: Cambridge, UK, 1985. [Google Scholar]
  8. Berns-Müller, J.; Graham, I.G.; Spence, A. Inexact inverse iteration for symmetric matrices. Linear Algebra Appl. 2006, 416, 389–413. [Google Scholar] [CrossRef]
  9. Jia, Z. On convergence of the inexact Rayleigh quotient iteration with the Lanczos method used for solving linear systems. Sci. China Math. 2013, 5, 2145–2160. [Google Scholar] [CrossRef]
  10. Lee, C. Residual Arnoldi Method: Theory, Package and Experiments. TR-4515. Ph.D. Thesis, Department of Computer Science, University of Maryland, College Park, MD, USA, 2007. [Google Scholar]
  11. Lai, Y.-L.; Lin, K.-Y.; Lin, W.-W. An inexact inverse iteration for large sparse eigenvalue problems. Numer. Linear Algebra Appl. 1997, 4, 425–437. [Google Scholar] [CrossRef]
  12. Ostrowski, A.M. On the convergence of the Rayleigh quotient iteration for the computation of the characteristic roots and vectors. V. (Usual Rayleigh quotient for non-Hermitian matrices and linear elementary divisors). Arch. Rational Mech. Anal. 1959, 3, 472–481. [Google Scholar] [CrossRef]
  13. Parlett, B.N. The Symmetric Eigenvalue Problem; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 1998. [Google Scholar]
  14. Saad, Y. Numerical Methods for Large Eigenvalue Problems; Manchester University Press: Manchester, UK, 1992. [Google Scholar]
  15. Sleijpen, G.L.G.; Vorst, H.A.v. A Jacobi–Davidson iteration method for linear eigenvalue problems. SIAM J. Matrix Anal. Appl. 1996, 17, 401–425. [Google Scholar] [CrossRef]
  16. Stewart, G.W. Matrix Algorithms; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2001; Volume 2. [Google Scholar]
  17. Noda, T. Note on the computation of the maximal eigenvalue of a non-negative irreducible matrix. Numer. Math. 1971, 17, 382–386. [Google Scholar] [CrossRef]
  18. Alfa, A.S.; Xue, J.; Ye, Q. Accurate computation of the smallest eigenvalue of a diagonally dominant M-matrix. Math. Comp. 2002, 71, 217–236. [Google Scholar] [CrossRef]
  19. Xue, J. Computing the smallest eigenvalue of an M-matrix. SIAM J. Matrix Anal. Appl. 1996, 17, 748–762. [Google Scholar]
  20. Jia, Z.; Lin, W.-W.; Liu, C.-S. A positivity preserving inexact Noda iteration for computing the smallest eigenpair of a large irreducible M-matrix. Numer. Math. 2015, 130, 645–679. [Google Scholar] [CrossRef]
  21. Collatz, L. Numerical Treatment of Differential Equations, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 1960. [Google Scholar]
  22. Golub, G.H.; Van Loan, C.F. Matrix Computations, 4th ed.; The Johns Hopkins University Press: Baltimore, MD, USA, 2012. [Google Scholar]
  23. Sleijpen, G.L.G. JDQR and JDQZ Codes. Available online: http://www.math.uu.nl/people/sleijpen (accessed on 14 July 2024).
  24. Hochstenbach, M.E. A Jacobi-Davidson type SVD method. SIAM J. Sci. Comp. 2001, 23, 606–628. [Google Scholar] [CrossRef]
  25. DIMACS10 Test Set and the University of Florida Sparse Matrix Collection. Available online: https://sparse.tamu.edu/DIMACS10 (accessed on 14 July 2024).
Figure 1. The outer residual norms versus outer iterations in Example 1.
Table 1. The total outer and inner iterations in Example 1.

Method              | I_outer | I_inner | I_ave | CPU Time | Positivity
INI_1 with γ = 0.8  | 51      | 19,622  | 384   | 76       | Yes
INI_1 with γ = 0.5  | 18      | 11,233  | 624   | 38       | Yes
NI                  | 5       | 3621    | 724   | 25       | Yes
INI_2               | 5       | 3591    | 718   | 19       | Yes
Table 2. The total outer and inner iterations in Example 2.

Method | I_outer | I_inner | I_ave | CPU Time | Positivity
MINI   | 6       | 331     | 55    | 30       | Yes
JDQR   | 25      | 4068    | 162   | 243      | No (52%)
JDSVD  | 34      | 1432    | 42    | 58       | No (51%)
svds   | 1       | 40      | 40    | 144      | No (57%)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
