1. Introduction
Monotone matrices play a significant role in various mathematical applications, including stability analysis [1], eigenvalue bounds, and more [2,3]. A real matrix $A$ is termed monotone if its inverse $A^{-1}$ is non-negative [4,5]. This condition distinguishes monotone matrices from $M$-matrices, which can be expressed as $A = sI - B$ with $B$ non-negative and $s \ge \rho(B)$ [6]. For irreducible nonsingular $M$-matrices, the smallest eigenvalue $\lambda_{\min}$ satisfies $\lambda_{\min} = s - \rho(B)$, whereas for monotone matrices, it is given by $\lambda_{\min} = 1/\rho(A^{-1})$. Despite these differences, both types share important spectral properties (p. 487, [7]), such as the Perron root of $A^{-1}$ being the largest eigenvalue of $A^{-1}$, which is associated with a positive eigenvector.
Several methods [8,9,10,11,12,13,14,15,16] exist for computing the Perron vector of a non-negative matrix $B$, but ensuring the strict positivity of approximations can be challenging, especially when the Perron vector has very small components. One effective approach is the Noda iteration (NI), introduced by Noda in 1971, which utilizes shifted Rayleigh quotient-like approximations [17]. This method has been adapted for irreducible nonsingular $M$-matrices as well [18,19]. For large, sparse non-negative matrices, Jia, Lin, and Liu proposed inexact Noda iteration (INI) strategies specifically designed to preserve the positivity of approximate eigenvectors while maintaining practicality and fast convergence [20].
In this paper, we introduce an inexact Noda iteration (INI) method tailored to finding the smallest eigenvalue and the associated eigenvector of an irreducible monotone matrix. Our main contributions include two relaxation steps aimed at enhancing computational efficiency and preserving the structure. The first step uses a relaxed residual bound as the stopping criterion for the inner iterations, stated in terms of the relaxation factor and the current positive approximate eigenvector $x_k$. This criterion ensures that the iterations converge effectively while maintaining the positivity constraints. The second step updates the approximate eigenvalues through recurrence relations involving the next normalized positive approximate eigenvector $x_{k+1}$, instead of an exact Rayleigh quotient-like formula. This update scheme not only maintains structure preservation but also guarantees the global convergence of the INI algorithms.
The parameter $\gamma$, referred to as the "relaxation factor," plays a crucial role in the convergence theory of INI. We establish rigorous convergence results for INI with two different relaxation factors, showing that the algorithms exhibit globally linear and superlinear convergence rates corresponding to the two choices. Furthermore, because the linear systems encountered during the inner iterations become ill conditioned as the approximate eigenvalues converge to $\rho(A^{-1})$ (equivalently, as $1/\bar\lambda_k$ converges to $\lambda_{\min}$), we propose a modified Noda iteration (MNI). This approach employs rank-one updates within the inner iterations to reduce the condition number of the linear systems. We demonstrate mathematically that MNI is equivalent to NI and integrate it into our framework, termed the modified inexact Noda iteration (MINI). MINI combines the strengths of INI and MNI, significantly improving the condition number of the inner linear systems encountered during the iterative process for monotone matrix eigenvalue problems.
The paper is organized as follows: In Section 2, the Noda iteration and preliminary concepts are introduced. Section 3 presents the new strategy for the inexact Noda iteration and proves its basic properties. Section 4 establishes the convergence theory of the method and derives the asymptotic convergence factor. Section 5 details the integrated algorithm combining INI with MNI. In Section 6, numerical examples are presented to illustrate the convergence theory and demonstrate the effectiveness of INI. Finally, Section 7 provides concluding remarks.
2. Preliminaries and Notation
For any real matrix $B = (b_{ij})$, we write $B \ge 0$ if $b_{ij} \ge 0$ for all $i, j$. For a vector $x$, we define $\min(x) = \min_i x_i$ and $\max(x) = \max_i x_i$. If $B \ge 0$, we say $B$ is a non-negative matrix, and if $B > 0$ entrywise, we say $B$ is a positive matrix. For real matrices $B$ and $C$ of the same size, if $B - C$ is a non-negative matrix, we write $B \ge C$. A non-negative (positive) vector is defined similarly. A non-negative matrix $B$ is said to be reducible if it can be placed into block upper-triangular form through simultaneous row/column permutations; otherwise, it is irreducible. If $\lambda$ is not an eigenvalue of $B$, the resolvent applied to a vector $x$, $y(\lambda) = (\lambda I - B)^{-1} x$, is well defined. Throughout the paper, we use the 2-norm for vectors and matrices, and the superscript $T$ denotes the transpose.
We review some fundamental properties of non-negative matrices, monotone matrices, and M-matrices.
Definition 1. A matrix $A$ is said to be "monotone" if $Ax \ge 0$ implies $x \ge 0$ for any real vector $x$. Furthermore, a monotone matrix $A$ is an $M$-matrix if $a_{ij} \le 0$ for $i \ne j$.
Another characterization of monotone matrices is given in the following well-known theorem.
Theorem 1 ([21]). $A$ is monotone if and only if $A$ is non-singular and $A^{-1} \ge 0$.
Lemma 1 ([6]). Let $M$ be a nonsingular $M$-matrix. Then, the following statements are equivalent:
- 1. $m_{ij} \le 0$, for $i \ne j$, and $M^{-1} \ge 0$;
- 2. $M = sI - B$ with some $B \ge 0$, and $s > \rho(B)$.
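To make Lemma 1 concrete, the following small check (a sketch with an illustrative matrix, not one from the paper) verifies both characterizations numerically: the representation $M = sI - B$ with $s > \rho(B)$, and the non-negativity of $M^{-1}$.

```python
# Numerical check of Lemma 1 on a sample M-matrix M = s*I - B:
# both equivalent statements should hold simultaneously.
import numpy as np

B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])             # non-negative matrix
s = 3.0                                     # s > rho(B) = sqrt(2)
M = s * np.eye(3) - B                       # a nonsingular M-matrix

print(max(abs(np.linalg.eigvals(B))) < s)   # True: s exceeds rho(B)
print((np.linalg.inv(M) >= -1e-15).all())   # True: M^{-1} is non-negative
```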
For a pair of positive vectors $u$ and $v$, define
$$\max\!\left(\frac{u}{v}\right) = \max_i \frac{u_i}{v_i} \quad \text{and} \quad \min\!\left(\frac{u}{v}\right) = \min_i \frac{u_i}{v_i},$$
where $u = (u_i)$, $v = (v_i)$, and $u/v$ denotes the componentwise quotient.
Theorem 2 ([7]). Let $B$ be an irreducible non-negative matrix. Then, there exist $\rho(B) > 0$ and a positive unit vector $v$ such that $Bv = \rho(B)v$. If $\lambda$ is an eigenvalue of $B$, then $|\lambda| \le \rho(B)$. Denote the Perron pair by $(\rho(B), v)$. If $\lambda$ is an eigenvalue with a non-negative unit eigenvector $x$, then $\lambda = \rho(B)$ and $x = v$. Moreover, for any positive vector $x$,
$$\min\!\left(\frac{Bx}{x}\right) \le \rho(B) \le \max\!\left(\frac{Bx}{x}\right).$$
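The Collatz–Wielandt bounds at the end of Theorem 2 can be checked numerically. The following sketch uses an illustrative irreducible non-negative matrix (not from the paper) and an arbitrary positive test vector:

```python
# Check min(Bx/x) <= rho(B) <= max(Bx/x) for a positive vector x.
import numpy as np

B = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # irreducible and non-negative
rho = max(abs(np.linalg.eigvals(B)))     # Perron root rho(B) = 4

x = np.array([1.0, 2.0, 0.5])            # any positive test vector
q = (B @ x) / x                          # componentwise quotients
print(q.min(), rho, q.max())             # 3.75 <= 4.0 <= 6.0
```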
2.1. The Noda Iteration
The Noda iteration [17] is an inverse iteration method adjusted by a Rayleigh quotient-like approximation of the Perron pair of an irreducible non-negative matrix $B$. Given an initial positive vector $x_0$ with $\|x_0\| = 1$, the Noda iteration (NI) consists of three steps:
$$(\bar\lambda_k I - B)\, y_{k+1} = x_k, \qquad (2)$$
$$x_{k+1} = y_{k+1}/\|y_{k+1}\|, \qquad \bar\lambda_{k+1} = \max\!\left(\frac{B x_{k+1}}{x_{k+1}}\right).$$
The main goal is to compute a new approximation $\bar\lambda_{k+1}$ to $\rho(B)$ by solving the inner linear system (2). From Theorem 2, we know that $\bar\lambda_k > \rho(B)$ if $x_k$ is not a scalar multiple of the eigenvector $v$. This result shows that $\bar\lambda_k I - B$ is an irreducible nonsingular $M$-matrix, and its inverse is an irreducible non-negative matrix. Consequently, $y_{k+1} > 0$ and $x_{k+1} > 0$; i.e., $x_{k+1}$ is always a positive vector if $x_k$ is. Through a variable transformation, $\bar\lambda_{k+1}$ is obtained from the following relation:
$$\bar\lambda_{k+1} = \bar\lambda_k - \min\!\left(\frac{x_k}{y_{k+1}}\right),$$
ensuring that $\{\bar\lambda_k\}$ is monotonically decreasing.
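The three steps above translate directly into code. The following is a minimal sketch of NI for a small dense irreducible non-negative matrix; the matrix, the starting vector, and the use of a dense solver are illustrative assumptions (the paper's setting is large and sparse):

```python
# Noda iteration (NI): solve the shifted system, normalize, update the
# shift with the Collatz-Wielandt quotient, which decreases to rho(B).
import numpy as np

def noda_iteration(B, tol=1e-12, max_iter=100):
    n = B.shape[0]
    x = np.ones(n) / np.sqrt(n)                  # positive unit start
    lam = float(np.max((B @ x) / x))             # upper bound for rho(B)
    for _ in range(max_iter):
        y = np.linalg.solve(lam * np.eye(n) - B, x)   # inner system (2), exact here
        x = y / np.linalg.norm(y)                     # x stays positive
        lam_new = float(np.max((B @ x) / x))          # monotonically decreasing
        if lam - lam_new < tol:
            return lam_new, x
        lam = lam_new
    return lam, x

B = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
print(noda_iteration(B)[0])     # converges to rho(B) = 4
```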
2.2. The Inexact Noda Iteration
Based on the Noda iteration, the authors of [20] proposed an inexact Noda iteration (INI) for the computation of the spectral radius of a non-negative irreducible matrix $B$. In this paper, since $A$ is a monotone matrix, i.e., $A^{-1}$ is a non-negative matrix, we replace $B$ with $A^{-1}$ in (2); i.e.,
$$(\bar\lambda_k I - A^{-1})\, y_{k+1} = x_k. \qquad (3)$$
When $A$ is large and sparse, we must resort to an iterative linear solver to obtain an approximate solution. In order to reduce the computational cost of (3), we solve for $y_{k+1}$ in (3) inexactly, satisfying
$$(\bar\lambda_k A - I)\, y_{k+1} = A x_k + f_k, \qquad (4)$$
which is equivalent to
$$(\bar\lambda_k I - A^{-1})\, y_{k+1} = x_k + A^{-1} f_k, \qquad (5)$$
where $f_k$ is the residual vector between $(\bar\lambda_k A - I)\, y_{k+1}$ and $A x_k$. Here, the residual norm (inner tolerance) $\xi_k = \|f_k\|$ can be changed at each iterative step $k$.
Theorem 3 ([20]). Let $A$ be an irreducible monotone matrix and let $\gamma \in (0, 1)$ be a fixed constant. For the unit-length positive vector $x_k$, if $\bar\lambda_k > \rho(A^{-1})$ and the inner tolerance $\xi_k = \|f_k\|$ in (5) satisfies condition (7), then $\{\bar\lambda_k\}$ is monotonically decreasing and bounded below by $\rho(A^{-1})$. Moreover, the convergence of INI is at least globally linear.
Based on (5)-(7), we describe INI as Algorithm 1.

Algorithm 1 Inexact Noda Iteration (INI)
1. Given $x_0 > 0$ with $\|x_0\| = 1$ and $\bar\lambda_0 > \rho(A^{-1})$.
2. for $k = 0, 1, 2, \ldots$
3.   Solve $(\bar\lambda_k A - I)\, y_{k+1} = A x_k$ inexactly such that the inner tolerance satisfies condition (7).
4.   Normalize the vector $x_{k+1} = y_{k+1}/\|y_{k+1}\|$.
5.   Compute $\bar\lambda_{k+1}$.
6. until convergence: $\mathrm{Resi} \le tol$.
When Relation (4) is used, step 5 in Algorithm 1 can be rewritten as
$$\bar\lambda_{k+1} = \bar\lambda_k - \min\!\left(\frac{x_k + A^{-1} f_k}{y_{k+1}}\right).$$
Unfortunately, $A^{-1} f_k$ is not explicitly available; in other words, we would need to compute "$A^{-1} x_{k+1}$" exactly to obtain the required approximate eigenvalue $\bar\lambda_{k+1}$. Hence, in the next section, we propose a new strategy (see (9)) to estimate the approximate eigenvalues without increasing the computational cost. This strategy is practical and preserves the strictly decreasing property of the approximate eigenvalue sequence.
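To make the inexactness concrete, the following sketch performs one INI-style outer step for a symmetric monotone matrix $A$, solving the $A$-multiplied system (4) with MINRES. The tolerance rule and the recurrence update are simplified stand-ins for conditions (8) and (9) of the next section; `gamma` plays the role of the relaxation factor, and the keyword `rtol` is the residual-tolerance argument of recent SciPy releases (older ones use `tol`):

```python
# One inexact outer step: solve (lam*A - I) y = A x approximately, then
# update the shift by the recurrence lam - min(x/y) (its exact-solve form).
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def ini_outer_step(A, lam, x, gamma=0.5):
    n = A.shape[0]
    op = LinearOperator((n, n), matvec=lambda v: lam * (A @ v) - v)  # lam*A - I (symmetric)
    rhs = A @ x
    # relaxed inner tolerance: aim for ||rhs - op*y|| <= gamma * min(x)
    y, info = minres(op, rhs, rtol=gamma * np.min(x) / np.linalg.norm(rhs))
    lam_next = lam - np.min(x / y)       # assumes the inexact solve kept y > 0
    x_next = y / np.linalg.norm(y)       # next positive approximate eigenvector
    return lam_next, x_next
```

Here `lam` approximates $\rho(A^{-1})$ from above, so `1/lam` is a computable lower bound for the smallest eigenvalue of $A$.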
3. The Relaxation Strategy for INI and Some Basic Properties
In order to ensure that INI is implemented correctly, we now propose two main relaxation steps to define Algorithm 2:
- The residual norm $\|f_k\|$ satisfies the relaxed inner tolerance condition (8), in which the relaxation factor $\gamma_k$ has a constant upper bound $\gamma < 1$.
- The update of the approximate eigenvalue $\bar\lambda_{k+1}$ satisfies the recurrence condition (9), stated in terms of the inexact solution $y_{k+1}$ and the next normalized positive approximate eigenvector $x_{k+1}$.

Algorithm 2 Inexact Noda Iteration for monotone matrices (INI)
1. Given $x_0 > 0$ with $\|x_0\| = 1$ and $\bar\lambda_0 > \rho(A^{-1})$.
2. for $k = 0, 1, 2, \ldots$
3.   Solve $(\bar\lambda_k A - I)\, y_{k+1} = A x_k$ inexactly such that the inner tolerance satisfies condition (8).
4.   Normalize the vector $x_{k+1} = y_{k+1}/\|y_{k+1}\|$.
5.   Compute $\bar\lambda_{k+1}$ that satisfies condition (9).
6. until convergence: $\mathrm{Resi} \le tol$.
Step 3 of Algorithm 2 leads to two equivalent inexact relations, satisfying
$$(\bar\lambda_k A - I)\, y_{k+1} = A x_k + f_k, \qquad (10)$$
$$(\bar\lambda_k I - A^{-1})\, y_{k+1} = x_k + A^{-1} f_k. \qquad (11)$$
We remark that $\bar\lambda_{k+1}$ in (9) is no longer equal to $\max(A^{-1}x_{k+1}/x_{k+1})$ and, therefore, that $\bar\lambda_{k+1}$ cannot immediately be shown to be greater than its lower bound $\rho(A^{-1})$. The following lemma ensures that $\rho(A^{-1})$ is still a lower bound of $\bar\lambda_{k+1}$.
Lemma 2. Let $A$ be an irreducible monotone matrix with the Perron pair $(\rho(A^{-1}), v)$ of $A^{-1}$. For the unit-length $x_k > 0$ and the relaxation factor $\gamma_k$, if $\bar\lambda_k > \rho(A^{-1})$, $f_k$ in (11) satisfies Condition (8), and the approximate eigenvalue $\bar\lambda_{k+1}$ satisfies (9), then the new approximation $x_{k+1}$ is positive and the sequence $\{\bar\lambda_k\}$ is monotonically decreasing and bounded below by $\rho(A^{-1})$; i.e.,
$$\rho(A^{-1}) \le \bar\lambda_{k+1} \le \bar\lambda_k.$$
Proof. From (9) and the positivity of $y_{k+1}$, it is easy to see that $\{\bar\lambda_k\}$ is monotonically decreasing; i.e., $\bar\lambda_{k+1} \le \bar\lambda_k$. From (8), the inexact term is small enough relative to the positive vector $x_k$ that the right-hand side of (11) remains positive. Since $\bar\lambda_k > \rho(A^{-1})$, the matrix $\bar\lambda_k I - A^{-1}$ is a nonsingular $M$-matrix, and the vector
$$y_{k+1} = (\bar\lambda_k I - A^{-1})^{-1}(x_k + A^{-1} f_k)$$
satisfies $y_{k+1} > 0$. This implies $x_{k+1} > 0$.
We now prove that $\bar\lambda_{k+1}$ is bounded below by $\rho(A^{-1})$. From (12), we have lower and upper bounds on the componentwise quotients of $y_{k+1}$, and hence we obtain a bound on $\min(x_k/y_{k+1})$. Then, from (10), it follows that the exact update cannot overshoot the Perron root. Combining (9), (15) and (14) then gives $\bar\lambda_{k+1} \ge \rho(A^{-1})$. By induction, $\{\bar\lambda_k\}$ is bounded below by $\rho(A^{-1})$; i.e., $\bar\lambda_k \ge \rho(A^{-1})$ for all $k$. □
From Lemma 2, since $\{\bar\lambda_k\}$ is a monotonically decreasing and bounded sequence, we must have $\lim_{k\to\infty} \bar\lambda_k = \bar\lambda_* \ge \rho(A^{-1})$.
We next investigate the limit behavior of the iterates $\{x_k\}$, and we present some basic results; these play an important role later in proving the convergence of INI.
Lemma 3. Assume that the sequence $\{x_k\}$ is generated via Algorithm 2. For any subsequence $\{x_{k_j}\}$, if $x_{k_j} \to \hat{x}$ as $j \to \infty$, then $\hat{x} > 0$.
Proof. Let $S$ be the set of all indices $t$ such that $\hat{x}_t = 0$. Since $\hat{x} \ne 0$, $S$ is a proper subset of $\{1, \ldots, n\}$. Suppose $S$ is nonempty. Then, according to the definition of $S$ and the limit relation satisfied by $\hat{x}$, it holds that $(A^{-1}\hat{x})_s = 0$ for $s \in S$. Since $\hat{x}_t > 0$ for $t \notin S$, this forces $(A^{-1})_{st} = 0$ for all $s \in S$ and all $t \notin S$, which contradicts the irreducibility of $A$. Therefore, $S$ is empty, and thus $\hat{x} > 0$. □
4. Convergence Analysis for INI
In this section, we prove the global convergence and the convergence rate of INI. Furthermore, we derive the explicit linear convergence factor and the superlinear convergence order corresponding to the different choices of the relaxation factors $\gamma_k$.
For the irreducible non-negative matrix $B = A^{-1}$, recall that the largest eigenvalue $\rho(B)$ of $B$ is simple. Let $v$ be the unit-length positive eigenvector corresponding to $\rho(B)$. Then, for any orthogonal matrix $[v, V]$, it holds (cf. [22]) that
$$[v, V]^T B\, [v, V] = \begin{pmatrix} \rho(B) & w^T \\ 0 & L \end{pmatrix}, \qquad (16)$$
with $w = V^T B^T v$ and $L = V^T B V$, whose eigenvalues constitute the other eigenvalues of $B$. Therefore, we now define the spectral gap between $\rho(B)$ and the spectrum of $L$. Similar to (16), we also have the spectral decomposition (17) of the shifted matrix $\bar\lambda_k I - B$, whose leading diagonal entry is $\bar\lambda_k - \rho(B)$. For the normalized iterates $x_k$, it is easy to verify how the components of $x_k$ along $v$ and along the columns of $V$ evolve, from which we get the bounds (18) and (19). Since $x_k > 0$ in INI, it holds that $v^T x_k > 0$. Therefore, we have the relation (20). As $k \to \infty$, it follows from the above relation that the error in $x_k$ is controlled by the gap $\bar\lambda_k - \rho(B)$.
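The block triangularization (16) is easy to verify numerically by extending the Perron vector $v$ to an orthogonal basis. The matrix below is an illustrative symmetric example, for which the coupling block $w$ vanishes as well:

```python
# Check that [v, V]^T B [v, V] has rho(B) in the leading position and
# zeros below it, so L carries the remaining eigenvalues of B.
import numpy as np

B = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
v = np.array([1.0, 2.0, 1.0]) / np.sqrt(6.0)       # Perron vector: B v = 4 v
Q, _ = np.linalg.qr(np.column_stack([v, np.eye(3)[:, :2]]))  # orthogonal [v, V]
T = Q.T @ B @ Q
print(np.round(T, 12))     # leading entry 4 = rho(B); zeros below it
```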
Theorem 4. Let $A$ be an irreducible monotone matrix, and let the sequence $\{(\bar\lambda_k, x_k)\}$ be generated via Algorithm 2. If condition (8) holds, then the monotonically decreasing sequence $\{\bar\lambda_k\}$ converges to $\rho(A^{-1})$, and $\{x_k\}$ converges to the positive eigenvector $v$ corresponding to $\rho(A^{-1})$.
Proof. From Lemma 2, the sequence $\{\bar\lambda_k\}$ is bounded and monotonically decreasing, and we must have $\lim_{k\to\infty} \bar\lambda_k = \bar\lambda_* \ge \rho(A^{-1})$. Next, we prove that $\bar\lambda_* = \rho(A^{-1})$ must hold.
Since the sequence $\{\bar\lambda_k\}$ converges, we obtain from (20) that the successive corrections $\bar\lambda_k - \bar\lambda_{k+1}$ tend to zero. Suppose $\min(x_k)$ is not bounded below by a positive constant. Then, there is a subsequence $\{x_{k_j}\}$ with $\min(x_{k_j}) \to 0$. Since $\|x_{k_j}\| = 1$, we may assume that $\hat{x} = \lim_{j\to\infty} x_{k_j}$ exists. Now, $\min(\hat{x}) = 0$. However, $\hat{x} > 0$ according to Lemma 3. The contradiction shows that $\min(x_k)$ is bounded below by a positive constant.
From (10), we have the inexact residual relation, which can be rewritten as (21). By taking limits on both sides of (21), we then have the corresponding limit identity, which implies $\bar\lambda_* = \rho(A^{-1})$. Therefore, $\{\bar\lambda_k\}$ converges to $\rho(A^{-1})$, and every limit point of $\{x_k\}$ is a positive unit eigenvector associated with $\rho(A^{-1})$; since this eigenvector is unique, $\{x_k\}$ converges to $v$. □
Theorem 4 establishes the global convergence of INI, but the result is only qualitative and does not tell us how fast the INI method converges. In this subsection, we show the convergence rate of INI with different relaxation factors $\gamma_k$. More precisely, we prove that INI converges at least linearly, with an asymptotic convergence factor bounded in terms of the constant bound $\gamma$ on the relaxation factors, and superlinearly for decreasing $\gamma_k$, respectively.
Since the iterates $x_{k+1} = y_{k+1}/\|y_{k+1}\|$ are normalized at each step, combining (22) and (9) yields the key bound (23).
Theorem 5. For INI, if the relaxation factors satisfy $\gamma_k \le \gamma < 1$, then the asymptotic convergence factor is bounded; i.e., the convergence of INI is at least globally linear. If $\gamma_k \to 0$, then the convergence factor tends to zero; that is, the convergence of INI is globally superlinear.
Proof. From (18) and (19), we have bounds on the components of the error in $x_k$ along $v$ and along the columns of $V$. From Theorem 4, we know that $\bar\lambda_k \to \rho(A^{-1})$ and $x_k \to v$, from which it follows that $v^T x_k \to 1$ and $\|V^T x_k\| \to 0$. On the other hand, since $x_k > 0$ and $\min(x_k)$ is bounded below by a positive constant, from (23), we get an upper bound on the ratio of successive errors in terms of the relaxation factors. Consequently, we obtain the asymptotic convergence factor, leading to (24). □
It can be seen from (24) that, if $\gamma_k$ is small, then INI must ultimately converge quickly. Although Theorem 5 establishes the superlinear convergence of INI, it does not reveal the convergence order. Our next concern is to derive the precise convergence order of INI. This is more informative and instructive because it lets us understand how fast INI converges.
Theorem 6. If the inner tolerance in INI satisfies condition (8) with relaxation factors chosen as in (25), then INI converges quadratically (asymptotically), in the form
$$\epsilon_{k+1} = O(\epsilon_k^2)$$
for $k$ large enough, where $\epsilon_k$ denotes the relative error of the approximate eigenvalue $\bar\lambda_k$.
Proof. Since the relaxation factors satisfy (25), the inner tolerance decreases at least as fast as the current error. From (22), (24) and (25), we obtain an inequality bounding $\epsilon_{k+1}$ by a multiple of $\epsilon_k^2$. Dividing both sides of this inequality by $\epsilon_k^2$ and letting $k \to \infty$ yields the stated quadratic convergence. □
5. The Modified Inexact Noda Iteration
In this section, we propose a modified Noda iteration (MNI) for a non-negative matrix, and we show that MNI and NI are equivalent. Thus, by combining INI (Algorithm 2) with MNI, we can propose a modified inexact Noda iteration for a monotone matrix.
5.1. The Modified Noda Iteration
When $\bar\lambda_k I - B$ tends to a singular matrix, the Noda iteration requires us to solve a possibly ill-conditioned linear system (2). Hence, we propose a rank-one update technique for the ill-conditioned linear system (2); i.e., we solve
$$(\bar\lambda_k I - B + x_k c_k^T)\, \hat{y}_{k+1} = x_k, \qquad (26)$$
where $c_k$ is a suitably chosen vector. In general, the linear system (26) is a well-conditioned linear system, unless $B$ has a nontrivial Jordan block corresponding to the largest eigenvalue, which would contradict the Perron–Frobenius theorem.
Hence, we have the following linear system:
$$(\bar\lambda_k I - B)\, \hat{y}_{k+1} = (1 - c_k^T \hat{y}_{k+1})\, x_k, \qquad (27)$$
with $1 - c_k^T \hat{y}_{k+1} \ne 0$. Thus, from (2) and (26), we have the new iterative vector
$$x_{k+1} = \frac{\hat{y}_{k+1}}{\|\hat{y}_{k+1}\|} = \frac{y_{k+1}}{\|y_{k+1}\|}.$$
This means the Noda iteration and the modified Noda iteration are mathematically equivalent. Based on (26) and (27), we state our algorithm (Algorithm 3) as follows.
Algorithm 3 Modified Noda Iteration (MNI)
1. Given $x_0 > 0$ with $\|x_0\| = 1$ and $\bar\lambda_0 > \rho(B)$.
2. for $k = 0, 1, 2, \ldots$
3.   if $\bar\lambda_k$ is not yet close to $\rho(B)$ (see the switching rule below):
4.     Solve $(\bar\lambda_k I - B)\, y_{k+1} = x_k$. Normalize the vector $x_{k+1} = y_{k+1}/\|y_{k+1}\|$.
5.   else:
6.     Solve the rank-one-modified system (26) for $\hat{y}_{k+1}$. Normalize the vector $x_{k+1} = \hat{y}_{k+1}/\|\hat{y}_{k+1}\|$.
7.   end
8.   Compute $\bar\lambda_{k+1} = \max(Bx_{k+1}/x_{k+1})$.
9. until convergence: $\mathrm{Resi} \le tol$.
Note that the sequence of matrices $\bar\lambda_k I - B$ tends to a singular matrix as $\bar\lambda_k$ converges to an eigenvalue of $B$, so that (2) becomes an ill-conditioned linear system. Based on practical experiments, we propose switching from (2) to (26) once the outer residual falls below a modest threshold.
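The equivalence of MNI and NI, and the conditioning gain, can be observed directly. In the sketch below, the correction vector is taken as $c_k = x_k$ (an illustrative choice; the paper's specific $c_k$ may differ), with $x_k$ equal to the Perron vector so that the shifted matrix is nearly singular:

```python
# If (M + x c^T) w = x, then M w = (1 - c^T w) x, so w is parallel to
# M^{-1} x: the normalized MNI iterate equals the NI iterate, while the
# rank-one term moves the near-zero eigenvalue of M away from the origin.
import numpy as np

B = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
lam = 4.0 + 1e-8                            # shift very close to rho(B) = 4
M = lam * np.eye(3) - B                     # nearly singular system matrix

x = np.array([1.0, 2.0, 1.0]) / np.sqrt(6)  # unit Perron vector of B
M_mod = M + np.outer(x, x)                  # rank-one updated matrix, as in (26)

y = np.linalg.solve(M, x)                   # original NI step
w = np.linalg.solve(M_mod, x)               # modified MNI step

print(np.linalg.cond(M), np.linalg.cond(M_mod))    # ~3e8 vs ~3
print(np.linalg.norm(y/np.linalg.norm(y) - w/np.linalg.norm(w)))  # ~0
```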
5.2. The Modified Inexact Noda Iteration
For a monotone matrix $A$, we replace $B$ with $A^{-1}$ in (26). Multiplying through by $A$, the linear system (26) can be rewritten as
$$(\bar\lambda_k A - I + A x_k c_k^T)\, \hat{y}_{k+1} = A x_k. \qquad (28)$$
Based on MNI, by combining INI (Algorithm 2) with Equation (28), we can propose a modified inexact Noda iteration for a monotone matrix, which is described as Algorithm 4.
Algorithm 4 Modified inexact Noda iteration (MINI)
1. Given $x_0 > 0$ with $\|x_0\| = 1$ and $\bar\lambda_0 > \rho(A^{-1})$.
2. for $k = 0, 1, 2, \ldots$
3.   if the outer residual is still above the switching threshold:
4.     Run one step of INI for the monotone matrix $A$ (Algorithm 2).
5.   else:
6.     Solve the rank-one-modified system (28) inexactly for $\hat{y}_{k+1}$. Normalize the vector $x_{k+1} = \hat{y}_{k+1}/\|\hat{y}_{k+1}\|$. Compute $\bar\lambda_{k+1}$, which satisfies condition (9).
7.   end
8. until convergence: $\mathrm{Resi} \le tol$.
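For concreteness, the following compact driver sketches the MINI switching logic end to end for a small symmetric monotone matrix. The switching threshold `delta`, the choice $c_k = x_k$ in (28), and the use of exact dense solves instead of tolerance-controlled MINRES are all simplifying assumptions; Algorithm 4 controls the inner accuracy through conditions (8) and (9) instead:

```python
import numpy as np

def mini(A, tol=1e-10, delta=1e-4, max_iter=200):
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)                      # positive unit start
    lam = np.max(np.linalg.solve(A, x) / x)          # lam_0 >= rho(A^{-1})
    scale = np.sqrt(np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        resi = np.linalg.norm(A @ x - x / lam)       # residual of the pair (1/lam, x)
        if resi <= tol * scale:
            break
        M, rhs = lam * A - np.eye(n), A @ x          # inner system (lam*A - I) y = A x
        if resi > delta:
            y = np.linalg.solve(M, rhs)              # plain (inexact-)NI step
        else:                                        # near convergence: MNI step (28)
            w = np.linalg.solve(M + np.outer(rhs, x), rhs)
            y = w / (1.0 - x @ w)                    # undo the rank-one rescaling
        lam -= np.min(x / y)                         # decreasing shift update
        x = y / np.linalg.norm(y)                    # next positive iterate
    return 1.0 / lam, x                              # approximate smallest eigenpair of A

A = np.array([[3.0, -1.0, 0.0], [-1.0, 3.0, -1.0], [0.0, -1.0, 3.0]])
print(mini(A)[0])    # ~ 3 - sqrt(2), the smallest eigenvalue of this M-matrix
```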
6. Numerical Experiments
In this section, we present numerical experiments to validate our theoretical results for INI and demonstrate the effectiveness of the proposed MINI algorithm. All numerical tests were conducted on an Intel(R) Core(TM) i7-4770 CPU @ 3.4 GHz with 16 GB of memory, using Matlab R2013a (machine precision $\approx 2.22 \times 10^{-16}$) on a Microsoft Windows 7 64-bit system.
In the tables, we report the number of outer iterations required to achieve convergence and the total number of inner iterations; the latter measures the overall efficiency of the INI and MINI methods. From these, the average number of inner iterations per outer iteration is calculated for the test algorithms. "Positivity" indicates whether the converged Perron vector maintains strict positivity. If "No," the percentage in parentheses indicates the proportion of positive components in the converged Perron vector. Additionally, the CPU time for each algorithm is reported to measure overall efficiency.
6.1. INI for Computing the Smallest Eigenvalue of a Monotone Matrix
An example is presented to illustrate the numerical behavior of NI, INI_1, and INI_2 for monotone matrices. The minimal residual method is used to solve the inner linear systems, with implementations using the standard Matlab function minres. The outer iteration begins with the normalized vector of all ones, and the stopping criterion for outer iterations is defined as follows:
$$\mathrm{Resi} = \left\|A x_k - \tfrac{1}{\bar\lambda_k} x_k\right\| \le tol \cdot \sqrt{\|A\|_1 \|A\|_\infty},$$
where $\|A\|_1$ and $\|A\|_\infty$ are the one norm and the infinity norm of a matrix, respectively.
The approximate solution $y_{k+1}$ of the inner system (10) satisfies the required inner tolerance (8) with some relaxation factor $\gamma_k < 1$.
Condition (8) ensures that the approximate eigenvector in Lemma 2 preserves the strict positivity property. However, the formula is not practical as stated because it uses the exact smallest eigenvalue, which is unknown at the time when it needs to be computed. Since $\rho(A^{-1}) = \|A^{-1}\|_2$ when $A$ is symmetric, a practical implementation suggests replacing the unknown $\lambda_{\min}$ with the computable quantity $1/\bar\lambda_k$. The quantity $1/\bar\lambda_k$ serves as a lower bound for the smallest eigenvalue of $A$, i.e., $1/\bar\lambda_k \le \lambda_{\min}$. For all examples, the stopping criterion for the inner iterations is set according to (8), with a fixed relaxation factor for INI_1 and a decreasing relaxation factor for INI_2.
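In code, the outer stopping test and its norm scaling take the following form; the concrete tolerance value and the residual definition for the pair $(1/\bar\lambda_k, x_k)$ are illustrative assumptions consistent with the criterion above:

```python
# Outer convergence test: compare the eigenpair residual against
# tol * sqrt(||A||_1 * ||A||_inf), an upper bound for tol * ||A||_2.
import numpy as np

def outer_converged(A, lam, x, tol=1e-10):
    resi = np.linalg.norm(A @ x - x / lam)     # residual of (1/lam, x)
    scale = np.sqrt(np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    return resi <= tol * scale
```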
Example 1. We consider the finite-element discretization of the boundary value problem in [2] (Example 4.2.4), using piecewise quadratic basis functions on a uniform mesh of isosceles, right-angled triangles. This yields a large sparse monotone matrix.
For Example 1, it is observed that, for this monotone matrix eigenproblem, INI_1 with two different values of $\gamma$ (0.5 and 0.8) exhibits distinct convergence behaviors, requiring 51 and 18 outer iterations to achieve the desired accuracy, respectively. As shown in Figure 1, NI and INI_2 typically converge superlinearly, while INI_1 with $\gamma = 0.5, 0.8$ typically converges linearly. This confirms our theoretical predictions and demonstrates that the results of our theorems are both realistic and significant.
From Table 1, it is observed that all the converged eigenvectors are positive and that INI_2 improves the overall efficiency of NI. Specifically, the INI_1 algorithm converges linearly and slowly, requiring between two and three times the CPU time of INI_2, even though the average number of inner iterations per outer iteration for INI_1 is only about half of that for INI_2. There are two reasons for this. First, since the approximate eigenvalues are obtained from relation (9), the relaxation factor influences the convergence rate, as seen in Figure 1. Second, according to (8), INI_2 solves the inner linear systems increasingly accurately as $k$ increases. In contrast, the inner tolerance used by INI_1 remains fixed, except for the factor $\min(x_k)$, resulting in the average number of inner iterations for INI_1 being about half of that for INI_2.
6.2. MINI for Computing the Smallest Singular Value of an M-Matrix
In the previous subsection, INI_2 demonstrated significantly better overall efficiency than NI and INI_1. Therefore, in this subsection, the MINI algorithm (INI_2 combined with MNI) is used to find the smallest singular value and the associated singular vector of an $M$-matrix, confirming the effectiveness of MINI and the theory presented in Section 3 and Section 4. For MINI, the stopping criteria for inner and outer iterations are the same as those for monotone matrices. Additionally, MINI is compared with the JDQR [23] and JDSVD [24] algorithms, as well as the Matlab function svds; none of these algorithms preserve the positivity of approximate eigenvectors. The results show that the MINI algorithm consistently and reliably computes positive eigenvectors, whereas the other algorithms generally fail to do so.
Since JDQR and JDSVD use absolute residual norms to determine convergence, the stopping criteria for their outer iterations are set to the same tolerance "TOL" used for MINI. The parameters are set to "sigma = SM" for JDQR and "opts.target = 0" for JDSVD, and the inner solver is set to "OPTIONS.LSolver = minres." All other options use default settings. No preconditioning is applied for the inner linear systems. For svds, the stopping criterion "OPTS.tol" is set to the same tolerance, with the maximum and minimum subspace dimensions set to 20 and 2 at each restart, respectively.
Suppose we want to compute the smallest singular value and the corresponding singular vectors of a real $M$-matrix $M$. This partial SVD can be computed through an equivalent eigenvalue decomposition of the augmented matrix
$$A = \begin{pmatrix} 0 & M \\ M^T & 0 \end{pmatrix}.$$
While such a matrix $A$ is not an $M$-matrix, it is indeed monotone.
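The monotonicity of the augmented matrix follows from $A^{-1} = \begin{pmatrix} 0 & M^{-T} \\ M^{-1} & 0 \end{pmatrix} \ge 0$ whenever $M^{-1} \ge 0$, and the eigenvalue of $A$ targeted by INI is exactly $\sigma_{\min}(M)$. The following sketch checks this on a small illustrative $M$-matrix:

```python
# Verify: 1 / rho(A^{-1}) equals sigma_min(M) for A = [[0, M], [M^T, 0]].
import numpy as np

M = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  3.0]])        # a symmetric M-matrix
n = M.shape[0]
A = np.block([[np.zeros((n, n)), M],
              [M.T, np.zeros((n, n))]])   # monotone augmented matrix

sigma_min = np.linalg.svd(M, compute_uv=False).min()
rho_inv = max(abs(np.linalg.eigvals(np.linalg.inv(A))))
print(sigma_min, 1.0 / rho_inv)           # both equal 3 - sqrt(2)
```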
Example 2. Consider a symmetric $M$-matrix of the form $M = sI - B$, where $B$ is the non-negative matrix rgg_n_2_19_s0 from the DIMACS10 test set [25]. This matrix represents a random geometric graph with $2^{19}$ vertices. Each vertex is a random point in the unit square, and edges connect vertices whose Euclidean distance is below a threshold chosen to ensure that the graph is almost connected. The resulting matrix is binary, with $n$ = 524,288 and 6,539,532 nonzero entries.
For this problem, the MINI algorithm performs exceptionally well, requiring only six outer iterations to achieve the desired accuracy. Additionally, it is both reliable and positivity-preserving. In contrast, while JDQR, JDSVD, and svds can compute the desired eigenvalue, the resulting eigenvectors are not positive. Specifically, Table 2 shows that, for these algorithms, approximately 50% of the components of each converged eigenvector are negative.
Regarding overall efficiency, MINI is the most efficient in terms of outer iterations, total inner iterations, and CPU time. JDQR and svds require at least five times the CPU time of MINI; they are also more expensive than JDSVD in terms of CPU time.
7. Conclusions
We have proposed an inexact Noda iteration method for computing the smallest eigenpair of a large, irreducible monotone matrix, and we have analyzed the convergence of the modified inexact Noda iteration with two relaxation factors. We have proven that the convergence of INI is globally linear or superlinear, with the asymptotic convergence factor bounded in terms of the relaxation factor. More precisely, the modified inexact Noda iteration converges at least linearly if the relaxation factors are bounded by a constant $\gamma < 1$ and superlinearly if the relaxation factors decrease to zero, respectively. The results for INI clearly show how the accuracy of the inner iterations affects the convergence of the outer iterations.
In the experiments, we also compared MINI with Jacobi–Davidson-type methods (JDQR and JDSVD) and the implicitly restarted Arnoldi method (svds). The contribution of this paper is twofold. First, MINI consistently preserves the positivity of approximate eigenvectors, unlike the other three methods, which often fail in this regard. Second, the proposed MINI algorithms have proven to be practical and effective for large monotone matrix eigenvalue problems and M-matrix singular value problems.