Article

A New Subclass of H-Matrices with Applications

by Dragana Cvetković, Đorđe Vukelić and Ksenija Doroslovački *
Faculty of Technical Sciences, University of Novi Sad, Trg Dositeja Obradovića 6, 21000 Novi Sad, Serbia
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2322; https://doi.org/10.3390/math12152322
Submission received: 13 June 2024 / Revised: 21 July 2024 / Accepted: 23 July 2024 / Published: 25 July 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract
The diagonal dominance property has been applied in many different ways and has proven to be very useful in various research areas. Its generalization, also known under the name H-matrix property, can be applied and produce significant benefits in economic theory, environmental sciences, epidemiology, neurology, engineering, etc. For example, it is known that the (local) stability of a (nonlinear) dynamic system is ensured if the (Jacobian) matrix belongs to the H-matrix class, and all its diagonal elements are negative. However, checking the H-matrix property itself is too expensive (from a computational point of view), so it is always worth investing effort in finding new subclasses of H-matrices, defined by relatively simple and practical conditions. Here, we will define a new subclass, which is closely related to the Euclidean vector norm, give some possible applications of this new class, and consider its relationship to some known subclasses.

1. Introduction

For more than a century, great scientific attention has been attracted by the class of strictly diagonally dominant (SDD) matrices, as well as its numerous generalizations. The reason probably lies in the fact that the knowledge of this class has been successfully “translated” into various other areas of applied linear algebra. Let us mention only two such situations: the famous Geršgorin theorem [1] is equivalent to the nonsingularity result for SDD matrices [2], while an infinity norm estimation for the inverse of an SDD matrix can be obtained by Varah’s theorem [3]. Both of them are simple and elegant, which makes them attractive from an application point of view. There are more interesting results concerning SDD matrices, like the following: the Schur complement of an SDD matrix is SDD itself [4]; the error bound for linear complementarity problems can be easily calculated for the class of SDD matrices [5]; regarding (nonlinear) dynamical systems, (local) stability is a direct consequence of the (Jacobian) matrix being an SDD matrix with negative diagonal entries, etc. Recently, more results related to the SDD class, also simple and elegant, have been obtained in the localization of the $\varepsilon$-pseudospectrum of an arbitrary matrix [6].
Thanks to such intensive and numerous benefits, many generalizations of the SDD class have been developed in various directions, all of which have been gathered together under one umbrella class of matrices, called H-matrices.
There are two equivalent definitions of H-matrices, also known in the literature under the name generalized diagonally dominant (GDD) matrices. But, first of all, let us recall two well-known classes, called strictly diagonally dominant (SDD) and M-matrices, both of which can serve as a starting point for the definition of GDD, that is, H-matrices.
A matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ is called strictly diagonally dominant (SDD) if
$$|a_{ii}| > r_i(A), \quad i \in N := \{1, 2, \dots, n\}, \tag{1}$$
where
$$r_i(A) := \sum_{j \in N \setminus \{i\}} |a_{ij}|. \tag{2}$$
A matrix $A = [a_{ij}] \in \mathbb{R}^{n,n}$ is called an M-matrix if it has the following sign pattern,
$$a_{ii} > 0, \ i \in N, \quad\text{while}\quad a_{ij} \le 0, \ i, j \in N, \ i \ne j,$$
it is nonsingular, and $A^{-1} \ge 0$ (elementwise).
Definition 1.
A matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ is called generalized diagonally dominant (GDD) if there exists a nonsingular diagonal matrix X, such that $AX$ is an SDD matrix.
Definition 2.
A matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ is called an H-matrix if its comparison matrix $\langle A \rangle = [m_{ij}] \in \mathbb{R}^{n,n}$, defined with
$$m_{ij} := \begin{cases} |a_{ii}|, & i = j, \\ -|a_{ij}|, & i \ne j, \end{cases}$$
is an M-matrix.
According to [7], a matrix is an H-matrix if and only if it is a GDD matrix. This means that the class of H-matrices and the class of GDD matrices are the same.
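Both characterizations translate directly into a few lines of code. The following NumPy sketch (function names are ours, not from the paper) builds the comparison matrix, tests the SDD property, and performs the direct, but computationally demanding, H-matrix test via the spectral radius; the last function assumes nonzero diagonal entries:

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off it."""
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_sdd(A):
    """Strict diagonal dominance (1): |a_ii| > r_i(A) for every row i."""
    d = np.abs(np.diag(A))
    r = np.abs(A).sum(axis=1) - d           # r_i(A), the off-diagonal row sums
    return bool(np.all(d > r))

def is_h_matrix(A):
    """Direct H-matrix test: rho(|D|^{-1}|B|) < 1; costly for large n."""
    d = np.abs(np.diag(A))
    absB = np.abs(A) - np.diag(d)           # |B|, the off-diagonal moduli
    return bool(np.max(np.abs(np.linalg.eigvals(absB / d[:, None]))) < 1.0)
```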
However, checking if a matrix is an H-matrix is computationally very demanding, so a compromise solution is to find as many new subclasses of H-matrices as possible, described by practically usable, i.e., easily verifiable, conditions on matrix elements.
Subclasses that have been discovered up to now are numerous; let us mention just a few: doubly strictly diagonally dominant (DSDD), also known under the name Ostrowski matrices [8,9,10], Dashnic–Zusmanovich matrices [11], Dashnic–Zusmanovich-type matrices [12], S-SDD or CKV matrices [13,14], CKV-type matrices [15], Nekrasov matrices [16,17,18], SDD1 matrices [19,20], etc.
All of them have been widely used in applications: the localization of the spectrum of an arbitrary matrix, localization of the pseudospectrum of an arbitrary matrix, infinity norm estimations of the inverse, Schur complement properties, error bound for linear complementarity problems, etc. For an exhaustive list of references, see [21].
However, like in (2), all of these subclasses are defined by conditions depending on sums (or part of sums) of off-diagonal entries (which can be linked to the absolute vector norm, or the maximum matrix norm). What will happen if we choose to rely on the Euclidean vector norm, as well as the Frobenius matrix norm, instead? We will show that classes obtained in such a way remain inside the H-matrix class.
Vector norms that we will use throughout the paper are Euclidean ( · ) and maximum ( · ) norms,
x : = i N | x i | 2 , x : = max i N | x i | ,
defined for all x = [ x i ] C n . As for matrix norms, we will mainly work with the Frobenius matrix norm, defined as
A F : = i , j N | a i j | 2 ,
which is consistent with the Euclidean vector norm, rather than the induced Euclidean matrix norm, defined by A = ρ ( A H A ) , where ρ ( A ) denotes the spectral radius of A. The induced maximum matrix norm is defined by
A : = max i N j N { i } | a i j | .

2. SDDF Class

Throughout the paper, we will split matrix A as
$$A = D - B, \quad\text{where } D = \mathrm{diag}(a_{11}, a_{22}, \dots, a_{nn}).$$
If $A = [a_{ij}] \in \mathbb{C}^{n,n}$ is a matrix with nonzero diagonal entries, then D is a nonsingular matrix, and the condition (1) can be equivalently rewritten as
$$\|D^{-1}B\|_\infty < 1.$$
Motivated by this fact, we will define the following class.
Definition 3.
For a given matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ with nonzero diagonal entries, we will say that it is an SDDF matrix if
$$\|D^{-1}B\|_F < 1, \tag{3}$$
where $\|\cdot\|_F$ denotes the Frobenius matrix norm.
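Condition (3) costs a single Frobenius-norm evaluation, i.e., O(n²) operations. A minimal sketch of the membership test under the splitting A = D − B (helper names are ours):

```python
import numpy as np

def sddf_ratio(A):
    """||D^{-1}B||_F for the splitting A = D - B; A is SDDF iff this is < 1."""
    d = np.diag(A).astype(complex)
    if np.any(d == 0):
        raise ValueError("SDDF requires nonzero diagonal entries")
    B = np.diag(d) - A                      # B = D - A: negated off-diagonal part
    return np.linalg.norm(B / d[:, None], "fro")

def is_sddf(A):
    """Definition 3: test condition (3)."""
    return sddf_ratio(A) < 1.0
```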
As a start, we will prove the following theorem.
Theorem 1.
Every SDDF matrix is an H-matrix.
Proof. 
Since $\rho(D^{-1}B) \le \|D^{-1}B\|_F < 1$, it follows that $I - D^{-1}B$ is a nonsingular matrix, so $A = D(I - D^{-1}B)$ is nonsingular, too. In order to prove that A is an H-matrix, we will show that its comparison matrix $\langle A \rangle = |D| - |B| \in \mathbb{R}^{n,n}$ is an M-matrix. Let us show that $\rho(|D|^{-1}|B|) < 1$. Suppose, on the contrary, that there exists an eigenvalue $\lambda$ of $|D|^{-1}|B|$, such that $|\lambda| \ge 1$; then, the matrix $|D|(\lambda I - |D|^{-1}|B|) = \lambda|D| - |B|$ will satisfy condition (3),
$$\left\|\frac{1}{\lambda}|D|^{-1}|B|\right\|_F = \frac{1}{|\lambda|}\left\||D|^{-1}|B|\right\|_F \le \|D^{-1}B\|_F < 1,$$
and consequently it will be nonsingular. But this contradicts the fact that $\lambda$ is an eigenvalue of $|D|^{-1}|B|$. Hence, $\rho(|D|^{-1}|B|) < 1$, and the rest of the proof is similar to that in [15] (proof of Theorem 6):
$$\langle A \rangle^{-1} = \left(|D|\left(I - |D|^{-1}|B|\right)\right)^{-1} = \left(I - |D|^{-1}|B|\right)^{-1}|D|^{-1} = \sum_{k \ge 0}\left(|D|^{-1}|B|\right)^k |D|^{-1} \ge 0.$$
Hence, we conclude that $\langle A \rangle$ is an M-matrix. □
Remark 1.
At the very beginning, we emphasize the relationship between SDD and SDDF classes. The following three matrices
$$A_1 = \begin{pmatrix}
-78 & 7 & 10 & 5 & 10 & 8 & 3 & 5 & 1 & 7 \\
4 & -95 & 9 & 3 & 3 & 5 & 3 & 5 & 5 & 2 \\
4 & 4 & -58 & 6 & 8 & 1 & 3 & 4 & 4 & 6 \\
9 & 7 & 8 & -87 & 2 & 5 & 6 & 8 & 8 & 3 \\
0 & 2 & 4 & 2 & -90 & 1 & 7 & 2 & 6 & 7 \\
10 & 9 & 3 & 6 & 3 & -80 & 8 & 2 & 6 & 9 \\
9 & 6 & 5 & 4 & 8 & 7 & -86 & 7 & 1 & 10 \\
7 & 9 & 1 & 7 & 7 & 3 & 4 & -93 & 11 & 3 \\
10 & 8 & 4 & 4 & 3 & 5 & 6 & 4 & -45 & 5 \\
4 & 2 & 2 & 10 & 9 & 9 & 2 & 10 & 3 & -47
\end{pmatrix}, \quad
A_2 = \begin{pmatrix} 1 & 0.9 \\ 0.9 & 1 \end{pmatrix}, \quad
A_3 = \begin{pmatrix} 1 & 0.2 \\ 0.2 & 1 \end{pmatrix},$$
show that the SDD and SDDF classes stand in a general position, meaning that neither class is a subset of the other, and that they have a nonempty intersection. This is illustrated by Figure 1.
Indeed, $A_1$ is not an SDD matrix, because of the last two rows, while $\|D_1^{-1}B_1\|_F = 0.852 < 1$, so it is an SDDF matrix. Here and hereafter, we will assume that every matrix $A_i$ is split as $A_i = D_i - B_i$, where $D_i$ is the diagonal part of $A_i$. We have taken this particular example from a mathematical model representing energy flow in food webs, given by the generalized Lotka–Volterra equations, and we will use it from here on as a good illustration of the possible benefits of new classes in real applications. The precise definition of the matrix based on empirical data is explained in [22], where, following the research by Moore and de Ruiter in [23] and references therein, the authors considered a food web of n functional groups of living species with a pool of non-living organic matter, whose energy (as a common currency of the biomass, usually measured as the level of carbon or nitrogen) flow is approximately driven by the generalized Lotka–Volterra equations.
On the other hand, $A_2$ and $A_3$ are chosen to be as simple as possible, just to verify the above figure. Both matrices are SDD, while $\|D_2^{-1}B_2\|_F = 1.273 > 1$ and $\|D_3^{-1}B_3\|_F = 0.283 < 1$, so $A_2$ is not an SDDF matrix, while $A_3$ is.
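These claims are easy to re-check numerically with the helpers sketched above; a short script follows (the exact sign pattern of $A_1$ plays no role here, since both tests depend only on the moduli of the entries):

```python
A1 = np.array([
    [-78,   7,  10,   5,  10,   8,   3,   5,   1,   7],
    [  4, -95,   9,   3,   3,   5,   3,   5,   5,   2],
    [  4,   4, -58,   6,   8,   1,   3,   4,   4,   6],
    [  9,   7,   8, -87,   2,   5,   6,   8,   8,   3],
    [  0,   2,   4,   2, -90,   1,   7,   2,   6,   7],
    [ 10,   9,   3,   6,   3, -80,   8,   2,   6,   9],
    [  9,   6,   5,   4,   8,   7, -86,   7,   1,  10],
    [  7,   9,   1,   7,   7,   3,   4, -93,  11,   3],
    [ 10,   8,   4,   4,   3,   5,   6,   4, -45,   5],
    [  4,   2,   2,  10,   9,   9,   2,  10,   3, -47]], dtype=float)
A2 = np.array([[1.0, 0.9], [0.9, 1.0]])
A3 = np.array([[1.0, 0.2], [0.2, 1.0]])

print(is_sdd(A1), round(sddf_ratio(A1), 3))   # False 0.852 -> SDDF, but not SDD
print(is_sdd(A2), round(sddf_ratio(A2), 3))   # True  1.273 -> SDD, but not SDDF
print(is_sdd(A3), round(sddf_ratio(A3), 3))   # True  0.283 -> both SDD and SDDF
```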
The immediate consequences of Theorem 1 are the following two theorems.
Theorem 2.
Let $A = D - B$ (D is the diagonal part of A). If $\lambda$ is an eigenvalue of A, different from all diagonal entries $a_{ii}$, $i \in N$, then
$$\lambda \in \Gamma_e(A) := \left\{ z \in \mathbb{C} : \left\|(zI - D)^{-1}B\right\|_F \ge 1 \right\}. \tag{4}$$
Proof. 
Suppose that there exists an eigenvalue $\lambda$, such that $\lambda \notin \Gamma_e(A)$. Then, $\lambda I - A$ is an SDDF matrix, and hence is nonsingular, which is an obvious contradiction. □
Theorem 3.
If A is an SDDF matrix, then
$$\|A^{-1}\|_F \le \frac{\|D^{-1}\|_F}{1 - \|D^{-1}B\|_F}. \tag{5}$$
Proof. 
From $\|D^{-1}B\|_F < 1$, it follows that $I - D^{-1}B$ is a nonsingular matrix, and
$$\left(I - D^{-1}B\right)^{-1} = \sum_{k \ge 0}\left(D^{-1}B\right)^k.$$
Hence,
$$\left\|\left(I - D^{-1}B\right)^{-1}\right\|_F \le \sum_{k \ge 0}\left\|D^{-1}B\right\|_F^k = \frac{1}{1 - \|D^{-1}B\|_F}.$$
Finally, from $A = D\left(I - D^{-1}B\right)$ we have
$$\|A^{-1}\|_F = \left\|\left(I - D^{-1}B\right)^{-1}D^{-1}\right\|_F \le \frac{\|D^{-1}\|_F}{1 - \|D^{-1}B\|_F}. \qquad\square$$
Obviously, the upper bound given in (5) can be treated as an upper bound for the Euclidean matrix norm (which we denote simply by $\|\cdot\|$), since
$$\|A^{-1}\| \le \|A^{-1}\|_F \le \frac{\|D^{-1}\|_F}{1 - \|D^{-1}B\|_F}.$$
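A sketch of the bound (5), compared against the exact Frobenius norm of the inverse for the matrix $A_1$ above (the function name is ours):

```python
def sddf_inverse_bound(A):
    """Upper bound (5): ||A^{-1}||_F <= ||D^{-1}||_F / (1 - ||D^{-1}B||_F)."""
    rho = sddf_ratio(A)
    if rho >= 1.0:
        raise ValueError("bound (5) requires an SDDF matrix")
    return np.linalg.norm(1.0 / np.diag(A)) / (1.0 - rho)

print(sddf_inverse_bound(A1))                       # ~0.3156, cf. Section 3.3
print(np.linalg.norm(np.linalg.inv(A1), "fro"))     # exact value, ~0.054
```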

3. S-SDDF Class

In this section, we will recall only one special subclass of H-matrices (known under the name S-SDD class), and define the analogue of this class referring to the Frobenius norm.
For a given nonempty subset $S \subset N$, with $\bar S := N \setminus S$, a matrix A satisfying
$$|a_{ii}| > r_i^S(A), \quad i \in S, \tag{6}$$
$$\left(|a_{ii}| - r_i^S(A)\right) \cdot \left(|a_{jj}| - r_j^{\bar S}(A)\right) > r_i^{\bar S}(A)\, r_j^S(A), \quad i \in S, \ j \in \bar S, \tag{7}$$
where $r_i^S(A) := \sum_{j \in S \setminus \{i\}} |a_{ij}|$, is called an S-SDD matrix.
For the sake of transparency, here we will suppose that
$$S := \{1, 2, \dots, k\}, \quad\text{for some } k < n,$$
and consequently $\bar S = \{k+1, k+2, \dots, n\}$, so that we can represent matrix A as
$$A = D \cdot \begin{pmatrix} I - C_{11} & -C_{12} \\ -C_{21} & I - C_{22} \end{pmatrix},$$
where D is the diagonal part of A (as before), and the dimension of $C_{11}$ is k. Obviously, the relation with the previous splitting $A = D - B$, where $D = \mathrm{diag}(a_{11}, a_{22}, \dots, a_{nn})$, is the following:
$$B = D \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}.$$
Note that all diagonal entries of $C_{11}$ and $C_{22}$ are equal to zero.
Definition 4.
A matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ with nonzero diagonal entries, satisfying the following two conditions,
$$\|C_{11}\|_F < 1, \tag{8}$$
$$\left(1 - \|C_{11}\|_F\right)\left(1 - \|C_{22}\|_F\right) > \|C_{12}\|_F\,\|C_{21}\|_F, \tag{9}$$
is called an S-SDDF matrix.
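Conditions (8) and (9) again cost only four Frobenius norms. A sketch for the prefix choice S = {1, …, k} used above (0-based indexing in the code; names are ours):

```python
def s_sddf_test(A, k):
    """Conditions (8)-(9) for S = {1,...,k}: returns (alpha, beta, gamma, delta, ok)."""
    d = np.diag(A).astype(complex)
    C = (np.diag(d) - A) / d[:, None]           # D^{-1}B, with zero diagonal
    alpha = np.linalg.norm(C[:k, :k], "fro")    # ||C11||_F
    beta  = np.linalg.norm(C[:k, k:], "fro")    # ||C12||_F
    gamma = np.linalg.norm(C[k:, :k], "fro")    # ||C21||_F
    delta = np.linalg.norm(C[k:, k:], "fro")    # ||C22||_F
    ok = alpha < 1.0 and (1.0 - alpha) * (1.0 - delta) > beta * gamma
    return alpha, beta, gamma, delta, ok

print(s_sddf_test(A1, 5)[-1])   # True: A1 is S-SDDF for S = {1,...,5}
```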
Theorem 4.
Every S-SDDF matrix is nonsingular; moreover, it is an H-matrix.
Proof. 
Suppose, on the contrary, that there exists a vector $x \ne 0$, such that $Ax = 0$. Then, for all $i \in N$ it holds that
$$|a_{ii}|\,|x_i| \le \Bigg|\sum_{\substack{j=1 \\ j \ne i}}^{k} a_{ij}x_j\Bigg| + \Bigg|\sum_{\substack{j=k+1 \\ j \ne i}}^{n} a_{ij}x_j\Bigg| \le R_i^S(A)\,\|x_S\| + R_i^{\bar S}(A)\,\|x_{\bar S}\|,$$
where $\|\cdot\|$ represents the Euclidean norm, $x_S := [x_1, x_2, \dots, x_k]^T$, $x_{\bar S} := [x_{k+1}, \dots, x_n]^T$, and
$$R_i^S(A) = \Bigg(\sum_{\substack{j=1 \\ j \ne i}}^{k} |a_{ij}|^2\Bigg)^{\frac12}, \qquad R_i^{\bar S}(A) = \Bigg(\sum_{\substack{j=k+1 \\ j \ne i}}^{n} |a_{ij}|^2\Bigg)^{\frac12}.$$
Hence,
$$\begin{pmatrix} |x_1| \\ \vdots \\ |x_k| \end{pmatrix} \le \begin{pmatrix} R_1^S(A)/|a_{11}| \\ \vdots \\ R_k^S(A)/|a_{kk}| \end{pmatrix}\|x_S\| + \begin{pmatrix} R_1^{\bar S}(A)/|a_{11}| \\ \vdots \\ R_k^{\bar S}(A)/|a_{kk}| \end{pmatrix}\|x_{\bar S}\|, \qquad \begin{pmatrix} |x_{k+1}| \\ \vdots \\ |x_n| \end{pmatrix} \le \begin{pmatrix} R_{k+1}^S(A)/|a_{k+1,k+1}| \\ \vdots \\ R_n^S(A)/|a_{nn}| \end{pmatrix}\|x_S\| + \begin{pmatrix} R_{k+1}^{\bar S}(A)/|a_{k+1,k+1}| \\ \vdots \\ R_n^{\bar S}(A)/|a_{nn}| \end{pmatrix}\|x_{\bar S}\|,$$
and, consequently,
$$\|x_S\| \le \|C_{11}\|_F\,\|x_S\| + \|C_{12}\|_F\,\|x_{\bar S}\|, \tag{10}$$
$$\|x_{\bar S}\| \le \|C_{21}\|_F\,\|x_S\| + \|C_{22}\|_F\,\|x_{\bar S}\|. \tag{11}$$
From (10) and (11), it follows that
$$\left(1 - \|C_{11}\|_F\right)\left(1 - \|C_{22}\|_F\right)\|x_S\|\,\|x_{\bar S}\| \le \|C_{12}\|_F\,\|C_{21}\|_F\,\|x_S\|\,\|x_{\bar S}\|.$$
If $x_S \ne 0$ and $x_{\bar S} \ne 0$, this is a contradiction with (9). If one of these vectors is a zero vector, say $x_{\bar S} = 0$, then $x_S \ne 0$, and from (10) we again obtain a contradiction, this time with (8). Hence, A cannot be singular.
In order to prove that A is an H-matrix, we will use, again, the comparison matrix $\langle A \rangle$, and show that it is a nonsingular M-matrix. Again, with the splitting $\langle A \rangle = |D| - |B|$, where $|D| = \mathrm{diag}(|a_{11}|, \dots, |a_{nn}|)$, we will show that $\rho(|D|^{-1}|B|) < 1$. Suppose, on the contrary, that there exists an eigenvalue $\lambda$ of $|D|^{-1}|B|$, such that $|\lambda| \ge 1$. Then, the matrix $|D|(\lambda I - |D|^{-1}|B|) = \lambda|D| - |B|$ will satisfy conditions (8) and (9), since
$$\lambda|D| - |B| = \lambda|D| \cdot \begin{pmatrix} I - \frac{1}{\lambda}|C_{11}| & -\frac{1}{\lambda}|C_{12}| \\ -\frac{1}{\lambda}|C_{21}| & I - \frac{1}{\lambda}|C_{22}| \end{pmatrix}, \qquad \left\|\frac{1}{\lambda}|C_{11}|\right\|_F = \frac{1}{|\lambda|}\|C_{11}\|_F \le \|C_{11}\|_F < 1,$$
and
$$\left(1 - \frac{1}{|\lambda|}\|C_{11}\|_F\right)\left(1 - \frac{1}{|\lambda|}\|C_{22}\|_F\right) \ge \left(1 - \|C_{11}\|_F\right)\left(1 - \|C_{22}\|_F\right) > \|C_{12}\|_F\,\|C_{21}\|_F \ge \frac{1}{|\lambda|}\|C_{12}\|_F \cdot \frac{1}{|\lambda|}\|C_{21}\|_F.$$
Consequently, the matrix $|D|(\lambda I - |D|^{-1}|B|) = \lambda|D| - |B|$ will be nonsingular, but this contradicts the fact that $\lambda$ is an eigenvalue of $|D|^{-1}|B|$. Hence, $\rho(|D|^{-1}|B|) < 1$, and
$$\langle A \rangle^{-1} = \left(|D|\left(I - |D|^{-1}|B|\right)\right)^{-1} = \left(I - |D|^{-1}|B|\right)^{-1}|D|^{-1} = \sum_{m \ge 0}\left(|D|^{-1}|B|\right)^m |D|^{-1} \ge 0,$$
meaning that $\langle A \rangle$ is an M-matrix. □
Remark 2.
Let us show that for an arbitrary $S \subset N$, the SDDF class is a subset of the S-SDDF class. Suppose that A is an SDDF matrix, i.e., $\|D^{-1}B\|_F < 1$. Take an arbitrary subset S of indices (without loss of generality, suppose that $S = \{1, 2, \dots, k\}$, for some $k < n$). In order to prove that A is an S-SDDF matrix, we have to prove that
$$\|C_{11}\|_F < 1 \quad\text{and}\quad \left(1 - \|C_{11}\|_F\right)\left(1 - \|C_{22}\|_F\right) > \|C_{12}\|_F\,\|C_{21}\|_F, \tag{12}$$
where
$$A = D - B = D - D\begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}, \quad\text{i.e.,}\quad D^{-1}B = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}.$$
If we denote
$$\|C_{11}\|_F =: \alpha, \quad \|C_{22}\|_F =: \beta, \quad \|C_{12}\|_F =: a, \quad \|C_{21}\|_F =: b,$$
then
$$\|D^{-1}B\|_F^2 = \alpha^2 + \beta^2 + a^2 + b^2 < 1, \tag{13}$$
so the first condition in (12) is obvious. The second condition, $(1 - \alpha)(1 - \beta) > ab$, also holds true, because of the following simple algebraic inequalities:
$$(\alpha + \beta - 1)^2 \ge 0 \;\Longrightarrow\; -(\alpha^2 + \beta^2) \le 2\alpha\beta - 2\alpha - 2\beta + 1, \qquad (a - b)^2 \ge 0 \;\Longrightarrow\; 2ab \le a^2 + b^2,$$
which, together with (13), give
$$2ab \le a^2 + b^2 < 1 - (\alpha^2 + \beta^2) \le 1 + 2\alpha\beta - 2\alpha - 2\beta + 1 = 2(1 - \alpha)(1 - \beta), \quad\text{i.e.,}\quad ab < (1 - \alpha)(1 - \beta).$$
Remark 3.
If one allows the subset S to be equal to the whole index set N, i.e., if $k = n$, then condition (9) vanishes, while (8) becomes (3). In this sense, SDDF matrices could be considered S-SDDF matrices, for the special choice $S = N$.
Remark 4.
Similarly to the SDDF case, we can immediately conclude that the S-SDDF and S-SDD classes (for the same subset S) stand in a general position.

3.1. Application 1: Eigenvalue Localization

Whenever a new subclass of H-matrices is defined, a corresponding new eigenvalue localization result can be formulated, just like the famous Geršgorin theorem [1], which is the eigenvalue localization result corresponding to the SDD class. For more details about these relations, see [2].
Hence, an immediate consequence of Theorem 4 is the following one. Like before, we suppose that $S = \{1, 2, \dots, k\}$, for some $k < n$, and represent matrix $zI - A$ as
$$zI - A = (zI - D) \cdot \begin{pmatrix} I - \Theta_{11}(z) & -\Theta_{12}(z) \\ -\Theta_{21}(z) & I - \Theta_{22}(z) \end{pmatrix},$$
where D is the diagonal part of A, and the dimension of $\Theta_{11}(z)$ is k.
Theorem 5.
Let $\Omega(A) := \Omega_1(A) \cup \Omega_2(A)$, where
$$\Omega_1(A) := \left\{ z \in \mathbb{C} : \|\Theta_{11}(z)\|_F \ge 1 \right\}, \qquad \Omega_2(A) := \left\{ z \in \mathbb{C} : \left(1 - \|\Theta_{11}(z)\|_F\right)\left(1 - \|\Theta_{22}(z)\|_F\right) \le \|\Theta_{12}(z)\|_F\,\|\Theta_{21}(z)\|_F \right\},$$
and
$$zI - A = (zI - D) \cdot \begin{pmatrix} I - \Theta_{11}(z) & -\Theta_{12}(z) \\ -\Theta_{21}(z) & I - \Theta_{22}(z) \end{pmatrix}.$$
Then, all eigenvalues of A different from the diagonal entries $a_{ii}$, $i \in N$, belong to $\Omega(A)$.
Proof. 
Suppose that there exists an eigenvalue $\lambda$, such that $\lambda \notin \Omega(A)$. Then,
$$\|\Theta_{11}(\lambda)\|_F < 1 \tag{14}$$
and
$$\left(1 - \|\Theta_{11}(\lambda)\|_F\right)\left(1 - \|\Theta_{22}(\lambda)\|_F\right) > \|\Theta_{12}(\lambda)\|_F\,\|\Theta_{21}(\lambda)\|_F. \tag{15}$$
Here, (14) and (15) mean that $\lambda I - A$ is an S-SDDF matrix, and hence is nonsingular, which is an obvious contradiction. □
Remark 5.
Let us denote, for all $i \in N$,
$$\Theta_i^S(z) := \frac{R_i^S(A)}{|z - a_{ii}|}, \qquad \Theta_i^{\bar S}(z) := \frac{R_i^{\bar S}(A)}{|z - a_{ii}|}.$$
Then,
$$\|\Theta_{11}(z)\|_F = \left\|\left[\Theta_1^S(z), \dots, \Theta_k^S(z)\right]^T\right\|, \qquad \|\Theta_{12}(z)\|_F = \left\|\left[\Theta_1^{\bar S}(z), \dots, \Theta_k^{\bar S}(z)\right]^T\right\|,$$
$$\|\Theta_{21}(z)\|_F = \left\|\left[\Theta_{k+1}^S(z), \dots, \Theta_n^S(z)\right]^T\right\|, \qquad \|\Theta_{22}(z)\|_F = \left\|\left[\Theta_{k+1}^{\bar S}(z), \dots, \Theta_n^{\bar S}(z)\right]^T\right\|.$$
Remark 6.
For matrices with equal diagonal entries, $a_{ii} = \alpha$ for all $i \in N$, the localization set $\Omega(A)$ becomes a union of one disk and one Cartesian oval. Indeed, in this particular case, A is represented as
$$A = \alpha \cdot \begin{pmatrix} I - C_{11} & -C_{12} \\ -C_{21} & I - C_{22} \end{pmatrix},$$
and
$$zI - A = (z - \alpha) \cdot \begin{pmatrix} I - \frac{1}{z-\alpha}C_{11} & -\frac{1}{z-\alpha}C_{12} \\ -\frac{1}{z-\alpha}C_{21} & I - \frac{1}{z-\alpha}C_{22} \end{pmatrix},$$
so that
$$\Omega_1(A) = \left\{ z \in \mathbb{C} : |z - \alpha| \le \|C_{11}\|_F \right\}, \qquad \Omega_2(A) = \left\{ z \in \mathbb{C} : \left(|z - \alpha| - \|C_{11}\|_F\right)\left(|z - \alpha| - \|C_{22}\|_F\right) \le \|C_{12}\|_F\,\|C_{21}\|_F \right\},$$
meaning that the eigenvalue localization set is the union of one disk and one Cartesian oval. It means that, for example, checking if $\Omega(A) \subset \mathbb{C}^-$ (the open left half-plane), i.e., getting an answer about (local) stability, requires plotting only one circle and one Cartesian oval.
Obviously, for matrices with different diagonal entries, plotting $\Omega_1(A)$ and $\Omega_2(A)$ can be computationally demanding. Nevertheless, there are still important benefits of the newly introduced class, as we shall see in the following subsections.
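Even without plotting, membership of any given point in $\Omega(A)$ can be tested numerically, which is all one needs to rasterize the set or to check a candidate region. A sketch (names are ours; the final assert simply re-checks Theorem 5 on the food-web matrix $A_1$):

```python
def in_omega(A, z, k):
    """Test whether the complex point z belongs to Omega(A) of Theorem 5."""
    d = np.diag(A).astype(complex)
    if np.any(z == d):
        return True                        # diagonal entries are treated separately
    Theta = (np.diag(d) - A) / (z - d)[:, None]   # (zI - D)^{-1} B
    t11 = np.linalg.norm(Theta[:k, :k], "fro")
    t12 = np.linalg.norm(Theta[:k, k:], "fro")
    t21 = np.linalg.norm(Theta[k:, :k], "fro")
    t22 = np.linalg.norm(Theta[k:, k:], "fro")
    return t11 >= 1.0 or (1.0 - t11) * (1.0 - t22) <= t12 * t21

# every eigenvalue of A1 (none of which equals a diagonal entry) must be flagged:
assert all(in_omega(A1, lam, 5) for lam in np.linalg.eigvals(A1))
```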

3.2. Application 2: Stability

Stability is a very important property in the dynamical analysis of linear, as well as nonlinear, systems. The classical approach is based on the Routh–Hurwitz criterion, and implies knowledge of the eigenvalue position—whether all eigenvalues are situated in the open left half-plane, or not. However, it requires solving the characteristic equation, which becomes too expensive for higher-order systems. On the other hand, the Geršgorin theorem localizes the eigenvalues in a union of disks, called the Geršgorin set, so it is easy to find the rightmost point of the Geršgorin set, and from this information conclude whether the whole Geršgorin set (and hence all eigenvalues) lies in the open left half-plane.
However, the Geršgorin set can be too wide, giving no answer as to whether the eigenvalues are situated in the open left half-plane or not. Fortunately, similar reasoning is also valid in the case of the so-called Minimal Geršgorin set; for more information, see [2]. Namely, it also contains all eigenvalues; therefore, if it is situated in the open left half-plane, all eigenvalues will also be in the half-plane, and the observed dynamical system will be stable. From the very tight relation between H-matrices and the Minimal Geršgorin set, we already know that the following statement holds true, but, nevertheless, we will state it here in the form of a lemma and present a brief proof, for readers who are not familiar with the Minimal Geršgorin set.
Lemma 1.
If A is an H-matrix with negative diagonal entries, then $\sigma(A) \subset \mathbb{C}^-$, where $\sigma(A)$ denotes the spectrum of matrix A and $\mathbb{C}^-$ denotes the open left half-plane.
Proof. 
If A is an H-matrix, then there exists a positive diagonal matrix $X = \mathrm{diag}(x_1, x_2, \dots, x_n)$, such that $AX$ is strictly diagonally dominant, i.e.,
$$|a_{ii}|\,x_i > \sum_{j \in N \setminus \{i\}} |a_{ij}|\,x_j, \quad i \in N.$$
Since all diagonal entries of A are negative, this can be rewritten as
$$-a_{ii}\,x_i > \sum_{j \in N \setminus \{i\}} |a_{ij}|\,x_j, \quad i \in N.$$
If $\lambda$ is an eigenvalue of A, it is also an eigenvalue of $X^{-1}AX$ and belongs to at least one of the Geršgorin disks for $X^{-1}AX$, i.e., there exists $i \in N$ such that
$$|\lambda - a_{ii}| \le \frac{1}{x_i}\sum_{j \in N \setminus \{i\}} |a_{ij}|\,x_j.$$
But then,
$$|\lambda - a_{ii}|\,x_i \le \sum_{j \in N \setminus \{i\}} |a_{ij}|\,x_j < -a_{ii}\,x_i, \quad\text{i.e.,}\quad |\lambda - a_{ii}| < -a_{ii},$$
which means that $\lambda$ belongs to the disk centered at $a_{ii} < 0$, with a radius less than $|a_{ii}|$. Hence, $\lambda \in \mathbb{C}^-$. □
Remark 7.
For those familiar with the Minimal Geršgorin set, the proof follows directly from the fact that the Minimal Geršgorin set is contained in $\mathbb{C}^-$ if A is an H-matrix with negative diagonal entries.
As a consequence, we directly obtain the following corollary.
Corollary 1.
If A is an S-SDDF matrix with negative diagonal entries, then $\sigma(A) \subset \mathbb{C}^-$.
As an illustration, consider matrix $A_1$, which is generated from the generalized Lotka–Volterra equations modeling energy flow in complex food webs. As we have already seen, this matrix is an SDDF matrix, which is sufficient to conclude that the whole spectrum of $A_1$ belongs to the open left-half complex plane, meaning that the corresponding dynamical system is (locally) stable.
Sometimes, small perturbations in measuring biomasses can push the matrix $A_1$ out of the SDDF class. In such a situation, there is a good chance that the perturbed matrix $\tilde A_1$ will remain in the S-SDDF class for at least one S, which is still enough to conclude that the spectrum of $\tilde A_1$ belongs to the open left-half complex plane, meaning that the corresponding dynamical system is (locally) stable.
Let us explain more precisely the reason for emphasizing this application. Our illustrative matrix comes from a model of energy flow in soil food webs (for a detailed treatment of this problem, see [23]). To model the fluxes of carbon and nitrogen within the soil food web, the Lotka–Volterra predator–prey system of nonlinear differential equations is used as a basis, and this concept of energetic food webs was developed mainly to discuss the relationship between the complexity and stability of soil ecosystems; see [24,25]. In it, the community matrix (the Jacobian of the nonlinear differential system at the equilibrium point) and its properties lie at the center of attention. The question above all others is asymptotic stability (from which the analysis proceeds to robust stability, distance to instability, etc.). Since all diagonal elements of such community matrices are negative, stability is ensured by knowing that the community matrix belongs to the class of H-matrices. As we have already pointed out, due to computational costs, we never check this property by definition, but check whether the community matrix belongs to some subclass of H-matrices. Until now, all known subclasses were based on the maximum matrix norm, which can realistically be interpreted as conditions on the trophic influences within one functional group. Due to uncertainties (measurement errors, stochastic fluctuations, etc.) in empirical data, these conditions are often violated. If we instead use the Frobenius norm, an approach that treats trophic influences between all functional groups together, we obtain a condition more robust to the aforementioned uncertainties. In addition, in the community matrices we are talking about, the off-diagonal elements of one particular row are very often disproportionately larger than the others, which favors the use of the Frobenius matrix norm, rather than the maximum one.
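In code, this stability certificate is a sign check followed by the norm tests sketched in the previous sections (only the prefix subsets S = {1, …, k} from Definition 4 are swept; other subsets would correspond to symmetric row/column permutations of A):

```python
def certified_hurwitz_stable(A):
    """Certify local stability: negative diagonal + SDDF or S-SDDF membership
    implies that the whole spectrum lies in the open left half-plane.
    A False result is inconclusive, not a proof of instability."""
    if np.any(np.diag(A) >= 0):
        return False                        # Corollary 1 needs negative diagonal
    if is_sddf(A):
        return True
    return any(s_sddf_test(A, k)[-1] for k in range(1, A.shape[0]))

print(certified_hurwitz_stable(A1))   # True: the food-web model is (locally) stable
```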

3.3. Application 3: Norm Bounds for the Inverse

Whenever a new subclass of H-matrices is discovered, it is fruitful to try to find a new upper bound for the norm of its inverse, as has been done for SDD matrices by Varah’s well-known result [3]. This is important for at least two reasons: first, we are able to estimate the norm of the inverse for the new matrices, which we could not do before, and second, we can improve some already-known estimates for the classes belonging to this new matrix class.
When it comes to the Euclidean norm, one possible way to estimate the norm of an inverse matrix is given in [26]. This approach is based on using the block structure of the observed matrix and it is efficient when the diagonal blocks are easily invertible. If this is not the case, we can estimate the norm of the inverse matrix using our newly introduced class.
Of course, it is possible to use all estimations obtained for the maximum norm, multiplied by $\sqrt{n}$. Needless to say, in the case of large dimensions n, multiplying by $\sqrt{n}$ makes the obtained estimate meaningless.
Theorem 6.
Let D be the diagonal part of A, and $A = D \cdot T$, where
$$T = \begin{pmatrix} I - C_{11} & -C_{12} \\ -C_{21} & I - C_{22} \end{pmatrix}, \quad\text{the dimension of } C_{11} \text{ is } k \times k.$$
If A is an S-SDDF matrix, for $S = \{1, 2, \dots, k\}$, then
$$\|A^{-1}\|_F \le \left(\sum_{i \in N}\frac{1}{|a_{ii}|^2}\right)^{\frac12} \cdot \frac{\sqrt{(1-\alpha)^2 + (1-\delta)^2 + \beta^2 + \gamma^2}}{(1-\alpha)(1-\delta) - \beta\gamma}, \tag{16}$$
where $\alpha := \|C_{11}\|_F$, $\beta := \|C_{12}\|_F$, $\gamma := \|C_{21}\|_F$, $\delta := \|C_{22}\|_F$.
Proof. 
Let A be an S-SDDF matrix. Then, it is a nonsingular matrix, so D and T are nonsingular matrices, too. According to conditions (8) and (9), we have
$$\alpha < 1 \quad\text{and}\quad (1-\alpha)(1-\delta) > \beta\gamma,$$
and, as a consequence, $\delta < 1$, as well. Condition $\alpha < 1$ means that $I - C_{11}$ is an SDDF matrix, and from Theorem 3 we have
$$\left\|(I - C_{11})^{-1}\right\|_F \le \frac{1}{1-\alpha}.$$
$I - C_{22}$ is an SDDF matrix, too, and
$$\left\|(I - C_{22})^{-1}\right\|_F \le \frac{1}{1-\delta}.$$
Recall, now, the Schur complement, see [27], for matrix T, and denote it by $\mathbf{S}$:
$$\mathbf{S} := I - C_{22} - C_{21}\left(I - C_{11}\right)^{-1}C_{12}.$$
Obviously, $\mathbf{S} = \left(I - C_{22}\right)\left(I - \left(I - C_{22}\right)^{-1}C_{21}\left(I - C_{11}\right)^{-1}C_{12}\right)$, with
$$\left\|\left(I - C_{22}\right)^{-1}C_{21}\left(I - C_{11}\right)^{-1}C_{12}\right\|_F \le \frac{\beta\gamma}{(1-\alpha)(1-\delta)} < 1,$$
so, similarly to before, we obtain
$$\left\|\mathbf{S}^{-1}\right\|_F \le \frac{1}{1-\delta} \cdot \frac{1}{1 - \frac{\beta\gamma}{(1-\alpha)(1-\delta)}} = \frac{1-\alpha}{(1-\alpha)(1-\delta) - \beta\gamma}.$$
According to the Banachiewicz inversion formula, see [27], the inverse of T can be represented in the block form as
$$T^{-1} = \begin{pmatrix} \mu_{11} & \mu_{12} \\ \mu_{21} & \mu_{22} \end{pmatrix},$$
where
$$\mu_{11} = \left(I - C_{11}\right)^{-1} + \left(I - C_{11}\right)^{-1}C_{12}\,\mathbf{S}^{-1}C_{21}\left(I - C_{11}\right)^{-1}, \qquad \mu_{12} = \left(I - C_{11}\right)^{-1}C_{12}\,\mathbf{S}^{-1},$$
$$\mu_{21} = \mathbf{S}^{-1}C_{21}\left(I - C_{11}\right)^{-1}, \qquad \mu_{22} = \mathbf{S}^{-1}.$$
Now,
$$\|T^{-1}\|_F^2 = \|\mu_{11}\|_F^2 + \|\mu_{12}\|_F^2 + \|\mu_{21}\|_F^2 + \|\mu_{22}\|_F^2 \le \frac{1}{(1-\alpha)^2}\left(1 + \frac{\beta\gamma}{(1-\alpha)(1-\delta) - \beta\gamma}\right)^2 + \frac{\beta^2 + \gamma^2 + (1-\alpha)^2}{\left((1-\alpha)(1-\delta) - \beta\gamma\right)^2} = \frac{(1-\alpha)^2 + (1-\delta)^2 + \beta^2 + \gamma^2}{\left((1-\alpha)(1-\delta) - \beta\gamma\right)^2}.$$
Finally, from $A = D\,T$ we have
$$\|A^{-1}\|_F \le \|T^{-1}\|_F\,\|D^{-1}\|_F \le \left(\sum_{i \in N}\frac{1}{|a_{ii}|^2}\right)^{\frac12} \cdot \frac{\sqrt{(1-\alpha)^2 + (1-\delta)^2 + \beta^2 + \gamma^2}}{(1-\alpha)(1-\delta) - \beta\gamma}. \qquad\square$$
As an illustration, consider, again, our matrix $A_1$ from ecological modeling. Up to now, we were able to estimate the maximum norm of its inverse using the fact that it is an S-SDD matrix for $S = \{1, 2, \dots, 8\}$, and applying the estimate from [28]:
$$\|A^{-1}\|_\infty \le \max_{i \in S,\, j \in \bar S} \max\left\{ \rho_{ij}^S(A),\ \rho_{ji}^{\bar S}(A) \right\},$$
where
$$\rho_{ij}^S(A) := \frac{|a_{ii}| - r_i^S(A) + r_j^S(A)}{\left(|a_{ii}| - r_i^S(A)\right)\left(|a_{jj}| - r_j^{\bar S}(A)\right) - r_i^{\bar S}(A)\, r_j^S(A)}.$$
This will give us the following estimation,
$$\|A_1^{-1}\|_\infty \le 0.1059, \quad\text{while the exact value is}\quad \|A_1^{-1}\|_\infty = 0.046.$$
In the Euclidean (and Frobenius) norm, we have
$$\|A_1^{-1}\| \le \|A_1^{-1}\|_F \le \sqrt{10}\cdot\|A_1^{-1}\|_\infty \le 0.5029.$$
Using our new subclasses of H-matrices for bounding the Frobenius (and Euclidean) norm of the inverse of $A_1$, we can use both (5),
$$\|A_1^{-1}\| \le \|A_1^{-1}\|_F \le 0.3156,$$
and (16), for $S = \{1, 2, \dots, 5\}$:
$$\|A_1^{-1}\| \le \|A_1^{-1}\|_F \le 0.08605,$$
since $A_1$ is an SDDF matrix, and therefore an S-SDDF matrix for every $S \subset N$. The exact values are $\|A_1^{-1}\| = 0.0343$ and $\|A_1^{-1}\|_F = 0.0540$, so this illustrative example can serve as a justification for introducing the new S-SDDF class of matrices.
Remark 8.
It is worth mentioning that an upper bound for the inverse in the Frobenius norm can be used for bounding the smallest singular value of a given matrix. Namely, it is well known that, for nonsingular matrices,
$$\sigma_{\min}(A) = \frac{1}{\|A^{-1}\|}.$$
Since
$$\|A^{-1}\| \le \|A^{-1}\|_F \le \left(\sum_{i \in N}\frac{1}{|a_{ii}|^2}\right)^{\frac12} \cdot \frac{\sqrt{(1-\alpha)^2 + (1-\delta)^2 + \beta^2 + \gamma^2}}{(1-\alpha)(1-\delta) - \beta\gamma},$$
we immediately obtain
$$\sigma_{\min}(A) \ge \frac{(1-\alpha)(1-\delta) - \beta\gamma}{\left(\sum_{i \in N}\frac{1}{|a_{ii}|^2}\right)^{\frac12}\sqrt{(1-\alpha)^2 + (1-\delta)^2 + \beta^2 + \gamma^2}}.$$
In the case of matrix $A_1$, this gives
$$29.1545 = \frac{1}{0.0343} = \sigma_{\min}(A_1) \ge \frac{1}{0.08605} = 11.6222.$$
Remark 9.
Conditioning is an important property of a matrix. Roughly speaking, the condition number of a matrix A is the rate at which the solution x of a linear system $Ax = b$ changes with respect to a change in b. In an arbitrary consistent matrix norm $\|\cdot\|_m$, it is defined as
$$\kappa_m(A) := \|A\|_m\,\|A^{-1}\|_m.$$
In the Euclidean matrix norm, it is
$$\kappa(A) := \|A\|\,\|A^{-1}\| = \frac{\sigma_{\max}}{\sigma_{\min}},$$
where $\sigma_{\max}$ and $\sigma_{\min}$ are the maximal and minimal singular values of A, respectively. Thanks to Theorem 6, with the same notations, we have an upper bound for the condition number of an S-SDDF matrix A:
$$\kappa_F(A) \le \|A\|_F\left(\sum_{i \in N}\frac{1}{|a_{ii}|^2}\right)^{\frac12} \cdot \frac{\sqrt{(1-\alpha)^2 + (1-\delta)^2 + \beta^2 + \gamma^2}}{(1-\alpha)(1-\delta) - \beta\gamma}.$$
In the case of matrix $A_1$, since $\|A_1\|_F = 253.27$, we obtain
$$\kappa(A_1) \le \kappa_F(A_1) \le 21.79.$$
In a special case, when matrix A has all diagonal entries equal, we have $D = cI$ for some scalar c, so that
$$\|A\|_F = |c|\,\|T\|_F, \qquad \|A^{-1}\|_F = \frac{1}{|c|}\,\|T^{-1}\|_F, \qquad\text{and}\qquad \kappa(A) = \kappa(T).$$
Since $\|T\|_F = \sqrt{k + \alpha^2 + \beta^2 + \gamma^2 + (n - k) + \delta^2}$, the condition number in the Euclidean norm, as well as in the Frobenius norm, can in this particular case be estimated with
$$\kappa(A) \le \kappa_F(A) = \kappa_F(T) \le \sqrt{n + \alpha^2 + \beta^2 + \gamma^2 + \delta^2} \cdot \frac{\sqrt{(1-\alpha)^2 + (1-\delta)^2 + \beta^2 + \gamma^2}}{(1-\alpha)(1-\delta) - \beta\gamma}.$$
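Remarks 8 and 9 follow mechanically from the bound (16); a sketch reusing s_sddf_inverse_bound (the printed values correspond to $A_1$ with S = {1, …, 5}):

```python
def sigma_min_lower_bound(A, k):
    """Remark 8: sigma_min(A) >= 1 / (bound (16))."""
    return 1.0 / s_sddf_inverse_bound(A, k)

def cond_f_upper_bound(A, k):
    """Remark 9: kappa_F(A) <= ||A||_F * (bound (16))."""
    return np.linalg.norm(A, "fro") * s_sddf_inverse_bound(A, k)

print(sigma_min_lower_bound(A1, 5))   # ~11.62, cf. Remark 8
print(cond_f_upper_bound(A1, 5))      # ~21.79, cf. Remark 9
```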
Remark 10.
A more precise analysis of the distance to instability, which is based on the localization of the pseudospectrum, can be performed in a similar way to what was carried out in [6], but this goes beyond the scope of this paper.

4. Conclusions

It is well known that the class of H-matrices, i.e., generalized diagonally dominant (GDD) matrices, stands as a cornerstone for numerous fields of applications of numerical linear algebra. For example, dynamical systems arising in different scientific areas exploit the knowledge of H-matrix theory in the following sense: being able to confirm that the Jacobian matrix governing the dynamics of a given dynamical system belongs to the H-matrix class and has negative diagonal entries is enough to ensure the system’s local stability. Finding new subclasses of nonsingular H-matrices, defined by computationally undemanding criteria, is very important, since it makes confirming membership in the H-matrix class easier.
Instead of focusing on the numerous subclasses of H-matrices closely connected to the absolute vector norm and the maximum matrix norm, this paper contributes novel subclasses depending on the Euclidean vector norm and the Frobenius matrix norm, while addressing their potential benefits in matrix spectrum localization, the stability of dynamical systems, and the Frobenius norm estimation of the matrix inverse.
An illustrative example is taken from ecological modeling, where the off-diagonal elements of one specific row are very often disproportionately greater than those of the others, a situation which favors using the Frobenius matrix norm rather than the maximum one.

Author Contributions

Conceptualization, D.C. and K.D.; methodology, D.C., Đ.V. and K.D.; formal analysis, D.C., Đ.V. and K.D.; resources, D.C.; writing—original draft preparation, D.C.; writing—review and editing, D.C., Đ.V. and K.D. All authors contributed equally to the preparation of all parts of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been supported by the Ministry of Science, Technological Development and Innovation (Contract No. 451-03-65/2024-03/200156) and the Faculty of Technical Sciences, University of Novi Sad, through the project “Scientific and Artistic Research Work of Researchers in Teaching and Associate Positions at the Faculty of Technical Sciences, University of Novi Sad” (No. 01-3394/1).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Geršgorin, S. Über die Abgrenzung der Eigenwerte einer Matrix (German) (About the delimitation of the eigenvalues of a matrix). Izv. Akad. Nauk. SSSR Seriya Mat. 1931, 7, 749–754. [Google Scholar]
  2. Varga, R.S. Geršgorin and His Circles; Springer: New York, NY, USA, 2004; Available online: https://link.springer.com/book/10.1007/978-3-642-17798-9 (accessed on 17 May 2024).
  3. Varah, J.M. A lower bound for the smallest singular value of a matrix. Linear Algebra Appl. 1975, 11, 3–5. [Google Scholar] [CrossRef]
  4. Carlson, D.; Markham, T. Schur complements of diagonally dominant matrices. Czech. Math. J. 1979, 29, 246–251. [Google Scholar] [CrossRef]
  5. García-Esnaola, M.; Peña, J.M. A comparison of error bounds for linear complementarity problems of H-matrices. Linear Algebra Appl. 2010, 433, 956–964. [Google Scholar] [CrossRef]
  6. Kostić, V.R.; Cvetković, L.; Cvetković, D.L. Pseudospectra localizations and their applications. Numer. Linear Algebra Appl. 2016, 23, 55–75. [Google Scholar] [CrossRef]
  7. Fiedler, M.; Pták, V. On matrices with non-positive off-diagonal elements and positive principal minors. Czechoslov. Math. J. 1962, 12, 382–400. [Google Scholar] [CrossRef]
  8. Cvetković, L. H-matrix theory vs. eigenvalue localization. Numer. Algor. 2006, 42, 229–245. [Google Scholar] [CrossRef]
  9. Li, B.S.; Tsatsomeros, M.J. Doubly diagonally dominant matrices. Linear Algebra Appl. 1997, 261, 221–235. [Google Scholar] [CrossRef]
  10. Ostrowski, A.M. Über die Determinanten mit Überwiegender Hauptdiagonale (German) (About the determinants with predominant main diagonal). Comment. Math. Helv. 1937, 10, 69–96. [Google Scholar] [CrossRef]
  11. Dashnic, L.S.; Zusmanovich, M.S. O nekotoryh kriteriyah regulyarnosti matric i lokalizacii ih spectra (in Russian) (On some criteria for the nonsingularity of matrices and the localization of their spectrum). Zh. Vychisl. Matem. i Matem. Fiz. 1970, 5, 1092–1097. [Google Scholar]
  12. Zhao, J.X.; Liu, Q.L.; Li, C.Q.; Li, Y.T. Dashnic-Zusmanovich type matrices: A new subclass of nonsingular H-matrices. Linear Algebra Appl. 2018, 552, 277–287. [Google Scholar] [CrossRef]
  13. Cvetković, L.; Kostić, V.; Varga, R.S. A new Geršgorin-type eigenvalue inclusion set. Electron. Trans. Numer. Anal. 2004, 18, 73–80. Available online: https://etna.ricam.oeaw.ac.at/vol.18.2004/pp73-80.dir/pp73-80.pdf (accessed on 17 May 2024).
  14. Gao, Y.M.; Wang, X.H. Criteria for generalized diagonally dominant matrices and M-matrices. Linear Algebra Appl. 1992, 169, 257–268. [Google Scholar] [CrossRef]
  15. Cvetković, D.L.; Cvetković, L.; Li, C.Q. CKV-type matrices with applications. Linear Algebra Appl. 2021, 608, 158–184. [Google Scholar] [CrossRef]
  16. Gudkov, V.V. On a certain test for nonsingularity of matrices. In Latvian Mathematical Yearbook; Zinatne: Riga, Latvia, 1965; pp. 385–390. [Google Scholar]
  17. Li, W. On Nekrasov matrices. Linear Algebra Appl. 1998, 281, 87–96. [Google Scholar] [CrossRef]
  18. Szulc, T. Some remarks on a theorem of Gudkov. Linear Algebra Appl. 1995, 225, 221–235. [Google Scholar] [CrossRef]
  19. Chen, X.; Li, Y.; Liu, L.; Wang, Y. Infinity norm upper bounds for the inverse of SDD1 matrices. AIMS Math. 2022, 7, 8847–8860. Available online: https://www.aimspress.com/article/id/6221e3c8ba35de7cf5703c90 (accessed on 17 May 2024). [CrossRef]
  20. Peña, J.M. Diagonal dominance, Schur complements and some classes of H-matrices and P-matrices. Adv. Comput. Math. 2011, 35, 357–373. [Google Scholar] [CrossRef]
  21. Doroslovački, K.; Cvetković, D.L. On matrices with only one non-SDD row. Mathematics 2023, 11, 2382. [Google Scholar] [CrossRef]
  22. Kostić, V.R.; Cvetković, L.; Cvetković, D.L. Improved stability indicators for empirical food webs. Ecol. Model. 2016, 320, 1–8. [Google Scholar] [CrossRef]
  23. Moore, J.C.; de Ruiter, P.C. Energetic Food Webs: An Analysis of Real and Model Ecosystems; OUP: Oxford, UK, 2012; Available online: https://academic.oup.com/book/2389 (accessed on 17 May 2024).
  24. Neutel, A.M.; Heesterbeek, J.A.P.; van de Koppel, J.; Hoenderboom, G.; Vos, A.; Kaldeway, C.; Berendse, F.; de Ruiter, P.C. Reconciling complexity with stability in naturally assembling food webs (Letter). Nature 2007, 449, 599–602. [Google Scholar] [CrossRef] [PubMed]
  25. Neutel, A.M.; Thorne, M.A.S. Interaction strengths in balanced carbon cycles and the absence of a relation between ecosystem complexity and stability. Ecol. Lett. 2014, 17, 651–661. [Google Scholar] [CrossRef] [PubMed]
  26. Cvetković, L.; Kostić, V.; Doroslovački, K.; Cvetković, D.L. Euclidean norm estimates of the inverse of some special block matrices. Appl. Math. Comput. 2016, 284, 12–23. [Google Scholar] [CrossRef]
  27. Zhang, F. The Schur Complement and Its Applications; Numerical Methods and Algorithms; Springer: Berlin/Heidelberg, Germany, 2005; Available online: https://link.springer.com/book/10.1007/b105056 (accessed on 17 May 2024).
  28. Kolotilina, L.Y. Bounds for the infinity norm of the inverse for certain M- and H-matrices. Linear Algebra Appl. 2009, 430, 692–702. [Google Scholar] [CrossRef]
Figure 1. Relationship between SDD and SDDF classes.
