1. Introduction
In many practical engineering problems, quantities are uncertain but bounded owing to inaccurate measurements, manufacturing errors, changes in the natural environment, and so on, and therefore have to be expressed as intervals. Consequently, interval analysis has received much interest not only from engineers but also from mathematicians [1,2,3,4,5]. Rohn [6] surveyed some of the most important results on interval matrices and other interval linear problems up to that time.
In particular, since it is often necessary to compute eigenvalue bounds for interval eigenvalue problems in structural analysis, control, and related issues in engineering and mechanics, many studies have been devoted to these problems over the past few decades (e.g., [7,8,9,10,11,12,13,14,15]).
The eigenvalue problems with interval matrices dealt with in this paper can be formulated as follows:
$$Ku = \lambda u, \qquad (1)$$
subject to
$$\underline{K} \le K \le \overline{K}, \qquad (2)$$
where $K$, $\underline{K}$ and $\overline{K}$ are all $n \times n$ real symmetric matrices. $\underline{K}$ and $\overline{K}$ are known matrices which are composed of the lower and upper bounds of the intervals, respectively. $K$ is an uncertain-but-bounded matrix and ranges over the inequalities in Equation (2). $\lambda$ is an eigenvalue of the eigenvalue problem in Equation (1) with the unknown matrix $K$, and $u$ is the eigenvector corresponding to $\lambda$. All interval quantities are assumed to vary independently within their bounds.
In order to facilitate the expression of the interval matrices, interval matrix notation [2] is used in this paper. The inequalities in Equation (2) can be written as $K \in K^I$, in which $K^I = [\underline{K}, \overline{K}]$ is a symmetric interval matrix. Therefore, the problem can be referred to as follows: for a given interval matrix $K^I$, find an eigenvalue interval $\lambda^I$, which is to say $\lambda^I = [\underline{\lambda}, \overline{\lambda}]$, such that it encloses all possible eigenvalues $\lambda$ satisfying Equation (1) when $K \in K^I$. Furthermore, let the midpoint matrix of the interval matrix $K^I$ be defined as
$$K^c = \frac{1}{2}\left(\underline{K} + \overline{K}\right),$$
and the uncertain radius of the interval matrix $K^I$ be
$$\Delta K = \frac{1}{2}\left(\overline{K} - \underline{K}\right).$$
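As a concrete illustration of this notation (the numerical bounds below are ours, purely for demonstration), the midpoint matrix and the uncertain radius follow directly from the bound matrices:

```python
import numpy as np

# Illustrative bound matrices of a 3x3 symmetric interval matrix K^I = [K_lower, K_upper].
K_lower = np.array([[1.0, 0.4, 0.0],
                    [0.4, 2.0, 0.5],
                    [0.0, 0.5, 3.0]])
K_upper = np.array([[1.2, 0.6, 0.0],
                    [0.6, 2.4, 0.7],
                    [0.0, 0.7, 3.2]])

K_center = (K_lower + K_upper) / 2  # midpoint matrix K^c
K_radius = (K_upper - K_lower) / 2  # uncertain radius Delta K

# Any symmetric K with K_lower <= K <= K_upper (entrywise) belongs to K^I.
K_sample = K_center + 0.3 * K_radius
assert np.all(K_lower <= K_sample) and np.all(K_sample <= K_upper)
```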
For some special cases of these problems, such as when $K$ in Equation (1) is a real symmetric matrix, some methods based on perturbation theory were proposed in [7,9,16,17,18].
From many numerical experiments, we observed that the eigenvalue bounds are often attained at certain vertex matrices (in other words, at the boundary values of the interval entries). Therefore, the conditions under which the eigenvalue bounds are attained at certain vertex matrices have been raised as an interesting question. Under the condition that the signs of the components of the eigenvectors remain invariant, Deif [7] developed an effective method which yields the exact eigenvalue bounds for the interval eigenvalue problem in Equation (1). As a corollary of Deif's method, Dimarogonas [17] proposed that the eigenvalue bounds should be attained at some vertex matrix under Deif's condition. However, there exists no criterion for judging Deif's condition in advance. Methods based on three theorems dealing with the standard interval eigenvalue problem were introduced in [18], which claimed that, according to the vertex solution theorem and the parameter decomposition solution theorem, the eigenvalue bounds for Equation (1) should be achieved when the matrix entries take their boundary values. Unfortunately, this is not true. There exists a matrix (see the example in the Appendix of [16]) for which the upper or lower endpoint of an eigenvalue interval is achieved when some matrix entries take interior values of the element intervals rather than the endpoints. This contradicts the conclusion in [18]. For symmetric tridiagonal interval matrices, we also have counterexamples in which the maximum of an eigenvalue is attained at a matrix whose entries take interior values of their intervals; such a matrix is not a vertex matrix. Therefore, the conditions under which the end values of an eigenvalue interval are achieved when the matrix entries take their boundary values need to be established.
In [19], Yuan et al. considered a special case of the problem in Equation (1). They proved that, for an $n \times n$ symmetric tridiagonal interval matrix, under some assumptions, the upper and lower bounds of the interval eigenvalues of the problem are achieved at a certain vertex matrix of the interval matrix. Additionally, the assumptions proposed in their paper can be verified by a sufficient condition. This result is important. However, it has the drawback that, to determine the upper and lower bounds of an eigenvalue, $2^{2n-1}$ vertex matrices must be checked.
In this paper, we present an improved theoretical result based on the work in [19]. From this result, we derive a fast algorithm to find the upper or lower bound of an eigenvalue. Instead of checking $2^{2n-1}$ vertex matrices, we just need to check $2n$ matrices.
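To make the cost of the vertex approach concrete, the following brute-force sketch enumerates all $2^{2n-1}$ vertex matrices of a symmetric tridiagonal interval matrix (the interval data and helper names are ours, purely for illustration):

```python
import itertools
import numpy as np

def vertex_eigen_bounds(a_lo, a_hi, b_lo, b_hi, k=0):
    """Brute-force bounds of the k-th smallest eigenvalue over all
    2^(2n-1) vertex matrices of a symmetric tridiagonal interval matrix."""
    n = len(a_lo)
    lo, hi = np.inf, -np.inf
    # One binary choice per diagonal entry (n of them) and per subdiagonal entry (n-1).
    for c in itertools.product((0, 1), repeat=2 * n - 1):
        a = [a_hi[i] if c[i] else a_lo[i] for i in range(n)]
        b = [b_hi[j] if c[n + j] else b_lo[j] for j in range(n - 1)]
        T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
        lam = np.linalg.eigvalsh(T)[k]
        lo, hi = min(lo, lam), max(hi, lam)
    return lo, hi

# A 4x4 example already requires 2^(2*4-1) = 128 eigenvalue computations.
print(vertex_eigen_bounds([1.0, 2.0, 3.0, 4.0], [1.2, 2.2, 3.2, 4.2],
                          [0.5, 0.6, 0.7], [0.7, 0.8, 0.9]))
```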
The rest of the paper is arranged as follows. Section 2 introduces the theoretical results given in [19]. Section 3 provides the improved theoretical result and the fast algorithm for finding the upper or lower bound of an eigenvalue of a symmetric tridiagonal interval matrix. Section 4 illustrates the main findings of this paper by means of a simulation example. Section 5 provides further remarks.
2. A Property for Symmetric Tridiagonal Interval Matrices
For the convenience of reading and of deriving the results of the next section, we restate the results from [19] here.
Let an irreducible symmetric tridiagonal matrix $T$, which is a normal matrix, be denoted as
$$T = \begin{pmatrix} a_1 & b_1 & & & \\ b_1 & a_2 & b_2 & & \\ & \ddots & \ddots & \ddots & \\ & & b_{n-2} & a_{n-1} & b_{n-1} \\ & & & b_{n-1} & a_n \end{pmatrix}, \qquad (6)$$
where $b_j \ne 0$ for $1 \le j \le n-1$ by irreducibility. Obviously, all eigenvalues of $T$ are real.
Several lemmas and corollaries are given below:
Definition 1 (leading or trailing principal submatrix). Let $D$ be a matrix of order $n$ with diagonal elements $d_1, d_2, \dots, d_n$. The submatrices of $D$ whose diagonal entries are composed of $d_1, \dots, d_k$ for $1 \le k \le n$ are called leading principal submatrices of order $k$, and those whose diagonal entries are composed of $d_{n-k+1}, \dots, d_n$ are called trailing principal submatrices of order $k$.
Let the leading and trailing principal submatrices of order $k$ be denoted as $D_k$ and $\widetilde{D}_k$ for $1 \le k \le n$, respectively. The leading and trailing principal minors can be defined similarly.
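To make Definition 1 concrete, a minimal sketch (the function names are ours, not from [19]) extracting leading and trailing principal submatrices with NumPy slicing:

```python
import numpy as np

def leading_principal_submatrix(D, k):
    """Submatrix whose diagonal is d_1, ..., d_k."""
    return D[:k, :k]

def trailing_principal_submatrix(D, k):
    """Submatrix whose diagonal is d_{n-k+1}, ..., d_n."""
    n = D.shape[0]
    return D[n - k:, n - k:]

D = np.diag([1.0, 2.0, 3.0, 4.0])
print(leading_principal_submatrix(D, 2))   # diagonal (1, 2)
print(trailing_principal_submatrix(D, 2))  # diagonal (3, 4)
```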
Next, three theorems from the literature are introduced as lemmas below:
Lemma 1 ([20], p. 36). $T$, denoted as in Equation (6), has the following properties: the characteristic polynomials $p_k(\lambda) = \det(T_k - \lambda I_k)$ of its leading principal submatrices, with $p_0(\lambda) = 1$, satisfy the three-term recurrence $p_k(\lambda) = (a_k - \lambda)\, p_{k-1}(\lambda) - b_{k-1}^2 \, p_{k-2}(\lambda)$ for $2 \le k \le n$; two consecutive polynomials $p_{k-1}$ and $p_k$ have no common zero; and all eigenvalues of $T$ are simple.
The following corollary can be easily deduced from Lemma 1:
Corollary 1. If the characteristic polynomial of $T$ satisfies $p_n(\lambda^*) = 0$, then $p_{n-1}(\lambda^*) \ne 0$. Furthermore, the characteristic polynomial can be rewritten in the form of a determinant $F(\lambda) = \det(T - \lambda I)$. The leading (trailing) principal minor of order $n-1$ of $T - \lambda^* I$ does not equal zero if $F(\lambda^*) = 0$.
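The leading principal minors can be evaluated without forming determinants explicitly; the following sketch implements the three-term recurrence stated in Lemma 1 (the function name and list conventions are ours):

```python
def leading_minors(a, b, lam):
    """Leading principal minors p_0, ..., p_n of T - lam*I for a symmetric
    tridiagonal T with diagonal a (length n) and subdiagonal b (length n-1),
    via the three-term recurrence of Lemma 1."""
    n = len(a)
    p = [1.0, a[0] - lam]
    for k in range(2, n + 1):
        p.append((a[k - 1] - lam) * p[k - 1] - b[k - 2] ** 2 * p[k - 2])
    return p  # p[n] equals det(T - lam*I)

# p_n changes sign between lam = 0 and lam = 1, locating an eigenvalue in (0, 1).
print(leading_minors([1.0, 2.0, 3.0], [0.5, 0.5], 0.0)[-1],
      leading_minors([1.0, 2.0, 3.0], [0.5, 0.5], 1.0)[-1])
```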
Lemma 2 ([21], Th. 3.1).
Let $T_{n-1}$ be a principal submatrix of $T$ of order $n-1$, in which one row and the corresponding column of $T$ are deleted. Let the eigenvalues of $T$ be denoted by $\lambda_1 \le \lambda_2 \le \dots \le \lambda_n$ and those of $T_{n-1}$ be $\mu_1 \le \mu_2 \le \dots \le \mu_{n-1}$. Then, it holds that
$$\lambda_1 \le \mu_1 \le \lambda_2 \le \mu_2 \le \dots \le \mu_{n-1} \le \lambda_n.$$
The following can easily be obtained from Lemma 2:
Corollary 2. If $\lambda$ is the minimum or maximum eigenvalue of $T$ such that $F(\lambda) = 0$, then the leading and trailing principal minors of $T - \lambda I$ of order $k$ for $1 \le k \le n-1$ do not equal zero.
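Corollary 2 rests on the interlacing in Lemma 2, which is easy to check numerically; a small sketch with illustrative data (not taken from the paper):

```python
import numpy as np

# Numerical check of the interlacing behind Corollary 2 (illustrative data).
a, b = [1.0, 2.0, 3.0, 4.0], [0.5, 0.6, 0.7]
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
lam = np.linalg.eigvalsh(T)           # eigenvalues of T (ascending)
mu = np.linalg.eigvalsh(T[:-1, :-1])  # eigenvalues of the leading (n-1)-submatrix
# Strict interlacing lam_1 < mu_1 < lam_2 < ... holds since no b_j is zero.
print(all(lam[i] < mu[i] < lam[i + 1] for i in range(len(mu))))  # True
```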
Lemma 3 ([22]).
If the $n-1$ eigenvalues of a certain principal submatrix of $T$ of order $n-1$ are distinct, then they strictly separate the $n$ eigenvalues of $T$.
The proof in [22] is written in Chinese. Its English translation can be found in the Appendix of [19].
From this lemma, it is easy to deduce the following corollary:
Corollary 3. If the eigenvalues of each principal submatrix of $T$ of order $n-1$ are distinct, and $\lambda$ is an eigenvalue of $T$, then the leading and trailing principal minors of $T - \lambda I$ of order $k$ with $1 \le k \le n-1$ do not equal zero.
The proof of this corollary is in [19].
Next, the main theorem of this paper can be introduced. We give the whole proof here since we need to use the results and notations in the next section:
Theorem 1. Let an interval matrix $T^I$ be a symmetric tridiagonal interval matrix, and let it be denoted as
$$T^I = \begin{pmatrix} [\underline{a}_1, \overline{a}_1] & [\underline{b}_1, \overline{b}_1] & & \\ [\underline{b}_1, \overline{b}_1] & [\underline{a}_2, \overline{a}_2] & \ddots & \\ & \ddots & \ddots & [\underline{b}_{n-1}, \overline{b}_{n-1}] \\ & & [\underline{b}_{n-1}, \overline{b}_{n-1}] & [\underline{a}_n, \overline{a}_n] \end{pmatrix}.$$
Let its vertex matrices be expressed as $T_v$ for $1 \le v \le 2^{2n-1}$, with its diagonal elements being either $\underline{a}_i$ or $\overline{a}_i$ for $1 \le i \le n$ and its subdiagonal elements being either $\underline{b}_j$ or $\overline{b}_j$ for $1 \le j \le n-1$. Let the eigenvalues of $T \in T^I$ be $\lambda_1 \le \lambda_2 \le \dots \le \lambda_n$ and the eigenvalues of $T_v$ be $\lambda_1^{(v)} \le \lambda_2^{(v)} \le \dots \le \lambda_n^{(v)}$. Here, suppose that the $\lambda_k$ and the $\lambda_k^{(v)}$ are all arranged in increasing order for $1 \le k \le n$. If $T^I$ satisfies
- (a)
its sub-diagonal intervals not including zero, then
$$\min_{T \in T^I} \lambda_k(T) = \min_v \lambda_k^{(v)} \quad \text{and} \quad \max_{T \in T^I} \lambda_k(T) = \max_v \lambda_k^{(v)} \quad \text{for } k = 1 \text{ and } k = n.$$
Furthermore, if $T^I$ also satisfies
- (b)
each of its principal sub-matrices of order $n-1$ possessing non-overlapping eigenvalue intervals, then the same equalities hold for every eigenvalue:
$$\min_{T \in T^I} \lambda_k(T) = \min_v \lambda_k^{(v)} \quad \text{and} \quad \max_{T \in T^I} \lambda_k(T) = \max_v \lambda_k^{(v)}, \qquad k = 1, \dots, n.$$
Proof. Let the central points and the radii of the entries of $T^I$ be denoted, respectively, as $a_i^c, \Delta a_i$ for $1 \le i \le n$ and $b_j^c, \Delta b_j$ for $1 \le j \le n-1$. Denote $2n-1$ real variables as
$$X = (x_1, x_2, \dots, x_{2n-1})^T, \qquad x_i \in [-1, 1]. \qquad (7)$$
Therefore, the $i$th diagonal entry of $T(X) \in T^I$ can be expressed as $a_i = a_i^c + x_i \Delta a_i$ for $1 \le i \le n$, and the $j$th subdiagonal entry of $T(X)$ can be expressed as $b_j = b_j^c + x_{n+j} \Delta b_j$ for $1 \le j \le n-1$.
Let $F(X, \lambda) = \det(T(X) - \lambda I)$ be the characteristic determinant of $T(X)$. The definition of its leading or trailing principal minors is the same as in Definition 1; denote the leading and trailing principal minors of order $k$ by $p_k(\lambda)$ and $q_k(\lambda)$, with $p_0(\lambda) = q_0(\lambda) = 1$. Obviously, $F$ is a function of $X$.
Let $\lambda$ be an eigenvalue of $T(X)$, so that $F(X, \lambda) = 0$. $F$ is differentiable due to the differentiability of all entries in $T(X)$. Then, $\lambda$ can be expressed as
$$\lambda = \lambda(X), \qquad X \in [-1, 1]^{2n-1}. \qquad (9)$$
Consider the partial derivative of $F$ with respect to $x_i$ when $1 \le i \le n$. Based on the derivative rule for a determinant, we obtain
$$\frac{\partial F}{\partial x_i} = \Delta a_i \, p_{i-1}(\lambda) \, q_{n-i}(\lambda).$$
In a similar way, the partial derivative of $F$ with respect to $x_{n+j}$ when $1 \le j \le n-1$ is
$$\frac{\partial F}{\partial x_{n+j}} = -2\, b_j \, \Delta b_j \, p_{j-1}(\lambda) \, q_{n-j-1}(\lambda).$$
Moreover, it can be deduced that
$$\frac{\partial \lambda}{\partial x_i} = -\frac{\partial F / \partial x_i}{\partial F / \partial \lambda}, \qquad 1 \le i \le 2n-1, \qquad (11)$$
where $b_j = b_j^c + x_{n+j} \Delta b_j$ and $\partial F / \partial \lambda$ is evaluated at $\lambda = \lambda(X)$.
From Corollary 1, we know that $\partial F / \partial \lambda \ne 0$ when $F(X, \lambda) = 0$, since $\partial F / \partial \lambda$ is the derivative of $F$ with respect to $\lambda$ and the eigenvalues of the irreducible matrix $T(X)$ are simple. Additionally, from Corollary 2, when condition (a) is satisfied and $\lambda = \lambda_1$ (or $\lambda = \lambda_n$), it is obtained that all $p_k(\lambda)$ and $q_k(\lambda)$ for $1 \le k \le n-1$ do not equal zero. From Lemma 3, the same conclusion can be drawn when conditions (a) and (b) are both satisfied and $\lambda = \lambda_k$ with $1 < k < n$.
Therefore, if all partial derivatives in Equation (11) vanish, then it must have $\Delta a_i = 0$ and $\Delta b_j = 0$. Then, the corresponding values of $a_i$ and $b_j$ must be the constants $a_i^c$ and $b_j^c$. From the extremal property of a function, the eigenvalue bounds should be reached when the matrix entries take their boundary values. □
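Theorem 1 can also be probed numerically: for illustrative interval data satisfying condition (a), the minimum of $\lambda_1$ over all vertex matrices is never undercut by random interior members of the interval matrix. The data and helper names below are ours, purely for demonstration:

```python
import itertools
import numpy as np

def smallest_eig(a, b):
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    return np.linalg.eigvalsh(T)[0]

n = 4
a_lo, a_hi = np.arange(1.0, n + 1), np.arange(1.0, n + 1) + 0.3
b_lo, b_hi = np.full(n - 1, 0.5), np.full(n - 1, 0.8)  # condition (a): zero excluded

# Minimum of lambda_1 over all 2^(2n-1) vertex matrices.
vertex_min = min(
    smallest_eig(np.where(c[:n], a_hi, a_lo), np.where(c[n:], b_hi, b_lo))
    for c in (np.array(t) for t in itertools.product((0, 1), repeat=2 * n - 1)))

# Random members drawn from the interior of T^I never undercut the vertex minimum.
rng = np.random.default_rng(0)
interior_min = min(smallest_eig(rng.uniform(a_lo, a_hi), rng.uniform(b_lo, b_hi))
                   for _ in range(2000))
print(vertex_min <= interior_min)  # expected: True
```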
Remark 1. We would like to mention here that condition (b) in Theorem 1 is restrictive. However, the following are true:
- (i)
In many engineering problems, the matrix which satisfies condition (b) can be verified by some sufficient conditions. Yuan, He and Leng provided a theorem in [19] to verify condition (b). Hladík in [15] provided an alternative test which can also verify condition (b).
- (ii)
In many engineering problems, we just need to find the interval of the largest eigenvalue, and in this scenario, we do not need to verify condition (b).
- (iii)
In [15], the author presented another approach to obtaining eigenvalue intervals for symmetric tridiagonal interval matrices. Compared with the algorithm in [15], our algorithm (which will be presented in the next section) does not need to use eigenvectors. Therefore, it is still competitive.
3. An Improved Theoretical Result and a Fast Algorithm
As we mentioned before, Theorem 1 provides an important property of symmetric tridiagonal interval matrices. However, it helps us little in finding the upper and lower bounds of eigenvalues, since $2^{2n-1}$ vertex matrices need to be checked to determine the upper and lower bounds of an eigenvalue. Below, we will prove a better property:
Definition 2 (uniformly monotone). For a function $G(y_1, y_2, \dots, y_m)$, if for every $i \in \{1, \dots, m\}$, $G$ is monotone increasing or monotone decreasing with respect to $y_i$, then $G$ is uniformly monotone with respect to $(y_1, y_2, \dots, y_m)$.
Lemma 4 (multivariate intermediate value theorem). For any continuous function $f$ over a closed region $\Omega$, if $f$ has no zero in the interior of $\Omega$, then either $f(P) > 0$ or $f(P) < 0$ for all points $P$ in the interior.
Proof. Suppose we have $P_1$ and $P_2$ in the interior such that $f(P_1) > 0$ and $f(P_2) < 0$. Then, take a connected open set $U$ in the interior containing both $P_1$ and $P_2$. As $f$ is continuous, we know $f(U)$ is connected and contains both a positive and a negative number, and thus it also contains zero, which is a contradiction. □
Theorem 2. If the conditions in Theorem 1 are satisfied, then for $\lambda(X)$ in Equation (9), if we restrict each component $x_i$ between $-1$ and $1$, then $\lambda(X)$ is uniformly monotone.
Proof. In the proof of Theorem 1, we showed that all $p_k(\lambda)$ and $q_k(\lambda)$ for $1 \le k \le n-1$ do not equal zero. According to Lemma 4, we have in Equation (11) that all $\partial \lambda / \partial x_i$ and $\partial \lambda / \partial x_{n+j}$ will keep the same sign in $[-1, 1]^{2n-1}$ for all $X$, with $1 \le i \le n$ and $1 \le j \le n-1$. This means that $\lambda(X)$ is uniformly monotone. □
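The uniform monotonicity of Theorem 2 can be visualized by sweeping a single component of $X$ while the others stay fixed; with illustrative midpoint/radius data (ours, chosen so that condition (a) holds), the resulting eigenvalue curve is monotone:

```python
import numpy as np

a_c, a_r = np.array([1.0, 2.0, 3.0]), np.array([0.1, 0.2, 0.1])  # midpoints / radii
b_c, b_r = np.array([0.6, 0.7]), np.array([0.1, 0.1])            # condition (a) holds

def lam1(x):
    """Smallest eigenvalue of T(X), with X parametrized as in Equation (7)."""
    a = a_c + x[:3] * a_r
    b = b_c + x[3:] * b_r
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    return np.linalg.eigvalsh(T)[0]

x = np.array([0.3, -0.8, 0.5, 0.2, -0.4])  # other components fixed arbitrarily
curve = []
for t in np.linspace(-1.0, 1.0, 21):       # sweep x_4, the first subdiagonal variable
    x[3] = t
    curve.append(lam1(x))
d = np.diff(curve)
print(bool(np.all(d >= 0) or np.all(d <= 0)))  # expected: True (monotone curve)
```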
Theorem 2 tells us the following:
Assume that the conditions in Theorem 1 are satisfied;
Assume a component (for example, $x_i$) causes $\lambda(X)$ to monotonically reach its extreme value when the other components are fixed (from Theorem 1, $x_i$ will reach one of its ending points).
Then, for another component (for example, $x_j$), when $x_j$ reaches one of its ending points, changing $x_i$ cannot improve the value of $\lambda(X)$; the optimal ending point of $x_i$ does not depend on the other components.
Based on the above analysis, we propose an algorithm to find the upper or lower bound of an eigenvalue. Suppose we need to find the lower bound of the smallest eigenvalue $\lambda_1$; we have Algorithm 1 as follows (for the upper bounds and other eigenvalues, the algorithms are similar):
Algorithm 1 (Fast Algorithm for the lower bound of $\lambda_1$)
- 1:
Set $x_i = -1$ for $1 \le i \le n$ and $x_{n+j} = -1$ for $1 \le j \le n-1$. Here, $x_i$ and $x_{n+j}$ have the same meaning as in Equation (7). We will obtain a matrix $T(X)$ as in Equation (6) and obtain $\lambda_1^* = \lambda_1(T(X))$.
- 2:
For $k$ from 1 to $n$: Let $x_k = 1$. Here, $x_k$ has the same meaning as in Equation (7). Therefore, we have a matrix $T(X)$ and $\lambda_1(T(X))$. If $\lambda_1(T(X)) < \lambda_1^*$: keep $x_k = 1$ and set $\lambda_1^* = \lambda_1(T(X))$. Else: restore $x_k = -1$.
- 3:
For $l$ from 1 to $n-1$: Let $x_{n+l} = 1$. Here, $x_{n+l}$ has the same meaning as in Equation (7). Therefore, we have a matrix $T(X)$ and $\lambda_1(T(X))$. If $\lambda_1(T(X)) < \lambda_1^*$: keep $x_{n+l} = 1$ and set $\lambda_1^* = \lambda_1(T(X))$. Else: restore $x_{n+l} = -1$.
Upon termination, $\lambda_1^*$ is the lower bound of $\lambda_1$.
To obtain the upper or lower bound of $\lambda_k$, we just need to check $2n$ vertex matrices instead of $2^{2n-1}$ vertex matrices.
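For completeness, a compact NumPy sketch of Algorithm 1 as we read it; the initial all-lower vertex, the flip order, and the helper names are our choices, not prescribed by the text:

```python
import numpy as np

def lower_bound_lambda1(a_lo, a_hi, b_lo, b_hi):
    """Sketch of Algorithm 1: lower bound of lambda_1 by checking 2n
    vertex matrices instead of 2^(2n-1)."""
    n = len(a_lo)

    def lam1(x):
        # Entries of T(X) as in Equation (7): midpoint + x * radius, x = -1 or +1.
        a = (a_lo + a_hi) / 2 + x[:n] * (a_hi - a_lo) / 2
        b = (b_lo + b_hi) / 2 + x[n:] * (b_hi - b_lo) / 2
        T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
        return np.linalg.eigvalsh(T)[0]

    x = -np.ones(2 * n - 1)        # Step 1: start at the all-lower vertex
    best = lam1(x)
    for k in range(2 * n - 1):     # Steps 2 and 3: flip each component once
        x[k] = 1.0
        trial = lam1(x)
        if trial < best:
            best = trial           # keep the flip
        else:
            x[k] = -1.0            # restore the component
    return best

a_lo, a_hi = np.array([1.0, 2.0, 3.0]), np.array([1.2, 2.4, 3.2])
b_lo, b_hi = np.array([0.4, 0.5]), np.array([0.6, 0.7])
print(lower_bound_lambda1(a_lo, a_hi, b_lo, b_hi))
```

The eigenvalue routine is called $1 + (2n - 1) = 2n$ times in total, matching the count stated above.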