Article

Roots of Characteristic Polynomial Sequences in Iterative Block Cyclic Reductions

by Masato Shinjo 1,*, Tan Wang 2, Masashi Iwasaki 3 and Yoshimasa Nakamura 4
1 Faculty of Science and Engineering, Doshisha University, Kyotanabe 610-0394, Japan
2 Digital Technology & Innovation, Siemens Healthineers Digital Technology (Shanghai) Co., Ltd., Shanghai 201318, China
3 Faculty of Life and Environmental Science, Kyoto Prefectural University, Kyoto 606-8522, Japan
4 Department of Informatics and Mathematical Science, Osaka Seikei University, Osaka 533-0007, Japan
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(24), 3213; https://doi.org/10.3390/math9243213
Submission received: 16 November 2021 / Revised: 9 December 2021 / Accepted: 9 December 2021 / Published: 12 December 2021
(This article belongs to the Special Issue Polynomial Sequences and Their Applications)

Abstract
The block cyclic reduction method is a finite-step direct method for solving linear systems with block tridiagonal coefficient matrices. It iteratively uses transformations to reduce the number of non-zero blocks in the coefficient matrices. As block cyclic reductions are repeated, the non-zero off-diagonal blocks of the coefficient matrices incrementally move away from the diagonal blocks and eventually vanish after a finite number of reductions. In this paper, we focus on the roots of the characteristic polynomials of coefficient matrices that are repeatedly transformed by block cyclic reductions. We regard each block cyclic reduction as a composition of two types of matrix transformations, and then examine the resulting changes in the existence range of the roots. This is a block extension of the idea presented in our previous papers on simple cyclic reductions. The property that the roots are not widely scattered is key to solving linear systems accurately in floating-point arithmetic. We clarify that block cyclic reductions do not disperse the roots, but rather narrow their distribution, if the original coefficient matrix is symmetric positive or negative definite.

1. Introduction

Solving systems of linear equations is one of the most important subjects in numerical linear algebra. In particular, applied mathematics and engineering often require the solution of linear systems with tridiagonal coefficient matrices. Solving tridiagonal linear systems generally involves finding $N$-dimensional vectors $x$ such that $Ax = b$ for given $N$-by-$N$ tridiagonal matrices $A$ and $N$-dimensional vectors $b$. The cyclic reduction method is a finite-step direct method for computing the solutions $x$ [1,2]. It first transforms the tridiagonal coefficient matrices $A$ to pentadiagonal matrices with all subdiagonal (and superdiagonal) entries equal to 0. The right-hand side vectors $b$ are, of course, simultaneously changed. The two non-zero off-diagonals of the coefficient matrices gradually leave the diagonals in the iterative cyclic reductions, with the coefficient matrices eventually being reduced to diagonal matrices. Error analysis of the cyclic reduction method has been reported in [3], and a variant of the cyclic reduction method has also been presented, for example, in [4]. The stride reduction method is a generalization of the cyclic reduction method that can solve problems where $A$ are $M$-tridiagonal matrices; that is, matrices whose two non-zero off-diagonals consist of the $(1, M+1), (2, M+2), \dots, (N-M, N)$ entries and the $(M+1, 1), (M+2, 2), \dots, (N, N-M)$ entries [5]. Each stride reduction, including cyclic reduction, narrows the distribution of the roots of the characteristic polynomials associated with the coefficient matrices if $A$ are symmetric positive definite [6,7]. This is a desirable property in that it does not increase the difficulty of solving the linear systems.
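To make the scalar mechanism concrete, the following minimal NumPy sketch (our own illustration, not code from [5]; the function name stride_reduction and the example system are ours) builds the transformation that annihilates the off-diagonals at distance $M$ and applies it repeatedly until the coefficient matrix becomes diagonal:

```python
import numpy as np

# Minimal sketch of scalar stride reduction: T has ones on the diagonal,
# -E_i/D_{i+M} at (i, i+M) and -F_i/D_i at (i+M, i), so T @ A zeroes the
# off-diagonals at distance M and creates off-diagonals at distance 2M.
def stride_reduction(A, b, M):
    N = A.shape[0]
    T = np.eye(N)
    for i in range(N - M):
        T[i, i + M] = -A[i, i + M] / A[i + M, i + M]
        T[i + M, i] = -A[i + M, i] / A[i, i]
    return T @ A, T @ b

A0 = np.diag(4.0 * np.ones(8)) - np.diag(np.ones(7), 1) - np.diag(np.ones(7), -1)
b0 = np.ones(8)
A, b, M = A0.copy(), b0.copy(), 1
while M < len(b0):              # off-diagonals move to distance 1 -> 2 -> 4 -> out
    A, b = stride_reduction(A, b, M)
    M *= 2
x = b / np.diag(A)              # the final coefficient matrix is diagonal
print(np.allclose(A0 @ x, b0))  # True
```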
Here, we consider positive integers $N_1, N_2, \dots, N_m$ satisfying $N = N_1 + N_2 + \cdots + N_m$. The cyclic reduction method is extended to solve a problem where the coefficient matrices $A \in \mathbb{R}^{N \times N}$ are block tridiagonal matrices [2,8] expressed using $m$ square matrices $D_1 \in \mathbb{R}^{N_1 \times N_1}, D_2 \in \mathbb{R}^{N_2 \times N_2}, \dots, D_m \in \mathbb{R}^{N_m \times N_m}$ and $2m-2$ rectangular matrices $E_1 \in \mathbb{R}^{N_1 \times N_2}, E_2 \in \mathbb{R}^{N_2 \times N_3}, \dots, E_{m-1} \in \mathbb{R}^{N_{m-1} \times N_m}$ and $F_1 \in \mathbb{R}^{N_2 \times N_1}, F_2 \in \mathbb{R}^{N_3 \times N_2}, \dots, F_{m-1} \in \mathbb{R}^{N_m \times N_{m-1}}$ as:
$$A := \begin{pmatrix} D_1 & E_1 & & \\ F_1 & D_2 & \ddots & \\ & \ddots & \ddots & E_{m-1} \\ & & F_{m-1} & D_m \end{pmatrix}.$$
A block tridiagonal matrix can be regarded as a block matrix obtained by replacing the diagonal entries in a tridiagonal matrix with square matrices and the subdiagonal and superdiagonal entries with rectangular matrices. This extended method is called the block cyclic reduction method. The forward stability of iterative block cyclic reductions has been discussed in [9], but changes in the roots of the characteristic polynomials of the coefficient matrices have not been studied. Block cyclic reductions do not work well in floating-point arithmetic if they greatly disperse the roots. Thus, the main purpose of this paper is to clarify whether block cyclic reductions narrow the root distribution as stride reductions do.
The remainder of this paper is organized as follows. Section 2 briefly explains the block cyclic reduction method used for solving block tridiagonal linear systems. Section 3 shows that block $M$-tridiagonal matrices can be transformed to block tridiagonal (1-tridiagonal) matrices without changing the eigenvalues. Then, we interpret the transformation from block $M$-tridiagonal matrices to block $2M$-tridiagonal matrices in the block cyclic reduction as a composite transformation of the block tridiagonalization, its inverse, and the transformation from block tridiagonal to block 2-tridiagonal matrices. In Section 4, we find the relationship between the inverses of block tridiagonal matrices and those of block 2-tridiagonal matrices. Section 5 examines the roots of the characteristic polynomials of coefficient matrices transformed by block cyclic reductions, compared with those of the original coefficient matrices $A$, in the cases where $A$ are block tridiagonal and symmetric positive definite or negative definite. Section 6 gives two numerical examples for observing the coefficient matrices and the roots of their characteristic polynomials appearing in iterative block cyclic reductions. Section 7 concludes the paper.

2. Block Cyclic Reduction

In this section, we briefly explain the block cyclic reduction method used for solving linear systems with block tridiagonal coefficient matrices.
We consider the following $N$-by-$N$ block-band matrix given using $m$ square matrices $D_1^{(M)} \in \mathbb{R}^{N_1 \times N_1}, D_2^{(M)} \in \mathbb{R}^{N_2 \times N_2}, \dots, D_m^{(M)} \in \mathbb{R}^{N_m \times N_m}$ and $2(m-M)$ rectangular matrices $E_1^{(M)} \in \mathbb{R}^{N_1 \times N_{1+M}}, E_2^{(M)} \in \mathbb{R}^{N_2 \times N_{2+M}}, \dots, E_{m-M}^{(M)} \in \mathbb{R}^{N_{m-M} \times N_m}$ and $F_1^{(M)} \in \mathbb{R}^{N_{1+M} \times N_1}, F_2^{(M)} \in \mathbb{R}^{N_{2+M} \times N_2}, \dots, F_{m-M}^{(M)} \in \mathbb{R}^{N_m \times N_{m-M}}$ as:
$$A^{(M)} := \begin{pmatrix} D_1^{(M)} & & E_1^{(M)} & & \\ & \ddots & & \ddots & \\ F_1^{(M)} & & \ddots & & E_{m-M}^{(M)} \\ & \ddots & & \ddots & \\ & & F_{m-M}^{(M)} & & D_m^{(M)} \end{pmatrix},$$
where the superscript in parentheses appearing in $A^{(M)}$ specifies the position of the non-zero off-diagonal bands $E_i^{(M)}$ and $F_i^{(M)}$. If $D_i^{(M)}$, $E_i^{(M)}$, and $F_i^{(M)}$ are all 1-by-1 matrices for every $i$, then $A^{(M)}$ is the $M$-tridiagonal matrix [6] and $D_i^{(M)}$, $E_i^{(M)}$, and $F_i^{(M)}$ are the $(i, i)$, $(i, i+M)$, and $(i+M, i)$ entries of $A^{(M)}$, respectively. Thus, we hereinafter refer to $A^{(M)}$ as the block $M$-tridiagonal matrix and to $D_i^{(M)}$, $E_i^{(M)}$, and $F_i^{(M)}$ as the $(i, i)$, $(i, i+M)$, and $(i+M, i)$ blocks of $A^{(M)}$, respectively. Note that block 1-tridiagonal matrices are the usual block tridiagonal matrices. In the case where $D_i^{(M)}$, $E_i^{(M)}$, and $F_i^{(M)}$ are not 1-by-1 matrices, we must pay attention to the numbers of rows and columns of $D_i^{(M)}$, $E_i^{(M)}$, and $F_i^{(M)}$. For example, we can compute the matrix product $D_1^{(M)}E_1^{(M)}$ but cannot define $E_1^{(M)}D_1^{(M)}$ if $N_1 \ne N_{1+M}$, unlike in the case where $D_i^{(M)}$, $E_i^{(M)}$, and $F_i^{(M)}$ are all 1-by-1 matrices.
We hereinafter consider the case where the $D_i^{(M)}$ are all nonsingular. Here, we prepare an $N$-by-$N$ block $M$-tridiagonal matrix involving the non-zero blocks $D_i^{(M)}$, $E_i^{(M)}$, and $F_i^{(M)}$:
$$T^{(M)} := \begin{pmatrix} I_1 & & -E_1^{(M)}\bigl(D_{1+M}^{(M)}\bigr)^{-1} & & \\ & \ddots & & \ddots & \\ -F_1^{(M)}\bigl(D_1^{(M)}\bigr)^{-1} & & \ddots & & -E_{m-M}^{(M)}\bigl(D_m^{(M)}\bigr)^{-1} \\ & \ddots & & \ddots & \\ & & -F_{m-M}^{(M)}\bigl(D_{m-M}^{(M)}\bigr)^{-1} & & I_m \end{pmatrix},$$
where $I_i$ are the $N_i$-by-$N_i$ identity matrices. The numbers of rows and columns of $I_i$, $-E_i^{(M)}(D_{i+M}^{(M)})^{-1}$, and $-F_i^{(M)}(D_i^{(M)})^{-1}$ coincide with those of $D_i^{(M)}$, $E_i^{(M)}$, and $F_i^{(M)}$, respectively. In other words, $T^{(M)}$ has the same block structure as $A^{(M)}$. Then, we can easily observe that the $(i, i+M)$ and $(i+M, i)$ blocks of $T^{(M)}A^{(M)}$ are all zero and the $(i, i+2M)$ and $(i+2M, i)$ blocks are all non-zero, meaning that $T^{(M)}A^{(M)}$ becomes an $N$-by-$N$ block $2M$-tridiagonal matrix $A^{(2M)}$. The non-zero blocks $D_i^{(2M)}$, $E_i^{(2M)}$, and $F_i^{(2M)}$ appearing in the $(i, i)$, $(i, i+2M)$, and $(i+2M, i)$ blocks are also expressed using $D_i^{(M)}$, $E_i^{(M)}$, and $F_i^{(M)}$ as:
$$\begin{aligned} D_i^{(2M)} &:= D_i^{(M)} - E_i^{(M)}\bigl(D_{i+M}^{(M)}\bigr)^{-1}F_i^{(M)}, & i &= 1, 2, \dots, M,\\ D_i^{(2M)} &:= D_i^{(M)} - F_{i-M}^{(M)}\bigl(D_{i-M}^{(M)}\bigr)^{-1}E_{i-M}^{(M)} - E_i^{(M)}\bigl(D_{i+M}^{(M)}\bigr)^{-1}F_i^{(M)}, & i &= M+1, \dots, m-M,\\ D_i^{(2M)} &:= D_i^{(M)} - F_{i-M}^{(M)}\bigl(D_{i-M}^{(M)}\bigr)^{-1}E_{i-M}^{(M)}, & i &= m-M+1, m-M+2, \dots, m,\\ E_i^{(2M)} &:= -E_i^{(M)}\bigl(D_{i+M}^{(M)}\bigr)^{-1}E_{i+M}^{(M)}, & i &= 1, 2, \dots, m-2M,\\ F_i^{(2M)} &:= -F_{i+M}^{(M)}\bigl(D_{i+M}^{(M)}\bigr)^{-1}F_i^{(M)}, & i &= 1, 2, \dots, m-2M. \end{aligned}$$
Thus, by multiplying the block $M$-tridiagonal linear system $A^{(M)}x = b^{(M)}$ by $T^{(M)}$ from the left on both sides, we transform it to the block $2M$-tridiagonal linear system $A^{(2M)}x = b^{(2M)}$, where $b^{(2M)} := T^{(M)}b^{(M)}$. This transformation is the block cyclic reduction [2]. We can again apply the block cyclic reduction to the block $2M$-tridiagonal linear system $A^{(2M)}x = b^{(2M)}$ if the diagonal blocks $D_i^{(2M)}$ in the block $2M$-tridiagonal matrix $A^{(2M)}$ are all nonsingular. The iterative block cyclic reductions therefore cause the non-zero off-diagonal blocks to gradually leave the diagonal blocks, eventually generating linear systems with block diagonal matrices.
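As a sketch of how one block cyclic reduction could be organized in practice, the following NumPy function (our own illustration; the names block_cr_step, sizes, and off are ours, not from the paper) assembles $T^{(M)}$ from the blocks of $A^{(M)}$ stored in a dense array and returns $A^{(2M)} = T^{(M)}A^{(M)}$ together with $T^{(M)}$ for updating the right-hand side:

```python
import numpy as np

# One block cyclic reduction step A^(2M) = T^(M) A^(M), assuming the diagonal
# blocks are nonsingular; `sizes` holds the block dimensions N_1, ..., N_m.
def block_cr_step(A, sizes, M):
    m = len(sizes)
    off = np.concatenate(([0], np.cumsum(sizes)))  # block boundary offsets
    s = lambda i: slice(off[i], off[i + 1])        # row/column range of block i
    T = np.eye(off[-1])
    for i in range(m - M):
        # (i, i+M) block of T^(M): -E_i^(M) (D_{i+M}^(M))^{-1}
        T[s(i), s(i + M)] = -A[s(i), s(i + M)] @ np.linalg.inv(A[s(i + M), s(i + M)])
        # (i+M, i) block of T^(M): -F_i^(M) (D_i^(M))^{-1}
        T[s(i + M), s(i)] = -A[s(i + M), s(i)] @ np.linalg.inv(A[s(i), s(i)])
    return T @ A, T
```

Starting from a block tridiagonal system, calling this with $M = 1, 2, 4, \dots$ (and replacing $b$ by T @ b each time) reproduces the iteration described above.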

3. Composite Transformation

In this section, we first show that block $M$-tridiagonal matrices can be transformed into block tridiagonal (1-tridiagonal) matrices while preserving the eigenvalues. Next, we consider the transformation from the block $M$-tridiagonal matrix $A^{(M)}$ to the block $2M$-tridiagonal matrix $A^{(2M)}$ in terms of the transformation from the block tridiagonal (1-tridiagonal) matrix to the block 2-tridiagonal matrix combined with two similarity transformations.
We consider $N$-by-$N_i$ matrices:
$$P_i := \bigl(\,\underbrace{O\ \cdots\ O}_{\text{1st to }(i-1)\text{th blocks}}\ \ I_i\ \ \underbrace{O\ \cdots\ O}_{(i+1)\text{th to }m\text{th blocks}}\,\bigr)^{\top}, \quad i = 1, 2, \dots, m,$$
where $O$ denotes the zero matrix, whose entries are all 0, and the numbers of rows in the 1st, 2nd, ⋯, $m$th blocks are equal to those of $D_1^{(M)}, D_2^{(M)}, \dots, D_m^{(M)}$, respectively. Then, it is obvious that $A^{(M)}P_1 \in \mathbb{R}^{N \times N_1}, A^{(M)}P_2 \in \mathbb{R}^{N \times N_2}, \dots, A^{(M)}P_m \in \mathbb{R}^{N \times N_m}$ become the 1st, 2nd, ⋯, $m$th block-columns of $A^{(M)}$, respectively. Furthermore, for $i = 1, 2, \dots, m$, it is observed that $P_1^{\top}(A^{(M)}P_i) \in \mathbb{R}^{N_1 \times N_i}, P_2^{\top}(A^{(M)}P_i) \in \mathbb{R}^{N_2 \times N_i}, \dots, P_m^{\top}(A^{(M)}P_i) \in \mathbb{R}^{N_m \times N_i}$ coincide with the 1st, 2nd, ⋯, $m$th block-rows of $A^{(M)}P_i$, respectively. Thus, we see that $P_j^{\top}A^{(M)}P_i$ are the $(j, i)$ blocks of $A^{(M)}$ for $i, j = 1, 2, \dots, m$—namely:
$$P_j^{\top}A^{(M)}P_i = \begin{cases} E_{i-M}^{(M)}, & j = i - M,\\ D_i^{(M)}, & j = i,\\ F_i^{(M)}, & j = i + M,\\ O, & \mathrm{otherwise}. \end{cases}$$
Here, we introduce $N \times (N_i + N_{i+M} + \cdots + N_{i+qM})$ matrices $\mathcal{P}_i := (P_i, P_{i+M}, \dots, P_{i+qM})$ for $i = 1, 2, \dots, r$, where $q$ and $r$ are the quotient and the remainder of the division of $m$ by $M$, respectively. Then, it follows that:
$$\mathcal{P}_i^{\top} A^{(M)} P_i = \begin{pmatrix} D_i^{(M)} \\ F_i^{(M)} \\ O \\ \vdots \\ O \end{pmatrix} \in \mathbb{R}^{(N_i + N_{i+M} + \cdots + N_{i+qM}) \times N_i}, \quad i = 1, 2, \dots, r,$$
$$\mathcal{P}_i^{\top} A^{(M)} P_{i+kM} = \begin{pmatrix} O \\ \vdots \\ O \\ E_{i+(k-1)M}^{(M)} \\ D_{i+kM}^{(M)} \\ F_{i+kM}^{(M)} \\ O \\ \vdots \\ O \end{pmatrix} \in \mathbb{R}^{(N_i + N_{i+M} + \cdots + N_{i+qM}) \times N_{i+kM}}, \quad i = 1, 2, \dots, r, \quad k = 1, 2, \dots, q-1,$$
$$\mathcal{P}_i^{\top} A^{(M)} P_{i+qM} = \begin{pmatrix} O \\ \vdots \\ O \\ E_{i+(q-1)M}^{(M)} \\ D_{i+qM}^{(M)} \end{pmatrix} \in \mathbb{R}^{(N_i + N_{i+M} + \cdots + N_{i+qM}) \times N_{i+qM}}, \quad i = 1, 2, \dots, r.$$
Thus, by gathering these, we can derive:
$$\mathcal{P}_i^{\top} A^{(M)} \mathcal{P}_i = \begin{pmatrix} D_i^{(M)} & E_i^{(M)} & & \\ F_i^{(M)} & D_{i+M}^{(M)} & \ddots & \\ & \ddots & \ddots & E_{i+(q-1)M}^{(M)} \\ & & F_{i+(q-1)M}^{(M)} & D_{i+qM}^{(M)} \end{pmatrix} \in \mathbb{R}^{(N_i + N_{i+M} + \cdots + N_{i+qM}) \times (N_i + N_{i+M} + \cdots + N_{i+qM})}, \quad i = 1, 2, \dots, r. \tag{6}$$
Similarly, by letting $\mathcal{P}_i := (P_i, P_{i+M}, \dots, P_{i+(q-1)M}) \in \mathbb{R}^{N \times (N_i + N_{i+M} + \cdots + N_{i+(q-1)M})}$ for $i = r+1, r+2, \dots, M$, we obtain:
$$\mathcal{P}_i^{\top} A^{(M)} \mathcal{P}_i = \begin{pmatrix} D_i^{(M)} & E_i^{(M)} & & \\ F_i^{(M)} & D_{i+M}^{(M)} & \ddots & \\ & \ddots & \ddots & E_{i+(q-2)M}^{(M)} \\ & & F_{i+(q-2)M}^{(M)} & D_{i+(q-1)M}^{(M)} \end{pmatrix} \in \mathbb{R}^{(N_i + N_{i+M} + \cdots + N_{i+(q-1)M}) \times (N_i + N_{i+M} + \cdots + N_{i+(q-1)M})}, \quad i = r+1, r+2, \dots, M. \tag{7}$$
See also Figure 1 for an auxiliary example of gathering a block tridiagonal part, as shown in (6) and (7). Therefore, using the permutation matrix $P := (\mathcal{P}_1\ \mathcal{P}_2\ \cdots\ \mathcal{P}_M)$, we can complete a block tridiagonalization of $A^{(M)}$ as:
$$P^{\top} A^{(M)} P = \operatorname{diag}\bigl(\tilde{A}_1^{(1)}, \tilde{A}_2^{(1)}, \dots, \tilde{A}_M^{(1)}\bigr), \tag{8}$$
where $\tilde{A}_i^{(1)} := \mathcal{P}_i^{\top} A^{(M)} \mathcal{P}_i$. Here, we may regard $\operatorname{diag}(\tilde{A}_1^{(1)}, \tilde{A}_2^{(1)}, \dots, \tilde{A}_M^{(1)})$ as a block diagonal matrix in terms of $\tilde{A}_1^{(1)}, \tilde{A}_2^{(1)}, \dots, \tilde{A}_M^{(1)}$. However, we emphasize that $\tilde{A}_1^{(1)}, \tilde{A}_2^{(1)}, \dots, \tilde{A}_M^{(1)}$ are nothing but auxiliary matrices and are essentially block tridiagonal matrices in terms of the realistic blocks $D_i^{(M)}$, $E_i^{(M)}$, and $F_i^{(M)}$. Furthermore, in the following sections, we should recognize that (8) is a block tridiagonalization and not a block diagonalization of $A^{(M)}$. Figure 2 gives a sketch of a block tridiagonalization of $A^{(M)}$. Noting that $P$ is an orthogonal matrix—namely, $P^{\top} = P^{-1}$—we can determine that $\operatorname{diag}(\tilde{A}_1^{(1)}, \tilde{A}_2^{(1)}, \dots, \tilde{A}_M^{(1)})$ has the same eigenvalues as $A^{(M)}$. To summarize, we can divide a linear system with the block $M$-tridiagonal coefficient matrix $A^{(M)}$ into $M$ linear systems with the block tridiagonal coefficient matrices $\tilde{A}_1^{(1)}, \tilde{A}_2^{(1)}, \dots, \tilde{A}_M^{(1)}$ without losing the eigenvalues of $A^{(M)}$.
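The block tridiagonalization (8) is easy to observe numerically in the scalar-block case, where $P$ reduces to an ordinary permutation. The following sketch (our own check; the example matrix is arbitrary) groups the indices by residue modulo $M$ and confirms that the permuted matrix is block diagonal with tridiagonal blocks and has the same eigenvalues:

```python
import numpy as np

# Scalar-block check of the tridiagonalization (8): indices with the same
# residue mod M form the groups picked out by P_1, ..., P_M.
m, M = 10, 3
rng = np.random.default_rng(0)
d = rng.uniform(-1.0, 1.0, m - M)
A = np.diag(rng.uniform(4.0, 5.0, m)) + np.diag(d, M) + np.diag(d, -M)

perm = [i for s in range(M) for i in range(s, m, M)]  # columns of P
Ap = A[np.ix_(perm, perm)]                            # P^T A P

print((np.abs(Ap) > 0).astype(int))  # tridiagonal blocks of sizes 4, 3, 3
print(np.allclose(np.linalg.eigvalsh(Ap), np.linalg.eigvalsh(A)))  # True
```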
Recalling that $A^{(2M)} = T^{(M)}A^{(M)}$, we can decompose $P^{-1}A^{(2M)}P$ into:
$$P^{-1} A^{(2M)} P = \bigl(P^{-1} T^{(M)} P\bigr)\bigl(P^{-1} A^{(M)} P\bigr). \tag{9}$$
Noting that $T^{(M)}$ has the same block structure as $A^{(M)}$, whose block tridiagonalization is shown above, we can immediately derive:
$$P^{-1} T^{(M)} P = \operatorname{diag}\bigl(\tilde{T}_1^{(1)}, \tilde{T}_2^{(1)}, \dots, \tilde{T}_M^{(1)}\bigr), \tag{10}$$
where
$$\tilde{T}_j^{(1)} := \begin{pmatrix} I_j & -E_j^{(M)}\bigl(D_{j+M}^{(M)}\bigr)^{-1} & & \\ -F_j^{(M)}\bigl(D_j^{(M)}\bigr)^{-1} & I_{j+M} & \ddots & \\ & \ddots & \ddots & -E_{j+(q-1)M}^{(M)}\bigl(D_{j+qM}^{(M)}\bigr)^{-1} \\ & & -F_{j+(q-1)M}^{(M)}\bigl(D_{j+(q-1)M}^{(M)}\bigr)^{-1} & I_{j+qM} \end{pmatrix}, \quad j = 1, 2, \dots, r, \tag{11}$$
$$\tilde{T}_j^{(1)} := \begin{pmatrix} I_j & -E_j^{(M)}\bigl(D_{j+M}^{(M)}\bigr)^{-1} & & \\ -F_j^{(M)}\bigl(D_j^{(M)}\bigr)^{-1} & I_{j+M} & \ddots & \\ & \ddots & \ddots & -E_{j+(q-2)M}^{(M)}\bigl(D_{j+(q-1)M}^{(M)}\bigr)^{-1} \\ & & -F_{j+(q-2)M}^{(M)}\bigl(D_{j+(q-2)M}^{(M)}\bigr)^{-1} & I_{j+(q-1)M} \end{pmatrix}, \quad j = r+1, r+2, \dots, M. \tag{12}$$
Combining (8) and (10) with (9), we obtain:
$$P^{-1} A^{(2M)} P = \operatorname{diag}\bigl(\tilde{T}_1^{(1)}\tilde{A}_1^{(1)}, \tilde{T}_2^{(1)}\tilde{A}_2^{(1)}, \dots, \tilde{T}_M^{(1)}\tilde{A}_M^{(1)}\bigr). \tag{13}$$
This implies that the transformation from $A^{(M)}$ to $A^{(2M)}$ can also be completed using three transformations: (1) the block tridiagonalization from $A^{(M)}$ to $P^{-1}A^{(M)}P$; (2) the transformations from the block 1-tridiagonal matrices $\tilde{A}_i^{(1)}$ to the block 2-tridiagonal matrices $\tilde{T}_i^{(1)}\tilde{A}_i^{(1)}$; and (3) the block $2M$-tridiagonalization from $P^{-1}A^{(2M)}P$ to $A^{(2M)}$. See Figure 3 for the relationships among the block tridiagonal (1-tridiagonal), 2-tridiagonal, $M$-tridiagonal, and $2M$-tridiagonal matrices.

4. Inverses of Block 1-Tridiagonal and 2-Tridiagonal Matrices

In this section, we express the inverse of the block 1-tridiagonal matrix $A^{(1)}$ using that of the block 2-tridiagonal matrix $A^{(2)}$. This is useful for comparing the roots of the characteristic polynomials of the block 1-tridiagonal matrix $A^{(1)}$ and the block 2-tridiagonal matrix $A^{(2)}$ in the next section.
We introduce two auxiliary matrices $D := \operatorname{diag}(D_1^{(1)}, D_2^{(1)}, \dots, D_m^{(1)})$ and $A_R^{(1)} := RA^{(1)}R$, where $R := \operatorname{diag}((-1)I_1, (-1)^2I_2, \dots, (-1)^mI_m)$ involving the $N_i$-dimensional identity matrices $I_i$. Then, we derive the following lemma expressing the transformation matrix $T^{(1)}$ in terms of $D$ and $A_R^{(1)}$.
Lemma 1.
The transformation matrix $T^{(1)}$ can be decomposed using $D$ and $A_R^{(1)}$ into:
$$T^{(1)} = A_R^{(1)} D^{-1}. \tag{14}$$
Proof. 
It is obvious that $RA^{(1)}$ and $RA^{(1)}R$ are both block 1-tridiagonal matrices. It is easy to check that $(RA^{(1)})_{i,j} = ((-1)^iA^{(1)})_{i,j}$, where $(\cdot)_{i,j}$ denotes the $(i, j)$ block of a matrix. Similarly, it turns out that $(RA^{(1)}R)_{i,j} = ((-1)^jRA^{(1)})_{i,j}$. Thus, by noting that $(A_R^{(1)})_{i,j} = ((-1)^{i+j}A^{(1)})_{i,j}$, we can derive:
$$A_R^{(1)} = \begin{pmatrix} D_1^{(1)} & -E_1^{(1)} & & \\ -F_1^{(1)} & D_2^{(1)} & \ddots & \\ & \ddots & \ddots & -E_{m-1}^{(1)} \\ & & -F_{m-1}^{(1)} & D_m^{(1)} \end{pmatrix}. \tag{15}$$
Furthermore, it can easily be observed that $T^{(1)}D = A_R^{(1)}$. Noting that $D$ is nonsingular, we obtain (14). □
Since it is obvious that $R^{-1} = R$, it holds that $A_R^{(1)} = R^{-1}A^{(1)}R$. This implies that the eigenvalues of $A_R^{(1)}$ coincide with those of $A^{(1)}$. Thus, $A_R^{(1)}$ is nonsingular if $A^{(1)}$ is nonsingular. From Lemma 1, it immediately follows that $\det A^{(2)} = (\det A_R^{(1)})(\det D^{-1})(\det A^{(1)})$. Therefore, the inverse of $A^{(2)}$ exists if $A^{(1)}$ is nonsingular. The following proposition gives the relationship of $(A^{(2)})^{-1}$ to $(A^{(1)})^{-1}$ and $(A_R^{(1)})^{-1}$.
Proposition 1.
For the $(i, j)$ blocks $((A^{(1)})^{-1})_{i,j}$ and $((A^{(2)})^{-1})_{i,j}$, it holds that:
$$\bigl((A^{(2)})^{-1}\bigr)_{i,j} = \begin{cases} \bigl((A^{(1)})^{-1}\bigr)_{i,j}, & \text{if } i+j \text{ is even},\\ O, & \text{if } i+j \text{ is odd}, \end{cases} \tag{16}$$
where $O$ denotes the zero matrix whose entries are all 0, as shown previously. Accordingly,
$$(A^{(2)})^{-1} = \frac{1}{2}\Bigl((A^{(1)})^{-1} + (A_R^{(1)})^{-1}\Bigr). \tag{17}$$
Proof. 
Observing the $i$th block-row on both sides of the trivial identity $(A^{(1)})^{-1}A^{(1)} = I_N$, we can obtain $m$ matrix equations:
$$\begin{aligned} &\bigl((A^{(1)})^{-1}\bigr)_{i,1} D_1^{(1)} + \bigl((A^{(1)})^{-1}\bigr)_{i,2} F_1^{(1)} = O,\\ &\bigl((A^{(1)})^{-1}\bigr)_{i,1} E_1^{(1)} + \bigl((A^{(1)})^{-1}\bigr)_{i,2} D_2^{(1)} + \bigl((A^{(1)})^{-1}\bigr)_{i,3} F_2^{(1)} = O,\\ &\quad\vdots\\ &\bigl((A^{(1)})^{-1}\bigr)_{i,i-1} E_{i-1}^{(1)} + \bigl((A^{(1)})^{-1}\bigr)_{i,i} D_i^{(1)} + \bigl((A^{(1)})^{-1}\bigr)_{i,i+1} F_i^{(1)} = I_i,\\ &\quad\vdots\\ &\bigl((A^{(1)})^{-1}\bigr)_{i,m-2} E_{m-2}^{(1)} + \bigl((A^{(1)})^{-1}\bigr)_{i,m-1} D_{m-1}^{(1)} + \bigl((A^{(1)})^{-1}\bigr)_{i,m} F_{m-1}^{(1)} = O,\\ &\bigl((A^{(1)})^{-1}\bigr)_{i,m-1} E_{m-1}^{(1)} + \bigl((A^{(1)})^{-1}\bigr)_{i,m} D_m^{(1)} = O. \end{aligned} \tag{18}$$
Multiplying both sides of the 1st, 2nd, ⋯, $m$th equations in (18) from the right by $((A^{(1)})^{-1})_{1,j}, ((A^{(1)})^{-1})_{2,j}, \dots, ((A^{(1)})^{-1})_{m,j}$, respectively, we can rewrite (18) as:
$$\begin{aligned} &\hat{D}_1^{(1)} + \hat{F}_1^{(1)} = O,\\ &\hat{E}_1^{(1)} + \hat{D}_2^{(1)} + \hat{F}_2^{(1)} = O,\\ &\quad\vdots\\ &\hat{E}_{i-1}^{(1)} + \hat{D}_i^{(1)} + \hat{F}_i^{(1)} = \bigl((A^{(1)})^{-1}\bigr)_{i,j},\\ &\quad\vdots\\ &\hat{E}_{m-2}^{(1)} + \hat{D}_{m-1}^{(1)} + \hat{F}_{m-1}^{(1)} = O,\\ &\hat{E}_{m-1}^{(1)} + \hat{D}_m^{(1)} = O, \end{aligned} \tag{19}$$
where $\hat{D}_k^{(1)} := ((A^{(1)})^{-1})_{i,k} D_k^{(1)} ((A^{(1)})^{-1})_{k,j}$, $\hat{E}_k^{(1)} := ((A^{(1)})^{-1})_{i,k} E_k^{(1)} ((A^{(1)})^{-1})_{k+1,j}$, and $\hat{F}_k^{(1)} := ((A^{(1)})^{-1})_{i,k+1} F_k^{(1)} ((A^{(1)})^{-1})_{k,j}$. Similarly, by focusing on the $j$th block-column on both sides of $A^{(1)}(A^{(1)})^{-1} = I_N$ and multiplying the 1st, 2nd, ⋯, $m$th equations from the left by $((A^{(1)})^{-1})_{i,1}, ((A^{(1)})^{-1})_{i,2}, \dots, ((A^{(1)})^{-1})_{i,m}$, respectively, we can derive:
$$\begin{aligned} &\hat{D}_1^{(1)} + \hat{E}_1^{(1)} = O,\\ &\hat{F}_1^{(1)} + \hat{D}_2^{(1)} + \hat{E}_2^{(1)} = O,\\ &\quad\vdots\\ &\hat{F}_{j-1}^{(1)} + \hat{D}_j^{(1)} + \hat{E}_j^{(1)} = \bigl((A^{(1)})^{-1}\bigr)_{i,j},\\ &\quad\vdots\\ &\hat{F}_{m-2}^{(1)} + \hat{D}_{m-1}^{(1)} + \hat{E}_{m-1}^{(1)} = O,\\ &\hat{F}_{m-1}^{(1)} + \hat{D}_m^{(1)} = O. \end{aligned} \tag{20}$$
Adding the $k$th equation of (19) to that of (20), multiplying this by $(-1)^k$, and letting $G_k^{(1)} := \hat{E}_k^{(1)} + \hat{F}_k^{(1)}$, we can thus obtain:
$$(-1)^k\Bigl(G_{k-1}^{(1)} + 2\hat{D}_k^{(1)} + G_k^{(1)}\Bigr) = \begin{cases} (-1)^i\bigl((A^{(1)})^{-1}\bigr)_{i,j}, & \text{if } k = i,\\ (-1)^j\bigl((A^{(1)})^{-1}\bigr)_{i,j}, & \text{if } k = j,\\ O, & \text{otherwise}, \end{cases} \tag{21}$$
where $G_0^{(1)} := O$ and $G_m^{(1)} := O$. The summation of (21) over $k = 1, 2, \dots, m$ leads to:
$$2\sum_{k=1}^{m}(-1)^k \hat{D}_k^{(1)} = \bigl[(-1)^i + (-1)^j\bigr]\bigl((A^{(1)})^{-1}\bigr)_{i,j}. \tag{22}$$
From Lemma 1, we can see that $(A^{(2)})^{-1} = (A_R^{(1)}D^{-1}A^{(1)})^{-1} = (A^{(1)})^{-1}D(A_R^{(1)})^{-1}$. Since it follows from $(A_R^{(1)})^{-1} = (RA^{(1)}R)^{-1} = R(A^{(1)})^{-1}R$ that $((A_R^{(1)})^{-1})_{k,j} = (-1)^{k+j}((A^{(1)})^{-1})_{k,j}$, we can obtain:
$$\bigl((A^{(2)})^{-1}\bigr)_{i,j} = \sum_{k=1}^{m}(-1)^{k+j}\bigl((A^{(1)})^{-1}\bigr)_{i,k} D_k^{(1)} \bigl((A^{(1)})^{-1}\bigr)_{k,j} = \sum_{k=1}^{m}(-1)^{k+j}\hat{D}_k^{(1)}. \tag{23}$$
Consequently, by combining (22) with (23), we can derive:
$$\bigl((A^{(2)})^{-1}\bigr)_{i,j} = \frac{1+(-1)^{i+j}}{2}\bigl((A^{(1)})^{-1}\bigr)_{i,j}, \tag{24}$$
which implies (16). The matrix identity $(A_R^{(1)})^{-1} = R(A^{(1)})^{-1}R$ also gives the relationship between the blocks of $(A^{(1)})^{-1}$ and $(A_R^{(1)})^{-1}$:
$$\bigl((A_R^{(1)})^{-1}\bigr)_{i,j} = \begin{cases} \bigl((A^{(1)})^{-1}\bigr)_{i,j}, & \text{if } i+j \text{ is even},\\ -\bigl((A^{(1)})^{-1}\bigr)_{i,j}, & \text{if } i+j \text{ is odd}. \end{cases} \tag{25}$$
Considering (24) and (25), we then have (17). □
Proposition 1 plays a key role in understanding the change in the roots of characteristic polynomials of coefficient matrices in linear systems after block cyclic reductions.
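Lemma 1 and Proposition 1 are also straightforward to check numerically. The sketch below (our own verification; the block sizes and random entries are arbitrary choices) builds a symmetric block tridiagonal $A^{(1)}$ with nonsingular diagonal blocks and confirms the identity (17):

```python
import numpy as np

# Numerical check of Lemma 1 and Proposition 1 for block sizes 2, 3, 2.
rng = np.random.default_rng(1)
sizes = [2, 3, 2]
off = np.concatenate(([0], np.cumsum(sizes)))
N = off[-1]
G = rng.standard_normal((N, N))
A1 = G @ G.T + N * np.eye(N)        # symmetric, diagonally shifted
for i in range(len(sizes)):         # zero the blocks with |i - j| > 1
    for j in range(len(sizes)):
        if abs(i - j) > 1:
            A1[off[i]:off[i+1], off[j]:off[j+1]] = 0.0

R = np.zeros((N, N))                # R = diag(-I_1, I_2, -I_3)
D = np.zeros((N, N))                # D = diag(D_1, D_2, D_3)
for i in range(len(sizes)):
    blk = slice(off[i], off[i+1])
    R[blk, blk] = (-1.0) ** (i + 1) * np.eye(sizes[i])
    D[blk, blk] = A1[blk, blk]

AR = R @ A1 @ R                     # A_R^(1)
T1 = AR @ np.linalg.inv(D)          # Lemma 1: T^(1) = A_R^(1) D^(-1)
A2 = T1 @ A1                        # A^(2) = T^(1) A^(1)
print(np.allclose(np.linalg.inv(A2),
                  0.5 * (np.linalg.inv(A1) + np.linalg.inv(AR))))  # (17): True
```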

5. Roots of Characteristic Polynomial Sequence

In this section, we first investigate the roots of the characteristic polynomials of the block 2-tridiagonal matrix $A^{(2)} = T^{(1)}A^{(1)}$, which is obtained from the block 1-tridiagonal matrix $A^{(1)}$ by the block cyclic reduction, under the assumption that $A^{(1)}$ is symmetric positive definite.
With the help of Proposition 1, we present a theorem on the roots of the characteristic polynomials of $A^{(1)}$ and $A^{(2)}$.
Theorem 1.
Assume that the block tridiagonal matrix $A^{(1)}$ is symmetric positive definite. Then, for the block 2-tridiagonal matrix $A^{(2)} = T^{(1)}A^{(1)}$, it holds that:
$$\lambda_N\bigl(A^{(1)}\bigr) \le \lambda_k\bigl(A^{(2)}\bigr) \le \lambda_1\bigl(A^{(1)}\bigr), \quad k = 1, 2, \dots, N, \tag{26}$$
where $\lambda_k(\cdot)$ denotes the $k$th largest root of the characteristic polynomial of a matrix—namely, the $k$th largest eigenvalue of the matrix.
Proof. 
Let $u_1 \in \mathbb{R}^N$ be a normalized eigenvector corresponding to $\lambda_1((A^{(1)})^{-1} + (A_R^{(1)})^{-1})$. Noting that $(A^{(1)})^{-1}$ and $(A_R^{(1)})^{-1}$ are both symmetric and considering the Rayleigh quotient [10], we can derive:
$$u_1^{\top}(A^{(1)})^{-1}u_1 \le \lambda_1\bigl((A^{(1)})^{-1}\bigr), \tag{27}$$
$$u_1^{\top}(A_R^{(1)})^{-1}u_1 \le \lambda_1\bigl((A_R^{(1)})^{-1}\bigr) = \lambda_1\bigl((A^{(1)})^{-1}\bigr). \tag{28}$$
Equality holds in (27) if and only if $u_1$ is the eigenvector of $(A^{(1)})^{-1}$ corresponding to $\lambda_1((A^{(1)})^{-1})$, while equality holds in (28) if and only if $u_1$ is also the eigenvector of $(A_R^{(1)})^{-1}$ corresponding to $\lambda_1((A_R^{(1)})^{-1}) = \lambda_1((A^{(1)})^{-1})$. The inequalities (27) and (28) immediately lead to:
$$u_1^{\top}\Bigl((A^{(1)})^{-1} + (A_R^{(1)})^{-1}\Bigr)u_1 \le 2\lambda_1\bigl((A^{(1)})^{-1}\bigr). \tag{29}$$
Using Proposition 1, we can rewrite the Rayleigh quotient $u_1^{\top}((A^{(1)})^{-1} + (A_R^{(1)})^{-1})u_1$ as:
$$u_1^{\top}\Bigl((A^{(1)})^{-1} + (A_R^{(1)})^{-1}\Bigr)u_1 = 2\lambda_1\bigl((A^{(2)})^{-1}\bigr). \tag{30}$$
From (29) and (30), we can see that:
$$\lambda_k\bigl((A^{(2)})^{-1}\bigr) \le \lambda_1\bigl((A^{(1)})^{-1}\bigr), \quad k = 1, 2, \dots, N. \tag{31}$$
Similarly, from a comparison of the Rayleigh quotient of $(A^{(1)})^{-1} + (A_R^{(1)})^{-1}$ at the normalized eigenvector corresponding to its minimal eigenvalue $2\lambda_N((A^{(2)})^{-1})$ with the minimal eigenvalue $\lambda_N((A^{(1)})^{-1})$, it follows that:
$$\lambda_k\bigl((A^{(2)})^{-1}\bigr) \ge \lambda_N\bigl((A^{(1)})^{-1}\bigr), \quad k = 1, 2, \dots, N. \tag{32}$$
Thus, by combining (31) with (32), we can obtain:
$$\lambda_N\bigl((A^{(1)})^{-1}\bigr) \le \lambda_k\bigl((A^{(2)})^{-1}\bigr) \le \lambda_1\bigl((A^{(1)})^{-1}\bigr), \quad k = 1, 2, \dots, N.$$
Since the eigenvalues of a matrix are the reciprocals of eigenvalues of its inverse matrix, we therefore have (26). □
The following theorem is a specialization of Theorem 1.
Theorem 2.
Assume that the block tridiagonal matrix $A^{(1)}$ is symmetric positive definite and that its eigenvalues are distinct from each other. Furthermore, let the non-zero blocks all be nonsingular square matrices of the same matrix size. Then, for the block 2-tridiagonal matrix $A^{(2)} = T^{(1)}A^{(1)}$, it holds that:
$$\lambda_N\bigl(A^{(1)}\bigr) < \lambda_k\bigl(A^{(2)}\bigr) < \lambda_1\bigl(A^{(1)}\bigr), \quad k = 1, 2, \dots, N. \tag{33}$$
Proof. 
We begin by reconsidering the proof of Theorem 1. The proof of (33) is completed by proving $\lambda_1(A^{(1)}) > \lambda_1(A^{(2)})$ and $\lambda_N(A^{(1)}) < \lambda_N(A^{(2)})$. Equality does not hold in (31) if equality does not hold in (29). We recall here that equality does not hold in (29) if $u_1$ fails to be the eigenvector of at least one of $(A^{(1)})^{-1}$ and $(A_R^{(1)})^{-1}$ corresponding to $\lambda_1((A^{(1)})^{-1}) = \lambda_1((A_R^{(1)})^{-1})$. Noting that $\lambda_N(A^{(1)}) = (\lambda_1((A^{(1)})^{-1}))^{-1}$ and $\lambda_N(A^{(2)}) = (\lambda_1((A^{(2)})^{-1}))^{-1}$, we can thus see that $\lambda_N(A^{(1)}) < \lambda_N(A^{(2)})$ if the eigenvector of $A^{(1)}$ corresponding to $\lambda_N(A^{(1)})$ is not equal to that of $A_R^{(1)}$ corresponding to $\lambda_N(A_R^{(1)}) = \lambda_N(A^{(1)})$. Similarly, the inequality $\lambda_1(A^{(1)}) > \lambda_1(A^{(2)})$ holds if the eigenvector of $A^{(1)}$ corresponding to $\lambda_1(A^{(1)})$ is not equal to that of $A_R^{(1)}$ corresponding to $\lambda_1(A_R^{(1)}) = \lambda_1(A^{(1)})$.
Let us assume here that $v_1$ is the eigenvector of both $A^{(1)}$ and $A_R^{(1)}$ corresponding to $\lambda_1(A^{(1)}) = \lambda_1(A_R^{(1)})$—namely, $A^{(1)}v_1 = \lambda_1(A^{(1)})v_1$ and $A_R^{(1)}v_1 = \lambda_1(A^{(1)})v_1$. From $A^{(1)}(Rv_1) = R(A_R^{(1)}v_1)$, we can then derive:
$$A^{(1)}(Rv_1) = \lambda_1\bigl(A^{(1)}\bigr)(Rv_1).$$
This implies that $Rv_1$ is also an eigenvector of $A^{(1)}$ corresponding to $\lambda_1(A^{(1)})$. Noting that $A^{(1)}$ does not have multiple eigenvalues, we can thus obtain $v_1 = Rv_1$. Let $v_1^{(i)}$ denote the $N_i$-dimensional vector in the $i$th block-row of $v_1$. Then, by observing that:
$$\begin{pmatrix} v_1^{(1)} \\ v_1^{(2)} \\ \vdots \\ v_1^{(m)} \end{pmatrix} = \begin{pmatrix} -I_1 & & & \\ & I_2 & & \\ & & \ddots & \\ & & & (-1)^m I_m \end{pmatrix} \begin{pmatrix} v_1^{(1)} \\ v_1^{(2)} \\ \vdots \\ v_1^{(m)} \end{pmatrix},$$
we can specify $v_1$ as a vector whose odd-indexed blocks all vanish:
$$v_1 = \Bigl(o^{\top},\ \bigl(v_1^{(2)}\bigr)^{\top},\ o^{\top},\ \bigl(v_1^{(4)}\bigr)^{\top},\ \dots\Bigr)^{\top},$$
where $o$ denotes the zero vector whose entries are all 0. Thus, by focusing on the 1st, 3rd, ⋯, $m$th block-rows on both sides of $A^{(1)}v_1 = \lambda_1(A^{(1)})v_1$ with odd $m$, we have:
$$\begin{aligned} &E_1^{(1)} v_1^{(2)} = o,\\ &F_2^{(1)} v_1^{(2)} + E_3^{(1)} v_1^{(4)} = o,\\ &\quad\vdots\\ &F_{m-3}^{(1)} v_1^{(m-3)} + E_{m-2}^{(1)} v_1^{(m-1)} = o,\\ &F_{m-1}^{(1)} v_1^{(m-1)} = o. \end{aligned} \tag{34}$$
Since the non-zero off-diagonal blocks $E_i^{(1)}$ and $F_i^{(1)}$ are all nonsingular, (34) immediately leads to $v_1^{(2)} = o, v_1^{(4)} = o, \dots, v_1^{(m-1)} = o$. Namely, $v_1$ is the zero vector. This contradicts the assumption that $v_1$ is the eigenvector of both $A^{(1)}$ and $A_R^{(1)}$ corresponding to $\lambda_1(A^{(1)}) = \lambda_1(A_R^{(1)})$. The contradiction is similarly derived in the case where $m$ is even. Therefore, we conclude that $\lambda_1(A^{(1)}) > \lambda_1(A^{(2)})$. We also have $\lambda_N(A^{(1)}) < \lambda_N(A^{(2)})$ along the same lines as the above proof. □
We recall here that the transformation $A^{(2M)} = T^{(M)}A^{(M)}$ from the block $M$-tridiagonal matrix $A^{(M)}$ to the block $2M$-tridiagonal matrix $A^{(2M)}$ can be regarded as a composite of the transformations from block tridiagonal matrices to block 2-tridiagonal matrices and two similarity transformations. By combining this with Theorems 1 and 2, we can derive the following theorem concerning the roots of the characteristic polynomials of $A^{(M)}$ and $A^{(2M)} = T^{(M)}A^{(M)}$.
Theorem 3.
Assume that the block $M$-tridiagonal matrix $A^{(M)}$ is symmetric positive definite, where $M = 1, 2, \dots$. Then, for the block $2M$-tridiagonal matrix $A^{(2M)} = T^{(M)}A^{(M)}$, it holds that:
$$\lambda_N\bigl(A^{(M)}\bigr) \le \lambda_k\bigl(A^{(2M)}\bigr) \le \lambda_1\bigl(A^{(M)}\bigr), \quad k = 1, 2, \dots, N.$$
Furthermore, let the roots of the characteristic polynomials of $A^{(M)}$ be distinct from each other, the non-zero blocks all be nonsingular square matrices, and their matrix sizes all be the same. Then, it holds that:
$$\lambda_N\bigl(A^{(M)}\bigr) < \lambda_k\bigl(A^{(2M)}\bigr) < \lambda_1\bigl(A^{(M)}\bigr), \quad k = 1, 2, \dots, N.$$
It is obvious that $A^{(2)} = T^{(1)}A^{(1)}, A^{(4)} = T^{(2)}A^{(2)}, \dots$ are symmetric if the block tridiagonal matrix $A^{(1)} = A$ in the original linear system $Ax = b$ is symmetric. From Theorem 3, we can recursively see that $A^{(2)}, A^{(4)}, \dots$ are positive definite if the original coefficient matrix $A^{(1)} = A$ is positive definite. We then conclude that:
$$\cdots \le \frac{\lambda_1(A^{(2M)})}{\lambda_N(A^{(2M)})} \le \frac{\lambda_1(A^{(M)})}{\lambda_N(A^{(M)})} \le \cdots \le \frac{\lambda_1(A^{(2)})}{\lambda_N(A^{(2)})} \le \frac{\lambda_1(A^{(1)})}{\lambda_N(A^{(1)})},$$
or
$$\cdots < \frac{\lambda_1(A^{(2M)})}{\lambda_N(A^{(2M)})} < \frac{\lambda_1(A^{(M)})}{\lambda_N(A^{(M)})} < \cdots < \frac{\lambda_1(A^{(2)})}{\lambda_N(A^{(2)})} < \frac{\lambda_1(A^{(1)})}{\lambda_N(A^{(1)})},$$
as long as the block cyclic reductions are repeated. The discussion in this section can easily be adapted to the case where the original coefficient matrix $A^{(1)} = A$ is negative definite.
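The shrinking spectral bounds and decreasing ratios are easy to observe numerically. The following scalar-block sketch (our own check, reusing the stride_reduction function from the sketch in the Introduction) applies repeated reductions to a symmetric positive definite tridiagonal matrix and prints $\lambda_N$, $\lambda_1$, and $\lambda_1/\lambda_N$ after each step:

```python
import numpy as np

# Check of the eigenvalue bounds in Theorem 3 for an SPD tridiagonal matrix:
# lambda_min increases, lambda_max decreases, and the ratio shrinks.
def stride_reduction(A, M):
    N = A.shape[0]
    T = np.eye(N)
    for i in range(N - M):
        T[i, i + M] = -A[i, i + M] / A[i + M, i + M]
        T[i + M, i] = -A[i + M, i] / A[i, i]
    return T @ A

A = np.diag(np.arange(2.0, 10.0)) \
    - 0.5 * (np.diag(np.ones(7), 1) + np.diag(np.ones(7), -1))
M = 1
w = np.linalg.eigvalsh(A)
print(round(w[0], 4), round(w[-1], 4), round(w[-1] / w[0], 4))
while M < 8:
    A = stride_reduction(A, M)      # A^(2M) = T^(M) A^(M), still symmetric
    M *= 2
    w = np.linalg.eigvalsh(A)
    print(round(w[0], 4), round(w[-1], 4), round(w[-1] / w[0], 4))
```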

6. Numerical Examples

In this section, we give two examples for observing changes in the coefficient matrices in iterative block cyclic reductions for block tridiagonal linear systems $A^{(1)}x = b$, and for numerically verifying Theorems 1 and 2 with respect to the roots of the characteristic polynomials of the coefficient matrices. The numerical experiments were carried out on a computer with macOS Monterey (ver. 12.0.1) and a 2.4 GHz Intel Core i9 CPU, using MATLAB (R2021a). Computed values are given in floating-point arithmetic.
We first consider the case where $A^{(1)}$ is the 9-by-9 symmetric positive definite block tridiagonal matrix with $\lambda_1(A^{(1)}) \approx 11.3465$, $\lambda_9(A^{(1)}) \approx 0.9931$, and $\lambda_1(A^{(1)})/\lambda_9(A^{(1)}) \approx 11.3465/0.9931 \approx 11.4253$:
$$A^{(1)} = \begin{pmatrix} 2 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 1 & 3 & 1 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 4 & 1 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 5 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 6 & 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 & 1 & 7 & 1 & 0 & 1\\ 0 & 0 & 1 & 0 & 0 & 1 & 8 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 9 & 1\\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 10 \end{pmatrix},$$
where the values of $\lambda_1(A^{(1)})$, $\lambda_9(A^{(1)})$, and $\lambda_1(A^{(1)})/\lambda_9(A^{(1)})$ are computed using the MATLAB function eig and rounded to 4 digits after the decimal point. The diagonal blocks are all symmetric, and their matrix sizes are distinct from one another. Since the determinants of the $(1,1)$, $(2,2)$, and $(3,3)$ blocks are, respectively, 85, 322, and 89, the diagonal blocks are all nonsingular. The non-zero subdiagonal (and superdiagonal) blocks are not square matrices, but the transposes of the $(1,2)$ and $(2,3)$ blocks coincide with the $(2,1)$ and $(3,2)$ blocks, respectively. After the 1st block cyclic reduction, $A^{(1)}$ is transformed to the symmetric block 2-tridiagonal matrix:
$$A^{(2)} = \begin{pmatrix} 1.8292 & 1.0248 & -0.0031 & 0.0248 & 0 & 0 & 0 & -0.1708 & 0.0217\\ 1.0248 & 2.8509 & 1.0186 & -0.1491 & 0 & 0 & 0 & 0.0248 & -0.1304\\ -0.0031 & 1.0186 & 3.8727 & 1.0186 & 0 & 0 & 0 & -0.0031 & -0.1087\\ 0.0248 & -0.1491 & 1.0186 & 4.8509 & 0 & 0 & 0 & 0.0248 & -0.1304\\ 0 & 0 & 0 & 0 & 5.2759 & 1.2465 & -0.0476 & 0 & 0\\ 0 & 0 & 0 & 0 & 1.2465 & 6.1930 & 1.0753 & 0 & 0\\ 0 & 0 & 0 & 0 & -0.0476 & 1.0753 & 7.6048 & 0 & 0\\ -0.1708 & 0.0248 & -0.0031 & 0.0248 & 0 & 0 & 0 & 8.8292 & 1.0217\\ 0.0217 & -0.1304 & -0.1087 & -0.1304 & 0 & 0 & 0 & 1.0217 & 9.7609 \end{pmatrix},$$
where all non-zero entries are rounded to 4 digits after the decimal point. Using the MATLAB function eig, we can see that $\lambda_1(A^{(2)}) \approx 10.4242$ and $\lambda_9(A^{(2)}) \approx 1.0440$. Thus, $\lambda_1(A^{(2)})/\lambda_9(A^{(2)}) \approx 9.9849 < \lambda_1(A^{(1)})/\lambda_9(A^{(1)})$. It is also easy to check that the diagonal blocks of $A^{(2)}$ are all nonsingular. This implies that a block cyclic reduction can again be applied to the linear system with the coefficient matrix $A^{(2)}$. The 2nd block cyclic reduction then simplifies $A^{(2)}$ to the block diagonal matrix:
$$A^{(4)} = \begin{pmatrix} 1.8257 & 1.0259 & -0.0027 & 0.0259 & 0 & 0 & 0 & 0 & 0\\ 1.0259 & 2.8490 & 1.0171 & -0.1510 & 0 & 0 & 0 & 0 & 0\\ -0.0027 & 1.0171 & 3.8715 & 1.0171 & 0 & 0 & 0 & 0 & 0\\ 0.0259 & -0.1510 & 1.0171 & 4.8490 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 5.2759 & 1.2465 & -0.0476 & 0 & 0\\ 0 & 0 & 0 & 0 & 1.2465 & 6.1930 & 1.0753 & 0 & 0\\ 0 & 0 & 0 & 0 & -0.0476 & 1.0753 & 7.6048 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8.8052 & 1.0321\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1.0321 & 9.7475 \end{pmatrix}.$$
The MATLAB function eig immediately returns $\lambda_1(A^{(4)}) \approx 10.4109$ and $\lambda_9(A^{(4)}) \approx 1.0445$. Therefore, $\lambda_1(A^{(4)})/\lambda_9(A^{(4)}) \approx 9.9674 < \lambda_1(A^{(2)})/\lambda_9(A^{(2)})$.
Next, we deal with the case where $A^{(1)}$ is the 9-by-9 symmetric negative definite block tridiagonal matrix with $\lambda_1(A^{(1)}) \approx -1.172$, $\lambda_9(A^{(1)}) \approx -6.828$, and $\lambda_9(A^{(1)})/\lambda_1(A^{(1)}) \approx (-6.828)/(-1.172) \approx 5.826$:
$$A^{(1)} = \begin{pmatrix} -4 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & -4 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 1 & -4 & 0 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 0 & -4 & 1 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 1 & -4 & 1 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 1 & -4 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 & 0 & 0 & -4 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & -4 & 1\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & -4 \end{pmatrix}.$$
This is an example matrix appearing in the discretization of Poisson's equation [11]. It is different from the 1st example matrix in that all the blocks have the same matrix size. It is obvious that the non-zero blocks are all nonsingular 3-by-3 matrices. A block cyclic reduction then transforms $A^{(1)}$ into the symmetric block 2-tridiagonal matrix:
$$A^{(2)} = \begin{pmatrix} -3.7321 & 1.0714 & 0.0179 & 0 & 0 & 0 & 0.2679 & 0.0714 & 0.0179\\ 1.0714 & -3.7143 & 1.0714 & 0 & 0 & 0 & 0.0714 & 0.2857 & 0.0714\\ 0.0179 & 1.0714 & -3.7321 & 0 & 0 & 0 & 0.0179 & 0.0714 & 0.2679\\ 0 & 0 & 0 & -3.4643 & 1.1429 & 0.0357 & 0 & 0 & 0\\ 0 & 0 & 0 & 1.1429 & -3.4286 & 1.1429 & 0 & 0 & 0\\ 0 & 0 & 0 & 0.0357 & 1.1429 & -3.4643 & 0 & 0 & 0\\ 0.2679 & 0.0714 & 0.0179 & 0 & 0 & 0 & -3.7321 & 1.0714 & 0.0179\\ 0.0714 & 0.2857 & 0.0714 & 0 & 0 & 0 & 1.0714 & -3.7143 & 1.0714\\ 0.0179 & 0.0714 & 0.2679 & 0 & 0 & 0 & 0.0179 & 1.0714 & -3.7321 \end{pmatrix},$$
where $\lambda_1(A^{(2)}) \approx -1.8123$, $\lambda_9(A^{(2)}) \approx -5.4142$, and $\lambda_9(A^{(2)})/\lambda_1(A^{(2)}) \approx 2.9875$. Since the non-zero blocks in $A^{(2)}$ are all nonsingular, we can simplify $A^{(2)}$ to the block diagonal matrix:
$$A^{(4)} = \begin{pmatrix} -3.7052 & 1.0932 & 0.0282 & 0 & 0 & 0 & 0 & 0 & 0\\ 1.0932 & -3.6770 & 1.0932 & 0 & 0 & 0 & 0 & 0 & 0\\ 0.0282 & 1.0932 & -3.7052 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -3.4643 & 1.1429 & 0.0357 & 0 & 0 & 0\\ 0 & 0 & 0 & 1.1429 & -3.4286 & 1.1429 & 0 & 0 & 0\\ 0 & 0 & 0 & 0.0357 & 1.1429 & -3.4643 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & -3.7052 & 1.0932 & 0.0282\\ 0 & 0 & 0 & 0 & 0 & 0 & 1.0932 & -3.6770 & 1.0932\\ 0 & 0 & 0 & 0 & 0 & 0 & 0.0282 & 1.0932 & -3.7052 \end{pmatrix},$$
with $\lambda_1(A^{(4)}) \approx -1.8123$, $\lambda_9(A^{(4)}) \approx -5.2230$, and $\lambda_9(A^{(4)})/\lambda_1(A^{(4)}) \approx 2.8820$. Thus, it can numerically be observed that $\lambda_9(A^{(1)})/\lambda_1(A^{(1)}) > \lambda_9(A^{(2)})/\lambda_1(A^{(2)}) > \lambda_9(A^{(4)})/\lambda_1(A^{(4)})$.
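The second example can be reproduced with a few lines of code. The sketch below (our own reproduction, repeating the dense block cyclic reduction step sketched in Section 2 for self-containment) builds the negative definite Poisson-type matrix, applies two reductions, and prints $\lambda_1$, $\lambda_9$, and $\lambda_9/\lambda_1$ at each stage:

```python
import numpy as np

# Reproducing the second example: two block cyclic reductions of the
# 9-by-9 negative definite Poisson-type matrix with 3-by-3 blocks.
def block_cr_step(A, sizes, M):
    off = np.concatenate(([0], np.cumsum(sizes)))
    s = lambda i: slice(off[i], off[i + 1])
    T = np.eye(off[-1])
    for i in range(len(sizes) - M):
        T[s(i), s(i + M)] = -A[s(i), s(i + M)] @ np.linalg.inv(A[s(i + M), s(i + M)])
        T[s(i + M), s(i)] = -A[s(i + M), s(i)] @ np.linalg.inv(A[s(i), s(i)])
    return T @ A

B = np.diag(-4.0 * np.ones(3)) + np.diag(np.ones(2), 1) + np.diag(np.ones(2), -1)
J = np.diag(np.ones(2), 1) + np.diag(np.ones(2), -1)
A = np.kron(np.eye(3), B) + np.kron(J, np.eye(3))   # A^(1) of the 2nd example

for M in (1, 2):
    w = np.linalg.eigvalsh(A)
    print(round(w[-1], 4), round(w[0], 4), round(w[0] / w[-1], 4))  # lambda_1, lambda_9, ratio
    A = block_cr_step(A, [3, 3, 3], M)
w = np.linalg.eigvalsh(A)
print(round(w[-1], 4), round(w[0], 4), round(w[0] / w[-1], 4))
```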

7. Concluding Remarks

This paper focused on coefficient matrices in linear systems obtained from iterative block cyclic reductions. We showed that block $M$-tridiagonal coefficient matrices can be transformed to block tridiagonal matrices without changing the eigenvalues. We then interpreted the transformation from block $M$-tridiagonal matrices to block $2M$-tridiagonal matrices appearing in the first step of the block cyclic reduction method as a composite of a block tridiagonalization, its inverse, and the transformation from block tridiagonal matrices to block 2-tridiagonal matrices, and used this interpretation to consider the other steps of the block cyclic reduction method. We found a relationship between the inverses of block tridiagonal matrices and block 2-tridiagonal matrices in the first step, which helped us to clarify the main results of this paper—i.e., the first step narrows the distribution of the roots of the characteristic polynomials associated with the coefficient matrices, and the other steps also do this, if the original coefficient matrices are symmetric positive or negative definite. This property suggests that each block cyclic reduction does not make it more difficult to solve a linear system with a symmetric positive or negative definite block tridiagonal matrix, which will be useful for dividing a large-scale linear system into several small-scale ones.
A remarkable point is that our approach is useful regardless of whether the coefficient matrices are tridiagonal or block tridiagonal. However, the coefficient matrices are currently limited to symmetric positive or negative definite ones. For example, in the nonsymmetric Toeplitz case [12], our approach cannot capture the root distribution of the characteristic polynomial. Future work thus involves extending our approach so that the root distribution can be examined for various coefficient matrices.

Author Contributions

Conceptualization, M.S., M.I. and Y.N.; Data curation, M.S. and T.W.; Methodology, M.S.; Supervision, Y.N.; Writing—original draft, M.S. and M.I. All the authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Grant-in-Aid for Early-Career Scientists No. 21K13844 from the Japan Society for the Promotion of Science.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers for their careful reading and beneficial comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gander, W.; Golub, G.H. Cyclic reduction—History and applications. In Proc. Workshop Scientific Computing (Hong Kong, 1997); Springer: Singapore, 1997; pp. 73–85.
2. Heller, D. Some aspects of the cyclic reduction algorithm for block tridiagonal linear systems. SIAM J. Numer. Anal. 1976, 13, 484–496.
3. Amodio, P.; Mazzia, F. Backward error analysis of cyclic reduction for the solution of tridiagonal systems. Math. Comput. 1994, 62, 601–617.
4. Rossi, T.; Toivanen, J. A nonstandard cyclic reduction method, its variants and stability. SIAM J. Matrix Anal. Appl. 1999, 20, 628–645.
5. Evans, D.J. Cyclic and stride reduction methods for generalised tridiagonal matrices. Int. J. Comput. Math. 2000, 73, 487–492.
6. Nagata, M.; Hada, M.; Iwasaki, M.; Nakamura, Y. Eigenvalue clustering of coefficient matrices in the iterative stride reductions for linear systems. Comput. Math. Appl. 2016, 71, 349–355.
7. Wang, T.; Iwasaki, M.; Nakamura, Y. On condition numbers in the cyclic reduction processes of a tridiagonal matrix. Int. J. Comput. Math. 2010, 87, 3079–3093.
8. Bini, D.A.; Meini, B. The cyclic reduction algorithm: From Poisson equation to stochastic processes and beyond. In memoriam of Gene H. Golub. Numer. Algor. 2009, 51, 23–60.
9. Yalamov, P.; Pavlov, V. Stability of the block cyclic reduction. Linear Algebra Appl. 1996, 249, 341–358.
10. Yanai, H.; Takeuchi, K.; Takane, Y. Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition; Springer: New York, NY, USA, 2011.
11. Buzbee, B.L.; Golub, G.H.; Nielson, C.W. On direct methods for solving Poisson's equations. SIAM J. Numer. Anal. 1970, 7, 627–656.
12. Boffi, N.M.; Hill, J.C.; Reuter, M.G. Characterizing the inverses of block tridiagonal, block Toeplitz matrices. Comput. Sci. Discov. 2015, 8, 015001.
Figure 1. Gathering the block tridiagonal parts of the block $M$-tridiagonal matrix $A^{(M)}$ with $M = 4$.
Figure 2. A block tridiagonalization of the block $M$-tridiagonal matrix $A^{(M)}$.
Figure 3. Coefficient matrices in the block cyclic reduction.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
