Article

Limit Theorems for Spectra of Circulant Block Matrices with Large Random Blocks

by Alexander Tikhomirov 1,*, Sabina Gulyaeva 2 and Dmitry Timushev 1
1
Institute of Physics and Mathematics, Komi SC UB RAS, Syktyvkar 167982, Russia
2
Institute of Exact Sciences and IT, Pitirim Sorokin Syktyvkar State University, Syktyvkar 167001, Russia
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(14), 2291; https://doi.org/10.3390/math12142291
Submission received: 13 June 2024 / Revised: 2 July 2024 / Accepted: 15 July 2024 / Published: 22 July 2024

Abstract: This paper investigates the spectral properties of block circulant matrices with high-order symmetric (or Hermitian) blocks. We analyze cases with dependent or sparse independent entries within these blocks. Additionally, we analyze the distribution of singular values for the product of independent circulant matrices with non-Hermitian blocks.

1. Circulant Block Matrices with Dependence

The study of the spectrum of circulant matrices is inspired by their diverse applications in time series analysis [1], graph theory [2,3,4], wireless communication [5,6,7], etc. The asymptotic behavior of the spectrum of circulant matrices has been studied in numerous works, such as [8,9,10,11,12,13]. In 2007, Oraby [8] (Theorem 1) investigated the limiting distribution for the empirical spectral distribution function of a circulant matrix and established the limit distribution for symmetric block circulant matrices with Wigner blocks (see [8] (Proposition 1)). In [9], the limiting spectral distribution of block symmetric circulant matrices with rectangular blocks was considered.
This paper investigates the spectral properties of three classes of block circulant matrices. First, we analyze block circulant matrices with large symmetric (or Hermitian) blocks; here we consider two cases, when the block entries exhibit dependencies and when they are independent and sparse. In the third class, we examine the distribution of singular values for the product of independent circulant matrices with non-Hermitian blocks. The proofs rely on the Stieltjes transform method.
Let
$$C(a_1,\dots,a_k)=\frac{1}{\sqrt{k}}\begin{pmatrix} a_1 & a_2 & a_3 & \cdots & a_{k-1} & a_k\\ a_k & a_1 & a_2 & \cdots & a_{k-2} & a_{k-1}\\ a_{k-1} & a_k & a_1 & \cdots & a_{k-3} & a_{k-2}\\ \vdots & & & \ddots & & \vdots \\ a_2 & a_3 & a_4 & \cdots & a_k & a_1 \end{pmatrix}.$$
For symmetric $n\times n$ matrices $X^{(1)}, X^{(2)},\dots,X^{(k)}$, consider the block circulant matrix $W=C\big(\tfrac{1}{\sqrt n}X^{(1)},\tfrac{1}{\sqrt n}X^{(2)},\dots,\tfrac{1}{\sqrt n}X^{(k)}\big)$. We assume that $X^{(j)}=X^{(k-j+2)}$ for $j=2,\dots,k$. Under this assumption, the matrix $W$ is symmetric and its eigenvalues are real. Let
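As a quick sanity check, the block circulant construction and the symmetry constraint $X^{(j)}=X^{(k-j+2)}$ can be illustrated numerically. The following sketch (plain Python; the helper name `circulant_blocks` is ours, and the $1/\sqrt{kn}$ normalization is omitted since it does not affect symmetry) builds $W$ from symmetric random blocks and verifies that it is symmetric.

```python
import random

def circulant_blocks(blocks):
    """Arrange k square blocks into the block circulant C(A1, ..., Ak):
    block-row bi, block-column bj holds A_{(bj - bi) mod k + 1}."""
    k, n = len(blocks), len(blocks[0])
    W = [[0.0] * (k * n) for _ in range(k * n)]
    for bi in range(k):
        for bj in range(k):
            A = blocks[(bj - bi) % k]
            for i in range(n):
                for j in range(n):
                    W[bi * n + i][bj * n + j] = A[i][j]
    return W

random.seed(0)
k, n, h = 4, 3, 4 // 2 + 1
X = []
for _ in range(h):  # independent symmetric blocks X^(1), ..., X^(h)
    A = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    X.append([[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)])
# 0-based block list: block r is X^{min(r, k-r)+1}, i.e. X^(j) = X^(k-j+2)
blocks = [X[min(r, k - r)] for r in range(k)]
W = circulant_blocks(blocks)
assert all(W[i][j] == W[j][i] for i in range(k * n) for j in range(k * n))
```

The symmetry of $W$ is exactly the combination of the symmetry of each block and the cyclic constraint on the block sequence.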
$$h=\begin{cases} l+1, & \text{if } k=2l,\\ l, & \text{if } k=2l-1.\end{cases}$$
Note that since the matrix $W$ is symmetric, the matrices $X^{(1)},\dots,X^{(h)}$ determine all matrices $X^{(h+1)},\dots,X^{(k)}$ via the equality $X^{(h+l)}=X^{(k-h-l+2)}$ for $l=1,\dots,k-h$. Denote the eigenvalues of the matrix $W$ in increasing order: $\lambda_1\le\lambda_2\le\cdots\le\lambda_{kn}$. We define the empirical spectral distribution function of the matrix $W$ as
$$F_n(x)=\frac{1}{kn}\sum_{j=1}^{kn}\mathbb 1\{\lambda_j\le x\},$$
where 1 { · } denotes the indicator function.
We shall assume that the matrices $X^{(\nu)}$ satisfy the following conditions for $\nu=1,\dots,h$. Let $\mathsf E X_{ij}^{(\nu)}=0$ and $\mathsf E (X_{ij}^{(\nu)})^2=\sigma_{ij}^2$. Let $B_j^2=\frac1n\sum_{l=1}^n\sigma_{jl}^2$. We assume that the matrices $X^{(\nu)}$, $\nu=1,\dots,h$, are independent. Furthermore, for any $1\le i\le j\le n$ and any $\nu=1,\dots,k$, introduce the $\sigma$-algebras
$$\mathcal F_{i,j}^{(\nu)}=\sigma\big(X_{kl}^{(\nu)}:\ 1\le l\le k\le n,\ (k,l)\ne(i,j)\big).$$
We shall assume that the following conditions are satisfied:
  • Condition $C(1)$: for any $1\le i\le j\le n$ and any $\nu=1,\dots,k$,
    $$\mathsf E\big(X_{ij}^{(\nu)}\,\big|\,\mathcal F_{i,j}^{(\nu)}\big)=0;$$
  • Condition $C(2)$: for any $\nu=1,\dots,k$,
    $$\lim_{n\to\infty}\frac1{n^2}\sum_{i,j=1}^n\mathsf E\Big|\mathsf E\big((X_{ij}^{(\nu)})^2\,\big|\,\mathcal F_{i,j}^{(\nu)}\big)-\sigma_{ij}^2\Big|=0;$$
  • Condition $C(3)$: for any $\tau>0$ and any $\nu=1,\dots,k$,
    $$\lim_{n\to\infty}L_n^{(\nu)}(\tau):=\lim_{n\to\infty}\frac1{n^2}\sum_{i,j=1}^n\mathsf E|X_{ij}^{(\nu)}|^2\,\mathbb 1\{|X_{ij}^{(\nu)}|\ge\tau\sqrt n\}=0;$$
  • Condition $C(4)$:
    $$\frac1n\sum_{j=1}^n|B_j^2-1|\to0\ \text{ as }n\to\infty,$$
    and there exists a constant $C$ such that $\max_{1\le j\le n}B_j^2\le C$.
We denote the density of the semicircle law with the parameter σ 2 by γ σ 2 ( x ) , as follows:
$$\gamma_{\sigma^2}(x):=\frac1{2\pi\sigma^2}\sqrt{4\sigma^2-x^2}\,\mathbb 1\{|x|\le2\sigma\}.$$
Consider the distribution function G ( k ) ( x ) with the density
$$g^{(k)}(x)=\frac{d}{dx}G^{(k)}(x)=\begin{cases}\dfrac{k-1}{k}\,\gamma_{\frac{k-1}{k}}(x)+\dfrac1k\,\gamma_{\frac{2k-1}{k}}(x), & \text{if }k\text{ is odd},\\[2mm] \dfrac{k-2}{k}\,\gamma_{\frac{k-2}{k}}(x)+\dfrac2k\,\gamma_{\frac{2k-2}{k}}(x), & \text{if }k\text{ is even}.\end{cases}$$
That is, it is a mixture of semicircle laws with parameters depending on the parity of $k$. For example, for an odd $k$ we have a mixture of the two semicircle laws $\gamma_{\frac{k-1}{k}}(x)$ and $\gamma_{\frac{2k-1}{k}}(x)$ with supports $\big[-2\sqrt{\tfrac{k-1}{k}},\,2\sqrt{\tfrac{k-1}{k}}\big]$ and $\big[-2\sqrt{\tfrac{2k-1}{k}},\,2\sqrt{\tfrac{2k-1}{k}}\big]$, respectively. It is at the endpoints of these supports that the character of the function $G^{(k)}(x)$ changes. Figure 1 shows the graphs of $G^{(k)}(x)$ for $k=3,4,7,8$. The dots on the distribution function curves mark the endpoints of the supports of the component distributions.
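Since the mixture weights sum to one for both parities, $G^{(k)}$ is a probability distribution function. This can be checked numerically with the following sketch (plain Python; the function names are ours), which integrates $g^{(k)}$ by the midpoint rule:

```python
import math

def semicircle_density(x, s2):
    # semicircle law gamma_{sigma^2}: sqrt(4*sigma^2 - x^2) / (2*pi*sigma^2) on |x| <= 2*sigma
    return math.sqrt(max(4*s2 - x*x, 0.0)) / (2*math.pi*s2)

def g(x, k):
    if k % 2:   # odd k: weights (k-1)/k and 1/k
        w1, s1, w2, s2 = (k - 1)/k, (k - 1)/k, 1/k, (2*k - 1)/k
    else:       # even k: weights (k-2)/k and 2/k
        w1, s1, w2, s2 = (k - 2)/k, (k - 2)/k, 2/k, (2*k - 2)/k
    return w1*semicircle_density(x, s1) + w2*semicircle_density(x, s2)

def total_mass(k, m=100000):
    a = 2*math.sqrt((2*k - 1)/k)  # widest support endpoint for either parity
    step = 2*a/m
    return sum(g(-a + (i + 0.5)*step, k) for i in range(m)) * step

for k in (3, 4, 7, 8):
    assert abs(total_mass(k) - 1.0) < 1e-3
```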
We prove the following theorem.
Theorem 1.
Under conditions C ( 1 ) C ( 4 ) , the empirical distribution function F n ( x ) converges in probability to the distribution function G ( k ) ( x ) .
Proof of Theorem 1. 
We start from the well-known fact that the spectrum of the matrix $W$ is the union of the spectra of the matrices $B^{(\nu)}$, $\nu=1,\dots,k$, where
$$B^{(\nu)}=\begin{cases}\dfrac1{\sqrt{kn}}\Big(X^{(1)}+2\sum_{r=2}^{k/2}\cos\tfrac{2\pi(r-1)(\nu-1)}{k}\,X^{(r)}+\cos(\nu-1)\pi\,X^{(k/2+1)}\Big), & \text{if }k\text{ is even},\\[2mm] \dfrac1{\sqrt{kn}}\Big(X^{(1)}+2\sum_{r=2}^{(k+1)/2}\cos\tfrac{2\pi(r-1)(\nu-1)}{k}\,X^{(r)}\Big), & \text{if }k\text{ is odd}.\end{cases}$$
Using an orthogonal transformation, we can reduce the matrix $W$ to block-diagonal form with the blocks $B^{(\nu)}$, $\nu=1,\dots,k$; herewith, the equality $B^{(\nu)}=B^{(k-\nu+2)}$ holds for $\nu\ge2$. Let
$$\zeta_{jl}^{(\nu)}=\begin{cases}\dfrac1{\sqrt{kn}}\Big(X_{jl}^{(1)}+2\sum_{r=2}^{k/2}\cos\tfrac{2\pi(r-1)(\nu-1)}{k}\,X_{jl}^{(r)}+\cos(\nu-1)\pi\,X_{jl}^{(k/2+1)}\Big), & \text{if }k\text{ is even},\\[2mm] \dfrac1{\sqrt{kn}}\Big(X_{jl}^{(1)}+2\sum_{r=2}^{(k+1)/2}\cos\tfrac{2\pi(r-1)(\nu-1)}{k}\,X_{jl}^{(r)}\Big), & \text{if }k\text{ is odd}.\end{cases}$$
Let $\lambda_1^{(\nu)}\le\lambda_2^{(\nu)}\le\cdots\le\lambda_n^{(\nu)}$ denote the eigenvalues of the matrix $B^{(\nu)}$, and let
$$F_n^{(\nu)}(x)=\frac1n\sum_{j=1}^n\mathbb 1\{\lambda_j^{(\nu)}\le x\}.$$
It is easy to see that
$$F_n(x)=\frac1k\sum_{\nu=1}^kF_n^{(\nu)}(x).$$
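For scalar blocks ($n=1$), the decomposition of the spectrum of $W$ into the spectra of the $B^{(\nu)}$ can be checked directly: the numbers $\frac1{\sqrt k}\sum_r a_r\cos\frac{2\pi(r-1)(\nu-1)}{k}$ are eigenvalues of the symmetric circulant, with eigenvectors built from cosines. A sketch (plain Python; it assumes the $1/\sqrt k$ normalization of $C$):

```python
import math, random

random.seed(1)
k = 6
vals = [random.gauss(0.0, 1.0) for _ in range(k // 2 + 1)]
a = [vals[min(r, k - r)] for r in range(k)]   # symmetry a_j = a_{k-j+2} (1-based)

# scalar circulant C(a_1, ..., a_k) / sqrt(k)
C = [[a[(l - j) % k] / math.sqrt(k) for l in range(k)] for j in range(k)]

for nu in range(k):                            # nu here is 0-based, i.e. nu = v - 1
    theta = 2 * math.pi * nu / k
    lam = sum(a[r] * math.cos(theta * r) for r in range(k)) / math.sqrt(k)
    u = [math.cos(theta * j) for j in range(k)]       # cosine eigenvector
    Cu = [sum(C[j][l] * u[l] for l in range(k)) for j in range(k)]
    assert all(abs(Cu[j] - lam * u[j]) < 1e-10 for j in range(k))
```

The cancellation of the sine part of the Fourier vector uses exactly the symmetry $a_j=a_{k-j+2}$, mirroring how the cosines appear in the blocks $B^{(\nu)}$.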
We consider the case of an even $k$ because the other case is similar. Note that
$$\mathsf E|\zeta_{jl}^{(\nu)}|^2=\frac{\sigma_{jl}^2}{kn}\Big(1+4\sum_{r=2}^{k/2}\cos^2\tfrac{2\pi(r-1)(\nu-1)}{k}+\cos^2(\nu-1)\pi\Big).$$
For ν = 1 or ν = k / 2 + 1 , we have
$$\mathsf E|\zeta_{jl}^{(\nu)}|^2=\frac{(2k-2)\,\sigma_{jl}^2}{kn}.$$
Using
$$\sum_{l=0}^{N}\cos^2(lx)=\frac12\Big(N+\frac32+\frac{\sin(2N+1)x}{2\sin x}\Big),$$
we get, for $\nu\ne1$ and $\nu\ne k/2+1$,
$$\mathsf E|\zeta_{jl}^{(\nu)}|^2=\frac{(k-2)\,\sigma_{jl}^2}{kn}.$$
In fact,
$$1+4\sum_{r=2}^{k/2}\cos^2\tfrac{2\pi(r-1)(\nu-1)}{k}+\cos^2(\nu-1)\pi=-2+4\sum_{r=0}^{k/2-1}\cos^2\tfrac{2\pi r(\nu-1)}{k}=k-1+\frac{\sin\tfrac{2\pi(k-1)(\nu-1)}{k}}{\sin\tfrac{2\pi(\nu-1)}{k}}=k-2,$$
since $\sin\tfrac{2\pi(k-1)(\nu-1)}{k}=-\sin\tfrac{2\pi(\nu-1)}{k}$.
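Both the summation identity and the value $k-2$ of the variance factor are easy to confirm numerically (plain Python sketch; `variance_factor` is our helper name):

```python
import math

# identity: sum_{l=0}^{N} cos^2(lx) = (N + 3/2 + sin((2N+1)x)/(2 sin x)) / 2
def cos2_sum(N, x):
    return sum(math.cos(l * x) ** 2 for l in range(N + 1))

for N in (3, 5, 10):
    for x in (0.3, 1.1, 2.0):
        closed = 0.5 * (N + 1.5 + math.sin((2 * N + 1) * x) / (2 * math.sin(x)))
        assert abs(cos2_sum(N, x) - closed) < 1e-10

# variance factor 1 + 4*sum cos^2 + cos^2 for an even k (nu is 1-based)
def variance_factor(k, nu):
    t = 2 * math.pi * (nu - 1) / k
    s = 1 + 4 * sum(math.cos(t * (r - 1)) ** 2 for r in range(2, k // 2 + 1))
    return s + math.cos((nu - 1) * math.pi) ** 2

for k in (6, 8, 12):
    for nu in range(2, k // 2 + 1):          # nu not in {1, k/2+1}
        assert abs(variance_factor(k, nu) - (k - 2)) < 1e-10
    assert abs(variance_factor(k, 1) - (2 * k - 2)) < 1e-10
```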
To prove Theorem 1, it remains to show that the spectral distribution of each matrix $B^{(\nu)}$, $\nu=1,\dots,k$, converges to the semicircle law with the corresponding parameter. To this end, we apply [14] (Theorem 1.1). We need to check conditions (1)–(5) from ref. [14] for the random variables $\zeta_{jl}^{(\nu)}$ and any fixed $\nu=1,\dots,k$. For any fixed $\nu=1,\dots,k$ and any $1\le j\le l\le n$, we introduce the $\sigma$-algebras
$$\mathcal G_{jl}^{(\nu)}=\sigma\big\{\zeta_{pq}^{(\nu)}:\ 1\le p\le q\le n,\ (p,q)\ne(j,l)\big\}.$$
It is straightforward to check that, for any $\nu=1,\dots,h$,
$$\mathcal G_{jl}^{(\nu)}=\sigma\big\{\mathcal F_{j,l}^{(\mu)},\ \mu=1,\dots,h\big\}.$$
Since the $\mathcal F_{j,l}^{(\mu)}$, $\mu=1,\dots,h$, are mutually independent, condition $C(1)$ yields
$$\mathsf E\{\zeta_{jl}^{(\nu)}\,|\,\mathcal G_{jl}^{(\nu)}\}=0.$$
Thus, condition (1) from [14] is fulfilled. Now, consider the quantity
$$Q=\frac1n\sum_{j,l=1}^n\mathsf E\Big|\mathsf E\{(\zeta_{jl}^{(\nu)})^2\,|\,\mathcal G_{jl}^{(\nu)}\}-\mathsf E(\zeta_{jl}^{(\nu)})^2\Big|.$$
Assume, for instance, $1<\nu\le\frac k2+1$. Note that for $j\ne p$, $l\ne q$,
$$\mathsf E\{X_{jl}^{(\mu)}X_{pq}^{(\mu)}\,|\,\mathcal G_{pq}^{(\nu)}\}=X_{jl}^{(\mu)}\,\mathsf E\{X_{pq}^{(\mu)}\,|\,\mathcal G_{pq}^{(\nu)}\}=0.$$
Using this and the independence of $X_{jl}^{(\mu_1)}$ and $X_{pq}^{(\mu_2)}$ for $\mu_1\ne\mu_2$, we get, for an even $k$,
$$\mathsf E\{|\zeta_{jl}^{(\nu)}|^2\,|\,\mathcal G_{jl}^{(\nu)}\}=\frac1{kn}\Big(\mathsf E\{(X_{jl}^{(1)})^2|\mathcal G_{jl}^{(\nu)}\}+4\sum_{r=2}^{k/2}\cos^2\tfrac{2\pi(r-1)(\nu-1)}{k}\,\mathsf E\{(X_{jl}^{(r)})^2|\mathcal G_{jl}^{(\nu)}\}+\cos^2(\nu-1)\pi\,\mathsf E\{(X_{jl}^{(k/2+1)})^2|\mathcal G_{jl}^{(\nu)}\}\Big).$$
Applying condition $C(2)$, we get
$$\lim_{n\to\infty}\frac1n\sum_{j,l=1}^n\mathsf E\Big|\mathsf E\{|\zeta_{jl}^{(\nu)}|^2\,|\,\mathcal G_{jl}^{(\nu)}\}-\mathsf E(\zeta_{jl}^{(\nu)})^2\Big|=0.$$
Thus, condition (2) from [14] is proved. Furthermore, we check condition (3) from ref. [14]. We need to prove that the Lindeberg fraction for the matrix $B^{(\nu)}$ tends to zero as $n$ tends to infinity:
$$\lim_{n\to\infty}\tilde L_n^{(\nu)}(\tau)=\lim_{n\to\infty}\frac1n\sum_{j,l=1}^n\mathsf E|\zeta_{jl}^{(\nu)}|^2\,\mathbb 1\{|\zeta_{jl}^{(\nu)}|>\tau\}=0.\qquad(1)$$
We prove the following auxiliary lemma:
Lemma 1.
For any random variables $Y_j$, $j=1,\dots,k$, and any $b>0$, the following inequality holds:
$$\sum_{j=1}^kY_j^2\,\mathbb 1\Big\{\sum_{j=1}^k|Y_j|>b\Big\}\le k\sum_{j=1}^kY_j^2\,\mathbb 1\{|Y_j|>b/k\}.$$
Proof. 
Combining the inequality $\sum_{j=1}^k|a_j|\le k\max_{1\le j\le k}|a_j|$ with the inequality $\mathbb 1\{\sum_{j=1}^k|Y_j|>b\}\le\mathbb 1\{\max_{1\le j\le k}|Y_j|>b/k\}$, we may write
$$\sum_{j=1}^kY_j^2\,\mathbb 1\Big\{\sum_{j=1}^k|Y_j|>b\Big\}\le k\max_{1\le j\le k}Y_j^2\,\mathbb 1\Big\{\max_{1\le j\le k}|Y_j|>b/k\Big\}.$$
There exists $j_0$ with $1\le j_0\le k$ such that
$$\max_{1\le j\le k}Y_j^2=Y_{j_0}^2\quad\text{and}\quad\max_{1\le j\le k}|Y_j|=|Y_{j_0}|.$$
Applying this equality, we get
$$\sum_{j=1}^kY_j^2\,\mathbb 1\Big\{\sum_{j=1}^k|Y_j|>b\Big\}\le k\,Y_{j_0}^2\,\mathbb 1\{|Y_{j_0}|>b/k\}.$$
Applying the inequality $\max_{1\le j\le k}|a_j|\mathbb 1\{|a_j|>c\}\le\sum_{j=1}^k|a_j|\mathbb 1\{|a_j|>c\}$, we get
$$\sum_{j=1}^kY_j^2\,\mathbb 1\Big\{\sum_{j=1}^k|Y_j|>b\Big\}\le k\sum_{j=1}^kY_j^2\,\mathbb 1\{|Y_j|>b/k\}.$$
Thus, Lemma 1 is proved. □
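A quick randomized check of the inequality of Lemma 1 (plain Python; the helper name `lhs_rhs` is ours):

```python
import random

random.seed(2)

def lhs_rhs(Y, b):
    """Return the two sides of Lemma 1 for the sample Y and level b."""
    k = len(Y)
    total = sum(abs(y) for y in Y)
    lhs = sum(y * y for y in Y) * (1 if total > b else 0)
    rhs = k * sum(y * y * (1 if abs(y) > b / k else 0) for y in Y)
    return lhs, rhs

for _ in range(1000):
    k = random.randint(1, 6)
    Y = [random.uniform(-3, 3) for _ in range(k)]
    b = random.uniform(0.1, 5)
    lhs, rhs = lhs_rhs(Y, b)
    assert lhs <= rhs + 1e-12
```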
Now, we continue with the proof of relation (1). It is straightforward to check that
$$|\zeta_{jl}^{(\nu)}|\le\frac2{\sqrt{kn}}\sum_{r=1}^{k/2+1}|X_{jl}^{(r)}|,\quad\text{and}\quad|\zeta_{jl}^{(\nu)}|^2\le\frac2n\sum_{r=1}^{k/2+1}|X_{jl}^{(r)}|^2.$$
This implies that
$$\mathsf E|\zeta_{jl}^{(\nu)}|^2\,\mathbb 1\{|\zeta_{jl}^{(\nu)}|>\tau\}\le\frac2n\,\mathsf E\Big(\sum_{r=1}^{k/2+1}|X_{jl}^{(r)}|^2\Big)\mathbb 1\Big\{\sum_{r=1}^{k/2+1}|X_{jl}^{(r)}|>\frac{\tau\sqrt{kn}}2\Big\}.$$
Now, by applying Lemma 1 (the sum contains $k/2+1\le k$ terms), we get
$$\mathsf E|\zeta_{jl}^{(\nu)}|^2\,\mathbb 1\{|\zeta_{jl}^{(\nu)}|>\tau\}\le\frac{2k}n\sum_{r=1}^{h}\mathsf E|X_{jl}^{(r)}|^2\,\mathbb 1\Big\{|X_{jl}^{(r)}|>\frac{\tau\sqrt n}{2k}\Big\}.$$
Hence,
$$\tilde L_n^{(\nu)}(\tau)\le2k\sum_{r=1}^kL_n^{(r)}\big(\tau/(2k)\big).$$
This inequality and condition $C(3)$ prove condition (3) from ref. [14].
Let $(\tilde B_j^{(\nu)})^2:=\sum_{l=1}^n\mathsf E(\zeta_{jl}^{(\nu)})^2$. We need to prove that
$$\lim_{n\to\infty}\frac1n\sum_{j=1}^n\big|(\tilde B_j^{(\nu)})^2-d_\nu\big|=0,\qquad(2)$$
where, for an even $k$,
$$d_\nu=\begin{cases}\dfrac{2k-2}k, & \text{if }\nu=1\text{ or }\nu=k/2+1,\\[1mm]\dfrac{k-2}k, & \text{otherwise}.\end{cases}$$
It is straightforward to see that
$$(\tilde B_j^{(\nu)})^2=\frac{d_\nu}n\sum_{l=1}^n\sigma_{jl}^2=d_\nu B_j^2.$$
Relation (2) and condition (5) from [14] now follow from condition C ( 4 ) . Thus, conditions (1)–(5) from ref. [14] are fulfilled, and the result of Theorem 1 now follows from ref. [14] (Theorem 1.1). □

2. Circulant Block Matrices with Sparsity

In this section, we consider circulant block matrices with independent blocks $X^{(\nu)}$, $\nu=1,\dots,h$. We shall assume that the entries of the matrices $X^{(\nu)}$ are independent, and we sparsify them by means of independent Bernoulli random variables $\xi_{jl}^{(\nu)}$ with $\mathsf E\xi_{jl}^{(\nu)}=p_{jl}^{(\nu)}$. We construct the blocks
$$\widetilde W^{(\nu)}=(\widetilde W_{jl}^{(\nu)}):=X^{(\nu)}\circ\Xi^{(\nu)},\qquad \widetilde W_{jl}^{(\nu)}=X_{jl}^{(\nu)}\xi_{jl}^{(\nu)},$$
where “$\circ$” denotes the Hadamard product of matrices and $\Xi^{(\nu)}=(\xi_{jl}^{(\nu)})_{j,l=1}^n$. Recall that all random variables $X_{jl}^{(\nu)},\xi_{jl}^{(\nu)}$, $j,l=1,\dots,n$, $\nu=1,\dots,h$, are jointly independent. For simplicity, we shall assume that $\mathsf E(X_{jl}^{(\nu)})^2=(\sigma_{jl}^{(\nu)})^2=\sigma_{jl}^2$ and $p_{jl}^{(\nu)}=p_{jl}$ for any $\nu=1,\dots,h$. We introduce the normalizing factor
$$a_n:=\frac1n\sum_{j,l=1}^np_{jl}\sigma_{jl}^2,$$
and, for the blocks,
$$W^{(\nu)}=\frac1{\sqrt{a_n}}\widetilde W^{(\nu)},$$
we define the block matrix
$$Y:=C\big(W^{(1)},\dots,W^{(k)}\big),$$
where $W^{(h+\nu)}=W^{(k-h-\nu+2)}$. For instance, $W^{(h+1)}=W^{(h-1)},\dots,W^{(k)}=W^{(2)}$. We introduce some conditions on the probabilities $p_{jl}$ and on the distributions of $X_{jl}^{(\nu)}$ for $\nu=1,\dots,h$ and $j,l=1,\dots,n$. Recall that
p j l = E ξ j l ( ν ) , and σ j l 2 = E ( X j l ( ν ) ) 2 .
We shall assume that the following conditions are satisfied:
  • Condition $D(1)$:
    $$a_n\to\infty\ \text{ as }n\to\infty;$$
  • Condition $D(2)$:
    $$\frac1n\sum_{j=1}^n\sum_{l=1}^n\Big|p_{jl}\sigma_{jl}^2-\frac{a_n}n\Big|\to0\ \text{ as }n\to\infty;$$
  • Condition $D(3)$: for any $\tau>0$,
    $$\lim_{n\to\infty}L_n(\tau):=\lim_{n\to\infty}\frac1{na_n}\sum_{\nu=1}^k\sum_{j=1}^n\sum_{l=1}^np_{jl}\,\mathsf E(X_{jl}^{(\nu)})^2\,\mathbb 1\{|X_{jl}^{(\nu)}|\ge\tau\sqrt{a_n}\}=0;$$
  • Condition $D(4)$:
    $$\lim_{n\to\infty}\max_{1\le j\le n}p_{jj}\sigma_{jj}^2/a_n=0,$$
    and there exists a constant $C_0$ such that, for any $n\ge1$,
    $$\max_{1\le i,j\le n}p_{ij}\sigma_{ij}^2/a_n\le C_0.$$
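A simple example satisfying conditions $D(1)$–$D(4)$ is $p_{jl}=n^{-1/2}$ and $\sigma_{jl}=1$, for which $a_n=\sqrt n$. The following sketch (plain Python; the helper name `sparsified_block` is ours) builds a sparsified block $\widetilde W=X\circ\Xi$ and checks the conditions for this choice:

```python
import random, math

random.seed(3)

def sparsified_block(n, p):
    """W~ = X o Xi: i.i.d. N(0,1) entries masked by independent Bernoulli(p)."""
    X = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    return [[x if random.random() < p else 0.0 for x in row] for row in X]

# the choice p_jl = n^{-1/2}, sigma_jl = 1:
for n in (100, 400, 1600):
    p = 1 / math.sqrt(n)
    a_n = n * p   # a_n = (1/n) * sum_{j,l} p_jl * sigma_jl^2 = (1/n) * n^2 * p
    assert abs(a_n - math.sqrt(n)) < 1e-9      # D(1): a_n = sqrt(n) -> infinity
    assert abs(p - a_n / n) < 1e-15            # D(2): p_jl*sigma_jl^2 - a_n/n = 0
    assert p / a_n <= 1.0 / n + 1e-15          # D(4): p_jl*sigma_jl^2 / a_n = 1/n -> 0

W_tilde = sparsified_block(8, 0.5)
assert len(W_tilde) == 8 and len(W_tilde[0]) == 8
```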
We prove the following theorem.
Theorem 2.
Let conditions D ( 1 ) D ( 4 ) be fulfilled. Then,
lim n F n ( x ) = G ( k ) ( x ) ,
where G ( k ) ( x ) was defined in Theorem 1.
The proof of Theorem 2. 
We recall that the matrix $Y$ can be reduced by a unitary (orthogonal) transformation to block-diagonal form with blocks $B^{(\nu)}$, $\nu=1,\dots,k$, such that $B^{(\nu)}=B^{(k-\nu+2)}$ for $\nu=2,\dots,k$, where
$$B^{(\nu)}=\frac1{\sqrt k}\Big(W^{(1)}+2\sum_{r=2}^{(k+1)/2}\cos\tfrac{2\pi(r-1)(\nu-1)}{k}\,W^{(r)}\Big)$$
if $k$ is odd, and
$$B^{(\nu)}=\frac1{\sqrt k}\Big(W^{(1)}+2\sum_{r=2}^{k/2}\cos\tfrac{2\pi(r-1)(\nu-1)}{k}\,W^{(r)}+\cos(\nu-1)\pi\,W^{(k/2+1)}\Big)$$
if $k$ is even. To prove the theorem, it is enough to prove that the spectral distribution functions of the matrices $B^{(\nu)}$ converge to the semicircle law with the corresponding parameter. Introduce the resolvent matrix of the matrix $B^{(\nu)}$,
$$R^{(\nu)}(z)=(B^{(\nu)}-zI)^{-1}.$$
Denote the Stieltjes transform of the spectral distribution of the matrix $B^{(\nu)}$ by $s_n^{(\nu)}(z)$. We have
$$s_n^{(\nu)}(z)=\frac1n\sum_{j=1}^nR_{jj}^{(\nu)}(z).$$
Applying Schur’s complement formula, we get
$$R_{jj}^{(\nu)}(z)=\frac1{B_{jj}^{(\nu)}-z-\sum_{l=1,l\ne j}^n\sum_{q=1,q\ne j}^nB_{jl}^{(\nu)}B_{jq}^{(\nu)}R_{lq}^{(\nu,j)}},\qquad(3)$$
where B ( ν , j ) denotes the matrix obtained from B ( ν ) by deleting the j-th row and column, and
R ( ν , j ) = R ( ν , j ) ( z ) = ( B ( ν , j ) z I ) 1 .
Introduce the following notations:
$$\varepsilon_{j1}^{(\nu)}=B_{jj}^{(\nu)},\qquad \varepsilon_{j2}^{(\nu)}=-\sum_{l=1,l\ne j}^n\big((B_{jl}^{(\nu)})^2-\mathsf E(B_{jl}^{(\nu)})^2\big)R_{ll}^{(\nu,j)},\qquad \varepsilon_{j3}^{(\nu)}=-\sum_{\substack{1\le l,q\le n\\ l\ne j,\ q\ne j,\ l\ne q}}B_{jl}^{(\nu)}B_{jq}^{(\nu)}R_{lq}^{(\nu,j)}.$$
It is straightforward to check that, for $j,l=1,\dots,n$ and $\nu=1,\dots,h$, when $k$ is odd,
$$\mathsf E(B_{jl}^{(\nu)})^2=\frac{\sigma_{jl}^2p_{jl}}{ka_n}\Big(1+4\sum_{r=2}^{h}\cos^2\tfrac{2\pi(r-1)(\nu-1)}{k}\Big)=\sigma_{jl}^2p_{jl}\,d_\nu\,a_n^{-1}.\qquad(4)$$
Let
$$A_{1j}(z)=\frac{d_\nu}{a_n}\sum_{l=1,l\ne j}^n\sigma_{jl}^2p_{jl}R_{ll}^{(\nu,j)}.$$
Now, we can rewrite equality (3) in the form
$$R_{jj}^{(\nu)}(z)=\frac1{-z-A_{1j}(z)+\varepsilon_{j1}^{(\nu)}+\varepsilon_{j2}^{(\nu)}+\varepsilon_{j3}^{(\nu)}}.$$
Now, we approximate $A_{1j}(z)$ using condition $D(2)$. Introduce
$$\varepsilon_{j4}^{(\nu)}=-\frac{d_\nu}{a_n}\sum_{l=1,l\ne j}^n\Big(\sigma_{jl}^2p_{jl}-\frac{a_n}n\Big)R_{ll}^{(\nu,j)},\qquad \varepsilon_{j5}^{(\nu)}=-d_\nu\Big(\frac1n\sum_{l=1,l\ne j}^nR_{ll}^{(\nu,j)}-\frac1n\sum_{l=1}^nR_{ll}^{(\nu)}\Big).$$
Using s n ( ν ) ( z ) = 1 n l = 1 n R l l ( ν ) and
$$\varepsilon_j^{(\nu)}=\varepsilon_{j1}^{(\nu)}+\varepsilon_{j2}^{(\nu)}+\varepsilon_{j3}^{(\nu)}+\varepsilon_{j4}^{(\nu)}+\varepsilon_{j5}^{(\nu)},$$
we may write
$$R_{jj}^{(\nu)}(z)=\frac1{-z-d_\nu s_n^{(\nu)}(z)+\varepsilon_j^{(\nu)}}.$$
Summing the last equality over $j=1,\dots,n$, we get
$$s_n^{(\nu)}(z)=-\frac1{z+d_\nu s_n^{(\nu)}(z)}+T_n^{(\nu)}(z),\qquad(5)$$
where
$$T_n^{(\nu)}(z)=\frac1{z+d_\nu s_n^{(\nu)}(z)}\cdot\frac1n\sum_{j=1}^n\varepsilon_j^{(\nu)}R_{jj}^{(\nu)}(z).$$
To prove the convergence
lim n s n ( ν ) ( z ) = s ( ν ) ( z ) in probability ,
it is enough to prove that
$$\lim_{n\to\infty}\mathsf E|T_n^{(\nu)}(z)|=0.$$
Here, $s^{(\nu)}(z)$ denotes the Stieltjes transform of the semicircle law with the parameter $d_\nu$. The inequalities $|R_{jj}^{(\nu)}(z)|\le v^{-1}$ and $|z+d_\nu s_n^{(\nu)}(z)|\ge v$ for $z=u+iv$ imply that
$$|T_n^{(\nu)}(z)|\le\frac1{v^2}\cdot\frac1n\sum_{j=1}^n|\varepsilon_j^{(\nu)}|.$$
The last inequality implies that it is enough to prove that
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_j^{(\nu)}|\to0\ \text{ as }n\to\infty.$$
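Once the error term vanishes, the self-consistent equation $s(z)=-1/\big(z+d_\nu s(z)\big)$ remains, whose solution with $\operatorname{Im}s>0$ is the Stieltjes transform of the semicircle law with parameter $d_\nu$. A sketch solving it by fixed-point iteration (plain Python; the function name is ours):

```python
def semicircle_stieltjes(z, d, iters=500):
    """Solve s = -1/(z + d*s) by fixed-point iteration (requires Im z > 0).
    The map is a contraction since |d / (z + d*s)^2| <= d / (Im z)^2 < 1
    for Im z large enough and Im s >= 0."""
    s = -1 / z
    for _ in range(iters):
        s = -1 / (z + d * s)
    return s

for d in (2/3, 5/6, 1.0):
    z = complex(0.5, 2.0)
    s = semicircle_stieltjes(z, d)
    assert abs(d * s * s + z * s + 1) < 1e-12   # the quadratic d*s^2 + z*s + 1 = 0
    assert s.imag > 0                           # Stieltjes transform of a probability law
```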
We start with ε j 1 ( ν ) .
Lemma 2.
Under the conditions of Theorem 2, we have
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j1}^{(\nu)}|\to0\ \text{ as }n\to\infty.$$
Proof. 
Since the cases of an odd and of an even $k$ are similar, we only consider an odd $k$. By the definition of $\varepsilon_{j1}^{(\nu)}$ and $B^{(\nu)}$, for an odd $k$ we have
$$\varepsilon_{j1}^{(\nu)}=\frac1{\sqrt{ka_n}}\Big(X_{jj}^{(1)}\xi_{jj}^{(1)}+2\sum_{r=2}^{h}\cos\tfrac{2\pi(\nu-1)(r-1)}{k}\,X_{jj}^{(r)}\xi_{jj}^{(r)}\Big).$$
Applying Cauchy’s inequality, we get
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j1}^{(\nu)}|\le\Big(\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j1}^{(\nu)}|^2\Big)^{1/2}.$$
Furthermore,
$$\mathsf E|\varepsilon_{j1}^{(\nu)}|^2=\frac{\sigma_{jj}^2p_{jj}}{ka_n}\Big(1+4\sum_{r=2}^{h}\cos^2\tfrac{2\pi(\nu-1)(r-1)}{k}\Big)\le\frac2{a_n}p_{jj}\sigma_{jj}^2.$$
Summing this inequality over $j=1,\dots,n$, we get
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j1}^{(\nu)}|\le\Big(\frac2{na_n}\sum_{j=1}^np_{jj}\sigma_{jj}^2\Big)^{1/2}.$$
Continuing with this inequality, we get
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j1}^{(\nu)}|\le\Big(2\max_{1\le j\le n}\frac{p_{jj}\sigma_{jj}^2}{a_n}\Big)^{1/2}.$$
The last inequality and condition D ( 4 ) imply the result. Thus, Lemma 2 is proved. □
Lemma 3.
Under the conditions of Theorem 2, we have
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j2}^{(\nu)}|\to0\ \text{ as }n\to\infty.$$
Proof. 
We start with the definition of $\varepsilon_{j2}^{(\nu)}$. Thus, we have
$$\varepsilon_{j2}^{(\nu)}=-\sum_{l=1,l\ne j}^n\big((B_{jl}^{(\nu)})^2-\mathsf E(B_{jl}^{(\nu)})^2\big)R_{ll}^{(\nu,j)}.$$
Note that, according to (4),
$$\mathsf E(B_{jl}^{(\nu)})^2=\sigma_{jl}^2p_{jl}\,d_\nu\,a_n^{-1}.\qquad(6)$$
We may rewrite $\varepsilon_{j2}^{(\nu)}$ as follows:
$$\varepsilon_{j2}^{(\nu)}=-\big(f_{j1}^{(\nu)}+f_{j2}^{(\nu)}+f_{j3}^{(\nu)}\big),$$
where, with $\tilde c_1=1$ and $\tilde c_r=2\cos\tfrac{2\pi(\nu-1)(r-1)}{k}$ for $r=2,\dots,h$,
$$f_{j1}^{(\nu)}=\frac1{ka_n}\sum_{l=1,l\ne j}^n\big((X_{jl}^{(1)})^2\xi_{jl}^{(1)}-p_{jl}\sigma_{jl}^2\big)R_{ll}^{(\nu,j)},$$
$$f_{j2}^{(\nu)}=\frac1{ka_n}\sum_{r=2}^{h}\tilde c_r^{\,2}\sum_{l=1,l\ne j}^n\big((X_{jl}^{(r)})^2\xi_{jl}^{(r)}-p_{jl}\sigma_{jl}^2\big)R_{ll}^{(\nu,j)},$$
$$f_{j3}^{(\nu)}=\frac1{ka_n}\sum_{l=1,l\ne j}^n\sum_{\substack{r,q=1\\ r\ne q}}^{h}\tilde c_r\tilde c_q\,X_{jl}^{(r)}\xi_{jl}^{(r)}X_{jl}^{(q)}\xi_{jl}^{(q)}R_{ll}^{(\nu,j)}.$$
To estimate $f_{j1}^{(\nu)}$, we represent it in the following form:
$$f_{j1}^{(\nu)}=\bar f_{j1}^{(\nu)}+\tilde f_{j1}^{(\nu)},$$
where
$$\bar f_{j1}^{(\nu)}=\frac1{ka_n}\sum_{l=1,l\ne j}^n\big((X_{jl}^{(1)})^2\xi_{jl}^{(1)}-p_{jl}\sigma_{jl}^2\big)\mathbb 1\{|X_{jl}^{(1)}|>\tau\sqrt{a_n}\}\,R_{ll}^{(\nu,j)},$$
$$\tilde f_{j1}^{(\nu)}=\frac1{ka_n}\sum_{l=1,l\ne j}^n\big((X_{jl}^{(1)})^2\xi_{jl}^{(1)}-p_{jl}\sigma_{jl}^2\big)\mathbb 1\{|X_{jl}^{(1)}|\le\tau\sqrt{a_n}\}\,R_{ll}^{(\nu,j)}.$$
For $\bar f_{j1}^{(\nu)}$, using $|R_{ll}^{(\nu,j)}|\le v^{-1}$, we have the estimation
$$\frac1n\sum_{j=1}^n\mathsf E|\bar f_{j1}^{(\nu)}|\le\frac1{vkna_n}\sum_{j=1}^n\sum_{l=1}^n\mathsf E\big|(X_{jl}^{(1)})^2\xi_{jl}^{(1)}-p_{jl}\sigma_{jl}^2\big|\,\mathbb 1\{|X_{jl}^{(1)}|>\tau\sqrt{a_n}\}\le\frac1{vk}L_n(\tau)+\frac1{vkna_n}\sum_{j=1}^n\sum_{l=1}^np_{jl}\sigma_{jl}^2\,\mathsf E\mathbb 1\{|X_{jl}^{(1)}|>\tau\sqrt{a_n}\}.\qquad(7)$$
Note that
$$\sigma_{jl}^2\le\tau^2a_n+\mathsf E|X_{jl}^{(1)}|^2\,\mathbb 1\{|X_{jl}^{(1)}|>\tau\sqrt{a_n}\}.$$
Using this inequality, we get
$$\frac1{vkna_n}\sum_{j=1}^n\sum_{l=1}^np_{jl}\sigma_{jl}^2\,\mathsf E\mathbb 1\{|X_{jl}^{(1)}|>\tau\sqrt{a_n}\}\le\frac2{vkna_n}\sum_{j=1}^n\sum_{l=1}^np_{jl}\,\mathsf E(X_{jl}^{(1)})^2\,\mathbb 1\{|X_{jl}^{(1)}|>\tau\sqrt{a_n}\}=\frac2{vk}L_n(\tau).\qquad(8)$$
Here, we used the inequalities $p_{jl}\le1$, $\tau^2a_n\,\mathsf E\mathbb 1\{|X_{jl}^{(1)}|>\tau\sqrt{a_n}\}\le\mathsf E(X_{jl}^{(1)})^2\,\mathbb 1\{|X_{jl}^{(1)}|>\tau\sqrt{a_n}\}$, and $\mathbb 1\{|X_{jl}^{(1)}|>\tau\sqrt{a_n}\}\le1$. Combining inequalities (7) and (8), we obtain
$$\frac1n\sum_{j=1}^n\mathsf E|\bar f_{j1}^{(\nu)}|\le\frac3{vk}L_n(\tau).$$
Similarly, using $\tilde c_r^{\,2}\le4$ and $4(h-1)\le2k$, we get the bound
$$\frac1n\sum_{j=1}^n\mathsf E|\bar f_{j2}^{(\nu)}|\le\frac6vL_n(\tau),$$
where $\bar f_{j2}^{(\nu)}$ is defined analogously to $\bar f_{j1}^{(\nu)}$. The truncated parts $\tilde f_{j1}^{(\nu)}$ and $\tilde f_{j2}^{(\nu)}$ are estimated via a variance bound: after an additional centering, the summands are independent in $l$, which yields bounds of order $\tau/v$ for $\frac1n\sum_j\mathsf E|\tilde f_{j1}^{(\nu)}|$ and $\frac1n\sum_j\mathsf E|\tilde f_{j2}^{(\nu)}|$; since $\tau>0$ is arbitrary, these terms are negligible.
Furthermore,
$$\frac1n\sum_{j=1}^n\mathsf E|f_{j3}^{(\nu)}|\le\Big(\frac1n\sum_{j=1}^n\mathsf E|f_{j3}^{(\nu)}|^2\Big)^{1/2}.$$
Using the independence of the summands on the right-hand side of the definition of $f_{j3}^{(\nu)}$, together with $|\cos\varphi|\le1$ and $|R_{ll}^{(\nu,j)}|\le v^{-1}$, we get, for an absolute constant $C$,
$$\mathsf E|f_{j3}^{(\nu)}|^2\le\frac C{v^2a_n^2}\sum_{l=1}^np_{jl}^2\sigma_{jl}^4.$$
Applying this inequality, we get
$$\frac1n\sum_{j=1}^n\mathsf E|f_{j3}^{(\nu)}|^2\le\frac C{v^2na_n^2}\sum_{j=1}^n\sum_{l=1}^n\sigma_{jl}^4p_{jl}^2.$$
The inequality
$$\sigma_{jl}^2\le\tau^2a_n+\mathsf E|X_{jl}^{(\nu)}|^2\,\mathbb 1\{|X_{jl}^{(\nu)}|>\tau\sqrt{a_n}\}\qquad(9)$$
implies that
$$\frac1n\sum_{j=1}^n\mathsf E|f_{j3}^{(\nu)}|^2\le\frac{C\tau^2}{v^2na_n}\sum_{j=1}^n\sum_{l=1}^n\sigma_{jl}^2p_{jl}^2+\frac C{v^2na_n^2}\sum_{j=1}^n\sum_{l=1}^n\sigma_{jl}^2p_{jl}^2\,\mathsf E|X_{jl}^{(\nu)}|^2\,\mathbb 1\{|X_{jl}^{(\nu)}|>\tau\sqrt{a_n}\}.$$
For the first term on the r.h.s. of this inequality, using $p_{jl}^2\le p_{jl}$, we have
$$\frac{C\tau^2}{v^2na_n}\sum_{j=1}^n\sum_{l=1}^n\sigma_{jl}^2p_{jl}^2\le\frac{C\tau^2}{v^2na_n}\sum_{j=1}^n\sum_{l=1}^n\sigma_{jl}^2p_{jl}\le\frac{C\tau^2}{v^2}.$$
For the second term, condition $D(4)$ ($\sigma_{jl}^2p_{jl}\le C_0a_n$) and the definition of $L_n(\tau)$ give
$$\frac C{v^2na_n^2}\sum_{j=1}^n\sum_{l=1}^n\sigma_{jl}^2p_{jl}^2\,\mathsf E|X_{jl}^{(\nu)}|^2\,\mathbb 1\{|X_{jl}^{(\nu)}|>\tau\sqrt{a_n}\}\le\frac{CC_0}{v^2na_n}\sum_{j=1}^n\sum_{l=1}^np_{jl}\,\mathsf E|X_{jl}^{(\nu)}|^2\,\mathbb 1\{|X_{jl}^{(\nu)}|>\tau\sqrt{a_n}\}\le\frac{CC_0}{v^2}L_n(\tau).$$
The claim now follows from condition $D(3)$, letting first $n\to\infty$ and then $\tau\to0$. Thus, Lemma 3 is proved. □
Lemma 4.
Under the conditions of Theorem 2, we have
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j3}^{(\nu)}|\to0\ \text{ as }n\to\infty.$$
Proof. 
Applying Cauchy’s inequality, we get
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j3}^{(\nu)}|\le\Big(\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j3}^{(\nu)}|^2\Big)^{1/2}.$$
Furthermore, the independence of the random variables $B_{jl}^{(\nu)}$, $B_{jq}^{(\nu)}$, and $R_{lq}^{(\nu,j)}$ for $l\ne q$ implies that
$$\mathsf E|\varepsilon_{j3}^{(\nu)}|^2\le2\sum_{l=1}^n\sum_{q=1}^n\mathsf E(B_{jl}^{(\nu)})^2\,\mathsf E(B_{jq}^{(\nu)})^2\,\mathsf E|R_{lq}^{(\nu,j)}|^2.$$
Applying equality (6), we obtain
$$\mathsf E|\varepsilon_{j3}^{(\nu)}|^2\le\frac{2d_\nu^2}{a_n^2}\sum_{l=1}^n\sum_{q=1}^n\sigma_{jl}^2\sigma_{jq}^2p_{jl}p_{jq}\,\mathsf E|R_{lq}^{(\nu,j)}|^2.$$
By conditions $D(2)$ and $D(4)$, the weights $\sigma_{jl}^2p_{jl}$ may be replaced by $a_n/n$ at the cost of a negligible error; since $\sum_{l,q}\mathsf E|R_{lq}^{(\nu,j)}|^2=\sum_l\mathsf E\big(R^{(\nu,j)}R^{(\nu,j)*}\big)_{ll}\le nv^{-2}$, this yields $\mathsf E|\varepsilon_{j3}^{(\nu)}|^2=O\big(1/(nv^2)\big)$, which implies the claim. Thus, Lemma 4 is proved. □
Lemma 5.
Under the conditions of Theorem 2, we have
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j4}^{(\nu)}|\to0\ \text{ as }n\to\infty.$$
Proof. 
Using the definition of $\varepsilon_{j4}^{(\nu)}$ and the inequality $|R_{ll}^{(\nu,j)}|\le v^{-1}$, we have
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j4}^{(\nu)}|\le\frac{d_\nu}{vna_n}\sum_{j=1}^n\sum_{l=1,l\ne j}^n\Big|\sigma_{jl}^2p_{jl}-\frac{a_n}n\Big|.$$
Condition D ( 2 ) completes the proof. □
Lemma 6.
Under the conditions of Theorem 2, we have
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j5}^{(\nu)}|\to0\ \text{ as }n\to\infty.$$
Proof. 
Using the definition of ε j 5 ( ν ) , we have
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_{j5}^{(\nu)}|\le d_\nu\,\frac1n\sum_{j=1}^n\mathsf E\Big|\frac1n\sum_{l=1,l\ne j}^nR_{ll}^{(\nu,j)}-\frac1n\sum_{l=1}^nR_{ll}^{(\nu)}\Big|.$$
It is well known (see, for instance, [15] (Lemma 5.2)) that
$$\Big|\frac1n\sum_{l=1,l\ne j}^nR_{ll}^{(\nu,j)}-\frac1n\sum_{l=1}^nR_{ll}^{(\nu)}\Big|\le\frac1{nv}.$$
This inequality implies the result of Lemma 6. Thus, the lemma is proved. □
Combining the results of Lemmas 2–6, we get
$$\frac1n\sum_{j=1}^n\mathsf E|\varepsilon_j^{(\nu)}|\to0\ \text{ as }n\to\infty,$$
which implies $\lim_{n\to\infty}\mathsf E|T_n^{(\nu)}(z)|=0$ and, therefore, $s_n^{(\nu)}(z)\to s^{(\nu)}(z)$ in probability for every $\nu=1,\dots,k$. Thus, Theorem 2 is proved. □

3. Product of Circulant Matrices

In this section, we consider circulant matrices without the symmetry assumption. Let
$$C(a_1,\dots,a_k)=\frac1{\sqrt k}\begin{pmatrix} a_1 & a_2 & a_3 & \cdots & a_{k-1} & a_k\\ a_k & a_1 & a_2 & \cdots & a_{k-2} & a_{k-1}\\ a_{k-1} & a_k & a_1 & \cdots & a_{k-3} & a_{k-2}\\ \vdots & & & \ddots & & \vdots\\ a_2 & a_3 & a_4 & \cdots & a_k & a_1\end{pmatrix}.$$
Consider the array of random $n\times n$ matrices $\{X^{(\nu,\mu)}=(X_{jl}^{(\nu,\mu)}),\ \nu=1,\dots,k,\ \mu=1,\dots,m\}$. We shall assume that all these random variables are jointly independent. With regard to the random variables $X_{jl}^{(\nu,\mu)}$, we shall assume that the following conditions are satisfied:
  • Condition $E(1)$: for any $n\ge1$ and $j,l=1,\dots,n$, $\nu=1,\dots,k$, $\mu=1,\dots,m$,
    $$\mathsf EX_{jl}^{(\nu,\mu)}=0,\qquad \mathsf E|X_{jl}^{(\nu,\mu)}|^2=1;$$
  • Condition $E(2)$: for any $\nu=1,\dots,k$, $\mu=1,\dots,m$ and any $\tau>0$,
    $$L_n^{(\nu,\mu)}(\tau)=\frac1{n^2}\sum_{j=1}^n\sum_{l=1}^n\mathsf E|X_{jl}^{(\nu,\mu)}|^2\,\mathbb 1\{|X_{jl}^{(\nu,\mu)}|>\tau\sqrt n\}\to0\ \text{ as }n\to\infty.$$
Let us introduce the matrices
$$Y^{(\mu)}=\frac1{\sqrt n}C\big(X^{(1,\mu)},X^{(2,\mu)},\dots,X^{(k,\mu)}\big).$$
We shall investigate the distribution of singular values of the matrix
$$W=\prod_{\mu=1}^mY^{(\mu)}.$$
Denote the singular values of the matrix $W$ by $s_1\ge s_2\ge\cdots\ge s_{kn}$, and define the spectral distribution function of the matrix $WW^*$ by
$$G_n^{(m)}(x)=\frac1{kn}\sum_{j=1}^{kn}\mathbb 1\{s_j^2\le x\}.$$
We prove the following result.
Theorem 3.
Under conditions E ( 1 ) E ( 2 ) , we have
lim n G n ( m ) ( x ) = G ( m ) ( x ) ,
where $G^{(m)}(x)$ is the distribution function described via its Stieltjes transform $s^{(m)}(z)$, which satisfies the equation
$$1+zs^{(m)}(z)+(-1)^{m+1}z^m\big(s^{(m)}(z)\big)^{m+1}=0.$$
Proof. 
First of all, we note that the matrices $Y^{(\mu)}$ can be transformed into block-diagonal form by applying the block matrix $U$ defined as
$$U=\frac1{\sqrt k}\big(\omega^{(j-1)(l-1)}I_n\big)_{j,l=1}^k,$$
where $\omega=e^{\frac{2\pi i}k}$ and $I_n$ denotes the unit matrix of order $n$. Consequently, $W$ becomes a block circulant matrix that can also be reduced to block-diagonal form with $U$:
diag { B ( 1 ) , B ( 2 ) , , B ( k ) } = U W U * ,
where diag { B ( 1 ) , B ( 2 ) , , B ( k ) } denotes a diagonal-block matrix with blocks B ( ν ) of order n × n . These blocks are the product of matrices B ( ν , μ ) :
$$B^{(\nu)}=\prod_{\mu=1}^mB^{(\nu,\mu)},$$
with
$$B^{(\nu,\mu)}=\frac1{\sqrt{nk}}\sum_{r=1}^k\omega^{(r-1)(\nu-1)}X^{(r,\mu)}.$$
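For scalar blocks ($n=1$) the diagonalization reduces to the classical fact that the discrete Fourier transform diagonalizes a circulant, with eigenvalues $\sum_r a_r\omega^{(r-1)(\nu-1)}$. A quick check (plain Python):

```python
import cmath

k = 5
a = [2.0, -1.0, 0.5, 3.0, 1.5]                  # first row of the (scalar) circulant
C = [[a[(l - j) % k] for l in range(k)] for j in range(k)]
omega = cmath.exp(2j * cmath.pi / k)

for nu in range(k):                              # nu here is 0-based, i.e. nu = v - 1
    lam = sum(a[r] * omega ** (r * nu) for r in range(k))
    v = [omega ** (l * nu) for l in range(k)]    # Fourier eigenvector
    Cv = [sum(C[j][l] * v[l] for l in range(k)) for j in range(k)]
    assert all(abs(Cv[j] - lam * v[j]) < 1e-9 for j in range(k))
```

No symmetry of the sequence $a_1,\dots,a_k$ is needed here, which is why the blocks $B^{(\nu,\mu)}$ involve complex exponentials rather than cosines.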
Introduce matrix
V = diag { B ( 1 ) B ( 1 ) * , B ( 2 ) B ( 2 ) * , , B ( k ) B ( k ) * } .
The spectrum of matrix V coincides with the spectrum of W W * . Furthermore, it is the union of the spectra of matrices B ( ν ) B ( ν ) * for ν = 1 , , k .
Let $(s_l^{(\nu)})^2$ denote the eigenvalues of $B^{(\nu)}B^{(\nu)*}$ for $l=1,\dots,n$, $\nu=1,\dots,k$. We define the empirical spectral distribution function $G_n^{(\nu)}(x)$ as
$$G_n^{(\nu)}(x)=\frac1n\sum_{l=1}^n\mathbb 1\{(s_l^{(\nu)})^2\le x\}.$$
It is simple to see that
$$G_n^{(m)}(x)=\frac1k\sum_{\nu=1}^kG_n^{(\nu)}(x).\qquad(10)$$
We define s n ( m ) ( z ) and s n ( ν ) ( z ) as the Stieltjes transforms of the distribution functions G n ( m ) ( x ) and G n ( ν ) ( x ) , respectively. Using equality (10), we have
$$s_n^{(m)}(z)=\frac1k\sum_{\nu=1}^ks_n^{(\nu)}(z).\qquad(11)$$
Consider the entries of $B^{(\nu,\mu)}=(B_{jl}^{(\nu,\mu)})_{j,l=1}^n$ for $\mu=1,\dots,m$. It is straightforward to check that
$$\mathsf E|B_{jl}^{(\nu,\mu)}|^2=\frac1n.$$
Since, for each fixed value of $\nu$, the random matrices $B^{(\nu,1)},B^{(\nu,2)},\dots,B^{(\nu,m)}$ are jointly independent and the entries of each individual matrix are also independent, we may apply [16] (Theorem 1) in the case of square matrices, where $y_1=y_2=\cdots=y_m=1$. We need to check the Lindeberg condition for the random variables $B_{jl}^{(\nu,\mu)}$, for $j,l=1,\dots,n$ and $\mu=1,\dots,m$. To verify this condition, we can apply Lemma 1 in a similar manner to the symmetric case. Note that
$$\mathsf E|B_{jl}^{(\nu,\mu)}|^2\,\mathbb 1\{|B_{jl}^{(\nu,\mu)}|>\tau\}\le\frac1n\,\mathsf E\Big(\sum_{r=1}^k|X_{jl}^{(r,\mu)}|^2\Big)\mathbb 1\Big\{\sum_{r=1}^k|X_{jl}^{(r,\mu)}|>\tau\sqrt{kn}\Big\}\le\frac kn\sum_{r=1}^k\mathsf E|X_{jl}^{(r,\mu)}|^2\,\mathbb 1\{|X_{jl}^{(r,\mu)}|>\tau\sqrt n/k\}.$$
The last inequality and condition $E(2)$ together imply that
$$\frac1n\sum_{j,l=1}^n\mathsf E|B_{jl}^{(\nu,\mu)}|^2\,\mathbb 1\{|B_{jl}^{(\nu,\mu)}|>\tau\}\le k\sum_{r=1}^k\frac1{n^2}\sum_{j,l=1}^n\mathsf E|X_{jl}^{(r,\mu)}|^2\,\mathbb 1\{|X_{jl}^{(r,\mu)}|>\tau\sqrt n/k\}=k\sum_{r=1}^kL_n^{(r,\mu)}(\tau/k)\to0\ \text{ as }n\to\infty.$$
From [16] (Theorem 1), it follows that
$$s_n^{(\nu)}(z)\to s^{(m)}(z)\quad\text{for any }z\in\mathbb C_+.$$
Equality (11) now implies the result of the theorem. Thus, Theorem 3 is proved. □

Author Contributions

Writing—original draft, A.T., S.G. and D.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was prepared within the framework of the state task of the Institute of Physics and Mathematics, FRC Komi SC UB RAS (research topic № 122040600066-5).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bolla, M.; Szabados, T.; Baranyi, M.; Abdelkhalek, F. Block circulant matrices and the spectra of multivariate stationary sequences. Spec. Matrices 2021, 9, 36–51.
  2. Elspas, B.; Turner, J. Graphs with circulant adjacency matrices. J. Combin. Theory 1970, 9, 297–307.
  3. Gan, H.S.; Mokhtar, H.; Zhou, S. Forwarding and optical indices of 4-regular circulant networks. J. Discret. Algorithms 2015, 35, 27–39.
  4. Liu, X.; Zhou, S. Eigenvalues of Cayley Graphs. Electron. J. Comb. 2022, 29, 1–164.
  5. Bianchi, T.; Magli, E. Analysis of the Security of Compressed Sensing with Circulant Matrices. In Proceedings of the 2014 IEEE International Workshop on Information Forensics and Security, Atlanta, GA, USA, 3–5 December 2014; pp. 173–178.
  6. Yu, N.Y. Indistinguishability of Compressed Encryption with Circulant Matrices for Wireless Security. IEEE Signal Process. Lett. 2017, 24, 181–185.
  7. Gupta, T.V.S.; Gandhi, A.S. Compressive oversampling using circulant matrices for lossy wireless channels. In Proceedings of the 2016 11th International Conference on Industrial and Information Systems (ICIIS), Roorkee, India, 3–4 December 2016; pp. 507–511.
  8. Oraby, T. The spectral laws of Hermitian block-matrices with large random blocks. Electron. Commun. Probab. 2007, 12, 465–476.
  9. Ding, X. Spectral analysis of large block random matrices with rectangular blocks. Lith. Math. J. 2014, 54, 115–126.
  10. Kologlu, M.; Kopp, G.S.; Miller, S.J. The limiting spectral measure for ensembles of symmetric block circulant matrices. J. Theor. Probab. 2013, 26, 1020–1060.
  11. Oraby, T. The Limiting Spectra of Girko’s Block-Matrix. J. Theor. Probab. 2007, 20, 959–970.
  12. Far, R.; Oraby, T.; Bryc, W.; Speicher, R. Spectra of large block matrices. arXiv 2006, arXiv:cs/0610045.
  13. Tee, G.V. Eigenvectors of block circulant and alternating circulant matrices. Res. Lett. Inf. Math. Sci. 2005, 8, 123–142.
  14. Götze, F.; Naumov, A.; Tikhomirov, A. Limit Theorems for Two Classes of Random Matrices with Dependent Entries. Theory Probab. Its Appl. 2015, 59, 23–39.
  15. Götze, F.; Tikhomirov, A. Limit theorems for spectra of random matrices with martingale structure. Theory Probab. Its Appl. 2007, 51, 42–64.
  16. Alexeev, N.V.; Götze, F.; Tikhomirov, A.N. On the singular spectrum of powers and products of random matrices. Dokl. Math. 2010, 82, 505–507.
Figure 1. Distribution function G ( k ) ( x ) for k = 3 , 7 (left plot) and k = 4 , 8 (right plot).