Article

A Strong Limit Theorem for the Largest Entries of Sample Correlation Matrices under a Strong Mixing Assumption

School of Mathematics, Jilin University, Changchun 130012, China
*
Author to whom correspondence should be addressed.
Axioms 2023, 12(7), 657; https://doi.org/10.3390/axioms12070657
Submission received: 17 May 2023 / Revised: 19 June 2023 / Accepted: 30 June 2023 / Published: 2 July 2023
(This article belongs to the Special Issue Probability, Statistics and Estimation)

Abstract

We are interested in an $n \times p$ matrix $X_n$ whose $n$ rows are strictly stationary $\alpha$-mixing random vectors and whose $p$ columns are independent and identically distributed random vectors; $p = p_n$ goes to infinity as $n \to \infty$, satisfying $0 < c_1 \le p_n/n^{\tau} \le c_2 < \infty$, where $\tau > 0$ and $c_2 \ge c_1 > 0$. We obtain a logarithmic law for $L_n = \max_{1 \le i < j \le p_n} |\rho_{ij}|$ using the Chen–Stein Poisson approximation method, where $\rho_{ij}$ denotes the sample correlation coefficient between the $i$th column and the $j$th column of $X_n$.

1. Introduction

Random matrix theory is used in a variety of fields, from physics to various areas of mathematics. Ref. [1] tests the structure of the regression coefficient matrix under the multivariate linear regression model using the LRT statistic. The correlation coefficient matrix is a crucial statistic in multivariate analysis, and its maximum likelihood estimator is the sample correlation matrix. Consider an $n \times p$ matrix $X_n = (X_{k,i})$, $1 \le k \le n$, $1 \le i \le p$, representing observations from a specific multivariate distribution. It has an unknown mean $\mu = (\mu_1, \ldots, \mu_p)$, unknown correlation coefficient matrix $R$, and unknown covariance matrix $\Sigma$.
This paper establishes the logarithmic law for the largest entries of sample correlation matrices when an $\alpha$-mixing assumption holds. This study extends the statistical hypothesis testing problem analyzed by [2]. Ref. [2] considered the statistical test in which the sample size $n$ and the sample dimension $p$ are both large; the null hypothesis is $H_0: R = I_p$, where $I_p$ is the identity matrix. The null hypothesis of [2] postulates that the components of $X_n = (X^{(1)}, \ldots, X^{(p)})$ are uncorrelated and, in the case where the rows of $X_n$ have a $p$-variate normal distribution, independent. The test statistic of [2] is
$$L_n = \max_{1 \le i < j \le p_n} |\rho_{ij}|, \quad (1)$$
where
$$\rho_{ij} = \frac{\sum_{k=1}^{n} (X_{k,i} - \bar{X}^{(i)})(X_{k,j} - \bar{X}^{(j)})}{\sqrt{\sum_{k=1}^{n} (X_{k,i} - \bar{X}^{(i)})^2}\,\sqrt{\sum_{k=1}^{n} (X_{k,j} - \bar{X}^{(j)})^2}} \quad (2)$$
is the Pearson correlation coefficient between $X^{(i)}$ and $X^{(j)}$. Then, $\Gamma_n := (\rho_{ij})$ is the sample correlation matrix generated by $X_n$.
Let $n \ge 2$ and let $\bar{X}^{(i)} = \frac{1}{n}\sum_{k=1}^{n} X_{k,i}$, $1 \le i \le p$. We have
$$\rho_{ij} = \frac{(X^{(i)} - \bar{X}^{(i)} e)'(X^{(j)} - \bar{X}^{(j)} e)}{\|X^{(i)} - \bar{X}^{(i)} e\| \cdot \|X^{(j)} - \bar{X}^{(j)} e\|}, \quad (3)$$
where $e = (1, \ldots, 1)' \in \mathbb{R}^n$ and $\|\cdot\|$ represents the Euclidean norm in $\mathbb{R}^n$.
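As a quick illustration (our own sketch, not part of the paper), $L_n$ can be computed directly from a data matrix with NumPy; the helper name `coherence_L_n` is ours.

```python
import numpy as np

def coherence_L_n(X):
    """Largest absolute off-diagonal entry of the sample correlation
    matrix built from the columns of the n-by-p data matrix X."""
    R = np.corrcoef(X, rowvar=False)   # p-by-p matrix of Pearson coefficients rho_ij
    np.fill_diagonal(R, 0.0)           # exclude the unit diagonal (i = j)
    return float(np.abs(R).max())

# Example: n = 200 observations of p = 50 independent standard normal columns.
rng = np.random.default_rng(0)
print(coherence_L_n(rng.standard_normal((200, 50))))
```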
Ref. [2] proved the following theorems, focusing on the test statistic $L_n$ as $p = p_n \to \infty$, where $X_n = \{X_{k,i}; 1 \le k \le n, 1 \le i \le p\}$ is a set of i.i.d. random variables: the $n$ rows are observations from a certain multivariate distribution, and each of the $p$ columns consists of $n$ observations of one variable of the population distribution.
Theorem 1. 
Suppose that $\{\xi, X_{k,i}; k \ge 1, i \ge 1\}$ are i.i.d. random variables. Let $X_n = (X_{k,i})$ be an $n \times p$ matrix. Suppose that, for any $\varepsilon > 0$, $E|\xi|^{30-\varepsilon} < \infty$. If $n/p \to \gamma \in (0, \infty)$, then
$$\lim_{n\to\infty} \sqrt{\frac{n}{\log n}}\, L_n = 2 \quad a.s.$$
Theorem 2. 
Suppose that $\{\xi, X_{k,i}; k \ge 1, i \ge 1\}$ are i.i.d. random variables. Let $X_n = (X_{k,i})$ be an $n \times p$ matrix. Suppose that, for some $\varepsilon > 0$, $E|\xi|^{30+\varepsilon} < \infty$. If $n/p \to \gamma \in (0, \infty)$, then, for any $y \in \mathbb{R}$,
$$P\left(n L_n^2 - 4\log n + \log(\log n) \le y\right) \to e^{-K e^{-y/2}}$$
as $n \to \infty$, where $K = (\gamma^2 \sqrt{8\pi})^{-1}$.
Subsequently, many scholars have considered the limiting properties of the largest entries of sample correlation matrices under weaker moment conditions. Ref. [3] showed that the moment condition in Theorem 2 can be weakened to $x^6 P(|X_{1,1}X_{1,2}| \ge x) \to 0$ as $x \to \infty$ under $\limsup_{n\to\infty} p/n < \infty$. Ref. [4] showed that Theorem 2 holds under the moment condition $(x^6/\log^3 x)\, P(|X_{1,1}X_{1,2}| \ge x) \to 0$ as $x \to \infty$. Ref. [5] improved the moment condition and obtained limit theorems for $L_n$. Refs. [6,7] showed the corresponding results when $p/n$ is bounded away from zero and infinity, under more relaxed moment assumptions. When $p$ tends to infinity much faster than $n$, some scholars have also pursued the limiting properties of the largest entries of sample correlation matrices, being interested in the relationship between the sample dimension $p$ and the sample size $n$. Ref. [8] obtained the limit theorem when $\log p = o(n^{\alpha})$ for $0 < \alpha \le 1$ without the Gaussian assumption. Most of this work is based on the assumption of sample independence; assuming independence of the samples is reasonable, but such independence is difficult to verify. Therefore, it is necessary to study the largest entries of sample correlation matrices under mixing assumptions. Ref. [9] showed the asymptotic distribution of $L_n$ under a $\varphi$-mixing assumption. Ref. [10] showed the asymptotic distribution of $L_n$ under an $\alpha$-mixing assumption. In the dependent case, the logarithmic law for $L_n$ remains largely unknown.
We will establish a logarithmic law for $L_n$ under an $\alpha$-mixing assumption. Let $\{X_n, n \ge 1\}$ be a sequence of random variables on a probability space $(\Omega, \mathcal{F}, P)$. Let $\mathcal{F}_a^b$ represent the $\sigma$-field generated by the random variables $X_a, X_{a+1}, \ldots, X_b$. For any two $\sigma$-fields $\mathcal{A}, \mathcal{B} \subset \mathcal{F}$, set $\alpha(\mathcal{A}, \mathcal{B}) := \sup\{|P(AB) - P(A)P(B)|;\ A \in \mathcal{A}, B \in \mathcal{B}\}$. The strong mixing coefficients of $\{X_n, n \ge 1\}$ are defined as $\alpha(n) := \sup_{k\ge1} \alpha(\mathcal{F}_1^k, \mathcal{F}_{k+n}^{\infty})$; if $\alpha(n) \to 0$, $\{X_n, n \ge 1\}$ is called $\alpha$-mixing (see [11]).
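To make this setting concrete, here is a minimal sketch (ours, not from the paper) of a matrix whose rows form a strictly stationary $\alpha$-mixing sequence while the columns are i.i.d.: every column is a stationary Gaussian AR(1) path. For a Gaussian AR(1) process the mixing coefficients decay geometrically in the autoregression parameter, so a bound of the form $\alpha(n) = O(e^{-n})$ holds when that parameter is at most about $e^{-1}$; the function name `ar1_matrix` is our own.

```python
import numpy as np

def ar1_matrix(n, p, phi=0.3, rng=None):
    """n-by-p matrix: column i is a stationary Gaussian AR(1) path
    X[k, i] = phi * X[k-1, i] + sqrt(1 - phi**2) * eps[k, i],
    so the rows are strictly stationary and alpha-mixing and the
    columns are i.i.d. random vectors."""
    rng = rng or np.random.default_rng()
    X = np.empty((n, p))
    X[0] = rng.standard_normal(p)        # stationary start, Var(X[0, i]) = 1
    for k in range(1, n):
        X[k] = phi * X[k - 1] + np.sqrt(1 - phi**2) * rng.standard_normal(p)
    return X
```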
The first section introduces the background of, and the motivation for, this study. In Section 2, we show the main result of this paper. In Section 3, we introduce notations and present some classical or elementary facts, which we include for easier referencing. The proof of the main theorem is discussed in Section 4 and is the main novel ingredient of the paper. In Section 5, we present the significance of the main result and its applications.

2. Main Result

Assumption 1. 
Let $X_k = (X_{k,1}, X_{k,2}, \ldots, X_{k,p})$ be a random vector with $Var(X_{1,1}) = 1$, and assume $\{X_k; 1 \le k \le n\}$ is a strictly stationary $\alpha$-mixing random vector sequence satisfying $\alpha(n) = O(e^{-n})$.
Assumption 2. 
Let $X^{(i)} = (X_{1,i}, X_{2,i}, \ldots, X_{n,i})'$ be a random vector, and assume $\{X^{(i)}; 1 \le i \le p\}$ is an i.i.d. random vector sequence.
Remark 1. 
Under $H_0: R = I_p$, the columns $X^{(1)}, X^{(2)}, \ldots, X^{(p)}$ are independent. Let $B_i := \{X_{k,i}; 1 \le k \le n\}$ be a random sample of $X^{(i)}$; it is then reasonable to suppose that $\{B_i; 1 \le i \le p\}$ is independent. Therefore, under Assumption 2 we can reasonably obtain the logarithmic law of $L_n$.
Theorem 3. 
Under Assumptions 1 and 2, let $S_n = \sum_{k=1}^{n} X_{k,1}X_{k,2}$ and define $\sigma^2 := \lim_{n\to\infty} E S_n^2/n$. Suppose that $E X_{1,1} = 0$ and $0 < c_1 \le p_n/n^{\tau} \le c_2 < \infty$, where $\tau > 0$, $c_2 \ge c_1 > 0$, and that $E|X_{1,1}|^{12+16\tau+\varepsilon} < \infty$ for some $\varepsilon > 0$. Then,
$$\lim_{n\to\infty} \sqrt{\frac{n}{\log p_n}}\, L_n = 2\sigma \quad a.s.$$
Corollary 1. 
Suppose that $\{\xi, X_{k,i}; k \ge 1, i \ge 1\}$ are i.i.d. random variables with $E X_{1,1} = 0$. Let $X_n = (X_{k,i})$ be an $n \times p$ matrix, let $S_n = \sum_{k=1}^{n} X_{k,1}X_{k,2}$, and define $\sigma^2 := \lim_{n\to\infty} E S_n^2/n$. Suppose that $0 < c_1 \le p_n/n^{\tau} \le c_2 < \infty$, where $\tau > 0$, $c_2 \ge c_1 > 0$, and that $E|X_{1,1}|^{12+16\tau+\varepsilon} < \infty$ for some $\varepsilon > 0$. Then,
$$\lim_{n\to\infty} \sqrt{\frac{n}{\log p_n}}\, L_n = 2\sigma \quad a.s.$$
Remark 2. 
Theorem 3 considers $p_n = O(n^{\tau})$, where $\tau > 0$, whereas Theorems 1 and 2 only considered the case $\tau = 1$. In Theorem 3, the $n$ rows of $X_n$ are strictly stationary $\alpha$-mixing random vectors; because of the dependence, this case is more complex than the i.i.d. case considered in Theorems 1 and 2. Under the i.i.d. assumption, we obtain Corollary 1, which generalizes Theorem 1 from $n$ and $p_n$ of the same order to $p_n = O(n^{\tau})$; moreover, the moment condition of Corollary 1 is weaker than that of Theorem 1.
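Reusing the hypothetical `ar1_matrix` and `coherence_L_n` helpers sketched earlier, Theorem 3 can be checked numerically. For that Gaussian AR(1) model, $\sigma^2 = \sum_{h=-\infty}^{\infty} \varphi^{2|h|} = (1+\varphi^2)/(1-\varphi^2)$, so $\sqrt{n/\log p_n}\, L_n$ should be near $2\sigma$; since the convergence is logarithmic, the agreement at moderate $n$ is only rough.

```python
import numpy as np

n, tau, phi = 2000, 1.0, 0.3
p = int(n ** tau)                               # p_n of exact order n^tau
sigma = np.sqrt((1 + phi**2) / (1 - phi**2))    # sigma^2 = (1 + phi^2)/(1 - phi^2) for AR(1)

rng = np.random.default_rng(1)
X = ar1_matrix(n, p, phi, rng)                  # rows alpha-mixing, columns i.i.d.
stat = np.sqrt(n / np.log(p)) * coherence_L_n(X)
print(f"sqrt(n/log p_n) * L_n = {stat:.3f}; theoretical limit 2*sigma = {2*sigma:.3f}")
```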

3. Preliminaries

The proof of the main result is intricate. We gather and establish several technical tools that contribute to it; the subsequent lemmas play a crucial role in validating our findings. Lemma 1 can be found in [12].
Lemma 1. 
Suppose X and Y are random variables taking their values in Borel spaces $S_1$ and $S_2$, respectively, and suppose U is a uniform-$[0,1]$ random variable independent of $(X, Y)$. Assume N is a positive integer and $\mathcal{H} = \{H_1, H_2, \ldots, H_N\}$ is a measurable partition of $S_2$. Then there exists an $S_2$-valued random variable $Y^* = f(X, Y, U)$, where f is a measurable function from $S_1 \times S_2 \times [0,1]$ into $S_2$, such that
(i) $Y^*$ is independent of X;
(ii) the probability distributions of $Y^*$ and Y on $S_2$ are identical;
(iii) $P(Y^* \text{ and } Y \text{ are not in the same } H_i \in \mathcal{H}) \le (8N)^{1/2}\, \alpha(\mathcal{B}(X), \mathcal{B}(Y))$.
Lemma 2. 
Let $\{X_i; i \ge 1\}$ be a strictly stationary $\alpha$-mixing sequence of random variables with $E X_i = 0$. If $\sum_{i=1}^{\infty} [\alpha(i)]^{\frac{g-2}{g}} < \infty$ for some $g > 2$ and $\sum_{i=1}^{\infty} i^{1+h} [\alpha(i)]^{1-h} < \infty$ for some $0 < h < 1$, then there exists $K \in \mathbb{R}$ such that
$$E\left|\sum_{i=1}^{n} X_i\right|^g \le K n^{g/2} \sup_{k\ge1} E|X_k|^g, \quad n = 1, 2, \ldots$$
Proof of Lemma 2. 
Let $Y_i = X_i / \sup_{k\ge1} \|X_k\|_g$, where $\|X_k\|_g = (E|X_k|^g)^{1/g}$. Then, we have $E|Y_i|^g = E|X_i|^g / \sup_{k\ge1} E|X_k|^g \le 1 < \infty$. Using Theorem 1 of [13], we have
$$E\left|\sum_{i=1}^{n} Y_i\right|^g \le K n^{g/2}, \quad n = 1, 2, \ldots$$
Then, we can obtain
$$E\left|\sum_{i=1}^{n} X_i\right|^g \le K n^{g/2} \sup_{k\ge1} E|X_k|^g, \quad n = 1, 2, \ldots$$
 □
Lemma 3. 
Assume $\{X_n; n \ge 1\}$ is a sequence of $\alpha$-mixing random variables, $X \in \mathcal{F}_{-\infty}^{k}$, $Y \in \mathcal{F}_{k+n}^{\infty}$, $E|X|^p < \infty$, $E|Y|^q < \infty$, and $\frac{1}{p} + \frac{1}{q} < 1$. Then
$$|E XY - E X\, E Y| \le 10 \|X\|_p \|Y\|_q \left(\alpha(n)\right)^{1 - \frac{1}{p} - \frac{1}{q}}.$$
Proof of Lemma 3. 
See Corollary 6.1 in [11]. □
Lemma 4. 
For any sequence of independent random variables $\{\xi_n; n \ge 1\}$ with $E\xi_n = 0$ and finite variances, there exists a sequence of independent normal variables $\{\eta_n; n \ge 1\}$ with $E\eta_n = 0$ and $E\eta_n^2 = E\xi_n^2$, such that
$$P\left(\max_{k\le n}\left|\sum_{i=1}^{k} \xi_i - \sum_{i=1}^{k} \eta_i\right| \ge y\right) \le (AQ)^Q y^{-Q} \sum_{i=1}^{n} E|\xi_i|^Q$$
for all $Q > 2$ and $y > 0$, whenever $E|\xi_i|^Q < \infty$, $i = 1, 2, \ldots, n$. Here, A is a universal constant.
Proof of Lemma 4. 
See [14]. □
Lemma 5. 
Let $\{\eta_k; 1 \le k \le n\}$ be a sequence of independent symmetric random variables and $S_n = \sum_{k=1}^{n} \eta_k$. Then, for each integer $j \ge 1$ there exist positive numbers $C_j$ and $D_j$ depending only on j such that, for all $t > 0$,
$$P\left(|S_n| \ge 2jt\right) \le C_j P\left(\max_{1\le k\le n} |\eta_k| \ge t\right) + D_j \left(P(|S_n| \ge t)\right)^j.$$
Proof of Lemma 5. 
See [15]. □
Lemma 6. 
Let $\{X_n; n \ge 1\}$ be a sequence of independent random variables and $S_k = \sum_{i=1}^{k} X_i$. For $x > 0$,
$$P\left(\max_{1\le k\le n} |S_k| > 2x\right) \le \frac{P(|S_n| > x)}{\min_{1\le k\le n} P(|S_n - S_k| \le x)}.$$
Proof of Lemma 6. 
See [16]. □
Lemma 7. 
Let $\{X_n; n \ge 1\}$ be a sequence of independent random variables with $E|X_n|^g < \infty$ for some $g \ge 2$ and all $n \ge 1$; then, there exists a positive constant $C(g)$ depending only on g, such that
$$E\max_{1\le i\le n}\left|\sum_{k=1}^{i} (X_k - E X_k)\right|^g \le C(g)\left\{\sum_{k=1}^{n} E|X_k - E X_k|^g + \left(\sum_{k=1}^{n} E|X_k - E X_k|^2\right)^{g/2}\right\}.$$
Proof of Lemma 7. 
See [17]. □
Lemma 8. 
If $\{\eta_n; n \ge 1\}$ is an i.i.d. sequence of random variables with $E\eta_1 = 0$ and $E|\eta_1|^p < \infty$, $p \ge 1$, and $S_n = \sum_{i=1}^{n} \eta_i$, then
$$E|S_n|^p = \begin{cases} O(n^{p/2}), & \text{if } p \ge 2, \\ O(n), & \text{if } 1 \le p < 2. \end{cases}$$
Proof of Lemma 8. 
See [16]. □
The following is the Chen–Stein method, as shown in [18].
Lemma 9. 
Let $\{\eta_\alpha; \alpha \in I\}$ be random variables on an index set $I$, and let $\{B_\alpha; \alpha \in I\}$ be a collection of subsets of $I$, so that $B_\alpha \subset I$ for each $\alpha \in I$. For any $t \in \mathbb{R}$, let $\lambda = \sum_{\alpha\in I} P(\eta_\alpha > t)$. Then,
$$\left|P\left(\max_{\alpha\in I} \eta_\alpha \le t\right) - e^{-\lambda}\right| \le (1 \wedge \lambda^{-1})(b_1 + b_2 + b_3),$$
where
$$b_1 = \sum_{\alpha\in I}\sum_{\beta\in B_\alpha} P(\eta_\alpha > t)\, P(\eta_\beta > t), \quad b_2 = \sum_{\alpha\in I}\sum_{\alpha\ne\beta\in B_\alpha} P(\eta_\alpha > t,\ \eta_\beta > t), \quad b_3 = \sum_{\alpha\in I} E\left|P\left(\eta_\alpha > t \mid \sigma(\eta_\beta, \beta\notin B_\alpha)\right) - P(\eta_\alpha > t)\right|,$$
and $\sigma(\eta_\beta; \beta\notin B_\alpha)$ is the $\sigma$-algebra generated by $\{\eta_\beta; \beta\notin B_\alpha\}$. If $\eta_\alpha$ is independent of $\{\eta_\beta; \beta\notin B_\alpha\}$ for each $\alpha$, then $b_3$ vanishes.
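As a sanity check of Lemma 9 (our illustration, not from [18]): in the independent case $b_3 = 0$, and for i.i.d. variables $P(\max_{\alpha} \eta_{\alpha} \le t) = (1 - P(\eta_1 > t))^N \approx e^{-\lambda}$. The Monte Carlo sketch below compares the two quantities for standard normals.

```python
import numpy as np
from math import erf, exp, sqrt

rng = np.random.default_rng(2)
N, t, reps = 500, 3.0, 5000
tail = 0.5 * (1.0 - erf(t / sqrt(2.0)))    # P(N(0,1) > t)
lam = N * tail                             # lambda = sum of tail probabilities over I
hits = rng.standard_normal((reps, N)).max(axis=1) <= t
print(f"empirical P(max <= t) = {hits.mean():.4f}, e^-lambda = {exp(-lam):.4f}")
```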
Now we define
$$W_n = \max_{1\le i<j\le p_n}\left|\sum_{k=1}^{n} X_{k,i}X_{k,j}\right|, \quad n \ge 1. \quad (4)$$
Define $\|A\| = \max_{1\le i\le j\le n} |a_{i,j}|$ for any square matrix $A = (a_{i,j})$.
Lemma 10. 
Recall $X^{(i)}$ in (1). For each i, let $h_i = \|X^{(i)} - \bar{X}^{(i)} e\| / \sqrt{n}$. Then,
$$\|n\Gamma_n - X_n' X_n\| \le \left(b_{n,1}^2 + 2 b_{n,1}\right)\frac{W_n}{b_{n,3}^2} + \frac{n\, b_{n,4}^2}{b_{n,3}^2},$$
where
$$b_{n,1} = \max_{1\le i\le p_n} |h_i - 1|, \quad W_n = \max_{1\le i<j\le p_n} \left|(X^{(i)})' X^{(j)}\right|, \quad b_{n,3} = \min_{1\le i\le p_n} h_i, \quad b_{n,4} = \max_{1\le i\le p_n} |\bar{X}^{(i)}|.$$
Proof of Lemma 10. 
See [19]. □

4. Proofs

The next lemma refers to Lemma 2 in [20]; we generalize its result to the α -mixing condition.
Lemma 11. 
Under Assumptions 1 and 2, let $a > 1/2$, $b \ge 0$ and $M > 0$ be constants. Then
$$(1)\ E|X_{1,1}|^{(1+b)/a} < \infty; \qquad (2)\ c = \begin{cases} E X_{1,1}, & \text{if } a \le 1, \\ \text{any number}, & \text{if } a > 1 \end{cases}$$
are sufficient conditions for
$$\max_{j\le M n^b}\left|\frac{1}{n^a}\sum_{i=1}^{n} (X_{i,j} - c)\right| \to 0 \quad a.s. \text{ as } n \to \infty. \quad (5)$$
Proof of Lemma 11. 
Without loss of generality, suppose $c = 0$. For $\eta > 0$ and $N \ge 1$,
$$P\left(\max_{j\le Mn^b} \frac{1}{n^a}\left|\sum_{i=1}^{n} X_{i,j}\right| \ge \eta\ \mathrm{i.o.}\right) \le \lim_{N\to\infty}\sum_{k\ge N} P\left(\max_{2^{k-1}<n\le 2^k}\max_{j\le M2^{kb}}\left|\sum_{i=1}^{n} X_{i,j}\right| \ge \eta\, 2^{(k-1)a}\right) \le \lim_{N\to\infty}\sum_{k\ge N} M 2^{kb}\, P\left(\max_{2^{k-1}<n\le 2^k}\left|\sum_{i=1}^{n} X_{i,1}\right| \ge 2^{ka}\eta'\right),$$
where $\eta' = 2^{-a}\eta$. To conclude that the probability on the left-hand side of this inequality is equal to zero, it is sufficient to show that
$$\sum_{k=1}^{\infty} 2^{kb}\, P\left(\max_{n\le 2^k}\left|\sum_{i=1}^{n} X_{i,1}\right| \ge 2^{ka}\eta'\right) < \infty. \quad (6)$$
Let $Y_{i,k} = X_{i,1} I\{|X_{i,1}| < 2^{ka}\}$ and $Z_{i,k} = Y_{i,k} - E Y_{i,k}$. Then $|Z_{i,k}| \le 2^{ka+1}$ and $E Z_{i,k} = 0$. Let g be an integer such that $g(a - 1/2) > b + 2a$. It is easy to see that
$$X_{i,1} = X_{i,1} I\{|X_{i,1}| < 2^{ka}\} + X_{i,1} I\{|X_{i,1}| \ge 2^{ka}\} = Z_{i,k} + E Y_{i,k} + X_{i,1} I\{|X_{i,1}| \ge 2^{ka}\}.$$
We have that
$$\sum_{k=1}^{\infty} 2^{kb}\, P\left(\max_{n\le 2^k}\left|\sum_{i=1}^{n} X_{i,1}\right| \ge 2^{ka}\eta'\right) \le \sum_{k=1}^{\infty} 2^{kb}\, P\left(\max_{n\le 2^k}\left|\sum_{i=1}^{n} Z_{i,k}\right| \ge \frac{2^{ka}\eta'}{4}\right) + \sum_{k=1}^{\infty} 2^{kb}\, P\left(\max_{n\le 2^k}\left|\sum_{i=1}^{n} E Y_{i,k}\right| \ge \frac{2^{ka}\eta'}{2}\right) + \sum_{k=1}^{\infty} 2^{kb}\, P\left(\max_{n\le 2^k}\left|\sum_{i=1}^{n} X_{i,1} I\{|X_{i,1}| \ge 2^{ka}\}\right| \ge \frac{2^{ka}\eta'}{4}\right).$$
Then, note that $\{Z_{i,k}\}$ is a sequence of $\alpha$-mixing random variables by Assumption 1. Using the Markov inequality and Lemma 2, we obtain
$$P\left(\max_{n\le 2^k}\left|\sum_{i=1}^{n} Z_{i,k}\right| \ge \frac{\eta' 2^{ka}}{4}\right) \le C\,\frac{E\left|\sum_{i=1}^{2^k} Z_{i,k}\right|^g}{2^{kga}} \le C\,\frac{2^{kg/2} E|Z_{1,k}|^g}{2^{kga}}.$$
It is easy to see that $ga - b - 1 > g(a - 1/2) - (b + 2a) > 0$ and $E Z_{1,k}^2 \le E X_{1,1}^2 < \infty$, so
$$\sum_{k=1}^{\infty} 2^{kb - kga + kg/2} E|Z_{1,k}|^g \le C \sum_{k=1}^{\infty} 2^{k(b - ga + g/2)}\left(E|X_{1,1}|^g I\{|X_{1,1}| < 2^{ka}\} + 1\right) \le C \sum_{k=1}^{\infty} 2^{k(b - ga + g/2)}\sum_{l=1}^{k} E|X_{1,1}|^g I\{2^{a(l-1)} \le |X_{1,1}| < 2^{al}\} + C_1 \le C \sum_{l=1}^{\infty} 2^{l(b - ga + g/2)} E|X_{1,1}|^g I\{2^{a(l-1)} \le |X_{1,1}| < 2^{al}\} + C_1 \le C \sum_{l=1}^{\infty} 2^{l(b + 2a - g(a - 1/2))} E|X_{1,1}|^{(b+1)/a} I\{2^{a(l-1)} \le |X_{1,1}| < 2^{al}\} + C_1 < \infty,$$
where $C_1$ and C are constants that may change from line to line. We can obtain that
$$\sum_{k=N}^{\infty} 2^{kb}\, P\left(\max_{n\le 2^k}\left|\sum_{i=1}^{n} Z_{i,k}\right| \ge \frac{\eta' 2^{ka}}{4}\right) < \infty. \quad (7)$$
We now estimate $E Y_{i,k}$ for large k. We have
$$\max_{n\le 2^k}\left|\sum_{i=1}^{n} E Y_{i,k}\right| \le 2^k |E Y_{1,k}| \le 2^k E|X_{1,1}| I\{|X_{1,1}| \ge 2^{ka}\} \le \begin{cases} 2^{k(a-b)} E|X_{1,1}|^{(1+b)/a} I\{|X_{1,1}| \ge 2^{ka}\}, & \text{if } a \le 1+b, \\ 2^k \log k + 2^{k(a-b)} E|X_{1,1}|^{(1+b)/a} I\{|X_{1,1}| > \log k\}, & \text{if } a > 1+b \end{cases} \le \frac{\eta'}{2}\, 2^{ka} \quad (8)$$
for all sufficiently large k. Hence,
$$\sum_{k=N}^{\infty} 2^{kb}\, P\left(\max_{n\le 2^k}\left|\sum_{i=1}^{n} E Y_{i,k}\right| \ge \frac{\eta' 2^{ka}}{2}\right) < \infty. \quad (9)$$
Finally, since $E|X_{1,1}|^{(b+1)/a} < \infty$, we have
$$\sum_{k=1}^{\infty} 2^{kb}\, P\left(\bigcup_{i=1}^{2^k}\left\{|X_{i,1}| \ge 2^{ka}\right\}\right) \le \sum_{k=1}^{\infty} 2^{k(b+1)}\, P\left(|X_{1,1}| \ge 2^{ka}\right) \le C\, E|X_{1,1}|^{(b+1)/a} < \infty. \quad (10)$$
Hence,
$$\sum_{k=N}^{\infty} 2^{kb}\, P\left(\max_{n\le 2^k}\left|\sum_{i=1}^{n} X_{i,1} I\{|X_{i,1}| \ge 2^{ka}\}\right| \ge \frac{\eta' 2^{ka}}{4}\right) < \infty. \quad (11)$$
Then, (5) follows from (7), (9) and (11). □
Lemma 12. 
Under Assumptions 1 and 2, suppose $E X_{1,1} = 0$, $Var(X_{1,1}) = 1$, and $0 < c_1 \le p_n/n^{\tau} \le c_2 < \infty$, where $\tau > 0$, $c_2 \ge c_1 > 0$. If, for some $a \in (0, 1/2)$, $E|X_{1,1}|^{(2+2\tau)/(1-a)} < \infty$, then
$$n^a b_{n,1} \to 0 \ a.s., \quad b_{n,3} \to 1 \ a.s. \quad \text{and} \quad n^a b_{n,4} \to 0 \ a.s.$$
as $n \to \infty$.
Proof of Lemma 12. 
It is easy to see that $\|X^{(i)} - \bar{X}^{(i)} e\|^2 = (X^{(i)})' X^{(i)} - n|\bar{X}^{(i)}|^2$. For any $x > 0$, we know that $|x - 1| \le |x^2 - 1|$. We obtain
$$n^a b_{n,1} = n^a \max_{1\le i\le p_n}\left|\sqrt{\frac{(X^{(i)})' X^{(i)}}{n} - |\bar{X}^{(i)}|^2} - 1\right| \le \max_{1\le i\le p_n}\left|\frac{(X^{(i)})' X^{(i)} - n}{n^{1-a}}\right| + n^a \max_{1\le i\le p_n} |\bar{X}^{(i)}|^2.$$
Note that $(X^{(i)})' X^{(i)} = \sum_{k=1}^{n} X_{k,i}^2$. By Lemma 11, when $E|X_{1,1}|^{(2+2\tau)/(1-a)} < \infty$, both maxima above tend to zero almost surely; this holds under the assumptions of Theorem 3. Therefore, the first limit is proved, and the second limit follows from the first. Since $E|X_{1,1}|^{(1+\tau)/(1-a)} < \infty$, we have
$$n^a b_{n,4} = n^a \max_{1\le i\le p_n} |\bar{X}^{(i)}| = \max_{1\le i\le p_n}\left|\frac{\sum_{k=1}^{n} X_{k,i}}{n^{1-a}}\right|, \quad (12)$$
and the limit $n^a b_{n,4} \to 0$ a.s. follows by applying Lemma 11 to the rightmost term in (12). □
Let $\frac14 - \delta < \mu < \frac14$, where $0 < \delta < \frac{1+8\varepsilon}{4(9+16\tau+\varepsilon)}$ is sufficiently small for $\varepsilon > 0$. Set
$$S_{n,i,j} = \sum_{k=1}^{n} X_{k,i}X_{k,j}, \quad Y_{k,i,j} = X_{k,i}X_{k,j}\, I\{|X_{k,i}X_{k,j}| \le n^{\mu}\}, \quad \hat{S}_{n,i,j} = \sum_{k=1}^{n} \left(Y_{k,i,j} - E Y_{k,i,j}\right).$$
Let $0 < \rho < 1$ with $\rho$ close to $1/2$, and $0 < \alpha < 1$ with $\alpha$ close to 0, such that $\rho - \alpha\rho - 2\mu > 0$. Let $z = z_n = [n^{\rho}]$, $q = q_n = [n^{\alpha\rho}]$, $m_n = [n/(z_n + q_n)] \sim n^{1-\rho}$, and $N_n = m_n(z_n + q_n)$. Let
$$I_{i,n} = \{j: (i+1)z + iq + 1 \le j \le (i+1)(z+q)\}, \quad H_{i,n} = \{j: i(z+q) + 1 \le j \le (i+1)z + iq\},$$
and let $v_{m,i,j} = \sum_{k\in I_{m,n}}\left(Y_{k,i,j} - E Y_{k,i,j}\right)$ and $u_{m,i,j} = \sum_{k\in H_{m,n}}\left(Y_{k,i,j} - E Y_{k,i,j}\right)$, $1 \le m \le m_n$.
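This is the classical Bernstein big-block/small-block scheme: the big blocks $H_{i,n}$ of length $z_n$ carry the partial sums, while the small blocks $I_{i,n}$ of length $q_n$ separate them, so that the mixing condition makes the big-block sums $u_{m,i,j}$ nearly independent. A minimal sketch of the index sets (0-based, with our own function name `blocks`):

```python
def blocks(n, rho=0.45, alpha=0.1):
    """Partition {0, ..., n-1} into m_n big blocks H of length z = [n^rho]
    alternating with small separating blocks I of length q = [n^(alpha*rho)]."""
    z, q = int(n**rho), int(n**(alpha * rho))
    m = n // (z + q)                                          # m_n blocks fit in n
    H = [range(i * (z + q), i * (z + q) + z) for i in range(m)]
    I = [range(i * (z + q) + z, (i + 1) * (z + q)) for i in range(m)]
    return H, I

H, I = blocks(10_000)
print(len(H), len(H[0]), len(I[0]))                           # m_n, z_n, q_n
```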
Lemma 13. 
Under the conditions of Theorem 3, let $\{Y^*_{m,i,j}; m = 1, 2, \ldots, m_n\}$ be i.i.d. normal random variables with mean $E Y^*_{m,i,j} = 0$ and variance $E u_{m,i,j}^2$, and let $\sigma^2 = \lim_{n\to\infty} E S_n^2/n$. Then
$$\lim_{n\to\infty} \frac{E S_n^2}{\sum_{m=1}^{m_n} E (Y^*_{m,i,j})^2} = 1.$$
Proof of Lemma 13. 
We have $E S_n^2 = E\big(\hat{S}_{n,1,2} + \sum_{k=1}^{n}\big(X_{k,1}X_{k,2} I\{|X_{k,1}X_{k,2}| > n^{\mu}\} - E X_{k,1}X_{k,2} I\{|X_{k,1}X_{k,2}| > n^{\mu}\}\big)\big)^2$, where we can write $\hat{S}_{n,1,2} = \sum_{m=1}^{m_n} u_{m,1,2} + \sum_{m=1}^{m_n} v_{m,1,2} + \sum_{k=N_n+1}^{n}\left(Y_{k,1,2} - E Y_{k,1,2}\right)$. Since $E|X_{1,1}|^{12+16\tau+\varepsilon} < \infty$ for some $\varepsilon > 0$, it follows that
$$\sum_{k=1}^{n} E|X_{k,1}X_{k,2}| I\{|X_{k,1}X_{k,2}| \ge n^{\mu}\} \le \sum_{k=1}^{n} E|X_{k,1}| I\{|X_{k,1}| \ge n^{\mu/2}\}\, E|X_{k,2}| + \sum_{k=1}^{n} E|X_{k,2}| I\{|X_{k,2}| \ge n^{\mu/2}\}\, E|X_{k,1}| \le \sum_{k=1}^{n} E|X_{k,1}|^{12+16\tau+\varepsilon}\left(n^{\mu/2}\right)^{1-(12+16\tau+\varepsilon)} E|X_{k,2}| + \sum_{k=1}^{n} E|X_{k,2}|^{12+16\tau+\varepsilon}\left(n^{\mu/2}\right)^{1-(12+16\tau+\varepsilon)} E|X_{k,1}| \le C\, n^{1-\frac{\mu}{2}(11+16\tau+\varepsilon)} = o\left(\sqrt{\frac{n}{\log n}}\right)$$
as $n \to \infty$. Therefore,
$$\sum_{k=1}^{n} E|X_{k,1}X_{k,2}| I\{|X_{k,1}X_{k,2}| \ge n^{\mu}\} = o\left(\sqrt{\frac{n}{\log n}}\right)$$
as $n \to \infty$. By Lemma 3, it follows that
$$E\left(\sum_{m=1}^{m_n} v_{m,1,2}\right)^2 = \sum_{m=1}^{m_n} E v_{m,1,2}^2 + 2\sum_{j=2}^{m_n}\sum_{m=1}^{j-1}\left(E v_{m,1,2}v_{j,1,2} - E v_{m,1,2} E v_{j,1,2}\right) \le \sum_{m=1}^{m_n}\sum_{k\in I_{m,n}} E\left(Y_{k,1,2} - E Y_{k,1,2}\right)^2 + 2\sum_{m=1}^{m_n}\sum_{\substack{k<j \\ k,j\in I_{m,n}}} 10\left(E|Y_{k,1,2}|^p\right)^{1/p}\left(E|Y_{j,1,2}|^q\right)^{1/q}\left(\alpha(j-k)\right)^{1-\frac1p-\frac1q} + 2\sum_{j=2}^{m_n}\sum_{m=1}^{j-1} 10\left(E|v_{m,1,2}|^p\right)^{1/p}\left(E|v_{j,1,2}|^q\right)^{1/q}\left(\alpha(z_n)\right)^{1-\frac1p-\frac1q} \le C m_n q_n + C m_n^2 q_n^2\left(e^{-n^{\alpha\rho}}\right)^{1-\frac1p-\frac1q} + C m_n^2\left(n^{\frac{(\alpha\rho+2\mu)p}{2}}\right)^{1/p}\left(n^{\frac{(\alpha\rho+2\mu)q}{2}}\right)^{1/q}\left(e^{-n^{\alpha\rho}}\right)^{1-\frac1p-\frac1q} \le C n^{1-\rho+\alpha\rho} = o\left(\frac{n}{\log n}\right)$$
as $n \to \infty$, where $2 < p < 12+16\tau+\varepsilon$, $2 < q < 12+16\tau+\varepsilon$, $\frac1p + \frac1q < 1$. We have
$$E\left(\sum_{k=N_n+1}^{n}\left(Y_{k,1,2} - E Y_{k,1,2}\right)\right)^2 \le \sum_{k=N_n+1}^{n} E\left(Y_{k,1,2} - E Y_{k,1,2}\right)^2 + 2\sum_{j=N_n+2}^{n}\sum_{k=N_n+1}^{j-1} 10\left(E|Y_{k,1,2}|^p\right)^{1/p}\left(E|Y_{j,1,2}|^q\right)^{1/q}\left(\alpha(j-k)\right)^{1-\frac1p-\frac1q} \le C\left(n^{\rho} + n^{\alpha\rho}\right) + C\left(n^{\rho} + n^{\alpha\rho}\right)^2\left(e^{-n^{\alpha\rho}}\right)^{1-\frac1p-\frac1q} = o\left(\frac{n}{\log n}\right)$$
as $n \to \infty$, where $2 < p < 12+16\tau+\varepsilon$, $2 < q < 12+16\tau+\varepsilon$, $\frac1p + \frac1q < 1$. We thus have $E S_n^2 = E\left(\sum_{m=1}^{m_n} u_{m,1,2}\right)^2 + o\left(\frac{n}{\log n}\right)$; by Lemma 3, we can obtain
$$\left|E\left(\sum_{m=1}^{m_n} u_{m,1,2}\right)^2 - \sum_{m=1}^{m_n} E(Y^*_{m,1,2})^2\right| = \left|\sum_{m=1}^{m_n} E u_{m,1,2}^2 + 2\sum_{j=2}^{m_n}\sum_{m=1}^{j-1}\left(E u_{m,1,2}u_{j,1,2} - E u_{m,1,2} E u_{j,1,2}\right) - \sum_{m=1}^{m_n} E(Y^*_{m,1,2})^2\right| \le 2\sum_{j=2}^{m_n}\sum_{m=1}^{j-1} 10\left(E|u_{m,1,2}|^p\right)^{1/p}\left(E|u_{j,1,2}|^q\right)^{1/q}\left(\alpha(q_n)\right)^{1-\frac1p-\frac1q} \le C m_n^2\left(n^{\frac{(\rho+2\mu)p}{2}}\right)^{1/p}\left(n^{\frac{(\rho+2\mu)q}{2}}\right)^{1/q}\left(e^{-n^{\alpha\rho}}\right)^{1-\frac1p-\frac1q} \le C n^{2-\rho+2\mu}\left(e^{-n^{\alpha\rho}}\right)^{1-\frac1p-\frac1q} = o\left(\frac{n}{\log n}\right)$$
as $n \to \infty$, where $\frac1p + \frac1q < 1$. Hence,
$$E S_n^2 - \sum_{m=1}^{m_n} E(Y^*_{m,i,j})^2 = o\left(\frac{n}{\log n}\right)$$
as $n \to \infty$. Recalling that $\lim_{n\to\infty} E S_n^2/n = \sigma^2 > 0$, we conclude that
$$\lim_{n\to\infty} \frac{E S_n^2}{\sum_{m=1}^{m_n} E(Y^*_{m,i,j})^2} = 1,$$
where $E(Y^*_{m,i,j})^2 = E u_{m,i,j}^2$. □
Lemma 14. 
Let $\{Y^*_{m,i,j}; m = 1, 2, \ldots, m_n\}$ be i.i.d. normal random variables with $E Y^*_{1,1,1} = 0$ and variance $E u_{m,i,j}^2$. Then,
$$\limsup_{n\to\infty} \max_{1\le i<j\le p_n} \frac{\left|\sum_{m=1}^{m_n} Y^*_{m,i,j}\right|}{\sqrt{n \log p_n}} \le 2\sigma \quad a.s. \quad (13)$$
Proof of Lemma 14. 
Choose $t \in (0,1)$ and set $\omega_n = (2+t)\sigma\sqrt{n\log p_n}$; we may suppose $\sigma = 1$. Using Lemma 13, we have $\lim_{n\to\infty} E S_n^2/\sum_{m=1}^{m_n} E(Y^*_{m,i,j})^2 = 1$, where $E(Y^*_{m,i,j})^2 = E u_{m,i,j}^2$. Then, for each pair $(i,j)$, we can obtain
$$P\left(\left|\sum_{m=1}^{m_n} Y^*_{m,i,j}\right| > \omega_n\right) = 2\left(1 - \Phi\left(\frac{(2+t)\sqrt{n\log p_n}}{\sqrt{\sum_{m=1}^{m_n} E u_{m,i,j}^2}}\right)\right) \le C\,\frac{\sqrt{\sum_{m=1}^{m_n} E u_{m,i,j}^2}}{\sqrt{2\pi}(2+t)\sqrt{n\log p_n}}\exp\left(-\frac{(2+t)^2 n\log p_n}{2\sum_{m=1}^{m_n} E u_{m,i,j}^2}\right) \le \frac{C}{(2+t)\sqrt{2\pi\log p_n}}\, p_n^{-(2+t)^2/2} = O\left(\frac{1}{n^{\tau(2+t)^2/2}}\right) \quad (14)$$
as n is large, where we use the Mills ratio bound
$$1 - \Phi(x) = \frac{1}{\sqrt{2\pi}}\int_x^{\infty} e^{-t^2/2}\, dt \le \frac{1}{\sqrt{2\pi}\, x}\, e^{-x^2/2} \quad (15)$$
as $x \to +\infty$ (see [16]). We define $W^*_n = \max_{1\le i<j\le p_n}\left|\sum_{m=1}^{m_n} Y^*_{m,i,j}\right|$ and $n_k = k^g$ for any integer $g > \frac{2+\tau(2+t)^2}{(t^2+4t)\tau}$. Then
$$\max_{n_k\le n\le n_{k+1}} W^*_n \le \max_{1\le i<j\le p_{n_{k+1}}}\left|\sum_{m=1}^{m_{n_k}} Y^*_{m,i,j}\right| + r_n, \quad (16)$$
where
$$r_n = \max_{1\le i<j\le p_{n_{k+1}}}\ \max_{n_k\le n\le n_{k+1}}\left|\sum_{m=1}^{m_n} Y^*_{m,i,j} - \sum_{m=1}^{m_{n_k}} Y^*_{m,i,j}\right|. \quad (17)$$
By (14),
$$P\left(\max_{1\le i<j\le p_{n_{k+1}}}\left|\sum_{m=1}^{m_{n_k}} Y^*_{m,i,j}\right| > \omega_{n_k}\right) \le p_{n_{k+1}}^2\, P\left(\frac{\left|\sum_{m=1}^{m_{n_k}} Y^*_{m,1,2}\right|}{\sqrt{\sum_{m=1}^{m_{n_k}} E u_{m,1,2}^2}} > \frac{\omega_{n_k}}{\sqrt{\sum_{m=1}^{m_{n_k}} E u_{m,1,2}^2}}\right) \le 2 p_{n_{k+1}}^2\left(1 - \Phi\left(\frac{(2+t)\sqrt{n_k\log p_{n_k}}}{\sqrt{\sum_{m=1}^{m_{n_k}} E u_{m,1,2}^2}}\right)\right) \le C (k+1)^{2g\tau}\,\frac{\sqrt{\sum_{m=1}^{m_{n_k}} E u_{m,1,2}^2}}{\sqrt{2\pi}(2+t)\sqrt{n_k\log p_{n_k}}}\exp\left(-\frac{(2+t)^2 n_k\log p_{n_k}}{2\sum_{m=1}^{m_{n_k}} E u_{m,1,2}^2}\right) = O\left(k^{-(t^2+4t)g\tau/2}\right).$$
Since $\sum_k k^{-(t^2+4t)g\tau/2} < \infty$, using the Borel–Cantelli lemma,
$$\limsup_{k\to\infty} \max_{1\le i<j\le p_{n_k}} \frac{\left|\sum_{m=1}^{m_{n_k}} Y^*_{m,i,j}\right|}{\sqrt{n_k\log p_{n_k}}} \le 2+t \quad a.s. \quad (18)$$
Now, let us estimate $r_n$ in (17). Let $S^*_0 = 0$ and $S^*_l = \sum_{m=1}^{l} Y^*_{m,i,j}$. Observe that the distribution of $\sum_{m=1}^{m_n} Y^*_{m,i,j} - \sum_{m=1}^{m_{n_k}} Y^*_{m,i,j}$ is equal to that of $S^*_{m_n - m_{n_k}}$ for all $m_n \ge m_{n_k}$. Thus, by Lemma 6, we have
$$P\left(r_n \ge t\sqrt{n_k\log p_{n_k}}\right) \le p_{n_{k+1}}^2\, P\left(\max_{1\le l\le n_{k+1}-n_k} |S^*_l| \ge t\sqrt{n_k\log p_{n_k}}\right) \le 2 p_{n_{k+1}}^2\, P\left(|S^*_{n_{k+1}-n_k}| \ge (t/2)\sqrt{n_k\log p_{n_k}}\right)$$
as n is sufficiently large, since $\min_{1\le l\le n_{k+1}-n_k} P\left(|S^*_{n_{k+1}-n_k} - S^*_l| \le (t/2)\sqrt{n_k\log p_{n_k}}\right) \ge 1/2$, where Ottaviani's inequality in Lemma 6 is used in the last inequality. Note that, for fixed g and t, $(t/2)\sqrt{n_k\log p_{n_k}} \ge (2+t)\sqrt{(n_{k+1}-n_k)\log(p_{n_{k+1}} p_{n_k})}$ as n is sufficiently large. From (15), we have
$$2 p_{n_{k+1}}^2\, P\left(|S^*_{n_{k+1}-n_k}| \ge (t/2)\sqrt{n_k\log p_{n_k}}\right) = O\left(k^{-(\tau g - 1)(2+t)^2/2 + 2g\tau}\right).$$
Therefore,
$$P\left(r_n \ge t\sqrt{n_k\log p_{n_k}}\right) = O\left(k^{-u}\right),$$
where $u = (\tau g - 1)(2+t)^2/2 - 2g\tau$, and g is chosen such that $u > 1$. Using the Borel–Cantelli lemma again, we obtain
$$\limsup_{k\to\infty} \frac{r_n}{\sqrt{n_k\log p_{n_k}}} \le t \quad a.s. \quad (20)$$
Using (16), (18) and (20), we obtain that
$$\limsup_{k\to\infty} \max_{n_k\le n\le n_{k+1}} \frac{W^*_n}{\sqrt{n_k\log p_{n_k}}} \le 2 + 2t \quad a.s.$$
for any sufficiently small $t > 0$. This yields inequality (13) of Lemma 14. □
Lemma 15. 
Let $\{Y^*_{m,i,j}; m = 1, 2, \ldots, m_n\}$ be i.i.d. normal random variables with $E Y^*_{m,i,j} = 0$ and variance $E u_{m,i,j}^2$. Then,
$$\liminf_{n\to\infty} \max_{1\le i<j\le p_n} \frac{\left|\sum_{m=1}^{m_n} Y^*_{m,i,j}\right|}{\sqrt{n\log p_n}} \ge 2\sigma \quad a.s. \quad (21)$$
Proof of Lemma 15. 
We first prove that
$$P\left(W^*_n \le v_n\right) = O\left(\frac{1}{n^t}\right) \quad (22)$$
as $n \to \infty$, where the constant depends only on t and the distribution of $X_{1,1}X_{1,2}$. For any $t \in (0,1)$, set $v_n = (2-t)\sigma\sqrt{n\log p_n}$, where $\sigma^2 = \lim_{n\to\infty} E S_n^2/n$; we may suppose $\sigma = 1$. Take an integer g satisfying $g > 1/t$. Then, by (22), $P(W^*_{n_k} \le v_{n_k}) = O(1/k^{tg})$. Since $\sum_k k^{-tg} < \infty$, using the Borel–Cantelli lemma, we obtain
$$\liminf_{k\to\infty} \frac{W^*_{n_k}}{\sqrt{n_k\log p_{n_k}}} \ge 2-t \quad a.s. \quad (23)$$
for any $t \in (0,1)$. Recalling the definition of $r_n$ in (17), we obtain
$$\inf_{n_k\le n\le n_{k+1}} W^*_n \ge W^*_{n_k} - r_n.$$
From (20) and (23), together with this inequality, we have that
$$\liminf_{k\to\infty}\ \inf_{n_k\le n\le n_{k+1}} \frac{W^*_n}{\sqrt{n_k\log p_{n_k}}} \ge 2 - 2t \quad a.s.$$
for any sufficiently small $t > 0$. This establishes (21) of Lemma 15.
Now, we prove (22) using Lemma 9.
Let $I = \{(i,j); 1\le i<j\le p\}$. For $\alpha = (i,j) \in I$, set $B_\alpha = \{(k,l) \in I: \text{one of } k \text{ and } l \text{ equals } i \text{ or } j, \text{ but } (k,l) \ne \alpha\}$, $\eta_\alpha = \left|\sum_{m=1}^{m_n} Y^*_{m,i,j}\right|$, $t = \nu_n$ and $A_\alpha = A_{ij} = \left\{\left|\sum_{m=1}^{m_n} Y^*_{m,i,j}\right| > \nu_n\right\}$. Using Lemma 9,
$$P\left(W^*_n \le \nu_n\right) \le e^{-\lambda_n} + b_{1,n} + b_{2,n}. \quad (24)$$
Evidently,
$$\lambda_n = \frac{p(p-1)}{2}\, P(A_{12}), \quad b_{1,n} \le 2p^3 P(A_{12})^2, \quad b_{2,n} \le 2p^3 P(A_{12}A_{13}). \quad (25)$$
Recall that $\sum_{m=1}^{m_n} Y^*_{m,i,j}$ is a sum of i.i.d. normal random variables, and recall (15). We have
$$P(A_{12}) = P\left(\left|\sum_{m=1}^{m_n} Y^*_{m,1,2}\right| > \nu_n\right) = P\left(\frac{\left|\sum_{m=1}^{m_n} Y^*_{m,1,2}\right|}{\sqrt{\sum_{m=1}^{m_n} E u_{m,1,2}^2}} > \frac{(2-t)\sqrt{n\log p_n}}{\sqrt{\sum_{m=1}^{m_n} E u_{m,1,2}^2}}\right) = 2\left(1 - \Phi\left(\frac{(2-t)\sqrt{n\log p_n}}{\sqrt{\sum_{m=1}^{m_n} E u_{m,1,2}^2}}\right)\right) \le C\,\frac{\sqrt{\sum_{m=1}^{m_n} E u_{m,1,2}^2}}{(2-t)\sqrt{2\pi n\log p_n}}\exp\left(-\frac{(2-t)^2 n\log p_n}{2\sum_{m=1}^{m_n} E u_{m,1,2}^2}\right) = O\left(\frac{1}{n^{\tau(2-t)^2/2}}\right) \quad (26)$$
as $n \to \infty$, provided that $E|X_{1,1}|^{12+16\tau+\varepsilon} < \infty$ and $v_n/\sqrt{n\log p_n} \to 2-t$. Next,
$$P\left(A_{12}A_{13}\right) = P\left(\left|\sum_{m=1}^{m_n} Y^*_{m,1,2}\right| \ge \nu_n,\ \left|\sum_{m=1}^{m_n} Y^*_{m,1,3}\right| \ge \nu_n\right). \quad (27)$$
The two events in (27) are conditionally independent given the $Y^*_{m,i,j}$'s associated with the first column. Let $P_1$ and $E_1$ represent the corresponding conditional probability and expectation. Then, the probability in (27) is
$$E\left[P_1\left(\left|\sum_{m=1}^{m_n} Y^*_{m,1,2}\right| \ge (2-t)\sqrt{n\log p_n}\right)^2\right].$$
Set
$$A_n(s) := \left\{\left|\sum_{m=1}^{m_n}\left(|Y^*_{m,1,2}|^s - E|Y^*_{m,1,2}|^s\right)\right| \le n^{\tilde\delta}\right\} \quad (28)$$
for $s \ge 2$ and $\tilde\delta \in (1/4, 1/2)$. Choose $\beta \in (a_2+2,\ q/(a_2+1))$ and $r = a_2+1$. Let $\zeta_m = |Y^*_{m,1,2}|^{\beta} - E|Y^*_{m,1,2}|^{\beta}$ for $m = 1, 2, \ldots, m_n$. Then, $E|\zeta_1|^r < \infty$. Using the Chebyshev inequality and Lemma 8,
$$P\left(A_n(\beta)^c\right) = O\left(n^{-f(r)}\right) \quad (29)$$
as $n \to \infty$, where
$$f(r) = \begin{cases} r-1, & 1 < r \le 2, \\ r/2, & r \ge 2. \end{cases}$$
Let $\{\zeta'_m; 1\le m\le m_n\}$ be an independent copy of $\{\zeta_m; 1\le m\le m_n\}$. Using (29), $P\left(\left|\sum_{m=1}^{m_n}\zeta'_m\right| \le n^{\tilde\delta}/2\right) \ge 1/2$ for any sufficiently large n, and we have
$$P\left(\left|\sum_{m=1}^{m_n}\zeta'_m\right| > n^{\tilde\delta}\right) = O\left(n^{-f(r)}\right) \quad (30)$$
by repeating (29). Choose an integer $j \ge 1$ and set $\nu = n^{\tilde\delta}/4j$. Lemma 5 implies that there are positive constants $C_j$ and $D_j$ satisfying
$$P\left(\left|\sum_{m=1}^{m_n}(\zeta_m - \zeta'_m)\right| > n^{\tilde\delta}/2\right) = P\left(\left|\sum_{m=1}^{m_n}(\zeta_m - \zeta'_m)\right| > 2j\nu\right) \le C_j\, P\left(\max_{1\le m\le m_n}|\zeta_m - \zeta'_m| > \nu\right) + D_j\left(P\left(\left|\sum_{m=1}^{m_n}(\zeta_m - \zeta'_m)\right| > \nu\right)\right)^j. \quad (31)$$
Since $E|\zeta_1|^r < \infty$, $P\left(\max_{1\le m\le m_n}|\zeta_m - \zeta'_m| > \nu\right) = O(n^{1-r})$. From the inequality in (31), we have that
$$\left(P\left(\left|\sum_{m=1}^{m_n}(\zeta_m - \zeta'_m)\right| > \nu\right)\right)^j = O\left(n^{-jf(r)}\right).$$
Take $j = [(r-1)/f(r)] + 1$. We obtain
$$P\left(\left|\sum_{m=1}^{m_n}(\zeta_m - \zeta'_m)\right| > n^{\tilde\delta}/2\right) = O\left(n^{1-r}\right) \quad (32)$$
as $n \to \infty$. Then, we have
$$P\left(A_n(\beta)^c\right) = O\left(n^{1-r}\right) \quad (33)$$
as $n \to \infty$, by (29), (31) and (32). Hence,
$$P\left(A_{12}A_{13}\right) \le E\left[P_1\left(\left|\sum_{m=1}^{m_n} Y^*_{m,1,2}\right| \ge (2-t)\sqrt{n\log p_n}\right)^2 I_{A_n(s)A_n(2)}\right] + P\left(A_n(s)^c\right). \quad (34)$$
Since
$$P_1\left(\left|\sum_{m=1}^{m_n} Y^*_{m,1,2}\right| \ge (2-t)\sqrt{n\log p_n}\right) \le C\exp\left(-\frac{(2-t)^2 n\log p_n}{2\sum_{m=1}^{m_n} E u_{m,1,2}^2}\right), \quad (35)$$
we can obtain
$$E\left[P_1\left(\left|\sum_{m=1}^{m_n} Y^*_{m,1,2}\right| \ge (2-t)\sqrt{n\log p_n}\right)^2 I_{A_n(s)A_n(2)}\right] \le C\exp\left(-\frac{(2-t)^2 n\log p_n}{\sum_{m=1}^{m_n} E u_{m,1,2}^2}\right) = O\left(n^{-b(2-t)^2}\right)$$
for some $b > 0$. If both b and t are small enough, we have that
$$e^{-\lambda_n} \le e^{-n^t}, \quad b_{1,n} \le \frac{1}{n} \quad \text{and} \quad b_{2,n} \le \frac{1}{n} \quad (36)$$
for any n that is large enough. From (24) and (36), (22) holds. □
Lemma 16. 
Under the conditions of Theorem 3, take $T_n = \max_{1\le i<j\le p_n} |\hat{S}_{n,i,j}|$; then,
$$\limsup_{n\to\infty} \frac{T_n}{\sqrt{n\log p_n}} \le 2\sigma \quad a.s.,$$
$$\liminf_{n\to\infty} \frac{T_n}{\sqrt{n\log p_n}} \ge 2\sigma \quad a.s.$$
Proof of Lemma 16. 
Using the Markov inequality and Lemma 2, for $\delta > 0$, we can obtain
$$P\left(\max_{1\le i<j\le p_n}\left|\sum_{m=1}^{m_n} v_{m,i,j}\right| \ge \delta\sqrt{n\log p_n}\right) \le C p_n^2\, \frac{E\left|\sum_{m=1}^{m_n} v_{m,1,2}\right|^q}{(n\log p_n)^{q/2}} \le C p_n^2\, \frac{(m_n q_n)^{q/2}\, E|X_{1,1}X_{1,2}|^q I\{|X_{1,1}X_{1,2}| \le n^{\mu}\}}{(n\log n)^{q/2}} \le \frac{C}{(\log n)^{q/2}\, n^{(\rho-\alpha\rho-2\mu)\frac{q}{2} - 2\tau}} = O\left(\frac{1}{n^{1+\varepsilon}}\right)$$
for $\varepsilon > 0$ and sufficiently large q. Using the Borel–Cantelli lemma, we have
$$\lim_{n\to\infty} \max_{1\le i<j\le p_n} \frac{\left|\sum_{m=1}^{m_n} v_{m,i,j}\right|}{\sqrt{n\log p_n}} = 0 \quad a.s.,$$
and, using the Markov inequality and Lemma 2, for $\delta > 0$,
$$P\left(\max_{1\le i<j\le p_n}\left|\sum_{k=N_n+1}^{n}\left(Y_{k,i,j} - E Y_{k,i,j}\right)\right| \ge \delta\sqrt{n\log p_n}\right) \le C p_n^2\, \frac{E\left|\sum_{k=N_n+1}^{n}\left(Y_{k,1,2} - E Y_{k,1,2}\right)\right|^q}{(n\log p_n)^{q/2}} \le C p_n^2\, \frac{\left(z_n + q_n\right)^{q/2} E|X_{1,1}X_{1,2}|^q I\{|X_{1,1}X_{1,2}| \le n^{\mu}\}}{(n\log n)^{q/2}} \le \frac{C n^{2\tau}\left(n^{\rho} + n^{\alpha\rho}\right)^{q/2}}{n^{(\frac12 - \mu)q}(\log n)^{q/2}} = O\left(\frac{1}{n^{1+\varepsilon}}\right),$$
for sufficiently large q. Using the Borel–Cantelli lemma, we can obtain
$$\lim_{n\to\infty} \max_{1\le i<j\le p_n} \frac{\left|\sum_{k=N_n+1}^{n}\left(Y_{k,i,j} - E Y_{k,i,j}\right)\right|}{\sqrt{n\log p_n}} = 0 \quad a.s.$$
Hence, we only need to prove
$$\lim_{n\to\infty} \max_{1\le i<j\le p_n} \frac{\left|\sum_{m=1}^{m_n} u_{m,i,j}\right|}{\sqrt{n\log p_n}} = 2\sigma \quad a.s.$$
By Lemma 1, there exists an independent random variables sequence $\{u^*_{m,i,j}; 1\le m\le m_n\}$ such that $\{u^*_{m,i,j}; 1\le m\le m_n\}$ and $\{u_{m,i,j}; 1\le m\le m_n\}$ have the same distribution and $P\left(u^*_{m,1,2} \text{ and } u_{m,1,2} \text{ are not in the same } H_i \in \mathcal{H}\right) \le (8N)^{1/2}\alpha(\mathcal{B}(X), \mathcal{B}(Y))$. We can prove that
$$P\left(\max_{1\le i<j\le p_n}\left|\sum_{m=1}^{m_n}\left(u_{m,i,j} - u^*_{m,i,j}\right)\right| \ge \delta\sqrt{n\log p_n}\right) \le p_n^2\, m_n\left(8 m_n\right)^{1/2}\alpha(q_n) \le C n^{\frac32 - \frac32\rho + 2\tau} e^{-n^{\alpha\rho}} = o\left(\frac{1}{n^{1+\varepsilon}}\right).$$
Using the Borel–Cantelli lemma, we only need to prove
$$\lim_{n\to\infty} \max_{1\le i<j\le p_n} \frac{\left|\sum_{m=1}^{m_n} u^*_{m,i,j}\right|}{\sqrt{n\log p_n}} = 2\sigma \quad a.s.$$
Let $u^*_{m,i,j} = \sum_{k\in H_{m,n}}\left(Y^{i}_{k,i,j} - E Y^{i}_{k,i,j}\right)$, $1\le m\le m_n$, where $Y^{i}_{k,i,j} = X^{i}_{k,i}X^{i}_{k,j}\, I\{|X^{i}_{k,i}X^{i}_{k,j}| \le n^{\mu}\}$ and $\{X^{i}_{k,j}; k\in H_{i,n}\}$ is an independent replication of $\{X_{k,j}; k\in H_{i,n}\}$. Thus, we only need to prove
$$(1)\ \limsup_{n\to\infty} \max_{1\le i<j\le p_n} \frac{\left|\sum_{m=1}^{m_n} u^*_{m,i,j}\right|}{\sqrt{n\log p_n}} \le 2\sigma \quad a.s., \qquad (2)\ \liminf_{n\to\infty} \max_{1\le i<j\le p_n} \frac{\left|\sum_{m=1}^{m_n} u^*_{m,i,j}\right|}{\sqrt{n\log p_n}} \ge 2\sigma \quad a.s.$$
Let $\{Y^*_{m,i,j}; 1\le m\le m_n\}$ be a sequence of independent normal random variables with variances $Var(u^*_{m,i,j})$. By Lemma 4, we have that
$$P\left(\max_{1\le i<j\le p_n}\left|\sum_{m=1}^{m_n}\left(u^*_{m,i,j} - Y^*_{m,i,j}\right)\right| \ge \delta\sqrt{n\log p_n}\right) \le C p_n^2\, \frac{\sum_{m=1}^{m_n} E|u^*_{m,1,2}|^q}{(n\log n)^{q/2}} \le C p_n^2\, \frac{m_n z_n^{q/2}\left(E|X^{i}_{k,1}X^{i}_{k,2}|^2 I\{|X^{i}_{k,1}X^{i}_{k,2}| \le n^{\mu}\}\right)^{q/2}}{(n\log p_n)^{q/2}} + C p_n^2\, \frac{m_n z_n\, E|X^{i}_{k,1}X^{i}_{k,2}|^q I\{|X^{i}_{k,1}X^{i}_{k,2}| \le n^{\mu}\}}{(n\log p_n)^{q/2}} \le \frac{C}{(\log p_n)^{q/2}\, n^{(1-\rho)\frac{q}{2} + \rho - 1 - 2\tau}} + \frac{C}{(\log p_n)^{q/2}\, n^{(1-2\mu)\frac{q}{2} - 1 - 2\tau}} = O\left(\frac{1}{n^{1+\varepsilon}}\right)$$
for sufficiently large q. Using the Borel–Cantelli lemma,
$$\lim_{n\to\infty} \max_{1\le i<j\le p_n} \frac{\left|\sum_{m=1}^{m_n}\left(u^*_{m,i,j} - Y^*_{m,i,j}\right)\right|}{\sqrt{n\log p_n}} = 0 \quad a.s.$$
Thus, by Lemmas 14 and 15 and the Borel–Cantelli lemma, we obtain the result. □
Proof of Theorem 3. 
Recall $W_n$ in (4). Let $a = 1/3$; note that $E|X_{1,1}|^{12+16\tau+\varepsilon} < \infty$. Using the triangle inequality and Lemmas 10 and 12, we have that
$$|n L_n - W_n| \le \|n\Gamma_n - X_n' X_n\| \le 4 n^{-1/3} W_n + 2 n^{1/3} \quad a.s.$$
as n is large enough. Hence,
$$\left|\sqrt{\frac{n}{\log p_n}}\, L_n - \frac{W_n}{\sqrt{n\log p_n}}\right| = \frac{1}{\sqrt{n\log p_n}}\left|n L_n - W_n\right| \le \frac{4 n^{-1/3} W_n}{\sqrt{n\log p_n}} + \frac{2 n^{1/3}}{\sqrt{n\log p_n}} \le \frac{4 W_n}{n^{5/6}\sqrt{\log p_n}} + \frac{2}{n^{1/6}\sqrt{\log p_n}} \to 0 \quad a.s.$$
If $\lim_{n\to\infty} \frac{W_n}{\sqrt{n\log p_n}} = 2\sigma$ a.s., then $\lim_{n\to\infty} \frac{W_n}{n^{5/6}\sqrt{\log p_n}} = 0$ a.s. Hence, in order to prove Theorem 3, we need to show that
$$\lim_{n\to\infty} \frac{W_n}{\sqrt{n\log p_n}} = 2\sigma \quad a.s. \quad (40)$$
Take $T_n = \max_{1\le i<j\le p_n} |\hat{S}_{n,i,j}|$. We have that
$$|W_n - T_n| \le \max_{1\le i<j\le p_n}\left|\sum_{k=1}^{n} X_{k,i}X_{k,j}\, I\{|X_{k,i}X_{k,j}| \ge n^{\mu}\}\right| =: U_n.$$
Using Lemma 2 with $q = 3$, for any $\delta > 0$, since $\frac14 - \delta < \mu < \frac14$ with $0 < \delta < \frac{1+8\varepsilon}{4(9+16\tau+\varepsilon)}$ for some $\varepsilon > 0$, we have
$$P\left(U_n \ge \delta\sqrt{n\log p_n}\right) = P\left(\max_{1\le i<j\le p_n}\left|\sum_{k=1}^{n} X_{k,i}X_{k,j}\, I\{|X_{k,i}X_{k,j}| \ge n^{\mu}\}\right| \ge \delta\sqrt{n\log p_n}\right) \le p_n^2\, \frac{E\left|\sum_{k=1}^{n} X_{k,1}X_{k,2}\, I\{|X_{k,1}X_{k,2}| \ge n^{\mu}\}\right|^3}{\left(\delta\sqrt{n\log p_n}\right)^3} \le C n^{2\tau}\, \frac{n^{3/2}\, E|X_{1,1}X_{1,2}|^3 I\{|X_{1,1}X_{1,2}| \ge n^{\mu}\}}{n^{3/2}(\log p_n)^{3/2}} \le C n^{2\tau}\frac{E|X_{1,1}|^3 I\{|X_{1,1}| \ge n^{\mu/2}\}\, E|X_{1,2}|^3}{(\log p_n)^{3/2}} + C n^{2\tau}\frac{E|X_{1,2}|^3 I\{|X_{1,2}| \ge n^{\mu/2}\}\, E|X_{1,1}|^3}{(\log p_n)^{3/2}} \le C n^{2\tau}\frac{E|X_{1,1}|^{12+16\tau+\varepsilon}\left(n^{\mu/2}\right)^{3-(12+16\tau+\varepsilon)} E|X_{1,2}|^3}{(\log p_n)^{3/2}} + C n^{2\tau}\frac{E|X_{1,2}|^{12+16\tau+\varepsilon}\left(n^{\mu/2}\right)^{3-(12+16\tau+\varepsilon)} E|X_{1,1}|^3}{(\log p_n)^{3/2}} \le \frac{C}{(\log p_n)^{3/2}\, n^{\frac{\mu}{2}(9+16\tau+\varepsilon) - 2\tau}} = o\left(\frac{1}{n^{1+\varepsilon}}\right).$$
Using the Borel–Cantelli lemma,
$$\lim_{n\to\infty} \frac{U_n}{\sqrt{n\log p_n}} = 0 \quad a.s.$$
To prove (40), we need to show that
$$\lim_{n\to\infty} \frac{T_n}{\sqrt{n\log p_n}} = 2\sigma \quad a.s.,$$
which is exactly Lemma 16; hence (40) holds. From (40), we obtain $4 n^{-1/3} W_n = O\left(n^{1/6}\sqrt{\log p_n}\right)$ a.s., and therefore $n L_n - W_n = O\left(n^{1/3}\right)$ a.s. Theorem 3 now follows from (40). □

5. Examples

In certain applications, such as the construction of compressed sensing matrices, the means $\mu_i = E X^{(i)}$ and $\mu_j = E X^{(j)}$ are given, and one is interested in
$$\tilde{\rho}_{ij} = \frac{(X^{(i)} - \mu_i)^T (X^{(j)} - \mu_j)}{\|X^{(i)} - \mu_i\| \cdot \|X^{(j)} - \mu_j\|}, \quad 1 \le i, j \le p.$$
The corresponding coherence is defined by
$$\tilde{L}_n = \max_{1\le i<j\le p} |\tilde{\rho}_{ij}|.$$
Compressed sensing is a rapidly evolving field that aims to construct measurement matrices $X_{n\times p}$ enabling the exact recovery of any k-sparse signal $\beta \in \mathbb{R}^p$ from the linear measurements $y = X\beta$ using computationally efficient recovery algorithms.
Two commonly employed conditions in compressed sensing are the Restricted Isometry Property (RIP) and the Mutual Incoherence Property (MIP). In this paper, the derived limiting laws can be utilized to assess the likelihood of a random matrix satisfying the MIP condition, as demonstrated by Cai and Jiang [19].
Example 1. 
The MIP condition, which is frequently utilized, requires the pairwise correlations among the column vectors of $X_n = (X^{(1)}, X^{(2)}, \ldots, X^{(p)}) = (X_{k,i})_{n\times p}$ to be small. It has been established that the condition
$$(2k - 1)\, \tilde{L}_n < 1$$
guarantees the exact recovery of a k-sparse signal $\beta$ in the absence of noise, where $y = X\beta$, and enables the stable recovery of a sparse signal in the presence of noise, where $y = X\beta + z$. Here, z represents an error vector that is not necessarily random.
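A hedged illustration (ours, not from the paper): for a Gaussian matrix with normalized columns and known means $\mu_i = 0$, the coherence $\tilde{L}_n$ and the largest sparsity k certified by the MIP condition can be computed directly. The guarantee is conservative: even for moderate matrix sizes, only small values of k are certified.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 256, 1024
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)           # unit-norm columns, known means mu_i = 0
G = X.T @ X                              # Gram matrix; off-diagonal entries are rho~_ij
np.fill_diagonal(G, 0.0)
L = float(np.abs(G).max())               # coherence L~_n
k_max = int(np.ceil((1.0 / L + 1.0) / 2.0)) - 1   # largest k with (2k - 1) * L < 1
print(f"coherence = {L:.3f}; MIP certifies exact recovery for k <= {k_max}")
```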

Author Contributions

Writing–original draft, H.Z.; Writing–review & editing, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grants [No. 11771178, 12171198]; the Science and Technology Development Program of Jilin Province under Grant [No. 20210101467JC]; and the Fundamental Research Funds for the Central Universities.

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bai, Y.; Zhang, Y.; Liu, C. Moderate deviation principle for likelihood ratio test in multivariate linear regression model. J. Multivar. Anal. 2023, 194, 105139. [Google Scholar] [CrossRef]
  2. Jiang, T. The asymptotic distributions of the largest entries of sample correlation matrices. Ann. Appl. Probab. 2004, 14, 865–880. [Google Scholar] [CrossRef] [Green Version]
  3. Zhou, W. Asymptotic distribution of the largest off-diagonal entry of correlation matrices. Trans. Am. Math. Soc. 2007, 359, 5345–5363. [Google Scholar] [CrossRef]
  4. Liu, W.; Lin, Z.; Shao, Q. The asymptotic distribution and Berry-Esseen bound of a new test for independence in high dimension with an application to stochastic optimization. Ann. Appl. Probab. 2008, 18, 2337–2366. [Google Scholar] [CrossRef]
  5. Li, D.; Rosalsky, A. Some strong limit theorems for the largest entries of sample correlation matrices. Ann. Appl. Probab. 2006, 16, 423–447. [Google Scholar] [CrossRef] [Green Version]
  6. Li, D.; Liu, W.; Rosalsky, A. Necessary and sufficient conditions for the asymptotic distribution of the largest entry of a sample correlation matrix. Probab. Theory Relat. Fields 2010, 148, 5–35. [Google Scholar] [CrossRef]
  7. Li, D.; Qi, Y.; Rosalsky, A. On Jiang’s asymptotic distribution of the largest entry of a sample correlation matrix. J. Multivar. Anal. 2012, 111, 256–270. [Google Scholar] [CrossRef] [Green Version]
  8. Shao, Q.; Zhou, W. Necessary and sufficient conditions for the asymptotic distributions of coherence of ultra-high dimensional random matrices. Ann. Probab. 2014, 42, 623–648. [Google Scholar] [CrossRef] [Green Version]
  9. Liu, W.; Lin, Z. Asymptotic distributions of the largest entries of sample correlation matrices under dependence assumptions. Chin. Ann. Math. Ser. 2008, 29, 543–556. [Google Scholar]
  10. Zhao, H.; Zhang, Y. The asymptotic distributions of the largest entries of sample correlation matrices under an α-mixing assumption. Acta. Math. Sin.-Engl. Ser. 2022, 38, 2039–2056. [Google Scholar] [CrossRef]
  11. Lin, Z.; Lu, C. Limit Theory on Mixing Dependent Random Variables; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997. [Google Scholar]
  12. Bradley, R. Approximation theorems for strongly mixing random variables. Mich. Math. J. 1983, 30, 69–81. [Google Scholar] [CrossRef]
  13. Kim, T. A note on moment bounds for strong mixing sequences. Stat. Probab. Lett. 1993, 16, 163–168. [Google Scholar] [CrossRef]
  14. Sakhanenko, A. On the accuracy of normal approximation in the invariance principle. Sib. Adv. Math. 1991, 1, 58–91. [Google Scholar]
  15. Li, D.; Rao, M.; Jiang, T.; Wang, X. Complete convergence and almost sure convergence of weighted sums of random variables. J. Theoret. Probab. 1995, 8, 49–76. [Google Scholar] [CrossRef]
  16. Chow, Y.; Teicher, H. Probability Theory, Independence, Interchangeability, Martingales, 2nd ed.; Springer: New York, NY, USA, 1988. [Google Scholar]
  17. Gut, A. Probability: A Graduate Course; Springer: New York, NY, USA, 2005. [Google Scholar]
  18. Arratia, R.; Goldstein, L.; Gordon, L. Two moments suffice for Poisson approximations: The Chen-Stein method. Ann. Probab. 1989, 17, 9–25. [Google Scholar] [CrossRef]
  19. Cai, T.; Jiang, T. Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices. Ann. Stat. 2011, 39, 1496–1525. [Google Scholar] [CrossRef]
  20. Bai, Z.; Yin, Y. Limit of the smallest eigenvalue of a large-dimensional sample covariance matrix. Ann. Probab. 1993, 21, 1275–1294. [Google Scholar] [CrossRef]