Article

Entry-Wise Eigenvector Analysis and Improved Rates for Topic Modeling on Short Documents

Department of Statistics, Harvard University, Cambridge, MA 02138, USA
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(11), 1682; https://doi.org/10.3390/math12111682
Submission received: 18 April 2024 / Revised: 11 May 2024 / Accepted: 17 May 2024 / Published: 28 May 2024
(This article belongs to the Special Issue Theory and Applications of Random Matrix)

Abstract

Topic modeling is a widely used tool in text analysis. We investigate the optimal rate for estimating a topic model. Specifically, we consider a scenario with n documents, a vocabulary of size p, and document lengths of order N. When N ≥ c·p, referred to as the long-document case, the optimal rate is established in the literature as √(p/(Nn)). However, when N = o(p), referred to as the short-document case, the optimal rate remains unknown. In this paper, we first provide new entry-wise large-deviation bounds for the empirical singular vectors of a topic model. We then apply these bounds to improve the error rate of a spectral algorithm, Topic-SCORE. Finally, by comparing the improved error rate with the minimax lower bound, we conclude that the optimal rate is still √(p/(Nn)) in the short-document case.

1. Introduction

In today’s world, an immense volume of text data is generated in scientific research and in our daily lives. This includes research publications, news articles, posts on social media, electronic health records, and many more. Among the various statistical text models, the topic model [1,2] stands out as one of the most widely used. Given a corpus consisting of n documents written on a vocabulary of p words, let X = [X_1, X_2, …, X_n] ∈ R^{p×n} be the word-document count matrix, where X_i(j) is the count of the jth word in the ith document, for 1 ≤ i ≤ n and 1 ≤ j ≤ p. Let A_1, A_2, …, A_K ∈ R^p be probability mass functions (PMFs). We call each A_k a topic vector, which represents a particular distribution over words in the vocabulary. For each 1 ≤ i ≤ n, let N_i denote the length of the ith document, and let w_i ∈ R^K be a weight vector, where w_i(k) is the fractional weight this document puts on the kth topic, for 1 ≤ k ≤ K. In a topic model, the columns of X are independently generated, where the ith column satisfies:
X_i ∼ Multinomial(N_i, d_i^0), with d_i^0 = Σ_{k=1}^K w_i(k) A_k.
Here d_i^0 ∈ R^p is the population word frequency vector for the ith document, which is a convex combination of the K topic vectors. The N_i words in this document are sampled with replacement from the vocabulary using the probabilities in d_i^0; as a result, the word counts follow a multinomial distribution. Under this model, E[X] is a rank-K matrix. The statistical problem of interest is to use X to estimate the two parameter matrices A = [A_1, A_2, …, A_K] and W = [w_1, w_2, …, w_n].
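To make the data-generating process concrete, the following minimal sketch simulates a corpus from model (1). It assumes equal document lengths N_i = N and draws A and W from Dirichlet distributions purely for illustration; in the model these are fixed unknown parameters, and the function name is ours.

```python
import numpy as np

def simulate_topic_model(n, p, K, N, seed=0):
    """Draw a p x n word-document count matrix X from model (1)."""
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(np.ones(p), size=K).T        # p x K topic matrix; columns are PMFs over words
    W = rng.dirichlet(np.ones(K), size=n).T        # K x n weight matrix; columns lie in the simplex
    D0 = A @ W                                     # population word frequencies d_i^0 = sum_k w_i(k) A_k
    X = np.column_stack([rng.multinomial(N, D0[:, i]) for i in range(n)])
    return X, A, W

X, A, W = simulate_topic_model(n=500, p=2000, K=3, N=150)
D = X / X.sum(axis=0)                              # empirical frequency matrix D in (2)
```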
Since the topic model implies a low-rank structure behind the data matrix, spectral algorithms [3] have been developed for topic model estimation. Topic-SCORE [4] is the first spectral algorithm in the literature. It conducts singular value decomposition (SVD) on a properly normalized version of X, then uses the first K left singular vectors to estimate A, and finally uses Â to estimate W by weighted least-squares. Ref. [4] showed that the error rate on A is √(p/(nN)) up to a logarithmic factor, where N is the order of the document lengths. It matches the minimax lower bound [4] when N ≥ c·p for a constant c > 0, referred to as the long-document case. However, there are many application scenarios with N = o(p), referred to as the short-document case. For example, if we consider a corpus consisting of abstracts of academic publications (e.g., see [3]), N is usually between 100 and 200, but p can be a few thousand or even larger. In this short-document case, ref. [4] observed a gap between the minimax lower bound and the error rate of Topic-SCORE. They posed the following questions: Is the optimal rate still √(p/(Nn)) in the short-document case? If so, can spectral algorithms still achieve this rate?
In this paper, we answer these questions. We discovered that the gap between the minimax lower bound and the error rate of Topic-SCORE in the short-document case came from unsatisfactory entry-wise large-deviation bounds for the empirical singular vectors. While the analysis in [4] is effective for long documents, there is considerable room for improvement in the short-document case. We use a new analysis to obtain much better large-deviation bounds when N = o(p). Our strategy includes two main components: one is an improved non-stochastic perturbation bound for SVD allowing severe heterogeneity in the population singular vectors, and the other is leveraging a decoupling inequality [5] to control the spectral norm of a random matrix with centered multinomial-distributed columns. These new ideas allow us to obtain satisfactory entry-wise large-deviation bounds for empirical singular vectors across the entire regime of N ≥ log³(n). As a consequence, we are able to significantly improve the error rate of Topic-SCORE in the short-document case. This answers the two questions posed by [4]: The optimal rate is still √(p/(Nn)) in the short-document case, and Topic-SCORE still achieves this optimal rate.
Additionally, inspired by our analysis, we have made a modification to Topic-SCORE to better incorporate document lengths. We also extend the asymptotic setting in [4] to a weak-signal regime allowing the K topic vectors to be extremely similar to each other.

1.1. Related Literature

Many topic modeling algorithms have been proposed in the literature, such as LDA [2], the separable NMF approach [6,7], the method in [8] that uses a low-rank approximation to the original data matrix, Topic-SCORE [4], and LOVE [9]. Theoretical guarantees were derived for these methods, but unfortunately, most of them had non-optimal rates even when N ≥ c·p. Topic-SCORE and LOVE are the two that achieve the optimal rate when N ≥ c·p. However, LOVE has no theoretical guarantee when N = o(p); Topic-SCORE has a theoretical guarantee across the entire regime, but the rate obtained by [4] is non-optimal when N = o(p). Therefore, our results address a critical gap in the existing literature by determining the optimal rate for the short-document case for the first time.
Entry-wise eigenvector analysis [10,11,12,13,14,15] provides large-deviation bounds or higher-order expansions for individual entries of the leading eigenvectors of a random matrix. There are two types of random matrices, i.e., the Wigner type (e.g., in network data and pairwise comparison data) and the Wishart type (e.g., in factor models and spiked covariance models [16]). The random matrices in topic models are of the Wishart type, and hence, techniques for the Wigner type, such as the leave-one-out approach [15], are not a good fit. We cannot easily extend the techniques [11,14] for spiked covariance models either. One reason is that the multinomial distribution has heavier-than-Gaussian tails (especially for short documents), and using the existing techniques only gives non-sharp bounds. Another reason is the severe word frequency heterogeneity [17] in natural languages, which calls for bounds whose orders differ across the entries of an eigenvector. Our analysis overcomes these challenges.

1.2. Organization and Notations

The rest of this paper is organized as follows. Section 2 presents our main results about entry-wise eigenvector analysis for topic models. Section 3 applies these results to obtain improved error bounds for the Topic-SCORE algorithm and determine the optimal rate in the short-document case. Section 4 describes the main technical components, along with a proof sketch. Section 5 concludes the paper with discussions. The proofs of all theorems are relegated to the Appendix A, Appendix B, Appendix C, Appendix D and Appendix E.
Throughout this paper, for a matrix B, let B(i,j) or B_ij represent the (i,j)-th entry. We denote by ‖B‖ its operator norm and by ‖B‖_{2→∞} the 2-to-∞ norm, which is the maximum ℓ2 norm across all rows of B. For a vector b, b(i) or b_i represents the ith component. We denote by ‖b‖_1 and ‖b‖ the ℓ1 and ℓ2 norms of b, respectively. The vector 1_n stands for the all-one vector of dimension n. Unless specified otherwise, {e_1, e_2, …, e_p} denotes the standard basis of R^p. Furthermore, we write a_n ≫ b_n or b_n ≪ a_n if b_n/a_n = o(1) for a_n, b_n > 0; and we write a_n ≍ b_n if C^{−1} b_n < a_n < C b_n for some constant C > 1.

2. Entry-Wise Eigenvector Analysis for Topic Models

Let X ∈ R^{p×n} be the word-count matrix following the topic model in (1). We introduce the empirical frequency matrix D = [d_1, d_2, …, d_n] ∈ R^{p×n}, defined by:
d_i(j) = N_i^{−1} X_i(j), for 1 ≤ i ≤ n and 1 ≤ j ≤ p.
Under the model in (1), we have E[d_i] = d_i^0 = Σ_{k=1}^K w_i(k) A_k. Write D_0 = [d_1^0, d_2^0, …, d_n^0] ∈ R^{p×n}. It follows that:
E[D] = D_0 = AW.
We observe that D 0 is a rank-K matrix; furthermore, the linear space spanned by the first K left singular vectors of D 0 is the same as the column space of A. Ref. [4] discovered that there is a low-dimensional simplex structure that explicitly connects the first K left singular vectors of D 0 with the target topic matrix A. This inspired SVD-based methods for estimating A.
However, if one directly conducts SVD on D, the empirical singular vectors can be noisy because of severe word frequency heterogeneity in natural languages [17]. In what follows, we first introduce a normalization on D in Section 2.1 to handle word frequency heterogeneity and then derive entry-wise large-deviation bounds for the empirical singular vectors in Section 2.2.

2.1. A Normalized Data Matrix

We first explain why it is inappropriate to conduct SVD on D directly. Let N̄ = n^{−1} Σ_{i=1}^n N_i denote the average document length. Write D = AW + Z, with Z = [z_1, z_2, …, z_n] := D − E[D]. The left singular vectors of D are the same as the eigenvectors of DD^⊤ = AWW^⊤A^⊤ + AWZ^⊤ + ZW^⊤A^⊤ + ZZ^⊤. By model (1), the columns of Z are centered multinomial-distributed random vectors; moreover, using the covariance matrix formula for multinomial distributions, we have E[z_i z_i^⊤] = N_i^{−1}[diag(d_i^0) − d_i^0(d_i^0)^⊤]. It follows that:
E[DD^⊤] = AWW^⊤A^⊤ + Σ_{i=1}^n N_i^{−1}[ diag(d_i^0) − d_i^0(d_i^0)^⊤ ] = n · A [ Σ_{i=1}^n ((N_i − 1)/(nN_i)) w_i w_i^⊤ ] A^⊤ + (n/N̄) · diag( Σ_{i=1}^n (N̄/(nN_i)) d_i^0 ), where the bracketed matrix in the first term is Σ_W and the diagonal matrix in the second term is M_0.
Here A Σ_W A^⊤ is a rank-K matrix whose eigen-space is the same as the column span of A. However, because of the diagonal matrix M_0, the eigen-space of E[DD^⊤] is no longer the same as the column span of A. We notice that the jth diagonal entry of M_0 captures the overall frequency of the jth word across the whole corpus. Hence, this is an issue caused by word frequency heterogeneity. The second term in (3) is larger when N̄ is smaller, which implies that the issue becomes more severe for short documents.
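The covariance formula E[z_i z_i^⊤] = N_i^{−1}[diag(d_i^0) − d_i^0(d_i^0)^⊤] used above can be checked numerically; the toy Monte Carlo below is our own illustration, with an arbitrary choice of p, N and a randomly drawn d_i^0.

```python
import numpy as np

rng = np.random.default_rng(0)
p, N, reps = 6, 50, 200_000
d0 = rng.dirichlet(np.ones(p))                   # a population frequency vector d_i^0

# z_i = d_i - d_i^0 with d_i = X_i / N; estimate E[z_i z_i^T] by Monte Carlo
Z = rng.multinomial(N, d0, size=reps) / N - d0   # reps x p matrix of centered frequency vectors
emp_cov = Z.T @ Z / reps

theory = (np.diag(d0) - np.outer(d0, d0)) / N    # N^{-1} [diag(d_i^0) - d_i^0 (d_i^0)^T]
print(np.abs(emp_cov - theory).max())            # close to zero up to Monte Carlo error
```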
To resolve this issue, we consider normalizing D to M_0^{−1/2}D. It follows that:
E[ M_0^{−1/2} D D^⊤ M_0^{−1/2} ] = n · M_0^{−1/2} A Σ_W A^⊤ M_0^{−1/2} + (n/N̄) I_p.
Now, the second term is proportional to the identity matrix and no longer affects the eigen-space. Furthermore, the eigen-space of the first term is the column span of M_0^{−1/2}A, and hence, we can use the eigenvectors to recover M_0^{−1/2}A (from which A is immediately known). In practice, M_0 is not observed, so we replace it by its empirical version:
M = diag( (1/n) Σ_{i=1}^n (N̄/N_i) d_i ).
We propose to normalize D to M^{−1/2}D before conducting SVD. Later, the singular vectors of M^{−1/2}D will be used in Topic-SCORE to estimate A (see Section 3).
This normalization is similar to the pre-SVD normalization in [4] but not exactly the same. Inspired by analyzing a special case where N_i ≡ N, ref. [4] proposed to normalize D to M̃^{−1/2}D, where M̃ = diag( n^{−1} Σ_{i=1}^n d_i ). They continued using M̃ in general settings, but we discover here that the adjustment from M̃ to M is necessary when the N_i’s are unequal.
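A minimal sketch of this pre-SVD normalization is given below. It assumes the frequency matrix D and the document lengths N_i are available, and it computes both the diagonal of M in (5) and the diagonal of M̃ from [4] so the two can be compared when the N_i’s are unequal; the function name is ours.

```python
import numpy as np

def pre_svd_normalize(D, N_i):
    """Return M^{-1/2} D together with the diagonals of M (eq. (5)) and M-tilde ([4])."""
    N_i = np.asarray(N_i, dtype=float)
    N_bar = N_i.mean()
    M_diag = (D * (N_bar / N_i)).mean(axis=1)    # (1/n) sum_i (N_bar / N_i) d_i
    M_tilde_diag = D.mean(axis=1)                # (1/n) sum_i d_i, the normalization of [4]
    # Assumes extremely low-frequency words were already merged (Remark 1), so M_diag > 0.
    D_norm = D / np.sqrt(M_diag)[:, None]        # M^{-1/2} D, the matrix fed to SVD
    return D_norm, M_diag, M_tilde_diag
```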
Remark 1.
For extremely low-frequency words, the corresponding diagonal entries of M are very small. This causes an issue when we normalize D to M^{−1/2}D. Fortunately, such an issue disappears if we pre-process the data. As a standard pre-processing step for topic modeling, we either remove those extremely low-frequency words or combine all of them into a single “meta-word”. We recommend the latter approach. In detail, let L ⊂ {1, 2, …, p} be the set of words such that M(j,j) is below a proper threshold t_n (e.g., t_n can be 0.05 times the average of the diagonal entries of M). We then sum up all rows of D with indices in L into a single row. Let D* ∈ R^{(p−|L|+1)×n} be the processed data matrix. The matrix D* still has a topic model structure, where each new topic vector results from a similar row combination on the corresponding original topic vector.
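The following sketch implements the “meta-word” pre-processing of Remark 1 in the equal-document-length special case, where the diagonal of M is simply the row mean of D; the threshold factor 0.05 follows the example in the remark, and the function name is ours.

```python
import numpy as np

def combine_rare_words(D, frac=0.05):
    """Merge all words with M(j,j) below frac * mean(diag(M)) into a single meta-word row."""
    m = D.mean(axis=1)                           # diag(M) when all documents have equal length
    rare = m < frac * m.mean()                   # the index set L of extremely low-frequency words
    if not rare.any():
        return D, rare
    D_star = np.vstack([D[~rare], D[rare].sum(axis=0, keepdims=True)])
    return D_star, rare                          # D_star is (p - |L| + 1) x n
```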
Remark 2.
The normalization of D to M^{−1/2}D is reminiscent of the Laplacian normalization in network data analysis, but the motivation is very different. In many network models, the adjacency matrix satisfies B = B_0 + Y, where B_0 is a low-rank matrix and Y is a generalized Wigner matrix. Since E[Y] is a zero matrix, the eigen-space of E[B] is the same as that of B_0. Hence, the role of the Laplacian normalization is not to correct the eigen-space but to adjust the signal-to-noise ratio [15]. In contrast, our normalization here aims to turn E[ZZ^⊤] into a multiple of the identity matrix (plus a small matrix that can be absorbed into the low-rank part). We need such a normalization even under moderate word frequency heterogeneity (i.e., when the frequencies of all words are of the same order).

2.2. Entry-Wise Singular Vector Analysis for M^{−1/2}D

For each 1 ≤ k ≤ K, let ξ̂_k ∈ R^p denote the kth left singular vector of M^{−1/2}D. Recall that D_0 = E[D]. In addition, define:
M_0 := E[M] = diag( (1/n) Σ_{i=1}^n (N̄/N_i) d_i^0 ).
Then, M_0^{−1/2}D_0 is a population counterpart of M^{−1/2}D. However, the singular vectors of M_0^{−1/2}D_0 are not the population counterparts of the ξ̂_k’s. In light of (4), we define:
ξ_k: the kth eigenvector of M_0^{−1/2} E[DD^⊤] M_0^{−1/2}, for 1 ≤ k ≤ K.
Write Ξ ^ : = [ ξ ^ 1 , , ξ ^ K ] and Ξ : = [ ξ 1 , , ξ K ] . We aim to derive a large-deviation bound for each individual row of ( Ξ ^ Ξ ) , subject to a column rotation of Ξ ^ .
We need a few assumptions. Let h_j = Σ_{k=1}^K A_k(j) for 1 ≤ j ≤ p. Define:
H = diag(h_1, …, h_p), Σ_A = A^⊤ H^{−1} A, Σ_W = (1/n) Σ_{i=1}^n (1 − N_i^{−1}) w_i w_i^⊤.
Here Σ_A and Σ_W are called the topic-topic overlapping matrix and the topic-topic concurrence matrix, respectively [4]. It is easy to see that Σ_W is properly scaled. We remark that Σ_A is also properly scaled, because Σ_{ℓ=1}^K Σ_A(k,ℓ) = Σ_{j=1}^p Σ_{ℓ=1}^K h_j^{−1} A_k(j) A_ℓ(j) = Σ_{j=1}^p A_k(j) = 1.
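The row-sum identity for Σ_A can be verified numerically; the short check below is our own illustration with a randomly drawn A, computing Σ_A = A^⊤H^{−1}A and confirming that every row sums to one.

```python
import numpy as np

rng = np.random.default_rng(1)
p, K = 1000, 3
A = rng.dirichlet(np.ones(p), size=K).T    # p x K topic matrix; each column is a PMF
h = A.sum(axis=1)                          # h_j = sum_k A_k(j), the diagonal of H
Sigma_A = A.T @ (A / h[:, None])           # Sigma_A = A^T H^{-1} A
print(Sigma_A.sum(axis=1))                 # each row sums to 1, so Sigma_A is properly scaled
```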
Assumption 1.
Let h_max = max_{1≤j≤p} h_j, h_min = min_{1≤j≤p} h_j, and h̄ = (1/p) Σ_{j=1}^p h_j. We assume:
h_min ≥ c_1 h̄ = c_1 K/p, for a constant c_1 ∈ (0,1).
Assumption 2.
For a constant c_2 ∈ (0,1) and a sequence β_n ∈ (0,1), we assume:
λ_min(Σ_W) ≥ c_2, λ_min(Σ_A) ≥ c_2 β_n, min_{1≤k,ℓ≤K} Σ_A(k,ℓ) ≥ c_2.
Assumption 1 is related to word frequency heterogeneity. Each h_j captures the overall frequency of word j, and h̄ = p^{−1} Σ_j h_j = p^{−1} Σ_k ‖A_k‖_1 = K/p. By Remark 1, all extremely low-frequency words have been combined in pre-processing. It is therefore reasonable to assume that h_min is of the same order as h̄. Meanwhile, we put no restrictions on h_max, so the h_j’s can still be of different orders.
Assumption 2 is about topic weight balance and between-topic similarity. Σ_W can be regarded as an affinity matrix of the w_i’s. It is mild to assume that Σ_W is well-conditioned. In a special case where N_i ≡ N and each w_i is degenerate, Σ_W is a diagonal matrix whose kth diagonal entry is the fraction of documents that put all their weight on topic k; hence, λ_min(Σ_W) ≥ c_2 is interpreted as “topic weight balance”. Regarding Σ_A, we have seen that it is properly scaled (its maximum eigenvalue is of constant order). When the K topic vectors are exactly the same, λ_min(Σ_A) = 0; when the topic vectors are not all the same, λ_min(Σ_A) > 0, and it measures the signal strength. Ref. [4] assumed that λ_min(Σ_A) is bounded below by a constant, but we allow weaker signals by letting λ_min(Σ_A) diminish as n → ∞. We also require a lower bound on Σ_A(k,ℓ), meaning that there should be a certain overlap between any two topics. This is reasonable, as some commonly used words are not exclusive to any one topic and tend to occur frequently [4].
The last assumption is about the vocabulary size and document lengths.
Assumption 3.
There exist N ≥ 1 and a constant c_3 ∈ (0,1) such that c_3 N ≤ N_i ≤ c_3^{−1} N for all 1 ≤ i ≤ n. In addition, for an arbitrary constant C_0 > 0:
min{p, N} ≥ log³(n), max{log(p), log(N)} ≤ C_0 log(n), p log²(n) ≪ N n β_n².
In Assumption 3, the first two inequalities restrict N and p to lie between log³(n) and n^{C_0}, for an arbitrary constant C_0 > 0. This covers a wide regime, including the scenarios of both long documents (N ≥ c·p) and short documents (N = o(p)). The third inequality is needed so that the canonical angles between the empirical and population singular spaces converge to zero, which is necessary for our singular vector analysis. This condition is mild, as Nn is the order of the total word count in the corpus, which is often much larger than p.
With these assumptions, we now present our main theorem.
Theorem 1
(Entry-wise singular vector analysis). Fix K ≥ 2 and positive constants c_1, c_2, c_3, and C_0. Under the model (1), suppose Assumptions 1–3 hold. For any constant C_1 > 0, there exists C_2 > 0 such that, with probability 1 − n^{−C_1}, there is an orthogonal matrix O ∈ R^{K×K} satisfying, simultaneously for 1 ≤ j ≤ p:
‖e_j^⊤(Ξ̂ − ΞO)‖ ≤ C_2 √(h_j) · √( p log(n) / (n N β_n²) ).
The constant C 2 only depends on C 1 and ( K , c 1 , c 2 , c 3 , C 0 ) .
In Theorem 1, we do not assume any gap among the K singular values of M_0^{−1/2}D_0; hence, it is only possible to recover Ξ up to a column rotation O. The sine-theta theorem [18] enables us to bound ‖Ξ̂ − ΞO‖_F² = Σ_{j=1}^p ‖e_j^⊤(Ξ̂ − ΞO)‖², but this is insufficient for analyzing spectral algorithms for topic modeling (see Section 3). We need a bound for each individual row of (Ξ̂ − ΞO), and this bound should depend on h_j properly.
We compare Theorem 1 with the result in [4]. They assumed that β_n^{−1} = O(1), so their results only cover the strong-signal regime. They showed that when n is sufficiently large:
e j ( Ξ ^ Ξ O ) C 1 + min p N , p 2 N N h j p log ( n ) n N .
When N ≥ c·p (long documents), this is the same bound as in Theorem 1 (with β_n = 1). However, when N = o(p) (short documents), it is strictly worse than Theorem 1. We obtain better bounds than those in [4] because of new proof ideas, especially the use of a refined perturbation analysis for SVD and a decoupling technique for U-statistics (see Section 4.2).

3. Improved Rates for Topic Modeling

We apply the results in Section 2 to improve the error rates of topic modeling. Topic-SCORE [4] is a spectral algorithm for estimating the topic matrix A. It achieves the optimal rate in the long-document case (N ≥ c·p). However, in the short-document case (N = o(p)), the known rate of Topic-SCORE does not match the minimax lower bound. We address this gap by providing better error bounds for Topic-SCORE. Our results reveal the optimal rate for topic modeling in the short-document case for the first time.

3.1. The Topic-Score Algorithm

Let ξ̂_1, ξ̂_2, …, ξ̂_K be as in Section 2. Topic-SCORE first obtains word embeddings from these singular vectors. Note that M^{−1/2}D is a non-negative matrix. By Perron’s theorem [19], under mild conditions, ξ̂_1 is a strictly positive vector. Define R̂ ∈ R^{p×(K−1)} by:
R̂(j,k) = ξ̂_{k+1}(j) / ξ̂_1(j), for 1 ≤ j ≤ p and 1 ≤ k ≤ K−1.
Let r ^ 1 , r ^ 2 , , r ^ p denote the rows of R ^ . Then, r ^ j is a ( K 1 ) -dimensional embedding of the jth word in the vocabulary. This is known as the SCORE embedding [20,21], which is now widely used in analyzing heterogeneous network and text data.
Ref. [4] discovered that there is a simplex structure associated with these word embeddings. Specifically, let ξ_1, ξ_2, …, ξ_K be the same as in (7) and define the population counterpart of R̂ as R, where:
R(j,k) = ξ_{k+1}(j) / ξ_1(j), for 1 ≤ j ≤ p and 1 ≤ k ≤ K−1.
Let r_1, r_2, …, r_p denote the rows of R. All these r_j are contained in a simplex S ⊂ R^{K−1} that has K vertices v_1, v_2, …, v_K (see Figure 1). If the jth word is an anchor word [6,22] (an anchor word of topic k satisfies A_k(j) ≠ 0 and A_ℓ(j) = 0 for all ℓ ≠ k), then r_j is located at one of the vertices. Therefore, as long as each topic has at least one anchor word, we can apply a vertex hunting [4] algorithm to recover the K vertices of S. By the definition of a simplex, each point inside S can be written uniquely as a convex combination of the K vertices, and the K-dimensional vector consisting of the convex combination coefficients is called the barycentric coordinate. After recovering the vertices of S, we can easily compute the barycentric coordinate π_j ∈ R^K for each r_j. Write Π = [π_1, π_2, …, π_p]^⊤. Ref. [4] showed that:
A_k ∝ M_0^{1/2} diag(ξ_1) Π e_k, for 1 ≤ k ≤ K.
Therefore, we can recover A_k by taking the kth column of M_0^{1/2} diag(ξ_1) Π and re-normalizing it to have unit ℓ1-norm. This is the main idea behind Topic-SCORE (see Figure 1).
The full algorithm is given in Algorithm 1. It requires plugging in a vertex hunting (VH) algorithm. A VH algorithm aims to estimate v_1, v_2, …, v_K from the noisy point cloud {r̂_j}_{1≤j≤p}. There are many existing VH algorithms (see Section 3.4 of [21]). A VH algorithm is said to be efficient if it satisfies max_{1≤k≤K} ‖v̂_k − v_k‖ ≤ C max_{1≤j≤p} ‖r̂_j − r_j‖ (subject to a permutation of v̂_1, v̂_2, …, v̂_K). We always plug in an efficient VH algorithm, such as the successive projection algorithm [23] (sketched below), the pp-SPA algorithm [24], or one of the algorithms in Section 3.4 of [21].
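As a concrete example of a VH algorithm, here is a minimal sketch of the successive projection idea. It is our own simplified variant, not the exact implementation used in [23] or [4]: each embedding r̂_j is augmented with a constant coordinate so that the K simplex vertices become linearly independent, and the algorithm then alternates between picking the point of largest norm and projecting that direction out.

```python
import numpy as np

def successive_projection(points, K):
    """Return the indices of K rows of `points` selected as estimated simplex vertices.

    points : (p, K-1) array whose rows are the word embeddings r_hat_j.
    """
    Y = np.hstack([points, np.ones((points.shape[0], 1))])  # augment with a constant coordinate
    R = Y.copy()
    selected = []
    for _ in range(K):
        j = int(np.argmax((R ** 2).sum(axis=1)))            # farthest remaining point is (near) a vertex
        selected.append(j)
        u = R[j] / np.linalg.norm(R[j])
        R = R - np.outer(R @ u, u)                          # project all points onto the orthogonal complement
    return np.array(selected)

# Estimated vertices: V_hat = points[successive_projection(points, K)]
```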
Algorithm 1 Topic-SCORE
Input: D, K, and a vertex hunting (VH) algorithm.
  • (Word embedding) Let M be as in (5). Obtain ξ̂_1, ξ̂_2, …, ξ̂_K, the first K left singular vectors of M^{−1/2}D. Compute R̂ as in (10) and write R̂ = [r̂_1, r̂_2, …, r̂_p]^⊤.
  • (Vertex hunting) Apply the VH algorithm to {r̂_j}_{1≤j≤p} to get v̂_1, …, v̂_K.
  • (Topic matrix estimation) For 1 ≤ j ≤ p, solve π̂_j* from:
    [ 1 ⋯ 1 ; v̂_1 ⋯ v̂_K ] π̂_j* = [ 1 ; r̂_j ], i.e., the entries of π̂_j* sum to 1 and Σ_{k=1}^K π̂_j*(k) v̂_k = r̂_j.
    Let π̃_j* = max{π̂_j*, 0} (the maximum is taken component-wise) and π̂_j = π̃_j* / ‖π̃_j*‖_1. Write Π̂ = [π̂_1, …, π̂_p]^⊤. Let Ã = M^{1/2} diag(ξ̂_1) Π̂. Obtain Â = Ã [ diag(1_p^⊤ Ã) ]^{−1}.
Output: the estimated topic matrix A ^ .
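For readers who prefer code, the sketch below strings the three steps of Algorithm 1 together. It is a simplified illustration under the assumptions already made in our other sketches (rare words merged beforehand, document lengths passed in as N_i); the vertex_hunting argument can be any efficient VH routine, e.g. a wrapper around the SPA sketch above, and all function names are ours.

```python
import numpy as np

def topic_score(D, K, N_i, vertex_hunting):
    """A compact sketch of Algorithm 1. D: p x n frequency matrix; returns an estimate of A (p x K)."""
    p, n = D.shape
    N_i = np.asarray(N_i, dtype=float)
    M_diag = (D * (N_i.mean() / N_i)).mean(axis=1)            # diagonal of M in (5)
    # Step 1 (word embedding): first K left singular vectors of M^{-1/2} D, then SCORE ratios (10)
    U, _, _ = np.linalg.svd(D / np.sqrt(M_diag)[:, None], full_matrices=False)
    xi = U[:, :K]
    xi1 = np.abs(xi[:, 0])                                    # Perron-like leading vector, taken positive
    R_hat = xi[:, 1:K] / xi1[:, None]                         # p x (K-1) embeddings r_hat_j
    # Step 2 (vertex hunting): estimate the K simplex vertices from the point cloud
    V_hat = vertex_hunting(R_hat)                             # K x (K-1) array of vertices
    # Step 3 (topic matrix estimation): barycentric coordinates, then undo the normalizations
    B = np.vstack([np.ones(K), V_hat.T])                      # the K x K linear system in Algorithm 1
    Pi = np.linalg.solve(B, np.vstack([np.ones(p), R_hat.T])) # K x p barycentric coordinates
    Pi = np.clip(Pi, 0.0, None)
    Pi /= Pi.sum(axis=0, keepdims=True)
    A_tilde = np.sqrt(M_diag)[:, None] * xi1[:, None] * Pi.T  # M^{1/2} diag(xi_hat_1) Pi_hat
    return A_tilde / A_tilde.sum(axis=0, keepdims=True)       # column-normalize to unit l1-norm

# Example usage with the SPA sketch from above:
# A_hat = topic_score(D, K=3, N_i=N_i, vertex_hunting=lambda R: R[successive_projection(R, 3)])
```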
Additionally, after Â is obtained, ref. [4] suggested estimating w_1, w_2, …, w_n as follows. We first run a weighted least-squares to obtain ŵ_i*:
ŵ_i* = argmin_{w ∈ R^K} ‖ M^{−1/2}( d_i − Â w ) ‖², for 1 ≤ i ≤ n.
Then, set all the negative entries of ŵ_i* to zero and re-normalize the vector to have unit ℓ1-norm. The resulting vector is ŵ_i.
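A minimal sketch of this weighted least-squares step is given below, reusing the diagonal of M from the earlier sketches; it treats Â as already estimated and is our own illustration rather than the exact routine of [4].

```python
import numpy as np

def estimate_weights(D, A_hat, N_i):
    """Estimate W column by column via (12), then truncate at zero and re-normalize."""
    N_i = np.asarray(N_i, dtype=float)
    M_diag = (D * (N_i.mean() / N_i)).mean(axis=1)
    X = A_hat / np.sqrt(M_diag)[:, None]             # design matrix M^{-1/2} A_hat
    W_hat = np.empty((A_hat.shape[1], D.shape[1]))
    for i in range(D.shape[1]):
        y = D[:, i] / np.sqrt(M_diag)                # response M^{-1/2} d_i
        w, *_ = np.linalg.lstsq(X, y, rcond=None)    # unconstrained weighted least squares
        w = np.clip(w, 0.0, None)
        W_hat[:, i] = w / w.sum()                    # project back onto the probability simplex
    return W_hat
```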
Remark 3.
In real-world applications, both n and p can be very large. However, since R̂ is constructed from only a few singular vectors, its rows have dimension only K−1. This means Topic-SCORE leverages a ‘low-dimensional’ simplex structure and is scalable to large datasets. When K is bounded, the complexity of Topic-SCORE is at most O(np·min{n,p}) [4]. The real computing time was also reported in [4] for various values of (n,p). For example, when both n and p are a few thousand, it takes only a few seconds to run Topic-SCORE.

3.2. The Improved Rates for Estimating A and W

We provide the error rates of Topic-SCORE. First, we study the word embeddings r ^ j . By (10), r ^ j is constructed from the jth row of Ξ ^ . Therefore, we can apply Theorem 1 to derive a large-deviation bound for r ^ j .
Without loss of generality, we set C_1 = 4 henceforth, so that the event probability 1 − n^{−C_1} in Theorem 1 becomes 1 − o(n^{−3}). We also use C to denote a generic constant, whose meaning may change from one occurrence to another. In all instances, C depends solely on K and the constants (c_1, c_2, c_3, C_0) in Assumptions 1–3.
Theorem 2
(Word embeddings). Under the setting of Theorem 1, with probability 1 − o(n^{−3}), there exists an orthogonal matrix Ω ∈ R^{(K−1)×(K−1)} such that, simultaneously for 1 ≤ j ≤ p:
‖r̂_j − Ω r_j‖ ≤ C √( p log(n) / (n N β_n²) ).
Next, we study the error of Â. The ℓ1-estimation error is L(Â, A) := Σ_{k=1}^K ‖Â_k − A_k‖_1, subject to an arbitrary column permutation of Â. For ease of notation, we do not explicitly denote this permutation in theorem statements, but it is accounted for in the proofs. For each 1 ≤ j ≤ p, let â_j ∈ R^K and a_j ∈ R^K denote the jth rows of Â and A, respectively. We can re-write the ℓ1-estimation error as L(Â, A) = Σ_{j=1}^p ‖â_j − a_j‖_1. The next theorem provides an error bound for each individual â_j, and the aggregation of these bounds yields an overall bound for L(Â, A):
Theorem 3
(Estimation of A). Under the setting of Theorem 1, we additionally assume that each topic has at least one anchor word. With probability 1 − o(n^{−3}), simultaneously for 1 ≤ j ≤ p:
‖â_j − a_j‖_1 ≤ ‖a_j‖_1 · C √( p log(n) / (n N β_n²) ).
Furthermore, with probability 1 − o(n^{−3}), the ℓ1-estimation error satisfies:
L(Â, A) ≤ C √( p log(n) / (n N β_n²) ).
Theorem 3 improves the result in [4] in two aspects. First, [4] assumed β_n^{−1} = O(1), so their results did not allow for weak signals. Second, even when β_n^{−1} = O(1), their bound is worse than ours by a factor similar to that in (9).
Finally, we have the error bound for estimating w i ’s using the estimator in (12).
Theorem 4
(Estimation of W). Under the setting of Theorem 3, with probability 1 − o(n^{−3}), subject to a column permutation of Ŵ:
max_{1≤i≤n} ‖ŵ_i − w_i‖_1 ≤ C β_n^{−1} √( p log(n) / (n N β_n²) ) + C √( log(n) / N ).
In Theorem 4, there are two terms in the error bound for ŵ_i. The first term comes from the estimation error in Â, and the second term comes from the noise in d_i. In the strong-signal case β_n^{−1} = O(1), we can compare Theorem 4 with the bound for ŵ_i in [4]. The bound there also has two terms: its second term is similar to ours, but its first term is strictly worse.

3.3. Connections and Comparisons

There have been numerous results about the error rates of estimating A and W. For example, ref. [6] provided the first explicit theoretical guarantees for topic modeling, but they did not study the statistical optimality of their method. Recently, the statistical literature has aimed to understand the fundamental limits of topic modeling. Assuming β_n^{−1} = O(1), refs. [4,9] gave a minimax lower bound, √(p/(Nn)), for the rate of estimating A, and refs. [25,26] gave a minimax lower bound, 1/√N, for estimating each w_i.
For estimating A, when β_n^{−1} = O(1), the existing theoretical results are summarized in Table 1. Ours is the only one that matches the minimax lower bound across the entire regime. In the long-document case (N ≥ c·p, Cases 1–2 in Table 1), the error rates in [4,9] together match the lower bound, so √(p/(Nn)) is indeed the optimal rate there. However, in the short-document case (N = o(p), Case 3 in Table 1), there was a gap between the lower bound and the existing error rates. Our result closes this gap and shows that √(p/(Nn)) is still the optimal rate. When β_n = o(1), the error rates of estimating A have rarely been studied. We conjecture that √(p/(Nnβ_n²)) is the optimal rate and that Topic-SCORE is still rate-optimal.
We emphasize that our rate is not affected by severe word frequency heterogeneity. As long as h_min/h̄ is bounded below by a constant (see Assumption 1 and the explanations therein), our rate stays the same, regardless of the magnitude of h_max. In contrast, the error rate in [9] is sensitive to word frequency heterogeneity, with an extra factor of h_max/h_min that can be as large as p. There are two reasons that enable us to achieve a flat rate even under severe word frequency heterogeneity: one is the proper normalization of the data matrix, as described in Section 2.1, and the other is the careful analysis of empirical singular vectors (see Section 4).
For estimating W, when β_n^{−1} = O(1), our error rate in Theorem 4 matches the minimax lower bound if n ≥ p log(n). Our approach to estimating W involves first obtaining Â and then regressing d_i on Â to derive ŵ_i. The condition n ≥ p log(n) ensures that the estimation error in Â does not dominate the overall error. This condition is often met in scenarios where a large number of documents can be collected but the vocabulary size remains relatively stable. However, if n < p log(n), a different approach is necessary, requiring the estimation of W first. This involves using the right singular vectors of M^{−1/2}D. While our analysis has primarily focused on the left singular vectors, it can be extended to study the right singular vectors as well.

4. Proof Ideas

Our main result is Theorem 1, which provides entry-wise large-deviation bounds for singular vectors of M 1 / 2 D . Given this theorem, the proofs of Theorems 2–4 are similar to those in [4] and thus relegated to the appendix. In this section, we focus on discussing the proof techniques of Theorem 1.

4.1. Why the Leave-One-Out Technique Fails

Leave-one-out [13,15] is a common technique in entry-wise eigenvector analysis for a Wigner-type random matrix B = B 0 + Y R m × m , where B 0 is a symmetric non-stochastic low-rank matrix and Y is a symmetric random matrix whose upper triangle consists of independent mean-zero variables. One example of such matrices is the adjacency matrix of a random graph generated from the block-model family [20].
However, our target here is the singular vectors of M^{−1/2}D, which are the eigenvectors of B := M^{−1/2}DD^⊤M^{−1/2}. This is a Wishart-type random matrix, whose upper triangular entries are not independent. We may also construct a symmetric matrix:
𝒢 := [ 0, M^{−1/2}D ; D^⊤M^{−1/2}, 0 ] ∈ R^{(p+n)×(p+n)}.
The eigenvectors of 𝒢 take the form û_k = (ξ̂_k^⊤, η̂_k^⊤)^⊤, 1 ≤ k ≤ K, where ξ̂_k ∈ R^p and η̂_k ∈ R^n are the kth left and right singular vectors of M^{−1/2}D, respectively. Unfortunately, the upper triangle of 𝒢 still contains dependent entries. Some of the dependence comes from the normalization matrix M; it may be addressed by using the techniques developed by [15] for studying graph Laplacian matrices. A more severe issue is the dependence among the entries of D. By basic properties of multinomial distributions, D has column independence but no row independence. As a result, even after we replace M by M_0, the jth row and column of 𝒢 are still dependent on the remaining ones, for each 1 ≤ j ≤ p. In conclusion, we cannot apply the leave-one-out technique to either B or 𝒢.

4.2. The Proof Structure in [4] and Why It Is Not Sharp for Short Documents

Our entry-wise eigenvector analysis primarily follows the proof structure in [4]. Recall that ξ̂_k ∈ R^p is the kth left singular vector of M^{−1/2}D. Define:
G := M^{−1/2}DD^⊤M^{−1/2} − (n/N̄) I_p, G_0 := n · M_0^{−1/2} A Σ_W A^⊤ M_0^{−1/2}.
Since the identity matrix in G does not affect the eigenvectors, ξ̂_k is the kth eigenvector of G. Additionally, it follows from (7) and (4) that ξ_k is the kth eigenvector of G_0. By (4):
G − G_0 = M^{−1/2}DD^⊤M^{−1/2} − M_0^{−1/2} E[DD^⊤] M_0^{−1/2}.
The entry-wise eigenvector analysis in [4] has two steps. Step 1: Non-stochastic perturbation analysis. In this step, no distributional assumptions are made on G. The analysis solely focuses on connecting the perturbation from Ξ to Ξ ^ with the perturbation from G 0 to G. They showed in Lemma F.1 [4]:
e j ( Ξ ^ Ξ O ) C G 0 1 e j Ξ G G 0 + K e j ( G G 0 ) .
Step 2: Large-deviation analysis of G G 0 . In this step, ref. [4] derived the large-deviation bounds for G G 0 and e j ( G G 0 ) under the multinomial model (1). For example, they showed in Lemma F.5 [4] that when n is properly large, with high probability:
G G 0 C 1 + N 1 p n p log ( n ) N .
However, when N = o(p) (short documents), neither step is sharp. In (15), the second term ‖e_j^⊤(G − G_0)‖ was introduced as an upper bound for ‖e_j^⊤(G − G_0)Ξ̂‖, but this bound is too crude. In Section 4.3, we conduct a careful analysis of ‖e_j^⊤(G − G_0)Ξ̂‖ and introduce a new perturbation bound which significantly improves (15). In (16), the spectral norm is controlled via an ε-net argument [27], which reduces the analysis to studying a quadratic form of Z; ref. [4] analyzed this quadratic form by applying a martingale Bernstein inequality. Unfortunately, in the short-document case, it is hard to control the conditional variance process of the underlying martingale. In Section 4.4, we address this by leveraging the matrix Bernstein inequality [28] and the decoupling inequality [5,29] for U-statistics.

4.3. Non-Stochastic Perturbation Analysis

In this subsection, we abuse notation and use G and G_0 to denote two arbitrary p×p symmetric matrices with rank(G_0) = K. For 1 ≤ k ≤ K, let λ̂_k and λ_k be the kth largest eigenvalues (in magnitude) of G and G_0, respectively, and let ξ̂_k ∈ R^p and ξ_k ∈ R^p be the associated eigenvectors. Write Λ̂ = diag(λ̂_1, λ̂_2, …, λ̂_K), Ξ̂ = [ξ̂_1, ξ̂_2, …, ξ̂_K], and define Λ and Ξ similarly. Let U ∈ R^{K×K} and V ∈ R^{K×K} be such that their columns contain the left and right singular vectors of Ξ̂^⊤Ξ, respectively. Define sgn(Ξ̂^⊤Ξ) = UV^⊤. For any matrix B and q > 0, let ‖B‖_q = max_i ‖e_i^⊤B‖_q.
Lemma 1.
Suppose ‖G − G_0‖ ≤ (1 − c_0)|λ̂_K| for some c_0 ∈ (0,1). Consider an arbitrary p×p diagonal matrix Γ = diag(γ_1, γ_2, …, γ_p), where:
γ_j > 0 is an upper bound for ‖e_j^⊤Ξ‖ · ‖G − G_0‖ + ‖e_j^⊤(G − G_0)Ξ‖.
If ‖Γ^{−1}(G − G_0)Γ‖_1 ≤ (1 − c_0)|λ̂_K|, then for the orthogonal matrix O = sgn(Ξ̂^⊤Ξ), it holds simultaneously for 1 ≤ j ≤ p that:
‖e_j^⊤(Ξ̂ − ΞO)‖ ≤ c_0^{−1} |λ̂_K|^{−1} γ_j.
Since γ_j is an upper bound for ‖e_j^⊤Ξ‖ · ‖G − G_0‖ + ‖e_j^⊤(G − G_0)Ξ‖, we can interpret the result in Lemma 1 as:
‖e_j^⊤(Ξ̂ − ΞO)‖ ≤ C |λ̂_K|^{−1} ( ‖e_j^⊤Ξ‖ · ‖G − G_0‖ + ‖e_j^⊤(G − G_0)Ξ‖ ).
Comparing (17) with (15), the second term has been reduced. Since Ξ projects the vector e_j^⊤(G − G_0) onto a much lower-dimensional space, we expect that ‖e_j^⊤(G − G_0)Ξ‖ ≪ ‖e_j^⊤(G − G_0)‖ in many random models for G. In particular, this is true for the G and G_0 defined in (13). Hence, there is a significant improvement over the analysis in [4].

4.4. Large-Deviation Analysis of ( G G 0 )

In this subsection, we focus on the specific G and G 0 as defined in (13). The crux of proving Theorem 1 lies in determining the upper bound γ j as defined in Lemma 1. This is accomplished through the following lemma.
Lemma 2.
Under the setting of Theorem 1, let G and G_0 be as in (13). For any constant C_1 > 0, there exists C_3 > 0 such that, with probability 1 − n^{−C_1}, simultaneously for 1 ≤ j ≤ p:
‖G − G_0‖ ≤ C_3 √( p n log(n) / N ), ‖e_j^⊤(G − G_0)Ξ‖ ≤ C_3 √( h_j n p log(n) / N ).
The constant C 3 only depends on C 1 and ( K , c 1 , c 2 , c 3 , C 0 ) .
We compare the bound for ‖G − G_0‖ in Lemma 2 with the one in [4], summarized in (16). There is a significant improvement when N ≪ p². This improvement primarily stems from the application of a decoupling inequality for U-statistics, as elaborated below.
We outline the proof of the bound for ‖G − G_0‖. Let Z = D − E[D] = [z_1, z_2, …, z_n]. From (A24) and (A25) in Appendix A, G − G_0 decomposes into a sum of four matrices, of which the fourth is the most subtle to bound in spectral norm:
E_4 := M_0^{−1/2} ( ZZ^⊤ − E[ZZ^⊤] ) M_0^{−1/2}.
Define X_i = (M_0^{−1/2}z_i)(M_0^{−1/2}z_i)^⊤ − E[ (M_0^{−1/2}z_i)(M_0^{−1/2}z_i)^⊤ ]. It is seen that E_4 = Σ_{i=1}^n X_i, which is a sum of n independent matrices. We apply the matrix Bernstein inequality [28] (Theorem A1): if there exist b > 0 and σ² > 0 such that ‖X_i‖ ≤ b almost surely for all i and ‖Σ_{i=1}^n E[X_i²]‖ ≤ σ², then for every t > 0,
P( ‖Σ_{i=1}^n X_i‖ ≥ t ) ≤ 2p · exp( −(t²/2) / (σ² + b t/3) ).
The determination of b and σ² requires upper bounds for ‖X_i‖ and ‖E[X_i²]‖. Since each X_i equals a rank-1 matrix minus its expectation, this reduces to deriving large-deviation bounds for ‖M_0^{−1/2}z_i‖². Note that each z_i can be equivalently represented as z_i = N_i^{−1} Σ_{m=1}^{N_i} (T_im − E[T_im]), where {T_im}_{m=1}^{N_i} are i.i.d. Multinomial(1, d_i^0). It yields that ‖M_0^{−1/2}z_i‖² = I_1 + I_2, where I_2 is a term that can be controlled using standard large-deviation inequalities, and:
I_1 := N_i^{−2} Σ_{1 ≤ m_1 ≠ m_2 ≤ N_i} (T_im_1 − E[T_im_1])^⊤ M_0^{−1} (T_im_2 − E[T_im_2]).
The remaining question is how to bound | I 1 | . We notice that I 1 is a U-statistic with degree 2. The decoupling inequality [5,29] is a useful tool for studying U-statistics.
Theorem 5
(A special decoupling inequality [29]). Let {X_m}_{m=1}^N be a sequence of i.i.d. random vectors in R^d, and let {X̃_m}_{m=1}^N be an independent copy of {X_m}_{m=1}^N. Suppose that h: R^{2d} → R is a measurable function. Then, there exists a constant C_4 > 0, independent of N and d, such that for all t > 0:
P( |Σ_{m ≠ m_1} h(X_m, X_{m_1})| ≥ t ) ≤ C_4 P( C_4 |Σ_{m ≠ m_1} h(X_m, X̃_{m_1})| ≥ t ).
Let {T̃_im}_{m=1}^{N_i} be an independent copy of {T_im}_{m=1}^{N_i}. By Theorem 5, the large-deviation bound for I_1 can be inferred from the large-deviation bound for:
Ĩ_1 := N_i^{−2} Σ_{1 ≤ m_1 ≠ m_2 ≤ N_i} (T_im_1 − E[T_im_1])^⊤ M_0^{−1} (T̃_im_2 − E[T̃_im_2]).
Using h(T_im_1, T̃_im_2) to denote the summand in the above sum, we have the decomposition Ĩ_1 = N_i^{−2} Σ_{m_1, m_2} h(T_im_1, T̃_im_2) − N_i^{−2} Σ_m h(T_im, T̃_im). The second term is a sum of independent variables and can be controlled by standard large-deviation inequalities. Hence, the analysis of Ĩ_1 reduces to the analysis of Ĩ_1* := N_i^{−2} Σ_{m_1, m_2} h(T_im_1, T̃_im_2). We re-write Ĩ_1* as:
Ĩ_1* = N_i^{−2} y^⊤ ỹ, with y := Σ_{m=1}^{N_i} M_0^{−1/2}(T_im − E[T_im]), ỹ := Σ_{m=1}^{N_i} M_0^{−1/2}(T̃_im − E[T̃_im]).
Since y ˜ is independent of y, we apply large-deviation inequalities twice. First, conditional on y ˜ , I ˜ 1 * is a sum of N i independent variables (randomness comes from T i m ’s). We apply the Bernstein inequality to get a large-deviation bound for I ˜ 1 * , which depends on a quantity σ 2 ( y ˜ ) . Next, since σ 2 ( y ˜ ) can also be written as a sum of N i independent variables (randomness comes from T ˜ i m ’s), we apply the Bernstein inequality again to obtain a large-deviation bound for σ 2 ( y ˜ ) . Combining two steps gives the large-deviation bound for I ˜ 1 * .
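A toy Monte Carlo illustration of this decoupling step is sketched below; it is not part of the proof. For a single document it draws the centered indicators T_im − E[T_im], computes the coupled off-diagonal quadratic form I_1 and its decoupled analogue Ĩ_1 built from an independent copy, and compares their empirical tails. The choice M_0 = diag(d_i^0) is an arbitrary stand-in made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
p, N, reps = 20, 100, 5000
d0 = rng.dirichlet(np.ones(p))                     # a single document's d_i^0
w = 1.0 / np.sqrt(d0)                              # toy M_0^{-1/2} with M_0 = diag(d_0)

def coupled_and_decoupled():
    T = rng.multinomial(1, d0, size=N) - d0        # N x p centered indicators T_m - E[T_m]
    Tt = rng.multinomial(1, d0, size=N) - d0       # independent copy, for the decoupled sum
    S, St = T * w, Tt * w                          # rows are M_0^{-1/2}(T_m - E[T_m])
    C, Cd = S @ S.T, S @ St.T                      # matrices of quadratic-form summands
    I1 = (C.sum() - np.trace(C)) / N**2            # coupled off-diagonal sum (the U-statistic I_1)
    I1_dec = (Cd.sum() - np.trace(Cd)) / N**2      # decoupled version I_1-tilde
    return I1, I1_dec

samples = np.abs([coupled_and_decoupled() for _ in range(reps)])
print(np.quantile(samples, 0.99, axis=0))          # the two tails are of comparable size
```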
Remark 4.
The decoupling inequality is employed multiple times to study other U-statistics-type quantities arising in our proof. For example, recall that (G − G_0) decomposes into the sum of four matrices, and we have only discussed how to bound E_4. In the analysis of E_2 and E_3, we need to bound other quadratic terms involving a sum over (i,m), with 1 ≤ i ≤ n and 1 ≤ m ≤ N_i. In that case, we need a more general decoupling inequality. We refer readers to Theorem A3 in Appendix A for more details.
Remark 5.
The analysis in [4] uses an ε-net argument [27] and the martingale Bernstein inequality [30] to study E_4. In our analysis, we use the matrix Bernstein inequality [28] instead of the ε-net argument. The matrix Bernstein inequality enables us to tackle the quadratic term related to each i separately, instead of handling complicated quadratic terms involving summation over i and m simultaneously. Additionally, we adopt the decoupling inequality for U-statistics [5,29], instead of the martingale Bernstein inequality, to study all the quadratic terms arising in our analysis. The decoupling inequality converts the tail analysis of quadratic terms into the tail analysis of (conditionally) independent sums. It provides sharper bounds when the variables have heavy tails (which is the case for the word counts in a topic model, especially when documents are short).

4.5. Proof Sketch of Theorem 1

We combine the non-stochastic perturbation result in Lemma 1 and the large-deviation bounds in Lemma 2 to prove Theorem 1. By Lemma A2, |λ_K| ≥ C^{−1} n β_n. It follows from Weyl’s inequality, the first claim in Lemma 2, and the assumption p log²(n) ≪ N n β_n² that, with probability 1 − n^{−C_1}:
|λ̂_K| ≥ |λ_K| · ( 1 − O( [log(n)]^{−1/2} ) ) ≥ C^{−1} n β_n.
In addition, it can be shown (see Lemma A2) that ‖e_j^⊤Ξ‖ ≤ C h_j^{1/2}. Combining this with the two claims in Lemma 2 gives that, with probability 1 − n^{−C_1}:
‖e_j^⊤Ξ‖ · ‖G − G_0‖ + ‖e_j^⊤(G − G_0)Ξ‖ ≤ C √( h_j n p log(n) / N ) := γ_j.
We hope to apply Lemma 1, which requires a bound on ‖Γ^{−1}(G − G_0)Γ‖_1. Since Γ ∝ H^{1/2}, it suffices to study ‖H^{−1/2}(G − G_0)H^{1/2}‖_1. Similarly to the analysis of ‖e_j^⊤(G − G_0)Ξ‖, we can show (see the proofs of Lemmas A5 and A6, such as (A58)) that ‖e_j^⊤(G − G_0)H^{1/2}‖_1 ≤ C N^{−1/2} [ h_j n p log(n) ]^{1/2} ≤ C √( h_j / log(n) ) · n β_n, where the last inequality is because p log²(n) ≪ N n β_n². We immediately have:
‖H^{−1/2}(G − G_0)H^{1/2}‖_1 = max_j h_j^{−1/2} ‖e_j^⊤(G − G_0)H^{1/2}‖_1 ≤ C n β_n / √(log(n)) ≤ |λ̂_K| / 2.
We then apply Lemma 1 to get ‖e_j^⊤(Ξ̂ − ΞO)‖ ≤ C |λ̂_K|^{−1} γ_j ≤ C (n β_n)^{−1} γ_j. The claim of Theorem 1 follows immediately by plugging in the value of γ_j given above.

5. Summary and Discussion

The topic model imposes a “low-rank plus noise” structure on the data matrix. However, the noise is not simply additive; rather, it consists of centered multinomial random vectors. The eigenvector analysis in a topic model is more complex than standard eigenvector analysis for random matrices. Firstly, the entries of the data matrix are weakly dependent, making techniques such as leave-one-out inapplicable. Secondly, due to the significant word frequency heterogeneity in natural languages, entry-wise eigenvector analysis becomes much more nuanced, as different entries of the same eigenvector have significantly different bounds. Additionally, the data exhibit Bernstein-type tails, precluding the use of random matrix theory tools that assume sub-exponential entries. While we build on the analysis in [4], we address these challenges with new proof ideas. Our results provide the most precise eigenvector analysis and rate-optimality results for topic modeling, to the best of our knowledge.
A related but more ambitious goal is obtaining higher-order expansions of the empirical singular vectors. Since the random matrix under study in the topic model is of the Wishart type, we can possibly borrow techniques from [31] to study the joint distribution of empirical singular values and singular vectors. In this paper, we assume the number of topics, K, is finite, but our analysis can be easily extended to the scenario of a growing K (e.g., K = O(log(n))). We assume min{p, N} ≥ log³(n). When p < log³(n), the problem becomes a low-dimensional eigenvector analysis, which is easy to tackle. When N < log³(n), we are in the extremely-short-document case (i.e., each document has only a finite length, say fewer than 20 words, as with tweets). We leave this to future work.

Author Contributions

Z.T.K. and J.W. developed the method and theory and wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation CAREER grant DMS-1943902.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Preliminary Lemmas and Theorems

In this section, we collect the preliminary lemmas and theorems that will be used in the entry-wise eigenvector analysis. Under Assumption 3, N_i ≍ N̄ ≍ N. Therefore, throughout this section and the subsequent sections, we assume N̄ = N without loss of generality.
The first lemma estimates the entries of M_0, relates them to the underlying frequency parameters h_j, and provides a large-deviation bound for the normalization matrix M.
Lemma A1
(Lemmas D.1 & E.1 in [4]). Recall the definitions M = diag( n^{−1} Σ_{i=1}^n (N/N_i) d_i ), M_0 = diag( n^{−1} Σ_{i=1}^n (N/N_i) d_i^0 ), and h_j = Σ_{k=1}^K A_k(j) for 1 ≤ j ≤ p. Suppose the conditions of Theorem 1 hold. Then:
M_0(j,j) ≍ h_j, and |M(j,j) − M_0(j,j)| ≤ C √( h_j log(n) / (N n) ),
for some constant C > 0, with probability 1 − o(n^{−3}), simultaneously for all 1 ≤ j ≤ p. Furthermore, with probability 1 − o(n^{−3}),
‖M^{−1/2} M_0^{1/2} − I_p‖ ≤ C √( p log(n) / (N n) ).
Remark A1.
In this lemma and other subsequent lemmas, “with probability 1 o ( n 3 ) ” can always be replaced by “with probability 1 n C 1 ”, for an arbitrary constant C 1 > 0 . The small-probability events in these lemmas come from the Bernstein inequality or the matrix Bernstein inequality. These inequalities concern small-probability events associated with an arbitrary probability δ ( 0 , 1 ) , and the high-probability bounds depend on log ( 1 / δ ) . When δ = n C 1 , log ( 1 / δ ) = C 1 log ( n ) . Therefore, changing C 1 only changes the high-probability bound by a constant. Without loss of generality, we take C 1 = 4 for convenience.
The proof of the first statement is quite similar to the proof detailed in the supplementary materials of [4]. The only difference is the presence of the additional factor N/N_i. Thanks to the condition that the N_i’s are of the same order, it is not hard to see that M_0(j,j) ≍ n^{−1} Σ_{i=1}^n d_i^0(j), where the RHS is exactly the definition of M_0 in [4]. Thus, the first statement follows under Assumption 2. To obtain the large-deviation bound, the following representation is crucial:
M(j,j) − M_0(j,j) = (1/n) Σ_{i=1}^n (N/N_i) [ d_i(j) − d_i^0(j) ] = (1/n) Σ_{i=1}^n (N/N_i²) Σ_{m=1}^{N_i} [ T_im(j) − d_i^0(j) ],
where {T_im}_{m=1}^{N_i} are i.i.d. Multinomial(1, d_i^0) with d_i^0 = A w_i. The RHS is a sum of independent random variables, which allows the application of the Bernstein inequality. The inequality (A1) is not provided in the supplementary materials of [4], but it follows easily from the first statement. We prove (A1) in detail below.
By definition, it suffices to show that:
| √( M_0(j,j) / M(j,j) ) − 1 | ≤ C √( p log(n) / (N n) )
simultaneously for all 1 ≤ j ≤ p. To this end, we derive:
| √( M_0(j,j) / M(j,j) ) − 1 | ≤ |M_0(j,j) − M(j,j)| / [ √(M(j,j)) ( √(M_0(j,j)) + √(M(j,j)) ) ].
Using the large-deviation bound |M(j,j) − M_0(j,j)| ≤ C √( h_j log(n) / (N n) ) = o(h_j) and the estimate M_0(j,j) ≍ h_j, we bound the denominator by:
√(M(j,j)) ( √(M_0(j,j)) + √(M(j,j)) ) ≥ √( C h_j − o(h_j) ) ( √(h_j) + √( h_j − o(h_j) ) ) ≥ C h_j,
with probability 1 − o(n^{−3}), simultaneously for all 1 ≤ j ≤ p. Consequently:
| √( M_0(j,j) / M(j,j) ) − 1 | ≤ C √( log(n) / (N n h_j) ) ≤ C √( p log(n) / (N n) ),
where the last step is due to h_j ≥ h_min ≥ C/p. This completes the proof of (A1).
The next Lemma presents the eigen-properties of the population data matrix.
Lemma A2
(Lemmas F.2, F.3, and D.3 in [4]). Suppose the conditions of Theorem 1 hold. Let G_0 be as in (13). Denote by λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_K the non-zero eigenvalues of G_0. There exists a constant C > 1 such that:
C^{−1} n β_n ≤ λ_k ≤ C n, for 2 ≤ k ≤ K, and λ_1 ≥ C^{−1} n + max_{2≤k≤K} λ_k.
Furthermore, let ξ_1, ξ_2, …, ξ_K be the associated eigenvectors of G_0. Then:
C^{−1} √(h_j) ≤ ξ_1(j) ≤ C √(h_j), ‖e_j^⊤ Ξ‖ ≤ C √(h_j).
The above lemma can be proved in the same manner as the corresponding results in the supplementary materials of [4]. Given our more general condition on Σ_A, which allows its smallest eigenvalue to converge to 0 as n → ∞, the results on the eigenvalues are slightly different. In our setting, only the largest eigenvalue is of order n, and it is well-separated from the others because the leading eigenvalue of n^{−1}G_0 has multiplicity one, which can be shown using Perron’s theorem and the last inequality in Assumption 2. The other eigenvalues may only be of order n β_n, with β_n as in Assumption 2. The details are very similar to those in the supplementary materials of [4] after adapting our relaxed condition on Σ_A, so we omit the redundant derivations here.
Throughout the analysis, we need the matrix Bernstein inequality and the decoupling inequality for U-statistics. For the readers’ convenience, we state these theorems below.
Theorem A1.
Let X_1, …, X_N be independent, mean-zero, n×n symmetric random matrices such that ‖X_i‖ ≤ b almost surely for all i and ‖Σ_{i=1}^N E[X_i²]‖ ≤ σ². Then, for every t ≥ 0, we have:
P( ‖Σ_{i=1}^N X_i‖ ≥ t ) ≤ 2n · exp( −(t²/2) / (σ² + b t/3) ).
The following two theorems are special cases of Theorem 3.4.1 in [29]; they show that the decoupling inequality reduces the analysis of U-statistics to the study of sums of (conditionally) independent random variables.
Theorem A2.
Let {X_i}_{i=1}^n be a sequence of i.i.d. random vectors in R^d, and let {X̃_i}_{i=1}^n be an independent copy of {X_i}_{i=1}^n. Then, there exists a constant C̃ > 0, independent of n and d, such that:
P( |Σ_{i≠j} X_i^⊤ X_j| ≥ t ) ≤ C̃ P( C̃ |Σ_{i≠j} X_i^⊤ X̃_j| ≥ t ).
Theorem A3.
Let {X_m^{(i)}}_{i,m}, for 1 ≤ i ≤ n and 1 ≤ m ≤ N, be a sequence of i.i.d. random vectors in R^d, and let {X̃_m^{(i)}}_{i,m} be an independent copy of {X_m^{(i)}}_{i,m}. Suppose that h: R^{2d} → R is a measurable function. Then, there exists a constant C̄ > 0, independent of n, N, and d, such that:
P( |Σ_i Σ_{m≠m_1} h( X_m^{(i)}, X_{m_1}^{(i)} )| ≥ t ) ≤ C̄ P( C̄ |Σ_i Σ_{m≠m_1} h( X_m^{(i)}, X̃_{m_1}^{(i)} )| ≥ t ).
The key difference between the above two theorems lies in the index set used in the sum. In Theorem A2, the random variables are indexed by i and all pairs (X_i, X_j) are included; in contrast, Theorem A3 uses both i and m and considers only the pairs that share the same index i. Both, however, are special cases of Theorem 3.4.1 (with degree 2) in [29], which concerns a broader family of functions {h_{ij}(·,·)}_{i,j}, where each h_{ij}(·,·) can differ across i, j. By taking all h_{ij}(·,·) to be the same product function, we obtain Theorem A2, whereas Theorem A3 follows from specifying:
h_{(i m)(j m_1)}(·,·) = h(·,·) if i = j, and h_{(i m)(j m_1)}(·,·) = 0 otherwise.

Appendix B. Proofs of Lemmas 1 and 2

Appendix B.1. Proof of Lemma 1

Using the definition of eigenvectors and eigenvalues, we have GΞ̂ = Ξ̂Λ̂ and G_0Ξ = ΞΛ. Additionally, since G_0 has rank K, G_0 = ΞΛΞ^⊤. It follows that:
Ξ̂Λ̂ = [G_0 + (G − G_0)]Ξ̂ = ΞΛΞ^⊤Ξ̂ + (G − G_0)Ξ̂ = ΞΞ^⊤G_0Ξ̂ + (G − G_0)Ξ̂.
As a result:
e_j^⊤Ξ̂ = e_j^⊤ΞΞ^⊤G_0Ξ̂Λ̂^{−1} + e_j^⊤(G − G_0)Ξ̂Λ̂^{−1}.
Note that G_0Ξ̂ = GΞ̂ + (G_0 − G)Ξ̂ = Ξ̂Λ̂ + (G_0 − G)Ξ̂. We plug this equality into the first term on the RHS of (A2) to obtain:
e_j^⊤ΞΞ^⊤G_0Ξ̂Λ̂^{−1} = e_j^⊤ΞΞ^⊤Ξ̂ + e_j^⊤ΞΞ^⊤(G_0 − G)Ξ̂Λ̂^{−1} = e_j^⊤ΞO + e_j^⊤Ξ(Ξ^⊤Ξ̂ − O) + e_j^⊤ΞΞ^⊤(G_0 − G)Ξ̂Λ̂^{−1},
for any orthogonal matrix O. Combining this with (A2) gives:
‖e_j^⊤(Ξ̂ − ΞO)‖ ≤ ‖e_j^⊤Ξ(Ξ^⊤Ξ̂ − O)‖ + ‖e_j^⊤ΞΞ^⊤(G_0 − G)Ξ̂Λ̂^{−1}‖ + ‖e_j^⊤(G − G_0)Ξ̂Λ̂^{−1}‖.
Fix O = sgn(Ξ̂^⊤Ξ). The sine-theta theorem [18] yields:
‖Ξ^⊤Ξ̂ − O‖ ≤ |λ̂_K|^{−2} ‖G − G_0‖².
We use (A4) to bound the first two terms on the RHS of (A3):
‖e_j^⊤Ξ(Ξ^⊤Ξ̂ − O)‖ ≤ ‖e_j^⊤Ξ‖·‖Ξ^⊤Ξ̂ − O‖ ≤ ‖e_j^⊤Ξ‖ · |λ̂_K|^{−2}‖G − G_0‖², ‖e_j^⊤ΞΞ^⊤(G_0 − G)Ξ̂Λ̂^{−1}‖ ≤ ‖e_j^⊤Ξ‖ · |λ̂_K|^{−1}‖Ξ^⊤(G_0 − G)Ξ̂‖ ≤ ‖e_j^⊤Ξ‖ · |λ̂_K|^{−1}‖G − G_0‖.
Since ‖G − G_0‖ ≤ (1 − c_0)|λ̂_K|, the RHS in the second line above dominates the RHS in the first line. We plug these upper bounds into (A3) to get:
‖e_j^⊤(Ξ̂ − ΞO)‖ ≤ |λ̂_K|^{−1}‖e_j^⊤Ξ‖·‖G − G_0‖ + ‖e_j^⊤(G − G_0)Ξ̂Λ̂^{−1}‖ ≤ |λ̂_K|^{−1}( ‖e_j^⊤Ξ‖·‖G − G_0‖ + ‖e_j^⊤(G − G_0)Ξ̂‖ ).
We notice that the second term on the RHS of (A5) still involves Ξ̂, and we further bound this term. By the assumption of the lemma, there exists a diagonal matrix Γ such that ‖Γ^{−1}(G − G_0)Γ‖_1 ≤ (1 − c_0)|λ̂_K|. It implies:
‖e_j^⊤(G − G_0)Γ‖_1 ≤ (1 − c_0) γ_j |λ̂_K|.
Additionally, for any vector v ∈ R^p and matrix B ∈ R^{p×K}, it holds that ‖v^⊤B‖ ≤ Σ_ℓ |v_ℓ|·‖e_ℓ^⊤B‖ ≤ Σ_ℓ |v_ℓ|·‖B‖_{2→∞} = ‖v‖_1·‖B‖_{2→∞}. We then bound the second term on the RHS of (A5) as follows:
‖e_j^⊤(G − G_0)Ξ̂‖ ≤ ‖e_j^⊤(G − G_0)ΞO‖ + ‖e_j^⊤(G − G_0)(Ξ̂ − ΞO)‖ ≤ ‖e_j^⊤(G − G_0)Ξ‖ + ‖e_j^⊤(G − G_0)Γ‖_1 · ‖Γ^{−1}(Ξ̂ − ΞO)‖_{2→∞} ≤ ‖e_j^⊤(G − G_0)Ξ‖ + (1 − c_0) γ_j |λ̂_K| · ‖Γ^{−1}(Ξ̂ − ΞO)‖_{2→∞}.
Plugging (A6) into (A5) gives:
‖e_j^⊤(Ξ̂ − ΞO)‖ ≤ |λ̂_K|^{−1}[ ‖e_j^⊤Ξ‖·‖G − G_0‖ + ‖e_j^⊤(G − G_0)Ξ‖ + (1 − c_0) γ_j |λ̂_K| · ‖Γ^{−1}(Ξ̂ − ΞO)‖_{2→∞} ] ≤ |λ̂_K|^{−1} γ_j + (1 − c_0) γ_j · ‖Γ^{−1}(Ξ̂ − ΞO)‖_{2→∞},
where in the last line we have used the assumption that γ_j is an upper bound for ‖e_j^⊤Ξ‖·‖G − G_0‖ + ‖e_j^⊤(G − G_0)Ξ‖. Note that ‖Γ^{−1}(Ξ̂ − ΞO)‖_{2→∞} = max_{1≤j≤p} γ_j^{−1}‖e_j^⊤(Ξ̂ − ΞO)‖. We multiply both the LHS and RHS of (A7) by γ_j^{−1} and take the maximum over j. It gives:
‖Γ^{−1}(Ξ̂ − ΞO)‖_{2→∞} ≤ |λ̂_K|^{−1} + (1 − c_0)‖Γ^{−1}(Ξ̂ − ΞO)‖_{2→∞},
or equivalently, ‖Γ^{−1}(Ξ̂ − ΞO)‖_{2→∞} ≤ c_0^{−1}|λ̂_K|^{−1}. We further plug this inequality into (A7) to obtain:
‖e_j^⊤(Ξ̂ − ΞO)‖ ≤ |λ̂_K|^{−1} γ_j + (1 − c_0) · c_0^{−1} |λ̂_K|^{−1} γ_j ≤ c_0^{−1} |λ̂_K|^{−1} γ_j.
This proves the claim. □

Appendix B.2. Proof of Lemma 2

The first claim is the same as the one in Lemma A3 and will be proved there.
The second claim follows by collecting arguments from the proof of Lemma A3, as shown below. By (A24), G − G_0 = E_1 + E_2 + E_3 + E_4. It follows that:
‖e_j^⊤(G − G_0)Ξ‖ ≤ Σ_{s=1}^4 ‖e_j^⊤E_sΞ‖.
We apply Lemma A5 to get large-deviation bounds for ‖e_j^⊤E_sΞ‖ with s ∈ {2,3,4}. This lemma concerns ‖e_j^⊤E_sΞ̂‖, but in its proof we have already analyzed ‖e_j^⊤E_sΞ‖. In particular, ‖e_j^⊤E_2Ξ‖ and ‖e_j^⊤E_3Ξ‖ have the same bounds as in (A29), and the bound for ‖e_j^⊤E_4Ξ‖ only has the first term in (A30). In summary:
‖e_j^⊤E_sΞ‖ ≤ C √( h_j n p log(n) / N ), for s ∈ {2, 3, 4}.
It remains to bound e j E 1 Ξ . We first mimic the steps of proving (A33) of Lemma A5 (more specifically, the derivation of (A63), except that Ξ ^ is replaced by Ξ ) to obtain:
e j E 1 Ξ C n e j ( M 0 1 / 2 M 1 / 2 I p ) Ξ + C e j G 0 ( M 0 1 / 2 M 1 / 2 I p ) Ξ + s = 2 4 e j E s ( M 0 1 / 2 M 1 / 2 I p ) Ξ .
We note that:
e j ( M 0 1 / 2 M 1 / 2 I p ) Ξ M 0 1 / 2 M 1 / 2 I p · e j Ξ , e j G 0 ( M 0 1 / 2 M 1 / 2 I p ) Ξ = e j Ξ Λ Ξ ( M 0 1 / 2 M 1 / 2 I p ) Ξ e j Ξ · Λ · M 0 1 / 2 M 1 / 2 I p , e j E s ( M 0 1 / 2 M 1 / 2 I p ) Ξ e j E s · M 0 1 / 2 M 1 / 2 I p .
For s { 2 , 3 } , we have e j E s C h j p log ( n ) / ( N n ) . This has been derived in the proof of Lemma A5: when controlling e j E 2 Ξ and e j E 3 Ξ there, we first bound them by e j E 2 and e j E 3 , respectively, and then study e j E 2 and e j E 3 directly). We plug these results into (A12) to obtain:
e j E 1 Ξ M 0 1 / 2 M 1 / 2 I p n e j Ξ + | λ 1 | e j Ξ + C h j n p log ( n ) N + e j E 4 ( M 0 1 / 2 M 1 / 2 I p ) Ξ .
For ‖e_j^⊤E_4(M_0^{1/2}M^{−1/2} − I_p)Ξ‖, we cannot use the same idea as for s ∈ {2,3}, because the bound for ‖e_j^⊤E_4‖ is much larger than those for ‖e_j^⊤E_2‖ and ‖e_j^⊤E_3‖. Instead, we study ‖e_j^⊤E_4(M_0^{1/2}M^{−1/2} − I_p)Ξ‖ directly. This part is contained in the proof of Lemma A6, specifically in the proof of (A31), where we have shown:
e j E 4 ( M 0 1 / 2 M 1 / 2 I p ) Ξ C h j · p log ( n ) N .
We plug (A14) into (A13) and note that λ 1 = O ( n ) and e j Ξ = O ( h j 1 / 2 ) (by Lemma A2). We also use the assumption that N n N n β n 2 p log 2 ( n ) and the bound for M 0 1 / 2 M 1 / 2 I p in (A1). It follows that
e j E 1 Ξ M 0 1 / 2 M 1 / 2 I p · C h j n + n p log ( n ) N + p log ( n ) N M 0 1 / 2 M 1 / 2 I p · O ( n h j 1 / 2 ) C h j n p log ( n ) N .
We plug (A11) and (A15) into (A10). This proves the second claim. □

Appendix C. The Complete Proof of Theorem 1

A proof sketch of Theorem 1 has been given in Section 4.5. For ease of writing the formal proofs, we have re-arranged the claims and analyses in Lemmas 1 and 2, so the proof structure here is slightly different from the sketch in Section 4.5. For example, Lemma A3 combines the claims of Lemma 2 with some steps in proving Lemma 1; the remaining steps in the proof of Lemma 1 are combined into the proof of the main theorem.
First, we present a key technical lemma. The proof of this lemma is quite involved and relegated to Appendix D.1.
Lemma A3.
Under the setting of Theorem 1, recall G , G 0 in (13). With probability 1 o ( n 3 ) :
(A16) G G 0 C p n log ( n ) N n β n ;
(A17) e j ( G G 0 ) Ξ ^ / n C h j p log ( n ) n N 1 + H 1 2 ( Ξ ^ Ξ O ) 2 + o ( β n ) · e j ( Ξ ^ Ξ O ) ,
simultaneously for all 1 j p .
Next, we use Lemma A3 to prove Theorem 1. Let ( λ ^ k , ξ ^ k ) and ( λ k , ξ k ) be the k-th eigen-pairs of G and G 0 , respectively. Let Λ ^ = diag ( λ ^ 1 , λ ^ 2 , , λ ^ K ) and Λ = diag ( λ 1 , λ 2 , , λ K ) . Following (A2) and (A3), we have:
e j ( Ξ ^ Ξ O ) e j Ξ ( Ξ Ξ ^ O ) + e j Ξ Ξ ( G 0 G ) Ξ ^ Λ ^ 1 + e j ( G G 0 ) Ξ ^ Λ ^ 1 .
In the sequel, we bound the three terms on the RHS above one-by-one.
First, by the sine-theta theorem:
e j Ξ ( Ξ Ξ ^ O ) C e j Ξ G G 0 2 | λ ^ K λ K + 1 | 2 .
For 1 k p , by Weyl’s inequality:
| λ ^ k λ k | G G 0 n β n
with probability 1 o ( n 3 ) , by employing (A16) in Lemma A3. In particular, λ 1 n and C n β n < λ k C n for 2 k K and λ k = 0 otherwise (see Lemma A2). Thereby, | λ ^ K λ K + 1 | C n β n . Further using e j Ξ C h j (see Lemma A2), with the aid of Lemma A3, we obtain that with probability 1 o ( n 3 ) :
e j Ξ ( Ξ Ξ ^ O ) C h j · p log ( n ) N n β n 2
simultaneously for all 1 j p .
Next, we similarly bound the second term:
e j Ξ Ξ ( G 0 G ) Ξ ^ Λ ^ 1 C n β n e j Ξ G G 0 C h j p log ( n ) N n β n 2 .
Here we used the fact that λ ^ K C n β n , which follows from (A19) and Lemma A2.
For the last term, we simply bound:
e j ( G G 0 ) Ξ ^ Λ ^ 1 C e j ( G G 0 ) Ξ ^ / ( n β n ) .
Combining (A20), (A21), and (A22) into (A18), by (A17) in Lemma A3, we arrive at:
e j ( Ξ ^ Ξ O ) C h j p log ( n ) N n β n 2 1 + H 1 2 ( Ξ ^ Ξ O ) 2 + o ( 1 ) · e j ( Ξ ^ Ξ O ) .
Rearranging both sides above gives:
e j ( Ξ ^ Ξ O ) C h j p log ( n ) N n β n 2 1 + H 1 2 ( Ξ ^ Ξ O ) 2 ,
with probability 1 o ( n 3 ) , simultaneously for all 1 j p .
To proceed, we multiply both sides in (A23) by h j 1 / 2 and take the maximum. It follows that:
H 1 2 ( Ξ ^ Ξ O ) 2 C p log ( n ) N n β n 2 1 + H 0 1 2 ( Ξ ^ Ξ O ) 2 .
Note that p log ( n ) / N n β n 2 = o ( 1 ) from Assumption 3. We further rearrange both sides above and get:
H 1 2 ( Ξ ^ Ξ O ) 2 p log ( n ) N n β n 2 = o ( 1 ) .
Plugging the above estimate into (A23), we finally conclude the proof of Theorem 1.□

Appendix D. Entry-Wise Eigenvector Analysis and Proof of Lemma A3

To finalize the proof of Theorem 1 as outlined in Appendix C, the remaining task is to prove Lemma A3.
Recall the definition in (13) that:
G = M 1 2 D D M 1 2 n N I p , G 0 = M 0 1 2 i = 1 n ( 1 N i 1 ) d i 0 ( d i 0 ) M 0 1 2 .
Write D = D 0 + Z , where Z = ( z 1 , z 2 , , z n ) is a mean-zero random matrix with each N i z i being a centered Multinomial ( N i , A w i ) vector. By this representation, we decompose the perturbation matrix G G 0 as follows:
G G 0 = M 1 2 D D M 1 2 M 0 1 2 D D M 0 1 2 + M 0 1 2 D D i = 1 n ( 1 N i 1 ) d i 0 ( d i 0 ) n N M 0 M 0 1 2 = ( M 1 2 D D M 1 2 M 0 1 2 D D M 0 1 2 ) + M 0 1 2 Z D 0 M 0 1 2 + M 0 1 2 D 0 Z M 0 1 2 + M 0 1 2 ( Z Z E Z Z ) M 0 1 2 = E 1 + E 2 + E 3 + E 4 ,
where:
E 1 : = M 1 2 D D M 1 2 M 0 1 2 D D M 0 1 2 , E 2 : = M 0 1 2 Z D 0 M 0 1 2 , E 3 : = M 0 1 2 D 0 Z M 0 1 2 E 4 : = M 0 1 2 ( Z Z E Z Z ) M 0 1 2 .
Here the second step of (A24) is due to the identity:
E ( Z Z ) + i = 1 n N i 1 d i 0 ( d i 0 ) n N M 0 = 0 ,
which can be obtained by:
E ( Z Z ) = i = 1 n E z i z i = i = 1 n N i 2 m , s = 1 N i E ( T i m E T i m ) ( T i s E T i s ) ,
with { T i m } m = 1 N being i.i.d. Multinomial ( 1 , A w i ) .
Throughout the analysis in this section, we will frequently rewrite and use:
z i = 1 N i m = 1 N i T i m E T i m
as it introduces the sum of independent random variables. We use the notation d i 0 : = E d i = E T i m = A w i for simplicity.
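To see this representation in action, the following minimal numerical sketch (the dimensions, document length, and random seed are illustrative choices of ours, not values from the paper) checks that z_i = d_i - d_i^0, as an average of N_i centered one-hot draws, has covariance (diag(d_i^0) - d_i^0 (d_i^0)^T)/N_i, which is the computation behind (A25):

```python
import numpy as np

# Minimal sanity check (illustrative sizes, not from the paper): the centered
# word frequencies z_i = d_i - d_i^0 average N_i centered one-hot draws T_im,
# so Cov(z_i) = (diag(d_i^0) - d_i^0 (d_i^0)^T) / N_i, which underlies (A25).
rng = np.random.default_rng(0)
p, N_i, n_rep = 6, 50, 200_000
d0 = rng.dirichlet(np.ones(p))             # a population word-frequency vector

X = rng.multinomial(N_i, d0, size=n_rep)   # word counts of n_rep synthetic documents
Z = X / N_i - d0                           # the centered frequencies z_i
emp_cov = Z.T @ Z / n_rep
theo_cov = (np.diag(d0) - np.outer(d0, d0)) / N_i
print(np.abs(emp_cov - theo_cov).max())    # small Monte Carlo error
```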
By (A24), in order to prove Lemma A3, it suffices to study:
E s and e j E s Ξ ^ / n , for s = 1 , 2 , 3 , 4 and 1 j p .
The estimates for the aforementioned quantities are provided in the following technical lemmas, whose proofs are deferred to later sections.
Lemma A4.
Suppose the conditions in Theorem 1 hold. There exists a constant C > 0 , such that with probability 1 o ( n 3 ) :
(A27) E s C p n log ( n ) N , for s = 1 , 2 , 3
(A28) E 4 = M 0 1 2 ( Z Z E Z Z ) M 0 1 2 C max p n log ( n ) N 2 , p log ( n ) N .
Lemma A5.
Suppose the conditions in Theorem 1 hold. There exists a constant C > 0 , such that with probability 1 o ( n 3 ) , simultaneously for all 1 j p :
(A29) e j E s Ξ ^ / n C h j p log ( n ) N n , for s = 2 , 3
(A30) e j E 4 Ξ ^ / n C h j p log ( n ) N n 1 + H 0 1 2 ( Ξ ^ Ξ O ) 2 ,
with O = sgn ( Ξ ^ Ξ ) .
Lemma A6.
Suppose the conditions in Theorem 1 hold. There exists a constant C > 0 , such that with probability 1 o ( n 3 ) , simultaneously for all 1 j p :
(A31) e j E 4 ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / n C h j · p log ( n ) n N 1 + H 1 2 ( Ξ ^ Ξ O ) 2 ,
(A32) e j M 1 / 2 M 0 1 / 2 I p Ξ ^ C log ( n ) N n + o ( β n ) · e j ( Ξ ^ Ξ O ) ;
and furthermore:
e j E 1 Ξ ^ / n C h j p log ( n ) N n 1 + H 0 1 2 ( Ξ ^ Ξ O ) 2 + o ( β n ) · e j ( Ξ ^ Ξ O ) .
For proving Lemmas A4 and A5, the difficulty lies in showing (A28) and (A30), as the quantity E 4 involves quadratic terms in Z together with its dependence on Ξ ^ . We overcome this hurdle by decomposing Ξ ^ = Ξ + Ξ ^ Ξ O and employing decoupling techniques (Theorems A2 and A3). Considering the expression of E 1 , where D D is involved, the proof of (A33) in Lemma A6 relies heavily on the estimates in Lemma A5, together with (A31) and (A32). The detailed proofs are systematically presented in the subsequent sections, following the proof of Lemma A3.
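The decoupling step can be visualized with a toy simulation. The sketch below is purely illustrative and not part of the proof: it compares the tail of an off-diagonal quadratic form with that of its decoupled version built from an independent copy, which is the type of comparison that the decoupling inequalities in Theorems A2 and A3 make rigorous; the Rademacher variables and the threshold are our own choices.

```python
import numpy as np

# Toy decoupling illustration: sum_{m != m'} x_m x_{m'} versus its decoupled
# counterpart sum_{m != m'} x_m y_{m'}, with y an independent copy of x.
rng = np.random.default_rng(1)
N, reps = 40, 100_000
x = rng.choice([-1.0, 1.0], size=(reps, N))
y = rng.choice([-1.0, 1.0], size=(reps, N))

coupled = x.sum(axis=1) ** 2 - N                                # diagonal terms removed
decoupled = x.sum(axis=1) * y.sum(axis=1) - (x * y).sum(axis=1)

t = 3 * N
print((np.abs(coupled) > t).mean(), (np.abs(decoupled) > t).mean())
# The two tail frequencies are of comparable order, as the decoupling
# inequalities guarantee up to absolute constants.
```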

Appendix D.1. Proof of Lemma A3

We employ the technical lemmas (Lemmas A4–A6) to prove Lemma A3. We start with (A16). By the representation (A24), it is straightforward to obtain that:
G G 0 s = 1 4 E s C p n log ( n ) N + C max p n log ( n ) N 2 , p log ( n ) N
for some constant C > 0 , with probability 1 o ( n 3 ) . Under Assumption 3, it follows that:
p n log ( n ) N 2 p n log ( n ) N , p log ( n ) N = p n log ( n ) N · p log ( n ) N n p n log ( n ) N
and:
p n log ( n ) N = n · p log ( n ) N n n .
Therefore, we complete the proof of (A16).
Next, we show (A17). Similarly, using (A27), (A30), and (A33), we have:
e j ( G G 0 ) Ξ ^ / n s = 1 4 e j E s Ξ ^ / n C h j p log ( n ) N n 1 + H 0 1 2 ( Ξ ^ Ξ O ) 2 + o ( β n ) · e j ( Ξ ^ Ξ O ) .
This concludes the proof of Lemma A3.□

Appendix D.2. Proof of Lemma A4

We examine each E i for i = 1 , 2 , 3 , 4 . We start with the easy one, E 2 . Recall D 0 = A W . We denote by W k the k-th row of W and rewrite W = ( W 1 , , W K ) . Similarly, we use Z j , 1 j p to denote j-th row of Z. Thereby, Z = ( z 1 , z 2 , , z n ) = ( Z 1 , Z 2 , , Z p ) . By the definition that E 2 = M 0 1 / 2 Z D 0 M 0 1 / 2 , we have:
E 2 = M 0 1 / 2 Z W A M 0 1 / 2 = k = 1 K M 0 1 / 2 Z W k · A k M 0 1 / 2 k = 1 K M 0 1 / 2 Z W k · A k M 0 1 / 2 .
We analyze each factor in the summand:
M 0 1 / 2 Z W k 2 = j = 1 p 1 M 0 ( j , j ) ( Z j W k ) 2 , A k M 0 1 / 2 A k H 1 A k 1 / 2 C ,
where we used the fact that A k ( j ) h j for 1 j p . Hence, what remains is to prove a high-probability bound for each Z j W k . By the representation (A26):
Z j W k = i = 1 n z i ( j ) w i ( k ) = i = 1 n m = 1 N i N i 1 w i ( k ) T i m ( j ) d i 0 ( j ) .
We then apply Bernstein inequality to the RHS above. By straightforward computations:
var ( Z j W k ) = i = 1 n m = 1 N i N i 2 w i ( k ) 2 E T i m ( j ) d i 0 ( j ) 2 i = 1 n N i 1 w i ( k ) 2 d i 0 ( j ) h j n N ,
and the individual bound for each summand is C / N . Then, one can conclude from Bernstein inequality that with probability 1 o ( n 3 c 0 ) :
| Z j W k | C n h j log ( n ) / N + log ( n ) / N .
As a result, considering all 1 j p , under p n c 0 C from Assumption 3, we have:
M 0 1 2 Z W k 2 C j = 1 p h j 1 · n h j log ( n ) N + log ( n ) 2 N 2 C n p log ( n ) N
with probability 1 o ( n 3 ) . Here, in the first step, we used M 0 ( j , j ) h j ; the last step is due to the conditions h j h min C / p and p log ( n ) N n . Plugging (A37) and (A35) into (A34) gives:
E 2 C n p log ( n ) N .
Furthermore, by definition, E 3 = E 2 and E 3 = E 2 . Therefore, we directly conclude the upper bound for E 3 .
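Throughout this appendix, Bernstein-type inequalities such as the one that produced (A36) are converted into two-term deviation radii: a variance term plus an individual-bound term. As a minimal sketch, assuming the standard scalar Bernstein tail P(|S| > t) <= 2 exp(-t^2/(2*sigma^2 + 2*b*t/3)) (the constants in Theorem A1 and in the displays above may differ), the helper below shows how the two terms arise; the numbers in the example call are illustrative only.

```python
import math

def bernstein_radius(sigma2: float, b: float, delta: float) -> float:
    """Deviation level t ensuring 2*exp(-t^2 / (2*sigma2 + 2*b*t/3)) <= delta,
    via the standard sufficient choice sqrt(2*sigma2*L) + (2*b/3)*L, L = log(2/delta)."""
    L = math.log(2.0 / delta)
    return math.sqrt(2.0 * sigma2 * L) + (2.0 * b / 3.0) * L

# Illustrative call: a variance proxy and an individual bound give a
# "sqrt(variance x log)" term plus an "(individual bound) x log" term,
# which is the shape of bounds such as (A36).
print(bernstein_radius(sigma2=1e-4, b=1e-3, delta=1e-9))
```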
Next, we study E 4 and prove (A28). Notice that M 0 ( j , j ) h j for all 1 j p . It suffices to prove:
H 1 2 ( Z Z E Z Z ) H 1 2 C max p n log ( n ) N 2 , p log ( n ) N .
We prove (A39) by employing Matrix Bernstein inequality (i.e., Theorem A1) and decoupling techniques (i.e., Theorem A2). First, write:
H 1 2 ( Z Z E Z Z ) H 1 2 = i = 1 n ( H 1 2 z i ) ( H 1 2 z i ) E ( H 1 2 z i ) ( H 1 2 z i ) = : n · i = 1 n 1 n z ˜ i z ˜ i E z ˜ i z ˜ i = : n · i = 1 n X i
In order to get a sharp bound, we employ a truncation argument by introducing:
X ˜ i : = 1 n z ˜ i z ˜ i 1 E i E z ˜ i z ˜ i 1 E i , where E i : = { z ˜ i z ˜ i C p / N } ,
for some sufficiently large C > 0 that depends on C 0 (see Assumption 3), where 1 E i denotes the indicator function of the event E i . We then have:
n i = 1 n X i = n i = 1 n X ˜ i i = 1 n E ( z ˜ i z ˜ i 1 E i c )
under the event i = 1 n E i . We will prove the large-deviation bound of H 1 2 ( Z Z E Z Z ) H 1 2 in the following steps.
(a)
We claim that:
P ( i = 1 n E i ) 1 i = 1 n P ( E i c ) = 1 o ( n ( 2 C 0 + 3 ) ) .
(b)
We claim that under the event i = 1 n E i :
n i = 1 n X i n i = 1 n X ˜ i = o ( n ( C 0 + 1 ) ) .
(c)
We aim to derive a high probability bound of n i = 1 n X ˜ i by Matrix Bernstein inequality (i.e., Theorem A1). We show that with probability 1 o ( n 3 ) , for some large C > 0 :
i = 1 n X ˜ i C max p log ( n ) n N 2 , p log ( n ) n N .
Once (a)–(c) are established, with the condition that N < C n C 0 from Assumption 3, it is straightforward to conclude that:
H 1 2 ( Z Z E Z Z ) H 1 2 = n i = 1 n X ˜ i + o ( n C 0 ) C max p n log ( n ) N 2 , p log ( n ) N ,
with probability 1 o ( n 3 ) . This gives (A28), except that we still need to verify (a)–(c).
In the sequel, we prove (a), (b) and (c) separately. To prove (a), it suffices to show that P ( E i c ) = o ( n ( 2 C 0 + 4 ) ) for all 1 i n . By definition, for any fixed i, N i z i is centered multinomial with N i trials. Therefore, we can represent:
z i = 1 N i m = 1 N i ( T i m E T i m ) , where the T i m 's are i . i . d . Multinomial ( 1 , d i 0 ) for fixed i .
Then it can be computed that:
E ( z ˜ i z ˜ i ) = E z i H 1 z i = 1 N i 2 m = 1 N i E ( T i m E T i m ) H 1 ( T i m E T i m ) = 1 N i 2 m = 1 N i t = 1 p E ( T i m ( t ) d i 0 ( t ) ) 2 h t 1 = 1 N i 2 m = 1 N i t = 1 p d i 0 ( t ) 1 d i 0 ( t ) h t 1 p N i .
We write:
z ˜ i z ˜ i E ( z ˜ i z ˜ i ) = z i H 1 z i E z i H 1 z i = I 1 + I 2 ,
where:
I 1 : = 1 N i 2 m 1 m 2 N i ( T i m 1 E T i m 1 ) H 1 ( T i m 2 E T i m 2 ) , I 2 : = 1 N i 2 m = 1 N i ( T i m E T i m ) H 1 ( T i m E T i m ) E ( T i m E T i m ) H 1 ( T i m E T i m ) .
First, we study I 1 . Let { T ˜ i m } m = 1 N be an independent copy of { T i m } m = 1 N and:
I ˜ 1 : = 1 N i 2 m 1 m 2 N i ( T i m 1 E T i m 1 ) H 1 ( T ˜ i m 2 E T ˜ i m 2 ) .
We apply Theorem A2 to I 1 and get:
P ( | I 1 | > t ) C P ( I ˜ 1 > C 1 t ) .
It suffices to obtain a large-deviation bound for I ˜ 1 instead. Rewrite:
I ˜ 1 = 1 N i m 1 N i ( T ˜ i m 1 E T ˜ i m 1 ) H 1 / 2 1 N i m = 1 N i H 1 / 2 ( T i m E T i m ) 1 N i 2 m = 1 N i ( T i m E T i m ) H 1 ( T ˜ i m E T ˜ i m ) = : T 1 + T 2 .
We derive the high-probability bound for T 1 first. For simplicity, write:
a = H 1 / 2 1 N i m = 1 N i ( T i m E T i m ) .
Then, T 1 = N i 1 m = 1 N i ( T ˜ i m E T ˜ i m ) H 1 / 2 a . We apply Bernstein inequality conditional on { T i m } m = 1 N i . By elementary computations:
var ( T 1 | { T i m } m = 1 N i ) = 1 N i 2 m = 1 N i E ( T ˜ i m E T ˜ i m ) H 1 / 2 a 2 | a = 1 N i j = 1 p d i 0 ( j ) a ( j ) / h j 1 / 2 ( d i 0 ) H 1 / 2 a 2 = 1 N i j = 1 p d i 0 ( j ) h j a 2 ( j ) 1 N i ( d i 0 ) H 1 / 2 a 2 a 2 / N i ,
where we used the fact that d i 0 ( j ) = e j A w i e j A 1 K = h j . Furthermore, with the individual bound N 1 max t { a ( t ) / h t } , we obtain from Bernstein inequality that with probability 1 o ( n ( 2 C 0 + 4 ) ) :
| T 1 | C log ( n ) N a + 1 N max t | a ( t ) | h t log ( n ) ,
by choosing an appropriately large C > 0 . We then apply Bernstein inequality to study a ( t ) and get:
| a ( t ) | C log ( n ) N + C log ( n ) N h min
simultaneously for all 1 t p , with probability 1 o ( n ( 2 C 0 + 4 ) ) . As a result, under the condition min { p , N } C 0 log ( n ) from Assumption 3, it holds that:
| T 1 | C log ( n ) N a + 1 N max t | a ( t ) | h t log ( n ) C p log ( n ) N log ( n ) N + C log ( n ) N h min + p N C p N .
We then proceed to the second term in (A45), T 2 = N i 2 m = 1 N i ( T i m E T i m ) H 1 ( T ˜ i m E T ˜ i m ) . Using Bernstein inequality, similarly to the above derivations, we get:
var ( T 2 ) = N i 4 m = 1 N i E ( T i m E T i m ) H 1 ( T ˜ i m E T ˜ i m ) 2 = N i 4 m = 1 N i E j = 1 p d i 0 ( j ) h j 2 ( T ˜ i m ( j ) E T ˜ i m ( j ) ) 2 ( d i 0 ) H 1 ( T ˜ i m E T ˜ i m ) 2 = N i 3 j = 1 p ( d i 0 ( j ) ) 2 ( 1 d i 0 ( j ) ) h j 2 j = 1 p d i 0 ( j ) d i 0 ( j ) h j ( d i 0 ) H 1 d i 0 2 = N i 3 j = 1 p ( d i 0 ( j ) ) 2 ( 1 2 d i 0 ( j ) ) h j 2 + ( d i 0 ) H 1 d i 0 2 < 2 p N 3 .
The individual bound is given by N 2 / h min . It follows from Bernstein inequality that:
T 2 C p log ( n ) N 3 + log ( n ) N 2 h min
with probability 1 o ( n ( 2 C 0 + 4 ) ) . Consequently, by plugging (A46) and (A47) into (A45) and using Assumption 3,
| I ˜ 1 | p N
with probability 1 o ( n ( 2 C 0 + 4 ) ) . By (A44), we get:
| I 1 | C log ( n ) N a + p N
with probability 1 o ( n ( 2 C 0 + 4 ) ) .
Second, we prove a similar bound for I 2 with:
I 2 = 1 N i 2 m = 1 N i ( T i m E T i m ) H 1 ( T i m E T i m ) E ( T i m E T i m ) H 1 ( T i m E T i m ) .
We compute the variance by:
var ( T i m E T i m ) H 1 ( T i m E T i m ) = E t h t 1 ( T i m ( t ) d i 0 ( t ) ) 2 2 E t h t 1 ( T i m ( t ) d i 0 ( t ) ) 2 2 t h t 2 d i 0 ( t ) ( 1 d i 0 ( t ) ) 4 + ( 1 d i 0 ( t ) ) d i 0 ( t ) 3 t h t 2 d i 0 ( t ) 2 ( 1 d i 0 ( t ) ) 2 t h t 1 p h min 1 .
This, together with the crude bound:
| ( T i m E T i m ) H 1 ( T i m E T i m ) E ( T i m E T i m ) H 1 ( T i m E T i m ) | C h min 1 ,
gives that with probability 1 o ( n ( 2 C 0 + 4 ) ) , for some sufficiently large C > 0 :
| I 2 | C max p log ( n ) N 3 h min , log ( n ) N 2 h min C p N ,
under Assumption 3. Combining (A49) and (A50) yields that:
z ˜ i z ˜ i = z i H 1 z i E z i H 1 z i + | I 1 | + | I 2 | C p N
with probability 1 o ( n ( 2 C 0 + 4 ) ) . Thus, we conclude the claim P ( E i c ) = o ( n ( 2 C 0 + 4 ) ) for all 1 i n . The proof of (a) is complete.
Next, we prove (b). Recall the second term on the RHS of (A40). Using the convexity of · and the trivial bound:
E | z ˜ i z ˜ i 1 E i c | P ( E i c ) z ˜ i z ˜ i max h min 1 P ( E i c ) ,
we get:
i = 1 n E ( z ˜ i z ˜ i 1 E i c ) i = 1 n E z ˜ i z ˜ i 1 E i c = i = 1 n E | z ˜ i z ˜ i 1 E i c | o ( n ( 2 C 0 + 4 ) ) n p = o ( n ( C 0 + 3 ) ) .
Here, in the last step, we used the fact that p n C 0 , which follows from the second condition in Assumption 3. This yields the estimate in (b).
Finally, we prove (c) by Matrix Bernstein inequality (i.e., Theorem A1). Toward that end, we need to derive upper bounds for X ˜ i and E X ˜ i 2 . By the definition of X ˜ i , that is:
X ˜ i : = 1 n z ˜ i z ˜ i 1 E i E z ˜ i z ˜ i 1 E i ,
we easily derive that:
X ˜ i 1 n | z ˜ i z ˜ i 1 E i | + E ( z ˜ i z ˜ i 1 E i ) 1 n | z ˜ i z ˜ i 1 E i | + E ( z ˜ i z ˜ i 1 E i c ) + E ( z ˜ i z ˜ i ) C p n N
for some large C > 0 , in which we used the estimate:
E ( z ˜ i z ˜ i ) = H 1 / 2 E ( z i z i ) H 1 / 2 N i 1 H 1 / 2 diag ( d i 0 ) d i 0 ( d i 0 ) H 1 / 2 N i 1 H 1 / 2 diag ( d i 0 ) H 1 / 2 + N i 1 | ( d i 0 ) H 1 d i 0 | 2 N .
By the above inequality, it also holds that:
E ( z ˜ i z ˜ i 1 E i ) E ( z ˜ i z ˜ i 1 E i c ) + E ( z ˜ i z ˜ i ) C N .
Moreover:
E X ˜ i 2 = n 2 E ( z ˜ i 2 z ˜ i z ˜ i 1 E i ) n 2 ( E z ˜ i z ˜ i 1 E i ) 2 p n 2 N E ( z ˜ i z ˜ i 1 E i ) + 1 n 2 E ( z ˜ i z ˜ i 1 E i ) 2 C p n 2 N 2 .
Since E X ˜ i = 0 , it follows from Theorem A1 that:
P i = 1 n X ˜ i t 2 n exp t 2 / 2 σ 2 + b t / 3 ,
with σ 2 = C p / ( n N 2 ) , b = C p / ( n N ) . As a result:
i = 1 n X ˜ i C max p log ( n ) n N 2 , p log ( n ) n N
with probability 1 o ( n 3 ) , for some large C > 0 . This finishes the proof of (c), and the proof of (A28) is now complete.
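As a quick sanity check on how (A43) follows from sigma^2 = C p/(n N^2) and b = C p/(n N), the sketch below evaluates the resulting matrix-Bernstein deviation radius for illustrative sizes; the sizes, the constant C = 1, the failure level, and the use of the matrix dimension inside the logarithm are our own choices, and they affect the bound only through constants and logarithmic factors.

```python
import math

def matrix_bernstein_radius(sigma2: float, b: float, dim: int, delta: float) -> float:
    """Radius t with 2*dim*exp(-t^2 / (2*sigma2 + 2*b*t/3)) <= delta, via the
    two-term form sqrt(2*sigma2*L) + (2*b/3)*L with L = log(2*dim/delta)."""
    L = math.log(2.0 * dim / delta)
    return math.sqrt(2.0 * sigma2 * L) + (2.0 * b / 3.0) * L

# Illustrative sizes (our assumption, not taken from the paper):
n, N, p = 10_000, 20, 2_000
sigma2, b = p / (n * N**2), p / (n * N)
t = matrix_bernstein_radius(sigma2, b, dim=p, delta=n ** -3.0)
rate = max(math.sqrt(p * math.log(n) / (n * N**2)), p * math.log(n) / (n * N))
print(t, rate)   # t matches the rate in (A43) up to constant and logarithmic factors
```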
Lastly, we prove E 1 C p n log ( n ) / N . By definition, we rewrite:
E 1 = ( M 1 / 2 M 0 1 / 2 ) M 0 1 / 2 D D M 0 1 / 2 ( M 1 / 2 M 0 1 / 2 I p ) + ( M 1 / 2 M 0 1 / 2 I p ) M 0 1 / 2 D D M 0 1 / 2 .
Decomposing D as D 0 + Z gives rise to:
M 0 1 2 D D M 0 1 2 = M 0 1 2 i = 1 n ( 1 N i 1 ) d i 0 ( d i 0 ) M 0 1 2 + n N I p + M 0 1 2 D 0 Z M 0 1 2 + M 0 1 2 Z D 0 M 0 1 2 + M 0 1 2 ( Z Z E Z Z ) M 0 1 2 = G 0 + n N I p + E 2 + E 3 + E 4
Applying Lemma A2, together with (A38) and (A39), we see that:
M 0 1 2 D D M 0 1 2 C n
Furthermore, it follows from Lemma A1 that:
M 1 / 2 M 0 1 / 2 I p C p log ( n ) N n , and M 1 / 2 M 0 1 / 2 = 1 + o ( 1 ) .
Combining the estimates above, we conclude that:
E 1 C p n log ( n ) N
We therefore finish the proof of Lemma A4. □

Appendix D.3. Proof of Lemma A5

We begin with the proof of (A29). Recall the definitions:
E 2 = M 0 1 2 Z D 0 M 0 1 2 , E 3 = M 0 1 2 D 0 Z M 0 1 2 .
We bound:
e j E 2 Ξ ^ / n e j E 2 / n 1 n k = 1 K e j M 0 1 / 2 Z W k · A k M 0 1 2 C n k = 1 K e j M 0 1 / 2 Z W k
by the second inequality in (A35). Similarly to how we derived (A37), using Bernstein inequality, we have:
e j M 0 1 / 2 Z W k C i = 1 n z i ( j ) W k ( i ) h j = C i = 1 n m = 1 N i N i 1 h j 1 / 2 T i m ( j ) d i 0 ( j ) W k ( i ) C W k 2 log ( n ) N + C log ( n ) N h j C n log ( n ) N + C log ( n ) N h j
with probability 1 o ( n C 0 3 ) . Consequently:
e j E 2 Ξ ^ / n C log ( n ) N n + C log ( n ) n N h j C log ( n ) N n C h j p log ( n ) N n
in view of p log ( n ) 2 N n and h j h min c / p from Assumption 3.
Analogously, for E 3 , we have:
e j E 3 Ξ ^ / n 1 n k = 1 K e j M 0 1 / 2 A k · W k Z M 0 1 / 2 Ξ ^ C h j p log ( n ) N n .
where we used W k Z M 0 1 / 2 Ξ ^ M 0 1 / 2 Z W k p n log ( n ) / N from (A37) and e j M 0 1 / 2 A k C h j . Hence, we complete the proof of (A29).
In the sequel, we focus on the proof of (A30). Recall that E 4 = M 0 1 2 ( Z Z E Z Z ) M 0 1 2 . We aim to show that:
e j E 4 Ξ ^ / n C h j p log ( n ) N n 1 + H 0 1 2 ( Ξ ^ Ξ O ) 2 .
Let us decompose e j E 4 Ξ ^ / n as follows:
n 1 e j E 4 Ξ ^ n 1 e j E 4 Ξ + n 1 e j E 4 ( Ξ ^ Ξ O ) .
We bound n 1 e j E 4 Ξ first. For any fixed 1 k K , in light of the fact that M 0 ( j , j ) h j for all 1 j p :
| e j E 4 ξ k | | e j H 1 / 2 ( Z Z E Z Z ) H 1 / 2 ξ k | = | i = 1 n h j 1 / 2 z i ( j ) z i H 1 / 2 ξ k h j 1 / 2 E z i ( j ) z i H 1 / 2 ξ k | = i = 1 n 1 N i 2 m , m 1 = 1 N i T i m ( j ) d i 0 ( j ) h j · ( T i m 1 d i 0 ) H 1 2 ξ k E T i m ( j ) d i 0 ( j ) h j · ( T i m 1 d i 0 ) H 1 2 ξ k | J 1 | + | J 2 | ,
with:
J 1 : = i = 1 n 1 N i 2 m N i ( T i m d i 0 ) H 1 / 2 e j · ( T i m d i 0 ) H 1 / 2 ξ k E ( T i m d i 0 ) H 1 / 2 e j · ( T i m d i 0 ) H 1 / 2 ξ k , J 2 : = i = 1 n 1 N i 2 m m 1 N i ( T i m d i 0 ) H 1 / 2 e j · ( T i m 1 d i 0 ) H 1 / 2 ξ k .
For J 1 , it is easy to compute the order of its variance as follows:
var ( J 1 ) = i = 1 n m = 1 N i N i 4 var ( T i m d i 0 ) H 1 / 2 e j · ( T i m d i 0 ) H 1 / 2 ξ k = i = 1 n m = 1 N i N i 4 d i 0 ( j ) · ( 1 d i 0 ( j ) ) 2 h j ξ k ( j ) h j t d i 0 ( t ) ξ k ( t ) h t 2 + i = 1 n m = 1 N i N i 4 t j d i 0 ( t ) · ( d i 0 ( j ) ) 2 h j ξ k ( t ) h t s d i 0 ( s ) ξ k ( s ) h s 2 i = 1 n m = 1 N i 1 N i 4 d i 0 ( j ) h j ξ k ( j ) h j t d i 0 ( t ) ξ k ( t ) h t j = 1 p ( d i 0 ( j ) ) 2 h j ξ k ( j ) h j t d i 0 ( t ) ξ k ( t ) h t 2 C n N 3 ,
where we used the facts that ξ k ( t ) h t , d i 0 ( j ) C h j , and t d i 0 ( t ) = 1 . Furthermore, with the trivial bound of each summand in J 1 given by C N 2 h j 1 / 2 , it follows from the Bernstein inequality that:
| J 1 | C n log ( n ) N 3 + C log ( n ) N 2 h j C n log ( n ) N 3
with probability 1 o ( n 3 C 0 ) . Here, we used the conditions that h j C / p and p log ( n ) 2 N n .
We proceed to estimate | J 2 | . Employing Theorem A3 with:
h ( T i m , T i m 1 ) = N i 2 ( T i m d i 0 ) H 1 / 2 e j · ( T i m 1 d i 0 ) H 1 / 2 ξ k ,
it suffices to examine the high probability bound of:
J ˜ 2 : = i = 1 n 1 N i 2 m m 1 N i ( T i m d i 0 ) H 1 / 2 e j · ( T ˜ i m 1 d i 0 ) H 1 / 2 ξ k
where { T ˜ i m 1 } is an independent copy of { T i m 1 } . Imitating the proof of (A45), we rewrite:
J ˜ 2 = i = 1 n m = 1 N i N i 1 ( T i m d i 0 ) H 1 / 2 e j · b i m where b i m = m 1 m N i 1 ( T ˜ i m 1 d i 0 ) H 1 / 2 ξ k
Notice that b i m can be crudely bounded by C in view of ξ k ( t ) h t . Then, conditional on { T ˜ i m 1 } , by Bernstein inequality, we can derive that:
| J ˜ 2 | C n log ( n ) N + log ( n ) N h j C n log ( n ) N
with probability 1 o ( n 3 C 0 ) . Consequently, we arrive at:
| e j E 4 ξ k | C n log ( n ) N C h j p n log ( n ) N
under the assumption that h j C / p . As K is a fixed constant, we further conclude:
e j E 4 Ξ C h j p n log ( n ) N
with probability 1 o ( n 3 C 0 ) .
Next, we estimate n 1 e j E 4 ( Ξ ^ Ξ O ) . By definition, we write:
1 n e j E 4 ( Ξ ^ Ξ O ) = 1 n e j M 0 1 / 2 ( Z Z E Z Z ) M 0 1 / 2 ( Ξ ^ Ξ O ) .
For each 1 t p :
1 n | e j M 0 1 / 2 ( Z Z E Z Z ) e t | 1 n h j i = 1 n z i ( j ) z i ( t ) E ( z i ( j ) z j ( t ) ) = 1 n h j i m , m ˜ N i 2 ( T i m ( j ) d i 0 ( j ) ) ( T i m ˜ ( t ) d i 0 ( t ) ) E ( T i m ( j ) d i 0 ( j ) ) ( T i m ˜ ( t ) d i 0 ( t ) ) = 1 n h j i , m N i 2 ( T i m ( j ) d i 0 ( j ) ) ( T i m ( t ) d i 0 ( t ) ) E ( T i m ( j ) d i 0 ( j ) ) ( T i m ( t ) d i 0 ( t ) ) + 1 n h j i N i 2 m m ˜ ( T i m ( j ) d i 0 ( j ) ) ( T i m ˜ ( t ) d i 0 ( t ) ) : = ( I ) t + ( I I ) t .
For ( I ) t , Bernstein inequality yields that with probability 1 o ( n 3 2 C 0 ) :
| ( I ) t | C max ( h j + h t ) h t log ( n ) n N 3 , ( h j + h t ) log ( n ) n N 2 h j , t j max log ( n ) n N 3 , log ( n ) n N 2 h j , t = j C ( h j + h t ) h t log ( n ) n N 3 , t j log ( n ) n N 3 , t = j
where the last step is due to the fact that p log ( n ) 2 N n from Assumption 3. As a result:
t = 1 p | ( I ) t | C p t j h j h t log ( n ) n N 3 + t j h t log ( n ) n N 3 + log ( n ) n N 3 C h j p log ( n ) n N 3
Here, we used the Cauchy–Schwarz inequality to get:
t j h j h t log ( n ) n N 3 p 1 · t j h j h t log ( n ) n N 3 p t j h j h t log ( n ) n N 3 .
For ( I I ) t , since it is a U-statistic, we apply the decoupling idea, i.e., Theorem A3, so that its high-probability bound can be controlled by that of ( I I ˜ ) t , defined by:
( I I ˜ ) t : = 1 n h j i N i 2 m m ˜ ( T i m ( j ) d i 0 ( j ) ) ( T ˜ i m ˜ ( t ) d i 0 ( t ) ) .
where { T ˜ i m ˜ } i , m ˜ is an independent copy of { T i m } i , m . We further express:
( I I ˜ ) t = 1 n h j i N i 2 m ( T i m ( j ) d i 0 ( j ) ) T ˜ i , m ,
where T ˜ i , m : = m ˜ m ( T ˜ i m ˜ ( t ) d i 0 ( t ) ) . Conditional on { T ˜ i m ˜ } i , m ˜ , we apply Bernstein inequality and get:
( I I ˜ ) t C max log ( n ) · i , m T ˜ i , m 2 n 2 N 4 , log ( n ) · max i , m | T ˜ i , m | n N 2 h j C log ( n ) · max i , m | T ˜ i , m | 2 n N 3 ,
in light of p log ( n ) 2 N n . Furthermore, notice that:
max i , m | T ˜ i , m | m ˜ | T ˜ i m ˜ ( t ) d i 0 ( t ) | .
It follows that:
t = 1 p | ( I I ˜ ) t | C log ( n ) n N · 1 N t = 1 p max i , m | T ˜ i , m | C log ( n ) n N · 1 N t = 1 p m ˜ | T ˜ i m ˜ ( t ) d i 0 ( t ) | C log ( n ) n N ,
where the last step is due to the trivial bound that:
t = 1 p | T ˜ i m ˜ ( t ) d i 0 ( t ) | 1 + t = 1 p d i 0 ( t ) C
for any 1 m ˜ N . Thus, combining (A56) and (A57), under the condition h j C / p , we obtain:
1 n e j M 0 1 / 2 ( Z Z E Z Z ) 1 = 1 n t = 1 p | e j M 0 1 / 2 ( Z Z E Z Z ) e t | C h j p log ( n ) n N
with probability 1 o ( n 3 C 0 ) .
Moreover, employing the estimate M 0 ( j , j ) h j for all 1 j p , it follows that:
1 n e j E 4 ( Ξ ^ Ξ O ) = 1 n e j M 0 1 / 2 ( Z Z E Z Z ) M 0 1 / 2 ( Ξ ^ Ξ O ) 1 n e j M 0 1 / 2 ( Z Z E Z Z ) 1 · M 0 1 / 2 H 1 / 2 · H 1 / 2 ( Ξ ^ Ξ O ) 2 C h j p log ( n ) n N H 1 / 2 ( Ξ ^ Ξ O ) 2
with probability 1 o ( n 3 C 0 ) .
In the end, we combine (A55) and (A59) and consider all j simultaneously to conclude that:
n 1 e j E 4 Ξ ^ n 1 e j E 4 Ξ + n 1 e j E 4 ( Ξ ^ Ξ O ) C h j p log ( n ) n N 1 + H 1 / 2 ( Ξ ^ Ξ O ) 2
with probability 1 o ( n 3 C 0 ) . Combining all 1 j p , together with p n C 0 , we complete the proof. □

Appendix D.4. Proof of Lemma A6

We first prove (A31) that:
e j E 4 ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / n C h j · p log ( n ) n N 1 + H 1 2 ( Ξ ^ Ξ O ) 2
By the definition that E 4 = M 0 1 / 2 ( Z Z E Z Z ) M 0 1 / 2 , we bound:
e j E 4 ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / n 1 n e j M 0 1 / 2 ( Z Z E Z Z ) 1 · M 0 1 / 2 ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ 2 .
From (A58), it holds that e j M 0 1 / 2 ( Z Z E Z Z ) 1 / n C h j p log ( n ) / n N with probability 1 o ( n 3 C 0 ) . Next, we bound:
M 0 1 / 2 ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ 2 H 1 / 2 ( M 0 1 / 2 M 1 / 2 I p ) Ξ 2 + H 1 / 2 ( M 0 1 / 2 M 1 / 2 I p ) ( Ξ ^ Ξ O ) 2
The first term on the RHS can be bounded simply by:
H 1 / 2 ( M 0 1 / 2 M 1 / 2 I p ) Ξ 2 C max i | h i 1 / 2 p log ( n ) / n N · h i | C p log ( n ) / n N = o ( 1 )
The second term can be simplified to:
H 1 / 2 ( M 0 1 / 2 M 1 / 2 I p ) ( Ξ ^ Ξ O ) 2 = ( M 0 1 / 2 M 1 / 2 I p ) H 1 / 2 ( Ξ ^ Ξ O ) 2 C p log ( n ) n N · H 1 / 2 ( Ξ ^ Ξ O ) 2 .
As a result:
e j E 4 ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / n C h j p log ( n ) n N · p log ( n ) n N 1 + H 0 1 2 ( Ξ Ξ 0 O ) 2 C h j · p log ( n ) n N 1 + H 1 2 ( Ξ ^ Ξ O ) 2 .
This proves (A31).
Subsequently, we prove (A32) that:
e j M 1 / 2 M 0 1 / 2 I p Ξ ^ C log ( n ) N n + o ( β n ) · e j ( Ξ ^ Ξ O ) .
We first bound:
e j M 1 / 2 M 0 1 / 2 I p Ξ ^ e j M 1 / 2 M 0 1 / 2 I p Ξ + e j M 1 / 2 M 0 1 / 2 I p ( Ξ ^ Ξ O ) .
By Lemma A1, | M ( j , j ) M 0 ( j , j ) | / M 0 ( j , j ) C log ( n ) / N n h j . It follows that:
e j M 1 / 2 M 0 1 / 2 I p Ξ M ( j , j ) M 0 ( j , j ) 1 · e j Ξ C | M ( j , j ) M 0 ( j , j ) | M 0 ( j , j ) · e j Ξ C log ( n ) N n ,
and:
e j M 1 / 2 M 0 1 / 2 I p ( Ξ ^ Ξ O ) M ( j , j ) M 0 ( j , j ) 1 · e j ( Ξ ^ Ξ O ) p log ( n ) N n · e j ( Ξ ^ Ξ O ) = o ( β n ) · e j ( Ξ ^ Ξ O ) .
by the condition that p log ( n ) N n . We therefore conclude (A32), simultaneously for all 1 j p , with probability 1 o ( n 3 ) .
Lastly, we prove (A33). By the definition:
E 1 = M 1 2 D D M 1 2 M 0 1 2 D D M 0 1 2 ,
and the decomposition:
M 0 1 2 D D M 0 1 2 = G 0 + n N I p + E 2 + E 3 + E 4 , where G 0 = M 0 1 / 2 i = 1 n ( 1 N i 1 ) d i 0 ( d i 0 ) M 0 1 / 2 ,
we bound:
e j E 1 Ξ ^ / n e j ( I p M 0 1 / 2 M 1 / 2 ) M 1 / 2 D D M 1 / 2 Ξ ^ / n + e j M 0 1 / 2 D D M 0 1 / 2 ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / n C e j ( I p M 0 1 / 2 M 1 / 2 ) Ξ ^ + C e j G 0 ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / n + e j ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / N + i = 2 4 e j E i ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / n ,
where we used the fact that M 1 / 2 D D M 1 / 2 Ξ ^ = Λ ˜ Ξ ^ with Λ ˜ = Λ ^ + n N 1 I p , which leads to Λ ˜ C n .
In the same manner as we bounded e j E 2 Ξ ^ / n and e j E 3 Ξ ^ / n , we can bound:
1 n e j E s ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ 1 n e j E s M 0 1 / 2 M 1 / 2 I p C h j p log ( n ) N n , for s = 2 , 3 .
By Lemma A1, we derive:
e j G 0 ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / n C t = 1 p 1 h j h t | a j Σ W a t | log ( n ) h t N n e t Ξ ^ C h j p log ( n ) N n ,
where we crudely bound | a j Σ W a t | h j h t , and use Cauchy–Schwarz inequality that t = 1 p e t Ξ ^ p tr ( Ξ ^ Ξ ^ ) K p . In addition:
e j ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / N M 0 ( j , j ) / M ( j , j ) · e j ( I p M 0 1 / 2 M 1 / 2 ) Ξ ^ C e j ( I p M 0 1 / 2 M 1 / 2 ) Ξ ^ ,
which results in:
e j E 1 Ξ ^ / n C e j ( I p M 0 1 / 2 M 1 / 2 ) Ξ ^ + C e j G 0 ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / n + i = 2 4 e j E i ( M 0 1 / 2 M 1 / 2 I p ) Ξ ^ / n .
Combining (A61), (A62), (A31), and (A32) into the above inequality, we complete the proof of (A33).□

Appendix E. Proofs of the Rates for Topic Modeling

The proofs in this section are quite similar to those in [4], now employing the bounds in Theorem 1. For the readers’ convenience, we provide brief sketches and refer to the supplementary materials of [4] for more details. Notice that N i N ¯ N from Assumption 3. Therefore, throughout this section, we always assume N ¯ = N without loss of generality.

Appendix E.1. Proof of Theorem 2

Recall that:
R ^ = ( r ^ 1 , r ^ 2 , , r ^ p ) = [ diag ( ξ ^ 1 ) ] 1 ( ξ ^ 2 , , ξ ^ K ) .
Since the leading eigenvalue of G 0 has multiplicity one, as can be seen in Lemma A2, and since G G 0 n , it is not hard to obtain that O = diag ( ω , Ω ) , where ω { 1 , 1 } and Ω is an orthogonal matrix in R ( K 1 ) × ( K 1 ) . Let us write Ξ ^ 1 : = ( ξ ^ 2 , , ξ ^ K ) and similarly for Ξ 1 . Without loss of generality, we assume ω = 1 . Therefore:
| ξ 1 ( j ) ξ ^ 1 ( j ) | C h j p log ( n ) N n β n 2 , e j ( Ξ ^ 1 Ξ 1 ) Ω C h j p log ( n ) N n β n 2 .
We rewrite:
r ^ j r j Ω = Ξ ^ 1 ( j ) · ξ 1 ( j ) ξ ^ 1 ( j ) ξ ^ 1 ( j ) ξ 1 ( j ) e j ( Ξ ^ 1 Ξ 1 Ω ) ξ 1 ( j ) .
Using Lemma A2 together with (A64), we conclude the proof. □
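For concreteness, the entry-wise SCORE embedding r ^ j used above can be sketched as follows. This is a minimal sketch under our own naming, where Xi_hat stands for the p x K matrix of leading empirical eigenvectors, and the random orthonormal input is purely illustrative.

```python
import numpy as np

def score_embedding(Xi_hat: np.ndarray) -> np.ndarray:
    """R_hat = [diag(xi_hat_1)]^{-1} (xi_hat_2, ..., xi_hat_K): row j is
    r_hat_j = (xi_hat_2(j)/xi_hat_1(j), ..., xi_hat_K(j)/xi_hat_1(j))."""
    return Xi_hat[:, 1:] / Xi_hat[:, [0]]

# Illustrative only: a random orthonormal frame in place of the eigenvectors of G.
# (For the actual G, the entries of the leading eigenvector are bounded away from
# zero in magnitude, cf. Lemma A2, so the entry-wise ratios are well behaved.)
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((30, 3)))
print(score_embedding(Q).shape)   # (30, 2): one (K-1)-dimensional embedding per word
```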

Appendix E.2. Proof of Theorem 3

In this section, we provide a simplified proof that omits the details about some quantities in the oracle case. We refer readers to the proof of Theorem 3.3 of [4] for more rigorous arguments.
Proof of Theorem 3.
Recall the Topic-SCORE algorithm. Let V ^ = ( v ^ 1 , v ^ 2 , , v ^ K ) and denote its population counterpart by V. We write:
Q ^ = 1 1 v ^ 1 v ^ K , Q = 1 1 v 1 v K
Similarly to [4], by properly choosing the vertex hunting algorithm and using the anchor-word condition, it can be seen that:
V ^ V C p log ( n ) N n β n 2
where we omit the permutation for simplicity here and throughout this proof. As a result:
π ^ j * π j * = Q ^ 1 1 r ^ j Q 1 1 Ω r j C Q 1 2 · V ^ V · r j + Q 1 r ^ j Ω r j C p log ( n ) N n β n 2 = o ( 1 )
where we used the fact that Q 1 C , whose proof can be found in Lemma G.1 in the supplementary material of [4]. Considering the truncation at 0, it is not hard to see that:
π ˜ j * π j * C π ^ j * π j * C p log ( n ) N n β n 2 = o ( 1 ) ;
and furthermore:
π ^ j π j 1 π ˜ j * π j * 1 π ˜ j * 1 + π j * 1 | π ˜ j * 1 π j * 1 | π ˜ j * 1 π j * 1 C π ˜ j * π j * 1 C p log ( n ) N n β n 2 .
by noticing that π j = π j * in the oracle case.
Recall that A ˜ = M 1 / 2 diag ( ξ ^ 1 ) Π ^ = : ( a ˜ 1 , , a ˜ p ) . Let A * = M 0 1 / 2 diag ( ξ 1 ) Π = ( a 1 * , , a p * ) . Note that A = A * [ diag ( 1 p A * ) ] 1 . We can derive:
a ˜ j a j * 1 M ( j , j ) ξ ^ 1 ( j ) π ^ j M 0 ( j , j ) ξ 1 ( j ) π j 1 C M ( j , j ) M 0 ( j , j ) · ξ 1 ( j ) · π j 1 + C M 0 ( j , j ) · ξ ^ 1 ( j ) ξ 1 ( j ) · π j 1 + C M 0 ( j , j ) · ξ 1 ( j ) · π ^ j π j 1 C h j p log ( n ) N n β n 2 ,
where we used (A65), (A64) and also Lemma A1. Write A ˜ = ( A ˜ 1 , , A ˜ K ) and A * = ( A 1 * , , A K * ) . We crudely bound:
| A ˜ k 1 A k * 1 | j = 1 p a ˜ j a j * 1 C p log ( n ) N n β n 2 = o ( 1 )
simultaneously for all 1 k K , since j h j = K . By the study of the oracle case in [4], it can be deduced that A k * 1 1 (see more details in the supplementary materials of [4]). It then follows that:
a ^ j a j 1 = diag ( 1 / A ˜ 1 1 , , 1 / A ˜ K 1 ) a ˜ j diag ( 1 / A 1 * 1 , , 1 / A K * 1 ) a j * 1 = k = 1 K | a ˜ j ( k ) A ˜ k 1 a j * ( k ) A k * 1 | k = 1 K | a ˜ j ( k ) a j * ( k ) A k * 1 | + | a j * ( k ) | | A ^ k 1 A k * 1 | A k * 1 A ˜ k 1 C k = 1 K a ˜ j a j * 1 + a j * 1 max k | A ˜ k 1 A k * 1 | C h j p log ( n ) N n β n 2 = C a j 1 p log ( n ) N n β n 2 .
Here, we used (A66), (A67) and the following estimate:
a j * 1 = M 0 ( j , j ) | ξ 1 ( j ) | π j * h j
Combining all j together, we immediately have the result for L ( A ^ , A ) . □
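The reconstruction A ˜ = M 1 / 2 diag ( ξ ^ 1 ) Π ^ followed by the column normalization by A ˜ k 1 can be sketched as below. This is a minimal sketch under our own naming: M_diag holds the diagonal of M, xi1_hat is the leading empirical eigenvector, and Pi_hat stacks the truncated barycentric weights π ^ j as rows; taking the absolute value of xi1_hat is our own safeguard for the sign convention.

```python
import numpy as np

def reconstruct_topic_matrix(M_diag: np.ndarray, xi1_hat: np.ndarray,
                             Pi_hat: np.ndarray) -> np.ndarray:
    """A_tilde = M^{1/2} diag(xi1_hat) Pi_hat, then rescale each column of
    A_tilde by its l1 norm so that every estimated topic vector sums to one."""
    A_tilde = (np.sqrt(M_diag) * np.abs(xi1_hat))[:, None] * Pi_hat   # p x K
    return A_tilde / A_tilde.sum(axis=0, keepdims=True)
```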

Appendix E.3. Proof of Theorem 4

The optimization in (12) has an explicit solution given by:
w ^ i * = A ^ M 1 A ^ 1 A ^ M 1 d i .
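In code, this closed-form weighted least-squares solution, together with the entry-wise truncation at zero and the l1 normalization applied to it at the end of this proof, can be sketched as follows (a minimal sketch under our own naming; M_diag holds the diagonal of M):

```python
import numpy as np

def w_hat_star(A_hat: np.ndarray, M_diag: np.ndarray, d_i: np.ndarray) -> np.ndarray:
    """Closed-form solution (A_hat^T M^{-1} A_hat)^{-1} A_hat^T M^{-1} d_i."""
    AtMinv = A_hat.T / M_diag                     # K x p, equals A_hat^T M^{-1}
    return np.linalg.solve(AtMinv @ A_hat, AtMinv @ d_i)

def w_hat(A_hat: np.ndarray, M_diag: np.ndarray, d_i: np.ndarray) -> np.ndarray:
    """Entry-wise truncation at zero followed by l1 normalization, as in the
    definition of the actual estimator recalled at the end of this proof."""
    w_tilde = np.maximum(w_hat_star(A_hat, M_diag, d_i), 0.0)
    return w_tilde / w_tilde.sum()
```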
Notice that ( A M 0 1 A ) 1 A M 0 1 d i 0 = ( A M 0 1 A ) 1 A M 0 1 A w i = w i . Consequently:
w ^ i * w i 1 = A ^ M 1 A ^ 1 A ^ M 1 d i ( A M 0 1 A ) 1 A M 0 1 d i 0 1 ( A M 0 1 A ) 1 A ^ M 1 A ^ A M 0 1 A A ^ M 1 A ^ 1 A ^ M 1 d i 1 + ( A M 0 1 A ) 1 A ^ M 1 d i A M 0 1 d i 0 1 C β n 1 A ^ M 1 A ^ A M 0 1 A ( w ^ i * w i 1 + w i 1 ) + C β n 1 A ^ M 1 d i A M 0 1 d i 0 ,
since ( A M 0 1 A ) 1 ( A H 1 A ) 1 1 . What remains is to analyze:
T 1 : = A ^ M 1 A ^ A M 0 1 A , and T 2 : = A ^ M 1 d i A M 0 1 d i 0 .
For T 1 , we bound:
T 1 ( A ^ A ) M 1 A ^ + A ( M 1 M 0 1 ) A ^ + A M 0 1 ( A ^ A ) .
Using the estimates:
a ^ j a j 1 C h j p log ( n ) N n β n 2 , | M ( j , j ) 1 M 0 ( j , j ) 1 | log ( n ) h j N n h j ,
it follows that:
A ( M 1 M 0 1 ) ( A ^ A ) k , k 1 = 1 K | A k ( M 1 M 0 1 ) ( A ^ k 1 A k 1 ) | k = 1 K A ^ k A k 1 = j = 1 p a ^ j a j 1 p log ( n ) N n β n 2 ,
and similarly:
( A ^ A ) M 0 1 ( A ^ A ) k = 1 K A ^ k A k 1 p log ( n ) N n β n 2 , ( A ^ A ) ( M 1 M 0 1 ) ( A ^ A ) k = 1 K A ^ k A k 1 p log ( n ) N n β n 2 .
As a result:
T 1 C ( A ^ A ) M 0 1 A + C A ( M 1 M 0 1 ) A C j = 1 p a ^ j a j 1 + C p log ( n ) N n · j = 1 p a j 1 C p log ( n ) N n β n 2 .
Next, for T 2 , we bound:
T 2 ( A ^ A ) M 1 d i + A ( M 1 M 0 1 ) d i + A M 0 1 ( d i d i 0 ) max j a ^ j a j 1 h j + a j 1 log ( n ) h j N n h j · d i 1 + max 1 k K | A k M 0 1 ( d i d i 0 ) | C p log ( n ) N n β n 2 + max 1 k K | A k M 0 1 ( d i d i 0 ) | .
where, for ( A ^ A ) M 1 d i , since K is a fixed constant, we crudely bound:
( A ^ A ) M 1 d i C max k | ( A ^ k A k ) M 1 d i | C max k , j | h j 1 a ^ j ( k ) a j ( k ) | d i 1
and | a ^ j ( k ) a j ( k ) | a ^ j a j 1 . We bound A ( M 1 M 0 1 ) d i in the same manner. To proceed, we analyze | A k M 0 1 ( d i d i 0 ) | for a fixed k. We rewrite it as:
A k M 0 1 ( d i d i 0 ) = 1 N i m = 1 N i A k M 0 1 ( T i m T i m ) .
The RHS is a sum of independent terms, to which Bernstein inequality can be applied. By elementary computations, the variance is:
N i 1 var A k M 0 1 ( T i m T i m ) = N i 1 E A k M 0 1 ( T i m T i m ) 2 = N i 1 A k M 0 1 diag ( d i 0 ) M 0 1 A k N i 1 A k M 0 1 d i 0 2 N 1
and the individual bound is crudely N 1 . It follows from Bernstein inequality that with probability 1 o ( n 4 ) :
A k M 0 1 ( d i d i 0 ) C log ( n ) N + log ( n ) N C log ( n ) N
in light of N log ( n ) . This gives rise to:
T 2 C p log ( n ) N n β n 2 + C log ( n ) N
We substitute the above equation, together with (A69), into (A68) and conclude that:
w ^ i * w i 1 C p log ( n ) N n β n 4 + C log ( n ) N β n 2 .
Recall that the actual estimator w ^ i is defined by:
w ^ i = max { w ^ i * , 0 } / max { w ^ i * , 0 } 1 ,
where the maximum is taken entry-wise. We write w ˜ i : = max { w ^ i * , 0 } for short. Since w i is always non-negative, it is not hard to see that:
w ˜ i w i 1 C w ^ i * w i 1 C p log ( n ) N n β n 4 + C log ( n ) N β n 2 = o ( 1 ) .
As a result, w ˜ i 1 = 1 + o ( 1 ) . Moreover:
w ^ i w i 1 w ˜ i w i 1 w ˜ i 1 + w i 1 | 1 w ˜ i 1 1 w i 1 | C w ˜ i w i 1 C p log ( n ) N n β n 4 + C log ( n ) N β n 2
with probability 1 o ( n 4 ) . Combining all i, we thus conclude the proof. □

References

1. Hofmann, T. Probabilistic latent semantic indexing. In Proceedings of the International ACM SIGIR Conference, Berkeley, CA, USA, 15–19 August 1999; pp. 50–57.
2. Blei, D.; Ng, A.; Jordan, M. Latent Dirichlet allocation. J. Mach. Learn. Res. 2003, 3, 993–1022.
3. Ke, Z.T.; Ji, P.; Jin, J.; Li, W. Recent advances in text analysis. Annu. Rev. Stat. Its Appl. 2023, 11, 347–372.
4. Ke, Z.T.; Wang, M. Using SVD for topic modeling. J. Am. Stat. Assoc. 2024, 119, 434–449.
5. de la Pena, V.H.; Montgomery-Smith, S.J. Decoupling inequalities for the tail probabilities of multivariate U-statistics. Ann. Probab. 1995, 23, 806–816.
6. Arora, S.; Ge, R.; Moitra, A. Learning topic models–going beyond SVD. In Proceedings of the Foundations of Computer Science (FOCS), New Brunswick, NJ, USA, 20–23 October 2012; pp. 1–10.
7. Arora, S.; Ge, R.; Halpern, Y.; Mimno, D.; Moitra, A.; Sontag, D.; Wu, Y.; Zhu, M. A practical algorithm for topic modeling with provable guarantees. In Proceedings of the International Conference on Machine Learning (ICML), Atlanta, GA, USA, 16–21 June 2013; pp. 280–288.
8. Bansal, T.; Bhattacharyya, C.; Kannan, R. A provable SVD-based algorithm for learning topics in dominant admixture corpus. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 1997–2005.
9. Bing, X.; Bunea, F.; Wegkamp, M. A fast algorithm with minimax optimal guarantees for topic models with an unknown number of topics. Bernoulli 2020, 26, 1765–1796.
10. Erdős, L.; Knowles, A.; Yau, H.T.; Yin, J. Spectral statistics of Erdős–Rényi graphs I: Local semicircle law. Ann. Probab. 2013, 41, 2279–2375.
11. Fan, J.; Wang, W.; Zhong, Y. An L-infinity eigenvector perturbation bound and its application to robust covariance estimation. J. Mach. Learn. Res. 2018, 18, 1–42.
12. Fan, J.; Fan, Y.; Han, X.; Lv, J. SIMPLE: Statistical inference on membership profiles in large networks. J. R. Stat. Soc. Ser. B 2022, 84, 630–653.
13. Abbe, E.; Fan, J.; Wang, K.; Zhong, Y. Entrywise eigenvector analysis of random matrices with low expected rank. Ann. Statist. 2020, 48, 1452–1474.
14. Chen, Y.; Chi, Y.; Fan, J.; Ma, C. Spectral methods for data science: A statistical perspective. Found. Trends Mach. Learn. 2021, 14, 566–806.
15. Ke, Z.T.; Wang, J. Optimal network membership estimation under severe degree heterogeneity. arXiv 2022, arXiv:2204.12087.
16. Paul, D. Asymptotics of sample eigenstructure for a large dimensional spiked covariance model. Stat. Sin. 2007, 17, 1617.
17. Zipf, G.K. The Psycho-Biology of Language: An Introduction to Dynamic Philology; Routledge: London, UK, 2013.
18. Davis, C.; Kahan, W.M. The rotation of eigenvectors by a perturbation. III. SIAM J. Numer. Anal. 1970, 7, 1–46.
19. Horn, R.; Johnson, C. Matrix Analysis; Cambridge University Press: Cambridge, UK, 1985.
20. Jin, J. Fast community detection by SCORE. Ann. Statist. 2015, 43, 57–89.
21. Ke, Z.T.; Jin, J. Special invited paper: The SCORE normalization, especially for heterogeneous network and text data. Stat 2023, 12, e545.
22. Donoho, D.; Stodden, V. When does non-negative matrix factorization give a correct decomposition into parts? In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 13–18 December 2004; pp. 1141–1148.
23. Araújo, M.C.U.; Saldanha, T.C.B.; Galvao, R.K.H.; Yoneyama, T.; Chame, H.C.; Visani, V. The successive projections algorithm for variable selection in spectroscopic multicomponent analysis. Chemom. Intell. Lab. Syst. 2001, 57, 65–73.
24. Jin, J.; Ke, Z.T.; Moryoussef, G.; Tang, J.; Wang, J. Improved algorithm and bounds for successive projection. In Proceedings of the International Conference on Learning Representations (ICLR), Vienna, Austria, 7–11 May 2024.
25. Wu, R.; Zhang, L.; Tony Cai, T. Sparse topic modeling: Computational efficiency, near-optimal algorithms, and statistical inference. J. Am. Stat. Assoc. 2023, 118, 1849–1861.
26. Klopp, O.; Panov, M.; Sigalla, S.; Tsybakov, A.B. Assigning topics to documents by successive projections. Ann. Stat. 2023, 51, 1989–2014.
27. Vershynin, R. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing: Theory and Applications; Cambridge University Press: Cambridge, UK, 2012; pp. 210–268.
28. Tropp, J. User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 2012, 12, 389–434.
29. De la Pena, V.; Giné, E. Decoupling: From Dependence to Independence; Springer Science & Business Media: Berlin, Germany, 2012.
30. Freedman, D.A. On tail probabilities for martingales. Ann. Probab. 1975, 3, 100–118.
31. Bloemendal, A.; Knowles, A.; Yau, H.T.; Yin, J. On the principal components of sample covariance matrices. Probab. Theory Relat. Fields 2016, 164, 459–552.
Figure 1. An illustration of Topic-SCORE in the noiseless case ( K = 3 ). The blue dots are r j R K 1 (word embeddings constructed from the population singular vectors), and they are contained in a simplex with K vertices. This simplex can be recovered from a vertex hunting algorithm. Given this simplex, each r j has a unique barycentric coordinate π j R K . The topic matrix A is recovered from stacking together these π j ’s and utilizing M 0 and ξ 1 .
Table 1. A summary of the existing theoretical results for estimating A (n is the number of documents, p is the vocabulary size, N is the order of document lengths, and h max and h min are the same as in (8)). Cases 1–3 refer to N p 4 / 3 , p N < p 4 / 3 , and N < p , respectively. For Cases 2–3, the sub-cases ‘a’ and ‘b’ correspond to n max { N p 2 , p 3 , N 2 p 5 } and n < max { N p 2 , p 3 , N 2 p 5 } , respectively. We have translated the results in each paper to the bounds on L ( A ^ , A ) , with any logarithmic factor omitted.
Method | Case 1 | Case 2a | Case 2b | Case 3a | Case 3b
Ke & Wang [4] | p N n | p N n | p 2 N N p N n | p N p N n | p 2 N N p N n
Arora et al. [6] | p 4 N n | p 4 N n | p 4 N n | p 4 N n | p 4 N n
Bing et al. [9] | p N n · h max h min | p N n · h max h min | p N n · h max h min | NA | NA
Bansal et al. [8] | N p n | N p n | N p n | N p n | N p n
Our results | p N n | p N n | p N n | p N n | p N n