*2.2. Asymptotic Results for a Single Class*

To extend Liu's test to the nonparanormal case, we first consider the problem of single GGM estimation based on the oracle data, i.e., $(X_{m1}^{(k)}, \ldots, X_{mp}^{(k)})_{1 \le m \le n_k} \sim N(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)$, in the following regression framework:

$$X_{mj}^{(k)} = \alpha_j^{(k)} + \mathbf{X}_{m,-j}^{(k)\prime}\boldsymbol{\beta}_j^{(k)} + \varepsilon_{mj}^{(k)}.\tag{3}$$


It is not hard to show that the regression coefficients $\boldsymbol{\beta}_j^{(k)} = (\beta_{j,1}^{(k)}, \ldots, \beta_{j,j-1}^{(k)}, \beta_{j,j+1}^{(k)}, \ldots, \beta_{j,p}^{(k)})$ and the error term $\varepsilon_{mj}^{(k)}$ satisfy

$$\boldsymbol{\beta}_j^{(k)} = -(\omega_{jj}^{(k)})^{-1}\boldsymbol{\Omega}_{-j,j}^{(k)}, \qquad \operatorname{cov}(\varepsilon_{mi}^{(k)}, \varepsilon_{mj}^{(k)}) = \frac{\omega_{ij}^{(k)}}{\omega_{ii}^{(k)}\omega_{jj}^{(k)}}.$$
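As a quick numerical sanity check (ours, not part of the paper's development), the following sketch verifies both identities for a randomly generated precision matrix; all names are illustrative:

```python
# Verify beta_j = -(omega_jj)^{-1} Omega_{-j,j} and
# cov(eps_i, eps_j) = omega_ij / (omega_ii * omega_jj) numerically.
import numpy as np

p = 5
rng = np.random.default_rng(0)
A = rng.normal(size=(p, p))
Omega = A @ A.T + p * np.eye(p)          # a positive-definite precision matrix
Sigma = np.linalg.inv(Omega)             # the corresponding covariance

i, j = 0, 2
mask = np.arange(p) != j
# Population regression of X_j on X_{-j}: beta_j = Sigma_{-j,-j}^{-1} Sigma_{-j,j}
beta_j = np.linalg.solve(Sigma[np.ix_(mask, mask)], Sigma[mask, j])
assert np.allclose(beta_j, -Omega[mask, j] / Omega[j, j])

# The residual eps_j = X_j - beta_j' X_{-j} equals (Omega[:, j]' X) / omega_jj,
# so its covariance with eps_i follows from Omega Sigma Omega = Omega.
c = lambda l: Omega[:, l] / Omega[l, l]
assert np.isclose(c(i) @ Sigma @ c(j), Omega[i, j] / (Omega[i, i] * Omega[j, j]))
```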

As the oracle data $(X_{m1}^{(k)}, \ldots, X_{mp}^{(k)})_{1 \le m \le n_k}$ in Equation (3) are generally unknown, we consider a new regression model based on Winsorized imputations:

$$X_{mj}^{(k)*} = \alpha_j^{(k)} + \mathbf{X}_{m,-j}^{(k)*\prime}\boldsymbol{\beta}_j^{(k)} + \varepsilon_{mj}^{(k)*}.\tag{4}$$
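The Winsorized imputation itself is not restated here; the sketch below follows the standard construction from the nonparanormal literature, where each coordinate is mapped through a truncated empirical CDF and the standard normal quantile function. The function name and the exact truncation level `delta` are our assumptions, not the authors' notation:

```python
# A minimal sketch of Winsorized imputation (assumed construction; the
# truncation level delta is one common choice and may differ from the paper's).
import numpy as np
from scipy.stats import norm, rankdata

def winsorized_imputation(X):
    """Map raw n x p data X to the Gaussianized pseudo-data X* used in Eq. (4)."""
    n, _ = X.shape
    delta = 1.0 / (4.0 * n**0.25 * np.sqrt(np.pi * np.log(n)))  # truncation level
    F = rankdata(X, axis=0) / n         # empirical CDF evaluated at each observation
    F = np.clip(F, delta, 1.0 - delta)  # Winsorize to keep Phi^{-1} finite
    return norm.ppf(F)                  # X*_{mj} = Phi^{-1}(F_delta(X_{mj}))
```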

In solving the problem of single GGM estimation, Liu (2017) proposed an elegant test based on a bias-corrected sample covariance. This motivates us to construct the following new statistic:

$$S_{ij}^{(k)*} = \sqrt{\frac{1}{n_k r_{ii}^{(k)*} r_{jj}^{(k)*}}} \left( \sum_{m=1}^{n_k} \varepsilon_{mi}^{(k)*} \varepsilon_{mj}^{(k)*} + \sum_{m=1}^{n_k} \{\varepsilon_{mi}^{(k)*}\}^2 \beta_{i,j}^{(k)} + \sum_{m=1}^{n_k} \{\varepsilon_{mj}^{(k)*}\}^2 \beta_{j,i}^{(k)} \right), \tag{5}$$

where $r_{ij}^{(k)*} = (1/n_k)\sum_{m=1}^{n_k} \varepsilon_{mi}^{(k)*}\varepsilon_{mj}^{(k)*}$. By letting $\bar{\boldsymbol{\varepsilon}}^{(k)} = (1/n_k)\sum_{m=1}^{n_k} \boldsymbol{\varepsilon}_m^{(k)}$, $(\hat{\sigma}_{ij,\varepsilon}^{(k)})_{1 \le i,j \le p} = (1/n_k)\sum_{m=1}^{n_k}(\boldsymbol{\varepsilon}_m^{(k)} - \bar{\boldsymbol{\varepsilon}}^{(k)})(\boldsymbol{\varepsilon}_m^{(k)} - \bar{\boldsymbol{\varepsilon}}^{(k)})'$, and $b_{ij}^{(k)} = \omega_{ii}^{(k)}\hat{\sigma}_{ii,\varepsilon}^{(k)} + \omega_{jj}^{(k)}\hat{\sigma}_{jj,\varepsilon}^{(k)} - 1$, we will prove that, under mild conditions (see the detailed proof in Appendix A),

$$S_{ij}^{(k)*} + b_{ij}^{(k)} \frac{\omega_{ij}^{(k)}}{\omega_{ii}^{(k)} \omega_{jj}^{(k)}} \stackrel{D}{\rightarrow} N\left(0,\ 1 + \frac{\{\omega_{ij}^{(k)}\}^2}{\omega_{ii}^{(k)} \omega_{jj}^{(k)}}\right).\tag{6}$$
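Note that under $H_0: \omega_{ij}^{(k)} = 0$, the bias term in (6) vanishes and the variance reduces to 1, so $S_{ij}^{(k)*}$ is asymptotically standard normal and yields a two-sided p-value directly. A minimal sketch of computing (5) follows; the function name is ours, and the residuals and coefficients are assumed to come from fitting the regressions in (4):

```python
# Compute the bias-corrected statistic S_ij^{(k)*} of Eq. (5) from residuals.
import numpy as np
from scipy.stats import norm

def liu_type_statistic(eps_i, eps_j, beta_ij, beta_ji):
    """eps_i, eps_j: length-n_k residual vectors from the regressions (4);
    beta_ij, beta_ji: fitted coefficients beta_hat_{i,j}^{(k)}, beta_hat_{j,i}^{(k)}."""
    n_k = len(eps_i)
    r_ii = np.mean(eps_i ** 2)                # r_ii^{(k)*}
    r_jj = np.mean(eps_j ** 2)                # r_jj^{(k)*}
    S = (np.sum(eps_i * eps_j)
         + np.sum(eps_i ** 2) * beta_ij
         + np.sum(eps_j ** 2) * beta_ji) / np.sqrt(n_k * r_ii * r_jj)
    p_value = 2 * (1 - norm.cdf(abs(S)))      # valid under H0: omega_ij^{(k)} = 0
    return S, p_value
```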

As in [4], the estimated coefficients $\hat{\boldsymbol{\beta}}_j^{(k)}$ must satisfy the following conditions:

$$\|\hat{\boldsymbol{\beta}}_j^{(k)} - \boldsymbol{\beta}_j^{(k)}\|_{\ell_1} = O_p(a_n^{(k)}),$$

$$\min \left\{ \lambda_{\max}^{1/2} (\boldsymbol{\Sigma}^{(k)}) \|\hat{\boldsymbol{\beta}}_j^{(k)} - \boldsymbol{\beta}_j^{(k)}\|_{\ell_2},\ \max_{1 \le j \le p} \sqrt{(\hat{\boldsymbol{\beta}}_j^{(k)} - \boldsymbol{\beta}_j^{(k)})^T \boldsymbol{\Sigma}_{-j,-j}^{(k)} (\hat{\boldsymbol{\beta}}_j^{(k)} - \boldsymbol{\beta}_j^{(k)})} \right\} = O_p(b_n^{(k)}),$$

where

$$a_n^{(k)} = o(\sqrt{\log p/n_k}) \quad \text{and} \quad b_n^{(k)} = o(n_k^{-1/4}).\tag{7}$$

Equation (6) is our main result; it is essentially a counterpart of Proposition 3.1 in [4], and its detailed proof is given in Appendix A. This asymptotic result suggests that, with an appropriate choice of the regression coefficients $\hat{\boldsymbol{\beta}}_j^{(k)}$, Liu's test can be readily extended to the nonparanormal framework via Winsorized imputation. Under GGMs, condition (7) is satisfied by several popular shrinkage estimators, including the lasso and the Dantzig selector. For the choice of $\hat{\boldsymbol{\beta}}_j^{(k)}$ under NPNGMs, one can use the rank-based method introduced by Xue and Zou (2012) [6], who showed that rank-based estimators (e.g., the rank-based lasso and the rank-based Dantzig selector) achieve exactly the same convergence rate as their oracle counterparts; hence they also satisfy our condition (7). One such choice is sketched below.
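As an illustration of one admissible choice, here is a minimal sketch of obtaining $\hat{\boldsymbol{\beta}}_j^{(k)}$ by a lasso fit of regression (4) on the Winsorized pseudo-data. The function name and the penalty level are our assumptions; the rank-based variants of Xue and Zou instead plug a rank-based correlation estimate into the lasso/Dantzig programs:

```python
# Nodewise lasso on the pseudo-data X* as one way to satisfy condition (7).
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_lasso(X_star, j, lam=None):
    """Fit X*_j ~ X*_{-j}; return (beta_hat_j, residuals eps*_j)."""
    n, p = X_star.shape
    if lam is None:
        lam = np.sqrt(np.log(p) / n)    # a common rate-based penalty choice
    y = X_star[:, j]
    Z = np.delete(X_star, j, axis=1)
    fit = Lasso(alpha=lam).fit(Z, y)    # the intercept plays the role of alpha_j^{(k)}
    return fit.coef_, y - fit.predict(Z)
```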
