Article

Estimation of Error Variance in Regularized Regression Models via Adaptive Lasso

1 Department of Applied Mathematics, Beijing Jiaotong University, Beijing 100044, China
2 Department of Statistics, University of Manitoba, Winnipeg, MB R3T 2N2, Canada
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(11), 1937; https://doi.org/10.3390/math10111937
Submission received: 24 April 2022 / Revised: 30 May 2022 / Accepted: 30 May 2022 / Published: 6 June 2022
(This article belongs to the Special Issue Statistical Methods for High-Dimensional and Massive Datasets)

Abstract: Estimation of the error variance in a regression model is a fundamental problem in statistical modeling and inference. In high-dimensional linear models, variance estimation is difficult, mainly due to the issue of model selection. In this paper, we propose a novel approach for variance estimation that combines the reparameterization technique and the adaptive lasso, which is called the natural adaptive lasso. This method can simultaneously select and estimate the regression and variance parameters. Moreover, we show that the natural adaptive lasso, for the regression parameters, is equivalent to the adaptive lasso. We establish the asymptotic properties of the natural adaptive lasso for the regression parameters and derive the mean squared error bound for the variance estimator. Our theoretical results show that, under appropriate regularity conditions, the natural adaptive lasso for the error variance is closer to the so-called oracle estimator than some other existing methods. Finally, Monte Carlo simulations are presented to demonstrate the superiority of the proposed method.

1. Introduction

Consider the linear regression model $y = x^T\beta + \varepsilon$, where $y \in \mathbb{R}$ is the response variable, $x \in \mathbb{R}^p$ is the predictor variable, $\beta \in \mathbb{R}^p$ is the unknown regression parameter and $\varepsilon \in \mathbb{R}$ is the random error with $\varepsilon \sim N(0, \sigma^2)$. Given an i.i.d. random sample $(y_i, x_i^T)^T \in \mathbb{R}^{p+1}$, $i = 1, \dots, n$, the model can be written in matrix form as $y = X\beta + \varepsilon$, where $y = (y_1, \dots, y_n)^T \in \mathbb{R}^n$, $X = [x_1, \dots, x_n]^T \in \mathbb{R}^{n \times p}$ and $\varepsilon = (\varepsilon_1, \dots, \varepsilon_n)^T \sim N(0, \sigma^2 I_n)$. In this paper, we are mainly interested in the high-dimensional sparse model, where $p \gg n$.
Regularized methods for simultaneous model selection and parameter estimation have been intensively studied in the literature, e.g., the lasso [1], smoothly clipped absolute deviation (SCAD) [2], adaptive lasso [3], bridge [4], adaptive elastic net [5], and minimax concave penalty (MCP) [6], as well as the Dantzig selector [7]. In addition, screening rules for dimension reduction are proposed, e.g., the sure independent screening method and iteratively sure independent screening method [8], lasso-based screening rules [9,10,11], etc.
However, most of these works focus on selection and estimation of the regression parameters, and few studies deal with estimation of the error variance, although it is a fundamental and crucial problem in statistical inference and regression analysis. In conventional linear models, the common residual-based estimator plays an important role in statistical inference and model checking. In high-dimensional models, however, variance estimation becomes difficult, mainly for two reasons. One is that the traditional residual-based methods may perform poorly or even fail; for example, the ordinary least squares method does not work when the number of covariates is greater than the sample size. The other reason is that it is difficult to select the true model accurately, since in practice the selected model often contains spurious variables that are correlated with the residuals, resulting in significant underestimation of the error variance (e.g., [12,13]).
Next, we provide some examples, where model error variance is involved and plays an important role.
Example 1
(Model selection). Penalization is a common approach to model selection and parameter estimation in high-dimensional linear models. The efficiency and accuracy of such methods depend on certain tuning parameters that are chosen using criteria such as Mallows's $C_p$, Akaike's information criterion (AIC) and the Bayesian information criterion (BIC). For example, the AIC and BIC for the lasso [14] are given by
$$ \mathrm{AIC}(X\hat{\beta}_{\lambda_n}, \sigma^2) = \frac{\|y - X\hat{\beta}_{\lambda_n}\|_2^2}{n\sigma^2} + \frac{2}{n}\,\mathrm{df}(X\hat{\beta}_{\lambda_n}) $$
and
$$ \mathrm{BIC}(X\hat{\beta}_{\lambda_n}, \sigma^2) = \frac{\|y - X\hat{\beta}_{\lambda_n}\|_2^2}{n\sigma^2} + \frac{\log(n)}{n}\,\mathrm{df}(X\hat{\beta}_{\lambda_n}), $$
respectively, where $\hat{\beta}_{\lambda_n}$ is the lasso estimator with tuning parameter $\lambda_n$ and the degrees of freedom $\mathrm{df}(X\hat{\beta}_{\lambda_n})$ is equal to the number of non-zero elements in $\hat{\beta}_{\lambda_n}$. It is easy to see that these criteria rely on the error variance.
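As an illustration (not from the paper, which provides no code here), the following Python sketch computes these two criteria from a lasso fit, with the error variance supplied as a plug-in value; the data-generating numbers are arbitrary.

```python
# Illustrative sketch: AIC/BIC of Example 1 for a lasso fit, with a plug-in sigma2.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, sigma2 = 100, 400, 1.0
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = [1.2, 0.8, 1.0]
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

lam = 0.1
beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

rss = np.sum((y - X @ beta_hat) ** 2)
df = np.count_nonzero(beta_hat)              # degrees of freedom = number of nonzeros
aic = rss / (n * sigma2) + 2.0 * df / n
bic = rss / (n * sigma2) + np.log(n) * df / n
print(f"df = {df}, AIC = {aic:.3f}, BIC = {bic:.3f}")
```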
Example 2
(Confidence intervals). For a least-squares-based penalized estimator $\hat{\beta}_{\lambda_n}$, let $\hat{A}$ be its index set corresponding to the non-vanishing parameters. If $\hat{\beta}_{\lambda_n}$ has the oracle property, then for each $i \in \hat{A}$, the $1-\alpha$ confidence interval for $\beta_i$ is given by
$$ \big[\hat{\beta}_i - z_{1-\alpha/2}\sqrt{c_i\sigma^2},\ \hat{\beta}_i + z_{1-\alpha/2}\sqrt{c_i\sigma^2}\big], $$
where $z_{1-\alpha/2}$ is the $(1-\alpha/2)$-th quantile of the standard normal distribution and $c_i$ is the $i$-th diagonal element of the matrix $(X_{\hat{A}}^T X_{\hat{A}})^{-1}$. It is clear that the above intervals depend on the variance parameter.
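A short Python sketch (my illustration, not the paper's code) of these oracle-style intervals, taking the design, a sparse estimate and a variance estimate as inputs:

```python
# Illustrative sketch: confidence intervals of Example 2 for the selected coefficients.
import numpy as np
from scipy.stats import norm

def selected_cis(X, beta_hat, sigma2_hat, alpha=0.05):
    """1 - alpha confidence intervals for the nonzero components of beta_hat."""
    A = np.flatnonzero(beta_hat)                 # estimated active set
    XA = X[:, A]
    C = np.linalg.inv(XA.T @ XA)                 # (X_A^T X_A)^{-1}
    z = norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(np.diag(C) * sigma2_hat)  # z_{1-alpha/2} * sqrt(c_i * sigma^2)
    return {int(i): (beta_hat[i] - h, beta_hat[i] + h) for i, h in zip(A, half)}
```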
Example 3
(Penalized second-order least squares estimation). The second-order least squares method in [15] extends the ordinary least squares method by simultaneously minimizing the first two order distances
$$ \rho_i(\beta, \sigma^2) = \big(y_i - x_i^T\beta,\ y_i^2 - (x_i^T\beta)^2 - \sigma^2\big)^T $$
and yields joint estimators for the regression and variance parameters. Under general conditions, the second-order least squares estimator has been shown to be asymptotically more efficient than the ordinary least squares estimator if the model error has a nonzero third moment, and they are equivalent otherwise. The regularized version of this method can be used in high-dimensional models.
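For concreteness (not from the paper), the two distances and an unweighted second-order least squares criterion built from them can be written as:

```python
# Illustrative sketch: the distances rho_i(beta, sigma2) of Example 3 and an
# unweighted second-order least squares criterion (a simplification of [15]).
import numpy as np

def sols_distances(X, y, beta, sigma2):
    """n x 2 matrix of rho_i = (y_i - x_i'beta, y_i^2 - (x_i'beta)^2 - sigma2)."""
    m = X @ beta
    return np.column_stack([y - m, y ** 2 - m ** 2 - sigma2])

def sols_objective(X, y, beta, sigma2):
    """Unweighted criterion: sum_i ||rho_i(beta, sigma2)||^2."""
    return float(np.sum(sols_distances(X, y, beta, sigma2) ** 2))
```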

1.1. Literature Review

Variance estimation in high-dimensional models has attracted increasing attention in recent years. Here, we briefly review some important advances in this area. First, if the true parameter vector $\beta^*$ were known, then the ideal variance estimator, called the oracle estimator, would be $\sigma^2_{\mathrm{oracle}} = (1/n)\sum_{i=1}^n (y_i - x_i^T\beta^*)^2$. Correspondingly, the estimator $\sigma^2_{\mathrm{naive}} = \sum_{i=1}^n (y_i - x_i^T\hat\beta)^2/n$, based on some estimator $\hat\beta$ of $\beta$, is called a naive estimator. Since the naive estimator is downward biased, a modified unbiased estimator is given by $\hat\sigma^2 = \sum_{i=1}^n (y_i - x_i^T\hat\beta)^2/(n - \hat{s})$, where $\hat{s} := \#\{i : \hat\beta_i \neq 0\}$ is the number of nonzero elements in $\hat\beta$. Unfortunately, when $p$ is much larger than $n$, even a small change in $\hat{s}$ will cause a huge fluctuation in $\hat\sigma^2$ if $\hat{s}$ is close to $n$.
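The three estimators just described can be coded in a few lines (an illustration, not part of the paper):

```python
# Illustrative sketch: oracle, naive and degrees-of-freedom-corrected variance estimators.
import numpy as np

def oracle_variance(X, y, beta_true):
    return np.mean((y - X @ beta_true) ** 2)

def naive_variance(X, y, beta_hat):
    return np.mean((y - X @ beta_hat) ** 2)

def df_corrected_variance(X, y, beta_hat):
    s_hat = np.count_nonzero(beta_hat)       # number of selected variables
    n = len(y)
    if s_hat >= n:
        raise ValueError("correction n - s_hat is not positive")
    return np.sum((y - X @ beta_hat) ** 2) / (n - s_hat)
```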
To overcome this problem, Ref. [16] estimated the mean and variance parameters jointly by maximizing a reparameterized likelihood with an $\ell_1$ penalty:
$$ (\hat\phi_{\lambda_n}, \hat\rho_{\lambda_n}) = \arg\min_{\phi \in \mathbb{R}^p,\ \rho \in \mathbb{R}_+} \left\{ -\log(\rho) + \frac{\|\rho y - X\phi\|_2^2}{2n} + \lambda_n\|\phi\|_1 \right\}, $$
where $\phi = \beta/\sigma$, $\rho = 1/\sigma$ and $\mathbb{R}_+ = \{x \in \mathbb{R} : x > 0\}$. Moreover, they proposed a generalized EM algorithm for the numerical optimization.
A refitted cross-validation (RCV) method for variance estimation was proposed in [12], and its asymptotic properties were studied. The main idea of RCV is to attenuate the influence of irrelevant variables with high spurious correlations via a data-splitting technique. Ref. [12] also discussed the asymptotic properties of the lasso-based estimator $\hat\sigma^2_{\mathrm{lasso}} = \sum_{i=1}^n (y_i - x_i^T\hat\beta_{\mathrm{lasso}})^2/(n - \hat{s}_{\mathrm{lasso}})$ and the SCAD-based estimator $\hat\sigma^2_{\mathrm{SCAD}} = \sum_{i=1}^n (y_i - x_i^T\hat\beta_{\mathrm{SCAD}})^2/(n - \hat{s}_{\mathrm{SCAD}})$, where $\hat\beta_{\mathrm{lasso}}$ and $\hat\beta_{\mathrm{SCAD}}$ are the least squares estimators with the $\ell_1$ penalty [1] and the SCAD penalty [2], respectively; $\hat{s}_{\mathrm{lasso}} = \#\{i : (\hat\beta_{\mathrm{lasso}})_i \neq 0\}$ and $\hat{s}_{\mathrm{SCAD}} = \#\{i : (\hat\beta_{\mathrm{SCAD}})_i \neq 0\}$.
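A minimal sketch of the data-splitting idea behind RCV (my own simplification, assuming the lasso is used for selection on each half; Ref. [12] discusses the details and refinements):

```python
# Illustrative sketch of the refitted cross-validation idea of [12]:
# select variables on one half, refit by OLS on the other half, average the two estimates.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def rcv_variance(X, y, lam, seed=0):
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    halves = (idx[: n // 2], idx[n // 2:])
    est = []
    for sel_idx, fit_idx in (halves, halves[::-1]):
        coef = Lasso(alpha=lam, fit_intercept=False).fit(X[sel_idx], y[sel_idx]).coef_
        A = np.flatnonzero(coef)
        if A.size == 0:                          # nothing selected: fall back to raw second moment
            est.append(np.mean(y[fit_idx] ** 2))
            continue
        ols = LinearRegression(fit_intercept=False).fit(X[fit_idx][:, A], y[fit_idx])
        resid = y[fit_idx] - ols.predict(X[fit_idx][:, A])
        est.append(np.sum(resid ** 2) / (len(fit_idx) - A.size))
    return 0.5 * (est[0] + est[1])
```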
Further, a scaled lasso was proposed in [17] for simultaneous estimation of the regression and variance parameters. Their model can be written as
$$ (\hat\beta_{\lambda_n}, \hat\sigma^2_{\lambda_n}) = \arg\min_{\beta \in \mathbb{R}^p,\ \sigma \in \mathbb{R}_+} \left\{ \frac{\|y - X\beta\|_2^2}{2n\sigma} + \frac{(1-a)\sigma}{2} + \lambda_n\|\beta\|_1 \right\}. $$
Under some regularity conditions, Ref. [17] proved the oracle inequalities for prediction and their estimators.
A moment estimator for the error variance, based on the covariance matrix Σ of the predictor variables, was studied in [18], where three cases were considered: Σ = I , estimable Σ and non-estimable Σ . A maximum likelihood method for the normally distributed noise was developed in [19].
Moreover, Ref. [13] considered another reparameterized likelihood with a lasso-type penalty,
$$ (\hat\theta_{\lambda_n}, \hat\phi_{\lambda_n}) \in \arg\min_{\theta \in \mathbb{R}^p,\ \phi \in \mathbb{R}_+} \left\{ -\frac{1}{2}\log\phi + \frac{\phi\|y\|_2^2}{2n} - \frac{1}{n}y^TX\theta + \frac{\|X\theta\|_2^2}{2n\phi} + \lambda_n\Omega(\theta, \phi) \right\}, \qquad (1) $$
where $\phi_{\lambda_n} = 1/\sigma^2_{\lambda_n}$ and $\theta_{\lambda_n} = \phi_{\lambda_n}\beta_{\lambda_n}$. In particular, they proposed two estimators: the natural lasso with $\Omega(\theta, \phi) = \|\theta\|_1$ and the organic lasso with $\Omega(\theta, \phi) = \phi^{-1}\|\theta\|_1^2$.
Finally, Ref. [20] proposed a ridge-based method to estimate the error variance under certain conditions, which is defined as follows:
$$ \hat\sigma^2 = \big\{1 - n^{-1}\operatorname{tr}(A_{1n})\big\}^{-1}\check\sigma^2, $$
where $\check\sigma^2 = n^{-1}y^T(I_n - A_{1n})y$, $A_{1n} = n^{-1}X(n^{-1}X^TX + \eta I_p)^{-1}X^T$ and $\eta$ is the tuning parameter. This method performs well in low-dimensional cases with weak signals, and it is suitable for sparse as well as non-sparse models.
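In code (an illustration, not the authors' implementation), this ridge-based estimator reads:

```python
# Illustrative sketch of the ridge-based variance estimator of [20].
import numpy as np

def ridge_variance(X, y, eta):
    n, p = X.shape
    # A_{1n} = n^{-1} X (n^{-1} X^T X + eta I_p)^{-1} X^T
    A1n = X @ np.linalg.solve(X.T @ X / n + eta * np.eye(p), X.T) / n
    sigma_check2 = y @ (y - A1n @ y) / n               # n^{-1} y^T (I_n - A_{1n}) y
    return sigma_check2 / (1.0 - np.trace(A1n) / n)    # {1 - n^{-1} tr(A_{1n})}^{-1} sigma_check^2
```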

1.2. Notation and Outline

Throughout the paper, let $A_0 := \{i : \beta_i^* \neq 0\}$ be the index set and $s := \#\{i : \beta_i^* \neq 0\}$ be the number of the nonzero elements of $\beta^*$. Given a design matrix $X$ and a subset $A$ of $\{1, \dots, p\}$, $X_i$ denotes the $i$-th column vector of $X$, and $X_A$ denotes the sub-matrix consisting of the columns with indices in $A$. For vectors $x, y \in \mathbb{R}^p$, $x \circ y := (x_1y_1, \dots, x_py_p)^T$ denotes the Hadamard product. Moreover, let $1/|x|$ or $|x|^{-1} = (1/|x_1|, \dots, 1/|x_p|)^T$, $\operatorname{sgn}(x) = (\operatorname{sgn}(x_1), \dots, \operatorname{sgn}(x_p))^T$, $\operatorname{sign}(x) = (\operatorname{sign}(x_1), \dots, \operatorname{sign}(x_p))^T$ and $\|x\|_1 = |x_1| + \cdots + |x_p|$, where
$$ \operatorname{sgn}(t) = \begin{cases} 0, & t \neq 0, \\ 1, & t = 0, \end{cases} \qquad \operatorname{sign}(t) = \begin{cases} 1, & t > 0, \\ 0, & t = 0, \\ -1, & t < 0, \end{cases} \qquad \partial|t| = \begin{cases} \{1\}, & t > 0, \\ [-1, 1], & t = 0, \\ \{-1\}, & t < 0. \end{cases} $$
The rest of this paper is organized as follows: Section 2 defines and describes the proposed natural adaptive lasso, and Section 3 gives its asymptotic properties. Section 4 deals with the numerical optimization of the proposed estimators. Monte Carlo simulation studies of finite sample properties are provided in Section 5. The conclusions and discussion are given in Section 6, while the mathematical proofs are given in Section 7.

2. Natural Adaptive Lasso (NAL)

Some researchers, e.g., Refs. [13,16], used a reparameterized likelihood to jointly estimate the mean and variance parameters in high-dimensional linear models. In particular, the method of [13] has good performance, and the associated numerical computation can be converted to some simple optimization procedures. However, the natural lasso in [13] always overestimates the error variance, due to over-selection of the covariates. This motivates us to consider the more general adaptive-lasso penalty, to further improve the properties of the estimators. Consider the following adaptively weighted $\ell_1$-penalized likelihood
$$ (\hat\theta_{\lambda_n}, \hat\phi_{\lambda_n}) \in \arg\min_{\theta \in \mathbb{R}^p,\ \phi \in \mathbb{R}_+} \big\{ L(\theta, \phi) + \lambda_n\|w \circ \theta\|_1 \big\}, \qquad (2) $$
where $L(\theta, \phi)$ is the reparameterized likelihood in (1), $\lambda_n$ is the tuning parameter and $w := (w_1, \dots, w_p)^T$ is the adaptive weight vector. Given a solution $(\hat\theta_{\lambda_n}, \hat\phi_{\lambda_n})$ of problem (2), the natural adaptive lasso estimators (NALE) for $\beta$ and $\sigma^2$ are given by
$$ \hat\beta_{\lambda_n} = \frac{\hat\theta_{\lambda_n}}{\hat\phi_{\lambda_n}}, \qquad \hat\sigma^2_{\lambda_n} = \frac{1}{\hat\phi_{\lambda_n}}. \qquad (3) $$
It is easy to see that, when $w = (1, \dots, 1)^T$, the NALE reduces to the natural lasso estimator (NLE) of [13].
Note that the quality of the NALE depends on the weight vector $w$. It follows from Proposition 1 in Section 3 that the weight $w$ in problem (2) plays the same role as in the adaptive lasso estimation of the regression coefficients alone, which solves the following convex optimization problem:
$$ \hat\beta_{\mathrm{ada}} = \arg\min_{\beta \in \mathbb{R}^p}\ \frac{1}{n}\|y - X\beta\|_2^2 + 2\lambda_n\|w \circ \beta\|_1, \qquad (4) $$
where the weight $w$ depends on an initial estimator $\tilde\beta^{\mathrm{ini}}$. As indicated by [3], any root-$n$ consistent estimator can be used as the initial estimator for $\beta$. For example, the least squares estimator $\hat\beta_{\mathrm{ols}} := (X^TX)^{-1}X^Ty$ can be used, and the weight vector is calculated as $w = 1/|\hat\beta_{\mathrm{ols}}|^\gamma$, $\gamma > 0$. Ref. [4] discusses the selection of the initial estimators in linear models with $\log p = O(n^a)$ for some $a \in (0, 1)$. They show that their marginal regression estimator can be used in the adaptive lasso to yield the desirable selection and estimation properties. In addition, the weight $w$ in the adaptive elastic net [5], for moderate-dimensional models ($\log p = O(\log n)$), can be constructed as $w = 1/|\hat\beta_{\mathrm{net}} + (1/n)\operatorname{sgn}(\hat\beta_{\mathrm{net}})|^\gamma$, $\gamma > 0$, where $\hat\beta_{\mathrm{net}}$ is the elastic-net estimator. In this paper, we use the following two-step procedure to calculate the weight vector.
Step 1: Solve the lasso problem to obtain the lasso estimator $\hat\beta_{\mathrm{lasso}}$, which is used as the initial estimator $\tilde\beta^{\mathrm{ini}}$.
Step 2: Set $w$ with $w_j = p'_{\lambda_n}(|\tilde\beta_j^{\mathrm{ini}}|)$, where $j = 1, \dots, p$ and $p'_{\lambda_n}$ is the derivative of a folded-concave penalty function (such as SCAD, MCP or bridge). A code sketch of this two-step construction is given below.
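The following Python sketch (my illustration; the paper's own computations were done in Matlab) implements the two-step weight construction with the SCAD-derivative-based weight of Section 3.1; the lasso tuning value and $a = 3.7$ are conventional choices, not values prescribed by the paper.

```python
# Illustrative sketch: two-step weight construction, w_j = p'_{lambda}(|beta_tilde_j^ini|).
import numpy as np
from sklearn.linear_model import Lasso

def scad_derivative_weight(t, lam, a=3.7):
    """Scaled SCAD derivative: 1 for |t| <= lam, (a*lam - |t|)_+ / ((a-1)*lam) otherwise."""
    t = np.abs(t)
    return np.where(t <= lam, 1.0, np.maximum(a * lam - t, 0.0) / ((a - 1.0) * lam))

def nal_weights(X, y, lam_init, lam_scad, a=3.7):
    # Step 1: lasso initial estimator beta_tilde^ini.
    beta_init = Lasso(alpha=lam_init, fit_intercept=False).fit(X, y).coef_
    # Step 2: folded-concave (SCAD) derivative evaluated at |beta_tilde^ini|.
    return scad_derivative_weight(beta_init, lam_scad, a), beta_init
```

Large initial coefficients receive a weight of zero (no shrinkage), while coefficients near zero receive weight one, which is the behaviour used in Remark 1 and the theory that follows.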
Remark 1.
From [7,21,22], under some regularity conditions, the lasso is consistent with a near-oracle rate $\sqrt{s\log p/n}$ and has the sure-screening property, i.e.,
$$ \|\hat\beta_{\mathrm{lasso}} - \beta^*\|_2 \le O\big(\sqrt{s\log(p)/n}\big), \qquad \operatorname{supp}(\hat\beta_{\mathrm{lasso}}) \supseteq \operatorname{supp}(\beta^*). $$
Further, based on the order of the bias of the lasso, under suitable conditions on the minimum signal strength (see the first part of Condition 4 in Section 7) and the choice of the tuning parameter, $w_{A_0}$ will be close, or even equal, to the zero vector when $n$ is sufficiently large, if a folded-concave penalty such as SCAD is used. These properties play an important role in some of the conclusions that follow.

3. Asymptotic Properties

In this section, we first establish the relationship between the NALE and the adaptive lasso, and then analyze the asymptotic properties of the NALE for $\sigma^2$.
Proposition 1.
The NALE $(\hat\beta_{\lambda_n}, \hat\sigma^2_{\lambda_n})$, defined in (3), where $(\hat\theta_{\lambda_n}, \hat\phi_{\lambda_n})$ is a solution of (2), satisfies the following properties:
(i) 
β ^ λ n is a solution of the adaptive lasso (4);
(ii) 
$\hat\sigma^2_{\lambda_n}$ is the optimal value of the objective function of the adaptive lasso (4). Furthermore, we have $\hat\sigma^2_{\lambda_n} = n^{-1}(\|y\|_2^2 - \|X\hat\beta_{\lambda_n}\|_2^2)$.
The results of Proposition 1 are instrumental in the derivation of the other theoretical results in this paper. Moreover, they also provide a way to calculate the NALE for $\beta$ and $\sigma^2$. It is well known that the adaptive lasso (4) is a convex optimization problem, and many existing optimization tools can be used to solve it.
Note that, since
$$ \hat\sigma^2_{\lambda_n} = \frac{1}{n}\|y - X\hat\beta_{\lambda_n}\|_2^2 + 2\lambda_n\|w \circ \hat\beta_{\lambda_n}\|_1 = \hat\sigma^2_{\mathrm{naive}}(\hat\beta_{\lambda_n}) + 2\lambda_n\|w \circ \hat\beta_{\lambda_n}\|_1 \qquad (5) $$
and $\|w \circ \hat\beta_{\lambda_n}\|_1 = \|w_{A_0} \circ \hat\beta_{A_0}\|_1$ will be close, or even equal, to zero for suitably chosen $w$, the NALE for $\sigma^2$ will be close to the naive estimator if $\lambda_n \to 0$. As mentioned before, the naive estimator for $\sigma^2$, based on the adaptive lasso estimator $\hat\beta_{\lambda_n}$, may work well when the non-zero variables are selected accurately. However, when more irrelevant variables are selected, the value of the penalty term will not be close to 0 in finite samples, so the naive estimator for $\sigma^2$ will always underestimate the true error variance; in this case, the penalty term mitigates the difference between the naive estimator and the true variance. Although the form of the natural lasso estimator of [13] is similar to (5), their method often tends to over-select predictors, due to the use of a lasso penalty. In addition, the value of the penalty term in [13] remains large because it is not controlled by a weight vector. These facts explain why the natural lasso estimator for $\sigma^2$ tends to be larger than the true error variance in the simulation studies in [13].
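The identity in Proposition 1(ii) and display (5) can be checked numerically; the sketch below (not from the paper) solves the adaptive lasso via the usual column-rescaling trick, which is valid when all weights are strictly positive, and compares the two expressions for $\hat\sigma^2_{\lambda_n}$.

```python
# Illustrative numerical check of Proposition 1(ii):
#   sigma2_hat = (1/n)||y - X b||^2 + 2*lam*||w o b||_1 = (1/n)(||y||^2 - ||X b||^2).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, lam = 100, 200, 0.1
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = [1.2, -0.8, 1.0]
y = X @ beta + rng.standard_normal(n)
w = rng.uniform(0.5, 2.0, size=p)                 # strictly positive adaptive weights

# min_b (1/n)||y - Xb||^2 + 2*lam*||w o b||_1  <=>  plain lasso in b'_j = w_j b_j, design X_j / w_j.
model = Lasso(alpha=lam, fit_intercept=False).fit(X / w, y)
beta_hat = model.coef_ / w

lhs = np.mean((y - X @ beta_hat) ** 2) + 2 * lam * np.sum(w * np.abs(beta_hat))
rhs = (np.sum(y ** 2) - np.sum((X @ beta_hat) ** 2)) / n
print(lhs, rhs)   # the two values agree up to the solver's tolerance
```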
Next, we establish a key inequality for the NALE for $\sigma^2$.
Lemma 1.
If $\lambda_n \ge \|\frac{1}{n}X^T\varepsilon\|_\infty$, then
$$ \Big|\hat\sigma^2_{\lambda_n} - \frac{1}{n}\|\varepsilon\|_2^2\Big| \le 2\lambda_n\max\big\{\|w \circ \beta^*\|_1,\ \|\hat\beta_{\lambda_n} - \beta^*\|_1\big\}. $$
The above inequality is deterministic, in that it does not rely on any statistical assumptions on $X$ and $\varepsilon$. Unlike Lemma 1 in [13], the proof of this result uses the fact that any vector $\beta$ provides an upper bound on $\hat\sigma^2_{\lambda_n}$, together with the convexity of the loss function. In addition, if $w = (1, \dots, 1)^T$ and $O(\|\hat\beta_{\lambda_n} - \beta^*\|_1) \le O(\|\beta^*\|_1)$, Lemma 1 reduces to Lemma 1 in [13]. If $w$ is close or equal to zero and $O(\|\hat\beta_{\lambda_n} - \beta^*\|_1) \le O(\|\beta^*\|_1)$, then the bound on the right-hand side of the inequality in Lemma 1 is lower than that for the natural lasso and organic lasso in [13].

3.1. Adaptive Lasso

It follows from Lemma 1 that the error bound of the NALE for $\sigma^2$ is controlled by the convergence rate of the adaptive-lasso estimator $\hat\beta_{\lambda_n}$. Therefore, it is necessary to establish the asymptotic properties of $\hat\beta_{\lambda_n}$. The results in this subsection are similar to those in [23]. All regularity conditions and proofs are given in Section 7.
Theorem 1.
Suppose Conditions 1–3 hold. Assume that
$$ \min_{i \in A_0^c} w_i^* > C_1^{-1}, \qquad \lambda_n = 4C_1\sigma\sqrt{(2\log p + 2L)/n} $$
and $\sqrt{s(\log(p) + L)/n} \to 0$, where $C_1$ is some positive constant and $L > 0$. Then, with probability at least $1 - e^{-L}$, there exists a unique minimizer $\hat\beta_{\lambda_n} = (\hat\beta_{A_0}^T, \hat\beta_{A_0^c}^T)^T$ of problem (4), such that $\hat\beta_{A_0^c} = 0$ and $\|\hat\beta_{\lambda_n} - \beta^*\|_2 \le a_n$, where
$$ a_n = C_4\Big(\sqrt{s(2\log p + 2L)/n} + 2\lambda_n\big(\|w_{A_0}^*\|_2 + C_2C_3\sqrt{s(\log p)/n}\big)\Big) $$
with some constant $C_4 > 0$; $C_2$ and $C_3$ are defined in the regularity conditions.
It follows from inequality (18) that the extra term $\lambda_n\sqrt{s(\log p)/n}$ in $a_n$ is due to the bias of the initial estimator $\tilde\beta^{\mathrm{ini}}$. When $\lambda_n$ tends to zero, the order of the extra term is $o(\sqrt{s(\log p)/n})$. Thus, under some general conditions, the convergence rate of $\hat\beta_{\lambda_n}$ is $O(\sqrt{s(\log(p) + L)/n})$. Usually, the order of $L$ is $O(\log p)$.
We now present the asymptotic normality of the adaptive-lasso estimator $\hat\beta_{\lambda_n}$.
Theorem 2.
Assume that the conditions of Theorem 1 hold. Let $s_n^2 = (1/n)\sigma^2\alpha_n^TX_{A_0}^TX_{A_0}\alpha_n$ for any $\alpha_n \in \mathbb{R}^s$ satisfying $\|\alpha_n\|_2 \le 1$. Then, under Conditions 1–4, with probability at least $1 - e^{-L}$, the minimizer $\hat\beta_{\lambda_n}$ in Theorem 1 satisfies
$$ n^{1/2}s_n^{-1}\alpha_n^T(\hat\beta_{A_0} - \beta_{A_0}^*) + n^{3/2}s_n^{-1}\lambda_n\alpha_n^T(X_{A_0}^TX_{A_0})^{-1}(w_{A_0}^* \circ g_{A_0}^*) = n^{1/2}s_n^{-1}\alpha_n^T(X_{A_0}^TX_{A_0})^{-1}X_{A_0}^T\varepsilon + o_p(1) \xrightarrow{D} N(0, 1), $$
where $g_{A_0}^* \in \partial\|\beta_{A_0}^*\|_1$.
The result of Theorem 2 is consistent with the asymptotic normality, for the bridge estimator of β in [4]. The only difference is in the form of the penalty function.
Next, we consider the convergence performance of the specific adaptive-lasso estimator $\hat\beta_{\lambda_n}$ with a weight vector determined by the SCAD penalty [23], whose (scaled) derivative is
$$ p'_{\lambda_n}(|t|) = 1\{|t| \le \lambda_n^{\mathrm{SCAD}}\} + \frac{(a\lambda_n^{\mathrm{SCAD}} - |t|)_+}{(a-1)\lambda_n^{\mathrm{SCAD}}}\,1\{|t| > \lambda_n^{\mathrm{SCAD}}\}, $$
where $a > 2$ is a given constant and $(\cdot)_+ := \max\{0, \cdot\}$. Usually, the order of $\lambda_n^{\mathrm{SCAD}}$ is $O(\sqrt{s(\log p + L)/n})$. By definition, it holds that $w_{A_0}^* = 0$, and Condition 4 is satisfied when $\min_{i \in A_0}|\beta_i^*| \ge 2a\lambda_n^{\mathrm{SCAD}}$. Thus, we have the following result.
Corollary 1.
Assume that the conditions of Theorem 1 hold. Then, under Conditions 1–4, with probability at least $1 - e^{-L}$, there exists a unique minimizer $\hat\beta_{\lambda_n} = (\hat\beta_{A_0}^T, \hat\beta_{A_0^c}^T)^T$ of problem (4), such that
$$ \|\hat\beta_{\lambda_n} - \beta^*\|_2 \le O\big(\sqrt{s(\log p + L)/n}\big), \qquad \operatorname{sgn}(\hat\beta_{A_0}) = \operatorname{sgn}(\beta_{A_0}^*) \quad \text{and} \quad \hat\beta_{A_0^c} = 0. $$
Furthermore, $\hat\beta_{\lambda_n}$ satisfies
$$ n^{1/2}s_n^{-1}\alpha_n^T(\hat\beta_{A_0} - \beta_{A_0}^*) = n^{1/2}s_n^{-1}\alpha_n^T(X_{A_0}^TX_{A_0})^{-1}X_{A_0}^T\varepsilon \xrightarrow{D} N(0, 1), $$
where $s_n^2 = (1/n)\sigma^2\alpha_n^TX_{A_0}^TX_{A_0}\alpha_n$ for any $\alpha_n \in \mathbb{R}^s$ satisfying $\|\alpha_n\|_2 \le 1$.
The rates of convergence of the estimators in Theorem 1 and Corollary 1 are controlled by the distributions of the random error and the predictor matrix. Moreover, these results can be generalized to other situations, where the random error follows a sub-Gaussian or sub-exponential distribution.

3.2. Error Bounds of NALE

In this subsection, we establish the error bound for the NALE of $\sigma^2$. It follows from (14) that, under the conditions of Theorem 1, $\lambda_n \ge \|(1/n)X^T\varepsilon\|_\infty$ holds with probability at least $1 - e^{-L}$. Since $\sqrt{s(\log(p) + L)/n} \to 0$, we have $a_n \to 0$. Thus, in order to establish the asymptotic properties of the NALE for $\sigma^2$, we still need to determine the order of $\lambda_n\|w \circ \beta^*\|_1$. By Condition 2 and Theorem 1, we have
$$ \|w \circ \beta^*\|_1 = \sum_{i \in A_0} w_i|\beta_i^*| \le \sum_{i \in A_0}\big(C_3|\hat\beta_i - \beta_i^*| + w_i^*\big)|\beta_i^*| \le C_3a_n\|\beta^*\|_1 + \|w^* \circ \beta^*\|_1. $$
Thus, we have the following result on the error bound of the NALE for $\sigma^2$.
Theorem 3.
Under the conditions of Theorem 1, the NALE for $\sigma^2$ has the following error bound, with probability at least $1 - e^{-L}$:
$$ \Big|\hat\sigma^2_{\lambda_n} - \frac{1}{n}\|\varepsilon\|_2^2\Big| \le b_n, $$
where $b_n = 2\lambda_n\max\big\{C_3a_n\|\beta^*\|_1 + \|w^* \circ \beta^*\|_1,\ \sqrt{s}\,a_n\big\}$.
The proof of the above result follows straightforwardly from Lemma 1 and Theorem 1, so it is omitted. Since $a_n \to 0$, $\|w^* \circ \beta^*\|_1$ is close or equal to zero, and the order of $\lambda_n$ for the adaptive lasso is $O(\sqrt{(\log(p) + L)/n})$, we have $b_n = o(\sqrt{s(\log(p) + L)/n})$. It follows that, when $L = O(\log(p))$, the error bound of the NALE for $\sigma^2$ is smaller than that of the NLE, OLE and SLE when $n$ is sufficiently large. In the following, we analyze the mean squared error bound for the NALE of $\sigma^2$.
Theorem 4.
Under the conditions of Theorem 1, for any $M > 1$ and $\lambda_n = 4C_1\sigma\sqrt{2M\log p/n}$, the NALE for $\sigma^2$ satisfies
$$ \mathbb{E}\left(\frac{\hat\sigma^2_{\lambda_n}}{\sigma^2} - 1\right)^2 \le \left\{\left(M + \frac{p^{1-M}}{\log p}\right)^{1/2}\frac{b_n}{\sigma^2} + \left(\frac{2}{n}\right)^{1/2}\right\}^2. $$
Note that the above mean squared error bound of the NALE for $\sigma^2$ is lower than that for the NLE, OLE and SLE estimators. Finally, we consider the case using the SCAD penalty. Then, by Theorem 3 and the fact that $\|w \circ \beta^*\|_1 = 0$ under the condition on the minimum signal strength, we have the following result.
Corollary 2.
Under the conditions of Corollary 1, the NALE for $\sigma^2$ using the SCAD has the following error bound, with probability at least $1 - e^{-L}$:
$$ \Big|\hat\sigma^2_{\lambda_n} - \frac{1}{n}\|\varepsilon\|_2^2\Big| \le 2\sqrt{s}\,\lambda_na_n. $$
Further, by Theorem 4 and Corollary 2, we have the mean squared error bound of the NALE for $\sigma^2$ using the SCAD.
Corollary 3.
Under the conditions of Corollary 1, for any $M > 1$, the NALE for $\sigma^2$ using the SCAD with $\lambda_n = 4C_1\sigma\sqrt{2M\log p/n}$ satisfies the following relative mean squared error bound:
$$ \mathbb{E}\left(\frac{\hat\sigma^2_{\lambda_n}}{\sigma^2} - 1\right)^2 \le \left\{\left(M + \frac{p^{1-M}}{\log p}\right)^{1/2}\frac{2\sqrt{s}\,\lambda_na_n}{\sigma^2} + \left(\frac{2}{n}\right)^{1/2}\right\}^2. $$

4. Numerical Optimization

In this section, we study the numerical optimization of the NALE. Proposition 1 provides an easy way to calculate the NALE for $\sigma^2$ through existing optimization tools for the adaptive lasso (4). Given the tuning parameter $\lambda_n$, we consider the proximal gradient algorithm (PGA) for this problem, which has the following steps (a code sketch is given at the end of this section):
  • Initialization: take an initial value $\beta^0 \in \mathbb{R}^p$ and $\tau \in (0, \tau^*)$.
  • Iterative step: $\beta^{k+1} = \operatorname{prox}_{\tau\lambda_n\|\cdot\|_1}\big(\beta^k - \frac{2\tau}{n}X^T(X\beta^k - y)\big)$.
In the above framework, $1/\tau^*$ is taken to be the Lipschitz constant of $\nabla Q_n(\beta)$, $Q_n(\beta) = (1/n)\|y - X\beta\|_2^2$, such that for any $\beta_1, \beta_2 \in \mathbb{R}^p$,
$$ \|\nabla Q_n(\beta_1) - \nabla Q_n(\beta_2)\|_2 \le \frac{1}{\tau^*}\|\beta_1 - \beta_2\|_2. $$
Usually, $1/\tau^* = (2/n)\lambda_{\max}(X^TX)$. In addition, by the definition of the proximal mapping,
$$ \operatorname{prox}_{\tau\lambda_n\|\cdot\|_1}\Big(\beta^k - \frac{2\tau}{n}X^T(X\beta^k - y)\Big) = \arg\min_{\beta \in \mathbb{R}^p}\ \frac{1}{2}\Big\|\beta - \Big(\beta^k - \frac{2\tau}{n}X^T(X\beta^k - y)\Big)\Big\|_2^2 + \tau\lambda_n\|\beta\|_1. $$
By simple calculation,
$$ \beta^{k+1} = \Big(\Big|\beta^k - \frac{2\tau}{n}X^T(X\beta^k - y)\Big| - \tau\lambda_n\mathbf{1}\Big)_+ \circ \operatorname{sign}\Big(\beta^k - \frac{2\tau}{n}X^T(X\beta^k - y)\Big). $$
Finally, the PGA is terminated when either the sequence $\{\beta^k\}$ meets the criterion
$$ \frac{\|\beta^{k+1} - \beta^k\|_2}{\max\{1, \|\beta^k\|_2\}} \le \epsilon, $$
or the maximum number of iterations is reached.
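A minimal NumPy sketch of this iteration (not the authors' Matlab implementation), written for the weighted objective of problem (4); taking $w = 1$ recovers the unweighted update displayed above, and the step size and stopping rule follow the description in this section.

```python
# Proximal gradient sketch for the adaptive-lasso objective (4):
#   F(beta) = (1/n)||y - X beta||_2^2 + 2*lam*||w o beta||_1.
import numpy as np

def proximal_gradient_adaptive_lasso(X, y, lam, w=None, tol=1e-8, max_iter=5000):
    n, p = X.shape
    w = np.ones(p) if w is None else np.asarray(w, dtype=float)
    # 1/tau* = Lipschitz constant of grad Q_n = (2/n) * lambda_max(X^T X); take tau just below tau*.
    tau = 0.99 * n / (2.0 * np.linalg.eigvalsh(X.T @ X)[-1])
    beta = np.zeros(p)
    for _ in range(max_iter):
        z = beta - (2.0 * tau / n) * (X.T @ (X @ beta - y))                      # gradient step
        beta_new = np.sign(z) * np.maximum(np.abs(z) - 2.0 * tau * lam * w, 0.0)  # weighted soft-thresholding
        if np.linalg.norm(beta_new - beta) / max(1.0, np.linalg.norm(beta)) <= tol:
            beta = beta_new
            break
        beta = beta_new
    sigma2_hat = (np.sum(y ** 2) - np.sum((X @ beta) ** 2)) / n   # NALE for sigma^2 (Proposition 1)
    return beta, sigma2_hat
```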

5. Numerical Simulations

In this section, we carry out Monte Carlo simulations to study the finite-sample performance of the NALE, with the weight calculated using the SCAD penalty. Further, we compare the NALE with the square-root/scaled lasso (SLE) [17], the natural lasso (NLE) [13], the organic lasso (OLE) [13] and the ridge-based estimator (RBE) [20]. We also include the oracle estimator (OE) $(1/n)\|\varepsilon\|_2^2$ as a benchmark in the comparisons. All numerical computation was done using Matlab. The programs are available in the Supplementary Materials or upon request from the first author of this paper.

5.1. Simulation Settings

Following [23], throughout the simulations we use the sample size $n = 100$ and parameter dimension $p = 400$. Further, each row of the design matrix $X$ is generated from the multivariate normal distribution $N(0, \Sigma)$, with $\Sigma_{ij} = \rho^{|i-j|}$ and $\rho \in (0, 1)$. The sparsity of $\beta^*$ is set according to $n^\alpha$, and the locations of the nonzero elements in $\beta^*$ are determined randomly. We consider various parameter values, $\rho \in \{0.1, 0.3, 0.5\}$, $\alpha \in \{0.1, 0.2, 0.3, 0.4, 0.5\}$ and $\sigma^2 \in \{0.5, 1\}$, and use the following true regression parameter vectors:
$\alpha = 0.1$: $\beta^* = (1.2, 0.8, 0, \dots, 0)^T$;
$\alpha = 0.2$: $\beta^* = (1.2, 0.8, 1, 0, \dots, 0)^T$;
$\alpha = 0.3$: $\beta^* = (1.2, 0.8, 1, 0.6, 0, \dots, 0)^T$;
$\alpha = 0.4$: $\beta^* = (1.2, 0.8, 1, 0.6, 0.8, 0.9, 1.2, 0, \dots, 0)^T$;
$\alpha = 0.5$: $\beta^* = (1.2, 0.8, 1, 0.6, 0.8, 0.9, 1.2, 0.4, 0.9, 1.1, 0, \dots, 0)^T$.
We have also considered other variance settings, such as $\sigma^2 \in \{3, 5\}$; however, the simulation results are similar to those of the above settings and are, therefore, not included. To assess the performance of the estimators, we calculate the average mean squared error (MSE) $\hat{\mathbb{E}}\{(\sigma^{-1}\hat\sigma - 1)^2\}$ and the average relative error (RE) $\hat{\mathbb{E}}(\sigma^{-1}\hat\sigma)$, based on 100 Monte Carlo runs.
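The design just described can be generated as follows (a Python illustration, not the authors' Matlab code); the default nonzero coefficients correspond to the $\alpha = 0.2$ setting, and the two performance measures follow the definitions above.

```python
# Illustrative sketch of the simulation design of Section 5.1 and the MSE/RE measures.
import numpy as np

def generate_data(n=100, p=400, rho=0.3, sigma2=1.0, beta_nonzero=(1.2, 0.8, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # Sigma_ij = rho^|i-j|
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    beta = np.zeros(p)
    pos = rng.choice(p, size=len(beta_nonzero), replace=False)          # random nonzero locations
    beta[pos] = beta_nonzero
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    return X, y, beta

def mse_re(sigma_hats, sigma):
    """MSE = mean((sigma_hat/sigma - 1)^2), RE = mean(sigma_hat/sigma)."""
    r = np.asarray(sigma_hats) / sigma
    return np.mean((r - 1.0) ** 2), np.mean(r)
```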

5.2. Selection of Tuning Parameters

Usually, five-fold cross-validation can be used to select the tuning parameters for each estimator, which is fairly expensive. In order to reduce the computational cost, we consider the following fixed choices of tuning parameters for all estimators, except for the NLE and NALE.
For the SLE, we consider three penalty levels $\lambda_{n,i} = 2^{i-1}\sqrt{(\log p)/n}$, $i = 1, 2, 3$, which is similar to Example 1 in [17]. Then, the best estimator is selected as the final SLE estimator. Indeed, Ref. [17] found that $\lambda_{n,2}$ works very well for the SLE. By the simulation results of [13], the OLE with $\lambda_{n,1} = \sqrt{\log(p)/n}$ and $\lambda_{n,2} = \mathbb{E}(\|n^{-1}X^T\epsilon\|_\infty^2)$, where $\epsilon \sim N(0, I_n)$, performed very well. From [20,24], the tuning parameter used in the RBE is calculated by setting $\eta = \alpha\max_{1 \le i \le p}|X_i^Ty|/(np)$ with $\alpha = 0.1$.
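These fixed choices can be computed as in the sketch below (my illustration); the expectation in the OLE choice is approximated by Monte Carlo, and the exact form $\mathbb{E}(\|n^{-1}X^T\epsilon\|_\infty^2)$ is my reading of the formula above, so treat it as an assumption.

```python
# Illustrative sketch of the fixed tuning-parameter choices of Section 5.2.
import numpy as np

def sle_lambdas(n, p):
    # lambda_{n,i} = 2^{i-1} * sqrt(log(p)/n), i = 1, 2, 3
    return [2 ** (i - 1) * np.sqrt(np.log(p) / n) for i in (1, 2, 3)]

def ole_lambda2(X, n_mc=200, seed=0):
    # Monte Carlo approximation of E || n^{-1} X^T eps ||_inf^2 with eps ~ N(0, I_n)
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    eps = rng.standard_normal((n, n_mc))
    return float(np.mean(np.max(np.abs(X.T @ eps), axis=0) ** 2) / n ** 2)

def rbe_eta(X, y, alpha=0.1):
    # eta = alpha * max_i |X_i^T y| / (n * p)
    n, p = X.shape
    return alpha * np.max(np.abs(X.T @ y)) / (n * p)
```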

5.3. Simulation Results

In each simulation, 100 runs are carried out to calculate the averages of the performance measures. The results are presented in Table 1, Table 2, Table 3 and Table 4. These results show that, overall, both the MSE and RE of the NALE are very close to those of the OE, and are remarkably better than the other estimators in most of the cases. However, in a few cases, such as $\rho = 0.3$ and $\alpha = 0.5$ with both $\sigma^2 = 0.5$ and $\sigma^2 = 1$, the NALE has a slightly larger MSE than the NLE, although it has a smaller RE than the latter. As expected, the NLE often overestimates the true value, due to the bias and over-selection of the lasso. Moreover, in the cases where the NLE has a relatively large MSE, the NALE tends to have a large MSE as well, indicating that poor performance of the NLE will impact the performance of the NALE, since it is used as the initial estimator. Finally, Ref. [20] reported that the RBE performs well in cases with relatively small $p$ and weak signals; however, it performs poorly and is even ineffective in the settings of our simulations.
We further summarize the performance of the various methods using boxplots, in Figure 1 and Figure 2. As one can easily see, the NALE is accurate and stable in all cases, while the OLE is less accurate, although it is still fairly stable. Further, the NALE performs well in extremely sparse scenarios. Another interesting point is that the NALE inherits the variable selection and parameter estimation properties of the adaptive lasso. Although we focus on variance estimation in this work, the method performs well in estimating the regression coefficients as well.

6. Conclusions and Discussion

We proposed a novel approach for variance estimation that combines the reparameterized log-likelihood function and adaptive-lasso penalization. We have established the asymptotic properties of the NALE. The theory in this paper shows that the NALE converges at a faster rate than some other existing estimators, including the NLE, OLE and SLE. In addition, the NAL is closely related to the adaptive lasso, which makes its numerical calculation straightforward. We have used the PGA to obtain the NALE in the numerical simulations. Our simulation results show that the NALE performs well and compares favorably against other existing methods in most finite sample situations, especially in extremely sparse scenarios. However, the quality of the NALE depends on that of the initial estimator used in its numerical optimization, and poor performance of the initial estimator may result in poor performance of the NALE.

7. Regularity Conditions and Proofs

This section provides theoretical proofs. We first state the following regularity conditions.
Condition 1.
With probability approaching one, the initial estimator satisfies $\|\tilde\beta^{\mathrm{ini}} - \beta^*\|_2 \le C_2\sqrt{s(\log p)/n}$.
Condition 2.
$p'_\lambda(t)$ is non-increasing in $t \in (0, \infty)$ and is Lipschitz with constant $C_3$, that is,
$$ |p'_{\lambda_n}(|t_1|) - p'_{\lambda_n}(|t_2|)| \le C_3|t_1 - t_2| $$
for any $t_1, t_2 \in \mathbb{R}$. Moreover, $p'_{\lambda_n}(C_2\sqrt{s\log p/n}) > (1/2)p'_{\lambda_n}(0+)$ for sufficiently large $n$, where $C_2$ is defined in Condition 1.
Condition 3.
There exist positive constants $0 < c_{\min} < c_{\max} < \infty$, such that
$$ c_{\min} \le \lambda_{\min}\Big(\frac{1}{n}X_{A_0}^TX_{A_0}\Big) \le \lambda_{\max}\Big(\frac{1}{n}X_{A_0}^TX_{A_0}\Big) \le c_{\max}, $$
and
$$ \Big\|\frac{1}{n}X_{A_0^c}^TX_{A_0}\Big\|_{2,\infty} < \frac{\lambda_n}{4\|w_{A_0^c}^{-1}\|_\infty a_n}, $$
where $\|B\|_{2,\infty} = \max_{\|v\|_2 \le 1}\|Bv\|_\infty$, $w_{A_0^c}^{-1} = (w_{s+1}^{-1}, \dots, w_p^{-1})^T$ and $a_n$ is defined in Theorem 1.
Condition 4.
The true coefficients satisfy $\min_{i \in A_0}|\beta_i^*| \gg \sqrt{s(2\log p + 2L)/n}$. Moreover, it holds that $p''_{\lambda_n}(t) = o\big(s^{-1}\lambda_n^{-1}(2\log p + 2L)^{-1/2}\big)$ for any $t > \min_{i \in A_0}|\beta_i^*|/2$ and $L > 0$.
As we pointed out in Remark 1, the lasso estimator $\hat\beta_{\mathrm{lasso}}$ satisfies Condition 1. Condition 2 affects the bound between $\hat\beta_{\lambda_n}$ and $\beta^*$ and is used in the proof of Theorem 1. Further, it determines the bound between $\hat\sigma^2_{\lambda_n}$ and $\sigma^2_{\mathrm{oracle}}$. The first part of Condition 3 is a very common regularity condition (see [4,12,23]) in high-dimensional regression. The remaining part is similar to Condition 3 in [23], which is used in the proofs of Theorems 1 and 2. Condition 4 is needed in the analysis of Corollary 1.
Proof of Proposition 1.
(i)
Since $(\hat\theta_{\lambda_n}, \hat\phi_{\lambda_n})$ is a solution of (2), $\hat\theta_{\lambda_n}$ is a solution of the problem
$$ \min_{\theta \in \mathbb{R}^p}\ L(\theta, \hat\phi_{\lambda_n}) + \lambda_n\|w \circ \theta\|_1. $$
Hence, by optimality of the above problem, we have
$$ -X^Ty + \frac{X^TX\hat\theta_{\lambda_n}}{\hat\phi_{\lambda_n}} + n\lambda_nw \circ \hat{g} = 0, $$
where $\hat{g} \in \partial\|\hat\theta_{\lambda_n}\|_1$. It follows that
$$ -X^Ty + X^TX\hat\beta_{\lambda_n} + n\lambda_nw \circ \hat{g} = 0. $$
Since $\operatorname{sign}(\hat\theta_{\lambda_n}) = \operatorname{sign}(\hat\beta_{\lambda_n})$, we have $\partial\|\hat\theta_{\lambda_n}\|_1 = \partial\|\hat\beta_{\lambda_n}\|_1$, which further implies that $\hat\beta_{\lambda_n}$ is a solution of the adaptive lasso (4).
(ii)
Since ( θ ^ λ n , ϕ ^ λ n ) is a solution of (2), by optimality of problem (2), we have
$$ -X^Ty + \frac{X^TX\hat\theta_{\lambda_n}}{\hat\phi_{\lambda_n}} + n\lambda_nw \circ \hat{g} = 0, \qquad -\frac{1}{\hat\phi_{\lambda_n}} + \frac{1}{n}\|y\|_2^2 - \frac{\|X\hat\theta_{\lambda_n}\|_2^2}{n\hat\phi_{\lambda_n}^2} = 0, $$
where $\hat{g} \in \partial\|\hat\theta_{\lambda_n}\|_1$. Therefore, we have
$$ -X^Ty + X^TX\hat\beta_{\lambda_n} + n\lambda_nw \circ \hat{g} = 0, \qquad -\frac{1}{\hat\phi_{\lambda_n}} + \frac{1}{n}\|y\|_2^2 - \frac{\|X\hat\theta_{\lambda_n}\|_2^2}{n\hat\phi_{\lambda_n}^2} = 0. \qquad (7) $$
Since $\partial\|\hat\theta_{\lambda_n}\|_1 = \partial\|\hat\beta_{\lambda_n}\|_1$, we have $\hat{g} \in \partial\|\hat\beta_{\lambda_n}\|_1$. Further,
$$ \hat\beta_{\lambda_n}^T(w \circ \hat{g}) = \sum_{i=1}^p w_i\hat\beta_i\hat{g}_i = \sum_{i=1}^p |w_i\hat\beta_i| = \|w \circ \hat\beta_{\lambda_n}\|_1. \qquad (8) $$
Combining (7) and (8), we obtain
$$ 0 = -\hat\beta_{\lambda_n}^TX^Ty + \|X\hat\beta_{\lambda_n}\|_2^2 + n\lambda_n\|w \circ \hat\beta_{\lambda_n}\|_1, \qquad \hat\sigma^2_{\lambda_n} = \frac{1}{n}\big(\|y\|_2^2 - \|X\hat\beta_{\lambda_n}\|_2^2\big). $$
Further, by the first term in (7),
$$ \|y - X\hat\beta_{\lambda_n}\|_2^2 = \|y\|_2^2 - \|X\hat\beta_{\lambda_n}\|_2^2 + 2\big(\|X\hat\beta_{\lambda_n}\|_2^2 - y^TX\hat\beta_{\lambda_n}\big) = \|y\|_2^2 - \|X\hat\beta_{\lambda_n}\|_2^2 - 2n\lambda_n\|w \circ \hat\beta_{\lambda_n}\|_1. $$
Combining the above equality and the second term in (7), we have
$$ \hat\sigma^2_{\lambda_n} = \frac{1}{n}\big(\|y\|_2^2 - \|X\hat\beta_{\lambda_n}\|_2^2\big) = \frac{1}{n}\|y - X\hat\beta_{\lambda_n}\|_2^2 + 2\lambda_n\|w \circ \hat\beta_{\lambda_n}\|_1, $$
which implies that σ ^ λ n 2 is the optimal value of the adaptive lasso (4).  □
Proof of Lemma 1.
From Proposition 1, we have
$$ \hat\sigma^2_{\lambda_n} \le \frac{1}{n}\|y - X\beta^*\|_2^2 + 2\lambda_n\|w \circ \beta^*\|_1 = \frac{1}{n}\|\varepsilon\|_2^2 + 2\lambda_n\|w \circ \beta^*\|_1. \qquad (9) $$
Since the loss function in the adaptive lasso is convex, we have
$$ \begin{aligned} \hat\sigma^2_{\lambda_n} &= \frac{1}{n}\|y - X\hat\beta_{\lambda_n}\|_2^2 + 2\lambda_n\|w \circ \hat\beta_{\lambda_n}\|_1 \ge \frac{1}{n}\|y - X\beta^*\|_2^2 + \Big[\frac{2}{n}X^T(X\beta^* - y)\Big]^T(\hat\beta_{\lambda_n} - \beta^*) \\ &= \frac{1}{n}\|\varepsilon\|_2^2 - \frac{2}{n}\varepsilon^TX(\hat\beta_{\lambda_n} - \beta^*) \ge \frac{1}{n}\|\varepsilon\|_2^2 - 2\Big\|\frac{1}{n}\varepsilon^TX\Big\|_\infty\|\hat\beta_{\lambda_n} - \beta^*\|_1 \ge \frac{1}{n}\|\varepsilon\|_2^2 - 2\lambda_n\|\hat\beta_{\lambda_n} - \beta^*\|_1. \end{aligned} \qquad (10) $$
Combining inequalities (9) and (10), we obtain
$$ \Big|\hat\sigma^2_{\lambda_n} - \frac{1}{n}\|\varepsilon\|_2^2\Big| \le \max\Big\{2\lambda_n\|w \circ \beta^*\|_1,\ 2\Big\|\frac{1}{n}\varepsilon^TX\Big\|_\infty\|\hat\beta_{\lambda_n} - \beta^*\|_1\Big\} \le 2\lambda_n\max\big\{\|w \circ \beta^*\|_1,\ \|\hat\beta_{\lambda_n} - \beta^*\|_1\big\}, $$
which completes the proof. □
Proof of Theorem 1.
Since problem (4) is a convex optimization problem, by Theorem 1 of [25], it suffices to show that, with probability tending to 1, there exists a minimizer $\hat\beta_{\lambda_n}$ of problem (4) that satisfies
$$ X_{A_0}^T(y - X\hat\beta_{\lambda_n}) - n\lambda_nw_{A_0} \circ \hat{g}_{A_0} = 0, \qquad (11) $$
$$ \big|X_{A_0^c}^T(y - X\hat\beta_{\lambda_n})\big| < n\lambda_nw_{A_0^c} \quad \text{(componentwise)}, \qquad (12) $$
$$ \lambda_{\min}\Big(\frac{1}{n}X_{A_0}^TX_{A_0}\Big) \ge c_{\min}, \qquad (13) $$
where $\hat{g} \in \partial\|\hat\beta_{\lambda_n}\|_1$.
Let $\xi_1 = X_{A_0}^T\varepsilon$ and $\xi_2 = X_{A_0^c}^T\varepsilon$. Since $\|X_j\|_2^2 = n$ and $\varepsilon \sim N(0, \sigma^2I_n)$, it follows from Corollary 4.3 in [26] that, for any $L > 0$,
$$ P\left(\Big\|\frac{X^T\varepsilon}{n\sigma}\Big\|_\infty > \sqrt{\frac{2\log p + 2L}{n}}\right) \le e^{-L}. \qquad (14) $$
Now we show that there exists a minimizer $\hat\beta$ of problem (4) that satisfies conditions (11)–(13).
Equation (11): Consider the minimizer of problem (4) in the subspace $\{\beta = (\beta_{A_0}^T, \beta_{A_0^c}^T)^T : \beta_{A_0^c} = 0\}$. Let $\beta = (\beta_{A_0}^T, 0^T)^T$, where $\beta_{A_0} = \beta_{A_0}^* + \tilde{a}_nv_{A_0} \in \mathbb{R}^s$ with $\tilde{a}_n = \sqrt{s(2\log p + 2L)/n} + 2\lambda_n\big(\|w_{A_0}^*\|_2 + C_2C_3\sqrt{s(\log p)/n}\big)$, $\|v_{A_0}\|_2 = C$, and $C > 0$ is some large enough constant. Note that
$$ L_n(\beta_{A_0}^* + \tilde{a}_nv_{A_0}, 0) - L_n(\beta_{A_0}^*, 0) = I_1(v_{A_0}) + I_2(v_{A_0}), \qquad (15) $$
where $I_1(v_{A_0}) = \frac{1}{n}\|X(\beta^* + \tilde{a}_nv) - y\|_2^2 - \frac{1}{n}\|X\beta^* - y\|_2^2$ and $I_2(v_{A_0}) = 2\lambda_n\|w_{A_0} \circ (\beta_{A_0}^* + \tilde{a}_nv_{A_0})\|_1 - 2\lambda_n\|w_{A_0} \circ \beta_{A_0}^*\|_1$. For $I_1(v_{A_0})$, by (14), we have
$$ \begin{aligned} I_1(v_{A_0}) &= \frac{1}{n}\|X(\beta^* + \tilde{a}_nv) - y\|_2^2 - \frac{1}{n}\|X\beta^* - y\|_2^2 = \frac{1}{n}\tilde{a}_n^2v^TX^TXv - \frac{2}{n}\tilde{a}_n\varepsilon^TXv \\ &= \frac{1}{n}\tilde{a}_n^2v_{A_0}^TX_{A_0}^TX_{A_0}v_{A_0} - \frac{2}{n}\tilde{a}_n\varepsilon^TX_{A_0}v_{A_0} \ge c_{\min}\tilde{a}_n^2\|v_{A_0}\|_2^2 - 2\tilde{a}_n\Big\|\frac{\xi_1}{n}\Big\|_2\|v_{A_0}\|_2 \\ &\ge c_{\min}\tilde{a}_n^2\|v_{A_0}\|_2^2 - 2\sigma\tilde{a}_n\sqrt{s(2\log p + 2L)/n}\,\|v_{A_0}\|_2, \end{aligned} \qquad (16) $$
where the last inequality holds due to $\|\cdot\|_2 \le \sqrt{s}\|\cdot\|_\infty$. For $I_2(v_{A_0})$, we have
$$ I_2(v_{A_0}) = 2\lambda_n\|w_{A_0} \circ (\beta_{A_0}^* + \tilde{a}_nv_{A_0})\|_1 - 2\lambda_n\|w_{A_0} \circ \beta_{A_0}^*\|_1 \ge -2\lambda_n\|w_{A_0} \circ (\tilde{a}_nv_{A_0})\|_1 \ge -2\tilde{a}_n\lambda_n\|w_{A_0}\|_2\|v_{A_0}\|_2. \qquad (17) $$
By the two-step procedure for the weight vector and Condition 2, it holds that
$$ \|w_{A_0}\|_2 \le \|w_{A_0} - w_{A_0}^*\|_2 + \|w_{A_0}^*\|_2 \le C_3\|\tilde\beta_{A_0}^{\mathrm{ini}} - \beta_{A_0}^*\|_2 + \|w_{A_0}^*\|_2 \le C_2C_3\sqrt{s(\log p)/n} + \|w_{A_0}^*\|_2. \qquad (18) $$
Hence, combining (15)–(18) yields
$$ L_n(\beta_{A_0}^* + \tilde{a}_nv_{A_0}, 0) - L_n(\beta_{A_0}^*, 0) \ge c_{\min}\tilde{a}_n^2\|v_{A_0}\|_2^2 - 2\sigma\tilde{a}_n\sqrt{s(2\log p + 2L)/n}\,\|v_{A_0}\|_2 - 2\tilde{a}_n\lambda_n\big(C_2C_3\sqrt{s(\log p)/n} + \|w_{A_0}^*\|_2\big)\|v_{A_0}\|_2. $$
Taking a large enough $C$, we obtain, with probability tending to one,
$$ L_n(\beta_{A_0}^* + \tilde{a}_nv_{A_0}, 0) - L_n(\beta_{A_0}^*, 0) > 0. $$
It follows immediately that, with probability approaching one, there exists a minimizer $\hat\beta_{A_0}$ of problem (4) restricted to the subspace $\{\beta = (\beta_{A_0}^T, \beta_{A_0^c}^T)^T : \beta_{A_0^c} = 0\}$, such that $\|\hat\beta_{A_0} - \beta_{A_0}^*\|_2 \le C_4\tilde{a}_n =: a_n$, with some constant $C_4 > 0$. Therefore, equality (11) holds by the optimality theory.
Inequality (12): It remains to be proven that (12) holds with asymptotic probability 1. Then, by the optimality theory, $\hat\beta_{\lambda_n} = (\hat\beta_{A_0}^T, 0^T)^T$ is the unique global minimizer of problem (4).
By the triangle inequality, we have
$$ \big\|X_{A_0^c}^T(y - X\hat\beta)\big\|_\infty \le \big\|X_{A_0^c}^T(y - X\beta^*)\big\|_\infty + \big\|X_{A_0^c}^TX(\beta^* - \hat\beta)\big\|_\infty. \qquad (19) $$
Further, by Condition 1, we have $|\tilde\beta_i^{\mathrm{ini}}| \le C_2\sqrt{s(\log p)/n}$ with probability approaching one, for $i \in A_0^c$. Moreover, by the definition of the folded-concave penalty function,
$$ p'_{\lambda_n}(|\tilde\beta_i^{\mathrm{ini}}|) \ge p'_{\lambda_n}\big(C_2\sqrt{s(\log p)/n}\big). \qquad (20) $$
Therefore, by Condition 2 and inequality (20), we conclude that
$$ \|w_{A_0^c}^{-1}\|_\infty = \Big[\min_{i \in A_0^c} p'_{\lambda_n}(|\tilde\beta_i^{\mathrm{ini}}|)\Big]^{-1} < \frac{2}{p'_{\lambda_n}(0+)} = 2\|(w_{A_0^c}^*)^{-1}\|_\infty. \qquad (21) $$
Thus, for the first term on the right-hand side of inequality (19), by (14) and the condition $\min_{i \in A_0^c} w_i^* > C_1^{-1}$, with probability approaching one,
$$ \Big\|\frac{1}{n}X_{A_0^c}^T(y - X\beta^*)\Big\|_\infty = \Big\|\frac{1}{n}X_{A_0^c}^T\varepsilon\Big\|_\infty < \sigma\sqrt{\frac{2\log p + 2L}{n}} = \frac{\lambda_n}{4C_1} < \frac{\lambda_n}{4\|(w_{A_0^c}^*)^{-1}\|_\infty} < \frac{\lambda_n}{2\|w_{A_0^c}^{-1}\|_\infty}. \qquad (22) $$
As for the second term on the right-hand side of inequality (19), by Condition 3, inequality (21) and inequality (14), with probability approaching one, we have
$$ \Big\|\frac{1}{n}X_{A_0^c}^TX(\beta^* - \hat\beta)\Big\|_\infty \le \Big\|\frac{1}{n}X_{A_0^c}^TX_{A_0}\Big\|_{2,\infty}\|\beta_{A_0}^* - \hat\beta_{A_0}\|_2 \le \frac{\lambda_n}{4\|(w_{A_0^c}^*)^{-1}\|_\infty} < \frac{\lambda_n}{2\|w_{A_0^c}^{-1}\|_\infty}. \qquad (23) $$
Combining (19), (22) and (23), we obtain inequality (12).
Inequality (13): it follows from Condition 3 that with an asymptotic probability of one, inequality (13) holds. This completes the proof of Theorem 1. □
Proof of Theorem 2.
By equality (11), $(1/n)X_{A_0}^T(y - X\hat\beta_{\lambda_n}) - \lambda_nw_{A_0} \circ \hat{g}_{A_0} = 0$. Since $y - X\beta^* = \varepsilon$, we have
$$ \frac{1}{n}X_{A_0}^TX_{A_0}(\hat\beta_{A_0} - \beta_{A_0}^*) = -\lambda_nw_{A_0} \circ \hat{g}_{A_0} + \frac{1}{n}X_{A_0}^T\varepsilon. $$
Therefore,
$$ n^{1/2}\alpha_n^T(\hat\beta_{A_0} - \beta_{A_0}^*) + n^{3/2}\lambda_n\alpha_n^T(X_{A_0}^TX_{A_0})^{-1}(w_{A_0} \circ \hat{g}_{A_0}) = n^{1/2}\alpha_n^T(X_{A_0}^TX_{A_0})^{-1}X_{A_0}^T\varepsilon. \qquad (24) $$
By the first part of Condition 4 and the bound on $\hat\beta_{\lambda_n}$ in Theorem 1, we have $\operatorname{sign}(\hat\beta_{A_0}) = \operatorname{sign}(\beta_{A_0}^*)$. Then $\hat{g}_{A_0} = g_{A_0}^*$. In addition, by the second part of Condition 4,
$$ w_{A_0} = w_{A_0}^* + \zeta(\hat\beta_{A_0} - \beta_{A_0}^*), $$
where $\zeta = \operatorname{diag}\big(p''_{\lambda_n}(\tilde\beta_1), \dots, p''_{\lambda_n}(\tilde\beta_s)\big) \in \mathbb{R}^{s \times s}$ and $\tilde\beta \in \mathbb{R}^s$ lies on the line segment between $\hat\beta_{A_0}$ and $\beta_{A_0}^*$. It follows that $\|\zeta(\hat\beta_{A_0} - \beta_{A_0}^*) \circ g_{A_0}^*\|_2 = \|\zeta(\hat\beta_{A_0} - \beta_{A_0}^*)\|_2 = o(\lambda_n^{-1}n^{-1/2})$. Further, since
$$ \big|n^{3/2}\lambda_n\alpha_n^T(X_{A_0}^TX_{A_0})^{-1}\big[\zeta(\hat\beta_{A_0} - \beta_{A_0}^*) \circ g_{A_0}^*\big]\big| \le n^{1/2}\lambda_nc_{\min}^{-1}\|\alpha_n\|_2\|\zeta(\hat\beta_{A_0} - \beta_{A_0}^*)\|_2 = o(1), $$
we have, for the second term on the left-hand side of (24),
$$ n^{3/2}\lambda_n\alpha_n^T(X_{A_0}^TX_{A_0})^{-1}(w_{A_0} \circ \hat{g}_{A_0}) = n^{3/2}\lambda_n\alpha_n^T(X_{A_0}^TX_{A_0})^{-1}(w_{A_0}^* \circ g_{A_0}^*) + o(1). $$
Finally, the result follows, by verifying the conditions of the Lindeberg–Feller central limit theorem, in the same way as in the proof of Theorem 2 in [4]. □
Proof of Theorem 4.
For any $M > 1$, take $L = (M - 1)\log p$ and denote $Z_n = \big(\hat\sigma^2_{\lambda_n} - \frac{1}{n}\|\varepsilon\|_2^2\big)^2$. Then, by Theorems 1 and 3, we have
$$ P(Z_n > t\,b_n^2) \le e^{-(t-1)\log p} \quad \text{for any } t \ge M. $$
It follows that
$$ \mathbb{E}\frac{Z_n}{b_n^2} = \int_0^\infty P\Big(\frac{Z_n}{b_n^2} > t\Big)\,dt = \int_0^M P\Big(\frac{Z_n}{b_n^2} > t\Big)\,dt + \int_M^\infty P\Big(\frac{Z_n}{b_n^2} > t\Big)\,dt \le M + \int_M^\infty e^{-(t-1)\log p}\,dt = M + \frac{p^{1-M}}{\log p}. \qquad (25) $$
Further, since $\sigma^{-2}\|\varepsilon\|_2^2 \sim \chi^2(n)$, we have
$$ \mathbb{E}\Big(\frac{1}{n}\|\varepsilon\|_2^2\Big) = \sigma^2, \qquad \operatorname{Var}\Big(\frac{1}{n}\|\varepsilon\|_2^2\Big) = \frac{2\sigma^4}{n}. \qquad (26) $$
By the proof of Theorem 12 in [13], we have
$$ \mathbb{E}(\hat\sigma^2_{\lambda_n} - \sigma^2)^2 \le \left\{\Big[\mathbb{E}\Big(\hat\sigma^2_{\lambda_n} - \frac{1}{n}\|\varepsilon\|_2^2\Big)^2\Big]^{1/2} + \Big[\operatorname{Var}\Big(\frac{1}{n}\|\varepsilon\|_2^2\Big)\Big]^{1/2}\right\}^2. $$
Combining (25) and (26), we obtain
$$ \mathbb{E}(\hat\sigma^2_{\lambda_n} - \sigma^2)^2 \le \left\{\Big(M + \frac{p^{1-M}}{\log p}\Big)^{1/2}b_n + \sigma^2\Big(\frac{2}{n}\Big)^{1/2}\right\}^2. $$
Dividing both sides by $\sigma^4$ yields the bound stated in Theorem 4.
The proof is complete. □

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/math10111937/s1. The programs used in the numerical simulations are available in the Supplementary Materials.

Author Contributions

Methodology, X.W., L.K. and L.W.; software, X.W.; writing—original draft, X.W., L.K. and L.W.; validation, X.W., L.K. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

The National Natural Science Foundation of China (12071022) and the 111 Project of China (B16002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the editor and the three anonymous reviewers, for their helpful comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
  2. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360.
  3. Zou, H. The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429.
  4. Huang, J.; Horowitz, J.L.; Ma, S. Asymptotic properties of bridge estimators in sparse high-dimensional regression models. Ann. Stat. 2008, 36, 587–613.
  5. Zou, H.; Zhang, H.H. On the adaptive elastic-net with a diverging number of parameters. Ann. Stat. 2009, 37, 1733–1751.
  6. Zhang, C.H. Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 2010, 38, 894–942.
  7. Candes, E.; Tao, T. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat. 2007, 35, 2313–2351.
  8. Fan, J.; Lv, J. Sure independence screening for ultrahigh dimensional feature space. J. R. Stat. Soc. Ser. B 2008, 70, 849–911.
  9. Ghaoui, L.E.; Viallon, V.; Rabbani, T. Safe feature elimination in sparse supervised learning. Pac. J. Optim. 2012, 8, 667–698.
  10. Wang, J.; Wonka, P.; Ye, J. Lasso screening rules via dual polytope projection. J. Mach. Learn. Res. 2015, 16, 1063–1101.
  11. Xiang, Z.J.; Wang, Y.; Ramadge, P.J. Safe feature elimination in sparse supervised learning. IEEE Trans. Pattern Anal. 2017, 39, 1008–1027.
  12. Fan, J.; Guo, S.; Hao, N. Variance estimation using refitted cross-validation in ultrahigh dimensional regression. J. R. Stat. Soc. Ser. B 2012, 74, 37–65.
  13. Yu, G.; Bien, J. Estimating the error variance in a high-dimensional linear model. Biometrika 2019, 106, 533–546.
  14. Zou, H.; Hastie, T.; Tibshirani, R. On the "degrees of freedom" of the lasso. Ann. Stat. 2007, 35, 2173–2192.
  15. Wang, L.; Leblanc, A. Second-order nonlinear least squares estimation. Ann. Inst. Stat. Math. 2008, 60, 883–900.
  16. Städler, N.; Bühlmann, P. ℓ1-penalization for mixture regression models. Test 2010, 19, 209–256.
  17. Sun, T.; Zhang, C.H. Scaled sparse linear regression. Biometrika 2012, 99, 879–898.
  18. Dicker, L.H. Variance estimation in high-dimensional linear models. Biometrika 2014, 101, 269–284.
  19. Dicker, L.H.; Erdogdu, M.A. Maximum likelihood for variance estimation in high-dimensional linear models. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, Cadiz, Spain, 9–11 May 2016; pp. 159–167.
  20. Liu, X.; Zheng, S.; Feng, X. Estimation of error variance via ridge regression. Biometrika 2020, 107, 481–488.
  21. Zhang, C.H.; Huang, J. The sparsity and bias of the lasso selection in high-dimensional linear regression. Ann. Stat. 2008, 36, 1567–1594.
  22. Bickel, P.J.; Ritov, Y.; Tsybakov, A.B. Simultaneous analysis of lasso and Dantzig selector. Ann. Stat. 2009, 37, 1705–1732.
  23. Fan, J.; Fan, Y.; Barut, E. Adaptive robust variable selection. Ann. Stat. 2014, 42, 324–351.
  24. Friedman, J.; Hastie, T.; Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw. 2010, 33, 1–22.
  25. Fan, J.; Lv, J. Nonconcave penalized likelihood with NP-dimensionality. IEEE Trans. Inform. Theory 2011, 57, 5467–5484.
  26. Giraud, C. Introduction to High-Dimensional Statistics, 1st ed.; Chapman and Hall/CRC: New York, NY, USA, 2014.
Figure 1. Boxplots of 100 RE values for five estimators, true $\sigma^2 = 0.5$.
Figure 2. Boxplots of 100 RE values for five estimators, true $\sigma^2 = 1$.
Table 1. Average MSE of various estimators, true $\sigma^2 = 0.5$.

α     OE     NALE   SLE λn,1  SLE λn,2  SLE λn,3  NLE    OLE λn,1  OLE λn,2  RBE
(ρ = 0.1)
0.1   0.004  0.004  0.692     0.081     0.013     0.052  0.098     0.152     1.034
0.2   0.005  0.005  0.723     0.080     0.011     0.115  0.283     0.097     1.799
0.3   0.006  0.006  0.758     0.076     0.010     0.170  0.474     0.067     2.043
0.4   0.005  0.005  0.817     0.059     0.018     0.503  1.998     0.001     3.758
0.5   0.005  0.006  0.730     0.022     0.164     0.830  3.405     0.044     5.769
(ρ = 0.3)
0.1   0.005  0.005  0.621     0.069     0.008     0.051  0.086     0.150     0.655
0.2   0.005  0.004  0.642     0.053     0.005     0.111  0.242     0.098     0.765
0.3   0.004  0.004  0.585     0.043     0.014     0.161  0.383     0.067     1.249
0.4   0.006  0.006  0.598     0.024     0.081     0.483  1.616     0.002     3.431
0.5   0.006  0.006  0.721     0.015     0.536     0.790  2.848     0.003     4.896
(ρ = 0.5)
0.1   0.004  0.005  0.458     0.056     0.008     0.049  0.079     0.147     0.485
0.2   0.005  0.013  0.422     0.035     0.015     0.099  0.184     0.099     1.214
0.3   0.005  0.005  0.425     0.023     0.043     0.142  0.283     0.070     0.878
0.4   0.005  0.004  0.392     0.013     0.219     0.438  1.228     0.003     3.062
0.5   0.017  0.018  0.810     0.066     1.703     2.854  6.952     0.040     15.446
Table 2. Average RE of various estimators, true $\sigma^2 = 0.5$.

α     OE     NALE   SLE λn,1  SLE λn,2  SLE λn,3  NLE    OLE λn,1  OLE λn,2  RBE
(ρ = 0.1)
0.1   0.993  0.988  0.175     0.727     0.914     1.225  1.309     0.612     2.013
0.2   0.983  0.979  0.156     0.737     0.953     1.336  1.528     0.690     2.337
0.3   0.992  0.992  0.133     0.743     0.982     1.410  1.685     0.745     2.426
0.4   1.003  0.994  0.100     0.775     1.094     1.708  2.412     0.998     2.937
0.5   1.003  0.961  0.152     0.900     1.388     1.910  2.844     1.206     3.400
(ρ = 0.3)
0.1   1.007  0.998  0.219     0.754     0.963     1.222  1.287     0.614     1.803
0.2   0.996  0.988  0.205     0.783     1.015     1.330  1.487     0.689     1.870
0.3   0.998  0.992  0.242     0.815     1.071     1.399  1.615     0.745     2.113
0.4   1.000  0.993  0.239     0.887     1.259     1.694  2.269     0.987     2.850
0.5   1.001  1.017  0.159     0.976     1.717     1.999  2.686     1.182     3.211
(ρ = 0.5)
0.1   0.990  0.985  0.331     0.779     0.965     1.217  1.274     0.619     1.688
0.2   0.994  1.043  0.359     0.845     1.077     1.311  1.423     0.688     2.097
0.3   1.000  1.001  0.357     0.881     1.183     1.374  0.528     0.737     1.932
0.4   0.999  1.014  0.384     1.029     1.453     1.660  2.105     0.966     2.747
0.5   1.011  1.025  0.297     1.058     1.503     1.638  1.901     0.900     2.251
Table 3. Average MSE of various estimators, true $\sigma^2 = 1$.

α     OE     NALE   SLE λn,1  SLE λn,2  SLE λn,3  NLE    OLE λn,1  OLE λn,2  RBE
(ρ = 0.1)
0.1   0.004  0.004  0.740     0.090     0.013     0.019  0.025     0.200     0.347
0.2   0.005  0.005  0.731     0.074     0.006     0.005  0.083     0.155     0.455
0.3   0.005  0.005  0.759     0.081     0.008     0.074  0.161     0.127     0.600
0.4   0.004  0.008  0.748     0.043     0.038     0.043  0.592     0.041     1.479
0.5   0.005  0.009  0.834     0.028     0.223     0.091  1.021     0.009     1.934
(ρ = 0.3)
0.1   0.005  0.005  0.655     0.087     0.015     0.018  0.023     0.201     0.266
0.2   0.005  0.006  0.642     0.063     0.008     0.045  0.074     0.153     0.457
0.3   0.005  0.005  0.597     0.048     0.012     0.065  0.116     0.131     0.478
0.4   0.005  0.015  0.621     0.015     0.128     0.192  0.404     0.049     1.101
0.5   0.004  0.016  0.667     0.026     0.356     0.349  0.856     0.010     1.811
(ρ = 0.5)
0.1   0.004  0.005  0.476     0.054     0.007     0.016  0.017     0.193     0.141
0.2   0.005  0.007  0.414     0.028     0.012     0.036  0.044     0.156     0.275
0.3   0.006  0.009  0.357     0.017     0.030     0.049  0.064     0.134     0.345
0.4   0.005  0.004  0.398     0.015     0.108     0.169  0.325     0.055     0.868
0.5   0.004  0.004  0.490     0.023     0.237     0.306  0.722     0.016     1.328
Table 4. Average RE of various estimators, true $\sigma^2 = 1$.

α     OE     NALE   SLE λn,1  SLE λn,2  SLE λn,3  NLE    OLE λn,1  OLE λn,2  RBE
(ρ = 0.1)
0.1   1.002  0.977  0.144     0.711     0.914     1.131  1.148     0.555     1.581
0.2   0.999  0.964  0.149     0.742     0.969     1.211  1.279     0.608     1.667
0.3   1.003  0.964  0.133     0.726     0.963     1.268  1.394     0.636     1.767
0.4   1.003  0.940  0.141     0.818     1.176     1.205  1.766     0.801     2.212
0.5   0.981  0.937  0.091     0.891     1.459     1.300  2.007     0.913     2.387
(ρ = 0.3)
0.1   0.994  0.971  0.197     0.720     0.913     1.125  1.137     0.553     1.506
0.2   1.005  0.987  0.207     0.767     0.997     1.207  1.263     0.610     1.669
0.3   1.007  1.000  0.234     0.804     1.054     1.251  1.331     0.640     1.684
0.4   1.001  1.068  0.219     0.982     1.342     1.436  1.632     0.783     2.048
0.5   0.993  1.068  0.193     1.111     1.584     1.589  1.922     0.912     2.341
(ρ = 0.5)
0.1   1.001  0.980  0.316     0.783     0.977     1.118  1.113     0.562     1.364
0.2   1.000  1.016  0.362     0.862     1.065     1.183  1.200     0.607     1.514
0.3   0.987  1.028  0.412     0.911     1.146     1.215  1.242     0.636     1.580
0.4   0.990  0.996  0.381     1.026     1.310     1.408  1.566     0.771     1.926
0.5   1.000  1.011  0.310     1.100     1.473     1.551  1.846     0.882     2.148
