Article

Relaxed Adaptive Lasso and Its Asymptotic Results

Rufei Zhang, Tong Zhao, Yajun Lu and Xieting Xu

1 College of Economics, Hebei GEO University, Shijiazhuang 050031, China
2 Research Center of Natural Resources Assets, Hebei GEO University, Shijiazhuang 050031, China
3 Hebei Province Mineral Resources Development and Management and the Transformation and Upgrading of Resources Industry Soft Science Research Base, Shijiazhuang 050031, China
4 Xiaomi Corporation, Beijing 100089, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(7), 1422; https://doi.org/10.3390/sym14071422
Submission received: 22 June 2022 / Revised: 7 July 2022 / Accepted: 8 July 2022 / Published: 11 July 2022

Abstract

This article introduces a novel two-stage variable selection method to solve the common asymmetry problem between the response variable and its influencing factors. In practical applications, we cannot correctly extract important factors from a large amount of complex and redundant data. However, the proposed method based on the relaxed lasso and the adaptive lasso, namely, the relaxed adaptive lasso, can achieve information symmetry because the variables it selects contain all the important information about the response variables. The goal of this paper is to preserve the relaxed lasso's superior variable selection speed while imposing varying penalties on different coefficients. Additionally, the proposed method enjoys favorable asymptotic properties, that is, consistency with a fast rate of convergence of $O_p(n^{-1})$. The simulation demonstrates that, in finite samples, the relaxed adaptive lasso is superior to the regular lasso, relaxed lasso and adaptive lasso estimators in terms of variable recovery (the number of significant variables correctly selected) and prediction accuracy.

1. Introduction

Rapid advancements in research and technology have resulted in enormous data in a variety of scientific domains. How to efficiently extract information from complex data and develop an ideal model that relates critical features to response variables has become a challenge for researchers in the data explosion era. Over the past two decades, statisticians have conducted substantial research on the subject of feature selection.
Tibshirani [1] first proposed the lasso, a technique for screening high-dimensional variables that augments least squares estimation with an $L_1$ penalty. The penalty sets some of the coefficients exactly to zero, thereby performing coefficient shrinkage and model selection simultaneously. The lasso sacrifices unbiasedness to reduce variance and solves a convex optimization problem to find the globally optimal solution. In the rare-signal scenario, when the signal strength exceeds a certain level, the lasso shows good performance, far outperforming other variable selection methods [2]. However, Meinshausen and Bühlmann [3] discovered a conflict in the lasso between optimal prediction and consistent variable selection, which is one of the lasso's downsides. Because the lasso is highly sensitive to correlation and multicollinearity in real data, insignificant noisy variables may be selected into the model, and these noise variables degrade the model fit. Fan and Li [4] presented a novel, more flexible penalized likelihood approach that applies to generalized linear models and other types of models. Moreover, Fan and Li [5] extended the preceding approach and stated that, as long as the dimensionality of the model is not too large, the penalized likelihood technique can be used to estimate the model's parameters via the penalty function. To address the issue of inconsistent lasso selection, Zou [6] proposed the adaptive lasso estimator,
$$\hat\beta^{Alasso} = \arg\min_{\beta}\Big\|Y - \sum_{j=1}^{p}X_j^{T}\beta_j\Big\|_2^2 + \lambda\sum_{j=1}^{p}\hat\omega_j|\beta_j|,$$
where $\hat\omega = 1/|\hat\beta|^{\gamma}$, $\gamma > 0$. The primary reason the adaptive lasso is superior to the lasso is its oracle property, which depends on a suitably chosen weight vector $\hat\omega$; with a poorly chosen weight vector, this oracle property is lost. Fan and Peng [7] claimed that when the dimension p is less than the sample size n, the lasso and adaptive lasso can both be used to accelerate and optimize variable selection. The theory of Donoho and Johnstone can also be used to demonstrate the adaptive lasso's near-minimax optimality [8]. The non-negative garrote [9] is another regularization method. It can be considered a special case of the adaptive lasso and was proved to have the property of consistent variable screening [10].
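For concreteness, the adaptive lasso above is commonly computed by rescaling each column of X by its weight and solving an ordinary lasso. The following is a minimal sketch of that reduction in Python, assuming scikit-learn is available; the function name, the use of OLS as the pilot estimator, and the small constant added to the weights are illustrative choices, not taken from the paper.

```python
# Minimal sketch: adaptive lasso by column rescaling (names are illustrative).
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, lam, gamma=1.0):
    """Adaptive lasso via the usual reduction to a rescaled lasso."""
    # Pilot estimate; any root-n-consistent estimator (e.g., ridge) would also do.
    beta_pilot = LinearRegression(fit_intercept=False).fit(X, y).coef_
    w = 1.0 / (np.abs(beta_pilot) ** gamma + 1e-12)  # weights w_j = 1 / |beta_j|^gamma
    X_star = X / w                                    # rescaled columns X_j* = X_j / w_j
    # Note: scikit-learn's Lasso scales the squared error by 1/(2n), so `lam`
    # matches the paper's lambda only up to that constant.
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(X_star, y)
    return fit.coef_ / w                              # undo the rescaling: beta_j = beta_j* / w_j
```

The back-transform in the last line reflects that penalizing the rescaled coefficients with a plain $L_1$ norm is equivalent to penalizing the original coefficients with the weights $\hat\omega_j$.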
Meinshausen [11] defined the relaxed lasso estimator on a set $\mathcal M\subseteq\{1,\dots,p\}$ of selected variables, where p is the number of predictors,
$$\hat\beta^{Rlasso} = \arg\min_{\beta}\Big\|Y - \sum_{j=1}^{p}X_j^{T}\beta_j\cdot 1_{\mathcal M}\Big\|_2^2 + \phi\lambda\sum_{j=1}^{p}|\beta_j|,$$
where $\lambda\in[0,\infty)$, $\phi\in[0,1]$, and $1_{\mathcal M}$ is the indicator function
$$1_{\mathcal M}(k) = \begin{cases}0, & k\notin\mathcal M,\\ 1, & k\in\mathcal M,\end{cases}$$
for all $k\in\{1,\dots,p\}$. Hastie et al. [12] compared the performance of the lasso, forward stepwise regression and the relaxed lasso across a range of signal-to-noise ratios (SNRs) and showed that the relaxed lasso is extremely competitive in every setting. The relaxation parameter $\phi$ contributes to the relaxed lasso's superior performance: by adjusting $\phi$ appropriately, one can ensure that the sparse solutions on the path do not experience excessive shrinkage. This is the primary reason we chose to build our method on the relaxed lasso. Numerous recent studies have demonstrated that the relaxed lasso performs excellently compared with other methods. Mentch and Zhou [13] showed that in high-dimensional settings, the lasso, forward selection and randomized forward selection perform similarly at low SNRs, but for larger SNRs, the relaxed lasso performs much better in terms of the relative test error. Bloise et al. [14] suggested that the relaxed lasso is able to avoid overfitting by using two separate tuning parameters and thereby obtain a more accurate model. Comparing the relaxed lasso to least squares and stepwise regression, He [15] came to the conclusion that the relaxed lasso improves the accuracy of the model by deleting insignificant variables. Kang et al. [16] proposed a new method that combines the relaxed lasso and a generalized multiclass support vector machine to obtain fewer feature variables and higher classification accuracy. Tay et al. [17] combined elastic net regularized regression with a simplified relaxed lasso model and built a prediction matrix to measure model performance, which speeds up the computational efficiency of the model.
We discuss the properties of different variable selection methods in the large-sample case. Consistency and asymptotic normality are two large-sample properties of OLS; for consistency, we can assume the weaker population zero-correlation condition $\mathrm{Cov}(x,\varepsilon) = 0$ and the zero-mean error condition $E(\varepsilon) = 0$. Fu and Knight [18] examined the consistency and asymptotic features of bridge estimation in convex and nonconvex scenarios and established that the lasso is consistent when certain conditions are met. Thus, another disadvantage of the lasso is that its variable selection is conditional, which means that it does not behave like an oracle estimator. Zhao and Yu [19] claimed that the irrepresentable condition is a necessary and sufficient criterion for the lasso to be selection consistent. However, both the adaptive lasso and the relaxed lasso have been proved to be consistent without satisfying this strict condition. Zou [6] demonstrated that even with huge quantities of data, the adaptive lasso can efficiently recover the model's sparse solution while retaining the oracle property. According to Meinshausen [11], the relaxed lasso retains a fast rate of convergence of $O_p(n^{-1})$ and leads to consistent variable selection regardless of the asymptotic regime. To combine the advantages of the preceding two models, we propose the relaxed adaptive lasso and demonstrate that it has the same asymptotic properties and excellent convergence rate as the relaxed lasso. The Lars algorithm [20] and an improved algorithm can be used to compute the relaxed adaptive lasso.
In this paper, we propose a novel variable filtering method named the relaxed adaptive lasso, which can effectively address the model selection issue, and we demonstrate the method's asymptotic properties. We prove that the relaxed adaptive lasso estimator achieves the same rate of convergence as the relaxed lasso, indicating that it can obtain the sparse solution at the optimal rate. The simulation study demonstrates that the relaxed adaptive lasso performs well in variable recovery and prediction accuracy. In particular, when the sample size n and the number of variables p are varied, the relaxed adaptive lasso recovers the true nonzero variables better than the lasso, relaxed lasso, and adaptive lasso. As the sample size n increases, the mean square error (MSE) of the model remains the best among the compared methods.
The rest of this article is structured as follows: Section 2 defines the relaxed adaptive lasso, describes its computational algorithms, and establishes its asymptotic properties. In Section 3, we compare the performance of the relaxed adaptive lasso to that of the lasso, adaptive lasso, and relaxed lasso in a simulation experiment. Section 4 discusses an application to real-world data. Section 5 concludes. Appendices A–G contain the proofs.

2. Relaxed Adaptive Lasso and Asymptotic Results

2.1. Definition

Recall that the adaptive lasso improves on the lasso by applying a weight vector that adapts the amount of shrinkage to each coefficient. The set of predictor variables selected by the adaptive lasso estimator $\hat\beta^{\lambda,\omega}$ is denoted by $\mathcal S^{\lambda,\omega}$,
$$\mathcal S^{\lambda,\omega} = \big\{1\le k\le p : \hat\beta_k^{\lambda,\omega}\ne 0\big\}.$$
In the low-dimensional case, the relaxed adaptive lasso solution is obtained from the adaptive lasso estimator restricted to $\mathcal S^{\lambda,\omega}$.
We now consider the linear regression model
$$Y = X^{T}\beta^{*} + \varepsilon,$$
where $\varepsilon = (\varepsilon_1,\dots,\varepsilon_n)^{T}$ is a vector of i.i.d. random variables with mean 0 and variance $\sigma^2$, and $X = (X_1,\dots,X_p)$ is an $n\times p$ normally distributed design matrix, $X\sim N(0,\Sigma)$, where $X_i$ is the ith column and $Y$ is an $n\times 1$ vector of response variables. We now define the relaxed adaptive lasso estimator. Variable selection and shrinkage are controlled by adding two tuning parameters, $\lambda$ and $\phi$, and one weight vector, $\omega$, to the $L_1$ penalty term. Following the setup of Zou [6], suppose that $\hat\beta$ is a $\sqrt{n}$-consistent estimator of $\beta^{*}$.
Definition 1.
Define the relaxed adaptive lasso estimator on the active set $\mathcal S^{\lambda,\omega}$ of the adaptive lasso estimator $\hat\beta^{\lambda,\omega}$ as
$$\hat\beta^{*} = \arg\min_{\beta}\Big\|Y - \sum_{j=1}^{p}X_j^{T}\beta_j\cdot 1_{\mathcal S^{\lambda,\omega}}\Big\|_2^2 + \phi\lambda\sum_{j=1}^{p}\hat\omega_j|\beta_j|, \qquad (5)$$
where $1_{\mathcal S^{\lambda,\omega}}$ is the indicator function
$$1_{\mathcal S^{\lambda,\omega}}(k) = \begin{cases}1, & k\in\mathcal S^{\lambda,\omega},\\ 0, & k\notin\mathcal S^{\lambda,\omega},\end{cases}$$
for all $k\in\{1,\dots,p\}$; $\phi\in[0,1]$; and, given a $\gamma > 0$, the weight vector is defined as $\hat\omega = 1/|\hat\beta|^{\gamma}$.
Notably, only predictor variables in the set $\mathcal S^{\lambda,\omega}\subseteq\{1,\dots,p\}$ can enter the relaxed adaptive lasso solution. In the following, we discuss the roles and value ranges of the parameters given the set $\mathcal S^{\lambda,\omega}$. The parameter $\lambda\ge 0$ determines the number of variables retained in the model. For $\lambda = 0$ or $\phi = 0$, the problem of solving the estimators in Equation (5) is transformed into an ordinary least squares problem with $\mathcal S^{0,\omega} = \{1,\dots,p\}$, so that the purpose of variable selection cannot be achieved. As $\lambda$ increases, all coefficients of the variables selected by the adaptive lasso are compressed towards 0, and some finally become exactly 0. However, for a sufficiently large $\lambda$, all estimators are shrunk to 0 and $\mathcal S^{\lambda,\omega} = \varnothing$, leading to a null model. In addition, the relaxation parameter $\phi$ controls the amount of shrinkage applied to the coefficients in estimation. When $\phi = 1$, the adaptive lasso and relaxed adaptive lasso estimators are the same. When $\phi < 1$, the shrinkage force on the estimators is weaker than that of the adaptive lasso. The optimal tuning parameters $\lambda$ and $\phi$ are chosen by cross-validation. The vector $\hat\omega = 1/|\hat\beta|^{\gamma}$ assigns different weights to the coefficients; hence, the relaxed adaptive lasso is consistent when the weight vector is correctly chosen.
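Read as a two-stage procedure, Definition 1 first runs the adaptive lasso at penalty $\lambda$ to obtain the active set $\mathcal S^{\lambda,\omega}$ and then re-estimates the coefficients on that set under the relaxed penalty $\phi\lambda$. The following is a minimal sketch of this reading (an illustrative interpretation, not the authors' implementation), reusing the adaptive_lasso() helper sketched above; the function name and the zero-active-set handling are assumptions.

```python
# Sketch of Definition 1 as a two-stage procedure (illustrative reading).
import numpy as np
from sklearn.linear_model import LinearRegression

def relaxed_adaptive_lasso(X, y, lam, phi, gamma=1.0):
    # Stage 1: adaptive lasso at penalty lam defines the active set S^{lam, omega}.
    beta_stage1 = adaptive_lasso(X, y, lam, gamma)
    S = np.flatnonzero(beta_stage1)
    beta = np.zeros(X.shape[1])
    if S.size == 0:                 # every coefficient shrunk to zero: null model
        return beta
    # Stage 2: re-estimate on the active set under the relaxed penalty phi * lam.
    if phi == 0.0:
        beta[S] = LinearRegression(fit_intercept=False).fit(X[:, S], y).coef_
    else:
        beta[S] = adaptive_lasso(X[:, S], y, phi * lam, gamma)
    return beta
```

In this reading, $\phi = 1$ reproduces the adaptive lasso itself, while $\phi = 0$ reduces to an unpenalized refit on the active set, consistent with the parameter discussion above.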

2.2. Algorithm

We now discuss algorithms for computing the relaxed adaptive lasso estimator. Note that (5) is a convex optimization problem, which means that the global optimal solution can be obtained efficiently. In contrast, concave penalties such as SCAD suffer from the problem of multiple local minima. In the following, we first discuss a simplified version of the relaxed adaptive lasso algorithm. An improved algorithm is then proposed based on the computation of the relaxed lasso estimator [11].
The simple algorithm for relaxed adaptive lasso
Step (1).
For a given $\gamma > 0$, we use $\hat\beta^{OLS}$ to construct the weight in the adaptive lasso based on the definition from Zou [6]. We can also replace $\hat\beta^{OLS}$ with other consistent estimators, e.g., $\hat\beta^{Ridge}$.
Step (2).
Define $X_j^{*} = X_j/\hat\omega_j$, $j = 1,\dots,p$, where $\hat\omega_j = 1/|\hat\beta_j^{OLS}|^{\gamma}$.
Step (3).
Then, the process of computing relaxed adaptive lasso solutions is identical to that of solving the relaxed lasso solutions in Meinshausen [11]. The relaxed lasso estimator is defined as
$$\hat\beta^{**} = \arg\min_{\beta}\Big\|Y - \sum_{j=1}^{p}X_j^{*T}\beta_j\cdot 1_{\mathcal S^{\lambda,\omega}}\Big\|_2^2 + \phi\lambda\sum_{j=1}^{p}|\beta_j|.$$
The Lars algorithm is first used to compute all the adaptive lasso solutions. Select a total of h resulting models $S_1,\dots,S_h$ attained with the sorted penalty parameters $\lambda_1 > \lambda_2 > \dots > \lambda_h = 0$. When $\lambda_h = 0$, for example, all variables with nonzero coefficients are selected, which is identical to the OLS solution. On the other hand, $\lambda_0 = \infty$ completely shrinks the estimators to zero, thus leading to a null model. Therefore, a moderate $\lambda_k$, $k = 1,\dots,h$, in the sequence $\lambda_1,\dots,\lambda_h$ is chosen such that $S_k = \mathcal S^{\lambda,\omega}$. Then, define the estimator $\tilde\beta = \hat\beta^{\lambda_k} + \lambda_k\delta_k$, where $\delta_k = \big(\hat\beta^{\lambda_k} - \hat\beta^{\lambda_{k-1}}\big)/\big(\lambda_{k-1} - \lambda_k\big)$ is the direction of the adaptive lasso solutions, which can be obtained from the last step. If there exists at least one component j such that $\mathrm{sgn}(\tilde\beta_j)\ne\mathrm{sgn}(\hat\beta_j^{\lambda_k})$, then all the adaptive lasso solutions on the set $S_k$ of variables are identical to the set of relaxed lasso estimators $\hat\beta^{**}$ for $\lambda\in L_k$. Otherwise, the estimators $\hat\beta^{**}$ for $\lambda\in L_k$ are computed by linear interpolation between $\hat\beta_j^{\lambda_k}$ and $\tilde\beta_j$.
Step (4).
Output the relaxed adaptive lasso solutions: $\hat\beta_j^{*} = \hat\beta_j^{**}/\hat\omega_j$, $j = 1,\dots,p$.
The simple algorithm has the same computational complexity as the Lars-OLS hybrid algorithm. However, due to this high computational cost, the approach is frequently not ideal. We therefore consider an improved algorithm, based on Hastie et al. [12], which uses the definition of the relaxed adaptive lasso estimator to avoid the high computational complexity.
The improved algorithm for relaxed adaptive lasso
Step (1).
As before, $\mathcal S^{\lambda,\omega}$ denotes the active set of the adaptive lasso. Let $\hat\beta^{Alasso}$ denote the adaptive lasso estimator. The relaxed adaptive lasso solution can be defined as
$$\hat\beta^{*} = \phi\,\hat\beta^{Alasso} + (1-\phi)\,\hat\beta^{OLS},$$
where ϕ is a constant with a value between 0 and 1.
Step (2).
The Gram matrix of the submatrix $X_{\mathcal S^{\lambda,\omega}}$ of active predictors is invertible; thus, $\hat\beta^{OLS} = \big(X_{\mathcal S^{\lambda,\omega}}X_{\mathcal S^{\lambda,\omega}}^{T}\big)^{-1}X_{\mathcal S^{\lambda,\omega}}Y$.
Step (3).
Define $X^{\#} = X_{\mathcal S^{\lambda,\omega}}/\hat\omega$, where $\hat\omega = 1/|\hat\beta^{OLS}|^{\gamma}$; then, the adaptive lasso solution $\hat\beta^{Alasso}$ is obtained by solving the lasso problem
$$\hat\beta^{Lasso} = \arg\min_{\beta}\Big\|Y - \sum_{j=1}^{p}X_j^{\#T}\beta_j\Big\|_2^2 + \lambda\sum_{j=1}^{p}|\beta_j|.$$
By means of the Karush–Kuhn–Tucker (KKT) optimality condition, the lasso solution over its active set can be written as
$$\hat\beta^{Lasso} = \big(X_{\mathcal S^{\lambda,\omega}}^{\#}X_{\mathcal S^{\lambda,\omega}}^{\#T}\big)^{-1}\big(X_{\mathcal S^{\lambda,\omega}}^{\#}Y - \lambda\,\mathrm{sgn}(\hat\beta^{Lasso})\big).$$
From the transformation of the predictor matrix defined above, it follows that the adaptive lasso estimator is $\hat\beta^{Alasso} = \hat\beta^{Lasso}/\hat\omega$.
Step (4).
Thus, the improved solution of the relaxed adaptive lasso can be written as
$$\hat\beta_j^{*} = \begin{cases}\dfrac{\phi}{\hat\omega}\big(X_{\mathcal S^{\lambda,\omega}}^{\#}X_{\mathcal S^{\lambda,\omega}}^{\#T}\big)^{-1}\big(X_{\mathcal S^{\lambda,\omega}}^{\#}Y - \lambda\,\mathrm{sgn}(\hat\beta^{Lasso})\big) + (1-\phi)\big(X_{\mathcal S^{\lambda,\omega}}X_{\mathcal S^{\lambda,\omega}}^{T}\big)^{-1}X_{\mathcal S^{\lambda,\omega}}Y, & j\in\mathcal S^{\lambda,\omega},\\[4pt] 0, & j\notin\mathcal S^{\lambda,\omega}.\end{cases}$$
The computational complexity of Algorithm 1 in the best case is equivalent to that of the ordinary lasso. Specifically, in Step (3) of the simple algorithm, the relaxed adaptive lasso estimator is solved in the same way as the relaxed lasso. The improved algorithm is computed from the adaptive lasso and lasso estimators. Given the weight vector, the computational cost of the relaxed adaptive lasso is the same as that of the lasso [21]. Therefore, the computational complexity of Algorithm 2 is equivalent to that of the lasso.
Now we compare the computational cost of the two algorithms. The relaxed lasso's computational cost in the worst case is $O(n^3 p)$, which is slightly more expensive than the cost of the regular lasso at $O(n^2 p)$ (Meinshausen [11]). For this reason, we compute the relaxed adaptive lasso estimator using the improved algorithm.
Algorithm 1. The simple algorithm for the relaxed adaptive lasso.
Input: a given constant $\gamma > 0$; the weight vector $\hat\omega = 1/|\hat\beta^{OLS}|^{\gamma}$.
Precompute: $X^{*} = X/\hat\omega$.
Initialization: let $\lambda_1 > \lambda_2 > \dots > \lambda_h$ be the penalty parameters corresponding to the models $S_1,\dots,S_h$; set $k = 1$ as the initial index of $\lambda_k$.
Define $Q(\beta) = \big\|Y - \sum_{j=1}^{p}X_j^{*T}\beta_j\cdot 1_{\mathcal S^{\lambda,\omega}}\big\|_2^2 + \phi\lambda\sum_{j=1}^{p}|\beta_j|$ and $\tilde\beta = \hat\beta^{\lambda_k} + \lambda_k\delta_k$, where $\delta_k = \big(\hat\beta^{\lambda_k} - \hat\beta^{\lambda_{k-1}}\big)/\big(\lambda_{k-1} - \lambda_k\big)$.
for $j = 1,\dots,p$ do
  if $\mathrm{sgn}(\tilde\beta_j)\ne\mathrm{sgn}(\hat\beta_j^{\lambda_k})$ then
    $\hat\beta^{**}\leftarrow\hat\beta^{Alasso}$
  else
    $\hat\beta^{**}\leftarrow Q(\tilde\beta) + \dfrac{Q(\tilde\beta) - Q(\hat\beta^{\lambda_{k-1}})}{\tilde\beta - \hat\beta^{\lambda_{k-1}}}\big(\tilde\beta - \hat\beta^{\lambda_{k-1}}\big)$
  Set $k = k + 1$
until $k = h$
Output: $\hat\beta_j^{*} = \hat\beta_j^{**}/\hat\omega_j$
Algorithm 2. The improved algorithm for the relaxed adaptive lasso.
Input: adaptive lasso estimator $\hat\beta^{Alasso}$; OLS estimator $\hat\beta^{OLS}$; weight vector $\hat\omega = 1/|\hat\beta^{OLS}|^{\gamma}$.
Precompute: $X^{\#} = X_{\mathcal S^{\lambda,\omega}}/\hat\omega$, where $\mathcal S^{\lambda,\omega}$ is the active set of the adaptive lasso.
Initialization: define $\hat\beta^{Lasso} = \arg\min_{\beta}\big\|Y - \sum_{j=1}^{p}X_j^{\#T}\beta_j\big\|_2^2 + \lambda\sum_{j=1}^{p}|\beta_j|$.
for $j = 1,\dots,p$ do
  if $j\in\mathcal S^{\lambda,\omega}$ then
    compute $\hat\beta^{OLS} = \big(X_{\mathcal S^{\lambda,\omega}}X_{\mathcal S^{\lambda,\omega}}^{T}\big)^{-1}X_{\mathcal S^{\lambda,\omega}}Y$,
    $\hat\beta^{Lasso} = \big(X_{\mathcal S^{\lambda,\omega}}^{\#}X_{\mathcal S^{\lambda,\omega}}^{\#T}\big)^{-1}\big(X_{\mathcal S^{\lambda,\omega}}^{\#}Y - \lambda\,\mathrm{sgn}(\hat\beta^{Lasso})\big)$
  else
    stop iterations
until $j = p$
Output: $\hat\beta^{Alasso} = \hat\beta^{Lasso}/\hat\omega$, $\hat\beta^{*} = \phi\,\hat\beta^{Alasso} + (1-\phi)\,\hat\beta^{OLS}$
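Because the final estimator is a convex combination of the adaptive lasso fit and the OLS refit on its active set, the improved algorithm admits a very compact implementation. The sketch below is a hedged illustration, reusing the adaptive_lasso() helper from above; the function name is an assumption.

```python
# Sketch of the improved algorithm: convex combination of the adaptive lasso fit
# and the OLS refit on its active set (helper names are illustrative).
import numpy as np
from sklearn.linear_model import LinearRegression

def relaxed_adaptive_lasso_improved(X, y, lam, phi, gamma=1.0):
    beta_alasso = adaptive_lasso(X, y, lam, gamma)   # adaptive lasso estimator
    S = np.flatnonzero(beta_alasso)                  # active set S^{lam, omega}
    beta = np.zeros(X.shape[1])
    if S.size == 0:
        return beta                                  # null model
    beta_ols = LinearRegression(fit_intercept=False).fit(X[:, S], y).coef_
    # beta* = phi * beta_Alasso + (1 - phi) * beta_OLS on S, and zero elsewhere.
    beta[S] = phi * beta_alasso[S] + (1.0 - phi) * beta_ols
    return beta
```

As the parameter discussion after Definition 1 suggests, $\phi = 1$ returns the adaptive lasso itself and $\phi = 0$ returns the OLS refit on the active set.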

2.3. Asymptotic Results

To investigate the asymptotic properties, we make the following two assumptions, following the setup of Fu and Knight [18]:
$$\frac1n\sum_{i=1}^{n}x_i x_i^{T}\to\Sigma,$$
where $\Sigma$ is a positive definite matrix. Furthermore,
$$\frac1n\max_{1\le i\le n}x_i^{T}x_i\to 0.$$
Without loss of generality, the sparse constant vector $\beta$ is defined as the true coefficient vector of the model. We assume that the number of nonzero coefficients in the true model is q, that is, $\beta = (\beta_1,\dots,\beta_q,0,0,\dots)$, where $\beta_j\ne 0$ only for $j = 1,\dots,q$ and $\beta_j = 0$ for $j = q+1,\dots,p$. The true model is, hence, $S^{*} = \{1,\dots,q\}$. The covariance matrix $\Sigma = \frac1n XX^{T}$ can be written in block-wise form, i.e., $\Sigma = \begin{pmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22}\end{pmatrix}$, where $\Sigma_{11}$ is a $q\times q$ matrix. The random loss $L(\lambda,\omega)$ of the adaptive lasso is defined as
$$L(\lambda,\omega) = E\big(Y - X^{T}\hat\beta^{Alasso}\big)^2 - \sigma^2.$$
The loss $L(\lambda,\phi,\omega)$ of the relaxed adaptive lasso is analogously defined as
$$L(\lambda,\phi,\omega) = E\big(Y - X^{T}\hat\beta^{*}\big)^2 - \sigma^2.$$
We discover that the relaxed adaptive lasso estimator has the same rapid convergence rate as the relaxed lasso estimator when the exponential growth rate of the size p is ignored. Additionally, the adaptive lasso has a slower pace than both of them but is slightly faster than the lasso estimator. We make the following assumptions concerning asymptotic results for low-dimensional sparse solutions to demonstrate the above conclusion.
Assumption 1.
The number of predictors $p = p_n$ increases almost exponentially with the number of observations n; that is, there exist some $c > 0$ and $0 < s < 1$ such that $p_n\le e^{c\,n^{s}}$.
We cannot rule out the possibility that the remaining $p_n - q$ noise factors are linked with the response. A square matrix is said to be diagonally dominant if, in each row, the magnitude of the diagonal entry is greater than or equal to the sum of the magnitudes of all the other (nondiagonal) entries in that row.
Assumption 2.
$\Sigma$ and $\Sigma^{-1}$ are diagonally dominant at some constant $c < \infty$, for all $n\in\mathbb{N}$.
Notably, a diagonally dominant symmetric matrix with positive diagonal entries is positive definite. Under this premise, the existence of the inverse matrix $\Sigma^{-1}$ is guaranteed.
Assumption 3.
We limit the penalty parameter $\lambda$ to the range $\mathcal{L}$,
$$\mathcal{L} = \big\{\lambda\ge 0 : c\,e^{\,p^{\lambda}}\le n\big\},$$
where $p^{\lambda}$ denotes the number of variables in the model selected with penalty $\lambda$ and $c > 0$ is an arbitrarily large constant.
Assumption 3 holds if the exponential of the number of variables in the selected model is less than the sample size n. Using $\lambda$ values in the range $\mathcal{L}$, the relaxed lasso, adaptive lasso, and relaxed adaptive lasso obtain consistent variable selection and a specified number of nonzero coefficients.
Lemma 1.
Assume that the predictor variables are independent of each other, and let $\lambda_n$, $n\in\mathbb{N}$, be the penalty parameter of the adaptive lasso with order $\lambda_n = O\big(n^{(s-1-2\gamma)/2}\big)$ as $n\to\infty$. Under Assumptions 1–3,
$$P\big(\exists k > q : k\in\mathcal S^{\lambda_n}\big)\to 1, \quad n\to\infty.$$
As a result of Lemma 1, the probability that at least one noise variable is estimated as nonzero tends to one. We prove Theorem 1 by utilizing the conclusion of Lemma 1 on the order of the penalty parameter.
Lemma 2.
Let $\liminf_{n\to\infty} n^{*}/n\ge A^{-1}$ with $A\ge 2$, $n^{*}$ being the number of observations used to construct the estimator. Then, under Assumptions 1–3,
$$\sup_{\lambda\in\mathcal{L},\,\gamma>0}\big|L(\lambda,\phi,\omega) - L_{n^{*}}(\lambda,\phi,\omega)\big| = O_p\big(n^{-1}\log^2 n\big), \quad n\to\infty.$$
We investigate the cost of the specified parameters by examining the order of the relaxed adaptive lasso loss function. Lemma 2 is a technical result that will assist us in proving Theorem 3.
Lemma 3.
Assume that the predictor variables are independent of each other, and let $\lambda_n$, $n\in\mathbb{N}$, be the penalty parameter of the relaxed adaptive lasso with $n^{s+1}\lambda_n^{3}\to\infty$ as $n\to\infty$. Under Assumptions 1–3,
$$P\big(\exists k > q : k\in\mathcal S^{\lambda_n}\big)\to 0.$$
As a result of Lemma 3, the noise variables are estimated as 0. If the penalty parameter ensures that $\lambda_n^{3}$ converges to 0 more slowly than $n^{-(s+1)}$, each noise variable is estimated as nonzero with a probability approaching 0. In addition, Lemma 3 helps to prove Theorem 3 by describing the order of the penalty parameter of the relaxed adaptive lasso.
Theorem 1 addresses whether the adaptive lasso can sustain a fast convergence rate as the number of noise variables increases rapidly, and whether its convergence speed exceeds that of the lasso. The addition of the weight vector enables the adaptive lasso to gain oracle qualities while also increasing the algorithm's rate of convergence.
Theorem 1.
Assume that the predictor variables are independent of each other, so that $\Sigma = I$. Under Assumptions 1–3, for any $t > 0$ and $n\to\infty$, the convergence rate of the adaptive lasso satisfies
$$P\Big(\inf_{\lambda\in\mathcal{L}}L(\lambda,\omega) > t\,n^{-r}\Big)\to 1, \quad r > 1 + 2\gamma - s.$$
On the other hand, Theorem 2 establishes that the convergence rate of the relaxed adaptive lasso is equivalent to that of the relaxed lasso. Theorem 2 resolves the question of whether the convergence rate of the relaxed adaptive lasso is consistent with that of the relaxed lasso by establishing that the convergence rate of the relaxed adaptive lasso is not related to the noise variable’s growth rate r or the parameter s that determines the growth rate.
Theorem 2.
Assume that the predictor variables are independent of each other. Under Assumptions 1–3, for $n\to\infty$, the convergence rate of the relaxed adaptive lasso satisfies
$$\inf_{\lambda\in\mathcal{L},\,\phi\in[0,1],\,\gamma>0}L(\lambda,\phi,\omega) = O_p\big(n^{-1}\big).$$
The shaded regions in Figure 1 represent the rates at which the various estimators converge. The rate of the relaxed adaptive lasso is the same as that of the relaxed lasso; this indicates that the convergence rate of the relaxed adaptive lasso is unaffected by the rapid increase in the number of noise variables, and it can still retain a fast rate. Although the adaptive lasso's convergence rate is suboptimal, it is faster than the lasso's due to the presence of the weight vector. The addition of an excessive number of noise variables slows the lasso estimator, regardless of how the penalty parameter is chosen [11].
The convergence rate of the relaxed adaptive lasso is as robust as the rate of the relaxed lasso, i.e., it is unaffected by noise factors. Theorem 3 demonstrates that cross-validation selection of the parameters λ , ϕ can still maintain a rapid rate.
Franklin [22] indicated that K-fold cross-validation includes K partitions, each consisting of $\tilde n$ observations, where $\tilde n\approx n/K$ for $n\to\infty$. When building an estimator on the set of observations excluding partition R, define the empirical loss on partition R as $L_{R,\tilde n}(\lambda,\phi,\omega)$ for $R = 1,\dots,K$. Let $L^{cv}(\lambda,\phi,\omega)$ be the empirical cross-validation loss,
$$L^{cv}(\lambda,\phi,\omega) = K^{-1}\sum_{R=1}^{K}L_{R,\tilde n}(\lambda,\phi,\omega).$$
The parameters $\hat\lambda$, $\hat\phi$ and $\hat\omega$ are selected by minimizing the loss function $L^{cv}(\lambda,\phi,\omega)$, that is,
$$\big(\hat\lambda,\hat\phi,\hat\omega\big) = \arg\min_{\lambda,\phi,\omega} L^{cv}(\lambda,\phi,\omega).$$
This article uses five-fold cross-validation in the numerical study.
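A plain grid-search version of this cross-validation step might look as follows; the grid values, the fold seed, and the function name are illustrative assumptions, and relaxed_adaptive_lasso() is the helper sketched in Section 2.

```python
# Sketch: K-fold cross-validation over a (lambda, phi) grid, minimizing the
# empirical prediction loss L^cv (illustrative, not the authors' code).
import numpy as np
from sklearn.model_selection import KFold

def cv_select(X, y, lam_grid, phi_grid, gamma=1.0, K=5, seed=0):
    kf = KFold(n_splits=K, shuffle=True, random_state=seed)
    best_lam, best_phi, best_loss = None, None, np.inf
    for lam in lam_grid:
        for phi in phi_grid:
            fold_losses = []
            for train, test in kf.split(X):
                beta = relaxed_adaptive_lasso(X[train], y[train], lam, phi, gamma)
                resid = y[test] - X[test] @ beta
                fold_losses.append(np.mean(resid ** 2))   # empirical loss on fold R
            cv_loss = np.mean(fold_losses)                # L^cv(lambda, phi, omega)
            if cv_loss < best_loss:
                best_lam, best_phi, best_loss = lam, phi, cv_loss
    return best_lam, best_phi, best_loss
```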
Theorem 3.
Under Assumptions 1–3, for K-fold cross-validation with $2\le K < \infty$, it holds that
$$L\big(\hat\lambda,\hat\phi,\hat\omega\big) = O_p\big(n^{-1}\log^2 n\big).$$
Therefore, when K-fold cross-validation is used to determine the relaxed adaptive lasso's penalty parameters $\lambda$ and $\phi$, the convergence speed remains nearly ideal. In other words, selecting the penalty parameters by cross-validation nearly achieves the optimal rate and consistent variable selection attained under an oracle choice of the penalty parameters.
Theorem 4.
If $\lambda_n/n\to\lambda_0\ge 0$, then $\beta\cdot 1_{\mathcal S}\to_{p}\beta$ for the relaxed adaptive lasso estimator; moreover, if $\phi\lambda_n = o(n)$, $\hat\beta^{*}$ is consistent.
Theorem 4 indicates that the relaxed adaptive lasso estimator is consistent under the condition $\phi\lambda_n = o(n)$. The estimator $\hat\beta^{*}$ does not have to be root-n consistent; nonetheless, the consistency of the relaxed adaptive lasso follows from the convergence in probability established in the proof.

3. Simulation

3.1. Setup

We present a numerical study in this section to compare the performance of the relaxed adaptive lasso to that of the lasso, relaxed lasso, and adaptive lasso. Based on the simulation setup of Meinshausen [11], we consider the linear model $y = x^{T}\beta + \varepsilon$, where $x = (x_1,\dots,x_p)$ is the predictor vector and the random error $\varepsilon$ is independent and identically distributed with mean 0 and variance $\sigma^2$. The remaining parameter settings and procedures are as follows.
i.
Given sample size n = 100 , 500 , 1000 and data dimension p = 20 , 50 .
ii.
The true regression coefficient $\beta\in\mathbb{R}^{p}$ has its first $q = 10$ ($q\le p$) signal variables taking nonzero coefficients equally spaced from 0.5 to 10, in the sense that $\beta_j\ne 0$ for all $j\le q$, and the remaining $p - q$ coefficients are zero.
iii.
The design matrix $X\in\mathbb{R}^{n\times p}$ is generated from a normal distribution $N(0,\Sigma)$, where the covariance matrix $\Sigma = \mathrm{cov}(x) = (c_{ij})_{p\times p}$ has entries $c_{ij} = 1$ for $i = j$ and $c_{ij} = \rho^{|i-j|}$ for $i\ne j$, $i, j = 1,\dots,p$. The correlation between predictor variables is set to $\rho = 0.5$.
iv.
The theoretical signal-to-noise ratio in this simulation is defined as $\mathrm{SNR} = \mathrm{Var}(x^{T}\beta)/\sigma^2$. We consider either $\mathrm{SNR} = 0.2$ (low) or $\mathrm{SNR} = 0.8$ (high) and calculate the variance of $\varepsilon$ accordingly, so that the response variable Y generated from the linear regression model follows $N_n(x^{T}\beta,\sigma^2 I)$.
v.
We compute the weights of the adaptive lasso via the ridge regression estimator with $\gamma = 1$. For each method, five-fold cross-validation is used to select the penalty parameters, minimizing the prediction error on the held-out fold. Furthermore, we pick the least complex model that is comparable in accuracy to the best model under the "one-standard-error" criterion [22].
For each of the settings above, this process is repeated a total of 100 times to compute the following evaluation metrics, and the average results are recorded.
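The following is a minimal sketch of the data-generating process described in items (i)–(iv) above, assuming NumPy's random generator; the function name and seed are illustrative.

```python
# Sketch of the simulation's data-generating process (illustrative).
import numpy as np

def simulate(n=100, p=20, q=10, rho=0.5, snr=0.2, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.zeros(p)
    beta[:q] = np.linspace(0.5, 10.0, q)                  # q nonzero signals, equally spaced
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])    # c_ij = rho^|i-j|, unit variances
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    sigma2 = (beta @ Sigma @ beta) / snr                  # noise variance = Var(x^T beta) / SNR
    y = X @ beta + rng.normal(0.0, np.sqrt(sigma2), size=n)
    return X, y, beta, Sigma, sigma2
```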

3.2. Evaluation Metrics

The data are split randomly into a training set and a test set. Suppose that $x_0\in\mathbb{R}^{p}$ is drawn from the rows of the test design matrix X, and let $\hat y_0$ denote its fitted response value. Additionally, let $\hat\beta_0$ denote the corresponding estimated coefficient vector used to predict at $x_0$.
Mean-square error:
$$\mathrm{MSE} = E\big(y_{test} - \hat y_0\big)^2 = E\big(y_{test} - x_0^{T}\hat\beta_0\big)^2.$$
This value assesses the accuracy of the model prediction. A good model has the highest prediction accuracy in the sense that its prediction error, MSE, is minimized. The following metrics were developed by Hastie et al. [12].
Relative accuracy:
$$\mathrm{RA}(\hat\beta) = \frac{E\big(x_0^{T}\hat\beta_0 - x_0^{T}\beta\big)^2}{E\big(x_0^{T}\beta\big)^2} = \frac{(\hat\beta_0 - \beta)^{T}\Sigma(\hat\beta_0 - \beta)}{\beta^{T}\Sigma\beta}.$$
Relative test error:
$$\mathrm{RTE}(\hat\beta) = \frac{E\big(y_{test} - x_0^{T}\hat\beta_0\big)^2}{\sigma^2} = \frac{(\hat\beta_0 - \beta)^{T}\Sigma(\hat\beta_0 - \beta) + \sigma^2}{\sigma^2}.$$
Proportion of variance explained:
$$\mathrm{PVE}(\hat\beta) = 1 - \frac{E\big(y_{test} - x_0^{T}\hat\beta_0\big)^2}{\mathrm{Var}(y_{test})} = 1 - \frac{(\hat\beta_0 - \beta)^{T}\Sigma(\hat\beta_0 - \beta) + \sigma^2}{\beta^{T}\Sigma\beta + \sigma^2}.$$
Number of nonzeros: The average number of nonzero estimated coefficients,
$$\big\|\hat\beta\big\|_0 = \sum_{j=1}^{p}1\{\hat\beta_j\ne 0\},$$
where $1\{\hat\beta_j\ne 0\} = 1$ if $\hat\beta_j\ne 0$ and $0$ if $\hat\beta_j = 0$. An ideal score should be close to the number of true nonzero coefficients q.
In addition to assessing prediction accuracy, this last metric measures correct variable recovery: it quantifies the degree to which the sparsity pattern of the solution $\hat\beta$ to the convex optimization problem in Equation (5) matches that of the true coefficient vector $\beta$.
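Since each metric above reduces to the quadratic form $(\hat\beta - \beta)^{T}\Sigma(\hat\beta - \beta)$, they can be computed directly from the estimated and true coefficients. The sketch below is a toy check of these formulas, not the authors' code.

```python
# Toy sketch of the evaluation metrics, computed from (beta_hat, beta, Sigma, sigma2).
import numpy as np

def metrics(beta_hat, beta, Sigma, sigma2):
    d = beta_hat - beta
    err = d @ Sigma @ d                    # (beta_hat - beta)^T Sigma (beta_hat - beta)
    sig = beta @ Sigma @ beta              # beta^T Sigma beta
    return {
        "MSE": err + sigma2,               # expected squared prediction error
        "RA": err / sig,                   # relative accuracy
        "RTE": (err + sigma2) / sigma2,    # relative test error
        "PVE": 1.0 - (err + sigma2) / (sig + sigma2),  # proportion of variance explained
        "nonzeros": int(np.count_nonzero(beta_hat)),
    }
```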

3.3. Summary of Results

Table 1 summarizes the average results of simulation for lasso, relaxed lasso, adaptive lasso and relaxed adaptive lasso with SNR = 0.2. We find that the relaxed adaptive lasso has the best RR, RTE, PVE and MSE scores on average. In other words, the proposed method achieves the maximum prediction accuracy in the majority of cases, despite occasions where the adaptive lasso’s MSE is somewhat better than that of the relaxed adaptive lasso. Specifically, the adaptive lasso yields a much smaller MSE due to the small sample size (e.g., n = 100 ). However, when the sample size is increased to n = 1000 , the relaxed adaptive lasso outperforms all other methods owing to the feature of large samples in which parametric estimators converge in probability to true parameters.
In Table 2, excellent performance is observed for all methods when the SNR is increased to 0.8. As expected, the relaxed adaptive lasso maintains its competitive edge and achieves good overall performance. In particular, it roughly recovers the correct number of nonzero variables as the number of observations n increases. For $(n,p) = (100,20)$ and $(n,p) = (100,50)$, it retains five and four variables, respectively. For $(n,p) = (1000,20)$ and $(n,p) = (1000,50)$, it retains nine and eight, respectively, approaching the number of truly valid features $q = 10$. This illustrates that the sparsity pattern of the relaxed adaptive lasso estimator achieves proper variable recovery when n is large. In contrast, the relaxed lasso and adaptive lasso shrink too many coefficients toward zero; as a result, fewer variables remain in the resulting model. Therefore, we conclude that as the number of observations n grows, the number of variables preserved in the model grows as well, and the important variables are selected approximately correctly, i.e., proper variable recovery is achieved.

4. Application to Real Data

4.1. Dataset

The real dataset used in this study is from the CSMAR Database, which contains 11 research series on stocks, companies, funds, the economy, industries, etc. It is widely recognized as one of the most professional and accurate databases available for research purposes. Our data include a total of 2137 records, with each record corresponding to the financial data of one listed company in 2021. The training set is made up of the first 1496 observations, and the test set is made up of the rest. The response variable is the R&D investment of the company, and the predictor variables include 86 factors that may have an effect on the firm’s R&D investment, such as fixed-assets depreciation, accounts receivable and payroll payable. To compare the model selection performance of the method proposed in this paper to that of the other three methods, the aforementioned methods are used to fit the model on the training set, and the prediction accuracy of these models is measured in terms of the MSE on the test set. It is shown in the following that the relaxed adaptive lasso has the highest prediction accuracy with the smallest MSE value.
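A hedged outline of this workflow is sketched below. The file name and column names are hypothetical placeholders (the CSMAR extract is not public in this form), and cv_select() and relaxed_adaptive_lasso() are the illustrative helpers sketched in earlier sections.

```python
# Hypothetical sketch of the real-data workflow: fit on the first 1496 records,
# evaluate out-of-sample MSE on the remainder.
import numpy as np
import pandas as pd

df = pd.read_csv("csmar_2021.csv")                    # hypothetical export of the CSMAR data
y = df["rd_investment"].to_numpy()                    # hypothetical response column
X = df.drop(columns=["rd_investment"]).to_numpy()     # the 86 candidate predictors

X_train, y_train = X[:1496], y[:1496]                 # training set
X_test, y_test = X[1496:], y[1496:]                   # test set

lam_hat, phi_hat, _ = cv_select(X_train, y_train,
                                lam_grid=np.logspace(-3, 1, 20),
                                phi_grid=np.linspace(0.0, 1.0, 6))
beta_hat = relaxed_adaptive_lasso(X_train, y_train, lam_hat, phi_hat)
test_mse = np.mean((y_test - X_test @ beta_hat) ** 2) # out-of-sample MSE, as in Table 3
```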

4.2. Analysis Results

As can be seen in Table 3, the MSE values of the lasso and adaptive lasso are as large as 0.521 and 0.575, respectively, indicating that they have the worst prediction accuracy. The relaxed lasso performs somewhat better than the lasso and the adaptive lasso in terms of MSE. As expected, the relaxed adaptive lasso estimator's prediction accuracy remains satisfactory, with the smallest MSE of 0.429. In Table 4, a total of 10 variables are selected by the relaxed adaptive lasso. Cash Paid to and for Employees, Cash Paid for Commodities or Labor, and Business Taxes and Surcharges are identified as the three most influential factors on R&D investment. As a result, we can conclude that the relaxed adaptive lasso leads to the simplest model with the highest prediction accuracy among the four foregoing methods.
Among the most important explanatory variables affecting R&D investment, Cash Paid to and for Employees measures the company's actual benefits and rewards; Cash Paid for Commodities or Labor measures the overall payment ability of the company; and Business Taxes and Surcharges measure the tax burden of the company's operation. According to the coefficients estimated by the simplified model, firms with high Cash Paid to and for Employees and Cash Paid for Commodities or Labor tend to spend more on R&D (positive coefficients), whereas Business Taxes and Surcharges have a negative influence on the company's R&D investment. From the results of the analysis, it is not surprising that companies focused on employee welfare take more advantage of innovative technology, because generous compensation not only improves employees' work motivation but also helps to retain and recruit talent. Furthermore, strong payment ability implies the high profitability of companies with successful operations, allowing them to spend large amounts on R&D. Note that a heavy tax burden may leave a company with less to invest in R&D. In general, increasing R&D input is highly influenced by a few selected variables, the three most important of which reflect the company's welfare, payment ability and tax burden.

5. Conclusions

In this article, we have proposed a two-stage variable selection method called the relaxed adaptive lasso, which combines relaxed lasso and adaptive lasso estimation. From the proofs of the theorems, we conclude that the relaxed adaptive lasso has the same convergence rate as the relaxed lasso, $O_p(n^{-1})$, and that both are faster than the adaptive lasso and the ordinary lasso in the low-dimensional setting. Furthermore, the relaxed adaptive lasso is consistent, which means that the probability of selecting the true model approaches one under the condition $\phi\lambda_n = o(n)$. The simulation study has shown that the proposed method delivers competitive prediction accuracy and accurate variable recovery as the number of observations increases. In practical applications, this conclusion has been confirmed by the analysis of the financial data of listed companies.
We have shown the asymptotic properties of the relaxed adaptive lasso in the linear model. For further research, it is suggested to extend the theory and methodology to the generalized linear model [23]. In addition, the model does not handle the high-dimensional case well, where the variable dimension is much larger than the sample size. We propose to combine the existing idea with screening methods such as Sure Independence Screening (SIS) [24] and Distance Correlation based SIS (DC-SIS) [25] to overcome this challenge.

Author Contributions

Conceptualization, Y.L.; methodology, R.Z., T.Z., Y.L., X.X.; software, R.Z., T.Z., Y.L., X.X.; formal analysis, R.Z., T.Z., Y.L., X.X.; data curation, R.Z., T.Z., Y.L., X.X.; writing—original draft preparation, R.Z., T.Z., Y.L., X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Humanities and Social Science Research Project of Hebei Education Department (SQ201110), Hebei GEO University Science and Technology Innovation Team (KJCXTD-2022-02), Basic scientific research Funds of Universities in Hebei Province (QN202139) and S&T Program of Hebei (22557688D).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 1

Proof. 
First, define the adaptive lasso estimator $\hat\beta^{Alasso}$ on the set $S^{*} = \{1,\dots,q\}$ as
$$\hat\beta^{Alasso} = \arg\min_{\beta}\; n^{-1}\sum_{i=1}^{n}\Big(Y_i - \sum_{k\in S^{*}}\beta_k X_{ik}\Big)^2 + \lambda_n\sum_{j}\hat w_j|\beta_j|,$$
where the estimator is set to zero outside the set $S^{*}$ and $\hat w = 1/|\hat\beta|^{\gamma}$. Following Meinshausen [11], we similarly define the residuals under the adaptive lasso estimator $\hat\beta^{Alasso}$ as
$$D_i = Y_i - \sum_{b\in S^{*}}\hat\beta_b^{Alasso}X_{ib}.$$
Thus,
$$P\big(\exists k > q : k\in\mathcal S^{\lambda_n}\big) \le P\Big(\max_{k>q}\Big|n^{-1}\sum_{i=1}^{n}D_i X_{ik}\Big| > \lambda_n|\hat\beta_k|^{-\gamma}\Big).$$
Consider the distribution of the gradient for $k > q$:
$$n^{-1}\sum_{i=1}^{n}D_i X_{ik} \sim N\Big(0,\; n^{-2}\sum_{i=1}^{n}D_i^2\Big).$$
The expected value of the averaged squared residuals is larger than $\sigma^2(n-q)/n$ for any $\lambda > 0$, so
$$P\Big(n^{-1}\sum_{i=1}^{n}D_i^2 > \frac{\sigma^2}{2}\Big)\to 1, \quad n\to\infty.$$
If $n^{-1}\sum_{i=1}^{n}D_i^2 = \sigma^2/2$, then $n^{-1}\sum_{i=1}^{n}D_i X_{ik} \sim N\big(0,\sigma^2/(2n)\big)$; thus, for some constants $t, d > 0$,
$$P\Big(\max_{k>q}\Big|n^{-1}\sum_{i=1}^{n}D_i X_{ik}\Big| > \lambda_n|\hat\beta_k|^{-\gamma}\Big) \le d\,\lambda_n^{-1}|\hat\beta_k|^{\gamma}\exp\big(-t\,n\lambda_n^2|\hat\beta_k|^{-2\gamma}\big).$$
There are $p_n - q$ noise variables with $k > q$. Bounding the gradient over all $p_n - q$ noise variables gives
$$P\Big(\max_{k>q}\Big|n^{-1}\sum_{i=1}^{n}D_i X_{ik}\Big| > \lambda_n|\hat\beta_k|^{-\gamma}\Big) \le (p_n - q)\,d\,\lambda_n^{-1}|\hat\beta_k|^{\gamma}\exp\big(-t\,n\lambda_n^2|\hat\beta_k|^{-2\gamma}\big).$$
Note that
$$n\lambda_n^2|\hat\beta_k|^{-2\gamma} = n^{2\gamma+1}\lambda_n^2\,O(1).$$
We set the order of the parameter $\lambda_n$ in the adaptive lasso to $\lambda_n = O\big(n^{(s-1-2\gamma)/2}\big)$; then $n^{2\gamma+1}\lambda_n^2 = O(n^{s})$, so $n^{2\gamma+1}\lambda_n^2\to\infty$. According to Assumption 1, $p_n\le e^{c\,n^{s}}$. Thus, for some $g > 0$,
$$\lambda_n^{-1}|\hat\beta_k|^{\gamma} \le \lambda_n^{-1}n^{-\gamma},$$
$$\lambda_n^{-1}n^{-\gamma} \le g\,n^{(1-s)/2},$$
so
$$P\Big(\max_{k>q}\Big|n^{-1}\sum_{i=1}^{n}D_i X_{ik}\Big| > \lambda_n|\hat\beta_k|^{-\gamma}\Big)\to 0, \quad n\to\infty,$$
which, using (A1), completes the proof. □

Appendix B. Proof of Lemma 2

Proof. 
Assume that $S_1,\dots,S_h$ is the collection of models estimated by the adaptive lasso and let $\lambda_k$, $k\in\{1,\dots,h\}$, $\lambda_1<\dots<\lambda_h$, be the largest value such that $S_k = \mathcal S^{\lambda}$. For all $k\in\{1,\dots,h\}$ and $\phi$ a constant with a value between 0 and 1, the relaxed adaptive lasso solutions on the sets $B_1,\dots,B_h$ are, by the definition of a convex combination, given as
$$B_k = \big\{\beta = \phi\,\hat\beta^{Alasso} + (1-\phi)\,\hat\beta^{OLS}\big\}.$$
The estimate $\hat\beta^{Alasso}$ is the adaptive lasso estimate for penalty parameter $\lambda_k$, and $\hat\beta^{OLS}$ is the corresponding OLS estimator. The loss function is given as
$$L(\lambda,\phi,\omega) = E\Big(Y - \sum_{k\in\{1,\dots,p\}}\hat\beta^{*}_k X_k\Big)^2.$$
Substituting into Formula (A2) yields
$$L(\lambda,\phi,\omega) = E\Big(Y - \sum_{k\in\{1,\dots,p\}}\hat\beta^{OLS}_k X_k - \phi\sum_{k\in\{1,\dots,p\}}\big(\hat\beta^{Alasso}_k - \hat\beta^{OLS}_k\big)X_k\Big)^2.$$
For any $\lambda$, set $M_\lambda = Y - \sum_{k\in\{1,\dots,p\}}\hat\beta^{OLS}_k X_k$ and $N_\lambda = \sum_{k\in\{1,\dots,p\}}\hat\beta^{Alasso}_k X_k - \sum_{k\in\{1,\dots,p\}}\hat\beta^{OLS}_k X_k$.
Then
$$L(\lambda,\phi,\omega) = E\,M_\lambda^2 - 2\phi\,E(M_\lambda N_\lambda) + \phi^2 E\,N_\lambda^2.$$
Let $M_\lambda^2 = x$. According to Bernstein's inequality, there is some $m > 0$ such that, for any $\delta\in(0,1)$,
$$P\Big(\Big|\frac1n\sum_i x_i - E\,x\Big| < \frac{d\log(1/\delta)}{n} + \sqrt{\frac{2\,\mathrm{var}(x)\log(1/\delta)}{n}}\Big) \ge 1-\delta.$$
Letting $\delta = 1/n$, we have
$$P\Big(\big|E_{n^{*}}M_\lambda^2 - E\,M_\lambda^2\big| > m\,n^{*\,-1}\log n\Big) = P\Big(\Big|\frac{n-n^{*}}{n\,n^{*}}\sum M_\lambda^2\Big| > m\,n^{*\,-1}\log n\Big) \le \frac1n,$$
so
$$\limsup_{n\to\infty} P\Big(\big|E_{n^{*}}M_\lambda^2 - E\,M_\lambda^2\big| > m\,n^{*\,-1}\log n\Big) < \varepsilon.$$
The same can be obtained:
$$\limsup_{n\to\infty} P\Big(\big|E_{n^{*}}(M_\lambda N_\lambda) - E(M_\lambda N_\lambda)\big| > m\,n^{*\,-1}\log n\Big) < \varepsilon,$$
$$\limsup_{n\to\infty} P\Big(\big|E_{n^{*}}N_\lambda^2 - E\,N_\lambda^2\big| > m\,n^{*\,-1}\log n\Big) < \varepsilon.$$
Hence, for every $\varepsilon > 0$ there exists some $m > 0$ such that
$$\limsup_{n\to\infty} P\Big(\sup_{\lambda,\omega}\big|L(\lambda,\phi,\omega) - L_{n^{*}}(\lambda,\phi,\omega)\big| < h\sup_{\lambda\in\{\lambda_1,\dots,\lambda_h\},\,\omega}\big|L(\lambda_i,\phi,\omega) - L_{n^{*}}(\lambda_i,\phi,\omega)\big|\Big) > 1-\varepsilon,$$
so
$$\limsup_{n\to\infty} P\Big(\big|L(\lambda,\phi,\omega) - L_{n^{*}}(\lambda,\phi,\omega)\big| > m\,n^{*\,-1}\log^2 n\Big) < \varepsilon,$$
which completes the proof. □

Appendix C. Proof of Lemma 3

Proof. 
Using Bonferroni's inequality, we can write
$$P\big(\exists k > q : k\in\mathcal S^{\lambda_n}\big) = P\Big(\bigcup_{k=q+1}^{p}\{k\in\mathcal S^{\lambda_n}\}\Big) \le \sum_{k=q+1}^{p}P\big(k\in\mathcal S^{\lambda_n}\big).$$
By Lemma 1, it follows that
$$\sum_{k=q+1}^{p}P\big(k\in\mathcal S^{\lambda_n}\big) \le \sum_{k=q+1}^{p}d\,\lambda_n^{-1}\exp\big(-t\,n\lambda_n^2\big) = O\big(n^{-1-s}\lambda_n^{-3}\big), \quad s > 0.$$
Let $\lambda_n$ be a sequence with $n^{s+1}\lambda_n^{3}\to\infty$ as $n\to\infty$; then
$$\sum_{k=q+1}^{p}P\big(k\in\mathcal S^{\lambda_n}\big) \le O\big(n^{-1-s}\lambda_n^{-3}\big)\to 0,$$
which completes the proof. □

Appendix D. Proof of Theorem 1

Proof. 
Let $\theta = \beta - \hat\beta^{\lambda^{*}}$ and $\delta^{\lambda} = \hat\beta^{\lambda} - \hat\beta^{\lambda^{*}}$; then
$$\big(\hat\beta_k^{\lambda} - \beta_k\big)^2 = \theta_k^2 - 2\theta_k\delta_k^{\lambda} + \big(\delta_k^{\lambda}\big)^2.$$
For $n\to\infty$ and any $\varepsilon > 0$, we have $(1-\varepsilon)\lambda^{*} < |\theta_k| < (1+\varepsilon)\lambda^{*}$ with probability converging to 1. Hence, for all $k\le q$, there is
$$\big(\hat\beta_k^{\lambda} - \beta_k\big)^2 \ge (1-\varepsilon)^2\lambda^{*2} + 2(1+\varepsilon)\lambda^{*}\delta_k^{\lambda} + \big(\delta_k^{\lambda}\big)^2,$$
and then
$$\big(\hat\beta_k^{\lambda} - \beta_k\big)^2 \ge (1-\varepsilon)^2\lambda^{*2} - 2(1-\varepsilon)^2\lambda^{*}\big(\lambda^{*} - \lambda\big) + (1-\varepsilon)^2\big(\lambda^{*} - \lambda\big)^2.$$
Therefore, with probability converging to 1 for $n\to\infty$, we can obtain
$$\inf_{\lambda\le\lambda^{*}}L(\lambda) \ge \big((1-\varepsilon)^2 + 2q(1-\varepsilon)^2 + q(1-\varepsilon)^2\big)\lambda^{*2}.$$
According to Lemma 1, $\lambda_n \asymp n^{(s-1-2\gamma)/2}$, so
$$\inf_{\lambda\le\lambda^{*}}L(\lambda) \ge O_p\big(n^{-r}\big), \quad r > 1 + 2\gamma - s,$$
which completes the proof. □

Appendix E. Proof of Theorem 2

Proof. 
Denote the set of nonzero coefficients of $\beta$ by $S^{*} = \{1,\dots,q\}$. Define the event $E$ as
$$E = \big\{\exists\lambda : \mathcal S^{\lambda} = S^{*}\big\}.$$
Let $t > 0$; then
$$P\Big(\inf_{\lambda,\phi,\omega}L(\lambda,\phi,\omega) > t\,n^{-1}\Big) \le P\Big(\inf_{\lambda,\phi,\omega}L(\lambda,\phi,\omega) > t\,n^{-1}\,\Big|\,E\Big)P(E) + P(E^{c}).$$
Assume that $\lambda^{*}$ is the smallest value of the penalty parameter that prevents any noise variable from entering the selected model; that is, for all $k > q$,
$$\lambda^{*} = \min\big\{\lambda\ge 0 : \hat\beta_k^{\lambda} = 0,\ k > q\big\}.$$
Let $L^{*}$ be the loss of the OLS estimator. It follows that
$$P\Big(\inf_{\lambda,\phi,\omega}L(\lambda,\phi,\omega) > t\,n^{-1}\Big) \le P\big(L^{*} > t\,n^{-1}\big) + P(E^{c}).$$
We have $P(E^{c})\to 0$ for $n\to\infty$. According to the properties of the OLS estimator,
$$\limsup_{n\to\infty}P\big(L^{*} > t\,n^{-1}\big) < \varepsilon,$$
which completes the proof. □

Appendix F. Proof of Theorem 3

Proof. 
For any $g > 0$, under $(\hat\lambda,\hat\phi,\hat\omega)$, we obtain
$$P\big(L(\hat\lambda,\hat\phi,\hat\omega) > g\,n^{-1}\log^2 n\big) \le 2\varepsilon.$$
The loss function satisfies
$$P\big(L(\hat\lambda,\hat\phi,\hat\omega) > g\,n^{-1}\log^2 n\big) \le P\big(L^{cv}(\hat\lambda,\hat\phi,\hat\omega) > \tfrac{g}{2}\,n^{-1}\log^2 n\big) + P\Big(\sup\big|L(\hat\lambda,\hat\phi,\hat\omega) - L^{cv}(\hat\lambda,\hat\phi,\hat\omega)\big| > \tfrac{1}{2}g\,n^{-1}\log^2 n\Big) + P\Big(\inf L(\hat\lambda,\hat\phi,\hat\omega) > \tfrac{1}{2}g\,n^{-1}\log^2 n\Big).$$
By Lemma 2, for each $\varepsilon > 0$, there exists $g > 0$ such that
$$\limsup_{n\to\infty}P\big(L(\hat\lambda,\hat\phi,\hat\omega) > g\,n^{-1}\log^2 n\big) < \varepsilon,$$
which completes the proof. □

Appendix G. Proof of Theorem 4

Proof. 
According to Theorem 1 of Fu and Knight [18], we have $\beta\cdot 1_{\mathcal S}\to_{p}\beta$.
Define
$$V_n(\hat\beta_n) = \frac1n\sum_{i=1}^{n}\big(Y_i - x_i^{T}\beta\cdot 1_{\mathcal S}\big)^2 + \frac{\phi\lambda_n}{n}\sum_{j=1}^{p}|\beta_j|,$$
and note that
$$V_n(\hat\beta_n) \ge \frac1n\sum_{i=1}^{n}\big(Y_i - x_i^{T}\beta\big)^2 = V_n^{0}(\hat\beta_n).$$
So $\arg\min V_n^{0}(\hat\beta_n) = O_p(1)$; moreover, $V_n(\hat\beta_n)\ge V_n^{0}(\hat\beta_n)$, so
$$\arg\min V_n^{0}(\hat\beta_n) = \arg\min V_n(\hat\beta_n) = O_p(1).$$
We have $\hat\beta_n = O_p(1)$ and
$$V_n(\hat\beta_n) = \frac1n\sum_{i=1}^{n}\big(\varepsilon_i + x_i^{T}(\beta - \hat\beta_n)\cdot 1_{\mathcal S}\big)^2 + \frac{\phi\lambda_n}{n}\sum_{j=1}^{p}|\beta_j|.$$
According to the pointwise convergence principle and Lemma 3,
$$\lim_{n\to\infty}V(\hat\beta_n) = \frac{\phi\lambda_n}{n}\sum_{j=1}^{p}|\beta_j|;$$
then,
$$V_n(\hat\beta_n) = E\,\varepsilon_i^2 - 2\,\frac1n\sum_{i=1}^{n}\varepsilon_i x_i^{T}\big(\hat\beta_n\cdot 1_{\mathcal S} - \beta\big) + \lim_{n\to\infty}V(\hat\beta_n) = \sigma^2 + V(\hat\beta_n),$$
so $\sup\big|V_n(\hat\beta_n) - V(\hat\beta_n) - \sigma^2\big|\to_{p}0$. Then
$$\arg\min V_n\to_{p}\arg\min V,$$
$$\hat\beta_n\to_{p}\beta,$$
which proves the consistency. □

References

  1. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288.
  2. Wang, S.; Weng, H.; Maleki, A. Which bridge estimator is optimal for variable selection? arXiv 2017, arXiv:1705.08617.
  3. Meinshausen, N.; Bühlmann, P. High-dimensional graphs and variable selection with the lasso. Ann. Stat. 2006, 34, 1436–1462.
  4. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360.
  5. Fan, J.; Li, R. Statistical challenges with high dimensionality: Feature selection in knowledge discovery. arXiv 2006, arXiv:math/0602133.
  6. Zou, H. The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429.
  7. Fan, J.; Peng, H. Nonconcave penalized likelihood with a diverging number of parameters. Ann. Stat. 2004, 32, 928–961.
  8. Donoho, D.L.; Johnstone, J.M. Ideal spatial adaptation by wavelet shrinkage. Biometrika 1994, 81, 425–455.
  9. Breiman, L. Better subset regression using the nonnegative garrote. Technometrics 1995, 37, 373–384.
  10. Yuan, M.; Lin, Y. On the non-negative garrotte estimator. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2007, 69, 143–161.
  11. Meinshausen, N. Relaxed lasso. Comput. Stat. Data Anal. 2007, 52, 374–393.
  12. Hastie, T.; Tibshirani, R.; Tibshirani, R.J. Extended comparisons of best subset selection, forward stepwise selection, and the lasso. arXiv 2017, arXiv:1707.08692.
  13. Mentch, L.; Zhou, S. Randomization as regularization: A degrees of freedom explanation for random forest success. arXiv 2019, arXiv:1911.00190.
  14. Bloise, F.; Brunori, P.; Piraino, P. Estimating intergenerational income mobility on sub-optimal data: A machine learning approach. J. Econ. Inequal. 2021, 19, 643–665.
  15. He, Y. The Analysis of Impact Factors of Foreign Investment Based on Relaxed Lasso. J. Appl. Math. Phys. 2017, 5, 693–699.
  16. Kang, C.; Huo, Y.; Xin, L.; Tian, B.; Yu, B. Feature selection and tumor classification for microarray data using relaxed Lasso and generalized multi-class support vector machine. J. Theor. Biol. 2019, 463, 77–91.
  17. Tay, J.K.; Narasimhan, B.; Hastie, T. Elastic net regularization paths for all generalized linear models. arXiv 2021, arXiv:2103.03475.
  18. Fu, W.; Knight, K. Asymptotics for lasso-type estimators. Ann. Stat. 2000, 28, 1356–1378.
  19. Zhao, P.; Yu, B. On model selection consistency of Lasso. J. Mach. Learn. Res. 2006, 7, 2541–2563.
  20. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–499.
  21. Huang, J.; Ma, S.; Zhang, C.H. Adaptive Lasso for sparse high-dimensional regression models. Stat. Sin. 2008, 18, 1603–1618.
  22. Franklin, J. The elements of statistical learning: Data mining, inference and prediction. Math. Intell. 2005, 27, 83–85.
  23. McCullagh, P.; Nelder, J.A. Generalized Linear Models; Routledge: Oxfordshire, UK, 2019.
  24. Fan, J.; Lv, J. Sure independence screening for ultrahigh dimensional feature space. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2008, 70, 849–911.
  25. Li, R.; Zhong, W.; Zhu, L. Feature screening via distance correlation learning. J. Am. Stat. Assoc. 2012, 107, 1129–1139.
Figure 1. Comparison of convergence rates between the relaxed adaptive lasso, relaxed lasso, adaptive lasso, and ordinary lasso. Both the relaxed adaptive lasso and the relaxed lasso attain the same rate $O_p(n^{-1})$, regardless of s. The adaptive lasso has a rate of $O_p(n^{-r})$ only if $r > 1 + 2\gamma - s$. Additionally, the rate of the lasso is $O_p(n^{-r})$ only if $r > 1 - s$.
Table 1. Simulation results for SNR = 0.2.

p | n | Method | RR | RTE | PVE | MSE | Number of Nonzeros
20 | 100 | Lasso | 0.997 | 1.206 | 0.4 | 96.2 | 1
20 | 100 | Rlasso | 0.997 | 1.205 | 0.6 | 95.4 | 1
20 | 100 | Alasso | 0.995 | 1.205 | 0.8 | 94.5 | 1
20 | 100 | Radlasso | 0.986 | 1.203 | 2.4 | 100.1 | 2
20 | 500 | Lasso | 0.990 | 1.199 | 1.6 | 91.4 | 4
20 | 500 | Rlasso | 0.987 | 1.198 | 2.1 | 90.3 | 2
20 | 500 | Alasso | 0.989 | 1.198 | 1.9 | 90.5 | 3
20 | 500 | Radlasso | 0.974 | 1.196 | 4.3 | 86.5 | 6
20 | 1000 | Lasso | 0.987 | 1.197 | 2.1 | 90.2 | 5
20 | 1000 | Rlasso | 0.983 | 1.196 | 2.9 | 89.6 | 3
20 | 1000 | Alasso | 0.985 | 1.197 | 2.4 | 90.1 | 4
20 | 1000 | Radlasso | 0.974 | 1.195 | 4.4 | 86.8 | 7
50 | 100 | Lasso | 0.998 | 1.197 | 0.4 | 99.7 | 1
50 | 100 | Rlasso | 0.997 | 1.197 | 0.5 | 99.9 | 1
50 | 100 | Alasso | 0.993 | 1.196 | 1.2 | 98.8 | 2
50 | 100 | Radlasso | 0.985 | 1.195 | 2.3 | 106.8 | 2
50 | 500 | Lasso | 0.992 | 1.200 | 1.4 | 93.4 | 4
50 | 500 | Rlasso | 0.986 | 1.199 | 2.3 | 92.5 | 2
50 | 500 | Alasso | 0.988 | 1.199 | 1.9 | 91.6 | 3
50 | 500 | Radlasso | 0.976 | 1.197 | 4.0 | 90.6 | 5
50 | 1000 | Lasso | 0.987 | 1.195 | 2.1 | 88.8 | 5
50 | 1000 | Rlasso | 0.982 | 1.195 | 2.9 | 88.0 | 3
50 | 1000 | Alasso | 0.985 | 1.195 | 2.5 | 88.4 | 4
50 | 1000 | Radlasso | 0.974 | 1.193 | 4.3 | 86.5 | 6
Table 2. Simulation results for SNR = 0.8.

p | n | Method | RR | RTE | PVE | MSE | Number of Nonzeros
20 | 100 | Lasso | 0.980 | 1.789 | 8.8 | 75.1 | 5
20 | 100 | Rlasso | 0.972 | 1.783 | 12.1 | 73.8 | 3
20 | 100 | Alasso | 0.975 | 1.785 | 11.1 | 72.8 | 4
20 | 100 | Radlasso | 0.960 | 1.773 | 17.8 | 75.2 | 5
20 | 500 | Lasso | 0.969 | 1.781 | 13.8 | 61.5 | 7
20 | 500 | Rlasso | 0.962 | 1.775 | 17.1 | 60.7 | 5
20 | 500 | Alasso | 0.967 | 1.780 | 14.7 | 61.9 | 6
20 | 500 | Radlasso | 0.956 | 1.771 | 19.7 | 58.8 | 9
20 | 1000 | Lasso | 0.966 | 1.762 | 14.8 | 59.3 | 8
20 | 1000 | Rlasso | 0.959 | 1.756 | 17.8 | 58.5 | 6
20 | 1000 | Alasso | 0.964 | 1.760 | 15.9 | 59.3 | 7
20 | 1000 | Radlasso | 0.956 | 1.753 | 19.4 | 57.1 | 9
50 | 100 | Lasso | 0.985 | 1.784 | 6.7 | 75.5 | 4
50 | 100 | Rlasso | 0.978 | 1.779 | 9.4 | 73.4 | 3
50 | 100 | Alasso | 0.974 | 1.775 | 11.4 | 69.7 | 6
50 | 100 | Radlasso | 0.963 | 1.766 | 16.2 | 83.3 | 4
50 | 500 | Lasso | 0.970 | 1.773 | 13.1 | 62.9 | 7
50 | 500 | Rlasso | 0.963 | 1.767 | 16.6 | 61.5 | 5
50 | 500 | Alasso | 0.967 | 1.770 | 14.6 | 61.9 | 6
50 | 500 | Radlasso | 0.958 | 1.763 | 18.7 | 60.4 | 7
50 | 1000 | Lasso | 0.967 | 1.774 | 14.6 | 59.7 | 8
50 | 1000 | Rlasso | 0.960 | 1.768 | 17.9 | 58.7 | 6
50 | 1000 | Alasso | 0.964 | 1.772 | 15.8 | 59.4 | 7
50 | 1000 | Radlasso | 0.957 | 1.765 | 19.3 | 57.6 | 8
NOTE: The MSE and PVE values in the table are 100 and 1000 times larger to emphasize the distinction between these methods.
Table 3. Prediction accuracy for R&D investment study.

Method | Lasso | Rlasso | Alasso | Radlasso
MSE | 0.521 | 0.485 | 0.575 | 0.429
Table 4. Variables selected by Radlasso.

Order Number | Explanatory Variable | Coefficient
$x_{10}$ | Cash Flow from Operations | 0.008
$x_{13}$ | Net Increase in Cash and Cash Equivalents | 0.048
$x_{15}$ | Net Accounts Receivable | 0.208
$x_{26}$ | Non-Current Assets | −0.214
$x_{48}$ | Business Taxes and Surcharges | −0.265
$x_{67}$ | Interest Income | 0.130
$x_{70}$ | Profit and Loss from Asset Disposal | 0.154
$x_{73}$ | Cash Paid for Commodities or Labor | 0.386
$x_{74}$ | Cash Paid to and for Employees | 0.569
$x_{83}$ | Cash Flow from Financing Activities Net Amount | −0.080
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

