Article

Application of LADMM and As-LADMM for a High-Dimensional Partially Linear Model

School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(19), 4220; https://doi.org/10.3390/math11194220
Submission received: 11 September 2023 / Revised: 3 October 2023 / Accepted: 7 October 2023 / Published: 9 October 2023
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

This paper studies the application of the linearized alternating direction method of multipliers (LADMM) and the accelerated symmetric linearized alternating direction method of multipliers (As-LADMM) to high-dimensional partially linear models. First, we construct an $\ell_1$-penalized least squares estimation of the partially linear model subject to linear constraints. Next, we design the LADMM algorithm to solve the model, introducing a linearization technique that linearizes one of the subproblems so that an approximate solution can be obtained. Furthermore, we add appropriate acceleration techniques to obtain the As-LADMM algorithm. Numerical simulations are then conducted to compare the two algorithms; in terms of the mean squared error, the number of iterations and the running time, the As-LADMM algorithm outperforms the LADMM algorithm. Finally, we apply both algorithms to the practical problem of predicting Boston housing prices. The loss between the predicted and actual values is relatively small, and the As-LADMM algorithm shows a good prediction effect.
MSC:
90C25; 90C30; 62J05; 90C06

1. Introduction

With the development of information and intelligence in the era of big data, the analysis of high-dimensional data has become an important research topic [1,2]. The relationships between the variables of high-dimensional data are diverse and complex, with the partially linear model being one of the most important among them [3,4], and a number of research results have been obtained [5]. Various methods have been proposed for variable selection and estimation in high-dimensional partially linear models, such as the SCAD-penalized method [6] and selection via the lasso [7,8]; Ma et al. [9] studied the properties of the lasso in high-dimensional partially linear models. Profile and restricted profile estimation methods have also been proposed [10,11,12]. Lian et al. [13,14], Guo et al. [15], and Wu et al. [16] studied variable selection in partially linear additive models, and other regression and variable selection methods, such as quantile regression and spline estimation, have been investigated in [17,18,19].
This paper considers the following partially linear model (PLM)
$$ Y = X^T\beta + B^T\gamma + \varepsilon, \qquad (1) $$
where $Y \in \mathbb{R}$ is the response variable, $X = (X_1, \dots, X_p)^T \in \mathbb{R}^p$ and $Z \in \mathbb{R}$ are explanatory variables, $\beta = (\beta_1, \dots, \beta_p)^T \in \mathbb{R}^p$ is the parameter vector, $B = B(Z) = (B_1(Z), \dots, B_{m_n}(Z))^T$ is a set of B-spline basis functions of order $r$, $\gamma = (\gamma_1, \dots, \gamma_{m_n})^T$ is the spline coefficient vector, and $\varepsilon$ is a random error. The parametric part of the model is a linear model, while the nonparametric part uses B-spline basis functions [20] to approximate the unknown function, which combines the interpretability of the linear model with the flexibility of the nonparametric model.
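To make the nonparametric part concrete, the following is a minimal NumPy sketch of evaluating a B-spline basis matrix $B(Z)$ via the Cox–de Boor recursion. The function names, the clamped knot layout, and the number of interior knots are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def clamped_knots(n_interior, degree, a=0.0, b=1.0):
    """Clamped knot vector on [a, b] with equally spaced interior knots."""
    interior = np.linspace(a, b, n_interior + 2)[1:-1]
    return np.concatenate(([a] * (degree + 1), interior, [b] * (degree + 1)))

def bspline_basis(z, knots, degree):
    """Evaluate all B-spline basis functions of the given degree at the points z
    (Cox-de Boor recursion). Returns an array of shape
    (len(z), len(knots) - degree - 1), one column per basis function."""
    z = np.asarray(z, dtype=float)
    # degree-0 basis: indicator of each knot interval
    B = np.zeros((z.size, len(knots) - 1))
    for j in range(len(knots) - 1):
        B[:, j] = (knots[j] <= z) & (z < knots[j + 1])
    # include the right endpoint in the last non-degenerate interval
    B[z == knots[-1], len(knots) - degree - 2] = 1.0
    # raise the degree one step at a time
    for d in range(1, degree + 1):
        B_new = np.zeros((z.size, len(knots) - d - 1))
        for j in range(len(knots) - d - 1):
            left = knots[j + d] - knots[j]
            right = knots[j + d + 1] - knots[j + 1]
            term = np.zeros(z.size)
            if left > 0:
                term += (z - knots[j]) / left * B[:, j]
            if right > 0:
                term += (knots[j + d + 1] - z) / right * B[:, j + 1]
            B_new[:, j] = term
        B = B_new
    return B

# Example: cubic B-spline basis (order r = 4, degree 3) on [0, 1]
Z = np.random.uniform(0.0, 1.0, size=200)
B_mat = bspline_basis(Z, clamped_knots(n_interior=5, degree=3), degree=3)  # the matrix B(Z)
```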
In [11], restricted profile estimation was proposed for partially linear models with large-dimensional covariates and solved by the Lagrange multiplier method [21]. In [12], the alternating direction method of multipliers (ADMM) was applied to solve the model by constructing an augmented Lagrangian function; the ADMM algorithm itself was studied in [22,23].
In this paper, we investigate this partially linear model further. When estimating β and γ in practice, the model may become overly complex and overfit: it may fit the training data well yet lack generalization ability. Specifically, since the covariates in the parametric part are sparse, an $\ell_1$-norm regularization term can be added to increase the generalization ability of the model. Therefore, we study the $\ell_1$-penalized partially linear model, and we focus on the linearized alternating direction method of multipliers (LADMM) and the accelerated symmetric linearized alternating direction method of multipliers (As-LADMM) for solving it.
The linearized alternating direction method of multipliers (LADMM) was studied in [24]; a linearization technique is introduced to linearize one of the subproblems so that an approximate solution can be obtained. When appropriate acceleration techniques are added to an optimization algorithm, its rate of convergence can be improved effectively, for example via the Nesterov acceleration technique [25,26]. An accelerated linearized alternating direction method of multipliers (AADMM) was proposed in [27,28], which combines multi-step acceleration schemes with linearized ADMM and was shown to have a better convergence rate than ADMM. A symmetric ADMM (s-ADMM) was proposed in [29]; it is an easy-to-implement strategy for accelerating ADMM that can be applied immediately to a variety of practical examples. In [30], an inexact accelerated stochastic ADMM scheme for linearly constrained separable convex optimization was proposed.
Considering the sparsity of the covariates in the parametric part, and in order to reduce the complexity of the model and avoid overfitting, we study the $\ell_1$-penalized partially linear model. The subproblems generated by ADMM must be solved at every iteration, but not all of them admit closed-form solutions; we therefore obtain approximate solutions of such subproblems via linearization and use LADMM and As-LADMM to solve the model.
This paper is organized as follows. In Section 2, we construct the $\ell_1$-penalized estimation of the high-dimensional partially linear model. In Section 3, we employ LADMM to solve the resulting model, denoted L1PLM. In Section 4, we apply the As-LADMM algorithm to the same model. In Section 5, numerical results are reported. Finally, we apply both algorithms to a practical problem.

2. $\ell_1$-Penalty Estimation for the High-Dimensional Partially Linear Model

Suppose $(Y_1, X_1^T, Z_1), \dots, (Y_n, X_n^T, Z_n)$ is an independent and identically distributed sample from the model and $\varepsilon = (\varepsilon_1, \dots, \varepsilon_n)^T$; then model (1) can be written as
$$ Y_i = X_i^T\beta + B(Z_i)^T\gamma + \varepsilon_i. $$
In order to estimate β and γ, Wang et al. [11] studied restricted profile least squares estimation using the Lagrange multiplier method for the following optimization problem
$$ \min_{\beta,\gamma} \frac{1}{2}\|Y - X^T\beta - B(Z)^T\gamma\|^2, \quad \text{s.t. } R\beta = d, $$
where $R$ is a given $k \times p$ matrix of rank $k$ and $d$ is a known $k$-dimensional vector. An augmented Lagrangian function was constructed to transform the constrained optimization problem into an unconstrained one, and the ADMM algorithm was applied to this high-dimensional partially linear model in [12].
In practice, when the covariates in the parametric part of the partially linear model are sparse, a regularization term can be added to the model; an $\ell_1$-norm regularization term increases its generalization ability. On the one hand, $\ell_1$-norm regularization yields sparse solutions, in which many components of the parameter vector are exactly zero. Such a sparse solution discards features that have no effect, or only a weak effect, on the response, which effectively simplifies the model. On the other hand, without regularization the estimated parameter vector may not be unique, and a single component may take a particularly large absolute value; in that case, small changes in the corresponding feature can cause pathological changes in the fitted values. Adding an $\ell_1$-penalty on the parameter vector to the objective function effectively prevents this situation. Therefore, owing to the sparsity of the covariates in the parametric part, introducing the $\ell_1$-penalty yields a model that not only fits the training set but is also simple and effective, and we study the $\ell_1$-penalized partially linear model.
The objective function for estimating β and γ by the $\ell_1$-penalized least squares method is
$$ \min_{\beta,\gamma} \frac{1}{2}\|Y - X^T\beta - B(Z)^T\gamma\|^2 + \theta\|\beta\|_1. $$
Therefore, we study the following optimization problem for estimating β and γ, denoted L1PLM:
$$ \min_{\beta,\gamma} \frac{1}{2}\|Y - X^T\beta - B(Z)^T\gamma\|^2 + \theta\|\beta\|_1, \quad \text{s.t. } R\beta = d. $$

3. LADMM Algorithm for $\ell_1$-Penalty Estimation in the High-Dimensional Partially Linear Model

In this section, we apply LADMM to solve the L1PLM problem and provide an algorithmic framework for it.

3.1. Solution of $\ell_1$-Penalty Estimation for the High-Dimensional Partially Linear Model Using LADMM

For the optimization problem L1PLM, the augmented Lagrange multiplier method transforms the constrained program into an unconstrained one, with the augmented Lagrangian function
$$ Q_\rho(\beta, \gamma, \lambda) = \frac{1}{2}\|Y - X\beta - B\gamma\|^2 + \theta\|\beta\|_1 + \langle \lambda, R\beta - d \rangle + \frac{\rho}{2}\|R\beta - d\|^2. $$
Using the classical alternating direction method of multipliers (ADMM), the $n$-th iteration starts from the given $(\beta^n, \lambda^n)$ and produces the new iterate $(\gamma^{n+1}, \beta^{n+1}, \lambda^{n+1})$ via the following scheme:
$$ \begin{aligned} \gamma^{n+1} &= \arg\min_\gamma Q_\rho(\beta^n, \gamma, \lambda^n), \\ \beta^{n+1} &= \arg\min_\beta Q_\rho(\beta, \gamma^{n+1}, \lambda^n), \\ \lambda^{n+1} &= \lambda^n + \rho (R\beta^{n+1} - d). \end{aligned} $$
Now, let us solve these subproblems.
Firstly, the $\gamma$-subproblem can be written as follows
$$ \gamma^{n+1} = \arg\min_\gamma \left\{ \frac{1}{2}\|Y - X\beta^n - B\gamma\|^2 \right\}. $$
Since $\beta^n$ is given, taking the partial derivative with respect to $\gamma$ and setting it to zero gives
$$ \frac{\partial Q_\rho(\beta, \gamma, \lambda)}{\partial \gamma} = -B^T(Y - X\beta^n - B\gamma) = 0. $$
The analytical solution of γ is
$$ \gamma^{n+1} = (B^T B)^{-1} B^T (Y - X\beta^n). $$
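This γ-update is simply an ordinary least squares fit of the current residual $Y - X\beta^n$ on the basis matrix $B$. A small sketch (variable names are illustrative, not the authors' code):

```python
import numpy as np

def update_gamma(Y, X, B, beta):
    """gamma^{n+1} = (B^T B)^{-1} B^T (Y - X beta^n), computed with a
    linear solve instead of an explicit matrix inverse."""
    resid = Y - X @ beta
    return np.linalg.solve(B.T @ B, B.T @ resid)
```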
Secondly, for the $\beta$-subproblem, substituting $\gamma^{n+1}$ into Equation (5), the problem can be written as follows
$$ \begin{aligned} \beta^{n+1} &= \arg\min_\beta \left\{ \frac{1}{2}\|Y - X\beta - B\gamma^{n+1}\|^2 + \theta\|\beta\|_1 + \langle \lambda^n, R\beta - d \rangle + \frac{\rho}{2}\|R\beta - d\|^2 \right\} \\ &= \arg\min_\beta \left\{ \theta\|\beta\|_1 + \frac{1}{2}\|Y - X\beta - B\gamma^{n+1}\|^2 + \frac{\rho}{2}\left\|R\beta - d + \frac{\lambda^n}{\rho}\right\|^2 \right\} \\ &= \arg\min_\beta \left\{ \theta\|\beta\|_1 + \frac{1}{2}\|\hat{X}\beta + \hat{B}\gamma^{n+1} - \hat{Y}\|^2 \right\}, \end{aligned} $$
where $\hat{X} = (X^T, \sqrt{\rho}\,R^T)^T$, $\hat{B} = (B^T, 0)^T$ and $\hat{Y} = \big(Y^T, \sqrt{\rho}\,(d - \lambda^n/\rho)^T\big)^T$.
Since $\hat{X}^T\hat{X}$ may not be positive definite, this subproblem has no closed-form solution. Following the LADMM approach, we linearize the quadratic term at $\beta^n$ as follows:
$$ \frac{1}{2}\|\hat{X}\beta + \hat{B}\gamma^{n+1} - \hat{Y}\|^2 \;\approx\; \left(\hat{X}^T(\hat{X}\beta^n + \hat{B}\gamma^{n+1} - \hat{Y})\right)^T(\beta - \beta^n) + \frac{\nu}{2}\|\beta - \beta^n\|^2. $$
Therefore, the subproblem to be solved is equivalent to
$$ \begin{aligned} \beta^{n+1} &= \arg\min_\beta \left\{ \theta\|\beta\|_1 + \left(\hat{X}^T(\hat{X}\beta^n + \hat{B}\gamma^{n+1} - \hat{Y})\right)^T(\beta - \beta^n) + \frac{\nu}{2}\|\beta - \beta^n\|^2 \right\} \\ &= \arg\min_\beta \left\{ \theta\|\beta\|_1 + \frac{\nu}{2}\left\|\beta - \beta^n + \frac{\hat{X}^T(\hat{X}\beta^n + \hat{B}\gamma^{n+1} - \hat{Y})}{\nu}\right\|^2 \right\}. \end{aligned} $$
The closed-form solution of this subproblem can be obtained by soft thresholding:
$$ \beta^{n+1} = \mathrm{shrink}\left( \beta^n - \frac{\hat{X}^T(\hat{X}\beta^n + \hat{B}\gamma^{n+1} - \hat{Y})}{\nu}, \frac{\theta}{\nu} \right) = \tilde{u}^n - P_C(\tilde{u}^n), $$
where $\tilde{u}^n = \beta^n - \frac{\hat{X}^T(\hat{X}\beta^n + \hat{B}\gamma^{n+1} - \hat{Y})}{\nu}$, $C = [-\frac{\theta}{\nu}, \frac{\theta}{\nu}]$ (componentwise), and $P_C$ is the projection onto $C$.
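The shrink operator used here is the componentwise soft-threshold, which is equivalent to $\tilde{u}^n - P_C(\tilde{u}^n)$ with $C = [-\theta/\nu, \theta/\nu]$. A minimal sketch of both forms (names are illustrative):

```python
import numpy as np

def soft_threshold(u, t):
    """Componentwise soft-threshold: sign(u) * max(|u| - t, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def soft_threshold_via_projection(u, t):
    """Equivalent form u - P_C(u), where P_C projects onto the box [-t, t]."""
    return u - np.clip(u, -t, t)

u = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
assert np.allclose(soft_threshold(u, 0.5), soft_threshold_via_projection(u, 0.5))
```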
By solving the $\gamma$-, $\beta$- and $\lambda$-subproblems separately, we obtain $\gamma^{n+1}$, $\beta^{n+1}$ and $\lambda^{n+1}$ as follows
$$ \begin{aligned} \gamma^{n+1} &= (B^T B)^{-1} B^T (Y - X\beta^n), \\ \beta^{n+1} &= \tilde{u}^n - P_C(\tilde{u}^n), \\ \lambda^{n+1} &= \lambda^n + \rho (R\beta^{n+1} - d). \end{aligned} $$

3.2. LADMM Algorithm Design for $\ell_1$-Penalty Estimation in the High-Dimensional Partially Linear Model

Based on the characteristics of the solution of l 1 -penalty estimation for the high-dimensional partially linear model, the algorithm scheme using LADMM is as follows (Algorithm 1).
Algorithm 1 Iterative Scheme of LADMM for L1PLM
Step 1: Input X, Y, B. Given the initial variables $(\beta^0, \gamma^0, \lambda^0)$,
            choose the penalty parameters $\rho > 0$, $\theta > 0$. Set the iteration counter n = 1;
Step 2: Update $\gamma^{n+1}$, $\beta^{n+1}$, $\lambda^{n+1}$ by Equation (12);
Step 3: If the termination criterion is not met at the n-th iteration,
            set n = n + 1 and go to Step 2; otherwise, go to the next step;
Step 4: Output $(\beta^N, \gamma^N, \lambda^N)$ as the approximate solution of $(\hat{\beta}, \hat{\gamma}, \hat{\lambda})$, where N is the final iteration.
It can be shown that the LADMM algorithm converges to the optimal solution under certain conditions; see references [28,29].
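For concreteness, the following is a minimal Python/NumPy sketch of Algorithm 1, reusing `update_gamma` and `soft_threshold` from above. The stopping rule, the step size $\nu$, and the $\sqrt{\rho}$ scaling of the augmented design are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def ladmm_l1plm(Y, X, B, R, d, theta, rho, nu, max_iter=500, tol=1e-6):
    """LADMM for the L1PLM problem (a sketch of Algorithm 1)."""
    n, p = X.shape
    k = R.shape[0]
    beta = np.zeros(p)
    lam = np.zeros(k)
    gamma = update_gamma(Y, X, B, beta)
    # augmented design used in the beta-subproblem (assumes the sqrt(rho) scaling)
    X_hat = np.vstack([X, np.sqrt(rho) * R])
    B_hat = np.vstack([B, np.zeros((k, B.shape[1]))])
    for it in range(max_iter):
        gamma = update_gamma(Y, X, B, beta)                       # gamma-step
        Y_hat = np.concatenate([Y, np.sqrt(rho) * (d - lam / rho)])
        grad = X_hat.T @ (X_hat @ beta + B_hat @ gamma - Y_hat)   # gradient at beta^n
        beta_new = soft_threshold(beta - grad / nu, theta / nu)   # linearized beta-step
        lam = lam + rho * (R @ beta_new - d)                      # dual step
        converged = np.linalg.norm(beta_new - beta) <= tol * max(1.0, np.linalg.norm(beta))
        beta = beta_new
        if converged:
            break
    return beta, gamma, lam
```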

4. As-LADMM Algorithm for $\ell_1$-Penalty Estimation in the High-Dimensional Partially Linear Model

In this section, we apply As-LADMM to solve the L1PLM problem and provide an algorithmic framework for it. He et al. [29] studied a symmetric version of ADMM with larger step sizes and provided an easily implementable strategy to accelerate ADMM numerically that can be applied immediately to a variety of applications. We use this symmetric version of ADMM to solve the L1PLM problem.

4.1. The Solution of $\ell_1$-Penalty Estimation for the High-Dimensional Partially Linear Model by As-LADMM

In order to solve the optimization problem L1PLM using As-LADMM, we construct the augmented Lagrangian function
$$ \phi_\rho(\beta, \gamma, \lambda) = \frac{1}{2}\|Y - X\beta - B\gamma\|^2 + \theta\|\beta\|_1 + \langle \lambda, R\beta - d \rangle + \frac{\rho}{2\eta}\|R\beta - d\|^2. $$
For a given ( β n , λ n ) , we obtain ( γ n + 1 , β n + 1 , λ n + 1 ) by the following iteration scheme
$$ \begin{aligned} \nu^n &= \beta^n + \frac{\eta_n(1-\eta_{n-1})}{\eta_{n-1}}(\beta^n - \beta^{n-1}), \\ \gamma^{n+1} &= \arg\min_\gamma \phi_\rho(\nu^n, \gamma, \lambda^n), \\ \lambda^{n+\frac{1}{2}} &= \lambda^n + \rho\tau(R\nu^n - d), \\ \beta^{n+1} &= \arg\min_\beta \phi_\rho(\beta, \gamma^{n+1}, \lambda^{n+\frac{1}{2}}), \\ \lambda^{n+1} &= \lambda^{n+\frac{1}{2}} + \rho\tau(R\beta^{n+1} - d). \end{aligned} $$
Now, let us solve these subproblems.
Firstly, for the solution of the γ -subproblem, the problem can be written as follows
$$ \gamma^{n+1} = \arg\min_\gamma \left\{ \frac{1}{2}\|Y - X\nu^n - B\gamma\|^2 \right\}. $$
Taking the partial derivative with respect to $\gamma$ and setting it to zero gives
$$ \frac{\partial \phi_\rho(\beta, \gamma, \lambda)}{\partial \gamma} = -B^T(Y - X\nu^n - B\gamma) = 0. $$
The analytical expression of γ is
$$ \gamma^{n+1} = (B^T B)^{-1} B^T (Y - X\nu^n). $$
Secondly, the $\beta$-subproblem can be written as follows
$$ \begin{aligned} \beta^{n+1} &= \arg\min_\beta \left\{ \frac{1}{2}\|Y - X\beta - B\gamma^{n+1}\|^2 + \theta\|\beta\|_1 + \langle \lambda^{n+\frac{1}{2}}, R\beta - d \rangle + \frac{\rho}{2\eta_n}\|R\beta - d\|^2 \right\} \\ &= \arg\min_\beta \left\{ \theta\|\beta\|_1 + \frac{1}{2}\|Y - X\beta - B\gamma^{n+1}\|^2 + \frac{\rho}{2\eta_n}\left\|R\beta - d + \frac{\eta_n\lambda^{n+\frac{1}{2}}}{\rho}\right\|^2 \right\} \\ &= \arg\min_\beta \left\{ \theta\|\beta\|_1 + \frac{1}{2}\|\tilde{X}\beta + \tilde{B}\gamma^{n+1} - \tilde{Y}\|^2 \right\}, \end{aligned} $$
where $\tilde{X} = (X^T, \sqrt{\rho/\eta_n}\,R^T)^T$, $\tilde{Y} = \Big(Y^T, \sqrt{\rho/\eta_n}\,\big(d - \frac{\eta_n\lambda^{n+\frac{1}{2}}}{\rho}\big)^T\Big)^T$ and $\tilde{B} = (B^T, 0)^T$.
As before, $\tilde{X}^T\tilde{X}$ may not be positive definite, so this subproblem has no closed-form solution. We therefore linearize the quadratic term $\frac{1}{2}\|\tilde{X}\beta + \tilde{B}\gamma^{n+1} - \tilde{Y}\|^2$ at $\beta^n$ as in LADMM. The subproblem is then equivalent to
$$ \begin{aligned} \beta^{n+1} &= \arg\min_\beta \left\{ \theta\|\beta\|_1 + \left(\tilde{X}^T(\tilde{X}\beta^n + \tilde{B}\gamma^{n+1} - \tilde{Y})\right)^T(\beta - \beta^n) + \frac{\nu}{2}\|\beta - \beta^n\|^2 \right\} \\ &= \arg\min_\beta \left\{ \theta\|\beta\|_1 + \frac{\nu}{2}\left\|\beta - \beta^n + \frac{\tilde{X}^T(\tilde{X}\beta^n + \tilde{B}\gamma^{n+1} - \tilde{Y})}{\nu}\right\|^2 \right\}. \end{aligned} $$
The closed-form solution of this subproblem can be obtained by soft thresholding:
$$ \beta^{n+1} = \mathrm{shrink}\left( \beta^n - \frac{\tilde{X}^T(\tilde{X}\beta^n + \tilde{B}\gamma^{n+1} - \tilde{Y})}{\nu}, \frac{\theta}{\nu} \right) = \tilde{u}^n - P_C(\tilde{u}^n), $$
where $\tilde{u}^n = \beta^n - \frac{\tilde{X}^T(\tilde{X}\beta^n + \tilde{B}\gamma^{n+1} - \tilde{Y})}{\nu}$ and $C = [-\frac{\theta}{\nu}, \frac{\theta}{\nu}]$.
Overall, by solving the $\gamma$-, $\beta$- and $\lambda$-subproblems separately, the solutions $\gamma^{n+1}$, $\beta^{n+1}$, $\lambda^{n+1}$ are obtained. The iteration framework of the algorithm is therefore
$$ \begin{aligned} \gamma^{n+1} &= (B^T B)^{-1} B^T (Y - X\nu^n), \\ \beta^{n+1} &= \tilde{u}^n - P_C(\tilde{u}^n), \\ \lambda^{n+1} &= \lambda^n + \rho\tau(R\nu^n - d) + \rho\tau(R\beta^{n+1} - d). \end{aligned} $$

4.2. As-LADMM Algorithm Design for $\ell_1$-Penalty Estimation in the High-Dimensional Partially Linear Model

Based on the idea of the As-LADMM solution for l 1 -penalty estimation in a high-dimensional partially linear model, we design an algorithm framework as follows (Algorithm 2).
Algorithm 2 Iterative Scheme of As-LADMM for L1PLM
Step 1: Input X, Y, B and tol. Given the initial variables $(\beta^0, \gamma^0, \lambda^0)$,
            choose $\rho > 0$, $\theta_0 = 1$, $\theta_1 = \frac{1}{\tau}$, $0.5 < \tau < 1$. Let n = 1;
Step 2: Update $\gamma^{n+1}$, $\beta^{n+1}$, $\lambda^{n+1}$ by Equation (17);
Step 3: If the termination criterion is met at the n-th iteration, go to the next step;
            otherwise, set n = n + 1 and go to Step 2;
Step 4: Output $(\beta^N, \gamma^N, \lambda^N)$ as the approximate solution of $(\hat{\beta}, \hat{\gamma}, \hat{\lambda})$, where N is the final iteration.
It can similarly be shown that this algorithm converges to the optimal solution under certain conditions; see references [28,29].
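Analogously, here is a minimal sketch of the As-LADMM iteration of Algorithm 2, reusing `update_gamma` and `soft_threshold` from Section 3. The fixed extrapolation weight (the paper's $\eta_n$ may vary across iterations), the $\sqrt{\rho/\eta}$ scaling, and the stopping rule are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def as_ladmm_l1plm(Y, X, B, R, d, theta, rho, nu, tau=0.95,
                   eta=0.8, max_iter=500, tol=1e-6):
    """As-LADMM for the L1PLM problem (a sketch of Algorithm 2).
    For simplicity eta plays the role of eta_n and is kept fixed, so the
    extrapolation weight is (1 - eta)."""
    n, p = X.shape
    k = R.shape[0]
    beta_prev = np.zeros(p)
    beta = np.zeros(p)
    lam = np.zeros(k)
    gamma = np.zeros(B.shape[1])
    scale = np.sqrt(rho / eta)
    X_t = np.vstack([X, scale * R])
    B_t = np.vstack([B, np.zeros((k, B.shape[1]))])
    for it in range(max_iter):
        v = beta + (1.0 - eta) * (beta - beta_prev)          # extrapolation step
        gamma = update_gamma(Y, X, B, v)                      # gamma-step at the extrapolated point
        lam_half = lam + rho * tau * (R @ v - d)              # first (half) dual update
        Y_t = np.concatenate([Y, scale * (d - eta * lam_half / rho)])
        grad = X_t.T @ (X_t @ beta + B_t @ gamma - Y_t)       # gradient of the linearized term
        beta_new = soft_threshold(beta - grad / nu, theta / nu)
        lam = lam_half + rho * tau * (R @ beta_new - d)       # second (half) dual update
        converged = np.linalg.norm(beta_new - beta) <= tol * max(1.0, np.linalg.norm(beta))
        beta_prev, beta = beta, beta_new
        if converged:
            break
    return beta, gamma, lam
```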

5. Numerical Simulation

5.1. Parameter Setting

We perform numerical simulations for the high-dimensional partially linear model with a sample of size $n$ generated from the model. Here $\varepsilon \sim N(0, \sigma^2)$; the covariates follow a $p$-dimensional multivariate normal distribution, $X \sim N(0, \Sigma)$ with $\Sigma_{jk} = 0.5^{|j-k|}$, where $j$ and $k$ index the components; $Z \sim U(0, 1)$; $g(z) = 3\cos(2\pi z)$; and $\beta = (1, 2, 0.5, 1, 0, \dots, 0)^T$ with $\beta_5 = \dots = \beta_p = 0$. The parameter $\tau = 0.95$, the smooth function is estimated by cubic splines, and cubic B-spline basis functions are used in the simulation. The results show that the smooth function is estimated well.
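A sketch of this data-generating process, assuming the AR(1)-type covariance $\Sigma_{jk} = 0.5^{|j-k|}$ described above and reusing `bspline_basis` and `clamped_knots` from the sketch in Section 1 (function names and the number of interior knots are illustrative):

```python
import numpy as np

def generate_data(n, p, sigma, seed=0):
    """Simulate (Y, X, Z, B) from the partially linear model of Section 5.1."""
    rng = np.random.default_rng(seed)
    # AR(1)-type covariance: Sigma_{jk} = 0.5^{|j-k|} (assumed reading of the text)
    idx = np.arange(p)
    Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    Z = rng.uniform(0.0, 1.0, size=n)
    g = 3.0 * np.cos(2.0 * np.pi * Z)
    beta_true = np.zeros(p)
    beta_true[:4] = [1.0, 2.0, 0.5, 1.0]   # beta_5 = ... = beta_p = 0
    eps = rng.normal(0.0, sigma, size=n)
    Y = X @ beta_true + g + eps
    # cubic B-spline basis matrix for the nonparametric part
    B = bspline_basis(Z, clamped_knots(n_interior=5, degree=3), degree=3)
    return Y, X, Z, B, beta_true
```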

5.2. Simulation Results

The simulation performance is measured by the mean squared error (mse), the objective value (obj), the number of iterations (Iter) and the running time (Time) of each algorithm, where the sample size is n = 100, 200 and the dimension is p = 109, 209, 409, 509, 1009:
$$ \mathrm{mse} = \|\hat{\beta} - \beta\|^2, \qquad \mathrm{obj} = \frac{1}{2}\|Y - X\hat{\beta} - B\hat{\gamma}\|^2 + \theta\|\hat{\beta}\|_1. $$
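Given an estimate $(\hat{\beta}, \hat{\gamma})$, these two criteria can be computed directly (a sketch; variable names are assumptions):

```python
import numpy as np

def evaluation_criteria(Y, X, B, beta_hat, gamma_hat, beta_true, theta):
    """Mean squared error of beta and the penalized objective value."""
    mse = np.sum((beta_hat - beta_true) ** 2)
    obj = (0.5 * np.sum((Y - X @ beta_hat - B @ gamma_hat) ** 2)
           + theta * np.sum(np.abs(beta_hat)))
    return mse, obj
```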
By taking the sample size and dimension from small to large, we study the effectiveness of the algorithms in the high-dimensional ($p \gg n$) setting.
Based on the above parameter settings, the results obtained with the LADMM algorithm are shown in Table 1, and the results obtained with the As-LADMM algorithm are shown in Table 2.
According to Table 1 and Table 2, in the high-dimensional case and for fixed σ, the mean squared error of the parameter estimates produced by both algorithms decreases as the dimension p increases, and the mean squared error of the As-LADMM algorithm is slightly lower than that of the LADMM algorithm. This indicates that the As-LADMM algorithm performs better than the LADMM algorithm.
Compared with the LADMM algorithm, the As-LADMM algorithm applies an accelerated symmetric update to improve performance. For a fixed value of p, the mean squared error, the objective value, the number of iterations and the running time all increase as σ increases; nevertheless, the As-LADMM algorithm still performs better than the LADMM algorithm.
We plot comparison lines of the mean squared error and the objective value under different variances for sample sizes 100 and 200, illustrating the effectiveness of the As-LADMM algorithm in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12.
These figures show that the mean squared errors of both the LADMM and As-LADMM algorithms are very small and close to zero, with the As-LADMM error being the smaller of the two. The objective values of both algorithms are also small, and those of the As-LADMM algorithm are smaller. This indicates that the As-LADMM algorithm performs better and is well suited to solving high-dimensional partially linear models.

6. Application: Boston Housing Price Data Analysis

In order to verify the applicability of As-LADMM to high-dimensional data, we take the Boston housing price data as an example. The Boston housing price data contain information about house prices in Boston, Massachusetts, collected by the U.S. Census Bureau in the 1970 U.S. Census; they were obtained from the StatLib archive (http://lib.stat.cmu.edu/datasets/boston, accessed on 6 March 2023) and are fairly representative of the actual situation [31,32]. The data set consists of 13 input variables and one output variable, the median value of owner-occupied homes in $1000s (MEDV). The input variables include the per capita crime rate by town (CRIM), the proportion of residential land zoned for lots over 25,000 sq. ft. (ZN), the proportion of non-retail business acres per town (INDUS), the nitric oxides concentration in parts per 10 million (NOX), the average number of rooms per dwelling (RM), the proportion of owner-occupied units built prior to 1940 (AGE), the weighted distances to five Boston employment centers (DIS), the index of accessibility to radial highways (RAD), and the full-value property-tax rate per $10,000 (TAX). In addition, PTRATIO is the pupil–teacher ratio by town and LSTAT is the lower status of the population. Among the thirteen independent variables, PTRATIO is not necessarily linearly related to MEDV, so we mainly consider the importance of the other independent variables for MEDV in the linear part. The model is as follows
$$ Y_i = \sum_{j=1}^{12} X_{ij}\beta_j + m(U_i) + \varepsilon_i, $$
where $Y_i$ is the MEDV of the $i$-th sample, $U_i$ is the PTRATIO of the $i$-th sample, $X_{ij}$ is the $j$-th covariate of the $i$-th sample, and $\varepsilon_i \sim N(0, \sigma^2)$.
In the experiment, we use 75% of the samples for training and 25% for testing. The predicted value of MEDV in the test sample is denoted $\hat{y}$, and the median absolute error (MAE) and standard error (SE) are used to evaluate the predictive ability of the model. The MAE reduces the impact of outliers; the SE reflects the degree to which the predictions deviate from the observed values, and the smaller its value, the more reliable the method. They are computed as
$$ \mathrm{MAE} = \mathrm{median}\{ |y_1 - \hat{y}_1|, |y_2 - \hat{y}_2|, \dots, |y_n - \hat{y}_n| \}, \qquad \mathrm{SE} = \sqrt{\frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n}}. $$
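The two prediction-accuracy measures and the 75%/25% split can be computed as follows (a sketch; the placement of the square root in SE and the variable names are assumptions):

```python
import numpy as np

def mae(y_true, y_pred):
    """Median absolute error of the predictions."""
    return np.median(np.abs(y_true - y_pred))

def se(y_true, y_pred):
    """Root of the averaged squared prediction errors (the SE used above,
    assuming the square root covers the whole average)."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def train_test_split_indices(n, train_frac=0.75, seed=0):
    """Random 75%/25% split of n samples into training and test indices."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    n_train = int(train_frac * n)
    return perm[:n_train], perm[n_train:]
```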
The prediction ability of the As-LADMM algorithm was compared with LADMM, as shown in Table 3.
From the results in Table 3, the MAE values obtained with the As-LADMM algorithm are consistently lower than those obtained with the LADMM algorithm, which indicates that the loss between the predicted and actual MEDV values is relatively small and that the As-LADMM algorithm has a good prediction effect. The SE values are all below 0.09, and those of the As-LADMM algorithm are the smaller ones, indicating that the As-LADMM algorithm yields more reliable predictions than the LADMM algorithm. The overall performance is very good, so the parameter estimation method in this article is effective.

7. Conclusions

In this paper, we studied the application of LADMM and As-LADMM to high-dimensional partially linear models. Owing to the sparsity of the covariates in the parametric part, we added an $\ell_1$-norm regularization term to estimate the parameters and increase the generalization ability of the model. We constructed an augmented Lagrangian function to transform the constrained optimization problem into an unconstrained one and solved the model using LADMM and As-LADMM. Through numerical simulation, we compared and analyzed the performance of the designed algorithms: in terms of the mean squared error, the number of iterations and the running time, the As-LADMM algorithm is better than the LADMM algorithm. Finally, the two algorithms were applied to the Boston housing price data, and the comparison confirmed the effectiveness of As-LADMM as well.

Author Contributions

Conceptualization, A.F. and X.C.; methodology, X.C. and A.F.; validation, X.C.; formal analysis, A.F. and Z.J.; data curation, X.C. and J.F.; writing—original draft preparation, X.C.; writing—review and editing, A.F., X.C. and J.F.; supervision, A.F. and Z.J. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded in part by the National Natural Science Foundation of China (Grant No. 12101195, 12071112).

Data Availability Statement

Not applicable.

Acknowledgments

The research was funded in part by the Provincial first-class undergraduate curriculum project of mathematical models. We sincerely thank the anonymous reviewers for their insightful comments, which greatly improved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, Y.; Zhang, S.; Ma, S.; Zhang, Q. Tests for regression coefficients in high dimensional partially linear models. Stat. Probab. Lett. 2020, 163, 108772.
  2. Zhao, F.; Lin, N.; Zhang, B. A new test for high-dimensional regression coefficients in partially linear models. Can. J. Stat. 2023, 51, 5–18.
  3. Engle, R.F.; Granger, C.W.; Rice, J.; Weiss, A. Semiparametric estimates of the relation between weather and electricity sales. J. Am. Stat. Assoc. 1986, 81, 310–320.
  4. Heckman, N.E. Spline smoothing in a partly linear model. J. R. Stat. Soc. Ser. B Stat. Methodol. 1986, 48, 244–248.
  5. Härdle, W.; Liang, H.; Gao, J. Partially Linear Models; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000.
  6. Xie, H.; Huang, J. SCAD-penalized regression in high-dimensional partially linear models. Ann. Stat. 2009, 37, 673–696.
  7. Tibshirani, R. Regression shrinkage and selection via the lasso: A retrospective. J. R. Stat. Soc. Ser. B Stat. Methodol. 2011, 73, 268–288.
  8. He, K.; Wang, Y.; Zhou, X.; Xu, H.; Huang, C. An improved variable selection procedure for adaptive Lasso in high-dimensional survival analysis. Lifetime Data Anal. 2019, 25, 569–585.
  9. Ma, C.; Huang, J. Asymptotic properties of lasso in high-dimensional partially linear models. Sci. China Math. 2016, 59, 769–788.
  10. Chuanhua, W.; Xizhi, W. Profile Lagrange multiplier test for partially linear varying-coefficient regression models. J. Syst. Sci. Math. Sci. 2008, 28, 416.
  11. Wang, X.; Zhao, S.; Wang, M. Restricted profile estimation for partially linear models with large-dimensional covariates. Stat. Probab. Lett. 2017, 128, 71–76.
  12. Feng, A.; Chang, X.; Shang, Y.; Fan, J. Application of the ADMM Algorithm for a High-Dimensional Partially Linear Model. Mathematics 2022, 10, 4767.
  13. Lian, H. Variable selection in high-dimensional partly linear additive models. J. Nonparametr. Stat. 2012, 24, 825–839.
  14. Lian, H.; Liang, H.; Ruppert, D. Separation of covariates into nonparametric and parametric parts in high-dimensional partially linear additive models. Stat. Sin. 2015, 25, 591–607.
  15. Guo, J.; Tang, M.; Tian, M.; Zhu, K. Variable selection in high-dimensional partially linear additive models for composite quantile regression. Comput. Stat. Data Anal. 2013, 65, 56–67.
  16. Wu, Q.; Zhao, H.; Zhu, L.; Sun, J. Variable selection for high-dimensional partly linear additive Cox model with application to Alzheimer’s disease. Stat. Med. 2020, 39, 3120–3134.
  17. Xu, H.X.; Chen, Z.L.; Wang, J.F.; Fan, G.L. Quantile regression and variable selection for partially linear model with randomly truncated data. Stat. Pap. 2019, 60, 1137–1160.
  18. Chen, W. Polynomial-based smoothing estimation for a semiparametric accelerated failure time partial linear model. Open Access Libr. J. 2020, 7, 1.
  19. Auerbach, E. Identification and estimation of a partially linear regression model using network data. Econometrica 2022, 90, 347–365.
  20. Liu, Y.; Yin, J. Spline estimation of partially linear regression models for time series with correlated errors. Commun. Stat.-Simul. Comput. 2021, 1–15.
  21. Powell, M.J. A method for nonlinear constraints in minimization problems. In Optimization; Academic Press: New York, NY, USA, 1969; pp. 283–298.
  22. Wahlberg, B.; Boyd, S.; Annergren, M.; Wang, Y. An ADMM algorithm for a class of total variation regularized estimation problems. IFAC Proc. Vol. 2012, 45, 83–88.
  23. Glowinski, R.; Song, Y.; Yuan, X.; Yue, H. Application of the alternating direction method of multipliers to control constrained parabolic optimal control problems and beyond. Ann. Appl. Math. 2022, 38, 115–158.
  24. Li, X.; Mo, L.; Yuan, X.; Zhang, J. Linearized alternating direction method of multipliers for sparse group and fused LASSO models. Comput. Stat. Data Anal. 2014, 79, 203–221.
  25. Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003; Volume 87.
  26. Nesterov, Y. Smooth minimization of non-smooth functions. Math. Program. 2005, 103, 127–152.
  27. Lin, Z.; Li, H.; Fang, C. Accelerated Optimization for Machine Learning; Springer: Singapore, 2020.
  28. Ouyang, Y.; Chen, Y.; Lan, G.; Pasiliao, E., Jr. An accelerated linearized alternating direction method of multipliers. SIAM J. Imaging Sci. 2015, 8, 644–681.
  29. He, B.; Ma, F.; Yuan, X. Convergence study on the symmetric version of ADMM with larger step sizes. SIAM J. Imaging Sci. 2016, 9, 1467–1501.
  30. Bai, J.; Hager, W.W.; Zhang, H. An inexact accelerated stochastic ADMM for separable convex optimization. Comput. Optim. Appl. 2022, 81, 479–518.
  31. Berndt, E.R. The Practice of Econometrics: Classic and Contemporary; Addison-Wesley Pub. Co.: San Francisco, CA, USA, 1991.
  32. Harrison, D., Jr.; Rubinfeld, D.L. Hedonic housing prices and the demand for clean air. J. Environ. Econ. Manag. 1978, 5, 81–102.
Figure 1. Comparison of mean squared error line of LADMM algorithm and As-LADMM algorithm under n = 100, σ = 0.5.
Figure 2. Comparison of mean squared error line of LADMM algorithm and As-LADMM algorithm under n = 100, σ = 1.
Figure 3. Comparison of mean squared error line of LADMM algorithm and As-LADMM algorithm under n = 100, σ = 2.
Figure 4. Comparison of objective value line of LADMM algorithm and As-LADMM algorithm under n = 100, σ = 0.5.
Figure 5. Comparison of objective value line of LADMM algorithm and As-LADMM algorithm under n = 100, σ = 1.
Figure 6. Comparison of objective value line of LADMM algorithm and As-LADMM algorithm under n = 100, σ = 2.
Figure 7. Comparison of mean squared error line of LADMM algorithm and As-LADMM algorithm under n = 200, σ = 0.5.
Figure 8. Comparison of mean squared error line of LADMM algorithm and As-LADMM algorithm under n = 200, σ = 1.
Figure 9. Comparison of mean squared error line of LADMM algorithm and As-LADMM algorithm under n = 200, σ = 2.
Figure 10. Comparison of objective value line of LADMM algorithm and As-LADMM algorithm under n = 200, σ = 0.5.
Figure 11. Comparison of objective value line of LADMM algorithm and As-LADMM algorithm under n = 200, σ = 1.
Figure 12. Comparison of objective value line of LADMM algorithm and As-LADMM algorithm under n = 200, σ = 2.
Table 1. Simulation results of LADMM under different parameter settings.

n     p      σ     mse      obj      Iter   Time
100   109    0.5   0.0028   0.9912   189    0.023358
100   209    0.5   0.0026   0.4000   71     0.017601
100   509    0.5   0.0022   0.1675   24     0.010753
100   1009   0.5   0.0021   0.1444   18     0.011368
200   209    0.5   0.0031   1.2943   262    0.032774
200   409    0.5   0.0026   0.3700   46     0.014811
200   509    0.5   0.0026   0.3205   34     0.011893
200   1009   0.5   0.0024   0.2101   22     0.011691
100   109    1     0.0030   1.1306   230    0.026888
100   209    1     0.0028   0.4549   76     0.015361
100   509    1     0.0020   0.2015   25     0.010161
100   1009   1     0.0020   0.1735   18     0.010081
200   209    1     0.0039   1.6863   329    0.041064
200   409    1     0.0027   0.4233   50     0.014497
200   509    1     0.0026   0.4001   40     0.012453
200   1009   1     0.0025   0.2704   23     0.012234
100   109    2     0.0040   1.6747   196    0.028308
100   209    2     0.0033   0.5872   86     0.014744
100   509    2     0.0023   0.3052   26     0.013412
100   1009   2     0.0024   0.2498   20     0.010456
200   209    2     0.0062   2.8058   372    0.044700
200   409    2     0.0031   0.5956   56     0.015718
200   509    2     0.0030   0.5985   46     0.012268
200   1009   2     0.0027   0.4011   26     0.012453
Table 2. Simulation results of As-LADMM under different parameter settings.

n     p      σ     mse      obj      Iter   Time
100   109    0.5   0.0028   0.9879   189    0.020590
100   209    0.5   0.0026   0.3958   72     0.014416
100   509    0.5   0.0019   0.1626   26     0.014585
100   1009   0.5   0.0019   0.1435   17     0.011211
200   209    0.5   0.0031   1.2944   262    0.035734
200   409    0.5   0.0026   0.3708   46     0.018151
200   509    0.5   0.0026   0.3216   34     0.015442
200   1009   0.5   0.0023   0.2097   19     0.012723
100   109    1     0.0030   1.1288   230    0.023713
100   209    1     0.0028   0.4502   77     0.014860
100   509    1     0.0020   0.2013   24     0.013338
100   1009   1     0.0017   0.1718   16     0.010901
200   209    1     0.0039   1.6853   329    0.037359
200   409    1     0.0027   0.4133   50     0.015312
200   509    1     0.0027   0.3881   40     0.015518
200   1009   1     0.0024   0.2635   25     0.012891
100   109    2     0.0040   1.6727   296    0.026368
100   209    2     0.0032   0.5740   86     0.015204
100   509    2     0.0023   0.2957   26     0.012608
100   1009   2     0.0021   0.2470   19     0.011138
200   209    2     0.0062   2.8041   372    0.042947
200   409    2     0.0032   0.5755   56     0.016475
200   509    2     0.0032   0.5474   53     0.015811
200   1009   2     0.0022   0.3819   30     0.013330
Table 3. MAE and SE of housing price forecasts for owner-occupied housing.

Variable   MAE (LADMM)   SE (LADMM)   MAE (As-LADMM)   SE (As-LADMM)
CRIM       1.4785        0.0748       0.972            0.0504
ZN         1.4809        0.0752       0.9717           0.0549
INDUS      1.48          0.0782       0.9712           0.0544
CHAS       1.4474        0.0748       0.9726           0.0563
NOX        1.5046        0.0787       0.9612           0.0535
RM         1.5063        0.0776       0.977            0.0549
AGE        1.4715        0.0775       0.9649           0.0466
DIS        1.45596       0.0748       0.97             0.0523
RAD        1.4776        0.0746       0.9708           0.0538
TAX        1.4552        0.0747       0.9712           0.0524
B          1.4848        0.0752       0.9727           0.0513
LSTAT      1.2362        0.0598       0.8972           0.0323
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
