Article

Robust Liu Estimator Used to Combat Some Challenges in Partially Linear Regression Model by Improving LTS Algorithm Using Semidefinite Programming

by Waleed B. Altukhaes 1,2, Mahdi Roozbeh 3,* and Nur A. Mohamed 1,*
1 Institute of Mathematical Sciences, Faculty of Science, Universiti Malaya, Kuala Lumpur 50603, Malaysia
2 Department of Mathematics, College of Science and Humanities, Shaqra University, Shaqra 11961, Saudi Arabia
3 Department of Statistics, Faculty of Mathematics, Statistics and Computer Sciences, Semnan University, Semnan 3513119111, Iran
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(17), 2787; https://doi.org/10.3390/math12172787
Submission received: 6 July 2024 / Revised: 30 August 2024 / Accepted: 2 September 2024 / Published: 9 September 2024
(This article belongs to the Special Issue Nonparametric Regression Models: Theory and Applications)

Abstract:
Outliers are a common problem in applied statistics, as is multicollinearity. In this paper, robust Liu estimators are introduced into the partially linear model to combat multicollinearity and outliers simultaneously when the error terms are dependent and some linear constraints are assumed to hold in the parameter space. The Liu estimator addresses the multicollinearity, while robust methods handle the outliers. In the literature on the Liu methodology, obtaining the best value of the biasing parameter plays an important role in model prediction and is still an open problem. In this regard, some robust estimators of the biasing parameter are proposed based on the least trimmed squares (LTS) technique and its extensions, using a semidefinite programming approach. Based on a set of observations of size n and an integer trimming parameter h ≤ n, the LTS estimator computes the hyperplane that minimizes the sum of the h smallest squared residuals. The LTS estimator is statistically more efficient than the widely used least median of squares (LMS) estimator, and it is also computationally less demanding. It is shown that the proposed robust extended Liu estimators perform better than the classical estimators. Using Monte Carlo simulation schemes and a real data example, the performance of the robust Liu estimators is compared with that of the classical ones in restricted partially linear models.

1. Introduction

When the nature of the relationship between the response variable and some of the explanatory variables is unclear, but the mean of the dependent variable is expected to have a linear parametric relationship to certain other explanatory variables, partially linear models (PLMs) are suitable for predicting or modeling the data set. Consider the observations $(y_1, \mathbf{x}_1, t_1), \ldots, (y_n, \mathbf{x}_n, t_n)$, which conform to the partially linear model
$$y_i = \mathbf{x}_i^\top \boldsymbol{\beta} + f(t_i) + \varepsilon_i, \qquad i = 1, 2, \ldots, n,$$
where $y_i$ is the value of the response variable for the $i$th observation, $\mathbf{x}_i = (x_{i1}, \ldots, x_{ip})^\top$ is a vector of explanatory variables, $\boldsymbol{\beta} = (\beta_1, \ldots, \beta_p)^\top$ is a vector of unknown parameters, and the $t_i$ are design points lying in a bounded domain $D \subseteq \mathbb{R}$. It is generally assumed that the unknown function $f(\cdot)$ is smooth, while the $\varepsilon_i$ are random errors assumed to be independent of $(\mathbf{x}_i, t_i)$.
PLMs that combine both parametric and nonparametric components are more flexible than a conventional multiple regression model in cases where the assumption of a linear relationship between the dependent variable and some of its predictors (x) is made, while its relationship to the other explanatory variables (t) has an unknown non-linear shape [1,2]. There have been several approaches to estimating the parameters in a PLM. Among the most important approaches are those given by several researchers [1,2,3,4,5,6,7,8,9,10,11].
The presence of nearly linear dependency among the columns of the design matrix $X = (\mathbf{x}_1, \ldots, \mathbf{x}_n)^\top$ is known as multicollinearity, a common issue in regression analysis. In this case, the matrix $S = X^\top X$ has one or more small eigenvalues, causing the estimated regression coefficients to be large in absolute value. The condition number is an effective measure for recognizing multicollinearity: under multicollinearity, $S$ is ill-conditioned and its condition number becomes extremely large. Multicollinearity makes the ordinary least-squares estimator (OLSE) perform badly. It may also inflate the confidence intervals for individual parameters or their linear combinations, which can lead to inaccurate predictions. Applying shrinkage estimators is a widely used and effective remedy for the issues arising from multicollinearity. In this study, the shrinkage estimator suggested by Liu [12] is applied to solve the multicollinearity problem. Liu [12] combined the Stein estimator with the conventional ordinary ridge regression estimator to derive the Liu estimator, as described in [13,14,15]. Other approaches to addressing multicollinearity can be found in [16,17,18,19].
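As a minimal illustration of this diagnostic (our own sketch, not from the paper; the design and the near-dependency below are fabricated for the example), the condition number of $S = X^\top X$ can be computed in R as follows:

```r
## Detecting multicollinearity via the condition number of S = X'X.
set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)     # nearly linearly dependent column
X  <- cbind(x1, x2, rnorm(n))
S  <- crossprod(X)                 # S = X'X
ev <- eigen(S, symmetric = TRUE, only.values = TRUE)$values
max(ev) / min(ev)                  # very large value => S is ill-conditioned
```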
Besides the multicollinearity problem, another typical issue that arises in regression analyses is the existence of outliers, observations that do not follow the pattern of the main bulk of the data. Outliers can cause problems such as inflated sums of squares, biased estimates, distorted p-values, and more. To combat these problems, robust regression methods are used. The ordinary least-squares estimator is known to be extremely sensitive to outliers, so the least trimmed squares approach is used to estimate both components of the PLM in this research.
The breakdown point of an estimator is the fundamental measure used to evaluate its robustness: it is the largest percentage of outlying observations (up to 50 percent) that the sample can contain without carrying the estimate arbitrarily far from the true value. In computational geometry, the investigation of efficient algorithms for robust estimation methods has been an important field of study. Several researchers have examined the robust least median of squares (LMS) method, which fits the hyperplane that minimizes the median of the squared residuals [20]. Although the LMS estimator has been the subject of most publications on robust estimation in the field of linear models, Rousseeuw and Leroy [21] noted that the LMS is not the optimal option because of its statistical features. They asserted that the least trimmed squares is the better choice: both the LTS and the LMS attain the same breakdown point of approximately 50%, but the LTS offers several advantages over the LMS. The objective function of the LTS is smoother, and since the LTS converges more quickly and is asymptotically normally distributed [20], it has superior statistical efficiency. For these reasons, the LTS is a better starting point for two-step robust estimators such as the MM estimator [22].
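As a hedged illustration of the LTS idea (the data, contamination pattern, and trimming proportion below are ours, not the authors'), an LTS fit is available in R through the robustbase package:

```r
## OLS versus LTS on data with 20% one-sided outliers (illustrative only).
library(robustbase)
set.seed(2)
n <- 100
x <- rnorm(n)
y <- 2 + 3 * x + rnorm(n)
y[1:20] <- y[1:20] + 15                    # contaminate 20% of the responses
coef(lm(y ~ x))                            # OLS: pulled toward the outliers
ltsReg(y ~ x, alpha = 0.75)$coefficients   # LTS: fits the clean majority (h ~ 0.75 n)
```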
The main innovation of this paper is that it proposes novel robust Liu estimators for the parameters of a restricted PLM, together with robust estimators of the biasing parameter of the Liu estimators, based on the LTS approach and its extensions, to solve the multicollinearity and outlier problems simultaneously. These proposed estimators are based on an improved LTS produced using a semidefinite programming approach. The organization of this article is as follows: Section 2 contains the classical estimator of a restricted partially linear model based on the kernel method. After reviewing the concepts of the Liu and least trimmed squares approaches to a restricted partially linear model in Section 3 and Section 4, respectively, our new robust Liu estimators for a restricted partially linear model are proposed based on semidefinite programming in Section 5, and their asymptotic biases and covariance matrices are then derived. In Section 6, the efficiencies of the proposed estimators are assessed through extensive Monte Carlo simulation experiments and a real-world data example. Lastly, some important findings are summarized in Section 7.

2. The Classical Estimators under Restriction

In this paper, the (non-robust) estimators that conform to the assumed restrictions are called classical estimators. Let us examine the following partially linear model in matrix form:
$$\mathbf{y} = X\boldsymbol{\beta} + \mathbf{f}(\mathbf{t}) + \boldsymbol{\varepsilon},$$
where $\mathbf{y} = (y_1, \ldots, y_n)^\top$, $X = (\mathbf{x}_1, \ldots, \mathbf{x}_n)^\top$, $\mathbf{f}(\mathbf{t}) = (f(t_1), \ldots, f(t_n))^\top$, and $\boldsymbol{\varepsilon} = (\varepsilon_1, \ldots, \varepsilon_n)^\top$.
Generally, we assume that $\boldsymbol{\varepsilon}$ is a vector of disturbances with $E(\boldsymbol{\varepsilon}) = \mathbf{0}$ and $E(\boldsymbol{\varepsilon}\boldsymbol{\varepsilon}^\top) = \sigma^2 V$, where $\sigma^2$ is an unknown parameter and $V$ is a known symmetric, positive definite matrix.
To estimate the linear part of model (2), we first remove the nonparametric effect by detrending. Given the assumption that $\boldsymbol{\beta}$ is known, a natural nonparametric estimator of $f(\cdot)$ is
$$\hat f(t) = \mathbf{k}(t)^\top (\mathbf{y} - X\boldsymbol{\beta}),$$
where $\mathbf{k}(\cdot)$ is a vector of kernel weights. Following [4], by substituting $\hat f(t)$ for $f(t)$ in Equation (2), the model may be reduced to
$$\tilde{\mathbf{y}} = \tilde{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon},$$
where $\tilde{\mathbf{y}} = (I_n - K)\mathbf{y}$, $\tilde{X} = (I_n - K)X$, and $K$ is the smoother matrix with $(i,j)$-th component $K_\omega(t_i, t_j)$, in which $K_\omega$ is a kernel function of order $m$ with bandwidth parameter $\omega$. Now, the estimation of $\boldsymbol{\beta}$ is performed using the generalized least-squares estimator (GLSE), which is known to be the best linear unbiased estimator,
$$\hat{\boldsymbol{\beta}}_{GLS} = \arg\min_{\boldsymbol{\beta}}\,(\tilde{\mathbf{y}} - \tilde{X}\boldsymbol{\beta})^\top V^{-1}(\tilde{\mathbf{y}} - \tilde{X}\boldsymbol{\beta}) = C^{-1}\tilde{X}^\top V^{-1}\tilde{\mathbf{y}},$$
where $C = \tilde{X}^\top V^{-1}\tilde{X}$.
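The detrending-plus-GLS pipeline above can be sketched in R as follows. This is our illustration rather than the authors' code, and it uses a row-normalized Gaussian smoother in place of an order-m kernel:

```r
## Sketch of Speckman-type detrending and the GLSE (illustrative names).
smoother_matrix <- function(t, bw) {
  K <- outer(t, t, function(a, b) dnorm((a - b) / bw))
  K / rowSums(K)                        # normalized kernel weights
}
gls_estimate <- function(X, y, t, V, bw = 0.1) {
  n  <- length(y)
  K  <- smoother_matrix(t, bw)
  Xt <- (diag(n) - K) %*% X             # X-tilde = (I_n - K) X
  yt <- (diag(n) - K) %*% y             # y-tilde = (I_n - K) y
  Vi <- solve(V)
  C  <- crossprod(Xt, Vi %*% Xt)        # C = X-tilde' V^{-1} X-tilde
  beta <- solve(C, crossprod(Xt, Vi %*% yt))
  list(beta = beta, C = C, Xt = Xt, yt = yt, Vi = Vi)
}
```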
Interestingly, another suitable method for handling strong and extremely strong multicollinearity is to obtain the estimators under particular restrictions on the unknown parameters, which may be exact or stochastic (see [23,24,25,26,27,28,29] for more details). Assume that we have prior knowledge about $\boldsymbol{\beta}$ in the form of non-stochastic exact constraints:
$$R\boldsymbol{\beta} = \mathbf{r},$$
where $R$ is a known $q \times p$ matrix of prior information with rank $q < p$ and $\mathbf{r}$ is a known $q \times 1$ vector. This restriction should come from outside the sample (for example, from an external source of information or an expert). Thus, when the regression parameters are restricted by a group of non-stochastic linear constraints representing independent prior information, we provide the instruments necessary to compute the risk of the estimators, and the performances of the new constrained estimators and the classical estimators may be compared under certain conditions. We show that our constrained estimators outperform the classical ones in terms of risk, assuming the linear restrictions hold. In such circumstances, certain non-sample information (a prior constraint on the parameters) may exist and is often introduced into the model as constraints. Compared to the usual estimators, the restricted estimator performs better, and so, in this research work, the restricted partially linear model (RPLM) is fitted to the data set. The full row rank assumption is made for convenience and is backed by the fact that every consistent linear system can be transformed into an equivalent system whose coefficient matrix has full row rank. The generalized least-squares restricted estimator (GLSRE) is derived by imposing the linear restriction, as follows:
$$\hat{\boldsymbol{\beta}}_{GLSR} = \arg\min_{\boldsymbol{\beta}\,:\,R\boldsymbol{\beta}=\mathbf{r}}\,(\tilde{\mathbf{y}} - \tilde{X}\boldsymbol{\beta})^\top V^{-1}(\tilde{\mathbf{y}} - \tilde{X}\boldsymbol{\beta}) = \hat{\boldsymbol{\beta}}_{GLS} - C^{-1}R^\top (RC^{-1}R^\top)^{-1}(R\hat{\boldsymbol{\beta}}_{GLS} - \mathbf{r}).$$
It is known that the covariance matrix of $\hat{\boldsymbol{\beta}}_{GLS}$ equals $\sigma^2 C^{-1}$; thus, the GLSE and its covariance matrix are strongly influenced by the properties of the matrix $C$. When $C$ is ill-conditioned, the GLS estimators become susceptible to various errors: some estimated regression coefficients may have incorrect signs or be statistically insignificant, and the resulting estimators can be unstable, with large confidence intervals for specific parameters. Making valid statistical inferences becomes challenging in the presence of these errors, and so a biased estimation technique is introduced and utilized for an RPLM with a multicollinearity problem.
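Continuing the sketch above, the GLSRE can be obtained from the unrestricted GLSE by the correction term in the last display (again our illustration; R_mat and r_vec encode the constraints R beta = r):

```r
## Sketch of the restricted GLSE, reusing gls_estimate() from above.
restricted_gls <- function(fit, R_mat, r_vec) {
  Ci <- solve(fit$C)
  fit$beta - Ci %*% t(R_mat) %*%
    solve(R_mat %*% Ci %*% t(R_mat), R_mat %*% fit$beta - r_vec)
}
```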

3. Restricted Liu Estimator in a Partially Linear Model

As was mentioned, multicollinearity makes $X^\top X$ ill-conditioned, with a large condition number, and the larger the condition number of $X^\top X$, the more severely the least-squares estimator is affected. In this case, the noise in the data is magnified by $(X^\top X)^{-1}$, making the least-squares estimator highly unreliable. To combat this drawback, Hoerl and Kennard [30] proposed the ridge estimator $\hat{\boldsymbol{\beta}}_k = (X^\top X + kI)^{-1}X^\top\mathbf{y}$ for the standard linear regression model $\mathbf{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon}$, with $E(\boldsymbol{\varepsilon}) = \mathbf{0}$ and $E(\boldsymbol{\varepsilon}\boldsymbol{\varepsilon}^\top) = \sigma^2 I$; it has become the most frequently used method for treating the multicollinearity that causes the least-squares estimator to fail. Indeed, the ridge method alleviates multicollinearity by adding a small constant $k$ to the diagonal of $X^\top X$ to decrease its condition number. In practical use, the shrinkage parameter $k$ is often rather small. The condition number of $X^\top X + kI$ is a decreasing function of $k$, so large values of $k$ are needed to exert meaningful control over it. Because of this, the small $k$ selected in practice may not be big enough to cure severe ill-conditioning of $X^\top X$, and the resulting ridge estimator may remain unstable since $X^\top X + kI$ is still ill-conditioned. Furthermore, despite its practical effectiveness, the ridge estimator is a complicated (nonlinear) function of $k$. Although the Stein-type estimator $\hat{\boldsymbol{\beta}}_c = c(X^\top X)^{-1}X^\top\mathbf{y}$ is a linear function of $c$, it shrinks every element of $\hat{\boldsymbol{\beta}}_c$ by the same factor. To address these issues, Liu [12] proposed a new biased estimator, $\hat{\boldsymbol{\beta}}_d = (X^\top X + I)^{-1}(X^\top\mathbf{y} + d\hat{\boldsymbol{\beta}})$, combining the advantages of the ridge and Stein-type estimators, which effectively solves the ill-conditioning problem in the standard regression model; here $0 < d < 1$ is the biasing parameter and $\hat{\boldsymbol{\beta}} = (X^\top X)^{-1}X^\top\mathbf{y}$. Obviously, when $d = 1$, $\hat{\boldsymbol{\beta}}_d = \hat{\boldsymbol{\beta}}$.
According to [12], the mean squared error (MSE) of the Liu estimator is obtained by
$$\mathrm{MSE}(\hat{\boldsymbol{\beta}}_d) = \sigma^2 \sum_{j=1}^{p} \frac{(\lambda_j + d)^2}{\lambda_j(\lambda_j + 1)^2} + (d-1)^2 \sum_{j=1}^{p} \frac{\alpha_j^2}{(\lambda_j + 1)^2},$$
where $\alpha_j$ is the $j$th element of $\boldsymbol{\alpha} = \Gamma^\top\boldsymbol{\beta}$ and $\Gamma$ is an orthogonal matrix such that $C = \Gamma\Lambda\Gamma^\top$, in which $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_p)$ contains the eigenvalues of the matrix $C$. Consequently, the biasing parameter $d$ is chosen by minimizing $\mathrm{MSE}(\hat{\boldsymbol{\beta}}_d)$, which yields
$$\hat d = 1 - \hat\sigma^2_{GLS}\,\frac{\sum_{j=1}^{p} \dfrac{1}{\lambda_j(\lambda_j + 1)}}{\sum_{j=1}^{p} \dfrac{\hat\alpha_{jGLS}^2}{(\lambda_j + 1)^2}},$$
where $\hat\sigma^2_{GLS}$ and $\hat{\boldsymbol{\alpha}}_{GLS}$ are the estimators of $\sigma^2$ and $\boldsymbol{\alpha}$ based on the GLSE, respectively; that is, $\hat\sigma^2_{GLS} = \frac{1}{n-p}(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GLS})^\top V^{-1}(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GLS})$ and $\hat{\boldsymbol{\alpha}}_{GLS} = \Gamma^\top\hat{\boldsymbol{\beta}}_{GLS}$.
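A direct transcription of this estimator of d into R, continuing the earlier sketches (illustrative, with V assumed known), could look as follows:

```r
## Sketch of the Liu biasing parameter d-hat from the spectrum of C.
liu_d_hat <- function(fit, n, p) {
  eg     <- eigen(fit$C, symmetric = TRUE)
  lambda <- eg$values
  alpha  <- drop(crossprod(eg$vectors, fit$beta))  # alpha-hat = Gamma' beta-hat
  resid  <- fit$yt - fit$Xt %*% fit$beta
  sigma2 <- drop(crossprod(resid, fit$Vi %*% resid)) / (n - p)
  1 - sigma2 * sum(1 / (lambda * (lambda + 1))) /
        sum(alpha^2 / (lambda + 1)^2)
}
```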
The generalized least-squares Liu estimator (GLSLE) investigated by [31] is defined as follows:
$$\hat{\boldsymbol{\beta}}_{GLSL}(d) = (\tilde{X}^\top V^{-1}\tilde{X} + I)^{-1}(\tilde{X}^\top V^{-1}\tilde{X} + dI)\hat{\boldsymbol{\beta}}_{GLS} = (C+I)^{-1}(C+dI)\hat{\boldsymbol{\beta}}_{GLS} = F_d\,\hat{\boldsymbol{\beta}}_{GLS}, \qquad 0 \le d \le 1,$$
where $F_d = (C+I)^{-1}(C+dI)$.
Based on the fact that $F_d$ and $C^{-1}$ commute, the generalized least-squares restricted Liu estimator (GLSRLE) for an RPLM can be defined as follows [32,33,34]:
$$\hat{\boldsymbol{\beta}}_{GLSRL}(d) = F_d\,\hat{\boldsymbol{\beta}}_{GLS} - F_d C^{-1}R^\top (RC^{-1}R^\top)^{-1}(R\hat{\boldsymbol{\beta}}_{GLS} - \mathbf{r}).$$
Lemma 1.
If $\boldsymbol{\beta}$ satisfies the linear restriction $R\boldsymbol{\beta} = \mathbf{r}$, then the bias vector, covariance matrix, and mean squared error of the proposed estimator can be evaluated by direct calculation, as follows:
$$\mathrm{Bias}\big(\hat{\boldsymbol{\beta}}_{GLSRL}(d)\big) = (I - F_d)\boldsymbol{\beta},$$
$$\mathrm{Cov}\big(\hat{\boldsymbol{\beta}}_{GLSRL}(d)\big) = \sigma^2 F_d H F_d^\top,$$
$$\mathrm{MSE}\big(\hat{\boldsymbol{\beta}}_{GLSRL}(d)\big) = \sigma^2\,\mathrm{tr}\big(F_d H F_d^\top\big) + \boldsymbol{\beta}^\top (F_d - I)^\top (F_d - I)\boldsymbol{\beta},$$
where $H = C^{-1}\big[I - R^\top (RC^{-1}R^\top)^{-1}RC^{-1}\big]$.
Theorem 1.
The mean squared error of the GLSRLE under the linear restriction $R\boldsymbol{\beta} = \mathbf{r}$ is given by
$$\mathrm{MSE}\big(\hat{\boldsymbol{\beta}}_{GLSRL}(d)\big) = \sigma^2 \sum_{j=1}^{p} \frac{(\lambda_j + d)^2}{(\lambda_j + 1)^2}\,m_{jj} + (d-1)^2 \sum_{j=1}^{p} \frac{\alpha_j^2}{(\lambda_j + 1)^2},$$
where $m_{jj}$ is the $j$th diagonal element of the matrix $M = \Gamma^\top H \Gamma$.
Proof. 
Using $(C+I)^{-1} = \Gamma(\Lambda+I)^{-1}\Gamma^\top$ and $C + dI = \Gamma(\Lambda + dI)\Gamma^\top$, we can write
$$\mathrm{tr}\big(F_d H F_d^\top\big) = \mathrm{tr}\big[(C+I)^{-1}(C+dI)\,H\,(C+dI)(C+I)^{-1}\big] = \mathrm{tr}\big[(\Lambda+I)^{-2}(\Lambda+dI)^{2}\,\Gamma^\top H \Gamma\big] = \sum_{j=1}^{p} \frac{(\lambda_j+d)^2}{(\lambda_j+1)^2}\,m_{jj}.$$
Also, since $F_d - I = (C+I)^{-1}\big[(C+dI) - (C+I)\big] = (d-1)(C+I)^{-1}$ and $(C+I)^{-2} = \Gamma(\Lambda+I)^{-2}\Gamma^\top$, we have
$$\boldsymbol{\beta}^\top (F_d - I)^\top (F_d - I)\boldsymbol{\beta} = (d-1)^2\,\boldsymbol{\alpha}^\top(\Lambda+I)^{-2}\boldsymbol{\alpha} = (d-1)^2 \sum_{j=1}^{p} \frac{\alpha_j^2}{(\lambda_j+1)^2}.$$
So, the proof is completed. □
As an important consequence of Theorem 1, the optimal value of the biasing parameter $d$ can be obtained by differentiating the mean squared error of the GLSRLE with respect to $d$ and setting the derivative equal to zero, which yields
$$\hat d = 1 - \hat\sigma^2_{GLSR}\,\frac{\sum_{j=1}^{p} \dfrac{m_{jj}}{\lambda_j + 1}}{\sum_{j=1}^{p} \dfrac{\hat\alpha_{jGLSR}^2 + \hat\sigma^2_{GLSR}\,m_{jj}}{(\lambda_j + 1)^2}},$$
where $\hat\sigma^2_{GLSR}$ and $\hat{\boldsymbol{\alpha}}_{GLSR}$ are the estimators of $\sigma^2$ and $\boldsymbol{\alpha}$ based on the GLSRE, respectively; that is, $\hat\sigma^2_{GLSR} = \frac{1}{n-(p-q)}(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GLSR})^\top V^{-1}(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GLSR})$ and $\hat{\boldsymbol{\alpha}}_{GLSR} = \Gamma^\top\hat{\boldsymbol{\beta}}_{GLSR}$, for which
$$\hat{\boldsymbol{\beta}}_{GLSR} = \hat{\boldsymbol{\beta}}_{GLS} - C^{-1}R^\top (RC^{-1}R^\top)^{-1}(R\hat{\boldsymbol{\beta}}_{GLS} - \mathbf{r}).$$
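Putting the pieces together, a hedged R sketch of the GLSRLE with the data-driven d above is given below. It uses the identity that the GLSRLE equals F_d applied to the restricted estimator, together with the helpers defined earlier:

```r
## Sketch of the GLSRLE: restricted Liu estimator with the restricted d-hat.
glsrl_estimate <- function(fit, R_mat, r_vec, n, q) {
  p   <- ncol(fit$Xt)
  bR  <- restricted_gls(fit, R_mat, r_vec)             # beta-hat_GLSR
  eg  <- eigen(fit$C, symmetric = TRUE); lam <- eg$values
  Ci  <- solve(fit$C)
  H   <- Ci - Ci %*% t(R_mat) %*% solve(R_mat %*% Ci %*% t(R_mat), R_mat %*% Ci)
  m   <- diag(crossprod(eg$vectors, H %*% eg$vectors)) # m_jj of M = Gamma' H Gamma
  res <- fit$yt - fit$Xt %*% bR
  s2  <- drop(crossprod(res, fit$Vi %*% res)) / (n - (p - q))
  a2  <- drop(crossprod(eg$vectors, bR))^2             # squared alpha-hat_GLSR
  d   <- 1 - s2 * sum(m / (lam + 1)) / sum((a2 + s2 * m) / (lam + 1)^2)
  Fd  <- solve(fit$C + diag(p)) %*% (fit$C + d * diag(p))
  list(beta = drop(Fd %*% bR), d = d)
}
```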

4. Extension of the Least Trimmed Squares (LTS) Estimator Using Semidefinite Programming in an RPLM

As is known, outliers can severely corrupt the least-squares estimator and all of the estimators based on it, owing to their large influence on the objective function. Robust regression is a broad term that encompasses various estimation approaches; the least trimmed squares is a robust regression method introduced by [35]. The LTS combats this issue by minimizing the sum of the $h$ smallest squared residuals, effectively discarding a certain fraction of extreme observations. Here, $h$ serves as a threshold, and the proportion of trimmed data is $\alpha = (n-h)/n$. Typically, the value of $h$ can be taken as $h = \lceil n(1-\alpha)\rceil$; other authors suggest taking $h = \lfloor n/2\rfloor + \lfloor (p+1)/2\rfloor$, $h = \lfloor n(1-\alpha)\rfloor + \lfloor \alpha(p+1)\rfloor$, or $h = \lfloor n(1-\alpha)\rfloor + 1$ [36]. An exact LTS estimator requires examining all $\binom{n}{h}$ least-squares fits over $h$-subsets of the index set $\{1, \ldots, n\}$, so for large sample sizes, finding the global minimum of the LTS objective takes considerable time and space. To accelerate the process of finding the LTS fit, we use an analog of the FAST-LTS algorithm of Rousseeuw and van Driessen [22].
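The core of FAST-LTS is the concentration step ("C-step"): fit least squares on the current h-subset, then re-select the h observations with the smallest squared residuals. A bare-bones sketch (ours; real implementations use many random starts and nested extensions) is:

```r
## One random-start concentration ("C-step") scheme for LTS.
c_step_lts <- function(X, y, h, max_iter = 50) {
  n    <- nrow(X)
  idx  <- sample(n, h)                        # random initial h-subset
  beta <- qr.solve(X[idx, , drop = FALSE], y[idx])
  for (k in seq_len(max_iter)) {
    r2      <- drop(y - X %*% beta)^2
    new_idx <- order(r2)[1:h]                 # keep h smallest squared residuals
    if (setequal(new_idx, idx)) break         # objective can no longer decrease
    idx  <- new_idx
    beta <- qr.solve(X[idx, , drop = FALSE], y[idx])
  }
  list(beta = beta, subset = sort(idx), objective = sum(sort(r2)[1:h]))
}
```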
Let $z_i$ be an indicator variable that signifies whether or not observation $i$ is regarded as a normal observation. The objective function of the LTS in an RPLM is then
$$\min_{\boldsymbol{\beta}, \mathbf{z}}\ \psi(\boldsymbol{\beta}, \mathbf{z}) = (\tilde{\mathbf{y}} - \tilde{X}\boldsymbol{\beta})^\top V^{-1/2} Z V^{-1/2}(\tilde{\mathbf{y}} - \tilde{X}\boldsymbol{\beta}) \quad \text{s.t.}\ \ R\boldsymbol{\beta} = \mathbf{r},\ \ \mathbf{e}^\top\mathbf{z} = h,\ \ z_i \in \{0, 1\},\ \ i = 1, \ldots, n,$$
where $Z$ is the diagonal matrix with diagonal elements $\mathbf{z} = (z_1, \ldots, z_n)^\top$, $\mathbf{e} = (1, \ldots, 1)^\top_{n\times 1}$, and $h$ is a positive integer. The resulting estimator is the generalized least trimmed squares-restricted estimator (GLTSRE), given by
$$\hat{\boldsymbol{\beta}}_{GLTSR}(\mathbf{z}) = \hat{\boldsymbol{\beta}}_{GLTS}(\mathbf{z}) - C(\mathbf{z})^{-1}R^\top\big(RC(\mathbf{z})^{-1}R^\top\big)^{-1}\big(R\hat{\boldsymbol{\beta}}_{GLTS}(\mathbf{z}) - \mathbf{r}\big),$$
where $C(\mathbf{z}) = \tilde{X}^\top V^{-1/2} Z V^{-1/2}\tilde{X}$ and $\hat{\boldsymbol{\beta}}_{GLTS}(\mathbf{z}) = C(\mathbf{z})^{-1}\tilde{X}^\top V^{-1/2} Z V^{-1/2}\tilde{\mathbf{y}}$.
According to the research conducted in [37,38], we may define a relaxed problem in the RPLM, called the relaxed least trimmed squares (RLTS) problem, as follows:
$$\min_{\boldsymbol{\beta}, \mathbf{z}^{*}}\ \psi(\boldsymbol{\beta}, \mathbf{z}^{*}) = (\tilde{\mathbf{y}} - \tilde{X}\boldsymbol{\beta})^\top V^{-1/2} Z^{*} V^{-1/2}(\tilde{\mathbf{y}} - \tilde{X}\boldsymbol{\beta}) \quad \text{s.t.}\ \ R\boldsymbol{\beta} = \mathbf{r},\ \ \mathbf{e}^\top\mathbf{z}^{*} = h,\ \ 0 \le z_i^{*} \le 1,\ \ i = 1, \ldots, n,$$
where $Z^{*}$ is the diagonal matrix with diagonal elements $\mathbf{z}^{*} = (z_1^{*}, \ldots, z_n^{*})^\top$. The resulting estimator of the above optimization problem is the generalized relaxed least trimmed squares-restricted estimator (GRLTSRE), given as follows:
$$\hat{\boldsymbol{\beta}}_{GRLTSR}(\mathbf{z}^{*}) = \hat{\boldsymbol{\beta}}_{GRLTS}(\mathbf{z}^{*}) - C(\mathbf{z}^{*})^{-1}R^\top\big(RC(\mathbf{z}^{*})^{-1}R^\top\big)^{-1}\big(R\hat{\boldsymbol{\beta}}_{GRLTS}(\mathbf{z}^{*}) - \mathbf{r}\big),$$
where $C(\mathbf{z}^{*}) = \tilde{X}^\top V^{-1/2} Z^{*} V^{-1/2}\tilde{X}$ and $\hat{\boldsymbol{\beta}}_{GRLTS}(\mathbf{z}^{*}) = C(\mathbf{z}^{*})^{-1}\tilde{X}^\top V^{-1/2} Z^{*} V^{-1/2}\tilde{\mathbf{y}}$.
Here, we propose an extension of the RLTS problem in the RPLM, called ERLTS, based on the optimization of the following objective function:
$$\min_{\boldsymbol{\beta}, \mathbf{z}^{**}}\ \psi(\boldsymbol{\beta}, \mathbf{z}^{**}) = (\tilde{\mathbf{y}} - \tilde{X}\boldsymbol{\beta})^\top V^{-1/2} Z^{**} V^{-1/2}(\tilde{\mathbf{y}} - \tilde{X}\boldsymbol{\beta}) \quad \text{s.t.}\ \ R\boldsymbol{\beta} = \mathbf{r},\ \ h_1 \le \mathbf{e}^\top\mathbf{z}^{**} \le h_2,\ \ 0 \le z_i^{**} \le 1,\ \ i = 1, \ldots, n,$$
where $Z^{**}$ is the diagonal matrix with diagonal elements $\mathbf{z}^{**} = (z_1^{**}, \ldots, z_n^{**})^\top$ and the positive integers $h_1$ and $h_2$ satisfy $h_1 \le h \le h_2$. The generalized extended relaxed least trimmed squares-restricted estimator (GERLTSRE) is the solution to this optimization problem, obtained by semidefinite programming as follows:
$$\hat{\boldsymbol{\beta}}_{GERLTSR}(\mathbf{z}^{**}) = \hat{\boldsymbol{\beta}}_{GERLTS}(\mathbf{z}^{**}) - C(\mathbf{z}^{**})^{-1}R^\top\big(RC(\mathbf{z}^{**})^{-1}R^\top\big)^{-1}\big(R\hat{\boldsymbol{\beta}}_{GERLTS}(\mathbf{z}^{**}) - \mathbf{r}\big),$$
where $C(\mathbf{z}^{**}) = \tilde{X}^\top V^{-1/2} Z^{**} V^{-1/2}\tilde{X}$ and $\hat{\boldsymbol{\beta}}_{GERLTS}(\mathbf{z}^{**}) = C(\mathbf{z}^{**})^{-1}\tilde{X}^\top V^{-1/2} Z^{**} V^{-1/2}\tilde{\mathbf{y}}$.
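The semidefinite programming formulation of [37] is beyond the scope of a short snippet, but a simple alternating heuristic for the relaxed problem conveys its structure: with the weights fixed, the beta-step is a convex, linearly constrained quadratic program; with beta fixed, the weight-step is a linear program whose solution concentrates the weights on the smallest squared residuals. A hedged sketch using the CVXR package (V taken as the identity for brevity; all names are ours):

```r
## Alternating heuristic for the restricted (relaxed) LTS problem.
library(CVXR)
rlts_alternate <- function(Xt, yt, R_mat, r_vec, h, n_iter = 10) {
  n <- nrow(Xt); p <- ncol(Xt)
  z <- rep(1, n)                              # start with all weights equal to 1
  for (k in seq_len(n_iter)) {
    beta <- Variable(p)                       # beta-step: convex QP given z
    prob <- Problem(Minimize(sum(z * (yt - Xt %*% beta)^2)),
                    list(R_mat %*% beta == r_vec))
    b  <- solve(prob)$getValue(beta)
    r2 <- drop(yt - Xt %*% b)^2
    z  <- as.numeric(rank(r2, ties.method = "first") <= h)  # z-step: LP solution
  }
  list(beta = drop(b), z = z)
}
```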

5. Extended LTS Liu Estimator in an RPLM

In this section, we implement the three types of LTS estimators introduced in the previous section within Liu's framework to derive novel robust Liu estimators that resist both multicollinearity and outliers in the data set. The robust form of the Liu estimator based on the classical LTS estimator, considered by Kan et al. [39], can be extended as follows:
$$\hat{\boldsymbol{\beta}}_{LTSL}(d, \mathbf{z}) = (X^\top Z X + I)^{-1}(X^\top Z X + dI)\,\hat{\boldsymbol{\beta}}_{LTS}(\mathbf{z}).$$
Now, we adopt and utilize this estimator within the proposed robust estimators previously defined for the RPLM to obtain two-stage estimators for the parameters, as follows (an R sketch of the first of these two-stage procedures is given after the list):
  • Robust estimators for $d$ and $\boldsymbol{\beta}$ based on the LTS approach to the RPLM (the generalized least trimmed squares-restricted Liu, GLTSRL, method):
$$\hat\sigma^2_{GLTSR} = \tfrac{1}{n-(p-q)}\big(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GLTSR}(\mathbf{z})\big)^\top V^{-1/2} Z V^{-1/2}\big(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GLTSR}(\mathbf{z})\big),$$
$$\hat d_{LTS} = 1 - \hat\sigma^2_{GLTSR}\,\frac{\sum_{j=1}^{p} \dfrac{m_{jj}(\mathbf{z})}{\lambda_j(\mathbf{z}) + 1}}{\sum_{j=1}^{p} \dfrac{\hat\alpha^2_{jGLTSR}(\mathbf{z}) + \hat\sigma^2_{GLTSR}\,m_{jj}(\mathbf{z})}{(\lambda_j(\mathbf{z}) + 1)^2}},$$
$$\hat{\boldsymbol{\beta}}_{GLTSRL}(\hat d_{LTS}, \mathbf{z}) = F_{\hat d_{LTS}}(\mathbf{z})\,\hat{\boldsymbol{\beta}}_{GLTS}(\mathbf{z}) - F_{\hat d_{LTS}}(\mathbf{z})\,C(\mathbf{z})^{-1}R^\top\big(RC(\mathbf{z})^{-1}R^\top\big)^{-1}\big(R\hat{\boldsymbol{\beta}}_{GLTS}(\mathbf{z}) - \mathbf{r}\big),$$
    where $\lambda_j(\mathbf{z})$ is the $j$th eigenvalue of the matrix $C(\mathbf{z}) = \Gamma(\mathbf{z})\Lambda(\mathbf{z})\Gamma(\mathbf{z})^\top$ and $m_{jj}(\mathbf{z})$ is the $j$th diagonal element of the matrix $M(\mathbf{z}) = \Gamma(\mathbf{z})^\top H(\mathbf{z})\Gamma(\mathbf{z})$, in which $H(\mathbf{z}) = C(\mathbf{z})^{-1}\big[I - R^\top\big(RC(\mathbf{z})^{-1}R^\top\big)^{-1}RC(\mathbf{z})^{-1}\big]$, $\hat\alpha_{jGLTSR}(\mathbf{z})$ is the $j$th element of $\hat{\boldsymbol{\alpha}}_{GLTSR}(\mathbf{z}) = \Gamma(\mathbf{z})^\top\hat{\boldsymbol{\beta}}_{GLTSR}(\mathbf{z})$, and $F_{\hat d_{LTS}}(\mathbf{z}) = (C(\mathbf{z}) + I)^{-1}(C(\mathbf{z}) + \hat d_{LTS} I)$.
  • Robust estimators for $d$ and $\boldsymbol{\beta}$ based on the RLTS approach to the RPLM (the generalized relaxed least trimmed squares-restricted Liu, GRLTSRL, method):
$$\hat\sigma^2_{GRLTSR} = \tfrac{1}{n-(p-q)}\big(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GRLTSR}(\mathbf{z}^{*})\big)^\top V^{-1/2} Z^{*} V^{-1/2}\big(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GRLTSR}(\mathbf{z}^{*})\big),$$
$$\hat d_{RLTS} = 1 - \hat\sigma^2_{GRLTSR}\,\frac{\sum_{j=1}^{p} \dfrac{m_{jj}(\mathbf{z}^{*})}{\lambda_j(\mathbf{z}^{*}) + 1}}{\sum_{j=1}^{p} \dfrac{\hat\alpha^2_{jGRLTSR}(\mathbf{z}^{*}) + \hat\sigma^2_{GRLTSR}\,m_{jj}(\mathbf{z}^{*})}{(\lambda_j(\mathbf{z}^{*}) + 1)^2}},$$
$$\hat{\boldsymbol{\beta}}_{GRLTSRL}(\hat d_{RLTS}, \mathbf{z}^{*}) = F_{\hat d_{RLTS}}(\mathbf{z}^{*})\,\hat{\boldsymbol{\beta}}_{GRLTS}(\mathbf{z}^{*}) - F_{\hat d_{RLTS}}(\mathbf{z}^{*})\,C(\mathbf{z}^{*})^{-1}R^\top\big(RC(\mathbf{z}^{*})^{-1}R^\top\big)^{-1}\big(R\hat{\boldsymbol{\beta}}_{GRLTS}(\mathbf{z}^{*}) - \mathbf{r}\big),$$
    where $\lambda_j(\mathbf{z}^{*})$ is the $j$th eigenvalue of the matrix $C(\mathbf{z}^{*}) = \Gamma(\mathbf{z}^{*})\Lambda(\mathbf{z}^{*})\Gamma(\mathbf{z}^{*})^\top$ and $m_{jj}(\mathbf{z}^{*})$ is the $j$th diagonal element of the matrix $M(\mathbf{z}^{*}) = \Gamma(\mathbf{z}^{*})^\top H(\mathbf{z}^{*})\Gamma(\mathbf{z}^{*})$, in which $H(\mathbf{z}^{*}) = C(\mathbf{z}^{*})^{-1}\big[I - R^\top\big(RC(\mathbf{z}^{*})^{-1}R^\top\big)^{-1}RC(\mathbf{z}^{*})^{-1}\big]$, $\hat\alpha_{jGRLTSR}(\mathbf{z}^{*})$ is the $j$th element of $\hat{\boldsymbol{\alpha}}_{GRLTSR}(\mathbf{z}^{*}) = \Gamma(\mathbf{z}^{*})^\top\hat{\boldsymbol{\beta}}_{GRLTSR}(\mathbf{z}^{*})$, and $F_{\hat d_{RLTS}}(\mathbf{z}^{*}) = (C(\mathbf{z}^{*}) + I)^{-1}(C(\mathbf{z}^{*}) + \hat d_{RLTS} I)$.
  • Robust estimators for $d$ and $\boldsymbol{\beta}$ based on the ERLTS approach to the RPLM (the generalized extended relaxed least trimmed squares-restricted Liu, GERLTSRL, method):
$$\hat\sigma^2_{GERLTSR} = \tfrac{1}{n-(p-q)}\big(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GERLTSR}(\mathbf{z}^{**})\big)^\top V^{-1/2} Z^{**} V^{-1/2}\big(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GERLTSR}(\mathbf{z}^{**})\big),$$
$$\hat d_{ERLTS} = 1 - \hat\sigma^2_{GERLTSR}\,\frac{\sum_{j=1}^{p} \dfrac{m_{jj}(\mathbf{z}^{**})}{\lambda_j(\mathbf{z}^{**}) + 1}}{\sum_{j=1}^{p} \dfrac{\hat\alpha^2_{jGERLTSR}(\mathbf{z}^{**}) + \hat\sigma^2_{GERLTSR}\,m_{jj}(\mathbf{z}^{**})}{(\lambda_j(\mathbf{z}^{**}) + 1)^2}},$$
$$\hat{\boldsymbol{\beta}}_{GERLTSRL}(\hat d_{ERLTS}, \mathbf{z}^{**}) = F_{\hat d_{ERLTS}}(\mathbf{z}^{**})\,\hat{\boldsymbol{\beta}}_{GERLTS}(\mathbf{z}^{**}) - F_{\hat d_{ERLTS}}(\mathbf{z}^{**})\,C(\mathbf{z}^{**})^{-1}R^\top\big(RC(\mathbf{z}^{**})^{-1}R^\top\big)^{-1}\big(R\hat{\boldsymbol{\beta}}_{GERLTS}(\mathbf{z}^{**}) - \mathbf{r}\big),$$
    where $\lambda_j(\mathbf{z}^{**})$ is the $j$th eigenvalue of the matrix $C(\mathbf{z}^{**}) = \Gamma(\mathbf{z}^{**})\Lambda(\mathbf{z}^{**})\Gamma(\mathbf{z}^{**})^\top$ and $m_{jj}(\mathbf{z}^{**})$ is the $j$th diagonal element of the matrix $M(\mathbf{z}^{**}) = \Gamma(\mathbf{z}^{**})^\top H(\mathbf{z}^{**})\Gamma(\mathbf{z}^{**})$, in which $H(\mathbf{z}^{**}) = C(\mathbf{z}^{**})^{-1}\big[I - R^\top\big(RC(\mathbf{z}^{**})^{-1}R^\top\big)^{-1}RC(\mathbf{z}^{**})^{-1}\big]$, $\hat\alpha_{jGERLTSR}(\mathbf{z}^{**})$ is the $j$th element of $\hat{\boldsymbol{\alpha}}_{GERLTSR}(\mathbf{z}^{**}) = \Gamma(\mathbf{z}^{**})^\top\hat{\boldsymbol{\beta}}_{GERLTSR}(\mathbf{z}^{**})$, and $F_{\hat d_{ERLTS}}(\mathbf{z}^{**}) = (C(\mathbf{z}^{**}) + I)^{-1}(C(\mathbf{z}^{**}) + \hat d_{ERLTS} I)$.
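For concreteness, the first of these two-stage procedures (GLTSRL) can be sketched in R as follows, given robust weights z from an LTS-type first stage. This is our illustration under the assumption that V is known, with V^{-1/2} computed from an eigendecomposition:

```r
## Two-stage sketch of the GLTSRL estimator for given robust weights z.
gltsrl_estimate <- function(Xt, yt, V, z, R_mat, r_vec, q) {
  n  <- nrow(Xt); p <- ncol(Xt)
  ev <- eigen(V, symmetric = TRUE)
  Vh <- ev$vectors %*% diag(1 / sqrt(ev$values)) %*% t(ev$vectors)  # V^{-1/2}
  A  <- Vh %*% diag(z) %*% Vh                      # V^{-1/2} Z V^{-1/2}
  Cz <- crossprod(Xt, A %*% Xt)                    # C(z)
  bz <- solve(Cz, crossprod(Xt, A %*% yt))         # beta-hat_GLTS(z)
  Ci <- solve(Cz)
  bR <- bz - Ci %*% t(R_mat) %*%
        solve(R_mat %*% Ci %*% t(R_mat), R_mat %*% bz - r_vec)  # beta-hat_GLTSR(z)
  res <- yt - Xt %*% bR
  s2  <- drop(crossprod(res, A %*% res)) / (n - (p - q))        # sigma2-hat_GLTSR
  eg  <- eigen(Cz, symmetric = TRUE); lam <- eg$values
  H   <- Ci - Ci %*% t(R_mat) %*% solve(R_mat %*% Ci %*% t(R_mat), R_mat %*% Ci)
  m   <- diag(crossprod(eg$vectors, H %*% eg$vectors))
  a2  <- drop(crossprod(eg$vectors, bR))^2
  d   <- 1 - s2 * sum(m / (lam + 1)) / sum((a2 + s2 * m) / (lam + 1)^2)
  Fd  <- solve(Cz + diag(p)) %*% (Cz + d * diag(p))
  list(beta = drop(Fd %*% bR), d = d)               # beta-hat_GLTSRL(d-hat, z)
}
```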
Theorem 2.
The mean squared errors of the proposed estimators (23)–(25) under the linear restriction $R\boldsymbol{\beta} = \mathbf{r}$ can be estimated by
$$\widehat{\mathrm{MSE}}\big(\hat{\boldsymbol{\beta}}_{GLTSRL}(\hat d_{LTS}, \mathbf{z})\big) = \hat\sigma^2_{GLTSR}\sum_{j=1}^{p} \frac{(\lambda_j(\mathbf{z}) + \hat d_{LTS})^2}{(\lambda_j(\mathbf{z}) + 1)^2}\,m_{jj}(\mathbf{z}) + (\hat d_{LTS} - 1)^2 \sum_{j=1}^{p} \frac{\hat\alpha^2_{jGLTSR}(\mathbf{z})}{(\lambda_j(\mathbf{z}) + 1)^2},$$
$$\widehat{\mathrm{MSE}}\big(\hat{\boldsymbol{\beta}}_{GRLTSRL}(\hat d_{RLTS}, \mathbf{z}^{*})\big) = \hat\sigma^2_{GRLTSR}\sum_{j=1}^{p} \frac{(\lambda_j(\mathbf{z}^{*}) + \hat d_{RLTS})^2}{(\lambda_j(\mathbf{z}^{*}) + 1)^2}\,m_{jj}(\mathbf{z}^{*}) + (\hat d_{RLTS} - 1)^2 \sum_{j=1}^{p} \frac{\hat\alpha^2_{jGRLTSR}(\mathbf{z}^{*})}{(\lambda_j(\mathbf{z}^{*}) + 1)^2},$$
$$\widehat{\mathrm{MSE}}\big(\hat{\boldsymbol{\beta}}_{GERLTSRL}(\hat d_{ERLTS}, \mathbf{z}^{**})\big) = \hat\sigma^2_{GERLTSR}\sum_{j=1}^{p} \frac{(\lambda_j(\mathbf{z}^{**}) + \hat d_{ERLTS})^2}{(\lambda_j(\mathbf{z}^{**}) + 1)^2}\,m_{jj}(\mathbf{z}^{**}) + (\hat d_{ERLTS} - 1)^2 \sum_{j=1}^{p} \frac{\hat\alpha^2_{jGERLTSR}(\mathbf{z}^{**})}{(\lambda_j(\mathbf{z}^{**}) + 1)^2}.$$
Proof. 
The proof directly follows by mimicking the proof of Theorem 1. □
Remark 1.
According to [40,41], because the variance matrix $V$ is typically unknown in practice, the suggested estimators are not directly applicable, as they rely on the unknown variance matrix of the error terms. To address this issue, we replace the unknown $V$ with a consistent estimator of it using feasible two-stage estimation, as follows:
$$\hat V = \tfrac{1}{n-(p-q)}\big(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{ERLTSR}(\mathbf{z}^{**})\big)\big(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{ERLTSR}(\mathbf{z}^{**})\big)^\top,$$
where
$$\hat{\boldsymbol{\beta}}_{ERLTSR}(\mathbf{z}^{**}) = \hat{\boldsymbol{\beta}}_{ERLTS}(\mathbf{z}^{**}) - (\tilde{X}^\top Z^{**}\tilde{X})^{-1}R^\top\big(R(\tilde{X}^\top Z^{**}\tilde{X})^{-1}R^\top\big)^{-1}\big(R\hat{\boldsymbol{\beta}}_{ERLTS}(\mathbf{z}^{**}) - \mathbf{r}\big),$$
in which $\hat{\boldsymbol{\beta}}_{ERLTS}(\mathbf{z}^{**}) = (\tilde{X}^\top Z^{**}\tilde{X})^{-1}\tilde{X}^\top Z^{**}\tilde{\mathbf{y}}$ is the ordinary ERLTS estimator of the parameter $\boldsymbol{\beta}$.

6. Illustrative Experiments

To illustrate the advantages of the improved techniques that have been proposed for a restricted partially linear model in the presence of simultaneous multicollinearity and outlier problems, we continue our examination with some numerical experiments in this section. We evaluate the performance of the proposed techniques on both a real-world data set and some Monte Carlo simulation schemes.

6.1. The Monte Carlo Simulation Schemes

We conducted a numerical analysis to evaluate the precision of our robust estimators for an RPLM when dealing with data sets contaminated by outliers and exhibiting multicollinearity. In each replication, the regressors were randomly generated using the following structure. To reach different levels of multicollinearity, following [42,43], the explanatory variables were constructed over 150 observations and $10^3$ iterations from the model
$$x_{ij} = (1 - \gamma^2)^{1/2} z_{ij} + \gamma z_{ip}, \qquad i = 1, \ldots, n,\ \ j = 1, \ldots, p,$$
where the $z_{ij}$ are independent standard normal pseudo-random variables and $\gamma$ is chosen such that the correlation between any two explanatory variables equals $\gamma^2$. These variables are subsequently normalized so that $X^\top X$ and $X^\top\mathbf{y}$ are in correlation form. Four distinct correlation levels are investigated, namely $\gamma = 0.25, 0.50, 0.75$, and $0.95$.
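A sketch of this design in R (ours, following the displayed scheme) is:

```r
## Generating collinear regressors with pairwise correlation about gamma^2.
make_collinear_X <- function(n = 150, p = 5, gamma = 0.95) {
  Z <- matrix(rnorm(n * p), n, p)
  X <- sqrt(1 - gamma^2) * Z + gamma * Z[, p]   # shares z_ip across columns
  scale(X)                                      # normalized columns
}
set.seed(3)
round(cor(make_collinear_X())[1:3, 1:3], 2)     # off-diagonals near 0.95^2
```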
For the dependent variable, $n$ observations are then generated by
$$y_i = \sum_{j=1}^{5} x_{ij}\beta_j + f(t_i) + \varepsilon_i, \qquad i = 1, \ldots, n,$$
where
$$\boldsymbol{\beta} = (-1, 4, 2, -5, -3)^\top, \quad f(t) = \exp(\sin t \cos t) + t,\ \ t \in [0, 3], \quad \boldsymbol{\varepsilon}_{(n\times 1)} = (\boldsymbol{\varepsilon}_1^\top, \boldsymbol{\varepsilon}_2^\top)^\top,$$
in which
$$\boldsymbol{\varepsilon}_{1\,(h\times 1)} \sim N_h(\mathbf{0}, \sigma^2 V), \quad \sigma^2 = 1.64, \quad v_{ij} = \exp(-9|i-j|), \quad h = \lfloor 0.25n\rfloor, \lfloor 0.33n\rfloor, \lfloor 0.50n\rfloor,$$
and
$$\boldsymbol{\varepsilon}_{2\,((n-h)\times 1)} \overset{\text{i.i.d.}}{\sim} \chi^2_1(15),$$
where $\chi^2_m(\delta)$ denotes the non-central chi-squared distribution with $m$ degrees of freedom and non-centrality parameter $\delta$. The primary motivation behind this error structure is to corrupt the data set and assess the resistance of the suggested techniques: the last $n-h$ error terms are independent non-central chi-squared random variables, while the first $h$ error terms are dependent normal random variables. The non-centrality parameter makes the outliers lie on one side of the true regression surface, biasing the non-robust estimators. (The signs of $\boldsymbol{\beta}$ follow the estimates reported in Tables 3–6.) For the restriction, we consider the following linear constraints:
$$R = \begin{pmatrix} 1 & 5 & 3 & 1 & 1 \\ 2 & 1 & 0 & 2 & 3 \\ 1 & 2 & 1 & 3 & 2 \\ 4 & 1 & 2 & 2 & 0 \end{pmatrix}, \qquad \mathbf{r} = R\boldsymbol{\beta}.$$
For estimating the nonparametric part of the model (30), $f(\cdot)$, the weights proposed by Priestley and Chao [44] with the Gaussian kernel are used, as follows:
$$W_{\omega}(t_i, t_j) = \frac{1}{n\omega}\,K\!\left(\frac{t_i - t_j}{\omega}\right) = \frac{1}{n\omega}\,\frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{(t_i - t_j)^2}{2\omega^2}\right).$$
Also, the cross-validation (C.V.) approach is applied to obtain the optimal value of the bandwidth $\omega$, which minimizes the C.V. criterion.
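A compact R sketch of the Priestley–Chao estimator and the leave-one-out C.V. choice of the bandwidth (our illustration) is:

```r
## Priestley-Chao estimate of f from the partial residuals y - X beta-hat.
pc_fit <- function(t, partial_res, omega) {
  W <- outer(t, t, function(a, b) dnorm((a - b) / omega)) / (length(t) * omega)
  drop(W %*% partial_res)
}
cv_omega <- function(t, partial_res, grid = seq(0.05, 1, by = 0.05)) {
  n  <- length(t)
  cv <- sapply(grid, function(w) {
    W <- outer(t, t, function(a, b) dnorm((a - b) / w)) / (n * w)
    diag(W) <- 0                      # leave-one-out weights
    mean((partial_res - W %*% partial_res)^2)
  })
  grid[which.min(cv)]                 # bandwidth minimizing the C.V. criterion
}
```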
The nonparametric component of the model (30) is presented in Figure 1. This wave-like function is challenging to estimate and offers a useful test case for the proposed estimation techniques. All calculations were performed with the statistical software R 4.3.1. Tables 1–14 present a summary of the results. After iterating the process over all simulations, the minimum, maximum, mean, median, and standard deviation of the mean squared errors of the linear and nonlinear estimators are reported in Table 1 and Table 2, respectively, where
$$\widehat{\mathrm{mse}}\big(\hat{\boldsymbol{\beta}}_{(i)}, \boldsymbol{\beta}\big) = \frac{1}{M}\sum_{m=1}^{M} \big\|\hat{\boldsymbol{\beta}}^{(m)}_{(i)} - \boldsymbol{\beta}\big\|_2^2, \qquad \widehat{\mathrm{mse}}\big(\hat f_{(i)}, f\big) = \frac{1}{nM}\sum_{m=1}^{M} \big\|\hat f^{(m)}_{(i)} - f\big\|_2^2, \qquad \hat f^{(m)}_{(i)} = K\big(\mathbf{y} - X\hat{\boldsymbol{\beta}}^{(m)}_{(i)}\big),$$
in which $\hat{\boldsymbol{\beta}}^{(m)}_{(i)}$ and $\hat f^{(m)}_{(i)}$ are the $i$th estimators of the linear and nonlinear parts ($i = 1, \ldots, 8$) obtained in the $m$th iteration of the model for all of the proposed approaches, and $\|\mathbf{v}\|_2^2 = \sum_{i=1}^{p} v_i^2$ for $\mathbf{v} = (v_1, \ldots, v_p)^\top$. Moreover, PCDO is the percentage of the data contaminated with outliers ($\mathrm{PCDO} = 100 \times \frac{n-h}{n}\%$).
Figure 2 shows the estimates of the nonlinear part of model (30) obtained with the proposed methods. In this figure, the nonparametric function is estimated via the kernel method after the linear part of model (30) has been estimated by each of the eight proposed methods. To save space, results are reported only for n = 150 with PCDO = 25%, 33%, and 50% and γ = 0.95. From Figure 2, it is evident that the non-robust methods are completely corrupted by the outliers, especially for large values of PCDO.
The results from the Monte Carlo simulations for n = 150, p = 5, and γ = 0.25, 0.50, 0.75, and 0.95 are presented in Tables 1–14 and Figure 2. From these tables, it can be seen that the factors affecting the performance of the estimators are the degree of correlation (γ) between the explanatory variables and the percentage of the data contaminated with outliers (PCDO). From the tables and Figure 2, it is clear that as the levels of PCDO and multicollinearity increase, the mean squared error estimates of both the linear and nonlinear parts of the GLSRE increase sharply. Increasing the level of the PCDO inflates the mean squared error estimates of both the linear and nonlinear parts of the non-robust estimators, while increasing the level of multicollinearity inflates those of the non-Liu estimators. It can also be seen that the GERLTSRLE usually performs best, offering the smallest mean squared error estimates (bold values) for both the linear and nonlinear parts of the models in most of the simulated schemes.

6.2. Real-World Data Example

To evaluate the proposed estimation approaches in the partially linear model, we examine the hedonic pricing of house features [45]. The data consist of 92 detached homes in the Ottawa area that were sold in 1987. The variables are defined as follows: the response variable is the sale price (SP); the explanatory variables are lot size (lot area, LT), the square footage of housing (SFH), average neighborhood income (ANI), distance to the highway (DHW), the presence of a garage (GAR), and the presence of a fireplace (FP). We first consider the purely parametric model
$$SP_i = \beta_0 + \beta_1 LT_i + \beta_2 SFH_i + \beta_3 FP_i + \beta_4 DHW_i + \beta_5 GAR_i + \beta_6 ANI_i + \varepsilon_i.$$
We use added-variable plots to intuitively determine the parametric and nonparametric components of the model (see [46] for more details). Added-variable plots allow us to examine each predictor's influence graphically after adjusting for the effects of the other explanatory variables. Based on the analysis of its added-variable plot (Figure 3), we identify ANI as a nonparametric component. Therefore, the partially linear model is specified as follows:
$$SP_i = \beta_0 + \beta_1 LT_i + \beta_2 SFH_i + \beta_3 FP_i + \beta_4 DHW_i + \beta_5 GAR_i + f(ANI_i) + \varepsilon_i.$$
The "mctest" package in R is used to detect multicollinearity in the design matrix; the Farrar–Glauber test and other pertinent multicollinearity diagnostics produce the following results.

Overall multicollinearity diagnostics:

| Diagnostic | Result |
|---|---|
| Determinant \|X′X\| | 0.0056181 |
| Farrar chi-square | 50.83781 |
| Red indicator | 0.20650 |
| Sum of lambda inverse | 700.21041 |
| Theil's method | −0.73200 |
| Condition number | 200.40211 |

In the package output, 1 indicates that collinearity is detected by the corresponding test and 0 that it is not.
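These diagnostics can be produced along the following lines (a hedged sketch; the mctest interface shown is that of recent package versions, and "housing" is an illustrative data frame name):

```r
## Overall and individual multicollinearity diagnostics with mctest.
library(mctest)
fit <- lm(SP ~ LT + SFH + FP + DHW + GAR + ANI, data = housing)
omcdiag(fit)   # determinant, Farrar chi-square, red indicator, condition number, ...
imcdiag(fit)   # per-regressor diagnostics such as VIF and TOL
```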
The correlation plots of the real data set are displayed in Figure 4. It is evident from the findings above and Figure 4 that the independent variables in the real data set under investigation exhibit substantial multicollinearity, so the suggested estimation techniques must be used to address it.
Based on a basic investigation of the partially linear model (31) using a robust Liu estimator, the restriction $R\boldsymbol{\beta} = \mathbf{r}$ may be specified as follows:
$$R = \begin{pmatrix} 1 & 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 2 & 0 \\ 0 & 1 & 0 & 2 & 8 \end{pmatrix}, \qquad \mathbf{r} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
Now, the linear hypothesis $R\boldsymbol{\beta} = \mathbf{r}$ is examined within the framework of the restricted partially linear model (31). Under $R\boldsymbol{\beta} = \mathbf{r}$, the test statistic is computed as
$$\chi^2_{\mathrm{rank}(R)} = \big(R\hat{\boldsymbol{\beta}}_{GLS} - \mathbf{r}\big)^\top \big(R\hat\Sigma_{\hat\beta}R^\top\big)^{-1}\big(R\hat{\boldsymbol{\beta}}_{GLS} - \mathbf{r}\big) = 0.4781,$$
where $\hat\Sigma_{\hat\beta} = \hat s^2(\tilde{X}^\top\hat V^{-1}\tilde{X})^{-1}$, in which $\hat s^2 = \frac{1}{n-p}(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GLS})^\top\hat V^{-1}(\tilde{\mathbf{y}} - \tilde{X}\hat{\boldsymbol{\beta}}_{GLS})$.
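The statistic can be reproduced with a small Wald-type helper, continuing our earlier sketches (fit carries the GLS coefficients, C, and the inverse of the estimated V):

```r
## Wald-type chi-square test of H0: R beta = r in the fitted RPLM.
wald_restriction_test <- function(fit, R_mat, r_vec, n, p) {
  res  <- fit$yt - fit$Xt %*% fit$beta
  s2   <- drop(crossprod(res, fit$Vi %*% res)) / (n - p)   # s-hat^2
  Sig  <- s2 * solve(fit$C)                                # s^2 (X'V^-1X)^-1
  dlt  <- R_mat %*% fit$beta - r_vec
  stat <- drop(crossprod(dlt, solve(R_mat %*% Sig %*% t(R_mat), dlt)))
  c(statistic = stat,
    p.value   = pchisq(stat, df = qr(R_mat)$rank, lower.tail = FALSE))
}
```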
Consequently, the null hypothesis H0 is not rejected. Table 15 gives a brief evaluation of the proposed estimators; based on the results obtained, the GERLTSRLE appears to be the most effective. After estimating the linear component of model (31) with the suggested estimators, the function fitted by kernel smoothing is shown in Figure 5.
In Table 15, the values of $\widehat{\mathrm{MSE}}$ and $R^2$ are reported, where $R^2 = 1 - \frac{RSS}{S_{YY}}$ is the coefficient of determination of the model, $RSS = \sum_{i=1}^{n}(y_i - \hat y_i)^2$ is the residual sum of squares, and $\hat y_i = \mathbf{x}_i^\top\hat{\boldsymbol{\beta}} + \hat f(t_i)$; both were calculated for the eight suggested approaches. For the estimation of the nonparametric effect, we first estimated the parametric effects using one of the proposed methods and then applied a kernel smoother to fit $SP_i - \mathbf{x}_i^\top\hat{\boldsymbol{\beta}}$ onto $ANI_i$, $i = 1, \ldots, n$, for all proposed linear estimators, where $\mathbf{x}_i = (LT_i, SFH_i, FP_i, DHW_i, GAR_i)^\top$.
As can be seen from Table 15 and Figure 5, because of the multicollinearity between the explanatory variables, the LTS–Liu and Liu estimators perform better than the non-Liu ones in both the parametric and nonparametric fittings. Furthermore, since the real data set contains some outliers, detected in Figure 3, the robust estimators outperform the non-robust ones in terms of the goodness-of-fit of their models. Hence, efficient robust Liu estimation strategies are required for modeling such data. From Table 15 and Figure 5, it can be concluded that the GERLTSRLE performs better than the other methods, offering the smallest $\widehat{\mathrm{MSE}}$ and largest $R^2$ values (bold values) in the presence of both multicollinearity and outliers, while the non-Liu and non-robust estimators perform quite poorly.

7. Conclusions

In this paper, we suggested Liu- and non-Liu-type generalized restricted robust estimators for a partially linear model with dependent errors and additional linear constraints on the parameter space. In the presence of multicollinearity and outliers, an extended robust Liu estimator was introduced based on semidefinite programming to improve the classical least trimmed squares approach in the partially linear model. We also proposed robust estimators of the biasing parameter of the Liu estimator, which plays an important role in model prediction. In both the simulation studies and a real-life example, the Liu estimators (robust and non-robust) performed better than the non-Liu ones in both parametric and nonparametric fittings under intense multicollinearity. Moreover, since the real data set contains outliers, the robust estimators were more efficient than the non-robust ones in model fitting. The GERLTSRLE outperformed the others in terms of the mean squared error and $R^2$ criteria, making it the most reliable method; it predicts the dependent variable of restricted PLMs well without being affected by the corruptive impacts of multicollinearity and outlying observations. As mentioned earlier, the proposed estimators of the Liu parameter $d$ are not necessarily optimal, so a good topic for future research is the derivation of alternative estimators of this parameter based on other suitable criteria, such as cross-validation [47], instead of the mean squared error criterion.

Author Contributions

Conceptualization, M.R. and N.A.M.; methodology, M.R. and W.B.A.; software, M.R. and W.B.A.; validation, W.B.A., M.R. and N.A.M.; formal analysis, W.B.A., M.R. and N.A.M.; investigation, W.B.A., M.R. and N.A.M.; resources, W.B.A., M.R. and N.A.M.; data curation, W.B.A., M.R. and N.A.M.; writing—original draft preparation, W.B.A. and M.R.; writing—review and editing, M.R., W.B.A. and N.A.M.; visualization, W.B.A., M.R. and N.A.M.; supervision, N.A.M.; project administration, M.R.; funding acquisition, W.B.A. and N.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The second and third authors would like to thank the Ministry of Higher Education Malaysia for their support in funding this research through the Fundamental Research Grant Scheme (Project No.: FP072-2023) awarded to Nur Anisah Mohamed and Mahdi Roozbeh.

Data Availability Statement

The data are available upon request from the corresponding authors.

Acknowledgments

The authors would like to thank the three anonymous reviewers and assigned editor for their valuable comments and corrections to an earlier version of this paper, which significantly improved the quality of our work. The first author would like to thank the Deanship of Scientific Research at Shaqra University for supporting this work. The second author would like to thank the Research Council of Semnan University for its support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Engle, R.F.; Granger, C.W.J.; Rice, J.; Weiss, A. Semiparametric estimates of the relation between weather and electricity sales. J. Am. Stat. Assoc. 1986, 81, 310–320. [Google Scholar] [CrossRef]
  2. Eubank, R.L.; Kambour, E.L.; Kim, J.T.; Klipple, K.; Reese, C.S.; Schimek, M. Estimation in partially linear models. Comput. Stat. Data Anal. 1998, 29, 27–34. [Google Scholar] [CrossRef]
  3. Green, P.; Jennison, C.; Seheult, A. Analysis of field experiments by least squares smoothing. J. Roy. Statist. Soc. Ser. B 1985, 47, 299–315. [Google Scholar] [CrossRef]
  4. Speckman, P. Kernel smoothing in partial linear models. J. R. Stat. Soc. Ser. B 1988, 50, 413–436. [Google Scholar] [CrossRef]
  5. Eubank, R.L. Nonparametric Regression and Spline Smoothing; Marcel Dekker: New York, NY, USA, 1999. [Google Scholar]
  6. Ruppert, D.; Wand, M.P.; Carroll, R.C. Semiparametric Regression; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  7. Härdle, W.; Müller, M.; Sperlich, S.; Werwatz, A. Nonparametric and Semiparametric Models; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  8. Yatchew, A. Semiparametric Regression for the Applied Econometrician; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  9. Akdeniz, F.; Tabakan, G. Restricted ridge estimators of the parameters in semiparametric regression model. Commun. Stat. Theory Methods 2009, 38, 1852–1869. [Google Scholar]
  10. Akdeniz, D.E.; Hardle, W.K.; Osipenko, M. Difference based ridge and Liu type estimators in semiparametric regression models. J. Multivar. Anal. 2012, 105, 164–175. [Google Scholar] [CrossRef]
  11. Arashi, M.; Valizadeh, T. Performance of Kibria’s methods in partial linear ridge regression model. Stat. Pap. 2015, 56, 231–246. [Google Scholar]
  12. Liu, K.J. A new class of biased estimate in linear regression. Commun. Stat. Theo. Meth. 1993, 22, 393–402. [Google Scholar]
  13. Akram, M.N.; Kibria, B.M.G.; Arashi, M.; Lukman, A.F. A new improved Liu estimator for the QSAR model with inverse Gaussian response. Commun. Stat. Simul. Comput. 2024, 53, 1873–1888. [Google Scholar] [CrossRef]
  14. Kaçıranlar, S.; Ozbay, N.; Ozkan, E.; Guler, H. Comparison of Liu and two parameter principal component estimator to combat multicollinearity. Commun. Stat. Theo. Meth. 2022, 34, e6737. [Google Scholar] [CrossRef]
  15. Arashi, M.; Lukman, A.F.; Algamal, Z.Y. Liu regression after random forest for prediction and modeling in high dimension. J. Chemom. 2022, 36, e3393. [Google Scholar] [CrossRef]
  16. Arashi, M.; Roozbeh, M.; Niroumand, H.A. A note on Stein-type shrinkage estimator in partial linear models. Statistics 2012, 46, 673–685. [Google Scholar]
  17. Roozbeh, M.; Hamzah, N.A. Uncertain stochastic ridge estimation in partially linear regression models with elliptically distributed errors. Statistics 2020, 54, 494–523. [Google Scholar] [CrossRef]
  18. Roozbeh, M.; Rouhi, A.; Mohamed, N.A.; Jahadi, F. Generalized support vector regression and symmetry functional regression approaches to model the high-dimensional data. Symmetry 2023, 15, 1262. [Google Scholar] [CrossRef]
  19. Roozbeh, M.; Babaie-Kafaki, S.; Aminifard, Z. A nonlinear mixed–integer programming approach for variable selection in linear regression model. Commun. Stat. Simul. Comput. 2023, 11, 5434–5445. [Google Scholar] [CrossRef]
  20. Rousseeuw, P.J. Least median of squares regression. J. Am. Stat. Assoc. 1984, 79, 871–880. [Google Scholar] [CrossRef]
  21. Rousseeuw, P.J.; Leroy, A.M. Robust Regression and Outlier Detection; John Wiley: New York, NY, USA, 1987. [Google Scholar]
  22. Rousseeuw, P.J.; van Driessen, K. Computing LTS regression for large data sets. Data Min. Knowl. Discov. 2006, 12, 29–45. [Google Scholar] [CrossRef]
  23. Arashi, M.; Kibria, B.M.G.; Valizadeh, T. On ridge parameter estimators under stochastic subspace hypothesis. J. Stat. Comput. Simul. 2017, 87, 966–983. [Google Scholar] [CrossRef]
  24. Karbalaee, M.H.; Arashi, M.; Tabatabaey, S.M.M. Performance analysis of the preliminary test estimator with series of stochastic restrictions. Commun. Stat. Theory Methods 2018, 47, 1–17. [Google Scholar] [CrossRef]
  25. Roozbeh, M.; Hesamian, G.; Akbari, M.G. Ridge estimation in semi-parametric regression models under the stochastic restriction and correlated elliptically contoured errors. J. Comput. Appl. Math. 2020, 378, 112940. [Google Scholar] [CrossRef]
  26. Fallah, R.; Arashi, M.; Tabatabaey, S.M.M. On the ridge regression estimator with sub-space restriction. Commun. Stat. Theory Methods 2017, 46, 11854–11865. [Google Scholar] [CrossRef]
  27. Fallah, R.; Arashi, M.; Tabatabaey, S.M.M. Shrinkage estimation in restricted elliptical regression model. J. Iran. Stat. Soc. 2018, 17, 49–61. [Google Scholar] [CrossRef]
  28. Toutenburg, H. Prior Information in Linear Models; John Wiley: New York, NY, USA, 1982. [Google Scholar]
  29. Durbin, J. A note on regression when there is extraneous information about one of the coefficients. J. Am. Stat. Assoc. 1953, 48, 799–808. [Google Scholar] [CrossRef]
  30. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for non-orthogonal problems. Technometrics 1970, 12, 69–82. [Google Scholar] [CrossRef]
  31. Akdeniz, F.; Roozbeh, M.; Akdeniz, E.; Khan, M.N. Generalized difference-based weighted mixed almost unbiased Liu estimator in semiparametric regression models. Commun. Stat. Theory Methods 2022, 51, 4395–4416. [Google Scholar] [CrossRef]
  32. Kibria, B.M.G. Some Liu and ridge type estimators and their properties under the ill- conditioned Gaussian linear regression model. J. Stat. Comput. Simul. 2012, 82, 1–17. [Google Scholar] [CrossRef]
  33. Månsson, K.; Kibria, B.M.G.; Shukur, G. A restricted Liu estimator for binary regression models and its application to an applied demand system. J. Appl. Stat. 2016, 43, 1119–1127. [Google Scholar] [CrossRef]
  34. Månsson, K.; Kibria, B.M.G. Estimating the unrestricted and restricted Liu estimators for the Poisson regression model: Method and application. Comput. Econ. 2021, 58, 311–326. [Google Scholar] [CrossRef]
  35. Rousseeuw, P.J. Multivariate estimation with high breakdown point. Math. Stat. Appl. 1985, 8, 283–297. [Google Scholar]
  36. Alfons, A.; Croux, C.; Gelper, S. Sparse least trimmed squares regression for analyzing high-dimensional large data sets. Ann. Appl. Stat. 2013, 7, 226–248. [Google Scholar] [CrossRef]
  37. Nguyen, T.D.; Welsch, R. Outlier detection and least trimmed squares approximation using semi-definite programming. Comput. Stat. Data Anal. 2010, 54, 3212–3226. [Google Scholar] [CrossRef]
  38. Roozbeh, M.; Babaie-Kafaki, S. Extended least trimmed squares estimator in semiparametric regression models with correlated errors. J. Stat. Comput. Simul. 2016, 86, 357–372. [Google Scholar] [CrossRef]
  39. Kan, B.; Alpu, O.; Yazici, B. Robust ridge and robust Liu estimator for regression based on the LTS estimator. J. Appl. Stat. 2013, 40, 644–655. [Google Scholar] [CrossRef]
  40. Zellner, A. An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. J. Am. Stat. Assoc. 1962, 57, 348–368. [Google Scholar] [CrossRef]
  41. Taavoni, M.; Arashi, M. Kernel estimation in semiparametric mixed effect longitudinal modeling. Stat. Pap. 2021, 62, 1095–1116. [Google Scholar]
  42. McDonald, G.C.; Galarneau, D.I. A Monte Carlo evaluation of some ridge-type estimators. J. Am. Stat. Assoc. 1975, 70, 407–416. [Google Scholar] [CrossRef]
  43. Gibbons, D.G. A simulation study of some ridge estimators. J. Am. Stat. Assoc. 1981, 76, 131–139. [Google Scholar] [CrossRef]
  44. Priestley, M.B.; Chao, M.T. Non-Parametric Function Fitting. J. R. Stat. Soc. Ser. B 1972, 34, 385–392. [Google Scholar] [CrossRef]
  45. Ho, M. Essays on the Housing Market. Ph.D. Dissertation, University of Toronto, Toronto, ON, Canada, 1995. [Google Scholar]
  46. Sheather, S.J. A Modern Approach to Regression with R; Springer: New York, NY, USA, 2009. [Google Scholar]
  47. Roozbeh, M.; Najarian, M. Efficiency of the QR class estimator in semiparametric regression models to combat multicollinearity. J. Stat. Comput. Simul. 2018, 88, 1804–1825. [Google Scholar] [CrossRef]
Figure 1. The non-linear function of the simulated model.
Figure 2. Kernel prediction of the function under study for n = 150 and γ = 0.95. Non-Liu and Liu estimators are plotted for a PCDO = 25% (low), PCDO = 33% (middle) and PCDO = 50% (high).
Figure 3. Added-variable plots of individual explanatory variables vs. dependent variable, using linear fit (blue solid line) and kernel fit (red dashed line). “*” symbol shows the outlier points for the linear scheme.
Figure 4. Visualization of the correlation plots of the explanatory variables in the real data set. Significance levels are indicated by the usual symbols: 0 "***" 0.001 "**" 0.01 "*" 0.05 "." 0.1 " " 1.
Figure 5. Estimations of the nonparametric part of model (31).
Table 1. Mean squared error estimations of the proposed estimators for the linear part of the simulated data sets, with n = 150. The first four value columns correspond to PCDO = 25%, the next four to PCDO = 33%, and the last four to PCDO = 50%.

| Estimator | Stat. | γ=0.25 | γ=0.50 | γ=0.75 | γ=0.95 | γ=0.25 | γ=0.50 | γ=0.75 | γ=0.95 | γ=0.25 | γ=0.50 | γ=0.75 | γ=0.95 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GLSRE | min | 1.0300 | 1.4501 | 1.9507 | 8.4008 | 1.5712 | 4.9800 | 4.4346 | 10.0963 | 2.76 | 5.8985 | 8.4886 | 14.5415 |
| | max | 3.0344 | 3.7254 | 7.1833 | 38.9501 | 4.1060 | 5.6992 | 6.3436 | 23.1795 | 4.8609 | 12.5527 | 11.9615 | 39.6976 |
| | mean | 1.2606 | 2.2249 | 4.357 | 16.5846 | 1.9013 | 3.9258 | 5.4962 | 19.9989 | 3.2705 | 9.325 | 10.5542 | 26.3929 |
| | median | 1.1228 | 1.9902 | 4.1577 | 15.6974 | 1.8818 | 3.14 | 5.2324 | 18.9837 | 3.1247 | 8.8504 | 10.2531 | 25.1407 |
| | S.D. | 1.3633 | 2.9277 | 2.5325 | 8.5307 | 2.2494 | 2.4276 | 3.7169 | 9.9713 | 6.9775 | 6.4574 | 5.7781 | 12.4557 |
| GLTSRE | min | 7.00 × 10⁻² | 1.1141 | 1.1376 | 5.8009 | 0.0937 | 1.65 | 2.3008 | 9.4550 | 0.075 | 1.0046 | 2.7407 | 12.98 |
| | max | 2.0108 | 3.1206 | 5.8441 | 17.492 | 1.4185 | 4.0889 | 4.9014 | 18.203 | 3.8996 | 4.8417 | 6.2896 | 28.688 |
| | mean | 0.1579 | 1.9061 | 3.1641 | 13.6686 | 0.6468 | 3.1952 | 3.2871 | 12.0766 | 0.2389 | 2.2825 | 3.4733 | 15.1121 |
| | median | 0.0644 | 1.8409 | 3.0672 | 12.2794 | 0.4776 | 3.0808 | 3.1337 | 11.8957 | 0.1002 | 2.1237 | 3.2014 | 14.8865 |
| | S.D. | 0.6358 | 2.9805 | 2.2827 | 7.0157 | 1.1846 | 3.0649 | 3.4142 | 8.6359 | 1.3573 | 4.4232 | 3.6887 | 8.9685 |
| GRLTSRE | min | 1.68 × 10⁻² | 1.0801 | 1.0411 | 5.9207 | 0.0924 | 1.4001 | 2.4216 | 9.2008 | 0.0478 | 1.0009 | 2.2008 | 12.975 |
| | max | 2.4459 | 2.3196 | 5.78 | 17.0979 | 1.0422 | 4.0354 | 5.0202 | 19.7405 | 2.3377 | 3.901 | 4.9601 | 29.8601 |
| | mean | 0.1909 | 1.6353 | 3.2186 | 13.8984 | 0.9112 | 3.237 | 3.383 | 12.5147 | 0.2552 | 2.313 | 3.5289 | 15.3241 |
| | median | 0.0814 | 1.5541 | 3.089 | 12.3911 | 0.3803 | 3.1006 | 3.1606 | 11.9148 | 0.1238 | 2.142 | 3.2449 | 14.9558 |
| | S.D. | 0.6859 | 2.7016 | 2.3204 | 7.3535 | 1.352 | 2.3558 | 3.5929 | 8.3451 | 1.347 | 4.532 | 3.7338 | 8.2607 |
| GERLTSRE | min | 0.168 | 1.0601 | 1.0009 | 5.3707 | 0.0468 | 1.6021 | 2.2019 | 9.5476 | 0.0409 | 1.0537 | 2.8286 | 11.9307 |
| | max | 0.0675 | 2.2113 | 5.6578 | 17.7935 | 1.3463 | 4.0402 | 4.3671 | 22.228 | 2.7772 | 3.4363 | 5.7513 | 27.8879 |
| | mean | 0.652 | 1.601 | 3.1542 | 13.6677 | 0.6243 | 3.2104 | 3.2984 | 12.1435 | 0.2313 | 2.2695 | 3.4561 | 15.1132 |
| | median | 1.21 × 10⁻² | 1.5381 | 3.0651 | 11.9635 | 0.335 | 3.0759 | 3.1266 | 11.4454 | 0.105 | 2.1204 | 3.201 | 14.0838 |
| | S.D. | 1.8293 | 2.6617 | 2.2433 | 7.9852 | 1.0163 | 2.2913 | 3.4852 | 9.8424 | 1.3169 | 4.3761 | 3.6389 | 8.8768 |
| GLSRLE | min | 4.10 × 10⁻¹ | 0.0491 | 0.0858 | 2.5456 | 1.068 | 2.0301 | 2.7207 | 4.1117 | 2.572 | 3.1352 | 3.2247 | 13.325 |
| | max | 1.8972 | 2.8717 | 3.1734 | 7.6356 | 2.9482 | 4.8827 | 5.5675 | 12.1979 | 4.9784 | 7.1075 | 7.2553 | 24.1935 |
| | mean | 0.2437 | 0.6099 | 1.3926 | 3.8501 | 1.7648 | 2.9027 | 3.5436 | 6.2565 | 3.1045 | 4.333 | 7.7663 | 17.4571 |
| | median | 0.1147 | 0.6019 | 0.1821 | 3.9388 | 0.6456 | 2.7531 | 3.2622 | 6.1038 | 3.0753 | 4.2515 | 7.2732 | 17.155 |
| | S.D. | 0.3457 | 1.3459 | 2.5714 | 4.719 | 2.4499 | 1.4511 | 2.7632 | 3.1329 | 6.8705 | 5.4671 | 3.7902 | 7.3125 |
| GLTSRLE | min | 4.33 × 10⁻³ | 0.0045 | 0.0037 | 0.0604 | 0.0345 | 0.0141 | 0.0109 | 0.6054 | 0.0017 | 0.157 | 0.0048 | 1.137 |
| | max | 1.9089 | 2.2738 | 1.1269 | 4.6501 | 0.9583 | 1.2217 | 1.3226 | 6.8717 | 1.6645 | 0.8509 | 0.8867 | 12.1068 |
| | mean | 0.1461 | 0.0992 | 0.1808 | 0.807 | 0.9593 | 0.2282 | 0.375 | 1.3672 | 0.2407 | 0.2941 | 0.4949 | 6.0419 |
| | median | 0.0581 | 0.0434 | 0.0745 | 0.3423 | 0.3887 | 0.083 | 0.166 | 0.9254 | 0.1183 | 0.3212 | 0.2139 | 5.8904 |
| | S.D. | 0.2205 | 1.1919 | 2.3091 | 3.1927 | 1.3874 | 1.2861 | 2.46 | 2.9477 | 1.3493 | 1.9637 | 2.7041 | 3.9317 |
| GRLTSRLE | min | 4.68 × 10⁻³ | 0.0038 | 0.0057 | 0.0564 | 0.0143 | 0.0181 | 0.0079 | 0.5208 | 0.0178 | 0.0908 | 0.0028 | 1.4756 |
| | max | 2.0459 | 2.3196 | 0.78 | 5.0979 | 0.9422 | 2.0354 | 1.0202 | 6.7405 | 1.3377 | 0.851 | 0.7601 | 12.4909 |
| | mean | 0.19 | 0.13 | 0.23 | 0.9417 | 0.8516 | 0.2361 | 0.3832 | 1.4642 | 0.2553 | 0.3128 | 0.5285 | 7.3356 |
| | median | 0.0797 | 0.0585 | 0.1052 | 0.4345 | 0.2379 | 0.1001 | 0.1633 | 0.997 | 0.1238 | 0.142 | 0.2439 | 6.0558 |
| | S.D. | 0.2856 | 1.205 | 2.3272 | 3.3762 | 1.3412 | 1.3557 | 2.5913 | 3.2963 | 1.3472 | 1.1317 | 2.7331 | 4.2272 |
| GERLTSRLE | min | 4.20 × 10⁻³ | 0.0034 | 0.0028 | 0.0155 | 0.01 | 0.0042 | 0.0067 | 0.605 | 0.0005 | 0.0309 | 0.0035 | 1.0607 |
| | max | 1.7169 | 2.3475 | 0.7972 | 4.4159 | 0.9508 | 1.2151 | 1.1933 | 7.082 | 1.4948 | 0.7753 | 0.726 | 10.3495 |
| | mean | 0.1513 | 0.0918 | 0.1475 | 0.7416 | 0.7909 | 0.177 | 0.3214 | 1.264 | 0.2204 | 0.2847 | 0.4853 | 5.9774 |
| | median | 0.0583 | 0.0405 | 0.063 | 0.3076 | 0.244 | 0.0769 | 0.1461 | 0.8669 | 0.1021 | 0.124 | 0.2089 | 4.9675 |
| | S.D. | 0.2326 | 1.1759 | 2.2753 | 3.2737 | 1.2888 | 1.3181 | 2.5431 | 3.219 | 1.3065 | 1.3905 | 2.6622 | 3.898 |
Table 2. Mean squared error estimations of the proposed estimators for the non-linear part of the simulated data sets with n = 150. Columns give γ = 0.25, 0.50, 0.75, 0.95 within PCDO = 25%, 33%, and 50%.

| Estimator | Statistic | 0.25 | 0.50 | 0.75 | 0.95 | 0.25 | 0.50 | 0.75 | 0.95 | 0.25 | 0.50 | 0.75 | 0.95 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GLSRE | min | 0.0374 | 0.3994 | 3.0404 | 5.0481 | 0.0449 | 0.8437 | 3.0321 | 6.0326 | 0.9591 | 1.0215 | 5.5270 | 10.11928 |
| | max | 6.2170 | 4.8515 | 5.4275 | 10.3723 | 2.7879 | 5.8039 | 4.4065 | 13.1375 | 9.6363 | 9.6595 | 7.6221 | 28.3078 |
| | mean | 0.6183 | 1.4321 | 3.4064 | 7.3234 | 0.3497 | 1.7602 | 3.5049 | 11.3692 | 1.6280 | 2.5842 | 5.5436 | 18.4228 |
| | median | 0.3508 | 1.2565 | 3.2481 | 7.2132 | 0.2345 | 1.3185 | 3.2925 | 11.2302 | 1.3582 | 2.3344 | 6.3158 | 16.2578 |
| | S.D. | 0.7293 | 1.9897 | 3.4365 | 3.3594 | 0.3291 | 2.6560 | 2.5909 | 4.4043 | 3.9471 | 5.6909 | 3.6291 | 9.4677 |
| GLTSRE | min | 0.0287 | 0.4223 | 2.0463 | 4.0252 | 0.0343 | 0.6355 | 2.0375 | 3.0287 | 0.0215 | 0.6205 | 3.5196 | 7.0279 |
| | max | 4.0802 | 4.8972 | 4.5401 | 9.8426 | 2.5455 | 3.7042 | 2.6508 | 8.7396 | 6.3417 | 5.3711 | 5.4789 | 23.3797 |
| | mean | 0.4200 | 1.2698 | 2.2545 | 5.2060 | 0.2251 | 1.6775 | 2.3414 | 6.2519 | 0.5627 | 1.5171 | 3.4784 | 12.3705 |
| | median | 0.2451 | 1.1965 | 2.1852 | 4.9608 | 0.1736 | 1.2286 | 2.2140 | 6.1749 | 0.2998 | 1.2809 | 4.2729 | 11.2342 |
| | S.D. | 0.4864 | 2.7594 | 3.2296 | 3.1450 | 0.1909 | 2.4193 | 2.3521 | 4.2299 | 0.7026 | 1.6265 | 2.5539 | 7.3965 |
| GRLTSRE | min | 0.0240 | 0.3670 | 2.0519 | 4.0360 | 0.0332 | 0.5453 | 2.0242 | 3.0242 | 0.0306 | 0.6308 | 3.5312 | 7.0221 |
| | max | 5.4595 | 4.1415 | 4.9134 | 9.7839 | 2.5251 | 5.1927 | 2.5555 | 8.0322 | 5.5357 | 4.1633 | 5.4292 | 23.3308 |
| | mean | 0.4834 | 1.3173 | 2.2997 | 5.2344 | 0.2503 | 1.7619 | 2.4152 | 6.3061 | 0.6057 | 1.5726 | 3.5296 | 12.4126 |
| | median | 0.2703 | 1.2140 | 2.2041 | 4.9715 | 0.1908 | 1.2534 | 2.2389 | 6.1972 | 0.3364 | 1.3213 | 4.3018 | 11.2465 |
| | S.D. | 0.5791 | 2.8124 | 3.2748 | 3.1874 | 0.1944 | 2.5513 | 2.4929 | 4.3212 | 0.6974 | 1.6619 | 2.6000 | 7.4440 |
| GERLTSRE | min | 0.0334 | 0.3976 | 2.0415 | 4.0255 | 0.0471 | 0.6224 | 2.0297 | 3.0262 | 0.8227 | 0.6209 | 3.5229 | 7.0202 |
| | max | 3.8208 | 4.9542 | 4.6089 | 9.7576 | 2.5110 | 4.4394 | 2.6821 | 8.9765 | 4.5546 | 3.5910 | 5.1143 | 23.1862 |
| | mean | 0.4353 | 1.2627 | 2.2505 | 5.2062 | 0.2374 | 1.5857 | 2.3452 | 6.2557 | 0.5574 | 1.5074 | 3.4712 | 12.3650 |
| | median | 0.2510 | 1.1923 | 2.1858 | 4.9607 | 0.1768 | 1.2293 | 2.2099 | 5.1798 | 0.3178 | 1.2860 | 4.2724 | 11.2312 |
| | S.D. | 0.4995 | 2.7534 | 3.2167 | 3.1454 | 0.1903 | 2.4407 | 2.3909 | 4.2392 | 0.6332 | 2.5743 | 2.5192 | 7.3882 |
| GLSRLE | min | 0.0281 | 0.0475 | 0.2469 | 1.0435 | 0.03421 | 0.5787 | 2.0245 | 2.9891 | 0.9282 | 0.9250 | 5.1206 | 8.0220 |
| | max | 5.9483 | 3.0696 | 3.9557 | 3.4799 | 2.5126 | 6.0783 | 2.5968 | 6.2644 | 9.0662 | 7.4303 | 6.9023 | 21.3871 |
| | mean | 0.5849 | 0.4537 | 0.9337 | 1.3566 | 0.3184 | 1.5949 | 2.5431 | 4.4030 | 1.6164 | 2.2959 | 5.1532 | 14.4144 |
| | median | 0.3256 | 0.2654 | 0.7635 | 1.1312 | 0.2174 | 1.3489 | 2.3237 | 4.2665 | 1.3497 | 2.0034 | 5.9194 | 14.2567 |
| | S.D. | 0.6946 | 0.5153 | 2.4658 | 2.3829 | 0.3070 | 2.6909 | 1.9268 | 4.4213 | 3.7313 | 2.7059 | 1.6403 | 6.4495 |
| GLTSRLE | min | 0.0244 | 0.0450 | 0.0470 | 0.0259 | 0.0325 | 0.0369 | 0.0385 | 0.0340 | 0.0207 | 0.0253 | 0.0270 | 1.1271 |
| | max | 3.8884 | 3.0961 | 0.7341 | 1.0003 | 2.3036 | 3.9890 | 0.9417 | 1.6739 | 6.7319 | 1.9403 | 1.0525 | 5.0367 |
| | mean | 0.3968 | 0.2801 | 0.2675 | 0.7239 | 0.2173 | 0.4027 | 0.3713 | 0.5898 | 0.5475 | 0.5839 | 0.5652 | 3.3780 |
| | median | 0.2340 | 0.1993 | 0.1929 | 0.5678 | 0.1742 | 0.2428 | 0.2340 | 0.3023 | 0.3232 | 0.3432 | 0.3836 | 3.2335 |
| | S.D. | 0.4563 | 0.2756 | 2.2493 | 2.1681 | 0.1671 | 1.4512 | 1.3876 | 3.2692 | 0.6847 | 0.6430 | 1.5663 | 4.3896 |
| GRLTSRLE | min | 0.0243 | 0.0387 | 0.0477 | 0.0360 | 0.0332 | 0.0248 | 0.0242 | 0.0242 | 0.0306 | 0.0308 | 0.0312 | 1.1252 |
| | max | 5.4595 | 2.7415 | 0.9134 | 0.9839 | 2.5251 | 3.1927 | 0.5555 | 1.5322 | 5.5357 | 1.1633 | 1.0092 | 5.3308 |
| | mean | 0.4815 | 0.3250 | 0.3085 | 0.7399 | 0.2446 | 0.4634 | 0.4155 | 0.5997 | 0.6058 | 0.5724 | 0.5293 | 3.4111 |
| | median | 0.2697 | 0.2193 | 0.2117 | 0.5791 | 0.1790 | 0.2594 | 0.2417 | 0.2933 | 0.3370 | 0.3213 | 0.3018 | 3.2465 |
| | S.D. | 0.5784 | 0.3178 | 2.2802 | 2.1918 | 0.1959 | 1.5513 | 1.4915 | 3.3142 | 0.6976 | 0.6615 | 0.5996 | 4.4402 |
| GERLTSRLE | min | 0.0319 | 0.0412 | 0.0420 | 0.0260 | 0.0333 | 0.0316 | 0.0305 | 0.0340 | 0.0199 | 0.0214 | 0.0226 | 1.1241 |
| | max | 3.6001 | 2.7898 | 0.8330 | 0.9889 | 2.2619 | 3.6611 | 0.9160 | 1.1491 | 4.8652 | 1.0579 | 0.9669 | 4.9801 |
| | mean | 0.4030 | 0.2760 | 0.2616 | 0.7361 | 0.2094 | 0.4001 | 0.2877 | 0.5106 | 0.5357 | 0.5301 | 0.4937 | 3.3124 |
| | median | 0.2375 | 0.1943 | 0.1927 | 0.5737 | 0.1663 | 0.2142 | 0.2022 | 0.2926 | 0.3028 | 0.3061 | 0.2925 | 3.1817 |
| | S.D. | 0.4617 | 0.2738 | 2.2417 | 2.1835 | 0.1577 | 2.4814 | 1.4375 | 3.2898 | 0.6099 | 0.5975 | 0.5406 | 4.3932 |
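Each cell of Tables 1 and 2 summarizes the per-replication mean squared errors of one estimator at one (γ, PCDO) combination through five statistics: min, max, mean, median, and S.D. The following self-contained Python sketch illustrates only this bookkeeping; the replication count, the Gaussian design, the OLS stand-in fit, and the placeholder coefficient vector (−1, 4, 2, −5, −3) suggested by Tables 3–14 are illustrative assumptions, not the authors' simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_summary(values):
    """Min/max/mean/median/S.D. of per-replication MSEs, as reported in Tables 1 and 2."""
    v = np.asarray(values, dtype=float)
    return {"min": v.min(), "max": v.max(), "mean": v.mean(),
            "median": float(np.median(v)), "S.D.": v.std(ddof=1)}

beta_true = np.array([-1.0, 4.0, 2.0, -5.0, -3.0])   # placeholder coefficients
mses = []
for _ in range(500):                                 # replication count is illustrative
    X = rng.normal(size=(150, 5))                    # n = 150, as in the simulated data sets
    y = X @ beta_true + rng.normal(size=150)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS stands in for the eight estimators
    mses.append(float(np.mean((beta_hat - beta_true) ** 2)))
print(mse_summary(mses))
```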
Table 3. Evaluation of the parameters of the proposed estimation method when γ = 0.25 and PCDO = 25%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −1.0025 | −1.0022 | −1.0020 | −1.0023 | −1.0021 | −1.0019 | −1.0019 | −1.0019 |
| β̂2 | 3.9439 | 3.9519 | 3.9561 | 3.9489 | 3.9547 | 3.9586 | 3.9577 | 3.9581 |
| β̂3 | 1.8317 | 1.8558 | 1.8682 | 1.8466 | 1.8640 | 1.8757 | 1.8732 | 1.8744 |
| β̂4 | −4.8547 | −4.8754 | −4.8861 | −4.8675 | −4.8825 | −4.8926 | −4.8905 | −4.8916 |
| β̂5 | −2.9235 | −2.9344 | −2.9401 | −2.9303 | −2.9382 | −2.9435 | −2.9423 | −2.9429 |
| d̂ | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.0002 | 0.4316 | 0.1179 | 0.3106 |
| eᵀz | 150.00 | 116.00 | 108.3896 | 109.8749 | 150.00 | 116.00 | 108.3896 | 109.8749 |
Table 4. Evaluation of the parameters of the proposed estimation method when γ = 0.50 and PCDO = 25%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −1.1020 | −1.0916 | −1.0817 | −1.0816 | −1.0014 | −1.0014 | −1.0014 | −1.0013 |
| β̂2 | 3.9059 | 3.9052 | 3.9220 | 3.9244 | 3.9693 | 3.9700 | 3.9683 | 3.9714 |
| β̂3 | 1.8176 | 1.8957 | 1.8859 | 1.8932 | 1.9079 | 1.9101 | 1.9048 | 1.9141 |
| β̂4 | −4.8057 | −4.8599 | −4.8814 | −4.8877 | −4.9204 | −4.9224 | −4.9178 | −4.9258 |
| β̂5 | −2.8398 | −2.8526 | −2.8481 | −2.8514 | −2.9581 | −2.9591 | −2.9567 | −2.9610 |
| d̂ | 1.000 | 1.000 | 1.000 | 1.000 | 0.0078 | 0.6775 | 0.3618 | 0.5881 |
| eᵀz | 150 | 131 | 108.3894 | 110.1718 | 150 | 131 | 108.3894 | 110.1718 |
Table 5. Evaluation of the parameters of the proposed estimation method when γ = 0.75 and PCDO = 25%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −1.7530 | −1.4123 | −1.4125 | −1.4124 | −1.1520 | −1.0019 | −1.0020 | −1.0018 |
| β̂2 | 3.1342 | 3.5500 | 3.5456 | 3.5475 | 3.8571 | 3.9585 | 3.9555 | 3.9600 |
| β̂3 | 1.1027 | 1.3500 | 1.3369 | 1.3425 | 1.2712 | 1.8755 | 1.8665 | 1.8799 |
| β̂4 | −4.2296 | −4.6705 | −4.6592 | −4.6640 | −4.7888 | −4.8925 | −4.8847 | −4.8963 |
| β̂5 | −2.1103 | −2.5318 | −2.5259 | −2.5284 | −2.7415 | −2.9434 | −2.9393 | −2.9454 |
| d̂ | 1.000 | 1.000 | 1.000 | 1.000 | 0.0054 | 0.6639 | 0.3307 | 0.5648 |
| eᵀz | 150 | 128 | 108.392 | 110.461 | 150 | 128 | 108.392 | 110.461 |
Table 6. Evaluation of the parameters of the proposed estimation method when γ = 0.95 and PCDO = 25%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −2.5080 | −2.0052 | −2.0053 | −2.0061 | −1.2354 | −1.0036 | −1.0035 | −1.0036 |
| β̂2 | 2.8241 | 3.0851 | 3.0845 | 3.0656 | 3.8306 | 3.9207 | 3.9228 | 3.9203 |
| β̂3 | 0.4724 | 1.1554 | 1.1534 | 1.1967 | 1.3618 | 1.7622 | 1.7684 | 1.7610 |
| β̂4 | −3.7103 | −4.1024 | −4.1006 | −4.1517 | −4.7447 | −4.7946 | −4.8000 | −4.7936 |
| β̂5 | −2.0002 | −2.0134 | −2.0024 | −2.0167 | −2.7917 | −2.8919 | −2.8947 | −2.8914 |
| d̂ | 1.000 | 1.000 | 1.000 | 1.000 | 0.0052 | 0.6488 | 0.2912 | 0.5223 |
| eᵀz | 150 | 122 | 108.392 | 109.5919 | 150 | 122 | 108.392 | 109.5919 |
Table 7. Evaluation of the parameters of the proposed estimation method when γ = 0.25 and PCDO = 33%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −1.0035 | −1.0034 | −1.0038 | −1.0033 | −1.0079 | −1.0059 | −1.0055 | −1.0051 |
| β̂2 | 3.9222 | 3.9244 | 3.9168 | 3.9265 | 3.8259 | 3.8707 | 3.8789 | 3.8881 |
| β̂3 | 1.7667 | 1.7732 | 1.7504 | 1.7795 | 1.4776 | 1.6122 | 1.6366 | 1.6643 |
| β̂4 | −4.7985 | −4.8042 | −4.7844 | −4.8095 | −4.5488 | −4.6651 | −4.6862 | −4.7101 |
| β̂5 | −2.8939 | −2.8969 | −2.8865 | −2.8998 | −2.7625 | −2.8237 | −2.8348 | −2.8474 |
| d̂ | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.0067 | 0.6464 | 0.2873 | 0.5200 |
| eᵀz | 150 | 129 | 108.392 | 109.981 | 150 | 129 | 108.392 | 109.981 |
Table 8. Evaluation of the parameters of the proposed estimation method when γ = 0.50 and PCDO = 33%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −1.1029 | −1.1025 | −1.1022 | −1.1026 | −1.1023 | −1.0021 | −1.0021 | −1.0021 |
| β̂2 | 3.9065 | 3.9150 | 3.9212 | 3.9123 | 3.9199 | 3.9533 | 3.9533 | 3.9539 |
| β̂3 | 1.8096 | 1.8109 | 1.8237 | 1.8108 | 1.8298 | 1.8599 | 1.8599 | 1.8618 |
| β̂4 | −4.6355 | −4.7574 | −4.7737 | −4.7504 | −4.7703 | −4.8790 | −4.8790 | −4.8806 |
| β̂5 | −2.5134 | −2.7250 | −2.7335 | −2.7213 | −2.8017 | −2.9363 | −2.9363 | −2.9372 |
| d̂ | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.0002 | 0.4261 | 0.1148 | 0.3031 |
| eᵀz | 150 | 130 | 108.3896 | 109.8711 | 150 | 130 | 108.3896 | 109.8711 |
Table 9. Evaluation of the parameters of the proposed estimation method when γ = 0.75 and PCDO = 33%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −1.9438 | −1.4133 | −1.4127 | −1.4135 | −1.6228 | −1.0026 | −1.0026 | −1.0026 |
| β̂2 | 2.9157 | 3.6282 | 3.6400 | 3.6226 | 3.5385 | 3.9429 | 3.9429 | 3.9433 |
| β̂3 | 1.0472 | 1.3847 | 1.3200 | 1.3677 | 1.3156 | 1.8287 | 1.8286 | 1.8298 |
| β̂4 | −3.7816 | −4.7140 | −4.7446 | −4.7994 | −4.6407 | −4.8521 | −4.8520 | −4.8530 |
| β̂5 | −2.0051 | −2.6021 | −2.6182 | −2.6944 | −2.5162 | −2.9221 | −2.9221 | −2.9226 |
| d̂ | 1.000 | 1.000 | 1.000 | 1.000 | 0.0001 | 0.4026 | 0.0932 | 0.2683 |
| eᵀz | 150 | 130 | 108.3896 | 109.8729 | 150 | 130 | 108.3896 | 109.8729 |
Table 10. Evaluation of the parameters of the proposed estimation method when γ = 0.95 and PCDO = 33%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −2.8086 | −2.0067 | −2.0043 | −2.0079 | −1.7142 | −1.0038 | −1.0038 | −1.0038 |
| β̂2 | 2.2198 | 3.0528 | 3.0049 | 3.0262 | 3.5466 | 3.9168 | 3.9158 | 3.9171 |
| β̂3 | 0.4293 | 1.2685 | 1.2648 | 1.2186 | 1.1197 | 1.7505 | 1.7474 | 1.7512 |
| β̂4 | −3.5071 | −4.0187 | −4.1537 | −4.0497 | −4.3579 | −4.7845 | −4.7819 | −4.7851 |
| β̂5 | −1.7406 | −2.1003 | −2.1104 | −2.1630 | −2.5726 | −2.8866 | −2.8852 | −2.8869 |
| d̂ | 1.000 | 1.000 | 1.000 | 1.000 | 0.0008 | 0.3654 | 0.0722 | 0.2141 |
| eᵀz | 150 | 118 | 108.3896 | 109.8748 | 150 | 118 | 108.3896 | 109.8748 |
Table 11. Evaluation of the parameters of the proposed estimation method when γ = 0.25 and PCDO = 50%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −1.8514 | −1.0015 | −1.0010 | −1.0016 | −1.8709 | −1.0010 | −1.0010 | −1.0010 |
| β̂2 | 3.9029 | 3.9678 | 3.9783 | 3.9647 | 3.8807 | 3.9790 | 3.9783 | 3.9781 |
| β̂3 | 1.6097 | 1.9034 | 1.9350 | 1.8940 | 1.7422 | 1.9371 | 1.9350 | 1.9343 |
| β̂4 | −4.1220 | −4.9166 | −4.9439 | −4.9085 | −4.2501 | −4.9457 | −4.9439 | −4.9432 |
| β̂5 | −2.4590 | −2.9561 | −2.9705 | −2.9518 | −2.4737 | −2.9714 | −2.9705 | −2.9701 |
| d̂ | 1.000 | 1.000 | 1.000 | 1.000 | 0.0013 | 0.0217 | 0.0009 | 0.0121 |
| eᵀz | 150 | 127 | 108.3898 | 109.5876 | 150 | 127 | 108.3898 | 109.5876 |
Table 12. Evaluation of the parameters of the proposed estimation method when γ = 0.50 and PCDO = 50%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −1.9411 | −1.3117 | −1.3111 | −1.3119 | −1.6436 | −1.0011 | −1.0011 | −1.0010 |
| β̂2 | 3.1654 | 3.4623 | 3.4761 | 3.4590 | 3.2789 | 3.9763 | 3.9761 | 3.9758 |
| β̂3 | 0.8961 | 1.5868 | 1.6282 | 1.6771 | 1.0367 | 1.9289 | 1.9283 | 1.9274 |
| β̂4 | −3.7103 | −4.7022 | −4.7380 | −4.7939 | −4.0454 | −4.9386 | −4.9380 | −4.9373 |
| β̂5 | −2.0528 | −2.3485 | −2.4674 | −2.4642 | −2.1512 | −2.9677 | −2.9674 | −2.9670 |
| d̂ | 1.000 | 1.000 | 1.000 | 1.000 | 0.0250 | 0.0203 | 0.0011 | 0.0122 |
| eᵀz | 150 | 123 | 108.3898 | 109.5875 | 150 | 123 | 108.3898 | 109.5875 |
Table 13. Evaluation of the parameters of the proposed estimation method when γ = 0.75 and PCDO = 50%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −1.0021 | −1.0023 | −1.0012 | −1.0026 | −1.0011 | −1.0012 | −1.0012 | −1.0013 |
| β̂2 | 3.9535 | 3.9485 | 3.9731 | 3.9435 | 3.9767 | 3.9726 | 3.9732 | 3.9721 |
| β̂3 | 1.8606 | 1.8456 | 1.9194 | 1.8306 | 1.9300 | 1.9178 | 1.9195 | 1.9164 |
| β̂4 | −4.8796 | −4.8667 | −4.9304 | −4.8537 | −4.9395 | −4.9290 | −4.9304 | −4.9278 |
| β̂5 | −2.9366 | −2.9298 | −2.9634 | −2.9230 | −2.9682 | −2.9626 | −2.9634 | −2.9620 |
| d̂ | 1.000 | 1.000 | 1.000 | 1.000 | 0.0018 | 0.0168 | 0.0008 | 0.0089 |
| eᵀz | 150 | 125 | 108.3898 | 109.5875 | 150 | 125 | 108.3898 | 109.5875 |
Table 14. Evaluation of the parameters of the proposed estimation method when γ = 0.95 and PCDO = 50%.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| β̂1 | −3.1056 | −2.1161 | −2.1115 | −2.1171 | −2.6811 | −1.0016 | −1.0015 | −1.0014 |
| β̂2 | 3.0766 | 3.1668 | 3.1671 | 3.1436 | 3.0995 | 3.9690 | 3.9676 | 3.9692 |
| β̂3 | 0.3197 | 1.1703 | 1.1701 | 1.1708 | 1.0297 | 1.9072 | 1.9028 | 1.9085 |
| β̂4 | −2.6802 | −3.6548 | −3.9149 | −3.8948 | −3.1793 | −4.9165 | −4.9161 | −4.9168 |
| β̂5 | −1.1117 | −2.0683 | −2.0552 | −2.0867 | −1.9681 | −2.9567 | −2.9558 | −2.9577 |
| d̂ | 1.000 | 1.000 | 1.000 | 1.000 | 0.0003 | 0.0205 | 0.0022 | 0.0122 |
| eᵀz | 150 | 127 | 108.3898 | 109.5876 | 150 | 127 | 108.3898 | 109.5876 |
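Across Tables 3–14, the d̂ row is identically 1 for the four restricted estimators and lies strictly between 0 and 1 for the four Liu-type estimators; this matches the defining property of the Liu estimator, which collapses to its least-squares counterpart at d = 1. As a point of reference, here is a minimal sketch of the classical unrestricted Liu estimator (Liu, 1993), not the generalized restricted robust versions developed in this paper:

```python
import numpy as np

def liu_estimator(X, y, d):
    """Classical Liu estimator: (X'X + I)^(-1) (X'X + d I) beta_OLS."""
    S = X.T @ X
    I = np.eye(S.shape[0])
    beta_ols = np.linalg.solve(S, X.T @ y)
    return np.linalg.solve(S + I, (S + d * I) @ beta_ols)

# Sanity check: at d = 1 the Liu estimator reduces to OLS, which is why the
# non-Liu columns of Tables 3-14 report d-hat = 1. The design below is synthetic.
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 5))
y = X @ np.array([-1.0, 4.0, 2.0, -5.0, -3.0]) + rng.normal(size=150)
print(np.allclose(liu_estimator(X, y, d=1.0),
                  np.linalg.solve(X.T @ X, X.T @ y)))   # True
```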
Table 15. The evaluation of the proposed estimators using the hedonic prices of house attributes.

| Coefficient | GLSRE | GLTSRE | GRLTSRE | GERLTSRE | GLSRLE | GLTSRLE | GRLTSRLE | GERLTSRLE |
|---|---|---|---|---|---|---|---|---|
| LT | 0.7018 | 1.0509 | 0.1522 | 0.1507 | 0.8514 | 1.1235 | 0.2569 | 0.2550 |
| SFH | 46.7515 | 33.5686 | 30.9487 | 30.2085 | 38.9154 | 26.3747 | 25.0010 | 24.1247 |
| FP | 3.9311 | 2.5740 | 2.7443 | 2.6777 | 3.5210 | 1.9568 | 2.1235 | 2.0474 |
| DHW | −1.6147 | −0.7616 | −1.2961 | −1.2635 | −0.9952 | −0.4125 | −0.9389 | −0.9541 |
| GAR | 6.2476 | 4.3865 | 4.1926 | 4.0919 | 5.2015 | 2.9958 | 3.3021 | 3.2354 |
| eᵀ(z ∨ z* ∨ z**) | 92.0000 | 86.0000 | 71.7541 | 74.1542 | 92.0000 | 86.0000 | 71.7541 | 74.1542 |
| d̂ | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.1542 | 0.6741 | 0.6857 | 0.7154 |
| M̂SE | 926.80 | 456.41 | 386.63 | 301.63 | 809.59 | 335.17 | 279.25 | 198.09 |
| R² | 0.2346 | 0.6156 | 0.6956 | 0.7509 | 0.3354 | 0.7325 | 0.7801 | 0.8409 |
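The last two rows of Table 15 rank the methods by the estimated mean squared error and the coefficient of determination R² of the fitted models. A generic sketch of these two criteria follows; the optional weights argument is a hypothetical hook for excluding trimmed observations and is not taken from the paper.

```python
import numpy as np

def fit_metrics(y, y_hat, weights=None):
    """Weighted MSE and R^2 of fitted values (unit weights give the usual formulas)."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, dtype=float)
    resid = y - y_hat
    sse = np.sum(w * resid ** 2)
    sst = np.sum(w * (y - np.average(y, weights=w)) ** 2)
    return sse / w.sum(), 1.0 - sse / sst

# Toy usage with synthetic fitted values:
mse, r2 = fit_metrics([3.0, 5.0, 7.0, 11.0], [2.8, 5.3, 6.9, 10.5])
print(round(mse, 4), round(r2, 4))   # 0.0975 0.9889
```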