Article

Enhanced Model Predictions through Principal Components and Average Least Squares-Centered Penalized Regression

1 Department of Mathematics, University of North Dakota, Grand Forks, ND 58202, USA
2 Department of Statistics, Ladoke Akintola University of Technology, Ogbomoso 212102, Nigeria
3 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
4 Department of Statistics, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
5 Department of Mathematics and Statistics, Northwest Missouri State University, Maryville, MO 64468, USA
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(4), 469; https://doi.org/10.3390/sym16040469
Submission received: 4 March 2024 / Revised: 22 March 2024 / Accepted: 8 April 2024 / Published: 12 April 2024
(This article belongs to the Section Mathematics)

Abstract

We address the estimation of regression parameters for the ill-conditioned predictive linear model in this study. Traditional least squares methods often encounter challenges in yielding reliable results when there is multicollinearity. Therefore, we employ a better shrinkage method, average least squares-centered penalized regression (ALPR), as it offers a more efficient approach for handling multicollinearity than ridge regression. Additionally, we integrate ALPR with the principal component (PC) dimension reduction method for enhanced performance. We compared the proposed PCALPR estimation technique with existing ones for ill-conditioned problems through comprehensive simulations and real-life data analyses using the mean squared error. This integration results in superior model performance compared to other methods, highlighting the potential of combining dimensionality reduction techniques with penalized regression for enhanced model predictions.

1. Introduction

Regression is a widely employed statistical methodology across various fields with different variants, including parametric, semi-, and non-parametric approaches. Linear models remain attractive due to their interpretability and the availability of tools to handle diverse data types and validate theoretical assumptions. In practice, the model predicts a response variable as a linear function of one or more predictors, for example, modelling the influence of smoking and biking habits (predictors) on the likelihood of heart disease (response). It finds application across diverse domains, including the sciences, social sciences, and the arts.
While utilizing numerous explanatory variables provides a more accurate view of the response variable, it introduces the challenge of redundant information stemming from correlations among predictors. The issue of collinearity among predictors poses a significant problem in linear regression, impacting least squares estimates, standard errors, computational accuracy, fitted values, and predictions [1,2,3,4]. Various diagnostic methods, such as the condition number, correlation analysis, eigenvalues, condition index, and the variance inflation factor, are commonly employed to identify collinearity.
Additionally, several methods have been proposed to address the collinearity problem, ranging from component-based methods like partial least squares regression (PLS) and principal component regression (PCR) to techniques involving penalizing solutions using the L2 norm [5,6]. The widely recognized ridge regression [7] is one such method. Different modifications to ridge regression have led to several others. These include the Liu estimator, the modified ridge-type estimator, the Kibria–Lukman estimator, the two-parameter estimator, the Stein estimator, and others [8,9,10,11,12,13]. Generally, a unanimous agreement on the optimal method is lacking, as each approach proves effective under distinct circumstances.
Recently, Wang et al. [14] developed a novel method to address multicollinearity in linear models called average least squares method (LSM)-centered penalized regression (ALPR). This method utilizes the weighted average of ordinary least squares estimators as the central point for shrinkage. Wang et al.’s [14] investigation demonstrated that ALPR outperformed ridge estimation (RE) in accuracy when the signs of the regression coefficients were consistent. Thus, ALPR is a promising method to effectively mitigate multicollinearity, especially when the signs of the regression coefficients are consistent.
Recent studies have enhanced model predictions by integrating principal components regression with some L2 norms such as ridge regression and the Stein estimator [15,16]. The PCR estimation technique stands out as a potent solution for addressing dimensionality challenges in estimation problems [17]. Known for its transparency and ease of implementation, PCR involves two pivotal steps, where the initial step applies principal component analysis (PCA) to the predictor matrix. The subsequent step entails regressing the response variable on the first principal components, which capture the most variability.
In a groundbreaking contribution, Baye and Parker [15] introduced the r-k class estimator, ingeniously combining PCR with ridge regression, resulting in a remarkable performance boost compared to using each estimator individually. This pioneering work has ignited further research, inspiring researchers to explore new avenues [18,19,20,21,22]. This paper extends the principles of PCR to average least squares method (LSM)-centered penalized regression (ALPR), giving rise to a novel method named principal component average least squares method-centered penalized regression (PCALPR). The approach shares the initial step of principal component regression while diverging in the second step, where ALPR is used instead of the classical least squares method (LSM) to regress the response variable on the principal components.
Thus, in this study, we propose a new method to account for multicollinearity in the linear regression model by integrating principal component regression with average least squares method-centered penalized regression. This article is structured as follows: Section 2 provides a detailed review of existing methods and introduces the new estimator. In Section 3, we rigorously assess the new estimator's performance through a Monte Carlo simulation study. Section 4 showcases the practical relevance of the proposed estimator through two real-life data analyses. Finally, Section 5 summarizes this research's key findings, emphasizing the contributions of the new estimator and discussing its implications for future advancements in estimation techniques.

2. A Brief Overview of Existing Methods

Regression analysis models the connection between a response variable and one or more predictors. In this section, we will delve into the linear model, offering brief overviews of estimation methods, both with and without consideration of multicollinearity.

2.1. Least Squares Method

The linear model is a fundamental concept in statistical modelling, offering a versatile framework for understanding the relationship between a response variable and one or more predictors through a linear equation. This equation, often represented as
y = X\beta + \varepsilon,    (1)
captures the linear association between the response variable and the predictors. In this formulation, y is the n × 1 vector of the response variable, X is the n × (p + 1) matrix of predictors, and β is the (p + 1) × 1 vector of coefficients that quantify the impact of each predictor on the response. The linear model assumes a linear and additive relationship between the predictors and the response variable. ε is an n × 1 vector of disturbance terms, such that ε ∼ N(0, σ²I).
The least squares method (LSM) stands as a cornerstone in statistical modelling, offering a powerful approach to estimating the parameters of the linear model defined in Equation (1). The primary goal of the LSM is to find the coefficients that minimize the sum of the squared differences between the observed and predicted values of the dependent variable. The vector of estimates, β ^ , is given by
\hat{\beta}_{LSM} = (X'X)^{-1} X'y.    (2)
The variance–covariance matrix of the LSM is defined as follows:
\mathrm{Cov}(\hat{\beta}) = \hat{\sigma}^2 (X'X)^{-1},    (3)
where the residual mean square is \hat{\sigma}^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 / (n - p - 1). The scalar mean squared error (SMSE) of (3) is as follows:
\mathrm{SMSE}(\hat{\beta}) = \hat{\sigma}^2 \sum_{j=1}^{p} \frac{1}{e_j},    (4)
where the e_j are the eigenvalues of the matrix X'X.
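To make Equations (2)–(4) concrete, the short sketch below is a minimal numpy illustration on synthetic data; it is not code from the paper, and the data, sample sizes, and the degrees-of-freedom convention for the error-variance estimate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, intercept-free example: n observations on p predictors (illustrative only).
n, p = 50, 3
X = rng.standard_normal((n, p))
beta_true = np.array([0.6, 0.5, 0.4])
y = X @ beta_true + rng.normal(scale=2.0, size=n)

# Equation (2): beta_LSM = (X'X)^{-1} X'y.
XtX = X.T @ X
beta_lsm = np.linalg.solve(XtX, X.T @ y)

# Error-variance estimate used in Equations (3)-(4); (n - p) is used here because
# this sketch has no intercept -- the exact degrees-of-freedom choice is an assumption.
resid = y - X @ beta_lsm
sigma2_hat = resid @ resid / (n - p)

# Equation (4): SMSE(beta_LSM) = sigma^2 * sum_j 1/e_j, with e_j the eigenvalues of X'X.
eigvals = np.linalg.eigvalsh(XtX)
smse_lsm = sigma2_hat * np.sum(1.0 / eigvals)

print("beta_LSM:", beta_lsm.round(4))
print("sigma^2 hat:", round(sigma2_hat, 4))
print("SMSE(beta_LSM):", round(smse_lsm, 6))
```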
The issue of collinearity arises when there are linear or nearly linear relationships among predictors. When exact linear relationships exist, meaning one predictor is an exact linear combination of others, the matrix X X becomes singular, preventing a unique β ^ estimate. When near-linear dependence exists among predictors, X X is nearly singular, leading to an ill-conditioned estimation equation for regression parameters. Consequently, the parameter estimates, β ^ , become unstable. The variances in the regression coefficients become inflated, resulting in larger confidence intervals. In summary, the presence of collinearity, whether exact or near-linear, jeopardizes the stability of parameter estimates, leading to increased uncertainty in understanding the relationships between predictors and the response variable.
Various methods are available for detecting collinearity in linear regression models, providing insights into the interdependence among predictors. Key techniques include the following:
  • Variance inflation factor (VIF): The VIF measures how much the variance of an estimated regression coefficient becomes inflated due to collinearity. A widely accepted rule of thumb suggests collinearity concerns when VIF values exceed 10.
  • Condition number: The condition number assesses the sensitivity of the regression coefficients to small changes in the data. A condition number of 15 raises concerns about multicollinearity, while a number exceeding 30 indicates severe multicollinearity [3].
  • Correlation matrix: Analyzing the correlation matrix of predictors helps to identify high correlations between variables. A high correlation coefficient, particularly close to 1, suggests the potential presence of collinearity.
  • Eigenvalues: Investigating the eigenvalues of the predictor matrix X X provides insights into multicollinearity. Small eigenvalues, especially near zero, indicate a higher risk of multicollinearity.
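The diagnostics listed above are straightforward to compute. The sketch below is an illustrative numpy helper; the function name, the synthetic data, and the square-root form of the condition number are assumptions, not material from the paper.

```python
import numpy as np

def collinearity_diagnostics(X):
    """Correlation matrix, VIFs, eigenvalues of X'X, and condition number for a predictor matrix X."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)      # standardize the columns
    R = np.corrcoef(Xs, rowvar=False)                      # correlation matrix of the predictors
    vif = np.diag(np.linalg.inv(R))                        # VIF_j = jth diagonal of R^{-1}
    eigvals = np.linalg.eigvalsh(Xs.T @ Xs)                # eigenvalues of X'X
    cond = np.sqrt(eigvals.max() / eigvals.min())          # square-root condition number (an assumption)
    return {"correlation": R, "VIF": vif, "eigenvalues": eigvals, "condition_number": cond}

# Two nearly collinear columns produce large VIFs and a large condition number.
rng = np.random.default_rng(1)
x1 = rng.standard_normal(100)
x2 = x1 + 0.05 * rng.standard_normal(100)                  # almost a copy of x1
x3 = rng.standard_normal(100)
diag = collinearity_diagnostics(np.column_stack([x1, x2, x3]))
print("VIF:", diag["VIF"].round(2), "condition number:", round(diag["condition_number"], 2))
```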
In addition to the methods for detecting collinearity in linear regression, several approaches have been proposed to address this issue effectively. These methods span a spectrum of techniques, from component-based strategies like partial least squares regression (PLS) and principal component regression (PCR) to regularization techniques involving penalizing solutions using the L2 norm. The upcoming section will offer a concise overview of a few methods developed to address collinearity. Specifically, the focus will be on techniques such as principal component regression (PCR) and the regularization methods utilizing the L2 norm.

2.2. Principal Component Regression

Principal component analysis (PCA) is a widely used dimension reduction technique to transform the original variables into a new set of uncorrelated variables, called principal components, while retaining as much of the original variability as possible [17]. The first principal component captures the maximum amount of variance in the data. Subsequent principal components capture the remaining variance while being orthogonal to each other. By retaining only the most significant principal components, PCA reduces the dimensionality of the dataset while preserving most of the original information. It is applicable in various fields, including image and signal processing, finance, and genetics. The model structure for principal component regression is obtained by transforming model (1) as follows:
y = XFF'\beta + \varepsilon = T\alpha + \varepsilon,    (5)
where α = F'β, F = (f_1, …, f_p) is a p × p orthogonal matrix with F'X'XF = T'T = E, and E = diag(e_1, …, e_p) is the p × p diagonal matrix of eigenvalues of X'X. The score matrix T = XF = (t_1, …, t_p) has dimensions n × m, where n is the number of observations and m is the number of retained principal components. The PCR estimator of β is obtained by excluding one or more of the principal components, t_i, applying least squares method (LSM) regression to the resulting model, and then transforming the coefficients back to the original parameter space. Principal components whose eigenvalues are less than one should be excluded; these components contribute less to the overall variability of the data and can be considered less influential for prediction. Conversely, according to Cliff [23], all components with eigenvalues greater than one should be kept for statistical inference, as they explain more variability in the data. Thus, the PCR estimator of β is defined as follows:
\hat{\beta}_{PCR} = (T'T)^{-1} T'y.    (6)
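The two PCR steps can be sketched as follows with numpy. This is an illustrative implementation, not the authors' code: applying the eigenvalue-greater-than-one rule to the correlation-matrix eigenvalues and back-transforming the retained-component coefficients to the original space are assumptions made for the sketch.

```python
import numpy as np

def pcr_fit(X, y, eig_threshold=1.0):
    """Principal component regression: keep components whose (correlation-matrix) eigenvalue
    exceeds the threshold, regress y on the retained scores, and map back to beta space."""
    n = X.shape[0]
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)        # standardized predictors
    eigvals, F = np.linalg.eigh(Xs.T @ Xs / (n - 1))          # eigen-decomposition of the correlation matrix
    order = np.argsort(eigvals)[::-1]                         # sort components by decreasing eigenvalue
    eigvals, F = eigvals[order], F[:, order]
    keep = eigvals > eig_threshold                            # eigenvalue-greater-than-one rule [23,26]
    T_r = Xs @ F[:, keep]                                     # retained score matrix T
    alpha_r = np.linalg.solve(T_r.T @ T_r, T_r.T @ y)         # Equation (6): LSM on the components
    beta_pcr = F[:, keep] @ alpha_r                           # back-transform to the original coefficients
    return beta_pcr, eigvals, keep

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 5))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(60)            # induce near-collinearity
y = X @ np.array([0.5, 0.5, 0.4, 0.3, 0.2]) + rng.normal(size=60)
beta_pcr, eigvals, keep = pcr_fit(X, y)
print("eigenvalues:", eigvals.round(3), "components retained:", int(keep.sum()))
```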

2.3. Regularization Techniques

L2-norm regularization, known as ridge regression, is used in linear regression to address multicollinearity by penalizing the regression coefficients [7]. It involves adding a regularization term to the objective function of the least squares method (LSM). The objective function of ridge regression (RR) combines the LSM loss function with the L2 regularization term as follows:
\text{Minimize } \|y - X\beta\|_2^2 + k\|\beta\|_2^2,    (7)
where y is the vector of the response variable, X is the matrix of predictors, \|\cdot\|_2^2 denotes the squared L2 norm, β is the vector of regression coefficients, and k is the regularization parameter. The regularization term penalizes the regression coefficients, effectively shrinking them towards zero. Thus, there is a reduction in the variance of the parameter estimates and improved model stability, especially when there is collinearity among the predictors. The objective function in Equation (7) is expanded as follows:
\text{Minimize } y'y - 2\beta'X'y + \beta'X'X\beta + k\beta'\beta.    (8)
Differentiate Equation (8) with respect to β and equate to zero. Consequently,
\hat{\beta}_{RR} = (X'X + kI)^{-1} X'y.    (9)
According to Hoerl et al. [24], the regularization parameter, k, is defined as follows:
k = \frac{p\hat{\sigma}^2}{\sum_{j=1}^{p} \hat{\alpha}_j^2}.    (10)
The variance–covariance matrix of the ridge regression is defined as follows:
\mathrm{Cov}(\hat{\beta}_{RR}) = \hat{\sigma}^2 (X'X + kI)^{-1} X'X (X'X + kI)^{-1}.    (11)
The bias of the estimator is obtained as follows:
\mathrm{Bias}(\hat{\beta}_{RR}) = -k (X'X + kI)^{-1} \beta.    (12)
Hence, the matrix mean squared error (MMSE) is given as
\mathrm{MMSE}(\hat{\beta}_{RR}) = \hat{\sigma}^2 (X'X + kI)^{-1} X'X (X'X + kI)^{-1} + k^2 (X'X + kI)^{-1} \beta\beta' (X'X + kI)^{-1}.    (13)
The scalar mean squared error (SMSE) of (13) is as follows:
\mathrm{SMSE}(\hat{\beta}_{RR}) = \hat{\sigma}^2 \sum_{j=1}^{p} \frac{e_j}{(e_j + k)^2} + k^2 \sum_{j=1}^{p} \frac{\hat{\alpha}_j^2}{(e_j + k)^2},    (14)
where the e_j are the eigenvalues of the matrix X'X and α = F'β.
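A compact sketch of Equations (9), (10), and (14) is given below. It is an illustrative numpy implementation under an intercept-free, standardized-model assumption, not the authors' code.

```python
import numpy as np

def ridge_with_hoerl_k(X, y):
    """Ridge regression with the Hoerl-Kennard-Baldwin plug-in k of Equation (10)."""
    n, p = X.shape
    XtX = X.T @ X
    beta_lsm = np.linalg.solve(XtX, X.T @ y)
    resid = y - X @ beta_lsm
    sigma2_hat = resid @ resid / (n - p)                     # error-variance estimate (no intercept here)
    eigvals, F = np.linalg.eigh(XtX)                         # X'X = F E F'
    alpha_hat = F.T @ beta_lsm                               # canonical coefficients, alpha = F'beta
    k = p * sigma2_hat / np.sum(alpha_hat**2)                # Equation (10)
    beta_rr = np.linalg.solve(XtX + k * np.eye(p), X.T @ y)  # Equation (9)
    # Equation (14): eigenvalue form of the scalar mean squared error.
    smse = sigma2_hat * np.sum(eigvals / (eigvals + k) ** 2) + k**2 * np.sum(alpha_hat**2 / (eigvals + k) ** 2)
    return beta_rr, k, smse

rng = np.random.default_rng(4)
X = rng.standard_normal((80, 4))
X[:, 3] = X[:, 2] + 0.02 * rng.standard_normal(80)           # near-collinear pair
y = X @ np.array([0.5, 0.5, 0.5, 0.5]) + rng.normal(scale=5.0, size=80)
beta_rr, k, smse = ridge_with_hoerl_k(X, y)
print("k =", round(k, 4), " SMSE(RR) =", round(smse, 4))
```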

2.4. Average Least Squares Method (LSM)-Centered Penalized Regression (ALPR)

Wang et al. [14] introduced an enhanced estimator that refines the approach of ridge regression by penalizing the regression coefficients towards a predetermined constant, ς , diverging from the traditional ridge regression method that shrinks its coefficients towards zero. This modification offers a more flexible penalization framework by allowing the shrinkage target to be adjusted away from zero. The objective function of ALPR combines the LSM loss function with the L2 regularization term, which is penalized to a specific constant, ς , as follows:
\text{Minimize } \|y - X\beta\|_2^2 + k\|\beta - \varsigma\|_2^2.    (15)
The objective function in Equation (15) is expanded as follows:
\text{Minimize } y'y - 2\beta'X'y + \beta'X'X\beta + k\beta'\beta - 2k\varsigma'\beta + k\varsigma'\varsigma.    (16)
Differentiate Equation (16) with respect to β and equate to zero. Consequently,
\hat{\beta}_{\varsigma} = (X'X + kI)^{-1}(X'y + k\varsigma).    (17)
Define the distance from β to ς as g = β − ς. Consequently, the objective function can be expressed as the minimization of \|y - X\beta\|_2^2 + k\|g\|_2^2, akin to the formulation of ridge regression. Let α = F'g. Thus, the scalar mean squared error (SMSE) is as follows:
\mathrm{SMSE}(\hat{\beta}_{\varsigma}) = \hat{\sigma}^2 \sum_{j=1}^{p} \frac{e_j}{(e_j + k)^2} + k^2 \sum_{j=1}^{p} \frac{\hat{\alpha}_j^2}{(e_j + k)^2}.    (18)
Consequently, as α̂_j² increases, SMSE(β̂_ς) also increases. Since Equation (18) does not depend on F, it roughly indicates that a smaller g, i.e., a β closer to ς, yields a smaller SMSE(β̂_ς) and hence better estimation. While least squares method (LSM) estimates may suffer from instability when there is significant multicollinearity among the explanatory variables, their average value demonstrates reduced susceptibility to multicollinearity effects. As an alternative to the conventional shrinkage center of zero used in ridge regression (RR), employing the average value of the LSM estimates as the shrinkage center offers a more appropriate solution. This approach, termed average OLS-centered penalized regression (AOPR) by Wang et al. [14] and referred to here as ALPR, relies on a p-dimensional vector d whose elements are all set to 1, so that all coefficients are shrunk towards the common value ς_M and the shrinkage target is ς_M d. To ensure a stable estimate of ς_M with maximal explanatory power for the observed response, ς_M is estimated by minimizing ‖y − Xς_M d‖²₂, which gives ς̂_M d = d(d'X'Xd)⁻¹d'X'X β̂_LSM. Replacing the constant ς in Equation (17) with this estimate, we have
\hat{\beta}_{ALPR} = (X'X + kI)^{-1}\left(X'y + k\, d(d'X'Xd)^{-1} d'X'X \hat{\beta}_{LSM}\right)
= (X'X + kI)^{-1}\left(X'X + k\, d(d'X'Xd)^{-1} d'X'X\right)\hat{\beta}_{LSM}
= Q\hat{\beta}_{LSM}, \text{ where } Q = (X'X + kI)^{-1}\left(X'X + k\, d(d'X'Xd)^{-1} d'X'X\right).    (19)
The covariance matrix of β ^ A L P R is as follows:
\mathrm{Cov}(\hat{\beta}_{ALPR}) = \hat{\sigma}^2\, Q (X'X)^{-1} Q'.    (20)
Following Wang et al. [14], let h = F'd(d'X'Xd)^{-1}d'X'XF and t_j = b_j^2 + \sum_{i=1}^{p} h_{ji}^2 \sigma^2 / e_i + (h_j'b)^2 - 2 b_j h_j'b, where h_j' denotes the jth row vector of h and b = F'β. Consequently, the SMSE of β̂_ALPR is as follows:
\mathrm{SMSE}(\hat{\beta}_{ALPR}) = \hat{\sigma}^2 \sum_{j=1}^{p} \frac{e_j + 2k h_{jj}}{(e_j + k)^2} + k^2 \sum_{j=1}^{p} \frac{t_j}{(e_j + k)^2},    (21)
where
k \le \frac{e_j\hat{\sigma}^2 - \hat{\sigma}^2 e_j h_{jj}}{e_j t_j - \hat{\sigma}^2 h_{jj}} \text{ if } e_j t_j - \hat{\sigma}^2 h_{jj} > 0, \qquad k \ge \frac{e_j\hat{\sigma}^2 - \hat{\sigma}^2 e_j h_{jj}}{e_j t_j - \hat{\sigma}^2 h_{jj}} \text{ if } e_j t_j - \hat{\sigma}^2 h_{jj} < 0.
Set
l_j = \begin{cases} 0, & e_j t_j - \hat{\sigma}^2 h_{jj} \ge 0 \text{ and } h_{jj} > 1, \\ \max\left\{0,\; \dfrac{e_j\hat{\sigma}^2 - \hat{\sigma}^2 e_j h_{jj}}{e_j t_j - \hat{\sigma}^2 h_{jj}}\right\}, & e_j t_j - \hat{\sigma}^2 h_{jj} < 0. \end{cases}
Wang et al. [14] proposed that setting k = l m i n serves as the optimal choice for the shrinkage parameter in ALPR. For further insights, we advise consulting the works of Wang et al. [14,25]. These references provide an in-depth exploration and analysis of the optimal shrinkage parameter selection in ALPR.
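The ALPR point estimate of Equation (19) can be sketched as follows for a user-supplied shrinkage parameter k; the data-driven choice k = l_min of Wang et al. [14] is not reproduced here, and the function name and synthetic data are assumptions for illustration.

```python
import numpy as np

def alpr_estimate(X, y, k):
    """ALPR estimate of Equation (19): shrink towards the weighted average of the LSM estimates."""
    p = X.shape[1]
    XtX = X.T @ X
    beta_lsm = np.linalg.solve(XtX, X.T @ y)
    d = np.ones(p)                                             # p-vector of ones
    # Shrinkage centre d (d'X'X d)^{-1} d'X'X beta_LSM: every coefficient is shrunk
    # towards the same scalar, a weighted average of the LSM estimates.
    centre = d * ((d @ XtX @ beta_lsm) / (d @ XtX @ d))
    beta_alpr = np.linalg.solve(XtX + k * np.eye(p), X.T @ y + k * centre)
    return beta_alpr, beta_lsm, centre

rng = np.random.default_rng(5)
X = rng.standard_normal((60, 4))
X[:, 1] = X[:, 0] + 0.05 * rng.standard_normal(60)
y = X @ np.array([0.5, 0.5, 0.5, 0.5]) + rng.normal(scale=5.0, size=60)
beta_alpr, beta_lsm, _ = alpr_estimate(X, y, k=0.5)            # k = 0.5 is an arbitrary illustrative value
print("LSM: ", beta_lsm.round(3))
print("ALPR:", beta_alpr.round(3))
```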

2.5. Principal Component Average LSM-Centered Penalized Regression

In this section, we introduce a novel hybrid estimation approach that integrates principal component regression (PCR) with average LSM-centered penalized regression (ALPR) to create the principal component average LSM-centered penalized regression method. We aim to capitalize on the strengths of PCR and ALPR to enhance the modelling process and boost predictive accuracy by combining these two techniques. The methodology comprises the following steps (a computational sketch is given after the list):
  i. Standardize the predictor variables to ensure comparability, with a mean of zero and unit variance.
  ii. Perform principal component analysis (PCA) on the predictor variables.
  iii. Select the principal components whose eigenvalues exceed 1, as they explain more variance than an individual predictor variable [23,26].
  iv. Regress the response variable on the chosen principal components to derive fitted values.
  v. Replace the original response variable with the fitted values obtained from the PCR model.
  vi. Use average LSM-centered penalized regression to regress the transformed response variable (the fitted values from step iv) on the original predictor variables.
  vii. Evaluate the combined approach's performance using appropriate metrics such as the scalar mean squared error (SMSE) and predicted mean squared error.
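The sketch below strings the listed steps together in numpy. It is an illustrative implementation, not the authors' code: the treatment of the intercept, the use of standardized predictors in the ALPR step, and the user-supplied shrinkage parameter k are assumptions.

```python
import numpy as np

def pcalpr_fit(X, y, k, eig_threshold=1.0):
    """PCALPR sketch: PCA on standardized predictors, PCR fitted values, then ALPR
    with those fitted values as the working response (steps i-vi above)."""
    n, p = X.shape
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)         # step i: standardize
    eigvals, F = np.linalg.eigh(Xs.T @ Xs / (n - 1))           # step ii: PCA via the correlation matrix
    keep = eigvals > eig_threshold                             # step iii: eigenvalue > 1 rule
    T_r = Xs @ F[:, keep]                                      # retained component scores
    alpha_r = np.linalg.solve(T_r.T @ T_r, T_r.T @ y)          # step iv: regress y on the components
    y_fit = T_r @ alpha_r                                      # step v: PCR fitted values as new response
    XtX = Xs.T @ Xs                                            # step vi: ALPR on the original predictors
    beta_lsm = np.linalg.solve(XtX, Xs.T @ y_fit)
    d = np.ones(p)
    centre = d * ((d @ XtX @ beta_lsm) / (d @ XtX @ d))
    return np.linalg.solve(XtX + k * np.eye(p), Xs.T @ y_fit + k * centre)
```

In this sketch k is supplied by the user; in practice it would be selected by the rule of Wang et al. [14] described in Section 2.4.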
Mathematically, Baye and Parker [15] integrated the principal component approach with the ridge estimator to form principal component ridge regression (PCRR). PCRR, according to Chang and Yang [19], is defined as follows:
\hat{\beta}_{PCRR} = (T'T + kI)^{-1} T'y.    (22)
The scalar mean squared error (SMSE) is as follows:
\mathrm{SMSE}(\hat{\beta}_{PCRR}) = \hat{\sigma}^2 \sum_{j=1}^{r} \frac{e_j}{(e_j + k)^2} + k^2 \sum_{j=1}^{r} \frac{\hat{\alpha}_j^2}{(e_j + k)^2},    (23)
where r ≤ p. Thus, the proposed estimator is defined as follows:
\hat{\beta}_{PCALPR} = (T'T + kI)^{-1}\left(T'y + k\, d(d'T'Td)^{-1} d'T'T \hat{\beta}_{PCR}\right).    (24)
The scalar mean squared error (SMSE) is as follows:
\mathrm{SMSE}(\hat{\beta}_{PCALPR}) = \hat{\sigma}^2 \sum_{j=1}^{r} \frac{e_j + 2k h_{jj}}{(e_j + k)^2} + k^2 \sum_{j=1}^{r} \frac{t_j}{(e_j + k)^2}.    (25)

3. Simulation

This section presents a simulation study that evaluates the performance of the proposed estimator in comparison to existing estimators across various levels of multicollinearity. The simulation study is conducted using RStudio, a popular integrated development environment for R programming. We follow a specific data generation mechanism to account for the number of correlated variables. We generate n ∈ {30, 50, 100, 200} observations with p ∈ {3, 7} explanatory variables using the following scheme:
x_{ij} = (1 - \gamma^2)^{1/2} w_{ij} + \gamma w_{i,p+1}, \quad i = 1, 2, \ldots, n, \quad j = 1, 2, \ldots, m,    (26)
where w_{ij} represents independent standard normal pseudo-random numbers and m ∈ {2, 3, …, p} is the number of correlated variables. The variables are standardized, and thus X'X and X'y are in correlation form. Furthermore, we consider moderate and strong collinearity levels using γ² ∈ {0.8, 0.9, 0.99, 0.999}. We generate the model observations following Equation (1) with ε ∼ N(0, σ²I), σ² ∈ {25, 100}. The response variable is a linear function of the predictors generated in (26) with coefficients β_1, β_2, …, β_p, respectively. The model assumes a zero intercept, and the values of β are chosen such that \sum_{j=1}^{p} \beta_j^2 = 1. We repeat the whole data generation process 1000 times and evaluate each estimator's performance using the mean squared error (MSE) and prediction mean squared error (PMSE), respectively, given by
\mathrm{MSE} = \frac{1}{1000}\sum_{i=1}^{1000} (\hat{\beta}^*_i - \beta)'(\hat{\beta}^*_i - \beta), \qquad \mathrm{PMSE} = \frac{1}{1000}\sum_{j=1}^{1000} (\hat{y}^*_j - y^*)'(\hat{y}^*_j - y^*),    (27)
where \hat{\beta}^*_i is any of the existing estimators or the proposed estimator and \hat{y}^* = X\hat{\beta}^*.
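The data-generation scheme of Equation (26) and the criteria in Equation (27) can be sketched as follows. This illustrative numpy version assumes m = p (all predictors share the common component), equal coefficients satisfying Σβ_j² = 1, uses the observed response as y*, and shows only the LSM estimator; all of these are assumptions for the sketch, and the full comparison is reported in Tables 1–4.

```python
import numpy as np

def simulate_replication(n, p, gamma2, sigma, beta, rng):
    """One replication of Equation (26) with m = p correlated predictors."""
    gamma = np.sqrt(gamma2)
    w = rng.standard_normal((n, p + 1))
    X = np.sqrt(1.0 - gamma2) * w[:, :p] + gamma * w[:, p:p + 1]   # shared column induces collinearity
    y = X @ beta + rng.normal(scale=sigma, size=n)                 # Equation (1) with zero intercept
    return X, y

def mse_pmse(estimates, fitted, beta, responses):
    """Equation (27): average squared estimation and prediction errors over the replications."""
    mse = np.mean([(b - beta) @ (b - beta) for b in estimates])
    pmse = np.mean([(f - y) @ (f - y) for f, y in zip(fitted, responses)])
    return mse, pmse

rng = np.random.default_rng(3)
p = 3
beta = np.full(p, 1.0 / np.sqrt(p))        # one choice with sum of beta_j^2 = 1
ests, fits, ys = [], [], []
for _ in range(200):                        # 1000 replications in the paper; fewer here for speed
    X, y = simulate_replication(50, p, gamma2=0.99, sigma=5.0, beta=beta, rng=rng)
    b = np.linalg.solve(X.T @ X, X.T @ y)   # LSM estimate for this replication
    ests.append(b)
    fits.append(X @ b)
    ys.append(y)
print("LSM (MSE, PMSE):", mse_pmse(ests, fits, beta, ys))
```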
The simulation results are available in Table 1, Table 2, Table 3 and Table 4. The results from the simulation study offer a comprehensive view of how different regression estimators perform under varying conditions of multicollinearity and noise levels. The comparison includes the traditional least squares method (LSM), ridge regression (RR), principal components ridge regression (PCRR), average least squares method-centered penalized regression (ALPR), and the novel principal component average least squares method-centered penalized regression (PCALPR). Under moderate to strong multicollinearity scenarios (ρ = 0.8 to ρ = 0.99), the PCALPR estimator consistently outperformed the other estimators in terms of mean squared error (MSE). This indicates a superior capability in accurately estimating the regression coefficients, which directly contributes to better model prediction accuracy. However, when the multicollinearity level becomes severe, i.e., when ρ = 0.999, the other estimators compete favorably.
A key observation is a significant deterioration in the performance of the LSM as the level of multicollinearity increases, illustrating the well-known vulnerability of ordinary least squares to collinear predictors. This highlights the necessity for alternative estimation techniques in practical applications where predictors are often correlated to some degree.
Both RR and PCRR showed improvements over LSM, affirming the value of penalization and dimensionality reduction techniques in mitigating multicollinearity effects. However, the standout performance of ALPR and PCALPR underscores the effectiveness of centering the penalization around a more robust estimate than ordinary least squares, particularly under high multicollinearity.
PCALPR’s edge over ALPR in almost all scenarios suggests that the integration of principal component analysis not only helps in addressing multicollinearity by reducing the dimensionality of the predictor space but also enhances the penalization strategy by focusing on the most informative components of the predictors.
When examining the impact of different noise levels (σ² = 25 vs. σ² = 100), it is evident that all estimators perform worse as noise increases, as expected. However, the relative performance rankings remain roughly consistent, with PCALPR maintaining its superiority. This resilience to increased noise levels further supports the robustness of the proposed method. The mean squared error and prediction mean squared error decrease as the sample size increases, as demonstrated in Figure 1, Figure 2, Figure 3 and Figure 4.
The findings from this study suggest that PCALPR is a highly promising approach for handling multicollinearity in linear regression models, particularly in situations where predictors have high multicollinearity and when the model is subjected to significant noise. The method not only leverages the strengths of penalized regression techniques to reduce the bias introduced by multicollinearity but also capitalizes on the dimensionality reduction capability of principal component analysis to focus the model estimation on the most relevant information contained within the predictor variables.

4. Data Analysis

We evaluated the efficacy of the proposed and existing estimators by analyzing two real-life datasets.

4.1. Asphalt Binder Data

This dataset has been adopted in previous studies to analyze the impact of various chemical compositions on surface free energy [27,28]. The model formulation is as follows:
y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_{12} x_{i12} + \varepsilon_i, \quad i = 1, \ldots, 23,    (28)
where y_i denotes surface free energy, and x_{i1} through x_{i12} correspond to saturates, aromatics, resins, asphaltenes, wax, carbon, hydrogen, oxygen, nitrogen, sulfur, nickel, and vanadium, respectively. We standardized the predictor variables to achieve a mean of zero and a variance of 1. We conducted the Ramsey RESET test to assess the linearity of the regression model. The test statistic yielded a value of RESET = 3.8065, p-value = 0.08283, with one degree of freedom in the numerator (df1) and nine degrees of freedom in the denominator (df2). The p-value of 0.08283 suggests there is no significant evidence to reject the null hypothesis of linearity at the conventional significance level of 0.05. Furthermore, the Breusch–Pagan test for heteroscedasticity (BP test) resulted in a statistic of BP = 9.9723, with 12 degrees of freedom and a p-value of 0.6184. This p-value indicates no evidence against the null hypothesis of homoscedasticity (constant variance) in the residuals. Therefore, there is no significant heteroscedasticity detected in the model. Considering both tests, the Ramsey RESET test provides no evidence against linearity, and the Breusch–Pagan test finds no evidence of heteroscedasticity.
The correlation plot (Figure 5) shows the presence of strong correlations among some predictor variables, such as between saturates, aromatics, resins, asphaltenes, etc. High correlation among predictors is a hallmark of multicollinearity, which complicates the estimation of regression coefficients because it becomes challenging to isolate the individual effect of each predictor on the response variable.
The VIF plot presented in Figure 6 quantitatively shows the degree of multicollinearity. A VIF value greater than 10 is often considered an indicator of severe multicollinearity, suggesting that the predictor variables are highly linearly related. This condition exacerbates the difficulty in obtaining reliable estimates of the regression coefficients because it inflates the variances in the coefficient estimates, making them less precise. In addition, the condition number is 52.37, revealing the presence of severe multicollinearity.
Table 5 provides the regression estimates for different models: the least squares method (LSM), ridge regression (RR), principal component ridge regression (PCRR), average LSM-centered penalized regression (ALPR), and principal component average LSM-centered penalized regression (PCALPR). Each method adjusts the coefficients (β) differently based on their approach to handling multicollinearity. The intercept and coefficients vary across models, reflecting how each method compensates for the high correlation among predictors. For instance, ALPR and PCALPR provide coefficients that are significantly different from those of LSM, indicating a distinct approach to stabilizing the regression estimates in the presence of multicollinearity.
The scalar mean squared error (SMSE) is a measure of the accuracy of the estimator. Lower values indicate a better fit and more reliable estimates. From Table 5, it is evident that ALPR and PCALPR outperform the other methods, with PCALPR showing the lowest SMSE. This suggests that PCALPR, by incorporating both principal component analysis and average LSM-centered penalized regression, offers a more robust method for dealing with multicollinearity, providing more accurate and stable estimates of the regression coefficients. The coefficients estimated by ALPR and PCALPR indicate that these methods can identify and adjust for the influence of multicollinearity, leading to potentially more meaningful and interpretable results. For example, the positive coefficients for x2, x3, and x7 in the ALPR and PCALPR models may suggest a strong positive relationship with the surface free energy, which was not as clearly indicated or was over-adjusted in the LSM and RR models.

4.2. Gasoline Mileage Data

The second dataset employed in this study was obtained from Montgomery et al. [3], comprising observations on gasoline mileage and eleven (11) predictors. We excluded variables three and eleven due to missing data in variable three and the structural form of variable eleven. Thus, we retained only nine of the features. Table 6 provides a comprehensive description of each variable utilized in the regression model:
We standardized the predictor variables to achieve a mean of zero and a variance of 1. We conducted the Ramsey RESET test to assess the linearity of the regression model. The test statistic yielded a value of RESET = 3.7076, p-value = 0.07472, with 1 degree of freedom in the numerator (df1) and 14 degrees of freedom in the denominator (df2). The p-value of 0.07472 suggests there is no significant evidence to reject the null hypothesis of linearity at the conventional significance level of 0.05. Furthermore, the Breusch–Pagan test for heteroscedasticity (BP test) resulted in a statistic of BP = 3.6087, with nine degrees of freedom and a p-value of 0.9352. This p-value indicates no evidence against the null hypothesis of homoscedasticity (constant variance) in the residuals. Therefore, there is no significant heteroscedasticity detected in the model. Considering both tests, the Ramsey RESET test provides no evidence against linearity, and the Breusch–Pagan test finds no evidence of heteroscedasticity. The correlation plot presented in Figure 7 helps to identify the strength and direction of linear relationships between predictor variables. High correlation coefficients between pairs of predictors suggest a strong linear relationship, potentially indicating multicollinearity. Multicollinearity complicates the interpretation of individual predictors’ effects on the response variable due to shared variance among predictors. The VIF quantifies how much the variance of an estimated regression coefficient increases if the predictors are correlated. If no factors are correlated, the VIF equals 1. Generally, a VIF above 5–10 indicates significant multicollinearity requiring attention. In the context of Gasoline Mileage data, the presence of multicollinearity (as indicated by the VIF plot in Figure 8) suggests that some of the predictors share a significant amount of information, which could distort the regression coefficients if not properly addressed. In addition, the condition number is 26.80, revealing the presence of moderate multicollinearity.
The regression estimates across different methods presented in Table 7 (least squares method (LSM), ridge regression (RR), principal component ridge regression (PCRR), average LSM-centered penalized regression (ALPR), and principal component average LSM-centered penalized regression (PCALPR)) show how each predictor variable is estimated to impact the mileage, adjusting for the presence of other variables in the model. Each method handles multicollinearity differently, leading to variations in coefficient estimates. Each coefficient represents the change in the mileage associated with a one-unit change in the predictor variable, holding all other predictors constant. The variation in coefficient estimates across different methods reflects each method’s approach to managing multicollinearity and optimizing the model for prediction accuracy. Differences in coefficient estimates across methods highlight the impact of multicollinearity and the effectiveness of each method in addressing it. For example, penalized methods like RR, ALPR, and PCALPR may shrink some coefficients towards zero more than others, reflecting their relative importance in the presence of multicollinearity. The SMSE values across different estimation methods in Table 7 provide insight into each model’s prediction accuracy. Lower SMSE values indicate better model performance in terms of accurately predicting the gasoline mileage from the given predictors. The variation in SMSE values reflects the trade-off between bias and variance introduced by each regression method, with penalized methods typically offering a more balanced approach to minimize prediction error. The principal component average LSM-centered penalized regression (PCALPR) outperforms the others in terms of the SMSE criterion, making it the most accurate model for predicting gasoline mileage from the given predictors in the presence of multicollinearity. This suggests that the combination of principal component regression (PCR) with average LSM-centered penalized regression (ALPR) to reduce dimensionality and multicollinearity provides a robust estimation.

5. Conclusions

This work focused on estimating the regression parameters for a predictive linear model when there is multicollinearity among the regressors. We utilized the optimal shrinkage method, ALPR, developed by Wang et al. [14] and combined it with the principal component (PC) technique. We derived the properties of the proposed PCALPR and compared it, using the mean squared error (MSE), with other estimators such as the least squares method, PC regression, ridge regression, and ALPR through numerical analysis. Comprehensive numerical evaluations demonstrated that our suggested estimator dominates the others in terms of MSE across various degrees of multicollinearity. We observed that the prediction accuracy of the new estimator closely matches that of the ALPR. The effectiveness of the PCALPR in severe multicollinear predictive modelling was demonstrated through data analysis, showing improved prediction accuracy.

Author Contributions

Conceptualization, A.F.L.; methodology, A.F.L. and E.T.A.; software, E.T.A.; validation, A.F.L. and E.T.A.; formal analysis, A.F.L. and E.T.A.; writing—original draft preparation, A.F.L., E.T.A., M.A. and O.A.A.; writing—review and editing, A.F.L., E.T.A., M.A., O.A.A. and K.A.; supervision, K.A.; project administration, O.A.A.; funding acquisition, O.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded by Princess Nourah bint Abdulrahman University.

Data Availability Statement

Data will be made available on request.

Acknowledgments

The authors express their gratitude to the Princess Nourah bint Abdulrahman University Researchers Supporting Project (number PNURSP2024R734), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gunst, R.F. Regression analysis with multicollinear predictor variables: Definition, detection, and effects. Commun. Stat. Theory Methods 1983, 12, 2217–2260.
2. Weisberg, S. Applied Regression Analysis; Wiley: Hoboken, NJ, USA, 1985.
3. Montgomery, D.C.; Peck, A.E.; Vining, G.G. Introduction to Linear Regression Analysis, 5th ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2012.
4. Dawoud, I.; Kibria, G. A new biased estimator to combat the multicollinearity of the Gaussian linear regression model. Stats 2020, 3, 526–541.
5. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; Springer: Berlin/Heidelberg, Germany, 2013.
6. Næs, T.; Indahl, U. A unified description of classical classification methods for multicollinear data. J. Chemom. 1998, 12, 205–220.
7. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67.
8. Liu, K. A new class of biased estimate in linear regression. Commun. Stat. Theory Methods 1993, 22, 393–402.
9. Kibria, B.M.G.; Lukman, A.F. A new ridge-type estimator for the linear regression model: Simulations and applications. Scientifica 2020, 2020, 9758378.
10. Owolabi, A.T.; Ayinde, K.; Alabi, O.O. A new ridge-type estimator for the linear regression model with correlated regressors. Concurr. Comput. Pract. Exp. 2022, 34, e6933.
11. Lukman, A.F.; Ayinde, K.; Binuomote, S.; Clement, O.A. Modified ridge-type estimator to combat multicollinearity: Application to chemical data. J. Chemom. 2019, 33, e3125.
12. Qasim, M.; Månsson, K.; Sjölander, P.; Kibria, B.G. A new class of efficient and debiased two-step shrinkage estimators: Method and application. J. Appl. Stat. 2021, 49, 4181–4205.
13. Stein, C.M. Multiple Regression. In Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling; Stanford University Press: Palo Alto, CA, USA, 1960.
14. Wang, W.; Li, L.; Li, S.; Yin, F.; Liao, F.; Zhang, T.; Li, X.; Xiao, X.; Ma, Y. Average ordinary least squares-centered penalized regression: A more efficient way to address multicollinearity than ridge regression. Stat. Neerl. 2022, 76, 347–368.
15. Baye, R.; Parker, F. Combining ridge and principal component regression: A money demand illustration. Commun. Stat. Theory Methods 1984, 13, 197–205.
16. Farghali, R.A.; Lukman, A.F.; Ogunleye, A. Enhancing model predictions through the fusion of Stein estimator and principal component regression. J. Stat. Comput. Simul. 2024.
17. Davino, C.; Romano, R.; Vistocco, D. Handling multicollinearity in quantile regression through the use of principal component regression. METRON 2022, 80, 153–174.
18. Kaçıranlar, S.; Sakallıoğlu, S. Combining the Liu estimator and the principal component regression estimator. Commun. Stat. Theory Methods 2001, 30, 2699–2705.
19. Chang, X.; Yang, H. Combining two-parameter and principal component regression estimators. Stat. Pap. 2012, 53, 549–562.
20. Lukman, A.F.; Ayinde, K.; Oludoun, O.; Onate, C.A. Combining modified ridge-type and principal component regression estimators. Sci. Afr. 2020, 9, e00536.
21. Vajargah, K.F. Comparing ridge regression and principal components regression by Monte Carlo simulation based on MSE. J. Comput. Sci. Comput. Math. 2013, 3, 25–29.
22. Wu, J. On the performance of principal component Liu-type estimator under the mean square error criterion. J. Appl. Math. 2013, 2013, 858794.
23. Cliff, N. The eigenvalues-greater-than-one rule and the reliability of components. Psychol. Bull. 1988, 103, 276–279.
24. Hoerl, A.E.; Kannard, R.W.; Baldwin, K.F. Ridge regression: Some simulations. Commun. Stat. 1975, 4, 105–123.
25. Li, S.; Wang, W.; Yao, M.; Wang, J.; Du, Q.; Li, X.; Tian, X.; Zeng, J.; Deng, Y.; Zhang, T.; et al. Poisson average maximum likelihood-centered penalized estimator: A new estimator to better address multicollinearity in Poisson regression. Stat. Neerl. 2022, 78, 208–227.
26. Kaiser, H.F. The application of electronic computers to factor analysis. Educ. Psychol. Meas. 1960, 20, 141–151.
27. Lukman, A.F.; Allohibi, J.; Jegede, S.L.; Adewuyi, E.T.; Oke, S.; Alharbi, A.A. Kibria–Lukman-type estimator for regularization and variable selection with application to cancer data. Mathematics 2023, 11, 4795.
28. Arashi, M.; Asar, Y.; Yüzbaşı, B. SLASSO: A scaled LASSO for multicollinear situations. J. Stat. Comput. Simul. 2021, 91, 3170–3183.
Figure 1. Estimated PMSE by sample size when ρ = 0.8, p = 3, and σ = 5.
Figure 2. Estimated MSE by sample size when ρ = 0.9, p = 3, and σ = 5.
Figure 3. Estimated MSE by sample size when ρ = 0.99, p = 3, and σ = 5.
Figure 4. Estimated PMSE by sample size when ρ = 0.99, p = 3, and σ = 5.
Figure 5. Correlation heatmap for Asphalt Binder dataset.
Figure 6. Variance inflation plot for Asphalt Binder dataset.
Figure 7. Correlation heatmap for Gasoline Mileage dataset.
Figure 8. Variance inflation plot for Gasoline Mileage dataset.
Table 1. Estimated MSE values for p = 3 with σ = 5.

| Estimator | MSE (n = 40) | PMSE (n = 40) | MSE (n = 50) | PMSE (n = 50) | MSE (n = 100) | PMSE (n = 100) | MSE (n = 200) | PMSE (n = 200) |
|---|---|---|---|---|---|---|---|---|
| ρ = 0.8 | | | | | | | | |
| α̂_LSM | 4.5703 | 1.8058 | 3.7860 | 1.5066 | 1.4439 | 0.7212 | 0.7814 | 0.3768 |
| α̂_RR | 0.7274 | 1.3422 | 0.6345 | 1.5999 | 0.5705 | 1.5833 | 0.5235 | 1.1366 |
| α̂_PCRR | 0.6994 | 1.3336 | 0.6102 | 1.5923 | 0.5629 | 1.5807 | 0.5176 | 1.1345 |
| α̂_ALPR | 0.3408 | 0.6067 | 0.2077 | 0.4857 | 0.0990 | 0.2578 | 0.0612 | 0.1213 |
| α̂_PCALPR | 0.3091 | 0.5961 | 0.1835 | 0.4785 | 0.0913 | 0.2550 | 0.0548 | 0.1190 |
| ρ = 0.9 | | | | | | | | |
| α̂_LSM | 8.2602 | 1.8084 | 6.9663 | 1.5057 | 2.6458 | 0.7200 | 1.4272 | 0.3756 |
| α̂_RR | 0.6386 | 1.3292 | 0.5172 | 1.5299 | 0.4680 | 1.4253 | 0.4303 | 1.0351 |
| α̂_PCRR | 0.5867 | 1.3208 | 0.4749 | 1.5230 | 0.4565 | 1.4232 | 0.4218 | 1.0335 |
| α̂_ALPR | 0.3194 | 0.6107 | 0.1921 | 0.4846 | 0.0935 | 0.2558 | 0.0572 | 0.1191 |
| α̂_PCALPR | 0.2661 | 0.6019 | 0.1501 | 0.4779 | 0.0820 | 0.2537 | 0.0486 | 0.1175 |
| ρ = 0.99 | | | | | | | | |
| α̂_LSM | 75.5686 | 1.8135 | 65.0997 | 1.5068 | 24.5618 | 0.7180 | 13.2297 | 0.3742 |
| α̂_RR | 0.7584 | 0.7546 | 0.5639 | 0.7169 | 0.2559 | 0.5629 | 0.2045 | 0.3998 |
| α̂_PCRR | 0.2983 | 0.7466 | 0.1953 | 0.7106 | 0.1708 | 0.5613 | 0.1492 | 0.3987 |
| α̂_ALPR | 0.7064 | 0.6182 | 0.5047 | 0.4852 | 0.1625 | 0.2534 | 0.0993 | 0.1169 |
| α̂_PCALPR | 0.2460 | 0.6103 | 0.1365 | 0.4789 | 0.0774 | 0.2518 | 0.0440 | 0.1158 |
| ρ = 0.999 | | | | | | | | |
| α̂_LSM | 747.4641 | 1.8152 | 646.6567 | 1.5075 | 243.8220 | 0.7174 | 131.3629 | 0.3739 |
| α̂_RR | 4.8993 | 0.6013 | 3.8973 | 0.4959 | 0.9154 | 0.2828 | 0.5841 | 0.1504 |
| α̂_PCRR | 0.3800 | 0.5934 | 0.2714 | 0.4897 | 0.0951 | 0.2812 | 0.0574 | 0.1493 |
| α̂_ALPR | 4.9065 | 0.6208 | 3.8943 | 0.4858 | 0.9060 | 0.2528 | 0.5716 | 0.1165 |
| α̂_PCALPR | 0.3869 | 0.6129 | 0.2688 | 0.4796 | 0.0859 | 0.2512 | 0.0448 | 0.1154 |
Least squares method (LSM), ridge regression (RR), principal components ridge regression (PCRR), average least squares method-centered penalized regression (ALPR), and principal component average least squares method-centered penalized regression (PCALPR).
Table 2. Estimated MSE values for p = 3 with σ = 10.

| Estimator | MSE (n = 40) | PMSE (n = 40) | MSE (n = 50) | PMSE (n = 50) | MSE (n = 100) | PMSE (n = 100) | MSE (n = 200) | PMSE (n = 200) |
|---|---|---|---|---|---|---|---|---|
| ρ = 0.8 | | | | | | | | |
| α̂_LSM | 18.2811 | 7.2231 | 15.1442 | 6.0263 | 5.7757 | 2.8849 | 3.1255 | 1.5073 |
| α̂_RR | 0.9168 | 1.7485 | 0.8824 | 2.2964 | 0.8793 | 2.4704 | 0.8848 | 1.9435 |
| α̂_PCRR | 0.9085 | 1.7460 | 0.8752 | 2.2942 | 0.8776 | 2.4698 | 0.8838 | 1.9431 |
| α̂_ALPR | 1.2426 | 2.3914 | 0.7333 | 1.9121 | 0.3638 | 1.0202 | 0.2172 | 0.4750 |
| α̂_PCALPR | 1.2280 | 2.3798 | 0.7269 | 1.9115 | 0.3619 | 1.0188 | 0.2160 | 0.4744 |
| ρ = 0.9 | | | | | | | | |
| α̂_LSM | 33.0407 | 7.2338 | 27.8653 | 6.0229 | 10.5832 | 2.8800 | 5.7089 | 1.5024 |
| α̂_RR | 0.8673 | 1.9343 | 0.8025 | 2.5496 | 0.8090 | 2.5199 | 0.8207 | 2.0128 |
| α̂_PCRR | 0.8513 | 1.9317 | 0.7895 | 2.5475 | 0.8061 | 2.5194 | 0.8189 | 2.0124 |
| α̂_ALPR | 1.0717 | 2.4100 | 0.6032 | 1.9110 | 0.3275 | 1.0150 | 0.1932 | 0.4697 |
| α̂_PCALPR | 1.0541 | 2.4058 | 0.5907 | 1.9096 | 0.3245 | 1.0142 | 0.1914 | 0.4692 |
| ρ = 0.99 | | | | | | | | |
| α̂_LSM | 302.2745 | 7.2540 | 260.3987 | 6.0271 | 98.2470 | 2.8721 | 52.9188 | 1.4969 |
| α̂_RR | 0.7899 | 1.6973 | 0.5868 | 1.8439 | 0.4718 | 1.4856 | 0.4570 | 1.1875 |
| α̂_PCRR | 0.6424 | 1.6947 | 0.4689 | 1.8418 | 0.4475 | 1.4852 | 0.4415 | 1.1872 |
| α̂_ALPR | 1.0686 | 2.4428 | 0.6042 | 1.9164 | 0.3278 | 1.0074 | 0.1882 | 0.4634 |
| α̂_PCALPR | 0.9207 | 2.4403 | 0.4866 | 1.9144 | 0.3035 | 1.0069 | 0.1727 | 0.4631 |
| ρ = 0.999 | | | | | | | | |
| α̂_LSM | 2989.8565 | 7.2607 | 2586.6267 | 6.0300 | 975.2882 | 2.8697 | 525.4517 | 1.4957 |
| α̂_RR | 2.2111 | 1.9215 | 1.6236 | 1.7000 | 0.5257 | 0.9417 | 0.3502 | 0.5326 |
| α̂_PCRR | 0.7559 | 1.9189 | 0.4611 | 1.6980 | 0.2885 | 0.9412 | 0.1977 | 0.5322 |
| α̂_ALPR | 2.4064 | 2.4533 | 1.6764 | 1.9192 | 0.5447 | 1.0052 | 0.3241 | 0.4620 |
| α̂_PCALPR | 0.9509 | 2.4507 | 0.5142 | 1.9172 | 0.3076 | 1.0047 | 0.1716 | 0.4617 |
Least squares method (LSM), ridge regression (RR), principal components ridge regression (PCRR), average least squares method-centered penalized regression (ALPR), and principal component average least squares method-centered penalized regression (PCALPR).
Table 3. Estimated MSE values for p = 7 with σ = 5.

| Estimator | MSE (n = 40) | PMSE (n = 40) | MSE (n = 50) | PMSE (n = 50) | MSE (n = 100) | PMSE (n = 100) | MSE (n = 200) | PMSE (n = 200) |
|---|---|---|---|---|---|---|---|---|
| ρ = 0.8 | | | | | | | | |
| α̂_LSM | 12.6808 | 4.3744 | 10.7662 | 3.5549 | 4.4001 | 1.7374 | 2.2289 | 0.8519 |
| α̂_RR | 0.6043 | 1.8219 | 0.4060 | 2.2655 | 0.4646 | 1.9878 | 0.4255 | 1.6291 |
| α̂_PCRR | 0.5461 | 1.7997 | 0.3676 | 2.2523 | 0.4552 | 1.9848 | 0.4251 | 1.6289 |
| α̂_ALPR | 0.3046 | 0.6823 | 0.1782 | 0.5488 | 0.1295 | 0.2865 | 0.0945 | 0.1418 |
| α̂_PCALPR | 0.2470 | 0.6617 | 0.1389 | 0.5351 | 0.1204 | 0.2832 | 0.0940 | 0.1415 |
| ρ = 0.9 | | | | | | | | |
| α̂_LSM | 23.7578 | 4.3742 | 20.3056 | 3.5557 | 8.2658 | 1.7357 | 4.1958 | 0.8514 |
| α̂_RR | 0.4969 | 1.5066 | 0.3179 | 1.5530 | 0.3256 | 1.4605 | 0.2910 | 1.1853 |
| α̂_PCRR | 0.3880 | 1.4847 | 0.2416 | 1.5389 | 0.3057 | 1.4568 | 0.2851 | 1.1842 |
| α̂_ALPR | 0.3215 | 0.6669 | 0.2057 | 0.5421 | 0.1314 | 0.2741 | 0.0954 | 0.1314 |
| α̂_PCALPR | 0.2129 | 0.6455 | 0.1290 | 0.5278 | 0.1119 | 0.2704 | 0.0896 | 0.1302 |
| ρ = 0.99 | | | | | | | | |
| α̂_LSM | 225.0419 | 4.3745 | 193.0962 | 3.5566 | 78.3624 | 1.7332 | 39.8799 | 0.8509 |
| α̂_RR | 1.2630 | 0.6365 | 0.8995 | 0.5504 | 0.3213 | 0.3226 | 0.1876 | 0.1881 |
| α̂_PCRR | 0.2164 | 0.6143 | 0.1456 | 0.5357 | 0.1131 | 0.3184 | 0.0971 | 0.1864 |
| α̂_ALPR | 1.2697 | 0.6510 | 0.8987 | 0.5358 | 0.3157 | 0.2611 | 0.1772 | 0.1207 |
| α̂_PCALPR | 0.2230 | 0.6288 | 0.1448 | 0.5211 | 0.1076 | 0.2569 | 0.0868 | 0.1190 |
| ρ = 0.999 | | | | | | | | |
| α̂_LSM | 2238.3432 | 4.3747 | 1921.0150 | 3.5567 | 779.3005 | 1.7325 | 396.8711 | 0.8507 |
| α̂_RR | 11.0217 | 0.6332 | 7.9128 | 0.5306 | 2.2126 | 0.2557 | 1.0266 | 0.1174 |
| α̂_PCRR | 0.5764 | 0.6109 | 0.3699 | 0.5159 | 0.1201 | 0.2515 | 0.0890 | 0.1156 |
| α̂_ALPR | 11.0260 | 0.6488 | 7.9135 | 0.5355 | 2.2142 | 0.2594 | 1.0271 | 0.1195 |
| α̂_PCALPR | 0.5804 | 0.6265 | 0.3707 | 0.5207 | 0.1219 | 0.2551 | 0.0896 | 0.1177 |
Least squares method (LSM), ridge regression (RR), principal components ridge regression (PCRR), average least squares method-centered penalized regression (ALPR), and principal component average least squares method-centered penalized regression (PCALPR).
Table 4. Estimated MSE values for p = 7 with σ = 10.

| Estimator | MSE (n = 40) | PMSE (n = 40) | MSE (n = 50) | PMSE (n = 50) | MSE (n = 100) | PMSE (n = 100) | MSE (n = 200) | PMSE (n = 200) |
|---|---|---|---|---|---|---|---|---|
| ρ = 0.8 | | | | | | | | |
| α̂_LSM | 50.7230 | 17.4975 | 43.0649 | 14.2195 | 17.6005 | 6.9495 | 8.9158 | 3.4075 |
| α̂_RR | 0.8421 | 2.8085 | 0.7031 | 4.6999 | 0.7958 | 3.6363 | 0.7993 | 3.3193 |
| α̂_PCRR | 0.8246 | 2.8017 | 0.6928 | 4.6963 | 0.7936 | 3.6356 | 0.7996 | 3.3194 |
| α̂_ALPR | 0.7998 | 2.5640 | 0.3560 | 2.0835 | 0.2830 | 1.0606 | 0.1727 | 0.4980 |
| α̂_PCALPR | 0.7840 | 2.5634 | 0.3457 | 2.0812 | 0.2806 | 1.0584 | 0.1730 | 0.4982 |
| ρ = 0.9 | | | | | | | | |
| α̂_LSM | 95.0311 | 17.4970 | 81.2224 | 14.2229 | 33.0632 | 6.9429 | 16.7833 | 3.4056 |
| α̂_RR | 0.7358 | 2.9038 | 0.5560 | 4.1669 | 0.6570 | 3.4652 | 0.6588 | 3.1903 |
| α̂_PCRR | 0.7021 | 2.8969 | 0.5345 | 4.1629 | 0.6519 | 3.4642 | 0.6577 | 3.1901 |
| α̂_ALPR | 0.6698 | 2.5407 | 0.3233 | 2.0830 | 0.2506 | 1.0437 | 0.1565 | 0.4854 |
| α̂_PCALPR | 0.6367 | 2.5360 | 0.3019 | 2.0795 | 0.2456 | 1.0423 | 0.1555 | 0.4852 |
| ρ = 0.99 | | | | | | | | |
| α̂_LSM | 900.1675 | 17.4981 | 772.3847 | 14.2263 | 313.449 | 6.9330 | 159.519 | 3.4034 |
| α̂_RR | 0.7594 | 1.9251 | 0.4901 | 1.9288 | 0.3045 | 1.2156 | 0.2312 | 0.8725 |
| α̂_PCRR | 0.4267 | 1.9179 | 0.2653 | 1.9243 | 0.2464 | 1.2145 | 0.2071 | 0.8720 |
| α̂_ALPR | 0.8714 | 2.5147 | 0.5060 | 2.0838 | 0.2797 | 1.0247 | 0.1674 | 0.4726 |
| α̂_PCALPR | 0.5387 | 2.5075 | 0.2813 | 2.0794 | 0.2216 | 1.0235 | 0.1434 | 0.4721 |
| ρ = 0.999 | | | | | | | | |
| α̂_LSM | 8953.3728 | 17.4987 | 7684.06 | 14.2270 | 3117.2021 | 6.9300 | 1587.4843 | 3.4030 |
| α̂_RR | 3.9256 | 2.2907 | 2.6138 | 2.0008 | 0.8013 | 0.9544 | 0.3970 | 0.4443 |
| α̂_PCRR | 0.5970 | 2.2834 | 0.3382 | 1.9962 | 0.2127 | 0.9532 | 0.1388 | 0.4438 |
| α̂_ALPR | 3.9659 | 2.5099 | 2.6223 | 2.0854 | 0.8124 | 1.0212 | 0.4014 | 0.4709 |
| α̂_PCALPR | 0.6373 | 2.5026 | 0.3468 | 2.0809 | 0.2239 | 1.0200 | 0.1432 | 0.4704 |
Least squares method (LSM), ridge regression (RR), principal components ridge regression (PCRR), average least squares method-centered penalized regression (ALPR), and principal component average least squares method-centered penalized regression (PCALPR).
Table 5. Regression estimates for the Asphalt Binder data.

| Coefficient | α̂_LSM | α̂_RR | α̂_PCRR | α̂_ALPR | α̂_PCALPR |
|---|---|---|---|---|---|
| Intercept | 18.4213 | 18.2982 | 18.2982 | 17.7759 | 17.7771 |
| x_i1 | −0.9374 | −0.9222 | −0.9689 | 0.7659 | 0.6152 |
| x_i2 | 1.0047 | 0.8501 | 0.1988 | 3.9515 | 3.4056 |
| x_i3 | 0.5034 | 0.3194 | 0.9184 | 4.1772 | 4.7278 |
| x_i4 | −0.3170 | −0.6309 | −0.9350 | 1.6927 | 1.6568 |
| x_i5 | −1.4497 | −1.4092 | −0.6252 | −0.7086 | −0.0231 |
| x_i6 | 0.6610 | 0.6166 | 0.5210 | 0.8970 | 0.9224 |
| x_i7 | 0.9434 | 0.9178 | 0.3774 | 1.2513 | 0.7737 |
| x_i8 | 1.1531 | 0.9678 | 0.5222 | 0.7573 | 0.6310 |
| x_i9 | −0.2055 | 0.0241 | 0.6825 | 0.9123 | 1.1603 |
| x_i10 | −1.2356 | −1.0465 | −0.5421 | 0.4754 | 0.6927 |
| x_i11 | 1.0707 | 0.9054 | −0.0403 | 1.1342 | 0.5469 |
| x_i12 | −0.6526 | −0.5802 | 0.0206 | −0.1788 | 0.2577 |
| SMSE | 131.4192 | 16.5916 | 16.2259 | 13.1748 | 11.4092 |
Table 6. Variable description.

| Variable Name | Description |
|---|---|
| x_i1 | Displacement (cubic inches) |
| x_i2 | Horsepower (foot-pounds) |
| x_i4 | Compression ratio |
| x_i5 | Rear axle ratio |
| x_i6 | Carburetor (barrels) |
| x_i7 | Number of transmission speeds |
| x_i8 | Overall length (inches) |
| x_i9 | Width (inches) |
| x_i10 | Weight (pounds) |
| y | Miles per gallon |
Table 7. Regression estimates for the Gasoline Mileage data.

| Coefficient | α̂_LSM | α̂_RR | α̂_PCRR | α̂_ALPR | α̂_PCALPR |
|---|---|---|---|---|---|
| Intercept | 20.6208 | 19.3242 | 0.0000 | 13.9381 | −0.3106 |
| x_i1 | −6.0835 | −2.9430 | −0.9192 | −1.2406 | −1.1774 |
| x_i2 | 2.5176 | −0.6611 | −0.8649 | −0.8659 | −0.9807 |
| x_i4 | 0.9377 | 0.6267 | 0.4281 | 0.5940 | 0.1501 |
| x_i5 | 0.9549 | 0.5652 | 0.6963 | 0.6008 | 0.2435 |
| x_i6 | 0.6851 | 0.2456 | −0.5545 | −0.3273 | −0.6128 |
| x_i7 | −1.4518 | 0.0294 | 0.7797 | 0.6480 | 0.1987 |
| x_i8 | 4.1051 | 0.9788 | −0.8500 | −0.2911 | −0.9909 |
| x_i9 | 0.6992 | −0.5416 | −0.8113 | −0.6839 | −1.0333 |
| x_i10 | −7.1948 | −2.1462 | −0.9077 | −1.0316 | −1.0077 |
| SMSE | 782.9561 | 17.959 | 70.5681 | 99.0892 | 7.276 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Lukman, A.F.; Adewuyi, E.T.; Alqasem, O.A.; Arashi, M.; Ayinde, K. Enhanced Model Predictions through Principal Components and Average Least Squares-Centered Penalized Regression. Symmetry 2024, 16, 469. https://doi.org/10.3390/sym16040469

