Search Results (6)

Search Parameters:
Keywords = Stein-type shrinkage estimators

33 pages, 2441 KB  
Article
Kernel Ridge-Type Shrinkage Estimators in Partially Linear Regression Models with Correlated Errors
by Syed Ejaz Ahmed, Ersin Yilmaz and Dursun Aydın
Mathematics 2025, 13(12), 1959; https://doi.org/10.3390/math13121959 - 13 Jun 2025
Viewed by 450
Abstract
Partially linear time series models often suffer from multicollinearity among regressors and autocorrelated errors, both of which can inflate estimation risk. This study introduces a generalized ridge-type kernel (GRTK) framework that combines kernel smoothing with ridge shrinkage and augments it through ordinary and positive-part Stein adjustments. Closed-form expressions and large-sample properties are established, and data-driven criteria—including GCV, AICc, BIC, and RECP—are used to tune the bandwidth and shrinkage penalties. Monte Carlo simulations indicate that the proposed procedures usually reduce risk relative to existing semiparametric alternatives, particularly when the predictors are strongly correlated and the error process is dependent. An empirical study of US airline-delay data further demonstrates that GRTK produces a stable, interpretable fit, captures a nonlinear air-time effect overlooked by conventional approaches, and leaves only modest residual autocorrelation. By tackling multicollinearity and autocorrelation within a single, flexible estimator, the GRTK family offers practitioners a practical avenue for more reliable inference in partially linear time series settings.
(This article belongs to the Special Issue Statistical Forecasting: Theories, Methods and Applications)
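The ordinary and positive-part Stein adjustments mentioned above follow a standard pattern: pull an unrestricted estimate toward a restricted one, shrinking harder when a test statistic suggests the restriction is plausible. A minimal sketch of that pattern, not the paper's exact GRTK formulas; the shrinkage constant `d` and the test statistic are illustrative placeholders:

```python
import numpy as np

def stein_shrink(beta_full, beta_restricted, test_stat, d, positive_part=True):
    """Stein-type shrinkage: move the unrestricted estimate toward the
    restricted one; the smaller the test statistic, the stronger the pull.
    `d` is an illustrative shrinkage constant (often tied to the number
    of restrictions in the literature)."""
    w = 1.0 - d / test_stat
    if positive_part:
        # Positive-part truncation prevents the weight from going negative,
        # which would over-shrink past the restricted estimate.
        w = max(0.0, w)
    return beta_restricted + w * (beta_full - beta_restricted)
```

With the positive-part version, the estimator collapses exactly to the restricted estimate whenever the test statistic falls at or below `d`.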

26 pages, 6617 KB  
Article
Penalty Strategies in Semiparametric Regression Models
by Ayuba Jack Alhassan, S. Ejaz Ahmed, Dursun Aydin and Ersin Yilmaz
Math. Comput. Appl. 2025, 30(3), 54; https://doi.org/10.3390/mca30030054 - 12 May 2025
Viewed by 1441
Abstract
This study includes a comprehensive evaluation of six penalty estimation strategies for partially linear regression models (PLRMs), focusing on their performance in the presence of multicollinearity and their ability to handle both parametric and nonparametric components. The methods under consideration include Ridge regression, Lasso, Adaptive Lasso (aLasso), smoothly clipped absolute deviation (SCAD), ElasticNet, and minimax concave penalty (MCP). In addition to these established methods, we also incorporate Stein-type shrinkage estimation techniques, namely the standard and positive shrinkage estimators, and assess their effectiveness in this context. To estimate the PLRMs, we consider a kernel smoothing technique grounded in penalized least squares. Our investigation involves a theoretical analysis of the estimators’ asymptotic properties and a detailed simulation study designed to compare their performance under a variety of conditions, including different sample sizes, numbers of predictors, and levels of multicollinearity. The simulation results reveal that aLasso and shrinkage estimators, particularly the positive shrinkage estimator, consistently outperform the other methods in terms of Mean Squared Error (MSE) relative efficiencies (RE), especially when the sample size is small and multicollinearity is high. Furthermore, we present a real data analysis using the Hitters dataset to demonstrate the applicability of these methods in a practical setting. The results of the real data analysis align with the simulation findings, highlighting the superior predictive accuracy of aLasso and the shrinkage estimators in the presence of multicollinearity. The findings of this study offer valuable insights into the strengths and limitations of these penalty and shrinkage strategies, guiding their application in future research and practice involving semiparametric regression.
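The penalized least squares idea behind the ridge component can be sketched in closed form. This toy example (synthetic data, illustrative penalty value) shows how the penalty stabilizes coefficients when two regressors are nearly collinear:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y.
    lam = 0 recovers ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
n = 60
x0 = rng.normal(size=n)
X = np.column_stack([x0, x0 + 0.01 * rng.normal(size=n)])  # nearly collinear pair
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=n)

b_ols = ridge_fit(X, y, lam=0.0)    # unstable: X'X is near-singular
b_ridge = ridge_fit(X, y, lam=1.0)  # penalty shrinks and stabilizes the fit
```

The ridge solution's norm is non-increasing in the penalty, so the penalized coefficients are never wilder than the OLS ones; the Lasso, SCAD, and MCP penalties differ mainly in how they trade this shrinkage against exact zeroing of coefficients.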

18 pages, 385 KB  
Article
New and Efficient Estimators of Reliability Characteristics for a Family of Lifetime Distributions under Progressive Censoring
by Syed Ejaz Ahmed, Reza Arabi Belaghi, Abdulkadir Hussein and Alireza Safariyan
Mathematics 2024, 12(10), 1599; https://doi.org/10.3390/math12101599 - 20 May 2024
Cited by 3 | Viewed by 1271
Abstract
Estimation of reliability and stress–strength parameters is important in the manufacturing industry. In this paper, we develop shrinkage-type estimators for the reliability and stress–strength parameters based on progressively censored data from a rich class of distributions. These new estimators improve the performance of the commonly used Maximum Likelihood Estimators (MLEs) by reducing their mean squared errors. We provide analytical asymptotic and bootstrap confidence intervals for the targeted parameters. Through a detailed simulation study, we demonstrate that the new estimators have better performance than the MLEs. Finally, we illustrate the application of the new methods to two industrial data sets, showcasing their practical relevance and effectiveness.
(This article belongs to the Special Issue Reliability Estimation and Mathematical Statistics)

18 pages, 376 KB  
Article
Estimation of Large-Dimensional Covariance Matrices via Second-Order Stein-Type Regularization
by Bin Zhang, Hengzhen Huang and Jianbin Chen
Entropy 2023, 25(1), 53; https://doi.org/10.3390/e25010053 - 27 Dec 2022
Cited by 3 | Viewed by 2579
Abstract
This paper tackles the problem of estimating the covariance matrix in large-dimension and small-sample-size scenarios. Inspired by the well-known linear shrinkage estimation, we propose a novel second-order Stein-type regularization strategy to generate well-conditioned covariance matrix estimators. We model the second-order Stein-type regularization as a quadratic polynomial concerning the sample covariance matrix and a given target matrix, representing the prior information of the actual covariance structure. To obtain practical covariance matrix estimators, we choose the spherical and diagonal target matrices and develop unbiased estimates of the theoretical mean squared errors, which measure the distances between the actual covariance matrix and its estimators. We formulate the second-order Stein-type regularization as a convex optimization problem, resulting in the optimal second-order Stein-type estimators. Numerical simulations reveal that the proposed estimators can significantly lower the Frobenius losses compared with the existing Stein-type estimators. Moreover, a real data analysis in portfolio selection verifies the performance of the proposed estimators.
(This article belongs to the Special Issue Statistical Methods for Modeling High-Dimensional and Complex Data)
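The linear shrinkage baseline that this second-order approach extends can be sketched as a convex combination of the sample covariance and a spherical target. A minimal sketch with a fixed illustrative weight; the estimators in the paper instead choose the weights by minimizing an unbiased estimate of the MSE:

```python
import numpy as np

def linear_shrinkage(S, rho):
    """Shrink the sample covariance S toward the spherical target mu*I,
    where mu = tr(S)/p. rho in [0, 1] is the shrinkage intensity."""
    p = S.shape[0]
    mu = np.trace(S) / p
    return (1.0 - rho) * S + rho * mu * np.eye(p)

rng = np.random.default_rng(2)
n, p = 10, 50                    # p >> n: the sample covariance is singular
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)
Sigma_hat = linear_shrinkage(S, rho=0.3)  # positive definite by construction
```

Even a modest intensity restores invertibility: the shrunk matrix's smallest eigenvalue is bounded below by `rho * mu`, whereas the raw sample covariance has rank at most `n - 1`.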

13 pages, 596 KB  
Article
Classification in High Dimension Using the Ledoit–Wolf Shrinkage Method
by Rasoul Lotfi, Davood Shahsavani and Mohammad Arashi
Mathematics 2022, 10(21), 4069; https://doi.org/10.3390/math10214069 - 1 Nov 2022
Cited by 1 | Viewed by 3230
Abstract
Classification using linear discriminant analysis (LDA) is challenging when the number of variables is large relative to the number of observations. Algorithms such as LDA require the computation of the feature vector’s precision matrix. In a high-dimensional setting, the sample covariance matrix is singular, so the maximum likelihood estimator of the precision matrix does not exist. In this paper, we employ the Stein-type shrinkage estimation of Ledoit and Wolf for high-dimensional data classification. The proposed approach’s efficiency is numerically compared to that of existing methods, including LDA, cross-validation, gLasso, and SVM, using the misclassification error criterion.
(This article belongs to the Special Issue Contemporary Contributions to Statistical Modelling and Data Science)
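This pipeline is available off the shelf: scikit-learn's LDA accepts a Ledoit–Wolf shrinkage covariance via `shrinkage="auto"` with the `"lsqr"` (or `"eigen"`) solver. A minimal sketch on synthetic data, with dimensions chosen purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy problem with p close to n, where the sample covariance is ill-conditioned.
X, y = make_classification(n_samples=80, n_features=60, n_informative=10,
                           random_state=0)

# shrinkage="auto" plugs the Ledoit-Wolf covariance estimate into the
# discriminant, avoiding inversion of a (near-)singular sample covariance.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X, y)
print("training accuracy:", lda.score(X, y))
```

Plain LDA (`solver="svd"`, no shrinkage) would rely on a pseudo-inverse here; the shrinkage variant gives a well-posed discriminant instead.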

22 pages, 496 KB  
Article
Improved Average Estimation in Seemingly Unrelated Regressions
by Ali Mehrabani and Aman Ullah
Econometrics 2020, 8(2), 15; https://doi.org/10.3390/econometrics8020015 - 27 Apr 2020
Cited by 7 | Viewed by 5778
Abstract
In this paper, we propose an efficient weighted average estimator in Seemingly Unrelated Regressions. This average estimator shrinks a generalized least squares (GLS) estimator towards a restricted GLS estimator, where the restrictions represent possible parameter homogeneity specifications. The shrinkage weight is inversely proportional to a weighted quadratic loss function. The approximate bias and second moment matrix of the average estimator using the large-sample approximations are provided. We give the conditions under which the average estimator dominates the GLS estimator on the basis of their mean squared errors. We illustrate our estimator by applying it to a cost system for United States (U.S.) commercial banks over the period from 2000 to 2018. Our results indicate that on average most of the banks have been operating under increasing returns to scale. We find that over the recent years, scale economies are a plausible reason for the growth in average size of banks and the tendency toward increasing scale is likely to continue.
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
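The averaging mechanism can be sketched generically: combine the unrestricted and restricted GLS fits, giving the restricted fit a weight that decays inversely with a quadratic distance between them. A hedged sketch only; `tau` is an illustrative tuning constant and the unweighted distance below stands in for the paper's weighted quadratic loss:

```python
import numpy as np

def average_gls(b_gls, b_rgls, tau):
    """Weighted average of unrestricted (b_gls) and restricted (b_rgls)
    estimates. The restricted weight is inversely proportional to a
    quadratic loss measuring how far the two fits disagree, capped at 1."""
    loss = float(np.sum((b_gls - b_rgls) ** 2))  # simple quadratic distance
    w = min(1.0, tau / loss) if loss > 0 else 1.0
    return w * b_rgls + (1.0 - w) * b_gls
```

When the two fits agree (the homogeneity restrictions look plausible), the estimator leans on the restricted fit; when they disagree sharply, it reverts to unrestricted GLS.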
