Overview of High-Dimensional Measurement Error Regression Models
Abstract
1. Introduction
2. Estimation Methods for Linear Models
2.1. Nonconvex Lasso
2.2. Convex Conditioned Lasso
2.3. Balanced Estimation
2.4. Calibrated Zero-Norm Regularized Least Square Estimation
2.5. Linear and Conic Programming Estimation
3. Estimation Methods for Generalized Linear Models
3.1. Estimation Method for Poisson Models
3.2. Generalized Matrix Uncertainty Selector
4. Hypothesis Testing Methods
4.1. Corrected Decorrelated Score Test
4.2. Wald and Score Tests for Poisson Models
5. Screening Methods
6. Conclusions
- Existing estimation methods for high-dimensional measurement error regression are designed mainly for linear or generalized linear models. Developing estimation methods for nonlinear models with high-dimensional measurement error data, such as nonparametric and semiparametric models, is therefore a pressing problem.
- Existing work focuses mainly on independent and identically distributed data. It is worthwhile to extend the estimation and hypothesis-testing methods to measurement error models for complex data, such as panel data and functional data.
- Most studies of high-dimensional measurement error models assume that the measurement errors have a specific covariance structure or that their covariance matrix is known. Developing estimation and hypothesis-testing methods when the covariance matrix of the measurement errors is completely unknown remains a challenging open problem.
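The role of the known error covariance in the third point can be made concrete. For additive errors W = X + U with Cov(U) = Σ_u known, E[WᵀW/n] = XᵀX/n + Σ_u, so subtracting Σ_u debiases the Gram matrix; since the corrected matrix can be indefinite when p > n, CoCoLasso-type methods first project it onto the positive semidefinite cone. The sketch below is illustrative only: the function names are ours, and the actual CoCoLasso projection minimizes an elementwise max-norm via ADMM rather than the simple eigenvalue clipping used here.

```python
import numpy as np

def corrected_gram(W, y, sigma_u):
    """Debiased surrogates for X'X/n and X'y/n under W = X + U, Cov(U) = sigma_u.

    Subtracting sigma_u removes the bias E[U'U/n] from the observed Gram
    matrix; W'y/n needs no correction when U is independent of the response noise.
    """
    n = W.shape[0]
    gram = W.T @ W / n - sigma_u   # unbiased for X'X/n, but may be indefinite
    cross = W.T @ y / n            # unbiased for X'y/n
    return gram, cross

def nearest_psd(gram):
    """Clip negative eigenvalues to restore positive semidefiniteness.

    Illustrative stand-in for the CoCoLasso projection step, which instead
    uses an elementwise max-norm projection computed by ADMM.
    """
    vals, vecs = np.linalg.eigh((gram + gram.T) / 2)
    return vecs @ np.diag(np.clip(vals, 0.0, None)) @ vecs.T
```

With these surrogates in place of XᵀX/n and Xᵀy/n, any standard Lasso solver can be run on the corrected quantities, which is the common starting point of the estimators surveyed in Section 2.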
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
| --- | --- |
| SIMEX | Simulation–extrapolation |
| SCAD | Smoothly clipped absolute deviation |
| SICA | Smooth integration of counting and absolute deviation |
| MCP | Minimax concave penalty |
| SIS | Sure independence screening |
| CoCoLasso | Convex conditioned Lasso |
| CaZnRLS | Calibrated zero-norm regularized least squares |
| MU | Matrix uncertainty |
| MEBoost | Measurement error boosting |
| SIMSELEX | Simulation–selection–extrapolation |
| IRO | Imputation-regularized optimization |
| FDR | False discovery rate |
| PMSc | Corrected penalized marginal screening |
| SISc | Corrected sure independence screening |
| ADMM | Alternating direction method of multipliers |
| BDCoCoLasso | Block coordinate descent convex conditioned Lasso |
| MPEC | Mathematical program with equilibrium constraints |
| GEP–MSCRA | Multi-stage convex relaxation approach |
| GMU | Generalized matrix uncertainty |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Luo, J.; Yue, L.; Li, G. Overview of High-Dimensional Measurement Error Regression Models. Mathematics 2023, 11, 3202. https://doi.org/10.3390/math11143202