Search Results (46)

Search Parameters:
Keywords = left censoring

27 pages, 5825 KB  
Article
A New One-Parameter Model by Extending Maxwell–Boltzmann Theory to Discrete Lifetime Modeling
by Ahmed Elshahhat, Hoda Rezk and Refah Alotaibi
Mathematics 2025, 13(17), 2803; https://doi.org/10.3390/math13172803 - 1 Sep 2025
Viewed by 413
Abstract
The Maxwell–Boltzmann (MB) distribution is fundamental in statistical physics, providing an exact description of particle speed or energy distributions. In this study, a discrete formulation derived via the survival function discretization technique extends the MB model's theoretical strengths to realistically handle lifetime and reliability data recorded in integer form, enabling accurate modeling under inherently discrete or censored observation schemes. The proposed discrete MB (DMB) model preserves the continuous MB's flexibility in capturing diverse hazard rate shapes, while directly addressing the discrete and often censored nature of real-world lifetime and reliability data. Its formulation accommodates right-skewed, left-skewed, and symmetric probability mass functions with an inherently increasing hazard rate, enabling robust modeling of negatively skewed and monotonic-failure processes where competing discrete models underperform. We establish a comprehensive suite of distributional properties, including closed-form expressions for the probability mass, cumulative distribution, hazard functions, quantiles, raw moments, dispersion indices, and order statistics. For parameter estimation under Type-II censoring, we develop maximum likelihood, Bayesian, and bootstrap-based approaches and propose six distinct interval estimation methods encompassing frequentist, resampling, and Bayesian paradigms. Extensive Monte Carlo simulations systematically compare estimator performance across varying sample sizes, censoring levels, and prior structures, revealing the superiority of Bayesian–MCMC estimators with highest posterior density intervals in small- to moderate-sample regimes. Two genuine datasets—spanning engineering reliability and clinical survival contexts—demonstrate the DMB model's superior goodness-of-fit and predictive accuracy over eleven competing discrete lifetime models.
(This article belongs to the Special Issue New Advance in Applied Probability and Statistical Inference)
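
The survival-function discretization step the abstract mentions is generic: a discrete pmf is obtained as P(X = k) = S(k) - S(k + 1) from any continuous survival function S. A minimal sketch using SciPy's Maxwell distribution as the stand-in; the DMB estimators and censoring machinery are the paper's own and are not reproduced here.

```python
import numpy as np
from scipy.stats import maxwell

def discrete_pmf_from_survival(sf, k):
    """Survival-function discretization: P(X = k) = S(k) - S(k + 1)."""
    return sf(k) - sf(k + 1)

# Discretize a Maxwell-Boltzmann distribution with (assumed) scale parameter a.
a = 2.0
sf = lambda x: maxwell.sf(x, scale=a)
k = np.arange(0, 15)
pmf = discrete_pmf_from_survival(sf, k)
print(pmf.sum())  # close to maxwell.sf(0) = 1, so this is (approximately) a valid pmf on 0..14
```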

18 pages, 366 KB  
Article
Nonparametric Transformation Models for Double-Censored Data with Crossed Survival Curves: A Bayesian Approach
by Ping Xu, Ruichen Ni, Shouzheng Chen, Zhihua Ma and Chong Zhong
Mathematics 2025, 13(15), 2461; https://doi.org/10.3390/math13152461 - 30 Jul 2025
Viewed by 357
Abstract
Double-censored data are frequently encountered in pharmacological and epidemiological studies, where the failure time can only be observed within a certain range and is otherwise either left- or right-censored. In this paper, we present a Bayesian approach for analyzing double-censored survival data with crossed survival curves. We introduce a novel pseudo-quantile I-splines prior to model monotone transformations under both random and fixed censoring schemes. Additionally, we incorporate categorical heteroscedasticity using the dependent Dirichlet process (DDP), enabling the estimation of crossed survival curves. Comprehensive simulations further validate the robustness and accuracy of the method, particularly under the fixed censoring scheme, where traditional approaches may not be applicable. In a randomized AIDS clinical trial, incorporating the categorical heteroscedasticity leads to a new finding: the effect of baseline log RNA levels is significant. The proposed framework provides a flexible and reliable tool for survival analysis, offering an alternative to parametric and semiparametric models.
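
The pseudo-quantile I-spline prior and DDP components are beyond a short sketch, but the double-censoring likelihood they build on is standard: exact observations contribute the density, left-censored ones the CDF, and right-censored ones the survival function. A hedged parametric illustration with a Weibull model (my choice of distribution, not the authors'):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def negloglik(params, t, status):
    """status: 0 = exact, 1 = left-censored at t, 2 = right-censored at t."""
    shape, scale = np.exp(params)  # log-parameterization keeps both positive
    ll = np.where(status == 0, weibull_min.logpdf(t, shape, scale=scale),
         np.where(status == 1, weibull_min.logcdf(t, shape, scale=scale),
                               weibull_min.logsf(t, shape, scale=scale)))
    return -ll.sum()

rng = np.random.default_rng(1)
t_true = weibull_min.rvs(1.5, scale=3.0, size=300, random_state=rng)
t = np.clip(t_true, 1.0, 6.0)  # failure times observable only inside [1, 6]
status = np.where(t_true < 1.0, 1, np.where(t_true > 6.0, 2, 0))
fit = minimize(negloglik, x0=[0.0, 0.0], args=(t, status))
print(np.exp(fit.x))  # roughly (1.5, 3.0)
```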

31 pages, 2581 KB  
Article
Start Time End Time Integration (STETI): Method for Including Recent Data to Analyze Trends in Kidney Cancer Survival
by Thobani Chaduka, Daniel Berleant, Michael A. Bauer, Peng-Hung Tsai and Shi-Ming Tu
Healthcare 2025, 13(12), 1451; https://doi.org/10.3390/healthcare13121451 - 17 Jun 2025
Viewed by 543
Abstract
Background/Objectives: Accurately estimating survival times is critical for clinical decision-making, treatment evaluation, resource allocation, and other purposes. Yet data from relatively recent diagnosis cohorts are strongly affected by right censoring, which biases average survival times downward. For example, 5-, 10-, or 20-year average survival times are not available until 5, 10, or 20 years after diagnosis, which may lie in the future, so they cannot yet be computed for recent cohorts. An approach to addressing this problem is described in this report. It is demonstrated here for kidney cancer survival but could also be applied to survival questions for other types of cancer, other diseases, stage progression times, and similar problems in medicine and other fields in which there is a need for up-to-date analyses of survival improvement trends. Methods: This study introduces STETI, an approach to survival estimation that integrates information about survival times of diagnosis-year cohorts with information about survival times of death-year cohorts. By leveraging data from death-year cohorts in addition to the more familiar diagnosis-year cohorts, STETI incorporates recent survival data often excluded by traditional approaches due to right censoring, which arises when the post-diagnosis time period of interest has not yet elapsed. Using data from SEER, we explain how the proposed approach integrates diagnosis-year cohorts with the death-year cohorts of recent years. We demonstrate that incorporating death-year cohorts addresses an important source of right censorship inherent in diagnosis-year cohorts from relatively recent years. This permits survival time trend analysis that accounts for recent improvements in survival time that would be difficult to capture using diagnosis-year cohorts alone. We tested linear and exponential models to demonstrate the method's ability to derive survival time trends from valuable data that would otherwise risk being left unused. Conclusions: Improved survival estimation can better support personalized treatment planning, healthcare benchmarking, and research into cancer subtypes as well as other domains. To this end, we introduce a hybrid analytical approach that addresses an important source of right censorship. Demonstrating it within the domain of kidney cancer is expected to help pave the way to other applications in oncology and beyond, and offers a case study of STETI as an approach to quantifying and projecting trends in survival time associated with therapeutic advancements.
(This article belongs to the Section Health Informatics and Big Data)
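
As a rough illustration of the death-year-cohort idea (not the authors' exact STETI procedure), one can average realized survival within each death-year cohort and fit a trend line. All data below are synthetic:

```python
import numpy as np

# Hypothetical SEER-like records: diagnosis year and death year (synthetic).
rng = np.random.default_rng(0)
dx_year = rng.integers(1995, 2020, size=5000)
death_year = dx_year + rng.exponential(5 + 0.2 * (dx_year - 1995))  # improving survival

# Death-year cohorts: average realized survival among those who died in year y.
years = np.arange(2005, 2023)
avg = [(death_year[(death_year >= y) & (death_year < y + 1)]
        - dx_year[(death_year >= y) & (death_year < y + 1)]).mean() for y in years]

slope, intercept = np.polyfit(years, avg, 1)  # linear trend in survival time
print(round(slope, 3), "added years of survival per calendar year (synthetic)")
```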

14 pages, 698 KB  
Article
Inferring the Timing of Antiretroviral Therapy by Zero-Inflated Random Change Point Models Using Longitudinal Data Subject to Left-Censoring
by Hongbin Zhang, McKaylee Robertson, Sarah L. Braunstein, David B. Hanna, Uriel R. Felsen, Levi Waldron and Denis Nash
Algorithms 2025, 18(6), 346; https://doi.org/10.3390/a18060346 - 5 Jun 2025
Viewed by 788
Abstract
We propose a new random change point model that utilizes routinely recorded individual-level HIV viral load data to estimate the timing of antiretroviral therapy (ART) initiation in people living with HIV. The change point distribution is assumed to follow a zero-inflated exponential distribution; the longitudinal data are additionally subject to left-censoring, and the underlying data-generating mechanism is a nonlinear mixed-effects model. We extend the Stochastic EM (StEM) algorithm by combining a Gibbs sampler with Metropolis–Hastings sampling. We apply the method to real HIV data to infer the timing of ART initiation since diagnosis. Additionally, we conduct simulation studies to assess the performance of our proposed method.
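
A minimal sketch of the assumed change-point distribution: a zero-inflated exponential places a point mass at zero (ART initiated at diagnosis) and an exponential tail elsewhere. Parameters here are hypothetical:

```python
import numpy as np

def rzie(n, p_zero, rate, rng):
    """Zero-inflated exponential: 0 with probability p_zero, else Exponential(rate)."""
    at_diagnosis = rng.random(n) < p_zero
    return np.where(at_diagnosis, 0.0, rng.exponential(1.0 / rate, size=n))

rng = np.random.default_rng(42)
tau = rzie(10_000, p_zero=0.3, rate=0.5, rng=rng)  # hypothetical parameters
print((tau == 0).mean(), tau[tau > 0].mean())      # about 0.3 and 1 / 0.5 = 2.0
```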

22 pages, 1930 KB  
Article
Health Expenditure Shocks and Household Poverty Amidst COVID-19 in Uganda: How Catastrophic?
by Dablin Mpuuga, Sawuya Nakijoba and Bruno L. Yawe
Economies 2025, 13(6), 149; https://doi.org/10.3390/economies13060149 - 26 May 2025
Viewed by 1005
Abstract
In this paper, we utilize the 2019/20 Uganda National Household Survey data to answer three related questions: (i) To what extent did out-of-pocket payments (OOPs) for health care services exceed the threshold for household financial catastrophe amidst COVID-19? (ii) What is the impoverishing effect of OOPs for health care services on household welfare? (iii) What are the socioeconomic and demographic determinants of OOPs for health care services in Uganda? Leveraging three health expenditure thresholds (10%, 25%, and 40%), we run a Tobit model for "left-censored" health expenditures and quantile regressions, and we find that among households which incur any form of health care expense, 37.7%, 33.6%, and 28.7% spend more than 10%, 25%, and 40% of their non-food expenditures on health care, respectively. Their average OOP budget share exceeds the respective thresholds by 82.9, 78.0, and 75.8 percentage points. While, on average, household expenditures on medicine increased amidst the COVID-19 pandemic, expenditures on consultations, transport, traditional doctors' medicines, and other unbroken hospital charges were reduced during the same period. We find that the comparatively low incidence and intensity of catastrophic health expenditures (CHEs) in the pandemic period was not necessarily due to low household health spending, but due to foregone and substituted care. Specifically, considering the entire weighted sample, about 22% of Ugandans did not seek medical care during the pandemic due to a lack of funds, compared to 18.6% in the pre-pandemic period. More Ugandans substituted medical care from health facilities with herbs and home remedies. We further find that a 10% increase in OOPs reduces household food consumption expenditures by 2.6%. This modality of health care financing, where households incur CHEs, keeps people in chronic poverty.
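
The "left-censored" Tobit piece can be sketched directly: a latent outcome y* = Xβ + ε with ε ~ N(0, σ²) and y = max(y*, 0) observed, so censored observations contribute Φ(-Xβ/σ) to the likelihood. A toy example, not the paper's specification or data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(theta, X, y):
    beta, log_sigma = theta[:-1], theta[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    cens = y <= 0  # left-censored at zero
    ll = np.where(cens, norm.logcdf(-mu / sigma), norm.logpdf(y, mu, sigma))
    return -ll.sum()

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
y_star = X @ np.array([0.5, 1.0]) + rng.normal(0, 1.5, size=2000)
y = np.maximum(y_star, 0.0)
fit = minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y))
print(fit.x[:2], np.exp(fit.x[2]))  # roughly (0.5, 1.0) and 1.5
```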

19 pages, 640 KB  
Article
Factors Impacting Technical Efficiency in Mexican WUOs: A DEA with a Spatial Component
by Gilberto Niebla Lizárraga, Jesús Alberto Somoza Ríos, Rosa del Carmen Lizárraga Bernal and Luis Alonso Cañedo Raygoza
Sustainability 2025, 17(10), 4540; https://doi.org/10.3390/su17104540 - 16 May 2025
Viewed by 916
Abstract
Efficient urban water management is crucial for sustainability, especially in contexts such as Mexico. Therefore, assessing the performance of Water Utility Organizations (WUOs) is very important. This study assesses the technical efficiency of 49 Mexican WUOs using cross-sectional data for 2020 and investigates the effect of geographic location as a potential determinant. A two-stage approach was applied. First, input-oriented Data Envelopment Analysis (DEA), under both Constant (CRS) and Variable (VRS) Returns to Scale assumptions, was used to evaluate technical efficiency with input measures of employment and costs, and output measures of volume produced and population served. The second stage involved Tobit regression modeling to examine the determinants of the technical inefficiency derived from the DEA (left-censored at zero), testing the effect of geographic microregions. The DEA results revealed substantial average inefficiency (mean scores of 0.73 CRS, 0.82 VRS), implying that input savings of 18–27% remain unrealized. Notably, the subsequent Tobit modeling found that broad geographical microregions were not statistically significant (p > 0.79) in accounting for these inefficiencies, implying essentially no explanatory power. The findings indicate that improvements in efficiency require going beyond broad geography, likely toward local managerial, institutional, or operational considerations. The present study provides empirical benchmarks for Mexican WUOs and evidence on the limited role of broad geography, thereby directing future research toward specific performance determinants.
(This article belongs to the Section Sustainable Water Management)
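
A sketch of the first stage: input-oriented CRS DEA solved as one linear program per decision-making unit. The data below are random placeholders, not the 49 WUOs:

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs_input(X, Y, j0):
    """Input-oriented CRS efficiency of unit j0. X: (n, m) inputs, Y: (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                       # decision vars: [theta, lambdas]
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])     # sum_j lam_j x_j <= theta * x_j0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])       # sum_j lam_j y_j >= y_j0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(0, None)] * (1 + n))
    return res.fun                                    # efficiency score in (0, 1]

rng = np.random.default_rng(7)
X = rng.uniform(1, 10, size=(49, 2))                  # e.g., employees, costs
Y = rng.uniform(1, 10, size=(49, 2))                  # e.g., volume, population served
scores = np.array([dea_crs_input(X, Y, j) for j in range(49)])
print(scores.mean())
```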

20 pages, 595 KB  
Article
Learning Gaussian Bayesian Network from Censored Data Subject to Limit of Detection by the Structural EM Algorithm
by Ping-Feng Xu, Shanyi Lin, Qian-Zhen Zheng and Man-Lai Tang
Mathematics 2025, 13(9), 1482; https://doi.org/10.3390/math13091482 - 30 Apr 2025
Viewed by 445
Abstract
A Bayesian network offers powerful knowledge representations for independence, conditional independence and causal relationships among variables in a given domain. Despite its wide application, the detection limits of modern measurement technologies make the use of Bayesian networks theoretically unfounded, even when the assumption of a multivariate Gaussian distribution is satisfied. In this paper, we introduce the censored Gaussian Bayesian network (GBN), an extension of GBNs designed to handle left- and right-censored data caused by instrumental detection limits. We further propose the censored Structural Expectation-Maximization (cSEM) algorithm, an iterative score-and-search framework that integrates Monte Carlo sampling in the E-step for efficient expectation computation and employs the iterative Markov chain Monte Carlo (MCMC) algorithm in the M-step to refine the network structure and parameters. This approach addresses the non-decomposability challenge of censored-data likelihoods. Through simulation studies, we illustrate the superior performance of the cSEM algorithm compared to existing competitors in terms of network recovery when censored data exist. Finally, the proposed cSEM algorithm is applied to single-cell data with censoring to uncover the relationships among variables. The implementation of the cSEM algorithm is available on GitHub.
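
A basic ingredient of any EM-type scheme for left-censored Gaussian data is the mean of a normal truncated above at the detection limit; a one-formula sketch (the full cSEM E-step and structure search are the paper's own):

```python
from scipy.stats import norm

def censored_mean(mu, sigma, lod):
    """E[X | X < lod] for X ~ N(mu, sigma^2): mu - sigma * phi(a) / Phi(a), a = (lod - mu) / sigma."""
    a = (lod - mu) / sigma
    return mu - sigma * norm.pdf(a) / norm.cdf(a)

print(censored_mean(0.0, 1.0, 0.0))  # -0.7979 = -sqrt(2 / pi)
```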

24 pages, 494 KB  
Article
Robustness and Efficiency Considerations When Testing Process Reliability with a Limit of Detection
by Laura S. Bumbulis and Richard J. Cook
Mathematics 2025, 13(8), 1274; https://doi.org/10.3390/math13081274 - 12 Apr 2025
Viewed by 657
Abstract
Processes in biotechnology are considered reliable if they produce samples satisfying regulatory benchmarks. For example, laboratories may be required to show that levels of an undesirable analyte rarely (e.g., in less than 5% of samples) exceed a tolerance threshold. This can be challenging when measurement systems feature a lower limit of detection, rendering some observations left-censored. We investigate the implications of detection limits on location-scale model-based inference in reliability studies, including their impact on large and finite sample properties of various estimators and the sensitivity of results to model misspecification. To address the need for robust methods, we introduce a flexible weakly parametric model in which the right tail of the response distribution is approximated using a piecewise-constant hazard model. Simulation studies are reported that investigate the performance of the established and proposed methods, and an illustrative application is given to a study of drinking can weights. We conclude with a discussion of areas warranting future work.
(This article belongs to the Special Issue Improved Mathematical Methods in Decision Making Models)
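
The weakly parametric tail can be sketched as a piecewise-constant hazard, under which survival is the exponential of minus the accumulated hazard. Cut points and rates below are illustrative:

```python
import numpy as np

def pch_survival(t, cuts, rates):
    """S(t) under a piecewise-constant hazard.
    cuts = [c1, ..., cK] splits [0, inf) into K + 1 pieces; len(rates) = K + 1."""
    edges = np.r_[0.0, cuts, np.inf]
    # Time spent in each piece, per value of t (pieces on axis 0, t on axis 1).
    exposure = np.clip(np.minimum(t, edges[1:][:, None]) - edges[:-1][:, None], 0, None)
    return np.exp(-(np.asarray(rates)[:, None] * exposure).sum(axis=0))

print(pch_survival(np.array([0.5, 2.0, 5.0]), cuts=[1.0, 3.0], rates=[0.1, 0.2, 0.4]))
```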

17 pages, 573 KB  
Article
Fitting Penalized Estimator for Sparse Covariance Matrix with Left-Censored Data by the EM Algorithm
by Shanyi Lin, Qian-Zhen Zheng, Laixu Shang, Ping-Feng Xu and Man-Lai Tang
Mathematics 2025, 13(3), 423; https://doi.org/10.3390/math13030423 - 27 Jan 2025
Cited by 1 | Viewed by 902
Abstract
Estimating a sparse covariance matrix can effectively identify important features and patterns, but traditional estimation methods require complete data vectors for all subjects. When data are left-censored due to detection limits, common strategies such as excluding censored individuals or replacing censored values with suitable constants may result in large biases. In this paper, we propose two penalized log-likelihood estimators, incorporating the L1 penalty and SCAD penalty, for estimating the sparse covariance matrix of a multivariate normal distribution in the presence of left-censored data. However, the fitting of these penalized estimators poses challenges due to the observed log-likelihood involving high-dimensional integration over the censored variables. To address this issue, we treat censored data as a special case of incomplete data and employ the Expectation Maximization algorithm combined with the coordinate descent algorithm to efficiently fit the two penalized estimators. Through simulation studies, we demonstrate that both penalized estimators achieve greater estimation accuracy compared to methods that replace censored values with constants. Moreover, the SCAD penalized estimator generally outperforms the L1 penalized estimator. Finally, our method is applied to proteomic datasets.
(This article belongs to the Special Issue Multivariate Statistical Analysis and Application)
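
For reference, a sketch of the SCAD penalty used by the second estimator (elementwise, with the conventional a = 3.7 of Fan and Li); the censored-data EM fitting itself is not reproduced:

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001), applied elementwise to |theta|."""
    t = np.abs(theta)
    small = lam * t                                            # t <= lam
    mid = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))    # lam < t <= a * lam
    large = lam**2 * (a + 1) / 2                               # t > a * lam (constant)
    return np.where(t <= lam, small, np.where(t <= a * lam, mid, large))

print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
```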

26 pages, 808 KB  
Article
A Longitudinal Tree-Based Framework for Lapse Management in Life Insurance
by Mathias Valla
Analytics 2024, 3(3), 318-343; https://doi.org/10.3390/analytics3030018 - 5 Aug 2024
Cited by 1 | Viewed by 1507
Abstract
Developing an informed lapse management strategy (LMS) is critical for life insurers to improve profitability and gain insight into the risk of their global portfolio. Prior research in actuarial science has shown that targeting policyholders by maximising their individual customer lifetime value is more advantageous than targeting all those likely to lapse. However, most existing lapse analyses do not leverage the variability of features and targets over time. We propose a longitudinal LMS framework, utilising tree-based models for longitudinal data, such as left-truncated and right-censored (LTRC) trees and forests, as well as mixed-effect tree-based models. Our methodology provides time-informed insights, leading to increased precision in targeting. Our findings indicate that the use of longitudinally structured data significantly enhances the precision of models in predicting lapse behaviour, estimating customer lifetime value, and evaluating individual retention gains. The implementation of mixed-effect random forests enables the production of time-varying predictions that are highly relevant for decision-making. This paper contributes to the field of lapse analysis for life insurers by demonstrating the importance of exploiting the complete past trajectory of policyholders, which is often available in insurers' information systems but has yet to be fully utilised.
(This article belongs to the Special Issue Business Analytics and Applications)
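
The LTRC adjustment underlying such trees can be illustrated with a delayed-entry Kaplan-Meier, where the risk set at time t counts subjects with entry < t <= exit. Data below are synthetic:

```python
import numpy as np

def ltrc_km(entry, exit_time, event):
    """Kaplan-Meier with delayed entry: risk set at t is {i : entry_i < t <= exit_i}."""
    times = np.unique(exit_time[event == 1])
    s, surv = 1.0, []
    for t in times:
        at_risk = np.sum((entry < t) & (exit_time >= t))
        deaths = np.sum((exit_time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return times, np.array(surv)

rng = np.random.default_rng(11)
entry = rng.uniform(0, 2, 500)             # policies observed only from 'entry' onward
lapse = entry + rng.exponential(3, 500)    # event time (after entry by construction)
cens = entry + rng.exponential(4, 500)
exit_time = np.minimum(lapse, cens)
event = (lapse <= cens).astype(int)
t, s = ltrc_km(entry, exit_time, event)
print(s[-1])                               # estimated survival at the last event time
```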

15 pages, 735 KB  
Article
Dicamba and 2,4-D in the Urine of Pregnant Women in the Midwest: Comparison of Two Cohorts (2010–2012 vs. 2020–2022)
by Joanne K. Daggy, David M. Haas, Yunpeng Yu, Patrick O. Monahan, David Guise, Éric Gaudreau, Jessica Larose and Charles M. Benbrook
Agrochemicals 2024, 3(1), 42-56; https://doi.org/10.3390/agrochemicals3010005 - 16 Feb 2024
Viewed by 7190
Abstract
Currently, there are no known human biomonitoring studies that concurrently examine biomarkers of dicamba and 2,4-D. We sought to compare biomarkers of exposure to herbicides in pregnant women residing in the US Midwest before and after the adoption of dicamba-tolerant soybean technology using urine specimens obtained in 2010–2012 from the Nulliparous Pregnancy Outcomes Study: Monitoring Mothers-to-be (N = 61) and in 2020–2022 from the Heartland Study (N = 91). Specific gravity-standardized concentration levels for each analyte were compared between the cohorts, assuming data are lognormal and specifying values below the LOD as left-censored. The proportion of pregnant individuals with dicamba detected above the LOD significantly increased from 28% (95% CI: 16%, 40%) in 2010–2012 to 70% (95% CI: 60%, 79%) in 2020–2022, and dicamba concentrations also significantly increased from 0.066 μg/L (95% CI: 0.042, 0.104) to 0.271 μg/L (95% CI: 0.205, 0.358). All pregnant individuals from both cohorts had 2,4-D detected. Though 2,4-D concentration levels increased, the difference was not significant (p-value = 0.226). Reliance on herbicides has drastically increased in the last ten years in the United States, and the results obtained in this study highlight the need to track exposure and impacts on adverse maternal and neonatal outcomes.
(This article belongs to the Special Issue Feature Papers on Agrochemicals)
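
The censored comparison can be sketched as a left-censored lognormal MLE: detects contribute the lognormal density, non-detects the CDF at their LOD. Data below are synthetic, and specific-gravity standardization and the cohort comparison are omitted:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def negloglik(theta, conc, lod, detected):
    """Lognormal MLE with non-detects left-censored at their LOD."""
    mu, log_s = theta
    s = np.exp(log_s)
    x = np.log(np.where(detected, conc, lod))
    ll = np.where(detected, norm.logpdf(x, mu, s) - x,  # minus x: Jacobian of log transform
                  norm.logcdf(x, mu, s))
    return -ll.sum()

rng = np.random.default_rng(5)
conc = rng.lognormal(mean=np.log(0.1), sigma=1.0, size=91)  # hypothetical ug/L values
lod = np.full(91, 0.05)
detected = conc > lod
fit = minimize(negloglik, x0=[0.0, 0.0], args=(conc, lod, detected))
print(np.exp(fit.x[0]), "geometric-mean concentration (ug/L)")
```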

17 pages, 590 KB  
Article
Model Uncertainty and Selection of Risk Models for Left-Truncated and Right-Censored Loss Data
by Qian Zhao, Sahadeb Upretee and Daoping Yu
Risks 2023, 11(11), 188; https://doi.org/10.3390/risks11110188 - 30 Oct 2023
Cited by 1 | Viewed by 2162
Abstract
Insurance loss data are usually in the form of left-truncation and right-censoring due to deductibles and policy limits, respectively. This paper investigates the model uncertainty and selection procedure when various parametric models are constructed to accommodate such left-truncated and right-censored data. The joint asymptotic properties of the estimators have been established using the Delta method along with Maximum Likelihood Estimation when the model is specified. We conduct the simulation studies using Fisk, Lognormal, Lomax, Paralogistic, and Weibull distributions with various proportions of loss data below deductibles and above policy limits. A variety of graphic tools, hypothesis tests, and penalized likelihood criteria are employed to validate the models, and their performances on the model selection are evaluated through the probability of each parent distribution being correctly selected. The effectiveness of each tool on model selection is also illustrated using well-studied data that represent Wisconsin property losses in the United States from 2007 to 2010.
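
The left-truncated, right-censored likelihood such models maximize has standard contributions: f(x)/S(d) for losses observed below the policy limit and S(u)/S(d) for losses censored at it. A lognormal sketch with made-up deductible and limit:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def negloglik(theta, x, d, u):
    """x: observed losses; d: deductible (left truncation); u: policy limit (right censoring)."""
    mu, log_s = theta
    s = np.exp(log_s)
    dist = lognorm(s, scale=np.exp(mu))
    cens = x >= u
    ll = np.where(cens, dist.logsf(u), dist.logpdf(x)) - dist.logsf(d)
    return -ll.sum()

rng = np.random.default_rng(9)
loss = rng.lognormal(mean=7.0, sigma=1.2, size=2000)
d, u = 500.0, 50_000.0
obs = np.minimum(loss[loss > d], u)  # only losses above the deductible are recorded
fit = minimize(negloglik, x0=[8.0, 0.0], args=(obs, d, u))
print(fit.x[0], np.exp(fit.x[1]))    # roughly (7.0, 1.2)
```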

11 pages, 335 KB  
Article
A Shared Frailty Model for Left-Truncated and Right-Censored Under-Five Child Mortality Data in South Africa
by Tshilidzi Benedicta Mulaudzi, Yehenew Getachew Kifle and Roel Braekers
Stats 2023, 6(4), 1008-1018; https://doi.org/10.3390/stats6040063 - 6 Oct 2023
Cited by 2 | Viewed by 1910
Abstract
Many African nations continue to grapple with persistently high under-five child mortality rates, particularly those situated in the Sub-Saharan region, including South Africa. A multitude of socio-economic factors are identified as key contributors to the elevated under-five child mortality in numerous African nations. This research endeavors to investigate various factors believed to be associated with child mortality by employing advanced statistical models. This study utilizes child-level survival data from South Africa, characterized by left truncation and right censoring, to fit a Cox proportional hazards model under the assumption of working independence. Additionally, a shared frailty model is applied, clustering children based on their mothers. Comparative analysis is performed between the results obtained from the shared frailty model and the Cox proportional hazards model under the assumption of working independence. Within the scope of this analysis, several factors stand out as significant contributors to under-five child mortality in the study area, including gender, birth province, birth year, birth order, and twin status. Notably, the shared frailty model demonstrates superior performance in modeling the dataset, as evidenced by a lower likelihood cross-validation score compared to the Cox proportional hazards model assuming independence. This improvement can be attributed to the shared frailty model's ability to account for heterogeneity among mothers and the inherent association between siblings born to the same mother, ultimately enhancing the quality of the study's conclusions.
(This article belongs to the Section Survival Analysis)
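
The shared-frailty mechanism can be sketched by simulation: a mother-level Gamma frailty multiplies her children's hazards, inducing positive dependence between siblings. Parameters below are hypothetical:

```python
import numpy as np

# Each mother m has frailty w_m ~ Gamma(k, scale=1/k) (mean 1, variance 1/k);
# her children's hazards are all multiplied by w_m.
rng = np.random.default_rng(13)
n_mothers, kids = 2000, 2
k = 2.0                                      # frailty variance = 1/k = 0.5
w = rng.gamma(k, 1.0 / k, size=n_mothers)    # shared within each family
base_rate = 0.05                             # constant baseline hazard (illustrative)
t = rng.exponential(1.0 / (base_rate * w[:, None]), size=(n_mothers, kids))

# Siblings' lifetimes are positively correlated through the shared frailty.
print(np.corrcoef(t[:, 0], t[:, 1])[0, 1])   # > 0; with w constant it would be near 0
```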

16 pages, 928 KB  
Article
Evaluation of Statistical Treatment of Left-Censored Contamination Data: Example Involving Deoxynivalenol Occurrence in Pasta and Pasta Substitute Products
by Alessandro Feraldi, Barbara De Santis, Marco Finocchietti, Francesca Debegnach, Antonio Mandile and Marco Alfò
Toxins 2023, 15(9), 521; https://doi.org/10.3390/toxins15090521 - 24 Aug 2023
Cited by 2 | Viewed by 1672
Abstract
The handling of data on food contamination frequently represents a challenge because these are often left-censored, being composed of both positive and non-detected values. The latter observations are not quantified and provide only the information that they are below a laboratory-specific threshold value. Besides deterministic approaches, which simplify the treatment through the substitution of non-detected values with fixed threshold or null values, a growing interest has been shown in the application of stochastic approaches to the treatment of unquantified values. In this study, a multiple imputation procedure was applied in order to analyze contamination data on deoxynivalenol, a mycotoxin that may be present in pasta and pasta substitute products. An application of the proposed technique to censored deoxynivalenol occurrence data is presented. The results were compared to those attained using deterministic techniques (substitution methods). In this context, the stochastic approach seemed to provide a more accurate, unbiased and realistic solution to the problem of left-censored occurrence data. The complete sample of values could then be used to estimate the exposure of the general population to deoxynivalenol based on consumption data.
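
One step of such a multiple-imputation scheme can be sketched as drawing below-LOD values from a fitted lognormal conditional on being under the LOD, via inverse-CDF sampling. Parameters below are hypothetical, and a full MI analysis would repeat this M times and pool the results:

```python
import numpy as np
from scipy.stats import norm

def impute_below_lod(mu, sigma, lod, size, rng):
    """Draw from Lognormal(mu, sigma) conditional on X < lod (inverse-CDF sampling)."""
    p_lod = norm.cdf((np.log(lod) - mu) / sigma)   # F(lod)
    u = rng.uniform(0.0, p_lod, size)              # uniform on (0, F(lod))
    return np.exp(mu + sigma * norm.ppf(u))

rng = np.random.default_rng(21)
mu, sigma, lod = np.log(50.0), 0.8, 40.0           # hypothetical DON parameters, ug/kg
print(impute_below_lod(mu, sigma, lod, 5, rng))    # all draws strictly below 40
```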

13 pages, 318 KB  
Article
Regression Analysis of Dependent Current Status Data with Left Truncation
by Mengyue Zhang, Shishun Zhao, Tao Hu, Da Xu and Jianguo Sun
Mathematics 2023, 11(16), 3539; https://doi.org/10.3390/math11163539 - 16 Aug 2023
Viewed by 1504
Abstract
Current status data are encountered in a wide range of applications, including tumorigenic experiments and demographic studies. In this case, each subject is observed only once, and the only information obtained is whether the event of interest has occurred by the moment of observation. In addition to censoring, truncation is also very common in practice. This paper examines the regression analysis of current status data with informative censoring times in the presence of left truncation. We propose an inference approach based on sieve maximum likelihood estimation (SMLE). A copula-based approach is used to describe the relationship between the failure time of interest and the censoring time, and a spline function is employed to approximate the unknown nonparametric function. We establish the asymptotic properties of the proposed estimator. Simulation studies suggest that the developed procedure works well in practice. We also applied the developed method to a real dataset derived from an AIDS cohort study.
(This article belongs to the Special Issue Computational Statistics and Data Analysis, 2nd Edition)
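
Stripped of the copula and sieve machinery, the basic current-status likelihood is simple: each subject contributes F(C)^δ S(C)^(1-δ), where C is the observation time and δ indicates whether the event has already occurred. A parametric sketch assuming independent censoring (not the paper's informative-censoring model):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

def negloglik(log_rate, c, delta):
    """Current-status likelihood: F(c)^delta * S(c)^(1 - delta)."""
    rate = np.exp(log_rate)
    return -(delta * expon.logcdf(c, scale=1 / rate)
             + (1 - delta) * expon.logsf(c, scale=1 / rate)).sum()

rng = np.random.default_rng(17)
T = rng.exponential(2.0, 1000)       # unobserved failure times
C = rng.exponential(3.0, 1000)       # one observation time per subject
delta = (T <= C).astype(int)         # only this indicator is recorded
fit = minimize_scalar(negloglik, args=(C, delta))
print(np.exp(fit.x))                 # roughly the true rate 1 / 2.0 = 0.5
```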
