
Variable Selection for Length-Biased and Interval-Censored Failure Time Data

Fan Feng 1, Guanghui Cheng 2 and Jianguo Sun 3
1 School of Mathematics, Jilin University, Changchun 130012, China
2 Guangzhou Institute of International Finance, Guangzhou University, Guangzhou 510006, China
3 Department of Statistics, University of Missouri, Columbia, MO 65211, USA
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(22), 4576; https://doi.org/10.3390/math11224576
Submission received: 20 October 2023 / Revised: 2 November 2023 / Accepted: 6 November 2023 / Published: 8 November 2023
(This article belongs to the Section Probability and Statistics)

Abstract

Length-biased failure time data occur often in various biomedical fields, including clinical trials, epidemiological cohort studies and genome-wide association studies, and their analyses have been attracting a surge of interest. In practical applications, because one may collect a large number of candidate covariates for the failure event of interest, variable selection becomes a useful tool to identify the important risk factors and enhance the estimation accuracy. In this paper, we consider Cox’s proportional hazards model and develop a penalized variable selection technique with various popular penalty functions for length-biased data, in which the failure event of interest suffers from interval censoring. Specifically, a computationally stable and reliable penalized expectation-maximization algorithm via two-stage data augmentation is developed to overcome the challenge in maximizing the intractable penalized likelihood. We establish the oracle property of the proposed method and present some simulation results, suggesting that the proposed method outperforms the traditional variable selection method based on the conditional likelihood. The proposed method is then applied to a set of real data arising from the Prostate, Lung, Colorectal and Ovarian cancer screening trial. The analysis results show that being African American and having immediate family members with prostate cancer significantly increase the risk of developing prostate cancer, while having diabetes is associated with a significantly lower risk of developing prostate cancer.

1. Introduction

This work proposes a general variable selection method for length-biased and interval-censored failure time data under the classical proportional hazards (PH) model. Interval-censored data arise when the failure time of interest cannot be measured exactly but is only known to lie in a time interval formed by periodical follow-ups [1]. Such data are frequently encountered in many scientific studies, including clinical trials and epidemiological surveys, and their regression analysis has been discussed extensively in the literature; see [2,3,4,5,6,7,8] for details. Specifically, Zeng et al. [4], Wang et al. [6] and Zeng et al. [7] investigated inference procedures for the additive hazards, PH and transformation models, respectively.
In addition to interval censoring, left truncation is also frequently encountered in prospective cohort studies, inducing non-randomly selected samples from the target population. A typical example of left truncation occurs in the Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial, where individuals with any of the PLCO cancers at the onset of the study were not enrolled [6,9]. In particular, when the truncation times follow the uniform distribution (also known as the length-biased or stationarity assumption), the left-truncated data reduce to the length-biased data discussed by many authors, including but not limited to Wang [10], Shen et al. [11] and Ning et al. [12].
The analysis of length-biased data under right censoring has been investigated extensively in the literature [11,13,14,15,16]. To name a few examples, Shen et al. [11] presented unbiased estimating equation approaches for the transformation and accelerated failure time models. Qin and Shen [13] developed an inverse weighted estimating equation approach for the PH model. Qin et al. [14] developed new expectation-maximization (EM) algorithms to estimate the survival function of the failure time. For the length-biased and interval-censored data, Gao and Chan [15] developed an EM algorithm for the PH model via two-stage data augmentation. Further, Shen et al. [16] considered the mixture PH model with a nonsusceptible or cured fraction.
In many practical applications, one may collect a large number of candidate covariates, but in general, only a few covariates are useful to model the failure time of interest. In such a case, penalized variable selection provides a useful tool to eliminate irrelevant variables and further enhance the estimation accuracy. Popular penalty functions include LASSO [17], SCAD [18], adaptive LASSO (ALASSO) [19], SICA [20], SELO [21], MCP [22] and BAR [23,24]. In particular, Fan et al. [25] provided a comprehensive review of variable selection methods and the corresponding algorithms. In recent years, machine learning-based methods have also gained considerable attention due to their great ability in identifying relevant features. To name a few examples, Garavand et al. [26] used clinical examination features and compared different machine learning algorithms in developing a model for the early diagnosis of coronary artery disease. Hosseini et al. [27] used blood microscopic images and a convolutional neural network algorithm for detecting and classifying B-acute lymphoblastic leukemia. Garavand et al. [28] conducted a systematic review of advanced techniques to facilitate the rapid diagnosis of coronary artery disease. Ghaderzadeh and Aria [29] conducted a systematic review of Artificial Intelligence techniques for COVID-19 detection.
Regarding left-truncated failure time data, a number of variable selection methods have been proposed. In particular, Chen [30] considered right-censored data and developed a variable selection method for the additive hazards model with covariate measurement errors. He et al. [31] also considered right-censored data and performed variable selection with penalized estimating equations for the accelerated failure time model. Li et al. [32] developed a conditional likelihood-based variable selection method for left-truncated and interval-censored data under the PH model. However, the work of Li et al. [32] only involved the ALASSO penalty, and their method can be anticipated to lose some efficiency because it ignores the distribution information of the truncation times.
In this paper, we offer an efficient penalized likelihood method to achieve variable selection in the PH model with length-biased and interval-censored data. Compared with the traditional conditional likelihood method of Li et al. [32], the proposed method yields an efficiency gain by fully taking into account the distribution information of the truncation times. In particular, to optimize the penalized likelihood function, whose form is intractable, we develop a penalized EM algorithm by introducing pseudo-left-truncated data and Poisson random variables. The proposed method is easy to implement, computationally stable, and has desirable advantages over the variable selection method based on the penalized conditional likelihood [32]. An application to a real data set arising from the PLCO cancer screening trial demonstrates the practical usefulness of the proposed method.
The PLCO cancer screening trial is a large-scale multicenter trial conducted to screen the PLCO cancers and investigate cancer-related mortality. To date, motivated by the rich data structure in the PLCO database, various statistical methods have already been proposed in the literature. To name a few examples, Wang et al. [6] developed an EM algorithm to estimate the spline-based PH model with interval-censored data. Sun et al. [33] considered variable selection in a semiparametric nonmixture cure model with interval-censored data. Li and Peng [34] investigated instrumental variable estimation of complier causal treatment effect with interval-censored data. Withana Gamage et al. [35] considered the estimation of the PH model with left-truncated and arbitrarily interval-censored data.
The remainder of this paper is organized as follows. In Section 2, we first introduce the notation, assumption and corresponding likelihood. Section 3 presents the proposed penalized EM algorithm, and Section 4 establishes the oracle property of the proposed estimators. In Section 5, a simulation study is conducted to assess the variable selection performance and the estimation accuracy of the proposed method, followed by an application in Section 6. Some discussions and conclusions are given in Section 7. Section 8 provides several potential future research directions.

2. Notation, Model and Penalized Likelihood

For the target population, let $\tilde T$, $\tilde A$ and $\tilde Z$ denote the failure time of interest (e.g., the time to the onset of the failure event), the truncation time (e.g., the time to the study enrollment) and the p-dimensional vector of covariates, respectively. Given $\tilde Z$, the PH model specifies that the conditional cumulative hazard function of $\tilde T$ takes the form
$$\Lambda(t \mid \tilde Z) = \Lambda(t)\exp(\beta^\top \tilde Z), \qquad (1)$$
where $\beta$ is the p-dimensional vector of unknown regression coefficients and $\Lambda(\cdot)$ is an unknown increasing cumulative baseline hazard function. Let d denote the number of nonzero components of $\beta$, and let $\beta_{10}$ and $\beta_{20}$ denote the sub-vectors of the d nonzero and the $p - d$ zero components, respectively.
Under the left-truncation scheme, only individuals with $\tilde T \ge \tilde A$ are enrolled in the study, and the failure time, truncation time and covariate vector that satisfy $\tilde T \ge \tilde A$ are denoted by $T$, $A$ and $Z$, respectively. Then, we know that $(T, A, Z)$ has the same joint distribution as $(\tilde T, \tilde A, \tilde Z)$ given $\tilde T \ge \tilde A$ [36]. As mentioned above, if the truncation time $\tilde A$ is further assumed to follow the uniform distribution on $(0, \tau)$, where $(0, \tau)$ is the support of $\tilde T$, we have the length-biased sampling mechanism [10,14]. Let $f(t \mid z)$ and $S(t \mid z)$ denote the density and survival functions of $\tilde T$ given $\tilde Z = z$, respectively. Under the assumption that $\tilde A$ is independent of $\tilde T$ given $\tilde Z$, the joint density function of $(T, A)$ given $Z = z$ evaluated at $(t, a)$ is
$$\frac{h(a)\, f(t \mid z)}{P(\tilde T \ge \tilde A)} = \frac{h(a)\, f(t \mid z)}{\int_0^\tau h(a)\, S(a \mid z)\, da}, \qquad (2)$$
where $h(a)$ denotes the density function of $\tilde A$ at time a and equals $1/\tau$ under the length-biased sampling scheme.
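To make the length-bias mechanism concrete, the following one-line derivation (added here for clarity; it uses only the definitions above and the fact that the support of $\tilde T$ is $(0, \tau)$) shows that the selection probability is proportional to the covariate-specific mean lifetime, so longer-lived subjects are over-represented:
$$P(\tilde T \ge \tilde A \mid \tilde Z = z) = \int_0^\tau \frac{1}{\tau}\, S(a \mid z)\, da = \frac{E(\tilde T \mid \tilde Z = z)}{\tau}.$$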
Consider a failure time study that recruits n subjects, where each failure time suffers from interval censoring due to the periodical examinations for the occurrence of the failure event. For $i = 1, \ldots, n$, denote by $T_i$, $A_i$ and $Z_i$ the failure time, truncation time and covariate vector of the ith subject in the study, respectively. We assume that there exists a sequence of examination times $A_i \le U_{i,1} < \cdots < U_{i,M_i} < U_{i,M_i+1}$ for subject i, where $M_i$ is a random positive integer, and define $U_{i,M_i+1} = \infty$. Let $L_i$ and $R_i$ denote the endpoints of the smallest interval that brackets $T_i$; that is, $T_i \in (L_i, R_i]$. Clearly, $T_i$ is left-censored if $L_i = A_i$, and $T_i$ is right-censored if $R_i = \infty$. Then, the observed data consist of $O = \{O_i = (L_i, R_i, A_i, Z_i); i = 1, \ldots, n\}$. Under the assumption that the examination times are independent of the failure and truncation times given the covariates, the likelihood function based on the observed data can be written as
$$L(\beta, \Lambda) = \prod_{i=1}^{n} \frac{S(L_i \mid Z_i) - I(R_i < \infty)\, S(R_i \mid Z_i)}{\int_0^\tau \exp\{-\Lambda(a)\exp(\beta^\top Z_i)\}\, da \,/\, \tau}, \qquad (3)$$
where $S(t \mid Z_i) = \exp\{-\Lambda(t)\exp(\beta^\top Z_i)\}$.
Essentially, the likelihood (3) is the product of the marginal likelihood and the conditional likelihood, that is, $L(\beta, \Lambda) = L_M(\beta, \Lambda) \times L_C(\beta, \Lambda)$, where $L_M(\beta, \Lambda) = \prod_{i=1}^{n} S(A_i \mid Z_i)/\mu_{\beta,\Lambda}(Z_i)$ and $L_C(\beta, \Lambda) = \prod_{i=1}^{n} \{S(L_i \mid Z_i) - I(R_i < \infty)\, S(R_i \mid Z_i)\}/S(A_i \mid Z_i)$. In the above, $I(\cdot)$ denotes the indicator function and $\mu_{\beta,\Lambda}(Z_i) = \tau^{-1}\int_0^\tau \exp\{-\Lambda(a)\exp(\beta^\top Z_i)\}\, da$; $L_M(\beta, \Lambda)$ is the marginal likelihood of $\{A_i; i = 1, \ldots, n\}$ given $\{Z_i; i = 1, \ldots, n\}$, and $L_C(\beta, \Lambda)$ is the conditional likelihood given the $A_i$'s. Notably, the commonly used conditional likelihood method only utilizes $L_C(\beta, \Lambda)$ for inference, which can be anticipated to lose some estimation efficiency because $L_M(\beta, \Lambda)$ also involves the parameters of model (1).
For the nuisance function $\Lambda$, we propose to approximate it with a step function that has non-negative jumps at the unique examination times. Specifically, let $0 < t_1 < \cdots < t_{K_n} < \infty$ denote the ordered unique values of $\{(L_i, R_i I(R_i < \infty)); i = 1, \ldots, n\}$, where $K_n$ is an integer determined by the observed data. For $k = 1, \ldots, K_n$, denote by $f_k$ the non-negative jump size of $\Lambda$ at $t_k$. Then, we have $\Lambda(t) = \sum_{t_k \le t} f_k$, and the likelihood function (3) can be rewritten as
$$L(\beta, F) = \prod_{i=1}^{n} \frac{\exp\Big\{-\sum_{t_k \le L_i} f_k \exp(\beta^\top Z_i)\Big\} - I(R_i < \infty)\exp\Big\{-\sum_{t_k \le R_i} f_k \exp(\beta^\top Z_i)\Big\}}{\int_0^\tau \exp\Big\{-\sum_{t_k \le a} f_k \exp(\beta^\top Z_i)\Big\}\, da \,/\, \tau}, \qquad (4)$$
where $F = (f_1, \ldots, f_{K_n})$.
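To fix ideas, below is a minimal numerical sketch (in Python with NumPy; the function name and data layout are our own illustration, not the authors' code) of evaluating the log of (4). Because $\Lambda$ is a step function, the integrand in the denominator is piecewise constant between jump points, so the integral over $(0, \tau)$ can be computed exactly:

```python
import numpy as np

def log_likelihood(beta, f, t, L, R, Z, tau):
    """Log of the observed-data likelihood (4); an illustrative sketch.

    beta : (p,) regression coefficients
    f    : (K,) non-negative jump sizes of Lambda at the ordered points t
    t    : (K,) ordered unique examination times, 0 < t[0] < ... < t[-1] < tau
    L, R : (n,) interval endpoints, with R[i] = np.inf for right-censored subjects
    Z    : (n, p) covariate matrix
    """
    eta = np.exp(Z @ beta)                       # exp(beta'Z_i), shape (n,)
    Lam = np.concatenate(([0.0], np.cumsum(f)))  # Lambda at 0, t_1, ..., t_K
    edges = np.concatenate(([0.0], t, [tau]))    # segment endpoints on (0, tau)
    widths = np.diff(edges)                      # lengths of the constancy segments

    total = 0.0
    for i in range(len(L)):
        # Lambda(L_i) = sum of f_k over t_k <= L_i (step-function evaluation)
        num = np.exp(-Lam[np.searchsorted(t, L[i], side="right")] * eta[i])
        if np.isfinite(R[i]):
            num -= np.exp(-Lam[np.searchsorted(t, R[i], side="right")] * eta[i])
        # exact integral of the piecewise-constant survival function, divided by tau
        denom = np.sum(widths * np.exp(-Lam * eta[i])) / tau
        total += np.log(num) - np.log(denom)
    return total
```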
To accomplish variable selection and estimate the nonzero parameters simultaneously, we propose to maximize the following penalized log-likelihood:
$$\log L_P(\beta, F) = \log L(\beta, F) - n\sum_{j=1}^{p} p_\lambda(|\beta_j|), \qquad (5)$$
where $p_\lambda(|\beta_j|)$ denotes a penalty function that depends on the tuning parameter $\lambda$. In what follows, we provide a general maximization procedure for (5) under various commonly adopted penalty functions, such as LASSO, ALASSO, SCAD, SELO, SICA, MCP and BAR [17,18,19,20,21,22,23,24]. Because the penalized log-likelihood (5) has an intractable form, direct maximization with existing software is extremely difficult and unstable; this is the case even without the penalty term, as shown in Gao and Chan [15]. In the next section, we propose a reliable and stable penalized EM algorithm to overcome this computational challenge.

3. Estimation Procedure

The proposed penalized EM algorithm involves two layers of data augmentation, which aim at simplifying the form of (4) and obtaining a tractable objective function. In the first stage of data augmentation, for the ith subject, we introduce a set of independent pseudo-truncated data, $O^* = \{(T_{im}, A_{im}, Z_i) : T_{im} < A_{im},\ m = 1, \ldots, N_i\}$, which are also referred to as “ghost data” [37], and the random integer $N_i$ follows a negative binomial distribution $NB(1, \pi_i)$ with $E(N_i \mid O_i) = (1 - \pi_i)/\pi_i$, where
$$\pi_i = P(T_{im} \ge A_{im} \mid Z_i) = 1 - \sum_{k=1}^{K_n} P(T_{im} = t_k,\ A_{im} > t_k \mid Z_i) = 1 - \sum_{k=1}^{K_n} (1 - t_k/\tau)\, f_k \exp(\beta^\top Z_i)\exp\Big\{-\sum_{l=1}^{k} f_l \exp(\beta^\top Z_i)\Big\}.$$
Given $N_i = n_i$, let $N_{ik} = \sum_{m=1}^{n_i} I(T_{im} = t_k)$ for each $k = 1, \ldots, K_n$; then $(N_{i1}, \ldots, N_{iK_n})$ follows a multinomial distribution with probabilities $(p_{i1}, \ldots, p_{iK_n})$, where $p_{ik} = P(T_{im} = t_k \mid T_{im} < A_{im}, Z_i) = q_{ik}/(1 - \pi_i)$ and $q_{ik} = (1 - t_k/\tau)\, f_k \exp(\beta^\top Z_i)\exp\{-\sum_{l=1}^{k} f_l \exp(\beta^\top Z_i)\}$. In the above, $\tau$ is the finite upper bound of the support of $\tilde T$ and can be specified as $t_{K_n}$ in practice [14]. After deleting some constants that are irrelevant to the parameters to be estimated, the augmented likelihood function based on $\{O, O^*\}$ is
$$L_1(\beta, F) = \prod_{i=1}^{n}\bigg[\bigg(\exp\Big\{-\sum_{t_k \le L_i} f_k \exp(\beta^\top Z_i)\Big\} - I(R_i < \infty)\exp\Big\{-\sum_{t_k \le R_i} f_k \exp(\beta^\top Z_i)\Big\}\bigg) \times \prod_{k=1}^{K_n} q_{ik}^{N_{ik}}\bigg]. \qquad (6)$$
In the second stage, for the ith subject, we introduce independent latent variables $W_{ik}$, $i = 1, \ldots, n$ and $k = 1, \ldots, K_n$, where $W_{ik}$ is a Poisson random variable with mean $f_k \exp(\beta^\top Z_i)$. Then, the likelihood function (6) can be re-expressed with the Poisson variables as
$$L_2(\beta, F) = \prod_{i=1}^{n}\bigg[P\Big(\sum_{t_k \le L_i} W_{ik} = 0\Big)\, P\Big(\sum_{L_i < t_k \le R_i} W_{ik} > 0\Big)^{I(R_i < \infty)} \times \prod_{k=1}^{K_n} q_{ik}^{N_{ik}}\bigg].$$
Let $P(W_{ik} \mid f_k \exp(\beta^\top Z_i))$ be the probability mass function of $W_{ik}$ and define $R_i^* = L_i I(R_i = \infty) + R_i I(R_i < \infty)$. By treating the latent variables $W_{ik}$'s and $N_{ik}$'s as observable, we have the following complete data likelihood:
$$L_c(\beta, F) = \prod_{i=1}^{n}\prod_{k=1}^{K_n} P\big(W_{ik} \mid f_k \exp(\beta^\top Z_i)\big)^{I(t_k \le R_i^*)} \times q_{ik}^{N_{ik}},$$
where we require that $\sum_{t_k \le L_i} W_{ik} = 0$ and $\sum_{L_i < t_k \le R_i} W_{ik} > 0$ if $R_i < \infty$, and $\sum_{t_k \le L_i} W_{ik} = 0$ if $R_i = \infty$.
Let $\theta = (\beta^\top, f_1, \ldots, f_{K_n})^\top$, and let $\theta^{(m)}$ be the update of $\theta$ at the mth iteration with $m \ge 0$. Based on $L_c(\beta, F)$, we can present the expectation step (E-step) and maximization step (M-step) of the proposed algorithm. In the E-step, we calculate the conditional expectations of the $W_{ik}$ and $N_{ik}$ in $\log L_c(\beta, F)$ given the observed data and $\theta^{(m)}$. This step yields
$$Q(\theta; \theta^{(m)}) = \sum_{i=1}^{n}\sum_{k=1}^{K_n} I(t_k \le R_i^*)\big\{E(W_{ik})\log f_k + E(W_{ik})\beta^\top Z_i - f_k \exp(\beta^\top Z_i)\big\} + \sum_{i=1}^{n}\sum_{k=1}^{K_n} E(N_{ik})\Big\{\log f_k + \beta^\top Z_i - \sum_{l=1}^{k} f_l \exp(\beta^\top Z_i)\Big\}. \qquad (7)$$
In particular, at the mth iteration of the algorithm, the expressions of the conditional expectations are given by
$$E(W_{ik}) = \frac{I(L_i < t_k \le R_i,\ R_i < \infty)\, f_k^{(m)}\exp(\beta^{(m)\top} Z_i)}{1 - \exp\Big\{-\sum_{L_i < t_l \le R_i} f_l^{(m)}\exp(\beta^{(m)\top} Z_i)\Big\}}$$
and
$$E(N_{ik}) = \frac{(1 - t_k/\tau)\, f_k^{(m)}\exp(\beta^{(m)\top} Z_i)\exp\Big\{-\sum_{l=1}^{k} f_l^{(m)}\exp(\beta^{(m)\top} Z_i)\Big\}}{\pi_i^{(m)}},$$
where
$$\pi_i^{(m)} = 1 - \sum_{k=1}^{K_n} (1 - t_k/\tau)\, f_k^{(m)}\exp(\beta^{(m)\top} Z_i)\exp\Big\{-\sum_{l=1}^{k} f_l^{(m)}\exp(\beta^{(m)\top} Z_i)\Big\}.$$
For notational simplicity, we omitted the conditional arguments including the observed data and current estimates of the parameters in the above conditional expectations.
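In implementation, the E-step reduces to a few vectorized array operations. The sketch below (our own illustration, continuing the NumPy setup of the earlier snippet) evaluates $E(W_{ik})$, $E(N_{ik})$ and $\pi_i^{(m)}$ exactly as displayed above:

```python
import numpy as np

def e_step(beta, f, t, L, R, Z, tau):
    """One E-step: returns EW and EN of shape (n, K) per the displayed formulas."""
    eta = np.exp(Z @ beta)                            # exp(beta^{(m)'}Z_i), (n,)
    cumf = np.cumsum(f)                               # Lambda(t_k), (K,)
    # indicator of L_i < t_k <= R_i with R_i finite (interval-bracketed grid points)
    in_LR = (t[None, :] > L[:, None]) & (t[None, :] <= R[:, None]) \
            & np.isfinite(R)[:, None]

    lam = f[None, :] * eta[:, None]                   # Poisson means f_k exp(beta'Z_i)
    denom = 1.0 - np.exp(-np.sum(in_LR * lam, axis=1, keepdims=True))
    EW = np.where(in_LR, lam / np.where(denom > 0, denom, 1.0), 0.0)

    # q_ik = (1 - t_k/tau) f_k e^{beta'Z_i} exp{-sum_{l<=k} f_l e^{beta'Z_i}}
    q = (1.0 - t[None, :] / tau) * lam * np.exp(-cumf[None, :] * eta[:, None])
    pi = 1.0 - q.sum(axis=1, keepdims=True)           # pi_i^{(m)}
    EN = q / pi                                       # E(N_ik) = q_ik / pi_i
    return EW, EN
```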
In the M-step of the algorithm, by solving $\partial Q(\theta; \theta^{(m)})/\partial f_k = 0$, we have a closed-form expression for $f_k$, which is given by
$$f_k^{(m+1)} = \frac{\sum_{i=1}^{n}\big\{I(t_k \le R_i^*)\, E(W_{ik}) + E(N_{ik})\big\}}{\sum_{i=1}^{n}\big\{I(t_k \le R_i^*) + \sum_{l=k}^{K_n} E(N_{il})\big\}\exp(\beta^{(m)\top} Z_i)}, \qquad k = 1, \ldots, K_n. \qquad (8)$$
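Given the E-step output, the closed-form update (8) is a single vectorized expression per grid point; below is a sketch using the same hypothetical names as above:

```python
import numpy as np

def update_f(beta, EW, EN, t, L, R, Z):
    """Closed-form M-step update (8) for the jump sizes f_1, ..., f_K."""
    eta = np.exp(Z @ beta)                       # exp(beta^{(m)'}Z_i)
    Rstar = np.where(np.isfinite(R), R, L)       # R_i* = L_i if right-censored
    at_risk = t[None, :] <= Rstar[:, None]       # I(t_k <= R_i*), shape (n, K)
    # reversed cumulative sum gives the tail sums sum_{l=k}^{K} E(N_il)
    EN_tail = np.cumsum(EN[:, ::-1], axis=1)[:, ::-1]
    num = np.sum(at_risk * EW + EN, axis=0)
    den = np.sum((at_risk + EN_tail) * eta[:, None], axis=0)
    return num / den
```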
Next, by plugging (8) into (7), we obtain the following objective function that only involves the unknown parameter $\beta$:
$$Q_{\text{new}}(\beta; \theta^{(m)}) = \sum_{i=1}^{n}\sum_{k=1}^{K_n}\big\{I(t_k \le R_i^*)\, E(W_{ik}) + E(N_{ik})\big\} \times \bigg[\beta^\top Z_i - \log\bigg\{\sum_{j=1}^{n}\Big(I(t_k \le R_j^*) + \sum_{l=k}^{K_n} E(N_{jl})\Big)\exp(\beta^\top Z_j)\bigg\}\bigg].$$
To obtain the sparse estimator of β , we propose to minimize the following objective function
$$H(\beta; \theta^{(m)}) + \sum_{j=1}^{p} p_\lambda(|\beta_j|), \qquad (9)$$
where $H(\beta; \theta^{(m)}) = -\frac{1}{n} Q_{\text{new}}(\beta; \theta^{(m)})$.
For LASSO and ALASSO, the modified shooting algorithm given in Zhang and Lu [38] and others can be adopted to minimize (9). For BAR, a closed-form solution for $\beta$ is available [39]. For the other penalties, after applying a local linear approximation to $p_\lambda(|\beta_j|)$ [40], one can also adopt the modified shooting algorithm to minimize (9).
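For instance, the local linear approximation replaces $p_\lambda(|\beta_j|)$ near the current value $\beta_j^{(m)}$ by $p_\lambda(|\beta_j^{(m)}|) + p_\lambda'(|\beta_j^{(m)}|)(|\beta_j| - |\beta_j^{(m)}|)$, turning (9) into a weighted LASSO problem. For SCAD, the derivative entering these weights is the standard formula of Fan and Li [18], reproduced here for convenience:
$$p_\lambda'(x) = \lambda\Big\{I(x \le \lambda) + \frac{(a\lambda - x)_+}{(a - 1)\lambda}\, I(x > \lambda)\Big\}, \qquad x > 0,\ a = 3.7.$$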
In summary, for given $\lambda$ and initial value $\theta^{(0)}$, we repeat the E-step and M-step until the convergence criterion is satisfied, rendering the sparse estimators of the regression parameters. It is worth pointing out that the proposed algorithm is insensitive to the choice of the initial value $\theta^{(0)}$. In practice, one can simply set the initial value of each component of $\beta$ to 0 and the initial value of each $f_k$ to $1/K_n$ for $k = 1, \ldots, K_n$. The algorithm is declared to have converged when the sum of the absolute differences of the estimates between two successive iterations is less than a small positive number, such as $10^{-3}$. A schematic implementation is sketched below.
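The following driver is an illustrative sketch that reuses the e_step and update_f snippets above; beta_step stands for the sparse update of $\beta$ (e.g., the modified shooting step with the chosen penalty and $\lambda$) and is left abstract:

```python
import numpy as np

def penalized_em(t, L, R, Z, tau, beta_step, tol=1e-3, max_iter=500):
    """Penalized EM driver: alternate the E-step, the f-update (8) and a
    sparse beta-update minimizing (9), until the total absolute change in
    the estimates between two successive iterations falls below tol."""
    p = Z.shape[1]
    beta = np.zeros(p)                  # suggested initial values
    f = np.full(len(t), 1.0 / len(t))
    for _ in range(max_iter):
        EW, EN = e_step(beta, f, t, L, R, Z, tau)
        f_new = update_f(beta, EW, EN, t, L, R, Z)
        beta_new = beta_step(beta, EW, EN)   # penalty and lambda live inside beta_step
        change = np.abs(beta_new - beta).sum() + np.abs(f_new - f).sum()
        beta, f = beta_new, f_new
        if change < tol:
            break
    return beta, f
```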
To select the optimal λ , we follow Li et al. [39] and others and adopt the BIC criterion, which is defined as
$$\text{BIC}_\lambda = -2\, l(\hat\beta, \hat F) + df_\lambda \times \log(n),$$
where $\hat\beta$ is the final estimator of $\beta$, $\hat F$ is the final estimator of $F$, $l(\hat\beta, \hat F)$ denotes the logarithm of (4) and $df_\lambda$ is the total number of nonzero estimates in $\hat\beta$ and $\hat F$. Given a set of candidate values of $\lambda$, the optimal $\lambda$ can be set as the one that yields the smallest BIC.
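Operationally, the $\lambda$-selection is a simple grid search; a sketch follows (fit_fn and loglik_fn are hypothetical wrappers around the penalized EM algorithm and the log of (4)):

```python
import numpy as np

def select_lambda(lams, fit_fn, loglik_fn, n):
    """Return (BIC, lambda, beta_hat, F_hat) minimizing BIC over a grid."""
    best = None
    for lam in lams:
        beta_hat, f_hat = fit_fn(lam)
        df = np.count_nonzero(beta_hat) + np.count_nonzero(f_hat)
        bic = -2.0 * loglik_fn(beta_hat, f_hat) + df * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, lam, beta_hat, f_hat)
    return best
```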

4. Asymptotic Properties

Without loss of generality, we write $\beta = (\beta_1^\top, \beta_2^\top)^\top$, where $\beta_1$ includes the first d components of $\beta$, which are nonzero, and $\beta_2$ consists of the remaining zero components. Denote the true value of $\beta$ by $\beta_0 = (\beta_{10}^\top, \beta_{20}^\top)^\top$, where $\beta_{10}$ and $\beta_{20}$ are the true values of $\beta_1$ and $\beta_2$, respectively. Let $\hat\beta = (\hat\beta_1^\top, \hat\beta_2^\top)^\top$ be the estimator of $\beta$ obtained from the proposed method, where $\hat\beta_1$ and $\hat\beta_2$ denote the estimates of $\beta_1$ and $\beta_2$, respectively. In what follows, we establish the asymptotic properties of $\hat\beta$.
For any penalty function $p_\lambda(|\beta_j|)$ with tuning parameter $\lambda$, let $\rho(x; \lambda) = \lambda^{-1} p_\lambda(x)$ and assume that $p_\lambda(|\beta_j|)$ belongs to the function class $\mathcal{P}$ considered in Lv and Fan [20]:
$$\mathcal{P} = \big\{p_\lambda(\cdot) : \rho(x; \lambda) \text{ is increasing in } x \in [0, \infty) \text{ and has a continuous derivative } \rho'(x; \lambda) \text{ for } x \in (0, \infty), \text{ where } \rho'(0+; \lambda) \in (0, \infty) \text{ is independent of } \lambda\big\}.$$
The class P is quite general and includes the penalty functions considered in this work. To establish the asymptotic properties of β ^ , we need the following regularity conditions.
(C1)
The true regression parameter $\beta_0$ lies in a compact set $\mathcal{B}$ in $\mathbb{R}^p$, and the true cumulative baseline hazard function $\Lambda_0(\cdot)$ is continuously differentiable and positive with $\Lambda_0'(t) > 0$ for all $t \in [\tau_1, \tau_2]$, where $[\tau_1, \tau_2]$ is the union of the supports of $(U_1, \ldots, U_M)$ and $0 < \tau_1 < \tau_2 < \infty$. In addition, we assume that $0 < \Lambda_0(\tau_2) < \infty$.
(C2)
The covariate vector Z is bounded with probability one and the covariance matrix of Z is positive definite.
(C3)
The number of examination times, M, is positive and $E(M) < \infty$. Additionally, there exists a positive constant $\eta$ such that $P(U_{m+1} - U_m \ge \eta \mid Z, M) = 1$ ($m = 1, \ldots, M - 1$). Furthermore, there exists a probability measure $\mu$ on $[\tau_1, \tau_2]$ such that the bivariate distribution function of $(U_m, U_{m+1})$ conditional on $(M, Z)$ is dominated by $\mu \times \mu$, and its Radon–Nikodym derivative, denoted by $\tilde f_m(u, v; M, Z)$ ($m = 1, \ldots, M - 1$), is positive, has twice-continuous derivatives with respect to u and v when $v - u > \eta$, and is continuously differentiable with respect to Z.
Conditions (C1) and (C2) are standard in failure time data analysis [7]. Condition (C3) pertains to the joint distribution of the examination times and ensures that two adjacent examination times are separated by at least $\eta$; otherwise, the data may contain exactly observed failure times, which require a different theoretical treatment. Note that conditions (C1)–(C3) are used to establish the root-n consistency of the unpenalized maximum likelihood estimator of the regression vector [7], which is needed to handle the penalty term in the penalized likelihood. These conditions also ensure that the log profile likelihood $p(\beta) = \sup_\Lambda \log L(\beta, \Lambda)$ has a quadratic expansion around $\beta_0$ [7].
Theorem 1 
(root-n consistency). Under conditions (C1) to (C3), if $\sqrt{n}\lambda = O_p(1)$, then $\|\hat\beta - \beta_0\| = O_p(n^{-1/2})$, where $\|\cdot\|$ denotes the Euclidean norm for a given vector.
Theorem 2 
(oracle property). Under conditions (C1) to (C3), if $\sqrt{n}\lambda \to 0$ and $n\lambda \to \infty$, then $\hat\beta$ has the following properties:
1. 
(Sparsity) $\lim_{n \to \infty} P\big(\hat\beta_2 = 0\big) = 1$;
2. 
(Asymptotic normality) $\sqrt{n}\big(\hat\beta_1 - \beta_{10}\big) \to N\big(0, \tilde I_{10}^{-1}\big)$ in distribution, where $\tilde I_{10}$ is the upper-left $d \times d$ sub-matrix of the efficient Fisher information matrix for $\beta$, denoted by $\tilde I_0$.
Theorem 1 indicates that $\hat\beta$ is consistent, and Theorem 2 (i) implies that $\hat\beta$ is sparse and has the selection consistency property, that is, $\lim_{n\to\infty} P(\{j : \hat\beta_j \ne 0\} = \{1, \ldots, d\}) = 1$. Theorem 2 (ii) implies that the estimators of the nonzero regression parameters are semiparametrically efficient. The detailed proofs of Theorems 1 and 2 under the ALASSO penalty are given in Appendix A. For the other penalty functions belonging to $\mathcal{P}$, the two theorems can be proved with analogous techniques, which are omitted in this paper.

5. A Simulation Study

We conducted a simulation study to evaluate the finite-sample performance of the proposed penalized EM algorithm. We first assumed that there exist 10 covariates following the marginal standard normal distribution with pairwise correlations $\text{corr}(Z_j, Z_k) = 0.5^{|j-k|}$ ($j, k = 1, \ldots, 10$). We set the true value $\beta_0$ of $\beta$ to $(0.7, 0.7, 0.7, 0, 0, 0, 0, 0, 0, 0)$ (large effects) or $(0.4, 0.4, 0.4, 0, 0, 0, 0, 0, 0, 0)$ (weak effects). The truncation time $\tilde A$ followed the uniform distribution on $(0, \tau)$ with $\tau = 15$. The failure time of interest $\tilde T$ was generated from model (1) with $\Lambda(t) = 0.3t$. Because we considered length-biased sampling, only the pairs satisfying $\tilde T \ge \tilde A$ were kept in the simulated data, denoted by $\{(T_i, A_i, Z_i), i = 1, \ldots, n\}$.
To construct interval censoring, for subject i we generated a series of potential examination times $U_m = U_{m-1} + 0.1 + \text{Uniform}(0, 2)$, with $m = 1, \ldots, n_c$, $U_0 = A_i$ and $U_{n_c} \le \tau$. Then, $(L_i, R_i]$ was defined as the smallest interval that brackets $T_i$. On average, about 30–44% of the observations were left-censored and 19–30% were right-censored. We considered some classical penalty functions, including LASSO, ALASSO, SCAD, SELO, SICA, MCP and BAR [17,18,20,21,22,24]. To find the optimal $\lambda$ for each penalty, we considered 20 equally spaced points over an interval $(a, b)$ and selected the one that minimizes the BIC. In particular, b was chosen to guarantee that all the regression parameter estimates were penalized to zero, while a was chosen to ensure that all the covariates were selected. The following results are based on n = 200 or 400 and 100 replications. A sketch of this data-generating mechanism is given below.
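The sketch below is our own illustration of the design just described (all names are hypothetical). Note that $\Lambda(t \mid Z) = 0.3\, t \exp(\beta^\top Z)$ corresponds to an exponential failure time with rate $0.3\exp(\beta^\top Z)$:

```python
import numpy as np

rng = np.random.default_rng(2023)

def gen_length_biased_ic(n, beta, tau=15.0, rate=0.3):
    """Generate n length-biased, interval-censored observations (L, R, A, Z)."""
    p = len(beta)
    # corr(Z_j, Z_k) = 0.5^{|j-k|} with standard normal margins
    Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    data = []
    while len(data) < n:
        Zi = rng.multivariate_normal(np.zeros(p), Sigma)
        A = rng.uniform(0.0, tau)                              # truncation time
        T = rng.exponential(1.0 / (rate * np.exp(Zi @ beta)))  # PH model, Lambda(t)=0.3t
        if T < A:
            continue                        # length-biased sampling keeps T >= A only
        Lb, Rb, U = A, np.inf, A
        while True:                         # examinations U_m = U_{m-1} + 0.1 + U(0, 2)
            U = U + 0.1 + rng.uniform(0.0, 2.0)
            if U > tau:
                break                       # no examination catches T: right-censored
            if U < T:
                Lb = U                      # latest examination before the event
            else:
                Rb = U                      # first examination after the event
                break
        data.append((Lb, Rb, A, Zi))
    return data
```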
To assess the variable selection performance, we calculated the false positives (FP) and true positives (TP), defined as the average number of selected covariates whose true coefficients are zero and the average number of selected covariates whose true coefficients are nonzero, respectively. To measure the estimation accuracy, we reported the median of the mean squared errors (MMSE) and the standard deviation of the mean squared errors (SD), where the mean squared error is defined as $(\hat\beta - \beta_0)^\top \Sigma_Z (\hat\beta - \beta_0)$ and $\Sigma_Z$ denotes the population covariance matrix of the covariate vector Z.
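For reference, these criteria amount to a few lines of code (an illustrative sketch with hypothetical names):

```python
import numpy as np

def selection_metrics(beta_hat, beta_true, Sigma_Z):
    """TP, FP and the squared error (beta_hat - beta_true)' Sigma_Z (beta_hat - beta_true)."""
    selected = beta_hat != 0
    tp = int(np.sum(selected & (beta_true != 0)))
    fp = int(np.sum(selected & (beta_true == 0)))
    diff = beta_hat - beta_true
    return tp, fp, float(diff @ Sigma_Z @ diff)
```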
Table 1 and Table 2 present the results obtained by the proposed method with $\beta_0 = (0.7, 0.7, 0.7, 0, 0, 0, 0, 0, 0, 0)$ and $\beta_0 = (0.4, 0.4, 0.4, 0, 0, 0, 0, 0, 0, 0)$, respectively. In both tables, we also present the results of the oracle estimation and of the analysis method without variable selection. The results in Table 1 and Table 2 show that almost all the penalty functions gave a similar variable selection performance; the only exception is LASSO, which yielded a slightly larger FP. This observation is expected because LASSO often selects more noise covariates than the other penalty functions [39]. In addition, one can see from the tables that, except for LASSO, the MMSEs yielded by the penalty functions were close to those of the oracle estimation and smaller than those of the analysis method without variable selection. As the sample size increased, both the variable selection performance and the estimation accuracy improved for all the penalty functions.
For comparison, we also include in Table 1 and Table 2 the results obtained by the variable selection method based on the penalized conditional likelihood (PCL). The detailed implementation of the PCL method is given in the Appendix B. Notably, Li et al. [32] also maximized the PCL to conduct variable selection but only considered the ALASSO penalty. It is clear that, compared with the PCL approach, the proposed method yielded smaller MMSEs, implying a more accurate estimation performance. Furthermore, the SDs obtained by the proposed method are smaller than those of the PCL method, because the PCL method ignores the distribution information of the truncation times and thus loses some estimation efficiency.
In this study, we also considered other simulation settings with p = 30 or 50. Specifically, we set $\beta_j = 0.7$ for $j = 1, \ldots, 3$ and $\beta_j = 0$ for $j = 4, \ldots, p$, and kept the other simulation specifications the same as above. The simulation results, presented in Table 3 and Table 4, lead to similar conclusions. In particular, the proposed method with ALASSO, SCAD, SELO, SICA, MCP and BAR yielded much smaller MMSEs than the analysis method without variable selection as p increased, which clearly demonstrates the necessity of variable selection in the presence of a large number of covariates.

6. An Application

6.1. Background and Analysis Methods

We applied the proposed method to a set of real data arising from the PLCO cancer screening trial [9,41]. Sponsored by the National Cancer Institute, the PLCO cancer screening trial was initiated in 1993 and recruited participants, aged 55 to 74, who had not previously taken part in any other cancer screening trial at ten screening centers nationwide. In particular, the participants who were randomly assigned to the screening group received the Prostate-Specific Antigen (PSA) test periodically over 13 years. If abnormally high PSA levels were found, a prostate biopsy was conducted to determine the occurrence status of prostate cancer. In this study, we focused on the prostate cancer screening data in the screening group and aimed at identifying the important risk factors for the development of prostate cancer. Prostate cancer generally causes no signs or symptoms in its early stages, but as the disease progresses, it can cause serious complications, such as urination problems and anemia; exploring its risk factors is therefore a pressing need and is beneficial for early prevention in men. To this end, the failure time of interest was defined as the age at onset of prostate cancer. Because the participants were only examined intermittently, only interval-censored observations could be obtained for the onset of prostate cancer. In addition, because the study excluded individuals who had already developed prostate cancer at recruitment, the age at onset of prostate cancer was subject to left truncation, with the truncation time being the age at which the individual enrolled in the study.
We considered seven potential risk factors: Race (1 for African American and 0 otherwise), Education (1 for at least college and 0 otherwise), Cancer (1 for having an immediate family member with any PLCO cancer and 0 otherwise), ProsCancer (1 for having an immediate family member with prostate cancer and 0 otherwise), Diabetes (1 for having diabetes and 0 otherwise), Stroke (1 for having had a stroke and 0 otherwise) and Gallblad (1 for having gall bladder stones and 0 otherwise). The sample size was n = 32,897, and the left and right censoring rates were about 8.67% and 87.69%, respectively.
To achieve variable selection, we implemented the proposed method with LASSO, ALASSO, SCAD, SELO, SICA, MCP and BAR, as in the simulation study. To select the optimal $\lambda$ for each penalty, we utilized a two-step method. In the first step, we examined a range of points over $(a, b)$ to roughly identify a narrower interval that contains the optimal tuning parameter, where, as in the simulation study, a was selected to ensure that all the covariates were selected and b was chosen to ensure that all the regression parameter estimates were penalized to zero. Next, we considered 20 equally spaced points within the narrower interval and selected the $\lambda$ that minimizes the BIC. We also employed the BIC to select the best penalty among all those considered for the data; it turned out that the PH model with SCAD and MCP yielded the smallest BIC value. To calculate the standard errors, we used the nonparametric bootstrap with 100 bootstrap samples (a sketch is given below). In addition to the proposed method, we also considered the variable selection method based on the penalized conditional likelihood (PCL) for comparison.
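The nonparametric bootstrap used for the standard errors resamples subjects with replacement and refits the penalized estimator on each bootstrap sample; a minimal sketch follows (fit_fn is a hypothetical wrapper returning the estimated $\beta$ for a given data set):

```python
import numpy as np

def bootstrap_se(data, fit_fn, B=100, seed=1):
    """Nonparametric bootstrap standard errors for the regression estimates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    estimates = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)          # resample subjects with replacement
        estimates.append(fit_fn([data[i] for i in idx]))
    return np.std(np.asarray(estimates), axis=0, ddof=1)
```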

6.2. Results

We summarize in Table 5 the results obtained by the proposed and PCL methods. The results indicate that, except for LASSO, the proposed method with the other penalties identified Race, ProsCancer and Diabetes as significant risk factors for prostate cancer. Specifically, being African American and having an immediate family member with prostate cancer increased the risk of developing prostate cancer, while having diabetes was associated with a lower risk of developing prostate cancer. These findings are in accordance with the conclusions of Meister [42], Pierce [43] and others. In addition, the results given in Table 5 show that the PCL-based method yielded relatively larger standard error estimates than the proposed method. This finding again demonstrates the advantage, or efficiency gain, of the proposed method from taking into account the distribution information of the truncation times in the inference procedure.

7. Discussion and Conclusions

In this article, we considered length-biased and interval-censored data and developed a penalized estimation procedure to choose the important variables among a large number of covariates in the PH model. The main contribution of this work is a novel penalized EM algorithm built on a two-stage data augmentation, which greatly simplifies the penalized nonparametric maximum likelihood estimation. Specifically, by introducing pseudo-truncated data and Poisson random variables, the possibly high-dimensional parameters involved in $\Lambda(t)$ have explicit updates, making the proposed algorithm simple and computationally stable. In contrast to the work of Li et al. [32], which only involved the ALASSO penalty, we proposed to jointly utilize the local linear approximation and the modified shooting algorithm, yielding sparse estimators of the regression parameters under various popular penalty functions. Thus, the proposed method offers flexible options for the data analyst. The numerical results obtained from the simulation study showed the satisfactory performance and desirable advantages of the proposed method in finite samples. Moreover, by legitimately taking into account the distribution information of the truncation times, the proposed method is more efficient than the traditional penalized conditional likelihood approach (e.g., the method of Li et al. [32]).
Notably, the findings of our prostate cancer data analysis may have certain public health implications. Specifically, African Americans and individuals who have immediate family members with prostate cancer are at elevated risk and may particularly benefit from early prevention (e.g., cancer screening) aimed at reducing the risk of developing prostate cancer.

8. Suggestions for Future Work

Notably, the proposed method only addresses variable selection in the setting of $n > p$. Obviously, in some practical applications, such as gene expression studies, p is usually much larger than n, and future efforts will be devoted to extending the proposed method to handle the case of $p \gg n$. In addition, generalizations of the proposed method to other regression models (e.g., the transformation and additive hazards models [7,44]) and to multivariate interval censoring [45] warrant further research.

Author Contributions

Methodology, F.F.; Writing—review & editing, G.C.; Supervision, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the National Natural Science Foundation of China (Grant No. 12001128) and the Natural Science Foundation of Guangdong Province of China (Grant Nos. 2022A1515011899 and 2022A1515011901).

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EM: expectation-maximization
LASSO: the least absolute shrinkage and selection operator penalty
ALASSO: the adaptive LASSO penalty
SCAD: the smoothly clipped absolute deviation penalty
SELO: the seamless-$L_0$ penalty
SICA: the smooth integration of counting and absolute deviation penalty
MCP: the minimax concave penalty
BAR: the broken adaptive ridge penalty

Appendix A. Proofs of The Asymptotic Results

Appendix A.1. Proof of Theorem 1

Proof. 
The penalized log-likelihood function can be written as
$$\log L_P(\beta, F) = \log L(\beta, F) - n\lambda\sum_{j=1}^{p}\frac{|\beta_j|}{|\tilde\beta_j|}.$$
According to the definition of the profile likelihood, for any given $\beta$, the log profile likelihood is $p(\beta) = \sup_F \log L(\beta, F) = \log L(\beta, F(\beta))$, where $F(\beta) = \arg\sup_F \log L(\beta, F)$, which can be obtained by the simplified EM algorithm. Let $\log L(\tilde\beta, \tilde F) = \max_{\beta, F} \log L(\beta, F)$ denote the maximum value of the likelihood function. Obviously, for any $\beta$, we have $\log L(\tilde\beta, \tilde F) = \max_{\beta, F}\log L(\beta, F) \ge p(\beta)$. On the other hand, $p(\tilde\beta) = \sup_F \log L(\tilde\beta, F) \ge \log L(\tilde\beta, \tilde F)$. Thus, we have $p(\tilde\beta) = \log L(\tilde\beta, \tilde F)$. Therefore, the estimator $\hat\beta$ obtained by maximizing the penalized log-likelihood is equivalent to the maximizer of the following penalized log profile likelihood:
$$Q_n(\beta) = p(\beta) - n\lambda\sum_{j=1}^{p}\frac{|\beta_j|}{|\tilde\beta_j|}.$$
Under the regularity conditions (C1)–(C3), it follows from Theorem 1 of Murphy and van der Vaart [46] that, for any random sequence $\beta_n \to^p \beta_0$, the log profile likelihood $p(\beta_n)$ has the following quadratic expansion around $\beta_0$:
$$p(\beta_n) = p(\beta_0) + (\beta_n - \beta_0)^\top\sum_{i=1}^{n}\tilde S_0(O_i) - \frac{1}{2}n(\beta_n - \beta_0)^\top \tilde I_0 (\beta_n - \beta_0) + o_{P_{\beta_0,\Lambda_0}}\big(\sqrt{n}\,\|\beta_n - \beta_0\| + 1\big)^2, \qquad (A2)$$
where $\tilde S_0$ and $\tilde I_0$ are the efficient score function for $\beta$ and the efficient Fisher information matrix, respectively.
According to Proposition 3.1 and the discussion in Section 4.4 of Huang and Wellner [47], $\log L(\beta, F)$ is concave in $(\beta, F)$. The proof of Theorem 1 in Zeng et al. [7] implies that $\hat\Lambda(\tau)$ is bounded almost surely when n is large. Hence, without sacrificing generality, we can restrict the parameter F to a compact space. Then, $p(\beta) = \sup_F \log L(\beta, F)$ is concave in $\beta$. Combined with the concavity of $-n\lambda\sum_{j=1}^{p}|\beta_j|/|\tilde\beta_j|$, this implies that $Q_n(\beta)$ is concave. To establish the root-n consistency of $\hat\beta$, it is enough to prove that, for any $\epsilon > 0$, there exists a large constant C such that
$$\liminf_{n\to\infty} P\Big(\sup_{\|u\| = C} Q_n(\beta_0 + n^{-1/2}u) < Q_n(\beta_0)\Big) \ge 1 - \epsilon. \qquad (A3)$$
This implies that, when n is sufficiently large, there exists a local maximizer of $Q_n(\beta)$ in the interior of the ball $\{\beta_0 + n^{-1/2}u : \|u\| \le C\}$ with probability at least $1 - \epsilon$. By the concavity of $Q_n(\beta)$, this local maximizer must be $\hat\beta$, and thus $\|\hat\beta - \beta_0\| = O_p(n^{-1/2})$.
To prove (A3), we have
$$\frac{1}{n}\big\{Q_n(\beta_0 + n^{-1/2}u) - Q_n(\beta_0)\big\} = \frac{1}{n}\big\{p(\beta_0 + n^{-1/2}u) - p(\beta_0)\big\} - \lambda\sum_{j=1}^{p}\frac{|\beta_{j0} + n^{-1/2}u_j| - |\beta_{j0}|}{|\tilde\beta_j|} \le \frac{1}{n}\big\{p(\beta_0 + n^{-1/2}u) - p(\beta_0)\big\} - \lambda\sum_{j=1}^{d}\frac{|\beta_{j0} + n^{-1/2}u_j| - |\beta_{j0}|}{|\tilde\beta_j|},$$
where the inequality holds because the terms with $j > d$ are non-negative ($\beta_{j0} = 0$ for $j > d$).
By the quadratic expansion (A2) and the fact that $n^{-1/2}\sum_{i=1}^{n}\tilde S_0(O_i) = O_p(1)$, we have, when n is large,
$$\frac{1}{n}\big\{p(\beta_0 + n^{-1/2}u) - p(\beta_0)\big\} = \frac{1}{n}u^\top\Big\{n^{-1/2}\sum_{i=1}^{n}\tilde S_0(O_i)\Big\} - \frac{1}{2n}u^\top \tilde I_0 u + o_{P_{\beta_0,\Lambda_0}}\big\{(\|u\| + 1)^2/n\big\} = \frac{1}{n}O_p(1)\sum_{j=1}^{p}|u_j| - \frac{1}{2n}u^\top \tilde I_0 u + o_{P_{\beta_0,\Lambda_0}}\big\{(\|u\| + 1)^2/n\big\}.$$
On the other hand, we have
$$\lambda\sum_{j=1}^{p}\frac{|\beta_{j0} + n^{-1/2}u_j| - |\beta_{j0}|}{|\tilde\beta_j|} \ge -n^{-1/2}\lambda\sum_{j=1}^{d}\frac{|u_j|}{|\tilde\beta_j|}.$$
Note that the unpenalized maximum likelihood estimator $\tilde\beta$ satisfies $\|\tilde\beta - \beta_0\| = O_p(n^{-1/2})$. By a Taylor expansion, we have, for $1 \le j \le d$,
$$\frac{1}{|\tilde\beta_j|} = \frac{1}{|\beta_{j0}|} - \frac{\text{sign}(\beta_{j0})}{\beta_{j0}^2}\big(\tilde\beta_j - \beta_{j0}\big) + o_p\big(|\tilde\beta_j - \beta_{j0}|\big) = \frac{1}{|\beta_{j0}|} + \frac{O_p(1)}{\sqrt{n}}.$$
Then, under the condition $\sqrt{n}\lambda = O_p(1)$, we have
$$n^{-1/2}\lambda\sum_{j=1}^{d}\frac{|u_j|}{|\tilde\beta_j|} = n^{-1/2}\lambda\sum_{j=1}^{d}|u_j|\Big\{\frac{1}{|\beta_{j0}|} + \frac{O_p(1)}{\sqrt{n}}\Big\} \le C\, n^{-1/2}\lambda\, O_p(1) = C\, n^{-1} O_p(1).$$
Therefore,
$$\frac{1}{n}\big\{Q_n(\beta_0 + n^{-1/2}u) - Q_n(\beta_0)\big\} \le \frac{1}{n}O_p(1)\sum_{j=1}^{p}|u_j| - \frac{1}{2n}u^\top \tilde I_0 u + o_{P_{\beta_0,\Lambda_0}}\big\{(\|u\| + 1)^2/n\big\} + C\, n^{-1} O_p(1). \qquad (A5)$$
In (A5), the first and fourth terms are of order $Cn^{-1}$, and the second term is of order $C^2 n^{-1}$ because $\tilde I_0$ is positive definite (see the proof in Zeng et al. [7]); the third term is of order smaller than $C^2 n^{-1}$. Hence, choosing a sufficiently large C makes the second term dominate the other terms. As a result, (A3) holds, which completes the proof. □

Appendix A.2. Proof of Theorem 2

Proof. 
(i) According to Theorem 1, we have $\|\hat\beta - \beta_0\| = O_p(n^{-1/2})$. To prove the sparsity property of $\hat\beta_2$, it is sufficient to show that, for any sequence $\beta_1$ such that $\|\beta_1 - \beta_{10}\| = O_p(n^{-1/2})$ and for any positive constant C,
$$\lim_{n\to\infty} P\Big(Q_n\big((\beta_1^\top, 0^\top)^\top\big) = \max_{\|\beta_2\| \le C n^{-1/2}} Q_n\big((\beta_1^\top, \beta_2^\top)^\top\big)\Big) = 1. \qquad (A6)$$
To prove (A6), we only need to show that, for any such sequence $\beta_1$ and any positive constant C, $\partial Q_n(\beta)/\partial\beta_j$ and $\beta_j$ have opposite signs for any $\beta_j \in (-Cn^{-1/2}, 0) \cup (0, Cn^{-1/2})$ ($j = d+1, \ldots, p$) with probability tending to 1.
For any $\beta_1$ satisfying $\|\beta_1 - \beta_{10}\| = O_p(n^{-1/2})$ and any $\beta_2$ such that $\|\beta_2\| \le Cn^{-1/2}$, when n is large, $p(\beta)$ has the quadratic expansion (A2). It is easy to see that $\partial p(\beta)/\partial\beta_j = O_p(n^{1/2})$ ($j = d+1, \ldots, p$). Hence, for $\beta_j \in (-Cn^{-1/2}, 0) \cup (0, Cn^{-1/2})$ ($j = d+1, \ldots, p$),
$$\frac{\partial Q_n(\beta)}{\partial\beta_j} = \frac{\partial p(\beta)}{\partial\beta_j} - n\lambda\,\frac{\text{sign}(\beta_j)}{|\tilde\beta_j|} = O_p(n^{1/2}) - n\lambda\,\frac{n^{1/2}\,\text{sign}(\beta_j)}{n^{1/2}|\tilde\beta_j|}.$$
Because $n^{1/2}(\tilde\beta_j - 0) = O_p(1)$ for $j = d+1, \ldots, p$, we have
$$\frac{\partial Q_n(\beta)}{\partial\beta_j} = n^{1/2}\big\{O_p(1) - n\lambda\,\text{sign}(\beta_j)\, O_p(1)\big\}. \qquad (A7)$$
Because $n\lambda \to \infty$, it is easy to see that the sign of $\partial Q_n(\beta)/\partial\beta_j$ in (A7) is determined by the sign of $\beta_j$ when n is large, and they have opposite signs. Theorem 2 (i) is proved.
(ii) We next prove the asymptotic normality of $\hat\beta_1$. As shown in (i), it holds that $\lim_{n\to\infty} P(\hat\beta_2 = 0) = 1$. Thus, we only need to derive the asymptotic representation of $\hat\beta_1$ on the event $\{\hat\beta_2 = 0\}$.
Let $\hat h = \sqrt{n}(\hat\beta_1 - \beta_{10})$ and $\Delta_{1n} = n^{-1/2}\sum_{i=1}^{n}\tilde S_{10}(O_i)$, where $\tilde S_{10}$ is the sub-vector of $\tilde S_0$ corresponding to $\beta_1$. For any random sample in the event $\{\hat\beta_2 = 0\}$, we have $Q_n(\hat\beta_1, 0) \ge Q_n(\beta_{10} + n^{-1/2}\tilde I_{10}^{-1}\Delta_{1n}, 0)$, where $\tilde I_{10}$ is the upper-left $d \times d$ sub-matrix of the efficient Fisher information matrix $\tilde I_0$.
Because $\|\hat\beta_1 - \beta_{10}\| = O_p(n^{-1/2})$ according to Theorem 1 and $\tilde I_{10}^{-1}\Delta_{1n} = O_p(1)$, it follows from the quadratic expansion (A2) of the log profile likelihood that
$$Q_n(\hat\beta_1, 0) = p(\beta_0) + \hat h^\top \Delta_{1n} - \frac{1}{2}\hat h^\top \tilde I_{10}\hat h + o_{P_{\beta_0,\Lambda_0}}\big((\|\hat h\| + 1)^2\big) - n\lambda\sum_{j=1}^{d}\frac{|\hat\beta_j|}{|\tilde\beta_j|} = p(\beta_0) + \hat h^\top \Delta_{1n} - \frac{1}{2}\hat h^\top \tilde I_{10}\hat h + o_p(1) - n\lambda\sum_{j=1}^{d}\frac{|\hat\beta_j|}{|\tilde\beta_j|}$$
and
$$Q_n(\beta_{10} + n^{-1/2}\tilde I_{10}^{-1}\Delta_{1n}, 0) = p(\beta_0) + \Delta_{1n}^\top \tilde I_{10}^{-1}\Delta_{1n} - \frac{1}{2}\Delta_{1n}^\top \tilde I_{10}^{-1}\Delta_{1n} + o_{P_{\beta_0,\Lambda_0}}\big((\|\tilde I_{10}^{-1}\Delta_{1n}\| + 1)^2\big) - n\lambda\sum_{j=1}^{d}\frac{|\beta_{j0} + n^{-1/2}(\tilde I_{10}^{-1}\Delta_{1n})_j|}{|\tilde\beta_j|} = p(\beta_0) + \frac{1}{2}\Delta_{1n}^\top \tilde I_{10}^{-1}\Delta_{1n} + o_p(1) - n\lambda\sum_{j=1}^{d}\frac{|\beta_{j0} + n^{-1/2}(\tilde I_{10}^{-1}\Delta_{1n})_j|}{|\tilde\beta_j|}.$$
Thus,
$$\begin{aligned} &Q_n(\hat\beta_1, 0) - Q_n(\beta_{10} + n^{-1/2}\tilde I_{10}^{-1}\Delta_{1n}, 0) \\ &= \hat h^\top \Delta_{1n} - \frac{1}{2}\hat h^\top \tilde I_{10}\hat h - \frac{1}{2}\Delta_{1n}^\top \tilde I_{10}^{-1}\Delta_{1n} - n\lambda\sum_{j=1}^{d}\frac{|\hat\beta_j| - |\beta_{j0} + n^{-1/2}(\tilde I_{10}^{-1}\Delta_{1n})_j|}{|\tilde\beta_j|} + o_p(1) \\ &= -\frac{1}{2}\big(\hat h - \tilde I_{10}^{-1}\Delta_{1n}\big)^\top \tilde I_{10}\big(\hat h - \tilde I_{10}^{-1}\Delta_{1n}\big) - n\lambda\sum_{j=1}^{d}\frac{|\hat\beta_j| - |\beta_{j0} + n^{-1/2}(\tilde I_{10}^{-1}\Delta_{1n})_j|}{|\tilde\beta_j|} + o_p(1) \\ &\le -c\,\big\|\hat h - \tilde I_{10}^{-1}\Delta_{1n}\big\|^2 + \sqrt{n}\,\lambda\sum_{j=1}^{d}\frac{|\sqrt{n}(\hat\beta_j - \beta_{j0}) - (\tilde I_{10}^{-1}\Delta_{1n})_j|}{|\tilde\beta_j|} + o_p(1) \end{aligned} \qquad (A8)$$
for some positive constant c because I ˜ 10 is positive-definite.
On the other hand, because $Q_n(\hat\beta_1, 0) - Q_n(\beta_{10} + n^{-1/2}\tilde I_{10}^{-1}\Delta_{1n}, 0) \ge 0$, combining this with (A8) gives
$$\big\|\hat h - \tilde I_{10}^{-1}\Delta_{1n}\big\|^2 \le \frac{1}{c}\sqrt{n}\,\lambda\sum_{j=1}^{d}\frac{|\sqrt{n}(\hat\beta_j - \beta_{j0}) - (\tilde I_{10}^{-1}\Delta_{1n})_j|}{|\tilde\beta_j|} + o_p(1).$$
Note that $|\tilde\beta_j| \to^p |\beta_{j0}| > 0$ for $j = 1, \ldots, d$. Thus, under the condition $\sqrt{n}\lambda \to 0$, $\|\hat h - \tilde I_{10}^{-1}\Delta_{1n}\| = o_p(1)$, and then $\sqrt{n}(\hat\beta_1 - \beta_{10}) = \tilde I_{10}^{-1}\Delta_{1n} + o_p(1) \to N(0, \tilde I_{10}^{-1})$ in distribution. □

Appendix B. The Penalized Conditional Likelihood Method

In this section, we provide the EM algorithm for implementing the maximum conditional likelihood approach. The conditional likelihood function can be written as
$$L_C(\beta, \Lambda) = \prod_{i=1}^{n}\frac{\exp\{-\Lambda(L_i)\exp(\beta^\top Z_i)\} - I(R_i < \infty)\exp\{-\Lambda(R_i)\exp(\beta^\top Z_i)\}}{\exp\{-\Lambda(A_i)\exp(\beta^\top Z_i)\}}.$$
To maximize the above conditional likelihood function, we treat $\Lambda$ as a step function. Specifically, let $0 < t_1 < \cdots < t_{K_n}$ denote the ordered distinct time points of all $L_i$, $R_i I(R_i < \infty)$ and $A_i > 0$. Let $f_k$ denote the non-negative jump size of the estimator of $\Lambda$ at $t_k$ for $k = 1, \ldots, K_n$. Then, we have $\Lambda(t) = \sum_{t_k \le t} f_k$, and the above conditional likelihood function can be re-expressed as
$$L_{C1}(\beta, F) = \prod_{i=1}^{n}\exp\Big\{-\sum_{A_i \le t_k \le L_i} f_k \exp(\beta^\top Z_i)\Big\}\times\Big[1 - \exp\Big\{-\sum_{L_i < t_k \le R_i} f_k \exp(\beta^\top Z_i)\Big\}\Big]^{I(R_i < \infty)}, \qquad (A9)$$
where $F = (f_1, f_2, \ldots, f_{K_n})$.
To present the EM algorithm, for the ith subject, we introduce a set of new independent latent variables $\{W_{ik} : k = 1, \ldots, K_n,\ A_i \le t_k \le R_i^*\}$, where $R_i^* = L_i I(R_i = \infty) + R_i I(R_i < \infty)$ and $W_{ik}$ is a Poisson latent variable with mean $f_k \exp(\beta^\top Z_i)$. Then, the likelihood function (A9) can be equivalently expressed as
$$L_{C2}(\beta, F) = \prod_{i=1}^{n} P\Big(\sum_{A_i \le t_k \le L_i} W_{ik} = 0\Big)\, P\Big(\sum_{L_i < t_k \le R_i} W_{ik} > 0\Big)^{I(R_i < \infty)}.$$
Let $P(W_{ik} \mid f_k \exp(\beta^\top Z_i))$ be the probability mass function of $W_{ik}$. If the latent variables $W_{ik}$'s were known, we would have the following complete data likelihood function:
$$L_{Cc}(\beta, F) = \prod_{i=1}^{n}\prod_{k=1}^{K_n} P\big(W_{ik} \mid f_k \exp(\beta^\top Z_i)\big)^{I(A_i \le t_k \le R_i^*)},$$
where $\sum_{A_i \le t_k \le L_i} W_{ik} = 0$ and $\sum_{L_i < t_k \le R_i} W_{ik} > 0$ if $R_i < \infty$, and $\sum_{A_i \le t_k \le L_i} W_{ik} = 0$ if $R_i = \infty$.
In the E-step of the EM algorithm, we need to determine the conditional expectation of the log-likelihood function $\log L_{Cc}(\beta, F)$ with respect to all the latent variables, which gives the following objective function:
$$Q_C(\theta; \theta^{(m)}) = \sum_{i=1}^{n}\sum_{k=1}^{K_n} I(A_i \le t_k \le R_i^*)\big\{E(W_{ik})\log f_k + E(W_{ik})\beta^\top Z_i - f_k \exp(\beta^\top Z_i)\big\}. \qquad (A10)$$
At the mth iteration of the algorithm, we need to calculate the following conditional expectations:
$$E(W_{ik}) = I(L_i < t_k \le R_i,\ R_i < \infty)\times\frac{f_k^{(m)}\exp(\beta^{(m)\top} Z_i)}{1 - \exp\Big\{-\sum_{L_i < t_l \le R_i} f_l^{(m)}\exp(\beta^{(m)\top} Z_i)\Big\}}.$$
In the M-step, we can update each $f_k$ by solving $\partial Q_C(\theta; \theta^{(m)})/\partial f_k = 0$, which yields
$$f_k = \frac{\sum_{i=1}^{n} I(A_i \le t_k \le R_i^*)\, E(W_{ik})}{\sum_{i=1}^{n} I(A_i \le t_k \le R_i^*)\exp(\beta^{(m)\top} Z_i)}. \qquad (A11)$$
Plugging (A11) into (A10), we have the following objective function:
$$H(\beta; \theta^{(m)}) + \sum_{j=1}^{p} p_\lambda(|\beta_j|),$$
where
$$H(\beta; \theta^{(m)}) = -\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{K_n} I(A_i \le t_k \le R_i^*)\, E(W_{ik})\Big[\beta^\top Z_i - \log\Big\{\sum_{j=1}^{n} I(A_j \le t_k \le R_j^*)\exp(\beta^\top Z_j)\Big\}\Big].$$
The sparse estimator of β based on L C ( β , F ) can be obtained by minimizing the above objective function, which can be accomplished with the same algorithm as in Section 3 of the paper.

References

  1. Sun, J. The Statistical Analysis of Interval-Censored Failure Time Data; Springer: New York, NY, USA, 2006. [Google Scholar]
  2. Huang, J. Efficient estimation for the proportional hazards model with interval censoring. Ann. Stat. 1996, 24, 540–568. [Google Scholar] [CrossRef]
  3. Shen, X. Proportional odds regression and sieve maximum likelihood estimation. Biometrika 1998, 85, 165–177. [Google Scholar] [CrossRef]
  4. Zeng, D.; Cai, J.; Shen, Y. Semiparametric additive risks model for interval-censored data. Stat. Sin. 2006, 16, 287–302. [Google Scholar]
  5. Zhang, Y.; Hua, L.; Huang, J. A spline-based semiparametric maximum likelihood estimation method for the Cox model with interval-censored data. Scand. J. Stat. 2010, 37, 338–354. [Google Scholar] [CrossRef]
  6. Wang, L.; McMahan, C.S.; Hudgens, M.G.; Qureshi, Z.P. A flexible, computationally efficient method for fitting the proportional hazards model to interval-censored data. Biometrics 2016, 72, 222–231. [Google Scholar] [CrossRef] [PubMed]
  7. Zeng, D.; Mao, L.; Lin, D.Y. Maximum likelihood estimation for semiparametric transformation models with interval-censored data. Biometrika 2016, 103, 253–271. [Google Scholar] [CrossRef]
  8. Zhou, Q.; Hu, T.; Sun, J. A sieve semiparametric maximum likelihood approach for regression analysis of bivariate interval-censored failure time data. J. Am. Stat. Assoc. 2017, 112, 664–672. [Google Scholar] [CrossRef]
  9. Prorok, P.C.; Andriole, G.L.; Bresalier, R.S.; Buys, S.S.; Chia, D.; Crawford, E.D.; Fogel, R.; Gelmann, E.P.; Gilbert, F.; Gohagan, J.K. Design of the prostate, lung, colorectal and ovarian (PLCO) cancer screening trial. Control. Clin. Trials 2000, 21, 273S–309S. [Google Scholar] [CrossRef]
  10. Wang, M.C. Nonparametric estimation from cross-sectional survival data. J. Am. Stat. Assoc. 1991, 86, 130–143. [Google Scholar] [CrossRef]
  11. Shen, Y.; Ning, J.; Qin, J. Analyzing length-biased data with semiparametric transformation and accelerated failure time models. J. Am. Stat. Assoc. 2009, 104, 1192–1202. [Google Scholar] [CrossRef]
  12. Ning, J.; Qin, J.; Shen, Y. Semiparametric accelerated failure time model for length-biased data with application to dementia study. Stat. Sin. 2014, 24, 313–333. [Google Scholar] [CrossRef] [PubMed]
  13. Qin, J.; Shen, Y. Statistical methods for analyzing right-censored length-biased data under Cox model. Biometrics 2010, 66, 382–392. [Google Scholar] [CrossRef] [PubMed]
  14. Qin, J.; Ning, J.; Liu, H.; Shen, Y. Maximum likelihood estimations and EM algorithms with length-biased data. J. Am. Stat. Assoc. 2011, 106, 1434–1449. [Google Scholar] [CrossRef] [PubMed]
  15. Gao, F.; Chan, K.C.G. Semiparametric regression analysis of length-biased interval-censored data. Biometrics 2019, 75, 121–132. [Google Scholar] [CrossRef]
  16. Shen, P.S.; Peng, Y.; Chen, H.J.; Chen, C.M. Maximum likelihood estimation for length-biased and interval-censored data with a nonsusceptible fraction. Lifetime Data Anal. 2022, 28, 68–88. [Google Scholar] [CrossRef]
  17. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288. [Google Scholar] [CrossRef]
  18. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
  19. Zou, H. The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429. [Google Scholar] [CrossRef]
  20. Lv, J.; Fan, Y. A unified approach to model selection and sparse recovery using regularized least squares. Ann. Stat. 2009, 37, 3498–3528. [Google Scholar] [CrossRef]
  21. Dicker, L.; Huang, B.; Lin, X. Variable selection and estimation with the seamless-L-0 penalty. Stat. Sin. 2013, 23, 929–962. [Google Scholar]
  22. Zhang, C.H. Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 2010, 38, 894–942. [Google Scholar] [CrossRef]
  23. Liu, Z.; Li, G. Efficient regularized regression with penalty for variable selection and network construction. Comput. Math. Methods Med. 2016, 2016, 3456153. [Google Scholar] [CrossRef]
  24. Dai, L.; Chen, K.; Sun, Z.; Liu, Z.; Li, G. Broken adaptive ridge regression and its asymptotic properties. J. Multivar. Anal. 2018, 168, 334–351. [Google Scholar] [CrossRef] [PubMed]
  25. Fan, J.; Li, R.; Zhang, C.H.; Zou, H. Statistical Foundations of Data Science; Chapman and Hall/CRC: New York, NY, USA, 2020. [Google Scholar]
  26. Garavand, A.; Salehnasab, C.; Behmanesh, A.; Aslani, N.; Zadeh, A.; Ghaderzadeh, M. Efficient Model for Coronary Artery Disease Diagnosis: A Comparative Study of Several Machine Learning Algorithms. J. Healthc. Eng. 2022, 2022, 5359540. [Google Scholar] [CrossRef] [PubMed]
  27. Hosseini, A.; Eshraghi, M.A.; Taami, T.; Sadeghsalehi, H.; Hoseinzadeh, Z.; Ghaderzadeh, M.; Rafiee, M. A mobile application based on efficient lightweight CNN model for classification of B-ALL cancer from non-cancerous cells: A design and implementation study. Inform. Med. Unlocked 2023, 39, 101244. [Google Scholar] [CrossRef]
  28. Garavand, A.; Behmanesh, A.; Aslani, N.; Sadeghsalehi, H.; Ghaderzadeh, M. Towards Diagnostic Aided Systems in Coronary Artery Disease Detection: A Comprehensive Multiview Survey of the State of the Art. Int. J. Intell. Syst. 2023, 2023, 6442756. [Google Scholar] [CrossRef]
  29. Ghaderzadeh, M.; Aria, M. Management of Covid-19 Detection Using Artificial Intelligence in 2020 Pandemic. In Proceedings of the ICMHI ’21: 5th International Conference on Medical and Health Informatics, Kyoto, Japan, 14–16 May 2021; pp. 32–38. [Google Scholar]
  30. Chen, L.P. Variable selection and estimation for the additive hazards model subject to left-truncation, right-censoring and measurement error in covariates. J. Stat. Comput. Simul. 2020, 90, 3261–3300. [Google Scholar] [CrossRef]
  31. He, D.; Zhou, Y.; Zou, H. High-dimensional variable selection with right-censored length-biased data. Stat. Sin. 2020, 30, 193–215. [Google Scholar] [CrossRef]
  32. Li, C.; Pak, D.; Todem, D. Adaptive lasso for the Cox regression with interval censored and possibly left truncated data. Stat. Methods Med. Res. 2020, 29, 1243–1255. [Google Scholar] [CrossRef]
  33. Withana Gamage, P.; McMahan, C.; Wang, L. Variable selection in semiparametric nonmixture cure model with interval-censored failure time data. Stat. Med. 2019, 38, 3026–3039. [Google Scholar]
  34. Li, S.; Peng, L. Instrumental Variable Estimation of Complier Causal Treatment Effect with Interval-Censored Data. Biometrics 2023, 79, 253–263. [Google Scholar] [CrossRef]
  35. Withana Gamage, P.; McMahan, C.; Wang, L. A flexible parametric approach for analyzing arbitrarily censored data that are potentially subject to left truncation under the proportional hazards model. Lifetime Data Anal. 2023, 29, 188–212. [Google Scholar] [CrossRef]
  36. Huang, C.Y.; Qin, J. Semiparametric estimation for the additive hazards model with left-truncated and right-censored data. Biometrika 2013, 100, 877–888. [Google Scholar] [CrossRef]
  37. Turnbull, B.W. The empirical distribution function with arbitrarily grouped, censored and truncated data. J. R. Stat. Soc. Ser. B (Methodol.) 1976, 38, 290–295. [Google Scholar] [CrossRef]
  38. Zhang, H.H.; Lu, W. Adaptive Lasso for Cox’s proportional hazards model. Biometrika 2007, 94, 691–703. [Google Scholar] [CrossRef]
  39. Li, S.; Wu, Q.; Sun, J. Penalized estimation of semiparametric transformation models with interval-censored data and application to Alzheimer’s disease. Stat. Methods Med. Res. 2020, 29, 2151–2166. [Google Scholar] [CrossRef] [PubMed]
  40. Zou, H.; Li, R. One-step sparse estimates in nonconcave penalized likelihood models. Ann. Stat. 2008, 36, 1509–1533. [Google Scholar]
  41. Andriole, G.L.; Crawford, E.D.; Grubb, R.L.; Buys, S.S.; Chia, D.; Church, T.R.; Fouad, M.N.; Isaacs, C.; Prorok, P. Prostate cancer screening in the randomized prostate, lung, colorectal, and ovarian cancer screening trial: Mortality results after 13 years of follow-up. J. Natl. Cancer Inst. 2012, 104, 125–132. [Google Scholar] [CrossRef] [PubMed]
  42. Meister, K. Risk Factors for Prostate Cancer; American Council on Science and Health: New York, NY, USA, 2002. [Google Scholar]
  43. Pierce, B.L. Why are diabetics at reduced risk for prostate cancer? A review of the epidemiologic evidence. Urol. Oncol. Semin. Orig. Investig. 2012, 30, 735–743. [Google Scholar] [CrossRef]
  44. Lu, T.; Li, S.; Sun, L. Combined estimating equation approaches for the additive hazards model with left-truncated and interval-censored data. Lifetime Data Anal. 2023, 29, 672–697. [Google Scholar] [CrossRef]
  45. Sun, L.; Li, S.; Wang, L.; Song, X.; Sui, X. Simultaneous variable selection in regression analysis of multivariate interval-censored data. Biometrics 2022, 78, 1402–1413. [Google Scholar] [CrossRef] [PubMed]
  46. Murphy, S.A.; Van Der Vaart, A.W. On profile likelihood. J. Am. Stat. Assoc. 2000, 95, 449–465. [Google Scholar] [CrossRef]
  47. Huang, J.; Wellner, J.A. Interval censored survival data: A review of recent progress. In Proceedings of the First Seattle Symposium in Biostatistics; Lin, D.Y., Fleming, T.R., Eds.; Springer: New York, NY, USA, 1997; pp. 123–169. [Google Scholar]
Table 1. Simulation results for the variable selection and estimation accuracy with large effects.
| Method | Penalty | TP (n = 200) | FP (n = 200) | MMSE (SD) (n = 200) | TP (n = 400) | FP (n = 400) | MMSE (SD) (n = 400) |
|---|---|---|---|---|---|---|---|
| Proposed method | LASSO | 3 | 1.27 | 0.163 (0.107) | 3 | 1.15 | 0.092 (0.060) |
| | ALASSO | 3 | 0.12 | 0.051 (0.058) | 3 | 0.11 | 0.025 (0.022) |
| | SCAD | 3 | 0.09 | 0.025 (0.046) | 3 | 0.07 | 0.012 (0.014) |
| | SELO | 3 | 0.14 | 0.030 (0.042) | 3 | 0.07 | 0.013 (0.016) |
| | SICA | 3 | 0.16 | 0.030 (0.042) | 3 | 0.07 | 0.013 (0.015) |
| | MCP | 3 | 0.16 | 0.025 (0.042) | 3 | 0.07 | 0.012 (0.015) |
| | BAR | 3 | 0.13 | 0.032 (0.040) | 3 | 0.11 | 0.014 (0.016) |
| Oracle | | - | - | 0.024 (0.041) | - | - | 0.011 (0.014) |
| Without VS | | - | - | 0.057 (0.055) | - | - | 0.026 (0.020) |
| PCL method | LASSO | 3 | 1.42 | 0.163 (0.162) | 3 | 1.29 | 0.089 (0.074) |
| | ALASSO | 3 | 0.20 | 0.081 (0.108) | 3 | 0.13 | 0.033 (0.041) |
| | SCAD | 3 | 0.12 | 0.056 (0.116) | 3 | 0.08 | 0.025 (0.041) |
| | SELO | 3 | 0.18 | 0.065 (0.087) | 3 | 0.10 | 0.021 (0.038) |
| | SICA | 3 | 0.18 | 0.066 (0.087) | 3 | 0.10 | 0.021 (0.037) |
| | MCP | 3 | 0.13 | 0.054 (0.116) | 3 | 0.10 | 0.025 (0.043) |
| | BAR | 3 | 0.19 | 0.065 (0.089) | 3 | 0.13 | 0.023 (0.035) |
“Proposed method” denotes the proposed penalized EM algorithm; “Without VS” denotes the analysis method without conducting variable selection; and “PCL method” denotes the variable selection method based on the penalized conditional likelihood.
Table 2. Simulation results for the variable selection and estimation accuracy with weak covariate effects.
| Method | Penalty | TP (n = 200) | FP (n = 200) | MMSE (SD) (n = 200) | TP (n = 400) | FP (n = 400) | MMSE (SD) (n = 400) |
|---|---|---|---|---|---|---|---|
| Proposed method | LASSO | 3 | 0.80 | 0.047 (0.040) | 3 | 0.73 | 0.024 (0.024) |
| | ALASSO | 3 | 0.25 | 0.027 (0.024) | 3 | 0.12 | 0.009 (0.010) |
| | SCAD | 3 | 0.36 | 0.030 (0.028) | 3 | 0.17 | 0.010 (0.011) |
| | SELO | 3 | 0.23 | 0.017 (0.019) | 3 | 0.10 | 0.008 (0.009) |
| | SICA | 2.99 | 0.22 | 0.017 (0.021) | 3 | 0.07 | 0.008 (0.008) |
| | MCP | 2.98 | 0.25 | 0.018 (0.025) | 3 | 0.10 | 0.007 (0.009) |
| | BAR | 3 | 0.22 | 0.015 (0.018) | 3 | 0.12 | 0.008 (0.009) |
| Oracle | | - | - | 0.013 (0.014) | - | - | 0.007 (0.007) |
| Without VS | | - | - | 0.043 (0.031) | - | - | 0.020 (0.013) |
| PCL method | LASSO | 2.93 | 1.18 | 0.065 (0.075) | 3 | 0.72 | 0.039 (0.040) |
| | ALASSO | 2.85 | 0.50 | 0.060 (0.062) | 2.98 | 0.05 | 0.015 (0.028) |
| | SCAD | 2.86 | 0.55 | 0.076 (0.063) | 2.99 | 0.15 | 0.017 (0.026) |
| | SELO | 2.87 | 0.47 | 0.053 (0.060) | 2.98 | 0.06 | 0.018 (0.028) |
| | SICA | 2.88 | 0.48 | 0.053 (0.059) | 2.98 | 0.07 | 0.015 (0.027) |
| | MCP | 2.84 | 0.43 | 0.072 (0.068) | 2.98 | 0.05 | 0.015 (0.028) |
| | BAR | 2.87 | 0.37 | 0.052 (0.059) | 2.97 | 0.08 | 0.014 (0.028) |
“Proposed method” denotes the proposed penalized EM algorithm; “Without VS” denotes the analysis method without conducting variable selection; and “PCL method” denotes the variable selection method based on the penalized conditional likelihood.
Table 3. Simulation results for the variable selection and estimation accuracy with p = 30.
| Method | Penalty | TP (n = 200) | FP (n = 200) | MMSE (SD) (n = 200) | TP (n = 400) | FP (n = 400) | MMSE (SD) (n = 400) |
|---|---|---|---|---|---|---|---|
| Proposed method | LASSO | 3 | 1.59 | 0.307 (0.143) | 3 | 1.55 | 0.184 (0.082) |
| | ALASSO | 3 | 0.45 | 0.085 (0.075) | 3 | 0.13 | 0.028 (0.030) |
| | SCAD | 3 | 0.26 | 0.041 (0.073) | 3 | 0.15 | 0.011 (0.017) |
| | SELO | 3 | 0.35 | 0.046 (0.051) | 3 | 0.12 | 0.014 (0.019) |
| | SICA | 3 | 0.31 | 0.045 (0.051) | 3 | 0.12 | 0.015 (0.019) |
| | MCP | 3 | 0.14 | 0.030 (0.043) | 3 | 0.13 | 0.012 (0.017) |
| | BAR | 3 | 0.46 | 0.042 (0.044) | 3 | 0.23 | 0.015 (0.018) |
| Oracle | | - | - | 0.026 (0.035) | - | - | 0.010 (0.017) |
| Without VS | | - | - | 0.216 (0.148) | - | - | 0.087 (0.043) |
| PCL method | LASSO | 3 | 1.87 | 0.362 (0.223) | 3 | 1.33 | 0.220 (0.129) |
| | ALASSO | 3 | 0.85 | 0.114 (0.124) | 3 | 0.24 | 0.027 (0.046) |
| | SCAD | 2.97 | 0.41 | 0.080 (0.134) | 3 | 0.14 | 0.028 (0.038) |
| | SELO | 2.99 | 0.80 | 0.094 (0.106) | 3 | 0.20 | 0.025 (0.035) |
| | SICA | 3 | 0.50 | 0.091 (0.112) | 3 | 0.18 | 0.024 (0.035) |
| | MCP | 2.98 | 0.44 | 0.073 (0.146) | 3 | 0.15 | 0.021 (0.044) |
| | BAR | 2.99 | 0.77 | 0.100 (0.136) | 3 | 0.37 | 0.026 (0.037) |
“Proposed method” denotes the proposed penalized EM algorithm; “Without VS” denotes the analysis method without conducting variable selection; and “PCL method” denotes the variable selection method based on the penalized conditional likelihood.
Table 4. Simulation results for the variable selection and estimation accuracy with p = 50.
| Method | Penalty | TP (n = 200) | FP (n = 200) | MMSE (SD) (n = 200) | TP (n = 400) | FP (n = 400) | MMSE (SD) (n = 400) |
|---|---|---|---|---|---|---|---|
| Proposed method | LASSO | 3 | 1.54 | 0.394 (0.117) | 3 | 1.33 | 0.269 (0.091) |
| | ALASSO | 3 | 0.45 | 0.130 (0.076) | 3 | 0.21 | 0.037 (0.033) |
| | SCAD | 3 | 0.29 | 0.038 (0.061) | 3 | 0.03 | 0.014 (0.023) |
| | SELO | 3 | 0.27 | 0.049 (0.041) | 3 | 0.14 | 0.018 (0.022) |
| | SICA | 3 | 0.27 | 0.048 (0.047) | 3 | 0.20 | 0.016 (0.020) |
| | MCP | 3 | 0.14 | 0.028 (0.046) | 3 | 0.06 | 0.014 (0.017) |
| | BAR | 3 | 0.58 | 0.045 (0.044) | 3 | 0.37 | 0.017 (0.020) |
| Oracle | | - | - | 0.021 (0.030) | - | - | 0.011 (0.016) |
| Without VS | | - | - | 0.637 (0.453) | - | - | 0.169 (0.070) |
| PCL method | LASSO | 3 | 1.74 | 0.469 (0.263) | 3 | 1.58 | 0.297 (0.139) |
| | ALASSO | 2.99 | 0.82 | 0.155 (0.160) | 3 | 0.38 | 0.037 (0.045) |
| | SCAD | 2.99 | 0.61 | 0.101 (0.152) | 3 | 0.20 | 0.024 (0.035) |
| | SELO | 2.99 | 0.70 | 0.088 (0.105) | 3 | 0.54 | 0.023 (0.051) |
| | SICA | 2.99 | 0.65 | 0.089 (0.103) | 3 | 0.57 | 0.022 (0.051) |
| | MCP | 2.99 | 0.56 | 0.093 (0.157) | 3 | 0.29 | 0.025 (0.041) |
| | BAR | 3 | 1.02 | 0.097 (0.164) | 3 | 0.62 | 0.036 (0.042) |
“Proposed method” denotes the proposed penalized EM algorithm; “Without VS” denotes the analysis method without conducting variable selection; and “PCL method” denotes the variable selection method based on the penalized conditional likelihood.
Table 5. Analysis results of the prostate cancer screening data.
| Method | Covariate | LASSO | ALASSO | SCAD | SELO | SICA | MCP | BAR | Without VS |
|---|---|---|---|---|---|---|---|---|---|
| Proposed method | Race | 0.332 (0.084) | 0.364 * (0.088) | 0.444 * (0.072) | 0.408 * (0.082) | 0.408 * (0.083) | 0.444 * (0.072) | 0.412 * (0.081) | 0.458 * (0.064) |
| | Education | 0.047 (0.027) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0.070 * (0.031) |
| | Cancer | 0.032 (0.029) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0.047 (0.032) |
| | ProsCancer | 0.346 * (0.053) | 0.377 * (0.058) | 0.421 * (0.049) | 0.397 * (0.053) | 0.398 * (0.056) | 0.421 * (0.049) | 0.405 * (0.052) | 0.394 * (0.049) |
| | Diabetes | −0.310 * (0.060) | −0.336 * (0.068) | −0.416 * (0.059) | −0.373 * (0.065) | −0.374 * (0.068) | −0.416 * (0.059) | −0.384 * (0.062) | −0.402 * (0.064) |
| | Stroke | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | −0.244 * (0.109) |
| | Gallblad | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | −0.064 (0.058) |
| PCL method | Race | 0.307 * (0.087) | 0.399 * (0.114) | 0.420 * (0.102) | 0.385 * (0.102) | 0.394 * (0.105) | 0.420 * (0.102) | 0.377 * (0.114) | 0.430 * (0.067) |
| | Education | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | −0.005 (0.032) |
| | Cancer | 0.075 * (0.033) | 0.065 (0.049) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0.089 * (0.033) |
| | ProsCancer | 0.360 * (0.062) | 0.407 * (0.062) | 0.453 * (0.062) | 0.436 * (0.063) | 0.441 * (0.064) | 0.453 * (0.062) | 0.437 * (0.063) | 0.407 * (0.051) |
| | Diabetes | −0.216 * (0.073) | −0.286 * (0.078) | −0.316 * (0.077) | −0.266 * (0.089) | −0.279 * (0.093) | −0.316 * (0.077) | −0.266 * (0.087) | −0.320 * (0.065) |
| | Stroke | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | −0.011 (0.109) |
| | Gallblad | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0 (-) | 0.090 (0.060) |
“Proposed method” denotes the proposed penalized EM algorithm; “PCL method” denotes the variable selection method based on the penalized conditional likelihood; “Without VS” denotes the analysis method without conducting variable selection; and “*” indicates that the covariate effect is significant at the level of 0.05.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
