Article

Analysis of Receiver Operating Characteristic Curves for Cure Survival Data and Mismeasured Biomarkers

Department of Statistics, National Chengchi University, Taipei City 116, Taiwan
Mathematics 2025, 13(3), 424; https://doi.org/10.3390/math13030424
Submission received: 13 December 2024 / Revised: 20 January 2025 / Accepted: 25 January 2025 / Published: 27 January 2025
(This article belongs to the Special Issue Statistical Analysis and Data Science for Complex Data)

Abstract

Cure models and receiver operating characteristic (ROC) curve estimation are two important topics in survival analysis and have each received attention for many years. In the development of biostatistics, these two topics have been discussed thoroughly, but separately; little work has addressed estimation of the ROC curve based on survival data with a cure fraction. Moreover, while a large body of estimation methods has been proposed, they rely on the implicit assumption that the variables are precisely measured. In applications, measurement errors are ubiquitous, and ignoring them can bias the estimator and lead to wrong conclusions. In this paper, we study the estimation of the ROC curve and the area under the curve (AUC) when variables or biomarkers are subject to measurement error. We propose a valid procedure to handle measurement error effects and to estimate the parameters in the cure model as well as the AUC. We also establish the theoretical properties of the proposed estimators with rigorous justification.

1. Introduction

Survival analysis has been an important research topic in biostatistics. The main purpose of survival analysis is to understand the time at which a specific event or disease happens among patients and make a prediction. The key feature of survival data is that the time-to-event variable either is observed at a finite time (i.e., the event, such as death, happens during the observation period) or cannot be observed due to censoring. A more detailed introduction can be found in some monographs, such as [1]. However, in the context of survival analysis, there is a situation in which patients never experience the failure event (e.g., death) in the study period; this phenomenon is called cure, and the corresponding model and data structure is the well-known cure model. There are some estimation methods to analyze the cure model in the literature. For example, [2] considered the transformation model for survival data and implemented the logistic regression model to characterize the probability of cure. Ref. [3] proposed the conditional likelihood function given truncation variables. Ref. [4] explored the cure model with a mixture of single-index and Cox models. More comprehensive discussions are summarized in [5].
In applications, noisy data are usually inevitable due to the data collection or sampling mechanism. One characteristic of typical noisy data is measurement error, which indicates that the observed variables cannot fully reflect their true values. This phenomenon is usually caused by imprecise instruments or incorrect records by researchers. In the framework of survival data with cure models, some methods have been developed to deal with measurement error problems. To name a few, ref. [6] proposed the simulation and extrapolation (SIMEX) method to correct for measurement errors, and ref. [7] developed the conditional expectation approach to correct the error effects. Unlike [6,7], who focused on the Cox proportional hazards model, ref. [8] studied the transformation model and adopted the SIMEX method to handle measurement error.
The other important issue in survival analysis is the estimation of the receiver operating characteristic (ROC) curve, whose primary goal is to display the sensitivity and specificity of a continuous marker for a given disease. In the existing literature, estimation procedures focus on censored data, including [9,10,11,12,13]. However, to our knowledge, few methods are available to estimate the ROC curve when the survival data contain a cured group and the variables or biomarkers suffer from measurement error, except for [14,15], who considered the Cox model and the mixture cure model with the decomposition of sensitivity and specificity, respectively. The other challenging feature in the analysis is measurement error in biomarkers, which frequently appears in real-world applications. For example, [16] studied the Mayo Clinic primary biliary cirrhosis dataset with measurement errors in serum bilirubin, serum albumin, and prothrombin time; ref. [17] pointed out that the biomarker systolic blood pressure (SBP), one of the key prognostic factors for cardiovascular risk scores, is possibly subject to measurement error; and ref. [18] implemented three error-prone biomarkers, the ER, PR, and Ki67 proteins, to analyze nurses' health study data on breast cancer. This literature reveals that measurement error in biomarkers can affect the performance of standard diagnostic measures, such as the ROC curve and the area under the ROC curve (AUC). While error-prone biomarkers have been discussed in survival analysis and in the estimation of ROC curves (e.g., [16,17]), estimation methods that handle both the cure fraction and error-prone biomarkers appear to be unavailable.
To fill this research gap, we develop a new estimation procedure in this paper. Specifically, we first consider the transformation model with a cure fraction in the survival data and measurement error in the covariates or biomarkers. Our setting generalizes the conventional implementation of the Cox model. Unlike existing methods that adopted the SIMEX method to correct for measurement error (e.g., [6,8]), we propose the insertion method, whose key idea is to construct a new function whose conditional expectation recovers the true function of the unobserved covariates or biomarkers. The corrected estimating function ensures the consistency of the estimator. After that, we further explore the estimation of the ROC curve when the survival data contain cured samples and mismeasured variables. We develop time-independent and time-dependent estimation of the ROC curves and the corresponding estimation of the area under the curve (AUC). Theoretically, we also establish the consistency and the asymptotic normality of the AUC estimator. The contribution of this paper is a valid estimation strategy, with the main focus on its theoretical establishment with rigorous justification.
The remainder is organized as follows. In Section 2, we introduce notation and models. In Section 3, we first present the proposed method to correct the measurement error effect and derive the estimator. Moreover, the corresponding theoretical properties are presented. In Section 4, we present numerical experiments and their results. We conclude the paper with discussion in Section 5. Finally, the proofs of the theoretical properties are placed in Appendix A.

2. Notation and Models

2.1. Cure Model

Let $T$ and $C$ denote the (uncured) failure time and the censoring time, respectively. With both cured and uncured subjects, the failure time is determined by $\tilde{T} \triangleq A T + (1-A)\infty$, where $A \in \{0,1\}$ indicates whether a subject is cured ($A = 0$) or uncured ($A = 1$). To characterize $A$, we consider the logistic regression model
$$\pi(X) \triangleq P(A = 1 \mid X) = \frac{\exp(\alpha^\top X)}{1 + \exp(\alpha^\top X)}, \tag{1}$$
where X is the p-dimensional vector of covariates or biomarkers and α is the p-dimensional vector of the parameters associated with X.
On the other hand, regarding the uncured failure time T, we consider the survivor function that follows the transformation model (e.g., [19,20])
$$S_u(t \mid X_i) = \left\{ \frac{\exp(\beta^\top X_i)}{\exp(\beta^\top X_i) + \rho H(t)} \right\}^{1/\rho}, \tag{2}$$
where $\beta$ is the $p$-dimensional vector of parameters associated with $X$, $H(t)$ is an unknown, strictly increasing function, and $\rho$ is a known parameter. In particular, when $\rho = 1$, (2) is the proportional odds (PO) model; when $\rho \to 0$, (2) reduces to the proportional hazards (PH) model. Given (2), the survivor function of $\tilde{T}$ is given by
$$\begin{aligned} S(t \mid X) &\triangleq P(\tilde{T} > t \mid X) \\ &= P(\tilde{T} > t \mid X, A = 1)\, P(A = 1 \mid X) + P(\tilde{T} > t \mid X, A = 0)\, P(A = 0 \mid X) \\ &= P(T > t \mid X)\, \pi(X) + P(\infty > t \mid X)\, \{1 - \pi(X)\} \\ &= S_u(t \mid X)\, \pi(X) + 1 - \pi(X). \end{aligned}$$
Recall that $C$ denotes the censoring time, and further define the observed survival time $Y \triangleq \min\{\tilde{T}, C\}$ and the censoring indicator $\delta \triangleq I(\tilde{T} \leq C)$, with $I(\cdot)$ being the indicator function. Let $\{\{Y_i, \delta_i, X_i\} : i = 1, \ldots, n\}$ denote independent and identically distributed copies of $\{Y, \delta, X\}$.
To develop the method in the following sections, we impose assumptions that are typical in survival analysis, including noninformative censoring and the conditional independence of the failure time and the censoring time given the covariates.
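To make the data structure concrete, the following sketch simulates $\{Y_i, \delta_i, X_i\}$ from the models above, taking $H(t) = t$ for illustration; the function name `simulate_cure_data` and the censoring bound `tau0` are expository choices, not part of the paper. The uncured failure time is drawn by inverting (2): setting $S_u(T \mid X) = U$ with $U \sim \mathrm{Uniform}(0,1)$ gives $T = \exp(\beta^\top X)(U^{-\rho} - 1)/\rho$.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cure_data(n, alpha, beta, rho, tau0, p=1):
    """Simulate {Y_i, delta_i, X_i} from the mixture cure model:
    cure status A_i ~ Bernoulli(pi(X_i)) with the logistic model (1),
    and the uncured time T_i from the transformation model (2) with H(t) = t."""
    X = rng.normal(size=(n, p))
    lin = X @ alpha
    pi = np.exp(lin) / (1.0 + np.exp(lin))            # model (1)
    A = rng.binomial(1, pi)                            # 1 = uncured
    # Invert S_u(t | X) = {e^{b'X} / (e^{b'X} + rho*t)}^{1/rho}:
    # S_u(T) = U  =>  T = e^{b'X} (U^{-rho} - 1) / rho.
    U = rng.uniform(size=n)
    T = np.exp(X @ beta) * (U ** (-rho) - 1.0) / rho
    T_tilde = np.where(A == 1, T, np.inf)              # cured subjects never fail
    C = rng.uniform(0.0, tau0, size=n)                 # censoring time
    Y = np.minimum(T_tilde, C)
    delta = (T_tilde <= C).astype(int)
    return Y, delta, X, A
```

Cured subjects ($A_i = 0$) have $\tilde{T}_i = \infty$ and are therefore always censored at $C_i$.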

2.2. Measurement Error Model

In applications, the covariates or biomarkers are possibly subject to measurement error. Here, we consider the case where $X_i$ is error-contaminated, and we let $X_i^*$ denote the observed, or surrogate, measurement of $X_i$. Consistent with most work in the literature (e.g., [21]), we assume that the $X_i$ are continuous and linked with $X_i^*$ by the additive measurement error model
$$X_i^* = X_i + \epsilon_i, \tag{3}$$
where $\epsilon_i$ is independent of $\{Y_i, \delta_i, X_i\}$ and $\epsilon_i \sim N(0_p, \Sigma_\epsilon)$ with covariance matrix $\Sigma_\epsilon$, where $0_p$ represents the $p \times 1$ zero vector. Let $\omega_{jk} \triangleq \operatorname{cov}(X_{ij}, X_{ik}) / \operatorname{cov}(X_{ij}^*, X_{ik}^*)$ denote the reliability ratio of the covariances of the $j$th and $k$th covariates and their surrogates for $j, k = 1, \ldots, p$; it reflects the magnitude of variation between the unobserved and observed covariates, and higher values indicate milder measurement error. Note that $\operatorname{cov}(X_{ij}^*, X_{ik}^*) = \operatorname{cov}(X_{ij}, X_{ik}) + \operatorname{cov}(\epsilon_{ij}, \epsilon_{ik})$, which implies that $\omega_{jk} = \operatorname{cov}(X_{ij}, X_{ik}) / \{\operatorname{cov}(X_{ij}, X_{ik}) + \operatorname{cov}(\epsilon_{ij}, \epsilon_{ik})\}$. In applications, we may assume that the reliability ratio $\omega_{jk}$ is not small for all $j, k = 1, \ldots, p$ (e.g., [22]), which implies that the entries of $\Sigma_\epsilon$ are not too large.
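As a quick numerical illustration of the reliability ratio (a sketch with $p = 1$ and hypothetical variances, not values taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# True covariate X and surrogate X* = X + eps, following model (3), with p = 1.
n, var_x, var_eps = 200_000, 1.0, 0.35
X = rng.normal(0.0, np.sqrt(var_x), size=n)
eps = rng.normal(0.0, np.sqrt(var_eps), size=n)
X_star = X + eps

# Empirical reliability ratio omega = var(X) / var(X*);
# its population value is var_x / (var_x + var_eps).
omega_hat = np.var(X) / np.var(X_star)
omega_true = var_x / (var_x + var_eps)
```

Here `omega_true` is roughly 0.74, i.e., a moderate amount of measurement error.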
In situations where the parameters for the measurement error model (3) are unknown, we may utilize the information carried by additional data sources, such as repeated measurements or validation subsamples. As a result, to highlight key ideas, we focus our attention on the estimation of parameters associated with the survival model and assume that the parameter Σ ϵ for measurement error model (3) is known. This assumption is reasonable, typically arising in two circumstances: (i) prior studies provide the information on the covariate mismeasurement and offer an estimate of Σ ϵ , and (ii) in conducting sensitivity analyses, different values of Σ ϵ are specified to understand how mismeasurement effects may affect inference results about the parameters associated with the survival model.

3. Methodology

Suppose that we have a sample of $n$ subjects and that for $i = 1, \ldots, n$, $\{Y_i, \delta_i, X_i\}$ has the same distribution as $\{Y, \delta, X\}$. We first estimate the unknown parameters $(\alpha, \beta, H(\cdot))$ and the survivor function of $\tilde{T}$ based on formulation (2) with suitable error correction. After that, we develop the error-corrected method to estimate the time-independent/time-dependent AUC. Moreover, theoretical results are established after presenting the proposed estimation procedures.

3.1. Construction of the Error-Corrected Likelihood Function

To develop our approach, we start the discussion by pretending that $X_i$ is available. After that, we replace $X_i$ with the surrogate $X_i^*$ and propose the error-corrected method to adjust for error effects.
For cured patients with $\delta_i = 0$ and $A_i = 0$, we have the contribution $\{P(\tilde{T}_i = \infty)\}^{1-A_i} = \{1 - \pi(X_i)\}^{1-A_i}$. In addition, patients who encounter the failure event, i.e., $\delta_i = 1$ and $A_i = 1$, contribute $\{f_T(Y_i \mid X_i)\, \pi(X_i)\}^{\delta_i}$, where $f_T(\cdot \mid X_i)$ represents the density function of the uncured failure time. Moreover, observations that are censored but not cured ($\delta_i = 0$ and $A_i = 1$) contribute $\{S_u(Y_i \mid X_i)\, \pi(X_i)\}^{I(\delta_i = 0, A_i = 1)}$. Therefore, for subjects $i = 1, \ldots, n$, the complete likelihood function is given by
$$L \triangleq \prod_{i=1}^n \{1 - \pi(X_i)\}^{1-A_i}\, \{\pi(X_i)\}^{A_i} \times \left\{ \frac{\mathrm{d}H(Y_i)}{\exp(\beta^\top X_i) + \rho H(Y_i)} \right\}^{\delta_i} \left\{ \frac{\exp(\beta^\top X_i)}{\exp(\beta^\top X_i) + \rho H(Y_i)} \right\}^{A_i/\rho}, \tag{4}$$
where $\mathrm{d}H(\cdot)$ is the derivative of $H(\cdot)$, and the corresponding log-likelihood function of (4) is
$$\ell = \sum_{i=1}^n \Big( A_i\, \alpha^\top X_i - \log\{1 + \exp(\alpha^\top X_i)\} + \delta_i \big[ \log \mathrm{d}H(Y_i) - \log\{\exp(\beta^\top X_i) + \rho H(Y_i)\} \big] + \frac{A_i}{\rho} \big[ \beta^\top X_i - \log\{\exp(\beta^\top X_i) + \rho H(Y_i)\} \big] \Big). \tag{5}$$
However, note that in (5) we usually have $X_i^*$ rather than $X_i$, so a suitable correction is crucial. Our strategy is to find a new likelihood function based on $X_i^*$, denoted by $\ell^*$, such that the conditional expectation satisfies $E(\ell^* \mid X_i) = \ell$. In this way, we have $E(\ell^*) = E(\ell)$, showing that $\ell^*$ is an error-corrected likelihood function whose maximization targets the same values as $\ell$. By measurement error model (3) and the moment-generating function of the normal distribution, it is straightforward to obtain $E(X_i^* \mid X_i) = X_i$ and $E\{\exp(\alpha^\top X_i^*) \mid X_i\} = \exp(\alpha^\top X_i + \frac{1}{2}\alpha^\top \Sigma_\epsilon \alpha)$. According to (5), however, the challenge is that the error-prone covariate appears in $\log\{1 + \exp(\alpha^\top X_i)\}$ and $\log\{\exp(\beta^\top X_i) + \rho H(Y_i)\}$, which are nonlinear terms.
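The moment identity $E\{\exp(\alpha^\top X_i^*) \mid X_i\} = \exp(\alpha^\top X_i + \frac{1}{2}\alpha^\top \Sigma_\epsilon \alpha)$, which drives the corrections below, can be checked by Monte Carlo; the dimension and parameter values in this sketch are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo check of the lognormal moment identity:
# with eps ~ N(0, Sigma_eps), E{exp(a'X*) | X} = exp(a'X + a'Sigma_eps a / 2).
p = 2
alpha = np.array([0.5, -0.3])
Sigma_eps = np.array([[0.2, 0.05], [0.05, 0.1]])
x = np.array([1.0, 0.5])                              # a fixed value of X

eps = rng.multivariate_normal(np.zeros(p), Sigma_eps, size=500_000)
lhs = np.mean(np.exp((x + eps) @ alpha))              # E{exp(alpha' X*) | X = x}
rhs = np.exp(alpha @ x + 0.5 * alpha @ Sigma_eps @ alpha)
```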
To address this issue, we first consider the function $X_i^* \exp(\alpha^\top X_i^*) / \{1 + \exp(\alpha^\top X_i^*)\}$. Since the expectation of a ratio of random variables is often difficult to compute, we adopt an approximation. Specifically, let $\Xi_{1i} \triangleq X_i^* \exp(\alpha^\top X_i^*)$ and $\Xi_{2i} \triangleq 1 + \exp(\alpha^\top X_i^*)$. By the second-order Taylor series expansion (e.g., [23], pp. 69–72), we have the following approximation:
$$\begin{aligned} \frac{\Xi_{1i}}{\Xi_{2i}} \approx{}& \frac{E(\Xi_{1i} \mid X_i)}{E(\Xi_{2i} \mid X_i)} + \frac{1}{E(\Xi_{2i} \mid X_i)}\{\Xi_{1i} - E(\Xi_{1i} \mid X_i)\} - \frac{E(\Xi_{1i} \mid X_i)}{\{E(\Xi_{2i} \mid X_i)\}^2}\{\Xi_{2i} - E(\Xi_{2i} \mid X_i)\} \\ &- \frac{1}{2}\, \frac{1}{\{E(\Xi_{2i} \mid X_i)\}^2}\{\Xi_{1i} - E(\Xi_{1i} \mid X_i)\}\{\Xi_{2i} - E(\Xi_{2i} \mid X_i)\} + \frac{1}{2}\, \frac{E(\Xi_{1i} \mid X_i)}{\{E(\Xi_{2i} \mid X_i)\}^3}\{\Xi_{2i} - E(\Xi_{2i} \mid X_i)\}^2. \end{aligned}$$
Taking the conditional expectation gives that
$$\begin{aligned} E\left\{ \frac{X_i^* \exp(\alpha^\top X_i^*)}{1 + \exp(\alpha^\top X_i^*)} \,\Big|\, X_i \right\} \approx{}& \frac{(X_i + \Sigma_\epsilon \alpha)\exp(\alpha^\top X_i + \frac{1}{2}\alpha^\top \Sigma_\epsilon \alpha)}{1 + \exp(\alpha^\top X_i + \frac{1}{2}\alpha^\top \Sigma_\epsilon \alpha)} - \frac{1}{2}\, \frac{\operatorname{cov}(\Xi_{1i}, \Xi_{2i} \mid X_i)}{\{1 + \exp(\alpha^\top X_i + \frac{1}{2}\alpha^\top \Sigma_\epsilon \alpha)\}^2} \\ &+ \frac{1}{2}\, \frac{(X_i + \Sigma_\epsilon \alpha)\exp(\alpha^\top X_i + \frac{1}{2}\alpha^\top \Sigma_\epsilon \alpha)}{\{1 + \exp(\alpha^\top X_i + \frac{1}{2}\alpha^\top \Sigma_\epsilon \alpha)\}^3}\, \operatorname{var}(\Xi_{2i} \mid X_i). \end{aligned} \tag{6}$$
On the other hand, the conditional variance of $\Xi_{2i}$ and the conditional covariance of $\Xi_{1i}$ and $\Xi_{2i}$, given $X_i$, are respectively derived as
$$\begin{aligned} \operatorname{var}(\Xi_{2i} \mid X_i) &= \operatorname{var}\{1 + \exp(\alpha^\top X_i^*) \mid X_i\} = \operatorname{var}\{\exp(\alpha^\top X_i^*) \mid X_i\} \\ &= E\{\exp(2\alpha^\top X_i^*) \mid X_i\} - \big[E\{\exp(\alpha^\top X_i^*) \mid X_i\}\big]^2 \\ &= \exp(2\alpha^\top X_i + 2\alpha^\top \Sigma_\epsilon \alpha) - \exp(2\alpha^\top X_i + \alpha^\top \Sigma_\epsilon \alpha) \end{aligned}$$
and
$$\begin{aligned} \operatorname{cov}(\Xi_{1i}, \Xi_{2i} \mid X_i) &= E\{X_i^* \exp(\alpha^\top X_i^*) + X_i^* \exp(2\alpha^\top X_i^*) \mid X_i\} - E\{X_i^* \exp(\alpha^\top X_i^*) \mid X_i\}\, E\{1 + \exp(\alpha^\top X_i^*) \mid X_i\} \\ &= (X_i + 2\Sigma_\epsilon \alpha)\exp(2\alpha^\top X_i + 2\alpha^\top \Sigma_\epsilon \alpha) - (X_i + \Sigma_\epsilon \alpha)\exp(2\alpha^\top X_i + \alpha^\top \Sigma_\epsilon \alpha). \end{aligned}$$
Provided that the reliability ratio is not too small, or equivalently, that the covariance matrix $\Sigma_\epsilon$ of the noise term is not too large, both $\operatorname{var}(\Xi_{2i} \mid X_i)$ and $\operatorname{cov}(\Xi_{1i}, \Xi_{2i} \mid X_i)$ are close to zero, so the second and last terms in (6) are negligible; thus, we have that
$$E\left\{ \frac{X_i^* \exp(\alpha^\top X_i^*)}{1 + \exp(\alpha^\top X_i^*)} \,\Big|\, X_i \right\} \approx \frac{(X_i + \Sigma_\epsilon \alpha)\exp(\alpha^\top X_i + \frac{1}{2}\alpha^\top \Sigma_\epsilon \alpha)}{1 + \exp(\alpha^\top X_i + \frac{1}{2}\alpha^\top \Sigma_\epsilon \alpha)}, \tag{7}$$
and performing integration on (7) with respect to $\alpha$ yields
$$E\big[\log\{1 + \exp(\alpha^\top X_i^*)\} \,\big|\, X_i\big] \approx \log\{1 + \exp(\alpha^\top X_i + \tfrac{1}{2}\alpha^\top \Sigma_\epsilon \alpha)\} \approx \log\{1 + \exp(\alpha^\top X_i)\} + \tfrac{1}{2}\alpha^\top \Sigma_\epsilon \alpha, \tag{8}$$
where the last approximation follows because the "plus one" term is almost negligible. Thus, (8) suggests that a suitable error correction of $\log\{1 + \exp(\alpha^\top X_i)\}$ is
$$\log\{1 + \exp(\alpha^\top X_i^*)\} - \tfrac{1}{2}\alpha^\top \Sigma_\epsilon \alpha. \tag{9}$$
Further, derivations similar to (7) and (8) yield the approximate equality
$$E\big[\log\{\rho H(Y_i) + \exp(\beta^\top X_i^*)\} \,\big|\, X_i\big] \approx \log\{\rho H(Y_i) + \exp(\beta^\top X_i)\} + \tfrac{1}{2}\beta^\top \Sigma_\epsilon \beta,$$
yielding that a suitable error correction of $\log\{\rho H(Y_i) + \exp(\beta^\top X_i)\}$ is
$$\log\{\rho H(Y_i) + \exp(\beta^\top X_i^*)\} - \tfrac{1}{2}\beta^\top \Sigma_\epsilon \beta. \tag{10}$$
Consequently, combining (5) with (9) and (10) yields the corrected log-likelihood function
$$\begin{aligned} \ell^* = \sum_{i=1}^n \Big( & A_i\, \alpha^\top X_i^* - \log\{1 + \exp(\alpha^\top X_i^*)\} + \tfrac{1}{2}\alpha^\top \Sigma_\epsilon \alpha \\ &+ \delta_i \big[ \log \mathrm{d}H(Y_i) - \log\{\rho H(Y_i) + \exp(\beta^\top X_i^*)\} + \tfrac{1}{2}\beta^\top \Sigma_\epsilon \beta \big] \\ &+ \frac{A_i}{\rho} \big[ \beta^\top X_i^* - \log\{\rho H(Y_i) + \exp(\beta^\top X_i^*)\} + \tfrac{1}{2}\beta^\top \Sigma_\epsilon \beta \big] \Big). \end{aligned} \tag{11}$$

3.2. Estimation of Parameters and Functions

After establishing the corrected log-likelihood function (11), we aim to estimate the unknown parameters $\alpha$ and $\beta$, as well as the unknown function $H(\cdot)$.
However, another concern is that for censored observations ($\delta_i = 0$), whether they are cured or not is unknown; thus, $A_i$ can be regarded as a missing value. To address this concern, we apply the expectation–maximization (EM) algorithm with error correction. Specifically, in the E-step, we start by considering $E(A_i \mid \delta_i, \tilde{T}_i, X_i)$ to replace $A_i$. Recall that we observe only $X_i^*$ instead of $X_i$; it is then crucial to make a suitable adjustment. Let $\xi_i \triangleq E(A_i \mid \delta_i, \tilde{T}_i, X_i)$. We aim to find $\xi_i^* \triangleq E(\xi_i \mid X_i^*)$, which is in terms of the observed covariate $X_i^*$, such that the measurement error is adjusted, i.e., $E(\xi_i^*) = E(\xi_i)$.
When $\delta_i = 1$, we have $E(A_i \mid \delta_i = 1, \tilde{T}_i, X_i) = 1$, and it is easy to check that $\xi_i^* = 1$. When $\delta_i = 0$, then
$$E(A_i \mid \delta_i = 0, \tilde{T}_i, X_i) = \frac{\pi(X_i)\, S_u(Y_i \mid X_i)}{1 - \pi(X_i) + \pi(X_i)\, S_u(Y_i \mid X_i)}.$$
Let $\tilde{X}_i \triangleq E(X_i \mid X_i^*)$, which satisfies $E(\tilde{X}_i) = E(X_i)$. In particular, we apply the idea in [22] to express $\tilde{X}_i$ by the linear approximation
$$\tilde{X}_i \approx \mu_X + \Sigma_X^{1/2}\, \Sigma_{X^*}^{-1/2}\, (X_i^* - \mu_{X^*}) \tag{12}$$
with $\mu_X \triangleq E(X)$, $\mu_{X^*} \triangleq E(X^*)$, $\Sigma_X \triangleq \operatorname{var}(X)$, and $\Sigma_{X^*} \triangleq \operatorname{var}(X^*)$.
Following the derivation of [22], we apply the second-order Taylor series expansion to $\xi_i$ and to $\tilde{\xi}_i \triangleq \pi(\tilde{X}_i)\, S_u(Y_i \mid \tilde{X}_i) / \{1 - \pi(\tilde{X}_i) + \pi(\tilde{X}_i)\, S_u(Y_i \mid \tilde{X}_i)\}$ around $\mu_X$ with respect to $X_i$ and $\tilde{X}_i$, respectively; we then obtain the following two approximations:
$$\xi_i \approx \frac{\pi(\mu_X)\, S_u(Y_i \mid \mu_X)}{1 - \pi(\mu_X) + \pi(\mu_X)\, S_u(Y_i \mid \mu_X)} + \frac{\partial \xi_i}{\partial X_i}\bigg|_{X_i = \mu_X}^\top (X_i - \mu_X) + \frac{1}{2}(X_i - \mu_X)^\top \frac{\partial^2 \xi_i}{\partial X_i \partial X_i^\top}\bigg|_{X_i = \mu_X} (X_i - \mu_X) \tag{13}$$
and
$$\tilde{\xi}_i \approx \frac{\pi(\mu_X)\, S_u(Y_i \mid \mu_X)}{1 - \pi(\mu_X) + \pi(\mu_X)\, S_u(Y_i \mid \mu_X)} + \frac{\partial \tilde{\xi}_i}{\partial \tilde{X}_i}\bigg|_{\tilde{X}_i = \mu_X}^\top (\tilde{X}_i - \mu_X) + \frac{1}{2}(\tilde{X}_i - \mu_X)^\top \frac{\partial^2 \tilde{\xi}_i}{\partial \tilde{X}_i \partial \tilde{X}_i^\top}\bigg|_{\tilde{X}_i = \mu_X} (\tilde{X}_i - \mu_X). \tag{14}$$
Combining (13) and (14) gives that
$$\xi_i \approx \tilde{\xi}_i + \frac{\partial \xi_i}{\partial X_i}\bigg|_{X_i = \mu_X}^\top (X_i - \tilde{X}_i) + \frac{1}{2}(X_i - \mu_X)^\top \frac{\partial^2 \xi_i}{\partial X_i \partial X_i^\top}\bigg|_{X_i = \mu_X} (X_i - \mu_X) - \frac{1}{2}(\tilde{X}_i - \mu_X)^\top \frac{\partial^2 \tilde{\xi}_i}{\partial \tilde{X}_i \partial \tilde{X}_i^\top}\bigg|_{\tilde{X}_i = \mu_X} (\tilde{X}_i - \mu_X). \tag{15}$$
Then, taking the expectation on (15) yields that
$$E(\xi_i) \approx E(\tilde{\xi}_i), \tag{16}$$
where the expectation of the difference of the two quadratic forms in (15) is given by
$$\begin{aligned} & E\left[ (X_i - \mu_X)^\top \frac{\partial^2 \xi_i}{\partial X_i \partial X_i^\top}\bigg|_{X_i = \mu_X} (X_i - \mu_X) - (\tilde{X}_i - \mu_X)^\top \frac{\partial^2 \tilde{\xi}_i}{\partial \tilde{X}_i \partial \tilde{X}_i^\top}\bigg|_{\tilde{X}_i = \mu_X} (\tilde{X}_i - \mu_X) \right] \\ &\quad = \operatorname{trace}\left\{ \frac{\partial^2 \xi_i}{\partial X_i \partial X_i^\top}\bigg|_{X_i = \mu_X} \operatorname{var}(X_i) \right\} - \operatorname{trace}\left\{ \frac{\partial^2 \tilde{\xi}_i}{\partial \tilde{X}_i \partial \tilde{X}_i^\top}\bigg|_{\tilde{X}_i = \mu_X} \operatorname{var}(\tilde{X}_i) \right\} = 0, \end{aligned}$$
since $\operatorname{var}(\tilde{X}_i) = \Sigma_X^{1/2}\, \Sigma_{X^*}^{-1/2}\, \operatorname{var}(X_i^*)\, \Sigma_{X^*}^{-1/2}\, \Sigma_X^{1/2} = \Sigma_X = \operatorname{var}(X_i)$. Consequently, (16) implies that
$$\xi_i^* = E(\xi_i \mid X_i^*) \approx \frac{\pi(\tilde{X}_i)\, S_u(Y_i \mid \tilde{X}_i)}{1 - \pi(\tilde{X}_i) + \pi(\tilde{X}_i)\, S_u(Y_i \mid \tilde{X}_i)}. \tag{17}$$
Therefore, by the given data and the fact that $\Sigma_{X^*} = \Sigma_X + \Sigma_\epsilon$ under measurement error model (3), (12) can be estimated by
$$\hat{X}_i \triangleq \hat{\mu}_{X^*} + (\hat{\Sigma}_{X^*} - \Sigma_\epsilon)^{1/2}\, \hat{\Sigma}_{X^*}^{-1/2}\, (X_i^* - \hat{\mu}_{X^*}), \tag{18}$$
where $\hat{\mu}_{X^*}$ and $\hat{\Sigma}_{X^*}$ are the empirical estimates of $\mu_{X^*}$ and $\Sigma_{X^*}$, respectively. Thus, replacing $\tilde{X}_i$ in (17) with $\hat{X}_i$ yields
$$\hat{\xi}_i = \frac{\pi(\hat{X}_i)\, S_u(Y_i \mid \hat{X}_i)}{1 - \pi(\hat{X}_i) + \pi(\hat{X}_i)\, S_u(Y_i \mid \hat{X}_i)}.$$
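A minimal sketch of the corrected E-step weights just described, assuming that $H(\cdot)$ and $\Sigma_\epsilon$ are given; the helper name `e_step_weights` and the eigendecomposition-based matrix square root are implementation choices rather than prescriptions from the paper:

```python
import numpy as np

def e_step_weights(Y, delta, X_star, alpha, beta, rho, H, Sigma_eps):
    """Compute corrected E-step weights per (17)-(18): first transform the
    surrogates X* to X_hat, then evaluate the conditional cure-status
    probability with X_hat in place of the unobserved X."""
    n, p = X_star.shape
    mu_hat = X_star.mean(axis=0)
    Sig_star = np.cov(X_star, rowvar=False).reshape(p, p)

    def msqrt(M, inv=False):
        # Symmetric PSD matrix square root (or inverse square root).
        w, V = np.linalg.eigh(M)
        w = np.clip(w, 1e-10, None)
        d = w ** (-0.5) if inv else w ** 0.5
        return (V * d) @ V.T

    M = msqrt(Sig_star - Sigma_eps) @ msqrt(Sig_star, inv=True)   # map in (18)
    X_hat = mu_hat + (X_star - mu_hat) @ M.T
    pi = 1.0 / (1.0 + np.exp(-(X_hat @ alpha)))
    eb = np.exp(X_hat @ beta)
    S_u = (eb / (eb + rho * H(Y))) ** (1.0 / rho)                 # model (2)
    xi = pi * S_u / (1.0 - pi + pi * S_u)
    return np.where(delta == 1, 1.0, xi)   # events are surely uncured
```

The weights lie in $[0, 1]$ and equal one for observed failures, as required by the E-step.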
With the E-step established, we turn to the M-step. Specifically, replacing $A_i$ in (11) with $\xi_i^*$ gives
$$\begin{aligned} \hat{\ell} ={}& \sum_{i=1}^n \Big[ \xi_i^*\, \alpha^\top X_i^* - \log\{1 + \exp(\alpha^\top X_i^*)\} + \tfrac{1}{2}\alpha^\top \Sigma_\epsilon \alpha \Big] \\ &+ \sum_{i=1}^n \Big( \delta_i \big[ \log \mathrm{d}H(Y_i) - \log\{\rho H(Y_i) + \exp(\beta^\top X_i^*)\} + \tfrac{1}{2}\beta^\top \Sigma_\epsilon \beta \big] \\ &\qquad\;\; + \frac{\xi_i^*}{\rho} \big[ \beta^\top X_i^* - \log\{\rho H(Y_i) + \exp(\beta^\top X_i^*)\} + \tfrac{1}{2}\beta^\top \Sigma_\epsilon \beta \big] \Big) \\ \triangleq{}& \hat{\ell}_1 + \hat{\ell}_2, \end{aligned} \tag{19}$$
where $\hat{\ell}_1$ contains information on $\alpha$ only and $\hat{\ell}_2$ involves $\beta$ and $H(\cdot)$, which reflect the survival process. Thus, the maximization of $\hat{\ell}$ with respect to $(\alpha, \beta, H(\cdot))$ can be performed separately. More specifically, the estimator of $\alpha$, denoted $\hat{\alpha}$, is determined by $\hat{\alpha} = \operatorname{argmax}_\alpha \hat{\ell}_1$. Since no closed form is available, one can adopt the Newton–Raphson method to compute $\hat{\alpha}$.
Naturally, the estimator of ( β , H ( · ) ) can also be determined by
$$(\hat{\beta}, \hat{H}(\cdot)) = \operatorname*{argmax}_{(\beta,\, H(\cdot))} \hat{\ell}_2. \tag{20}$$
However, a challenge comes from the nonparametric function $H(\cdot)$. To address it, we follow the same line as [19] and "parametrize" $H(\cdot)$ by $H(t_i) = \sum_{j=1}^{d_i} \exp(\gamma_j)$, where $d_i \triangleq \max\{j : t_{(j)} \leq t_i\}$ is the index of the latest jump point at or before $t_i$, and $\gamma_1, \ldots, \gamma_m$ are the logs of the jump sizes of $H(\cdot)$ at the sorted event times $t_{(1)} < \cdots < t_{(m)}$. Based on this representation, $\hat{\ell}_2$ can be expressed as
$$\hat{\ell}_2 = \sum_{i=1}^n \left( \delta_i \left[ \gamma_{d_i} - \log\Big\{\rho \sum_{j=1}^{d_i} \exp(\gamma_j) + \exp(\beta^\top X_i^*)\Big\} + \tfrac{1}{2}\beta^\top \Sigma_\epsilon \beta \right] + \frac{\xi_i^*}{\rho} \left[ \beta^\top X_i^* - \log\Big\{\rho \sum_{j=1}^{d_i} \exp(\gamma_j) + \exp(\beta^\top X_i^*)\Big\} + \tfrac{1}{2}\beta^\top \Sigma_\epsilon \beta \right] \right). \tag{21}$$
Note that (21) is concave, so taking its negative yields a convex objective. To derive the estimators of $\beta$ and $\gamma \triangleq (\gamma_1, \ldots, \gamma_m)^\top$ from (21), we employ the coordinate-descent algorithm, which is frequently implemented in optimization problems (e.g., [24]). Detailed descriptions of the steps are given below:
Step 1:
Choose initial values for $\beta$ and $\gamma$, and denote them by $\beta^{(0)}$ and $\gamma^{(0)}$.
Step 2:
For $r = 1, \ldots, p$, given $\gamma^{(k-1)} \triangleq \big(\gamma_1^{(k-1)}, \ldots, \gamma_m^{(k-1)}\big)$ and $\beta_1^{(k)}, \ldots, \beta_{r-1}^{(k)}, \beta_{r+1}^{(k-1)}, \ldots, \beta_p^{(k-1)}$, update $\beta_r$ by finding
$$\beta_r^{(k)} = \operatorname*{argmin}_{\beta_r}\; -\hat{\ell}_2\big(\beta_r \,\big|\, \gamma^{(k-1)}, \beta_1^{(k)}, \ldots, \beta_{r-1}^{(k)}, \beta_{r+1}^{(k-1)}, \ldots, \beta_p^{(k-1)}\big).$$
Step 3:
For $s = 1, \ldots, m$, given $\beta^{(k)} \triangleq \big(\beta_1^{(k)}, \ldots, \beta_p^{(k)}\big)$ and $\gamma_1^{(k)}, \ldots, \gamma_{s-1}^{(k)}, \gamma_{s+1}^{(k-1)}, \ldots, \gamma_m^{(k-1)}$, update $\gamma_s$ by finding
$$\gamma_s^{(k)} = \operatorname*{argmin}_{\gamma_s}\; -\hat{\ell}_2\big(\gamma_s \,\big|\, \beta^{(k)}, \gamma_1^{(k)}, \ldots, \gamma_{s-1}^{(k)}, \gamma_{s+1}^{(k-1)}, \ldots, \gamma_m^{(k-1)}\big).$$
Step 4:
Repeat Steps 2 and 3 until convergence, and let $\hat{\beta}$ and $\hat{\gamma}$ denote the limits of $\beta^{(k)}$ and $\gamma^{(k)}$ as $k \to \infty$.
We comment that the coordinate-descent algorithm is a common strategy for obtaining the minimizer (e.g., [24]). Its key idea is to iteratively update one parameter at a time with the other values fixed at those from the previous step. This approach yields a stable numerical scheme and avoids estimating a high-dimensional vector of parameters at once. In the literature, an alternative strategy for the optimization in (20) is the minorize–maximization (MM) algorithm (e.g., [19,25]). While the MM algorithm is also an iterative approach based on previously computed values, similar to coordinate descent, its key step is to find and optimize a surrogate function that improves, or at least does not decrease, the value of the objective function; in general, such suitable surrogate functions are not easy to find. Instead, the coordinate-descent algorithm enables us to work with (21) directly.
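To illustrate the coordinate-descent scheme of Steps 1–4 on a transparent objective, the sketch below minimizes a convex quadratic (a toy stand-in for the negative of (21)) by exact coordinate-wise updates:

```python
import numpy as np

def coordinate_descent(Q, b, n_iter=200):
    """Minimize the convex quadratic f(theta) = theta' Q theta / 2 - b' theta
    by exact coordinate-wise updates, mirroring Steps 1-4: each sweep updates
    one coordinate at a time with the others held at their current values."""
    theta = np.zeros_like(b)
    for _ in range(n_iter):
        for r in range(len(b)):
            # Exact minimizer of f in coordinate r given the others:
            # Q_rr theta_r = b_r - sum_{j != r} Q_rj theta_j.
            theta[r] = (b[r] - Q[r] @ theta + Q[r, r] * theta[r]) / Q[r, r]
    return theta

Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
theta_hat = coordinate_descent(Q, b)   # converges to the minimizer Q^{-1} b
```

Each inner update solves the one-dimensional problem exactly, mirroring the argmin in Steps 2 and 3; for (21), those one-dimensional problems would instead be solved numerically.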
Finally, to assess the validity of the proposed method, we present the following theoretical results, and the proof is postponed to Appendix A.2.
Theorem 1.
Let $\alpha_0$, $\beta_0$, and $H_0(\cdot)$ denote the true values of the parameters. Under regularity conditions, the proposed estimator is consistent. That is, as $n \to \infty$,
$$\|\hat{\alpha} - \alpha_0\|_2 \xrightarrow{p} 0, \quad \|\hat{\beta} - \beta_0\|_2 \xrightarrow{p} 0, \quad \text{and} \quad \|\hat{H} - H_0\|_\infty \xrightarrow{p} 0,$$
where $\|\cdot\|_2$ and $\|\cdot\|_\infty$ are the $L_2$- and supremum norms, respectively.
Let $V = \{\nu = (\nu_1, \nu_2, \nu_3)\}$, and define $V_p = \{\nu \in V : \|\nu_1\|_2, \|\nu_2\|_2, \|\nu_3\|_{\mathrm{BV}} < p\}$ as the collection of directions, where $\nu_1$ and $\nu_2$ are $p$-dimensional vectors, $\nu_3$ is a function defined on $[0, \tau]$ with $\tau$ being the end time of the study, and $\|\cdot\|_{\mathrm{BV}}$ is the total variation, defined as
$$\|\nu_3\|_{\mathrm{BV}} = \sup_{0 = t_0 < t_1 < \cdots < t_n = \tau} \sum_{j=1}^n |\nu_3(t_j) - \nu_3(t_{j-1})|.$$
The next theorem shows convergence in distribution of the proposed estimator at the rate $\sqrt{n}$.
Theorem 2.
Under regularity conditions, the proposed estimator has an asymptotic distribution. That is, as $n \to \infty$, $\sqrt{n}\big(\hat{\alpha} - \alpha_0,\, \hat{\beta} - \beta_0,\, \hat{H}(t) - H_0(t)\big)$ converges in distribution to a mean-zero Gaussian process in the functional space $\ell^\infty(V_p)$ on $V_p$.
Let $\phi \triangleq (\alpha^\top, \beta^\top)^\top$. According to (19), the parameters $\alpha$ and $\beta$ appear in $\hat{\ell}_1$ and $\hat{\ell}_2$, respectively. With $H(\cdot)$ fixed, the theory of M-estimation gives the marginal asymptotic distribution of $\hat{\phi} \triangleq (\hat{\alpha}^\top, \hat{\beta}^\top)^\top$ as
$$\sqrt{n}\,(\hat{\phi} - \phi_0) \xrightarrow{d} N(0, \Sigma_\phi),$$
where $\Sigma_\phi = I^{-1} J I^{-1}$, with
$$I = \operatorname{diag}\left( \frac{\partial^2}{\partial \alpha \partial \alpha^\top}\hat{\ell}_1,\; \frac{\partial^2}{\partial \beta \partial \beta^\top}\hat{\ell}_2 \right), \quad J = E\big(a^{\otimes 2}\big),$$
and $a^{\otimes 2} = a a^\top$ for the vector $a \triangleq \Big( \dfrac{\partial \hat{\ell}_1}{\partial \alpha^\top}, \dfrac{\partial \hat{\ell}_2}{\partial \beta^\top} \Big)^\top$.

3.3. Estimation of ROC and AUC

With the estimators of $H(\cdot)$, $\alpha$, and $\beta$ in hand, we now propose estimation procedures for the time-independent and time-dependent ROC curves, as well as the area under the curve (AUC), in the presence of measurement error. The former assumes that the event status and the marker value of an individual do not change over time, whereas the latter allows the curve to vary across time points and is therefore more flexible. In real-world applications, it is common for the disease status to change over time, so the time-dependent ROC curve is more widely explored. However, in the presence of a cure fraction in the survival time and measurement error in the covariates, neither case has been fully discussed. To make the discussion of measurement error effects comprehensive, we study both types of ROC curve estimation in the following two subsections.

3.3.1. Time-Independent AUC

We first discuss the time-independent case. Let $C_1(X)$ be a binary and time-independent classifier based on the biomarker $X$, which takes the value 1 when a patient is classified into the uncured group. As discussed by [26], the linear composite biomarker is an optimal classifier among all functions of biomarkers. As a result, the event $C_1(X) = 1$ is equivalent to the inequality $\beta^\top X > u$ for some constant $u$. Then, the time-independent true-positive rate (TPR) and false-positive rate (FPR), denoted by $T(\alpha, \beta, u)$ and $F(\alpha, \beta, u)$, are defined as
$$T(\alpha, \beta, u) = P(\beta^\top X > u \mid \tilde{T} < \infty) \quad \text{and} \quad F(\alpha, \beta, u) = P(\beta^\top X > u \mid \tilde{T} = \infty). \tag{23}$$
Moreover, (23) can be further expressed as
$$T(\alpha, \beta, u) = \frac{P(\beta^\top X > u,\, \tilde{T} < \infty)}{P(\tilde{T} < \infty)} = \frac{\int P(\tilde{T} < \infty \mid X = x)\, I(\beta^\top x > u)\, f_X(x)\, \mathrm{d}x}{\int P(\tilde{T} < \infty \mid X = x)\, f_X(x)\, \mathrm{d}x} = \frac{E\{\pi(X)\, I(\beta^\top X > u)\}}{E\{\pi(X)\}} \tag{24}$$
and
$$F(\alpha, \beta, u) = \frac{P(\beta^\top X > u,\, \tilde{T} = \infty)}{P(\tilde{T} = \infty)} = \frac{\int P(\tilde{T} = \infty \mid X = x)\, I(\beta^\top x > u)\, f_X(x)\, \mathrm{d}x}{\int P(\tilde{T} = \infty \mid X = x)\, f_X(x)\, \mathrm{d}x} = \frac{E\big[\{1 - \pi(X)\}\, I(\beta^\top X > u)\big]}{E\{1 - \pi(X)\}}, \tag{25}$$
respectively, where $f_X(\cdot)$ is the density function of $X$ and $E(\cdot)$ denotes the expectation evaluated with respect to $X$; the last equality in (24) comes from
$$P(\tilde{T} < \infty \mid X = x) = P(\tilde{T} < \infty \mid A = 1, X = x)\, P(A = 1 \mid X = x) = P(T < \infty \mid X = x)\, \pi(x) = \pi(x)$$
with $P(T < \infty \mid X = x) = \int_0^\infty f(t \mid x)\, \mathrm{d}t = 1$ and $f(t \mid x)$ being the conditional density function of $T$ given $X$; similar derivations show that
$$P(\tilde{T} = \infty \mid X = x) = P(\tilde{T} = \infty \mid A = 0, X = x)\, P(A = 0 \mid X = x) = 1 - \pi(x)$$
with $P(\tilde{T} = \infty \mid A = 0, X = x) = 1$, yielding the last equality of (25).
Therefore, based on (24) and (25), the AUC, denoted by A ( α , β ) , is given by (e.g., [11])
$$A(\alpha, \beta) = \frac{E\big[\pi(X_1)\{1 - \pi(X_2)\}\, I(\beta^\top X_1 > \beta^\top X_2)\big]}{E\{\pi(X)\}\, E\{1 - \pi(X)\}}. \tag{26}$$
In practice, however, the biomarker $X$ may also be measured with error. Let $X^*$ denote the surrogate version of $X$. We adopt measurement error model (3) to link $X$ and $X^*$.
In (24), $X$ appears in $\pi(X)$ and $\pi(X)\, I(\beta^\top X > u)$. To correct for error effects, our strategy is to follow an approach similar to that in Section 3.2 and find surrogate functions in terms of $X^*$, say $\pi^*(X^*)$ and $\varphi^*(X^*)$, such that $\pi^*(X^*) = E\{\pi(X) \mid X^*\}$ and $\varphi^*(X^*) = E\{\pi(X)\, I(\beta^\top X > u) \mid X^*\}$. In this way, we have $E\{\pi^*(X^*)\} = E\{\pi(X)\}$ and $E\{\varphi^*(X^*)\} = E\{\pi(X)\, I(\beta^\top X > u)\}$; thus, $\pi(X)$ and $\pi(X)\, I(\beta^\top X > u)$ can be replaced by $\pi^*(X^*)$ and $\varphi^*(X^*)$, respectively. A similar strategy is adopted for the numerator and denominator terms in (25) and (26). To this end, we now derive $\pi^*(X^*)$ and $\varphi^*(X^*)$.
According to the Mean Value Theorem, there exists a point $\bar{X}$ between $X$ and $\tilde{X} \triangleq E(X \mid X^*)$ such that
$$\pi(X) = \pi(\tilde{X}) + \frac{\partial \pi(X)}{\partial X}\bigg|_{X = \bar{X}}^\top (X - \tilde{X}),$$
and taking the conditional expectation $E(\cdot \mid X^*)$ gives
$$E\{\pi(X) \mid X^*\} \approx \pi(\tilde{X}). \tag{27}$$
This indicates that the corrected version $\pi^*(X^*)$ is given by the original function $\pi(\cdot)$ with $X$ replaced by $\tilde{X}$. This result is essentially similar to that in [27], which pointed out that implementing $\tilde{X}$ in the logit function yields a satisfactory approximation.
On the other hand, note that under measurement error model (3), $\beta^\top X^* = \beta^\top X + \beta^\top \epsilon$, so the event $\beta^\top X > u$ corresponds to $\beta^\top X^* > u + \beta^\top \epsilon$. Therefore, $\pi(X)\, I(\beta^\top X > u)$ takes the value $\pi(X)$ when $I(\beta^\top X > u) = 1$, and by a strategy similar to (27), taking the conditional expectation yields
$$E\{\pi(X)\, I(\beta^\top X > u) \mid X^*\} \approx \pi(\tilde{X})\, I(\beta^\top \tilde{X} > u); \tag{28}$$
thus, $\varphi^*(X^*)$ is taken as $\pi(X)\, I(\beta^\top X > u)$ with $X$ replaced by $\tilde{X}$.
Consequently, the corrected TPR, FPR, and AUC are given by (24), (25), and (26), respectively, with $X$ replaced by $\tilde{X}$. Moreover, with the consistent estimators $\hat{\alpha}$ and $\hat{\beta}$ from Section 3.2, the corresponding estimators of the corrected TPR, FPR, and AUC are
$$\hat{T}(\hat{\alpha}, \hat{\beta}, u) = \frac{\frac{1}{n}\sum_{i=1}^n \hat{\pi}(\tilde{X}_i)\, I(\hat{\beta}^\top \tilde{X}_i > u)}{\frac{1}{n}\sum_{i=1}^n \hat{\pi}(\tilde{X}_i)}, \qquad \hat{F}(\hat{\alpha}, \hat{\beta}, u) = \frac{\frac{1}{n}\sum_{i=1}^n \{1 - \hat{\pi}(\tilde{X}_i)\}\, I(\hat{\beta}^\top \tilde{X}_i > u)}{\frac{1}{n}\sum_{i=1}^n \{1 - \hat{\pi}(\tilde{X}_i)\}}, \tag{29}$$
and
$$\hat{A}(\hat{\alpha}, \hat{\beta}) = \frac{\frac{1}{n(n-1)}\sum_{i \neq j} \hat{\pi}(\tilde{X}_i)\{1 - \hat{\pi}(\tilde{X}_j)\}\, I(\hat{\beta}^\top \tilde{X}_i > \hat{\beta}^\top \tilde{X}_j)}{\big\{\frac{1}{n}\sum_{i=1}^n \hat{\pi}(\tilde{X}_i)\big\}\big[\frac{1}{n}\sum_{i=1}^n \{1 - \hat{\pi}(\tilde{X}_i)\}\big]}, \tag{30}$$
where π ^ ( · ) represents π ( · ) with α being replaced by α ^ .
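For concreteness, the AUC estimator above can be evaluated as follows (a sketch that takes the transformed covariates $\tilde{X}_i$ as given; the function name `auc_time_independent` is hypothetical):

```python
import numpy as np

def auc_time_independent(X_tilde, alpha_hat, beta_hat):
    """Evaluate the corrected time-independent AUC (U-statistic form)
    from transformed covariates X_tilde and estimates of alpha and beta."""
    n = X_tilde.shape[0]
    pi = 1.0 / (1.0 + np.exp(-(X_tilde @ alpha_hat)))   # pi_hat(X_tilde_i)
    score = X_tilde @ beta_hat
    # Pairwise kernel pi_i * (1 - pi_j) * I(score_i > score_j), i != j.
    kern = np.outer(pi, 1.0 - pi) * (score[:, None] > score[None, :])
    np.fill_diagonal(kern, 0.0)
    num = kern.sum() / (n * (n - 1))
    den = pi.mean() * (1.0 - pi).mean()
    return num / den
```

As a sanity check, when $\hat{\alpha} = 0$ (so $\hat{\pi} \equiv 0.5$) and the scores are continuous, the estimator equals $1/2$, the AUC of a marker with no discriminating weight differences.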

3.3.2. Time-Dependent AUC

We now explore the time-dependent case. Denote the time-dependent TPR and FPR, respectively, by
$$T_t(\alpha, \beta, u) = P(\beta^\top X > u \mid \tilde{T} < \infty,\, T \leq t) \quad \text{and} \quad F_t(\alpha, \beta, u) = P(\beta^\top X > u \mid \tilde{T} < \infty,\, T > t). \tag{31}$$
Similar to the discussion on (23), one can further express (31) as
$$T_t(\alpha, \beta, u) = \frac{E\big[\pi(X)\{1 - S_u(t \mid X)\}\, I(\beta^\top X > u)\big]}{E\big[\pi(X)\{1 - S_u(t \mid X)\}\big]} \tag{32}$$
and
$$F_t(\alpha, \beta, u) = \frac{E\big\{\pi(X)\, S_u(t \mid X)\, I(\beta^\top X > u)\big\}}{E\big\{\pi(X)\, S_u(t \mid X)\big\}}. \tag{33}$$
Thus, the time-dependent AUC is defined as
$$A_t(\alpha, \beta) = \frac{E\big[\pi(X_1)\{1 - S_u(t \mid X_1)\}\, \pi(X_2)\, S_u(t \mid X_2)\, I(\beta^\top X_1 > \beta^\top X_2)\big]}{E\big[\pi(X)\{1 - S_u(t \mid X)\}\big]\; E\big\{\pi(X)\, S_u(t \mid X)\big\}}. \tag{34}$$
Moreover, following a discussion similar to that in Section 3.3.1, the corrected time-dependent TPR, FPR, and AUC are given by (32), (33), and (34), respectively, with $X$ replaced by $\tilde{X}$. With the consistent estimators $(\hat{\alpha}, \hat{\beta}, \hat{H}(\cdot))$, the corresponding estimators are determined by
$$\hat{T}_t(\hat{\alpha}, \hat{\beta}, u) = \frac{\frac{1}{n}\sum_{i=1}^n \hat{\pi}(\tilde{X}_i)\{1 - \hat{S}_u(t \mid \tilde{X}_i)\}\, I(\hat{\beta}^\top \tilde{X}_i > u)}{\frac{1}{n}\sum_{i=1}^n \hat{\pi}(\tilde{X}_i)\{1 - \hat{S}_u(t \mid \tilde{X}_i)\}}, \tag{35}$$
$$\hat{F}_t(\hat{\alpha}, \hat{\beta}, u) = \frac{\frac{1}{n}\sum_{i=1}^n \hat{\pi}(\tilde{X}_i)\, \hat{S}_u(t \mid \tilde{X}_i)\, I(\hat{\beta}^\top \tilde{X}_i > u)}{\frac{1}{n}\sum_{i=1}^n \hat{\pi}(\tilde{X}_i)\, \hat{S}_u(t \mid \tilde{X}_i)}, \tag{36}$$
and
$$\hat{A}_t(\hat{\alpha}, \hat{\beta}) = \frac{\frac{1}{n(n-1)}\sum_{i \neq j} \hat{\pi}(\tilde{X}_i)\{1 - \hat{S}_u(t \mid \tilde{X}_i)\}\, \hat{\pi}(\tilde{X}_j)\, \hat{S}_u(t \mid \tilde{X}_j)\, I(\hat{\beta}^\top \tilde{X}_i > \hat{\beta}^\top \tilde{X}_j)}{\big[\frac{1}{n}\sum_{i=1}^n \hat{\pi}(\tilde{X}_i)\{1 - \hat{S}_u(t \mid \tilde{X}_i)\}\big]\big[\frac{1}{n}\sum_{i=1}^n \hat{\pi}(\tilde{X}_i)\, \hat{S}_u(t \mid \tilde{X}_i)\big]}. \tag{37}$$
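Similarly, the time-dependent AUC estimator can be evaluated by plugging in $\hat{S}_u$ from the transformation model; in this sketch, `H_hat` stands for the estimated (or assumed known) transformation function, and the function name is hypothetical:

```python
import numpy as np

def auc_time_dependent(t, X_tilde, alpha_hat, beta_hat, rho, H_hat):
    """Evaluate the corrected time-dependent AUC at time t, plugging
    S_u-hat from the transformation model into the U-statistic."""
    n = X_tilde.shape[0]
    pi = 1.0 / (1.0 + np.exp(-(X_tilde @ alpha_hat)))
    eb = np.exp(X_tilde @ beta_hat)
    S_u = (eb / (eb + rho * H_hat(t))) ** (1.0 / rho)   # S_u-hat(t | X_tilde_i)
    w1 = pi * (1.0 - S_u)      # case weight: uncured and failed by time t
    w2 = pi * S_u              # control weight: uncured and surviving past t
    score = X_tilde @ beta_hat
    kern = np.outer(w1, w2) * (score[:, None] > score[None, :])
    np.fill_diagonal(kern, 0.0)
    return (kern.sum() / (n * (n - 1))) / (w1.mean() * w2.mean())
```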
Finally, at the end of this section, we establish the following theorems to justify the proposed estimators of the time-independent and time-dependent AUC, including consistency and asymptotic normality.
Theorem 3.
Suppose that the conditions in Theorem 1 hold. With a fixed value $u$, as $n \to \infty$, the following hold:
(a)
For the time-independent result in Section 3.3.1,
$$\hat{T}(\hat{\alpha}, \hat{\beta}, u) \xrightarrow{p} T(\alpha_0, \beta_0, u), \quad \hat{F}(\hat{\alpha}, \hat{\beta}, u) \xrightarrow{p} F(\alpha_0, \beta_0, u), \quad \text{and} \quad \hat{A}(\hat{\alpha}, \hat{\beta}) \xrightarrow{p} A(\alpha_0, \beta_0);$$
(b)
For the time-dependent result in Section 3.3.2,
$$\widehat T_t(\widehat\alpha,\widehat\beta,u)\xrightarrow{p}T_t(\alpha_0,\beta_0,u),\quad \widehat F_t(\widehat\alpha,\widehat\beta,u)\xrightarrow{p}F_t(\alpha_0,\beta_0,u),$$
and
$$\widehat A_t(\widehat\alpha,\widehat\beta)\xrightarrow{p}A_t(\alpha_0,\beta_0).$$
Theorem 4.
Suppose that the conditions in Theorems 1 and 2 hold. Then, as $n\to\infty$, the following hold:
(a)
For the time-independent result in Section 3.3.1,
$$\sqrt n\,\big\{\widehat A(\widehat\alpha,\widehat\beta)-A(\alpha_0,\beta_0)\big\}\xrightarrow{d}N(0,\sigma_{\mathrm{TI}}),$$
where σ TI is the asymptotic variance whose exact form is given in Appendix A.5;
(b)
For the time-dependent result in Section 3.3.2,
$$\sqrt n\,\big\{\widehat A_t(\widehat\alpha,\widehat\beta)-A_t(\alpha_0,\beta_0)\big\}\xrightarrow{d}N(0,\sigma_{\mathrm{TD}}),$$
where σ TD is the asymptotic variance whose exact form is given in Appendix A.5.

4. Numerical Studies

Let $n = 250$ or $500$ denote the sample size. For each subject $i = 1, \ldots, n$, we generate $X_i$ from the standard normal distribution. Let $\alpha = 0.5$, $\rho = 2$, and $\beta = 1$ be the true values of the parameters in (1) and (2), respectively, and consider the function $H(t) = t$. We then generate $A_i$ and $T_i$ from (1) and (2), respectively, and independently generate the censoring time $C_i$ from the uniform distribution on an interval $[0, \tau_0]$, where $\tau_0$ is a pre-specified constant chosen so that the censoring rate is approximately 30%. In addition, we apply the measurement error model (3) to generate $\widetilde X_i$ for $i = 1, \ldots, n$, where $\epsilon_i$ is independently generated from the normal distribution with mean zero and variance $\sigma_\epsilon^2 = 0.15$, 0.35, or 0.55, reflecting minor, moderate, or severe measurement error effects, respectively. Consequently, the collected dataset is denoted by $\{(Y_i, \delta_i, \widetilde X_i) : i = 1, \ldots, n\}$. For each setting, we repeat the simulation 500 times.
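The data-generating steps above can be sketched as follows. Since the exact forms of models (1) and (2) appear earlier in the paper, the snippet below only illustrates the pipeline: it assumes, purely for illustration, a logistic form for the uncured probability in (1) and a proportional-odds-type survival function $S(t\mid X)=\{1+\rho t\exp(-\beta X)\}^{-1/\rho}$ for (2) with $H(t)=t$. These concrete forms, the function name, and the default $\tau_0$ are assumptions of the sketch, not the paper's specification.

```python
import numpy as np

def simulate(n, alpha=0.5, beta=1.0, rho=2.0, sigma_eps2=0.15, tau0=8.0, rng=None):
    """Generate one simulated dataset {(Y_i, delta_i, X~_i)} (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    X = rng.standard_normal(n)                       # true biomarker
    pi = 1.0 / (1.0 + np.exp(-alpha * X))            # uncured probability (assumed logistic)
    A = rng.uniform(size=n) < pi                     # A_i = 1: susceptible subject
    U = np.clip(rng.uniform(size=n), 1e-12, 1.0)
    # Invert the assumed survival function S(t|X) = (1 + rho*t*exp(-beta*X))^(-1/rho)
    T = np.where(A, (U ** (-rho) - 1.0) * np.exp(beta * X) / rho, np.inf)  # cured: T = infinity
    C = rng.uniform(0.0, tau0, size=n)               # censoring; tau0 tuned for ~30% censoring
    Y = np.minimum(T, C)
    delta = (T <= C).astype(int)
    eps = rng.normal(0.0, np.sqrt(sigma_eps2), size=n)
    X_tilde = X + eps                                # error-prone surrogate, model (3)
    return Y, delta, X_tilde
```

Cured subjects ($A_i = 0$) receive $T_i = \infty$ and are therefore always censored, matching the cure-model structure described in the paper.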
There are two goals in this study. The first is to estimate the two parameters $\alpha$ and $\beta$ in (1) and (2), respectively. Based on the estimates $\widehat\alpha$ and $\widehat\beta$, the second goal is to estimate the time-independent AUC (26) and the time-dependent AUC (34) at the time points $t = 5$ and 10. In addition to implementing the proposed method with measurement error correction, to assess the impact of measurement error on estimation, we also examine the naive method, which follows the same estimation procedure as in Section 3 but uses the error-prone covariates $\widetilde X_i$ directly.
The simulation results are summarized in Table 1, where we report the bias, the standard error (S.E.) obtained from the repeated simulations, and the mean squared error (MSE). In general, the biases increase as $\sigma_\epsilon^2$ becomes larger, which implies that measurement error can affect estimation even when the correction is taken into account. In particular, larger biases of the estimated AUC indicate that the ROC curve moves slightly toward the 45-degree line. Comparing the two estimation methods, the proposed method generally outperforms the naive method because its estimators have smaller biases, regardless of the values of $\sigma_\epsilon^2$ and the sample size $n$. This indicates that the proposed method yields accurate estimators. Note that the S.E. of the naive method is somewhat smaller than that of the proposed method; as discussed in [22], this is the price of removing the bias in the point estimators and is a typical phenomenon in measurement error analysis. Accounting for the trade-off between bias and variation, the MSE shows that the proposed method is still preferable to the naive method. In summary, the numerical results verify the validity of the proposed method and the measurement error correction.
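The three summary measures reported in Table 1 can be reproduced from the replicate estimates with a small helper; here `estimates` is a hypothetical vector of point estimates across the 500 repetitions and `truth` is the corresponding true value.

```python
import numpy as np

def summarize(estimates, truth):
    """Return (bias, empirical standard error, mean squared error) over replications."""
    est = np.asarray(estimates, dtype=float)
    bias = est.mean() - truth               # average estimate minus the true value
    se = est.std(ddof=1)                    # standard deviation across replications
    mse = np.mean((est - truth) ** 2)       # note: MSE is approximately bias^2 + variance
    return bias, se, mse
```

Each row of Table 1 corresponds to applying such a summary to the 500 estimates of one parameter under one simulation setting.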

5. Summary

In this paper, we study the transformation model for analyzing survival data with a cure fraction and its extension to the estimation of the ROC curve. In addition, we explore measurement error in covariates or biomarkers, which is one type of noisy data. To deal with measurement error effects and derive a reliable estimator, we propose a corrected likelihood function whose expectation recovers the likelihood function under the true covariates. We also propose a measurement error correction strategy to estimate the time-independent and time-dependent AUC. To optimize (19), we introduce an EM algorithm with finitely many iterations to obtain the estimators; one should keep in mind that the computational implementation is not unique, and alternative strategies, such as quasi-Newton methods or damped Anderson acceleration, can be adopted when the dimension of the variables is large or the settings are complex. The other contribution of this paper is the establishment of the asymptotic properties of the proposed estimator with rigorous derivations. While no real-data application is included at this stage, the proposed method is expected to handle such data well, since the simulation studies show that its performance is satisfactory.
The current development focuses on the estimation of the parameters and the AUC, together with the handling of measurement error, but several further issues related to the AUC could be explored in depth. For example, the current setting involves a cure fraction, which may induce long follow-up periods for the survival time. An interesting but unsolved question is whether the dynamic AUC estimation stabilizes over time and how measurement error affects it. Exploring this issue in depth might make disease diagnosis more precise, and in the next stage of our research we wish to study it from a theoretical perspective with rigorous justification. In addition to the time-dependent AUC, as mentioned by a referee, another challenging feature in applications is time-varying covariates. While this issue has been studied in the literature (e.g., [28]), the corresponding applications to AUC estimation seem to be rarely discussed. Moreover, in the presence of measurement error in covariates, a further challenge is the characterization of time-dependent covariates and the corresponding measurement error correction. We expect that the current method can be extended to handle this complex structure in the near future.

Funding

This research was funded by National Science and Technology Council, Taiwan, with grant number 112-2118-M-004-005-MY2.

Data Availability Statement

All the data in this paper are simulated data generated by the procedure in Section 4.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Theoretical Justification

Appendix A.1. Regularity Conditions

(C1)
Θ is a compact set, and the true parameter value ( α 0 , β 0 ) is an interior point of Θ .
(C2)
Let τ be the finite maximum support of the failure time.
(C3)
$(A_i, Y_i, X_i)$, $i = 1, \ldots, n$, are independent and identically distributed.
(C4)
The covariates or biomarkers X i are bounded.
(C5)
Censoring time C i is noninformative. That is, the failure time T i and the censoring time C i are independent, given the covariate X i .
Condition (C1) is a basic condition used to derive the maximizer of the target function. Conditions (C2)–(C5) are standard in survival analysis; they allow us to express the estimating functions as sums of i.i.d. random variables and hence to derive the asymptotic properties of the estimators.

Appendix A.2. Proof of Theorem 1

Proof. 
Note that the function $H(\cdot)$ is parameterized as $H(t)\equiv\frac1n\sum_{j=1}^nI(Y_j\le t)\exp(\gamma_j)$, and the corresponding estimator is given by $\widehat H(t)=\frac1n\sum_{j=1}^nI(Y_j\le t)\exp(\widehat\gamma_j)$. Following Helly's lemma (e.g., [29], Section 2.1) and a discussion similar to that in [30], we can show that $H(t)$ converges to $H_0(t)$ when $\gamma_j$ is replaced by the true value $\gamma_{j0}$ for $j=1,\ldots,n$. As a result, to show the consistency of $\widehat H(t)$ for $H_0(t)$, it suffices to examine the relationship between $\widehat H(t)$ and $H(t)$; that is, to explore the asymptotic behavior of $\widehat\gamma_j$.
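Numerically, this step-function parameterization of $H(\cdot)$ is straightforward to evaluate; the sketch below (with hypothetical input names `Y` for the observed times and `gamma_hat` for the fitted $\widehat\gamma_j$) computes $\widehat H(t)$.

```python
import numpy as np

def H_hat(t, Y, gamma_hat):
    """Step-function estimator H_hat(t) = n^{-1} * sum_j I(Y_j <= t) * exp(gamma_hat_j)."""
    Y = np.asarray(Y, dtype=float)
    g = np.asarray(gamma_hat, dtype=float)
    return float(np.mean((Y <= t) * np.exp(g)))
```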
In the following derivation, we aim to prove that
$$\big\|\big(\widehat\alpha,\widehat\beta,\widehat\gamma\big)-\big(\alpha_0,\beta_0,\gamma_0\big)\big\|^2=O_p\Big(\frac1n\Big).\tag{A1}$$
Specifically, if (A1) holds, then we conclude that $\|\widehat\alpha-\alpha_0\|^2\xrightarrow{p}0$, $\|\widehat\beta-\beta_0\|^2\xrightarrow{p}0$, and $|\widehat H(t)-H(t)|\xrightarrow{p}0$ as $n\to\infty$.
To prove (A1), we follow the strategy in [24] and start by showing that
$$P\left\{\sup_{\|U\|_2=B}\widehat\ell\Big(\alpha_0+\frac{u_1}{\sqrt n},\,\beta_0+\frac{u_2}{\sqrt n},\,\gamma_0+\frac{u_3}{\sqrt n}\Big)<\widehat\ell(\alpha_0,\beta_0,\gamma_0)\right\}>1-\epsilon\tag{A2}$$
for every $\epsilon>0$ and a given value $B$, where $U=(u_1^\top,u_2^\top,u_3^\top)^\top$ with $u_1=\sqrt n\,(\widehat\alpha-\alpha_0)$, $u_2=\sqrt n\,(\widehat\beta-\beta_0)$, and $u_3=\sqrt n\,(\widehat\gamma-\gamma_0)$.
We write $\Psi(u_1,u_2,u_3)=\widehat\ell\big(\alpha_0+\frac{u_1}{\sqrt n},\beta_0+\frac{u_2}{\sqrt n},\gamma_0+\frac{u_3}{\sqrt n}\big)-\widehat\ell(\alpha_0,\beta_0,\gamma_0)$. By the Taylor series expansion around $u_1=u_2=u_3=0$, we have
$$\begin{aligned}\Psi(u_1,u_2,u_3)={}&\frac{u_1^\top}{\sqrt n}S_\alpha(\alpha_0,\beta_0,\gamma_0)+\frac{u_2^\top}{\sqrt n}S_\beta(\alpha_0,\beta_0,\gamma_0)+\frac{u_3^\top}{\sqrt n}S_\gamma(\alpha_0,\beta_0,\gamma_0)\\&+\frac1n u_1^\top I_\alpha(\alpha_0,\beta_0,\gamma_0)\,u_1+\frac1n u_2^\top I_\beta(\alpha_0,\beta_0,\gamma_0)\,u_2+\frac1n u_3^\top I_\gamma(\alpha_0,\beta_0,\gamma_0)\,u_3\\&+\frac1n u_2^\top I_{\beta\gamma}(\alpha_0,\beta_0,\gamma_0)\,u_3+o_p(1),\end{aligned}\tag{A3}$$
where $S_\alpha(\cdot)$, $S_\beta(\cdot)$, and $S_\gamma(\cdot)$ are the first-order derivatives of $\Psi(\cdot)$ with respect to $u_1$, $u_2$, and $u_3$, respectively; $I_\alpha(\cdot)$, $I_\beta(\cdot)$, and $I_\gamma(\cdot)$ are the corresponding second-order derivatives; and $I_{\beta\gamma}(\cdot)$ is the mixed second-order derivative of $\Psi(\cdot)$ with respect to $u_2$ and $u_3$.
We now examine each term in (A3) separately. Note that $(\alpha_0,\beta_0,\gamma_0)$ is the maximizer of $E(\ell)$, and $E(\ell)=E(\widehat\ell)$ according to our development. This implies that $(\alpha_0,\beta_0,\gamma_0)$ is also the maximizer of $E(\widehat\ell)$ or, equivalently, the solution of $E\{S_\alpha(\alpha_0,\beta_0,\gamma_0)\}=0$, $E\{S_\beta(\alpha_0,\beta_0,\gamma_0)\}=0$, and $E\{S_\gamma(\alpha_0,\beta_0,\gamma_0)\}=0$. Therefore, we have
$$\frac{u_1^\top}{\sqrt n}S_\alpha(\alpha_0,\beta_0,\gamma_0)=u_1^\top O_p(1),\quad \frac{u_2^\top}{\sqrt n}S_\beta(\alpha_0,\beta_0,\gamma_0)=u_2^\top O_p(1),\quad\text{and}\quad \frac{u_3^\top}{\sqrt n}S_\gamma(\alpha_0,\beta_0,\gamma_0)=u_3^\top O_p(1).$$
For the second-order derivative $I_\alpha\equiv\frac{\partial^2\widehat\ell}{\partial\alpha\,\partial\alpha^\top}$, its explicit form is given by
$$I_\alpha=\sum_{i=1}^n\left[\frac{X_iX_i^\top\exp(\alpha^\top X_i)}{1+\exp(\alpha^\top X_i)}-\frac{X_iX_i^\top\exp(\alpha^\top X_i)}{\{1+\exp(\alpha^\top X_i)\}^2}+\Sigma_\epsilon\right].$$
By the Law of Large Numbers, we have $\frac1nI_\alpha\xrightarrow{p}E\big(\frac1nI_\alpha\big)=E\big\{E\big(\frac1nI_\alpha\mid X_i\big)\big\}$ as $n\to\infty$. As a result, to find $E(I_\alpha)$, it suffices to compute $E(I_\alpha\mid X_i)=E\big(\frac{\partial^2\widehat\ell}{\partial\alpha\,\partial\alpha^\top}\mid X_i\big)=\frac{\partial}{\partial\alpha^\top}E\big(\frac{\partial\widehat\ell}{\partial\alpha}\mid X_i\big)$, where the last equality is based on the interchange of the derivative and the integral.
Taking the derivative of (19) with respect to α and then taking the conditional expectation give
$$E\Big(\frac{\partial\widehat\ell}{\partial\alpha}\,\Big|\,X_i\Big)=\sum_{i=1}^n\left[\xi_iX_i-\frac{\exp\big(\alpha^\top X_i+\tfrac12\alpha^\top\Sigma_\epsilon\alpha\big)\,(X_i+\Sigma_\epsilon\alpha)}{1+\exp\big(\alpha^\top X_i+\tfrac12\alpha^\top\Sigma_\epsilon\alpha\big)}+\Sigma_\epsilon\alpha\right],$$
which also implies that
$$E\Big(\frac{\partial^2\widehat\ell}{\partial\alpha\,\partial\alpha^\top}\,\Big|\,X_i\Big)=\frac{\partial}{\partial\alpha^\top}E\Big(\frac{\partial\widehat\ell}{\partial\alpha}\,\Big|\,X_i\Big)=-\sum_{i=1}^n\frac{\exp\big(\alpha^\top X_i+\tfrac12\alpha^\top\Sigma_\epsilon\alpha\big)\big\{(X_i+\Sigma_\epsilon\alpha)^{\otimes2}-2\Sigma_\epsilon\big\}-\Sigma_\epsilon}{\big\{1+\exp\big(\alpha^\top X_i+\tfrac12\alpha^\top\Sigma_\epsilon\alpha\big)\big\}^2}.$$
As a result, by the Law of Large Numbers and Condition (C3), I α is determined by
$$I_\alpha=-E\left[\frac{\exp\big(\alpha^\top X_i+\tfrac12\alpha^\top\Sigma_\epsilon\alpha\big)\big\{(X_i+\Sigma_\epsilon\alpha)^{\otimes2}-2\Sigma_\epsilon\big\}-\Sigma_\epsilon}{\big\{1+\exp\big(\alpha^\top X_i+\tfrac12\alpha^\top\Sigma_\epsilon\alpha\big)\big\}^2}\right].$$
By Conditions (C1) and (C4), the covariates $X_i$ and the parameter $\alpha$ are bounded, and $\Sigma_\epsilon$ is positive definite, which indicates that $I_\alpha$ is bounded componentwise; its negative definiteness ensures that the maximizer with respect to $\alpha$ exists.
Next, we explore I β . By taking the derivative of ^ with respect to β , we have
$$\frac{\partial\widehat\ell}{\partial\beta}=\sum_{i=1}^n\left[\delta_i\left\{X_i-\frac{X_i\exp(\beta^\top X_i)}{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp(\beta^\top X_i)}+\Sigma_\epsilon\beta\right\}+\frac{A_i}{\rho}\left\{X_i-\frac{X_i\exp(\beta^\top X_i)}{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp(\beta^\top X_i)}\right\}+\frac{A_i}{\rho}\Sigma_\epsilon\beta\right].$$
Note that the moment-generating function gives
$$E\big\{\exp(\beta^\top X_i)\mid X_i\big\}=\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)$$
and
$$E\big\{X_i\exp(\beta^\top X_i)\mid X_i\big\}=\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)\,(X_i+\Sigma_\epsilon\beta);$$
then, the conditional expectation of $\frac{\partial\widehat\ell}{\partial\beta}$ given $X_i$ is
$$E\Big(\frac{\partial\widehat\ell}{\partial\beta}\,\Big|\,X_i\Big)=\sum_{i=1}^n\left[\delta_i\left\{X_i-\frac{(X_i+\Sigma_\epsilon\beta)\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)}{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)}+\Sigma_\epsilon\beta\right\}+\frac{A_i}{\rho}\left\{X_i-\frac{(X_i+\Sigma_\epsilon\beta)\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)}{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)}\right\}+\frac{A_i}{\rho}\Sigma_\epsilon\beta\right].$$
This further shows that
$$E\Big(\frac{\partial^2\widehat\ell}{\partial\beta\,\partial\beta^\top}\,\Big|\,X_i\Big)=\sum_{i=1}^n\Big\{\Big(\delta_i+\frac{A_i}{\rho}\Big)D_1+\frac{A_i}{\rho}\Sigma_\epsilon\Big\}\quad\text{and}\quad E\Big(\frac{\partial^2\widehat\ell}{\partial\beta\,\partial\gamma_j}\,\Big|\,X_i\Big)=\sum_{i=1}^n\Big(\delta_i+\frac{A_i}{\rho}\Big)D_2,$$
where
$$D_1=\frac{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)\big\{(X_i+\Sigma_\epsilon\beta)^{\otimes2}+\Sigma_\epsilon\big\}\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)}{\big\{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)\big\}^2}+\frac{\exp\big(2\beta^\top X_i+\beta^\top\Sigma_\epsilon\beta\big)\,\Sigma_\epsilon}{\big\{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)\big\}^2}$$
and
$$D_2=\frac{\rho\,I(Y_i<Y_j)\exp(\gamma_j)\,(X_i+\Sigma_\epsilon\beta)\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)}{\big\{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)\big\}^2}.$$
Consequently, by the Law of Large Numbers, we have that
$$I_\beta=E\Big\{\frac1n\frac{\partial^2\widehat\ell}{\partial\beta\,\partial\beta^\top}\Big\}=E\Big\{\Big(\delta_i+\frac{A_i}{\rho}\Big)D_1+\frac{A_i}{\rho}\Sigma_\epsilon\Big\}$$
and $I_{\beta\gamma}$ is the matrix whose rows are given by $E\big(\frac{\partial^2\widehat\ell}{\partial\beta\,\partial\gamma_j}\big)$, stacked for $j=1,\ldots,n$, where
$$E\Big(\frac{\partial^2\widehat\ell}{\partial\beta\,\partial\gamma_j}\Big)=E\Big\{\Big(\delta_i+\frac{A_i}{\rho}\Big)D_2\Big\},$$
and they are bounded elementwise due to the positive definite matrix $\Sigma_\epsilon$ and Conditions (C1) and (C4).
Finally, we examine I γ . Taking the first derivative of ^ with respect to γ j yields
$$\frac{\partial\widehat\ell}{\partial\gamma_j}=\sum_{i=1}^n\left\{\delta_i-\frac{\rho\,I(Y_i<Y_j)\exp(\gamma_j)}{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp(\beta^\top X_i)}\Big(\delta_i+\frac{\xi_i}{\rho}\Big)\right\}.$$
Additionally, by (A5), we have
$$E\Big(\frac{\partial\widehat\ell}{\partial\gamma_j}\,\Big|\,X_i\Big)=\sum_{i=1}^n\left\{\delta_i-\frac{\rho\,I(Y_i<Y_j)\exp(\gamma_j)}{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)}\Big(\delta_i+\frac{A_i}{\rho}\Big)\right\}.$$
This also implies that
$$E\Big(\frac{\partial^2\widehat\ell}{\partial\gamma_j^2}\,\Big|\,X_i\Big)=\sum_{i=1}^n\frac{\rho^2I(Y_i<Y_j)\exp(2\gamma_j)}{\big\{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)\big\}^2}\Big(\delta_i+\frac{A_i}{\rho}\Big)$$
and
$$E\Big(\frac{\partial^2\widehat\ell}{\partial\gamma_j\,\partial\gamma_l}\,\Big|\,X_i\Big)=\sum_{i=1}^n\frac{\rho^2I(Y_i<Y_j)I(Y_i<Y_l)\exp(\gamma_l+\gamma_j)}{\big\{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)\big\}^2}\Big(\delta_i+\frac{A_i}{\rho}\Big)$$
for j l . Therefore, by the Law of Large Numbers, we have I γ with diagonal entries
$$E\Big(\frac{\partial^2\widehat\ell}{\partial\gamma_j^2}\Big)=E\left[\frac{\rho^2I(Y_i<Y_j)\exp(2\gamma_j)}{\big\{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)\big\}^2}\Big(\delta_i+\frac{A_i}{\rho}\Big)\right]$$
and off-diagonal elements
$$E\Big(\frac{\partial^2\widehat\ell}{\partial\gamma_j\,\partial\gamma_l}\Big)=E\left[\frac{\rho^2I(Y_i<Y_j)I(Y_i<Y_l)\exp(\gamma_l+\gamma_j)}{\big\{\rho\sum_{j=1}^nI(Y_i<Y_j)\exp(\gamma_j)+\exp\big(\beta^\top X_i+\tfrac12\beta^\top\Sigma_\epsilon\beta\big)\big\}^2}\Big(\delta_i+\frac{A_i}{\rho}\Big)\right]$$
for j l .
Finally, when n goes to infinity, the quadratic terms in (A3) converge in probability to
$$\big(u_1^\top,u_2^\top,u_3^\top\big)\begin{pmatrix}I_\alpha&0&0\\0&I_\beta&I_{\beta\gamma}\\0&I_{\beta\gamma}^\top&I_\gamma\end{pmatrix}\begin{pmatrix}u_1\\u_2\\u_3\end{pmatrix}=:U^\top IU,$$
which is equal to O ( B 2 ) because I is bounded and negative definite. Since B 2 dominates B in the linear terms of (A3), for sufficiently large B and U 2 = B , we have
$$\widehat\ell\Big(\alpha_0+\frac{u_1}{\sqrt n},\,\beta_0+\frac{u_2}{\sqrt n},\,\gamma_0+\frac{u_3}{\sqrt n}\Big)-\widehat\ell(\alpha_0,\beta_0,\gamma_0)<0.$$
Therefore, (A2) holds; thus, (A1) is verified. □

Appendix A.3. Proof of Theorem 2

Proof. 
In this proof, we adopt Theorem 3.3.1 in [31] to derive the asymptotic distribution. A similar discussion was also presented by [32].
Let $\theta_0\equiv(\alpha_0,\beta_0,H_0)$ and $\widehat\theta\equiv(\widehat\alpha,\widehat\beta,\widehat H)$. Moreover, define $S=\frac{\partial\widehat\ell}{\partial\theta}$ and $S^*=E(S)$. According to Theorem 3.3.1 in [31], we need to check the following:
(a)
$\sqrt n\,(S-S^*)(\theta_0)\xrightarrow{d}Z$, where $Z$ is a tight random element;
(b)
The map $\theta\mapsto S^*(\theta)$ is Fréchet differentiable at $\theta_0$ with a continuously invertible derivative $\dot S^*(\theta_0)$, where $\dot f(x)$ denotes the operator of the derivative of $f$ with respect to $x$;
(c)
$S^*(\theta_0)=0$, and $\widehat\theta$ satisfies $S(\widehat\theta)=o_p\big(\frac{1}{\sqrt n}\big)$.
If Conditions (a)–(c) are verified, then we conclude that
$$\sqrt n\,\big(\widehat\theta-\theta_0\big)\xrightarrow{d}-\dot S^*(\theta_0)^{-1}Z\quad\text{as }n\to\infty.$$
Lastly, we examine Conditions (a)–(c) separately.
  • Check Condition (a):
Since $\theta_0$ is the true value, we automatically have $S^*(\theta_0)=0$. Then, $(S-S^*)(\theta_0)=\frac1n\sum_{i=1}^n\big(D_{i,1}+D_{i,2}+D_{i,3}\big)$, where
$$D_{i,1}=X_i-\frac{X_i\exp(\alpha_0^\top X_i)}{1+\exp(\alpha_0^\top X_i)}+\Sigma_\epsilon\alpha_0,$$
$$D_{i,2}=\left\{\frac{X_i\exp(\beta_0^\top X_i)}{\rho H_0(Y_i)+\exp(\beta_0^\top X_i)}+\Sigma_\epsilon\beta_0\right\}\Big(\frac{A_i}{\rho}+\delta_i\Big),$$
and
$$D_{i,3}=\delta_i\,dH_0(Y_i)-\frac{\rho\,dH_0(Y_i)}{\rho H_0(Y_i)+\exp(\beta_0^\top X_i)}\Big(\frac{A_i}{\rho}+\delta_i\Big).$$
By Conditions (C1) and (C4), $D_{i,1}$ is Donsker because it belongs to a finite-dimensional class of measurable score functions. In addition, $D_{i,2}$ and $D_{i,3}$ are bounded real-valued functions on a bounded support due to Conditions (C2) and (C4), which indicates that $D_{i,2}$ and $D_{i,3}$ are Donsker (e.g., [29], p. 270). Therefore, by Example 2.10.7 in [31], we conclude that $(S-S^*)(\theta_0)$ is also Donsker. Consequently, by Section 2.8.2 in [31] and Theorem 19.3 in [29], we obtain that, as $n\to\infty$, $\sqrt n\,(S-S^*)(\theta_0)$ converges in distribution to a tight random element, denoted by $Z$.
  • Check Condition (b):
The first part is to show Fréchet differentiability. That is, it suffices to prove that (e.g., [29], p. 297)
$$\big\|S^*(\theta_0+v)-S^*(\theta_0)-\dot S^*(v)\big\|=o(\|v\|)\quad\text{as }v\to0.\tag{A6}$$
Let $\theta\equiv(\alpha,\beta,H)$, and let $\theta_0$ denote the true value of $\theta$. We write $\theta_r=\big(\alpha+r\nu_1,\,\beta+r\nu_2,\,H_r(\nu_3)\big)$ with $H_r(\nu_3)(t)=\int_0^t\{1+r\nu_3(u)\}\,dH(u)$, where $\nu_1,\nu_2\in\mathbb R^p$ and $\nu_3$ is of bounded variation on $[0,\tau]$. Define $\nu\equiv(\nu_1,\nu_2,\nu_3)$.
On one hand, by the Taylor series expansion on S ( θ 0 + r ν ) around r = 0 , we have
$$\begin{aligned}S^*(\theta_0+r\nu)-S^*(\theta_0)={}&r\,\nu_1^\top\,\frac{\partial}{\partial r}E\left\{X_i-\frac{X_i\exp\big((\alpha_0+r\nu_1)^\top X_i\big)}{1+\exp\big((\alpha_0+r\nu_1)^\top X_i\big)}+\Sigma_\epsilon(\alpha_0+r\nu_1)\right\}\bigg|_{r=0}\\&+r\,\nu_2^\top\,\frac{\partial}{\partial r}E\left[\left\{\frac{X_i\exp\big((\beta_0+r\nu_2)^\top X_i\big)}{\rho H_0(Y_i)+\exp\big((\beta_0+r\nu_2)^\top X_i\big)}+\Sigma_\epsilon(\beta_0+r\nu_2)\right\}\Big(\delta_i+\frac{A_i}{\rho}\Big)\right]\bigg|_{r=0}\\&+r\,\frac{\partial}{\partial r}E\left[\left\{\delta_i\nu_3(Y_i)-\frac{\rho\int_0^{Y_i}\nu_3(u)\,dH_0(u)}{\rho H_r(Y_i)+\exp(\beta_0^\top X_i)}\right\}\Big(\delta_i+\frac{A_i}{\rho}\Big)\right]\bigg|_{r=0}+o(r)\\\equiv{}&r\big\{\nu_1^\top S_\alpha^*+\nu_2^\top S_\beta^*+S_H^*(\nu_3)\big\}+o(r).\end{aligned}\tag{A7}$$
On the other hand, by taking the derivative of S based on the definition in p. 296 of [29], we have
$$\dot S^*(\nu)=\nu_1^\top E\left\{X_i-\frac{X_i\exp(\alpha^\top X_i)}{1+\exp(\alpha^\top X_i)}+\Sigma_\epsilon\alpha\right\}+\nu_2^\top E\left[\left\{\frac{X_i\exp(\beta^\top X_i)}{\rho H(Y_i)+\exp(\beta^\top X_i)}+\Sigma_\epsilon\beta\right\}\Big(\delta_i+\frac{A_i}{\rho}\Big)\right]+E\left[\left\{\delta_i\nu_3(Y_i)-\frac{\rho\int_0^{Y_i}\nu_3(u)\,dH(u)}{\rho H(Y_i)+\exp(\beta^\top X_i)}\right\}\Big(\delta_i+\frac{A_i}{\rho}\Big)\right].\tag{A8}$$
Finally, by combining (A7) and (A8), we obtain (A6) as $r\to0$, because of Theorem 1 and because $r\|\nu\|\to0$ as $r\to0$.
The second part is to discuss the continuous invertibility of $\dot S^*(\theta_0)$. Similar to the discussion in [32], the continuous invertibility of the Fréchet derivative can be justified by showing that there exists a constant $\zeta>0$ such that
$$\inf_{\|\theta\|=1}\big\|\dot S^*(\theta_0)\,\theta\big\|>\zeta.\tag{A9}$$
Note that $\dot S^*(\theta_0)$ can be expressed as a linear combination of three operators, each of which is a continuously differentiable function mapping into a finite-dimensional space. Therefore, we conclude that $\dot S^*(\theta_0)$ is the summation of invertible and compact operators; thus, (A9) holds by arguments similar to those in [32]. Consequently, $\dot S^*(\theta_0)$ is verified to be continuously invertible.
  • Check Condition (c):
The first claim, $S^*(\theta_0)=0$, has been verified in the argument for Condition (a). In addition, $\widehat\theta$ is the maximizer of $\widehat\ell(\theta)$ and is a consistent estimator by Theorem 1. Therefore, the second claim, $S(\widehat\theta)=o_p(n^{-1/2})$, holds. □

Appendix A.4. Proof of Theorem 3

Proof. 
In this appendix, we mainly prove results (a) and (b) separately.
  • Proof of part (a)
In this proof, we only show that, as n ,
$$\widehat A(\widehat\alpha,\widehat\beta)\xrightarrow{p}A(\alpha_0,\beta_0),\tag{A10}$$
and we omit the proofs of T ^ ( α ^ , β ^ , u ) and F ^ ( α ^ , β ^ , u ) because the derivations are similar to those for (A10).
We write $\widehat A(\widehat\alpha,\widehat\beta)=\widehat A_n(\widehat\alpha,\widehat\beta)\big/\widehat A_d(\widehat\alpha,\widehat\beta)$, where
$$\widehat A_d(\widehat\alpha,\widehat\beta)=\Big\{\frac1n\sum_{i=1}^n\widehat\pi(\widetilde X_i)\Big\}\Big[\frac1n\sum_{i=1}^n\big\{1-\widehat\pi(\widetilde X_i)\big\}\Big]$$
and
$$\widehat A_n(\widehat\alpha,\widehat\beta)=\frac1{n(n-1)}\sum_{i\ne j}\widehat\pi(\widetilde X_i)\big\{1-\widehat\pi(\widetilde X_j)\big\}\,I\big(\widehat\beta^\top\widetilde X_i>\widehat\beta^\top\widetilde X_j\big).$$
In the following, we examine A ^ d ( α ^ , β ^ ) and A ^ n ( α ^ , β ^ ) separately.
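As a numerical companion to this decomposition, $\widehat A_d$ and the U-statistic $\widehat A_n$ can be computed directly. In the Python sketch below, `pi_hat` and `scores` (holding $\widehat\pi(\widetilde X_i)$ and $\widehat\beta^\top\widetilde X_i$) are hypothetical precomputed inputs.

```python
import numpy as np

def auc_parts(pi_hat, scores):
    """Return (A_n, A_d): the U-statistic numerator and the plug-in denominator."""
    n = len(scores)
    A_d = pi_hat.mean() * (1.0 - pi_hat).mean()      # product of the two sample means
    concordant = scores[:, None] > scores[None, :]   # I(score_i > score_j); diagonal is zero
    A_n = (pi_hat[:, None] * (1.0 - pi_hat)[None, :] * concordant).sum() / (n * (n - 1))
    return A_n, A_d

# The corrected AUC estimate is then the ratio A_n / A_d.
```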
Claim 1: Let $A_d(\alpha_0,\beta_0)=E\{\pi(X)\}\,E\{1-\pi(X)\}$. We show that, as $n\to\infty$,
$$\widehat A_d(\widehat\alpha,\widehat\beta)\xrightarrow{p}A_d(\alpha_0,\beta_0).$$
Note that by (28) we have $E\{\pi(X)\}=E\{\pi(\widetilde X)\}$, yielding $A_d(\alpha_0,\beta_0)=E\{\pi(\widetilde X)\}\,E\{1-\pi(\widetilde X)\}$. By Theorem 1, $\widehat\alpha$ is a consistent estimator of $\alpha_0$, so $\widehat\pi(\widetilde X_i)\xrightarrow{p}\pi(\widetilde X_i)$ as $n\to\infty$. On the other hand, since the $\widetilde X_i$ are independent by Condition (C3), applying the Law of Large Numbers gives that, as $n\to\infty$, $\frac1n\sum_{i=1}^n\widehat\pi(\widetilde X_i)\xrightarrow{p}E\{\pi(\widetilde X)\}$ and $\frac1n\sum_{i=1}^n\{1-\widehat\pi(\widetilde X_i)\}\xrightarrow{p}E\{1-\pi(\widetilde X)\}$. Thus, Claim 1 holds.
Claim 2: Let $A_n(\alpha_0,\beta_0)=E\big[\pi(X_1)\{1-\pi(X_2)\}\,I(\beta_0^\top X_1>\beta_0^\top X_2)\big]$. We show that, as $n\to\infty$,
$$\widehat A_n(\widehat\alpha,\widehat\beta)\xrightarrow{p}A_n(\alpha_0,\beta_0).$$
Similar to Claim 1, by (29) we can obtain that $A_n(\alpha_0,\beta_0)$ is equal to $E\big[\pi(\widetilde X_1)\{1-\pi(\widetilde X_2)\}\,I(\beta_0^\top\widetilde X_1>\beta_0^\top\widetilde X_2)\big]$. By adding and subtracting an additional term, we have
$$\widehat A_n(\widehat\alpha,\widehat\beta)-A_n(\alpha_0,\beta_0)=\big\{\widehat A_n(\widehat\alpha,\widehat\beta)-\widetilde A_n(\alpha_0,\beta_0)\big\}+\big\{\widetilde A_n(\alpha_0,\beta_0)-A_n(\alpha_0,\beta_0)\big\}\equiv A_1+A_2,$$
where
$$\widetilde A_n(\alpha_0,\beta_0)=\frac1{n(n-1)}\sum_{i\ne j}\pi(\widetilde X_i)\big\{1-\pi(\widetilde X_j)\big\}\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big).\tag{A11}$$
The term $A_1$ can be expressed as
$$A_1=\frac1{n(n-1)}\sum_{i\ne j}W_{ij}+o_p(1)$$
with $W_{ij}=\widehat\pi(\widetilde X_i)\{1-\widehat\pi(\widetilde X_j)\}\,I(\widehat\beta^\top\widetilde X_i>\widehat\beta^\top\widetilde X_j)-\pi(\widetilde X_i)\{1-\pi(\widetilde X_j)\}\,I(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j)$.
By a discussion similar to that for Claim 1, we have $\widehat\pi(X_i)\xrightarrow{p}\pi(X_i)$ as $n\to\infty$. In addition, applying Theorem 1 gives $\widehat\beta^\top(X_i-X_j)\xrightarrow{p}\beta_0^\top(X_i-X_j)$. By $P\big\{\beta_0^\top(X_i-X_j)=0\big\}=0$ and the continuous mapping theorem (e.g., [33], Theorem 3.2.4), we have that, as $n\to\infty$,
$$E\big[I\big\{\widehat\beta^\top(X_i-X_j)>0\big\}\big]\xrightarrow{p}E\big[I\big\{\beta_0^\top(X_i-X_j)>0\big\}\big].\tag{A13}$$
Moreover, similar derivations show that
$$I\big\{\widehat\beta^\top(X_i-X_j)\le0\big\}\xrightarrow{p}I\big\{\beta_0^\top(X_i-X_j)\le0\big\}.\tag{A14}$$
By (A14) and the fact that $I(\mathcal E)\,I(\mathcal E^c)=0$ for any event $\mathcal E$, we have
$$E\big[I\big\{\widehat\beta^\top(X_i-X_j)>0\big\}\,I\big\{\widehat\beta^\top(X_i-X_j)\le0\big\}\big]\xrightarrow{p}0.\tag{A15}$$
Combining (A13) and (A15) yields $E\big|\frac1{n(n-1)}\sum_{i\ne j}W_{ij}\big|=o(1)$; thus, by Chebyshev's inequality, we have
$$A_1\xrightarrow{p}0\quad\text{as }n\to\infty.\tag{A16}$$
On the other hand, since (A11) is formulated as a U-statistic with a bounded kernel, applying a derivation similar to that of [34] gives
$$A_2\xrightarrow{p}0\quad\text{as }n\to\infty.\tag{A17}$$
Therefore, combining (A16) and (A17) shows that Claim 2, and thus (A10), holds.
  • Proof of part (b)
The proof of part (b) is similar to that of part (a), except that it additionally involves the estimates of the function $H(\cdot)$ and the parameter $\beta$. According to Theorem 1, $\widehat\beta$ and $\widehat H(\cdot)$ are consistent for the true values, so applying an argument similar to that in part (a) yields the desired result. Thus, the proof is completed. □

Appendix A.5. Proof of Theorem 4

Proof. 
In this appendix, we mainly prove results (a) and (b) separately.
  • Proof of part (a)
Define $\pi_0(\cdot)$ as (1) with $\alpha$ replaced by $\alpha_0$. Let $\widehat A(\alpha_0,\beta_0)$ denote (30) with $\widehat\alpha$ and $\widehat\beta$ replaced by $\alpha_0$ and $\beta_0$, respectively. By adding and subtracting the term $\widehat A(\alpha_0,\beta_0)$, we have
$$\widehat A(\widehat\alpha,\widehat\beta)-A(\alpha_0,\beta_0)=\big\{\widehat A(\widehat\alpha,\widehat\beta)-\widehat A(\alpha_0,\beta_0)\big\}+\big\{\widehat A(\alpha_0,\beta_0)-A(\alpha_0,\beta_0)\big\}\equiv R_1+R_2.$$
Note that by applying the Law of Large Numbers and Theorem 1 to the denominator terms of $R_1$ and $R_2$, we have that, as $n\to\infty$,
$$\Big\{\frac1n\sum_{i=1}^n\widehat\pi(\widetilde X_i)\Big\}\Big[\frac1n\sum_{i=1}^n\big\{1-\widehat\pi(\widetilde X_i)\big\}\Big]\xrightarrow{p}E\big\{\pi_0(\widetilde X_i)\big\}\,E\big\{1-\pi_0(\widetilde X_i)\big\}.\tag{A19}$$
To complete this proof, we examine the numerator terms of R 1 and R 2 separately.
  • Step 1: Examine R 1
The numerator term of $R_1$ can be expressed as
$$\begin{aligned}&\frac1{n(n-1)}\sum_{i\ne j}\Big[\widehat\pi(\widetilde X_i)\big\{1-\widehat\pi(\widetilde X_j)\big\}\,I\big(\widehat\beta^\top\widetilde X_i>\widehat\beta^\top\widetilde X_j\big)-\pi_0(\widetilde X_i)\big\{1-\pi_0(\widetilde X_j)\big\}\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)\Big]\\&\quad=\frac1{n^2}\sum_{i,j=1}^n\Big\{\widehat\pi(\widetilde X_i)\,I\big(\widehat\beta^\top\widetilde X_i>\widehat\beta^\top\widetilde X_j\big)-\pi_0(\widetilde X_i)\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)\Big\}\\&\qquad-\frac1{n^2}\sum_{i,j=1}^n\Big\{\widehat\pi(\widetilde X_i)\widehat\pi(\widetilde X_j)\,I\big(\widehat\beta^\top\widetilde X_i>\widehat\beta^\top\widetilde X_j\big)-\pi_0(\widetilde X_i)\pi_0(\widetilde X_j)\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)\Big\}\end{aligned}\tag{A20}$$
since $I\big(\widehat\beta^\top\widetilde X_i>\widehat\beta^\top\widetilde X_j\big)=0$ if $i=j$. By the Taylor series expansion, we have
$$\widehat\pi(\widetilde X_i)\,I\big(\widehat\beta^\top\widetilde X_i>\widehat\beta^\top\widetilde X_j\big)-\pi_0(\widetilde X_i)\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)=\Big[\frac{\partial}{\partial\phi}\big\{\pi(\widetilde X_i)\,I\big(\beta^\top\widetilde X_i>\beta^\top\widetilde X_j\big)\big\}\Big|_{\phi=\phi_0}\Big]^\top\big(\widehat\phi-\phi_0\big)+o_p(n^{-1/2});$$
then, the first term on the right-hand side of (A20) becomes
$$\begin{aligned}&\frac1{n^2}\sum_{i,j=1}^n\Big[\frac{\partial}{\partial\phi}\pi(\widetilde X_i)\Big|_{\phi=\phi_0}\Big]^\top\big(\widehat\phi-\phi_0\big)\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)+o_p(n^{-1/2})\\&\quad=\frac1n\sum_{i=1}^nq_i\Big[\frac{\partial}{\partial\phi}\pi(\widetilde X_i)\Big|_{\phi=\phi_0}\Big]^\top\big(\widehat\phi-\phi_0\big)+o_p(n^{-1/2})=E\Big[q_i\frac{\partial}{\partial\phi}\pi(\widetilde X_i)\Big|_{\phi=\phi_0}\Big]^\top\big(\widehat\phi-\phi_0\big)+o_p(n^{-1/2}),\end{aligned}\tag{A21}$$
where $q_i$ in the second step is defined as $\frac1n\sum_{j=1}^nI\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)$, and the last step is due to the Law of Large Numbers.
In addition, by the Taylor series expansion, we have
$$\widehat\pi(\widetilde X_i)\widehat\pi(\widetilde X_j)\,I\big(\widehat\beta^\top\widetilde X_i>\widehat\beta^\top\widetilde X_j\big)-\pi_0(\widetilde X_i)\pi_0(\widetilde X_j)\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)=\Big[\frac{\partial}{\partial\phi}\big\{\pi(\widetilde X_i)\pi(\widetilde X_j)\,I\big(\beta^\top\widetilde X_i>\beta^\top\widetilde X_j\big)\big\}\Big|_{\phi=\phi_0}\Big]^\top\big(\widehat\phi-\phi_0\big)+o_p(n^{-1/2}).$$
Then, the second term of (A20) can be expressed as
$$\frac1{n^2}\sum_{i,j=1}^n\Big[\frac{\partial}{\partial\phi}\big\{\pi(\widetilde X_i)\pi(\widetilde X_j)\big\}\Big|_{\phi=\phi_0}\Big]^\top I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)\big(\widehat\phi-\phi_0\big)+o_p(n^{-1/2})=E\Big[\frac{\partial}{\partial\phi}\big\{\pi(\widetilde X_i)\pi(\widetilde X_j)\big\}\Big|_{\phi=\phi_0}I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)\Big]^\top\big(\widehat\phi-\phi_0\big)+o_p(n^{-1/2}),\tag{A22}$$
where the equality holds due to the Law of Large Numbers for U-statistics.
where the equality holds due to the Law of Large Numbers in the U-statistics. Therefore, combining (A21) and (A22) yields
n A ^ ( α ^ , β ^ ) A ^ ( α 0 , β 0 ) = E 1 + E 2 n ϕ ^ ϕ 0 + o p ( 1 ) ,
where
$$E_1=\frac{E\Big[q_i\Big\{\pi_0(\widetilde X_i)+\frac{\partial}{\partial\alpha}\pi_0(\widetilde X_i)\,\beta_0\Big\}\Big]}{E\big\{\pi_0(\widetilde X_i)\big\}\,E\big\{1-\pi_0(\widetilde X_i)\big\}}$$
and
$$E_2=\frac{E\Big[\Big\{\pi_0(\widetilde X_i)\pi_0(\widetilde X_j)+\frac{\partial}{\partial\alpha}\big\{\pi_0(\widetilde X_i)\pi_0(\widetilde X_j)\big\}\,\beta_0\Big\}\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)\Big]}{E\big\{\pi_0(\widetilde X_i)\big\}\,E\big\{1-\pi_0(\widetilde X_i)\big\}}.$$
  • Step 2: Examine R 2
Similarly to Step 1, we only examine the numerator term of R 2 because the denominator term comes from (A19).
For the numerator term of $R_2$, we have
$$\frac2{n(n-1)}\sum_{i\ne j}\Big[\frac12\,\pi_0(\widetilde X_i)\big\{1-\pi_0(\widetilde X_j)\big\}\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)-E\Big[\pi_0(X_1)\big\{1-\pi_0(X_2)\big\}\,I\big(\beta_0^\top X_1>\beta_0^\top X_2\big)\Big]\Big].$$
Here, $\psi_{ij}\equiv\frac12\,\pi_0(\widetilde X_i)\{1-\pi_0(\widetilde X_j)\}\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)$ is the kernel, and $\frac2{n(n-1)}\sum_{i\ne j}\psi_{ij}$ forms a U-statistic. By Conditions (C3) and (C4), $E(\psi_{ij}^2)<\infty$, and by the construction in Section 3.3.1, we have $E(\psi_{ij})=E\big[\pi_0(X_1)\{1-\pi_0(X_2)\}\,I(\beta_0^\top X_1>\beta_0^\top X_2)\big]$. Then, applying Theorem 12.3 in [29] and Slutsky's theorem yields that, as $n\to\infty$,
$$\sqrt n\big\{\widehat A(\alpha_0,\beta_0)-A(\alpha_0,\beta_0)\big\}=\frac{1}{E\{\pi_0(\widetilde X_i)\}\,E\{1-\pi_0(\widetilde X_i)\}}\,\sqrt n\,\Big[\frac2{n(n-1)}\sum_{i\ne j}\big\{\psi_{ij}-E(\psi_{ij})\big\}\Big]\xrightarrow{d}N\big(0,\,2^2E_3\big)\tag{A24}$$
with $E_3=\mathrm{var}(\psi_{ij})\big/\big[E\{\pi_0(\widetilde X_i)\}\,E\{1-\pi_0(\widetilde X_i)\}\big]^2$.
Finally, combining (A23) and (A24) gives that, as $n\to\infty$,
$$\sqrt n\big\{\widehat A(\widehat\alpha,\widehat\beta)-A(\alpha_0,\beta_0)\big\}\xrightarrow{d}N(0,\sigma_{\mathrm{TI}})$$
with $\sigma_{\mathrm{TI}}=(E_1+E_2)^\top\Sigma_\phi(E_1+E_2)+4E_3$ and $\Sigma_\phi$ being the asymptotic variance of $\sqrt n\,(\widehat\phi-\phi_0)$ defined after Theorem 2.
  • Proof of part (b)
Following a discussion similar to that in part (a), we consider the decomposition
$$\widehat A_t(\widehat\alpha,\widehat\beta)-A_t(\alpha_0,\beta_0)=\big\{\widehat A_t(\widehat\alpha,\widehat\beta)-\widehat A_t(\alpha_0,\beta_0)\big\}+\big\{\widehat A_t(\alpha_0,\beta_0)-A_t(\alpha_0,\beta_0)\big\}\equiv K_1+K_2.\tag{A25}$$
By applying the Law of Large Numbers and Theorem 1 to the denominator terms of $K_1$ and $K_2$, we have that, as $n\to\infty$,
$$\Big[\frac1n\sum_{i=1}^n\widehat\pi(\widetilde X_i)\big\{1-\widehat S_u(t\mid\widetilde X_i)\big\}\Big]\Big\{\frac1n\sum_{i=1}^n\widehat\pi(\widetilde X_i)\,\widehat S_u(t\mid\widetilde X_i)\Big\}\xrightarrow{p}E\Big[\pi_0(\widetilde X_i)\big\{1-S_u(t\mid\widetilde X_i)\big\}\Big]\,E\big\{\pi_0(\widetilde X_i)\,S_u(t\mid\widetilde X_i)\big\}.\tag{A26}$$
On the other hand, the numerator term of $K_1$ can be expressed as
$$\begin{aligned}&\frac1{n(n-1)}\sum_{i\ne j}\widehat\pi(\widetilde X_i)\big\{1-\widehat S_u(t\mid\widetilde X_i)\big\}\,\widehat\pi(\widetilde X_j)\,\widehat S_u(t\mid\widetilde X_j)\,I\big(\widehat\beta^\top\widetilde X_i>\widehat\beta^\top\widetilde X_j\big)\\&\qquad-\frac1{n(n-1)}\sum_{i\ne j}\pi_0(\widetilde X_i)\big\{1-S_u(t\mid\widetilde X_i)\big\}\,\pi_0(\widetilde X_j)\,S_u(t\mid\widetilde X_j)\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)\\&\quad=\frac1n\sum_{i=1}^nq_i\Big[\frac{\partial}{\partial\phi}\big[\pi(\widetilde X_i)\big\{1-S_u(t\mid\widetilde X_i)\big\}\,\pi(\widetilde X_j)\,S_u(t\mid\widetilde X_j)\big]\Big|_{\phi=\phi_0}\Big]^\top\big(\widehat\phi-\phi_0\big)+o_p(1).\end{aligned}\tag{A27}$$
In addition, the numerator term of $K_2$ can be written as
$$\frac2{n(n-1)}\sum_{i\ne j}\big\{\Gamma_{ij}-E(\Gamma_{ij})\big\},\tag{A28}$$
where $\Gamma_{ij}\equiv\frac12\,\pi_0(\widetilde X_i)\big\{1-S_u(t\mid\widetilde X_i)\big\}\,\pi_0(\widetilde X_j)\,S_u(t\mid\widetilde X_j)\,I\big(\beta_0^\top\widetilde X_i>\beta_0^\top\widetilde X_j\big)$ is the kernel of the U-statistic in the numerator term of $K_2$.
Finally, combining (A25)–(A28) with the central limit theorem applied to $\widehat\phi-\phi_0$ and to the $\Gamma_{ij}$, together with Slutsky's theorem, gives that, as $n\to\infty$,
$$\sqrt n\big\{\widehat A_t(\widehat\alpha,\widehat\beta)-A_t(\alpha_0,\beta_0)\big\}\xrightarrow{d}N(0,\sigma_{\mathrm{TD}}),$$
where $\sigma_{\mathrm{TD}}=M_1^\top\Sigma_\phi M_1+4M_2$, with $M_2=\mathrm{var}(\Gamma_{ij})\big/\big(E\big[\pi_0(\widetilde X_i)\{1-S_u(t\mid\widetilde X_i)\}\big]\,E\big\{\pi_0(\widetilde X_i)\,S_u(t\mid\widetilde X_i)\big\}\big)^2$ and $M_1=E\Big[q_i\frac{\partial}{\partial\phi}\big[\pi(\widetilde X_i)\{1-S_u(t\mid\widetilde X_i)\}\,\pi(\widetilde X_j)\,S_u(t\mid\widetilde X_j)\big]\Big|_{\phi=\phi_0}\Big]$. □

References

  1. Lawless, J.F. Statistical Models and Methods for Lifetime Data; Wiley: New York, NY, USA, 2003. [Google Scholar]
  2. Lu, W.; Ying, Z. On semiparametric transformation cure models. Biometrika 2004, 91, 331–343. [Google Scholar] [CrossRef]
  3. Chen, C.-M.; Shen, P.-S.; Wei, J.C.-C.; Lin, L. A semiparametric mixture cure survival model for left-truncated and right-censored data. Biom. J. 2017, 59, 270–290. [Google Scholar] [CrossRef]
  4. Amico, M.; Keilegom, I.V.; Legrand, C. The single-index/Cox mixture cure model. Biometrics 2019, 75, 452–462. [Google Scholar] [CrossRef]
  5. Amico, M.; Keilegom, I.V. Cure models in survival analysis. Annu. Rev. Stat. Its Appl. 2018, 5, 311–342. [Google Scholar] [CrossRef]
  6. Bertrand, A.; Legrand, C.; Carroll, R.J.; Meester, C.D.; Keilegom, I.V. Inference in a survival cure model with mismeasured covariates using a simulation-extrapolation approach. Biometrika 2017, 104, 31–50. [Google Scholar] [CrossRef]
  7. Ma, Y.; Yin, G. Cure rate model with mismeasured covariates under transformation. J. Am. Stat. Assoc. 2008, 103, 743–756. [Google Scholar] [CrossRef]
  8. Chen, L.-P. Semiparametric estimation for cure survival model with left-truncated and right-censored data and covariate measurement error. Stat. Probab. Lett. 2019, 154, 108547. [Google Scholar] [CrossRef]
  9. Beyene, K.M.; El Ghouch, A. Smoothed time-dependent receiver operating characteristic curve for right censored survival data. Stat. Med. 2020, 39, 3373–3396. [Google Scholar] [CrossRef]
  10. Heagerty, P.J.; Lumley, T.; Pepe, M.S. Time-dependent ROC curves for censored survival data and a diagnostic marker. Biometrics 2000, 56, 337–344. [Google Scholar] [CrossRef]
  11. Kamarudin, A.N.; Cox, T.; Kolamunnage-Dona, R. Time-dependent ROC curve analysis in medical research: Current methods and applications. BMC Med. Res. Methodol. 2017, 17, 53. [Google Scholar] [CrossRef]
  12. Li, L.; Greene, T.; Hu, B. A simple method to estimate the time-dependent receiver operating characteristic curve and the area under the curve with right censored data. Stat. Methods Med. Res. 2018, 27, 2264–2278. [Google Scholar] [CrossRef]
  13. Song, X.; Zhou, X.-H. A semiparametric approach for the covariate specific roc curve with survival outcome. Stat. Sin. 2008, 18, 947–965. [Google Scholar]
  14. Zhang, Y.; Han, X.; Shao, Y. The ROC of Cox proportional hazards cure models with application in cancer studies. Lifetime Data Anal. 2021, 27, 195–215. [Google Scholar] [CrossRef]
  15. Amico, M.; Keilegom, I.V.; Han, B. Assessing cure status prediction from survival data using receiver operating characteristic curves. Biometrika 2021, 108, 727–740. [Google Scholar] [CrossRef]
  16. Kolamunnage-Dona, R.; Kamarudin, A.N. Adjustment for the measurement error in evaluating biomarker performances at baseline for future survival outcomes: Time-dependent receiver operating characteristic curve within a joint modelling framework. Res. Methods Med. Health Sci. 2021, 2, 51–60. [Google Scholar] [CrossRef]
  17. Crowther, M.J.; Lambert, P.C.; Abrams, K.R. Adjusting for measurement error in baseline prognostic biomarkers included in a time-to-event analysis: A joint modelling approach. BMC Med. Res. Methodol. 2013, 13, 146. [Google Scholar] [CrossRef]
  18. Nevo, D.; Zucker, D.M.; Tamimic, R.M.; Wang, M. Accounting for measurement error in biomarker data and misclassification of subtypes. Stat. Med. 2016, 35, 5686–5700. [Google Scholar] [CrossRef]
  19. Mao, M.; Wang, J.-L. Semiparametric efficient estimation for a class of generalized proportional odds cure models. J. Am. Stat. 2010, 105, 302–311. [Google Scholar] [CrossRef]
  20. Chen, L.-P. Semiparametric estimation for the transformation model with length-biased data and covariate measurement error. J. Stat. Comput. Simul. 2020, 90, 420–442. [Google Scholar] [CrossRef]
  21. Carroll, R.J.; Ruppert, D.; Stefanski, L.A.; Crainiceanu, C.M. Measurement Error in Nonlinear Model; CRC Press: New York, NY, USA, 2006. [Google Scholar]
  22. Chen, L.-P.; Yi, G.Y. Semiparametric estimation methods for left-truncated and right-censored survival data with covariate measurement error. Ann. Inst. Stat. Math. 2021, 73, 481–517. [Google Scholar] [CrossRef]
  23. Elandt-Johnson, R.C.; Johnson, N.L. Survival Models and Data Analysis; John Wiley & Sons: New York, NY, USA, 1980. [Google Scholar]
  24. Chen, L.-P.; Yi, G.Y. Analysis of noisy survival data with graphical proportional hazards measurement error models. Biometrics 2021, 77, 956–969. [Google Scholar] [CrossRef]
  25. Hunter, D.R.; Lange, K. Computing estimates in the proportional odds model. Ann. Inst. Stat. Math. 2002, 54, 155–168. [Google Scholar] [CrossRef]
  26. Zheng, Y.; Cai, T.; Feng, Z. Application of the time-dependent ROC curves for prognostic accuracy with multiple biomarkers. Biometrics 2006, 62, 279–287. [Google Scholar] [CrossRef]
  27. Carroll, R.J.; Spiegelman, C.H.; Lan, K.K.G.; Bailey, K.T.; Abbot, R.D. On errors-in-variables for binary regression model. Biometrika 1984, 71, 19–25. [Google Scholar] [CrossRef]
  28. Webb, A.; Ma, J. Cox models with time-varying covariates and partly-interval censoring—A maximum penalised likelihood approach. Stat. Med. 2023, 42, 815–833. [Google Scholar] [CrossRef]
  29. van der Vaart, A.W. Asymptotic Statistics; Cambridge University Press: New York, NY, USA, 1998. [Google Scholar]
  30. Kim, J.P.; Lu, W.; Sit, T.; Ying, Z. A unified approach to semiparametric transformation models under general biased sampling schemes. J. Am. Stat. Assoc. 2013, 108, 217–227. [Google Scholar] [CrossRef]
  31. van der Vaart, A.W.; Wellner, J.A. Weak Convergence and Empirical Processes; Springer: New York, NY, USA, 1996. [Google Scholar]
  32. Su, Y.-R.; Wang, J.-L. Modeling left-truncated and right-censored survival data with longitudinal covariates. Ann. Stat. 2012, 40, 1465–1488. [Google Scholar] [CrossRef]
  33. Durrett, R. Probability: Theory and Examples; Cambridge University Press: New York, NY, USA, 2010. [Google Scholar]
  34. Hoeffding, W. A class of statistics with asymptotically normal distribution. Ann. Math. Stat. 1948, 19, 293–325. [Google Scholar] [CrossRef]
Table 1. Simulation results. “Parameter” indicates the parameters of interest; “Method” refers to the proposed and naive methods; “Bias” is the bias of the estimators; “S.E.” is the standard error; and “MSE” is the mean squared error.
σ_ε²   Parameter     Method    |  n = 250               |  n = 500
                               |  Bias    S.E.    MSE   |  Bias    S.E.    MSE
0.15   α             Naive     |  0.163   0.308   0.121 |  0.159   0.230   0.078
                     Proposed  |  0.012   0.325   0.106 |  0.009   0.238   0.057
       β             Naive     |  0.170   0.231   0.082 |  0.165   0.208   0.070
                     Proposed  |  0.018   0.266   0.071 |  0.013   0.220   0.049
       A(α, β)       Naive     |  0.108   0.033   0.013 |  0.096   0.030   0.010
                     Proposed  |  0.014   0.056   0.003 |  0.007   0.047   0.002
       A_5(α, β)     Naive     |  0.115   0.051   0.016 |  0.108   0.047   0.014
                     Proposed  |  0.016   0.060   0.004 |  0.013   0.057   0.003
       A_10(α, β)    Naive     |  0.114   0.049   0.015 |  0.103   0.041   0.012
                     Proposed  |  0.013   0.062   0.004 |  0.009   0.056   0.003
0.35   α             Naive     |  0.195   0.321   0.141 |  0.178   0.255   0.097
                     Proposed  |  0.020   0.345   0.119 |  0.017   0.269   0.073
       β             Naive     |  0.195   0.264   0.108 |  0.181   0.229   0.085
                     Proposed  |  0.023   0.281   0.080 |  0.020   0.245   0.060
       A(α, β)       Naive     |  0.125   0.047   0.018 |  0.114   0.042   0.015
                     Proposed  |  0.017   0.060   0.004 |  0.014   0.055   0.003
       A_5(α, β)     Naive     |  0.123   0.060   0.019 |  0.112   0.055   0.016
                     Proposed  |  0.020   0.071   0.005 |  0.017   0.066   0.005
       A_10(α, β)    Naive     |  0.121   0.054   0.018 |  0.116   0.050   0.016
                     Proposed  |  0.019   0.069   0.005 |  0.016   0.063   0.004
0.55   α             Naive     |  0.214   0.348   0.167 |  0.193   0.285   0.118
                     Proposed  |  0.027   0.362   0.132 |  0.023   0.307   0.095
       β             Naive     |  0.226   0.287   0.133 |  0.201   0.266   0.111
                     Proposed  |  0.028   0.301   0.091 |  0.025   0.278   0.078
       A(α, β)       Naive     |  0.133   0.066   0.022 |  0.128   0.059   0.020
                     Proposed  |  0.021   0.078   0.006 |  0.019   0.069   0.005
       A_5(α, β)     Naive     |  0.136   0.068   0.023 |  0.124   0.063   0.019
                     Proposed  |  0.026   0.079   0.007 |  0.024   0.074   0.006
       A_10(α, β)    Naive     |  0.134   0.060   0.022 |  0.128   0.057   0.020
                     Proposed  |  0.025   0.066   0.005 |  0.021   0.069   0.005
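The Bias, S.E., and MSE reported in Table 1 are standard Monte Carlo summaries over repeated simulation runs. As a minimal sketch (the function name and interface are illustrative, not taken from the paper), such summaries can be computed from a vector of replicate estimates of a parameter with known true value as follows:

```python
import numpy as np

def summarize(estimates, true_value):
    """Monte Carlo summaries of an estimator over simulation replicates.

    Bias = mean of the estimates minus the true value
    S.E. = empirical standard deviation of the estimates
    MSE  = mean squared deviation of the estimates from the true value
    """
    estimates = np.asarray(estimates, dtype=float)
    bias = estimates.mean() - true_value
    se = estimates.std(ddof=1)          # sample standard deviation
    mse = np.mean((estimates - true_value) ** 2)
    return bias, se, mse
```

Note that MSE decomposes as Bias² plus variance (up to the sample-variance correction factor), which is why in Table 1 the nearly unbiased proposed estimator attains a smaller MSE than the naive estimator despite its slightly larger S.E.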
Share and Cite

MDPI and ACS Style

Chen, L.-P. Analysis of Receiver Operating Characteristic Curves for Cure Survival Data and Mismeasured Biomarkers. Mathematics 2025, 13, 424. https://doi.org/10.3390/math13030424