Article

Partially Linear Additive Hazards Regression for Bivariate Interval-Censored Data

1 Center for Applied Statistical Research, College of Mathematics, Jilin University, Changchun 130012, China
2 School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
3 Department of Statistics, University of Missouri, Columbia, MO 65211, USA
* Author to whom correspondence should be addressed.
Axioms 2023, 12(2), 198; https://doi.org/10.3390/axioms12020198
Submission received: 22 December 2022 / Revised: 7 February 2023 / Accepted: 11 February 2023 / Published: 13 February 2023
(This article belongs to the Special Issue Computational Statistics & Data Analysis)

Abstract

In this paper, we discuss regression analysis of bivariate interval-censored failure time data, which often occur in biomedical and epidemiological studies. To address this problem, we propose a class of general and flexible copula-based semiparametric partially linear additive hazards models that allow for both time-dependent covariates and possible nonlinear covariate effects. For inference, a sieve maximum likelihood estimation approach based on Bernstein polynomials is proposed to estimate the baseline hazard functions and nonlinear covariate effects. The resulting estimators of the regression parameters are shown to be consistent, asymptotically normal and efficient. A simulation study is conducted to assess the finite-sample performance of the method, and the results show that it is effective in practice. An illustration is also provided.

1. Introduction

In this article, we discuss a regression analysis of bivariate interval-censored data based on the marginal additive hazards model. Interval-censored data refer to failure times that are only known to fall within an interval rather than being observed exactly. Such data frequently occur in various areas, including biomedical and epidemiological studies [1]. The commonly studied right-censored data can be seen as a special case of interval-censored data, but it is important to note that the analysis of interval-censored data is typically far more difficult than that of right-censored data. Bivariate interval-censored data arise when there exist two correlated failure times of interest and the observed times of both failure events suffer interval censoring. Clinical trials or medical studies involving several events on the same individual, such as eye disease studies, often yield bivariate interval-censored data. The purpose of this article is to propose a flexible regression model for bivariate interval-censored data when the main interest is in the risk difference or excess risk.
There are several methods available for regression analysis of univariate interval-censored data under the additive hazards model. Refs. [2,3], for example, investigated the problem for case I interval-censored or current status data, a special form of interval-censored data in which each observed interval has either a zero left endpoint or an infinite right endpoint. The former discussed an estimating equation approach and the latter considered an efficient estimation approach. More recently, Ref. [4] studied the same issue as Ref. [2] but with some inequality constraints, and Ref. [5] proposed a sieve maximum likelihood approach. Moreover, Ref. [6] developed several inverse-probability-weighting and reweighting-based estimation procedures for the situation with missing covariates, while Ref. [7] presented an efficient approach for general situations.
A large number of approaches have been established for modeling bivariate interval-censored survival data, and three types of methods are generally used. One is the marginal method, which relies on a working independence assumption [8,9]. Another commonly used approach is the frailty-model-based method, which employs a frailty or latent variable to build the correlation among the associated failure times [10,11]. Ref. [12] proposed a frailty model method for multivariate interval-censored data with informative censoring. The third type is the copula-based approach, which gives a different, specific way to model two dependent failure times. One advantage of this approach is that it directly connects the two marginal distributions through a copula function to construct the joint distribution and uses the copula parameter to determine the correlation. This distinguishing characteristic makes it possible to model the margins separately from the copula function, which is appealing from both the modeling and the interpretation points of view. Among others, Ref. [13] discussed this approach based on the marginal transformation model with the two-parameter Archimedean copula. Ref. [14] proposed a copula link-based additive model for right-censored event time data. Moreover, Ref. [15] proposed a copula-based model to deal with bivariate survival data under various censoring mechanisms. In the following, we discuss the regression analysis of bivariate cases by employing the method in [13]. More specifically, we propose a class of semiparametric partially linear additive hazards models.
Partially linear models are becoming more and more common since they combine the flexibility of nonparametric modeling with the simplicity and ease of interpretation of parametric modeling. They assume that the marginal conditional hazard function has nonlinear relationships with some covariates but linear relationships with others [16,17,18]. In practice, nonlinear covariate effects are common. For instance, in some medication research, the influence of the dosage of a particular medication may reach a peak at a certain dosage level and be maintained at that level, or it may diminish beyond that dosage level. Although there is a fair amount of literature in this field, to the best of our knowledge, there does not seem to exist a study considering this for bivariate interval-censored survival data.
The presented model involves two nonparametric functions: one identifies the baseline cumulative hazard function and the other describes the nonlinear effects of a continuous covariate. In the following, the two-parameter Archimedean copula model is employed for the dependence, and more comments on this are given below. Moreover, a sieve maximum likelihood estimation approach is developed that approximates the two nonparametric nuisance functions by Bernstein polynomials. The proposed method has several desirable features: (a) it allows both time-dependent covariates and covariates that may have nonlinear effects; (b) the two-parameter Archimedean copula model can flexibly handle dependence structures in both the upper and lower tails, and the strength of the dependence can be quantified via Kendall's $\tau$; (c) the sieve maximum likelihood estimation approach with Bernstein polynomials can be easily implemented with existing software; (d) as seen in the simulation study, the computation is both stable and efficient. Note that the method given by [13] only allows for linear covariate effects.
More specifically, in Section 2, after introducing some notation, assumptions and models that will be used throughout the paper, we present the resulting likelihood function. In Section 3, we first describe the proposed sieve maximum likelihood estimation approach and then present the asymptotic properties of the proposed estimators. Section 4 presents a simulation study assessing the finite-sample performance of the proposed estimation approach, and the results indicate that it works well as expected. In Section 5, an illustration is provided using a set of data arising from the Age-Related Eye Disease Study (AREDS), and Section 6 concludes with some discussion and remarks.

2. Assumptions and Likelihood Function

Consider a study consisting of $n$ independent subjects. Define $T_{ij}$ as the failure time of interest for the $j$th failure event of the $i$th subject. Suppose that for $T_{ij}$, the two observation times are given by $(U_{ij}, V_{ij}]$ such that $U_{ij} < T_{ij} \le V_{ij}$. In addition, suppose that for the $i$th subject, the $p$-dimensional covariate vectors, possibly time-dependent, are denoted by $X_{i1}$ and $X_{i2}$, and the single continuous covariates $Z_{i1}$ and $Z_{i2}$ are related to $T_{i1}$ and $T_{i2}$, respectively. More details on them are given below. Here, $X_{i1}$ and $X_{i2}$ (or $Z_{i1}$ and $Z_{i2}$) could be the same, entirely different, or could share some common components. Then, the observed data are as follows:
$O = \{ O_i = (U_{ij}, V_{ij}, X_{ij}, Z_{ij}) : i = 1, \ldots, n,\; j = 1, 2 \}.$
Note that when $V_{ij} = \infty$, $T_{ij}$ is right-censored, and when $U_{ij} = 0$, $T_{ij}$ is left-censored. Moreover, it is assumed that given the covariates, the interval censoring is independent of the failure times $T_{ij}$ [1].
Given the covariates, define $S_{ij}(t_{ij} \mid X_{ij}, Z_{ij}) = P(T_{ij} > t_{ij} \mid X_{ij}, Z_{ij})$ as the marginal survival function of $T_{ij}$, and
$S(t_{i1}, t_{i2} \mid X_{i1}, X_{i2}, Z_{i1}, Z_{i2}) = P(T_{i1} > t_{i1}, T_{i2} > t_{i2} \mid X_{i1}, X_{i2}, Z_{i1}, Z_{i2})$
as the joint survival function of $T_{i1}$ and $T_{i2}$. For the covariate effects, suppose that given $X_{ij}$ and $Z_{ij}$, the marginal hazard function of $T_{ij}$ is defined as follows:
$h_j(t \mid X_{ij}, Z_{ij}) = \lambda_j(t) + \beta^T X_{ij}(t) + g(Z_{ij}),$
where $\lambda_j(t)$ denotes an unknown baseline hazard function, $\beta$ is an unknown vector of regression coefficients, and $g(\cdot)$ is an unknown, smooth nonlinear regression function. That is, $T_{ij}$ follows a partially linear additive hazards model, in which $Z_{ij}$ represents the covariate that may have nonlinear effects on $T_{ij}$. Correspondingly, $T_{ij}$ has the cumulative hazard function
$H_j(t \mid X_{ij}, Z_{ij}) = \int_0^t h_j(s \mid X_{ij}, Z_{ij})\,ds = \Lambda_j(t) + \beta^T W_{ij}(t) + g(Z_{ij})\,t,$
where $\Lambda_j(t) = \int_0^t \lambda_j(s)\,ds$ and $W_{ij}(t) = \int_0^t X_{ij}(s)\,ds$, and we have that
$S_{ij}(t \mid X_{ij}, Z_{ij}) = \exp\left[-\int_0^t h_j(s \mid X_{ij}, Z_{ij})\,ds\right] = \exp[-H_j(t \mid X_{ij}, Z_{ij})] = \exp\{-[\Lambda_j(t) + \beta^T W_{ij}(t) + g(Z_{ij})\,t]\}, \quad i = 1, \ldots, n,\; j = 1, 2.$
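As a concrete illustration (not part of the paper's formulation), the marginal survival function above can be evaluated directly once $\Lambda_j$, $\beta$ and $g$ are specified. The sketch below assumes a scalar, time-independent covariate, so that $W_{ij}(t) = X_{ij}\,t$; the function name and the illustrative choices of $\Lambda$ and $g$ are ours:

```python
import math

def marginal_survival(t, x, z, beta, Lambda, g):
    """S_ij(t | X, Z) = exp(-[Lambda_j(t) + beta * W(t) + g(Z) * t]).

    Assumes a scalar, time-independent covariate x, so W(t) = x * t.
    Lambda and g are user-supplied callables."""
    return math.exp(-(Lambda(t) + beta * x * t + g(z) * t))

# Hypothetical choices for demonstration only:
# baseline cumulative hazard Lambda(t) = t^2 / 2 and g(z) = z^2.
S = marginal_survival(1.0, 1.0, 0.5, 0.3, lambda t: t * t / 2.0,
                      lambda z: z * z)
```

By construction, $S$ equals 1 at $t = 0$ and decreases toward 0 as $t$ grows, as a survival function must.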
It is worth noting that for simplicity of the expressions and calculations, we assume that both the linear and nonlinear covariate effects are the same for the two associated failure times of interest [19,20]. It is straightforward to extend the following method to the situation where the covariate effects are different.
It follows from Sklar's theorem [21] that if the marginal survival functions $S_{ij}(\cdot)$ are continuous, there exists a unique copula function $C_\xi(\cdot, \cdot)$ on $[0, 1]^2$ such that $C_\xi(s_1, 0) = C_\xi(0, s_2) = 0$, $C_\xi(s_1, 1) = s_1$ and $C_\xi(1, s_2) = s_2$, and it gives
$S(t_{i1}, t_{i2} \mid X_{i1}, X_{i2}, Z_{i1}, Z_{i2}) = C_\xi\big( S_{i1}(t_{i1} \mid X_{i1}, Z_{i1}),\, S_{i2}(t_{i2} \mid X_{i2}, Z_{i2}) \big), \quad t_{i1}, t_{i2} \ge 0.$
Here, the parameter $\xi$ denotes the correlation or dependence between $T_{i1}$ and $T_{i2}$. As mentioned above, a significant advantage of the copula representation is that it separates the correlation from the two marginal distributions [22]. There exist many copula functions, and among them, perhaps the most commonly used family for bivariate data is the Archimedean copula family. Following [13], we focus on the flexible two-parameter Archimedean copula model given by
$C_{\varphi,\omega}(s_1, s_2) = \Big\{ 1 + \big[ (s_1^{-1/\omega} - 1)^{1/\varphi} + (s_2^{-1/\omega} - 1)^{1/\varphi} \big]^{\varphi} \Big\}^{-\omega} = \mu\big( \mu^{-1}(s_1) + \mu^{-1}(s_2) \big), \quad \varphi \in (0, 1],\; \omega \in (0, \infty),$
where $\mu(s) = \mu_{\varphi,\omega}(s) = (1 + s^{\varphi})^{-\omega}$ and $\mu^{-1}$ is the generalized inverse of $\mu$, defined as $\mu^{-1}(y) = \inf\{ x \ge 0 : \mu(x) \le y \}$. The detailed derivation and more comments can be found in Chapter 5 of [23].
As mentioned before, the two parameters $\varphi$ and $\omega$ in the copula function above are the association parameters, representing the correlation in both the upper and lower tails. In particular, when $\varphi = 1$, the copula model above reduces to the Clayton copula [24], while as $\omega \to \infty$, it becomes the Gumbel copula [25]. In other words, the two-parameter copula model is more flexible and includes the Clayton and Gumbel copulas as special cases. It is well-known that another commonly used measure of correlation is Kendall's $\tau$, which has an explicit connection with $(\varphi, \omega)$:
$\tau = 1 - 2\varphi\omega / (2\omega + 1).$
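For concreteness, the copula and the Kendall's $\tau$ formula above can be coded directly from the generator $\mu(s) = (1 + s^{\varphi})^{-\omega}$. The following is a minimal Python sketch (the function names are ours):

```python
def two_param_copula(s1, s2, phi, omega):
    """Two-parameter Archimedean copula C_{phi,omega}(s1, s2) built from
    the generator mu(s) = (1 + s**phi)**(-omega)."""
    mu = lambda x: (1.0 + x ** phi) ** (-omega)
    mu_inv = lambda y: (y ** (-1.0 / omega) - 1.0) ** (1.0 / phi)
    return mu(mu_inv(s1) + mu_inv(s2))

def kendalls_tau(phi, omega):
    """Kendall's tau implied by (phi, omega): tau = 1 - 2*phi*omega/(2*omega+1)."""
    return 1.0 - 2.0 * phi * omega / (2.0 * omega + 1.0)

# With phi = 1 the family reduces to the Clayton copula with theta = 1/omega:
# C(s1, s2) = (s1**(-1/omega) + s2**(-1/omega) - 1)**(-omega).
```

As a sanity check, with $\varphi = \omega = 1$ (Clayton with $\theta = 1$) one gets $C(0.5, 0.5) = 1/3$ and $\tau = 1/3$, consistent with the Clayton formula $\tau = \theta/(\theta + 2)$.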
Under the assumptions above, the observed likelihood function has the form
$L_n(S_1, S_2, \varphi, \omega \mid O) = \prod_{i=1}^n P(U_{i1} < T_{i1} \le V_{i1},\, U_{i2} < T_{i2} \le V_{i2} \mid Y_{i1}, Y_{i2})$
$= \prod_{i=1}^n \big[ P(T_{i1} > U_{i1}, T_{i2} > U_{i2} \mid Y_{i1}, Y_{i2}) - P(T_{i1} > U_{i1}, T_{i2} > V_{i2} \mid Y_{i1}, Y_{i2}) - P(T_{i1} > V_{i1}, T_{i2} > U_{i2} \mid Y_{i1}, Y_{i2}) + P(T_{i1} > V_{i1}, T_{i2} > V_{i2} \mid Y_{i1}, Y_{i2}) \big]$
$= \prod_{i=1}^n \big[ S(U_{i1}, U_{i2} \mid Y_{i1}, Y_{i2}) - S(U_{i1}, V_{i2} \mid Y_{i1}, Y_{i2}) - S(V_{i1}, U_{i2} \mid Y_{i1}, Y_{i2}) + S(V_{i1}, V_{i2} \mid Y_{i1}, Y_{i2}) \big]$
$= \prod_{i=1}^n \big[ C_{\varphi,\omega}\big(S_1(U_{i1} \mid Y_{i1}), S_2(U_{i2} \mid Y_{i2})\big) - C_{\varphi,\omega}\big(S_1(U_{i1} \mid Y_{i1}), S_2(V_{i2} \mid Y_{i2})\big) - C_{\varphi,\omega}\big(S_1(V_{i1} \mid Y_{i1}), S_2(U_{i2} \mid Y_{i2})\big) + C_{\varphi,\omega}\big(S_1(V_{i1} \mid Y_{i1}), S_2(V_{i2} \mid Y_{i2})\big) \big],$
where $Y_{i1} = (X_{i1}, Z_{i1})$ and $Y_{i2} = (X_{i2}, Z_{i2})$. Let $\eta = (\beta, \varphi, \omega, \Lambda_1, \Lambda_2, g)$ denote all unknown parameters. The next section proposes a sieve approach for the estimation of $\eta$.
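The likelihood above multiplies, over subjects, the probability mass the copula assigns to the rectangle $(U_{i1}, V_{i1}] \times (U_{i2}, V_{i2}]$. A minimal sketch of one subject's log-likelihood contribution follows; the function names and the convention $S_j(\infty) = 0$ for a right-censored component are ours, not from the paper:

```python
import math

def copula(s1, s2, phi, omega):
    """Two-parameter Archimedean copula; C(s, 0) = C(0, s) = 0 by convention."""
    if s1 <= 0.0 or s2 <= 0.0:
        return 0.0
    inv = lambda y: (y ** (-1.0 / omega) - 1.0) ** (1.0 / phi)
    return (1.0 + (inv(s1) + inv(s2)) ** phi) ** (-omega)

def subject_loglik(S1U, S1V, S2U, S2V, phi, omega):
    """log P(U1 < T1 <= V1, U2 < T2 <= V2 | Y1, Y2) via the four-term
    copula difference in the likelihood above.  SjU = S_j(U_ij | Y_ij)
    and SjV = S_j(V_ij | Y_ij); a right-censored component has SjV = 0."""
    p = (copula(S1U, S2U, phi, omega) - copula(S1U, S2V, phi, omega)
         - copula(S1V, S2U, phi, omega) + copula(S1V, S2V, phi, omega))
    return math.log(p)
```

When the second component is uninformative ($S_2(U) = 1$, $S_2(V) = 0$), the four-term difference collapses to the marginal mass $S_1(U) - S_1(V)$, as the boundary conditions of a copula require.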

3. Sieve Maximum Likelihood Estimation

It is well-known that directly maximizing the likelihood function $L_n(\eta \mid O) = L_n(S_1, S_2, \varphi, \omega \mid O) = \prod_{i=1}^n L(\eta; O_i)$ can provide an estimate of $\eta$. However, this is not feasible here, as the likelihood involves the infinite-dimensional functions $\Lambda_j(t)$ and $g(\cdot)$. Therefore, following [26] and others, we suggest using the sieve method: first approximate these functions by Bernstein polynomials and then maximize the resulting likelihood function.
Specifically, define the parameter space $\Theta = A \otimes M \otimes M \otimes G$, where
$A = \{ (\beta, \varphi, \omega) \in \mathbb{R}^p \times (0, 1] \times \mathbb{R}^+ : \|\beta\| + |\varphi| + |\omega| \le K \},$
in which $\otimes$ denotes the Kronecker product and $K$ is a nonnegative constant. Additionally, $M$ denotes the set of all bounded, continuous, nondecreasing, nonnegative functions on $[c_1, d_1]$, $0 \le c_1 < d_1 < \infty$. Similarly, $G$ denotes the collection of all bounded and continuous functions on $[c_2, d_2]$, the support of $Z_{ij}$. In practice, $c_1$ and $d_1$ are generally taken to be the minimum and maximum of all observation times. In the following, define the sieve parameter space
$\Theta_n = \big\{ \eta_n = (\beta^T, \varphi, \omega, \Lambda_{n1}, \Lambda_{n2}, g_n)^T \in A \otimes M_n \otimes M_n \otimes G_n \big\}.$
In the above,
$M_n = \big\{ \Lambda_{nj}(t) = \sum_{l=0}^{m_1} \phi_{jl}^* B_l(t, m_1, c_1, d_1) : \sum_{l=0}^{m_1} |\phi_{jl}^*| \le M_n;\; 0 \le \phi_{j0}^* \le \cdots \le \phi_{jm_1}^*;\; j = 1, 2 \big\},$
and
$G_n = \big\{ g_n(z) = \sum_{k=0}^{m_2} \alpha_k B_k(z, m_2, c_2, d_2) : \sum_{k=0}^{m_2} |\alpha_k| \le G_n \big\}$
with
$B_k(x, m, c, d) = \binom{m}{k} \Big( \frac{x - c}{d - c} \Big)^k \Big( 1 - \frac{x - c}{d - c} \Big)^{m-k}, \quad k = 0, \ldots, m,$
the Bernstein basis polynomial with degrees $m_1 = O(n^{\nu_1})$ and $m_2 = O(n^{\nu_2})$ for some fixed $\nu_1, \nu_2 \in (0, 1)$, and $M_n$ and $G_n$ being some positive constants. To estimate $\eta$, define $\hat\eta_n = (\hat\beta_n, \hat\varphi_n, \hat\omega_n, \hat\Lambda_{n1}, \hat\Lambda_{n2}, \hat g_n)$ as the value of $\eta$ maximizing the log-likelihood function
$\ell_n(\eta; O) = \log L_n(\eta; O) = \sum_{i=1}^n \log L(\eta; O_i).$
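The Bernstein basis in the display above is straightforward to evaluate; a small sketch (the function name is ours):

```python
from math import comb

def bernstein_basis(x, m, c, d):
    """All m+1 Bernstein basis polynomials B_k(x, m, c, d), k = 0..m,
    for x in [c, d], as in the display above."""
    u = (x - c) / (d - c)
    return [comb(m, k) * u ** k * (1.0 - u) ** (m - k) for k in range(m + 1)]
```

Since the basis forms a partition of unity, the $m + 1$ values at any $x$ sum to one; a sieve approximant such as $\Lambda_{nj}$ or $g_n$ is then simply a coefficient-weighted sum of these values.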
Note that one of the main advantages of Bernstein polynomials is that the nonnegativity and monotonicity of $\Lambda_j(t)$ can easily be enforced through the reparameterization $\phi_{j0}^* = e^{\phi_{j0}}$, $\phi_{jl}^* = \sum_{i=0}^{l} e^{\phi_{ji}}$, $1 \le l \le m_1$ [26]. In addition, among all approximation polynomials, they have optimal shape-preserving properties [27]. They are also easy to use because they do not require interior knots. In the above, for simplicity, the same basis polynomial is used for $\Lambda_1$ and $\Lambda_2$. Moreover, note that this approach can be implemented relatively easily, as discussed below, although it may seem complicated. In practice, one needs to choose $m_j$ ($j = 1, 2$). We suggest employing the Akaike information criterion (AIC), defined as
$\mathrm{AIC} = -2\,\ell_n(\hat\eta_n) + 2\,(p + m_1 + m_2 + 2 + 2).$
In the above, the quantity in the second bracket is the number of unknown parameters in the model [28]: $p$ denotes the dimension of $\beta$; $m_1 + 1$ and $m_2 + 1$ are the numbers of Bernstein polynomial coefficients (giving $m_1 + m_2 + 2$ in total); and the last 2 counts the correlation parameters $\varphi$ and $\omega$.
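As a small concrete check of the parameter count above (the function name is ours):

```python
def aic(loglik_max, p, m1, m2):
    """AIC = -2 * l_n(eta_hat) + 2 * (p + m1 + m2 + 2 + 2): p regression
    coefficients, (m1 + 1) + (m2 + 1) Bernstein coefficients, and the
    two copula parameters phi and omega."""
    return -2.0 * loglik_max + 2.0 * (p + m1 + m2 + 2 + 2)
```

For instance, with $p = 2$ and $m_1 = m_2 = 3$ the penalty term counts $2 + 3 + 3 + 2 + 2 = 12$ parameters, and the degrees minimizing the AIC over a small grid would be retained.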
For the maximization of the log-likelihood function $\ell_n(\eta; O)$ over the sieve space $\Theta_n$, or the determination of $\hat\eta_n = (\hat\beta_n^T, \hat\varphi_n, \hat\omega_n, \hat\Lambda_{n1}, \hat\Lambda_{n2}, \hat g_n)^T$, we suggest first determining an initial estimate of $\eta$ and then applying the Newton–Raphson approach to maximize $\ell_n(\beta, \varphi, \omega, \Lambda_{n1}, \Lambda_{n2}, g_n; O)$. In the numerical studies below, the R function nlm was used for the maximization. For the determination of the initial estimates, the following procedure can be applied.
  • Step 1: obtain $(\hat\beta^{(0)}, \hat\Lambda_{n1}^{(0)}, \hat g^{(0)})$ by maximizing the log marginal likelihood function based on the observed data on the $T_{i1}$'s;
  • Step 2: obtain $\hat\Lambda_{n2}^{(0)}$ by maximizing the log marginal likelihood function based on the observed data on the $T_{i2}$'s;
  • Step 3: obtain the initial estimates $(\hat\varphi^{(0)}, \hat\omega^{(0)})$ of $\varphi$ and $\omega$ by maximizing the joint sieve log-likelihood function $\ell_n(\hat\beta^{(0)}, \varphi, \omega, \hat\Lambda_{n1}^{(0)}, \hat\Lambda_{n2}^{(0)}, \hat g_n^{(0)})$.
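The exponential-cumulative-sum reparameterization of the $\Lambda_{nj}$ coefficients mentioned earlier, which keeps them nonnegative and nondecreasing so the optimizer can work with unconstrained $\phi_{ji}$, can be sketched as follows (function names are ours):

```python
import math
from math import comb

def cumulative_hazard(t, phi_raw, c1, d1):
    """Sieve approximant Lambda_nj(t) = sum_l phi*_l B_l(t, m1, c1, d1),
    where phi*_0 = exp(phi_0) and phi*_l = sum_{i<=l} exp(phi_i), so the
    coefficients are nonnegative and nondecreasing in l."""
    m1 = len(phi_raw) - 1
    phi_star, acc = [], 0.0
    for p in phi_raw:
        acc += math.exp(p)        # unconstrained phi_i -> monotone phi*_l
        phi_star.append(acc)
    u = (t - c1) / (d1 - c1)
    return sum(ps * comb(m1, l) * u ** l * (1.0 - u) ** (m1 - l)
               for l, ps in enumerate(phi_star))
```

Because a Bernstein polynomial inherits the monotonicity of its coefficient sequence, the resulting $\Lambda_{nj}$ is automatically nondecreasing on $[c_1, d_1]$.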
Now, we establish the asymptotic properties of $\hat\eta_n$. Define $\eta_1 = (\beta_1, \varphi_1, \omega_1, \Lambda_1^{(1)}, \Lambda_2^{(1)}, g_1) \in \Theta_n$ and $\eta_2 = (\beta_2, \varphi_2, \omega_2, \Lambda_1^{(2)}, \Lambda_2^{(2)}, g_2) \in \Theta_n$, with the distance
$d(\eta_1, \eta_2) = \big\{ \|\beta_1 - \beta_2\|^2 + |\varphi_1 - \varphi_2|^2 + |\omega_1 - \omega_2|^2 + \|\Lambda_1^{(1)} - \Lambda_1^{(2)}\|_2^2 + \|\Lambda_2^{(1)} - \Lambda_2^{(2)}\|_2^2 + \|g_1 - g_2\|_2^2 \big\}^{1/2}.$
In the above, $\|\Lambda_j\|_2^2 = \int [\Lambda_j(u)^2 + \Lambda_j(v)^2]\,dF_j(u, v)$, with $F_j(u, v)$ denoting the joint distribution function of the $U_j$'s and $V_j$'s, $j = 1, 2$. Let the true value of $\eta$ be denoted by $\eta_0 = (\beta_0, \varphi_0, \omega_0, \Lambda_{10}, \Lambda_{20}, g_0)$.
Theorem 1 (Consistency).
Suppose that Conditions 1–4 given in Appendix A hold. Then, we have $d(\hat\eta_n, \eta_0) \to 0$ almost surely as $n \to \infty$.
Theorem 2 (Convergence rate).
Suppose that Conditions 1–5 given in Appendix A hold. Then,
$d(\hat\eta_n, \eta_0) = O_p\big( n^{-\min\{ q\nu_1/2,\, (1-\nu_1)/2,\, r\nu_2/2,\, (1-\nu_2)/2 \}} \big),$
where $\nu_1 \in (0, 1)$ and $\nu_2 \in (0, 1)$ are such that $m_1 = o(n^{\nu_1})$ and $m_2 = o(n^{\nu_2})$, and $q$ and $r$ are defined in Condition 4 of Appendix A.
Theorem 3 (Asymptotic normality).
Suppose that Conditions 1–5 given in Appendix A hold. Then, we have
$n^{1/2}(\hat b_n - b_0) \to_d N\big( 0,\, I^{-1}(b_0) \big),$
where $\hat b_n = (\hat\beta_n^T, \hat\varphi_n, \hat\omega_n)^T$, $b_0 = (\beta_0^T, \varphi_0, \omega_0)^T$,
$I(b_0) = P\big\{ \dot\ell_b(\eta_0) - \dot\ell_{\Lambda_1}(\eta_0)[h_{\Lambda_1}^*] - \dot\ell_{\Lambda_2}(\eta_0)[h_{\Lambda_2}^*] - \dot\ell_g(\eta_0)[h_g^*] \big\}^{\otimes 2},$
$w^{\otimes 2} = w w^T$ for an arbitrary vector $w$, and $\dot\ell_b(\eta_0)$, $\dot\ell_{\Lambda_1}(\eta_0)[h_{\Lambda_1}]$, $\dot\ell_{\Lambda_2}(\eta_0)[h_{\Lambda_2}]$, $\dot\ell_g(\eta_0)[h_g]$ are the score statistics defined in Appendix A.
We sketch the proofs of the theorems above in Appendix A. Note that based on Theorem 2, one can obtain the optimal rate of convergence $\min\big( n^{q/(2(1+q))}, n^{r/(2(1+r))} \big)$ with $\nu_1 = 1/(1+q)$ and $\nu_2 = 1/(1+r)$. Specifically, it becomes $n^{1/3}$ with $r = 2$ or $q = 2$ and improves as $q$ and $r$ increase. Theorem 3 demonstrates that the suggested estimator of the regression parameter is nonetheless asymptotically normal and efficient even though the overall convergence rate is slower than $n^{1/2}$. Clearly, to make inference about the suggested estimators, we need to estimate the variance or covariance matrix of $\hat\beta$, $\hat\varphi$ and $\hat\omega$. However, it can be seen from Appendix A that obtaining consistent estimators in explicit form would be difficult. As a result, we propose using the simple nonparametric bootstrap approach [29], which is well-known to offer a direct and simple tool for estimating covariances when no explicit formula is available. It appears to work well, according to the numerical analysis below.

4. A Simulation Study

In this section, we present some simulation studies conducted to evaluate the finite-sample performance of the sieve maximum likelihood estimation approach suggested in the preceding sections. Three scenarios for the covariates were considered. In the first, the single covariates $X_i$ were generated from the Bernoulli distribution with success probability 0.5 and the $Z_i$'s from the uniform distribution over $(0, 1)$. In scenario two, we considered the same $Z_i$'s as above but generated a two-dimensional covariate vector $(X_1, X_2)^T$ with $X_1 \sim \mathrm{Bernoulli}(0.5)$ and $X_2 \sim U(0, 1)$. For scenario three, we first generated the covariates $X_i$ and $Z_i$ as in scenario one and then replaced $X_i$ by $X_i \exp(t)$; that is, we had time-dependent covariates.
To generate the true bivariate failure times $(T_{i1}, T_{i2})$ under model (1), one first generates $u_{i2}$ and $w_i$ independently from the uniform distribution over $(0, 1)$ and then solves for $u_{i1}$ from the equation $w_i = h(u_{i1}, u_{i2}) = \partial C_\xi(u_{i1}, u_{i2}) / \partial u_{i2}$ for a given copula function $C_\xi$. The two dependent survival times $t_{i1}$ and $t_{i2}$ are then obtained as $t_{i1} = S_1^{-1}(u_{i1})$ and $t_{i2} = S_2^{-1}(u_{i2})$, respectively, with $h_j(t \mid X, Z) = 2 + \beta X + \sin(\frac{1}{2}\pi Z)$ or $h_j(t \mid X, Z) = 2 + \beta X + Z^2$, $j = 1, 2$. To generate the censoring intervals or the observed data, we assumed that each subject was assessed at discrete time points, with the gap between two contiguous observation times following the standard exponential distribution. Then, for every subject, $U_{ij}$ was taken as the last assessment time before $T_{ij}$, and $V_{ij}$ as the first assessment time after $T_{ij}$. The length of the study was set to yield about a 20% right-censoring rate. For the Bernstein polynomial approximation, following [13], we set $m_1 = m_2 = 3$ and $\phi_{1k} = \phi_{2k}$, and took Kendall's $\tau$ equal to 0.3 for weak dependency and 0.6 for strong dependency. The results given below are based on 1000 replications and 100 bootstrap samples for the variance estimation.
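The conditional-distribution step just described can be sketched in code. For the special case $\varphi = 1$ (the Clayton copula, as noted in Section 2), the inversion of $w = \partial C / \partial u_2$ has a closed form; the exponential margins below are a stand-in for the model's marginal survival functions, and all names are ours:

```python
import math
import random

def clayton_conditional_inverse(w, u2, theta):
    """Solve w = dC/du2(u1, u2) for u1 under the Clayton copula
    C(u1, u2) = (u1**(-theta) + u2**(-theta) - 1)**(-1/theta)."""
    return ((w ** (-theta / (1.0 + theta)) - 1.0)
            * u2 ** (-theta) + 1.0) ** (-1.0 / theta)

def sample_failure_pair(theta, rate1, rate2, rng):
    """Draw dependent (t1, t2) via t_j = S_j^{-1}(u_j), here with
    illustrative exponential margins S_j(t) = exp(-rate_j * t)."""
    u2, w = rng.random(), rng.random()
    u1 = clayton_conditional_inverse(w, u2, theta)
    return -math.log(u1) / rate1, -math.log(u2) / rate2
```

Plugging the returned $u_1$ back into $\partial C / \partial u_2$ recovers $w$, which confirms the inversion.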
Table 1 and Table 2 present the results for $n = 20, 50, 100, 300, 400$ given by the proposed estimation procedure for the regression parameter $\beta$ and the dependence parameter $\tau$, with the true $\beta_0 = 0$ or 0.5 and $g(z) = \sin(\frac{1}{2}\pi z)$ or $g(z) = z^2$, respectively. The tables report the bias of the estimates (Bias), computed as the difference between the mean of the estimates and the true value; the sample standard error (SSE) of the estimates over the 1000 replications; the mean of the estimated standard errors (ESE), where for each replication the standard error was estimated from 100 bootstrap samples and then averaged over the 1000 replications; and the 95% empirical coverage probabilities (CP). The tables show that the proposed estimator appears unbiased and that the variance estimation procedure performs well. Moreover, all empirical coverage probabilities were close to the nominal level of 95% as the sample size increased, indicating that the normal approximation to the distribution of the proposed estimator is reasonable. As expected, the results generally improved with increasing sample size.
The estimation results obtained under scenario two for the covariates are given in Table 3; the sample size was $n = 200$ or 400 with the true $(\beta_1, \beta_2)^T = (0, 0)^T$ or $(0.1, 0.1)^T$ and $g(z) = \sin(\frac{1}{2}\pi z)$. Table 4 contains the estimation results obtained with the time-dependent covariates based on $\beta_0 = 0$, 1 or 0.5 and $n = 200$ or 400, with the other settings the same as in Table 1. They again indicate that the proposed method performs well for the estimation of the regression and association parameters. Furthermore, the results generally became better as the sample size increased.
To assess the performance of the proposed method for the estimation of the nonlinear function $g$, we repeated the study of Table 1 with four different $g$ functions, $g_1(z) = \sin(2\pi z)$, $g_2(z) = \cos(2\pi z)$, $g_3(z) = \frac{5}{3} + 5z^2$, and $g_4(z) = z + 5z^3$, and $n = 200$. Figure 1 and Figure 2 show the average of the estimated $g$ for each of the four cases with $\beta = 0$ and $\tau = 0.3$ or $\tau = 0.6$, respectively. The solid red lines represent the true functions, while the dashed blue lines represent the estimates. They show that the proposed method based on Bernstein polynomials performs reasonably well for the different values of Kendall's $\tau$ considered, covering both weak and strong dependency. We also considered various other configurations and obtained comparable results.

5. An Illustration

In this section, we illustrate the proposed procedure using the data from the Age-Related Eye Disease Study (AREDS) [30], a clinical trial tracking the development of a bilateral eye disease, age-related macular degeneration (AMD), provided in the CopulaCenR package in version 4.2.0 of the R software [31]. Each participant was monitored every six months (during the first six years) or once a year (after year 6) for about 12 years. At each appointment, each participant's eyes were given a severity score on a scale of 1 to 12 (a higher number signifying a more serious condition). Moreover, the time to late AMD, defined as the interval between the baseline visit and the first appointment at which the severity score reached nine or above, was computed for each eye of these subjects. The observed times for both eyes were subject to either interval censoring or right censoring.
The data set consisted of $n = 629$ subjects, and in the analysis below we focused on the effects of two covariates: SevScaleBL, the baseline AMD severity score (a value between one and eight, with a higher value indicating more severe AMD), and rs2284665, a genetic variant (zero, one and two for GG, GT and TT) related to AMD progression. In order to apply the proposed method, we set rs2284665 to be $X$ and SevScaleBL to be $Z$, since the latter can be viewed as continuous. For the identifiability of the model, both $X$ and $Z$ were standardized and thus had support $[0, 1]$. Moreover, various degrees of the Bernstein polynomials were considered, such as $m_1 = m_2 = 3, 4, 5, 6$.
Table 5 gives the analysis results obtained by the suggested estimation approach, including the estimated effect of the covariate rs2284665 and Kendall's $\tau$, along with the estimated standard errors and the p-value for testing the effect to be zero. One can see that the results were consistent across the degrees of the Bernstein polynomials and suggested that the minor allele (TT) had a significantly "harmful" effect on AMD progression. Moreover, the estimated Kendall's $\tau$ was 0.389, suggesting a moderate dependence of the AMD progression between the two eyes. Figure 3 gives the estimated effect of SevScaleBL, which indeed appeared to be nonlinear; more specifically, a higher baseline severity score was associated with an increased risk of AMD progression. The findings here are in line with those of other researchers who examined this subject [15]. It is worth pointing out that the conclusion of [15] was obtained under the proportional odds model, and they could not visualize nonlinear covariate effects as in Figure 3.

6. Concluding Remarks

In the preceding sections, the regression analysis of bivariate interval-censored survival data was considered under a family of copula-based, semiparametric, partially linear additive hazards models. As discussed above, one significant advantage of these models is that they connect the two marginal distributions via a copula function, with the copula parameter determining the dependence. For inference, a sieve maximum likelihood estimation procedure using Bernstein polynomials was provided, and it was shown that the resulting estimators of the regression parameters are consistent and asymptotically efficient. Furthermore, the simulation studies suggested that the recommended approach works effectively in practical situations.
It is worth emphasizing that the main reason for using Bernstein polynomials to approximate the infinite-dimensional cumulative hazard functions $\Lambda_j(t)$ and the nonlinear covariate effect $g(\cdot)$ is simplicity. Other smoothing functions, such as spline functions [20], could also be used, and estimation procedures similar to the one described above could be developed. To implement the approach, one needs to choose the degrees $m_1$ and $m_2$. As discussed above, a common method is to consider different values and compare the resulting estimators. Certainly, developing a data-driven approach for their selection would also be helpful.
In this paper, we focused on the additive hazards model. Other models, such as the proportional hazards model, are sometimes more popular. It is well-known that the latter is more suitable when one is interested in the hazard ratio, whereas the former fits better when the excess risk or the risk difference is of primary interest. However, there seems to be little literature on how to choose between the two models or on developing model-checking methods, which is an attractive direction for future research.

Author Contributions

Conceptualization, T.H. and S.Z.; methodology, X.Z. and T.H.; software, X.Z.; validation, X.Z. and T.H.; resources, T.H. and S.Z.; writing—original draft preparation, X.Z.; writing—review and editing, T.H. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Beijing Natural Science Foundation (Z210003), the Technology Developing Plan of Jilin Province (No. 20200201258JC) and the National Natural Science Foundation of China (grant nos. 12171328, 12071176 and 11971064).

Data Availability Statement

Research data are available in the R package CopulaCenR.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AIC     Akaike information criterion
AREDS   Age-Related Eye Disease Study
AMD     Age-related macular degeneration
a.s.    almost surely

Appendix A. Proof of Asymptotic Properties

This appendix first describes the necessary regularity conditions, which are similar to those generally used in the interval-censored data literature [32,33], and then sketches the proofs of the asymptotic results given in Theorems 1–3.
Condition 1. There exists a positive number $\delta_j$ such that $P(V_j - U_j \ge \delta_j) = 1$, $j = 1, 2$.
Condition 2. (i) There exist $0 < c_1 < d_1 < \infty$ such that $P(c_1 \le U_{ij} < V_{ij} \le d_1) = 1$, $j = 1, 2$. (ii) The covariates $X_j$ are bounded; in other words, there exists $x_0 > 0$ such that $P(\|X_j\| \le x_0) = 1$, $j = 1, 2$. The distribution of the $X_j$'s is not concentrated on any proper affine subspace of $\mathbb{R}^p$. (iii) For some positive constant $K$, given the set of all observation times $V$, the conditional density of $Z_j$ is twice continuously differentiable and bounded within $[K^{-1}, K]$ on $[c_2, d_2]$ a.s.
Condition 3. After substituting $h_j$ for $\Lambda_j$, the likelihood function can be rewritten as $L(\beta, \varphi, \omega, h_1, h_2, g)$. Define
$\mu^T \dot L(\beta, \varphi, \omega, h_1, h_2, g) = \mu_1^T \frac{\partial L}{\partial \beta} + \mu_2 \frac{\partial L}{\partial \varphi} + \mu_3 \frac{\partial L}{\partial \omega} + \mu_4 \frac{\partial L}{\partial h_1} + \mu_5 \frac{\partial L}{\partial h_2} + \mu_6 \frac{\partial L}{\partial g}$
with $\mu = (\mu_1^T, \mu_2, \mu_3, \mu_4, \mu_5, \mu_6)^T$. There exist $u_j^*, v_j^* \in [c_1, d_1]$ and $p + 5$ distinct sets of $(X_1, X_2)$ such that if $\mu^T \dot L(\beta_0, \varphi_0, \omega_0, \Lambda_{10}, \Lambda_{20}, g_0; O^*) = 0$, in which $O^* = \{(u_j^*, v_j^*, X_j, Z_j)\}$, for every one of the $p + 5$ sets of values, then $\mu = 0_{(p+5) \times 1}$.
Condition 4. There exist 0 < m_1 < m_2 < ∞ such that m_1 < Λ_{j0}(c_1) < Λ_{j0}(d_1) < m_2, and Λ_{j0} is strictly increasing and continuously differentiable up to order q on [c_1, d_1], j = 1, 2. In addition, g_0 is continuously differentiable up to order r on [c_2, d_2]. Additionally, (β_0^T, φ_0, ω_0)^T is an interior point of A ⊂ R^p × (0, 1] × R^+.
Condition 5. There exists ε > 0 such that, for every η with |η − η_0| < ε, P{ℓ(η; O) − ℓ(η_0; O)} ⪯ −d²(η, η_0), where ℓ(η; O) is the log-likelihood function defined earlier and ⪯ means that the left-hand side is smaller than the right-hand side up to a constant factor.
It is worth noting that Conditions 1–5, except Condition 3, are commonly employed in the interval-censored failure time literature [26,34,35]. Condition 3 ensures the identifiability of the parameters and the positive definiteness of the efficient Fisher information matrix [26,35].
Proof of Theorem 1.
Define ℓ(η; O_i) as the log-likelihood for a single observation O_i, and let Θ_n = A × M_n × M_n × H_n. Let L = {ℓ(η; O) : η ∈ Θ_n} denote the corresponding class of functions, and let P_n and P denote the empirical and true probability measures, respectively. Then, following a calculation similar to Lemma 1 in [19], the bracketing number of L is bounded (up to a constant) by (1/ε)^{2m_1+m_2+p+2}. Therefore, by Theorem 2.4.1 in [36], L is a Glivenko–Cantelli class. Thus,

sup_{η ∈ Θ_n} |(P_n − P) ℓ(η; O)| → 0 a.s.
Let H(η; O) = −ℓ(η; O). For any ε > 0, define K_ε = {η : d(η, η_0) ≥ ε, η ∈ Θ_n}, κ_{1n} = sup_{η ∈ Θ_n} |(P_n − P) H(η; O)|, and κ_{2n} = P_n H(η_0; O) − P H(η_0; O). Hence, we can conclude that

inf_{K_ε} P H(η; O) = inf_{K_ε} {P H(η; O) − P_n H(η; O) + P_n H(η; O)} ≤ κ_{1n} + inf_{K_ε} P_n H(η; O).    (A1)

If η̂_n ∈ K_ε, then, since η̂_n minimizes P_n H over Θ_n,

inf_{K_ε} P_n H(η; O) = P_n H(η̂_n; O) ≤ P_n H(η_0; O) = κ_{2n} + P H(η_0; O).    (A2)

Moreover, under Condition 3 one can show by contradiction that δ_ε := inf_{K_ε} P H(η; O) − P H(η_0; O) > 0. Combining (A1) and (A2), we have

inf_{K_ε} P H(η; O) ≤ κ_{1n} + κ_{2n} + P H(η_0; O) = κ_n + P H(η_0; O),

where κ_n = κ_{1n} + κ_{2n}. This gives {η̂_n ∈ K_ε} ⊆ {κ_n ≥ δ_ε}. Based on Condition 1 and the strong law of large numbers (together with the Glivenko–Cantelli property above), κ_{1n} + κ_{2n} → 0 a.s. Hence, ∩_{k=1}^∞ ∪_{n=k}^∞ {η̂_n ∈ K_ε} ⊆ ∩_{k=1}^∞ ∪_{n=k}^∞ {κ_n ≥ δ_ε}, and the latter event has probability zero, which proves that d(η̂_n, η_0) → 0 a.s. The proof of Theorem 1 is complete.     □
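The uniform convergence used above is an empirical-process statement. The following toy sketch (a hypothetical function class, not the model of this paper) illustrates how the supremum deviation sup_f |P_n f − P f| over a small Glivenko–Cantelli class shrinks as n grows:

```python
import numpy as np

# Toy illustration (hypothetical function class, not the model of this
# paper): for F = {x -> x^k : k = 0,...,5} with X ~ Uniform(0,1), the
# supremum deviation sup_f |P_n f - P f| shrinks as n grows, mirroring
# the Glivenko-Cantelli step used in the consistency proof.
rng = np.random.default_rng(0)

def sup_deviation(n):
    x = rng.uniform(0.0, 1.0, size=n)
    # For Uniform(0,1), P x^k = 1/(k+1).
    return max(abs(np.mean(x**k) - 1.0 / (k + 1)) for k in range(6))

small_n, large_n = sup_deviation(200), sup_deviation(200_000)
print(small_n, large_n)
```

With a richer class (such as the sieve of Bernstein polynomials used here) the same phenomenon holds as long as the bracketing number grows slowly enough, which is exactly what the bound in the proof controls.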
Proof of Theorem 2.
We verify Conditions C1–C3 in [37] to derive the rate of convergence. Let |u| denote the Euclidean norm of a vector u, ‖h‖_∞ = sup_x |h(x)| the supremum norm of a function h, and ‖h‖_{L_2(P)} = (∫ |h|² dP)^{1/2}, where P denotes the true probability measure. For convenience in the proof below, define Θ_{qr} = A × M_q × M_q × G_r, where M_q is the set of Λ_j's, G_r is the set of g's, and q and r are as defined in Condition 4. It is noteworthy that Θ_{qr} coincides with Θ except for the notation. Similarly, Θ_{nqr} denotes the corresponding sieve space built from M_{nq} and G_{nr}. First, as a consequence of Condition 5, Condition C1 holds directly; that is, P{ℓ(η_0; O) − ℓ(η; O)} ≳ d²(η, η_0) for any η ∈ Θ_{nqr}. Second, we verify Condition C2 in [37]. Based on Conditions 1–4, one can easily see that for every η ∈ Θ_{nqr},
P{ℓ(η; O) − ℓ(η_0; O)}² ≲ |b − b_0|² + P[{Λ_1(l_1) − Λ_{10}(l_1)}² + {Λ_1(r_1) − Λ_{10}(r_1)}²] + P[{Λ_2(l_2) − Λ_{20}(l_2)}² + {Λ_2(r_2) − Λ_{20}(r_2)}²] + P{g(Z_1) − g_0(Z_1)}² + P{g(Z_2) − g_0(Z_2)}² = d²(η, η_0),
in which b = (β^T, φ, ω)^T. This implies that for any η ∈ Θ_{nqr}, sup_{d(η,η_0)≤ε} var{ℓ(η_0; O) − ℓ(η; O)} ≤ sup_{d(η,η_0)≤ε} P{ℓ(η_0; O) − ℓ(η; O)}² ≲ ε². Thus, Condition C2 of [37] holds with the β in their paper equal to one. Finally, one needs to verify Condition C3 of [37]. Define the class of functions N_n = {ℓ(η; O) − ℓ(η_{n0}; O) : η ∈ Θ_{nqr}} and let N_{[]}(ε, N_n, ‖·‖_∞) denote the ε-bracketing number of N_n with respect to the ‖·‖_∞ norm. Then, we have N_{[]}(ε, N_n, ‖·‖_∞) ≲ (1/ε)^{k_1 m_1 + k_2 m_2 + p + 2} by arguments similar to Lemma A3 of [13], where k_1, k_2 > 0 and p + 2 is the dimension of b. Since the covering number is always smaller than the bracketing number, we have log N_{[]}(ε, N_n, ‖·‖_∞) ≲ (k_1 m_1 + k_2 m_2 + p + 2) log(1/ε) ≲ (n^{ν_1} + n^{ν_2}) log(1/ε). Therefore, Condition C3 in [37] is satisfied with 2r_0 = ν and r = 0+ in their notation. Hence, the τ of Theorem 1 in [37] (page 584) may be taken as (1 − ν_1)/2 − {log(log n)}/(2 log n). Because the subtracted term tends to zero as n → ∞, one can choose a ν̃_1 slightly larger than ν_1 so that (1 − ν̃_1)/2 ≤ τ for all large n. Replacing ν_1 by ν̃_1 but keeping the same notation, the new constant is τ = (1 − ν_1)/2.
Notice that by Theorem 1.6.2 in [38], there are Bernstein polynomials Λ_{jn0} ∈ M_{nq} such that ‖Λ_{jn0} − Λ_{j0}‖_∞ = O(m_1^{−q/2}) = O(n^{−qν_1/2}), j = 1, 2. Similarly, there exists a function g_{n0} ∈ G_{nr} satisfying ‖g_{n0} − g_0‖_∞ = O(m_2^{−r/2}) = O(n^{−rν_2/2}). Then, the sieve approximation error ρ(π_n η_0, η_0) in [37] is O(n^{−qν_1/2}) + O(n^{−rν_2/2}). Therefore, applying a Taylor expansion to P{ℓ(η_0; O) − ℓ(η; O)} around η_0 and plugging in η_{n0} = (β_0^T, φ_0, ω_0, Λ_{1n0}, Λ_{2n0}, g_{n0})^T, the Kullback–Leibler pseudo-distance between η_{n0} and η_0 satisfies
K(η_{n0}, η_0) = P{ℓ(η_0; O) − ℓ(η_{n0}; O)}
= −(1/2) P{ ℓ̈_{Λ_1Λ_1}(η_0; O)[Λ_{1n0} − Λ_{10}, Λ_{1n0} − Λ_{10}] + ℓ̈_{Λ_2Λ_2}(η_0; O)[Λ_{2n0} − Λ_{20}, Λ_{2n0} − Λ_{20}] + 2 ℓ̈_{Λ_1Λ_2}(η_0; O)[Λ_{1n0} − Λ_{10}, Λ_{2n0} − Λ_{20}] } − (1/2) P ℓ̈_{gg}(η_0; O)[g_{n0} − g_0, g_{n0} − g_0] + o(d²(η_{n0}, η_0))
≲ ‖Λ_{1n0} − Λ_{10}‖_2² + ‖Λ_{2n0} − Λ_{20}‖_2² + ‖g_{n0} − g_0‖_2² + o(‖Λ_{1n0} − Λ_{10}‖_2² + ‖Λ_{2n0} − Λ_{20}‖_2² + ‖g_{n0} − g_0‖_2²)
≲ O(n^{−qν_1}) + O(n^{−rν_2}) = O(n^{−min(qν_1, rν_2)}).
The first equality holds because the first derivative of P ℓ(η; O) at η_0 equals zero. The penultimate inequality holds because all first- and second-order derivatives of the log-likelihood are bounded. Furthermore, since ‖Λ_{jn0} − Λ_{j0}‖_2 ≤ ‖Λ_{jn0} − Λ_{j0}‖_∞ = O(n^{−qν_1/2}) and ‖g_{n0} − g_0‖_2 ≤ ‖g_{n0} − g_0‖_∞ = O(n^{−rν_2/2}), we get the last inequality, so that K^{1/2}(η_{n0}, η_0) = O(n^{−qν_1/2}) + O(n^{−rν_2/2}) = O(n^{−min(qν_1/2, rν_2/2)}). Hence, by Theorem 1 in [37], the convergence rate of η̂_n is
d(η̂_n, η_0) = O_p( max{ n^{−(1−ν_1)/2}, n^{−qν_1/2}, n^{−(1−ν_2)/2}, n^{−rν_2/2} } ) = O_p( n^{−min{qν_1/2, (1−ν_1)/2, rν_2/2, (1−ν_2)/2}} ).
The proof of Theorem 2 is complete.       □
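The Bernstein-polynomial approximation rate invoked above (Theorem 1.6.2 of [38]) can be checked numerically. The sketch below uses an assumed example, with f(t) = log(1 + t) standing in for a smooth increasing baseline cumulative hazard; it is not taken from the paper:

```python
import math

# Sketch (assumed example, not from the paper): the degree-m Bernstein
# polynomial B_m f(t) = sum_k f(k/m) C(m,k) t^k (1-t)^(m-k) on [0,1],
# applied to f(t) = log(1 + t), a smooth increasing stand-in for a
# baseline cumulative hazard. For f in C^2 the sup-norm error is O(1/m).
def bernstein(f, m, t):
    return sum(f(k / m) * math.comb(m, k) * t**k * (1.0 - t)**(m - k)
               for k in range(m + 1))

def sup_error(f, m, grid=200):
    return max(abs(bernstein(f, m, i / grid) - f(i / grid))
               for i in range(grid + 1))

f = lambda t: math.log1p(t)
err5, err50 = sup_error(f, 5), sup_error(f, 50)
print(err5, err50)  # the error shrinks as the degree grows
```

In the proof, the degree grows like n^{ν_1}, which converts this O(1/m)-type approximation error into the O(n^{−qν_1/2}) sieve error rate.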
Proof of Theorem 3.
Let us sketch the proof of Theorem 3 in five steps as follows.
Step 1. We first calculate the derivatives of ℓ(η; O) with respect to η = (β^T, φ, ω, Λ_1, Λ_2, g)^T, such as ℓ̇_{Λ_j}(η; O)[y_{Λ_j}] and ℓ̇_g(η; O)[y_g]; for convenience, we omit the argument (η; O) in the formulas below in Step 1.
To obtain the score functions for Λ_j, j = 1, 2, let y_{Λ_j}(t_j) ∈ H_{Λ_j} = {y_{Λ_j} : y_{Λ_j} = ∂Λ_{jξ}/∂ξ |_{ξ=0}, Λ_{jξ} ∈ M_q} denote the direction of an arbitrary parametric submodel {Λ_{jξ}} of Λ_j, where y_{Λ_j} satisfies lim_{ξ→0} {Λ_{jξ}(t_j) − Λ_j(t_j) − ξ y_{Λ_j}(t_j)}/ξ = 0. Similarly, we can define a submodel of g with direction y_g ∈ H_g. Moreover, write Y = (X, Z) and h(t; Y) = β^T W(t) + g(Z) t, and set

B_j = Λ_j(t_j) + h(t_j; Y),  A_j = exp(B_j/ω) − 1,  S = A_1^{1/φ} + A_2^{1/φ},  J = 1 + S^φ,

so that the log-survival function is

ℓ(η; t_1, t_2, Y) = log S(t_1, t_2 | Y) = −ω log J.

Writing Q_j = S^{φ−1} A_j^{1/φ−1} (A_j + 1), direct differentiation gives

∂ℓ/∂B_j = −Q_j/J,
∂²ℓ/∂B_1² = (1/ω) [ Q_1²/J² − (1/J) { (1 − 1/φ) S^{φ−2} A_1^{2/φ−2} (A_1 + 1)² − (1 − 1/φ) S^{φ−1} A_1^{1/φ−2} (A_1 + 1)² + Q_1 } ],
∂²ℓ/∂B_1∂B_2 = (1/ω) [ Q_1 Q_2/J² − (1 − 1/φ)(1/J) S^{φ−2} A_1^{1/φ−1} A_2^{1/φ−1} (A_1 + 1)(A_2 + 1) ],

with ∂²ℓ/∂B_2² obtained from ∂²ℓ/∂B_1² by exchanging the indices 1 and 2. Since a perturbation of Λ_j shifts B_j by y_{Λ_j}(t_j), and a perturbation of g shifts B_j by t_j y_g(Z), the score functions along y_{Λ_j} and y_g are

ℓ̇_{Λ_j}(η; t_1, t_2, Y)[y_{Λ_j}] = (∂ℓ/∂B_j) y_{Λ_j}(t_j),  j = 1, 2,
ℓ̇_g(η; t_1, t_2, Y)[y_g] = {(∂ℓ/∂B_1) t_1 + (∂ℓ/∂B_2) t_2} y_g(Z),

with y_{Λ_j} ∈ M_{q−1}, Λ_j ∈ M_q, g ∈ G_r and y_g ∈ G_{r−1}. The second-order derivatives of ℓ(η; t_1, t_2, Y) then take the form

ℓ̈_{Λ_jΛ_k}(η; t_1, t_2, Y)[y_{Λ_j}, y_{Λ_k}] = (∂²ℓ/∂B_j∂B_k) y_{Λ_j}(t_j) y_{Λ_k}(t_k),  j, k ∈ {1, 2},
ℓ̈_{Λ_j g}(η; t_1, t_2, Y)[y_{Λ_j}, y_g] = {(∂²ℓ/∂B_j∂B_1) t_1 + (∂²ℓ/∂B_j∂B_2) t_2} y_{Λ_j}(t_j) y_g(Z),
ℓ̈_{gg}(η; t_1, t_2, Y)[y_g, ỹ_g] = { Σ_{j,k ∈ {1,2}} (∂²ℓ/∂B_j∂B_k) t_j t_k } y_g(Z) ỹ_g(Z).
Similarly, ℓ̈_{bb}, ℓ̈_{Λ_j b}[y_{Λ_j}] and ℓ̈_{gb}[y_g] are, respectively, the derivatives of ℓ̇_b, ℓ̇_{Λ_j}[y_{Λ_j}] and ℓ̇_g[y_g] with respect to b; ℓ̈_{bΛ_j}[y_{Λ_j}], ℓ̈_{Λ_jΛ_j}[y_{Λ_j}, ỹ_{Λ_j}], ℓ̈_{Λ_{j′}Λ_j}[y_{Λ_{j′}}, y_{Λ_j}] and ℓ̈_{gΛ_j}[y_g, y_{Λ_j}] are, respectively, the derivatives of ℓ̇_b, ℓ̇_{Λ_j}, ℓ̇_{Λ_{j′}} and ℓ̇_g[y_g] with respect to Λ_j, j, j′ ∈ {1, 2}; and ℓ̈_{bg}[y_g], ℓ̈_{Λ_j g}[y_{Λ_j}, y_g] and ℓ̈_{gg}[y_g, ỹ_g] are, respectively, the derivatives of ℓ̇_b, ℓ̇_{Λ_j}[y_{Λ_j}] and ℓ̇_g[y_g] with respect to g.
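A quick way to guard against algebra slips in derivative calculations of this kind is to compare the analytic score with a finite difference. The sketch below uses the notation assumed in this appendix, writing A_j = exp(B_j/ω) − 1, S = A_1^{1/φ} + A_2^{1/φ} and J = 1 + S^φ for the log-survival function ℓ = −ω log J; the parameter values are arbitrary:

```python
import math

# Numerical sanity check of the score formula (a sketch under the
# notation assumed here, not verbatim from the paper): with
#   A_j = exp(B_j/w) - 1,  S = A_1**(1/phi) + A_2**(1/phi),  J = 1 + S**phi,
# the log-survival is l = -w*log(J), and the derivative with respect to
# B_1 should equal -(1/J) * S**(phi-1) * A_1**(1/phi-1) * (A_1 + 1).
def loglik(b1, b2, phi, w):
    a1, a2 = math.exp(b1 / w) - 1.0, math.exp(b2 / w) - 1.0
    s = a1**(1.0 / phi) + a2**(1.0 / phi)
    return -w * math.log(1.0 + s**phi)

def score_b1(b1, b2, phi, w):
    a1, a2 = math.exp(b1 / w) - 1.0, math.exp(b2 / w) - 1.0
    s = a1**(1.0 / phi) + a2**(1.0 / phi)
    j = 1.0 + s**phi
    return -(1.0 / j) * s**(phi - 1.0) * a1**(1.0 / phi - 1.0) * (a1 + 1.0)

b1, b2, phi, w, h = 0.7, 1.2, 0.4, 2.0, 1e-6
fd = (loglik(b1 + h, b2, phi, w) - loglik(b1 - h, b2, phi, w)) / (2.0 * h)
print(score_b1(b1, b2, phi, w), fd)  # the two values agree
```

The same central-difference check can be applied to each second-order derivative by differencing the analytic scores.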
Step 2. Consider the classes of functions D_1 = {ℓ̇_b(η) : d(η, η_0) ≤ δ}, D_2 = {ℓ̇_{Λ_j}(η)[y_{Λ_j}] : d(η, η_0) ≤ δ} and D_3 = {ℓ̇_g(η)[y_g] : d(η, η_0) ≤ δ}. We need to show that these three function classes are Donsker for any δ > 0. We bound the bracketing number of D_1 in order to demonstrate that it is Donsker. In accordance with [37], we have log N_{[]}(ε, D_1, ‖·‖_{L_2(P)}) ≲ max(m_1, m_2) log(δ/ε) for 0 < ε < δ. This results in a finite bracketing integral, so by Theorem 2.8.4 of [36] the class D_1 is Donsker. Similar justifications show that D_2 and D_3 are also Donsker.
Step 3. Following arguments similar to Lemma 2 of [19] and the properties of the score statistic, there exist y*_{Λ_j} ∈ M_{q−1} and y*_g ∈ G_{r−1} satisfying

P ℓ̇_b(η_0) = 0,  P ℓ̇_{Λ_j}(η_0)[y*_{Λ_j}] = 0,  P ℓ̇_g(η_0)[y*_g] = 0.    (A3)
Let η̂_n denote the maximizer of the sieve log-likelihood, and let y*_{Λ_j,n} be the projection of y*_{Λ_j} onto M_n, j = 1, 2. We get

P_n ℓ̇_{Λ_j}(η̂_n)[y*_{Λ_j}] = P_n ℓ̇_{Λ_j}(η̂_n)[y*_{Λ_j,n}] + P ℓ̇_{Λ_j}(η_0)[y*_{Λ_j} − y*_{Λ_j,n}] + (P_n − P) ℓ̇_{Λ_j}(η̂_n)[y*_{Λ_j} − y*_{Λ_j,n}] + P{ ℓ̇_{Λ_j}(η̂_n)[y*_{Λ_j} − y*_{Λ_j,n}] − ℓ̇_{Λ_j}(η_0)[y*_{Λ_j} − y*_{Λ_j,n}] } = (I) + (II) + (III) + (IV).    (A4)
Following the discussion in the proof of Theorem 5.3 of [34], part (I) equals o_p(n^{−1/2}). In addition, (II) is o_p(n^{−1/2}) based on (A3). Part (III) is o_p(n^{−1/2}) because D_2 is Donsker. As for (IV), by Theorem 2 and a first-order linear expansion of ℓ̇_{Λ_j}(η̂_n) around η_0, (IV) is o_p(n^{−1/2}) as well. Summing the four terms, we have P_n ℓ̇_{Λ_j}(η̂_n)[y*_{Λ_j}] = o_p(n^{−1/2}). Likewise, the same property holds for P_n ℓ̇_g(η̂_n)[y*_g]. Hence, we have
P_n ℓ̇_b(η̂_n) = o_p(n^{−1/2}),  P_n ℓ̇_{Λ_j}(η̂_n)[y*_{Λ_j}] = o_p(n^{−1/2}),  P_n ℓ̇_g(η̂_n)[y*_g] = o_p(n^{−1/2}).    (A5)
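Step 3 hinges on projecting the least favorable directions onto the sieve. The following toy least-squares sketch (a polynomial sieve and a hypothetical target function, not the paper's M_n) shows the projection error vanishing as the sieve dimension grows, which is the mechanism that makes term (II) negligible:

```python
import numpy as np

# Toy version of the projection step (polynomial sieve and hypothetical
# target y*(t) = sin(2*pi*t), not the paper's M_n): the least-squares
# projection onto polynomials of degree m; the approximation error
# vanishes as the sieve dimension grows.
t = np.linspace(0.0, 1.0, 401)
y = np.sin(2.0 * np.pi * t)

def proj_error(m):
    # Polynomial.fit rescales the domain internally, keeping the
    # least-squares problem well conditioned.
    p = np.polynomial.Polynomial.fit(t, y, deg=m)
    return float(np.sqrt(np.mean((p(t) - y) ** 2)))

e_low, e_high = proj_error(2), proj_error(8)
print(e_low, e_high)  # the error drops sharply with the degree
```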
Step 4. Combining (A3) and (A5), we can easily show that
P_n { ℓ̇_b(η̂_n) − ℓ̇_b(η_0) } = −(P_n − P) ℓ̇_b(η_0) + o_p(n^{−1/2}),
P_n { ℓ̇_{Λ_j}(η̂_n)[y*_{Λ_j}] − ℓ̇_{Λ_j}(η_0)[y*_{Λ_j}] } = −(P_n − P) ℓ̇_{Λ_j}(η_0)[y*_{Λ_j}] + o_p(n^{−1/2}),
P_n { ℓ̇_g(η̂_n)[y*_g] − ℓ̇_g(η_0)[y*_g] } = −(P_n − P) ℓ̇_g(η_0)[y*_g] + o_p(n^{−1/2}).    (A6)
Furthermore, based on the arguments in the proof of Theorem 3.2 in [13], there exists a neighborhood of (b_0, Λ_{10}, Λ_{20}, g_0) of the form {(b, Λ_1, Λ_2, g) : |b − b_0| + ‖Λ_1 − Λ_{10}‖_2 + ‖Λ_2 − Λ_{20}‖_2 + ‖g − g_0‖_2 ≤ C n^{−ξ}}, where ξ = min{(1 − ν_1)/2, qν_1/2, (1 − ν_2)/2, rν_2/2}. Then, applying a Taylor expansion to ℓ̇_{Λ_j}(η)[y*_{Λ_j}] yields
P( ℓ̇_{Λ_j}(η)[y*_{Λ_j}] − ℓ̇_{Λ_j}(η_0)[y*_{Λ_j}] − ℓ̈_{Λ_j b}(η_0)[y*_{Λ_j}](b − b_0) − ℓ̈_{Λ_jΛ_j}(η_0)[y*_{Λ_j}, Λ_j − Λ_{j0}] − ℓ̈_{Λ_jΛ_{j′}}(η_0)[y*_{Λ_j}, Λ_{j′} − Λ_{j′0}] − ℓ̈_{Λ_j g}(η_0)[y*_{Λ_j}, g − g_0] ) = o_p(n^{−1/2}),    (A7)
where j, j′ ∈ {1, 2}. Likewise, analogous expansions hold for ℓ̇_b(η) and ℓ̇_g(η)[y*_g]. Note that the derivatives of the score statistics are bounded. Applying Taylor series expansions around η_0 to (A6) and combining them with (A7), we have
P ℓ̈_{bb}(η_0)(b̂_n − b_0) + P ℓ̈_{bΛ_1}(η_0)[Λ̂_{1,n} − Λ_{10}] + P ℓ̈_{bΛ_2}(η_0)[Λ̂_{2,n} − Λ_{20}] + P ℓ̈_{bg}(η_0)[ĝ_n − g_0] = −P_n ℓ̇_b(η_0) + o_p(n^{−1/2}),
P ℓ̈_{Λ_j b}(η_0)[y*_{Λ_j}](b̂_n − b_0) + P ℓ̈_{Λ_jΛ_j}(η_0)[y*_{Λ_j}, Λ̂_{j,n} − Λ_{j0}] + P ℓ̈_{Λ_jΛ_{j′}}(η_0)[y*_{Λ_j}, Λ̂_{j′,n} − Λ_{j′0}] + P ℓ̈_{Λ_j g}(η_0)[y*_{Λ_j}, ĝ_n − g_0] = −P_n ℓ̇_{Λ_j}(η_0)[y*_{Λ_j}] + o_p(n^{−1/2}),
P ℓ̈_{gb}(η_0)[y*_g](b̂_n − b_0) + P ℓ̈_{gΛ_1}(η_0)[y*_g, Λ̂_{1,n} − Λ_{10}] + P ℓ̈_{gΛ_2}(η_0)[y*_g, Λ̂_{2,n} − Λ_{20}] + P ℓ̈_{gg}(η_0)[y*_g, ĝ_n − g_0] = −P_n ℓ̇_g(η_0)[y*_g] + o_p(n^{−1/2}).    (A8)
Subtracting the second (j = 1, 2) and third equalities from the first equality in (A8), we have
P( ℓ̈_{bb}(η_0) − ℓ̈_{Λ_1 b}(η_0)[y*_{Λ_1}] − ℓ̈_{Λ_2 b}(η_0)[y*_{Λ_2}] − ℓ̈_{gb}(η_0)[y*_g] )(b̂_n − b_0) = −P_n( ℓ̇_b(η_0) − ℓ̇_{Λ_1}(η_0)[y*_{Λ_1}] − ℓ̇_{Λ_2}(η_0)[y*_{Λ_2}] − ℓ̇_g(η_0)[y*_g] ) + o_p(n^{−1/2}).    (A9)
Step 5. Define Q = −P( ℓ̈_{bb}(η_0; O) − ℓ̈_{Λ_1 b}(η_0; O)[y*_{Λ_1}] − ℓ̈_{Λ_2 b}(η_0; O)[y*_{Λ_2}] − ℓ̈_{gb}(η_0; O)[y*_g] ) and B = P{ℓ*(η_0; O)}^{⊗2} = P{ℓ*(η_0; O) ℓ*(η_0; O)^T}; then, we have
Q = P( ℓ̇_b(η_0; O) − ℓ̇_{Λ_1}(η_0; O)[y*_{Λ_1}] − ℓ̇_{Λ_2}(η_0; O)[y*_{Λ_2}] − ℓ̇_g(η_0; O)[y*_g] )^{⊗2} = P{ℓ*(η_0; O)}^{⊗2} = B,
where ℓ*(η_0; O) = ℓ̇_b(η_0; O) − ℓ̇_{Λ_1}(η_0; O)[y*_{Λ_1}] − ℓ̇_{Λ_2}(η_0; O)[y*_{Λ_2}] − ℓ̇_g(η_0; O)[y*_g]. Next, we need to verify that Q is nonsingular. Since v^T Q v = v^T P{ℓ*(η_0; O)}^{⊗2} v for any v = (v_1^T, v_2, v_3)^T, it suffices to show that v^T ℓ*(η_0; O) = 0 implies v = 0. Thus, we have
v^T ℓ*(η_0; O) = v^T( ℓ̇_b(η_0; O) − ℓ̇_{Λ_1}(η_0; O)[y*_{Λ_1}] − ℓ̇_{Λ_2}(η_0; O)[y*_{Λ_2}] − ℓ̇_g(η_0; O)[y*_g] ) = { v_1^T ∂L/∂β(η_0; O) + v_2 ∂L/∂φ(η_0; O) + v_3 ∂L/∂ω(η_0; O) − ∂L/∂Λ_1(η_0; O)[v^T y*_{Λ_1}] − ∂L/∂Λ_2(η_0; O)[v^T y*_{Λ_2}] − ∂L/∂g(η_0; O)[v^T y*_g] } (1/L(η_0; O)),    (A10)
where η_0 = (β_0^T, φ_0, ω_0, Λ_{10}, Λ_{20}, g_0) and L(η_0; O) is the likelihood function. Under Condition 3, (A10) equals zero only if v = 0. As a consequence, Q is nonsingular.
Substituting Q into (A9), we get
Q(b̂_n − b_0) = P_n ℓ*(η_0; O) + o_p(n^{−1/2}).    (A11)
This implies that
n^{1/2}(b̂_n − b_0) = Q^{−1} n^{1/2} P_n ℓ*(η_0; O) + o_p(1) →_d N(0, Q^{−1} B (Q^{−1})^T).
Since Q = B = P{ℓ*(η_0)}^{⊗2}, we obtain Q^{−1} B (Q^{−1})^T = Q^{−1} = I^{−1}(b_0). Thus, n^{1/2}(b̂_n − b_0) →_d N(0, I^{−1}(b_0)), with I(b_0) = P{ℓ*(η_0)}^{⊗2} and ℓ*(η_0) being the efficient score function for b_0. This completes the proof of Theorem 3.     □
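The identity Q = B used in Step 5 is the familiar information equality, E[−ℓ̈(θ_0)] = E[ℓ̇(θ_0)²]. A minimal parametric illustration (an Exponential model, not the model of this paper) makes the equality concrete:

```python
import numpy as np

# Information equality E[-l''(theta_0)] = E[l'(theta_0)^2], illustrated
# for Exponential(theta) with log-density log(theta) - theta*x (a toy
# model, not the paper's): both sides equal 1/theta_0^2 = 0.25 here.
rng = np.random.default_rng(1)
theta0 = 2.0
x = rng.exponential(scale=1.0 / theta0, size=200_000)

score = 1.0 / theta0 - x        # l'(theta_0) = 1/theta - x
Q = 1.0 / theta0**2             # -l''(theta_0) = 1/theta^2 (constant in x)
B = float(np.mean(score**2))    # Monte Carlo estimate of E[l'(theta_0)^2]
print(Q, B)
```

In the semiparametric setting the same equality, applied to the efficient score ℓ*, is what collapses the sandwich variance Q^{−1}B(Q^{−1})^T to the efficient variance I^{−1}(b_0).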

References

  1. Sun, J. The Statistical Analysis of Interval-Censored Failure Time Data; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  2. Lin, D.Y.; Oakes, D.; Ying, Z. Additive hazards regression with current status data. Biometrika 1998, 85, 289–298. [Google Scholar] [CrossRef]
  3. Martinussen, T.; Scheike, T.H. Efficient estimation in additive hazards regression with current status data. Biometrika 2002, 89, 649–658. [Google Scholar] [CrossRef]
  4. Feng, Y.; Sun, J.; Sun, L. Estimation of the additive hazards model with linear inequality restrictions based on current status data. Commun. Stat.-Theory Methods 2022, 51, 68–81. [Google Scholar] [CrossRef]
  5. Wang, P.; Zhou, Y.; Sun, J. A new method for regression analysis of interval-censored data with the additive hazards model. J. Korean Stat. Soc. 2020, 49, 1131–1147. [Google Scholar] [CrossRef]
  6. Li, H.; Zhang, H.; Zhu, L.; Li, N.; Sun, J. Estimation of the additive hazards model with interval-censored data and missing covariates. Can. J. Stat. 2020, 48, 499–517. [Google Scholar] [CrossRef]
  7. Wang, T.; Bandyopadhyay, D.; Sinha, S. Efficient estimation of the additive risks model for interval-censored data. arXiv 2022, arXiv:2203.09726. [Google Scholar]
  8. Tong, X.; Chen, M.H.; Sun, J. Regression analysis of multivariate interval-censored failure time data with application to tumorigenicity experiments. Biom. J. 2008, 50, 364–374. [Google Scholar] [CrossRef] [PubMed]
  9. Yin, G.; Cai, J. Additive hazards model with multivariate failure time data. Biometrika 2004, 91, 801–818. [Google Scholar] [CrossRef]
  10. Liu, P.; Song, S.; Zhou, Y. Semiparametric additive frailty hazard model for clustered failure time data. Can. J. Stat. 2022, 50, 549–571. [Google Scholar] [CrossRef]
  11. Zeng, D.; Cai, J. Additive transformation models for clustered failure time data. Lifetime Data Anal. 2010, 16, 333–352. [Google Scholar] [CrossRef]
  12. Yu, M.; Du, M. Regression analysis of multivariate interval-censored failure time data under transformation model with informative censoring. Mathematics 2022, 10, 3257. [Google Scholar] [CrossRef]
  13. Sun, T.; Ding, Y. Copula-based semiparametric regression method for bivariate data under general interval censoring. Biostatistics 2021, 22, 315–330. [Google Scholar] [CrossRef]
  14. Marra, G.; Radice, R. Copula link-based additive models for right-censored event time data. J. Am. Stat. Assoc. 2020, 115, 886–895. [Google Scholar] [CrossRef]
  15. Petti, D.; Eletti, A.; Marra, G.; Radice, R. Copula link-based additive models for bivariate time-to-event outcomes with general censoring scheme. Comput. Stat. Data Anal. 2022, 175, 107550. [Google Scholar] [CrossRef]
  16. Cheng, G.; Zhou, L.; Huang, J.Z. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data. Bernoulli 2014, 20, 141–163. [Google Scholar] [CrossRef]
  17. Lu, X.; Song, P.X.K. Efficient estimation of the partly linear additive hazards model with current status data. Scand. J. Stat. 2015, 42, 306–328. [Google Scholar] [CrossRef]
  18. Wang, X.; Song, Y.; Zhang, S. An efficient estimation for the parameter in additive partially linear models with missing covariates. J. Korean Stat. Soc. 2020, 49, 779–801. [Google Scholar] [CrossRef]
  19. Lee, C.Y.; Wong, K.Y.; Lam, K.F.; Xu, J. Analysis of clustered interval-censored data using a class of semiparametric partly linear frailty transformation models. Biometrics 2022, 78, 165–178. [Google Scholar] [CrossRef] [PubMed]
  20. Chen, W.; Ren, F. Partially linear additive hazards regression for clustered and right censored data. Bull. Inform. Cybern. 2022, 54, 1–14. [Google Scholar] [CrossRef]
  21. Sklar, M. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Stat. Univ. Paris 1959, 8, 229–231. [Google Scholar]
  22. Nelsen, R.B. An Introduction to Copulas; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  23. Joe, H. Multivariate Models and Dependence Concepts; CRC Press: Boca Raton, FL, USA, 1997. [Google Scholar]
  24. Clayton, D.G. A model for association in bivariate life tables and its application in epidemiological studies of familial tendency in chronic disease incidence. Biometrika 1978, 65, 141–151. [Google Scholar] [CrossRef]
  25. Gumbel, E.J. Bivariate exponential distributions. J. Am. Stat. Assoc. 1960, 55, 698–707. [Google Scholar] [CrossRef]
  26. Zhou, Q.; Hu, T.; Sun, J. A sieve semiparametric maximum likelihood approach for regression analysis of bivariate interval-censored failure time data. J. Am. Stat. Assoc. 2017, 112, 664–672. [Google Scholar] [CrossRef]
  27. Carnicer, J.M.; Peña, J.M. Shape preserving representations and optimality of the Bernstein basis. Adv. Comput. Math. 1993, 1, 173–196. [Google Scholar] [CrossRef]
  28. Burnham, K.P.; Anderson, D.R. Practical Use of the Information-Theoretic Approach; Springer: Berlin/Heidelberg, Germany, 1993; pp. 75–117. [Google Scholar]
  29. Efron, B. Bootstrap methods: Another look at the jackknife. Ann. Stat. 1979, 7, 1–26. [Google Scholar] [CrossRef]
  30. Age-Related Eye Disease Study Research Group. The Age-Related Eye Disease Study (AREDS): Design implications. AREDS report no. 1. Control. Clin. Trials 1999, 20, 573–600. [Google Scholar] [CrossRef]
  31. Sun, T.; Ding, Y. CopulaCenR: Copula based regression models for bivariate censored data in R. R J. 2020, 12, 266. [Google Scholar] [CrossRef]
  32. Huang, J. Efficient estimation for the proportional hazards model with interval censoring. Ann. Stat. 1996, 24, 540–568. [Google Scholar] [CrossRef]
  33. Zhang, Y.; Hua, L.; Huang, J. A spline-based semiparametric maximum likelihood estimation method for the Cox model with interval-censored data. Scand. J. Stat. 2010, 37, 338–354. [Google Scholar] [CrossRef]
  34. Huang, J.; Rossini, A. Sieve estimation for the proportional-odds failure-time regression model with interval censoring. J. Am. Stat. Assoc. 1997, 92, 960–967. [Google Scholar] [CrossRef]
  35. Wen, C.C.; Chen, Y.H. A frailty model approach for regression analysis of bivariate interval-censored survival data. Stat. Sin. 2013, 23, 383–408. [Google Scholar] [CrossRef]
  36. Van der Vaart, A.; Wellner, J. Weak Convergence and Empirical Processes; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  37. Shen, X.; Wong, W.H. Convergence rate of sieve estimates. Ann. Stat. 1994, 22, 580–615. [Google Scholar] [CrossRef]
  38. Lorentz, G.G. Bernstein Polynomials; American Mathematical Society: Providence, RI, USA, 2013. [Google Scholar]
Figure 1. Estimated g with β = 0 and τ = 0.3; the solid red lines represent the true functions and the dashed blue lines show the estimated functions. (a) g(z) = sin(2πz); (b) g(z) = cos(2πz); (c) g(z) = 5/3 − 5z²; (d) g(z) = z + 5z³.
Figure 2. Estimated g with β = 0 and τ = 0.6; the solid red lines represent the true functions and the dashed blue lines show the estimated functions. (a) g(z) = sin(2πz); (b) g(z) = cos(2πz); (c) g(z) = 5/3 − 5z²; (d) g(z) = z + 5z³.
Figure 3. Estimated nonlinear effect of SevScaleBL for the illustration.
Table 1. Estimation results on β and τ with g(z) = sin(πz/2).

                         β̂                               τ̂
 τ     n      β      Bias    SSE    ESE    CP       Bias    SSE    ESE    CP
 0.3   200    0     −0.047  1.611  2.702  1.000    −0.145  0.477  2.262  0.991
              0.5    0.029  1.747  3.164  0.998    −0.119  0.541  1.842  0.987
       500    0      0.034  0.692  0.831  0.978    −0.061  0.276  0.304  0.982
              0.5    0.021  0.642  0.806  0.988    −0.048  0.286  0.303  0.977
       1000   0     −0.008  0.468  0.477  0.952    −0.022  0.145  0.172  0.978
              0.5    0.009  0.469  0.498  0.965    −0.015  0.157  0.179  0.975
       3000   0     −0.004  0.254  0.254  0.949    −0.008  0.072  0.079  0.959
              0.5   −0.007  0.271  0.263  0.940     0.003  0.073  0.081  0.961
       4000   0      0.003  0.219  0.220  0.951    −0.011  0.063  0.065  0.956
              0.5   −0.005  0.237  0.228  0.940    −0.000  0.061  0.067  0.958
 0.6   200    0     −0.080  1.718  3.595  1.000    −0.051  0.413  2.797  0.979
              0.5    0.035  1.751  3.564  1.000    −0.029  0.403  2.639  0.979
       500    0     −0.023  0.578  0.749  0.983     0.015  0.177  0.233  0.980
              0.5    0.025  0.576  0.799  0.990    −0.012  0.202  0.236  0.965
       1000   0     −0.012  0.367  0.393  0.962     0.013  0.103  0.132  0.954
              0.5    0.020  0.365  0.418  0.973     0.007  0.114  0.137  0.958
       3000   0     −0.007  0.201  0.206  0.956    −0.009  0.064  0.066  0.933
              0.5    0.015  0.206  0.215  0.962     0.005  0.064  0.069  0.931
       4000   0      0.006  0.178  0.177  0.949    −0.008  0.055  0.056  0.932
              0.5    0.011  0.188  0.184  0.946     0.001  0.055  0.057  0.945
Table 2. Estimation results on β and τ with g(z) = z².

                         β̂                               τ̂
 τ     n      β      Bias    SSE    ESE    CP       Bias    SSE    ESE    CP
 0.3   200    0     −0.045  1.737  2.162  1.000    −0.138  0.485  2.536  0.983
              0.5    0.074  1.922  2.941  0.997    −0.108  0.435  1.260  0.987
       500    0      0.045  0.732  0.919  0.983    −0.052  0.215  0.249  0.993
              0.5    0.076  0.767  0.961  0.989    −0.046  0.209  0.248  0.985
       1000   0     −0.010  0.509  0.531  0.956    −0.024  0.122  0.149  0.971
              0.5    0.027  0.511  0.552  0.967    −0.019  0.120  0.145  0.979
       3000   0     −0.009  0.280  0.285  0.950    −0.013  0.060  0.066  0.956
              0.5   −0.013  0.296  0.296  0.947    −0.005  0.057  0.064  0.968
       4000   0     −0.004  0.247  0.245  0.948    −0.010  0.052  0.055  0.960
              0.5   −0.016  0.252  0.253  0.951    −0.003  0.056  0.057  0.956
 0.6   200    0      0.157  1.767  2.243  1.000    −0.046  0.361  1.798  0.974
              0.5    0.195  2.134  2.373  0.997    −0.034  0.329  0.636  0.972
       500    0      0.034  0.608  0.815  0.989    −0.029  0.158  0.208  0.984
              0.5    0.025  0.632  0.843  0.979    −0.018  0.170  0.237  0.984
       1000   0     −0.008  0.403  0.435  0.965    −0.020  0.096  0.116  0.969
              0.5    0.028  0.417  0.449  0.963     0.014  0.095  0.118  0.970
       3000   0     −0.003  0.222  0.225  0.956    −0.016  0.052  0.057  0.960
              0.5    0.020  0.231  0.234  0.953    −0.010  0.057  0.058  0.956
       4000   0     −0.001  0.199  0.195  0.957    −0.018  0.046  0.048  0.936
              0.5    0.011  0.195  0.201  0.946    −0.006  0.050  0.052  0.952
Table 3. Estimation results on β and τ with g(z) = sin(πz/2) and two covariates.

                              n = 200                          n = 400
 Param  True Value     Bias    SSE    ESE    CP       Bias    SSE    ESE    CP
 β1     0              0.010  0.457  0.433  0.950     0.007  0.318  0.299  0.948
 β2     0              0.010  0.571  0.550  0.947     0.007  0.392  0.380  0.947
 τ      0.3           −0.009  0.089  0.103  0.963    −0.003  0.064  0.065  0.936
 β1     0.1           −0.007  0.316  0.321  0.959     0.006  0.218  0.221  0.953
 β2     0.1           −0.009  0.532  0.557  0.954     0.006  0.386  0.385  0.951
 τ      0.3           −0.005  0.092  0.105  0.967    −0.003  0.063  0.066  0.943
 β1     0             −0.015  0.244  0.258  0.960    −0.009  0.176  0.177  0.948
 β2     0             −0.015  0.461  0.448  0.938    −0.004  0.300  0.308  0.947
 τ      0.6           −0.001  0.081  0.083  0.931    −0.006  0.056  0.058  0.949
 β1     0.1           −0.003  0.257  0.261  0.955     0.003  0.183  0.179  0.945
 β2     0.1           −0.003  0.434  0.452  0.964     0.003  0.316  0.309  0.941
 τ      0.6            0.002  0.080  0.084  0.931     0.001  0.055  0.056  0.938
Table 4. Estimation results on β and τ with the time-dependent covariates.

                         β̂                               τ̂
 n     τ      β      Bias    SSE    ESE    CP       Bias    SSE    ESE    CP
 200   0.3    0     −0.016  0.265  0.270  0.946    −0.025  0.098  0.113  0.974
              0.5    0.027  0.279  0.286  0.948    −0.012  0.098  0.117  0.970
              1      0.023  0.301  0.310  0.954    −0.007  0.102  0.123  0.963
       0.6    0      0.025  0.231  0.234  0.942    −0.016  0.090  0.095  0.940
              0.5    0.046  0.233  0.243  0.957    −0.008  0.092  0.094  0.948
              1      0.042  0.259  0.265  0.945    −0.010  0.093  0.095  0.936
 400   0.3    0      0.006  0.222  0.204  0.945    −0.004  0.060  0.070  0.957
              0.5    0.026  0.203  0.198  0.944    −0.003  0.068  0.071  0.959
              1      0.008  0.207  0.214  0.954     0.003  0.070  0.074  0.953
       0.6    0      0.020  0.159  0.165  0.953    −0.010  0.062  0.062  0.941
              0.5    0.038  0.155  0.163  0.952    −0.003  0.060  0.062  0.943
              1      0.028  0.182  0.183  0.946     0.005  0.061  0.063  0.936
Table 5. Analysis results for the AREDS data.

                                     β                          τ
 Bernstein Polynomials       EST    ESE    p-Value       EST    ESE
 m1 = 3, m2 = 3             0.041  0.009  <0.001        0.473  0.043
 m1 = 4, m2 = 4             0.046  0.008  <0.001        0.416  0.056
 m1 = 5, m2 = 5             0.034  0.008  <0.001        0.389  0.048
 m1 = 6, m2 = 6             0.024  0.009  0.011         0.388  0.056
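The Bernstein polynomial degrees above are compared via the Akaike information criterion [28]. The sketch below is purely illustrative: the log-likelihood values and parameter counts are made up, not the AREDS fits, and only show the mechanics of the comparison.

```python
# Hypothetical sketch of AIC-based selection of the Bernstein degrees;
# the log-likelihoods and parameter counts below are made up for
# illustration, not the AREDS results. AIC = 2k - 2*logL; smaller wins.
def aic(loglik, n_params):
    return 2.0 * n_params - 2.0 * loglik

# (m1, m2) -> (illustrative log-likelihood, illustrative parameter count)
fits = {(3, 3): (-1510.2, 11), (4, 4): (-1508.9, 13),
        (5, 5): (-1508.5, 15), (6, 6): (-1508.4, 17)}

scores = {m: aic(ll, k) for m, (ll, k) in fits.items()}
best = min(scores, key=scores.get)
print(best, scores[best])
```

The pattern mirrors what the table suggests: once the fit stops improving appreciably, the 2k penalty favors the smaller degrees.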

Zhang, X.; Zhao, S.; Hu, T.; Sun, J. Partially Linear Additive Hazards Regression for Bivariate Interval-Censored Data. Axioms 2023, 12, 198. https://doi.org/10.3390/axioms12020198
