Article

A Markov Chain Monte Carlo Procedure for Efficient Bayesian Inference on the Phase-Type Aging Model

Department of Statistical and Actuarial Sciences, The University of Western Ontario, London, ON N6A 3K7, Canada
* Author to whom correspondence should be addressed.
Stats 2025, 8(3), 77; https://doi.org/10.3390/stats8030077
Submission received: 24 July 2025 / Revised: 16 August 2025 / Accepted: 22 August 2025 / Published: 27 August 2025

Abstract

The phase-type aging model (PTAM) belongs to a class of Coxian-type Markovian models that can provide a quantitative description of well-known aging characteristics that are part of a genetically determined, progressive, and irreversible process. Due to its unique parameter structure, estimation via the MLE method presents a considerable estimability issue, whereby profile likelihood functions are flat and analytically intractable. In this study, a Markov chain Monte Carlo (MCMC)-based Bayesian methodology is proposed and applied to the PTAM, with a view to improving parameter estimability. The proposed method provides two methodological extensions based on an existing MCMC inference method. First, we propose a two-level MCMC sampling scheme that makes the method applicable to situations where the posterior distributions do not assume simple forms after data augmentation. Secondly, an existing data augmentation technique for Bayesian inference on continuous phase-type distributions is further developed in order to incorporate left-truncated data. While numerical results indicate that the proposed methodology improves parameter estimability via sound prior distributions, this approach may also be utilized as a stand-alone statistical model-fitting technique.

1. Introduction

The phase-type aging model (PTAM) belongs to a class of Coxian Markovian models that were proposed in a previous study [1]. The purpose of the PTAM is to provide a quantitative description of well-known aging characteristics that are part of a genetically determined, progressive, and irreversible process. It provides a means of quantifying the heterogeneity in aging among individuals and of capturing the anti-selection effects.
Since the PTAM is nonlinear in its parameters, the parameter estimation turns out to be unstable, which gives rise to an estimability issue (see [2]). The estimability issue of the PTAM originates from two complications. First, the structure of the PTAM, which is complicated by its matrix exponential form, makes it impossible to directly analyze the gradient and Hessian matrix of its likelihood function. In these instances, any hill-climbing optimization algorithm will be at risk of becoming stuck in local maxima. The second aspect is that the structure of the PTAM gives rise to flat profile likelihood functions (see [2]). The estimability issue arising from flat profile likelihood functions is thoroughly discussed in [3]. Frequentists would potentially address the flat likelihood functions using regularization, which can be thought of as using a log prior density (see [4]).
To address both problems, a Bayesian approach is considered. The parameters are thus assumed to be random variables, which automatically eliminates the risk of being stuck in local maxima; this addresses the first problem. As for the second one, the Bayesian approach can improve parameter estimability by making use of sound prior information, since the posterior distributions will depend significantly on the priors when the profile likelihood functions are nearly flat.
Moreover, there are convincing reasons for applying the Bayesian approach to the PTAM. In this context, it has previously been applied via the data augmentation Gibbs sampler, which consists of two iterative steps—a data augmentation step and a posterior sampling step (see [5]). The data augmentation step in relation to continuous phase-type distributions was thoroughly studied by the authors of [6], where an EM algorithm was proposed for estimating the parameters of phase-type distributions. Based on the same data augmentation scheme, the authors of [7] considered Dirichlet and Gamma distributions as the conjugate prior distributions in the posterior sampling step, thereby developing an MCMC-based Bayesian method for continuous phase-type distributions. Later on, several studies were carried out regarding the data augmentation step in order to improve computational efficiency. The authors of [8] proposed the Exact Conditional Sampling (ECS) algorithm. Another efficient algorithm, introduced by the authors of [9], involves uniformization and backward likelihood computation. However, these contributions all focus on the data augmentation step. In the context of the PTAM, the posterior sampling step also becomes more involved because of its parameter structure, since the posterior distributions are no longer as simple as Dirichlet and Gamma distributions after data augmentation. This situation has not been studied before in the literature. Therefore, the first contribution of this study is to develop an MCMC algorithm for sampling the posterior distribution of the PTAM.
Another area that needs further development is the determination of a method for dealing with left-truncated data in the data augmentation step. Although the authors of [10] developed the EM algorithm for censored data from phase-type distributions, the case of left-truncated data in connection with the MCMC-based Bayesian approach has not previously been studied. The MCMC algorithms proposed in [7,8,9] are indeed only applicable to data that are not left-truncated. However, it is important to develop MCMC-based methods that handle left truncation because it is a common feature of real-life data. In particular, in the context of the PTAM, real-life data are left-truncated because it is unlikely that in practice, individuals will enter the study at the same physiological age. Accordingly, the second contribution of this study is to develop an MCMC algorithm for estimating the PTAM parameters when data are left-truncated.
The proposed MCMC algorithm utilizes a nested structure comprising two levels. In the outer level, augmented data are generated using the ECS algorithm proposed in [8] combined with the technique developed for left-truncated data. In the inner level, Gibbs sampling is applied to draw samples from the posterior distributions based on a newly developed rejection sampling scheme on a logarithmic scale. Thus, the proposed algorithm can be seen as a methodological extension of the existing data augmentation Gibbs sampler for continuous phase-type distributions. It can also be regarded as a further illustration of making use of the MCMC algorithm in the case of sampling from high-dimensional distributions. On applying the proposed algorithm, a Bayesian estimation of the PTAM parameters can be carried out. This will be illustrated with both simulated and actual data.
This paper is organized as follows. Preliminaries on the PTAM are introduced in Section 2. Section 3 presents a literature review on existing MCMC algorithms in connection with continuous phase-type distributions. In Section 4, the proposed MCMC algorithms for Bayesian inference on the PTAM are introduced. A simulation study is provided in Section 5 to validate the proposed approach; parameter estimability is also analyzed by comparing the proposed Bayesian approach with the frequentist one employed in [1]. In Section 6, the proposed Bayesian approach is applied to calibrate the PTAM to the Channing House data set, which pertains to the residents of a retirement community. Lastly, some concluding remarks are included in Section 7.

2. The Phase-Type Aging Model

The phase-type aging model (PTAM) stems from the phase-type mortality model proposed in [11]. The motivation for analyzing the phase-type mortality model consists of linking its parameters to certain biological and physiological mechanisms of aging, so that the longevity risk facing annuity products can be measured more accurately. Experimental results showed that the phase-type mortality model with a four-state developmental period and a subsequent aging period achieved very satisfactory fitting results with respect to Swedish and USA cohort mortality data (see [11]). Later on, the authors of [12] applied the phase-type mortality model to Australian cohort mortality data.
Furthering the research in [11], the authors of [1] developed a parsimonious yet flexible representation of the PTAM for modeling various aging patterns. Similarly, the main objective of the PTAM is to describe the human aging process in terms of the evolution of the distribution of physiological ages, utilizing mortality rates as aging-related variables. Therefore, although the PTAM can reproduce mortality patterns, it ought not to be treated as a mortality model. In this context, the PTAM is most applicable at human ages beyond the attainment of adulthood, where, relatively speaking, the aging process is the most significant factor that contributes to the variability in lifetimes (see [1]).

2.1. Preliminaries

Definition 1.
Let $\{X(t)\}_{t \ge 0}$ be a continuous-time Markov chain (CTMC) defined on a finite state space $S = E \cup \Delta = \{1, 2, \ldots, m\} \cup \Delta$, where $\Delta = \{m+1\}$ is the absorbing state and $E$ is the set of transient states. Let $\{X(t)\}_{t \ge 0}$ have initial distribution $\boldsymbol{\pi} = (\pi_1, \pi_2, \ldots, \pi_m)$ over the transient states such that $\boldsymbol{\pi} \mathbf{e} = 1$, and let the transition intensity matrix be as follows:
$$\boldsymbol{\Lambda} = \begin{pmatrix} \mathbf{S} & \mathbf{h} \\ \mathbf{0} & 0 \end{pmatrix},$$
where $\mathbf{h} = -\mathbf{S}\mathbf{e}$ and $\mathbf{e}$ is the column vector of ones. $T := \inf\{t \ge 0 \mid X(t) = m+1\}$ is defined as the time until absorption. Then, $T$ is said to follow a continuous phase-type (CPH) distribution of order $m$, denoted by $CPH(\boldsymbol{\pi}, \mathbf{S})$, with $\mathbf{h}$ being defined as the exit vector.
Remark 1.
Given $T \sim CPH(\boldsymbol{\pi}, \mathbf{S})$ of order $m$,
  • (i) the probability density function (p.d.f.) of $T$ is $f_T(t) = \boldsymbol{\pi}\, e^{\mathbf{S}t}\, \mathbf{h}$;
  • (ii) the cumulative distribution function (c.d.f.) of $T$ is $F_T(t) = 1 - \boldsymbol{\pi}\, e^{\mathbf{S}t}\, \mathbf{e}$.
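These two expressions are straightforward to evaluate numerically. The following R sketch is our own illustration (not code from the paper) and assumes the expm package for the matrix exponential:

```r
# Hedged sketch: evaluating the p.d.f. and c.d.f. of CPH(pi, S) in Remark 1.
library(expm)  # provides expm() for the matrix exponential

cph_pdf <- function(t, pi_vec, S) {
  h <- -S %*% rep(1, nrow(S))                 # exit vector h = -S e
  as.numeric(t(pi_vec) %*% expm(S * t) %*% h)
}

cph_cdf <- function(t, pi_vec, S) {
  e <- rep(1, nrow(S))
  1 - as.numeric(t(pi_vec) %*% expm(S * t) %*% e)
}

# Example: a 2-state Coxian with lambda_1 = 1 and exit rates (0.5, 2)
S <- matrix(c(-1.5, 1,
               0,  -2), nrow = 2, byrow = TRUE)
cph_pdf(1, c(1, 0), S); cph_cdf(1, c(1, 0), S)
```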
There is a long history of using phase-type distributions for survival modeling in the category of “absorbing time” distributions (see [6,11,12,13]).
Definition 2.
A CPH distribution of order $m$ with representation $(\boldsymbol{\pi}, \mathbf{S})$ is said to be a Coxian distribution of order $m$ if $\boldsymbol{\pi} = (1, 0, 0, \ldots, 0)$ and the following holds true:
$$\mathbf{S} = \begin{pmatrix} -(\lambda_1 + h_1) & \lambda_1 & 0 & \cdots & 0 & 0 \\ 0 & -(\lambda_2 + h_2) & \lambda_2 & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & -(\lambda_{m-1} + h_{m-1}) & \lambda_{m-1} \\ 0 & 0 & 0 & \cdots & 0 & -h_m \end{pmatrix},$$
where $\lambda_i > 0$, $h_j > 0$, $i = 1, 2, \ldots, m-1$, and $j = 1, 2, \ldots, m$.
The Coxian distribution can often be visualized by a phase diagram such as that displayed in Figure 1 (see [1]).

2.2. The Phase-Type Aging Model

According to the authors of [1], the phase-type aging model (PTAM) belongs to a class of Coxian-type Markovian models, which can provide a quantitative description of the genetically determined, progressive, and irreversible aging process.
Definition 3.
The PTAM of order $m$ is a Coxian distribution of order $m$ with transition intensity matrix $\mathbf{S}$ and exit rate vector $\mathbf{h}$ such that the following applies:
$$\mathbf{S} = \begin{pmatrix} -(\lambda + h_1) & \lambda & 0 & \cdots & 0 & 0 \\ 0 & -(\lambda + h_2) & \lambda & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & -(\lambda + h_{m-1}) & \lambda \\ 0 & 0 & 0 & \cdots & 0 & -h_m \end{pmatrix}, \quad \mathbf{h} = \begin{pmatrix} h_1 \\ h_2 \\ \vdots \\ h_{m-1} \\ h_m \end{pmatrix},$$
where $\lambda > 0$, $0 < h_1 < h_m$, and
$$h_i = \begin{cases} \left[ \dfrac{m-i}{m-1}\, h_1^s + \dfrac{i-1}{m-1}\, h_m^s \right]^{1/s}, & s \neq 0, \\[1.5ex] h_1^{\frac{m-i}{m-1}}\, h_m^{\frac{i-1}{m-1}}, & s = 0, \end{cases} \tag{4}$$
for $i = 1, 2, \ldots, m$. This is denoted by $PTAM(h_1, h_m, s, \lambda, m)$.
As can be seen from Figure 2, the PTAM has a phase diagram that is similar to the Coxian distribution observed in Figure 1, the difference being the constant transition rate and the functionally related exit rates defined in (4).
(i)
In Figure 2, each state in the Markov process represents the physiological age—a variable that reflects an individual’s health condition or frailty level. As the aging process progresses, the frailty level increases until the last state is reached, at which point the individual’s health has deteriorated to the point of causing death.
(ii)
The transition rate $\lambda$ is assumed to be constant. The exit rate $h_i$ is the dying rate, or force of dying. With this setup, at a given calendar age, an individual will be assigned to a certain state. This mathematically describes the fact that the individuals involved will have different physiological ages at the same calendar age (see [1]).
(iii)
The dying rates assume the structure given in (4), which is somewhat reminiscent of the well-known Box–Cox transformation introduced in [14]. The first and last dying rates—$h_1$ and $h_m$—are included in the model parameters, whereas the remaining intermediate rates are interpolated based on the parameter $s$, a model parameter related to the curvature of the exit rate pattern. Figure 3 illustrates the effect of $s$ on the pattern of the exit rates: when $s = 1$, the dying rates have a linear relationship; when $s > 1$, the rates are concave; and when $s < 1$, the rates are convex. In particular, when $s = 0$, the rates behave exponentially. In practice, we believe that it is likely that $s < 1$ when calibrating to mortality data; that is, the dying rates increase faster than linearly as an individual ages (see [1]). Throughout this study, $h_i$ will follow the structure given in (4), for $i = 1, 2, \ldots, m$; a short numerical sketch follows this list.
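To make the interpolation in (4) concrete, the following R sketch (our own helper; the parameter values are borrowed from the simulation study in Section 5) evaluates the exit rates for given $(h_1, h_m, s, m)$:

```r
# Exit rates h_i of the PTAM as specified in (4); s = 0 is the limiting
# geometric (exponential) interpolation between h_1 and h_m.
ptam_exit_rates <- function(h1, hm, s, m) {
  i <- 1:m
  w <- (m - i) / (m - 1)                       # weight on h_1
  if (s == 0) h1^w * hm^(1 - w)
  else (w * h1^s + (1 - w) * hm^s)^(1 / s)
}

ptam_exit_rates(0.0008, 1.65349, 1, 10)        # s = 1: linear pattern
ptam_exit_rates(0.0008, 1.65349, -0.11118, 10) # s < 1: convex pattern
```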
The parameter structure of the PTAM proves to be parsimonious and flexible, which allows us to model the internal aging process explicitly. Further information is available in [1]. Since our study pertains to the PTAM, the processes being considered are homogeneous, use the same intensity for all transitions to the next stage, and have an intensity of moving to the absorbing stage obtained by interpolating between the two end points, as specified in (4). More general processes would require further consideration.

3. MCMC-Based Bayesian Inference for the CPH Distributions

3.1. The Data Augmentation Gibbs Sampler

Let $\mathbf{y}$ denote the observed data, which follow some probability distribution $p(\mathbf{y} \mid \theta)$, with $\theta$ being an unknown set of parameters and $\mathbf{x}$ being the latent data that are not observed. We denote the prior and posterior distributions associated with $\theta$ by $p(\theta)$ and $p(\theta \mid \mathbf{y})$, respectively. The idea behind the Gibbs sampler with data augmentation is to augment the observed data $\mathbf{y}$ with the latent data $\mathbf{x}$, so that the density function after data augmentation, i.e., $p(\theta \mid \mathbf{x}, \mathbf{y})$, will have a more tractable form (see [5]). Figure 4 presents the iterative framework of the data augmentation Gibbs sampler. The technique consists of a data augmentation step and a posterior sampling step, which, respectively, correspond to sampling from $p(\mathbf{x} \mid \theta, \mathbf{y})$ and $p(\theta \mid \mathbf{x}, \mathbf{y})$.
Appealing to the stationarity of the MCMC method, the samples generated from the data augmentation Gibbs sampler will converge to the target posterior distribution, p ( θ | y ) . We assume some familiarity with the theory of MCMC methods. More details are available in [15,16,17,18,19].

3.2. The Data Augmentation Step—Sampling from $p(\mathbf{x} \mid \theta, \mathbf{y})$

The data augmentation scheme with respect to the CPH distributions was proposed by the authors of [6]. It is widely applied to EM, MCMC, and variational Bayes algorithms (see [6,7,8,9,20]).
Consider a CPH distribution of order $m$ with $\theta = (\boldsymbol{\pi}, \mathbf{S})$. Its likelihood function, given the data set $\mathbf{y} = (y_1, y_2, \ldots, y_M)$, is as follows:
$$L(\boldsymbol{\pi}, \mathbf{S}; \mathbf{y}) = \prod_{i=1}^{M} \boldsymbol{\pi}\, e^{\mathbf{S} y_i}\, \mathbf{h}, \tag{5}$$
where $\mathbf{h} = -\mathbf{S}\mathbf{e}$.
According to the authors of [6], a sample path associated with a CPH distribution can be characterized by the initial state, the transitions among states, and the sojourn time in each state. Let $\mathbf{x} = \{X(t)^{(k)}\}_{t \ge 0}$, $k = 1, 2, \ldots, M$, be $M$ independent sample paths augmented from the observed absorption time data $\mathbf{y} = \{y^{(k)}\}_{k=1,2,\ldots,M}$, each sample path being generated by a CPH distribution $(\boldsymbol{\pi}, \mathbf{S})$ of order $m$. Then, the likelihood function for the augmented data $(\mathbf{x}, \mathbf{y})$ is given as follows:
$$L(\theta; \mathbf{x}, \mathbf{y}) = \prod_{i=1}^{m} \pi_i^{B_i} \prod_{i=1}^{m} \prod_{j \neq i}^{m} \lambda_{ij}^{N_{ij}}\, e^{-\lambda_{ij} Z_i} \prod_{i=1}^{m} h_i^{N_{i,m+1}}\, e^{-h_i Z_i}, \tag{6}$$
where $B_i$ is the number of sample paths starting in state $i$ among the $M$ individuals, $N_{ij}$ is the total number of transitions from state $i$ to state $j$ among the $M$ individuals, and $Z_i$ is the total sojourn time in state $i$ among the $M$ individuals. In that case, the absorption time $y^{(k)}$ for the $k$th individual is equal to the sum of the sojourn times of its corresponding sample path.
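To illustrate the sufficient statistics appearing in (6), the following R sketch (our own helper, not code from [6]) forward-simulates $M$ sample paths of a Coxian distribution with a constant transition rate, as in the PTAM; every Coxian path starts in state 1, so $B_1 = M$ and the Dirichlet component is trivial:

```r
# Hedged sketch: simulate M Coxian sample paths and tally the statistics
# used in (6): N_next[i] = N_{i,i+1}, N_exit[i] = N_{i,m+1}, Z[i] = Z_i.
simulate_coxian_stats <- function(M, lambda, h) {
  m <- length(h)
  N_next <- numeric(m - 1); N_exit <- numeric(m); Z <- numeric(m)
  for (k in 1:M) {
    i <- 1                                     # Coxian paths start in state 1
    repeat {
      rate <- if (i < m) lambda + h[i] else h[i]
      Z[i] <- Z[i] + rexp(1, rate)             # sojourn time in state i
      if (i == m || runif(1) < h[i] / rate) {  # absorb w.p. h_i / rate
        N_exit[i] <- N_exit[i] + 1
        break
      }
      N_next[i] <- N_next[i] + 1               # move i -> i + 1
      i <- i + 1
    }
  }
  list(N_next = N_next, N_exit = N_exit, Z = Z)
}

stats <- simulate_coxian_stats(50, lambda = 2, h = seq(0.01, 1.6, length.out = 10))
```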
Sampling the latent sample path conditional on the absorption time proves to be a difficult problem. To solve it, the authors of [7] initially suggested making use of a carefully designed Metropolis–Hastings algorithm with proposal distribution $p(\mathbf{x} \mid \theta, Y \ge y)$, where convergence to sampling from $p(\mathbf{x} \mid \theta, Y = y)$ eventually occurs. Rejection sampling is then utilized to draw the latent sample path from this proposal distribution. However, this method is time-consuming because many sample paths will be rejected if some data points in $\mathbf{y}$ are large. This will hinder the computational efficiency.
Later on, the authors of [8] further improved the methodology by making the following two main contributions:
(i)
A faster and more efficient algorithm was developed to simulate the latent sample path from p ( x | θ , y ) . The algorithm is named the Exact Conditional Sampling (ECS) algorithm.
(ii)
Unlike the CPH distributions considered by the authors of [7], which assumed full and unstructured parameters, the authors of [8] took into account the special parameter structure indicated by the context of the experiment. Using a reliability model as an example, they discussed situations where parameters might be zero or have identical values. For identical parameters, they proposed combining all the relevant terms in (6) so that parameters with the same values will be sampled from one single distribution. The zero-valued parameters were simply set to a value of zero.
In this study, we elected to adopt the ECS algorithm for the data augmentation step. It is conveniently presented in Algorithms 1 and 2. It should be noted that since the PTAM is a Coxian distribution whose underlying Markov process is irreversible, the versions of the ECS algorithm presented in Algorithms 1 and 2 are simpler than the original, more general version proposed in [8].
Algorithm 1 The ECS algorithm (see [8]) applied to absorption times observed from the PTAM.
1:
Sample a starting state $i$ from the probability mass function
$$P(X(0) = i \mid \boldsymbol{\pi}, \mathbf{S}, Y = y) = \frac{\pi_i\, \mathbf{e}_i' e^{\mathbf{S} y} \mathbf{h}}{\boldsymbol{\pi}\, e^{\mathbf{S} y} \mathbf{h}}$$
and set $t \leftarrow 0$.
2:
With probability
$$P\big(X_{[t, y)} = i,\ X(y) = m+1 \mid \mathbf{S}, Y = y, X(t) = i\big) = \frac{e^{S_{ii}(y-t)}\, h_i}{\mathbf{e}_i' e^{\mathbf{S}(y-t)} \mathbf{h}},$$
set $X_{[t, y)} \leftarrow i$ and $X(y) \leftarrow m+1$ and end the algorithm; else continue.
3:
Sample the sojourn time $d$ from
$$p\big(\delta = d \mid \mathbf{S}, Y = y, X_{[t, t+\delta)} = i, X(t+\delta) \in \{1, 2, \ldots, m\} \setminus \{i\}\big) = \frac{\mathbf{p}_{i\cdot}\, e^{\mathbf{S}(y-t-d)} \mathbf{h}\, (-S_{ii})\, e^{S_{ii} d}}{\int_0^{y-t} \mathbf{p}_{i\cdot}\, e^{\mathbf{S}(y-t-\delta)} \mathbf{h}\, (-S_{ii})\, e^{S_{ii} \delta}\, d\delta},$$
where $\mathbf{p}_{i\cdot}$ denotes the $i$th row of the embedded jump-chain transition probability matrix, and set $X_{[t, t+d)} \leftarrow i$.
4:
Set $t \leftarrow t + d$ and $i \leftarrow i + 1$, then go to Step 2.

3.3. The Posterior Sampling Step—Sampling from $p(\theta \mid \mathbf{x}, \mathbf{y})$

The next step consists of simulating the posterior distribution of the parameter $\theta$ from the augmented data. Fortunately, this step is quite straightforward for the CPH distributions. As the likelihood function consists of kernels of Dirichlet and Gamma distributions, it provides an indication to utilize Dirichlet and Gamma distributions as the conjugate prior distributions. According to the authors of [7], the prior distributions are as follows:
$$\boldsymbol{\pi} \sim \mathrm{Dirichlet}(\beta_1, \beta_2, \ldots, \beta_m),$$
$$\lambda_{ij} \sim \mathrm{Gamma}(v_{ij}, \xi_{ij}),$$
$$h_i \sim \mathrm{Gamma}(v_{i,m+1}, \xi_{i,m+1}),$$
and the posterior distributions after data augmentation are as follows:
$$\boldsymbol{\pi} \mid \mathbf{x}, \mathbf{y} \sim \mathrm{Dirichlet}(\beta_1 + B_1, \beta_2 + B_2, \ldots, \beta_m + B_m),$$
$$\lambda_{ij} \mid \mathbf{x}, \mathbf{y} \sim \mathrm{Gamma}(v_{ij} + N_{ij}, \xi_{ij} + Z_i),$$
$$h_i \mid \mathbf{x}, \mathbf{y} \sim \mathrm{Gamma}(v_{i,m+1} + N_{i,m+1}, \xi_{i,m+1} + Z_i),$$
where the Gamma distributions are parameterized in terms of shape and rate parameters.
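A minimal R sketch of this posterior sampling step follows; the function and argument names are ours, the hyper-parameters and tallied statistics are passed as matrices and vectors, and the Dirichlet draw is built from normalized Gamma variates:

```r
# Hedged sketch of the conjugate updates: B (starting-state counts),
# N (m x (m+1) transition counts, last column = N_{i,m+1}), Z (sojourn times);
# beta, v, xi are the corresponding prior hyper-parameters.
draw_cph_posterior <- function(B, N, Z, beta, v, xi) {
  m <- length(B)
  g <- rgamma(m, shape = beta + B, rate = 1)
  pi_draw <- g / sum(g)                        # Dirichlet(beta + B) draw
  rates <- matrix(0, m, m + 1)                 # last column holds the h_i's
  for (i in 1:m) {
    for (j in setdiff(1:(m + 1), i)) {
      # exposure for every rate out of state i is the sojourn time Z_i
      rates[i, j] <- rgamma(1, shape = v[i, j] + N[i, j], rate = xi[i, j] + Z[i])
    }
  }
  list(pi = pi_draw, rates = rates)
}
```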
Algorithm 2 The ECS algorithm (see [8]) applied to right-censored times observed from the PTAM.
1:
Sample a starting state $i$ from the probability mass function
$$P(X(0) = i \mid \boldsymbol{\pi}, \mathbf{S}, Y \ge y) = \frac{\pi_i\, \mathbf{e}_i' e^{\mathbf{S} y} \mathbf{e}}{\boldsymbol{\pi}\, e^{\mathbf{S} y} \mathbf{e}}$$
and set $t \leftarrow 0$.
2:
With probability $\min\{1, e^{S_{ii}(y-t)}\}$, set $d \leftarrow \max\{y - t, 0\} + T$, where $T \sim \mathrm{Exp}(-S_{ii})$. Else, the sojourn time $d$ is a sample from the finitely supported density on $[0, y-t)$, i.e.,
$$p(\delta = d \mid \mathbf{S}, Y \ge y, X(t) = i) \propto \mathbf{p}_{i\cdot}\, e^{\mathbf{S}(y-t-d)} \mathbf{e}\, (-S_{ii})\, e^{S_{ii} d}.$$
In either case, set $X_{[t, t+d)} \leftarrow i$.
3:
Set $j \leftarrow i + 1$.
4:
If $j = m + 1$, end the algorithm; else set $t \leftarrow t + d$ and $i \leftarrow j$, and go to Step 2.

4. MCMC-Based Bayesian Inference for the PTAM

The MCMC algorithm for Bayesian inference on the PTAM introduced in this section constitutes the principal contribution of this study. This contribution involves two aspects. Firstly, the proposed MCMC algorithm can be considered a methodological extension of the existing algorithm in terms of sampling from $p(\theta \mid \mathbf{x}, \mathbf{y})$. This is due to the fact that the likelihood function of the PTAM is so involved that no simple conjugate prior distributions, such as the Dirichlet and Gamma distributions, are adequate. Although the authors of [8] considered special parameter structures such as zero-valued and identical parameters, the prior conjugacy still holds in their setting, as it simply involves deleting and regrouping parameters. However, further extensions are required in the case of the PTAM, since its parameters exhibit more complicated functional relationships as a result of the constraint specified in (4). Secondly, similarly to [10], in which the EM algorithm was developed for censored data from the CPH distribution, we have developed an MCMC-based Bayesian approach for left-truncated data from the PTAM. This development is crucial for estimating the PTAM parameters from real-life data, since it is unlikely that, in practice, each individual will enter the study at the same physiological age; hence, there is additional difficulty with respect to analyzing left-truncated data.
With these contributions, a methodologically extended MCMC algorithm is proposed in order to carry out the sampling from p ( θ | x , y ) so that an MCMC-based Bayesian inference on the PTAM could be achieved, particularly for real-life data that are left-truncated.

4.1. Likelihood Function of the PTAM with Left-Truncated Data

Taking left-truncated data into account, the likelihood function for the PTAM after data augmentation is as follows:
$$L(\lambda, h_1, h_m, s; \mathbf{x}, \mathbf{y}) = \prod_{i=1}^{m-1} \lambda^{N_{i,i+1} - Q_{i,i+1}}\, e^{-\lambda Z_i^A} \prod_{i=1}^{m} h_i^{N_{i,m+1}}\, e^{-h_i G_i} \prod_{i \in A} e^{\lambda d_i}, \tag{13}$$
where $d_i$ is the time at which individual $i$ enters the study; $Q_{ij}$ is the total number of transitions from state $i$ to $j$ that occurred before the entry times; $G_i$ is the total sojourn time in state $i$ for the portions of the sample paths after the entry times; $N_{ij}$ is as defined in Section 3; and $Z_i^A$ is the total sojourn time in state $i$ for the sample paths in $A$, where
$$A := \left\{ k \in \mathbb{Z}_+ \,\middle|\, \text{the $k$th sample path enters the study before reaching state $m$, i.e., } d_k < \textstyle\sum_{j=1}^{m-1} t_j^{(k)} \right\}, \tag{14}$$
with $t_j^{(k)}$ denoting the sojourn time in state $j$ for the $k$th sample path.
The likelihood function (13) can be seen as a generalized version of the likelihood function (6) given in [6]. To see this, note that if the data do not involve left truncation, then the $Q_{ij}$’s and $d_i$’s will be reduced to zero for all $i$ and $j$; $A$ will be reduced to the set of indices of all sample paths; and both the $G_i$’s and $Z_i^A$’s will be reduced to $Z_i$’s for all $i$. Thus, the likelihood function in (13) will boil down to (6) with $\boldsymbol{\pi} = (1, 0, 0, \ldots, 0)$. The details of the derivation of the likelihood function (13) are presented in Appendix B.

4.2. Characteristics of the Posterior Distribution of the PTAM

In the PTAM, the posterior distribution of the model parameters is no longer a product of independent kernels. To see this, we start by substituting (4) into the likelihood function (13), as follows:
$$L(\lambda, h_1, h_m, s; \mathbf{x}, \mathbf{y}) = \prod_{i=1}^{m-1} \lambda^{N_{i,i+1} - Q_{i,i+1}}\, e^{-\lambda Z_i^A} \prod_{i \in A} e^{\lambda d_i} \times \prod_{i=1}^{m} \left[ \frac{m-i}{m-1} h_1^s + \frac{i-1}{m-1} h_m^s \right]^{\frac{N_{i,m+1}}{s}} e^{-\left[ \frac{m-i}{m-1} h_1^s + \frac{i-1}{m-1} h_m^s \right]^{\frac{1}{s}} G_i}, \tag{15}$$
where $s \neq 0$.
Then, the posterior distribution $p(\lambda, h_1, h_m, s \mid \mathbf{x}, \mathbf{y})$ can be written as follows:
$$p(\lambda, h_1, h_m, s \mid \mathbf{x}, \mathbf{y}) \propto \pi_1(\lambda)\, L_1(\lambda; \mathbf{x}, \mathbf{y})\; \pi_2(h_1, h_m, s)\, L_2(h_1, h_m, s; \mathbf{x}, \mathbf{y}), \tag{16}$$
where
$$\pi_1(\lambda)\, L_1(\lambda; \mathbf{x}, \mathbf{y}) = \pi_1(\lambda) \prod_{i=1}^{m-1} \lambda^{N_{i,i+1} - Q_{i,i+1}}\, e^{-\lambda Z_i^A} \prod_{i \in A} e^{\lambda d_i}$$
and
$$\pi_2(h_1, h_m, s)\, L_2(h_1, h_m, s; \mathbf{x}, \mathbf{y}) = \pi_2(h_1, h_m, s) \times \prod_{i=1}^{m} \left[ \frac{m-i}{m-1} h_1^s + \frac{i-1}{m-1} h_m^s \right]^{\frac{N_{i,m+1}}{s}} \times \prod_{i=1}^{m} e^{-\left[ \frac{m-i}{m-1} h_1^s + \frac{i-1}{m-1} h_m^s \right]^{\frac{1}{s}} G_i},$$
with $\pi_i$ and $L_i$ denoting the respective prior distributions and likelihood functions, for $i = 1, 2$.
Based on (16), it is straightforward to see that the posterior distribution of the PTAM parameters can be decomposed into two independent posterior distributions—$p(\lambda \mid \mathbf{x}, \mathbf{y})$ and the joint posterior distribution $p(h_1, h_m, s \mid \mathbf{x}, \mathbf{y})$—where
$$p(\lambda \mid \mathbf{x}, \mathbf{y}) \propto \pi_1(\lambda)\, L_1(\lambda; \mathbf{x}, \mathbf{y}),$$
$$p(h_1, h_m, s \mid \mathbf{x}, \mathbf{y}) \propto \pi_2(h_1, h_m, s)\, L_2(h_1, h_m, s; \mathbf{x}, \mathbf{y}).$$
Thus, we can evaluate the posterior distribution for $\lambda$ separately, using the gamma prior (21), which produces a posterior distribution (22) of the same class, as follows:
$$\lambda \sim \mathrm{Gamma}(v_\lambda, \xi_\lambda), \tag{21}$$
$$\lambda \mid \mathbf{x}, \mathbf{y} \sim \mathrm{Gamma}\!\left( v_\lambda + \sum_{i=1}^{m-1} N_{i,i+1} - \sum_{i=1}^{m-1} Q_{i,i+1},\ \xi_\lambda + \sum_{i=1}^{m-1} Z_i^A - \sum_{i \in A} d_i \right). \tag{22}$$
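In R, the draw in (22) is a single Gamma variate; the sketch below uses our own names and assumes the tallied statistics $N_{i,i+1}$, $Q_{i,i+1}$, $Z_i^A$, and the entry times $d_i$ for $i \in A$ are available from the data augmentation step:

```r
# Hedged sketch of the conjugate update (22) for lambda under left truncation
draw_lambda <- function(v_lambda, xi_lambda, N_next, Q_next, ZA, d_A) {
  rgamma(1, shape = v_lambda + sum(N_next) - sum(Q_next),
            rate  = xi_lambda + sum(ZA) - sum(d_A))
}
```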
However, the likelihood function of ( h 1 , h m , s ) does not consist of independent kernels, which prevents one from determining conjugate priors. The prior distributions for h 1 , h m and s, which are then subjectively determined, are taken to be π H 1 ( h 1 ) , π H m ( h m ) and π S ( s ) . We assume, for simplicity, that h 1 , h m and s are independently distributed. Accordingly, their joint prior distribution, π 2 ( h 1 , h m , s ) , will be the product of π H 1 ( h 1 ) , π H m ( h m ) , and π S ( s ) .

4.3. The Proposed Methodology for Sampling from $p(h_1, h_m, s \mid \mathbf{x}, \mathbf{y})$

Next, a methodology needs to be developed to address the sampling from the joint posterior distribution p ( h 1 , h m , s | x , y ) . The Gibbs algorithm can be utilized again, further taking advantage of the MCMC method. In that case, the proposed algorithm will become a nested MCMC algorithm. The nested Gibbs algorithm samples from the joint posterior distribution, given the augmented data. The algorithm framework is presented in Figure 5, for a p-dimensional posterior distribution.
In the case of the joint posterior distribution for the PTAM, p ( θ | x , y ) in Figure 5 becomes p ( h 1 , h m , s | x , y ) . For example, in order to sample from p ( h 1 , h m , s | x , y ) in the ( k + 1 ) th iteration, we need to sample from the corresponding conditional distributions. These are also the transition kernels of the Gibbs algorithm, as follows:
$$p\left( \big(h_1^{(k)}, h_m^{(k)}, s^{(k)}\big) \rightarrow \big(h_1^{(k+1)}, h_m^{(k)}, s^{(k)}\big) \right) := p(h_1 \mid h_m^{(k)}, s^{(k)}, \mathbf{x}, \mathbf{y}),$$
$$p\left( \big(h_1^{(k+1)}, h_m^{(k)}, s^{(k)}\big) \rightarrow \big(h_1^{(k+1)}, h_m^{(k+1)}, s^{(k)}\big) \right) := p(h_m \mid h_1^{(k+1)}, s^{(k)}, \mathbf{x}, \mathbf{y}),$$
$$p\left( \big(h_1^{(k+1)}, h_m^{(k+1)}, s^{(k)}\big) \rightarrow \big(h_1^{(k+1)}, h_m^{(k+1)}, s^{(k+1)}\big) \right) := p(s \mid h_1^{(k+1)}, h_m^{(k+1)}, \mathbf{x}, \mathbf{y}).$$
Since general notations are adopted in Figure 5, the concept of the nested MCMC algorithm is likely applicable to other models whose posterior distributions are complicated after data augmentation.
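In code form, one sweep of this inner Gibbs cycle can be organized as follows—a skeleton with our own names, where the three conditional samplers (implementations of Algorithms 3–5 below) are supplied as functions:

```r
# Hedged skeleton of the inner Gibbs cycle over (h1, hm, s); each draw_*
# argument is a function implementing one of Algorithms 3-5 (names ours).
inner_gibbs <- function(h1, hm, s, draw_h1, draw_hm, draw_s, w1) {
  for (j in 2:w1) {
    h1 <- draw_h1(hm, s)      # sample from p(h1 | hm, s, x, y)
    hm <- draw_hm(h1, s)      # sample from p(hm | h1, s, x, y)
    s  <- draw_s(h1, hm)      # sample from p(s  | h1, hm, x, y)
  }
  c(h1 = h1, hm = hm, s = s)
}
```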
Define g ( h 1 , h m , s ) : = π 2 ( h 1 , h m , s ) L 2 ( h 1 , h m , s ; x , y ) .
First, we introduce a sampling scheme for p h 1 | h m ( k ) , s ( k ) , x , y , which is the conditional distribution of h 1 . We know the following:
$$p(h_1 \mid h_m^{(k)}, s^{(k)}, \mathbf{x}, \mathbf{y}) = \frac{g\big(h_1, h_m^{(k)}, s^{(k)}\big)}{\int_0^{h_m^{(k)}} g\big(h_1, h_m^{(k)}, s^{(k)}\big)\, dh_1} \propto g\big(h_1, h_m^{(k)}, s^{(k)}\big).$$
Rejection sampling can then be utilized in conjunction with g h 1 , h m ( k ) , s ( k ) , as described in Algorithm 3.
Algorithm 3 The rejection sampling algorithm for $p(h_1 \mid h_m^{(k)}, s^{(k)}, \mathbf{x}, \mathbf{y})$.
1:
Calculate the maximum value of $\ln g(h_1, h_m^{(k)}, s^{(k)})$ on $(0, h_m^{(k)})$. Denote it by $m_1$.
2:
Draw a pair of samples $(x, \ln(y))$: $X \sim \mathrm{Uniform}(0, h_m^{(k)})$ and $\ln(Y) = \ln(U) + m_1$, where $U \sim \mathrm{Uniform}(0, 1)$.
3:
while $\ln g(x, h_m^{(k)}, s^{(k)}) \le \ln(y)$ do
4:
    repeat Step 2
5:
end while
6:
$h_1^{(k+1)} \leftarrow x$.
Secondly, we consider the sampling scheme for $p(h_m \mid h_1^{(k+1)}, s^{(k)}, \mathbf{x}, \mathbf{y})$, which is the full conditional distribution of $h_m$. We know the following:
$$p(h_m \mid h_1^{(k+1)}, s^{(k)}, \mathbf{x}, \mathbf{y}) = \frac{g\big(h_m, h_1^{(k+1)}, s^{(k)}\big)}{\int_{h_1^{(k+1)}}^{\infty} g\big(h_m, h_1^{(k+1)}, s^{(k)}\big)\, dh_m} \propto g\big(h_m, h_1^{(k+1)}, s^{(k)}\big).$$
Rejection sampling can be utilized in conjunction with g h m , h 1 ( k + 1 ) , s ( k ) , as described in Algorithm 4.
Algorithm 4 The rejection sampling algorithm for $p(h_m \mid h_1^{(k+1)}, s^{(k)}, \mathbf{x}, \mathbf{y})$.
1:
Calculate the maximum value of $\ln g(h_m, h_1^{(k+1)}, s^{(k)})$ on $(h_1^{(k+1)}, a)$. Denote it by $m_2$. In this case, $a$ is a large enough truncation point.
2:
Draw a pair of samples $(x, \ln(y))$: $X \sim \mathrm{Uniform}(h_1^{(k+1)}, a)$ and $\ln(Y) = \ln(U) + m_2$, where $U \sim \mathrm{Uniform}(0, 1)$.
3:
while $\ln g(x, h_1^{(k+1)}, s^{(k)}) \le \ln(y)$ do
4:
    repeat Step 2
5:
end while
6:
$h_m^{(k+1)} \leftarrow x$.
Thirdly, we consider the sampling scheme for $p(s \mid h_1^{(k+1)}, h_m^{(k+1)}, \mathbf{x}, \mathbf{y})$, which is the full conditional distribution of $s$. We know the following:
$$p(s \mid h_1^{(k+1)}, h_m^{(k+1)}, \mathbf{x}, \mathbf{y}) = \frac{g\big(s, h_1^{(k+1)}, h_m^{(k+1)}\big)}{\int_{-\infty}^{\infty} g\big(s, h_1^{(k+1)}, h_m^{(k+1)}\big)\, ds} \propto g\big(s, h_1^{(k+1)}, h_m^{(k+1)}\big).$$
Rejection sampling can be utilized in conjunction with g s , h 1 ( k + 1 ) , h m ( k + 1 ) , as described in Algorithm 5.
Algorithm 5 The rejection sampling algorithm for $p(s \mid h_1^{(k+1)}, h_m^{(k+1)}, \mathbf{x}, \mathbf{y})$.
1:
Calculate the maximum value of $\ln g(s, h_1^{(k+1)}, h_m^{(k+1)})$ on $(b, c)$. Denote it by $m_3$. In this case, $|b|$ and $c$ are large enough truncation points.
2:
Draw a pair of samples $(x, \ln(y))$: $X \sim \mathrm{Uniform}(b, c)$ and $\ln(Y) = \ln(U) + m_3$, where $U \sim \mathrm{Uniform}(0, 1)$.
3:
while $\ln g(x, h_1^{(k+1)}, h_m^{(k+1)}) \le \ln(y)$ do
4:
    repeat Step 2
5:
end while
6:
$s^{(k+1)} \leftarrow x$.
The rejection sampling schemes presented in Algorithms 3–5 also constitute important original contributions. Unlike traditional rejection sampling, where a proposal function is chosen to fully cover the target density, the proposed scheme transforms the accept–reject comparison to a logarithmic scale. This differs from returning a log posterior value to a generic MCMC sampler, as the proposed method applies log-scale bounds within Gibbs updates for individual conditional distributions, some of which have truncated support. We note that the values of the posterior kernels, which involve likelihood functions, are often too small to be represented numerically. In fact, sampling on a logarithmic scale is analogous to taking the logarithm of likelihood functions in order to find MLEs, since both frequentist and Bayesian methods face the same problem caused by small likelihood function values. However, they deal with this problem differently. In some regular problems, frequentist inference may be performed by maximizing a function (often numerically) and using the curvature at the maximum to quantify the uncertainty in an estimate. Other authors have addressed frequentist inference in a non-regular context (see [21]). This study focuses on Bayesian inference, in which case the output is a (posterior) distribution rather than a point estimate. In this context, random sampling techniques are involved instead of optimization techniques, as is the case for the rejection sampling on a logarithmic scale presented in Algorithms 3–5. Technical details regarding rejection sampling on a logarithmic scale are elaborated on in Appendix A.

4.4. The MCMC Algorithm for the PTAM

Combining all these building blocks in the data augmentation step and the posterior sampling step, Algorithm 6 presents the MCMC algorithm for Bayesian inference on the PTAM.
In Step 14, for the inner Gibbs sampling, the initial values at each iteration are set to the parameter outputs of the previous iteration. Because the parameter outputs themselves become increasingly accurate as they converge to the true posterior distribution, using them as initial values is both reasonable and objective, and it allows the algorithm to be used to full advantage.
Algorithm 6 The MCMC algorithm for Bayesian inference on the PTAM.
Require: The number of states, m, based on prior knowledge or subjective judgment.
Input:
  1.
The data observations $\mathbf{y}$, and the entry times $d_i$'s if the data are left-truncated.
  2.
The hyper-parameters for prior distributions.
  3.
The number of states m.
  4.
The number of inner and outer iterations: w 1 and w 2 .
  5.
The size of the burn-in period and the thinning rate, if applicable.
Output: The posterior samples for h 1 , h m , s and λ , each of which has w 2 sample points.
  1:
Initialize $\big(\lambda^{(1)}, h_1^{(1)}, h_m^{(1)}, s^{(1)}\big)$.
  2:
Initialize $\big(\lambda_{\mathrm{Gibbs}}^{(1)}, h_{1,\mathrm{Gibbs}}^{(1)}, h_{m,\mathrm{Gibbs}}^{(1)}, s_{\mathrm{Gibbs}}^{(1)}\big) \leftarrow \big(\lambda^{(1)}, h_1^{(1)}, h_m^{(1)}, s^{(1)}\big)$.
  3:
for  k = 2 : w 2  do
  4:
    Draw a sample path x from p x | λ ( k 1 ) , h 1 ( k 1 ) , h m ( k 1 ) , s ( k 1 ) , y , based on Algorithm 1 (or Algorithm 2 for right-censored data).
  5:
    Based on x , calculate ( N , Q , Z A , A , G ) .
  6:
    for  j = 2 : w 1  do
  7:
        Sample h 1 , G i b b s ( j ) from p ( h 1 | h m , G i b b s ( j 1 ) , s G i b b s ( j 1 ) , N , G ) , based on Algorithm 3.
  8:
        Sample h m , G i b b s ( j ) from p ( h m | h 1 , G i b b s ( j ) , s G i b b s ( j 1 ) , N , G ) , based on Algorithm 4.
  9:
        Sample s G i b b s ( j ) from p ( s | h 1 , G i b b s ( j ) , h m , G i b b s ( j ) , N , G ) , based on Algorithm 5.
10:
    end for
11:
    Sample λ ( k ) from p λ | N , Q , Z A .
12:
$\big(h_1^{(k)}, h_m^{(k)}, s^{(k)}\big) \leftarrow \big(h_{1,\mathrm{Gibbs}}^{(w_1)}, h_{m,\mathrm{Gibbs}}^{(w_1)}, s_{\mathrm{Gibbs}}^{(w_1)}\big)$.
13:
    Reset the inner Gibbs sampling vector to zeros.
14:
$\big(\lambda_{\mathrm{Gibbs}}^{(1)}, h_{1,\mathrm{Gibbs}}^{(1)}, h_{m,\mathrm{Gibbs}}^{(1)}, s_{\mathrm{Gibbs}}^{(1)}\big) \leftarrow \big(\lambda^{(k)}, h_1^{(k)}, h_m^{(k)}, s^{(k)}\big)$.
15:
end for

5. Simulation Study

In this section, the proposed algorithm is implemented via a simulation study. The aim of the study is to demonstrate that the proposed MCMC algorithm is sound and that the parameter estimability of the PTAM can be improved via sound prior information. Consider the following experimental conditions:
(i)
The underlying parameters are $m = 10$, $\lambda = 1.99908$, $h_1 = 0.0008$, $h_m = 1.65349$, and $s = -0.11118$. They were taken from the simulation study on the Le Bras limiting distribution carried out in [1], except that $m$ is assumed to take a moderate value of ten.
(ii)
The sample size is 50.
(iii)
There are 4500 iterations of the Gibbs sampler for data augmentation.
(iv)
There are 500 iterations of the inner Gibbs sampling for the posterior distribution.
(v)
The first 500 iterations are taken as burn-in, based on cumulative standard deviation plots (see [19]).
(vi)
A thinning rate of 10 is adopted, based on autocorrelation functions (ACFs) (see [19]).
(vii)
The prior distributions are assumed to be sound; that is, the prior means remain close to the true parameter values, with low variances:
$$\pi_{H_1}(h_1) = \mathrm{Gamma}(v_{h_1} = 0.01,\ \xi_{h_1} = 10), \quad \pi_{H_m}(h_m) = \mathrm{Gamma}(v_{h_m} = 3,\ \xi_{h_m} = 1.5), \quad \pi_S(s) = 8 e^{8s},\ s < 0, \quad \pi_\Lambda(\lambda) = \mathrm{Gamma}(v_\lambda = 24,\ \xi_\lambda = 16).$$
We assume that $s < 0$ because the dying rates form a fairly convex increasing pattern, as displayed in Figure 3, which is consistent with the biological interpretation of dying rates.
After implementing Algorithm 6, the results are listed in Table 1 and are illustrated in Figure 6 and Figure 7.
In Table 1, the true parameters are all within the corresponding 95% credible intervals. This indicates that the proposed MCMC algorithm for Bayesian inference is quite satisfactory. It can be seen from Figure 6 that the correlations between $h_1$, $h_m$, and $s$ are minimal. This indicates that the dependence structure of $h_1$, $h_m$, and $s$ in the likelihood function has little effect on the shape of the posterior distributions, so that $h_1$, $h_m$, and $s$ remain nearly independent, as was assumed in the prior distributions. This observation suggests that the estimability of $h_1$, $h_m$, and $s$ could be poor. In fact, the same conclusion can be reached by observing the diagonal panels of Figure 6, which reveal the shapes of their posterior distributions. In particular, the posterior distribution for $s$ closely resembles the prior. This suggests that their distributions are less responsive to the data, so that the prior effects are, to some degree, still preserved in the behaviour of their posterior distributions. This indicates a weaker inferential power and therefore poorer estimability. In contrast, $\lambda$ is more estimable, as the shape of its posterior distribution differs substantially from that of its prior. Therefore, the role of the prior distributions is crucial for dealing with flat likelihood functions. Sound prior information can improve the accuracy of the parameter estimates, as the posterior distributions are highly dependent on the priors.
In Figure 7, the convergence of the proposed MCMC algorithm is assessed by means of trace plots, ACFs, and ergodic mean plots. First, the trace plots demonstrate the stationarity of the MCMC samples in terms of level-off patterns, though there are occasionally a few spikes for $h_1$ and $s$. Such spikes are a normal phenomenon, as the shapes of their posterior densities remain close to their skewed prior densities due to poor estimability. Secondly, the ACFs for all parameters are within the tolerance range after the second lag. This indicates that the thinning rate effectively reduces the autocorrelation between the MCMC samples. Thirdly, the ergodic means all converge as the number of iterations increases. This suggests that the number of iterations, namely 4500, is sufficient for the simulated MCMC samples to be regarded as approximately generated from the stationary distributions, which are the target posterior distributions.
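The diagnostics discussed above can be reproduced with a few lines of base R. The sketch below is our own and assumes the retained draws for a single parameter are stored in a numeric vector:

```r
# Hedged sketch of the checks behind Figure 7, for one parameter's draws
draws <- rnorm(4500)                           # placeholder for MCMC output

kept <- draws[-(1:500)]                        # burn-in, as in condition (v)
kept <- kept[seq(1, length(kept), by = 10)]    # thinning, as in condition (vi)

plot(kept, type = "l", main = "Trace plot")
acf(kept, main = "ACF")
erg_mean <- cumsum(kept) / seq_along(kept)     # ergodic (running) mean
plot(erg_mean, type = "l", main = "Ergodic mean")
```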

Prior Sensitivity Analysis

To further validate the vital role of sound prior information in terms of estimability improvement, we now conduct a prior sensitivity analysis. Two alternative types of priors are tested. The first type is taken to be falsely informative, where the prior means deviate noticeably from the true parameter values with low variances. The second type is taken to be non-informative, where parameters are uniformly distributed. The results are listed in Table 2 and illustrated in Figure 8 and Figure 9.
It can be seen from Table 2 and Figure 8 that when priors are taken to be falsely informative, the 95% credible intervals for h 1 , h m , and s all failed to cover the true values. This is as expected because their likelihood functions are flat due to poor estimability. Then, the posterior distributions will be highly dependent on the prior distributions. On the other hand, the interval for λ remains narrow and covers the true value, which indicates a better estimability than that of h 1 , h m , and s.
Next, when the priors are taken to be non-informative, the shape of the posterior density is totally determined by the shape of the likelihood function. It can be seen from Table 2 and Figure 9 that, while covering their MLEs as expected, the 95% credible intervals for $h_1$, $h_m$, and $s$ are extremely wide. This further corroborates the flatness of their likelihood functions with the concomitant poor estimability. On the other hand, the interval for $\lambda$ still remains narrow while covering its MLE, which indicates better estimability.
Upon completing this prior sensitivity analysis, all conclusions are consistent with one another throughout this simulation study. The poor estimability of $h_1$, $h_m$, and $s$, as well as the improved estimability of $\lambda$, has been demonstrated. The significant prior sensitivity of $h_1$, $h_m$, and $s$ indicates that suitable prior information indeed plays a significant role in improving their estimability. Therefore, it is crucial to select priors that are as sound as possible when performing Bayesian inference. Otherwise, deficient priors might yield unreliable parameter estimates, particularly when estimability is poor.

6. Data Analysis

In Section 5, we have shown that the proposed MCMC algorithm can improve parameter estimability for the PTAM by making use of sound prior information. In this section, we will demonstrate that in addition to improving estimability via sound prior information, the proposed algorithm can also be utilized to adapt the PTAM to real-life data.
Consider the data collected from the Channing House, which is a retirement community in Palo Alto, California. The data consist of entry ages, ages at death, and ages at study end for 462 people who resided in the facility between January 1964 and July 1975 (see [22]). The Channing House data are chosen because all the residents in the community are approximately subject to the same circumstances, so that, relatively speaking, the aging process is the most significant factor contributing to the variability in their lifetimes, which is the process we intend to model using the PTAM. Moreover, the female data are chosen to preclude the effects of gender differences. Of the 361 females, 129 died while residing in Channing House, whereas the other 232 survived to the end of the study.
In practice, residents join a retirement community at various physiological ages. According to the Channing House data, the youngest entry age is 61. Thus, for modeling purposes, it will be assumed that the aging process starts at calendar age 50 for all residents. Under that setting, residents are expected to have variable physiological ages at the time of entering the study. As well, letting m = 20 ought to be more than adequate.
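A sketch of this preprocessing is given below. It assumes the channing data frame distributed with the R package boot, in which ages are recorded in months; whether that copy coincides exactly with the subset analyzed here is an assumption on our part:

```r
# Hedged sketch: building left-truncated female lifetimes measured from the
# assumed aging onset at calendar age 50 (in years).
library(boot)
data(channing)

females <- subset(channing, sex == "Female")
origin  <- 50 * 12                       # aging assumed to start at age 50
d <- (females$entry - origin) / 12       # entry (left-truncation) times
y <- (females$exit  - origin) / 12       # death or study-end times
died <- females$cens == 1                # deaths vs. right-censored residents
```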
Unlike what was assumed in Section 5, an underlying model does not exist. In that case, the prior distributions are surmised to be as follows:
$$\pi_{H_1}(h_1) = \mathrm{Gamma}(v_{h_1} = 0.002,\ \xi_{h_1} = 2), \quad \pi_{H_m}(h_m) = \mathrm{Gamma}(v_{h_m} = 12.5,\ \xi_{h_m} = 5), \quad \pi_S(s) = e^{s},\ s < 0, \quad \pi_\Lambda(\lambda) = \mathrm{Gamma}(v_\lambda = 1.5,\ \xi_\lambda = 5).$$
The priors are deliberately chosen in such a way that the model with parameters taken as the prior mean is far away from the Kaplan–Meier survival function estimates, as plotted in Figure 10. The purpose of proceeding this way is to more persuasively show that the proposed Bayesian approach is valid. In practice, of course, one should select the priors in such a way that the model with parameters taken as the prior mean is as close to the Kaplan–Meier survival function estimates as possible.
Using the proposed Bayesian approach, the parameter estimation results are displayed in Table 3.
In Figure 10, we illustrate the goodness of fit of the PTAM to the Channing House female data by plotting the fitted survival function along with the nonparametric Kaplan–Meier survival function estimates. In addition, for comparison purposes, we also plotted the model with parameters taken as the prior mean, the fitted model using the MLE method, and the fitted model obtained in [1]. It can be observed that the PTAM fits the Channing House female data very well, as the associated fitted survival function stays within the 95% confidence limits of the Kaplan–Meier estimates. The significant difference between the fitted model and the model with parameters taken as the prior mean, as mentioned earlier, very convincingly validates the proposed Bayesian approach. This difference clearly shows that the prior distributions are actually updated to the corresponding posterior distributions for the Channing House female data.
Furthermore, the fitted models with m = 20 , whether they are estimated based on the MLEs or the proposed Bayesian method, are in very close agreement with the fitted model in [1], where m = 100 . In fact, the fitted model with m = 20 fits the data even better for ages between 91 and 101.

7. Concluding Remarks

An MCMC algorithm for Bayesian inference on the PTAM was proposed. Two contributions were made on the basis of existing MCMC algorithms for Bayesian inference on continuous phase-type distributions. First, a sampling scheme was proposed for posterior sampling after data augmentation. Secondly, an existing data augmentation technique was further developed to incorporate left-truncated data. In the simulation study, the proposed approach was applied to a ten-state PTAM. The results showed that with sound prior information, the proposed approach indeed improved parameter estimability by producing narrower credible intervals that captured the true values. Then, the results were also applied to calibrate the PTAM to a mortality data set collected from a retirement community, which produced reasonable results that are comparable to those obtained in previous work. All in all, while numerical results indicate that the proposed methodology improves parameter estimability for the PTAM as opposed to the MLE method, it may also be utilized as a stand-alone model-fitting technique.

Author Contributions

Conceptualization, X.L.; methodology, C.N.; software, C.N.; validation, C.N.; formal analysis, C.N.; investigation, C.N.; resources, X.L.; data curation, C.N.; writing—original draft preparation, C.N.; writing—review and editing, X.L., C.N., S.P. and J.R.; visualization, C.N.; supervision, X.L., S.P. and J.R.; project administration, X.L., S.P. and J.R.; funding acquisition, X.L. and S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (grant number R0610A01).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The financial support of the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged. We would like to express our sincere thanks to both reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Rejection Sampling on a Logarithmic Scale

We briefly recall the rejection sampling method, which enables one to sample from a given continuous p.d.f. $p(\theta)$, where $\theta \in (a, b)$, given a function $f(\theta) \propto p(\theta)$.
Algorithm A1 The rejection sampling algorithm with a uniform proposal distribution.
1:
Calculate the global maximum of f ( θ ) on ( a , b ) . Define it as w.
2:
Draw a pair of uniformly distributed samples $(\theta, y)$: $\Theta \sim \mathrm{Uniform}(a, b)$ and $Y \sim \mathrm{Uniform}(0, w)$.
3:
while $f(\theta) \le y$ do
4:
    repeat Step 2
5:
end while
6:
Take θ as the sample.
Note that Algorithm A1 utilizes a uniform distribution for $\Theta$ as the proposal distribution, with p.d.f. defined as $v(\theta) = \frac{1}{b-a}$. Then, as required for implementing rejection sampling, a constant $c = w(b-a)$ is selected so that $c\, v(\theta) = w \ge f(\theta)$ for all $\theta \in (a, b)$. According to the theory of rejection sampling, the proposal distribution $v(\theta)$ does not have to be uniform, as long as the requirement $c\, v(\theta) \ge f(\theta)$ is satisfied. In this study, it is taken to be the uniform distribution, which simplifies the process.
However, when $f$ is a posterior kernel, its value, which involves the likelihood function, is likely to be small. In fact, in the simulation study on the PTAM, its value is so small that it outputs a value of zero in R. Thus, in order to carry out the rejection sampling scheme, we have to transform the posterior kernel to a logarithmic scale. In other words, instead of comparing $f(\theta)$ and $y$, we compare $\ln f(\theta)$ and $\ln(y)$, in which case the distribution of $\ln(Y)$ has to be determined.
For $x \in (-\infty, \ln(w))$, we have the following:
$$P(\ln(Y) \le x) = P(Y \le e^x) = \frac{e^x}{w}.$$
Upon inverting the c.d.f., we may achieve the sampling of ln ( Y ) via the following relationship:
$$\ln(Y) = \ln(w) + \ln(U),$$
where $U \sim \mathrm{Uniform}(0, 1)$.
Therefore, one may sample on a logarithmic scale, as in Algorithm A2. This allows one to work with ln ( w ) , when w is so small that it outputs a value of zero in R.
Algorithm A2 Algorithm A1 on a logarithmic scale.
1:
Calculate the global maximum of ln f ( θ ) on ( a , b ) . Define it as w.
2:
Draw a pair of samples $(\theta, \ln(y))$: $\Theta \sim \mathrm{Uniform}(a, b)$ and $\ln(Y) = \ln(U) + w$, where $U \sim \mathrm{Uniform}(0, 1)$.
3:
while $\ln f(\theta) \le \ln(y)$ do
4:
    repeat Step 2
5:
end while
6:
Take θ as the sample.
Algorithms 3–5 are then direct applications of Algorithm A2.
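For completeness, a compact R rendering of Algorithm A2 (our own transcription, not code from the paper) is given below; optimize() is used for Step 1, which presumes the log kernel is unimodal on $(a, b)$ so that the numerical maximum is global:

```r
# Hedged sketch: rejection sampling on a logarithmic scale (Algorithm A2).
# log_f evaluates the log of a kernel proportional to the target density.
rejection_log <- function(log_f, a, b) {
  w <- optimize(log_f, c(a, b), maximum = TRUE)$objective  # Step 1
  repeat {
    theta <- runif(1, a, b)              # proposal Theta ~ Uniform(a, b)
    log_y <- log(runif(1)) + w           # ln(Y) = ln(U) + w
    if (log_f(theta) > log_y) return(theta)
  }
}

# Example: a kernel whose raw value underflows to zero in double precision
log_kernel <- function(x) -5000 + dnorm(x, log = TRUE)
samples <- replicate(1000, rejection_log(log_kernel, -5, 5))
```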

Appendix B. Data Augmentation with Left-Truncated Data

Appendix B.1. Case 1

To begin with, consider a sample path of the underlying Markov process of the PTAM presented in Table A1, where m > 5 .
Table A1. A PTAM sample path generated from data augmentation.
state | 1 | 2 | 3 | 4 | 5 | m + 1
sojourn time | $t_1$ | $t_2$ | $t_3$ | $t_4$ | $t_5$ | 0
The likelihood function of this sample path is then as follows:
$$L(h_1, h_m, s, \lambda; \mathbf{x}, \mathbf{y}) = \prod_{i=1}^{4} \lambda e^{-(\lambda + h_i) t_i}\; h_5\, e^{-(\lambda + h_5) t_5},$$
as the inter-arrival time of a Markov process is exponentially distributed.
Now, without any loss of generality, suppose that the individual enters the study at time $d$, where $t_1 < d < t_1 + t_2$. In that case, we only know that the individual is alive at time $d$ with a physiological age in state 2. The data are then left-truncated at time $d$. According to the proposed likelihood function specified in (13), the likelihood function for these left-truncated data is as follows:
$$L(h_1, h_m, s, \lambda; \mathbf{x}, \mathbf{y}) = \frac{\prod_{i=1}^{4} \lambda e^{-(\lambda + h_i) t_i}\; h_5\, e^{-(\lambda + h_5) t_5}}{\lambda e^{-(\lambda + h_1) t_1}\; e^{-(\lambda + h_2)(d - t_1)}} = \lambda e^{-(\lambda + h_2)(t_1 + t_2 - d)} \prod_{i=3}^{4} \lambda e^{-(\lambda + h_i) t_i}\; h_5\, e^{-(\lambda + h_5) t_5}.$$

Appendix B.2. Case 2

When the individual enters the study at the last physiological age, the likelihood function will be slightly different. To see this, consider another case where the simulated sample path is that presented in Table A2.
Table A2. A PTAM sample path generated from data augmentation.
state | 1 | 2 | 3 | 4 | 5 | $\cdots$ | m | m + 1
sojourn time | $t_1$ | $t_2$ | $t_3$ | $t_4$ | $t_5$ | $\cdots$ | $t_m$ | 0
Accordingly, we have $\sum_{i=1}^{m-1} t_i < d < \sum_{i=1}^{m} t_i$. In that case, we only know that the individual is alive at time $d$ with a physiological age in state $m$ (the last state). The likelihood function is then as follows:
$$L(h_1, h_m, s, \lambda; \mathbf{x}, \mathbf{y}) = \frac{\prod_{i=1}^{m-1} \lambda e^{-(\lambda + h_i) t_i}\; h_m\, e^{-h_m t_m}}{\prod_{i=1}^{m-1} \lambda e^{-(\lambda + h_i) t_i}\; e^{-h_m \left( d - \sum_{i=1}^{m-1} t_i \right)}} = h_m\, e^{-h_m \left( \sum_{i=1}^{m} t_i - d \right)}.$$
Clearly, what distinguishes the two cases are the rates in the exponents. For the first $m-1$ states, the rates include $\lambda$; however, this is not the case for the last state, which is attributable to the definition of the PTAM. Thus, in order to construct the likelihood function for left-truncated data, one needs to consider these two cases separately, which explains why the set $A$ must be defined.
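The two cases can be handled jointly in code. The following R sketch (our own helper; in the first case it assumes the individual leaves its entry state before absorption) evaluates the log-likelihood contribution of a single augmented path:

```r
# Hedged sketch: log-likelihood contribution of one augmented sample path
# with sojourn times t (states 1..length(t)), entry time d, constant
# transition rate lambda, and exit rates h (names ours).
path_loglik <- function(t, d, lambda, h) {
  mi  <- length(t)                          # state right before absorption
  cum <- cumsum(t)
  if (mi == length(h) && d > cum[mi - 1]) { # case 2: entry in the last state
    return(log(h[mi]) - h[mi] * (cum[mi] - d))
  }
  n <- findInterval(d, cum)                 # entry occurs in state n + 1
  # case 1 (assumes mi > n + 1): survival in the entry state, then the
  # remaining transitions, then the exit from state mi
  ll <- log(lambda) - (lambda + h[n + 1]) * (cum[n + 1] - d)
  if (mi > n + 2) for (j in (n + 2):(mi - 1))
    ll <- ll + log(lambda) - (lambda + h[j]) * t[j]
  ll + log(h[mi]) - (lambda + h[mi]) * t[mi]
}
```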

Appendix B.3. Derivation of the Likelihood Function for the Case of Left-Truncated Data

Now, let us finally consider the various sample paths corresponding to $M$ individuals. To treat the two cases separately, let $A$ be, as in (14), the set of indices of the sample paths that enter the study before reaching state $m$ (the first case above). Let $m(i)$ be the state right before absorption for the $i$th individual. Moreover, let $n(i)$ be such that $\sum_{j=1}^{n(i)} t_j^{(i)} < d_i < \sum_{j=1}^{n(i)+1} t_j^{(i)}$. Then, the likelihood function is as follows:
$$\begin{aligned} L(\lambda, h_1, h_m, s; \mathbf{x}, \mathbf{y}) ={}& \prod_{i \in A} \left[ \lambda e^{-(\lambda + h_{n(i)+1}) \left( \sum_{j=1}^{n(i)+1} t_j^{(i)} - d_i \right)} \prod_{j=n(i)+2}^{m(i)-1} \lambda e^{-(\lambda + h_j) t_j^{(i)}} \; h_{m(i)}\, e^{-(\lambda + h_{m(i)})\, t_{m(i)}^{(i)}} \right] \times \prod_{i \notin A} h_m\, e^{-h_m \left( \sum_{j=1}^{m} t_j^{(i)} - d_i \right)} \\ ={}& \lambda^{\sum_{i \in A} (m(i) - n(i) - 1)}\, e^{-\lambda \sum_{i \in A} \sum_{j=1}^{m(i)} t_j^{(i)}} \prod_{i=1}^{M} h_{m(i)} \\ &\times \prod_{i \in A} e^{-h_{n(i)+1} \left( \sum_{j=1}^{n(i)+1} t_j^{(i)} - d_i \right) - \sum_{j=n(i)+2}^{m(i)} h_j t_j^{(i)}} \prod_{i \notin A} e^{-h_m \left( \sum_{j=1}^{m} t_j^{(i)} - d_i \right)} \times \prod_{i \in A} e^{\lambda d_i}. \end{aligned} \tag{A6}$$
According to the definitions of $Q_{ij}$, $Z_i^A$, and $G_i$, as well as those of $N_{ij}$ and $Z_i$ given in [6], the following is obtained:
$$\sum_{i \in A} \big( m(i) - n(i) - 1 \big) =: \sum_{i=1}^{m-1} \big( N_{i,i+1} - Q_{i,i+1} \big), \tag{A7}$$
$$\sum_{i \in A} \sum_{j=1}^{m(i)} t_j^{(i)} =: \sum_{i=1}^{m-1} Z_i^A, \tag{A8}$$
$$\prod_{i=1}^{M} h_{m(i)} =: \prod_{i=1}^{m} h_i^{N_{i,m+1}}, \tag{A9}$$
and
$$\prod_{i \in A} e^{-h_{n(i)+1} \left( \sum_{j=1}^{n(i)+1} t_j^{(i)} - d_i \right) - \sum_{j=n(i)+2}^{m(i)} h_j t_j^{(i)}} \prod_{i \notin A} e^{-h_m \left( \sum_{j=1}^{m} t_j^{(i)} - d_i \right)} =: \prod_{i=1}^{m} e^{-h_i G_i}. \tag{A10}$$
Substituting (A7)–(A10) into (A6) yields the final representation given in (13).

References

  1. Cheng, B.; Jones, B.; Liu, X.; Ren, J. The mathematical mechanism of biological aging. N. Am. Actuar. J. 2021, 25, 73–93.
  2. Cheng, B. A Class of Phase-Type Aging Models and their Lifetime Distributions. Ph.D. Thesis, Western University, London, ON, Canada, 2021.
  3. Raue, A.; Kreutz, C.; Maiwald, T.; Bachmann, J.; Schilling, M.; Klingmüller, U.; Timmer, J. Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood. Bioinformatics 2009, 25, 1923–1929.
  4. Firth, D. Bias reduction of maximum likelihood estimates. Biometrika 1993, 80, 27–38.
  5. Tanner, M.A.; Wong, W.H. From EM to data augmentation: The emergence of MCMC Bayesian computation in the 1980s. Stat. Sci. 2010, 25, 506–516.
  6. Asmussen, S.; Nerman, O.; Olsson, M. Fitting phase-type distributions via the EM algorithm. Scand. J. Stat. 1996, 23, 419–441.
  7. Bladt, M.; Gonzalez, A.; Lauritzen, S.L. The estimation of phase-type related functionals using Markov chain Monte Carlo methods. Scand. Actuar. J. 2003, 4, 280–300.
  8. Aslett, L.J.; Wilson, S.P. Markov chain Monte Carlo for inference on phase-type models. In Proceedings of the 2011 International Statistical Institute World Statistics Congress (ISI WSC), Dublin, Ireland, 21–26 August 2011; Volume 120.
  9. Watanabe, R.; Okamura, H.; Dohi, T. An efficient MCMC algorithm for continuous PH distributions. In Proceedings of the 2012 Winter Simulation Conference (WSC), Berlin, Germany, 9–12 December 2012; pp. 1–12.
  10. Olsson, M. Estimation of phase-type distributions from censored data. Scand. J. Stat. 1996, 23, 443–460.
  11. Lin, X.S.; Liu, X. Markov aging process and phase-type law of mortality. N. Am. Actuar. J. 2007, 11, 92–109.
  12. Su, S.; Sherris, M. Heterogeneity of Australian population mortality and implications for a viable life annuity market. Insur. Math. Econ. 2012, 51, 322–332.
  13. Aalen, O.O. Phase-type distributions in survival analysis. Scand. J. Stat. 1995, 22, 447–463.
  14. Box, G.E.; Cox, D.R. An analysis of transformations. J. R. Stat. Soc. Ser. B (Methodol.) 1964, 26, 211–243.
  15. Brooks, S.; Gelman, A.; Jones, G.; Meng, X.L. Handbook of Markov Chain Monte Carlo; CRC Press: New York, NY, USA, 2011.
  16. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092.
  17. Geman, S.; Geman, D. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 1984, PAMI-6, 721–741.
  18. Tanner, M.A.; Wong, W.H. The calculation of posterior distributions by data augmentation. J. Am. Stat. Assoc. 1987, 82, 528–540.
  19. Lynch, S.M. Introduction to Applied Bayesian Statistics and Estimation for Social Scientists; Springer Science & Business Media: Berlin/Heidelberg, Germany; New York, NY, USA, 2007.
  20. Okamura, H.; Watanabe, R.; Dohi, T. Variational Bayes for phase-type distribution. Commun. Stat.-Simul. Comput. 2014, 43, 2031–2044.
  21. Kolassa, J.E. Confidence intervals for parameters lying in a random polygon. Can. J. Stat. 1999, 27, 149–161.
  22. Hyde, J. Testing survival with incomplete observations. In Biostatistics Casebook; John Wiley: Hoboken, NJ, USA, 1980; pp. 31–46.
Figure 1. Phase diagram for a Coxian distribution.
Figure 2. Phase diagram for the PTAM.
Figure 3. Behaviour of the exit rate vector for various values of s with m = 100.
Figure 4. Iterative framework of the data augmentation Gibbs sampler.
Figure 5. The MCMC algorithm framework for the proposed methodology.
Figure 6. Posterior distributions and parameter correlations obtained from the MCMC samples.
Figure 7. Diagnostic plots of the MCMC samples.
Figure 8. (Left) panel: parameter estimates and 95% credible intervals for falsely informative priors. (Right) panel: enlarged plot for $h_1$.
Figure 9. (Left) panel: parameter estimates and 95% credible intervals for non-informative priors. (Right) panel: enlarged plot for $h_1$.
Figure 10. Survival functions of the PTAM calibrated to the Channing House female data using maximum likelihood estimates and the proposed Bayesian approach, the calibrated survival function with parameters taken as the prior mean, the calibrated survival function obtained in [1], and the Kaplan–Meier estimates of the survival function and corresponding 95% confidence limits.
Table 1. Posterior means and 95% credible intervals obtained from the MCMC algorithm and the true parameters.
Parameter | True | Posterior Mean | 95% Credible Interval
$h_1$ | 0.00080 | 0.001326039 | (0.00006395895, 0.00353304747)
$h_m$ | 1.65349 | 1.770213467 | (1.230238, 2.478763)
$s$ | −0.11118 | −0.085397339 | (−0.347967221, −0.001345516)
$\lambda$ | 1.99908 | 1.977085034 | (1.698050, 2.258719)
Table 2. The 95% credible intervals obtained from falsely informative and non-informative priors, MLEs, and true parameters.
Parameter | True | MLE | Falsely Informative Priors | Non-Informative Priors
$h_1$ | 0.00080 | 0.001210155 | (0.01122067, 0.02576926) | (0.0006370074, 0.0514890336)
$h_m$ | 1.65349 | 0.957246758 | (3.936409, 6.170863) | (0.5202541, 21.9348132)
$s$ | −0.11118 | −1.989832312 | (−4.8561343, −0.5556046) | (−49.4620012, −0.7391115)
$\lambda$ | 1.99908 | 2.514277429 | (1.749293, 2.170121) | (1.819338, 3.221812)
Table 3. Posterior means and 95% credible intervals obtained from the MCMC algorithm for the Channing House female data.
Parameter | Posterior Mean | 95% Credible Interval
$h_1$ | 0.0045658 | (0.00006130736, 0.00923589434)
$h_m$ | 2.475408 | (1.459456, 3.422970)
$s$ | −1.085645 | (−1.8089289, −0.1331294)
$\lambda$ | 0.4906715 | (0.4353424, 0.5284059)
