Article

Estimation of the Reliability Function of the Generalized Rayleigh Distribution under Progressive First-Failure Censoring Model

1 College of Science, Jiangxi University of Science and Technology, Ganzhou 341000, China
2 Business School, Jiangxi University of Science and Technology, Nanchang 330013, China
3 Teaching Department of Basic Subjects, Jiangxi University of Science and Technology, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(9), 580; https://doi.org/10.3390/axioms13090580
Submission received: 1 July 2024 / Revised: 11 August 2024 / Accepted: 23 August 2024 / Published: 26 August 2024
(This article belongs to the Special Issue Mathematical and Statistical Methods and Their Applications)

Abstract: This study investigates statistical inference for the parameters, reliability function, and hazard function of the generalized Rayleigh distribution under progressive first-failure censored samples, motivated by factors such as long product lifetimes and challenging experimental conditions. Firstly, the progressive first-failure model is introduced, and maximum likelihood estimation of the parameters, reliability function, and hazard function under this model is discussed. For interval estimation, confidence intervals are constructed for the parameters, reliability function, and hazard function using the bootstrap method. Next, in Bayesian estimation, considering both informative and non-informative priors, Bayesian estimates of the parameters, reliability function, and hazard function under symmetric and asymmetric loss functions are obtained using the MCMC method. Finally, a Monte Carlo simulation compares mean square errors to evaluate the performance of maximum likelihood estimation and Bayesian estimation under the different loss functions, and the performance of the estimation methods is further illustrated through examples. The results indicate that Bayesian estimation outperforms maximum likelihood estimation.

1. Introduction

1.1. Progressive First-Failure Censoring (PFFC) Model

With the improvement of people’s living standards, consumers’ demands for product quality increase day by day, which makes in-depth analysis of product lifetime particularly important. However, a lifetime analysis based on complete samples may face uncontrollable factors such as excessively long experimental time, high experimental costs, and significant losses. To address these issues, censored sampling methods have been widely applied in assessing product life and reliability. Among the many censoring schemes, Type I (fixed-time) censoring and Type II (fixed-number) censoring are the two traditional and most commonly used approaches. In Type I censoring, the experiment is terminated at a prespecified time point; although this greatly reduces experimental time and cost, it may bias product life estimation because failure data may not be adequately included. In contrast, Type II censoring terminates the experiment once a prespecified number of failures has been observed; this censoring method guarantees failure data and thus supports a more comprehensive and accurate reliability analysis, but for long-lived products it may require extended experimental time and higher costs. To optimize the trade-off between experimental time and cost, researchers have sought more suitable censoring schemes, one of which is progressive censoring [1,2,3]. Such schemes aim to balance experimental accuracy and cost in order to make optimal use of experimental resources while ensuring the accuracy of product quality assessment.
Considering factors such as long product life cycles and complex experimental conditions, Balasooriya [4] investigated first-failure censoring in 1995; this strategy can effectively analyze product reliability according to the specific characteristics of the experiment. The first-failure censoring experiment itself was first proposed by Dykstra [5] in 1964: $k \times n$ test products are divided into $n$ groups of $k$ products each, and all test units are run simultaneously. When the first failure occurs, its failure time $T_1$ is recorded and all products in the group containing it are removed; when the second failure occurs, its failure time $T_2$ is recorded and the corresponding group is removed. This process is repeated until the $n$-th failure occurs, whose failure time $T_n$ is recorded and the only remaining group is removed, at which point the experiment is complete. First-failure censoring can effectively save experimental time and cost, but it is not suitable for situations where products must be removed before they fail. Therefore, Wu and Kuş [6] proposed the concept of PFFC in 2009 based on the idea of first-failure censoring. It assumes $n$ independent groups with $k$ products each, all operated simultaneously. When the first failure occurs at time $T_{1:m:n:k}$, the group containing the failed product is removed together with $Q_1$ groups randomly selected from the remaining groups; when the second failure occurs at time $T_{2:m:n:k}$, the group containing the failed product is removed together with $Q_2$ randomly selected groups, and so on. When the $m$-th failure occurs at time $T_{m:m:n:k}$, the group containing the failed product is removed together with the remaining $Q_m$ groups, completing the experiment, where $Q_m = n - m - Q_1 - Q_2 - \cdots - Q_{m-1}$.
Note that if $k = 1$, the PFFC sample is a progressive Type-II censoring sample. If $k = 1$ and $Q_1 = Q_2 = \cdots = Q_{m-1} = 0$, the PFFC sample is a Type-II censoring sample. If $Q_1 = Q_2 = \cdots = Q_m = 0$, the PFFC sample is a first-failure censoring sample. If $k = 1$ and $Q_1 = Q_2 = \cdots = Q_m = 0$, the PFFC sample is a complete sample. In summary, the PFFC sample is an extension of these different types of samples. A small simulation sketch follows.
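For illustration, the sampling procedure can be simulated directly. The minimal Python sketch below (the function name pffc_sample is ours, not code from the paper) generates a PFFC sample from the GRD introduced in Section 1.2, using the Balakrishnan–Sandhu algorithm [44] together with the fact, used again in Section 5, that a PFFC sample from a distribution $F$ is a progressive Type-II censored sample from $1-(1-F)^k$:

```python
import numpy as np

def pffc_sample(n, m, k, Q, sigma, beta, rng=None):
    """Simulate a PFFC sample T_{1:m:n:k} <= ... <= T_{m:m:n:k} from the
    GRD(sigma, beta); Q = (Q_1, ..., Q_m) with sum(Q) = n - m."""
    rng = np.random.default_rng(rng)
    assert len(Q) == m and sum(Q) == n - m
    # Balakrishnan-Sandhu: uniform progressive Type-II order statistics
    W = rng.uniform(size=m)
    V = np.array([W[i - 1] ** (1.0 / (i + sum(Q[m - i:]))) for i in range(1, m + 1)])
    U = 1.0 - np.cumprod(V[::-1])
    # PFFC from F equals progressive Type-II from 1 - (1 - F)^k:
    # solve 1 - (1 - F(t))^k = u for F(t), then invert the GRD CDF
    p = 1.0 - (1.0 - U) ** (1.0 / k)
    return np.sqrt(-np.log(1.0 - p ** (1.0 / sigma))) / beta
```

For example, pffc_sample(30, 20, 2, [0]*19 + [10], 0.8, 1.0) reproduces the simulated setting used later in Remark 1.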
The PFFC method has recently gained widespread attention in research. Shi and Shi [7] explored classical and Bayesian estimation (BE) of stress-strength reliability (SSR) for two independent PFFC samples; they obtained the maximum likelihood (ML) estimate and an asymptotic confidence interval (CI) of the SSR through detailed analysis, constructed the highest posterior density CI of the SSR using simulation techniques, and verified the performance of different point and interval estimates. Abu-Moussa et al. [8] studied statistical inference for the parameters, reliability functions (RFs), and hazard functions (HFs) under PFFC samples from an extended Rayleigh distribution, analyzing the Bayesian estimates obtained with informative and non-informative priors under different loss functions. Elshahhat et al. [9] analyzed a generalized extreme lifetime model based on PFFC samples, obtained parameter estimates via ML estimation and BE, derived the highest posterior density intervals through Metropolis–Hastings (MH) and Gibbs mixed sampling, and verified the feasibility of the censoring method using Monte Carlo simulation. Eliwa and Ahmed [10] analyzed point and interval estimation for the Lomax distribution under PFFC samples, using a general expectation–maximization algorithm to obtain point estimates of the unknown parameters, as well as Bayesian estimates and highest posterior density CIs of the parameters under different loss functions through simulation.

1.2. Generalized Rayleigh Distribution (GRD) Model

Surles and Padgett [11] proposed the GRD as an extension of the Rayleigh distribution, which allows for a broader description of various random phenomena and finds important applications in fields such as communication engineering, wireless transmission, and radar signal processing. The study of the GRD has also attracted the interest of many researchers. Shen et al. [12] introduced a new GRD and applied it to Reddit advertising and breast cancer datasets, comparing it with other GRDs based on its heavy-tailed properties. Rabie et al. [13] analyzed BE and E-BE of the GRD based on progressive Type-II censoring samples and examined the feasibility of the proposed censoring scheme through simulation. Ren and Gui [14] analyzed statistical inference methods for the competing risks model based on progressive censoring GRD.
Let T be a random variable following the GRD with parameters σ and β . The probability density function (PDF), cumulative distribution function (CDF), RF and HF of the distribution are as follows:
$$f(t\mid\sigma,\beta)=2\sigma\beta^{2}t\,e^{-\beta^{2}t^{2}}\left(1-e^{-\beta^{2}t^{2}}\right)^{\sigma-1},\quad t>0,\ \sigma,\beta>0,\tag{1}$$
$$F(t\mid\sigma,\beta)=\left(1-e^{-\beta^{2}t^{2}}\right)^{\sigma},\quad t>0,\ \sigma,\beta>0,\tag{2}$$
$$r(t\mid\sigma,\beta)=1-\left(1-e^{-\beta^{2}t^{2}}\right)^{\sigma},\quad t>0,\ \sigma,\beta>0,\tag{3}$$
$$h(t\mid\sigma,\beta)=\frac{2\sigma\beta^{2}t\,e^{-\beta^{2}t^{2}}\left(1-e^{-\beta^{2}t^{2}}\right)^{\sigma-1}}{1-\left(1-e^{-\beta^{2}t^{2}}\right)^{\sigma}},\quad t>0,\ \sigma,\beta>0,\tag{4}$$
here, σ and β are the shape and scale parameters of the GRD, respectively. When σ and β take different values, the PDF, RF and HF exhibit different shapes (see Figure 1).
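For concreteness, Equations (1)–(4) translate directly into code; the following minimal Python sketch (the helper names are ours, not from the paper) evaluates the four functions:

```python
import numpy as np

def grd_pdf(t, sigma, beta):
    """PDF of Equation (1), vectorized over t."""
    u = 1.0 - np.exp(-(beta * t) ** 2)
    return 2.0 * sigma * beta ** 2 * t * np.exp(-(beta * t) ** 2) * u ** (sigma - 1.0)

def grd_cdf(t, sigma, beta):
    """CDF of Equation (2)."""
    return (1.0 - np.exp(-(beta * t) ** 2)) ** sigma

def grd_rf(t, sigma, beta):
    """Reliability function r(t) = 1 - F(t) of Equation (3)."""
    return 1.0 - grd_cdf(t, sigma, beta)

def grd_hf(t, sigma, beta):
    """Hazard function h(t) = f(t) / r(t) of Equation (4)."""
    return grd_pdf(t, sigma, beta) / grd_rf(t, sigma, beta)
```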
This article aims to explore the GRD model in depth, enriching the diversity of probability distribution models, improving the accuracy and flexibility of data modeling and analysis, and deepening our understanding of the statistical characteristics of data. Additionally, we apply the GRD model to PFFC experiments to carry out statistical inference on the parameters, RF, and HF of the GRD under PFFC samples. Through this study, we verify the applicability and effectiveness of PFFC testing in reliability analysis while laying a solid foundation for future theoretical research and practical applications. The rest of this article is organized as follows. In Section 2, the ML estimates of the parameters, RF, and HF of the GRD model are derived, and the existence and uniqueness of the ML estimation solutions are verified; on this basis, the Newton–Raphson (NR) method and the expectation–maximization (EM) algorithm are used to compute the ML estimates. In Section 3, we discuss the construction of CIs for the parameters, RF, and HF using the bootstrap algorithm. Section 4 analyzes BE under symmetric and asymmetric loss functions, with numerical computation of these estimates via the Markov chain Monte Carlo (MCMC) algorithm. In Section 5, the performance of the proposed estimation methods is compared using Monte Carlo simulation. In Section 6, the proposed methods are illustrated by analyzing two sets of real data. Finally, Section 7 presents the conclusions of the study.

2. ML Estimation

ML estimation is a commonly used parameter estimation method with wide application in statistics and machine learning. It determines the optimal parameter values of a probability distribution model by maximizing the likelihood function (LF) of the observed data [15,16,17]. Let $(T_{1:m:n:k}, T_{2:m:n:k}, \ldots, T_{m:m:n:k})$ denote the $m$ PFFC order statistics observed from the $n$ groups under test, and let $t_{i:m:n:k}$ denote the observed value of $T_{i:m:n:k}$; for convenience, we write $(t_1, t_2, \ldots, t_m)$ for $(t_{1:m:n:k}, t_{2:m:n:k}, \ldots, t_{m:m:n:k})$. Suppose the PFFC sample originates from a continuous population with PDF $f(t)$ and CDF $F(t)$. It is known from Cai and Gui [18] that the joint probability density of $T_{1:m:n:k}, T_{2:m:n:k}, \ldots, T_{m:m:n:k}$ is given by:
$$L_{T_{1:m:n:k},\ldots,T_{m:m:n:k}}(t_1,t_2,\ldots,t_m)=\rho\,k^{m}\prod_{i=1}^{m}f(t_i)\left[1-F(t_i)\right]^{k(Q_i+1)-1},\tag{5}$$
where $\rho=n(n-Q_1-1)(n-Q_1-Q_2-2)\cdots(n-Q_1-Q_2-\cdots-Q_{m-1}-m+1)$.
By combining Equations (1) and (2) and substituting them into Equation (5), we obtain the LF:
$$L(\sigma,\beta\mid t)=\rho\,k^{m}(2\sigma\beta^{2})^{m}\exp\left(-\beta^{2}\sum_{i=1}^{m}t_i^{2}\right)\prod_{i=1}^{m}t_i\,u_i^{\sigma-1}\left(1-u_i^{\sigma}\right)^{k(Q_i+1)-1},$$
where $t=(t_1,t_2,\ldots,t_m)$ and $u_i=u_i(\beta,t_i)=1-\exp(-\beta^{2}t_i^{2})$. Obviously, the corresponding logarithmic LF is:
$$l(\sigma,\beta)=\ln L(\sigma,\beta\mid t)=\ln(\rho k^{m})+m\ln(2\sigma\beta^{2})-\beta^{2}\sum_{i=1}^{m}t_i^{2}+\sum_{i=1}^{m}\ln t_i+(\sigma-1)\sum_{i=1}^{m}\ln u_i+\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\ln\left(1-u_i^{\sigma}\right).\tag{6}$$
Taking the partial derivative of Equation (6) with respect to parameters σ and β , and setting the resulting equations to zero, we obtain the following expressions:
$$\frac{\partial l(\sigma,\beta)}{\partial\sigma}=\frac{m}{\sigma}+\sum_{i=1}^{m}\ln u_i-\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{u_i^{\sigma}\ln u_i}{1-u_i^{\sigma}}=0,\tag{7}$$
$$\frac{\partial l(\sigma,\beta)}{\partial\beta}=\frac{2m}{\beta}-2\beta\sum_{i=1}^{m}t_i^{2}+2\beta(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{2}(1-u_i)}{u_i}-2\beta\sigma\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{t_i^{2}u_i^{\sigma-1}(1-u_i)}{1-u_i^{\sigma}}=0.\tag{8}$$
The roots of nonlinear Equations (7) and (8) are the ML estimates of parameters σ and β , but we cannot obtain explicit expressions. To improve the accuracy and reliability of the estimation method, we need to prove the existence and uniqueness of the ML estimation solutions.
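Although the paper obtains these roots with the NR method and the EM algorithm (Section 2.2), any standard root-finder can be applied to the score equations. The sketch below uses hypothetical helper names (score, ml_estimate) of our own, with SciPy's fsolve standing in for a hand-coded NR iteration; it is an illustration under those assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import fsolve

def score(params, t, Q, k):
    """Left-hand sides of the score equations (7) and (8)."""
    sigma, beta = params
    u = 1.0 - np.exp(-(beta * t) ** 2)
    w = k * (np.asarray(Q) + 1) - 1                       # k(Q_i + 1) - 1
    g1 = (len(t) / sigma + np.log(u).sum()
          - (w * u ** sigma * np.log(u) / (1.0 - u ** sigma)).sum())
    g2 = (2 * len(t) / beta - 2 * beta * (t ** 2).sum()
          + 2 * beta * (sigma - 1) * (t ** 2 * (1.0 - u) / u).sum()
          - 2 * beta * sigma * (w * t ** 2 * u ** (sigma - 1.0) * (1.0 - u)
                                / (1.0 - u ** sigma)).sum())
    return [g1, g2]

def ml_estimate(t, Q, k, init=(1.0, 1.0)):
    """Numerical ML estimates of (sigma, beta) from a PFFC sample t."""
    return fsolve(score, init, args=(np.asarray(t, dtype=float), Q, k))
```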

2.1. Existence and Uniqueness of ML Estimation Solutions

Below we present two theorems:
Theorem 1. 
Let the left side of Equation (7) be
$$g_1(\sigma)=\frac{m}{\sigma}+\sum_{i=1}^{m}\ln u_i-\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{u_i^{\sigma}\ln u_i}{1-u_i^{\sigma}};$$
then the solution of $g_1(\sigma)=0$ exists and is unique on $(0,+\infty)$.
Proof. 
(1) Existence:
When $\sigma\to 0^{+}$, we obviously have $\lim_{\sigma\to 0^{+}}\frac{m}{\sigma}=+\infty$; we now analyze
$$\lim_{\sigma\to 0^{+}}\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{u_i^{\sigma}\ln u_i}{1-u_i^{\sigma}}=\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\ln u_i\lim_{\sigma\to 0^{+}}\frac{u_i^{\sigma}}{1-u_i^{\sigma}}.$$
Applying L’Hôpital’s rule to this ratio,
$$\lim_{\sigma\to 0^{+}}\frac{u_i^{\sigma}}{1-u_i^{\sigma}}=\lim_{\sigma\to 0^{+}}\frac{(u_i^{\sigma})'}{(1-u_i^{\sigma})'}=-1.$$
Then
$$\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\ln u_i\lim_{\sigma\to 0^{+}}\frac{u_i^{\sigma}}{1-u_i^{\sigma}}=-\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\ln u_i.$$
Thus $\lim_{\sigma\to 0^{+}}g_1(\sigma)=+\infty$.
When $\sigma\to+\infty$, since $0<1-e^{-\beta^{2}t^{2}}<1$, we have $\lim_{\sigma\to+\infty}\frac{m}{\sigma}=0$ and $\lim_{\sigma\to+\infty}\frac{u_i^{\sigma}}{1-u_i^{\sigma}}=0$, so
$$\lim_{\sigma\to+\infty}g_1(\sigma)=\sum_{i=1}^{m}\ln u_i.$$
Since $\sum_{i=1}^{m}\ln u_i<0$, it follows that $\lim_{\sigma\to+\infty}g_1(\sigma)<0$. Because $g_1(\sigma)$ is continuous on $(0,+\infty)$, the intermediate value theorem shows that there is at least one solution $\hat{\sigma}$ such that $g_1(\hat{\sigma})=0$, and existence is proved.
(2) Uniqueness:
Differentiating $g_1(\sigma)$ gives
$$g_1'(\sigma)=-\frac{m}{\sigma^{2}}-\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{u_i^{\sigma}(\ln u_i)^{2}}{(1-u_i^{\sigma})^{2}}.$$
Since $0<1-e^{-\beta^{2}t^{2}}<1$, we have $(1-e^{-\beta^{2}t^{2}})^{\sigma}>0$, i.e., $u_i^{\sigma}>0$. Therefore $g_1'(\sigma)<0$, so $g_1(\sigma)$ is a monotonically decreasing function, and uniqueness is proved. □
Theorem 2. 
Let the left side of Equation (8) be
$$g_2(\beta)=\frac{2m}{\beta}-2\beta\sum_{i=1}^{m}t_i^{2}+2\beta(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{2}(1-u_i)}{u_i}-2\beta\sigma\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{t_i^{2}u_i^{\sigma-1}(1-u_i)}{1-u_i^{\sigma}};$$
then a solution of $g_2(\beta)=0$ exists.
Proof. 
 
When $\beta\to 0^{+}$, we obviously have $\lim_{\beta\to 0^{+}}\frac{2m}{\beta}=+\infty$ and $\lim_{\beta\to 0^{+}}2\beta\sum_{i=1}^{m}t_i^{2}=0$. For the third term, note that
$$2\beta(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{2}(1-u_i)}{u_i}=2\beta(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{2}e^{-\beta^{2}t_i^{2}}}{1-e^{-\beta^{2}t_i^{2}}}.$$
We simplify $e^{-\beta^{2}t_i^{2}}$ by Taylor expansion; since $\beta\to 0^{+}$, we keep only the first two terms, so $e^{-\beta^{2}t_i^{2}}\approx 1-\beta^{2}t_i^{2}$ and
$$2\beta(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{2}e^{-\beta^{2}t_i^{2}}}{1-e^{-\beta^{2}t_i^{2}}}\approx\sum_{i=1}^{m}\frac{2\beta(\sigma-1)t_i^{2}(1-\beta^{2}t_i^{2})}{\beta^{2}t_i^{2}}=\sum_{i=1}^{m}\frac{2(\sigma-1)(1-\beta^{2}t_i^{2})}{\beta}.$$
Therefore
$$\lim_{\beta\to 0^{+}}2\beta(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{2}(1-u_i)}{u_i}=\lim_{\beta\to 0^{+}}\sum_{i=1}^{m}\frac{2(\sigma-1)(1-\beta^{2}t_i^{2})}{\beta}=+\infty.$$
In a similar way,
$$2\beta\sigma\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{t_i^{2}u_i^{\sigma-1}(1-u_i)}{1-u_i^{\sigma}}\approx\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{2\beta\sigma t_i^{2}(\beta^{2}t_i^{2})^{\sigma-1}(1-\beta^{2}t_i^{2})}{1-(\beta^{2}t_i^{2})^{\sigma}}.$$
Then
$$\lim_{\beta\to 0^{+}}\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{2\beta\sigma t_i^{2}(\beta^{2}t_i^{2})^{\sigma-1}(1-\beta^{2}t_i^{2})}{1-(\beta^{2}t_i^{2})^{\sigma}}=0.$$
Thus $\lim_{\beta\to 0^{+}}g_2(\beta)=+\infty$.
When $\beta\to+\infty$, we have
$$\lim_{\beta\to+\infty}\frac{2m}{\beta}=0,\quad\lim_{\beta\to+\infty}\left(-2\beta\sum_{i=1}^{m}t_i^{2}\right)=-\infty,\quad\lim_{\beta\to+\infty}2\beta(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{2}(1-u_i)}{u_i}=0,\quad\lim_{\beta\to+\infty}2\beta\sigma\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{t_i^{2}u_i^{\sigma-1}(1-u_i)}{1-u_i^{\sigma}}=0.$$
Thus $\lim_{\beta\to+\infty}g_2(\beta)=-\infty$. Since $g_2(\beta)$ is continuous on $(0,+\infty)$, the intermediate value theorem shows that there is at least one solution $\hat{\beta}$ such that $g_2(\hat{\beta})=0$, and existence is proved. □
Remark 1. 
To illustrate the uniqueness of $\hat{\beta}$, differentiating $g_2(\beta)$ gives:
$$g_2'(\beta)=-\frac{2m}{\beta^{2}}-2\sum_{i=1}^{m}t_i^{2}+2(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{2}(1-u_i)}{u_i}-4\beta^{2}(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{4}(1-u_i)}{u_i^{2}}-2\sigma\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{t_i^{2}u_i^{\sigma-1}(1-u_i)}{1-u_i^{\sigma}}-4\beta^{2}\sigma\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{t_i^{4}u_i^{\sigma-2}(1-u_i)\left[\sigma(1-u_i)-(1-u_i^{\sigma})\right]}{(1-u_i^{\sigma})^{2}}.\tag{9}$$
We wish to prove the uniqueness of the ML estimate $\hat{\beta}$, which requires judging the monotonicity of $g_2(\beta)$. Because of the complex form of Equation (9), the monotonicity cannot be judged directly, so we draw the profile logarithmic LF curve of $\beta$ to illustrate the existence and uniqueness of $\hat{\beta}$. We set the initial parameter values $\sigma^{(0)}=0.8$, $\beta^{(0)}=1$, the sample sizes $n = 30$, $m = 20$, $k = 2$, and the censoring scheme $Q_1=Q_2=\cdots=Q_{m-1}=0$, $Q_m=n-m$ to generate PFFC samples, as shown in Figure 2, where the blue dot marks the highest point of the logarithmic LF curve.
Observing Figure 2, the curve peaks and then declines rapidly, which supports the uniqueness of $\hat{\beta}$. A small plotting sketch of this check is given below.
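A minimal sketch of this graphical check, assuming the pffc_sample helper from Section 1.1 and matplotlib (function and variable names are ours):

```python
import numpy as np
import matplotlib.pyplot as plt

def log_lf(sigma, beta, t, Q, k):
    """Log-likelihood (6) up to the constant ln(rho * k^m)."""
    u = 1.0 - np.exp(-(beta * t) ** 2)
    w = k * (np.asarray(Q) + 1) - 1
    return (len(t) * np.log(2.0 * sigma * beta ** 2) - beta ** 2 * (t ** 2).sum()
            + np.log(t).sum()
            + ((sigma - 1.0) * np.log(u) + w * np.log1p(-u ** sigma)).sum())

Q = [0] * 19 + [10]                                   # scheme of Remark 1
t = pffc_sample(n=30, m=20, k=2, Q=Q, sigma=0.8, beta=1.0)
grid = np.linspace(0.3, 3.0, 300)
vals = [log_lf(0.8, b, t, Q, 2) for b in grid]        # sigma fixed at sigma^(0)
plt.plot(grid, vals)
plt.scatter(grid[np.argmax(vals)], max(vals))         # highest point of the curve
plt.xlabel("beta"); plt.ylabel("profile log-LF"); plt.show()
```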

2.2. EM Algorithm

Having proved the existence and uniqueness of the solutions to Equations (7) and (8), we use the NR method and the EM algorithm to obtain the exact ML estimates of the parameters. Because the NR method is sensitive to the choice of initial values, we also consider the EM algorithm for solving the ML estimates. The EM algorithm is an efficient iterative optimization method, usually suitable for ML estimation in probability models with incomplete data or hidden variables. In addition, its iterative steps are relatively simple: by alternately executing the expectation step (E-step) and the maximization step (M-step), the algorithm is guaranteed to converge to a local optimum. This alternating process not only reduces computational complexity but also improves the stability and practicality of the algorithm.
The PFFC data used in this article are treated as incomplete data. We use $T=(T_{1:m:n:k},T_{2:m:n:k},\ldots,T_{m:m:n:k})$ and $Z=(Z_{11},Z_{12},\ldots,Z_{1(k(Q_1+1)-1)},\ldots,Z_{m1},\ldots,Z_{m(k(Q_m+1)-1)})$ to represent the observed data and the censored data, respectively, so that $W=(T,Z)$ forms the complete data. Ignoring the parameter-free term, the logarithmic LF of the complete data is:
$$l_c=nk\ln(2\sigma\beta^{2})-\beta^{2}\sum_{i=1}^{m}t_i^{2}-\beta^{2}\sum_{i=1}^{m}\sum_{j=1}^{k(Q_i+1)-1}Z_{ij}^{2}+(\sigma-1)\sum_{i=1}^{m}\ln\left(1-e^{-\beta^{2}t_i^{2}}\right)+(\sigma-1)\sum_{i=1}^{m}\sum_{j=1}^{k(Q_i+1)-1}\ln\left(1-e^{-\beta^{2}Z_{ij}^{2}}\right).\tag{10}$$
In the E-step, we obtain the pseudo logarithmic LF from the complete-data logarithmic LF by replacing the functions of $Z_{ij}$ in Equation (10) with $E[g(Z_{ij})\mid Z_{ij}>t_i]$, where $i=1,2,\ldots,m$ and $j=1,2,\ldots,k(Q_i+1)-1$. Thus
$$l_s(\sigma,\beta)=nk\ln(2\sigma\beta^{2})-\beta^{2}\sum_{i=1}^{m}t_i^{2}-\beta^{2}\sum_{i=1}^{m}\sum_{j=1}^{k(Q_i+1)-1}E\left[Z_{ij}^{2}\mid Z_{ij}>t_i\right]+(\sigma-1)\sum_{i=1}^{m}\ln\left(1-e^{-\beta^{2}t_i^{2}}\right)+(\sigma-1)\sum_{i=1}^{m}\sum_{j=1}^{k(Q_i+1)-1}E\left[\ln\left(1-e^{-\beta^{2}Z_{ij}^{2}}\right)\mid Z_{ij}>t_i\right].\tag{11}$$
Given $t_i$, the conditional distribution of $Z_{ij}$ is the population distribution left-truncated at $t_i$, which can be derived as [19]:
$$f(Z_{ij}\mid t_i,\sigma,\beta)=\frac{f(Z_{ij};\sigma,\beta)}{1-F(t_i;\sigma,\beta)},\quad Z_{ij}>t_i,\ i=1,2,\ldots,m,\ j=1,2,\ldots,k(Q_i+1)-1.\tag{12}$$
Using Equation (12), the expectations appearing in Equation (11) are as follows:
$$A(t_i;\sigma,\beta)=E\left[Z_{ij}^{2}\mid Z_{ij}>t_i\right]=\frac{1}{1-F(t_i;\sigma,\beta)}\int_{t_i}^{\infty}Z_{ij}^{2}f(Z_{ij};\sigma,\beta)\,dZ_{ij}=\frac{-\sigma}{\beta^{2}\left[1-F(t_i;\sigma,\beta)\right]}\int_{1-e^{-\beta^{2}t_i^{2}}}^{1}u^{\sigma-1}\ln(1-u)\,du,$$
$$B(t_i;\sigma,\beta)=E\left[\ln\left(1-e^{-\beta^{2}Z_{ij}^{2}}\right)\mid Z_{ij}>t_i\right]=\frac{1}{1-F(t_i;\sigma,\beta)}\int_{t_i}^{\infty}\ln\left(1-e^{-\beta^{2}Z_{ij}^{2}}\right)f(Z_{ij};\sigma,\beta)\,dZ_{ij}=\frac{\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma}\left[1-\sigma\ln\left(1-e^{-\beta^{2}t_i^{2}}\right)\right]-1}{\sigma\left[1-\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma}\right]}.$$
In the M-step, the goal is to maximize the function $l_s(\sigma,\beta)$. Assuming the estimates of σ and β at stage $q$ are $(\sigma^{(q)},\beta^{(q)})$, the estimates $(\sigma^{(q+1)},\beta^{(q+1)})$ at stage $q+1$ are obtained by maximizing:
$$l_s(\sigma,\beta)=nk\ln(2\sigma\beta^{2})-\beta^{2}\sum_{i=1}^{m}t_i^{2}+(\sigma-1)\sum_{i=1}^{m}\ln\left(1-e^{-\beta^{2}t_i^{2}}\right)-\beta^{2}A+(\sigma-1)B,$$
where
$$A=\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]A(t_i;\sigma^{(q)},\beta^{(q)}),\qquad B=\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]B(t_i;\sigma^{(q)},\beta^{(q)}).$$
The value of $\sigma^{(q+1)}$ is first obtained by solving:
$$\sigma^{(q+1)}=\frac{-nk}{\sum_{i=1}^{m}\ln\left(1-e^{-(\beta^{(q)})^{2}t_i^{2}}\right)+\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]B(t_i;\sigma^{(q)},\beta^{(q)})}.$$
With $\sigma^{(q+1)}$ in hand, $\beta^{(q+1)}$ is solved from:
$$\beta^{(q+1)}=(nk)^{1/2}\left[\sum_{i=1}^{m}t_i^{2}-\left(\sigma^{(q+1)}-1\right)\sum_{i=1}^{m}\frac{t_i^{2}e^{-(\beta^{(q)})^{2}t_i^{2}}}{1-e^{-(\beta^{(q)})^{2}t_i^{2}}}+\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]A(t_i;\sigma^{(q+1)},\beta^{(q)})\right]^{-1/2}.$$
During the iteration, the obtained $\sigma^{(q+1)}$ and $\beta^{(q+1)}$ are adopted as the updated values of $(\sigma,\beta)$. Starting from given initial values $(\sigma^{(0)},\beta^{(0)})$, the E-step and M-step are repeated until the algorithm converges, yielding the ML estimates $\hat{\sigma}_{EM}$ and $\hat{\beta}_{EM}$. A sketch of this iteration is given below.
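A minimal sketch of this EM iteration, using the paper's notation but our own helper names (A_exp, B_exp, em_grd), with the expectation $A(\cdot)$ evaluated by numerical integration:

```python
import numpy as np
from scipy.integrate import quad

def A_exp(ti, sigma, beta):
    """A(t_i; sigma, beta) = E[Z^2 | Z > t_i] via the integral form above."""
    u_t = 1.0 - np.exp(-(beta * ti) ** 2)
    surv = 1.0 - u_t ** sigma                          # 1 - F(t_i)
    val, _ = quad(lambda u: u ** (sigma - 1.0) * np.log(1.0 - u), u_t, 1.0)
    return -sigma * val / (beta ** 2 * surv)

def B_exp(ti, sigma, beta):
    """B(t_i; sigma, beta) in the closed form derived above."""
    u_t = 1.0 - np.exp(-(beta * ti) ** 2)
    return (u_t ** sigma * (1.0 - sigma * np.log(u_t)) - 1.0) / (sigma * (1.0 - u_t ** sigma))

def em_grd(t, Q, n, k, sigma0=1.0, beta0=1.0, tol=1e-6, max_iter=500):
    """E-step/M-step iteration for (sigma, beta) under a PFFC sample t."""
    t = np.asarray(t, dtype=float)
    w = k * (np.asarray(Q) + 1) - 1                    # k(Q_i + 1) - 1 censored per group
    sigma, beta = sigma0, beta0
    for _ in range(max_iter):
        u = 1.0 - np.exp(-(beta * t) ** 2)
        B = sum(wi * B_exp(ti, sigma, beta) for wi, ti in zip(w, t))
        sigma_new = -n * k / (np.log(u).sum() + B)     # update for sigma^(q+1)
        A = sum(wi * A_exp(ti, sigma_new, beta) for wi, ti in zip(w, t))
        denom = (t ** 2).sum() - (sigma_new - 1.0) * ((t ** 2) * (1.0 - u) / u).sum() + A
        beta_new = np.sqrt(n * k / denom)              # update for beta^(q+1)
        if abs(sigma_new - sigma) + abs(beta_new - beta) < tol:
            return sigma_new, beta_new
        sigma, beta = sigma_new, beta_new
    return sigma, beta
```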

3. Bootstrap CI

A CI is a statistical tool for estimating the range in which the true value of a parameter lies. In data analysis, we often rely on samples to infer properties of the population; due to the randomness of sampling, however, sample statistics vary from one sample to another. A CI provides an interval estimate that indicates the plausible range of the true parameter value, quantifying the uncertainty associated with the parameter and providing an assessment of the reliability of the results. There are various methods for constructing CIs, and different models may require different estimation techniques. For small sample sizes, the bootstrap method can be chosen: it is a highly effective resampling approach that simplifies estimation for complex data. The bootstrap method has been studied extensively. Cho and Kirch [20] proposed a method for generating bootstrap CIs and demonstrated the good performance of the bootstrap procedure through simulation experiments. Saegusa et al. [21] analyzed parametric bootstrap CIs in the multivariate Fay–Herriot model and found that the bootstrap method outperformed analytical methods based on data analysis. Sroka [22] compared the usefulness and limitations of the bootstrap and jackknife methods in econometrics, concluding from simulation experiments that the bootstrap method performed better overall.
Among the various bootstrap procedures, here we use the bootstrap-t (Boot-t) method to obtain CIs for the parameters, RF, and HF, as presented in [23]. The algorithm of the Boot-t method is given in Table 1.
For a given $t$, the following quantities are defined:
$$\hat{\sigma}(t)=\hat{\sigma}+B^{-1/2}\sqrt{Var(\hat{\sigma})}\,H^{-1}(t),\qquad\hat{\beta}(t)=\hat{\beta}+B^{-1/2}\sqrt{Var(\hat{\beta})}\,W^{-1}(t).$$
Therefore, the Boot-t CIs for the parameters σ and β at the $100(1-\alpha)\%$ confidence level are $(\hat{\sigma}(\alpha/2),\hat{\sigma}(1-\alpha/2))$ and $(\hat{\beta}(\alpha/2),\hat{\beta}(1-\alpha/2))$, respectively. Similarly, the Boot-t CIs for the RF and HF at the $100(1-\alpha)\%$ confidence level are $(\hat{r}(\alpha/2),\hat{r}(1-\alpha/2))$ and $(\hat{h}(\alpha/2),\hat{h}(1-\alpha/2))$, respectively. The variances of the parameters, RF, and HF can be obtained from the Fisher information matrix. The calculation proceeds as follows:
Construct the Fisher information matrix
$$I(\hat{\sigma},\hat{\beta})=\begin{pmatrix}-\dfrac{\partial^{2}l(\sigma,\beta)}{\partial\sigma^{2}}&-\dfrac{\partial^{2}l(\sigma,\beta)}{\partial\sigma\partial\beta}\\[2mm]-\dfrac{\partial^{2}l(\sigma,\beta)}{\partial\beta\partial\sigma}&-\dfrac{\partial^{2}l(\sigma,\beta)}{\partial\beta^{2}}\end{pmatrix}\Bigg|_{\sigma=\hat{\sigma},\,\beta=\hat{\beta}},$$
where
$$\frac{\partial^{2}l(\sigma,\beta)}{\partial\sigma^{2}}=-\frac{m}{\sigma^{2}}-\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{u_i^{\sigma}(\ln u_i)^{2}}{(1-u_i^{\sigma})^{2}},$$
$$\frac{\partial^{2}l(\sigma,\beta)}{\partial\beta^{2}}=-\frac{2m}{\beta^{2}}-2\sum_{i=1}^{m}t_i^{2}+2(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{2}(1-u_i)}{u_i}-4\beta^{2}(\sigma-1)\sum_{i=1}^{m}\frac{t_i^{4}(1-u_i)}{u_i^{2}}-2\sigma\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{t_i^{2}u_i^{\sigma-1}(1-u_i)}{1-u_i^{\sigma}}-4\beta^{2}\sigma\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{t_i^{4}u_i^{\sigma-2}(1-u_i)\left[\sigma(1-u_i)-(1-u_i^{\sigma})\right]}{(1-u_i^{\sigma})^{2}},$$
$$\frac{\partial^{2}l(\sigma,\beta)}{\partial\sigma\partial\beta}=2\beta\sum_{i=1}^{m}\frac{t_i^{2}(1-u_i)}{u_i}-2\beta\sum_{i=1}^{m}\left[k(Q_i+1)-1\right]\frac{t_i^{2}u_i^{\sigma-1}(1-u_i)\left(\sigma\ln u_i-u_i^{\sigma}+1\right)}{(1-u_i^{\sigma})^{2}}=\frac{\partial^{2}l(\sigma,\beta)}{\partial\beta\partial\sigma}.$$
The inverse $I^{-1}(\hat{\sigma},\hat{\beta})$ of the Fisher information matrix gives the variance–covariance matrix:
$$I^{-1}(\hat{\sigma},\hat{\beta})=\begin{pmatrix}Var(\hat{\sigma})&Cov(\hat{\sigma},\hat{\beta})\\Cov(\hat{\beta},\hat{\sigma})&Var(\hat{\beta})\end{pmatrix}\Bigg|_{\sigma=\hat{\sigma},\,\beta=\hat{\beta}},$$
from which the variances of the parameters are obtained.
Let
$$D_r=\left(\frac{\partial r(t)}{\partial\sigma},\ \frac{\partial r(t)}{\partial\beta}\right),\qquad D_h=\left(\frac{\partial h(t)}{\partial\sigma},\ \frac{\partial h(t)}{\partial\beta}\right).$$
From Equations (3) and (4), it can be seen that
$$\frac{\partial r(t)}{\partial\sigma}=-u^{\sigma}\ln u,\qquad\frac{\partial r(t)}{\partial\beta}=-2\sigma\beta t^{2}u^{\sigma-1}(1-u),$$
$$\frac{\partial h(t)}{\partial\sigma}=\frac{2\beta^{2}t(1-u)u^{\sigma-1}\left(1-u^{\sigma}+\sigma\ln u\right)}{\left(1-u^{\sigma}\right)^{2}},\qquad\frac{\partial h(t)}{\partial\beta}=\frac{4\sigma\beta t(1-u)u^{\sigma-1}\left[\left(1-u^{\sigma}\right)-\beta^{2}t^{2}u^{-1}\left(\sigma u-\sigma+1-u^{\sigma}\right)\right]}{\left(1-u^{\sigma}\right)^{2}},$$
where $u=1-e^{-\beta^{2}t^{2}}$. Thus
$$Var(\hat{r}(t))=D_rI^{-1}(\sigma,\beta)D_r'\big|_{\sigma=\hat{\sigma},\,\beta=\hat{\beta}},\qquad Var(\hat{h}(t))=D_hI^{-1}(\sigma,\beta)D_h'\big|_{\sigma=\hat{\sigma},\,\beta=\hat{\beta}},$$
where $D'$ denotes the transpose of $D$. A code sketch of the complete Boot-t procedure follows.
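Putting the pieces together, the algorithm of Table 1 can be sketched as follows. Here ml_estimate is the score-equation solver sketched in Section 2, pffc_sample is the generator from Section 1.1, and fisher_var is a hypothetical routine (not shown) that inverts the observed information matrix above and returns $Var(\hat{\sigma})$ and $Var(\hat{\beta})$; this is an illustration under those assumptions, not the authors' code:

```python
import numpy as np

def boot_t_ci(t, Q, n, m, k, B=5000, alpha=0.05, rng=None):
    """Boot-t CIs for sigma and beta following Table 1."""
    rng = np.random.default_rng(rng)
    s_hat, b_hat = ml_estimate(t, Q, k)                   # Step 2
    vs, vb = fisher_var(t, Q, k, s_hat, b_hat)            # Var from I^{-1} (hypothetical)
    eta, tau = np.empty(B), np.empty(B)
    for i in range(B):                                    # Steps 3-5
        ts = pffc_sample(n, m, k, Q, s_hat, b_hat, rng)   # bootstrap sample t*
        s_st, b_st = ml_estimate(ts, Q, k)
        vs_st, vb_st = fisher_var(ts, Q, k, s_st, b_st)
        eta[i] = np.sqrt(B) * (s_st - s_hat) / np.sqrt(vs_st)
        tau[i] = np.sqrt(B) * (b_st - b_hat) / np.sqrt(vb_st)
    lo, hi = alpha / 2.0, 1.0 - alpha / 2.0               # Step 6: empirical quantiles
    ci_s = tuple(s_hat + np.sqrt(vs / B) * np.quantile(eta, q) for q in (lo, hi))
    ci_b = tuple(b_hat + np.sqrt(vb / B) * np.quantile(tau, q) for q in (lo, hi))
    return ci_s, ci_b
```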

4. BE under Different Loss Functions

BE is a statistical inference method used for estimating or predicting parameters. It is based on Bayes’ theorem, which combines prior knowledge with observed data to obtain the posterior distribution of the parameters, serving as the foundation for parameter estimation. BE originated in the 18th century and has matured over the years, finding wide applications in various fields of research and practice [24,25,26,27]. In this section, we propose the posterior density function of the GRD based on PFFC samples and utilize Bayesian inference methods to obtain Bayesian estimates of parameters, RF and HF under symmetric and asymmetric loss functions.

4.1. Informative and Non-Informative Priors

In BE, we first need to examine the influence of the prior on the estimates. In this section, we consider both informative and non-informative priors. The gamma distribution, as a commonly used prior distribution, is widely adopted in BE due to its flexible shape, analytical tractability, and broad applicability, including reliability analysis and survival analysis. Choosing the gamma distribution as the prior provides preliminary information on the scale and shape of the parameters, simplifies the computation of the BE, and improves the accuracy and reliability of the estimation results. We therefore assume that σ and β are independent random variables following $\Gamma(a_1,b_1)$ and $\Gamma(a_2,b_2)$, respectively. The joint prior distribution of σ and β is given by:
$$\pi_1(\sigma,\beta)=\frac{b_1^{a_1}b_2^{a_2}}{\Gamma(a_1)\Gamma(a_2)}\sigma^{a_1-1}\beta^{a_2-1}\exp(-b_1\sigma-b_2\beta),\qquad a_1,b_1,a_2,b_2>0.$$
As a result, the joint posterior density of σ and β is:
$$\pi_1(\sigma,\beta\mid t)=\frac{L(\sigma,\beta\mid t)\pi_1(\sigma,\beta)}{\int_{0}^{\infty}\int_{0}^{\infty}L(\sigma,\beta\mid t)\pi_1(\sigma,\beta)\,d\sigma\,d\beta}.$$
Namely,
$$\pi_1(\sigma,\beta\mid t)\propto\sigma^{a_1+m-1}\beta^{a_2+2m-1}\exp(-b_1\sigma-b_2\beta)\prod_{i=1}^{m}t_i\,e^{-\beta^{2}t_i^{2}}\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma-1}\left[1-\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma}\right]^{k(Q_i+1)-1}.\tag{13}$$
When the priors of σ and β are non-informative, their joint prior distribution is:
$$\pi_2(\sigma,\beta)=\frac{1}{\sigma\beta}.$$
As a result, the joint posterior density of σ and β is:
$$\pi_2(\sigma,\beta\mid t)=\frac{L(\sigma,\beta\mid t)\pi_2(\sigma,\beta)}{\int_{0}^{\infty}\int_{0}^{\infty}L(\sigma,\beta\mid t)\pi_2(\sigma,\beta)\,d\sigma\,d\beta}.$$
Namely,
$$\pi_2(\sigma,\beta\mid t)\propto\sigma^{m-1}\beta^{2m-1}\prod_{i=1}^{m}t_i\,e^{-\beta^{2}t_i^{2}}\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma-1}\left[1-\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma}\right]^{k(Q_i+1)-1}.\tag{14}$$
In BE, a loss function is commonly used to measure and compare the performance of different estimation methods so as to minimize the expected loss. The choice of loss function is crucial, as different loss functions can lead to different estimation results. A symmetric loss function is symmetric in the difference between the estimated and true values and is appropriate when underestimation and overestimation are equally important. In certain reliability estimation problems, however, the impact of overestimation may be more severe than that of underestimation, so relying solely on symmetric loss functions has limitations. To obtain more comprehensive Bayesian inference results, in this section we consider both symmetric and asymmetric loss functions [28]. The squared error loss function (SELF) is a common symmetric loss function, introduced by Sadoun et al. [29], which explicitly quantifies the difference between the estimated and true values. The generalized entropy loss function (GELF) is an asymmetric loss function proposed by Malgorzata [30], which uses ratios to represent the error between the estimated and true values. The weighted squared error loss function (WSELF) is an asymmetric loss function similar to SELF in computation, except that it multiplies the squared error by a weight factor, which can be set according to the actual situation to adjust the contribution of different samples to the loss. Wasan [31] proposed the K-loss function (KLF) to assess a model's confidence in its predictions. The modified (quadratic) squared error loss function (M/Q SELF) is an asymmetric loss function that balances the prediction error of SELF by introducing a correction factor. The precautionary loss function (PLF) is an asymmetric loss function introduced by Norstrom [32]. Research on these loss functions is by now relatively mature; see [33,34,35] for details. The Bayesian estimators under the various loss functions are listed in Table 2 [36,37].
Here, Θ denotes a function of σ and β. Under the aforementioned loss functions, the Bayesian estimators are given by:
(i) SELF:
$$\hat{\Theta}(\sigma,\beta)_{BS}=E(\Theta\mid t)=\int_{0}^{\infty}\int_{0}^{\infty}\Theta(\sigma,\beta)\pi(\sigma,\beta\mid t)\,d\sigma\,d\beta.$$
(ii) GELF:
$$\hat{\Theta}(\sigma,\beta)_{BG}=\left[E(\Theta^{-c}\mid t)\right]^{-1/c}=\left[\int_{0}^{\infty}\int_{0}^{\infty}\Theta^{-c}(\sigma,\beta)\pi(\sigma,\beta\mid t)\,d\sigma\,d\beta\right]^{-1/c}.$$
(iii) WSELF:
$$\hat{\Theta}(\sigma,\beta)_{BW}=\left[E(\Theta^{-1}\mid t)\right]^{-1}=\left[\int_{0}^{\infty}\int_{0}^{\infty}\Theta^{-1}(\sigma,\beta)\pi(\sigma,\beta\mid t)\,d\sigma\,d\beta\right]^{-1}.$$
(iv) KLF:
$$\hat{\Theta}(\sigma,\beta)_{BK}=\sqrt{\frac{E(\Theta\mid t)}{E(\Theta^{-1}\mid t)}}=\sqrt{\frac{\int_{0}^{\infty}\int_{0}^{\infty}\Theta(\sigma,\beta)\pi(\sigma,\beta\mid t)\,d\sigma\,d\beta}{\int_{0}^{\infty}\int_{0}^{\infty}\Theta^{-1}(\sigma,\beta)\pi(\sigma,\beta\mid t)\,d\sigma\,d\beta}}.$$
(v) M/Q SELF:
$$\hat{\Theta}(\sigma,\beta)_{BM}=\frac{E(\Theta^{-1}\mid t)}{E(\Theta^{-2}\mid t)}=\frac{\int_{0}^{\infty}\int_{0}^{\infty}\Theta^{-1}(\sigma,\beta)\pi(\sigma,\beta\mid t)\,d\sigma\,d\beta}{\int_{0}^{\infty}\int_{0}^{\infty}\Theta^{-2}(\sigma,\beta)\pi(\sigma,\beta\mid t)\,d\sigma\,d\beta}.$$
(vi) PLF:
$$\hat{\Theta}(\sigma,\beta)_{BP}=\sqrt{E(\Theta^{2}\mid t)}=\sqrt{\int_{0}^{\infty}\int_{0}^{\infty}\Theta^{2}(\sigma,\beta)\pi(\sigma,\beta\mid t)\,d\sigma\,d\beta}.$$
Because the Bayesian estimators under the aforementioned loss functions have no explicit form, their computation is complex. We therefore use MCMC sampling to obtain the Bayesian estimates under the loss functions; given posterior draws, each estimate reduces to a simple Monte Carlo average, as sketched below.
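Given posterior draws of Θ (for example, of σ, β, r(t), or h(t)) produced by the MCMC method of Section 4.2, the six estimators above reduce to Monte Carlo averages. A small sketch with our own function name bayes_estimates (c is the GELF hyperparameter, set to 0.5 in Section 5):

```python
import numpy as np

def bayes_estimates(theta, c=0.5):
    """Bayesian point estimates under the six loss functions of Table 2,
    approximated by Monte Carlo averages over posterior draws `theta`."""
    theta = np.asarray(theta, dtype=float)
    m1 = theta.mean()                   # E(Theta | t)
    m_inv = np.mean(1.0 / theta)        # E(Theta^-1 | t)
    m_inv2 = np.mean(theta ** -2.0)     # E(Theta^-2 | t)
    m2 = np.mean(theta ** 2.0)          # E(Theta^2 | t)
    return {
        "SELF": m1,
        "GELF": np.mean(theta ** (-c)) ** (-1.0 / c),
        "WSELF": 1.0 / m_inv,
        "KLF": np.sqrt(m1 / m_inv),
        "M/Q SELF": m_inv / m_inv2,
        "PLF": np.sqrt(m2),
    }
```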

4.2. MCMC Method with Hybrid Gibbs Sampling

MCMC sampling is a commonly used sampling method in statistics to generate samples from complex probability distributions. It is based on the Monte Carlo method using Markov chains. By constructing a Markov chain, the chain’s stationary distribution can be made equal to the desired sampling distribution. Subsequently, samples from the posterior distribution can be obtained from this Markov chain, and various statistical inferences can be made based on these samples [38]. MCMC, as a random sampling method, has attracted the interest of many researchers and has been widely applied in the fields of statistics and machine learning [39,40,41,42].
We know that the core of the MCMC algorithm is to construct a Markov chain with a stationary distribution to obtain samples from the posterior distribution. However, different posterior distributions have different characteristics, which require us to use different MCMC algorithms in different situations. Gibbs sampling is an MCMC method used to draw samples from multidimensional probability distributions. It is suitable for high-dimensional or non-standard probability distributions. The idea is to sequentially sample values from each dimension and update using conditional probabilities. By iteratively sampling in this way, samples that conform to the joint distribution can be obtained. For low-dimensional probability distributions, we consider using the MH algorithm. The MH algorithm is a more generalized MCMC method that incorporates the acceptance-rejection principle. By selecting appropriate proposal distributions and acceptance probabilities, it ensures that the Markov chain converges to the target distribution and generates effective samples. In this section, we consider using the MH-Gibbs hybrid sampling method to generate samples of the parameters. For high-dimensional sampling, we use the Gibbs algorithm, and for low-dimensional sampling, we use the MH algorithm. By effectively combining the strengths of these two methods, the difficulty of sampling is reduced.
The full conditional posterior distributions of the parameters σ and β under the informative prior, as inferred from Equation (13), are given by:
$$\pi_1(\sigma\mid t,\beta)\propto\sigma^{a_1+m-1}e^{-b_1\sigma}\prod_{i=1}^{m}\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma-1}\left[1-\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma}\right]^{k(Q_i+1)-1},\tag{15}$$
$$\pi_1(\beta\mid t,\sigma)\propto\beta^{a_2+2m-1}e^{-b_2\beta}\prod_{i=1}^{m}e^{-\beta^{2}t_i^{2}}\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma-1}\left[1-\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma}\right]^{k(Q_i+1)-1}.\tag{16}$$
The full conditional posterior distributions of σ and β under the non-informative prior, as inferred from Equation (14), are given by:
$$\pi_2(\sigma\mid t,\beta)\propto\sigma^{m-1}\prod_{i=1}^{m}\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma-1}\left[1-\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma}\right]^{k(Q_i+1)-1},\tag{17}$$
$$\pi_2(\beta\mid t,\sigma)\propto\beta^{2m-1}\prod_{i=1}^{m}e^{-\beta^{2}t_i^{2}}\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma-1}\left[1-\left(1-e^{-\beta^{2}t_i^{2}}\right)^{\sigma}\right]^{k(Q_i+1)-1}.\tag{18}$$
From Equations (15)–(18), it can be observed that the conditional posteriors are non-explicit, making direct sampling extremely challenging. Therefore, we employ the MH–Gibbs hybrid sampling method. Its steps are outlined below:
Step 1: Set the initial values of the parameters: $\sigma=\sigma^{(0)}$, $\beta=\beta^{(0)}$.
Step 2: Assuming the parameter estimates for the i-th iteration are $\sigma^{(i)}$, $\beta^{(i)}$, the (i+1)-th iteration proceeds as follows:
Sample $\sigma^{(i+1)}$ from $\pi(\sigma\mid t,\beta^{(i)})$ using the Metropolis algorithm, with a normal distribution $N(\sigma_j,\psi_\sigma)$ as the proposal distribution for σ, where $\sigma_j$ represents the current state of σ and $\psi_\sigma$ its standard deviation.
(i)
Select a candidate point $\sigma^{*}$ from $N(\sigma_j,\psi_\sigma)$; if $\sigma^{*}\le 0$, resample until a valid candidate is obtained. Calculate the acceptance probability of the candidate point:
$$p(\sigma_j,\sigma^{*})=\min\left\{1,\frac{\pi_1(\sigma^{*}\mid t,\beta^{(i)})}{\pi_1(\sigma_j\mid t,\beta^{(i)})}\right\}=\min\left\{1,\frac{(\sigma^{*})^{m-1}\prod_{i=1}^{m}\left(1-e^{-(\beta^{(i)})^{2}t_i^{2}}\right)^{\sigma^{*}-1}\left[1-\left(1-e^{-(\beta^{(i)})^{2}t_i^{2}}\right)^{\sigma^{*}}\right]^{k(Q_i+1)-1}}{\sigma_j^{m-1}\prod_{i=1}^{m}\left(1-e^{-(\beta^{(i)})^{2}t_i^{2}}\right)^{\sigma_j-1}\left[1-\left(1-e^{-(\beta^{(i)})^{2}t_i^{2}}\right)^{\sigma_j}\right]^{k(Q_i+1)-1}}\right\}.$$
(ii)
Generate a random number $\phi$ from the uniform distribution U(0, 1) and draw according to:
$$\sigma_{j+1}=\begin{cases}\sigma^{*},&\phi\le p(\sigma_j,\sigma^{*}),\\\sigma_j,&\phi>p(\sigma_j,\sigma^{*}).\end{cases}$$
(iii)
Set j = j + 1 and return to step (i) to continue repeating the above steps.
This yields a Markov chain; once it reaches its equilibrium state, any sample from the chain can be used as $\sigma^{(i+1)}$.
Step 3: Sample $\beta^{(i+1)}$ from $\pi(\beta\mid t,\sigma^{(i+1)})$ using the Metropolis algorithm, with a normal distribution $N(\beta_j,\gamma_\beta)$ as the proposal distribution for β, where $\beta_j$ represents the current state of β and $\gamma_\beta$ its standard deviation.
(i)
Select a candidate point $\beta^{*}$ from $N(\beta_j,\gamma_\beta)$; if $\beta^{*}\le 0$, resample until a valid candidate is obtained. Calculate the acceptance probability of the candidate point:
$$p(\beta_j,\beta^{*})=\min\left\{1,\frac{\pi_1(\beta^{*}\mid t,\sigma^{(i+1)})}{\pi_1(\beta_j\mid t,\sigma^{(i+1)})}\right\}=\min\left\{1,\frac{(\beta^{*})^{2m-1}\exp\left(-(\beta^{*})^{2}\sum_{i=1}^{m}t_i^{2}\right)\prod_{i=1}^{m}\left(1-e^{-(\beta^{*})^{2}t_i^{2}}\right)^{\sigma^{(i+1)}-1}\left[1-\left(1-e^{-(\beta^{*})^{2}t_i^{2}}\right)^{\sigma^{(i+1)}}\right]^{k(Q_i+1)-1}}{\beta_j^{2m-1}\exp\left(-\beta_j^{2}\sum_{i=1}^{m}t_i^{2}\right)\prod_{i=1}^{m}\left(1-e^{-\beta_j^{2}t_i^{2}}\right)^{\sigma^{(i+1)}-1}\left[1-\left(1-e^{-\beta_j^{2}t_i^{2}}\right)^{\sigma^{(i+1)}}\right]^{k(Q_i+1)-1}}\right\}.$$
(ii)
Generate a random number $\varsigma$ from the uniform distribution U(0, 1) and draw according to:
$$\beta_{j+1}=\begin{cases}\beta^{*},&\varsigma\le p(\beta_j,\beta^{*}),\\\beta_j,&\varsigma>p(\beta_j,\beta^{*}).\end{cases}$$
(iii)
Set j = j + 1 and return to step (i) to continue repeating the above steps.
This yields a Markov chain; once it reaches its equilibrium state, any sample from the chain can be used as $\beta^{(i+1)}$. The above scenario uses the informative prior for MH–Gibbs hybrid sampling; for the non-informative prior, it suffices to replace the posterior under the informative prior with the posterior under the non-informative prior. Using the MCMC method, we obtain samples $\sigma^{(i)}$ and $\beta^{(i)}$ of σ and β, respectively, and the Bayesian estimates under each loss function are then computed from these samples. The Bayesian estimators of the parameters, RF, and HF under each loss function are provided in Table 3.
In Markov chain algorithms, to mitigate the impact of the initial values on sampling, a portion of the iterations is discarded; this is referred to as the burn-in period, denoted $M_0$ [43]. To obtain Bayesian credible intervals (Cred.CIs), the $(M-M_0)$ parameter samples obtained from MH–Gibbs hybrid sampling are arranged in ascending order to give $(\sigma_{M_0+1},\sigma_{M_0+2},\ldots,\sigma_M)$ and $(\beta_{M_0+1},\beta_{M_0+2},\ldots,\beta_M)$. The $100(1-\alpha)\%$ credible intervals are then $(\sigma_{\zeta_1},\sigma_{\zeta_2})$ and $(\beta_{\zeta_1},\beta_{\zeta_2})$, where $\zeta_1$ and $\zeta_2$ are the largest integers not exceeding $(M-M_0)(\alpha/2)$ and $(M-M_0)(1-\alpha/2)$, respectively. A runnable sketch of the whole sampler is given below.
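A minimal sketch of the sampler under the non-informative prior, with the acceptance ratios of Equations (17) and (18) evaluated on the log scale for numerical stability (function names are ours, not from the paper):

```python
import numpy as np

def log_cond_sigma(s, b, t, w, m):
    """log pi_2(sigma | t, beta), Equation (17), up to a constant."""
    u = 1.0 - np.exp(-(b * t) ** 2)
    return (m - 1) * np.log(s) + ((s - 1) * np.log(u) + w * np.log1p(-u ** s)).sum()

def log_cond_beta(b, s, t, w, m):
    """log pi_2(beta | t, sigma), Equation (18), up to a constant."""
    u = 1.0 - np.exp(-(b * t) ** 2)
    return ((2 * m - 1) * np.log(b) - b ** 2 * (t ** 2).sum()
            + ((s - 1) * np.log(u) + w * np.log1p(-u ** s)).sum())

def mh_gibbs(t, Q, k, M=5000, M0=500, psi=0.1, gamma=0.1, init=(1.0, 1.0), rng=None):
    """MH-Gibbs hybrid sampler; returns (M - M0) draws of (sigma, beta)."""
    rng = np.random.default_rng(rng)
    t = np.asarray(t, dtype=float)
    m = len(t)
    w = k * (np.asarray(Q) + 1) - 1
    s, b = init
    draws = np.empty((M, 2))
    for it in range(M):
        s_new = rng.normal(s, psi)
        while s_new <= 0:                  # resample until a valid candidate
            s_new = rng.normal(s, psi)
        if np.log(rng.uniform()) < log_cond_sigma(s_new, b, t, w, m) - log_cond_sigma(s, b, t, w, m):
            s = s_new
        b_new = rng.normal(b, gamma)
        while b_new <= 0:
            b_new = rng.normal(b, gamma)
        if np.log(rng.uniform()) < log_cond_beta(b_new, s, t, w, m) - log_cond_beta(b, s, t, w, m):
            b = b_new
        draws[it] = (s, b)
    return draws[M0:]                      # burn-in M0 discarded

# 95% Cred.CIs from the retained draws:
# draws = mh_gibbs(t, Q, k)
# np.quantile(draws[:, 0], [0.025, 0.975]); np.quantile(draws[:, 1], [0.025, 0.975])
```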

5. Monte Carlo Simulation

In this section, Monte Carlo simulations are used to assess the performance of the estimates developed in this study, together with their mean square errors (MSEs). The performance of the different methods is compared in terms of MSE, and the average width (AW) of the CIs is also reported. Firstly, we set the true parameter values to $\sigma=0.8$, $\beta=1$. According to the algorithm introduced by Balakrishnan and Sandhu [44] for progressive Type-II censoring samples, a PFFC sample from a distribution function $F(x)$ can be regarded as a progressive Type-II censoring sample from the distribution function $1-(1-F(x))^{k}$. We assume there are $n\times k$ products, divided into $n$ groups with $k$ items in each group. The censoring schemes are set as follows (Table 4).
Under the informative prior, the hyperparameters of the Gamma distributions are set to $a_1=b_1=a_2=b_2=1$, and the GELF hyperparameter is c = 0.5. The MCMC algorithm is run for 5000 iterations with a burn-in period of 500 iterations. Table 5 shows the ML estimates and corresponding MSEs (in parentheses) of σ, β, r(t), and h(t) under different k values and censoring schemes. Based on the informative and non-informative priors, Table 6, Table 7, Table 8 and Table 9 show the Bayesian estimates and corresponding MSEs (in parentheses) of σ, β, r(t), and h(t) under different k values and censoring schemes, respectively. Table 10 shows the Boot-t CIs and Bayesian Cred.CIs for σ, β, r(t), and h(t). When constructing the bootstrap CIs, we use the ML estimate obtained by the NR method as the benchmark. Under the different censoring schemes, we adopted sample sizes $m_1=20$, $m_2=30$, $m_3=50$ and performed B = 5000 repeated samplings to ensure the accuracy and reliability of the resulting CIs. For simplicity, we abbreviate the Bayesian estimates under SELF, GELF, WSELF, KLF, M/Q SELF, and PLF as BS, BG, BW, BK, BM, and BP, respectively, and write Inf and Non-Inf for the informative and non-informative priors. A sketch of the simulation loop used to produce such tables is given below.
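For reproducibility, the Monte Carlo loop can be sketched as follows (the wrapper mse_study is our own hypothetical name, reusing pffc_sample from Section 1.1 and em_grd from Section 2.2; any of the other estimators can be substituted for em_grd):

```python
import numpy as np

def mse_study(R, n, m, k, Q, sigma=0.8, beta=1.0, rng=None):
    """Monte Carlo bias and MSE of the EM estimator of sigma under one scheme."""
    rng = np.random.default_rng(rng)
    est = np.empty(R)
    for r in range(R):
        t = pffc_sample(n, m, k, Q, sigma, beta, rng)
        est[r], _ = em_grd(t, Q, n, k)
    return (est - sigma).mean(), ((est - sigma) ** 2).mean()   # bias, MSE
```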
Based on the above tables, the following conclusions can be drawn from these research findings:
(1)
In ML estimation, the overall performance of the NR method is superior to that of the EM algorithm.
(2)
With the increase in sample size, the accuracy of point estimation and interval estimation improves under various estimation methods.
(3)
For the parameters, RF, and HF of the GRD, BE generally outperforms ML estimation.
(4)
In BE, estimates based on the informative prior outperform those based on the non-informative prior.
(5)
In most cases, the MSEs of the estimates obtained by the PFFC method with k = 2 and k = 3 are similar.
(6)
Under the MCMC sampling method, the Bayesian estimator based on PLF performs better than those based on the other loss functions, whereas the one based on M/Q SELF performs worst.
(7)
Generally speaking, censoring scheme c outperforms the other two censoring schemes.
(8)
As the sample size increases, the AWs of the different CIs decrease. The AW of the Boot-t CI is greater than that of the Bayesian Cred.CI. When k = 3, the AWs of all CIs perform best.
(9)
When an informative prior is available, the AW of the Bayesian Cred.CI is smaller than that under the non-informative prior.

6. Analysis of Real Data

In this section, we use two sets of real data to verify the feasibility of the methods used in this article and demonstrate their practical application. Data 1 represents the survival times of guinea pigs injected with different doses of Mycobacterium tuberculosis [45]. The data are as follows: 12, 15, 22, 24, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52, 53, 54, 54, 55, 56, 57, 58, 58, 59, 60, 60, 60, 60, 61, 62, 63, 65, 65, 67, 68, 70, 70, 72, 73, 75, 76, 76, 81, 83, 84, 85, 87, 91, 95, 96, 98, 99, 109, 110, 121, 127, 129, 131, 143, 146, 146, 175, 175, 211, 233, 258, 258, 263, 297, 341, 341, 376. Alshunnar et al. [46] analyzed these data using the proportional TTT transform, and the results showed that the GRD model fits them reasonably well. Data 2 represents daily confirmed COVID-19 deaths [47], as follows: 1, 1, 2, 4, 5, 1, 1, 3, 6, 6, 4, 1, 5, 6, 6, 8, 5, 7, 7, 9, 9, 15, 17, 11, 13, 5, 14, 5, 13, 9, 19, 15, 11, 14, 12, 11, 7, 13, 10, 20, 22, 21, 12, 14, 9, 14, 7, 16, 17, 13, 21, 11, 11, 8, 11, 12, 15, 21, 20, 18, 15, 14, 21, 16, 11, 28, 29, 19, 14, 19, 29, 34, 34, 46, 46, 47, 36, 38, 40, 32, 39, 34, 35, 36, 35, 45, 62. To verify whether the GRD fits these data, we visualize the checks in Figure 3 and Figure 4, from which it can be seen that the GRD fits Data 2 effectively.
Next, we randomly partition Data 1 into 24 groups, each with k = 3 items: {12, 15, 22}, {24, 24, 32}, {32, 33, 34}, {38, 38, 43}, {44, 48, 52}, {53, 54, 54}, {55, 56, 57}, {58, 58, 59}, {60, 60, 60}, {60, 61, 62}, {63, 65, 65}, {67, 68, 70}, {70, 72, 73}, {75, 76, 76}, {81, 83, 84}, {85, 87, 91}, {95, 96, 98}, {99, 109, 110}, {121, 127, 129}, {131, 143, 146}, {146, 175, 175}, {211, 233, 258}, {258, 263, 297}, {341, 341, 376}. Data 2 is randomly partitioned into 29 groups, each with k = 3 items: {1, 1, 2}, {4, 5, 1}, {1, 3, 6}, {6, 4, 1}, {5, 6, 6}, {8, 5, 7}, {7, 9, 9}, {15, 17, 11}, {13, 5, 14}, {5, 13, 9}, {19, 15, 11}, {14, 12, 11}, {7, 13, 10}, {20, 22, 21}, {12, 14, 9}, {14, 7, 16}, {17, 13, 21}, {11, 11, 8}, {11, 12, 15}, {21, 20, 18}, {15, 14, 21}, {16, 11, 28}, {29, 19, 14}, {19, 29, 34}, {34, 46, 46}, {47, 36, 38}, {40, 32, 39}, {34, 35, 36}, {35, 45, 62}. To illustrate the proposed estimation methods, we set up three PFFC schemes to take $m_{\mathrm{Data1}}=12$ observations and $m_{\mathrm{Data2}}=15$ observations from Data 1 and Data 2, respectively, generating three random PFFC samples, as shown in Table 11 and Table 12. Before the real-data analysis, it is necessary to verify the existence and uniqueness of the ML estimation solution of the GRD model under the PFFC sample. Since Equations (7) and (8) are nonlinear and cannot be solved directly, we verify the existence and uniqueness of the ML estimates graphically. Taking the censored sample (ii) of Data 1 as an example, by drawing the log-LF curve we can intuitively observe that the LF has a unique local maximum, which supports the existence and uniqueness of the ML estimation solution (see Figure 5).
In the following analysis, we examine the parameters, RF, and HF of the GRD model based on these two sets of real data. Because the NR method converges faster than the EM algorithm in ML estimation and attains higher numerical accuracy, we again use the NR method to obtain the ML estimates in this section. Table 13 and Table 14 provide the ML estimates and Bayesian estimates of the parameters, RF, and HF of the model under the two data sets. For simplicity, we use MLE to denote the ML estimates. In addition, Table 15 and Table 16 show the lower and upper bounds of the Boot-t CIs and Bayesian Cred.CIs for the parameters, RF, and HF of the GRD model under the two data sets, respectively. The $100(1-\alpha)\%$ CIs are computed with $\alpha=0.05$, and both the bootstrap sampling and the MCMC algorithm were repeated 5000 times, with a burn-in period of 500 iterations.
To verify that the Markov chain obtained during MCMC sampling reaches its equilibrium state, Figure 6 shows the parameter trajectory plots, histograms, and autocorrelation plots for the censored sample (i) of Data 1 during the MCMC sampling process. It can be seen that the posterior distributions exhibit clear normal-distribution characteristics, which also illustrates the applicability and effectiveness of the GRD model in MCMC sampling.

7. Conclusions

Nowadays, censored testing is commonly used in life testing to reduce experimental time and cost, and different censoring methods can yield different results. This study focuses on the PFFC test. Compared with traditional censoring tests, its advantage lies not only in significantly reducing the testing time that first-failure censoring requires for large samples but also in meeting the need to explore the mechanisms of failed and non-failed products in successive censoring tests. This article combines the PFFC test with the GRD model to perform ML estimation and BE of the parameters, RF, and HF of the model. Additionally, we construct Boot-t CIs and Bayesian Cred.CIs for the parameters, RF, and HF. Through simulation studies and practical applications, we conduct an in-depth analysis and comparison of the performance of the adopted methods under different costs and censoring schemes. The results indicate that the PFFC method is an effective strategy for statistical inference in reliability studies: it optimizes resource utilization and provides more accurate and practical statistical tools for reliability analysis, improving both the efficiency and the accuracy of the research while ensuring product quality. In the future, we will strive to extend the combination of PFFC experiments and GRD models, explore their performance under different conditions, optimize sample-size selection, and compare them with other statistical methods. The research will also focus on integrating practical applications with other disciplinary fields to further deepen the understanding of PFFC and expand its scope of application.

Author Contributions

Conceptualization, Q.G. and R.C.; methodology, Q.G. and H.R.; software, Q.G. and R.C.; writing—original draft preparation, Q.G. and F.Z.; writing—review and editing, Q.G., H.R. and F.Z.; funding acquisition, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Science and Technology Research Project of Jiangxi Provincial Department of Education, grant numbers GJJ2200814 and GJJ2203007.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Seong, K.; Lee, K. Exact likelihood inference for parameter of Exponential distribution under combined Generalized progressive hybrid censoring scheme. Symmetry 2022, 14, 1764. [Google Scholar] [CrossRef]
  2. Ateya, S.F.; Kilai, M.; Aldallal, R. Estimation using suggested EM algorithm based on progressively Type-II censored samples from a finite mixture of truncated Type-I generalized logistic distributions with an application. Math. Probl. Eng. 2022, 2022, 1720033. [Google Scholar] [CrossRef]
  3. Alotaibi, R.; Nassar, M.; Elshahhat, A. Estimations of modified Lindley parameters using progressive Type-II censoring with applications. Axioms 2023, 12, 171. [Google Scholar] [CrossRef]
  4. Balasooriya, U. Failure-censored reliability sampling plans for the exponential distribution. J. Stat. Comput. Simul. 1995, 52, 337–349. [Google Scholar] [CrossRef]
  5. Dykstra, O. Theory and technique of variation research. Technometrics 1964, 7, 654–655. [Google Scholar] [CrossRef]
  6. Wu, S.J.; Kuş, C. On estimation based on progressive first-failure censored sampling. Comput. Stat. Data Anal. 2009, 53, 3659–3670. [Google Scholar] [CrossRef]
  7. Shi, X.L.; Shi, Y.M. Estimation of stress-strength reliability for beta log Weibull distribution using progressive first-failure censored samples. Qual. Reliab. Eng. Int. 2023, 39, 1352–1375. [Google Scholar] [CrossRef]
  8. Abu-Moussa, M.H.; Alsadat, N.; Sharawy, A. On estimation of reliability functions for the extended Rayleigh distribution under progressive first-failure censoring model. Axioms 2023, 12, 680. [Google Scholar] [CrossRef]
  9. Elshahhat, A.; Sharma, V.K.; Mohammed, H.S. Statistical analysis of progressively first-failure censored data via beta-binomial removals. Aims Math. 2023, 8, 22419–22446. [Google Scholar] [CrossRef]
  10. Eliwa, M.S.; Ahmed, E.A. Reliability analysis of constant partially accelerated life tests under progressive first-failure Type-II censored data from Lomax model: EM and MCMC algorithms. Aims Math. 2023, 8, 29–60. [Google Scholar] [CrossRef]
  11. Surles, J.G.; Padgett, W.J. Inference for reliability and stress-strength for a scaled Burr type X distribution. Lifetime Data Anal. 2001, 7, 187–200. [Google Scholar] [CrossRef]
  12. Shen, Z.J.; Alrumayh, A.; Ahmad, Z.; Shanab, R.A.; Mutairi, M.A.; Aldallal, R. A new generalized Rayleigh distribution with analysis to big data of an online community. Alex. Eng. J. 2022, 61, 11523–11535. [Google Scholar] [CrossRef]
  13. Rabie, A.; Hussam, E.; Muse, A.H.; Aldallal, R.A.; Alharthi, A.S.; Aljohani, H.M. Estimations in a constant-stress partially accelerated life test for generalized Rayleigh distribution under Type-II hybrid censoring scheme. J. Mathem. 2022, 2022, 6307435. [Google Scholar] [CrossRef]
  14. Ren, J.; Gui, W.H. Inference and optimal censoring scheme for progressively Type-II censored competing risks model for generalized Rayleigh distribution. Comput. Stat. 2021, 36, 479–513. [Google Scholar] [CrossRef]
  15. Yang, L.Z.; Tsang, E.C.C.; Wang, X.Z.; Zhang, C.L. ELM parameter estimation in view of maximum likelihood. Neurocomputing 2023, 557, 126704. [Google Scholar] [CrossRef]
  16. Liu, Y.; Liu, B.D. Estimating unknown parameters in uncertain differential equation by maximum likelihood estimation. Soft. Comput. 2022, 26, 2773–2780. [Google Scholar] [CrossRef]
  17. Rasekhi, M.; Saber, M.M.; Hamedani, G.G.; El-Raouf, M.M.A.; Aldallal, R.; Gemeay, A.M. Approximate maximum likelihood estimations for the parameters of the generalized gudermannian distribution and its characterizations. J. Math. 2022, 2022, 2314–4629. [Google Scholar] [CrossRef]
  18. Cai, Y.X.; Gui, W.H. Classical and Bayesian inference for a progressive first-failure censored left-truncated normal distribution. Symmetry 2021, 13, 490. [Google Scholar] [CrossRef]
  19. Ng, H.K.T.; Chan, P.S.; Balakrishnan, N. Estimation of parameters from progressively censored data using EM algorithm. Comput. Stat. Data 2002, 39, 371–386. [Google Scholar] [CrossRef]
  20. Cho, H.; Kirch, C. Bootstrap confidence intervals for multiple change points based on moving sum procedures. Comput. Stat. Data Anal. 2022, 175, 107552. [Google Scholar] [CrossRef]
  21. Saegusa, T.; Sugasawa, S.; Lahiri, P. Parametric bootstrap confidence intervals for the multivariate Fay–Herriot model. J. Surv. Stat. Methodol. 2022, 10, 115–130. [Google Scholar] [CrossRef]
  22. Sroka, L. Comparison of jackknife and bootstrap methods in estimating confidence intervals. Sci. Pap. Silesian Univ. Technol. Organ. Manag. Ser. 2021, 153, 445–455. [Google Scholar] [CrossRef]
  23. Song, X.F.; Xiong, Z.Y.; Gui, W.H. Parameter estimation of exponentiated half-logistic distribution for left-truncated and right-censored data. Mathematics 2022, 10, 3838. [Google Scholar] [CrossRef]
  24. Han, M. Take a look at the hierarchical Bayesian estimation of parameters from several different angles. Commun. Stat. Theory Methods 2022, 52, 7718–7730. [Google Scholar] [CrossRef]
  25. Tiago, S.; Florian, L.; Denis, H. Bayesian estimation of decay parameters in Hawkes processes. Intell. Data Anal. 2023, 27, 223–240. [Google Scholar]
  26. Bangsgaard, K.O.; Andersen, M.; Heaf, J.G.; Ottesen, J.T. Bayesian parameter estimation for phosphate dynamics during hemodialysis. Math. Biosci. Eng. 2023, 20, 4455–4492. [Google Scholar] [CrossRef] [PubMed]
  27. Vaglio, M.; Pacilio, C.; Maselli, A.; Pani, P. Bayesian parameter estimation on boson-star binary signals with a coherent inspiral template and spin-dependent quadrupolar corrections. Phys. Rev. D 2023, 108, 023021. [Google Scholar] [CrossRef]
  28. Renjini, K.R.; Sathar, E.I.A.; Rajesh, G. A study of the effect of loss functions on the Bayes estimates of dynamic cumulative residual entropy for Pareto distribution under upper record values. J. Stat. Comput. Simul. 2016, 86, 324–339. [Google Scholar] [CrossRef]
  29. Sadoun, A.; Zeghdoudi, H.; Attoui, F.Z.; Remita, M.R. On Bayesian premium estimators for gamma lindley model under squared error loss function and linex loss function. J. Math. Stat. 2017, 13, 284–291. [Google Scholar] [CrossRef]
  30. Malgorzata, M. Bayesian estimation for non zero inflated modified power series distribution under linex and generalized entropy loss functions. Commun. Stat. Theory Methods 2016, 45, 3952–3969. [Google Scholar]
  31. Wasan, M.T. Parametric Estimation; McGraw-Hill Book Company: New York, NY, USA, 1970. [Google Scholar]
  32. Norstrom, J.G. The use of precautionary loss functions in risk analysis. IEEE Trans. Reliab. Theory 1996, 45, 400–403. [Google Scholar] [CrossRef]
  33. Han, M. A note on the posterior risk of the entropy loss function. Appl. Math. Model. 2023, 117, 705–713. [Google Scholar] [CrossRef]
  34. Abdel-Aty, Y.; Kayid, M.; Alomani, G. Generalized Bayes estimation based on a joint Type-II censored sample from K-exponential populations. Mathematics 2023, 11, 2190. [Google Scholar] [CrossRef]
  35. Ren, H.P.; Gong, Q.; Hu, X. Estimation of entropy for generalized Rayleigh distribution under progressively Type-II censored samples. Axioms 2023, 12, 776. [Google Scholar] [CrossRef]
  36. Han, M. E-Bayesian estimations of parameter and its evaluation standard: E-MSE (expected mean square error) under different loss functions. Commun. Stat. Simul. Comput. 2021, 50, 1971–1988. [Google Scholar] [CrossRef]
  37. Ali, S.; Aslam, M.; Kazmi, S.M.A. A study of the effect of the loss function on Bayes estimate, posterior risk and hazard function for lindley distribution. Appl. Math. Model. 2013, 37, 6068–6078. [Google Scholar] [CrossRef]
  38. Wei, Z.D. Parameter estimation of buffer autoregressive models based on Bayesian inference. Stat. Appl. 2023, 12, 32–39. [Google Scholar]
  39. Zhao, S.Y. Parameter estimation of logistic regression model based on MCMC algorithm—A case study of smart sleep bracelet. Mod. Comput. 2022, 28, 57–61. [Google Scholar]
  40. Asar, Y.; Belaghi, R.A. Estimation in Weibull distribution under progressively Type-I hybrid censored data. REVSTAT-Stat. J. 2023, 20, 563–586. [Google Scholar]
  41. Yang, K.; Zhang, Q.Q.; Yu, X.Y.; Dong, X.G. Bayesian inference for a mixture double autoregressive model. Stat. Neerlandica 2023, 77, 188–207. [Google Scholar] [CrossRef]
  42. Chen, X.Y.; Wang, J.J.; Yang, S.J. Semi-parametric hierarchical Bayesian modeling and optimization. Syst. Eng. Electron. 2023, 45, 1580–1588. [Google Scholar]
  43. Wang, X.Y.; Gui, W.H. Bayesian estimation of entropy for burr type XII distribution under progressive Type-II censored data. Mathematics 2021, 9, 313. [Google Scholar] [CrossRef]
  44. Balakrishnan, N.; Sandhu, R.A. A simple simulational algorithm for generating progressive Type-II censored samples. Am. Stat. 1995, 49, 229–230. [Google Scholar] [CrossRef]
  45. Bjerkedal, T. Acquisition of resistance in guinea pigs infected with different doses of virulent tubercle bacilli. Am. J. Epidemiol. 1960, 72, 130–148. [Google Scholar] [CrossRef] [PubMed]
  46. Alshunnar, F.S.; Raqab, M.Z.; Kundu, D. On the comparison of the Fisher information of the log-normal and generalized Rayleigh distributions. J. Appl. Stat. 2010, 37, 391–404. [Google Scholar] [CrossRef]
  47. Alahmadi, A.A.; Alqawba, M.; Almutiry, W.; Shawki, A.W.; Alrajhi, S.; Al-Marzouki, S.; Elgarhy, M. A new version of weighted Weibull distribution: Modelling to COVID-19 data. Discret. Dyn. Nat. Soc. 2022, 2022, 3994361. [Google Scholar] [CrossRef]
Figure 1. Curve plots of the PDF, RF, and HF under the GRD.
Figure 2. Logarithmic LF curve of β in simulated data.
Figure 3. Empirical distribution diagram and CDF diagram of the GRD model under real Data 2.
Figure 4. P-P and Q-Q plots of real Data 2.
Figure 5. Partial derivative of the logarithmic LF.
Figure 6. Trajectory plot, histogram, and autocorrelation plot of the parameters for sample (i) during the MCMC sampling process.
Table 1. Algorithm for constructing Boot-t CIs.
Step 1: Set the number of bootstrap resamples $B$.
Step 2: Obtain the ML estimates $\hat{\sigma}$ and $\hat{\beta}$ of the parameters $\sigma$ and $\beta$ from the original censored sample $t = (t_1, t_2, \ldots, t_m)$.
Step 3: Under the same censoring scheme, substitute the $\hat{\sigma}$ and $\hat{\beta}$ obtained in Step 2 into the CDF of the GRD to generate a bootstrap sample $t^{*}$; from $t^{*}$, obtain the bootstrap ML estimates $\hat{\sigma}^{*}$ and $\hat{\beta}^{*}$ and their variances $\mathrm{Var}(\hat{\sigma}^{*})$ and $\mathrm{Var}(\hat{\beta}^{*})$.
Step 4: Compute the statistics $\eta = \sqrt{B}\,(\hat{\sigma}^{*} - \hat{\sigma})/\sqrt{\mathrm{Var}(\hat{\sigma}^{*})}$ and $\tau = \sqrt{B}\,(\hat{\beta}^{*} - \hat{\beta})/\sqrt{\mathrm{Var}(\hat{\beta}^{*})}$, whose CDFs are $H(t) = P(\eta \le t)$ and $W(t) = P(\tau \le t)$.
Step 5: Repeat Steps 3–4 a total of $B$ times, yielding the Boot-t statistics $(\eta_1, \eta_2, \ldots, \eta_{N_{Bt}})$ and $(\tau_1, \tau_2, \ldots, \tau_{N_{Bt}})$, where $N_{Bt} = B$.
Step 6: Sort the Boot-t statistics in ascending order to obtain $(\eta_{(1)}, \eta_{(2)}, \ldots, \eta_{(N_{Bt})})$ and $(\tau_{(1)}, \tau_{(2)}, \ldots, \tau_{(N_{Bt})})$.
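To make Table 1 concrete, the following Python sketch implements Steps 1–6 for the GRD under PFFC, plus the customary final step (not listed in Table 1) that inverts the sorted studentized statistics into interval endpoints. It is an illustration under our own assumptions rather than the authors' code: we take the GRD CDF to be $F(t) = (1 - e^{-(\beta t)^2})^{\sigma}$ (consistent with the point estimates reported in Tables 13 and 14, but still our reading), estimate variances from a finite-difference observed information matrix, and use the unscaled studentized statistic; all function names and tuning values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def grd_quantile(p, sigma, beta):
    # Inverse of the assumed GRD CDF F(t) = (1 - exp(-(beta*t)**2))**sigma
    return np.sqrt(-np.log(1.0 - p ** (1.0 / sigma))) / beta

def pffc_loglik(params, t, Q, k):
    # Log-likelihood of a PFFC sample t under removal scheme Q with group size k
    s, b = params
    if s <= 0 or b <= 0:
        return -np.inf
    t = np.asarray(t, float)
    z = (b * t) ** 2
    G = -np.expm1(-z)                                  # 1 - exp(-z)
    logf = np.log(2.0 * s * b * b * t) - z + (s - 1.0) * np.log(G)
    logS = np.log1p(-G ** s)                           # log survival
    w = k * (np.asarray(Q) + 1) - 1
    return len(t) * np.log(k) + logf.sum() + (w * logS).sum()

def mle(t, Q, k, x0=(1.0, 1.0)):
    # Step 2: maximum likelihood estimates of (sigma, beta)
    return minimize(lambda p: -pffc_loglik(p, t, Q, k), x0,
                    method="Nelder-Mead").x

def mle_var(t, Q, k, theta, h=1e-4):
    # Variances from a central-difference observed information matrix
    f = lambda p: pffc_loglik(p, t, Q, k)
    theta = np.asarray(theta, float)
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4.0 * h * h)
    return np.diag(np.linalg.inv(-H))

def pffc_sample(sigma, beta, k, Q):
    # Step 3: Balakrishnan-Sandhu (1995) uniform algorithm, adapted to
    # first-failure data through the group CDF 1 - (1 - F)**k
    m = len(Q)
    V = rng.uniform(size=m) ** (1.0 / (np.arange(1, m + 1) + np.cumsum(Q[::-1])))
    U = 1.0 - np.cumprod(V[::-1])
    return grd_quantile(1.0 - (1.0 - U) ** (1.0 / k), sigma, beta)

def boot_t_ci(t, Q, k, B=1000, alpha=0.05):
    s_hat, b_hat = mle(t, Q, k)
    vs, vb = mle_var(t, Q, k, (s_hat, b_hat))
    eta, tau = np.empty(B), np.empty(B)
    for r in range(B):                                 # Steps 3-5
        ts = pffc_sample(s_hat, b_hat, k, Q)
        s_st, b_st = mle(ts, Q, k, x0=(s_hat, b_hat))
        vs_st, vb_st = mle_var(ts, Q, k, (s_st, b_st))
        eta[r] = (s_st - s_hat) / np.sqrt(vs_st)
        tau[r] = (b_st - b_hat) / np.sqrt(vb_st)
    eta.sort(); tau.sort()                             # Step 6
    lo, hi = int(B * alpha / 2), int(B * (1 - alpha / 2)) - 1
    return ((s_hat - eta[hi] * np.sqrt(vs), s_hat - eta[lo] * np.sqrt(vs)),
            (b_hat - tau[hi] * np.sqrt(vb), b_hat - tau[lo] * np.sqrt(vb)))

Q = [0] * 19 + [10]                                    # scheme a, n = 30, m = 20
data = pffc_sample(0.8, 1.0, k=2, Q=Q)                 # illustrative parameters
print(boot_t_ci(data, Q, k=2, B=200))
```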
Table 2. Bayesian estimators under different loss functions.
Loss Function | Bayes Estimator
SELF: $L(\hat{\Theta}, \Theta) = (\hat{\Theta} - \Theta)^2$ | $E(\Theta \mid t)$
GELF: $L(\hat{\Theta}, \Theta) = (\hat{\Theta}/\Theta)^{c} - c \ln(\hat{\Theta}/\Theta) - 1$ | $[E(\Theta^{-c} \mid t)]^{-1/c}$
WSELF: $L(\hat{\Theta}, \Theta) = (\hat{\Theta} - \Theta)^2 / \Theta$ | $[E(\Theta^{-1} \mid t)]^{-1}$
KLF: $L(\hat{\Theta}, \Theta) = \left( \sqrt{\hat{\Theta}/\Theta} - \sqrt{\Theta/\hat{\Theta}} \right)^2$ | $\sqrt{E(\Theta \mid t) / E(\Theta^{-1} \mid t)}$
M/Q SELF: $L(\hat{\Theta}, \Theta) = \left( 1 - \hat{\Theta}/\Theta \right)^2$ | $E(\Theta^{-1} \mid t) / E(\Theta^{-2} \mid t)$
PLF: $L(\hat{\Theta}, \Theta) = (\hat{\Theta} - \Theta)^2 / \hat{\Theta}$ | $\sqrt{E(\Theta^{2} \mid t)}$
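Every estimator in Table 2 is a functional of one or two posterior moments, so once posterior draws of $\Theta$ are in hand the whole table reduces to a few lines of arithmetic. A minimal sketch (the gamma draws are only a stand-in for a real posterior sample; c is the GELF shape parameter, and all names are ours):

```python
import numpy as np

def bayes_estimates(theta, c=0.5):
    """Bayes estimators of Table 2 from posterior draws theta (illustrative)."""
    e = np.mean                                 # Monte Carlo posterior expectation
    return {
        "SELF":     e(theta),
        "GELF":     e(theta ** -c) ** (-1.0 / c),
        "WSELF":    1.0 / e(1.0 / theta),
        "KLF":      np.sqrt(e(theta) / e(1.0 / theta)),
        "M/Q SELF": e(1.0 / theta) / e(theta ** -2),
        "PLF":      np.sqrt(e(theta ** 2)),
    }

# Stand-in posterior sample; real draws would come from the MCMC sampler.
draws = np.random.default_rng(0).gamma(2.0, 0.5, size=5000)
print(bayes_estimates(draws, c=0.5))
```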
Table 3. Bayesian estimators of the parameters, RF, and HF under different loss functions, computed from the MCMC draws $(\sigma^{(i)}, \beta^{(i)})$, $i = M_0 + 1, \ldots, M$, after discarding a burn-in of $M_0$ iterations. Each formula is stated for a generic parametric function $\Theta(\sigma, \beta)$; the RF and HF estimators replace $\Theta$ by $r$ and $h$, respectively.
SELF: $\hat{\Theta}_{BS} = \frac{1}{M - M_0} \sum_{i=M_0+1}^{M} \Theta(\sigma^{(i)}, \beta^{(i)})$
GELF: $\hat{\Theta}_{BG} = \left[ \frac{1}{M - M_0} \sum_{i=M_0+1}^{M} \Theta^{-c}(\sigma^{(i)}, \beta^{(i)}) \right]^{-1/c}$
WSELF: $\hat{\Theta}_{BW} = \left[ \frac{1}{M - M_0} \sum_{i=M_0+1}^{M} \Theta^{-1}(\sigma^{(i)}, \beta^{(i)}) \right]^{-1}$
KLF: $\hat{\Theta}_{BK} = \left[ \sum_{i=M_0+1}^{M} \Theta(\sigma^{(i)}, \beta^{(i)}) \Big/ \sum_{i=M_0+1}^{M} \Theta^{-1}(\sigma^{(i)}, \beta^{(i)}) \right]^{1/2}$
M/Q SELF: $\hat{\Theta}_{BM} = \sum_{i=M_0+1}^{M} \Theta^{-1}(\sigma^{(i)}, \beta^{(i)}) \Big/ \sum_{i=M_0+1}^{M} \Theta^{-2}(\sigma^{(i)}, \beta^{(i)})$
PLF: $\hat{\Theta}_{BP} = \left[ \frac{1}{M - M_0} \sum_{i=M_0+1}^{M} \Theta^{2}(\sigma^{(i)}, \beta^{(i)}) \right]^{1/2}$
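Table 3 simply applies the Table 2 rules to the post-burn-in MCMC draws, evaluating r(t) and h(t) at each draw before averaging. A minimal sketch, again assuming the parameterization $F(t) = (1 - e^{-(\beta t)^2})^{\sigma}$ and substituting synthetic chains for the paper's sampler output:

```python
import numpy as np

def grd_rf(t, sigma, beta):
    """GRD reliability, assuming F(t) = (1 - exp(-(beta*t)**2))**sigma."""
    return 1.0 - (-np.expm1(-(beta * t) ** 2)) ** sigma

def grd_hf(t, sigma, beta):
    """GRD hazard under the same (assumed) parameterization."""
    z = (beta * t) ** 2
    G = -np.expm1(-z)                                   # 1 - exp(-z)
    pdf = 2.0 * sigma * beta ** 2 * t * np.exp(-z) * G ** (sigma - 1.0)
    return pdf / (1.0 - G ** sigma)

# Synthetic stand-ins for the sampler's chains of sigma and beta.
rng = np.random.default_rng(7)
sigma_chain = rng.gamma(50.0, 0.016, size=12000)        # mean 0.8
beta_chain = rng.gamma(80.0, 0.0125, size=12000)        # mean 1.0
M0 = 2000                                               # burn-in, as in Table 3

rf_draws = grd_rf(0.8, sigma_chain[M0:], beta_chain[M0:])
hf_draws = grd_hf(0.8, sigma_chain[M0:], beta_chain[M0:])

# SELF and PLF rows of Table 3 for r(t = 0.8); the remaining rows follow by
# feeding rf_draws and hf_draws to bayes_estimates() from the previous sketch.
print(rf_draws.mean(), np.sqrt(np.mean(rf_draws ** 2)))
```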
Table 4. Censoring schemes.
Scheme a: $Q_1 = Q_2 = \cdots = Q_{m-1} = 0$, $Q_m = n - m$
Scheme b: $Q_1 = Q_2 = \cdots = Q_m = 0$
Scheme c: $Q_1 = n - m$, $Q_2 = Q_3 = \cdots = Q_m = 0$
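In code, the three schemes of Table 4 are simply different removal vectors Q of length m whose entries sum to n − m (zero in scheme b); a small helper such as the following (our naming) produces them for the simulation settings of Tables 5–10:

```python
def scheme(name, n, m):
    """Removal vectors of Table 4 (illustrative encoding)."""
    if name == "a":                     # all n - m removals at the last failure
        return [0] * (m - 1) + [n - m]
    if name == "b":                     # no progressive removals
        return [0] * m
    if name == "c":                     # all n - m removals at the first failure
        return [n - m] + [0] * (m - 1)
    raise ValueError(f"unknown scheme {name!r}")

print(scheme("a", 30, 20))   # [0, 0, ..., 0, 10]
print(scheme("c", 30, 20))   # [10, 0, ..., 0, 0]
```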
Table 5. ML estimates and corresponding MSEs (in parentheses) of σ, β, r(t), and h(t) under different k values and censoring schemes based on the GRD.
k (n, m) Q | NR | EM || k (n, m) Q | NR | EM
σ 2(30, 20)a0.8842(0.0651)0.9722(0.0313)r(t = 0.8)2(30, 20)a0.4252(0.0107)0.3691(0.0164)
b0.8928(0.0586)1.6036(0.6622)b0.4216(0.0102)0.5000(0.0205)
c0.8663(0.0447)0.9553(0.0274)c0.4308(0.0083)0.3780(0.0167)
(50, 30)a0.8579(0.0382)0.9730(0.0308)(50, 30)a0.4334(0.0064)0.3596(0.0155)
b0.8527(0.0294)1.8086(1.0321)b0.4308(0.0069)0.5422(0.0229)
c0.8487(0.0309)0.9583(0.0270)c0.4385(0.0050)0.3768(0.0141)
(80, 50)a0.8423(0.0205)0.9710(0.0298)(80, 50)a0.4233(0.0037)0.3681(0.0112)
b0.8262(0.0128)1.7139(0.8430)b0.4398(0.0038)0.5304(0.0145)
c0.8291(0.0154)0.9543(0.0251)c0.4412(0.0030)0.3791(0.0105)
3(30, 20)a0.9032(0.0630)0.9838(0.0344)3(30, 20)a0.4102(0.0139)0.3451(0.0215)
b0.8763(0.0531)1.5236(0.5272)b0.4144(0.0135)0.5511(0.0258)
c0.8607(0.0444)0.9715(0.0307)c0.4264(0.0088)0.3554(0.0211)
(50, 30)a0.8586(0.0327)0.9847(0.0344)(50, 30)a0.4208(0.0103)0.3351(0.0199)
b0.8432(0.0237)1.6931(0.8001)b0.4258(0.0081)0.6029(0.0338)
c0.8450(0.0246)0.9722(0.0303)c0.4340(0.0060)0.3530(0.0172)
(80, 50)a0.8343(0.0158)0.9829(0.0336)(80, 50)a0.4320(0.0058)0.3394(0.0166)
b0.8830(0.0142)1.6227(0.6784)b0.4348(0.0052)0.5918(0.0261)
c0.8273(0.0134)0.9703(0.0295)c0.4380(0.0048)0.3529(0.0147)
β 2(30, 20)a1.0964(0.0808)1.2473(0.0909)h(t = 0.8)2(30, 20)a2.1290(1.0525)2.5528(1.1951)
b1.0965(0.0717)1.2863(0.1169)b2.1370(0.9762)2.3645(1.1512)
c1.0634(0.0451)1.2245(0.0823)c1.9231(0.4184)2.4764(1.1039)
(50, 30)a1.0595(0.0496)1.2605(0.0897)(50, 30)a1.9924(0.5380)2.5931(1.1116)
b1.0701(0.0408)1.2821(0.1043)b1.9558(0.4125)2.2209(0.7645)
c1.0472(0.0265)1.2248(0.0745)c1.8726(0.2336)2.4634(0.9240)
(80, 50)a1.0347(0.0264)1.2405(0.0702)(80, 50)a1.8656(0.2287)2.4997(0.7864)
b1.0242(0.0202)1.2710(0.0868)b1.8681(0.2692)2.2012(0.4902)
c1.0209(0.0142)1.2142(0.0600)c1.8050(0.1167)2.4096(0.6838)
3(30, 20)a1.1031(0.0941)1.2982(0.1245)3(30, 20)a2.2133(1.3141)2.7623(1.7360)
b1.1070(0.0926)1.1858(0.0659)b2.1488(1.1277)1.9918(0.6252)
c1.0673(0.0472)1.2762(0.1162)c1.9823(0.5537)2.6855(1.6634)
(50, 30)a1.0775(0.0598)1.3101(0.1181)(50, 30)a2.0362(0.7900)2.7898(1.5223)
b1.0662(0.0518)1.1632(0.0463)b1.9644(0.5173)1.7905(0.3320)
c1.0530(0.0329)1.2729(0.0979)c1.9191(0.3502)2.6459(1.2474)
(80, 50)a1.0459(0.0349)1.2973(0.1018)(80, 50)a1.9041(0.3268)2.7242(1.2120)
b1.0404(0.0262)1.1571(0.0358)b1.8764(0.2589)1.7889(0.1754)
c1.0285(0.0169)1.2680(0.0868)c1.8497(0.1807)2.6141(1.0279)
Table 6. Bayesian estimates and corresponding MSEs (in parentheses) of σ under different k values and censoring schemes based on the GRD.
k (n, m) Q | Inf: BS BG BW BK BM BP | Non-Inf: BS BG BW BK BM BP
2(30, 20)a0.6933 (0.0114)0.6635 (0.0186)0.6540 (0.0213)0.6733 (0.0160)0.6179 (0.0331)0.7140 (0.0074)0.6435 (0.0245)0.6219 (0.0317)0.6144 (0.0344)0.6288 (0.0293)0.5847 (0.0464)0.6572 (0.0204)
b0.6435 (0.0245)0.6307 (0.0287)0.6265 (0.0301)0.6349 (0.0272)0.6097 (0.0362)0.6522 (0.0218)0.6344 (0.0274)0.6157 (0.0340)0.6095 (0.0363)0.6218 (0.0317)0.5853 (0.0461)0.6470 (0.0234)
c0.6672 (0.0176)0.6420 (0.0250)0.6340 (0.0276)0.6504 (0.0224)0.6032 (0.0387)0.6848 (0.0133)0.6390 (0.0259)0.6182 (0.0331)0.6114 (0.0356)0.6251 (0.0306)0.5849 (0.0463)0.6532 (0.0215)
(50, 30)a0.7184 (0.0067)0.6977 (0.0105)0.6911 (0.0118)0.7074 (0.0091)0.6660 (0.0180)0.7328 (0.0045)0.6476 (0.0232)0.6282 (0.0295)0.6215 (0.0319)0.6344 (0.0274)0.5944 (0.0423)0.6599 (0.0196)
b0.6816 (0.0140)0.6704 (0.0168)0.6668 (0.0178)0.6741 (0.0158)0.6526 (0.0217)0.6893 (0.0123)0.6793 (0.0146)0.6643 (0.0184)0.6590 (0.0199)0.6691 (0.0171)0.6370 (0.0266)0.6888 (0.0124)
c0.7472 (0.0028)0.7288 (0.0051)0.7225 (0.0060)0.7347 (0.0043)0.6970 (0.0106)0.7593 (0.0017)0.7158 (0.0071)0.6952 (0.0110)0.6885 (0.0124)0.7020 (0.0096)0.6622 (0.0190)0.7298 (0.0049)
(80, 50)a0.7445 (0.0031)0.7306 (0.0048)0.7260 (0.0055)0.7352 (0.0042)0.7078 (0.0085)0.7538 (0.0021)0.7215 (0.0062)0.7102 (0.0081)0.7064 (0.0088)0.7139 (0.0074)0.6914 (0.0118)0.7290 (0.0050)
b0.7206 (0.0063)0.7113 (0.0079)0.7083 (0.0084)0.7144 (0.0073)0.6964 (0.0107)0.7269 (0.0053)0.6972 (0.0106)0.6884 (0.0125)0.6855 (0.0131)0.6914 (0.0118)0.6744 (0.0158)0.7033 (0.0093)
c0.7583 (0.0017)0.7440 (0.0031)0.7393 (0.0037)0.7487 (0.0026)0.7205 (0.0063)0.7680 (0.0010)0.7361 (0.0041)0.7261 (0.0055)0.7227 (0.0060)0.7294 (0.0050)0.7092 (0.0082)0.7427 (0.0033)
3(30, 20)a0.6917 (0.0117)0.6626 (0.0189)0.6530 (0.0216)0.6721 (0.0164)0.6155 (0.0341)0.7113 (0.0079)0.6709 (0.0167)0.6468 (0.0235)0.6392 (0.0258)0.6549 (0.0211)0.6106 (0.0359)0.6883 (0.0125)
b0.6586 (0.0200)0.6437 (0.0244)0.6389 (0.0260)0.6487 (0.0229)0.6200 (0.0324)0.6688 (0.0172)0.6167 (0.0336)0.5976 (0.0410)0.5914 (0.0435)0.6039 (0.0384)0.5673 (0.0541)0.6297 (0.0290)
c0.6713 (0.0166)0.6526 (0.0217)0.6461 (0.0237)0.6585 (0.0200)0.6189 (0.0328)0.6830 (0.0137)0.6359 (0.0269)0.6169 (0.0335)0.6105 (0.0359)0.6231 (0.0313)0.5845 (0.0464)0.6484 (0.0230)
(50, 30)a0.7162 (0.0070)0.6998 (0.0100)0.6943 (0.0112)0.7052 (0.0090)0.6730 (0.0161)0.7274 (0.0053)0.6567 (0.0206)0.6354 (0.0271)0.6281 (0.0296)0.6422 (0.0249)0.5986 (0.0406)0.6700 (0.0169)
b0.6942 (0.0112)0.6819 (0.0139)0.6778 (0.0149)0.6860 (0.0130)0.6618 (0.0191)0.7024 (0.0095)0.6892 (0.0123)0.6744 (0.0158)0.6695 (0.0170)0.6793 (0.0146)0.6500 (0.0225)0.6991 (0.0102)
c0.7397 (0.0036)0.7236 (0.0058)0.7182 (0.0067)0.7288 (0.0051)0.6964 (0.0107)0.7503 (0.0025)0.7280 (0.0052)0.7115 (0.0078)0.7059 (0.0088)0.7169 (0.0069)0.6839 (0.0135)0.7389 (0.0037)
(80, 50)a0.7388 (0.0037)0.7273 (0.0053)0.7235 (0.0059)0.7311 (0.0048)0.7082 (0.0084)0.7465 (0.0029)0.7197 (0.0065)0.7094 (0.0082)0.7059 (0.0089)0.7127 (0.0076)0.6911 (0.0119)0.7261 (0.0055)
b0.7107 (0.0080)0.7046 (0.0091)0.7026 (0.0095)0.7066 (0.0087)0.6942 (0.0112)0.7146 (0.0073)0.6857 (0.0131)0.6788 (0.0147)0.6765 (0.0152)0.6811 (0.0141)0.6674 (0.0176)0.6903 (0.0120)
c0.7586 (0.0017)0.7490 (0.0026)0.7460 (0.0029)0.7523 (0.0023)0.7341 (0.0043)0.7653 (0.0012)0.7366 (0.0040)0.7275 (0.0053)0.7244 (0.0057)0.7305 (0.0048)0.7120 (0.0077)0.7426 (0.0033)
Table 7. Bayesian estimates and corresponding MSEs (in parentheses) of β under different k values and censoring schemes based on the GRD.
k (n, m) Q | Inf: BS BG BW BK BM BP | Non-Inf: BS BG BW BK BM BP
2(30, 20)a0.8360 (0.0269)0.8049 (0.0381)0.7936 (0.0426)0.8145 (0.0344)0.7456 (0.0647)0.8547 (0.0211)0.7498 (0.0626)0.7019 (0.0889)0.6860 (0.0986)0.7172 (0.0800)0.6251 (0.1406)0.7813 (0.0478)
b0.8910 (0.0119)0.8718 (0.0164)0.8651 (0.0182)0.8779 (0.0149)0.8370 (0.0266)0.9029 (0.0094)0.8739 (0.0159)0.8467 (0.0235)0.8376 (0.0264)0.8556 (0.0209)0.8014 (0.0395)0.8919 (0.0117)
c0.8976 (0.0105)0.8714 (0.0165)0.8622 (0.0190)0.8797 (0.0145)0.8242 (0.0309)0.9138 (0.0074)0.8707 (0.0167)0.8494 (0.0227)0.8419 (0.0250)0.8562 (0.0201)0.8112 (0.0357)0.8841 (0.0134)
(50, 30)a0.8946 (0.0111)0.8665 (0.0178)0.8565 (0.0206)0.8753 (0.0155)0.8147 (0.0343)0.9118 (0.0078)0.8647 (0.0183)0.8459 (0.0238)0.8394 (0.0258)0.8520 (0.0219)0.8130 (0.0350)0.8767 (0.0152)
b0.9215 (0.0062)0.8981 (0.0104)0.8898 (0.0121)0.9056 (0.0089)0.8556 (0.0209)0.9361 (0.0041)0.9197 (0.0064)0.8957 (0.0109)0.8875 (0.0127)0.9034 (0.0093)0.8542 (0.0213)0.9351 (0.0042)
c0.9401 (0.0036)0.9247 (0.0057)0.9194 (0.0065)0.9297 (0.0049)0.8977 (0.0105)0.9499 (0.0025)0.9150 (0.0072)0.9001 (0.0100)0.8952 (0.0110)0.9050 (0.0090)0.8756 (0.0155)0.9248 (0.0057)
(80, 50)a0.9239 (0.0058)0.9086 (0.0083)0.9033 (0.0093)0.9136 (0.0075)0.8817 (0.0140)0.9336 (0.0044)0.9044 (0.0091)0.8885 (0.0124)0.8827 (0.0138)0.8935 (0.0113)0.8581 (0.0201)0.9142 (0.0074)
b0.9445 (0.0031)0.9318 (0.0047)0.9274 (0.0053)0.9360 (0.0041)0.9101 (0.0081)0.9529 (0.0022)0.9328 (0.0045)0.9139 (0.0074)0.9073 (0.0086)0.9200 (0.0064)0.8799 (0.0144)0.9448 (0.0030)
c0.9676 (0.0011)0.9588 (0.0017)0.9557 (0.0020)0.9616 (0.0015)0.9435 (0.0032)0.9732 (0.0007)0.9572 (0.0018)0.9462 (0.0029)0.9424 (0.0033)0.9498 (0.0025)0.9269 (0.0053)0.9642 (0.0013)
3(30, 20)a0.8748 (0.0157)0.8371 (0.0265)0.8235 (0.0311)0.8488 (0.0229)0.7673 (0.0541)0.8973 (0.0105)0.7962 (0.0415)0.7328 (0.0714)0.7092 (0.0846)0.7515 (0.0618)0.6130 (0.1498)0.8322 (0.0282)
b0.8870 (0.0128)0.8544 (0.0212)0.8427 (0.0248)0.8645 (0.0184)0.7938 (0.0425)0.9066 (0.0087)0.8829 (0.0137)0.8378 (0.0263)0.8202 (0.0323)0.8510 (0.0222)0.7409 (0.0671)0.9084 (0.0084)
c0.9227 (0.0060)0.8987 (0.0103)0.8904 (0.0120)0.9064 (0.0088)0.8565 (0.0206)0.9380 (0.0038)0.8914 (0.0118)0.8593 (0.0198)0.8473 (0.0233)0.8691 (0.0171)0.7950 (0.0420)0.9103 (0.0080)
(50, 30)a0.8905 (0.0120)0.8650 (0.0182)0.8562 (0.0207)0.8732 (0.0161)0.8205 (0.0322)0.9069 (0.0087)0.8344 (0.0274)0.8001 (0.0400)0.7879 (0.0450)0.8108 (0.0358)0.7382 (0.0686)0.8552 (0.0210)
b0.9197 (0.0064)0.8864 (0.0129)0.8743 (0.0158)0.8967 (0.0107)0.8240 (0.0310)0.9398 (0.0036)0.8966 (0.0107)0.8655 (0.0181)0.8548 (0.0211)0.8755 (0.0155)0.8116 (0.0355)0.9168 (0.0069)
c0.9333 (0.0044)0.9173 (0.0068)0.9117 (0.0078)0.9225 (0.0060)0.8890 (0.0123)0.9433 (0.0032)0.9165 (0.0070)0.8965 (0.0107)0.8894 (0.0122)0.9029 (0.0094)0.8604 (0.0195)0.9289 (0.0051)
(80, 50)a0.9084 (0.0084)0.8928 (0.0115)0.8874 (0.0127)0.8979 (0.0104)0.8659 (0.0180)0.9185 (0.0066)0.8856 (0.0131)0.8633 (0.0187)0.8553 (0.0209)0.8703 (0.0168)0.8221 (0.0317)0.8992 (0.0102)
b0.9623 (0.0014)0.9461 (0.0029)0.9405 (0.0035)0.9513 (0.0024)0.9176 (0.0068)0.9728 (0.0007)0.9489 (0.0026)0.9253 (0.0056)0.9170 (0.0069)0.9328 (0.0045)0.8834 (0.0136)0.9639 (0.0013)
c0.9642 (0.0013)0.9530 (0.0022)0.9494 (0.0026)0.9567 (0.0019)0.9349 (0.0042)0.9716 (0.0008)0.9358 (0.0041)0.9261 (0.0055)0.9228 (0.0060)0.9293 (0.0050)0.9099 (0.0081)0.9423 (0.0033)
Table 8. Bayesian estimates and corresponding MSEs (in parentheses) of r(t = 0.8) under different k values and censoring schemes based on the GRD.
k (n, m) Q | Inf: BS BG BW BK BM BP | Non-Inf: BS BG BW BK BM BP
2(30, 20)a0.3743 (0.0059)0.3551 (0.0092)0.3481 (0.0106)0.3609 (0.0081)0.3186 (0.0175)0.3859 (0.0042)0.3474 (0.0107)0.3304 (0.0145)0.3245 (0.0160)0.3358 (0.0133)0.3011 (0.0224)0.3581 (0.0086)
b0.3963 (0.0030)0.3773 (0.0054)0.3705 (0.0065)0.3831 (0.0046)0.3423 (0.0118)0.4078 (0.0019)0.3709 (0.0064)0.3501 (0.0102)0.3425 (0.0117)0.3564 (0.0089)0.3111 (0.0195)0.3833 (0.0046)
c0.3939 (0.0033)0.3815 (0.0048)0.3772 (0.0054)0.3854 (0.0043)0.3595 (0.0083)0.4017 (0.0024)0.3721 (0.0053)0.3599 (0.0083)0.3557 (0.0091)0.3638 (0.0076)0.3388 (0.0126)0.3862 (0.0051)
(50, 30)a0.4078 (0.0019)0.3966 (0.0029)0.3927 (0.0034)0.4002 (0.0026)0.3763 (0.0056)0.4148 (0.0013)0.3824 (0.0047)0.3691 (0.0067)0.3645 (0.0075)0.3733 (0.0060)0.3458 (0.0110)0.3908 (0.0036)
b0.4259 (0.0006)0.4164 (0.0012)0.4131 (0.0014)0.4195 (0.0010)0.3996 (0.0026)0.4320 (0.0004)0.3932 (0.0033)0.3808 (0.0049)0.3766 (0.0055)0.3848 (0.0044)0.3595 (0.0083)0.4012 (0.0025)
c0.3972 (0.0029)0.3863 (0.0042)0.3826 (0.0047)0.3898 (0.0037)0.3676 (0.0069)0.4042 (0.0022)0.3864 (0.0042)0.3765 (0.0055)0.3730 (0.0061)0.3796 (0.0051)0.3589 (0.0085)0.3926 (0.0034)
(80, 50)a0.4233 (0.0008)0.4169 (0.0012)0.4147 (0.0013)0.4190 (0.0010)0.4057 (0.0020)0.4274 (0.0006)0.4127 (0.0015)0.4051 (0.0021)0.4025 (0.0023)0.4076 (0.0019)0.3918 (0.0035)0.4176 (0.0011)
b0.4288 (0.0005)0.4228 (0.0008)0.4208 (0.0010)0.4248 (0.0007)0.4126 (0.0015)0.4326 (0.0003)0.4177 (0.0011)0.4101 (0.0017)0.4075 (0.0019)0.4126 (0.0015)0.3970 (0.0029)0.4226 (0.0008)
c0.4274 (0.0006)0.4227 (0.0008)0.4211 (0.0009)0.4243 (0.0007)0.4149 (0.0013)0.4305 (0.0004)0.4258 (0.0006)0.4214 (0.0009)0.4199 (0.0010)0.4228 (0.0008)0.4141 (0.0013)0.4287 (0.0005)
3(30, 20)a0.3833 (0.0046)0.3559 (0.0090)0.3448 (0.0112)0.3636 (0.0076)0.2918 (0.0258)0.3986 (0.0027)0.3514 (0.0099)0.3245 (0.0160)0.3157 (0.0183)0.3330 (0.0139)0.2823 (0.0284)0.3686 (0.0068)
b0.3902 (0.0037)0.3699 (0.0066)0.3624 (0.0078)0.3761 (0.0056)0.3301 (0.0146)0.4026 (0.0023)0.3734 (0.0060)0.3459 (0.0110)0.3356 (0.0133)0.3540 (0.0094)0.2901 (0.0259)0.3900 (0.0037)
c0.4012 (0.0025)0.3870 (0.0041)0.3820 (0.0047)0.3915 (0.0035)0.3619 (0.0079)0.4102 (0.0017)0.3815 (0.0048)0.3667 (0.0071)0.3616 (0.0080)0.3714 (0.0063)0.3407 (0.0121)0.3908 (0.0036)
(50, 30)a0.4098 (0.0017)0.3976 (0.0028)0.3930 (0.0033)0.4013 (0.0025)0.3734 (0.0060)0.4172 (0.0011)0.3691 (0.0067)0.3430 (0.0116)0.3323 (0.0141)0.3502 (0.0101)0.2843 (0.0278)0.3829 (0.0046)
b0.4182 (0.0011)0.4033 (0.0023)0.3983 (0.0028)0.4081 (0.0018)0.3783 (0.0053)0.4277 (0.0005)0.4174 (0.0011)0.3993 (0.0027)0.3924 (0.0034)0.4047 (0.0021)0.3624 (0.0078)0.4281 (0.0005)
c0.3954 (0.0031)0.3861 (0.0042)0.3830 (0.0046)0.3891 (0.0038)0.3702 (0.0065)0.4041 (0.0024)0.3933 (0.0033)0.3840 (0.0045)0.3808 (0.0049)0.3870 (0.0041)0.3681 (0.0068)0.3993 (0.0027)
(80, 50)a0.4142 (0.0018)0.4017 (0.0024)0.3992 (0.0027)0.4040 (0.0022)0.3891 (0.0038)0.4136 (0.0014)0.3941 (0.0032)0.3856 (0.0043)0.3954 (0.0031)0.3883 (0.0039)0.3703 (0.0065)0.3995 (0.0026)
b0.4248 (0.0007)0.4170 (0.0011)0.4144 (0.0013)0.4195 (0.0010)0.4035 (0.0022)0.4297 (0.0004)0.4111 (0.0016)0.4019 (0.0024)0.3987 (0.0027)0.4048 (0.0021)0.3857 (0.0042)0.4170 (0.0011)
c0.4387 (0.0001)0.4301 (0.0004)0.4272 (0.0006)0.4329 (0.0003)0.4155 (0.0013)0.4442 (0.0001)0.4370 (0.0002)0.4288 (0.0005)0.4259 (0.0006)0.4314 (0.0004)0.4138 (0.0014)0.4421 (0.0001)
Table 9. Bayesian estimates and corresponding MSEs (in parentheses) of h(t = 0.8) under different k values and censoring schemes based on the GRD.
k (n, m) Q | Inf: BS BG BW BK BM BP | Non-Inf: BS BG BW BK BM BP
2(30, 20)a1.5697 (0.0287)1.4251 (0.0985)1.3735 (0.1335)1.4683 (0.0732)1.1667 (0.3274)1.6569 (0.0067)1.4900 (0.0620)1.3755 (0.1321)1.3344 (0.1637)1.4100 (0.1082)1.1644 (0.3302)1.5605 (0.0318)
b1.5267 (0.0450)1.3825 (0.1271)1.3335 (0.1644)1.4268 (0.0974)1.1460 (0.3516)1.6181 (0.0146)1.4169 (0.0599)1.3715 (0.1350)1.3292 (0.1679)1.4086 (0.1091)1.1618 (0.3332)1.5684 (0.0291)
c1.5784 (0.0258)1.4882 (0.0629)1.4578 (0.0791)1.5169 (0.0493)1.3384 (0.1605)1.6377 (0.0103)1.5427 (0.0385)1.4402 (0.0893)1.4060 (0.1109)1.4728 (0.0709)1.2725 (0.2176)1.6105 (0.0165)
(50, 30)a1.6180 (0.0146)1.5107 (0.0521)1.4754 (0.0694)1.5451 (0.0376)1.3420 (0.1576)1.6891 (0.0025)1.5582 (0.0327)1.4677 (0.0736)1.4381 (0.0905)1.4969 (0.0586)1.3257 (0.1708)1.6187 (0.0145)
b1.6344 (0.0109)1.5479 (0.0365)1.5190 (0.0484)1.5756 (0.0267)1.4056 (0.1111)1.6910 (0.0023)1.5239 (0.0462)1.4260 (0.0980)1.3917 (0.1206)1.4563 (0.0799)1.2523 (0.2368)1.5861 (0.0234)
c1.6421 (0.0094)1.5804 (0.0251)1.5599 (0.0321)1.6005 (0.0192)1.4786 (0.0678)1.6834 (0.0031)1.5741 (0.0272)1.5117 (0.0517)1.4908 (0.0616)1.5319 (0.0429)1.4082 (0.1094)1.6156 (0.0152)
(80, 50)a1.6718 (0.0045)1.5995 (0.0194)1.5764 (0.0264)1.6234 (0.0134)1.4894 (0.0623)1.7214 (0.0003)1.6345 (0.0109)1.5730 (0.0275)1.5532 (0.0345)1.5933 (0.0212)1.4769 (0.0687)1.6770 (0.0038)
b1.6671 (0.0052)1.6145 (0.0155)1.5969 (0.0202)1.6317 (0.0115)1.5275 (0.0447)1.7018 (0.0014)1.6584 (0.0065)1.5973 (0.0201)1.5771 (0.0262)1.6172 (0.0148)1.4976 (0.0583)1.6991 (0.0016)
c1.6581 (0.0065)1.6143 (0.0155)1.5997 (0.0194)1.6286 (0.0122)1.5418 (0.0389)1.6872 (0.0027)1.5950 (0.0207)1.5535 (0.0344)1.5393 (0.0399)1.5669 (0.0296)1.4812 (0.0665)1.6216 (0.0138)
3(30, 20)a1.5573 (0.0330)1.3927 (0.1199)1.3334 (0.1645)1.4410 (0.0888)1.0984 (0.4103)1.6546 (0.0071)1.3598 (0.1438)1.1956 (0.2953)1.1409 (0.3577)1.2455 (0.2435)0.9412 (0.6364)1.4626 (0.0764)
b1.5614 (0.0315)1.4386 (0.0902)1.3969 (0.1170)1.4769 (0.0687)1.2352 (0.2538)1.6401 (0.0098)1.3753 (0.1323)1.2506 (0.2385)1.2074 (0.2825)1.2886 (0.2028)1.0380 (0.4913)1.4541 (0.0812)
c1.5846 (0.0238)1.4883 (0.0629)1.4559 (0.0801)1.5189 (0.0484)1.3293 (0.1678)1.6475 (0.0084)1.5575 (0.0329)1.4648 (0.0751)1.4343 (0.0928)1.4946 (0.0597)1.3163 (0.1787)1.6199 (0.0142)
(50, 30)a1.6116 (0.0162)1.4895 (0.0622)1.4500 (0.0835)1.5287 (0.0442)1.2999 (0.1927)1.6954 (0.0019)1.5799 (0.0253)1.4518 (0.0825)1.4098 (0.1084)1.4924 (0.0608)1.2511 (0.2381)1.6639 (0.0056)
b1.6188 (0.0144)1.5323 (0.0427)1.5030 (0.0557)1.5598 (0.0321)1.3861 (0.1245)1.6755 (0.0040)1.5347 (0.0417)1.4357 (0.0920)1.4029 (0.1129)1.4673 (0.0738)1.2766 (0.2138)1.6005 (0.0192)
c1.6196 (0.0143)1.5422 (0.0387)1.5168 (0.0494)1.5673 (0.0295)1.4183 (0.1028)1.6719 (0.0045)1.5809 (0.0250)1.5011 (0.0566)1.4750 (0.0697)1.5270 (0.0449)1.3737 (0.1334)1.6347 (0.0109)
(80, 50)a1.6138 (0.0157)1.5474 (0.0367)1.5257 (0.0455)1.5692 (0.0288)1.4425 (0.0879)1.6585 (0.0065)1.5791 (0.0256)1.4785 (0.0679)1.4429 (0.0877)1.5095 (0.0527)1.3001 (0.1926)1.6404 (0.0097)
b1.6443 (0.0090)1.5790 (0.0256)1.5575 (0.0329)1.6003 (0.0192)1.4731 (0.0707)1.6888 (0.0025)1.5847 (0.0238)1.5151 (0.0501)1.4926 (0.0607)1.5380 (0.0404)1.4059 (0.1109)1.6325 (0.0113)
c1.6469 (0.0085)1.5953 (0.0206)1.5779 (0.0260)1.6120 (0.0161)1.5147 (0.0503)1.6808 (0.0034)1.6254 (0.0129)1.5777 (0.0260)1.5618 (0.0314)1.5933 (0.0212)1.4991 (0.0575)1.6573 (0.0067)
Table 10. AWs of the CIs for σ, β, RF, and HF based on the GRD.
k (n, m) Q | Boot-t CI AW | Inf Cred. CI AW | Non-Inf Cred. CI AW || k (n, m) Q | Boot-t CI AW | Inf Cred. CI AW | Non-Inf Cred. CI AW
σ 2(30, 20)a 1.4408 0.3043 0.3493 | r(t = 0.8) 2(30, 20)a 0.3748 0.2299 0.2558
b 0.9306 0.2654 0.3087 | b 0.3075 0.2473 0.2483
c 0.8049 0.3934 0.4093 | c 0.6809 0.2190 0.2255
(50, 30)a 0.8956 0.2961 0.3356 | (50, 30)a 0.3513 0.1792 0.1888
b 0.5480 0.2617 0.2878 | b 0.2898 0.1970 0.1984
c 0.6534 0.3598 0.3906 | c 0.5727 0.2027 0.2070
(80, 50)a 0.5675 0.2831 0.3187 | (80, 50)a 0.2898 0.1604 0.1787
b 0.4079 0.2600 0.2784 | b 0.2163 0.1405 0.1637
c 0.5974 0.2801 0.3004 | c 0.4755 0.1350 0.1458
3(30, 20)a 1.3254 0.2947 0.3358 | 3(30, 20)a 0.2564 0.2040 0.2420
b 0.7471 0.2399 0.2849 | b 0.3042 0.2383 0.2412
c 0.6702 0.3328 0.3849 | c 0.5577 0.2029 0.2122
(50, 30)a 0.5153 0.2805 0.3144 | (50, 30)a 0.2434 0.1662 0.1720
b 0.4607 0.2032 0.2214 | b 0.2338 0.1792 0.1950
c 0.5053 0.3289 0.3793 | c 0.5368 0.1610 0.1710
(80, 50)a 0.5001 0.2579 0.2602 | (80, 50)a 0.1571 0.1475 0.1646
b 0.3713 0.1812 0.1842 | b 0.1897 0.1284 0.1602
c 0.4971 0.2525 0.2832 | c 0.4701 0.1329 0.1352
β 2(30, 20)a 1.5109 0.4861 0.5986 | h(t = 0.8) 2(30, 20)a 2.4525 1.0215 1.0661
b 1.3316 0.5701 0.6062 | b 2.2799 1.5842 1.6715
c 1.8853 0.5502 0.5745 | c 4.9014 1.2195 1.3793
(50, 30)a 1.2356 0.4518 0.4859 | (50, 30)a 2.3585 0.8768 0.8962
b 0.8538 0.4818 0.5276 | b 1.8764 1.4697 1.6348
c 1.5827 0.3583 0.4638 | c 4.5132 1.1207 1.2353
(80, 50)a 1.0531 0.3619 0.4065 | (80, 50)a 2.2304 0.7948 0.8730
b 0.7125 0.3229 0.3867 | b 1.2160 0.9789 1.0538
c 0.9611 0.3064 0.3269 | c 3.9968 0.7289 0.9098
3(30, 20)a 1.4238 0.4458 0.5213 | 3(30, 20)a 2.3612 1.0046 1.0216
b 1.1445 0.5046 0.5901 | b 2.2455 1.2139 1.3956
c 1.3713 0.4295 0.4746 | c 4.2802 1.1100 1.3018
(50, 30)a 1.1092 0.4240 0.4418 | (50, 30)a 2.3496 0.7881 0.8948
b 0.6524 0.4252 0.4875 | b 1.3520 0.8980 0.9262
c 1.0988 0.3399 0.3842 | c 3.6779 1.0479 1.1967
(80, 50)a 1.0110 0.3196 0.3549 | (80, 50)a 1.3505 0.7117 0.8430
b 0.6452 0.2723 0.3116 | b 1.1708 0.6929 0.7208
c 0.9273 0.3062 0.3374 | c 2.6189 0.5405 0.5961
Table 11. The PFFC sample sets with $m_{\text{Data 1}} = 12$ in Data 1.
Censoring Scheme $(Q_1, Q_2, \ldots, Q_{12})$ | Censoring Samples
i(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12)12, 24, 32, 38, 44, 53, 55, 58, 60, 60, 63, 67
ii(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)12, 32, 44, 55, 60, 63, 70, 81, 95, 121, 146, 258
iii(12, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)70, 75, 81, 85, 95, 99, 121, 131, 146, 211, 258, 341
Table 12. The PFFC sample sets with $m_{\text{Data 2}} = 15$ in Data 2.
Censoring Scheme $(Q_1, Q_2, \ldots, Q_{15})$ | Censoring Samples
i(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 14)1, 4, 1, 6, 5, 8, 7, 15, 13, 5, 19, 14, 7, 20, 12
ii(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)1, 1, 5, 7, 13, 19, 7, 12, 17, 11, 15, 29, 34, 40, 35
iii(14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)12, 14, 17, 11, 11, 21, 15, 16, 29, 19, 34, 47, 40, 34, 35
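Sets like those in Tables 11 and 12 are obtained by imposing a PFFC scheme on a complete data set. One plausible procedure (our illustration; the authors' exact grouping and removal randomization may differ) is to shuffle the n·k observations into n groups of size k, observe the group minima in failure order, and withdraw $Q_i$ randomly chosen surviving groups at the i-th observed failure:

```python
import numpy as np

def make_pffc(data, k, Q, rng=None):
    """Impose a PFFC scheme Q on complete data (illustrative procedure)."""
    rng = np.random.default_rng(rng)
    groups = rng.permutation(np.asarray(data, float)).reshape(-1, k)
    alive = list(groups.min(axis=1))        # first-failure time of each group
    sample = []
    for qi in Q:
        alive.sort()
        sample.append(alive.pop(0))         # observe the earliest first failure
        for _ in range(qi):                 # progressively withdraw qi groups
            alive.pop(rng.integers(len(alive)))
    return sample

# Example with synthetic lifetimes: 24 groups of k = 2, m = 12, scheme i of Table 11.
obs = np.random.default_rng(3).exponential(60.0, size=48)
print(np.round(make_pffc(obs, k=2, Q=[0] * 11 + [12], rng=5), 1))
```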
Table 13. ML estimates and Bayesian estimates under the PFFC schemes based on Data 1 (with hyperparameter c = 0.5).
PCS | MLE | Inf: BS BG BM BK BW BP | Non-Inf: BS BG BM BK BW BP
σ i 1.3075 | 1.3083 1.2219 1.1938 1.2497 1.0861 1.3677 | 1.1953 1.1127 1.0855 1.1390 0.9830 1.2488
ii 0.7127 | 0.7423 0.7132 0.7034 0.7226 0.6645 0.7615 | 0.6646 0.6323 0.6220 0.6429 0.5822 0.6875
iii 1.2354 | 1.1986 1.1410 1.1225 1.1599 1.0519 1.2391 | 1.1769 1.1260 1.1090 1.1425 1.0428 1.2100
β i 0.0087 | 0.0085 0.0080 0.0079 0.0082 0.0071 0.0088 | 0.0077 0.0071 0.0069 0.0073 0.0060 0.0080
ii 0.0028 | 0.0029 0.0027 0.0026 0.0028 0.0023 0.0030 | 0.0025 0.0022 0.0021 0.0023 0.0017 0.0026
iii 0.0038 | 0.0036 0.0035 0.0034 0.0035 0.0032 0.0037 | 0.0035 0.0034 0.0033 0.0034 0.0031 0.0036
r(t = 121) i 0.4052 | 0.4095 0.3755 0.3627 0.3854 0.3110 0.4285 | 0.4097 0.3711 0.3568 0.3823 0.2994 0.4314
ii 0.7962 | 0.7950 0.7930 0.7923 0.7937 0.7896 0.7963 | 0.7943 0.7918 0.7909 0.7926 0.7875 0.7959
iii 0.8734 | 0.8728 0.8714 0.8710 0.8719 0.8692 0.8737 | 0.8718 0.8706 0.8702 0.8710 0.8685 0.8726
h(t = 121) i 0.0173 | 0.0163 0.0136 0.0128 0.0145 0.0102 0.0183 | 0.0139 0.0118 0.0110 0.0124 0.0085 0.0152
ii 0.0028 | 0.0029 0.0027 0.0026 0.0028 0.0023 0.0031 | 0.0026 0.0023 0.0023 0.0024 0.0020 0.0027
iii 0.0027 | 0.0026 0.0025 0.0024 0.0025 0.0022 0.0027 | 0.0026 0.0024 0.0023 0.0024 0.0021 0.0027
Table 14. ML estimates and Bayesian estimates under the PFFC schemes based on Data 2 (with hyperparameter c = 0.5).
PCS | MLE | Inf: BS BG BM BK BW BP | Non-Inf: BS BG BM BK BW BP
σ i 0.7701 | 0.7956 0.7614 0.7032 0.7723 0.7497 0.8177 | 0.7455 0.7101 0.6497 0.7214 0.6981 0.7687
ii 0.5877 | 0.5975 0.5788 0.5470 0.5849 0.5725 0.6097 | 0.5762 0.5541 0.5182 0.5613 0.5468 0.5910
iii 1.4616 | 1.4797 1.4271 1.3386 1.4441 1.4094 1.5142 | 1.4310 1.3696 1.2709 1.3896 1.3494 1.4728
β i 0.0279 | 0.0287 0.0265 0.0217 0.0271 0.0256 0.0299 | 0.0260 0.0232 0.0130 0.0238 0.0218 0.0274
ii 0.0116 | 0.0121 0.0112 0.0094 0.0115 0.0109 0.0127 | 0.0107 0.0095 0.0067 0.0098 0.0090 0.0113
iii 0.0261 | 0.0261 0.0256 0.0246 0.0258 0.0254 0.0265 | 0.0252 0.0246 0.0233 0.0248 0.0243 0.0256
r(t = 34) i 0.3303 | 0.3244 0.2817 0.2045 0.2937 0.2660 0.3490 | 0.3839 0.3350 0.2064 0.3469 0.3134 0.4093
ii 0.6801 | 0.6665 0.6609 0.6510 0.6627 0.6589 0.6702 | 0.6908 0.6866 0.6796 0.6880 0.6852 0.6934
iii 0.5871 | 0.5878 0.5797 0.5657 0.5823 0.5769 0.5930 | 0.5969 0.5893 0.5761 0.5918 0.5867 0.6018
h(t = 34) i 0.0566 | 0.0644 0.0540 0.0382 0.0571 0.0506 0.0710 | 0.0526 0.0418 0.0273 0.0450 0.0384 0.0602
ii 0.0150 | 0.0170 0.0154 0.0130 0.0159 0.0149 0.0181 | 0.0143 0.0129 0.0107 0.0133 0.0124 0.0151
iii 0.0397 | 0.0388 0.0362 0.0319 0.0370 0.0353 0.0404 | 0.0369 0.0338 0.0288 0.0348 0.0327 0.0391
Table 15. Upper and lower bounds of the $100(1-\alpha)\%$ CIs for the PFFC schemes based on Data 1.
PCS | Boot-t CI: Lower, Upper | Inf Cred. CI: Lower, Upper | Non-Inf Cred. CI: Lower, Upper
σ i 0.3788 3.3711 | 0.6944 1.6543 | 0.5473 1.4111
ii 0.4404 0.9945 | 0.4903 0.7901 | 0.3438 0.7667
iii 0.4078 2.4879 | 0.7525 1.3509 | 0.5793 1.2401
β i 0.0076 0.0241 | 0.0037 0.0103 | 0.0027 0.0088
ii 0.0013 0.0047 | 0.0011 0.0032 | 0.0006 0.0029
iii 0.0023 0.0078 | 0.0021 0.0040 | 0.0016 0.0036
r(t = 121) i 0.1175 0.4102 | 0.1598 0.4893 | 0.1899 0.5018
ii 0.5054 0.9720 | 0.6755 0.8138 | 0.6825 0.8166
iii 0.7802 1.0456 | 0.7629 0.8825 | 0.7818 0.8875
h(t = 121) i 0.0104 0.0540 | 0.0043 0.0210 | 0.0049 0.0187
ii −0.0051 0.0152 | 0.0013 0.0035 | 0.0013 0.0034
iii −0.0040 0.0063 | 0.0012 0.0032 | 0.0013 0.0030
Table 16. Upper and lower bounds of the $100(1-\alpha)\%$ CIs for the PFFC schemes based on Data 2.
PCS | Boot-t CI: Lower, Upper | Inf Cred. CI: Lower, Upper | Non-Inf Cred. CI: Lower, Upper
σ i0.4042 1.2987 0.4673 0.9027 0.4050 0.8380
ii0.3349 0.9499 0.3762 0.6768 0.3198 0.6236
iii0.5861 2.2770 0.8628 1.6155 0.8207 1.6569
β i0.0067 0.0684 0.0129 0.0330 0.0085 0.0299
ii0.0043 0.0223 0.0048 0.0141 0.0030 0.0124
iii0.0092 0.0366 0.0163 0.0278 0.0154 0.0283
r(t = 34)i−0.0054 0.5323 0.0948 0.3932 0.1032 0.4527
ii0.5217 0.8017 0.5070 0.7025 0.5263 0.7196
iii0.3561 1.0039 0.4116 0.6261 0.4018 0.6360
h(t = 34)i−0.0168 0.2118 0.0173 0.0779 0.0106 0.0682
ii−0.0006 0.0370 0.0067 0.0199 0.0054 0.0174
iii−0.0090 0.0755 0.0183 0.0452 0.0159 0.0458