Article

The Reliability Inference for Multicomponent Stress–Strength Model under the Burr X Distribution

1 Department of Mathematical Sciences, University of South Dakota, Vermillion, SD 57069, USA
2 College of Health Solutions, Arizona State University, Phoenix, AZ 85004, USA
3 Department of Statistics, University of Pretoria, Pretoria 0028, South Africa
4 Department of Statistics, Tamkang University, Tamsui District, New Taipei City 251301, Taiwan
5 School of Mathematics, Yunnan Normal University, Kunming 650500, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
AppliedMath 2024, 4(1), 394-426; https://doi.org/10.3390/appliedmath4010021
Submission received: 20 November 2023 / Revised: 21 February 2024 / Accepted: 27 February 2024 / Published: 17 March 2024

Abstract

The reliability of the multicomponent stress–strength system was investigated under the two-parameter Burr X distribution model. Based on the structure of the system, a type II censored sample of strength and a complete random sample of stress were obtained for the study. The maximum likelihood estimators were established by utilizing the type II censored Burr X distributed strength data and the complete random stress data collected from the multicomponent system. Two related approximate confidence intervals were obtained by utilizing the delta method under asymptotic normal distribution theory and a parametric bootstrap procedure. Meanwhile, point and confidence interval estimators based on alternative generalized pivotal quantities were derived. Furthermore, a likelihood ratio test for inferring the equality of the two scale parameters is provided. Finally, a practical example is provided for illustration.

1. Introduction

Systems or units that are subject to the competition between strength and stress have been studied under the commonly called stress–strength model. The system survives if the imposed stress is less than the system strength. Therefore, the stress–strength model plays a substantial role in many aspects, such as lifetime study, engineering, and supply and demand applications.
Let X be the system strength and Y be the stress applied. Then, the stress–strength reliability (SSR) is defined as R = P(Y < X). Generally, X indicates the measurement of quality characteristics for the main subject and Y indicates the measurement of quality characteristics for the opposing subject in the system. Next, three cases are presented to illustrate the applications of a stress–strength system. In mechanical engineering, the strength measure X of a long horizontal part of a crane needs to exceed the stress of the loading weight Y from the lifted object during operation. The reliability R = P(Y < X) is an essential quantity for assessing the quality of a crane. In civil engineering, the tolerable bearing capacity of a suspension bridge is an important quality measure. The bearing capacity X of a pair of cables for the suspension bridge should exceed the total weight Y of the cars. In this application, a high reliability R = P(Y < X) is essential for a suspension bridge design. In logistics applications, the supply capacity X can represent the strength and the demand Y can represent the stress. A high reliability R = P(Y < X) indicates a reliable logistics system. In recent years, the stress–strength model has been broadly used in a variety of fields, such as economics, hydrology, reliability engineering, seismology, and survival analysis. The system reliability has also been studied by numerous researchers. Eryilmaz [1] developed formulae for R and the mean residual life at a random time based on phase-type distributions, such as the Erlang distribution. Kundu and Raqab [2] investigated a modified maximum likelihood estimation for R based on Weibull distributions, each of which possesses three parameters: shape, scale, and location. Krishnamoorthy and Lin [3] studied confidence limits for R by using the generalized variable approach through the maximum likelihood estimation method with two independent Weibull distributions. Mokhlis et al. [4] investigated the inferences of R under distributions that include general exponential and inverse-exponential forms. Surles and Padgett [5] studied the maximum likelihood estimate and Bayesian inference for R based on Burr X distributions with a common scale parameter set to one. Wang et al. [6] acquired inference procedures for R with a generalized exponential distribution.
Very often, as in the aforementioned references, studies of reliability inference are mainly concentrated on a system with one main component. Nevertheless, several practical systems, such as a system with series components, parallel components, or any combination of these two, are composed of multiple components to accomplish the required functions. Therefore, multicomponent system reliability investigations have attracted more attention lately. Commonly, a multicomponent system consists of $k>1$ main components, whose strengths follow an independent and identical distribution (i.i.d.) and are subject to a common stress from the opposing side. The system survives if at least $s$ ($1\le s\le k$) main components function concurrently. Such a multicomponent system is generally referred to as the s-out-of-k G system. There are numerous practical multicomponent systems in the world. For example, a communication system with three transmitters, where the expected message load requires at least two transmitters to be operational (otherwise, critical messages are lost), is called a two-out-of-three G system. As a second example, an Airbus A-380 with four engines is capable of flying if and only if at least two of the four engines are functioning; this is referred to as a two-out-of-four G system. Another example is a suspension bridge with k pairs of cables, which needs at least s pairs of unbroken cables to withstand the stress.
Let the k components' strength variables in an s-out-of-k G system, where each component is subject to a stress measure Y, be $X_1, X_2, \ldots, X_k$. The multicomponent stress–strength reliability (MSR) $R_{s,k}$, as presented by Bhattacharyya and Johnson [7], is given by
$$R_{s,k} = P(\text{at least } s \text{ of } (X_1, X_2, \ldots, X_k) \text{ exceed } Y) = \sum_{i=s}^{k}\binom{k}{i}\int [1-F(w)]^{i}[F(w)]^{k-i}\,dG(w),$$
where $F(\cdot)$ is the common cumulative distribution function (CDF) of $X_1, X_2, \ldots, X_k$ and $G(\cdot)$ is the CDF of Y. The reliability $R_{s,k}$ inferences for s-out-of-k G systems based on different distribution models have been extensively investigated by several researchers. The work by Dey et al. [8] was based on random samples from Kumaraswamy distributions. The work by Kayal et al. [9] was based on random samples from Chen distributions. The works by Kizilaslan [10,11] were based on random samples from proportional reversed hazard rate distributions and a general class of inverse exponentiated distributions, respectively. The work by Kizilaslan and Nadar [12] was based on complete sample sets from bivariate Kumaraswamy distributions. The work by Nadar and Kizilaslan [13] was based on complete samples from Marshall–Olkin bivariate Weibull distributions. The works by Rao [14,15] were, respectively, based on complete random samples from Rayleigh and generalized Rayleigh distributions. The work by Rao et al. [16] was based on complete random samples from Burr XII distributions. The work by Shawky and Khan [17] was based on random samples from inverse Weibull distributions. The work by Lio et al. [18] was based on a type II censored strength sample and a complete random stress sample from Burr XII distributions. The work by Sauer et al. [19] was based on progressively type II censored samples from generalized Pareto distributions. Finally, the work by Wang et al. [20] was based on type II censored strength and complete stress samples from Rayleigh stress–strength models.
Burr [21] developed numerous distributions. Among them, the Burr X and Burr XII have been the most attractive in recent reliability studies. Meanwhile, Belili et al. [22] explored a flexible two-parameter family of distributions and provided numerous theoretical results, including vital parameters and the behavior of the distribution functions, reliability measurements for the proposed two-parameter family, and applications to annual maximum floods and survival times of breast cancer patients. The proposed two-parameter family of distributions includes the two-parameter Lindley distributions I and II, the gamma Lindley distribution, the quasi-Lindley distributions, the pseudo-Lindley distribution, and the XLindley distribution as special cases, but it does not include the Burr types. Their proposed family of distributions could potentially be applied to the stress–strength reliability inference for the multicomponent system. Yousof et al. [23] and Jamal and Nasir [24] proposed two Burr X generators to create different families of distributions by composing the CDF of a one-parameter Burr X distribution with $-\ln(\bar{G}(t;\eta))$ and $G(t;\eta)/\bar{G}(t;\eta)$, where $G(t;\eta)$ and $\bar{G}(t;\eta)$ are the CDF and survival function of a second distribution with parameter vector $\eta$ of dimension one or two. Therefore, both generalized Burr X distributions have two or three parameters. Both papers developed numerous properties of these two generalized Burr X distributions, including the stress–strength reliability for a one-component system. However, a practical application of the stress–strength reliability was not provided. They also applied these two generalized Burr X distributions to model a random sample of the remission times (in months) of 128 bladder cancer patients and concluded that their respective extended Burr X distributions performed better than a one-parameter Burr X distribution. These two generalized Burr X distributions do not contain the current two-parameter Burr X distribution as a special case and could potentially be applied to the stress–strength reliability inference for a multicomponent system as well.
The Burr XII distribution has two shape parameters. For more information about Burr XII, readers may refer to Lio et al. [18]. The Burr X distribution considered in the current study has two parameters and the probability density function (PDF) and CDF, respectively, are defined as
$$f(x;\lambda,\alpha)=\frac{2x\alpha}{\lambda}\exp(-x^2/\lambda)\big(1-\exp(-x^2/\lambda)\big)^{\alpha-1}\quad\text{and}\quad F(x;\lambda,\alpha)=\big(1-\exp(-x^2/\lambda)\big)^{\alpha},\qquad x>0,\ \lambda>0,\ \alpha>0,$$
where λ and α are the scale and shape parameters, respectively. For easy reference, BurrX ( λ , α ) is used as the Burr X distribution with parameters λ and α , hereafter. Because of the flexibility of use for any two-parameter distribution, BurrX ( λ , α ) has been investigated in the reliability studies of numerous scholars. Jaheen [25] explored the reliability and failure rate functions for the Burr X model by utilizing the empirical Bayesian estimation method based on complete random samples. Ahmad et al. [26] considered the empirical Bayes estimate of R based on random samples from a Burr X distribution. When both stress and strength Burr X distributions have scale parameters of one, Surles and Padgett [5] and Akgul and Senoglu [27] studied the inference of R using maximum likelihood and Bayes methods based on random samples and using a modified maximum likelihood estimate method based on ranked set samples, respectively. Surles and Padgett [5] applied a Burr X distribution to model the stress and strength reliability of a one-component system based on the strengths of two carbon fibers. However, according to a literature search, work that investigated R s , k based on a Burr X distribution has not appeared.
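For concreteness, a minimal set of R helpers for BurrX$(\lambda,\alpha)$, mirroring the function names used in Appendix G, is sketched below; the density, CDF, and quantile function follow Equation (2), and random generation uses inverse-transform sampling. This is an illustrative sketch rather than part of the original derivations.

# Burr X density, CDF, quantile function, and random generation (two-parameter form of Equation (2))
dburX <- function(x, alpha, lambda)
  2 * x * alpha / lambda * exp(-x^2 / lambda) * (1 - exp(-x^2 / lambda))^(alpha - 1)
pburX <- function(x, alpha, lambda)
  (1 - exp(-x^2 / lambda))^alpha
qburX <- function(p, alpha, lambda)
  sqrt(-lambda * log(1 - p^(1 / alpha)))   # inverse of pburX
rburX <- function(nn, alpha, lambda)
  qburX(runif(nn), alpha, lambda)          # inverse-transform sampling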
In practice, a complete random sample is not always available; censored data often arise instead. Moreover, under the current multicomponent model, usually only a type II censored strength sample and complete random stress observations can be observed from the system. Therefore, the current research focused on alternative inferential methodologies for $R_{s,k}$ when the strength data are a type II censored Burr X distributed sample and the stress data are a Burr X distributed random sample. The estimation methods for $\lambda$, $\alpha$, and $R_{s,k}$ include maximum likelihood and pivotal quantity estimation methods for type II censored strength and random stress samples. To the best of our knowledge, the approaches used in this work have not appeared in the literature regarding BurrX$(\lambda,\alpha)$.
Section 2 briefly describes the structure information about typical type II censored strength and associated stress data sets from the aforementioned G system and the likelihood function based on those data sets from n G systems. In Section 3, the maximum-likelihood-based approaches are addressed for Burr X distributions. Additionally, asymptotic confidence intervals (ACIs) are derived via utilizing the delta method and bootstrap percentile procedure. Inferences based on pivotal quantities are given in Section 4, where numerous theorems to support the existence and uniqueness of each pivotal-quantity-based estimator are established. For the model test of the equivalence of Burr X scale parameters for strength and stress, Section 5 provides a ratio test. Section 6 provides a real data example for demonstration. Some concluding remarks are given in Section 7.

2. The Likelihood Function Based on Sample from G System

In a lifetime-testing experiment using n s-out-of-k G systems, where each system contains k strength components subject to a common stress, the strength and stress samples can be obtained, respectively, as
$$\text{Observed strength sample: }\begin{array}{c} X_{11}\le X_{12}\le\cdots\le X_{1s}\\ \vdots\\ X_{n1}\le X_{n2}\le\cdots\le X_{ns}\end{array}\qquad\text{Observed stress sample: }\begin{array}{c} Y_1\\ \vdots\\ Y_n\end{array}$$
where $\{X_{i1}\le X_{i2}\le\cdots\le X_{is}\}$ represents the first s ordered strength statistics under type II censoring and $Y_i$ represents the corresponding stress variable for $i=1,2,\ldots,n$. The strength quantities from all system components are independent and follow the common CDF $F_X(\cdot)$ with PDF $f_X(\cdot)$, and the related stress measure follows the CDF $F_Y(\cdot)$ with PDF $f_Y(\cdot)$. Hence, the joint likelihood function based on the sample in Equation (3) is described as
$$L(\text{data})\propto\prod_{i=1}^{n}\left\{\prod_{m=1}^{s}f_X(x_{im})\,\big[1-F_X(x_{is})\big]^{k-s}f_Y(y_i)\right\}.$$
When s = 1 , the likelihood function of Equation (4) is for a series system; when s = k , it is for a parallel system.

3. The Maximum Likelihood Estimators

The maximum likelihood estimation method is addressed in this section based on the Burr X distributed strength and stress samples from n s-out-of-k G systems. Let $X=\{X_{i1},X_{i2},\ldots,X_{is}\}$ with $i=1,2,\ldots,n$ and $Y=\{Y_1,Y_2,\ldots,Y_n\}$ of (3) be the observed strength and associated stress samples from BurrX$(\lambda_1,\alpha_1)$ and BurrX$(\lambda_2,\alpha_2)$, respectively. Via Equation (2) and the samples of (3), the likelihood function (4) of $\Theta=(\lambda_1,\alpha_1,\lambda_2,\alpha_2)$ is given according to
$$\begin{aligned} L(\Theta)&\propto\prod_{i=1}^{n}\left\{\prod_{m=1}^{s}f(x_{im};\lambda_1,\alpha_1)\,\big[1-F(x_{is};\lambda_1,\alpha_1)\big]^{k-s}f(y_i;\lambda_2,\alpha_2)\right\}\\ &\propto\frac{\alpha_1^{ns}}{\lambda_1^{ns}}\frac{\alpha_2^{n}}{\lambda_2^{n}}\prod_{i=1}^{n}\prod_{m=1}^{s}\big(1-\exp(-x_{im}^2/\lambda_1)\big)^{\alpha_1-1}\prod_{i=1}^{n}\big(1-\exp(-y_i^2/\lambda_2)\big)^{\alpha_2-1}\prod_{i=1}^{n}\Big[1-\big(1-\exp(-x_{is}^2/\lambda_1)\big)^{\alpha_1}\Big]^{k-s}\\ &\quad\times\exp\left(-\sum_{i=1}^{n}\sum_{m=1}^{s}x_{im}^2/\lambda_1-\sum_{i=1}^{n}y_i^2/\lambda_2\right). \end{aligned}$$
Hence, the log-likelihood function with the constant term deleted is described as
$$\begin{aligned} \ell(\Theta)&=ns(\ln\alpha_1-\ln\lambda_1)+n(\ln\alpha_2-\ln\lambda_2)+(\alpha_1-1)\sum_{i=1}^{n}\sum_{m=1}^{s}\ln\big(1-\exp(-x_{im}^2/\lambda_1)\big)+(\alpha_2-1)\sum_{i=1}^{n}\ln\big(1-\exp(-y_i^2/\lambda_2)\big)\\ &\quad-\sum_{i=1}^{n}\sum_{m=1}^{s}x_{im}^2/\lambda_1-\sum_{i=1}^{n}y_i^2/\lambda_2+(k-s)\sum_{i=1}^{n}\ln\Big(1-\big(1-\exp(-x_{is}^2/\lambda_1)\big)^{\alpha_1}\Big). \end{aligned}$$
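As a computational aid, the log-likelihood in Equation (6) can be coded directly. The sketch below is one possible R implementation; it assumes the strength data are stored as an n-by-s matrix x (row i holding $X_{i1}\le\cdots\le X_{is}$) and the stress data as a length-n vector y, following the data layout used in Appendix G, and the name loglik is only illustrative.

# Log-likelihood (6) for Theta = (lambda1, alpha1, lambda2, alpha2),
# with x an n-by-s matrix of type II censored strengths and y a length-n stress vector.
loglik <- function(theta, x, y, k) {
  lambda1 <- theta[1]; alpha1 <- theta[2]
  lambda2 <- theta[3]; alpha2 <- theta[4]
  n <- nrow(x); s <- ncol(x)
  xs <- x[, s]                                  # largest observed strength per system
  n * s * (log(alpha1) - log(lambda1)) + n * (log(alpha2) - log(lambda2)) +
    (alpha1 - 1) * sum(log(1 - exp(-x^2 / lambda1))) +
    (alpha2 - 1) * sum(log(1 - exp(-y^2 / lambda2))) -
    sum(x^2) / lambda1 - sum(y^2) / lambda2 +
    (k - s) * sum(log(1 - (1 - exp(-xs^2 / lambda1))^alpha1))
}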

3.1. Case 1: Equal Scale Parameters

Let λ 1 = λ 2 = λ . Equation (1) becomes
$$R_{s,k}=\sum_{i=s}^{k}\binom{k}{i}\int\big[1-F(w;\lambda,\alpha_1)\big]^{i}\big[F(w;\lambda,\alpha_1)\big]^{k-i}\,dF(w;\lambda,\alpha_2)=\sum_{i=s}^{k}\sum_{m=0}^{k-i}\binom{k}{i}\binom{k-i}{m}(-1)^{m}\frac{\alpha_2}{(i+m)\alpha_1+\alpha_2}.$$
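A direct R transcription of the closed form in Equation (7) is sketched below; the helper name Rsk_equal is hypothetical and not part of the paper.

# R_{s,k} under a common scale parameter, as given by Equation (7):
# sum over i = s..k and m = 0..(k-i) of C(k,i) C(k-i,m) (-1)^m alpha2 / ((i+m) alpha1 + alpha2)
Rsk_equal <- function(alpha1, alpha2, s, k) {
  val <- 0
  for (i in s:k)
    for (m in 0:(k - i))
      val <- val + choose(k, i) * choose(k - i, m) * (-1)^m *
        alpha2 / ((i + m) * alpha1 + alpha2)
  val
}
# Example for a 3-out-of-5 G system: Rsk_equal(alpha1 = 2, alpha2 = 1, s = 3, k = 5)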
Moreover, the likelihood function of (5) is given as
$$\begin{aligned} L_1(\Theta_1)&\propto\prod_{i=1}^{n}\left\{\prod_{m=1}^{s}f(x_{im};\lambda,\alpha_1)\,\big[1-F(x_{is};\lambda,\alpha_1)\big]^{k-s}f(y_i;\lambda,\alpha_2)\right\}\\ &\propto\frac{\alpha_1^{ns}\alpha_2^{n}}{\lambda^{n(1+s)}}\prod_{i=1}^{n}\prod_{m=1}^{s}\big(1-\exp(-x_{im}^2/\lambda)\big)^{\alpha_1-1}\prod_{i=1}^{n}\big(1-\exp(-y_i^2/\lambda)\big)^{\alpha_2-1}\\ &\quad\times\prod_{i=1}^{n}\Big[1-\big(1-\exp(-x_{is}^2/\lambda)\big)^{\alpha_1}\Big]^{k-s}\exp\left(-\frac{1}{\lambda}\left(\sum_{i=1}^{n}\sum_{m=1}^{s}x_{im}^2+\sum_{i=1}^{n}y_i^2\right)\right), \end{aligned}$$
and after dropping the constant term, the log-likelihood function is
$$\begin{aligned} \ell_1(\Theta_1)&=n\big[s\ln(\alpha_1)+\ln(\alpha_2)-(s+1)\ln(\lambda)\big]+(\alpha_1-1)\sum_{i=1}^{n}\sum_{m=1}^{s}\ln\big(1-\exp(-x_{im}^2/\lambda)\big)+(\alpha_2-1)\sum_{i=1}^{n}\ln\big(1-\exp(-y_i^2/\lambda)\big)\\ &\quad+(k-s)\sum_{i=1}^{n}\ln\Big(1-\big(1-\exp(-x_{is}^2/\lambda)\big)^{\alpha_1}\Big)-\frac{1}{\lambda}\left(\sum_{i=1}^{n}\sum_{m=1}^{s}x_{im}^2+\sum_{i=1}^{n}y_i^2\right), \end{aligned}$$
where Θ 1 = ( α 1 , α 2 , λ ) .

3.1.1. Point Estimators under Equal Scale Parameters

Taking partial derivatives of $\ell_1(\Theta_1)$ with respect to $\alpha_1$, $\alpha_2$, and $\lambda$, one obtains
$$\frac{\partial\ell_1(\Theta_1)}{\partial\alpha_1}=\frac{ns}{\alpha_1}+\sum_{i=1}^{n}\sum_{m=1}^{s}\ln\big(1-\exp(-x_{im}^2/\lambda)\big)-(k-s)\sum_{i=1}^{n}\frac{\big(1-\exp(-x_{is}^2/\lambda)\big)^{\alpha_1}\ln\big(1-\exp(-x_{is}^2/\lambda)\big)}{1-\big(1-\exp(-x_{is}^2/\lambda)\big)^{\alpha_1}},$$
$$\frac{\partial\ell_1(\Theta_1)}{\partial\alpha_2}=\frac{n}{\alpha_2}+\sum_{i=1}^{n}\ln\big(1-\exp(-y_i^2/\lambda)\big),$$
$$\begin{aligned}\frac{\partial\ell_1(\Theta_1)}{\partial\lambda}&=-\frac{n(s+1)}{\lambda}-(\alpha_1-1)\sum_{i=1}^{n}\sum_{m=1}^{s}\frac{x_{im}^2\exp(-x_{im}^2/\lambda)}{\lambda^2\big(1-\exp(-x_{im}^2/\lambda)\big)}-(\alpha_2-1)\sum_{i=1}^{n}\frac{y_i^2\exp(-y_i^2/\lambda)}{\lambda^2\big(1-\exp(-y_i^2/\lambda)\big)}\\ &\quad+\frac{1}{\lambda^2}\left(\sum_{i=1}^{n}\sum_{m=1}^{s}x_{im}^2+\sum_{i=1}^{n}y_i^2\right)+\alpha_1(k-s)\sum_{i=1}^{n}\frac{\big(1-\exp(-x_{is}^2/\lambda)\big)^{\alpha_1-1}x_{is}^2\exp(-x_{is}^2/\lambda)}{\lambda^2\Big(1-\big(1-\exp(-x_{is}^2/\lambda)\big)^{\alpha_1}\Big)}.\end{aligned}$$
The gradient of 1 ( Θ 1 ) with respect to α 1 , α 2 , and λ is given as
$$\nabla\ell_1(\Theta_1)=\left(\frac{\partial\ell_1(\Theta_1)}{\partial\alpha_1},\frac{\partial\ell_1(\Theta_1)}{\partial\alpha_2},\frac{\partial\ell_1(\Theta_1)}{\partial\lambda}\right).$$
Then, the MLE $\hat\Theta_1=(\hat\alpha_1,\hat\alpha_2,\hat\lambda)$ of $\Theta_1=(\alpha_1,\alpha_2,\lambda)$ can be obtained by solving the normal equation $\nabla\ell_1(\Theta_1)=(0,0,0)$. Moreover, the MLE $\hat R_{s,k}$ of $R_{s,k}$ can be derived from Equation (7) and is given by
$$\hat R_{s,k}=\sum_{i=s}^{k}\sum_{m=0}^{k-i}\binom{k}{i}\binom{k-i}{m}(-1)^{m}\frac{\hat\alpha_2}{(i+m)\hat\alpha_1+\hat\alpha_2}.$$
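Because the normal equations have no closed-form solution, the MLE can be obtained numerically. The following sketch maximizes the equal-scale log-likelihood $\ell_1(\Theta_1)$ with R's optim under box constraints and then plugs the estimates into the expression for $\hat R_{s,k}$; it assumes the data layout and the Rsk_equal helper from the earlier sketches, and the starting values are arbitrary.

# Equal-scale log-likelihood and numerical MLE of (alpha1, alpha2, lambda) under lambda1 = lambda2
loglik1 <- function(theta, x, y, k) {
  alpha1 <- theta[1]; alpha2 <- theta[2]; lambda <- theta[3]
  n <- nrow(x); s <- ncol(x); xs <- x[, s]
  n * (s * log(alpha1) + log(alpha2) - (s + 1) * log(lambda)) +
    (alpha1 - 1) * sum(log(1 - exp(-x^2 / lambda))) +
    (alpha2 - 1) * sum(log(1 - exp(-y^2 / lambda))) +
    (k - s) * sum(log(1 - (1 - exp(-xs^2 / lambda))^alpha1)) -
    (sum(x^2) + sum(y^2)) / lambda
}
mle_equal <- function(x, y, k, start = c(1, 1, 1)) {
  fit <- optim(start, function(th) -loglik1(th, x, y, k),
               method = "L-BFGS-B", lower = rep(1e-4, 3))
  c(alpha1 = fit$par[1], alpha2 = fit$par[2], lambda = fit$par[3])
}
# Usage: est <- mle_equal(x, y, k = 5); Rsk_equal(est["alpha1"], est["alpha2"], s = 3, k = 5)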

3.1.2. Approximated Confidence Interval for R s , k

The exact sampling distribution of R ^ s , k is unknown and difficult to develop. Hence, the exact confidence interval of R s , k is not available. Here, two approximated confidence intervals (ACIs) of  R s , k are established by utilizing the delta method and bootstrap sampling.
The observed Fisher information matrix, given Θ 1 , is presented as
$$I(\Theta_1)=-\begin{pmatrix}\dfrac{\partial^2\ell_1}{\partial\alpha_1^2}&\dfrac{\partial^2\ell_1}{\partial\alpha_1\,\partial\alpha_2}&\dfrac{\partial^2\ell_1}{\partial\alpha_1\,\partial\lambda}\\ \dfrac{\partial^2\ell_1}{\partial\alpha_1\,\partial\alpha_2}&\dfrac{\partial^2\ell_1}{\partial\alpha_2^2}&\dfrac{\partial^2\ell_1}{\partial\alpha_2\,\partial\lambda}\\ \dfrac{\partial^2\ell_1}{\partial\alpha_1\,\partial\lambda}&\dfrac{\partial^2\ell_1}{\partial\alpha_2\,\partial\lambda}&\dfrac{\partial^2\ell_1}{\partial\lambda^2}\end{pmatrix},$$
and the second derivatives in the matrix can be derived directly. The details are omitted here for concision. An ACI is available via the delta method, as shown in Theorems 1 and 2.
Theorem 1.
Let $\hat\Theta_1=(\hat\alpha_1,\hat\alpha_2,\hat\lambda)$ be the MLE of $\Theta_1$. Then $\sqrt{n}\,(\hat\Theta_1-\Theta_1)\xrightarrow{d}N\big(0,\,nI^{-1}(\Theta_1)\big)$ as $n\to\infty$, where '$\xrightarrow{d}$' indicates 'converges in distribution'.
Proof. 
The theorem can be proved by following the asymptotic properties of MLEs along with the central limit theorem for the multivariate case.    □
Theorem 2.
Let $\hat R_{s,k}$ be the MLE of $R_{s,k}$. If $n\to\infty$, then
$$\sqrt{n}\,\big(\hat R_{s,k}-R_{s,k}\big)\xrightarrow{d}N\big(0,\,nB(\Theta_1)\big),$$
where $B(\Theta_1)=\left(\dfrac{\partial R_{s,k}}{\partial\Theta_1}\right)^{T}I^{-1}(\Theta_1)\left(\dfrac{\partial R_{s,k}}{\partial\Theta_1}\right)$ and $\dfrac{\partial R_{s,k}}{\partial\Theta_1}=\left(\dfrac{\partial R_{s,k}}{\partial\alpha_1},\dfrac{\partial R_{s,k}}{\partial\alpha_2},\dfrac{\partial R_{s,k}}{\partial\lambda}\right)^{T}$.
Proof. 
Appendix A provides the proof.    □
Let $\Theta_1$ be replaced by its MLE $\hat\Theta_1$ and $0<\gamma<1$. A $100\times(1-\gamma)\%$ ACI of $R_{s,k}$ is easily established through Theorem 2 and given by
$$\left(\hat R_{s,k}-z_{\gamma/2}\sqrt{\widehat{Var}(\hat R_{s,k})},\ \hat R_{s,k}+z_{\gamma/2}\sqrt{\widehat{Var}(\hat R_{s,k})}\right),$$
where $\widehat{Var}(\hat R_{s,k})=\left(\widehat{\dfrac{\partial R_{s,k}}{\partial\Theta_1}}\right)^{T}\widehat{Var}(\hat\Theta_1)\left(\widehat{\dfrac{\partial R_{s,k}}{\partial\Theta_1}}\right)$, $\widehat{Var}(\hat\Theta_1)=I^{-1}(\hat\Theta_1)$, and
$$\widehat{\frac{\partial R_{s,k}}{\partial\Theta_1}}=\left.\left(\frac{\partial R_{s,k}}{\partial\alpha_1},\frac{\partial R_{s,k}}{\partial\alpha_2},\frac{\partial R_{s,k}}{\partial\lambda}\right)^{T}\right|_{\Theta_1=\hat\Theta_1}.$$
A negative lower bound may occur in the ACI established by the above procedure. To remove this drawback, we can apply the delta method with a logarithmic transformation to develop the asymptotic normal distribution of $\ln\hat R_{s,k}$:
$$\frac{\ln\hat R_{s,k}-\ln R_{s,k}}{\sqrt{Var(\ln\hat R_{s,k})}}\xrightarrow{d}N(0,1).$$
Hence, a $100(1-\gamma)\%$ ACI of $R_{s,k}$ can instead be developed as
$$\left(\hat R_{s,k}\exp\Big(-z_{\gamma/2}\sqrt{\widehat{Var}(\ln\hat R_{s,k})}\Big),\ \hat R_{s,k}\exp\Big(z_{\gamma/2}\sqrt{\widehat{Var}(\ln\hat R_{s,k})}\Big)\right),$$
where $\widehat{Var}(\ln\hat R_{s,k})=\widehat{Var}(\hat R_{s,k})/\hat R_{s,k}^2$ is obtained by utilizing Taylor's expansion for the delta method.
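One hedged way to compute both delta-method ACIs in practice is to approximate the observed information with a numerical Hessian from optim(hessian = TRUE) and the gradient of $R_{s,k}$ with central differences; the sketch below assumes the loglik1 and Rsk_equal helpers from the earlier sketches and only illustrates the interval formulas above.

# Approximate 100(1 - gamma)% ACIs for R_{s,k} under equal scales via the delta method
aci_Rsk <- function(x, y, s, k, gamma = 0.05) {
  fit <- optim(c(1, 1, 1), function(th) -loglik1(th, x, y, k),
               method = "L-BFGS-B", lower = rep(1e-4, 3), hessian = TRUE)
  theta <- fit$par                           # (alpha1, alpha2, lambda)
  vcov  <- solve(fit$hessian)                # inverse of the observed information
  Rhat  <- Rsk_equal(theta[1], theta[2], s, k)
  eps   <- 1e-6 * pmax(abs(theta), 1)
  grad  <- sapply(1:3, function(j) {         # central-difference gradient of R_{s,k}
    tp <- tm <- theta; tp[j] <- tp[j] + eps[j]; tm[j] <- tm[j] - eps[j]
    (Rsk_equal(tp[1], tp[2], s, k) - Rsk_equal(tm[1], tm[2], s, k)) / (2 * eps[j])
  })                                         # note: Equation (7) does not involve lambda
  v <- drop(t(grad) %*% vcov %*% grad)
  z <- qnorm(1 - gamma / 2)
  c(plain = c(Rhat - z * sqrt(v), Rhat + z * sqrt(v)),
    log   = c(Rhat * exp(-z * sqrt(v) / Rhat), Rhat * exp(z * sqrt(v) / Rhat)))
}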
Furthermore, for comparison purposes, a second MLE-based ACI of R s , k , which is called the parametric bootstrap confidence interval (BCI), is constructed by utilizing the parametric bootstrap procedure that is detailed through Algorithm 1. For more information about the parametric bootstrap procedures, readers may refer to Efron [28] and Hall [29].

3.2. Case 2: Different Scale Parameters

Under this condition, $\lambda_1\neq\lambda_2$, $\alpha_1\neq\alpha_2$, and $R_{s,k}$ can be represented as
$$\begin{aligned} R_{s,k}&=\sum_{i=s}^{k}\binom{k}{i}\int_{0}^{\infty}\big[1-F(w;\lambda_1,\alpha_1)\big]^{i}\big[F(w;\lambda_1,\alpha_1)\big]^{k-i}\,dF(w;\lambda_2,\alpha_2)\\ &=\sum_{i=s}^{k}\sum_{m=0}^{k-i}\binom{k}{i}\binom{k-i}{m}(-1)^{m}\alpha_2\int_{0}^{1}\big[1-\exp(\lambda_2\ln(u)/\lambda_1)\big]^{\alpha_1(m+i)}(1-u)^{\alpha_2-1}\,du. \end{aligned}$$
According to our best knowledge, no study has published the reliability inference for the multicomponent stress–strength model based on Burr X distributions under different parameters.
Algorithm 1: Parametric bootstrap procedure under λ 1 = λ 2 = λ .
Step 1
Let the observed strength and stress data be $X=\{X_{i1},X_{i2},X_{i3},\ldots,X_{is},\ i=1,\ldots,n\}$ and $Y=\{Y_1,Y_2,Y_3,\ldots,Y_n\}$, respectively. Calculate the MLE $(\hat\alpha_1,\hat\alpha_2,\hat\lambda)$ of $(\alpha_1,\alpha_2,\lambda)$.
Step 2
For given n, s, and k, a type II censored bootstrap sample $x^*=\{x^*_{i1},x^*_{i2},x^*_{i3},\ldots,x^*_{is}\}$ is generated from BurrX$(\hat\lambda,\hat\alpha_1)$ for $i=1,2,\ldots,n$, whereas a bootstrap random sample $y^*=\{y^*_1,y^*_2,\ldots,y^*_n\}$ is generated from BurrX$(\hat\lambda,\hat\alpha_2)$.
Step 3
Compute the bootstrap MLE ( α ^ 1 * , α ^ 2 * , λ ^ * ) of ( α 1 , α 2 , λ ) and the bootstrap MLE R s , k * of R s , k by using ( x * , y * ) .
Step 4
Replicate Steps 2 and 3 N times, and rearrange the resulting N bootstrap MLEs $R^*_{s,k}$ in ascending order as $R^{*[1]}_{s,k},R^{*[2]}_{s,k},\ldots,R^{*[N]}_{s,k}$.
Step 5
Let $0<\gamma<1$. A $100\times(1-\gamma)\%$ BCI is given as
$$\left(R^{*[\gamma N/2]}_{s,k},\ R^{*[(1-\gamma/2)N]}_{s,k}\right),$$
where [ y ] indicates the greatest integer less than or equal to y.
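A minimal R sketch of Algorithm 1 is given below, assuming the rburX, mle_equal, and Rsk_equal helpers from the earlier sketches; Step 5 is implemented as a simple percentile interval of the ordered bootstrap estimates.

# Parametric bootstrap BCI for R_{s,k} under lambda1 = lambda2 (Algorithm 1)
bci_Rsk <- function(x, y, s, k, N = 1000, gamma = 0.05) {
  n   <- nrow(x)
  est <- mle_equal(x, y, k)                   # Step 1: MLE from the observed data
  Rboot <- numeric(N)
  for (b in 1:N) {                            # Steps 2-4: resample and re-estimate
    xb <- t(sapply(1:n, function(i)           # type II censored strengths: first s of k order statistics
      sort(rburX(k, est["alpha1"], est["lambda"]))[1:s]))
    yb <- rburX(n, est["alpha2"], est["lambda"])
    eb <- mle_equal(xb, yb, k)
    Rboot[b] <- Rsk_equal(eb["alpha1"], eb["alpha2"], s, k)
  }
  Rboot <- sort(Rboot)                        # Step 5: percentile interval
  c(lower = Rboot[floor(gamma * N / 2)], upper = Rboot[floor((1 - gamma / 2) * N)])
}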

3.2.1. Point Estimators under Different Parameters

Taking the partial derivatives of $\ell(\Theta)$ with respect to $\alpha_1$, $\alpha_2$, $\lambda_1$, and $\lambda_2$, one has
$$\frac{\partial\ell(\Theta)}{\partial\alpha_1}=\frac{ns}{\alpha_1}-(k-s)\sum_{i=1}^{n}\frac{\ln\big(1-\exp(-x_{is}^2/\lambda_1)\big)\big(1-\exp(-x_{is}^2/\lambda_1)\big)^{\alpha_1}}{1-\big(1-\exp(-x_{is}^2/\lambda_1)\big)^{\alpha_1}}+\sum_{i=1}^{n}\sum_{m=1}^{s}\ln\big(1-\exp(-x_{im}^2/\lambda_1)\big),$$
$$\frac{\partial\ell(\Theta)}{\partial\alpha_2}=\frac{n}{\alpha_2}+\sum_{i=1}^{n}\ln\big(1-\exp(-y_i^2/\lambda_2)\big),$$
$$\begin{aligned}\frac{\partial\ell(\Theta)}{\partial\lambda_1}&=-\frac{ns}{\lambda_1}+\sum_{i=1}^{n}\sum_{m=1}^{s}\frac{x_{im}^2}{\lambda_1^2}-(\alpha_1-1)\sum_{i=1}^{n}\sum_{m=1}^{s}\frac{(x_{im}^2/\lambda_1^2)\exp(-x_{im}^2/\lambda_1)}{1-\exp(-x_{im}^2/\lambda_1)}\\ &\quad+(k-s)\alpha_1\sum_{i=1}^{n}\frac{(x_{is}^2/\lambda_1^2)\exp(-x_{is}^2/\lambda_1)\big(1-\exp(-x_{is}^2/\lambda_1)\big)^{\alpha_1-1}}{1-\big(1-\exp(-x_{is}^2/\lambda_1)\big)^{\alpha_1}},\end{aligned}$$
$$\frac{\partial\ell(\Theta)}{\partial\lambda_2}=-\frac{n}{\lambda_2}+\sum_{i=1}^{n}\frac{y_i^2}{\lambda_2^2}-(\alpha_2-1)\sum_{i=1}^{n}\frac{(y_i^2/\lambda_2^2)\exp(-y_i^2/\lambda_2)}{1-\exp(-y_i^2/\lambda_2)}.$$
In this case, the gradient of ( Θ ) with respect to α 1 , α 2 , λ 1 , and λ 2 is given as
$$\nabla\ell(\Theta)=\left(\frac{\partial\ell(\Theta)}{\partial\alpha_1},\frac{\partial\ell(\Theta)}{\partial\alpha_2},\frac{\partial\ell(\Theta)}{\partial\lambda_1},\frac{\partial\ell(\Theta)}{\partial\lambda_2}\right).$$
The MLE $\check\Theta=(\check\alpha_1,\check\alpha_2,\check\lambda_1,\check\lambda_2)$ of $\Theta=(\alpha_1,\alpha_2,\lambda_1,\lambda_2)$ is the solution to the normal equation $\nabla\ell(\Theta)=(0,0,0,0)$. The invariance property of maximum likelihood estimation allows the MLE of $R_{s,k}$ under different parameters to be established as
$$\check R_{s,k}=\sum_{i=s}^{k}\sum_{m=0}^{k-i}\binom{k}{i}\binom{k-i}{m}(-1)^{m}\check\alpha_2\int_{0}^{1}\big[1-\exp(\check\lambda_2\ln(u)/\check\lambda_1)\big]^{\check\alpha_1(i+m)}(1-u)^{\check\alpha_2-1}\,du.$$
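The integral appearing in $\check R_{s,k}$ has no closed form, but each term can be evaluated by one-dimensional numerical integration. A sketch using R's integrate() follows; the helper name Rsk_diff is hypothetical.

# R_{s,k} under different scale parameters: each term requires a numerical integral over u in (0, 1)
Rsk_diff <- function(alpha1, alpha2, lambda1, lambda2, s, k) {
  val <- 0
  for (i in s:k)
    for (m in 0:(k - i)) {
      integrand <- function(u)
        (1 - exp(lambda2 * log(u) / lambda1))^(alpha1 * (i + m)) * (1 - u)^(alpha2 - 1)
      term <- alpha2 * integrate(integrand, lower = 0, upper = 1)$value
      val  <- val + choose(k, i) * choose(k - i, m) * (-1)^m * term
    }
  val
}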

3.2.2. Approximated Confidence Interval for R s , k

In this case, the observed Fisher information matrix, given Θ , is presented as
$$J(\Theta)=-\begin{pmatrix}\dfrac{\partial^2\ell}{\partial\lambda_1^2}&\dfrac{\partial^2\ell}{\partial\lambda_1\,\partial\alpha_1}&0&0\\ \dfrac{\partial^2\ell}{\partial\lambda_1\,\partial\alpha_1}&\dfrac{\partial^2\ell}{\partial\alpha_1^2}&0&0\\ 0&0&\dfrac{\partial^2\ell}{\partial\lambda_2^2}&\dfrac{\partial^2\ell}{\partial\lambda_2\,\partial\alpha_2}\\ 0&0&\dfrac{\partial^2\ell}{\partial\lambda_2\,\partial\alpha_2}&\dfrac{\partial^2\ell}{\partial\alpha_2^2}\end{pmatrix},$$
where the second derivatives in the matrix can be derived directly. Therefore, the detailed results are not given for brevity.
By using a process similar to that used to develop Theorem 2, replacing $\Theta$ by $\check\Theta$, and taking $0<\gamma<1$, a $100\times(1-\gamma)\%$ ACI of $R_{s,k}$ is given as
$$\left(\check R_{s,k}-z_{\gamma/2}\sqrt{\widetilde{Var}(\check R_{s,k})},\ \check R_{s,k}+z_{\gamma/2}\sqrt{\widetilde{Var}(\check R_{s,k})}\right),$$
where
$$\widetilde{Var}(\check R_{s,k})=\left(\widetilde{\frac{\partial R_{s,k}}{\partial\Theta}}\right)^{T}\widetilde{Var}(\check\Theta)\left(\widetilde{\frac{\partial R_{s,k}}{\partial\Theta}}\right),\qquad\widetilde{Var}(\check\Theta)=J^{-1}(\check\Theta),$$
and
$$\widetilde{\frac{\partial R_{s,k}}{\partial\Theta}}=\left.\left(\frac{\partial R_{s,k}}{\partial\lambda_1},\frac{\partial R_{s,k}}{\partial\alpha_1},\frac{\partial R_{s,k}}{\partial\lambda_2},\frac{\partial R_{s,k}}{\partial\alpha_2}\right)^{T}\right|_{\Theta=\check\Theta}.$$
Moreover, an additional $100\times(1-\gamma)\%$ ACI of $R_{s,k}$ is derived as
$$\left(\check R_{s,k}\exp\Big(-z_{\gamma/2}\sqrt{\widetilde{Var}(\ln\check R_{s,k})}\Big),\ \check R_{s,k}\exp\Big(z_{\gamma/2}\sqrt{\widetilde{Var}(\ln\check R_{s,k})}\Big)\right),$$
where $\widetilde{Var}(\ln\check R_{s,k})=\widetilde{Var}(\check R_{s,k})/\check R_{s,k}^2$ via Taylor's expansion for the delta method [30].
The BCI of $R_{s,k}$ in this case can be obtained through a procedure analogous to Algorithm 1; hence, the details are omitted.

4. Inference Based on Pivotal Quantity

Pivotal quantities are developed through utilizing the stress and strength samples of (3). Moreover, some estimators for R s , k based on the pivotal quantities established in this section are uniquely derived by the associated theorems established below.
Theorem 3.
Let X = { X i 1 , X i 2 , , X i s ; i = 1 , 2 , 3 , , n } be a type II censored strength sample from BurrX ( λ 1 , α 1 ) . Then,
$$P_X(\lambda_1)=2\sum_{i=1}^{n}\sum_{m=1}^{s-1}\ln\left[\frac{(k-s)\ln\big(1-\exp(-X_{is}^2/\lambda_1)\big)+\sum_{r=1}^{s}\ln\big(1-\exp(-X_{ir}^2/\lambda_1)\big)}{(k-m)\ln\big(1-\exp(-X_{im}^2/\lambda_1)\big)+\sum_{j=1}^{m}\ln\big(1-\exp(-X_{ij}^2/\lambda_1)\big)}\right]$$
and
$$Q_X(\alpha_1,\lambda_1)=-2\alpha_1\sum_{i=1}^{n}\left[(k-s)\ln\big(1-\exp(-X_{is}^2/\lambda_1)\big)+\sum_{j=1}^{s}\ln\big(1-\exp(-X_{ij}^2/\lambda_1)\big)\right]$$
have independent chi-square distributions with degrees of freedom $2n(s-1)$ and $2ns$, respectively. Therefore, $P_X(\lambda_1)$ and $Q_X(\alpha_1,\lambda_1)$ are pivotal quantities for $\lambda_1$ and $\alpha_1$, respectively.
Proof. 
Appendix B provides the proof.    □
Theorem 4.
For a given stress random sample $Y=(Y_1,Y_2,\ldots,Y_n)$ of BurrX$(\lambda_2,\alpha_2)$, let $Y_{(1)}\le Y_{(2)}\le Y_{(3)}\le\cdots\le Y_{(n)}$ be the associated order statistics. Then,
$$P_Y(\lambda_2)=2\sum_{m=1}^{n-1}\ln\left[\frac{\sum_{j=1}^{n}\ln\big(1-\exp(-Y_{(j)}^2/\lambda_2)\big)}{(n-m)\ln\big(1-\exp(-Y_{(m)}^2/\lambda_2)\big)+\sum_{r=1}^{m}\ln\big(1-\exp(-Y_{(r)}^2/\lambda_2)\big)}\right]$$
and
$$Q_Y(\alpha_2,\lambda_2)=-2\alpha_2\sum_{j=1}^{n}\ln\big(1-\exp(-Y_{(j)}^2/\lambda_2)\big)$$
follow independent chi-square distributions with degrees of freedom $2(n-1)$ and $2n$, respectively. Therefore, $P_Y(\lambda_2)$ and $Q_Y(\alpha_2,\lambda_2)$ are pivotal quantities for $\lambda_2$ and $\alpha_2$, respectively.
Proof. 
The proof is presented in Appendix C.    □
In order to derive estimators for Θ and R s , k by utilizing the pivotal quantities established above, additional theoretical results are required and stated below.
Lemma 1.
Given $0<a<b$, $K(t)=\dfrac{\ln\big(1-\exp(-b^2/t)\big)}{\ln\big(1-\exp(-a^2/t)\big)}$ is an increasing function of t.
Proof. 
The proof is given in Appendix D.    □
Corollary 1.
P X ( λ 1 ) and P Y ( λ 2 ) are increasing functions.
Proof. 
Appendix E shows the proof.    □

4.1. Case 1: λ 1 = λ 2

In this case, let $\lambda_1=\lambda_2=\lambda$, $P_{1X}(\lambda)=P_X(\lambda)$, and $P_{1Y}(\lambda)=P_Y(\lambda)$. Theorems 3 and 4, combined with the independence between $P_{1X}(\lambda)$ and $P_{1Y}(\lambda)$, imply that the pivotal quantity
$$\begin{aligned}P_1(\lambda)&=P_{1X}(\lambda)+P_{1Y}(\lambda)\\ &=2\sum_{i=1}^{n}\sum_{m=1}^{s-1}\ln\left[\frac{(k-s)\ln\big(1-\exp(-X_{is}^2/\lambda)\big)+\sum_{r=1}^{s}\ln\big(1-\exp(-X_{ir}^2/\lambda)\big)}{(k-m)\ln\big(1-\exp(-X_{im}^2/\lambda)\big)+\sum_{r=1}^{m}\ln\big(1-\exp(-X_{ir}^2/\lambda)\big)}\right]\\ &\quad+2\sum_{m=1}^{n-1}\ln\left[\frac{\sum_{r=1}^{n}\ln\big(1-\exp(-Y_{(r)}^2/\lambda)\big)}{(n-m)\ln\big(1-\exp(-Y_{(m)}^2/\lambda)\big)+\sum_{r=1}^{m}\ln\big(1-\exp(-Y_{(r)}^2/\lambda)\big)}\right]\end{aligned}$$
has a chi-square distribution with $2(ns-1)$ degrees of freedom. Meanwhile, Corollary 1 implies that $P_1(\lambda)$ is increasing with respect to $\lambda$.
Given $P_1\sim\chi^2_{2(ns-1)}$, the equation $P_1(\lambda)=P_1$ in $\lambda$ has a unique solution $h_1(P_1;X,Y)$, which can be solved numerically by utilizing the bisection method or the R function 'uniroot'. Thus, $h_1(P_1;X,Y)$ is a generalized pivotal-based estimate of $\lambda$. Moreover, Theorem 3 implies $Q_{1X}(\lambda)\sim\chi^2_{2ns}$, where $Q_{1X}(\lambda)=Q_X(\alpha_1,\lambda)$ and
$$\alpha_1=\frac{Q_{1X}}{H_{1X}[\lambda]},$$
where
$$H_{1X}[\lambda]=-2\sum_{i=1}^{n}\left[(k-s)\ln\big(1-\exp(-x_{is}^2/\lambda)\big)+\sum_{j=1}^{s}\ln\big(1-\exp(-x_{ij}^2/\lambda)\big)\right].$$
By the substitution method of Weerahandi [31], a generalized pivotal quantity S 1 X can be uniquely obtained by substituting h 1 ( P 1 ; X , Y ) for λ in Q 1 X H 1 X [ λ ] and the result is given as
$$S_{1X}=\frac{Q_{1X}}{-2\sum_{i=1}^{n}\left[(k-s)\ln\big(1-\exp(-x_{is}^2/h_1(P_1;x,y))\big)+\sum_{r=1}^{s}\ln\big(1-\exp(-x_{ir}^2/h_1(P_1;x,y))\big)\right]}=\frac{Q_{1X}}{H_{1X}[h_1(P_1;x,y)]},$$
where ( x , y ) is the sample observation of ( X , Y ) . It is noted that the distribution of S 1 X is free from unknown parameters and S 1 X reduces to α 1 when ( X , Y ) = ( x , y ) . Hence, S 1 X is a generalized pivotal-based estimate of α 1 . Let Q 1 Y ( λ ) = Q 1 Y ( α 2 , λ ) . Similarly, Theorem 4 implies that a generalized pivotal-based estimate of α 2 can be
$$S_{1Y}=\frac{Q_{1Y}}{H_{1Y}[h_1(P_1;x,y)]},\quad\text{where } Q_{1Y}\sim\chi^2_{2n}\ \text{and}\ H_{1Y}[\lambda]=-2\sum_{j=1}^{n}\ln\big(1-\exp(-y_{(j)}^2/\lambda)\big).$$
Moreover, a generalized pivotal quantity of R s , k is derived as
$$W_1=\sum_{i=s}^{k}\sum_{m=0}^{k-i}\binom{k}{i}\binom{k-i}{m}(-1)^{m}\left[1+(i+m)\frac{Q_{1X}}{Q_{1Y}}\cdot\frac{H_{1Y}[h_1(P_1;x,y)]}{H_{1X}[h_1(P_1;x,y)]}\right]^{-1}.$$
The pivotal-based estimation method for a $100\times(1-\gamma)\%$ generalized confidence interval (GCI) of $R_{s,k}$ under the case of equal scale parameters can be implemented by Algorithm 2.
Remark 1.
Based on the pivotal quantity $P_1(\lambda)$, given $0<\gamma<1$, a $100\times(1-\gamma)\%$ GCI for $\lambda$ is
$$\left(h_1\big(\chi^{1-\gamma/2}_{2(ns-1)};X,Y\big),\ h_1\big(\chi^{\gamma/2}_{2(ns-1)};X,Y\big)\right),$$
where $\chi^{\gamma}_{k}$ is the right-tail $\gamma$th quantile of the chi-square distribution with k degrees of freedom.
Additionally, the $100\times(1-\gamma)\%$ joint confidence regions for $(\lambda,\alpha_1)$ and $(\lambda,\alpha_2)$ can, respectively, be obtained by using $(P_1(\lambda),Q_{1X}(\alpha_1,\lambda))$ and $(P_1(\lambda),Q_{1Y}(\alpha_2,\lambda))$ as
$$\left\{(\lambda,\alpha_1):\ h_1\Big(\chi^{\frac{1+\sqrt{1-\gamma}}{2}}_{2(ns-1)};X,Y\Big)<\lambda<h_1\Big(\chi^{\frac{1-\sqrt{1-\gamma}}{2}}_{2(ns-1)};X,Y\Big),\ \frac{\chi^{\frac{1+\sqrt{1-\gamma}}{2}}_{2ns}}{H_{1X}[\lambda]}<\alpha_1<\frac{\chi^{\frac{1-\sqrt{1-\gamma}}{2}}_{2ns}}{H_{1X}[\lambda]}\right\}$$
and
$$\left\{(\lambda,\alpha_2):\ h_1\Big(\chi^{\frac{1+\sqrt{1-\gamma}}{2}}_{2(ns-1)};X,Y\Big)<\lambda<h_1\Big(\chi^{\frac{1-\sqrt{1-\gamma}}{2}}_{2(ns-1)};X,Y\Big),\ \frac{\chi^{\frac{1+\sqrt{1-\gamma}}{2}}_{2n}}{H_{1Y}[\lambda]}<\alpha_2<\frac{\chi^{\frac{1-\sqrt{1-\gamma}}{2}}_{2n}}{H_{1Y}[\lambda]}\right\}.$$
Algorithm 2: Pivotal-quantity-based estimation method under λ 1 = λ 2 = λ .
Step 1
Generate $P_1=p_1$ from $\chi^2_{2(ns-1)}$. Then, an observation $h_1=h_1(P_1;X,Y)$ can be obtained by solving the equation $P_1(\lambda)=p_1$.
Step 2
Generate observations of $Q_{1X}$ and $Q_{1Y}$ from $\chi^2_{2ns}$ and $\chi^2_{2n}$, respectively, and calculate $W_1$.
Step 3
Repeat steps 1 and 2 N times and collect N values of W 1 , labeled as W 1 ( 1 ) , W 1 ( 2 ) , , W 1 ( N ) .
Step 4
Two estimators for R s , k are presented here. One is a generalized estimator labeled as
$$\acute R_{s,k}=\frac{1}{N}\sum_{m=1}^{N}W_1^{(m)},$$
and the other one utilizing the Fisher Z transformation is
$$\acute R^{F}_{s,k}=\frac{\exp\left(\frac{1}{N}\sum_{j=1}^{N}\ln\frac{1+W_1^{(j)}}{1-W_1^{(j)}}\right)-1}{\exp\left(\frac{1}{N}\sum_{j=1}^{N}\ln\frac{1+W_1^{(j)}}{1-W_1^{(j)}}\right)+1}.$$
Step 5
Place all estimates of $W_1$ in ascending order: $W_1^{[1]},W_1^{[2]},\ldots,W_1^{[N]}$. For $0<\gamma<1$, a series of $100\times(1-\gamma)\%$ confidence intervals for $R_{s,k}$ can be obtained as $\big(W_1^{[i]},W_1^{[i+N-[N\gamma+1]]}\big)$, $i=1,2,\ldots,[N\gamma]$, where $[x]$ indicates the greatest integer less than or equal to x. Hence, a $100\times(1-\gamma)\%$ GCI of $R_{s,k}$ is established as the $i^*$th one satisfying
$$W_1^{[i^*+N-[N\gamma+1]]}-W_1^{[i^*]}=\min_{1\le i\le[N\gamma]}\left(W_1^{[i+N-[N\gamma+1]]}-W_1^{[i]}\right).$$
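For illustration, a compact R sketch of Algorithm 2 is given below. It assumes the n-by-s strength matrix x and stress vector y of Section 2, solves $P_1(\lambda)=p_1$ with uniroot as suggested above, and, for simplicity, reports an equal-tail percentile interval instead of the shortest-length search in Step 5; the helper names and the uniroot search bounds are assumptions that may need adjusting to the data scale.

# Pivotal-quantity (GCI) estimation of R_{s,k} under lambda1 = lambda2 (Algorithm 2); requires s >= 2
H1X <- function(lam, x, k) {                  # H_{1X}[lam]: -2 times the summed strength log terms (positive)
  s <- ncol(x); xs <- x[, s]
  -2 * sum((k - s) * log(1 - exp(-xs^2 / lam)) + rowSums(log(1 - exp(-x^2 / lam))))
}
H1Y <- function(lam, y) -2 * sum(log(1 - exp(-y^2 / lam)))
P1fun <- function(lam, x, y, k) {             # pivotal quantity P1(lambda)
  n <- nrow(x); s <- ncol(x); xs <- x[, s]
  num <- (k - s) * log(1 - exp(-xs^2 / lam)) + rowSums(log(1 - exp(-x^2 / lam)))
  px <- 0
  for (m in 1:(s - 1)) {
    den <- (k - m) * log(1 - exp(-x[, m]^2 / lam)) +
           rowSums(log(1 - exp(-x[, 1:m, drop = FALSE]^2 / lam)))
    px <- px + sum(log(num / den))
  }
  ys <- sort(y)
  tot <- sum(log(1 - exp(-ys^2 / lam)))
  py <- 0
  for (m in 1:(n - 1)) {
    den <- (n - m) * log(1 - exp(-ys[m]^2 / lam)) + sum(log(1 - exp(-ys[1:m]^2 / lam)))
    py <- py + log(tot / den)
  }
  2 * (px + py)
}
gpq_Rsk <- function(x, y, s, k, N = 1000, gamma = 0.05) {
  n <- nrow(x)
  W1 <- numeric(N)
  for (b in 1:N) {
    p1 <- rchisq(1, 2 * (n * s - 1))          # Step 1: draw P1 and solve P1(lambda) = p1
    h1 <- uniroot(function(l) P1fun(l, x, y, k) - p1,
                  lower = 0.01, upper = 50, extendInt = "yes")$root
    q1x <- rchisq(1, 2 * n * s)               # Step 2: draw Q1X, Q1Y and compute W1
    q1y <- rchisq(1, 2 * n)
    h1x <- H1X(h1, x, k); h1y <- H1Y(h1, y)
    w <- 0
    for (i in s:k)
      for (m in 0:(k - i))
        w <- w + choose(k, i) * choose(k - i, m) * (-1)^m /
          (1 + (i + m) * (q1x / q1y) * (h1y / h1x))
    W1[b] <- w
  }
  W1s <- sort(W1)
  list(point = mean(W1),                      # Step 4: generalized point estimate
       gci   = c(W1s[floor(gamma / 2 * N)], W1s[floor((1 - gamma / 2) * N)]))
}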
Remark 2.
Given the following listed null hypotheses H 0 vs. the alternative ones H 1 :
$$(a)\ H_0:\lambda=\lambda_0\ \text{vs.}\ H_1:\lambda\neq\lambda_0,\qquad(b)\ H_0:\lambda\ge\lambda_0\ \text{vs.}\ H_1:\lambda<\lambda_0,\qquad(c)\ H_0:\lambda\le\lambda_0\ \text{vs.}\ H_1:\lambda>\lambda_0,$$
and $0<\gamma<1$, the decision to reject the null hypotheses (a), (b), and (c) can be made by utilizing the following critical regions:
$$(a)\ P_1(\lambda_0)\ge\chi^{\gamma/2}_{2(ns-1)}\ \text{or}\ P_1(\lambda_0)\le\chi^{1-\gamma/2}_{2(ns-1)},\qquad(b)\ P_1(\lambda_0)\ge\chi^{\gamma}_{2(ns-1)},\qquad(c)\ P_1(\lambda_0)\le\chi^{1-\gamma}_{2(ns-1)},$$
respectively.

4.2. Case 2: λ 1 λ 2

Let $P_{2X}(\lambda_1)=P_X(\lambda_1)$, $P_{2Y}(\lambda_2)=P_Y(\lambda_2)$, $Q_{2X}(\alpha_1,\lambda_1)=Q_X(\alpha_1,\lambda_1)$, and $Q_{2Y}(\alpha_2,\lambda_2)=Q_Y(\alpha_2,\lambda_2)$. Theorems 3 and 4 imply the following theorem.
Theorem 5.
Let $X=\{X_{i1},X_{i2},\ldots,X_{is};\ i=1,2,\ldots,n\}$ be a type II censored strength sample from BurrX$(\lambda_1,\alpha_1)$ and $Y=\{Y_1,Y_2,\ldots,Y_n\}$ be a random stress sample from BurrX$(\lambda_2,\alpha_2)$. Four pivotal quantities are listed below:
$$P_{2X}(\lambda_1)=2\sum_{i=1}^{n}\sum_{m=1}^{s-1}\ln\left[\frac{(k-s)\ln\big(1-\exp(-X_{is}^2/\lambda_1)\big)+\sum_{r=1}^{s}\ln\big(1-\exp(-X_{ir}^2/\lambda_1)\big)}{(k-m)\ln\big(1-\exp(-X_{im}^2/\lambda_1)\big)+\sum_{j=1}^{m}\ln\big(1-\exp(-X_{ij}^2/\lambda_1)\big)}\right],\quad Q_{2X}(\alpha_1,\lambda_1)=-2\alpha_1\sum_{i=1}^{n}\left[(k-s)\ln\big(1-\exp(-X_{is}^2/\lambda_1)\big)+\sum_{r=1}^{s}\ln\big(1-\exp(-X_{ir}^2/\lambda_1)\big)\right],$$
and
$$P_{2Y}(\lambda_2)=2\sum_{j=1}^{n-1}\ln\left[\frac{\sum_{r=1}^{n}\ln\big(1-\exp(-Y_{(r)}^2/\lambda_2)\big)}{(n-j)\ln\big(1-\exp(-Y_{(j)}^2/\lambda_2)\big)+\sum_{r=1}^{j}\ln\big(1-\exp(-Y_{(r)}^2/\lambda_2)\big)}\right],\quad Q_{2Y}(\alpha_2,\lambda_2)=-2\alpha_2\sum_{r=1}^{n}\ln\big(1-\exp(-Y_{(r)}^2/\lambda_2)\big).$$
Then,
  • $P_{2X}(\lambda_1)\sim\chi^2_{2n(s-1)}$ and $Q_{2X}(\alpha_1,\lambda_1)\sim\chi^2_{2ns}$ are independent;
  • $P_{2Y}(\lambda_2)\sim\chi^2_{2(n-1)}$ and $Q_{2Y}(\alpha_2,\lambda_2)\sim\chi^2_{2n}$ are independent.
Following the process addressed in Section 4.1, let $P_{2X}\sim\chi^2_{2n(s-1)}$ and $P_{2Y}\sim\chi^2_{2(n-1)}$, and use $h_2(P_{2X};X)$ and $h_2(P_{2Y};Y)$ to represent the solutions to the equations $P_{2X}(\lambda_1)=P_{2X}$ and $P_{2Y}(\lambda_2)=P_{2Y}$, respectively. Adopting the substitution method from Weerahandi [31], the generalized pivotal quantity for $\alpha_1$ is
$$S_{2X}=\frac{Q_{2X}}{H_{2X}[h_2(P_{2X};x)]}$$
with $Q_{2X}\sim\chi^2_{2ns}$ and
$$H_{2X}[\lambda_1]=-2\sum_{i=1}^{n}\left[(k-s)\ln\big(1-\exp(-x_{is}^2/\lambda_1)\big)+\sum_{r=1}^{s}\ln\big(1-\exp(-x_{ir}^2/\lambda_1)\big)\right],$$
and the generalized pivotal quantity for $\alpha_2$ is
$$S_{2Y}=\frac{Q_{2Y}}{H_{2Y}[h_2(P_{2Y};y)]},\quad\text{with } Q_{2Y}\sim\chi^2_{2n}\ \text{and}\ H_{2Y}[\lambda_2]=-2\sum_{r=1}^{n}\ln\big(1-\exp(-y_{(r)}^2/\lambda_2)\big).$$
Consequently, a generalized pivotal quantity for R s , k can be represented as
$$W_2=\sum_{i=s}^{k}\sum_{m=0}^{k-i}\binom{k}{i}\binom{k-i}{m}(-1)^{m}S_{2Y}\int_{0}^{1}\Big[1-\exp\big(h_2(P_{2Y};Y)\ln(u)/h_2(P_{2X};X)\big)\Big]^{S_{2X}(i+m)}(1-u)^{S_{2Y}-1}\,du.$$
Additionally, the generalized estimates of R s , k under λ 1 λ 2 can be derived through Algorithm 3.
Algorithm 3: Pivotal-quantity-based estimation method under λ 1 λ 2 .
Step 1
Generate $p_{21}$ from $\chi^2_{2n(s-1)}$ as a realization of $P_{2X}$. Let the solution $h_{21}$ of $P_{2X}(\lambda_1)=p_{21}$ be an observation of $h_2(P_{2X};X)$. Similarly, generate $p_{22}$ from $\chi^2_{2(n-1)}$ as a realization of $P_{2Y}$. Let the solution $h_{22}$ of $P_{2Y}(\lambda_2)=p_{22}$ be an observation of $h_2(P_{2Y};Y)$.
Step 2
Generate observations of $Q_{2X}$ and $Q_{2Y}$ from $\chi^2_{2ns}$ and $\chi^2_{2n}$, respectively, and calculate $W_2$.
Step 3
Repeat steps 1 and 2 N times and label N values of W 2 as W 2 ( 1 ) , W 2 ( 2 ) , , W 2 ( N ) .
Step 4
The original generalized and Fisher-Z-transformation-based estimators of R s , k are, respectively, given as
$$\grave R_{s,k}=\frac{1}{N}\sum_{j=1}^{N}W_2^{(j)}\qquad\text{and}\qquad\grave R^{F}_{s,k}=\frac{\exp\left(\frac{1}{N}\sum_{j=1}^{N}\ln\frac{1+W_2^{(j)}}{1-W_2^{(j)}}\right)-1}{\exp\left(\frac{1}{N}\sum_{j=1}^{N}\ln\frac{1+W_2^{(j)}}{1-W_2^{(j)}}\right)+1};$$
Step 5
Display the N estimates of $W_2$ in ascending order: $W_2^{[1]},W_2^{[2]},\ldots,W_2^{[N]}$. For a given $0<\gamma<1$, a series of $100\times(1-\gamma)\%$ confidence intervals of $R_{s,k}$ can be listed as $\big(W_2^{[i]},W_2^{[i+N-[N\gamma+1]]}\big)$, $i=1,2,\ldots,[N\gamma]$. Hence, a $100\times(1-\gamma)\%$ GCI of $R_{s,k}$ is given as the $i^*$th one satisfying
$$W_2^{[i^*+N-[N\gamma+1]]}-W_2^{[i^*]}=\min_{1\le i\le[N\gamma]}\left(W_2^{[i+N-[N\gamma+1]]}-W_2^{[i]}\right).$$
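Each draw of $W_2$ in Algorithm 3 again reduces to numerical integration. The brief sketch below computes one realization of $W_2$, assuming realizations s2x and s2y of $S_{2X}$ and $S_{2Y}$ and the solved scale values h2x and h2y of $h_2(P_{2X};X)$ and $h_2(P_{2Y};Y)$ are already available; the function name W2_draw is hypothetical.

# One realization of the generalized pivotal quantity W2 (unequal scales)
W2_draw <- function(s2x, s2y, h2x, h2y, s, k) {
  w <- 0
  for (i in s:k)
    for (m in 0:(k - i)) {
      f    <- function(u) (1 - exp(h2y * log(u) / h2x))^(s2x * (i + m)) * (1 - u)^(s2y - 1)
      term <- s2y * integrate(f, lower = 0, upper = 1)$value
      w    <- w + choose(k, i) * choose(k - i, m) * (-1)^m * term
    }
  w
}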
Remark 3.
For a given $0<\gamma<1$, two $100\times(1-\gamma)\%$ exact individual confidence intervals of $\lambda_1$ and $\lambda_2$ are, respectively, given by
$$\left(h_2\big(\chi^{1-\gamma/2}_{2n(s-1)};X\big),\ h_2\big(\chi^{\gamma/2}_{2n(s-1)};X\big)\right)\quad\text{and}\quad\left(h_2\big(\chi^{1-\gamma/2}_{2(n-1)};Y\big),\ h_2\big(\chi^{\gamma/2}_{2(n-1)};Y\big)\right).$$
Additionally, two exact joint confidence regions for ( λ 1 , α 1 ) and ( λ 2 , α 2 ) are constructed by
$$\left\{(\lambda_1,\alpha_1):\ h_2\Big(\chi^{\frac{1+\sqrt{1-\gamma}}{2}}_{2n(s-1)};X\Big)<\lambda_1<h_2\Big(\chi^{\frac{1-\sqrt{1-\gamma}}{2}}_{2n(s-1)};X\Big),\ \frac{\chi^{\frac{1+\sqrt{1-\gamma}}{2}}_{2ns}}{H_{2X}[\lambda_1]}<\alpha_1<\frac{\chi^{\frac{1-\sqrt{1-\gamma}}{2}}_{2ns}}{H_{2X}[\lambda_1]}\right\}$$
and
$$\left\{(\lambda_2,\alpha_2):\ h_2\Big(\chi^{\frac{1+\sqrt{1-\gamma}}{2}}_{2(n-1)};Y\Big)<\lambda_2<h_2\Big(\chi^{\frac{1-\sqrt{1-\gamma}}{2}}_{2(n-1)};Y\Big),\ \frac{\chi^{\frac{1+\sqrt{1-\gamma}}{2}}_{2n}}{H_{2Y}[\lambda_2]}<\alpha_2<\frac{\chi^{\frac{1-\sqrt{1-\gamma}}{2}}_{2n}}{H_{2Y}[\lambda_2]}\right\},$$
respectively.
Remark 4.
Let i = 1 , 2 . The list of null hypotheses H 0 vs. the alternative ones H 1 is displayed:
$$(d)\ H_0:\lambda_i=\lambda_{i0}\ \text{vs.}\ H_1:\lambda_i\neq\lambda_{i0},\qquad(e)\ H_0:\lambda_i\ge\lambda_{i0}\ \text{vs.}\ H_1:\lambda_i<\lambda_{i0},\qquad(f)\ H_0:\lambda_i\le\lambda_{i0}\ \text{vs.}\ H_1:\lambda_i>\lambda_{i0}.$$
Under the significance level $0<\gamma<1$, the decision to reject $H_0$ in (d), (e), and (f) for $\lambda_1$ and $\lambda_2$ can, respectively, be made using the following critical regions:
$$(d)\ P_{2X}(\lambda_{10})\ge\chi^{\gamma/2}_{2n(s-1)}\ \text{or}\ P_{2X}(\lambda_{10})\le\chi^{1-\gamma/2}_{2n(s-1)},\qquad(e)\ P_{2X}(\lambda_{10})\ge\chi^{\gamma}_{2n(s-1)},\qquad(f)\ P_{2X}(\lambda_{10})\le\chi^{1-\gamma}_{2n(s-1)},$$
and
$$(d)\ P_{2Y}(\lambda_{20})\ge\chi^{\gamma/2}_{2(n-1)}\ \text{or}\ P_{2Y}(\lambda_{20})\le\chi^{1-\gamma/2}_{2(n-1)},\qquad(e)\ P_{2Y}(\lambda_{20})\ge\chi^{\gamma}_{2(n-1)},\qquad(f)\ P_{2Y}(\lambda_{20})\le\chi^{1-\gamma}_{2(n-1)}.$$
Remark 5.
For computational purposes, it is important that $s\ge 2$ for the s-out-of-k G system; otherwise, the pivotal quantities $P_{iX}$ and $Q_{iX}$, $i=1,2$, cannot be obtained. Under this condition, the strength variables $X_{11},X_{21},\ldots,X_{n1}$ can be viewed as a random sample of size n, and an alternative approach utilizes the following pivotal quantities:
$$P_{iX}(\lambda_{(\cdot)})=2\sum_{j=1}^{n-1}\ln\left[\frac{\sum_{r=1}^{n}\ln\big(1-\exp(-X_{(r1)}^2/\lambda_{(\cdot)})\big)}{(n-j)\ln\big(1-\exp(-X_{(j1)}^2/\lambda_{(\cdot)})\big)+\sum_{r=1}^{j}\ln\big(1-\exp(-X_{(r1)}^2/\lambda_{(\cdot)})\big)}\right]$$
and
$$Q_{iX}(\alpha_1,\lambda_{(\cdot)})=-2\alpha_1\sum_{r=1}^{n}\ln\big(1-\exp(-X_{(r1)}^2/\lambda_{(\cdot)})\big),$$
where $\lambda_{(\cdot)}=\lambda$ if $\lambda_1=\lambda_2=\lambda$; otherwise, $\lambda_{(\cdot)}=\lambda_1$, and $X_{(11)}\le X_{(21)}\le\cdots\le X_{(n1)}$ are the order statistics of $X_{11},X_{21},\ldots,X_{n1}$. It can be shown that $P_{iX}(\lambda_{(\cdot)})$ and $Q_{iX}(\alpha_1,\lambda_{(\cdot)})$ follow chi-square distributions with degrees of freedom $2(n-1)$ and $2n$, respectively. Consequently, the previous generalized point and confidence interval estimates can also be constructed.

5. Inference of λ 1 = λ 2

Practically, it is important to test whether the scale parameters are equal or not. For this purpose, the hypotheses and associated likelihood ratio test are displayed below:
$$H_0:\ \lambda_1=\lambda_2=\lambda\quad\text{vs.}\quad H_1:\ \lambda_1\neq\lambda_2.$$
The related likelihood ratio statistic has the property
$$-2\big\{\ell(\hat\Theta)-\ell(\check\Theta)\big\}\xrightarrow{d}\chi^2_1,\quad\text{as } n\to\infty,$$
where $\hat\Theta=(\hat\lambda,\hat\alpha_1,\hat\lambda,\hat\alpha_2)$ is the restricted MLE under $H_0$ and $\check\Theta$ is the unrestricted MLE of Section 3.2. Therefore, the likelihood ratio test can be conducted by utilizing the test statistic $-2\{\ell(\hat\Theta)-\ell(\check\Theta)\}$ with the rejection region
$$-2\big\{\ell(\hat\Theta)-\ell(\check\Theta)\big\}>c^{*},$$
where $c^{*}$ is selected such that $P(\chi^2_1>c^{*})$ equals the size of the test.
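Numerically, the likelihood ratio test can be carried out by maximizing the unrestricted and restricted log-likelihoods and comparing twice their gap with a $\chi^2_1$ quantile; the sketch below reuses the loglik and loglik1 helpers from the earlier sketches and is only illustrative.

# Likelihood ratio test of H0: lambda1 = lambda2 using the full and restricted MLEs
lrt_equal_scale <- function(x, y, k, gamma = 0.05) {
  full <- optim(c(1, 1, 1, 1), function(th) -loglik(th, x, y, k),
                method = "L-BFGS-B", lower = rep(1e-4, 4))   # Theta = (lambda1, alpha1, lambda2, alpha2)
  rest <- optim(c(1, 1, 1), function(th) -loglik1(th, x, y, k),
                method = "L-BFGS-B", lower = rep(1e-4, 3))   # Theta1 = (alpha1, alpha2, lambda)
  stat <- 2 * (-full$value - (-rest$value))                  # -2 { l(restricted) - l(unrestricted) }
  c(statistic = stat,
    p.value   = pchisq(stat, df = 1, lower.tail = FALSE),
    reject    = stat > qchisq(1 - gamma, df = 1))
}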

6. Practical Data Application

The Shasta Reservoir, the largest man-made reservoir in California, is located on the upper Sacramento River in northern California. The monthly water capacities for the months of August, September, and December from 1980 to 2015, which were accessed on 19 September 2021, were utilized to demonstrate the presented procedures. This data set was also studied under the Rayleigh distribution and the Burr XII distribution by Wang et al. [20] and Lio et al. [18], respectively.
Assume that the water level will not lead to excessive drought if the water capacity of a given December is exceeded by the August water capacities in at least three of the following five years; that is, the reliability states that in at least three years within the next five years, the water capacities in August are not less than the water capacity of the previous December. In this practical situation, $s=3$, $k=5$, and $n=6$. Let $Y_1$ be the capacity of December 1980; $X_{11},X_{12},X_{13},X_{14},X_{15}$ be the capacities of August from 1981 to 1985; $Y_2$ be the capacity of December 1986; $X_{21},X_{22},X_{23},X_{24},X_{25}$ be the capacities of August from 1987 to 1991; and so on. To easily fit the water capacities with BurrX$(\lambda,\alpha)$, all the water capacities were rescaled by dividing by 3,014,878 (the maximal water capacity), and the transformed data are listed as follows:
Observed complete strength sample (one row per system, five August capacities each):
0.4238 0.5579 0.7262 0.8112 0.8296
0.2912 0.3634 0.3719 0.4637 0.4785
0.5381 0.5612 0.7226 0.7449 0.7540
0.5249 0.6060 0.6686 0.7159 0.7552
0.3451 0.4253 0.4688 0.7188 0.7420
0.2948 0.3929 0.4616 0.6139 0.7951
Observed complete stress sample:
0.7009 0.6532 0.4589 0.7183 0.5310 0.7665
For more detailed information about the above-transformed data, the reader may refer to Kizilaslan and Nadar [12], whereas all the monthly water capacities of the Shasta Reservoir between 1981 and 1985 are presented in Appendix F.
The Kolmogorov–Smirnov (K-S) test with a two-sided rejection region was used to evaluate the Burr X distribution fit of these data sets. The K-S test statistic distances and the corresponding p-values (in brackets) for the strength and stress data were 0.1737 (0.2907) and 0.24812 (0.7771), respectively. In addition, the overlaid plots of the sample empirical CDF versus the fitted Burr X CDF, the sample cumulative probabilities versus the Burr X cumulative probabilities (P-P), and the sample quantiles versus the Burr X quantiles (Q-Q) are shown in Figure 1, Figure 2 and Figure 3, respectively. A P-P plot is a probability plot for assessing how closely a data set fits a specified model or how closely two data sets agree. A Q-Q plot is a graphical method for evaluating whether two data sets come from populations with a common distribution. Figure 2 shows two P-P plots presenting the empirical CDFs of the strength sample (left side) and the stress sample (right side) versus the theoretical CDF of the Burr X distribution. The linear regressions imposed on the P-P plots in Figure 2 were significant, with R-squared values of 0.97 and 0.91 for the complete strength and stress samples, respectively, and the linear regressions imposed on the Q-Q plots in Figure 3 were also significant, with R-squared values of 0.93 and 0.89 for the complete strength and stress samples, respectively. All of this information reveals that the Burr X distribution was a good-fitting probability model for the transformed data sets.
Based on the six three-out-of-five G systems provided in this example, the observed data collected from these multicomponent systems are given as follows:
Strength data of X (first three order statistics per system) and stress data of Y:
0.4238 0.5579 0.7262    0.7009
0.2912 0.3634 0.3719    0.6532
0.5381 0.5612 0.7226    0.4589
0.5249 0.6060 0.6686    0.7183
0.3451 0.4253 0.4688    0.5310
0.2948 0.3929 0.4616    0.7665
The point and interval estimates for the multicomponent system reliability $R_{s,k}$ are shown in Table 1, where the significance level was set to 0.05. The estimated interval lengths for the ACI, GCI, and BCI were 0.4641, 0.4519, and 0.4468, respectively, when $\lambda_1=\lambda_2$, and 0.4935, 0.5138, and 0.5112, respectively, when $\lambda_1\neq\lambda_2$. The three point estimates under $\lambda_1=\lambda_2$ were larger than the three point estimates under $\lambda_1\neq\lambda_2$. It was observed that the point estimates were close to each other, except for the MLE $\check R_{s,k}=0.6336$ when $\lambda_1\neq\lambda_2$. When comparing all the estimated interval lengths, the ACI of $R_{s,k}$ was found to perform comparably well in terms of length.
Furthermore, to test the equality of the scale parameters $\lambda_1$ and $\lambda_2$ of the strength and stress distributions, i.e., the null hypothesis $H_0:\lambda_1=\lambda_2$, the likelihood ratio test gave a statistic of 86.83223 with a p-value of $1.18\times10^{-20}$. Hence, under a 0.05 significance level, there is sufficient evidence to reject the null hypothesis, and the strength and stress distributions are suggested to be Burr X distributions with different scale parameters for the monthly capacity data considered. It is worth mentioning that the point estimates of $R_{s,k}$ under different parameters for the two Burr X distributions were consistent with the point estimate results of $R_{s,k}$ under the Burr XII distribution modeling studied by Lio et al. [18], where both Burr XII distributions shared one common parameter, while the Burr X modeling had different parameters for the same data sets considered.

7. Concluding Remarks

The inference for the multicomponent stress–strength model reliability was investigated using two-parameter Burr X distributions. The maximum likelihood and generalized pivotal quantity based estimators for the model parameters were constructed under equal scale parameters and different scale parameters, respectively. Moreover, confidence intervals were also provided by using the delta method with an asymptotic normal distribution, parametric bootstrap percentile, and generalized pivotal sampling.
Yousof et al. [23] and Jamal and Nasir [24] presented two different Burr X generators based on a one-parameter Burr X distribution. These two families have not been applied to estimate the reliability of the multicomponent stress–strength system and can be considered for potential future research. Another possible extension is to generalize the two-parameter Burr X distribution by using the same approaches as Yousof et al. [23] and Jamal and Nasir [24]. When future research pursues common inferential goals based on a family of distributions, model selection based on Bayesian and likelihood approaches can reasonably be applied, and the best-fitting model can be compared with the baseline model. For more information, readers may also refer to [32,33,34].
Additionally, the present results were established under type II censoring for the strength data sets. The approaches could possibly be extended to other censoring schemes, for example, progressive type II or progressive first-failure type II censoring, with proper modification of the pivotal quantities for the related samples. Viveros and Balakrishnan [35] provided more information about progressive censoring schemes. Moreover, estimation by the method of moments and the maximum product of spacings are interesting new directions. All of these are potential research opportunities.

Author Contributions

Y.L., D.-G.C., T.-R.T. and L.W. equally contributed to the work. All authors read and agreed to the published version of the manuscript.

Funding

This study was partially based on research supported by the National Science and Technology Council, Taiwan, grant number NSC 112-2221-E-032-038-MY2. The work of Liang Wang was supported by the National Natural Science Foundation of China (no. 12061091), the Yunnan Fundamental Research Projects (no. 202101AT070103), and the Yunnan Key Laboratory of Modern Analytical Mathematics and Applications (no. 202302AN360007). The work of Ding-Geng Chen was partially supported by the South Africa National Research Foundation (NRF) and the South Africa Medical Research Council (SAMRC) (South Africa DST-NRF-SAMRC SARChI Research Chair in Biostatistics to Professor Ding-Geng Chen, grant number 114613). The opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF and SAMRC.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Complete monthly water capacity data for the Shasta Reservoir from 1981 to 1985 are given in Appendix F. Section 6 includes the observed complete strength and stress data sets.

Acknowledgments

The authors would like to thank the editor and reviewers for their helpful comments and suggestions, which improved this paper significantly.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. The Verification of Theorem 2

Utilizing the mean value theorem for a derivative, the Taylor series expansion of R s , k ( Θ ^ ) is presented as
$$R_{s,k}(\hat\Theta)=R_{s,k}(\Theta)+\left(\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}\right)^{T}(\hat\Theta-\Theta)+\frac{1}{2}(\hat\Theta-\Theta)^{T}\frac{\partial^2 R_{s,k}(\Theta^{*})}{\partial\Theta\,\partial\Theta^{T}}(\hat\Theta-\Theta)\approx R_{s,k}(\Theta)+\left(\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}\right)^{T}(\hat\Theta-\Theta),$$
where $\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}$ and $\frac{\partial^2 R_{s,k}(\Theta)}{\partial\Theta\,\partial\Theta^{T}}$ are, respectively, the vector of first derivatives and the matrix of second derivatives of $R_{s,k}$ with respect to $\Theta$, and $\Theta^{*}$ is an appropriate value between $\Theta$ and $\hat\Theta$. Equation (A1) can also be represented as
$$R_{s,k}(\hat\Theta)-R_{s,k}(\Theta)\approx\left(\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}\right)^{T}(\hat\Theta-\Theta).$$
Theorem 1 implies $\hat\Theta\to\Theta$ and $R_{s,k}(\hat\Theta)\to R_{s,k}(\Theta)$ as $n\to\infty$. Moreover, by the delta method [30] and Equation (A1), the variance of $R_{s,k}(\hat\Theta)$ is approximated as
$$Var[R_{s,k}(\hat\Theta)]\approx Var\left[R_{s,k}(\Theta)+\left(\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}\right)^{T}\hat\Theta-\left(\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}\right)^{T}\Theta\right]=Var\left[\left(\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}\right)^{T}\hat\Theta\right]=\left(\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}\right)^{T}Var[\hat\Theta]\left(\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}\right).$$
Therefore, through utilizing the delta method [30], Theorem 1 implies
$$R_{s,k}(\hat\Theta)-R_{s,k}(\Theta)\xrightarrow{d}N\left(0,\ \left(\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}\right)^{T}Var[\hat\Theta]\left(\frac{\partial R_{s,k}(\Theta)}{\partial\Theta}\right)\right).$$
The proof is done.

Appendix B. The Verification of Theorem 3

Given a positive integer $i\ (\le n)$, let $X_{i1}\le X_{i2}\le X_{i3}\le\cdots\le X_{is}$ denote the first s order statistics from a random sample of size k from BurrX$(\lambda_1,\alpha_1)$. Therefore, $T_{im}=-\alpha_1\ln\big(1-\exp(-X_{im}^2/\lambda_1)\big)$, $m=1,2,\ldots,s$, can be viewed as a type II censored sample collected from an exponential distribution with mean one. Because of the memoryless property of the exponential distribution, $Z_{i1}=kT_{i1}$, $Z_{i2}=(k-1)(T_{i2}-T_{i1})$, $\ldots$, $Z_{is}=(k-s+1)(T_{is}-T_{i(s-1)})$ is a random sample from the exponential distribution with mean one. Lawless [36] provided more information about the memoryless property of the exponential distribution.
For m = 1 , 2 , , s , i = 1 , 2 , , n , let
$$W_{im}=\sum_{r=1}^{m}Z_{ir}=-\alpha_1\left\{(k-m)\ln\big(1-\exp(-X_{im}^2/\lambda_1)\big)+\sum_{r=1}^{m}\ln\big(1-\exp(-X_{ir}^2/\lambda_1)\big)\right\}.$$
Stephens [37] and Viveros and Balakrishnan [35] provided reasonable background to show that $U_{i(1)}=W_{i1}/W_{is}$, $U_{i(2)}=W_{i2}/W_{is}$, $\ldots$, $U_{i(s-1)}=W_{i(s-1)}/W_{is}$ are the order statistics of a random sample of size $s-1$ from the uniform distribution over the interval $(0,1)$. Additionally, $U_{i(1)}<U_{i(2)}<\cdots<U_{i(s-1)}$ is independent of
$$W_{is}=\sum_{r=1}^{s}Z_{ir}=-\alpha_1\left\{(k-s)\ln\big(1-\exp(-X_{is}^2/\lambda_1)\big)+\sum_{r=1}^{s}\ln\big(1-\exp(-X_{ir}^2/\lambda_1)\big)\right\}.$$
It can be shown that the quantities $P_{i1}(\lambda_1)=-2\sum_{m=1}^{s-1}\ln(U_{i(m)})$ and $Q_{i1}(\alpha_1,\lambda_1)=2W_{is}$ are independent and follow chi-square distributions with degrees of freedom $2(s-1)$ and $2s$, respectively. Furthermore, by the independence of $P_{i1}(\lambda_1)$, $i=1,2,\ldots,n$, one can show that
$$P_X(\lambda_1)=\sum_{i=1}^{n}P_{i1}(\lambda_1)=2\sum_{i=1}^{n}\sum_{m=1}^{s-1}\ln\left[\frac{(k-s)\ln\big(1-\exp(-X_{is}^2/\lambda_1)\big)+\sum_{j=1}^{s}\ln\big(1-\exp(-X_{ij}^2/\lambda_1)\big)}{(k-m)\ln\big(1-\exp(-X_{im}^2/\lambda_1)\big)+\sum_{j=1}^{m}\ln\big(1-\exp(-X_{ij}^2/\lambda_1)\big)}\right]$$
and
$$Q_X(\alpha_1,\lambda_1)=\sum_{i=1}^{n}Q_{i1}(\alpha_1,\lambda_1)=-2\alpha_1\sum_{i=1}^{n}\left[(k-s)\ln\big(1-\exp(-X_{is}^2/\lambda_1)\big)+\sum_{j=1}^{s}\ln\big(1-\exp(-X_{ij}^2/\lambda_1)\big)\right]$$
are independent and follow chi-square distributions with degrees of freedom $2n(s-1)$ and $2ns$, respectively.
The theorem is proven.

Appendix C. The Verification of Theorem 4

Let the order statistics of $Y_1,Y_2,\ldots,Y_n$ be denoted by $Y_{(1)}\le Y_{(2)}\le Y_{(3)}\le\cdots\le Y_{(n)}$. Then, $-\alpha_2\ln\big(1-\exp(-Y_{(m)}^2/\lambda_2)\big)$, $m=1,2,3,\ldots,n$, are the order statistics of a sample from an exponential distribution with mean one. The theorem can be proved by following the same procedure used for the previous theorem.

Appendix D. The Verification of Lemma 1

$$\frac{dK(t)}{dt}=\left[-\frac{(b^2/t)\exp(-b^2/t)\ln\big(1-\exp(-a^2/t)\big)}{1-\exp(-b^2/t)}+\frac{(a^2/t)\exp(-a^2/t)\ln\big(1-\exp(-b^2/t)\big)}{1-\exp(-a^2/t)}\right]\times\frac{1}{t\big(\ln(1-\exp(-a^2/t))\big)^2},\qquad t>0.$$
Showing that $dK(t)/dt>0$ for $t>0$ is equivalent to verifying
$$\frac{(b^2/t)\exp(-b^2/t)}{\big(1-\exp(-b^2/t)\big)\big(-\ln(1-\exp(-b^2/t))\big)}>\frac{(a^2/t)\exp(-a^2/t)}{\big(1-\exp(-a^2/t)\big)\big(-\ln(1-\exp(-a^2/t))\big)}\quad\text{for } t>0.$$
Let $\phi(u)=\dfrac{u\exp(-u)}{\big(1-\exp(-u)\big)\big(-\ln(1-\exp(-u))\big)}$ for $u>0$ and
$g(u)=\ln(\phi(u))=\ln(u)-u-\ln\big(1-\exp(-u)\big)-\ln\big(-\ln(1-\exp(-u))\big)$ for $u>0$. Then,
$$\frac{dg(u)}{du}=-1+\frac{1}{u}-\frac{\exp(-u)}{1-\exp(-u)}+\frac{\exp(-u)}{-\big(1-\exp(-u)\big)\ln\big(1-\exp(-u)\big)}.$$
It can be shown that
$$\frac{\exp(-u)}{-\big(1-\exp(-u)\big)\ln\big(1-\exp(-u)\big)}=\frac{1}{\big(\exp(u)-1\big)\big(u-\ln(\exp(u)-1)\big)}>0,\quad\lim_{u\to\infty}\frac{1}{\big(\exp(u)-1\big)\big(u-\ln(\exp(u)-1)\big)}=1,\quad\lim_{u\to 0^{+}}\frac{1}{\big(\exp(u)-1\big)\big(u-\ln(\exp(u)-1)\big)}=\infty,$$
$$\lim_{u\to 0^{+}}\left(-1+\frac{1}{u}-\frac{\exp(-u)}{1-\exp(-u)}\right)=-0.5,\quad\lim_{u\to\infty}\left(-1+\frac{1}{u}-\frac{\exp(-u)}{1-\exp(-u)}\right)=-1.0,\quad\text{and}\quad-1.0<-1+\frac{1}{u}-\frac{\exp(-u)}{1-\exp(-u)}<-0.5.$$
Moreover, since $\ln(1+w)<w$ for $w>0$, we have $\big(\exp(u)-1\big)\big(u-\ln(\exp(u)-1)\big)=\big(\exp(u)-1\big)\ln\big(1+1/(\exp(u)-1)\big)<1$, so the first term above exceeds 1 for all $u>0$.
Hence, $dg(u)/du>0$, so $g(u)$ is an increasing function and therefore $\phi(u)$ is an increasing function.
Lemma 1 is proven.

Appendix E. The Verification of Corollary 1

The definitions of P X and P Y imply that
$$\frac{(k-s)\ln\big(1-\exp(-X_{is}^2/\lambda_1)\big)+\sum_{r=1}^{s}\ln\big(1-\exp(-X_{ir}^2/\lambda_1)\big)}{(k-j)\ln\big(1-\exp(-X_{ij}^2/\lambda_1)\big)+\sum_{r=1}^{j}\ln\big(1-\exp(-X_{ir}^2/\lambda_1)\big)}=1+\frac{(k-s)\frac{\ln(1-\exp(-X_{is}^2/\lambda_1))}{\ln(1-\exp(-X_{ij}^2/\lambda_1))}+\sum_{r=j+1}^{s}\frac{\ln(1-\exp(-X_{ir}^2/\lambda_1))}{\ln(1-\exp(-X_{ij}^2/\lambda_1))}-(k-j)}{\sum_{r=1}^{j}\frac{\ln(1-\exp(-X_{ir}^2/\lambda_1))}{\ln(1-\exp(-X_{ij}^2/\lambda_1))}+(k-j)}$$
and
$$\frac{\sum_{r=1}^{n}\ln\big(1-\exp(-Y_{(r)}^2/\lambda_2)\big)}{(n-j)\ln\big(1-\exp(-Y_{(j)}^2/\lambda_2)\big)+\sum_{r=1}^{j}\ln\big(1-\exp(-Y_{(r)}^2/\lambda_2)\big)}=1+\frac{\sum_{r=j+1}^{n}\frac{\ln(1-\exp(-Y_{(r)}^2/\lambda_2))}{\ln(1-\exp(-Y_{(j)}^2/\lambda_2))}-(n-j)}{\sum_{r=1}^{j}\frac{\ln(1-\exp(-Y_{(r)}^2/\lambda_2))}{\ln(1-\exp(-Y_{(j)}^2/\lambda_2))}+(n-j)}.$$
Lemma 1 implies that the numerator of (A2) increases with respect to λ 1 and the denominator of (A4) decreases with respect to λ 1 . Hence, P X is an increasing function. Moreover, Lemma 1 also implies that P Y is increasing.

Appendix F. Complete Shasta Reservoir Water Capacity per Month

Table A1. The water capacity of the Shasta Reservoir from 1981 to 1985.
Date     Storage (AF)   Date     Storage (AF)   Date     Storage (AF)
01/1981  3,453,500      09/1982  3,486,400      05/1984  4,294,400
02/1981  3,865,200      10/1982  3,433,400      06/1984  4,070,000
03/1981  4,320,700      11/1982  3,297,100      07/1984  3,587,400
04/1981  4,295,900      12/1982  3,255,000      08/1984  3,305,500
05/1981  3,994,300      01/1983  3,740,300      09/1984  3,240,100
06/1981  3,608,600      02/1983  3,579,400      10/1984  3,155,400
07/1981  3,033,000      03/1983  3,725,100      11/1984  3,252,300
08/1981  2,547,600      04/1983  4,286,100      12/1984  3,105,500
09/1981  2,480,200      05/1983  4,526,800      01/1985  3,118,200
10/1981  2,560,200      06/1983  4,471,200      02/1985  3,240,400
11/1981  3,336,700      07/1983  4,169,900      03/1985  3,445,500
12/1981  3,492,000      08/1983  3,776,200      04/1985  3,546,900
01/1982  3,556,300      09/1983  3,616,800      05/1985  3,225,400
02/1982  3,633,500      10/1983  3,458,000      06/1985  2,856,300
03/1982  4,062,000      11/1983  3,395,400      07/1985  2,292,100
04/1982  4,472,700      12/1983  3,457,500      08/1985  1,929,200
05/1982  4,507,500      01/1984  3,405,200      09/1985  1,977,800
06/1982  4,375,400      02/1984  3,789,900      10/1985  2,083,100
07/1982  4,071,200      03/1984  4,133,600      11/1985  2,173,900
08/1982  3,692,400      04/1984  4,342,700      12/1985  2,422,100

Appendix G. R Codes for the Estimation Methods

library(nleqslv)
#
#  Functions for the Burr X distribution
#    1. Probability density function: dburX
#    2. Cumulative distribution function: pburX
#    3. Quantile function: qburX
#    4. Random sample generation: rburX
dburX<-function(x,alpha,lambda)
{
 
return(2*x*alpha/lambda*exp(-x^2/lambda)*(1-exp(-x^2/lambda))^(alpha-1) )
}
 
pburX<-function(x,alpha,lambda)
{
return((1 - exp(-x^2/lambda))^alpha)
}
 
qburX<-function(p,alpha,lambda)
{
return(sqrt(-lambda*log(1-p^(1/alpha))))
}
 
rburX<-function(nn,alpha,lambda)
{
return( qburX(p=runif(nn, min=0,max=1),alpha=alpha,lambda=lambda) )
}
 
#######################################
#    Maximum likelihood estimate (MLE)
#     based on complete data
#######################################
mle.burX=function(x)
{
obj.MLE=function(parm){
alpha=parm[1]
lambda=parm[2]
logL = log(dburX(x,alpha,lambda))
return(-sum(logL))
} # End of the obj.MLE function
pa=rep(0,length=2)
pa[1]=runif(1,0,1)
pa[2]=runif(1,0,1)
nlminb(pa, obj.MLE, gradient = NULL, hessian = NULL,
lower =c(0.001,0.001), upper =c(Inf,Inf))
}
 
mle.bur2=function(x)
{
obj.MLE=function(parm)
{
alpha=parm[1]
lambda=parm[2]
logL = log(dburX(x,alpha,lambda))
return(-sum(logL))
} # End of the obj.MLE function
par=rep(0,length=2)
par[1]=runif(1,0,1)
par[2]=runif(1,0,1)
optim(par, obj.MLE, method="L-BFGS-B",
lower =c(0.001,0.001), upper =c(Inf,Inf))
}
 
mle.burf=function(x)
{
# Profile likelihood approach: for a given lambda, the MLE of alpha is
#   alpha(lambda) = -length(x)/sum(log(1 - exp(-x^2/lambda))),
# and lambda solves the profile score equation in objLam below.
objLam=function(lambda)
{
alpha.lam = - length(x)/sum(log(1 - exp(-x^2/lambda)))
return(-length(x)*lambda + sum(x^2) -
(alpha.lam - 1)*
sum((x^2*exp(-x^2/lambda))/(1-exp(-x^2/lambda))))
}
# nleqslv requires an explicit starting value; mean(x^2) is a rough scale guess.
unrt=nleqslv(x=mean(x^2), objLam, jac=NULL, method =
c("Newton"), global = c("hook"), xscalm = c("auto"), control = list())
hlam=unrt$x
halp= - length(x)/sum(log(1 - exp(-x^2/hlam)))
return(list(halp=halp,hlam=hlam))
}
 
###############################################
#   All strength observations x
#   All stress observations y
##############################################
x=c(0.4238,0.5579,0.7262,0.8112,0.8296,0.2912,0.3634,0.3719,0.4637,
0.4785,0.5381,0.5612,0.7226,0.7449,0.7540,0.5249, 0.6060, 0.6686,
0.7159, 0.7552,0.3451, 0.4253, 0.4688, 0.7188, 0.7420,0.2948,
0.3929, 0.4616, 0.6139,0.7951 )
 
y=c(0.7009,0.6532,0.4589,0.7183,0.5310,0.7665)
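 
# Illustrative usage (not part of the estimation workflow below): fit the
# two-parameter Burr X model separately to the complete strength and stress
# samples with mle.burf; the fitted (shape, scale) pairs can be compared with
# the Burr X fits displayed in Figures 1-3.
fitX <- mle.burf(x)
fitY <- mle.burf(y)
c(alpha.X = fitX$halp, lambda.X = fitX$hlam)
c(alpha.Y = fitY$halp, lambda.Y = fitY$hlam)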
 
#-------------------------------------------------
#   Kolmogorov-Smirnov test of the Burr X distribution for
#    the strength observations x and
#    the stress observations y.
#   Plot empirical and Burr X CDFs
#----------------------------------------------------------
ksBurX=function(x,alternative = "two.sided", plot = FALSE)
{
est=mle.burf(x)
lambda=est$hlam
alpha=est$halp
x=sort(x)
mini <- min(x)
maxi <- max(x)
res <- ks.test(x, pbur, alpha, lambda, alternative = alternative)
ye=numeric(length(x))
ye[1] = 1/length(x)
for(i in 2:length(x)) ye[i] = ye[i-1] + ye[1]
if (plot == TRUE)
{
plot(x,ye,ylim =c(0,1),xlim=c(mini,maxi),
type="S",main = "Empirical DF and Burr X CDF", xlab = "Strength",
ylab = "Probability")
 
t <- seq(mini, maxi, by = 0.01)
y <- pbur(t, alpha, lambda)
lines(t,y, lwd = 2)
}
return(res)
}
###############################################
#   For P-P plots
#
###############################################
CDFX<-function(x,alternative="two.sided",plot=TRUE)
{
est=mle.burf(x)
lambda=est$hlam
alpha=est$halp
x=sort(x)
mini <- min(x)
maxi <- max(x)
res <- ks.test(x, pbur, alpha, lambda, alternative = alternative)
ye=numeric(length(x))
ye[1] = 1/length(x)
xe=numeric(length(x))
xe[1]=pbur(x[1],alpha,lambda)
for(i in 2:length(x))
{ ye[i] = ye[i-1] + ye[1]
xe[i] =pbur(x[i],alpha,lambda)
}
fit=lm(ye~xe)
summary(fit)
if (plot == TRUE)
{
plot(xe,ye,ylim =c(min(ye),max(ye)),xlim=c(min(xe),max(xe)),
type="p",col="blue",main = "Empirical CDF-Burr X CDF plot",
xlab = "Fitted Burr X CDF for Strength",ylab = "Empirical CDF")
abline(fit,col="blue")
}
return(fit)
}
#####################################################
#  For Q-Q plots
#
#####################################################
 
quantX<-function(x,alternative="two.sided",plot=TRUE)
{
est=mle.burf(x)
lambda=est$hlam
alpha=est$halp
 
x=sort(x)
mini <- min(x)
maxi <- max(x)
res <- ks.test(x, pbur, alpha, lambda, alternative = alternative)
ye=x
n=length(x)
xe=numeric(n)
xe[1]=qbur((1-0.5)/n,alpha,lambda)
for(i in 2:length(x)) xe[i] =qbur((i-0.5)/n,alpha,lambda)
 
fit=lm(ye~xe)
summary(fit)
if (plot == TRUE)
{
plot(xe,ye,ylim =c(min(ye),max(ye)),xlim=c(min(xe),max(xe)), type="p",
col="blue",main = "Empirical quantile-Burr X quantile plot",
xlab = "Fitted Burr X quantile for Strength", ylab = "Empirical quantile")
abline(fit,col="blue")
}
return(fit)
}
 
#  Data set
k=5
s=3
 
# The 3 out of 5 G system data set
# xL=matrix(nrow=6,ncol=s)
# y is the corresponding stress data set
x=matrix(nrow=6,ncol=s)
x[1,]=c(0.4238,0.5579,0.7262)
x[2,]=c(0.2912,0.3634,0.3719)
x[3,]=c(0.5381,0.5612,0.7226)
x[4,]=c(0.5249,0.6060,0.6686)
x[5,]=c(0.3451,0.4253,0.4688)
x[6,]=c(0.2948,0.3929,0.4616)
y=c(0.7009,0.6532,0.4589,0.7183,0.5310,0.7665)
 
n=dim(x)[1]

# Keep copies of the observed data; later sections overwrite x and y with
# simulated samples, and the copies are restored before each refit.
x.obs = x
y.obs = y
 
# The partition set over [0, 1] for evaluating integral
u =numeric(1000)
du = 1/1000
u[1] = du/2
for(i in 2:1000) u[i] = u[i-1] + du
 
##################################################
# Find maximum likelihood estimator of
#  reliability for multicomponent system
#  assuming equal scale parameter
######################################
 
# Log-likelihood function under a common scale parameter
#  Based on type II strength data set and the
#   corresponding stress data set
#
obj<-function(par)
{
lambda=par[1]; alpha1=par[2]; alpha2=par[3]
temp1=sum(log(dbur(x,alpha1,lambda)))
temp2=(k-s)*sum(log(1-pbur(x[,s],alpha1,lambda)))
temp3=sum(log(dbur(y,alpha2,lambda)))
tTemp = temp1+temp2+temp3
return(-tTemp)
}
# Reliability of system
#
Relibyks3=function(alpha1,alpha2)
{
tempa = 0
# cat("s =", s, "k =",k,"\n")
for(i in s:k)
{
ch1=choose(k,i)
temp= 0
for(j in 0:(k-i))
{
ch2 =choose(k-i,j)
temp=temp + ch2 *((-1)^j)* (alpha2/(alpha1*(j+i) + alpha2))
}
temp=temp*ch1
tempa = tempa + temp
 
}
return(tempa)
}
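# Illustrative consistency check (uses the globals s = 3 and k = 5 set above):
# when alpha1 = alpha2 and the scale parameter is common, strength and stress
# share one distribution, so the s-out-of-k reliability reduces to
# (k - s + 1)/(k + 1) = 0.5 for the 3-out-of-5 system.
Relibyks3(alpha1 = 1, alpha2 = 1)   # returns 0.5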
# Calculating gradient of reliability function
#
gradient3 = function(lambda,alpha1,alpha2)
{
tempaL1 = 0; tempaL2=0
for(i in s:k)
{
ch1=choose(k,i)
temp1= 0; temp2=0
for(j in 0:(k-i))
{
ch2 =choose(k-i,j)
temp1=temp1 + ch2 *((-1)^j)* (alpha2*(j+i)/(alpha1*(j+i) + alpha2)^2)
temp2=temp2 +ch2*((-1)^j)*(alpha1*(j+i)/(alpha1*(j+i) + alpha2)^2 )
}
temp1=temp1*ch1
temp2 = temp2*ch1
tempaL1 = tempaL1 + temp1
tempaL2 =tempaL2 + temp2
}
dRdL = 0
return(list(dRdL=dRdL, dRda1=(-1)*tempaL1, dRda2 = tempaL2))
}
 
#
#  Main program is given as follows
#
# Maximum likelihood estimates for three parameters
par=c(0.06,0.1,0.5)
out= optim(par,obj,method="L-BFGS-B",lower=c(0.05,0.05,0.05),
hessian=TRUE)
hlambda =out$par[1]
ha1 = out$par[2]
ha2 = out$par[3]
hRsk=Relibyks3(alpha1=ha1,alpha2=ha2)
 
#  2. Bootstrap procedure
BOOT = 1000                     # number of bootstrap replications (user-chosen)
Bha1 = numeric(BOOT)
Bha2 = numeric(BOOT)
Bhlambda = numeric(BOOT)
BhRsk = numeric(BOOT)
#  2. generating bootstrap sample and get bootstrap MLE
for(iB in 1:BOOT)
{
y = rbur(nn=n,alpha=ha2,lambda=hlambda)
for(ii in 1:n)
{
x[ii,]= sort(rbur(nn=k,alpha =ha1,lambda=hlambda))[1:s]
}
Bpar=c(0.5,1.2,2.5)
BBpar= optim(Bpar,obj,method="SANN")$par
Bhlambda[iB]=BBpar[1]
Bha1[iB] = BBpar[2]
 
Bha2[iB] = BBpar[3]
BhRsk[iB]=Relibyks3(alpha1=Bha1[iB],alpha2=Bha2[iB])
cat(iB, "th run","\n")
}
conf=quantile(BhRsk, probs=c(0.025,0.975), type = 1)
 
# find ACI
zq = abs(qnorm(0.025))
Fm=out$hessian
Covar=solve(Fm)
 
## Find confidence interval of reliability
gRadlist=gradient3(lambda=hlambda,alpha1=ha1,alpha2=ha2)
gRad=numeric(3)
gRad[1]=gRadlist$dRdL; gRad[2]=gRadlist$dRda1;gRad[3]=gRadlist$dRda2
 
varRsk = t(gRad)%*%Covar%*%t(t(gRad))
# delta method on the log scale: Var(log(hRsk)) is approximately Var(hRsk)/hRsk^2
varlnRsk = varRsk/hRsk^2
CL =hRsk/exp(zq*sqrt(varlnRsk))
CU =hRsk*exp(zq *sqrt(varlnRsk))
#Output results
cat("Simulation results: LBRsk = ",conf[1]," UBRsk = ",conf[2]," hRsk = ",
hRsk,"\n")
cat("ACI is", CL, " ", CU,"\n")
 
# end of the case with a common scale parameter
 
##################################################
# Find maximum likelihood estimator of
#  reliability for multicomponent system
#  assuming different scale parameters (four parameters)
#
######################################
 
# log-likelihood function
obj<-function(par)
{
lambda1=par[1];alpha1=par[2];lambda2=par[3]; alpha2=par[4]
temp1=sum(log(dbur(x,alpha1,lambda1)))
temp2=(k-s)*sum(log(1-pbur(x[,s],alpha1,lambda1)))
temp3=sum(log(dbur(y,alpha2,lambda2)))
tTemp = temp1+temp2+temp3
return(-tTemp)
}
#
#
Relibyks4=function(lambda1,alpha1,lambda2,alpha2)
{
tempa = 0
# cat("s =", s, "k =",k,"\n")
for(i in s:k)
{
ch1=choose(k,i)
temp= 0
for(j in 0:(k-i))
{
ch2 =choose(k-i,j)
v=du*sum( (1 - u^(lambda2/lambda1))^(alpha1*(i+j))*(1-u)^(alpha2-1))
 
temp=temp + alpha2*ch2 *((-1)^j)*v
}
temp=temp*ch1
tempa = tempa + temp
 
}
return(tempa)
}
 
#
#  gradient of reliability function under four parameters
gradient4 = function(lambda1,alpha1,lambda2,alpha2)
{
tempaL1 = 0; tempaL2=0;tempaL3=0;tempaL4=0
for(i in s:k)
{
ch1=choose(k,i)
temp1= 0; temp2=0;temp3=0;temp4=0
for(j in 0:(k-i))
{
ch2 =choose(k-i,j)*(-1)^j
temp1=temp1 + ch2*alpha2*alpha1*(j+i)*lambda2/lambda1^2*sum((1-u^(lambda2/
lambda1))^(alpha1*(i+j)-1)*
(1-u)^(alpha2-1)*u^(lambda2/lambda1)*log(u))*du
 
temp2=temp2 +ch2*(j+i)*alpha2*sum((1-u^(lambda2/lambda1))^(alpha1*(i+j))*
log(1-u^(lambda2/lambda1))*
(1-u)^(alpha2-1))*du
 
temp3=temp3 +ch2*alpha2*alpha1*(j+i)/lambda1*sum((1-u^(lambda2/lambda1))^
(alpha1*(i+j)-1)*
(1-u)^(alpha2-1)*(-u^(lambda2/lambda1))*log(u))*du
 
temp4=temp4 +ch2*du*(sum((1-u^(lambda2/lambda1))^(alpha1*(j+i))*(1-u)^
(alpha2-1))+
alpha2*sum((1-u^(lambda2/lambda1))^(alpha1*(j+i))*(1 -u)^(alpha2-1)*
log(1-u)))
}
temp1=temp1*ch1
temp2 = temp2*ch1
temp3 =temp3*ch1
temp4 =temp4*ch1
tempaL1 = tempaL1 + temp1
tempaL2 =tempaL2 +temp2
tempaL3 =tempaL3 + temp3
tempaL4 =tempaL4 + temp4
}
return(list(dRdL1=tempaL1, dRda1=tempaL2, dRdL2 = tempaL3, dRda2 =
tempaL4))
}
 
#
#  Pivotal quantity method for three parameters
#
#
xL=matrix(nrow=n,ncol=s)
p1X= numeric(n)
p1Y = numeric(n)
 
whRsk=numeric(BOOT)
 
library(HDInterval)
 
 
obj<-function(par)
{
lambda=par[1]; alpha1=par[2]; alpha2=par[3]
temp1=sum(log(dbur(x,alpha1,lambda)))
temp2=(k-s)*sum(log(1-pbur(x[,s],alpha1,lambda)))
temp3=sum(log(dbur(y,alpha2,lambda)))
tTemp = temp1+temp2+temp3
return(-tTemp)
}
 
gRad = numeric(3)
zq = abs(qnorm(0.025))
x = x.obs; y = y.obs            # restore the observed data before refitting
par=c(2.5,2.5,2.5)
out= optim(par,obj,method="L-BFGS-B",lower=c(0.5,0.5,0.5),hessian=TRUE)
hlambda =out$par[1]
ha1 = out$par[2]
ha2 = out$par[3]
hRsk=Relibyks3(ha1,ha2)
 
for(jq in 1:BOOT)
{
#   1. generating data sets for x and y
y = rbur(nn=n,alpha=ha2,lambda=hlambda)
 
for(i in 1:n)
{
x[i,]= sort(rbur(nn=k,alpha = ha1,lambda=hlambda))[1:s]
}
 
par=c(2.5,2.5,2.05)
out= optim(par,obj,method="L-BFGS-B",lower=c(0.5,0.5,0.5))
whla = out$par[1]
wha1 = - rchisq(1,df=2*n*s)/(2*((k-s)*sum(log(1 - exp(-x[,s]^2/whla))) +
sum(log(1 - exp(-x^2/whla)))))
wha2 = - rchisq(1, df=2*n)/(2*sum(log(1 - exp(-y^2/whla))))
whRsk[jq]=Relibyks3(wha1,wha2)
}
conf=hdi(whRsk, credMass = 0.95)
Lbconf = conf[1]
Ubconf = conf[2]
 
BhRsk = mean(whRsk)
 
lnw1 = mean(log((1+whRsk)/(1-whRsk)))
hRskF = (exp(lnw1) - 1)/(exp(lnw1) +1)
 
#
#  Pivotal quantity method with four parameters
#
xL=matrix(nrow=n,ncol=s)
p1X= numeric(n)
p1Y = numeric(n)
 
whRsk=numeric(BOOT)
 
library(HDInterval)
 
obj<-function(par)
{
lambda1=par[1]; alpha1=par[2];lambda2=par[3]; alpha2=par[4]
temp1=sum(log(dbur(x,alpha1,lambda1)))
temp2=(k-s)*sum(log(1-pbur(x[,s],alpha1,lambda1)))
temp3=sum(log(dbur(y,alpha2,lambda2)))
tTemp = temp1+temp2+temp3
return(-tTemp)
}
 
zq = abs(qnorm(0.025))
x = x.obs; y = y.obs            # restore the observed data before refitting
par=c(2.5,2.5,2.5,2.5)
out= optim(par,obj,method="L-BFGS-B",lower=c(0.5,0.5,0.5,0.5),
hessian=TRUE)
hlambda1 =out$par[1]
ha1 = out$par[2]
hlambda2 =out$par[3]
ha2 = out$par[4]
 
for(jq in 1:BOOT)
{
par=c(1.5,1.5,1.05,1.05)
out= optim(par,obj,method="L-BFGS-B",lower=c(0.5,0.5,0.5,0.5))
whla1 = out$par[1]
wha1=out$par[2]
whla2=out$par[3]
wha2 =out$par[4]
 
wha1 = - rchisq(1,df=2*n*s)/(2*((k-s)*sum(log(1 - exp(-x[,s]^2/whla1))) +
sum(log(1 - exp(-x^2/whla1)))))
wha2 = - rchisq(1, df=2*n)/(2*sum(log(1 - exp(-y^2/whla2))))
whRsk[jq]=Relibyks4(whla1,wha1,whla2,wha2)
cat(" Boot Run at ",jq,"\n")
}
conf=hdi(whRsk, credMass = 0.95)
Lbconf = conf[1]
Ubconf = conf[2]
 
BhRsk = mean(whRsk)
 
lnw1 = mean(log((1+whRsk)/(1-whRsk)))
hRskF = (exp(lnw1) - 1)/(exp(lnw1) +1)
cat(BhRsk,"  ",hRskF,"\n")
cat("GCI = ",Lbconf,"  GCU = ",Ubconf,"\n")

References

  1. Eryilmaz, S. Phase type stress-strength models with reliability applications. Commun. Stat.—Simul. Comput. 2018, 47, 954–963. [Google Scholar] [CrossRef]
  2. Kundu, D.; Raqab, M.Z. Estimation of R = P(Y < X) for three-parameter Weibull distribution. Stat. Probab. Lett. 2009, 79, 1839–1846. [Google Scholar]
  3. Krishnamoorthy, K.; Lin, Y. Confidence limits for stress-strength reliability involving Weibull models. J. Stat. Plan. Inference 2010, 140, 1754–1764. [Google Scholar] [CrossRef]
  4. Mokhlis, N.A.; Ibrahim, E.J.; Ibrahim, E.J. Stress-strength reliability with general form distributions. Commun. Stat.—Theory Methods 2017, 46, 1230–1246. [Google Scholar] [CrossRef]
  5. Surles, J.G.; Padgett, W.J. Inference for P(Y < X) in the Burr Type X Model. J. Appl. Stat. Sci. 1998, 7, 225–238. [Google Scholar]
  6. Wang, B.; Geng, Y.; Zhou, J. Inference for the generalized exponential stress-strength model. Appl. Math. Model. 2018, 53, 267–275. [Google Scholar] [CrossRef]
  7. Bhattacharyya, G.K.; Johnson, R.A. Estimation of reliability in multicomponent stress-strength model. J. Am. Stat. Assoc. 1974, 69, 966–970. [Google Scholar] [CrossRef]
  8. Dey, S.; Mazucheli, J.; Anis, M.Z. Estimation of reliability of multicomponent stress-strength for a Kumaraswamy distribution. Commun. Stat.—Theory Methods 2017, 46, 1560–1572. [Google Scholar] [CrossRef]
  9. Kayal, T.; Tripathi, Y.M.; Dey, S.; Wu, S.J. On estimating the reliability in a multicomponent stress-strength model based on Chen distribution. Commun. Stat.—Theory Methods 2020, 49, 2429–2447. [Google Scholar] [CrossRef]
  10. Kizilaslan, F. Classical and Bayesian estimation of reliability in a multicomponent stress-strength model based on the proportional reversed hazard rate model. Math. Comput. Simul. 2017, 136, 36–62. [Google Scholar] [CrossRef]
  11. Kizilaslan, F. Classical and Bayesian estimation of reliability in a multicomponent stress-strength model based on a general class of inverse exponentiated distributions. Stat. Pap. 2018, 59, 1161–1192. [Google Scholar] [CrossRef]
  12. Kizilaslan, F.; Nadar, M. Estimation of reliability in a multicomponent stress-strength model based on a bivariate Kumaraswamy distribution. Stat. Pap. 2018, 59, 307–340. [Google Scholar] [CrossRef]
  13. Nadar, M.; Kizilaslan, F. Estimation of reliability in a multicomponent stress-strength model based on a Marshall-Olkin bivariate Weibull distribution. IEEE Trans. Reliab. 2016, 65, 370–380. [Google Scholar] [CrossRef]
  14. Rao, G.S. Estimation of reliability in multicomponent stress-strength model based on Rayleigh distribution. ProbStat Forum 2012, 5, 150–161. [Google Scholar]
  15. Rao, G.S. Estimation reliability in multicomponent stress-strength based on generalized Rayleigh distribution. J. Mod. Appl. Stat. Methods 2014, 13, 367–379. [Google Scholar] [CrossRef]
  16. Rao, G.S.; Aslam, M.; Kundu, D. Burr type XII distribution parametric estimation and estimation of reliability of multicomponent stress-strength. Commun. Stat.—Theory Methods 2015, 44, 4953–4961. [Google Scholar] [CrossRef]
  17. Shawky, A.I.; Khan, K. Reliability estimation in multicomponent stress-strength based on inverse Weibull distribution. Processes 2022, 10, 226. [Google Scholar] [CrossRef]
  18. Lio, Y.L.; Tsai, T.-R.; Wang, L.; Cecilio Tejada, I.P. Inferences of the Multicomponent Stress-Strength Reliability for Burr XII Distributions. Mathematics 2022, 10, 2478. [Google Scholar] [CrossRef]
  19. Sauer, L.; Lio, Y.; Tsai, T.-R. Reliability inference for the multicomponent system based on progressively type II censoring samples from generalized Pareto distributions. Mathematics 2020, 8, 1176. [Google Scholar] [CrossRef]
  20. Wang, L.; Lin, H.; Ahmadi, K.; Lio, Y. Estimation of stress-strength reliability for multicomponent system with Rayleigh data. Energies 2021, 14, 7917. [Google Scholar] [CrossRef]
  21. Burr, I.W. Cumulative frequency functions. Ann. Math. Stat. 1942, 13, 215–232. [Google Scholar] [CrossRef]
  22. Belili, M.C.; Alshangiti, A.M.; Gemeay, A.M.; Zeghdoudi, H.; Karakaya, K.; Bakr, M.E.; Balogun, O.S.; Atchade, M.N.; Hussam, E. Two-parameter family of distributions: Properties, estimation, and applications. AIP Adv. 2023, 13, 105008. [Google Scholar] [CrossRef]
  23. Yousof, H.M.; Afify, A.Z.; Hamedani, G.G.; Aryal, G. The Burr X generator of distributions for lifetime data. J. Stat. Theory Appl. 2017, 16, 288–305. [Google Scholar] [CrossRef]
  24. Jamal, F.; Nasir, M.A. Generalized Burr X family of distributions. Int. J. Math. Stat. 2018, 19, 55–73. [Google Scholar]
  25. Jaheen, Z.F. Empirical Bayes estimation of the reliability and failure rate functions of the Burr type X failure model. J. Appl. Stat. Sci. 1996, 3, 281–288. [Google Scholar]
  26. Ahmad, K.E.; Fakhry, M.E.; Jaheen, Z.F. Empirical Bayes estimation of P(Y < X) and characterization of Burr-type X model. J. Stat. Plan. Inference 1997, 64, 297–308. [Google Scholar]
  27. Akgul, F.G.; Senoglu, B. Inferences for stress-strength reliability of Burr type X distributions based on ranked set sampling. Commun. Stat.—Simul. Comput. 2022, 51, 3324–3340. [Google Scholar] [CrossRef]
  28. Efron, B. Jackknife, Bootstrap, Other Resampling Plans. In CBMS-NSF Regional Conference Series in Applied Mathematics; SIAM: Philadelphia, PA, USA, 1982; Volume 38. [Google Scholar]
  29. Hall, P. Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 1988, 16, 927–953. [Google Scholar] [CrossRef]
  30. Xu, J.; Long, J.S. Using the Delta Method to Construct Confidence Intervals for Predicted Probabilities, Rates, and Discrete Changes; Lecture Notes; Indiana University: Bloomington, IN, USA, 2005. [Google Scholar]
  31. Weerahandi, S. Generalized Inference in Repeated Measures: Exact Methods in MANOVA and Mixed Models; Wiley: New York, NY, USA, 2004. [Google Scholar]
  32. Cherstvy, A.G.; Thapa, S.; Wagner, C.E.; Metzler, R. Non-Gaussian, non-ergodic, and non-Fickian diffusion of tracers in mucin hydrogels. Soft Matter 2019, 15, 2526–2551. [Google Scholar] [CrossRef]
  33. Thapa, S.; Park, S.; Kim, Y.; Jeon, J.-H.; Metzler, R.; Lomholt, M.A. Bayesian inference of scaled versus fractional Brownian motion. J. Phys. A Math. Theor. 2022, 55, 194003. [Google Scholar] [CrossRef]
  34. Krog, J.; Lomholt, M.A. Bayesian inference with information content model check for Langevin equations. Phys. Rev. E 2017, 96, 062106. [Google Scholar] [CrossRef] [PubMed]
  35. Viveros, R.; Balakrishnan, N. Interval estimation of parameters of life from progressively censored data. Technometrics 1994, 36, 84–91. [Google Scholar] [CrossRef]
  36. Lawless, J.F. Statistical Models and Methods for Lifetime Data, 2nd ed.; Wiley: New York, NY, USA, 2003. [Google Scholar]
  37. Stephens, M.A. Tests for the exponential distribution. In Goodness-of-Fit Techniques; D’Agostino, R.B., Stephens, M.A., Eds.; Marcel Dekker: New York, NY, USA, 1986; pp. 421–459. [Google Scholar]
Figure 1. The step function is the sample empirical distribution and the smooth curve is the fitted Burr X distribution. The left panel is for the strength data modeled with BurrX(0.18, 3.47); the right panel is for the stress data modeled with BurrX(0.13, 13.10).
Figure 2. Sample cumulative probability versus Burr X cumulative probability, overlaid with the fitted regression line. The left panel is for the strength data modeled with BurrX(0.18, 3.47), with fitted regression line y = 0.0496 + 0.9161x and R² = 0.97; the right panel is for the stress data modeled with BurrX(0.13, 13.10), with fitted regression line y = 0.11 + 0.90x and R² = 0.91.
Figure 3. Sample quantile versus Burr X quantile, overlaid with the fitted regression line. The left panel is for the strength data modeled with BurrX(0.18, 3.47), with fitted regression line y = 0.032 + 0.94x and R² = 0.93; the right panel is for the stress data modeled with BurrX(0.13, 13.10), with fitted regression line y = 0.037 + 0.94x and R² = 0.89.
Table 1. The estimation results for $R_{s,k}$ using the collected data from the 3-out-of-5 G system.

Case $\lambda_1 = \lambda_2$:
  Point estimates: $\hat{R}_{s,k} = 0.6937$, $\acute{R}_{s,k} = 0.7398$, $\acute{R}_{s,k}^{F} = 0.7836$
  Interval estimates: ACI = (0.4994, 0.9635), GCI = (0.4987, 0.9506), BCI = (0.4455, 0.8923)

Case $\lambda_1 \neq \lambda_2$:
  Point estimates: $\check{R}_{s,k} = 0.6336$, $\grave{R}_{s,k} = 0.4512$, $\grave{R}_{s,k}^{F} = 0.4627$
  Interval estimates: ACI = (0.4332, 0.9267), GCI = (0.1976, 0.7114), BCI = (0.3358, 0.8470)
