Article

Bayesian Inference for the Parameters of Kumaraswamy Distribution via Ranked Set Sampling

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(7), 1170; https://doi.org/10.3390/sym13071170
Submission received: 4 June 2021 / Revised: 22 June 2021 / Accepted: 25 June 2021 / Published: 29 June 2021

Abstract

In this paper, we address the estimation of the parameters of the two-parameter Kumaraswamy distribution using the maximum likelihood and Bayesian methods based on simple random sampling, ranked set sampling, and maximum ranked set sampling with unequal samples. Both symmetric and asymmetric Bayes loss functions are used. The Metropolis-Hastings-within-Gibbs algorithm is employed to compute the Bayes point estimates and credible intervals. We conduct a simulation experiment to compare the proposed point estimators in terms of bias, estimated risk, and relative efficiency, and to evaluate the interval estimators in terms of average confidence interval length and coverage percentage. Finally, a real-life example and concluding remarks are presented.

1. Introduction

The Kumaraswamy distribution was proposed for double-bounded random processes by [1]. The cumulative distribution function (CDF) and the corresponding probability density function (PDF) can be expressed as
F(x; θ, β) = 1 − (1 − x^θ)^β, 0 < x < 1; θ, β > 0
and:
f(x; θ, β) = θβ x^{θ−1} (1 − x^θ)^{β−1}, 0 < x < 1; θ, β > 0,
respectively.
For simplicity, we denote the Kumaraswamy distribution with two positive parameters θ and β as K(θ, β). Depending on the values of θ and β, the Kumaraswamy and Beta distributions have similar shape properties. However, the former is superior to the latter in some respects: neither K(θ, β) nor its quantile function involves any special functions; the generation of random variables is simple; and the L-moments and the moments of order statistics for K(θ, β) have simple formulas (see [2]). For the PDF of K(θ, β), as shown on the left of Figure 1, when θ > 1 and β > 1, it is unimodal; when θ > 1 and β ≤ 1, it is increasing; when θ ≤ 1 and β > 1, it is decreasing; when θ < 1 and β < 1, it is uniantimodal; and when θ = β = 1, it is constant. The CDF of K(θ, β) (plotted on the right of Figure 1) has an explicit expression, while the CDF of the Beta distribution takes an integral form. Therefore, the Kumaraswamy distribution is considered a practical substitute for the Beta distribution. For instance, the Kumaraswamy distribution fits daily rainfall, individual heights, unemployment numbers, and ball bearing revolutions well (see [3,4,5,6]).
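Since these tractability properties are used repeatedly below, a minimal R sketch of the density, CDF, quantile function, and inverse-transform random generation may be helpful (the function names dkumar, pkumar, qkumar, and rkumar are illustrative choices of ours, not from the paper):
  dkumar <- function(x, theta, beta) theta * beta * x^(theta - 1) * (1 - x^theta)^(beta - 1)  # PDF f(x; theta, beta)
  pkumar <- function(x, theta, beta) 1 - (1 - x^theta)^beta                                   # CDF F(x; theta, beta)
  qkumar <- function(u, theta, beta) (1 - (1 - u)^(1 / beta))^(1 / theta)                     # closed-form quantile function
  rkumar <- function(n, theta, beta) qkumar(runif(n), theta, beta)                            # inverse-transform sampling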
Recently, the Kumaraswamy distribution has drawn much academic attention. The authors of Ref. [7] obtained the estimators of the parameters of the Kumaraswamy distribution based on progressive type-II censored data using Bayesian and non-Bayesian methods; the authors of Ref. [8] studied the estimation of the parameters of this distribution under hybrid censoring; the authors of Ref. [9] further discussed the estimation of the stress–strength parameters under a combination of progressive type-II and hybrid censoring schemes. The authors of Ref. [10] then focused on this distribution under type-I progressive hybrid censored samples. The authors of Ref. [11] discussed the estimation of the parameters of the Kumaraswamy distribution under generalized progressive hybrid censored samples. On the basis of the Kumaraswamy distribution, a range of new distributions has been proposed, such as the Kumaraswamy Birnbaum–Saunders distribution due to [12], the generalized inverted Kumaraswamy distribution due to [13], and the transmuted Kumaraswamy distribution due to [14].
Ranked set sampling (RSS), first introduced by [15], is considered more effective than simple random sampling (SRS) for estimating the mean of a distribution (see [16]). RSS is appropriate in fields where sampling is costly in labor and money, and it is widely applied in the environment, biology, engineering, and agriculture (see [17,18,19,20]). Recently, the authors of Ref. [21] proposed a new sampling design known as maximum ranked set sampling with unequal samples (MRSSU).
Two difficulties are overcome in this paper: (a) the maximum likelihood equations do not admit closed-form solutions [22], so we adopt the Newton–Raphson iterative algorithm to derive approximate numerical roots; (b) the integrals required for the Bayes estimates cannot be expressed in closed form [23], so we choose the Metropolis-Hastings-within-Gibbs algorithm to obtain the estimates [24,25,26,27,28].
The organization of this paper is as follows. In Section 2, Section 3 and Section 4, we consider point and interval estimation for the Kumaraswamy distribution based on three sampling schemes, namely SRS, RSS, and MRSSU, and two estimation methods, maximum likelihood and Bayes. Section 5 compares the performance of the proposed estimators and Section 6 presents an illustrative example. The paper concludes with some comments in Section 7.

2. Estimations of the Parameters Using SRS

2.1. Maximum Likelihood Estimations Based on SRS

Suppose that X = (X₁, X₂, …, X_s) is an SRS of size s from K(θ, β). The likelihood function is given by
L₁(θ, β) = θ^s β^s ∏_{i=1}^{s} x_i^{θ−1} (1 − x_i^θ)^{β−1},
where x̃ = (x₁, x₂, …, x_s) is the observed sample of X. The corresponding log-likelihood function is:
l₁(θ, β) = s ln θ + s ln β + (θ − 1) Σ_{i=1}^{s} ln x_i + (β − 1) Σ_{i=1}^{s} ln(1 − x_i^θ). (3)
Calculate the first derivatives of (3) with respect to θ and β and equate them to zero. The maximum likelihood estimates (MLEs) of θ and β can be obtained by solving the following equations:
∂l₁(θ, β)/∂θ = s/θ + Σ_{i=1}^{s} ln x_i − (β − 1) Σ_{i=1}^{s} [x_i^θ ln x_i / (1 − x_i^θ)] = 0,
∂l₁(θ, β)/∂β = s/β + Σ_{i=1}^{s} ln(1 − x_i^θ) = 0.
We obtain the MLE of β and denote it as
β̂_MLE = −s / Σ_{i=1}^{s} ln(1 − x_i^θ). (6)
According to (6), β̂_MLE exists and is unique for fixed θ. Substituting (6) into (3), the profile log-likelihood function of θ can be expressed as
P(θ) = s ln θ + s ln[−s / Σ_{i=1}^{s} ln(1 − x_i^θ)] + (θ − 1) Σ_{i=1}^{s} ln x_i − Σ_{i=1}^{s} ln(1 − x_i^θ) − s. (7)
θ̂_MLE, the MLE of θ, can be obtained by maximizing (7). Since explicit solutions for θ and β are difficult to determine, the Newton–Raphson iterative algorithm is adopted to obtain the MLEs.
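As a minimal R sketch of this profile-likelihood approach (the paper uses Newton–Raphson; for brevity we use R's built-in one-dimensional optimizer, and the search interval is our assumption):
  kumar_mle_srs <- function(x, interval = c(1e-6, 50)) {
    s <- length(x)
    profile <- function(theta) {                  # the profile log-likelihood P(theta) in (7)
      D <- sum(log(1 - x^theta))
      s * log(theta) + s * log(-s / D) + (theta - 1) * sum(log(x)) - D - s
    }
    theta_hat <- optimize(profile, interval, maximum = TRUE)$maximum
    beta_hat  <- -s / sum(log(1 - x^theta_hat))   # the MLE of beta in (6)
    c(theta = theta_hat, beta = beta_hat)
  }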
Theorem 1.
θ̂_MLE exists and is unique.
Proof of Theorem 1.
Calculate the first and second derivatives of (7) with respect to θ:
P′(θ) = (s/θ)[1 + D₁/D₂ + D₃], (8)
P″(θ) = −(s/θ²){E₁ + E₂ / [Σ_{i=1}^{s} ln(1 − x_i^θ)]²},
where:
D₁ = Σ_{i=1}^{s} [x_i^θ ln x_i^θ / (1 − x_i^θ)], D₂ = Σ_{i=1}^{s} ln(1 − x_i^θ), D₃ = (1/s) Σ_{i=1}^{s} [ln x_i^θ / (1 − x_i^θ)], E₁ = 1 − (1/s) Σ_{i=1}^{s} [x_i^θ (ln x_i^θ)² / (1 − x_i^θ)²], E₂ = −Σ_{i=1}^{s} [x_i^θ (ln x_i^θ)² / (1 − x_i^θ)²] · Σ_{i=1}^{s} ln(1 − x_i^θ) − {Σ_{i=1}^{s} [x_i^θ ln x_i^θ / (1 − x_i^θ)]}².
From (8), lim_{θ→0⁺} s/θ = +∞, lim_{θ→0⁺} D₁/D₂ > 0 and lim_{θ→0⁺} D₃ = −1, which together imply that P′(θ) → +∞ as θ → 0⁺. Meanwhile, lim_{θ→+∞} s/θ = 0, lim_{θ→+∞} (s/θ)(D₁/D₂) = Σ_{i=1}^{s} ln x_i and lim_{θ→+∞} (s/θ)D₃ = Σ_{i=1}^{s} ln x_i, so that P′(θ) → 2 Σ_{i=1}^{s} ln x_i < 0 as θ → +∞. According to the Cauchy–Schwarz inequality, since:
Σ_{i=1}^{s} [x_i^θ (ln x_i^θ)² / (1 − x_i^θ)²] · Σ_{i=1}^{s} [−ln(1 − x_i^θ)] ≥ {Σ_{i=1}^{s} x_i^{θ/2} [−ln x_i^θ / (1 − x_i^θ)] [−ln(1 − x_i^θ)]^{1/2}}² > {Σ_{i=1}^{s} [x_i^θ ln x_i^θ / (1 − x_i^θ)]}²,
it follows that E₂ > 0. Together with E₁ > 0, we conclude that P″(θ) < 0, so (7) is concave. The result is proven. □
We then consider the confidence intervals (CIs) under the SRS scheme based on maximum likelihood estimation. The second derivatives of (3) can be expressed as
∂²l₁(θ, β)/∂θ² = −s/θ² − Σ_{i=1}^{s} (β − 1) x_i^θ (ln x_i)² / (1 − x_i^θ)², ∂²l₁(θ, β)/∂β² = −s/β², ∂²l₁(θ, β)/∂θ∂β = ∂²l₁(θ, β)/∂β∂θ = −Σ_{i=1}^{s} [x_i^θ ln x_i / (1 − x_i^θ)].
Denote η = ( θ , β ) , and the expected Fisher information matrix I ( η ) is given by
I(η) = [I₁₁ I₁₂; I₂₁ I₂₂] = −E[∂²l₁(θ, β)/∂θ²  ∂²l₁(θ, β)/∂θ∂β; ∂²l₁(θ, β)/∂β∂θ  ∂²l₁(θ, β)/∂β²].
Here:
I₁₁ = s/θ² + sθ(β² − β) ∫₀¹ x^{2θ−1} (1 − x^θ)^{β−3} (ln x)² dx, I₂₂ = s/β², I₁₂ = I₂₁ = sθβ ∫₀¹ x^{2θ−1} (1 − x^θ)^{β−2} ln x dx.
Let η̂_MLE.SRS be the MLE of η based on SRS. The inverse of I(η̂) is the approximate variance–covariance matrix of the MLEs:
I⁻¹(η̂) = [Var(θ̂_MLE.SRS)  Cov(θ̂_MLE.SRS, β̂_MLE.SRS); Cov(β̂_MLE.SRS, θ̂_MLE.SRS)  Var(β̂_MLE.SRS)].
The asymptotic distribution of η̂_MLE.SRS is (η̂_MLE.SRS − η) ~ N(0, I⁻¹(η̂_MLE.SRS)). Thus, we derive the two-sided 100(1 − τ)% approximate confidence intervals (ACIs) for θ and β.
Theorem 2.
The two-sided 100(1 − τ)% ACIs for θ and β based on SRS are:
[max(0, θ̂_MLE.SRS − u_{τ/2} √Var(θ̂_MLE.SRS)), θ̂_MLE.SRS + u_{τ/2} √Var(θ̂_MLE.SRS)]
and:
[max(0, β̂_MLE.SRS − u_{τ/2} √Var(β̂_MLE.SRS)), β̂_MLE.SRS + u_{τ/2} √Var(β̂_MLE.SRS)],
respectively.
Here, u_τ is the (1 − τ)-th quantile of the standard normal distribution N(0, 1). Since the lower bound of a CI may be less than zero while θ and β are greater than zero, we set the lower bound to the larger of the computed lower bound and zero.
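A minimal R sketch of the ACI computation follows; as an assumption on our part, it replaces the integrals in I(η) with a numerically computed observed information (the negative Hessian of the log-likelihood at the MLEs):
  kumar_aci_srs <- function(x, theta_hat, beta_hat, tau = 0.10) {
    negll <- function(p)                          # negative log-likelihood -l1(theta, beta)
      -sum(log(p[1]) + log(p[2]) + (p[1] - 1) * log(x) + (p[2] - 1) * log(1 - x^p[1]))
    H  <- optim(c(theta_hat, beta_hat), negll, hessian = TRUE)$hessian
    se <- sqrt(diag(solve(H)))                    # asymptotic standard errors of the MLEs
    z  <- qnorm(1 - tau / 2)                      # the quantile u_{tau/2}
    rbind(theta = c(max(0, theta_hat - z * se[1]), theta_hat + z * se[1]),
          beta  = c(max(0, beta_hat  - z * se[2]), beta_hat  + z * se[2]))
  }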

2.2. Bayes Estimations Based on SRS

In this section, we focus on Bayesian inference of θ and β based on SRS, and obtain Bayesian estimates (BEs) and credible intervals (CrIs). Let us consider the following independent gamma priors:
π₁(θ) = [μ₁^{v₁} / Γ(v₁)] θ^{v₁−1} e^{−μ₁θ}, (11)
π₂(β) = [μ₂^{v₂} / Γ(v₂)] β^{v₂−1} e^{−μ₂β}. (12)
The joint posterior distribution of ( θ , β ) can be expressed as
Π₁(θ, β | x̃) = [θ^{s+v₁−1} β^{s+v₂−1} / C₁] e^{−μ₁θ−μ₂β} ∏_{i=1}^{s} x_i^{θ−1} (1 − x_i^θ)^{β−1}, (13)
where:
C₁ = ∫₀^∞ ∫₀^∞ θ^{s+v₁−1} β^{s+v₂−1} e^{−μ₁θ−μ₂β} ∏_{i=1}^{s} x_i^{θ−1} (1 − x_i^θ)^{β−1} dθ dβ.
The conditional posterior densities of θ and β are written as
Π₁(θ | β, x̃) ∝ θ^{s+v₁−1} e^{−μ₁θ} ∏_{i=1}^{s} x_i^{θ−1} (1 − x_i^θ)^{β−1}
and:
Π₁(β | θ, x̃) ∝ β^{s+v₂−1} e^{−μ₂β} ∏_{i=1}^{s} (1 − x_i^θ)^{β−1}.
Generally, Bayes point estimators are obtained by minimizing the expected loss. Here, we consider two typical loss functions. The first is the squared error (SE) loss function, which is symmetric. Let d̂ be an estimator of d; the SE loss function and the corresponding Bayes estimator are given by
L_SE(d, d̂) = (d − d̂)²
and:
d̂_BE.SE = E_d(d | x̃).
Symmetric loss functions assign equal weight to overestimation and underestimation, which has motivated many asymmetric alternatives. The second is the general entropy (GE) loss function, an asymmetric loss proposed by [29] and defined as
L_GE(d, d̂) ∝ (d̂/d)^k − k ln(d̂/d) − 1, k ≠ 0.
The corresponding Bayes estimator is:
d̂_BE.GE = [E_d(d^{−k} | x̃)]^{−1/k}.
Here, positive errors (d̂ > d) are penalized more heavily than negative errors when k > 0, and vice versa when k < 0.
Based on the SE and GE loss functions, the Bayes point estimators of θ are given by
θ̂_BE.SE.SRS = ∫₀^∞ ∫₀^∞ [θ^{s+v₁} β^{s+v₂−1} / C₁] e^{−μ₁θ−μ₂β} ∏_{i=1}^{s} x_i^{θ−1} (1 − x_i^θ)^{β−1} dθ dβ
and:
θ̂_BE.GE.SRS = {∫₀^∞ ∫₀^∞ [θ^{s+v₁−1−k} β^{s+v₂−1} / C₁] e^{−μ₁θ−μ₂β} ∏_{i=1}^{s} x_i^{θ−1} (1 − x_i^θ)^{β−1} dθ dβ}^{−1/k},
provided that the expectations exist; the Bayes estimators of β can be obtained in a similar way. Clearly, the above integrals do not admit explicit solutions. Therefore, we employ the Metropolis-Hastings-within-Gibbs (MHG) algorithm to generate samples from the posterior distribution (13) and compute the approximate BEs and the 100(1 − γ)% symmetric CrIs. More information about this algorithm can be found in [30].
The logarithmic calculations of R₁ and R₂ in Algorithm 1 keep the quantities on a manageable scale and prevent numerical overflow during program execution.
We can obtain the approximated Bayes point estimates of θ and β under the SE and GE loss functions as
θ̂_BE.SE.SRS = [1/(M − H)] Σ_{t=H+1}^{M} θ^{(t)}
and:
θ̂_BE.GE.SRS = {[1/(M − H)] Σ_{t=H+1}^{M} (θ^{(t)})^{−k}}^{−1/k},
where H is the burn-in period (the number of iterations required to reach the stationary distribution). β̂_BE.SE.SRS and β̂_BE.GE.SRS can be obtained in a similar way.
Order θ^{(H+1)}, …, θ^{(M)} and β^{(H+1)}, …, β^{(M)} as θ_{(1)}, …, θ_{(M−H)} and β_{(1)}, …, β_{(M−H)}. Then, the 100(1 − γ)% symmetric Bayes CrIs of θ and β are given by
[θ_{((γ/2)(M−H))}, θ_{((1−γ/2)(M−H))}]
and:
[β_{((γ/2)(M−H))}, β_{((1−γ/2)(M−H))}],
respectively.
Algorithm 1 The Metropolis-Hastings-within-Gibbs algorithm
Input: M: total number of iterations.
Output: θ^{(1)}, …, θ^{(M)} and β^{(1)}, …, β^{(M)}.
  Initialize θ^{(0)} = θ̂_MLE, β^{(0)} = β̂_MLE and T = 1.
  while T ≤ M do
    Generate θ̃ from N(θ^{(T−1)}, σ₁) and β̃ from N(β^{(T−1)}, σ₂), two normal distributions, where σ₁ and σ₂ are taken from the diagonal elements of the variance–covariance matrix.
    Generate u₁ from the uniform distribution U(0, 1).
    R₁ = log Π₁(θ̃ | β^{(T−1)}, x̃) + log q₁(θ^{(T−1)} | θ̃) − log Π₁(θ^{(T−1)} | β^{(T−1)}, x̃) − log q₁(θ̃ | θ^{(T−1)}), where q₁(x | y) is the probability density of N(y, σ₁).
    if log u₁ < min{R₁, 0} then
      θ^{(T)} = θ̃
    else
      θ^{(T)} = θ^{(T−1)}
    Generate u₂ from the uniform distribution U(0, 1).
    R₂ = log Π₁(β̃ | θ^{(T)}, x̃) + log q₂(β^{(T−1)} | β̃) − log Π₁(β^{(T−1)} | θ^{(T)}, x̃) − log q₂(β̃ | β^{(T−1)}), where q₂(x | y) is the probability density of N(y, σ₂).
    if log u₂ < min{R₂, 0} then
      β^{(T)} = β̃
    else
      β^{(T)} = β^{(T−1)}
    T = T + 1
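A minimal R sketch of Algorithm 1 for the SRS posterior (13) is given below (our own illustration, not the authors' code). Because the normal proposal density is symmetric, the q-terms in R₁ and R₂ cancel and are dropped; the hyperparameters, proposal scales sigma1 and sigma2, and the starting values init must be supplied by the user.
  log_post_theta <- function(theta, beta, x, mu1, v1) {   # log conditional posterior kernel of theta
    if (theta <= 0) return(-Inf)
    (length(x) + v1 - 1) * log(theta) - mu1 * theta +
      (theta - 1) * sum(log(x)) + (beta - 1) * sum(log(1 - x^theta))
  }
  log_post_beta <- function(beta, theta, x, mu2, v2) {    # log conditional posterior kernel of beta
    if (beta <= 0) return(-Inf)
    (length(x) + v2 - 1) * log(beta) - mu2 * beta + (beta - 1) * sum(log(1 - x^theta))
  }
  mhg_srs <- function(x, M, init, sigma1, sigma2, mu1, v1, mu2, v2) {
    th <- init[1]; be <- init[2]
    theta <- numeric(M); beta <- numeric(M)
    for (t in 1:M) {
      th_new <- rnorm(1, th, sigma1)                      # propose theta; accept with probability min{exp(R1), 1}
      if (log(runif(1)) < log_post_theta(th_new, be, x, mu1, v1) -
                          log_post_theta(th, be, x, mu1, v1)) th <- th_new
      be_new <- rnorm(1, be, sigma2)                      # propose beta; accept with probability min{exp(R2), 1}
      if (log(runif(1)) < log_post_beta(be_new, th, x, mu2, v2) -
                          log_post_beta(be, th, x, mu2, v2)) be <- be_new
      theta[t] <- th; beta[t] <- be
    }
    list(theta = theta, beta = beta)
  }
After discarding a burn-in of H draws, mean(out$theta[-(1:H)]) gives θ̂_BE.SE.SRS, mean(out$theta[-(1:H)]^(-k))^(-1/k) gives θ̂_BE.GE.SRS, and quantile(out$theta[-(1:H)], c(γ/2, 1 − γ/2)) gives the symmetric CrI.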

3. Estimations of the Parameters Using RSS

3.1. Maximum Likelihood Estimations Based on RSS

The RSS scheme samples and observes data using a ranking mechanism. First, we draw m random sample sets, each of size m, from K(θ, β) and sort each set from smallest to largest, meaning that:
X_1(1:m)^(i) ≤ X_1(2:m)^(i) ≤ ⋯ ≤ X_1(m:m)^(i)
X_2(1:m)^(i) ≤ X_2(2:m)^(i) ≤ ⋯ ≤ X_2(m:m)^(i)
⋮
X_m(1:m)^(i) ≤ X_m(2:m)^(i) ≤ ⋯ ≤ X_m(m:m)^(i).
Here, X_j(p:m)^(i) denotes the p-th ranked unit of the j-th set in the i-th cycle (i = 1, 2, …, n; j = 1, 2, …, m). Then, the j-th ordered unit of the j-th set, X_j(j:m)^(i), is chosen for actual measurement. After n cycles of the above process, we end up with a complete RSS of size mn, expressed as
X_1(1:m)^(1), X_2(2:m)^(1), …, X_m(m:m)^(1)
X_1(1:m)^(2), X_2(2:m)^(2), …, X_m(m:m)^(2)
⋮
X_1(1:m)^(n), X_2(2:m)^(n), …, X_m(m:m)^(n),
where m is the set size and n is the number of cycles.
To simplify the notation, we use X_ij to denote X_j(j:m)^(i). Let X = (X₁₁, X₁₂, …, X_nm). The likelihood function is given by
L₂(θ, β) = θ^{mn} β^{mn} ∏_{i=1}^{n} ∏_{j=1}^{m} [m! / ((j−1)!(m−j)!)] x_ij^{θ−1} (1 − x_ij^θ)^{(1+m−j)β−1} [1 − (1 − x_ij^θ)^β]^{j−1},
where x = (x₁₁, x₁₂, …, x_nm) is the observed sample of X, and the log-likelihood function is:
l₂(θ, β) = mn ln θ + mn ln β + Σ_{i=1}^{n} Σ_{j=1}^{m} ln[m! / ((j−1)!(m−j)!)] + (θ − 1) Σ_{i=1}^{n} Σ_{j=1}^{m} ln x_ij + Σ_{i=1}^{n} Σ_{j=1}^{m} [(1 + m − j)β − 1] ln(1 − x_ij^θ) + Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) ln[1 − (1 − x_ij^θ)^β]. (14)
Calculate the first derivatives of (14) with respect to θ and β and equate them to zero. The MLEs of θ and β can be obtained by solving the following system of equations:
∂l₂(θ, β)/∂θ = mn/θ + Σ_{i=1}^{n} Σ_{j=1}^{m} ln x_ij − Σ_{i=1}^{n} Σ_{j=1}^{m} [(1 + m − j)β − 1] x_ij^θ ln x_ij / (1 − x_ij^θ) + Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) β x_ij^θ (1 − x_ij^θ)^{β−1} ln x_ij / [1 − (1 − x_ij^θ)^β] = 0,
∂l₂(θ, β)/∂β = mn/β + Σ_{i=1}^{n} Σ_{j=1}^{m} (1 + m − j) ln(1 − x_ij^θ) − Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) (1 − x_ij^θ)^β ln(1 − x_ij^θ) / [1 − (1 − x_ij^θ)^β] = 0.
The second derivatives of (14) are expressed as
∂²l₂(θ, β)/∂θ² = −mn/θ² − Σ_{i=1}^{n} Σ_{j=1}^{m} [(1 + m − j)β − 1] x_ij^{2θ} (ln x_ij)² / (1 − x_ij^θ)² − Σ_{i=1}^{n} Σ_{j=1}^{m} [(1 + m − j)β − 1] x_ij^θ (ln x_ij)² / (1 − x_ij^θ) − Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) β² x_ij^{2θ} (1 − x_ij^θ)^{2β−2} (ln x_ij)² / [1 − (1 − x_ij^θ)^β]² + Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) β x_ij^θ (1 − x_ij^θ)^{β−1} (ln x_ij)² / [1 − (1 − x_ij^θ)^β] − Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1)(β² − β) x_ij^{2θ} (1 − x_ij^θ)^{β−2} (ln x_ij)² / [1 − (1 − x_ij^θ)^β],
∂²l₂(θ, β)/∂β² = −mn/β² − Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) (1 − x_ij^θ)^{2β} [ln(1 − x_ij^θ)]² / [1 − (1 − x_ij^θ)^β]² − Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) (1 − x_ij^θ)^β [ln(1 − x_ij^θ)]² / [1 − (1 − x_ij^θ)^β],
∂²l₂(θ, β)/∂θ∂β = ∂²l₂(θ, β)/∂β∂θ = −Σ_{i=1}^{n} Σ_{j=1}^{m} (1 + m − j) x_ij^θ ln x_ij / (1 − x_ij^θ) + Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) β x_ij^θ (1 − x_ij^θ)^{2β−1} ln x_ij ln(1 − x_ij^θ) / [1 − (1 − x_ij^θ)^β]² + Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) [1 + β ln(1 − x_ij^θ)] x_ij^θ (1 − x_ij^θ)^{β−1} ln x_ij / [1 − (1 − x_ij^θ)^β].
We can derive the expected Fisher information matrix I ( η ) , where:
I(η) = −E[∂²l₂(θ, β)/∂θ²  ∂²l₂(θ, β)/∂θ∂β; ∂²l₂(θ, β)/∂β∂θ  ∂²l₂(θ, β)/∂β²].
Because the integral form of matrix I ( η ) is difficult to calculate, the observed Fisher information matrix J ( η ) is used to replace matrix I ( η ) , where:
J(η) = −[∂²l₂(θ, β)/∂θ²  ∂²l₂(θ, β)/∂θ∂β; ∂²l₂(θ, β)/∂β∂θ  ∂²l₂(θ, β)/∂β²].
Let η̂_MLE.RSS be the MLE of η. The inverse of J(η̂) is the approximate variance–covariance matrix of the MLEs:
J⁻¹(η̂) = [Var(θ̂_MLE.RSS)  Cov(θ̂_MLE.RSS, β̂_MLE.RSS); Cov(β̂_MLE.RSS, θ̂_MLE.RSS)  Var(β̂_MLE.RSS)].
Theorem 3.
The two-sided 100(1 − τ)% ACIs for θ and β based on RSS are given by
[max(0, θ̂_MLE.RSS − u_{τ/2} √Var(θ̂_MLE.RSS)), θ̂_MLE.RSS + u_{τ/2} √Var(θ̂_MLE.RSS)]
and:
[max(0, β̂_MLE.RSS − u_{τ/2} √Var(β̂_MLE.RSS)), β̂_MLE.RSS + u_{τ/2} √Var(β̂_MLE.RSS)],
respectively.

3.2. Bayes Estimations Based on RSS

We still use independent gamma priors (11) and (12). The joint posterior distribution of ( θ , β ) is:
Π₂(θ, β | x) = [θ^{mn+v₁−1} β^{mn+v₂−1} / C₂] e^{−μ₁θ−μ₂β} ∏_{i=1}^{n} ∏_{j=1}^{m} x_ij^{θ−1} (1 − x_ij^θ)^{(1+m−j)β−1} [1 − (1 − x_ij^θ)^β]^{j−1}, (18)
where C₂ = ∫₀^∞ ∫₀^∞ θ^{mn+v₁−1} β^{mn+v₂−1} e^{−μ₁θ−μ₂β} ∏_{i=1}^{n} ∏_{j=1}^{m} x_ij^{θ−1} (1 − x_ij^θ)^{(1+m−j)β−1} [1 − (1 − x_ij^θ)^β]^{j−1} dθ dβ.
The conditional posterior densities of θ , β are given by
Π₂(θ | β, x) ∝ θ^{mn+v₁−1} e^{−μ₁θ} ∏_{i=1}^{n} ∏_{j=1}^{m} x_ij^{θ−1} (1 − x_ij^θ)^{(1+m−j)β−1} [1 − (1 − x_ij^θ)^β]^{j−1}
and:
Π₂(β | θ, x) ∝ β^{mn+v₂−1} e^{−μ₂β} ∏_{i=1}^{n} ∏_{j=1}^{m} (1 − x_ij^θ)^{(1+m−j)β−1} [1 − (1 − x_ij^θ)^β]^{j−1}.
Algorithm 1 from Section 2 is employed to generate a large number of samples from the posterior distribution (18). The Bayes point estimators of θ under the SE and GE loss functions are then given by
θ̂_BE.SE.RSS = [1/(M − H)] Σ_{t=H+1}^{M} θ^{(t)}
and:
θ̂_BE.GE.RSS = {[1/(M − H)] Σ_{t=H+1}^{M} (θ^{(t)})^{−k}}^{−1/k}.
Order the sample θ^{(H+1)}, θ^{(H+2)}, …, θ^{(M)} as θ_{(1)}, θ_{(2)}, …, θ_{(M−H)}. Then, the symmetric Bayes CrI of θ is given by
[θ_{((γ/2)(M−H))}, θ_{((1−γ/2)(M−H))}].
Similarly, we can derive the Bayes point estimators and symmetric Bayes CrI for β .

4. Estimations of the Parameters Using MRSSU

4.1. Maximum Likelihood Estimations Based on MRSSU

First, we select m random sample sets from the population, where the j-th set contains j units (j = 1, 2, …, m), and sort the units in each set from smallest to largest, meaning that:
X_1(1:1)^(i); X_2(1:2)^(i) ≤ X_2(2:2)^(i); …; X_m(1:m)^(i) ≤ X_m(2:m)^(i) ≤ ⋯ ≤ X_m(m:m)^(i).
Here, X_j(p:j)^(i) denotes the p-th ranked unit of the j-th set (which contains j units) in the i-th cycle (i = 1, 2, …, n; j = 1, 2, …, m). In each cycle, the total number of units drawn is m(1 + m)/2.
Then, the maximum element of each sample set, X_j(j:j)^(i), is chosen for actual measurement. After n cycles of the above process, we end up with a complete MRSSU of size mn, expressed as
X_1(1:1)^(1), X_2(2:2)^(1), …, X_m(m:m)^(1)
X_1(1:1)^(2), X_2(2:2)^(2), …, X_m(m:m)^(2)
⋮
X_1(1:1)^(n), X_2(2:2)^(n), …, X_m(m:m)^(n),
where m is the set size and n is the number of cycles.
To simplify the notation, we use Y_ij to denote X_j(j:j)^(i). Let Y = (Y₁₁, Y₁₂, …, Y_nm). The corresponding likelihood function is:
L₃(θ, β) = θ^{mn} β^{mn} ∏_{i=1}^{n} ∏_{j=1}^{m} j y_ij^{θ−1} (1 − y_ij^θ)^{β−1} [1 − (1 − y_ij^θ)^β]^{j−1},
where ỹ = (y₁₁, y₁₂, …, y_nm) is the observation of Y, and the log-likelihood function is:
l₃(θ, β) = mn ln θ + mn ln β + Σ_{i=1}^{n} Σ_{j=1}^{m} ln j + (θ − 1) Σ_{i=1}^{n} Σ_{j=1}^{m} ln y_ij + (β − 1) Σ_{i=1}^{n} Σ_{j=1}^{m} ln(1 − y_ij^θ) + Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) ln[1 − (1 − y_ij^θ)^β]. (19)
Calculate the first derivatives of (19) with respect to θ and β and equate them to zero. By solving the following equations:
∂l₃(θ, β)/∂θ = mn/θ + Σ_{i=1}^{n} Σ_{j=1}^{m} ln y_ij − (β − 1) Σ_{i=1}^{n} Σ_{j=1}^{m} y_ij^θ ln y_ij / (1 − y_ij^θ) + Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) β y_ij^θ (1 − y_ij^θ)^{β−1} ln y_ij / [1 − (1 − y_ij^θ)^β] = 0,
∂l₃(θ, β)/∂β = mn/β + Σ_{i=1}^{n} Σ_{j=1}^{m} ln(1 − y_ij^θ) − Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) (1 − y_ij^θ)^β ln(1 − y_ij^θ) / [1 − (1 − y_ij^θ)^β] = 0,
the MLEs of θ and β can be obtained. The second derivatives of (19) are expressed as
∂²l₃(θ, β)/∂θ² = −mn/θ² − (β − 1) Σ_{i=1}^{n} Σ_{j=1}^{m} y_ij^{2θ} (ln y_ij)² / (1 − y_ij^θ)² − (β − 1) Σ_{i=1}^{n} Σ_{j=1}^{m} y_ij^θ (ln y_ij)² / (1 − y_ij^θ) − Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) β² y_ij^{2θ} (1 − y_ij^θ)^{2β−2} (ln y_ij)² / [1 − (1 − y_ij^θ)^β]² + Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) β y_ij^θ (1 − y_ij^θ)^{β−1} (ln y_ij)² / [1 − (1 − y_ij^θ)^β] − Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1)(β² − β) y_ij^{2θ} (1 − y_ij^θ)^{β−2} (ln y_ij)² / [1 − (1 − y_ij^θ)^β],
∂²l₃(θ, β)/∂β² = −mn/β² − Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) (1 − y_ij^θ)^{2β} [ln(1 − y_ij^θ)]² / [1 − (1 − y_ij^θ)^β]² − Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) (1 − y_ij^θ)^β [ln(1 − y_ij^θ)]² / [1 − (1 − y_ij^θ)^β],
∂²l₃(θ, β)/∂θ∂β = ∂²l₃(θ, β)/∂β∂θ = −Σ_{i=1}^{n} Σ_{j=1}^{m} y_ij^θ ln y_ij / (1 − y_ij^θ) + Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) β y_ij^θ (1 − y_ij^θ)^{2β−1} ln y_ij ln(1 − y_ij^θ) / [1 − (1 − y_ij^θ)^β]² + Σ_{i=1}^{n} Σ_{j=1}^{m} (j − 1) [1 + β ln(1 − y_ij^θ)] y_ij^θ (1 − y_ij^θ)^{β−1} ln y_ij / [1 − (1 − y_ij^θ)^β].
Thus, we derive the expected Fisher information matrix I(η), where:
I(η) = −E[∂²l₃(θ, β)/∂θ²  ∂²l₃(θ, β)/∂θ∂β; ∂²l₃(θ, β)/∂β∂θ  ∂²l₃(θ, β)/∂β²].
We adopt the observed Fisher information matrix J ( η ) to replace matrix I ( η ) , where:
J(η) = −[∂²l₃(θ, β)/∂θ²  ∂²l₃(θ, β)/∂θ∂β; ∂²l₃(θ, β)/∂β∂θ  ∂²l₃(θ, β)/∂β²].
Let η̂_MLE.MRSSU be the MLE of η. The inverse of J(η̂) is the approximate variance–covariance matrix of the MLEs:
J⁻¹(η̂) = [Var(θ̂_MLE.MRSSU)  Cov(θ̂_MLE.MRSSU, β̂_MLE.MRSSU); Cov(β̂_MLE.MRSSU, θ̂_MLE.MRSSU)  Var(β̂_MLE.MRSSU)].
Theorem 4.
The two-sided 100(1 − τ)% ACIs for θ and β based on MRSSU are given by
[max(0, θ̂_MLE.MRSSU − u_{τ/2} √Var(θ̂_MLE.MRSSU)), θ̂_MLE.MRSSU + u_{τ/2} √Var(θ̂_MLE.MRSSU)],
[max(0, β̂_MLE.MRSSU − u_{τ/2} √Var(β̂_MLE.MRSSU)), β̂_MLE.MRSSU + u_{τ/2} √Var(β̂_MLE.MRSSU)],
respectively.

4.2. Bayes Estimations Based on MRSSU

We use the same independent gamma priors (11) and (12). The joint posterior distribution of ( θ , β ) is:
Π₃(θ, β | ỹ) = [θ^{mn+v₁−1} β^{mn+v₂−1} / C₃] e^{−μ₁θ−μ₂β} ∏_{i=1}^{n} ∏_{j=1}^{m} y_ij^{θ−1} (1 − y_ij^θ)^{β−1} [1 − (1 − y_ij^θ)^β]^{j−1}, (23)
where:
C₃ = ∫₀^∞ ∫₀^∞ θ^{mn+v₁−1} β^{mn+v₂−1} e^{−μ₁θ−μ₂β} ∏_{i=1}^{n} ∏_{j=1}^{m} y_ij^{θ−1} (1 − y_ij^θ)^{β−1} [1 − (1 − y_ij^θ)^β]^{j−1} dθ dβ.
The conditional posterior densities of θ and β are given by
Π₃(θ | β, ỹ) ∝ θ^{mn+v₁−1} e^{−μ₁θ} ∏_{i=1}^{n} ∏_{j=1}^{m} y_ij^{θ−1} (1 − y_ij^θ)^{β−1} [1 − (1 − y_ij^θ)^β]^{j−1}
and:
Π₃(β | θ, ỹ) ∝ β^{mn+v₂−1} e^{−μ₂β} ∏_{i=1}^{n} ∏_{j=1}^{m} (1 − y_ij^θ)^{β−1} [1 − (1 − y_ij^θ)^β]^{j−1}.
Algorithm 1 is used to generate a large number of samples from the posterior distribution (23). The Bayes point estimators of θ under the SE and GE loss functions are then expressed as
θ̂_BE.SE.MRSSU = [1/(M − H)] Σ_{t=H+1}^{M} θ^{(t)}
and:
θ̂_BE.GE.MRSSU = {[1/(M − H)] Σ_{t=H+1}^{M} (θ^{(t)})^{−k}}^{−1/k}.
Order the sample θ^{(H+1)}, θ^{(H+2)}, …, θ^{(M)} as θ_{(1)}, θ_{(2)}, …, θ_{(M−H)}. The symmetric Bayes CrI of θ is given by
[θ_{((γ/2)(M−H))}, θ_{((1−γ/2)(M−H))}].
Similarly, we can derive the Bayes point estimators and symmetric Bayes CrI for β .

5. Simulation Result

In this section, we compare the SRS, RSS, and MRSSU schemes under different hyperparameter values and sample sizes through the numerical performance of the point and interval estimates. We evaluate the bias and the estimated risks (ERs) under the SE and GE loss functions by the following formulas:
Bias(θ̂) = (1/N) Σ_{t=1}^{N} (θ̂_t − θ), ER_SE(θ̂) = (1/N) Σ_{t=1}^{N} (θ̂_t − θ)², ER_GE(θ̂) = (1/N) Σ_{t=1}^{N} [(θ̂_t/θ)^k − k ln(θ̂_t/θ) − 1].
Here, N denotes the number of simulation iterations, while θ̂_t is the estimate obtained in the t-th iteration. For the three sampling methods, namely SRS, RSS, and MRSSU, we can draw samples from K(θ, β) through Algorithms 2–4, respectively:
Algorithm 2 The random number generator for K(θ, β) based on SRS
Input: θ, β and s.
Output: x₁, x₂, …, x_s.
  Initialize i = 1.
  while i ≤ s do
    Generate u ~ U(0, 1).
    x_i = (1 − u^{1/β})^{1/θ}
    i = i + 1
Algorithm 3 The random number generator for K(θ, β) based on RSS
Input: θ, β, n and m.
Output: x₁₁, x₁₂, …, x_nm.
  Initialize i = 1.
  while i ≤ n do
    Set j = 1.
    while j ≤ m do
      Use Algorithm 2 (with input s = m) to generate x₁, x₂, …, x_m.
      Rank x₁, x₂, …, x_m from smallest to largest to get x_(1), x_(2), …, x_(m).
      x_ij = x_(j).
      j = j + 1.
    i = i + 1.
Algorithm 4 The random number generator for K(θ, β) based on MRSSU
Input: θ, β, n and m.
Output: x₁₁, x₁₂, …, x_nm.
  Initialize i = 1.
  while i ≤ n do
    Set j = 1.
    while j ≤ m do
      Use Algorithm 2 (with input s = j) to generate x₁, x₂, …, x_j.
      Rank x₁, x₂, …, x_j from smallest to largest to get x_(1), x_(2), …, x_(j).
      x_ij = x_(j).
      j = j + 1.
    i = i + 1.
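As minimal R sketches of Algorithms 3 and 4 (reusing the hypothetical rkumar() generator sketched in Section 1), each function below returns the n × m matrix of measured units:
  rss_sample <- function(n, m, theta, beta) {
    t(sapply(1:n, function(i)                  # cycle i
      sapply(1:m, function(j)                  # set j: draw m units, keep the j-th order statistic
        sort(rkumar(m, theta, beta))[j])))
  }
  mrssu_sample <- function(n, m, theta, beta) {
    t(sapply(1:n, function(i)                  # cycle i
      sapply(1:m, function(j)                  # set j: draw j units, keep the maximum
        max(rkumar(j, theta, beta)))))
  }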
In addition, to compare the different sampling schemes, we evaluate relative efficiency (RE) values under the SE and GE loss functions, given by
RE_SE1 = ER_SE(θ̂_RSS)/ER_SE(θ̂_SRS), RE_SE2 = ER_SE(θ̂_RSS)/ER_SE(θ̂_MRSSU), RE_SE3 = ER_SE(θ̂_MRSSU)/ER_SE(θ̂_SRS), RE_GE1 = ER_GE(θ̂_RSS)/ER_GE(θ̂_SRS), RE_GE2 = ER_GE(θ̂_RSS)/ER_GE(θ̂_MRSSU), RE_GE3 = ER_GE(θ̂_MRSSU)/ER_GE(θ̂_SRS).
Here, θ̂_SRS, θ̂_RSS, and θ̂_MRSSU are the estimates of θ based on the corresponding sampling methods. Obviously, if an RE value is less than 1, the estimate in its numerator outperforms the estimate in its denominator. The bias, ERs, and RE values of β can be calculated by similar formulas.
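A minimal R sketch of these simulation metrics for a vector of N estimates (each RE value is then just a ratio of two ER values):
  sim_metrics <- function(est, true, k = 0.2) {
    c(bias  = mean(est - true),                                   # Bias
      ER_SE = mean((est - true)^2),                               # ER under the SE loss
      ER_GE = mean((est / true)^k - k * log(est / true) - 1))     # ER under the GE loss
  }
  # e.g. RE_SE1 <- sim_metrics(theta_rss, 2)["ER_SE"] / sim_metrics(theta_srs, 2)["ER_SE"]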
In the simulation, all calculations are performed in the R language, with true values (θ, β) = (2, 3). We design three sampling schemes and two priors, namely an informative prior and a non-informative prior. Each sampling scheme contains the three sampling methods SRS, RSS, and MRSSU. All information related to the schemes and priors is given as follows:
  • Scheme I (Sch I): n = 3 , m = 5 , s = 15 ;
  • Scheme II (Sch II): n = 5 , m = 6 , s = 30 ;
  • Scheme III (Sch III): n = 5 , m = 9 , s = 45 ;
  • Prior 1: ( μ 1 , v 1 ) = ( 1.5 , 1 ) , ( μ 2 , v 2 ) = ( 0.5 , 0.5 ) ;
  • Prior 2: ( μ 1 , v 1 ) = ( 0 , 0 ) , ( μ 2 , v 2 ) = ( 0 , 0 ) .
We take the number of simulation iterations as N = 2500 and set M = 10,000 and H = 1000 in the MHG algorithm for the Bayesian estimation. The ERs and biases of θ and β based on SRS, RSS and MRSSU using the maximum likelihood and Bayesian methods are shown in Table 1, Table 2 and Table 3. The RE values for the MLEs of θ and β are presented in Table 4, and those for the BEs of θ and β based on the SE loss function and the GE loss function (k = 0.2, k = 0.8) are presented in Table 5, Table 6 and Table 7, respectively.
We also calculate the average CI lengths (ACLs) and coverage percentages (CPs) to compare the performance of the interval estimators. For each case, we generate 90% CIs over 2500 iterations and verify whether (θ, β) = (2, 3) falls into the corresponding CIs. The results are presented in Table 8.
Table 1 and Table 2 show that the ERs of the estimates become smaller as the sample size increases. Furthermore, the Bayesian methods outperform the maximum likelihood methods, and the estimators of θ are better than those of β. For maximum likelihood estimation, the ERs of the estimators of β are larger when the sample sets are smaller. For example, the ER of β̂_MLE.SRS under Scheme I is 2.37421, while under Schemes II and III it is 1.26192 and 0.81074, respectively. For Bayesian estimation, the informative prior is superior to the non-informative prior, and the SE loss function performs best, followed by the GE loss function with k = 0.2 and the GE loss function with k = 0.8.
Table 3 shows that the Bayesian estimates of θ tend to underestimate, since their biases are negative. From Table 4, we see that, for maximum likelihood estimation, the RSS scheme is more suitable for estimating θ and β than the MRSSU and SRS schemes. Furthermore, the SRS scheme exceeds the MRSSU scheme in estimating θ, while they perform similarly in estimating β (except for Scheme I). As can be seen in Table 6 and Table 7, the same regularity of RSS being superior to MRSSU and SRS holds in the Bayesian estimation as well.
As shown in Table 8, as the sample size decreases, the ACLs of the interval estimates increase and the CPs also rise. Figure 2 and Figure 3 visually show the changes in the ACLs of θ and β. The CPs of the ACIs are always less than 1, while the CPs of the CrIs remain equal to 1; Figure 4 clearly illustrates this trend. This indicates that Bayesian estimation outperforms maximum likelihood estimation in the sense of interval estimation. The optimal interval estimates are the Bayes CrIs based on the MRSSU scheme, except for (i) θ under Scheme I and (ii) (θ, β) under Scheme II with prior 2.

6. Data Analysis

Here, an open-access dataset from the California Data Exchange Center, available at [31], is analyzed. It relates to the monthly water capacity of the Shasta Reservoir in California, USA, for the months of August through December from 1975 to 2016. The dataset, previously used by authors such as [11,32], contains a total of 42 observations and is given in Table 9.
To evaluate the proposed point and interval estimators based on SRS, RSS and MRSSU, we set s = 12, m = 3 and n = 4 and drew a sample of size 12 with each sampling method. In the case of SRS, a random sample of size 12 is drawn directly. In the cases of RSS and MRSSU, sample sets are drawn first and individual units are then selected over repeated cycles according to the procedures described in Section 3 and Section 4.
We consider 90% CIs and the non-informative prior, i.e., (μ₁, v₁) = (0, 0) and (μ₂, v₂) = (0, 0). The maximum likelihood and Bayesian point and interval estimates are summarized in Table 10 and Table 11.

7. Conclusions

In this paper, point and interval estimation of the unknown parameters of the Kumaraswamy distribution based on SRS, RSS and MRSSU was investigated using the maximum likelihood and Bayes approaches. For the Bayes estimation, we considered the squared error and general entropy loss functions. Furthermore, a simulation experiment was conducted to compare the proposed point estimators (in terms of bias, estimated risk, and relative efficiency) and interval estimators (in terms of average length and coverage percentage).
From the numerical outcomes, our first suggestion is generally to adopt the ranked set sampling scheme, with the simple random sampling scheme second. In addition, the Bayes estimators under the squared error loss function outperform the maximum likelihood estimators and the Bayes estimators under the general entropy loss function. A set of real-life data was also analyzed.
In the future, other problems concerning the Kumaraswamy distribution and ranked set samples can be studied, such as estimation for the Kumaraswamy distribution based on ranked set samples under type-I and type-II censoring or progressive hybrid censoring.

Author Contributions

Investigation, H.J.; Supervision, W.G. Both authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are openly available online [31].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kumaraswamy, P. A generalized probability density function for double-bounded random processes. J. Hydrol. 1980, 46, 79–88.
  2. Jones, M.C. Kumaraswamy’s distribution: A beta-type distribution with some tractability advantages. Stat. Methodol. 2009, 6, 70–81.
  3. Ponnambalam, K.; Seifi, A.; Vlach, J. Probabilistic design of systems with general distributions of parameters. Int. J. Circuit Theory Appl. 2001, 29, 527–536.
  4. Dey, S.; Mazucheli, J.; Nadarajah, S. Kumaraswamy distribution: Different methods of estimation. Comput. Appl. Math. 2018, 37, 2094–2111.
  5. Sindhu, T.N.; Feroze, N.; Aslam, M. Bayesian analysis of the Kumaraswamy distribution under failure censoring sampling scheme. Int. J. Adv. Sci. Technol. 2013, 51, 39–58.
  6. Abd El-Monsef, M.M.E.; Hassanein, W.A.A.E.L. Assessing the lifetime performance index for Kumaraswamy distribution under first-failure progressive censoring scheme for ball bearing revolutions. Qual. Reliab. Eng. Int. 2020, 36, 1086–1097.
  7. Gholizadeh, R.; Khalilpor, M.; Hadian, M. Bayesian estimations in the Kumaraswamy distribution under progressively type II censoring data. Int. J. Eng. Sci. Technol. 2011, 3, 47–65.
  8. Sultana, F.; Tripathi, Y.M.; Rastogi, M.K.; Wu, S.J. Parameter estimation for the Kumaraswamy distribution based on hybrid censoring. Am. J. Math. Manag. Sci. 2018, 37, 243–261.
  9. Kohansal, A.; Nadarajah, S. Stress–strength parameter estimation based on type-II hybrid progressive censored samples for a Kumaraswamy distribution. IEEE Trans. Reliab. 2019, 68, 1296–1310.
  10. Sultana, F.; Tripathi, Y.M.; Wu, S.J.; Sen, T. Inference for Kumaraswamy Distribution Based on Type I Progressive Hybrid Censoring. Ann. Data Sci. 2020, 1–25.
  11. Tu, J.; Gui, W. Bayesian Inference for the Kumaraswamy Distribution under Generalized Progressive Hybrid Censoring. Entropy 2020, 22, 1032.
  12. Saulo, H.; Leão, J.; Bourguignon, M. The Kumaraswamy Birnbaum–Saunders distribution. J. Stat. Theory Pract. 2012, 6, 745–759.
  13. Iqbal, Z.; Tahir, M.M.; Riaz, N.; Ali, S.A.; Ahmad, M. Generalized Inverted Kumaraswamy Distribution: Properties and Application. Open J. Stat. 2017, 7, 645.
  14. Khan, M.S.; King, R.; Hudson, I.L. Transmuted Kumaraswamy distribution. Stat. Transit. New Ser. 2016, 17, 183–210.
  15. McIntyre, G.A. A method for unbiased selective sampling, using ranked sets. Aust. J. Agric. Res. 1952, 3, 385–390.
  16. McIntyre, G.A. Ranked set sampling: Ranking error models and estimation of visual judgment error variance. Biom. J. 2004, 46, 255–263.
  17. Al-Saleh, M.F.; Al-Hadrami, S.A. Parametric estimation for the location parameter for symmetric distributions using moving extremes ranked set sampling with application to trees data. Environmetrics 2003, 14, 651–664.
  18. Mahmood, T. On developing linear profile methodologies: A ranked set approach with engineering application. J. Eng. Res. 2020, 8, 203–225.
  19. Sevil, Y.C.; Yildiz, T.Ö. Performances of the Distribution Function Estimators Based on Ranked Set Sampling Using Body Fat Data. Türkiye Klin. Biyoistatistik 2020, 12, 218–228.
  20. Bocci, C.; Petrucci, A.; Rocco, E. Ranked set sampling allocation models for multiple skewed variables: An application to agricultural data. Environ. Ecol. Stat. 2010, 17, 333–345.
  21. Biradar, B.S.; Santosha, C.D. Estimation of the mean of the exponential distribution using maximum ranked set sampling with unequal samples. Open J. Stat. 2014, 4, 641.
  22. Provost, F.; Fawcett, T. Data science and its relationship to Big Data and data-driven decision making. Big Data 2013, 1, 51–59.
  23. Liang, F.; Paulo, R.; Molina, G.; Clyde, M.A.; Berger, J.O. Mixtures of g priors for Bayesian variable selection. J. Am. Stat. Assoc. 2008, 103, 410–423.
  24. Caraka, R.E.; Yusra, Y.; Toharudin, T.; Chen, R.C.; Basyuni, M.; Juned, V.; Gio, P.U.; Pardamean, B. Did Noise Pollution Really Improve during COVID-19? Evidence from Taiwan. Sustainability 2021, 13, 5946.
  25. Salimans, T.; Kingma, D.; Welling, M. Markov chain Monte Carlo and variational inference: Bridging the gap. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 1218–1226.
  26. Dellaportas, P.; Forster, J.J.; Ntzoufras, I. On Bayesian model and variable selection using MCMC. Stat. Comput. 2002, 12, 27–36.
  27. Caraka, R.E.; Nugroho, N.T.; Tai, S.K.; Chen, R.C.; Toharudin, T.; Pardamean, B. Feature importance of the aortic anatomy on endovascular aneurysm repair (EVAR) using Boruta and Bayesian MCMC. Commun. Math. Biol. Neurosci. 2020.
  28. Rabinovich, M.; Angelino, E.; Jordan, M.I. Variational consensus Monte Carlo. arXiv 2015, arXiv:1506.03074.
  29. Calabria, R.; Pulcini, G. Point estimation under asymmetric loss functions for left-truncated exponential samples. Commun. Stat. Theory Methods 1996, 25, 585–600.
  30. Brooks, S.; Gelman, A.; Jones, G.; Meng, X.L. (Eds.) Handbook of Markov Chain Monte Carlo; Chapman & Hall/CRC: Boca Raton, FL, USA, 2011.
  31. Query Monthly CDEC Sensor Values. Available online: http://cdec.water.ca.gov/dynamicapp/QueryMonthly?SHA (accessed on 19 June 2021).
  32. Kohansal, A. On estimation of reliability in a multicomponent stress-strength model for a Kumaraswamy distribution based on progressively censored sample. Stat. Pap. 2019, 60, 2185–2224.
Figure 1. PDF and CDF of K(θ, β).
Figure 2. The plots of the ACLs of the maximum likelihood interval estimates of θ and β.
Figure 3. The plots of the ACLs of the Bayes interval estimates of θ and β.
Figure 4. The plots of the CPs of θ and β.
Table 1. The ERs of the estimates of θ.

                                   Prior 1                               Prior 2
Sch   Estimator                    ER_SE     ER_GE(k=0.2)  ER_GE(k=0.8)  ER_SE     ER_GE(k=0.2)  ER_GE(k=0.8)
I     θ̂_MLE.SRS                   0.33258   0.00131       0.02179       0.33258   0.00131       0.02179
      θ̂_MLE.RSS                   0.12084   0.00081       0.00917       0.12084   0.00081       0.00917
      θ̂_MLE.MRSSU                 0.23573   0.00105       0.01709       0.23573   0.00105       0.01709
      θ̂_BE.SE.SRS                 0.04925   0.00028       0.00431       0.03844   0.00017       0.00283
      θ̂_BE.SE.RSS                 0.01714   0.00009       0.00145       0.03685   −0.00002      −0.00275
      θ̂_BE.SE.MRSSU               0.05559   0.00031       0.00490       0.04485   0.00020       0.00329
      θ̂_BE.GE.SRS (k = 0.2)       0.10457   −0.01131      −0.03820      0.00771   −0.01242      −0.04922
      θ̂_BE.GE.SRS (k = 0.8)       0.14151   −0.01752      −0.06034      0.00570   −0.01931      −0.07688
      θ̂_BE.GE.RSS (k = 0.2)       0.03985   −0.00752      −0.02751      0.00275   0.00051       0.00854
      θ̂_BE.GE.RSS (k = 0.8)       0.05557   −0.01155      −0.04255      0.00454   0.00009       0.00151
      θ̂_BE.GE.MRSSU (k = 0.2)     0.12459   −0.01318      −0.04421      0.00655   −0.01448      −0.05755
      θ̂_BE.GE.MRSSU (k = 0.8)     0.17223   −0.02058      −0.07027      0.00578   −0.02269      −0.09039
II    θ̂_MLE.SRS                   0.16951   0.00073       0.01196       0.16951   0.00073       0.01196
      θ̂_MLE.RSS                   0.06586   0.00036       0.00562       0.06586   0.00036       0.00562
      θ̂_MLE.MRSSU                 0.35101   0.00128       0.02183       0.35101   0.00128       0.02183
      θ̂_BE.SE.SRS                 0.01419   0.00002       0.00035       0.01412   0.00003       0.00108
      θ̂_BE.SE.RSS                 0.00415   0.00002       0.00034       0.00780   0.00003       0.00060
      θ̂_BE.SE.MRSSU               0.01762   0.00003       0.00140       0.01892   0.00003       0.00140
      θ̂_BE.GE.SRS (k = 0.2)       0.01019   −0.00410      −0.03393      0.00338   0.00338       −0.02753
      θ̂_BE.GE.SRS (k = 0.8)       0.01446   −0.00621      −0.02395      0.00191   −0.01058      −0.04220
      θ̂_BE.GE.RSS (k = 0.2)       0.00980   −0.00414      −0.01595      0.00294   −0.00414      −0.01640
      θ̂_BE.GE.RSS (k = 0.8)       0.01392   −0.00628      −0.02425      0.00189   −0.00630      −0.02508
      θ̂_BE.GE.MRSSU (k = 0.2)     0.04336   −0.00524      −0.01817      0.00572   −0.00834      −0.03303
      θ̂_BE.GE.MRSSU (k = 0.8)     0.05446   −0.00802      −0.02853      0.00335   −0.01278      −0.05094
III   θ̂_MLE.SRS                   0.10675   0.00047       0.00771       0.10675   0.00047       0.00771
      θ̂_MLE.RSS                   0.04229   0.00018       0.00297       0.04229   0.00018       0.00297
      θ̂_MLE.MRSSU                 0.11766   0.00050       0.00826       0.11766   0.00050       0.00826
      θ̂_BE.SE.SRS                 0.00471   0.00002       0.00039       0.00979   0.00005       0.00075
      θ̂_BE.SE.RSS                 0.00151   0.00001       0.00012       0.00385   0.00002       0.00030
      θ̂_BE.SE.MRSSU               0.00539   0.00003       0.00045       0.01398   0.00007       0.00106
      θ̂_BE.GE.SRS (k = 0.2)       0.01262   −0.00481      −0.01843      0.00290   −0.00492      −0.01951
      θ̂_BE.GE.SRS (k = 0.8)       0.01829   −0.00729      −0.02800      0.00141   −0.00747      −0.02979
      θ̂_BE.GE.RSS (k = 0.2)       0.00272   −0.00241      −0.00949      0.00188   −0.00246      −0.00972
      θ̂_BE.GE.RSS (k = 0.8)       0.00377   −0.00364      −0.01434      0.00137   −0.00371      −0.01478
      θ̂_BE.GE.MRSSU (k = 0.2)     0.01673   −0.00634      −0.02429      0.00467   −0.00636      −0.02518
      θ̂_BE.GE.MRSSU (k = 0.8)     0.02541   −0.00964      −0.03693      0.00332   −0.00971      −0.03863
Table 2. The ERs of the estimates of β.

                                   Prior 1                               Prior 2
Sch   Estimator                    ER_SE     ER_GE(k=0.2)  ER_GE(k=0.8)  ER_SE     ER_GE(k=0.2)  ER_GE(k=0.8)
I     β̂_MLE.SRS                   2.37421   0.00343       0.05927       2.37421   0.00343       0.05927
      β̂_MLE.RSS                   1.12385   0.00194       0.03238       1.12385   0.00194       0.03238
      β̂_MLE.MRSSU                 1.99733   0.00272       0.04797       1.99733   0.00272       0.04797
      β̂_BE.SE.SRS                 0.15955   0.00041       0.00637       0.03588   0.00016       0.00266
      β̂_BE.SE.RSS                 0.04230   0.00010       0.00161       0.02347   0.00019       0.00305
      β̂_BE.SE.MRSSU               0.10521   0.00026       0.00410       0.02668   0.00020       0.00832
      β̂_BE.GE.SRS (k = 0.2)       0.43958   0.00124       0.01880       0.00559   −0.01232      −0.04897
      β̂_BE.GE.SRS (k = 0.8)       0.62414   0.00184       0.02772       0.03803   −0.01913      0.00338
      β̂_BE.GE.RSS (k = 0.2)       0.17473   0.00045       0.00699       0.02328   0.00005       0.00078
      β̂_BE.GE.RSS (k = 0.8)       0.27543   0.00074       0.01132       0.01070   0.00002       0.00037
      β̂_BE.GE.MRSSU (k = 0.2)     0.26804   0.00071       0.01097       0.00438   0.00009       0.00145
      β̂_BE.GE.MRSSU (k = 0.8)     0.37350   0.00103       0.01569       0.01455   0.00003       0.00051
II    β̂_MLE.SRS                   1.26192   0.00197       0.03353       1.26192   0.00197       0.03353
      β̂_MLE.RSS                   0.68806   0.00133       0.02158       0.68806   0.00133       0.02158
      β̂_MLE.MRSSU                 0.72587   0.00119       0.02018       0.72587   0.00119       0.02018
      β̂_BE.SE.SRS                 0.01329   0.00003       0.00041       0.13983   0.00027       0.00451
      β̂_BE.SE.RSS                 0.01275   0.00003       0.00046       0.04482   0.00009       0.00150
      β̂_BE.SE.MRSSU               0.16008   0.00041       0.00632       0.10902   0.00022       0.00355
      β̂_BE.GE.SRS (k = 0.2)       0.04799   0.00012       0.00182       0.02237   0.00004       0.00076
      β̂_BE.GE.SRS (k = 0.8)       0.08076   0.00020       0.00311       0.01030   0.00002       0.00037
      β̂_BE.GE.RSS (k = 0.2)       0.04567   0.00011       0.00173       0.01499   0.00003       0.00051
      β̂_BE.GE.RSS (k = 0.8)       0.07689   0.00019       0.00296       0.00846   0.00002       0.00029
      β̂_BE.GE.MRSSU (k = 0.2)     0.22118   0.00058       0.00891       0.02932   0.00006       0.00099
      β̂_BE.GE.MRSSU (k = 0.8)     0.25489   0.00067       0.01037       0.01246   0.00003       0.00043
III   β̂_MLE.SRS                   0.81074   0.00133       0.02246       0.81074   0.00133       0.02246
      β̂_MLE.RSS                   0.38513   0.00063       0.01078       0.38513   0.00063       0.01078
      β̂_MLE.MRSSU                 0.36866   0.00065       0.01086       0.36866   0.00065       0.01086
      β̂_BE.SE.SRS                 0.01318   0.00003       0.00048       0.09251   0.00019       0.00303
      β̂_BE.SE.RSS                 0.00837   0.00002       0.00029       0.04482   0.00009       0.00150
      β̂_BE.SE.MRSSU               0.00648   0.00001       0.00024       0.05312   0.00011       0.00177
      β̂_BE.GE.SRS (k = 0.2)       0.05844   0.00014       0.00222       0.02002   0.00004       0.00068
      β̂_BE.GE.SRS (k = 0.8)       0.09552   0.00024       0.00369       0.00864   0.00002       0.00027
      β̂_BE.GE.RSS (k = 0.2)       0.01207   0.00003       0.00044       0.01499   0.00003       0.00051
      β̂_BE.GE.RSS (k = 0.8)       0.02035   0.00005       0.00076       0.00846   0.00002       0.00029
      β̂_BE.GE.MRSSU (k = 0.2)     0.02507   0.00006       0.00093       0.01819   0.00004       0.00062
      β̂_BE.GE.MRSSU (k = 0.8)     0.04058   0.00010       0.00153       0.01013   0.00002       0.00035
Table 3. The biases of the estimates of θ and β.

                                   Sch I                  Sch II                 Sch III
Estimator                          Prior 1    Prior 2     Prior 1    Prior 2     Prior 1    Prior 2
θ̂_MLE.SRS                         0.19579    0.19579     0.11436    0.11436     0.07990    0.07990
θ̂_MLE.RSS                         0.01481    0.01481     −0.03723   −0.03723    0.08278    0.08278
θ̂_MLE.MRSSU                       0.27745    0.27745     0.23564    0.23564     0.06968    0.06968
θ̂_BE.SE.SRS                       −0.21754   0.17919     −0.05602   0.39769     0.00072    0.01207
θ̂_BE.SE.RSS                       −0.12415   0.12508     −0.04968   0.07709     −0.01269   0.05220
θ̂_BE.SE.MRSSU                     −0.23146   0.19954     −0.15661   0.13700     −0.06227   0.10258
θ̂_BE.GE.SRS (k = 0.2)             −0.32079   0.04867     −0.09597   0.03841     0.00008    0.00128
θ̂_BE.GE.SRS (k = 0.8)             −0.37407   −0.01978    −0.11622   0.02221     0.00005    0.00083
θ̂_BE.GE.RSS (k = 0.2)             −0.19536   0.04063     −0.09013   0.03437     −0.03666   0.02706
θ̂_BE.GE.RSS (k = 0.8)             −0.23210   −0.00288    −0.11065   0.01261     −0.04872   0.01440
θ̂_BE.GE.MRSSU (k = 0.2)           −0.35044   0.04562     −0.20646   0.04950     −0.12353   0.03653
θ̂_BE.GE.MRSSU (k = 0.8)           −0.41286   −0.03652    −0.23177   0.00462     −0.15466   0.00285
β̂_MLE.SRS                         1.69497    1.69497     1.40003    1.40003     1.27305    1.27305
β̂_MLE.RSS                         1.33270    1.33270     0.95901    0.95901     1.30379    1.30379
β̂_MLE.MRSSU                       1.78535    1.78535     1.28706    1.28706     1.12851    1.12851
β̂_BE.SE.SRS                       0.61670    1.59981     0.93841    1.35343     0.91070    1.28659
β̂_BE.SE.RSS                       0.82073    1.49472     0.95462    1.28311     1.02437    1.19320
β̂_BE.SE.MRSSU                     0.68632    1.49294     0.60187    1.31290     0.94643    1.20825
β̂_BE.GE.SRS (k = 0.2)             0.34293    1.13545     0.79448    1.10523     0.76734    1.11028
β̂_BE.GE.SRS (k = 0.8)             0.21402    0.91883     0.72510    0.98570     0.69741    1.02464
β̂_BE.GE.RSS (k = 0.2)             0.59145    1.15289     0.80882    1.11194     0.93065    1.08869
β̂_BE.GE.RSS (k = 0.8)             0.48205    0.99292     0.73861    1.02884     0.88508    1.03786
β̂_BE.GE.MRSSU (k = 0.2)           0.48677    1.16497     0.53130    1.14034     0.85229    1.09794
β̂_BE.GE.MRSSU (k = 0.8)           0.39222    1.01070     0.49659    1.05758     0.80657    1.04437
Table 4. The RE_SEs and RE_GEs for the maximum likelihood estimates of θ and β.

                                        k = 0.2                     k = 0.8
    Sch   RE_SE1  RE_SE2  RE_SE3    RE_GE1  RE_GE2  RE_GE3    RE_GE1  RE_GE2  RE_GE3
θ   I     0.3633  0.5126  0.7088    0.6148  0.7656  0.8030    0.4208  0.5364  0.7845
    II    0.3885  0.1876  2.0708    0.4924  0.2800  1.7585    0.4696  0.2573  1.8249
    III   0.3962  0.3594  1.1022    0.3806  0.3588  1.0609    0.3853  0.3599  1.0705
β   I     0.4734  0.5627  0.8413    0.5661  0.7139  0.8094    0.5463  0.6749  0.8094
    II    0.5453  0.9479  0.5752    0.6750  0.2800  0.6019    0.6435  1.0691  0.6019
    III   0.4750  1.0447  0.4547    0.4766  0.3588  0.4837    0.4801  1.0447  0.4837
Table 5. The RE_SEs and RE_GEs for the Bayes estimates of θ and β based on the SE loss function.

                                               k = 0.2                     k = 0.8
    (Sch, Prior)   RE_SE1  RE_SE2  RE_SE3    RE_GE1  RE_GE2  RE_GE3    RE_GE1  RE_GE2  RE_GE3
θ   (I,1)          0.3480  0.3083  1.1287    0.3257  0.2865  1.1367    0.3366  0.2966  1.1350
    (II,2)         0.2926  0.2356  1.2418    1.0000  0.6055  1.6514    0.9908  0.0243  4.0755
    (III,3)        0.3962  0.3594  1.1022    0.3806  0.3588  1.0609    0.3853  0.3599  1.0705
    (I,1)          0.9585  0.8215  1.1668    −0.1225 −0.1051 1.1654    −0.9733 −0.8349 1.1658
    (II,2)         0.5523  0.4122  1.3400    1.0000  0.9083  1.1010    0.5566  0.0425  1.3105
    (III,3)        0.0832  0.2752  1.4285    0.0999  0.2857  1.4000    0.0910  0.2815  1.4110
β   (I,1)          0.2651  0.4021  0.6594    0.2502  0.3916  0.6390    0.2534  0.3938  0.6434
    (II,2)         0.9595  0.0796  0.5752    0.9660  0.0712  0.6057    1.1146  0.0729  0.6019
    (III,3)        0.6350  1.2926  0.4547    0.5832  1.1975  0.4870    0.5938  1.2170  0.4879
    (I,1)          0.6541  0.8795  0.7437    1.1437  0.9243  1.2373    1.1467  0.3669  3.1257
    (II,2)         0.3205  0.4111  0.5752    0.3281  0.4147  0.6057    0.3322  0.4213  0.6019
    (III,3)        0.4844  0.8437  0.5742    0.4967  0.8484  0.4886    0.4941  0.8474  0.4837
Table 6. The RE_SEs and RE_GEs for the Bayes estimates of θ and β based on the GE loss function (k = 0.2).

                                               k = 0.2                     k = 0.8
    (Sch, Prior)   RE_SE1  RE_SE2  RE_SE3    RE_GE1  RE_GE2  RE_GE3    RE_GE1  RE_GE2  RE_GE3
θ   (I,1)          0.3811  0.3198  1.1915    0.6647  0.5706  1.1650    0.7202  0.6222  1.1576
    (II,2)         0.9619  0.2260  4.2562    1.0104  0.7897  1.2794    0.4702  0.8779  0.5356
    (III,3)        0.2156  0.1626  1.3259    0.5025  0.3811  1.3186    0.5150  0.3908  1.3178
    (I,1)          0.3566  0.4198  0.8493    −0.0415 −0.0355 1.1664    −0.1735 −0.1484 1.1692
    (II,2)         0.8711  0.5138  1.6955    −1.2269 0.4966  −2.4703   0.5957  0.4964  1.1999
    (III,3)        0.6479  0.4020  1.6118    0.4994  0.3862  1.2931    0.4981  0.3860  1.2904
β   (I,1)          0.3975  0.6519  0.6098    0.3647  0.6328  0.5764    0.3715  0.6368  0.5834
    (II,2)         0.2066  0.2065  4.6084    0.1967  0.1913  4.9777    0.1987  0.1945  4.8962
    (III,3)        1.7247  0.4817  0.4289    1.0154  0.4706  0.4180    1.0181  0.4729  0.4203
    (I,1)          4.1662  5.3157  0.7838    −0.0041 0.5556  −0.0073   −0.0160 0.5398  −0.0296
    (II,2)         0.7485  1.2026  1.3109    0.7500  1.0000  1.5000    0.7488  1.1820  1.3004
    (III,3)        1.0061  0.8240  2.1054    1.2128  0.7500  2.0000    0.9806  0.8213  2.3211
Table 7. The RE_SEs and RE_GEs for the Bayes estimates of θ and β based on the GE loss function (k = 0.8).

                                               k = 0.2                     k = 0.8
    (Sch, Prior)   RE_SE1  RE_SE2  RE_SE3    RE_GE1  RE_GE2  RE_GE3    RE_GE1  RE_GE2  RE_GE3
θ   (I,1)          0.3927  0.3226  1.2171    0.6589  0.5610  1.1745    0.7051  0.6055  1.1646
    (II,2)         0.9627  0.2557  3.7653    1.0108  0.7829  1.2910    1.0125  0.8498  1.1916
    (III,3)        0.2063  0.1485  1.3893    0.4998  0.3781  1.3220    0.5121  0.3884  1.3187
    (I,1)          0.7969  0.7857  1.0142    −0.0047 −0.0040 1.1751    −0.0197 −0.0167 1.1758
    (II,2)         0.9903  0.5641  1.7557    0.5954  0.4926  1.2086    0.5943  0.4924  1.2071
    (III,3)        0.9693  0.4124  2.3502    0.4972  0.3826  1.2995    0.4959  0.3825  1.2965
β   (I,1)          0.4413  0.7374  0.5984    0.4001  0.7175  0.5576    0.4086  0.7217  0.5662
    (II,2)         0.9521  0.3017  3.1560    0.9532  0.2816  3.3852    0.9529  0.2858  3.3347
    (III,3)        0.2130  0.5015  0.4248    0.2024  0.4934  0.4103    0.2046  0.4951  0.4134
    (I,1)          0.2813  0.7354  0.3825    −0.0010 0.6667  −0.0016   0.1095  0.7252  0.1510
    (II,2)         0.8212  0.6785  1.2104    1.0000  0.6667  1.5000    0.7742  0.6743  1.1482
    (III,3)        0.9788  0.8344  1.1731    1.0000  1.0000  1.0000    1.0875  0.8255  1.3173
Table 8. The CPs and ACLs of the interval estimates of θ and β.

                        θ, Prior 1          θ, Prior 2          β, Prior 1          β, Prior 2
Sch   Interval          ACL       CP        ACL       CP        ACL       CP        ACL       CP
I     ACI SRS           1.32666   0.760     1.32666   0.760     4.42662   0.930     4.42662   0.930
      ACI RSS           1.32794   0.960     1.32794   0.960     3.59689   0.970     3.59689   0.970
      ACI MRSSU         1.36760   0.950     1.36760   0.950     5.22599   0.960     5.22599   0.960
      CrI SRS           1.79837   1.000     2.22278   1.000     3.65033   1.000     5.59503   1.000
      CrI RSS           1.74243   1.000     2.00424   1.000     3.73660   1.000     5.05799   1.000
      CrI MRSSU         1.90592   1.000     2.41473   1.000     3.16178   1.000     4.64051   1.000
II    ACI SRS           0.92771   0.754     0.92771   0.754     2.86597   0.894     2.86597   0.894
      ACI RSS           0.90439   0.920     0.90439   0.920     2.33340   0.960     2.33340   0.960
      ACI MRSSU         0.87335   0.760     0.87335   0.760     3.00500   0.950     3.00500   0.950
      CrI SRS           1.45992   1.000     1.64395   1.000     3.06665   1.000     3.91012   1.000
      CrI RSS           1.36530   1.000     1.43570   1.000     3.09474   1.000     3.51308   1.000
      CrI MRSSU         1.28128   1.000     1.82291   1.000     2.90357   1.000     3.25918   1.000
III   ACI SRS           0.75327   0.746     0.75327   0.746     2.24389   0.845     2.24389   0.845
      ACI RSS           0.67362   0.900     0.67362   0.900     1.78925   0.960     1.78925   0.960
      ACI MRSSU         0.61625   0.739     0.61625   0.739     2.06588   0.880     2.06588   0.880
      CrI SRS           1.26859   1.000     1.38118   1.000     2.76165   1.000     3.25993   1.000
      CrI RSS           1.06777   1.000     1.11610   1.000     2.53341   1.000     2.75678   1.000
      CrI MRSSU         1.05881   1.000     1.06538   1.000     2.25640   1.000     2.54296   1.000
Table 9. The real dataset.

0.667157  0.287785  0.126977  0.768563  0.703119  0.729986  0.767135
0.811159  0.829569  0.726164  0.423813  0.715158  0.640395  0.363365
0.463726  0.371904  0.291172  0.414087  0.650691  0.538082  0.744887
0.722613  0.561238  0.813964  0.709025  0.668612  0.524947  0.606039
0.715850  0.529518  0.824860  0.742025  0.468782  0.345075  0.425334
0.767070  0.679829  0.613911  0.461618  0.294834  0.392917  0.688100
Table 10. The point estimates of θ and β.

Estimator                      Point Estimate   Estimator                      Point Estimate
θ̂_MLE.SRS                     2.832703         β̂_MLE.SRS                     2.190995
θ̂_MLE.RSS                     2.145644         β̂_MLE.RSS                     1.724308
θ̂_MLE.MRSSU                   2.532715         β̂_MLE.MRSSU                   2.775230
θ̂_BE.SE.SRS                   2.806827         β̂_BE.SE.SRS                   2.339172
θ̂_BE.SE.RSS                   2.145149         β̂_BE.SE.RSS                   1.797477
θ̂_BE.SE.MRSSU                 2.380317         β̂_BE.SE.MRSSU                 2.692457
θ̂_BE.GE.SRS (k = 0.2)         2.565881         β̂_BE.GE.SRS (k = 0.2)         1.996350
θ̂_BE.GE.RSS (k = 0.2)         2.014902         β̂_BE.GE.RSS (k = 0.2)         1.630882
θ̂_BE.GE.MRSSU (k = 0.2)       2.212830         β̂_BE.GE.MRSSU (k = 0.2)       2.443375
θ̂_BE.GE.SRS (k = 0.8)         2.438731         β̂_BE.GE.SRS (k = 0.8)         1.843767
θ̂_BE.GE.RSS (k = 0.8)         1.945485         β̂_BE.GE.RSS (k = 0.8)         1.551529
θ̂_BE.GE.MRSSU (k = 0.8)       2.120080         β̂_BE.GE.MRSSU (k = 0.8)       2.325173
Table 11. The interval estimates of θ and β.

            90% ACI                  Length      90% CrI                  Length
θ   SRS     (1.949037, 3.712106)     1.763069    (1.257362, 4.639019)     3.381657
    RSS     (1.377963, 2.909622)     1.531659    (1.114736, 3.288691)     2.173955
    MRSSU   (1.309699, 2.977556)     1.667856    (1.180012, 3.700654)     2.520642
β   SRS     (0.898255, 3.483736)     2.585480    (0.847866, 4.571152)     3.723286
    RSS     (0.727022, 2.721594)     1.994572    (0.851320, 3.198368)     2.347048
    MRSSU   (0.640762, 2.807854)     2.167092    (1.268287, 4.776766)     3.508479
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
