
The Entropy of Progressively Censored Samples

by Z. A. Abo-Eleneen 1,2

1 Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt
2 Department of Mathematics, Faculty of Science, Qassim University, Al-Montazah, Buraydah 51431, Al Qassim, Saudi Arabia
Entropy 2011, 13(2), 437-449; https://doi.org/10.3390/e13020437
Submission received: 21 December 2010 / Revised: 7 January 2011 / Accepted: 8 January 2011 / Published: 9 February 2011

Abstract

In many life-testing and reliability studies, the experimenter might not always obtain complete information on failure times for all experimental units. Among the different censoring schemes, the progressive censoring scheme has received considerable attention in the last few years. The aim of this paper is to simplify the entropy of progressively Type II censored samples. We propose an indirect approach, based on a decomposition of the entropy of progressively Type II censored samples, that simplifies the calculation. Some recurrence relations for the entropy of progressively Type II censored samples are derived to facilitate this calculation. An efficient computational method is derived that reduces the computation of the entropy of progressively Type II censored samples to a sum of entropies of collections of order statistics. We compute the entropy of collections of progressively Type II censored samples for some known distributions.

1. Introduction

Information theory provides intuitive tools to measure the uncertainty of random variables and the information shared between them; the entropy and the mutual information are two of its central concepts.
Let X be a random variable with a cumulative distribution function (cdf) F ( x ) and probability density function (pdf) f ( x ) . The differential entropy H ( X ) of the random variable is defined by Cover and Thomas [1] to be
$$H(X) = -\int_{-\infty}^{\infty} f(x)\,\log f(x)\,dx$$
Let us consider a life-testing experiment in which n units are kept under observation until failure. These units could be systems, components, or computer chips in reliability experiments, or they could be patients put under certain drug or clinical conditions. Suppose the life lengths of these n units are independent and identically distributed random variables with common cdf $F(x)$ and pdf $f(x)$. Data collected from such experiments form the order statistics sample
$$X_{1:n} < X_{2:n} < \cdots < X_{n:n}$$
where $X_{r:n}$ is called the rth order statistic (OS).
For some reason, suppose that we have to terminate the experiment before all items have failed. For example, individuals in a clinical trial may drop out of the study, or the study may have to be terminated for lack of funds. In an industrial experiment, units may break accidentally. There are, however, many situations in which the removal of units prior to failure is pre-planned. One of the main reasons for this is to save time and cost associated with testing. Data obtained from such experiments are called censored data.
The most common censoring schemes are Type I and Type II censoring. In conventional Type I censoring, the experiment continues up to a prespecified time T; any failures that occur after T are not observed, and the termination point T of the experiment is assumed to be independent of the failure times. In conventional Type II censoring, the experimenter terminates the experiment after a prespecified number of items $r \le n$ fail, so only the r smallest lifetimes are observed. In Type I censoring the number of failures observed is random and the endpoint of the experiment is fixed, whereas in Type II censoring the endpoint is random while the number of failures is fixed.
Park [2] studied the entropy of the Type II censored sample, and Park [3] considered testing exponentiality based on the Kullback-Leibler information with Type II censored data. The entropy of a single $X_{r:n}$ and of a complete order statistics sample has been studied in Wong and Chen [4] and Ebrahimi et al. [5].
Here we consider progressive Type II censoring schemes. Among the different censoring schemes, the progressive censoring scheme has received considerable attention in the last few years, particularly in reliability analysis; it is a more general censoring mechanism than the traditional Type I and Type II censoring [6]. The recent review article by Balakrishnan [7] provides details on progressive censoring schemes and on their different applications. This paper is concerned with simplifying the calculation of the entropy of progressively Type II censored data from an i.i.d. random sample of size n. The extension from ordinary order statistics to progressively Type II censored data is not straightforward, because the joint entropy of progressively Type II censored data is an n-dimensional integral; besides, removals cause additional complications.
Following Balakrishnan and Aggarwala [8], progressively Type II censored samples can be described as follows. Let n units be placed on test at time zero.
  • The number of observed failures m and the removal numbers $R_i$, $i = 1, \dots, m-1$, are fixed prior to the test.
  • At the first failure, $R_1$ units are randomly removed from the remaining $n - 1$ surviving units.
  • At the second failure, $R_2$ units are randomly removed from the remaining $n - R_1 - 2$ surviving units.
  • The test continues until the mth failure, when all remaining $R_m = n - R_1 - R_2 - \cdots - R_{m-1} - m$ units are removed, so the life test stops at the mth failure.
  • The observed failure times $X = (X_{1:m:n}, X_{2:m:n}, \dots, X_{m:m:n})$ constitute the progressively Type II censored OS.
  • If $R_1 = R_2 = \cdots = R_{m-1} = 0$, then $R_m = n - m$, which corresponds to Type II censoring.
  • If $R_1 = R_2 = \cdots = R_m = 0$, then $m = n$, which corresponds to the usual order statistics.
Thus, the usual OS and Type II censoring are special cases of progressively Type II censored samples, so any result established for progressive Type II censoring generalizes the corresponding result for OS and for Type II censoring.
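The removal mechanism above is easy to simulate. The following sketch (the function name and interface are ours, not the paper's) generates one progressively Type II censored sample from a supplied list of i.i.d. lifetimes: at each observed failure the smallest surviving lifetime is recorded and a uniformly random subset of the survivors is withdrawn.

```python
import random

def progressive_type2_sample(lifetimes, R):
    """Simulate progressive Type II censoring.

    lifetimes: list of n i.i.d. failure times (one per unit on test).
    R: censoring scheme (R_1, ..., R_m); it must satisfy n = m + sum(R).
    Returns the m observed failure times X_{1:m:n} < ... < X_{m:m:n}.
    """
    m, n = len(R), len(lifetimes)
    assert n == m + sum(R), "scheme must satisfy n = m + sum(R_i)"
    alive = sorted(lifetimes)        # surviving units, ordered by remaining life
    observed = []
    for r_i in R:
        observed.append(alive.pop(0))        # next failure = smallest survivor
        for unit in random.sample(alive, r_i):
            alive.remove(unit)               # withdraw r_i random survivors
    return observed
```

Because the units are exchangeable, withdrawing a uniformly random subset of survivors and then taking the minimum of the remainder reproduces the scheme described in the bullets above; with $R_1 = \cdots = R_m = 0$ the function simply returns the full ordered sample.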
The likelihood function may be written as [8]
$$f_{1:m:n,\dots,m:m:n}(x_{1:m:n}, x_{2:m:n}, \dots, x_{m:m:n}) = c \prod_{i=1}^{m} f(x_{i:m:n};\theta)\,[1 - F(x_{i:m:n};\theta)]^{R_i}$$
where $c = n(n - R_1 - 1)(n - R_1 - R_2 - 2)\cdots(n - R_1 - R_2 - \cdots - R_{m-1} - m + 1)$. The joint entropy contained in $(X_{1:m:n}, X_{2:m:n}, \dots, X_{i:m:n})$, i.e., in the collection of the first i progressively Type II censored OS, is defined to be
$$H_{1\cdots i:m:n} = -E\{\log f_{1:m:n,\dots,i:m:n}(X_{1:m:n}, X_{2:m:n}, \dots, X_{i:m:n})\}$$
where $f_{1:m:n,\dots,i:m:n}(x_{1:m:n}, x_{2:m:n}, \dots, x_{i:m:n})$ is the density function of $(X_{1:m:n}, X_{2:m:n}, \dots, X_{i:m:n})$. Balakrishnan et al. [9] generalized the result of Park [3] on testing exponentiality based on the Kullback-Leibler information with Type II censored data to progressively Type II censored data, and obtained an approximation to the joint entropy of progressively Type II censored samples based on nonparametric estimation. To our knowledge, however, the exact values of the joint entropy of progressively Type II censored samples have not been obtained. Several applications of entropy, such as characterization, goodness-of-fit tests based on censored data, parameter estimation and quantization theory, are known; see, for example, [3,9].
In the case of $H_{1\cdots i:m:n}$, difficulties arise from the removals as well as from the expression for $H_{1\cdots i:m:n}$ itself, which involves integration over i random variables, so simplifying the calculation of $H_{1\cdots i:m:n}$ is attractive. In this article we focus on the properties of the joint entropy of progressively Type II censored OS. In Section 2 we develop the idea of Park [2] on the decomposition of the entropy of OS to introduce an indirect approach to the decomposition of the entropy of progressively Type II censored OS. In Section 3 we derive recurrence relations for the entropy of progressively Type II censored samples, which prove helpful in its calculation. In Section 4 we derive an efficient computational method that reduces the r-dimensional integrals in the calculation of $H_{1\cdots r:m:n}$ to no integrals at all: the computation simplifies to a sum of entropies of smallest OS of varying sample sizes. In Section 5 we apply our results to compute the entropy of collections of progressively Type II censored samples from the normal and logistic distributions.

2. Decomposition of the Joint Entropy

Park [2] and Wong and Chen [4] have shown that the total entropy of an i.i.d. random sample of size n decreases if the sample is ordered. Park [2] quantified this decrease through the following identity for the entropy of the ordered data $h_{1\cdots n:n}$:
$$h_{1\cdots n:n} = n h_{1:1} - \log n!$$
In view of Equation (4), and noting that a progressively Type II censored sample can be seen as an ordered sample $(X_{1:m:n}, X_{2:m:n}, \dots, X_{m:m:n})$ with removals $(R_1, R_2, \dots, R_m)$, we have the following result for the entropy of the progressively Type II censored OS sample.
Lemma 2.1.
$$H_{1\cdots m:m:n} = n H_{1:1:1} - \log c$$
where $c = n(n - R_1 - 1)(n - R_1 - R_2 - 2)\cdots(n - R_1 - R_2 - \cdots - R_{m-1} - m + 1)$ and $H_{1:1:1} = h_{1:1} = 1 - \int_{-\infty}^{\infty} f(x)\log h(x)\,dx$, with $h(x) = f(x)/(1 - F(x))$ the hazard function.
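The constant c is a simple product over the scheme. A minimal sketch (function name ours): for the complete sample it reduces to $n!$, and under Type II censoring to $n!/(n-m)!$, consistent with the special cases noted above.

```python
import math

def log_c(n, R):
    """log of c = n (n - R_1 - 1)(n - R_1 - R_2 - 2) ... (n - R_1 - ... - R_{m-1} - (m-1)),
    the normalizing constant of the progressively Type II censored joint density."""
    total, removed = 0.0, 0
    for k, r in enumerate(R):            # k-th factor: n - (R_1+...+R_k) - k
        total += math.log(n - removed - k)
        removed += r
    return total
```

Per the paper's Lemma 2.1, the full-sample entropy is then $n H_{1:1:1} - \log c$; for the unit exponential ($H_{1:1:1} = 1$) this gives $n - \log c$.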
Since the progressively Type II censored sample forms a Markov chain [8], we have the following results.
Lemma 2.2.
  • $H_{r+1\cdots m:m:n\,|\,i\cdots r:m:n} = H_{r+1\cdots m:m:n\,|\,r:m:n}$, $i = 1, \dots, r$
  • $H_{r+1\cdots i:m:n} - H_{r+1\cdots i:m:n\,|\,r:m:n} = H_{r+1:m:n} - H_{r+1:m:n\,|\,r:m:n}$, $i = r+1, \dots, m$.
PROOF. From the Markov chain property of progressively Type II censored OS, the first part follows directly. The second part can be shown by using the first part and the symmetry of the mutual information (Csiszár [10]):
$$H_{r+1\cdots i:m:n} - H_{r+1\cdots i:m:n\,|\,r:m:n} = H_{r+1:m:n} - H_{r+1:m:n\,|\,r:m:n} = H_{r:m:n} - H_{r:m:n\,|\,r+1:m:n}$$
Next we show the following decomposition of the entropy of progressive Type II censored OS.
Lemma 2.3.
$$H_{1\cdots m:m:n} = H_{1\cdots r:m:n} + H_{r+1\cdots m:m:n\,|\,r:m:n}$$
PROOF. The result follows from the additive (chain rule) property of the entropy measure together with Lemma 2.2.
We see from Equations (5) and (6) that the entropy of the first r progressively censored data $H_{1\cdots r:m:n}$ can be obtained from $H_{r+1\cdots m:m:n\,|\,r:m:n}$. So we consider $H_{r+1\cdots m:m:n\,|\,r:m:n}$ in order to study $H_{1\cdots r:m:n}$. Let $(X_{1:m:n}, X_{2:m:n}, \dots, X_{m:m:n})$ be a progressively Type II censored sample with censoring scheme $(R_1, R_2, \dots, R_m)$. The entropy of the collection of the first i progressively Type II censored OS $(X_{1:m:n}, X_{2:m:n}, \dots, X_{i:m:n})$ is defined by Equation (3), and can be written as
$$H_{1\cdots i:m:n} = -\int_{-\infty}^{\infty}\int_{-\infty}^{x_{i:m:n}}\cdots\int_{-\infty}^{x_{2:m:n}} f_{1\cdots i:m:n}(x_{1:m:n}, \dots, x_{i:m:n})\,\log f_{1\cdots i:m:n}(x_{1:m:n}, \dots, x_{i:m:n})\, dx_{1:m:n}\cdots dx_{i:m:n}$$
where $f_{1\cdots i:m:n}(x_{1:m:n}, \dots, x_{i:m:n})$ is the joint pdf of the first i order statistics of the progressively Type II censored sample.
Using the Markov chain property of the order statistics from progressively Type II censored samples, we have the following decomposition for the score function:
$$\log f_{1\cdots m:m:n} = \log f_{1\cdots i:m:n} + \log f_{i+1\cdots m:m:n\,|\,i:m:n}$$
where $f_{i+1\cdots m:m:n\,|\,i:m:n}$ is the pdf of $(X_{i+1:m:n}, \dots, X_{m:m:n})$ given $X_{i:m:n} = x_i$. The following decomposition follows from the strong additivity of the entropy:
$$H_{1\cdots m:m:n} = H_{1\cdots i:m:n} + H_{i+1\cdots m:m:n\,|\,i:m:n}$$
where $H_{i+1\cdots m:m:n\,|\,i:m:n}$ is the average of the conditional information in $(X_{i+1:m:n}, \dots, X_{m:m:n})$ given $X_{i:m:n} = x_i$.
On the other hand, in view of the result of Balakrishnan and Aggarwala [8], $f_{i+1\cdots m:m:n\,|\,i:m:n}$ is the joint density of a progressively Type II censored sample of size $(m - i)$, with censoring scheme $(R_{i+1}, \dots, R_m)$, from $(n - \sum_{j=1}^{i} R_j - i)$ units drawn from the parent distribution $f(x)$ truncated on the left at $x_i$, with density $f(x)/(1 - F(x_i))$, $x > x_i$. Therefore $H_{i+1\cdots m:m:n\,|\,i:m:n}$ can be written as the double integral
$$H_{i+1\cdots m:m:n\,|\,i:m:n} = \Big(n - \sum_{j=1}^{i} R_j - i\Big)\int_{-\infty}^{\infty} g(w)\, f_{i:m:n}(w)\,dw - \log\Big(n - \sum_{j=1}^{i} R_j - i\Big)!$$
where
$$g(w) = -\int_{w}^{\infty} \frac{f(x;\theta)}{1 - F(w;\theta)}\,\log\frac{f(x;\theta)}{1 - F(w;\theta)}\,dx$$
and $f_{i:m:n}(x)$ is given by
$$f_{i:m:n}(x) = c_{i-1} \sum_{j=1}^{i} a_j(i)\,(1 - F(x))^{\gamma_j - 1} f(x), \quad -\infty < x < \infty, \quad 1 \le i \le m$$
where
$$\gamma_i = m - i + 1 + \sum_{j=i}^{m} R_j = n - i + 1 - \sum_{j=1}^{i-1} R_j, \qquad c_{i-1} = \prod_{j=1}^{i} \gamma_j, \qquad 1 \le i \le m$$
and
$$a_j(i) = \prod_{r=1,\, r \ne j}^{i} \frac{1}{\gamma_r - \gamma_j}, \qquad 1 \le j \le i \le m$$
Since the entropy of the complete sample $H_{1\cdots m:m:n}$ is already known from Equation (5), the entropy $H_{1\cdots i:m:n}$ can now easily be derived from Equations (6) and (8).
EXAMPLE 2.1. For the exponential density $\exp(-x)$, we can show that $g(w) = 1$, so that
$$H_{i+1\cdots m:m:n\,|\,i:m:n} = \Big(n - \sum_{j=1}^{i} R_j - i\Big)\int_{-\infty}^{\infty} f_{i:m:n}(w)\,dw - \log\Big(n - \sum_{j=1}^{i} R_j - i\Big)!$$
$$H_{i+1\cdots m:m:n\,|\,i:m:n} = \Big(n - \sum_{j=1}^{i} R_j - i\Big) - \log\Big(n - \sum_{j=1}^{i} R_j - i\Big)!$$
Thus in this case
$$H_{1\cdots i:m:n} = i + \sum_{j=1}^{i} R_j + \log\Big(n - \sum_{j=1}^{i} R_j - i\Big)! - \log n - \sum_{k=1}^{m-1} \log\Big(n - \sum_{j=1}^{k} R_j - k\Big)$$
where $H_{1:1:1} = 1$ is the entropy of a single observation from the exponential density $\exp(-x)$.
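A short sketch of the closed form above for the unit exponential (function names are ours; Equation (10) as reconstructed here). At $i = m$ the factorial term vanishes and the expression collapses to the full-sample value $n - \log c$ from Lemma 2.1, which the code checks.

```python
import math

def log_c(n, R):
    """log c = log n + sum_{k=1}^{m-1} log(n - R_1 - ... - R_k - k)."""
    total, removed = 0.0, 0
    for k, r in enumerate(R):
        total += math.log(n - removed - k)
        removed += r
    return total

def entropy_exponential(n, R, i):
    """H_{1...i:m:n} for a unit exponential parent, per Equation (10)."""
    s = sum(R[:i])                           # R_1 + ... + R_i
    return (i + s
            + math.lgamma(n - s - i + 1)     # log (n - R_1 - ... - R_i - i)!
            - log_c(n, R))
```

For example, for $n = 5$, scheme $(2,0,0)$ and $i = m = 3$ this returns $5 - \log 10$, i.e., $n - \log c$ for that scheme.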
REMARK 2.1. We note that all of Park's results concerning the entropy of the minimum order statistic $X_{1:n}$ carry over to the case of a progressively Type II censored sample, since $f_{1:n} = f_{1:m:n}$.

3. Recurrence Relations

Recurrence relations between the cdf's (pdf's) of OS and of progressively Type II censored OS have been studied by many authors for the purpose of simplifying the calculation of moments of OS and progressively Type II censored OS.
The standard recurrence relation for the moments of OS was obtained by Cole [11], and can be written as
$$n\,\mu_{r:n-1}^{(k)} = (n - r)\,\mu_{r:n}^{(k)} + r\,\mu_{r+1:n}^{(k)}$$
where $\mu_{i:j}^{(k)}$ is the kth moment of the usual OS $X_{i:j}$.
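Cole's recurrence can be checked numerically against exact uniform order-statistic moments, since $U_{r:n} \sim \mathrm{Beta}(r, n-r+1)$ gives $E[U_{r:n}^k] = \prod_{j=0}^{k-1} (r+j)/(n+1+j)$. A minimal sketch (function names ours):

```python
from math import isclose

def uniform_os_moment(r, n, k):
    """E[U_{r:n}^k] for Uniform(0,1): U_{r:n} ~ Beta(r, n - r + 1)."""
    m = 1.0
    for j in range(k):
        m *= (r + j) / (n + 1 + j)
    return m

def check_cole(n, r, k):
    """Cole's recurrence: n mu_{r:n-1}^{(k)} = (n-r) mu_{r:n}^{(k)} + r mu_{r+1:n}^{(k)}."""
    lhs = n * uniform_os_moment(r, n - 1, k)
    rhs = (n - r) * uniform_os_moment(r, n, k) + r * uniform_os_moment(r + 1, n, k)
    return isclose(lhs, rhs, rel_tol=1e-12)
```

For instance, with $n = 4$, $r = 2$, $k = 2$ both sides equal $1.2$.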
This result can be derived directly from the corresponding recurrence relation between the cdf's of OS. Kamps and Cramer ([12], Lemma 4) obtained the corresponding recurrence relation for generalized OS as
$$\Big(k + n - r - 1 + \sum_{j=1}^{n-1} m_j\Big) f_{X(r:n-1:(m_1,\dots,m_{n-1}))}(x) = \Big(k + n - r - 1 + \sum_{j=r+1}^{n-1} m_j\Big) f_{X(r:n:(m_1,\dots,m_{n-1}))}(x) + \Big(r + \sum_{j=1}^{r} m_j\Big) f_{X(r+1:n:(m_1,\dots,m_{n-1}))}(x), \quad 1 \le r \le n-1$$
Since the generalized OS include the progressively Type II censored OS, the case of progressive Type II censoring is subsumed in the above result. Setting $m_i = R_i$ for $i = 1, 2, \dots, m-1$, $m_i = 0$ for $i = m, \dots, n-1$, and $k = R_m + 1$, we have
$$\Big(m + \sum_{j=1}^{m} R_j\Big) f_{i:m:n-1} = \Big(m - i + \sum_{j=i+1}^{m} R_j\Big) f_{i:m:n} + \Big(i + \sum_{j=1}^{i} R_j\Big) f_{i+1:m:n}$$
Using Equation (14) and the decomposition of the entropy in Equation (8) we have the following results for the entropy in the progressive censoring scheme.
RELATION 3.1.
$$H_{i+1\cdots m:m:n-1\,|\,i:m:n-1} = \frac{n - \sum_{j=1}^{i} R_j - i}{m + \sum_{j=1}^{m} R_j}\, H_{i+1\cdots m:m:n\,|\,i:m:n} + \frac{\sum_{j=1}^{i} R_j + i}{m + \sum_{j=1}^{m} R_j}\, H_{i+2\cdots m:m:n\,|\,i+1:m:n} + C_1(n, m, R_1, \dots, R_i)$$
where $C_1(n, m, R_1, \dots, R_i) = \frac{1}{n}\Big\{\Big(n - \sum_{j=1}^{i} R_j - i\Big)\log\Big(n - \sum_{j=1}^{i} R_j - i\Big) - \log\Big(n - \sum_{j=1}^{i} R_j - i\Big)!\Big\}$ and $n = \sum_{j=1}^{m} R_j + m$.
PROOF. From Equation (8) we have
$$H_{i+1\cdots m:m:n-1\,|\,i:m:n-1} = \Big(n - \sum_{j=1}^{i} R_j - i - 1\Big)\int_{-\infty}^{\infty} g(w)\, f_{i:m:n-1}(w)\,dw - \log\Big(n - \sum_{j=1}^{i} R_j - i - 1\Big)!$$
On the other hand, Equation (14) yields
$$f_{i:m:n-1}(w) = \frac{m - i + \sum_{j=i+1}^{m} R_j}{m + \sum_{j=1}^{m} R_j}\, f_{i:m:n}(w) + \frac{i + \sum_{j=1}^{i} R_j}{m + \sum_{j=1}^{m} R_j}\, f_{i+1:m:n}(w)$$
Combining Equations (16) and (17), and noting that $m - i + \sum_{j=i+1}^{m} R_j = n - \sum_{j=1}^{i} R_j - i$ since $n = \sum_{j=1}^{m} R_j + m$, the relation follows.
The following relation shows that the entropy of the first i progressively Type II censored OS for sample size $n - 1$ can be obtained as a linear combination of the entropies of the first i and the first $i + 1$ progressively Type II censored OS for sample size n.
RELATION 3.2.
$$H_{1\cdots i:m:n-1} = \frac{n - \sum_{j=1}^{i} R_j - i - 1}{m + \sum_{j=1}^{m} R_j}\, H_{1\cdots i:m:n} + \frac{\sum_{j=1}^{i} R_j + i}{m + \sum_{j=1}^{m} R_j}\, H_{1\cdots i+1:m:n} + C_2(n, m, R_1, \dots, R_i)$$
where
$$C_2(n, m, R_1, \dots, R_i) = \frac{n-1}{n}\Big\{\log n + \sum_{k=1}^{m}\Big(n - \sum_{j=1}^{k} R_j - k - 1\Big)\Big\} - \log\Big(n - \sum_{j=1}^{i} R_j - i\Big)! - \Big(n - \sum_{j=1}^{i} R_j - i\Big)\log\Big(n - \sum_{j=1}^{i} R_j - i\Big)$$
PROOF. For a sample of size $n - 1$, the general decomposition of the entropy under progressive Type II censoring takes the form
$$H_{1\cdots m:m:n-1} = H_{1\cdots i:m:n-1} + H_{i+1\cdots m:m:n-1\,|\,i:m:n-1}$$
Applying RELATION 3.1 to Equation (20), we get
$$H_{1\cdots m:m:n-1} = H_{1\cdots i:m:n-1} + \frac{n - \sum_{j=1}^{i} R_j - i}{m + \sum_{j=1}^{m} R_j}\, H_{i+1\cdots m:m:n\,|\,i:m:n} + \frac{\sum_{j=1}^{i} R_j + i}{m + \sum_{j=1}^{m} R_j}\, H_{i+2\cdots m:m:n\,|\,i+1:m:n} + C_1(n, m, R_1, \dots, R_i)$$
where $C_1$ is defined above. Using Equations (5) and (6), Equation (21) can be written as
$$(n-1) H_{1:1:1} - \Big\{\log n + \sum_{k=1}^{m} \log\Big(n - \sum_{j=1}^{k} R_j - k - 1\Big)!\Big\} = H_{1\cdots i:m:n-1} + \frac{n - \sum_{j=1}^{i} R_j - i - 1}{m + \sum_{j=1}^{m} R_j}\Big\{n H_{1:1:1} - \log n - \sum_{k=1}^{m} \log\Big(n - \sum_{j=1}^{k} R_j - k\Big)! - H_{1\cdots i:m:n}\Big\} + \frac{\sum_{j=1}^{i} R_j + i}{m + \sum_{j=1}^{m} R_j}\Big\{n H_{1:1:1} - \log n - \sum_{k=1}^{m} \log\Big(n - \sum_{j=1}^{k} R_j - k\Big)! - H_{1\cdots i+1:m:n}\Big\} + C_1(n, m, R_1, \dots, R_i)$$
After some simplification the result follows.
REMARK 3.1. With $R_1 = R_2 = \cdots = R_m = 0$, all results of Section 2 and Section 3 reduce to the corresponding results for the entropy of collections of usual OS.

4. Computational Method for Calculating $H_{1\cdots i:m:n}$

In this section we provide another approach to simplifying the calculation of the entropy of a collection of progressively Type II censored OS. We reduce the r-dimensional integrals in the calculation of $H_{1\cdots r:m:n}$ to no integrals at all: the computation of the entropy of progressively Type II censored samples simplifies to a sum of entropies $h_{1:n}$ of smallest OS of varying sample sizes.
Lemma 4.1. Let $X_1, X_2, \dots, X_n$ be an i.i.d. random sample of size n from pdf $f(x)$ with cdf $F(x)$ and hazard function $h(x) = f(x)/(1 - F(x))$, and let $X_{1:n}, X_{2:n}, \dots, X_{n:n}$ be the OS corresponding to this sample. Park [2] obtained the entropy of the smallest order statistic as
$$h_{1:n} = 1 - \log n - \int_{-\infty}^{\infty} \log h(x)\, dF_{1:n}(x)$$
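For the unit exponential the hazard is $h(x) = 1$, so the integral in Lemma 4.1 vanishes and $h_{1:n} = 1 - \log n$. A small sketch (function names ours) confirms this against the direct definition $-\int f_{1:n}\log f_{1:n}$, using $f_{1:n}(x) = n e^{-nx}$ and a midpoint rule:

```python
import math

def h1n_exponential(n):
    """h_{1:n} from Lemma 4.1 for the unit exponential: hazard h(x) = 1."""
    return 1.0 - math.log(n)               # the log-hazard integral vanishes

def h1n_numeric(n, steps=200_000, upper=40.0):
    """Direct -integral of f_{1:n} log f_{1:n}, with f_{1:n}(x) = n e^{-n x}."""
    dx = upper / n / steps                 # integrate over [0, upper/n]
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx                 # midpoint rule
        f = n * math.exp(-n * x)
        total -= f * math.log(f) * dx
    return total
```

The agreement is close to machine precision here because the minimum of n unit exponentials is again exponential, with rate n.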
Theorem 4.1. Let $(X_{1:m:n}, X_{2:m:n}, \dots, X_{m:m:n})$ be a progressively Type II censored sample with censoring scheme $(R_1, R_2, \dots, R_m)$. The entropy of the collection of the first r progressively Type II censored OS $(X_{1:m:n}, X_{2:m:n}, \dots, X_{r:m:n})$ can be written as
$$H_{1\cdots r:m:n} = r - \log c^{(r)} - \sum_{s=1}^{r} c^{(s)} \sum_{i=0}^{s-1} \frac{c_{i,s-1}(R_1 + 1, \dots, R_{s-1} + 1)}{\bar{R}_i}\,\big(1 - \log \bar{R}_i - h_{1:\bar{R}_i}\big)$$
where $\bar{R}_i = (R_s^{*} + 1) + \sum_{j=s-i}^{s-1} (R_j + 1)$, $R_s^{*} = n - s - R_1 - \cdots - R_{s-1}$, $c^{(t)} = n(n - R_1 - 1)\cdots(n - R_1 - \cdots - R_{t-1} - t + 1)$, and
$$c_{i,s}(R_1, \dots, R_s) = \frac{(-1)^i}{\Big\{\prod_{j=1}^{i} \sum_{k=s-i+1}^{s-i+j} R_k\Big\}\Big\{\prod_{j=1}^{s-i} \sum_{k=j}^{s-i} R_k\Big\}}$$
in which empty products are defined to be 1.
PROOF. By the Markov chain property of progressively Type II censored samples, one can write
$$f_{1:m:n,\dots,r:m:n}(x_{1:m:n}, \dots, x_{r:m:n}) = f_{1:m:n}(x_1)\, f_{2|1:m:n}(x_2 \mid x_1)\cdots f_{r|r-1:m:n}(x_r \mid x_{r-1})$$
where $f_{i+1|i:m:n}(x_{i+1} \mid x_i)$ is the conditional pdf of $X_{i+1:m:n}$ given $X_{i:m:n} = x_i$, which is also the density of the first order statistic of a sample of size $(n - R_1 - \cdots - R_i - i)$ from the truncated density $g(x) = f(x)/(1 - F(x_i))$. Therefore, we have
$$H_{1:m:n,\dots,r:m:n} = H_{1:m:n} + H_{2|1:m:n} + \cdots + H_{r|r-1:m:n}$$
where $H_{i+1|i:m:n}$ is the expected entropy of $X_{i+1:m:n}$ given $X_{i:m:n} = x_i$, i.e.,
$$H_{i+1|i:m:n} = E\Big\{-\int_{-\infty}^{\infty} f_{i+1|i:m:n}(x \mid x_{i:m:n})\,\log f_{i+1|i:m:n}(x \mid x_{i:m:n})\,dx\Big\}$$
By Lemma 4.1, and noting that, conditional on $X_{i:m:n} = x_i$, $X_{i+1:m:n}$ has the same pdf as the first order statistic from a random sample of size $(n - R_1 - \cdots - R_i - i)$ with pdf $g(x) = f(x)/(1 - F(x_i))$, Equation (27) can be written as
$$H_{i+1|i:m:n} = 1 - \log(n - R_1 - \cdots - R_i - i) - I$$
where
$$I = E\Big\{\int_{x_{i:m:n}}^{\infty} f_{i+1|i:m:n}(x \mid x_{i:m:n})\,\log h(x)\,dx\Big\}$$
By interchanging the order of integration and noting that $X_{i:m:n} < X_{i+1:m:n}$, we have
$$I = \int_{-\infty}^{\infty}\Big\{\int_{x_i}^{\infty} f_{i+1|i}(x \mid x_i)\,\log h(x)\,dx\Big\} f_i(x_i)\,dx_i = \int_{-\infty}^{\infty} \log h(x)\Big\{\int_{-\infty}^{x} f_{i+1|i}(x \mid x_i)\, f_i(x_i)\,dx_i\Big\} dx = \int_{-\infty}^{\infty} \log h(x)\Big\{\int_{-\infty}^{x} f_{i,i+1}(x_i, x)\,dx_i\Big\} dx = \int_{-\infty}^{\infty} \log h(x)\, f_{i+1}(x)\,dx$$
Therefore Equation (28) can be written as
$$H_{i+1|i:m:n} = 1 - \log(n - R_1 - \cdots - R_i - i) - \int_{-\infty}^{\infty} \log h(x)\, f_{i+1}(x)\,dx$$
Thus, by using Equations (26) and (31), $H_{1\cdots r:m:n}$ can be expressed as a sum of single integrals as
$$H_{1\cdots r:m:n} = r - \log c^{(r)} - \sum_{i=1}^{r} \int_{-\infty}^{\infty} \log h(x)\, f_{i:m:n}(x)\,dx$$
where $c^{(r)}$ is defined above. From Theorem 1 of Balakrishnan et al. [13], we have the following relation for $f_{s:m:n}$:
$$f_{s:m:n}(x_s) = c^{(s)} \sum_{i=0}^{s-1} c_{i,s-1}(R_1 + 1, \dots, R_{s-1} + 1)\, f(x_s)\,(1 - F(x_s))^{\bar{R}_i - 1}, \quad -\infty < x_s < \infty$$
where $\bar{R}_i$, $R_s^{*}$, $c^{(s)}$ and $c_{i,s-1}(R_1 + 1, \dots, R_{s-1} + 1)$ are defined above. We re-express Equation (33) as
$$f_{s:m:n}(x_s) = c^{(s)} \sum_{i=0}^{s-1} \frac{c_{i,s-1}(R_1 + 1, \dots, R_{s-1} + 1)}{\bar{R}_i}\, f_{1:\bar{R}_i}(x_s)$$
where $f_{1:\bar{R}_i}$ is the pdf of the usual smallest order statistic in a sample of size $\bar{R}_i$. Using Equations (23) and (34) in Equation (32), the result follows.
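The mixture representation in Equation (34) is easy to sanity-check: its weights must sum to one for every s, since each $f_{1:\bar{R}_i}$ integrates to one. A sketch under our reading of the coefficients (function names ours; note that $\bar{R}_i = \gamma_{s-i}$ with the $\gamma_j$ of Equation (12)):

```python
import math

def gammas(n, R):
    """gamma_j = n - j + 1 - (R_1 + ... + R_{j-1}), j = 1..m."""
    out, removed = [], 0
    for j in range(1, len(R) + 1):
        out.append(n - j + 1 - removed)
        removed += R[j - 1]
    return out

def c_is(i, s, y):
    """c_{i,s}(y_1,...,y_s) = (-1)^i / ({prod_{j=1..i} sum_{k=s-i+1..s-i+j} y_k}
                                       {prod_{j=1..s-i} sum_{k=j..s-i} y_k})."""
    d1 = 1.0
    for j in range(1, i + 1):
        d1 *= sum(y[s - i : s - i + j])
    d2 = 1.0
    for j in range(1, s - i + 1):
        d2 *= sum(y[j - 1 : s - i])
    return (-1) ** i / (d1 * d2)

def mixture_weights(n, R, s):
    """Weights of f_{s:m:n} as a mixture of smallest-OS densities f_{1:gamma_{s-i}}."""
    g = gammas(n, R)
    y = [r + 1 for r in R[: s - 1]]        # arguments R_1+1, ..., R_{s-1}+1
    c_s = math.prod(g[:s])                 # c^{(s)} = gamma_1 ... gamma_s
    return [c_s * c_is(i, s - 1, y) / g[s - i - 1] for i in range(s)]
```

For example, for $n = 5$ and scheme $(2,0,0)$ with $s = 3$ the weights are $(2.5, -5/3, 1/6)$ on $f_{1:1}$, $f_{1:2}$, $f_{1:5}$, which indeed sum to one; the weights are signed, so this is a generalized mixture.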
We have written a program in the algebraic manipulation package MATHEMATICA [14] for computing the quantities in Theorem 4.1 and Lemma 4.1 above. For a pre-determined progressive Type II censoring scheme $(n, m, R_1, R_2, \dots, R_m)$, the program returns the numerical values of the entropy. The electronic version of the computer program can be obtained by contacting the corresponding author.
REMARK 4.1. The entropy of the smallest usual order statistic is known for many well-known distributions; see, for example, Park [2] and Asadi et al. [15].

5. Illustrative Examples

The entropy of the smallest OS $h_{1:n}$ has the expression [2]
$$h_{1:n} = 1 - \frac{1}{n} - \log n - \int_{-\infty}^{\infty} \log f(x)\, dF_{1:n}(x)$$
EXAMPLE 5.1. For the normal distribution, $f(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\{-(x - \mu)^2/2\sigma^2\}$, the entropy of the smallest OS $h_{1:n}$ takes the form
$$h_{1:n} = 1 - \frac{1}{n} - \log n + \frac{\log(2\pi)}{2} + \log\sigma + \frac{\mu_{1:n}^{(2)}}{2}$$
where $\mu_{r:n}^{(2)}$ is the second moment of $X_{r:n}$ for the standard normal distribution; see Park [2]. We use Theorem 4.1 and Equation (36) to calculate the values of $H_{1\cdots r:m:n}$ given in Table 1.
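A numerical sketch of Equation (36) with $\mu = 0$, $\sigma = 1$ (function names ours): the second moment $\mu_{1:n}^{(2)}$ is obtained by midpoint integration of $x^2\, n\varphi(x)(1-\Phi(x))^{n-1}$, with $\Phi$ built from math.erf. For $n = 5$ and $n = 10$ this reproduces the smallest-OS entries of Table 1 (1.0096 and 0.872403) to about three decimals.

```python
import math

def phi(x):                                # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):                                # standard normal cdf via erf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def second_moment_min(n, lo=-12.0, hi=12.0, steps=120_000):
    """mu^(2)_{1:n} = E[X_{1:n}^2], with f_{1:n}(x) = n phi(x)(1 - Phi(x))^{n-1}."""
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx            # midpoint rule
        total += x * x * n * phi(x) * (1.0 - Phi(x)) ** (n - 1) * dx
    return total

def h1n_normal(n, sigma=1.0):
    """Equation (36): h_{1:n} = 1 - 1/n - log n + log(2 pi)/2 + log sigma + mu2/2."""
    return (1.0 - 1.0 / n - math.log(n)
            + 0.5 * math.log(2.0 * math.pi) + math.log(sigma)
            + 0.5 * second_moment_min(n))
```

As a further check, $n = 1$ recovers the entropy of a single normal observation, $\frac{1}{2}\log(2\pi e)$.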

Discussion

Table 1 provides the values of $H_{1\cdots r:m:n}$ for n = 5, 10 and m = 3, 5 for different schemes and $r = 1, \dots, m$. The entries were computed using Theorem 4.1, Equation (36) and MATHEMATICA [14]. For $r < m$, the table gives the entropy of a collection of the first r OS from a progressively Type II censored sample. For $r = m$, the table gives the entropy of the complete progressively Type II censored sample. The table includes the cases $R_1 = R_2 = \cdots = R_{m-1} = 0$, $R_m = n - m$, which corresponds to the Type II censored sample, and $R_1 = R_2 = \cdots = R_m = 0$, $n = m$, which corresponds to the complete sample.
EXAMPLE 5.2. For the logistic distribution, $f(x) = \frac{e^{-x}}{(1 + e^{-x})^2}$, $-\infty < x < \infty$, the entropy of the smallest OS $h_{1:n}$ takes the form
$$h_{1:n} = \log\beta(1, n) - (n - 1)\big(\psi(n) - \psi(n + 1)\big) + 2\psi(n + 1) - \psi(n) - \psi(1)$$
where $\beta(a, b) = \Gamma(a)\Gamma(b)/\Gamma(a + b)$ is the beta function and $\psi(z) = \frac{d}{dz}\log\Gamma(z)$ is the digamma function; see Asadi et al. [15]. We use Theorem 4.1 and Equation (37) to calculate $H_{1\cdots r:m:n}$.
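Equation (37) only needs the digamma function at positive integer arguments, where $\psi(n) = -\gamma + \sum_{k=1}^{n-1} 1/k$ with $\gamma$ Euler's constant. A sketch (function names ours) that reproduces the smallest-OS entries of Table 2, 1.67390 at $n = 5$ and 1.62638 at $n = 10$; at $n = 1$ it gives the standard logistic entropy, 2.

```python
import math

EULER_GAMMA = 0.5772156649015329

def psi(n):
    """Digamma at a positive integer: psi(n) = -gamma + H_{n-1}."""
    return -EULER_GAMMA + sum(1.0 / k for k in range(1, n))

def h1n_logistic(n):
    """Equation (37): h_{1:n} for the standard logistic; beta(1, n) = 1/n."""
    return (math.log(1.0 / n)
            - (n - 1) * (psi(n) - psi(n + 1))
            + 2.0 * psi(n + 1) - psi(n) - psi(1))
```

Since $\psi(n) - \psi(n+1) = -1/n$, the second term simplifies to $(n-1)/n$, which is convenient for hand checks.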
Table 2 provides the values of $H_{1\cdots r:m:n}$ for n = 5, 10 and m = 3, 5 for different schemes and $r = 1, \dots, m$. The entries were computed using Theorem 4.1, Equation (37) and MATHEMATICA [14]. For $r < m$, the table gives the entropy of a collection of the first r OS from a progressively Type II censored sample. For $r = m$, the table gives the entropy of the complete progressively Type II censored sample.
Table 1. The entropy in a collection of order statistics from a progressively Type II censored sample from a normal distribution with unit standard deviation.

 n | m  | Censoring scheme      | r  | OS of progressive sample             | Entropy
 5 | 3  | (2,0,0)               | 1  | (X_{1:3:5})                          | 1.0096
 5 | 3  | (2,0,0)               | 2  | (X_{1:3:5}, X_{2:3:5})               | 2.11423
 5 | 3  | (2,0,0)               | 3  | (X_{1:3:5}, X_{2:3:5}, X_{3:3:5})    | 3.40163
 5 | 3  | (0,0,2)               | 1  | (X_{1:3:5})                          | 1.0096
 5 | 3  | (0,0,2)               | 2  | (X_{1:3:5}, X_{2:3:5})               | 1.87510
 5 | 3  | (0,0,2)               | 3  | (X_{1:3:5}, X_{2:3:5}, X_{3:3:5})    | 3.18129
 5 | 3  | (1,1,0)               | 1  | (X_{1:3:5})                          | 1.0096
 5 | 3  | (1,1,0)               | 2  | (X_{1:3:5}, X_{2:3:5})               | 1.97448
 5 | 3  | (1,1,0)               | 3  | (X_{1:3:5}, X_{2:3:5}, X_{3:3:5})    | 3.27328
 5 | 5  | (0,0,0,0,0)           | 1  | (X_{1:5})                            | 1.0096
 5 | 5  | (0,0,0,0,0)           | 2  | (X_{1:5}, X_{2:5})                   | 1.8751
 5 | 5  | (0,0,0,0,0)           | 3  | (X_{1:5}, X_{2:5}, X_{3:5})          | 2.76331
 5 | 5  | (0,0,0,0,0)           | 5  | (X_{1:5}, ..., X_{5:5})              | 5.01551
10 | 5  | (0,0,0,0,5)           | 1  | (X_{1:5:10})                         | 0.872403
10 | 5  | (0,0,0,0,5)           | 2  | (X_{1:5:10}, X_{2:5:10})             | 1.52808
10 | 5  | (0,0,0,0,5)           | 3  | (X_{1:5:10}, X_{2:5:10}, X_{3:5:10}) | 2.12015
10 | 5  | (0,0,0,0,5)           | 4  | (X_{1:5:10}, ..., X_{4:5:10})        | 2.69953
10 | 5  | (0,0,0,0,5)           | 5  | (X_{1:5:10}, ..., X_{5:5:10})        | 3.29964
10 | 5  | (5,0,0,0,0)           | 1  | (X_{1:5:10})                         | 0.872403
10 | 5  | (5,0,0,0,0)           | 2  | (X_{1:5:10}, X_{2:5:10})             | 1.78936
10 | 5  | (5,0,0,0,0)           | 3  | (X_{1:5:10}, X_{2:5:10}, X_{3:5:10}) | 2.69933
10 | 5  | (5,0,0,0,0)           | 4  | (X_{1:5:10}, ..., X_{4:5:10})        | 3.70963
10 | 5  | (5,0,0,0,0)           | 5  | (X_{1:5:10}, ..., X_{5:5:10})        | 4.96641
10 | 5  | (3,2,0,0,0)           | 1  | (X_{1:5:10})                         | 0.872403
10 | 5  | (3,2,0,0,0)           | 2  | (X_{1:5:10}, X_{2:5:10})             | 1.65948
10 | 5  | (3,2,0,0,0)           | 3  | (X_{1:5:10}, X_{2:5:10}, X_{3:5:10}) | 2.58920
10 | 5  | (3,2,0,0,0)           | 4  | (X_{1:5:10}, ..., X_{4:5:10})        | 3.60857
10 | 5  | (3,2,0,0,0)           | 5  | (X_{1:5:10}, ..., X_{5:5:10})        | 4.86899
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 1  | (X_{1:10})                           | 0.872403
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 2  | (X_{1:10}, X_{2:10})                 | 1.52808
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 3  | (X_{1:10}, X_{2:10}, X_{3:10})       | 2.12015
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 4  | (X_{1:10}, ..., X_{4:10})            | 2.69953
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 5  | (X_{1:10}, ..., X_{5:10})            | 3.29974
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 6  | (X_{1:10}, ..., X_{6:10})            | 3.95362
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 7  | (X_{1:10}, ..., X_{7:10})            | 4.57641
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 8  | (X_{1:10}, ..., X_{8:10})            | 5.50748
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 10 | (X_{1:10}, ..., X_{10:10})           | 7.62962
Table 2. The entropy in a collection of order statistics from a progressively Type II censored sample from the logistic distribution.

 n | m  | Censoring scheme      | r  | OS of progressive sample             | Entropy
 5 | 3  | (2,0,0)               | 1  | (X_{1:3:5})                          | 1.67390
 5 | 3  | (2,0,0)               | 2  | (X_{1:3:5}, X_{2:3:5})               | 3.30409
 5 | 3  | (2,0,0)               | 3  | (X_{1:3:5}, X_{2:3:5}, X_{3:3:5})    | 5.18643
 5 | 3  | (0,0,2)               | 1  | (X_{1:3:5})                          | 1.67390
 5 | 3  | (0,0,2)               | 2  | (X_{1:3:5}, X_{2:3:5})               | 3.07589
 5 | 3  | (0,0,2)               | 3  | (X_{1:3:5}, X_{2:3:5}, X_{3:3:5})    | 4.95024
 5 | 3  | (1,1,0)               | 1  | (X_{1:3:5})                          | 1.6739
 5 | 3  | (1,1,0)               | 2  | (X_{1:3:5}, X_{2:3:5})               | 3.16708
 5 | 3  | (1,1,0)               | 3  | (X_{1:3:5}, X_{2:3:5}, X_{3:3:5})    | 5.04210
 5 | 5  | (0,0,0,0,0)           | 1  | (X_{1:5})                            | 1.67390
 5 | 5  | (0,0,0,0,0)           | 2  | (X_{1:5}, X_{2:5})                   | 3.07587
 5 | 5  | (0,0,0,0,0)           | 3  | (X_{1:5}, X_{2:5}, X_{3:5})          | 4.46788
 5 | 5  | (0,0,0,0,0)           | 5  | (X_{1:5}, ..., X_{5:5})              | 7.9208
10 | 5  | (0,0,0,0,5)           | 1  | (X_{1:5:10})                         | 1.62638
10 | 5  | (0,0,0,0,5)           | 2  | (X_{1:5:10}, X_{2:5:10})             | 2.89455
10 | 5  | (0,0,0,0,5)           | 3  | (X_{1:5:10}, X_{2:5:10}, X_{3:5:10}) | 4.03011
10 | 5  | (0,0,0,0,5)           | 4  | (X_{1:5:10}, ..., X_{4:5:10})        | 5.11611
10 | 5  | (0,0,0,0,5)           | 5  | (X_{1:5:10}, ..., X_{5:5:10})        | 6.20260
10 | 5  | (5,0,0,0,0)           | 1  | (X_{1:5:10})                         | 1.62638
10 | 5  | (5,0,0,0,0)           | 2  | (X_{1:5:10}, X_{2:5:10})             | 3.10523
10 | 5  | (5,0,0,0,0)           | 3  | (X_{1:5:10}, X_{2:5:10}, X_{3:5:10}) | 4.52211
10 | 5  | (5,0,0,0,0)           | 4  | (X_{1:5:10}, ..., X_{4:5:10})        | 6.059990
10 | 5  | (5,0,0,0,0)           | 5  | (X_{1:5:10}, ..., X_{5:5:10})        | 7.96778
10 | 5  | (3,2,0,0,0)           | 1  | (X_{1:5:10})                         | 1.62638
10 | 5  | (3,2,0,0,0)           | 2  | (X_{1:5:10}, X_{2:5:10})             | 2.99964
10 | 5  | (3,2,0,0,0)           | 3  | (X_{1:5:10}, X_{2:5:10}, X_{3:5:10}) | 4.43974
10 | 5  | (3,2,0,0,0)           | 4  | (X_{1:5:10}, ..., X_{4:5:10})        | 5.97958
10 | 5  | (3,2,0,0,0)           | 5  | (X_{1:5:10}, ..., X_{5:5:10})        | 7.88009
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 1  | (X_{1:10})                           | 1.62638
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 2  | (X_{1:10}, X_{2:10})                 | 2.89455
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 3  | (X_{1:10}, X_{2:10}, X_{3:10})       | 4.03011
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 4  | (X_{1:10}, ..., X_{4:10})            | 5.11611
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 5  | (X_{1:10}, ..., X_{5:10})            | 6.20260
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 6  | (X_{1:10}, ..., X_{6:10})            | 7.33016
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 7  | (X_{1:10}, ..., X_{7:10})            | 8.54041
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 8  | (X_{1:10}, ..., X_{8:10})            | 9.88642
10 | 10 | (0,0,0,0,0,0,0,0,0,0) | 10 | (X_{1:10}, ..., X_{10:10})           | 13.44020

Acknowledgements

The author would like to express deep thanks to the Editor-in-Chief and the referees for their helpful comments and suggestions which led to a considerable improvement in the presentation of this paper.

References

  1. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: Hoboken, NJ, USA, 2005.
  2. Park, S. The entropy of consecutive order statistics. IEEE Trans. Inform. Theor. 1995, 41, 2003–2007.
  3. Park, S. Testing exponentiality based on the Kullback-Leibler information with the type II censored data. IEEE Trans. Reliab. 2005, 54, 22–26.
  4. Wong, K.M.; Chen, S. The entropy of ordered sequences and order statistics. IEEE Trans. Inform. Theor. 1990, 36, 276–284.
  5. Ebrahimi, N.; Soofi, E.S.; Zahedi, H. Information properties of order statistics and spacings. IEEE Trans. Inform. Theor. 2004, 50, 177–183.
  6. Nelson, W. Applied Life Data Analysis; John Wiley and Sons: New York, NY, USA, 1982.
  7. Balakrishnan, N. Progressive censoring methodology: An appraisal (with discussions). Test 2007, 16, 211–259.
  8. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Birkhäuser: Boston, MA, USA, 2000; p. 15.
  9. Balakrishnan, N.; Habibi Rad, A.; Arghami, N.R. Testing exponentiality based on Kullback-Leibler information with progressively Type-II censored data. IEEE Trans. Reliab. 2007, 56, 301–307.
  10. Csiszár, I. Information Theory; Academic Press: London, UK, 1981; p. 170.
  11. Cole, R.H. Relations between moments of order statistics. Ann. Math. Stat. 1951, 22, 308–310.
  12. Kamps, U.; Cramer, E. On distributions of generalized order statistics. Statistics 2000, 35, 269–280.
  13. Balakrishnan, N.; Childs, A.; Chandrasekar, B. An efficient computational method for moments of order statistics under progressive Type-II censoring. Stat. Probab. Lett. 2002, 60, 359–365.
  14. Mathematica; Version 7.0; Wolfram Research, Inc.: Champaign, IL, USA, 2008.
  15. Asadi, M.; Ebrahimi, N.; Hamedani, G.G.; Soofi, E. Information measures of Pareto distributions and order statistics. In Advances on Distribution Theory, Order Statistics and Inference; Balakrishnan, N., Castillo, E., Sarabia, J.M., Eds.; Birkhäuser: Boston, MA, USA, 2006.
