Article

Normal Laws for Two Entropy Estimators on Infinite Alphabets

Chen Chen, Michael Grabchak, Ann Stewart, Jialin Zhang and Zhiyi Zhang
Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
* Author to whom correspondence should be addressed.
Entropy 2018, 20(5), 371; https://doi.org/10.3390/e20050371
Submission received: 3 April 2018 / Revised: 9 May 2018 / Accepted: 10 May 2018 / Published: 17 May 2018

Abstract

This paper offers sufficient conditions for the Miller–Madow estimator and the jackknife estimator of entropy to have respective asymptotic normalities on countably infinite alphabets.

1. Introduction

Let $\mathscr{X} = \{\ell_k; k \ge 1\}$ be a finite or countably infinite alphabet, let $\mathbf{p} = \{p_k; k \ge 1\}$ be a probability distribution on $\mathscr{X}$, and define $K = \sum_{k \ge 1} 1[p_k > 0]$, where $1[\cdot]$ is the indicator function, to be the effective cardinality of $\mathscr{X}$ under $\mathbf{p}$. An important quantity associated with $\mathbf{p}$ is entropy, which is defined by [1] as
$$H = -\sum_{k \ge 1} p_k \ln p_k.$$
Here and throughout, we adopt the convention that $0 \ln 0 = 0$.
Many properties of entropy and related quantities are discussed in [2]. The problem of statistical estimation of entropy has a long history (see the survey paper [3] or the recent book [4]). It is well known that no unbiased estimator of entropy exists, and, for this reason, much energy has been focused on deriving estimators with relatively little bias (see [5] and the references therein for a discussion of some, but far from all, of these). Perhaps the most commonly used estimator is the plug-in. Its theoretical properties have been studied going back, at least, to [6], where conditions for consistency and asymptotic normality, in the case of finite alphabets, were derived. It would be almost fifty years before corresponding conditions for the countably infinite case would appear in the literature. Specifically, consistency, both in terms of almost sure and $L_2$ convergence, was verified in [7]. Later, sufficient conditions for asymptotic normality were derived in two steps in [3,8].
Despite its simple form and nice theoretical properties, the plug-in suffers from large finite sample bias, which has led to the development of modifications that aim to reduce this bias. Two of the most popular are the Miller–Madow estimator of [6] and the jackknife estimator of [9]. The theoretical properties of these estimators have not been studied as extensively in the literature. In this paper, we give sufficient conditions for the asymptotic normality of these two estimators. This is important for deriving confidence intervals and hypothesis tests, and it immediately implies consistency (see, e.g., [4]).
We begin by introducing some notation. We say that a distribution $\mathbf{p} = \{p_k; k \ge 1\}$ is uniform if and only if its effective cardinality $K < \infty$ and, for each $k = 1, 2, \ldots$, either $p_k = 1/K$ or $p_k = 0$. We write $f \sim g$ to denote $\lim_{n\to\infty} f(n)/g(n) = 1$ and $f = O(g(n))$ to denote $\limsup_{n\to\infty} f(n)/g(n) < \infty$. Furthermore, we write $\xrightarrow{L}$ to denote convergence in law and $\xrightarrow{p}$ to denote convergence in probability. If $a$ and $b$ are real numbers, we write $a \vee b$ to denote the maximum of $a$ and $b$. When it is not specified, all limits are taken as $n \to \infty$.
Let $X_1, \ldots, X_n$ be independent and identically distributed (iid) random variables on $\mathscr{X}$ under $\mathbf{p}$. Let $\{Y_k; k \ge 1\}$ be the observed letter counts in the sample, i.e., $Y_k = \sum_{i=1}^{n} 1[X_i = \ell_k]$, and let $\hat{\mathbf{p}} = \{\hat{p}_k; k \ge 1\}$, where $\hat{p}_k = Y_k/n$, be the corresponding relative frequencies. Perhaps the most intuitive estimator of $H$ is the plug-in, which is given by
$$\hat{H} = -\sum_{k \ge 1} \hat{p}_k \ln \hat{p}_k.$$
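As a concrete illustration (this sketch and the name plugin_entropy are ours, not part of the original paper), the plug-in can be computed directly from the observed letter counts; letters that do not appear contribute nothing, in keeping with the convention $0\ln 0 = 0$.

```python
import numpy as np

def plugin_entropy(sample):
    """Plug-in estimator: H_hat = -sum_k p_hat_k * ln(p_hat_k),
    where p_hat_k = Y_k / n are the observed relative frequencies."""
    _, counts = np.unique(sample, return_counts=True)
    p_hat = counts / counts.sum()
    return -np.sum(p_hat * np.log(p_hat))

# Example: n = 1000 draws from a geometric(0.5) source, whose entropy is 2 ln 2 ≈ 1.386.
rng = np.random.default_rng(0)
print(plugin_entropy(rng.geometric(0.5, size=1000)))  # typically slightly below 1.386
```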
When the effective cardinality, $K$, is finite, [10] showed that the bias of $\hat{H}$ is
$$E(\hat{H}) - H = -\frac{K-1}{2n} + \frac{1}{12n^2}\left(1 - \sum_{k=1}^{K} \frac{1}{p_k}\right) + O\!\left(n^{-3}\right). \tag{2}$$
One of the simplest and earliest approaches aiming to reduce the bias of $\hat{H}$ is to estimate the first-order term. Specifically, let $\hat{m} = \sum_{k \ge 1} 1[Y_k > 0]$ be the number of distinct letters observed in the sample and consider an estimator of the form
$$\hat{H}_{MM} = \hat{H} + \frac{\hat{m} - 1}{2n}.$$
This estimator is often attributed to [6] and is known as the Miller–Madow estimator. Note that, for finite $K$,
$$E\left(\frac{\hat{m}-1}{2n}\right) = \frac{K-1}{2n} - \frac{\sum_{k}(1-p_k)^n}{2n}.$$
Since $\sum_{k}(1-p_k)^n \le K(1-p_{\min})^n$, where $p_{\min} = \min\{p_k : p_k > 0\}$, decays exponentially fast, it follows that, for finite $K$, the bias of $\hat{H}_{MM}$ is
$$E(\hat{H}_{MM}) - H = \frac{1}{12n^2}\left(1 - \sum_{k=1}^{K}\frac{1}{p_k}\right) + O\!\left(n^{-3}\right).$$
Among the many estimators in the literature aimed at reducing bias in entropy estimation, the Miller–Madow estimator is one of the most commonly used. Its popularity is due to its simplicity, its intuitive appeal, and, more importantly, its good performance across a wide range of different distributions including those on countably infinite alphabets. See, for instance, the simulation study in [5].
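In code, the Miller–Madow correction is a one-line adjustment to the plug-in; the following minimal sketch (ours, with the hypothetical name miller_madow_entropy) adds $(\hat{m}-1)/(2n)$, where $\hat{m}$ is the number of distinct letters observed.

```python
import numpy as np

def miller_madow_entropy(sample):
    """Miller–Madow estimator: plug-in plus (m_hat - 1) / (2n)."""
    _, counts = np.unique(sample, return_counts=True)
    n = counts.sum()
    p_hat = counts / n
    h_plugin = -np.sum(p_hat * np.log(p_hat))
    m_hat = len(counts)                      # number of distinct letters observed
    return h_plugin + (m_hat - 1) / (2 * n)
```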
The jackknife entropy estimator is another commonly used estimator designed to reduce the bias of the plug-in. It is calculated in three steps:
  • for each $i \in \{1, 2, \ldots, n\}$, construct $\hat{H}_{(i)}$, the plug-in estimator based on the sub-sample of size $n-1$ obtained by leaving the $i$th observation out;
  • obtain $\hat{H}^{(i)} = n\hat{H} - (n-1)\hat{H}_{(i)}$ for $i = 1, \ldots, n$; and then
  • compute the jackknife estimator
$$\hat{H}_{JK} = \frac{\sum_{i=1}^{n}\hat{H}^{(i)}}{n}. \tag{5}$$
Equivalently, (5) can be written as
$$\hat{H}_{JK} = n\hat{H} - (n-1)\frac{\sum_{i=1}^{n}\hat{H}_{(i)}}{n}.$$
The jackknife estimator of entropy was first described by [9]. From (2), it may be verified that, when $K < \infty$, the bias of $\hat{H}_{JK}$ is
$$E\left(\hat{H}_{JK}\right) - H = O\!\left(n^{-2}\right).$$
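The jackknife estimator can be computed without literally looping over all $n$ leave-one-out samples: $\hat{H}_{(i)}$ depends only on the letter of the deleted observation, so one leave-one-out entropy per observed letter suffices. The sketch below (ours; the name jackknife_entropy is not from the paper) implements the equivalent form displayed above in this way.

```python
import numpy as np

def jackknife_entropy(sample):
    """Jackknife estimator: H_JK = n * H_hat - (n - 1) * mean_i H_hat_(i).
    H_hat_(i) depends only on the letter of observation i, so it is computed
    once per observed letter and weighted by that letter's count."""
    _, counts = np.unique(sample, return_counts=True)
    n = counts.sum()
    p_hat = counts / n
    h_hat = -np.sum(p_hat * np.log(p_hat))

    loo_sum = 0.0                            # accumulates sum_i H_hat_(i)
    for k, y_k in enumerate(counts):
        c = counts.astype(float)
        c[k] -= 1.0                          # remove one observation of letter k
        q = c[c > 0] / (n - 1)
        h_loo = -np.sum(q * np.log(q))       # plug-in on the size-(n-1) sub-sample
        loo_sum += y_k * h_loo               # y_k of the n sub-samples share this value
    return n * h_hat - (n - 1) * loo_sum / n
```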
Both the Miller–Madow and the jackknife estimators are adjusted versions of the plug-in. When the effective cardinality is finite, i.e., $K < \infty$, the asymptotic normality of both can be easily verified. A question of theoretical interest is whether these normalities still hold when the effective cardinality is countably infinite. In this paper, we give sufficient conditions for $\sqrt{n}(\hat{H}_{MM} - H)$ and $\sqrt{n}(\hat{H}_{JK} - H)$ to be asymptotically normal on countably infinite alphabets and provide several illustrative examples. The rest of the paper is organized as follows. Our main results for both the Miller–Madow and the jackknife estimators are given in Section 2. A small simulation study is given in Section 3. This is followed by a brief discussion in Section 4. Proofs are postponed to Section 5.

2. Main Results

We begin by recalling a sufficient condition due to [8] for the asymptotic normality of the plug-in estimator.
Condition 1.
The distribution $\mathbf{p} = \{p_k; k \ge 1\}$ satisfies
$$\sum_{k \ge 1} p_k \ln^2 p_k < \infty, \tag{8}$$
and there exists an integer-valued function $K(n)$ such that, as $n \to \infty$,
1. $K(n) \to \infty$,
2. $K(n)/n \to 0$, and
3. $\sqrt{n}\sum_{k \ge K(n)} p_k \ln p_k \to 0$.
Note that, by Jensen's inequality (see, e.g., [2]), (8) implies that
$$H^2 = \left(-\sum_{k \ge 1} p_k \ln p_k\right)^2 \le \sum_{k \ge 1} p_k \ln^2 p_k < \infty,$$
where equality holds, i.e., $H^2 = \sum_{k \ge 1} p_k \ln^2 p_k$, if and only if $\mathbf{p}$ is a uniform distribution. Thus, when (8) holds, we have $H < \infty$. The following result is given in [8].
Lemma 1.
Let $\mathbf{p} = \{p_k; k \ge 1\}$ be a distribution that is not uniform, and set
$$\sigma^2 = \sum_{k \ge 1} p_k \ln^2 p_k - H^2 \qquad\text{and}\qquad \hat{\sigma}^2 = \sum_{k \ge 1} \hat{p}_k \ln^2 \hat{p}_k - \hat{H}^2. \tag{9}$$
If $\mathbf{p}$ satisfies Condition 1, then $\hat{\sigma} \xrightarrow{p} \sigma$,
$$\frac{\sqrt{n}\left(\hat{H} - H\right)}{\sigma} \xrightarrow{L} N(0,1),$$
and
$$\frac{\sqrt{n}\left(\hat{H} - H\right)}{\hat{\sigma}} \xrightarrow{L} N(0,1).$$
The following is useful for checking when Condition 1 holds.
Lemma 2.
Let $\mathbf{p} = \{p_k; k \ge 1\}$ and $\mathbf{p}' = \{p'_k; k \ge 1\}$ be two distributions and assume that $\mathbf{p}'$ satisfies Condition 1. If there exists a $C > 0$ such that, for large enough $k$,
$$p_k \le C p'_k,$$
then $\mathbf{p}$ satisfies Condition 1 as well.
In [8], it is shown that Condition 1 holds for $\mathbf{p} = \{p_k; k \ge 1\}$ with
$$p_k = \frac{C}{k^2 \ln^2 k}, \qquad k = 2, 3, \ldots,$$
where $C > 0$ is a normalizing constant. It follows from Lemma 2 that any distribution with tails lighter than this satisfies Condition 1 as well.
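Condition 1 can also be explored numerically. The sketch below (ours, not from the paper) truncates the distribution $p_k \propto 1/(k^2\ln^2 k)$ at a large cutoff and evaluates $\sqrt{n}\sum_{k \ge K(n)} p_k|\ln p_k|$ for the admissible choice $K(n) = \lfloor\sqrt{n}\rfloor$; the quantity decreases, though only at the slow rate of roughly $1/\ln n$.

```python
import numpy as np

M = 10**6                                   # truncation point for the infinite sums
k = np.arange(2, M, dtype=float)
w = 1.0 / (k**2 * np.log(k)**2)
C = 1.0 / w.sum()                           # (approximate) normalizing constant
p = C * w

def tail_term(n):
    """sqrt(n) * sum_{k >= K(n)} p_k |ln p_k| with K(n) = floor(sqrt(n))."""
    tail = p[k >= int(np.sqrt(n))]
    return np.sqrt(n) * np.sum(tail * np.abs(np.log(tail)))

for n in [10**2, 10**3, 10**4, 10**5, 10**6]:
    print(n, round(tail_term(n), 4))        # slowly decreasing, consistent with item 3
```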
We are interested in finding conditions under which the result of Lemma 1 can be extended to bias-adjusted modifications of $\hat{H}$. Let $\hat{H}^*$ be any bias-adjusted estimator of the form
$$\hat{H}^* = \hat{H} + \hat{B}^*, \tag{10}$$
where $\hat{B}^*$ is an estimate of the bias. Combining Lemma 1 with Slutsky's theorem immediately gives the following.
Theorem 1.
Let $\mathbf{p} = \{p_k; k \ge 1\}$ be a distribution that is not uniform, and let $\sigma^2$ and $\hat{\sigma}^2$ be as in (9). If Condition 1 holds and $\sqrt{n}\,\hat{B}^* \xrightarrow{p} 0$, then $\hat{\sigma} \xrightarrow{p} \sigma$,
$$\frac{\sqrt{n}\left(\hat{H}^* - H\right)}{\sigma} \xrightarrow{L} N(0,1),$$
and
$$\frac{\sqrt{n}\left(\hat{H}^* - H\right)}{\hat{\sigma}} \xrightarrow{L} N(0,1).$$
For the Miller–Madow estimator and the jackknife estimator, respectively, the bias correction term, $\hat{B}^*$, in (10) takes the form
$$\text{Miller–Madow:}\ \ \hat{B}_{MM} = \frac{\hat{m}-1}{2n}, \qquad \text{Jackknife:}\ \ \hat{B}_{JK} = \frac{n-1}{n}\sum_{i=1}^{n}\left(\hat{H} - \hat{H}_{(i)}\right).$$
Below, we give sufficient conditions for when $\sqrt{n}\,\hat{B}_{MM} \xrightarrow{p} 0$ and when $\sqrt{n}\,\hat{B}_{JK} \xrightarrow{p} 0$.

2.1. Results for the Miller–Madow Estimator

Condition 2.
The distribution $\mathbf{p} = \{p_k; k \ge 1\}$ satisfies, for sufficiently large $k$,
$$p_k \le \frac{1}{a(k)\,b(k)\,k^3}, \tag{11}$$
where $a(k) > 0$ and $b(k) > 0$ are two sequences such that
1. $a(k) \to \infty$ as $k \to \infty$, and, furthermore,
   (a) the function $a(k)$ is eventually nondecreasing, and
   (b) there exists an $\varepsilon > 0$ such that
$$\limsup_{k \to \infty} \frac{(a(k))^{2\varepsilon}}{a\!\left(\frac{\sqrt{k}}{(a(k))^{\varepsilon}}\right)} < \infty; \tag{12}$$
2. the sequence $b(k)$ satisfies
$$\sum_{k \ge 1} \frac{1}{k\,b(k)} < \infty. \tag{13}$$
Since this condition only requires that $p_k$, for sufficiently large $k$, is upper bounded in the appropriate way, we immediately get the following.
Lemma 3.
Let $\mathbf{p} = \{p_k; k \ge 1\}$ and $\mathbf{p}' = \{p'_k; k \ge 1\}$ be two distributions and assume that $\mathbf{p}'$ satisfies Condition 2. If there exists a $C > 0$ such that, for large enough $k$,
$$p_k \le C p'_k,$$
then $\mathbf{p}$ satisfies Condition 2 as well.
We now give our main results for the Miller–Madow Estimator.
Theorem 2.
Let $\mathbf{p} = \{p_k; k \ge 1\}$ be a distribution that is not uniform, and let $\sigma^2$ and $\hat{\sigma}^2$ be as in (9). If Condition 2 holds, then $\hat{\sigma} \xrightarrow{p} \sigma$,
$$\frac{\sqrt{n}\left(\hat{H}_{MM} - H\right)}{\sigma} \xrightarrow{L} N(0,1)$$
and
$$\frac{\sqrt{n}\left(\hat{H}_{MM} - H\right)}{\hat{\sigma}} \xrightarrow{L} N(0,1).$$
In the proof of the theorem, we will show that Condition 2 implies Condition 1. Condition 2 requires $p_k$ to decay slightly faster than $k^{-3}$, by the two factors $1/a(k)$ and $1/b(k)$, where $a(k)$ and $b(k)$ satisfy (12) and (13), respectively. While the implication of (13) for $b(k)$ is clear, that of (12) for $a(k)$ is much less so. To better understand (12), we give an important situation where it holds. Consider the case $a(n) = \ln n$. In this case, for any $\varepsilon \in (0, 0.5)$,
$$\frac{(a(n))^{2\varepsilon}}{a\!\left(\frac{\sqrt{n}}{(a(n))^{\varepsilon}}\right)} = \frac{(\ln n)^{2\varepsilon}}{0.5\ln n - \varepsilon\ln\ln n} \sim \frac{(\ln n)^{2\varepsilon}}{0.5\ln n} \to 0.
$$
We now give a more general situation, which shows just how slowly $a(k)$ can grow. First, we recall the iterated logarithm function. Define $\ln_{(r)}(x)$, recursively for sufficiently large $x > 0$, by $\ln_{(0)}(x) = x$ and $\ln_{(r)}(x) = \ln\left(\ln_{(r-1)}(x)\right)$ for $r \ge 1$. By induction, it can be shown that $\frac{d}{dx}\ln_{(r)}(x) = \left(\prod_{i=0}^{r-1}\ln_{(i)}(x)\right)^{-1}$ for $r \ge 1$.
Lemma 4.
The function $a(n) = \ln_{(r)}(n)$ satisfies (12) with $\varepsilon = 0.5$ for any $r \ge 2$.
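A quick numerical check (ours, not from the paper) of (12) for the slowly growing choice $a(k) = \ln\ln k$ with $\varepsilon = 0.5$: the ratio stays bounded and drifts toward 1 as $k$ grows.

```python
import numpy as np

def ratio(k, eps=0.5):
    """(a(k))^{2*eps} / a(sqrt(k) / (a(k))^eps) with a(k) = ln(ln(k))."""
    a = np.log(np.log(k))
    return a**(2 * eps) / np.log(np.log(np.sqrt(k) / a**eps))

for k in [1e6, 1e12, 1e24, 1e100]:
    print(f"k = {k:.0e}: ratio = {ratio(k):.3f}")   # approximately 1.41, 1.29, 1.22, 1.15
```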
We now give three examples.
Example 1.
Let $\mathbf{p} = \{p_k; k \ge 1\}$ be such that, for sufficiently large $k$,
$$p_k \le \frac{C}{k^3 (\ln k)(\ln\ln k)^{2+\varepsilon}},$$
where $\varepsilon > 0$ and $C > 0$ are fixed constants. In this case, Condition 2 holds with $a(k) = \ln\ln k$ and $b(k) = (\ln k)(\ln\ln k)^{1+\varepsilon}/C$ in (11).
We can consider a more general form, which allows for even heavier tails.
Example 2.
Let $r \ge 2$ be an integer and let $\mathbf{p} = \{p_k; k \ge 1\}$ be such that, for sufficiently large $k$,
$$p_k \le \frac{C}{k^3 \left(\prod_{i=1}^{r-1}\ln_{(i)}k\right)\left(\ln_{(r)}k\right)^{2+\varepsilon}},$$
where $\varepsilon > 0$ and $C > 0$ are fixed constants. In this case, Condition 2 holds with $a(k) = \ln_{(r)}k$ and $b(k) = \left(\prod_{i=1}^{r-1}\ln_{(i)}k\right)\left(\ln_{(r)}k\right)^{1+\varepsilon}/C$ in (11). The fact that $b(k)$ satisfies (13) follows by the integral test for convergence.
It follows from Lemma 3 that any distribution with tails lighter than those in this example must satisfy Condition 2. On the other hand, the tails cannot get too much heavier.
Example 3.
Let $\mathbf{p} = \{p_k; k \ge 1\}$ be such that $p_k = Ck^{-3}$, where $C > 0$ is a normalizing constant. In this case, Condition 2 does not hold. However, Condition 1 does hold.

2.2. Results for the Jackknife Estimator

For any distribution $\mathbf{p}$, let $B_n = E(\hat{H}) - H$ be the bias of the plug-in based on a sample of size $n$.
Condition 3.
The distribution $\mathbf{p} = \{p_k; k \ge 1\}$ satisfies
$$\lim_{n \to \infty} n^{3/2}\left(B_n - B_{n-1}\right) = 0.$$
Theorem 3.
Let $\mathbf{p} = \{p_k; k \ge 1\}$ be a distribution that is not uniform, and let $\sigma^2$ and $\hat{\sigma}^2$ be as in (9). If Conditions 1 and 3 hold, then $\hat{\sigma} \xrightarrow{p} \sigma$,
$$\frac{\sqrt{n}\left(\hat{H}_{JK} - H\right)}{\sigma} \xrightarrow{L} N(0,1)$$
and
$$\frac{\sqrt{n}\left(\hat{H}_{JK} - H\right)}{\hat{\sigma}} \xrightarrow{L} N(0,1).$$
It is not clear to us whether Conditions 1 and 3 are equivalent or, if not, which is more stringent. For that reason, in the statement of Theorem 3, both conditions are imposed. The proof of the theorem uses the following lemma, which gives some insight into $\hat{B}_{JK}$ and Condition 3.
Lemma 5.
For any probability distribution $\mathbf{p} = \{p_k; k \ge 1\}$, we have
$$\hat{B}_{JK} = \frac{n-1}{n}\sum_{i=1}^{n}\left(\hat{H} - \hat{H}_{(i)}\right) \ge 0$$
and
$$E\left(\hat{B}_{JK}\right) = (n-1)\left(B_n - B_{n-1}\right) \ge 0.$$
We now give a condition that implies Condition 3 and tends to be easier to check.
Proposition 1.
If the distribution $\mathbf{p} = \{p_k; k \ge 1\}$ is such that there exists an $\varepsilon \in (1/2, 1)$ with $\sum_{k \ge 1} p_k^{1-\varepsilon} < \infty$, then Condition 3 is satisfied.
We now give an example where this holds.
Example 4.
Let $\mathbf{p} = \{Ck^{-(2+\delta)}; k \ge 1\}$, where $\delta > 0$ is fixed and $C > 0$ is a normalizing constant. In this case, the assumption of Proposition 1 holds and thus Condition 3 is satisfied.
To see that the assumption of Proposition 1 holds in this case, fix $\varepsilon \in (1/2, (1+\delta)/(2+\delta))$. Note that $-(1+\delta/2) < -(2+\delta)(1-\varepsilon) < -1$, and thus
$$\sum_{k \ge 1} p_k^{1-\varepsilon} = C^{1-\varepsilon}\sum_{k \ge 1} k^{-(2+\delta)(1-\varepsilon)} < \infty.$$

3. Simulations

The main application of the asymptotic normality results given in this paper is the construction of asymptotic confidence intervals and hypothesis tests. For instance, if $\mathbf{p}$ satisfies the assumptions of Theorem 2, then an asymptotic $(1-\alpha)100\%$ confidence interval for $H$ is given by
$$\left(\hat{H}_{MM} - z_{\alpha/2}\frac{\hat{\sigma}}{\sqrt{n}},\ \ \hat{H}_{MM} + z_{\alpha/2}\frac{\hat{\sigma}}{\sqrt{n}}\right),$$
where $z_{\alpha/2}$ is the number such that $P(Z > z_{\alpha/2}) = \alpha/2$ and $Z$ is a standard normal random variable. Similarly, if the assumptions of Theorem 3 are satisfied, then we can replace $\hat{H}_{MM}$ with $\hat{H}_{JK}$, and if the assumptions of Lemma 1 are satisfied, then we can replace $\hat{H}_{MM}$ with $\hat{H}$. In this section, we give a small-scale simulation study to evaluate the finite sample performance of these confidence intervals.
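A sketch of the interval construction (ours; entropy_ci is a hypothetical name, and the estimator functions are the ones sketched in Section 1) computes $\hat{\sigma}$ from the sample as in (9) and uses the normal quantile $z_{\alpha/2}$:

```python
import numpy as np
from scipy import stats

def entropy_ci(sample, estimator, alpha=0.05):
    """Asymptotic (1 - alpha) CI: estimate ± z_{alpha/2} * sigma_hat / sqrt(n),
    with sigma_hat^2 = sum_k p_hat_k ln^2(p_hat_k) - H_hat^2 as in (9)."""
    _, counts = np.unique(sample, return_counts=True)
    n = counts.sum()
    p_hat = counts / n
    h_hat = -np.sum(p_hat * np.log(p_hat))
    sigma_hat = np.sqrt(np.sum(p_hat * np.log(p_hat)**2) - h_hat**2)
    z = stats.norm.ppf(1 - alpha / 2)
    center = estimator(sample)              # plug-in, Miller–Madow, or jackknife
    half = z * sigma_hat / np.sqrt(n)
    return center - half, center + half
```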
For concreteness, we focus on the geometric distribution, which corresponds to
$$p_k = p(1-p)^{k-1}, \qquad k = 1, 2, \ldots,$$
where $p \in (0,1)$ is a parameter. The true entropy of this distribution is $H = -p^{-1}\left[p\ln p + (1-p)\ln(1-p)\right]$. In this case, Conditions 1, 2, and 3 all hold. For our simulations, we took $p = 0.5$. The simulations were performed as follows. We began by simulating a random sample of size $n$ and used it to evaluate a $95\%$ confidence interval for the given estimator. We then checked whether or not the true value of $H$ was in the interval. This was repeated 5000 times, and the proportion of times that the true value was in the interval was calculated. This proportion should be close to $0.95$ when the confidence interval works well. We repeated this for sample sizes ranging from 20 to 1000 in increments of 10. The results are given in Figure 1. We can see that the Miller–Madow and jackknife estimators consistently outperform the plug-in. It may be interesting to note that, although the proofs of Theorems 1–3 are based on showing that the bias correction term approaches zero, this does not mean that the bias correction is not useful. On the contrary, bias correction improves the finite sample performance of the asymptotic confidence intervals.
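The coverage experiment can be reproduced along the following lines (our sketch, reusing the estimator and interval functions from the earlier sketches; we use fewer replications than the paper's 5000 to keep the run short):

```python
import numpy as np

H_TRUE = 2 * np.log(2)                      # entropy of the geometric(0.5) distribution
rng = np.random.default_rng(1)

def coverage(estimator, n, reps=2000):
    """Proportion of replications whose 95% interval contains the true entropy."""
    hits = 0
    for _ in range(reps):
        sample = rng.geometric(0.5, size=n)
        lo, hi = entropy_ci(sample, estimator)
        hits += (lo <= H_TRUE <= hi)
    return hits / reps

for n in [20, 100, 500]:
    print(n, coverage(plugin_entropy, n), coverage(miller_madow_entropy, n),
          coverage(jackknife_entropy, n))
```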

4. Discussion

In this paper, we gave sufficient conditions for the asymptotic normality of the Miller–Madow and the jackknife estimators of entropy. While our focus is on the case of countably infinite alphabets, our results are formulated and proved in the case where the effective cardinality $K$ may be finite or countably infinite. As such, they hold in the case of finite alphabets as well. In fact, for finite alphabets, Conditions 1–3 always hold, and we have asymptotic normality so long as the underlying distribution is not uniform. The difficulty with the uniform distribution is that it is the unique distribution for which $\sigma^2$, as given in (9), is zero (see the discussion just below Condition 1). When the distribution is uniform, the asymptotic distribution is chi-squared with $K-1$ degrees of freedom (see [6]).
In general, we do not know if our conditions are necessary. However, they cover most distributions of interest; the only distributions they preclude are ones with extremely heavy tails. Although Conditions 1–3 may look complicated in complete generality, they are easily checked in many situations. For instance, Condition 2 always holds when, for large enough $k$, $p_k \le Ck^{-3-\delta}$ for some $C, \delta > 0$, which in particular implies that
$$\sum_{k=1}^{\infty} k^2 p_k < \infty.$$
If the alphabet $\mathscr{X} = \mathbb{N}$ is the set of natural numbers, then this is equivalent to the distribution $\mathbf{p}$ having a finite variance. Similarly, Conditions 1 and 3 both hold when, for large enough $k$, $p_k \le Ck^{-2-\delta}$ for some $C, \delta > 0$, which in particular implies that
$$\sum_{k=1}^{\infty} k\,p_k < \infty.$$
If the alphabet $\mathscr{X} = \mathbb{N}$ is the set of natural numbers, then this is equivalent to the distribution $\mathbf{p}$ having a finite mean.
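For example, the geometric distribution used in Section 3 has exponentially decaying tails, so it satisfies both pointwise bounds above; the following check (ours, not from the paper) illustrates the associated moment statement by computing $\sum_k k^2 p_k$ for $p = 0.5$.

```python
import numpy as np

k = np.arange(1, 200)
p_k = 0.5 * 0.5**(k - 1)                    # geometric(0.5) on {1, 2, ...}
print(np.sum(k**2 * p_k))                   # approximately 6 = E[X^2]; the series converges
```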

5. Proofs

Proof of Lemma 2.
Without loss of generality, assume that $C > 1$ and thus that $\ln C > 0$. Let $f(x) = x\ln x$ for $x \in (0,1)$. It is readily checked that $f$ is negative and decreasing for $x \in (0, e^{-1})$. Since $Cp'_k \to 0$ as $k \to \infty$, it follows that $Cp'_k < e^{-1}$ for large enough $k$. Now, let $K(n)$ be the sequence that works for $\mathbf{p}'$ in Condition 1. For large enough $n$,
$$0 \le -\sqrt{n}\sum_{k \ge K(n)} p_k \ln p_k \le -C\sqrt{n}\sum_{k \ge K(n)} p'_k \ln(Cp'_k) = -C\ln(C)\sqrt{n}\sum_{k \ge K(n)} p'_k - C\sqrt{n}\sum_{k \ge K(n)} p'_k \ln(p'_k) \le -C\left(\ln(C)+1\right)\sqrt{n}\sum_{k \ge K(n)} p'_k \ln(p'_k) \to 0.$$
Similarly, the function $g(x) = x\ln^2 x$ for $x \in (0,1)$ is positive and increasing for $x \in (0, e^{-2})$. Thus, there is an integer $M > 0$ such that if $k \ge M$, then $Cp'_k < e^{-2}$ and
$$0 \le \sum_{k \ge 1} p_k \ln^2 p_k \le \sum_{k=1}^{M-1} p_k \ln^2 p_k + C\sum_{k=M}^{\infty} p'_k \ln^2(Cp'_k) = \sum_{k=1}^{M-1} p_k \ln^2 p_k + C\sum_{k=M}^{\infty} p'_k \ln^2(p'_k) + C\ln^2(C)\sum_{k=M}^{\infty} p'_k + 2C\ln(C)\sum_{k=M}^{\infty} p'_k \ln(p'_k) < \infty,$$
as required. ☐
To prove Theorem 2, the following Lemma is needed.
Lemma 6.
If Condition 2 holds, then there exists a $K_1 > 0$ such that, for all $k \ge K_1$,
$$p_k \le \frac{1}{a(k)\,b(k)\,k^3} \le 1 - \left(1 - \frac{2}{k\,b(k)}\right)^{\frac{1}{a(k)k^2}}.$$
Proof. 
Observing that $e^{-x} \ge 1 - x$ holds for all real $x$ and that $\lim_{x \to 0}(1 - e^{-x})/x = 1$, we have $e^{-2/(kb(k))} \ge 1 - 2/(kb(k))$, and hence
$$1 - \left(1 - \frac{2}{k\,b(k)}\right)^{\frac{1}{a(k)k^2}} \ge 1 - e^{-\frac{2}{a(k)b(k)k^3}} \sim \frac{2}{a(k)\,b(k)\,k^3}.$$
This implies that there is a $K_1 > 0$ such that, for all $k \ge K_1$, (11) holds and
$$\frac{2/(a(k)b(k)k^3)}{1 - \left(1 - \frac{2}{k\,b(k)}\right)^{\frac{1}{a(k)k^2}}} \le 2.$$
It follows that, for such $k$,
$$p_k \le \frac{1}{a(k)\,b(k)\,k^3} = 0.5\,\frac{2/(a(k)b(k)k^3)}{1 - \left(1 - \frac{2}{k\,b(k)}\right)^{\frac{1}{a(k)k^2}}}\left[1 - \left(1 - \frac{2}{k\,b(k)}\right)^{\frac{1}{a(k)k^2}}\right] \le 1 - \left(1 - \frac{2}{k\,b(k)}\right)^{\frac{1}{a(k)k^2}},$$
as required. ☐
Proof of Theorem 2.
By Theorem 1, it suffices to show that Condition 2 implies that both Condition 1 and $\sqrt{n}\,\hat{B}_{MM} \xrightarrow{p} 0$ hold. The fact that Condition 2 implies Condition 1 follows from Example 3 and Lemmas 2 and 3. We now show that $\sqrt{n}\,\hat{B}_{MM} \xrightarrow{p} 0$.
Fix $\varepsilon_0 \in (0, \varepsilon)$. From (12) and the facts that $a(k)$ is positive, eventually nondecreasing, and approaches infinity, it follows that
$$\limsup_{k \to \infty} \frac{(a(k))^{2\varepsilon_0}}{a\!\left(\frac{\sqrt{k}}{(a(k))^{\varepsilon_0}}\right)} = \limsup_{k \to \infty} (a(k))^{-2(\varepsilon - \varepsilon_0)}\frac{(a(k))^{2\varepsilon}}{a\!\left(\frac{\sqrt{k}}{(a(k))^{\varepsilon_0}}\right)} \le \limsup_{k \to \infty} (a(k))^{-2(\varepsilon - \varepsilon_0)}\frac{(a(k))^{2\varepsilon}}{a\!\left(\frac{\sqrt{k}}{(a(k))^{\varepsilon}}\right)} = 0. \tag{15}$$
Let $K_2$ be a positive integer such that $a(n)$ is nondecreasing for all $n \ge K_2$, and let $r_n = \frac{\sqrt{n}}{(a(n))^{\varepsilon_0}} \vee K_3$, where $K_3 = K_1 \vee K_2$ and $K_1$ is as in Lemma 6. It follows that
$$E\left(\sqrt{n}\,\hat{B}_{MM}\right) = \sqrt{n}\,E\left(\frac{\hat{m}-1}{2n}\right) \le \frac{1}{\sqrt{n}}E(\hat{m}) = \frac{1}{\sqrt{n}}\sum_{k \ge 1}\left[1 - (1 - p_k)^n\right] \le \frac{1}{\sqrt{n}}\sum_{k \le r_n} 1 + \frac{1}{\sqrt{n}}\sum_{k > r_n}\left[1 - \left(1 - \frac{2}{k\,b(k)}\right)^{\frac{n}{a(k)k^2}}\right] =: S_1 + S_2.$$
We have
$$S_1 \le \frac{r_n}{\sqrt{n}} = (a(n))^{-\varepsilon_0} \vee \frac{K_3}{\sqrt{n}} \to 0 \quad\text{and}\quad S_2 \le \frac{1}{\sqrt{n}}\sum_{k > r_n}\left[1 - \left(1 - \frac{2}{k\,b(k)}\right)^{\frac{n}{a(r_n)r_n^2}}\right] \le \frac{1}{\sqrt{n}}\sum_{k > r_n}\left[1 - \left(1 - \frac{2}{k\,b(k)}\right)^{\frac{(a(n))^{2\varepsilon_0}}{a\left(\sqrt{n}/(a(n))^{\varepsilon_0}\right)}}\right].$$
By (15), it follows that, for large enough $n$,
$$S_2 \le \frac{1}{\sqrt{n}}\sum_{k > r_n}\left[1 - \left(1 - \frac{2}{k\,b(k)}\right)\right] = \frac{1}{\sqrt{n}}\sum_{k > r_n}\frac{2}{k\,b(k)} \le \left(\sum_{k \ge 1}\frac{1}{k\,b(k)}\right)\frac{2}{\sqrt{n}} \to 0.$$
From here, Markov's inequality implies that $\sqrt{n}\,\hat{B}_{MM} \xrightarrow{p} 0$. ☐
Proof of Lemma 4.
First note that $\ln_{(r-1)}\left(0.5\ln n - 0.5\ln_{(v)}n\right) \sim \ln_{(r-1)}\left(0.5\ln n\right)$ for any $v \ge 2$ and $r \ge 1$. This can be shown by induction on $r$. Specifically, the result is immediate for $r = 1$. If the result is true for $r = m$, then for $r = m + 1$,
$$\ln_{(m)}\left(0.5\ln n - 0.5\ln_{(v)}n\right) = \ln\left(\ln_{(m-1)}\left(0.5\ln n - 0.5\ln_{(v)}n\right)\right) = \ln\left(\frac{\ln_{(m-1)}\left(0.5\ln n - 0.5\ln_{(v)}n\right)}{\ln_{(m-1)}\left(0.5\ln n\right)}\right) + \ln_{(m)}\left(0.5\ln n\right) \sim \ln_{(m)}\left(0.5\ln n\right).$$
It follows that, for $r \ge 2$,
$$\lim_{n \to \infty}\frac{\ln_{(r)}n}{\ln_{(r)}\left(\sqrt{n/\ln_{(r)}n}\right)} = \lim_{n \to \infty}\frac{\ln_{(r)}n}{\ln_{(r-1)}\left(0.5\ln n - 0.5\ln_{(r+1)}n\right)} = \lim_{n \to \infty}\frac{\ln_{(r-1)}(\ln n)}{\ln_{(r-1)}(0.5\ln n)} = 1,$$
where the final equality follows from the fact that $\ln_{(r-1)}(x)$ is a slowly varying function. Recall that a positive-valued function $\ell$ is called slowly varying if, for any $t > 0$,
$$\lim_{x \to \infty}\frac{\ell(xt)}{\ell(x)} = 1$$
(see [11] for a standard reference). To see that $\ln_{(r-1)}(x)$ is slowly varying, note that $\ln$ is slowly varying and that compositions of slowly varying functions are slowly varying by Proposition 1.3.6 in [11]. ☐
Proof of Lemma 5.
Observing the convention that $0\ln 0 = 0$,
$$\begin{aligned}
\sum_{i=1}^{n}\hat{H}_{(i)} &= \sum_{k \ge 1}\sum_{i:X_i=\ell_k}\hat{H}_{(i)} \\
&= \sum_{k \ge 1} Y_k\left[-\frac{Y_k-1}{n-1}\ln\frac{Y_k-1}{n-1} - \sum_{j \ge 1,\, j \ne k}\frac{Y_j}{n-1}\ln\frac{Y_j}{n-1}\right] \\
&= \sum_{k \ge 1} Y_k\left[-\frac{Y_k-1}{n-1}\left(\ln\frac{Y_k-1}{Y_k} + \ln\frac{Y_k}{n-1}\right) - \sum_{j \ge 1,\, j \ne k}\frac{Y_j}{n-1}\ln\frac{Y_j}{n-1}\right] \\
&= \sum_{k \ge 1} Y_k\left[-\frac{Y_k-1}{n-1}\ln\frac{Y_k-1}{Y_k} + \frac{1}{n-1}\ln\frac{Y_k}{n-1} - \sum_{j \ge 1}\frac{Y_j}{n-1}\ln\frac{Y_j}{n-1}\right] \\
&= -\frac{1}{n-1}\sum_{k \ge 1} Y_k(Y_k-1)\ln\frac{Y_k-1}{Y_k} + \sum_{k \ge 1}\frac{Y_k}{n-1}\ln\frac{Y_k}{n-1} - \sum_{k \ge 1}Y_k\sum_{j \ge 1}\frac{Y_j}{n-1}\ln\frac{Y_j}{n-1} \\
&= -\frac{1}{n-1}\sum_{k \ge 1} Y_k(Y_k-1)\ln\frac{Y_k-1}{Y_k} - (n-1)\sum_{k \ge 1}\frac{Y_k}{n-1}\ln\frac{Y_k}{n-1} \\
&= -\frac{1}{n-1}\sum_{k \ge 1} Y_k(Y_k-1)\ln\frac{Y_k-1}{Y_k} - \sum_{k \ge 1}Y_k\left(\ln\frac{Y_k}{n} + \ln\frac{n}{n-1}\right) \\
&= -\frac{1}{n-1}\sum_{k \ge 1} Y_k(Y_k-1)\ln\frac{Y_k-1}{Y_k} - n\ln\frac{n}{n-1} + n\hat{H}.
\end{aligned}$$
Therefore,
$$\begin{aligned}
\sum_{i=1}^{n}\left(\hat{H} - \hat{H}_{(i)}\right) &= \frac{1}{n-1}\sum_{k \ge 1}Y_k(Y_k-1)\ln\frac{Y_k-1}{Y_k} + n\ln\frac{n}{n-1} \\
&= \frac{1}{n-1}\sum_{k \ge 1}Y_k\left[(Y_k-1)\ln\frac{Y_k-1}{Y_k} + (n-1)\ln\frac{n}{n-1}\right] \\
&= \frac{1}{n-1}\sum_{k \ge 1}Y_k\left[(n-1)\ln\frac{n}{n-1} - (Y_k-1)\ln\frac{Y_k}{Y_k-1}\right].
\end{aligned}$$
It suffices to show that, for any $y \in \{1, 2, \ldots, n\}$,
$$(y-1)\ln y - (y-1)\ln(y-1) \le (n-1)\ln n - (n-1)\ln(n-1). \tag{16}$$
Towards that end, first note that the inequality in (16) holds for $y = 1$. Now, let
$$f(y) = (y-1)\ln y - (y-1)\ln(y-1)$$
and, therefore, letting $s = 1 - 1/y$,
$$f'(y) = \ln\frac{y}{y-1} - \frac{1}{y} = \left(1 - \frac{1}{y}\right) - 1 - \ln\left(1 - \frac{1}{y}\right) = (s-1) - \ln s.$$
Since $s - 1 \ge \ln s$ for all $s > 0$ (see, e.g., 4.1.36 in [12]), $f'(y) \ge 0$ for all $y$ with $1 < y \le n$, which implies (16).
For the second part, we use the first part to get
$$0 \le E\left[\sum_{i=1}^{n}\left(\hat{H} - \hat{H}_{(i)}\right)\right] = nE\left[\hat{H} - H\right] - \sum_{i=1}^{n}E\left[\hat{H}_{(i)} - H\right] = n\left(B_n - B_{n-1}\right),$$
where the last equality follows from the facts that, for each $i$, $\hat{H}_{(i)}$ is a plug-in estimator of $H$ based on a sample of size $n-1$ and that $E\hat{H}_{(i)}$ does not depend on $i$ due to symmetry. From here, the result follows. ☐
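The closed-form expression for $\sum_{i=1}^{n}(\hat{H} - \hat{H}_{(i)})$ derived above can be checked numerically; the sketch below (ours, not from the paper) compares a direct leave-one-out evaluation with the closed form on a random sample.

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.geometric(0.5, size=50)
n = len(sample)

def plugin(counts, m):
    q = counts[counts > 0] / m
    return -np.sum(q * np.log(q))

_, Y = np.unique(sample, return_counts=True)
H_hat = plugin(Y, n)

# Direct evaluation: leave each observation out and recompute the plug-in.
direct = sum(H_hat - plugin(np.unique(np.delete(sample, i), return_counts=True)[1], n - 1)
             for i in range(n))

# Closed form from the proof of Lemma 5; terms with Y_k = 1 vanish (0 ln 0 = 0).
term = np.where(Y > 1, (Y - 1) * np.log(Y / np.maximum(Y - 1, 1)), 0.0)
closed = np.sum(Y * ((n - 1) * np.log(n / (n - 1)) - term)) / (n - 1)

print(direct, closed)                        # agree up to floating-point rounding
```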
Proof of Theorem 3.
By Theorem 1, it suffices to show that $\sqrt{n}\,\hat{B}_{JK} \xrightarrow{p} 0$. Note that, by Lemma 5,
$$0 \le E\left(\sqrt{n}\,\hat{B}_{JK}\right) = \sqrt{n}\,(n-1)\left(B_n - B_{n-1}\right) \le n^{3/2}\left(B_n - B_{n-1}\right) \to 0,$$
where the convergence follows from Condition 3. From here, the result follows by Markov's inequality. ☐
To prove Proposition 1, we need several lemmas, which may be of independent interest.
Lemma 7.
Let $S_n$ and $S_{n-1}$ be binomial random variables with parameters $(n, p)$ and $(n-1, p)$, respectively. If $n \ge 2$ and $p \in (0,1)$, then
$$E\left(S_n \ln S_n\right) = E\left[np\ln\left(S_{n-1} + 1\right)\right].$$
The proof is given on page 178 in [7].
Lemma 8.
Let $X_1, \ldots, X_n$ be iid Bernoulli random variables with parameter $p \in (0,1)$. For $m = 1, \ldots, n$, let $S_m = \sum_{i=1}^{m} X_i$, $\hat{p}_m = S_m/m$, $\hat{h}_m = -\hat{p}_m\ln\hat{p}_m$, and $\Delta_m = E[\hat{h}_m - \hat{h}_{m-1}]$. Then,
$$\Delta_n = E\left[\hat{h}_n - \hat{h}_{n-1}\right] \le \frac{p(2-p)}{(n-1)\left[(n-2)p + 2\right]} \le \frac{2p}{(n-1)\left[(n-2)p + 2\right]}. \tag{18}$$
Proof. 
Applying Lemma 7 to $\Delta_n$ gives
$$\Delta_n = p\ln\frac{n}{n-1} + p\,E\left[\ln\frac{S_{n-2}+1}{S_{n-1}+1}\right] = p\ln\frac{n}{n-1} + p\,E\left[\ln\frac{S_{n-2}+1}{S_{n-2}+X_{n-1}+1}\right].$$
Conditioning on $X_{n-1}$ gives
$$\Delta_n = p\ln\frac{n}{n-1} + p^2\,E\left[\ln\frac{S_{n-2}+1}{S_{n-2}+2}\right].$$
Noting that $f(x) = \ln\left(x/(x+1)\right)$ is a concave function for $x > 0$, by Jensen's inequality,
$$\Delta_n \le p\ln\frac{n}{n-1} + p^2\ln\frac{(n-2)p+1}{(n-2)p+2}.$$
Applying the following inequalities (both of which follow from 4.1.36 in [12]) to the terms of the above expression,
$$\ln\frac{x}{x-1} < \frac{1}{x-1} \ \text{ for } x > 1 \qquad\text{and}\qquad \ln\frac{x}{x+1} < -\frac{1}{x+1} \ \text{ for } x > 0,$$
it follows that
$$\Delta_n \le \frac{p}{n-1} - \frac{p^2}{(n-2)p+2} = \frac{p(2-p)}{(n-1)\left[(n-2)p+2\right]},$$
which completes the proof. ☐
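The bound in (18) can be verified exactly for small $n$ by computing both expectations against the binomial probability mass function; the sketch below (ours, not from the paper) does this with SciPy.

```python
import numpy as np
from scipy.stats import binom

def exact_delta(n, p):
    """Delta_n = E[h(S_n / n)] - E[h(S_{n-1} / (n-1))], h(x) = -x ln x, 0 ln 0 = 0."""
    def expected_h(m):
        s = np.arange(m + 1)
        x = s / m
        h = np.where(s > 0, -x * np.log(np.where(s > 0, x, 1.0)), 0.0)
        return np.sum(binom.pmf(s, m, p) * h)
    return expected_h(n) - expected_h(n - 1)

n, p = 50, 0.3
bound = p * (2 - p) / ((n - 1) * ((n - 2) * p + 2))
print(exact_delta(n, p), bound)              # Delta_n is positive and below the bound
```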
For fixed $\varepsilon > 0$, rewriting the upper bound in (18) gives
$$\frac{2p}{(n-1)\left[(n-2)p+2\right]} = \frac{2p^{1-\varepsilon}}{n-1}\cdot\frac{p^{\varepsilon}}{(n-2)p+2} =: \frac{2p^{1-\varepsilon}}{n-1}\,g(n, p, \varepsilon). \tag{19}$$
Lemma 9.
For any $\varepsilon \in (0,1)$ and $n \ge 3$, there exists a $p_0 \in (0,1)$ such that $g(n, p, \varepsilon)$, defined in (19), is maximized at $p_0$ and
$$0 \le g(n, p_0, \varepsilon) = O\left(n^{-\varepsilon}\right).$$
Proof. 
Taking the derivative of $\ln g(n, p, \varepsilon)$ with respect to $p$ gives
$$\frac{\partial}{\partial p}\ln g(n, p, \varepsilon) = \frac{\varepsilon}{p} - \frac{n-2}{(n-2)p+2}.$$
It is readily checked that this equals zero only at
$$p_0 = \frac{2\varepsilon}{(1-\varepsilon)(n-2)}$$
and is positive for $0 < p < p_0$ and negative for $p_0 < p < 1$. Thus, $p_0$ is the global maximum. For fixed $\varepsilon$, we have
$$g(n, p_0, \varepsilon) = \frac{(2\varepsilon)^{\varepsilon}}{(1-\varepsilon)^{\varepsilon}(n-2)^{\varepsilon}\left[\frac{2\varepsilon}{1-\varepsilon} + 2\right]} = \left(\frac{\varepsilon}{n-2}\right)^{\varepsilon}\left(\frac{1-\varepsilon}{2}\right)^{1-\varepsilon} = O\left(n^{-\varepsilon}\right),$$
as required. ☐
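A quick numerical check of Lemma 9 (ours, not from the paper): a grid search over $p$ recovers the maximizer $p_0$, and $g(n, p_0, \varepsilon)\,n^{\varepsilon}$ stays roughly constant as $n$ grows, consistent with the $O(n^{-\varepsilon})$ rate.

```python
import numpy as np

def g(n, p, eps):
    """g(n, p, eps) = p**eps / ((n - 2) * p + 2), as in (19)."""
    return p**eps / ((n - 2) * p + 2)

eps = 0.75
n = 1000
p0 = 2 * eps / ((1 - eps) * (n - 2))
grid = np.linspace(1e-6, 1 - 1e-6, 200001)
print(np.max(g(n, grid, eps)), g(n, p0, eps))       # grid maximum agrees with g at p0

for n in [10**3, 10**4, 10**5]:
    p0 = 2 * eps / ((1 - eps) * (n - 2))
    print(n, g(n, p0, eps) * n**eps)                # roughly constant, i.e., g = O(n^{-eps})
```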
Proof of Proposition 1.
For every $k$ and every $m \le n$, let
$$S_{m,k} = \sum_{i=1}^{m} 1\left[X_i = \ell_k\right] \qquad\text{and}\qquad \hat{H}_m = -\sum_{k \ge 1}\frac{S_{m,k}}{m}\ln\frac{S_{m,k}}{m}$$
be the observed letter counts and the plug-in estimator of entropy based on the first $m$ observations. Thus, $S_{n,k} = Y_k$ and $\hat{H}_n = \hat{H}$. We are interested in evaluating
$$B_n - B_{n-1} = E\left[\hat{H}_n - \hat{H}_{n-1}\right] = \sum_{k \ge 1}E\left[\frac{S_{n-1,k}}{n-1}\ln\frac{S_{n-1,k}}{n-1} - \frac{S_{n,k}}{n}\ln\frac{S_{n,k}}{n}\right] \le \sum_{k \ge 1}\frac{2p_k}{(n-1)\left[(n-2)p_k+2\right]} = \frac{2}{n-1}\sum_{k \ge 1}p_k^{1-\varepsilon}\,g(n, p_k, \varepsilon),$$
where the third line follows by Lemma 8. Now, applying Lemmas 5 and 9 gives
$$0 \le n^{3/2}\left(B_n - B_{n-1}\right) \le \frac{2n^{3/2}}{n-1}\sum_{k \ge 1}p_k^{1-\varepsilon}\,g(n, p_k, \varepsilon) \le \frac{2n^{3/2}\,g(n, p_0, \varepsilon)}{n-1}\sum_{k \ge 1}p_k^{1-\varepsilon} = O\left(n^{1/2-\varepsilon}\right),$$
which converges to zero when $\varepsilon \in (1/2, 1)$. ☐

Author Contributions

C.C., M.G., A.S., J.Z. and Z.Z. contributed to the proofs; C.C., M.G. and J.Z. contributed editorial input; Z.Z. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
  2. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006.
  3. Paninski, L. Estimation of entropy and mutual information. Neural Comput. 2003, 15, 1191–1253.
  4. Zhang, Z. Statistical Implications of Turing's Formula; John Wiley & Sons: New York, NY, USA, 2017.
  5. Zhang, Z.; Grabchak, M. Bias adjustment for a nonparametric entropy estimator. Entropy 2013, 15, 1999–2011.
  6. Miller, G.A.; Madow, W.G. On the Maximum-Likelihood Estimate of the Shannon-Wiener Measure of Information; Operational Applications Laboratory, Air Force Cambridge Research Center, Air Research and Development Command, Report AFCRC-TR-54-75; Luce, R.D., Bush, R.R., Galanter, E., Eds.; Bolling Air Force Base: Washington, DC, USA, 1954.
  7. Antos, A.; Kontoyiannis, I. Convergence properties of functional estimates for discrete distributions. Random Struct. Algorithms 2001, 19, 163–193.
  8. Zhang, Z.; Zhang, X. A normal law for the plug-in estimator of entropy. IEEE Trans. Inf. Theory 2012, 58, 2745–2747.
  9. Zahl, S. Jackknifing an index of diversity. Ecology 1977, 58, 907–913.
  10. Harris, B. The statistical estimation of entropy in the non-parametric case. In Topics in Information Theory; Csiszár, I., Ed.; North-Holland: Amsterdam, The Netherlands, 1975; pp. 323–355.
  11. Bingham, N.H.; Goldie, C.M.; Teugels, J.L. Regular Variation; Cambridge University Press: New York, NY, USA, 1987.
  12. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions, 10th ed.; Dover Publications: New York, NY, USA, 1972.
Figure 1. Effectiveness of the 95% confidence intervals as a function of sample size. The plot on the top left gives the proportions for the Miller–Madow and the plug-in estimators, while the one on the top right gives the proportions for the jackknife and the plug-in estimators. The horizontal line is at 0.95. The closer the proportion is to this line, the better the performance. The plot on the bottom left gives the proportion for the Miller–Madow minus the proportion for the plug-in, while the one on the bottom right gives the proportion for the jackknife minus the proportion for the plug-in. The larger the value, the greater the improvement due to bias correction. Here, the horizontal line is at 0.
