Article

Bias Adjustment for a Nonparametric Entropy Estimator

Zhiyi Zhang and Michael Grabchak *
Department of Mathematics and Statistics, University of North Carolina at Charlotte, 9201 University City Blvd, Charlotte, NC 28223, USA
* Author to whom correspondence should be addressed.
Entropy 2013, 15(6), 1999-2011; https://doi.org/10.3390/e15061999
Submission received: 20 March 2013 / Revised: 8 May 2013 / Accepted: 17 May 2013 / Published: 23 May 2013
(This article belongs to the Special Issue Estimating Information-Theoretic Quantities from Data)

Abstract

Zhang in 2012 introduced a nonparametric estimator of Shannon’s entropy, whose bias decays exponentially fast when the alphabet is finite. We propose a methodology to estimate the bias of this estimator. We then use it to construct a new estimator of entropy. Simulation results suggest that this bias adjusted estimator has a significantly lower bias than many other commonly used estimators. We consider both the case when the alphabet is finite and when it is countably infinite.

1. Introduction

Let $K$ be a finite or countable index set with cardinality $|K|$. Let $P = \{ p_k : k \in K \}$ be a probability distribution on the alphabet $\mathscr{X} = \{ \ell_k : k \in K \}$. Entropy, of the form
H = -\sum_{k \in K} p_k \ln(p_k)    (1)
was introduced by Shannon in [1] and is often referred to as Shannon’s entropy. Miller [2] and Basharin [3] were among the first to study nonparametric estimation of H. Since then, the topic has been investigated from a variety of directions and perspectives. Many important references can be found in [4] and [5]. In this paper, we introduce a modification of an estimator of entropy, which was first defined by Zhang in [6]. This modification aims to reduce the bias of the original estimator. Simulations suggest that, at least for the models considered, this estimator has very low bias compared with several other commonly used estimators. Throughout this paper, we use $\ln$ to denote the natural logarithm and we define, as is common, $0 \ln 0 = 0$. For any two functions $f$ and $g$, taking values in $(0, \infty)$ with $\lim_{n\to\infty} f(n) = \lim_{n\to\infty} g(n) = 0$, we write $f(n) = O(g(n))$ to mean
0 < \liminf_{n\to\infty} \frac{f(n)}{g(n)} \le \limsup_{n\to\infty} \frac{f(n)}{g(n)} < \infty
and $O(g(n)) \le f(n)$ to mean
0 < \liminf_{n\to\infty} \frac{f(n)}{g(n)} \le \limsup_{n\to\infty} \frac{f(n)}{g(n)}
Assume that $P$ is unknown. Let $X_1, X_2, \ldots, X_n$ be an independent and identically distributed (iid) sample of size $n$ from $\mathscr{X}$ according to $P$. Let $y_k = \sum_{i=1}^{n} 1[X_i = \ell_k]$, $k \in K$, be the observed sample frequencies of letters in the alphabet, and let $\hat{p}_k = y_k/n$, $k \in K$, be the sample proportions. In this framework, we are interested in estimating $H$. Perhaps the most intuitive nonparametric estimator of $H$ is given by
\hat{H} = -\sum_{k \in K} \hat{p}_k \ln(\hat{p}_k)    (2)
This is known as the plug-in estimator. When $|K|$ is finite, the bias of $\hat{H}$ is given by
\frac{|K| - 1}{2n} + O(1/n^2)
see Miller [2] or, for a more formal treatment, Paninski [5]. This leads to the so-called Miller–Madow estimator
\hat{H}_{MM} = \hat{H} + \frac{\widehat{|K|} - 1}{2n}    (3)
where $\widehat{|K|}$ is the number of distinct letters observed in the sample. Other estimators of $H$ include the jackknife estimator of Zahl [7] and Strong, Koberle, de Ruyter van Steveninck, and Bialek [8], and the NSB estimator of Nemenman, Shafee, and Bialek [9] and Nemenman [10]. These estimators (and others) have been shown to work well in numerical studies, although many of their theoretical properties (such as consistency and asymptotic normality) are not known.
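For concreteness, both of these classical estimators can be computed directly from the observed letter counts. The following Python sketch is ours, not code from the paper; the function names are our own labels for the plug-in estimator of Equation (2) and the Miller–Madow estimator of Equation (3).

```python
import numpy as np

def plug_in_entropy(counts):
    """Plug-in (MLE) entropy estimate from the observed letter counts (Equation (2))."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p_hat = counts[counts > 0] / n
    return -np.sum(p_hat * np.log(p_hat))

def miller_madow_entropy(counts):
    """Miller-Madow estimate: plug-in plus (K_hat - 1)/(2n), as in Equation (3)."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    k_hat = np.count_nonzero(counts)      # number of distinct letters observed
    return plug_in_entropy(counts) + (k_hat - 1.0) / (2.0 * n)

# Example: letter counts from a hypothetical sample of size 82
counts = [30, 22, 15, 9, 4, 1, 1]
print(plug_in_entropy(counts), miller_madow_entropy(counts))
```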
Zhang [6] proposed an estimator, $\hat{H}_z$, of entropy, which is given in Equation (4) below. When $|K|$ is finite, the bias of $\hat{H}_z$ decays exponentially fast, and the estimator is asymptotically normal and efficient. See Zhang [11] for details. This estimator is given by
\hat{H}_z = \sum_{v=1}^{n-1} \frac{1}{v} Z_{1,v}    (4)
where
Z_{1,v} = \frac{n^{v+1} [n - (v+1)]!}{n!} \sum_{k \in K} \hat{p}_k \prod_{j=0}^{v-1} \left( 1 - \hat{p}_k - \frac{j}{n} \right)    (5)
In Zhang [6] it is shown that
E(\hat{H}_z) = \sum_{v=1}^{n-1} \frac{1}{v} \sum_{k \in K} p_k (1 - p_k)^v
and that the bias of $\hat{H}_z$ is given by
B_n = H - E(\hat{H}_z) = \sum_{v=n}^{\infty} \frac{1}{v} \sum_{k \in K} p_k (1 - p_k)^v    (6)
Although $B_n$ decays exponentially in $n$ when $K$ is a finite set, it can still be annoyingly sizable for small $n$. The objective of this paper is to put forth a good estimator $\hat{B}_n$ of $B_n$ and, in turn, a good estimator of $H$ by means of $\hat{H}_z^\sharp = \hat{H}_z + \hat{B}_n$. We deal with both the case when $|K|$ is finite and the case when it is infinite.
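The estimator in Equations (4) and (5) can be computed in a single pass over $v$. The sketch below is our illustration, not code supplied with the paper. It uses an algebraically equivalent rewriting of $Z_{1,v}$ as $\sum_k \hat{p}_k \prod_{j=1}^{v} [1 - (y_k - 1)/(n - j)]$, whose factors all lie in $[0, 1]$ and thus avoid the large factorials in Equation (5); it also returns the increments $Z_{1,v}/v$, which are the raw material for the bias fit of Section 2.

```python
import numpy as np

def zhang_entropy(counts):
    """Zhang's estimator H_z = sum_{v=1}^{n-1} Z_{1,v} / v (Equations (4) and (5)).

    Z_{1,v} is accumulated via the equivalent product
    sum_k p_hat_k * prod_{j=1}^{v} [1 - (y_k - 1) / (n - j)],
    which keeps every factor in [0, 1].  Returns (H_z, delta_hat) with
    delta_hat[v-1] = Z_{1,v} / v, the increments fitted in Section 2.
    """
    y = np.asarray(counts, dtype=float)
    y = y[y > 0]
    n = int(y.sum())
    p_hat = y / n
    prod_k = np.ones_like(p_hat)              # running product over j, one entry per letter
    z = np.zeros(n - 1)
    for v in range(1, n):
        prod_k *= 1.0 - (y - 1.0) / (n - v)   # multiply in the j = v factor
        z[v - 1] = np.sum(p_hat * prod_k)
    delta_hat = z / np.arange(1, n)
    return float(np.sum(delta_hat)), delta_hat
```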

2. Bias Adjustment

For a positive integer $v$, let $\Delta_v$ be the difference between the bias of $\hat{H}_z$ based on an iid sample of size $v$ and that based on a sample of size $v+1$, i.e.,
\Delta_v = B_v - B_{v+1} = \frac{1}{v} \sum_{k \in K} p_k (1 - p_k)^v    (7)
Clearly $B_n = \sum_{v=n}^{\infty} \Delta_v$. According to Zhang and Zhou [12], for every $v$ with $1 \le v \le n-1$, $Z_{1,v}$, as given in Equation (5), is the uniformly minimum variance unbiased estimator (umvue) of $\zeta_{1,v} = \sum_{k \in K} p_k (1 - p_k)^v$. This implies that, for every $v$ with $1 \le v \le n-1$, $\hat{\Delta}_v = v^{-1} Z_{1,v}$ is a good estimator of $\Delta_v$.
The methodology proposed in this paper is as follows. For all $v \le n-1$, we estimate $\Delta_v$ with $\hat{\Delta}_v$. We use $\hat{\Delta}_1, \hat{\Delta}_2, \ldots, \hat{\Delta}_{n-1}$ to fit a parametric function $\delta(v)$ such that $\delta(v)$ is close to $\hat{\Delta}_v$. We then extrapolate this function and take $\hat{\Delta}_v = \delta(v)$ for $v \ge n$. Our estimate of the bias is then $\hat{B}_n = \sum_{v=n}^{\infty} \hat{\Delta}_v$.
It remains to choose a reasonable parametric form for $\delta$ and to fit it. We consider these questions in two separate cases: (1) when $K$ is finite, known or unknown, and (2) when $K$ is countably infinite. The case when it is unknown whether $K$ is finite or infinite is discussed in Remark 1 below. Figure 1 shows how well our chosen $\delta(v)$ fits $\hat{\Delta}_v$ for typical examples.
Figure 1. (a) Plot of $v$ on the x-axis and $\ln(\hat{\Delta}_v)$ on the y-axis, based on a random sample of size 200 from a Zipf distribution; the overlaid line is the estimated $\ln\delta(v)$. (b) Plot of $\ln(v)$ on the x-axis and $\ln(\hat{\Delta}_v)$ on the y-axis, based on a random sample of size 200 from a Poisson distribution; the overlaid line is the estimated $\ln\delta(v)$.

2.1. Case: K is Finite

Assume that $K$ is finite. If $p = \min_{k \in K} p_k$, then
\Delta_v = \frac{1}{v} \sum_{k \in K} p_k (1 - p_k)^v = O\!\left( v^{-1} (1 - p)^v \right)
as $v$ increases indefinitely. This suggests taking
\delta(v) = \frac{\alpha e^{-\gamma v}}{v}
where $\alpha > 0$ and $\gamma > 0$. However, since, for small values of $v$, other terms of the sum given in Equation (7) may have a significant impact, we consider the slightly more general form
\delta(v) = \frac{\alpha e^{-\gamma v}}{v^{\beta}}    (9)
where $\alpha > 0$, $\gamma > 0$, and $\beta \in \mathbb{R}$ are parameters. These parameters are estimated by using least squares to fit
\ln \delta(v) = \ln \alpha - \beta \ln v - \gamma v    (10)
with the data
\{ (v, \ln \hat{\Delta}_v) : v = v_0, v_0 + 1, \ldots, n-1 \}    (11)
Here $v_0$ is a user-chosen positive integer. We can always take $v_0 = 1$, but we may wish to exclude the first several $\hat{\Delta}_v$ since they may be atypical. We denote our estimate of $\ln\alpha$ by $\widehat{\ln\alpha}$, and those of $\alpha$, $\beta$, and $\gamma$ by
\hat{\alpha} = e^{\widehat{\ln\alpha}}, \quad \hat{\beta}, \quad \text{and} \quad \hat{\gamma}    (12)
The bias-adjusted estimator $\hat{H}_z^\sharp$ is given by
\hat{H}_z^\sharp = \hat{H}_z + \sum_{v=n}^{\infty} \frac{\hat{\alpha} e^{-\hat{\gamma} v}}{v^{\hat{\beta}}}    (13)
This summation may be approximated by the integral $\int_n^{\infty} \hat{\alpha} e^{-\hat{\gamma} v} v^{-\hat{\beta}} \, dv$ or by truncating the sum at some very large integer $V$. For the simulation results presented below, we take $v_0 = 10$ and $V = 100{,}000$.
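A minimal sketch of this fit and tail correction, assuming the increments $\hat{\Delta}_v = Z_{1,v}/v$ are available (for instance from the zhang_entropy sketch in Section 1); the function name, the interface, and the use of numpy's least squares routine are our choices, not the authors'.

```python
import numpy as np

def finite_alphabet_bias(delta_hat, v0=10, V=100_000):
    """Fit ln(Delta_hat_v) = ln(alpha) - beta*ln(v) - gamma*v by least squares over
    v = v0, ..., n-1 (Equation (10)), then return the truncated tail sum
    sum_{v=n}^{V} alpha_hat * exp(-gamma_hat * v) / v**beta_hat from Equation (13).

    delta_hat[v-1] = Z_{1,v} / v.  Assumes these increments are strictly positive;
    the gamma_hat <= 0 and no-singleton cases are handled by the modifications below.
    """
    n = len(delta_hat) + 1
    v = np.arange(v0, n, dtype=float)
    logd = np.log(delta_hat[v0 - 1:])
    X = np.column_stack([np.ones_like(v), -np.log(v), -v])
    (ln_alpha, beta, gamma), *_ = np.linalg.lstsq(X, logd, rcond=None)
    tail = np.arange(n, V + 1, dtype=float)
    return np.sum(np.exp(ln_alpha - gamma * tail) / tail**beta)

# Usage sketch: H_z, delta_hat = zhang_entropy(counts)
#               H_z_sharp = H_z + finite_alphabet_bias(delta_hat)
```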
Two finer modifications are made to $\hat{H}_z^\sharp$ in Equation (13) when the sample data present some undesirable features:
  • If the least squares fit based on Equation (10) leads to $\hat{\gamma} \le 0$, the fitted results are abandoned, and instead the new model
    \ln \delta(v) = \ln \alpha - \gamma v    (14)
    is fit to the same data as in Equation (11). The resulting estimates of the parameters are then
    \hat{\alpha} = e^{\widehat{\ln\alpha}} \quad \text{and} \quad \hat{\gamma}
    and a modified estimate of $H$ is given by
    \hat{H}_z^\sharp = \hat{H}_z + \sum_{v=n}^{\infty} \hat{\alpha} e^{-\hat{\gamma} v}
    The switch from Equation (10) to Equation (14) is necessary because if $\hat{\gamma} \le 0$ then $\hat{H}_z^\sharp$ in Equation (13) diverges. In this case, we recommend taking a relatively large value of $v_0$ because small values of $v$ are more likely to require a polynomial part. For our simulations, we use $v_0 = n - 21$ in this case; a sketch of this fallback appears after this list.
  • When a sample has no letters with frequency 1, the model in Equation (9) will not fit well. In this case, we modify the sample by isolating one observation in a letter group with the least frequency and turning it into a singleton, e.g., a sample of the form $\{y_1, y_2, y_3\} = \{3, 2, 2\}$ is replaced by $\{3, 2, 1, 1\}$.
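The two modifications might be implemented along the following lines; this is our sketch of the procedures described in the bullets above, with names and interfaces of our choosing.

```python
import numpy as np

def fallback_bias(delta_hat, V=100_000):
    """First bullet: when the full fit gives gamma_hat <= 0, refit the reduced model
    ln(delta) = ln(alpha) - gamma*v over v = n-21, ..., n-1 and sum the fitted tail."""
    n = len(delta_hat) + 1
    v = np.arange(n - 21, n, dtype=float)
    logd = np.log(delta_hat[n - 22 : n - 1])
    X = np.column_stack([np.ones_like(v), -v])
    (ln_alpha, gamma), *_ = np.linalg.lstsq(X, logd, rcond=None)
    tail = np.arange(n, V + 1, dtype=float)
    return np.sum(np.exp(ln_alpha - gamma * tail))

def split_singleton(counts):
    """Second bullet: if no letter has frequency 1, split one observation off a
    least-frequent letter to create a singleton, e.g. [3, 2, 2] -> [3, 2, 1, 1]."""
    counts = sorted(counts, reverse=True)
    if counts[-1] == 1:
        return counts
    counts[-1] -= 1        # a least-frequent letter loses one observation...
    counts.append(1)       # ...which becomes a new singleton letter
    return counts
```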
To show how well Equation (9) fits $(\hat{\Delta}_1, \hat{\Delta}_2, \ldots, \hat{\Delta}_{n-1})$ for a typical sample, we include an example of the fit in part (a) of Figure 1. Here we plot $v$ against $\ln\hat{\Delta}_v$; the overlaid curve represents the fitted $\delta(v)$. This is based on a simulated random sample of size 200 from a Zipf distribution. To give a snapshot of the performance of the proposed estimator, we conducted several numerical simulations and compared the absolute value of the bias of the proposed estimator to that of several commonly used ones. The simulations were performed on the following distributions:
  • (Triangular) $p_k = k/5050$, for $k = 1, 2, \ldots, 100$; here $H \approx 4.416898$,
  • (Zipf) $p_k = C/k$, for $k = 1, 2, \ldots, 100$; here $C \approx 0.192776$ and $H \approx 3.680778$.
We compared our estimator with the plug-in estimator given in Equation (2), the Miller–Madow estimator given in Equation (3), and $\hat{H}_z$ given in Equation (4).
For each distribution and each estimator, the bias was approximated as follows. We simulate $n$ observations from the given distribution and evaluate the estimator. We then subtract the estimated value from the true value $H$. We repeat this 2000 times, average the errors, and take the absolute value of the estimated bias. The procedure was slightly different for the estimator given in Equation (4): since, in this case, the bias has the explicit form given in Equation (6), we approximate the bias by truncating this series at $100{,}000$. The sample sizes considered in these simulations range from $n = 22$ to $n = 500$. The estimated biases are plotted in Figure 2; part (a) gives the plot for the triangular distribution and part (b) gives the plot for the Zipf distribution. Note that our proposed estimator has the lowest bias in all cases and that it is significantly lower for small samples.
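For reference, the Monte Carlo approximation of the bias just described can be sketched as follows; the function name, the random seed, and the use of the plug_in_entropy sketch from Section 1 are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_abs_bias(estimator, probs, n, reps=2000):
    """Approximate |bias| of an entropy estimator: draw `reps` iid samples of size n
    from `probs`, average (H - estimate), and take the absolute value."""
    H = -np.sum(probs * np.log(probs))
    errors = []
    for _ in range(reps):
        counts = np.bincount(rng.choice(len(probs), size=n, p=probs))
        errors.append(H - estimator(counts))
    return abs(np.mean(errors))

# Example: triangular distribution p_k = k/5050, k = 1, ..., 100
probs = np.arange(1, 101) / 5050.0
# approx_abs_bias(plug_in_entropy, probs, n=100)
```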
Figure 2. We compare the absolute value of the bias of our estimator (New Sharp) with that of the plug-in (MLE), the Miller–Madow (MM), and the one given in Equation (4) (New). The x-axis is the sample size and the y-axis is the absolute value of the bias. The plots correspond to the distributions: (a) Triangular and (b) Zipf.
Another estimator of entropy is the NSB estimator of Nemenman, Shafee, and Bialek [9] (see also Nemenman [10] and the references therein). The authors of that paper provide code to do the estimation, which is available at http://nsb-entropy.sourceforge.net/. We used version 1.13, which was updated on 20 July 2011. Unlike the estimators discussed above, this one requires knowledge of $|K|$, and, for this reason, we consider it separately. Plots comparing the bias of our estimator and that of NSB are given in Figure 3. Note that our estimator is mostly comparable with NSB, although in certain regions it performs a bit better.
Figure 3. We compare the absolute value of the bias of our estimator with that of the NSB estimator. The x-axis is the sample size and the y-axis is the absolute value of the bias. The plots correspond to the distributions: (a) Triangular and (b) Zipf.

2.2. Case: K is Countably Infinite

We now turn to the case when $|K|$ is countably infinite. We need to find a reasonable parametric form for $\delta(v)$. The following facts suggest an approach.
  • For any distribution on a countably infinite alphabet, $\Delta_v \ge O(v^{-2})$.
  • If $p_k = C k^{-\lambda}$ for $k \ge 1$, where $\lambda > 1$, then $\Delta_v = O\!\left( v^{-(2 - 1/\lambda)} \right)$.
These facts tell us that $\Delta_v$ decays more slowly than $O(1/v^2)$. Moreover, the heavier the tail of the distribution, the slower the decay appears to be. Since, even for very heavy-tailed distributions, we have polynomial decay, this suggests that the rate of decay is essentially $O(1/v^{\beta})$ for some $\beta \in (0, 2]$. Thus, for all practical purposes, a reasonable model is
\delta(v) = \frac{\alpha}{v^{\beta}}    (16)
where $\alpha > 0$ and $\beta > 0$ (we allow $\beta > 2$ to make the model more flexible). The model parameters are estimated by using least squares to fit
\ln \delta(v) = \ln \alpha - \beta \ln v    (17)
with the data in Equation (11). We denote the estimate of $\ln\alpha$ by $\widehat{\ln\alpha}$, and those of $\alpha$ and $\beta$ by
\hat{\alpha} = e^{\widehat{\ln\alpha}} \quad \text{and} \quad \hat{\beta}    (18)
The bias-adjusted estimator $\hat{H}_z^\sharp$ is given by
\hat{H}_z^\sharp = \hat{H}_z + \sum_{v=n}^{\infty} \hat{\alpha} v^{-\hat{\beta}}    (19)
where the summation may be approximated by the integral $\int_n^{\infty} \hat{\alpha} v^{-\hat{\beta}} \, dv = \hat{\alpha}\, n^{1-\hat{\beta}} / (\hat{\beta} - 1)$ or by truncating the sum at some very large integer $V$. For the simulation results presented below, we take $v_0 = 10$ and use the integral approximation to the sum.
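A minimal sketch of the infinite-alphabet adjustment, in the same spirit as the finite-alphabet sketch above; it also folds in the $\beta_0$ safeguard described in the first bullet below. As before, the interface is ours and $\hat{\Delta}_v$ is assumed to come from an estimator such as the zhang_entropy sketch.

```python
import numpy as np

def infinite_alphabet_bias(delta_hat, v0=10, beta0=1.5):
    """Fit ln(Delta_hat_v) = ln(alpha) - beta*ln(v) (Equation (17)) over v = v0, ..., n-1,
    then return the integral approximation to the tail of Equation (19):
    int_n^inf alpha_hat * v**(-beta_hat) dv = alpha_hat * n**(1 - beta_hat) / (beta_hat - 1).
    If beta_hat < beta0, the slope is fixed at beta0 and only ln(alpha) is refit."""
    n = len(delta_hat) + 1
    v = np.arange(v0, n, dtype=float)
    logd = np.log(delta_hat[v0 - 1:])
    X = np.column_stack([np.ones_like(v), -np.log(v)])
    (ln_alpha, beta), *_ = np.linalg.lstsq(X, logd, rcond=None)
    if beta < beta0:
        # Least squares for the intercept-only model ln(delta) + beta0*ln(v) = ln(alpha).
        ln_alpha = np.mean(logd + beta0 * np.log(v))
        beta = beta0
    return np.exp(ln_alpha) * n**(1.0 - beta) / (beta - 1.0)
```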
As in the case when K is finite, we need to make adjustments in certain situations.
  • If $\hat{\beta} \le 1$, then the sum in Equation (19) diverges. In fact, when $\hat{\beta}$ is close to 1 (even if it is larger than 1), this causes problems. To deal with this, we do the following. Choose $\beta_0 \in (1, 2]$; if $\hat{\beta} < \beta_0$, our fitted results are abandoned, and instead the new model
    \ln \delta(v) + \beta_0 \ln v = \ln \alpha
    is fit using least squares. In our simulations we take $\beta_0 = 1.5$. The resulting estimate of the sole parameter $\alpha$ is given by $\hat{\alpha} = e^{\widehat{\ln\alpha}}$, and a modified estimate of $H$ is given by
    \hat{H}_z^\sharp = \hat{H}_z + \sum_{v=n}^{\infty} \hat{\alpha} v^{-\beta_0}
  • When a sample has no letters with frequency 1, we run into the same trouble as in the case when $|K|$ is finite. We solve this problem in the same way as in the previous case.
To show how well Equation (16) fits $(\hat{\Delta}_1, \hat{\Delta}_2, \ldots, \hat{\Delta}_{n-1})$ in a typical sample, we include an example of the fit in part (b) of Figure 1. Here we plot $\ln v$ against $\ln\hat{\Delta}_v$; the overlaid curve represents the fitted $\delta(v)$. This is based on a simulated random sample of size 200 from a Poisson distribution. As in the previous case, we evaluate the performance of the proposed estimator by conducting several numerical simulations. We estimated entropy for the following distributions:
  • (Power) $p_k = C/k^2$, for $k \ge 1$; here $C = 6/\pi^2$ and $H \approx 1.637622$,
  • (Geometric) $p_k = (1 - 1/e) e^{-k}$, for $k \ge 0$; here $H \approx 1.040652$,
  • (Poisson) $p_k = e^{-\lambda} \lambda^k / k!$, for $k \ge 0$, where $\lambda = e$ and $H \approx 1.87722$.
Again, we compare with the plug-in estimator, the Miller–Madow estimator, and $\hat{H}_z$. Although the Miller–Madow estimator is motivated by the case where $|K|$ is finite, it is often a good estimator in the infinite case as well. We approximate the bias as in the previous case. The estimated biases for sample sizes ranging from $n = 22$ to $n = 500$ are graphed in Figure 4. Parts (a), (b), and (c) of Figure 4 correspond to the three distributions listed above. Note that the new estimator outperforms the other estimators. However, the improvement is not as drastic as in the case when $|K|$ is finite. We also make separate comparisons with NSB. These are given in Figure 5. Although NSB is designed for the case when $|K|$ is known and finite, we can extend its use to the infinite case by telling the program that $|K|$ is some large but finite value. In our simulations we take it to be $10^{10}$. We see that the performance of our estimator is roughly comparable with, or somewhat better than, that of NSB.
Remark 1.
In practice, it may not be known a priori whether $K$ is finite or infinite. In such situations, one does not know which of the two adjustments to use. One approach is as follows: fit both models, denote their respective mean squared errors by $MSE_1$ and $MSE_2$, and use the one with the smaller $MSE$.
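One way to make Remark 1 operational is sketched below. We read "mean squared error" as the degrees-of-freedom-adjusted residual mean square, SSE/(m - p); this reading is our assumption (the remark does not specify it), and without the adjustment the richer finite-alphabet model would never be rejected, since it nests the infinite-alphabet model.

```python
import numpy as np

def choose_adjustment(delta_hat, v0=10):
    """Remark 1 sketch: fit both parametric forms of delta(v) on the log scale and
    return the name of the one with the smaller residual mean square SSE / (m - p)."""
    n = len(delta_hat) + 1
    v = np.arange(v0, n, dtype=float)
    logd = np.log(delta_hat[v0 - 1:])
    m = len(v)
    X1 = np.column_stack([np.ones_like(v), -np.log(v), -v])   # finite-K model, Equation (10)
    c1, *_ = np.linalg.lstsq(X1, logd, rcond=None)
    mse1 = np.sum((logd - X1 @ c1) ** 2) / (m - 3)
    X2 = np.column_stack([np.ones_like(v), -np.log(v)])       # infinite-K model, Equation (17)
    c2, *_ = np.linalg.lstsq(X2, logd, rcond=None)
    mse2 = np.sum((logd - X2 @ c2) ** 2) / (m - 2)
    return "finite" if mse1 < mse2 else "infinite"
```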
Figure 4. We compare the absolute value of the bias of our estimator (New Sharp) with that of the plug-in (MLE), the Miller–Madow (MM), and the one given in Equation (4) (New). The x-axis is the sample size and the y-axis is the absolute value of the bias. The plots correspond to the distributions: (a) Power; (b) Geometric; and (c) Poisson.
Figure 5. We compare the absolute value of the bias of our estimator with that of the NSB estimator. The x-axis is the sample size and the y-axis is the absolute value of the bias. The plots correspond to the distributions: (a) Power; (b) Geometric; and (c) Poisson.

3. Summary and Discussion

In Zhang [6] an estimator of entropy was introduced which, in the case of a finite alphabet, has exponentially decaying bias. In this paper we described a methodology for further reducing the bias of this estimator. Our approach rests on the observation that the bias of this estimator decays exponentially fast when the alphabet is finite and at a polynomial rate when the alphabet is infinite. We estimate the bias by fitting an appropriate parametric function and then add this estimate of the bias to the estimated entropy. Simulation results suggest that, at least in the situations considered, the bias is drastically reduced for small sample sizes. Moreover, our estimator outperforms several standard estimators and is comparable with the well-known NSB estimator.
One situation where estimators of entropy run into difficulty is the case where all $n$ observations are singletons, that is, when each observation is a different letter. There is not much that can be done in this case since the sample has very little information about the distribution (except that, in some sense, it is very “heavy tailed”). In this case, we can say that the sample size is very small, even if $n$ is substantial.
This suggests a way to think about small sample sizes. Before discussing this, we describe a common approach to defining what a small sample size is. When $|K|$ is finite, a common heuristic is to say that a sample is small if its size $n$ is less than $\epsilon |K|$, for some $\epsilon \in (0, 1)$. While this may be useful in certain situations, it has several limitations. First, it assumes that $|K|$ is known and finite, and second, there appears to be no good way to choose $\epsilon$. Moreover, it ignores the fact that some letters may have very small probabilities and may not be very important for entropy estimation. To underscore this point, consider two models. The first has an alphabet of size $K$, while the second has a much larger alphabet size, say $K^2$. However, assume that on $K$ of its letters the second model has almost the same probabilities as those of the first model, while the remaining $K^2 - K$ letters have very tiny probabilities. The heuristic described above may call a sample of size $n$ from the first population large while calling a sample of the same size from the second population very small, even though, for the purposes of entropy estimation, the two samples may have approximately the same amount of information about their respective distributions.
What matters is not how big the sample is relative to $|K|$, but how much information about the population the sample possesses. Thus, instead of starting with an external idea of what constitutes a small sample, we can “ask” the sample how much information it contains about the distribution. If it contains very little information about the distribution, then we can call it a “small sample.” When one has a small sample, in this sense, one should be very careful about using it for inference and, in particular, for entropy estimation.
One way to quantify how much information a sample has is the sample’s coverage of the population, which is given by $\pi_0 = \sum_{k \in K} p_k 1[y_k > 0]$. Thus, one can consider the sample large if $\pi_0$ is large and small if $\pi_0$ is small. Of course, to evaluate $\pi_0$ one needs to know the underlying distribution. However, an estimator of $\pi_0$ is given by Turing’s formula, $T = 1 - N_1/n$, where $N_1$ is the number of singleton letters in the sample. Interested readers are referred to Good [13], Robbins [14], Esty [15], Zhang and Zhang [16], and Zhang [17] for details.
Note that, for the situation described above, where each letter is a singleton, we have $N_1 = n$ and $T = 0$. Thus, the sample has essentially no coverage of the distribution. Which values of $T$ constitute a small sample and which constitute a large sample is an interesting question that we leave for another time.
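Turing's formula is immediate to compute from the sample counts; a small sketch (names ours):

```python
import numpy as np

def turing_coverage(counts):
    """Turing's formula T = 1 - N1/n, an estimator of the sample coverage
    pi_0 = sum_k p_k * 1[y_k > 0]; N1 is the number of singleton letters."""
    counts = np.asarray(counts)
    n = counts.sum()
    n1 = np.count_nonzero(counts == 1)
    return 1.0 - n1 / n

# An all-singleton sample gives T = 0: essentially no coverage of the distribution.
print(turing_coverage([1] * 50))   # 0.0
```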
We end this paper by discussing some future work. While our simulations suggest that the estimator introduced in this paper is quite useful, it is important to derive its theoretical properties. In a different direction, we note that, in practice, one often needs to compare one estimated entropy to another. An approach to doing this is to use the asymptotic normality of $\hat{H}$ or $\hat{H}_z$ (or a different estimator, if available) to set up a two-sample z-test. We recently conducted a series of studies on testing the equality of two entropies using this approach. We found two major difficulties that are, in retrospect, not very surprising:
  • The difference between biases due to different sample sizes causes a huge inflation of the Type II error rate, even with reasonably large samples.
  • The bias in estimating the variance of an entropy estimator is also sizable and persistent.
Neither of these issues is well studied in the current literature. We strongly believe that more research on this front should be encouraged.

Acknowledgements

The research of the first author is partially supported by NSF Grant DMS 1004769.

Conflict of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
  2. Miller, G. Note on the Bias of Information Estimates. In Information Theory in Psychology: Problems and Methods; Quastler, H., Ed.; Free Press: Glencoe, IL, USA, 1955; pp. 95–100.
  3. Basharin, G. On a statistical estimate for the entropy of a sequence of independent random variables. Theory Probab. Appl. 1959, 4, 333–336.
  4. Antos, A.; Kontoyiannis, I. Convergence properties of functional estimates for discrete distributions. Random Struct. Algorithms 2001, 19, 163–193.
  5. Paninski, L. Estimation of entropy and mutual information. Neural Comput. 2003, 15, 1191–1253.
  6. Zhang, Z. Entropy estimation in Turing’s perspective. Neural Comput. 2012, 24, 1368–1389.
  7. Zahl, S. Jackknifing an index of diversity. Ecology 1977, 58, 907–913.
  8. Strong, S.P.; Koberle, R.; de Ruyter van Steveninck, R.R.; Bialek, W. Entropy and information in neural spike trains. Phys. Rev. Lett. 1998, 80, 197–200.
  9. Nemenman, I.; Shafee, F.; Bialek, W. Entropy and Inference, Revisited. In Advances in Neural Information Processing Systems 14; Dietterich, T.G., Becker, S., Ghahramani, Z., Eds.; MIT Press: Cambridge, MA, USA, 2002.
  10. Nemenman, I. Coincidences and estimation of entropies of random variables with large cardinalities. Entropy 2011, 13, 2013–2023.
  11. Zhang, Z. Asymptotic normality of an entropy estimator with exponentially decaying bias. IEEE Trans. Inf. Theory 2013, 59, 504–508.
  12. Zhang, Z.; Zhou, J. Re-parameterization of multinomial distribution and diversity indices. J. Stat. Plan. Inf. 2010, 140, 1731–1738.
  13. Good, I.J. The population frequencies of species and the estimation of population parameters. Biometrika 1953, 40, 237–264.
  14. Robbins, H.E. Estimating the total probability of the unobserved outcomes of an experiment. Ann. Math. Stat. 1968, 39, 256–257.
  15. Esty, W.W. A normal limit law for a nonparametric estimator of the coverage of a random sample. Ann. Stat. 1983, 11, 905–912.
  16. Zhang, C.-H.; Zhang, Z. Asymptotic normality of a nonparametric estimator of sample coverage. Ann. Stat. 2009, 37, 2582–2595.
  17. Zhang, Z. A multivariate normal law for Turing’s formulae. Sankhya A 2013, 75, 51–73.
