Article

Coincidences and Estimation of Entropies of Random Variables with Large Cardinalities

Ilya Nemenman
Departments of Physics and Biology and Computational and Life Sciences Initiative, Emory University, 400 Dowman Dr., Atlanta, GA 30322, USA
Entropy 2011, 13(12), 2013-2023; https://doi.org/10.3390/e13122013
Submission received: 1 November 2011 / Revised: 8 December 2011 / Accepted: 15 December 2011 / Published: 19 December 2011

Abstract: We perform an asymptotic analysis of the NSB estimator of the entropy of a discrete random variable. The analysis illuminates the dependence of the estimates on the number of coincidences in the sample and shows that the estimator has a well-defined limit for a large cardinality of the studied variable. This allows estimation of entropy with no a priori assumptions about the cardinality. A software implementation of the algorithm is available.
Classification:
MSC 94A17, 62F12, 62F15

1. Introduction

Estimation of functions of a discrete random variable with an unknown probability distribution is one of the simplest problems in statistics. However, the simplicity vanishes in an extremely undersampled regime, where K, the cardinality or alphabet size of the variable, is much larger than N, the number of samples. In this case, the average number of samples per possible outcome, or bin, is less than one, and the relative uncertainty about the underlying probability distribution and its various statistics is large. To decrease the posterior error, one may turn to Bayesian statistics and bias the set of a priori admissible distributions. However, finding an optimal bias-variance tradeoff is not easy. For severely undersampled cases, controlling the variance often makes an estimator a function of the prior, rather than of the measured data.
This is often the case for inference of the Boltzmann-Shannon entropy, $H = -\sum_{i=1}^{K} q_i \ln q_i$ (here $q_i$ is the probability of an event $i$), an important characteristic of a discrete variable. In this paper, all logarithms are natural, and the unit of entropy is the nat. Simple estimators of entropy have low variances but high biases that are difficult to calculate due to the divergence of the logarithm near zero [1]. Developments driven in part by computational biology applications have solved this problem in the moderately undersampled regime, $N \lesssim K$ and $N \gtrsim e^{H}$ [1,2,3,4,5,6,7,8,9]. Interestingly, they also resulted in the understanding that it is impossible to estimate entropy with zero bias uniformly over all distributions for a smaller N. However, Ma has argued [10] that, since coincidences in the data start to occur at $N \sim \sqrt{e^{H}}$, it is possible to estimate entropies even in the deeply undersampled regime, at least for some classes of probability distributions, such as uniform ones. Similar arguments are well known in the literature on estimation of population sizes from capture-recapture data (see, e.g., [11] for recent developments). There it has been recognized that the population size (and the population entropy) can be estimated long before every possible individual outcome has been sampled with a high probability [12].
In 2002, Nemenman, Shafee and Bialek introduced a method for entropy estimation, hereafter called NSB [13]. While the estimator has proven successful in the Ma square-root regime [14,15], a theoretical basis for the success has not been presented in the literature. Here we review the method and perform its asymptotic analysis. We verify the intuition that the estimator works in the Ma regime by counting coincidences. We point out that the method can be viewed as finding the number of yet unseen bins with nonzero probability given K, the maximum cardinality of the variable. While estimation of K by model selection techniques cannot work (see below), we show that the method has a non-trivial limit as $K \to \infty$. Thus one should be able to calculate entropies of discrete random variables even without knowing their cardinalities. Our analysis allows for an efficient numerical implementation of the NSB estimator, which we have made available from [16].

2. Summary of the NSB Method

We use Bayes' rule to express the posterior probability of a probability distribution $\mathbf{q} \equiv \{q_i\}$, $i = 1, \dots, K$, of a discrete random variable with the help of its a priori probability, $P(\mathbf{q})$. Thus if $n_i$ i.i.d. samples from $\mathbf{q}$ are observed in bin $i$, such that $\sum_{i=1}^{K} n_i = N$, then the posterior, $P(\mathbf{q}|\mathbf{n})$, is
$$P(\mathbf{q}|\mathbf{n}) = \frac{P(\mathbf{n}|\mathbf{q})\,P(\mathbf{q})}{P(\mathbf{n})} = \frac{\prod_{i=1}^{K} q_i^{n_i}\,P(\mathbf{q})}{\int_0^1 d^K q\,\prod_{i=1}^{K} q_i^{n_i}\,P(\mathbf{q})} \qquad (1)$$
Following [13], we focus on the popular Dirichlet family of priors, indexed by a hyperparameter β:
$$P_\beta(\mathbf{q}) = \frac{1}{Z_\beta}\,\delta\!\left(1 - \sum_{i=1}^{K} q_i\right)\prod_{i=1}^{K} q_i^{\beta-1}, \qquad Z_\beta = \frac{\Gamma^K(\beta)}{\Gamma(K\beta)} \qquad (2)$$
Here the δ-function and $Z_\beta$ enforce normalizations of $\mathbf{q}$ and $P_\beta(\mathbf{q})$, respectively, and Γ stands for Euler's Γ-function. These priors are common in statistics since, being conjugate to the multinomial likelihood, they result in analytically tractable posteriors. For example, Wolpert and Wolf [17] calculated posterior averages, here denoted as $\langle \cdots \rangle_\beta$, of many interesting quantities, including the distribution itself,
$$\langle q_i \rangle_\beta = \frac{n_i + \beta}{N + \kappa}, \qquad \kappa \equiv K\beta \qquad (3)$$
and the moments of its entropy, which we will not reprint here.
According to Equation (3), Dirichlet priors add extra β samples (pseudocounts) to each bin. Thus for $\beta \gg N/K$, the data are unimportant, on average, and $P(\mathbf{q}|\mathbf{n})$ is dominated by almost uniform distributions, $q_i \approx 1/K$. Then the posterior mean of the entropy is strongly biased upward towards its maximum possible value of $H_{\max} = \ln K$. Similarly, for $\beta \ll N/K$, distributions in the vicinity of the frequentist's maximum likelihood estimate, $\mathbf{q} = \mathbf{n}/N$, are important, and $\langle H \rangle_\beta$ is biased downward [1].
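To make the pseudocount picture concrete, here is a minimal numerical illustration of Equation (3); the counts and the β values are made up for the example.

```python
import numpy as np

# Posterior mean of q_i under a symmetric Dirichlet(beta) prior, Equation (3):
# <q_i> = (n_i + beta) / (N + K*beta).
def dirichlet_posterior_mean(counts, beta):
    counts = np.asarray(counts, dtype=float)
    N, K = counts.sum(), counts.size
    return (counts + beta) / (N + K * beta)

counts = np.array([5, 3, 2] + [0] * 97)      # hypothetical data: N = 10 samples, K = 100 bins
for beta in (0.001, 0.1, 10.0):              # beta << N/K, beta ~ N/K, beta >> N/K
    q = dirichlet_posterior_mean(counts, beta)
    print(beta, q[:3], q[-1])                # large beta pulls every q_i toward 1/K = 0.01
```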
Reference [13] traced this problem to the properties of the Dirichlet family. Its members encode reasonable a priori assumptions about $\mathbf{q}$, but not about $H(\mathbf{q})$. Instead, the a priori assumptions about the entropy are strongly biased, as seen from the a priori moments:
$$\xi(\beta) \equiv \langle H \rangle_{N=0,\,\beta} = \psi_0(\kappa + 1) - \psi_0(\beta + 1) \qquad (4)$$
$$\sigma^2(\beta) \equiv \langle (\delta H)^2 \rangle_{N=0,\,\beta} = \frac{\beta + 1}{\kappa + 1}\,\psi_1(\beta + 1) - \psi_1(\kappa + 1) \qquad (5)$$
Here $\psi_m(x) = (d/dx)^{m+1} \ln \Gamma(x)$ are the polygamma functions. $\xi(\beta)$ varies smoothly from 0 for $\beta = 0$, through 1 for $\beta \approx 1/K$, to $\ln K$ for $\beta \to \infty$. Further, $\sigma(\beta) \lesssim 1/\sqrt{K}$ for almost all β [13], which is negligibly small for large K. Thus a $\mathbf{q}$ that is typical in $P_\beta(\mathbf{q})$ has an entropy extremely close to some predetermined β-dependent value. This bias persists even when $N \gg K$ data are collected.
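These moments are easy to evaluate numerically. The sketch below computes Equations (4) and (5) with standard polygamma routines; the values of β and K are arbitrary choices for the illustration.

```python
import numpy as np
from scipy.special import polygamma

# A priori mean and variance of the entropy under a symmetric Dirichlet(beta) prior,
# Equations (4) and (5); polygamma(m, x) is psi_m(x).
def xi(beta, K):
    return polygamma(0, K * beta + 1) - polygamma(0, beta + 1)

def sigma2(beta, K):
    kappa = K * beta
    return (beta + 1) / (kappa + 1) * polygamma(1, beta + 1) - polygamma(1, kappa + 1)

for K in (10**2, 10**4, 10**6):
    beta = 1.0
    # the a priori std of H shrinks roughly as 1/sqrt(K): the prior pins H near xi(beta)
    print(K, float(xi(beta, K)), float(np.sqrt(sigma2(beta, K))))
```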
One should strive for the a priori distribution of the entropy, $P(H(\mathbf{q}))$, to be approximately uniform in order to have a chance of an unbiased estimator. NSB achieves the uniformity (but not necessarily zero bias) by noting that, following Equations (4) and (5), for large K, $P_\beta(H)$ is almost a δ-function. Thus a prior that averages over all non-negative values of β (and, correspondingly, over all $\xi \in [0; \ln K]$) may reduce the bias in the entropy estimation even for $N \ll K$. Reference [13] proposed the following infinite mixture of Dirichlet priors [18] for the averaging:
$$P(\mathbf{q}; \beta) = \frac{1}{Z}\,\delta\!\left(1 - \sum_{i=1}^{K} q_i\right)\prod_{i=1}^{K} q_i^{\beta-1}\,\frac{d\xi(\beta)}{d\beta}\,P(\beta) \qquad (6)$$
Here Z is again a normalizing coefficient, and $d\xi/d\beta$ ensures uniformity in ξ, rather than in β. A non-constant prior on β, $P(\beta)$, may be used if needed, but we will not focus on this term from now on. Such a Dirichlet mixture results in $P(\mathbf{q}) \neq \mathrm{const}$, introducing biases in the estimation of $\mathbf{q}$ as a tradeoff for a possibly accurate estimation of H.
Inference with the prior, Equation (6), involves additional averaging over β (or, equivalently, ξ). The a posteriori moments of the entropy are
$$\widehat{H^m} = \frac{\int_0^{\ln K} d\xi\, \rho(\xi|\mathbf{n})\, \langle H^m \rangle_{\beta(\xi)}}{\int_0^{\ln K} d\xi\, \rho(\xi|\mathbf{n})} \qquad (7)$$
where the unnormalized posterior density is
$$\rho(\xi|\mathbf{n}) = P\big(\beta(\xi)\big)\,\frac{\Gamma\big(\kappa(\xi)\big)}{\Gamma\big(N + \kappa(\xi)\big)}\prod_{i=1}^{K}\frac{\Gamma\big(n_i + \beta(\xi)\big)}{\Gamma\big(\beta(\xi)\big)} \qquad (8)$$
Note that, for $N = 0$, $\rho(\xi|\mathbf{0}) = P(\beta(\xi))$. Thus if we choose $P(\beta(\xi)) = \mathrm{const}$, then the a priori assumptions about ξ are exactly uniform, as we had hoped to achieve. We note again that the uniformity of the prior is not equivalent to zero posterior bias.
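For concreteness, here is a sketch of the logarithm of the unnormalized posterior density, Equation (8), assuming a uniform $P(\beta(\xi))$; the function name is ours.

```python
import numpy as np
from scipy.special import gammaln

# Log of the unnormalized posterior density over beta, Equation (8), with P(beta(xi)) = const.
# `counts` holds the occupied bins only; empty bins contribute log[Gamma(beta)/Gamma(beta)] = 0.
def log_rho(beta, counts, K):
    counts = np.asarray(counts, dtype=float)
    N, kappa = counts.sum(), K * beta
    return (gammaln(kappa) - gammaln(N + kappa)
            + np.sum(gammaln(counts + beta) - gammaln(beta)))
```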
An additional reason for the choice of averaging over the model families, as in Equation (6), is provided by the theory of Bayesian model selection [13,19,20,21,22,23]. Specifically, families of probabilistic models of data that incorporate more models (have larger volumes in the model space) usually have high explanatory powers and include some models that are very likely a posteriori. However, they also include many extremely unlikely models, and the posterior probability averaged over the entire family is low. Thus the competition between the "goodness of fit" and the volume of the model space (the Occam factor) often attributes much of the posterior weight to model families that are relatively simple, but explain the data well. In the case of the NSB prior, different values of β index different model families. For small β, the estimates in Equation (3) are closer to the frequentist's maximum likelihood, explaining the data better. However, there is less smoothing, and the space of models is larger. Thus, as argued in [13], one expects that the integrals in Equation (7) are dominated by some $\beta^*$ with a small posterior variance, and then $\widehat{H^m} \approx \langle H^m \rangle_{\beta^*}$.
In this work, we start by investigating whether a maximum of the integrand in Equation (7) indeed exists. We then study its properties. The results of the analysis lead to a deeper understanding of the NSB method.

3. Saddle Point Analysis

We calculate the integrals in Equation (7) using the saddle point (a.k.a. Laplace) approximation. Since $\langle H^m \rangle_\beta$ does not depend on N, for $N \to \infty$ only the Γ-terms in ρ define the saddle. We write
$$\rho(\xi|\mathbf{n}) = P\big(\beta(\xi)\big)\,\exp\big[-\mathcal{L}_K(\mathbf{n}, \beta)\big] \qquad (9)$$
$$\mathcal{L}_K(\mathbf{n}, \beta) = -\sum_i \ln \Gamma(\beta + n_i) + K \ln \Gamma(\beta) - \ln \Gamma(\kappa) + \ln \Gamma(\kappa + N) \qquad (10)$$
Differentiating, we obtain the following equation for the saddle point (or the maximum likelihood) value, $\kappa^* = K\beta^*$:
$$\frac{1}{K}\sum_{i:\,n_i > 0} \psi_0(n_i + \beta^*) - \frac{K_1}{K}\,\psi_0(\beta^*) + \psi_0(\kappa^*) - \psi_0(\kappa^* + N) = 0 \qquad (11)$$
where $K_m$ denotes the number of bins that have at least m counts. Note that $N \ge K_1 \ge K_2 \ge \cdots$.
If $K \gg N$, and if there are many bins with multiple counts, i.e., $N - K_1 \gg 1$, then the (unknown) $\mathbf{q}$ is likely non-uniform. Thus the entropy is significantly smaller than its maximum possible value $H_{\max}$. Since for any $\beta = O(1)$, $\langle H \rangle_\beta \to H_{\max}$ [13], a small entropy estimate is achievable only if $\beta^* \to 0$ as $K \to \infty$. Thus we will look for
$$\kappa^* = \kappa_0 + \frac{1}{K}\,\kappa_1 + \frac{1}{K^2}\,\kappa_2 + \cdots \qquad (12)$$
where none of the $\kappa_j$ depends on K. Plugging Equation (12) into Equation (11), we get an equation for $\kappa_0$:
$$\frac{K_1}{\kappa_0} = \psi_0(\kappa_0 + N) - \psi_0(\kappa_0) \qquad (13)$$
The leading terms in the expansion of $\kappa^*$ are:
$$\kappa_1 = \frac{\sum_{i:\,n_i > 1}\left[\psi_0(n_i) - \psi_0(1)\right]}{K_1/\kappa_0^2 - \psi_1(\kappa_0) + \psi_1(\kappa_0 + N)} \qquad (14)$$
$$\kappa_2 = \frac{\left[\dfrac{K_1}{\kappa_0^3} + \dfrac{\psi_2(\kappa_0) - \psi_2(\kappa_0 + N)}{2}\right]\kappa_1^2 + \sum_{i:\,n_i > 1}\kappa_0\left[\psi_1(n_i) - \psi_1(1)\right]}{K_1/\kappa_0^2 - \psi_1(\kappa_0) + \psi_1(\kappa_0 + N)} \qquad (15)$$
We have calculated additional higher order terms. However, when $K \gg 1$, as is common in applications, these terms are rarely needed.
We now solve Equation (13). For $\kappa_0 \to 0$ and $N > 0$, the r.h.s. of the equation is approximately $1/\kappa_0$ [24]. For $\kappa_0 \to \infty$, it is close to $N/\kappa_0$. Thus if $N = K_1$, so that the number of coincidences among the data, $\Delta \equiv N - K_1$, is zero, then the l.h.s. majorates the r.h.s., and Equation (13) has no solution. That is, there is no saddle point in the integrand. If there are coincidences, a unique solution exists, and $\Delta \to 0$ means $\kappa_0 \to \infty$. Thus we search for $\kappa_0$ of the form $\kappa_0 \sim 1/\Delta + O(\Delta^0)$.
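In practice, $\kappa_0$ can also be found from Equation (13) by simple root bracketing. A sketch, assuming at least one coincidence (the function name and the sample numbers are ours):

```python
from scipy.special import digamma
from scipy.optimize import brentq

# Solve Equation (13), K_1/kappa_0 = psi_0(kappa_0 + N) - psi_0(kappa_0), for kappa_0.
# A solution exists only when Delta = N - K_1 >= 1 (at least one coincidence).
def solve_kappa0(N, K1):
    assert N - K1 >= 1, "no coincidences: Equation (13) has no solution"
    f = lambda k: K1 / k - (digamma(k + N) - digamma(k))
    # kappa_0 ~ N**2/(2*Delta) when coincidences are few, so bracket generously
    return brentq(f, 1e-6, 10.0 * N**2 / (N - K1))

print(solve_kappa0(N=100, K1=90))   # hypothetical sample with Delta = 10 coincidences
```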
It is useful to define:
$$f_N(j) \equiv \frac{\sum_{m=0}^{N-1} m^j}{N^{j+1}} \qquad (16)$$
where each of the $f_N$'s scales as $N^0$. Using properties of the polygamma functions [24] and defining $\delta \equiv \Delta/N$, we rewrite Equation (13) as
$$1 - \delta = \sum_{j=0}^{\infty} (-1)^j f_N(j)\,(\kappa_0/N)^{-j} \qquad (17)$$
Combined with the previous observations, Equation (17) suggests that we look for $\kappa_0$ of the form
$$\kappa_0 = N\left(\frac{b_{-1}}{\delta} + b_0 + b_1\delta + \cdots\right) \qquad (18)$$
where each of the $b_j$'s is independent of δ and scales as $N^0$.
Substituting Equation (18) into Equation (17), we find the series expansion self-consistent, and
$$b_{-1} = f_N(1) = \frac{N-1}{2N} \qquad (19)$$
$$b_0 = -\frac{f_N(2)}{f_N(1)} = \frac{-2N+1}{3N} \qquad (20)$$
$$b_1 = -\frac{f_N^2(2)}{f_N^3(1)} + \frac{f_N(3)}{f_N^2(1)} = \frac{N^2 - N - 2}{9(N^2 - N)} \qquad (21)$$
Again, more terms have been calculated and are used in the software implementation of the estimator.
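As a sanity check of the expansion, the first few terms of Equation (18) with the coefficients (19)-(21) can be compared against the defining relation, Equation (13); the sample values below are made up.

```python
from scipy.special import digamma

# Series approximation for kappa_0, Equation (18), with coefficients (19)-(21).
def kappa0_series(N, K1):
    delta = (N - K1) / N
    b_m1 = (N - 1.0) / (2.0 * N)
    b_0 = (1.0 - 2.0 * N) / (3.0 * N)
    b_1 = (N**2 - N - 2.0) / (9.0 * (N**2 - N))
    return N * (b_m1 / delta + b_0 + b_1 * delta)

N, K1 = 100, 90
k0 = kappa0_series(N, K1)
# residual of Equation (13) at the series value; it is small when delta = Delta/N is small
print(k0, K1 / k0 - (digamma(k0 + N) - digamma(k0)))
```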
The obtained expressions present the saddle point value $\beta^*$ (or $\kappa^*$, or $\xi^*$) as a power series in $1/K$ and δ. To complete the evaluation of Equation (7), we now calculate the curvature at this saddle point:
$$\left.\frac{\partial^2 \mathcal{L}_K}{\partial \xi^2}\right|_{\xi(\beta^*)} = \left.\frac{\partial^2 \mathcal{L}_K}{\partial \beta^2}\,\frac{1}{(d\xi/d\beta)^2}\right|_{\beta^*} = \Delta + N\,O(\delta^2) \qquad (22)$$
Notice that the curvature does not scale as a power of N, as was suggested in [13]. The uncertainty in $\xi^*$ is determined, to the first order, only by the coincidences. One can understand this by considering $K \gg 1$ with $q_i \ll 1$ for most of the bins. Then counts of $n_i = 1$ are not informative for entropy estimation since they can correspond to massive bins, as well as to some random bins from the sea of negligible ones. However, coinciding counts likely correspond to high-probability bins, which should influence the entropy estimate. Note also that, to the first order in $1/K$, the exact positioning of the coincidences does not matter: for a fixed Δ, a few coincidences in many bins or many coincidences in a single one produce the same saddle point and the same curvature around it. While this is an artifact of the specific choice of the prior $P(H(\mathbf{q}))$, the similarity to Ma's coincidence counting [10] is intriguing.
In summary, if the number of coincidences $\Delta \gg 1$, then the saddle point analysis is self-consistent. A specific value $\beta^*$ is selected a posteriori, and the variance of the entropy is small.

Numerical Implementation

The series expansions calculated above form the basis for a numerical implementation of the NSB algorithm. To calculate the posterior mean and variance of the NSB entropy estimator using Equation (7), three integrals must be evaluated numerically (the normalization, and the first and second moments of H). The algorithm for this is as follows (a condensed code sketch follows the list). When $\Delta = 0$, the integrands are not peaked, and the integrals can be evaluated by simple Gaussian quadratures or other user-selected methods. If instead $\Delta \geq 1$, the integrands will be peaked, strongly so if $\Delta \gg 1$. Identification of the location of the peaks is then essential before the numerical integration is done. We proceed as follows:
  • The saddle point (the maximum of $\rho(\xi|\mathbf{n})$) is found numerically by:
    (a) evaluating an approximation for $\kappa_0$ using the first few terms of the series, Equation (18);
    (b) using the approximate value as a starting point for the Newton-Raphson iterative algorithm to solve for $\kappa_0$ from Equation (13);
    (c) plugging the solution into the series expansion for the saddle $\kappa^*$, Equation (12);
    (d) and, finally, using the latter solution as a starting point for the Newton-Raphson search of a more accurate value of $\kappa^*$ in Equation (11).
  • Each of the integrands in Equation (7) is divided by the value of $\rho(\xi|\mathbf{n})$ at the saddle point, so that the maximum of the integrands is $O(1)$.
  • Curvature around the saddle point (and hence the posterior variance) is evaluated numerically.
  • The integrals are evaluated numerically over the range that spans a few standard deviations on both sides of the saddle point; the range is controlled by the user-specified desired accuracy.
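The following condensed sketch illustrates the procedure; it is not the reference implementation from [16], and the helper names, parameter choices, and the synthetic sample are ours. It integrates over $\ln\beta$ with the Jacobian $(d\xi/d\beta)\,\beta$, normalizes the integrand by its peak value, and uses the standard posterior mean of the entropy at fixed β [17].

```python
import numpy as np
from scipy.special import gammaln, digamma, polygamma
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def nsb_entropy(counts, K):
    """Posterior mean of the entropy (nats) for a histogram `counts` over K possible bins."""
    n = np.asarray([c for c in counts if c > 0], dtype=float)
    N = n.sum()

    def dxi_dbeta(beta):                 # Jacobian d(xi)/d(beta) from Equation (4)
        return K * polygamma(1, K * beta + 1) - polygamma(1, beta + 1)

    def log_rho(beta):                   # log of Equation (8) with a uniform P(beta(xi))
        kappa = K * beta
        return (gammaln(kappa) - gammaln(N + kappa)
                + np.sum(gammaln(n + beta) - gammaln(beta)))

    def H_mean(beta):                    # <H>_beta at fixed beta (Wolpert-Wolf formula [17])
        kappa = K * beta
        p_occ = (n + beta) / (N + kappa)
        p_emp = beta / (N + kappa)       # each of the (K - K_1) empty bins
        return (digamma(N + kappa + 1)
                - np.sum(p_occ * digamma(n + beta + 1))
                - (K - n.size) * p_emp * digamma(beta + 1))

    # Step 1: locate the saddle point (peak of rho) on a log scale in beta.
    peak = minimize_scalar(lambda x: -log_rho(np.exp(x)),
                           bounds=(-30.0, 10.0), method="bounded")
    x_star, log_rho_star = peak.x, -peak.fun

    # Steps 2-4: integrate over x = ln(beta) around the peak, with the integrand
    # normalized by its peak value; d(xi) = (d xi/d beta) * beta * dx.
    def weight(x):
        beta = np.exp(x)
        return np.exp(log_rho(beta) - log_rho_star) * dxi_dbeta(beta) * beta

    lo, hi = x_star - 30.0, x_star + 30.0
    Z, _ = quad(weight, lo, hi, limit=200)
    HZ, _ = quad(lambda x: weight(x) * H_mean(np.exp(x)), lo, hi, limit=200)
    return HZ / Z

# Hypothetical example: 1000 samples from a long-tailed distribution over K = 10**5 bins.
rng = np.random.default_rng(0)
sample = rng.zipf(2.0, size=1000) % 10**5
print(nsb_entropy(np.bincount(sample), K=10**5))
```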
The above algorithm has been implemented in Octave/Matlab and C++. It is available from [16]. The input to the routines is either the histogram of counts (Octave/Matlab and C++), or a series of samples (C++ only). The output of the routines is either the posterior mean and the standard deviation of the entropy, together with the position of the saddle point, or a variety of diagnostic information if the integration fails for any reason. The C++ version is implemented specifically to allow estimation of entropies on alphabets with arbitrarily large cardinalities. It is limited only by the ability of the data series to fit in the computer memory.

4. Choosing a Value for K?

We are interested in the regime $N \ll K$, when the number of pseudocounts in the occupied bins, $K_1\beta$, is negligible compared to their number in the empty bins, $(K - K_1)\beta \approx K\beta$. Then Equation (3) and Equation (8) show that selecting β (i.e., integrating over it) means balancing N, the number of actual counts, against $\kappa = K\beta$, the number of pseudocounts or, equivalently, the scaled number of unoccupied bins. K is often unknown in real-life applications, or the number of possible outcomes is a countable infinity. Estimation of K from data has proven to be a hard problem, only solved completely for uniform distributions [10,11]. One can consider varying K (instead of β) to find its maximum a posteriori value when performing the Bayesian integration over κ.
To see that this will not work, we note that a smaller K leads to a higher maximum likelihood since the total number of pseudocounts is smaller. Unfortunately, since fewer bins (degrees of freedom) are then available, a smaller K also means a smaller volume in the distribution space. Thus Bayesian averaging over K is trivial: the smallest possible number of bins (i.e., no empty bins) dominates. This can be seen from Equation (8): only the first ratio of Γ-functions in the posterior density depends on K, and it is maximized for $K = K_1$. Thus straightforward selection of the value of K is not an option. However, the next section suggests a way around this hurdle.

5. Unknown or Infinite K

Often the true value of K is unknown because its simple estimate is intolerably large. For example, consider measuring the entropy of $\ell$-grams in printed English [25] using an alphabet with 29 characters: 26 different letters, one symbol for digits, one space, and one punctuation mark. Then for $\ell = 10$, a naive estimate of K is $29^{10} \approx 10^{14}$. Only very few of all possible 10-grams are allowed by the grammar, but one does not know how many exactly. Thus one has to work in the space of the full cardinality, which is ridiculously undersampled.
As shown in Section 3, NSB is well defined even for finite N and extremely large K, provided $\Delta \geq 1$. Moreover, if $K \to \infty$, then the expressions simplify since only the first term in Equation (12) needs to be kept. Even more interestingly, for an increasing K and $\beta \gg 1/K$, $P_\beta(H)$ becomes closer to a delta function since the a priori variance of the entropy drops to zero as $1/K$, Equation (5). Thus NSB becomes more "certain" as K increases. Correspondingly, a possible solution to the problem of unknown cardinality is to use an upper bound estimate for K: it is better to overestimate K than to underestimate it. Even $K \to \infty$ can be used. The insensitivity of the method to the value of K was explored empirically in [14].
Which assumptions allow NSB to use a few data points to specify the entropy of a variable with even an infinite cardinality? A typical distribution in the Dirichlet family has a specific rank-ordered (Zipf) plot [13]: the number of bins with probability less than some q is given by an incomplete B-function, I,
$$\nu(q) = K\,I(q; \beta, \kappa - \beta) \equiv K\int_0^q \frac{dx\,x^{\beta-1}(1-x)^{\kappa-\beta-1}}{B(\beta, \kappa - \beta)} \qquad (23)$$
where B is the usual complete B-function. NSB estimates the best value of β using the bins with coincidences, the head of the rank-ordered plot. But knowing β defines the tail, where no data have been observed yet, allowing entropy estimation. Thus NSB relies on the rank-ordered tail of the studied distribution being not too far away from the form in Equation (23). If the Zipf plot of the studied distribution has a substantially longer tail, then one should not trust the results of the method. An empirical procedure for detecting this case has been suggested in [14,15].
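Equation (23) is the regularized incomplete Beta function scaled by K, so the a priori shape of the tail is easy to inspect; the parameter values below are arbitrary illustrations.

```python
from scipy.special import betainc

# Expected number of bins with probability below q under a symmetric Dirichlet(beta)
# prior with K bins, Equation (23); betainc is the regularized incomplete Beta function.
def nu(q, beta, K):
    kappa = K * beta
    return K * betainc(beta, kappa - beta, q)

# e.g., the expected fraction of bins carrying probability below the uniform value 1/K:
print(nu(1.0 / 10**4, beta=0.02, K=10**4) / 10**4)
```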
With this warning in mind, we can analytically calculate the entropy estimate and its variance for a very large K. We want results that hold even if the saddle point analysis of Section 3 fails because $\Delta \sim 1$. Following Equation (12) and Equation (18), $\beta^* \to 0$, but $\kappa^* = K\beta^* \approx N^2/\Delta \gg N \gg 1$. The range of entropies is $0 \leq H \leq \ln K$, so the prior on H produced by $P(\mathbf{q}; \beta)$ is (almost) uniform over a semi-infinite range and thus is ill-defined. Similarly, there is a problem normalizing $P_\beta(\mathbf{q})$. However, both problems are resolved by an appropriate limiting procedure, and we disregard them in what follows.
To perform the integrals in Equation (7), we point out that, for $K \to \infty$, $\delta = \Delta/N \to 0$, and $\kappa \approx \kappa^*$, we have $\kappa^* \approx N^2/\Delta$, and then $\big[\langle H(\mathbf{n}) \rangle_\kappa - \xi(\beta)\big]\big|_{\kappa \approx \kappa^*} = O(\delta) + O(1/K) \equiv O(\delta, 1/K)$. A similar relation holds for $\langle H^2(\mathbf{n}) \rangle_\kappa$. That is, the posterior averages of the entropy and of its square are almost indistinguishable from ξ and $\xi^2$, their respective a priori averages. Since now we are interested in small Δ (otherwise we can use the saddle point analysis), we replace $\langle H^m \rangle_\beta$ by $\xi^m$ in Equation (7). The error of this approximation is $O(\delta, 1/K) = O(1/N, 1/K)$.
We transform the Lagrangian in Equation (10). First, we drop terms that do not depend on κ since they appear in the numerator and denominator of Equation (7) and thus cancel. Second, we expand around $1/K = 0$. This gives
$$\mathcal{L}_K(\mathbf{n}, \kappa) = -\sum_{i:\,n_i > 1} \ln \Gamma(n_i) - K_1 \ln \kappa - \ln \Gamma(\kappa) + \ln \Gamma(\kappa + N) + O\!\left(\frac{1}{K}\right) \qquad (24)$$
We note that κ is large in the vicinity of the saddle if δ is small and N is large, cf. Equation (18). Thus, by the definition of the ψ-functions, $\ln \Gamma(\kappa + N) - \ln \Gamma(\kappa) \approx N\psi_0(\kappa) + N^2\psi_1(\kappa)/2$. Further, $\psi_0(\kappa) \approx \ln \kappa$, and $\psi_1(\kappa) \approx 1/\kappa$ [24]. Finally, since $\psi_0(1) = -C_\gamma$, where $C_\gamma$ is Euler's constant, Equation (4) says that $\xi - C_\gamma \approx \ln \kappa$. Combining all of these expressions, we get
$$\mathcal{L}_K(\mathbf{n}, \kappa) \approx -\sum_{i:\,n_i > 1} \ln \Gamma(n_i) + \Delta(\xi - C_\gamma) + \frac{N^2}{2}\exp(C_\gamma - \xi) \qquad (25)$$
where ≈ denotes equality to within the precision $O(1/N, 1/K)$.
We write:
$$\widehat{H} \approx C_\gamma - \frac{\partial}{\partial \Delta} \ln \int_0^{\ln K} e^{-\mathcal{L}}\,d\xi \qquad (26)$$
$$\widehat{(\delta H)^2} \approx \frac{\partial^2}{\partial \Delta^2} \ln \int_0^{\ln K} e^{-\mathcal{L}}\,d\xi \qquad (27)$$
The integrals in these expressions are calculated by substituting $\exp(C_\gamma - \xi) = \tau$ and replacing the limits of integration, $\frac{1}{K}\exp(C_\gamma) \leq \tau \leq \exp(C_\gamma)$, by $0 \leq \tau < \infty$. This introduces errors of $\sim (1/K)^\Delta$ at the lower limit and $\sim \delta^2 \exp(-1/\delta^2)$ at the upper limit. Both errors are within the precision of interest, $O(1/K, 1/N)$, if there is at least one coincidence. Thus, up to a factor that does not depend on Δ,
$$\int_0^{\ln K} e^{-\mathcal{L}}\,d\xi \approx \Gamma(\Delta)\left(\frac{N^2}{2}\right)^{-\Delta} \qquad (28)$$
Finally, substituting Equation (28) into Equations (26) and (27), we get
$$\widehat{H} \approx (C_\gamma - \ln 2) + 2\ln N - \psi_0(\Delta) \qquad (29)$$
$$\widehat{(\delta H)^2} \approx \psi_1(\Delta) \qquad (30)$$
These equations are valid to zeroth order in $1/K$ and $1/N$. They provide a simple, yet nontrivial, estimate of the entropy that can be used even if the cardinality of the variable is unknown. However, one must always check for a possible bias when using the estimator. Note that Equation (30) agrees with Equation (22) since, for large Δ, $\psi_1(\Delta) \approx 1/\Delta$. The similarity between the coincidence counting in Equations (29) and (30) and in Ma's analysis [10] is also clear.
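The closed-form limit is simple enough to state in a few lines of code; this is a sketch of Equations (29) and (30) only, not of the full estimator, and the example counts are made up.

```python
import numpy as np
from scipy.special import digamma, polygamma

# K -> infinity entropy estimate (in nats) and its variance, Equations (29) and (30),
# from the sample size N and the number of coincidences Delta = N - K_1.
def nsb_entropy_infinite_K(counts):
    n = np.asarray([c for c in counts if c > 0], dtype=float)
    N, K1 = n.sum(), n.size
    Delta = N - K1
    if Delta < 1:
        raise ValueError("no coincidences: the estimate is undefined")
    C_gamma = -digamma(1)                       # Euler's constant
    H = (C_gamma - np.log(2)) + 2 * np.log(N) - digamma(Delta)
    return H, polygamma(1, Delta)               # posterior mean and variance

print(nsb_entropy_infinite_K([2, 1, 1, 1, 1, 1, 1, 1, 1]))   # N = 10, Delta = 1
```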

6. Conclusions

We have calculated various asymptotic properties of the NSB estimator of the entropy of discrete random variables. First, the posterior expectations have been evaluated as power series in $1/K$ and $\delta = \Delta/N$ for the number of coincidences $\Delta \gg 1$. The evaluation is done using the saddle point expansion. Convergence of the series depends on the number of coincidences rather than on the total number of samples. This elucidates the similarity to Ma's argument [10] and verifies the intuition of [13,14] that counting coincidences is what makes the method work in the severely undersampled regime. We have then discussed the limit where $\Delta \sim 1$ and the saddle point analysis is not applicable. Here we have shown that the estimator has a finite asymptote for the case of infinitely many bins, $K \to \infty$, or of an unknown number of bins. We obtained closed-form solutions for the estimate of the entropy and its variance in this regime. As for $\Delta \gg 1$, to the first order, both depend on the number of coincidences rather than on the total number of samples.
The NSB estimator has been implemented in software, using the current asymptotic analysis as one of the steps in the numerical evaluation of the posterior integrals. Armed with the empirical tests for the absence of bias in the estimator suggested in [14,15], the software brings us one step closer to a reliable, model-independent estimation of the entropy of discrete probability distributions in the severely undersampled Ma regime. The method is proving to be particularly powerful in a variety of biological applications.

Acknowledgments

I thank Jonathan Miller, Chris Wiggins, and William Bialek for stimulating discussions and for encouragement to complete the manuscript. I also thank Christian Mendl for noticing mistakes in earlier drafts of the manuscript. This work was done in part at the Kavli Institute for Theoretical Physics, supported by NSF Grant No. PHY99-07949, and finished at Emory University with support from NIH/NCI grant No. 7R01CA132629-04 and HFSP grant No. RGY0084/2011.

References

  1. Paninski, L. Estimation of entropy and mutual information. Neural Comp. 2003, 15, 1191–1253.
  2. Panzeri, S.; Treves, A. Analytical estimates of limited sampling biases in different information measures. Netw. Comput. Neural Syst. 1996, 7, 87–107.
  3. Strong, S.; Koberle, R.; de Ruyter van Steveninck, R.; Bialek, W. Entropy and information in neural spike trains. Phys. Rev. Lett. 1998, 80, 197–200.
  4. Victor, J. Binless strategies for estimation of information from neural data. Phys. Rev. E 2002, 66, 051903.
  5. Antos, A.; Kontoyiannis, I. Convergence properties of functional estimates for discrete distributions. Random Struct. Algorithm. 2002, 19, 163–193.
  6. Batu, T.; Dasgupta, S.; Kumar, R.; Rubinfeld, R. The complexity of approximating the entropy. SIAM J. Comput. 2005, 35, 132–150.
  7. Grassberger, P. Entropy estimates from insufficient samples. arXiv 2003, physics/0307138v2.
  8. Wyner, A.; Foster, D. On the lower limits of entropy estimation. Available online: http://www-stat.wharton.upenn.edu/~ajw/lowlimitsentropy.pdf (accessed on 16 December 2011).
  9. Kennel, M.; Shlens, J.; Abarbanel, H.; Chichilnisky, E. Estimating entropy rates with Bayesian confidence intervals. Neural Comp. 2005, 17, 1531–1576.
  10. Ma, S. Calculation of entropy from data of motion. J. Stat. Phys. 1981, 26, 221–240.
  11. Orlitsky, A.; Santhanam, N.; Vishwanathan, K. Population estimation with performance guarantees. In Proceedings of the IEEE International Symposium on Information Theory, Nice, France, 24–29 June 2007; pp. 2026–2030.
  12. Orlitsky, A.; Santhanam, N.; Vishwanathan, K.; Zhang, J. Limit results on pattern entropy. IEEE Trans. Inf. Theory 2006, 52, 2954–2964.
  13. Nemenman, I.; Shafee, F.; Bialek, W. Entropy and inference, revisited. In Advances in Neural Information Processing Systems 14; Dietterich, T.G., Becker, S., Ghahramani, Z., Eds.; MIT Press: Cambridge, MA, USA, 2002.
  14. Nemenman, I.; Bialek, W.; de Ruyter van Steveninck, R. Entropy and information in neural spike trains: Progress on the sampling problem. Phys. Rev. E 2004, 69, 056111.
  15. Nemenman, I.; Lewen, G.; Bialek, W.; de Ruyter van Steveninck, R. Neural coding of natural stimuli: Information at sub-millisecond resolution. PLoS Comput. Biol. 2008, 4, e1000025.
  16. NSB Entropy Estimation. Available online: http://nsb-entropy.sourceforge.net/ (accessed on 16 December 2011).
  17. Wolpert, D.; Wolf, D. Estimating functions of probability distributions from a finite set of samples. Phys. Rev. E 1995, 52, 6841–6854.
  18. Sjölander, K.; Karplus, K.; Brown, M.; Hughey, R.; Krogh, A.; Mian, I.S.; Haussler, D. Dirichlet mixtures: A method for improved detection of weak but significant protein sequence homology. Comput. Appl. Biosci. 1996, 12, 327–345.
  19. Jeffreys, H. Further significance tests. Proc. Camb. Phil. Soc. 1936, 32, 416–445.
  20. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464.
  21. Clarke, B.; Barron, A. Information-theoretic asymptotics of Bayes methods. IEEE Trans. Inf. Theory 1990, 36, 453–471.
  22. Balasubramanian, V. Statistical inference, Occam's razor, and statistical mechanics on the space of probability distributions. Neural Comp. 1997, 9, 349–368.
  23. Nemenman, I. Fluctuation-dissipation theorem and models of learning. Neural Comp. 2005, 17.
  24. Gradshteyn, I.; Ryzhik, I. Tables of Integrals, Series and Products, 6th ed.; Academic Press: Burlington, MA, USA, 2000.
  25. Schurmann, T.; Grassberger, P. Entropy estimation of symbol sequences. Chaos 1996, 6, 414–427.
