Article

Approximating Information Measures for Fields

by
Łukasz Dębowski
Institute of Computer Science, Polish Academy of Sciences, ul. Jana Kazimierza 5, 01-248 Warszawa, Poland
Entropy 2020, 22(1), 79; https://doi.org/10.3390/e22010079
Submission received: 20 November 2019 / Revised: 6 January 2020 / Accepted: 8 January 2020 / Published: 9 January 2020
(This article belongs to the Special Issue Information Theory and Language)

Abstract

We supply corrected proofs of the invariance of completion and the chain rule for the Shannon information measures of arbitrary fields, as stated by Dębowski in 2009. Our corrected proofs rest on a number of auxiliary approximation results for Shannon information measures, which may be of an independent interest. As also discussed briefly in this article, the generalized calculus of Shannon information measures for fields, including the invariance of completion and the chain rule, is useful in particular for studying the ergodic decomposition of stationary processes and its links with statistical modeling of natural language.

1. Introduction

As noticed by Dębowski [1,2,3], a generalized calculus of Shannon information measures for arbitrary fields, initiated by Gelfand et al. [4] and later developed by Dobrushin [5], Pinsker [6], and Wyner [7], is useful in particular for studying the ergodic decomposition of stationary processes and its links with statistical modeling of natural language. Fulfilling this need, Dębowski [1] developed a calculus of Shannon information measures for arbitrary fields, relaxing the requirement of regular conditional probability assumed implicitly by Dobrushin [5] and Pinsker [6]. He did so unaware of the classical paper by Wyner [7], which pursued exactly the same idea, with some differences stemming from its independent motivation.
Compared to the exposition in [7], the added value of the paper [1] was its treatment of continuity and invariance of Shannon information measures with respect to completion of fields. Unfortunately, the proof of Theorem 2 in [1], which establishes this invariance and the generalized chain rule, contains some mistakes and gaps that we have discovered recently. For this reason, in this article we provide a correction and a few new auxiliary results that may be of independent interest. In this way, we complete the full generalization of Shannon information measures and their properties, developed step by step by Gelfand et al. [4], Dobrushin [5], Pinsker [6], Wyner [7], and Dębowski [1]. Along the way, we also revisit the linguistic motivations of our results.
The preliminaries are as follows. Fix a probability space $(\Omega, \mathcal{J}, P)$. Fields are set algebras closed under finite Boolean operations, whereas σ-fields are assumed to be closed also under countable unions and intersections. A field is called finite if it has finitely many elements. A finite partition is a finite collection of events $\{B_j\}_{j=1}^J \subset \mathcal{J}$ which are disjoint and whose union equals $\Omega$. The definition proposed independently by Wyner [7] and Dębowski [1] reads as follows:
Definition 1.
For finite partitions $\alpha = \{A_i\}_{i=1}^I$ and $\beta = \{B_j\}_{j=1}^J$ and a probability measure $P$, the entropy and mutual information are defined as
$$H_P(\alpha) := \sum_{i=1}^I P(A_i)\log\frac{1}{P(A_i)}, \qquad I_P(\alpha;\beta) := \sum_{i=1}^I\sum_{j=1}^J P(A_i\cap B_j)\log\frac{P(A_i\cap B_j)}{P(A_i)P(B_j)}.$$
Subsequently, for an arbitrary field $\mathcal{C}$ and finite partitions $\alpha$ and $\beta$, we define the pointwise conditional entropy and mutual information as
$$H_P(\alpha\|\mathcal{C}) := H_{P(\cdot|\mathcal{C})}(\alpha), \qquad I_P(\alpha;\beta\|\mathcal{C}) := I_{P(\cdot|\mathcal{C})}(\alpha;\beta),$$
where $P(E|\mathcal{C})$ is the conditional probability of event $E\in\mathcal{J}$ with respect to the smallest complete σ-field containing $\mathcal{C}$. Subsequently, for arbitrary fields $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$, the (average) conditional entropy and mutual information are defined as
$$H_P(\mathcal{A}|\mathcal{C}) := \sup_{\alpha\subset\mathcal{A}} \mathbf{E}_P\, H_P(\alpha\|\mathcal{C}), \qquad I_P(\mathcal{A};\mathcal{B}|\mathcal{C}) := \sup_{\alpha\subset\mathcal{A},\,\beta\subset\mathcal{B}} \mathbf{E}_P\, I_P(\alpha;\beta\|\mathcal{C}),$$
where the suprema are taken over all finite subpartitions and $\mathbf{E}_P X := \int X\,dP$ denotes the expectation. Finally, we define the unconditional entropy $H_P(\mathcal{A}) := H_P(\mathcal{A}|\{\emptyset,\Omega\})$ and mutual information $I_P(\mathcal{A};\mathcal{B}) := I_P(\mathcal{A};\mathcal{B}|\{\emptyset,\Omega\})$, as is generally done in information theory. When the probability measure $P$ is clear from the context, we omit the subscript $P$ from all the above notations.
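To make Definition 1 concrete, the following minimal sketch (ours, not part of the original paper) computes the quantities of Definition 1 on a finite probability space, where the conditioning field is generated by a finite partition so that $P(\cdot|\mathcal{C})$ reduces to elementary conditional probability on its cells; all names and numerical values are illustrative assumptions.

from math import log

# Finite probability space: outcome k occurs with probability p[k].
p = [0.10, 0.15, 0.05, 0.20, 0.25, 0.25]

def prob(event):
    """P(A) for an event given as a set of outcome indices."""
    return sum(p[k] for k in event)

def entropy(alpha):
    """H_P(alpha) = sum_i P(A_i) log(1/P(A_i)) for a finite partition alpha."""
    return sum(prob(A) * log(1.0 / prob(A)) for A in alpha if prob(A) > 0)

def mutual_information(alpha, beta):
    """I_P(alpha; beta) for two finite partitions."""
    total = 0.0
    for A in alpha:
        for B in beta:
            pab = prob(A & B)
            if pab > 0:
                total += pab * log(pab / (prob(A) * prob(B)))
    return total

def conditional_entropy(alpha, gamma):
    """H_P(alpha | gamma) = E_P H_{P(.|C)}(alpha) when the conditioning field
    is generated by the finite partition gamma."""
    h = 0.0
    for C in gamma:
        pc = prob(C)
        if pc == 0:
            continue
        for A in alpha:
            q = prob(A & C) / pc
            if q > 0:
                h += pc * q * log(1.0 / q)
    return h

alpha = [{0, 1, 2}, {3, 4, 5}]       # a partition into two cells
beta  = [{0, 3}, {1, 4}, {2, 5}]     # another partition
gamma = [{0, 1, 4}, {2, 3, 5}]       # conditioning partition

print(entropy(alpha), mutual_information(alpha, beta), conditional_entropy(alpha, gamma))
assert abs(entropy(alpha) - mutual_information(alpha, alpha)) < 1e-12   # H(alpha) = I(alpha; alpha)

The final assertion checks the identity $H(\alpha) = I(\alpha;\alpha)$ noted in the next paragraph.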
Although the above measures, called Shannon information measures, have usually been discussed for σ-fields, the defining equations (3) also make sense for fields. We observe a number of identities, such as $H(\mathcal{A}) = I(\mathcal{A};\mathcal{A})$ and $H(\mathcal{A}|\mathcal{C}) = I(\mathcal{A};\mathcal{A}|\mathcal{C})$. It is important to stress that Definition 1, in contrast to the earlier expositions by Dobrushin [5] and Pinsker [6], is simpler, as it applies one Radon–Nikodym derivative less, and does not require regular conditional probability, i.e., it does not demand that the conditional distribution $(P(E|\mathcal{C}))_{E\in\mathcal{J}}$ be a probability measure almost surely. In fact, the expressions on the right-hand sides of the equations in (3) are defined for all $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$. No problems arise when conditional probability is not regular since the conditional distribution $(P(E|\mathcal{C}))_{E\in\mathcal{E}}$ restricted to a finite field $\mathcal{E}$ is a probability measure almost surely [8] (Theorem 33.2).
We should admit that in the context of statistical language modeling, the respective probability space is countably generated, so regular conditional probability is guaranteed to exist. Thus, for linguistic applications, one might think that the expositions [5,6] are sufficient, although for didactic reasons the approaches proposed by Wyner [7] and Dębowski [1] lead to a simpler and more general calculus of Shannon information measures. Yet, there is a more important reason for Definition 1. Namely, to discuss the ergodic decomposition of the entropy rate and excess entropy (results highly relevant for statistical language modeling, developed in [1] and briefly recalled in Section 3), we need the invariance of Shannon information measures with respect to completion of fields. But within the framework of Dobrushin [5] and Pinsker [6], such invariance of completion does not hold for strongly nonergodic processes, which seem to arise quite naturally in statistical modeling of natural language [1,2,3]. Thus, the approach proposed by Wyner [7] and Dębowski [1] is in fact indispensable.
Let us now inspect the problem of invariance of Shannon information measures with respect to completion of fields. A σ-field is called complete, with respect to a given probability measure $P$, if it contains all sets of outer $P$-measure 0. Let $\sigma(\mathcal{A})$ denote the intersection of all complete σ-fields containing the class $\mathcal{A}$, i.e., $\sigma(\mathcal{A})$ is the completion of the generated σ-field. Let $\mathcal{A}\vee\mathcal{B}$ denote the intersection of all fields that contain $\mathcal{A}$ and $\mathcal{B}$. Assuming Definition 1, the following statement was claimed true by Dębowski [1] (Theorem 2):
Theorem 1.
Let $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$, and $\mathcal{D}$ be subfields of $\mathcal{J}$.
1. 
$I(\mathcal{A};\mathcal{B}|\mathcal{C}) = I(\mathcal{A};\sigma(\mathcal{B})|\mathcal{C}) = I(\mathcal{A};\mathcal{B}|\sigma(\mathcal{C}))$ (invariance of completion);
2. 
$I(\mathcal{A};\mathcal{B}\vee\mathcal{C}|\mathcal{D}) = I(\mathcal{A};\mathcal{B}|\mathcal{D}) + I(\mathcal{A};\mathcal{C}|\mathcal{B}\vee\mathcal{D})$ (chain rule).
The property stated in Theorem 1.1 will be referred to as the invariance of completion. It was not discussed by Wyner [7]. The property stated in Theorem 1.2 is usually referred to as the chain rule or the polymatroid identity. It was proved independently by Wyner [7].
As we have mentioned, the invariance of completion is crucial for proving the ergodic decomposition of the entropy rate and excess entropy of stationary processes. But the proof of the invariance of completion given by Dębowski [1] contains a mistake in the order of quantifiers, and the respective proof of the chain rule is too laconic and contains a gap. For this reason, we supply the corrected proofs in this article. As we have mentioned, the chain rule was proved by Wyner [7], using an approximation result by Dobrushin [5] and Pinsker [6]. For completeness, we provide a different proof of this approximation result, which follows easily from the invariance of completion, and we supply proofs of both parts of Theorem 1.
The corrected proofs of Theorem 1, to be presented in Section 2, are much longer than the original proofs by Dębowski [1]. In particular, for the sake of proving Theorem 1, we will discuss a few other approximation results, which seem to be of an independent interest. To provide more context for our statements, in Section 3, we will also recall the ergodic decomposition of excess entropy and its application to statistical language modeling.

2. Proofs

Let us write $\mathcal{B}_n\uparrow\mathcal{B}$ for a sequence $(\mathcal{B}_n)_{n\in\mathbb{N}}$ of fields such that $\mathcal{B}_1\subset\mathcal{B}_2\subset\ldots$ and $\mathcal{B} = \bigcup_{n\in\mathbb{N}}\mathcal{B}_n$. ($\mathcal{B}$ need not be a σ-field.) Our proof of Theorem 1 will rest on a few approximation results and this statement by Dębowski [1] (Theorem 1):
Theorem 2.
Let $\mathcal{A}$, $\mathcal{B}$, $\mathcal{B}_n$, and $\mathcal{C}$ be subfields of $\mathcal{J}$.
1. 
$I(\mathcal{A};\mathcal{B}|\mathcal{C}) = I(\mathcal{B};\mathcal{A}|\mathcal{C})$;
2. 
$I(\mathcal{A};\mathcal{B}|\mathcal{C})\ge 0$, with equality if and only if $P(A\cap B|\mathcal{C}) = P(A|\mathcal{C})\,P(B|\mathcal{C})$ almost surely for all $A\in\mathcal{A}$ and $B\in\mathcal{B}$;
3. 
$I(\mathcal{A};\mathcal{B}|\mathcal{C})\le\min\big(H(\mathcal{A}|\mathcal{C}),\,H(\mathcal{B}|\mathcal{C})\big)$;
4. 
$I(\mathcal{A};\mathcal{B}_1|\mathcal{C})\le I(\mathcal{A};\mathcal{B}_2|\mathcal{C})$ if $\mathcal{B}_1\subset\mathcal{B}_2$;
5. 
$I(\mathcal{A};\mathcal{B}_n|\mathcal{C})\uparrow I(\mathcal{A};\mathcal{B}|\mathcal{C})$ for $\mathcal{B}_n\uparrow\mathcal{B}$.
Let $A^c = \Omega\setminus A$. Subsequently, let us denote the symmetric difference
$$A\triangle B := (A\setminus B)\cup(B\setminus A) = (A\cup B)\setminus(A\cap B).$$
The symmetric difference satisfies the following identities, which will be used:
$$A^c\triangle B^c = A\triangle B,$$
$$A\triangle B \subset (A\triangle C)\cup(C\triangle B),$$
$$(A\setminus C)\triangle B \subset (A\triangle B)\cup(C\cap B),$$
$$\bigcup_i A_i \,\triangle\, \bigcup_i B_i \subset \bigcup_i (A_i\triangle B_i).$$
Moreover, we will apply the Bonferroni inequalities
$$0 \le \sum_{1\le i\le n} P(A_i) - P\Big(\bigcup_{1\le i\le n} A_i\Big) \le \sum_{1\le i<j\le n} P(A_i\cap A_j)$$
and the inequality $P(A)\le P(B) + P(A\triangle B)$.
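Since the symmetric-difference identities and the Bonferroni inequalities above carry the combinatorial weight of the later proofs, here is a small brute-force sanity check (our sketch; the universe size and random seed are arbitrary) that verifies them on random subsets of a finite universe with the uniform probability measure.

import random

random.seed(1)
universe = frozenset(range(8))
P = lambda event: len(event) / len(universe)       # uniform probability measure

def rand_set():
    return {k for k in universe if random.random() < 0.5}

for _ in range(1000):
    A, B, C = rand_set(), rand_set(), rand_set()
    # symmetric-difference identities used later in the proofs
    assert (universe - A) ^ (universe - B) == A ^ B
    assert A ^ B <= (A ^ C) | (C ^ B)
    assert (A - C) ^ B <= (A ^ B) | (C & B)
    # union identity: (A1 u A2) triangle (B1 u B2) is contained in (A1 triangle B1) u (A2 triangle B2)
    A2, B2 = rand_set(), rand_set()
    assert (A | A2) ^ (B | B2) <= (A ^ B) | (A2 ^ B2)
    # Bonferroni inequalities and P(A) <= P(B) + P(A triangle B)
    assert -1e-9 <= P(A) + P(B) + P(C) - P(A | B | C) <= P(A & B) + P(A & C) + P(B & C) + 1e-9
    assert P(A) <= P(B) + P(A ^ B) + 1e-9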
In the following, we will derive the necessary approximation results. Our point of departure is the following folklore fact.
Theorem 3
(approximation of σ-fields). For any field $\mathcal{K}$ and any event $G\in\sigma(\mathcal{K})$, there is a sequence of events $K_1, K_2, \ldots\in\mathcal{K}$ such that
$$\lim_{n\to\infty} P(G\triangle K_n) = 0.$$
Proof. 
Denote the class of sets $G$ that satisfy (10) as $\mathcal{G}$. It is sufficient to show that $\mathcal{G}$ is a complete σ-field that contains the field $\mathcal{K}$. Clearly, all $G\in\mathcal{K}$ satisfy (10), so $\mathcal{K}\subset\mathcal{G}$. Now, we verify the conditions for $\mathcal{G}$ to be a σ-field.
  • We have $\Omega\in\mathcal{K}$. Hence, $\Omega\in\mathcal{G}$.
  • For $A\in\mathcal{G}$, consider $K_1, K_2, \ldots\in\mathcal{K}$ such that $\lim_{n\to\infty} P(A\triangle K_n) = 0$. Then, $A\triangle K_n = A^c\triangle K_n^c$, where $K_1^c, K_2^c, \ldots\in\mathcal{K}$. Hence, $A^c\in\mathcal{G}$.
  • For $A_1, A_2, \ldots\in\mathcal{G}$, consider events $K_{in}\in\mathcal{K}$ such that $P(A_i\triangle K_{in})\le 2^{-n}$. Then,
    $$P\Big(\bigcup_{i=1}^n A_i \,\triangle\, \bigcup_{i=1}^n K_{i,i+n}\Big) \le \sum_{i=1}^n P(A_i\triangle K_{i,i+n}) \le 2^{-n}.$$
    Moreover,
    $$P\Big(\bigcup_{i=1}^\infty A_i \,\triangle\, \bigcup_{i=1}^n A_i\Big) = P\Big(\bigcup_{i=1}^\infty A_i\Big) - P\Big(\bigcup_{i=1}^n A_i\Big).$$
    Hence,
    $$P\Big(\bigcup_{i=1}^\infty A_i \,\triangle\, \bigcup_{i=1}^n K_{i,i+n}\Big) \le P\Big(\bigcup_{i=1}^\infty A_i \,\triangle\, \bigcup_{i=1}^n A_i\Big) + P\Big(\bigcup_{i=1}^n A_i \,\triangle\, \bigcup_{i=1}^n K_{i,i+n}\Big) \le P\Big(\bigcup_{i=1}^\infty A_i\Big) - P\Big(\bigcup_{i=1}^n A_i\Big) + 2^{-n},$$
    which tends to 0 as $n$ goes to infinity. Since $\bigcup_{i=1}^n K_{i,i+n}\in\mathcal{K}$, we thus obtain that $\bigcup_{i=1}^\infty A_i\in\mathcal{G}$.
Completeness of the σ-field $\mathcal{G}$ is straightforward since, for any $A\in\mathcal{G}$ and any $A'$ with $P(A\triangle A') = 0$, we obtain $A'\in\mathcal{G}$ using the same sequence of approximating events in the field $\mathcal{K}$ as for the event $A$.  □
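As a concrete instance of Theorem 3 (our illustration, with an arbitrarily chosen target event), take the field $\mathcal{K}$ of finite unions of dyadic intervals in $[0,1)$ under the Lebesgue measure and approximate an interval $G$ with irrational endpoints, which lies in $\sigma(\mathcal{K})$ but not in $\mathcal{K}$, by unions $K_n$ of level-$n$ dyadic intervals:

from math import sqrt

# Field K: finite unions of dyadic intervals [j/2**n, (j+1)/2**n) in [0, 1),
# with P = Lebesgue measure.  The interval G = [a, b) with irrational endpoints
# lies in sigma(K) but not in K.  K_n collects the level-n dyadic intervals
# whose left endpoints fall inside [a, b).
a, b = sqrt(2) - 1, sqrt(3) - 1            # roughly 0.4142 and 0.7321

def sym_diff_measure(n):
    """P(G triangle K_n) computed interval by interval at resolution 2**-n."""
    step = 2.0 ** -n
    total = 0.0
    for j in range(2 ** n):
        lo, hi = j * step, (j + 1) * step
        overlap = max(0.0, min(hi, b) - max(lo, a))    # length of [lo, hi) intersected with G
        in_Kn = a <= lo < b                            # is this dyadic interval included in K_n?
        total += (step - overlap) if in_Kn else overlap
    return total

for n in (2, 4, 8, 16):
    print(n, sym_diff_measure(n))   # shrinks roughly like 2**(1 - n): only boundary intervals contribute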
The second approximation result is the following bound:
Theorem 4
(continuity of entropy). Fix an $\epsilon\in(0, e^{-1}]$ and a field $\mathcal{C}$. For finite partitions $\alpha = \{A_i\}_{i=1}^I$ and $\alpha' = \{A_i'\}_{i=1}^I$ such that $P(A_i\triangle A_i')\le\epsilon$ for all $i\in\{1,\ldots,I\}$, we have
$$|H(\alpha|\mathcal{C}) - H(\alpha'|\mathcal{C})| \le I\sqrt{\epsilon}\,\log\frac{I}{\sqrt{\epsilon}}.$$
Proof. 
We have the expectation $\int P(A_i\triangle A_i'|\mathcal{C})\,dP = P(A_i\triangle A_i')\le\epsilon$. Hence, by the Markov inequality, we obtain
$$P\big(P(A_i\triangle A_i'|\mathcal{C})\ge\sqrt{\epsilon}\big)\le\sqrt{\epsilon}.$$
Denote
$$B = \big\{P(A_i\triangle A_i'|\mathcal{C}) < \sqrt{\epsilon}\ \text{for all}\ i\in\{1,\ldots,I\}\big\}.$$
From the Bonferroni inequality, we obtain $P(B^c)\le I\sqrt{\epsilon}$. Subsequently, we observe that $|H(\alpha\|\mathcal{C}) - H(\alpha'\|\mathcal{C})|\le\log I$ holds almost surely. Hence,
$$|H(\alpha|\mathcal{C}) - H(\alpha'|\mathcal{C})| = \Big|\int\big(H(\alpha\|\mathcal{C}) - H(\alpha'\|\mathcal{C})\big)\,dP\Big| \le P(B^c)\log I + \int_B |H(\alpha\|\mathcal{C}) - H(\alpha'\|\mathcal{C})|\,dP \le I\sqrt{\epsilon}\log I + \int_B |H(\alpha\|\mathcal{C}) - H(\alpha'\|\mathcal{C})|\,dP.$$
The function $x\mapsto -x\log x$ is subadditive and increasing for $x\in(0, e^{-1}]$. In particular, we have $|(x+y)\log(x+y) - x\log x|\le -y\log y$ for $x, y\ge 0$. Thus, on the event $B$ we obtain
$$|H(\alpha\|\mathcal{C}) - H(\alpha'\|\mathcal{C})| = \Big|\sum_{i=1}^I P(A_i|\mathcal{C})\log P(A_i|\mathcal{C}) - \sum_{i=1}^I P(A_i'|\mathcal{C})\log P(A_i'|\mathcal{C})\Big| \le -\sum_{i=1}^I |P(A_i|\mathcal{C}) - P(A_i'|\mathcal{C})|\,\log|P(A_i|\mathcal{C}) - P(A_i'|\mathcal{C})| \le -\sum_{i=1}^I P(A_i\triangle A_i'|\mathcal{C})\log P(A_i\triangle A_i'|\mathcal{C}) \le I\sqrt{\epsilon}\log\frac{1}{\sqrt{\epsilon}}.$$
Plugging (18) into (17) yields the claim.  □
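A numerical spot check of Theorem 4 (our sketch; the measure, the partitions, and the perturbation are arbitrary toy choices): we move an outcome of small probability between the cells of $\alpha$ and compare the change of conditional entropy with the bound stated above.

from math import log, sqrt

p = [0.22, 0.08, 0.02, 0.18, 0.30, 0.20]          # toy finite probability space

def prob(event):
    return sum(p[k] for k in event)

def cond_entropy(alpha, gamma):
    """H(alpha | gamma) for finite partitions, gamma generating the conditioning field."""
    h = 0.0
    for C in gamma:
        pc = prob(C)
        for A in alpha:
            q = prob(A & C) / pc
            if q > 0:
                h += pc * q * log(1.0 / q)
    return h

gamma  = [{0, 1, 2}, {3, 4, 5}]                   # conditioning partition
alpha  = [{0, 3}, {1, 2, 4, 5}]                   # original partition (I = 2 cells)
alpha2 = [{0, 2, 3}, {1, 4, 5}]                   # outcome 2 (probability 0.02) moved to the first cell

eps = max(prob(A ^ A2) for A, A2 in zip(alpha, alpha2))        # max_i P(A_i triangle A_i')
diff = abs(cond_entropy(alpha, gamma) - cond_entropy(alpha2, gamma))
bound = len(alpha) * sqrt(eps) * log(len(alpha) / sqrt(eps))   # bound of Theorem 4 as stated above
print(eps, diff, bound)
assert diff <= bound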
Now, we can prove the invariance of completion. Note that
$$I(\alpha;\beta|\mathcal{C}) = H(\alpha|\mathcal{C}) + H(\beta|\mathcal{C}) - H(\alpha\vee\beta|\mathcal{C}).$$
Proof of Theorem 1.
1 (invariance of completion): Consider arbitrary fields $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$. We are going to demonstrate
$$I(\mathcal{A};\mathcal{B}|\mathcal{C}) = I(\mathcal{A};\sigma(\mathcal{B})|\mathcal{C}) = I(\mathcal{A};\mathcal{B}|\sigma(\mathcal{C})).$$
The equality $I(\mathcal{A};\mathcal{B}|\mathcal{C}) = I(\mathcal{A};\mathcal{B}|\sigma(\mathcal{C}))$ is straightforward since $P(A|\mathcal{C}) = P(A|\sigma(\mathcal{C}))$ almost surely for all $A\in\mathcal{J}$. It remains to prove $I(\mathcal{A};\mathcal{B}|\mathcal{C}) = I(\mathcal{A};\sigma(\mathcal{B})|\mathcal{C})$. For this goal, it suffices to show that for any $\epsilon > 0$ and any finite partitions $\alpha\subset\mathcal{A}$ and $\beta\subset\sigma(\mathcal{B})$ there exists a finite partition $\beta'\subset\mathcal{B}$ such that
$$|I(\alpha;\beta'|\mathcal{C}) - I(\alpha;\beta|\mathcal{C})| < \epsilon.$$
Fix then some $\epsilon > 0$ and finite partitions $\alpha := \{A_i\}_{i=1}^I\subset\mathcal{A}$ and $\beta := \{B_j\}_{j=1}^J\subset\sigma(\mathcal{B})$. Invoking Theorem 3, we know that for each $\eta > 0$ there exists a class of sets $\{C_j\}_{j=1}^J\subset\mathcal{B}$, which need not be a partition, such that
$$P(C_j\triangle B_j)\le\eta$$
for all $j\in\{1,\ldots,J\}$. Let us put $B_{J+1} := \emptyset$ and let us construct the sets $D_0 := \emptyset$ and $D_j := \bigcup_{k=1}^j C_k$ for $j\in\{1,\ldots,J\}$. Subsequently, we put $B_j' := C_j\setminus D_{j-1}$ for $j\in\{1,\ldots,J\}$ and $B_{J+1}' := \Omega\setminus D_J$. In this way, we obtain a partition $\beta' := \{B_j'\}_{j=1}^{J+1}\subset\mathcal{B}$.
The next step of the proof is showing an analogue of the bound (22) for the partitions $\beta$ and $\beta'$. To begin, for $j\in\{1,\ldots,J\}$, we have
$$P(B_j'\triangle B_j) = P\big((C_j\setminus D_{j-1})\triangle B_j\big) \le P(C_j\triangle B_j) + P(D_{j-1}\cap B_j) \le \eta + \sum_{k=1}^{j-1} P(C_k\cap B_j) \le \eta + \sum_{k=1}^{j-1}\Big[P(B_k\cap B_j) + P\big((C_k\cap B_j)\triangle(B_k\cap B_j)\big)\Big] \le \eta + \sum_{k=1}^{j-1}\big[0 + P(C_k\triangle B_k)\big] \le j\eta.$$
Now, we observe for $j, k\in\{1,\ldots,J\}$ and $j\ne k$ that
$$P(C_j) \ge P(B_j) - P(C_j\triangle B_j) \ge P(B_j) - \eta,$$
$$P(C_j\cap C_k) \le P(B_j\cap B_k) + P\big((C_j\cap C_k)\triangle(B_j\cap B_k)\big) \le 0 + P(C_j\triangle B_j) + P(C_k\triangle B_k) \le 2\eta.$$
Hence, by the Bonferroni inequality, we derive
$$P(B_{J+1}'\triangle B_{J+1}) = P\big((\Omega\setminus D_J)\triangle\emptyset\big) = P(\Omega\setminus D_J) = 1 - P(D_J) \le 1 - \sum_{1\le j\le J} P(C_j) + \sum_{1\le j<k\le J} P(C_j\cap C_k) \le 1 - \sum_{1\le j\le J} P(B_j) + J\eta + \sum_{1\le j<k\le J} 2\eta = J^2\eta.$$
Resuming our bounds, we obtain
$$P\big((A_i\cap B_j')\triangle(A_i\cap B_j)\big) \le P(B_j'\triangle B_j) \le J^2\eta$$
for all $i\in\{1,\ldots,I\}$ and $j\in\{1,\ldots,J+1\}$. Then, invoking Theorem 4 yields
$$|I(\alpha;\beta'|\mathcal{C}) - I(\alpha;\beta|\mathcal{C})| \le |H(\alpha\vee\beta'|\mathcal{C}) - H(\alpha\vee\beta|\mathcal{C})| + |H(\beta'|\mathcal{C}) - H(\beta|\mathcal{C})| \le I(J+1)\sqrt{J^2\eta}\,\log\frac{I(J+1)}{\sqrt{J^2\eta}} + (J+1)\sqrt{J^2\eta}\,\log\frac{J+1}{\sqrt{J^2\eta}}.$$
Taking η sufficiently small, we obtain (21), which is the desired claim.  □
A consequence of the above result is the following approximation result, proved by Dobrushin [5] and Pinsker [6] and used by Wyner [7] to demonstrate the chain rule. Applying the invariance of completion, we supply a proof different from that of Dobrushin [5] and Pinsker [6].
Theorem 5
(split of join). Let $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$, and $\mathcal{D}$ be subfields of $\mathcal{J}$. We have
$$I(\mathcal{A};\mathcal{B}\vee\mathcal{C}|\mathcal{D}) = \sup_{\alpha\subset\mathcal{A},\,\beta\subset\mathcal{B},\,\gamma\subset\mathcal{C}} \mathbf{E}\, I(\alpha;\beta\vee\gamma\|\mathcal{D}),$$
where the supremum is taken over all finite subpartitions.
Proof. 
Define the class
$$\mathcal{E} := \bigcup_{\beta\subset\mathcal{B},\,\gamma\subset\mathcal{C}} \sigma(\beta\vee\gamma),$$
where the union is taken over finite partitions $\beta\subset\mathcal{B}$ and $\gamma\subset\mathcal{C}$. It can be easily verified that $\mathcal{E}$ is a field such that $\sigma(\mathcal{E}) = \sigma(\mathcal{B}\vee\mathcal{C})$. Thus, for all finite partitions $\beta\subset\mathcal{B}$ and $\gamma\subset\mathcal{C}$ we have $\beta\vee\gamma\subset\mathcal{E}$. Moreover, by the definition of $\mathcal{E}$, for each finite partition $\varepsilon\subset\mathcal{E}$ there exist finite partitions $\beta\subset\mathcal{B}$ and $\gamma\subset\mathcal{C}$ such that the partition $\beta\vee\gamma$ is finer than $\varepsilon$. Hence, by Theorem 2.4, we obtain in this case
$$\mathbf{E}\, I(\alpha;\varepsilon\|\mathcal{D}) \le \mathbf{E}\, I(\alpha;\beta\vee\gamma\|\mathcal{D}) \le I(\alpha;\mathcal{E}|\mathcal{D}).$$
In consequence, by Theorem 1.1, we obtain the claim
$$I(\mathcal{A};\mathcal{B}\vee\mathcal{C}|\mathcal{D}) = I(\mathcal{A};\mathcal{E}|\mathcal{D}) = \sup_{\alpha\subset\mathcal{A},\,\varepsilon\subset\mathcal{E}} \mathbf{E}\, I(\alpha;\varepsilon\|\mathcal{D}) = \sup_{\alpha\subset\mathcal{A},\,\beta\subset\mathcal{B},\,\gamma\subset\mathcal{C}} \mathbf{E}\, I(\alpha;\beta\vee\gamma\|\mathcal{D}).$$
  □
The final approximation result which we need to prove the chain rule is as follows:
Theorem 6
(convergence of conditioning). Let $\alpha = \{A_i\}_{i=1}^I$ be a finite partition and let $\mathcal{C}$ be a field. For each $\epsilon > 0$, there exists a finite partition $\gamma\subset\sigma(\mathcal{C})$ such that for any partition $\gamma'\subset\sigma(\mathcal{C})$ finer than $\gamma$ we have
$$|H(\alpha|\mathcal{C}) - H(\alpha|\gamma')| \le \epsilon.$$
Proof. 
Fix an $\epsilon > 0$. For each $n\in\mathbb{N}$ and $A\in\mathcal{J}$, the partition
$$\gamma_A := \big\{\{(k-1)/n < P(A|\mathcal{C}) \le k/n\} : k\in\{0, 1, \ldots, n\}\big\}$$
is finite and belongs to $\sigma(\mathcal{C})$. If we consider the partition $\gamma := \bigvee_{i=1}^I \gamma_{A_i}$, it remains finite and still satisfies $\gamma\subset\sigma(\mathcal{C})$. Let a partition $\gamma'\subset\sigma(\mathcal{C})$ be finer than $\gamma$. Then,
$$|P(A_i|\mathcal{C}) - P(A_i|\gamma')| \le 1/n$$
almost surely for all $i\in\{1,\ldots,I\}$. We also observe
$$|H(\alpha|\mathcal{C}) - H(\alpha|\gamma')| \le \int |H(\alpha\|\mathcal{C}) - H(\alpha\|\gamma')|\,dP.$$
We recall that the function $x\mapsto -x\log x$ is subadditive and increasing for $x\in(0, e^{-1}]$. In particular, we have $|(x+y)\log(x+y) - x\log x|\le -y\log y$ for $x, y\ge 0$. Hence, for $n\ge e$ we obtain almost surely
$$|H(\alpha\|\mathcal{C}) - H(\alpha\|\gamma')| = \Big|\sum_{i=1}^I P(A_i|\mathcal{C})\log P(A_i|\mathcal{C}) - \sum_{i=1}^I P(A_i|\gamma')\log P(A_i|\gamma')\Big| \le -\sum_{i=1}^I |P(A_i|\mathcal{C}) - P(A_i|\gamma')|\,\log|P(A_i|\mathcal{C}) - P(A_i|\gamma')| \le \frac{I\log n}{n}.$$
Taking $n$ so large that $I\log n/n\le\epsilon$ yields the claim.  □
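The quantization construction in the proof of Theorem 6 can be tried out numerically (our sketch; the space, the partitions, and $n$ are arbitrary choices): cells of a fine conditioning partition are grouped according to the bin of width $1/n$ into which $P(A_1|\mathcal{C})$ falls, and the resulting coarse partition $\gamma$ already reproduces $H(\alpha|\mathcal{C})$ up to $I\log n/n$.

import random
from math import log

random.seed(2)

N, n = 200, 10
p = [random.random() for _ in range(N)]
s = sum(p); p = [q / s for q in p]                 # toy probabilities on N outcomes

alpha = [set(range(0, N, 2)), set(range(1, N, 2))]                  # I = 2 cells (evens / odds)
C_cells = [set(range(4 * j, 4 * j + 4)) for j in range(N // 4)]     # fine conditioning partition

def prob(event):
    return sum(p[k] for k in event)

def cond_entropy(alpha, gamma):
    h = 0.0
    for G in gamma:
        pg = prob(G)
        for A in alpha:
            q = prob(A & G) / pg
            if q > 0:
                h += pg * q * log(1.0 / q)
    return h

# Group the cells of C by the quantized value of P(A_1 | C); since alpha has
# only two cells, P(A_2 | C) = 1 - P(A_1 | C) is controlled by the same bin.
bins = {}
for cell in C_cells:
    k = int(n * prob(alpha[0] & cell) / prob(cell))     # bin index of width 1/n
    bins.setdefault(k, set()).update(cell)
gamma = list(bins.values())

diff = abs(cond_entropy(alpha, C_cells) - cond_entropy(alpha, gamma))
print(diff, len(alpha) * log(n) / n)        # observed difference vs. the bound I log(n)/n
assert diff <= len(alpha) * log(n) / n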
Taking the above into account, we can demonstrate the chain rule. Our proof essentially follows the ideas of Wyner [7], except for invoking Theorem 6.
Proof of Theorem 1.
2 (chain rule): Let $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$, and $\mathcal{D}$ be arbitrary fields, and let $\alpha$, $\beta$, $\gamma$, and $\delta$ be finite partitions. The point of our departure is the chain rule for finite partitions [9] (Equation 2.60):
$$I(\alpha;\beta\vee\gamma) = I(\alpha;\beta) + I(\alpha;\gamma|\beta).$$
By Definition 1 and Theorems 1.1, 5, and 6, the conditional mutual information $I(\mathcal{A};\mathcal{B}|\mathcal{C})$ can be approximated by $I(\alpha;\beta|\gamma)$, where we take appropriate limits of refined finite partitions with a certain care.
In particular, by Theorems 1.1, 5, and 6, taking sufficiently fine finite partitions of the arbitrary fields $\mathcal{B}$ and $\mathcal{C}$, the chain rule (38) for finite partitions implies
$$I(\alpha;\mathcal{B}\vee\mathcal{C}) = I(\alpha;\mathcal{B}) + I(\alpha;\mathcal{C}|\mathcal{B}),$$
where all expressions are finite. Hence, we also obtain
$$0 = \big[I(\alpha;\mathcal{B}\vee\mathcal{C}\vee\mathcal{D}) - I(\alpha;\mathcal{D}) - I(\alpha;\mathcal{B}\vee\mathcal{C}|\mathcal{D})\big] - \big[I(\alpha;\mathcal{B}\vee\mathcal{D}) - I(\alpha;\mathcal{D}) - I(\alpha;\mathcal{B}|\mathcal{D})\big] - \big[I(\alpha;\mathcal{B}\vee\mathcal{C}\vee\mathcal{D}) - I(\alpha;\mathcal{B}\vee\mathcal{D}) - I(\alpha;\mathcal{C}|\mathcal{B}\vee\mathcal{D})\big] = I(\alpha;\mathcal{B}|\mathcal{D}) + I(\alpha;\mathcal{C}|\mathcal{B}\vee\mathcal{D}) - I(\alpha;\mathcal{B}\vee\mathcal{C}|\mathcal{D}),$$
where each bracketed expression vanishes by the preceding identity and all expressions are finite. Having established the above claim for a finite partition $\alpha$, we generalize it to
$$I(\mathcal{A};\mathcal{B}\vee\mathcal{C}|\mathcal{D}) = I(\mathcal{A};\mathcal{B}|\mathcal{D}) + I(\mathcal{A};\mathcal{C}|\mathcal{B}\vee\mathcal{D})$$
for an arbitrary field $\mathcal{A}$, taking its appropriately fine finite partitions.  □
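The classical chain rule for finite partitions, which the proof above lifts to arbitrary fields, is easy to verify numerically. The following sketch (ours; the alphabet sizes and the random seed are arbitrary) checks $I(\alpha;\beta\vee\gamma|\delta) = I(\alpha;\beta|\delta) + I(\alpha;\gamma|\beta\vee\delta)$ for partitions generated by four discrete random variables.

import itertools, random
from math import log

random.seed(3)

# Random joint pmf of (X, Y, Z, W) on a small product space.
support = list(itertools.product(range(2), range(2), range(3), range(2)))
w = [random.random() for _ in support]
pmf = {s: v / sum(w) for s, v in zip(support, w)}

def marg(keep):
    """Marginal pmf of the coordinates listed in keep."""
    out = {}
    for s, q in pmf.items():
        key = tuple(s[i] for i in keep)
        out[key] = out.get(key, 0.0) + q
    return out

def cmi(a, b, c):
    """I(S_a; S_b | S_c) for tuples of coordinate indices a, b, c."""
    pabc, pac, pbc, pc = marg(a + b + c), marg(a + c), marg(b + c), marg(c)
    val = 0.0
    for s, q in pmf.items():
        ka = tuple(s[i] for i in a); kb = tuple(s[i] for i in b); kc = tuple(s[i] for i in c)
        val += q * log(pabc[ka + kb + kc] * pc[kc] / (pac[ka + kc] * pbc[kb + kc]))
    return val

X, Y, Z, W = (0,), (1,), (2,), (3,)
lhs = cmi(X, Y + Z, W)                      # I(X; (Y,Z) | W)
rhs = cmi(X, Y, W) + cmi(X, Z, Y + W)       # I(X; Y | W) + I(X; Z | (Y,W))
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-12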

3. Applications

This section borrows its statements largely from Dębowski [1,2,3] and is provided only to sketch some context for our research and justify its applicability to statistical language modeling. Let $(X_i)_{i\in\mathbb{Z}}$ be a two-sided infinite stationary process over a countable alphabet $\mathbb{X}$ on a probability space $(\mathbb{X}^{\mathbb{Z}},\mathcal{X}^{\mathbb{Z}},P)$, where $X_k((\omega_i)_{i\in\mathbb{Z}}) := \omega_k$. We denote the random blocks $X_j^k := (X_i)_{j\le i\le k}$ and the complete σ-fields $\mathcal{G}_j^k := \sigma(X_j^k)$ generated by them. By the generalized calculus of Shannon information measures, i.e., Theorems 1 and 2, we can define the entropy rate $h_P$ and the excess entropy $E_P$ of the process $(X_i)_{i\in\mathbb{Z}}$ as
$$h_P := \lim_{n\to\infty} H_P(\mathcal{G}_0^0|\mathcal{G}_{-n}^{-1}) = H_P(\mathcal{G}_0^0|\mathcal{G}_{-\infty}^{-1}) \quad \text{if } \mathbb{X} \text{ is finite},$$
$$E_P := \lim_{n\to\infty} I_P(\mathcal{G}_{-n}^{-1};\mathcal{G}_0^{n-1}) = I_P(\mathcal{G}_{-\infty}^{-1};\mathcal{G}_0^{\infty});$$
see [10] for more background.
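For orientation, the following sketch (ours; the chain and its parameters are arbitrary) evaluates the block-entropy approximants behind the two limits above for a stationary two-state Markov chain, for which the exact entropy rate is available in closed form.

import itertools
from math import log

# Stationary two-state Markov chain (a toy example of ours).
T = [[0.9, 0.1],
     [0.2, 0.8]]                       # transition probabilities
pi = [2.0 / 3.0, 1.0 / 3.0]            # its stationary distribution

def block_entropy(n):
    """H_P(X_1^n) from the exact block probabilities of the chain."""
    h = 0.0
    for x in itertools.product((0, 1), repeat=n):
        q = pi[x[0]]
        for s, t in zip(x, x[1:]):
            q *= T[s][t]
        h -= q * log(q)
    return h

# Approximants of the entropy rate and the excess entropy:
#   H_P(X_0 | X_{-n}^{-1})      = H_P(X_1^{n+1}) - H_P(X_1^n)   (non-increasing in n)
#   I_P(X_{-n}^{-1}; X_0^{n-1}) = 2 H_P(X_1^n) - H_P(X_1^{2n})  (non-decreasing in n)
for n in (1, 2, 4, 8):
    print(n, block_entropy(n + 1) - block_entropy(n), 2 * block_entropy(n) - block_entropy(2 * n))

# For a first-order Markov chain both approximants are already exact at n = 1.
h_exact = -sum(pi[s] * T[s][t] * log(T[s][t]) for s in (0, 1) for t in (0, 1))
print("entropy rate:", h_exact, "excess entropy:", block_entropy(1) - h_exact)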
Let $T((\omega_i)_{i\in\mathbb{Z}}) := (\omega_{i+1})_{i\in\mathbb{Z}}$ be the shift operation and let $\mathcal{I} := \{A\in\mathcal{X}^{\mathbb{Z}} : T^{-1}(A) = A\}$ be the invariant σ-field. By the Birkhoff ergodic theorem [11], we have $\sigma(\mathcal{I})\subset\sigma(\mathcal{G}_{-\infty})\cap\sigma(\mathcal{G}_{+\infty})$ for the tail σ-fields $\mathcal{G}_{-\infty} := \bigcap_{n=1}^\infty \mathcal{G}_{-\infty}^{-n}$ and $\mathcal{G}_{+\infty} := \bigcap_{n=1}^\infty \mathcal{G}_{n}^{\infty}$. Hence, by Theorems 1 and 2 we further obtain the expressions
$$h_P = H_P(\mathcal{G}_0^0|\mathcal{G}_{-\infty}^{-1}) = H_P(\mathcal{G}_0^0|\mathcal{G}_{-\infty}^{-1}\vee\mathcal{I}) \quad \text{if } \mathbb{X} \text{ is finite},$$
$$E_P = I_P(\mathcal{G}_{-\infty}^{-1};\mathcal{G}_0^{\infty}) = H_P(\mathcal{I}) + I_P(\mathcal{G}_{-\infty}^{-1};\mathcal{G}_0^{\infty}|\mathcal{I}).$$
Denoting the conditional probability $F(A) := P(A|\mathcal{I})$, which is a random stationary ergodic measure by the ergodic decomposition theorem [12], we notice that $H_P(\mathcal{G}_0^0|\mathcal{G}_{-\infty}^{-1}\vee\mathcal{I}) = \mathbf{E}_P\, H_F(\mathcal{G}_0^0|\mathcal{G}_{-\infty}^{-1})$ and $I_P(\mathcal{G}_{-\infty}^{-1};\mathcal{G}_0^{\infty}|\mathcal{I}) = \mathbf{E}_P\, I_F(\mathcal{G}_{-\infty}^{-1};\mathcal{G}_0^{\infty})$, and consequently we obtain the ergodic decomposition of the entropy rate and excess entropy, which reads
$$h_P = \mathbf{E}_P\, h_F \quad \text{if } \mathbb{X} \text{ is finite},$$
$$E_P = H_P(\mathcal{I}) + \mathbf{E}_P\, E_F.$$
Formulae (45) and (46) were derived by Gray and Davisson [13] and by Dębowski [1], respectively. The ergodic decomposition of the entropy rate (45) states that a stationary process is asymptotically deterministic, i.e., $h_P = 0$, if and only if almost all of its ergodic components are asymptotically deterministic, i.e., $h_F = 0$ almost surely. In contrast, the ergodic decomposition of the excess entropy (46) states that a stationary process is infinitary, i.e., $E_P = \infty$, if some of its ergodic components are infinitary, i.e., $E_F = \infty$ with a nonzero probability, or if $H_P(\mathcal{I}) = \infty$, i.e., in particular if the process is strongly nonergodic; see [14,15].
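To see formulas (45) and (46) at work, consider a toy strongly nonergodic process (our example, not taken from the cited works): a mixture of two i.i.d. Bernoulli processes. Its ergodic components $F$ are the two Bernoulli measures, the invariant σ-field carries only the mixing weight, and the excess entropy of the mixture equals the entropy of that weight.

from math import log, comb

# Mixture of two i.i.d. Bernoulli processes with parameters p1 and p2 and weights w, 1 - w.
p1, p2, w = 0.3, 0.7, 0.5

def H2(q):
    """Binary entropy in nats."""
    return -q * log(q) - (1 - q) * log(1 - q)

def block_entropy(n):
    """H_P(X_1^n) of the mixture, summing over the number k of ones in the block."""
    h = 0.0
    for k in range(n + 1):
        q = w * p1**k * (1 - p1)**(n - k) + (1 - w) * p2**k * (1 - p2)**(n - k)
        h -= comb(n, k) * q * log(q)
    return h

h_P = w * H2(p1) + (1 - w) * H2(p2)    # ergodic decomposition of the entropy rate (45)
E_P = H2(w)                            # (46): H_P(I) = H2(w), and E_F = 0 for i.i.d. components
for n in (2, 8, 32):
    print(n, block_entropy(n + 1) - block_entropy(n),      # approaches h_P
          2 * block_entropy(n) - block_entropy(2 * n))     # approaches E_P
print("limits:", h_P, E_P)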
The linguistic interpretation of the above results is as follows. There is a hypothesis by Hilberg [16] that the excess entropy of natural language is infinite. This hypothesis is partly supported by the original estimates of conditional entropy by Shannon [17], by the power-law decay of the estimates of the entropy rate given by the PPM compression algorithm [18], by the approximately power-law growth of vocabulary known as Heaps' or Herdan's law [2,3,19,20], and by some other experiments applying neural statistical language models [21,22]. In parallel, Dębowski [1,2,3] supposed that the very large excess entropy of natural language may be caused by the fact that texts in natural language describe some relatively slowly evolving and very complex reality. Indeed, it can be proved mathematically that if the abstract reality described by random texts is unchangeable and infinitely complex, then the resulting stochastic process is strongly nonergodic, with $H_P(\mathcal{I}) = \infty$ in particular [1,2,3]. Consequently, its excess entropy is infinite by formula (46). We suppose that a similar mechanism may be at work for natural language; see [23,24,25,26] for further examples of abstract stochastic mechanisms leading to infinitary processes.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Dębowski, Ł. A general definition of conditional information and its application to ergodic decomposition. Stat. Probab. Lett. 2009, 79, 1260–1268.
2. Dębowski, Ł. On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts. IEEE Trans. Inf. Theory 2011, 57, 4589–4599.
3. Dębowski, Ł. Is Natural Language a Perigraphic Process? The Theorem about Facts and Words Revisited. Entropy 2018, 20, 85.
4. Gelfand, I.M.; Kolmogorov, A.N.; Yaglom, A.M. Towards the general definition of the amount of information. Dokl. Akad. Nauk SSSR 1956, 111, 745–748. (In Russian)
5. Dobrushin, R.L. A general formulation of the fundamental Shannon theorems in information theory. Uspekhi Mat. Nauk 1959, 14, 3–104. (In Russian)
6. Pinsker, M.S. Information and Information Stability of Random Variables and Processes; Holden-Day: San Francisco, CA, USA, 1964.
7. Wyner, A.D. A definition of conditional mutual information for arbitrary ensembles. Inf. Control 1978, 38, 51–59.
8. Billingsley, P. Probability and Measure; John Wiley: New York, NY, USA, 1979.
9. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley: New York, NY, USA, 1991.
10. Crutchfield, J.P.; Feldman, D.P. Regularities unseen, randomness observed: The entropy convergence hierarchy. Chaos 2003, 15, 25–54.
11. Birkhoff, G.D. Proof of the ergodic theorem. Proc. Natl. Acad. Sci. USA 1932, 17, 656–660.
12. Rokhlin, V.A. On the fundamental ideas of measure theory. Am. Math. Soc. Transl. Ser. 1 1962, 10, 1–54.
13. Gray, R.M.; Davisson, L.D. The ergodic decomposition of stationary discrete random processes. IEEE Trans. Inf. Theory 1974, 20, 625–636.
14. Löhr, W. Properties of the Statistical Complexity Functional and Partially Deterministic HMMs. Entropy 2009, 11, 385–401.
15. Crutchfield, J.P.; Marzen, S. Signatures of infinity: Nonergodicity and resource scaling in prediction, complexity, and learning. Phys. Rev. E 2015, 91, 050106.
16. Hilberg, W. Der bekannte Grenzwert der redundanzfreien Information in Texten—eine Fehlinterpretation der Shannonschen Experimente? Frequenz 1990, 44, 243–248.
17. Shannon, C. Prediction and entropy of printed English. Bell Syst. Tech. J. 1951, 30, 50–64.
18. Takahira, R.; Tanaka-Ishii, K.; Dębowski, Ł. Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora. Entropy 2016, 18, 364.
19. Herdan, G. Quantitative Linguistics; Butterworths: London, UK, 1964.
20. Heaps, H.S. Information Retrieval—Computational and Theoretical Aspects; Academic Press: New York, NY, USA, 1978.
21. Hahn, M.; Futrell, R. Estimating Predictive Rate-Distortion Curves via Neural Variational Inference. Entropy 2019, 21, 640.
22. Braverman, M.; Chen, X.; Kakade, S.M.; Narasimhan, K.; Zhang, C.; Zhang, Y. Calibration, Entropy Rates, and Memory in Language Models. arXiv 2019, arXiv:1906.05664.
23. Dębowski, Ł. Mixing, Ergodic, and Nonergodic Processes with Rapidly Growing Information between Blocks. IEEE Trans. Inf. Theory 2012, 58, 3392–3401.
24. Dębowski, Ł. On Hidden Markov Processes with Infinite Excess Entropy. J. Theor. Probab. 2014, 27, 539–551.
25. Travers, N.F.; Crutchfield, J.P. Infinite Excess Entropy Processes with Countable-State Generators. Entropy 2014, 16, 1396–1413.
26. Dębowski, Ł. Maximal Repetition and Zero Entropy Rate. IEEE Trans. Inf. Theory 2018, 64, 2212–2219.
