Article

Is Natural Language a Perigraphic Process? The Theorem about Facts and Words Revisited

Institute of Computer Science, Polish Academy of Sciences, ul. Jana Kazimierza 5, 01-248 Warszawa, Poland
Entropy 2018, 20(2), 85; https://doi.org/10.3390/e20020085
Submission received: 4 January 2018 / Revised: 23 January 2018 / Accepted: 24 January 2018 / Published: 26 January 2018
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)

Abstract

As we discuss, a stationary stochastic process is nonergodic when a random persistent topic can be detected in the infinite random text sampled from the process, whereas we call the process strongly nonergodic when an infinite sequence of independent random bits, called probabilistic facts, is needed to describe this topic completely. Replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we adapt this property back to ergodic processes. Subsequently, we call a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length. We present a simple example of such a process. Moreover, we demonstrate an assertion which we call the theorem about facts and words. This proposition states that the number of probabilistic or algorithmic facts which can be inferred from a text drawn from a process must be roughly smaller than the number of distinct word-like strings detected in this text by means of the Prediction by Partial Matching (PPM) compression algorithm. We also observe that the number of word-like strings for a sample of plays by Shakespeare follows an empirical stepwise power law, in stark contrast to Markov processes. Hence, we suppose that natural language considered as a process is not only non-Markov but also perigraphic.

1. Introduction

One of the motivating assumptions of information theory [1,2,3] is that communication in natural language can be reasonably modeled as a discrete stationary stochastic process, namely, an infinite sequence of discrete random variables with a well defined time-invariant probability distribution. The same assumption is made in several practical applications of computational linguistics, such as speech recognition [4] or part-of-speech tagging [5]. Whereas state-of-the-art stochastic models of natural language are far from being satisfactory, we may ask a more theoretically oriented question, namely:
What can be some general mathematical properties of natural language treated as a stochastic process, in view of empirical data?
In this paper, we will investigate a question of whether it is reasonable to assume that natural language communication is a perigraphic process.
To recall, a stationary process is called ergodic if the relative frequencies of all finite substrings in the infinite text generated by the process converge in the long run with probability one to some constants—the probabilities of the respective strings. Now, some basic linguistic intuition suggests that natural language does not satisfy this property, cf. ([3], Section 6.4). Namely, we can probably agree that there is a variation of topics of texts in natural language, and these topics can be empirically distinguished by counting relative frequencies of certain substrings called keywords. Hence, we expect that the relative frequencies of keywords in a randomly selected text in natural language are random variables depending on the random text topic. In the limit, for an infinitely long text, we may further suppose that the limits of relative frequencies of keywords persist to be random, and if this is true then natural language is not ergodic, i.e., it is nonergodic.
In this paper, we will entertain first a stronger hypothesis, namely, that natural language communication is strongly nonergodic. Informally speaking, a stationary process will be called strongly nonergodic if its random persistent topic has to be described using an infinite sequence of probabilistically independent binary random variables, called probabilistic facts. Like nonergodicity, strong nonergodicity is not empirically verifiable if we only have a single infinite sequence of data. However, replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we can adapt the property of strong nonergodicity back to ergodic processes. Subsequently, we will call a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length. It is a general observation that perigraphic processes have uncomputable distributions.
It is interesting to note that perigraphic processes can be singled out by some statistical properties of the texts they generate. We will exhibit a proposition, which we call the theorem about facts and words. Suppose that we have a finite text drawn from a stationary process. The theorem about facts and words says that the number of independent probabilistic or algorithmic facts that can be reasonably inferred from the text must be roughly smaller than the number of distinct word-like strings detected in the text by some standard data compression algorithm called the Prediction by Partial Matching (PPM) code [6,7]. It is important to stress that in this theorem we do not relate the numbers of all facts and all word-like strings, which would sound trivial, but we compare only the numbers of independent facts and distinct word-like strings.
Having the theorem about facts and words, we can also discuss some empirical data. Since the number of distinct word-like strings for texts in natural language follows an empirical stepwise power law, in stark contrast to Markov processes, we suppose that the number of inferrable random facts for natural language also follows a power law. That is, we suppose that natural language is not only non-Markov but also perigraphic.
Whereas in this paper we fill several important gaps and provide an overarching narrative, the basic ideas presented here are not so new. The starting point was a corollary of Zipf’s law and a hypothesis by Hilberg. Zipf’s law is an empirical observation that in texts in natural language, the frequencies of words obey a power-law decay when we sort the words according to their decreasing frequencies [8,9]. A corollary of this law, called Heaps’ law [10,11,12,13], states that the number of distinct words in a text in natural language grows like a power of the text length. In contrast to these simple empirical observations, Hilberg’s hypothesis is a less known conjecture about natural language stating that the entropy of a text chunk of an increasing length [14] or the mutual information between two adjacent text chunks [15,16,17,18] also obeys a power-law growth. In Ref. [19], it was heuristically shown that, if Hilberg’s hypothesis for mutual information is satisfied for an arbitrary stationary stochastic process, then texts drawn from this process also satisfy a kind of Heaps’ law if we detect the words using grammar-based codes [20,21,22,23]. This result is a historical antecedent of the theorem about facts and words.
Another important step was the discovery of some simple strongly nonergodic processes satisfying the power-law growth of mutual information, called Santa Fe processes, found by Dębowski in August 2002 but first reported only in [24]. Subsequently, in Ref. [25], a completely formal proof of the theorem about facts and words for strictly minimal grammar-based codes [23,26] was provided. The respective related theory of natural language was later reviewed in [27,28] and supplemented by a discussion of Santa Fe processes in [29]. A drawback of this theory at that time was that the strictly minimal grammar-based codes used in the statement of the theorem about facts and words are not computable in polynomial time [26]. This precluded an empirical verification of the theory.
To state the relative novelty, in this paper we announce a new, stronger version of the theorem about facts and words for a somewhat more elegant definition of inferrable facts and for the PPM code, which is computable in almost linear time. For the first time, we also present two cases of the theorem: one for strongly nonergodic processes, applying Shannon information theory, and one for general stationary processes, applying algorithmic information theory. Finally, we supplement these results with a rudimentary discussion of some empirical data.
The organization of this paper is as follows. In Section 2, we discuss some properties of ergodic and nonergodic processes. In Section 3, we define strongly nonergodic processes and we present some examples of them. Analogically, in Section 4, we discuss perigraphic processes. In Section 5, we discuss two versions of the theorem about facts and words. In Section 6, we discuss some empirical data and we suppose that natural language may be a perigraphic process. In Section 7, we offer concluding remarks. Moreover, three appendices follow the body of the paper. In Appendix A, we prove the first part of the theorem about facts and words. In Appendix B, we prove the second part of this theorem. In Appendix C, we show that the number of inferrable facts for the Santa Fe processes follows a power law.

2. Ergodic and Nonergodic Processes

We assume that the reader is familiar with some probability measure theory [30]. For a real-valued random variable Y on a probability space ( Ω , J , P ) , we denote its expectation
$$\mathbf{E}\, Y := \int Y \, dP.$$
Consider now a discrete stochastic process $(X_i)_{i=1}^\infty = (X_1, X_2, \ldots)$, where random variables $X_i$ take values from a set $\mathbb{X}$ of countably many distinct symbols, such as letters with which we write down texts in natural language. We denote blocks of consecutive random variables $X_j^k := (X_j, \ldots, X_k)$ and symbols $x_j^k := (x_j, \ldots, x_k)$. Let us define a binary random variable telling whether some string $x_1^n$ has occurred in sequence $(X_i)_{i=1}^\infty$ on positions from $i$ to $i+n-1$,
$$\Phi_i(x_1^n) := \mathbf{1}\{X_i^{i+n-1} = x_1^n\},$$
where
$$\mathbf{1}\{\phi\} = \begin{cases} 1, & \text{if } \phi \text{ is true}, \\ 0, & \text{if } \phi \text{ is false}. \end{cases}$$
The expectation of this random variable,
$$\mathbf{E}\, \Phi_i(x_1^n) = P(X_i^{i+n-1} = x_1^n),$$
is the probability of the chosen string according to the considered probability measure $P$, whereas the arithmetic average of consecutive random variables $\frac{1}{m}\sum_{i=1}^{m} \Phi_i(x_1^n)$ is the relative frequency of the same string in a finite sequence of random symbols $X_1^{m+n-1}$.
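As a small illustration of these relative frequencies, here is a minimal Python sketch (not part of the original paper; the function name is hypothetical). For an IID sequence of fair coin flips, the relative frequency of the string 01 approaches 1/4.

import random

def relative_frequency(sample, pattern):
    """Fraction of starting positions at which `pattern` occurs in `sample`."""
    n, k = len(sample), len(pattern)
    if n < k:
        return 0.0
    hits = sum(1 for i in range(n - k + 1) if sample[i:i + k] == pattern)
    return hits / (n - k + 1)

random.seed(0)
sample = "".join(random.choice("01") for _ in range(100_000))
print(relative_frequency(sample, "01"))   # close to 1/4 for a fair-coin IID source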
Process ( X i ) i = 1 is called stationary (with respect to a probability measure P) if expectations E Φ i ( x 1 n ) do not depend on position i for any string x 1 n . In this case, we have the following well known theorem, which establishes that the limiting relative frequencies of strings x 1 n in infinite sequence ( X i ) i = 1 exist almost surely, i.e., with probability 1:
Theorem 1
(ergodic theorem, cf. e.g., [31]). For any discrete stationary process ( X i ) i = 1 , there exist limits
$$\Phi(x_1^n) := \lim_{m\to\infty} \frac{1}{m} \sum_{i=1}^{m} \Phi_i(x_1^n) \quad \text{almost surely},$$
with expectations E Φ ( x 1 n ) = E Φ i ( x 1 n ) .
In general, limits Φ ( x 1 n ) are random variables depending on a particular value of infinite sequence ( X i ) i = 1 . It is quite natural, however, to require that the relative frequencies of strings Φ ( x 1 n ) are almost surely constants, equal to the expectations E Φ i ( x 1 n ) . Subsequently, process ( X i ) i = 1 will be called ergodic (with respect to a probability measure P) if limits Φ ( x 1 n ) are almost surely constant for any string x 1 n . The standard definition of an ergodic process is more abstract but is equivalent to this statement ([31], Lemma 7.15).
The following examples of ergodic processes are well known:
  • Process ( X i ) i = 1 is called IID (independent identically distributed) if
    $$P(X_1^n = x_1^n) = \pi(x_1)\cdots\pi(x_n).$$
    All IID processes are ergodic.
  • Process ( X i ) i = 1 is called Markov (of order 1) if
    $$P(X_1^n = x_1^n) = \pi(x_1)\, p(x_2|x_1)\cdots p(x_n|x_{n-1}).$$
    A Markov process is ergodic in particular if
    $$p(x_i|x_{i-1}) > c > 0.$$
    For a sufficient and necessary condition, see ([32], Theorem 7.16).
  • Process ( X i ) i = 1 is called hidden Markov if X i = g ( S i ) for a certain Markov process ( S i ) i = 1 and a function g. A hidden Markov process is ergodic in particular if the underlying Markov process is ergodic.
Whereas IID and Markov processes are some basic models in probability theory, hidden Markov processes are of practical importance in computational linguistics [4,5]. Hidden Markov processes as considered there usually satisfy condition (8) and therefore they are ergodic.
Let us call a probability measure P stationary or ergodic, respectively, if the process ( X i ) i = 1 is stationary or ergodic with respect to the measure P. Suppose that we have a stationary measure P that generates some data ( X i ) i = 1 . We can define a new random measure F equal to the relative frequencies of blocks in the data ( X i ) i = 1 . It turns out that the measure F is almost surely ergodic. Formally, we have this proposition.
Theorem 2
(cf. ([33], Theorem 9.10)). Any process ( X i ) i = 1 with a stationary measure P is almost surely ergodic with respect to the random measure F given by
F ( X 1 n = x 1 n ) : = Φ ( x 1 n ) .
Moreover, from the random measure F, we can obtain the stationary measure P by integration, P ( X 1 n = x 1 n ) = E F ( X 1 n = x 1 n ) . The following result asserts that this integral representation of measure P is unique.
Theorem 3
(ergodic decomposition, cf. ([33], Theorem 9.12)). Any stationary probability measure P can be represented as
$$P(X_1^n = x_1^n) = \int F(X_1^n = x_1^n)\, d\nu(F),$$
where ν is a unique measure on stationary ergodic measures.
In other words, stationary ergodic measures are some building blocks from which we can construct any stationary measure. For a stationary probability measure P, the particular values of the random ergodic measure F are called the ergodic components of measure P.
Consider for instance, a Bernoulli( θ ) process with measure
$$F_\theta(X_1^n = x_1^n) = \theta^{\sum_{i=1}^n x_i}\,(1-\theta)^{\,n - \sum_{i=1}^n x_i},$$
where $x_i \in \{0, 1\}$ and $\theta \in [0, 1]$. This measure will be contrasted with the measure of a mixture Bernoulli process with parameter $\theta$ uniformly distributed on interval $[0, 1]$,
$$P(X_1^n = x_1^n) = \int_0^1 F_\theta(X_1^n = x_1^n)\, d\theta = \frac{1}{n+1}\binom{n}{\sum_{i=1}^n x_i}^{-1}.$$
Measure (11) is a measure of an IID process and is therefore ergodic, whereas measure (12) is a mixture of ergodic measures and hence it is nonergodic.
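As a minimal illustration (a hypothetical Python sketch, not from the paper), the following snippet samples the mixture Bernoulli process (12) by first drawing theta uniformly and then tossing a theta-coin. The limiting frequency of ones depends on the random theta of each realization, which is exactly why the mixture is nonergodic.

import random

def sample_mixture_bernoulli(n, rng):
    """Draw theta uniformly, then generate n Bernoulli(theta) symbols."""
    theta = rng.random()                  # random "topic" of the whole sequence
    xs = [1 if rng.random() < theta else 0 for _ in range(n)]
    return theta, xs

rng = random.Random(1)
for _ in range(3):
    theta, xs = sample_mixture_bernoulli(100_000, rng)
    print(f"theta = {theta:.3f}, empirical frequency of ones = {sum(xs)/len(xs):.3f}")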

3. Strongly Nonergodic Processes

According to our definition, a process is ergodic when the relative frequencies of any strings in a random sample in the long run converge to some constants. Consider now the following thought experiment. Suppose that we select a random book from a library. In [34], it was observed that there is hardly any book that contains both the word lemma and the word love, namely, there are some keywords that are specific to particular topics of texts. We can pursue this idea one little step farther. Counting the relative frequencies of keywords, such as lemma for a text on mathematics and love for a romance, we can effectively recognize the topic of the book. Simply put, the relative frequencies of some keywords will be higher for books concerning some topics, whereas they will be lower for books concerning other topics. Hence, in our thought experiment, we expect that the relative frequencies of keywords are some random variables with values depending on the particular topic of the randomly selected book. Since keywords are just some particular strings, we may conclude that the stochastic process that models natural language should be nonergodic.
The above thought experiment provides another perspective onto nonergodic processes. According to the following theorem, a process is nonergodic when we can effectively distinguish in the limit at least two random topics in it. In the statement, function f : X * 0 , 1 , 2 assumes values 0 or 1 when we can identify the topic, whereas it takes value 2 when we are not certain which topic a given text is about.
Theorem 4
(cf. [24]). A stationary discrete process $(X_i)_{i=1}^\infty$ is nonergodic if and only if there exists a function $f: \mathbb{X}^* \to \{0, 1, 2\}$ and a binary random variable $Z$ such that $0 < P(Z = 0) < 1$ and
$$\lim_{n\to\infty} P\big(f(X_i^{i+n-1}) = Z\big) = 1$$
for any position $i \in \mathbb{N}$.
A binary variable Z satisfying condition (13) will be called a probabilistic fact. A probabilistic fact tells which of two topics the infinite text generated by the stationary process is about. It is a kind of a random switch which is preset before we start scanning the infinite text; compare a similar wording in [35]. To keep the proofs simple, here we only give a new elementary proof of the “⇒” statement of Theorem 4. The proof of the “⇐” part applies some measure theory and follows the idea of Theorem 9 from [24] for strongly nonergodic processes, which we will discuss in the next paragraph.
Proof. 
(only ⇒) Suppose that process $(X_i)_{i=1}^\infty$ is nonergodic. Then, there exists a string $x_1^k$ such that $\Phi \ne \mathbf{E}\,\Phi$ for $\Phi := \Phi(x_1^k)$ with some positive probability. Hence, there exists a real number $y$ such that $P(\Phi = y) = 0$ and
$$P(\Phi > y) = 1 - P(\Phi < y) \in (0, 1).$$
Define $Z := \mathbf{1}\{\Phi > y\}$ and $f(X_i^{i+n-1}) := Z_i^n := \mathbf{1}\{\Phi_i^n > y\}$, where
$$\Phi_i^n := \frac{1}{n-k+1} \sum_{j=i}^{i+n-k} \Phi_j(x_1^k).$$
Since $\lim_{n\to\infty} \Phi_i^n = \Phi$ almost surely and $\Phi$ satisfies (14), convergence $\lim_{n\to\infty} Z_i^n = Z$ also holds almost surely. Applying the Lebesgue dominated convergence theorem, we obtain
$$\lim_{n\to\infty} P\big(f(X_i^{i+n-1}) = Z\big) = \lim_{n\to\infty} \mathbf{E}\left[Z_i^n Z + (1 - Z_i^n)(1 - Z)\right] = \mathbf{E}\left[Z^2 + (1-Z)^2\right] = 1.$$
 ☐
As for books in the natural language, we may have an intuition that the pool of available book topics is extremely large and contains many more topics than just two. For this reason, we may need not a single probabilistic fact Z but rather a sequence of probabilistic facts Z 1 , Z 2 , to specify the topic of a random book completely. Formally, stationary processes requiring an infinite sequence of independent uniformly distributed probabilistic facts to describe the topic of an infinitely long text will be called strongly nonergodic.
Definition 1
(cf. [24,25]). A stationary discrete process $(X_i)_{i=1}^\infty$ is called strongly nonergodic if there exist a function $g: \mathbb{N}\times\mathbb{X}^* \to \{0, 1, 2\}$ and a binary IID process $(Z_k)_{k=1}^\infty$ such that $P(Z_k = 0) = P(Z_k = 1) = 1/2$ and
$$\lim_{n\to\infty} P\big(g(k; X_i^{i+n-1}) = Z_k\big) = 1$$
for any position $i \in \mathbb{N}$ and any index $k \in \mathbb{N}$.
As we have stated above, for a strongly nonergodic process, there is an infinite number of independent probabilistic facts $(Z_k)_{k=1}^\infty$ with a uniform distribution on the set $\{0, 1\}$. Formally, these probabilistic facts can be assembled into a single real random variable $T = \sum_{k=1}^\infty 2^{-k} Z_k$, which is uniformly distributed on the unit interval $[0, 1]$. The value of variable $T$ identifies the topic of a random infinite text generated by the stationary process. Thus, for a strongly nonergodic process, we have a continuum of available topics which can be incrementally identified from any sufficiently long text. Put formally, according to Theorem 9 from [24], a stationary process is strongly nonergodic if and only if its shift-invariant $\sigma$-field contains a nonatomic sub-$\sigma$-field. We note in passing that in [24] strongly nonergodic processes were called uncountable description processes.
In view of Theorem 9 from [24], the mixture Bernoulli process (12) is some example of a strongly nonergodic process. In this case, the parameter θ plays the role of the random variable T = k = 1 2 k Z k . Showing that condition (17) is satisfied for this process in an elementary fashion is a tedious exercise. Hence, let us present now a simpler guiding example of a strongly nonergodic process, which we introduced in [24,25] and called the Santa Fe process. Let ( Z k ) k = 1 be a binary IID process with P ( Z k = 0 ) = P ( Z k = 1 ) = 1 / 2 . Let ( K i ) i = 1 be an IID process with K i assuming values in natural numbers with a power-law distribution
$$P(K_i = k) \propto k^{-\alpha}, \quad \alpha > 1.$$
The Santa Fe process with exponent α is a sequence ( X i ) i = 1 , where
X i = ( K i , Z K i )
are pairs of a random number K i and the corresponding probabilistic fact Z K i . The Santa Fe process is strongly nonergodic since condition (17) holds for example for
$$g(k; x_1^n) = \begin{cases} 0, & \text{if for all } 1 \le i \le n,\; x_i = (k, z) \implies x_i = (k, 0), \\ 1, & \text{if for all } 1 \le i \le n,\; x_i = (k, z) \implies x_i = (k, 1), \\ 2, & \text{else}. \end{cases}$$
Simply speaking, function g ( k ; · ) returns 0 or 1 when an unambiguous value of the second constituent can be read off from pairs x i = ( k , · ) and returns 2 when there is some ambiguity. Condition (17) is satisfied since
$$P\big(g(k; X_i^{i+n-1}) = Z_k\big) = P\big(K_i = k \text{ for some } 1 \le i \le n\big) = 1 - \big(1 - P(K_i = k)\big)^n \xrightarrow[n\to\infty]{} 1.$$
Some salient property of the Santa Fe process is the power law growth of the expected number of probabilistic facts, which can be inferred from a finite text drawn from the process. Consider a strongly nonergodic process ( X i ) i = 1 . The set of initial independent probabilistic facts inferrable from a finite text X 1 n will be defined as
$$U(X_1^n) := \left\{l \in \mathbb{N} : g(k; X_1^n) = Z_k \text{ for all } k \le l\right\}.$$
In other words, we have $U(X_1^n) = \{1, 2, \ldots, l\}$, where $l$ is the largest number such that $g(k; X_1^n) = Z_k$ for all $k \le l$. To capture the power-law growth of an arbitrary function $s: \mathbb{N} \to \mathbb{R}$, we will use the Hilberg exponent, defined as
$$\operatorname{hilb}_{n\to\infty} s(n) := \limsup_{n\to\infty} \frac{\log^+ s(2^n)}{\log 2^n},$$
where $\log^+ x := \log(x+1)$ for $x \ge 0$ and $\log^+ x := 0$ for $x < 0$, cf. [36]. In contrast to Ref. [36], for technical reasons, we define the Hilberg exponent only for an exponentially sparse subsequence of terms $s(2^n)$ rather than for all terms $s(n)$. Moreover, in [36], the Hilberg exponent was considered only for mutual information $s(n) = I(X_1^n; X_{n+1}^{2n})$, defined later in Equation (51). We observe that for the exact power-law growth $s(n) = n^\beta$ with $\beta \ge 0$, we have $\operatorname{hilb}_{n\to\infty} s(n) = \beta$. More generally, the Hilberg exponent captures an asymptotic power-law growth of the sequence. As shown in Appendix C, for the Santa Fe process with exponent $\alpha$, we have the asymptotic power-law growth
$$\operatorname{hilb}_{n\to\infty} \mathbf{E}\, \operatorname{card} U(X_1^n) = 1/\alpha \in (0, 1).$$
This property distinguishes the Santa Fe process from the mixture Bernoulli process (12), for which the respective Hilberg exponent is zero, as we discuss in Section 6.
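The following minimal Python sketch (hypothetical, not the authors' code) samples a Santa Fe process with exponent alpha = 2 on a truncated support for the power-law distribution (18) and counts the initial inferrable facts card U(X_1^n); the counts grow roughly like a power of the text length.

import random

def sample_santa_fe(n, alpha, z, rng):
    """Draw n pairs (K_i, z_{K_i}) with P(K_i = k) proportional to k**(-alpha),
    truncating the support of K_i to 1..len(z)-1 for simplicity."""
    ks = list(range(1, len(z)))
    weights = [k ** (-alpha) for k in ks]
    return [(k, z[k]) for k in rng.choices(ks, weights=weights, k=n)]

def card_u(text):
    """Largest l such that the facts z_1, ..., z_l can all be read off the text;
    for Santa Fe pairs, fact k is revealed exactly when the number k occurs."""
    seen = {k for k, _ in text}
    l = 0
    while l + 1 in seen:
        l += 1
    return l

rng = random.Random(0)
z = [None] + [rng.randint(0, 1) for _ in range(10_000)]   # stand-in sequence of facts
for n in (1_000, 10_000, 100_000):
    print(n, card_u(sample_santa_fe(n, alpha=2.0, z=z, rng=rng)))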

4. Perigraphic Processes

Is it possible to demonstrate by a statistical investigation of texts that natural language is really strongly nonergodic and satisfies a condition similar to (24)? In the thought experiment described in the beginning of the previous section, we have ignored the issue of constructing an infinitely long text. In reality, every book with a well defined topic is finite. If we want to obtain an unbounded collection of texts, we need to assemble a corpus of different books and it depends on our assembling criteria whether the books in the corpus will concern some persistent random topic. Moreover, if we already have a single infinite sequence of books generated by some stationary source and we estimate probabilities as relative frequencies of blocks of symbols in this sequence, then, by Theorem 2, we will obtain an ergodic probability measure almost surely.
In this situation, we may ask whether the idea of the power-law growth of the number of inferrable probabilistic facts can be translated somehow to the case of ergodic measures. Some straightforward method to apply is to replace the sequence of independent uniformly distributed probabilistic facts ( Z k ) k = 1 , being random variables, with an algorithmically random sequence of particular binary digits ( z k ) k = 1 . Such digits z k will be called algorithmic facts in contrast to variables Z k being called probabilistic facts.
Let us recall some basic concepts. For a discrete random variable X, let P ( X ) denote the random variable that takes value P ( X = x ) when X takes value x. We will introduce the pointwise entropy
$$H(X) := -\log P(X),$$
where log stands for the natural logarithm. The prefix-free Kolmogorov complexity K ( u ) of a string u is the length of the shortest self-delimiting program written in binary digits that prints out string u ([37], Chapter 3). K ( u ) is the founding concept of the algorithmic information theory and is an analogue of the pointwise entropy. To keep our notation analogical to (25), we will write the algorithmic entropy
H a ( u ) : = K ( u ) log 2 .
If the probability measure is computable, then the algorithmic entropy is close to the pointwise entropy. On the one hand, by the Shannon–Fano coding for a computable probability measure, the algorithmic entropy is less than the pointwise entropy plus a constant which depends on the probability measure and the dimensionality of the distribution ([37], Corollary 4.3.1). Formally,
$$H_a(X_1^n) \le H(X_1^n) + 2\log n + C_P,$$
where $C_P \ge 0$ is a certain constant depending on the probability measure $P$. On the other hand, since the prefix-free Kolmogorov complexity is also the length of a prefix-free code, we have
$$\mathbf{E}\, H_a(X_1^n) \ge \mathbf{E}\, H(X_1^n).$$
It is also true that $H_a(X_1^n) \ge H(X_1^n)$ for sufficiently large $n$ almost surely ([38], Theorem 3.1). Thus, we have shown that the algorithmic entropy is in some sense close to the pointwise entropy, for a computable probability measure.
Next, we will discuss the difference between probabilistic and algorithmic randomness. Whereas for an IID sequence of random variables ( Z k ) k = 1 with P ( Z k = 0 ) = P ( Z k = 1 ) = 1 / 2 we have
H ( Z 1 k ) = k log 2 ,
similarly an infinite sequence of binary digits ( z k ) k = 1 is called algorithmically random (in the Martin-Löf sense) when there exists a constant C 0 such that
$$H_a(z_1^k) \ge k\log 2 - C$$
for all k N ([37], Theorem 3.6.1). The probability that the aforementioned sequence of random variables ( Z k ) k = 1 is algorithmically random equals 1—for example by ([38], Theorem 3.1), so algorithmically random sequences are typical realizations of sequence ( Z k ) k = 1 .
Let ( X i ) i = 1 be a stationary process. We observe that generalizing condition (17) in an algorithmic fashion does not make much sense. Namely, condition
$$\lim_{n\to\infty} P\big(g(k; X_i^{i+n-1}) = z_k\big) = 1$$
is trivially satisfied for any stationary process for a certain computable function $g: \mathbb{N}\times\mathbb{X}^* \to \{0, 1, 2\}$ and an algorithmically random sequence $(z_k)_{k=1}^\infty$. It turns out so since there exists a computable function $\omega: \mathbb{N}\times\mathbb{N} \to \{0, 1\}$ such that $\lim_{n\to\infty} \omega(k; n) = \Omega_k$, where $(\Omega_k)_{k=1}^\infty$ is the binary expansion of the halting probability $\Omega = \sum_{k=1}^\infty 2^{-k}\Omega_k$, which is a lower semi-computable algorithmically random sequence ([37], Section 3.6.2).
In spite of this negative result, the power-law growth of the number of inferrable algorithmic facts corresponds to some nontrivial property. For a computable function g : N × X * 0 , 1 , 2 and an algorithmically random sequence of binary digits ( z k ) k = 1 , which we will call algorithmic facts, the set of initial algorithmic facts inferrable from a finite text X 1 n will be defined as
$$U_a(X_1^n) := \left\{l \in \mathbb{N} : g(k; X_1^n) = z_k \text{ for all } k \le l\right\}.$$
Subsequently, we will call a process perigraphic if the expected number of algorithmic facts which can be inferred from a finite text sampled from the process grows asymptotically like a power of the text length.
Definition 2.
A stationary discrete process ( X i ) i = 1 is called perigraphic if
$$\operatorname{hilb}_{n\to\infty} \mathbf{E}\, \operatorname{card} U_a(X_1^n) > 0$$
for some computable function $g: \mathbb{N}\times\mathbb{X}^* \to \{0, 1, 2\}$ and an algorithmically random sequence of binary digits $(z_k)_{k=1}^\infty$.
Perigraphic processes can be ergodic. The proof of Theorem A10 from Appendix C can be easily adapted to show that some example of a perigraphic process is the Santa Fe process with sequence ( Z k ) k = 1 replaced by an algorithmically random sequence of binary digits ( z k ) k = 1 . To be very concrete, the example of a perigraphic process can be process ( X i ) i = 1 with
X i = ( K i , Ω K i )
where ( Ω k ) k = 1 is the binary expansion of the halting probability and ( K i ) i = 1 is an IID process with K i assuming values in natural numbers with the power-law distribution (18). This process is not only perigraphic but also IID and hence ergodic.
We can also easily show the following proposition.
Theorem 5.
Any perigraphic process ( X i ) i = 1 has an uncomputable measure P.
Proof. 
Assume that a perigraphic process ( X i ) i = 1 has a computable measure P. By inequalities (A25) and (A26) from Appendix A, we have
$$\operatorname{hilb}_{n\to\infty} \mathbf{E}\, \operatorname{card} U_a(X_1^n) \le \operatorname{hilb}_{n\to\infty} \mathbf{E}\left[H_a(X_1^n) - H(X_1^n)\right].$$
Since, for a computable measure P, we also have inequality (27), then
hilb n E card U a ( X 1 n ) = 0 .
Since we have obtained a contradiction with the assumption that the process is perigraphic, measure P cannot be computable. ☐

5. Theorem about Facts and Words

In this section, we will present a result about stationary processes, which we call the theorem about facts and words. This proposition states that the expected number of independent probabilistic or algorithmic facts inferrable from the text drawn from a stationary process must be roughly less than the expected number of distinct word-like strings detectable in the text by a simple procedure involving the PPM compression algorithm. This result states, in particular, that an asymptotic power law growth of the number of inferrable probabilistic or algorithmic facts as a function of the text length produces a statistically measurable effect, namely, an asymptotic power law growth of the number of word-like strings.
To state the theorem about facts and words formally, we need first to discuss the PPM code. The general idea of the PPM code comes from Refs. [6,7], developed independently. This compression scheme was called the PPM code in [7], which stands for “Prediction by Partial Matching” and prevails in the literature, whereas it was called measure R in [6,39]. Whereas Ref. [7] focused on practical applications to data compression and earned most of the fame, in Refs. [6,39], one can find a few results that matter for theoretical considerations. Let us denote strings of symbols x j k : = ( x j , , x k ) , adopting an important convention that x j k is the empty string for k < j . In the following, we consider strings over a finite alphabet, say, x i X = 1 , , D . We define the frequency of a substring w 1 k in a string x 1 n as
$$N(w_1^k|x_1^n) := \sum_{i=1}^{n-k+1} \mathbf{1}\{x_i^{i+k-1} = w_1^k\}.$$
Now, we will define the PPM probabilities in a way that is closer to the conventions of paper [6,39] than to the conventions of Ref. [7]. In particular, in Equation (38), we consider frequencies of strings x i k i and x i k i 1 in different strings, x 1 i 1 and x 1 i 2 , respectively, in the numerator and in the denominator to guarantee the proper normalization according to our definition of N ( w 1 k | x 1 n ) .
Definition 3
(cf. [6,7]). For $x_1^n \in \mathbb{X}^n$ and $k \in \{-1, 0, 1, \ldots\}$, we put
$$\operatorname{PPM}_k(x_i|x_1^{i-1}) := \begin{cases} \dfrac{1}{D}, & i \le k, \\[2mm] \dfrac{N(x_{i-k}^{i}|x_1^{i-1}) + 1}{N(x_{i-k}^{i-1}|x_1^{i-2}) + D}, & i > k. \end{cases}$$
Quantity PPM k ( x i | x 1 i 1 ) is called the conditional PPM probability of order k of symbol x i given string x 1 i 1 . Next, we put
$$\operatorname{PPM}_k(x_1^n) := \prod_{i=1}^{n} \operatorname{PPM}_k(x_i|x_1^{i-1}).$$
Quantity PPM k ( x 1 n ) is called the PPM probability of order k of string x 1 n . Finally, we put
$$\operatorname{PPM}(x_1^n) := \frac{6}{\pi^2} \sum_{k=-1}^{\infty} \frac{\operatorname{PPM}_k(x_1^n)}{(k+2)^2}.$$
Quantity PPM ( x 1 n ) is called the (total) PPM probability of the string x 1 n .
Quantity PPM k ( x 1 n ) is an incremental approximation of the unknown true probability of the string x 1 n , assuming that the string has been generated by a Markov process of order k. In contrast, quantity PPM ( x 1 n ) is a mixture of such Markov approximations for all finite orders. In general, the PPM probabilities are probability distributions over strings of a fixed length. That is:
  • $\operatorname{PPM}_k(x_i|x_1^{i-1}) > 0$ and $\sum_{x_i \in \mathbb{X}} \operatorname{PPM}_k(x_i|x_1^{i-1}) = 1$,
  • $\operatorname{PPM}_k(x_1^n) > 0$ and $\sum_{x_1^n \in \mathbb{X}^n} \operatorname{PPM}_k(x_1^n) = 1$,
  • $\operatorname{PPM}(x_1^n) > 0$ and $\sum_{x_1^n \in \mathbb{X}^n} \operatorname{PPM}(x_1^n) = 1$.
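To make Definition 3 concrete, here is a minimal Python sketch (hypothetical helper names, not the authors' implementation) of the conditional PPM probabilities and of the order-k PPM probability; following Equation (A28), order k = -1 is treated as the uniform distribution. The last lines check numerically that each PPM_k sums to one over all strings of a fixed length, as listed above.

from itertools import product

def N(w, x, n=None):
    """Number of occurrences of w starting at positions 1, ..., n - len(w) + 1 of x."""
    if n is None:
        n = len(x)
    k = len(w)
    return sum(1 for j in range(max(n - k + 1, 0)) if x[j:j + k] == w)

def ppm_cond(x, i, k, D):
    """Conditional PPM probability of order k of symbol x_i given x_1^{i-1} (Definition 3);
    order k = -1 is the uniform distribution, cf. Equation (A28)."""
    if k < 0 or i <= k:
        return 1.0 / D
    num = N(x[i - k - 1:i], x, n=i - 1) + 1        # frequency of x_{i-k}^{i}   in x_1^{i-1}
    den = N(x[i - k - 1:i - 1], x, n=i - 2) + D    # frequency of x_{i-k}^{i-1} in x_1^{i-2}
    return num / den

def ppm_k_prob(x, k, D):
    """PPM probability of order k of the string x (Definition 3)."""
    p = 1.0
    for i in range(1, len(x) + 1):
        p *= ppm_cond(x, i, k, D)
    return p

# Sanity check of the normalization: each PPM_k is a probability distribution on X^n.
D, n = 2, 4
for k in (-1, 0, 1, 2):
    total = sum(ppm_k_prob("".join(w), k, D) for w in product("ab", repeat=n))
    print(f"order {k}: sum over strings of length {n} = {total:.6f}")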
In the following, we define an analogue of the pointwise entropy
$$H_{\operatorname{PPM}}(x_1^n) := -\log \operatorname{PPM}(x_1^n).$$
Quantity H PPM ( x 1 n ) will be called the length of the PPM code for the string x 1 n . By nonnegativity of the Kullback–Leibler divergence, we have for any random block X 1 n that
$$\mathbf{E}\, H_{\operatorname{PPM}}(X_1^n) \ge \mathbf{E}\, H(X_1^n).$$
The length of the PPM code or the PPM probability, respectively, have two notable properties. First, the PPM probability is a universal probability, i.e., in the limit, the length of the PPM code consistently estimates the entropy rate of a stationary source. Second, the PPM probability can be effectively computed, i.e., the summation in definition (40) can be rewritten as a finite sum. Let us state these two results formally.
Theorem 6
(cf. [39]). The PPM probability is universal in expectation, i.e., we have
lim n 1 n E H PPM ( X 1 n ) = lim n 1 n E H ( X 1 n )
for any stationary process ( X i ) i = 1 .
For stationary ergodic processes, the above claim follows by an iterated application of the ergodic theorem as shown, e.g., in Theorem 1.1 from [39] for the measure R, which is a slight modification of the PPM probability. To generalize the claim for nonergodic processes, one can use the ergodic decomposition theorem, but the exact proof requires too large of a theoretical overload to be presented within the framework of this paper.
Theorem 7.
The PPM probability can be effectively computed, i.e., we have
$$\operatorname{PPM}(x_1^n) = \frac{6}{\pi^2} \sum_{k=0}^{L(x_1^n)} \frac{\operatorname{PPM}_k(x_1^n)}{(k+2)^2} + \left(1 - \frac{6}{\pi^2} \sum_{k=0}^{L(x_1^n)} \frac{1}{(k+2)^2}\right) D^{-n},$$
where
$$L(x_1^n) = \max\left\{k : N(w_1^k|x_1^n) > 1 \text{ for some } w_1^k\right\}$$
is the maximal repetition of string x 1 n .
Proof. 
We have $N(x_{i-k}^{i-1}|x_1^{i-2}) = 0$ for $k > L(x_1^{i-1})$. Hence, $\operatorname{PPM}_k(x_1^n) = D^{-n}$ for $k > L(x_1^n)$ and, in view of this, we obtain the claim. ☐
Maximal repetition as a function of a string was studied, e.g., in [40,41]. Since the PPM probability is a computable probability distribution, then, by (27) for a certain constant C PPM , we have
$$H_a(X_1^n) \le H_{\operatorname{PPM}}(X_1^n) + 2\log n + C_{\operatorname{PPM}}.$$
Let us denote the length of the PPM code of order k,
$$H_{\operatorname{PPM}_k}(x_1^n) := -\log \operatorname{PPM}_k(x_1^n).$$
As we can easily see, the code length $H_{\operatorname{PPM}}(x_1^n)$ is approximately equal to the minimal code length $H_{\operatorname{PPM}_k}(x_1^n)$, where the minimization goes over $k \in \{-1, 0, 1, \ldots\}$. Thus, it is meaningful to consider the following definition of the PPM order of an arbitrary string.
Definition 4.
The PPM order G PPM ( x 1 n ) is the smallest G such that
$$H_{\operatorname{PPM}_G}(x_1^n) \le H_{\operatorname{PPM}_k}(x_1^n) \quad \text{for all } k \ge -1.$$
Theorem 8.
We have $G_{\operatorname{PPM}}(x_1^n) \le L(x_1^n)$.
Proof. 
It follows by $\operatorname{PPM}_k(x_1^n) = D^{-n} = \operatorname{PPM}_{-1}(x_1^n)$ for $k > L(x_1^n)$. ☐
Let us divert for a short while from the PPM code definition. The set of distinct substrings of length m in string x 1 n is
$$V(m|x_1^n) := \left\{y_1^m : x_{t+1}^{t+m} = y_1^m \text{ for some } 0 \le t \le n - m\right\}.$$
The cardinality of set V ( m | x 1 n ) as a function of substring length m is called the subword complexity of string x 1 n [40]. Now let us apply the concept of the PPM order to define some special set of substrings of an arbitrary string x 1 n . The set of distinct PPM words detected in x 1 n will be defined as the set V ( m | x 1 n ) for m = G PPM ( x 1 n ) , i.e.,
$$V_{\operatorname{PPM}}(x_1^n) := V\big(G_{\operatorname{PPM}}(x_1^n)\,\big|\,x_1^n\big).$$
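Continuing the sketch from the normalization check above (it reuses the hypothetical helper ppm_k_prob defined there), the following Python fragment computes the maximal repetition of Theorem 7, the PPM order of Definition 4, and the set of PPM words just defined; by Theorem 8, only orders up to the maximal repetition need to be inspected.

from math import log

def max_repetition(x):
    """Maximal repetition L(x): the largest k such that some length-k substring occurs twice."""
    for k in range(len(x) - 1, 0, -1):
        seen = set()
        for j in range(len(x) - k + 1):
            w = x[j:j + k]
            if w in seen:
                return k
            seen.add(w)
    return 0

def ppm_order(x, D):
    """G_PPM(x) of Definition 4: the smallest order minimizing the PPM code length."""
    best_k, best_len = -1, -log(ppm_k_prob(x, -1, D))
    for k in range(0, max_repetition(x) + 1):
        code_len = -log(ppm_k_prob(x, k, D))
        if code_len < best_len - 1e-12:
            best_k, best_len = k, code_len
    return best_k

def ppm_words(x, D):
    """V_PPM(x): distinct substrings of x of length G_PPM(x)."""
    m = ppm_order(x, D)
    return {x[j:j + m] for j in range(len(x) - m + 1)}

text = "to be or not to be that is the question"
D = len(set(text))
print(ppm_order(text, D), len(ppm_words(text, D)))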
Let us define the pointwise mutual information
$$I(X; Y) := H(X) + H(Y) - H(X, Y)$$
and the algorithmic mutual information
$$I_a(u; v) := H_a(u) + H_a(v) - H_a(u, v).$$
Now, we may write down the theorem about facts and words. The theorem states that the Hilberg exponent for the expected number of initial independent inferrable facts is less than the Hilberg exponent for the expected mutual information and this is less than the Hilberg exponent for the expected number of distinct detected PPM words plus the PPM order. (The PPM order is usually much less than the number of distinct PPM words.)
Theorem 9
(facts and words I, cf. [25]). Let ( X i ) i = 1 be a stationary strongly nonergodic process over a finite alphabet. We have inequalities
$$\operatorname{hilb}_{n\to\infty} \mathbf{E}\, \operatorname{card} U(X_1^n) \le \operatorname{hilb}_{n\to\infty} \mathbf{E}\, I(X_1^n; X_{n+1}^{2n}) \le \operatorname{hilb}_{n\to\infty} \mathbf{E}\left[G_{\operatorname{PPM}}(X_1^n) + \operatorname{card} V_{\operatorname{PPM}}(X_1^n)\right].$$
Proof. 
The claim follows by conjunction of Theorem A2 from Appendix A and Theorem A8 from Appendix B. ☐
Theorem 9 also has an algorithmic version, for ergodic processes in particular.
Theorem 10
(facts and words II). Let ( X i ) i = 1 be a stationary process over a finite alphabet. We have inequalities
$$\operatorname{hilb}_{n\to\infty} \mathbf{E}\, \operatorname{card} U_a(X_1^n) \le \operatorname{hilb}_{n\to\infty} \mathbf{E}\, I_a(X_1^n; X_{n+1}^{2n}) \le \operatorname{hilb}_{n\to\infty} \mathbf{E}\left[G_{\operatorname{PPM}}(X_1^n) + \operatorname{card} V_{\operatorname{PPM}}(X_1^n)\right].$$
Proof. 
The claim follows by conjunction of Theorem A3 from Appendix A and Theorem A8 from Appendix B. ☐
The theorem about facts and words previously proven in [25] differs from Theorem 9 in three aspects. First of all, the theorem in [25] did not apply the concept of the Hilberg exponent and compared lim   inf n with lim   sup n rather than lim   sup n with lim   sup n . Second, the number of inferrable facts was defined as a functional of the process distribution rather than a random variable depending on a particular text. Third, the number of words was defined using a minimal grammar-based code rather than the concept of the PPM order. Minimal grammar-based codes are not computable in a polynomial time in contrast to the PPM order. Thus, we may claim that Theorem 9 is stronger than the theorem about facts and words previously proven in [25]. Moreover, applying Kolmogorov complexity and algorithmic randomness to formulate and prove Theorem 10 is a new idea.
It is an interesting question whether we have an almost sure version of Theorems 9 and 10, namely, whether
$$\operatorname{hilb}_{n\to\infty} \operatorname{card} U(X_1^n) \le \operatorname{hilb}_{n\to\infty} I(X_1^n; X_{n+1}^{2n}) \le \operatorname{hilb}_{n\to\infty} \left[G_{\operatorname{PPM}}(X_1^n) + \operatorname{card} V_{\operatorname{PPM}}(X_1^n)\right] \quad \text{almost surely}$$
for strongly nonergodic processes, or
$$\operatorname{hilb}_{n\to\infty} \operatorname{card} U_a(X_1^n) \le \operatorname{hilb}_{n\to\infty} I_a(X_1^n; X_{n+1}^{2n}) \le \operatorname{hilb}_{n\to\infty} \left[G_{\operatorname{PPM}}(X_1^n) + \operatorname{card} V_{\operatorname{PPM}}(X_1^n)\right] \quad \text{almost surely}$$
for general stationary processes. We leave this question as an open problem.

6. Hilberg Exponents and Empirical Data

It is advisable to show that the Hilberg exponents considered in Theorem 9 can assume any value in range $[0, 1]$ and the difference between them can be arbitrarily large. We adopt a convention that the set of inferrable probabilistic facts is empty for ergodic processes, $U(X_1^n) = \emptyset$. With this remark in mind, let us inspect some examples of processes.
First of all, for Markov processes and their strongly nonergodic mixtures, of any order k, but, over a finite alphabet, we have
hilb n E card U ( X 1 n ) = hilb n E I ( X 1 n ; X n + 1 2 n ) = 0 .
This happens to be so since the sufficient statistic of text $X_1^n$ for predicting text $X_{n+1}^{2n}$ is the maximum likelihood estimate of the transition matrix, the elements of which can assume at most $(n+1)$ distinct values. Hence, $\mathbf{E}\, I(X_1^n; X_{n+1}^{2n}) \le D^{k+1}\log(n+1)$, where $D$ is the cardinality of the alphabet and $k$ is the Markov order of the process. Similarly, it can be shown for these processes that the PPM order satisfies $\lim_{n\to\infty} G_{\operatorname{PPM}}(X_1^n) \le k$. Hence, the number of PPM words, which satisfies inequality $\operatorname{card} V_{\operatorname{PPM}}(X_1^n) \le D^{G_{\operatorname{PPM}}(X_1^n)}$, is also bounded above. In consequence, for Markov processes and their strongly nonergodic mixtures, of any order but over a finite alphabet, we obtain
hilb n G PPM ( X 1 n ) + card V PPM ( X 1 n ) = 0 almost surely .
In contrast, Santa Fe processes are strongly nonergodic mixtures of some IID processes over an infinite alphabet. Being mixtures of IID processes over an infinite alphabet, they need not satisfy condition (58). In fact, as shown in [25,29] and Appendix C, for the Santa Fe process with exponent α , we have the asymptotic power-law growth
$$\operatorname{hilb}_{n\to\infty} \mathbf{E}\, \operatorname{card} U(X_1^n) = \operatorname{hilb}_{n\to\infty} \mathbf{E}\, I(X_1^n; X_{n+1}^{2n}) = 1/\alpha \in (0, 1).$$
The same equality for the number of inferrable probabilistic facts and the mutual information is also satisfied by a stationary coding of the Santa Fe process into a finite alphabet (see [29]).
Let us also note that, whereas the theorem about facts and words provides an inequality of Hilberg exponents, this inequality can be strict. To provide some substance, in [29], we have constructed a modification of the Santa Fe process that is ergodic and over a finite alphabet. For this modification, we have only the power-law growth of mutual information
$$\operatorname{hilb}_{n\to\infty} \mathbf{E}\, I(X_1^n; X_{n+1}^{2n}) = 1/\alpha \in (0, 1).$$
Since, in this case, hilb n E card U ( X 1 n ) = 0 , then the difference between the Hilberg exponents for the number of inferrable probabilistic facts and the number of PPM words can be an arbitrary number in range ( 0 , 1 ) .
Now, we are in a position to discuss some empirical data. In this case, we cannot directly measure the number of facts and the mutual information, but we can compute the PPM order and count the number of PPM words. In Figure 1, we have presented data for a collection of 35 plays by William Shakespeare (downloaded from the Project Gutenberg, https://www.gutenberg.org/) and a random permutation of characters appearing in this collection of texts. The random permutation of characters is an IID process over a finite alphabet, so, in this case, we obtain
hilb n card V PPM ( x 1 n ) = 0 .
In contrast, for the plays of Shakespeare, we seem to have a stepwise power law growth of the number of distinct PPM words. Thus, we may suppose that, for natural language, we have more generally
hilb n card V PPM ( x 1 n ) > 0 .
If relationship (62) holds true, then natural language cannot be a Markov process of any order. Moreover, in view of the striking difference between observations (61) and (62), we may suppose that the number of inferrable probabilistic or algorithmic facts for texts in natural language also obeys a power-law growth. Formally speaking, this condition would translate to natural language being strongly nonergodic or perigraphic. We note that this hypothesis arises only as a form of a weak inductive inference since formally we cannot deduce condition (33) from mere condition (62), regardless of the amount of data supporting condition (62).
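As a crude finite-sample proxy for the Hilberg exponent defined in Section 3 (a hypothetical Python sketch, not the procedure used for Figure 1), one can fit the slope of log+ s(2^k) against log 2^k by least squares over measurements at exponentially spaced prefix lengths; counts growing like a power of the text length give a positive slope, whereas bounded counts give a slope near zero.

import math

def hilberg_slope(counts):
    """Least-squares slope of log^+ s(2^k) against log 2^k, k = 1, ..., len(counts)."""
    xs = [k * math.log(2) for k in range(1, len(counts) + 1)]
    ys = [max(math.log(s + 1), 0.0) for s in counts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Counts s(2^k) growing like (2^k)**0.5 give a slope close to 0.5; constant counts give 0.
print(hilberg_slope([(2 ** k) ** 0.5 for k in range(1, 21)]))
print(hilberg_slope([50.0] * 20))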

7. Conclusions

In this article, a stationary process has been called strongly nonergodic if some persistent random topic can be detected in the process and an infinite number of independent binary random variables, called probabilistic facts, is needed to describe this topic completely. Replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we have adapted this property back to ergodic processes. Subsequently, we have called a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length.
We have demonstrated an assertion, which we call the theorem about facts and words. This proposition states that the number of independent probabilistic or algorithmic facts which can be inferred from a text drawn from a process must be roughly smaller than the number of distinct word-like strings detected in this text by means of the PPM compression algorithm. We have exhibited two versions of this theorem: one for strongly nonergodic processes, applying the Shannon information theory, and one for ergodic processes, applying the algorithmic information theory.
Subsequently, we have exhibited an empirical observation that the number of distinct word-like strings grows like a stepwise power law for a collection of plays by William Shakespeare, in stark contrast to Markov processes. This observation does not rule out that the number of probabilistic or algorithmic facts inferrable from texts in natural language also grows like a power law. Hence, we have supposed that natural language is a perigraphic process.
We suppose that future related research should pursue a further analysis of the theorem about facts and words and a demonstration of an almost sure version of this statement. It is also an important, still unresolved question whether theoretical analysis of effective universal coding algorithms and their rates of convergence to the entropy rate can contribute to some definite statements about natural language treated as a stochastic process. We realize that the results of this paper may still be too inconclusive as far as linguistic theory is concerned. As we see it, the main merit of this paper lies in linking some concepts of Shannon information theory and algorithmic information theory and providing some linguistic interpretations of them.

Acknowledgments

We wish to thank Jacek Koronacki, Jan Mielniczuk, Vladimir Vovk, and Boris Ryabko for helpful comments.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IID: independent identically distributed
PPM: Prediction by Partial Matching

Appendix A. Facts and Mutual Information

In the appendices, we will make use of several kinds of information measures.
  • First, there are four pointwise Shannon information measures:
    • entropy
      $H(X) = -\log P(X)$,
    • conditional entropy
      $H(X|Z) := -\log P(X|Z)$,
    • mutual information
      $I(X;Y) := H(X) + H(Y) - H(X,Y)$,
    • conditional mutual information
      $I(X;Y|Z) := H(X|Z) + H(Y|Z) - H(X,Y|Z)$,
    where $P(X)$ is the probability of a random variable $X$ and $P(X|Z)$ is the conditional probability of a random variable $X$ given a random variable $Z$. The above definitions make sense for discrete-valued random variables $X$ and $Y$ and an arbitrary random variable $Z$. If $Z$ is a discrete-valued random variable, then also $H(X,Z) - H(Z) = H(X|Z)$ and $I(X;Z) = H(X) - H(X|Z)$.
  • Moreover, we will use four algorithmic information measures:
    • entropy
      H a ( x ) = K ( x ) log 2 ,
    • conditional entropy
      H a ( x | z ) : = K ( x | z ) log 2 ,
    • mutual information
      $I_a(x;y) := H_a(x) + H_a(y) - H_a(x,y)$,
    • conditional mutual information
      $I_a(x;y|z) := H_a(x|z) + H_a(y|z) - H_a(x,y|z)$,
    where $K(x)$ is the prefix-free Kolmogorov complexity of an object $x$ and $K(x|z)$ is the prefix-free Kolmogorov complexity of an object $x$ given an object $z$. In the above definitions, $x$ and $y$ must be finite objects (finite texts), whereas $z$ can be also an infinite object (an infinite sequence). If $z$ is a finite object, then $H_a(x,z) - H_a(z) =^+ H_a(x|z, K(z))$ rather than being equal to $H_a(x|z)$, where $=^+$, $<^+$, and $>^+$ are the equality and the inequalities up to an additive constant ([37], Theorem 3.9.1). Hence,
    $$H_a(x) - H_a(x|z) + H_a(K(z)) >^+ I_a(x;z) =^+ H_a(x) - H_a(x|z, K(z)) >^+ H_a(x) - H_a(x|z).$$
In the following, we will prove a result for Hilberg exponents.
Theorem A1.
For a function $G: \mathbb{N} \to \mathbb{R}$, define $J(n) := 2G(n) - G(2n)$. If the limit $\lim_{n\to\infty} G(n)/n = g$ exists and is finite, then
$$\operatorname{hilb}_{n\to\infty} \left[G(n) - ng\right] \le \operatorname{hilb}_{n\to\infty} J(n),$$
with an equality if $J(2^n) >^+ 0$ for all but finitely many $n$.
Proof. 
The proof makes use of the telescope sum
$$\sum_{k=0}^{\infty} \frac{J(2^{k+n})}{2^{k+1}} = G(2^n) - 2^n g.$$
Denote $\delta := \operatorname{hilb}_{n\to\infty} J(n)$. Since $\operatorname{hilb}_{n\to\infty} \left[G(n) - ng\right] \le 1$, it is sufficient to prove inequality (A2) for $\delta < 1$. In this case, $J(2^n) \le 2^{(\delta+\epsilon)n}$ for all but finitely many $n$ for any $\epsilon > 0$. Then, for $\epsilon < 1 - \delta$, by the telescope sum (A3), we obtain for sufficiently large $n$ that
$$G(2^n) - 2^n g \le \sum_{k=0}^{\infty} \frac{2^{(\delta+\epsilon)(k+n)}}{2^{k+1}} \le 2^{(\delta+\epsilon)n} \sum_{k=0}^{\infty} 2^{(\delta+\epsilon-1)k - 1} = \frac{2^{(\delta+\epsilon)n}}{2\left(1 - 2^{\delta+\epsilon-1}\right)}.$$
Since ϵ can be taken arbitrarily small, we obtain (A2).
Now assume that $J(2^n) >^+ 0$ for all but finitely many $n$. By the telescope sum (A3), we have $J(2^n)/2 <^+ G(2^n) - 2^n g$ for sufficiently large $n$. Hence,
$$\delta \le \operatorname{hilb}_{n\to\infty} \left[G(n) - ng\right].$$
Combining this with (A2), we obtain $\operatorname{hilb}_{n\to\infty} \left[G(n) - ng\right] = \delta$. ☐
For any stationary process ( X i ) i = 1 over a finite alphabet, there exists a limit
$$h := \lim_{n\to\infty} \frac{\mathbf{E}\, H(X_1^n)}{n} = \mathbf{E}\, H(X_1|X_2^\infty),$$
called the entropy rate of process ( X i ) i = 1 [3]. By (28), (43), and (46), we also have
$$h = \lim_{n\to\infty} \frac{\mathbf{E}\, H_a(X_1^n)}{n}.$$
Moreover, for a stationary process, the mutual information satisfies
$$\mathbf{E}\, I(X_1^n; X_{n+1}^{2n}) = 2\,\mathbf{E}\, H(X_1^n) - \mathbf{E}\, H(X_1^{2n}) \ge 0,$$
$$\mathbf{E}\, I_a(X_1^n; X_{n+1}^{2n}) = 2\,\mathbf{E}\, H_a(X_1^n) - \mathbf{E}\, H_a(X_1^{2n}) >^+ 0.$$
Hence, by Theorem A1, we obtain
$$\operatorname{hilb}_{n\to\infty} \left[\mathbf{E}\, H(X_1^n) - hn\right] = \operatorname{hilb}_{n\to\infty} \mathbf{E}\, I(X_1^n; X_{n+1}^{2n}),$$
$$\operatorname{hilb}_{n\to\infty} \left[\mathbf{E}\, H_a(X_1^n) - hn\right] = \operatorname{hilb}_{n\to\infty} \mathbf{E}\, I_a(X_1^n; X_{n+1}^{2n}).$$
Subsequently, we will prove the initial parts of Theorems 9 and 10, i.e., the two versions of the theorem about facts and words. The probabilistic statement for strongly nonergodic processes goes first.
Theorem A2
(facts and mutual information I). Let ( X i ) i = 1 be a stationary strongly nonergodic process over a finite alphabet. We have inequality
$$\operatorname{hilb}_{n\to\infty} \mathbf{E}\, \operatorname{card} U(X_1^n) \le \operatorname{hilb}_{n\to\infty} \mathbf{E}\, I(X_1^n; X_{n+1}^{2n}).$$
Proof. 
Let us write $S_n := \operatorname{card} U(X_1^n)$. Observe that
$$\begin{aligned} \mathbf{E}\, H(Z_1^{S_n}|S_n) &= -\sum_{s,w} P(S_n = s, Z_1^s = w) \log P(Z_1^s = w | S_n = s) \\ &\ge -\sum_{s,w} P(S_n = s, Z_1^s = w) \log \frac{P(Z_1^s = w)}{P(S_n = s)} = -\sum_{s,w} P(S_n = s, Z_1^s = w) \log \frac{2^{-s}}{P(S_n = s)} \\ &= (\log 2)\, \mathbf{E}\, S_n - \mathbf{E}\, H(S_n), \end{aligned}$$
$$\mathbf{E}\, H(S_n) \le (\mathbf{E}\, S_n + 1)\log(\mathbf{E}\, S_n + 1) - \mathbf{E}\, S_n \log \mathbf{E}\, S_n = \log(\mathbf{E}\, S_n + 1) + \mathbf{E}\, S_n \log\frac{\mathbf{E}\, S_n + 1}{\mathbf{E}\, S_n} \le \log(\mathbf{E}\, S_n + 1) + 1,$$
where the second row of inequalities follows by the maximum entropy bound from ([3], Lemma 13.5.4). Hence, by the inequality
$$\mathbf{E}\, H(X|Y) \le \mathbf{E}\, H(X|f(Y))$$
for a measurable function f, we obtain that
$$\begin{aligned} \mathbf{E}\, H(X_1^n) - \mathbf{E}\, H(X_1^n|Z_1^\infty) &\ge \mathbf{E}\, H(X_1^n|S_n) - \mathbf{E}\, H(X_1^n|Z_1^\infty, S_n) - \mathbf{E}\, H(S_n) \\ &\ge \mathbf{E}\, H(X_1^n|S_n) - \mathbf{E}\, H(X_1^n|Z_1^{S_n}, S_n) - \mathbf{E}\, H(S_n) \\ &= \mathbf{E}\, I(X_1^n; Z_1^{S_n}|S_n) - \mathbf{E}\, H(S_n) \\ &\ge \mathbf{E}\, H(Z_1^{S_n}|S_n) - \mathbf{E}\, H(Z_1^{S_n}|X_1^n, S_n) - \mathbf{E}\, H(S_n) \\ &= \mathbf{E}\, H(Z_1^{S_n}|S_n) - \mathbf{E}\, H(S_n) \\ &\ge (\log 2)\, \mathbf{E}\, S_n - 2\,\mathbf{E}\, H(S_n) \ge (\log 2)\, \mathbf{E}\, S_n - 2\left[\log(\mathbf{E}\, S_n + 1) + 1\right]. \end{aligned}$$
Now, we observe that
$$\mathbf{E}\, H(X_1^n|Z_1^\infty) \ge \mathbf{E}\, H(X_1^n|X_{n+1}^\infty) = hn$$
since the sequence of random variables $Z_1^\infty$ is a measurable function of the sequence of random variables $X_{n+1}^\infty$, as shown in [24,25]. Hence, we have
$$\mathbf{E}\, H(X_1^n) - \mathbf{E}\, H(X_1^n|Z_1^\infty) \le \mathbf{E}\, H(X_1^n) - hn.$$
By inequalities (A17) and (A18) and equality (A10), we obtain inequality (A12). ☐
The algorithmic version of the theorem about facts and words follows roughly the same idea, with some necessary adjustments.
Theorem A3
(facts and mutual information II). Let ( X i ) i = 1 be a stationary process over a finite alphabet. We have inequality
$$\operatorname{hilb}_{n\to\infty} \mathbf{E}\, \operatorname{card} U_a(X_1^n) \le \operatorname{hilb}_{n\to\infty} \mathbf{E}\, I_a(X_1^n; X_{n+1}^{2n}).$$
Proof. 
Let us write $S_n := \operatorname{card} U_a(X_1^n)$. Observe that
$$H_a(z_1^{S_n}|S_n) >^+ H_a(z_1^{S_n}) - H_a(S_n) \ge (\log 2) S_n - C - H_a(S_n),$$
$$H_a(S_n) <^+ 2\log(S_n + 1),$$
$$H_a(K(z_1^{S_n})) <^+ 2\log(K(z_1^{S_n}) + 1) <^+ 2\log(S_n + 1),$$
where the first row of inequalities follows by the algorithmic randomness of $z_1^\infty$, whereas the second and the third row of inequalities follow by the bounds $K(n) <^+ 2\log_2(n+1)$ for $n \ge 0$ and $K(z_1^k) <^+ 2k$. Moreover, for any computable function $f$, there exists a constant $C_f \ge 0$ such that
H a ( x | y ) < + H a ( x | f ( y ) ) + C f .
Hence, we obtain that
$$\begin{aligned} H_a(X_1^n) - H_a(X_1^n|z_1^\infty) &>^+ H_a(X_1^n|S_n) - H_a(X_1^n|z_1^\infty, S_n) - H_a(S_n) \\ &>^+ H_a(X_1^n|S_n) - H_a(X_1^n|z_1^{S_n}, S_n) - H_a(S_n) \\ &>^+ I_a(X_1^n; z_1^{S_n}|S_n) - H_a(K(z_1^{S_n})) - H_a(S_n) \\ &>^+ H_a(z_1^{S_n}|S_n) - H_a(z_1^{S_n}|X_1^n, K(X_1^n), S_n) - H_a(K(z_1^{S_n})) - H_a(S_n) \\ &>^+ H_a(z_1^{S_n}|S_n) - C_g - H_a(K(z_1^{S_n})) - H_a(S_n) \\ &>^+ (\log 2) S_n - C - C_g - H_a(K(z_1^{S_n})) - 2H_a(S_n) \\ &>^+ (\log 2) S_n - 6\log(S_n + 1) - C - C_g. \end{aligned}$$
Since $\mathbf{E}\, \log(S_n + 1) \le \log(\mathbf{E}\, S_n + 1)$ by the Jensen inequality, then
$$\mathbf{E}\, H_a(X_1^n) - \mathbf{E}\, H_a(X_1^n|z_1^\infty) >^+ (\log 2)\, \mathbf{E}\, S_n - 6\log(\mathbf{E}\, S_n + 1) - C - C_g.$$
Now, we observe that
$$\mathbf{E}\, H_a(X_1^n|z_1^\infty) \ge \mathbf{E}\, H(X_1^n) \ge hn$$
since the conditional prefix-free Kolmogorov complexity with the second argument fixed is the length of a prefix-free code. Hence, we have
$$\mathbf{E}\, H_a(X_1^n) - \mathbf{E}\, H_a(X_1^n|z_1^\infty) \le \mathbf{E}\, H_a(X_1^n) - hn.$$
By inequalities (A25) and (A27) and equality (A11), we obtain inequality (A19). ☐

Appendix B. Mutual Information and PPM Words

In this appendix, we will investigate some algebraic properties of the length of the PPM code to be used for proving the second part of the theorem about facts and words. First of all, it can be seen that
$$H_{\operatorname{PPM}_k}(x_1^n) = \begin{cases} n\log D, & k = -1, \\ k\log D + \displaystyle\sum_{u\in\mathbb{X}^k} \log \frac{\big(N(u|x_1^{n-1}) + D - 1\big)!}{(D-1)!\,\prod_{a=1}^{D} N(ua|x_1^n)!}, & k \ge 0. \end{cases}$$
Expression (A28) can be further rewritten using notation
$$\log^* n := \begin{cases} 0, & n = 0, \\ \log n! - n\log n + n, & n \ge 1, \end{cases}$$
$$H(n_1, \ldots, n_l) := \begin{cases} \displaystyle\sum_{i=1:\, n_i > 0}^{l} n_i \log \frac{\sum_{j=1}^{l} n_j}{n_i}, & \text{if some } n_j > 0 \text{ exists}, \\ 0, & \text{else}, \end{cases}$$
$$K(n_1, \ldots, n_l) := \sum_{i=1}^{l} \log^* n_i - \log^* \sum_{i=1}^{l} n_i.$$
Then, for $k \ge 0$, we define
$$H^0_{\operatorname{PPM}_k}(x_1^n) := \sum_{u\in\mathbb{X}^k} H\big(N(u1|x_1^n), \ldots, N(uD|x_1^n)\big),$$
$$H^1_{\operatorname{PPM}_k}(x_1^n) := \sum_{u\in\mathbb{X}^k} H\big(N(u|x_1^{n-1}), D-1\big) - \sum_{u\in\mathbb{X}^k} K\big(N(u1|x_1^n), \ldots, N(uD|x_1^n), D-1\big).$$
As a result, for $k \ge 0$, we obtain
$$H_{\operatorname{PPM}_k}(x_1^n) = k\log D + H^0_{\operatorname{PPM}_k}(x_1^n) + H^1_{\operatorname{PPM}_k}(x_1^n).$$
In the following, we will analyze the terms on the right-hand side of (A34).
Theorem A4.
For $k \ge 0$ and $n \ge 1$, we have
$$\tilde{D}\, \operatorname{card} V(k|x_1^{n-1}) \le H^1_{\operatorname{PPM}_k}(x_1^n) < D\, \operatorname{card} V(k|x_1^{n-1})\,(2 + \log n),$$
where $\tilde{D} := -D \log (D^{-1})! > 0$.
Proof. 
Observe that H ( 0 , D 1 ) = K ( 0 , , 0 , D 1 ) = 0 . Hence, the summation in H PPM k 1 ( x 1 n ) can be restricted to u X k such that N ( u | x 1 n 1 ) 1 . Consider such a u and write N = N ( u | x 1 n 1 ) and N a = N ( u a | x 1 n ) .
Since H ( n 1 , , n l ) 0 and K ( n 1 , , n l ) 0 (the second inequality follows by subadditivity of log * n ), we obtain first
H N , D 1 K N 1 , , N D , D 1 H N , D 1 = N log 1 + D 1 N + ( D 1 ) log 1 + N D 1 N · D 1 N + ( D 1 ) log 1 + N D 1 = ( D 1 ) 1 + log 1 + N D 1 < D 2 + log n ,
where we use log ( 1 + x ) x and N < n . On the other hand, function log * n is concave so by a = 1 D N a = N and the Jensen inequality for log * n , we obtain
H N , D 1 K N 1 , , N D , D 1 F N , D : = N log 1 + D 1 N + ( D 1 ) log 1 + N D 1 + log * ( N + D 1 ) log * ( D 1 ) D log * N / D = log ( N + D 1 ) ! log ( D 1 ) ! D log N / D ! N log D = log ( N + D 1 ) ! ( D 1 ) ! N / D ! D D N 0 ,
since
N / D ! D D N = N D ( N D ) D ( N 2 D ) D D D ( N + D 1 ) ( N + D 2 ) D = ( N + D 1 ) ! ( D 1 ) ! .
Moreover, function F N , D is growing in argument N. Hence,
F N , D F 1 , D = D log D 1 ! .
Summing inequalities (A36) and (A39) over u X k such that N ( u | x 1 n ) 1 , we obtain the claim. ☐
The mutual information is defined as a difference of entropies. Replacing the entropy with an arbitrary function H Q ( u ) , we obtain this quantity:
Definition A1.
The Q pointwise mutual information is defined as
$$I_Q(u; v) := H_Q(u) + H_Q(v) - H_Q(uv).$$
We will show that the PPM k 0 pointwise mutual information cannot be positive.
Theorem A5.
For $n_i = \sum_{j=1}^{l} n_{ij}$, where $n_{ij} \ge 0$, we have
$$H(n_1, \ldots, n_k) \ge \sum_{j=1}^{l} H(n_{1j}, \ldots, n_{kj}).$$
Proof. 
Write $N := \sum_{i=1}^{k}\sum_{j=1}^{l} n_{ij}$, $p_{ij} := n_{ij}/N$, $q_i := \sum_{j=1}^{l} p_{ij}$, and $r_j := \sum_{i=1}^{k} p_{ij}$. We observe that
$$H(n_1, \ldots, n_k) - \sum_{j=1}^{l} H(n_{1j}, \ldots, n_{kj}) = N \sum_{i=1}^{k}\sum_{j=1}^{l} p_{ij} \log \frac{p_{ij}}{q_i r_j},$$
which is N times the Kullback–Leibler divergence between distributions p i j and q i r j and thus is nonnegative. ☐
Theorem A6.
For $k \ge 0$, we have
$$I^0_{\operatorname{PPM}_k}(x_1^n; x_{n+1}^{n+m}) \le 0.$$
Proof. 
Consider $k \ge 0$. For $u \in \mathbb{X}^k$ and $a \in \mathbb{X}$, we have
$$N(ua|x_1^{n+m}) = N(ua|x_1^n) + N(ua|x_{n-k}^{n+k}) + N(ua|x_{n+1}^{n+m}).$$
Thus, using Theorem A5, we obtain
H N ( u 1 | x 1 n + m ) , , N ( u D | x 1 n + m ) H N ( u 1 | x 1 n ) , , N ( u D | x 1 n ) + H N ( u 1 | x n k n + k ) , , N ( u D | x n k n + k ) + H N ( u 1 | x n + 1 n + m ) , , N ( u D | x n + 1 n + m ) .
Since the second term on the right-hand side is greater than or equal to zero, we may omit it and, summing the remaining terms over all $u \in \mathbb{X}^k$, we obtain the claim. ☐
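A hedged numerical check of Theorem A6 (again not part of the proof): for random strings and small $k$, the quantity $I^0_{\mathrm{PPM}|k}$ computed from overlapping substring counts should never come out positive. The names below are ad hoc and natural logarithms are assumed.

```python
import math
import random
from itertools import product

def h_fun(counts):
    total = sum(counts)
    return 0.0 if total == 0 else sum(c * math.log(total / c) for c in counts if c > 0)

def count(u, x):
    k = len(u)
    return sum(1 for i in range(len(x) - k + 1) if x[i:i + k] == u)

def h0(x, k, alphabet):                               # H^0_PPM|k as in (A32)
    return sum(h_fun([count(u + (a,), x) for a in alphabet])
               for u in product(alphabet, repeat=k))

random.seed(0)
alphabet = (0, 1, 2)
for trial in range(5):
    x = tuple(random.choice(alphabet) for _ in range(40))
    for k in range(3):
        i0 = h0(x[:20], k, alphabet) + h0(x[20:], k, alphabet) - h0(x, k, alphabet)
        assert i0 <= 1e-9, (trial, k, i0)             # Theorem A6: I^0_PPM|k <= 0
print("I^0_PPM|k <= 0 held in all trials")
```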
Now, we will show that the PPM pointwise mutual information between two parts of a string is roughly bounded above by the cardinality of the PPM vocabulary of the string multiplied by the logarithm of the string length.
Theorem A7.
We have
$$
I_{\mathrm{PPM}}(x_1^n; x_{n+1}^{n+m}) \le 1 + 4 \log\bigl(G_{\mathrm{PPM}}(x_1^{n+m}) + 2\bigr) + \bigl(G_{\mathrm{PPM}}(x_1^{n+m}) + 1\bigr) \log D + 2 D \operatorname{card} V_{\mathrm{PPM}}(x_1^{n+m}) \, \bigl(2 + \log(n+m)\bigr).
\tag{A46}
$$
Proof. 
Consider $k \ge 0$. By Theorems A4 and A6, we obtain
$$
I_{\mathrm{PPM}|k}(x_1^n; x_{n+1}^{n+m}) = k \log D + I^0_{\mathrm{PPM}|k}(x_1^n; x_{n+1}^{n+m}) + I^1_{\mathrm{PPM}|k}(x_1^n; x_{n+1}^{n+m}) \le k \log D + D \operatorname{card} V(k|x_1^n) (2 + \log n) + D \operatorname{card} V(k|x_{n+1}^{n+m}) (2 + \log m) \le k \log D + 2 D \operatorname{card} V(k|x_1^{n+m}) \, \bigl(2 + \log(n+m)\bigr).
\tag{A47}
$$
In contrast, $I_{\mathrm{PPM}|-1}(x_1^n; x_{n+1}^{n+m}) = 0$. Now, let $G = G_{\mathrm{PPM}}(x_1^{n+m})$. Since
$$
H_{\mathrm{PPM}}(x_1^{n+m}) \ge H_{\mathrm{PPM}|G}(x_1^{n+m})
\tag{A48}
$$
and
$$
H_{\mathrm{PPM}}(u) \le H_{\mathrm{PPM}|k}(u) + 1/2 + 2 \log(k+2)
\tag{A49}
$$
for any $u \in \mathbb{X}^*$ and $k \ge -1$, we obtain
$$
I_{\mathrm{PPM}}(x_1^n; x_{n+1}^{n+m}) \le I_{\mathrm{PPM}|G}(x_1^n; x_{n+1}^{n+m}) + 1 + 4 \log(G+2) \le 1 + 4 \log(G+2) + (G+1) \log D + 2 D \operatorname{card} V(G|x_1^{n+m}) \, \bigl(2 + \log(n+m)\bigr).
\tag{A50}
$$
Hence, the claim follows. ☐
Consequently, we may prove the second part of Theorems 9 and 10, i.e., the theorems about facts and words.
Theorem A8
(mutual information and words). Let $(X_i)_{i=1}^{\infty}$ be a stationary process over a finite alphabet. We have inequalities
$$
\operatorname{hilb}_{n\to\infty} \operatorname{E} I(X_1^n; X_{n+1}^{2n}) \le \operatorname{hilb}_{n\to\infty} \operatorname{E} I_a(X_1^n; X_{n+1}^{2n}) \le \operatorname{hilb}_{n\to\infty} \operatorname{E} \bigl[ G_{\mathrm{PPM}}(X_1^n) + \operatorname{card} V_{\mathrm{PPM}}(X_1^n) \bigr].
\tag{A51}
$$
Proof. 
By Theorem A7, we obtain
$$
\operatorname{hilb}_{n\to\infty} \operatorname{E} I_{\mathrm{PPM}}(X_1^n; X_{n+1}^{2n}) \le \operatorname{hilb}_{n\to\infty} \operatorname{E} \bigl[ G_{\mathrm{PPM}}(X_1^n) + \operatorname{card} V_{\mathrm{PPM}}(X_1^n) \bigr].
\tag{A52}
$$
In contrast, Theorems 6 and A1 and inequalities (28) and (46) yield
$$
\operatorname{hilb}_{n\to\infty} \bigl[ \operatorname{E} H(X_1^n) - hn \bigr] \le \operatorname{hilb}_{n\to\infty} \bigl[ \operatorname{E} H_a(X_1^n) - hn \bigr] \le \operatorname{hilb}_{n\to\infty} \bigl[ \operatorname{E} H_{\mathrm{PPM}}(X_1^n) - hn \bigr] \le \operatorname{hilb}_{n\to\infty} \operatorname{E} I_{\mathrm{PPM}}(X_1^n; X_{n+1}^{2n}).
\tag{A53}
$$
Hence, by equalities (A10) and (A11), we obtain inequality (A51). ☐
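Although the formal definition of the Hilberg exponent is given earlier in the paper and is not repeated here, the following sketch illustrates one simple empirical proxy: reading the exponent off as the least-squares slope of $\log S(n)$ against $\log n$ along a dyadic subsequence. The data are synthetic ($S(n) \approx n^{0.6}$) and serve only to show that the slope estimate recovers the planted exponent; nothing here is an estimate for natural language.

```python
import math
import random

def power_law_slope(samples):
    """Least-squares slope of log S versus log n over (n, S(n)) pairs."""
    xs = [math.log(n) for n, _ in samples]
    ys = [math.log(s) for _, s in samples]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

random.seed(0)
samples = [(2 ** k, (2 ** k) ** 0.6 * random.uniform(0.9, 1.1)) for k in range(8, 21)]
print(round(power_law_slope(samples), 3))   # close to the planted exponent 0.6
```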

Appendix C. Hilberg Exponents for Santa Fe Processes

We begin with a general observation for Hilberg exponents. In [36], this result was discussed only for the Hilberg exponent of mutual information.
Theorem A9
(cf. [36]). For a sequence of random variables $Y_n \ge 0$, we have
$$
\operatorname{hilb}_{n\to\infty} Y_n \le \operatorname{hilb}_{n\to\infty} \operatorname{E} Y_n \quad \text{almost surely}.
\tag{A54}
$$
Proof. 
Denote $\delta := \operatorname{hilb}_{n\to\infty} \operatorname{E} Y_n$. From the Markov inequality, we have
$$
\sum_{k=1}^{\infty} P\biggl( \frac{Y_{2^k}}{2^{k(\delta+\epsilon)}} \ge 1 \biggr) \le \sum_{k=1}^{\infty} \frac{\operatorname{E} Y_{2^k}}{2^{k(\delta+\epsilon)}} \le A + \sum_{k=1}^{\infty} \frac{2^{k(\delta+\epsilon/2)}}{2^{k(\delta+\epsilon)}} < \infty,
\tag{A55}
$$
where $A < \infty$. Hence, by the Borel–Cantelli lemma, we have $Y_{2^k} < 2^{k(\delta+\epsilon)}$ for all but finitely many $k$ almost surely. Since we can choose $\epsilon$ arbitrarily small, in particular, we obtain inequality (A54). ☐
In [29,36], it was shown that the Santa Fe process with exponent α satisfies equalities
$$
\operatorname{hilb}_{n\to\infty} I(X_{-n+1}^0; X_1^n) = 1/\alpha \quad \text{almost surely},
\tag{A56}
$$
$$
\operatorname{hilb}_{n\to\infty} \operatorname{E} I(X_{-n+1}^0; X_1^n) = 1/\alpha.
\tag{A57}
$$
We will now show a similar result, holding both almost surely and in expectation, for the number of probabilistic facts inferable from the Santa Fe process. Since Santa Fe processes are processes over an infinite alphabet, we cannot apply the theorem about facts and words.
Theorem A10.
For the Santa Fe process with exponent α, we have
$$
\operatorname{hilb}_{n\to\infty} \operatorname{card} U(X_1^n) = 1/\alpha \quad \text{almost surely},
\tag{A58}
$$
$$
\operatorname{hilb}_{n\to\infty} \operatorname{E} \operatorname{card} U(X_1^n) = 1/\alpha.
\tag{A59}
$$
Proof. 
First, we obtain
$$
P\bigl( \operatorname{card} U(X_1^n) \le m_n \bigr) \le \sum_{k=1}^{m_n} P\bigl( g(k; X_1^n) \ne Z_k \bigr) = \sum_{k=1}^{m_n} \bigl( 1 - P(K_i = k) \bigr)^n \le m_n \Bigl( 1 - \frac{m_n^{-\alpha}}{\zeta(\alpha)} \Bigr)^n \le m_n \exp\bigl( -n m_n^{-\alpha} / \zeta(\alpha) \bigr),
\tag{A60}
$$
where $\zeta(\alpha) := \sum_{k=1}^{\infty} k^{-\alpha}$ is the zeta function. Put now $m_n = n^{1/\alpha - \epsilon}$ for an $\epsilon > 0$. It is easy to observe that $\sum_{n=1}^{\infty} P\bigl( \operatorname{card} U(X_1^n) \le m_n \bigr) < \infty$. Hence, by the Borel–Cantelli lemma, we have inequality $\operatorname{card} U(X_1^n) > m_n$ for all but finitely many $n$ almost surely.
Second, we obtain
$$
P\bigl( \operatorname{card} U(X_1^n) \ge M_n \bigr) \le \frac{n!}{(n - M_n)!} \prod_{k=1}^{M_n} P(K_i = k) = \frac{n!}{(n - M_n)!} \, (M_n!)^{-\alpha} \, [\zeta(\alpha)]^{-M_n}.
\tag{A61}
$$
Recalling from Appendix B that $\log n! = n(\log n - 1) + \log^* n$, where $\log^* n \le \log(n+2)$ is subadditive, we obtain
$$
\log P\bigl( \operatorname{card} U(X_1^n) \ge M_n \bigr) \le n(\log n - 1) - (n - M_n)\bigl( \log(n - M_n) - 1 \bigr) - \alpha M_n (\log M_n - 1) + \log^* M_n - M_n \log \zeta(\alpha) \le M_n \bigl[ \log n - \alpha(\log M_n - 1) - \log \zeta(\alpha) \bigr] + \log^* M_n
\tag{A62}
$$
by $\log n \le \log(n - M_n) + M_n/(n - M_n)$. Put now $M_n = C n^{1/\alpha}$ for a $C > e [\zeta(\alpha)]^{-1/\alpha}$. We obtain
$$
P\bigl( \operatorname{card} U(X_1^n) \ge M_n \bigr) \le \bigl( C n^{1/\alpha} + 2 \bigr) \exp\bigl( -\delta n^{1/\alpha} \bigr),
\tag{A63}
$$
where $\delta > 0$, so $\sum_{n=1}^{\infty} P\bigl( \operatorname{card} U(X_1^n) \ge M_n \bigr) < \infty$. Hence, by the Borel–Cantelli lemma, we have inequality $\operatorname{card} U(X_1^n) < M_n$ for all but finitely many $n$ almost surely. Combining this result with the previous result yields equality (A58).
To obtain equality (A59), we invoke Theorem A9 for the lower bound, whereas, for the upper bound, we observe that
$$
\operatorname{E} \operatorname{card} U(X_1^n) \le M_n + n P\bigl( \operatorname{card} U(X_1^n) \ge M_n \bigr),
\tag{A64}
$$
where the last term decays according to the stretched exponential bound (A63) for $M_n = C n^{1/\alpha}$. ☐
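To complement the proof, here is a simulation sketch. It assumes, as the argument above suggests, that $\operatorname{card} U(X_1^n)$ for a Santa Fe process equals the number of distinct indices among $K_1, \dots, K_n$, where the $K_i$ are i.i.d. with $P(K_i = k) = k^{-\alpha}/\zeta(\alpha)$; the truncation of the index range and all names are simulation conveniences, not part of the paper's construction.

```python
import bisect
import random

def distinct_counts(n, alpha, kmax=10**6, seed=0):
    """Number of distinct K-values among K_1..K_i at dyadic prefix lengths i."""
    rng = random.Random(seed)
    weights = [k ** (-alpha) for k in range(1, kmax + 1)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc / total)
    seen, out, mark = set(), [], 1
    for i in range(1, n + 1):
        k = bisect.bisect_left(cdf, rng.random()) + 1   # inverse-transform sample of K_i
        seen.add(k)
        if i == mark:
            out.append((i, len(seen)))
            mark *= 2
    return out

alpha = 2.0
for n, u in distinct_counts(2 ** 16, alpha):
    print(n, u, round(u / n ** (1 / alpha), 3))   # third column should roughly stabilize
```

For $\alpha = 2$, the printed ratio $\operatorname{card} U(X_1^n)/n^{1/\alpha}$ should stay of roughly constant order as $n$ doubles, in agreement with (A58).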

References

  1. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
  2. Shannon, C. Prediction and entropy of printed English. Bell Syst. Tech. J. 1951, 30, 50–64.
  3. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: Hoboken, NJ, USA, 2006.
  4. Jelinek, F. Statistical Methods for Speech Recognition; The MIT Press: Cambridge, MA, USA, 1997.
  5. Manning, C.D.; Schütze, H. Foundations of Statistical Natural Language Processing; The MIT Press: Cambridge, MA, USA, 1999.
  6. Ryabko, B. Twice-universal coding. Probl. Inf. Transm. 1984, 20, 173–177.
  7. Cleary, J.G.; Witten, I.H. Data compression using adaptive coding and partial string matching. IEEE Trans. Commun. 1984, 32, 396–402.
  8. Zipf, G.K. The Psycho-Biology of Language: An Introduction to Dynamic Philology, 2nd ed.; The MIT Press: Cambridge, MA, USA, 1965.
  9. Mandelbrot, B. Structure formelle des textes et communication. Word 1954, 10, 1–27.
  10. Kuraszkiewicz, W.; Łukaszewicz, J. The number of different words as a function of text length. Pamięt. Lit. 1951, 42, 168–182. (In Polish)
  11. Guiraud, P. Les Caractères Statistiques du Vocabulaire; Presses Universitaires de France: Paris, France, 1954.
  12. Herdan, G. Quantitative Linguistics; Butterworths: London, UK, 1964.
  13. Heaps, H.S. Information Retrieval—Computational and Theoretical Aspects; Academic Press: Cambridge, MA, USA, 1978.
  14. Hilberg, W. Der bekannte Grenzwert der redundanzfreien Information in Texten—Eine Fehlinterpretation der Shannonschen Experimente? Frequenz 1990, 44, 243–248.
  15. Ebeling, W.; Nicolis, G. Entropy of Symbolic Sequences: The Role of Correlations. Europhys. Lett. 1991, 14, 191–196.
  16. Ebeling, W.; Pöschel, T. Entropy and long-range correlations in literary English. Europhys. Lett. 1994, 26, 241–246.
  17. Bialek, W.; Nemenman, I.; Tishby, N. Complexity through nonextensivity. Physica A 2001, 302, 89–99.
  18. Crutchfield, J.P.; Feldman, D.P. Regularities unseen, randomness observed: The entropy convergence hierarchy. Chaos 2003, 15, 25–54.
  19. Dębowski, Ł. On Hilberg's law and its links with Guiraud's law. J. Quant. Linguist. 2006, 13, 81–109.
  20. Wolff, J.G. Language acquisition and the discovery of phrase structure. Lang. Speech 1980, 23, 255–269.
  21. De Marcken, C.G. Unsupervised Language Acquisition. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1996.
  22. Kit, C.; Wilks, Y. Unsupervised Learning of Word Boundary with Description Length Gain. In Proceedings of the Computational Natural Language Learning ACL Workshop, Bergen; Osborne, M., Sang, E.T.K., Eds.; The Association for Computational Linguistics: Stroudsburg, PA, USA, 1999; pp. 1–6.
  23. Kieffer, J.C.; Yang, E. Grammar-based codes: A new class of universal lossless source codes. IEEE Trans. Inf. Theory 2000, 46, 737–754.
  24. Dębowski, Ł. A general definition of conditional information and its application to ergodic decomposition. Statist. Probab. Lett. 2009, 79, 1260–1268.
  25. Dębowski, Ł. On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts. IEEE Trans. Inf. Theory 2011, 57, 4589–4599.
  26. Charikar, M.; Lehman, E.; Lehman, A.; Liu, D.; Panigrahy, R.; Prabhakaran, M.; Sahai, A.; Shelat, A. The Smallest Grammar Problem. IEEE Trans. Inf. Theory 2005, 51, 2554–2576.
  27. Dębowski, Ł. Excess entropy in natural language: Present state and perspectives. Chaos 2011, 21, 037105.
  28. Dębowski, Ł. The Relaxed Hilberg Conjecture: A Review and New Experimental Support. J. Quantit. Linguist. 2015, 22, 311–337.
  29. Dębowski, Ł. Mixing, Ergodic, and Nonergodic Processes with Rapidly Growing Information between Blocks. IEEE Trans. Inf. Theory 2012, 58, 3392–3401.
  30. Billingsley, P. Probability and Measure; Wiley: Hoboken, NJ, USA, 1979.
  31. Gray, R.M. Probability, Random Processes, and Ergodic Properties; Springer: Berlin/Heidelberg, Germany, 2009.
  32. Breiman, L. Probability; SIAM: Philadelphia, PA, USA, 1992.
  33. Kallenberg, O. Foundations of Modern Probability; Springer: Berlin/Heidelberg, Germany, 1997.
  34. Yaglom, A.M.; Yaglom, I.M. Probability and Information. In Theory and Decision Library; Springer: Berlin/Heidelberg, Germany, 1983.
  35. Gray, R.M.; Davisson, L.D. The ergodic decomposition of stationary discrete random processes. IEEE Trans. Inf. Theory 1974, 20, 625–636.
  36. Dębowski, Ł. Hilberg Exponents: New Measures of Long Memory in the Process. IEEE Trans. Inf. Theory 2015, 61, 5716–5726.
  37. Li, M.; Vitányi, P.M.B. An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2008.
  38. Barron, A.R. Logically Smooth Density Estimation. Ph.D. Thesis, Stanford University, Stanford, CA, USA, 1985.
  39. Ryabko, B. Applications of Universal Source Coding to Statistical Analysis of Time Series. In Selected Topics in Information and Coding Theory; Series on Coding and Cryptology; Woungang, I., Misra, S., Misra, S.C., Eds.; World Scientific Publishing: Singapore, 2010.
  40. De Luca, A. On the combinatorics of finite words. Theor. Comput. Sci. 1999, 218, 13–39.
  41. Dębowski, Ł. Maximal Repetitions in Written Texts: Finite Energy Hypothesis vs. Strong Hilberg Conjecture. Entropy 2015, 17, 5903–5919.
Figure 1. The PPM order $G_{\mathrm{PPM}}(x_1^n)$ and the cardinality of the PPM vocabulary $\operatorname{card} V_{\mathrm{PPM}}(x_1^n)$ versus the input length $n$ for William Shakespeare's First Folio/35 Plays and a random permutation of the text's characters.
