Abstract
This paper studies integral relations between the relative entropy and the chi-squared divergence, which are two fundamental divergence measures in information theory and statistics, the implications of these relations, their information-theoretic applications, and some generalizations pertaining to the rich class of f-divergences. The applications studied in this paper refer to lossless compression, the method of types and large deviations, strong data–processing inequalities, bounds on contraction coefficients and maximal correlation, and the convergence rate to stationarity of a class of discrete-time Markov chains.
1. Introduction
The relative entropy (also known as the Kullback–Leibler divergence [1]) and the chi-squared divergence [2] are divergence measures which play a key role in information theory, statistics, learning, signal processing, and other theoretical and applied branches of mathematics. These divergence measures are fundamental in problems pertaining to source and channel coding, combinatorics and large deviations theory, goodness-of-fit and independence tests in statistics, expectation–maximization iterative algorithms for estimating a distribution from incomplete data, and other sorts of problems (the reader is referred to the tutorial paper by Csiszár and Shields [3]). They both belong to an important class of divergence measures, defined by means of convex functions f, and named f-divergences [4,5,6,7,8]. In addition to the relative entropy and the chi-squared divergence, this class unifies other useful divergence measures such as the total variation distance in functional analysis, and it is also closely related to the Rényi divergence, which generalizes the relative entropy [9,10]. In general, f-divergences (defined in Section 2) are attractive since they satisfy pleasing features such as the data–processing inequality, convexity, (semi)continuity, and duality properties, and they therefore find nice applications in information theory and statistics (see, e.g., [6,8,11,12]).
In this work, we study integral relations between the relative entropy and the chi-squared divergence, implications of these relations, and some of their information-theoretic applications. Some generalizations which apply to the class of f-divergences are also explored in detail. In this context, it should be noted that integral representations of general f-divergences, expressed as a function of the DeGroot statistical information [13], the $E_\gamma$-divergence (a parametric sub-class of f-divergences which generalizes the total variation distance [14] [p. 2314]), and the relative information spectrum, have been derived in [12] [Section 5], [15] [Section 7.B], and [16] [Section 3], respectively.
Applications in this paper are related to lossless source compression, large deviations by the method of types, and strong data–processing inequalities. The relevant background for each of these applications is provided to make the presentation self-contained.
We next outline the paper contributions and the structure of our manuscript.
1.1. Paper Contributions
This work starts by introducing integral relations between the relative entropy and the chi-squared divergence, together with some inequalities which relate these two divergences (see Theorem 1, its corollaries, and Proposition 1). It continues with a study of the implications and generalizations of these relations, pertaining to the rich class of f-divergences. One implication leads to a tight lower bound on the relative entropy between a pair of probability measures, expressed as a function of the means and variances under these measures (see Theorem 2). A second implication of Theorem 1 leads to an upper bound on a skew divergence (see Theorem 3 and Corollary 3). In view of the concavity of the Shannon entropy, the concavity deficit of the entropy function is defined as the non-negative difference between the entropy of a convex combination of distributions and the convex combination of the entropies of these distributions. Corollary 4 provides an upper bound on this deficit, expressed as a function of the pairwise relative entropies between all pairs of distributions. Theorem 4 provides a generalization of Theorem 1 to the class of f-divergences; it recursively constructs non-increasing sequences of f-divergences, and, as a consequence of Theorem 4 followed by the usage of polylogarithms, Corollary 5 provides a generalization of the useful integral relation in Theorem 1 between the relative entropy and the chi-squared divergence. Theorem 5 relates probabilities of sets to f-divergences, generalizing a known and useful result by Csiszár for the relative entropy. With respect to Theorem 1, the integral relation between the relative entropy and the chi-squared divergence was independently derived in [17], which also derived an alternative upper bound on the concavity deficit of the entropy as a function of total variation distances (differing from the bound in Corollary 4, which depends on pairwise relative entropies). The interested reader is referred to [17], with a preprint of the extended version in [18], and to [19], where the connections in Theorem 1 were originally discovered in the quantum setting.
The second part of this work studies information-theoretic applications of the above results, ordered from the relatively simple applications to the more involved ones. The first one is a bound on the redundancy of the Shannon code for universal lossless compression with discrete memoryless sources, used in conjunction with Theorem 3 (see Section 4.1). An application of Theorem 2 in the context of the method of types and large deviations analysis is then studied in Section 4.2, providing non-asymptotic bounds which lead to a closed-form expression in terms of the Lambert W-function (see Proposition 2). Strong data–processing inequalities with bounds on contraction coefficients of skew divergences are provided in Theorem 6, Corollary 7, and Proposition 3. Consequently, non-asymptotic bounds on the convergence to stationarity of time-homogeneous, irreducible, and reversible discrete-time Markov chains with finite state spaces are obtained by relying on our bounds on the contraction coefficients of skew divergences (see Theorem 7). The exact asymptotic convergence rate is also obtained in Corollary 8. Finally, a property of maximal correlations is derived in Proposition 4 as an application of our starting point: the integral relation between the relative entropy and the chi-squared divergence.
1.2. Paper Organization
This paper is structured as follows. Section 2 presents notation and preliminary material which is necessary for, or otherwise related to, the exposition of this work. Section 3 refers to the developed relations between divergences, and Section 4 studies information-theoretic applications. Proofs of the results in Section 3 and Section 4 (except for short proofs) are deferred to Section 5.
2. Preliminaries and Notation
This section provides definitions of divergence measures which are used in this paper, and it also provides relevant notation.
Definition 1.
[12] [p. 4398] Let P and Q be probability measures, let μ be a dominating measure of P and Q (i.e., $P, Q \ll \mu$), and let $p := \frac{\mathrm{d}P}{\mathrm{d}\mu}$ and $q := \frac{\mathrm{d}Q}{\mathrm{d}\mu}$ be the densities of P and Q with respect to μ. The f-divergence from P to Q is given by
$$ D_f(P\|Q) := \int q\, f\Bigl(\frac{p}{q}\Bigr)\, \mathrm{d}\mu, \tag{1} $$
where
$$ f(0) := \lim_{t \to 0^+} f(t), \qquad 0\, f\Bigl(\frac{0}{0}\Bigr) := 0, \qquad 0\, f\Bigl(\frac{a}{0}\Bigr) := \lim_{t \to 0^+} t\, f\Bigl(\frac{a}{t}\Bigr) = a \lim_{u \to \infty} \frac{f(u)}{u}, \quad a > 0. $$
It should be noted that the right side of (1) does not depend on the dominating measure μ.
Throughout the paper, we denote by $\mathbb{1}\{\cdot\}$ the indicator function; it is equal to 1 if the relation inside the braces is true, and it is equal to 0 otherwise. Unless indicated explicitly, logarithms have an arbitrary common base (that is larger than 1), and $\exp(\cdot)$ indicates the inverse function of the logarithm with that base.
Definition 2.
[1] The relative entropy is the f-divergence with $f(t) = t \log t$ for $t > 0$, i.e.,
$$ D(P\|Q) := \int p \, \log\frac{p}{q} \, \mathrm{d}\mu. $$
Definition 3.
The total variation distance between probability measures P and Q is the f-divergence from P to Q with $f(t) = |t - 1|$ for all $t \ge 0$. It is a symmetric f-divergence, denoted by $|P - Q|$, which is given by
$$ |P - Q| := \int |p - q| \, \mathrm{d}\mu. $$
Definition 4.
[2] The chi-squared divergence from P to Q is defined to be the f-divergence in (1) with $f(t) = (t-1)^2$ or $f(t) = t^2 - 1$ for all $t > 0$:
$$ \chi^2(P\|Q) := \int \frac{(p - q)^2}{q} \, \mathrm{d}\mu = \int \frac{p^2}{q} \, \mathrm{d}\mu - 1. $$
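To make Definitions 1–4 concrete on a finite alphabet, the following Python sketch evaluates an f-divergence directly from Definition 1 and checks that the choices of f recalled above reproduce the familiar sums for the relative entropy, the total variation distance, and the chi-squared divergence. The routine names and the strictly positive test vectors are ours, introduced only for illustration.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P||Q) = sum_x q(x) f(p(x)/q(x)) on a finite alphabet (all entries of q > 0 here)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(q * f(p / q)))

# choices of f for the relative entropy (in nats), the total variation distance,
# and the chi-squared divergence
f_kl  = lambda t: t * np.log(t)
f_tv  = lambda t: np.abs(t - 1.0)
f_chi = lambda t: (t - 1.0) ** 2

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.4, 0.4])

assert np.isclose(f_divergence(p, q, f_kl),  np.sum(p * np.log(p / q)))   # relative entropy
assert np.isclose(f_divergence(p, q, f_tv),  np.sum(np.abs(p - q)))       # |P - Q|
assert np.isclose(f_divergence(p, q, f_chi), np.sum((p - q) ** 2 / q))    # chi-squared
```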
The Rényi divergence, a generalization of the relative entropy, was introduced by Rényi [10] in the special case of finite alphabets. Its general definition is given as follows (see, e.g., [9]).
Definition 5.
[10] Let P and Q be probability measures on dominated by μ, and let their densities be respectively denoted by and . The Rényi divergence of order is defined as follows:
- If $\alpha \in (0,1) \cup (1, \infty)$, then
where in (10), and (11) holds if is a discrete set.
- By the continuous extension of ,
The second-order Rényi divergence and the chi-squared divergence are related as follows:
$$ D_2(P\|Q) = \log\bigl(1 + \chi^2(P\|Q)\bigr), \tag{15} $$
and the relative entropy and the chi-squared divergence satisfy (see, e.g., [20] [Theorem 5])
$$ D(P\|Q) \le \log\bigl(1 + \chi^2(P\|Q)\bigr). \tag{16} $$
Inequality (16) readily follows from (13) and (15), since the Rényi divergence is monotonically increasing in its order (see [9] [Theorem 3]). A tightened version of (16), which introduces an improved and locally tight upper bound on the relative entropy in terms of the chi-squared divergence, is given in [15] [Theorem 20]. Another sharpened version of (16) is derived in [15] [Theorem 11] under the assumption of a bounded relative information. Furthermore, under the latter assumption, tight upper and lower bounds on the ratio are obtained in [15] [(169)].
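The identity and the inequality recalled in (15) and (16) are easy to check numerically. The sketch below assumes they read $D_2(P\|Q) = \log(1 + \chi^2(P\|Q))$ and $D(P\|Q) \le \log(1 + \chi^2(P\|Q))$, works in nats, and verifies both on randomly drawn distributions; the sampling scheme is ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl(p, q):     return float(np.sum(p * np.log(p / q)))     # relative entropy, nats
def chi2(p, q):   return float(np.sum((p - q) ** 2 / q))      # chi-squared divergence
def renyi2(p, q): return float(np.log(np.sum(p ** 2 / q)))    # Renyi divergence of order 2, nats

for _ in range(1000):
    p, q = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5))
    assert np.isclose(renyi2(p, q), np.log1p(chi2(p, q)))      # (15): D_2 = log(1 + chi^2)
    assert kl(p, q) <= np.log1p(chi2(p, q)) + 1e-12            # (16): D <= log(1 + chi^2)
```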
Definition 6.
[21] The Györfi–Vajda divergence of order $s \in [0,1]$ is an f-divergence with
$$ f_s(t) = \frac{(t-1)^2}{s + (1-s)\,t}, \qquad t \ge 0. $$
The Vincze–Le Cam distance (also known as the triangular discrimination) ([22,23]) is a special case with $s = \tfrac{1}{2}$.
In view of (1), (9) and (17), it can be verified that the Györfi–Vajda divergence is related to the chi-squared divergence as follows:
Hence,
3. Relations between Divergences
We introduce in this section results on the relations between the relative entropy and the chi-squared divergence, their implications, and generalizations. Information–theoretic applications are studied in the next section.
3.1. Relations between the Relative Entropy and the Chi-Squared Divergence
The following result relates the relative entropy and the chi-squared divergence, which are two fundamental divergence measures in information theory and statistics. This result was recently obtained in an equivalent form in [17] [(12)] (it is noted that this identity was also independently derived by the coauthors in two separate unpublished works [24] [(16)] and [25]). It should be noted that these connections between divergences were originally discovered, in the quantum setting, in [19] [Theorem 6]. Beyond serving as an interesting relation between these two fundamental divergence measures, it is introduced here for the following reasons:
- (a)
- New consequences and applications of it are obtained, including new shorter proofs of some known results;
- (b)
- An interesting extension provides new relations between f-divergences (see Section 3.3).
Theorem 1.
Let P and Q be probability measures defined on a measurable space , and let
be the convex combination of P and Q. Then, for all ,
Proof.
See Section 5.1. □
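Before turning to its consequences, the integral relation can be checked numerically at the endpoint λ = 1 (Corollary 1 below). The sketch assumes the specialization reads $D(P\|Q) = \int_0^1 \chi^2\bigl(P \,\|\, (1-\lambda)P + \lambda Q\bigr)\, \lambda^{-1}\, \mathrm{d}\lambda$ in nats, which is our reading of (21) and (24) and should be checked against those equations; it compares a numerical quadrature of the right side with a direct evaluation of the left side on arbitrary test distributions.

```python
import numpy as np
from scipy.integrate import quad

def kl(p, q):   return float(np.sum(p * np.log(p / q)))   # nats
def chi2(p, q): return float(np.sum((p - q) ** 2 / q))

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.2, 0.3, 0.5])

def integrand(lam):
    # chi^2(P || R_lambda) / lambda with R_lambda = (1 - lambda) P + lambda Q
    if lam == 0.0:
        return 0.0      # the integrand extends continuously by 0 at lambda = 0
    return chi2(p, (1.0 - lam) * p + lam * q) / lam

value, _ = quad(integrand, 0.0, 1.0)
print(value, kl(p, q))   # the two numbers agree up to quadrature error
```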
A specialization of Theorem 1 by letting λ = 1 gives the following identities.
Corollary 1.
Remark 1.
The substitution transforms (24) to [26] [Equation (31)], i.e.,
In view of (18) and (21), an equivalent form of (22) and (24) is given as follows:
Corollary 2.
For , let be given in (17). Then,
By Corollary 1, we obtain original and simple proofs of both new and known f-divergence inequalities.
Proposition 1.
(f-divergence inequalities).
- (a)
- Pinsker’s inequality:
- (b)
- Furthermore, let be a sequence of probability measures that is defined on a measurable space , and which converges to a probability measure P in the sense that
with . Then, (30) is locally tight in the sense that both of its sides converge to 0, and
- (c)
- For all ,
Moreover, under the assumption in (31), for all
- (d)
- [15] [Theorem 2]:
Proof.
See Section 5.2. □
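As a minimal numerical illustration of Item (a), the following sketch checks Pinsker's inequality in its standard form $D(P\|Q) \ge \tfrac{1}{2}\,|P-Q|^2$ (in nats, with $|P-Q| = \sum |p - q|$ as in Definition 3) on randomly drawn distributions; the normalization assumed here should be checked against the displayed inequality of Item (a), and the sampling scheme is ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def kl(p, q): return float(np.sum(p * np.log(p / q)))   # nats

for _ in range(1000):
    p, q = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
    tv = float(np.sum(np.abs(p - q)))                    # |P - Q|, taking values in [0, 2]
    assert kl(p, q) >= 0.5 * tv ** 2 - 1e-12             # Pinsker's inequality (standard form)
```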
Remark 2.
Inequality (30) is locally tight in the sense that (31) yields (32). This property, however, is not satisfied by (16) since the assumption in (31) implies that
Remark 3.
Inequality (30) readily yields
which is proved by a different approach in [27] [Proposition 4]. It is further shown in [15] [Theorem 2 b)] that
where the supremum is over and .
3.2. Implications of Theorem 1
We next provide two implications of Theorem 1. The first implication, which relies on the Hammersley–Chapman–Robbins (HCR) bound for the chi-squared divergence [28,29], gives the following tight lower bound on the relative entropy as a function of the means and variances under P and Q.
Theorem 2.
Let P and Q be probability measures defined on the measurable space , where is the real line and is the Borel σ–algebra of subsets of . Let , , , and denote the expected values and variances of and , i.e.,
- (a)
- If , then
where , for , denotes the binary relative entropy (with the convention that ), and
- (b)
- The lower bound on the right side of (40) is attained for P and Q which are defined on the two-element set , and
with r and s in (41) and (42), respectively, and for
- (c)
- If and and are selected arbitrarily, then
where the infimum on the left side of (48) is taken over all P and Q which satisfy (39).
Proof.
See Section 5.3. □
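The proof combines the integral relation of Theorem 1 with the Hammersley–Chapman–Robbins bound, which in its classical form states that $\chi^2(P\|Q) \ge (m_P - m_Q)^2 / \sigma_Q^2$ for probability measures on the real line. The following sketch, on a small discrete example of our choosing, checks this χ² lower bound numerically; the refined lower bound (40) on the relative entropy, which involves the binary relative entropy, is not reproduced here.

```python
import numpy as np

x = np.array([-1.0, 0.0, 2.0])      # common finite support on the real line
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])

m_p, m_q = np.dot(p, x), np.dot(q, x)
var_q = np.dot(q, (x - m_q) ** 2)

chi2 = float(np.sum((p - q) ** 2 / q))
hcr  = (m_p - m_q) ** 2 / var_q     # Hammersley-Chapman-Robbins lower bound on chi^2(P||Q)
assert chi2 >= hcr - 1e-12
print(chi2, hcr)
```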
Remark 4.
Consider the case of the non-equal means in Items (a) and (b) of Theorem 2. If these means are fixed, then the infimum of is zero by choosing arbitrarily large equal variances. Suppose now that the non-equal means and are fixed, as well as one of the variances (either or ). Numerical experimentation shows that, in this case, the achievable lower bound in (40) is monotonically decreasing as a function of the other variance, and it tends to zero as we let the free variance tend to infinity. This asymptotic convergence to zero can be justified by assuming, for example, that , and are fixed, and (the other cases can be justified in a similar way). Then, it can be verified from (41)–(45) that
which implies that as we let . The infimum of the relative entropy is therefore equal to zero since the probability measures P and Q in (46) and (47), which are defined on a two-element set and attain the lower bound on the relative entropy under the constraints in (39), have a vanishing relative entropy in this asymptotic case.
Remark 5.
The proof of Item (c) in Theorem 2 suggests explicit constructions of sequences of pairs of probability measures such that
- (a)
- The means under and are both equal to m (independently of n);
- (b)
- The variance under is equal to , and the variance under is equal to (independently of n);
- (c)
- The relative entropy vanishes as we let .
This yields in particular (48).
A second consequence of Theorem 1 gives the following result. Its first part holds due to the concavity of (see [30] [Problem 4.2]). The second part is new, and its proof relies on Theorem 1. As an educational note, we provide an alternative proof of the first part by relying on Theorem 1.
Theorem 3.
Let , and be given by
Then, for all ,
with an equality if or . Moreover, F is monotonically increasing, differentiable, and it satisfies
so the limit in (53) is twice as large as the lower bound on this limit that follows from the right side of (52).
Proof.
See Section 5.4. □
Remark 6.
By the convexity of the relative entropy, it follows that for all . It can be verified, however, that the inequality holds for all and . Letting implies that the upper bound on on the right side of (51) is tighter than or equal to the upper bound (with an equality if and only if either or ).
Corollary 3.
Let , with , be probability measures defined on a measurable space , and let be a sequence of non-negative numbers that sum to 1. Then, for all ,
Proof.
For an arbitrary , apply the upper bound on the right side of (51) with , and . The right side of (54) is obtained from (51) by invoking the convexity of the relative entropy, which gives . □
The next result provides an upper bound on the non-negative difference between the entropy of a convex combination of distributions and the respective convex combination of the individual entropies (also termed the concavity deficit of the entropy function in [17] [Section 3]).
Corollary 4.
Let , with , be probability measures defined on a measurable space , and let be a sequence of non-negative numbers that sum to 1. Then,
Proof.
The lower bound holds due to the concavity of the entropy function. The upper bound readily follows from Corollary 3, and the identity
□
Remark 7.
The upper bound in (55) refines the known bound (see, e.g., [31] [Lemma 2.2])
by relying on all the pairwise relative entropies between the individual distributions . Another refinement of (57), expressed in terms of total variation distances, has been recently provided in [17] [Theorem 3.1].
3.3. Monotonic Sequences of f-Divergences and an Extension of Theorem 1
The present subsection generalizes Theorem 1, and it also provides relations between f-divergences which are defined in a recursive way.
Theorem 4.
Let P and Q be probability measures defined on a measurable space . Let , for , be the convex combination of P and Q as in (21). Let be a convex function with , and let be a sequence of functions that are defined on by the recursive equation
Then,
- (a)
- is a non-increasing (and non-negative) sequence of f-divergences.
- (b)
- For all and ,
Proof.
See Section 5.5. □
We next use the polylogarithm functions, which satisfy the recursive equation [32] [Equation (7.2)]:
This gives , and so on, which are real–valued and finite for .
Corollary 5.
Let
Then, (59) holds for all and . Furthermore, setting in (59) yields (22) as a special case.
Proof.
See Section 5.6. □
3.4. On Probabilities and f-Divergences
The following result relates probabilities of sets to f-divergences.
Theorem 5.
Let be a probability space, and let be a measurable set with . Define the conditional probability measure
Let be an arbitrary convex function with , and assume (by continuous extension of f at zero) that . Furthermore, let be the convex function which is given by
Then,
Proof.
See Section 5.7. □
Connections of probabilities to the relative entropy, and to the chi-squared divergence, are next exemplified as special cases of Theorem 5.
Corollary 6.
In the setting of Theorem 5,
so (16) is satisfied in this case with equality. More generally, for all ,
Proof.
See Section 5.7. □
Remark 8.
In spite of its simplicity, (65) proved very useful in the seminal work by Marton on transportation–cost inequalities, proving concentration of measures by information-theoretic tools [33,34] (see also [35] [Chapter 8] and [36] [Chapter 3]). As a side note, the simple identity (65) was apparently first explicitly used by Csiszár (see [37] [Equation (4.13)]).
4. Applications
This section provides applications of our results in Section 3. These include universal lossless compression, method of types and large deviations, and strong data–processing inequalities (SDPIs).
4.1. Application of Corollary 3: Shannon Code for Universal Lossless Compression
Consider discrete, memoryless, and stationary sources with probability mass functions , and assume that the symbols are emitted by one of these sources with an a priori probability for source no. i, where are positive and sum to 1.
For lossless data compression by a universal source code, suppose that a single source code is designed with respect to the average probability mass function .
Assume that the designer uses a Shannon code, where the code assignment for a symbol is of length bits (logarithms are on base 2). Due to the mismatch in the source distribution, the average codeword length satisfies (see [38] [Proposition 3.B])
The fractional penalty in the average codeword length, denoted by , is defined as the ratio between the penalty in the average codeword length caused by the source mismatch and the average codeword length under perfect matching. From (68), it follows that
We next rely on Corollary 3 to obtain an upper bound on which is expressed as a function of the relative entropies for all in . This is useful if, e.g., the m relative entropies on the left and right sides of (69) do not admit closed-form expressions, in contrast to the relative entropies for . We next exemplify this case.
For , let be a Poisson distribution with parameter . For all , the relative entropy from to admits the closed-form expression
From (54) and (70), it follows that
where
The entropy of a Poisson distribution, with parameter , is given by the integral representation [39,40,41]
Combining (69), (71) and (74) finally gives an upper bound on in the considered setup.
Example 1.
Consider five discrete memoryless sources where the probability mass function of source no. i is given by with . Suppose that the symbols are emitted from one of the sources with equal probability, so . Let be the average probability mass function of the five sources. The term , which appears in the numerators of the upper and lower bounds on ν (see (69)), does not lend itself to a closed-form expression, and it is not even an easy task to calculate it numerically due to the need to compute an infinite series which involves factorials. We therefore apply the closed-form upper bound in (71) to get that bits, whereas the upper bound which follows from the convexity of the relative entropy (i.e., ) is equal to 1.99 bits (both upper bounds are smaller than the trivial bound bits). From (69), (74), and the stronger upper bound on , the improved upper bound on ν is equal to (as compared to a looser upper bound of , which follows from (69), (74), and the looser upper bound on that is equal to 1.99 bits).
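A sketch of the computations behind Example 1, with hypothetical Poisson parameters (the values used in the example are not repeated here): the pairwise relative entropies are evaluated via the standard closed form $D(\mathrm{Poi}(\lambda_i)\|\mathrm{Poi}(\lambda_j)) = \lambda_j - \lambda_i + \lambda_i \ln(\lambda_i/\lambda_j)$ in nats (presumably what (70) states), whereas the relative entropies to the mixture have no closed form and are approximated by truncating the infinite series.

```python
import numpy as np
from scipy.stats import poisson

def kl_poisson(lam_i, lam_j):
    """Closed form of D(Poi(lam_i) || Poi(lam_j)) in nats."""
    return lam_j - lam_i + lam_i * np.log(lam_i / lam_j)

lams = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical Poisson parameters
alpha = np.full(5, 0.2)                      # equiprobable sources

# D(P_i || Qbar) has no closed form; truncate the series (the tail beyond k = 60 is negligible here)
k = np.arange(0, 61)
pmfs = np.array([poisson.pmf(k, lam) for lam in lams])
qbar = alpha @ pmfs
kl_to_mixture = [float(np.sum(pmfs[i] * np.log(pmfs[i] / qbar))) for i in range(5)]

pairwise = [[kl_poisson(li, lj) for lj in lams] for li in lams]
print(np.round(kl_to_mixture, 4))
print(np.round(pairwise, 4))
```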
4.2. Application of Theorem 2 in the Context of the Method of Types and Large Deviations Theory
Let be a sequence of i.i.d. random variables with , where Q is a probability measure defined on a finite set , and for all . Let be a set of probability measures on such that , and suppose that the closure of coincides with the closure of its interior. Then, by Sanov’s theorem (see, e.g., [42] [Theorem 11.4.1] and [43] [Theorem 3.3]), the probability that the empirical distribution belongs to vanishes exponentially at the rate
Furthermore, for finite n, the method of types yields the following upper bound on this rare event:
whose exponential decay rate coincides with the exact asymptotic result in (75).
Suppose that Q is not fully known, but its mean and variance are available. Let and be fixed, and let be the set of all probability measures P, defined on the finite set , with mean and variance , where . Hence, coincides with the closure of its interior, and .
The lower bound on the relative entropy in Theorem 2, used in conjunction with the upper bound in (77), can serve to obtain an upper bound on the probability of the event that the empirical distribution of belongs to the set , regardless of the uncertainty in Q. This gives
where
and, for fixed , the parameters r and s are given in (41) and (42), respectively.
Standard algebraic manipulations that rely on (78) lead to the following result, which is expressed as a function of the Lambert–W function [44]. This function, which finds applications in various engineering and scientific fields, is a standard built–in function in mathematical software tools such as Mathematica, Matlab, and Maple. Applications of the Lambert–W function in information theory and coding are briefly surveyed in [45].
Proposition 2.
For , let denote the minimal value of such that the upper bound on the right side of (78) does not exceed . Then, admits the following closed-form expression:
with
and on the right side of (80) denotes the secondary real–valued branch of the Lambert–W function (i.e., where is the inverse function of ).
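A sketch of the kind of computation that Proposition 2 solves in closed form, under the assumption that the upper bound in (78) has the standard method-of-types shape $(n+1)^{d}\, e^{-nc}$ for an alphabet size d and a positive constant c: the smallest n for which this bound drops below a target ε can be written via the secondary real branch $W_{-1}$ of the Lambert W function, and is cross-checked below against a direct search. The constants d, c, and ε are hypothetical placeholders.

```python
import numpy as np
from scipy.special import lambertw

def n_min_closed_form(d, c, eps):
    """Smallest integer n with (n+1)**d * exp(-n*c) <= eps, via the W_{-1} branch."""
    a = np.exp((np.log(eps) - c) / d)
    m = -(d / c) * lambertw(-(c / d) * a, k=-1).real   # m = n + 1 solves the equality
    return int(np.ceil(m - 1.0))

def n_min_search(d, c, eps):
    n = 0
    while (n + 1) ** d * np.exp(-n * c) > eps:
        n += 1
    return n

d, c, eps = 3, 0.01, 1e-3     # hypothetical alphabet size, exponent, and target probability
print(n_min_closed_form(d, c, eps), n_min_search(d, c, eps))   # the two values coincide
```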
Example 2.
Let Q be an arbitrary probability measure, defined on a finite set , with mean and variance . Let be the set of all probability measures P, defined on , whose mean and variance lie in the intervals and , respectively. Suppose that it is required that, for all probability measures Q as above, the probability that the empirical distribution of the i.i.d. sequence is included in the set is at most . We rely here on the upper bound in (78), and impose the stronger condition that it does not exceed ε. By this approach, it is obtained numerically from (79) that nats. We next examine two cases:
- (i)
- If , then it follows from (80) that .
- (ii)
- Consider a richer alphabet size of the i.i.d. samples where, e.g., . By relying on the same universal lower bound , which holds independently of the value of ( can possibly be an infinite set), it follows from (80) that is the minimal value such that the upper bound in (78) does not exceed .
We close this discussion with a numerical experimentation of the lower bound on the relative entropy in Theorem 2, comparing this attainable lower bound (see Item (b) of Theorem 2) with the following closed-form expressions for relative entropies:
- (a)
- The relative entropy between real-valued Gaussian distributions is given by
- (b)
- Let denote a random variable which is exponentially distributed with mean ; its probability density function is given by
Then, for and ,
In this case, the means under P and Q are and , respectively, and the variances are and . Hence, for obtaining the required means and variances, set
Example 3.
We compare numerically the attainable lower bound on the relative entropy, as it is given in (40), with the two relative entropies in (82) and (84):
- (i)
- If , then the lower bound in (40) is equal to 0.521 nats, and the two relative entropies in (82) and (84) are equal to 0.625 and 1.118 nats, respectively.
- (ii)
- If , then the lower bound in (40) is equal to 2.332 nats, and the two relative entropies in (82) and (84) are equal to 5.722 and 3.701 nats, respectively.
4.3. Strong Data–Processing Inequalities and Maximal Correlation
Information contraction is a fundamental concept in information theory. The contraction of f-divergences through channels is captured by data–processing inequalities, which can be further tightened by deriving SDPIs with channel-dependent or source–channel-dependent contraction coefficients (see, e.g., [26,46,47,48,49,50,51,52]).
We next provide necessary definitions which are relevant for the presentation in this subsection.
Definition 7.
Let be a probability distribution which is defined on a set , and that is not a point mass, and let be a stochastic transformation. The contraction coefficient for f-divergences is defined as
where, for all ,
The notation in (87) and (88) is consistent with the standard notation used in information theory (see, e.g., the first displayed equation after (3.2) in [53]).
The derivation of good upper bounds on contraction coefficients for f-divergences, which are strictly smaller than 1, leads to SDPIs. These inequalities find their applications, e.g., in studying the exponential convergence rate of an irreducible, time-homogeneous, and reversible discrete-time Markov chain to its unique invariant distribution over its state space (see, e.g., [49] [Section 2.4.3] and [50] [Section 2]). This is in sharp contrast to DPIs, which by themselves do not yield convergence to stationarity at any rate. We return to this point later in this subsection, and determine the exact convergence rate to stationarity under two parametric families of f-divergences.
We next rely on Theorem 1 to obtain upper bounds on the contraction coefficients for the following f-divergences.
Definition 8.
For , the α-skew K-divergence is given by
and, for , let
with the convention that (by a continuous extension at in (89)). These divergence measures are specialized to the relative entropies:
and is the Jensen–Shannon divergence [54,55,56] (also known as the capacitory discrimination [57]):
It can be verified that the divergence measures in (89) and (90) are f-divergences:
with
where and are strictly convex functions on , and vanish at 1.
Remark 9.
The α-skew K-divergence in (89) is considered in [55] and [58] [(13)] (including pointers in the latter paper to its utility). The divergence in (90) is akin to Lin’s measure in [55] [(4.1)], the asymmetric α-skew Jensen–Shannon divergence in [58] [(11)–(12)], the symmetric α-skew Jensen–Shannon divergence in [58] [(16)], and divergence measures in [59] which involve arithmetic and geometric means of two probability distributions. Properties and applications of quantum skew divergences are studied in [19] and references therein.
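To fix ideas, the sketch below implements the two best-known members of this family in forms we are certain of, namely Lin's K-divergence $D(P\|\tfrac12 P + \tfrac12 Q)$ and the Jensen–Shannon divergence, together with a generic α-skewed variant $D(P\|(1-\alpha)P + \alpha Q)$; the exact parameterizations used in (89) and (90) should be checked against those equations.

```python
import numpy as np

def kl(p, q): return float(np.sum(p * np.log(p / q)))     # nats

def skew_k(p, q, alpha):
    """Alpha-skewed K-type divergence D(P || (1 - alpha) P + alpha Q) -- our reading of (89)."""
    return kl(p, (1.0 - alpha) * p + alpha * q)

def jensen_shannon(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.45, 0.45])

print(skew_k(p, q, 0.5))       # Lin's K-divergence D(P || (P + Q)/2)
print(jensen_shannon(p, q))    # symmetric, and bounded by log 2
print(skew_k(p, q, 1.0))       # reduces to the relative entropy D(P || Q)
```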
Theorem 6.
The f-divergences in (89) and (90) satisfy the following integral identities, which are expressed in terms of the Györfi–Vajda divergence in (17):
with
Moreover, the contraction coefficients for these f-divergences are related as follows:
where denotes the contraction coefficient for the chi-squared divergence.
Proof.
See Section 5.8. □
Remark 10.
The upper bounds on the contraction coefficients for the parametric f-divergences in (89) and (90) generalize the upper bound on the contraction coefficient for the relative entropy in [51] [Theorem III.6] (recall that ), so the upper bounds in Theorem 6 are specialized to the latter bound at .
Corollary 7.
Let
where the supremum on the right side is over all probability measures defined on . Then,
Proof.
See Section 5.9. □
Example 4.
Let , and let correspond to a binary symmetric channel (BSC) with crossover probability ε. Then, . The upper and lower bounds on and in (106) and (107) match for all α, and they are all equal to .
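The value in Example 4 can be reproduced numerically from the classical identification of the chi-squared contraction coefficient with the squared maximal correlation (recalled in (119) below): for the BSC with crossover probability ε and equiprobable input, the maximal correlation equals $|1 - 2\varepsilon|$, so the contraction coefficient equals $(1 - 2\varepsilon)^2$. The following sketch, with a hypothetical ε, checks this by computing the second-largest singular value of the normalized joint-distribution matrix.

```python
import numpy as np

eps = 0.11                                  # hypothetical crossover probability
px = np.array([0.5, 0.5])                   # equiprobable input
W = np.array([[1 - eps, eps],
              [eps, 1 - eps]])              # BSC transition matrix

pxy = np.diag(px) @ W                       # joint distribution P_XY
py = pxy.sum(axis=0)

# maximal correlation = second-largest singular value of P_XY(x,y) / sqrt(P_X(x) P_Y(y))
B = pxy / np.sqrt(np.outer(px, py))
rho_max = np.linalg.svd(B, compute_uv=False)[1]

assert np.isclose(rho_max ** 2, (1 - 2 * eps) ** 2)   # chi-squared contraction coefficient
```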
The upper bound on the contraction coefficients in Corollary 7 is given by , whereas the lower bound is given by , which depends on the input distribution . We next provide alternative upper bounds on the contraction coefficients for the considered (parametric) f-divergences, which, similarly to the lower bound, scale like . Although the upper bound in Corollary 7 may be tighter in some cases than the alternative upper bounds which are next presented in Proposition 3 (in fact, the former upper bound may even be achieved with equality, as in Example 4), the bounds in Proposition 3 are used shortly to determine the exponential rate of the convergence to stationarity of a class of Markov chains.
Proposition 3.
For all ,
where denotes the minimal positive mass of the input distribution .
Proof.
See Section 5.10. □
Remark 11.
In view of (92), at , (108) and (109) specialize to an upper bound on the contraction coefficient of the relative entropy (KL divergence) as a function of the contraction coefficient of the chi-squared divergence. In this special case, both (108) and (109) give
which then coincides with [48] [Theorem 10].
We next apply Proposition 3 to study the convergence rate to stationarity of Markov chains with respect to the f-divergences introduced in Definition 8. The next result follows [49] [Section 2.4.3], and it provides a generalization of the result there.
Theorem 7.
Consider a time-homogeneous, irreducible, and reversible discrete-time Markov chain with a finite state space , let W be its probability transition matrix, and be its unique stationary distribution (reversibility means that for all ). Let be an initial probability distribution over . Then, for all and ,
and the contraction coefficients on the right sides of (111) and (112) scale like the n-th power of the contraction coefficient for the chi-squared divergence as follows:
Proof.
Inequalities (111) and (112) hold since , for all , and due to Definition 7 and (95) and (96). Inequalities (113) and (114) hold by Proposition 3, and due to the reversibility of the Markov chain which implies that (see [49] [Equation (2.92)])
□
In view of (113) and (114), Theorem 7 readily gives the following result on the exponential decay rate of the upper bounds on the divergences on the left sides of (111) and (112).
Corollary 8.
For all ,
Remark 12.
Theorem 7 and Corollary 8 generalize the results in [49] [Section 2.4.3], which follow as a special case at (see (92)).
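For a reversible chain, the chi-squared contraction coefficient with respect to the stationary distribution equals the squared second-largest (in absolute value) eigenvalue of the transition matrix, which is the spectral quantity behind the geometric decay quantified by Theorem 7 and Corollary 8. The sketch below builds a small reversible (birth–death) chain of our own choosing, computes this quantity, and checks that $\chi^2(P_0 W^n\|\pi)$ decays at least as fast as its n-th power; this is the standard spectral bound, stated here only as an illustration.

```python
import numpy as np

def chi2(p, q): return float(np.sum((p - q) ** 2 / q))

# a small birth-death chain (birth-death chains are reversible)
W = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.6, 0.4]])

# stationary distribution: normalized left eigenvector of W for the eigenvalue 1
evals, evecs = np.linalg.eig(W.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# chi-squared contraction coefficient of a reversible chain: squared second-largest |eigenvalue|
lams = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
rate = lams[1] ** 2

p0 = np.array([1.0, 0.0, 0.0])
pn = p0.copy()
for n in range(1, 20):
    pn = pn @ W
    assert chi2(pn, pi) <= rate ** n * chi2(p0, pi) + 1e-12
```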
We end this subsection by considering maximal correlations, which are closely related to the contraction coefficient for the chi-squared divergence.
Definition 9.
The maximal correlation between two random variables X and Y is defined as
$$ \rho_{\mathrm{m}}(X;Y) := \sup_{f,\, g} \, \mathbb{E}\bigl[f(X)\, g(Y)\bigr], $$
where the supremum is taken over all real-valued functions f and g such that
$$ \mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0, \qquad \mathbb{E}[f^2(X)] \le 1, \qquad \mathbb{E}[g^2(Y)] \le 1. $$
It is well-known [60] that, if and , then the contraction coefficient for the chi-squared divergence is equal to the square of the maximal correlation between the random variables X and Y, i.e.,
A simple application of Corollary 1 and (119) gives the following result.
Proposition 4.
In the setting of Definition 7, for , let and with and . Then, the following inequality holds:
Proof.
See Section 5.11. □
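Since (119) identifies the chi-squared contraction coefficient with the squared maximal correlation, $\rho_{\mathrm{m}}(X;Y)$ is computable by plain linear algebra: it equals the second-largest singular value of the matrix with entries $P_{XY}(x,y)/\sqrt{P_X(x) P_Y(y)}$ (the classical Hirschfeld–Gebelein–Rényi characterization, not restated in this paper). The sketch below, on a randomly generated joint distribution of our choosing, cross-checks this against an alternating conditional-expectations (power-iteration) evaluation of the supremum in Definition 9.

```python
import numpy as np

rng = np.random.default_rng(3)

# a random joint pmf P_XY on a 4 x 5 alphabet
pxy = rng.random((4, 5))
pxy /= pxy.sum()
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

# (i) maximal correlation as the second-largest singular value of P_XY / sqrt(P_X P_Y)
B = pxy / np.sqrt(np.outer(px, py))
rho_svd = np.linalg.svd(B, compute_uv=False)[1]

# (ii) the same quantity by alternating conditional expectations (power iteration)
g = rng.standard_normal(5)
for _ in range(500):
    f = (pxy @ g) / px                                   # f(x) = E[g(Y) | X = x]
    f -= np.dot(px, f)
    f /= np.sqrt(np.dot(px, f ** 2))
    g = (pxy.T @ f) / py                                 # g(y) = E[f(X) | Y = y]
    g -= np.dot(py, g)
    g /= np.sqrt(np.dot(py, g ** 2))
rho_ace = float(f @ pxy @ g)                             # E[f(X) g(Y)] at the optimized pair

assert np.isclose(rho_svd, rho_ace, atol=1e-6)
```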
5. Proofs
5.1. Proof of Theorem 1
Proof of (22): We rely on an integral representation of the logarithm function (on base ):
Let be a dominating measure of P and Q (i.e., ), and let , , and
where the last equality is due to (21). For all ,
where (124) holds due to (121) with , and by swapping the order of integration. The inner integral on the right side of (124) satisfies, for all ,
where (127) holds since , and . From (21), for all ,
The substitution of (130) into the right side of (129) gives that, for all ,
Finally, substituting (131) into the right side of (124) gives that, for all ,
where (133) holds by the transformation . Equality (133) also holds for since we have .
Proof of (23): For all ,
where (135) holds due to (122). From (136), it follows that for all ,
5.2. Proof of Proposition 1
- (a)
- Simple Proof of Pinsker’s Inequality: By [61] or [62] [(58)],
We need the weaker inequality , proved by the Cauchy–Schwarz inequality:
By combining (24) and (139)–(141), it follows that
- (b)
- Proof of (30) and its local tightness:
where (146) is (24), and (149) holds due to Jensen’s inequality and the convexity of the hyperbola.
We next show the local tightness of inequality (30) by proving that (31) yields (32). Let be a sequence of probability measures, defined on a measurable space , and assume that converges to a probability measure P in the sense that (31) holds. In view of [16] [Theorem 7] (see also [15] [Section 4.F] and [63]), it follows that
and
which therefore yields (32).
- (c)
- Proof of (33) and (34): The proof of (33) relies on (28) and the following lemma.
Lemma 1. For all ,
Proof. □
From (28) and (155), for all ,
This proves (33). Furthermore, under the assumption in (31), for all ,
where (165) holds due to (153) and the local behavior of f-divergences [63], and (166) holds due to (17), which implies that for all . This proves (34).
- (d)
- Proof of (35): From (24), we get
Referring to the integrand of the first term on the right side of (169), for all ,
where the last equality holds since the equality implies that
From (170)–(174), an upper bound on the right side of (169) results. This gives
It should be noted that [15] [Theorem 2(a)] shows that inequality (35) is tight. To that end, let , and define probability measures and on the set with and . Then,
5.3. Proof of Theorem 2
We first prove Item (a) in Theorem 2. In view of the Hammersley–Chapman–Robbins lower bound on the divergence, for all
where , and is defined by
For ,
and it can be verified that
We now rely on identity (24)
to get a lower bound on the relative entropy. Combining (180), (183) and (184) yields
From (43) and (44), we get
where
By using the partial fraction decomposition of the integrand on the right side of (186), we get (after multiplying both sides of (185) by )
where (189) holds by integration since and are both non-negative for all . To verify the latter claim, it should be noted that (43) and the assumption that imply that . Since , it follows that, for all , either or (if , then the former is positive, and, if , then the latter is positive). By comparing the denominators of both integrands on the left and right sides of (186), it follows that for all . Since the product of and is non-negative and at least one of these terms is positive, it follows that and are both non-negative for all . Finally, (190) follows from (43).
If and , then it follows from (43) and (44) that and . Hence, from (187) and (188), and , which implies that the lower bound on in (191) tends to zero.
Letting and , we obtain that the lower bound on in (40) holds. This bound is consistent with the expressions of r and s in (41) and (42) since, from (45), (187) and (188),
It should be noted that . First, from (187) and (188), and are positive if , which yields . We next show that . Recall that and are both non-negative for all . Setting yields , which (from (193)) implies that . Furthermore, from (193) and the positivity of , it follows that if and only if . The latter holds since for all (in particular, for ). If , then it follows from (41)–(45) that , , and (recall that )
- (i)
- If , then implies that , and ;
- (ii)
- If , then implies that , and .
We next prove Item (b) in Theorem 2 (i.e., the achievability of the lower bound in (40)). To that end, we provide a technical lemma, which can be verified by the reader.
Lemma 2.
Let be given in (41)–(45), and let be given in (47). Then,
Let and be defined on a set (for the moment, the values of and are not yet specified) with , , , and . We now calculate and such that and . This is equivalent to
Substituting (196) into the right side of (197) gives
which, by rearranging terms, also gives
Solving simultaneously (196) and (199) gives
We next verify that, by setting as in (47), one also gets (as desired) that and . From Lemma 2, and, from (196) and (197), we have
By combining (204) and (209), we obtain . Hence, the probability mass functions P and Q defined on (with and in (47)) such that
satisfy the equality constraints in (39), while also achieving the lower bound on that is equal to . It can be also verified that the second option where
does not yield the satisfiability of the conditions and , so there is only a unique pair of probability measures P and Q, defined on a two-element set that achieves the lower bound in (40) under the equality constraints in (39).
We finally prove Item (c) in Theorem 2. Let , and be selected arbitrarily such that . We construct probability measures and , depending on a free parameter , with means and variances and , respectively (means and variances are independent of ), and which are defined on a three-element set as follows:
with . We aim to set the parameters and (as a function of and ) such that
Proving (214) yields (48), while it also follows that the infimum on the left side of (48) can be restricted to probability measures which are defined on a three-element set.
In view of the constraints on the means and variances in (39), with equal means m, we get the following set of equations from (212) and (213):
The first and second equations in (215) refer to the equal means under P and Q, and the third and fourth equations in (215) refer to the second moments in (39). Furthermore, in view of (212) and (213), the relative entropy is given by
Subtracting the square of the first equation in (215) from its third equation gives the equivalent set of equations
We next select and such that . Then, the third equation in (217) gives , so . Furthermore, the first equation in (217) gives
Since r, , and are independent of , so is the probability measure . Combining the second equation in (217) with (218) and (219) gives
Substituting (218)–(220) into the fourth equation of (217) gives a quadratic equation for s, whose selected solution (such that s and be close for small ) is equal to
Hence, , which implies that for sufficiently small (as it is required in (213)). In view of (216), it also follows that vanishes as we let tend to zero.
We finally outline an alternative proof, which refers to the case of equal means with arbitrarily selected and . Let . We next construct a sequence of pairs of probability measures with zero mean and respective variances for which as (without any loss of generality, one can assume that the equal means are equal to zero). We start by assuming . Let
and define a sequence of quaternary real-valued random variables with probability mass functions
It can be verified that, for all , has zero mean and variance . Furthermore, let
with
If , for , we choose arbitrarily with mean 0 and variance . Then,
Next, suppose , then construct and as before with variances and , respectively. If and denote the random variables and scaled by a factor of , then their variances are , , respectively, and as we let .
To conclude, it should be noted that the sequences of probability measures in the latter proof are defined on a four-element set. Recall that, in the earlier proof, specialized to the case of (equal means with) , the introduced probability measures are defined on a three-element set, and the reference probability measure P is fixed while referring to an equiprobable binary random variable.
5.4. Proof of Theorem 3
We first prove (52). Differentiating both sides of (22) gives that, for all ,
where (228) holds due to (21), (22) and (50); (229) holds by (16) and (230) is due to (21) and (50). This gives (52).
We next prove (53), and the conclusion which appears after it. In view of [16] [Theorem 8], applied to for all , we get (it should be noted that, by the definition of F in (50), the result in [16] [(195)–(196)] is used here by swapping P and Q)
Since , it follows by L’Hôpital’s rule that
which gives (53). A comparison of the limit in (53) with a lower bound which follows from (52) gives
where (236) relies on (231). Hence, the limit in (53) is twice as large as its lower bound on the right side of (236). This proves the conclusion which comes right after (53).
We finally prove the known result in (51), by showing an alternative proof which is based on (52). The function F is non-negative on , and it is strictly positive on if . Let (otherwise, (51) is trivial). Rearranging terms in (52) and integrating both sides over the interval , for , gives that
The left side of (237) satisfies
where (241) holds since (see (50)). Combining (237)–(241) gives
which, due to the non-negativity of F, gives the right side inequality in (51) after rearrangement of terms in (242).
5.5. Proof of Theorem 4
Lemma 3.
Let be a convex function with , and let be defined as in (58). Then, is a sequence of convex functions on , and
Proof.
We prove the convexity of on by induction. Suppose that is a convex function with for a fixed integer . The recursion in (58) yields and, by the change of integration variable ,
Consequently, for and with , applying (244) gives
where (247) holds since is convex on (by assumption). Hence, from (245)–(248), is also convex on with . By mathematical induction and our assumptions on , it follows that is a sequence of convex functions on which vanish at 1.
We next prove (243). For all and ,
where (249) holds since is convex on , and (250) relies on the recursive equation in (58). Substituting into (249)–(250), and using the equality , gives (243). □
We next prove Theorem 4. From Lemma 3, it follows that is an f-divergence for all integers , and the non-negative sequence is monotonically non-increasing. From (21) and (58), it also follows that, for all and integer ,
where the substitution is invoked in (253), and then (254) holds since for (this follows from (21)) and by interchanging the order of the integrations.
5.6. Proof of Corollary 5
Combining (60) and (61) yields (58); furthermore, , given by for all , is convex on with . Hence, Theorem 4 holds for the selected functions in (61), which therefore are all convex on and vanish at 1. This proves that (59) holds for all and . Since and for all (see (60) and (61)), then, for every pair of probability measures P and Q:
Finally, combining (59), for , together with (256), gives (22) as a special case.
5.7. Proof of Theorem 5 and Corollary 6
For an arbitrary measurable set , we have from (62)
where is the indicator function of , i.e., for . Hence,
and
where the last equality holds by the definition of in (63). This proves Theorem 5. Corollary 6 is next proved by first proving (67) for the Rényi divergence. For all ,
The justification of (67) for is due to the continuous extension of the order- Rényi divergence at , which gives the relative entropy (see (13)). Equality (65) is obtained from (67) at . Finally, (66) is obtained by combining (15) and (67) with .
5.8. Proof of Theorem 6
Equation (100) is an equivalent form of (27). From (91) and (100), for all ,
Regarding the integrand of the second term in (269), in view of (18), for all
where (271) readily follows from (9). Since we also have (see (18)), it follows that
By using this identity, we get from (269) that, for all
where the function is defined in (102). This proves the integral identity (101).
The lower bounds in (103) and (104) hold since, if is convex, continuously twice differentiable and strictly convex at 1, then
(see, e.g., [46] [Proposition II.6.5] and [50] [Theorem 2]). Hence, this holds in particular for the f-divergences in (95) and (96) (since the required properties are satisfied by the parametric functions in (97) and (98), respectively). We next prove the upper bound on the contraction coefficients in (103) and (104) by relying on (100) and (101), respectively. In the setting of Definition 7, if , then it follows from (100) that for ,
Finally, taking the supremum of the left-hand side of (277) over all probability measures such that gives the upper bound on in (103). The proof of the upper bound on , for all , follows similarly from (101), since the function as defined in (102) is positive over the interval .
5.9. Proof of Corollary 7
The upper bounds in (106) and (107) rely on those in (103) and (104), respectively, by showing that
Inequality (280) is obtained as follows, similarly to the concept of the proof of [51] [Remark 3.8]. For all and ,
where (281) holds due to (18), and (283) is due to the definition in (105).
5.10. Proof of Proposition 3
The lower bound on the contraction coefficients in (108) and (109) is due to (276). The derivation of the upper bounds relies on [49] [Theorem 2.2], which states the following. Let be a three–times differentiable, convex function with , , and let the function defined as , for all , be concave. Then,
For , let and be given by
with and in (97) and (98). Straightforward calculus shows that, for and ,
The first term on the right side of (288) is negative. To show that the second term is also negative, we rely on the power series expansion for . Setting , for , and using Leibniz's theorem for alternating series yields
Consequently, setting in (289), for and , proves that the second term on the right side of (288) is negative. Hence, , so both are concave functions.
In view of the satisfiability of the conditions of [49] [Theorem 2.2] for the f-divergences with or , the upper bounds in (108) and (109) follow from (284), and also since
5.11. Proof of Proposition 4
In view of (24), we get
In view of (119), the distributions of and , and since holds for all , it follows that
which, from (292)–(295), implies that
Switching and in (292)–(294) and using the mapping in (294) gives (due to the symmetry of the maximal correlation)
and, finally, taking the maximal lower bound among those in (296) and (297) gives (120).
Author Contributions
Both coauthors contributed to this research work, and to the writing and proofreading of this article. The starting point of this work was in independent derivations of preliminary versions of Theorems 1 and 2 in two separate unpublished works [24,25]. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Acknowledgments
Sergio Verdú is gratefully acknowledged for a careful reading and well-appreciated feedback on the submitted version of this paper.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
- Pearson, K. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1900, 50, 157–175. [Google Scholar] [CrossRef]
- Csiszár, I.; Shields, P.C. Information Theory and Statistics: A Tutorial. Found. Trends Commun. Inf. Theory 2004, 1, 417–528. [Google Scholar] [CrossRef]
- Ali, S.M.; Silvey, S.D. A general class of coefficients of divergence of one distribution from another. J. R. Stat. Soc. 1966, 28, 131–142. [Google Scholar] [CrossRef]
- Csiszár, I. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Publ. Math. Inst. Hungar. Acad. Sci. 1963, 8, 85–108. [Google Scholar]
- Csiszár, I. Information-type measures of difference of probability distributions and indirect observations. Stud. Sci. Math. Hung. 1967, 2, 299–318. [Google Scholar]
- Csiszár, I. On topological properties of f-divergences. Stud. Sci. Math. Hung. 1967, 2, 329–339. [Google Scholar]
- Csiszár, I. A class of measures of informativity of observation channels. Period. Math. Hung. 1972, 2, 191–213. [Google Scholar] [CrossRef]
- Van Erven, T.; Harremoës, P. Rényi divergence and Kullback–Leibler divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820. [Google Scholar] [CrossRef]
- Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; University of California Press: Berkeley, CA, USA, 1961; pp. 547–561. [Google Scholar]
- Liese, F.; Vajda, I. Convex Statistical Distances; Teubner-Texte Zur Mathematik: Leipzig, Germany, 1987. [Google Scholar]
- Liese, F.; Vajda, I. On divergences and informations in statistics and information theory. IEEE Trans. Inf. Theory 2006, 52, 4394–4412. [Google Scholar] [CrossRef]
- DeGroot, M.H. Uncertainty, information and sequential experiments. Ann. Math. Stat. 1962, 33, 404–419. [Google Scholar] [CrossRef]
- Polyanskiy, Y.; Poor, H.V.; Verdú, S. Channel coding rate in the finite blocklength regime. IEEE Trans. Inf. Theory 2010, 56, 2307–2359. [Google Scholar] [CrossRef]
- Sason, I.; Verdú, S. f-divergence inequalities. IEEE Trans. Inf. Theory 2016, 62, 5973–6006. [Google Scholar] [CrossRef]
- Sason, I. On f-divergences: Integral representations, local behavior, and inequalities. Entropy 2018, 20, 383. [Google Scholar] [CrossRef]
- Melbourne, J.; Madiman, M.; Salapaka, M.V. Relationships between certain f-divergences. In Proceedings of the 57th Annual Allerton Conference on Communication, Control and Computing, Urbana, IL, USA, 24–27 September 2019; pp. 1068–1073. [Google Scholar]
- Melbourne, J.; Talukdar, S.; Bhaban, S.; Madiman, M.; Salapaka, M.V. The Differential Entropy of Mixtures: New Bounds and Applications. Available online: https://arxiv.org/pdf/1805.11257.pdf (accessed on 22 April 2020).
- Audenaert, K.M.R. Quantum skew divergence. J. Math. Phys. 2014, 55, 112202. [Google Scholar] [CrossRef]
- Gibbs, A.L.; Su, F.E. On choosing and bounding probability metrics. Int. Stat. Rev. 2002, 70, 419–435. [Google Scholar] [CrossRef]
- Györfi, L.; Vajda, I. A class of modified Pearson and Neyman statistics. Stat. Decis. 2001, 19, 239–251. [Google Scholar] [CrossRef]
- Le Cam, L. Asymptotic Methods in Statistical Decision Theory; Series in Statistics; Springer: New York, NY, USA, 1986. [Google Scholar]
- Vincze, I. On the concept and measure of information contained in an observation. In Contributions to Probability; Gani, J., Rohatgi, V.K., Eds.; Academic Press: New York, NY, USA, 1981; pp. 207–214. [Google Scholar]
- Nishiyama, T. A New Lower Bound for Kullback–Leibler Divergence Based on Hammersley-Chapman-Robbins Bound. Available online: https://arxiv.org/abs/1907.00288v3 (accessed on 2 November 2019).
- Sason, I. On Csiszár’s f-divergences and informativities with applications. In Workshop on Channels, Statistics, Information, Secrecy and Randomness for the 80th birthday of I. Csiszár; The Rényi Institute of Mathematics, Hungarian Academy of Sciences: Budapest, Hungary, 2018. [Google Scholar]
- Makur, A.; Polyanskiy, Y. Comparison of channels: Criteria for domination by a symmetric channel. IEEE Trans. Inf. Theory 2018, 64, 5704–5725. [Google Scholar] [CrossRef]
- Simic, S. On a new moments inequality. Stat. Probab. Lett. 2008, 78, 2671–2678. [Google Scholar] [CrossRef]
- Chapman, D.G.; Robbins, H. Minimum variance estimation without regularity assumptions. Ann. Math. Stat. 1951, 22, 581–586. [Google Scholar] [CrossRef]
- Hammersley, J.M. On estimating restricted parameters. J. R. Stat. Soc. Ser. B 1950, 12, 192–240. [Google Scholar] [CrossRef]
- Verdú, S. Information Theory, in preparation.
- Wang, L.; Madiman, M. Beyond the entropy power inequality, via rearrangements. IEEE Trans. Inf. Theory 2014, 60, 5116–5137. [Google Scholar] [CrossRef]
- Lewin, L. Polylogarithms and Associated Functions; Elsevier North Holland: Amsterdam, The Netherlands, 1981. [Google Scholar]
- Marton, K. Bounding d¯-distance by informational divergence: A method to prove measure concentration. Ann. Probab. 1996, 24, 857–866. [Google Scholar] [CrossRef]
- Marton, K. Distance-divergence inequalities. IEEE Inf. Theory Soc. Newsl. 2014, 64, 9–13. [Google Scholar]
- Boucheron, S.; Lugosi, G.; Massart, P. Concentration Inequalities—A Nonasymptotic Theory of Independence; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
- Raginsky, M.; Sason, I. Concentration of Measure Inequalities in Information Theory, Communications and Coding: Third Edition. In Foundations and Trends in Communications and Information Theory; NOW Publishers: Boston, MA, USA; Delft, The Netherlands, 2018. [Google Scholar]
- Csiszár, I. Sanov property, generalized I-projection and a conditional limit theorem. Ann. Probab. 1984, 12, 768–793. [Google Scholar] [CrossRef]
- Clarke, B.S.; Barron, A.R. Information-theoretic asymptotics of Bayes methods. IEEE Trans. Inf. Theory 1990, 36, 453–471. [Google Scholar] [CrossRef]
- Evans, R.J.; Boersma, J.; Blachman, N.M.; Jagers, A.A. The entropy of a Poisson distribution. SIAM Rev. 1988, 30, 314–317. [Google Scholar] [CrossRef]
- Knessl, C. Integral representations and asymptotic expansions for Shannon and Rényi entropies. Appl. Math. Lett. 1998, 11, 69–74. [Google Scholar] [CrossRef]
- Merhav, N.; Sason, I. An integral representation of the logarithmic function with applications in information theory. Entropy 2020, 22, 51. [Google Scholar] [CrossRef]
- Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
- Csiszár, I. The method of types. IEEE Trans. Inf. Theory 1998, 44, 2505–2523. [Google Scholar] [CrossRef]
- Corless, R.M.; Gonnet, G.H.; Hare, D.E.G.; Jeffrey, D.J.; Knuth, D.E. On the Lambert W function. Adv. Comput. Math. 1996, 5, 329–359. [Google Scholar] [CrossRef]
- Tamm, U. Some reflections about the Lambert W function as inverse of x·log(x). In Proceedings of the 2014 IEEE Information Theory and Applications Workshop, San Diego, CA, USA, 9–14 February 2014. [Google Scholar]
- Cohen, J.E.; Kemperman, J.H.B.; Zbăganu, G. Comparison of Stochastic Matrices with Applications in Information Theory, Statistics, Economics and Population Sciences; Birkhäuser: Boston, MA, USA, 1998. [Google Scholar]
- Cohen, J.E.; Iwasa, Y.; Rautu, G.; Ruskai, M.B.; Seneta, E.; Zbăganu, G. Relative entropy under mappings by stochastic matrices. Linear Algebra Its Appl. 1993, 179, 211–235. [Google Scholar] [CrossRef]
- Makur, A.; Zheng, L. Bounds between contraction coefficients. In Proceedings of the 53rd Annual Allerton Conference on Communication, Control and Computing, Urbana, IL, USA, 29 September–2 October 2015; pp. 1422–1429. [Google Scholar]
- Makur, A. Information Contraction and Decomposition. Ph.D. Thesis, MIT, Cambridge, MA, USA, May 2019. [Google Scholar]
- Polyanskiy, Y.; Wu, Y. Strong data processing inequalities for channels and Bayesian networks. In Convexity and Concentration; The IMA Volumes in Mathematics and its Applications; Carlen, E., Madiman, M., Werner, E.M., Eds.; Springer: New York, NY, USA, 2017; Volume 161, pp. 211–249. [Google Scholar]
- Raginsky, M. Strong data processing inequalities and Φ-Sobolev inequalities for discrete channels. IEEE Trans. Inf. Theory 2016, 62, 3355–3389. [Google Scholar] [CrossRef]
- Sason, I. On data-processing and majorization inequalities for f-divergences with applications. Entropy 2019, 21, 1022. [Google Scholar] [CrossRef]
- Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed.; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
- Burbea, J.; Rao, C.R. On the convexity of some divergence measures based on entropy functions. IEEE Trans. Inf. Theory 1982, 28, 489–495. [Google Scholar] [CrossRef]
- Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37, 145–151. [Google Scholar] [CrossRef]
- Menéndez, M.L.; Pardo, J.A.; Pardo, L.; Pardo, M.C. The Jensen–Shannon divergence. J. Frankl. Inst. 1997, 334, 307–318. [Google Scholar] [CrossRef]
- Topsøe, F. Some inequalities for information divergence and related measures of discrimination. IEEE Trans. Inf. Theory 2000, 46, 1602–1609. [Google Scholar] [CrossRef]
- Nielsen, F. On a generalization of the Jensen–Shannon divergence and the Jensen–Shannon centroids. Entropy 2020, 22, 221. [Google Scholar] [CrossRef]
- Asadi, M.; Ebrahimi, N.; Karazmi, O.; Soofi, E.S. Mixture models, Bayes Fisher information, and divergence measures. IEEE Trans. Inf. Theory 2019, 65, 2316–2321. [Google Scholar] [CrossRef]
- Sarmanov, O.V. Maximum correlation coefficient (non-symmetric case). Sel. Transl. Math. Stat. Probab. 1962, 2, 207–210. (In Russian) [Google Scholar]
- Gilardoni, G.L. Corrigendum to the note on the minimum f-divergence for given total variation. Comptes Rendus Math. 2010, 348, 299. [Google Scholar] [CrossRef]
- Reid, M.D.; Williamson, R.C. Information, divergence and risk for binary experiments. J. Mach. Learn. Res. 2011, 12, 731–817. [Google Scholar]
- Pardo, M.C.; Vajda, I. On asymptotic properties of information-theoretic divergences. IEEE Trans. Inf. Theory 2003, 49, 1860–1868. [Google Scholar] [CrossRef]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).