Article

Generalizations of Fano’s Inequality for Conditional Information Measures via Majorization Theory †

Department of Electrical and Computer Engineering, National University of Singapore, 21 Lower Kent Ridge Road, Singapore 119077, Singapore
This paper is an extended version of our paper published in the 2018 International Symposium on Information Theory and Its Applications (ISITA’18), Singapore, 28–31 October 2018.
Entropy 2020, 22(3), 288; https://doi.org/10.3390/e22030288
Submission received: 14 November 2019 / Revised: 15 January 2020 / Accepted: 27 January 2020 / Published: 1 March 2020
(This article belongs to the Special Issue Information Measures with Applications)

Abstract:
Fano’s inequality is one of the most elementary, ubiquitous, and important tools in information theory. Using majorization theory, Fano’s inequality is generalized to a broad class of information measures, which contains those of Shannon and Rényi. When specialized to these measures, it recovers and generalizes the classical inequalities. Key to the derivation is the construction of an appropriate conditional distribution inducing a desired marginal distribution on a countably infinite alphabet. The construction is based on the infinite-dimensional version of Birkhoff’s theorem proven by Révész [Acta Math. Hungar. 1962, 3, 188–198], and the constraint of maintaining a desired marginal distribution is similar to coupling in probability theory. Using our Fano-type inequalities for Shannon’s and Rényi’s information measures, we also investigate the asymptotic behavior of the sequence of Shannon’s and Rényi’s equivocations when the error probabilities vanish. This asymptotic behavior provides a novel characterization of the asymptotic equipartition property (AEP) via Fano’s inequality.

1. Introduction

Inequalities relating probabilities to various information measures are fundamental tools for proving coding theorems in information theory. Fano's inequality [1] is one such paradigmatic example of an information-theoretic inequality; it elucidates the interplay between the conditional Shannon entropy $H(X \mid Y)$ and the error probability $\mathbb{P}\{X \neq Y\}$. Denoting by $h_2 : u \mapsto -u \log u - (1-u) \log(1-u)$ the binary entropy function on $[0,1]$ with the convention that $h_2(0) = h_2(1) = 0$, Fano's inequality can be written as
$$\max_{(X,Y) : \mathbb{P}\{X \neq Y\} \le \varepsilon} H(X \mid Y) = h_2(\varepsilon) + \varepsilon \log(M-1) \tag{1}$$
for every $0 \le \varepsilon \le 1 - 1/M$, where $\log$ stands for the natural logarithm, and the maximization is taken over the jointly distributed pairs of $\{1, \dots, M\}$-valued random variables (r.v.'s) $X$ and $Y$ satisfying $\mathbb{P}\{X \neq Y\} \le \varepsilon$. An important consequence of Fano's inequality is that if the error probabilities vanish, so do the normalized equivocations. In other words,
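As a quick numerical illustration, the following minimal sketch (Python; the parameter values are hypothetical) evaluates the right-hand side of (1) in nats.

```python
import math

def binary_entropy(u: float) -> float:
    """Binary entropy h2(u) in nats, with the convention h2(0) = h2(1) = 0."""
    if u in (0.0, 1.0):
        return 0.0
    return -u * math.log(u) - (1 - u) * math.log(1 - u)

def fano_bound(eps: float, M: int) -> float:
    """Right-hand side of (1): the largest H(X|Y) subject to P{X != Y} <= eps."""
    assert 0 <= eps <= 1 - 1 / M
    return binary_entropy(eps) + eps * math.log(M - 1)

# Example: M = 16 symbols and error probability at most 0.1.
print(fano_bound(0.1, 16))  # ~0.596 nats
```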
$$\lim_{n \to \infty} \mathbb{P}\{X^n \neq Y^n\} = 0 \implies \lim_{n \to \infty} \frac{1}{n} H(X^n \mid Y^n) = 0, \tag{2}$$
where both $X^n = (X_1, \dots, X_n)$ and $Y^n = (Y_1, \dots, Y_n)$ are random vectors in which each component is a $\{1, \dots, M\}$-valued r.v. This is the key step in proving weak converse results in various communication models (cf. [2,3,4]). Moreover, Fano's inequality also shows that
$$\lim_{n \to \infty} \mathbb{P}\{X_n \neq Y_n\} = 0 \implies \lim_{n \to \infty} H(X_n \mid Y_n) = 0, \tag{3}$$
where $X_n$ and $Y_n$ are $\{1, \dots, M\}$-valued r.v.'s for each $n \ge 1$. This implication is used, for example, to prove that various Shannon information measures are continuous in the error metric $\mathbb{P}\{X_n \neq Y_n\}$ or the variational distance (cf. [5,6,7]).

1.1. Main Contributions

In this study, we consider general maximization problems that can be specialized to the left-hand side of (1); we generalize Fano’s inequality in the following four ways:
(i)
the alphabet $\mathcal{X}$ of a discrete r.v. $X$ to be estimated is countably infinite,
(ii)
the marginal distribution $P_X$ of $X$ is fixed,
(iii)
the inequality is established on a general class of conditional information measures, and
(iv)
the decoding rule is a list decoding scheme in contrast to a unique decoding scheme.
Specifically, given an $\mathcal{X}$-valued r.v. $X$ with a countably infinite alphabet $\mathcal{X}$ and a $\mathcal{Y}$-valued r.v. $Y$ with an abstract alphabet $\mathcal{Y}$, this study considers a generalized conditional information measure defined by
$$H_\phi(X \mid Y) := \mathbb{E}[\phi(P_{X|Y})], \tag{4}$$
where $P_{X|Y}(x)$ stands for a version of the conditional probability $\mathbb{P}\{X = x \mid Y\}$ for each $x \in \mathcal{X}$, and $\mathbb{E}[Z]$ stands for the expectation of a real-valued r.v. $Z$. Here, the function $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ defined on the set $\mathcal{P}(\mathcal{X})$ of discrete probability distributions on $\mathcal{X}$ plays the role of an information measure of a discrete probability distribution. When $\mathcal{Y}$ is a countable alphabet, the right-hand side of (4) can be written as
$$H_\phi(X \mid Y) = \sum_{y \in \mathcal{Y} : P_Y(y) > 0} P_Y(y) \, \phi(P_{X|Y=y}), \tag{5}$$
where $P_Y = \mathbb{P} \circ Y^{-1}$ denotes the probability law of $Y$, and $P_{X|Y=y}(x) := \mathbb{P}\{X = x \mid Y = y\}$ denotes the conditional probability for each $(x, y) \in \mathcal{X} \times \mathcal{Y}$. In this study, we impose some postulates on $\phi$ for technical reasons. Choosing $\phi$ appropriately, we can specialize $H_\phi(X \mid Y)$ to the conditional Shannon entropy $H(X \mid Y)$, Arimoto's and Hayashi's conditional Rényi entropies [8,9], and so on. For example, if $\phi$ is given by
$$\phi(P) = \sum_{x \in \mathcal{X}} P(x) \log \frac{1}{P(x)}, \tag{6}$$
then $H_\phi(X \mid Y)$ coincides with the conditional Shannon entropy $H(X \mid Y)$. Denoting by $P_e^{(L)}(X \mid Y)$ the minimum average probability of list decoding error with a list size $L$, the principal maximization problem considered in this study can be written as
$$H_\phi(Q, L, \varepsilon, \mathcal{Y}) := \sup_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon, \; P_X = Q} H_\phi(X \mid Y), \tag{7}$$
where the supremum is taken over the pairs $(X, Y)$ satisfying $P_e^{(L)}(X \mid Y) \le \varepsilon$ and fixing the $\mathcal{X}$-marginal $P_X$ to a given distribution $Q$. The feasible region of systems $(Q, L, \varepsilon, \mathcal{Y})$ will be characterized in this paper to ensure that $H_\phi(Q, L, \varepsilon, \mathcal{Y})$ is well-defined. Under some mild conditions on a given system $(Q, L, \varepsilon, \mathcal{Y})$, especially on the cardinality of $\mathcal{Y}$, we derive explicit formulas for $H_\phi(Q, L, \varepsilon, \mathcal{Y})$; otherwise, we establish tight upper bounds on $H_\phi(Q, L, \varepsilon, \mathcal{Y})$. As $H_\phi(Q, L, \varepsilon, \mathcal{Y})$ can be thought of as a generalization of the maximization problem stated in (1), we call these results Fano-type inequalities in this paper. These Fano-type inequalities are formulated in terms of the information measures $\phi(P_{\text{type}})$ of certain (extremal) probability distributions $P_{\text{type}}$ depending only on the system $(Q, L, \varepsilon, \mathcal{Y})$.
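For intuition, the following minimal sketch (Python; the joint distribution is a hypothetical example) evaluates $H_\phi(X \mid Y)$ via (5) for a finite $\mathcal{Y}$, taking $\phi$ to be the Shannon entropy functional in (6).

```python
import numpy as np

def shannon_phi(p: np.ndarray) -> float:
    """phi(P) = sum_x P(x) log(1/P(x)) in nats, as in (6)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def H_phi(joint: np.ndarray, phi) -> float:
    """H_phi(X|Y) = sum_y P_Y(y) * phi(P_{X|Y=y}), as in (5);
    joint[x, y] is the joint probability P_{X,Y}(x, y)."""
    p_y = joint.sum(axis=0)
    return sum(p_y[y] * phi(joint[:, y] / p_y[y])
               for y in range(joint.shape[1]) if p_y[y] > 0)

# Hypothetical joint distribution on a 3 x 2 alphabet.
P_XY = np.array([[0.30, 0.10],
                 [0.15, 0.25],
                 [0.05, 0.15]])
print(H_phi(P_XY, shannon_phi))  # conditional Shannon entropy H(X|Y)
```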
In this study, we provide Fano-type inequalities via majorization theory [10]. An outline of the proofs of our Fano-type inequalities is as follows.
  • First, we show that a generalized conditional information measure $H_\phi(X \mid Y)$ can be bounded from above by $H_\phi(U \mid V)$ with a certain pair $(U, V)$ in which the conditional distribution $P_{U|V}$ of $U$ given $V$ can be thought of as a so-called uniformly dispersive channel [11,12] (see also Section II-A of [13]). We prove this fact via Jensen's inequality (cf. Proposition A-2 of [14]) and the symmetry of the considered information measures $\phi$. Moreover, we establish a novel characterization of uniformly dispersive channels via a certain majorization relation; we show that the output distribution of a uniformly dispersive channel is majorized by its transition probability distribution for any fixed input symbol. This majorization relation is used to obtain a sharp upper bound via the Schur-concavity of the considered information measures $\phi$.
  • Second, we ensure the existence of a joint distribution $P_{X,Y}$ of $(X, Y)$ that satisfies all constraints in our maximization problem $H_\phi(Q, L, \varepsilon, \mathcal{Y})$ stated in (7) and whose conditional distribution $P_{X|Y}$ is uniformly dispersive. Here, a main technical difficulty is to maintain a given marginal distribution $P_X$ of $X$ over a countably infinite alphabet $\mathcal{X}$; see (ii) above. Using a majorization relation for a uniformly dispersive channel, we express a desired marginal distribution $P_X$ as the multiplication of a doubly stochastic matrix and a uniformly dispersive $P_{X|Y}$. This characterization of the majorization relation via a doubly stochastic matrix was proven by Hardy–Littlewood–Pólya [15] in the finite-dimensional case, and by Markus [16] in the infinite-dimensional case. From this doubly stochastic matrix, we construct a marginal distribution $P_Y$ of $Y$ so that the joint distribution $P_{X,Y} = P_{X|Y} P_Y$ has the desired marginal distribution $P_X$. The construction of $P_Y$ is based on the infinite-dimensional version of Birkhoff's theorem, which was posed by Birkhoff [17] and was proven by Révész [18] via Kolmogorov's extension theorem. Although the finite-dimensional version of Birkhoff's theorem [19] (also known as the Birkhoff–von Neumann decomposition) is well-known, the application of the infinite-dimensional version of Birkhoff's theorem in information theory appears to be novel; it aids in dealing with communication systems over countably infinite alphabets.
  • Third, we introduce an extremal distribution $P_{\text{type}}$ on a countably infinite alphabet $\mathcal{X}$. Showing that $P_{\text{type}}$ is the infimum of a certain class of discrete probability distributions with respect to the majorization relation, our maximization problems can be bounded from above by the considered information measure $\phi(P_{\text{type}})$. Namely, our Fano-type inequality is expressed by a certain information measure of the extremal distribution. When the cardinality of the alphabet of $Y$ is large enough, we construct a joint distribution $P_{X,Y}$ achieving equality in our generalized Fano-type inequality; in this sense, the inequality is sharp.
When the alphabet of $Y$ is finite, we further tighten our Fano-type inequality. To do so, we prove a reduction lemma for the principal maximization problem from an infinite- to a finite-dimensional feasible region. Therefore, when the alphabet of $Y$ is finite, we do not have to employ technical tools from infinite-dimensional majorization theory, e.g., the infinite-dimensional version of Birkhoff's theorem. This reduction lemma is useful not only to tighten our Fano-type inequality but also to characterize a sufficient condition on the considered information measure $\phi$ under which $H_\phi(Q, L, \varepsilon, \mathcal{Y})$ is finite if and only if $\phi(Q)$ is finite. In fact, Shannon's and Rényi's information measures meet this sufficient condition.
We show that our Fano-type inequalities can be specialized to some known generalizations of Fano's inequality [20,21,22,23] on Shannon's and Rényi's information measures. Therefore, one of our technical contributions is a unified proof of Fano's inequality for conditional information measures via majorization theory. Generalizations of Erokhin's function [20] from the ordinary mutual information to Sibson's and Arimoto's $\alpha$-mutual information [8,24] are also discussed.
Via our generalized Fano-type inequalities, we investigate sufficient conditions on a general source $\mathbf{X} = \{X_n = (Z_1^{(n)}, \dots, Z_n^{(n)})\}_{n=1}^{\infty}$ under which vanishing error probabilities imply vanishing equivocations (cf. (2) and (3)). We show that the asymptotic equipartition property (AEP) as defined by Verdú–Han [25] is indeed such a sufficient condition. In other words, if a general source $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$ satisfies the AEP and $H(X_n) = \Omega(1)$ as $n \to \infty$, then we prove that
$$\lim_{n \to \infty} P_e^{(L_n)}(X_n \mid Y_n) = \lim_{n \to \infty} \frac{\log L_n}{H(X_n)} = 0 \implies \lim_{n \to \infty} \frac{H(X_n \mid Y_n)}{H(X_n)} = 0, \tag{8}$$
where $\{L_n\}_{n=1}^{\infty}$ is an arbitrary sequence of list sizes. This is a generalization of (2) and (3) and, to the best of the author's knowledge, a novel connection between the AEP and Fano's inequality. We prove this connection by using the splitting technique of a probability distribution; this technique was used to derive limit theorems for Markov processes by Nummelin [26] and Athreya–Ney [27]. Note that there are also many applications of the splitting technique in information theory (cf. [21,28,29,30,31,32]). In addition, we extend Ho–Verdú's sufficient conditions (see Section V of [21]) and Sason–Verdú's sufficient conditions (see Theorem 4 of [23]) on a general source $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$ under which equivocations vanish if the error probabilities vanish.

1.2. Related Works

1.2.1. Information Theoretic Tools on Countably Infinite Alphabets

As the right-hand side of (1) diverges as $M \to \infty$ whenever $\varepsilon > 0$ is fixed, the classical Fano inequality is applicable only if $X$ is supported on a finite alphabet (see also Chapter 1 of [33]). In fact, if both $X_n$ and $Y_n$ are supported on the same countably infinite alphabet for each $n \ge 1$, one can construct a somewhat pathological example so that $\mathbb{P}\{X_n \neq Y_n\} = o(1)$ as $n \to \infty$ but $H(X_n \mid Y_n) = \infty$ for every $n \ge 1$ (cf. Example 2.49 of [4]).
Usually, it is not straightforward to generalize information-theoretic tools for systems defined on a finite alphabet to systems defined on a countably infinite alphabet. Ho–Yeung [34] showed that Shannon's information measures defined on countably infinite alphabets are not continuous with respect to the following distances: the $\chi^2$-divergence, the relative entropy, and the variational distance. Continuity issues of Rényi's information measures defined on countably infinite alphabets were explored by Kovačević–Stanojević–Šenk [35]. In addition, although weak typicality (cf. Chapter 3 of [2]), also known as the method of entropy-typical sequences (cf. Problem 2.5 of [6]), is a convenient tool in proving achievability theorems for sources and channels defined on countably infinite (or even uncountable) alphabets, strong typicality [6] is amenable only to situations with finite alphabets. To ameliorate this issue, Ho–Yeung [36] proposed a notion known as unified typicality that ensures that the desirable properties of weak and strong typicality are retained when one is working with countably infinite alphabets.
Recently, Madiman–Wang–Woo [37] investigated relations between majorization and the strong Sperner property [38] of posets together with applications to the Rényi entropy power inequality for sums of independent and integer-valued r.v.’s, i.e., supported on countably infinite alphabets.
To the best of the author’s knowledge, a generalization of Fano’s inequality to the case when X is supported on a countably infinite alphabet was initiated by Erokhin [20]. Given a discrete probability distribution Q on a countably infinite alphabet X = { 1 , 2 , } , Erokhin established in Equation (11) of [20] an explicit formula of the function:
I ( Q , ε ) : = min ( X , Y ) : P { X Y } ε , P X = Q I ( X Y ) ,
where the minimization is taken over the pairs of X -valued r.v.’s X and Y satisfying P { X Y } ε and P { X = x } = Q ( x ) for each x X , and I ( X Y ) stands for the mutual information between X and Y. Note that Erokhin’s function I ( Q , ε ) is the rate-distortion function with Hamming distortion measures (cf. [39,40]). As the well-known identity I ( X Y ) = H ( X ) H ( X Y ) implies that
I ( Q , ε ) = H ( X ) max ( X , Y ) : P { X Y } ε , P X = Q H ( X Y ) ,
Erokhin’s function I ( Q , ε ) can be naturally thought of as a generalization of the classical Fano inequality stated in (1), where H ( X ) stands for the Shannon entropy of X, and the probability distribution of X is given by P { X = x } = Q ( x ) for each x X . Kostina–Polyanskiy–Verdú [41] derived a second-order asymptotic expansion of I ( Q n , ε ) as n , where Q n stands for the n-fold product of Q. Their asymptotic expansion is closely related to the second-order asymptotics of the variable-length compression allowing errors; see ([41], Theorem 4).
Ho–Verdú [21] gave an explicit formula of the maximization in the right-hand side of (10); they proved it via the additivity of Shannon’s information measures. Note that Ho–Verdú’s formula (cf. Theorem 1 of [21]) coincides with Erokhin’s formula (cf. Equation (11) of [20]) via the identity stated in (10). In Theorems 2 and 4 of [21], Ho–Verdú also tightened the maximization in the right-hand side of (10) when Y is supported on a proper subalphabet of X . Moreover, they provided in Section V of [21] some sufficient conditions on a general source in which vanishing error probabilities (i.e., P { X n Y n } = o ( 1 ) ) implies vanishing unnormalized or normalized equivocations (i.e., H ( X n Y n ) = o ( 1 ) or H ( X n Y n ) = o ( n ) ).

1.2.2. Fano’s Inequality with List Decoding

Fano’s inequality with list decoding was initiated by Ahlswede–Gács–Körner [42]. By a minor extension of the usual proof (see, e.g., Lemma 3.8 of [6]), one can see that
max ( X , Y ) : P e ( L ) ( X Y ) ε H ( X Y ) = h 2 ( ε ) + ( 1 ε ) log L + ε log ( M L )
for every integers 1 L < M and every real number 0 ε 1 L / M , where the maximization is taken over the pairs of a { 1 , , M } -valued r.v. X and a Y -valued r.v. Y satisfying P e ( L ) ( X Y ) ε . Note that the right-hand side of (11) coincides with the Shannon entropy of the extremal distribution of type-0 defined by
P type-0 ( x ) = P type-0 ( M , L , ε ) ( x ) : = 1 ε L if 1 x L , ε M L if L < x M , 0 if M < x <
for each integer x 1 . A graphical representation of this extremal distribution is plotted in Figure 1.
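The following short sketch (Python; the parameter values are illustrative) constructs the extremal distribution of type-0 in (12) and checks that its Shannon entropy equals the right-hand side of (11).

```python
import math

def p_type0(M: int, L: int, eps: float) -> list:
    """Extremal distribution of type-0 from (12), truncated to {1, ..., M}."""
    assert 1 <= L < M and 0 <= eps <= 1 - L / M
    return [(1 - eps) / L] * L + [eps / (M - L)] * (M - L)

def entropy(p: list) -> float:
    return sum(-q * math.log(q) for q in p if q > 0)

M, L, eps = 8, 2, 0.25
h2 = -eps * math.log(eps) - (1 - eps) * math.log(1 - eps)
rhs = h2 + (1 - eps) * math.log(L) + eps * math.log(M - L)
assert abs(entropy(p_type0(M, L, eps)) - rhs) < 1e-9
```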
Combining (11) and the blowing-up technique (cf. Chapter 5 of [6] or Section 3.6.2 of [43]), Ahlswede–Gács–Körner [42] proved the strong converse property (in Wolfowitz’s sense [44]) of degraded broadcast channels under the maximum error probability criterion. Extending the proof technique in [42] together with the wringing technique, Dueck [45] proved the strong converse property of multiple-access channels under the average error probability criterion. As these proofs rely on a combinatorial lemma (cf. Lemma 5.1 of [6]), they work only when the channel output alphabet is finite; but see recent work by Fong–Tan [46,47] in which such techniques have been extended to Gaussian channels. On the other hand, Kim–Sutivong–Cover [48] investigated a trade-off between the channel coding rate and the state uncertainty reduction of a channel with state information available only at the sender, and derived its trade-off region in the weak converse regime by employing (11).

1.2.3. Fano’s Inequality for Rényi’s Information Measures

So far, many researchers have considered various directions for generalizing Fano's inequality. An interesting line of work involves reversing the usual Fano inequality. In this regard, lower bounds on $H(X \mid Y)$ subject to $\mathbb{P}\{X \neq Y\} = \varepsilon$ were independently established by Kovalevsky [49], Chu–Chueh [50], and Tebbe–Dwyer [51] (see also Feder–Merhav's study [52]). Prasad [53] provided several refinements of the reverse/forward Fano inequalities for Shannon's information measures.
In [54], Ben-Bassat–Raviv explored several inequalities between the (unconditional) Rényi entropy and the error probability. Generalizations of Fano's inequality from the conditional Shannon entropy $H(X \mid Y)$ to Arimoto's conditional Rényi entropy $H_\alpha^{\mathrm{Arimoto}}(X \mid Y)$ introduced in [8] were recently and independently investigated by Sakai–Iwata [22] and Sason–Verdú [23]. Specifically, Sakai–Iwata [22] provided sharp upper/lower bounds on $H_\alpha^{\mathrm{Arimoto}}(X \mid Y)$ for fixed $H_\beta^{\mathrm{Arimoto}}(X \mid Y)$ with two distinct orders $\alpha \neq \beta$. In other words, they gave explicit formulas for the following minimization and maximization,
$$f_{\min}(\alpha, \beta, \gamma) := \min_{(X,Y) : H_\beta^{\mathrm{Arimoto}}(X \mid Y) = \gamma} H_\alpha^{\mathrm{Arimoto}}(X \mid Y), \tag{13}$$
$$f_{\max}(\alpha, \beta, \gamma) := \max_{(X,Y) : H_\beta^{\mathrm{Arimoto}}(X \mid Y) = \gamma} H_\alpha^{\mathrm{Arimoto}}(X \mid Y), \tag{14}$$
respectively. As $H_\beta^{\mathrm{Arimoto}}(X \mid Y)$ is a strictly monotone function of the minimum average probability of error if $\beta = \infty$, both functions $f_{\min}(\alpha, \infty, \gamma)$ and $f_{\max}(\alpha, \infty, \gamma)$ can be thought of as reverse and forward Fano inequalities on $H_\alpha^{\mathrm{Arimoto}}(X \mid Y)$, respectively (cf. Section V of the arXiv version of [22]). Sason–Verdú [23] also gave generalizations of the forward and reverse Fano inequalities on $H_\alpha^{\mathrm{Arimoto}}(X \mid Y)$. Moreover, in the forward Fano inequality pertaining to $H_\alpha^{\mathrm{Arimoto}}(X \mid Y)$, they generalized in Theorem 8 of [23] the decoding rule from unique decoding to list decoding as follows:
$$\max_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon} H_\alpha^{\mathrm{Arimoto}}(X \mid Y) = \frac{1}{1 - \alpha} \log \Big( L^{1-\alpha} (1 - \varepsilon)^\alpha + (M - L)^{1-\alpha} \varepsilon^\alpha \Big) \tag{15}$$
for every $0 \le \varepsilon \le 1 - L/M$ and $\alpha \in (0,1) \cup (1,\infty)$, where the maximization is taken as in (11). Similar to (11), the right-hand side of (15) coincides with the Rényi entropy [55] of the extremal distribution of type-0. Note that the reverse Fano inequality proven in [22,23] does not require that $\mathcal{X}$ is finite. On the other hand, the forward Fano inequality proven in [22,23] is applicable only when $\mathcal{X}$ is finite.

1.2.4. Lower Bounds on Mutual Information

Han–Verdú [56] generalized Fano's inequality on a countably infinite alphabet $\mathcal{X}$ by investigating lower bounds on the mutual information, i.e.,
$$I(X \wedge Y) \ge \mathbb{P}\{X \neq Y\} \log \frac{\mathbb{P}\{X \neq Y\}}{\mathbb{P}\{\bar{X} \neq \bar{Y}\}} + \mathbb{P}\{X = Y\} \log \frac{\mathbb{P}\{X = Y\}}{\mathbb{P}\{\bar{X} = \bar{Y}\}}, \tag{16}$$
via the data processing lemma without additional constraints on the r.v.'s $X$ and $Y$, where $\bar{X}$ and $\bar{Y}$ are independent r.v.'s having the same marginals as $X$ and $Y$, respectively. Polyanskiy–Verdú [57] showed a lower bound on Sibson's $\alpha$-mutual information by using the data processing lemma for the Rényi divergence. Recently, Sason [58] generalized Fano's inequality with list decoding via the strong data processing lemma for $f$-divergences.
Liu–Verdú [59] showed that
$$I(X^n \wedge Y^n) \ge \log M_n + O(\sqrt{n}) \tag{17}$$
as $n \to \infty$, provided that the geometric average probability of error, a criterion weaker than the maximum error criterion and stronger than the average error criterion, satisfies
$$\left( \prod_{m=1}^{M_n} \mathbb{P}\{ Y^n \in \mathcal{D}_{m,n} \mid X^n = c_{m,n} \} \right)^{1/M_n} \ge 1 - \varepsilon \tag{18}$$
for sufficiently large $n$, where $X^n$ is an r.v. uniformly distributed on the codeword set $\{c_{m,n}\}_{m=1}^{M_n}$, $Y^n$ is the r.v. induced by the $n$-fold product of a discrete memoryless channel with input $X^n$, $M_n$ is a positive integer denoting the message size, $\{\mathcal{D}_{m,n}\}_{m=1}^{M_n}$ is a collection of disjoint subsets playing the role of decoding regions, and $0 < \varepsilon < 1$ is a tolerated probability of error. This is a second-order asymptotic estimate of the mutual information, and it is derived by using the Donsker–Varadhan lemma (cf. Equation (3.4.67) of [43]) and the so-called pumping-up argument.

1.3. Paper Organization

The rest of this paper is organized as follows. Section 2 introduces basic notation and definitions needed to understand our generalized conditional information measure $H_\phi(X \mid Y)$ and the principal maximization problem $H_\phi(Q, L, \varepsilon, \mathcal{Y})$. Section 3 presents the main results: our Fano-type inequalities. Section 4 specializes our Fano-type inequalities to Shannon's and Rényi's information measures, and discusses generalizations of Erokhin's function from the ordinary mutual information to Sibson's and Arimoto's $\alpha$-mutual information. Section 5 investigates several conditions on a general source under which vanishing error probabilities imply vanishing equivocations; a novel characterization of the AEP via Fano's inequality is also presented. Section 6 proves our Fano-type inequalities stated in Section 3, and contains most of the technical contributions of this study. Section 7 proves the asymptotic behaviors stated in Section 5. Finally, Section 8 concludes this study with some remarks.

2. Preliminaries

2.1. A General Class of Conditional Information Measures

This subsection introduces some notions in majorization theory [10] and a rigorous definition of the generalized conditional information measure $H_\phi(X \mid Y)$ defined in (4). Let $\mathcal{X} = \{1, 2, \dots\}$ be a countably infinite alphabet. A discrete probability distribution $P$ on $\mathcal{X}$ is a map $P : \mathcal{X} \to [0, 1]$ satisfying $\sum_{x \in \mathcal{X}} P(x) = 1$. In this paper, motivated by the consideration of joint probability distributions on $\mathcal{X} \times \mathcal{Y}$, it is called an $\mathcal{X}$-marginal. Given an $\mathcal{X}$-marginal $P$, its decreasing rearrangement is denoted by $P^{\downarrow}$, i.e., it fulfills
$$P^{\downarrow}(1) \ge P^{\downarrow}(2) \ge P^{\downarrow}(3) \ge P^{\downarrow}(4) \ge P^{\downarrow}(5) \ge \cdots. \tag{19}$$
The following definition gives us the notion of majorization for X -marginals.
Definition 1
(Majorization [10]). An $\mathcal{X}$-marginal $P$ is said to majorize another $\mathcal{X}$-marginal $Q$ if
$$\sum_{i=1}^{k} P^{\downarrow}(i) \ge \sum_{i=1}^{k} Q^{\downarrow}(i) \tag{20}$$
for every $k \ge 1$. This relation is denoted by $P \succeq Q$ or $Q \preceq P$.
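Definition 1 is easy to check numerically; a minimal sketch (Python, with two illustrative finitely supported $\mathcal{X}$-marginals) follows.

```python
import numpy as np

def majorizes(p, q, tol=1e-12) -> bool:
    """True if P majorizes Q in the sense of Definition 1; finitely supported
    distributions are padded with zeros to a common length."""
    n = max(len(p), len(q))
    p = np.sort(np.pad(p, (0, n - len(p))))[::-1]  # decreasing rearrangement
    q = np.sort(np.pad(q, (0, n - len(q))))[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - tol))

# The uniform distribution is majorized by every distribution of equal support size.
print(majorizes([0.5, 0.3, 0.2], [1/3, 1/3, 1/3]))  # True
print(majorizes([1/3, 1/3, 1/3], [0.5, 0.3, 0.2]))  # False
```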
Let $\mathcal{P}(\mathcal{X})$ be the set of $\mathcal{X}$-marginals. The following definitions are important postulates on a function $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ playing the role of an information measure of an $\mathcal{X}$-marginal.
Definition 2.
A function $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ is said to be symmetric if it is invariant under permutations of the probability masses, i.e., $\phi(P) = \phi(P^{\downarrow})$ for every $P \in \mathcal{P}(\mathcal{X})$.
Definition 3.
A function $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ is said to be lower semicontinuous if for any $P \in \mathcal{P}(\mathcal{X})$, it holds that $\liminf_{n \to \infty} \phi(P_n) \ge \phi(P)$ for every pointwise convergent sequence $P_n \to P$, where the pointwise convergence $P_n \to P$ means that $P_n(x) \to P(x)$ as $n \to \infty$ for every $x \in \mathcal{X}$.
Definition 4.
A function $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ is said to be convex if $\phi(R) \le \lambda \phi(P) + (1 - \lambda) \phi(Q)$ with $R = \lambda P + (1 - \lambda) Q$ for every $P, Q \in \mathcal{P}(\mathcal{X})$ and $0 \le \lambda \le 1$.
Definition 5.
A function $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ is said to be quasiconvex if the sublevel set $\{ P \in \mathcal{P}(\mathcal{X}) \mid \phi(P) \le c \}$ is convex for every $c \in [0, \infty)$.
Definition 6.
A function $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ is said to be Schur-convex if $P \succeq Q$ implies that $\phi(P) \ge \phi(Q)$.
In Definitions 4–6, the term convex (or the suffix -convex) is replaced by concave (or -concave) if $\phi$ fulfills the corresponding condition with the inequalities reversed. In Definition 3, note that the pointwise convergence of $\mathcal{X}$-marginals is equivalent to convergence in the variational distance topology (see, e.g., Lemma 3.1 of [60] or Section III-D of [61]).
Let $X$ be an $\mathcal{X}$-valued r.v. and $Y$ a $\mathcal{Y}$-valued r.v., where $\mathcal{Y}$ is an abstract alphabet. Unless stated otherwise, we assume that the measurable space of $\mathcal{Y}$ with a certain $\sigma$-algebra is standard Borel, where a measurable space is said to be standard Borel if its $\sigma$-algebra is the Borel $\sigma$-algebra generated by a Polish topology on the space. Assuming that $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ is a symmetric, concave, and lower semicontinuous function, the generalized conditional information measure $H_\phi(X \mid Y)$ is defined by (4). The postulates imposed on $\phi$ here are useful for technical reasons in employing majorization theory; see the following proposition.
Proposition 1.
Every symmetric and quasiconvex function $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ is Schur-convex.
Proof of Proposition 1.
In Proposition 3.C.3 of [10], the assertion of Proposition 1 was proved in the case where the dimension of the domain of $\phi$ is finite. Employing Theorem 4.2 of [16] instead of Corollary 2.B.3 of [10], the proof of Proposition 3.C.3 of [10] can be directly extended to infinite-dimensional domains. □
To employ the Schur-concavity property in the sequel, Proposition 1 suggests assuming that $\phi$ is symmetric and quasiconcave. In addition, to apply Jensen's inequality to the function $\phi$, it suffices to assume that $\phi$ is concave and lower semicontinuous, because the domain $\mathcal{P}(\mathcal{X})$ forms a closed, convex, and bounded set in the variational distance topology (cf. Proposition A-2 of [14]). Motivated by these properties, we impose the three postulates (corresponding to Definitions 2–4) on $\phi$ in this study.

2.2. Minimum Average Probability of List Decoding Error

Consider a communication model in which a $\mathcal{Y}$-valued r.v. $Y$ plays the role of side-information about an $\mathcal{X}$-valued r.v. $X$. A list decoding scheme with a list size $1 \le L < \infty$ is a decoding scheme producing $L$ candidates for the realization of $X$ when we observe a realization of $Y$. The minimum average error probability under list decoding is defined by
$$P_e^{(L)}(X \mid Y) := \min_{f : \mathcal{Y} \to \binom{\mathcal{X}}{L}} \mathbb{P}\{X \notin f(Y)\}, \tag{21}$$
where the minimization is taken over all set-valued functions $f : \mathcal{Y} \to \binom{\mathcal{X}}{L}$ with the decoding range
$$\binom{\mathcal{X}}{L} := \{ \mathcal{D} \subset \mathcal{X} \mid |\mathcal{D}| = L \}, \tag{22}$$
and $|\cdot|$ stands for the cardinality of a set. If $S$ is an infinite set, then we assume that $|S| = \infty$ as usual. If $L = 1$, then (21) coincides with the average error probability of the maximum a posteriori (MAP) decoding scheme. For the sake of brevity, we write
$$P_e(X \mid Y) := P_e^{(1)}(X \mid Y). \tag{23}$$
It is clear that
$$\mathbb{P}\{X \notin f(Y)\} \le \varepsilon \implies P_e^{(L)}(X \mid Y) \le \varepsilon \tag{24}$$
for any list decoder $f : \mathcal{Y} \to \binom{\mathcal{X}}{L}$ and any tolerated probability of error $\varepsilon \ge 0$. Therefore, it suffices to consider the constraint $P_e^{(L)}(X \mid Y) \le \varepsilon$ rather than $\mathbb{P}\{X \notin f(Y)\} \le \varepsilon$ in our subsequent analyses.
The following proposition gives an elementary formula for $P_e^{(L)}(X \mid Y)$, analogous to the one for MAP decoding.
Proposition 2.
It holds that
$$P_e^{(L)}(X \mid Y) = 1 - \mathbb{E}\left[ \sum_{x=1}^{L} P_{X|Y}^{\downarrow}(x) \right]. \tag{25}$$
Proof of Proposition 2.
See Appendix A. □
Remark 1.
It follows from Proposition 2 that $H_\phi(X \mid Y)$ defined in (4) can be specialized to $P_e^{(L)}(X \mid Y)$ by taking
$$\phi(P) = 1 - \sum_{x=1}^{L} P^{\downarrow}(x). \tag{26}$$
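Proposition 2 says that the optimal list decoder keeps the $L$ largest posterior masses; the sketch below (Python; the joint distribution is a hypothetical example) computes $P_e^{(L)}(X \mid Y)$ accordingly for a finite $\mathcal{Y}$.

```python
import numpy as np

def pe_list(joint: np.ndarray, L: int) -> float:
    """Minimum list decoding error (25): 1 - E[sum of the L largest masses of P_{X|Y}]."""
    p_y = joint.sum(axis=0)
    correct = 0.0
    for y in range(joint.shape[1]):
        if p_y[y] > 0:
            post = np.sort(joint[:, y] / p_y[y])[::-1]  # decreasing rearrangement
            correct += p_y[y] * post[:L].sum()
    return 1.0 - correct

# Hypothetical joint distribution on a 4 x 2 alphabet.
P_XY = np.array([[0.20, 0.05],
                 [0.15, 0.10],
                 [0.10, 0.15],
                 [0.05, 0.20]])
print(pe_list(P_XY, 1))  # MAP error (L = 1)
print(pe_list(P_XY, 2))  # the error decreases as L grows
```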
The following proposition characterizes the feasible region of systems $(Q, L, \varepsilon, \mathcal{Y})$ considered in our principal maximization problem $H_\phi(Q, L, \varepsilon, \mathcal{Y})$ stated in (7).
Proposition 3.
If $P_X = Q$, then
$$1 - \sum_{x=1}^{L \cdot |\mathcal{Y}|} Q^{\downarrow}(x) \le P_e^{(L)}(X \mid Y) \le 1 - \sum_{x=1}^{L} Q^{\downarrow}(x). \tag{27}$$
Moreover, both inequalities are sharp in the sense that there exist pairs of r.v.’s X and Y achieving the equalities while respecting the constraint P X = Q .
Proof of Proposition 3.
See Appendix B. □
The minimum average error probability of list decoding for $X \sim Q$ without any side-information is denoted by
$$P_e^{(L)}(Q) := 1 - \sum_{x=1}^{L} Q^{\downarrow}(x). \tag{28}$$
Then, the second inequality in (27) is obvious, and it is analogous to the property that conditioning reduces uncertainty (cf. Theorem 2.8.1 of [2]). Proposition 3 ensures that when we consider the constraints $P_e^{(L)}(X \mid Y) \le \varepsilon$ and $P_X = Q$, it suffices to consider a system $(Q, L, \varepsilon, \mathcal{Y})$ satisfying
$$1 - \sum_{x=1}^{L \cdot |\mathcal{Y}|} Q^{\downarrow}(x) \le \varepsilon \le 1 - \sum_{x=1}^{L} Q^{\downarrow}(x). \tag{29}$$

3. Main Results: Fano-Type Inequalities

Let $(Q, L, \varepsilon, \mathcal{Y})$ be a system satisfying (29), and let $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ be a symmetric, concave, and lower semicontinuous function. The main aim of this study is to find an explicit formula or a tight upper bound for $H_\phi(Q, L, \varepsilon, \mathcal{Y})$ defined in (7). Now, define the extremal distribution of type-1 as the following $\mathcal{X}$-marginal,
$$P_{\text{type-}1}(x) = P_{\text{type-}1}^{(Q, L, \varepsilon)}(x) := \begin{cases} Q(x) & \text{if } 1 \le x < J \text{ or } K_1 < x < \infty, \\ V(J) & \text{if } J \le x \le L, \\ W(K_1) & \text{if } L < x \le K_1, \end{cases} \tag{30}$$
for each $x \in \mathcal{X}$, where the weight $V(j)$ is defined by
$$V(j) = V^{(Q, L, \varepsilon)}(j) := \begin{cases} \dfrac{(1 - \varepsilon) - \sum_{x=1}^{j-1} Q(x)}{L - j + 1} & \text{if } 1 \le j \le L, \\[1ex] 1 & \text{if } j > L \end{cases} \tag{31}$$
for each $j \ge 1$, the weight $W(k)$ is defined by
$$W(k) = W^{(Q, L, \varepsilon)}(k) := \begin{cases} 1 & \text{if } k = L, \\[1ex] \dfrac{\sum_{x=1}^{k} Q(x) - (1 - \varepsilon)}{k - L} & \text{if } L < k < \infty, \\[1ex] 0 & \text{if } k = \infty \end{cases} \tag{32}$$
for each $k \ge L$, the integer $J$ is chosen so that
$$J = J(Q, L, \varepsilon) := \min\{ 1 \le j < \infty \mid Q(j) < V(j) \}, \tag{33}$$
and $K_1$ is chosen so that
$$K_1 = K_1(Q, L, \varepsilon) := \sup\{ L \le k < \infty \mid W(k) < Q(k) \}. \tag{34}$$
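The construction (30)–(34) can be made concrete. The sketch below (Python, with an illustrative finitely supported $Q$ assumed already sorted in decreasing order, and a hypothetical truncation bound for the search in (34)) computes $J$, $K_1$, and the masses of $P_{\text{type-}1}$, and checks that the first $L$ masses sum to $1 - \varepsilon$ (cf. Proposition 4 below).

```python
def p_type1(Q, L, eps, kmax=10_000):
    """Sketch of the extremal distribution of type-1 in (30); kmax truncates
    the search for K_1 in (34)."""
    def V(j):  # weight (31)
        return 1.0 if j > L else ((1 - eps) - sum(Q[:j - 1])) / (L - j + 1)
    def W(k):  # weight (32)
        return 1.0 if k == L else (sum(Q[:k]) - (1 - eps)) / (k - L)
    Qx = lambda x: Q[x - 1] if x <= len(Q) else 0.0
    J = next(j for j in range(1, kmax) if Qx(j) < V(j))    # (33)
    K1 = max(k for k in range(L, kmax) if W(k) < Qx(k))    # (34)
    return [V(J) if J <= x <= L else
            W(K1) if L < x <= K1 else
            Qx(x) for x in range(1, max(K1, len(Q)) + 1)]

Q = [0.4, 0.2, 0.15, 0.1, 0.08, 0.07]        # illustrative sorted marginal
P = p_type1(Q, L=2, eps=0.3)
assert abs(sum(P[:2]) - (1 - 0.3)) < 1e-12   # first L masses sum to 1 - eps
```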
A graphical representation of $P_{\text{type-}1}$ is shown in Figure 2. Under some mild conditions, the following theorem gives an explicit formula for $H_\phi(Q, L, \varepsilon, \mathcal{Y})$.
Theorem 1.
Suppose that $\varepsilon > 0$ and the cardinality of $\mathcal{Y}$ is at least countably infinite. Then, it holds that
$$H_\phi(Q, L, \varepsilon, \mathcal{Y}) = \phi(P_{\text{type-}1}). \tag{35}$$
Proof of Theorem 1.
See Section 6.1. □
The Fano-type inequality stated in (35) of Theorem 1 is formulated in terms of the extremal distribution $P_{\text{type-}1}$ defined in (30). The following proposition summarizes basic properties of $P_{\text{type-}1}$.
Proposition 4.
The extremal distribution of type-1 defined in (30) satisfies the following:
  • the probability masses are nonincreasing in $x \in \mathcal{X}$, i.e.,
$$P_{\text{type-}1}(1) \ge P_{\text{type-}1}(2) \ge P_{\text{type-}1}(3) \ge P_{\text{type-}1}(4) \ge P_{\text{type-}1}(5) \ge \cdots, \tag{36}$$
  • the sum of the first $L$ probability masses is equal to $1 - \varepsilon$, i.e.,
$$\sum_{x=1}^{L} P_{\text{type-}1}(x) = 1 - \varepsilon, \tag{37}$$
    consequently, it holds that
$$P_e^{(L)}(P_{\text{type-}1}) = \varepsilon, \tag{38}$$
  • the first $J - 1$ probability masses are equal to those of $Q$, i.e.,
$$P_{\text{type-}1}(x) = Q(x) \quad (\text{for } 1 \le x \le J - 1), \tag{39}$$
  • the probability masses for $J \le x \le L$ are equal to $V(J)$, i.e.,
$$P_{\text{type-}1}(x) = V(J) \quad (\text{for } J \le x \le L), \tag{40}$$
  • the probability masses for $L + 1 \le x \le K_1$ are equal to $W(K_1)$, i.e.,
$$P_{\text{type-}1}(x) = W(K_1) \quad (\text{for } L + 1 \le x \le K_1), \tag{41}$$
  • the probability masses for $x \ge K_1 + 1$ are equal to those of $Q$, i.e.,
$$P_{\text{type-}1}(x) = Q(x) \quad (\text{for } x \ge K_1 + 1), \tag{42}$$
    and
  • it holds that $P_{\text{type-}1}$ majorizes $Q$.
Proof of Proposition 4.
See Appendix C. □
Although positive tolerated probabilities of error (i.e., $\varepsilon > 0$) are of primary interest in most lossless communication systems, the scenario in which error events of positive probability are not allowed (i.e., $\varepsilon = 0$) is also important for error-free communication systems. The following theorem is an error-free version of Theorem 1.
Theorem 2.
Suppose that $\varepsilon = 0$ and $\mathcal{Y}$ is at least countably infinite. Then, it holds that
$$H_\phi(Q, L, 0, \mathcal{Y}) \le \phi(P_{\text{type-}1}) \tag{43}$$
with equality if $\mathrm{supp}(Q) := \{ x \in \mathcal{X} \mid Q(x) > 0 \}$ is finite or $J = L$. Moreover, if the cardinality of $\mathcal{Y}$ is at least the cardinality of the continuum $\mathbb{R}$, then there exists a $\sigma$-algebra on $\mathcal{Y}$ such that (43) holds with equality.
Proof of Theorem 2.
See Section 6.2. □
Remark 2.
Note that $J = L$ holds under the unique decoding rule (i.e., $L = 1$); that is, we see from Theorem 2 that (43) holds with equality if $L = 1$. The inequality $J < L$ can occur only if a non-unique decoding rule (i.e., $L > 1$) is considered. In Theorem 2, the existence of a $\sigma$-algebra on an uncountably infinite alphabet $\mathcal{Y}$ for which (43) holds with equality is due to Révész's generalization of the Birkhoff–von Neumann decomposition via Kolmogorov's extension theorem; see Section 6.1 and Section 6.2 for technical details.
Consider the case where $\mathcal{Y}$ is a finite alphabet. Define the extremal distribution of type-2 as the following $\mathcal{X}$-marginal,
$$P_{\text{type-}2}(x) = P_{\text{type-}2}^{(Q, L, \varepsilon, \mathcal{Y})}(x) := \begin{cases} Q(x) & \text{if } 1 \le x < J \text{ or } K_2 < x < \infty, \\ V(J) & \text{if } J \le x \le L, \\ W(K_2) & \text{if } L < x \le K_2 \end{cases} \tag{44}$$
for each $x \in \mathcal{X}$, where the three quantities $V(\cdot)$, $W(\cdot)$, and $J$ are defined in (31), (32), and (33), respectively, and $K_2$ is chosen so that
$$K_2 = K_2(Q, L, \varepsilon, \mathcal{Y}) := \max\{ L \le k \le L \cdot |\mathcal{Y}| \mid W(k) < Q(k) \}. \tag{45}$$
Moreover, define the integer $D$ by
$$D = D(Q, L, \varepsilon, \mathcal{Y}) := \min\left\{ \binom{K_2 - J + 1}{L - J + 1}, \, (K_2 - J)^2 + 1 \right\}, \tag{46}$$
where $\binom{a}{b} := \frac{a!}{b! (a - b)!}$ stands for the binomial coefficient for two integers $0 \le b \le a$. A graphical representation of $P_{\text{type-}2}$ is illustrated in Figure 3. When $\mathcal{Y}$ is finite, the Fano-type inequality stated in Theorems 1 and 2 can be tightened as follows:
Theorem 3.
Suppose that $\mathcal{Y}$ is finite. Then, it holds that
$$H_\phi(Q, L, \varepsilon, \mathcal{Y}) \le \phi(P_{\text{type-}2}) \tag{47}$$
with equality if $\varepsilon = P_e^{(L)}(Q)$ or $|\mathcal{Y}| \ge D$.
Proof of Theorem 3.
See Section 6.3. □
Similar to Theorems 1 and 2, the Fano-type inequality stated in (47) of Theorem 3 is formulated in terms of the extremal distribution $P_{\text{type-}2}$ defined in (44). The only difference between $P_{\text{type-}1}$ and $P_{\text{type-}2}$ lies in the difference between $K_1$ and $K_2$ defined in (34) and (45), respectively.
Remark 3.
In contrast to Theorems 1 and 2, Theorem 3 holds in both cases $\varepsilon > 0$ and $\varepsilon = 0$. By Lemma 5 stated in Section 6.1, it can be verified that $P_{\text{type-}2}$ majorizes $P_{\text{type-}1}$, and it follows from Proposition 1 that
$$\phi(P_{\text{type-}2}) \le \phi(P_{\text{type-}1}). \tag{48}$$
Namely, the Fano-type inequalities stated in Theorems 1 and 2 also hold for finite $\mathcal{Y}$. In other words, it holds that
$$H_\phi(Q, L, \varepsilon, \mathcal{Y}) \le \phi(P_{\text{type-}1}) \tag{49}$$
for every nonempty alphabet $\mathcal{Y}$, provided that $(Q, L, \varepsilon, \mathcal{Y})$ satisfies (29). As $|\mathcal{Y}| \ge D$ if $L = 1$ (see (46)), another benefit of Theorem 3 is that the Fano-type inequality is always sharp under a unique decoding rule (i.e., $L = 1$).
So far, it has been assumed that the probability law $P_X$ of the $\mathcal{X}$-valued r.v. $X$ is fixed to a given $\mathcal{X}$-marginal $Q$. When we assume that $X$ is supported on a finite subalphabet of $\mathcal{X}$, we can loosen and simplify our Fano-type inequalities by removing the constraint $P_X = Q$. Let $L$ and $M$ be two integers satisfying $1 \le L < M$, let $\varepsilon$ be a real number satisfying $0 \le \varepsilon \le 1 - L/M$, and let $\mathcal{Y}$ be a nonempty alphabet. Consider the following maximization,
$$H_\phi(M, L, \varepsilon, \mathcal{Y}) := \max_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon} H_\phi(X \mid Y), \tag{50}$$
where the maximization is taken over the pairs $(X, Y)$ of r.v.'s satisfying (i) $X$ is $\{1, \dots, M\}$-valued, (ii) $Y$ is $\mathcal{Y}$-valued, and (iii) $P_e^{(L)}(X \mid Y) \le \varepsilon$.
Theorem 4.
It holds that
$$H_\phi(M, L, \varepsilon, \mathcal{Y}) = \phi(P_{\text{type-}0}), \tag{51}$$
where $P_{\text{type-}0}$ is defined in (12).
Proof of Theorem 4.
See Section 6.4. □
Remark 4.
Although Theorems 1–3 depend on the cardinality of $\mathcal{Y}$, the Fano-type inequality stated in Theorem 4 does not depend on it whenever $\mathcal{Y}$ is nonempty.

4. Special Cases: Fano-Type Inequalities on Shannon’s and Rényi’s Information Measures

In this section, we specialize our Fano-type inequalities stated in Theorems 1–4 from the general conditional information measure $H_\phi(X \mid Y)$ to Shannon's and Rényi's information measures. We then recover several known results, such as those in [1,20,21,22,23], along the way.

4.1. On Shannon’s Information Measures

The conditional Shannon entropy [62] of an $\mathcal{X}$-valued r.v. $X$ given a $\mathcal{Y}$-valued r.v. $Y$ is defined by
$$H(X \mid Y) := \mathbb{E}[H(P_{X|Y})] = \mathbb{E}\left[ \sum_{x \in \mathcal{X}} P_{X|Y}(x) \log \frac{1}{P_{X|Y}(x)} \right], \tag{52}$$
where the (unconditional) Shannon entropy of an $\mathcal{X}$-marginal $P$ is defined by
$$H(P) := \sum_{x \in \mathcal{X}} P(x) \log \frac{1}{P(x)}. \tag{53}$$
Remark 5.
It can be verified by the monotone convergence theorem (cf. Theorem 10.1.7 of [63]) that
$$H(X \mid Y) = \mathbb{E}\left[ \log \frac{1}{P_{X|Y}(X)} \right], \tag{54}$$
provided that the right-hand side of (54) is finite. In some cases, it is convenient to define the conditional Shannon entropy $H(X \mid Y)$ by the right-hand side of (54) (see, e.g., [64]).
The following proposition is a well-known property of Shannon’s information measures.
Proposition 5
(Topsøe [60]). The Shannon entropy $H(\cdot)$ is symmetric, concave, and lower semicontinuous.
Namely, the conditional Shannon entropy $H(X \mid Y)$ is a special case of $H_\phi(X \mid Y)$ with $\phi = H$. Therefore, defining the quantity
$$H(Q, L, \varepsilon, \mathcal{Y}) := H_H(Q, L, \varepsilon, \mathcal{Y}) = \sup_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon, \; P_X = Q} H(X \mid Y), \tag{55}$$
we readily observe the following corollary.
Corollary 1.
Suppose that $\varepsilon > 0$ and the cardinality of $\mathcal{Y}$ is at least countably infinite. Then, it holds that
$$H(Q, L, \varepsilon, \mathcal{Y}) = H(P_{\text{type-}1}) = (L - J + 1) V(J) \log \frac{1}{V(J)} + (K_1 - L) W(K_1) \log \frac{1}{W(K_1)} + \sum_{\substack{x \ge 1 : \\ x < J \text{ or } x > K_1}} Q(x) \log \frac{1}{Q(x)}. \tag{56}$$
Proof of Corollary 1.
Corollary 1 is a direct consequence of Theorem 1 and Proposition 5. □
Remark 6.
Applying Theorem 2 instead of Theorem 1, an error-free version (i.e., ε = 0 ) of Corollary 1 can be considered.
Remark 7.
Note that Corollary 1 coincides with Theorem 1 of [21] if $L = 1$ and $\mathcal{Y} = \mathcal{X}$. Moreover, we observe from (10) and Corollary 1 that
$$I(Q, \varepsilon) = H(Q) - H(Q, 1, \varepsilon, \mathcal{X}) = \sum_{x=1}^{K_1} Q(x) \log \frac{1}{Q(x)} + V(1) \log V(1) + (K_1 - 1) W(K_1) \log W(K_1) \tag{57}$$
for every $\mathcal{X}$-marginal $Q$ and every tolerated probability of error $0 \le \varepsilon \le 1 - Q(1)$, where Erokhin's function $I(Q, \varepsilon)$ is defined in (9). See Section 4.3 for details on generalizations of Erokhin's function. Kostina–Polyanskiy–Verdú showed in Theorem 4 and Remark 3 of [41] that
$$I(Q^n, \varepsilon) = n (1 - \varepsilon) H(Q) - \sqrt{\frac{n V(Q)}{2 \pi}} \, e^{-\Phi^{-1}(\varepsilon)^2 / 2} + O(\log n) \quad (\text{as } n \to \infty), \tag{58}$$
where $V(P)$ is defined by
$$V(P) := \sum_{x \in \mathcal{X}} P(x) \left( \log \frac{1}{P(x)} - H(P) \right)^2 \tag{59}$$
and $\Phi^{-1}(\cdot)$ stands for the inverse of the Gaussian cumulative distribution function
$$\Phi(u) := \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{u} e^{-t^2/2} \, \mathrm{d}t. \tag{60}$$
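The dispersion term in (58) is governed by $V(Q)$ from (59); the following minimal sketch (Python, with an illustrative marginal) computes it as the variance of the self-information.

```python
import math

def entropy(P):
    return sum(p * math.log(1 / p) for p in P if p > 0)

def varentropy(P):
    """V(P) from (59): variance of the self-information log(1/P(x))."""
    H = entropy(P)
    return sum(p * (math.log(1 / p) - H) ** 2 for p in P if p > 0)

Q = [0.5, 0.25, 0.125, 0.125]     # illustrative marginal
print(entropy(Q), varentropy(Q))  # H(Q) ~ 1.2130 nats, V(Q) ~ 0.3303
```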
If $\mathcal{Y}$ is finite, then a tighter version of the Fano-type inequality than Corollary 1 can be obtained as follows:
Corollary 2.
Suppose that $\mathcal{Y}$ is finite. Then, it holds that
$$H(Q, L, \varepsilon, \mathcal{Y}) \le H(P_{\text{type-}2}) = (L - J + 1) V(J) \log \frac{1}{V(J)} + (K_2 - L) W(K_2) \log \frac{1}{W(K_2)} + \sum_{\substack{x \ge 1 : \\ x < J \text{ or } x > K_2}} Q(x) \log \frac{1}{Q(x)}, \tag{61}$$
with equality if $\varepsilon = P_e^{(L)}(Q)$ or $|\mathcal{Y}| \ge D$.
Proof of Corollary 2.
Corollary 2 is a direct consequence of Theorem 3 and Proposition 5. □
Remark 8.
The inequality in (61) holds with equality if $L = 1$ (cf. Remark 3). In fact, when $L = 1$, Corollary 2 coincides with Ho–Verdú's refinement of Erokhin's function $I(Q, \varepsilon)$ for finite $\mathcal{Y}$ (see Theorem 4 of [21]).
Similar to (50) and (55), we can define
$$H(M, L, \varepsilon, \mathcal{Y}) := H_H(M, L, \varepsilon, \mathcal{Y}) = \max_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon} H(X \mid Y), \tag{62}$$
and can give an explicit formula for $H(M, L, \varepsilon, \mathcal{Y})$ as follows.
Corollary 3.
It holds that
$$H(M, L, \varepsilon, \mathcal{Y}) = H(P_{\text{type-}0}) = h_2(\varepsilon) + (1 - \varepsilon) \log L + \varepsilon \log(M - L). \tag{63}$$
Proof of Corollary 3.
Corollary 3 is a direct consequence of Theorem 4 and Proposition 5. □
Remark 9.
Indeed, Corollary 3 states the classical Fano inequality with list decoding; see (11).

4.2. On Rényi’s Information Measures

Although the choices of Shannon’s information measures are unique based on a set of axioms (see, e.g., Theorem 3.6 of [6] and Chapter 3 of [4]), there are several different definitions of conditional Rényi entropies (cf. [65,66,67]). Among them, this study focuses on Arimoto’s and Hayashi’s conditional Rényi entropies [8,9]. Arimoto’s conditional Rényi entropy of X given Y is defined by
H α Arimoto ( X Y ) : = α 1 α log E [ P X | Y α ] = α 1 α log E x X P X | Y ( x ) α 1 / α
for each order α ( 0 , 1 ) ( 1 , ) , where the α -norm of an X -marginal P is defined by
P α : = x X P ( x ) α 1 / α .
Here, note that the (unconditional) Rényi entropy [55] of an X -marginal P can be defined by
H α ( P ) : = α 1 α log P α = 1 1 α log x X P ( x ) α ,
i.e., it is a monotone function of the α -norm. Basic properties of the α -norm can be found in the following proposition.
Proposition 6.
The $\alpha$-norm $\|\cdot\|_\alpha$ is symmetric and lower semicontinuous. Moreover, it is concave (resp. convex) if $0 < \alpha \le 1$ (resp. if $\alpha \ge 1$).
Proof of Proposition 6.
The symmetry is obvious. The lower semicontinuity was proven by Kovačević–Stanojević–Šenk in Theorem 5 of [35]. The concavity (resp. convexity) property can be verified by the reverse (resp. forward) Minkowski inequality. □
Proposition 6 implies that $H_\alpha^{\mathrm{Arimoto}}(X \mid Y)$ is a monotone function of $H_\phi(X \mid Y)$ with $\phi = \|\cdot\|_\alpha$, i.e.,
$$H_\alpha^{\mathrm{Arimoto}}(X \mid Y) = \frac{\alpha}{1 - \alpha} \log H_{\|\cdot\|_\alpha}(X \mid Y). \tag{67}$$
On the other hand, Hayashi's conditional Rényi entropy of $X$ given $Y$ is defined by
$$H_\alpha^{\mathrm{Hayashi}}(X \mid Y) := \frac{1}{1 - \alpha} \log \mathbb{E}\big[ \|P_{X|Y}\|_\alpha^\alpha \big] = \frac{1}{1 - \alpha} \log \mathbb{E}\left[ \sum_{x \in \mathcal{X}} P_{X|Y}(x)^\alpha \right] \tag{68}$$
for each order $\alpha \in (0,1) \cup (1,\infty)$. It is easy to see that $\|\cdot\|_\alpha^\alpha : \mathcal{P}(\mathcal{X}) \to [0,\infty]$ also admits the same properties as those stated in Proposition 6. Therefore, Hayashi's conditional Rényi entropy $H_\alpha^{\mathrm{Hayashi}}(X \mid Y)$ is also a monotone function of $H_\phi(X \mid Y)$ with $\phi = \|\cdot\|_\alpha^\alpha$, i.e.,
$$H_\alpha^{\mathrm{Hayashi}}(X \mid Y) = \frac{1}{1 - \alpha} \log H_{\|\cdot\|_\alpha^\alpha}(X \mid Y). \tag{69}$$
It can be verified by Jensen's inequality (see, e.g., Proposition 1 of [66]) that
$$H_\alpha^{\mathrm{Hayashi}}(X \mid Y) \le H_\alpha^{\mathrm{Arimoto}}(X \mid Y). \tag{70}$$
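A small numerical sketch (Python; the joint distribution is a hypothetical example) of the definitions (64) and (68) for a finite $\mathcal{Y}$, together with a check of the ordering (70), follows.

```python
import numpy as np

def renyi_conditional(joint: np.ndarray, alpha: float, kind: str) -> float:
    """Arimoto's (64) or Hayashi's (68) conditional Renyi entropy, alpha != 1."""
    p_y = joint.sum(axis=0)
    cond = joint[:, p_y > 0] / p_y[p_y > 0]        # columns are P_{X|Y=y}
    norms_a = (cond ** alpha).sum(axis=0)          # ||P_{X|Y=y}||_alpha ** alpha
    if kind == "Arimoto":
        return alpha / (1 - alpha) * np.log(p_y[p_y > 0] @ norms_a ** (1 / alpha))
    if kind == "Hayashi":
        return 1 / (1 - alpha) * np.log(p_y[p_y > 0] @ norms_a)
    raise ValueError(kind)

# Hypothetical joint distribution on a 3 x 2 alphabet.
P_XY = np.array([[0.30, 0.10],
                 [0.15, 0.25],
                 [0.05, 0.15]])
h_ari = renyi_conditional(P_XY, 2.0, "Arimoto")
h_hay = renyi_conditional(P_XY, 2.0, "Hayashi")
assert h_hay <= h_ari + 1e-12                      # inequality (70)
```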
Similar to (7), we now define
$$H_\alpha^{\star}(Q, L, \varepsilon, \mathcal{Y}) := \sup_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon, \; P_X = Q} H_\alpha^{\star}(X \mid Y) \tag{71}$$
for each $\star \in \{\mathrm{Arimoto}, \mathrm{Hayashi}\}$ and each $\alpha \in (0,1) \cup (1,\infty)$. Then, we can establish the Fano-type inequality for Rényi's information measures as follows.
Corollary 4.
Suppose that $\varepsilon > 0$ and the cardinality of $\mathcal{Y}$ is at least countably infinite. For every $\star \in \{\mathrm{Arimoto}, \mathrm{Hayashi}\}$ and $\alpha \in (0,1) \cup (1,\infty)$, it holds that
$$H_\alpha^{\star}(Q, L, \varepsilon, \mathcal{Y}) = H_\alpha(P_{\text{type-}1}) = \frac{1}{1 - \alpha} \log \Bigg( (L - J + 1) V(J)^\alpha + (K_1 - L) W(K_1)^\alpha + \sum_{\substack{x \ge 1 : \\ x < J \text{ or } x > K_1}} Q(x)^\alpha \Bigg). \tag{72}$$
Proof of Corollary 4.
Let $\star = \mathrm{Arimoto}$. It follows from Theorem 1 and Proposition 6 that
$$0 < \alpha \le 1 \implies \sup_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon, \; P_X = Q} \mathbb{E}\big[ \|P_{X|Y}\|_\alpha \big] = \|P_{\text{type-}1}\|_\alpha, \tag{73}$$
$$\alpha \ge 1 \implies \inf_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon, \; P_X = Q} \mathbb{E}\big[ \|P_{X|Y}\|_\alpha \big] = \|P_{\text{type-}1}\|_\alpha. \tag{74}$$
As the mapping $u \mapsto (\alpha/(1-\alpha)) \log u$ is strictly increasing (resp. strictly decreasing) if $0 < \alpha < 1$ (resp. if $\alpha > 1$), it follows from (66), (67), (73), and (74) that
$$\sup_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon, \; P_X = Q} H_\alpha^{\mathrm{Arimoto}}(X \mid Y) = H_\alpha(P_{\text{type-}1}). \tag{75}$$
The proof for the case $\star = \mathrm{Hayashi}$ is the same as above, proving Corollary 4. □
Remark 10.
Applying Theorem 2 instead of Theorem 1, an error-free version (i.e., ε = 0 ) of Corollary 4 can be considered.
Remark 11.
Although Hayashi’s conditional Rényi entropy is smaller than Arimoto’s one in general (see (70)), Corollary 4 implies that the maximization problem H α ( Q , L , ε , Y ) results in the same Rényi entropy H α ( P type-1 ) for each { Arimoto , Hayashi } .
When Y is finite, a tighter Fano-type inequality than Corollary 4 can be obtained as follows.
Corollary 5.
Suppose that $\mathcal{Y}$ is finite. For any $\star \in \{\mathrm{Arimoto}, \mathrm{Hayashi}\}$ and $\alpha \in (0,1) \cup (1,\infty)$, it holds that
$$H_\alpha^{\star}(Q, L, \varepsilon, \mathcal{Y}) \le H_\alpha(P_{\text{type-}2}) = \frac{1}{1 - \alpha} \log \Bigg( (L - J + 1) V(J)^\alpha + (K_2 - L) W(K_2)^\alpha + \sum_{\substack{x \ge 1 : \\ x < J \text{ or } x > K_2}} Q(x)^\alpha \Bigg), \tag{76}$$
with equality if $\varepsilon = P_e^{(L)}(Q)$ or $|\mathcal{Y}| \ge D$.
Proof of Corollary 5.
The proof is the same as the proof of Corollary 4 by replacing Theorem 1 by Theorem 3. □
Similar to (50) and (71), define
$$H_\alpha^{\star}(M, L, \varepsilon, \mathcal{Y}) := \max_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon} H_\alpha^{\star}(X \mid Y) \tag{77}$$
for each $\star \in \{\mathrm{Arimoto}, \mathrm{Hayashi}\}$ and each $\alpha \in (0,1) \cup (1,\infty)$.
Corollary 6.
For every $\star \in \{\mathrm{Arimoto}, \mathrm{Hayashi}\}$ and $\alpha \in (0,1) \cup (1,\infty)$, it holds that
$$H_\alpha^{\star}(M, L, \varepsilon, \mathcal{Y}) = H_\alpha(P_{\text{type-}0}) = \frac{1}{1 - \alpha} \log \Big( L^{1-\alpha} (1 - \varepsilon)^\alpha + (M - L)^{1-\alpha} \varepsilon^\alpha \Big). \tag{78}$$
Proof of Corollary 6.
The proof is the same as the proof of Corollary 4 by replacing Theorem 1 by Theorem 4. □
Remark 12.
When $\star = \mathrm{Arimoto}$, Corollary 6 coincides with Sason–Verdú's generalization (cf. Theorem 8 of [23]) of Fano's inequality for Rényi's information measures with list decoding (see (15)).
Remark 13.
It follows by l'Hôpital's rule that
$$\lim_{\alpha \to 1} H_\alpha(P_{\text{type-}0}) = H(P_{\text{type-}0}), \tag{79}$$
$$\lim_{\alpha \to 1} H_\alpha(P_{\text{type-}1}) = H(P_{\text{type-}1}), \tag{80}$$
$$\lim_{\alpha \to 1} H_\alpha(P_{\text{type-}2}) = H(P_{\text{type-}2}). \tag{81}$$
Therefore, our Fano-type inequalities stated in Corollaries 1–6 are consistent with the continuity of Shannon's and Rényi's information measures with respect to the order $0 < \alpha < \infty$.

4.3. Generalization of Erokhin’s Function to α-Mutual Information

Erokhin’s function I ( Q , ε ) defined in (9) can be generalized to the α -mutual information (cf. [68]) as follows: Let X be an X -valued r.v. and Y a Y -valued r.v. Sibson’s α -mutual information [24] (see also Equation (32) of [68], Equation (13) of [69], and Definition 7 of [70]) is defined by
I α Sibson ( X Y ) : = inf Q Y D α ( P X , Y P X × Q Y )
for each 0 < α < , where P X , Y (resp. P X ) denotes the probability measure on X × Y (resp. X ) induced by the pair ( X , Y ) of r.v.’s (resp. the r.v. X), the infimum is taken over the probability measures Q Y on Y , and the Rényi divergence [55] between two probability measures μ and ν on A is defined by
D α ( μ ν ) : = 1 α 1 log A d μ d ν α d ν if μ ν and α 1 , A log d μ d ν d μ if μ ν and α = 1 , otherwise
for each 0 < α < . Note that Sibson’s α -mutual information coincides with the ordinary mutual information when α = 1 , i.e., it holds that I ( X Y ) = I 1 ( X Y ) . Similar to (7) and (9), given a system ( Q , L , ε , Y ) satisfying (29), define
I α Sibson ( Q , L , ε , Y ) : = inf ( X , Y ) : P e ( L ) ( X Y ) ε , P X = Q I α Sibson ( X Y ) ,
where the infimum is taken over the pairs of r.v.’s X and Y in which (i) X is X -valued, (ii) Y is Y -valued, (iii) P e ( L ) ( X Y ) ε , and (iv) P X = Q . By convention, we denote by
I ( Q , L , ε , Y ) : = I 1 Sibson ( Q , L , ε , Y ) .
It is clear that this definition can be specialized to Erokhin’s function I ( Q , ε ) defined in (9); in other words, it holds that
I ( Q , 1 , ε , X ) = I ( Q , ε ) ;
see Remark 7.
Corollary 7
(When $\alpha = 1$). Suppose that $\varepsilon > 0$ and the cardinality of $\mathcal{Y}$ is at least countably infinite. Then, it holds that
$$I(Q, L, \varepsilon, \mathcal{Y}) = H(Q) - H(Q, L, \varepsilon, \mathcal{Y}) = \sum_{x=J}^{K_1} Q(x) \log \frac{1}{Q(x)} + (L - J + 1) V(J) \log V(J) + (K_1 - L) W(K_1) \log W(K_1). \tag{87}$$
Proof of Corollary 7.
The first equality in (87) is trivial from the well-known identity $I(X \wedge Y) = H(X) - H(X \mid Y)$. The second equality in (87) follows from Corollary 1, completing the proof. □
Corollary 8
(Sibson, when $\alpha \neq 1$). Suppose that $\varepsilon > 0$ and $\mathcal{Y}$ is countably infinite. For every $\alpha \in (0,1) \cup (1,\infty)$, it holds that
$$I_\alpha^{\mathrm{Sibson}}(Q, L, \varepsilon, \mathcal{Y}) = H_\alpha(Q^{(1/\alpha)}) - H_\alpha^{\mathrm{Arimoto}}(Q^{(1/\alpha)}, L, \varepsilon, \mathcal{Y}) = \frac{1}{\alpha - 1} \log \Bigg[ 1 - \sum_{x=J^{(1/\alpha)}}^{K_1^{(1/\alpha)}} Q(x) + \Big( (L - J^{(1/\alpha)} + 1) V^{(1/\alpha)}(J^{(1/\alpha)})^\alpha + (K_1^{(1/\alpha)} - L) W^{(1/\alpha)}(K_1^{(1/\alpha)})^\alpha \Big) \Big( \sum_{x \in \mathcal{X}} Q(x)^{1/\alpha} \Big)^{\alpha} \Bigg], \tag{88}$$
where $Q^{(s)}$ stands for the $s$-tilted distribution of $Q$ with real parameter $0 < s < \infty$, i.e.,
$$Q^{(s)}(x) := \frac{Q(x)^s}{\sum_{x' \in \mathcal{X}} Q(x')^s} \tag{89}$$
for each $x \in \mathcal{X}$, and $V^{(s)}(\cdot)$, $W^{(s)}(\cdot)$, $J^{(s)}$, and $K_1^{(s)}$ are defined as in (31), (32), (33), and (34), respectively, with the $\mathcal{X}$-marginal $Q$ replaced by the $s$-tilted distribution $Q^{(s)}$.
Proof of Corollary 8.
As Sibson’s identity [24] (see also [69], Equation (12)) states that
D α ( P X , Y P X × Q Y ) = D α ( P X , Y P X × Q α ) + D α ( Q α Q Y ) ,
where Q α stands for the probability distribution on Y given as
Q α ( y ) = x X P X , Y ( x , y ) α P X ( x ) 1 α 1 / α y Y x X P X , Y ( x , y ) α P X ( x ) 1 α 1 / α 1
for each y Y , we observe that
I α Sibson ( X Y ) = α α 1 log y Y x X P X , Y ( x , y ) α P X ( x ) 1 α 1 / α
for every α ( 0 , 1 ) ( 1 , ) , provided that Y is countable. On the other hand, it follows from ([8] Equation (13)) that
I α Arimoto ( X Y ) = α α 1 log y Y x X P X , Y ( x , y ) α x X P X ( x ) α 1 / α
for every α ( 0 , 1 ) ( 1 , ) , provided that Y is countable. Combining (92) and (93), we have the first equality in (88). Finally, the second equality in (88) follows from Corollary 4 after some algebra. This completes the proof of Corollary 8. □
In contrast to (82), Arimoto defined the $\alpha$-mutual information (Equation (15) of [8]) by
$$I_\alpha^{\mathrm{Arimoto}}(X \wedge Y) := H_\alpha(X) - H_\alpha^{\mathrm{Arimoto}}(X \mid Y) \tag{94}$$
for every $\alpha \in (0,1) \cup (1,\infty)$. Similar to (84), one can define
$$I_\alpha^{\mathrm{Arimoto}}(Q, L, \varepsilon, \mathcal{Y}) := \inf_{(X,Y) : P_e^{(L)}(X \mid Y) \le \varepsilon, \; P_X = Q} I_\alpha^{\mathrm{Arimoto}}(X \wedge Y), \tag{95}$$
and a counterpart of Corollary 8 can be stated as follows.
Corollary 9
(Arimoto, when $\alpha \neq 1$). Suppose that $\varepsilon > 0$ and the cardinality of $\mathcal{Y}$ is at least countably infinite. For every $\alpha \in (0,1) \cup (1,\infty)$, it holds that
$$I_\alpha^{\mathrm{Arimoto}}(Q, L, \varepsilon, \mathcal{Y}) = H_\alpha(Q) - H_\alpha^{\mathrm{Arimoto}}(Q, L, \varepsilon, \mathcal{Y}) = \frac{1}{\alpha - 1} \log \Bigg[ 1 - \sum_{x=J}^{K_1} Q^{(\alpha)}(x) + \frac{(L - J + 1) V(J)^\alpha + (K_1 - L) W(K_1)^\alpha}{\sum_{x \in \mathcal{X}} Q(x)^\alpha} \Bigg]. \tag{96}$$
Proof of Corollary 9.
The first equality in (96) is obvious from the definition. The second equality in (96) follows from Corollary 4 after some algebra, completing the proof. □
When $\mathcal{Y}$ is finite, the inequalities stated in Corollaries 7–9 can be tightened by Theorem 3, as in Corollaries 2 and 5. We omit explicit statements of these tightened inequalities in this paper.
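As a small illustration of the tilting operation (89) used in Corollaries 8 and 9, the sketch below (Python, with an illustrative marginal) computes $Q^{(s)}$ and numerically confirms the identity $H_\alpha(Q^{(1/\alpha)}) = \frac{\alpha}{\alpha-1} \log \sum_{x} Q(x)^{1/\alpha}$, a piece of the algebra behind (88).

```python
import numpy as np

def tilt(Q: np.ndarray, s: float) -> np.ndarray:
    """s-tilted distribution Q^(s) from (89)."""
    w = Q ** s
    return w / w.sum()

def renyi(P: np.ndarray, a: float) -> float:
    return float(np.log((P ** a).sum()) / (1 - a))

Q = np.array([0.4, 0.3, 0.2, 0.1])   # illustrative marginal
a = 0.5
lhs = renyi(tilt(Q, 1 / a), a)
rhs = a / (a - 1) * np.log((Q ** (1 / a)).sum())
assert abs(lhs - rhs) < 1e-12
```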

5. Asymptotic Behaviors on Equivocations

In information theory, the equivocation, or the remaining uncertainty of an r.v. $X$ relative to a correlated r.v. $Y$, plays an important role in establishing fundamental limits on the optimal transmission ratio and/or rate in several communication models. Shannon's equivocation $H(X \mid Y)$ is a well-known measure used to formulate the notion of perfect secrecy of symmetric-key encryption in information-theoretic cryptography [71]. Iwamoto–Shikata [66] extended such a secrecy criterion by generalizing Shannon's equivocation to Rényi's equivocation, establishing various desired properties of the latter. Recently, Hayashi–Tan [72] and Tan–Hayashi [73] studied the asymptotics of Shannon's and Rényi's equivocations when the side-information about the source is given via various classes of random hash functions with a fixed rate.
In this section, we assume that certain error probabilities vanish, and we then establish asymptotic behaviors of Shannon's, and sometimes Rényi's, equivocations via the Fano-type inequalities stated in Section 4.

5.1. Fano’s Inequality Meets the AEP

We consider a general form of the asymptotic equipartition property (AEP) as follows.
Definition 7
([25]). We say that a sequence of $\mathcal{X}$-valued r.v.'s $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$ satisfies the AEP if
$$\lim_{n \to \infty} \mathbb{P}\left\{ \log \frac{1}{P_{X_n}(X_n)} \le (1 - \delta) H(X_n) \right\} = 0 \tag{97}$$
for every fixed $\delta > 0$.
In the literature, the r.v. $X_n$ is commonly represented as a random vector $X_n = (Z_1^{(n)}, \dots, Z_n^{(n)})$. The formulation without reference to random vectors means that $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$ is a general source in the sense of Page 100 of [33].
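Definition 7 can be probed by simulation. The sketch below (Python, with a hypothetical i.i.d. source so that $X_n = (Z_1, \dots, Z_n)$ and $H(X_n) = n H(Z_1)$) estimates the probability in (97) for growing $n$.

```python
import numpy as np

rng = np.random.default_rng(0)
pmf = np.array([0.5, 0.25, 0.125, 0.125])   # hypothetical per-letter law
H1 = float(-(pmf * np.log(pmf)).sum())      # H(Z_1), so H(X_n) = n * H1
delta, trials = 0.1, 10_000

for n in [10, 100, 1000]:
    z = rng.choice(len(pmf), size=(trials, n), p=pmf)
    self_info = -np.log(pmf[z]).sum(axis=1)  # log(1 / P_{X_n}(X_n))
    prob = float((self_info <= (1 - delta) * n * H1).mean())
    print(n, prob)                           # decays toward 0 as n grows
```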
Let $\{L_n\}_{n=1}^{\infty}$ be a sequence of positive integers, $\{\mathcal{Y}_n\}_{n=1}^{\infty}$ a sequence of nonempty alphabets, and $\{(X_n, Y_n)\}_{n=1}^{\infty}$ a sequence of pairs of r.v.'s, where $X_n$ (resp. $Y_n$) is $\mathcal{X}$-valued (resp. $\mathcal{Y}_n$-valued) for each $n \ge 1$. As
$$\lim_{n \to \infty} \mathbb{P}\{X_n \notin f_n(Y_n)\} = 0 \implies \lim_{n \to \infty} P_e^{(L_n)}(X_n \mid Y_n) = 0 \tag{98}$$
for any sequence of list decoders $\{f_n : \mathcal{Y}_n \to \binom{\mathcal{X}}{L_n}\}_{n=1}^{\infty}$, it suffices to assume that $P_e^{(L_n)}(X_n \mid Y_n) = o(1)$ as $n \to \infty$ in our analysis. The following theorem is a novel characterization of the AEP via Fano's inequality.
Theorem 5.
Suppose that a general source $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$ satisfies the AEP and $H(X_n) = \Omega(1)$ as $n \to \infty$. Then, it holds that
$$\lim_{n \to \infty} P_e^{(L_n)}(X_n \mid Y_n) = 0 \implies |H(X_n \mid Y_n) - \log L_n|^{+} = o(H(X_n)), \tag{99}$$
where $|u|^{+} := \max\{0, u\}$ for $u \in \mathbb{R}$. Consequently, it holds that
$$\lim_{n \to \infty} P_e^{(L_n)}(X_n \mid Y_n) = \lim_{n \to \infty} \frac{\log L_n}{H(X_n)} = 0 \implies \lim_{n \to \infty} \frac{H(X_n \mid Y_n)}{H(X_n)} = 0. \tag{100}$$
Proof of Theorem 5.
See Section 7.1. □
The following three examples are particularizations of Theorem 5.
Example 1.
Let $\{Z_n\}_{n=1}^{\infty}$ be an i.i.d. source on a countably infinite alphabet $\mathcal{X}$ with finite Shannon entropy $H(Z_1) < \infty$. Suppose that $X_n = (Z_1, \dots, Z_n)$ and $\mathcal{Y}_n = \mathcal{X}^n$ for each $n \ge 1$. Then, Theorem 5 states that
$$\lim_{n \to \infty} \mathbb{P}\{X_n \neq Y_n\} = 0 \implies \lim_{n \to \infty} \frac{1}{n} H(X_n \mid Y_n) = 0. \tag{101}$$
This result is commonly referred to as the weak converse property of the source $\{Z_n\}_{n=1}^{\infty}$ in the unique decoding setting.
Example 2.
Let $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$ be a source as described in Example 1. Even in the list decoding setting, Theorem 5 states that
$$\lim_{n \to \infty} P_e^{(L_n)}(X_n \mid Y_n) = \lim_{n \to \infty} \frac{1}{n} \log L_n = 0 \implies \lim_{n \to \infty} \frac{1}{n} H(X_n \mid Y_n) = 0, \tag{102}$$
similarly to Example 1. This is a key observation in Ahlswede–Gács–Körner’s proof of the strong converse property of degraded broadcast channels; see Chapter 5 of [42] (see also Section 3.6.2 of [43] and Lemma 1 of [48]).
Example 3.
Consider the Poisson source $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$ with growing mean $\lambda_n = \omega(1)$ as $n \to \infty$, i.e.,
$$P_{X_n}(k) = \frac{\lambda_n^{k-1} e^{-\lambda_n}}{(k-1)!} \quad \text{for } k \in \mathcal{X} = \{1, 2, \dots\}. \tag{103}$$
It is known that
$$\lim_{n \to \infty} \frac{H(X_n)}{(1/2) \log \lambda_n} = 1, \tag{104}$$
and the Poisson source $\mathbf{X}$ satisfies the AEP (see [25]). Therefore, it follows from Theorem 5 that
$$\lim_{n \to \infty} P_e^{(L_n)}(X_n \mid Y_n) = 0 \implies |H(X_n \mid Y_n) - \log L_n|^{+} = o(\log \lambda_n). \tag{105}$$
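The entropy growth (104) can be checked numerically; a minimal sketch (Python, truncating the Poisson pmf at a large cutoff; the index shift in (103) does not affect the entropy) follows.

```python
import math

def poisson_entropy(lam: float, cutoff: int = 10_000) -> float:
    """Shannon entropy (nats) of Poisson(lam), with the pmf truncated at 'cutoff'."""
    h, logp = 0.0, -lam                           # log P(0) = -lam
    for k in range(cutoff):
        p = math.exp(logp)
        if p > 0:
            h -= p * logp
        logp += math.log(lam) - math.log(k + 1)   # recursion to log P(k + 1)
    return h

for lam in [10.0, 100.0, 1000.0]:
    print(lam, poisson_entropy(lam) / (0.5 * math.log(lam)))
    # the ratio tends to 1 as lam grows; the gap decays like log(2*pi*e)/log(lam)
```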
The following example shows a general source that satisfies neither the AEP nor (99).
Example 4.
Let $L \ge 1$ be an integer, $\gamma > 0$ a positive real, and $\{\delta_n\}_{n=1}^{\infty}$ a sequence of reals satisfying $\delta_n = o(1)$ and $0 < \delta_n < 1$ for each $n \ge 1$. As $p \mapsto h_2(p)/p$ is continuous on $(0, 1]$ and $h_2(p)/p \to \infty$ as $p \to 0^{+}$, one can find a sequence of reals $\{p_n\}_{n=1}^{\infty}$ satisfying $0 < p_n \le \min\{1, (1 - \delta_n)/(\delta_n L)\}$ for each $n \ge 1$ and
$$\delta_n \, \frac{h_2(p_n)}{p_n} = \gamma \quad \text{for sufficiently large } n. \tag{106}$$
Consider a general source $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$ whose component distributions are given by
$$P_{X_n}(x) = \begin{cases} \dfrac{1 - \delta_n}{L} & \text{if } 1 \le x \le L, \\[1ex] \delta_n \, p_n (1 - p_n)^{x - (L+1)} & \text{if } x \ge L + 1 \end{cases} \tag{107}$$
for each $n \ge 1$. Suppose that $X_n$ and $Y_n$ are independent for each $n \ge 1$. After some algebra, we have
$$P_e^{(L)}(X_n \mid Y_n) = P_e^{(L)}(P_{X_n}) = \delta_n, \tag{108}$$
$$H(X_n \mid Y_n) = H(X_n) = h_2(\delta_n) + (1 - \delta_n) \log L + \delta_n \, \frac{h_2(p_n)}{p_n} \tag{109}$$
for each $n \ge 1$. Therefore, we observe that
$$\lim_{n \to \infty} P_e^{(L)}(X_n \mid Y_n) = 0 \tag{110}$$
holds, but
$$\lim_{n \to \infty} \frac{|H(X_n \mid Y_n) - \log L|^{+}}{H(X_n)} = 0 \tag{111}$$
does not hold. In fact, it holds that $H(X_n) \to \gamma + \log L$ as $n \to \infty$ and
$$\lim_{n \to \infty} P_{X_n}(x) = \begin{cases} \dfrac{1}{L} & \text{if } 1 \le x \le L, \\[1ex] 0 & \text{if } x > L. \end{cases} \tag{112}$$
Consequently, we also see that $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$ does not satisfy the AEP.
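The claims of Example 4 can be verified numerically. The sketch below (Python, with illustrative values $L = 2$ and $\gamma = 1$) solves (106) for $p_n$ by bisection, which is justified because $p \mapsto h_2(p)/p$ is strictly decreasing on $(0, 1]$, and then evaluates (109).

```python
import math

def h2(p):  # binary entropy in nats
    return 0.0 if p in (0.0, 1.0) else -p * math.log(p) - (1 - p) * math.log(1 - p)

L, gamma = 2, 1.0
for delta in [0.1, 0.05, 0.02]:                 # delta_n -> 0
    lo, hi = 1e-300, min(1.0, (1 - delta) / (delta * L))
    for _ in range(200):                        # bisection for delta * h2(p)/p = gamma
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if delta * h2(mid) / mid > gamma else (lo, mid)
    p = (lo + hi) / 2
    H = h2(delta) + (1 - delta) * math.log(L) + delta * h2(p) / p   # (109)
    print(delta, H)   # H(X_n) -> gamma + log L ~ 1.693, so (111) fails
```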
Example 4 implies that the AEP has an important role in Theorem 5.

5.2. Vanishing Unnormalized Rényi’s Equivocations

Let $X$ be an $\mathcal{X}$-valued r.v. satisfying $H(X) < \infty$, $\{L_n\}_{n=1}^{\infty}$ a sequence of positive integers, $\{\mathcal{Y}_n\}_{n=1}^{\infty}$ a sequence of nonempty alphabets, and $\{(X_n, Y_n)\}_{n=1}^{\infty}$ a sequence of $\mathcal{X} \times \mathcal{Y}_n$-valued r.v.'s. The following theorem provides four conditions on a general source $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$ such that vanishing error probabilities imply vanishing unnormalized Shannon and Rényi equivocations.
Theorem 6.
Let $\alpha \ge 1$ be an order. Suppose that any one of the following four conditions holds:
(a) 
the order $\alpha$ is strictly larger than 1, i.e., $\alpha > 1$,
(b) 
the sequence $\{X_n\}_{n=1}^{\infty}$ satisfies the AEP and $H(X_n) = O(1)$ as $n \to \infty$,
(c) 
there exists an $n_0 \ge 1$ such that $P_{X_n}$ majorizes $P_X$ for every $n \ge n_0$,
(d) 
the sequence $\{X_n\}_{n=1}^{\infty}$ converges in distribution to $X$ and $H(X_n) \to H(X)$ as $n \to \infty$.
Then, it holds for each $\star \in \{\mathrm{Arimoto}, \mathrm{Hayashi}\}$ that
$$\lim_{n \to \infty} P_e^{(L_n)}(X_n \mid Y_n) = 0 \implies \lim_{n \to \infty} |H_\alpha^{\star}(X_n \mid Y_n) - \log L_n|^{+} = 0. \tag{113}$$
Proof of Theorem 6.
See Section 7.2. □
In contrast to Condition (b) of Theorem 6, Conditions (a), (c), and (d) of Theorem 6 do not require the AEP to hold. Interestingly, Condition (a) of Theorem 6 states that (113) holds for every $\alpha > 1$ and $\star \in \{\mathrm{Arimoto}, \mathrm{Hayashi}\}$ without any other conditions on the general source $\mathbf{X} = \{X_n\}_{n=1}^{\infty}$.
Remark 14.
If L n = 1 for each n 1 , then Conditions (c) and (d) of Theorem 6 coincide with Ho–Verdú’s result stated in Theorem 18 of [21]. Moreover, if L n = 1 for each n 1 , and if X n is { 1 , , M n } -valued for each n 1 , then Condition (a) of Theorem 6 coincides with Sason–Verdú’s result stated in Assertion (a) of Theorem 4 of [23].

5.3. Under the Symbol-Wise Error Criterion

Let L = { L n } n = 1 be a sequence of positive integers, { Y n } n = 1 a sequence of nonempty alphabets, and { ( X n , Y n ) } n = 1 a sequence of X × Y n -valued r.v.’s satisfying H ( X n ) < for every n 1 . In this subsection, we focus on the minimum arithmetic-mean probability of symbol-wise list decoding error defined as
P e , sym . ( L ) ( X n Y n ) : = 1 n i = 1 n P e ( L i ) ( X i Y i ) ,
where X n = ( X 1 , X 2 , , X n ) and Y n = ( Y 1 , Y 2 , , Y n ) . Now, let X be an X -valued r.v. satisfying H ( X ) < . Under this symbol-wise error criterion, the following theorem holds.
Theorem 7.
Suppose that P X n majorizes P X for sufficiently large n. Then, it holds that
lim n P e , sym . ( L ) ( X n Y n ) = 0 ⟹ lim sup n 1 n H ( X n Y n ) ≤ lim sup n log L n .
Proof of Theorem 7.
See Section 7.3. □
It is known that the classical Fano inequality stated in (1) can be extended from the average error criterion P { X n Y n } to the symbol-wise error criterion ( 1 / n ) E [ d H ( X n , Y n ) ] (see Corollary 3.8 of [6]), where
d H ( x n , y n ) : = | { 1 ≤ i ≤ n ∣ x i ≠ y i } |
stands for the Hamming distance between two strings x n = ( x 1 , , x n ) and y n = ( y 1 , , y n ) . In fact, Theorem 7 states that
lim n 1 n E [ d H ( X n , Y n ) ] = 0 ⟹ lim n 1 n H ( X n Y n ) = 0 ,
provided that P X n majorizes P X for sufficiently large n.
However, in the list decoding setting, we observe that P e , sym . ( L ) ( X n Y n ) = o ( 1 ) does not imply H ( X n Y n ) = o ( n ) in general. A counterexample can be readily constructed.
Example 5.
Let { X n } n = 1 be uniformly distributed Bernoulli r.v.'s, and { Y n } n = 1 arbitrary r.v.'s. Suppose that ( X n , Y n ) ⊥ ( X m , Y m ) if n ≠ m , X n ⊥ Y n for each n ≥ 1 , and L n = 2 for each n ≥ 1 . Then, we observe that
P e , sym . ( L ) ( X n Y n ) = 0
for every n 1 , but
H ( X n Y n ) = n log 2
for every n 1 .

6. Proofs of Fano-Type Inequalities

In this section, we prove Theorems 1–4 via majorization theory [10].

6.1. Proof of Theorem 1

We shall relax the feasible region of the supremum in (7) via some preliminary lemmas. Define a notion of symmetry for the conditional distribution P X | Y as follows.
Definition 8.
A jointly distributed pair ( X , Y ) is said to be connected uniform-dispersively if the decreasing rearrangement P X | Y ↓ of P X | Y is almost surely constant.
Remark 15.
The term introduced in Definition 8 is inspired by the uniformly dispersive channels named by Massey (see Page 77 of [12]). In fact, if Y is countable and X (resp. Y) denotes the output (resp. input) of a channel P X | Y , then the channel P X | Y can be thought of as a uniformly dispersive channel, provided that ( X , Y ) is connected uniform-dispersively. Fano originally called such channels uniform from the input; see Page 127 of [11]. Refer to Section II-A of [13] for several symmetry notions of channels.
Although an almost surely constant P X | Y implies the independence X ⊥ Y , note that an almost surely constant decreasing rearrangement P X | Y ↓ does not imply independence. We now give the following lemma.
Lemma 1.
If a jointly distributed pair ( X , Y ) is connected uniform-dispersively, then P X | Y majorizes P X a.s.
Proof of Lemma 1.
Let k be a positive integer. Choose a collection { x i } i = 1 k of k distinct elements in X so that
P X ( x i ) = P X ↓ ( i )
for every 1 i k . As
∑ i = 1 k P X | Y ( x i ) ≤ ∑ x = 1 k P X | Y ↓ ( x ) ( a . s . )
and
P X ( x ) = E [ P X | Y ( x ) ]
for each x X , we observe that
∑ x = 1 k P X ↓ ( x ) = E [ ∑ i = 1 k P X | Y ( x i ) ] ≤ E [ ∑ x = 1 k P X | Y ↓ ( x ) ] .
If ( X , Y ) is connected uniform-dispersively (see Definition 8), then (123) implies that
∑ x = 1 k P X ↓ ( x ) ≤ ∑ x = 1 k P X | Y ↓ ( x ) ( a . s . ) ,
which is indeed the majorization relation stated in Definition 1, completing the proof of Lemma 1. □
Remark 16.
Lemma 1 can be thought of as a novel characterization of uniformly dispersive channels via the majorization relation; see Remark 15. More precisely, given an input distribution P on X and a uniformly dispersive channel W : X → Y with countable output alphabet Y , it holds that W ( · ∣ x ) majorizes the output distribution P W for every x ∈ X , where P W is given by
P W ( y ) : = x X P ( x ) W ( y x )
for each y Y .
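A minimal sketch (Python/NumPy; the base row and input distribution are hypothetical) of this characterization: every row of a uniformly dispersive channel — here, a channel whose rows are permutations of one fixed vector — majorizes the induced output distribution.

```python
import numpy as np

def majorizes(p, q, tol=1e-12):
    """True if p majorizes q, i.e., the prefix sums of the decreasing
    rearrangement of p dominate those of q (Definition 1)."""
    cp = np.cumsum(np.sort(p)[::-1])
    cq = np.cumsum(np.sort(q)[::-1])
    return bool(np.all(cp >= cq - tol))

rng = np.random.default_rng(0)

base = np.array([0.5, 0.3, 0.15, 0.05])                  # a fixed transition row
W = np.array([rng.permutation(base) for _ in range(6)])  # uniformly dispersive: 6 inputs, 4 outputs
P = rng.dirichlet(np.ones(6))                            # arbitrary input distribution
PW = P @ W                                               # output distribution P W

print(all(majorizes(row, PW) for row in W))              # True, as Remark 16 asserts
```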
Definition 9.
Let A be a collection of jointly distributed pairs of an X -valued r.v. and a Y -valued r.v. We say that A has balanced conditional distributions if ( X , Y ) A implies that there exists ( U , V ) A satisfying
P U | V ( x ) = E P X | Y ( x ) ( a . s . )
for every x X .
For such a collection A , the following lemma holds.
Lemma 2.
Suppose that A has balanced conditional distributions. For any ( X , Y ) A , there exists a pair ( U , V ) A connected uniform-dispersively such that
H ϕ ( U V ) H ϕ ( X Y ) .
Proof of Lemma 2.
For any ( X , Y ) A , it holds that
H ϕ ( X Y ) = ( a ) E ϕ P X | Y ( b ) ϕ E P X | Y = ( c ) ϕ P U | V ( a . s . ) = ( d ) E ϕ ( P U | V ) ( a . s . ) = H ϕ ( U V ) ,
where
  • (a) follows by the symmetry of ϕ ,
  • (b) follows by Jensen’s inequality (see [14], Proposition A-2),
  • (c) follows by the existence of a pair ( U , V ) A connected uniform-dispersively (see (126)), and
  • (d) follows by the symmetry of ϕ again.
This completes the proof of Lemma 2. □
For a system ( Q , L , ε , Y ) satisfying (29), we now define a collection of pairs of r.v.’s as follows,
R ( Q , L , ε , Y ) : = ( X , Y ) | X is X valued , Y is Y valued , P e ( L ) ( X Y ) ε , P X = Q .
Note that this is the feasible region of the supremum in (7). The main idea of proving Theorem 1 is to apply Lemma 2 to this collection. The collection R ( Q , L , ε , Y ) does not, however, have balanced conditional distributions in general. More specifically, there exists a measurable space Y such that R ( Q , L , ε , Y ) does not have balanced conditional distributions even if Y is standard Borel. Fortunately, the following lemma avoids this issue by blowing up the collection R ( Q , L , ε , Y ) via the infinite-dimensional version of Birkhoff's theorem [18].
Lemma 3.
If the cardinality of Y is at least the cardinality of the continuum R , then there exists a σ-algebra on Y such that the collection R ( Q , L , ε , Y ) has balanced conditional distributions.
Proof of Lemma 3.
First, we shall choose an appropriate alphabet Y so that its cardinality is the cardinality of the continuum. Denote by Ψ the set of × permutation matrices, where an × permutation matrix is a real matrix Π = { π i , j } i , j = 1 satisfying either π i , j = 0 or π i , j = 1 for each 1 i , j < , and
j = 1 π i , j = 1 for each 1 i < ,
i = 1 π i , j = 1 for each 1 j < .
For an × permutation matrix Π = { π i , j } i , j Ψ , define the permutation ψ Π on X = { 1 , 2 , } by
ψ Π ( i ) : = j = 1 π i , j j .
It is known that there is a one-to-one correspondence between the permutation matrices Π and the bijections ψ Π ; and thus, the cardinality of Ψ is the cardinality of the continuum. Therefore, in this proof, we may assume without loss of generality that Y = Ψ .
Second, we shall construct an appropriate σ -algebra on Y via the infinite-dimensional version of Birkhoff’s theorem (cf. Theorem 2 of [18]) for × doubly stochastic matrices, where an × doubly stochastic matrix is a real matrix M = { m i , j } i , j = 1 satisfying 0 m i , j 1 for each 1 i , j < , and
j = 1 m i , j = 1 for each 1 i < ,
i = 1 m i , j = 1 for each 1 j < .
Similar to Ψ , denote by Ψ i , j the set of × permutation matrices in which the entry in the ith row and the jth column is 1, where note that Ψ i , j Y . Then, the following lemma holds.
Lemma 4
(infinite-dimensional version of Birkhoff’s theorem; cf. Theorem 2 of [18]). There exists a σ-algebra Γ on Y such that (i) Ψ i , j Γ for every 1 i , j < and (ii) for any × doubly stochastic matrix M = { m i , j } i , j = 1 , there exists a probability measure μ on ( Y , Γ ) such that μ ( Ψ i , j ) = m i , j for every 1 i , j < .
Remark 17.
In the original statement of Theorem 2 of [18], it is written that a probability space ( Y , Γ , μ ) exists for a given × doubly stochastic matrix M , namely, the σ-algebra Γ may depend on M . However, the construction of Γ is independent of M (see Page 196 of [18]); and we can restate Theorem 2 of [18] as Lemma 4.
This is a probabilistic description of an × doubly stochastic matrix via a probability measure on the × permutation matrices. The existence of the probability measure μ is due to Kolmogorov’s extension theorem. We employ this σ -algebra Γ on Y in the proof.
Thirdly, we shall show that under this measurable space ( Y , Γ ) , the collection R ( Q , L , ε , Y ) has balanced conditional distributions defined in (126). In other words, for a given pair ( X , Y ) R ( Q , L , ε , Y ) , it suffices to construct another pair ( U , V ) of r.v.’s satisfying (126) and ( U , V ) R ( Q , L , ε , Y ) . At first, construct its conditional distribution P U | V by
P U | V ( x ) = E P X | Y ( ψ V ( x ) ) | V ( a . s . )
for each x ∈ X , where E [ Z ∣ W ] stands for the conditional expectation of a real-valued r.v. Z given the sub- σ -algebra σ ( W ) generated by a r.v. W, and ψ V is given as in (132). As ψ V ( x ) is σ ( V ) -measurable for each x ∈ X , it is clear that
P U | V ( x ) = E P X | Y ( x ) | V = E P X | Y ( ψ V ( ψ V 1 ( x ) ) ) | V = P U | V ( ψ V 1 ( x ) ) ( a . s . )
for every x X . Thus, we readily see that (126) holds, and ( U , V ) is connected uniform-dispersively. Thus, by (123) and the hypothesis that P X = Q , we see that P U | V majorizes Q a.s. Therefore, it follows from the well-known characterization of the majorization relation via × doubly stochastic matrices (see Lemma 3.1 of [16] or Page 25 of [10]) that one can find an × doubly stochastic matrix M = { m i , j } i , j = 1 satisfying
Q ( i ) = j = 1 m i , j P U | V ( j ) ( a . s . )
for every i ≥ 1 . By Lemma 4, we can construct an induced probability measure P V so that P V ( Ψ i , j ) = m i , j for each 1 ≤ i , j < ∞ . Now, the pair of P U | V and P V defines the probability law of ( U , V ) . To ensure that ( U , V ) belongs to R ( Q , L , ε , Y ) , it remains to verify that P e ( L ) ( U V ) ≤ ε and P U = Q .
As ψ Π is a permutation defined in (132), we have
P e ( L ) ( X Y ) = ( a ) 1 − E [ ∑ x = 1 L P X | Y ↓ ( x ) ] = 1 − E [ ∑ x = 1 L E [ P X | Y ↓ ( x ) ∣ V ] ] = ( b ) 1 − E [ ∑ x = 1 L P U | V ↓ ( x ) ] = ( c ) P e ( L ) ( U V ) ,
where
  • (a) and (c) follow from Proposition 2, and
  • and (b) follows from (136).
Therefore, we see that P e ( L ) ( X Y ) ε is equivalent to P e ( L ) ( U V ) ε . Furthermore, we observe that
Q ( i ) = ( a ) j = 1 m i , j P U | V ( j ) ( a . s . ) = ( b ) j = 1 E 1 { V Ψ i , j } P U | V ( j ) ( a . s . ) = ( c ) j = 1 E 1 { V Ψ i , j } P U | V ( j ) ( a . s . ) = j = 1 E E 1 { V Ψ i , j } P U | V ( j ) | V = ( d ) j = 1 E E 1 { V Ψ i , j } P U | V ( ψ V 1 ( j ) ) | V = ( e ) j = 1 E E 1 { V Ψ i , j } P U | V k = 1 1 { V Ψ k , j } k | V = j = 1 E 1 { V Ψ i , j } P U | V k = 1 1 { V Ψ k , j } k = ( f ) E j = 1 1 { V Ψ i , j } P U | V k = 1 1 { V Ψ k , j } k = ( g ) E P U | V ( i ) = P U ( i )
for every i 1 , where
  • (a) follows from (137),
  • (b) follows by the identity m i , j = P { V Ψ i , j } ,
  • (c) follows from the fact that ( U , V ) is connected uniform-dispersively,
  • (d) follows from (136),
  • (e) follows by the definition of Ψ i , j ,
  • (f) follows by the Fubini–Tonelli theorem, and
  • (g) follows from the fact that the inverse of a permutation matrix is its transpose.
Therefore, we have P U = Q , and the assertion of Lemma 3 is proved in the case where the cardinality of Y is the cardinality of the continuum.
Finally, even if the cardinality of Y is larger than the cardinality of the continuum, the assertion of Lemma 3 can be immediately proved by considering the trace of the space Y on Ψ (cf. [74], p. 23). This completes the proof of Lemma 3. □
Finally, we show that the Fano-type distribution of type-1 defined in (30) is the infimum of a certain class of X -marginals with respect to the majorization relation ≺.
Lemma 5.
Suppose that the system ( Q , L , ε ) satisfies the right-hand inequality in (29). For every X -marginal R such that R majorizes Q and P e ( L ) ( R ) ≤ ε , it holds that R majorizes P type-1 as well.
Proof of Lemma 5.
We first give an elementary fact of the weak majorization on the finite-dimensional real vectors.
Lemma 6.
Let p = ( p i ) i = 1 n and q = ( q i ) i = 1 n be n-dimensional real vectors satisfying p 1 ≥ p 2 ≥ ⋯ ≥ p n ≥ 0 and q 1 ≥ q 2 ≥ ⋯ ≥ q n ≥ 0 , respectively. Consider an integer 1 ≤ k ≤ n satisfying q k = q i for every i = k , k + 1 , , n . If
∑ i = 1 j p i ≥ ∑ i = 1 j q i for j = 1 , 2 , , k − 1 ,
∑ i = 1 n p i ≥ ∑ i = 1 n q i ,
then it holds that
∑ i = 1 j p i ≥ ∑ i = 1 j q i for j = 1 , 2 , , n .
Proof of Lemma 6.
See Appendix E. □
Since P type-1 ↓ = P type-1 (see Proposition 4), it suffices to prove that
∑ x = 1 k P type-1 ( x ) ≤ ∑ x = 1 k R ↓ ( x )
for every k 1 .
As P type-1 ( x ) = Q ( x ) for each 1 ≤ x < J (see Proposition 4), it follows by the majorization relation Q ≺ R that (143) holds for each 1 ≤ k < J . Moreover, as P e ( L ) ( P type-1 ) = ε (see Proposition 4), it follows from (28) and the hypothesis P e ( L ) ( R ) ≤ ε that
∑ x = J L P type-1 ( x ) ≤ ∑ x = J L R ↓ ( x ) .
In addition, as (143) holds for each 1 k < J and P type-1 ( x ) = V ( J ) for each J x L (see Proposition 4), it follows from Lemma 6 and (144) that (143) also holds for each 1 k L .
Now, suppose that K 1 = . Then, it follows that
P type-1 ( x ) = W ( ) = 0
for each x L + 1 (see Proposition 4). Thus, Inequality (143) holds for every k 1 ; therefore, we have that R majorizes P type-1 , provided that K 1 = .
Finally, suppose that K 1 < ∞ . Since P type-1 ( x ) = Q ( x ) for each x ≥ K 1 + 1 (see Proposition 4), it follows by the majorization relation Q ≺ R that (143) holds for every k ≥ K 1 . Moreover, since (143) holds for every 1 ≤ k ≤ L and every k ≥ K 1 , we observe that
∑ x = L + 1 K 1 P type-1 ( x ) ≤ ∑ x = L + 1 K 1 R ↓ ( x ) .
Finally, as (143) holds for 1 ≤ k ≤ L and P type-1 ( x ) = W ( K 1 ) for L < x ≤ K 1 (see Proposition 4), it follows by Lemma 6 and (146) that (143) holds for every 1 ≤ k ≤ K 1 . Therefore, Inequality (143) holds for every k ≥ 1 , completing the proof of Lemma 5. □
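Lemma 6 also admits a quick randomized sanity check (a minimal Python/NumPy sketch over random instances, not a proof; instances violating the hypotheses are skipped):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
for _ in range(100000):
    q = np.sort(rng.random(n))[::-1]
    k = int(rng.integers(1, n + 1))
    q[k - 1:] = q[k - 1]          # enforce q_k = q_i for i = k, ..., n (1-indexed)
    p = np.sort(rng.random(n))[::-1]
    cp, cq = np.cumsum(p), np.cumsum(q)
    # hypotheses: prefix-sum domination up to k - 1, and domination of the total sums
    if np.all(cp[:k - 1] >= cq[:k - 1]) and cp[-1] >= cq[-1]:
        assert np.all(cp >= cq - 1e-12)   # conclusion of Lemma 6
print("no counterexample found")
```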
Using the above lemmas, we can prove Theorem 1 as follows.
Proof of Theorem 1.
Let ε > 0 . For the sake of brevity, we write
R = R ( Q , L , ε , Y )
in the proof. Let Υ be a σ -algebra on Y , Ψ an alphabet whose cardinality is the cardinality of the continuum, and Γ a σ -algebra on Ψ so that R ( Q , L , ε , Ψ ) has balanced conditional distributions (see Lemma 3). Now, we define the collection
R ¯ : = R ( Q , L , ε , Y Ψ ) ,
where the σ -algebra on Y Ψ is given by the smallest σ -algebra Υ Γ containing Υ and Γ . It is clear that R R ¯ , and R ¯ has balanced conditional distributions as well (see the last paragraph in the proof of Lemma 3). Then, we have
H ϕ ( Q , L , ε , Y ) = ( a ) sup ( X , Y ) R H ϕ ( X Y ) ( b ) sup ( X , Y ) R ¯ H ϕ ( X Y ) = ( c ) sup ( X , Y ) R ¯ : ( X , Y ) is connected uniform dispersively H ϕ ( X Y ) = ( d ) sup ( X , Y ) R ¯ : P e ( L ) ( P X | Y ) ε a . s . , ( X , Y ) is connected uniform dispersively ϕ ( P X | Y ) ( a . s . ) ( e ) sup R P ( X ) : Q R and P e ( L ) ( R ) ε ϕ ( R ) ( a . s . ) ( f ) ϕ ( P type-1 ) ,
where
  • (a) follows by the definition of R stated in (129),
  • (b) follows by the inclusion R R ¯ ,
  • (c) follows from Lemma 2 and the fact that R ¯ has balanced conditional distributions,
  • (d) follows by the symmetry of both ϕ : P ( X ) [ 0 , ] and P e ( L ) : P ( X ) [ 0 , 1 ] ,
  • (e) follows from Lemma 1, and
  • (f) follows from Proposition 1 and Lemma 5.
Inequalities (149) are indeed the Fano-type inequality stated in (35) of Theorem 1. If ε = P e ( L ) ( Q ) , then it can be verified by the definition of P type-1 stated in (30) that P type-1 = Q (see also Proposition 4). In such a case, the supremum in (7) can be achieved by a pair ( X , Y ) satisfying P X = Q and X Y .
Finally, we shall construct a jointly distributed pair ( X , Y ) satisfying
H ϕ ( X Y ) = ϕ ( P type-1 ) ,
P e ( L ) ( X Y ) = ε ,
P X ( x ) = Q ( x ) ( for x X ) .
For the sake of brevity, suppose that Y is the index set of the set of permutation matrices on { J , J + 1 , , K 1 } . Namely, denote by Π ( y ) = { π i , j ( y ) } i , j = J K 1 a permutation matrix for each index y Y . By the definition of P type-1 stated in (30) (see also Proposition 4), we observe that
∑ x = J k Q ( x ) ≤ ∑ x = J k P type-1 ( x ) for J ≤ k ≤ K 1 ,
and
∑ x = J K 1 Q ( x ) = ∑ x = J K 1 P type-1 ( x ) .
Noting that K 1 < if ε > 0 (see (34)), Equations (153) and (154) are indeed a majorization relation between two finite-dimensional real vectors; and thus, it follows from the Hardy–Littlewood–Pólya theorem (see Theorem 8 of [15] or Theorem 2.B.2 [10]) that there exists a ( K 1 J + 1 ) × ( K 1 J + 1 ) doubly stochastic matrix M = { m i , j } i , j = J K 1 satisfying
Q ( i ) = j = J K 1 m i , j P type-1 ( j )
for each J i K 1 . Moreover, it follows from the finite dimensional version of Birkhoff’s theorem [19] (see also Theorems 2.A.2 and 2.C.2 of [10]) that for such a doubly stochastic matrix M = { m i , j } i , j = J K 1 , there exists a probability vector λ = ( λ y ) y Y satisfying
m i , j = y Y λ y π i , j ( y )
for every J i , j K 1 , where a nonnegative vector is called a probability vector if the sum of the elements is unity. Using them, we construct a pair ( X , Y ) via the following distributions,
P X | Y = y ( x ) = P type-1 ( x ) if 1 x < J or K 1 < x < , P type-1 ( ψ ˜ y ( x ) ) if J x K 1 ,
P Y ( y ) = λ y ,
where the permutation ψ ˜ y on { J , J + 1 , , K 1 } is defined by
ψ ˜ y ( i ) : = j = J K 1 π i , j ( y ) j
for each y ∈ Y . Then, it follows from (155) and (156) that (152) holds. Moreover, it is easy to see that P X | Y = y ↓ = P type-1 for every y ∈ Y . Thus, we observe that (150) and (151) hold as well. This implies together with (149) that the constructed pair ( X , Y ) achieves the supremum in (7), completing the proof of Theorem 1. □
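The two ingredients of the construction above — a doubly stochastic matrix realizing a majorization relation and its decomposition into permutation matrices — are both classical and easy to compute in small dimensions. Below is a minimal Python/NumPy sketch of the finite-dimensional Birkhoff decomposition step (156); the brute-force permutation search is a simplification that is feasible only for small matrices such as the ( K 1 − J + 1 )-dimensional block used here.

```python
import numpy as np
from itertools import permutations

def birkhoff_decomposition(M, tol=1e-12):
    """Greedily write a doubly stochastic matrix M as a convex combination of
    permutation matrices; Birkhoff's theorem guarantees each greedy step succeeds.
    Brute force over permutations: fine for small n only."""
    n = M.shape[0]
    M = M.astype(float).copy()
    terms = []
    while M.max() > tol:
        # a permutation supported on the positive entries of M always exists
        s = next(s for s in permutations(range(n))
                 if all(M[i, s[i]] > tol for i in range(n)))
        w = min(M[i, s[i]] for i in range(n))
        terms.append((w, s))
        for i in range(n):
            M[i, s[i]] -= w
    return terms

M = np.array([[0.5, 0.3, 0.2],     # a doubly stochastic example
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
for w, s in birkhoff_decomposition(M):
    print(f"lambda_y = {w:.2f}, permutation {s}")   # the weights sum to one
```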

6.2. Proof of Theorem 2

Even if ε = 0 , the inequalities in (149) hold as well; that is, the Fano-type inequality stated in (43) of Theorem 2 holds. In this proof, we shall verify the equality conditions of (43).
If supp ( Q ) is finite, then it follows by the definition of K 1 stated in (34) that K 1 < . Thus, the same construction of a jointly distributed pair ( X , Y ) as the last paragraph of Section 6.1 proves that (43) holds with equality if supp ( Q ) is finite.
Consider the case where supp ( Q ) is infinite and J = L . Since ε = 0 , we readily see that K 1 = , V ( J ) > 0 , and W ( K 1 ) = 0 . Suppose that
Y = { L , L + 1 , L + 2 , } .
We then construct a pair ( X , Y ) via the following distributions,
P X | Y = y ( x ) = Q ( x ) if 1 x < L , V ( J ) if L x < and x = y , 0 if L x < and x y ,
P Y ( y ) = Q ( y ) V ( J ) .
We readily see that P X | Y = y ↓ = P type-1 for every y ∈ Y ; therefore, we have that (150)–(152) hold. This implies that the constructed pair ( X , Y ) achieves the supremum in (7).
Finally, suppose that the cardinality of Y is at least the cardinality of the continuum. Assume without loss of generality that Y is the set of × permutation matrices. Consider the measurable space ( Y , Γ ) given in the infinite-dimensional version of Birkhoff’s theorem (see Lemma 4). In addition, consider a jointly distributed pair ( X , Y ) satisfying P X | Y = P type-1 a.s. Then, it is easy to see that (150) and (151) hold for any induced probability measure P Y on Y . Similar to the construction of the probability measure P V on Y below (137), we can find an induced probability measure P Y satisfying (152). Therefore, it follows from (43) that this pair ( X , Y ) achieves the supremum in (7). This completes the proof of Theorem 2.

6.3. Proof of Theorem 3

To prove Theorem 3, we need some more preliminary results. Throughout this subsection, assume that the alphabet Y is finite and nonempty. In this case, given a pair ( X , Y ) , one can define
P X | Y = y ( x ) = P { X = x Y = y } ,
provided that P Y ( y ) > 0 .
For a subset Z X , define
P e ( L ) ( X Y Z ) : = min f : Y → Z L P { X ∉ f ( Y ) } .
Note that the difference between P e ( L ) ( X Y ) and P e ( L ) ( X Y Z ) is the restriction of the decoding range to Z ⊆ X , and the inequality P e ( L ) ( X Y ) ≤ P e ( L ) ( X Y Z ) is trivial from the definitions stated in (21) and (164), respectively. The following propositions are easy consequences of the proofs of Propositions 2 and 3, and so we omit their proofs in this paper.
Proposition 7.
It holds that
P e ( L ) ( X Y Z ) = 1 − E [ max D ⊆ Z : | D | = L ∑ x ∈ D P X | Y ( x ) ] .
Proposition 8.
Let β : { 1 , , | Z | } → Z be a bijection satisfying P X ( β ( i ) ) ≥ P X ( β ( j ) ) if i < j . It holds that
1 − ∑ x ∈ Z P X ( x ) ≤ P e ( L ) ( X Y ) ≤ 1 − ∑ x = 1 L P X ( β ( x ) ) .
For a finite subset Z X , denote by Ψ ( Z ) the set of | Z | × | Z | permutation matrices in which both rows and columns are indexed by the elements in Z . The main idea of proving Theorem 3 is the following lemma.
Lemma 7.
For any X × Y -valued r.v. ( X , Y ) , there exist a subset Z X and an X × Ψ ( Z ) -valued r.v. ( U , W ) such that
| Z | = L · | Y | ,
P U ( x ) = P X ( x ) for x X ,
P e ( L ) ( U W ) ≤ P e ( L ) ( U W Z ) = P e ( L ) ( X Y ) ,
H ϕ ( U W ) ≥ H ϕ ( X Y ) ,
P U | W = w ( x ) = P X ( x ) for x ∈ X ∖ Z and w ∈ Ψ ( Z ) .
Proof of Lemma 7.
Suppose without loss of generality that
Y = { 0 , 1 , , N 1 }
for some positive integer N. By the definition of cardinality, one can find a subset Z ⊆ X satisfying (i) | Z | = L N , and (ii) for each x ∈ { 1 , 2 , , L } and y ∈ Y , there exists z ∈ Z satisfying
P X | Y = y ( z ) = P X | Y = y ↓ ( x ) .
For each Π = { π i , j } i , j Z Ψ ( Z ) , define the permutation φ Π : Z Z by
φ Π ( z ) : = w Z π z , w w ,
as in (132) and (159). It is clear that for each y Y , there exists at least one Π Ψ ( Z ) such that
P X | Y = y ( φ Π ( x 1 ) ) ≥ P X | Y = y ( φ Π ( x 2 ) )
for every x 1 , x 2 ∈ Z satisfying x 1 ≤ x 2 , which implies that the permutation φ Π plays the role of a decreasing rearrangement of P X | Y = y on Z . To denote such a correspondence between Y and Ψ ( Z ) , one can choose an injection ι : Y → Ψ ( Z ) appropriately. In other words, one can find an injection ι so that
P X | Y = y ( φ ι ( y ) ( x 1 ) ) ≥ P X | Y = y ( φ ι ( y ) ( x 2 ) )
for every y Y and x 1 , x 2 Z satisfying x 1 x 2 . We now construct an X × Y × Ψ ( Z ) -valued r.v. ( U , V , W ) as follows: The conditional distribution P U | V , W is given by
P U | V = v , W = w ( u ) = P X | Y = v ( φ ι ( v ) φ w ( u ) ) if u Z , P X | Y = v ( u ) if u X Z ,
where σ 1 σ 2 stands for the composition of two bijections σ 1 and σ 2 . The induced probability distribution P V of V is given by P V = P Y . Suppose that the independence V W holds. As
P U , V , W = P U | V , W P V P W ,
it remains to determine the induced probability distribution P W of W; we defer its determination to the last paragraph of this proof. A direct calculation shows
P U | W = w ( u ) = v Y P V | W = w ( v ) P U | V = v , W = w ( u ) = ( a ) v Y P Y ( v ) P U | V = v , W = w ( u ) = ( b ) ω ( u , w ) if u Z , P X ( u ) if u X Z ,
where
  • (a) follows by the independence V W and P V = P Y , and
  • (b) follows by (177) and defining ω ( u , w ) so that
    ω ( u , w ) : = ∑ v ∈ Y P Y ( v ) P X | Y = v ( φ ι ( v ) ∘ φ w ( u ) )
    for each u ∈ Z and w ∈ Ψ ( Z ) .
Now, we readily see from (179) that (171) holds for any induced probability distribution P W of W. Therefore, to complete the proof, it suffices to show that ( U , W ) satisfies (169) and (170) with an arbitrary choice of P W , and ( U , W ) satisfies (168) with an appropriate choice of P W .
Firstly, we shall prove (169). For each w Ψ ( Z ) , denote by D ( w ) Z L the set satisfying
φ w ( k ) < φ w ( x )
for every k D ( w ) and x Z D ( w ) , i.e., it stands for the set of first L elements in Z under the permutation rule w Ψ ( Z ) . Then, we have
P e ( L ) ( U W ) ≤ ( a ) P e ( L ) ( U W Z ) = ( b ) 1 − ∑ w ∈ Ψ ( Z ) P W ( w ) max D ⊆ Z : | D | = L ∑ u ∈ D P U | W = w ( u ) = ( c ) 1 − ∑ w ∈ Ψ ( Z ) P W ( w ) max D ⊆ Z : | D | = L ∑ u ∈ D ω ( u , w ) = ( d ) 1 − ∑ w ∈ Ψ ( Z ) P W ( w ) max D ⊆ Z : | D | = L ∑ u ∈ D ∑ v ∈ Y P Y ( v ) P X | Y = v ( φ ι ( v ) ∘ φ w ( u ) ) = ( e ) 1 − ∑ w ∈ Ψ ( Z ) P W ( w ) ∑ u ∈ D ( w ) ∑ v ∈ Y P Y ( v ) P X | Y = v ( φ ι ( v ) ∘ φ w ( u ) ) = ( f ) 1 − ∑ w ∈ Ψ ( Z ) P W ( w ) ∑ u = 1 L ∑ v ∈ Y P Y ( v ) P X | Y = v ↓ ( u ) = 1 − ∑ y ∈ Y P Y ( y ) ∑ x = 1 L P X | Y = y ↓ ( x ) = ( g ) P e ( L ) ( X Y ) ,
where
  • (a) is an obvious inequality (see the definitions stated in (21) and (164)),
  • (b) follows from Proposition 7,
  • (c) follows from (179),
  • (d) follows from the definition of ω ( u , w ) stated in (180),
  • (e) follows from (176) and (181),
  • (f) follows from (173), (176), and (181), and
  • (g) follows from Proposition 2.
Therefore, we obtain (169).
Secondly, we shall prove (170). We get
H ϕ ( X Y ) = ∑ y ∈ Y P Y ( y ) ϕ ( P X | Y = y ) = ∑ w ∈ Ψ ( Z ) P W ( w ) ∑ y ∈ Y P Y ( y ) ϕ ( P X | Y = y ) = ( a ) ∑ w ∈ Ψ ( Z ) P W ( w ) ∑ y ∈ Y P Y ( y ) ϕ ( P U | V = y , W = w ) = ( b ) ∑ w ∈ Ψ ( Z ) P W ( w ) ∑ v ∈ Y P V ( v ) ϕ ( P U | V = v , W = w ) ≤ ( c ) ∑ w ∈ Ψ ( Z ) P W ( w ) ϕ ( ∑ v ∈ Y P V ( v ) P U | V = v , W = w ) = ( d ) ∑ w ∈ Ψ ( Z ) P W ( w ) ϕ ( P U | W = w ) = H ϕ ( U W ) ,
where
  • (a) follows by the symmetry of ϕ and (177),
  • (b) follows by P V = P Y ,
  • (c) follows by Jensen’s inequality, and
  • (d) follows by the independence V ⊥ W .
Therefore, we obtain (170).
Finally, we shall prove that there exists an induced probability distribution P W satisfying (168). If we denote by I Ψ ( Z ) the identity matrix, then it follows from (180) that
P U | W = I ( u ) = P U | W = w ( φ w 1 ( u ) )
for every ( u , w ) Z × Ψ ( Z ) . It follows from (179) that
x Z P X ( x ) = u Z P U | W = I ( u ) .
Now, denote by β 1 : { 1 , 2 , , L N } → Z and β 2 : { 1 , 2 , , L N } → Z two bijections satisfying P X ( β 1 ( i ) ) ≥ P X ( β 1 ( j ) ) and β 2 ( i ) < β 2 ( j ) , respectively, provided that i < j . That is, the bijections β 1 and β 2 play the roles of decreasing rearrangements of P X and P U | W = I , respectively, on Z . Using those bijections, one can rewrite (185) as
∑ i = 1 L N P X ( β 1 ( i ) ) = ∑ i = 1 L N P U | W = I ( β 2 ( i ) ) .
In the same way as (123), it can be verified from (180) by induction that
∑ i = 1 k P X ( β 1 ( i ) ) ≤ ∑ i = 1 k P U | W = I ( β 2 ( i ) )
for each k = 1 , 2 , , L N . Equations (186) and (187) are indeed a majorization relation between two finite-dimensional real vectors, because β 1 plays a role of a decreasing rearrangement of P X on Z . Combining (184) and this majorization relation, it follows from the Hardy–Littlewood–Pólya theorem derived in Theorem 8 of [15] (see also Theorem 2.B.2 of [10]) and the finite-dimensional version of Birkhoff’s theorem [19] (see also Theorem 2.A.2 of [10]) that there exists an induced probability distribution P W satisfying P U = P X , i.e., Equation (168) holds, as in (153)–(158). This completes the proof of Lemma 7. □
Remark 18.
Lemma 7 can restrict the feasible region of the supremum in (7) from a countably infinite alphabet X to a finite alphabet Z in the sense of (171). Specifically, if Y is finite, it suffices to vary at most | Z | = L · | Y | probability masses { P X | Y = y ( x ) } x Z for each y Y . Lemma 7 is useful not only to prove Theorem 3 but also to prove Proposition 9 of Section 8.1 (see Appendix D for the proof).
As with (129), for a subset Z X , we define
R ( Q , L , ε , Y , Z ) : = ( X , Y ) | X is X valued , Y is Y valued , P e ( L ) ( X Y Z ) ε , P X = Q , P X | Y = y ( x ) = Q ( x ) ( x , y ) ( X Z ) × Y ,
provided that Y is finite. It is clear that (188) coincides with (129) if Z = X , i.e., it holds that
R ( Q , L , ε , Y , X ) = R ( Q , L , ε , Y ) .
Note from Lemma 7 that for each system ( Q , L , ε , Y ) satisfying (29), there exists a subset Z X such that | Z | = L · | Y | and R ( Q , L , ε , Y , Z ) is nonempty, provided that Y is finite.
Another important idea for proving Theorem 3 is to apply Lemma 2 to this collection of r.v.'s. The collection R ( Q , L , ε , Y , Z ) does not, however, have balanced conditional distributions of (126) in general, as with (129). Fortunately, similar to Lemma 3, the following lemma avoids this issue by blowing up the collection R ( Q , L , ε , Y , Z ) via the finite-dimensional version of Birkhoff's theorem [19].
Lemma 8.
Suppose that Z ⊆ X is finite and R ( Q , L , ε , Y , Z ) is nonempty. If | Z | ! ≤ | Y | < ∞ , then the collection R ( Q , L , ε , Y , Z ) has balanced conditional distributions.
Proof of Lemma 8.
Lemma 8 can be proven in a similar fashion to the proof of Lemma 3. As this proof is slightly long as with Lemma 3, we only give a sketch of the proof as follows.
As | Ψ ( Z ) | = | Z | ! , we may assume without loss of generality that Y = Ψ ( Z ) . For the sake of brevity, we write
R ˜ = R ( Q , L , ε , Y , Z )
in this proof. For a pair ( X , Y ) R ˜ , construct another X × Y -valued r.v. ( U , V ) , as in (135), so that P U | V = y ( x ) = Q ( x ) for every ( x , y ) ( X Z ) × Y . By such a construction of (135), the condition stated in (126) is obviously satisfied. In the same way as (138), we can verify that
P e ( L ) ( U V Z ) = P e ( L ) ( X Y Z ) .
Moreover, employing the finite-dimensional version of Birkhoff’s theorem [19] (also known as the Birkhoff–von Neumann decomposition) instead of Lemma 4, we can also find an induced probability distribution P V of V so that P U = Q in the same way as (139). Therefore, for any ( X , Y ) R ˜ , one can find ( U , V ) R ˜ satisfying (126). This completes the proof of Lemma 8. □
Let Z ⊆ X be a subset. Consider a bijection β : { 1 , 2 , , | Z | } → Z satisfying Q ( β ( i ) ) ≥ Q ( β ( j ) ) whenever i < j , i.e., it plays the role of a decreasing rearrangement of Q on Z . Henceforth, suppose that ( Q , L , ε , Y , Z ) satisfies
1 − ∑ x ∈ Z Q ( x ) ≤ ε ≤ 1 − ∑ x = 1 L Q ( β ( x ) ) .
Define the extremal distribution of type-3 by the following X -marginal:
P type-3 ( x ) = P type-3 ( Q , L , ε , Y , Z ) ( x ) : = V 3 ( J 3 ) if x ∈ Z and J 3 ≤ β − 1 ( x ) ≤ L , W 3 ( K 3 ) if x ∈ Z and L < β − 1 ( x ) ≤ K 3 , Q ( x ) otherwise ,
where the weight V 3 ( j ) is defined by
V 3 ( j ) = V 3 ( Q , L , ε , Y , Z ) ( j ) : = [ ( 1 − ε ) − ∑ x = 1 j − 1 Q ( β ( x ) ) ] / ( L − j + 1 )
for each integer 1 ≤ j ≤ L , the weight W 3 ( k ) is defined by
W 3 ( k ) = W 3 ( Q , L , ε , Y , Z ) ( k ) : = 1 if k = L , [ ∑ x = 1 k Q ( β ( x ) ) − ( 1 − ε ) ] / ( k − L ) if k > L
for each integer L ≤ k ≤ L · | Y | , the integer J 3 is chosen so that
J 3 = J 3 ( Q , L , ε , Y , Z ) : = min { 1 ≤ j ≤ L ∣ Q ( β ( j ) ) ≤ V 3 ( j ) } ,
and the integer K 3 is chosen so that
K 3 = K 3 ( Q , L , ε , Y , Z ) : = max { L ≤ k ≤ L · | Y | ∣ W 3 ( k ) ≥ Q ( β ( k ) ) } .
Remark 19.
The extremal distribution of type-3 can be specialized to both extremal distribution of type-2 defined in (44) and Ho–Verdú’s truncated distribution defined in Equation (17) of [21], respectively.
The following lemma shows a relation between the type-2 and the type-3.
Lemma 9.
Suppose that | Z | = L · | Y | . Then, it holds that
P type-2 ( Q , L , ε , Y ) ≺ P type-3 ( Q , L , ε , Y , Z ) .
Proof of Lemma 9.
We readily see that
P type-2 = P type-3 ,
provided that Z = { 1 , 2 , , L · | Y | } and Q = Q ↓ , because β : { 1 , 2 , , | Z | } → Z used in (193) is the identity mapping in this case. Actually, we may assume without loss of generality that Q = Q ↓ .
Although
P type-2 ↓ = P type-2
does not hold in general, we can see from the definition of P type-2 stated in (44) that
P type-2 ↓ ( x ) = P type-2 ( x )
for each x = 1 , 2 , , L . Therefore, as
P type-2 ( x ) = Q ( x ) ≤ P type-3 ( x )
for each x = 1 , 2 , , J 1 , it follows that
∑ x = 1 k P type-2 ( x ) ≤ ∑ x = 1 k P type-3 ( x )
for each k = 1 , 2 , , J 1 . By the definitions (31), (33), (194), and (196), it can be verified that
J 3 ≤ J ,
V ( J ) ≤ V 3 ( J 3 ) .
Thus, as
P type-2 ( x ) = V ( J )
for each x = J , J + 1 , , L , it follows that
P type-3 ( x ) ≥ V 3 ( J 3 )
for each x = J , J + 1 , , L ; which implies that (203) also holds for each k = J , J + 1 , , L . Therefore, we observe that P type-3 majorizes P type-2 over the subset { 1 , 2 , , L } X .
We prove the rest of the majorization relation by contradiction. Namely, assume that
∑ x = 1 l P type-2 ( x ) > ∑ x = 1 l P type-3 ( x )
for some integer l L + 1 . By the definitions stated in (32), (45), (195), and (197), it can be verified that
K 2 ≤ K 3 ,
W ( K 2 ) ≥ W 3 ( K 3 ) .
Thus, as
P type-2 ( x ) = W ( K 2 ) ≥ Q ( x ) ( for x = L + 1 , L + 2 , , K 2 ) ,
P type-3 ( x ) = W 3 ( K 3 ) ≤ Q ( x ) ( for x = β ( L + 1 ) , β ( L + 2 ) , , β ( K 3 ) ) ,
it follows that
P type-2 ( x ) ≥ P type-3 ( x )
for every x = l , l + 1 , , which implies together with the hypothesis (208) that
∑ x = l ∞ P type-2 ( x ) > ∑ x = l ∞ P type-3 ( x ) .
This, however, contradicts the definition of probability distributions, i.e., the sum of the probability masses would be strictly larger than one. This completes the proof of Lemma 9. □
Similar to (164), we now define
P e ( L ) ( X Z ) : = min D ⊆ Z : | D | = L P { X ∉ D } .
As with Proposition 8, we can verify that
P e ( L ) ( X Z ) = 1 − max D ⊆ Z : | D | = L ∑ x ∈ D P X ( x ) = 1 − ∑ x = 1 L P X ( β ( x ) ) .
Therefore, the restriction stated in (192) comes from the same observation as (29) (see Propositions 3 and 8). In view of (216), we write P e ( L ) ( Q Z ) = P e ( L ) ( X Z ) if P X = Q . As in Lemma 5, the following lemma holds.
Lemma 10.
Suppose that an X -marginal R satisfies that (i) R majorizes Q, (ii) P e ( L ) ( R Z ) ≤ ε , and (iii) R ( k ) = Q ( k ) for each k ∈ X ∖ Z . Then, it holds that R majorizes P type-3 as well.
Proof of Lemma 10.
Since
R ( x ) = P type-3 ( x ) = Q ( x )
for every x ∈ X ∖ Z , it suffices to verify the majorization relation over Z . Denote by β 1 : { 1 , 2 , , L · | Y | } → Z and β 2 : { 1 , 2 , , L · | Y | } → Z two bijections satisfying R ( β 1 ( i ) ) ≥ R ( β 1 ( j ) ) and β 2 ( i ) < β 2 ( j ) , respectively, whenever i < j . In other words, the two bijections β 1 and β 2 play the roles of decreasing rearrangements of R and P type-3 , respectively, on Z . That is, we shall prove that
∑ x = 1 k P type-3 ( β 2 ( x ) ) ≤ ∑ x = 1 k R ( β 1 ( x ) )
for every k = 1 , 2 , , | Z | .
As R majorizes Q, it follows from (193) that (218) holds for each k = 1 , 2 , , J 3 1 . Moreover, we readily see from (193) that
∑ x = 1 L P type-3 ( β 2 ( x ) ) = 1 − ε .
Therefore, it follows from Lemma 6 and the hypothesis P e ( L ) ( R Z ) ≤ ε that (218) holds for each k = J 3 , J 3 + 1 , , L . Similarly, since (218) holds with equality if k = | Z | , it also follows from Lemma 6 that (218) holds for each k = L + 1 , L + 2 , , | Z | . Therefore, we observe that R majorizes P type-3 . This completes the proof of Lemma 10. □
Finally, we can prove Theorem 3 by using the above lemmas.
Proof of Theorem 3.
For the sake of brevity, we define
R 1 : = R ( Q , L , ε , Y ) ,
R 2 : = Z X : | Z | = L · | Y | R ( Q , L , ε , Y , Z ) ,
R 3 : = Z X : | Z | = L · | Y | R ( Q , L , ε , Y Ψ ( Z ) , Z ) ,
P 4 : = R P ( X ) | Z X s . t . | Z | = L · | Y | , P e ( L ) ( R Z ) ε , R ( x ) = Q ( x ) for x X Z .
Then, we have
H ϕ ( Q , L , ε , Y ) = ( a ) sup ( X , Y ) R 1 H ϕ ( X Y ) = ( b ) sup ( X , Y ) R 2 H ϕ ( X Y ) ( c ) sup ( X , Y ) R 3 H ϕ ( X Y ) = ( d ) sup ( X , Y ) R 3 : ( X , Y ) is connected uniform dispersively H ϕ ( X Y ) ( e ) sup R P 4 ϕ ( R ) ( f ) sup Z X : | Z | = L · | Y | ϕ ( P type-3 ) ( g ) ϕ ( P type-2 ) ,
where
  • (a) follows from the definition of R ( Q , L , ε , Y ) stated in (129),
  • (b) follows from Lemma 7 and the definition of R ( Q , L , ε , Y , Z ) stated in (188),
  • (c) follows from the inclusion relation
    R ( Q , L , ε , Y , Z ) R ( Q , L , ε , Y Ψ ( Z ) , Z ) ,
  • (d) follows from Lemmas 2 and 8,
  • (e) follows from Lemma 1,
  • (f) follows from Lemma 10, and
  • (g) follows from Proposition 1 and Lemma 9.
Inequalities (224) are indeed the Fano-type inequality stated in (47) of Theorem 3.
Finally, supposing that | Y | ≥ ( K 2 − J ) 2 + 1 , we shall construct a jointly distributed pair ( X , Y ) satisfying
H ϕ ( X Y ) = ϕ ( P type-2 ) ,
P e ( L ) ( X Y ) = ε ,
P X ( x ) = Q ( x ) ( for x X ) .
Similar to (153) and (154), we see that
∑ x = J k Q ( x ) ≤ ∑ x = J k P type-2 ( x ) for J ≤ k ≤ K 2 ,
and
∑ x = J K 2 Q ( x ) = ∑ x = J K 2 P type-2 ( x ) .
This is a majorization relation between two ( K 2 J + 1 ) -dimensional real vectors; and thus, it follows from the Hardy–Littlewood–Pólya theorem ([15] Theorem 8) (see also [10], Theorem 2.B.2) that there exists a ( K 2 J + 1 ) × ( K 2 J + 1 ) doubly stochastic matrix M = { m i , j } i , j = J K 2 satisfying
Q ( i ) = j = J K 2 m i , j P type-2 ( j )
for each J i K 2 . Moreover, it follows from Marcus–Ree’s or Farahat–Mirsky’s refinement of the finite-dimensional version of Birkhoff’s theorem derived in [75] or Theorem 3 of [76], respectively (see also Theorem 2.F.2 of [10]), that there exists a pair of a probability vector λ = ( λ y ) y Y and a collection { { π i , j ( y ) } i , j = J K 2 } y Y of ( K 2 J + 1 ) × ( K 2 J + 1 ) permutation matrices such that
m i , j = y Y λ y π i , j ( y )
for every J i , j K 2 . Using them, construct a pair ( X , Y ) via the following distributions,
P X | Y = y ( x ) = P type-2 ( x ) if 1 x < J or K 2 < x < , P type-2 ( ψ ˜ y ( x ) ) if J x K 2 ,
P Y ( y ) = λ y ,
where ψ ˜ y is defined as in (159). Similar to Section 6.1, we now observe that (226)–(228) hold. This implies together with (224) that the constructed pair ( X , Y ) achieves the supremum in (7). Furthermore, since P type-2 and Q differ in at most K 2 − J + 1 probability masses, it follows that the collection { P X | Y = y } y ∈ Y consists of at most ( K 2 − J + 1 choose L − J + 1 ) distinct distributions. Namely, the condition that | Y | ≥ ( K 2 − J + 1 choose L − J + 1 ) is also sufficient to construct a jointly distributed pair ( X , Y ) satisfying (226)–(228). This completes the proof of Theorem 3. □
Remark 20.
Step (b) in (224) is a key of proving Theorem 3; it is the reduction step from infinite to finite-dimensional settings via Lemma 7 (see also Remark 18). Note that this proof technique is not applicable when Y is infinite, while the proof of Theorem 1 works well for infinite Y .

6.4. Proof of Theorem 4

It is known that every discrete probability distribution on { 1 , , M } majorizes the uniform distribution on { 1 , , M } . Thus, since
P type-0 ( M , L , ε ) = P type-1 ( Unif M , L , ε )
with the uniform distribution Unif M on { 1 , , M } , it follows from Lemma 5 that
P type-0 ( M , L , ε ) ≺ P type-1 ( Q , L , ε )
if supp ( Q ) { 1 , , M } . Therefore, it follows from Proposition 1 and Theorems 1 and 2 that
H ϕ ( M , L , ε , Y ) ≤ ϕ ( P type-0 ) .
Finally, it is easy to see that
H ϕ ( X Y ) = ϕ ( P type-0 ) ,
P e ( L ) ( X Y ) = ε ,
provided that
P X | Y ( x ) = P type-0 ( x ) ( a . s . )
for every 1 ≤ x ≤ M . This implies the existence of a pair ( X , Y ) achieving the maximum in (50); and therefore, the equality in (237) holds. This completes the proof of Theorem 4.
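For ϕ = the Shannon entropy, the bound of Theorem 4 takes a familiar closed form. A minimal Python sketch (assuming, as in the classical list-decoding form of Fano's inequality, that P type-0 places mass ( 1 − ε ) / L on L symbols and ε / ( M − L ) on the remaining M − L , for 0 ≤ ε ≤ 1 − L / M ):

```python
import numpy as np
from math import log

def h2(e):
    """Binary entropy (nats) with h2(0) = h2(1) = 0."""
    if e <= 0.0 or e >= 1.0:
        return 0.0
    return -e * log(e) - (1 - e) * log(1 - e)

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

M, L, eps = 16, 2, 0.1
p_type0 = np.array([(1 - eps) / L] * L + [eps / (M - L)] * (M - L))
print(entropy(p_type0))                                    # direct computation
print(h2(eps) + (1 - eps) * log(L) + eps * log(M - L))     # identical closed form
# With L = 1 this is the classical Fano bound h2(eps) + eps * log(M - 1).
```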

7. Proofs of Asymptotic Behaviors on Equivocations

In this section, we prove Theorems 5–7.

7.1. Proof of Theorem 5

Defining the variational distance between two X -marginals P and Q by
d ( P , Q ) : = 1 2 ∑ x ∈ X | P ( x ) − Q ( x ) | ,
we now introduce the following lemma, which is useful to prove Theorem 5.
Lemma 11
([77], Theorem 3). Let Q be an X -marginal, and 0 ≤ δ ≤ 1 − Q ( 1 ) a real number. Then, it holds that
min R ∈ P ( X ) : d ( Q , R ) ≤ δ H ( R ) = H ( S ( Q , δ ) ) ,
where the X -marginal S ( Q , δ ) is defined by
S ( Q , δ ) ( x ) : = Q ( x ) + δ if x = 1 , Q ( x ) if 1 < x < B , ∑ k = B ∞ Q ( k ) − δ if x = B , 0 if x > B ,
and the integer B is chosen so that
B : = sup { b ≥ 1 | ∑ k = b ∞ Q ( k ) ≥ δ } .
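A minimal Python/NumPy sketch of the minimizer S ( Q , δ ) for a finite X -marginal (assuming, per the convention above, that Q is indexed so that Q ( 1 ) ≥ Q ( 2 ) ≥ ⋯ ): it moves δ of probability mass from the tail onto the largest mass, which can only decrease the Shannon entropy.

```python
import numpy as np

def S(Q, delta):
    """The X-marginal S(Q, delta) of Lemma 11 for a finite, nonincreasing Q."""
    Q = np.asarray(Q, dtype=float)
    assert 0.0 <= delta <= 1.0 - Q[0]
    tail = np.cumsum(Q[::-1])[::-1]              # tail[b] = sum_{k >= b} Q(k), 0-indexed
    B = max(b for b in range(len(Q)) if tail[b] >= delta)
    R = Q.copy()
    R[0] += delta                                # add delta to the largest mass
    R[B] = tail[B] - delta                       # truncate the tail at position B
    R[B + 1:] = 0.0
    return R

def H(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

Q = np.array([0.4, 0.3, 0.15, 0.1, 0.05])        # hypothetical marginal
for d in [0.0, 0.05, 0.2]:
    R = S(Q, d)
    print(f"delta={d:.2f}  S(Q,delta)={R}  H={H(R):.4f}  (H(Q)={H(Q):.4f})")
```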
For the sake of brevity, in this proof, we write
ε n : = P e ( L n ) ( X n Y n ) ,
P n : = P X n ,
P 1 , n : = P type-1 ( P n , L n , ε n )
for each n 1 . Suppose that ε n = o ( 1 ) as n . By Corollary 1, instead of (99), it suffices to verify that
| H ( P 1 , n ) − log L n | + = o ( H ( X n ) ) .
As supp ( P 1 , n ) = { 1 , , L n } if ε n = 0 , we may assume without loss of generality that 0 < ε n < 1 .
Define two X -marginals Q n ( 1 ) and Q n ( 2 ) by
Q n ( 1 ) ( x ) = P 1 , n ( x ) / ( 1 − ε n ) if 1 ≤ x ≤ L n , 0 if x ≥ L n + 1 ,
Q n ( 2 ) ( x ) = 0 if 1 ≤ x ≤ L n , P 1 , n ( x ) / ε n if x ≥ L n + 1
for each n 1 . As Q n ( 1 ) majorizes the uniform distribution on { 1 , 2 , , L n } , it is clear from the Schur-concavity property of the Shannon entropy that
H ( Q n ( 1 ) ) ≤ log L n .
Thus, since
P 1 , n = ( 1 − ε n ) Q n ( 1 ) + ε n Q n ( 2 ) ,
it follows by strong additivity of the Shannon entropy (cf. Property (1.2.6) of [78]) that
H ( P 1 , n ) = h 2 ( ε n ) + ( 1 − ε n ) H ( Q n ( 1 ) ) + ε n H ( Q n ( 2 ) ) ≤ h 2 ( ε n ) + ( 1 − ε n ) log L n + ε n H ( Q n ( 2 ) ) .
Thus, since h 2 ( ε n ) = o ( 1 ) , it suffices to verify the asymptotic behavior of the third term in the right-hand side of (253), i.e., whether
ε n H ( Q n ( 2 ) ) = o ( H ( X n ) )
holds or not.
Consider the X -marginal Q n ( 3 ) given by
Q n ( 3 ) ( x ) = ( P n ( x ) − ε n Q n ( 2 ) ( x ) ) / ( 1 − ε n )
for each n 1 . As
P n = ε n Q n ( 2 ) + ( 1 − ε n ) Q n ( 3 ) ,
it follows by the concavity of the Shannon entropy that
H ( X n ) ≥ ε n H ( Q n ( 2 ) ) + ( 1 − ε n ) H ( Q n ( 3 ) )
for each n ≥ 1 . A direct calculation shows
d ( P n , Q n ( 3 ) ) = 1 2 ∑ x = 1 ∞ | P n ( x ) − Q n ( 3 ) ( x ) | = 1 2 ∑ x = 1 ∞ | P n ( x ) − ( P n ( x ) − ε n Q n ( 2 ) ( x ) ) / ( 1 − ε n ) | = ( ε n / ( 1 − ε n ) ) d ( P n , Q n ( 2 ) ) ≤ ε n / ( 1 − ε n ) = : δ n
for each n 1 , where note that ε n = o ( 1 ) implies δ n = o ( 1 ) as well. Thus, it follows from Lemma 11 that
H ( Q n ( 3 ) ) ≥ H ( S ( P n , δ n ) ) = ( a ) ( P n ( 1 ) + δ n ) log ( 1 / ( P n ( 1 ) + δ n ) ) + ∑ x = 2 B n − 1 P n ( x ) log ( 1 / P n ( x ) ) + ( ∑ k = B n ∞ P n ( k ) − δ n ) log ( 1 / ( ∑ k = B n ∞ P n ( k ) − δ n ) ) ≥ ( b ) ∑ x = 1 B n P n ( x ) log ( 1 / P n ( x ) ) − 2 γ n = ( c ) ∑ x ∈ B ( n ) P X n ( x ) log ( 1 / P X n ( x ) ) − 2 γ n ≥ ( d ) ∑ x ∈ A ϵ ( n ) ∩ B ( n ) P X n ( x ) log ( 1 / P X n ( x ) ) − 2 γ n ≥ ( e ) ∑ x ∈ A ϵ ( n ) ∩ B ( n ) P X n ( x ) ( 1 − ϵ ) H ( X n ) − 2 γ n = P { X n ∈ A ϵ ( n ) ∩ B ( n ) } ( 1 − ϵ ) H ( X n ) − 2 γ n
for every ϵ > 0 and each n 1 , where
  • (a) follows by the definition
    B n : = sup { b ≥ 1 | ∑ k = b ∞ P n ( k ) ≥ δ n }
    for each n 1 ,
  • (b) follows by the continuity of the map u u log u and the fact that δ n = o ( 1 ) as n , i.e., there exists a sequence { γ n } n = 1 of positive reals satisfying γ n = o ( 1 ) as n and
    P n ( 1 ) log ( 1 / P n ( 1 ) ) − ( P n ( 1 ) + δ n ) log ( 1 / ( P n ( 1 ) + δ n ) ) ≤ γ n ,
    P n ( B n ) log ( 1 / P n ( B n ) ) + ( ∑ k = B n ∞ P n ( k ) − δ n ) log ( ∑ k = B n ∞ P n ( k ) − δ n ) ≤ γ n
    for each n 1 ,
  • (c) follows by constructing the subset B ( n ) X so that
    | B ( n ) | = min B ⊆ X : P { X n ∈ B } ≥ 1 − δ n | B |
    for each n 1 ,
  • (d) follows by defining the typical set A ϵ ( n ) X so that
    A ϵ ( n ) : = { x ∈ X | log ( 1 / P X n ( x ) ) ≥ ( 1 − ϵ ) H ( X n ) }
    with some ϵ > 0 for each n 1 , and
  • (e) follows by the definition of A ϵ ( n ) .
As { X n } n = 1 satisfies the AEP and
P { X n ∈ B ( n ) } ≥ 1 − δ n ,
lim n δ n = 0 ,
it is clear that
lim n P { X n ∉ A ϵ ( n ) ∩ B ( n ) } = 0
(see, e.g., Problem 3.11 of [2]). Thus, since ϵ > 0 can be arbitrarily small and ε n = o ( 1 ) as n , it follows from (259) that there exists a sequence { λ n } n = 1 of positive real numbers satisfying λ n = o ( 1 ) as n and
( 1 − ε n ) H ( Q n ( 3 ) ) ≥ ( 1 − λ n ) H ( X n ) − 2 γ n / ( 1 − ε n )
for each n 1 . Combining (257) and (268), we observe that
λ n H ( X n ) + 2 γ n / ( 1 − ε n ) ≥ ε n H ( Q n ( 2 ) )
for each n 1 . Therefore, Equation (254) is indeed valid, which proves (248) together with (253). This completes the proof of Theorem 5.
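The decomposition step (253) in the proof above rests on the strong additivity of the Shannon entropy for disjointly supported mixtures, H ( ( 1 − ε ) Q ( 1 ) + ε Q ( 2 ) ) = h 2 ( ε ) + ( 1 − ε ) H ( Q ( 1 ) ) + ε H ( Q ( 2 ) ) . A minimal numerical sketch (Python/NumPy; the two small distributions are hypothetical):

```python
import numpy as np
from math import log

def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def h2(e):
    """Binary entropy (nats) with h2(0) = h2(1) = 0."""
    if e <= 0.0 or e >= 1.0:
        return 0.0
    return -e * log(e) - (1 - e) * log(1 - e)

eps = 0.05
Q1 = np.array([0.5, 0.5, 0.0, 0.0, 0.0])   # supported on {1, 2}
Q2 = np.array([0.0, 0.0, 0.6, 0.3, 0.1])   # supported on {3, 4, 5}, disjoint from Q1
mix = (1 - eps) * Q1 + eps * Q2
print(H(mix))                                        # direct computation
print(h2(eps) + (1 - eps) * H(Q1) + eps * H(Q2))     # equal, by strong additivity
```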
Remark 21.
The construction of Q n ( 3 ) defined in (255) is a special case of the splitting technique; it was used to derive limit theorems of Markov processes by Nummelin [26] and Athreya–Ney [27]. This technique has many applications in information theory [21,28,29,30,31,32] and to the Markov chain Monte Carlo (MCMC) algorithm [79].

7.2. Proof of Theorem 6

Condition (b) is a direct consequence of Theorem 5; and we shall verify Conditions (a), (c), and (d) in the proof. For the sake of brevity, in the proof, we write
ε n : = P e ( L n ) ( X n Y n ) ,
P n : = P X n ,
P : = P X ,
P 1 , n : = P type-1 ( P n , L n , ε n )
for each n ≥ 1 . By Corollary 4, instead of (113), it suffices to verify that
lim n | H α ( P 1 , n ) − log L n | + = 0
under any one of Conditions (a), (c), and (d). Similar to the proof of Theorem 5, we may assume without loss of generality that 0 < ε n < 1 .
Firstly, we shall verify Condition (a). Let Q n be an X -marginal given by
Q n ( x ) = ( 1 − ε n ) / L n if 1 ≤ x ≤ L n , P 1 , n ( x ) if x ≥ L n + 1
for each n 1 . As P 1 , n majorizes Q n , it follows by the Schur-concavity property of the Rényi entropy that
H α ( P 1 , n ) ≤ H α ( Q n ) = 1 1 − α log ( ( 1 − ε n ) α L n 1 − α + ∑ x = L n + 1 ∞ P 1 , n ( x ) α ) ≤ 1 1 − α log ( ( 1 − ε n ) α L n 1 − α ) = log L n + α 1 − α log ( 1 − ε n ) ,
where the second inequality follows by the hypothesis that α > 1 , i.e., by Condition (a). These inequalities immediately ensure (274) under Condition (a).
Second, we shall verify Condition (d) of Theorem 6. As X and { X n } n are discrete r.v.’s, note that the convergence in distribution X n d X is equivalent to P n ( x ) P ( x ) as n for each x X , i.e., the pointwise convergence P n P as n . It is well-known that the Rényi entropy α H α ( P ) is nonincreasing for α 0 ; hence, it suffices to verify (274) with α = 1 , i.e.,
lim n | H ( P 1 , n ) − log L n | + = 0 .
We now define two X -marginals Q n ( 1 ) and Q n ( 2 ) in the same ways as (249) and (250), respectively, for each n 1 . By (253), it suffices to verify whether the third term in the right-hand side of (253) approaches to zero, i.e.,
lim n ε n H ( Q n ( 2 ) ) = 0 .
This can be verified in a similar fashion to the proof of Lemma 3 of [21] as follows: Consider the X -marginal Q n ( 3 ) defined in (255) for each n 1 . Since Q n ( 2 ) ( 1 ) = 0 and ε n Q n ( 2 ) ( x ) ε n for each x 2 , we observe that
lim n ε n Q n ( 2 ) ( x ) = 0
for every x 1 ; therefore,
lim n Q n ( 3 ) ( x ) = lim n P X n ( x )
for every x ≥ 1 . Therefore, since P n converges pointwise to P as n → ∞ , we see that Q n ( 3 ) also converges pointwise to P X as ε n vanishes. Hence, by the lower semicontinuity property of the Shannon entropy, we observe that
lim inf n H ( Q n ( 3 ) ) H ( X ) ,
and we then have
H ( X ) = lim n H ( X n ) ( a ) lim sup n ε n H ( Q n ( 2 ) ) + ( 1 ε n ) H ( Q n ( 3 ) ) lim sup n ε n H ( Q n ( 2 ) ) + lim inf n ( 1 ε n ) H ( Q n ( 3 ) ) = lim sup n ε n H ( Q n ( 2 ) ) + lim inf n H ( Q n ( 3 ) ) lim sup n ε n H ( Q n ( 2 ) ) + H ( X ) ,
where (a) follows from (257). Thus, it follows from (282), the hypothesis H ( X ) < , and the nonnegativity of the Shannon entropy that (278) is valid, which proves (277) together with (253).
Finally, we shall verify Condition (c) of Theorem 6. Define the X -marginal Q ˜ n ( 2 ) by
Q ˜ n ( 2 ) ( x ) = 0 if 1 ≤ x ≤ L n , P ˜ 1 , n ( x ) / ε n if x ≥ L n + 1 ,
for each n 1 , where P ˜ 1 , n = P type-1 ( P , L n , ε n ) . Note that the difference between Q n ( 2 ) and Q ˜ n ( 2 ) is the difference between P n and P. It can be verified by the same way as (282) that
lim n ε n H ( Q ˜ n ( 2 ) ) = 0 .
It follows by the same manner as Lemma 1 of [21] that if P n majorizes P, then Q n ( 2 ) majorizes Q ˜ n ( 2 ) as well. Therefore, it follows from the Schur-concavity property of the Shannon entropy that if P n majorizes P for sufficiently large n, then
H ( Q n ( 2 ) ) ≤ H ( Q ˜ n ( 2 ) )
for sufficiently large n. Combining (284) and (285), Equation (278) also holds under Condition (c). This completes the proof of Theorem 6.

7.3. Proof of Theorem 7

To prove Theorem 7, we now give the following lemma.
Lemma 12.
If H ( Q ) < , then the map ε H ( P type-1 ( Q , L , ε ) ) is concave in the interval (29) with | Y | = .
Proof of Lemma 12.
It is well-known that for a fixed P X , the conditional Shannon entropy H ( X Y ) is concave in P Y | X (cf. [2], Theorem 2.7.4). Defining the distortion measure d : X × X L { 0 , 1 } by
d ( x , x ^ ) = 1 if x ∉ x ^ , 0 if x ∈ x ^ ,
the average probability of list decoding error is equal to the average distortion, i.e.,
P { X ∉ f ( Y ) } = E [ d ( X , f ( Y ) ) ]
for any list decoder f : Y X L . Therefore, by following Theorem 1, the concavity property of Lemma 12 can be proved by the same argument as the proof of the convexity of the rate-distortion function (cf. Lemma 10.4.1 of [2]). □
For the sake of brevity, we write
P = P X ,
P n = P X n ,
ε n = P e ( L n ) ( X n Y n ) ,
P 1 , n = P type-1 ( P n , L n , ε n ) ,
P ¯ 1 , n = P type-1 ( P n , L ¯ n , ε n )
in this proof. Define
L ¯ : = lim sup n L n .
If L ¯ = , then (115) is a trivial inequality. Therefore, it suffices to consider the case where L ¯ < .
It is clear that there exists an integer n 0 ≥ 1 such that L n ≤ L ¯ for every n ≥ n 0 . Then, we can verify that P 1 , n majorizes P ¯ 1 , n for every n ≥ n 0 as follows. Let J n and J 3 be given by (33) with ( Q , L , ε ) = ( P n , L n , ε n ) and ( Q , L , ε ) = ( P n , L ¯ , ε n ) , respectively. Similarly, let K n and K 3 be given by (34) with ( Q , L , ε ) = ( P n , L n , ε n ) and ( Q , L , ε ) = ( P n , L ¯ , ε n ) , respectively. As L n ≤ L ¯ implies that J n ≤ J 3 and K n ≤ K 3 , it can be seen from (30) that
P 1 , n ( x ) = P ¯ 1 , n ( x ) for 1 ≤ x < J n or x ≥ K 3 ,
P 1 , n ( x ) ≥ P ¯ 1 , n ( x ) for J n ≤ x ≤ L n or L ¯ < x ≤ K 3 ,
P 1 , n ( x ) ≤ P ¯ 1 , n ( x ) for L n < x ≤ L ¯ .
Therefore, noting that
∑ x = 1 L n P 1 , n ( x ) = ∑ x = 1 L ¯ P ¯ 1 , n ( x ) = 1 − ε n ,
we obtain the majorization relation P 1 , n ≻ P ¯ 1 , n for every n ≥ n 0 .
By hypothesis, there exists an integer n 1 1 such that P n majorizes P for every n n 1 . Letting n 2 = max { n 0 , n 1 } , we observe that
1 n H ( X n Y n ) ≤ 1 n ∑ i = 1 n H ( X i Y i ) = 1 n ∑ i = 1 n 2 − 1 H ( X i Y i ) + 1 n ∑ j = n 2 n H ( X j Y j ) ≤ ( a ) n 2 − 1 n max 1 ≤ i < n 2 H ( X i ) + 1 n ∑ j = n 2 n H ( P ¯ 1 , j ) ≤ ( b ) n 2 − 1 n max 1 ≤ i < n 2 H ( X i ) + 1 n ∑ j = n 2 n H ( P type-1 ( P , L ¯ , ε j ) ) ≤ ( c ) n 2 − 1 n max 1 ≤ i < n 2 H ( X i ) + n − n 2 + 1 n H ( P type-1 ( P , L ¯ , ε ¯ n ) )
for every n n 2 , where
  • (a) follows by Corollary 4 and P 1 , n P ¯ 1 , n ,
  • (b) follows by Condition (b) of Theorem 6 and the same manner as ([21], Lemma 1), and
  • (c) follows by Lemma 12 together with the following definition
    ε ¯ n : = 1 n − n 2 + 1 ∑ j = n 2 n ε j = 1 n − n 2 + 1 ∑ j = n 2 n P e ( L j ) ( X j Y j ) .
Note that the Schur-concavity property of the Shannon entropy is used in both (b) and (c) of (298). As
lim n P e , sym . ( L ) ( X n Y n ) = 0 ⟹ lim n ε ¯ n = 0 ,
it follows from (274) that there exists an integer n 3 1 such that
H ( P type-1 ( P , L ¯ , ε ¯ n ) ) ≤ log L ¯
for every n n 3 . Therefore, it follows from (298) that
1 n H ( X n Y n ) ≤ n 2 − 1 n max 1 ≤ i < n 2 H ( X i ) + n − n 2 + 1 n log L ¯
for every n ≥ max { n 2 , n 3 } . Finally, letting n → ∞ in (302), we have (115). This completes the proof of Theorem 7.

8. Concluding Remarks

8.1. Impossibility of Establishing Fano-Type Inequality

In Section 3, we explored the principal maximization problem H ϕ ( Q , L , ε , Y ) defined in (7) without any explicit form of ϕ under the three postulates: ϕ is symmetric, concave, and lower semicontinuous. If ε > 0 and we impose another postulate on ϕ , then we can also avoid the (degenerate) case in which ϕ ( Q ) = . The following proposition shows this fact.
Proposition 9.
Let g 1 : [ 0 , 1 ] [ 0 , ) be a function satisfying g 1 ( 0 ) = 0 , and g 2 : [ 0 , ] [ 0 , ] a function satisfying g 2 ( u ) = only if u = . Suppose that ε > 0 and ϕ : P ( X ) [ 0 , ] is of the form
ϕ ( Q ) = g 2 x X g 1 Q ( x ) .
Then, it holds that
H ϕ ( Q , L , ε , Y ) < ∞ if and only if ϕ ( Q ) < ∞ .
Proof of Proposition 9.
See Appendix D. □
As seen in Section 4, the conditional Shannon and Rényi entropies can be expressed by H ϕ ( X Y ) ; and then ϕ must satisfy (303). Proposition 9 shows that we cannot establish an effective Fano-type inequality based on the conditional information measure H ϕ ( X Y ) subject to our original postulates in Section 2.1, provided that (i) ϕ satisfies the additional postulate of (303), (ii) ε > 0 , and (iii) ϕ ( Q ) = . This generalizes a pathological example given in Example 2.49 of [4], which states issues of the interplay between conditional information measures and error probabilities over countably infinite alphabets X ; see Section 1.2.1.

8.2. Postulational Characterization of Conditional Information Measures

Our Fano-type inequalities were stated in terms of the general conditional information H ϕ ( X Y ) defined in Section 2.1. As shown in Section 4, the quantity H ϕ ( X Y ) can be specialized to Shannon’s and Rényi’s information measures. Moreover, the quantity H ϕ ( X Y ) can be further specialized to the following quantities:
  • If ϕ = ‖ · ‖ 1 / 2 , then H ϕ ( X Y ) coincides with the (unnormalized) Bhattacharyya parameter (cf. Definition 17 of [80] and Section 4.2.1 of [81]) defined by
    B ( X Y ) : = E [ ∑ x , x ′ ∈ X √( P X | Y ( x ) P X | Y ( x ′ ) ) ] .
    Note that the Bhattacharyya parameter is often defined as Z ( X Y ) : = ( B ( X Y ) − 1 ) / ( M − 1 ) to normalize it as 0 ≤ Z ( X Y ) ≤ 1 , provided that X is { 0 , 1 , , M − 1 } -valued. When X takes values in a finite alphabet with a certain algebraic structure, the Bhattacharyya parameter B ( X Y ) is useful in analyzing the speed of polarization for non-binary polar codes (cf. [80,81]). Note that B ( X Y ) is a monotone function of Arimoto's conditional Rényi entropy (64) of order α = 1 / 2 .
  • If ϕ = 1 − ‖ · ‖ 2 2 , then H ϕ ( X Y ) coincides with the conditional quadratic entropy [82] defined by
    H o ( X Y ) : = E [ ∑ x ∈ X P X | Y ( x ) ( 1 − P X | Y ( x ) ) ] ,
    which is used in the analysis of stochastic decoding (see, e.g., [83]). Note that H o ( X Y ) is a monotone function of Hayashi’s conditional Rényi entropy (69) of order α = 2 .
  • If X is { 1 , 2 , , M } -valued, then one can define the following (variational distance-like) conditional quantity:
    K ( X Y ) : = E [ 1 2 ( M − 1 ) ∑ x = 1 M ∑ x ′ = 1 M | P X | Y ( x ) − P X | Y ( x ′ ) | ] .
    Note that 0 ≤ K ( X Y ) ≤ 1 . This quantity K ( X Y ) was introduced by Shuval–Tal [84] to analyze the speed of polarization of non-binary polar codes for sources with memory. When we define the function d ¯ : P ( { 1 , 2 , , M } ) → [ 0 , 1 ] by
    d ¯ ( P ) : = 1 2 ( M − 1 ) ∑ x = 1 M ∑ x ′ = 1 M | P ( x ) − P ( x ′ ) | ,
    it holds that K ( X Y ) = H d ¯ ( X Y ) . Clearly, the function d ¯ is symmetric, convex, and continuous.
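Each of the three quantities above is an instance of H ϕ ( X Y ) = E [ ϕ ( P X | Y ) ] and can be computed mechanically from a finite joint distribution. A minimal Python/NumPy sketch (the joint pmf is a hypothetical example):

```python
import numpy as np

P_XY = np.array([[0.30, 0.10],     # hypothetical joint pmf; rows: x, columns: y
                 [0.05, 0.25],
                 [0.15, 0.15]])
P_Y = P_XY.sum(axis=0)
P_XgY = P_XY / P_Y                 # column y holds the conditional P_{X|Y=y}
M = P_XY.shape[0]

def H_phi(phi):
    """H_phi(X|Y) = E[phi(P_{X|Y})] over the observation Y."""
    return float(sum(P_Y[y] * phi(P_XgY[:, y]) for y in range(len(P_Y))))

bhattacharyya = H_phi(lambda p: np.sqrt(p).sum() ** 2)         # phi = ||.||_{1/2}
quadratic     = H_phi(lambda p: (p * (1 - p)).sum())           # phi = 1 - ||.||_2^2
shuval_tal    = H_phi(lambda p: np.abs(p[:, None] - p[None, :]).sum() / (2 * (M - 1)))

print(bhattacharyya, quadratic, shuval_tal)
```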
On the other hand, the quantity H ϕ ( X Y ) has the following properties that are appealing in information theory:
  • As ϕ is concave, lower bounded, and lower semicontinuous, it follows from Jensen’s inequality for an extended real-valued function on a closed, convex, and bounded subset of a Banach space ([14], Proposition A-2) that
    H ϕ ( X Y ) ≤ ϕ ( P X ) .
    This bound is analogous to the property that conditioning reduces entropy (cf. [2], Theorem 2.6.5).
  • It is easy to check that for any (deterministic) mapping g : X A with A X , the conditional distribution P g ( X ) | Y majorizes P X | Y a.s. Thus, it follows from Proposition 1 that for any mapping g : X A ,
    H ϕ ( g ( X ) Y ) ≤ H ϕ ( X Y ) ,
    which is a counterpart of the data processing inequality (cf. Equations (26)–(28) of [72]).
  • As shown in Section 3, the quantity H ϕ ( X Y ) also satisfies appropriate generalizations of Fano’s inequality.
Therefore, similar to the family of f-divergences [85,86], the quantity H ϕ ( X Y ) is a generalization of various information-theoretic conditional quantities that also admit certain desirable properties. In addition, we can establish Fano-type inequalities based on H ϕ ( X Y ) ; this characterization provides insights on how to measure conditional information axiomatically.

8.3. When Does Vanishing Error Probabilities Imply Vanishing Equivocations?

In the list decoding setting, the rate of a block code with codeword length n, message size M n , and list size L n can be defined as ( 1 / n ) log ( M n / L n ) (cf. [87]). Motivated by this, we established asymptotic behaviors of this quantity in Theorems 5 and 6. We would like to emphasize that Example 2 shows that Ahlswede–Gács–Körner’s proof technique described in Chapter 5 of [42] (see also Section 3.6.2 of [43]) works for an i.i.d. source on a countably infinite alphabet, provided that the alphabets { Y n } n = 1 are finite.
Theorem 5 states that the asymptotic growth of H ( X n Y n ) − log L n is strictly slower than H ( X n ) , provided that the general source X = { X n } n = 1 satisfies the AEP and the error probabilities vanish (i.e., P e ( L n ) ( X n Y n ) = o ( 1 ) as n → ∞ ). This is a novel characterization of the AEP via Fano's inequality. An instance of this characterization using the Poisson source (cf. Example 4 of [25]) was provided in Example 3.

8.4. Future Works

  • While there are various studies of reverse Fano inequalities [22,23,49,50,51,52], this study has focused only on the forward Fano inequality. Generalizing reverse Fano inequalities in the same spirit as this study would be of interest.
  • Important technical tools used in our analysis are the finite- and infinite-dimensional versions of Birkhoff’s theorem; they were employed to satisfy the constraint that $P_X = Q$. As a similar constraint is imposed in many information-theoretic problems, e.g., coupling problems (cf. [7,88,89]), finding further applications of the infinite-dimensional version of Birkhoff’s theorem would refine technical tools, and potentially results, when dealing with communication systems on countably infinite alphabets.
  • We have described a novel connection between the AEP and Fano’s inequality in Theorem 5; its role in the classification of sources and channels and its applications to other coding problems are of interest.

Funding

This research was funded by JSPS KAKENHI Grant Number 17J11247.

Acknowledgments

The author would like to thank Ken-ichi Iwata for his valuable comments on an earlier version of this paper, and Vincent Y. F. Tan for insightful comments and suggestions that greatly improved this paper. The author also expresses his gratitude to an anonymous reviewer of the IEEE Transactions on Information Theory and three anonymous reviewers of this journal for carefully checking the technical parts and providing many valuable comments. Finally, the author would like to thank the Guest Editor, Amos Lapidoth, for inviting the author to this Special Issue and supporting this paper.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Proof of Proposition 2

The proposition is quite obvious; it is similar to ([90], Equation (1)). Here, we prove it to make this paper self-contained. For a given list decoder $f : \mathcal{Y} \to \mathcal{X}^{L}$ with list size $1 \le L < \infty$, it follows that
\[ \mathbb{P}\{ X \notin f(Y) \} = \mathbb{E}\big[ \mathbb{E}[ \mathbf{1}\{ X \notin f(Y) \} \mid Y ] \big] = \mathbb{E}\Bigg[ \sum_{x \notin f(Y)} P_{X|Y}(x) \Bigg] \overset{\mathrm{(a)}}{\ge} \mathbb{E}\Bigg[ \sum_{x = L+1}^{\infty} P_{X|Y}^{\downarrow}(x) \Bigg], \tag{A1} \]
where the equality in (a) can be achieved by an optimal list decoder $f^{\ast}$ satisfying that $X \notin f^{\ast}(Y)$ only if $P_{X|Y}(X) = P_{X|Y}^{\downarrow}(k)$ for some $k \ge L+1$. This completes the proof of Proposition 2.
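The following Python sketch illustrates the content of Proposition 2 on an assumed toy joint distribution: the optimal list decoder keeps, for each y, the L largest conditional masses, and any other decoder (here, a naive fixed list) can only do worse. All names and values are illustrative.

```python
import numpy as np

# Illustrative joint distribution; M, N, L and all names are assumptions.
rng = np.random.default_rng(2)
M, N, L = 6, 3, 2
P_XY = rng.random((M, N))
P_XY /= P_XY.sum()
P_Y = P_XY.sum(axis=0)
P_XgY = P_XY / P_Y

# Error of the optimal list decoder f*(y) = the L most likely symbols.
P_e_opt = 1.0 - sum(
    P_Y[y] * np.sort(P_XgY[:, y])[::-1][:L].sum() for y in range(N)
)

# A naive decoder that always outputs the fixed list {0, ..., L-1}.
P_e_naive = 1.0 - P_XY[:L, :].sum()

assert P_e_opt <= P_e_naive + 1e-12
print(f"optimal P_e = {P_e_opt:.4f} <= naive P_e = {P_e_naive:.4f}")
```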

Appendix B. Proof of Proposition 3

The second inequality in (27) is a direct consequence of Proposition 2 and (123). The sharpness of the second bound can be easily verified by taking X and Y to be statistically independent.
We next prove the first inequality in (27). When $\mathcal{Y}$ is infinite, the first inequality reduces to the trivial bound $P_e^{(L)}(X \mid Y) \ge 0$, and it holds with equality by identifying $\mathcal{X}$ with a subset of $\mathcal{Y}$ and setting $X = Y$ a.s. Therefore, it suffices to consider the case where $\mathcal{Y}$ is finite. Assume without loss of generality that
\[ \mathcal{Y} = \{ 0, 1, \dots, N-1 \} \tag{A2} \]
for some positive integer N. By the definition of cardinality, there exists a subset $\mathcal{Z} \subseteq \mathcal{X}$ satisfying (i) $|\mathcal{Z}| = L N$ and (ii) for each $x \in \{1, 2, \dots, L\}$ and $y \in \{0, 1, \dots, N-1\}$, there exists an element $z \in \mathcal{Z}$ satisfying $P_{X|Y=y}(z) = P_{X|Y=y}^{\downarrow}(x)$. Then,
\[ P_e^{(L)}(X \mid Y) \overset{\mathrm{(a)}}{=} 1 - \sum_{y \in \mathcal{Y}} P_Y(y) \sum_{x=1}^{L} P_{X|Y=y}^{\downarrow}(x) \overset{\mathrm{(b)}}{\ge} 1 - \sum_{y \in \mathcal{Y}} P_Y(y) \sum_{x \in \mathcal{Z}} P_{X|Y=y}(x) = 1 - \sum_{x \in \mathcal{Z}} P_X(x) \overset{\mathrm{(c)}}{\ge} 1 - \sum_{x=1}^{L N} Q^{\downarrow}(x), \tag{A3} \]
where
  • (a) follows from Proposition 2,
  • (b) follows from the construction of $\mathcal{Z}$, and
  • (c) follows from the facts that $|\mathcal{Z}| = L N$ and $P_X = Q$.
This is indeed the first inequality in (27). Finally, the sharpness of the first inequality can be verified via the $\mathcal{X} \times \mathcal{Y}$-valued r.v. $(U, V)$ determined by
\[ P_{U|V=v}(u) = \begin{cases} \dfrac{\omega_2(Q, L, \varepsilon)}{\omega_1(Q, v, L)} \, Q^{\downarrow}(u) & \text{if } vL < u \le (1+v)L, \\[1ex] Q^{\downarrow}(u) & \text{if } LN < u < \infty, \\[0.5ex] 0 & \text{otherwise}, \end{cases} \tag{A4} \]
\[ P_V(v) = \frac{\omega_1(Q, v, L)}{\omega_2(Q, L, \varepsilon)}, \tag{A5} \]
where $\omega_1(Q, v, L)$ and $\omega_2(Q, L, \varepsilon)$ are defined by
\[ \omega_1(Q, v, L) := \sum_{u=1+vL}^{(1+v)L} Q^{\downarrow}(u), \tag{A6} \]
\[ \omega_2(Q, L, \varepsilon) := \sum_{v=0}^{N-1} \omega_1(Q, v, L). \tag{A7} \]
A direct calculation shows that $P_U = Q$ and
\[ P_e^{(L)}(U \mid V) = 1 - \sum_{x=1}^{L N} Q^{\downarrow}(x), \tag{A8} \]
which implies the sharpness of the first inequality. This completes the proof of Proposition 3.
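The extremal construction (A4)–(A7) can be checked numerically. The sketch below is illustrative (it assumes Q is already arranged in decreasing order, so $Q = Q^{\downarrow}$, and the particular Q, L, N are arbitrary): it builds $P_{U|V}$ and $P_V$, then verifies the marginal constraint $P_U = Q$ and the claimed error probability (A8).

```python
import numpy as np

# Assumed X-marginal Q, already sorted in decreasing order (illustrative).
Q = np.array([0.3, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05, 0.05])
L, N = 2, 2                                  # list size L, |Y| = N
w1 = np.array([Q[v*L:(v+1)*L].sum() for v in range(N)])  # omega_1(Q, v, L)
w2 = w1.sum()                                             # omega_2

# P_{U|V=v}(u) per (A4) and P_V(v) per (A5).
P_UgV = np.zeros((len(Q), N))
for v in range(N):
    P_UgV[v*L:(v+1)*L, v] = (w2 / w1[v]) * Q[v*L:(v+1)*L]
    P_UgV[L*N:, v] = Q[L*N:]                 # untouched tail beyond LN
P_V = w1 / w2

P_U = P_UgV @ P_V
assert np.allclose(P_U, Q)                   # marginal constraint P_U = Q

# Optimal list-decoding error equals the lower bound 1 - sum_{x<=LN} Q(x).
P_e = 1.0 - sum(P_V[v] * np.sort(P_UgV[:, v])[::-1][:L].sum()
                for v in range(N))
assert np.isclose(P_e, 1.0 - Q[:L*N].sum())
print(f"P_e^(L)(U|V) = {P_e:.4f} = 1 - sum over x <= LN of Q(x)")
```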

Appendix C. Proof of Proposition 4

Equations (36), (37), and (39)–(42) directly follow from the definitions stated in (30)–(34). Equation (38) follows from (28) and (37).
Finally, we shall verify that $P_{\text{type-1}}$ majorizes Q; in other words, we prove that
\[ \sum_{x=1}^{k} P_{\text{type-1}}(x) \ge \sum_{x=1}^{k} Q^{\downarrow}(x) \tag{A9} \]
for every $k \ge 1$. Equation (39) implies that (A9) holds with equality for every $1 \le k < J$. Moreover, it follows from (33) and (39) that (A9) holds for every $J \le k \le L$. On the other hand, Equation (42) implies that
\[ \sum_{x=k}^{\infty} P_{\text{type-1}}(x) = \sum_{x=k}^{\infty} Q^{\downarrow}(x) \tag{A10} \]
for every $k > K - 1$. Combining (41), (A9) with $k = L$, and (A10), we observe that (A9) holds for every $k > L$. Therefore, we have that $P_{\text{type-1}}$ majorizes Q, as desired.
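A generic numerical check of majorization via prefix sums, in the spirit of (A9) and (A10), is sketched below; the two distributions are illustrative stand-ins rather than the type-1 construction itself.

```python
import numpy as np

def majorizes(P, Q, tol=1e-12):
    """P majorizes Q iff every prefix sum of the decreasing rearrangement
    of P dominates that of Q, and the total masses agree."""
    p = np.sort(P)[::-1].cumsum()
    q = np.sort(Q)[::-1].cumsum()
    return bool(np.all(p >= q - tol) and abs(p[-1] - q[-1]) < tol)

# Illustrative distributions: P is "more concentrated" than Q.
Q = np.array([0.4, 0.3, 0.2, 0.1])
P = np.array([0.6, 0.2, 0.1, 0.1])
assert majorizes(P, Q) and not majorizes(Q, P)
print("P majorizes Q:", majorizes(P, Q))
```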

Appendix D. Proof of Proposition 9

The “if” part (⇐) of Proposition 9 is quite obvious from Jensen’s inequality, even if $\phi : \mathcal{P}(\mathcal{X}) \to [0, \infty]$ is not of the form (303). Therefore, it suffices to prove the “only if” part (⇒). In other words, we shall prove the following contrapositive:
\[ \phi(Q) = \infty \implies \sup_{(X, Y) :\, P_e^{(L)}(X \mid Y) \le \varepsilon,\; P_X = Q} H_\phi(X \mid Y) = \infty. \tag{A11} \]
In the following, we show (A11) by employing Lemma 7 of Section 6.3.
Since $g_2(u) = \infty$ only if $u = \infty$, it is immediate from (303) that
\[ \phi(Q) = \infty \implies \sum_{x \in \mathcal{X}} g_1(Q(x)) = \infty, \tag{A12} \]
where we note that $\phi(Q) = \infty$ implies that $g_2(\infty) = \infty$ as well. Moreover, since $g_1(0) = 0$, we get
\[ \sum_{x \in \mathcal{X}} g_1(Q(x)) = \infty \implies |\mathrm{supp}(Q)| = \infty. \tag{A13} \]
Due to (29), we can find a finite subset $\mathcal{S} \subseteq \mathcal{Y}$ satisfying
\[ 1 - \sum_{x=1}^{L \cdot |\mathcal{S}|} Q^{\downarrow}(x) \le \varepsilon \tag{A14} \]
by taking a finite but sufficiently large cardinality $|\mathcal{S}| < \infty$. This implies that the new system $(Q, L, \varepsilon, \mathcal{S})$ still satisfies (29); thus, it follows from Proposition 3 that there exists an $\mathcal{X} \times \mathcal{S}$-valued r.v. $(X, Y)$ satisfying $P_e^{(L)}(X \mid Y) \le \varepsilon$ and $P_X = Q$. Therefore, the feasible region
\[ \mathcal{R}_2 = \mathcal{R}(Q, L, \varepsilon, \mathcal{S}) \tag{A15} \]
defined in (129) is nonempty by this choice of $\mathcal{S}$. As $\mathcal{S} \subseteq \mathcal{Y}$, it is clear that $\mathcal{R}_2 \subseteq \mathcal{R}_1$, where
\[ \mathcal{R}_1 = \mathcal{R}(Q, L, \varepsilon, \mathcal{Y}). \tag{A16} \]
By Lemma 7, one can find $\mathcal{Z} \subseteq \mathcal{X}$ so that $|\mathcal{Z}| = L \cdot |\mathcal{S}|$ and
\[ \mathcal{R}_3 = \mathcal{R}(Q, L, \varepsilon, \mathcal{S}, \mathcal{Z}) \tag{A17} \]
defined in (188) is nonempty as well. Moreover, since the error probability constraint defining $\mathcal{R}_3$ is at least as stringent as that defining $\mathcal{R}_2$ (cf. (188)), it follows that $\mathcal{R}_3 \subseteq \mathcal{R}_2$. Then, we have
\[ H_\phi(Q, L, \varepsilon, \mathcal{Y}) \overset{\mathrm{(a)}}{=} \sup_{(X, Y) \in \mathcal{R}_1} H_\phi(X \mid Y) \overset{\mathrm{(b)}}{\ge} \sup_{(X, Y) \in \mathcal{R}_3} H_\phi(X \mid Y) \overset{\mathrm{(c)}}{\ge} \inf_{\substack{R \in \mathcal{P}(\mathcal{X}) :\\ R(x) = Q(x) \text{ for } x \in \mathcal{X} \setminus \mathcal{Z}}} g_2\Bigg( \sum_{x \in \mathcal{X}} g_1(R(x)) \Bigg) \overset{\mathrm{(d)}}{=} \infty, \tag{A18} \]
where
  • (a) follows by the definition of $\mathcal{R}_1$ stated in (129),
  • (b) follows by the inclusions
    \[ \mathcal{R}_3 \subseteq \mathcal{R}_2 \subseteq \mathcal{R}_1, \tag{A19} \]
  • (c) follows from the fact that $(X, Y) \in \mathcal{R}_3$ implies that
    \[ P_{X|Y=y}(x) = Q(x) \tag{A20} \]
    for $x \in \mathcal{X} \setminus \mathcal{Z}$ and $y \in \mathcal{S}$, and
  • (d) follows from the facts that
    \[ |\mathrm{supp}(Q) \setminus \mathcal{Z}| = \infty, \tag{A21} \]
    \[ g_1(u) \ge 0 \quad (\text{for } 0 \le u \le 1), \tag{A22} \]
    \[ g_2(\infty) = \infty. \tag{A23} \]
The chain of inequalities (A18) implies (A11), completing the proof of Proposition 9.
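The truncation step (A14) is easy to visualize numerically. The following sketch is illustrative only: the heavy-tailed Q (proportional to $1/x^2$) and all names are assumptions, not objects from the paper. It grows $|\mathcal{S}|$ until the tail mass of Q beyond its first $L \cdot |\mathcal{S}|$ atoms drops below ε.

```python
import numpy as np

# Assumed heavy-tailed X-marginal Q(x) = C / x^2 on x = 1, 2, ...
L, eps = 2, 0.05
C = 6 / np.pi ** 2                      # normalizer of sum_{x>=1} 1/x^2

def tail(m):
    """Tail mass 1 - sum_{x <= m} Q(x)."""
    return 1.0 - C * np.sum(1.0 / np.arange(1, m + 1) ** 2)

S_size = 1
while tail(L * S_size) > eps:           # grow |S| until (A14) holds
    S_size += 1
print(f"|S| = {S_size}: tail mass beyond L|S| atoms = "
      f"{tail(L * S_size):.4f} <= {eps}")
```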

Appendix E. Proof of Lemma 6

This lemma is quite elementary, but we prove it to make the paper self-contained. We proceed by contradiction. Suppose that (140) and (141) hold, but (142) does not. Then, there must exist an $l \in \{k, k+1, \dots, n-1\}$ satisfying
\[ \sum_{i=1}^{l} p_i < \sum_{i=1}^{l} q_i. \tag{A24} \]
As $q_j$ is constant for each $j = k, k+1, \dots, n$, it follows from (140) and (A24) that $p_j < q_j$ for every $j = l, l+1, \dots, n$. Then, we observe that
\[ \sum_{i=1}^{n} p_i < \sum_{i=1}^{n} q_i, \tag{A25} \]
which contradicts hypothesis (141); therefore, Lemma 6 must hold.
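Since the statements (140)–(142) are not reproduced in this appendix, the following randomized Python check encodes one natural reading of Lemma 6 — p nonincreasing, q constant on $\{k, \dots, n\}$, prefix-sum domination up to $k - 1$, and equal totals — and verifies that the domination then extends to all of $\{k, \dots, n\}$, as the proof above argues.

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(1000):
    n, k = 10, 4
    p = np.sort(rng.random(n))[::-1]        # nonincreasing p (role of (140))
    q = np.concatenate([rng.random(k - 1),
                        np.full(n - k + 1, rng.random())])  # constant tail
    q *= p.sum() / q.sum()                  # equal totals (role of (141))
    if not np.all(p.cumsum()[:k - 1] >= q.cumsum()[:k - 1] - 1e-12):
        continue                            # premise fails; skip instance
    # Conclusion (role of (142)): domination for every l in {k, ..., n}.
    assert np.all(p.cumsum()[k - 1:] >= q.cumsum()[k - 1:] - 1e-9)
print("Lemma 6 held on all sampled instances.")
```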

References

  1. Fano, R.M. Class Notes for Transmission of Information; MIT Press: Cambridge, MA, USA, 1952. [Google Scholar]
  2. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: New York, NY, USA, 2006. [Google Scholar]
  3. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  4. Yeung, R.W. Information Theory and Network Coding; Springer: New York, NY, USA, 2008. [Google Scholar]
  5. Zhang, Z. Estimating mutual information via Kolmogorov distance. IEEE Trans. Inf. Theory 2007, 53, 3280–3283. [Google Scholar] [CrossRef]
  6. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed.; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  7. Sason, I. Entropy bounds for discrete random variables via maximal coupling. IEEE Trans. Inf. Theory 2013, 59, 7118–7131. [Google Scholar] [CrossRef] [Green Version]
  8. Arimoto, S. Information measures and capacity of order α for discrete memoryless channels. Topics Inf. Theory 1977, 16, 41–52. [Google Scholar]
  9. Hayashi, M. Exponential decreasing rate of leaked information in universal random privacy amplification. IEEE Trans. Inf. Theory 2011, 57, 3989–4001. [Google Scholar] [CrossRef] [Green Version]
  10. Marshall, A.W.; Olkin, I.; Arnold, B.C. Inequalities: Theory of Majorization and Its Applications, 2nd ed.; Springer: New York, NY, USA, 2011. [Google Scholar]
  11. Fano, R.M. Transmission of Information: A Statistical Theory of Communication; MIT Press: Cambridge, MA, USA, 1961. [Google Scholar]
  12. Massey, J.L. Applied Digital Information Theory I; Lecture Notes, Signal and Information Processing Laboratory, ETH Zürich. Available online: http://www.isiweb.ee.ethz.ch/archive/massey_scr/ (accessed on 20 January 2020).
  13. Sakai, Y.; Iwata, K. Extremality between symmetric capacity and Gallager’s reliability function E0 for ternary-input discrete memoryless channels. IEEE Trans. Inf. Theory 2018, 64, 163–191. [Google Scholar] [CrossRef]
  14. Shirokov, M.E. On properties of the space of quantum states and their application to the construction of entanglement monotones. Izv. Math. 2010, 74, 849–882. [Google Scholar] [CrossRef] [Green Version]
  15. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Some simple inequalities satisfied by convex functions. Messenger Math. 1929, 58, 145–152. [Google Scholar]
  16. Markus, A.S. The eigen- and singular values of the sum and product of linear operators. Russian Math. Surv. 1964, 19, 91–120. [Google Scholar] [CrossRef]
  17. Birkhoff, G. Lattice Theory, revised ed.; American Mathematical Society: Providence, RI, USA, 1948. [Google Scholar]
  18. Révész, P. A probabilistic solution of problem 111 of G. Birkhoff. Acta Math. Hungar. 1962, 3, 188–198. [Google Scholar] [CrossRef]
  19. Birkhoff, G. Tres observaciones sobre el algebra lineal. Univ. Nac. Tucumán Rev. Ser. A 1946, 5, 147–151. [Google Scholar]
  20. Erokhin, V. ε-entropy of a discrete random variable. Theory Probab. Appl. 1958, 3, 97–100. [Google Scholar] [CrossRef]
  21. Ho, S.-W.; Verdú, S. On the interplay between conditional entropy and error probability. IEEE Trans. Inf. Theory 2010, 56, 5930–5942. [Google Scholar] [CrossRef]
  22. Sakai, Y.; Iwata, K. Sharp bounds on Arimoto’s conditional Rényi entropies between two distinct orders. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 2975–2979. [Google Scholar]
  23. Sason, I.; Verdú, S. Arimoto–Rényi conditional entropy and Bayesian M-ary hypothesis testing. IEEE Trans. Inf. Theory 2018, 64, 4–25. [Google Scholar] [CrossRef]
  24. Sibson, R. Information radius. Z. Wahrsch. Verw. Geb. 1969, 14, 149–161. [Google Scholar] [CrossRef]
  25. Verdú, S.; Han, T.S. The role of the asymptotic equipartition property in noiseless coding theorem. IEEE Trans. Inf. Theory 1997, 43, 847–857. [Google Scholar] [CrossRef]
  26. Nummelin, E. Uniform and ratio limit theorems for Markov renewal and semi-regenerative processes on a general state space. Ann. Inst. Henri Poincaré Probab. Statist. 1978, 14, 119–143. [Google Scholar]
  27. Athreya, K.B.; Ney, P. A new approach to the limit theory of recurrent Markov chains. Trans. Amer. Math. Soc. 1978, 245, 493–501. [Google Scholar] [CrossRef]
  28. Kumar, G.R.; Li, C.T.; El Gamal, A. Exact common information. In Proceedings of the 2014 IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014; pp. 161–165. [Google Scholar]
  29. Vellambi, B.N.; Kliewer, J. Sufficient conditions for the equality of exact and Wyner common information. In Proceedings of the 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 27–30 September 2016; pp. 370–377. [Google Scholar]
  30. Vellambi, B.N.; Kliewer, J. New results on the equality of exact and Wyner common information rates. In Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 151–155. [Google Scholar]
  31. Yu, L.; Tan, V.Y.F. On exact and ∞-Rényi common informations. IEEE Trans. Inf. Theory 2020. [Google Scholar] [CrossRef]
  32. Yu, L.; Tan, V.Y.F. Exact channel synthesis. IEEE Trans. Inf. Theory 2020. [Google Scholar] [CrossRef]
  33. Han, T.S. Information-Spectrum Methods in Information Theory; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  34. Ho, S.-W.; Yeung, R.W. On the discontinuity of the Shannon information measures. IEEE Trans. Inf. Theory 2009, 55, 5362–5374. [Google Scholar]
  35. Kovačević, M.; Stanojević, I.; Šenk, V. Some properties of Rényi entropy over countably infinite alphabets. Probl. Inf. Transm. 2013, 49, 99–110. [Google Scholar] [CrossRef] [Green Version]
  36. Ho, S.-W.; Yeung, R.W. On information divergence measures and a unified typicality. IEEE Trans. Inf. Theory 2010, 56, 5893–5905. [Google Scholar] [CrossRef] [Green Version]
  37. Madiman, M.; Wang, L.; Woo, J.O. Majorization and Rényi entropy inequalities via Sperner theory. Discrete Math. 2019, 342, 2911–2923. [Google Scholar] [CrossRef] [Green Version]
  38. Sperner, E. Ein Satz über Untermengen einer endlichen Menge. Math. Z. 1928, 27, 544–548. [Google Scholar] [CrossRef]
  39. Berger, T. Rate Distortion Theory: A Mathematical Basis for Data Compression; Prentice-Hall: Englewood Cliffs, NJ, USA, 1971. [Google Scholar]
  40. Ahlswede, R. Extremal properties of rate-distortion functions. IEEE Trans. Inf. Theory 1990, 36, 166–171. [Google Scholar] [CrossRef]
  41. Kostina, V.; Polyanskiy, Y.; Verdú, S. Variable-length compression allowing errors. IEEE Trans. Inf. Theory 2015, 61, 4316–4330. [Google Scholar] [CrossRef] [Green Version]
  42. Ahlswede, R.; Gács, P.; Körner, J. Bounds on conditional probabilities with applications in multi-user communication. Z. Wahrsch. Verw. Geb. 1976, 34, 157–177. [Google Scholar] [CrossRef] [Green Version]
  43. Raginsky, M.; Sason, I. Concentration of measure inequalities in information theory, communications, and coding. Found. Trends Commun. Inf. Theory 2014, 10, 1–259. [Google Scholar] [CrossRef]
  44. Wolfowitz, J. Coding Theorems of Information Theory, 3rd ed.; Springer: New York, NY, USA, 1978. [Google Scholar]
  45. Dueck, G. The strong converse to the coding theorem for the multiple-access channel. J. Combinat. Inf. Syst. Sci. 1981, 6, 187–196. [Google Scholar]
  46. Fong, S.L.; Tan, V.Y.F. A proof of the strong converse theorem for Gaussian multiple access channels. IEEE Trans. Inf. Theory 2016, 62, 4376–4394. [Google Scholar] [CrossRef] [Green Version]
  47. Fong, S.L.; Tan, V.Y.F. A proof of the strong converse theorem for Gaussian broadcast channels via the Gaussian Poincaré inequality. IEEE Trans. Inf. Theory 2017, 63, 7737–7746. [Google Scholar] [CrossRef] [Green Version]
  48. Kim, Y.; Sutivong, A.; Cover, T.M. State amplification. IEEE Trans. Inf. Theory 2008, 54, 1850–1859. [Google Scholar] [CrossRef] [Green Version]
  49. Kovalevsky, V.A. The problem of character recognition from the point of view of mathematical statistics. In Character Readers and Pattern Recognition; Spartan Books: Washington, DC, USA, 1968; pp. 3–30, (Russian edition in 1965). [Google Scholar]
  50. Chu, J.; Chueh, J. Inequalities between information measures and error probability. J. Franklin Inst. 1966, 282, 121–125. [Google Scholar] [CrossRef]
  51. Tebbe, D.L.; Dwyer, S.J., III. Uncertainty and probability of error. IEEE Trans. Inf. Theory 1968, 14, 516–518. [Google Scholar] [CrossRef]
  52. Feder, M.; Merhav, N. Relations between entropy and error probability. IEEE Trans. Inf. Theory 1994, 40, 259–266. [Google Scholar] [CrossRef] [Green Version]
  53. Prasad, S. Bayesian error-based sequences of statistical information bounds. IEEE Trans. Inf. Theory 2015, 61, 5052–5062. [Google Scholar] [CrossRef]
  54. Ben-Bassat, M.; Raviv, J. Rényi entropy and probability of error. IEEE Trans. Inf. Theory 1978, 24, 324–331. [Google Scholar] [CrossRef]
  55. Rényi, A. On measures of entropy and information. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 547–561. [Google Scholar]
  56. Han, T.S.; Verdú, S. Generalizing the Fano inequality. IEEE Trans. Inf. Theory 1994, 40, 1247–1251. [Google Scholar]
  57. Polyanskiy, Y.; Verdú, S. Arimoto channel coding converse and Rényi divergence. In Proceedings of the 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Allerton, IL, USA, 29 September–1 October 2010; pp. 1327–1333. [Google Scholar]
  58. Sason, I. On data-processing and majorization inequalities for f-divergences with applications. Entropy 2019, 21, 1022. [Google Scholar] [CrossRef] [Green Version]
  59. Liu, J.; Verdú, S. Beyond the blowing-up lemma: sharp converses via reverse hypercontractivity. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 943–947. [Google Scholar]
  60. Topsøe, F. Basic concepts, identities and inequalities—the toolkit of information theory. Entropy 2001, 3, 162–190. [Google Scholar] [CrossRef]
  61. Van Erven, T.; Harremoës, P. Rényi divergence and Kullback–Leibler divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820. [Google Scholar] [CrossRef] [Green Version]
  62. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423 and 623–656. [Google Scholar] [CrossRef] [Green Version]
  63. Dudley, R.M. Real Analysis and Probability, 2nd ed.; Cambridge University Press: Cambridge, UK, 2002. [Google Scholar]
  64. Sakai, Y.; Tan, V.Y.F. Variable-length source dispersions differ under maximum and average error criteria. arXiv 2019, arXiv:1910.05724. [Google Scholar]
  65. Fehr, S.; Berens, S. On the conditional Rényi entropy. IEEE Trans. Inf. Theory 2014, 60, 6801–6810. [Google Scholar] [CrossRef]
  66. Iwamoto, M.; Shikata, J. Information theoretic security for encryption based on conditional Rényi entropies. In Information Theoretic Security; Springer: New York, NY, USA, 2014; pp. 103–121. [Google Scholar]
  67. Teixeira, A.; Matos, A.; Antunes, L. Conditional Rényi entropies. IEEE Trans. Inf. Theory 2012, 58, 4273–4277. [Google Scholar] [CrossRef]
  68. Verdú, S. α-mutual information. In Proceedings of the 2015 Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 1–6 February 2015; pp. 1–6. [Google Scholar]
  69. Csiszár, I. Generalized cutoff rates and Rényi’s information measures. IEEE Trans. Inf. Theory 1995, 41, 26–34. [Google Scholar] [CrossRef]
  70. Ho, S.W.; Verdú, S. Convexity/concavity of Rényi entropy and α-mutual information. In Proceedings of the 2015 IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015; pp. 745–749. [Google Scholar]
  71. Shannon, C.E. Communication theory of secrecy systems. Bell Syst. Tech. J. 1949, 28, 656–715. [Google Scholar] [CrossRef]
  72. Hayashi, M.; Tan, V.Y.F. Equivocations, exponents, and second-order coding rates under various Rényi information measures. IEEE Trans. Inf. Theory 2017, 63, 975–1005. [Google Scholar] [CrossRef] [Green Version]
  73. Tan, V.Y.F.; Hayashi, M. Analysis of remaining uncertainties and exponents under various conditional Rényi entropies. IEEE Trans. Inf. Theory 2018, 64, 3734–3755. [Google Scholar] [CrossRef] [Green Version]
  74. Chung, K.L. A Course in Probability Theory, 3rd ed.; Academic Press: New York, NY, USA, 2000. [Google Scholar]
  75. Marcus, M.; Ree, R. Diagonals of doubly stochastic matrices. Q. J. Math. 1959, 10, 296–302. [Google Scholar] [CrossRef]
  76. Farahat, H.K.; Mirsky, L. Permutation endomorphisms and refinement of a theorem of Birkhoff. Math. Proc. Camb. Philos. Soc. 1960, 56, 322–328. [Google Scholar] [CrossRef]
  77. Ho, S.-W.; Yeung, R.W. The interplay between entropy and variational distance. IEEE Trans. Inf. Theory 2010, 56, 5906–5929. [Google Scholar] [CrossRef]
  78. Aczél, J.; Daróczy, Z. On Measures of Information and Their Characterizations; Academic Press: New York, NY, USA, 1975. [Google Scholar]
  79. Roberts, G.O.; Rosenthal, J.S. General state space Markov chains and MCMC algorithms. Probab. Surv. 2004, 1, 20–71. [Google Scholar] [CrossRef] [Green Version]
  80. Mori, R.; Tanaka, T. Source and channel polarization over finite fields and Reed–Solomon matrices. IEEE Trans. Inf. Theory 2014, 60, 2720–2736. [Google Scholar] [CrossRef]
  81. Şaşoğlu, E. Polarization and polar codes. Found. Trends Commun. Inf. Theory 2012, 8, 259–381. [Google Scholar] [CrossRef] [Green Version]
  82. Cover, T.M.; Hart, P.E. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  83. Muramatsu, J.; Miyake, S. On the error probability of stochastic decision and stochastic decoding. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 1643–1647. [Google Scholar]
  84. Shuval, B.; Tal, I. Fast polarization for processes with memory. IEEE Trans. Inf. Theory 2019, 65, 2004–2020. [Google Scholar] [CrossRef] [Green Version]
  85. Ali, S.M.; Silvey, S.D. A general class of coefficients of divergence of one distribution from another. J. Roy. Statist. Soc. Ser. B 1966, 28, 131–142. [Google Scholar] [CrossRef]
  86. Csiszár, I. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Publ. Math. Inst. Hungar. Acad. Sci. 1963, 8, 85–108. [Google Scholar]
  87. Elias, P. List Decoding for Noisy Channels. Available online: https://dspace.mit.edu/bitstream/handle/1721.1/4484/RLE-TR-335-04734756.pdf?sequence=1 (accessed on 28 January 2020).
  88. Yu, L.; Tan, V.Y.F. Asymptotic coupling and its applications in information theory. IEEE Trans. Inf. Theory 2018, 64, 1321–1344. [Google Scholar] [CrossRef] [Green Version]
  89. Thorisson, H. Coupling, Stationarity, and Regeneration; Springer: New York, NY, USA, 2000; Volume 14. [Google Scholar]
  90. Merhav, N. List decoding—Random coding exponents and expurgated exponents. IEEE Trans. Inf. Theory 2014, 60, 6749–6759. [Google Scholar] [CrossRef]
Figure 1. Each bar represents a probability mass of the extremal distribution of type-0 defined in (12), where M = 8 and L = 3.
Figure 2. Construction of the extremal distribution of type-1 defined in (30) from an $\mathcal{X}$-marginal Q, where L = 3. Each bar represents a probability mass of the decreasing rearrangement $Q^{\downarrow}$.
Figure 3. Construction of the extremal distribution of type-2 defined in (44) from an $\mathcal{X}$-marginal Q, where L = 3 and $|\mathcal{Y}| = 2$. Each bar represents a probability mass of the decreasing rearrangement $Q^{\downarrow}$.
