Article

Trade-offs between Error Exponents and Excess-Rate Exponents of Typical Slepian–Wolf Codes

The Andrew and Erna Viterbi Faculty of Electrical Engineering, Technion—Israel Institute of Technology, Technion City, Haifa 3200003, Israel
*
Author to whom correspondence should be addressed.
Entropy 2021, 23(3), 265; https://doi.org/10.3390/e23030265
Submission received: 3 February 2021 / Revised: 19 February 2021 / Accepted: 21 February 2021 / Published: 24 February 2021
(This article belongs to the Special Issue Multiuser Information Theory III)

Abstract

Typical random codes (TRCs) in a communication scenario of source coding with side information in the decoder are the main subject of this work. We study the semi-deterministic code ensemble, which is a certain variant of the ordinary random binning code ensemble. In this code ensemble, the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error and excess-rate probabilities vanish exponentially as the blocklength tends to infinity.

1. Introduction

As is well known, the random coding error exponent is defined by
$E_{r}(R) = -\lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\left[ P_{e}(\mathcal{C}_{n}) \right],$
where R is the coding rate, $P_{e}(\mathcal{C}_{n})$ is the error probability of a codebook $\mathcal{C}_{n}$, and the expectation is with respect to (w.r.t.) the randomness of $\mathcal{C}_{n}$ across the ensemble of codes. The error exponent of the typical random code (TRC) is defined as [1]
$E_{\mathrm{trc}}(R) = -\lim_{n \to \infty} \frac{1}{n} \mathbb{E}\left[ \log P_{e}(\mathcal{C}_{n}) \right].$
We believe that the error exponent of the TRC is the more relevant performance metric, as it captures the most likely error exponent of a randomly selected code, as opposed to the random coding error exponent, which is dominated by the relatively poor codes of the ensemble, rather than the channel noise, at relatively low coding rates. In addition, since in random coding analysis, the code is selected at random and remains fixed, it seems reasonable to study the performance of the chosen code itself instead of directly considering the ensemble performance.
To the best of our knowledge, not much is known about TRCs. In [2], Barg and Forney considered TRCs with independently and identically distributed codewords, along with typical linear codes, for the special case of the binary symmetric channel with maximum likelihood (ML) decoding. It was also shown that in a certain range of low rates, $E_{\mathrm{trc}}(R)$ lies between $E_{r}(R)$ and the expurgated exponent, $E_{\mathrm{ex}}(R)$. In [3], Nazari et al. provided bounds on the error exponents of TRCs for both discrete memoryless channels (DMCs) and multiple-access channels. In a recent article by Merhav [1], an exact single-letter expression was derived for the error exponent of typical, random, fixed composition codes over DMCs, for a wide class of (stochastic) decoders, collectively referred to as the generalized likelihood decoder (GLD). Later, Merhav studied error exponents of TRCs for the colored Gaussian channel [4], typical random trellis codes [5], and a Lagrange-dual lower bound for the TRC exponent [6]. Large deviations around the TRC exponent were studied in [7].
While originally defined for pure channel coding [1,2,3], the notion of TRCs has natural analogues in other settings as well, such as source coding with side information in the decoder [8]. Typical random Slepian–Wolf (SW) codes drawn from a certain variant of the ordinary variable-rate random binning code ensemble are the main theme of this work. The random coding error exponent of SW coding, based on fixed-rate (FR) random binning, was first addressed by Gallager in [9], and later improved by the expurgated bounds in [10,11]. Variable-rate (VR) SW coding has received less attention in the literature; VR codes under an average rate constraint have been studied in [12] and proved to outperform FR codes in terms of error exponents. Optimal trade-offs between the error exponent and the excess-rate exponent in VR coding were analyzed in [13]. Sphere-packing upper bounds for source coding with side information in the FR and VR regimes were studied in [9] and [12], respectively. More works where exponential error bounds in source coding have been studied are [14,15,16,17,18].
It turns out that both the FR and VR ensembles suffer from an intrinsic deficiency, caused by statistical fluctuations in the sizes of the bins that are populated by the relatively small type classes of the source. This fundamental problem of the ordinary ensembles is alleviated in a variant of the ordinary VR ensemble, the semi-deterministic (SD) code ensemble, which has already been proposed and studied in its FR version in [18]. In the SD code ensemble, for source type classes that are exponentially larger than the space of available bins, we simply assign each source sequence at random to one of the bins, as is done in ordinary random binning. Otherwise, for relatively small type classes, we deterministically assign each source sequence to a different bin, which provides a one-to-one mapping. This way, all these relatively small source type classes do not contribute to the probability of error. The main results concerning the SD code are the following:
  • The random binning error exponent and the error exponent of the TRC are derived in Theorems 1 and 2, respectively, and proved in Theorem 3 to be equal to one another in a few important special cases, which include the matched likelihood decoder, the MAP decoder, and the universal minimum entropy decoder. To the best of our knowledge, this phenomenon has not been seen elsewhere before, since the TRC exponent usually improves upon the random coding exponent. As a byproduct, we are able to provide a relatively simple expression for the TRC exponent.
  • We prove in Theorem 4 that the error exponent of the TRC under MAP decoding is also attained by two universal decoders: the minimum entropy decoder and the stochastic entropy decoder, which is a GLD with an empirical conditional entropy metric. As far as we know, this result is the first of its kind in source coding; in other scenarios, the random coding bound is also attained by universal decoders, but here, we find that the TRC exponent is universally achievable as well. Moreover, while the likelihood decoder and the MAP decoder have similar error exponents [19], here we prove a similar result, but for two universal decoders (one stochastic and one deterministic) that share the same metric.
  • We discuss the trade-offs between the error exponent and the excess-rate exponent for a typical random SD code, similarly to [13], but with a different notion of the excess-rate event, which takes into account the available side information. In Theorem 5, we provide an expression for the optimal rate function that guarantees a required level for the error exponent of the typical random SD code. Analogously, Theorem 6 proposes an expression for the optimal rate function that guarantees a required level for the excess-rate exponent. Furthermore, we show that for any pair of correlated information sources, the typical random SD code attains both exponentially vanishing error and excess-rate probabilities.
The remaining part of the paper is organized as follows. In Section 2, we establish notation conventions. In Section 3, we formalize the model, the coding technique, the main objectives of this work, and we review some background. In Section 4, we provide the main results concerning error exponents and universal decoding in the SD ensemble, and in Section 5, we discuss the trade-offs between the error exponent and the excess-rate exponent.

2. Notation Conventions

Throughout the paper, random variables will be denoted by capital letters, realizations will be denoted by the corresponding lower case letters, and their alphabets will be denoted by calligraphic letters. Random vectors and their realizations will be denoted, respectively, by boldface capital and lower case letters. Their alphabets will be superscripted by their dimensions. Sources and channels will be subscripted by the names of the relevant random variables/vectors and their conditionings, whenever applicable, following the standard notation conventions, e.g., $Q_U$, $Q_{V|U}$, and so on. When there is no room for ambiguity, these subscripts will be omitted. For a generic joint distribution $Q_{UV} = \{Q_{UV}(u,v),\ u \in \mathcal{U},\ v \in \mathcal{V}\}$, which will often be abbreviated by Q, information measures will be denoted in the conventional manner, but with a subscript Q; that is, $H_Q(U)$ is the marginal entropy of U, $H_Q(U|V)$ is the conditional entropy of U given V, and $I_Q(U;V) = H_Q(U) - H_Q(U|V)$ is the mutual information between U and V. The Kullback–Leibler divergence between two probability distributions, $Q_{UV}$ and $P_{UV}$, is defined as
$D(Q_{UV} \| P_{UV}) = \sum_{(u,v) \in \mathcal{U} \times \mathcal{V}} Q_{UV}(u,v) \log \frac{Q_{UV}(u,v)}{P_{UV}(u,v)},$
where logarithms, here and throughout the sequel, are understood to be taken to the natural base. The probability of an event $\mathcal{E}$ will be denoted by $\mathsf{P}\{\mathcal{E}\}$, and the expectation operator w.r.t. a probability distribution Q will be denoted by $\mathbb{E}_{Q}[\cdot]$, where the subscript will often be omitted. For two positive sequences, $\{a_n\}$ and $\{b_n\}$, the notation $a_n \doteq b_n$ will stand for equality in the exponential scale, that is, $\lim_{n \to \infty} \frac{1}{n} \log \frac{a_n}{b_n} = 0$. Similarly, $a_n \stackrel{\cdot}{\leq} b_n$ means that $\limsup_{n \to \infty} \frac{1}{n} \log \frac{a_n}{b_n} \leq 0$, and so on. The indicator function of an event $\mathcal{A}$ will be denoted by $\mathbb{1}\{\mathcal{A}\}$. The notation $[t]_+$ will stand for $\max\{0, t\}$.
The empirical distribution of a sequence $u \in \mathcal{U}^n$, which will be denoted by $\hat{P}_{u}$, is the vector of relative frequencies, $\hat{P}_{u}(u)$, of each symbol $u \in \mathcal{U}$ in $u$. The type class of $u \in \mathcal{U}^n$, denoted $\mathcal{T}(u)$, is the set of all vectors $u'$ with $\hat{P}_{u'} = \hat{P}_{u}$. When we wish to emphasize the dependence of the type class on the empirical distribution $\hat{P}$, we will denote it by $\mathcal{T}(\hat{P})$. The set of all types of vectors of length n over $\mathcal{U}$ will be denoted by $\mathcal{P}_n(\mathcal{U})$, and the set of all possible types over $\mathcal{U}$ will be denoted by $\mathcal{P}(\mathcal{U}) = \bigcup_{n=1}^{\infty} \mathcal{P}_n(\mathcal{U})$. Information measures associated with empirical distributions will be denoted with 'hats' and will be subscripted by the sequences from which they are induced. For example, the entropy associated with $\hat{P}_{u}$, which is the empirical entropy of $u$, will be denoted by $\hat{H}_{u}(U)$. Similar conventions will apply to the joint empirical distribution, the joint type class, the conditional empirical distributions, and the conditional type classes associated with pairs (and multiples) of sequences of length n. Accordingly, $\hat{P}_{uv}$ will be the joint empirical distribution of $(u,v) = \{(u_i, v_i)\}_{i=1}^{n}$, $\mathcal{T}(\hat{P}_{uv})$ will denote the joint type class of $(u,v)$, $\mathcal{T}(\hat{P}_{u|v}|v)$ will stand for the conditional type class of $u$ given $v$, $\hat{H}_{uv}(U|V)$ will be the empirical conditional entropy, and so on. Likewise, when we wish to emphasize the dependence of empirical information measures upon a given empirical distribution Q, we denote them using the subscript Q, as described above.
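To make these empirical quantities concrete, here is a minimal sketch (Python is our choice of language here; the helper names are ours, not the paper's) that computes the joint empirical distribution $\hat{P}_{uv}$ and the empirical conditional entropy $\hat{H}_{uv}(U|V)$ of a pair of sequences directly from the definitions above.

```python
import numpy as np
from collections import Counter

def empirical_joint(u, v):
    """Joint empirical distribution (joint type) of two equal-length sequences."""
    n = len(u)
    return {pair: c / n for pair, c in Counter(zip(u, v)).items()}

def empirical_cond_entropy(u, v):
    """Empirical conditional entropy H_hat_{uv}(U|V), in nats."""
    P_uv = empirical_joint(u, v)
    P_v = Counter(v)
    n = len(v)
    # H(U|V) = -sum_{u,v} P(u,v) log P(u|v)
    return -sum(p * np.log(p / (P_v[b] / n)) for (a, b), p in P_uv.items())

# Toy example with binary sequences of length 8
u = [0, 0, 1, 0, 1, 1, 0, 0]
v = [0, 1, 1, 0, 1, 0, 0, 0]
print(empirical_cond_entropy(u, v))
```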

3. Problem Formulation and Background

3.1. Problem Formulation

Let $(\boldsymbol{U}, \boldsymbol{V}) = \{(U_t, V_t)\}_{t=1}^{n}$ be n independent copies of a pair of random variables, $(U, V) \sim P_{UV}$, taking on values in finite alphabets, $\mathcal{U}$ and $\mathcal{V}$, respectively. The vector $\boldsymbol{U}$ will designate the source vector to be encoded and the vector $\boldsymbol{V}$ will serve as correlated side information available to the decoder. In ordinary VR binning, the coding rate is not fixed for every $u \in \mathcal{U}^n$, but depends on its empirical distribution. Let us denote a rate function by $R(\cdot)$, which is a given continuous function from the probability simplex of $\mathcal{U}$ to the set of nonnegative reals. In that manner, for every type $Q_U \in \mathcal{P}_n(\mathcal{U})$, all source sequences in $\mathcal{T}(Q_U)$ are randomly partitioned into $e^{nR(Q_U)}$ bins. Every source sequence is encoded by its bin index, denoted by $\mathcal{B}(u)$, along with a header that indicates its type index, which requires only a negligible extra rate when n is large enough.
The SD code ensemble is a refinement of the ordinary VR code: for types with $H_Q(U) \geq R(Q_U)$, i.e., type classes which are exponentially larger than the space of available bins, we just randomly assign each source sequence into one out of the $e^{nR(Q_U)}$ bins. For the other types, we deterministically assign each member of $\mathcal{T}(Q_U)$ to a different bin. This way, all type classes with $H_Q(U) < R(Q_U)$ do not contribute to the probability of error. The entire binning code of source sequences of blocklength n, i.e., the set $\{\mathcal{B}(u)\}_{u \in \mathcal{U}^n}$, is denoted by $\mathcal{B}_n$. A sequence of SW codes, $\{\mathcal{B}_n\}_{n \geq 1}$, indexed by the blocklength n, will be denoted by $\mathcal{B}$.
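To make the construction concrete, here is a small illustrative sketch (toy Python, exhaustive over all sequences, so only sensible for very small n and alphabets; all names are ours) of the SD binning rule: type classes with $H_Q(U) \geq R(Q_U)$ are binned at random into roughly $e^{nR(Q_U)}$ bins, while the smaller type classes are mapped one-to-one.

```python
import itertools, math, random
from collections import Counter

def sd_binning(alphabet, n, rate_fn, seed=0):
    """Toy semi-deterministic binning: maps each sequence of length n
    to a pair (type index, bin index)."""
    rng = random.Random(seed)
    # Group all sequences by their type (empirical distribution).
    by_type = {}
    for u in itertools.product(alphabet, repeat=n):
        q = tuple(sorted(Counter(u).items()))
        by_type.setdefault(q, []).append(u)
    code = {}
    for t_idx, (q, seqs) in enumerate(by_type.items()):
        Q = {a: c / n for a, c in q}
        H_Q = -sum(p * math.log(p) for p in Q.values())
        R_Q = rate_fn(Q)
        if H_Q >= R_Q:
            # "Large" type class: ordinary random binning into ~e^{nR(Q_U)} bins.
            num_bins = max(1, round(math.exp(n * R_Q)))
            for u in seqs:
                code[u] = (t_idx, rng.randrange(num_bins))
        else:
            # "Small" type class: deterministic one-to-one assignment.
            for b, u in enumerate(seqs):
                code[u] = (t_idx, b)
    return code

# Binary source, blocklength 6, constant rate function R(Q_U) = 0.4 nats
code = sd_binning((0, 1), 6, lambda Q: 0.4)
```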
The decoder estimates u based on the bin index B ( u ) , the type index T ( u ) , and the side information sequence v , which is a realization of V . The optimal (MAP) decoder estimates u according to
$\hat{u} = \arg\max_{u' \in \mathcal{B}(u) \cap \mathcal{T}(u)} P(u', v).$
As in [1,20], we consider here the GLD. The GLD estimates $u$ stochastically, using the bin index $\mathcal{B}(u)$, the type index $\mathcal{T}(u)$, and the side information sequence $v$, according to the following posterior distribution
$\mathsf{P}\left\{ \hat{U} = u' \,\middle|\, v, \mathcal{B}(u), \mathcal{T}(u) \right\} = \frac{\exp\{ n f(\hat{P}_{u'v}) \}}{\sum_{\tilde{u} \in \mathcal{B}(u) \cap \mathcal{T}(u)} \exp\{ n f(\hat{P}_{\tilde{u}v}) \}},$
where $\hat{P}_{u'v}$ is the empirical distribution of $(u', v)$ and $f(\cdot)$ is a given continuous, real-valued functional of this empirical distribution. The GLD provides a unified framework which covers several important special cases, e.g., matched decoding, mismatched decoding, MAP decoding, and universal decoding (similarly to the α-decoders described in [11]). A more detailed discussion is given in [20].
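The following fragment sketches one way to implement the GLD on top of a binning map like the toy one above: it forms the candidate list $\mathcal{B}(u) \cap \mathcal{T}(u)$ (both encoded here in the (type index, bin index) tag) and samples $\hat{u}$ from the posterior in the last display. The metric f below, minus the empirical conditional entropy, is just one admissible choice of the functional; the function names are ours.

```python
import numpy as np
from collections import Counter

def cond_entropy_metric(u_cand, v):
    """f(P_hat) = -H_hat_{u'v}(U|V), the metric of the stochastic entropy decoder."""
    n = len(v)
    P_uv = Counter(zip(u_cand, v))
    P_v = Counter(v)
    return sum((c / n) * np.log((c / n) / (P_v[b] / n)) for (a, b), c in P_uv.items())

def gld_decode(u, v, code, f=cond_entropy_metric, seed=0):
    """Sample u_hat from the GLD posterior over the bin/type candidates of u."""
    rng = np.random.default_rng(seed)
    n = len(u)
    tag = code[tuple(u)]                                # (type index, bin index) of the true u
    candidates = [c for c, t in code.items() if t == tag]
    scores = np.array([n * f(c, tuple(v)) for c in candidates])
    post = np.exp(scores - scores.max())                # numerically stable softmax
    post /= post.sum()
    return candidates[rng.choice(len(candidates), p=post)]
```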
The probability of error is the probability of the event $\{\hat{U} \neq U\}$. For a given binning code $\mathcal{B}_n$, the probability of error is given by
$P_e(\mathcal{B}_n) = \sum_{u,v} P(u,v) \cdot \mathbb{1}\left\{ \hat{H}_{u}(U) \geq R(\hat{P}_{u}) \right\} \cdot \frac{\sum_{u' \in \mathcal{B}(u) \cap \mathcal{T}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \}}{\sum_{\tilde{u} \in \mathcal{B}(u) \cap \mathcal{T}(u)} \exp\{ n f(\hat{P}_{\tilde{u}v}) \}}.$
For a given rate function, we derive the random binning exponent of this ensemble, which is defined by
$E_r(R(\cdot)) = -\lim_{n \to \infty} \frac{\log \mathbb{E}[P_e(\mathcal{B}_n)]}{n},$
and compare it to the TRC exponent, which is
$E_{\mathrm{trc}}(R(\cdot)) = -\lim_{n \to \infty} \frac{\mathbb{E}[\log P_e(\mathcal{B}_n)]}{n}.$
Although it is not clear a priori that the limits in (7) and (8) exist, their existence will become evident from the analyses in Appendix A and Appendix B, respectively.
One way to define the excess-rate probability is as $\mathsf{P}\{R(\hat{P}_{U}) \geq R\}$, where R is some target rate [13]. Due to the availability of side information in the decoder, it makes sense to require a target rate which depends on the pair $(u, v)$. Since the lowest possible compression rate in this setting is given by $H_P(U|V)$ [8], given $U = u$ and $V = v$, it is reasonable to adopt $\hat{H}_{uv}(U|V)$ as a reference rate. Hence, an alternative definition of the excess-rate probability of a code $\mathcal{B}_n$ is $p_{\mathrm{er}}(\mathcal{B}_n, R(\cdot), \Delta) = \mathsf{P}\{R(\hat{P}_{U}) \geq \hat{H}_{UV}(U|V) + \Delta\}$, where $\Delta > 0$ is a redundancy threshold. (Note that the entire analysis remains intact if we allow a more general redundancy threshold $\Delta = \Delta(\hat{P}_{uv})$. This covers other alternatives for the excess-rate probability, e.g., $\mathsf{P}\{R(\hat{P}_{U}) \geq R\}$ or $\mathsf{P}\{R(\hat{P}_{U}) \geq \alpha \hat{H}_{U}(U)\}$, $\alpha \in (0,1)$.) Accordingly, the excess-rate exponent function, achieved by a sequence of codes $\mathcal{B}$, is defined as
$E_{\mathrm{er}}(\mathcal{B}, R(\cdot), \Delta) = \liminf_{n \to \infty} \left\{ -\frac{1}{n} \log p_{\mathrm{er}}(\mathcal{B}_n, R(\cdot), \Delta) \right\}.$
The main mission is to characterize the optimal trade-off between the error exponent and the excess-rate exponent for the typical random SD code, and the optimal rate function that attains a prescribed value for the error exponent of the typical random SD code.

3.2. Background

In pure channel coding, Merhav [1] has derived a single-letter expression for the error exponent of the typical random fixed composition code:
$E_{\mathrm{trc}}(R, Q_X) = -\lim_{n \to \infty} \frac{1}{n} \mathbb{E}\left[ \log P_e(\mathcal{C}_n) \right].$
In order to present the main result of [1], we define first a few quantities. Consider a DMC, $W = \{W(y|x),\ x \in \mathcal{X},\ y \in \mathcal{Y}\}$, where $\mathcal{X}$ and $\mathcal{Y}$ are the finite input/output alphabets. Define
$\alpha(R, Q_Y) = \max_{\{Q_{\tilde{X}|Y}:\ I_Q(\tilde{X};Y) \leq R,\ Q_{\tilde{X}} = Q_X\}} \left\{ g(Q_{\tilde{X}Y}) - I_Q(\tilde{X};Y) \right\} + R,$
where the function g ( · ) , which is the decoding metric, is a continuous function that maps joint probability distributions over X × Y to real numbers. Additionally define
$\Gamma(Q_{XX'}, R) = \min_{Q_{Y|XX'}} \left\{ D(Q_{Y|X} \| W | Q_X) + I_Q(X'; Y | X) + \left[ \max\left\{ g(Q_{XY}),\ \alpha(R, Q_Y) \right\} - g(Q_{X'Y}) \right]_+ \right\},$
where $D(Q_{Y|X} \| W | Q_X)$ is the conditional divergence between $Q_{Y|X}$ and W, averaged by $Q_X$. A brief intuitive explanation on the term $\Gamma(Q_{XX'}, R)$ can be found in [7] (Section 4.1). Having defined the above quantities, the error exponent of the TRC is given by [1]
$E_{\mathrm{trc}}(R, Q_X) = \min_{\{Q_{X'|X}:\ I_Q(X;X') \leq 2R,\ Q_{X'} = Q_X\}} \left\{ \Gamma(Q_{XX'}, R) + I_Q(X;X') - R \right\}.$
Returning to the SW model, several articles have been written on error exponents for the FR and the VR codes. Here, we mention only those results that are directly relevant to the current work. The random binning and expurgated bounds of the FR ensemble in the SW model are given, respectively, by [11] (Section VI, Theorem 2), [10](Appendix I, Theorem 1)
$E_r^{\mathrm{fr}}(R) = \min_{Q_U} \left\{ D(Q_U \| P_U) + E_r(Q_U, P_{V|U}, H_Q(U) - R) \right\},$
$E_{\mathrm{ex}}^{\mathrm{fr}}(R) = \min_{Q_U} \left\{ D(Q_U \| P_U) + E_{\mathrm{ex}}(Q_U, P_{V|U}, H_Q(U) - R) \right\},$
where $E_r(Q_U, P_{V|U}, S)$ and $E_{\mathrm{ex}}(Q_U, P_{V|U}, S)$ are, respectively, the random coding and expurgated bounds associated with the channel $P_{V|U}$ w.r.t. the ensemble of fixed composition codes of rate S, whose composition is $Q_U$. The exponent function $E_r(Q_U, P_{V|U}, S)$ is given by
$E_r(Q_U, P_{V|U}, S) = \min_{Q_{V|U}} \left\{ D(Q_{V|U} \| P_{V|U} | Q_U) + [I_Q(U;V) - S]_+ \right\},$
and E e x ( Q U , P V | U , S ) is given by
$E_{\mathrm{ex}}(Q_U, P_{V|U}, S) = \min_{\{Q_{U'|U}:\ I_Q(U;U') \leq S,\ Q_{U'} = Q_U\}} \left\{ \mathbb{E}_{Q_{UU'}}\left[ d_{P_{V|U}}(U, U') \right] + I_Q(U;U') - S \right\},$
where
$d_{P_{V|U}}(u, u') = -\log \sum_{v \in \mathcal{V}} \sqrt{ P_{V|U}(v|u) \, P_{V|U}(v|u') }.$
The exact error exponent of VR random binning is given by [13] (Equation (34)):
$E_r^{\mathrm{vr}}(R(\cdot)) = \min_{Q_{UV}} \left\{ D(Q_{UV} \| P_{UV}) + [R(Q_U) - H_Q(U|V)]_+ \right\}.$
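As an illustration of how exponent functions of this form can be evaluated, the sketch below approximates (19) by a brute-force grid search over binary joint distributions $Q_{UV}$; both the grid resolution and the example source $P_{UV}$ are our own choices, and the result is only a crude numerical approximation of the true minimum.

```python
import numpy as np
from itertools import product

def vr_random_binning_exponent(P, rate_fn, grid=40):
    """Approximate min_{Q_UV} { D(Q_UV||P_UV) + [R(Q_U) - H_Q(U|V)]_+ } for binary U, V."""
    best = np.inf
    ps = np.linspace(1e-6, 1.0, grid)
    for q00, q01, q10 in product(ps, repeat=3):
        q11 = 1.0 - q00 - q01 - q10
        if q11 < 0:
            continue
        Q = np.array([[q00, q01], [q10, max(q11, 1e-12)]])
        Q = Q / Q.sum()
        D = np.sum(Q * np.log(Q / P))                    # D(Q_UV || P_UV)
        Qu, Qv = Q.sum(axis=1), Q.sum(axis=0)
        H_UgV = -np.sum(Q * np.log(Q / Qv[None, :]))     # H_Q(U|V)
        best = min(best, D + max(rate_fn(Qu) - H_UgV, 0.0))
    return best

# Hypothetical symmetric example source and a constant rate function of 0.5 nats
P = np.array([[0.40, 0.10], [0.10, 0.40]])
print(vr_random_binning_exponent(P, lambda Qu: 0.5))
```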

4. Error Exponents and Universal Decoding

To present some of the results, we need a few more definitions. The minimum conditional entropy (MCE) decoder estimates $u$, using the bin index $\mathcal{B}(u)$, the type index $\mathcal{T}(u)$, and the side information vector $v$, according to
$\hat{u} = \arg\min_{u' \in \mathcal{B}(u) \cap \mathcal{T}(u)} \hat{H}_{u'v}(U|V).$
The stochastic conditional entropy (SCE) decoder is a special case of the GLD with the decoding metric $f(\hat{P}_{u'v}) = -\hat{H}_{u'v}(U|V)$; i.e., it estimates $u$ according to the following posterior distribution
$\mathsf{P}\left\{ \hat{U} = u' \,\middle|\, v, \mathcal{B}(u), \mathcal{T}(u) \right\} = \frac{\exp\{ -n \hat{H}_{u'v}(U|V) \}}{\sum_{\tilde{u} \in \mathcal{B}(u) \cap \mathcal{T}(u)} \exp\{ -n \hat{H}_{\tilde{u}v}(U|V) \}}.$
First, we present random binning error exponents, which are modifications of (19) to this ensemble. Define the expression
$E(Q_{UV}, R(\cdot)) = \min_{Q_{U'|V}} \left[ R(Q_{U'}) - H_Q(U'|V) + \left[ f(Q_{UV}) - f(Q_{U'V}) \right]_+ \right]_+$
and the exponent functions:
$E_{r,\mathrm{GLD}}(R(\cdot)) = \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{ D(Q_{UV} \| P_{UV}) + E(Q_{UV}, R(\cdot)) \right\},$
and
$E_{r,\mathrm{MAP}}(R(\cdot)) = \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{ D(Q_{UV} \| P_{UV}) + [R(Q_U) - H_Q(U|V)]_+ \right\}.$
The following result is proved in Appendix A.
Theorem 1.
Let R ( · ) be a given rate function. Then, for the SD ensemble,
  • $E_r(R(\cdot)) = E_{r,\mathrm{GLD}}(R(\cdot))$ for the GLD;
  • $E_r(R(\cdot)) = E_{r,\mathrm{MAP}}(R(\cdot))$ for the MAP and MCE decoders.
As a matter of fact, a special case of the second part of Theorem 1 has already been proved in [18] for the FR regime, while here, we prove a stronger result, according to which the MCE decoder attains the same random binning error exponent as the MAP decoder in the VR coding regime too. The first part of Theorem 1 is completely new; it proposes a single-letter expression for the random binning error exponent for a wide family of stochastic and deterministic decoders. Additionally, note that a result analogous to the first part of Theorem 1 has been proved in [20]. Comparing the expressions in (19) and (24), namely, the random binning error exponents of the ordinary VR and the SD VR ensembles, respectively, we find that they differ at relatively high coding rates, since these minimization problems share the same objective, but (24) also has the constraint $H_Q(U) \geq R(Q_U)$. The origin of this constraint is the deterministic coding of the relatively small type classes.
Next, we provide a single-letter expression for the error exponent of the TRCs in this ensemble. We define
$\gamma(R(\cdot), Q_U, Q_V) = \max_{\{Q_{\tilde{U}|V}:\ Q_{\tilde{U}} = Q_U,\ H_Q(\tilde{U}|V) \geq R(Q_{\tilde{U}})\}} \left\{ f(Q_{\tilde{U}V}) + H_Q(\tilde{U}|V) \right\} - R(Q_{\tilde{U}})$
and
$\Psi(R(\cdot), Q_{UU'V}) = \left[ \max\left\{ f(Q_{UV}),\ \gamma(R(\cdot), Q_U, Q_V) \right\} - f(Q_{U'V}) \right]_+.$
Furthermore, define
$\Lambda(Q_{UU'}, R(Q_U)) = \min_{Q_{V|UU'}} \left\{ \Psi(R(Q_U), Q_{UU'V}) - H_Q(V|U,U') - \mathbb{E}_Q[\log P(V|U)] \right\},$
and the following exponent function:
$E_{\mathrm{trc,GLD}}(R(\cdot)) = \min_{\{Q_{UU'}:\ Q_{U'} = Q_U,\ H_Q(U) \geq R(Q_U)\}} \left\{ \Lambda(Q_{UU'}, R(Q_U)) - \mathbb{E}_Q[\log P(U)] - H_Q(U,U') + R(Q_U) \right\}.$
Then, the following theorem is proved in Appendix B.
Theorem 2.
Let R ( · ) be a given rate function. Then, for the SD ensemble and the GLD,
$E_{\mathrm{trc}}(R(\cdot)) = E_{\mathrm{trc,GLD}}(R(\cdot)).$
As explained before, an analogous result has already been proved in pure channel coding [1], and one can find a high degree of similarity between the expressions in (25)–(28) and the expressions in Section 3.2. While in channel coding, the coding rate is fixed, here, on the other hand, we allow the rate to depend on the type class of the source. In order to optimize the rate function, we constrain the problem by introducing the excess-rate exponent (9), which is the exponential rate of decay of the probability that the compression rate will be higher than some predefined level. A detailed discussion on optimal rate functions and optimal trade-offs between these two exponents can be found in Section 5.
The definition of the error exponent of the TRC as in (8) should not be taken for granted, for the following reason. It turns out that the definition in (8) and the value of $-\frac{1}{n} \log P_e(\mathcal{B}_n)$ for the highly probable codes in the ensemble may not be the same; they coincide if and only if the ensemble does not contain both zero error probability codes and positive error probability codes. For example, the FR ensemble in SW coding contains the one-to-one code (which obviously attains $P_e(\mathcal{B}_n) = 0$) as long as $R \geq \log|\mathcal{U}|$, but it is definitely not a typical code, at least when ordinary random binning is considered. Hence, in this case, we conclude that $-\frac{1}{n} \mathbb{E}[\log P_e(\mathcal{B}_n)] = \infty$, while the value of $-\frac{1}{n} \log P_e(\mathcal{B}_n)$ for the highly probable codes is still finite. As for the SD code ensemble, the definition in (8) indeed provides the error exponent of the highly probable codes in the ensemble, which is explained by the following reasoning. For any given rate function such that $R(Q_U) < H_Q(U)$ for at least one type class, all the type classes with $R(Q_U) < H_Q(U)$ are encoded by random binning; thus, all the codes in the ensemble have a strictly positive error probability, which implies that the value of $-\frac{1}{n} \log P_e(\mathcal{B}_n)$ concentrates around the error exponent of the TRC, as defined in (8).
The proof of Theorem 2 follows exactly the same lines as the proof of [1] (Theorem 1), except for one main modification: when we introduce the type class enumerator $N(Q_{UU'})$ (see below) and sum over joint types, the summation set becomes $\{Q_{UU'}:\ Q_{U'} = Q_U,\ H_Q(U) \geq R(Q_U)\}$, where the constraint $H_Q(U) \geq R(Q_U)$ is due to the indicator function in (6). Afterwards, the analysis of the type class enumerator yields the constraint $H_Q(U,U') \geq R(Q_U)$, which becomes redundant and is thus omitted. This constraint is analogous to the constraint $I_Q(X;X') \leq 2R$ in the minimization of (13). The origin of $H_Q(U,U') \geq R(Q_U)$ is the following. Define
$N(Q_{UU'}) = \sum_{(u,u') \in \mathcal{T}(Q_{UU'})} \mathbb{1}\left\{ \mathcal{B}(u) = \mathcal{B}(u') \right\},$
which enumerates pairs of source sequences. Then, one of the main steps in the proof of Theorem 2 is deriving the high-probability value of $N(Q_{UU'})$, which is 0 if $H_Q(U,U') < R(Q_U)$ (a relatively small set of source pairs and a relatively large number of bins) and $e^{n[H_Q(U,U') - R(Q_U)]}$ if $H_Q(U,U') \geq R(Q_U)$ (a large set of source sequence pairs and a small number of bins). One should note that the analysis of $N(Q_{UU'})$ is not trivial, since it is not a binomial random variable; i.e., the enumerator $N(Q_{UU'})$ is given by a sum of dependent binary random variables. For a sum N of independent binary random variables, ordinary tools from large deviations theory (e.g., the Chernoff bound) can be invoked for assessing the exponential moments $\mathbb{E}[N^s]$, $s \geq 0$, or the large deviations rate function of $\mathsf{P}\{N \geq e^{n\sigma}\}$, $\sigma \in \mathbb{R}$. For sums of dependent binary random variables, such as $N(Q_{UU'})$ in the current problem, this can no longer be done by the same techniques, and it requires more advanced tools (see, e.g., [1,4,5,6]).
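For intuition about this concentration claim, the following toy Monte Carlo sketch (our own illustration, at a blocklength far too small for the asymptotics to be sharp) randomly bins all binary sequences of length n and compares, for the most frequent joint types, the average number of same-bin pairs with the crude prediction $e^{n[H_Q(U,U') - R]}$; the comparison is only order-of-magnitude, since polynomial factors and the ordered/unordered pair convention are ignored.

```python
import itertools, math, random
from collections import Counter

def pair_enumerator_experiment(n=8, R=0.5, trials=10, seed=1):
    """Average same-bin pair counts per joint type vs. exp{n [H_Q(U,U') - R]}."""
    rng = random.Random(seed)
    seqs = list(itertools.product((0, 1), repeat=n))
    num_bins = round(math.exp(n * R))
    totals = Counter()
    for _ in range(trials):
        binning = {u: rng.randrange(num_bins) for u in seqs}
        for u, up in itertools.combinations(seqs, 2):
            if binning[u] == binning[up]:
                joint_type = tuple(sorted(Counter(zip(u, up)).items()))
                totals[joint_type] += 1
    for q, count in sorted(totals.items(), key=lambda x: -x[1])[:3]:
        probs = [c / n for _, c in q]
        H_joint = -sum(p * math.log(p) for p in probs)
        print(q, "empirical:", count / trials,
              "predicted:", round(math.exp(n * (H_joint - R)), 1))

pair_enumerator_experiment()
```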
It is possible to compare (23) and (28) analytically in the special cases of the matched or the mismatched likelihood decoders and the MCE decoder. In the following theorem, the choice f ( Q U V ) = β E Q [ log P ˜ ( U , V ) ] , where P ˜ ( U , V ) is a possibly different source distribution than P ( U , V ) , corresponds to a family of stochastic mismatched decoders. We have the following result, the proof of which is given in Appendix D.
Theorem 3.
Consider the SD ensemble and a given rate function R ( · ) . Then,
  • For a GLD with the decoding metric $f(Q) = \beta \mathbb{E}_Q[\log \tilde{P}(U,V)]$, for a given $\beta > 0$,
    $E_{\mathrm{trc,GLD}}(R(\cdot)) = E_{r,\mathrm{GLD}}(R(\cdot)).$
  • For the MCE decoder,
    $E_{\mathrm{trc,MCE}}(R(\cdot)) = E_{r,\mathrm{MCE}}(R(\cdot)).$
This result is quite surprising at first glance, since one expects the error exponent of the TRC to be strictly better than the random binning error exponent, as in ordinary channel coding at relatively low coding rates [1,2]. This phenomenon is due to the fact that part of the source type classes are deterministically partitioned into bins in a one-to-one fashion, and hence do not affect the probability of error (notice that the constraint $H_Q(U) \geq R(Q_U)$ appears in both the random binning and the TRC exponents, while in the latter, it makes the original constraint $H_Q(U,U') \geq R(Q_U)$ redundant). In the cases of FR or ordinary VR binning, these relatively "thin" type classes dominated the error probability at relatively high binning rates, but now, since they are encoded deterministically into the bins, other mechanisms dominate the error event, such as the channel noise (between U and V) or the random binning of the type classes with $H_Q(U) \geq R(Q_U)$. The result of the second part of Theorem 3 is also nontrivial, since it establishes an equality between the error exponent of the TRC and the random binning error exponent, but now for a universal decoder.
Concerning universal decoding, it is already known [21] (Exercise 3.1.6), [13] that the random binning error exponents under optimal MAP decoding in both the FR and VR codes, given by (14) and (19), respectively, are also attained by the MCE decoder. Furthermore, a similar result for the SD ensemble has been proved here in Theorem 1. The natural question that arises is whether the error exponent of the TRC is also universally attainable. The following result, which is proved in Appendix E, provides a positive answer to this question.
Theorem 4.
Consider the SD ensemble and a given rate function R ( · ) . Then, the error exponents of the TRC under the MAP, the MCE, and the SCE decoders are all equal; i.e.,
$E_{\mathrm{trc,MAP}}(R(\cdot)) = E_{\mathrm{trc,MCE}}(R(\cdot)) = E_{\mathrm{trc,SCE}}(R(\cdot)).$
Theorem 4 asserts that the error exponent of the typical random SD code is not affected if the optimal MAP decoder is replaced by a certain universal decoder, which need not even be deterministic. While the left-hand equality in (33) follows immediately from the results of Theorems 1 and 3, the right-hand equality in (33) is far less trivial, since the SCE decoder is both universal and stochastic, and hence its TRC exponent is expected to be inferior to the TRC exponent under MAP decoding; nevertheless, they turn out to be equal. In comparison with channel coding, it has recently been proved in [22] that the error exponent of the typical random fixed composition code (given in (13)) is the same for the ML decoder and the maximum mutual information decoder, but on the other hand, numerical evidence shows that a GLD based on an empirical mutual information metric attains a strictly lower exponent.

5. Optimal Trade-off Functions

In this section, we study the optimal trade-off between the threshold Δ , the error exponent of the TRC, and the excess-rate exponent. Since both exponents depend on the rate function, we wish to characterize rate functions that are optimal w.r.t. this trade-off. Since a single-letter characterization of the error exponent of the TRC has already been given in (28), we next provide a single-letter expression for the excess-rate exponent. Define the following exponent function:
$E_{\mathrm{er}}(R(\cdot), \Delta) = \min_{\{Q_{UV}:\ R(Q_U) \geq H_Q(U|V) + \Delta\}} D(Q_{UV} \| P_{UV}).$
Then, we have the following.
Proposition 1.
Fix Δ > 0 and let R ( · ) be any rate function. Then,
$E_{\mathrm{er}}(\mathcal{B}, R(\cdot), \Delta) = E_{\mathrm{er}}(R(\cdot), \Delta).$
Proof. 
The excess-rate probability is given by:
$\mathsf{P}\left\{ R(\hat{P}_{U}) \geq \hat{H}_{UV}(U|V) + \Delta \right\}$
$= \sum_{Q_{UV}} \mathbb{1}\left\{ R(Q_U) \geq H_Q(U|V) + \Delta \right\} \cdot \mathsf{P}\left\{ (U, V) \in \mathcal{T}(Q_{UV}) \right\}$
$\doteq \sum_{\{Q_{UV}:\ R(Q_U) \geq H_Q(U|V) + \Delta\}} \exp\left\{ -n D(Q_{UV} \| P_{UV}) \right\}$
$\doteq \exp\left\{ -n \cdot \min_{\{Q_{UV}:\ R(Q_U) \geq H_Q(U|V) + \Delta\}} D(Q_{UV} \| P_{UV}) \right\},$
which proves the desired result. □
Since Proposition 1 is proved by the method of types [21], we conclude that the excess-rate event is dominated by one specific type class $\mathcal{T}(Q_{UV})$, whose respective rate $R(Q_U)$ has been chosen too large w.r.t. the value of $H_Q(U|V) + \Delta$. One extreme case is when the rate function is given by $H_Q(U)$, which obviously provides a one-to-one mapping, since the size of each $\mathcal{T}(Q_U)$ is upper-bounded by $e^{nH_Q(U)}$. In this case, the probability of error is zero, while the excess-rate probability is one, at least when Δ is not too large. In Section 5.2, we prove that the optimal rate function is indeed upper-bounded by $H_Q(U)$, but can also be strictly smaller, especially when the requirement on the error exponent is not too stringent.
One way to explore the trade-off between the error exponent of the TRC and the excess-rate exponent, which will be presented in Section 5.1, is to require the excess-rate exponent to exceed some value $E_r > 0$, then solve $E_{\mathrm{er}}(R(\cdot), \Delta) \geq E_r$ for an optimal rate function $R^*(Q_U)$, and then substitute this optimal rate function back into the error exponents in (24) and (28) to give expressions for the optimal trade-off function $E_e(E_r, \Delta)$. In Section 5.2, we present an alternative option to characterize this trade-off, which is to require the error exponent of the TRC to exceed some value $E_e > 0$, to solve $E_e(R(\cdot)) \geq E_e$ in order to extract an optimal rate function, and then to substitute it back into the excess-rate exponent in (34) to provide an expression for the optimal trade-off function $E_{\mathrm{er}}(E_e, \Delta)$.

5.1. Constrained Excess-Rate Exponent

Relying on the exponent function in (34), the following theorem proposes a rate function, whose optimality is proved in Appendix F.
Theorem 5.
Let $E_r > 0$ be fixed. Then, the constraint $E_{\mathrm{er}}(R(\cdot), \Delta) \geq E_r$ implies that
$R(Q_U) \leq J(Q_U, E_r, \Delta) = \min_{\{Q_{V|U}:\ D(Q_{UV} \| P_{UV}) \leq E_r\}} \left\{ H_Q(U|V) + \Delta \right\}.$
This means that we have a dichotomy between two kinds of source types. Each type class that is associated with an empirical distribution that is relatively close to the source distribution, i.e., one for which $D(Q_{UV} \| P_{UV}) \leq E_r$ for some $Q_{V|U}$, is partitioned into $e^{nJ(Q_U, E_r, \Delta)}$ bins, and the rest of the type classes, those that are relatively distant from $P_U$, are encoded by a one-to-one mapping. Two extreme cases should be considered here. First, when $E_r$ is relatively small, only the types closest to $P_U$ are encoded, with a rate of approximately $H_P(U|V) + \Delta$, which can be made arbitrarily close to the SW limit [8], and each atypical source sequence is allocated $n \cdot \log_2|\mathcal{U}|$ bits. This coding scheme is the one related to VR coding with an average rate constraint, like the one discussed in [12]. Second, when $E_r$ is extremely large, each type class is partitioned into $\exp\{n\Delta\}$ bins, which is equivalent to FR coding.
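The rate function of Theorem 5 is itself an optimization over conditionals, and for small alphabets it can be evaluated directly. The sketch below (our own illustration, with an assumed binary source and a coarse grid over $Q_{V|U}$) computes $J(Q_U, E_r, \Delta)$; it returns infinity when the constraint set is empty, in which case the corresponding type class can be encoded one-to-one.

```python
import numpy as np
from itertools import product

def J(Qu, P, Er, Delta, grid=200):
    """J(Q_U, E_r, Delta) = min over {Q_{V|U}: D(Q_UV||P_UV) <= E_r} of H_Q(U|V) + Delta."""
    best = np.inf
    ps = np.linspace(1e-6, 1 - 1e-6, grid)
    for a, b in product(ps, repeat=2):            # a = Q(V=1|U=0), b = Q(V=1|U=1)
        Q = np.array([[Qu[0] * (1 - a), Qu[0] * a],
                      [Qu[1] * (1 - b), Qu[1] * b]])
        D = np.sum(Q * np.log(Q / P))
        if D > Er:
            continue
        Qv = Q.sum(axis=0)
        H_UgV = -np.sum(Q * np.log(Q / Qv[None, :]))
        best = min(best, H_UgV + Delta)
    return best

# Hypothetical binary source with no zero entries, evaluated at Q_U = P_U
P = np.array([[0.60, 0.15], [0.05, 0.20]])
print(J(np.array([0.75, 0.25]), P, Er=0.05, Delta=0.1))
```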
Following the first part of Theorem 3, let us denote the error exponent of the TRC under MAP decoding by E e ( · ) . Upon substituting the optimal rate function of Theorem 5 back into (24) and (28) and using the fact that E e ( · ) is monotonically increasing, we find that the optimal trade-off function for the typical random SD code is given by
$E_e(E_r, \Delta) = \min_{\{Q_{UV}:\ H_Q(U) \geq J(Q_U)\}} \left\{ D(Q_{UV} \| P_{UV}) + [J(Q_U) - H_Q(U|V)]_+ \right\},$
or, alternatively,
$E_e(E_r, \Delta) = \min_{\{Q_{UU'}:\ Q_{U'} = Q_U,\ H_Q(U) \geq J(Q_U)\}} \left\{ \Lambda(Q_{UU'}, J(Q_U)) - \mathbb{E}_Q[\log P(U)] - H_Q(U,U') + J(Q_U) \right\},$
where $J(Q_U) = J(Q_U, E_r, \Delta)$ is given in (39). The dependence of $E_e(E_r, \Delta)$ on $E_r$ is as follows. Let $Q_{UU'}^*(\Delta)$ and $Q_{V|U}^*$ be the respective minimizers of the problems which are similar to (39) and (41), except that the constraint $D(Q_{UV} \| P_{UV}) \leq E_r$ is removed from (39). Furthermore, let $Q_U^*(\Delta)$ be the marginal distribution of $Q_{UU'}^*(\Delta)$. Now, when $E_r$ is sufficiently large, i.e., when $E_r \geq D(Q_U^*(\Delta) \times Q_{V|U}^* \| P_{UV})$, $E_e(E_r, \Delta)$ reaches a plateau and is the lowest possible. This follows from the fact that the stringent requirement on the excess-rate forces the encoder to encode each type class $Q_U$ at its target rate Δ, and thus all of them affect the error event. Otherwise, when $E_r < D(Q_U^*(\Delta) \times Q_{V|U}^* \| P_{UV})$, the constraint $D(Q_{UV} \| P_{UV}) \leq E_r$ is active and $E_e(E_r, \Delta)$ is a monotonically nonincreasing function of $E_r$. The reason for that is the fact that as $E_r$ decreases, more and more type classes are encoded with $n \cdot \log_2|\mathcal{U}|$ bits, and hence do not contribute to the error event. When $E_r = 0$, necessarily $Q_U = P_U$, only the typical set is encoded, and $E_e(0, \Delta)$ is the highest possible. In this case, $J(Q_U) = H_P(U|V) + \Delta$ and the constraint set in (41) becomes empty when $\Delta > I_P(U;V)$, and then $E_e(0, \Delta) = \infty$.

5.2. Constrained Error Exponent

Based on (24), the following theorem proposes a rate function, whose optimality is proved in Appendix G.
Theorem 6.
Let $E_e > 0$ be fixed. Then, the constraint $E_e(R(\cdot)) \geq E_e$ implies that
$R(Q_U) \geq \Omega(Q_U, E_e) = \min\left\{ H_Q(U),\ G(Q_U, E_e) \right\},$
where,
$G(Q_U, E_e) = \max_{\{Q_{V|U}:\ D(Q_{UV} \| P_{UV}) \leq E_e\}} \left\{ H_Q(U|V) + E_e - D(Q_{UV} \| P_{UV}) \right\}.$
The dependence of $G(Q_U, E_e)$ on $E_e$ is as follows. For any given $Q_U$, let $\tilde{Q}_{V|U}$ be the minimizer of $D(Q_{UV} \| P_{UV})$. Then, as long as $E_e < D(Q_U \times \tilde{Q}_{V|U} \| P_{UV})$, the constraint set in (43) is empty, and $R(Q_U)$ can vanish, which practically means that in this range, the entire type class $\mathcal{T}(Q_U)$ can be totally ignored, while still achieving $P_e \stackrel{\cdot}{\leq} e^{-nE_e}$. Only for the unique type $Q_U = P_U$, $G(P_U, E_e) > 0$ for all $E_e \geq 0$, and specifically, we find that $G(P_U, 0) = H_P(U|V)$. Furthermore, let $Q_{V|U}^*$ be the maximizer in the unconstrained problem
$\max_{Q_{V|U}} \left\{ H_Q(U|V) - D(Q_{UV} \| P_{UV}) \right\}.$
Then, as long as $E_e \in \left[ D(Q_U \times \tilde{Q}_{V|U} \| P_{UV}),\ D(Q_U \times Q_{V|U}^* \| P_{UV}) \right)$, $G(Q_U, E_e)$ is a monotonically nondecreasing function of $E_e$. When $E_e \geq D(Q_U \times Q_{V|U}^* \| P_{UV})$, the maximization in (43) reaches its unconstrained optimum, and $G(Q_U, E_e)$ increases without bound in an affine fashion as $E_e + H_{Q^*}(U|V) - D(Q_U \times Q_{V|U}^* \| P_{UV})$. As can be seen in (42), $\Omega(Q_U, E_e)$ finally reaches a plateau at the level of $H_Q(U)$.
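Analogously to the previous sketch, $G(Q_U, E_e)$ and $\Omega(Q_U, E_e)$ of Theorem 6 can be approximated by a grid search over binary conditionals; as before, the source and the grid are our own illustrative choices, and an empty constraint set is reported by returning 0, reflecting that the rate for such a type can vanish.

```python
import numpy as np
from itertools import product

def Omega(Qu, P, Ee, grid=200):
    """Omega(Q_U, E_e) = min{H_Q(U), G(Q_U, E_e)}, with
    G = max over {Q_{V|U}: D(Q_UV||P_UV) <= E_e} of H_Q(U|V) + E_e - D."""
    H_U = -np.sum(Qu * np.log(Qu))
    G = -np.inf
    ps = np.linspace(1e-6, 1 - 1e-6, grid)
    for a, b in product(ps, repeat=2):            # a = Q(V=1|U=0), b = Q(V=1|U=1)
        Q = np.array([[Qu[0] * (1 - a), Qu[0] * a],
                      [Qu[1] * (1 - b), Qu[1] * b]])
        D = np.sum(Q * np.log(Q / P))
        if D > Ee:
            continue
        Qv = Q.sum(axis=0)
        H_UgV = -np.sum(Q * np.log(Q / Qv[None, :]))
        G = max(G, H_UgV + Ee - D)
    return min(H_U, G) if np.isfinite(G) else 0.0   # empty constraint set: the rate can vanish

# Same hypothetical source as in the previous sketch, at Q_U = P_U
P = np.array([[0.60, 0.15], [0.05, 0.20]])
print(Omega(np.array([0.75, 0.25]), P, Ee=0.05))
```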
Upon substituting Ω ( Q U , E e ) back into (34) and using the fact that E e r ( · , Δ ) is monotonically nonincreasing, we find that the trade-off function is given by
$E_{\mathrm{er}}(E_e, \Delta) = \min_{\{Q_{UV}:\ \Omega(Q_U, E_e) \geq H_Q(U|V) + \Delta\}} D(Q_{UV} \| P_{UV}).$
Since $\Omega(Q_U, E_e)$ is monotonically nondecreasing in $E_e$ for every $Q_U$, $E_{\mathrm{er}}(E_e, \Delta)$ is monotonically nonincreasing in $E_e$, which is not very surprising. The dependence of $E_{\mathrm{er}}(E_e, \Delta)$ on $E_e$ and Δ is as follows. At $E_e = 0$, notice that $\Omega(Q_U, 0) = -\infty$ for any $Q_U \neq P_U$ (the constraint set in (43) being empty), while $\Omega(P_U, 0) = H_P(U|V)$. Thus, $E_{\mathrm{er}}(0, \Delta) = 0$ as long as $\Delta = 0$, and it follows from the monotonicity that $E_{\mathrm{er}}(E_e, 0) = 0$ everywhere. Otherwise, if $\Delta > 0$, the set $\{Q_{UV}:\ \Omega(Q_U, E_e) \geq H_Q(U|V) + \Delta\}$ is empty as long as $E_e < E_e^*(\Delta)$, where an expression for $E_e^*(\Delta)$ can be found by solving
$\max_{Q_{UV}} \left\{ \Omega(Q_U, E_e) - H_Q(U|V) \right\} = \Delta,$
and then $E_{\mathrm{er}}(E_e, \Delta) = \infty$ in this range. In the other extreme case of a very large $E_e$, $\Omega(Q_U, E_e)$ reaches a plateau at a level of $H_Q(U)$. Then, if $\Delta \leq H_P(U) - H_P(U|V) = I_P(U;V)$, $E_{\mathrm{er}}(E_e, \Delta)$ reaches zero for a sufficiently large $E_e$. Else, if $\Delta > I_P(U;V)$, $E_{\mathrm{er}}(E_e, \Delta)$ reaches a strictly positive plateau, given by
$\min_{\{Q_{UV}:\ I_Q(U;V) \geq \Delta\}} D(Q_{UV} \| P_{UV}),$
which is a monotonically nondecreasing function of Δ. Particularly, it means that in this range, the typical random SD code attains both an exponentially vanishing excess-rate probability and $P_e \to 0$.
It is interesting to relate this to the expurgated bound of the FR code in the SW model, which is given by (15). Comparing $E_{\mathrm{ex}}^{\mathrm{fr}}(R)$ and $E_e(\infty, \Delta)$ analytically is rather difficult. Thus, we examined these two exponent functions numerically. Consider the case of a double binary source with alphabets $\mathcal{U} = \mathcal{V} = \{0,1\}$ and joint probabilities given by $P_{UV}(0,0) = 0.75$, $P_{UV}(0,1) = 0.1$, $P_{UV}(1,0) = 0$, and $P_{UV}(1,1) = 0.15$. We already mentioned that in the special case of $E_r = \infty$, the rate function is given by the threshold Δ; hence, we choose $\Delta = R$ in order to have a fair comparison. Graphs of the functions $E_{\mathrm{ex}}^{\mathrm{fr}}(R)$ and $E_e(\infty, R)$ are presented in Figure 1.
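For reference, the basic information measures of this example source, which determine where the curves in Figure 1 can and cannot vanish, are easy to compute; the snippet below (an illustration, not the code used to produce the figure) evaluates $H_P(U)$, the SW limit $H_P(U|V)$, $I_P(U;V)$, and $\log|\mathcal{U}| = \log 2$.

```python
import numpy as np

# Joint distribution of the double binary example in Section 5.2
P = np.array([[0.75, 0.10], [0.00, 0.15]])
Pu, Pv = P.sum(axis=1), P.sum(axis=0)

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

H_U = H(Pu)                      # H_P(U)
H_UgV = H(P.ravel()) - H(Pv)     # H_P(U|V): the Slepian-Wolf limit
I_UV = H_U - H_UgV               # I_P(U;V)
print(H_U, H_UgV, I_UV, np.log(2))   # np.log(2) ~ 0.693 is log|U|
```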
As can be seen in Figure 1, both $E_{\mathrm{ex}}^{\mathrm{fr}}(R)$ and $E_e(\infty, R)$ tend to infinity as R tends to $\log 2 \approx 0.693$. For relatively high binning rates, $E_{\mathrm{ex}}^{\mathrm{fr}}(R)$ is strictly higher than $E_e(\infty, R)$, which can be explained in the following way: Referring to the analogy between SW coding and channel coding, one can think of each bin as containing a channel code. In general, a channel code behaves well if it does not contain pairs of relatively "close" codewords. Since we randomly assign the source vectors into the bins (even if the populations of the bins are totally equal, which can be attained by randomly partitioning each type class into $\exp\{nR\}$ subsets), it is reasonable to assume that some bins will contain relatively bad codebooks. On the other hand, in the expurgated SW code [11], each type class $\mathcal{T}(Q_U)$ is partitioned into $\exp\{nR\}$ "balanced" subsets in some sense (referring to the enumerators $N(Q_{UU'})$ in (30), they are equally populated in all of the bins), such that the codebooks contained in the bins have approximately equal error probabilities. Moreover, we conclude from (15) that each bin contains a codebook with the quality of an expurgated channel code. This code is certainly better than the TRCs in the SD ensemble.
In channel coding, it is known [23] that the random Gilbert–Varshamov ensemble has an exact random coding error exponent which is as high as the maximum of (16) and (17). In SW source coding, on the other hand, it seems to be a more challenging problem to define an ensemble such that the error exponent of its TRCs is as high as $E_{\mathrm{ex}}^{\mathrm{fr}}(R)$ of (15). Since the gap between $E_{\mathrm{ex}}^{\mathrm{fr}}(R)$ and $E_e(\infty, R)$ is not necessarily very significant, as can be seen in Figure 1, we conclude that the SD ensemble may be more attractive, because the amount of computation needed for drawing a code from it is much lower than the amount required for obtaining an expurgated SW code. In addition, it is important to note that the probability of drawing an SD code with an exponent much lower than $E_e(\infty, R)$ decays exponentially fast, in analogy to the result in pure channel coding [7].

Author Contributions

Both authors contributed equally to this research. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Israel Science Foundation (ISF) grant number 137/18.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Theorem 1

By definition, we have
$\mathbb{E}[P_e(\mathcal{B}_n)] = \mathbb{E}\left[ \frac{\sum_{u' \in \mathcal{B}(U),\ u' \neq U} \exp\{ n f(\hat{P}_{u'V}) \}}{\sum_{\tilde{u} \in \mathcal{B}(U)} \exp\{ n f(\hat{P}_{\tilde{u}V}) \}} \right].$
Step 1: Averaging Over the Random Code
We first condition on the true source sequences ( U = u , V = v ) and take the expectation only w.r.t. the random binning. We get
$\mathbb{E}[P_e(\mathcal{B}_n) \mid u, v]$
$= \mathbb{E}\left[ \frac{\sum_{u' \in \mathcal{B}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \}}{\exp\{ n f(\hat{P}_{uv}) \} + \sum_{u' \in \mathcal{B}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \}} \right]$
$= \int_0^1 \mathsf{P}\left\{ \frac{\sum_{u' \in \mathcal{B}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \}}{\exp\{ n f(\hat{P}_{uv}) \} + \sum_{u' \in \mathcal{B}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \}} \geq s \right\} \mathrm{d}s$
$= \int_0^\infty n e^{-n\xi} \cdot \mathsf{P}\left\{ \frac{\sum_{u' \in \mathcal{B}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \}}{\exp\{ n f(\hat{P}_{uv}) \} + \sum_{u' \in \mathcal{B}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \}} \geq e^{-n\xi} \right\} \mathrm{d}\xi$
$= \int_0^\infty n e^{-n\xi} \cdot \mathsf{P}\left\{ (1 - e^{-n\xi}) \sum_{u' \in \mathcal{B}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \} \geq e^{-n\xi} \exp\{ n f(\hat{P}_{uv}) \} \right\} \mathrm{d}\xi$
$\doteq \int_0^\infty n e^{-n\xi} \cdot \mathsf{P}\left\{ \sum_{u' \in \mathcal{B}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \} \geq \exp\{ n [ f(\hat{P}_{uv}) - \xi ] \} \right\} \mathrm{d}\xi,$
where (A4) follows by changing the integration variable in (A3) according to $s = e^{-n\xi}$. Define
$N_{u,v}(Q_{U'|V}) = \sum_{u' \in \mathcal{B}(u),\ u' \neq u} \mathbb{1}\left\{ u' \in \mathcal{T}(Q_{U'|V} | v) \right\},$
such that the probability in (A6) is given by
$\mathsf{P}\left\{ \sum_{u' \in \mathcal{B}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \} \geq \exp\{ n [ f(\hat{P}_{uv}) - \xi ] \} \right\}$
$= \mathsf{P}\left\{ \sum_{Q_{U'|V}} N_{u,v}(Q_{U'|V}) \exp\{ n f(Q_{U'V}) \} \geq \exp\{ n [ f(\hat{P}_{uv}) - \xi ] \} \right\}$
$\doteq \mathsf{P}\left\{ \max_{Q_{U'|V}} N_{u,v}(Q_{U'|V}) \exp\{ n f(Q_{U'V}) \} \geq \exp\{ n [ f(\hat{P}_{uv}) - \xi ] \} \right\}$
$= \mathsf{P}\left\{ \bigcup_{Q_{U'|V}} \left\{ N_{u,v}(Q_{U'|V}) \exp\{ n f(Q_{U'V}) \} \geq \exp\{ n [ f(\hat{P}_{uv}) - \xi ] \} \right\} \right\}$
$\doteq \sum_{Q_{U'|V}} \mathsf{P}\left\{ N_{u,v}(Q_{U'|V}) \geq \exp\{ n [ f(\hat{P}_{uv}) - f(Q_{U'V}) - \xi ] \} \right\},$
where $Q_{U'V} = Q_{U'|V} \times \hat{P}_{v}$. Let us denote $B_0 = f(\hat{P}_{uv}) - f(Q_{U'V})$. Now, given $u$ and $v$, $N_{u,v}(Q_{U'|V})$ is a binomial sum of $|\mathcal{T}(Q_{U'|V}|v)| \doteq e^{nH_Q(U'|V)}$ trials and success rate of the exponential order of $e^{-nR(Q_{U'})}$. Therefore, using the techniques of [24] (Section 6.3),
$-\frac{1}{n} \log \mathsf{P}\left\{ N_{u,v}(Q_{U'|V}) \geq \exp\{ n [ B_0 - \xi ] \} \right\}$
$= \begin{cases} [R(Q_{U'}) - H_Q(U'|V)]_+ & [H_Q(U'|V) - R(Q_{U'})]_+ \geq B_0 - \xi \\ \infty & [H_Q(U'|V) - R(Q_{U'})]_+ < B_0 - \xi \end{cases}$
$= \begin{cases} [R(Q_{U'}) - H_Q(U'|V)]_+ & \xi \geq B_0 - [H_Q(U'|V) - R(Q_{U'})]_+ \\ \infty & \xi < B_0 - [H_Q(U'|V) - R(Q_{U'})]_+ \end{cases},$
and so,
$\int_0^\infty e^{-n\xi} \cdot \mathsf{P}\left\{ N_{u,v}(Q_{U'|V}) \geq \exp\{ n [ B_0 - \xi ] \} \right\} \mathrm{d}\xi$
$\doteq \int_{\left[ B_0 - [H_Q(U'|V) - R(Q_{U'})]_+ \right]_+}^{\infty} e^{-n\xi} \cdot e^{-n [R(Q_{U'}) - H_Q(U'|V)]_+} \, \mathrm{d}\xi$
$\doteq \exp\left\{ -n \left( [R(Q_{U'}) - H_Q(U'|V)]_+ + \left[ B_0 - [H_Q(U'|V) - R(Q_{U'})]_+ \right]_+ \right) \right\}$
$= \begin{cases} \exp\left\{ -n \left( R(Q_{U'}) - H_Q(U'|V) + [B_0]_+ \right) \right\} & R(Q_{U'}) \geq H_Q(U'|V) \\ \exp\left\{ -n \left[ R(Q_{U'}) - H_Q(U'|V) + B_0 \right]_+ \right\} & R(Q_{U'}) < H_Q(U'|V) \end{cases}$
$= \begin{cases} \exp\left\{ -n \left[ R(Q_{U'}) - H_Q(U'|V) + [B_0]_+ \right]_+ \right\} & R(Q_{U'}) \geq H_Q(U'|V) \\ \exp\left\{ -n \left[ R(Q_{U'}) - H_Q(U'|V) + [B_0]_+ \right]_+ \right\} & R(Q_{U'}) < H_Q(U'|V) \end{cases}$
$= \exp\left\{ -n \cdot \left[ R(Q_{U'}) - H_Q(U'|V) + [B_0]_+ \right]_+ \right\}.$
Finally, we have that
$\sum_{Q_{U'|V}} \int_0^\infty e^{-n\xi} \cdot \mathsf{P}\left\{ N_{u,v}(Q_{U'|V}) \geq \exp\{ n [ B_0 - \xi ] \} \right\} \mathrm{d}\xi$
$\doteq \max_{Q_{U'|V}} \exp\left\{ -n \cdot \left[ R(Q_{U'}) - H_Q(U'|V) + [B_0]_+ \right]_+ \right\}$
$= \exp\left\{ -n \cdot \min_{Q_{U'|V}} \left[ R(Q_{U'}) - H_Q(U'|V) + [B_0]_+ \right]_+ \right\},$
thus,
$E(u, v) = \min_{Q_{U'|V}} \left[ R(Q_{U'}) - H_Q(U'|V) + \left[ f(\hat{P}_{uv}) - f(Q_{U'V}) \right]_+ \right]_+.$
Step 2: Averaging Over U and V
Notice that the exponent function E ( u , v ) depends on ( u , v ) only via the empirical distribution P ^ u v . Averaging over the source and the side information sequences, now yields
$\mathbb{E}\left[ P_e(\mathcal{B}_n) \right] \doteq \sum_{u,v} P(u,v) \cdot \mathbb{1}\left\{ \hat{H}_{u}(U) \geq R(\hat{P}_{u}) \right\} \cdot \exp\left\{ -n \cdot E(\hat{P}_{uv}) \right\}$
$\doteq \sum_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} e^{-n \cdot D(Q_{UV} \| P_{UV})} \cdot \exp\left\{ -n \cdot E(Q_{UV}) \right\}$
$\doteq \exp\left\{ -n \cdot \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{ D(Q_{UV} \| P_{UV}) + E(Q_{UV}) \right\} \right\},$
which proves the first point of Theorem 1.
Step 3: Moving from Stochastic to Deterministic Decoding
In order to transform the GLD into the general deterministic decoder of
$\hat{u} = \arg\max_{u' \in \mathcal{B}(u) \cap \mathcal{T}(u)} f(\hat{P}_{u'v}),$
we just have to multiply f ( · ) , in
$E(Q_{UV}) = \min_{Q_{U'|V}} \left[ R(Q_{U'}) - H_Q(U'|V) + \left[ f(Q_{UV}) - f(Q_{U'V}) \right]_+ \right]_+,$
by $\beta \geq 0$, and then let $\beta \to \infty$. We find that the overall error exponent of the SD ensemble with the general deterministic decoder of (A26) is given by
$E(P) = \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{ D(Q_{UV} \| P_{UV}) + \tilde{E}(Q_{UV}) \right\},$
where,
$\tilde{E}(Q_{UV}) = \min_{\{Q_{U'|V}:\ f(Q_{U'V}) \geq f(Q_{UV})\}} \left[ R(Q_{U'}) - H_Q(U'|V) \right]_+.$
Step 4: A Fundamental Limitation on the Error Exponent
Note that the minimum in (A29) can be upper-bounded by choosing a specific distribution in the feasible set. In (A29), we take $Q_{U'|V} = Q_{U|V}$ and then
$\tilde{E}(Q_{UV}) \leq \left[ R(Q_U) - H_Q(U|V) \right]_+.$
Hence, the overall error exponent is upper-bounded as
$E(P) \leq \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{ D(Q_{UV} \| P_{UV}) + \left[ R(Q_U) - H_Q(U|V) \right]_+ \right\}.$
Step 5: An Optimal Universal Decoder
We prove that the upper bound of (A31) is attainable by choosing the universal decoding metric $f(Q_{U'V}) = -H_Q(U'|V)$. Now, we get for (A29)
$\tilde{E}(Q_{UV}) = \min_{\{Q_{U'|V}:\ f(Q_{U'V}) \geq f(Q_{UV})\}} \left[ R(Q_{U'}) - H_Q(U'|V) \right]_+$
$= \min_{\{Q_{U'|V}:\ H_Q(U'|V) \leq H_Q(U|V)\}} \left[ R(Q_{U'}) - H_Q(U'|V) \right]_+$
$= \left[ R(Q_U) - H_Q(U|V) \right]_+,$
which completes the proof of Theorem 1.

Appendix B. Proof of Theorem 2

Lower Bound on the Error Exponent

Our starting point is the following inequality, for any ρ > 0 ,
$\mathbb{E}[\log P_e(\mathcal{B}_n)] \leq \log \left( \mathbb{E}\left[ P_e(\mathcal{B}_n)^{1/\rho} \right] \right)^{\rho},$
which is due to the following considerations. First, for a positive random variable X, the function
$f(\rho) = \log \left( \mathbb{E}\left[ X^{1/\rho} \right] \right)^{\rho}$
is monotonically decreasing, and second, by L'Hospital's rule,
$\lim_{\rho \to \infty} \log \left( \mathbb{E}\left[ X^{1/\rho} \right] \right)^{\rho} = \mathbb{E}[\log X].$
Recall that the error probability is given by
$P_e(\mathcal{B}_n) = \sum_{u,v} P(u,v) \cdot \mathbb{1}\left\{ \hat{H}_{u}(U) \geq R(\hat{P}_{u}) \right\} \cdot \frac{\sum_{u' \in \mathcal{B}(u) \cap \mathcal{T}(u),\ u' \neq u} \exp\{ n f(\hat{P}_{u'v}) \}}{\sum_{\tilde{u} \in \mathcal{B}(u) \cap \mathcal{T}(u)} \exp\{ n f(\hat{P}_{\tilde{u}v}) \}}.$
Let
$Z_{u}(v) = \sum_{\tilde{u} \in \mathcal{B}(u) \cap \mathcal{T}(u),\ \tilde{u} \neq u} \exp\{ n f(\hat{P}_{\tilde{u}v}) \},$
fix $\epsilon > 0$ arbitrarily small, and for every $u \in \mathcal{U}^n$ and $v \in \mathcal{V}^n$, define the set
$\mathcal{B}_{\epsilon}(u, v) = \left\{ \mathcal{B}_n :\ Z_{u}(v) \leq \exp\{ n \alpha(R + \epsilon, \hat{P}_{u}, \hat{P}_{v}) \} \right\}.$
Following the result of [20] (Appendix B), we prove the following modification in Appendix C.
Lemma A1.
Let $\epsilon > 0$ be arbitrarily small. Then, for every $u \in \mathcal{U}^n$ and $v \in \mathcal{V}^n$,
$\mathsf{P}\left[ Z_{u}(v) \leq \exp\{ n \alpha(R + \epsilon, \hat{P}_{u}, \hat{P}_{v}) \} \right] \leq \exp\{ -e^{n\epsilon} + n\epsilon + 1 \}.$
Thus, by the union bound,
$\mathsf{P}\left\{ \bigcup_{u \in \mathcal{U}^n} \bigcup_{v \in \mathcal{V}^n} \mathcal{B}_{\epsilon}(u, v) \right\} = \mathsf{P}\{\mathcal{B}_{\epsilon}\} \leq \sum_{u \in \mathcal{U}^n} \sum_{v \in \mathcal{V}^n} \mathsf{P}\{\mathcal{B}_{\epsilon}(u, v)\}$
$\leq \sum_{u \in \mathcal{U}^n} \sum_{v \in \mathcal{V}^n} \exp\{ -e^{n\epsilon} + n\epsilon + 1 \}$
$= |\mathcal{U} \times \mathcal{V}|^n \cdot \exp\{ -e^{n\epsilon} + n\epsilon + 1 \},$
which still decays double-exponentially fast. Recall that $\mathcal{Q} = \{Q_{UU'}:\ Q_{U'} = Q_U\}$. Then, for any $\rho \geq 1$,
E P e ( B n ) 1 / ρ
= E P e ( B n ) 1 / ρ · 𝟙 { B ϵ c } + E P e ( B n ) 1 / ρ · 𝟙 { B ϵ } E u , v P ( u , v ) 𝟙 H ^ u ( U ) R ( P ^ u ) u B ( u ) T ( u ) , u u exp { n f ( P ^ u v ) } exp { n f ( P ^ u v ) } + Z u ( v ) 1 / ρ 𝟙 { B ϵ c }
+ P { B ϵ } E u , v u B ( u ) T ( u ) , u u P ( u , v ) 𝟙 H ^ u ( U ) R ( P ^ u )
× min 1 , exp { n f ( P ^ u v ) } exp { n f ( P ^ u v ) } + exp { n α ( R + ϵ , P ^ u , P ^ v ) } 1 / ρ
+ | U × V | n · exp { e n ϵ + n ϵ + 1 } E u u B ( u ) T ( u ) , u u P ( u ) 𝟙 H ^ u ( U ) R ( P ^ u )
× v P ( v | u ) exp n · max { f ( P ^ u v ) , α ( R + ϵ , P ^ u , P ^ v ) } f ( P ^ u v ) + 1 / ρ
E u u B ( u ) T ( u ) , u u P ( u ) · 𝟙 H ^ u ( U ) R ( P ^ u ) · exp n · Λ ( P ^ u u , R + ϵ ) 1 / ρ
= E { Q U U Q : H Q ( U ) R ( Q U ) } N ( Q U U ) · e n E Q [ log P ( U ) ] · exp n · Λ ( Q U U , R + ϵ ) 1 / ρ
{ Q U U Q : H Q ( U ) R ( Q U ) } E N ( Q U U ) 1 / ρ · e n ( E Q [ log P ( U ) ] ) / ρ · exp n · Λ ( Q U U , R + ϵ ) / ρ ,
where (A47) is due to Lemma A1, (A49) is by the method of types and the definition of Λ ( Q U U , R ) in (27), and in (A50) we used the definition of N ( Q U U ) in (30). Therefore, our next task is to evaluate the 1 / ρ –th moment of N ( Q U U ) . Let us define
$N_{u}(Q_{U'|U}) = \sum_{u' \in \mathcal{T}(Q_{U'|U} | u)} \mathbb{1}\left\{ \mathcal{B}(u') = \mathcal{B}(u) \right\}.$
For a given $\rho \geq 1$, let $s \in [1, \rho]$. Then,
$\mathbb{E}\left[ N(Q_{UU'})^{1/\rho} \right] = \mathbb{E}\left[ \left( \sum_{u \in \mathcal{T}(Q_U)} N_{u}(Q_{U'|U}) \right)^{1/\rho} \right]$
$= \mathbb{E}\left[ \left( \left( \sum_{u \in \mathcal{T}(Q_U)} N_{u}(Q_{U'|U}) \right)^{1/s} \right)^{s/\rho} \right]$
$\leq \mathbb{E}\left[ \left( \sum_{u \in \mathcal{T}(Q_U)} N_{u}(Q_{U'|U})^{1/s} \right)^{s/\rho} \right]$
$\leq \left( \mathbb{E}\left[ \sum_{u \in \mathcal{T}(Q_U)} N_{u}(Q_{U'|U})^{1/s} \right] \right)^{s/\rho}$
$= \left( \sum_{u \in \mathcal{T}(Q_U)} \mathbb{E}\left[ N_{u}(Q_{U'|U})^{1/s} \right] \right)^{s/\rho},$
where (A56) follows from Jensen's inequality. Now, $N_{u}(Q_{U'|U})$ is a binomial random variable with $|\mathcal{T}(Q_{U'|U}|u)| \doteq e^{nH_Q(U'|U)}$ trials and success rate which is of the exponential order of $e^{-nR}$. We have that [24] (Section 6.3)
$\mathbb{E}\left[ N_{u}(Q_{U'|U})^{1/s} \right] \doteq \begin{cases} \exp\left\{ n [H_Q(U'|U) - R] / s \right\} & H_Q(U'|U) \geq R \\ \exp\left\{ n [H_Q(U'|U) - R] \right\} & H_Q(U'|U) < R \end{cases},$
and so,
E N ( Q U U ) 1 / ρ e n H Q ( U ) · s / ρ · E N u ( Q U | U ) 1 / s s / ρ
e n H Q ( U ) · s / ρ · exp { n [ H Q ( U | U ) R ] / ρ } H Q ( U | U ) R exp { n [ H Q ( U | U ) R ] s / ρ } H Q ( U | U ) < R
= exp { n [ H Q ( U ) · s + H Q ( U | U ) R ] / ρ } H Q ( U | U ) R exp { n [ H Q ( U ) + H Q ( U | U ) R ] s / ρ } H Q ( U | U ) < R
= exp { n [ H Q ( U ) · s + H Q ( U | U ) R ] / ρ } H Q ( U | U ) R exp { n [ H Q ( U , U ) R ] s / ρ } H Q ( U | U ) < R .
After optimizing over s, we get
1 n log E N ( Q U U ) 1 / ρ
min 1 s ρ H Q ( U ) · s + H Q ( U | U ) R / ρ H Q ( U | U ) R H Q ( U , U ) R s / ρ H Q ( U | U ) < R , H Q ( U , U ) R H Q ( U , U ) R s / ρ H Q ( U | U ) < R , H Q ( U , U ) < R
= H Q ( U ) + H Q ( U | U ) R / ρ H Q ( U | U ) R H Q ( U , U ) R / ρ H Q ( U | U ) < R , H Q ( U , U ) R H Q ( U , U ) R ρ / ρ H Q ( U | U ) < R , H Q ( U , U ) < R
= H Q ( U , U ) R / ρ H Q ( U , U ) R H Q ( U , U ) R H Q ( U , U ) < R ,
which gives, after raising to the ρ-th power,
$\left( \mathbb{E}\left[ N(Q_{UU'})^{1/\rho} \right] \right)^{\rho} \stackrel{\cdot}{\leq} \begin{cases} \exp\left\{ n [H_Q(U,U') - R] \right\} & H_Q(U,U') \geq R \\ \exp\left\{ n [H_Q(U,U') - R] \cdot \rho \right\} & H_Q(U,U') < R \end{cases}$
$= \exp\left\{ n \left( [H_Q(U,U') - R]_+ - \rho [R - H_Q(U,U')]_+ \right) \right\}.$
Let us denote $F(Q, R, \rho) = [H_Q(U,U') - R]_+ - \rho [R - H_Q(U,U')]_+$. Continuing now from (A51),
E P e ( B n ) 1 / ρ ρ
· { Q U U Q : H Q ( U ) R ( Q U ) } E N ( Q U U ) 1 / ρ · e n [ E Q log P ( U ) ] / ρ · exp n · Λ ( Q U U , R + ϵ ) / ρ ρ
{ Q U U Q : H Q ( U ) R ( Q U ) } E N ( Q U U ) 1 / ρ ρ · e n E Q log P ( U ) · exp n · Λ ( Q U U , R + ϵ )
{ Q U U Q : H Q ( U ) R ( Q U ) } exp { n ( F ( Q , R , ρ ) + E Q [ log P ( U ) ] Λ ( Q U U , R + ϵ ) ) }
exp n · min { Q U U Q : H Q ( U ) R ( Q U ) } ( Λ ( Q U U , R + ϵ ) F ( Q , R , ρ ) E Q [ log P ( U ) ] ) .
where (A70) follows from (A67). Finally, it follows by (A35) that
lim inf n 1 n E [ log P e ( B n ) ]
lim inf n 1 n log E [ P e ( B n ) ] 1 / ρ ρ
min { Q U U Q : H Q ( U ) R ( Q U ) } ( Λ ( Q U U , R + ϵ ) F ( Q , R , ρ ) E Q [ log P ( U ) ] ) .
Letting ρ grow without bound yields that
lim inf n 1 n E [ log P e ( B n ) ] min { Q U U Q : H Q ( U ) R ( Q U ) , H Q ( U , U ) R ( Q U ) } ( Λ ( Q U U , R + ϵ ) H Q ( U , U )
+ R ( Q U ) E Q [ log P ( U ) ] )
= min { Q U U Q : H Q ( U ) R ( Q U ) } ( Λ ( Q U U , R + ϵ ) H Q ( U , U ) + R ( Q U ) E Q [ log P ( U ) ] ) .
Due to the arbitrariness of ϵ > 0 , we have proved that
lim inf n 1 n E [ log P e ( B n ) ] min { Q U U Q : H Q ( U ) R ( Q U ) } ( Λ ( Q U U , R ) H Q ( U , U ) + R ( Q U ) E Q [ log P ( U ) ] ) .
completing half of the proof of Theorem 2.

Upper Bound on the Error Exponent

Consider a joint distribution $Q_{UU'}$ that satisfies $H_Q(U,U') > R$, and define the event $\mathcal{E}(Q_{UU'}) = \{\mathcal{B}_n :\ N(Q_{UU'}) < \exp\{ n [H_Q(U,U') - R - \epsilon] \}\}$. We want to show that $\mathsf{P}\{\mathcal{E}(Q_{UU'})\}$ is small. Consider the following:
$\mathsf{P}\{\mathcal{E}(Q_{UU'})\} = \mathsf{P}\left\{ N(Q_{UU'}) < \exp\{ n [H_Q(U,U') - R - \epsilon] \} \right\}$
$\doteq \mathsf{P}\left\{ N(Q_{UU'}) < e^{-n\epsilon} \cdot \mathbb{E}\{ N(Q_{UU'}) \} \right\}$
$= \mathsf{P}\left\{ \frac{N(Q_{UU'})}{\mathbb{E}\{N(Q_{UU'})\}} - 1 < -(1 - e^{-n\epsilon}) \right\}$
$\leq \mathsf{P}\left\{ \left( \frac{N(Q_{UU'}) - \mathbb{E}\{N(Q_{UU'})\}}{\mathbb{E}\{N(Q_{UU'})\}} \right)^2 > (1 - e^{-n\epsilon})^2 \right\}$
$\leq \frac{\mathrm{Var}\{N(Q_{UU'})\}}{(1 - e^{-n\epsilon})^2 \cdot \mathbb{E}^2\{N(Q_{UU'})\}}.$
Let us use the shorthand notations $I(u, u') = \mathbb{1}\{\mathcal{B}(u) = \mathcal{B}(u')\}$, $K = |\mathcal{T}(Q_{UU'})|$, and $p = e^{-nR}$. Concerning the variance of $N(Q_{UU'})$, we have the following:
$\mathrm{Var}\{N(Q_{UU'})\}$
$= \mathbb{E}\{N^2(Q_{UU'})\} - \mathbb{E}^2\{N(Q_{UU'})\}$
$= \mathbb{E}\left[ \sum_{(u,u') \in \mathcal{T}(Q_{UU'})} I(u, u') \times \sum_{(\tilde{u},\hat{u}) \in \mathcal{T}(Q_{UU'})} I(\tilde{u}, \hat{u}) \right] - (Kp)^2$
$= \sum_{(u,u') \in \mathcal{T}(Q_{UU'})} \sum_{(\tilde{u},\hat{u}) \in \mathcal{T}(Q_{UU'})} \mathbb{E}\left[ I(u, u') I(\tilde{u}, \hat{u}) \right] - (Kp)^2$
$= \sum_{(u,u') \in \mathcal{T}(Q_{UU'})} \mathbb{E}\left[ I^2(u, u') \right] + \sum_{\substack{(u,u'), (\tilde{u},\hat{u}) \in \mathcal{T}(Q_{UU'}) \\ (u,u') \neq (\tilde{u},\hat{u})}} \mathbb{E}\left[ I(u, u') I(\tilde{u}, \hat{u}) \right] - (Kp)^2$
$= Kp + K(K-1)p^2 - (Kp)^2$
$= Kp(1 - p)$
$\doteq \exp\{ n [H_Q(U,U') - R] \},$
and hence,
$\mathsf{P}\{\mathcal{E}(Q_{UU'})\} \stackrel{\cdot}{\leq} \frac{\exp\{ n [H_Q(U,U') - R] \}}{\exp\{ n [2H_Q(U,U') - 2R] \}}$
$= \exp\{ -n [H_Q(U,U') - R] \},$
which decays to zero since we have assumed that $H_Q(U,U') > R$. Furthermore, if $H_Q(U,U') \geq R + \epsilon$, then $\mathsf{P}\{\mathcal{E}(Q_{UU'})\}$ tends to zero at least as fast as $e^{-n\epsilon}$. Now, for a given $\epsilon > 0$, and a given joint type $Q_{UU'V}$, such that $H_Q(U,U') \geq R + \epsilon$, let us define
$Z_{uu'}(v) = \sum_{\tilde{u} \in \mathcal{B}(u) \cap \mathcal{T}(u),\ \tilde{u} \neq u, u'} \exp\{ n f(\hat{P}_{\tilde{u}v}) \},$
and
G n ( Q U U V ) = { B n : ( u , u ) T ( Q U U ) 𝟙 { B ( u ) = B ( u ) } × v T ( Q V | U U | u , u ) 𝟙 Z u u ( v ) e n [ α ( R 2 ϵ , Q U , Q V ) + ϵ ] exp { n [ H Q ( U , U ) R 3 ϵ / 2 ] } · | T ( Q V | U U | u , u ) | } ,
where ( u , u ) in the expression | T ( Q V | U U | u , u ) | should be understood as any pair of source sequences in T ( Q U U ) . Next, we define
G n = { Q U U V : H Q ( U , U ) R + ϵ } [ G n ( Q U U V ) E c ( Q U U ) ] .
We start by proving that P { G n } 1 as n , or equivalently, that P { G n c } 0 as n . Now,
P { G n c } = P { Q U U V : H Q ( U , U ) R + ϵ } [ G n c ( Q U U V ) E ( Q U U ) ]
{ Q U U V : H Q ( U , U ) R + ϵ } P G n c ( Q U U V ) E ( Q U U )
= { Q U U V : H Q ( U , U ) R + ϵ } [ P E ( Q U U ) + P G n c ( Q U U V ) E c ( Q U U ) ] .
The last summation contains a polynomial number of terms. If we prove that the summand tends to zero exponentially with n, then P { G n c } 0 as n . The first term in the summand, P E ( Q U U ) , has already been proved to be upper bounded by e n ϵ . Concerning the second term, we have the following
P\{\mathcal{G}_n^c(Q_{UU'V}) \cap \mathcal{E}^c(Q_{UU'})\}
= P\bigg[\sum_{(u,u') \in T(Q_{UU'})} \mathbbm{1}\{B(u) = B(u')\} \cdot \sum_{v \in T(Q_{V|UU'}|u,u')} \mathbbm{1}\left\{Z_{uu'}(v) \leq e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\} < \exp\{n[H_Q(U,U') - R - 3\epsilon/2]\} \cdot |T(Q_{V|UU'}|u,u')|,\ N(Q_{UU'}) \geq \exp\{n[H_Q(U,U') - R - \epsilon]\}\bigg]
= P\bigg[\sum_{(u,u') \in T(Q_{UU'})} \mathbbm{1}\{B(u) = B(u')\} \cdot \sum_{v \in T(Q_{V|UU'}|u,u')} \mathbbm{1}\left\{Z_{uu'}(v) > e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\} > \left[N(Q_{UU'}) - \exp\{n[H_Q(U,U') - R - 3\epsilon/2]\}\right] \cdot |T(Q_{V|UU'}|u,u')|,\ N(Q_{UU'}) \geq \exp\{n[H_Q(U,U') - R - \epsilon]\}\bigg]
\leq P\bigg[\sum_{(u,u') \in T(Q_{UU'})} \mathbbm{1}\{B(u) = B(u')\} \cdot \sum_{v \in T(Q_{V|UU'}|u,u')} \mathbbm{1}\left\{Z_{uu'}(v) > e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\} > \left[\exp\{n[H_Q(U,U') - R - \epsilon]\} - \exp\{n[H_Q(U,U') - R - 3\epsilon/2]\}\right] \cdot |T(Q_{V|UU'}|u,u')|,\ N(Q_{UU'}) \geq \exp\{n[H_Q(U,U') - R - \epsilon]\}\bigg]
\leq P\bigg[\sum_{(u,u') \in T(Q_{UU'})} \mathbbm{1}\{B(u) = B(u')\} \cdot \sum_{v \in T(Q_{V|UU'}|u,u')} \mathbbm{1}\left\{Z_{uu'}(v) > e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\} > \left[\exp\{n[H_Q(U,U') - R - \epsilon]\} - \exp\{n[H_Q(U,U') - R - 3\epsilon/2]\}\right] \cdot |T(Q_{V|UU'}|u,u')|\bigg]
\leq \frac{E\left\{\sum_{(u,u') \in T(Q_{UU'})} \mathbbm{1}\{B(u) = B(u')\} \cdot \sum_{v \in T(Q_{V|UU'}|u,u')} \mathbbm{1}\left\{Z_{uu'}(v) > e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\}\right\}}{\left[\exp\{n[H_Q(U,U') - R - \epsilon]\} - \exp\{n[H_Q(U,U') - R - 3\epsilon/2]\}\right] \cdot |T(Q_{V|UU'}|u,u')|}
\doteq \frac{|T(Q_{UU'})| \cdot |T(Q_{V|UU'}|u,u')| \cdot P\left\{B(u) = B(u'),\ Z_{uu'}(v) > e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\}}{\exp\{n[H_Q(U,U') - R - \epsilon]\} \cdot |T(Q_{V|UU'}|u,u')|}
\leq \frac{\exp\{n H_Q(U,U')\} \cdot P\{B(u) = B(u')\} \cdot P\left\{Z_{uu'}(v) > e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\}}{\exp\{n[H_Q(U,U') - R - \epsilon]\}}
= e^{n\epsilon} \cdot P\left\{Z_{uu'}(v) > e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\},
where (A99) follows by using the second event, N(Q_{UU'}) \geq \exp\{n[H_Q(U,U') - R - \epsilon]\}, to lower-bound N(Q_{UU'}) and thereby enlarge the first event inside the probability in (A98), (A100) holds since the second event in (A99) was dropped, (A101) follows from Markov's inequality, and (A103) is due to the independence between the two events inside the probability in (A102). As for the probability in (A104),
P\left\{Z_{uu'}(v) > e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\}
= P\left\{\sum_{Q_{U|V}} N(Q_{UV}) e^{n f(Q_{UV})} > e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\}
\doteq \max_{Q_{U|V}} P\left\{N(Q_{UV}) > \exp\{n[\alpha(R-2\epsilon,Q_U,Q_V) + \epsilon - f(Q_{UV})]\}\right\}
\doteq e^{-nE},
where N(Q_{UV}) is the number of source sequences within B(u), other than u and u', that fall in the conditional type class T(Q_{U|V}|v); it is a binomial random variable with e^{n H_Q(U|V)} - 2 trials and success rate of the exponential order of e^{-nR}, and hence,
E = \min_{Q_{U|V}} \begin{cases} [R - H_Q(U|V)]_+, & f(Q_{UV}) + [H_Q(U|V) - R]_+ \geq \alpha(R-2\epsilon,Q_U,Q_V) + \epsilon \\ \infty, & f(Q_{UV}) + [H_Q(U|V) - R]_+ < \alpha(R-2\epsilon,Q_U,Q_V) + \epsilon \end{cases}
= \min_{\{Q_{U|V}:\ f(Q_{UV}) + [H_Q(U|V) - R]_+ \geq \alpha(R-2\epsilon,Q_U,Q_V) + \epsilon\}} [R - H_Q(U|V)]_+.
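The piecewise structure of the exponent E reflects the standard large-deviations behavior of a binomial count; as a reminder (ours, not part of the original text), if N is binomial with e^{n H_Q(U|V)} trials and success probability of the exponential order of e^{-nR}, then for a threshold exponent s,
P\{N \geq e^{ns}\} \doteq \exp\{-n[R - H_Q(U|V)]_+\} \quad \text{for } s \leq [H_Q(U|V) - R]_+,
while for s > [H_Q(U|V) - R]_+ the probability decays super-exponentially, i.e., the exponent is infinite; here s = \alpha(R-2\epsilon,Q_U,Q_V) + \epsilon - f(Q_{UV}).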
By definition of the function \alpha(R,Q_U,Q_V), the set \{Q_{U|V}:\ f(Q_{UV}) + [H_Q(U|V) - R]_+ \geq \alpha(R-2\epsilon,Q_U,Q_V) + \epsilon\} is a subset of \{Q_{U|V}:\ H_Q(U|V) \leq R - 2\epsilon\}. Thus,
E \geq \min_{\{Q_{U|V}:\ H_Q(U|V) \leq R - 2\epsilon\}} [R - H_Q(U|V)]_+ \geq 2\epsilon,
and hence, P\left\{Z_{uu'}(v) > e^{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]}\right\} \;\stackrel{\cdot}{\leq}\; e^{-2n\epsilon}, which provides
P\{\mathcal{G}_n^c(Q_{UU'V}) \cap \mathcal{E}^c(Q_{UU'})\} \;\stackrel{\cdot}{\leq}\; e^{n\epsilon} \cdot e^{-2n\epsilon} = e^{-n\epsilon},
which proves that P\{\mathcal{G}_n\} \to 1 as n \to \infty. Now, for a given \mathcal{B}_n \in \mathcal{G}_n(Q_{UU'V}), we define the set
\mathcal{K}(\mathcal{B}_n, Q_{UU'V}) = \left\{(u,u',v):\ Z_{uu'}(v) \leq \exp\{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]\}\right\},
and
\mathcal{K}(\mathcal{B}_n, Q_{UU'V}|u,u') = \{v:\ (u,u',v) \in \mathcal{K}(\mathcal{B}_n, Q_{UU'V})\}.
Then, by definition, for any \mathcal{B}_n \in \mathcal{G}_n(Q_{UU'V}),
\sum_{(u,u') \in T(Q_{UU'})} \mathbbm{1}\{B(u) = B(u')\} \cdot \frac{|T(Q_{V|UU'}|u,u') \cap \mathcal{K}(\mathcal{B}_n, Q_{UU'V}|u,u')|}{|T(Q_{V|UU'}|u,u')|} \geq \exp\{n[H_Q(U,U') - R - 3\epsilon/2]\},
where we have used the fact that T(Q_{V|UU'}|u,u') has exponentially the same cardinality for all (u,u') \in T(Q_{UU'}). Wrapping up, we get that for any \mathcal{B}_n \in \mathcal{G}_n,
P_e(\mathcal{B}_n)
= \sum_{u,v} P(u,v)\, \mathbbm{1}\{\hat{H}_u(U) \geq R(\hat{P}_u)\} \sum_{u' \in B(u) \cap T(u),\ u' \neq u} \frac{\exp\{n f(\hat{P}_{u'v})\}}{\exp\{n f(\hat{P}_{uv})\} + \exp\{n f(\hat{P}_{u'v})\} + Z_{uu'}(v)}
\geq \sum_{\{Q_{UU'}:\ H_Q(U,U') \geq R + \epsilon,\ H_Q(U) \geq R(Q_U)\}} \sum_{(u,u') \in T(Q_{UU'})} \mathbbm{1}\{B(u) = B(u')\} \cdot \exp\{n E_Q[\log P(U)]\} \times \sum_{Q_{V|UU'}} \sum_{v \in T(Q_{V|UU'}|u,u') \cap \mathcal{K}(\mathcal{B}_n, Q_{UU'V}|u,u')} \exp\{n E_Q[\log P(V|U)]\} \times \frac{\exp\{n f(Q_{U'V})\}}{\exp\{n f(Q_{UV})\} + \exp\{n f(Q_{U'V})\} + Z_{uu'}(v)}
\geq \sum_{\{Q_{UU'}:\ H_Q(U,U') \geq R + \epsilon,\ H_Q(U) \geq R(Q_U)\}} \sum_{(u,u') \in T(Q_{UU'})} \mathbbm{1}\{B(u) = B(u')\} \cdot \exp\{n E_Q[\log P(U)]\} \times \sum_{Q_{V|UU'}} \sum_{v \in T(Q_{V|UU'}|u,u') \cap \mathcal{K}(\mathcal{B}_n, Q_{UU'V}|u,u')} \exp\{n E_Q[\log P(V|U)]\} \times \frac{\exp\{n f(Q_{U'V})\}}{\exp\{n f(Q_{UV})\} + \exp\{n f(Q_{U'V})\} + \exp\{n[\alpha(R-2\epsilon,Q_U,Q_V)+\epsilon]\}}
\doteq \sum_{\{Q_{UU'}:\ H_Q(U,U') \geq R + \epsilon,\ H_Q(U) \geq R(Q_U)\}} \sum_{(u,u') \in T(Q_{UU'})} \mathbbm{1}\{B(u) = B(u')\} \times \sum_{Q_{V|UU'}} \frac{|T(Q_{V|UU'}|u,u') \cap \mathcal{K}(\mathcal{B}_n, Q_{UU'V}|u,u')|}{|T(Q_{V|UU'}|u,u')|} \cdot |T(Q_{V|UU'}|u,u')| \cdot e^{n E_Q[\log P(U,V)]} \times \exp\left\{-n \cdot \left[\max\{f(Q_{UV}), \alpha(R-2\epsilon,Q_U,Q_V)+\epsilon\} - f(Q_{U'V})\right]_+\right\}
\doteq \sum_{\{Q_{UU'V}:\ H_Q(U,U') \geq R + \epsilon,\ H_Q(U) \geq R(Q_U)\}} \sum_{(u,u') \in T(Q_{UU'})} \mathbbm{1}\{B(u) = B(u')\} \times \frac{|T(Q_{V|UU'}|u,u') \cap \mathcal{K}(\mathcal{B}_n, Q_{UU'V}|u,u')|}{|T(Q_{V|UU'}|u,u')|} \cdot e^{n H_Q(V|U,U')} \cdot e^{n E_Q[\log P(U,V)]} \times \exp\left\{-n \cdot \left[\max\{f(Q_{UV}), \alpha(R-2\epsilon,Q_U,Q_V)+\epsilon\} - f(Q_{U'V})\right]_+\right\}
\geq \sum_{\{Q_{UU'V}:\ H_Q(U,U') \geq R + \epsilon,\ H_Q(U) \geq R(Q_U)\}} \exp\{n[H_Q(U,U') - R - 3\epsilon/2]\} \cdot e^{n H_Q(V|U,U')} \times e^{n E_Q[\log P(U,V)]} \cdot \exp\left\{-n \cdot \left[\max\{f(Q_{UV}), \alpha(R-2\epsilon,Q_U,Q_V)+\epsilon\} - f(Q_{U'V})\right]_+\right\}
\doteq \exp\bigg\{-n \cdot \min_{\{Q_{UU'V}:\ H_Q(U,U') \geq R + \epsilon,\ H_Q(U) \geq R(Q_U)\}} \Big\{-H_Q(U,U') + R + 3\epsilon/2 - H_Q(V|U,U') - E_Q[\log P(U,V)] + \left[\max\{f(Q_{UV}), \alpha(R-2\epsilon,Q_U,Q_V)+\epsilon\} - f(Q_{U'V})\right]_+\Big\}\bigg\}
\triangleq \exp\{-n E_{\mathrm{trc}}(R,\epsilon)\},
where (A117) follows from the definition of the set \mathcal{K}(\mathcal{B}_n, Q_{UU'V}|u,u') in (A113) and (A120) is due to (A114). Consider the following:
E\left[-\frac{1}{n}\log P_e(\mathcal{B}_n)\right]
= \sum_{\mathcal{B}_n} P\{\mathcal{B}_n\} \left[-\frac{1}{n}\log P_e(\mathcal{B}_n)\right]
= \sum_{\mathcal{B}_n \in \mathcal{G}_n} P\{\mathcal{B}_n\} \left[-\frac{1}{n}\log P_e(\mathcal{B}_n)\right] + \sum_{\mathcal{B}_n \in \mathcal{G}_n^c} P\{\mathcal{B}_n\} \left[-\frac{1}{n}\log P_e(\mathcal{B}_n)\right]
\leq \sum_{\mathcal{B}_n \in \mathcal{G}_n} P\{\mathcal{B}_n\} \left[-\frac{1}{n}\log e^{-n E_{\mathrm{trc}}(R,\epsilon)}\right] + \sum_{\mathcal{B}_n \in \mathcal{G}_n^c} P\{\mathcal{B}_n\} \left[-\frac{1}{n}\log e^{-n E_{\mathrm{sp}}(R)}\right]
= P\{\mathcal{G}_n\} \cdot E_{\mathrm{trc}}(R,\epsilon) + P\{\mathcal{G}_n^c\} \cdot E_{\mathrm{sp}}(R),
which, since P\{\mathcal{G}_n^c\} \to 0 and E_{\mathrm{sp}}(R) is finite, implies that
\limsup_{n \to \infty} E\left[-\frac{1}{n}\log P_e(\mathcal{B}_n)\right] \leq E_{\mathrm{trc}}(R,\epsilon).
It follows from the arbitrariness of ϵ that
\limsup_{n \to \infty} E\left[-\frac{1}{n}\log P_e(\mathcal{B}_n)\right] \leq \min_{\{Q_{UU'V}:\ H_Q(U,U') \geq R,\ H_Q(U) \geq R(Q_U)\}} \Big\{-H_Q(U,U') + R - H_Q(V|U,U') - E_Q[\log P(U,V)] + \left[\max\{f(Q_{UV}), \alpha(R,Q_U,Q_V)\} - f(Q_{U'V})\right]_+\Big\}
= \min_{\{Q_{UU'V}:\ H_Q(U) \geq R(Q_U)\}} \Big\{-H_Q(U,U') + R - H_Q(V|U,U') - E_Q[\log P(U,V)] + \left[\max\{f(Q_{UV}), \alpha(R,Q_U,Q_V)\} - f(Q_{U'V})\right]_+\Big\}
= \min_{\{Q_{UU'}:\ Q_{U'} = Q_U,\ H_Q(U) \geq R(Q_U)\}} \Big\{\Lambda(Q_{UU'}, R) - H_Q(U,U') + R(Q_U) - E_Q[\log P(U)]\Big\},
which completes the proof of Theorem 2.

Appendix C. Proof of Lemma A1

Let N(T(Q_{U|V}|v), B(u)) be defined as
N(T(Q_{U|V}|v), B(u)) = \sum_{u' \in T(Q_{U|V}|v)} \mathbbm{1}\{B(u') = B(u)\}.
First, note that
Z_u(v) = \sum_{\tilde{u} \in B(u) \cap T(u),\ \tilde{u} \neq u} \exp\{n f(\hat{P}_{\tilde{u}v})\} = \sum_{Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v)} N(T(Q_{U|V}|v), B(u)) \cdot e^{n f(Q_{UV})},
where \mathcal{S}(\hat{P}_u, \hat{P}_v) = \{Q_{U|V}:\ (\hat{P}_v \times Q_{U|V})_U = \hat{P}_u\}. Thus, taking the randomness of \{B(\tilde{u})\}_{\tilde{u} \in \mathcal{U}^n} into account,
P\left\{Z_u(v) < \exp\{n\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v)\}\right\}
= P\left\{\sum_{Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v)} N(T(Q_{U|V}|v), B(u)) e^{n f(Q_{UV})} < \exp\{n\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v)\}\right\}
\leq P\left\{\max_{Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v)} N(T(Q_{U|V}|v), B(u)) e^{n f(Q_{UV})} < \exp\{n\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v)\}\right\}
= P\left\{\bigcap_{Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v)} \left\{N(T(Q_{U|V}|v), B(u)) e^{n f(Q_{UV})} < \exp\{n\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v)\}\right\}\right\}
= P\left\{\bigcap_{Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v)} \left\{N(T(Q_{U|V}|v), B(u)) < \exp\{n[\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q_{UV})]\}\right\}\right\},
where the first inequality holds since the sum is lower bounded by its maximal term.
Now, N(T(Q_{U|V}|v), B(u)) is a binomial random variable with |T(Q_{U|V}|v)| \doteq e^{n H_Q(U|V)} trials and success probability of the exponential order of e^{-nR}. We prove that, by the very definition of the function \alpha(R+\epsilon, \hat{P}_u, \hat{P}_v), there must exist some conditional distribution Q^*_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v) such that, for Q^*_{UV} = \hat{P}_v \times Q^*_{U|V}, the two inequalities H_{Q^*}(U|V) \geq R + \epsilon and H_{Q^*}(U|V) - R - \epsilon \geq \alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q^*_{UV}) hold. To show that, assume conversely, i.e., that for every conditional distribution Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v), which defines Q_{UV} = \hat{P}_v \times Q_{U|V}, either H_Q(U|V) < R + \epsilon or H_Q(U|V) - R - \epsilon < \alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q_{UV}), which means that for every distribution Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v),
H_Q(U|V) - \epsilon < \max\left\{R,\ R + \alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q_{UV})\right\}
= R + \left[\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q_{UV})\right]_+.
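The rewriting in the next step relies on the elementary identity (recalled here for convenience; this remark is not part of the original text)
[x]_+ = \max_{t \in [0,1]} t \cdot x, \qquad x \in \mathbb{R},
so a strict inequality against R + [\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q_{UV})]_+ implies the same strict inequality against R + t \cdot [\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q_{UV})] for some t \in [0,1].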
Writing it slightly differently, for every Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v) there exists some real number t \in [0,1] such that
H_Q(U|V) - \epsilon < R + t \cdot \left[\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q_{UV})\right],
or equivalently,
\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) > \max_{Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v)} \min_{t \in [0,1]} \left\{f(Q_{UV}) + \frac{H_Q(U|V) - R - \epsilon}{t}\right\}
= \max_{Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v)} \begin{cases} f(Q_{UV}) + H_Q(U|V) - R - \epsilon, & H_Q(U|V) \geq R + \epsilon \\ -\infty, & H_Q(U|V) < R + \epsilon \end{cases}
= \max_{\{Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v):\ H_Q(U|V) \geq R + \epsilon\}} \left[f(Q_{UV}) + H_Q(U|V)\right] - R - \epsilon
= \alpha(R+\epsilon, \hat{P}_u, \hat{P}_v),
which is a contradiction. Let the conditional distribution Q^*_{U|V} be as defined above. Then,
P\left\{\bigcap_{Q_{U|V} \in \mathcal{S}(\hat{P}_u, \hat{P}_v)} \left\{N(T(Q_{U|V}|v), B(u)) < \exp\{n[\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q_{UV})]\}\right\}\right\}
\leq P\left\{N(T(Q^*_{U|V}|v), B(u)) < \exp\{n[\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q^*_{UV})]\}\right\}.
Now, we know that both of the inequalities H_{Q^*}(U|V) \geq R + \epsilon and H_{Q^*}(U|V) - R - \epsilon \geq \alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q^*_{UV}) hold. By the Chernoff bound, the probability in (A145) is upper bounded by
\exp\left\{-e^{n H_{Q^*}(U|V)} \cdot D(e^{-an} \| e^{-bn})\right\},
where a = H_{Q^*}(U|V) + f(Q^*_{UV}) - \alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) and b = R, and where D(\alpha \| \beta), for \alpha, \beta \in [0,1], is the binary divergence function, that is,
D(\alpha \| \beta) = \alpha \log\frac{\alpha}{\beta} + (1-\alpha)\log\frac{1-\alpha}{1-\beta}.
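Before proceeding, it is worth verifying the fact used next, namely that a - b \geq \epsilon (this verification is ours): by the second of the two inequalities satisfied by Q^*_{U|V},
a - b = H_{Q^*}(U|V) + f(Q^*_{UV}) - \alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - R \geq \epsilon.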
Since a - b \geq \epsilon, the binary divergence is lower bounded as follows [24] (Section 6.3):
D(e^{-an} \| e^{-bn}) \geq e^{-bn}\left[1 - e^{-(a-b)n}[1 + n(a-b)]\right]
\geq e^{-nR}\left[1 - e^{-n\epsilon}(1 + n\epsilon)\right],
where in the second inequality, we invoked the decreasing monotonicity of the function f(t) = (1+t)e^{-t} for t \geq 0. Finally, we get that
P\left\{N(T(Q^*_{U|V}|v), B(u)) < \exp\{n[\alpha(R+\epsilon, \hat{P}_u, \hat{P}_v) - f(Q^*_{UV})]\}\right\}
\leq \exp\left\{-e^{n H_{Q^*}(U|V)} \cdot e^{-nR}\left[1 - e^{-n\epsilon}(1 + n\epsilon)\right]\right\}
\leq \exp\left\{-e^{n\epsilon}\left[1 - e^{-n\epsilon}(1 + n\epsilon)\right]\right\}
= \exp\left\{-e^{n\epsilon} + n\epsilon + 1\right\}.
This completes the proof of Lemma A1.
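As a side note (not part of the original proof), the binary-divergence lower bound D(e^{-an} \| e^{-bn}) \geq e^{-bn}[1 - e^{-(a-b)n}(1 + n(a-b))] invoked above is easy to sanity-check numerically. The following minimal Python sketch (function names and grid values are ours) evaluates both sides on a small grid with a - b > 0:

```python
import math

def binary_divergence(alpha, beta):
    # D(alpha || beta) in nats, for alpha, beta in (0, 1)
    return (alpha * math.log(alpha / beta)
            + (1.0 - alpha) * math.log((1.0 - alpha) / (1.0 - beta)))

def lower_bound(a, b, n):
    # e^{-bn} * [1 - e^{-(a-b)n} * (1 + n*(a-b))], meaningful for a >= b
    return math.exp(-b * n) * (1.0 - math.exp(-(a - b) * n) * (1.0 + n * (a - b)))

for n in (1, 2, 5, 10):
    for b in (0.2, 0.5, 1.0):
        for gap in (0.1, 0.5, 1.0):       # gap plays the role of a - b >= epsilon
            a = b + gap
            D = binary_divergence(math.exp(-a * n), math.exp(-b * n))
            LB = lower_bound(a, b, n)
            assert D >= LB - 1e-12, (n, a, b, D, LB)
print("binary divergence lower bound holds on all tested points")
```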

Appendix D. Proof of Theorem 3

By definition of the error exponents, it follows that E_{\mathrm{trc,GLD}}(R(\cdot)) \geq E_{\mathrm{r,GLD}}(R(\cdot)). We now prove the other direction. The expression in (28) can also be written as
E_{\mathrm{trc,GLD}}(R(\cdot))
= \min_{\{Q_{UU'}:\ Q_{U'} = Q_U,\ H_Q(U) \geq R(Q_U)\}} \left\{\Lambda(Q_{UU'}, R(Q_U)) - E_Q[\log P(U)] - H_Q(U,U') + R(Q_U)\right\}
= \min_{\{Q_{UU'}:\ Q_{U'} = Q_U,\ H_Q(U) \geq R(Q_U)\}} \min_{Q_{V|UU'}} \left\{\Psi(R(Q_U), Q_{UU'V}) - H_Q(V|U,U') - E_Q[\log P(V|U)] - E_Q[\log P(U)] - H_Q(U,U') + R(Q_U)\right\}
= \min_{\{Q_{UU'V}:\ Q_{U'} = Q_U,\ H_Q(U) \geq R(Q_U)\}} \left\{\Psi(R(Q_U), Q_{UU'V}) - H_Q(U,U',V) - E_Q[\log P(U,V)] + R(Q_U)\right\}
= \min_{\{Q_{UU'V}:\ Q_{U'} = Q_U,\ H_Q(U) \geq R(Q_U)\}} \left\{\Psi(R(Q_U), Q_{UU'V}) + D(Q_{UV} \| P_{UV}) - H_Q(U'|U,V) + R(Q_U)\right\}
= \min_{\mathcal{Q}} \left\{D(Q_{UV} \| P_{UV}) + R(Q_U) - H_Q(U'|U,V) + \left[\max\{f(Q_{UV}), \gamma(R(Q_U), Q_U, Q_V)\} - f(Q_{U'V})\right]_+\right\},
with the set \mathcal{Q} given by \mathcal{Q} = \{Q_{UU'V}:\ Q_{U'} = Q_U,\ H_Q(U) \geq R(Q_U)\}, and where
\gamma(R(\cdot), Q_U, Q_V) = \max_{\{Q_{\tilde{U}|V}:\ Q_{\tilde{U}} = Q_U,\ H_Q(\tilde{U}|V) \geq R(Q_{\tilde{U}})\}} \left\{f(Q_{\tilde{U}V}) + H_Q(\tilde{U}|V)\right\} - R(Q_U).
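For the reader's convenience (this remark is ours), the passage from the form involving -H_Q(U,U',V) - E_Q[\log P(U,V)] to the form involving the divergence rests on the decomposition
-H_Q(U,U',V) - E_Q[\log P(U,V)] = -H_Q(U'|U,V) - H_Q(U,V) - E_Q[\log P(U,V)] = D(Q_{UV} \| P_{UV}) - H_Q(U'|U,V),
since D(Q_{UV} \| P_{UV}) = -H_Q(U,V) - E_Q[\log P(U,V)].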
We upper-bound the minimum in (A158) by decreasing the feasible set; we add to \mathcal{Q} the constraint that U - V - U' form a Markov chain in that order and denote the new feasible set by \tilde{\mathcal{Q}}. We get that
E_{\mathrm{trc,GLD}}(R(\cdot)) \leq \min_{\tilde{\mathcal{Q}}} \left\{D(Q_{UV} \| P_{UV}) + R(Q_U) - H_Q(U'|U,V) + \left[\max\{f(Q_{UV}), \gamma(R(Q_U), Q_U, Q_V)\} - f(Q_{U'V})\right]_+\right\}
= \min_{\tilde{\mathcal{Q}}} \left\{D(Q_{UV} \| P_{UV}) + R(Q_U) - H_Q(U'|V) + \left[\max\{f(Q_{UV}), \gamma(R(Q_U), Q_U, Q_V)\} - f(Q_{U'V})\right]_+\right\}
= \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{D(Q_{UV} \| P_{UV}) + \min_{Q_{U'|V} \in \hat{\mathcal{Q}}} \left\{R(Q_U) - H_Q(U'|V) + \left[\max\{f(Q_{UV}), \gamma(R(Q_U), Q_U, Q_V)\} - f(Q_{U'V})\right]_+\right\}\right\},
where \hat{\mathcal{Q}} = \{Q_{U'|V}:\ Q_{U'} = Q_U\}. In order to upper-bound the inner minimum in (A162), we split into two cases, according to the maximum between f(Q_{UV}) and \gamma(R(Q_U), Q_U, Q_V). This is legitimate when the inner minimum and this maximum can be interchanged, which is possible at least in the special cases of the matched/mismatched decoding metrics f(Q) = \beta \cdot E_Q[\log \tilde{P}(U,V)] for some \beta > 0, since if f(Q) is linear, then the entire expression inside the inner minimum in (A162) is convex in Q_{U'|V}. On the one hand, if the maximum is given by f(Q_{UV}), then the inner minimum in (A162) is just
\min_{Q_{U'|V} \in \hat{\mathcal{Q}}} \left\{R(Q_U) - H_Q(U'|V) + \left[f(Q_{UV}) - f(Q_{U'V})\right]_+\right\}.
On the other hand, if the maximum is given by \gamma(R(Q_U), Q_U, Q_V), let Q^* = Q^*_{\tilde{U}|V} be the maximizer in (A159), and then
\min_{Q_{U'|V} \in \hat{\mathcal{Q}}} \left\{R(Q_U) - H_Q(U'|V) + \left[\gamma(R(Q_U), Q_U, Q_V) - f(Q_{U'V})\right]_+\right\}
= \min_{Q_{U'|V} \in \hat{\mathcal{Q}}} \left\{R(Q_U) - H_Q(U'|V) + \left[f(Q^*_{\tilde{U}V}) + H_{Q^*}(\tilde{U}|V) - R(Q_U) - f(Q_{U'V})\right]_+\right\}
\leq R(Q_U) - H_{Q^*}(U'|V) + \left[f(Q^*_{\tilde{U}V}) + H_{Q^*}(\tilde{U}|V) - R(Q_U) - f(Q^*_{U'V})\right]_+
= R(Q_U) - H_{Q^*}(U'|V) + \left[H_{Q^*}(\tilde{U}|V) - R(Q_U)\right]_+
= R(Q_U) - H_{Q^*}(U'|V) + H_{Q^*}(\tilde{U}|V) - R(Q_U)
= 0,
where (A165) is because we choose Q^*_{U'|V} = Q^*_{\tilde{U}|V} instead of minimizing over all Q_{U'|V} \in \hat{\mathcal{Q}}, and (A167) is true since H_{Q^*}(\tilde{U}|V) \geq R(Q_U) by the definition of \gamma(R(Q_U), Q_U, Q_V). Combining (A163) and (A168), we find that (A162) is upper bounded by
E_{\mathrm{trc,GLD}}(R(\cdot)) \leq \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{D(Q_{UV} \| P_{UV}) + \max\left\{\min_{Q_{U'|V} \in \hat{\mathcal{Q}}} \left\{R(Q_U) - H_Q(U'|V) + \left[f(Q_{UV}) - f(Q_{U'V})\right]_+\right\},\ 0\right\}\right\}
= \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{D(Q_{UV} \| P_{UV}) + \left[\min_{Q_{U'|V} \in \hat{\mathcal{Q}}} \left\{R(Q_U) - H_Q(U'|V) + \left[f(Q_{UV}) - f(Q_{U'V})\right]_+\right\}\right]_+\right\}
= \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{D(Q_{UV} \| P_{UV}) + \min_{Q_{U'|V} \in \hat{\mathcal{Q}}} \left[R(Q_U) - H_Q(U'|V) + \left[f(Q_{UV}) - f(Q_{U'V})\right]_+\right]_+\right\}
= E_{\mathrm{r,GLD}}(R(\cdot)),
which proves the first point of the theorem. Moving forward, consider the following:
E_{\mathrm{trc,MAP}}(R(\cdot)) \stackrel{(a)}{=} E_{\mathrm{r,MAP}}(R(\cdot)) \stackrel{(b)}{=} E_{\mathrm{r,MCE}}(R(\cdot)) \stackrel{(c)}{\leq} E_{\mathrm{trc,MCE}}(R(\cdot)) \stackrel{(d)}{\leq} E_{\mathrm{trc,MAP}}(R(\cdot)),
where (a) follows from the first point in this theorem by using the matched decoding metric f(Q) = \beta \cdot E_Q[\log P(U,V)] and letting \beta \to \infty. Equality (b) is due to the second point of Theorem 1, which ensures that the random binning error exponents of the MAP and the MCE decoders are equal. Passage (c) holds since, for any decoder, the error exponent of the typical random code is at least as high as the random coding error exponent, and (d) is due to the optimality of the MAP decoder. Finally, the leftmost and the rightmost sides of (A173) coincide, which implies that passages (c) and (d) must hold with equality. The equality in passage (c) establishes the second point of the theorem.
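As a brief reminder of why the \beta \to \infty limit of the matched metric recovers MAP decoding (our addition; the candidate-set notation is schematic), note that with f(Q) = \beta \cdot E_Q[\log P(U,V)] the GLD selects a candidate u' in the received bin with probability
\frac{\exp\{n\beta E_{\hat{P}_{u'v}}[\log P(U,V)]\}}{\sum_{\tilde{u} \in B(u)} \exp\{n\beta E_{\hat{P}_{\tilde{u}v}}[\log P(U,V)]\}} = \frac{P(u',v)^{\beta}}{\sum_{\tilde{u} \in B(u)} P(\tilde{u},v)^{\beta}} \xrightarrow{\beta \to \infty} \mathbbm{1}\left\{u' = \operatorname*{arg\,max}_{\tilde{u} \in B(u)} P(\tilde{u}, v)\right\},
i.e., the deterministic MAP rule (up to tie-breaking).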

Appendix E. Proof of Theorem 4

The left equality in (33) is implied by the proved equality in passage (d) of (A173). In order to prove the right equality in (33), first note that E_{\mathrm{trc,SCE}}(R(\cdot)) \leq E_{\mathrm{trc,MAP}}(R(\cdot)) by the optimality of the MAP decoder. For the other direction, consider the universal decoding metric f(Q_{UV}) = -H_Q(U|V). Then, trivially,
\gamma(R(\cdot), Q_U, Q_V) = \max_{\{Q_{\tilde{U}|V}:\ Q_{\tilde{U}} = Q_U,\ H_Q(\tilde{U}|V) \geq R(Q_{\tilde{U}})\}} \left\{f(Q_{\tilde{U}V}) + H_Q(\tilde{U}|V)\right\} - R(Q_U) = -R(Q_U),
and
\Psi(R(\cdot), Q_{UU'V}) = \left[\max\{f(Q_{UV}), \gamma(R(\cdot), Q_U, Q_V)\} - f(Q_{U'V})\right]_+
= \left[\max\{-H_Q(U|V), -R(Q_U)\} + H_Q(U'|V)\right]_+
= \left[H_Q(U'|V) - \min\{H_Q(U|V), R(Q_U)\}\right]_+
\geq \left[H_Q(U'|U,V) - \min\{H_Q(U|V), R(Q_U)\}\right]_+,
where the inequality holds since H_Q(U'|U,V) \leq H_Q(U'|V) and [\cdot]_+ is monotonically non-decreasing.
We have the following
E_{\mathrm{trc,SCE}}(R(\cdot)) = \min_{\mathcal{Q}} \left\{D(Q_{UV} \| P_{UV}) + R(Q_U) - H_Q(U'|U,V) + \left[\max\{f(Q_{UV}), \gamma(R(Q_U), Q_U, Q_V)\} - f(Q_{U'V})\right]_+\right\}
\geq \min_{\mathcal{Q}} \left\{D(Q_{UV} \| P_{UV}) + R(Q_U) - H_Q(U'|U,V) + \left[H_Q(U'|U,V) - \min\{H_Q(U|V), R(Q_U)\}\right]_+\right\}
= \min_{\mathcal{Q}} \left\{D(Q_{UV} \| P_{UV}) - \min\{H_Q(U|V), H_Q(U'|U,V), R(Q_U)\} + R(Q_U)\right\}
\geq \min_{\mathcal{Q}} \left\{D(Q_{UV} \| P_{UV}) - \min\{H_Q(U|V), H_Q(U), R(Q_U)\} + R(Q_U)\right\}
= \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{D(Q_{UV} \| P_{UV}) - \min\{H_Q(U|V), H_Q(U), R(Q_U)\} + R(Q_U)\right\}
= \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{D(Q_{UV} \| P_{UV}) - \min\{H_Q(U|V), R(Q_U)\} + R(Q_U)\right\}
= \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{D(Q_{UV} \| P_{UV}) + \max\{R(Q_U) - H_Q(U|V),\ 0\}\right\}
= \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}} \left\{D(Q_{UV} \| P_{UV}) + [R(Q_U) - H_Q(U|V)]_+\right\}
= E_{\mathrm{trc,MAP}}(R(\cdot)),
where the second inequality holds since H_Q(U'|U,V) \leq H_Q(U') = H_Q(U) on \mathcal{Q}, the subsequent equality holds since the objective no longer depends on Q_{U'|UV}, and the removal of H_Q(U) from the inner minimum is justified by H_Q(U) \geq H_Q(U|V),
which completes the proof of the theorem.
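To illustrate the final expression, here is a small numerical sketch (ours, not from the paper) that brute-forces E_{\mathrm{trc,MAP}} = \min_{\{Q_{UV}:\ H_Q(U) \geq R(Q_U)\}}\{D(Q_{UV}\|P_{UV}) + [R(Q_U) - H_Q(U|V)]_+\} over a grid of binary joint distributions, under the simplifying assumptions of a constant rate function R(Q_U) \equiv R (in nats) and a doubly symmetric binary source; the grid resolution and parameter values are arbitrary choices:

```python
# Minimal numerical sketch: grid evaluation of
#   E(R) = min_{Q_UV: H_Q(U) >= R} { D(Q_UV || P_UV) + [R - H_Q(U|V)]_+ }
# for binary (U, V).  All logs are natural, so rates are in nats.
import numpy as np
from itertools import product

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def exponent(P, R, grid=50):
    best = np.inf
    ts = np.linspace(1e-3, 1.0 - 1e-3, grid)
    for a, b, c in product(ts, repeat=3):      # enumerate Q_UV on a simplex grid
        d = 1.0 - a - b - c
        if d <= 0:
            continue
        Q = np.array([[a, b], [c, d]])
        QU, QV = Q.sum(axis=1), Q.sum(axis=0)
        if entropy(QU) < R:                    # constraint H_Q(U) >= R
            continue
        D = float(np.sum(Q * np.log(Q / P)))   # D(Q_UV || P_UV)
        H_U_given_V = entropy(Q.flatten()) - entropy(QV)
        best = min(best, D + max(R - H_U_given_V, 0.0))
    return best

p = 0.1                                        # DSBS crossover probability
P = np.array([[(1 - p) / 2, p / 2], [p / 2, (1 - p) / 2]])
for R in (0.2, 0.4, 0.6):                      # rates below ln 2 ~ 0.693 nats
    print(f"R = {R:.2f} nats  ->  exponent ~ {exponent(P, R):.4f}")
```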

Appendix F. Proof of Theorem 5

We start by writing the expression in (34) in a slightly different way, using the identity
\min_{\{Q:\ g(Q) \geq 0\}} f(Q) = \min_Q \sup_{s \geq 0} \{f(Q) - s \cdot g(Q)\}:
E_{\mathrm{er}}(R(\cdot), \Delta) = \min_{\{Q_{UV}:\ R(Q_U) \geq H_Q(U|V) + \Delta\}} D(Q_{UV} \| P_{UV})
= \min_{Q_{UV}} \sup_{\sigma \geq 0} \left\{D(Q_{UV} \| P_{UV}) + \sigma \cdot (H_Q(U|V) + \Delta - R(Q_U))\right\}.
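Indeed, for any fixed Q_{UV} (this verification is ours),
\sup_{\sigma \geq 0} \left\{D(Q_{UV} \| P_{UV}) + \sigma \cdot (H_Q(U|V) + \Delta - R(Q_U))\right\} = \begin{cases} D(Q_{UV} \| P_{UV}), & R(Q_U) \geq H_Q(U|V) + \Delta \\ +\infty, & \text{otherwise}, \end{cases}
so distributions violating the constraint are effectively excluded from the outer minimization.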
Now, the requirement E_{\mathrm{er}}(R(\cdot), \Delta) \geq E_{\mathrm{r}} is equivalent to
\min_{Q_{UV}} \sup_{\sigma \geq 0} \left\{D(Q_{UV} \| P_{UV}) + \sigma \cdot (H_Q(U|V) + \Delta - R(Q_U))\right\} \geq E_{\mathrm{r}},
or,
\forall Q_{UV},\ \exists \sigma \geq 0:\quad D(Q_{UV} \| P_{UV}) + \sigma \cdot (H_Q(U|V) + \Delta - R(Q_U)) \geq E_{\mathrm{r}},
or,
\forall Q_U, Q_{V|U},\ \exists \sigma \geq 0:\quad R(Q_U) \leq H_Q(U|V) + \Delta + \frac{D(Q_{UV} \| P_{UV}) - E_{\mathrm{r}}}{\sigma},
or that for any Q_U \in \mathcal{P}(\mathcal{U}),
R(Q_U) \leq \min_{Q_{V|U}} \sup_{\sigma \geq 0} \left\{H_Q(U|V) + \Delta + \frac{D(Q_{UV} \| P_{UV}) - E_{\mathrm{r}}}{\sigma}\right\}
= \min_{Q_{V|U}} \begin{cases} H_Q(U|V) + \Delta, & D(Q_{UV} \| P_{UV}) \leq E_{\mathrm{r}} \\ \infty, & D(Q_{UV} \| P_{UV}) > E_{\mathrm{r}} \end{cases}
= \min_{\{Q_{V|U}:\ D(Q_{UV} \| P_{UV}) \leq E_{\mathrm{r}}\}} \left\{H_Q(U|V) + \Delta\right\},
with the understanding that a minimum over an empty set equals infinity.

Appendix G. Proof of Theorem 6

It follows by the identities \min_{\{Q:\ g(Q) \geq 0\}} f(Q) = \min_Q \sup_{s \geq 0} \{f(Q) - s \cdot g(Q)\} and [A]_+ = \max_{\mu \in [0,1]} \mu A that (24) can also be written as
E_{\mathrm{e}}(R(\cdot)) = \min_{Q_U} \min_{Q_{V|U}} \max_{\mu \in [0,1]} \sup_{\sigma \geq 0} \left\{D(Q_{UV} \| P_{UV}) + \mu \cdot (R(Q_U) - H_Q(U|V)) + \sigma \cdot (R(Q_U) - H_Q(U))\right\},
such that E_{\mathrm{e}}(R(\cdot)) \geq E_{\mathrm{e}} is equivalent to
\forall Q_U, Q_{V|U},\ \exists \mu \in [0,1],\ \sigma \geq 0:\quad D(Q_{UV} \| P_{UV}) + \mu \cdot (R(Q_U) - H_Q(U|V)) + \sigma \cdot (R(Q_U) - H_Q(U)) \geq E_{\mathrm{e}},
or,
\forall Q_U, Q_{V|U},\ \exists \mu \in [0,1],\ \sigma \geq 0:\quad R(Q_U) \geq \frac{\mu \cdot H_Q(U|V) + \sigma \cdot H_Q(U) + E_{\mathrm{e}} - D(Q_{UV} \| P_{UV})}{\mu + \sigma},
or that for any Q_U \in \mathcal{P}(\mathcal{U}),
R(Q_U) \geq \max_{Q_{V|U}} \min_{\mu \in [0,1]} \inf_{\sigma \geq 0} \frac{\mu \cdot H_Q(U|V) + \sigma \cdot H_Q(U) + E_{\mathrm{e}} - D(Q_{UV} \| P_{UV})}{\mu + \sigma}
= \max_{Q_{V|U}} \min_{\mu \in [0,1]} \min\left\{H_Q(U),\ H_Q(U|V) + \frac{E_{\mathrm{e}} - D(Q_{UV} \| P_{UV})}{\mu}\right\}
= \max_{Q_{V|U}} \min\left\{H_Q(U),\ \min_{\mu \in [0,1]} \left[H_Q(U|V) + \frac{E_{\mathrm{e}} - D(Q_{UV} \| P_{UV})}{\mu}\right]\right\}
= \max_{Q_{V|U}} \begin{cases} \min\{H_Q(U),\ H_Q(U|V) + E_{\mathrm{e}} - D(Q_{UV} \| P_{UV})\}, & E_{\mathrm{e}} \geq D(Q_{UV} \| P_{UV}) \\ -\infty, & E_{\mathrm{e}} < D(Q_{UV} \| P_{UV}) \end{cases}
= \max_{\{Q_{V|U}:\ D(Q_{UV} \| P_{UV}) \leq E_{\mathrm{e}}\}} \min\left\{H_Q(U),\ H_Q(U|V) + E_{\mathrm{e}} - D(Q_{UV} \| P_{UV})\right\}
= \min\left\{H_Q(U),\ \max_{\{Q_{V|U}:\ D(Q_{UV} \| P_{UV}) \leq E_{\mathrm{e}}\}} \left\{H_Q(U|V) + E_{\mathrm{e}} - D(Q_{UV} \| P_{UV})\right\}\right\},
and the proof is complete.
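For completeness (our remark, not part of the original text), the two inner optimizations above are evaluated as follows: for fixed \mu, the map
\sigma \mapsto \frac{\mu H_Q(U|V) + \sigma H_Q(U) + E_{\mathrm{e}} - D(Q_{UV} \| P_{UV})}{\mu + \sigma}
is monotone in \sigma (it moves from its value at \sigma = 0 toward H_Q(U) as \sigma \to \infty), so its infimum is the smaller of the two endpoint values; similarly, \min_{\mu \in [0,1]} [H_Q(U|V) + (E_{\mathrm{e}} - D(Q_{UV} \| P_{UV}))/\mu] equals H_Q(U|V) + E_{\mathrm{e}} - D(Q_{UV} \| P_{UV}) when D(Q_{UV} \| P_{UV}) \leq E_{\mathrm{e}} (attained at \mu = 1) and tends to -\infty otherwise (as \mu \to 0).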

References

1. Merhav, N. Error exponents of typical random codes. IEEE Trans. Inf. Theory 2018, 64, 6223–6235.
2. Barg, A.; Forney, G.D., Jr. Random codes: Minimum distances and error exponents. IEEE Trans. Inf. Theory 2003, 48, 2568–2573.
3. Nazari, A.; Anastasopoulos, A.; Pradhan, S.S. Error exponent for multiple-access channels: Lower bounds. IEEE Trans. Inf. Theory 2014, 60, 5095–5115.
4. Merhav, N. Error exponents of typical random codes for the colored Gaussian channel. IEEE Trans. Inf. Theory 2019, 65, 8164–8179.
5. Merhav, N. Error exponents of typical random trellis codes. IEEE Trans. Inf. Theory 2020, 66, 2067–2077.
6. Merhav, N. A Lagrange–dual lower bound to the error exponent of the typical random code. IEEE Trans. Inf. Theory 2020, 66, 3456–3464.
7. Tamir (Averbuch), R.; Merhav, N.; Weinberger, N.; Guillén i Fàbregas, A. Large deviations behavior of the logarithmic error probability of random codes. IEEE Trans. Inf. Theory 2020, 66, 6635–6659.
8. Slepian, D.; Wolf, J. Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19, 471–480.
9. Gallager, R.G. Source coding with side information and universal coding. LIDS-P-937. Available online: http://web.mit.edu/gallager/www/papers/paper5.pdf (accessed on 20 February 2021).
10. Ahlswede, R.; Dueck, G. Good codes can be produced by a few permutations. IEEE Trans. Inf. Theory 1982, 28, 430–443.
11. Csiszár, I.; Körner, J. Graph decomposition: A new key to coding theorems. IEEE Trans. Inf. Theory 1981, 27, 5–12.
12. Chen, J.; He, D.-K.; Jagmohan, A.; Lastras-Montaño, L.A. On the Reliability Function of Variable-Rate Slepian–Wolf Coding. Entropy 2017, 19, 389.
13. Weinberger, N.; Merhav, N. Optimum Tradeoffs Between the Error Exponent and the Excess-Rate Exponent of Variable-Rate Slepian–Wolf Coding. IEEE Trans. Inf. Theory 2015, 61, 2165–2190.
14. Csiszár, I. Linear codes for sources and source networks: Error exponents, universal coding. IEEE Trans. Inf. Theory 1982, 28, 585–592.
15. Csiszár, I.; Körner, J. Towards a general theory of source networks. IEEE Trans. Inf. Theory 1980, 26, 155–165.
16. Kelly, B.G.; Wagner, A.B. Improved Source Coding Exponents via Witsenhausen's Rate. IEEE Trans. Inf. Theory 2011, 57, 5615–5633.
17. Kelly, B.G.; Wagner, A.B. Reliability in Source Coding With Side Information. IEEE Trans. Inf. Theory 2012, 58, 5086–5111.
18. Oohama, Y.; Han, T. Universal coding for the Slepian–Wolf data compression system and the strong converse theorem. IEEE Trans. Inf. Theory 1994, 40, 1908–1919.
19. Liu, J.; Cuff, P.; Verdú, S. On α-decodability and α-likelihood decoder. In Proceedings of the 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 3–6 October 2017.
20. Merhav, N. The generalized stochastic likelihood decoder: Random coding and expurgated bounds. IEEE Trans. Inf. Theory 2017, 63, 5039–5051; see also a correction at IEEE Trans. Inf. Theory 2017, 63, 6827–6829.
21. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Cambridge University Press: Cambridge, UK, 2011.
22. Tamir (Averbuch), R.; Merhav, N. The MMI decoder is asymptotically optimal for the typical random code and for the expurgated code. arXiv 2020, arXiv:2007.12225.
23. Somekh-Baruch, A.; Scarlett, J.; Guillén i Fàbregas, A. Generalized Random Gilbert–Varshamov Codes. IEEE Trans. Inf. Theory 2019, 65, 3452–3469.
24. Merhav, N. Statistical physics and information theory. Found. Trends Commun. Inf. Theory 2009, 6, 1–212.
Figure 1. Graphs of the functions E_{\mathrm{exfr}}(R) and E_{\mathrm{e}}(\cdot, R).