A Coding Theorem for f-Separable Distortion Measures

Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA
* Author to whom correspondence should be addressed.
Entropy 2018, 20(2), 111; https://doi.org/10.3390/e20020111
Submission received: 10 December 2017 / Revised: 29 January 2018 / Accepted: 2 February 2018 / Published: 8 February 2018
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)

Abstract: In this work we relax the usual separability assumption made in the rate-distortion literature and propose $f$-separable distortion measures, which are well suited to model non-linear penalties. The main insight behind $f$-separable distortion measures is to define an $n$-letter distortion measure to be an $f$-mean of single-letter distortions. We prove a rate-distortion coding theorem for stationary ergodic sources with $f$-separable distortion measures, and provide some illustrative examples of the resulting rate-distortion functions. Finally, we discuss connections between $f$-separable distortion measures and the sub-additive distortion measure previously proposed in the literature.

1. Introduction

Rate-distortion theory, a branch of information theory that studies models for lossy data compression, was introduced by Claude Shannon in [1]. The approach of [1] is to model the information source by a distribution $P_X$ on an alphabet $\mathcal{X}$, together with a reconstruction alphabet $\hat{\mathcal{X}}$ and a distortion measure $d \colon \mathcal{X} \times \hat{\mathcal{X}} \to [0, \infty)$. When the information source produces a sequence of $n$ realizations, the source $P_{X^n}$ is defined on $\mathcal{X}^n$ with reconstruction alphabet $\hat{\mathcal{X}}^n$, where $\mathcal{X}^n$ and $\hat{\mathcal{X}}^n$ are the $n$-fold Cartesian products of $\mathcal{X}$ and $\hat{\mathcal{X}}$. In that case, [1] extended the notion of a single-letter distortion measure to an $n$-letter distortion measure $d_n \colon \mathcal{X}^n \times \hat{\mathcal{X}}^n \to [0, \infty)$ by taking the arithmetic average of the single-letter distortions,
$$d_n(x^n, \hat{x}^n) = \frac{1}{n} \sum_{i=1}^{n} d(x_i, \hat{x}_i). \tag{1}$$
Distortion measures that satisfy (1) are referred to as separable (also additive, per-letter, or averaging); the separability assumption has been ubiquitous throughout the rate-distortion literature ever since its inception in [1].
On the one hand, the separability assumption is quite natural and allows for a tractable characterization of the fundamental trade-off between the rate of compression and the average distortion. For example, when $X^n$ is a stationary and memoryless source, the rate-distortion function, which captures this trade-off, admits a simple characterization:
$$R(d) = \inf_{P_{\hat{X} \mid X} \,:\, \mathbb{E}[d(X, \hat{X})] \le d} I(X; \hat{X}).$$
On the other hand, the separability assumption is very restrictive: it only models distortion penalties that are linear functions of the per-letter distortions in the source reproduction. Real-world distortion measures, however, may be highly non-linear, so it is desirable to have a theory that also accommodates non-linear distortion measures. To this end, we propose the following definition:
Definition 1 ($f$-separable distortion measure).
Let $f(z)$ be a continuous, increasing function on $[0, \infty)$. An $n$-letter distortion measure $d_n(\cdot, \cdot)$ is $f$-separable with respect to a single-letter distortion $d(\cdot, \cdot)$ if it can be written as
$$d_n(x^n, \hat{x}^n) = f^{-1}\!\left( \frac{1}{n} \sum_{i=1}^{n} f\big(d(x_i, \hat{x}_i)\big) \right).$$
For $f(z) = z$ this is the classical separable distortion setup. By selecting $f$ appropriately, it is possible to model a large class of non-linear distortion measures; see Figure 1 for illustrative examples.
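To make Definition 1 concrete, the following minimal Python sketch (our own illustration, not part of the original development) evaluates an $f$-separable distortion for a few choices of $f$; the Hamming single-letter distortion and all identifiers are assumptions made for the example.

```python
import math

def f_separable_distortion(x, x_hat, d, f, f_inv):
    """Evaluate d_n(x, x_hat) = f^{-1}( (1/n) * sum_i f(d(x_i, x_hat_i)) )."""
    n = len(x)
    return f_inv(sum(f(d(a, b)) for a, b in zip(x, x_hat)) / n)

def hamming(a, b):
    """Single-letter Hamming distortion (illustrative choice of d)."""
    return 0.0 if a == b else 1.0

x     = [0, 1, 1, 0, 1, 0, 0, 1]
x_hat = [0, 1, 0, 0, 1, 1, 0, 1]   # two symbol errors out of n = 8

# f(z) = z recovers the classical separable (arithmetic-average) distortion: 0.25
print(f_separable_distortion(x, x_hat, hamming, lambda z: z, lambda z: z))
# Convex f(z) = z^2 penalizes errors more heavily (root mean square): 0.5
print(f_separable_distortion(x, x_hat, hamming, lambda z: z * z, math.sqrt))
# Concave f(z) = sqrt(z) de-emphasizes additional errors (sub-additive): 0.0625
print(f_separable_distortion(x, x_hat, hamming, math.sqrt, lambda z: z * z))
```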
In this work, we characterize the rate-distortion function for stationary and ergodic information sources with $f$-separable distortion measures. In the special case of stationary memoryless sources we obtain the following intuitive result:
$$R_f(d) = \inf_{P_{\hat{X} \mid X} \,:\, \mathbb{E}[f(d(X, \hat{X}))] \le f(d)} I(X; \hat{X}).$$
A pleasing implication of this result is that much of the rate-distortion theory developed since [1] (e.g., the Blahut-Arimoto algorithm) can be leveraged to work under the far more general $f$-separable assumption.
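Since the characterization reduces the $f$-separable problem to a separable one with the transformed per-letter distortion $f(d(\cdot, \cdot))$, an off-the-shelf Blahut-Arimoto routine can simply be run on the transformed distortion matrix. The sketch below is our own minimal illustration of that reduction (the paper provides no code); the Bernoulli source, the Hamming distortion, and the choice $f(z) = e^z - 1$ are assumptions made for the example.

```python
import numpy as np

def blahut_arimoto(p_x, D, beta, iters=1000, tol=1e-12):
    """One Blahut-Arimoto run at slope parameter beta > 0 for distortion matrix D.
    Returns (rate in nats, expected distortion), a point on the separable R(d) curve."""
    q = np.full(D.shape[1], 1.0 / D.shape[1])    # reconstruction marginal q(x_hat)
    A = np.exp(-beta * D)                        # kernel exp(-beta * D[x, x_hat])
    for _ in range(iters):
        W = q * A                                # unnormalized Q(x_hat | x)
        W /= W.sum(axis=1, keepdims=True)
        q_new = p_x @ W                          # updated reconstruction marginal
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    W = q * A
    W /= W.sum(axis=1, keepdims=True)
    distortion = float(p_x @ (W * D).sum(axis=1))
    rate = float(p_x @ (W * np.log(W / q)).sum(axis=1))
    return rate, distortion

# Assumed example: Bernoulli(0.3) source, Hamming per-letter distortion,
# f(z) = exp(z) - 1, so f^{-1}(y) = log(1 + y).
f, f_inv = np.expm1, np.log1p
p_x = np.array([0.7, 0.3])
D_single = np.array([[0.0, 1.0], [1.0, 0.0]])    # Hamming distortion matrix
D_tilde = f(D_single)                            # transformed distortion f(d(x, x_hat))
for beta in (0.5, 1.0, 2.0, 4.0, 8.0):
    R, d_tilde = blahut_arimoto(p_x, D_tilde, beta)
    # (R, f^{-1}(d_tilde)) is a point on the f-separable curve R_f(d).
    print(f"beta = {beta:4.1f}   R = {R:.4f} nats   d = {f_inv(d_tilde):.4f}")
```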
The rest of this paper is structured as follows. The remainder of Section 1 overviews related work: Section 1.1 provides the intuition behind Definition 1, Section 1.2 reviews related work on other compression problems, and Section 1.3 connects $f$-separable distortion measures with sub-additive distortion measures. Section 2 formally sets up the problem and demonstrates why convexity of the rate-distortion function does not always hold under the $f$-separable assumption. Section 3 presents our main result, Theorem 1, as well as some illustrative examples. Additional discussion of the problem formulation and sub-additive distortion measures is given in Section 4. We conclude the paper in Section 5.

1.1. Generalized $f$-Mean and Rényi Entropy

To understand the intuition behind Definition 1, consider aggregating $n$ numbers $(z_1, \ldots, z_n)$ by defining a sequence of functions (indexed by $n$)
$$M_n(z) = f^{-1}\!\left( \frac{1}{n} \sum_{i=1}^{n} f(z_i) \right) \tag{5}$$
where $f$ is a continuous, increasing function on $[z_{\min}, z_{\max}]$, with $z_{\min} = \min\{z_i\}_{i=1}^{n}$ and $z_{\max} = \max\{z_i\}_{i=1}^{n}$. It is easy to see that (5) satisfies the following properties:
  • $M_n(z)$ is continuous and monotonically increasing in each $z_i$,
  • $M_n(z)$ is a symmetric function of the $z_i$,
  • If $z_i = z$ for all $i$, then $M_n(z) = z$,
  • For any $m \le n$,
    $$M_n(z) = M_n\big( \underbrace{M_m(z_1^m), \ldots, M_m(z_1^m)}_{m \text{ times}}, z_{m+1}, \ldots, z_n \big).$$
Moreover, it is shown in [2] that any sequence of functions $M_n$ satisfying these properties must have the form of Equation (5) for some continuous, increasing $f$. The function $M_n$ is referred to as the "Kolmogorov mean", "quasi-arithmetic mean", or "generalized $f$-mean". The most prominent examples are the geometric mean, $f(z) = \log z$, and the root mean square, $f(z) = z^2$.
The main insight behind Definition 1 is to define an $n$-letter distortion measure to be an $f$-mean of single-letter distortions. The $f$-separable distortion measures include all $n$-letter distortion measures that satisfy the above properties, with the last property saying that the non-linear "shape" of the distortion measure (cf. Figure 1) is independent of $n$.
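The consistency property in the last bullet above is easy to check numerically. The following short sketch (our own) verifies it for the geometric and root-mean-square means:

```python
import math

def f_mean(z, f, f_inv):
    """Quasi-arithmetic (Kolmogorov) mean M_n(z) = f^{-1}( (1/n) * sum_i f(z_i) )."""
    return f_inv(sum(map(f, z)) / len(z))

z = [0.5, 2.0, 4.0, 1.0, 3.0]
m = 3   # replace the first m entries by their f-mean; the overall mean is unchanged
for name, f, f_inv in [
    ("geometric (f = log)", math.log, math.exp),
    ("root mean square (f = z^2)", lambda t: t * t, math.sqrt),
]:
    full = f_mean(z, f, f_inv)
    partial = f_mean([f_mean(z[:m], f, f_inv)] * m + z[m:], f, f_inv)
    print(f"{name}: M_n = {full:.10f}, after partial aggregation = {partial:.10f}")
```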
Finally, we note that Rényi also arrived at his well-known family of entropies [3] by taking an $f$-mean of the information random variable:
$$H_\alpha(X) = f_\alpha^{-1}\Big( \mathbb{E}\big[ f_\alpha(\imath_X(X)) \big] \Big), \quad \alpha \in (0, 1) \cup (1, \infty),$$
where the information at $x$ is
$$\imath_X(x) = \log \frac{1}{P_X(x)}.$$
Rényi [3] limited his consideration to functions of the form $f_\alpha(z) = \exp\big( (1 - \alpha) z \big)$ in order to ensure that the entropy is additive for independent random variables.
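As a numerical sanity check (our own, not from [3]), the $f_\alpha$-mean of the information random variable indeed reproduces the familiar closed form $H_\alpha(X) = \frac{1}{1 - \alpha} \log \sum_x P_X(x)^\alpha$:

```python
import math

def renyi_via_f_mean(p, alpha):
    """H_alpha(X) as the f_alpha-mean of the information iota(x) = log(1/p(x)),
    with f_alpha(z) = exp((1 - alpha) * z).  Entropy is returned in nats."""
    f = lambda z: math.exp((1.0 - alpha) * z)
    f_inv = lambda y: math.log(y) / (1.0 - alpha)
    return f_inv(sum(px * f(math.log(1.0 / px)) for px in p))

def renyi_closed_form(p, alpha):
    return math.log(sum(px ** alpha for px in p)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
for alpha in (0.5, 2.0, 8.0):
    print(alpha, renyi_via_f_mean(p, alpha), renyi_closed_form(p, alpha))
```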

1.2. Compression with Non-Linear Cost

Source coding with a non-linear cost has already been explored in the variable-length lossless compression setting. Let $\ell(x)$ denote the length of the encoding of $x$ by a given variable-length code. Campbell [4,5] proposed minimizing a cost function of the form
$$f^{-1}\big( \mathbb{E}\, f(\ell(X)) \big) \tag{9}$$
instead of the usual expected length. The main result of [4,5] is that for
$$f_t(z) = \exp\{ t z \}, \quad t \in (-1, 0) \cup (0, \infty), \tag{10}$$
the fundamental limit of this setup is the Rényi entropy of order $\alpha = \frac{1}{t + 1}$. For more general $f$, this problem was handled by Kieffer [6], who showed that (9) has a fundamental limit for a large class of functions $f$. That limit is the Rényi entropy of order $\alpha = \frac{1}{t + 1}$ with
$$t = \lim_{z \to \infty} \frac{f'(z)}{f(z)}.$$
More recently, a number of works [7,8,9] studied related source coding paradigms, such as guessing and task encoding. These works also focused on the exponential functions given in (10); in [7,8] the Rényi entropy is shown to be the fundamental limit yet again.

1.3. Sub-Additive Distortion Measures

A notable departure from the separability assumption in rate-distortion theory is the sub-additive distortion measures discussed in [10]. Namely, a distortion measure is sub-additive if
$$d_n(x^n, \hat{x}^n) \le \frac{1}{n} \sum_{i=1}^{n} d(x_i, \hat{x}_i).$$
In the present setting, an $f$-separable distortion measure is sub-additive if $f$ is concave: by Jensen's inequality,
$$d_n(x^n, \hat{x}^n) = f^{-1}\!\left( \frac{1}{n} \sum_{i=1}^{n} f\big(d(x_i, \hat{x}_i)\big) \right) \le \frac{1}{n} \sum_{i=1}^{n} d(x_i, \hat{x}_i).$$
Thus, the results for sub-additive distortion measures, such as the convexity of the rate-distortion function, are applicable to $f$-separable distortion measures whenever $f$ is concave.
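A quick numerical illustration of this Jensen-type inequality (our own sketch; the random distortion values are stand-ins for per-letter distortions):

```python
import math
import random

# For concave increasing f, Jensen gives f(mean(d_i)) >= mean(f(d_i)), and
# applying the increasing f^{-1} yields f^{-1}(mean(f(d_i))) <= mean(d_i):
# the f-separable distortion never exceeds the arithmetic average.
f, f_inv = math.sqrt, lambda y: y * y
for _ in range(5):
    d = [random.uniform(0.0, 4.0) for _ in range(10)]
    f_sep = f_inv(sum(map(f, d)) / len(d))
    arith = sum(d) / len(d)
    assert f_sep <= arith + 1e-12
    print(f"{f_sep:.4f} <= {arith:.4f}")
```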

2. Preliminaries

Let $X$ be a random variable defined on $\mathcal{X}$ with distribution $P_X$, with reconstruction alphabet $\hat{\mathcal{X}}$, and a distortion measure $d \colon \mathcal{X} \times \hat{\mathcal{X}} \to [0, \infty)$. Let $\mathcal{M} = \{1, \ldots, M\}$ be the message set.
Definition 2 (Lossy source code).
A lossy source code $(g, c)$ is a pair of mappings,
$$g \colon \mathcal{X} \to \mathcal{M},$$
$$c \colon \mathcal{M} \to \hat{\mathcal{X}}.$$
A lossy source code $(g, c)$ is an $(M, d)$-lossy source code on $(\mathcal{X}, \hat{\mathcal{X}}, d)$ if
$$\mathbb{E}\big[ d\big(X, c(g(X))\big) \big] \le d. \tag{16}$$
A lossy source code $(g, c)$ is an $(M, d, \epsilon)$-lossy source code on $(\mathcal{X}, \hat{\mathcal{X}}, d)$ if
$$\mathbb{P}\big[ d\big(X, c(g(X))\big) > d \big] \le \epsilon.$$
Definition 3.
An information source $\mathbf{X}$ is a stochastic process
$$\mathbf{X} = \big\{ X^n = (X_1, \ldots, X_n) \big\}_{n=1}^{\infty}.$$
If $(g, c)$ is an $(M, d)$-lossy source code for $X^n$ on $(\mathcal{X}^n, \hat{\mathcal{X}}^n, d_n)$, we say that $(g, c)$ is an $(n, M, d)$-lossy source code. Likewise, an $(M, d, \epsilon)$-lossy source code for $X^n$ on $(\mathcal{X}^n, \hat{\mathcal{X}}^n, d_n)$ is an $(n, M, d, \epsilon)$-lossy source code.

2.1. Rate-Distortion Function (Average Distortion)

Definition 4.
Let a sequence of distortion measures $\{d_n\}$ be given. The rate-distortion pair $(R, d)$ is achievable if there exists a sequence of $(n, M_n, d_n)$-lossy source codes such that
$$\limsup_{n \to \infty} \frac{1}{n} \log M_n \le R, \quad \text{and} \quad \limsup_{n \to \infty} d_n \le d.$$
Our main object of study is the following rate-distortion function with respect to $f$-separable distortion measures.
Definition 5.
Let $\{d_n\}$ be a sequence of $f$-separable distortion measures. Then,
$$R_f(d) = \inf\{ R \,:\, (R, d) \ \text{is achievable} \}.$$
If $f$ is the identity, then we omit the subscript $f$ and simply write $R(d)$.

2.2. Rate-Distortion Function (Excess Distortion)

It is useful to consider the rate-distortion function for $f$-separable distortion measures under the excess distortion paradigm.
Definition 6.
Let a sequence of distortion measures $\{d_n\}$ be given. The rate-distortion pair $(R, d)$ is (excess distortion) achievable if for any $\gamma > 0$ there exists a sequence of $(n, M_n, d + \gamma, \epsilon_n)$-lossy source codes such that
$$\limsup_{n \to \infty} \frac{1}{n} \log M_n \le R, \quad \text{and} \quad \limsup_{n \to \infty} \epsilon_n = 0.$$
Definition 7.
Let $\{d_n\}$ be a sequence of $f$-separable distortion measures. Then,
$$\mathbb{R}_f(d) = \inf\{ R \,:\, (R, d) \ \text{is (excess distortion) achievable} \},$$
where the blackboard symbol $\mathbb{R}_f$ distinguishes the excess-distortion quantity from its average-distortion counterpart $R_f$.
Characterizing the $f$-separable rate-distortion function is particularly simple under the excess distortion paradigm, as shown in the following lemma.
Lemma 1.
Let the single-letter distortion $d$ and an increasing, continuous function $f$ be given. Then,
$$\mathbb{R}_f(d) = \tilde{\mathbb{R}}(f(d))$$
where $\tilde{\mathbb{R}}(d)$ is the excess-distortion rate-distortion function computed with respect to $\tilde{d}(x, \hat{x}) = f(d(x, \hat{x}))$.
Proof.
Let $\{d_n\}$ be a sequence of $f$-separable distortion measures based on $d(\cdot, \cdot)$, and let $\{\tilde{d}_n\}$ be a sequence of separable distortion measures based on $\tilde{d}(\cdot, \cdot) = f(d(\cdot, \cdot))$.
Since $f$ is increasing and continuous at $d$, for any $\gamma > 0$ there exists $\tilde{\gamma} > 0$ such that
$$f(d + \gamma) - f(d) = \tilde{\gamma}. \tag{22}$$
The reverse is also true by the continuity of $f$: for any $\tilde{\gamma} > 0$ there exists $\gamma > 0$ such that (22) is satisfied.
Any source code $(g_n, c_n)$ is an $(n, M_n, d + \gamma, \epsilon_n)$-lossy code under the $f$-separable distortion $d_n$ if and only if $(g_n, c_n)$ is also an $(n, M_n, f(d) + \tilde{\gamma}, \epsilon_n)$-lossy code under the separable distortion $\tilde{d}_n$. Indeed,
$$\begin{aligned}
\epsilon_n &\ge \mathbb{P}\big[ d_n(X^n, c_n \circ g_n(X^n)) > d + \gamma \big] \\
&= \mathbb{P}\Big[ f^{-1}\Big( \tfrac{1}{n} \textstyle\sum_{i=1}^{n} f(d(X_i, \hat{X}_i)) \Big) > d + \gamma \Big] \\
&= \mathbb{P}\Big[ \tfrac{1}{n} \textstyle\sum_{i=1}^{n} f(d(X_i, \hat{X}_i)) > f(d + \gamma) \Big] \\
&= \mathbb{P}\big[ \tilde{d}_n(X^n, c_n \circ g_n(X^n)) > f(d) + \tilde{\gamma} \big],
\end{aligned}$$
where $\hat{X}^n = c_n \circ g_n(X^n)$. It follows that $(R, d)$ is (excess distortion) achievable with respect to $\{d_n\}$ if and only if $(R, f(d))$ is (excess distortion) achievable with respect to $\{\tilde{d}_n\}$. The lemma statement follows from this observation and Definition 6. ☐
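The pivotal step of the proof, namely that the excess-distortion events under $d_n$ and $\tilde{d}_n$ coincide, is easy to confirm empirically. The following sketch (our own) checks the equivalence on random sequence pairs, with an assumed absolute-difference per-letter distortion and $f(z) = z^2$:

```python
import math
import random

f, f_inv = lambda z: z * z, math.sqrt    # a convex, increasing f
d = lambda a, b: abs(a - b)              # per-letter distortion (illustrative)
threshold, n = 0.5, 20
for _ in range(1000):
    x  = [random.random() for _ in range(n)]
    xh = [random.random() for _ in range(n)]
    avg_f   = sum(f(d(a, b)) for a, b in zip(x, xh)) / n
    d_n     = f_inv(avg_f)               # f-separable distortion
    d_tilde = avg_f                      # separable distortion for d~ = f(d)
    # Because f is increasing, the two excess-distortion events are identical:
    assert (d_n > threshold) == (d_tilde > f(threshold))
```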

2.3. $f$-Separable Rate-Distortion Functions and Convexity

While it is a well-established result in rate-distortion theory that all separable rate-distortion functions are convex ([11], Lemma 10.4.1), this need not hold for $f$-separable rate-distortion functions.
The convexity argument for separable distortion measures is based on the idea of time sharing; that is, suppose there exists an $(n_1, M_1, d_1)$-lossy source code of blocklength $n_1$ and an $(n_2, M_2, d_2)$-lossy source code of blocklength $n_2$. Then, there exists an $(n, M, d)$-lossy source code of blocklength $n$ with $M = M_1 M_2$ and $d = \frac{n_1}{n_1 + n_2} d_1 + \frac{n_2}{n_1 + n_2} d_2$: such a code is just a concatenation of the codes over blocklengths $n_1$ and $n_2$. The distortion $d$ is achievable since
$$d_{n_1}(x^{n_1}, \hat{x}^{n_1}) = \frac{1}{n_1} \sum_{i=1}^{n_1} d(x_i, \hat{x}_i) = d_1$$
and, letting $n = n_1 + n_2$,
$$d_{n_2}\big( x_{n_1 + 1}^{n}, \hat{x}_{n_1 + 1}^{n} \big) = \frac{1}{n_2} \sum_{i = n_1 + 1}^{n} d(x_i, \hat{x}_i) = d_2.$$
Time sharing between the two schemes gives
$$d_n(x^n, \hat{x}^n) = \frac{1}{n} \sum_{i=1}^{n} d(x_i, \hat{x}_i) = \frac{n_1}{n} d_1 + \frac{n_2}{n} d_2.$$
However, this bound on the distortion need not hold for $f$-separable distortions. Consider an $f$ which is strictly convex, and suppose that
$$f^{-1}\!\left( \frac{1}{n_1} \sum_{i=1}^{n_1} f\big(d(x_i, \hat{x}_i)\big) \right) = d_1, \qquad f^{-1}\!\left( \frac{1}{n_2} \sum_{i=n_1+1}^{n} f\big(d(x_i, \hat{x}_i)\big) \right) = d_2.$$
We can write
$$f^{-1}\!\left( \frac{1}{n} \sum_{i=1}^{n} f\big(d(x_i, \hat{x}_i)\big) \right) = f^{-1}\!\left( \frac{n_1}{n} f(d_1) + \frac{n_2}{n} f(d_2) \right) > \frac{n_1}{n} d_1 + \frac{n_2}{n} d_2.$$
where the strict inequality holds for $d_1 \ne d_2$ by the strict convexity of $f$. Thus, concatenating the two schemes does not guarantee that the distortion assigned by the $f$-separable distortion measure is bounded by $d$.
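To see the failure concretely, here is a small numeric instance (our own) with the strictly convex choice $f(z) = z^2$: the concatenated code's $f$-separable distortion strictly exceeds the time-sharing average of the per-block distortions.

```python
import math

f, f_inv = lambda z: z * z, math.sqrt    # strictly convex f
n1, n2 = 50, 50
d1, d2 = 0.1, 0.9                        # distortions achieved on the two sub-blocks
n = n1 + n2
concat    = f_inv((n1 / n) * f(d1) + (n2 / n) * f(d2))   # f-separable distortion
timeshare = (n1 / n) * d1 + (n2 / n) * d2                # linear (separable) average
print(concat, timeshare)   # 0.6403... > 0.5: the concatenation overshoots d
```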

3. Main Result

In this section we make the following standard assumptions (see [12]):
  • $\mathbf{X}$ is a stationary and ergodic source.
  • The single-letter distortion function $d(\cdot, \cdot)$ and the continuous, increasing function $f(\cdot)$ are such that
    $$\inf_{\hat{x} \in \hat{\mathcal{X}}} \mathbb{E}\big[ f(d(X, \hat{x})) \big] < \infty.$$
  • For each $d > 0$, there exist a countable subset $\{\hat{x}_i\}$ of $\hat{\mathcal{X}}$ and a countable measurable partition $\{E_i\}$ of $\mathcal{X}$ such that $d(x, \hat{x}_i) \le d$ for all $x \in E_i$ and every $i$, and
    $$\sum_i P_{X_1}(E_i) \log \frac{1}{P_{X_1}(E_i)} < \infty.$$
Theorem 1.
Under the stated assumptions, the rate-distortion function is given by
$$R_f(d) = \tilde{R}(f(d)) \tag{34}$$
where
$$\tilde{R}(f(d)) = \lim_{n \to \infty} \; \inf_{P_{\hat{X}^n \mid X^n} \,:\; \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}[\tilde{d}(X_i, \hat{X}_i)] \le f(d)} \; \frac{1}{n} I(X^n; \hat{X}^n) \tag{35}$$
is the rate-distortion function computed with respect to the separable distortion measure given by $\tilde{d}(x, \hat{x}) = f(d(x, \hat{x}))$.
For stationary memoryless sources, (34) particularizes to
$$R_f(d) = \inf_{P_{\hat{X} \mid X} \,:\, \mathbb{E}[f(d(X, \hat{X}))] \le f(d)} I(X; \hat{X}). \tag{36}$$
Proof.
Equations (35) and (36) are widely known in the literature (see, for example, [10,11,13]); it remains to show (34). Under the stated assumptions,
$$R_f(d) \overset{(a)}{\le} \mathbb{R}_f(d) \overset{(b)}{=} \tilde{\mathbb{R}}(f(d)) \overset{(c)}{=} \tilde{R}(f(d))$$
where (a) follows from assumption (2) and Theorem A1 in Appendix A, (b) is shown in Lemma 1, and (c) is due to [14] (see also ([13], Theorem 5.9.1)). The other direction,
$$R_f(d) \ge \tilde{R}(f(d)),$$
is a consequence of the strong converse by Kieffer [12]; see Lemma A1 in Appendix A. ☐
An immediate application of Theorem 1 gives the $f$-separable rate-distortion function for several well-known binary memoryless sources (BMS).
Example 1 (BMS, Hamming distortion).
Let $\mathbf{X}$ be the binary memoryless source. That is, $\mathcal{X} = \hat{\mathcal{X}} = \{0, 1\}$, $X_i$ is a Bernoulli($p$) random variable, and $d(\cdot, \cdot)$ is the usual Hamming distortion measure. Then, for any continuous, increasing $f(\cdot)$ and $p \le \frac{1}{2}$,
$$R_f(d) = \begin{cases} h(p) - h\!\left( \dfrac{f(d) - f(0)}{f(1) - f(0)} \right), & \dfrac{f(d) - f(0)}{f(1) - f(0)} < p \\ 0, & \text{otherwise} \end{cases}$$
where
$$h(p) = p \log \frac{1}{p} + (1 - p) \log \frac{1}{1 - p}$$
is the binary entropy function. The result follows from a series of obvious equalities,
$$\begin{aligned}
R_f(d) &= \inf_{P_{\hat{X} \mid X} \,:\, \mathbb{E}[f(d(X, \hat{X}))] \le f(d)} I(X; \hat{X}) \\
&= \inf_{P_{\hat{X} \mid X} \,:\, \mathbb{E}\left[ \frac{f(d(X, \hat{X})) - f(0)}{f(1) - f(0)} \right] \le \frac{f(d) - f(0)}{f(1) - f(0)}} I(X; \hat{X}) \\
&= \inf_{P_{\hat{X} \mid X} \,:\, \mathbb{E}[d(X, \hat{X})] \le \frac{f(d) - f(0)}{f(1) - f(0)}} I(X; \hat{X}) \\
&= R\!\left( \frac{f(d) - f(0)}{f(1) - f(0)} \right),
\end{aligned}$$
where the third equality uses the fact that the Hamming distortion takes only the values 0 and 1, so that $\frac{f(d(X, \hat{X})) - f(0)}{f(1) - f(0)} = d(X, \hat{X})$.
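Evaluating this closed form takes only a few lines. The sketch below (our own) computes $R_f(d)$ in bits for several choices of $f$; the value of $p$ and the particular functions are illustrative assumptions.

```python
import math

def h_bits(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_bms_hamming(dist, p, f):
    """R_f(d) for a Bernoulli(p) source with Hamming distortion, per Example 1."""
    u = (f(dist) - f(0.0)) / (f(1.0) - f(0.0))   # normalized transformed distortion
    return h_bits(p) - h_bits(u) if u < p else 0.0

p = 0.4
for name, f in [("identity", lambda z: z),
                ("square", lambda z: z * z),
                ("sqrt", math.sqrt)]:
    print(name, [round(rate_bms_hamming(dd / 10, p, f), 4) for dd in range(5)])
```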
The rate-distortion function given in Example 1 is plotted in Figure 2 for different functions $f$. The simple derivation in Example 1 applies to any source for which the single-letter distortion measure takes only two values, as shown in the next example.
Example 2 (BMS, Erasure distortion).
Let $\mathbf{X}$ be the binary memoryless source and let the reconstruction alphabet have an erasure option. That is, $\mathcal{X} = \{0, 1\}$, $\hat{\mathcal{X}} = \{0, 1, \mathrm{e}\}$, and $X_i$ is a Bernoulli($\frac{1}{2}$) random variable. Let $d(\cdot, \cdot)$ be the usual erasure distortion measure:
$$d(x, \hat{x}) = \begin{cases} 0, & x = \hat{x} \\ 1, & \hat{x} = \mathrm{e} \\ \infty, & \text{otherwise.} \end{cases}$$
The separable rate-distortion function for the erasure distortion is given by
$$R(d) = 1 - d;$$
see ([11], Problem 10.7). Then, for any continuous, increasing $f(\cdot)$,
$$R_f(d) = 1 - \frac{f(d) - f(0)}{f(1) - f(0)}.$$
The rate-distortion function given in Example 2 is plotted in Figure 3 for different functions $f$. Observe that for concave $f$ (i.e., a sub-additive distortion measure) the resulting rate-distortion function is convex, which is consistent with [10]. However, for $f$ that are not concave, the rate-distortion function is not always convex. Unlike in the case of a conventional separable distortion measure, the rate-distortion function of an $f$-separable distortion measure is not convex in general.
Having a closed-form analytic expression for the rate-distortion function under a separable distortion measure does not always mean that we can easily derive such an expression for an $f$-separable distortion measure with the same per-letter distortion. For example, consider the Gaussian source with the mean-square error (MSE) per-letter distortion. According to Theorem 1, letting $f(z) = \sqrt{z}$ recovers the Gaussian source with the absolute-value per-letter distortion. This setting, and variations on it, is a difficult problem in general [15]. However, we can recover the $f$-separable rate-distortion function whenever the per-letter distortion $d(\cdot, \cdot)$ composed with $f(\cdot)$ reconstructs the MSE distortion; see Figure 4.
Theorem 1 shows that for well-behaved stationary ergodic sources, $R_f(d)$ admits a simple characterization. According to Lemma 1, the same characterization holds under the excess distortion paradigm without the stationarity and ergodicity assumptions. The next example shows that, in general, $R_f(d) \ne \tilde{R}(f(d))$ within the average distortion paradigm. Thus, assumption (1) is necessary for Theorem 1 to hold.
Example 3 (Mixed Source).
Fix $\lambda \in (0, 1)$ and let the source $\mathbf{X}$ be a mixture of two i.i.d. sources,
$$P_{X^n}(x^n) = \lambda \prod_{i=1}^{n} P_1(x_i) + (1 - \lambda) \prod_{i=1}^{n} P_2(x_i). \tag{45}$$
We can alternatively express $X^n$ as
$$X^n = Z X_1^n + (1 - Z) X_2^n$$
where $Z$ is a Bernoulli($\lambda$) random variable. Then, the rate-distortion function for the mixture source (45) and continuous, increasing $f$ is given in Lemma A2 in Appendix B. Namely,
$$R_f(d) = \min_{(d_1, d_2) \,:\, \lambda d_1 + (1 - \lambda) d_2 \le d} \max\big\{ R_f^1(d_1), R_f^2(d_2) \big\} \tag{47}$$
where $R_f^1(d)$ and $R_f^2(d)$ are the rate-distortion functions for the discrete memoryless sources given by $P_1$ and $P_2$, respectively. Likewise,
$$\tilde{R}(f(d)) = \min_{(d_1, d_2) \,:\, \lambda d_1 + (1 - \lambda) d_2 \le f(d)} \max\big\{ \tilde{R}^1(d_1), \tilde{R}^2(d_2) \big\}. \tag{48}$$
As shown in Figure 5, Equations (47) and (48) are not equal in general.
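The min-max optimization in (47) is straightforward to carry out on a grid, reusing the closed form from Example 1 for the two Bernoulli components. The sketch below (our own) does exactly that; the grid resolution and all parameter values are assumptions made for illustration.

```python
import math

def h_bits(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_component(dist, p, f):
    """R_f(d) of a Bernoulli(p) component with Hamming distortion (Example 1)."""
    u = (f(dist) - f(0.0)) / (f(1.0) - f(0.0))
    return h_bits(p) - h_bits(u) if u < p else 0.0

def rate_mixture(dist, p1, p2, lam, f, grid=2001):
    """Grid search for min over lam*d1 + (1-lam)*d2 <= d of max(R_f^1, R_f^2)."""
    best = float("inf")
    for i in range(grid):
        d1 = i / (grid - 1)
        if lam * d1 > dist:
            break                         # constraint can no longer be met
        d2 = min(1.0, (dist - lam * d1) / (1.0 - lam))   # largest feasible d2
        best = min(best, max(rate_component(d1, p1, f),
                             rate_component(d2, p2, f)))
    return best

f = lambda z: z * z
for d in (0.05, 0.1, 0.2, 0.3):
    print(d, round(rate_mixture(d, 0.5, 0.001, 0.5, f), 4))
```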

4. Discussion

4.1. Sub-Additive Distortion Measures

Recall that an $f$-separable distortion measure is sub-additive if $f$ is concave (cf. Section 1.3). Clearly, not all $f$-separable distortion measures are sub-additive, and not all sub-additive distortion measures are $f$-separable. An exemplar of a sub-additive distortion measure (which is not $f$-separable), given in ([10], Chapter 5.2), is
$$d_n(x^n, \hat{x}^n) = \frac{1}{n} \left( \sum_{i=1}^{n} d^q(x_i, \hat{x}_i) \right)^{1/q}, \quad q > 1. \tag{49}$$
The sub-additivity of (49) follows from the Minkowski inequality. Comparing (49) to the sub-additive, $f$-separable distortion measure given by
$$d_n(x^n, \hat{x}^n) = \left( \frac{1}{n} \sum_{i=1}^{n} d^q(x_i, \hat{x}_i) \right)^{1/q}, \quad 0 < q \le 1, \tag{50}$$
we see that the discrepancy between (49) and (50) has to do not only with the different ranges of $q$, but also with the scaling factor as a function of $n$.
Consider a binary source with Hamming distortion, and let $x^n = 0^n$, $\hat{x}^n = 1^n$. Rewriting (49) we obtain
$$d_n(x^n, \hat{x}^n) = \frac{1}{n^{(q-1)/q}} \left( \frac{1}{n} \sum_{i=1}^{n} d^q(x_i, \hat{x}_i) \right)^{1/q}$$
and
$$\begin{aligned}
\lim_{n \to \infty} d_n(x^n, \hat{x}^n) &= \lim_{n \to \infty} \frac{1}{n^{(q-1)/q}} \left( \frac{1}{n} \sum_{i=1}^{n} d^q(0, 1) \right)^{1/q} \\
&= \lim_{n \to \infty} \frac{1}{n^{(q-1)/q}} \left( \frac{1}{n} \sum_{i=1}^{n} 1 \right)^{1/q} \\
&= \lim_{n \to \infty} \frac{1}{n^{(q-1)/q}} = 0.
\end{aligned}$$
In this binary example, the limiting distortion of (49) is zero even though the reconstruction of $x^n$ gets every single symbol wrong. It is easy to observe that the example (49) is similarly degenerate in many other cases of interest. The distortion measure given by (50), on the other hand, is an example of a non-trivial sub-additive distortion measure, as can be seen in Figure 2 and Figure 3 for $q = \frac{1}{2}$.
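The contrast between (49) and (50) is easy to check numerically. In the all-wrong reconstruction above, (49) vanishes as $n$ grows while (50) stays at 1 (our own sketch):

```python
# All-wrong binary reconstruction: every per-letter Hamming distortion equals 1.
q_gray = 2.0   # q > 1 in (49), the sub-additive example from [10]
q_fsep = 0.5   # 0 < q <= 1 in (50), the f-separable sub-additive measure
for n in (10, 100, 10_000, 1_000_000):
    d49 = (1.0 / n) * (n * 1.0 ** q_gray) ** (1.0 / q_gray)   # = n^(1/q - 1) -> 0
    d50 = ((1.0 / n) * n * 1.0 ** q_fsep) ** (1.0 / q_fsep)   # = 1 for every n
    print(f"n = {n:>9}:  (49) = {d49:.6f}   (50) = {d50:.1f}")
```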

4.2. A Consequence of Theorem 1

In light of the discussion in Section 1.1, an alert reader may consider modifying (16) to
$$f^{-1}\Big( \mathbb{E}\, f\big( d(X, c(g(X))) \big) \Big) \le d, \tag{55}$$
and studying the $(M, d)$-lossy source codes under this new paradigm. Call the corresponding rate-distortion function $R'_f(d)$, and assume that the $n$-letter distortion measures are separable. Thus, at blocklength $n$ the constraint (55) reads
$$\mathbb{E}\, f\!\left( \frac{1}{n} \sum_{i=1}^{n} d(X_i, \hat{X}_i) \right) \le f(d)$$
where $\hat{X}^n = c(g(X^n))$. Writing $d(\cdot, \cdot) = f^{-1}(\tilde{d}(\cdot, \cdot))$ with $\tilde{d} = f(d(\cdot, \cdot))$, this is equivalent to
$$\mathbb{E}\, f\!\left( \frac{1}{n} \sum_{i=1}^{n} f^{-1}\big( \tilde{d}(X_i, \hat{X}_i) \big) \right) \le f(d),$$
that is,
$$\mathbb{E}\big[ \tilde{d}_n(X^n, \hat{X}^n) \big] \le f(d)$$
where $\tilde{d}_n$ is an $f^{-1}$-separable distortion measure based on $\tilde{d}$. Putting these observations together with Theorem 1 yields
$$R'_f(d) = \tilde{R}_{f^{-1}}(f(d)) = R\big( f^{-1}(f(d)) \big) = R(d).$$
In other words, a consequence of Theorem 1 is that the rate-distortion function remains unchanged under this new paradigm.

5. Conclusions

This paper proposes $f$-separable distortion measures as a good model for non-linear distortion penalties. The rate-distortion function for $f$-separable distortion measures is characterized in terms of the separable rate-distortion function with respect to a new single-letter distortion measure, $f(d(\cdot, \cdot))$. This characterization is straightforward under the excess distortion paradigm, as seen in Lemma 1. The proof is more involved under the average distortion paradigm, as seen in Theorem 1. An important implication of Theorem 1 is that many prominent results in the rate-distortion literature (e.g., the Blahut-Arimoto algorithm) can be leveraged to work for $f$-separable distortion measures.
Finally, we mention that a similar generalization is well suited for channels with non-linear costs. That is, we say that $b_n$ is an $f$-separable cost function if it can be written as
$$b_n(x^n) = f^{-1}\!\left( \frac{1}{n} \sum_{i=1}^{n} f\big(b(x_i)\big) \right).$$
With this generalization we can state the following result, which is outside the scope of this special issue.
Theorem 2 (Channels with cost).
The capacity of a stationary memoryless channel given by $P_{Y \mid X}$ with an $f$-separable cost function based on the single-letter cost function $b(x)$ is
$$C_f(\beta) = \sup_{P_X \,:\, \mathbb{E}[f(b(X))] \le f(\beta)} I(X; Y).$$
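For a simple binary-input channel, the capacity-cost function of Theorem 2 can be evaluated by a direct sweep over input distributions. The sketch below (our own) assumes a binary symmetric channel and the linear cost $b(x) = x$; both choices, and all parameter values, are illustrative rather than taken from the paper.

```python
import math

def mutual_info_bsc(p1, eps):
    """I(X; Y) in bits for input P(X=1) = p1 over a BSC with crossover eps."""
    h = lambda t: 0.0 if t in (0.0, 1.0) else -t * math.log2(t) - (1 - t) * math.log2(1 - t)
    py1 = p1 * (1 - eps) + (1 - p1) * eps
    return h(py1) - h(eps)

def capacity_cost(beta, eps, f, grid=10_001):
    """C_f(beta) = sup over P_X with E[f(b(X))] <= f(beta), where b(x) = x."""
    best = 0.0
    for i in range(grid):
        p1 = i / (grid - 1)
        # With b(0) = 0, b(1) = 1: E[f(b(X))] = (1 - p1) * f(0) + p1 * f(1).
        if (1 - p1) * f(0.0) + p1 * f(1.0) <= f(beta):
            best = max(best, mutual_info_bsc(p1, eps))
    return best

f = lambda z: z * z
for beta in (0.1, 0.25, 0.5):
    print(beta, round(capacity_cost(beta, eps=0.1, f=f), 4))
```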

Author Contributions

The authors contributed equally to this work. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Lemmas for Theorem 1

Theorem A1 can be distilled from several proofs in the literature. We state it here, with proof, for completeness; it is given in its present form in [16]. The condition of Theorem A1 applies when the source satisfies assumptions (1)–(3) in Section 3; this is a consequence of the ergodic theorem and the continuity of $f$.
Theorem A1.
Suppose that the source and distortion measure are such that for any $\gamma > 0$ there exist $0 < \gamma' < \infty$ and a sequence $b_1, b_2, \ldots$ such that, for all sufficiently large $n$,
$$\mathbb{E}\big[ d_n(X^n, b_n) \, \mathbf{1}\{ \gamma' < d_n(X^n, b_n) \} \big] \le \gamma.$$
If a rate-distortion pair $(R, d)$ is achievable under the excess distortion criterion, then it is achievable under the average distortion criterion.
Proof.
Choose $\gamma > 0$. Suppose there is a code $(g_n, c_n)$ with $M$ codewords that achieves
$$\lim_{n \to \infty} \mathbb{P}\big[ d_n(X^n, c_n(g_n(X^n))) > d + \gamma \big] = 0.$$
We construct a new code $(\hat{g}_n, \hat{c}_n)$ with $M + 1$ codewords:
$$\hat{c}_n(m) = \begin{cases} b_n, & \text{if } m = 0 \\ c_n(m), & \text{if } m = 1, \ldots, M, \end{cases}$$
$$\hat{g}_n(x^n) = \begin{cases} 0, & \text{if } d_n(x^n, c_n(g_n(x^n))) > d_n(x^n, b_n) \\ g_n(x^n), & \text{if } d_n(x^n, c_n(g_n(x^n))) \le d_n(x^n, b_n). \end{cases}$$
Then
$$d_n(x^n, \hat{c}_n(\hat{g}_n(x^n))) = \min\big\{ d_n(x^n, c_n(g_n(x^n))), \, d_n(x^n, b_n) \big\}.$$
For brevity, denote
$$V_n = d_n(X^n, \hat{c}_n(\hat{g}_n(X^n))), \qquad W_n = d_n(X^n, c_n(g_n(X^n))), \qquad Z_n = d_n(X^n, b_n).$$
Then,
$$\begin{aligned}
\mathbb{E}[V_n] &= \mathbb{E}\big[ V_n \mathbf{1}\{ V_n \le d + \gamma \} \big] + \mathbb{E}\big[ V_n \mathbf{1}\{ d + \gamma < V_n \le \gamma' \} \big] + \mathbb{E}\big[ V_n \mathbf{1}\{ \gamma' < V_n \} \big] \\
&\le d + \gamma + \gamma' \, \mathbb{P}\big[ d + \gamma < V_n \big] + \mathbb{E}\big[ V_n \mathbf{1}\{ \gamma' < V_n \} \big] \\
&\le d + \gamma + \gamma' \, \mathbb{P}\big[ d + \gamma < W_n \big] + \mathbb{E}\big[ Z_n \mathbf{1}\{ \gamma' < Z_n \} \big] \\
&\le d + 2\gamma + \mathbb{E}\big[ Z_n \mathbf{1}\{ \gamma' < Z_n \} \big] \\
&\le d + 3\gamma,
\end{aligned}$$
where the second inequality uses $V_n \le W_n$ and $V_n \le Z_n$, and the last two inequalities hold for all sufficiently large $n$ (since $\mathbb{P}[d + \gamma < W_n] \to 0$ and by the hypothesis of the theorem). Since $\gamma > 0$ was arbitrary, the average distortion criterion is satisfied. ☐
The following theorem is shown in ([12], Theorem 1).
Theorem A2 (Kieffer).
Let $\mathbf{X}$ be an information source satisfying conditions (1)–(3) in Section 3, with $f$ being the identity, and let $d_n$ be separable. Given an arbitrary sequence of $(n, M_n, d, \epsilon_n)$-lossy source codes, if
$$\lim_{n \to \infty} \frac{1}{n} \log M_n < R(d)$$
then
$$\lim_{n \to \infty} \epsilon_n = 1.$$
An important implication of Theorem A2 for $f$-separable rate-distortion functions is given in the following lemma.
Lemma A1.
Let $\mathbf{X}$ be an information source satisfying conditions (1)–(3) in Section 3. Then,
$$R_f(d) \ge \tilde{R}(f(d)).$$
Proof.
If $\tilde{R}(f(d)) = 0$, we are done. Suppose $\tilde{R}(f(d)) > 0$, and assume that there exists a sequence $\{(g_n, c_n)\}_{n=1}^{\infty}$ of $(n, M_n, d_n)$-lossy source codes (under the $f$-separable distortion) with
$$\limsup_{n \to \infty} \frac{1}{n} \log M_n < \tilde{R}(f(d))$$
and
$$\limsup_{n \to \infty} d_n \le d. \tag{A18}$$
Since $\tilde{R}(f(d))$ is continuous and decreasing in $d$, there exists some $\gamma > 0$ such that
$$\lim_{n \to \infty} \frac{1}{n} \log M_n < \tilde{R}(f(d + \gamma)) < \tilde{R}(f(d)).$$
For every $n$, the lossy source code $(g_n, c_n)$ is also an $(n, M_n, d + \gamma, \epsilon_n)$-lossy source code for some $\epsilon_n \in [0, 1]$ and the $f$-separable $d_n$. It is also an $(n, M_n, f(d + \gamma), \epsilon_n)$-lossy source code with respect to the separable distortion $\tilde{d}_n(\cdot, \cdot)$. We can therefore apply Theorem A2 to obtain
$$\lim_{n \to \infty} \epsilon_n = 1.$$
Thus,
$$d_n \ge \mathbb{E}\big[ d_n(X^n, c_n(g_n(X^n))) \big] \ge \epsilon_n (d + \gamma) > d + \frac{\gamma}{2}, \tag{A22}$$
where the last inequality of (A22) holds for all sufficiently large $n$. This contradicts (A18), and the result follows. ☐

Appendix B. Rate-Distortion Function for a Mixed Source

Lemma A2.
The rate-distortion function with respect to the $f$-separable distortion for the mixture source (45) is given by
$$R_f(d) = \min_{(d_1, d_2) \,:\, \lambda d_1 + (1 - \lambda) d_2 \le d} \max\big\{ R_f^1(d_1), R_f^2(d_2) \big\}$$
where $R_f^1(d)$ and $R_f^2(d)$ are the rate-distortion functions with respect to the $f$-separable distortion for the stationary memoryless sources given by $P_1$ and $P_2$, respectively.
Proof.
Let $M_f^*(d)$ denote the minimum number of codewords $M$ such that an $(M, d)$-lossy source code ($f$-separable distortion) exists for $X^n$, and let $M_f^1(d_1)$ and $M_f^2(d_2)$ denote the corresponding non-asymptotic quantities for $P_1$ and $P_2$, respectively. Observe that
$$\min_{(d_1, d_2) \in \mathcal{D}} \max\big\{ M_f^1(d_1), M_f^2(d_2) \big\} \le M_f^*(d) \le \min_{(d_1, d_2) \in \mathcal{D}} \max\big\{ 2 M_f^1(d_1), 2 M_f^2(d_2) \big\} \tag{A25}$$
where
$$\mathcal{D} = \big\{ (d_1, d_2) \,:\, \lambda d_1 + (1 - \lambda) d_2 \le d \big\}.$$
Indeed, the upper bound follows by designing optimal codes for $P_1$ and $P_2$ separately, and then combining them to give
$$M_f^*(d) \le \min_{(d_1, d_2) \in \mathcal{D}} \big( M_f^1(d_1) + M_f^2(d_2) \big) \le \min_{(d_1, d_2) \in \mathcal{D}} \max\big\{ 2 M_f^1(d_1), 2 M_f^2(d_2) \big\}.$$
The lower bound follows by the following argument. Fix an $(M, d)$-lossy source code ($f$-separable distortion) $(g, c)$. Define
$$d_1 = \mathbb{E}\big[ d_n(X^n, c(g(X^n))) \mid Z = 1 \big], \qquad d_2 = \mathbb{E}\big[ d_n(X^n, c(g(X^n))) \mid Z = 0 \big].$$
Clearly, $(d_1, d_2) \in \mathcal{D}$. It also follows that
$$M \ge M_f^1(d_1)$$
since $(g, c)$ is an $(M, d_1)$-lossy source code ($f$-separable distortion) for $X_1^n$. Likewise,
$$M \ge M_f^2(d_2),$$
which proves the lower bound. The result follows directly from (A25). ☐

References

  1. Shannon, C.E. Coding Theorems for a Discrete Source with a Fidelity Criterion. In Institute of Radio Engineers, International Convention Record, Vol. 7, 1959; reprinted in Claude E. Shannon: Collected Papers; Wiley-IEEE Press: Hoboken, NJ, USA, 1993.
  2. Tikhomirov, V. On the Notion of Mean. In Selected Works of A. N. Kolmogorov; Tikhomirov, V., Ed.; Springer: Dordrecht, The Netherlands, 1991; Volume 25, pp. 144–146.
  3. Rényi, A. On Measures of Entropy and Information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Contributions to the Theory of Statistics, Oakland, CA, USA, 20 June–30 July 1960; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 547–561.
  4. Campbell, L. A coding theorem and Rényi's entropy. Inf. Control 1965, 8, 423–429.
  5. Campbell, L. Definition of entropy by means of a coding problem. Z. Wahrscheinlichkeitstheorie Verwandte Gebiete 1966, 6, 113–118.
  6. Kieffer, J. Variable-length source coding with a cost depending only on the code word length. Inf. Control 1979, 41, 136–146.
  7. Arikan, E. An inequality on guessing and its application to sequential decoding. IEEE Trans. Inf. Theory 1996, 42, 99–105.
  8. Bunte, C.; Lapidoth, A. Encoding Tasks and Rényi Entropy. IEEE Trans. Inf. Theory 2014, 60, 5065–5076.
  9. Arikan, E.; Merhav, N. Guessing subject to distortion. IEEE Trans. Inf. Theory 1998, 44, 1041–1056.
  10. Gray, R.M. Entropy and Information Theory, 2nd ed.; Springer: Berlin, Germany, 2011.
  11. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley-Interscience: Hoboken, NJ, USA, 2006.
  12. Kieffer, J. Strong converses in source coding relative to a fidelity criterion. IEEE Trans. Inf. Theory 1991, 37, 257–262.
  13. Han, T.S. Information-Spectrum Methods in Information Theory; Springer: Berlin, Germany, 2003.
  14. Steinberg, Y.; Verdú, S. Simulation of random processes and rate-distortion theory. IEEE Trans. Inf. Theory 1996, 42, 63–86.
  15. Dytso, A.; Bustin, R.; Poor, H.V.; Shitz, S.S. On additive channels with generalized Gaussian noise. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 426–430.
  16. Verdú, S. ELE528: Information Theory Lecture Notes; Princeton University: Princeton, NJ, USA, 2015.
Figure 1. The number of reconstruction errors for an information source with 100 bits vs. the penalty assessed by $f$-separable distortion measures based on the Hamming single-letter distortion. The $f(z) = z$ plot corresponds to the separable distortion. The $f$-separable assumption accommodates all of the other plots, and many more, with the appropriate choice of the function $f$.
Figure 2. $R_f(d)$ for the binary memoryless source with $p = 0.5$. Compare these to the $f$-separable distortion measures plotted for the binary source with Hamming distortion in Figure 1.
Figure 3. $R_f(d)$ for the binary memoryless source with $p = 0.5$ and the erasure per-letter distortion.
Figure 4. $R_f(d)$ for the Gaussian memoryless source with mean zero and unit variance.
Figure 5. Mixed binary source with $p_1 = 0.5$, $p_2 = 0.001$, and $\lambda = 0.5$. Three examples of $f$-separable rate-distortion functions are given. For $f(z) = z$, the relation $R(d) = \tilde{R}(d)$ follows immediately. When $f$ is not the identity, $R_f(d) \ne \tilde{R}(f(d))$ in general for non-ergodic sources.
