Article

Universality of Logarithmic Loss in Successive Refinement †

Department of Electronic and Electrical Engineering, Hongik University, Seoul 04066, Korea
This paper is an extended version of our paper published in the 2015 IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015.
Entropy 2019, 21(2), 158; https://doi.org/10.3390/e21020158
Submission received: 19 December 2018 / Revised: 1 February 2019 / Accepted: 7 February 2019 / Published: 8 February 2019
(This article belongs to the Special Issue Multiuser Information Theory II)

Abstract

We establish a universal property of logarithmic loss in the successive refinement problem. If the first decoder operates under logarithmic loss, we show that any discrete memoryless source is successively refinable under an arbitrary distortion criterion for the second decoder. Based on this result, we propose a low-complexity lossy compression algorithm for any discrete memoryless source.

1. Introduction

In the lossy compression problem, logarithmic loss is a criterion allowing a "soft" reconstruction of the source, a departure from the classical setting of a deterministic reconstruction. In this setting, the reconstruction alphabet is the set of probability distributions over the source alphabet. More precisely, let x be the source symbol from the source alphabet $\mathcal{X}$, and let $q(\cdot)$ be the reconstruction symbol, which is a probability measure on $\mathcal{X}$. Then the logarithmic loss is given by
$$\ell(x, q) = \log \frac{1}{q(x)}.$$
Clearly, if the reconstruction $q(\cdot)$ assigns a small probability to the true source symbol x, the loss is large.
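To make the definition concrete, here is a minimal Python sketch (not from the paper; the pmf values are made up for illustration) that evaluates the single-letter logarithmic loss in nats.

```python
import numpy as np

# Logarithmic loss in nats: the reconstruction q is a pmf over the source
# alphabet, and the loss is log(1/q(x)) at the realized source symbol x.
def log_loss(x, q):
    """x: index of the true source symbol; q: reconstruction pmf (1-D array)."""
    return -np.log(q[x])

q = np.array([0.7, 0.2, 0.1])   # a "soft" reconstruction over a ternary alphabet
print(log_loss(0, q))           # small loss: q puts large mass on the true symbol
print(log_loss(2, q))           # large loss: q puts little mass on the true symbol
```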
Although logarithmic loss plays a crucial role in the theory of learning and prediction, relatively little work has been done on it in the context of lossy compression; notable exceptions are the two-encoder multi-terminal source coding problem under logarithmic loss [1,2] and recent work on the single-shot approach to lossy source coding under logarithmic loss [3]. Note that lossy compression under logarithmic loss is closely related to the information bottleneck method [4,5,6]. In this paper, we focus on universal properties of logarithmic loss in the context of successive refinement.
Successive refinement is a network lossy compression problem where one encoder wishes to describe the source to two decoders [7,8]. Instead of having two separate coding schemes, the successive refinement encoder designs a code for the decoder with a weaker link, and sends extra information to the second decoder on top of the message of the first decoder. In general, successive refinement coding cannot do as well as two separate encoding schemes optimized for the respective decoders. However, if we can achieve the point-to-point optimum rates using successive refinement coding, we say the source is successively refinable.
Although necessary and sufficient conditions for successive refinability are known [7,8], proving (or disproving) successive refinability of a source is not a simple task. Equitz and Cover [7] found a discrete source that is not successively refinable using the Gerrish problem [9]. Chow and Berger found a continuous source that is not successively refinable using a Gaussian mixture [10]. Lastras and Berger showed that all sources are nearly successively refinable [11]. However, still only a few sources are known to be successively refinable. In this paper, we show that any discrete memoryless source is successively refinable as long as the weaker link employs logarithmic loss, regardless of the distortion criterion used for the stronger link.
In the second part of the paper, we show that this result is useful for designing a lossy compression algorithm with low complexity. Recently, the idea of successive refinement has been applied to reduce the complexity of point-to-point lossy compression algorithms. Venkataramanan et al. proposed a new lossy compression scheme for Gaussian sources where the codewords are linear combinations of sub-codewords [12]. No and Weissman also proposed a low-complexity lossy compression algorithm for Gaussian sources using extreme value theory [13]. Both algorithms describe the source successively and achieve low complexity. Roughly speaking, successive refinement allows a smaller codebook. For example, the naive random coding scheme has a codebook of size $e^{nR}$ when the blocklength is n and the rate is R. On the other hand, if we can design a successive refinement scheme that spends half the rate in the weaker link, then each of the two codebooks has size $e^{nR/2}$, so the overall codebook size is $2 e^{nR/2}$. The above idea can be generalized to a successive refinement scheme with L decoders [12,14].
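As a rough numerical illustration of this codebook-size argument (the blocklength, rate, and number of stages below are arbitrary choices, not tied to any particular source), the following sketch compares a single-stage random codebook with an L-stage successive scheme using equal per-stage rates.

```python
import numpy as np

# Codebook-size comparison: a single-stage random code needs e^{nR} codewords,
# while an L-stage successive scheme with equal per-stage rates needs L * e^{nR/L}.
n, R, L = 100, 1.0, 4                       # blocklength, rate (nats/symbol), stages
print(f"single stage: {np.exp(n * R):.3e} codewords")
print(f"{L} stages:    {L * np.exp(n * R / L):.3e} codewords")
```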
The universal property of logarithmic loss in successive refinement implies that, for any point-to-point lossy compression problem with a discrete memoryless source, we can insert a virtual intermediate decoder (weaker link) under logarithmic loss without losing any rate at the actual decoder (stronger link). As discussed above, this property allows us to design a low-complexity lossy compression algorithm for any discrete source and distortion pair. Note that previous works focused only on specific source-distortion pairs, such as a binary source with Hamming distortion.
The remainder of the paper is organized as follows. In Section 2, we revisit some of the known results pertaining to logarithmic loss. Section 3 is dedicated to successive refinement under logarithmic loss in the weaker link. In Section 4, we propose a low complexity compression scheme that can be applied to any discrete lossy compression problem. Finally, we conclude in Section 5.
Notation: $X^n$ denotes an n-dimensional random vector $(X_1, X_2, \dots, X_n)$, while $x^n$ denotes a specific realization of the random vector $X^n$. $\mathcal{X}$ denotes the support of the random variable X. Also, Q denotes a random probability mass function, while q denotes a specific probability mass function. We use natural logarithms and nats instead of bits.

2. Preliminaries

2.1. Successive Refinability

In this section, we review the successive refinement problem with two decoders. Let the source $X^n$ be an i.i.d. random vector with per-symbol distribution $p_X$. The encoder describes $X^n$ to two decoders by sending a pair of messages $(m_1, m_2)$, where $1 \le m_i \le M_i$ for $i \in \{1, 2\}$. The first decoder reconstructs $\hat{X}_1^n(m_1) \in \hat{\mathcal{X}}_1^n$ based only on the first message $m_1$. The second decoder reconstructs $\hat{X}_2^n(m_1, m_2) \in \hat{\mathcal{X}}_2^n$ based on both $m_1$ and $m_2$. The setting is described in Figure 1.
Let $d_i(\cdot, \cdot): \mathcal{X} \times \hat{\mathcal{X}}_i \to [0, \infty)$ be the distortion measure for the i-th decoder. The rates $(R_1, R_2)$ of the code are simply defined as
$$R_1 = \frac{1}{n} \log M_1, \qquad R_2 = \frac{1}{n} \log M_1 M_2.$$
An $(n, R_1, R_2, D_1, D_2, \epsilon)$-successive refinement code is a coding scheme with block length n and excess distortion probability ϵ, where the rates are $(R_1, R_2)$ and the target distortions are $(D_1, D_2)$. Since we have two decoders, the excess distortion probability is defined by $\Pr\left[ d_i(X^n, \hat{X}_i^n) > D_i \ \text{for some}\ i \right]$.
Definition 1.
A rate-distortion tuple $(R_1, R_2, D_1, D_2)$ is said to be achievable if there is a family of $(n, R_1^{(n)}, R_2^{(n)}, D_1, D_2, \epsilon^{(n)})$-successive refinement codes where
$$\lim_{n \to \infty} R_i^{(n)} = R_i \ \text{for all}\ i, \qquad \lim_{n \to \infty} \epsilon^{(n)} = 0.$$
For some special cases, both decoders can achieve the point-to-point optimum rates simultaneously.
Definition 2.
Let $R_i(D_i)$ denote the rate-distortion function of the i-th decoder for $i \in \{1, 2\}$. If the rate-distortion tuple $(R_1(D_1), R_2(D_2), D_1, D_2)$ is achievable, then we say the source is successively refinable at $(D_1, D_2)$. If the source is successively refinable at $(D_1, D_2)$ for all $D_1, D_2$, then we say the source is successively refinable.
The following theorem provides a necessary and sufficient condition for a source to be successively refinable.
Theorem 1 ([7,8]).
A source is successively refinable at $(D_1, D_2)$ if and only if there exists a conditional distribution $p_{\hat{X}_1, \hat{X}_2 | X}$ such that $X - \hat{X}_2 - \hat{X}_1$ forms a Markov chain and
$$R_i(D_i) = I(X; \hat{X}_i), \qquad \mathbb{E}[d_i(X, \hat{X}_i)] \le D_i$$
for $i \in \{1, 2\}$.
Note that the above results on successive refinability can easily be generalized to the case of k decoders.

2.2. Logarithmic Loss

Let $\mathcal{X}$ be a set of discrete source symbols ($|\mathcal{X}| < \infty$), and let $\mathcal{M}(\mathcal{X})$ be the set of probability measures on $\mathcal{X}$. Logarithmic loss $\ell: \mathcal{X} \times \mathcal{M}(\mathcal{X}) \to [0, \infty]$ is defined by
$$\ell(x, q) = \log \frac{1}{q(x)}$$
for $x \in \mathcal{X}$ and $q \in \mathcal{M}(\mathcal{X})$. Logarithmic loss between n-tuples is defined by
$$\ell_n(x^n, q^n) = \frac{1}{n} \sum_{i=1}^{n} \log \frac{1}{q_i(x_i)},$$
i.e., the symbol-by-symbol extension of the single-letter loss.
Let $X^n$ be a discrete memoryless source with distribution $p_X$. Consider the lossy compression problem under logarithmic loss where the reconstruction alphabet is $\mathcal{M}(\mathcal{X})$. The rate-distortion function is given by
$$R(D) = \inf_{p_{Q|X}:\ \mathbb{E}[\ell(X, Q)] \le D} I(X; Q) = H(X) - D.$$
The following lemma provides a property of the rate-distortion function achieving conditional distribution.
Lemma 1.
The rate-distortion function achieving conditional distribution $p_{Q|X}$ satisfies
$$p_{X|Q}(\cdot|q) = q, \qquad (1)$$
$$H(X|Q) = D \qquad (2)$$
for $p_Q$-almost every $q \in \mathcal{M}(\mathcal{X})$. Conversely, if $p_{Q|X}$ satisfies (1) and (2), then it is a rate-distortion function achieving conditional distribution, i.e.,
$$I(X; Q) = R(D) = H(X) - D, \qquad \mathbb{E}[\ell(X, Q)] = D.$$
The key idea is that we can replace Q by $p_{X|Q}(\cdot|Q)$ and obtain a rate and distortion that are no larger, i.e.,
$$I(X; Q) \ge I(X; p_{X|Q}(\cdot|Q)), \qquad \mathbb{E}[\ell(X, Q)] \ge \mathbb{E}[\ell(X, p_{X|Q}(\cdot|Q))],$$
which directly implies (1).
Interestingly, since the rate-distortion function in this case is a straight line, a simple time-sharing scheme achieves the optimal rate-distortion trade-off. More precisely, the encoder losslessly compresses only the first $\frac{H(X) - D}{H(X)}$ fraction of the source sequence components. Then, the decoder perfectly recovers those losslessly compressed components and uses $p_X$ as its reconstruction for the remaining part. The resulting scheme obviously achieves distortion D with rate $H(X) - D$.
Furthermore, this simple scheme directly implies successive refinability of the source. For $D_1 > D_2$, suppose the encoder losslessly compresses the first $\frac{H(X) - D_2}{H(X)}$ fraction of the source. Then, the first decoder can perfectly reconstruct the first $\frac{H(X) - D_1}{H(X)}$ fraction of the source from the first message of rate $H(X) - D_1$ and thus achieve distortion $D_1$, while the second decoder achieves distortion $D_2$ with total rate $H(X) - D_2$. Since both decoders achieve their point-to-point optimal rate-distortion pairs, it follows that any discrete memoryless source under logarithmic loss is successively refinable.
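The rate and distortion of this time-sharing scheme can be verified with a short computation. The sketch below uses an arbitrary illustrative source pmf (not from the paper) and evaluates the rate and the expected logarithmic loss of the scheme that losslessly describes a fraction (H(X) − D)/H(X) of the symbols and outputs p_X for the rest.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, float); p = p[p > 0]
    return -np.sum(p * np.log(p))           # entropy in nats

p_X = np.array([0.5, 0.3, 0.2])             # illustrative source pmf
H = entropy(p_X)
D = 0.4                                     # target log-loss distortion, 0 <= D <= H

alpha = (H - D) / H                         # fraction described losslessly
rate = alpha * H                            # lossless part costs H nats per symbol
# Distortion: 0 on the lossless part (point-mass reconstruction),
# E[log 1/p_X(X)] = H(X) on the remaining part (reconstruction p_X).
distortion = (1 - alpha) * H

print(rate, H - D)                          # rate equals H(X) - D
print(distortion, D)                        # expected distortion equals D
```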
We can formally prove successive refinability of a discrete memoryless source under logarithmic loss using Theorem 1, i.e., by finding random probability mass functions $Q_1, Q_2 \in \mathcal{M}(\mathcal{X})$ that satisfy
$$I(X; Q_1) = H(X) - D_1, \qquad \mathbb{E}[\ell(X, Q_1)] = D_1, \qquad (3)$$
$$I(X; Q_2) = H(X) - D_2, \qquad \mathbb{E}[\ell(X, Q_2)] = D_2, \qquad (4)$$
where $X - Q_2 - Q_1$ forms a Markov chain.
Let $e_x$ be a deterministic probability mass function (pmf) in $\mathcal{M}(\mathcal{X})$ that has a unit mass at x. In other words,
$$e_x(\tilde{x}) = \begin{cases} 1 & \text{if } \tilde{x} = x \\ 0 & \text{otherwise.} \end{cases}$$
Then, consider random pmfs $Q_1, Q_2 \in \{e_x : x \in \mathcal{X}\} \cup \{p_X\}$. Since the supports of $Q_1$ and $Q_2$ are finite, we can define the following conditional pmfs:
$$p_{Q_2|X}(q_2|x) = \begin{cases} \frac{H(X) - D_2}{H(X)} & \text{if } q_2 = e_x \\ \frac{D_2}{H(X)} & \text{if } q_2 = p_X \\ 0 & \text{otherwise,} \end{cases}$$
$$p_{Q_1|Q_2}(q_1|q_2) = \begin{cases} \frac{H(X) - D_1}{H(X) - D_2} & \text{if } q_1 = q_2 = e_x \text{ for some } x \\ \frac{D_1 - D_2}{H(X) - D_2} & \text{if } q_1 = p_X \text{ and } q_2 = e_x \text{ for some } x \\ 1 & \text{if } q_1 = q_2 = p_X \\ 0 & \text{otherwise.} \end{cases}$$
It is not hard to show that the above conditional pmfs satisfy (3) and (4).
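As a numerical sanity check (the source pmf and the distortion levels D_1, D_2 below are arbitrary illustrative values), the following sketch builds the joint distributions induced by these conditional pmfs and verifies (3) and (4).

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, float); p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(joint):
    return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.ravel())

# Q takes values in {e_x : x} U {p_X}; index x encodes e_x, index |X| encodes p_X.
p_X = np.array([0.5, 0.3, 0.2])
H = entropy(p_X)
D1, D2 = 0.6, 0.3                           # D2 < D1 < H(X)
n = len(p_X)

# joint p(x, q2): Q2 = e_x w.p. (H - D2)/H, Q2 = p_X w.p. D2/H
joint_XQ2 = np.zeros((n, n + 1))
for x in range(n):
    joint_XQ2[x, x] = p_X[x] * (H - D2) / H
    joint_XQ2[x, n] = p_X[x] * D2 / H

# p(q1 | q2) from the stated conditional pmf; Markov chain X - Q2 - Q1
P_Q1_given_Q2 = np.zeros((n + 1, n + 1))
for x in range(n):
    P_Q1_given_Q2[x, x] = (H - D1) / (H - D2)
    P_Q1_given_Q2[x, n] = (D1 - D2) / (H - D2)
P_Q1_given_Q2[n, n] = 1.0

joint_XQ1 = joint_XQ2 @ P_Q1_given_Q2       # marginalize out Q2

# reconstruction pmfs: rows 0..n-1 are the point masses e_x, row n is p_X
recon = np.vstack([np.eye(n), p_X])
def expected_log_loss(joint):
    return sum(joint[x, q] * -np.log(recon[q, x])
               for x in range(n) for q in range(n + 1) if joint[x, q] > 0)

print(mutual_information(joint_XQ2), H - D2)   # I(X;Q2) = H(X) - D2
print(expected_log_loss(joint_XQ2), D2)        # E[l(X,Q2)] = D2
print(mutual_information(joint_XQ1), H - D1)   # I(X;Q1) = H(X) - D1
print(expected_log_loss(joint_XQ1), D1)        # E[l(X,Q1)] = D1
```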

3. Successive Refinability

Main Results

Consider the successive refinement problem with a discrete memoryless source as described in Section 2.1. Specifically, we are interested in the case where the first decoder operates under logarithmic loss and the second decoder operates under some arbitrary distortion measure $d(\cdot, \cdot)$. We only make the following benign assumption: if $d(x, \hat{x}_1) = d(x, \hat{x}_2)$ for all x, then $\hat{x}_1 = \hat{x}_2$. This is not a severe restriction, since if $\hat{x}_1$ and $\hat{x}_2$ have the same distortion values for all x, then there is no reason to keep both reconstruction symbols.
The following theorem shows that any discrete memoryless source is successively refinable as long as the weaker link operates under logarithmic loss. This establishes a universal property of logarithmic loss in the context of successive refinement.
Theorem 2.
Let the source be an arbitrary discrete memoryless source. Suppose the distortion criterion of the first decoder is logarithmic loss, while that of the second decoder is an arbitrary distortion criterion $d: \mathcal{X} \times \hat{\mathcal{X}} \to [0, \infty]$. Then the source is successively refinable.
Proof. 
The source is successively refinable at $(D_1, D_2)$ if and only if there exists a Markov chain $X - \hat{X} - Q$ such that
$$I(X; Q) = R_1(D_1), \qquad \mathbb{E}[\ell(X, Q)] \le D_1,$$
$$I(X; \hat{X}) = R_2(D_2), \qquad \mathbb{E}[d(X, \hat{X})] \le D_2.$$
Let $p_{\hat{X}|X}$ be the conditional distribution for the second decoder that achieves the informational rate-distortion function $R_2(D_2)$, i.e.,
$$I(X; \hat{X}) = R_2(D_2), \qquad \mathbb{E}[d(X, \hat{X})] = D_2.$$
Since the weaker link is under logarithmic loss, we have $R_1(D_1) \le R_2(D_2)$. This implies that $H(X) - D_1 \le H(X) - H(X|\hat{X})$. Thus, we can assume $H(X|\hat{X}) \le D_1$ throughout the proof. For simplicity, we further make the benign assumption that there is no $\hat{x}$ such that $p_X(x) = p_{X|\hat{X}}(x|\hat{x})$ for all x. (See Remark 1 for the case where such an $\hat{x}$ exists.)
Without loss of generality, suppose $\hat{\mathcal{X}} = \{0, 1, \dots, s-1\}$. Consider a random variable $Y \in \mathcal{Y} = \{0, 1, \dots, s\}$ with the following pmf for some $0 \le \epsilon \le 1$:
$$p_Y(y) = \begin{cases} (1 - \epsilon)\, p_{\hat{X}}(y) & \text{if } y \le s-1 \\ \epsilon & \text{if } y = s. \end{cases}$$
The conditional distribution is given by
$$p_{\hat{X}|Y}(\hat{x}|y) = \begin{cases} 1 & \text{if } \hat{x} = y \le s-1 \\ 0 & \text{if } \hat{x} \ne y,\ y \le s-1 \\ p_{\hat{X}}(\hat{x}) & \text{if } y = s. \end{cases}$$
The joint distribution of $X, \hat{X}, Y$ is given by
$$p_{X, \hat{X}, Y}(x, \hat{x}, y) = p_{X, \hat{X}}(x, \hat{x})\, p_{Y|\hat{X}}(y|\hat{x}).$$
It is clear that $H(X|Y) = H(X|\hat{X})$ if $\epsilon = 0$ and $H(X|Y) = H(X)$ if $\epsilon = 1$. Since $H(X|\hat{X}) \le D_1$, there exists $0 \le \epsilon \le 1$ such that $H(X|Y) = D_1$.
We are now ready to define the Markov chain. Let $Q = p_{X|Y}(\cdot|Y)$ and $q(y) = p_{X|Y}(\cdot|y)$ for all $y \in \mathcal{Y}$. The following lemma implies that there is a one-to-one mapping between $q(y)$ and y.
Lemma 2.
If $p_{X|Y}(x|y_1) = p_{X|Y}(x|y_2)$ for all $x \in \mathcal{X}$, then $y_1 = y_2$.
The proof of the lemma is given in Appendix A. Since $Q = p_{X|Y}(\cdot|Y)$ is in one-to-one correspondence with Y, we have
$$I(X; Q) = I(X; Y) = H(X) - D_1 = R_1(D_1).$$
Also, we have
$$\mathbb{E}[\ell(X, Q)] = \mathbb{E}\left[\log \frac{1}{p_{X|Y}(X|Y)}\right] = H(X|Y) = D_1.$$
Furthermore, $X - \hat{X} - Q$ forms a Markov chain since $X - \hat{X} - Y$ forms a Markov chain. This concludes the proof. □
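The construction in the proof can be traced numerically. The sketch below uses an arbitrary illustrative joint pmf p(x, x̂) and an arbitrary D_1 with H(X|X̂) ≤ D_1 ≤ H(X); it finds ϵ by bisection (the proof only asserts existence, so the search is our own device) and checks that H(X|Y) = D_1, hence I(X;Y) = H(X) − D_1.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, float); p = p[p > 0]
    return -np.sum(p * np.log(p))

def cond_entropy(joint):                    # H(X|Y) from a joint p(x, y)
    return entropy(joint.ravel()) - entropy(joint.sum(0))

p_XXhat = np.array([[0.30, 0.10],
                    [0.05, 0.35],
                    [0.05, 0.15]])          # rows: x, columns: xhat (illustrative)
p_X = p_XXhat.sum(1)
s = p_XXhat.shape[1]
D1 = 0.9                                    # between H(X|Xhat) and H(X) for this example

def joint_XY(eps):
    # Y = Xhat with probability 1 - eps, and Y = s (a fresh symbol) with probability eps
    j = np.zeros((len(p_X), s + 1))
    j[:, :s] = (1 - eps) * p_XXhat
    j[:, s] = eps * p_X
    return j

# bisection: H(X|Y) increases from H(X|Xhat) at eps = 0 to H(X) at eps = 1
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if cond_entropy(joint_XY(mid)) < D1 else (lo, mid)
eps = (lo + hi) / 2

j = joint_XY(eps)
print(cond_entropy(j), D1)                                  # H(X|Y) = D1
print(entropy(p_X) - cond_entropy(j), entropy(p_X) - D1)    # I(X;Y) = H(X) - D1
```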
The key idea of the theorem is that (1) is the only, and rather loose, condition required of the rate-distortion function achieving conditional distribution. Thus, for any distortion criterion in the second stage, we are able to choose an appropriate distribution $p_{X, \hat{X}, Q}$ that satisfies both (1) and the condition for successive refinability.
Remark 1.
The assumption $p_{X|\hat{X}}(\cdot|\hat{x}) \ne p_X(\cdot)$ for all $\hat{x}$ is not necessary. Appendix B shows another joint distribution $p_{X, \hat{X}, Y}$ that satisfies the conditions for successive refinability when the above assumption does not hold.
The distribution in the above proof is one simple example with a single parameter ϵ, but we can always find other distributions that satisfy the condition for successive refinability. In the next section, we propose a totally different distribution that induces a Markov chain $X - \hat{X} - Y$ with $H(X|Y) = D_1$. This implies that the above proof does not rely on the assumption.
Remark 2.
In the proof, we used a random variable Y to define $Q = p_{X|Y}(\cdot|Y)$. On the other hand, if the joint distribution $p_{X, \hat{X}, Q}$ satisfies the conditions of successive refinability, there exists a random variable Y such that $X - \hat{X} - Y$ forms a Markov chain and $Q = p_{X|Y}(\cdot|Y)$. This is simply because we can set $Y = Q$, which implies $p_{X|Y}(\cdot|Y) = p_{X|Q}(\cdot|Q) = Q$.
Theorem 2 can be generalized to the successive refinement problem with K intermediate decoders. Consider random variables $Y_k \in \mathcal{Y}$ for $1 \le k \le K$ such that $X - \hat{X} - Y_1 - \cdots - Y_K$ forms a Markov chain and the joint distribution of $X, \hat{X}, Y_1, \dots, Y_K$ is given by
$$p_{X, \hat{X}, Y_1, \dots, Y_K}(x, \hat{x}, y_1, \dots, y_K) = p_{X, \hat{X}}(x, \hat{x})\, p_{Y_1|\hat{X}}(y_1|\hat{x}) \prod_{k=1}^{K-1} p_{Y_{k+1}|Y_k}(y_{k+1}|y_k),$$
where $H(X|Y_k) = D_k$. Similar to the proof of Theorem 2, we can show that $Q_k = p_{X|Y_k}(\cdot|Y_k)$ for all $1 \le k \le K$ satisfy the condition for successive refinability (where the posterior distributions $p_{X|Y_k}(\cdot|y_k)$ should be distinct for all $y_k \in \mathcal{Y}$ to guarantee a one-to-one correspondence). Thus, we can conclude that any discrete memoryless source with K intermediate decoders is successively refinable as long as all the intermediate decoders operate under logarithmic loss.

4. Toward Lossy Compression with Low Complexity

As we mentioned in Remark 1, the choice of the joint distribution $p_{X, \hat{X}, Q}$ in the proof of Theorem 2 is not unique. In this section, we propose another joint distribution $p_{X, \hat{X}, Q}$ that satisfies the conditions for successive refinability. It naturally suggests a new lossy compression algorithm, which we will discuss in Section 4.3.

4.1. Rate-Distortion Achieving Joint Distribution: Small $D_1$

Recall that $H(X|\hat{X}) \le D_1$. We first consider the case where $D_1$ is not too large, i.e., $D_1$ is close to $H(X|\hat{X})$; we will clarify this later. For simplicity, we further assume that $p_{\hat{X}}(0) \ge \cdots \ge p_{\hat{X}}(s-1)$. Consider a random variable $Z_\epsilon^{(s)} \in \hat{\mathcal{X}}$ with the following pmf for some $0 \le \epsilon \le (s-1) \min_{\hat{x}} p_{\hat{X}}(\hat{x})$:
$$p_{Z_\epsilon^{(s)}}(z) = \begin{cases} 1 - \epsilon & \text{if } z = 0 \\ \frac{\epsilon}{s-1} & \text{if } 1 \le z \le s-1. \end{cases} \qquad (5)$$
When it is clear from the context, we simply write $Z = Z_\epsilon^{(s)}$ for the sake of notation. We further define a random variable Y that is independent of Z such that $\hat{X} = Y \oplus_s Z$, where $\oplus_s$ denotes modulo-s addition. This can be achieved by the following pmf and conditional pmf:
$$p_Y(y) = \frac{p_{\hat{X}}(y) - \frac{\epsilon}{s-1}}{1 - \frac{s}{s-1}\epsilon}, \qquad p_{\hat{X}|Y}(\hat{x}|y) = \begin{cases} 1 - \epsilon & \text{if } \hat{x} = y \\ \frac{\epsilon}{s-1} & \text{if } \hat{x} \ne y. \end{cases}$$
If $\epsilon = 0$, we have $H(X|Y) = H(X|\hat{X})$. Also, it is clear that $H(X|Y)$ increases as ϵ increases. Since we assume that $D_1$ is not too large, there exists $0 \le \epsilon \le (s-1) \min_{\hat{x}} p_{\hat{X}}(\hat{x})$ such that $H(X|Y) = D_1$. We will discuss the case of general $D_1$ in Section 4.2. The joint distribution of $X, \hat{X}, Y$ is given by
$$p_{X, \hat{X}, Y}(x, \hat{x}, y) = p_{X, \hat{X}}(x, \hat{x})\, p_{Y|\hat{X}}(y|\hat{x}).$$
We are now ready to define the Markov chain. Let $Q = p_{X|Y}(\cdot|Y)$ and $q(y) = p_{X|Y}(\cdot|y)$ for all $y \in \mathcal{Y}$, where $\mathcal{Y} = \hat{\mathcal{X}} = \{0, 1, \dots, s-1\}$. For simplicity, we assume that $p_{X|Y}(\cdot|y_1)$ and $p_{X|Y}(\cdot|y_2)$ are distinct whenever $y_1 \ne y_2$. Since $Q = p_{X|Y}(\cdot|Y)$ is in one-to-one correspondence with Y, we have
$$I(X; Q) = I(X; Y) = H(X) - D_1 = R_1(D_1).$$
Also, we have
$$\mathbb{E}[\ell(X, Q)] = \mathbb{E}\left[\log \frac{1}{p_{X|Y}(X|Y)}\right] = H(X|Y) = D_1.$$
Furthermore, $X - \hat{X} - Q$ forms a Markov chain since $X - \hat{X} - Y$ forms a Markov chain. Thus, the above construction of the joint distribution $p_{X, \hat{X}, Q}$ satisfies the conditions for successive refinability.
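A small numerical sketch of this construction is given below (the pmf p_X̂ and the value of ϵ are arbitrary illustrations); it forms p_Y and the additive-noise channel and checks that the marginal of X̂ = Y ⊕_s Z matches p_X̂.

```python
import numpy as np

p_Xhat = np.array([0.4, 0.35, 0.25])        # illustrative reconstruction pmf
s = len(p_Xhat)
eps = 0.3                                   # must satisfy eps <= (s - 1) * min(p_Xhat)
assert eps <= (s - 1) * p_Xhat.min()

p_Z = np.full(s, eps / (s - 1)); p_Z[0] = 1 - eps
p_Y = (p_Xhat - eps / (s - 1)) / (1 - s * eps / (s - 1))

joint = np.zeros((s, s))                    # joint[y, xhat] induced by xhat = (y + z) mod s
for y in range(s):
    for z in range(s):
        joint[y, (y + z) % s] += p_Y[y] * p_Z[z]

print(p_Y, p_Y.sum())                       # a valid pmf for this eps
print(joint.sum(0), p_Xhat)                 # the Xhat-marginal is recovered
print(joint / p_Y[:, None])                 # p(xhat|y): 1-eps on the diagonal, eps/(s-1) off it
```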

4.2. Rate-Distortion Achieving Joint Distribution: General $D_1$

The joint distribution in the previous section only works for small $D_1$, because ϵ has a natural upper bound from (5), namely $\epsilon \le (s-1) \min_{\hat{x}} p_{\hat{X}}(\hat{x})$. In this section, we generalize the construction of the previous section to general $D_1$. The key observation is that if we pick the maximum $\epsilon = (s-1) \min_{\hat{x}} p_{\hat{X}}(\hat{x})$, then $p_Y(s-1) = 0$. This implies that we can restrict attention to the smaller reconstruction alphabet $\mathcal{Y} = \{0, 1, \dots, s-2\}$.
Let $U_s = \hat{X}$, and define random variables $\{U_k : 1 \le k \le s-1\}$ recursively. More precisely, we define the random variable $U_{k-1}$ based on $U_k$ for $2 \le k \le s$:
$$U_k = U_{k-1} \oplus_k Z_{\epsilon_k}^{(k)}, \qquad p_{Z_{\epsilon_k}^{(k)}}(z) = \begin{cases} 1 - \epsilon_k & \text{if } z = 0 \\ \frac{\epsilon_k}{k-1} & \text{if } 1 \le z \le k-1, \end{cases}$$
where
$$\epsilon_k = (k-1) \min_u p_{U_k}(u).$$
Similar to the definition of Y, we assume that $U_{k-1}$ and $Z_{\epsilon_k}^{(k)}$ are independent, and $\oplus_k$ denotes modulo-k addition. At each step, the alphabet size of $U_k$ decreases by one. Thus, we have $0 \le U_k \le k-1$, and therefore $U_1 = 0$ with probability 1. Furthermore, we have
$$H(X|U_s) \le H(X|U_{s-1}) \le \cdots \le H(X|U_1) = H(X).$$
For $H(X|\hat{X}) \le D_1 < H(X)$, there exists k such that $H(X|U_k) \le D_1 \le H(X|U_{k-1})$. Thus, there exists Y that satisfies $H(X|Y) = D_1$ and $U_k = Y \oplus_k Z_\epsilon^{(k)}$ for some $0 \le \epsilon \le \epsilon_k$. This implies that
$$\hat{X} = Z_{\epsilon_s}^{(s)} \oplus_s Z_{\epsilon_{s-1}}^{(s-1)} \oplus_{s-1} \cdots Z_{\epsilon_{k+1}}^{(k+1)} \oplus_{k+1} Z_\epsilon^{(k)} \oplus_k Y.$$
Similar to the previous section, we assume that $p_{X|Y}(\cdot|y_1) \ne p_{X|Y}(\cdot|y_2)$ whenever $y_1 \ne y_2$. Then, we can set $Q = p_{X|Y}(\cdot|Y)$, which satisfies the conditions for successive refinability.
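The recursion can be traced numerically. The sketch below uses an arbitrary joint pmf p(x, x̂) whose X̂-marginal is sorted in decreasing order (the ordering assumed in Section 4.1) and computes ϵ_k together with the chain of conditional entropies H(X|U_k) from k = s down to k = 1.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, float); p = p[p > 0]
    return -np.sum(p * np.log(p))

def cond_entropy(joint):                     # H(X|U) from a joint p(x, u)
    return entropy(joint.ravel()) - entropy(joint.sum(0))

p_XXhat = np.array([[0.30, 0.10, 0.05],
                    [0.15, 0.15, 0.05],
                    [0.05, 0.05, 0.10]])     # rows: x, columns: xhat = U_s (illustrative)
s = p_XXhat.shape[1]

joint = p_XXhat.copy()                       # p(x, u_k), starting at k = s
for k in range(s, 1, -1):
    p_Uk = joint.sum(0)
    eps_k = (k - 1) * p_Uk.min()
    p_Z = np.full(k, eps_k / (k - 1)); p_Z[0] = 1 - eps_k
    # invert U_k = U_{k-1} (+)_k Z: the minimizing symbol of p_Uk gets mass 0
    p_Ukm1 = (p_Uk - eps_k / (k - 1)) / (1 - k * eps_k / (k - 1))
    # backward channel p(u_{k-1} | u_k) via Bayes' rule
    back = np.array([[p_Ukm1[a] * p_Z[(b - a) % k] for b in range(k)]
                     for a in range(k)]) / p_Uk
    print(f"k={k}: eps_k={eps_k:.3f}, H(X|U_{k})={cond_entropy(joint):.4f}")
    joint = (joint @ back.T)[:, :k - 1]      # p(x, u_{k-1}); last symbol has zero mass

print(f"k=1: H(X|U_1)={cond_entropy(joint):.4f}, H(X)={entropy(p_XXhat.sum(1)):.4f}")
```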

4.3. Iterative Lossy Compression Algorithm

The joint distribution from the previous section naturally suggests a simple successive refinement scheme. Consider the lossy compression problem where the source is i.i.d. $p_X$ and the distortion measure is $d: \mathcal{X} \times \hat{\mathcal{X}} \to [0, \infty)$. Let D be the target distortion, and let $R > R(D)$ be the rate of the scheme, where $R(D)$ is the rate-distortion function. Let $p_{X, \hat{X}}$ be the rate-distortion achieving joint distribution.
For block length n, we propose a new lossy compression scheme that mimics successive refinement with $s-1$ decoders. Similar to the previous section, let $\epsilon_k = (k-1) \min_u p_{U_k}(u)$ and
$$\hat{X} = U_s = U_{s-1} \oplus_s Z_{\epsilon_s}^{(s)}, \quad U_{s-1} = U_{s-2} \oplus_{s-1} Z_{\epsilon_{s-1}}^{(s-1)}, \quad \dots, \quad U_2 = U_1 \oplus_2 Z_{\epsilon_2}^{(2)}.$$
We further let $R_{k-1} > I(X; U_k) - I(X; U_{k-1})$ for $2 \le k \le s$ such that $R = \sum_{k=2}^{s} R_{k-1}$. Now, we are ready to describe our coding scheme. Generate a sub-codebook $C_1 = \{z^n(1, m) : 1 \le m \le e^{nR_1}\}$ where each sequence is generated i.i.d. according to $p_{Z_{\epsilon_2}^{(2)}}$. Similarly, generate sub-codebooks $C_k = \{z^n(k, m) : 1 \le m \le e^{nR_k}\}$ for $2 \le k \le s-1$ where each sequence is generated i.i.d. according to $p_{Z_{\epsilon_{k+1}}^{(k+1)}}$.
Upon observing $x^n \in \mathcal{X}^n$, the encoder finds the index $m_1$ of the codeword in $C_1$ that minimizes $d_1(x^n, z^n(1, m_1))$, where the distortion measure $d_1(\cdot, \cdot)$ is defined as
$$d_1(x^n, z^n) = \frac{1}{n} \sum_{i=1}^{n} \log \frac{1}{p_{X|U_2}(x_i|z_i)}.$$
Note that $d_1(x, z)$ is simply the logarithmic loss between x and $p_{X|U_2}(\cdot|z)$.
Similarly, for $2 \le k \le s-1$, the encoder iteratively finds the index $m_k$ of the codeword in $C_k$ that minimizes $d_k\big(x^n, [z^n(1, m_1) \oplus_3 \cdots \oplus_k z^n(k-1, m_{k-1})] \oplus_{k+1} z^n(k, m_k)\big)$, where
$$d_k(x^n, z^n) = \frac{1}{n} \sum_{i=1}^{n} \log \frac{1}{p_{X|U_{k+1}}(x_i|z_i)}.$$
Upon receiving $m_1, m_2, \dots, m_{s-1}$, the decoder reconstructs
$$\hat{x}^n = [z^n(1, m_1) \oplus_3 z^n(2, m_2)] \oplus_4 \cdots \oplus_s z^n(s-1, m_{s-1}).$$
Suppose $R_1 \approx R_2 \approx \cdots \approx R_{s-1} \approx \frac{R}{s-1}$, and let $L = s-1$. Similar to [12,14], this scheme has two main advantages compared to the naive random coding scheme. First, the number of codewords in the proposed scheme is $L \cdot e^{nR/L}$, while the naive scheme requires $e^{nR}$ codewords. Second, in each iteration, the encoder finds the best codeword among $e^{nR/L}$ sub-codewords, so the overall encoding complexity is $L \cdot e^{nR/L}$ as well, whereas the naive scheme requires complexity $e^{nR}$.
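The encoder loop itself is short. The following sketch uses hypothetical toy parameters: the per-stage conditionals p_X_given_U and noise pmfs p_Z are made-up placeholders standing in for the quantities derived in Section 4.2 (they are not computed from a real rate-distortion solution), so the sketch only illustrates the codebook generation and the iterative minimization of d_k.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 12                                        # blocklength (tiny, for illustration)
s = 3                                         # reconstruction alphabet {0, ..., s-1}
rates = [0.4, 0.4]                            # R_1, ..., R_{s-1} (nats per symbol)

# placeholders for the chain: p_Z[k] is the pmf of Z^{(k+2)} over {0, ..., k+1},
# p_X_given_U[k][x, u] is p(x | U_{k+2} = u); both are made up for this demo
p_Z = [np.array([0.8, 0.2]), np.array([0.7, 0.2, 0.1])]
p_X_given_U = [np.array([[0.8, 0.3], [0.2, 0.7]]),
               np.array([[0.7, 0.2, 0.2], [0.3, 0.8, 0.8]])]

x_n = rng.integers(0, 2, size=n)              # source sequence over {0, 1}

codebooks = [rng.choice(len(p_Z[k]), size=(int(np.exp(n * rates[k])), n), p=p_Z[k])
             for k in range(s - 1)]

def d_k(x_n, u_n, k):                         # empirical log loss against p(x | U_{k+2} = u)
    return -np.mean(np.log(p_X_given_U[k][x_n, u_n]))

partial = np.zeros(n, dtype=int)              # running reconstruction (U_1 = 0)
messages = []
for k in range(s - 1):                        # stage k uses modulo-(k+2) addition
    cands = (partial[None, :] + codebooks[k]) % (k + 2)
    losses = [d_k(x_n, cand, k) for cand in cands]
    m_k = int(np.argmin(losses))
    messages.append(m_k)
    partial = cands[m_k]

x_hat_n = partial                             # final reconstruction over {0, ..., s-1}
print(messages, x_hat_n)
```

In a full implementation, the placeholders would come from the rate-distortion achieving distribution $p_{X, \hat{X}}$ (computed, for instance, by the Blahut–Arimoto algorithm) passed through the recursion of Section 4.2.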
Remark 3.
The proposed scheme constructs $\hat{X}^n$ starting from a binary sequence. The reconstruction after each stage can be viewed as
$$u_k^n(m_1, \dots, m_{k-1}) = z^n(1, m_1) \oplus_3 z^n(2, m_2) \oplus_4 \cdots \oplus_k z^n(k-1, m_{k-1}),$$
where $0 \le u_k \le k-1$. Thus, the decoder starts from the binary sequence $u_2^n(m_1)$, and the alphabet size increases by one at each iteration. After the $(s-1)$-th iteration, it reaches the final reconstruction $\hat{X}^n$, whose alphabet size is s.

5. Conclusions

To conclude our discussion, we summarize our main contributions. In the context of the successive refinement problem, we showed another universal property of logarithmic loss: any discrete memoryless source is successively refinable as long as the intermediate decoders operate under logarithmic loss. We applied this result to the point-to-point lossy compression problem and proposed a lossy compression scheme with lower complexity.

Funding

This work was supported by 2017 Hongik University Research Fund.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Proof of Lemma 2

For $y \le s-1$,
$$p_{X|Y}(x|y) = \sum_{\hat{x}} p_{X|\hat{X}}(x|\hat{x})\, p_{\hat{X}|Y}(\hat{x}|y) = p_{X|\hat{X}}(x|y).$$
On the other hand, for $y = s$,
$$p_{X|Y}(x|y) = \sum_{\hat{x}} p_{X|\hat{X}}(x|\hat{x})\, p_{\hat{X}|Y}(\hat{x}|y) = \sum_{\hat{x}} p_{X|\hat{X}}(x|\hat{x})\, p_{\hat{X}}(\hat{x}) = p_X(x).$$
Let $j_X(x, D)$ be the d-tilted information [15]:
$$j_X(x, D) = \log \frac{1}{\mathbb{E}\left[\exp\{\lambda^* D - \lambda^* d(x, \hat{X})\}\right]},$$
where $\lambda^* = -R'(D)$ and the expectation is with respect to the marginal distribution $p_{\hat{X}}$. Csiszár [16] showed that for $p_{\hat{X}}$-almost every $\hat{x}$,
$$j_X(x, D) = \log \frac{p_{X, \hat{X}}(x, \hat{x})}{p_X(x)\, p_{\hat{X}}(\hat{x})} + \lambda^* d(x, \hat{x}) - \lambda^* D = \log \frac{p_{X|\hat{X}}(x|\hat{x})}{p_X(x)} + \lambda^* d(x, \hat{x}) - \lambda^* D.$$
If $p_{X|\hat{X}}(x|\hat{x}_1) = p_{X|\hat{X}}(x|\hat{x}_2)$ for all x, this implies that $d(x, \hat{x}_1) = d(x, \hat{x}_2)$ for all x, which contradicts our assumption. On the other hand, if $p_{X|\hat{X}}(x|\hat{x}_1) = p_X(x)$ for all x, it also contradicts our assumption. Thus, the distributions $p_{X|Y}(\cdot|y)$ are distinct for all $0 \le y \le s$.

Appendix B. Proof of the Special Case of Theorem 2

Similar to the main proof of Theorem 2, we assume $\hat{\mathcal{X}} = \mathcal{Y} = \{0, 1, \dots, s-1\}$. Suppose there exists $\hat{x}$ such that $p_{X|\hat{X}}(x|\hat{x}) = p_X(x)$ for all x. Without loss of generality, we assume $\hat{x} = 0$, i.e., $p_{X|\hat{X}}(x|0) = p_X(x)$ for all x.
Consider a random variable $Y \in \mathcal{Y}$ with the following conditional pmf for some $0 \le \epsilon \le 1$:
$$p_{Y|\hat{X}}(y|\hat{x}) = \begin{cases} 1 & \text{if } \hat{x} = y = 0 \\ \epsilon & \text{if } \hat{x} \ne 0 \text{ and } y = 0 \\ 1 - \epsilon & \text{if } \hat{x} = y \ne 0 \\ 0 & \text{otherwise.} \end{cases}$$
It is clear that $H(X|Y) = H(X|\hat{X})$ if $\epsilon = 0$ and $H(X|Y) = H(X)$ if $\epsilon = 1$. Since $H(X|\hat{X}) \le D_1$, there exists $0 \le \epsilon \le 1$ such that $H(X|Y) = D_1$. We again let $Q = p_{X|Y}(\cdot|Y)$ and $q(y) = p_{X|Y}(\cdot|y)$ for all $y \in \mathcal{Y}$. The following lemma establishes the one-to-one correspondence between $q(y)$ and y.
Lemma A1.
If $p_{X|Y}(x|y_1) = p_{X|Y}(x|y_2)$ for all $x \in \mathcal{X}$, then $y_1 = y_2$.
Proof. 
If $y = 0$, the conditional distribution $p_{\hat{X}|Y}(\hat{x}|y)$ is given by
$$p_{\hat{X}|Y}(\hat{x}|0) = \begin{cases} \frac{p_{\hat{X}}(0)}{(1-\epsilon)\, p_{\hat{X}}(0) + \epsilon} & \text{if } \hat{x} = 0 \\ \frac{\epsilon\, p_{\hat{X}}(\hat{x})}{(1-\epsilon)\, p_{\hat{X}}(0) + \epsilon} & \text{if } \hat{x} \ne 0. \end{cases}$$
Then,
$$p_{X|Y}(x|0) = \sum_{\hat{x}} p_{X|\hat{X}}(x|\hat{x})\, p_{\hat{X}|Y}(\hat{x}|0) = p_{X|\hat{X}}(x|0)\, \frac{p_{\hat{X}}(0)}{(1-\epsilon)\, p_{\hat{X}}(0) + \epsilon} + \sum_{\hat{x} \ne 0} p_{X|\hat{X}}(x|\hat{x})\, \frac{\epsilon\, p_{\hat{X}}(\hat{x})}{(1-\epsilon)\, p_{\hat{X}}(0) + \epsilon} = p_{X|\hat{X}}(x|0)\, \frac{(1-\epsilon)\, p_{\hat{X}}(0)}{(1-\epsilon)\, p_{\hat{X}}(0) + \epsilon} + p_X(x)\, \frac{\epsilon}{(1-\epsilon)\, p_{\hat{X}}(0) + \epsilon} = p_{X|\hat{X}}(x|0),$$
where the last equality holds because $p_{X|\hat{X}}(x|0) = p_X(x)$ for all x. In other words, $p_{X|Y}(x|0) = p_{X|\hat{X}}(x|0)$.
On the other hand, if $y \ne 0$, the conditional distribution $p_{\hat{X}|Y}(\hat{x}|y)$ is given by
$$p_{\hat{X}|Y}(\hat{x}|y) = \begin{cases} 1 & \text{if } \hat{x} = y \\ 0 & \text{otherwise.} \end{cases}$$
Then,
$$p_{X|Y}(x|y) = \sum_{\hat{x}} p_{X|\hat{X}}(x|\hat{x})\, p_{\hat{X}|Y}(\hat{x}|y) = p_{X|\hat{X}}(x|y).$$
As we have seen in Appendix A, $p_{X|\hat{X}}(\cdot|\hat{x}_1)$ cannot equal $p_{X|\hat{X}}(\cdot|\hat{x}_2)$ if $\hat{x}_1 \ne \hat{x}_2$. Since $p_{X|Y}(x|y) = p_{X|\hat{X}}(x|y)$ for all x, we conclude that $p_{X|Y}(x|y_1) = p_{X|Y}(x|y_2)$ for all x implies $y_1 = y_2$. □
The remaining part of the proof is exactly the same as the main proof.

References

  1. Courtade, T.A.; Wesel, R.D. Multiterminal source coding with an entropy-based distortion measure. In Proceedings of the 2011 IEEE International Symposium on Information Theory Proceedings, St. Petersburg, Russia, 31 July–5 August 2011; pp. 2040–2044. [Google Scholar]
  2. Courtade, T.; Weissman, T. Multiterminal Source Coding Under Logarithmic Loss. IEEE Trans. Inf. Theory 2014, 60, 740–761. [Google Scholar] [CrossRef]
  3. Shkel, Y.Y.; Verdú, S. A single-shot approach to lossy source coding under logarithmic loss. IEEE Trans. Inf. Theory 2018, 64, 129–147. [Google Scholar] [CrossRef]
  4. Tishby, N.; Pereira, F.; Bialek, W. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 22–24 September 1999; pp. 368–377. [Google Scholar]
  5. Harremoës, P.; Tishby, N. The information bottleneck revisited or how to choose a good distortion measure. In Proceedings of the 2007 IEEE International Symposium on Information Theory, Nice, France, 24–29 June 2007; pp. 566–570. [Google Scholar]
  6. Gilad-Bachrach, R.; Navot, A.; Tishby, N. An information theoretic tradeoff between complexity and accuracy. In Learning Theory and Kernel Machines; Springer: Berlin/Heidelberg, Germany, 2003; pp. 595–609. [Google Scholar]
  7. Equitz, W.H.; Cover, T.M. Successive refinement of information. IEEE Trans. Inf. Theory 1991, 37, 269–275. [Google Scholar] [CrossRef]
  8. Koshelev, V. Hierarchical coding of discrete sources. Probl. Peredachi Inf. 1980, 16, 31–49. [Google Scholar]
  9. Gerrish, A.M. Estimation of Information Rates. Ph.D. Thesis, Yale University, New Haven, CT, USA, 1963. [Google Scholar]
  10. Chow, J.; Berger, T. Failure of successive refinement for symmetric Gaussian mixtures. IEEE Trans. Inf. Theory 1997, 43, 350–352. [Google Scholar] [CrossRef]
  11. Lastras, L.; Berger, T. All sources are nearly successively refinable. IEEE Trans. Inf. Theory 2001, 47, 918–926. [Google Scholar] [CrossRef]
  12. Venkataramanan, R.; Sarkar, T.; Tatikonda, S. Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding. IEEE Trans. Inf. Theory 2014, 60, 3265–3278. [Google Scholar] [CrossRef]
  13. No, A.; Weissman, T. Rateless lossy compression via the extremes. IEEE Trans. Inf. Theory 2016, 62, 5484–5495. [Google Scholar] [CrossRef] [PubMed]
  14. No, A.; Ingber, A.; Weissman, T. Strong successive refinability and rate-distortion-complexity tradeoff. IEEE Trans. Inf. Theory 2016, 62, 3618–3635. [Google Scholar] [CrossRef]
  15. Kostina, V.; Verdú, S. Fixed-length lossy compression in the finite blocklength regime. IEEE Trans. Inf. Theory 2012, 58, 3309–3338. [Google Scholar] [CrossRef]
  16. Csiszár, I. On an extremum problem of information theory. Stud. Sci. Math. Hung. 1974, 9, 57–71. [Google Scholar]
Figure 1. Successive Refinement.
