Article

On Two-Stage Guessing

1 Signal and Information Processing Laboratory, ETH Zurich, 8092 Zurich, Switzerland
2 Andrew and Erna Viterbi Faculty of Electrical and Computer Engineering, Technion–Israel Institute of Technology, Haifa 3200003, Israel
* Author to whom correspondence should be addressed.
Information 2021, 12(4), 159; https://doi.org/10.3390/info12040159
Submission received: 26 February 2021 / Revised: 2 April 2021 / Accepted: 6 April 2021 / Published: 9 April 2021
(This article belongs to the Special Issue Statistical Communication and Information Theory)

Abstract

Stationary memoryless sources produce two correlated random sequences $X^n$ and $Y^n$. A guesser seeks to recover $X^n$ in two stages, by first guessing $Y^n$ and then $X^n$. The contributions of this work are twofold: (1) We characterize the least achievable exponential growth rate (in $n$) of any positive $\rho$-th moment of the total number of guesses when $Y^n$ is obtained by applying a deterministic function $f$ component-wise to $X^n$. We prove that, depending on $f$, the least exponential growth rate in the two-stage setup is lower than when guessing $X^n$ directly. We further propose a simple Huffman code-based construction of a function $f$ that is a viable candidate for the minimization of the least exponential growth rate in the two-stage guessing setup. (2) We characterize the least achievable exponential growth rate of the $\rho$-th moment of the total number of guesses required to recover $X^n$ when Stage 1 need not end with a correct guess of $Y^n$ and without assumptions on the stationary memoryless sources producing $X^n$ and $Y^n$.

1. Introduction

Pioneered by Massey [1], McEliece and Yu [2], and Arikan [3], the guessing problem is concerned with recovering the realization of a finite-valued random variable $X$ using a sequence of yes-no questions of the form “Is $X = x_1$?”, “Is $X = x_2$?”, etc., until correct. A commonly used performance metric for this problem is the $\rho$-th moment of the number of guesses until $X$ is revealed (where $\rho$ is a positive parameter).
When guessing a length-$n$ i.i.d. sequence $X^n$ (a tuple of $n$ components that are drawn independently according to the law of $X$), the $\rho$-th moment of the number of guesses required to recover the realization of $X^n$ grows exponentially with $n$, and the exponential growth rate is referred to as the guessing exponent. The least achievable guessing exponent was derived by Arikan [3], and it equals the order-$\frac{1}{1+\rho}$ Rényi entropy of $X$. Arikan's result is based on the optimal deterministic guessing strategy, which proceeds in descending order of the probability mass function (PMF) of $X^n$.
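To make Arikan's single-stage result concrete, here is a minimal Python sketch (the PMF, blocklengths, and $\rho$ below are arbitrary choices for illustration, not taken from the paper) that guesses $X^n$ in descending order of the product PMF, computes the $\rho$-th moment of the number of guesses, and compares its empirical growth rate with the exponent $\rho H_{\frac{1}{1+\rho}}(X)$.

```python
import itertools
import numpy as np

def renyi_entropy(p, alpha):
    """Order-alpha Renyi entropy in nats (alpha > 0, alpha != 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def guessing_moment(p, n, rho):
    """rho-th moment of the number of guesses for X^n, i.i.d. ~ p,
    when guessing in descending order of the product PMF (Arikan's strategy)."""
    probs = sorted((np.prod(seq) for seq in itertools.product(p, repeat=n)),
                   reverse=True)
    return sum(prob * (k + 1) ** rho for k, prob in enumerate(probs))

p = [0.5, 0.25, 0.15, 0.1]   # example PMF (an assumption, not from the paper)
rho = 2.0
exponent = rho * renyi_entropy(p, 1.0 / (1.0 + rho))
for n in (2, 4, 6, 8):
    rate = np.log(guessing_moment(p, n, rho)) / n
    print(f"n={n}: empirical rate {rate:.4f}  vs  rho*H_(1/(1+rho)) = {exponent:.4f}")
```

For short blocklengths the empirical rate approaches the Rényi-entropy exponent from below, in line with the non-asymptotic bounds recalled later in Section 3.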
In this paper, we propose and analyze a two-stage guessing strategy to recover the realization of an i.i.d. sequence $X^n$. In Stage 1, the guesser is allowed to produce guesses of an ancillary sequence $Y^n$ that is jointly i.i.d. with $X^n$. In Stage 2, the guesser must recover $X^n$. We show the following:
1.
When $Y^n$ is generated by component-wise application of a mapping $f\colon \mathcal{X} \to \mathcal{Y}$ to $X^n$ and the guesser is required to recover $Y^n$ in Stage 1 before proceeding to Stage 2, then the least achievable guessing exponent (i.e., the exponential growth rate of the $\rho$-th moment of the total number of guesses in the two stages) equals
$$ \rho \, \max\Big\{ H_{\frac{1}{1+\rho}}\big(f(X)\big),\; H_{\frac{1}{1+\rho}}\big(X \mid f(X)\big) \Big\}, \tag{1} $$
where the maximum is between the order-$\frac{1}{1+\rho}$ Rényi entropy of $f(X)$ and the conditional Arimoto–Rényi entropy of $X$ given $f(X)$. We derive (1) in Section 3 and summarize our analysis in Theorem 1. We also propose a Huffman code-based construction of a function $f$ that is a viable candidate for the minimization of (1) among all maps from $\mathcal{X}$ to $\mathcal{Y}$ (see Algorithm 2 and Theorem 2).
2.
When $X^n$ and $Y^n$ are jointly i.i.d. according to the PMF $P_{XY}$ and Stage 1 need not end with a correct guess of $Y^n$ (i.e., the guesser may proceed to Stage 2 even if $Y^n$ remains unknown), then the least achievable guessing exponent equals
$$ \sup_{Q_{XY}} \Big[ \rho \, \min\Big\{ H(Q_X),\; \max\big\{ H(Q_Y),\, H(Q_{X|Y}) \big\} \Big\} - D(Q_{XY} \,\|\, P_{XY}) \Big], \tag{2} $$
where the supremum is over all PMFs $Q_{XY}$ defined on the same set as $P_{XY}$, and $H(\cdot)$ and $D(\cdot\,\|\,\cdot)$ denote, respectively, the (conditional) Shannon entropy and the Kullback–Leibler divergence. We derive (2) in Section 4 and summarize our analysis in Theorem 3. Parts of Section 4 were presented in the conference paper [4].
Our interest in the two-stage guessing problem is due to its relation to information measures: Analogous to how the Rényi entropy can be defined operationally via guesswork, as opposed to its axiomatic definition, we view (1) and (2) as quantities that capture at what cost and to what extent knowledge of Y helps in recovering X. For example, minimizing (1) over descriptions f ( X ) of X can be seen as isolating the most beneficial information of X in the sense that describing it in any more detail is too costly (the first term of the maximization in (1) exceeds the second), whereas a coarser description leaves too much uncertainty (the second term exceeds the first). Similarly, but with the joint law of ( X , Y ) fixed, (2) quantifies the least (partial) information of Y that benefits recovering X (because an optimal guessing strategy will proceed to Stage 2 when guessing Y no longer benefits guessing X). Note that while (1) and (2) are derived in this paper, studying their information-like properties is a subject of future research (see Section 5).
Besides its theoretic implications, the guessing problem is also applied practically in communications and cryptography. This includes sequential decoding [5,6], and measuring password strength [7], confidentiality of communication channels [8], and resilience against brute-force attacks [9]. It is also strongly related to task encoding and (lossless and lossy) compression (see, e.g., [10,11,12,13,14,15]).
Variations of the guessing problem include guessing under source uncertainty [16], distributed guessing [17,18], and guessing on the Gray–Wyner and the Slepian–Wolf network [19].

2. Preliminaries

We begin with some notation and preliminary material that are essential for the presentation in Section 3 ahead. The analysis in Section 4 relies on the method of types (see, e.g., Chapter 11 in [20]).
Throughout the paper, we use the following notation:
  • For $m, n \in \mathbb{N}$ with $m < n$, let $[m:n] := \{m, \ldots, n\}$;
  • Let $P$ be a PMF that is defined on a finite set $\mathcal{X}$. For $k \in [1:|\mathcal{X}|]$, let $G_P(k)$ denote the sum of its $k$ largest point masses, and let $p_{\max} := G_P(1)$. For $n \in \mathbb{N}$, denote by $\mathcal{P}_n$ the set of all PMFs defined on $[1:n]$.
The next definitions and properties are related to majorization and Rényi measures.
Definition 1
(Majorization). Consider PMFs $P$ and $Q$, defined on the same (finite or countably infinite) set $\mathcal{X}$. We say that $Q$ majorizes $P$, denoted $P \prec Q$, if $G_P(k) \leq G_Q(k)$ for all $k \in [1:|\mathcal{X}|]$. If $P$ and $Q$ are defined on finite sets of different cardinalities, then the PMF defined on the smaller set is zero-padded to match the cardinality of the larger set.
By Definition 1, a unit mass majorizes any other distribution; on the other hand, the uniform distribution (on a finite set) is majorized by any other distribution of equal support.
Definition 2
(Schur-convexity/concavity). A function $f\colon \mathcal{P}_n \to \mathbb{R}$ is Schur-convex if, for every $P, Q \in \mathcal{P}_n$ with $P \prec Q$, we have $f(P) \leq f(Q)$. Likewise, $f$ is Schur-concave if $-f$ is Schur-convex, i.e., if $P \prec Q$ implies that $f(P) \geq f(Q)$.
Definition 3
(Rényi entropy [21]). Let $X$ be a random variable taking values on a finite or countably infinite set $\mathcal{X}$ according to the PMF $P_X$. The order-$\alpha$ Rényi entropy $H_\alpha(X)$ of $X$ is given by
$$ H_\alpha(X) := \frac{1}{1-\alpha} \log \sum_{x \in \mathcal{X}} P_X^\alpha(x), \qquad \alpha \in (0,1) \cup (1,\infty), \tag{3} $$
where, unless explicitly given, the base of $\log(\cdot)$ can be chosen arbitrarily, with $\exp(\cdot)$ denoting its inverse function. Via continuous extension,
$$ H_0(X) := \log \big| \{ x \in \mathcal{X} : P_X(x) > 0 \} \big|, \tag{4} $$
$$ H_1(X) := H(X) = -\sum_{x \in \mathcal{X}} P_X(x) \log P_X(x), \tag{5} $$
$$ H_\infty(X) := \log \frac{1}{p_{\max}}, \tag{6} $$
where $H(X)$ is the (Shannon) entropy of $X$.
Proposition 1
(Schur-concavity of the Rényi entropy, Appendix F.3.a of [22]). The Rényi entropy of any order α > 0 is Schur-concave (in particular, the Shannon entropy is Schur-concave).
Definition 4
(Arimoto–Rényi conditional entropy [23]). Let $(X, Y)$ be a pair of random variables taking values on a product set $\mathcal{X} \times \mathcal{Y}$ according to the PMF $P_{XY}$. When $\mathcal{X}$ is finite or countably infinite, the order-$\alpha$ Arimoto–Rényi conditional entropy $H_\alpha(X|Y)$ of $X$ given $Y$ is defined as follows:
  • If $\alpha \in (0,1) \cup (1,\infty)$,
$$ H_\alpha(X|Y) := \frac{\alpha}{1-\alpha} \log \mathbb{E}\Bigg[ \bigg( \sum_{x \in \mathcal{X}} P_{X|Y}^\alpha(x|Y) \bigg)^{\frac{1}{\alpha}} \Bigg]. \tag{7} $$
    When $\mathcal{Y}$ is finite, (7) can be simplified as follows:
$$ H_\alpha(X|Y) = \frac{\alpha}{1-\alpha} \log \sum_{y \in \mathcal{Y}} \bigg( \sum_{x \in \mathcal{X}} P_{XY}^\alpha(x,y) \bigg)^{\frac{1}{\alpha}} \tag{8} $$
$$ \phantom{H_\alpha(X|Y)} = \frac{\alpha}{1-\alpha} \log \sum_{y \in \mathcal{Y}} P_Y(y) \exp\Big( \tfrac{1-\alpha}{\alpha}\, H_\alpha(X \mid Y=y) \Big). \tag{9} $$
  • If $\alpha \in \{0, 1, \infty\}$ and $\mathcal{Y}$ is finite, then, via continuous extension,
$$ H_0(X|Y) = \log \max_{y \in \mathcal{Y}} \big| \mathrm{supp}\, P_{X|Y}(\cdot|y) \big| = \max_{y \in \mathcal{Y}} H_0(X \mid Y=y), \tag{10} $$
$$ H_1(X|Y) = H(X|Y), \tag{11} $$
$$ H_\infty(X|Y) = \log \frac{1}{\sum_{y \in \mathcal{Y}} P_Y(y) \max_{x \in \mathcal{X}} P_{X|Y}(x|y)}. \tag{12} $$
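As a small computational companion to Definitions 3 and 4 (a sketch under the assumption of a finite alphabet, with a toy joint PMF chosen here for illustration), the following Python snippet evaluates the order-$\alpha$ Rényi entropy and the Arimoto–Rényi conditional entropy via the finite-alphabet formula (8).

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Order-alpha Renyi entropy (natural log); alpha > 0, alpha != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def arimoto_renyi_cond_entropy(p_xy, alpha):
    """Order-alpha Arimoto-Renyi conditional entropy H_alpha(X|Y) via (8),
    for a joint PMF given as a 2-D array indexed by (x, y); alpha > 0, alpha != 1."""
    p_xy = np.asarray(p_xy, dtype=float)
    inner = np.sum(p_xy ** alpha, axis=0) ** (1.0 / alpha)  # sum over x, for each y
    return (alpha / (1.0 - alpha)) * np.log(np.sum(inner))

# toy joint PMF (an illustrative assumption, not taken from the paper)
P_XY = np.array([[0.30, 0.10],
                 [0.10, 0.20],
                 [0.05, 0.25]])
alpha = 1.0 / (1.0 + 2.0)   # order 1/(1+rho) with rho = 2
print(arimoto_renyi_cond_entropy(P_XY, alpha))
print(renyi_entropy(P_XY.sum(axis=1), alpha))  # H_alpha(X) for comparison
```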
The properties of the Arimoto–Rényi conditional entropy were studied in [24,25].
Finally, the set $\mathcal{F}_{n,m}$ with $n, m \in \mathbb{N}$ denotes the set of all deterministic functions $f\colon [1:n] \to [1:m]$. If $m < n$, then a function $f \in \mathcal{F}_{n,m}$ is not one-to-one (i.e., it is a non-injective function).

3. Two-Stage Guessing: $Y_i = f(X_i)$

Let $X^n := (X_1, \ldots, X_n)$ be a sequence of i.i.d. random variables taking values on a finite set $\mathcal{X}$. Assume without loss of generality that $\mathcal{X} = [1:|\mathcal{X}|]$. Let $m \in [2:|\mathcal{X}|-1]$, $\mathcal{Y} := [1:m]$, and let $f\colon \mathcal{X} \to \mathcal{Y}$ be a fixed deterministic function (i.e., $f \in \mathcal{F}_{|\mathcal{X}|,m}$). Consider guessing $X^n \in \mathcal{X}^n$ in two stages as follows (Algorithm 1).
Algorithm 1 (two-stage guessing algorithm):
(a)
Stage 1: $Y^n := (f(X_1), \ldots, f(X_n)) \in \mathcal{Y}^n$ is guessed by asking questions of the form:
  • “Is $Y^n = \hat{y}_1$?”, “Is $Y^n = \hat{y}_2$?”, …
until correct. Note that since $|\mathcal{Y}| = m < |\mathcal{X}|$, this stage cannot reveal $X^n$.
(b)
Stage 2: Based on $Y^n$, the sequence $X^n \in \mathcal{X}^n$ is guessed by asking questions of the form:
  • “Is $X^n = \hat{x}_1$?”, “Is $X^n = \hat{x}_2$?”, …
until correct. If $Y^n = y^n$, the guesses $\hat{x}_k := (\hat{x}_{k,1}, \ldots, \hat{x}_{k,n})$ are restricted to $\mathcal{X}$-sequences that satisfy $f(\hat{x}_{k,i}) = y_i$ for all $i \in [1:n]$.
The guesses $\hat{y}_1, \hat{y}_2, \ldots$ in Stage 1 are in descending order of probability as measured by $P_{Y^n}$ (i.e., $\hat{y}_1$ is the most probable sequence under $P_{Y^n}$, $\hat{y}_2$ is the second most probable, and so on; ties are resolved arbitrarily). We denote the index of $y^n \in \mathcal{Y}^n$ in this guessing order by $g_{Y^n}(y^n)$. Note that because every sequence $y^n$ is guessed exactly once, $g_{Y^n}(\cdot)$ is a bijection from $\mathcal{Y}^n$ to $[1:m^n]$; we refer to such bijections as ranking functions. The guesses $\hat{x}_1, \hat{x}_2, \ldots$ in Stage 2 depend on $Y^n$ and are in descending order of the posterior $P_{X^n|Y^n}(\cdot|Y^n)$. Following our notation from Stage 1, the index of $x^n \in \mathcal{X}^n$ in the guessing order induced by $Y^n = y^n$ is denoted by $g_{X^n|Y^n}(x^n|y^n)$. Note that for every $y^n \in \mathcal{Y}^n$, the function $g_{X^n|Y^n}(\cdot|y^n)$ is a ranking function on $\mathcal{X}^n$. Using $g_{Y^n}(\cdot)$ and $g_{X^n|Y^n}(\cdot|\cdot)$, the total number of guesses $G_2(X^n)$ in Algorithm 1 can be expressed as
$$ G_2(X^n) = g_{Y^n}(Y^n) + g_{X^n|Y^n}(X^n \mid Y^n), \tag{13} $$
where $g_{Y^n}(Y^n)$ and $g_{X^n|Y^n}(X^n|Y^n)$ are the numbers of guesses in Stages 1 and 2, respectively. Observe that guessing in descending order of probability minimizes the $\rho$-th moment of the number of guesses in both stages of Algorithm 1. By [3], for every $\rho > 0$, the guessing moments $\mathbb{E}\big[g_{Y^n}^\rho(Y^n)\big]$ and $\mathbb{E}\big[g_{X^n|Y^n}^\rho(X^n|Y^n)\big]$ can be (upper and lower) bounded in terms of $H_{\frac{1}{1+\rho}}(Y)$ and $H_{\frac{1}{1+\rho}}(X|Y)$ as follows:
$$ (1 + n \ln m)^{-\rho} \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}(Y) \Big) \;\leq\; \mathbb{E}\big[ g_{Y^n}^\rho(Y^n) \big] \;\leq\; \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}(Y) \Big), \tag{14a} $$
$$ (1 + n \ln |\mathcal{X}|)^{-\rho} \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(X \mid f(X)\big) \Big) \;\leq\; \mathbb{E}\big[ g_{X^n|Y^n}^\rho(X^n|Y^n) \big] \;\leq\; \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(X \mid f(X)\big) \Big). \tag{14b} $$
Combining (13) and (14), we next establish bounds on $\mathbb{E}\big[ G_2^\rho(X^n) \big]$. In light of (13), we begin with bounds on the $\rho$-th power of a sum.
Lemma 1.
Let $k \in \mathbb{N}$, and let $\{a_i\}_{i=1}^k$ be a non-negative sequence. For every $\rho > 0$,
$$ s_1(k,\rho) \sum_{i=1}^k a_i^\rho \;\leq\; \bigg( \sum_{i=1}^k a_i \bigg)^{\!\rho} \;\leq\; s_2(k,\rho) \sum_{i=1}^k a_i^\rho, \tag{15} $$
where
$$ s_1(k,\rho) := \begin{cases} 1, & \text{if } \rho \geq 1, \\ k^{\rho-1}, & \text{if } \rho \in (0,1), \end{cases} \tag{16} $$
and
$$ s_2(k,\rho) := \begin{cases} k^{\rho-1}, & \text{if } \rho \geq 1, \\ 1, & \text{if } \rho \in (0,1). \end{cases} \tag{17} $$
  • If $\rho \geq 1$, then the left and right inequalities in (15) hold with equality if, respectively, $k-1$ of the $a_i$'s are equal to zero or $a_1 = \cdots = a_k$;
  • if $\rho \in (0,1)$, then the left and right inequalities in (15) hold with equality if, respectively, $a_1 = \cdots = a_k$ or $k-1$ of the $a_i$'s are equal to zero.
Proof. 
See Appendix A. □
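For readers who want a quick numerical sanity check of Lemma 1 (not part of the original paper), the following sketch verifies (15) on random non-negative sequences for one value of $\rho$ below and one above one.

```python
import numpy as np

rng = np.random.default_rng(0)

def s1(k, rho):
    return 1.0 if rho >= 1 else k ** (rho - 1)

def s2(k, rho):
    return k ** (rho - 1) if rho >= 1 else 1.0

for rho in (0.5, 2.0):
    for _ in range(1000):
        a = rng.uniform(0.0, 1.0, size=5)        # non-negative sequence, k = 5
        lhs = s1(5, rho) * np.sum(a ** rho)
        mid = np.sum(a) ** rho
        rhs = s2(5, rho) * np.sum(a ** rho)
        assert lhs <= mid + 1e-12 and mid <= rhs + 1e-12   # (15) holds
print("Lemma 1 bounds verified on random examples.")
```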
Using the shorthand notation $k_1(\rho) := s_1(2,\rho)$ and $k_2(\rho) := s_2(2,\rho)$, we apply Lemma 1 in conjunction with (13), (14) (and the fact that $|\mathcal{X}| \geq m$) to bound $\mathbb{E}\big[ G_2^\rho(X^n) \big]$ as follows:
$$ k_1(\rho)\, (1 + n \ln |\mathcal{X}|)^{-\rho} \Big[ \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(f(X)\big) \Big) + \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(X \mid f(X)\big) \Big) \Big] \;\leq\; \mathbb{E}\big[ G_2^\rho(X^n) \big] \;\leq\; k_2(\rho) \Big[ \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(f(X)\big) \Big) + \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(X \mid f(X)\big) \Big) \Big]. \tag{18} $$
The bounds in (18) are asymptotically tight as $n$ tends to infinity. To see this, note that
$$ \lim_{n \to \infty} \frac{1}{n} \ln \Big( k_1(\rho)\, (1 + n \ln |\mathcal{X}|)^{-\rho} \Big) = 0, \qquad \lim_{n \to \infty} \frac{1}{n} \ln k_2(\rho) = 0, \tag{19} $$
and therefore, for all $\rho > 0$,
$$ \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\big[ G_2^\rho(X^n) \big] = \lim_{n \to \infty} \frac{1}{n} \log \Big[ \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(f(X)\big) \Big) + \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(X \mid f(X)\big) \Big) \Big]. \tag{20} $$
Since the sum of the two exponentials on the right-hand side (RHS) of (20) is dominated by the one with the larger exponential growth rate, it follows that
$$ \lim_{n \to \infty} \frac{1}{n} \log \Big[ \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(f(X)\big) \Big) + \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(X \mid f(X)\big) \Big) \Big] = \rho\, \max\Big\{ H_{\frac{1}{1+\rho}}\big(f(X)\big),\; H_{\frac{1}{1+\rho}}\big(X \mid f(X)\big) \Big\}, \tag{21} $$
and thus, by (20) and (21),
$$ E_2(X; \rho, m, f) := \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\big[ G_2^\rho(X^n) \big] = \rho\, \max\Big\{ H_{\frac{1}{1+\rho}}\big(f(X)\big),\; H_{\frac{1}{1+\rho}}\big(X \mid f(X)\big) \Big\}. \tag{22} $$
As a sanity check, note that if $m = |\mathcal{X}|$ and $f$ is the identity function $\mathrm{id}$ (i.e., $\mathrm{id}(x) = x$ for all $x \in \mathcal{X}$), then $X^n$ is revealed in Stage 1, and (with Stage 2 obsolete) the $\rho$-th moment of the total number of guesses grows exponentially with rate
$$ \rho\, H_{\frac{1}{1+\rho}}\big(\mathrm{id}(X)\big) = \rho\, H_{\frac{1}{1+\rho}}(X). $$
This is in agreement with the RHS of (22), as
$$ \rho\, \max\Big\{ H_{\frac{1}{1+\rho}}\big(\mathrm{id}(X)\big),\; H_{\frac{1}{1+\rho}}\big(X \mid \mathrm{id}(X)\big) \Big\} = \rho\, \max\Big\{ H_{\frac{1}{1+\rho}}(X),\; H_{\frac{1}{1+\rho}}(X \mid X) \Big\} = \rho\, \max\Big\{ H_{\frac{1}{1+\rho}}(X),\; 0 \Big\} = \rho\, H_{\frac{1}{1+\rho}}(X).
$$
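The exponent in (22) is easy to evaluate numerically. The sketch below (with an example PMF and map $f$ that are assumptions made purely for illustration) computes the two terms of the maximum in (22) and compares the resulting two-stage exponent with the single-stage exponent $\rho H_{\frac{1}{1+\rho}}(X)$.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Order-alpha Renyi entropy in nats (alpha > 0, alpha != 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def arimoto_renyi_cond_entropy(p_xy, alpha):
    """H_alpha(X|Y) for a joint PMF array indexed by (x, y), via (8)."""
    inner = np.sum(np.asarray(p_xy, dtype=float) ** alpha, axis=0) ** (1.0 / alpha)
    return (alpha / (1.0 - alpha)) * np.log(np.sum(inner))

def two_stage_exponent(p_x, f, m, rho):
    """E_2(X; rho, m, f) of (22) for Y = f(X), with f given as a list mapping
    each symbol index x to a label in {0, ..., m-1}."""
    p_x = np.asarray(p_x, dtype=float)
    p_xy = np.zeros((len(p_x), m))
    for x, px in enumerate(p_x):
        p_xy[x, f[x]] = px
    alpha = 1.0 / (1.0 + rho)
    h_fx = renyi_entropy(p_xy.sum(axis=0), alpha)          # H_alpha(f(X))
    h_x_given_fx = arimoto_renyi_cond_entropy(p_xy, alpha)  # H_alpha(X | f(X))
    return rho * max(h_fx, h_x_given_fx)

# illustrative example (PMF and map are assumptions, not from the paper)
p_x = [0.4, 0.3, 0.2, 0.1]
f = [0, 0, 1, 1]             # a map from {0,..,3} onto {0, 1}, i.e., m = 2
rho = 1.0
print("two-stage exponent:", two_stage_exponent(p_x, f, 2, rho))
print("single-stage exponent:", rho * renyi_entropy(p_x, 1.0 / (1.0 + rho)))
```

For this toy example the two-stage exponent is strictly smaller than the single-stage one, in line with the reduction established later in Theorem 2.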
We summarize our results so far in Theorem 1 below.
Theorem 1.
Let $X^n = (X_1, \ldots, X_n)$ be a sequence of i.i.d. random variables, each drawn according to the PMF $P_X$ of support $\mathcal{X} := [1:|\mathcal{X}|]$. Let $m \in [2:|\mathcal{X}|-1]$, $f \in \mathcal{F}_{|\mathcal{X}|,m}$, and define $Y^n := (f(X_1), \ldots, f(X_n))$. When guessing $X^n$ according to Algorithm 1 (i.e., after first guessing $Y^n$ in descending order of probability as measured by $P_{Y^n}(\cdot)$ and then proceeding in descending order of probability as measured by $P_{X^n|Y^n}(\cdot|Y^n)$ for guessing $X^n$), the $\rho$-th moment of the total number of guesses $G_2(X^n)$ satisfies
(a) 
the lower and upper bounds in (18) for all $n \in \mathbb{N}$ and $\rho > 0$;
(b) 
the asymptotic characterization (22) for $\rho > 0$ and $n \to \infty$.

A Suboptimal and Simple Construction of $f$ in Algorithm 1 and Bounds on $\mathbb{E}\big[ G_2^\rho(X^n) \big]$

Having established in Theorem 1 that
$$ \mathbb{E}\big[ G_2^\rho(X^n) \big] \doteq \exp\big( n\, E_2(X; \rho, m, f) \big), \qquad \rho > 0, \tag{23} $$
we now seek to minimize the exponent $E_2(X; \rho, m, f)$ in the RHS of (23) (for a given PMF $P_X$, $\rho > 0$, and $m \in [2:|\mathcal{X}|-1]$) over all $f \in \mathcal{F}_{|\mathcal{X}|,m}$.
We proceed by considering a suboptimal and simple construction of $f$ that yields explicit bounds as a function of the PMF $P_X$ and the value of $m$; moreover, this construction does not depend on $\rho$.
For a fixed $m \in [2:|\mathcal{X}|-1]$, a non-injective deterministic function $f_m^*\colon \mathcal{X} \to [1:m]$ is constructed by relying on the Huffman algorithm for lossless compression of $X := X_1$. This construction also (almost) achieves the maximal mutual information $I(X; f(X))$ among all deterministic functions $f\colon \mathcal{X} \to [1:m]$ (this issue is elaborated on in the sequel). Heuristically, apart from its simplicity, this suboptimal construction is motivated by the expectation that it reduces the guesswork in Stage 2 of Algorithm 1, where one wishes to guess $X^n$ on the basis of the knowledge of $Y^n$ with $Y_i = f(X_i)$ for all $i \in [1:n]$. In this setting, it is shown that the upper and lower bounds on $\mathbb{E}\big[ G_2^\rho(X^n) \big]$ are (almost) asymptotically tight in terms of their exponential growth rate in $n$. Furthermore, these exponential bounds demonstrate a reduction in the required number of guesses for $X^n$, as compared to the optimal one-stage guessing of $X^n$.
In the sequel, the following construction of a deterministic function $f_m^*\colon \mathcal{X} \to [1:m]$ is analyzed; this construction was suggested in the proofs of Lemma 5 in [26] and Theorem 2 in [14].
Algorithm 2 (construction of f m * ):
(a)
Let $i := 1$ and $P_1 := P_X$.
(b)
If $|\mathrm{supp}(P_i)| = m$, let $R := P_i$, and go to Step (c). If not, let $P_{i+1} := P_i$ with its two least likely symbols merged as in the Huffman code construction. Let $i \leftarrow i+1$, and go to Step (b).
(c)
Construct $f_m^* \in \mathcal{F}_{|\mathcal{X}|,m}$ by setting $f_m^*(k) = j$ if $P_1(k)$ has been merged into $R(j)$.
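A minimal Python sketch of Algorithm 2 follows; it assumes only that ties among least likely symbols may be broken arbitrarily, as in the Huffman construction, and the example PMF at the end is an illustrative assumption.

```python
import heapq

def construct_f_m_star(p_x, m):
    """Algorithm 2: repeatedly merge the two least likely symbols (Huffman-style)
    until only m groups remain; return f as a list mapping symbol -> label in [0, m)."""
    assert 2 <= m <= len(p_x)
    # each heap entry: (probability, tie-breaker, list of original symbols in the group)
    heap = [(p, i, [i]) for i, p in enumerate(p_x)]
    heapq.heapify(heap)
    counter = len(p_x)
    while len(heap) > m:
        p1, _, g1 = heapq.heappop(heap)   # least likely group
        p2, _, g2 = heapq.heappop(heap)   # second least likely group
        heapq.heappush(heap, (p1 + p2, counter, g1 + g2))
        counter += 1
    f = [None] * len(p_x)
    for label, (_, _, group) in enumerate(heap):
        for symbol in group:
            f[symbol] = label
    return f

# example (the PMF is an assumption chosen for illustration)
p_x = [0.4, 0.3, 0.2, 0.06, 0.04]
print(construct_f_m_star(p_x, 3))   # e.g., [0, 1, 2, 2, 2] up to relabeling
```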
We now define $Y^{*n} := (Y_1^*, \ldots, Y_n^*)$ with
$$ Y_i^* := f_m^*(X_i), \qquad i \in [1:n]. \tag{24} $$
Observe that due to [26] (Corollary 3 and Lemma 5), and because $f_m^*(\cdot)$ operates component-wise on the i.i.d. vector $X^n$, the following lower bound on $\frac{1}{n} I(X^n; Y^{*n})$ applies:
$$ \frac{1}{n} I(X^n; Y^{*n}) = I\big(X; f_m^*(X)\big) \tag{25} $$
$$ \phantom{\frac{1}{n} I(X^n; Y^{*n})} = H\big(f_m^*(X)\big) \tag{26} $$
$$ \phantom{\frac{1}{n} I(X^n; Y^{*n})} \geq \max_{f \in \mathcal{F}_{|\mathcal{X}|,m}} H\big(f(X)\big) - \beta^* \tag{27} $$
$$ \phantom{\frac{1}{n} I(X^n; Y^{*n})} = \max_{f \in \mathcal{F}_{|\mathcal{X}|,m}} I\big(X; f(X)\big) - \beta^* \tag{28} $$
$$ \phantom{\frac{1}{n} I(X^n; Y^{*n})} = \max_{f \in \mathcal{F}_{|\mathcal{X}|,m}} \frac{1}{n} I(X^n; Y^n) - \beta^*, \tag{29} $$
where $Y^n := (f(X_1), \ldots, f(X_n))$ and
$$ \beta^* := \log \frac{2}{e \ln 2} \approx 0.08607 \ \text{bits}. \tag{30} $$
From the proof of [26] (Theorem 3), we further have the following multiplicative bound:
$$ \frac{1}{n} I(X^n; Y^{*n}) \;\geq\; \frac{10}{11} \max_{f \in \mathcal{F}_{|\mathcal{X}|,m}} \frac{1}{n} I(X^n; Y^n). \tag{31} $$
Note that by [26] (Lemma 1), the maximization problem in the RHS of (29) is strongly NP-hard [27]. This means that, unless P = NP, there is no polynomial-time algorithm that, given an arbitrarily small $\varepsilon > 0$, produces a deterministic function $f^{(\varepsilon)} \in \mathcal{F}_{|\mathcal{X}|,m}$ satisfying
$$ I\big(X; f^{(\varepsilon)}(X)\big) \;\geq\; (1-\varepsilon) \max_{f \in \mathcal{F}_{|\mathcal{X}|,m}} I\big(X; f(X)\big). \tag{32} $$
We next examine the performance of our candidate function $f_m^*$ when applied in Algorithm 1. To that end, we first bound $\mathbb{E}\big[ g_{Y^n}^\rho(Y^{*n}) \big]$ in terms of the Rényi entropy of a suitably defined random variable $\tilde{X}_m \in [1:m]$ constructed in Algorithm 3 below. In the construction, we assume without loss of generality that $P_X(1) \geq \cdots \geq P_X(|\mathcal{X}|)$ and denote the PMF of $\tilde{X}_m$ by $Q := R_m(P_X)$.
Algorithm 3 (construction of the PMF $Q := R_m(P_X)$ of the random variable $\tilde{X}_m$):
  • If $m = 1$, then $Q := R_1(P_X)$ is defined to be a point mass at one;
  • If $m = |\mathcal{X}|$, then $Q := R_{|\mathcal{X}|}(P_X)$ is defined to be equal to $P_X$.
Furthermore, for $m \in [2:|\mathcal{X}|-1]$,
(a)
If $P_X(1) < \frac{1}{m}$, then $Q$ is defined to be the equiprobable distribution on $[1:m]$;
(b)
Otherwise, the PMF $Q$ is defined as
$$ Q(i) := \begin{cases} P_X(i), & \text{if } i \in [1:m^*], \\[1mm] \dfrac{1}{m - m^*} \displaystyle\sum_{j=m^*+1}^{|\mathcal{X}|} P_X(j), & \text{if } i \in \{m^*+1, \ldots, m\}, \end{cases} \tag{33} $$
where $m^*$ is the maximal integer $i \in [1:m-1]$ that satisfies
$$ P_X(i) \;\geq\; \frac{1}{m-i} \sum_{j=i+1}^{|\mathcal{X}|} P_X(j). \tag{34} $$
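The following sketch implements Algorithm 3 directly from (33) and (34); it assumes the input PMF is sorted in non-increasing order, and the example PMF is chosen only for illustration.

```python
import numpy as np

def R_m(p_x, m):
    """Algorithm 3: PMF Q = R_m(P_X) of the auxiliary variable X~_m.
    Assumes p_x is sorted in non-increasing order."""
    p_x = np.asarray(p_x, dtype=float)
    K = len(p_x)
    if m == 1:
        return np.array([1.0])
    if m == K:
        return p_x.copy()
    if p_x[0] < 1.0 / m:                       # case (a): equiprobable on [1:m]
        return np.full(m, 1.0 / m)
    # case (b): find m*, the largest i in [1:m-1] with
    # P_X(i) >= (1/(m-i)) * sum_{j>i} P_X(j)   (1-based indices as in (34))
    m_star = max(i for i in range(1, m)
                 if p_x[i - 1] >= p_x[i:].sum() / (m - i))
    q = np.empty(m)
    q[:m_star] = p_x[:m_star]
    q[m_star:] = p_x[m_star:].sum() / (m - m_star)
    return q

# illustrative example (PMF is an assumption, sorted in non-increasing order)
p_x = [0.4, 0.3, 0.15, 0.1, 0.05]
print(R_m(p_x, 3))   # -> [0.4, 0.3, 0.3]
```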
Algorithm 3 was introduced in [26,28]. The link between the Rényi entropy of $\tilde{X}_m$ and that of $Y^*$ was given in Equation (34) of [14]:
$$ H_\alpha(Y^*) \in \Big[ H_\alpha(\tilde{X}_m) - v(\alpha),\; H_\alpha(\tilde{X}_m) \Big], \qquad \alpha > 0, \tag{35} $$
where
$$ v(\alpha) := \begin{cases} \log \dfrac{\alpha-1}{2^\alpha - 2} + \dfrac{\alpha}{\alpha-1} \log \dfrac{2^\alpha - 1}{\alpha}, & \alpha \neq 1, \\[2mm] \log \dfrac{2}{e \ln 2} \approx 0.08607 \ \text{bits}, & \alpha = 1. \end{cases} \tag{36} $$
The function $v\colon (0,\infty) \to (0, \log 2)$ is depicted in Figure 1. It is monotonically increasing, continuous, and it satisfies
$$ \lim_{\alpha \to 0} v(\alpha) = 0, \qquad \lim_{\alpha \to \infty} v(\alpha) = \log 2 \ (= 1 \ \text{bit}). \tag{37} $$
Combining (14a) and (35) yields, for all $\rho > 0$,
$$ (1 + n \ln m)^{-\rho} \exp\Big( n\rho \Big[ H_{\frac{1}{1+\rho}}(\tilde{X}_m) - v\big(\tfrac{1}{1+\rho}\big) \Big] \Big) \;\leq\; \mathbb{E}\big[ g_{Y^n}^\rho(Y^{*n}) \big] \;\leq\; \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}(\tilde{X}_m) \Big), \tag{38} $$
where, due to (30), the difference between the exponential growth rates (in $n$) of the lower and upper bounds in (38) is equal to $\rho\, v\big(\tfrac{1}{1+\rho}\big)$, and it can be verified to satisfy (see (30) and (36))
$$ 0 \;<\; \rho\, v\big(\tfrac{1}{1+\rho}\big) \;<\; \beta^* \approx 0.08607 \log 2, \qquad \rho > 0, \tag{39} $$
where the leftmost and rightmost inequalities of (39) are asymptotically tight as we let $\rho \to 0^+$ and $\rho \to \infty$, respectively.
By inserting (38) into (18) and applying Inequality (39), it follows that for all $\rho > 0$,
$$ k_1(\rho)\, (1 + n \ln |\mathcal{X}|)^{-\rho} \Big[ \exp\Big( n \big[ \rho\, H_{\frac{1}{1+\rho}}(\tilde{X}_m) - \beta^* \big] \Big) + \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(X \mid f_m^*(X)\big) \Big) \Big] \;\leq\; \mathbb{E}\big[ G_2^\rho(X^n) \big], \tag{40a} $$
$$ \mathbb{E}\big[ G_2^\rho(X^n) \big] \;\leq\; k_2(\rho) \Big[ \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}(\tilde{X}_m) \Big) + \exp\Big( n\rho\, H_{\frac{1}{1+\rho}}\big(X \mid f_m^*(X)\big) \Big) \Big]. \tag{40b} $$
Consequently, by letting $n$ tend to infinity and relying on (22),
$$ \max\Big\{ \rho\, H_{\frac{1}{1+\rho}}(\tilde{X}_m) - \beta^*,\; \rho\, H_{\frac{1}{1+\rho}}\big(X \mid f_m^*(X)\big) \Big\} \;\leq\; E_2(X; \rho, m, f_m^*) \;\leq\; \max\Big\{ \rho\, H_{\frac{1}{1+\rho}}(\tilde{X}_m),\; \rho\, H_{\frac{1}{1+\rho}}\big(X \mid f_m^*(X)\big) \Big\}. \tag{41} $$
We next simplify the above bounds by evaluating the maxima in (41) as a function of $m$ and $\rho$. To that end, we use the following lemma.
Lemma 2.
For $\alpha > 0$, let the two sequences $\{a_m(\alpha)\}$ and $\{b_m(\alpha)\}$ be given by
$$ a_m(\alpha) := H_\alpha(\tilde{X}_m), \tag{42} $$
$$ b_m(\alpha) := H_\alpha\big(X \mid f_m^*(X)\big), \tag{43} $$
with $m \in [1:|\mathcal{X}|]$. Then,
(a) 
The sequence { a m ( α ) } is monotonically increasing (in m), and its first and last terms are zero and H α ( X ) , respectively.
(b) 
The sequence { b m ( α ) } is monotonically decreasing (in m), and its first and last terms are H α ( X ) and zero, respectively.
(c) 
If supp P X = X , then { a m ( α ) } is strictly monotonically increasing, and { b m ( α ) } is strictly monotonically decreasing. In particular, for all m [ 2 : | X | 1 ] , a m ( α ) and b m ( α ) are positive and strictly smaller than H α ( X ) .
Proof. 
See Appendix B. □
Since symbols of probability zero (i.e., $x \in \mathcal{X}$ for which $P_X(x) = 0$) do not contribute to the expected number of guesses, assume without loss of generality that $\mathrm{supp}(P_X) = \mathcal{X}$. In view of Lemma 2, we can therefore define
$$ m_\rho^* = m_\rho^*(P_X) := \min\Big\{ m \in [2:|\mathcal{X}|] : a_m\big(\tfrac{1}{1+\rho}\big) \geq b_m\big(\tfrac{1}{1+\rho}\big) \Big\}. \tag{44} $$
Using (44), we simplify (41) as follows:
(a)
If $m < m_\rho^*$, then
$$ E_2(\rho, m) = \rho\, H_{\frac{1}{1+\rho}}\big(X \mid f_m^*(X)\big). \tag{45} $$
(b)
Otherwise, if $m \geq m_\rho^*$, then
$$ \rho\, H_{\frac{1}{1+\rho}}(\tilde{X}_m) - \beta^* \;\leq\; E_2(\rho, m) \;\leq\; \rho\, H_{\frac{1}{1+\rho}}(\tilde{X}_m). \tag{46} $$
Note that when guessing $X^n$ directly, the $\rho$-th moment of the number of guesses grows exponentially with rate $\rho\, H_{\frac{1}{1+\rho}}(X)$ (cf. (14a)). By Item (c) of Lemma 2,
$$ H_{\frac{1}{1+\rho}}(\tilde{X}_m) = a_m\big(\tfrac{1}{1+\rho}\big) < a_{|\mathcal{X}|}\big(\tfrac{1}{1+\rho}\big) = H_{\frac{1}{1+\rho}}(X), $$
and, because conditioning (on a dependent chance variable) strictly reduces the Rényi entropy [24], also $H_{\frac{1}{1+\rho}}\big(X \mid f_m^*(X)\big) < H_{\frac{1}{1+\rho}}(X)$. Hence, Equations (45) and (46) imply that for any $m \in [2:|\mathcal{X}|-1]$, guessing in two stages according to Algorithm 1 with $f = f_m^*$ reveals $X^n$ sooner (in expectation) than guessing $X^n$ directly.
We summarize our findings in this section in the second theorem below.
Theorem 2.
For a given PMF $P_X$ of support $\mathcal{X} = [1:|\mathcal{X}|]$ and $m \in [2:|\mathcal{X}|-1]$, let the function $f_m^* \in \mathcal{F}_{|\mathcal{X}|,m}$ be constructed according to Algorithm 2, and let the random variable $\tilde{X}_m$ have the PMF constructed in Algorithm 3. Let $X^n$ be i.i.d. according to $P_X$, and let $Y^n := (f_m^*(X_1), \ldots, f_m^*(X_n))$. Finally, for $\rho > 0$, let
$$ E_1(\rho) := \rho\, H_{\frac{1}{1+\rho}}(X) $$
be the optimal exponential growth rate of the $\rho$-th moment of the single-stage guessing of $X^n$, and let
$$ E_2(\rho, m) := E_2(X; \rho, m, f_m^*) $$
be given by (23) with $f := f_m^*$. Then, the following holds:
(a) 
Cicalese et al. [26]: The maximization of the (normalized) mutual information $\frac{1}{n} I(X^n; Y^n)$ over all deterministic functions $f\colon \mathcal{X} \to [1:m]$ is a strongly NP-hard problem (for all $n$). However, the deterministic function $f := f_m^*$ almost achieves this maximum, up to a small additive term equal to $\beta^* := \log \frac{2}{e \ln 2} \approx 0.08607 \log 2$, and up to a multiplicative term equal to $\frac{10}{11}$ (see (29)–(31)).
(b) 
The ρ-th moment of the number of guesses for X n , which is required by the two-stage guessing in Algorithm 1, satisfies the non-asymptotic bounds in (40a) and (40b).
(c) 
The asymptotic exponent E 2 ( ρ , m ) satisfies (45) and (46).
(d) 
For all $\rho > 0$,
$$ E_1(\rho) - E_2(\rho, m) \;\geq\; \rho \Big[ H_{\frac{1}{1+\rho}}(X) - \max\Big\{ H_{\frac{1}{1+\rho}}(\tilde{X}_m),\; H_{\frac{1}{1+\rho}}\big(X \mid f_m^*(X)\big) \Big\} \Big] \;>\; 0, $$
so Algorithm 1 reduces the exponential growth rate (as a function of $n$) of the required number of guesses for $X^n$ in comparison to the optimal one-stage guessing.

4. Two-Stage Guessing: Arbitrary $(X, Y)$

We next assume that $X^n$ and $Y^n$ are drawn jointly i.i.d. according to a given PMF $P_{XY}$, and we drop the requirement that Stage 1 must reveal $Y^n$ prior to guessing $X^n$. Given $\rho > 0$, our goal in this section is to find the least exponential growth rate (in $n$) of the $\rho$-th moment of the total number of guesses required to recover $X^n$. Since Stage 1 may not reveal $Y^n$, we can no longer express the number of guesses using the ranking functions $g_{Y^n}(\cdot)$ and $g_{X^n|Y^n}(\cdot|\cdot)$ as in Section 3, and we need new notation to capture the event that $Y^n$ was not guessed in Stage 1. To that end, let $\mathcal{G}_n$ be a subset of $\mathcal{Y}^n$, and let the ranking function
$$ \tilde{g}_{Y^n}\colon \mathcal{G}_n \to [1:|\mathcal{G}_n|] \tag{51} $$
denote the guessing order in Stage 1, with the understanding that if $Y^n \notin \mathcal{G}_n$, then
$$ \tilde{g}_{Y^n}(Y^n) = |\mathcal{G}_n| \tag{52} $$
and the guesser moves on to Stage 2 knowing only that $Y^n \notin \mathcal{G}_n$. We denote the guessing order in Stage 2 by
$$ \tilde{g}_{X^n|Y^n}\colon \mathcal{X}^n \times \mathcal{Y}^n \to [1:|\mathcal{X}|^n], \tag{53} $$
where, for every $y^n \in \mathcal{Y}^n$, $\tilde{g}_{X^n|Y^n}(\cdot|y^n)$ is a ranking function on $\mathcal{X}^n$ that satisfies
$$ \tilde{g}_{X^n|Y^n}(\cdot|y^n) = \tilde{g}_{X^n|Y^n}(\cdot|\eta^n), \qquad y^n, \eta^n \notin \mathcal{G}_n. \tag{54} $$
Note that while $\tilde{g}_{Y^n}$ and $\tilde{g}_{X^n|Y^n}$ depend on $\mathcal{G}_n$, we do not make this dependence explicit. In the remainder of this section, we prove the following variational characterization of the least exponential growth rate of the $\rho$-th moment of the total number of guesses in both stages, $\tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n)$.
Theorem 3.
If $(X^n, Y^n)$ is i.i.d. according to $P_{XY}$, then for all $\rho > 0$,
$$ \lim_{n \to \infty}\; \min_{\tilde{g}_{Y^n},\, \tilde{g}_{X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] = \sup_{Q_{XY}} \Big[ \rho\, \min\Big\{ H(Q_X),\; \max\big\{ H(Q_Y),\, H(Q_{X|Y}) \big\} \Big\} - D(Q_{XY} \,\|\, P_{XY}) \Big], \tag{55} $$
where the supremum on the RHS of (55) is over all PMFs $Q_{XY}$ on $\mathcal{X} \times \mathcal{Y}$ (and the limit exists).
Note that if $P_{XY}$ is such that $Y = f(X)$, then the RHS of (55) is less than or equal to the RHS of (21). In other words, the guessing exponent of Theorem 3 is less than or equal to the guessing exponent of Theorem 1. This is due to the fact that guessing $Y^n$ in Stage 1 before proceeding to Stage 2 (the strategy examined in Section 3) is just one of the admissible guessing strategies of Section 4 and not necessarily the optimal one.
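To make the variational formula (55) concrete, the following sketch evaluates its RHS by brute force over a grid of joint PMFs $Q_{XY}$ for a small alphabet. The example PMF and the grid resolution are assumptions chosen for illustration; a finer grid or a proper optimizer would be needed for accurate values.

```python
import itertools
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def exponent_term(q_xy, p_xy, rho):
    """rho * min{H(Q_X), max{H(Q_Y), H(Q_{X|Y})}} - D(Q_XY || P_XY), cf. (55)."""
    q_x, q_y = q_xy.sum(axis=1), q_xy.sum(axis=0)
    h_x, h_y = entropy(q_x), entropy(q_y)
    h_x_given_y = entropy(q_xy.flatten()) - h_y
    mask = q_xy > 0
    if np.any(mask & (p_xy == 0)):
        return -np.inf                      # D(Q||P) is infinite
    d = np.sum(q_xy[mask] * np.log(q_xy[mask] / p_xy[mask]))
    return rho * min(h_x, max(h_y, h_x_given_y)) - d

def rhs_of_55(p_xy, rho, grid=20):
    """Brute-force maximization over joint PMFs with masses on a grid of step 1/grid."""
    best = -np.inf
    cells = p_xy.size
    for counts in itertools.product(range(grid + 1), repeat=cells - 1):
        if sum(counts) > grid:
            continue
        q = np.array(list(counts) + [grid - sum(counts)], float).reshape(p_xy.shape) / grid
        best = max(best, exponent_term(q, p_xy, rho))
    return best

# toy joint PMF on a 2x2 alphabet (illustrative assumption)
P_XY = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
print(rhs_of_55(P_XY, rho=1.0, grid=20))
```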
We prove Theorem 3 in two parts: First, we show that the guesser can be assumed cognizant of the empirical joint type of ( X n , Y n ) ; by invoking the law of total expectation, averaging over denominator-n types Q X Y on X × Y , we reduce the problem to evaluating the LHS of (55) under the assumption that ( X n , Y n ) is drawn uniformly at random from a type class T ( n ) ( Q X Y ) (instead of being i.i.d. P X Y ). We conclude the proof by solving this reduced problem, showing in particular that when ( X n , Y n ) is drawn uniformly at random from a type class, the LHS of (55) can be achieved either by guessing Y n in Stage 1 or skipping Stage 1 entirely.
We begin with the first part of the proof and show that the guesser can be assumed cognizant of the empirical joint type of ( X n , Y n ) ; we formalize and prove this claim in Corollary 1, which we derive from Lemma 3 below.
Lemma 3.
Let $\tilde{g}_{Y^n}^*$ and $\tilde{g}_{X^n|Y^n}^*$ be ranking functions that minimize the expectation in the LHS of (55), and likewise, let $\tilde{g}_{T;Y^n}^*$ and $\tilde{g}_{T;X^n|Y^n}^*$ be ranking functions, cognizant of the empirical joint type $\Pi_{X^nY^n}$ of $(X^n, Y^n)$, that minimize the expectation in the LHS of (55) over all ranking functions depending on $\Pi_{X^nY^n}$. Then, there exist positive constants $a$ and $k$, which are independent of $n$, such that
$$ \mathbb{E}\Big[ \big( \tilde{g}_{Y^n}^*(Y^n) + \tilde{g}_{X^n|Y^n}^*(X^n|Y^n) \big)^\rho \Big] \;\leq\; k\, n^a\; \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}^*(Y^n) + \tilde{g}_{T;X^n|Y^n}^*(X^n|Y^n) \big)^\rho \Big]. \tag{56} $$
Proof. 
See Appendix C. □
Corollary 1.
If the limit
$$ \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}^*(Y^n) + \tilde{g}_{T;X^n|Y^n}^*(X^n|Y^n) \big)^\rho \Big], \qquad \rho > 0, $$
exists, then so does the limit in the LHS of (55), and the two are equal.
Proof. 
By (56),
$$ \limsup_{n \to \infty}\; \min_{\tilde{g}_{Y^n},\, \tilde{g}_{X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] $$
$$ = \limsup_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{Y^n}^*(Y^n) + \tilde{g}_{X^n|Y^n}^*(X^n|Y^n) \big)^\rho \Big] $$
$$ \leq \limsup_{n \to \infty} \frac{1}{n} \log \Big( k\, n^a\; \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}^*(Y^n) + \tilde{g}_{T;X^n|Y^n}^*(X^n|Y^n) \big)^\rho \Big] \Big) $$
$$ = \limsup_{n \to \infty} \frac{1}{n} \Big( \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}^*(Y^n) + \tilde{g}_{T;X^n|Y^n}^*(X^n|Y^n) \big)^\rho \Big] + \log k + a \log n \Big) $$
$$ = \limsup_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}^*(Y^n) + \tilde{g}_{T;X^n|Y^n}^*(X^n|Y^n) \big)^\rho \Big] $$
$$ = \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}^*(Y^n) + \tilde{g}_{T;X^n|Y^n}^*(X^n|Y^n) \big)^\rho \Big]. $$
The reverse inequality,
$$ \liminf_{n \to \infty}\; \min_{\tilde{g}_{Y^n},\, \tilde{g}_{X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \;\geq\; \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}^*(Y^n) + \tilde{g}_{T;X^n|Y^n}^*(X^n|Y^n) \big)^\rho \Big], $$
follows from the fact that an optimal guessing strategy depending on the empirical joint type $\Pi_{X^nY^n}$ of $(X^n, Y^n)$ cannot be outperformed by a guessing strategy ignorant of $\Pi_{X^nY^n}$. □
Corollary 1 states that the minimization in the LHS of (55) can be taken over guessing strategies cognizant of the empirical joint type of ( X n , Y n ) . As we show in the next lemma, this implies that evaluating the LHS of (55) can be further simplified by taking the expectation with ( X n , Y n ) drawn uniformly at random from a type class (instead of being i.i.d. P X Y ).
Lemma 4.
Let $\mathbb{E}_{Q_{XY}}$ denote expectation with $(X^n, Y^n)$ drawn uniformly at random from the type class $\mathcal{T}^{(n)}(Q_{XY})$. Then, the following limits exist and
$$ \lim_{n \to \infty}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] = \lim_{n \to \infty}\; \max_{Q_{XY}} \bigg( \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \frac{1}{n} \log \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] - D(Q_{XY} \,\|\, P_{XY}) \bigg), \tag{64} $$
where the maximum in the RHS of (64) is taken over all denominator-$n$ types on $\mathcal{X} \times \mathcal{Y}$.
Proof. 
Recall that $\Pi_{X^nY^n}$ is the empirical joint type of $(X^n, Y^n)$, and let $\mathcal{T}_n(\mathcal{X} \times \mathcal{Y})$ denote the set of all denominator-$n$ types on $\mathcal{X} \times \mathcal{Y}$. We prove Lemma 4 by applying the law of total expectation to the LHS of (64) (averaging over the events $\{\Pi_{X^nY^n} = Q_{XY}\}$, $Q_{XY} \in \mathcal{T}_n(\mathcal{X} \times \mathcal{Y})$) and by approximating the probability of observing a given type using standard tools from large deviations theory. We first show that the LHS of (64) is upper bounded by its RHS:
$$ \lim_{n \to \infty}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] $$
$$ = \lim_{n \to \infty} \frac{1}{n} \log \bigg( \sum_{Q_{XY}}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \, \mathbb{P}[\Pi_{X^nY^n} = Q_{XY}] \bigg) \tag{65} $$
$$ \leq \lim_{n \to \infty} \frac{1}{n} \log \bigg( \max_{Q_{XY}}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \, \mathbb{P}[\Pi_{X^nY^n} = Q_{XY}] \; n^\alpha \bigg) \tag{66} $$
$$ = \lim_{n \to \infty} \frac{1}{n} \log \bigg( \max_{Q_{XY}}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \, \mathbb{P}[\Pi_{X^nY^n} = Q_{XY}] \bigg) \tag{67} $$
$$ \leq \lim_{n \to \infty} \frac{1}{n} \log \bigg( \max_{Q_{XY}}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \, 2^{-n D(Q_{XY} \| P_{XY})} \bigg), \tag{68} $$
where (66) holds for sufficiently large $\alpha$ because the number of types grows polynomially in $n$ (see Appendix C), and (68) follows from [20] (Theorem 11.1.4). We next show that the LHS of (64) is also lower bounded by its RHS:
$$ \lim_{n \to \infty}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] $$
$$ = \lim_{n \to \infty} \frac{1}{n} \log \bigg( \sum_{Q_{XY}}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \, \mathbb{P}[\Pi_{X^nY^n} = Q_{XY}] \bigg) \tag{69} $$
$$ \geq \lim_{n \to \infty} \frac{1}{n} \log \bigg( \max_{Q_{XY}}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \, \mathbb{P}[\Pi_{X^nY^n} = Q_{XY}] \bigg) \tag{70} $$
$$ \geq \lim_{n \to \infty} \frac{1}{n} \log \bigg( \max_{Q_{XY}}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \, 2^{-n [D(Q_{XY} \| P_{XY}) + \delta_n]} \bigg) \tag{71} $$
$$ = \lim_{n \to \infty} \frac{1}{n} \log \bigg( \max_{Q_{XY}}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \, 2^{-n D(Q_{XY} \| P_{XY})} \bigg), \tag{72} $$
where in (71),
$$ \delta_n := \frac{(|\mathcal{X}||\mathcal{Y}| - 1) \log(n+1)}{n} \tag{73} $$
tends to zero as we let $n$ tend to infinity, and the inequality in (71) follows again from [20] (Theorem 11.1.4). Together, (68) and (71) imply the equality in (64). □
We now have established the first part of the proof of Theorem 3: In Corollary 1, we showed that the ranking functions g ˜ Y n and g ˜ X n | Y n can be assumed cognizant of the empirical joint type of ( X n , Y n ) , and in Lemma 4, we showed that under this assumption, the minimization of the ρ -th moment of the total number of guesses can be carried out with ( X n , Y n ) drawn uniformly at random from a type class.
Below, we give the second part of the proof: We show that if the pair $(X^n, Y^n)$ is drawn uniformly at random from $\mathcal{T}^{(n)}(Q_{XY})$, then
$$ \lim_{n \to \infty}\; \min_{\tilde{g}_{Y^n},\, \tilde{g}_{X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] = \rho\, \min\Big\{ H(Q_X),\; \max\big\{ H(Q_Y),\, H(Q_{X|Y}) \big\} \Big\}, \qquad \rho > 0. \tag{74} $$
Note that Corollary 1 and Lemma 4 in conjunction with (74) conclude the proof of Theorem 3:
$$ \lim_{n \to \infty}\; \min_{\tilde{g}_{Y^n},\, \tilde{g}_{X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] $$
$$ = \lim_{n \to \infty}\; \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \tag{75} $$
$$ = \lim_{n \to \infty}\; \max_{Q_{XY}} \bigg( \min_{\tilde{g}_{T;Y^n},\, \tilde{g}_{T;X^n|Y^n}} \frac{1}{n} \log \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] - D(Q_{XY} \,\|\, P_{XY}) \bigg) \tag{76} $$
$$ = \sup_{Q_{XY}} \Big[ \rho\, \min\Big\{ H(Q_X),\; \max\big\{ H(Q_Y),\, H(Q_{X|Y}) \big\} \Big\} - D(Q_{XY} \,\|\, P_{XY}) \Big], \tag{77} $$
where the supremum in the RHS of (77) is taken over all PMFs $Q_{XY}$ on $\mathcal{X} \times \mathcal{Y}$, and the last step follows from (76) and (74) because the set of types is dense in the set of all PMFs (in the same sense that $\mathbb{Q}$ is dense in $\mathbb{R}$).
It thus remains to prove (74). We begin with the direct part: Note that when $(X^n, Y^n)$ is drawn uniformly at random from $\mathcal{T}^{(n)}(Q_{XY})$, the $\rho$-th moment of the total number of guesses grows exponentially with rate $\rho H(Q_X)$ if we skip Stage 1 and guess $X^n$ directly, and with rate $\rho \max\{H(Q_Y), H(Q_{X|Y})\}$ if we guess $Y^n$ in Stage 1 before moving on to guessing $X^n$. To prove the second claim, we argue by case distinction on $\rho$. Assuming first that $\rho \leq 1$,
$$ \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] $$
$$ \leq \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \tilde{g}_{T;Y^n}(Y^n)^\rho + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n)^\rho \Big] \tag{78} $$
$$ = \lim_{n \to \infty} \frac{1}{n} \log \Big( \mathbb{E}\big[ \tilde{g}_{T;Y^n}(Y^n)^\rho \big] + \mathbb{E}\big[ \tilde{g}_{T;X^n|Y^n}(X^n|Y^n)^\rho \big] \Big) \tag{79} $$
$$ \leq \lim_{n \to \infty} \frac{1}{n} \log \Big( 2^{n\rho H(Q_Y)} + 2^{n\rho H(Q_{X|Y})} \Big) \tag{80} $$
$$ = \rho\, \max\big\{ H(Q_Y),\, H(Q_{X|Y}) \big\}, \tag{81} $$
where (78) holds because $\rho \leq 1$ (see Lemma 1); (80) holds since, under the strategy considered, $Y^n$ is revealed at the end of Stage 1, and thus the guesser (cognizant of $Q_{XY}$) will only guess elements of the conditional type class $\mathcal{T}^{(n)}(Q_{X|Y} \mid Y^n)$; and (81) follows from the fact that the exponential growth rate of a sum of two exponentials is dominated by the larger one. We wrap up the argument by showing that the LHS of (78) is also lower bounded by the RHS of (81):
$$ \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] $$
$$ = \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ 2^\rho \Big( \tfrac{1}{2}\, \tilde{g}_{T;Y^n}(Y^n) + \tfrac{1}{2}\, \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \Big)^\rho \Big] \tag{82} $$
$$ \geq \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ 2^{\rho-1} \Big( \tilde{g}_{T;Y^n}(Y^n)^\rho + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n)^\rho \Big) \Big] \tag{83} $$
$$ = \lim_{n \to \infty} \frac{1}{n} \log \Big( \mathbb{E}\big[ \tilde{g}_{T;Y^n}(Y^n)^\rho \big] + \mathbb{E}\big[ \tilde{g}_{T;X^n|Y^n}(X^n|Y^n)^\rho \big] \Big) \tag{84} $$
$$ \geq \lim_{n \to \infty} \frac{1}{n} \log \bigg( \frac{1}{1+\rho} \Big( 2^{n\rho H(Q_Y)} + 2^{n\rho H(Q_{X|Y})} \Big)\, 2^{-n\rho\delta_n} \bigg) \tag{85} $$
$$ = \rho\, \max\big\{ H(Q_Y),\, H(Q_{X|Y}) \big\}, \tag{86} $$
where (83) follows from Jensen's inequality, and (85) follows from [29] (Proposition 6.6) and the lower bound on the size of a (conditional) type class [20] (Theorem 11.1.3). The case $\rho > 1$ can be proven analogously and is hence omitted (with the term $2^{\rho-1}$ in the RHS of (83) replaced by one). Note that by applying the better of the two proposed guessing strategies (i.e., depending on $Q_{XY}$, either guess $Y^n$ in Stage 1 or skip it), the $\rho$-th moment of the total number of guesses grows exponentially with rate $\rho\, \min\big\{ H(Q_X),\, \max\{ H(Q_Y), H(Q_{X|Y}) \} \big\}$. This concludes the direct part of the proof of (74). We remind the reader that while we have constructed a guessing strategy under the assumption that the empirical joint type $\Pi_{X^nY^n}$ of $(X^n, Y^n)$ is known, Lemma 3 implies the existence of a guessing strategy of equal asymptotic performance that does not depend on $\Pi_{X^nY^n}$. Moreover, Lemma 3 is constructive in the sense that the type-independent guessing strategy can be explicitly derived from the type-cognizant one (cf. the proof of Proposition 6.6 in [29]).
We next establish the converse of (74) by showing that, when $(X^n, Y^n)$ is drawn uniformly at random from $\mathcal{T}^{(n)}(Q_{XY})$,
$$ \liminf_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \;\geq\; \rho\, \min\Big\{ H(Q_X),\; \max\big\{ H(Q_Y),\, H(Q_{X|Y}) \big\} \Big\} \tag{87} $$
for all two-stage guessing strategies. To see why (87) holds, consider an arbitrary guessing strategy, and let the sequence $n_1, n_2, \ldots$ be such that
$$ \lim_{k \to \infty} \frac{1}{n_k} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^{n_k}}(Y^{n_k}) + \tilde{g}_{T;X^{n_k}|Y^{n_k}}(X^{n_k}|Y^{n_k}) \big)^\rho \Big] = \liminf_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] \tag{88} $$
and such that the limit
$$ \alpha := \lim_{k \to \infty} \frac{|\mathcal{G}_{n_k}|}{|\mathcal{T}^{(n_k)}(Q_Y)|} \tag{89} $$
exists. Using Lemma 5 below, we show that the LHS of (88) (and thus also the LHS of (87)) is lower bounded by $\rho H(Q_X)$ if $\alpha = 0$ and by $\rho \max\{H(Q_Y), H(Q_{X|Y})\}$ if $\alpha > 0$. This establishes the converse, because the lower of the two bounds must apply in any case.
Lemma 5.
If the pair $(X^n, Y^n)$ is drawn uniformly at random from a type class $\mathcal{T}^{(n)}(Q_{XY})$ and
$$ \limsup_{n \to \infty} \frac{|\mathcal{G}_n|}{|\mathcal{T}^{(n)}(Q_Y)|} = 0, \tag{90} $$
then
$$ \lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\big[ \tilde{g}_{X^n|Y^n}(X^n|Y^n)^\rho \big] = \rho\, H(Q_X), \tag{91} $$
where $Q_X$ and $Q_Y$ denote the $X$- and $Y$-marginals of $Q_{XY}$.
Proof. 
Note that since
$$ \lim_{n \to \infty} \frac{1}{n} \log \big| \mathcal{T}^{(n)}(Q_X) \big| = H(Q_X), \tag{92} $$
the LHS of (91) is trivially upper bounded by $\rho H(Q_X)$. It thus suffices to show that (90) yields the lower bound
$$ \liminf_{n \to \infty} \frac{1}{n} \log \mathbb{E}\big[ \tilde{g}_{X^n|Y^n}(X^n|Y^n)^\rho \big] \;\geq\; \rho\, H(Q_X). \tag{93} $$
To show that (90) yields (93), we define the indicator variable
$$ E_n := \begin{cases} 0, & \text{if } Y^n \in \mathcal{G}_n, \\ 1, & \text{else}, \end{cases} \tag{94} $$
and observe that, due to (90) and the fact that $Y^n$ is drawn uniformly at random from $\mathcal{T}^{(n)}(Q_Y)$,
$$ \lim_{n \to \infty} \mathbb{P}[E_n = 1] = 1. \tag{95} $$
Consequently, $H(E_n)$ tends to zero as $n$ tends to infinity, and because
$$ H(X^n) - H(X^n \mid E_n) = I(X^n; E_n) \leq H(E_n), \tag{96} $$
we get
$$ \lim_{n \to \infty} \big[ H(X^n) - H(X^n \mid E_n) \big] = 0. \tag{97} $$
This and (95) imply that
$$ \lim_{n \to \infty} \frac{1}{n} H(X^n) = \lim_{n \to \infty} \frac{1}{n} H(X^n \mid E_n = 1). \tag{98} $$
To conclude the proof of Lemma 5, we proceed as follows:
$$ \liminf_{n \to \infty} \frac{1}{n} \log \mathbb{E}\big[ \tilde{g}_{X^n|Y^n}(X^n|Y^n)^\rho \big] $$
$$ \geq \liminf_{n \to \infty} \frac{1}{n} \log \mathbb{E}\big[ \tilde{g}_{X^n|Y^n}(X^n|Y^n)^\rho \,\big|\, E_n = 1 \big] \tag{99} $$
$$ \geq \liminf_{n \to \infty} \frac{1}{n}\, \rho\, H_{\frac{1}{1+\rho}}(X^n \mid E_n = 1) \tag{100} $$
$$ \geq \liminf_{n \to \infty} \frac{1}{n}\, \rho\, H(X^n \mid E_n = 1) \tag{101} $$
$$ = \liminf_{n \to \infty} \frac{1}{n}\, \rho\, H(X^n) \tag{102} $$
$$ = \rho\, H(Q_X), \tag{103} $$
where (99) holds due to (95) and the law of total expectation; (100) follows from [3] (Theorem 1); (101) holds because the Rényi entropy is monotonically decreasing in its order and because $\rho > 0$; and (102) is due to (98). □
We now conclude the proof of the converse part of (74). Assume first that $\alpha$ (as defined in (89)) equals zero. By (88) and Lemma 5,
$$ \liminf_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] $$
$$ = \lim_{k \to \infty} \frac{1}{n_k} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^{n_k}}(Y^{n_k}) + \tilde{g}_{T;X^{n_k}|Y^{n_k}}(X^{n_k}|Y^{n_k}) \big)^\rho \Big] \tag{104} $$
$$ \geq \liminf_{k \to \infty} \frac{1}{n_k} \log \mathbb{E}\Big[ \tilde{g}_{T;X^{n_k}|Y^{n_k}}(X^{n_k}|Y^{n_k})^\rho \Big] \tag{105} $$
$$ = \rho\, H(Q_X), \tag{106} $$
establishing the first contribution to the RHS of (87). Next, let $\alpha > 0$. Applying [29] (Proposition 6.6) in conjunction with (89) and the fact that $Y^{n_k}$ is drawn uniformly at random from $\mathcal{T}^{(n_k)}(Q_Y)$,
$$ \mathbb{E}\big[ \tilde{g}_{T;Y^{n_k}}(Y^{n_k})^\rho \big] \;\geq\; \frac{\alpha}{2} \cdot \frac{2^{n_k \rho H(Q_Y)}\, 2^{-n_k \rho \delta_{n_k}}}{1+\rho} \tag{107} $$
for all sufficiently large k. Using (107) and proceeding analogously as in (82) to (85), we now establish the second contribution to the RHS of (87):
$$ \liminf_{n \to \infty} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^\rho \Big] $$
$$ = \lim_{k \to \infty} \frac{1}{n_k} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^{n_k}}(Y^{n_k}) + \tilde{g}_{T;X^{n_k}|Y^{n_k}}(X^{n_k}|Y^{n_k}) \big)^\rho \Big] \tag{108} $$
$$ \geq \liminf_{k \to \infty} \frac{1}{n_k} \log \Big( \mathbb{E}\big[ \tilde{g}_{T;Y^{n_k}}(Y^{n_k})^\rho \big] + \mathbb{E}\big[ \tilde{g}_{T;X^{n_k}|Y^{n_k}}(X^{n_k}|Y^{n_k})^\rho \big] \Big) \tag{109} $$
$$ \geq \liminf_{k \to \infty} \frac{1}{n_k} \log \bigg( \frac{\alpha}{2} \cdot \frac{2^{n_k \rho H(Q_Y)}\, 2^{-n_k \rho \delta_{n_k}}}{1+\rho} + \mathbb{E}\big[ \tilde{g}_{T;X^{n_k}|Y^{n_k}}(X^{n_k}|Y^{n_k})^\rho \big] \bigg) \tag{110} $$
$$ \geq \liminf_{k \to \infty} \frac{1}{n_k} \log \bigg( \frac{\alpha}{2} \cdot \frac{2^{n_k \rho H(Q_Y)}\, 2^{-n_k \rho \delta_{n_k}}}{1+\rho} + \frac{2^{n_k \rho H(Q_{X|Y})}\, 2^{-n_k \rho \delta_{n_k}}}{1+\rho} \bigg) \tag{111} $$
$$ = \rho\, \max\big\{ H(Q_Y),\, H(Q_{X|Y}) \big\}, \tag{112} $$
where (110) is due to (107), and in (111), we granted the guesser access to $Y^n$ at the beginning of Stage 2.

5. Summary and Outlook

We proposed a new variation on the Massey–Arikan guessing problem where, instead of guessing X n directly, the guesser is allowed to first produce guesses of a correlated ancillary sequence Y n . We characterized the least achievable exponential growth rate (in n) of the ρ -th moment of the total number of guesses in the two stages when X n is i.i.d. according to P X , Y i = f ( X i ) for all i [ 1 : n ] , and the guesser must recover Y n in Stage 1 before proceeding to Stage 2 (Section 3, Theorems 1 and 2); and when the pair ( X n , Y n ) is jointly i.i.d. according to P X Y and Stage 1 need not reveal Y n (Section 4, Theorem 3). Future directions of this work include:
1.
The generalization of our results to a larger class of sources (e.g., Markov sources);
2.
A study of the information-like properties of the guessing exponents (1) and (2);
3.
Finding the optimal block-wise description Y n = f ( X n ) and its associated two-stage guessing exponent;
4.
The generalization of the cryptographic problems [8,9] to a setting where the adversary may also produce guesses of leaked side information.

Author Contributions

Both authors contributed equally to this work (in its various stages which include the conceptualization, analysis, writing and proofreading). Both authors read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

The authors are indebted to Amos Lapidoth for his contribution to Section 4 (see [4]). The constructive comments in the review process, which helped to improve the presentation, are gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 1

If $\rho \geq 1$, then
$$ \bigg( \sum_{i=1}^k a_i \bigg)^{\!\rho} = k^\rho \bigg( \frac{1}{k} \sum_{i=1}^k a_i \bigg)^{\!\rho} \leq k^\rho \cdot \frac{1}{k} \sum_{i=1}^k a_i^\rho = k^{\rho-1} \sum_{i=1}^k a_i^\rho, \tag{A1} $$
where (A1) holds by Jensen's inequality, since the mapping $x \mapsto x^\rho$ for $x \geq 0$ is convex. If at least one of the non-negative $a_i$'s is positive (if all $a_i$'s are zero, the claim is trivial), then
$$ \bigg( \sum_{i=1}^k a_i \bigg)^{\!\rho} = \bigg( \sum_{j=1}^k a_j \bigg)^{\!\rho} \sum_{i=1}^k \frac{a_i}{\sum_{j=1}^k a_j} $$
$$ \geq \bigg( \sum_{j=1}^k a_j \bigg)^{\!\rho} \sum_{i=1}^k \bigg( \frac{a_i}{\sum_{j=1}^k a_j} \bigg)^{\!\rho} \tag{A2} $$
$$ = \sum_{i=1}^k a_i^\rho, $$
where (A2) holds since $0 \leq \frac{a_i}{\sum_{j=1}^k a_j} \leq 1$ for all $i \in [1:k]$ and $\rho \geq 1$. If $\rho \in (0,1)$, then the inequalities (A1) and (A2) are reversed. The conditions for equality in (15) are easily verified.

Appendix B. Proof of Lemma 2

From (33), $R_1(P_X)$ is a unit probability mass at one and $R_{|\mathcal{X}|}(P_X) = P_X$, so (42) gives that
$$ a_1(\alpha) = 0, \qquad a_{|\mathcal{X}|}(\alpha) = H_\alpha(X). \tag{B1} $$
In view of Lemma 5 in [14], it follows that for all $m \in [1:|\mathcal{X}|-1]$,
$$ a_{m+1}(\alpha) = \max_{Q \in \mathcal{P}_{m+1}:\, P_X \prec Q} H_\alpha(Q) \tag{B2} $$
$$ \geq \max_{Q \in \mathcal{P}_m:\, P_X \prec Q} H_\alpha(Q) \tag{B3} $$
$$ = a_m(\alpha), \tag{B4} $$
where (B2) and (B4) are due to [14] ((38) and (42) there), and (B3) holds since $\mathcal{P}_m \subseteq \mathcal{P}_{m+1}$.
We next prove Item (b). Consider the sequence of functions $\{f_m^*\}_{m=1}^{|\mathcal{X}|}$, defined over the set $\mathcal{X}$. By construction (see Algorithm 2), $f_{|\mathcal{X}|}^*$ is the identity function, since in this case all $|\mathcal{X}|$ nodes in the Huffman algorithm stay unchanged. We also have $f_1^*(x) = 1$ for all $x \in \mathcal{X}$ (in the latter case, by Algorithm 2, all nodes are merged by the Huffman algorithm into a single node). Hence, from (43),
$$ b_1(\alpha) = H_\alpha(X), \qquad b_{|\mathcal{X}|}(\alpha) = 0. \tag{B5} $$
Consider the construction of the function $f_m^*$ by Algorithm 2. Since the transition from $m+1$ to $m$ nodes is obtained by merging two nodes without affecting the other $m-1$ nodes, it follows from the data-processing theorem for the Arimoto–Rényi conditional entropy (see [24] (Theorem 2 and Corollary 1)) that, for all $m \in [1:|\mathcal{X}|-1]$,
$$ b_{m+1}(\alpha) = H_\alpha\big(X \mid f_{m+1}^*(X)\big) \;\leq\; H_\alpha\big(X \mid f_m^*(X)\big) = b_m(\alpha). \tag{B6} $$
We finally prove Item (c). Suppose that $P_X$ is supported on the whole set $\mathcal{X}$. Under this assumption, it follows from the strict Schur concavity of the Rényi entropy that the inequality in (B3) is strict, and therefore, (B2)–(B4) imply that $a_m(\alpha) < a_{m+1}(\alpha)$ for all $m \in [1:|\mathcal{X}|-1]$. In particular, Item (a) implies that $0 < a_m(\alpha) < H_\alpha(X)$ holds for every $m \in [2:|\mathcal{X}|-1]$. Furthermore, conditioning on $f_{m+1}^*(X)$ enables distinguishing between the two labels of $\mathcal{X}$ that correspond to the pair of nodes being merged (by the Huffman algorithm) in the transition from $f_{m+1}^*(X)$ to $f_m^*(X)$. Hence, the inequality in (B6) is strict under the assumption that $P_X$ is supported on $\mathcal{X}$. In particular, under that assumption, it follows from Item (b) that $0 < b_m(\alpha) < H_\alpha(X)$ holds for every $m \in [2:|\mathcal{X}|-1]$.

Appendix C. Proof of Lemma 3

We prove Lemma 3 as a consequence of Corollary C1 below and the fact that the number of denominator-n types on a finite set grows polynomially in n ([20], Theorem 11.1.1).
Corollary C1.
(Moser [29], (6.47) and Corollary 6.10). Let the random triple $(U, V, W)$ take values in the finite set $\mathcal{U} \times \mathcal{V} \times \mathcal{W}$, and let $\tilde{g}_U^*(\cdot)$, $\tilde{g}_{U|V}^*(\cdot|\cdot)$ and $\tilde{g}_{U|W}^*(\cdot|\cdot)$, $\tilde{g}_{U|V,W}^*(\cdot|\cdot,\cdot)$ be ranking functions that, for a given $\rho > 0$, minimize
$$ \mathbb{E}\Big[ \big( \tilde{g}_U(U) + \tilde{g}_{U|V}(U|V) \big)^\rho \Big] $$
over all two-stage guessing strategies (with no access to $W$) and
$$ \mathbb{E}\Big[ \big( \tilde{g}_{U|W}(U|W) + \tilde{g}_{U|V,W}(U|V,W) \big)^\rho \Big] $$
over all two-stage guessing strategies cognizant of $W$, respectively. Then,
$$ \mathbb{E}\Big[ \big( \tilde{g}_U^*(U) + \tilde{g}_{U|V}^*(U|V) \big)^\rho \Big] \;\leq\; |\mathcal{W}|^\rho\; \mathbb{E}\Big[ \big( \tilde{g}_{U|W}^*(U|W) + \tilde{g}_{U|V,W}^*(U|V,W) \big)^\rho \Big]. $$
Lemma 3 follows from Corollary C1 with $U \leftarrow X^n$, $\mathcal{U} \leftarrow \mathcal{X}^n$; $V \leftarrow Y^n$, $\mathcal{V} \leftarrow \mathcal{Y}^n$; $W \leftarrow \Pi_{X^nY^n}$, $\mathcal{W} \leftarrow \mathcal{T}_n(\mathcal{X} \times \mathcal{Y})$, and by noticing that for all $n \in \mathbb{N}$,
$$ |\mathcal{T}_n(\mathcal{X} \times \mathcal{Y})|^\rho \;\leq\; (n+1)^{\rho(|\mathcal{X} \times \mathcal{Y}| - 1)} \;\leq\; k\, n^a, $$
where $a := \rho\,(|\mathcal{X} \times \mathcal{Y}| - 1)$ and $k := 2^a$ are positive constants independent of $n$.

References

1. Massey, J.L. Guessing and entropy. In Proceedings of the 1994 IEEE International Symposium on Information Theory, Trondheim, Norway, 27 June–1 July 1994; p. 204.
2. McEliece, R.J.; Yu, Z. An inequality on entropy. In Proceedings of the 1995 IEEE International Symposium on Information Theory, Whistler, BC, Canada, 17–22 September 1995; p. 329.
3. Arikan, E. An inequality on guessing and its application to sequential decoding. IEEE Trans. Inf. Theory 1996, 42, 99–105.
4. Graczyk, R.; Lapidoth, A. Two-stage guessing. In Proceedings of the 2019 IEEE International Symposium on Information Theory, Paris, France, 7–12 July 2019; pp. 475–479.
5. Arikan, E.; Merhav, N. Joint source-channel coding and guessing with application to sequential decoding. IEEE Trans. Inf. Theory 1998, 44, 1756–1769.
6. Boztaş, S. Comments on "An inequality on guessing and its application to sequential decoding". IEEE Trans. Inf. Theory 1997, 43, 2062–2063.
7. Cachin, C. Entropy Measures and Unconditional Security in Cryptography. Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 1997.
8. Merhav, N.; Arikan, E. The Shannon cipher system with a guessing wiretapper. IEEE Trans. Inf. Theory 1999, 45, 1860–1866.
9. Bracher, A.; Hof, E.; Lapidoth, A. Guessing attacks on distributed-storage systems. IEEE Trans. Inf. Theory 2019, 65, 6975–6998.
10. Arikan, E.; Merhav, N. Guessing subject to distortion. IEEE Trans. Inf. Theory 1998, 44, 1041–1056.
11. Bracher, A.; Lapidoth, A.; Pfister, C. Distributed task encoding. In Proceedings of the 2017 IEEE International Symposium on Information Theory, Aachen, Germany, 25–30 June 2017; pp. 1993–1997.
12. Bracher, A.; Lapidoth, A.; Pfister, C. Guessing with distributed encoders. Entropy 2019, 21, 298.
13. Christiansen, M.M.; Duffy, K.R. Guesswork, large deviations, and Shannon entropy. IEEE Trans. Inf. Theory 2013, 59, 796–802.
14. Sason, I. Tight bounds on the Rényi entropy via majorization with applications to guessing and compression. Entropy 2018, 20, 896.
15. Sason, I.; Verdú, S. Improved bounds on lossless source coding and guessing moments via Rényi measures. IEEE Trans. Inf. Theory 2018, 64, 4323–4346.
16. Sundaresan, R. Guessing under source uncertainty. IEEE Trans. Inf. Theory 2007, 53, 269–287.
17. Merhav, N.; Cohen, A. Universal randomized guessing with application to asynchronous decentralized brute-force attacks. IEEE Trans. Inf. Theory 2020, 66, 114–129.
18. Salamatian, S.; Beirami, A.; Cohen, A.; Médard, M. Centralized versus decentralized multi-agent guesswork. In Proceedings of the 2017 IEEE International Symposium on Information Theory, Aachen, Germany, 25–30 June 2017; pp. 2263–2267.
19. Graczyk, R.; Lapidoth, A. Gray–Wyner and Slepian–Wolf guessing. In Proceedings of the 2020 IEEE International Symposium on Information Theory, Los Angeles, CA, USA, 21–26 June 2020; pp. 2207–2211.
20. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006.
21. Rényi, A. On measures of entropy and information. In Proceedings of the 4th Berkeley Symposium on Probability Theory and Mathematical Statistics, Berkeley, CA, USA, 8–9 August 1961; pp. 547–561.
22. Marshall, A.W.; Olkin, I.; Arnold, B.C. Inequalities: Theory of Majorization and Its Applications, 2nd ed.; Springer: New York, NY, USA, 2011.
23. Arimoto, S. Information measures and capacity of order α for discrete memoryless channels. In Proceedings of the 2nd Colloquium on Information Theory, Keszthely, Hungary, 25–30 August 1975; Csiszár, I., Elias, P., Eds.; Colloquia Mathematica Societatis János Bolyai: Amsterdam, The Netherlands, 1977; Volume 16, pp. 41–52.
24. Fehr, S.; Berens, S. On the conditional Rényi entropy. IEEE Trans. Inf. Theory 2014, 60, 6801–6810.
25. Sason, I.; Verdú, S. Arimoto–Rényi conditional entropy and Bayesian M-ary hypothesis testing. IEEE Trans. Inf. Theory 2018, 64, 4–25.
26. Cicalese, F.; Gargano, L.; Vaccaro, U. Bounds on the entropy of a function of a random variable and their applications. IEEE Trans. Inf. Theory 2018, 64, 2220–2230.
27. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W. H. Freeman and Company: New York, NY, USA, 1979.
28. Cicalese, F.; Gargano, L.; Vaccaro, U. An information theoretic approach to probability mass function truncation. In Proceedings of the 2019 IEEE International Symposium on Information Theory, Paris, France, 7–12 July 2019; pp. 702–706.
29. Moser, S.M. Advanced Topics in Information Theory (Lecture Notes), 4th ed.; Signal and Information Processing Laboratory, ETH Zürich: Zürich, Switzerland; Institute of Communications Engineering, National Chiao Tung University: Hsinchu, Taiwan, 2020.
Figure 1. A plot of $v\colon (0,\infty) \to (0, \log 2)$, which is monotonically increasing, continuous, and satisfies (37).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
