Article

Exploiting the Massey Gap

by Andrei Tănăsescu and Pantelimon George Popescu *
Department of Computer Science and Engineering, University Politehnica of Bucharest, Splaiul Independentei 313 (6), 060042 Bucharest, Romania
* Author to whom correspondence should be addressed.
Entropy 2020, 22(12), 1398; https://doi.org/10.3390/e22121398
Submission received: 21 October 2020 / Revised: 26 November 2020 / Accepted: 7 December 2020 / Published: 11 December 2020
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract: In this paper, we find new refinements to the Massey inequality, which relates the Shannon and guessing entropies, introducing a new concept: the Massey gap. By shrinking the Massey gap, we improve all previous work without introducing any new parameters, providing closed-form strict refinements, as well as a numerical procedure improving them even further.

1. Introduction

The guessing entropy associated with a probabilistic vector $\mathbf{p} = (p_1, p_2, \ldots, p_n)$ with $p_1 \geq \cdots \geq p_n > 0$ is the expected value of the random variable $G(\mathbf{p})$ given by $\Pr[G(\mathbf{p}) = i] = p_i$ for $i \in \overline{1, n}$, i.e., $E[G(\mathbf{p})] = \sum_{i=1}^{n} i\, p_i$, and it corresponds to the optimal average number of binary questions required to guess the value of a random variable distributed according to $\mathbf{p}$. Meanwhile, the Shannon entropy of $\mathbf{p}$ is $H(\mathbf{p}) = -\sum_{i=1}^{n} p_i \log p_i$, and the first relation between them was provided by J. Massey in [1], namely that if $H(\mathbf{p}) \geq 2$, then $E[G(\mathbf{p})] \geq 2^{H(\mathbf{p})-2} + 1$.
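For concreteness, these two quantities and Massey's bound can be checked numerically. The following Python sketch is illustrative only; the uniform example distribution is our own choice, not taken from the text:

```python
# Minimal numerical check of the definitions and of the Massey inequality.
import math

p = [1/8] * 8                                  # descending probabilistic vector (uniform example)
H = -sum(pi * math.log2(pi) for pi in p)       # Shannon entropy, in bits
EG = sum(i * pi for i, pi in enumerate(p, 1))  # guessing entropy E[G(p)]

if H >= 2:                                     # Massey's inequality applies only when H(p) >= 2
    print(EG >= 2 ** (H - 2) + 1)              # True
print(H, EG)                                   # 3.0 4.5
```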
While the Massey inequality has been shown to have many applications, such as password guessing [2], and has encouraged multiple developments, such as moment inequalities (e.g., [3,4]), there are only two known direct refinements of the Massey inequality, both of which were published in 2019. In an ISIT paper, Popescu and Choudary [5] found that:
$$E[G(\mathbf{p})] \;\geq\; 2^{H(\mathbf{p}) + 2p_n - 2} + 1 - p_n \qquad(1)$$
$$\;\geq\; 2^{H(\mathbf{p}) + p_n - 2} + 1 - \tfrac{1}{2}p_n \qquad(2)$$
$$\;\geq\; 2^{H(\mathbf{p}) - 2} + 1, \qquad(3)$$
subject to the same conditions as the Massey inequality. Meanwhile, Rioul's inequality [6], published in a CHES paper [7], states that for all values of $H(\mathbf{p})$, we have:
$$E[G(\mathbf{p})] > 2^{H(\mathbf{p})}/e, \qquad(4)$$
a bound that is similar to the Massey inequality and refines it when $H(\mathbf{p}) \geq \log\frac{e}{1 - e/4}$.
Remarkably, the refinement in [5] is actually strict, since $\mathbf{p}$ has finite support, $\|\mathbf{p}\|_0 = n \leq 1/\inf \mathbf{p} < \infty$. In fact, applying the method from [5], it can easily be observed that Equation (4) could be further improved. For example, applying just the first step of the method presented in [5], we obtain that for $H(\mathbf{p}) \geq \log\frac{e}{2\ln 2}$, we have a new further refinement of Equation (4),
$$E[G(\mathbf{p})] > 2^{H(\mathbf{p}) + p_n}/e - \tfrac{1}{2}p_n > 2^{H(\mathbf{p})}/e.$$
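As an illustrative aside, not part of the original argument, the chain above can be checked numerically; the split rate $\alpha = 1/2$ and the uniform test distribution below are our own choices:

```python
# Check the one-step improvement of Rioul's bound, Equation (4), on an example.
import math

p = [1/8] * 8                                   # uniform: H = 3, E[G] = 4.5, p_n = 1/8
H = -sum(x * math.log2(x) for x in p)
EG = sum(i * x for i, x in enumerate(p, 1))
pn = p[-1]

rioul    = 2 ** H / math.e                      # E[G(p)] > 2^H / e
improved = 2 ** (H + pn) / math.e - pn / 2      # one tail split with alpha = 1/2
print(EG > improved > rioul)                    # True on this example
```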
Motivated by this insight, in this paper, we find and fully investigate a method to refine an inequality regarding finite support vectors using only itself. Here, we apply our analysis to the famous Massey inequality and explore multiple avenues of applying our method: shrinking the Massey gap.

2. Results

In this section, we present our results, as well as their proofs. In Section 2.1, we find a refinement of the initial result of [5], i.e., Equation (2), which we improve in Section 2.2 beyond even the strongest result of [5], i.e., Equation (1). Then, in Section 2.3, we introduce a new concept, the Massey gap, and use it to generalize the method of Section 2.2. Next, in Section 2.4, we show how, using the Massey gap, we can almost always improve our refinement by rebalancing the middle of the distribution, and finally, in Section 2.5, we find closed-form, easy-to-compute refinements of the work of [5].

2.1. An Improved Refinement to the Massey Inequality

In this section, we find a new relation between the Shannon and guessing entropies, dependent on the minimal probability of a given distribution, refining the initial bound, Equation (2), from the work of [5].
Consider a decreasing probabilistic vector $\mathbf{p} = (p_1, p_2, \ldots, p_n)$ with $H(\mathbf{p}) \geq 2$. Inspired by the trick in [5], we construct the new probabilistic vector $\mathbf{q} = (p_1, p_2, \ldots, p_{n-1}, (1-\alpha)p_n, \alpha p_n)$, which is decreasing and strictly positive if and only if $\alpha \in (0, 1/2]$. From the grouping property of entropy,
$$H(\mathbf{q}) = H(\mathbf{p}) + p_n h(\alpha), \qquad(5)$$
where $h(\alpha) = -\alpha \log \alpha - (1-\alpha)\log(1-\alpha)$ is the binary entropy function. Moreover, we calculate:
$$E[G(\mathbf{q})] = E[G(\mathbf{p})] + \alpha p_n. \qquad(6)$$
Thus, because $H(\mathbf{q}) \geq 2$, we can apply the Massey inequality for $\mathbf{q}$ to obtain:
$$E[G(\mathbf{p})] = E[G(\mathbf{q})] - \alpha p_n \geq 2^{H(\mathbf{q})-2} + 1 - \alpha p_n = 2^{H(\mathbf{p}) + p_n h(\alpha) - 2} + 1 - \alpha p_n.$$
Now, we note that $h(0) = 0$ and $h(1/2) = 1 > \frac{1}{2}\cdot\frac{1}{\ln 2} \approx 0.7213$; thus, by the strict concavity of $h$, we have $h(\alpha) > \frac{\alpha}{\ln 2}$. Using the inequality $2^x > 1 + x\ln 2$ for $x = p_n h(\alpha) > 0$, we find that:
$$0 \leq p_n\big(h(\alpha)\ln 2 - \alpha\big) < 2^{p_n h(\alpha)} - 1 - \alpha p_n \leq 2^{H(\mathbf{p})-2}\big(2^{p_n h(\alpha)} - 1\big) - \alpha p_n = \Big(2^{H(\mathbf{p})+p_n h(\alpha)-2} + 1 - \alpha p_n\Big) - \Big(2^{H(\mathbf{p})-2}+1\Big),$$
i.e., our lower bound is always strictly tighter than Massey's,
$$E[G(\mathbf{p})] \geq 2^{H(\mathbf{p}) + p_n h(\alpha) - 2} + 1 - \alpha p_n > 2^{H(\mathbf{p})-2} + 1 \qquad \forall \alpha \in (0, 1/2]. \qquad(7)$$
For the particular value $\alpha = 1/2$, the bound above produces the initial bound, Equation (2), of [5]. Thus, we conclude this subsection with the following refinement of Equation (2) summarizing the above:
Theorem 1.
For strictly positive descending probabilistic vectors $\mathbf{p} \in \mathbb{R}^n$ with $H(\mathbf{p}) \geq 2$, we have:
$$E[G(\mathbf{p})] \geq \sup_{\alpha \in (0,1/2]} \Big(2^{H(\mathbf{p}) + p_n h(\alpha) - 2} + 1 - \alpha p_n\Big) \geq 2^{H(\mathbf{p}) + p_n - 2} + 1 - \tfrac{1}{2}p_n > \inf_{\alpha \in (0,1/2]} \Big(2^{H(\mathbf{p}) + p_n h(\alpha) - 2} + 1 - \alpha p_n\Big) = 2^{H(\mathbf{p})-2} + 1.$$
Proof. 
The first inequality follows by taking the supremum over $\alpha$ in Equation (7). The second and third inequalities follow by taking $\alpha = 1/2$, which yields the right-hand side of Equation (2); the strictness of the last inequality follows from the strictness of Equation (7), whose right-hand side is the bound obtained by taking $\alpha = 0$. □
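As an illustration of Theorem 1 (ours, not part of the proof), the supremum over $\alpha$ can be approximated by a simple grid search; the grid resolution and the example distribution are assumptions of the sketch:

```python
# Approximate the Theorem 1 bound by scanning alpha over (0, 1/2].
import math

def h(a):                                      # binary entropy, in bits
    return -a * math.log2(a) - (1 - a) * math.log2(1 - a)

def theorem1_bound(p, steps=1000):
    H = -sum(x * math.log2(x) for x in p)
    pn = p[-1]
    alphas = (k / (2 * steps) for k in range(1, steps + 1))   # grid over (0, 1/2]
    return max(2 ** (H + pn * h(a) - 2) + 1 - a * pn for a in alphas)

p = [1/8] * 8
H = -sum(x * math.log2(x) for x in p)
print(theorem1_bound(p), 2 ** (H - 2) + 1)     # refined bound vs. Massey's bound
```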
Up to this point, we have found a direct refinement of the initial result of [5]. In the next section, we leverage Theorem 1 in order to refine the strongest result, the bound in Equation (1).

2.2. Refinement through Generalization

In this section, we extend the reasoning of [5] to improve our bound from Theorem 1 beyond (1). We leverage a technique previously used in [8], namely applying the Massey inequality to increasingly complicated perturbations of the input in order to obtain ever tighter bounds for the original input.
Given an initial decreasing probabilistic vector $\mathbf{p}$, we construct a list of probabilistic sequences $Q_k$, which we recursively define according to the procedure in the previous subsection.
We begin by fixing an arbitrary parameter $\alpha \in (0, 1/2]$ as in the section above. Now, we explicate our construction. Denoting by $Q_{k,i}$ the $i$th component of the sequence $Q_k$, we define the terms of the list $Q_k$ as follows. We let the support of the first term coincide with that of $\mathbf{p}$, i.e., $Q_0 = (p_1, p_2, \ldots, p_n, 0, 0, \ldots)$, and we define the other terms by recurrence:
$$Q_{k+1} = \big(Q_{k,1}, Q_{k,2}, \ldots, Q_{k,n+k-1},\ (1-\alpha)Q_{k,n+k},\ \alpha Q_{k,n+k},\ 0, 0, \ldots\big).$$
Now, we expand the recursion relations for the Shannon entropy $H(Q_{k+1})$ and the guessing entropy $E[G(Q_{k+1})]$, which are analogous to Equations (5) and (6), respectively, to obtain:
$$H(Q_{k+1}) = H(Q_k) + Q_{k,n+k}\,h(\alpha) = H(Q_k) + \alpha^k p_n h(\alpha) = H(\mathbf{p}) + p_n h(\alpha)\sum_{j=0}^{k}\alpha^j = H(\mathbf{p}) + p_n h(\alpha)\,\frac{1-\alpha^{k+1}}{1-\alpha},$$
$$E[G(Q_{k+1})] = E[G(Q_k)] + \alpha\,Q_{k,n+k} = E[G(Q_k)] + \alpha^{k+1} p_n = E[G(\mathbf{p})] + p_n\sum_{j=0}^{k}\alpha^{j+1} = E[G(\mathbf{p})] + p_n\alpha\,\frac{1-\alpha^{k+1}}{1-\alpha},$$
so, at each step $k$, we can use $Q_{k+1}$ to obtain an approximation of $E[G(Q_k)]$ that is tighter than Massey's. This procedure allows us to obtain ever tighter approximations of the guessing entropy.
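As a sanity check (our illustration, not part of the proof), the recursive list and the closed-form expressions above can be verified numerically; the choices of $\alpha$, the number of steps, and the test vector are arbitrary:

```python
# Build Q_k by repeatedly splitting the last nonzero mass and compare H and E[G]
# with the closed-form expressions derived above.
import math

def entropy(q):  return -sum(x * math.log2(x) for x in q if x > 0)
def guessing(q): return sum(i * x for i, x in enumerate(q, 1))

p, alpha, K = [1/8] * 8, 0.4, 6
H0, EG0, pn = entropy(p), guessing(p), p[-1]
hA = -alpha * math.log2(alpha) - (1 - alpha) * math.log2(1 - alpha)

Q = list(p)
for k in range(1, K + 1):
    Q = Q[:-1] + [(1 - alpha) * Q[-1], alpha * Q[-1]]   # split the last entry
    geom = (1 - alpha ** k) / (1 - alpha)               # sum_{j=0}^{k-1} alpha^j
    assert math.isclose(entropy(Q),  H0 + pn * hA * geom)
    assert math.isclose(guessing(Q), EG0 + pn * alpha * geom)
print("closed forms verified for k = 1 ..", K)
```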
For example, fixing an arbitrary mixing rate $\alpha > 0$ and step number $k \geq 0$, we apply the Massey inequality for the distribution $Q_{k+1}$ and find that, as per Equation (7),
$$E[G(Q_k)] = E[G(Q_{k+1})] - \alpha Q_{k,n+k} \geq 2^{H(Q_{k+1})-2} + 1 - \alpha Q_{k,n+k} > 2^{H(Q_k)-2} + 1.$$
Now, this refinement trickles down, producing ever tighter bounds on the guessing entropy of lower-index elements of the list, up to the initial element $\mathbf{p} = Q_0$,
$$E[G(\mathbf{p})] = E[G(Q_k)] - p_n\alpha\,\frac{1-\alpha^{k}}{1-\alpha} = E[G(Q_k)] + \sum_{j=0}^{k-1}\big(E[G(Q_j)] - E[G(Q_{j+1})]\big) \geq 2^{H(Q_k)-2} + 1 + \sum_{j=0}^{k-1}\big(E[G(Q_j)] - E[G(Q_{j+1})]\big) > 2^{H(Q_{k-1})-2} + 1 + \sum_{j=0}^{k-2}\big(E[G(Q_j)] - E[G(Q_{j+1})]\big) > \cdots > 2^{H(Q_0)-2} + 1 = 2^{H(\mathbf{p})-2} + 1,$$
where the tightest of the enumerated bounds is:
$$E[G(\mathbf{p})] \geq 2^{H(Q_k)-2} + 1 + \sum_{j=0}^{k-1}\big(E[G(Q_j)] - E[G(Q_{j+1})]\big) = 2^{H(\mathbf{p}) + p_n h(\alpha)\frac{1-\alpha^{k}}{1-\alpha} - 2} + 1 - p_n\alpha\,\frac{1-\alpha^{k}}{1-\alpha},$$
which, as we have shown, increases with $k$ up to the limit:
$$E[G(\mathbf{p})] \geq 2^{H(\mathbf{p}) + \frac{p_n h(\alpha)}{1-\alpha} - 2} + 1 - \frac{p_n\alpha}{1-\alpha} \qquad \forall \alpha \in (0, 1/2], \qquad(8)$$
where the optimal value of α will be discussed in the following sections.
We now conclude this section with an even stronger refinement of the Massey inequality.
Theorem 2.
For strictly positive descending probabilistic vectors $\mathbf{p} \in \mathbb{R}^n$ with $H(\mathbf{p}) \geq 2$, we have:
$$E[G(\mathbf{p})] \geq \sup_{\alpha \in (0,1/2]}\Big(2^{H(\mathbf{p}) + \frac{h(\alpha)}{1-\alpha}p_n - 2} + 1 - \frac{\alpha}{1-\alpha}p_n\Big) \geq 2^{H(\mathbf{p}) + 2p_n - 2} + 1 - p_n > 2^{H(\mathbf{p})-2} + 1.$$
Proof. 
The first inequality follows by taking the supremum over $\alpha$ in Equation (8), and the rest by taking $\alpha = 1/2$. □
Furthermore, note that Theorem 2 is a refinement over the strongest result of [5], namely Equation (1).
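For illustration only (our sketch, with an assumed grid resolution), the Theorem 2 bound can be evaluated in the same way as the Theorem 1 bound, now using the limiting expression in Equation (8):

```python
# Approximate the Theorem 2 bound and compare it with Equation (1).
import math

def h(a):
    return -a * math.log2(a) - (1 - a) * math.log2(1 - a)

def theorem2_bound(p, steps=1000):
    H = -sum(x * math.log2(x) for x in p)
    pn = p[-1]
    best = 0.0
    for k in range(1, steps + 1):
        a = k / (2 * steps)                                     # alpha in (0, 1/2]
        best = max(best, 2 ** (H + h(a) / (1 - a) * pn - 2) + 1 - a / (1 - a) * pn)
    return best

p = [1/8] * 8
H = -sum(x * math.log2(x) for x in p)
print(theorem2_bound(p), 2 ** (H + 2 * p[-1] - 2) + 1 - p[-1])  # Theorem 2 vs. Equation (1)
```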

2.3. The Massey Gap

In this section, we reevaluate the scaffolding we used to prove Theorem 2 and re-frame our previous technique as an optimization process applied to an objective function we name the Massey gap, the function $M(\mathbf{x})$ described below.
The first observation that we make is that Theorem 2 also applies to descending probabilistic infinite sequences $\mathbf{p}$, by taking $n$ to be the size of its support, $\|\mathbf{p}\|_0$. Moreover, we notice that the core of the argument was not the precise construction of the list $Q_k$, but rather the fact that:
$$2^{H(Q_k)-2} + 1 + E[G(Q_{k-1})] - E[G(Q_k)] > 2^{H(Q_{k-1})-2} + 1 \qquad \forall k \geq 1,$$
i.e., the sequences $Q_k$ incrementally optimize the objective function $M(\mathbf{x}) = E[G(\mathbf{x})] - 2^{H(\mathbf{x})-2} - 1$, with tightness being achieved if and only if $\lim_{k\to\infty} M(Q_k) = 0$.
Motivated by this insight, we now take an alternative view and focus not on the precise sequence of distributions, but rather on the objective function $M(\mathbf{x})$, which we name the Massey gap.
This view leads us to the following useful result:
Remark 1.
Let $X_k$ be a list of probabilistic sequences with $H(X_k) \geq 2$ and $M(X_k) \geq M(X_{k+1}) \geq 0$ for all values of $k$. Then, the sequence $y_k = 2^{H(X_k)-2} + 1 + E[G(X_0)] - E[G(X_k)]$ is increasing.
Notice that this remark provides a sequence $y_k$ of ever tighter lower bounds on $E[G(X_0)]$, the weakest of which, $y_0$, reduces to the Massey inequality, while the strongest is given by $\lim_{k\to\infty} y_k$. As an example, considering the list $X_k := Q_k$, this limit coincides with the result of Theorem 2.
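The Massey gap and the monotone bounds of Remark 1 translate directly into a few lines of code; the following sketch (with our own example distribution and $\alpha = 1/2$) instantiates $X_k$ with the list $Q_k$ of Section 2.2:

```python
# The Massey gap M(x) = E[G(x)] - 2^(H(x)-2) - 1 and the lower bounds y_k of Remark 1.
import math

def entropy(q):    return -sum(x * math.log2(x) for x in q if x > 0)
def guessing(q):   return sum(i * x for i, x in enumerate(q, 1))
def massey_gap(q): return guessing(q) - 2 ** (entropy(q) - 2) - 1

p, alpha = [1/8] * 8, 0.5
X, gaps, y = list(p), [], []
for _ in range(8):
    gaps.append(massey_gap(X))
    y.append(2 ** (entropy(X) - 2) + 1 + guessing(p) - guessing(X))
    X = X[:-1] + [(1 - alpha) * X[-1], alpha * X[-1]]       # next element of the list
print(all(g0 >= g1 >= 0 for g0, g1 in zip(gaps, gaps[1:]))) # the gap never grows
print(all(y0 <= y1 for y0, y1 in zip(y, y[1:])))            # y_k is increasing
print(y[-1] <= guessing(p))                                 # every y_k lower-bounds E[G(p)]
```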

2.4. Mixing Probabilities

In the previous section, we presented a refinement to (1) by generalizing the tail rebalancing method of [5]. In this section, we investigate whether we can obtain even better results by applying the same rebalancing technique in the middle of the distribution, rather than at the tail.
In the previous section, we showed how Theorem 2 follows from Remark 1 for some particular $X_k$. However, this option is not necessarily unique. In this section, we consider alternative methods of optimizing the Massey gap; that is, we fix an arbitrary intermediary configuration $X_k$, and we find conditions on the existence of available $X_{k+1}$ that decrease the Massey gap, i.e., $M(X_k) \geq M(X_{k+1})$.
In order to simplify the notation, for the remainder of this section, we denote the available configuration X k by p and the potential next configuration X k + 1 by q .
Now, we are ready for a trick similar to the one in Section 2.2: we fix an arbitrary integer $i \in \mathbb{N}$ such that $p_i \neq 0$ and an arbitrary mixing rate $\alpha \in (0, 1/2]$ and consider $\mathbf{q}$ of the form:
$$\mathbf{q} = \big(p_1, \ldots, p_{i-1},\ (1-\alpha)p_i + \alpha p_{i+1},\ \alpha p_i + (1-\alpha)p_{i+1},\ p_{i+2}, p_{i+3}, \ldots\big), \qquad(9)$$
with the same support as $\mathbf{p}$.
For $\mathbf{q}$ as defined above in Equation (9), we find that $E[G(\mathbf{q})] = E[G(\mathbf{p})] + \alpha(p_i - p_{i+1})$, while:
$$H(\mathbf{q}) = H(\mathbf{p}) + (p_i + p_{i+1})\left[h\!\left(\frac{q_i}{q_i + q_{i+1}}\right) - h\!\left(\frac{p_i}{p_i + p_{i+1}}\right)\right];$$
thus, the condition $M(\mathbf{p}) \geq M(\mathbf{q})$ is equivalent to:
$$h\!\left(\frac{q_i}{q_i + q_{i+1}}\right) \geq h\!\left(\frac{p_i}{p_i + p_{i+1}}\right) + \frac{1}{p_i + p_{i+1}}\log\!\left(1 + \frac{\alpha(p_i - p_{i+1})}{2^{H(\mathbf{p})-2}}\right), \qquad(10)$$
which, as we have seen in the previous section, always holds at least for some $i$, namely $i = \|\mathbf{p}\|_0$.
Now, we check whether alternative values for $i$ may be eligible, i.e., whether we may mix probabilities in the middle of the distribution. Using the inequality $1 - h\!\left(\frac{1+y}{2}\right) \leq y^2$ (tighter bounds may also be used) with $y = \frac{q_i - q_{i+1}}{q_i + q_{i+1}} = (1-2\alpha)\,\frac{p_i - p_{i+1}}{p_i + p_{i+1}}$, we find that:
$$h\!\left(\frac{q_i}{q_i + q_{i+1}}\right) \geq 1 - (1-2\alpha)^2\left(\frac{p_i - p_{i+1}}{p_i + p_{i+1}}\right)^2.$$
Thus, to satisfy $M(\mathbf{p}) \geq M(\mathbf{q})$, corresponding to a decrease of the Massey gap, it is sufficient that:
$$1 - (1-2\alpha)^2\left(\frac{p_i - p_{i+1}}{p_i + p_{i+1}}\right)^2 \geq h\!\left(\frac{p_i}{p_i + p_{i+1}}\right) + \frac{1}{p_i + p_{i+1}}\log\!\left(1 + \frac{p_i - p_{i+1}}{2^{H(\mathbf{p})-2}}\right),$$
i.e.,
$$\alpha \geq \frac{1}{2}\left(1 - \frac{p_i + p_{i+1}}{p_i - p_{i+1}}\left[1 - h\!\left(\frac{p_i}{p_i + p_{i+1}}\right) - \frac{1}{p_i + p_{i+1}}\log\!\left(1 + \frac{p_i - p_{i+1}}{2^{H(\mathbf{p})-2}}\right)\right]^{1/2}\right) \vee 0, \qquad(11)$$
so, depending on the value of $H(\mathbf{p})$, multiple indices $i$ and mixing ratios $\alpha$ may be selected for rebalancing, as outlined in the following theorem.
Theorem 3.
Let $\mathbf{p}$ be a descending probabilistic sequence with $H(\mathbf{p}) \geq 2$, and let $i \in \mathbb{N}$ with $p_i \neq 0$ and $\alpha \in (0, 1/2]$ satisfy Equation (10) or, sufficiently, Equation (11). Then, defining $\mathbf{q}$ as in Equation (9), we have:
$$E[G(\mathbf{p})] \geq 2^{H(\mathbf{q})-2} + 1 - \alpha(p_i - p_{i+1}) \geq 2^{H(\mathbf{p})-2} + 1.$$
Proof. 
We have shown that Equation (11) is a sufficient condition for us to have $M(\mathbf{p}) \geq M(\mathbf{q})$; thus, applying Remark 1 to the sequence $X = (\mathbf{p}, \mathbf{q}, \mathbf{q}, \mathbf{q}, \ldots)$, the result follows. □
We conclude this section by remarking that Theorem 3 has the potential to further refine bounds obtained using Theorem 1 depending on the form of the distribution p .
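To make the eligibility test concrete, the following sketch (ours; the example distribution, the candidate indices, and the mixing rate are arbitrary choices) evaluates the two sides of Equation (10); a True entry means that the corresponding rebalancing is admissible for Theorem 3:

```python
# Evaluate the rebalancing condition of Equation (10) for candidate indices i.
import math

def h(x): return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def eligible(p, i, alpha):                       # i is 1-based, alpha in (0, 1/2]
    H = -sum(x * math.log2(x) for x in p)        # requires H(p) >= 2
    pi, pj = p[i - 1], p[i]
    s = pi + pj
    qi = (1 - alpha) * pi + alpha * pj
    lhs = h(qi / s)
    rhs = h(pi / s) + (1 / s) * math.log2(1 + alpha * (pi - pj) / 2 ** (H - 2))
    return lhs >= rhs                            # Equation (10), i.e., M(p) >= M(q)

p = [0.35, 0.25, 0.15, 0.1, 0.08, 0.07]          # descending, with H(p) > 2
print([eligible(p, i, 0.3) for i in range(1, len(p))])
```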

2.5. Strict Refinement and Optimal Mixing Rate

In this section, we consider the problem of finding the optimal mixing rate α .
We notice that $\alpha$ must be a critical point of the difference between the sides of Equation (10), which leads us to the function:
$$f(\alpha) = \frac{p_i - p_{i+1}}{p_i + p_{i+1}}\left[\log\frac{q_i}{q_{i+1}} - \frac{1}{\alpha(p_i - p_{i+1}) + 2^{H(\mathbf{p})-2}}\right].$$
For example, we notice that if $H(\mathbf{p}) \geq 2$, then the denominator of the right term is greater than one and:
$$f(1/2) = -\frac{p_i - p_{i+1}}{p_i + p_{i+1}}\cdot\frac{1}{\frac{1}{2}(p_i - p_{i+1}) + 2^{H(\mathbf{p})-2}} < 0,$$
so the mixing rate $\alpha = 1/2$ is never optimal; thus, Theorem 2 is a strict refinement of the work of [5].
Investigating whether $\mathbf{q}$ actually decreases the Massey gap, we now consider the optimality of $\alpha = 0$. In this case, $q_i = p_i$ and $q_{i+1} = p_{i+1}$; thus, when $p_i > 2^{2^{2-H(\mathbf{p})}} p_{i+1}$, we have $f(0) > 0$, so $\alpha = 0$ is certainly sub-optimal, as $M(\mathbf{p}) - M(\mathbf{q})$ is increasing around zero.
Moreover, we note that the simple upper bound:
$$f(\alpha) \leq \frac{p_i - p_{i+1}}{p_i + p_{i+1}}\left[\log\frac{q_i}{q_{i+1}} - \frac{1}{(p_i - p_{i+1}) + 2^{H(\mathbf{p})-2}}\right]$$
imposes that the optimal mixing ratio $\alpha^*$ satisfy:
$$\frac{p_{i+1} + p_i}{q_{i+1}} - 1 = \frac{q_i}{q_{i+1}} \geq 2^{\left(p_i - p_{i+1} + 2^{H(\mathbf{p})-2}\right)^{-1}},$$
i.e.,
$$q_{i+1} \leq \frac{p_i + p_{i+1}}{1 + 2^{\left(p_i - p_{i+1} + 2^{H(\mathbf{p})-2}\right)^{-1}}},$$
and hence:
$$\alpha^* \leq \frac{1}{p_i - p_{i+1}}\left(\frac{p_i + p_{i+1}}{1 + 2^{\left(p_i - p_{i+1} + 2^{H(\mathbf{p})-2}\right)^{-1}}} - p_{i+1}\right) \leq \frac{1}{2}, \qquad(12)$$
which is a non-trivial bound.
For example, to apply Equation (12) to the particular case of mixing only at the tail of the distribution, i.e., Theorem 2, we take $i = n$, hence $p_{i+1} = 0$, and substituting into Equation (12), we obtain the requirement:
$$\alpha^* \leq \frac{1}{1 + 2^{1/\left(p_n + 2^{H(\mathbf{p})-2}\right)}} < \frac{1}{2}$$
on the mixing rate realizing the supremum, so we have a closed-form strict refinement of Equation (2):
Theorem 4.
Let $\mathbf{p}$ be a descending probabilistic sequence with $H(\mathbf{p}) \geq 2$, let $i \in \mathbb{N}$ be such that $p_i > 2^{2^{2-H(\mathbf{p})}} p_{i+1}$, and let $\alpha = 1\Big/\left(1 + 2^{1/\left(p_{\|\mathbf{p}\|_0} + 2^{H(\mathbf{p})-2}\right)}\right)$. Then, defining $\mathbf{q}$ as in Equation (9), we have:
$$E[G(\mathbf{p})] \geq 2^{H(\mathbf{q})-2} + 1 - \alpha(p_i - p_{i+1}) > 2^{H(\mathbf{p}) + p_{\|\mathbf{p}\|_0} - 2} + 1 - \tfrac{1}{2}p_{\|\mathbf{p}\|_0} \geq 2^{H(\mathbf{p})-2} + 1.$$
We end this section by remarking that Theorem 4 can be used in conjunction with Theorem 2, complementing the construction of the list of sequences defined in the proof of Theorem 2.
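For the tail case $i = \|\mathbf{p}\|_0$, the closed-form mixing rate of Theorem 4 is directly computable; the sketch below is our illustration, with an arbitrary example distribution, and compares it with the Equation (2) bound that it refines:

```python
# Theorem 4 at the tail: explicit alpha and the resulting lower bound on E[G(p)].
import math

def entropy(q): return -sum(x * math.log2(x) for x in q if x > 0)

def theorem4_tail_bound(p):
    H, pn = entropy(p), p[-1]
    alpha = 1 / (1 + 2 ** (1 / (pn + 2 ** (H - 2))))     # closed-form mixing rate
    hA = -alpha * math.log2(alpha) - (1 - alpha) * math.log2(1 - alpha)
    Hq = H + pn * hA                                      # tail split, as in Section 2.1
    return 2 ** (Hq - 2) + 1 - alpha * pn

p = [1/8] * 8
H = entropy(p)
print(theorem4_tail_bound(p))                             # refined bound
print(2 ** (H + p[-1] - 2) + 1 - p[-1] / 2)               # Equation (2), which it exceeds
```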

3. Discussion

In this section, we discuss in a more detailed manner the context of our work, evaluate the saturation conditions of the discussed bounds, and present prospects for future work.
We begin our discussion by taking a closer look at the most prevalently used form of the Massey inequality, namely Equation (3). Notably, this inequality only holds for the case $H(\mathbf{p}) \geq 2$, suggesting that its nice shape is a case of form vs. function, as illustrated below.
Massey [1] found that the set of probabilistic vectors of arbitrary length having a fixed guessing entropy $A$ is convex for any given $A$; thus, by the strict concavity of the Shannon entropy, its maximum over this set is attained at an interior point. Using Lagrange multipliers, Massey found that the critical $\mathbf{p}$ must be a geometric distribution, and imposing the condition $E[G(\mathbf{p})] = A$, he found $p_i = \frac{1}{A-1}\left(1 - \frac{1}{A}\right)^{i}$, hence:
$$H(\mathbf{p}) = \log(A-1) + \log\left(1 - \frac{1}{A}\right)^{-A}. \qquad(13)$$
Because this distribution maximizes the Shannon entropy, for arbitrary distributions $\mathbf{p}$, we have:
$$H(\mathbf{p}) \leq \log\big(E[G(\mathbf{p})] - 1\big) + \log\left(1 - \frac{1}{E[G(\mathbf{p})]}\right)^{-E[G(\mathbf{p})]}, \qquad(14)$$
an unwieldy form that is saturated only for geometric distributions. Thus, the bound in Equation (14) is only ever saturated when $\|\mathbf{p}\|_0 = \infty$; therefore, the Massey inequality is never tight for distributions encountered in most applications, such as cryptography.
To find a simpler form of Equation (14), Massey noticed that when $A \geq 2$, the second term of Equation (13) is at most two, with equality if and only if $A = 2$. Thus, when $H(\mathbf{p}) \geq 2$, $E[G(\mathbf{p})] \geq 2^{H(\mathbf{p})-2} + 1$. However, the simplicity of this form impacts its tightness, as we can remark:
Remark 2.
While the well-known form of Massey's lower bound, Equation (3), is only saturated when $p_i = 1/2^i$, his original inequality, Equation (14), is saturated by all geometric distributions, i.e., for all values of $E[G(\mathbf{p})]$.
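As a quick numerical confirmation of Remark 2 (ours), a geometric distribution with mean $A$ does saturate Equation (14); the truncation length in the sketch below only serves to keep the sums finite:

```python
# Geometric distributions saturate Massey's original bound, Equation (14).
import math

A = 3.0                                                        # target guessing entropy E[G(p)]
p = [(1 / (A - 1)) * (1 - 1 / A) ** i for i in range(1, 500)]  # geometric, truncated far out

H  = -sum(x * math.log2(x) for x in p)
EG = sum(i * x for i, x in enumerate(p, 1))
rhs = math.log2(A - 1) - A * math.log2(1 - 1 / A)              # right-hand side of Equation (14)
print(round(EG, 6), round(H, 6), round(rhs, 6))                # EG ~ A and H ~ rhs
```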
From this remark, we conclude that any inequality derived from an application of the Massey inequality is saturated if and only if the distribution for which the Massey inequality is ultimately applied is the geometric distribution of ratio 1 / 2 . It follows that for the initial result of [5], Equation (2), as well as for our own initial result from Theorem 1, the bounds are saturated only for the geometric distribution of the ratio 1 / 2 , even if they are tighter than the Massey inequality.
However, when those inequalities are applied after an unbounded number of steps, it is possible to obtain bounds saturated by more inputs. Indeed, for the stronger result of [5], as well as our own stronger result from Theorem 2, the bounds are saturated for any truncation of the geometric distribution of the ratio $1/2$, i.e., when the original distribution has the form $\mathbf{p} = \left(1/2, 1/4, \ldots, 1/2^{n-2}, 1/2^{n-1}, 1/2^{n-1}\right)$ for some $n \geq 2$, i.e., for a countable range of $E[G(\mathbf{p})]$.
It is then natural to ask how the tightness improvement in Theorems 1 and 2 is reflected in the saturation conditions. To this end, we notice that the limiting factor is the requirement that the limit of the list of probabilistic distributions be the geometric distribution of the ratio $1/2$; hence, for saturation, we necessarily have $\alpha = 1/2$, which is a direct consequence of using the nice form of the Massey inequality, Equation (3). As such, we propose the open question of finding a similar technique exploiting the Massey gap with respect to the original form of the Massey inequality, Equation (14).
We continue our discussion by comparing the many refinements of the Massey inequality that we have presented on a simple numerical example. Let $\mathbf{p} = (p_1, \ldots, p_{10})$ be the descending probabilistic vector containing the first nine terms of the geometric probability sequence of ratio $0.8$, together with the remaining tail mass $0.8^9$, sorted in decreasing order,
$$\mathbf{p} = \left(\frac{1}{5}, \frac{4}{25}, \frac{262144}{1953125}, \frac{16}{125}, \frac{64}{625}, \frac{256}{3125}, \frac{1024}{15625}, \frac{4096}{78125}, \frac{16384}{390625}, \frac{65536}{1953125}\right),$$
with Shannon entropy $H(\mathbf{p}) \approx 3.12516 > \log\frac{e}{1 - e/4} > 2$ and guessing entropy $E[G(\mathbf{p})] \approx 4.02939$. The state of the art provides various lower bounds on the guessing entropy: $3.18126$ from the Massey inequality, $3.21581$ from the weaker result, Equation (2), of [5], $3.25157$ from their stronger result, Equation (1), and $3.20977$ from Rioul's inequality [6,7]. Our new lower bounds for this example are: $3.21767$ as the supremum in Theorem 1, taking $\alpha \approx 0.38977$ (improving Equation (2)); $3.25157$ as the supremum in Theorem 2, taking $\alpha = 0.5$ (matching Equation (1)); and $3.21751$ as the explicit value in Theorem 4, taking $i = 10$ (improving Equation (2)). In this particular case, Theorem 3 cannot be applied for $i \neq 10$, as $p_i/p_{i+1} \leq 1.25$, while $2^{2^{2-H(\mathbf{p})}} \approx 1.374$. We notice that for this example, Theorem 2 matches the stronger result of [5].
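The figures quoted above can be reproduced with a few lines of code; the sketch below is ours (the grid used for the Theorem 1 supremum is an assumption) and recomputes the distribution, its entropies, and the closed-form bounds:

```python
# Reproduce the numerical example: nine geometric terms of ratio 0.8 plus the tail mass 0.8^9.
import math

def h(a): return -a * math.log2(a) - (1 - a) * math.log2(1 - a)

p = sorted([0.2 * 0.8 ** k for k in range(9)] + [0.8 ** 9], reverse=True)
H  = -sum(x * math.log2(x) for x in p)
EG = sum(i * x for i, x in enumerate(p, 1))
pn = p[-1]

massey = 2 ** (H - 2) + 1
eq2    = 2 ** (H + pn - 2) + 1 - pn / 2
eq1    = 2 ** (H + 2 * pn - 2) + 1 - pn
rioul  = 2 ** H / math.e
thm1   = max(2 ** (H + pn * h(k / 2000) - 2) + 1 - (k / 2000) * pn for k in range(1, 1001))
print(round(H, 5), round(EG, 5))                     # ~3.12516, ~4.02939
print(round(massey, 5), round(eq2, 5), round(eq1, 5), round(rioul, 5), round(thm1, 5))
```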
The final topic of our discussion addresses a direction that was previously left unaddressed in the literature: rebalancing the initial distribution. We take an example and use it to showcase the effectiveness of the approach in Theorem 3. Considering the initial distribution $\mathbf{p} = \left(5/8, 1/8, 1/8, 1/2^{16}, 1/2^{16}, \ldots, 1/2^{16}\right)$, we notice that $H(\mathbf{p}) \geq \frac{1}{8}\log 2^{16} = 2$, and moreover, $p_1/p_2 = 5 \geq 2^{2^{2-H(\mathbf{p})}}$. Hence, according to Section 2.4, we may rebalance our distribution, even at its head. Explicitly taking $i = 1$ and $\alpha = 1/5 < 1\Big/\left(1 + 2^{1/\left(2^{-16} + 2^{H(\mathbf{p})-2}\right)}\right)$, we construct $\mathbf{q} = \left(21/40, 9/40, 1/8, 1/2^{16}, \ldots, 1/2^{16}\right)$ such that $M(\mathbf{p}) \geq M(\mathbf{q}) \geq 0$. From Remark 1, it follows that each such rebalancing can be prepended to the list construction in Section 2.2.
However, continuing our example, we note that $q_3/q_4 = 2^{13}$, so, taking $i = 3$ and $\alpha = 1/5$, we may once again apply this procedure to obtain $\mathbf{r} = \left(21/40,\ 9/40,\ (1 + 2^{-15})/10,\ 1/40 + 2^{-14}/5,\ 1/2^{16}, \ldots, 1/2^{16}\right)$ with $M(\mathbf{p}) \geq M(\mathbf{q}) \geq M(\mathbf{r}) \geq 0$. Noting that now $r_2/r_3 \approx 9/4 = 2.25$ and $r_4/r_5 \approx 2^{16}/40 = 1638.4$, the procedure can continue, gradually trickling mass down from the head toward the tail.
We emphasize that the indices eligible for rebalancing change at every step and that indices that were once ineligible (such as 2 and 4) can become eligible at future steps. In fact, if we rebalance at Index 4, we notice that the fourth position in the newly created vector would be smaller than $(1 + 2^{-8})/50$, so Index 3 would once again become eligible. It follows that some indices can become eligible for rebalancing more than once.
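The two rebalancing steps above are easy to replay in code; the following sketch (ours) applies Equation (9) twice with $\alpha = 1/5$ and checks that the Massey gap never grows:

```python
# Replay the head-rebalancing example: Equation (9) at i = 1 and then at i = 3.
import math

def entropy(q):    return -sum(x * math.log2(x) for x in q if x > 0)
def guessing(q):   return sum(i * x for i, x in enumerate(q, 1))
def massey_gap(q): return guessing(q) - 2 ** (entropy(q) - 2) - 1

def rebalance(p, i, alpha):                    # Equation (9); i is 1-based
    q = list(p)
    q[i - 1] = (1 - alpha) * p[i - 1] + alpha * p[i]
    q[i]     = alpha * p[i - 1] + (1 - alpha) * p[i]
    return q

p = [5/8, 1/8, 1/8] + [2 ** -16] * 8192
q = rebalance(p, 1, 1/5)
r = rebalance(q, 3, 1/5)
print([round(x, 6) for x in r[:4]])            # 21/40, 9/40, (1 + 2^-15)/10, 1/40 + 2^-14/5
print(massey_gap(p) >= massey_gap(q) >= massey_gap(r) >= 0)   # the gap shrinks at each step
```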
In conclusion, in this paper, we strictly refined the work of [5], presenting a generalized approach in the form of Theorem 2, as well as a data-dependent numerical procedure resulting in even tighter refinements. The rebalancing technique of Section 2.4 and Theorem 3 provides a significant improvement over the procedure in Theorem 2; thus, for future work, we propose a precise study of the convergence of this method on real data, as well as a follow-up work showing how the same method can be applied to other inequalities, such as Rioul's inequality, Equation (4).

Author Contributions

Conceptualization, A.T. and P.G.P.; Investigation, A.T. and P.G.P.; Supervision, P.G.P.; Writing—original draft, A.T.; Writing—review & editing, A.T. and P.G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We thank Olivier Rioul who showed us Equation (4), a gem well hidden within the pages of [7].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Massey, J.L. Guessing and entropy. In Proceedings of the 1994 IEEE International Symposium on Information Theory, Trondheim, Norway, 27 June–1 July 1994; IEEE: Piscataway, NJ, USA, 1994; p. 204.
  2. Murray, H.; Malone, D. Convergence of Password Guessing to Optimal Success Rates. Entropy 2020, 22, 378.
  3. Arikan, E. An inequality on guessing and its application to sequential decoding. IEEE Trans. Inf. Theory 1996, 42, 99–105.
  4. Weinberger, N.; Shayevitz, O. Guessing with a Bit of Help. Entropy 2020, 22, 39.
  5. Popescu, P.G.; Choudary, M.O. Refinement of Massey Inequality. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 495–496.
  6. Rioul, O. On Guessing. Unpublished Note, 2013.
  7. De Chérisey, E.; Guilley, S.; Rioul, O.; Piantanida, P. Best Information is Most Successful. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2019, 49–79.
  8. Ţăpuş, N.; Popescu, P. A new entropy upper bound. Appl. Math. Lett. 2012, 25, 1887–1890.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
