Article

An Integral Representation of the Logarithmic Function with Applications in Information Theory

Neri Merhav and Igal Sason
The Andrew and Erna Viterbi Faculty of Electrical Engineering, Israel Institute of Technology, Technion City, Haifa 3200003, Israel
* Author to whom correspondence should be addressed.
Entropy 2020, 22(1), 51; https://doi.org/10.3390/e22010051
Submission received: 5 December 2019 / Revised: 27 December 2019 / Accepted: 27 December 2019 / Published: 30 December 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

We explore a well-known integral representation of the logarithmic function, and demonstrate its usefulness in obtaining compact, easily computable exact formulas for quantities that involve expectations and higher moments of the logarithm of a positive random variable (or the logarithm of a sum of i.i.d. positive random variables). The integral representation of the logarithm is proved useful in a variety of information-theoretic applications, including universal lossless data compression, entropy and differential entropy evaluations, and the calculation of the ergodic capacity of the single-input, multiple-output (SIMO) Gaussian channel with random parameters (known to both transmitter and receiver). This integral representation and its variants are anticipated to serve as a useful tool in additional applications, as a rigorous alternative to the popular (but non-rigorous) replica method (at least in some situations).

1. Introduction

In analytic derivations pertaining to many problem areas in information theory, one frequently encounters the need to calculate expectations and higher moments of expressions that involve the logarithm of a positive-valued random variable, or more generally, the logarithm of the sum of several i.i.d. positive random variables. The common practice, in such situations, is either to resort to upper and lower bounds on the desired expression (e.g., using Jensen’s inequality or any other well-known inequalities), or to apply the Taylor series expansion of the logarithmic function. A more modern approach is to use the replica method (see, e.g., in [1] (Chapter 8)), which is a popular (but non-rigorous) tool that has been borrowed from the field of statistical physics with considerable success.
The purpose of this work is to point out an alternative approach and to demonstrate its usefulness in some frequently encountered situations. In particular, we consider the following integral representation of the logarithmic function (to be proved in the sequel),
$$\ln x = \int_0^\infty \frac{e^{-u} - e^{-ux}}{u}\, du, \qquad x > 0. \tag{1}$$
The immediate use of this representation is in situations where the argument of the logarithmic function is a positive-valued random variable, X, and we wish to calculate the expectation, $\mathbb{E}\{\ln X\}$. By commuting the expectation operator with the integration over u (assuming that this commutation is valid), the calculation of $\mathbb{E}\{\ln X\}$ is replaced by the (often easier) calculation of the moment-generating function (MGF) of X, as
$$\mathbb{E}\{\ln X\} = \int_0^\infty \Big(e^{-u} - \mathbb{E}\big\{e^{-uX}\big\}\Big)\,\frac{du}{u}. \tag{2}$$
Moreover, if $X_1,\dots,X_n$ are positive i.i.d. random variables, then
$$\mathbb{E}\{\ln(X_1+\dots+X_n)\} = \int_0^\infty \Big(e^{-u} - \big[\mathbb{E}\big\{e^{-uX_1}\big\}\big]^n\Big)\,\frac{du}{u}. \tag{3}$$
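As a quick numerical illustration of (2) (our own addition, not part of the original derivation), the following Python sketch, using NumPy and SciPy, evaluates the integral for an exponential random variable, for which the MGF side is available in closed form; the example and the comparison value are our choices.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: E{ln X} for X ~ Exp(1) via Eq. (2). Here E{e^{-uX}} = 1/(1+u),
# and the known value of E{ln X} is minus the Euler-Mascheroni constant.
val, _ = quad(lambda u: (np.exp(-u) - 1.0 / (1.0 + u)) / u, 0.0, np.inf)
print(val, -np.euler_gamma)   # both should be about -0.5772
```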
This simple idea is not quite new. It has been used in the physics literature, see, e.g., [1] (Exercise 7.6, p. 140), [2] (Equation (2.4) and onward) and [3] (Equation (12) and onward). With the exception of [4], we are not aware of any work in the information theory literature where it has been used. The purpose of this paper is to demonstrate additional information-theoretic applications, as the need to evaluate logarithmic expectations is not rare at all in many problem areas of information theory. Moreover, the integral representation (1) is useful also for evaluating higher moments of ln X , most notably, the second moment or variance, in order to assess the statistical fluctuations around the mean.
We demonstrate the usefulness of this approach in several application areas, including entropy and differential entropy evaluations, performance analysis of universal lossless source codes, and calculations of the ergodic capacity of the Rayleigh single-input multiple-output (SIMO) channel. In some of these examples, we also demonstrate the calculation of variances associated with the relevant random variables of interest. As a side remark, in the same spirit of introducing integral representations and applying them, Simon and Divsalar [5,6] brought to the attention of communication theorists useful, definite-integral forms of the Q–function (Craig’s formula [7]) and Marcum Q–function, and demonstrated their utility in applications.
It should be pointed out that most of our results remain in the form of a single- or double-definite integral of certain functions that depend on the parameters of the problem in question. Strictly speaking, such a definite integral may not be considered a closed-form expression, but nevertheless, we can say the following.
(a)
In most of our examples, the expression we obtain is more compact, more elegant, and often more insightful than the original quantity.
(b)
The resulting definite integral can actually be considered a closed-form expression “for every practical purpose” since definite integrals in one or two dimensions can be calculated instantly using built-in numerical integration operations in MATLAB, Maple, Mathematica, or other mathematical software tools. This is largely similar to the case of expressions that include standard functions (e.g., trigonometric, logarithmic, exponential functions, etc.), which are commonly considered to be closed-form expressions.
(c)
The integrals can also be evaluated by power series expansions of the integrand, followed by term-by-term integration.
(d)
Owing to Item (c), the asymptotic behavior in the parameters of the model can be evaluated.
(e)
At least in two of our examples, we show how to pass from an n–dimensional integral (with an arbitrarily large n) to one– or two–dimensional integrals. This passage is in the spirit of the transition from a multiletter expression to a single–letter expression.
To give some preliminary flavor of our message in this work, we conclude this introduction by mentioning a possible use of the integral representation in the context of calculating the entropy of a Poissonian random variable. For a Poissonian random variable, N, with parameter λ , the entropy (in nats) is given by
$$H(\lambda) = \mathbb{E}\left\{-\ln\left(\frac{e^{-\lambda}\lambda^N}{N!}\right)\right\} = \lambda - \lambda\ln\lambda + \mathbb{E}\{\ln N!\}, \tag{4}$$
where the nontrivial part of the calculation is associated with the last term, E { ln N ! } . In [8], this term was handled by using a nontrivial formula due to Malmstén (see [9] (pp. 20–21)), which represents the logarithm of Euler’s Gamma function in an integral form (see also [10]). In Section 2, we derive the relevant quantity using (1), in a simpler and more transparent form which is similar to [11] ((2.3)–(2.4)).
The outline of the remaining part of this paper is as follows. In Section 2, we provide some basic mathematical background concerning the integral representation (2) and some of its variants. In Section 3, we present the application examples. Finally, in Section 4, we summarize and provide some outlook.

2. Mathematical Background

In this section, we present the main mathematical background associated with the integral representation (1), and provide several variants of this relation, most of which are later used in this paper. For reasons that will become apparent shortly, we extend the scope to the complex plane.
Proposition 1.
$$\ln z = \int_0^\infty \frac{e^{-u} - e^{-uz}}{u}\, du, \qquad \mathrm{Re}(z) \geq 0. \tag{5}$$
Proof. 
$$\ln z = (z-1)\int_0^1 \frac{dv}{1 + v(z-1)} \tag{6}$$
$$= (z-1)\int_0^1\int_0^\infty e^{-u[1+v(z-1)]}\, du\, dv \tag{7}$$
$$= (z-1)\int_0^\infty e^{-u}\int_0^1 e^{-uv(z-1)}\, dv\, du \tag{8}$$
$$= \int_0^\infty \frac{e^{-u}}{u}\left(1 - e^{-u(z-1)}\right) du \tag{9}$$
$$= \int_0^\infty \frac{e^{-u} - e^{-uz}}{u}\, du, \tag{10}$$
where (7) holds since $\mathrm{Re}\{1 + v(z-1)\} > 0$ for all $v \in (0,1)$, by the assumption that $\mathrm{Re}(z) \geq 0$; (8) holds by switching the order of integration. □
Remark 1.
In [12] (p. 363, Identity (3.434.2)), it is stated that
$$\int_0^\infty \frac{e^{-\mu x} - e^{-\nu x}}{x}\, dx = \ln\frac{\nu}{\mu}, \qquad \mathrm{Re}(\mu) > 0,\ \mathrm{Re}(\nu) > 0. \tag{11}$$
Proposition 1 also applies to any purely imaginary number, z, which is of interest too (see Corollary 1 in the sequel, and the identity with the characteristic function in (14)).
Proposition 1 paves the way to obtaining some additional related integral representations of the logarithmic function for the reals.
Corollary 1.
([12] (p. 451, Identity 3.784.1)) For every x > 0 ,
$$\ln x = \int_0^\infty \frac{\cos(u) - \cos(ux)}{u}\, du. \tag{12}$$
Proof. 
By Proposition 1 and the identity $\ln x \equiv \mathrm{Re}\{\ln(ix)\}$ (with $i := \sqrt{-1}$), we get
$$\ln x = \int_0^\infty \frac{e^{-u} - \cos(ux)}{u}\, du. \tag{13}$$
Subtracting from both sides the integral in (13) at $x = 1$ (which is equal to zero) gives (12). □
Let X be a real-valued random variable, and let $\Phi_X(\nu) := \mathbb{E}\{e^{i\nu X}\}$ be the characteristic function of X. Then, by Corollary 1,
$$\mathbb{E}\{\ln|X|\} = \int_0^\infty \frac{\cos(u) - \mathrm{Re}\{\Phi_X(u)\}}{u}\, du, \tag{14}$$
where we are assuming, here and throughout the sequel, that the expectation operation and the integration over u are commutable, i.e., Fubini’s theorem applies.
Similarly, by returning to Proposition 1 (confined to a real-valued argument of the logarithm), the calculation of $\mathbb{E}\{\ln X\}$ can be replaced by the calculation of the MGF of X, as
$$\mathbb{E}\{\ln X\} = \int_0^\infty \Big(e^{-u} - \mathbb{E}\big\{e^{-uX}\big\}\Big)\,\frac{du}{u}. \tag{15}$$
In particular, if $X_1,\dots,X_n$ are positive i.i.d. random variables, then
$$\mathbb{E}\{\ln(X_1+\dots+X_n)\} = \int_0^\infty \Big(e^{-u} - \big[\mathbb{E}\big\{e^{-uX_1}\big\}\big]^n\Big)\,\frac{du}{u}. \tag{16}$$
Remark 2.
One may further manipulate (15) and (16) as follows. Since $\ln x \equiv \frac{1}{s}\ln(x^s)$ for any $s \neq 0$ and $x > 0$, the expectation of $\ln X$ can also be represented as
$$\mathbb{E}\{\ln X\} = \frac{1}{s}\int_0^\infty \Big(e^{-u} - \mathbb{E}\big\{e^{-uX^s}\big\}\Big)\,\frac{du}{u}, \qquad s \neq 0. \tag{17}$$
The idea is that if, for some $s \notin \{0,1\}$, $\mathbb{E}\{e^{-uX^s}\}$ can be expressed in closed form whereas it cannot for $s=1$ (or even if $\mathbb{E}\{e^{-uX^s}\} < \infty$ for some $s \notin \{0,1\}$, but not for $s=1$), then (17) may prove useful. Moreover, if $X_1,\dots,X_n$ are positive i.i.d. random variables, $s > 0$, and $Y := \big(X_1^s + \dots + X_n^s\big)^{1/s}$, then
$$\mathbb{E}\{\ln Y\} = \frac{1}{s}\int_0^\infty \Big(e^{-u} - \big[\mathbb{E}\big\{e^{-uX_1^s}\big\}\big]^n\Big)\,\frac{du}{u}. \tag{18}$$
For example, if $\{X_i\}$ are i.i.d. standard Gaussian random variables and $s=2$, then (18) makes it possible to calculate the expected value of the logarithm of a chi-squared distributed random variable with n degrees of freedom. In this case,
$$\mathbb{E}\big\{e^{-uX_1^2}\big\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-ux^2}\, e^{-x^2/2}\, dx = \frac{1}{\sqrt{2u+1}}, \tag{19}$$
and, from (18) with $s=2$,
$$\mathbb{E}\{\ln Y\} = \frac{1}{2}\int_0^\infty \Big(e^{-u} - (2u+1)^{-n/2}\Big)\,\frac{du}{u}. \tag{20}$$
Note that, according to the pdf of a chi-squared distribution, one can express $\mathbb{E}\{\ln Y\}$ as a one-dimensional integral even without using (18). However, for general $s > 0$, the direct calculation of $\mathbb{E}\big\{\ln\big(\sum_{i=1}^n |X_i|^s\big)\big\}$ leads to an n-dimensional integral, whereas (18) provides a one-dimensional integral whose integrand involves, in turn, the calculation of another one-dimensional integral.
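To illustrate (20) numerically (an addition of ours, not from the paper), the sketch below compares the integral with the known closed form $\mathbb{E}\{\ln\chi_n^2\} = \ln 2 + \psi(n/2)$, where $\psi$ is the digamma function; note that $\mathbb{E}\{\ln\chi_n^2\} = 2\,\mathbb{E}\{\ln Y\}$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

def log_chi2_moment(n):
    # E{ln(X_1^2 + ... + X_n^2)} for i.i.d. standard Gaussians, i.e., twice Eq. (20).
    integrand = lambda u: (np.exp(-u) - (2.0 * u + 1.0) ** (-n / 2.0)) / u
    return quad(integrand, 0.0, np.inf, limit=200)[0]

for n in [1, 2, 5, 10]:
    print(n, log_chi2_moment(n), np.log(2.0) + digamma(n / 2.0))  # the two columns should agree
```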
Identity (1) also proves useful when one is interested, not only in the expected value of ln X , but also in higher moments, in particular, its second moment or variance. In this case, the one-dimensional integral becomes a two-dimensional one. Specifically, for any s > 0 ,
$$\mathrm{Var}\{\ln X\} = \mathbb{E}\{\ln^2 X\} - \big[\mathbb{E}\{\ln X\}\big]^2 = \frac{1}{s^2}\,\mathbb{E}\left\{\int_0^\infty\!\!\int_0^\infty \big(e^{-u} - e^{-uX^s}\big)\big(e^{-v} - e^{-vX^s}\big)\,\frac{du\, dv}{uv}\right\} \tag{21}$$
$$\qquad - \frac{1}{s^2}\int_0^\infty\!\!\int_0^\infty \Big(e^{-u} - \mathbb{E}\big\{e^{-uX^s}\big\}\Big)\Big(e^{-v} - \mathbb{E}\big\{e^{-vX^s}\big\}\Big)\,\frac{du\, dv}{uv} \tag{22}$$
$$= \frac{1}{s^2}\int_0^\infty\!\!\int_0^\infty \Big(\mathbb{E}\big\{e^{-(u+v)X^s}\big\} - \mathbb{E}\big\{e^{-uX^s}\big\}\,\mathbb{E}\big\{e^{-vX^s}\big\}\Big)\,\frac{du\, dv}{uv} \tag{23}$$
$$= \frac{1}{s^2}\int_0^\infty\!\!\int_0^\infty \mathrm{Cov}\big\{e^{-uX^s},\, e^{-vX^s}\big\}\,\frac{du\, dv}{uv}. \tag{24}$$
More generally, for a pair of positive random variables, (X,Y), and for $s > 0$,
$$\mathrm{Cov}\{\ln X, \ln Y\} = \frac{1}{s^2}\int_0^\infty\!\!\int_0^\infty \mathrm{Cov}\big\{e^{-uX^s},\, e^{-vY^s}\big\}\,\frac{du\, dv}{uv}. \tag{25}$$
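As a sanity check of (24) (our own, with s = 1), one can take X ~ Exp(1), for which $-\ln X$ is standard Gumbel and hence $\mathrm{Var}\{\ln X\} = \pi^2/6$; the covariance inside the double integral is then available in closed form.

```python
import numpy as np
from scipy.integrate import dblquad

# Sketch: Var{ln X} for X ~ Exp(1) via Eq. (24) with s = 1.
# Cov{e^{-uX}, e^{-vX}} = 1/(1+u+v) - 1/((1+u)(1+v)); the target value is pi^2/6.
integrand = lambda v, u: (1.0 / (1.0 + u + v)
                          - 1.0 / ((1.0 + u) * (1.0 + v))) / (u * v)
val, _ = dblquad(integrand, 0.0, np.inf, lambda u: 0.0, lambda u: np.inf)
print(val, np.pi ** 2 / 6)   # both should be about 1.6449
```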
For later use, we present the following variation of the basic identity.
Proposition 2.
Let X be a random variable, and let
$$M_X(s) := \mathbb{E}\big\{e^{sX}\big\}, \qquad s \in \mathbb{R}, \tag{26}$$
be the MGF of X. If X is non-negative, then
$$\mathbb{E}\{\ln(1+X)\} = \int_0^\infty \frac{e^{-u}\big[1 - M_X(-u)\big]}{u}\, du, \tag{27}$$
$$\mathrm{Var}\{\ln(1+X)\} = \int_0^\infty\!\!\int_0^\infty \frac{e^{-(u+v)}}{uv}\Big[M_X(-u-v) - M_X(-u)\, M_X(-v)\Big]\, du\, dv. \tag{28}$$
Proof. 
Equation (27) is a trivial consequence of (15). As for (28), we have
$$\begin{aligned}
\mathrm{Var}\{\ln(1+X)\} &= \mathbb{E}\big\{\ln^2(1+X)\big\} - \big[\mathbb{E}\{\ln(1+X)\}\big]^2 \\
&= \mathbb{E}\left\{\int_0^\infty \frac{e^{-u}}{u}\big(1-e^{-uX}\big)\, du \int_0^\infty \frac{e^{-v}}{v}\big(1-e^{-vX}\big)\, dv\right\} - \int_0^\infty\!\!\int_0^\infty \frac{e^{-(u+v)}\big[1-M_X(-u)\big]\big[1-M_X(-v)\big]}{uv}\, du\, dv \\
&= \int_0^\infty\!\!\int_0^\infty \frac{e^{-(u+v)}}{uv}\,\mathbb{E}\big\{\big(1-e^{-uX}\big)\big(1-e^{-vX}\big)\big\}\, du\, dv - \int_0^\infty\!\!\int_0^\infty \frac{e^{-(u+v)}}{uv}\big[1 - M_X(-u) - M_X(-v) + M_X(-u)M_X(-v)\big]\, du\, dv \\
&= \int_0^\infty\!\!\int_0^\infty \frac{e^{-(u+v)}}{uv}\big[1 - M_X(-u) - M_X(-v) + M_X(-u-v)\big]\, du\, dv - \int_0^\infty\!\!\int_0^\infty \frac{e^{-(u+v)}}{uv}\big[1 - M_X(-u) - M_X(-v) + M_X(-u)M_X(-v)\big]\, du\, dv \\
&= \int_0^\infty\!\!\int_0^\infty \frac{e^{-(u+v)}}{uv}\big[M_X(-u-v) - M_X(-u)\, M_X(-v)\big]\, du\, dv.
\end{aligned}$$
 □
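For a concrete check of Proposition 2 (our example, not from the paper), take X ~ Exp(1), so that $M_X(-u) = 1/(1+u)$ and $\mathbb{E}\{\ln(1+X)\} = e\,E_1(1)$; the sketch below compares (27) with this closed form.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

# Sketch: E{ln(1+X)} for X ~ Exp(1) via Eq. (27), versus the closed form e*E_1(1).
lhs, _ = quad(lambda u: np.exp(-u) * (1.0 - 1.0 / (1.0 + u)) / u, 0.0, np.inf)
print(lhs, np.e * exp1(1.0))   # both should be about 0.5963
```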
The following result relies on the validity of (5) in the right-half complex plane, and its derivation is based on the identity $\ln(1+x^2) \equiv \ln(1+ix) + \ln(1-ix)$ for all $x \in \mathbb{R}$. In general, it may be used if the characteristic function of a random variable X has a closed-form expression, whereas the MGF of $X^2$ does not admit a closed-form expression (see Proposition 2). We introduce the result, although it is not directly used in this paper.
Proposition 3.
Let X be a real-valued random variable, and let
$$\Phi_X(u) := \mathbb{E}\big\{e^{iuX}\big\}, \qquad u \in \mathbb{R},$$
be the characteristic function of X. Then,
$$\mathbb{E}\{\ln(1+X^2)\} = 2\int_0^\infty \frac{e^{-u}}{u}\Big(1 - \mathrm{Re}\{\Phi_X(u)\}\Big)\, du,$$
and
$$\mathrm{Var}\{\ln(1+X^2)\} = 2\int_0^\infty\!\!\int_0^\infty \frac{e^{-u-v}}{uv}\Big[\mathrm{Re}\{\Phi_X(u+v)\} + \mathrm{Re}\{\Phi_X(u-v)\} - 2\,\mathrm{Re}\{\Phi_X(u)\}\,\mathrm{Re}\{\Phi_X(v)\}\Big]\, du\, dv.$$
As a final note, we point out that the fact that the integral representation (2) replaces the expectation of the logarithm of X by the expectation of an exponential function of X has an additional interesting consequence: an expression like $\ln(n!)$ becomes the integral of the sum of a geometric series, which, in turn, is easy to express in closed form (see [11] ((2.3)–(2.4))). Specifically,
$$\ln(n!) = \sum_{k=1}^n \ln k = \sum_{k=1}^n \int_0^\infty \big(e^{-u} - e^{-uk}\big)\,\frac{du}{u} = \int_0^\infty \left(n\, e^{-u} - \sum_{k=1}^n e^{-uk}\right)\frac{du}{u} = \int_0^\infty e^{-u}\left(n - \frac{1 - e^{-un}}{1 - e^{-u}}\right)\frac{du}{u}. \tag{36}$$
Thus, for a positive integer-valued random variable, N, the calculation of $\mathbb{E}\{\ln N!\}$ requires merely the calculation of $\mathbb{E}\{N\}$ and the MGF, $\mathbb{E}\{e^{-uN}\}$. For example, if N is a Poissonian random variable, as discussed near the end of the Introduction, both $\mathbb{E}\{N\}$ and $\mathbb{E}\{e^{-uN}\}$ are easy to evaluate. This approach is a simple, direct alternative to the one taken in [8] (see also [10]), where Malmstén's nontrivial formula for $\ln\Gamma(z)$ (see [9] (pp. 20–21)) was invoked. (Malmstén's formula for $\ln\Gamma(z)$ applies to a general, complex-valued z with $\mathrm{Re}(z) > 0$; in the present context, however, only positive integer values of z are needed, and this allows the simplification shown in (36).) The above described idea of the geometric series will also be used in one of our application examples, in Section 3.4.
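As a small illustration (ours), the Poisson entropy mentioned in the Introduction can be computed directly from (4) and (36): for N ~ Poisson(λ), $\mathbb{E}\{N\} = \lambda$ and $\mathbb{E}\{e^{-uN}\} = \exp\{\lambda(e^{-u}-1)\}$. The sketch below compares this route with a brute-force summation of the entropy; the function names are our own.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def poisson_entropy_integral(lam):
    # H(lambda) in nats via Eq. (4), with E{ln N!} computed from Eq. (36):
    # E{ln N!} = int_0^inf [ lam*e^{-u} - e^{-u}(1 - E{e^{-uN}})/(1 - e^{-u}) ] du/u.
    def integrand(u):
        e = np.exp(-u)
        mgf = np.exp(lam * (e - 1.0))          # E{e^{-uN}} for N ~ Poisson(lam)
        return (lam * e - e * (1.0 - mgf) / (1.0 - e)) / u
    e_ln_factorial, _ = quad(integrand, 0.0, np.inf, limit=200)
    return lam - lam * np.log(lam) + e_ln_factorial

def poisson_entropy_direct(lam, kmax=500):
    k = np.arange(kmax)
    logp = -lam + k * np.log(lam) - gammaln(k + 1)   # log of the Poisson pmf
    return -np.sum(np.exp(logp) * logp)

print(poisson_entropy_integral(3.0), poisson_entropy_direct(3.0))   # the two values should agree
```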

3. Applications

In this section, we show the usefulness of the integral representation of the logarithmic function in several problem areas in information theory. To demonstrate the direct computability of the relevant quantities, we also present graphs of their numerical calculation. In some of the examples, we also demonstrate calculations of the second moments and variances.

3.1. Differential Entropy for Generalized Multivariate Cauchy Densities

Let $(X_1,\dots,X_n)$ be a random vector whose probability density function is of the form
$$f(x_1,\dots,x_n) = \frac{C_n}{\Big[1 + \sum_{i=1}^n g(x_i)\Big]^q}, \qquad (x_1,\dots,x_n) \in \mathbb{R}^n, \tag{37}$$
for a certain non-negative function g and positive constant q such that
$$\int_{\mathbb{R}^n} \frac{dx_1\cdots dx_n}{\Big[1 + \sum_{i=1}^n g(x_i)\Big]^q} < \infty. \tag{38}$$
We refer to this kind of density as a generalized multivariate Cauchy density, because the multivariate Cauchy density is obtained as the special case where $g(x) = x^2$ and $q = \frac{1}{2}(n+1)$. Using the Laplace transform relation,
$$\frac{1}{s^q} = \frac{1}{\Gamma(q)}\int_0^\infty t^{q-1}\, e^{-st}\, dt, \qquad q > 0,\ \mathrm{Re}(s) > 0, \tag{39}$$
f can be represented as a mixture of product measures:
$$f(x_1,\dots,x_n) = \frac{C_n}{\Big[1 + \sum_{i=1}^n g(x_i)\Big]^q} = \frac{C_n}{\Gamma(q)}\int_0^\infty t^{q-1}\, e^{-t}\exp\left\{-t\sum_{i=1}^n g(x_i)\right\} dt. \tag{40}$$
Defining
$$Z(t) := \int_{-\infty}^{\infty} e^{-t g(x)}\, dx, \qquad t > 0, \tag{41}$$
we get from (40),
$$1 = \frac{C_n}{\Gamma(q)}\int_0^\infty t^{q-1}\, e^{-t}\int_{\mathbb{R}^n}\exp\left\{-t\sum_{i=1}^n g(x_i)\right\} dx_1\cdots dx_n\, dt = \frac{C_n}{\Gamma(q)}\int_0^\infty t^{q-1}\, e^{-t}\left[\int_{-\infty}^{\infty} e^{-t g(x)}\, dx\right]^n dt = \frac{C_n}{\Gamma(q)}\int_0^\infty t^{q-1}\, e^{-t}\, Z^n(t)\, dt, \tag{42}$$
and so,
$$C_n = \frac{\Gamma(q)}{\int_0^\infty t^{q-1}\, e^{-t}\, Z^n(t)\, dt}. \tag{43}$$
The calculation of the differential entropy of f involves the evaluation of the expectation $\mathbb{E}\big\{\ln\big(1 + \sum_{i=1}^n g(X_i)\big)\big\}$. Using (27),
$$\mathbb{E}\left\{\ln\left(1 + \sum_{i=1}^n g(X_i)\right)\right\} = \int_0^\infty \frac{e^{-u}}{u}\left(1 - \mathbb{E}\left\{\exp\left(-u\sum_{i=1}^n g(X_i)\right)\right\}\right) du. \tag{44}$$
From (40) and by interchanging the integration,
$$\mathbb{E}\left\{\exp\left(-u\sum_{i=1}^n g(X_i)\right)\right\} = \frac{C_n}{\Gamma(q)}\int_0^\infty t^{q-1}\, e^{-t}\int_{\mathbb{R}^n}\exp\left\{-(t+u)\sum_{i=1}^n g(x_i)\right\} dx_1\cdots dx_n\, dt = \frac{C_n}{\Gamma(q)}\int_0^\infty t^{q-1}\, e^{-t}\, Z^n(t+u)\, dt. \tag{45}$$
In view of (40), (44), and (45), the differential entropy of ( X 1 , , X n ) is therefore given by
$$\begin{aligned}
h(X_1,\dots,X_n) &= q\,\mathbb{E}\left\{\ln\left(1 + \sum_{i=1}^n g(X_i)\right)\right\} - \ln C_n \\
&= q\int_0^\infty \frac{e^{-u}}{u}\left(1 - \frac{C_n}{\Gamma(q)}\int_0^\infty t^{q-1}\, e^{-t}\, Z^n(t+u)\, dt\right) du - \ln C_n \\
&= \frac{q\, C_n}{\Gamma(q)}\int_0^\infty\!\!\int_0^\infty \frac{t^{q-1}\, e^{-(t+u)}}{u}\Big[Z^n(t) - Z^n(t+u)\Big]\, dt\, du - \ln C_n.
\end{aligned} \tag{46}$$
For $g(x) = |x|^\theta$, with an arbitrary $\theta > 0$, we obtain from (41) that
$$Z(t) = \frac{2\,\Gamma(1/\theta)}{\theta\, t^{1/\theta}}. \tag{47}$$
In particular, for $\theta = 2$ and $q = \frac{1}{2}(n+1)$, we get the multivariate Cauchy density in (37). In this case, since $\Gamma\big(\frac{1}{2}\big) = \sqrt{\pi}$, it follows from (47) that $Z(t) = \sqrt{\pi/t}$ for $t > 0$, and from (43),
$$C_n = \frac{\Gamma\big(\frac{n+1}{2}\big)}{\pi^{n/2}\int_0^\infty t^{(n+1)/2 - 1}\, e^{-t}\, t^{-n/2}\, dt} = \frac{\Gamma\big(\frac{n+1}{2}\big)}{\pi^{n/2}\,\Gamma\big(\frac{1}{2}\big)} = \frac{\Gamma\big(\frac{n+1}{2}\big)}{\pi^{(n+1)/2}}. \tag{48}$$
Combining (46), (47) and (48) gives
$$h(X_1,\dots,X_n) = \frac{n+1}{2\sqrt{\pi}}\int_0^\infty\!\!\int_0^\infty \frac{e^{-(t+u)}}{u\sqrt{t}}\left[1 - \left(\frac{t}{t+u}\right)^{n/2}\right] dt\, du + \frac{(n+1)\ln\pi}{2} - \ln\Gamma\left(\frac{n+1}{2}\right). \tag{49}$$
Figure 1 displays the normalized differential entropy, $\frac{1}{n}\, h(X_1,\dots,X_n)$, for $1 \leq n \leq 100$.
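The double integral in (46) is straightforward to evaluate numerically. The sketch below (our own illustration, using nested SciPy quadratures that may emit mild accuracy warnings because of the integrable singularities) evaluates (46)–(48) for the multivariate Cauchy case and, for n = 1, compares the result with the known value $\ln(4\pi)$ of the differential entropy of a standard Cauchy random variable.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def cauchy_differential_entropy(n):
    # h(X_1,...,X_n) for the multivariate Cauchy density (g(x) = x^2, q = (n+1)/2),
    # via Eqs. (46)-(48) with Z(t) = sqrt(pi/t); note that C_n / Gamma(q) = pi^{-q}.
    q = 0.5 * (n + 1)
    Zn = lambda t: (np.pi / t) ** (0.5 * n)
    inner = lambda u: quad(lambda t: t ** (q - 1.0) * np.exp(-t) * (Zn(t) - Zn(t + u)),
                           0.0, np.inf, limit=200)[0]
    outer = lambda u: np.exp(-u) / u * inner(u)
    I = quad(outer, 0.0, np.inf, limit=200)[0]
    return q * np.pi ** (-q) * I - (gammaln(q) - q * np.log(np.pi))

print(cauchy_differential_entropy(1), np.log(4.0 * np.pi))   # n = 1: both should be about 2.531 nats
```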
We believe that the interesting point, conveyed in this application example, is that (46) provides a kind of a “single–letter expression”; the n–dimensional integral, associated with the original expression of the differential entropy h ( X 1 , , X n ) , is replaced by the two-dimensional integral in (46), independently of n.
As a final note, we mention that a lower bound on the differential entropy of a different form of extended multivariate Cauchy distributions (cf. [13] (Equation (42))) was derived in [13] (Theorem 6). The latter result relies on obtaining lower bounds on the differential entropy of random vectors whose densities are symmetric log-concave or $\gamma$-concave (i.e., densities f for which $f^\gamma$ is concave for some $\gamma < 0$).

3.2. Ergodic Capacity of the Fading SIMO Channel

Consider the SIMO channel with L receive antennas, and assume that the channel transfer coefficients, $\{h_\ell\}_{\ell=1}^L$, are independent, zero-mean, circularly symmetric complex Gaussian random variables with variances $\{\sigma_\ell^2\}_{\ell=1}^L$. Its ergodic capacity (in nats per channel use) is given by
$$C = \mathbb{E}\left\{\ln\left(1 + \rho\sum_{\ell=1}^L |h_\ell|^2\right)\right\} = \mathbb{E}\left\{\ln\left(1 + \rho\sum_{\ell=1}^L \big(f_\ell^2 + g_\ell^2\big)\right)\right\}, \tag{50}$$
where $f_\ell := \mathrm{Re}\{h_\ell\}$, $g_\ell := \mathrm{Im}\{h_\ell\}$, and $\rho := \frac{P}{N_0}$ is the signal-to-noise ratio (SNR) (see, e.g., [14,15]).
Paper [14] is devoted, among other things, to the exact evaluation of (50) by finding the density of the random variable $\sum_{\ell=1}^L \big(f_\ell^2 + g_\ell^2\big)$, and then taking the expectation with respect to that density. Here, we show that the integral representation in (5) suggests a more direct approach to the evaluation of (50). It should also be pointed out that this approach is more flexible than the one in [14], as the latter strongly depends on the assumption that $\{h_\ell\}$ are Gaussian and statistically independent. The integral representation approach also allows other distributions of the channel transfer gains, as well as possible correlations between the coefficients and/or the channel inputs. Moreover, we are also able to calculate the variance of $\ln\big(1 + \rho\sum_{\ell=1}^L |h_\ell|^2\big)$, as a measure of the fluctuations around the mean, which is obviously related to the outage.
Specifically, in view of Proposition 2 (see (27)), let
$$X := \rho\sum_{\ell=1}^L \big(f_\ell^2 + g_\ell^2\big). \tag{51}$$
For all u > 0 ,
$$M_X(-u) = \mathbb{E}\left\{\exp\left(-\rho u\sum_{\ell=1}^L \big(f_\ell^2 + g_\ell^2\big)\right)\right\} = \prod_{\ell=1}^L \mathbb{E}\big\{e^{-u\rho f_\ell^2}\big\}\,\mathbb{E}\big\{e^{-u\rho g_\ell^2}\big\} = \prod_{\ell=1}^L \frac{1}{1 + u\rho\sigma_\ell^2}, \tag{52}$$
where (52) holds since
$$\mathbb{E}\big\{e^{-u\rho f_\ell^2}\big\} = \mathbb{E}\big\{e^{-u\rho g_\ell^2}\big\} = \int_{-\infty}^{\infty} \frac{dw}{\sqrt{\pi\sigma_\ell^2}}\, e^{-w^2/\sigma_\ell^2}\, e^{-u\rho w^2} = \frac{1}{\sqrt{1 + u\rho\sigma_\ell^2}}. \tag{53}$$
From (27), (50) and (52), the ergodic capacity (in nats per channel use) is given by
$$C = \mathbb{E}\left\{\ln\left(1 + \rho\sum_{\ell=1}^L \big(f_\ell^2 + g_\ell^2\big)\right)\right\} = \int_0^\infty \frac{e^{-u}}{u}\left(1 - \prod_{\ell=1}^L \frac{1}{1 + u\rho\sigma_\ell^2}\right) du = \int_0^\infty \frac{e^{-x/\rho}}{x}\left(1 - \prod_{\ell=1}^L \frac{1}{1 + \sigma_\ell^2 x}\right) dx. \tag{54}$$
A similar approach appears in [4] (Equation (12)).
As for the variance, from Proposition 2 (see (28)) and (52),
$$\mathrm{Var}\left\{\ln\left(1 + \rho\sum_{\ell=1}^L \big(f_\ell^2 + g_\ell^2\big)\right)\right\} = \int_0^\infty\!\!\int_0^\infty \frac{e^{-(x+y)/\rho}}{xy}\left(\prod_{\ell=1}^L \frac{1}{1 + \sigma_\ell^2(x+y)} - \prod_{\ell=1}^L \frac{1}{\big(1 + \sigma_\ell^2 x\big)\big(1 + \sigma_\ell^2 y\big)}\right) dx\, dy. \tag{55}$$
A similar analysis holds for the multiple-input single-output (MISO) channel. By partial–fraction decomposition of the expression (see the right side of (54))
$$\frac{1}{x}\left(1 - \prod_{\ell=1}^L \frac{1}{1 + \sigma_\ell^2 x}\right), \tag{56}$$
the ergodic capacity C can be expressed as a linear combination of integrals of the form
$$\int_0^\infty \frac{e^{-x/\rho}\, dx}{1 + \sigma_\ell^2 x} = \frac{1}{\sigma_\ell^2}\int_0^\infty \frac{e^{-t}\, dt}{t + 1/(\sigma_\ell^2\rho)} = \frac{e^{1/(\sigma_\ell^2\rho)}}{\sigma_\ell^2}\int_{1/(\sigma_\ell^2\rho)}^{\infty} \frac{e^{-s}}{s}\, ds = \frac{1}{\sigma_\ell^2}\, e^{1/(\sigma_\ell^2\rho)}\, E_1\!\left(\frac{1}{\sigma_\ell^2\rho}\right), \tag{57}$$
where $E_1(\cdot)$ is the (modified) exponential integral function, defined as
$$E_1(x) := \int_x^\infty \frac{e^{-s}}{s}\, ds, \qquad x > 0.$$
A similar representation appears also in [14] (Equation (7)).
Consider the example of $L = 2$, $\sigma_1^2 = \frac{1}{2}$ and $\sigma_2^2 = 1$. From (54), the ergodic capacity of the SIMO channel is given by
$$C = \int_0^\infty \frac{e^{-x/\rho}}{x}\left(1 - \frac{1}{(x/2+1)(x+1)}\right) dx = \int_0^\infty \frac{e^{-x/\rho}\,(x+3)\, dx}{(x+1)(x+2)} = 2\, e^{1/\rho}\, E_1\!\left(\frac{1}{\rho}\right) - e^{2/\rho}\, E_1\!\left(\frac{2}{\rho}\right). \tag{58}$$
The variance in this example (see (55)) is given by
$$\begin{aligned}
\mathrm{Var}\left\{\ln\left(1 + \rho\sum_{\ell=1}^2 \big(f_\ell^2 + g_\ell^2\big)\right)\right\} &= \int_0^\infty\!\!\int_0^\infty \frac{e^{-(x+y)/\rho}}{xy}\left(\frac{1}{\big[1 + 0.5(x+y)\big]\big(1 + x + y\big)} - \frac{1}{(1 + 0.5x)(1 + 0.5y)(1 + x)(1 + y)}\right) dx\, dy \\
&= \int_0^\infty\!\!\int_0^\infty \frac{e^{-(x+y)/\rho}\,(2xy + 6x + 6y + 10)\, dx\, dy}{(x+1)(y+1)(x+2)(y+2)(x+y+1)(x+y+2)}.
\end{aligned} \tag{59}$$
Figure 2 depicts the ergodic capacity C as a function of the SNR, $\rho$, in dB (see (58), and divide by $\ln 2$ for conversion to bits per channel use). The same example appears in the lower graph of Figure 1 of [14]. The variance appears in Figure 3 (see (59), and similarly divide by $\ln^2 2$).
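The following sketch (our own, relying on SciPy's exp1 for the exponential integral) evaluates the example above in two ways: by the general one-dimensional integral (54) and by the closed form (58); the function name and the chosen SNR are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

def simo_capacity_nats(rho, sigma2):
    # Ergodic SIMO capacity via the one-dimensional integral in Eq. (54).
    sigma2 = np.asarray(sigma2, dtype=float)
    integrand = lambda x: np.exp(-x / rho) / x * (1.0 - np.prod(1.0 / (1.0 + sigma2 * x)))
    return quad(integrand, 0.0, np.inf, limit=200)[0]

rho = 10.0   # SNR on a linear scale (10 dB)
c_integral = simo_capacity_nats(rho, [0.5, 1.0]) / np.log(2)               # Eq. (54), in bits
c_closed = (2.0 * np.exp(1 / rho) * exp1(1 / rho)
            - np.exp(2 / rho) * exp1(2 / rho)) / np.log(2)                  # Eq. (58), in bits
print(c_integral, c_closed)   # the two values should agree
```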

3.3. Universal Source Coding for Binary Arbitrarily Varying Sources

Consider a source coding setting where there are n binary DMSs, and let $x_i \in [0,1]$ denote the Bernoulli parameter of source no. $i \in \{1,\dots,n\}$. Assume that a hidden memoryless switch selects one of these sources uniformly at random, and the data are then emitted by the selected source. Since it is unknown a priori which source is selected at each instant, a universal lossless source encoder (e.g., a Shannon or Huffman code) is designed to match a binary DMS whose Bernoulli parameter is given by $\frac{1}{n}\sum_{i=1}^n x_i$. Neglecting integer length constraints, the average redundancy in the compression rate (measured in nats per symbol), due to the unknown realization of the hidden switch, is about
$$R_n = h_b\!\left(\frac{1}{n}\sum_{i=1}^n x_i\right) - \frac{1}{n}\sum_{i=1}^n h_b(x_i), \tag{60}$$
where $h_b: [0,1] \to [0, \ln 2]$ is the binary entropy function (defined to the base e), and the redundancy is given in nats per source symbol. Now, let us assume that the Bernoulli parameters of the n sources are i.i.d. random variables, $X_1,\dots,X_n$, all having the same density as that of some generic random variable X, whose support is the interval [0,1]. We wish to evaluate the expected value of the above defined redundancy, under the assumption that the realizations of $X_1,\dots,X_n$ are known. We are then facing the need to evaluate
$$\bar{R}_n = \mathbb{E}\left\{h_b\!\left(\frac{1}{n}\sum_{i=1}^n X_i\right)\right\} - \mathbb{E}\{h_b(X)\}. \tag{61}$$
We now express the first and second terms on the right-hand side of (61) as a function of the MGF of X.
In view of (5), the binary entropy function h b admits the integral representation
$$h_b(x) = \int_0^\infty \frac{1}{u}\Big(x\, e^{-ux} + (1-x)\, e^{-u(1-x)} - e^{-u}\Big)\, du, \qquad x \in [0,1], \tag{62}$$
which implies that
$$\mathbb{E}\{h_b(X)\} = \int_0^\infty \frac{1}{u}\Big(\mathbb{E}\big\{X e^{-uX}\big\} + \mathbb{E}\big\{(1-X)\, e^{-u(1-X)}\big\} - e^{-u}\Big)\, du. \tag{63}$$
The expectations on the right-hand side of (63) can be expressed as functionals of the MGF of X, $M_X(\nu) = \mathbb{E}\{e^{\nu X}\}$, and its derivative, for $\nu < 0$. For all $u \in \mathbb{R}$,
$$\mathbb{E}\big\{X e^{-uX}\big\} = M_X'(-u), \tag{64}$$
and
$$\mathbb{E}\big\{(1-X)\, e^{-u(1-X)}\big\} = M_{1-X}'(-u) = \frac{d}{ds}\Big[e^{s} M_X(-s)\Big]\bigg|_{s=-u} = e^{-u}\Big[M_X(u) - M_X'(u)\Big]. \tag{65}$$
On substituting (64) and (65) into (63), we readily obtain
$$\mathbb{E}\{h_b(X)\} = \int_0^\infty \frac{1}{u}\Big(M_X'(-u) + \big[M_X(u) - M_X'(u) - 1\big]\, e^{-u}\Big)\, du. \tag{66}$$
Define $Y_n := \frac{1}{n}\sum_{i=1}^n X_i$. Then,
$$M_{Y_n}(u) = M_X^n\!\left(\frac{u}{n}\right), \qquad u \in \mathbb{R}, \tag{67}$$
which yields, in view of (66), (67) and the change of integration variable $t = \frac{u}{n}$, the following:
$$\begin{aligned}
\mathbb{E}\left\{h_b\!\left(\frac{1}{n}\sum_{i=1}^n X_i\right)\right\} = \mathbb{E}\{h_b(Y_n)\} &= \int_0^\infty \frac{1}{u}\Big(M_{Y_n}'(-u) + \big[M_{Y_n}(u) - M_{Y_n}'(u) - 1\big]\, e^{-u}\Big)\, du \\
&= \int_0^\infty \frac{1}{t}\Big(M_X^{n-1}(-t)\, M_X'(-t) + \big[M_X^n(t) - M_X^{n-1}(t)\, M_X'(t) - 1\big]\, e^{-nt}\Big)\, dt.
\end{aligned} \tag{68}$$
Similarly to Section 3.1, here too we pass from an n-dimensional integral to a one-dimensional integral. In general, similar calculations can be carried out for higher integer moments, thus passing from an n-dimensional integral for a moment of order s to an s-dimensional integral, independently of n.
For example, if $X_1,\dots,X_n$ are i.i.d. and uniformly distributed on [0,1], then the MGF of a generic random variable X distributed like all $\{X_i\}$ is given by
$$M_X(t) = \begin{cases} \dfrac{e^t - 1}{t}, & t \neq 0, \\ 1, & t = 0. \end{cases} \tag{69}$$
From (68), it can be verified numerically that $\mathbb{E}\big\{h_b\big(\frac{1}{n}\sum_{i=1}^n X_i\big)\big\}$ is monotonically increasing in n, being equal (in nats) to $\frac{1}{2}$, 0.602, 0.634, 0.650, 0.659 for $n = 1,\dots,5$, respectively, with the limit $h_b\big(\frac{1}{2}\big) = \ln 2 \approx 0.693$ as $n \to \infty$ (as expected by the law of large numbers).
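The sketch below (ours) reproduces these values by evaluating (68) with the uniform MGF (69); the integrand is rewritten in terms of the damped quantities $M_X(t)e^{-t}$ and $M_X'(t)e^{-t}$ to avoid overflow, and the function name is our own.

```python
import numpy as np
from scipy.integrate import quad

def expected_hb_of_mean(n):
    # E{h_b((X_1+...+X_n)/n)} in nats, for i.i.d. Uniform[0,1] X_i, via Eq. (68).
    def integrand(t):
        a = -np.expm1(-t) / t                              # = M_X(-t) = M_X(t) e^{-t}
        dm_neg = (1.0 - (1.0 + t) * np.exp(-t)) / t ** 2   # = M_X'(-t)
        dm_pos_damped = (t - 1.0 + np.exp(-t)) / t ** 2    # = M_X'(t) e^{-t}
        return (a ** (n - 1) * dm_neg
                + a ** n - a ** (n - 1) * dm_pos_damped - np.exp(-n * t)) / t
    return quad(integrand, 0.0, np.inf, limit=200)[0]

for n in range(1, 6):
    print(n, expected_hb_of_mean(n))   # about 0.5, 0.602, 0.634, 0.650, 0.659
```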

3.4. Moments of the Empirical Entropy and the Redundancy of K–T Universal Source Coding

Consider a stationary, discrete memoryless source (DMS), P, with a finite alphabet $\mathcal{X}$ of size $|\mathcal{X}|$ and letter probabilities $\{P(x),\ x \in \mathcal{X}\}$. Let $(X_1,\dots,X_n)$ be an n-vector emitted from P, and let $\{\hat{P}(x),\ x \in \mathcal{X}\}$ be the empirical distribution associated with $(X_1,\dots,X_n)$, that is, $\hat{P}(x) = \frac{n(x)}{n}$ for all $x \in \mathcal{X}$, where $n(x)$ is the number of occurrences of the letter x in $(X_1,\dots,X_n)$.
It is well known that in many universal lossless source codes for the class of memoryless sources, the dominant term of the length function for encoding $(X_1,\dots,X_n)$ is $n\hat{H}$, where $\hat{H}$ is the empirical entropy,
$$\hat{H} = -\sum_{x \in \mathcal{X}} \hat{P}(x)\ln\hat{P}(x). \tag{70}$$
For code length performance analysis (as well as for entropy estimation per se), there is therefore interest in calculating the expected value $\mathbb{E}\{\hat{H}\}$ as well as $\mathrm{Var}\{\hat{H}\}$. Another motivation comes from the quest for estimating the entropy as an objective in its own right, in which case the expectation and the variance suffice for the calculation of the mean square error of the estimate, $\hat{H}$. Most of the results available in the literature, in this context, concern the asymptotic behavior for large n, as well as bounds (see, e.g., [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30], as well as many other related references therein). The integral representation of the logarithm in (5), on the other hand, allows exact calculations of the expectation and the variance. The expected value of the empirical entropy is given by
$$\mathbb{E}\{\hat{H}\} = -\sum_x \mathbb{E}\big\{\hat{P}(x)\ln\hat{P}(x)\big\} = \sum_x \mathbb{E}\left\{\int_0^\infty \frac{du}{u}\Big(\hat{P}(x)\, e^{-u\hat{P}(x)} - \hat{P}(x)\, e^{-u}\Big)\right\} = \int_0^\infty \frac{du}{u}\left(\sum_x \mathbb{E}\big\{\hat{P}(x)\, e^{-u\hat{P}(x)}\big\} - e^{-u}\right). \tag{71}$$
For convenience, let us define the function $\phi_n: \mathcal{X}\times\mathbb{R} \to (0,\infty)$ as
$$\phi_n(x,t) := \mathbb{E}\big\{e^{t\hat{P}(x)}\big\} = \Big[1 - P(x) + P(x)\, e^{t/n}\Big]^n, \tag{72}$$
which yields
$$\mathbb{E}\big\{\hat{P}(x)\, e^{-u\hat{P}(x)}\big\} = \dot{\phi}_n(x,-u), \tag{73}$$
$$\mathbb{E}\big\{\hat{P}^2(x)\, e^{-u\hat{P}(x)}\big\} = \ddot{\phi}_n(x,-u), \tag{74}$$
where $\dot{\phi}_n$ and $\ddot{\phi}_n$ are the first- and second-order derivatives of $\phi_n$ with respect to t, respectively. From (71) and (73),
$$\mathbb{E}\{\hat{H}\} = \int_0^\infty \frac{du}{u}\left(\sum_x \dot{\phi}_n(x,-u) - e^{-u}\right) = \int_0^\infty \frac{du}{u}\left(e^{-u}\sum_x P(x)\Big[1 - P(x)\big(1 - e^{-u}\big)\Big]^{n-1} - e^{-nu}\right), \tag{75}$$
where the integration variable in (75) was changed using a simple scaling by n.
Before proceeding with the calculation of the variance of H ^ , let us first compare the integral representation in (75) to the alternative sum, obtained by a direct, straightforward calculation of the expected value of the empirical entropy. A straightforward calculation gives
$$\mathbb{E}\{\hat{H}\} = -\sum_x \sum_{k=0}^n \binom{n}{k} P^k(x)\big[1 - P(x)\big]^{n-k}\cdot\frac{k}{n}\cdot\ln\frac{k}{n} \tag{76}$$
$$= \sum_x \sum_{k=1}^n \binom{n-1}{k-1} P^k(x)\big[1 - P(x)\big]^{n-k}\cdot\ln\frac{n}{k}. \tag{77}$$
We next compare the computational complexity of implementing (75) to that of (77). For large n, in order to avoid numerical problems in computing (77) by standard software, one may use the Gammaln function in Matlab/Excel or the LogGamma in Mathematica (a built-in function for calculating the natural logarithm of the Gamma function) to obtain that
$$\binom{n-1}{k-1} P^k(x)\big[1 - P(x)\big]^{n-k} = \exp\Big\{\mathrm{Gammaln}(n) - \mathrm{Gammaln}(k) - \mathrm{Gammaln}(n-k+1) + k\ln P(x) + (n-k)\ln\big[1 - P(x)\big]\Big\}. \tag{78}$$
The right-hand side of (75) is a sum of $|\mathcal{X}|$ integrals, and the computational complexity of each integral depends neither on n nor on $|\mathcal{X}|$. Hence, the computational complexity of the right-hand side of (75) scales linearly with $|\mathcal{X}|$. On the other hand, the double sum on the right-hand side of (77) consists of $n\cdot|\mathcal{X}|$ terms. Let $\alpha := \frac{n}{|\mathcal{X}|}$ be fixed; it is expected to be large ($\alpha \gg 1$) if a good estimate of the entropy is sought. The computational complexity of the double sum on the right-hand side of (77) then grows like $\alpha|\mathcal{X}|^2$, which scales quadratically in $|\mathcal{X}|$. Hence, for a DMS with a large alphabet, or when $n \gg |\mathcal{X}|$, there is a significant computational saving in evaluating (75) rather than the right-hand side of (77).
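To make the comparison concrete (our own sketch, with an arbitrary pmf), the code below evaluates $\mathbb{E}\{\hat{H}\}$ both via the integral (75) and via the double sum (77)/(78); the two routes should agree to numerical precision.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def expected_empirical_entropy_integral(p, n):
    # E{H_hat} via the one-dimensional integral in Eq. (75); p is the source pmf.
    p = np.asarray(p, dtype=float)
    def integrand(u):
        e = np.exp(-u)
        return (e * np.sum(p * (1.0 - p * (1.0 - e)) ** (n - 1)) - np.exp(-n * u)) / u
    return quad(integrand, 0.0, np.inf, limit=200)[0]

def expected_empirical_entropy_sum(p, n):
    # E{H_hat} via the double sum in Eq. (77), using gammaln as in Eq. (78).
    total = 0.0
    k = np.arange(1, n + 1)
    for px in p:
        logw = (gammaln(n) - gammaln(k) - gammaln(n - k + 1)
                + k * np.log(px) + (n - k) * np.log1p(-px))
        total += np.sum(np.exp(logw) * np.log(n / k))
    return total

p, n = [0.5, 0.3, 0.2], 50
print(expected_empirical_entropy_integral(p, n), expected_empirical_entropy_sum(p, n))
```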
We next move on to calculate the variance of H ^ .
$$\mathrm{Var}\{\hat{H}\} = \mathbb{E}\{\hat{H}^2\} - \mathbb{E}^2\{\hat{H}\} \tag{79}$$
$$= \sum_{x,x'} \mathbb{E}\big\{\hat{P}(x)\ln\hat{P}(x)\cdot\hat{P}(x')\ln\hat{P}(x')\big\} - \mathbb{E}^2\{\hat{H}\}. \tag{80}$$
The second term on the right-hand side of (80) has already been calculated. For the first term, let us define, for $x \neq x'$,
$$\begin{aligned}
\psi_n(x,x',s,t) &:= \mathbb{E}\big\{\exp\big\{s\hat{P}(x) + t\hat{P}(x')\big\}\big\} \\
&= \sum_{\{(k,\ell):\ k+\ell\leq n\}} \frac{n!}{k!\,\ell!\,(n-k-\ell)!}\cdot P^k(x)\, P^\ell(x')\big[1 - P(x) - P(x')\big]^{n-k-\ell}\, e^{sk/n + t\ell/n} \\
&= \sum_{\{(k,\ell):\ k+\ell\leq n\}} \frac{n!}{k!\,\ell!\,(n-k-\ell)!}\cdot \big[P(x)\, e^{s/n}\big]^k \big[P(x')\, e^{t/n}\big]^\ell \big[1 - P(x) - P(x')\big]^{n-k-\ell} \\
&= \Big[1 - P(x)\big(1 - e^{s/n}\big) - P(x')\big(1 - e^{t/n}\big)\Big]^n.
\end{aligned} \tag{84}$$
Observe that
$$\mathbb{E}\big\{\hat{P}(x)\,\hat{P}(x')\exp\big\{-u\hat{P}(x) - v\hat{P}(x')\big\}\big\} = \frac{\partial^2\psi_n(x,x',s,t)}{\partial s\,\partial t}\bigg|_{s=-u,\,t=-v} =: \ddot{\psi}_n(x,x',-u,-v). \tag{86}$$
For $x \neq x'$, we have
$$\begin{aligned}
&\mathbb{E}\big\{\hat{P}(x)\ln\hat{P}(x)\cdot\hat{P}(x')\ln\hat{P}(x')\big\} \\
&= \mathbb{E}\left\{\hat{P}(x)\,\hat{P}(x')\int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\cdot\Big(e^{-u} - e^{-u\hat{P}(x)}\Big)\Big(e^{-v} - e^{-v\hat{P}(x')}\Big)\right\} \\
&= \int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\Big[e^{-u-v}\,\mathbb{E}\big\{\hat{P}(x)\hat{P}(x')\big\} - e^{-v}\,\mathbb{E}\big\{\hat{P}(x)\hat{P}(x')\, e^{-u\hat{P}(x)}\big\} - e^{-u}\,\mathbb{E}\big\{\hat{P}(x)\hat{P}(x')\, e^{-v\hat{P}(x')}\big\} + \mathbb{E}\big\{\hat{P}(x)\hat{P}(x')\, e^{-u\hat{P}(x) - v\hat{P}(x')}\big\}\Big] \\
&= \int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\Big[e^{-u-v}\,\ddot{\psi}_n(x,x',0,0) - e^{-v}\,\ddot{\psi}_n(x,x',-u,0) - e^{-u}\,\ddot{\psi}_n(x,x',0,-v) + \ddot{\psi}_n(x,x',-u,-v)\Big],
\end{aligned}$$
and for $x = x'$,
$$\begin{aligned}
\mathbb{E}\big\{\big[\hat{P}(x)\ln\hat{P}(x)\big]^2\big\} &= \mathbb{E}\left\{\hat{P}^2(x)\int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\cdot\Big(e^{-u} - e^{-u\hat{P}(x)}\Big)\Big(e^{-v} - e^{-v\hat{P}(x)}\Big)\right\} \\
&= \int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\Big[e^{-u-v}\,\mathbb{E}\big\{\hat{P}^2(x)\big\} - e^{-v}\,\mathbb{E}\big\{\hat{P}^2(x)\, e^{-u\hat{P}(x)}\big\} - e^{-u}\,\mathbb{E}\big\{\hat{P}^2(x)\, e^{-v\hat{P}(x)}\big\} + \mathbb{E}\big\{\hat{P}^2(x)\, e^{-(u+v)\hat{P}(x)}\big\}\Big] \\
&= \int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\Big[e^{-u-v}\,\ddot{\phi}_n(x,0) - e^{-v}\,\ddot{\phi}_n(x,-u) - e^{-u}\,\ddot{\phi}_n(x,-v) + \ddot{\phi}_n(x,-u-v)\Big].
\end{aligned}$$
Therefore,
$$\begin{aligned}
\mathrm{Var}\{\hat{H}\} &= \sum_x \int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\Big[e^{-u-v}\,\ddot{\phi}_n(x,0) - e^{-v}\,\ddot{\phi}_n(x,-u) - e^{-u}\,\ddot{\phi}_n(x,-v) + \ddot{\phi}_n(x,-u-v)\Big] \\
&\quad + \sum_{x\neq x'} \int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\Big[e^{-u-v}\,\ddot{\psi}_n(x,x',0,0) - e^{-v}\,\ddot{\psi}_n(x,x',-u,0) - e^{-u}\,\ddot{\psi}_n(x,x',0,-v) + \ddot{\psi}_n(x,x',-u,-v)\Big] - \mathbb{E}^2\{\hat{H}\}.
\end{aligned} \tag{93}$$
Defining (see (74) and (86))
$$Z(r,s,t) := \sum_x \ddot{\phi}_n(x,r) + \sum_{x\neq x'} \ddot{\psi}_n(x,x',s,t), \tag{94}$$
we have
$$\mathrm{Var}\{\hat{H}\} = \int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\Big[e^{-u-v}\, Z(0,0,0) - e^{-v}\, Z(-u,-u,0) - e^{-u}\, Z(-v,0,-v) + Z(-u-v,-u,-v)\Big] - \mathbb{E}^2\{\hat{H}\}. \tag{95}$$
To obtain numerical results, it is convenient to particularize the analysis now to the binary symmetric source (BSS). From (75),
$$\mathbb{E}\{\hat{H}\} = \int_0^\infty \frac{du}{u}\left(e^{-u}\left(\frac{1 + e^{-u}}{2}\right)^{n-1} - e^{-un}\right). \tag{96}$$
For the variance, it follows from (84) that, for $x \neq x'$ with $x, x' \in \{0,1\}$ and $s, t \in \mathbb{R}$,
$$\psi_n(x,x',s,t) = \left(\frac{e^{s/n} + e^{t/n}}{2}\right)^n, \tag{97}$$
$$\ddot{\psi}_n(x,x',s,t) = \frac{\partial^2\psi_n(x,x',s,t)}{\partial s\,\partial t} = \frac{1}{4}\left(1 - \frac{1}{n}\right)\left(\frac{e^{s/n} + e^{t/n}}{2}\right)^{n-2} e^{(s+t)/n}, \tag{98}$$
and, from (87)–(89), for $x \neq x'$,
$$\mathbb{E}\big\{\hat{P}(x)\ln\hat{P}(x)\cdot\hat{P}(x')\ln\hat{P}(x')\big\} = \frac{1}{4}\left(1 - \frac{1}{n}\right)\int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\Bigg[e^{-u-v} - e^{-(u/n + v)}\left(\frac{1 + e^{-u/n}}{2}\right)^{n-2} - e^{-(u + v/n)}\left(\frac{1 + e^{-v/n}}{2}\right)^{n-2} + e^{-(u+v)/n}\left(\frac{e^{-u/n} + e^{-v/n}}{2}\right)^{n-2}\Bigg]. \tag{99}$$
From (72), for $x \in \{0,1\}$ and $t \in \mathbb{R}$,
$$\phi_n(x,t) = \left(\frac{1 + e^{t/n}}{2}\right)^n, \tag{100}$$
$$\ddot{\phi}_n(x,t) = \frac{\partial^2\phi_n(x,t)}{\partial t^2} = \frac{e^{t/n}}{4n}\left(\frac{1 + e^{t/n}}{2}\right)^{n-2}\Big(1 + n\, e^{t/n}\Big), \tag{101}$$
and, from (90)–(92), for $x \in \{0,1\}$,
$$\mathbb{E}\big\{\big[\hat{P}(x)\ln\hat{P}(x)\big]^2\big\} = \frac{1}{4n}\int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\Bigg[(n+1)\, e^{-u-v} - e^{-(u/n + v)}\left(\frac{1 + e^{-u/n}}{2}\right)^{n-2}\Big(1 + n\, e^{-u/n}\Big) - e^{-(u + v/n)}\left(\frac{1 + e^{-v/n}}{2}\right)^{n-2}\Big(1 + n\, e^{-v/n}\Big) + e^{-(u+v)/n}\left(\frac{1 + e^{-(u+v)/n}}{2}\right)^{n-2}\Big(1 + n\, e^{-(u+v)/n}\Big)\Bigg]. \tag{102}$$
Combining Equations (93), (99), and (102), gives the following closed-form expression for the variance of the empirical entropy:
$$\begin{aligned}
\mathrm{Var}\{\hat{H}\} &= \frac{1}{2}\left(1 + \frac{1}{n}\right)\int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\left[e^{-(u+v)} - e^{-v} f_n\!\left(\frac{u}{n}\right) - e^{-u} f_n\!\left(\frac{v}{n}\right) + f_n\!\left(\frac{u+v}{n}\right)\right] \\
&\quad + \frac{1}{2}\left(1 - \frac{1}{n}\right)\int_0^\infty\!\!\int_0^\infty \frac{du\, dv}{uv}\left[e^{-(u+v)} - e^{-v} g_n\!\left(\frac{u}{n},0\right) - e^{-u} g_n\!\left(0,\frac{v}{n}\right) + g_n\!\left(\frac{u}{n},\frac{v}{n}\right)\right] \\
&\quad - \left[\int_0^\infty \frac{du}{u}\left(e^{-u}\left(\frac{1 + e^{-u}}{2}\right)^{n-1} - e^{-un}\right)\right]^2,
\end{aligned} \tag{103}$$
where
$$f_n(s) := \frac{e^{-s}\left(\frac{1 + e^{-s}}{2}\right)^{n-2}\big(1 + n\, e^{-s}\big)}{n+1}, \tag{104}$$
$$g_n(s,t) := e^{-s-t}\left(\frac{e^{-s} + e^{-t}}{2}\right)^{n-2}. \tag{105}$$
For the BSS, $\ln 2 - \mathbb{E}\{\hat{H}\} = \mathbb{E}\{D(\hat{P}\,\|\,P)\}$ and the standard deviation of $\hat{H}$ both decay at the rate of $\frac{1}{n}$ as n grows without bound, as seen in Figure 4. This asymptotic behavior of $\mathbb{E}\{D(\hat{P}\,\|\,P)\}$ is supported by the well-known result [31] (see also [18] (Section 3.C) and references therein) that, for the class of discrete memoryless sources $\{P\}$ with a given finite alphabet $\mathcal{X}$,
$$\ln\frac{\hat{P}(X_1,\dots,X_n)}{P(X_1,\dots,X_n)} \;\longrightarrow\; \frac{1}{2}\chi_d^2 \tag{106}$$
in law, where $\chi_d^2$ is a chi-squared random variable with d degrees of freedom (here, $d = |\mathcal{X}| - 1$). The left-hand side of (106) can be rewritten as
$$\ln\frac{\exp\{-n\hat{H}\}}{\exp\{-n\hat{H} - nD(\hat{P}\,\|\,P)\}} = n\, D(\hat{P}\,\|\,P), \tag{107}$$
and so, $\mathbb{E}\{D(\hat{P}\,\|\,P)\}$ decays like $\frac{d}{2n}$, which is equal to $\frac{1}{2n}$ for the BSS. In Figure 4, the base of the logarithm is 2, and therefore $1 - \mathbb{E}\{\hat{H}\} = \mathbb{E}\{D(\hat{P}\,\|\,P)\}$ decays like $\frac{\log_2 e}{2n} \approx \frac{0.7213}{n}$. It can be verified numerically that $1 - \mathbb{E}\{\hat{H}\}$ (in bits) is equal to $7.25\cdot 10^{-3}$ and $7.217\cdot 10^{-4}$ for $n = 100$ and $n = 1000$, respectively (see Figure 4), which confirms (106) and (107). Furthermore, the exact result here for the standard deviation, which decays like $\frac{1}{n}$, scales similarly to the concentration inequality in [32] (Equation (9)).
We conclude this subsection by exploring a quantity related to the empirical entropy, which is the expected code length associated with the universal lossless source code due to Krichevsky and Trofimov [23]. In a nutshell, this is a predictive universal code, which at each time instant t, sequentially assigns probabilities to the next symbol according to (a biased version of) the empirical distribution pertaining to the data seen thus far, x 1 , , x t . Specifically, consider the code length function (in nats),
$$L(x^n) = -\sum_{t=0}^{n-1}\ln Q\big(x_{t+1}\,|\,x^t\big), \tag{108}$$
where
$$Q\big(x_{t+1} = x\,|\,x_1,\dots,x_t\big) = \frac{N_t(x) + s}{t + s|\mathcal{X}|}, \tag{109}$$
$N_t(x)$ is the number of occurrences of the symbol $x \in \mathcal{X}$ in $(x_1,\dots,x_t)$, and $s > 0$ is a fixed bias parameter needed for the initial coding distribution ($t = 0$).
We now calculate the redundancy of this universal code,
$$R_n = \frac{\mathbb{E}\{L(X^n)\}}{n} - H, \tag{110}$$
where H is the entropy of the underlying source. From Equations (108), (109), and (110), we can represent R n as follows,
$$R_n = \frac{1}{n}\sum_{t=0}^{n-1}\mathbb{E}\left\{\ln\frac{\big(t + s|\mathcal{X}|\big)\, P(X_{t+1})}{N_t(X_{t+1}) + s}\right\}. \tag{111}$$
The expectation on the right-hand side of (111) satisfies
$$\begin{aligned}
\mathbb{E}\left\{\ln\frac{\big(t + s|\mathcal{X}|\big)\, P(X_{t+1})}{N_t(X_{t+1}) + s}\right\} &= \sum_x P(x)\,\mathbb{E}\left\{\ln\frac{\big(t + s|\mathcal{X}|\big)\, P(x)}{N_t(x) + s}\right\} \\
&= \int_0^\infty \left(e^{-us}\sum_x P(x)\,\mathbb{E}\big\{e^{-uN_t(x)}\big\} - \sum_x P(x)\, e^{-u(s|\mathcal{X}| + t)P(x)}\right)\frac{du}{u} \\
&= \int_0^\infty \left(e^{-us}\sum_x P(x)\Big[1 - P(x)\big(1 - e^{-u}\big)\Big]^t - \sum_x P(x)\, e^{-u(s|\mathcal{X}| + t)P(x)}\right)\frac{du}{u},
\end{aligned} \tag{112}$$
and so, from (111) and (112), the redundancy is given by
$$\begin{aligned}
R_n &= \frac{1}{n}\sum_{t=0}^{n-1}\mathbb{E}\left\{\ln\frac{\big(t + s|\mathcal{X}|\big)\, P(X_{t+1})}{N_t(X_{t+1}) + s}\right\} \\
&= \frac{1}{n}\int_0^\infty \left(e^{-us}\sum_x P(x)\sum_{t=0}^{n-1}\Big[1 - P(x)\big(1 - e^{-u}\big)\Big]^t - \sum_x P(x)\, e^{-us|\mathcal{X}|P(x)}\sum_{t=0}^{n-1}e^{-uP(x)t}\right)\frac{du}{u} \\
&= \frac{1}{n}\int_0^\infty \left(e^{-us}\sum_x \frac{1 - \Big[1 - P(x)\big(1 - e^{-u}\big)\Big]^n}{1 - e^{-u}} - \sum_x \frac{P(x)\, e^{-us|\mathcal{X}|P(x)}\Big(1 - e^{-uP(x)n}\Big)}{1 - e^{-uP(x)}}\right)\frac{du}{u} \\
&= \frac{1}{n}\int_0^\infty \left(\frac{e^{-us}\Big(|\mathcal{X}| - \sum_x\big[1 - P(x)\big(1 - e^{-u}\big)\big]^n\Big)}{1 - e^{-u}} - \sum_x \frac{P(x)\, e^{-us|\mathcal{X}|P(x)}\Big(1 - e^{-uP(x)n}\Big)}{1 - e^{-uP(x)}}\right)\frac{du}{u}.
\end{aligned} \tag{113}$$
Figure 5 displays $nR_n$ as a function of $\ln n$ for $s = \frac{1}{2}$, in the range $1 \leq n \leq 5000$. As can be seen, the graph is nearly a straight line with slope $\frac{1}{2}$, which is in agreement with the theoretical result that $R_n \approx \frac{\ln n}{2n}$ (in nats per symbol) for large n (see [23] (Theorem 2)).
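The sketch below (ours) implements the final expression in (113) for a general pmf and bias s, and evaluates $nR_n$ for the BSS with $s = \frac{1}{2}$; the split of the integration range and the function name are our own choices, made only for numerical convenience.

```python
import numpy as np
from scipy.integrate import quad

def kt_redundancy_nats(p, n, s=0.5):
    # Redundancy R_n (nats per symbol) of the Krichevsky-Trofimov code, via Eq. (113).
    p = np.asarray(p, dtype=float)
    K = len(p)                      # alphabet size |X|
    def integrand(u):
        em = -np.expm1(-u)          # 1 - e^{-u}
        first = np.exp(-u * s) * np.sum(1.0 - (1.0 - p * em) ** n) / em
        second = np.sum(p * np.exp(-u * s * K * p)
                        * (-np.expm1(-u * p * n)) / (-np.expm1(-u * p)))
        return (first - second) / u
    val = quad(integrand, 0.0, 1.0, limit=400)[0] + quad(integrand, 1.0, np.inf, limit=200)[0]
    return val / n

for n in [100, 1000]:
    Rn = kt_redundancy_nats([0.5, 0.5], n)
    print(n, n * Rn, 0.5 * np.log(n))   # n*R_n should grow roughly like (ln n)/2
```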

4. Summary and Outlook

In this work, we have explored a well-known integral representation of the logarithmic function, and demonstrated its applications in obtaining exact formulas for quantities that involve expectations and second order moments of the logarithm of a positive random variable (or the logarithm of a sum of i.i.d. such random variables). We anticipate that this integral representation and its variants can serve as useful tools in many additional applications, representing a rigorous alternative to the replica method in some situations.
Our work in this paper focused on exact results. In future research, it would be interesting to explore whether the integral representation we have used is useful also in obtaining upper and lower bounds on expectations (and higher order moments) of expressions that involve logarithms of positive random variables. In particular, could the integrand of (1) be bounded from below and/or above in a nontrivial manner that would lead to new interesting bounds? Moreover, it would be even more useful if the corresponding bounds on the integrand would lend themselves to closed-form expressions of the resulting definite integrals.
Another route for further research relies on [12] (p. 363, Identity (3.434.1)), which states that
$$\int_0^\infty \frac{e^{-\nu u} - e^{-\mu u}}{u^{\rho+1}}\, du = \frac{\mu^\rho - \nu^\rho}{\rho}\cdot\Gamma(1-\rho), \qquad \mathrm{Re}(\mu) > 0,\ \mathrm{Re}(\nu) > 0,\ \mathrm{Re}(\rho) < 1. \tag{114}$$
Let $\nu := 1$ and $\mu := \sum_{i=1}^n X_i$, where $\{X_i\}_{i=1}^n$ are positive i.i.d. random variables. Taking expectations of both sides of (114) and rearranging terms gives
$$\mathbb{E}\left\{\left(\sum_{i=1}^n X_i\right)^{\!\rho}\right\} = 1 + \frac{\rho}{\Gamma(1-\rho)}\int_0^\infty \frac{e^{-u} - M_X^n(-u)}{u^{\rho+1}}\, du, \qquad \rho \in (0,1), \tag{115}$$
where X is a random variable having the same density as the $X_i$'s, and $M_X(u) := \mathbb{E}\{e^{uX}\}$ (for $u \in \mathbb{R}$) denotes the MGF of X. Since
$$\ln x = \lim_{\rho\to 0}\frac{x^\rho - 1}{\rho}, \qquad x > 0, \tag{116}$$
it follows that (115) generalizes (3) for the logarithmic expectation. Identity (115), for the $\rho$-th moment of a sum of i.i.d. positive random variables with $\rho \in (0,1)$, may be used in some information-theoretic contexts rather than invoking Jensen's inequality.
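As an illustration of (115) (our own example), take $X_i \sim$ Exp(1), so that $M_X(-u) = 1/(1+u)$ and the sum is Gamma(n,1)-distributed with $\mathbb{E}\{S^\rho\} = \Gamma(n+\rho)/\Gamma(n)$; the sketch below compares the two.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def rho_moment_of_sum(n, rho):
    # E{(X_1+...+X_n)^rho} for i.i.d. Exp(1) X_i, via Eq. (115) with M_X(-u) = 1/(1+u).
    integrand = lambda u: (np.exp(-u) - (1.0 + u) ** (-n)) / u ** (rho + 1.0)
    val = quad(integrand, 0.0, 1.0, limit=200)[0] + quad(integrand, 1.0, np.inf, limit=200)[0]
    return 1.0 + rho / gamma(1.0 - rho) * val

n, rho = 5, 0.3
print(rho_moment_of_sum(n, rho))      # via the integral representation (115)
print(gamma(n + rho) / gamma(n))      # exact value for a Gamma(n,1) sum
```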

Author Contributions

Investigation, N.M. and I.S.; Writing-original draft, N.M. and I.S.; Writing-review & editing, N.M. and I.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors are thankful to Cihan Tepedelenlioǧlu and Zbigniew Golebiewski for bringing references [4,11], respectively, to their attention.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mézard, M.; Montanari, A. Information, Physics, and Computation; Oxford University Press: New York, NY, USA, 2009.
  2. Esipov, S.E.; Newman, T.J. Interface growth and Burgers turbulence: The problem of random initial conditions. Phys. Rev. E 1993, 48, 1046–1050.
  3. Song, J.; Still, S.; Rojas, R.D.H.; Castillo, I.P.; Marsili, M. Optimal work extraction and mutual information in a generalized Szilárd engine. arXiv 2019, arXiv:1910.04191.
  4. Rajan, A.; Tepedelenlioǧlu, C. Stochastic ordering of fading channels through the Shannon transform. IEEE Trans. Inf. Theory 2015, 61, 1619–1628.
  5. Simon, M.K. A new twist on the Marcum Q-function and its application. IEEE Commun. Lett. 1998, 2, 39–41.
  6. Simon, M.K.; Divsalar, D. Some new twists to problems involving the Gaussian probability integral. IEEE Trans. Inf. Theory 1998, 46, 200–210.
  7. Craig, J.W. A new, simple and exact result for calculating the probability of error for two-dimensional signal constellations. In MILCOM 91 Conference Record; IEEE: Piscataway, NJ, USA, 1991; pp. 25.5.1–25.5.5.
  8. Appledorn, C.R. The entropy of a Poisson distribution. SIAM Rev. 1988, 30, 314–317.
  9. Erdélyi, A.; Magnus, W.; Oberhettinger, F.; Tricomi, F.G.; Bateman, H. Higher Transcendental Functions; McGraw-Hill: New York, NY, USA, 1987; Volume 1.
  10. Martinez, A. Spectral efficiency of optical direct detection. JOSA B 2007, 24, 739–749.
  11. Knessl, C. Integral representations and asymptotic expansions for Shannon and Rényi entropies. Appl. Math. Lett. 1998, 11, 69–74.
  12. Ryzhik, I.M.; Gradshteĭn, I.S. Tables of Integrals, Series, and Products; Academic Press: New York, NY, USA, 1965.
  13. Marsiglietti, A.; Kostina, V. A lower bound on the differential entropy of log-concave random vectors with applications. Entropy 2018, 20, 185.
  14. Dong, A.; Zhang, H.; Wu, D.; Yuan, D. Logarithmic expectation of the sum of exponential random variables for wireless communication performance evaluation. In 2015 IEEE 82nd Vehicular Technology Conference (VTC2015-Fall); IEEE: Piscataway, NJ, USA, 2015.
  15. Tse, D.; Viswanath, P. Fundamentals of Wireless Communication; Cambridge University Press: Cambridge, UK, 2005.
  16. Barron, A.; Rissanen, J.; Yu, B. The minimum description length principle in coding and modeling. IEEE Trans. Inf. Theory 1998, 44, 2743–2760.
  17. Blumer, A.C. Minimax universal noiseless coding for unifilar and Markov sources. IEEE Trans. Inf. Theory 1987, 33, 925–930.
  18. Clarke, B.S.; Barron, A.R. Information-theoretic asymptotics of Bayes methods. IEEE Trans. Inf. Theory 1990, 36, 453–471.
  19. Clarke, B.S.; Barron, A.R. Jeffreys' prior is asymptotically least favorable under entropy risk. J. Stat. Plan. Infer. 1994, 41, 37–60.
  20. Davisson, L.D. Universal noiseless coding. IEEE Trans. Inf. Theory 1973, 29, 783–795.
  21. Davisson, L.D. Minimax noiseless universal coding for Markov sources. IEEE Trans. Inf. Theory 1983, 29, 211–215.
  22. Davisson, L.D.; McEliece, R.J.; Pursley, M.B.; Wallace, M.S. Efficient universal noiseless source codes. IEEE Trans. Inf. Theory 1981, 27, 269–278.
  23. Krichevsky, R.E.; Trofimov, V.K. The performance of universal encoding. IEEE Trans. Inf. Theory 1981, 27, 199–207.
  24. Merhav, N.; Feder, M. Universal prediction. IEEE Trans. Inf. Theory 1998, 44, 2124–2147.
  25. Rissanen, J. A universal data compression system. IEEE Trans. Inf. Theory 1983, 29, 656–664.
  26. Rissanen, J. Universal coding, information, prediction, and estimation. IEEE Trans. Inf. Theory 1984, 30, 629–636.
  27. Rissanen, J. Fisher information and stochastic complexity. IEEE Trans. Inf. Theory 1996, 42, 40–47.
  28. Shtarkov, Y.M. Universal sequential coding of single messages. IPPI 1987, 23, 175–186.
  29. Weinberger, M.J.; Rissanen, J.; Feder, M. A universal finite memory source. IEEE Trans. Inf. Theory 1995, 41, 643–652.
  30. Xie, Q.; Barron, A.R. Asymptotic minimax regret for data compression, gambling and prediction. IEEE Trans. Inf. Theory 1997, 46, 431–445.
  31. Wald, A. Tests of statistical hypotheses concerning several parameters when the number of observations is large. Trans. Am. Math. Soc. 1943, 54, 426–482.
  32. Mardia, J.; Jiao, J.; Tánczos, E.; Nowak, R.D.; Weissman, T. Concentration inequalities for the empirical distribution of discrete distributions: Beyond the method of types. Inf. Inference 2019, 1–38.
Figure 1. The normalized differential entropy, $\frac{1}{n}\, h(X_1,\dots,X_n)$ (see (49)), for the multivariate Cauchy density $f(x_1,\dots,x_n) = C_n\big/\big[1 + \sum_{i=1}^n x_i^2\big]^{(n+1)/2}$, with $C_n$ in (48).
Figure 2. The ergodic capacity C (in bits per channel use) of the SIMO channel as a function of $\rho = \mathrm{SNR}$ (in dB) for $L = 2$ receive antennas, with variances $\sigma_1^2 = \frac{1}{2}$ and $\sigma_2^2 = 1$.
Figure 3. The variance of $\ln\big(1 + \rho\sum_{\ell=1}^L |h_\ell|^2\big)$ (in [bits per channel use]$^2$) of the SIMO channel as a function of $\rho = \mathrm{SNR}$ (in dB) for $L = 2$ receive antennas, with variances $\sigma_1^2 = \frac{1}{2}$ and $\sigma_2^2 = 1$.
Figure 4. $1 - \mathbb{E}\{\hat{H}\}$ and $\mathrm{std}(\hat{H})$ for a BSS (in bits per source symbol) as a function of n.
Figure 5. The function $nR_n$ vs. $\ln n$ for the BSS and $s = \frac{1}{2}$, in the range $2 \leq n \leq 5000$.
