Article

Information Bottleneck for a Rayleigh Fading MIMO Channel with an Oblivious Relay

1 Faculty of Electrical Engineering and Computer Science, Technical University of Berlin, 10587 Berlin, Germany
2 Viterbi Electrical Engineering Department, Technion–Israel Institute of Technology, Haifa 32000, Israel
* Author to whom correspondence should be addressed.
Information 2021, 12(4), 155; https://doi.org/10.3390/info12040155
Submission received: 22 February 2021 / Revised: 2 April 2021 / Accepted: 4 April 2021 / Published: 8 April 2021
(This article belongs to the Special Issue Statistical Communication and Information Theory)

Abstract:
This paper considers the information bottleneck (IB) problem of a Rayleigh fading multiple-input multiple-output (MIMO) channel with an oblivious relay. The relay is constrained to operate without knowledge of the codebooks, i.e., it performs oblivious processing. Moreover, due to the bottleneck constraint, it is impossible for the relay to inform the destination node of the perfect channel state information (CSI) in each channel realization. To evaluate the bottleneck rate, we first provide an upper bound by assuming that the destination node can obtain a perfect CSI at no cost. Then, we provide four achievable schemes, where each scheme satisfies the bottleneck constraint and gives a lower bound to the bottleneck rate. In the first and second schemes, the relay splits the capacity of the relay–destination link into two parts and conveys both the CSI and its observation to the destination node. Due to the CSI transmission, the performance of these two schemes is sensitive to the MIMO channel dimension, especially the channel input dimension. To ensure good performance when the channel dimension grows large, in the third and fourth achievable schemes, the relay transmits only compressed observations to the destination node. Numerical results show that, with simple symbol-by-symbol oblivious relay processing and compression, the proposed achievable schemes work well and yield lower bounds that come quite close to the upper bound over a wide range of relevant system parameters.

1. Introduction

For a Markov chain $X \rightarrow Y \rightarrow Z$ and an assigned joint probability distribution $p_{X,Y}$, consider the following information bottleneck (IB) problem:
$$\max_{p_{Z|Y}} \; I(X;Z) \quad \text{s.t.} \quad I(Y;Z) \leq C, \tag{1}$$
where $C$ is the bottleneck constraint parameter and the optimization is with respect to the conditional probability distribution $p_{Z|Y}$ of $Z$ given $Y$. Formulation (1) was introduced by Tishby in [1] and has found remarkable applications in supervised and unsupervised learning problems such as classification, clustering, prediction, etc. [2,3,4,5,6,7]. From a more fundamental information theoretic viewpoint, the IB arises from the classical remote source coding problem [8,9,10] under logarithmic distortion [11].
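For intuition, problem (1) can be solved numerically for a discrete joint pmf with the self-consistent iterations of [1], in which a Lagrange multiplier β trades compression against relevance, so that sweeping β traces out the constraint level C. The following minimal Python sketch is only illustrative; the function name and all parameters are ours, not the paper's:

```python
import numpy as np

def information_bottleneck(p_xy, n_z, beta, n_iter=300, seed=0):
    """Self-consistent IB iterations for a discrete joint pmf p(x, y).

    Heuristically maximizes I(X;Z) - (1/beta) * I(Y;Z); large beta
    corresponds to a loose bottleneck constraint C in problem (1).
    Assumes all marginals of p_xy are strictly positive.
    """
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_y = p_xy.sum(axis=0)                          # p(y)
    p_x_given_y = p_xy / p_y                        # p(x|y), columns indexed by y
    p_z_given_y = rng.random((n_z, n_y))            # random encoder initialization
    p_z_given_y /= p_z_given_y.sum(axis=0)
    for _ in range(n_iter):
        p_z = p_z_given_y @ p_y                     # marginal p(z)
        p_x_given_z = (p_xy @ p_z_given_y.T) / p_z  # Bayes decoder p(x|z)
        # KL(p(x|y) || p(x|z)) for every pair (z, y)
        log_ratio = (np.log(p_x_given_y[:, None, :] + 1e-32)
                     - np.log(p_x_given_z[:, :, None] + 1e-32))
        kl = np.einsum('xy,xzy->zy', p_x_given_y, log_ratio)
        # self-consistent encoder update
        p_z_given_y = p_z[:, None] * np.exp(-beta * kl)
        p_z_given_y /= p_z_given_y.sum(axis=0)
    return p_z_given_y

p_xy = np.array([[0.4, 0.1], [0.1, 0.4]])           # a toy joint pmf
print(information_bottleneck(p_xy, n_z=2, beta=5.0))
```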
An interesting application of the IB problem in communications involves a source node, an oblivious relay, and a destination node that is connected to the relay via an error-free link with capacity $C$. The source node sends codewords over a communication channel, and an observation is made at the relay; $X$ and $Y$ are, respectively, the channel input from the source node and the output at the relay. The relay is oblivious in the sense that it cannot decode the information message of the source node itself. This feature can be modeled rigorously by assuming that the source and destination nodes use a codebook selected at random from a library, while the relay is unaware of this random selection. For example, in a cloud radio access network (C-RAN), each remote radio head (RRH) acts as a relay and is usually constrained to implement only radio functionalities, while the baseband functionalities are migrated to the cloud central processor [12]. Considering the relatively simple structure of the RRHs, it is usually prohibitive to let them know the codebooks and random encoding operations, particularly as the network size becomes large. The fact that the relay cannot decode is also supported by secrecy demands, which require the codebooks known to the source and destination nodes to be considered completely random, as done here.
Due to the oblivious feature, relaying strategies that require the codebooks to be known at the relay, e.g., decode-and-forward, compute-and-forward, etc. [13,14,15], cannot be applied. Instead, the relay has to perform oblivious processing, i.e., employ strategies in the form of compress-and-forward [16,17,18,19]. In particular, the relay must treat $X$ as a random process with the distribution induced by random selection over the codebook library (see [12] and references therein), produce some useful representation $Z$ by simple signal processing, and convey it to the destination node subject to the link constraint $C$. It then makes sense to find $Z$ such that $I(X;Z)$ is maximized.
The IB problem for this kind of communication scenario has been studied in [12,20,21,22,23,24,25,26]. In [20], the IB method was applied to reduce the fronthaul data rate of a C-RAN network. References [21,22], respectively, considered Gaussian scalar and vector channels with IB constraint and investigated the optimal tradeoff between the compression rate and the relevant information. In [23], the bottleneck rate of a frequency-selective scalar Gaussian primitive diamond relay channel was examined. In [24,25], the rate-distortion region of a vector Gaussian system with multiple relays was characterized under the logarithmic loss distortion measure. Reference [12] further extended the work in [25] to a C-RAN network with multiple transmitters and multiple relays and studied the capacity region of this network. However, all of References [12,20,21,22,23,24,25] considered block fading channels and assumed that the perfect channel state information (CSI) was known at both the relay and the destination nodes. In [26], the IB problem of a scalar Rayleigh fading channel was studied. Due to the bottleneck constraint, it was impossible to inform the destination node of the perfect CSI in each channel realization. An upper bound and two achievable schemes were provided in [26] to investigate the bottleneck rate.
In this paper, we extend the work in [26] to the multiple-input multiple-output (MIMO) channel with independent and identically distributed (i.i.d.) Rayleigh fading. This model is relevant for the practical setting of the uplink of a wireless multiuser system where $K$ users send coded uplink signals to a base station. The base station is formed by an RRH with $M$ antennas, connected to a cloud central processor via a digital link of rate $C$ (bottleneck link). The RRH is oblivious to the user codebooks and can apply only simple localized signal processing corresponding to the low-level physical layer functions (i.e., it is an oblivious relay). In current implementations, the RRH quantizes both the uplink pilot symbols and the data-bearing symbols received from the users on each “resource block” (this corresponds roughly to a coherence block of the underlying fading channel in the time-frequency domain) and sends the quantization bits to the cloud processor via the digital link. Here, we simplify the problem: instead of considering a specific pilot-based channel estimation scheme, we assume that the channel matrix is given perfectly to the relay (remote radio head), i.e., that the CSI is perfect but local at the relay. Then, we consider an upper bound and specific achievability strategies to maximize the mutual information between the user transmitted signals and the message delivered to the cloud processor, where we allow the relay to operate local oblivious processing as an alternative to direct quantization of both the CSI and the received data-bearing signal.
Intuitively, the relay can split the capacity of the relay–destination link into two parts and convey both the CSI and its observation to the destination node. Hence, in the first and second achievable schemes, the relay transmits the compressed CSI and observation to the destination node. Specifically, in the first scheme, the relay simply compresses the channel matrix as well as its observation and then forwards them to the destination node. Roughly speaking, this is what happens today in “naive” implementations of RRH systems; this scheme can therefore be seen as a baseline. However, the capacity allocated to conveying the CSI to the destination in this scheme is proportional to both the channel input dimension and the number of antennas at the relay. To reduce the number of channel uses required for CSI transmission, in the second achievable scheme, the relay first obtains an estimate of the channel input using channel inversion and then transmits the quantized noise levels as well as the compressed noisy signal to the destination node. In contrast to the first scheme, the capacity allocated to CSI transmission in this scheme is proportional only to the channel input dimension.
Due to the explicit CSI transmission through the bottleneck, the performance of the first and second achievable schemes is sensitive to the MIMO channel dimension, especially the channel input dimension. To ensure good performance when the channel dimension grows large, in the third and fourth achievable schemes, the relay does not convey any CSI to the destination node. In the third scheme, the relay first estimates the channel input using channel inversion and then transmits a truncated representation of the estimate to the destination node. In the fourth scheme, the relay first produces the minimum mean-squared error (MMSE) estimate of the channel input and then source-encodes this estimate. Numerical results show that, with simple symbol-by-symbol oblivious relay processing and compression, the lower bounds obtained by the proposed achievable schemes can come close to the upper bound over a wide range of relevant system parameters.
The rest of this paper is organized as follows. In Section 2, a MIMO channel with Rayleigh fading is presented and the IB problem for this system is formulated. Section 3 provides an upper bound to the bottleneck rate. In Section 4, four achievable schemes are proposed, where each scheme satisfies the bottleneck constraint and gives a lower bound to the bottleneck rate. Numerical results are presented in Section 5 before the conclusions in Section 6.
Throughout this paper, we use the following notation. $\mathbb{R}$ and $\mathbb{C}$ denote the real space and the complex space, respectively. Boldface upper (lower) case letters denote matrices (vectors). $\mathbf{I}_K$ stands for the $K \times K$ identity matrix, and $\mathbf{0}$ denotes the all-zero vector or matrix. Superscript $(\cdot)^H$ denotes the conjugate transpose, $\mathbb{E}[\cdot]$ denotes expectation, and $[\cdot]^+ \triangleq \max(\cdot, 0)$. $\otimes$ and $\odot$ denote, respectively, the Kronecker product and the Hadamard product.

2. Problem Formulation

We consider a system with a source node, an oblivious relay, and a destination node as shown in Figure 1. For convenience, we call the source–relay channel “Channel 1” and the relay–destination channel “Channel 2”. For Channel 1, we consider the following Gaussian MIMO channel with i.i.d. Rayleigh fading:
$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}, \tag{2}$$
where $\mathbf{x} \in \mathbb{C}^{K\times 1}$ and $\mathbf{n} \in \mathbb{C}^{M\times 1}$ are, respectively, zero-mean circularly symmetric complex Gaussian input and noise with covariance matrices $\mathbf{I}_K$ and $\sigma^2\mathbf{I}_M$, i.e., $\mathbf{x} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_K)$ and $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \sigma^2\mathbf{I}_M)$. $\mathbf{H} \in \mathbb{C}^{M\times K}$ is a random matrix independent of both $\mathbf{x}$ and $\mathbf{n}$, and the elements of $\mathbf{H}$ are i.i.d. zero-mean unit-variance complex Gaussian random variables, i.e., $\mathbf{H} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_K \otimes \mathbf{I}_M)$. Let $\rho = \frac{1}{\sigma^2}$ denote the signal-to-noise ratio (SNR). Let $\mathbf{z}$ denote a useful representation of $\mathbf{y}$ produced by the relay for the destination node; $\mathbf{x} \rightarrow (\mathbf{y}, \mathbf{H}) \rightarrow \mathbf{z}$ thus forms a Markov chain. We assume that the relay node has a direct observation of the channel matrix $\mathbf{H}$ while the destination node does not, since we consider a Rayleigh fading channel and a capacity-constrained relay–destination link. Then, the IB problem can be formulated as follows:
$$\max_{p(\mathbf{z}|\mathbf{y},\mathbf{H})} \; I(\mathbf{x};\mathbf{z}) \tag{3a}$$
$$\text{s.t.} \quad I(\mathbf{y},\mathbf{H};\mathbf{z}) \leq C, \tag{3b}$$
where $C$ is the bottleneck constraint, i.e., the link capacity of Channel 2. In this paper, we call $I(\mathbf{x};\mathbf{z})$ the bottleneck rate and $I(\mathbf{y},\mathbf{H};\mathbf{z})$ the compression rate. Obviously, for a joint probability distribution $p(\mathbf{x},\mathbf{y},\mathbf{H})$ determined by (2), problem (3) is a slightly augmented version of IB problem (1). In our problem, we aim to find a conditional distribution $p(\mathbf{z}|\mathbf{y},\mathbf{H})$ such that bottleneck constraint (3b) is satisfied and the bottleneck rate is maximized, i.e., such that as much information about $\mathbf{x}$ as possible can be extracted from representation $\mathbf{z}$.

3. Informed Receiver Upper Bound

As stated in [26], an obvious upper bound to problem (3) can be obtained by letting both the relay and the destination node know the channel matrix $\mathbf{H}$. We call the bound in this case the informed receiver upper bound. The IB problem in this case takes on the following form:
$$\max_{p(\mathbf{z}|\mathbf{y},\mathbf{H})} \; I(\mathbf{x};\mathbf{z}|\mathbf{H}) \tag{4a}$$
$$\text{s.t.} \quad I(\mathbf{y};\mathbf{z}|\mathbf{H}) \leq C. \tag{4b}$$
In Reference [21], the IB problem for a scalar Gaussian channel with block fading has been studied. In the following theorem, we show that, for the considered MIMO channel with Rayleigh fading, (4) can be decomposed into a set of parallel scalar IB problems and the informed receiver upper bound can be obtained based on the result in [21].
Theorem 1.
For the considered MIMO channel with Rayleigh fading, the informed receiver upper bound, i.e., the optimal objective function of IB problem (4), is
$$R^{\mathrm{ub}} = T \int_{\frac{\nu}{\rho}}^{\infty} \left[\log(1+\rho\lambda) - \log(1+\nu)\right] f_\lambda(\lambda)\, d\lambda, \tag{5}$$
where $T = \min\{K, M\}$; $\lambda$ is identically distributed as the unordered positive eigenvalues of $\mathbf{H}^H\mathbf{H}$, whose probability density function (pdf) $f_\lambda(\lambda)$ is given in (A17); and $\nu$ is chosen such that the following bottleneck constraint is met:
$$\int_{\frac{\nu}{\rho}}^{\infty} \log\frac{\rho\lambda}{\nu}\, f_\lambda(\lambda)\, d\lambda = \frac{C}{T}. \tag{6}$$
Proof. 
See Appendix A. □
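As a sanity check of Theorem 1, $R^{\mathrm{ub}}$ can also be evaluated without the closed-form pdf (A17), by sampling Wishart eigenvalues and bisecting on the water level $\nu$ until constraint (6) is met. The following Python sketch (all names ours; logs in bits) is one such assumption-laden implementation, not the paper's code:

```python
import numpy as np

def informed_receiver_bound(K, M, snr, C, n_samples=100_000, seed=0):
    """Monte Carlo evaluation of R_ub in Theorem 1 (logs in bits).

    Samples the unordered positive eigenvalues of H^H H for i.i.d.
    CN(0,1) entries of H, then bisects on the water level nu so that
    the bottleneck constraint (6) is met with equality.
    """
    rng = np.random.default_rng(seed)
    T = min(K, M)
    H = (rng.standard_normal((n_samples, M, K))
         + 1j * rng.standard_normal((n_samples, M, K))) / np.sqrt(2)
    # positive eigenvalues of H^H H, pooled (they are identically distributed)
    lam = np.linalg.eigvalsh(np.swapaxes(H.conj(), 1, 2) @ H)
    lam = lam[lam > 1e-12]

    def rate_used(nu):           # left-hand side of constraint (6)
        active = lam > nu / snr
        return np.log2(snr * lam[active] / nu).sum() / lam.size

    lo, hi = 1e-9, snr * lam.max()
    for _ in range(200):         # bisection: rate_used decreases in nu
        nu = np.sqrt(lo * hi)
        if rate_used(nu) > C / T:
            lo = nu
        else:
            hi = nu
    active = lam > nu / snr
    r = (np.log2(1 + snr * lam[active]) - np.log2(1 + nu)).sum() / lam.size
    return T * r

# example: K = 2 users, M = 4 antennas, 10 dB SNR, bottleneck C = 8 bits
print(informed_receiver_bound(K=2, M=4, snr=10.0, C=8.0))
```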
Lemma 1.
When $M \to +\infty$ or $\rho \to +\infty$, upper bound $R^{\mathrm{ub}}$ tends asymptotically to $C$. When $C \to +\infty$, $R^{\mathrm{ub}}$ approaches the capacity of Channel 1, i.e.,
$$R^{\mathrm{ub}} \to I(\mathbf{x};\mathbf{y},\mathbf{H}) = T \int_{0}^{\infty} \log(1+\rho\lambda) f_\lambda(\lambda)\, d\lambda. \tag{7}$$
Proof. 
See Appendix B. □

4. Achievable Schemes

In this section, we provide four achievable schemes, where each scheme satisfies the bottleneck constraint and gives a lower bound to the bottleneck rate. In the first and second schemes, the relay transmits both its observation and partial CSI to the destination node. In the third and fourth schemes, to avoid transmitting CSI, the relay first estimates $\mathbf{x}$ and then sends a representation of the estimate to the destination node.

4.1. Non-Decoding Transmission (NDT) Scheme

Our first achievable scheme assumes that, without decoding $\mathbf{x}$, the relay simply source-encodes both $\mathbf{y}$ and $\mathbf{H}$ and then sends the encoded representations to the destination node. It should be noticed that this scheme is actually reminiscent of the current state of the art in remote antenna head technology, where both the pilot field (corresponding to $\mathbf{H}$) and the data field (corresponding to $\mathbf{y}$) are quantized and sent to the central processing unit.
Let $\mathbf{h}$ denote the vectorization of matrix $\mathbf{H}$, and let $\mathbf{z}_1$ and $\mathbf{z}_2$ denote the representations of $\mathbf{h}$ and $\mathbf{y}$, respectively. From the definition of $\mathbf{H}$ in (2), it is known that $\mathbf{h} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_{KM})$. Since the elements of $\mathbf{h}$ are i.i.d., in the best case, where $I(\mathbf{h};\mathbf{z}_1)$ is minimized for a given total distortion, representation $\mathbf{z}_1$ introduces the same distortion to each element of $\mathbf{h}$. Denote the quantization distortion of each element by $D$. It can then be readily verified by using ([27], Theorem 10.3.3) that the rate distortion function of source $\mathbf{h}$ with total squared-error distortion $KMD$ is given by
$$R(D) = \min_{f(\mathbf{z}_1|\mathbf{h}):\, \mathbb{E}[d(\mathbf{h},\mathbf{z}_1)] \leq KMD} I(\mathbf{h};\mathbf{z}_1) = KM \log\frac{1}{D}, \tag{8}$$
where $0 < D \leq 1$ and $d(\mathbf{h},\mathbf{z}_1) = (\mathbf{h}-\mathbf{z}_1)^H(\mathbf{h}-\mathbf{z}_1)$ is the squared-error distortion measure. Let $\mathbf{e}_1$ denote the error vector of quantizing $\mathbf{h}$, i.e., $\mathbf{e}_1 = \mathbf{h} - \mathbf{z}_1$, and let $\mathbf{Z}_1$ and $\mathbf{E}_1$ be the matrices whose vectorizations are $\mathbf{z}_1$ and $\mathbf{e}_1$, so that $\mathbf{H} = \mathbf{Z}_1 + \mathbf{E}_1$. Note that $\mathbf{z}_1 \sim \mathcal{CN}(\mathbf{0}, (1-D)\mathbf{I}_{KM})$, $\mathbf{e}_1 \sim \mathcal{CN}(\mathbf{0}, D\mathbf{I}_{KM})$, and $\mathbf{z}_1$ is independent of $\mathbf{e}_1$. Hence,
$$\mathbb{E}\left[\mathbf{Z}_1\mathbf{Z}_1^H\right] = K(1-D)\mathbf{I}_M, \quad \mathbb{E}\left[\mathbf{E}_1\mathbf{E}_1^H\right] = KD\mathbf{I}_M. \tag{9}$$
In ([27], Theorem 10.3.3), the achievability of an information rate for a given distortion, e.g., (8), is proven by considering a backward Gaussian test channel. However, the backward Gaussian test channel does not provide an expression for $\mathbf{z}_1$ or $\mathbf{e}_1$. Though the specific forms of $\mathbf{z}_1$ and $\mathbf{e}_1$ are not necessary for the analysis in this section, since we are providing an achievable scheme, we give a feasible $\mathbf{z}_1$ that satisfies (8) here for completeness. By adding an independent Gaussian noise vector $\mathbf{r} \sim \mathcal{CN}(\mathbf{0}, \varepsilon\mathbf{I}_{KM})$, with $\varepsilon = \frac{D}{1-D}$, to $\mathbf{h}$, we get
$$\tilde{\mathbf{h}} = \mathbf{h} + \mathbf{r}. \tag{10}$$
Obviously, $\tilde{\mathbf{h}} \sim \mathcal{CN}\left(\mathbf{0}, \frac{1}{1-D}\mathbf{I}_{KM}\right)$. A representation of $\mathbf{h}$ can then be obtained as follows:
$$\mathbf{z}_1 = \frac{1}{1+\varepsilon}\tilde{\mathbf{h}} = \frac{1}{1+\varepsilon}\mathbf{h} + \frac{1}{1+\varepsilon}\mathbf{r} = (1-D)\mathbf{h} + (1-D)\mathbf{r}, \tag{11}$$
which is actually the MMSE estimate of $\mathbf{h}$ obtained from (10). The error vector is then given by
$$\mathbf{e}_1 = \mathbf{h} - \mathbf{z}_1 = D\mathbf{h} - (1-D)\mathbf{r}. \tag{12}$$
It can be readily verified that $\mathbf{z}_1$ provided in (11) satisfies (8), that $\mathbf{z}_1 \sim \mathcal{CN}(\mathbf{0}, (1-D)\mathbf{I}_{KM})$ and $\mathbf{e}_1 \sim \mathcal{CN}(\mathbf{0}, D\mathbf{I}_{KM})$, and that $\mathbf{z}_1$ is independent of $\mathbf{e}_1$.
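As a quick numerical check of the construction (10)-(12), the following sketch (illustrative only, with our own parameter choices) verifies that $\mathbf{z}_1$ and $\mathbf{e}_1$ have the claimed variances and are uncorrelated:

```python
import numpy as np

# Empirical check of (10)-(12): z1 ~ CN(0, (1-D) I), e1 = h - z1 ~ CN(0, D I),
# and z1 uncorrelated with e1 (hence independent, both being Gaussian).
rng = np.random.default_rng(0)
K, M, D, n = 2, 4, 0.1, 200_000
eps = D / (1 - D)
cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
h = cn(n, K * M)                       # vectorized channel, CN(0, I)
r = np.sqrt(eps) * cn(n, K * M)        # independent noise, CN(0, eps I)
z1 = (h + r) / (1 + eps)               # MMSE estimate of h from h + r, cf. (11)
e1 = h - z1                            # quantization error, cf. (12)
print(np.var(z1), np.var(e1))          # ~ (1 - D) and ~ D
print(abs((z1.conj() * e1).mean()))    # ~ 0: z1 and e1 uncorrelated
```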
To meet the bottleneck constraint, we have to ensure that
$$I(\mathbf{h}, \mathbf{y}; \mathbf{z}_1, \mathbf{z}_2) \leq C. \tag{13}$$
Using the chain rule of mutual information,
$$I(\mathbf{h}, \mathbf{y}; \mathbf{z}_1, \mathbf{z}_2) = I(\mathbf{h}, \mathbf{y}; \mathbf{z}_1) + I(\mathbf{h}, \mathbf{y}; \mathbf{z}_2|\mathbf{z}_1) = I(\mathbf{h}; \mathbf{z}_1) + I(\mathbf{y}; \mathbf{z}_1|\mathbf{h}) + I(\mathbf{y}; \mathbf{z}_2|\mathbf{z}_1) + I(\mathbf{h}; \mathbf{z}_2|\mathbf{z}_1, \mathbf{y}). \tag{14}$$
Since $\mathbf{z}_1$ is a representation of $\mathbf{h}$, $\mathbf{y}$ and $\mathbf{z}_1$ are conditionally independent given $\mathbf{h}$. Similarly, since $\mathbf{z}_2$ is a representation of $\mathbf{y}$, $\mathbf{h}$ and $\mathbf{z}_2$ are conditionally independent given $\mathbf{y}$. Hence,
$$I(\mathbf{y}; \mathbf{z}_1|\mathbf{h}) = 0, \quad I(\mathbf{h}; \mathbf{z}_2|\mathbf{z}_1, \mathbf{y}) = 0. \tag{15}$$
From (8), (14), and (15), it is known that, to guarantee constraint (13), $I(\mathbf{y}; \mathbf{z}_2|\mathbf{z}_1)$, which is the information rate at which the relay quantizes $\mathbf{y}$ (given $\mathbf{z}_1$), should satisfy
$$I(\mathbf{y}; \mathbf{z}_2|\mathbf{z}_1) \leq C - R(D). \tag{16}$$
Obviously, $C - R(D) > 0$ has to be guaranteed, which yields $D > 2^{-\frac{C}{KM}}$. Hence, in this section, we always assume $2^{-\frac{C}{KM}} < D \leq 1$.
We then evaluate $I(\mathbf{y}; \mathbf{z}_2|\mathbf{z}_1)$. Since $\mathbf{H} = \mathbf{Z}_1 + \mathbf{E}_1$, $\mathbf{y}$ in (2) can be rewritten as
$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n} = \mathbf{Z}_1\mathbf{x} + \mathbf{E}_1\mathbf{x} + \mathbf{n}. \tag{17}$$
For a given $\mathbf{Z}_1$, the second moment of $\mathbf{y}$ is $\mathbb{E}[\mathbf{y}\mathbf{y}^H|\mathbf{Z}_1] = \mathbf{Z}_1\mathbf{Z}_1^H + (KD + \sigma^2)\mathbf{I}_M$. Denote the eigendecomposition of $\mathbf{Z}_1\mathbf{Z}_1^H$ by $\tilde{\mathbf{U}}\boldsymbol{\Omega}\tilde{\mathbf{U}}^H$ and
$$\tilde{\mathbf{y}} = \tilde{\mathbf{U}}^H\mathbf{y} = \tilde{\mathbf{U}}^H\mathbf{Z}_1\mathbf{x} + \tilde{\mathbf{U}}^H\mathbf{E}_1\mathbf{x} + \tilde{\mathbf{U}}^H\mathbf{n}. \tag{18}$$
The second moment of $\tilde{\mathbf{y}}$ is $\mathbb{E}[\tilde{\mathbf{y}}\tilde{\mathbf{y}}^H|\mathbf{Z}_1] = \boldsymbol{\Omega} + (KD + \sigma^2)\mathbf{I}_M$. Since $\mathbf{E}_1$ is unknown, $\tilde{\mathbf{y}}$ is not a Gaussian vector. To evaluate $I(\mathbf{y}; \mathbf{z}_2|\mathbf{z}_1)$, we define a new Gaussian vector
$$\mathbf{y}_g = \tilde{\mathbf{U}}^H\mathbf{Z}_1\mathbf{x} + \mathbf{n}_g, \tag{19}$$
where $\mathbf{n}_g \sim \mathcal{CN}(\mathbf{0}, (KD + \sigma^2)\mathbf{I}_M)$. For a given $\mathbf{Z}_1$, $\mathbf{y}_g \sim \mathcal{CN}(\mathbf{0}, \boldsymbol{\Omega} + (KD + \sigma^2)\mathbf{I}_M)$. The channel in (19) can thus be seen as a set of parallel sub-channels. Let $\mathbf{z}_g$ denote a representation of $\mathbf{y}_g$, and consider the following IB problem:
$$\max_{p(\mathbf{z}_g|\mathbf{y}_g)} \; I(\mathbf{x};\mathbf{z}_g|\mathbf{Z}_1) \tag{20a}$$
$$\text{s.t.} \quad I(\mathbf{y}_g;\mathbf{z}_g|\mathbf{Z}_1) \leq C - R(D), \tag{20b}$$
$$\qquad 2^{-\frac{C}{KM}} < D \leq 1. \tag{20c}$$
Obviously, for a given feasible $D$, problem (20) can be solved in the same way as (4) by following the steps in Appendix A. We thus have the following theorem.
Theorem 2.
For a given feasible $D$, the optimal objective function of IB problem (20) is
$$R^{\mathrm{lb1}} = T \int_{\frac{\nu}{\gamma}}^{\infty} \left[\log(1+\gamma\lambda) - \log(1+\nu)\right] f_\lambda(\lambda)\, d\lambda, \tag{21}$$
where $\gamma = \frac{1-D}{KD+\sigma^2}$; the pdf of $\lambda$, i.e., $f_\lambda(\lambda)$, is given by (A17); and $\nu$ is chosen such that the following bottleneck constraint is met:
$$\int_{\frac{\nu}{\gamma}}^{\infty} \log\frac{\gamma\lambda}{\nu}\, f_\lambda(\lambda)\, d\lambda = \frac{C - R(D)}{T}. \tag{22}$$
Proof. 
See Appendix C. □
Since, for a given $\mathbf{Z}_1$, (19) can be seen as a set of parallel scalar Gaussian sub-channels, according to ([21], (16)), the representation of $\mathbf{y}_g$, i.e., $\mathbf{z}_g$, can be constructed by adding independent fading and Gaussian noise to each element of $\mathbf{y}_g$. Denote
$$\mathbf{z}_g = \boldsymbol{\Psi}\mathbf{y}_g + \mathbf{n}_g' = \boldsymbol{\Psi}\tilde{\mathbf{U}}^H\mathbf{Z}_1\mathbf{x} + \boldsymbol{\Psi}\mathbf{n}_g + \mathbf{n}_g', \tag{23}$$
where $\boldsymbol{\Psi}$ is a diagonal matrix with nonnegative and real diagonal entries, and $\mathbf{n}_g' \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_M)$. Note that $\mathbf{y}_g$ in (19) and its representation $\mathbf{z}_g$ in (23) are only auxiliary variables. What we are really interested in is the representation of $\mathbf{y}$ and the corresponding bottleneck rate. Hence, we also add fading $\boldsymbol{\Psi}$ and Gaussian noise $\mathbf{n}_g'$ to $\tilde{\mathbf{y}}$ in (18) and obtain the following representation:
$$\mathbf{z}_2 = \boldsymbol{\Psi}\tilde{\mathbf{y}} + \mathbf{n}_g' = \boldsymbol{\Psi}\tilde{\mathbf{U}}^H\mathbf{Z}_1\mathbf{x} + \boldsymbol{\Psi}\tilde{\mathbf{U}}^H\mathbf{E}_1\mathbf{x} + \boldsymbol{\Psi}\tilde{\mathbf{U}}^H\mathbf{n} + \mathbf{n}_g'. \tag{24}$$
In the following lemma, we show that, by transmitting representations $\mathbf{z}_1$ and $\mathbf{z}_2$ to the destination node, $R^{\mathrm{lb1}}$ is an achievable lower bound to the bottleneck rate and the bottleneck constraint is satisfied.
Lemma 2.
If the representation of $\mathbf{h}$, i.e., $\mathbf{z}_1$ resulting from (8), is forwarded to the destination node for each channel realization, then with observations $\mathbf{y}$ and $\mathbf{y}_g$ in (17) and (19) and representations $\mathbf{z}_2$ and $\mathbf{z}_g$ in (24) and (23), we have
$$I(\mathbf{y};\mathbf{z}_2|\mathbf{Z}_1) \leq I(\mathbf{y}_g;\mathbf{z}_g|\mathbf{Z}_1), \tag{25}$$
$$I(\mathbf{x};\mathbf{z}_2|\mathbf{Z}_1) \geq I(\mathbf{x};\mathbf{z}_g|\mathbf{Z}_1), \tag{26}$$
where (25) indicates that $I(\mathbf{y};\mathbf{z}_2|\mathbf{Z}_1) \leq C - R(D)$ and (26) gives $I(\mathbf{x};\mathbf{z}_2|\mathbf{Z}_1) \geq R^{\mathrm{lb1}}$.
Proof. 
See Appendix D. □
Lemma 2 shows that, by representing $\mathbf{h}$ and $\tilde{\mathbf{y}}$ using $\mathbf{z}_1$ and $\mathbf{z}_2$ in (11) and (24), respectively, lower bound $R^{\mathrm{lb1}}$ is achievable and the bottleneck constraint is satisfied.
Lemma 3.
When $M \to +\infty$,
$$R^{\mathrm{lb1}} \to T\left[\log(1+\gamma M) - \log\left(1+\gamma M\, 2^{-\frac{C-R(D)}{T}}\right)\right]. \tag{27}$$
When $\rho \to +\infty$, $R^{\mathrm{lb1}}$ tends to a constant, which can be obtained by letting $\gamma = \frac{1-D}{KD}$ and using (21). In addition, when $C \to +\infty$, there exists a small $D$ such that $R^{\mathrm{lb1}}$ approaches the capacity of Channel 1, i.e.,
$$R^{\mathrm{lb1}} \to I(\mathbf{x};\mathbf{y},\mathbf{H}) = T \int_{0}^{\infty} \log(1+\rho\lambda) f_\lambda(\lambda)\, d\lambda. \tag{28}$$
Proof. 
See Appendix E. □
Remark 1.
Denote the limit in (27) by $R_0^{\mathrm{lb1}} = T[\log(1+\gamma M) - \log(1+\gamma M\, 2^{-\frac{C-R(D)}{T}})]$ for convenience. It can be readily verified that $0 \leq R_0^{\mathrm{lb1}} \leq C$. From (8), it is known that $R(D)$ is also a function of $M$. Moreover, as stated after (16), we always assume $2^{-\frac{C}{KM}} < D \leq 1$ in this section such that $C - R(D) > 0$. Hence, when $M \to +\infty$, $D$ approaches 1 and $\gamma$ tends to 0. All this makes it difficult to obtain a more concise expression for $R_0^{\mathrm{lb1}}$. We investigate the effect of $M$ on $R^{\mathrm{lb1}}$ in Section 5 by simulation.

4.2. Quantized Channel Inversion (QCI) Scheme When $K \leq M$

In our second scheme, the relay first obtains an estimate of the channel input using channel inversion and then transmits the quantized noise levels as well as the compressed noisy signal to the destination node.
In particular, we apply the pseudo-inverse matrix of $\mathbf{H}$, i.e., $(\mathbf{H}^H\mathbf{H})^{-1}\mathbf{H}^H$, to $\mathbf{y}$ and obtain the zero-forcing estimate of $\mathbf{x}$ as follows:
$$\tilde{\mathbf{x}} = (\mathbf{H}^H\mathbf{H})^{-1}\mathbf{H}^H\mathbf{y} = \mathbf{x} + (\mathbf{H}^H\mathbf{H})^{-1}\mathbf{H}^H\mathbf{n} \triangleq \mathbf{x} + \tilde{\mathbf{n}}. \tag{29}$$
For a given channel matrix $\mathbf{H}$, $\tilde{\mathbf{n}} \sim \mathcal{CN}(\mathbf{0}, \mathbf{A})$, where $\mathbf{A} = \sigma^2(\mathbf{H}^H\mathbf{H})^{-1}$. Let $\mathbf{A} = \mathbf{A}_1 + \mathbf{A}_2$, where $\mathbf{A}_1$ and $\mathbf{A}_2$, respectively, consist of the diagonal and off-diagonal elements of $\mathbf{A}$, i.e., $\mathbf{A}_1 = \mathbf{A}\odot\mathbf{I}_K$ and $\mathbf{A}_2 = \mathbf{A} - \mathbf{A}_1$. If $\mathbf{H}$ could be perfectly transmitted to the destination node, the bottleneck rate could be obtained by following steps similar to those in Appendix A. However, since $\mathbf{H}$ follows a non-degenerate continuous distribution and the bottleneck constraint is finite, as shown in the previous subsection, this is not possible. To reduce the number of bits per channel use required for informing the destination node of the channel information, we only convey a compressed version of $\mathbf{A}_1$ and consider a set of independent scalar Gaussian sub-channels.
Specifically, we force each diagonal entry of $\mathbf{A}_1$ to belong to a finite set of quantized levels by adding artificial noise, i.e., by introducing physical degradation. We fix a finite grid of $J$ positive quantization points $\mathcal{B} = \{b_1, \ldots, b_J\}$, where $b_1 \leq b_2 \leq \cdots \leq b_{J-1} < b_J$ and $b_J = +\infty$, and define the following ceiling operation:
$$\lceil a \rceil_{\mathcal{B}} = \min\{b \in \mathcal{B} : a \leq b\}. \tag{30}$$
Then, by adding a Gaussian noise vector $\tilde{\mathbf{n}}' \sim \mathcal{CN}\left(\mathbf{0}, \mathrm{diag}\left(\lceil a_1\rceil_{\mathcal{B}} - a_1, \ldots, \lceil a_K\rceil_{\mathcal{B}} - a_K\right)\right)$, which is independent of everything else, to (29), a degraded version of $\tilde{\mathbf{x}}$ can be obtained as follows:
$$\hat{\mathbf{x}} = \tilde{\mathbf{x}} + \tilde{\mathbf{n}}' = \mathbf{x} + \tilde{\mathbf{n}} + \tilde{\mathbf{n}}' \triangleq \mathbf{x} + \hat{\mathbf{n}}, \tag{31}$$
where $\hat{\mathbf{n}} \sim \mathcal{CN}(\mathbf{0}, \hat{\mathbf{A}}_1 + \mathbf{A}_2)$ for a given $\mathbf{H}$, with $\hat{\mathbf{A}}_1 \triangleq \mathrm{diag}\left(\lceil a_1\rceil_{\mathcal{B}}, \ldots, \lceil a_K\rceil_{\mathcal{B}}\right)$. Obviously, due to $\mathbf{A}_2$, the elements of noise vector $\hat{\mathbf{n}}$ are correlated.
To evaluate the bottleneck rate, we consider a new variable
$$\hat{\mathbf{x}}_g = \mathbf{x} + \hat{\mathbf{n}}_g, \tag{32}$$
where $\hat{\mathbf{n}}_g \sim \mathcal{CN}(\mathbf{0}, \hat{\mathbf{A}}_1)$. Obviously, (32) can be seen as $K$ parallel scalar Gaussian sub-channels with noise power $\lceil a_k\rceil_{\mathcal{B}}$ on each sub-channel. Since each quantized noise level $\lceil a_k\rceil_{\mathcal{B}}$ only has $J$ possible values, it is possible for the relay to inform the destination node of the channel information via the constrained link. Note that, from the definition of $\mathbf{A}$ in (29), it is known that $a_k,\, k \in \mathcal{K} \triangleq \{1, \ldots, K\}$ are correlated. The quantized noise levels $\lceil a_k\rceil_{\mathcal{B}},\, k \in \mathcal{K}$ are thus also correlated. Hence, we can jointly source-encode $\lceil a_k\rceil_{\mathcal{B}},\, k \in \mathcal{K}$ to further reduce the number of bits used for CSI transmission. For convenience, we define a space $\Xi = \{(j_1, \ldots, j_K)\,|\, j_k \in \mathcal{J},\, k \in \mathcal{K}\}$, where $\mathcal{J} = \{1, \ldots, J\}$. It is obvious that there are a total of $J^K$ points in this space. Let $\xi = (j_1, \ldots, j_K)$ denote a point in space $\Xi$ and define the following probability mass function (pmf):
$$P_\xi = \Pr\left\{\lceil a_1\rceil_{\mathcal{B}} = b_{j_1}, \ldots, \lceil a_K\rceil_{\mathcal{B}} = b_{j_K}\right\}. \tag{33}$$
The joint entropy of $\lceil a_k\rceil_{\mathcal{B}},\, k \in \mathcal{K}$, i.e., the number of bits used for jointly source-encoding $\lceil a_k\rceil_{\mathcal{B}},\, k \in \mathcal{K}$, is thus given by
$$H_{\mathrm{joint}} = -\sum_{\xi \in \Xi} P_\xi \log P_\xi. \tag{34}$$
Then, the IB problem for (32) takes on the following form:
$$\max_{p(\hat{\mathbf{z}}_g|\hat{\mathbf{x}}_g)} \; I(\mathbf{x};\hat{\mathbf{z}}_g|\hat{\mathbf{A}}_1) \tag{35a}$$
$$\text{s.t.} \quad I(\hat{\mathbf{x}}_g;\hat{\mathbf{z}}_g|\hat{\mathbf{A}}_1) \leq C - H_{\mathrm{joint}}, \tag{35b}$$
where $\hat{\mathbf{z}}_g$ is a representation of $\hat{\mathbf{x}}_g$.
Note that, as stated above, there are a total of $J^K$ points in space $\Xi$. The pmf $P_\xi$ thus has $J^K$ possible values, and it becomes difficult to obtain the joint entropy $H_{\mathrm{joint}}$ from (34) (even numerically) when $J$ or $K$ is large. To reduce the computational complexity, we consider the (slightly) suboptimal but far more practical entropy coding of each noise level $\lceil a_k\rceil_{\mathcal{B}}$ separately and obtain the following sum of individual entropies:
$$H_{\mathrm{sum}} = \sum_{k=1}^{K} H_k, \tag{36}$$
where $H_k$ denotes the entropy of $\lceil a_k\rceil_{\mathcal{B}}$, i.e., the number of bits used for informing the destination node of noise level $\lceil a_k\rceil_{\mathcal{B}}$. In Appendix F, we show that $a_k,\, k \in \mathcal{K}$ are marginally identically inverse chi-squared distributed with $M - K + 1$ degrees of freedom, with pdf given in (A44). Hence,
$$H_{\mathrm{sum}} = K H_0 = -K \sum_{j=1}^{J} P_j \log P_j, \tag{37}$$
where $P_j = \Pr\{\lceil a\rceil_{\mathcal{B}} = b_j\}$ can be obtained from (A45) and $a$ follows the same distribution as $a_k$. Since $P_j$ only has $J$ possible values, the computational complexity of calculating $H_{\mathrm{sum}}$ is proportional to $J$. Using the chain rule of entropy and the fact that conditioning reduces entropy, we know that $H_{\mathrm{joint}} \leq H_{\mathrm{sum}}$. In Section 5, the gap between $H_{\mathrm{joint}}$ and $H_{\mathrm{sum}}$ is investigated by simulation. Replacing $H_{\mathrm{joint}}$ in (35b) with $H_{\mathrm{sum}}$, we get the following IB problem:
$$\max_{p(\hat{\mathbf{z}}_g|\hat{\mathbf{x}}_g)} \; I(\mathbf{x};\hat{\mathbf{z}}_g|\hat{\mathbf{A}}_1) \tag{38a}$$
$$\text{s.t.} \quad I(\hat{\mathbf{x}}_g;\hat{\mathbf{z}}_g|\hat{\mathbf{A}}_1) \leq C - K H_0. \tag{38b}$$
The optimal solution of this problem is given in the following theorem.
Theorem 3.
If $\hat{\mathbf{A}}_1$ is conveyed to the destination node for each channel realization, the optimal objective function of IB problem (38) is
$$R^{\mathrm{lb2}} = \sum_{j=1}^{J-1} K P_j \left[\log(1+\rho_j) - \log\left(1+\rho_j 2^{-c_j}\right)\right], \tag{39}$$
where $\rho_j = \frac{1}{b_j}$, $c_j = \left[\log\frac{\rho_j}{\nu}\right]^+$, and $\nu$ is chosen such that the following bottleneck constraint is met:
$$\sum_{j=1}^{J-1} K P_j c_j = C - K H_0. \tag{40}$$
Proof. 
See Appendix F. □
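Since the closed forms (A44) and (A45) are derived in the appendix, a simple Monte Carlo estimate of $P_j$ and $H_0$ is often sufficient in practice. The sketch below is our own illustration (entropies in bits, assuming $K \leq M$): it samples diagonal entries of $\mathbf{A} = \sigma^2(\mathbf{H}^H\mathbf{H})^{-1}$ and applies the ceiling operation (30):

```python
import numpy as np

def h0_estimate(K, M, sigma2, grid, n_samples=50_000, seed=0):
    """Monte Carlo estimate of H_0 in (37): entropy (bits) of one quantized
    noise level ceil(a_k)_B, where a_k is a diagonal entry of
    A = sigma^2 (H^H H)^{-1}. `grid` holds the finite levels b_1 <= ... <= b_{J-1};
    the implicit level b_J = +inf catches the tail. Assumes K <= M.
    """
    rng = np.random.default_rng(seed)
    a = np.empty(n_samples)
    for i in range(n_samples):
        H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        A = sigma2 * np.linalg.inv(H.conj().T @ H)
        a[i] = A[0, 0].real            # one diagonal entry (all identically distributed)
    idx = np.searchsorted(grid, a)     # ceiling operation (30): smallest b_j >= a
    p = np.bincount(idx, minlength=len(grid) + 1) / n_samples
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# example: K = 2, M = 4, SNR 10 dB (sigma2 = 0.1), J = 4 levels
print(h0_estimate(2, 4, 0.1, grid=np.array([0.05, 0.1, 0.2])))
```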
Since (32) can be seen as $K$ parallel scalar Gaussian sub-channels, according to ([21], (16)), the representation of $\hat{\mathbf{x}}_g$, i.e., $\hat{\mathbf{z}}_g$, can be constructed by adding independent fading and Gaussian noise to each element of $\hat{\mathbf{x}}_g$. Denote
$$\hat{\mathbf{z}}_g = \boldsymbol{\Phi}\hat{\mathbf{x}}_g + \hat{\mathbf{n}}_g' = \boldsymbol{\Phi}\mathbf{x} + \boldsymbol{\Phi}\hat{\mathbf{n}}_g + \hat{\mathbf{n}}_g', \tag{41}$$
where $\boldsymbol{\Phi}$ is a diagonal matrix with positive and real diagonal entries, and $\hat{\mathbf{n}}_g' \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_K)$. Note that, similar to $\mathbf{y}_g$ and $\mathbf{z}_g$ in the previous subsection, $\hat{\mathbf{x}}_g$ in (32) and its representation $\hat{\mathbf{z}}_g$ in (41) are also auxiliary variables. What we are really interested in is the representation of $\hat{\mathbf{x}}$ and the corresponding bottleneck rate. Hence, we also add fading $\boldsymbol{\Phi}$ and Gaussian noise $\hat{\mathbf{n}}_g'$ to $\hat{\mathbf{x}}$ in (31) and obtain its representation as follows:
$$\mathbf{z} = \boldsymbol{\Phi}\hat{\mathbf{x}} + \hat{\mathbf{n}}_g' = \boldsymbol{\Phi}\mathbf{x} + \boldsymbol{\Phi}\hat{\mathbf{n}} + \hat{\mathbf{n}}_g'. \tag{42}$$
In the following lemma, we show that, by transmitting the quantized noise levels $\lceil a_k\rceil_{\mathcal{B}},\, k \in \mathcal{K}$ and representation $\mathbf{z}$ to the destination node, $R^{\mathrm{lb2}}$ is an achievable lower bound to the bottleneck rate and the bottleneck constraint is satisfied.
Lemma 4.
If $\hat{\mathbf{A}}_1$ is forwarded to the destination node for each channel realization, then with signal vectors $\hat{\mathbf{x}}$ and $\hat{\mathbf{x}}_g$ in (31) and (32) and their representations $\mathbf{z}$ and $\hat{\mathbf{z}}_g$ in (42) and (41), we have
$$I(\hat{\mathbf{x}};\mathbf{z}|\hat{\mathbf{A}}_1) \leq I(\hat{\mathbf{x}}_g;\hat{\mathbf{z}}_g|\hat{\mathbf{A}}_1), \tag{43}$$
$$I(\mathbf{x};\mathbf{z}|\hat{\mathbf{A}}_1) \geq I(\mathbf{x};\hat{\mathbf{z}}_g|\hat{\mathbf{A}}_1), \tag{44}$$
where (43) indicates that $I(\hat{\mathbf{x}};\mathbf{z}|\hat{\mathbf{A}}_1) \leq C - K H_0$ and (44) gives $I(\mathbf{x};\mathbf{z}|\hat{\mathbf{A}}_1) \geq R^{\mathrm{lb2}}$.
Proof. 
See Appendix G. □
Lemma 5.
When M + or ρ + , we can always find a sequence of quantization points B = { b 1 , , b J } such that R l b 2 C . When C + ,
R l b 2 K E log 1 + 1 a I ( x ; y , H ) ,
where the expectation can be calculated by using the pdf of a in (A44) and I ( x ; y , H ) is the capacity of Channel 1.
Proof. 
See Appendix H. □
For the sake of simplicity, we may choose the quantization levels as quantiles such that we obtain the uniform pmf $P_j = \frac{1}{J}$. The lower bound (39) can thus be simplified as
$$R^{\mathrm{lb2}} = \sum_{j=1}^{J-1} \frac{K}{J} \left[\log(1+\rho_j) - \log\left(1+\rho_j 2^{-c_j}\right)\right], \tag{46}$$
and the bottleneck constraint (40) becomes
$$\sum_{j=1}^{J-1} \left[\log\frac{\rho_j}{\nu}\right]^+ = \frac{JC}{K} - JB, \tag{47}$$
where $B = \log J$ can be seen as the number of bits required for quantizing each diagonal entry of $\mathbf{A}_1$. Since $\rho_1 \geq \cdots \geq \rho_{J-1}$, from the strict convexity of the problem, we know that there must exist a unique integer $1 \leq l \leq J-1$ such that [28]
$$\sum_{j=1}^{l} \log\frac{\rho_j}{\nu} = \frac{JC}{K} - JB, \quad \rho_j \leq \nu, \; l+1 \leq j \leq J-1. \tag{48}$$
Hence, $\nu$ can be obtained from
$$\log\nu = \frac{1}{l}\sum_{j=1}^{l}\log\rho_j - \frac{JC}{lK} + \frac{JB}{l}, \tag{49}$$
and $R^{\mathrm{lb2}}$ can be calculated as follows:
$$R^{\mathrm{lb2}} = \sum_{j=1}^{l} \frac{K}{J} \left[\log(1+\rho_j) - \log(1+\nu)\right]. \tag{50}$$
Then, we only need to test the above condition for $l = 1, 2, 3, \ldots$ until (48) is satisfied. Note that, to ensure $R^{\mathrm{lb2}} > 0$, $\frac{JC}{K} - JB$ in (47) has to be positive, i.e., $B < \frac{C}{K}$. Moreover, though choosing the quantization levels as quantiles makes it easier to calculate $R^{\mathrm{lb2}}$, the results in Lemma 5 may not hold in this case since the choice of quantization points $\mathcal{B} = \{b_1, \ldots, b_J\}$ is restricted.
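Under the stated assumptions (quantile levels, logs in bits), the search for $l$ in (48)-(50) can be implemented directly. The sketch below is illustrative only, with our own function name and example numbers:

```python
import numpy as np

def r_lb2_quantiles(rho_levels, K, C):
    """R_lb2 under the uniform pmf P_j = 1/J, eqs. (46)-(50) (logs in bits).

    rho_levels: the J-1 finite values rho_j = 1/b_j for quantile levels b_j,
    sorted here so that rho_1 >= ... >= rho_{J-1}. B = log2(J) bits per level.
    """
    rho = np.sort(np.asarray(rho_levels, float))[::-1]
    J = rho.size + 1
    B = np.log2(J)
    assert B < C / K, "need B < C/K for a positive rate"
    budget = J * C / K - J * B                 # right-hand side of (47)
    for l in range(1, J):
        log_nu = np.log2(rho[:l]).sum() / l - budget / l   # water level, (49)
        nu = 2.0 ** log_nu
        # condition (48): exactly the first l levels are active
        if rho[l - 1] >= nu and (l == J - 1 or rho[l] <= nu):
            return (K / J) * (np.log2(1 + rho[:l]) - np.log2(1 + nu)).sum()
    return 0.0

# example: J = 4 levels (B = 2 bits), K = 2, C = 12 bits
print(r_lb2_quantiles([20.0, 8.0, 3.0], K=2, C=12.0))
```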

4.3. Truncated Channel Inversion (TCI) Scheme When $K \leq M$

Both the NDT and QCI schemes proposed in the preceding two subsections require the relay to transmit partial CSI to the destination node. Specifically, in the NDT scheme, channel matrix $\mathbf{H}$ is compressed and conveyed to the destination node, so the number of channel uses required for transmitting the compressed $\mathbf{H}$ is proportional to $K$ and $M$. In contrast, the number of bits required for transmitting the quantized noise levels in the QCI scheme is proportional to $K$ and $B$. Due to the bottleneck constraint, the performances of the NDT and QCI schemes are thus sensitive to the MIMO channel dimension, especially $K$. To ensure good performance when the channel dimension is large, in this subsection, the relay first estimates $\mathbf{x}$ using channel inversion and then transmits a truncated representation of the estimate to the destination node.
In particular, as in the previous subsection, we first obtain the zero-forcing estimate of $\mathbf{x}$ using channel inversion, i.e.,
$$\tilde{\mathbf{x}} = (\mathbf{H}^H\mathbf{H})^{-1}\mathbf{H}^H\mathbf{y} = \mathbf{x} + (\mathbf{H}^H\mathbf{H})^{-1}\mathbf{H}^H\mathbf{n}. \tag{51}$$
As given in Appendix A, the unordered eigenvalues of $\mathbf{H}^H\mathbf{H}$ are $\lambda_k,\, k \in \mathcal{K}$. Let $\lambda_{\min} = \min\{\lambda_k,\, k \in \mathcal{K}\}$. Note that, though the interfering terms can be nulled out by a zero-forcing equalizer, the noise may be greatly amplified when the channel is noisy. Therefore, we put a threshold $\lambda_{\mathrm{th}}$ on $\lambda_{\min}$ such that zero capacity is allocated for states with $\lambda_{\min} < \lambda_{\mathrm{th}}$.
Specifically, when $\lambda_{\min} < \lambda_{\mathrm{th}}$, the relay does not transmit the observation, while when $\lambda_{\min} \geq \lambda_{\mathrm{th}}$, the relay takes $\tilde{\mathbf{x}}$ as the new observation and transmits a compressed version of $\tilde{\mathbf{x}}$ to the destination node. The information about whether the observation is transmitted is encoded into a 0-1 sequence and is also sent to the destination node. Then, we need to solve the source coding problem at the relay, i.e., encoding blocks of $\tilde{\mathbf{x}}$ when $\lambda_{\min} \geq \lambda_{\mathrm{th}}$. For convenience, we use $\Delta$ to denote the event “$\lambda_{\min} \geq \lambda_{\mathrm{th}}$”. Here, we choose $p(\mathbf{z}|\tilde{\mathbf{x}}, \Delta)$ to be a conditional Gaussian distribution:
$$\mathbf{z} = \begin{cases} \tilde{\mathbf{x}} + \mathbf{q}, & \text{if } \Delta, \\ \emptyset, & \text{otherwise}, \end{cases} \tag{52}$$
where $\mathbf{q} \sim \mathcal{CN}(\mathbf{0}, D\mathbf{I}_K)$ is independent of the other variables. It can easily be found from (52) that $I(\mathbf{x};\mathbf{z}|\lambda_{\min} < \lambda_{\mathrm{th}}) = 0$ and $I(\tilde{\mathbf{x}};\mathbf{z}|\lambda_{\min} < \lambda_{\mathrm{th}}) = 0$. Hence, we consider the following modified IB problem:
$$\max_{D} \; P_{\mathrm{th}}\, I(\mathbf{x};\mathbf{z}|\Delta) \tag{53a}$$
$$\text{s.t.} \quad P_{\mathrm{th}}\, I(\tilde{\mathbf{x}};\mathbf{z}|\Delta) \leq C - H_{\mathrm{th}}, \tag{53b}$$
where $P_{\mathrm{th}} = \Pr\{\Delta\}$ and $H_{\mathrm{th}}$ is the binary entropy function with parameter $P_{\mathrm{th}}$.
Since we assume $K \leq M$ in this subsection, as stated in Appendix A, $\mathbf{H}^H\mathbf{H} \sim \mathcal{CW}_K(M, \mathbf{I}_K)$. Then, according to ([29], Proposition 2.6) and ([29], Proposition 4.7), $P_{\mathrm{th}}$ is given by
$$P_{\mathrm{th}} = \frac{\det(\boldsymbol{\psi})}{\prod_{k=1}^{K}(M-k)!\,\prod_{k=1}^{K}(K-k)!}, \tag{54}$$
where
$$\boldsymbol{\psi} = \begin{pmatrix} \psi_0 & \cdots & \psi_{K-1} \\ \vdots & \ddots & \vdots \\ \psi_{K-1} & \cdots & \psi_{2K-2} \end{pmatrix} = \left(\psi_{i+j-2}\right)_{i,j}, \quad \psi_{i+j-2} = \int_{\lambda_{\mathrm{th}}}^{\infty} \mu^{M-K+i+j-2} e^{-\mu}\, d\mu. \tag{55}$$
When $K = M$, using ([30], Theorem 3.2), a more concise expression of $P_{\mathrm{th}}$ can be obtained as follows:
$$P_{\mathrm{th}} = \int_{2\lambda_{\mathrm{th}}}^{\infty} \frac{K}{2} e^{-\mu K/2}\, d\mu = e^{-\lambda_{\mathrm{th}} K}. \tag{56}$$
Note that, in (56), the lower limit of the integral is $2\lambda_{\mathrm{th}}$ rather than $\lambda_{\mathrm{th}}$. This is because, in this paper, the elements of $\mathbf{H}$ are assumed to be i.i.d. zero-mean unit-variance complex Gaussian random variables, while in [30], the real and imaginary parts of the elements of $\mathbf{H}$ are independent standard normal variables.
Given condition $\Delta$, let $\tilde{\mathbf{x}}_g$ denote a zero-mean circularly symmetric complex Gaussian random vector with the same second moment as $\tilde{\mathbf{x}}$, i.e., $\tilde{\mathbf{x}}_g \sim \mathcal{CN}(\mathbf{0}, \mathbb{E}[\tilde{\mathbf{x}}\tilde{\mathbf{x}}^H|\Delta])$, and let $\tilde{\mathbf{z}}_g = \tilde{\mathbf{x}}_g + \mathbf{q}$. $P_{\mathrm{th}} I(\tilde{\mathbf{x}};\mathbf{z}|\Delta)$ is then achievable if $P_{\mathrm{th}} I(\tilde{\mathbf{x}}_g;\tilde{\mathbf{z}}_g|\Delta) \leq C - H_{\mathrm{th}}$. Hence, let
$$P_{\mathrm{th}}\, I(\tilde{\mathbf{x}}_g;\tilde{\mathbf{z}}_g|\Delta) = P_{\mathrm{th}} \log\det\left(\mathbf{I}_K + \frac{1}{D}\mathbb{E}\left[\tilde{\mathbf{x}}\tilde{\mathbf{x}}^H|\Delta\right]\right) = C - H_{\mathrm{th}}. \tag{57}$$
To calculate $D$ from (57), we denote the eigendecomposition of $\mathbf{H}^H\mathbf{H}$ by $\mathbf{V}\tilde{\boldsymbol{\Lambda}}\mathbf{V}^H$, where $\mathbf{V}$ is a unitary matrix whose columns are the eigenvectors of $\mathbf{H}^H\mathbf{H}$, $\tilde{\boldsymbol{\Lambda}}$ is a diagonal matrix whose diagonal elements are the unordered eigenvalues $\lambda_k,\, k \in \mathcal{K}$, and $\mathbf{V}$ and $\tilde{\boldsymbol{\Lambda}}$ are independent. Then, from (51),
$$\mathbb{E}\left[\tilde{\mathbf{x}}\tilde{\mathbf{x}}^H|\Delta\right] = \mathbf{I}_K + \sigma^2\,\mathbb{E}\left[(\mathbf{H}^H\mathbf{H})^{-1}|\Delta\right] = \mathbf{I}_K + \sigma^2\,\mathbb{E}\left[\mathbf{V}\tilde{\boldsymbol{\Lambda}}^{-1}\mathbf{V}^H|\Delta\right] = \left(1 + \sigma^2\,\mathbb{E}\left[\frac{1}{\lambda}\Big|\Delta\right]\right)\mathbf{I}_K. \tag{58}$$
Based on [31], the joint pdf of the unordered eigenvalues $\lambda_k,\, k \in \mathcal{K}$ under condition $\Delta$ is given by
$$f(\lambda_1, \ldots, \lambda_K|\Delta) = \frac{1}{P_{\mathrm{th}} K!}\prod_{i=1}^{K}\frac{e^{-\lambda_i}\lambda_i^{M-K}}{(K-i)!(M-i)!}\prod_{i<j}^{K}(\lambda_i - \lambda_j)^2. \tag{59}$$
The marginal pdf of one of the eigenvalues can thus be obtained by integrating out all the other eigenvalues. Taking $\lambda_1$ for example, we have
$$f_{\lambda_1}(\lambda_1|\Delta) = \int_{\lambda_{\mathrm{th}}}^{\infty}\cdots\int_{\lambda_{\mathrm{th}}}^{\infty} f(\lambda_1, \ldots, \lambda_K|\Delta)\, d\lambda_2\cdots d\lambda_K. \tag{60}$$
Then,
$$\mathbb{E}\left[\frac{1}{\lambda}\Big|\Delta\right] = \mathbb{E}\left[\frac{1}{\lambda_1}\Big|\Delta\right] = \int_{\lambda_{\mathrm{th}}}^{\infty}\frac{1}{\lambda_1}\, f_{\lambda_1}(\lambda_1|\Delta)\, d\lambda_1. \tag{61}$$
Combining (57), (58), and (61), $D$ can be calculated as follows:
$$D = \frac{1 + \sigma^2\,\mathbb{E}\left[\frac{1}{\lambda}\big|\Delta\right]}{2^{\frac{C-H_{\mathrm{th}}}{P_{\mathrm{th}} K}} - 1}. \tag{62}$$
Remark 2.
Note that we show in Appendix I that, when $K = M$ and $\lambda_{\mathrm{th}} = 0$, the integral in (61) diverges; $\mathbb{E}[\frac{1}{\lambda}|\Delta]$ thus does not exist in this case. Therefore, unless otherwise specified, the results derived in this subsection are for the cases with $K = M$ and $\lambda_{\mathrm{th}} > 0$ or with $K < M$ and $\lambda_{\mathrm{th}} \geq 0$.
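For the case $K = M$ with $\lambda_{\mathrm{th}} > 0$, the quantities $P_{\mathrm{th}}$, $\mathbb{E}[\frac{1}{\lambda}|\Delta]$, and $D$ in (62) can also be estimated by Monte Carlo instead of evaluating (54)-(61). A minimal sketch (names and parameters ours, logs in bits):

```python
import numpy as np

def tci_parameters(K, M, sigma2, C, lam_th, n_samples=200_000, seed=0):
    """Monte Carlo sketch of P_th, E[1/lambda | Delta], and D in (62).

    Delta is the event lambda_min >= lam_th; H_th is the binary entropy of
    P_th; logs in bits. Requires lam_th > 0 when K == M (cf. Remark 2).
    """
    rng = np.random.default_rng(seed)
    H = (rng.standard_normal((n_samples, M, K))
         + 1j * rng.standard_normal((n_samples, M, K))) / np.sqrt(2)
    lam = np.linalg.eigvalsh(np.swapaxes(H.conj(), 1, 2) @ H)  # (n, K), ascending
    keep = lam[:, 0] >= lam_th                 # event Delta
    p_th = keep.mean()
    h_th = 0.0 if p_th in (0.0, 1.0) else \
        -(p_th * np.log2(p_th) + (1 - p_th) * np.log2(1 - p_th))
    inv_mean = (1.0 / lam[keep]).mean()        # E[1/lambda | Delta]
    d = (1 + sigma2 * inv_mean) / (2 ** ((C - h_th) / (p_th * K)) - 1)
    return p_th, inv_mean, d

# example: K = M = 2, SNR 10 dB, C = 8 bits, threshold lam_th = 0.1
print(tci_parameters(2, 2, 0.1, 8.0, 0.1))
```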
With (57), rate $P_{\mathrm{th}} I(\tilde{\mathbf{x}}_g;\tilde{\mathbf{z}}_g|\Delta)$ is achievable. Due to the fact that the Gaussian input maximizes the mutual information of a Gaussian additive noise channel, we have $I(\tilde{\mathbf{x}};\mathbf{z}|\Delta) \leq I(\tilde{\mathbf{x}}_g;\tilde{\mathbf{z}}_g|\Delta)$. $P_{\mathrm{th}} I(\tilde{\mathbf{x}};\mathbf{z}|\Delta)$ is thus also achievable.
The next step is to evaluate the resulting achievable bottleneck rate, i.e., $I(\mathbf{x};\mathbf{z})$. To this end, we first obtain the following lower bound to $I(\mathbf{x};\mathbf{z}|\Delta)$ from the fact that conditioning reduces differential entropy:
$$I(\mathbf{x};\mathbf{z}|\Delta) = h(\mathbf{z}|\Delta) - h(\mathbf{z}|\mathbf{x}, \Delta) \geq h(\mathbf{z}|\mathbf{H}, \Delta) - h(\mathbf{z}|\mathbf{x}, \Delta). \tag{63}$$
Then, we evaluate the differential entropies $h(\mathbf{z}|\mathbf{H}, \Delta)$ and $h(\mathbf{z}|\mathbf{x}, \Delta)$. From (51) and (52), it is known that $\mathbf{z}$ is conditionally Gaussian given $\mathbf{H}$ and $\Delta$. Hence,
$$h(\mathbf{z}|\mathbf{H}, \Delta) = \mathbb{E}\left[\log\left((\pi e)^K \det\left(\mathbf{I}_K + \sigma^2(\mathbf{H}^H\mathbf{H})^{-1} + D\mathbf{I}_K\right)\right)\Big|\Delta\right] = \mathbb{E}\left[\log\left((\pi e)^K \det\left(\mathbf{I}_K + \sigma^2\tilde{\boldsymbol{\Lambda}}^{-1} + D\mathbf{I}_K\right)\right)\Big|\Delta\right] = K\,\mathbb{E}\left[\log\left(\pi e\left(1 + D + \frac{\sigma^2}{\lambda}\right)\right)\Big|\Delta\right]. \tag{64}$$
On the other hand, using the fact that the Gaussian distribution maximizes the entropy over all distributions with the same variance ([27], Theorem 8.6.5), we have
$$h(\mathbf{z}|\mathbf{x}, \Delta) = h(\mathbf{z} - \mathbf{x}|\Delta) = h\left((\mathbf{H}^H\mathbf{H})^{-1}\mathbf{H}^H\mathbf{n} + \mathbf{q}\,\big|\,\Delta\right) \leq \log\left((\pi e)^K \det\left(\sigma^2\,\mathbb{E}\left[(\mathbf{H}^H\mathbf{H})^{-1}|\Delta\right] + D\mathbf{I}_K\right)\right) = K \log\left(\pi e\left(D + \sigma^2\,\mathbb{E}\left[\frac{1}{\lambda}\Big|\Delta\right]\right)\right). \tag{65}$$
Substituting (64) and (65) into (63), we can obtain a lower bound to $I(\mathbf{x};\mathbf{z})$, as shown in the following theorem.
Theorem 4.
When $K \leq M$, with truncated channel inversion, a lower bound to $I(\mathbf{x};\mathbf{z})$ can be obtained as follows:
$$R^{\mathrm{lb3}} = P_{\mathrm{th}} K\,\mathbb{E}\left[\log\left(1 + D + \frac{\sigma^2}{\lambda}\right)\Big|\Delta\right] - P_{\mathrm{th}} K \log\left(D + \sigma^2\,\mathbb{E}\left[\frac{1}{\lambda}\Big|\Delta\right]\right), \tag{66}$$
where $P_{\mathrm{th}}$ and $D$ are, respectively, given in (54) and (62), and the expectations can be calculated by using pdf (60).
Lemma 6.
Using Jensen’s inequality on the convex function $\log(1 + 1/x)$ and the concave function $\log x$, we can get a lower bound to $R^{\mathrm{lb3}}$, i.e.,
$$\check{R}^{\mathrm{lb3}} = P_{\mathrm{th}} K \log\left(1 + D + \frac{\sigma^2}{\mathbb{E}[\lambda|\Delta]}\right) - P_{\mathrm{th}} K \log\left(D + \sigma^2\,\mathbb{E}\left[\frac{1}{\lambda}\Big|\Delta\right]\right), \tag{67}$$
and an upper bound to $R^{\mathrm{lb3}}$, i.e.,
$$\hat{R}^{\mathrm{lb3}} = P_{\mathrm{th}} K \log\left(1 + D + \sigma^2\,\mathbb{E}\left[\frac{1}{\lambda}\Big|\Delta\right]\right) - P_{\mathrm{th}} K \log\left(D + \sigma^2\,\mathbb{E}\left[\frac{1}{\lambda}\Big|\Delta\right]\right). \tag{68}$$
Remark 3.
Obviously, $\check{R}^{\mathrm{lb3}}$ is also a lower bound to $I(\mathbf{x};\mathbf{z})$. $\hat{R}^{\mathrm{lb3}}$, in contrast, is not an upper bound to $I(\mathbf{x};\mathbf{z})$ since it is derived from the lower bound $R^{\mathrm{lb3}}$. However, we can assess how good the lower bounds $R^{\mathrm{lb3}}$ and $\check{R}^{\mathrm{lb3}}$ are by comparing them with $\hat{R}^{\mathrm{lb3}}$.
Lemma 7.
When $M \to +\infty$, $R^{\mathrm{lb3}}$, $\check{R}^{\mathrm{lb3}}$, and $\hat{R}^{\mathrm{lb3}}$ all tend asymptotically to $C$. When $\rho \to +\infty$, they all tend asymptotically to $C - H_{\mathrm{th}}$. In addition, when $C \to +\infty$, $R^{\mathrm{lb3}}$, $\check{R}^{\mathrm{lb3}}$, and $\hat{R}^{\mathrm{lb3}}$ all approach constants, which can be obtained by setting $D = 0$ in (66)–(68), respectively.
Proof. 
See Appendix J. □
When $K < M$ and $\lambda_{\mathrm{th}} = 0$, it is obvious that $P_{\mathrm{th}} = 1$, $H_{\mathrm{th}} = 0$, and $\mathbb{E}[\lambda] = M$. Since $\mathbf{H}^H\mathbf{H} \sim \mathcal{CW}_K(M, \mathbf{I}_K)$, $(\mathbf{H}^H\mathbf{H})^{-1}$ follows a complex inverse Wishart distribution. Hence, $\mathbb{E}[\frac{1}{\lambda}] = \frac{1}{M-K}$. Then, from Theorem 4 and Lemma 6, we have the following lemma.
Lemma 8.
When $K < M$ and $\lambda_{\mathrm{th}} = 0$,
$$R^{\mathrm{lb3}} = K\,\mathbb{E}\left[\log\left(1 + D + \frac{\sigma^2}{\lambda}\right)\right] - K \log\left(D + \frac{\sigma^2}{M-K}\right), \tag{69}$$
$$\check{R}^{\mathrm{lb3}} = K \log\left(1 + D + \frac{\sigma^2}{M}\right) - K \log\left(D + \frac{\sigma^2}{M-K}\right), \tag{70}$$
and
$$\hat{R}^{\mathrm{lb3}} = K \log\left(1 + D + \frac{\sigma^2}{M-K}\right) - K \log\left(D + \frac{\sigma^2}{M-K}\right), \tag{71}$$
where
$$D = \frac{1 + \frac{\sigma^2}{M-K}}{2^{\frac{C}{K}} - 1}. \tag{72}$$
Remark 4.
When $K < M$, $\lambda_{\mathrm{th}} = 0$, and $\frac{\sigma^2}{M-K}$ is small (e.g., when $\rho$ is large, i.e., $\sigma^2$ is small, or when $M - K$ is large), $\hat{R}^{\mathrm{lb3}} - \check{R}^{\mathrm{lb3}} \to 0$. In this case, $\check{R}^{\mathrm{lb3}}$ is close to $\hat{R}^{\mathrm{lb3}}$ and is thus also close to $R^{\mathrm{lb3}}$. Then, we can use $\check{R}^{\mathrm{lb3}}$ instead of $R^{\mathrm{lb3}}$ to lower bound $I(\mathbf{x};\mathbf{z})$ since it has a more concise expression.
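A minimal sketch of the closed forms (70)-(72) (our own code, logs in bits), which is handy given Remark 4:

```python
import numpy as np

def lemma8_bounds(K, M, sigma2, C):
    """Closed-form bounds of Lemma 8 (K < M, lambda_th = 0), logs in bits."""
    assert K < M
    d = (1 + sigma2 / (M - K)) / (2 ** (C / K) - 1)          # eq. (72)
    common = np.log2(d + sigma2 / (M - K))
    r_low = K * (np.log2(1 + d + sigma2 / M) - common)       # eq. (70)
    r_up = K * (np.log2(1 + d + sigma2 / (M - K)) - common)  # eq. (71)
    return r_low, r_up

# example: K = 2, M = 8, 10 dB SNR, C = 8 bits; both values are close,
# illustrating Remark 4 when sigma^2/(M-K) is small
print(lemma8_bounds(K=2, M=8, sigma2=0.1, C=8.0))
```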

4.4. MMSE Estimate at the Relay

In this subsection, we assume that the relay first produces the MMSE estimate of $\mathbf{x}$ given $(\mathbf{y}, \mathbf{H})$ and then source-encodes this estimate.
Denote
$$\mathbf{F} = \left(\mathbf{H}\mathbf{H}^H + \sigma^2\mathbf{I}_M\right)^{-1}\mathbf{H}. \tag{73}$$
The MMSE estimate of $\mathbf{x}$ is thus given by
$$\bar{\mathbf{x}} = \mathbf{F}^H\mathbf{y} = \mathbf{F}^H\mathbf{H}\mathbf{x} + \mathbf{F}^H\mathbf{n}. \tag{74}$$
Then, we consider the following modified IB problem:
$$\max_{p(\mathbf{z}|\bar{\mathbf{x}})} \; I(\mathbf{x};\mathbf{z}) \tag{75a}$$
$$\text{s.t.} \quad I(\bar{\mathbf{x}};\mathbf{z}) \leq C. \tag{75b}$$
Note that, since matrix $\mathbf{H}\mathbf{H}^H + \sigma^2\mathbf{I}_M$ in (73) is always invertible, the results obtained in this subsection hold regardless of whether $K \leq M$ or $K > M$.
Analogous to the previous subsection, we define
$$\mathbf{z} = \bar{\mathbf{x}} + \mathbf{q}, \quad \bar{\mathbf{x}}_g \sim \mathcal{CN}\left(\mathbf{0}, \mathbb{E}\left[\bar{\mathbf{x}}\bar{\mathbf{x}}^H\right]\right), \quad \bar{\mathbf{z}}_g = \bar{\mathbf{x}}_g + \mathbf{q}, \tag{76}$$
where $\mathbf{q}$ has the same definition as in (52), and
$$\mathbb{E}\left[\bar{\mathbf{x}}\bar{\mathbf{x}}^H\right] = \mathbb{E}\left[\mathbf{F}^H\mathbf{H}\mathbf{H}^H\mathbf{F} + \sigma^2\mathbf{F}^H\mathbf{F}\right]. \tag{77}$$
Let
$$I(\bar{\mathbf{x}}_g;\bar{\mathbf{z}}_g) = \log\det\left(\mathbf{I}_K + \frac{\mathbb{E}\left[\bar{\mathbf{x}}\bar{\mathbf{x}}^H\right]}{D}\right) = C. \tag{78}$$
Then, rate $I(\bar{\mathbf{x}}_g;\bar{\mathbf{z}}_g)$ is achievable and $D$ can be calculated from (78). Since $I(\bar{\mathbf{x}};\mathbf{z}) \leq I(\bar{\mathbf{x}}_g;\bar{\mathbf{z}}_g)$, $I(\bar{\mathbf{x}};\mathbf{z})$ is thus also achievable.
In the following, we obtain a lower bound to $I(\mathbf{x};\mathbf{z})$ by evaluating $h(\mathbf{z}|\mathbf{H})$ and $h(\mathbf{z}|\mathbf{x})$ separately and then by using
$$I(\mathbf{x};\mathbf{z}) = h(\mathbf{z}) - h(\mathbf{z}|\mathbf{x}) \geq h(\mathbf{z}|\mathbf{H}) - h(\mathbf{z}|\mathbf{x}). \tag{79}$$
First, since $\mathbf{z}$ is conditionally Gaussian given $\mathbf{H}$, we have
$$h(\mathbf{z}|\mathbf{H}) = \mathbb{E}\left[\log\left((\pi e)^K \det\left(\mathbf{F}^H\mathbf{H}\mathbf{H}^H\mathbf{F} + \sigma^2\mathbf{F}^H\mathbf{F} + D\mathbf{I}_K\right)\right)\right]. \tag{80}$$
Next, based on the fact that conditioning reduces differential entropy and that the Gaussian distribution maximizes the entropy over all distributions with the same variance [32], we have
$$h(\mathbf{z}|\mathbf{x}) = h\left(\mathbf{z} - \mathbb{E}(\mathbf{z}|\mathbf{x})\,\big|\,\mathbf{x}\right) = h\left(\left(\mathbf{F}^H\mathbf{H} - \mathbb{E}\left[\mathbf{F}^H\mathbf{H}\right]\right)\mathbf{x} + \mathbf{F}^H\mathbf{n} + \mathbf{q}\,\Big|\,\mathbf{x}\right) \leq h\left(\left(\mathbf{F}^H\mathbf{H} - \mathbb{E}\left[\mathbf{F}^H\mathbf{H}\right]\right)\mathbf{x} + \mathbf{F}^H\mathbf{n} + \mathbf{q}\right) \leq \log\left((\pi e)^K \det(\mathbf{G})\right), \tag{81}$$
where
$$\mathbf{G} = \mathbb{E}\left[\left(\mathbf{F}^H\mathbf{H} - \mathbb{E}\left[\mathbf{F}^H\mathbf{H}\right]\right)\left(\mathbf{H}^H\mathbf{F} - \mathbb{E}\left[\mathbf{H}^H\mathbf{F}\right]\right)\right] + \sigma^2\,\mathbb{E}\left[\mathbf{F}^H\mathbf{F}\right] + D\mathbf{I}_K = \mathbb{E}\left[\mathbf{F}^H\mathbf{H}\mathbf{H}^H\mathbf{F}\right] - \mathbb{E}\left[\mathbf{F}^H\mathbf{H}\right]\mathbb{E}\left[\mathbf{H}^H\mathbf{F}\right] + \sigma^2\,\mathbb{E}\left[\mathbf{F}^H\mathbf{F}\right] + D\mathbf{I}_K. \tag{82}$$
Combining (79)–(81), we can get a lower bound to $I(\mathbf{x};\mathbf{z})$, as shown in the following theorem.
Theorem 5.
With the MMSE estimate at the relay, a lower bound to $I(\mathbf{x};\mathbf{z})$ can be obtained as follows:
$$R^{\mathrm{lb4}} = T\,\mathbb{E}\left[\log\left(\frac{\lambda}{\lambda+\sigma^2} + D\right)\right] + (K-T)\log D - K \log\left(\frac{T}{K}\mathbb{E}\left[\frac{\lambda}{\lambda+\sigma^2}\right] - \frac{T^2}{K^2}\left(\mathbb{E}\left[\frac{\lambda}{\lambda+\sigma^2}\right]\right)^2 + D\right), \tag{83}$$
where
$$D = \frac{\frac{T}{K}\,\mathbb{E}\left[\frac{\lambda}{\lambda+\sigma^2}\right]}{2^{\frac{C}{K}} - 1}, \tag{84}$$
and the expectations can be calculated by using the pdf of $\lambda$ in (A17).
Proof. 
See Appendix K. □
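For a numerical evaluation of Theorem 5, the expectations over $\lambda$ can again be replaced by Monte Carlo averages over Wishart eigenvalues. The following sketch is illustrative only (names ours, logs in bits):

```python
import numpy as np

def r_lb4(K, M, sigma2, C, n_samples=100_000, seed=0):
    """Monte Carlo evaluation of R_lb4 in Theorem 5 (logs in bits)."""
    rng = np.random.default_rng(seed)
    T = min(K, M)
    H = (rng.standard_normal((n_samples, M, K))
         + 1j * rng.standard_normal((n_samples, M, K))) / np.sqrt(2)
    lam = np.linalg.eigvalsh(np.swapaxes(H.conj(), 1, 2) @ H)
    lam = lam[lam > 1e-12]                      # the T positive eigenvalues
    g = lam / (lam + sigma2)                    # samples of lambda/(lambda+sigma^2)
    e_g = g.mean()
    d = (T / K) * e_g / (2 ** (C / K) - 1)      # eq. (84)
    return (T * np.log2(g + d).mean()           # eq. (83)
            + (K - T) * np.log2(d)
            - K * np.log2((T / K) * e_g - (T / K) ** 2 * e_g ** 2 + d))

print(r_lb4(K=2, M=4, sigma2=0.1, C=8.0))       # example: 10 dB SNR
```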
Lemma 9.
When $M \to +\infty$, or when $K \leq M$ and $\rho \to +\infty$, lower bound $R^{\mathrm{lb4}}$ tends asymptotically to $C$. When $K \leq M$ and $C \to +\infty$,
$$R^{\mathrm{lb4}} \to K\,\mathbb{E}\left[\log\frac{\lambda}{\lambda+\sigma^2}\right] - K \log\left(\mathbb{E}\left[\frac{\lambda}{\lambda+\sigma^2}\right] - \left(\mathbb{E}\left[\frac{\lambda}{\lambda+\sigma^2}\right]\right)^2\right). \tag{85}$$
Proof. 
See Appendix L. □

5. Numerical Results

In this section, we evaluate the lower bounds obtained by the different achievable schemes proposed in Section 4 and compare them with the upper bound derived in Section 3. Before showing the numerical results, we first give the following lemma, which compares the bottleneck rate of the NDT scheme with those of the other three schemes in the $C \to +\infty$ case.
Lemma 10.
When $C \to +\infty$, the NDT scheme outperforms the other three schemes, i.e.,
$$R^{\mathrm{lb1}} \geq \max\left\{R^{\mathrm{lb2}}, R^{\mathrm{lb3}}, R^{\mathrm{lb4}}\right\}. \tag{86}$$
Proof. 
See Appendix M. □
Remark 5.
Besides the proof in Appendix M, we can also explain Lemma 10 from a more intuitive perspective. When $C \to +\infty$, the destination node can obtain perfect $\mathbf{y}$ and $\mathbf{H}$ from the relay by using the NDT scheme. The bottleneck rate is thus determined by the capacity of Channel 1. In the QCI scheme, though the destination node can obtain the perfect signal vector and noise power of each channel, the correlation between the elements of the noise vector is neglected since the off-diagonal entries of $\mathbf{A}$ are not considered. The bottleneck rate obtained by the QCI scheme is thus upper bounded by the capacity of Channel 1. As for the TCI and MMSE schemes, the destination node can obtain perfect $\tilde{\mathbf{x}}$ or $\bar{\mathbf{x}}$ from the relay. However, the bottleneck rate in these two cases is not only affected by the capacity of Channel 1 but is also limited by the performance of the zero-forcing or MMSE estimation, since the estimation inevitably incurs a loss of information. Hence, the NDT scheme has a better performance when $C \to +\infty$.
In the following, we give the numerical results. Note that, when performing the QCI scheme, we choose the quantization levels as quantiles for the sake of convenience.
Figure 2 depicts $R^{\mathrm{lb1}}$ versus distortion $D$ under different configurations of SNR $\rho$. It can be found from this figure that $R^{\mathrm{lb1}}$ first increases and then decreases with $D$. It is thus important to find a good $D$ to maximize $R^{\mathrm{lb1}}$. Since it is difficult to obtain an explicit expression of (21), it is not easy to strictly analyze the relationship between $R^{\mathrm{lb1}}$ and $D$. However, we can intuitively explain Figure 2 as follows. When using the NDT scheme, the relay quantizes both $\mathbf{h}$ and $\mathbf{y}$, and due to the bottleneck constraint $C$, there exists a tradeoff. When $D$ is small, the estimation error of $\mathbf{h}$ is small and the destination node already obtains accurate CSI; increasing $D$ then releases capacity for quantizing $\mathbf{y}$, so $R^{\mathrm{lb1}}$ increases with $D$. When $D$ grows large, though more capacity in $C$ is allocated to quantizing $\mathbf{y}$, the estimation error of $\mathbf{h}$ is large. Hence, $R^{\mathrm{lb1}}$ decreases with $D$. In the following simulation process, when implementing the NDT scheme, we vary $D$, calculate $R^{\mathrm{lb1}}$ using (21), and then take $R^{\mathrm{lb1}}$ to be the maximum value.
In Figure 3 and Figure 4, we performed Monte Carlo simulations to obtain the joint entropy $H_{\mathrm{joint}}$ in (34) and the sum of individual entropies $H_{\mathrm{sum}}$ in (37). Note that, as stated in Section 4.2, the complexities of calculating $H_{\mathrm{joint}}$ and $H_{\mathrm{sum}}$ are, respectively, proportional to $J^K$ and $J$. Hence, when $J$ or $K$ is large, it becomes quite difficult to obtain $H_{\mathrm{joint}}$. For example, when $B = 4$ and $K = 4$, we have $J = 16$ and $J^K = 65{,}536$, i.e., there are 65,536 points in space $\Xi$. To obtain a reliable pmf $P_\xi$ for each point, the number of channel realizations has to be much greater than 65,536.
Figure 3 shows that the gap between $H_{\mathrm{joint}}$ and $H_{\mathrm{sum}}$ is small. In addition, as $M$ increases, $H_{\mathrm{joint}}$ approaches $H_{\mathrm{sum}}$ quickly, indicating that the dependence between $\lceil a_k\rceil_{\mathcal{B}},\, k \in \mathcal{K}$ becomes weak. This can be explained by considering an extreme case where $M \to +\infty$. Based on the definition of $\mathbf{H}$ and the strong law of large numbers, we almost surely have $\frac{\mathbf{H}^H\mathbf{H}}{M} - \mathbf{I}_K \to \mathbf{0}$ when $M \to +\infty$. Hence, $\mathbf{A} - \frac{\sigma^2}{M}\mathbf{I}_K \to \mathbf{0}$, and $\lceil a_k\rceil_{\mathcal{B}},\, k \in \mathcal{K}$ are thus almost independent.
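This limiting behavior is easy to observe numerically; a small illustrative check (our own code):

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 2, 10_000
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
print(np.round(H.conj().T @ H / M, 2))   # ~ I_K, hence A ~ (sigma^2 / M) I_K
```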
When $M = K$ and $K$ increases, Figure 4 shows that there is an obvious increase in the gap between $H_{\mathrm{joint}}$ and $H_{\mathrm{sum}}$. Hence, when $M = K$ and $K$ increases, the correlation between $\lceil a_k\rceil_{\mathcal{B}},\, k \in \mathcal{K}$ is enhanced, and we would obtain a gain in $R^{\mathrm{lb2}}$ by using $H_{\mathrm{joint}}$ instead of $H_{\mathrm{sum}}$. However, we would like to point out the following: First, it can be found from Figure 4 that, when $M > K$, this trend becomes less evident. Second, as shown in the following results, when $K \geq 4$, since the QCI scheme uses a lot of the capacity in $C$ to quantize $\lceil a_k\rceil_{\mathcal{B}},\, k \in \mathcal{K}$, its performance is not as good as that of the TCI or MMSE schemes. Third, when $K$ or $B$ is large, it becomes difficult to obtain $H_{\mathrm{joint}}$. Therefore, when implementing the QCI scheme in the following, we obtain $R^{\mathrm{lb2}}$ by using $H_{\mathrm{sum}}$, i.e., by quantizing $\lceil a_k\rceil_{\mathcal{B}},\, k \in \mathcal{K}$ separately.
In Figure 5 and Figure 6, we investigate the effect of threshold $\lambda_{\mathrm{th}}$ on $R^{\mathrm{lb3}}$ for the cases with $K = M$ and $K < M$, respectively. From these two figures, several observations can be made. First, when $K = M$ and $\rho$ or $K$ is small, $R^{\mathrm{lb3}}$ first increases greatly and then decreases with $\lambda_{\mathrm{th}}$, indicating that the choice of $\lambda_{\mathrm{th}}$ has a significant impact on $R^{\mathrm{lb3}}$. It is thus important to look for a good $\lambda_{\mathrm{th}}$ to maximize $R^{\mathrm{lb3}}$ in these cases. Second, when $K = M$ and both $K$ and $\rho$ are large, or when $K < M$, $R^{\mathrm{lb3}}$ first remains unchanged and then monotonically decreases with $\lambda_{\mathrm{th}}$. In these cases, a small $\lambda_{\mathrm{th}}$ is good enough to guarantee a large $R^{\mathrm{lb3}}$, and a search for $\lambda_{\mathrm{th}}$ can thus be avoided. For example, when $K < M$, we can set $\lambda_{\mathrm{th}} = 0$, for which a simpler expression of $R^{\mathrm{lb3}}$ is given in (69). For the case with $K = M$, since $\mathbb{E}[\frac{1}{\lambda}]$ does not exist when $\lambda_{\mathrm{th}} = 0$, we can set $\lambda_{\mathrm{th}}$ to be a fixed small number.
In Figure 7 and Figure 8, we compare $R^{\mathrm{lb3}}$ with its upper bound $\hat{R}^{\mathrm{lb3}}$ and lower bound $\check{R}^{\mathrm{lb3}}$. As expected, $R^{\mathrm{lb3}}$, $\hat{R}^{\mathrm{lb3}}$, and $\check{R}^{\mathrm{lb3}}$ all increase with $M$ and $\rho$. When $M$ or $\rho$ is small, there is a small gap between $R^{\mathrm{lb3}}$ and $\hat{R}^{\mathrm{lb3}}$, and a small gap between $R^{\mathrm{lb3}}$ and $\check{R}^{\mathrm{lb3}}$. As $M$ and $\rho$ increase, these gaps narrow rapidly and the curves almost coincide, which verifies Remark 4. As a result, when $M \gg K$ or $\rho$ is large, we can set $\lambda_{\mathrm{th}} = 0$ and use $\check{R}^{\mathrm{lb3}}$ in (70) to lower bound $I(\mathbf{x};\mathbf{z})$ since it has a more concise expression.
In Figure 9 and Figure 10, the upper bound $R^{\mathrm{ub}}$ and the lower bounds obtained by the different schemes are depicted versus SNR $\rho$. Several observations can be made from these two figures. First, as expected, all bounds increase with $\rho$. Second, when $K$, $M$, and $\rho$ are small, the NDT scheme outperforms the other achievable schemes. However, as these parameters increase, the performance of the NDT scheme deteriorates rapidly. This is because, when $K$, $M$, and $\rho$ are small, the performance of the considered system is mainly limited by the capacity of Channel 1, and the NDT scheme works well since the destination node can extract more information from the compressed observation of the relay and the CSI. However, when $K$ and $M$ increase, the NDT scheme requires too many channel uses for CSI transmission. Third, the QCI scheme can obtain a good performance when $K$ is small. Of course, as stated at the beginning of Section 4.3, the number of bits required for transmitting the quantized noise levels in the QCI scheme is proportional to $K$ and $B$; hence, the performance of the QCI scheme varies significantly when $K$ and $B$ change. Moreover, it is also shown that the performance of the TCI scheme is worse than that of the MMSE scheme in the low SNR regime while getting quite close to it in the high SNR regime. When $\rho$ grows large, the lower bounds obtained by the TCI and MMSE schemes both approach $C$ and are larger than those obtained by the NDT and QCI schemes.
In Figure 11 and Figure 12, the effect of the bottleneck constraint $C$ is investigated. From Figure 11, it can be found that, as $C$ increases, all bounds grow and converge to different constants, which can be calculated based on Lemmas 1, 3, 5, 7, and 9. Figure 11 also shows that, thanks to CSI transmission, the NDT and QCI schemes outperform the TCI and MMSE schemes when $C$ is large. By comparing these two figures, it can be found that, in Figure 11, no bound approaches $C$, even for the case with $C = 20$, while in Figure 12, it is possible for $R^{\mathrm{ub}}$, $R^{\mathrm{lb3}}$, and $R^{\mathrm{lb4}}$ to approach $C$. For example, when $K = M = 4$ and $C \geq 30$, $R^{\mathrm{ub}}, R^{\mathrm{lb3}}, R^{\mathrm{lb4}} \to C$. This is because the bottleneck rate is limited by both the capacity of Channel 1 and $C$. In Figure 11, since $K$ and $M$ are small, the capacity of Channel 1 is smaller than $C$; hence, the bounds of course will not approach $C$. In Figure 12, more multi-antenna gains can be obtained due to the larger $K$ and $M$. The capacity of Channel 1 is thus larger than $C$ in some cases (e.g., $K = M = 4$ and $C \geq 30$); hence, $R^{\mathrm{ub}}$, $R^{\mathrm{lb3}}$, and $R^{\mathrm{lb4}}$ may approach $C$ in these cases. Note that, as shown in Figure 12, since $B < C/K$ is not satisfied there, $R^{\mathrm{lb2}} = 0$ when $C \leq 30$.
In Figure 13 and Figure 14, the effect of $M$ is investigated for different configurations of $\rho$. These two figures show that $R^{\mathrm{ub}}$, $R^{\mathrm{lb2}}$, $R^{\mathrm{lb3}}$, and $R^{\mathrm{lb4}}$ all increase monotonically with $M$ and that, as $M$ grows, $R^{\mathrm{lb3}}$ as well as $R^{\mathrm{lb4}}$ become very close to $R^{\mathrm{ub}}$. As for $R^{\mathrm{lb1}}$, except for the $M = 3$ case in Figure 13, $R^{\mathrm{lb1}}$ decreases monotonically with $M$ since the relay has to transmit more channel information to the destination node.
In Figure 15 and Figure 16, we set $K = M$ and depict the upper and lower bounds versus $K$ (or $M$). In Figure 15, we fix $C$ to 50, while in Figure 16, we set $C = 8K$, which makes sense since the bottleneck constraint should scale with the number of degrees of freedom of the input signal $\mathbf{x}$. Since we choose the quantization levels as quantiles when performing the QCI scheme, as stated at the end of Section 4.2, $B < \frac{C}{K}$ should be satisfied. Hence, in Figure 15 and Figure 16, we only consider $B = 1, 2, 4$ bits when performing the QCI scheme. When $K = M$ and they grow simultaneously, the capacity of Channel 1 increases due to the multi-antenna gains. Hence, for a fixed $C$, Figure 15 shows that all bounds increase at first. When $K$ (or $M$) grows large, $R^{\mathrm{lb3}}$ and $R^{\mathrm{lb4}}$ approach the bottleneck constraint $C$ while $R^{\mathrm{lb2}}$ decreases for all values of $B$. This is because the number of bits per channel use required for informing the destination node of $\hat{\mathbf{A}}_1$ in the QCI scheme is proportional to $K$, while CSI transmission is unnecessary for the TCI and MMSE schemes. For the NDT scheme, since the number of bits required for quantizing $\mathbf{H}$ is proportional to both $K$ and $M$, there is only an increase when $K$ grows from 1 to 2. After that, $R^{\mathrm{lb1}}$ decreases monotonically and has the worst performance. In contrast, when $C = 8K$, the bottleneck rate of the system is mainly limited by $C$. Hence, Figure 16 shows that all bounds except $R^{\mathrm{lb1}}$ increase almost linearly with $K$ and that $R^{\mathrm{ub}}$, $R^{\mathrm{lb3}}$, and $R^{\mathrm{lb4}}$ are quite close to $C$.

6. Conclusions

This work extends the IB problem of the scalar case in [26] to the case of MIMO Rayleigh fading channels. Due to the information bottleneck constraint, the destination node cannot obtain perfect CSI from the relay. Hence, we provide an upper bound to the bottleneck rate by assuming that the destination node can obtain a perfect CSI at no cost. Moreover, we also provide four achievable schemes, where each scheme satisfies the bottleneck constraint and gives a lower bound to the bottleneck rate. Our results show that, with simple symbol-by-symbol relay processing and compression, we can obtain a bottleneck rate close to the upper bound over a wide range of relevant system parameters. Note that tightening the upper bound is a challenge for future studies. In addition, although we have focused on a MIMO channel with one relay, we plan to extend the problem to the case of multiple parallel relays, which is particularly relevant to the centralized processing of multiple remote antennas, as in the so-called C-RAN architectures.

Author Contributions

Conceptualization, H.X., G.C., and S.S.; methodology, H.X., T.Y., G.C., and S.S.; writing—original draft preparation, H.X.; writing—review and editing, H.X., T.Y., G.C., and S.S.; funding acquisition, H.X., G.C., and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Alexander von Humboldt Foundation and the European Union’s Horizon 2020 Research and Innovation Programme with grant agreement No. 694630.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Theorem 1

Before proving Theorem 1, we first consider the following scalar Gaussian channel:
$$y = sx + n, \tag{A1}$$
where $x \sim \mathcal{CN}(0, 1)$, $n \sim \mathcal{CN}(0, \sigma^2)$, and $s \in \mathbb{C}$ is the deterministic channel gain. With bottleneck constraint $C$, the IB problem for (A1) has been studied in [21] and the optimal bottleneck rate is given by
$$R_0 = \log\left(1 + \rho|s|^2\right) - \log\left(1 + \rho|s|^2\, 2^{-C}\right). \tag{A2}$$
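For reference, (A2) is a one-line computation; a small illustrative helper (our code, logs in bits):

```python
import numpy as np

def scalar_ib_rate(snr_eff, C):
    """Optimal bottleneck rate (A2) for channel (A1), with effective SNR
    snr_eff = rho * |s|^2; logs in bits."""
    return np.log2(1 + snr_eff) - np.log2(1 + snr_eff * 2.0 ** (-C))

print(scalar_ib_rate(snr_eff=10.0, C=4.0))   # approaches log2(11) as C grows
```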
In the following, we show that (4) can be decomposed into a set of parallel scalar IB problems, and (A2) can then be applied to obtain the upper bound $R^{\mathrm{ub}}$ in Theorem 1.
According to the definition of conditional mutual information, problem (4) can be rewritten as
$$\max_{p(\mathbf{z}|\mathbf{y},\mathbf{H})} \; \int I(\mathbf{x};\mathbf{z}|\mathbf{H} = H)\, p_{\mathbf{H}}(H)\, dH \tag{A3a}$$
$$\text{s.t.} \quad \int I(\mathbf{y};\mathbf{z}|\mathbf{H} = H)\, p_{\mathbf{H}}(H)\, dH \leq C, \tag{A3b}$$
where $H$ is a realization of $\mathbf{H}$. Let $\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^H$ denote the eigendecomposition of $\mathbf{H}\mathbf{H}^H$, where $\mathbf{U}$ is a unitary matrix whose columns are the eigenvectors of $\mathbf{H}\mathbf{H}^H$, and $\boldsymbol{\Lambda}$ is a diagonal matrix whose diagonal elements are the eigenvalues of $\mathbf{H}\mathbf{H}^H$. Since the rank of $\mathbf{H}\mathbf{H}^H$ is no greater than $T = \min\{K, M\}$, there are at most $T$ positive diagonal entries in $\boldsymbol{\Lambda}$. Denote them by $\lambda_t$, where $t \in \mathcal{T}$ and $\mathcal{T} = \{1, \ldots, T\}$. Let
$$\hat{\mathbf{y}} = \mathbf{U}^H\mathbf{y} = \mathbf{U}^H\mathbf{H}\mathbf{x} + \mathbf{U}^H\mathbf{n}. \tag{A4}$$
Then, for a given channel realization $\mathbf{H} = H$, $\hat{\mathbf{y}}$ is conditionally Gaussian, i.e.,
$$\hat{\mathbf{y}}\,|\,\mathbf{H} = H \sim \mathcal{CN}(\mathbf{0}, \boldsymbol{\Lambda} + \sigma^2\mathbf{I}_M). \tag{A5}$$
Since
$$I(\mathbf{x};\mathbf{y}|\mathbf{H} = H) = I(\mathbf{x};\hat{\mathbf{y}}|\mathbf{H} = H), \tag{A6}$$
we work with $\hat{\mathbf{y}}$ instead of $\mathbf{y}$ in the following.
Based on (A3) and (A5), it is known that the MIMO channel $p(\hat{\mathbf{y}}|\mathbf{x}, \mathbf{H})$ can first be divided into a set of parallel channels for different realizations of $\mathbf{H}$ and that each channel $p(\hat{\mathbf{y}}|\mathbf{x}, \mathbf{H} = H)$ can be further divided into $T$ independent scalar Gaussian channels with SNRs $\rho\lambda_t,\, t \in \mathcal{T}$. Accordingly, problem (4) can be decomposed into a set of parallel IB problems. For a scalar Gaussian channel with SNR $\rho\lambda_t$, let $c_t^{\mathrm{ub}}$ denote the allocation of the bottleneck constraint $C$ and $R_t^{\mathrm{ub}}$ denote the corresponding rate. According to (A2), we have
$$R_t^{\mathrm{ub}} = \log(1+\rho\lambda_t) - \log\left(1+\rho\lambda_t\, 2^{-c_t^{\mathrm{ub}}}\right). \tag{A7}$$
Then, the solution of problem (4) can be obtained by solving the following problem:
$$\max_{\{c_t^{\mathrm{ub}}\}} \; \sum_{t=1}^{T} \mathbb{E}\left[R_t^{\mathrm{ub}}\right] \tag{A8a}$$
$$\text{s.t.} \quad \sum_{t=1}^{T} \mathbb{E}\left[c_t^{\mathrm{ub}}\right] \leq C. \tag{A8b}$$
Assume that $\lambda_t,\, t \in \mathcal{T}$ are the unordered positive eigenvalues of $\mathbf{H}\mathbf{H}^H$. (Note that, when deriving the upper and lower bounds in this paper, we consider the unordered positive eigenvalues of $\mathbf{H}^H\mathbf{H}$ or $\mathbf{H}\mathbf{H}^H$ since this simplifies the analysis. If the ordered positive eigenvalues are considered instead, it can be readily shown, by following steps similar to those in ([31], Section 4.2), that one arrives at problems equivalent to those in this paper.) Then, they are identically distributed. For convenience, define a new variable $\lambda$ that follows the same distribution as $\lambda_t$. The subscript “$t$” in $c_t^{\mathrm{ub}}$ and $R_t^{\mathrm{ub}}$ can thus be omitted. In order to distinguish it from $R^{\mathrm{ub}}$ in (5), we use $R_0^{\mathrm{ub}}$ to denote the bottleneck rate corresponding to $c^{\mathrm{ub}}$, i.e.,
$$R_0^{\mathrm{ub}} = \log(1+\rho\lambda) - \log\left(1+\rho\lambda\, 2^{-c^{\mathrm{ub}}}\right). \tag{A9}$$
Then, we have
$$\sum_{t=1}^{T} \mathbb{E}\left[R_t^{\mathrm{ub}}\right] = T\,\mathbb{E}\left[R_0^{\mathrm{ub}}\right], \quad \sum_{t=1}^{T} \mathbb{E}\left[c_t^{\mathrm{ub}}\right] = T\,\mathbb{E}\left[c^{\mathrm{ub}}\right]. \tag{A10}$$
Problem (A8) is thus equivalent to
max c ub E R 0 ub
s . t . E c ub C T .
This problem can be solved by the water-filling method. Consider the Lagrangian
L = E R 0 ub + α c ub α C T ,
where α is the Lagrange multiplier. The Karush–Kuhn–Tucker (KKT) condition for optimality is

∂𝓛/∂c^{ub} = 0 if c^{ub} > 0,  ∂𝓛/∂c^{ub} ≤ 0 if c^{ub} = 0.   (A13)
Then,
c^{ub} = log(ρλ/ν) if λ > ν/ρ,  c^{ub} = 0 if λ ≤ ν/ρ,   (A14)
where ν = α/(1 − α) is chosen such that the following bottleneck constraint is met:
E[ log(ρλ/ν) | λ > ν/ρ ] Pr( λ > ν/ρ ) = C/T.   (A15)
The informed receiver upper bound is thus given by
R^{ub} = T E[ log(1 + ρλ) − log(1 + ν) | λ > ν/ρ ] Pr( λ > ν/ρ ).   (A16)
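Since the left-hand side of (A15) decreases monotonically in ν, the threshold can be found by bisection. The following Monte Carlo sketch (an illustration in which the expectations in (A15) and (A16) are estimated from sampled eigenvalues rather than from the pdf (A17) given next; the parameters are arbitrary examples) evaluates R^{ub}:

```python
import numpy as np

def waterfill_rate(lam, snr, budget, T):
    """Water-filling (A14)-(A16): bisect on nu until the bottleneck
    constraint (A15) is met, then return the rate (A16), in bits.
    lam holds samples of the unordered positive eigenvalue."""
    def used(nu):  # sample estimate of the left-hand side of (A15)
        active = lam > nu / snr
        return np.log2(snr * lam[active] / nu).sum() / lam.size
    lo, hi = 1e-12, snr * lam.max()
    for _ in range(200):                     # geometric bisection on nu
        nu = np.sqrt(lo * hi)
        lo, hi = (nu, hi) if used(nu) > budget / T else (lo, nu)
    active = lam > nu / snr
    return T * np.log2((1.0 + snr * lam[active]) / (1.0 + nu)).sum() / lam.size

# Example (assumed parameters): K = 2, M = 4, rho = 20 dB, C = 8 bits.
rng = np.random.default_rng(0)
K, M, rho, C = 2, 4, 100.0, 8.0
H = (rng.standard_normal((10000, M, K)) + 1j * rng.standard_normal((10000, M, K))) / np.sqrt(2)
lam = np.linalg.eigvalsh(H.conj().transpose(0, 2, 1) @ H).ravel()  # eigenvalues of H^H H
print(waterfill_rate(lam, rho, C, min(K, M)))  # informed-receiver upper bound R^{ub}
```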
From the definition of H in (2), it is known that, when K ≤ M (resp., when K > M), H^H H (resp., H H^H) is a central complex Wishart matrix with M (resp., K) degrees of freedom and covariance matrix I_K (resp., I_M), i.e., H^H H ∼ CW_K(M, I_K) (resp., H H^H ∼ CW_M(K, I_M)) [33]. Since λ can be seen as one of the unordered positive eigenvalues of H^H H or H H^H, its pdf is given by [31] and ([33], Theorem 2.17):
f_λ(λ) = (1/T) Σ_{i=0}^{T−1} ( i! / (i + S − T)! ) [ L_i^{S−T}(λ) ]² λ^{S−T} e^{−λ},   (A17)

where S = max{K, M} and the Laguerre polynomials are

L_i^{S−T}(λ) = ( e^{λ} / (i! λ^{S−T}) ) (d^i/dλ^i)( e^{−λ} λ^{S−T+i} ).   (A18)
Substituting (A17) and (A18) into (A16) and (A15), we obtain (5) and (6). Theorem 1 is thus proven.
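As a sanity check, the pdf (A17)–(A18) can be evaluated directly with generalized Laguerre polynomials; the short sketch below (illustrative only) verifies numerically that f_λ integrates to one and that E[λ] = KM/T = max{K, M}:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import genlaguerre, factorial

def eig_pdf(lam, K, M):
    """Unordered-eigenvalue pdf (A17) with the Laguerre polynomials (A18)."""
    S, T = max(K, M), min(K, M)
    terms = sum(factorial(i) / factorial(i + S - T) * genlaguerre(i, S - T)(lam) ** 2
                for i in range(T))
    return terms * lam ** (S - T) * np.exp(-lam) / T

K, M = 2, 4
total = quad(lambda x: eig_pdf(x, K, M), 0.0, np.inf)[0]
mean = quad(lambda x: x * eig_pdf(x, K, M), 0.0, np.inf)[0]
print(total, mean)   # ~1.0 and ~4.0 = max(K, M)
```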

Appendix B. Proof of Lemma 1

In order to prove that R^{ub} approaches C as M → +∞, we first look at the special case with K = 1. In this case, S = M and T = 1. From (A18) and (A17), we have L_0^{S−T}(λ) = 1, and the pdf of λ becomes
f_λ(λ) = λ^{M−1} e^{−λ} / (M − 1)!,   (A19)
which shows that λ follows an Erlang distribution with shape parameter M and rate parameter 1, i.e., λ ∼ Erlang(M, 1). The expectation of λ is thus M. As M → +∞, f_λ(λ) concentrates around this mean and behaves like a delta function on the scale of M [34]. Hence, for a sufficiently small positive real number ϵ,
lim_{M→+∞} Pr( |λ/M − 1| ≤ ϵ ) = 1,  lim_{M→+∞} Pr( |λ/M − 1| > ϵ ) = 0.   (A20)
Then, when M → +∞, the bottleneck constraint (6) gives

∫_{ν/ρ}^{+∞} log(ρλ/ν) f_λ(λ) dλ = C ≈ ∫_{M(1−ϵ)}^{M(1+ϵ)} log(ρλ/ν) f_λ(λ) dλ ≈ log(ρM/ν),   (A21)
based on which we get
ν ≈ Mρ 2^{−C}.   (A22)
Using (5), (A20), and (A22), it is known that, when M → +∞,

R^{ub} = ∫_{ν/ρ}^{+∞} [ log(1 + ρλ) − log(1 + ν) ] f_λ(λ) dλ ≈ ∫_{M(1−ϵ)}^{M(1+ϵ)} log( (1 + ρλ)/(1 + ν) ) f_λ(λ) dλ ≈ log( (1 + ρM)/(1 + ν) ) → C.   (A23)
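The concentration step (A20) is simply the law of large numbers for λ ∼ Erlang(M, 1); an illustrative check:

```python
import numpy as np

rng = np.random.default_rng(1)
for M in [4, 64, 1024]:
    lam = rng.gamma(shape=M, scale=1.0, size=100_000)   # Erlang(M, 1) samples
    print(M, np.mean(np.abs(lam / M - 1.0) > 0.1))      # Pr(|lam/M - 1| > 0.1) -> 0
```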
Next, we consider the general case. For any positive integer K, when M → +∞, based on the definition of H and the strong law of large numbers, we almost surely have (1/M) H^H H − I_K → 0. Since H^H H and H H^H have the same positive eigenvalues, λ/M → 1 almost surely. (A20) thus also holds in this general case. Then,
∫_{ν/ρ}^{+∞} log(ρλ/ν) f_λ(λ) dλ = C/T ≈ ∫_{M(1−ϵ)}^{M(1+ϵ)} log(ρλ/ν) f_λ(λ) dλ ≈ log(ρM/ν),   (A24)
based on which we get
ν ≈ Mρ 2^{−C/T}.   (A25)
Hence, when M → +∞,

R^{ub} = T ∫_{ν/ρ}^{+∞} [ log(1 + ρλ) − log(1 + ν) ] f_λ(λ) dλ ≈ T ∫_{M(1−ϵ)}^{M(1+ϵ)} log( (1 + ρλ)/(1 + ν) ) f_λ(λ) dλ ≈ T log( (1 + ρM)/(1 + ν) ) → C.   (A26)
Now, we prove that R^{ub} approaches C as ρ → +∞. From (6), it can be seen that ∫_{ν/ρ}^{+∞} log(ρλ/ν) f_λ(λ) dλ decreases with ν. Therefore, when ρ → +∞, to ensure that constraint (6) holds, ν becomes large. Then, we have
R^{ub} = T ∫_{ν/ρ}^{+∞} [ log(1 + ρλ) − log(1 + ν) ] f_λ(λ) dλ ≈ T ∫_{ν/ρ}^{+∞} [ log(ρλ) − log ν ] f_λ(λ) dλ = C.   (A27)
In addition, when C → +∞, it can be found from (6) that ν → 0. Using (5), we can get (7), which is the capacity of Channel 1. This completes the proof.

Appendix C. Proof of Theorem 2

For a given Z_1, y_g ∼ CN(0, Ω + (KD + σ²) I_M). Let ω denote the unordered positive eigenvalue of Z_1 Z_1^H. Since the elements in Z_1 and H follow i.i.d. CN(0, 1 − D) and CN(0, 1), respectively, and λ is the unordered positive eigenvalue of H H^H as defined in Appendix A, ω is identically distributed as (1 − D)λ. Then, the pdf of ω is

f_ω(ω) = ( 1/(1 − D) ) f_λ( ω/(1 − D) ),   (A28)

where f_λ is the pdf of λ given in (A17).
For a given feasible D, problem (20) can be solved similarly to (4) by following the steps in Appendix A, and the optimal solution is
R^{lb_1} = T ∫_{ν(KD+σ²)}^{+∞} [ log( 1 + ω/(KD + σ²) ) − log(1 + ν) ] f_ω(ω) dω,   (A29)
where ν is chosen such that the following bottleneck constraint is met:
∫_{ν(KD+σ²)}^{+∞} log( ω/(ν(KD + σ²)) ) f_ω(ω) dω = ( C − R(D) )/T.   (A30)
Using (A28), (A29) can be reformulated as
R^{lb_1} = T ∫_{ν(KD+σ²)}^{+∞} [ log( 1 + ω/(KD + σ²) ) − log(1 + ν) ] f_ω(ω) dω
  = T ∫_{ν(KD+σ²)}^{+∞} [ log( 1 + ω/(KD + σ²) ) − log(1 + ν) ] ( 1/(1 − D) ) f_λ( ω/(1 − D) ) dω
  = T ∫_{ν/γ}^{+∞} [ log(1 + γλ) − log(1 + ν) ] f_λ(λ) dλ,   (A31)
where γ = (1 − D)/(KD + σ²) and the last equality follows from the substitution λ = ω/(1 − D). Analogously, the bottleneck constraint (A30) can be transformed into
∫_{ν/γ}^{+∞} log(γλ/ν) f_λ(λ) dλ = ( C − R(D) )/T.   (A32)
Theorem 2 is thus proven.
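In computational terms, (A31) and (A32) state that R^{lb_1} is obtained from the Appendix A water-filling routine by substituting γ for ρ and C − R(D) for C. A hedged sketch of this reuse (waterfill_rate is the bisection sketch given after (A16) in Appendix A; R_of_D, the CSI rate of the scheme as a function of D, is an assumed callable):

```python
def r_lb1(D, K, sigma2, C, R_of_D, lam, T):
    """Lower bound of Theorem 2 via (A31)-(A32); lam holds eigenvalue
    samples of H H^H, as in the Appendix A sketch."""
    gamma = (1.0 - D) / (K * D + sigma2)     # effective per-eigenchannel SNR
    return waterfill_rate(lam, gamma, C - R_of_D(D), T)
```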

Appendix D. Proof of Lemma 2

We first prove inequality (25):

I(y; z_2 | Z_1) = I(ỹ; z_2 | Z_1) = h(z_2 | Z_1) − h(z_2 | Z_1, ỹ) ≤^{(a)} E[ log det( Ω Ψ² + (KD + σ²) Ψ² + I_M ) ] = I(y_g; z_g | Z_1),   (A33)
where (a) holds since the Gaussian distribution maximizes entropy among all distributions with the same covariance. Then, we prove inequality (26). Since, for a Gaussian input, Gaussian noise minimizes the mutual information ([27], (9.178)), we have

I(x; z_2 | Z_1) ≥ I(x; z_g | Z_1).   (A34)
Since Ψ is optimally obtained when solving IB problem (20), bottleneck constraint (20b) is satisfied and I(x; z_g | Z_1) = R^{lb_1}. Then, from (A33) and (A34), we have

I(y; z_2 | Z_1) ≤ C − R(D),  I(x; z_2 | Z_1) ≥ R^{lb_1}.   (A35)
This completes the proof.

Appendix E. Proof of Lemma 3

When M → +∞, as stated in Appendix B, λ/M → 1 almost surely. Then,

∫_{ν/γ}^{+∞} log(γλ/ν) f_λ(λ) dλ = ( C − R(D) )/T ≈ ∫_{M(1−ϵ)}^{M(1+ϵ)} log(γλ/ν) f_λ(λ) dλ ≈ log(γM/ν),   (A36)
based on which we get
ν ≈ γM 2^{−(C−R(D))/T}.   (A37)
From (21), it is known that, as M + ,
R^{lb_1} → T [ log(1 + γM) − log( 1 + γM 2^{−(C−R(D))/T} ) ].   (A38)
It can be readily proven that 0 ≤ T [ log(1 + γM) − log( 1 + γM 2^{−(C−R(D))/T} ) ] ≤ C.
When ρ → +∞, σ² → 0 and γ → (1 − D)/(KD). R^{lb_1} thus tends to a constant, which can be obtained from (21).
When C → +∞, it is possible for the relay to transmit h almost perfectly to the destination node, i.e., D → 0. Hence, γ = (1 − D)/(KD + σ²) → 1/σ² = ρ. In addition, it can be found from (22) that ν → 0. Then, from (21),

R^{lb_1} → T ∫_{0}^{+∞} log(1 + ρλ) f_λ(λ) dλ = I(x; y, H).   (A39)
Lemma 3 is thus proven.

Appendix F. Proof of Theorem 3

Since n̂_g ∼ CN(0, A_1) and each a_k^B takes one of J possible values b_1, …, b_J, the channel in (32) can be divided into KJ independent scalar Gaussian sub-channels, one for each pair (k, j) with noise power a_k^B = b_j. For the sub-channel with noise power a_k^B = b_j, let c_{k,j} denote the allocation of the bottleneck constraint C and R_{k,j} the corresponding rate. According to (A2), we have

R_{k,j} = log(1 + ρ_j) − log(1 + ρ_j 2^{−c_{k,j}}),   (A40)
where ρ_j = 1/b_j. Since b_J = +∞, we let R_{k,J} = 0 and c_{k,J} = 0. Note that, based on ([21], (16)), the representation of x̂_g, i.e., ẑ_g, can be constructed by adding independent fading and Gaussian noise to each element of x̂_g in (32). Denote

P_{k,j} = Pr( a_k^B = b_j ).   (A41)
Then, the optimal I(x; ẑ_g | A_1) equals the optimal objective value of the following problem:

max_{{c_{k,j}}} Σ_{k=1}^{K} Σ_{j=1}^{J−1} P_{k,j} R_{k,j}
s.t. Σ_{k=1}^{K} Σ_{j=1}^{J−1} P_{k,j} c_{k,j} ≤ C − Σ_{k=1}^{K} H_k,   (A42)
where H_k = −Σ_{j=1}^{J} P_{k,j} log P_{k,j}.
Since K ≤ M, as stated in Appendix A, H^H H ∼ CW_K(M, I_K). Matrix (H^H H)^{−1} thus follows the complex inverse Wishart distribution, and its diagonal elements are identically inverse chi-squared distributed with M − K + 1 degrees of freedom [35]. Let η denote one of the diagonal elements of (H^H H)^{−1}. The pdf of η is thus given by

f_η(η) = ( 2^{−(M−K+1)/2} / Γ( (M−K+1)/2 ) ) η^{−(M−K+1)/2 − 1} e^{−1/(2η)}.   (A43)
Since A = σ² (H^H H)^{−1}, the diagonal entries of A, i.e., a_k, k ∈ 𝒦, are marginally identically distributed. Let a denote a new variable with the same distribution as the a_k. Then a follows the same distribution as σ²η, and its pdf is given by

f_a(a) = (1/σ²) f_η( a/σ² ) = ( (2/σ²)^{−(M−K+1)/2} / Γ( (M−K+1)/2 ) ) a^{−(M−K+1)/2 − 1} e^{−σ²/(2a)}.   (A44)
In addition, P_{k,j}, R_{k,j}, and c_{k,j} can be simplified to P_j, R_j, and c_j by dropping the subscript "k". Using (A44), P_j can be calculated as follows:

P_j = Pr( a^B = b_j ) = Pr( b_{j−1} < a ≤ b_j ) = ∫_{b_{j−1}}^{b_j} f_a(a) da,   (A45)

where b_0 = 0.
Problem (A42) thus becomes
max_{{c_j}} Σ_{j=1}^{J−1} K P_j R_j
s.t. Σ_{j=1}^{J−1} K P_j c_j ≤ C − K H_0,   (A46)
where
R_j = log(1 + ρ_j) − log(1 + ρ_j 2^{−c_j}),  H_0 = −Σ_{j=1}^{J} P_j log P_j.   (A47)
Analogous to problem (A11), (A46) can be optimally solved by the water-filling method. The optimal I(x; ẑ_g | A_1) is given by

R^{lb_2} = Σ_{j=1}^{J−1} K P_j [ log(1 + ρ_j) − log(1 + ρ_j 2^{−c_j}) ],   (A48)

where c_j = [ log(ρ_j/ν) ]^{+} and ν is chosen such that the bottleneck constraint

Σ_{j=1}^{J−1} K P_j c_j = C − K H_0   (A49)
is met. Theorem 3 is then proven.
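To make Theorem 3 concrete, the discrete water-filling (A46)–(A49) can be evaluated once the probabilities (A45) are computed from the pdf (A44). An illustrative sketch (the quantization points b_j are arbitrary example values, not optimized):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

K, M, sigma2, C = 2, 8, 0.01, 8.0
m = M - K + 1

def f_a(a):  # pdf (A44) of a diagonal entry of A = sigma^2 (H^H H)^{-1}
    return ((2.0 / sigma2) ** (-m / 2) / Gamma(m / 2)
            * a ** (-m / 2 - 1) * np.exp(-sigma2 / (2.0 * a)))

b = np.array([0.001, 0.003, 0.01, np.inf])       # example points b_1, ..., b_J
edges = np.concatenate(([0.0], b))
P = np.array([quad(f_a, edges[j], edges[j + 1])[0] for j in range(len(b))])  # (A45)
H0 = -np.sum(P[P > 0] * np.log2(P[P > 0]))       # index entropy H_0 in (A47)
rho = 1.0 / b[:-1]                               # sub-channel SNRs rho_j, j < J

def used(nu):                                    # left-hand side of (A49), divided by K
    return np.sum(P[:-1] * np.maximum(np.log2(rho / nu), 0.0))

lo, hi = 1e-9, rho.max()
for _ in range(200):                             # bisection on nu
    nu = np.sqrt(lo * hi)
    lo, hi = (nu, hi) if used(nu) > C / K - H0 else (lo, nu)

c = np.maximum(np.log2(rho / nu), 0.0)           # allocations c_j
R_lb2 = K * np.sum(P[:-1] * (np.log2(1 + rho) - np.log2(1 + rho * 2.0 ** (-c))))
print(P, H0, R_lb2)                              # cf. (A48)
```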

Appendix G. Proof of Lemma 4

Since Φ is a diagonal matrix with positive and real diagonal entries, it is invertible. Let n_q ∼ CN(0, I_K) denote the compression noise added in (41), and denote

z′ = Φ^{−1} z = x + n̂ + Φ^{−1} n_q,  ẑ_g′ = Φ^{−1} ẑ_g = x + n̂_g + Φ^{−1} n_q.   (A50)

For a given A_1, each element of n̂ is Gaussian distributed with zero mean and variance a_k^B. However, n̂ is not a Gaussian vector since H is unknown. Hence, z′ is not a Gaussian vector. For ẑ_g′, from (32) and (41), it is known that ẑ_g′ ∼ CN(0, I_K + A_1 + Φ^{−2}).
We first prove inequality (43):

I(x̂; z | A_1) = I(x̂; z′ | A_1) = h(z′ | A_1) − h(z′ | x̂, A_1) ≤^{(a)} E[ log det( I_K + E[n̂ n̂^H] + Φ^{−2} ) ] − log det( Φ^{−2} ) ≤^{(b)} E[ log det( I_K + A_1 + Φ^{−2} ) ] − log det( Φ^{−2} ) = I(x̂_g; ẑ_g′ | A_1) = I(x̂_g; ẑ_g | A_1),   (A51)

where (a) holds since the Gaussian distribution maximizes entropy among all distributions with the same covariance, and (b) follows from Hadamard's inequality.
Denote x = (x_1, …, x_K)^T, z′ = (z_1′, …, z_K′)^T, ẑ_g′ = (ẑ_{g,1}′, …, ẑ_{g,K}′)^T, and Φ = diag{φ_1, …, φ_K}. Then, we prove inequality (44). Using the chain rule of mutual information,

I(x; z | A_1) = I(x; z′ | A_1) = Σ_{k=1}^{K} I(x_k; z_k′ | A_1) + Q ≥ Σ_{k=1}^{K} I(x_k; z_k′ | A_1) =^{(a)} Σ_{k=1}^{K} I(x_k; ẑ_{g,k}′ | A_1) =^{(b)} I(x; ẑ_g′ | A_1) = I(x; ẑ_g | A_1),   (A52)

where Q is a nonnegative constant; (a) holds since, for a given A_1, both z_k′ and ẑ_{g,k}′ follow CN(0, 1 + a_k^B + φ_k^{−2}); and (b) follows since the elements of x and ẑ_g′ are independent.
Since Φ is optimally obtained when solving IB problem (38), bottleneck constraint (38b) is satisfied and I(x; ẑ_g | A_1) = R^{lb_2}. Then, from (A51) and (A52), we have

I(x̂; z | A_1) ≤ C − K H_0,  I(x; z | A_1) ≥ R^{lb_2}.   (A53)
This completes the proof.

Appendix H. Proof of Lemma 5

As stated in Appendix B, when M → +∞, (1/M) H^H H − I_K → 0 almost surely. Hence, A − (σ²/M) I_K → 0. Let J = 2, b_1 = σ²/M + ϵ, and b_2 = +∞, where ϵ is a sufficiently small positive real number. Since A − (σ²/M) I_K → 0, we have P_1 → 1 and H_0 → 0. Then, from (39) and (40),

c_1 → C/K,  R^{lb_2} → K [ log( 1 + M/σ² ) − log( 1 + (M/σ²) 2^{−C/K} ) ] → C.   (A54)
When ρ → +∞, σ² → 0 and A → 0. By setting J = 2 and b_1 small enough, it can be proven as above that R^{lb_2} → C.
When C → +∞, we can choose quantization points ℬ = {b_1, …, b_J} with sufficiently large J such that the diagonal entries of A_1, which are continuously valued, can be represented precisely using the discretely valued points in ℬ, and the representation indexes of all diagonal entries can be transmitted to the destination node since C is large enough. On the other hand, as shown in (41), a representation of x̂_g is

ẑ_g = Φ x̂_g + n_q,   (A55)

where Φ is a diagonal matrix with positive and real diagonal entries and n_q ∼ CN(0, I_K). As C → +∞, it follows from ([21], (17) and (20)) that the diagonal entries of Φ grow without bound, i.e.,

φ_k → +∞, ∀ k ∈ 𝒦.   (A56)
Since Φ is a diagonal matrix with positive and real diagonal entries, as in (A50), we can get

ẑ_g′ = Φ^{−1} ẑ_g = x̂_g + Φ^{−1} n_q.   (A57)
From (A56), it is known that the elements of the noise vector Φ^{−1} n_q have zero mean and power that approaches 0 as C → +∞. Hence, (x, ẑ_g′) → (x, x̂_g) in distribution. Then, based on [36], we have

I(x; x̂_g | A_1) ≤ liminf_{C→+∞} I(x; ẑ_g′ | A_1).   (A58)
In addition, since the Gaussian noise vector n̂_g (defined in (32)) is independent of x, and Φ^{−1} n_q in (A57) is independent of both x and n̂_g, x → x̂_g → ẑ_g′ forms a Markov chain. Then, according to the data-processing inequality, we have

I(x; ẑ_g′ | A_1) ≤ I(x; x̂_g | A_1).   (A59)
Combining (A58) and (A59), we have

I(x; x̂_g | A_1) ≤ liminf_{C→+∞} I(x; ẑ_g′ | A_1) ≤ I(x; x̂_g | A_1),   (A60)
showing that the limit lim_{C→+∞} I(x; ẑ_g′ | A_1) exists and equals I(x; x̂_g | A_1). Then, when C → +∞,

R^{lb_2} = I(x; ẑ_g | A_1) = I(x; ẑ_g′ | A_1) → I(x; x̂_g | A_1) = E[ log det( I_K + A_1^{−1} ) ] = E[ log det( I_K + A_1 ) ] − E[ log det( A_1 ) ].   (A61)
On the other hand, the capacity of Channel 1 is given by
I(x; y, H) = I(x; y | H) = E[ log det( H H^H + σ² I_M ) ] − log det( σ² I_M ) = E[ log det( H^H H + σ² I_K ) ] − log det( σ² I_K ) = E[ log det( I_K + A ) ] − E[ log det( A ) ].   (A62)
To prove that (A61) is upper bounded by (A62), we first give and prove the following lemma.
Lemma A1.
For any K-dimensional positive definite matrix N, let N_1 = N ⊙ I_K, i.e., N_1 consists of the diagonal elements of N. Then,

log det( I_K + N ) − log det( N ) ≥ log det( I_K + N_1 ) − log det( N_1 ).   (A63)
Proof. 
Obviously, (A63) is equivalent to
log det( N_1 ) − log det( N ) ≥ log det( I_K + N_1 ) − log det( I_K + N ).   (A64)
To prove (A64), we introduce the auxiliary function g_1(x) = log det( x I_K + N_1 ) − log det( x I_K + N ) and show that g_1(x) decreases monotonically with respect to x for x ≥ 0. Taking the first-order derivative of g_1(x), we have

g_1′(x) = tr( ( x I_K + N_1 )^{−1} ) − tr( ( x I_K + N )^{−1} ).   (A65)
To prove g_1′(x) ≤ 0, we show in the following that, for any positive definite matrix O, we always have

tr( O_1^{−1} ) ≤ tr( O^{−1} ),   (A66)
where O_1 consists of the diagonal elements of O, i.e., O_1 = O ⊙ I_K. Denote the diagonal entries of O (or O_1) by o = (o_1, …, o_K)^T and the eigenvalues of O by θ = (θ_1, …, θ_K)^T. Since O is a positive definite matrix, the entries of o and θ are real and positive. In addition, according to the Schur–Horn theorem, o is majorized by θ, i.e.,

o ≺ θ.   (A67)
Define a real vector u = (u_1, …, u_K)^T with u_k > 0, k ∈ 𝒦, and the function g_2(u) = Σ_{k=1}^{K} 1/u_k. It is obvious that g_2(u) is convex and symmetric; hence, g_2(u) is a Schur-convex function. Therefore,

g_2(o) ≤ g_2(θ).   (A68)
Using (A68), we have
tr( O_1^{−1} ) = Σ_{k=1}^{K} 1/o_k = g_2(o) ≤ g_2(θ) = Σ_{k=1}^{K} 1/θ_k = tr( O^{−1} ),   (A69)

based on which we get g_1′(x) ≤ 0. Hence, g_1(0) ≥ g_1(1), which is exactly (A64), and (A63) is then proven. □
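Both Lemma A1 and the trace inequality (A66) admit a quick numerical sanity check on a random positive definite matrix (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4
B = rng.standard_normal((K, K))
N = B @ B.T + 0.1 * np.eye(K)       # a random positive definite matrix
N1 = np.diag(np.diag(N))            # its diagonal part, N_1 = N o I_K

def f(X):                           # log det(I + X) - log det(X)
    return np.linalg.slogdet(np.eye(K) + X)[1] - np.linalg.slogdet(X)[1]

print(f(N) >= f(N1))                                              # (A63): True
print(np.trace(np.linalg.inv(N1)) <= np.trace(np.linalg.inv(N)))  # (A66): True
```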
Then, from (A61), (A62), and Lemma A1 (applied with N = A, noting that A_1 → A ⊙ I_K as the quantization becomes exact), it is known that, when C → +∞,

R^{lb_2} → E[ log det( I_K + A_1 ) ] − E[ log det( A_1 ) ] = K E[ log( 1 + 1/a ) ] ≤ I(x; y, H),   (A70)
where the expectation can be calculated by using the pdf of a in (A44). Lemma 5 is thus proven.

Appendix I. Proof of Remark 2

In this appendix, we show that, when K = M and λ_{th} = 0, E[1/λ] does not exist.
When K = M, f_λ(λ) is given by (A17) with S = T = K. From (A18), it is known that, for any 0 ≤ i ≤ K − 1, L_i^0(λ) can always be expressed as follows:

L_i^0(λ) = ( e^{λ}/i! ) (d^i/dλ^i)( e^{−λ} λ^i ) = Σ_{j=1}^{i} ς_{i,j} λ^j + 1,   (A71)
where the ς_{i,j} are constants. Accordingly, from (A17),

f_λ(λ) = (1/K) Σ_{i=0}^{K−1} [ L_i^0(λ) ]² e^{−λ} = ( e^{−λ}/K ) Σ_{j=1}^{2(K−1)} τ_j λ^j + e^{−λ},   (A72)
where the τ_j are constants. Let ϵ denote a sufficiently small positive real number. Then, when λ_{th} = 0,

E[1/λ] = ∫_{0}^{+∞} (1/λ) f_λ(λ) dλ = ∫_{0}^{+∞} ( e^{−λ}/K ) Σ_{j=1}^{2(K−1)} τ_j λ^{j−1} dλ + ∫_{0}^{+∞} (1/λ) e^{−λ} dλ = (1/K) Σ_{j=1}^{2(K−1)} τ_j (j − 1)! − lim_{ϵ→0} Ei(−ϵ),   (A73)

where we used ∫_{0}^{+∞} e^{−λ} λ^{j−1} dλ = (j − 1)! and Ei(·) denotes the exponential integral. As is well known, lim_{x→0} Ei(x) = −∞. Hence, the integral in (A73) diverges, and E[1/λ] thus does not exist.
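Numerically, the divergence in (A73) stems from the standalone e^{−λ} term in (A72): since f_λ(0) = 1 > 0 when K = M, the truncated integral ∫_{ϵ}^{+∞} λ^{−1} f_λ(λ) dλ grows like ln(1/ϵ). An illustrative check for K = M = 2:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import genlaguerre

K = 2   # K = M, so S - T = 0 and f_lambda(0) = 1

def f_lam(x):  # pdf (A17) specialized to S = T = K
    return sum(genlaguerre(i, 0)(x) ** 2 for i in range(K)) * np.exp(-x) / K

for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    val = quad(lambda x: f_lam(x) / x, eps, np.inf, limit=500)[0]
    print(eps, val)   # grows ~ ln(1/eps): E[1/lambda] does not exist
```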

Appendix J. Proof of Lemma 7

As stated in Appendix B, when M → +∞, (1/M) H^H H − I_K → 0 almost surely. Hence,

1/λ → 0,  P_{th} = Pr( λ_{min} > λ_{th} ) → 1,  H_{th} → 0,  D → 1/( 2^{C/K} − 1 ),  1/E[λ | Δ] → 0,  E[ 1/λ | Δ ] → 0.   (A74)
Combining (A74) with (66)–(68), we have
R^{lb_3}, Ř^{lb_3}, R̂^{lb_3} → K log( 1 + 1/D ) → C.   (A75)
When ρ → +∞, σ² → 0. Hence,

D → 1/( 2^{(C − H_{th})/(P_{th} K)} − 1 ),  R^{lb_3}, Ř^{lb_3}, R̂^{lb_3} → P_{th} K log( 1 + 1/D ) → C − H_{th}.   (A76)
When C → +∞, it can be found from (62) that D → 0. Then, from (66)–(68), it is known that R^{lb_3}, Ř^{lb_3}, and R̂^{lb_3} all approach constants, which can be obtained by setting D = 0 in (66)–(68), respectively. Lemma 7 is thus proven.

Appendix K. Proof of Theorem 5

As stated in Appendix A, U Λ U^H is the eigendecomposition of H H^H, and λ_t, t ∈ 𝒯 are the unordered positive eigenvalues of H H^H. To derive R^{lb_4}, we further denote the singular value decomposition of H by U L V^H, where V ∈ ℂ^{K×K} is a unitary matrix and L ∈ ℝ^{M×K} is a rectangular diagonal matrix. In fact, the diagonal entries of L are the nonnegative square roots of the positive eigenvalues of H H^H. Then, from (73), we have
F^H H = H^H ( H H^H + σ² I_M )^{−1} H = V L^H ( Λ + σ² I_M )^{−1} L V^H = V diag{ λ_1/(λ_1 + σ²), …, λ_T/(λ_T + σ²), 0_{K−T}^H } V^H,
F^H H H^H F = V L^H ( Λ + σ² I_M )^{−1} Λ ( Λ + σ² I_M )^{−1} L V^H = V diag{ λ_1²/(λ_1 + σ²)², …, λ_T²/(λ_T + σ²)², 0_{K−T}^H } V^H,
F^H F = V L^H ( Λ + σ² I_M )^{−2} L V^H = V diag{ λ_1/(λ_1 + σ²)², …, λ_T/(λ_T + σ²)², 0_{K−T}^H } V^H,   (A77)
where 0_{K−T} is a (K − T)-dimensional all-zero column vector. Based on (A77),

F^H H H^H F + σ² F^H F + D I_K = V diag{ λ_1/(λ_1 + σ²) + D, …, λ_T/(λ_T + σ²) + D, D · 1_{K−T}^H } V^H,   (A78)
where 1_{K−T} is a (K − T)-dimensional all-one column vector. Since Λ is independent of U, L is independent of U as well as of V, and the λ_t, t ∈ 𝒯 are unordered, we have

E[ log det( F^H H H^H F + σ² F^H F + D I_K ) ] = T E[ log( λ/(λ + σ²) + D ) ] + (K − T) log D.   (A79)
Then, we calculate G in (82). For this purpose, we have to calculate E[F^H H], E[F^H H H^H F], and E[F^H F]. To obtain these expectations, we consider two different cases, i.e., the case with K ≤ M and the case with K > M. When K ≤ M, from (A77), we have

E[F^H H] = E[ λ/(λ + σ²) ] I_K,  E[F^H H H^H F] = E[ λ²/(λ + σ²)² ] I_K,  E[F^H F] = E[ λ/(λ + σ²)² ] I_K.   (A80)
When K > M, denote V = (v_1, …, v_K). Then, from (A77),

F^H H = V diag{ λ_1/(λ_1 + σ²), …, λ_M/(λ_M + σ²), 0_{K−M}^H } V^H = ( (λ_1/(λ_1 + σ²)) v_1, …, (λ_M/(λ_M + σ²)) v_M, 0_K, …, 0_K ) (v_1, …, v_K)^H = Σ_{m=1}^{M} ( λ_m/(λ_m + σ²) ) v_m v_m^H.   (A81)
Since v_m is an eigenvector of matrix H^H H and is independent of the unordered eigenvalue λ_m, we have E[v_m v_m^H] = (1/K) I_K and hence

E[F^H H] = Σ_{m=1}^{M} E[ λ_m/(λ_m + σ²) ] (1/K) I_K = (M/K) E[ λ/(λ + σ²) ] I_K.   (A82)
Similarly, we also have
E[F^H H H^H F] = (M/K) E[ λ²/(λ + σ²)² ] I_K,  E[F^H F] = (M/K) E[ λ/(λ + σ²)² ] I_K.   (A83)
Using (A80), (A82), (A83), and (82), and noting that both cases are covered by the factor T/K with T = min{K, M}, G can be calculated as

G = E[F^H H H^H F] − E[F^H H] E[H^H F] + σ² E[F^H F] + D I_K = ( (T/K) E[ λ/(λ + σ²) ] − (T²/K²) ( E[ λ/(λ + σ²) ] )² + D ) I_K.   (A84)
Hence,
log det(G) = K log( (T/K) E[ λ/(λ + σ²) ] − (T²/K²) ( E[ λ/(λ + σ²) ] )² + D ).   (A85)
Substituting (A79) and (A85) into (80) and (81), respectively, and using (79), we can get (83).
We then calculate D in (84). From (77), (A80), and (A83),
E[ x̄ x̄^H ] = E[ F^H H H^H F + σ² F^H F ] = (T/K) E[ λ/(λ + σ²) ] I_K.   (A86)
I(x̄_g; z̄_g) in (78) can thus be calculated as follows:

I(x̄_g; z̄_g) = log det( I_K + E[ x̄ x̄^H ]/D ) = K log( 1 + (T/(DK)) E[ λ/(λ + σ²) ] ) = C,   (A87)
based on which (84) can be obtained. Theorem 5 is then proven.
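The closed forms (A82)–(A86) can be verified by Monte Carlo simulation of the MMSE filter F in (73); an illustrative check for a K > M configuration (so that T = M):

```python
import numpy as np

rng = np.random.default_rng(3)
K, M, sigma2, n = 3, 2, 0.5, 20000
T = min(K, M)

acc_FH = np.zeros((K, K), dtype=complex)
acc_ratio = 0.0
for _ in range(n):
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    F = np.linalg.solve(H @ H.conj().T + sigma2 * np.eye(M), H)  # (HH^H + s2 I)^{-1} H
    acc_FH += F.conj().T @ H                                     # sample of F^H H
    lam = np.linalg.eigvalsh(H @ H.conj().T)                     # the T positive eigenvalues
    acc_ratio += np.mean(lam / (lam + sigma2))

print(np.round(acc_FH / n, 3))     # ~ (T/K) E[lam/(lam+s2)] I_K, cf. (A82)
print(T / K * acc_ratio / n)       # matching scalar on the diagonal
```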

Appendix L. Proof of Lemma 9

When M → +∞, T = K. As stated in Appendix B, (1/M) H^H H − I_K → 0 almost surely, so λ/M → 1 and E[ λ/(λ + σ²) ] → 1. From (A87),

I(x̄_g; z̄_g) = K log( 1 + (1/D) E[ λ/(λ + σ²) ] ) = C ≈ K log( 1 + 1/D ).   (A88)
Combining (83) and (A88), we have
R^{lb_4} → K log(1 + D) − K log D = K log( 1 + 1/D ) → C.   (A89)
When K ≤ M and ρ → +∞, we have T = K and σ² → 0. Using (A87) and (83), we can again get (A88) and (A89).
When K ≤ M and C → +∞, it can be found from (84) that D → 0. Then, using (83), we can get (85). This finishes the proof.

Appendix M. Proof of Lemma 10

As shown in Lemmas 3 and 5, when C → +∞, R^{lb_1} approaches the capacity of Channel 1 while R^{lb_2} is upper bounded by the capacity of Channel 1. Hence,

R^{lb_1} ≥ R^{lb_2}.   (A90)
Moreover, as shown in (52), we quantize x̃ by adding the Gaussian noise vector q ∼ CN(0, D I_K) when event Δ happens and obtain its representation z. When C → +∞, it is known from (62) that D → 0. Hence, (x, z) → (x, x̃) in distribution, and it can be proven similarly to (A61) that

R^{lb_3} = P_{th} I(x; z | Δ) → P_{th} I(x; x̃ | Δ).   (A91)
Using (A39) and (A91), we have
R^{lb_1} → I(x; y, H) = h(x) − h(x | y, H) = h(x) − h(x | y, H, x̃) ≥ h(x) − h(x | x̃) = I(x; x̃) ≥ P_{th} I(x; x̃ | Δ) ≥ P_{th} I(x; z | Δ) = R^{lb_3}.   (A92)
Analogously, from (76) and (84), it is known that (x, z) → (x, x̄) in distribution when C → +∞. Hence,

R^{lb_1} → I(x; y, H) = h(x) − h(x | y, H) = h(x) − h(x | y, H, x̄) ≥ h(x) − h(x | x̄) = I(x; x̄) ≥ I(x; z) ≥ R^{lb_4},   (A93)
where x̄ is the MMSE estimate of x at the relay, given in (74). This completes the proof.

References

1. Tishby, N.; Pereira, F.C.; Bialek, W. The information bottleneck method. arXiv 2000, arXiv:physics/0004057.
2. Shwartz-Ziv, R.; Tishby, N. Opening the black box of deep neural networks via information. arXiv 2017, arXiv:1703.00810.
3. Alemi, A.A. Variational predictive information bottleneck. In Proceedings of the Symposium on Advances in Approximate Bayesian Inference, Vancouver, BC, Canada, 8 December 2019.
4. Mukherjee, S. Machine learning using the variational predictive information bottleneck with a validation set. arXiv 2019, arXiv:1911.02210.
5. Mukherjee, S. General information bottleneck objectives and their applications to machine learning. arXiv 2019, arXiv:1912.06248.
6. Strouse, D.; Schwab, D.J. The information bottleneck and geometric clustering. Neural Comput. 2019, 31, 596–612.
7. Painsky, A.; Tishby, N. Gaussian lower bound for the information bottleneck limit. J. Mach. Learn. Res. 2018, 18, 7908–7936.
8. Dobrushin, R.; Tsybakov, B. Information transmission with additional noise. IRE Trans. Inf. Theory 1962, 8, 293–304.
9. Witsenhausen, H.; Wyner, A. A conditional entropy bound for a pair of discrete random variables. IEEE Trans. Inf. Theory 1975, 21, 493–501.
10. Witsenhausen, H. Indirect rate distortion problems. IEEE Trans. Inf. Theory 1980, 26, 518–521.
11. Courtade, T.A.; Weissman, T. Multiterminal source coding under logarithmic loss. IEEE Trans. Inf. Theory 2014, 60, 740–761.
12. Aguerri, I.E.; Zaidi, A.; Caire, G.; Shitz, S.S. On the capacity of cloud radio access networks with oblivious relaying. IEEE Trans. Inf. Theory 2019, 65, 4575–4596.
13. Nazer, B.; Gastpar, M. Compute-and-forward: Harnessing interference through structured codes. IEEE Trans. Inf. Theory 2011, 57, 6463–6486.
14. Hong, S.N.; Caire, G. Compute-and-forward strategies for cooperative distributed antenna systems. IEEE Trans. Inf. Theory 2013, 59, 5227–5243.
15. Nazer, B.; Sanderovich, A.; Gastpar, M.; Shamai, S. Structured superposition for backhaul constrained cellular uplink. In Proceedings of the 2009 IEEE International Symposium on Information Theory, Seoul, Korea, 28 June–3 July 2009; pp. 1530–1534.
16. Simeone, O.; Erkip, E.; Shamai, S. On codebook information for interference relay channels with out-of-band relaying. IEEE Trans. Inf. Theory 2011, 57, 2880–2888.
17. Park, S.H.; Simeone, O.; Sahin, O.; Shamai, S. Robust and efficient distributed compression for cloud radio access networks. IEEE Trans. Veh. Technol. 2013, 62, 692–703.
18. Zhou, Y.; Xu, Y.; Yu, W.; Chen, J. On the optimal fronthaul compression and decoding strategies for uplink cloud radio access networks. IEEE Trans. Inf. Theory 2016, 62, 7402–7418.
19. Aguerri, I.E.; Zaidi, A. Lossy compression for compute-and-forward in limited backhaul uplink multicell processing. IEEE Trans. Commun. 2016, 64, 5227–5238.
20. Demel, J.; Monsees, T.; Bockelmann, C.; Wuebben, D.; Dekorsy, A. Cloud-RAN fronthaul rate reduction via IBM-based quantization for multicarrier systems. In Proceedings of the 24th International ITG Workshop on Smart Antennas, Hamburg, Germany, 18–20 February 2020; pp. 1–6.
21. Winkelbauer, A.; Matz, G. Rate-information-optimal Gaussian channel output compression. In Proceedings of the 48th Annual Conference on Information Sciences and Systems, Princeton, NJ, USA, 19–21 March 2014; pp. 1–5.
22. Winkelbauer, A.; Farthofer, S.; Matz, G. The rate-information trade-off for Gaussian vector channels. In Proceedings of the 2014 IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014; pp. 2849–2853.
23. Katz, A.; Peleg, M.; Shamai, S. Gaussian diamond primitive relay with oblivious processing. In Proceedings of the 2019 IEEE International Conference on Microwaves, Antennas, Communications and Electronic Systems (COMCAS), Tel-Aviv, Israel, 4–6 November 2019; pp. 1–6.
24. Estella Aguerri, I.; Zaidi, A. Distributed information bottleneck method for discrete and Gaussian sources. In Proceedings of the International Zurich Seminar on Information and Communication, Zurich, Switzerland, 21–23 February 2018; pp. 35–39.
25. Uğur, Y.; Aguerri, I.E.; Zaidi, A. Vector Gaussian CEO problem under logarithmic loss and applications. IEEE Trans. Inf. Theory 2020, 66, 4183–4202.
26. Caire, G.; Shamai, S.; Tulino, A.; Verdu, S.; Yapar, C. Information bottleneck for an oblivious relay with channel state information: The scalar case. In Proceedings of the 2018 IEEE International Conference on the Science of Electrical Engineering in Israel (ICSEE), Eilat, Israel, 12–14 December 2018; pp. 1–5.
27. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
28. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
29. Ratnarajah, T. Topics in Complex Random Matrices and Information Theory. Ph.D. Thesis, University of Ottawa, Ottawa, ON, Canada, 2003.
30. Edelman, A. Eigenvalues and condition numbers of random matrices. SIAM J. Matrix Anal. Appl. 1988, 9, 543–560.
31. Telatar, E. Capacity of multi-antenna Gaussian channels. Eur. Trans. Telecommun. 1999, 10, 585–595.
32. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011.
33. Tulino, A.M.; Verdú, S. Random Matrix Theory and Wireless Communications; Now Publishers: Delft, The Netherlands, 2004; pp. 1–182.
34. Lee, W.C. Estimate of channel capacity in Rayleigh fading environment. IEEE Trans. Veh. Technol. 1990, 39, 187–189.
35. Brennan, L.E.; Reed, I.S. An adaptive array signal processing algorithm for communications. IEEE Trans. Aerosp. Electron. Syst. 1982, AES-18, 124–130.
36. Csiszár, I. Arbitrarily varying channels with general alphabets and states. IEEE Trans. Inf. Theory 1992, 38, 1725–1742.
Figure 1. Block diagram of the considered information bottleneck (IB) problem.
Figure 2. Lower bound R^{lb_1} versus D with K = M = 4 and C = 40 bits/complex dimension.
Figure 3. H_{joint} and H_{sum} versus M with K = 2.
Figure 4. H_{joint} and H_{sum} versus K with B = 2 bits and different values of M.
Figure 5. Lower bound R^{lb_3} versus λ_{th} for the K = M case with C = 40 bits/complex dimension.
Figure 6. Lower bound R^{lb_3} versus λ_{th} for the K < M case with K = 4 and C = 40 bits/complex dimension.
Figure 7. R^{lb_3}, Ř^{lb_3}, and R̂^{lb_3} versus M with K = 4 and C = 40 bits/complex dimension.
Figure 8. R^{lb_3}, Ř^{lb_3}, and R̂^{lb_3} versus ρ with K = 4 and C = 40 bits/complex dimension.
Figure 9. Upper and lower bounds to the bottleneck rate versus ρ with K = M = 2 and C = 40 bits/complex dimension.
Figure 10. Upper and lower bounds to the bottleneck rate versus ρ with K = M = 4 and C = 40 bits/complex dimension.
Figure 11. Upper and lower bounds to the bottleneck rate versus C with K = M = 2 and ρ = 40 dB.
Figure 12. Upper and lower bounds to the bottleneck rate versus C with K = M = 4 and ρ = 40 dB.
Figure 13. Upper and lower bounds to the bottleneck rate versus M with K = 2, ρ = 10 dB, and C = 40 bits/complex dimension.
Figure 14. Upper and lower bounds to the bottleneck rate versus M with K = 2, ρ = 40 dB, and C = 40 bits/complex dimension.
Figure 15. Upper and lower bounds to the bottleneck rate versus K or M with K = M, ρ = 40 dB, and C = 50 bits/complex dimension.
Figure 16. Upper and lower bounds to the bottleneck rate versus K or M with K = M, ρ = 40 dB, and C = 8K bits/complex dimension.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
