
Capacity Bounds on the Downlink of Symmetric, Multi-Relay, Single-Receiver C-RAN Networks

Shirin Saeedi Bidokhti 1,*, Gerhard Kramer 2 and Shlomo Shamai 3
1 Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA
2 Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany
3 Department of Electrical Engineering, Technion, Haifa 3200003, Israel
* Author to whom correspondence should be addressed.
Entropy 2017, 19(11), 610; https://doi.org/10.3390/e19110610
Submission received: 17 July 2017 / Revised: 6 November 2017 / Accepted: 6 November 2017 / Published: 14 November 2017
(This article belongs to the Special Issue Network Information Theory)

Abstract

The downlink of symmetric Cloud Radio Access Networks (C-RANs) with multiple relays and a single receiver is studied. Lower and upper bounds are derived on the capacity. The lower bound is achieved by Marton’s coding, which facilitates dependence among the multiple-access channel inputs. The upper bound uses Ozarow’s technique to augment the system with an auxiliary random variable. The bounds are studied over scalar Gaussian C-RANs and are shown to meet and characterize the capacity for interesting regimes of operation.

1. Introduction

Cloud Radio Access Networks (C-RANs) are expected to be a part of future mobile network architectures. In C-RANs, information processing is done in a cloud-based central unit that is connected to remote radio heads (or relays) by rate-limited fronthaul links. C-RANs improve energy and bandwidth efficiency and reduce the complexity of relays by facilitating centralized information processing and cooperative communication. We refer to [1,2,3] and the references therein for an overview of the challenges and coding techniques for C-RANs. Several coding schemes have been proposed in recent years for the downlink of C-RANs including message sharing [4], backhaul compression [5], hybrid schemes [6] and generalized data sharing using Marton’s coding [7,8]. The paper [9] gives an upper bound on the sum-rate of two-relay C-RANs with two users and numerically compares the performance of the aforementioned schemes with the upper bound.
We consider the downlink of a C-RAN with multiple relays and a single receiver. This network may be modeled as an M-relay diamond network whose broadcast component consists of rate-limited links and whose multiple-access component is a memoryless Multiple Access Channel (MAC); see Figure 1. The capacity of this class of networks is not known in general, but lower and upper bounds were derived in [10,11,12] for two-relay networks. Moreover, the capacity was found for binary adder MACs [12] and for certain regimes of operation in Gaussian MACs [11,12]. In this work, we derive lower and upper bounds for C-RANs with multiple relays and find the capacity in interesting regimes of operation for symmetric Gaussian C-RANs.
This paper is organized as follows. Section 2 introduces the notation and the problem setup. In Section 3, we propose a coding strategy based on Marton's coding and discuss simplifications for symmetric networks. In Section 4, we generalize the bounding technique in [11,12]. The case of symmetric Gaussian C-RANs is studied in Section 5, where we compute lower and upper bounds on the capacity and show that they meet in regimes of operation characterized in terms of the power, the number of relays and the broadcast link capacities.

2. Preliminaries and Problem Setup

2.1. Notation

Random variables are denoted by uppercase letters, e.g., X; their realizations are denoted by lowercase letters, e.g., x; and their probabilities are denoted by $p_X(x)$ or $p(x)$. The probability mass function (pmf) describing X is denoted by $p_X$. $T_\epsilon^n(X)$ denotes the set of sequences that are $\epsilon$-typical with respect to $p_X$ ([13], p. 25). When $p_X$ is clear from the context, we write $T_\epsilon^n$. The entropy of X is denoted by $H(X)$; the conditional entropy of X given Y is denoted by $H(X|Y)$; and the mutual information of X and Y is denoted by $I(X;Y)$. Similarly, differential entropies and conditional differential entropies are denoted by $h(X)$ and $h(X|Y)$.
Matrices are denoted by bold letters, e.g., $\mathbf{K}$. We denote the entry of matrix $\mathbf{K}$ in row i and column j by $K_{ij}$. Sets are denoted by script letters, e.g., $\mathcal{S}$. The Cartesian product of $\mathcal{S}_1$ and $\mathcal{S}_2$ is denoted by $\mathcal{S}_1 \times \mathcal{S}_2$, and the n-fold Cartesian product of $\mathcal{S}$ is denoted by $\mathcal{S}^n$. The cardinality of $\mathcal{S}$ is denoted by $|\mathcal{S}|$.
Given the set $\mathcal{S} = \{s_1, \ldots, s_k\}$, $X_{\mathcal{S}}$ denotes the tuple $(X_{s_1}, \ldots, X_{s_k})$. We define $X^n = (X_1, \ldots, X_n)$ and (see [14], Equation (74)):

$$I(X_{\mathcal{S}}) = \sum_{m \in \mathcal{S}} H(X_m) - H(X_{\mathcal{S}}). \quad (1)$$

For example, when $\mathcal{S} = \{s_1, s_2\}$, (1) becomes the mutual information $I(X_{s_1}; X_{s_2})$. The conditional version of (1), $I(X_{\mathcal{S}}|U)$, is defined similarly by conditioning all terms in (1) on U. Note that $I(X_{\mathcal{S}}|U)$ is non-negative.
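For three variables, (1) expands into a sum of ordinary mutual informations. The following worked expansion is our illustration (it also previews the chain-rule identity (A12) used in Appendix B):

```latex
% Instance of (1) for S = {1,2,3}:
\begin{align*}
I(X_{\{1,2,3\}}) &= H(X_1) + H(X_2) + H(X_3) - H(X_1 X_2 X_3) \\
                 &= I(X_1; X_2) + I(X_1 X_2; X_3).
\end{align*}
```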

2.2. Model

Consider the C-RAN in Figure 1, where a source communicates a message W of $nR$ bits to a sink with the help of M relays. Let $\mathcal{M} = \{1, \ldots, M\}$ be the set of relays. The source encodes W into descriptions $V_1, \ldots, V_M$ that are provided to relays $1, \ldots, M$, respectively, where $V_m$ satisfies:

$$H(V_m) \le nC_m, \quad m = 1, \ldots, M. \quad (2)$$

Each relay m, $m = 1, \ldots, M$, maps its description $V_m$ into a sequence $X_m^n$, which is sent over a multiple access channel. The MAC is characterized by the input alphabets $\mathcal{X}_1, \ldots, \mathcal{X}_M$, the output alphabet $\mathcal{Y}$ and the transition probabilities $p(y|x_1, \ldots, x_M)$ for all $(x_1, \ldots, x_M, y) \in \mathcal{X}_1 \times \cdots \times \mathcal{X}_M \times \mathcal{Y}$. From the received sequence $Y^n$, the sink decodes an estimate $\hat{W}$ of W.
A coding scheme consists of an encoder, M relay mappings and a decoder, and it is said to achieve the rate R if, by choosing n sufficiently large, we can make the error probability $\Pr(\hat{W} \ne W)$ as small as desired. We are interested in characterizing the largest achievable rate R. We refer to the supremum of achievable rates as the capacity $C(M)$ of the network.
In this work, we focus on symmetric networks.
Definition 1.
The network in Figure 1 is symmetric if we have:

$$C_1 = \cdots = C_M =: C \quad (3)$$
$$\mathcal{X}_1 = \cdots = \mathcal{X}_M =: \mathcal{X} \quad (4)$$

and:

$$p_{Y|X_1 \cdots X_M}(y|x_1, \ldots, x_M) = p_{Y|X_1 \cdots X_M}(y|x'_1, \ldots, x'_M) \quad (5)$$

for all $y \in \mathcal{Y}$, $(x_1, \ldots, x_M) \in \mathcal{X}^M$ and any of its permutations $(x'_1, \ldots, x'_M)$.
When the MAC is Gaussian, the input and output alphabets are the set of real numbers, and the output is given by:

$$Y = \sum_{m=1}^{M} X_m + Z \quad (6)$$

where Z is Gaussian noise with zero mean and unit variance. We consider average block power constraints $P_1, \ldots, P_M$:

$$\frac{1}{n} \sum_{i=1}^{n} E(X_{m,i}^2) \le P_m, \quad m = 1, \ldots, M. \quad (7)$$

The Gaussian C-RAN is symmetric if $C_m = C$ and $P_m = P$ for all $m = 1, \ldots, M$.

3. A Lower Bound

We outline an achievable scheme based on Marton’s coding. We remark that this scheme can be improved for certain regimes of C by using superposition coding (e.g., see ([15], Theorem 2) and ([11], Theorem 2)).
Fix the pmf $p(x_1, \ldots, x_M)$, $\epsilon > 0$, and the auxiliary rates $R_m, R'_m$, $m = 1, \ldots, M$, such that:

$$R_m, R'_m \ge 0 \quad (8)$$
$$R_m + R'_m \le C_m. \quad (9)$$

3.1. Codebook Construction

Set:

$$R = \sum_{m=1}^{M} R_m. \quad (10)$$

For every $m = 1, \ldots, M$, generate $2^{n(R_m + R'_m)}$ sequences $x_m^n(w_m, w'_m)$, $w_m = 1, \ldots, 2^{nR_m}$, $w'_m = 1, \ldots, 2^{nR'_m}$, with entries drawn i.i.d. according to $p_{X_m}$, independently across $m = 1, \ldots, M$. For each bin index $(w_1, \ldots, w_M)$, pick a jointly typical sequence tuple:

$$(x_1^n(w_1, w'_1), \ldots, x_M^n(w_M, w'_M)) \in T_\epsilon^n. \quad (11)$$

3.2. Encoding

Represent the message w as a tuple $(w_1, \ldots, w_M)$, and send $(w_m, w'_m)$ to relay m, $m = 1, \ldots, M$.

3.3. Relay Mapping at Relay m, m = 1, …, M

Relay m sends $X_m^n(w_m, w'_m)$ over the MAC.

3.4. Decoding

Upon receiving $y^n$, the receiver looks for indices $\hat{w}_1, \ldots, \hat{w}_M$ for which the following joint typicality test holds for some $\hat{w}'_1, \ldots, \hat{w}'_M$:

$$(x_1^n(\hat{w}_1, \hat{w}'_1), \ldots, x_M^n(\hat{w}_M, \hat{w}'_M), y^n) \in T_\epsilon^n. \quad (12)$$

We show in Appendix A that the above scheme has a vanishing error probability as $n \to \infty$ if, in addition to (8)–(10), we have:

$$\sum_{m \in \mathcal{S}} R'_m \ge I(X_{\mathcal{S}}), \quad \mathcal{S} \subseteq \mathcal{M} \quad (13)$$
$$\sum_{m \in \mathcal{S}} (R_m + R'_m) \le I(X_{\mathcal{S}}; Y | X_{\bar{\mathcal{S}}}) + I(X_{\mathcal{M}}) - I(X_{\bar{\mathcal{S}}}), \quad \mathcal{S} \subseteq \mathcal{M} \quad (14)$$

where $\bar{\mathcal{S}} = \mathcal{M} \setminus \mathcal{S}$.
One can use Fourier–Motzkin elimination to eliminate $R_m, R'_m$, $m = 1, \ldots, M$, from (8)–(10), (13) and (14), and thus characterize the set of achievable rates R. For symmetric networks (see Definition 1), we bypass this step and proceed by choosing $p_{X_{\mathcal{M}}}$ to be "symmetric".
We say $p_{X_{\mathcal{M}}}$ is symmetric if:

$$\mathcal{X}_1 = \cdots = \mathcal{X}_M = \mathcal{X} \quad (15)$$

and, for all subsets $\mathcal{S}, \mathcal{S}' \subseteq \mathcal{M}$ with $|\mathcal{S}| = |\mathcal{S}'|$, we have:

$$p_{X_{\mathcal{S}}}(x_1, \ldots, x_{|\mathcal{S}|}) = p_{X_{\mathcal{S}'}}(x_1, \ldots, x_{|\mathcal{S}|}), \quad \forall (x_1, \ldots, x_{|\mathcal{S}|}) \in \mathcal{X}^{|\mathcal{S}|}. \quad (16)$$
We simplify the problem defined by (8)–(10), (13), (14) for symmetric distributions and prove the following result in Appendix B.
Theorem 1.
For symmetric C-RAN downlinks, the rate R is achievable if:

$$R \le MC - I(X_{\mathcal{M}}) \quad (17)$$
$$R \le I(X_{\mathcal{M}}; Y) \quad (18)$$

for some symmetric distribution $p_{X_{\mathcal{M}}}$.

4. An Upper Bound

Our upper bound is motivated by [11,12,14,16].
Theorem 2.
The capacity $C(M)$ is upper bounded by:

$$C(M) \le \max_{p(x_{\mathcal{M}})} \; \min_{p(u|x_{\mathcal{M}}, y) = p(u|y)} \; \max_{p(q|x_{\mathcal{M}}, u, y) = p(q|x_{\mathcal{M}})} \; \min \left\{ MC - (M-1) H(U|Q) + \sum_{m=1}^{M} H(U|X_m Q) - H(U|X_{\mathcal{M}}), \; \min_{\mathcal{S} \subseteq \mathcal{M}} \left[ |\mathcal{S}| C + I(X_{\bar{\mathcal{S}}}; Y | X_{\mathcal{S}} Q) \right] \right\} \quad (19)$$

where $Q - X_{\mathcal{M}} - Y - U$ forms a Markov chain. Moreover, the alphabet size of Q may be chosen to satisfy $|\mathcal{Q}| \le \prod_{i=1}^{M} |\mathcal{X}_i| + 2^M - 1$.
Remark 1.
For M = 2 , Theorem 2 reduces to ([12], Theorem 3).
Proof outline.
We start with the following n-letter upper bounds (see Appendix C):

$$nR \le nMC - I(X_{\mathcal{M}}^n) \quad (20)$$
$$nR \le n|\mathcal{S}|C + I(X_{\mathcal{M}}^n; Y^n | X_{\mathcal{S}}^n), \quad \mathcal{S} \subseteq \mathcal{M}. \quad (21)$$
For any sequence $U^n$, we have:

$$I(X_{\mathcal{M}}^n) \ge I(X_{\mathcal{M}}^n) - I(X_{\mathcal{M}}^n | U^n) = \sum_{m=1}^{M} I(X_m^n; U^n) - I(X_{\mathcal{M}}^n; U^n) = (M-1) H(U^n) - \sum_{m=1}^{M} H(U^n | X_m^n) + H(U^n | X_{\mathcal{M}}^n). \quad (22)$$
By substituting (22) into (20), we obtain:

$$nR \le nMC - (M-1) H(U^n) + \sum_{m=1}^{M} H(U^n | X_m^n) - H(U^n | X_{\mathcal{M}}^n). \quad (23)$$
We now choose $U_i$, $i = 1, \ldots, n$, to be the output of a memoryless channel $p_{U|Y}(u_i | y_i)$ with input $Y_i$. The auxiliary channel $p_{U|Y}(\cdot|\cdot)$ will be optimized later. With this choice, we single-letterize (21) and (23) and obtain:

$$R \le MC - (M-1) H(U|Q) + \sum_{m=1}^{M} H(U|X_m Q) - H(U|X_{\mathcal{M}}) \quad (24)$$
$$R \le |\mathcal{S}| C + I(X_{\bar{\mathcal{S}}}; Y | X_{\mathcal{S}} Q), \quad \mathcal{S} \subseteq \mathcal{M} \quad (25)$$

where $Q - X_{\mathcal{M}} - Y - U$ forms a Markov chain. ☐

5. The Symmetric Gaussian C-RAN

We specialize Theorem 1 to the symmetric Gaussian C-RAN defined in (6) and (7), where $P_m = P$ for all $m = 1, \ldots, M$. Choose $(X_1, \ldots, X_M)$ to be jointly Gaussian with the covariance matrix:

$$\mathbf{K}_M(\rho) = \begin{pmatrix} P & \rho P & \cdots & \rho P \\ \rho P & P & \cdots & \rho P \\ \vdots & \vdots & \ddots & \vdots \\ \rho P & \rho P & \cdots & P \end{pmatrix}. \quad (26)$$
Remark 2.
Choosing $(X_1, \ldots, X_M)$ to be jointly Gaussian (and/or symmetric) is not necessarily optimal in (13) and (14), but it gives a lower bound on the capacity.
Theorem 3.
The rate R is achievable if it satisfies the following constraints for some non-negative parameter ρ, $0 \le \rho \le 1$:

$$R \le MC - \frac{1}{2} \log \frac{P^M}{\det(\mathbf{K}_M(\rho))} \quad (27)$$
$$R \le \frac{1}{2} \log \left( 1 + PM(1 + (M-1)\rho) \right). \quad (28)$$
Remark 3.
One can recursively calculate $\det(\mathbf{K}_M(\rho))$:

$$\det(\mathbf{K}_M(\rho)) = P^M \left( 1 - \rho^2 \sum_{i=1}^{M-1} i (1 - \rho)^{i-1} \right) = P^M (1 - \rho)^{M-1} (1 + (M-1)\rho). \quad (29)$$
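The closed form in (29) is easy to check symbolically. The following sketch (our addition, using sympy; not part of the original analysis) verifies it for one fixed matrix size:

```python
import sympy as sp

M = 5  # any fixed size of interest; rerun with other values of M
P, rho = sp.symbols('P rho', positive=True)

# The M x M symmetric covariance matrix K_M(rho) of (26):
# P on the diagonal, rho*P off the diagonal.
K = sp.Matrix(M, M, lambda i, j: P if i == j else rho * P)

# Compare det(K) with the closed form on the right of (29).
closed_form = P**M * (1 - rho)**(M - 1) * (1 + (M - 1) * rho)
assert sp.simplify(K.det() - closed_form) == 0
```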
Let $R^\star$ be the maximum achievable rate given by (27) and (28). The RHS of (27) is non-increasing in ρ, and the RHS of (28) is increasing in ρ. Therefore, we have the following two cases for the optimizing solution $\rho^\star$ (a numerical sketch of this computation follows the case analysis):
  • If $MC \le \frac{1}{2} \log (1 + PM)$, then:
    $$\rho^\star = 0 \quad \text{and} \quad R^\star = MC. \quad (30)$$
  • Otherwise, $\rho^\star$ is the unique solution of ρ in:
    $$\frac{1}{2} \log \left( 1 + PM(1 + (M-1)\rho) \right) = MC - \frac{1}{2} \log \frac{1}{(1 - \rho)^{M-1} (1 + (M-1)\rho)} \quad (31)$$
    and we have:
    $$R^\star = \frac{1}{2} \log \left( 1 + PM(1 + (M-1)\rho^\star) \right). \quad (32)$$
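The sketch below (ours; the function name and the base-2 logarithm convention are our choices) carries out this case analysis, solving (31) by bisection, which applies because the left side of (31) is increasing in ρ while the right side is decreasing:

```python
import math

def theorem3_rate(M, P, C):
    """Return (rho_star, R_star) for the Theorem 3 lower bound."""
    if M * C <= 0.5 * math.log2(1 + P * M):
        return 0.0, M * C  # case (30): no correlation among the inputs is needed
    # Case (31): gap(rho) = LHS - RHS of (31) is increasing in rho and
    # changes sign on (0, 1), so bisection finds the unique root.
    def gap(rho):
        lhs = 0.5 * math.log2(1 + P * M * (1 + (M - 1) * rho))
        det_norm = (1 - rho) ** (M - 1) * (1 + (M - 1) * rho)  # det K_M(rho)/P^M, see (29)
        return lhs - (M * C - 0.5 * math.log2(1 / det_norm))
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if gap(mid) >= 0 else (mid, hi)
    rho_star = 0.5 * (lo + hi)
    return rho_star, 0.5 * math.log2(1 + P * M * (1 + (M - 1) * rho_star))
```

For example, theorem3_rate(3, 1.0, 0.5) evaluates the Theorem 3 curve of Figure 2 at C = 0.5.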
We next specialize Theorem 2 to symmetric Gaussian C-RANs.
Theorem 4.
The rate R is achievable only if there exists ρ, $0 \le \rho \le 1$, such that the following inequalities hold for all $N \ge 0$:

$$R \le MC - (M-1) \frac{1}{2} \log \left( 2^{2R} + N \right) - \frac{1}{2} \log (1 + N) + \frac{M}{2} \log \left( 1 + N + P(M-1)(1-\rho)(1+(M-1)\rho) \right) \quad (33)$$
$$R \le \frac{1}{2} \log \left( 1 + PM(1 + (M-1)\rho) \right). \quad (34)$$
Proof. 
Set:

$$U_i = Y_i + Z_{N,i}, \quad i = 1, \ldots, n \quad (35)$$

where $\{Z_{N,i}\}_{i=1}^n$ are identically distributed according to the normal distribution $\mathcal{N}(0, N)$ and are independent of each other and of $X_1^n, \ldots, X_M^n$. The variance N is to be optimized.
In order to find a computable upper bound in (19), we need to lower bound $h(U|Q)$. Recall that U is a noisy version of Y. We thus use the conditional entropy power inequality ([17], Theorem 17.7.3):

$$h(U|Q) \ge \frac{1}{2} \log \left( 2^{2h(Y|Q)} + 2\pi e N \right) \ge \frac{1}{2} \log \left( 2\pi e \left( 2^{2R} + N \right) \right) \quad (36)$$

where the second inequality holds because $R \le I(X_{\mathcal{M}}; Y | Q) \le h(Y|Q) - \frac{1}{2} \log (2\pi e)$, i.e., $2^{2h(Y|Q)} \ge 2\pi e \, 2^{2R}$.
Substituting (36) into the first constraint of (19), we obtain:

$$R \le MC - (M-1) \frac{1}{2} \log \left( 2\pi e \left( 2^{2R} + N \right) \right) + \sum_{m=1}^{M} h(U|X_m Q) - h(U|X_{\mathcal{M}}) \le MC - (M-1) \frac{1}{2} \log \left( 2\pi e \left( 2^{2R} + N \right) \right) + \sum_{m=1}^{M} h(U|X_m) - h(U|X_{\mathcal{M}}). \quad (37)$$

Now, consider the second term in (19) with $\mathcal{S} = \emptyset$:

$$R \le I(X_{\mathcal{M}}; Y | Q) \le I(X_{\mathcal{M}}; Y) = h(Y) - h(Y|X_{\mathcal{M}}). \quad (38)$$
Note that the RHSs of (37) and (38) are both concave in $p(x_{\mathcal{M}})$ and symmetric with respect to $X_1, \ldots, X_M$. Therefore, a symmetric $p(x_{\mathcal{M}})$ maximizes them. Let $\mathbf{K}$ denote the covariance matrix of an optimal symmetric solution. We have:

$$K_{ii} = P, \quad K_{ij} = \rho P, \quad i \ne j. \quad (39)$$

Using the conditional version of the maximum entropy lemma [18], we can upper bound the differential entropies that appear with a positive sign in (37) and (38) by their Gaussian counterparts, and $h(U|X_{\mathcal{M}})$ and $h(Y|X_{\mathcal{M}})$ can be written explicitly because the channels from $X_{\mathcal{M}}$ to U and Y are Gaussian. We obtain (33) and (34). Note that the RHSs of both bounds are increasing in P, and therefore there is no loss of generality in choosing $K_{ii} = P$ (among $K_{ii} \le P$). ☐
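To make the Gaussian evaluation behind the last term of (33) explicit (a step we spell out here; it is implicit in the proof above), compute the conditional variance of $U = \sum_j X_j + Z + Z_N$ given $X_m$ under the symmetric covariance (39):

```latex
\begin{align*}
\mathrm{Var}(U \mid X_m)
  &= 1 + N + (M-1)P\bigl(1+(M-2)\rho\bigr) - \frac{\bigl((M-1)\rho P\bigr)^2}{P} \\
  &= 1 + N + P(M-1)(1-\rho)\bigl(1+(M-1)\rho\bigr).
\end{align*}
```

The maximum entropy lemma then gives $h(U|X_m) \le \frac{1}{2} \log \left( 2\pi e \, \mathrm{Var}(U|X_m) \right)$, which is the source of the $\frac{M}{2} \log(\cdot)$ term in (33).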
The upper bound of Theorem 4 and the lower bound of Theorem 3 are plotted in Figure 2 for M = 3 and P = 1 , and they are compared with the lower bounds of message sharing [4] and compression [5]. One sees that our lower and upper bounds are close, and they match over a wide range of C. Moreover, establishing partial cooperation among the relays through Marton’s coding offers significant gains. Figure 3 plots the capacity bounds for P = 1 and different values of M.
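Curves like those in Figure 2 can be reproduced numerically. The sketch below (our addition; the grid resolutions are illustrative, and a finite ρ-grid only approximates the maximization in Theorem 4) evaluates the upper bound by bisecting the implicit constraint (33) and taking the worst N on a grid:

```python
import numpy as np

def theorem4_upper_bound(M, P, C,
                         rhos=np.linspace(0.0, 1.0, 201),
                         Ns=np.logspace(-3, 3, 121)):
    """R is achievable only if some rho satisfies (33) and (34) for all
    N >= 0, so the bound is max over rho of min over N of the allowed rate."""
    def r33(rho, N):
        # Largest R satisfying (33); the RHS is strictly decreasing in R,
        # so the feasible rates form an interval and bisection applies.
        s = 1 + N + P * (M - 1) * (1 - rho) * (1 + (M - 1) * rho)
        def rhs(R):
            return (M * C
                    - 0.5 * (M - 1) * np.log2(2.0 ** (2 * R) + N)
                    - 0.5 * np.log2(1 + N)
                    + 0.5 * M * np.log2(s))
        lo, hi = 0.0, M * C + 0.5 * M * np.log2(s)  # rhs(R) <= hi for all R >= 0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if mid <= rhs(mid) else (lo, mid)
        return lo

    best = 0.0
    for rho in rhos:
        r34 = 0.5 * np.log2(1 + P * M * (1 + (M - 1) * rho))
        best = max(best, min(min(r33(rho, N) for N in Ns), r34))
    return best
```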
We next compare the lower and upper bounds. We define:

$$C_C = \frac{1}{2M} \log (1 + PM) \quad (40)$$
$$C_L = \frac{1}{2M} \log \frac{1 + \frac{M^2}{2} P}{\left( \frac{M}{2(M-1)} \right)^{M-1} \frac{M}{2}} \quad (41)$$
$$C_U = \frac{1}{2M} \log \frac{1 + MP(1 + (M-1)\rho^{(2)})}{(1 - \rho^{(2)})^{M-1} (1 + (M-1)\rho^{(2)})} \quad (42)$$

where:

$$\rho^{(2)} = \frac{M - 2 - \frac{1}{P} + \sqrt{\left( M - 2 - \frac{1}{P} \right)^2 + 4(M-1)}}{2(M-1)}. \quad (43)$$
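A small helper (ours; it assumes M ≥ 2 and finite P > 0) evaluates these thresholds, e.g., to locate the matching regimes of Theorem 5 below on the horizontal axes of Figures 2 and 3:

```python
import math

def theorem5_thresholds(M, P):
    """Evaluate C_C, C_L, C_U of (40)-(42) and rho^(2) of (43)."""
    C_C = math.log2(1 + P * M) / (2 * M)
    C_L = math.log2((1 + 0.5 * M * M * P)
                    / ((M / (2 * (M - 1))) ** (M - 1) * (M / 2))) / (2 * M)
    a = M - 2 - 1 / P  # recurring term in (43)
    rho2 = (a + math.sqrt(a * a + 4 * (M - 1))) / (2 * (M - 1))
    C_U = math.log2((1 + M * P * (1 + (M - 1) * rho2))
                    / ((1 - rho2) ** (M - 1) * (1 + (M - 1) * rho2))) / (2 * M)
    return C_C, C_L, C_U, rho2
```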
Theorem 5.
The lower bound of Theorem 3 matches the upper bound of Theorem 4 if:

$$C \le C_C \quad (44)$$

or if:

$$C_L \le C \le C_U \quad (45)$$

where $C_C$, $C_L$ and $C_U$ are defined in (40)–(42).
Remark 4.
Theorem 5 recovers ([12], Theorem 5) for $M = 2$.
Remark 5.
For $C \le C_C$, no cooperation is needed among the transmitters, and the capacity is equal to $MC$.
Remark 6.
For C large enough, full cooperation is possible through superposition coding, and the capacity is:

$$C(M) = \frac{1}{2} \log \left( 1 + M^2 P \right), \quad C \ge C_{\text{coop}} \quad (46)$$

where:

$$C_{\text{coop}} = \frac{1}{2} \log \left( 1 + M^2 P \right). \quad (47)$$

The rate (46) is not achievable by Theorem 3 except in the limit $C \to \infty$. This rate is achievable by message sharing.
Proof of Theorem 5.
To find regimes of P and C for which the lower and upper bounds match, we mimic the analysis in ([12], Appendix F). Consider the lower bound in Theorem 3 and in particular its maximum achievable rate $R^\star$, which is attained by $\rho^\star$; see (30)–(32). If (44) holds, we have $\rho^\star = 0$ and $R^\star = MC$, and thus the cut-set bound is achieved. Otherwise, we proceed as follows.
Consider (33). Since $R \le R^\star$ and using the definition of $R^\star$ in (32), we can further upper bound (33) and obtain:

$$R \le MC - (M-1) \frac{1}{2} \log \left( 1 + N + PM(1 + (M-1)\rho^\star) \right) + \frac{M}{2} \log \left( 1 + N + P(M-1)(1-\rho)(1+(M-1)\rho) \right) - \frac{1}{2} \log (1 + N). \quad (48)$$
Call the RHS of (48) $g_1(\rho)$ and the RHS of (34) $g_2(\rho)$. Fix N as a function of $\rho^\star$ such that:

$$I_{G,\rho^\star}(X_{\mathcal{M}} | U) = 0 \quad (49)$$

where $I_{G,\rho^\star}(X_{\mathcal{M}} | U)$ is $I(X_{\mathcal{M}} | U)$ evaluated for a fully-symmetric Gaussian distribution with correlation coefficient $\rho^\star$. One can verify that the following choice of N satisfies (49):

$$N = \frac{P(1 - \rho^\star)(1 + (M-1)\rho^\star)}{\rho^\star} - 1. \quad (50)$$
The right inequality in (45) ensures $N \ge 0$.
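To verify (50) (a computation we add for completeness): since the variables are jointly Gaussian, $I_{G,\rho^\star}(X_{\mathcal{M}} | U) = 0$ holds exactly when the conditional covariance of $X_{\mathcal{M}}$ given U is diagonal. With $\mathrm{Var}(U) = 1 + N + MP(1+(M-1)\rho^\star)$ and $\mathrm{Cov}(X_m, U) = P(1+(M-1)\rho^\star)$, the off-diagonal entries vanish iff:

```latex
\rho^\star P = \frac{\bigl(P(1+(M-1)\rho^\star)\bigr)^2}{1 + N + MP(1+(M-1)\rho^\star)}
\;\Longleftrightarrow\;
N = \frac{P(1-\rho^\star)(1+(M-1)\rho^\star)}{\rho^\star} - 1,
```

which is (50).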
With this choice of N, $g_1(\rho)$ is exactly equal to:

$$MC - I_{G,\rho^\star}(X_{\mathcal{M}}) \quad (51)$$

at $\rho = \rho^\star$. Note that $\rho^\star$ is defined by (31), and thus $g_1(\rho)$ crosses $g_2(\rho)$ at $\rho^\star$. Since $g_2(\rho)$ is increasing in ρ, the maximum rate admitted by (34) and (48) matches $R^\star$ if $g_1(\rho)$ is non-increasing for $\rho \ge \rho^\star$. This is ensured by the left inequality in (45). ☐

6. Concluding Remarks

We studied the downlink of symmetric C-RANs with multiple relays and a single receiver and established lower and upper bounds on the capacity. The lower bound uses Marton’s coding to establish partial cooperation among the relays and improves on schemes that are based on message sharing and compression for scalar Gaussian C-RANs (see Figure 2). The upper bound generalizes the bounding techniques of [11,12]. When specialized to symmetric Gaussian C-RANs, the lower and upper bounds meet over a wide range of C, and this range gets large as M and/or P get large.
Future directions include generalizing the techniques to address C-RANs with multiple receivers (e.g., [5]), secrecy constraints at the receivers (e.g., [19]) and secrecy constraints at individual relays (e.g., [20]).

Acknowledgments

The work of S. Saeedi Bidokhti was supported by the Swiss National Science Foundation fellowship No. 158487. The work of G. Kramer was supported by the German Federal Ministry of Education and Research in the framework of the Alexander von Humboldt-Professorship. The work of S. Shamai was supported by the European Union’s Horizon 2020 Research and Innovation Program, Grant Agreement No. 694630.

Author Contributions

All authors conceived the problem and solution. S. Saeedi Bidokhti wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Analysis of the Achievable Scheme

The scheme fails only if one of the following events occurs:
  • (11) does not hold for any index tuple $(w'_1, \ldots, w'_M)$. This event has a vanishing probability as $n \to \infty$ ([13], Lemma 8.2) if we have (13);
  • (12) does not hold for the original indices $(w_1, w'_1, \ldots, w_M, w'_M)$. This event has a vanishing probability as $n \to \infty$ by (11) and the law of large numbers;
  • (12) holds for indices $(\tilde{w}_1, \tilde{w}'_1, \ldots, \tilde{w}_M, \tilde{w}'_M)$ where $(\tilde{w}_1, \ldots, \tilde{w}_M) \ne (w_1, \ldots, w_M)$. We show that this event has a vanishing probability as $n \to \infty$ if we have (14). Since the codebook is symmetric with respect to all messages, we assume without loss of generality that $w_1 = \cdots = w_M = 1$ and $w'_1 = \cdots = w'_M = 1$. Fix the (disjoint) sets $\mathcal{S}_1, \mathcal{S}_2, \mathcal{S}_3 \subseteq \mathcal{M}$ such that $\mathcal{S}_1 \cup \mathcal{S}_2 \cup \mathcal{S}_3 = \mathcal{M}$. Consider the case where:
    $$\tilde{w}_i = 1, \; \tilde{w}'_i = 1 \quad \forall i \in \mathcal{S}_1; \qquad \tilde{w}_i = 1, \; \tilde{w}'_i \ne 1 \quad \forall i \in \mathcal{S}_2; \qquad \tilde{w}_i \ne 1 \quad \forall i \in \mathcal{S}_3. \quad (A1)$$
    We denote the set of index tuples satisfying (A1) by $\mathcal{W}(\mathcal{S}_{1:3})$. We have:
    $$\Pr \left[ \bigcup_{\mathcal{W}(\mathcal{S}_{1:3})} \left\{ (X_1^n(\tilde{w}_1, \tilde{w}'_1), \ldots, X_M^n(\tilde{w}_M, \tilde{w}'_M), Y^n) \in T_\epsilon^n \right\} \right] \le 2^{n \sum_{m \in \mathcal{S}_3} (R_m + R'_m) + n \sum_{m \in \mathcal{S}_2} R'_m} \times \sum_{(x_1^n, \ldots, x_M^n, y^n) \in T_\epsilon^n} p_{X_{\mathcal{S}_1}^n \tilde{X}_{\mathcal{S}_2}^n \hat{X}_{\mathcal{S}_3}^n Y^n}(x_{\mathcal{S}_1}^n, x_{\mathcal{S}_2}^n, x_{\mathcal{S}_3}^n, y^n) \stackrel{(a)}{\le} 2^{n \sum_{m \in \mathcal{S}_3} (R_m + R'_m) + n \sum_{m \in \mathcal{S}_2} R'_m} \times 2^{n \left( H(X_{\mathcal{M}} Y) - H(X_{\mathcal{S}_1} Y) - \sum_{m \in \mathcal{S}_2 \cup \mathcal{S}_3} H(X_m) + \delta(\epsilon) \right)} \quad (A2)$$
    where $\tilde{X}_{\mathcal{S}_2}^n$ denotes $\{X_m^n(\tilde{w}_m, \tilde{w}'_m)\}_{m \in \mathcal{S}_2}$ and $\hat{X}_{\mathcal{S}_3}^n$ denotes $\{X_m^n(\tilde{w}_m, \tilde{w}'_m)\}_{m \in \mathcal{S}_3}$. In Step (a), we use that (i) $(X_{\mathcal{S}_1}^n, Y^n)$ is "almost independent" of $(\tilde{X}_{\mathcal{S}_2}^n, \hat{X}_{\mathcal{S}_3}^n)$ and (ii) the random sequences $X_m^n(\tilde{w}_m, \tilde{w}'_m)$, $m \in \mathcal{S}_2$, and $X_m^n(\tilde{w}_m, \tilde{w}'_m)$, $m \in \mathcal{S}_3$, are mutually "almost independent". Note that we use the term "almost independent", rather than independent, because we have assumed $w_1 = \cdots = w_M = 1$ and $w'_1 = \cdots = w'_M = 1$; i.e., we implicitly have a conditional probability, and conditioned on $w_1 = \cdots = w_M = 1$, claims (i)–(ii) may not hold if we insist on exact independence. This issue has been dealt with in [21,22,23], and following similar arguments, one can show that (i) and (ii) hold with "almost independence". The probability of the considered error event is thus arbitrarily close to zero for large enough n if:
    $$\sum_{m \in \mathcal{S}_3} (R_m + R'_m) + \sum_{m \in \mathcal{S}_2} R'_m < I(X_{\mathcal{M}}; Y | X_{\mathcal{S}_1}) + I(X_{\mathcal{M}}) - I(X_{\mathcal{S}_1}). \quad (A3)$$
    Inequality (A3) is satisfied by (14) when we choose $\bar{\mathcal{S}} = \mathcal{S}_1$. Note that the inequalities with $\mathcal{S}_2 \ne \emptyset$ are redundant.

Appendix B. Simplification for Symmetric Networks with Symmetric Distributions

Choose $R_m = \tilde{R}$ and $R'_m = \tilde{R}'$ for all $m = 1, \ldots, M$. Using the definition in (16), the problem defined by (8)–(10), (13) and (14) simplifies to:

$$\tilde{R}, \tilde{R}' \ge 0 \quad (A4)$$
$$\tilde{R} + \tilde{R}' \le C \quad (A5)$$
$$M \tilde{R} = R \quad (A6)$$
$$|\mathcal{S}| \tilde{R}' \ge I(X_{\mathcal{S}}), \quad \mathcal{S} \subseteq \mathcal{M} \quad (A7)$$
$$|\mathcal{S}| (\tilde{R} + \tilde{R}') \le I(X_{\mathcal{S}}; Y | X_{\bar{\mathcal{S}}}) + I(X_{\mathcal{M}}) - I(X_{\bar{\mathcal{S}}}), \quad \mathcal{S} \subseteq \mathcal{M}. \quad (A8)$$

We prove that the tightest inequality in (A7) and (A8) is given by $\mathcal{S} = \mathcal{M}$. Eliminating $\tilde{R}'$ from the remaining inequalities concludes the proof.
Let $\mathcal{S}$ be any subset of $\mathcal{M}$ and $s_0$ be an element of $\mathcal{M}$ such that $s_0 \notin \mathcal{S}$. This is possible if $\mathcal{S} \ne \mathcal{M}$. Define $\mathcal{T} = \mathcal{M} \setminus (\mathcal{S} \cup \{s_0\})$. We show that:

$$\frac{I(X_{\mathcal{S}})}{|\mathcal{S}|} \le \frac{I(X_{\mathcal{S} \cup \{s_0\}})}{|\mathcal{S}| + 1} \quad (A9)$$

and:

$$\frac{1}{|\mathcal{S}| + 1} \left[ I(X_{\mathcal{S} \cup \{s_0\}}; Y | X_{\mathcal{T}}) + I(X_{\mathcal{M}}) - I(X_{\mathcal{T}}) \right] \le \frac{1}{|\mathcal{S}|} \left[ I(X_{\mathcal{S}}; Y | X_{\mathcal{T}} X_{s_0}) + I(X_{\mathcal{M}}) - I(X_{\mathcal{T} \cup \{s_0\}}) \right]. \quad (A10)$$

The following equalities, which hold for disjoint $\mathcal{S}$ and $\mathcal{T}$, come in handy in the proof:

$$I(X_{\mathcal{S} \cup \mathcal{T}}) = I(X_{\mathcal{S}}) + I(X_{\mathcal{T}}) + I(X_{\mathcal{S}}; X_{\mathcal{T}}) \quad (A11)$$
$$I(X_{\mathcal{S}}) = \sum_{j=1}^{|\mathcal{S}| - 1} I(X_{s_1} \cdots X_{s_j}; X_{s_{j+1}}). \quad (A12)$$
Suppose $\mathcal{S} = \{s_1, \ldots, s_{|\mathcal{S}|}\}$ and $\mathcal{S}' = \{s_0, s_1, \ldots, s_{|\mathcal{S}|-1}\}$. We have:

$$|\mathcal{S}| I(X_{\mathcal{S} \cup \{s_0\}}) = |\mathcal{S}| I(X_{\mathcal{S}}) + |\mathcal{S}| I(X_{\mathcal{S}}; X_{s_0}) \ge |\mathcal{S}| I(X_{\mathcal{S}}) + \sum_{j=1}^{|\mathcal{S}|} I(X_{s_1} \cdots X_{s_j}; X_{s_0}) \stackrel{(a)}{=} |\mathcal{S}| I(X_{\mathcal{S}}) + \sum_{j=0}^{|\mathcal{S}|-1} I(X_{s_0} \cdots X_{s_j}; X_{s_{j+1}}) \stackrel{(b)}{=} |\mathcal{S}| I(X_{\mathcal{S}}) + I(X_{\mathcal{S}'}) + I(X_{s_0} \cdots X_{s_{|\mathcal{S}|-1}}; X_{s_{|\mathcal{S}|}}) \ge |\mathcal{S}| I(X_{\mathcal{S}}) + I(X_{\mathcal{S}'}) \stackrel{(c)}{=} (|\mathcal{S}| + 1) I(X_{\mathcal{S}}) \quad (A13)$$

where (a) and (c) hold by (16) and (b) is by (A12), written for $\mathcal{S}'$.
Similarly, we have:

$$(|\mathcal{S}| + 1) \left[ I(X_{\mathcal{S}}; Y | X_{\mathcal{T}} X_{s_0}) + I(X_{\mathcal{M}}) - I(X_{\mathcal{T} \cup \{s_0\}}) \right] - |\mathcal{S}| \left[ I(X_{\mathcal{S} \cup \{s_0\}}; Y | X_{\mathcal{T}}) + I(X_{\mathcal{M}}) - I(X_{\mathcal{T}}) \right]$$
$$= I(X_{\mathcal{S}}; Y | X_{\mathcal{T}} X_{s_0}) - |\mathcal{S}| I(X_{s_0}; Y | X_{\mathcal{T}}) + I(X_{\mathcal{M}}) - (|\mathcal{S}| + 1) I(X_{\mathcal{T} \cup \{s_0\}}) + |\mathcal{S}| I(X_{\mathcal{T}})$$
$$\stackrel{(a)}{=} I(X_{\mathcal{S}}; Y | X_{\mathcal{T}} X_{s_0}) - |\mathcal{S}| I(X_{s_0}; Y | X_{\mathcal{T}}) + I(X_{\mathcal{S}}) + I(X_{\mathcal{S}}; X_{\mathcal{T} \cup \{s_0\}}) - |\mathcal{S}| I(X_{\mathcal{T} \cup \{s_0\}}) + |\mathcal{S}| I(X_{\mathcal{T}})$$
$$\stackrel{(b)}{=} I(X_{\mathcal{S}}; Y | X_{\mathcal{T}} X_{s_0}) - |\mathcal{S}| I(X_{s_0}; Y | X_{\mathcal{T}}) + I(X_{\mathcal{S}}) + I(X_{\mathcal{S}}; X_{\mathcal{T} \cup \{s_0\}}) - |\mathcal{S}| I(X_{\mathcal{T}}; X_{s_0})$$
$$= I(X_{\mathcal{S}}; Y X_{\mathcal{T}} X_{s_0}) - |\mathcal{S}| I(X_{s_0}; Y X_{\mathcal{T}}) + I(X_{\mathcal{S}})$$
$$= H(X_{\mathcal{S}}) + I(X_{\mathcal{S}}) - H(X_{\mathcal{S}} | Y X_{\mathcal{T}} X_{s_0}) - |\mathcal{S}| \left[ H(X_{s_0}) - H(X_{s_0} | Y X_{\mathcal{T}}) \right]$$
$$\stackrel{(c)}{=} |\mathcal{S}| H(X_{s_0} | Y X_{\mathcal{T}}) - H(X_{\mathcal{S}} | Y X_{\mathcal{T}} X_{s_0}) \stackrel{(d)}{\ge} 0 \quad (A14)$$

where (a) and (b) are by (A11), (c) follows from (16) and (d) follows from (16) and the symmetry of the channel in (5).

Appendix C. Proof of Theorem 2

We first prove the multi-letter bound in (20) using Fano's inequality and the data processing inequality. For any $\epsilon > 0$, we have:

$$nR \stackrel{(a)}{\le} H(V_1, \ldots, V_M) + n\epsilon = \sum_{m=1}^{M} H(V_m) - I(V_{\mathcal{M}}) + n\epsilon \le nMC - I(V_{\mathcal{M}}) + n\epsilon \stackrel{(b)}{\le} nMC - I(X_{\mathcal{M}}^n) + n\epsilon$$

where (a) is by Fano's inequality and (b) is by the data processing inequality as follows:

$$I(V_{\mathcal{M}}) = \sum_{m=1}^{M-1} I(V_1 \cdots V_m; V_{m+1}) \ge \sum_{m=1}^{M-1} I(X_1^n \cdots X_m^n; X_{m+1}^n) = I(X_{\mathcal{M}}^n).$$
Similarly, for any subset $\mathcal{S} \subseteq \mathcal{M}$ and $\epsilon > 0$, we have:

$$nR \le I(W; Y^n) + n\epsilon \le I(W; Y^n V_{\mathcal{S}} X_{\mathcal{S}}^n) + n\epsilon = I(W; V_{\mathcal{S}} X_{\mathcal{S}}^n) + I(W; Y^n | V_{\mathcal{S}} X_{\mathcal{S}}^n) + n\epsilon \stackrel{(a)}{=} I(W; V_{\mathcal{S}}) + I(W; Y^n | V_{\mathcal{S}} X_{\mathcal{S}}^n) + n\epsilon \le H(V_{\mathcal{S}}) + I(W X_{\mathcal{M}}^n; Y^n | V_{\mathcal{S}} X_{\mathcal{S}}^n) + n\epsilon \stackrel{(b)}{\le} n|\mathcal{S}|C + I(X_{\mathcal{M}}^n; Y^n | X_{\mathcal{S}}^n) + n\epsilon$$

where (a) is because $W - V_{\mathcal{S}} - X_{\mathcal{S}}^n$ forms a Markov chain and (b) is because conditioning does not increase entropy and because $(W, V_{\mathcal{M}}) - X_{\mathcal{M}}^n - Y^n$ forms a Markov chain.
Next, we single-letterize (23) as follows:

$$nR \le nMC - (M-1) \sum_{i=1}^{n} H(U_i | U^{i-1}) + \sum_{m=1}^{M} \sum_{i=1}^{n} H(U_i | X_m^n U^{i-1}) - \sum_{i=1}^{n} H(U_i | X_{\mathcal{M}}^n U^{i-1})$$
$$\le nMC - (M-1) \sum_{i=1}^{n} H(U_i | U^{i-1}) + \sum_{m=1}^{M} \sum_{i=1}^{n} H(U_i | X_{m,i} U^{i-1}) - \sum_{i=1}^{n} H(U_i | X_{\mathcal{M},i} U^{i-1})$$
$$= nMC - n(M-1) H(U_T | U^{T-1} T) + n \sum_{m=1}^{M} H(U_T | X_{m,T} U^{T-1} T) - n H(U_T | X_{\mathcal{M},T} U^{T-1} T)$$
$$= nMC - n(M-1) H(U|Q) + n \sum_{m=1}^{M} H(U|X_m Q) - n H(U|X_{\mathcal{M}} Q)$$

where T is a uniform random variable on the set $\{1, \ldots, n\}$ independent of everything else in the system, $Q = (U^{T-1}, T)$, and $X_1, \ldots, X_M$, Y and U are defined by $X_{1,T}, \ldots, X_{M,T}$, $Y_T$ and $U_T$, respectively. Note that $U - Y - X_{\mathcal{M}} - Q$ forms a Markov chain.
Finally, we expand (21) as follows:

$$nR \le n|\mathcal{S}|C + I(X_{\mathcal{M}}^n; Y^n | X_{\mathcal{S}}^n) = n|\mathcal{S}|C + \sum_{i=1}^{n} H(Y_i | X_{\mathcal{S}}^n Y^{i-1}) - \sum_{i=1}^{n} H(Y_i | X_{\mathcal{M}}^n Y^{i-1})$$
$$\stackrel{(a)}{=} n|\mathcal{S}|C + \sum_{i=1}^{n} H(Y_i | X_{\mathcal{S}}^n Y^{i-1} U^{i-1}) - \sum_{i=1}^{n} H(Y_i | X_{\mathcal{M}}^n Y^{i-1} U^{i-1})$$
$$\stackrel{(b)}{\le} n|\mathcal{S}|C + \sum_{i=1}^{n} H(Y_i | X_{\mathcal{S},i} U^{i-1}) - \sum_{i=1}^{n} H(Y_i | X_{\mathcal{M},i} U^{i-1})$$
$$\stackrel{(c)}{=} n|\mathcal{S}|C + n I(X_{\bar{\mathcal{S}}}; Y | X_{\mathcal{S}} Q)$$

where (a) is because $Y_i - (X_{\mathcal{S}}^n, Y^{i-1}) - U^{i-1}$ forms a Markov chain, (b) is because conditioning does not increase entropy and $Y_i - X_{\mathcal{M},i} - U^{i-1}$ forms a Markov chain and (c) is by a standard time-sharing argument. The cardinality of Q is bounded using the Fenchel–Eggleston–Carathéodory theorem ([13], Appendix A) (see also ([24], Appendix B)).

References

  1. Park, S.H.; Simeone, O.; Sahin, O.; Shamai, S. Fronthaul compression for cloud radio access networks: Signal processing advances inspired by network information theory. IEEE Signal Proc. Mag. 2014, 31, 69–79. [Google Scholar] [CrossRef]
  2. Yu, W. Cloud radio-access networks: Coding strategies, capacity analysis, and optimization techniques. In Proceedings of the Communications Theory Workshop, Toronto, ON, Canada, 6 May 2016. [Google Scholar]
  3. Peng, M.; Wang, C.; Lau, V.; Poor, H.V. Fronthaul-constrained cloud radio access networks: Insights and challenges. IEEE Wirel. Commun. 2015, 22, 152–160. [Google Scholar] [CrossRef]
  4. Dai, B.; Yu, W. Sparse beamforming and user-centric clustering for downlink cloud radio access network. IEEE Access 2014, 2, 1326–1339. [Google Scholar]
  5. Park, S.H.; Simeone, O.; Sahin, O.; Shamai, S. Joint precoding and multivariate backhaul compression for the downlink of cloud radio access networks. IEEE Trans. Signal Proc. 2013, 61, 5646–5658. [Google Scholar] [CrossRef]
  6. Patil, P.; Yu, W. Hybrid compression and message-sharing strategy for the downlink cloud radio-access network. In Proceedings of the Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 9–14 February 2014; pp. 1–6. [Google Scholar]
  7. Liu, N.; Kang, W. A new achievability scheme for downlink multicell processing with finite backhaul capacity. In Proceedings of the IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014; pp. 1006–1010. [Google Scholar]
  8. Wang, C.; Wigger, M.A.; Zaidi, A. On achievability for downlink cloud radio access networks with base station cooperation. arXiv, 2016; arXiv:1610.09407. [Google Scholar]
  9. Yang, T.; Liu, N.; Kang, W.; Shamai, S. An upper bound on the sum capacity of the downlink multicell processing with finite backhaul capacity. arXiv, 2016; arXiv:1609.00833. [Google Scholar]
  10. Traskov, D.; Kramer, G. Reliable communication in networks with multi-access interference. In Proceedings of the Information Theory Workshop, Tahoe City, CA, USA, 2–6 September 2007; pp. 343–348. [Google Scholar]
  11. Kang, W.; Liu, N.; Chong, W. The Gaussian multiple access diamond channel. IEEE Trans. Inf. Theory 2015, 61, 6049–6059. [Google Scholar] [CrossRef]
  12. Saeedi Bidokhti, S.; Kramer, G. Capacity bounds for diamond networks with an orthogonal broadcast channel. IEEE Trans. Inf. Theory 2016, 62, 7103–7122. [Google Scholar] [CrossRef]
  13. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  14. Venkataramani, R.; Kramer, G.; Goyal, V.K. Multiple description coding with many channels. IEEE Trans. Inf. Theory 2003, 49, 2106–2114. [Google Scholar] [CrossRef]
  15. Saeedi Bidokhti, S.; Kramer, G. Capacity bounds for a class of diamond networks. In Proceedings of the International Symposium Information Theory, Honolulu, HI, USA, 29 June–4 July 2014; pp. 1196–1200. [Google Scholar]
  16. Ozarow, L. On a source-coding problem with two channels and three receivers. Bell Syst. Tech. J. 1980, 59, 1909–1921. [Google Scholar] [CrossRef]
  17. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: New York, NY, USA, 2006. [Google Scholar]
  18. Thomas, J. Feedback can at most double Gaussian multiple access channel capacity. IEEE Trans. Inf. Theory 1987, 33, 711–716. [Google Scholar] [CrossRef]
  19. Lee, S.H.; Zhao, W.; Khisti, A. Secure degrees of freedom of the Gaussian diamond-wiretap channel. IEEE Trans. Inf. Theory 2017, 63, 496–508. [Google Scholar] [CrossRef]
  20. Zou, S.; Liang, Y.; Lai, L.; Shamai, S. An information theoretic approach to secret sharing. IEEE Trans. Inf. Theory 2015, 61, 3121–3136. [Google Scholar] [CrossRef]
  21. Minero, P.; Lim, S.H.; Kim, Y.H. A unified approach to hybrid coding. IEEE Trans. Inf. Theory 2015, 61, 1509–1523. [Google Scholar] [CrossRef]
  22. Grover, P.; Wagner, A.B.; Sahai, A. Information embedding meets distributed control. arXiv, 2010; arXiv:1003.0520. [Google Scholar]
  23. Saeedi Bidokhti, S.; Prabhakaran, V.M. Is non-unique decoding necessary? IEEE Trans. Inf. Theory 2014, 60, 2594–2610. [Google Scholar] [CrossRef]
  24. Willems, F.; van der Meulen, E. The discrete memoryless multiple-access channel with cribbing encoders. IEEE Trans. Inf. Theory 1985, 31, 313–327. [Google Scholar] [CrossRef]
Figure 1. A C-RAN downlink.
Figure 2. Capacity bounds as functions of C (M = 3, P = 1).
Figure 3. Capacity bounds as functions of C for P = 1 and different values of M.
