Article

Leveraging Receiver Message Side Information in Two-Receiver Broadcast Channels: A General Approach †

1 School of Electrical Engineering and Computing, University of Newcastle, Callaghan, New South Wales 2308, Australia
† This work was presented partially at the 2015 IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015, and the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016.
* Author to whom correspondence should be addressed.
Entropy 2017, 19(4), 138; https://doi.org/10.3390/e19040138
Submission received: 9 February 2017 / Revised: 16 March 2017 / Accepted: 20 March 2017 / Published: 23 March 2017
(This article belongs to the Special Issue Network Information Theory)

Abstract: We consider two-receiver broadcast channels where each receiver may know a priori some of the messages requested by the other receiver as receiver message side information (RMSI). We devise a general approach to leverage RMSI in these channels. To this end, we first propose a pre-coding scheme considering the general message setup where each receiver requests both common and private messages and knows a priori part of the private message requested by the other receiver as RMSI. We then construct the transmission scheme of a two-receiver channel with RMSI by applying the proposed pre-coding scheme to the best transmission scheme for the channel without RMSI. To demonstrate the effectiveness of our approach, we apply our pre-coding scheme to three categories of the two-receiver discrete memoryless broadcast channel: (i) channel without state; (ii) channel with states known causally to the transmitter; and (iii) channel with states known non-causally to the transmitter. We then derive a unified inner bound for all three categories. We show that our inner bound is tight for some new cases in each of the three categories, as well as all cases whose capacity region was known previously.

1. Introduction

Communication over wireless channels motivates the study of broadcast channels [1]. In these channels, a transmitter wishes to send a number of messages to multiple receivers via a noisy shared medium, which can also be time-varying due to, for example, fading or interference. The time-varying factor is commonly referred to as the channel state.
The messages to be sent by the transmitter may already be partially present at some of the receivers; this is referred to as receiver message side information (RMSI). This form of side information appears in multimedia broadcasting with packet loss. Suppose that a multimedia file is requested by multiple receivers, and the transmission is subject to packet loss. After a few rounds of transmission, each receiver may know a priori some of the packets re-demanded (due to packet loss) by the other receivers. This form of side information also appears in the downlink phase of applications modeled by the multi-way relay channel [2]. For example, consider two stations exchanging data through a satellite. Since each station is also the source of the message to be transmitted by the satellite to the other station in the downlink phase, this phase can be modeled by a broadcast channel with RMSI. As proper use of RMSI may increase transmission rates over broadcast channels, we investigate the capacity region of two-receiver broadcast channels with RMSI. We consider three categories of the two-receiver discrete memoryless broadcast channel: (i) channel without state; (ii) channel with causal channel state information at the transmitter (CSIT), including the cases where the channel state may also be available causally or non-causally at each receiver (whether the state is available causally or non-causally at a receiver makes no difference: owing to block decoding, the receiver decodes its requested message(s) only after observing the entire channel-output sequence); and (iii) channel with non-causal CSIT, including the cases where the channel state may also be available causally or non-causally at each receiver.
As we will see later, we construct transmission schemes of two-receiver broadcast channels with RMSI using a pre-coding scheme and the best transmission schemes for the channels without RMSI. We here present existing capacity results for the aforementioned three categories of both the two-receiver discrete memoryless broadcast channel without and with RMSI.

1.1. Broadcast Channel without RMSI

1.1.1. Without State

The capacity region is known for several special classes of the discrete memoryless broadcast channel, such as the degraded [3], less noisy [4] (p. 121), more capable [5], deterministic [6,7] and semideterministic [8] broadcast channels. The capacity region is also known for the discrete memoryless broadcast channel with degraded message sets (a special message setup where one of the receivers requests only a common message) [9]. Marton’s inner bound with a common message [4] (p. 212) is tight for all cases in this category whose capacity region is known. This inner bound is achieved using superposition coding and Marton coding [4] (p. 207).

1.1.2. With Causal CSIT

Under this category, the capacity region is known for the following two special cases: the degraded broadcast channel with private messages where the channel state is available causally at: (i) only the transmitter [10]; or (ii) the transmitter and the non-degraded receiver [4] (p. 184). Superposition coding and the Shannon strategy [11] were used to characterize the capacity region of these two cases. Superposition coding achieves the capacity region of the degraded broadcast channel without state, and the Shannon strategy is a capacity-achieving strategy for the discrete memoryless point-to-point channel with causal CSIT.

1.1.3. With Non-Causal CSIT

Under this category, the capacity region is known for a few special cases, such as the degraded broadcast channel with private messages where the channel state is available non-causally at the transmitter and the non-degraded receiver [10], and the semideterministic broadcast channel with private messages where the channel state is available non-causally at only the transmitter [12]. Khosravi-Farsani and Marvasti [13] (Theorem 2) used superposition coding, Marton coding and Gelfand–Pinsker coding [14] to derive the best-known inner bound for the discrete memoryless broadcast channel with non-causal CSIT (i.e., it is tight for all cases with known capacity region). Superposition coding plus Marton coding achieves the best-known inner bound for the discrete memoryless broadcast channel without state, and Gelfand–Pinsker coding achieves the capacity region of the discrete memoryless point-to-point channel with non-causal CSIT.

1.2. Broadcast Channel with RMSI

1.2.1. Without State

The capacity region is known for the following three special cases: (i) the discrete memoryless channel with complementary RMSI [15,16] where both receivers need to decode all of the source messages, i.e., all of the messages not known a priori; (ii) the discrete memoryless channel with degraded message sets [17] where one of the receivers needs to decode all of the source messages and one only part of the source messages; and (iii) the less noisy channel with the general message setup [18] (Theorem 3) where each receiver has both common- and private-message requests and knows part of the private message requested by the other receiver.

1.2.2. With Causal CSIT

Under this category, the capacity region is known for the two-receiver discrete memoryless broadcast channel with complementary RMSI where the channel state is available causally at only the transmitter [19].

1.2.3. With Non-Causal CSIT

Under this category, the capacity region is known for the two-receiver discrete memoryless broadcast channel with complementary RMSI where the channel state is available non-causally at the transmitter and one of the receivers [20].

2. Summary of the Main Results

We propose a pre-coding scheme to construct the transmission scheme of a two-receiver broadcast channel with RMSI by applying the pre-coding scheme to the best transmission scheme for the channel without RMSI. We design the pre-coding scheme considering the general message setup where each receiver requests both common and private messages and knows part of the private message requested by the other receiver as RMSI. This provides a general approach for utilizing RMSI in two-receiver broadcast channels. We use our pre-coding scheme and derive a unified inner bound for three categories of the discrete memoryless broadcast channel: (i) channel without state; (ii) channel with causal CSIT (causal category); and (iii) channel with non-causal CSIT (non-causal category). The steps to derive our unified inner bound are shown in Figure 1 in which rectangles with solid sides represent the bounds established in this work. Here, we briefly explain the steps.
Step 1: We first derive an inner bound for the causal category without RMSI. We use superposition coding, Marton coding and the Shannon strategy to construct the transmission scheme. By choosing the common message to be a constant, our scheme reduces to the one used by Kramer [21] for the discrete memoryless broadcast channel without RMSI, with causal CSIT. Our inner bound is tight for all of the cases without RMSI, with causal CSIT, whose capacity region is known (mentioned in the previous section).
Step 2: We then unify our inner bound for the causal category without RMSI, and the best-known inner bound for the non-causal category without RMSI by Khosravi-Farsani and Marvasti [13] (Theorem 2). This result is analogous to the work of Jafar [22] in which a unified capacity-region expression is provided for the point-to-point channel with causal CSIT and the point-to-point channel with non-causal CSIT. Clearly, the capacity region of a non-causal case is larger than or equal to the capacity region of the corresponding causal case (where only the transmitter knows the channel state causally instead of non-causally, and the knowledge of the receivers about the channel state is the same in both cases). This relationship is not necessarily true for their inner bounds especially when one transmission scheme is not a special case of the other. One of the advantages of having a unified inner bound is that it allows us to show that the best-known inner bound ([13] (Theorem 2)) for a non-causal case is larger than or equal to the best inner bound (our inner bound) for the corresponding causal case.
By considering that the channel state is zero with probability one for the channel without state, our unified inner bound reduces to Marton’s inner bound with a common message [4] (p. 212) for the discrete memoryless broadcast channel without state. Consequently, our inner bound covers all three categories without RMSI.
Step 3: Moving on to the channel with RMSI, we propose a pre-coding scheme in order to take RMSI into account. We use our pre-coding scheme in conjunction with the schemes achieving the best inner bounds for the causal and non-causal categories without RMSI, and derive a unified inner bound, which is the best inner bound for all three categories with RMSI. This inner bound reduces to the unified inner bound without RMSI by setting RMSI to zero.
Capacity results: Using our inner bound, we obtain the following new capacity results for the discrete memoryless broadcast channel with RMSI. We also show that our inner bound is tight for all of the cases whose capacity region was known prior to this work. This demonstrates the effectiveness of our proposed approach for utilizing RMSI.
  • For the channel with RMSI, without state, we establish the capacity region of two new cases, namely the deterministic channel and the more capable channel. In a concurrent work with the preliminary published version of this work [23], Bracher and Wigger [24] established the capacity region of the semideterministic channel without state for which our inner bound is also tight.
  • For the channel with RMSI and causal CSIT, we establish the capacity region of the degraded broadcast channel where the channel state is available causally at: (i) only the transmitter; (ii) the transmitter and the non-degraded receiver; or (iii) the transmitter and both receivers.
  • For the channel with RMSI and non-causal CSIT, we establish the capacity region of the degraded broadcast channel where the channel state is available non-causally at: (i) the transmitter and the non-degraded receiver; or (ii) the transmitter and both receivers.

3. System Model

We consider the two-receiver discrete memoryless broadcast channel with independent and identically distributed (i.i.d.) states $p(y_1, y_2 \mid x_0, s)\,p(s)$, depicted in Figure 2, where $X_0 \in \mathcal{X}_0$ is the channel input, $Y_1 \in \mathcal{Y}_1$ and $Y_2 \in \mathcal{Y}_2$ are the channel outputs, and $S \in \mathcal{S}$ is the channel state. Considering $n$ uses of the channel, $X_0^n = (X_{0,1}, X_{0,2}, \ldots, X_{0,n})$ is the transmitted codeword, and $Y_i^n = (Y_{i,1}, Y_{i,2}, \ldots, Y_{i,n})$, $i = 1, 2$, is the channel-output sequence at receiver $i$.
The source messages $\{M_i\}_{i=0}^{4}$ are independent, and $M_i$ is uniformly distributed over the set $\mathcal{M}_i = \{1, 2, \ldots, 2^{nR_i}\}$, i.e., transmitted at rate $R_i$ bits per channel use. Receiver 1 requests $\{M_0, M_1, M_3\}$ and knows $M_4$ as RMSI. Receiver 2 requests $\{M_0, M_2, M_4\}$ and knows $M_3$ as RMSI. For Receiver 1, $M_1$ is the part of the private-message request that is not known a priori to the other receiver, and $M_3$ is the part that is known. For Receiver 2, these are $M_2$ and $M_4$, respectively.
The channel without state is a special case of our channel model obtained by setting $\mathcal{S} = \{0\}$, i.e., $p_S(0) = 1$. This implies that the transmitter and receivers know that the channel state equals zero at all channel uses. The channel without RMSI is also a special case of our channel model, obtained by setting $(M_3, M_4) = (0, 0)$.
A $(2^{nR_0}, 2^{nR_1}, 2^{nR_2}, 2^{nR_3}, 2^{nR_4}, n)$ causal code for the channel consists of a sequence of encoding maps:
$$f_j: \mathcal{M}_0 \times \mathcal{M}_1 \times \mathcal{M}_2 \times \mathcal{M}_3 \times \mathcal{M}_4 \times \mathcal{S}^j \to \mathcal{X}_0, \quad j = 1, 2, \ldots, n,$$
where $\times$ denotes the Cartesian product and $\mathcal{S}^j$ denotes the $j$-fold Cartesian product of $\mathcal{S}$, i.e., $X_{0,j} = f_j(M_0, M_1, M_2, M_3, M_4, S^j)$ where $S^j = (S_1, S_2, \ldots, S_j)$. This code also consists of two decoding functions:
$$g_1: \tilde{\mathcal{Y}}_1^n \times \mathcal{M}_4 \to \mathcal{M}_0 \times \mathcal{M}_1 \times \mathcal{M}_3, \qquad g_2: \tilde{\mathcal{Y}}_2^n \times \mathcal{M}_3 \to \mathcal{M}_0 \times \mathcal{M}_2 \times \mathcal{M}_4,$$
where $\tilde{\mathcal{Y}}_i = \mathcal{Y}_i \times \mathcal{S}$, $i = 1, 2$, if the channel state is available at receiver $i$, and $\tilde{\mathcal{Y}}_i = \mathcal{Y}_i$ otherwise. $(\hat{M}_0^{(1)}, \hat{M}_1, \hat{M}_3) = g_1(\tilde{Y}_1^n, M_4)$ is the decoded $(M_0, M_1, M_3)$ at Receiver 1, and $(\hat{M}_0^{(2)}, \hat{M}_2, \hat{M}_4) = g_2(\tilde{Y}_2^n, M_3)$ is the decoded $(M_0, M_2, M_4)$ at Receiver 2, where $\tilde{Y}_i^n = (Y_i^n, S^n)$ if the channel state is available at receiver $i$ and $\tilde{Y}_i^n = Y_i^n$ otherwise.
A $(2^{nR_0}, 2^{nR_1}, 2^{nR_2}, 2^{nR_3}, 2^{nR_4}, n)$ non-causal code for the channel consists of an encoding function:
$$f: \mathcal{M}_0 \times \mathcal{M}_1 \times \mathcal{M}_2 \times \mathcal{M}_3 \times \mathcal{M}_4 \times \mathcal{S}^n \to \mathcal{X}_0^n,$$
i.e., $X_0^n = f(M_0, M_1, M_2, M_3, M_4, S^n)$. This code also consists of two decoding functions, defined the same as for the causal code.
A $(2^{nR_0}, 2^{nR_1}, 2^{nR_2}, 2^{nR_3}, 2^{nR_4}, n)$ code for the channel without state is defined by choosing $\mathcal{S} = \{0\}$ in the definition of either the causal code or the non-causal code.
The average probability of error for a code is defined as:
$$P_e^{(n)} = P\left\{ (\hat{M}_0^{(1)}, \hat{M}_1, \hat{M}_3) \neq (M_0, M_1, M_3) \text{ or } (\hat{M}_0^{(2)}, \hat{M}_2, \hat{M}_4) \neq (M_0, M_2, M_4) \right\}.$$
Definition 1.
For causal (non-causal) cases, a rate tuple $(R_0, R_1, R_2, R_3, R_4)$ is said to be achievable for the channel if there exists a sequence of $(2^{nR_0}, 2^{nR_1}, 2^{nR_2}, 2^{nR_3}, 2^{nR_4}, n)$ causal (non-causal) codes with $P_e^{(n)} \to 0$ as $n \to \infty$.
Definition 2.
The capacity region of the channel is the closure of the set of all achievable rate tuples $(R_0, R_1, R_2, R_3, R_4)$.
Definition 3.
The two-receiver discrete memoryless broadcast channel with states is said to be physically degraded if:
$$p(y_1, y_2 \mid x_0, s) = p(y_1 \mid x_0, s)\, p(y_2 \mid y_1),$$
and it is said to be stochastically degraded, or simply degraded, if there exists a $Y_2'$ such that $(X_0, S) \to Y_1 \to Y_2'$ form a Markov chain and:
$$p_{Y_2 \mid X_0, S}(y_2 \mid x_0, s) = \sum_{y_1} p_{Y_1 \mid X_0, S}(y_1 \mid x_0, s)\, p_{Y_2' \mid Y_1}(y_2 \mid y_1).$$
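As a small numerical illustration of Definition 3 (a hypothetical example, not taken from the paper), the sketch below ignores the state and checks stochastic degradedness for a pair of binary symmetric channels: receiver 2's channel equals receiver 1's channel cascaded with a degrading channel $p(y_2 \mid y_1)$, computed by the marginalization above.

```python
import numpy as np

# Hypothetical example: receiver 1 sees a BSC(0.1), receiver 2 a BSC(0.26).
def bsc(p):
    """Transition matrix of a binary symmetric channel, rows indexed by input."""
    return np.array([[1 - p, p], [p, 1 - p]])

p_y1_given_x0 = bsc(0.1)   # X0 -> Y1
p_y2_given_y1 = bsc(0.2)   # candidate degrading channel Y1 -> Y2'

# Stochastic degradedness: p(y2|x0) = sum_y1 p(y1|x0) p(y2'|y1),
# which is exactly a matrix product of the two transition matrices.
cascade = p_y1_given_x0 @ p_y2_given_y1
print(cascade)  # BSC with crossover 0.1*0.8 + 0.9*0.2 = 0.26

assert np.allclose(cascade, bsc(0.26))
```

Since the marginal of receiver 2 matches the cascade, BSC(0.26) is (stochastically) degraded with respect to BSC(0.1).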
Definition 4.
The two-receiver discrete memoryless broadcast channel without state is said to be deterministic if the channel outputs are deterministic functions of the channel input, i.e., $Y_i = \phi_i(X_0)$, $i = 1, 2$.
Definition 5.
The two-receiver discrete memoryless broadcast channel without state is said to be more capable if $I(X_0; Y_1) \geq I(X_0; Y_2)$ for every probability mass function (pmf) $p(x_0)$.
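Definition 5 can be checked numerically for simple channels. The sketch below is a hypothetical example with binary symmetric channels; the helper `mutual_information` is defined here for illustration and is not from the paper. It samples input pmfs and verifies $I(X_0; Y_1) \geq I(X_0; Y_2)$ on each.

```python
import numpy as np

def mutual_information(p_x, W):
    """I(X;Y) in bits for input pmf p_x and channel matrix W[x, y]."""
    p_xy = p_x[:, None] * W          # joint pmf p(x, y)
    p_y = p_xy.sum(axis=0)           # output marginal
    mask = p_xy > 0                  # skip zero-probability entries
    return (p_xy[mask] * np.log2(p_xy[mask] / (p_x[:, None] * p_y[None, :])[mask])).sum()

def bsc(p):
    return np.array([[1 - p, p], [p, 1 - p]])

# Hypothetical pair: BSC(0.26) is a degraded version of BSC(0.1),
# so the channel is in particular more capable.
W1, W2 = bsc(0.1), bsc(0.26)
for a in np.linspace(0.01, 0.99, 99):   # sample input pmfs p(x0) = (a, 1-a)
    p_x = np.array([a, 1 - a])
    assert mutual_information(p_x, W1) >= mutual_information(p_x, W2)
print("I(X0;Y1) >= I(X0;Y2) on all sampled pmfs")
```

Sampling pmfs only gives numerical evidence, of course; the definition quantifies over all $p(x_0)$.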

4. Broadcast Channel without RMSI

In this section, we address the two-receiver discrete memoryless broadcast channel without RMSI, i.e., ( M 3 , M 4 ) = ( 0 , 0 ) . We first derive an inner bound for the channel with causal CSIT. We then present the best-known inner bound for the channel with non-causal CSIT [13] (Theorem 2). We finally show that we can have a unified inner bound that covers both causal and non-causal cases, although the achievability schemes are different.

4.1. With Causal CSIT

We utilize superposition coding, Marton coding and the Shannon strategy to derive an inner bound for the discrete memoryless broadcast channel with causal CSIT, stated as Theorem 1.
Theorem 1.
A rate triple ( R 0 , R 1 , R 2 ) for the two-receiver discrete memoryless broadcast channel with causal CSIT is achievable if it satisfies:
$$\begin{aligned}
R_0 + R_1 &< I(U_0, U_1; \tilde{Y}_1), && (1)\\
R_0 + R_2 &< I(U_0, U_2; \tilde{Y}_2), && (2)\\
R_0 + R_1 + R_2 &< I(U_0, U_1; \tilde{Y}_1) + I(U_2; \tilde{Y}_2 \mid U_0) - I(U_1; U_2 \mid U_0), && (3)\\
R_0 + R_1 + R_2 &< I(U_1; \tilde{Y}_1 \mid U_0) + I(U_0, U_2; \tilde{Y}_2) - I(U_1; U_2 \mid U_0), && (4)\\
2R_0 + R_1 + R_2 &< I(U_0, U_1; \tilde{Y}_1) + I(U_0, U_2; \tilde{Y}_2) - I(U_1; U_2 \mid U_0), && (5)
\end{aligned}$$
for some pmf $p(u_0, u_1, u_2)$ and some function $x_0 = \gamma(u_0, u_1, u_2, s)$. $\tilde{Y}_i = (Y_i, S)$, $i = 1, 2$, if the channel state is available at receiver $i$ and $\tilde{Y}_i = Y_i$ otherwise.
The proof of this theorem is similar to that of Marton’s inner bound with a common message [4] (p. 212) for the discrete memoryless broadcast channel without state. In the encoding, we only need to add the Shannon strategy, and in the decoding, we consider $\tilde{Y}_i^n$ instead of $Y_i^n$ as the channel-output sequence at receiver $i$. We present the proof here because we refer to it in Appendix A and, in the next section, for the channel with RMSI.
Proof of Theorem 1.
(Codebook construction) The codebook of the transmission scheme is formed from three sub-codebooks constructed using the pmf $p(u_0, u_1, u_2)$. Before constructing the sub-codebooks, using rate splitting, $M_i$, $i = 1, 2$, is divided into two independent messages $M_{i1}$ of rate $R_{i1}$ and $M_{i2}$ of rate $R_{i2}$, such that $R_i = R_{i1} + R_{i2}$. Sub-codebook 0 consists of i.i.d. codewords:
$$u_0^n(m_0, m_{11}, m_{21}),$$
generated according to $\prod_{j=1}^{n} p_{U_0}(u_{0,j})$. Sub-codebook $i$, $i = 1, 2$, consists of codewords:
$$u_i^n(m_0, m_{11}, m_{21}, m_{i2}, l_i),$$
generated according to:
$$\prod_{j=1}^{n} p_{U_i \mid U_0}\big(u_{i,j} \mid u_{0,j}(m_0, m_{11}, m_{21})\big),$$
where $l_i \in \{1, \ldots, 2^{n\hat{R}_i}\}$, i.e., for each $u_0^n(m_0, m_{11}, m_{21})$, $2^{n(R_{i2} + \hat{R}_i)}$ codewords are generated.
(Encoding) Given $\{m_i\}_{i=0}^{2}$, we first find a pair $(l_1, l_2)$ such that:
$$\big(U_0^n(\cdot),\, U_1^n(\cdot, l_1),\, U_2^n(\cdot, l_2)\big) \in \mathcal{T}_\epsilon^{(n)},$$
where $\mathcal{T}_\epsilon^{(n)}$ is the set of jointly $\epsilon$-typical $n$-sequences with respect to the considered distribution [4] (p. 29). If there is more than one such pair, we arbitrarily choose one of them, and if there is no such pair, we choose $(l_1, l_2) = (1, 1)$. We then construct the transmitted codeword as $x_{0,j} = \gamma(u_{0,j}(\cdot), u_{1,j}(\cdot), u_{2,j}(\cdot), s_j)$, $j = 1, 2, \ldots, n$.
(Decoding) Receiver 1 decodes $(\hat{m}_0^{(1)}, \hat{m}_{11}, \hat{m}_{12})$ if it is the unique tuple that satisfies:
$$\big(U_0^n(\cdot),\, U_1^n(\cdot, l_1),\, \tilde{Y}_1^n\big) \in \mathcal{T}_\epsilon^{(n)} \text{ for some } m_{21} \text{ and } l_1;$$
otherwise, an error is declared. Receiver 2 similarly decodes $(\hat{m}_0^{(2)}, \hat{m}_{21}, \hat{m}_{22})$ if it is the unique tuple that satisfies:
$$\big(U_0^n(\cdot),\, U_2^n(\cdot, l_2),\, \tilde{Y}_2^n\big) \in \mathcal{T}_\epsilon^{(n)} \text{ for some } m_{11} \text{ and } l_2;$$
otherwise, an error is declared. The $\epsilon$ used in the decoding is strictly greater than the $\epsilon$ used in the encoding.
To derive sufficient conditions for achievability, we assume, without loss of generality by the symmetry of the code construction, that the transmitted messages are each equal to one and that $(l_1, l_2) = (l_1^*, l_2^*)$, where $1 \leq l_i^* \leq 2^{n\hat{R}_i}$. Receiver 1 makes an error only if one or more of the following events happen:
$$\begin{aligned}
\mathcal{E}_0:\ & \big(U_0^n(1,1,1),\, U_1^n(1,1,1,1,l_1),\, U_2^n(1,1,1,1,l_2)\big) \notin \mathcal{T}_\epsilon^{(n)} \text{ for all } l_1 \text{ and } l_2,\\
\mathcal{E}_{11}:\ & \big(U_0^n(1,1,1),\, U_1^n(1,1,1,1,l_1^*),\, \tilde{Y}_1^n\big) \notin \mathcal{T}_\epsilon^{(n)},\\
\mathcal{E}_{12}:\ & \big(U_0^n(1,1,1),\, U_1^n(1,1,1,m_{12},l_1),\, \tilde{Y}_1^n\big) \in \mathcal{T}_\epsilon^{(n)} \text{ for some } m_{12} \neq 1 \text{ and } l_1,\\
\mathcal{E}_{13}:\ & \big(U_0^n(1,1,m_{21}),\, U_1^n(1,1,m_{21},m_{12},l_1),\, \tilde{Y}_1^n\big) \in \mathcal{T}_\epsilon^{(n)} \text{ for some } m_{21} \neq 1,\ m_{12} \neq 1 \text{ and } l_1,\\
\mathcal{E}_{14}:\ & \big(U_0^n(m_0,m_{11},m_{21}),\, U_1^n(m_0,m_{11},m_{21},m_{12},l_1),\, \tilde{Y}_1^n\big) \in \mathcal{T}_\epsilon^{(n)} \text{ for some } (m_0,m_{11}) \neq (1,1),\ m_{21},\ m_{12} \text{ and } l_1.
\end{aligned}$$
Events leading to an error at Receiver 2 are written similarly. Based on the error events and using the packing lemma [4] (p. 45) and the mutual covering lemma [4] (p. 208), sufficient conditions for achievability are:
$$\begin{aligned}
\hat{R}_1 + \hat{R}_2 &> I(U_1; U_2 \mid U_0),\\
R_{12} + \hat{R}_1 &< I(U_1; \tilde{Y}_1 \mid U_0),\\
R_0 + R_1 + R_{21} + \hat{R}_1 &< I(U_0, U_1; \tilde{Y}_1),\\
R_{22} + \hat{R}_2 &< I(U_2; \tilde{Y}_2 \mid U_0),\\
R_0 + R_2 + R_{11} + \hat{R}_2 &< I(U_0, U_2; \tilde{Y}_2).
\end{aligned}$$
We finally perform Fourier–Motzkin elimination to obtain Conditions (1)–(5). ☐
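Once the mutual-information terms are fixed for a given choice of $p(u_0, u_1, u_2)$ and $\gamma$, membership of a rate triple in the region (1)–(5) is a handful of linear comparisons. The sketch below uses made-up mutual-information values; the helper `in_inner_bound` is our own illustrative name, not from the paper.

```python
# Check whether (R0, R1, R2) satisfies Conditions (1)-(5) of Theorem 1,
# given hypothetical values of the mutual-information terms:
#   I01_Y1 = I(U0,U1; Y1~),  I02_Y2 = I(U0,U2; Y2~),
#   I1_Y1c = I(U1; Y1~|U0),  I2_Y2c = I(U2; Y2~|U0),  I12c = I(U1; U2|U0).
def in_inner_bound(R0, R1, R2, I01_Y1, I02_Y2, I1_Y1c, I2_Y2c, I12c):
    return (R0 + R1 < I01_Y1 and                              # (1)
            R0 + R2 < I02_Y2 and                              # (2)
            R0 + R1 + R2 < I01_Y1 + I2_Y2c - I12c and         # (3)
            R0 + R1 + R2 < I1_Y1c + I02_Y2 - I12c and         # (4)
            2 * R0 + R1 + R2 < I01_Y1 + I02_Y2 - I12c)        # (5)

# Example with made-up values:
print(in_inner_bound(0.2, 0.3, 0.3, I01_Y1=1.0, I02_Y2=1.0,
                     I1_Y1c=0.7, I2_Y2c=0.7, I12c=0.1))  # True
```

The Fourier–Motzkin step itself eliminates the split rates $R_{i1}, R_{i2}$ and the auxiliary rates $\hat{R}_i$; the function above only tests the eliminated form.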

4.2. With Non-Causal CSIT

Superposition coding, Marton coding and Gelfand–Pinsker coding were used to derive an inner bound for the discrete memoryless broadcast channel with non-causal CSIT [13] (Theorem 2). This inner bound, stated as Proposition 1, is the best-known inner bound for the non-causal category without RMSI.
Proposition 1.
A rate triple ( R 0 , R 1 , R 2 ) for the two-receiver discrete memoryless broadcast channel with non-causal CSIT is achievable if it satisfies:
$$\begin{aligned}
R_0 + R_1 &< I(U_0, U_1; \tilde{Y}_1) - I(U_0, U_1; S), && (6)\\
R_0 + R_2 &< I(U_0, U_2; \tilde{Y}_2) - I(U_0, U_2; S), && (7)\\
R_0 + R_1 + R_2 &< I(U_0, U_1; \tilde{Y}_1) + I(U_2; \tilde{Y}_2 \mid U_0) - I(U_1; U_2 \mid U_0) - I(U_0, U_1, U_2; S), && (8)\\
R_0 + R_1 + R_2 &< I(U_1; \tilde{Y}_1 \mid U_0) + I(U_0, U_2; \tilde{Y}_2) - I(U_1; U_2 \mid U_0) - I(U_0, U_1, U_2; S), && (9)\\
2R_0 + R_1 + R_2 &< I(U_0, U_1; \tilde{Y}_1) + I(U_0, U_2; \tilde{Y}_2) - I(U_1; U_2 \mid U_0) - I(U_0, U_1, U_2; S) - I(U_0; S), && (10)
\end{aligned}$$
for some pmf $p(u_0, u_1, u_2 \mid s)$ and some function $x_0 = \gamma(u_0, u_1, u_2, s)$. $\tilde{Y}_i = (Y_i, S)$, $i = 1, 2$, if the channel state is available at receiver $i$ and $\tilde{Y}_i = Y_i$ otherwise.
Here, we review the scheme achieving this inner bound in order to highlight its differences with the scheme for the channel with causal CSIT.
(Codebook construction) The transmission scheme is formed from three sub-codebooks constructed using the pmf $p(u_0, u_1, u_2 \mid s)$ (i.e., $(U_0, U_1, U_2)$ is not independent of $S$). Before constructing the sub-codebooks, using rate splitting, $M_i$, $i = 1, 2$, is divided into two independent messages $M_{i1}$ of rate $R_{i1}$ and $M_{i2}$ of rate $R_{i2}$, such that $R_i = R_{i1} + R_{i2}$. Sub-codebook 0 consists of i.i.d. codewords:
$$u_0^n(m_0, m_{11}, m_{21}, l_0)$$
generated according to $\prod_{j=1}^{n} p_{U_0}(u_{0,j})$, where $l_0 \in \{1, \ldots, 2^{n\hat{R}_0}\}$, i.e., for each $(m_0, m_{11}, m_{21})$, $2^{n\hat{R}_0}$ codewords are generated. Sub-codebook $i$, $i = 1, 2$, consists of codewords:
$$u_i^n(m_0, m_{11}, m_{21}, l_0, m_{i2}, l_i)$$
generated according to:
$$\prod_{j=1}^{n} p_{U_i \mid U_0}\big(u_{i,j} \mid u_{0,j}(m_0, m_{11}, m_{21}, l_0)\big),$$
where $l_i \in \{1, \ldots, 2^{n\hat{R}_i}\}$.
(Encoding) Given $\{m_i\}_{i=0}^{2}$, we first find a triple $(l_0, l_1, l_2)$ such that:
$$\big(U_0^n(\cdot, l_0),\, U_1^n(\cdot, l_0, \cdot, l_1),\, U_2^n(\cdot, l_0, \cdot, l_2),\, S^n\big) \in \mathcal{T}_\epsilon^{(n)}.$$
If there is more than one such triple, we arbitrarily choose one of them, and if there is no such triple, we choose $(l_0, l_1, l_2) = (1, 1, 1)$. We then construct the transmitted codeword as $x_{0,j} = \gamma(u_{0,j}(\cdot), u_{1,j}(\cdot), u_{2,j}(\cdot), s_j)$, $j = 1, 2, \ldots, n$.
(Decoding) Receiver 1 decodes $(\hat{m}_0^{(1)}, \hat{m}_{11}, \hat{m}_{12})$ if it is the unique tuple that satisfies:
$$\big(U_0^n(\cdot, l_0),\, U_1^n(\cdot, l_0, \cdot, l_1),\, \tilde{Y}_1^n\big) \in \mathcal{T}_\epsilon^{(n)} \text{ for some } (m_{21}, l_0, l_1);$$
otherwise, an error is declared. Receiver 2 decodes $(\hat{m}_0^{(2)}, \hat{m}_{21}, \hat{m}_{22})$ if it is the unique tuple that satisfies:
$$\big(U_0^n(\cdot, l_0),\, U_2^n(\cdot, l_0, \cdot, l_2),\, \tilde{Y}_2^n\big) \in \mathcal{T}_\epsilon^{(n)} \text{ for some } (m_{11}, l_0, l_2);$$
otherwise, an error is declared. The $\epsilon$ used in the decoding is strictly greater than the $\epsilon$ used in the encoding.

4.3. A Unified Inner Bound

Here, we show that a single inner-bound expression covers both the causal and the non-causal cases. The inequalities in (6)–(10) apply to both the channel with causal CSIT and the channel with non-causal CSIT. This is because, for the channel with causal CSIT, $(U_0, U_1, U_2)$ is independent of $S$, so the terms $I(U_0; S)$, $I(U_0, U_1; S)$, $I(U_0, U_2; S)$ and $I(U_0, U_1, U_2; S)$ are zero; the inequalities in (6)–(10) then reduce to the ones in (1)–(5). We therefore have a unified inner bound, stated as Corollary 1.
Corollary 1.
A rate triple $(R_0, R_1, R_2)$ is achievable for the discrete memoryless broadcast channel with causal CSIT if it satisfies (6)–(10) for some pmf $p(u_0, u_1, u_2)$ and some function $x_0 = \gamma(u_0, u_1, u_2, s)$. It is achievable for the discrete memoryless broadcast channel with non-causal CSIT if it satisfies (6)–(10) for some pmf $p(u_0, u_1, u_2 \mid s)$ and some function $x_0 = \gamma(u_0, u_1, u_2, s)$.
Remark 1.
This unified inner bound is also the best inner bound for the channel without state. This is because it reduces to Marton’s inner bound with a common message [4] (p. 212) by considering that the channel state is zero with probability one for the channel without state. Therefore, it is a unified inner bound for all three categories without RMSI.
As discussed in Section 2, the capacity region of a non-causal case is larger than or equal to the capacity region of the corresponding causal case. However, an inner bound for a non-causal case is not necessarily larger than an inner bound for the corresponding causal case when the scheme for the latter is not a special case of the scheme for the former. Having a unified inner bound lets us show that the inner bound for a non-causal case (Proposition 1) is larger than or equal to the inner bound for the corresponding causal case (Theorem 1). This is because the domain of the unified inner bound for a causal case (all pmfs $p(u_0, u_1, u_2)$ and all functions $x_0 = \gamma(u_0, u_1, u_2, s)$) is a subset of the domain of the unified inner bound for the corresponding non-causal case (all pmfs $p(u_0, u_1, u_2 \mid s)$ and all functions $x_0 = \gamma(u_0, u_1, u_2, s)$).
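The reduction behind the unification rests on the fact that $I(\,\cdot\,; S) = 0$ whenever the auxiliaries are independent of $S$, while a state-dependent choice pays a positive penalty in (6)–(10). A small numerical sketch (hypothetical pmfs; the helper `mi` is defined here for illustration):

```python
import numpy as np

def mi(p_joint):
    """I(A;B) in bits from a joint pmf p_joint[a, b]."""
    pa = p_joint.sum(1, keepdims=True)
    pb = p_joint.sum(0, keepdims=True)
    mask = p_joint > 0
    return (p_joint[mask] * np.log2(p_joint[mask] / (pa * pb)[mask])).sum()

p_u = np.array([0.25, 0.75])   # hypothetical pmf of an auxiliary U
p_s = np.array([0.5, 0.5])     # channel-state pmf

# Causal coding: U independent of S, so the I(.;S) penalty terms vanish
# and (6)-(10) collapse to (1)-(5).
print(mi(np.outer(p_u, p_s)))             # 0.0

# Non-causal coding: U may depend on S; e.g. a fully correlated binary
# pair gives I(U;S) = 1 bit, which is subtracted in (6)-(10).
print(mi(np.array([[0.5, 0.0], [0.0, 0.5]])))   # 1.0
```

The non-causal scheme accepts this penalty because correlating the auxiliaries with the state can increase the $I(\,\cdot\,; \tilde{Y}_i)$ terms by more than it costs.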
In Appendix A, we show that the scheme for the causal category, described in Section 4.1, is not a special case of the scheme for the non-causal category, described in Section 4.2. However, we show that by considering some special cases of the parameters in the scheme for the non-causal category, it asymptotically almost surely has the same codebook construction, encoding and decoding as the scheme for the causal category.

5. Broadcast Channel with RMSI

In this section, we address the two-receiver discrete memoryless broadcast channel with RMSI. We first propose a pre-coding scheme designed to construct the transmission scheme of a channel with RMSI based on the transmission scheme of the channel without RMSI. By applying this pre-coding scheme to the transmission schemes achieving the best inner bounds for the causal and non-causal categories without RMSI, we then derive a unified inner bound that covers all three categories with RMSI. This inner bound includes the unified inner bound for the three categories without RMSI, stated as Corollary 1, as a special case.

5.1. Moving from without RMSI to with RMSI

We construct a pre-coding scheme for a channel with RMSI by considering $M_m = (M_0, M_3, M_4)$ as a new common message and treating only $M_1$ and $M_2$ as the private messages. $M_m$, $M_1$ and $M_2$ are then fed to the transmission scheme of the same channel without RMSI. Although Receiver 1 need not decode $M_4$, having $M_4$ as a part of the common message does not impose any extra constraint, because Receiver 1 knows $M_4$ a priori; the same argument applies to $M_3$ for Receiver 2. Since Receiver 1 knows $M_4$ a priori and Receiver 2 knows $M_3$ a priori, Receiver 1 decodes $M_m$ over a set of $2^{n(R_0 + R_3)}$ candidates, and Receiver 2 decodes it over a set of $2^{n(R_0 + R_4)}$ candidates.
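The candidate-set counting above is simple exponent arithmetic: each receiver's RMSI removes one component of the merged common message from its search. The toy sketch below uses hypothetical rates and block length, not values from the paper.

```python
# Hypothetical rates and block length illustrating the pre-coding counts.
n = 100
R0, R3, R4 = 0.2, 0.1, 0.15

rate_Mm = R0 + R3 + R4              # rate of the merged common message Mm = (M0, M3, M4)
cands_rx1 = 2 ** (n * (R0 + R3))    # Receiver 1 knows M4 a priori
cands_rx2 = 2 ** (n * (R0 + R4))    # Receiver 2 knows M3 a priori

# Both receivers search strictly fewer candidates than the full 2^(n*rate_Mm).
print(cands_rx1 < 2 ** (n * rate_Mm), cands_rx2 < 2 ** (n * rate_Mm))
```

This is why bundling $M_3$ and $M_4$ into the common message imposes no extra decoding constraint at either receiver.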
Conjecture 1.
We conjecture that our pre-coding scheme is an optimal pre-coding scheme in the sense that if a transmission scheme achieves the capacity region of a channel without RMSI, then the transmission scheme, constructed by applying our pre-coding scheme to that transmission scheme, also achieves the capacity region of the same channel with RMSI.

5.2. A Unified Inner Bound

We here present our unified inner bound for the three categories with RMSI, stated as Theorem 2.
Theorem 2.
A rate tuple ( R 0 , R 1 , R 2 , R 3 , R 4 ) is achievable for the channel with causal CSIT if it satisfies:
$$\begin{aligned}
R_0 + R_1 + R_3 &< I(U_0, U_1; \tilde{Y}_1) - I(U_0, U_1; S), && (11)\\
R_0 + R_2 + R_4 &< I(U_0, U_2; \tilde{Y}_2) - I(U_0, U_2; S), && (12)\\
R_0 + R_1 + R_2 + R_3 &< I(U_0, U_1; \tilde{Y}_1) + I(U_2; \tilde{Y}_2 \mid U_0) - I(U_1; U_2 \mid U_0) - I(U_0, U_1, U_2; S), && (13)\\
R_0 + R_1 + R_2 + R_4 &< I(U_1; \tilde{Y}_1 \mid U_0) + I(U_0, U_2; \tilde{Y}_2) - I(U_1; U_2 \mid U_0) - I(U_0, U_1, U_2; S), && (14)\\
2R_0 + R_1 + R_2 + R_3 + R_4 &< I(U_0, U_1; \tilde{Y}_1) + I(U_0, U_2; \tilde{Y}_2) - I(U_1; U_2 \mid U_0) - I(U_0, U_1, U_2; S) - I(U_0; S), && (15)
\end{aligned}$$
for some pmf $p(u_0, u_1, u_2)$ and some function $x_0 = \gamma(u_0, u_1, u_2, s)$. It is achievable for the channel with non-causal CSIT if it satisfies (11)–(15) for some pmf $p(u_0, u_1, u_2 \mid s)$ and some function $x_0 = \gamma(u_0, u_1, u_2, s)$. $\tilde{Y}_i = (Y_i, S)$, $i = 1, 2$, if the channel state is available at receiver $i$ and $\tilde{Y}_i = Y_i$ otherwise.
Remark 2.
This inner bound is also used for the discrete memoryless broadcast channel with RMSI, without state, where we assume that the channel state is zero with probability one. Therefore, it is a unified inner bound for all three categories with RMSI.
Remark 3.
Having a unified inner bound for the three categories without RMSI and applying the same pre-coding scheme to them are the two basic reasons why we can also have a unified inner bound for the three categories with RMSI.
Proof of Theorem 2.
We prove Theorem 2 by applying our pre-coding scheme to the transmission schemes of the causal and non-causal categories without RMSI.
With causal CSIT: For this category, we apply our pre-coding scheme to the transmission scheme of the discrete memoryless broadcast channel without RMSI, with causal CSIT, described in Section 4.1. Based on our method, Sub-codebook 0 consists of i.i.d. codewords:
$$u_0^n(m_0, m_3, m_4, m_{11}, m_{21}),$$
generated according to $\prod_{j=1}^{n} p_{U_0}(u_{0,j})$. Sub-codebook $i$, $i = 1, 2$, consists of codewords:
$$u_i^n(m_0, m_3, m_4, m_{11}, m_{21}, m_{i2}, l_i),$$
generated according to:
$$\prod_{j=1}^{n} p_{U_i \mid U_0}\big(u_{i,j} \mid u_{0,j}(m_0, m_3, m_4, m_{11}, m_{21})\big),$$
where $l_i \in \{1, \ldots, 2^{n\hat{R}_i}\}$.
Encoding and decoding are performed similarly to the case without RMSI. For the encoding, given $\{m_i\}_{i=0}^{4}$, we first find a pair $(l_1, l_2)$ such that:
$\big(U_0^n(\cdot),\, U_1^n(\cdot, l_1),\, U_2^n(\cdot, l_2)\big) \in \mathcal{T}_\epsilon^n.$
If no such pair exists, we choose $(l_1, l_2) = (1, 1)$. We then construct the transmitted codeword as $x_{0,j} = \gamma(u_{0,j}(\cdot), u_{1,j}(\cdot), u_{2,j}(\cdot), s_j)$, $j = 1, 2, \ldots, n$.
For the decoding, Receiver 1 decodes $(\hat{m}_0^{(1)}, \hat{m}_{11}, \hat{m}_{12}, \hat{m}_3)$ if it is the unique tuple that satisfies:
$\big(U_0^n(\cdot),\, U_1^n(\cdot, l_1),\, \tilde{Y}_1^n\big) \in \mathcal{T}_\epsilon^n \text{ for some } m_{21} \text{ and } l_1;$
otherwise, an error is declared. Since this receiver knows $M_4$ as side information, it decodes $(u_0^n, u_1^n)$ over a set of $2^{n(R_0 + R_1 + R_3 + R_{21} + R'_1)}$ candidates. Receiver 2 decodes $(\hat{m}_0^{(2)}, \hat{m}_{21}, \hat{m}_{22}, \hat{m}_4)$ if it is the unique tuple that satisfies:
$\big(U_0^n(\cdot),\, U_2^n(\cdot, l_2),\, \tilde{Y}_2^n\big) \in \mathcal{T}_\epsilon^n \text{ for some } m_{11} \text{ and } l_2;$
otherwise, an error is declared. Since this receiver knows $M_3$ as side information, it decodes $(u_0^n, u_2^n)$ over a set of $2^{n(R_0 + R_2 + R_4 + R_{11} + R'_2)}$ candidates.
Based on error events, written similarly to the case without RMSI and using the packing lemma [4] (p. 45) and the mutual covering lemma [4] (p. 208), sufficient conditions for achievability are:
$R'_1 + R'_2 > I(U_1;U_2 \mid U_0)$,
$R_{12} + R'_1 < I(U_1;\tilde{Y}_1 \mid U_0)$,
$R_0 + R_1 + R_3 + R_{21} + R'_1 < I(U_0,U_1;\tilde{Y}_1)$,
$R_{22} + R'_2 < I(U_2;\tilde{Y}_2 \mid U_0)$,
$R_0 + R_2 + R_4 + R_{11} + R'_2 < I(U_0,U_2;\tilde{Y}_2)$.
After performing the Fourier–Motzkin elimination, we obtain Conditions (11)–(15). Note that the terms $I(U_0;S)$, $I(U_0,U_1;S)$, $I(U_0,U_2;S)$ and $I(U_0,U_1,U_2;S)$ are zero for this category. The inner bound is computed over all pmfs $p(u_0,u_1,u_2)$ and all functions $x_0 = \gamma(u_0,u_1,u_2,s)$.
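As a sanity check on this elimination (ours, not from the paper), one can sample rate splits and mutual-information values at random and verify that whenever the pre-elimination constraints above hold, Conditions (11)–(15) (with the $S$-terms set to zero, as in the causal category) also hold. In the sketch below, `R1p` and `R2p` stand for the auxiliary rates $R'_1$ and $R'_2$, and the `I*` variables are free nonnegative surrogates for the mutual-information terms; all names are ours.

```python
import random

# Spot-check (not a proof) of the forward direction of the Fourier-Motzkin
# elimination for the causal category: rejection-sample split rates that
# satisfy the pre-elimination constraints, then assert Conditions (11)-(15).
# Ic = I(U1;U2|U0), I1c = I(U1;Y~1|U0), I1 = I(U0,U1;Y~1), and similarly
# for receiver 2; these are treated as free nonnegative numbers.
random.seed(1)

def fm_forward_check(trials=50000):
    hits = 0
    for _ in range(trials):
        Ic = random.uniform(0, 1)
        I1c = random.uniform(0, 2)
        I2c = random.uniform(0, 2)
        I1 = I1c + random.uniform(0, 2)   # I(U0,U1;Y~1) >= I(U1;Y~1|U0)
        I2 = I2c + random.uniform(0, 2)
        R0, R3, R4, R11, R12, R21, R22, R1p, R2p = (
            random.uniform(0, 1) for _ in range(9))
        R1, R2 = R11 + R12, R21 + R22     # message-splitting rates
        # pre-elimination constraints (mutual covering + packing)
        if not (R1p + R2p > Ic
                and R12 + R1p < I1c
                and R0 + R1 + R3 + R21 + R1p < I1
                and R22 + R2p < I2c
                and R0 + R2 + R4 + R11 + R2p < I2):
            continue
        hits += 1
        assert R0 + R1 + R3 < I1                         # (11)
        assert R0 + R2 + R4 < I2                         # (12)
        assert R0 + R1 + R2 + R3 < I1 + I2c - Ic         # (13)
        assert R0 + R1 + R2 + R4 < I1c + I2 - Ic         # (14)
        assert 2*R0 + R1 + R2 + R3 + R4 < I1 + I2 - Ic   # (15)
    return hits

hits = fm_forward_check()
assert hits > 0
```

This only checks the containment direction of the elimination; the reverse direction would require exhibiting a valid rate split for every point of (11)–(15).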
With non-causal CSIT: For this category, we apply our pre-coding scheme to the transmission scheme of the channel without RMSI, with non-causal CSIT, described in Section 4.2. The resulting changes to the codebook construction, encoding and decoding are similar to the ones for the channel with causal CSIT. Based on error events written similarly to the case without RMSI and using the packing lemma and the multivariate covering lemma [4] (p. 218), sufficient conditions for achievability are:
$R'_0 > I(U_0;S)$,
$R'_0 + R'_1 > I(U_0,U_1;S)$,
$R'_0 + R'_2 > I(U_0,U_2;S)$,
$R'_0 + R'_1 + R'_2 > I(U_0,U_1,U_2;S) + I(U_1;U_2 \mid U_0)$,
$R_{12} + R'_1 < I(U_1;\tilde{Y}_1 \mid U_0)$,
$R_0 + R_1 + R_3 + R_{21} + R'_0 + R'_1 < I(U_0,U_1;\tilde{Y}_1)$,
$R_{22} + R'_2 < I(U_2;\tilde{Y}_2 \mid U_0)$,
$R_0 + R_2 + R_4 + R_{11} + R'_0 + R'_2 < I(U_0,U_2;\tilde{Y}_2)$.
After performing the Fourier–Motzkin elimination, we obtain Conditions (11)–(15). Note that $(U_0,U_1,U_2)$ is not independent of $S$ for this category. The inner bound is computed over all pmfs $p(u_0,u_1,u_2 \mid s)$ and all functions $x_0 = \gamma(u_0,u_1,u_2,s)$. ☐

6. New Capacity Results

In this section, we present new capacity results for the two-receiver discrete memoryless broadcast channel with RMSI. These results are established using our inner bound in Theorem 2. We also show that our inner bound is tight for all of the cases whose capacity region was known prior to this work.

6.1. With RMSI, without State

In this subsection, we first derive a general outer bound for the discrete memoryless broadcast channel with RMSI, without state, stated as Theorem 3. This outer bound is developed based on the Nair-El Gamal outer bound for the discrete memoryless broadcast channel without RMSI, without state [25]. We then establish the capacity region for two new cases in this category: the deterministic channel, stated as Theorem 4, and the more capable channel, stated as Theorem 5.
Theorem 3.
If a rate tuple ( R 0 , R 1 , R 2 , R 3 , R 4 ) is achievable for the two-receiver discrete memoryless broadcast channel with RMSI, without state, then it must satisfy:
(16) $R_0 + R_3 \le I(U_0;Y_1)$,
(17) $R_0 + R_4 \le I(U_0;Y_2)$,
(18) $R_0 + R_1 + R_3 \le I(U_0,U_1;Y_1)$,
(19) $R_0 + R_2 + R_4 \le I(U_0,U_2;Y_2)$,
(20) $R_0 + R_1 + R_2 + R_3 \le I(U_0,U_1;Y_1) + I(U_2;Y_2 \mid U_0,U_1)$,
(21) $R_0 + R_1 + R_2 + R_4 \le I(U_1;Y_1 \mid U_0,U_2) + I(U_0,U_2;Y_2)$,
for some pmf $p(u_1)p(u_2)p(u_0 \mid u_1,u_2)$ and some function $x_0 = \gamma(u_0,u_1,u_2)$.
Proof of Theorem 3.
See Appendix B. ☐
Theorem 4.
The capacity region of the two-receiver deterministic broadcast channel with RMSI, without state, is the closure of the set of all rate tuples ( R 0 , R 1 , R 2 , R 3 , R 4 ) , each satisfying:
(22) $R_0 + R_1 + R_3 < H(Y_1)$,
(23) $R_0 + R_2 + R_4 < H(Y_2)$,
(24) $R_0 + R_1 + R_2 + R_3 < H(Y_1) + H(Y_2 \mid U_0,Y_1)$,
(25) $R_0 + R_1 + R_2 + R_4 < H(Y_2) + H(Y_1 \mid U_0,Y_2)$,
(26) $2R_0 + R_1 + R_2 + R_3 + R_4 < I(U_0;Y_1) + H(Y_2) + H(Y_1 \mid U_0,Y_2)$,
for some pmf p ( u 0 , x 0 ) .
Proof of Theorem 4.
(Achievability) Achievability is proven by setting $(U_0, U_1, U_2, S) = (U_0, Y_1, Y_2, 0)$ in (11)–(15).
(Converse) We start with the outer bound in Theorem 3. By removing Condition (17), summing Conditions (16) and (21) into a single condition, and enlarging the domain of the outer bound, we obtain the following looser outer bound: if a rate tuple is achievable, then it must satisfy:
(27) $R_0 + R_1 + R_3 \le I(U_0,U_1;Y_1)$,
(28) $R_0 + R_2 + R_4 \le I(U_0,U_2;Y_2)$,
(29) $R_0 + R_1 + R_2 + R_3 \le I(U_0,U_1;Y_1) + I(U_2;Y_2 \mid U_0,U_1)$,
(30) $R_0 + R_1 + R_2 + R_4 \le I(U_1;Y_1 \mid U_0,U_2) + I(U_0,U_2;Y_2)$,
(31) $2R_0 + R_1 + R_2 + R_3 + R_4 \le I(U_0;Y_1) + I(U_1;Y_1 \mid U_0,U_2) + I(U_0,U_2;Y_2)$,
for some pmf $p(u_0,u_1,u_2)$ and some function $x_0 = \gamma(u_0,u_1,u_2)$.
By relaxing Conditions (27)–(31), we can write them in the form of Conditions (22)–(26). As an example, we show this for (31).
$2R_0 + R_1 + R_2 + R_3 + R_4 \le I(U_0;Y_1) + I(U_1;Y_1 \mid U_0,U_2) + I(U_0,U_2;Y_2)$
$\overset{(a)}{=} I(U_0;Y_1) + H(Y_1 \mid U_0,U_2) + H(Y_2) - H(Y_2 \mid U_0,U_2)$
$\le I(U_0;Y_1) + H(Y_1,Y_2 \mid U_0,U_2) + H(Y_2) - H(Y_2 \mid U_0,U_2)$
$= I(U_0;Y_1) + H(Y_1 \mid U_0,U_2,Y_2) + H(Y_2)$
$\le I(U_0;Y_1) + H(Y_1 \mid U_0,Y_2) + H(Y_2)$,
where $(a)$ is due to $H(Y_i \mid U_0,U_1,U_2) = 0$, $i = 1, 2$, for the deterministic channel. ☐
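The chain of (in)equalities above can be checked numerically for randomly drawn deterministic channels. The sketch below (ours, with hypothetical helper names) builds a joint pmf over $(U_0,U_1,U_2,Y_1,Y_2)$ with $y_1 = f_1(x_0)$, $y_2 = f_2(x_0)$ and $x_0 = \gamma(u_0,u_1,u_2)$, and asserts the end-to-end inequality.

```python
import itertools, math, random

# Numerical illustration (ours, not from the paper) of the converse chain:
# for a deterministic channel, y1 = f1(x0), y2 = f2(x0), x0 = gamma(u0,u1,u2),
#   I(U0;Y1) + I(U1;Y1|U0,U2) + I(U0,U2;Y2)
#     <= I(U0;Y1) + H(Y2) + H(Y1|U0,Y2).
random.seed(2)

def H(p):  # entropy (bits) of a dict value -> probability
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marg(joint, idx):  # marginal over the coordinates listed in idx
    out = {}
    for k, v in joint.items():
        key = tuple(k[i] for i in idx)
        out[key] = out.get(key, 0.0) + v
    return out

def Hj(j, a):        return H(marg(j, a))
def Hc(j, a, b):     return Hj(j, a + b) - Hj(j, b)        # H(A|B)
def I(j, a, b):      return Hj(j, a) + Hj(j, b) - Hj(j, a + b)
def Ic(j, a, b, c):  return Hc(j, a, c) - Hc(j, a, b + c)  # I(A;B|C)

U, X, Y = range(2), range(4), range(3)
for _ in range(20):
    gamma = {u: random.choice(X) for u in itertools.product(U, U, U)}
    f1 = {x: random.choice(Y) for x in X}
    f2 = {x: random.choice(Y) for x in X}
    w = {u: random.random() for u in itertools.product(U, U, U)}
    z = sum(w.values())
    # joint pmf over (u0, u1, u2, y1, y2); coordinates 0..4
    joint = {u + (f1[gamma[u]], f2[gamma[u]]): p / z for u, p in w.items()}
    U0, U1, U2, Y1, Y2 = [0], [1], [2], [3], [4]
    lhs = I(joint, U0, Y1) + Ic(joint, U1, Y1, U0 + U2) + I(joint, U0 + U2, Y2)
    rhs = I(joint, U0, Y1) + Hj(joint, Y2) + Hc(joint, Y1, U0 + Y2)
    assert lhs <= rhs + 1e-9
```

The helper functions only use the chain rule $H(A \mid B) = H(A,B) - H(B)$, so any of the individual steps $(a)$ and after can be checked the same way.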
Theorem 5.
The capacity region of the two-receiver more capable broadcast channel with RMSI, without state, is the closure of the set of all rate tuples ( R 0 , R 1 , R 2 , R 3 , R 4 ) , each satisfying:
$R_0 + R_2 + R_4 < I(U_0;Y_2)$,
$R_0 + R_1 + R_2 + R_4 < I(U_0;Y_2) + I(X_0;Y_1 \mid U_0)$,
$R_0 + R_1 + R_2 + R_3 < I(X_0;Y_1)$,
for some pmf p ( u 0 , x 0 ) .
Proof of Theorem 5.
(Achievability) Achievability is proven by setting $(U_0, U_1, U_2, S) = (U_0, X_0, 0, 0)$ in (11)–(15). Note that $U_2 = 0$ implies that $M_{21} = M_2$ and $R'_1 = 0$.
(Converse) According to Theorem 3, if a rate tuple is achievable, then it must satisfy:
(32) $R_0 + R_2 + R_4 \le I(U_0,U_2;Y_2)$,
(33) $R_0 + R_1 + R_2 + R_3 \le I(U_0,U_1;Y_1) + I(U_2;Y_2 \mid U_0,U_1)$,
(34) $R_0 + R_1 + R_2 + R_4 \le I(U_1;Y_1 \mid U_0,U_2) + I(U_0,U_2;Y_2)$,
for some pmf $p(u_0,u_1,u_2)$ and some function $x_0 = \gamma(u_0,u_1,u_2)$. Since, for the channel without state, we have:
$I(U_2;Y_2 \mid U_0,U_1) = I(X_0;Y_2 \mid U_0,U_1), \quad I(U_1;Y_1 \mid U_0,U_2) = I(X_0;Y_1 \mid U_0,U_2),$
and for the more capable channel, we have [4] (p. 123):
$I(X_0;Y_2 \mid U_0,U_1) \le I(X_0;Y_1 \mid U_0,U_1),$
we can write (33) and (34) as follows.
$R_0 + R_1 + R_2 + R_3 \le I(U_0,U_1;Y_1) + I(U_2;Y_2 \mid U_0,U_1) = I(U_0,U_1;Y_1) + I(X_0;Y_2 \mid U_0,U_1) \le I(U_0,U_1;Y_1) + I(X_0;Y_1 \mid U_0,U_1) = I(X_0;Y_1).$ (35)
$R_0 + R_1 + R_2 + R_4 \le I(U_1;Y_1 \mid U_0,U_2) + I(U_0,U_2;Y_2) = I(X_0;Y_1 \mid U_0,U_2) + I(U_0,U_2;Y_2).$ (36)
From (32), (35) and (36), and considering $U_0 = (U_0, U_2)$, if a rate tuple is achievable, then it must satisfy:
$R_0 + R_2 + R_4 \le I(U_0;Y_2)$,
$R_0 + R_1 + R_2 + R_3 \le I(X_0;Y_1)$,
$R_0 + R_1 + R_2 + R_4 \le I(X_0;Y_1 \mid U_0) + I(U_0;Y_2)$,
for some pmf $p(u_0, x_0)$. ☐

6.2. With RMSI, with Causal CSIT

In this subsection, we establish the capacity region of the two-receiver (stochastically) degraded broadcast channel with RMSI where the channel state is available causally: (i) at only the transmitter; (ii) at the transmitter and the non-degraded receiver; or (iii) at the transmitter and both receivers.
Theorem 6.
The capacity region of the two-receiver degraded broadcast channel with RMSI where the channel state is available causally at only the transmitter is the closure of the set of all rate tuples ( R 0 , R 1 , R 2 , R 3 , R 4 ) , each satisfying:
$R_0 + R_2 + R_4 < I(U_0;Y_2)$,
$R_0 + R_1 + R_2 + R_3 < I(U_0,U_1;Y_1)$,
$R_0 + R_1 + R_2 + R_4 < I(U_0;Y_2) + I(U_1;Y_1 \mid U_0)$,
for some pmf $p(u_0, u_1)$ and some function $x_0 = \gamma(u_0, u_1, s)$.
Proof of Theorem 6.
(Achievability) Achievability is proven by setting $U_2 = 0$ in (11)–(15). Note that, for a causal case, $U_2 = 0$ implies that $M_{21} = M_2$ and $R'_1 = 0$. (Converse) See Appendix C. ☐
Theorem 7.
The capacity region of the two-receiver degraded broadcast channel with RMSI where the channel state is available causally either at the transmitter and Receiver 1 or at the transmitter and both receivers is the closure of the set of all rate tuples ( R 0 , R 1 , R 2 , R 3 , R 4 ) , each satisfying:
$R_0 + R_2 + R_4 < I(U_0;\tilde{Y}_2)$,
$R_0 + R_1 + R_2 + R_3 < I(X_0;Y_1 \mid S)$,
$R_0 + R_1 + R_2 + R_4 < I(U_0;\tilde{Y}_2) + I(X_0;Y_1 \mid U_0, S)$,
for some pmf $p(u_0, u_1)$ and some function $x_0 = \gamma(u_0, u_1, s)$. $\tilde{Y}_2 = (Y_2, S)$ if the channel state is available at Receiver 2 and $\tilde{Y}_2 = Y_2$ otherwise.
Proof of Theorem 7.
(Achievability) Achievability is proven by setting $U_2 = 0$ in (11)–(15). Note that, for a causal case, $U_2 = 0$ implies that $M_{21} = M_2$, $R'_1 = 0$ and:
$I(U_0,U_1;Y_1,S) = I(U_0,U_1;Y_1 \mid S) = I(X_0;Y_1 \mid S), \quad I(U_1;Y_1,S \mid U_0) = I(U_1;Y_1 \mid U_0,S) = I(X_0;Y_1 \mid U_0,S).$
(Converse) See Appendix D. ☐
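Both identities above are consequences of the chain rule, the independence of $(U_0,U_1)$ and $S$ in the causal case, and the Markov structure of the channel. The sketch below (ours; all names hypothetical) verifies them numerically for a randomly drawn $p(u_0,u_1)p(s)$, a deterministic $\gamma$ and a channel $p(y_1 \mid x_0, s)$.

```python
import itertools, math, random

# Numerical check (ours) of the causal-case identities:
# with (U0,U1) independent of S, x0 = gamma(u0,u1,s) deterministic, and a
# channel p(y1|x0,s),
#   I(U0,U1; Y1,S) = I(U0,U1; Y1|S) = I(X0; Y1|S),
#   I(U1; Y1,S|U0) = I(U1; Y1|U0,S) = I(X0; Y1|U0,S).
random.seed(3)

def H(p):
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marg(joint, idx):
    out = {}
    for k, v in joint.items():
        key = tuple(k[i] for i in idx)
        out[key] = out.get(key, 0.0) + v
    return out

def Hj(j, a):        return H(marg(j, a))
def Hc(j, a, b):     return Hj(j, a + b) - Hj(j, b)
def I(j, a, b):      return Hj(j, a) + Hj(j, b) - Hj(j, a + b)
def Ic(j, a, b, c):  return Hc(j, a, c) - Hc(j, a, b + c)

def rand_pmf(n):
    w = [random.random() for _ in range(n)]
    z = sum(w)
    return [v / z for v in w]

Uu, Ss, Xx, Yy = range(2), range(2), range(3), range(2)
pu = {u: p for u, p in zip(itertools.product(Uu, Uu), rand_pmf(4))}  # p(u0,u1)
ps = dict(zip(Ss, rand_pmf(2)))                                      # p(s)
gamma = {k: random.choice(Xx) for k in itertools.product(Uu, Uu, Ss)}
py1 = {(x, s): rand_pmf(2) for x in Xx for s in Ss}                  # p(y1|x0,s)

# joint pmf over (u0, u1, s, x0, y1); coordinates 0..4
joint = {}
for (u0, u1), p in pu.items():
    for s, q in ps.items():
        x = gamma[(u0, u1, s)]
        for y1 in Yy:
            joint[(u0, u1, s, x, y1)] = p * q * py1[(x, s)][y1]

U01, U1, S, X0, Y1 = [0, 1], [1], [2], [3], [4]
tol = 1e-9
assert abs(I(joint, U01, Y1 + S) - Ic(joint, U01, Y1, S)) < tol
assert abs(Ic(joint, U01, Y1, S) - Ic(joint, X0, Y1, S)) < tol
assert abs(Ic(joint, U1, Y1 + S, [0]) - Ic(joint, U1, Y1, [0] + S)) < tol
assert abs(Ic(joint, U1, Y1, [0] + S) - Ic(joint, X0, Y1, [0] + S)) < tol
```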

6.3. With RMSI, with Non-Causal CSIT

In this subsection, we establish the capacity region of the two-receiver (stochastically) degraded broadcast channel with RMSI where the channel state is available non-causally: (i) at the transmitter and the non-degraded receiver; or (ii) at the transmitter and both receivers.
Theorem 8.
The capacity region of the two-receiver degraded broadcast channel with RMSI where the channel state is available non-causally either at the transmitter and Receiver 1 or at the transmitter and both receivers is the closure of the set of all rate tuples ( R 0 , R 1 , R 2 , R 3 , R 4 ) , each satisfying:
$R_0 + R_2 + R_4 < I(U_0;\tilde{Y}_2) - I(U_0;S)$,
$R_0 + R_1 + R_2 + R_3 < I(X_0;Y_1 \mid S)$,
$R_0 + R_1 + R_2 + R_4 < I(U_0;\tilde{Y}_2) - I(U_0;S) + I(X_0;Y_1 \mid U_0, S)$,
for some pmf $p(u_0, u_1 \mid s)$ and some function $x_0 = \gamma(u_0, u_1, s)$. $\tilde{Y}_2 = (Y_2, S)$ if the channel state is available at Receiver 2 and $\tilde{Y}_2 = Y_2$ otherwise.
Proof of Theorem 8.
(Achievability) Achievability is proven by setting $U_2 = 0$ in (11)–(15). Note that, for a non-causal case, $U_2 = 0$ implies that $M_{21} = M_2$, and:
$I(U_0,U_1;Y_1,S) - I(U_0,U_1;S) = I(U_0,U_1;Y_1 \mid S) = I(X_0;Y_1 \mid S).$
(Converse) See Appendix D. ☐

6.4. Discussion on Prior Known Results

We here show that our inner bound in Theorem 2 is also tight for all of the special cases of the two-receiver discrete memoryless channel with RMSI whose capacity region was known prior to this work.
With RMSI, without state: The capacity region of the discrete memoryless channel with complementary RMSI is achieved by multiplexing all of the requested messages into only one codebook [15,16]. This scheme is a special case of our scheme obtained by setting ( U 0 , U 1 , U 2 , S ) = ( X 0 , 0 , 0 , 0 ) . Note that M 1 and M 2 are equal to zero in this message setup.
The capacity region of the discrete memoryless channel with degraded message sets (due to RMSI) is achieved by superposition coding [17]. This scheme is a special case of our scheme obtained by setting ( U 0 , U 1 , U 2 , S ) = ( U 0 , X 0 , 0 , 0 ) or ( U 0 , U 1 , U 2 , S ) = ( U 0 , 0 , X 0 , 0 ) depending on whether Receiver 1 or Receiver 2 needs to decode the whole set of the source messages, respectively. Note that either M 1 or M 2 is equal to zero in this message setup.
The less noisy broadcast channel with RMSI is a special case of the more capable broadcast channel with RMSI [4]. Thus, our scheme can also achieve the capacity region of the less noisy channel.
With RMSI, with causal CSIT: By setting ( U 1 , U 2 ) = ( 0 , 0 ) , our scheme for the causal category reduces to the one used by Khormuji et al. [19] to establish the capacity region of the two-receiver discrete memoryless broadcast channel with complementary RMSI where the channel state is available causally at only the transmitter.
With RMSI, with non-causal CSIT: By setting ( U 1 , U 2 ) = ( 0 , 0 ) , our scheme for the non-causal category reduces to the one used by Oechtering and Skoglund [20] to establish the capacity region of the two-receiver discrete memoryless broadcast channel with complementary RMSI where the channel state is available non-causally at the transmitter and one of the receivers.

7. Conclusions

We proposed a pre-coding scheme designed to construct transmission schemes of two-receiver broadcast channels with receiver message side information (RMSI) using the best transmission schemes for the channels without RMSI. This provides a general approach for utilizing RMSI in different two-receiver broadcast channels. Employing our pre-coding scheme, we derived a unified inner bound for three categories of the discrete memoryless broadcast channel with RMSI: (i) channel without state; (ii) channel with causal channel state information at the transmitter (CSIT); and (iii) channel with non-causal CSIT. We showed that our inner bound establishes the capacity region of some new cases in each of the three categories. We also showed that our inner bound is tight for all of the cases whose capacity region was known prior to this work. These results validated our approach for utilizing RMSI in two-receiver broadcast channels.

Acknowledgments

This work is supported by the Australian Research Council under Grants FT110100195, FT140100219 and DP150100903.

Author Contributions

Behzad Asadi developed this work in discussion with Lawrence Ong and Sarah J. Johnson. Behzad Asadi wrote the article with input from Lawrence Ong and Sarah J. Johnson. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this section, we show that the scheme for the causal category, described in Section 4.1, cannot be considered a special case of the scheme for the non-causal category, described in Section 4.2. Nevertheless, the rate regions achievable by the two schemes have similar expressions. This yields a unified inner bound for both causal and non-causal cases, from which we can show that the inner bound for a non-causal case is at least as large as the inner bound for the corresponding causal case. We also explain why this holds even though the two schemes differ.
We consider a scheme as a special case of another scheme when the latter reduces to the former by considering some special cases of its parameters, e.g., superposition coding is a special case of the scheme achieving Marton’s inner bound with a common message [4] (p. 212).
Consider the encoding rule of the scheme for the non-causal category where the encoder finds a triple ( U 0 n , U 1 n , U 2 n ) , such that:
$\big(U_0^n, U_1^n, U_2^n, S^n\big) \in \mathcal{T}_\epsilon^n(U_0,U_1,U_2,S).$
The encoder for the causal category cannot check this rule, since it knows the complete state sequence $S^n$ only at the end of the transmission. Hence, the scheme for the causal category is not a special case of the scheme for the non-causal category.
By choosing $p(u_0,u_1,u_2 \mid s) = p(u_0,u_1,u_2)$ for all $s$ and setting $R'_0 = 0$, the scheme for the non-causal category has the same codebook construction and decoding approach as the scheme for the causal category. The only difference is that, for the channel state realization $s^n$, the encoder for the non-causal category finds a triple $(U_0^n, U_1^n, U_2^n)$ such that:
$(U_0^n, U_1^n, U_2^n) \in \mathcal{T}_\epsilon^n(U_0,U_1,U_2 \mid s^n),$
where:
$\mathcal{T}_\epsilon^n(U_0,U_1,U_2 \mid s^n) = \big\{ (u_0^n,u_1^n,u_2^n) : (u_0^n,u_1^n,u_2^n,s^n) \in \mathcal{T}_\epsilon^n(U_0,U_1,U_2,S) \big\},$
and the encoder for the causal category finds a triple $(U_0^n, U_1^n, U_2^n)$ such that:
$(U_0^n, U_1^n, U_2^n) \in \mathcal{T}_\epsilon^n(U_0,U_1,U_2).$
Therefore, the transmitted codewords may be different. However, according to the properties of joint typicality [4] (p. 30), we have:
  • $\mathcal{T}_\epsilon^n(U_0,U_1,U_2 \mid s^n) \subseteq \mathcal{T}_\epsilon^n(U_0,U_1,U_2)$.
  • For sufficiently large $n$,
    $(1-\epsilon)\, 2^{n(H(U_0,U_1,U_2) - \epsilon H(U_0,U_1,U_2))} \le |\mathcal{T}_\epsilon^n(U_0,U_1,U_2)| \le 2^{n(H(U_0,U_1,U_2) + \epsilon H(U_0,U_1,U_2))}$.
  • If $s^n \in \mathcal{T}_{\epsilon'}^n$, $\epsilon' < \epsilon$, then, for sufficiently large $n$,
    $(1-\epsilon)\, 2^{n(H(U_0,U_1,U_2) - \epsilon H(U_0,U_1,U_2))} \le |\mathcal{T}_\epsilon^n(U_0,U_1,U_2 \mid s^n)| \le 2^{n(H(U_0,U_1,U_2) + \epsilon H(U_0,U_1,U_2))}$.
    Note that in this item, we have also used the fact that $p(u_0,u_1,u_2 \mid s) = p(u_0,u_1,u_2)$, which results in $H(U_0,U_1,U_2 \mid S) = H(U_0,U_1,U_2)$.
  • $P\{S^n \in \mathcal{T}_{\epsilon'}^n(S)\} \to 1$ as $n$ tends to infinity.
Consequently,
$P\big\{ \mathcal{T}_\epsilon^n(U_0,U_1,U_2 \mid s^n) = \mathcal{T}_\epsilon^n(U_0,U_1,U_2) \big\} \to 1,$
as $n$ tends to infinity. Hence, by choosing $p(u_0,u_1,u_2 \mid s) = p(u_0,u_1,u_2)$, the scheme for the non-causal category asymptotically almost surely performs the same encoding as the scheme for the causal category. This leads the scheme for the non-causal category to achieve the same rate region as the scheme for the causal category.
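The first property above (joint typicality implies marginal typicality) can be verified exhaustively at a small block length. The toy sketch below (ours) uses a single auxiliary $U$ in place of $(U_0,U_1,U_2)$ to keep the enumeration manageable; set equality, by contrast, is only an asymptotic statement and is deliberately not asserted.

```python
import itertools

# Toy exhaustive check (ours) of the subset property above, with a single
# auxiliary U in place of (U0,U1,U2): every u^n jointly typical with s^n is
# also marginally typical, i.e. T_eps^n(U|s^n) is a subset of T_eps^n(U).
p_u = {0: 0.3, 1: 0.7}
p_s = {0: 0.5, 1: 0.5}
p_us = {(u, s): p_u[u] * p_s[s] for u in p_u for s in p_s}  # U independent of S
n, eps = 14, 0.25
s_seq = (0,) * 7 + (1,) * 7            # a typical state sequence

def typical_u(useq):
    for u, p in p_u.items():
        if abs(useq.count(u) / n - p) > eps * p:
            return False
    return True

def jointly_typical(useq, sseq):
    for (u, s), p in p_us.items():
        cnt = sum(1 for a, b in zip(useq, sseq) if (a, b) == (u, s))
        if abs(cnt / n - p) > eps * p:
            return False
    return True

T_cond = [u for u in itertools.product(range(2), repeat=n)
          if jointly_typical(u, s_seq)]
T_marg = set(u for u in itertools.product(range(2), repeat=n) if typical_u(u))
assert len(T_cond) > 0
assert all(u in T_marg for u in T_cond)   # subset property
assert len(T_cond) < len(T_marg)          # equality is only asymptotic
```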

Appendix B

In this section, we present the proof of Theorem 3, which is based on the proof of the Nair-El Gamal outer bound for the channel without RMSI [4] (p. 217).
Proof. 
By Fano’s inequality [4] (p. 19), we have:
(A1) $H(M_0,M_1,M_3 \mid Y_1^n, M_4) \le n\epsilon_{1,n}$,
(A2) $H(M_0,M_2,M_4 \mid Y_2^n, M_3) \le n\epsilon_{2,n}$,
where $\epsilon_{i,n} \to 0$ as $n \to \infty$ for $i = 1, 2$. For the sake of simplicity, we use $\epsilon_n$ instead of $\epsilon_{i,n}$ in the remainder.
Using (A1) and (A2), if a rate tuple ( R 0 , R 1 , R 2 , R 3 , R 4 ) is achievable, then it must satisfy:
(A3) $n(R_0 + R_3) \le I(M_0,M_3;Y_1^n \mid M_4) + n\epsilon_n$,
(A4) $n(R_0 + R_4) \le I(M_0,M_4;Y_2^n \mid M_3) + n\epsilon_n$,
(A5) $n(R_0 + R_1 + R_3) \le I(M_0,M_1,M_3;Y_1^n \mid M_4) + n\epsilon_n$,
(A6) $n(R_0 + R_2 + R_4) \le I(M_0,M_2,M_4;Y_2^n \mid M_3) + n\epsilon_n$,
(A7) $n(R_0 + R_1 + R_2 + R_3) \le I(M_0,M_1,M_3;Y_1^n \mid M_4) + I(M_2;Y_2^n \mid M_0,M_1,M_3,M_4) + 2n\epsilon_n$,
(A8) $n(R_0 + R_1 + R_2 + R_4) \le I(M_1;Y_1^n \mid M_0,M_2,M_3,M_4) + I(M_0,M_2,M_4;Y_2^n \mid M_3) + 2n\epsilon_n$.
Inequalities (A3)–(A8) yield Conditions (16)–(21) by using the auxiliary random variables defined as:
$U_{0,i} = (M_0, M_3, M_4, Y_1^{i-1}, Y_{2,i+1}^n), \quad U_{1,i} = M_1, \quad U_{2,i} = M_2,$
where $Y_1^{i-1} = (Y_{1,1}, Y_{1,2}, \ldots, Y_{1,i-1})$ and $Y_{2,i+1}^n = (Y_{2,i+1}, Y_{2,i+2}, \ldots, Y_{2,n})$.
We here only show how Inequalities (A3) and (A7) yield Conditions (16) and (20), respectively. We just need to follow similar steps for the rest.
In (A3), we expand the mutual information term as follows:
$I(M_0,M_3;Y_1^n \mid M_4) = \sum_{i=1}^{n} I(M_0,M_3;Y_{1,i} \mid M_4, Y_1^{i-1}) \le \sum_{i=1}^{n} I(M_0,M_3,M_4,Y_1^{i-1};Y_{1,i}) \le \sum_{i=1}^{n} I(U_{0,i};Y_{1,i}).$
Then, since $\epsilon_n \to 0$ as $n \to \infty$, by using the standard time-sharing argument [4] (p. 114), we have:
$R_0 + R_3 \le I(U_0;Y_1).$
In (A7), we expand the mutual information terms as follows.
$I(M_0,M_1,M_3;Y_1^n \mid M_4) = \sum_{i=1}^{n} I(M_0,M_1,M_3;Y_{1,i} \mid M_4,Y_1^{i-1}) \le \sum_{i=1}^{n} I(M_0,M_1,M_3,M_4,Y_1^{i-1};Y_{1,i}) = \sum_{i=1}^{n} \Big[ I(M_0,M_1,M_3,M_4,Y_1^{i-1},Y_{2,i+1}^n;Y_{1,i}) - I(Y_{2,i+1}^n;Y_{1,i} \mid M_0,M_1,M_3,M_4,Y_1^{i-1}) \Big],$ (A9)
$I(M_2;Y_2^n \mid M_0,M_1,M_3,M_4) = \sum_{i=1}^{n} I(M_2;Y_{2,i} \mid M_0,M_1,M_3,M_4,Y_{2,i+1}^n) \le \sum_{i=1}^{n} I(M_2,Y_1^{i-1};Y_{2,i} \mid M_0,M_1,M_3,M_4,Y_{2,i+1}^n) = \sum_{i=1}^{n} \Big[ I(Y_1^{i-1};Y_{2,i} \mid M_0,M_1,M_3,M_4,Y_{2,i+1}^n) + I(M_2;Y_{2,i} \mid M_0,M_1,M_3,M_4,Y_1^{i-1},Y_{2,i+1}^n) \Big].$ (A10)
Then, since $\epsilon_n \to 0$ as $n \to \infty$, from (A7), (A9), (A10), the Csiszár sum identity [4] (p. 25) and the time-sharing argument, we have:
$R_0 + R_1 + R_2 + R_3 \le I(U_0,U_1;Y_1) + I(U_2;Y_2 \mid U_0,U_1).$
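The Csiszár sum identity invoked in this step holds for every joint pmf, and it can be verified numerically. The sketch below (ours; helper names hypothetical) checks it for $n = 3$ and a random joint distribution of $(Y_1^n, Y_2^n)$ over binary alphabets.

```python
import itertools, math, random

# Numerical check (ours) of the Csiszar sum identity for a random joint pmf:
#   sum_i I(Y2_{i+1}^n ; Y1_i | Y1^{i-1}) = sum_i I(Y1^{i-1} ; Y2_i | Y2_{i+1}^n).
random.seed(4)

def H(p):
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marg(joint, idx):
    out = {}
    for k, v in joint.items():
        key = tuple(k[i] for i in idx)
        out[key] = out.get(key, 0.0) + v
    return out

def Hj(j, a):     return H(marg(j, a))
def Hc(j, a, b):  return Hj(j, a + b) - Hj(j, b)

def Ic(j, a, b, c):  # I(A;B|C); handles empty A, B or C
    if not a or not b:
        return 0.0
    return Hc(j, a, c) - Hc(j, a, b + c)

n = 3
# coordinates 0..2 hold Y1_1..Y1_3, coordinates 3..5 hold Y2_1..Y2_3
w = {k: random.random() for k in itertools.product(range(2), repeat=2 * n)}
z = sum(w.values())
joint = {k: v / z for k, v in w.items()}

def Y1(i):       return [i - 1]                    # Y1_i (1-indexed)
def Y2(i):       return [n + i - 1]                # Y2_i
def Y1past(i):   return list(range(0, i - 1))      # Y1^{i-1}
def Y2future(i): return list(range(n + i, 2 * n))  # Y2_{i+1}^n

lhs = sum(Ic(joint, Y2future(i), Y1(i), Y1past(i)) for i in range(1, n + 1))
rhs = sum(Ic(joint, Y1past(i), Y2(i), Y2future(i)) for i in range(1, n + 1))
assert abs(lhs - rhs) < 1e-9
```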

Appendix C

In this section, we present the converse proof of Theorem 6. In the converse, we assume that the broadcast channel is physically degraded, as the capacity region of the stochastically-degraded broadcast channel is equal to that of its equivalent physically-degraded broadcast channel.
Proof. 
(Converse) By Fano’s inequality, we have (A1) and (A2). From (A2) and the physical degradedness of the channel, we have:
H M 0 , M 2 , M 4 Y 1 n , M 3 H M 0 , M 2 , M 4 Y 2 n , M 3 n ϵ n ,
and from (A1) and (A11), we have:
H M 0 , M 2 , M 3 Y 1 n , M 4 2 n ϵ n .
Using (A1), (A2) and (A12), we obtain the following necessary conditions for achievability:
(A13) $nR_1 \le I(M_1;Y_1^n \mid M_0,M_2,M_3,M_4) + n\epsilon_n$,
(A14) $n(R_0 + R_2 + R_3) \le I(M_0,M_2,M_3;Y_1^n \mid M_4) + 2n\epsilon_n$,
(A15) $n(R_0 + R_2 + R_4) \le I(M_0,M_2,M_4;Y_2^n \mid M_3) + n\epsilon_n$.
We now define the auxiliary random variables $U_{0,i}$ and $U_{1,i}$ as:
$U_{0,i} = (M_0,M_2,M_3,M_4,Y_1^{i-1}), \quad U_{1,i} = (M_1, S^{i-1}),$
and expand the mutual information terms in (A13)–(A15) respectively as follows.
$I(M_1;Y_1^n \mid M_0,M_2,M_3,M_4) = \sum_{i=1}^{n} I(M_1;Y_{1,i} \mid M_0,M_2,M_3,M_4,Y_1^{i-1}) \le \sum_{i=1}^{n} I(M_1,S^{i-1};Y_{1,i} \mid M_0,M_2,M_3,M_4,Y_1^{i-1}) = \sum_{i=1}^{n} I(U_{1,i};Y_{1,i} \mid U_{0,i}),$ (A16)
$I(M_0,M_2,M_3;Y_1^n \mid M_4) = \sum_{i=1}^{n} I(M_0,M_2,M_3;Y_{1,i} \mid M_4,Y_1^{i-1}) \le \sum_{i=1}^{n} I(M_0,M_2,M_3,M_4,Y_1^{i-1};Y_{1,i}) = \sum_{i=1}^{n} I(U_{0,i};Y_{1,i}),$ (A17)
$I(M_0,M_2,M_4;Y_2^n \mid M_3) = \sum_{i=1}^{n} I(M_0,M_2,M_4;Y_{2,i} \mid M_3,Y_2^{i-1}) \le \sum_{i=1}^{n} I(M_0,M_2,M_3,M_4,Y_2^{i-1};Y_{2,i}) \le \sum_{i=1}^{n} I(M_0,M_2,M_3,M_4,Y_2^{i-1},Y_1^{i-1};Y_{2,i}) \overset{(a)}{=} \sum_{i=1}^{n} I(M_0,M_2,M_3,M_4,Y_1^{i-1};Y_{2,i}) = \sum_{i=1}^{n} I(U_{0,i};Y_{2,i}),$ (A18)
where ( a ) follows from the physical degradedness of the channel.
Finally, since $\epsilon_n \to 0$ as $n \to \infty$, substituting (A16)–(A18) into (A13)–(A15) and using the time-sharing argument completes the converse proof. Note that $(U_{0,i}, U_{1,i})$ is independent of $S_i$, and $X_{0,i}$ is a function of $(U_{0,i}, U_{1,i}, S_i)$. ☐

Appendix D

In this section, we present the converse proof of Theorems 7 and 8. We here also assume that the broadcast channel is physically degraded.
Proof. 
(Converse) By Fano’s inequality, we have:
(A19) $H(M_0,M_1,M_3 \mid Y_1^n, S^n, M_4) \le n\epsilon_n$,
(A20) $H(M_0,M_2,M_4 \mid \tilde{Y}_2^n, M_3) \le n\epsilon_n$,
where $\epsilon_n \to 0$ as $n \to \infty$.
From (A20) and the physical degradedness of the channel, we have:
(A21) $H(M_0,M_2,M_4 \mid Y_1^n, S^n, M_3) \le H(M_0,M_2,M_4 \mid Y_2^n, S^n, M_3) \le n\epsilon_n$,
and from (A19) and (A21), we have:
(A22) $H(M_0,M_1,M_2,M_3 \mid Y_1^n, S^n, M_4) \le 2n\epsilon_n$.
Using (A19), (A20) and (A22), if a rate tuple ( R 0 , R 1 , R 2 , R 3 , R 4 ) is achievable, then it must satisfy:
(A23) $nR_1 \le I(M_1;Y_1^n \mid M_0,M_2,M_3,M_4,S^n) + n\epsilon_n$,
(A24) $n(R_0 + R_1 + R_2 + R_3) \le I(M_0,M_1,M_2,M_3;Y_1^n \mid M_4,S^n) + 2n\epsilon_n$,
(A25) $n(R_0 + R_2 + R_4) \le I(M_0,M_2,M_4;\tilde{Y}_2^n \mid M_3) + n\epsilon_n$.
We now define the auxiliary random variables $U_{0,i}$ and $U_{1,i}$ as:
$U_{0,i} = (M_0,M_2,M_3,M_4,S^{i-1},S_{i+1}^n,Y_2^{i-1}), \quad U_{1,i} = (M_1, Y_1^{i-1}),$
and expand the mutual information terms in (A23)–(A25) respectively as follows.
$I(M_1;Y_1^n \mid M_0,M_2,M_3,M_4,S^n) = \sum_{i=1}^{n} I(M_1;Y_{1,i} \mid M_0,M_2,M_3,M_4,S^n,Y_1^{i-1}) \overset{(a)}{=} \sum_{i=1}^{n} I(M_1;Y_{1,i} \mid M_0,M_2,M_3,M_4,S^n,Y_1^{i-1},Y_2^{i-1}) \le \sum_{i=1}^{n} I(M_1,Y_1^{i-1};Y_{1,i} \mid M_0,M_2,M_3,M_4,S^n,Y_2^{i-1}) = \sum_{i=1}^{n} I(U_{1,i};Y_{1,i} \mid U_{0,i},S_i) = \sum_{i=1}^{n} I(X_{0,i};Y_{1,i} \mid U_{0,i},S_i),$ (A26)
$I(M_0,M_1,M_2,M_3;Y_1^n \mid M_4,S^n) = \sum_{i=1}^{n} I(M_0,M_1,M_2,M_3;Y_{1,i} \mid M_4,S^n,Y_1^{i-1}) \overset{(b)}{=} \sum_{i=1}^{n} I(M_0,M_1,M_2,M_3;Y_{1,i} \mid M_4,S^n,Y_1^{i-1},Y_2^{i-1}) \le \sum_{i=1}^{n} I(U_{0,i},U_{1,i};Y_{1,i} \mid S_i) = \sum_{i=1}^{n} I(X_{0,i};Y_{1,i} \mid S_i),$ (A27)
$I(M_0,M_2,M_4;\tilde{Y}_2^n \mid M_3) = \sum_{i=1}^{n} I(M_0,M_2,M_4;\tilde{Y}_{2,i} \mid M_3,\tilde{Y}_2^{i-1}) \le \sum_{i=1}^{n} I(M_0,M_2,M_3,M_4,S^{i-1},\tilde{Y}_2^{i-1};\tilde{Y}_{2,i}) = \sum_{i=1}^{n} \Big[ I(M_0,M_2,M_3,M_4,S^{i-1},S_{i+1}^n,\tilde{Y}_2^{i-1};\tilde{Y}_{2,i}) - I(S_{i+1}^n;\tilde{Y}_{2,i} \mid M_0,M_2,M_3,M_4,S^{i-1},\tilde{Y}_2^{i-1}) \Big] \overset{(c)}{=} \sum_{i=1}^{n} \Big[ I(M_0,M_2,M_3,M_4,S^{i-1},S_{i+1}^n,\tilde{Y}_2^{i-1};\tilde{Y}_{2,i}) - I(\tilde{Y}_2^{i-1};S_i \mid M_0,M_2,M_3,M_4,S^{i-1},S_{i+1}^n) \Big] \overset{(d)}{=} \sum_{i=1}^{n} \Big[ I(M_0,M_2,M_3,M_4,S^{i-1},S_{i+1}^n,\tilde{Y}_2^{i-1};\tilde{Y}_{2,i}) - I(M_0,M_2,M_3,M_4,S^{i-1},S_{i+1}^n,\tilde{Y}_2^{i-1};S_i) \Big] = \sum_{i=1}^{n} \Big[ I(U_{0,i};\tilde{Y}_{2,i}) - I(U_{0,i};S_i) \Big],$ (A28)
where $(a)$ and $(b)$ follow from the physical degradedness of the channel, $(c)$ from the Csiszár sum identity and $(d)$ from the independence of $(M_0,M_2,M_3,M_4,S^{i-1},S_{i+1}^n)$ and $S_i$. Note that, for causal cases, $(U_{0,i},U_{1,i})$ is independent of $S_i$, but for non-causal cases, $(U_{0,i},U_{1,i})$ and $S_i$ are dependent. For both causal and non-causal cases, $X_{0,i}$ is a function of $(U_{0,i},U_{1,i},S_i)$.
Finally, since $\epsilon_n \to 0$ as $n \to \infty$, substituting (A26)–(A28) into (A23)–(A25) and using the time-sharing argument completes the converse proof. ☐

References

1. Cover, T.M. Broadcast channels. IEEE Trans. Inf. Theory 1972, 18, 2–14.
2. Ong, L.; Kellett, C.M.; Johnson, S.J. On the equal-rate capacity of the AWGN multiway relay channel. IEEE Trans. Inf. Theory 2012, 58, 5761–5769.
3. Gallager, R.G. Capacity and coding for degraded broadcast channels. Probl. Inf. Transm. 1974, 10, 3–14.
4. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011.
5. El Gamal, A. The capacity of a class of broadcast channels. IEEE Trans. Inf. Theory 1979, 25, 166–169.
6. Pinsker, M.S. Capacity of noiseless broadcast channels. Probl. Inf. Transm. 1978, 14, 28–34.
7. Han, T.S. The capacity region for the deterministic broadcast channel with a common message. IEEE Trans. Inf. Theory 1981, 27, 122–125.
8. Marton, K. A coding theorem for the discrete memoryless broadcast channel. IEEE Trans. Inf. Theory 1979, 25, 306–311.
9. Körner, J.; Marton, K. General broadcast channels with degraded message sets. IEEE Trans. Inf. Theory 1977, 23, 60–64.
10. Steinberg, Y. Coding for the degraded broadcast channel with random parameters, with causal and noncausal side information. IEEE Trans. Inf. Theory 2005, 51, 2867–2877.
11. Shannon, C. Channels with side information at the transmitter. IBM J. Res. Dev. 1958, 2, 289–293.
12. Lapidoth, A.; Wang, L. The state-dependent semideterministic broadcast channel. IEEE Trans. Inf. Theory 2013, 59, 2242–2251.
13. Khosravi-Farsani, R.; Marvasti, F. Capacity bounds for multiuser channels with non-causal channel state information at the transmitters. In Proceedings of the IEEE Information Theory Workshop (ITW), Paraty, Brazil, 16–20 October 2011.
14. Gel'fand, S.I.; Pinsker, M.S. Coding for channel with random parameters. Probl. Control Inf. Theory 1980, 9, 19–31.
15. Oechtering, T.J.; Schnurr, C.; Bjelakovic, I.; Boche, H. Broadcast capacity region of two-phase bidirectional relaying. IEEE Trans. Inf. Theory 2008, 54, 454–458.
16. Tuncel, E. Slepian–Wolf coding over broadcast channels. IEEE Trans. Inf. Theory 2006, 52, 1469–1482.
17. Kramer, G.; Shamai, S. Capacity for classes of broadcast channels with receiver side information. In Proceedings of the IEEE Information Theory Workshop (ITW), Lake Tahoe, CA, USA, 2–6 September 2007.
18. Oechtering, T.J.; Wigger, M.; Timo, R. Broadcast capacity regions with three receivers and message cognition. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Cambridge, MA, USA, 1–6 July 2012.
19. Khormuji, M.N.; Oechtering, T.J.; Skoglund, M. Capacity region of the bidirectional broadcast channel with causal channel state information. In Proceedings of the Tenth International Symposium on Wireless Communication Systems (ISWCS), Ilmenau, Germany, 27–30 August 2013.
20. Oechtering, T.J.; Skoglund, M. Bidirectional broadcast channel with random states noncausally known at the encoder. IEEE Trans. Inf. Theory 2013, 59, 64–75.
21. Kramer, G. Information networks with in-block memory. IEEE Trans. Inf. Theory 2014, 60, 2105–2120.
22. Jafar, S. Capacity with causal and noncausal side information: A unified view. IEEE Trans. Inf. Theory 2006, 52, 5468–5474.
23. Asadi, B.; Ong, L.; Johnson, S.J. A unified scheme for two-receiver broadcast channels with receiver message side information. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015.
24. Bracher, A.; Wigger, M. Feedback and partial message side-information on the semideterministic broadcast channel. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015.
25. Nair, C.; El Gamal, A. An outer bound to the capacity region of the broadcast channel. IEEE Trans. Inf. Theory 2007, 53, 350–355.
Figure 1. The steps to derive our unified inner bound (the rightmost rectangle) for the three categories: (i) channel with receiver message side information (RMSI), without state; (ii) channel with RMSI and causal channel state information at the transmitter (CSIT); and (iii) channel with RMSI and non-causal CSIT. The rectangles with dashed sides represent the bounds derived prior to this work (i.e., Marton’s inner bound with a common message [4] and non-causal CSIT inner bound [13]). The rectangles with solid sides represent the bounds derived in this work. The arrows provide the key techniques to derive each bound.
Figure 2. The two-receiver discrete memoryless broadcast channel with i.i.d. states p ( y 1 , y 2 x 0 , s ) p ( s ) where the channel state may be available either causally or non-causally at each of the transmitter and receivers. Receiver 1 requests ( M 0 , M 1 , M 3 ) while it knows M 4 as side information. Receiver 2 requests ( M 0 , M 2 , M 4 ) while it knows M 3 as side information. ( M ^ 0 1 , M ^ 1 , M ^ 3 ) is the decoded version of ( M 0 , M 1 , M 3 ) at Receiver 1, and ( M ^ 0 2 , M ^ 2 , M ^ 4 ) is the decoded version of ( M 0 , M 2 , M 4 ) at Receiver 2.
