Next Article in Journal
Dynamics and Complexity of Computrons
Previous Article in Journal
Comprehensive Evaluation on Soil Properties and Artemisia ordosica Growth under Combined Application of Fly Ash and Polyacrylamide in North China
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Polar Coding for Confidential Broadcasting

by
Jaume del Olmo Alòs
* and
Javier Rodríguez Fonollosa
Departament de Teoria del Senyal i Communications, Universitat Politècnica de Catalunya, 08034 Barcelona, Spain
*
Author to whom correspondence should be addressed.
Entropy 2020, 22(2), 149; https://doi.org/10.3390/e22020149
Submission received: 7 January 2020 / Accepted: 24 January 2020 / Published: 27 January 2020

Abstract

:
A polar coding scheme is proposed for the Wiretap Broadcast Channel with two legitimate receivers and one eavesdropper. We consider a model in which the transmitter wishes to send the same private (non-confidential) message and the same confidential message reliably to two different legitimate receivers, and the confidential message must also be (strongly) secured from the eavesdropper. The coding scheme aims to use the optimal rate of randomness and does not make any assumption regarding the symmetry or degradedness of the channel. This paper extends previous work on polar codes for the wiretap channel by proposing a new chaining construction that allows to reliably and securely send the same confidential message to two different receivers. This construction introduces new dependencies between the random variables involved in the coding scheme that need to be considered in the secrecy analysis.

1. Introduction

Information-theoretic security over noisy channels was introduced by Wyner in [1], which characterized the secrecy-capacity of the degraded wiretap channel. Later, Csiszár and Körner in [2] generalized Wyner’s results to the general wiretap channel. In these settings, one transmitter wishes to reliably send one message to a legitimate receiver, while keeping it secret from an eavesdropper, where secrecy is defined based on a condition of some information-theoretic measure that is fully quantifiable. One of these measures is the information leakage, defined as the mutual information I ( W ; Z n ) between a uniformly distributed random message W and the channel observations Z n at the eavesdropper, n being the number of uses of the channel. Based on this measure, the most common secrecy conditions required to be satisfied by channel codes are the weak secrecy, which requires lim n 1 n I ( W ; Z n ) = 0 , and the strong secrecy, requiring lim n I ( W ; Z n ) = 0 . Although the second notion of security is stronger, surprisingly both conditions result in the same secrecy-capacity [3].
In the last decade, information-theoretic security has been extended to a large variety of contexts, and polar codes have become increasingly popular in this area, due to their easily provable secrecy capacity achieving property. Polar codes were originally proposed by Arikan in [4] to achieve the capacity of binary-input, symmetric, and point-to-point channels under Successive Cancellation (SC) decoding. Secrecy capacity achieving polar codes for the binary symmetric degraded wiretap channel were introduced in [5,6], satisfying the weak and the strong secrecy condition, respectively. Recently, polar coding has been extended to the general wiretap channel in [7,8,9,10] and to different multiuser scenarios (for instance, see [11,12]). Indeed, [9,10] generalize their results providing polar codes for the broadcast channel with confidential messages.
This paper provides a polar coding scheme that allows to transmit strongly confidential common information to two legitimate receivers over the Wiretap Broadcast Channel (WTBC). Although [13] provided an obvious lower-bound on the secrecy-capacity of this model, no constructive polar coding scheme has already been proposed so far. Our polar coding scheme is based mainly on the one introduced by [10] for the broadcast channel with confidential messages. Therefore, the proposed polar coding scheme aims to use the optimal amount of randomness in the encoding. Moreover, in order to construct an explicit polar coding scheme that provides strong secrecy, the distribution induced by the encoder must be close in terms of the statistical distance to the original one considered for the code construction, and transmitter and legitimate receivers need to share a secret key of negligible size in terms of rate. Nevertheless, the particularization for the model proposed in this paper is not straightforward. Specifically, we propose a new chaining construction [14] (transmission will take place over several blocks) that is crucial to secretly transmit common information to different legitimate receivers. Indeed, this model generalizes, in part, the one described in [10], where the confidential message is intended only for one legitimate receiver, and the one in [15], which considers only the transmission of non-confidential messages intended for two different receivers. The proposed chaining introduces new bidirectional dependencies between encoding random variables of adjacent blocks that must be considered carefully in the secrecy analysis. Indeed, we need to make use of an additional secret key of negligible size in terms of rate that is privately shared between transmitter and legitimate receivers, which will be used to prove that dependencies between blocks can be broken and, therefore, the strong secrecy condition will be satisfied.

1.1. Notation

Throughout this paper, let [ n ] = { 1 , , n } for n Z + , a n denotes a row vector ( a ( 1 ) , , a ( n ) ) . We write a 1 : j for j [ n ] to denote the subvector ( a ( 1 ) , , a ( j ) ) . Let A [ n ] , then we write a [ A ] to denote the sequence { a ( j ) } j A , and we use A C to denote the set complement with respect to the universal set [ n ] , that is, A C = [ n ] A . If A denotes an event, then A C also denotes its complement. We use ln to denote the natural logarithm, whereas log denotes the logarithm base 2. Let X be a random variable taking values in X , and let q x and p x be two different distributions with support X , then D ( q x , p x ) and V ( q x , p x ) denote the Kullback–Leibler divergence and the total variation distance respectively. Finally, h 2 ( p ) denotes the binary entropy function, i.e., h 2 ( p ) = p log p ( 1 p ) log ( 1 p ) .

1.2. Organization

The remainder of this paper is organized as follows. Section 2 introduces the channel model formally. In Section 3, the fundamental theorems of polar codes are revisited. Section 4 describes the proposed polar coding scheme, and Section 5 proves that this polar coding scheme achieves the best known inner-bound on the secrecy-capacity of this model. Finally, the concluding remarks are presented in Section 6.

2. Channel Model and Achievable Region

Formally, a WTBC ( X , p Y ( 1 ) Y ( 2 ) Z | X , Y ( 1 ) × Y ( 2 ) × Z ) with 2 legitimate receivers and an external eavesdropper is characterized by the probability transition function p Y ( 1 ) Y ( 2 ) Z | X , where X X denotes the channel input, Y ( k ) Y ( k ) denotes the channel output corresponding to the legitimate Receiver  k [ 1 , 2 ] , and Z Z denotes the channel output corresponding to the eavesdropper. We consider a model, namely Common Information over the Wiretap Broadcast Channel (CI-WTBC), in which the transmitter wishes to send a private message W and a confidential message S to both legitimate receivers. A code 2 n R W , 2 n R S , 2 n R R , n for the CI-WTBC consists of a private message set W 1 , 2 n R W , a confidential message set S 1 , 2 n R S , a randomization sequence set R 1 , 2 n R R (needed to confuse the eavesdropper about the confidential message S), an encoding function f : W × S × R X n that maps ( w , s , r ) to a codeword x n , and two decoding functions g ( 1 ) and g ( 2 ) such that g ( k ) : Y ( k ) n W × S ( k [ 1 , 2 ] ) maps the k-th legitimate receiver observations y ( k ) n to the estimates ( w ^ ( k ) , s ^ ( k ) ) . The reliability condition to be satisfied by this code is measured in terms of the average probability of error and is given by
lim n P ( W , S ) ( W ^ ( k ) , S ^ ( k ) ) = 0 , k [ 1 , 2 ] .
The strong secrecy condition is measured in terms of the information leakage and is given by
lim n I S ; Z n = 0 .
This model is graphically illustrated in Figure 1. A triple of rates ( R W , R S , R R ) R + 3 will be achievable for the CI-WTBC if there exists a sequence of ( 2 n R W , 2 n R S , 2 n R R , n ) codes such that satisfy the reliability and secrecy conditions (1) and (2), respectively.
The achievable rate region is defined as the closure of the set of all achievable rate triples ( R W , R S , R R ) . The following proposition defines an inner-bound on this region.
Proposition 1
(Adapted from [13,16]). The region R CI WTBC defined by the union over the triples of rates ( R W , R S , R R ) R + 3 satisfying
R W + R S min I ( V ; Y ( 1 ) ) , I ( V ; Y ( 2 ) ) , R S min I ( V ; Y ( 1 ) ) , I ( V ; Y ( 2 ) ) I ( V ; Z ) , R W + R R I ( X ; Z ) , R R I ( X ; Z | V ) ,
where the union is taken over all distributions p V X such that V X ( Y ( 1 ) , Y ( 2 ) , Z ) forms a Markov chain, defines an inner-bound on the achievable region of the CI-WTBC.
In this model, the private message W introduces part of the randomness required to confuse the eavesdropper about the confidential message S, and the randomization sequence R denotes the additional randomness that is required for channel prefixing.

3. Review of Polar Codes

Let ( X × Y , p X Y ) be a Discrete Memoryless Source (DMS), where X { 0 , 1 } (Throughout this paper, we assume binary polarization. Nevertheless, an extension to q-ary alphabets is possible [10,17,18]) and Y Y . The polar transform over the n-sequence X n , n being any power of 2, is defined as U n X n G n , where G n 1 1 1 0 n is the source polarization matrix [19]. Since G n = G n 1 , then X n = U n G n .
The polarization theorem for source coding with side information [19] (Th. 1) states that the polar transform extracts the randomness of X n in the sense that, as n , the set of indices j [ n ] can be divided practically into two disjoint sets, namely H X | Y ( n ) and L X | Y ( n ) , such that U ( j ) for j H X | Y ( n ) is practically independent of ( U 1 : j 1 , Y n ) and uniformly distributed, i.e., H ( U ( j ) | U 1 : j 1 , Y n ) 1 , and U ( j ) for j L X | Y ( n ) is almost determined by ( U 1 : j 1 , Y n ) , i.e., H ( U ( j ) | U 1 : j 1 , Y n ) 0 . Formally, let
H X | Y ( n ) j [ n ] : H U ( j ) | U 1 : j 1 , Y n 1 δ n , L X | Y ( n ) j [ n ] : H U ( j ) | U 1 : j 1 , Y n δ n ,
where δ n 2 n β for some β ( 0 , 1 2 ) . Then, by Lemma 4 of [10] we have lim n 1 n | H X | Y ( n ) | = H ( X | Y ) and lim n 1 n | L X | Y ( n ) | = 1 H ( X | Y ) , which imply that lim n 1 n | ( H X | Y ( n ) ) C L X | Y ( n ) | = 0 , i.e., the number of elements that have not been polarized is asymptotically negligible in terms of rate. Furthermore, Th. 2 of [19] states that given U [ ( L X | Y ( n ) ) C ] and Y n , U [ L X | Y ( n ) ] can be reconstructed using SC decoding with error probability in O ( n δ n ) . Alternatively, the previous sets can be defined based on the Bhattacharyya parameters { Z ( U ( j ) | U 1 : j 1 , Y n ) } j = 1 n because both parameters polarize simultaneously Proposition 2 of [19]. It is worth mentioning that both the entropy terms and the Bhattacharyya parameters required to define these sets can be obtained deterministically from p X Y and the algebraic properties of G n [20,21,22].
Similarly to H X | Y ( n ) and L X | Y ( n ) , the sets H X ( n ) and L X ( n ) can be defined by considering that observations Y n are absent. A discrete memoryless channel ( X , p Y | X , Y ) with some arbitrary p X can be seen as a DMS ( X × Y , p X p Y | X ) . In channel polar coding, first we define H X | Y ( n ) , L X | Y ( n ) , H X ( n ) and L X ( n ) from the target distribution p X p Y | X (polar construction). Then, based on the previous sets, the encoder somehow constructs (since the polar-based encoder will construct random variables that must approach the target distribution of the DMS, throughout this paper we use tilde above the random variables to emphazise this purpose) U ˜ n and applies the inverse polar transform X ˜ n = U ˜ n G n with distribution q ˜ X n . Afterwards, the transmitter sends X ˜ n over the channel, which induces Y ˜ n q ˜ Y n . If V ( q ˜ X n Y n , p X n Y n ) 0 , then the receiver can reliably reconstruct U ˜ [ L X | Y ( n ) ] from Y ˜ n and U ˜ [ ( L X | Y ( n ) ) C ] by using SC decoding [23].

4. Polar Coding Scheme

Let ( V × X × Y ( 1 ) × Y ( 2 ) × Z , p V X Y ( 1 ) Y ( 2 ) Z ) denote the DMS that represents the input ( V , X ) and output ( Y ( 1 ) , Y ( 2 ) , Z ) random variables of the CI-WTBC, where | V | = | X | 2 . Without loss of generality, and to avoid the trivial case R S = 0 in Proposition 1, we assume that
H ( V | Z ) > H ( V | Y ( 1 ) ) H ( V | Y ( 2 ) ) .
If H ( V | Y ( 1 ) ) < H ( V | Y ( 2 ) ) , one can simply exchange the role of Y ( 1 ) and Y ( 2 ) in the polar coding scheme described in Section 4. We propose a polar coding scheme that achieves the following rate triple,
( R W , R S , R R ) = ( I ( V ; Z ) , I ( V ; Y ( 1 ) ) I ( V ; Z ) , I ( X ; Z | V ) ) ,
which corresponds to the one of the region in Proposition 1 such that the private and the confidential message rate are maximum and the amount of randomness is minimum.
For the input random variable V, we define the polar transform A n V n G n and the sets
H V ( n ) j [ 1 , n ] : H A ( j ) | A 1 : j 1 1 δ n ,
H V | Z ( n ) j [ 1 , n ] : H A ( j ) | A 1 : j 1 Z n 1 δ n ,
L V | Y ( k ) ( n ) j [ 1 , n ] : H A ( j ) | A 1 : j 1 Y ( k ) n δ n , k = 1 , 2 .
For the input random variable X, we define T n X n G n and the associated sets
H X | V ( n ) j [ 1 , n ] : H T ( j ) | T 1 : j 1 V n 1 δ n .
H X | V Z ( n ) j [ 1 , n ] : H T ( j ) | T 1 : j 1 V n Z n 1 δ n .
We have p A n T n ( a n , t n ) = p V n X n ( a n G n , t n G n ) , due to the invertibility of G n , and we write
p A n T n ( a n , t n ) = j = 1 n p A ( j ) | A 1 : j 1 ( a ( j ) | a 1 : j 1 ) j = 1 n p T ( j ) | T 1 : j 1 V n ( t ( j ) | t 1 : j 1 , a n G n ) .
Consider that the encoding takes place over L blocks indexed by i [ 1 , L ] . At the i-th block, the encoder will construct A ˜ i n , which will carry the private and the confidential messages intended for both legitimate receivers. Additionally, the encoder will store into A ˜ i n some elements from A ˜ i 1 n (if i [ 2 , L ] ) and A ˜ i + 1 n (if i [ 1 , L 1 ] ), so that both legitimate receivers are able to reliably reconstruct A ˜ 1 : L n . Then, given V ˜ i n = A ˜ i n G n , the encoder will perform the polar-based channel prefixing to construct T ˜ i n . Finally, it will obtain X ˜ i n = T ˜ i n G n , which will be transmitted over the WTBC, inducing the channel output observations ( Y ˜ ( 1 ) , i n , Y ˜ ( 2 ) , i n , Z ˜ i n ) .
Consider the construction of A ˜ 1 : L n . Besides, sets in (5)–(7), define the partition of H V ( n ) :
G ( n ) H V | Z ( n ) ,
C ( n ) H V ( n ) H V | Z ( n ) C .
Moreover, we also define the following partition of the set G ( n ) :
G 0 ( n ) G ( n ) L V | Y ( 1 ) ( n ) L V | Y ( 2 ) ( n ) ,
G 1 ( n ) G ( n ) L V | Y ( 1 ) ( n ) C L V | Y ( 2 ) ( n ) ,
G 2 ( n ) G ( n ) L V | Y ( 1 ) ( n ) L V | Y ( 2 ) ( n ) C ,
G 1 , 2 ( n ) G ( n ) L V | Y ( 1 ) ( n ) C L V | Y ( 2 ) ( n ) C ,
and the following partition of the set C ( n ) :
C 0 ( n ) C ( n ) L V | Y ( 1 ) ( n ) L V | Y ( 2 ) ( n ) ,
C 1 ( n ) C ( n ) L V | Y ( 1 ) ( n ) C L V | Y ( 2 ) ( n ) ,
C 2 ( n ) C ( n ) L V | Y ( 1 ) ( n ) L V | Y ( 2 ) ( n ) C ,
C 1 , 2 ( n ) C ( n ) L V | Y ( 1 ) ( n ) C L V | Y ( 2 ) ( n ) C ;
These sets are graphically represented in Figure 2. Roughly speaking, A [ H V ( n ) ] is the nearly uniformly distributed part of A n . Thus, A ˜ i [ H V ( n ) ] , i [ 1 , L ] , is suitable for storing uniformly distributed random sequences. The sequence A [ H V | Z ( n ) ] is almost independent of Z n and, hence, A ˜ i [ G ( n ) ] is suitable for storing information to be secured from the eavesdropper, whereas A ˜ i [ C ( n ) ] is not. Sets in (12)–(19) with subscript 1 (sets inside the red curve in Figure 2) form H V ( n ) L V | Y ( 1 ) ( n ) C , while those with subscript 2 (sets inside the blue curve) form H V ( n ) L V | Y ( 2 ) ( n ) C . From Th. 2 of [19,23], recall that A ˜ i H V ( n ) ( L V | Y ( k ) ( n ) ) C is the nearly uniformly distributed part of the sequence A ˜ i n required by legitimate Receiver k to reliably reconstruct the entire sequence by performing SC decoding.
For sufficiently large n, assumption (3) imposes the following restriction on the size of the previous sets:
| G 1 ( n ) | | C 2 ( n ) | | G 2 ( n ) | | C 1 ( n ) | > | C 1 , 2 ( n ) | | G 0 ( n ) | .
The left-hand inequality in (20) holds from the fact that
| C 1 ( n ) G 1 ( n ) | | C 2 ( n ) G 2 ( n ) | = | H V ( n ) L V | Y ( 1 ) ( n ) C H V ( n ) L V | Y ( 2 ) ( n ) C | | H V ( n ) L V | Y ( 2 ) ( n ) C H V ( n ) L V | Y ( 1 ) ( n ) C | = | H V ( n ) L V | Y ( 1 ) ( n ) C | | H V ( n ) L V | Y ( 2 ) ( n ) C | 0 ,
where the positivity holds by Lemma 4 of [10] because, for any k [ 1 , 2 ] , we have
1 n | H V ( n ) L V | Y ( k ) ( n ) C | = 1 n | H V | Y ( k ) ( n ) | + 1 n | H V ( n ) L V | Y ( k ) ( n ) C H V | Y ( k ) ( n ) | n H ( V | Y ( k ) )
Similarly, the right-hand inequality in (20) holds by Lemma 4 of [10] and the fact that
| G 0 ( n ) G 2 ( n ) | | C 1 ( n ) C 1 , 2 ( n ) | = | H V | Z ( n ) H V ( n ) L V | Y ( 1 ) ( n ) C | | H V ( n ) L V | Y ( 1 ) ( n ) C H V | Z ( n ) | = | H V | Z ( n ) | | H V ( n ) L V | Y ( 1 ) ( n ) C | .
Thus, according to (20), we must consider four cases:
  • | G 1 ( n ) | > | C 2 ( n ) | , | G 2 ( n ) | > | C 1 ( n ) | and | G 0 ( n ) | | C 1 , 2 ( n ) | ;
  • | G 1 ( n ) | > | C 2 ( n ) | , | G 2 ( n ) | > | C 1 ( n ) | and | G 0 ( n ) | < | C 1 , 2 ( n ) | ;
  • | G 1 ( n ) | | C 2 ( n ) | , | G 2 ( n ) | | C 1 ( n ) | and | G 0 ( n ) | > | C 1 , 2 ( n ) | ;
  • | G 1 ( n ) | < | C 2 ( n ) | , | G 2 ( n ) | < | C 1 ( n ) | and | G 0 ( n ) | > | C 1 , 2 ( n ) | .

4.1. General Polar-Based Encoding

The generic encoding process for all cases is summarized in Algorithm 1. For i [ 1 , L ] , let W i be a uniformly distributed vector of length | C ( n ) | that represents the private message. The encoder forms A ˜ i [ C ( n ) ] by simply storing W i . Indeed, if i [ 1 , L 1 ] , notice that the encoder forms A ˜ i + 1 [ C ( n ) ] before constructing A ˜ i n entirely. From A ˜ i [ C ( n ) ] , i [ 1 , L ] , we define
Ψ i ( V ) A ˜ i [ C 2 ( n ) ] ,
Γ i ( V ) A ˜ i [ C 1 , 2 ( n ) ] ,
Θ i ( V ) A ˜ i [ C 1 ( n ) ] .
Notice that [ Ψ i ( V ) , Γ i ( V ) ] = A ˜ i [ C 2 ( n ) C 1 , 2 ( n ) ] is required by legitimate Receiver 2 to reliably estimate A ˜ i n and, thus, the encoder will repeat [ Ψ i ( V ) , Γ i ( V ) ] , if i [ 1 , L 1 ] , conveniently in A ˜ i + 1 [ G ( n ) ] (the function form_AG is responsible of the chaining construction and is described later). On the other hand, [ Θ i ( V ) , Γ i ( V ) ] = A ˜ i [ C 1 ( n ) C 1 , 2 ( n ) ] is required by legitimate Receiver 1. Nevertheless, in order to satisfy the strong secrecy condition in (2), [ Θ i ( V ) , Γ i ( V ) ] , i [ 2 , L ] , is not repeated directly into A ˜ i 1 [ G ( n ) ] , but the encoder copies instead Θ ¯ i ( V ) and Γ ¯ i ( V ) obtained as follows. Let κ Θ ( V ) and κ Γ ( V ) be uniformly distributed keys with length | C 1 ( n ) | and | C 1 , 2 ( n ) | respectively that are privately shared between transmitter and both legitimate receivers. For any i [ 2 , L ] , we define the sequences
Θ ¯ i ( V ) A ˜ i [ C 1 ( n ) ] κ Θ ( V ) , Γ ¯ i ( V ) A ˜ i [ C 1 , 2 ( n ) ] κ Γ ( V ) .
Since these secret keys are reused in all blocks, their size becomes negligible in terms of rate for L large enough. The need of these secret keys may not be obvious at this point, but a further discussion of this question can be found in Section 5.4. Indeed, they are required to prove independence between an eavesdropper’s observations of adjacent blocks (see Lemma 3), which is crucial to prove that the polar coding scheme satisfies the strong secrecy condition in (2).
The function form_AG in Algorithm 1 constructs sequences A ˜ 1 : L [ G ( n ) ] differently depending on which case, among cases A, B, C or D described before, characterizes the given CI-WTBC. This part of the encoding is described in detail in Section 4.2 and Algorithm 2.
Then, given A ˜ i [ C ( n ) G ( n ) ] , the encoder forms the remaining entries of A ˜ i n , i.e., A ˜ i [ ( H V ( n ) ) C ] , as follows. If j L V ( n ) , where L V ( n ) j [ 1 , n ] : H A ( j ) | A 1 : j 1 δ n , it constructs A ˜ i ( j ) deterministically by using SC encoding [24], and only A ˜ i [ ( H V ( n ) ) C L V ( n ) ] is constructed randomly.
Finally, given V ˜ i n = A ˜ i n G n , a randomization sequence R i and a uniformly distributed random sequence Λ 0 ( V ) , the encoder performs polar-based channel prefixing (function pb_ch_pref in Algorithm 1) to obtain X ˜ i n , which is transmitted over the WTBC inducing Y ˜ ( 1 ) , i n , Y ˜ ( 2 ) , i n , Z ˜ i n . This part of the encoding is described in detail in Section 4.3.
Furthermore, the encoder obtains the sequence
Φ ( k ) , i ( V ) A ˜ i H V ( n ) C L V | Y ( k ) ( n ) C
for any k [ 1 , 2 ] and i [ 1 , L ] , which is required by legitimate Receiver k to reliably estimate A ˜ i n entirely. Since Φ ( k ) , i ( V ) is not nearly uniform, the encoder cannot make it available to the legitimate Receiver k by means of the chaining structure. Furthermore, the encoder obtains
Υ ( 1 ) ( V ) A ˜ 1 H V ( n ) ( L V | Y ( 1 ) ( n ) ) C , Υ ( 2 ) ( V ) A ˜ L H V ( n ) ( L V | Y ( 2 ) ( n ) ) C .
The sequence Υ ( k ) ( V ) is required by legitimate Receiver  k [ 1 , 2 ] to initialize the decoding process. Therefore, the transmitter additionally sends Υ ( k ) ( V ) , Φ ( k ) , i ( V ) κ Υ Φ ( k ) ( V ) to legitimate Receiver k, where κ Υ Φ ( k ) ( V ) is a uniformly distributed key with size
L | H V ( n ) C L V | Y ( k ) ( n ) C | + | H V ( n ) L V | Y ( k ) ( n ) C |
that is privately shared between transmitter and the corresponding receiver. In Section 5.1 we show that the length of κ Υ Φ ( 1 ) ( V ) and κ Υ Φ ( 2 ) ( V ) is asymptotically negligible in terms of rate.
Algorithm 1 Generic encoding scheme
Require: Private and confidential messages W 1 : L and S 1 : L ; randomization sequences R 1 : L ; random sequence Λ 0 ( X ) ; and secret keys κ Θ ( V ) , κ Γ ( V ) , κ Υ Φ ( 1 ) ( V ) and κ Υ Φ ( 2 ) ( V ) .
 1:
Ψ 0 ( V ) , Γ 0 ( V ) , Π 0 ( V ) , Λ 0 ( V ) , Θ ¯ L + 1 ( V ) , Γ ¯ L + 1 ( V )
 2:
A ˜ 1 [ C ( n ) ] W 1
 3:
Ψ 1 ( V ) , Γ 1 ( V ) A ˜ 1 [ C ( n ) ]
 4:
for i = 1 to L do
 5:
    if i L then
 6:
         A ˜ i + 1 [ C ( n ) ] W i + 1
 7:
         Ψ i + 1 ( V ) , Γ i + 1 ( V ) , Θ ¯ i + 1 ( V ) , Γ ¯ i + 1 ( V ) A ˜ i + 1 [ C ( n ) ] , κ Θ ( V ) , κ Γ ( V )
 8:
    end if
 9:
     A ˜ i G ( n ) , Π i ( V ) , Λ i ( V ) form_AG i , S i , Θ ¯ i + 1 ( V ) , Γ ¯ i + 1 ( V ) , Ψ i 1 ( V ) , Γ i 1 ( V ) , Π i 1 ( V ) , Λ i 1 ( V )
10:
    if i = 1 then Υ ( 1 ) ( V ) A ˜ 1 H V ( n ) L V | Y ( 1 ) ( n ) C
11:
    if i = L then Υ ( 2 ) ( V ) A ˜ L H V ( n ) L V | Y ( 2 ) ( n ) C
12:
    for j H V ( n ) C do
13:
        if j H V ( n ) C L V ( n ) then
14:
            A ˜ i ( j ) p A ( j ) | A 1 : j 1 A ˜ i ( j ) | A ˜ i 1 : j 1
15:
        else if j L V ( n ) then
16:
            A ˜ i ( j ) arg max a V p A ( j ) | A 1 : j 1 a ˜ i ( j ) | A ˜ i 1 : j 1
17:
        end if
18:
    end for
19:
     Φ ( 1 ) , i ( V ) A ˜ i H V ( n ) C L V | Y ( 1 ) ( n ) C
20:
     Φ ( 2 ) , i ( V ) A ˜ i H V ( n ) C L V | Y ( 2 ) ( n ) C
21:
     X ˜ i n , Λ i ( X ) pb_ch_pref A ˜ i n G n , R i , Λ i 1 ( X )
22:
end for
23:
Send Φ ( k ) , i ( V ) , Υ ( k ) ( V ) κ Υ Φ ( k ) ( V ) to Receiver  k [ 1 , 2 ]
24:
return X ˜ 1 : L n
Algorithm 2 Function form_AG
Require:i, S i , Θ ¯ i + 1 ( V ) , Γ ¯ i + 1 ( V ) , Ψ i 1 ( V ) , Γ i 1 ( V ) , Π i 1 ( V ) , Λ i 1 ( V )
 1:
Define R 1 ( n ) , R 1 ( n ) , R 2 ( n ) , R 2 ( n ) , R 1 , 2 ( n ) , R 1 , 2 ( n ) , I ( n ) , R S ( n ) , R Λ ( n ) (depending on the case)
 2:
if i = 1 then A ˜ 1 [ I ( n ) G 1 ( n ) G 1 , 2 ( n ) ] S 1
 3:
if i [ 2 , L 1 ] then A ˜ i [ I ( n ) ] S i
 4:
if i = L then A ˜ L [ I ( n ) G 2 ( n ) ] S L
 5:
Ψ 1 , i 1 ( V ) , Ψ 2 , i 1 ( V ) Ψ i 1 ( V ) (depending on the case)
 6:
Γ 1 , i 1 ( V ) , Γ 2 , i 1 ( V ) Γ i 1 ( V ) (depending on the case)
 7:
Θ ¯ 1 , i + 1 ( V ) , Θ ¯ 2 , i + 1 ( V ) Θ ¯ i + 1 ( V ) (depending on the case)
 8:
Γ ¯ 1 , i + 1 ( V ) , Γ ¯ 2 , i + 1 ( V ) Γ ¯ i + 1 ( V ) (depending on the case)
 9:
A ˜ i [ R 1 , 2 ( n ) ] Γ 1 , i 1 ( V ) Γ ¯ 1 , i + 1 ( V )
10:
A ˜ i [ R 1 , 2 ( n ) ] Ψ 2 , i 1 ( V ) Θ ¯ 2 , i + 1 ( V )
11:
if i [ 1 , L 1 ] then
12:
     A ˜ i [ R 1 ( n ) ] Θ ¯ 1 , i + 1 ( V )
13:
     A ˜ i [ R 1 ( n ) ] Γ ¯ 2 , i + 1 ( V )
14:
end if
15:
if i [ 2 , L ] then
16:
     A ˜ i [ R 2 ( n ) ] Ψ 1 , i 1 ( V )
17:
     A ˜ i [ R 2 ( n ) ] Γ 2 , i 1 ( V )
18:
     A ˜ i [ R S ( n ) ] Π i 1 ( V )
19:
     A ˜ i [ R Λ ( n ) ] Λ i 1 ( V )
20:
end if
21:
Π i ( V ) A ˜ i [ I ( n ) G 2 ( n ) ]
22:
Λ i ( V ) A ˜ i [ R Λ ( n ) ]
23:
return the sequences A ˜ i G ( n ) , Π i ( V ) and Λ i ( V )

4.2. Function form_AG

The function form_AG encodes the confidential messages S 1 : L and builds the chaining construction. Based on the sets in (10)–(19), let R 1 ( n ) G 0 ( n ) G 2 ( n ) , R 1 ( n ) G 2 ( n ) , R 2 ( n ) G 1 ( n ) , R 2 ( n ) G 1 ( n ) , R 1 , 2 ( n ) G 0 ( n ) , R 1 , 2 ( n ) G 0 ( n ) , I ( n ) G 0 ( n ) G 2 ( n ) , R S ( n ) G 1 ( n ) and R Λ ( n ) G 1 ( n ) form an additional partition of G ( n ) . The definition of R 1 ( n ) , R 1 ( n ) , R 2 ( n ) , R 2 ( n ) , R 1 , 2 ( n ) and R 1 , 2 ( n ) will depend on the particular case (among A to D), while
I ( n ) G 0 ( n ) G 2 ( n ) R 1 ( n ) R 1 ( n ) R 1 , 2 ( n ) R 1 , 2 ( n ) ,
R S ( n ) any subset of G 1 ( n ) R 2 ( n ) R 2 ( n ) with size | I ( n ) G 2 ( n ) | ,
R Λ ( n ) G 1 , 2 ( n ) G 1 ( n ) R 2 ( n ) R 2 ( n ) R S ( n ) .
For i [ 1 , L ] , let S i denote a uniformly distributed vector that represents the confidential message. The message S 1 has size | I ( n ) G 1 ( n ) G 1 , 2 ( n ) | ; for i [ 2 , L 1 ] , S i has size | I ( n ) | ; and S L has size | I ( n ) G 2 ( n ) | . Furthermore, for i [ 1 , L ] , we write Ψ i ( V ) Ψ 1 , i ( V ) , Ψ 2 , i ( V ) , Γ i ( V ) Γ 1 , i ( V ) , Γ 2 , i ( V ) , Θ ¯ i ( V ) Θ ¯ 1 , i ( V ) , Θ ¯ 2 , i ( V ) and Γ ¯ i ( V ) Γ ¯ 1 , i ( V ) , Γ ¯ 2 , i ( V ) , where we define Ψ p , i , Γ p , i , Θ ¯ p , i and Γ ¯ p , i , for any p [ 1 , 2 ] , accordingly in each case.
This function, which is used in Case A to Case D, is described in Algorithm 2.

4.2.1. Case A

In this case, recall that | G 1 ( n ) | > | C 2 ( n ) | , | G 2 ( n ) | > | C 1 ( n ) | and | G 0 ( n ) | | C 1 , 2 ( n ) | . We define
R 1 ( n ) any subset of G 2 ( n ) with size | C 1 ( n ) | ,
R 2 ( n ) any subset of G 1 ( n ) with size | C 2 ( n ) | ,
R 1 , 2 ( n ) any subset of G 0 ( n ) with size | C 1 , 2 ( n ) | ,
and R 1 ( n ) = R 2 ( n ) = R 1 , 2 ( n ) . By the assumption of Case A, it is clear that R 1 ( n ) , R 2 ( n ) and R 1 , 2 ( n ) exist. Furthermore, by (20), the set I ( n ) exists, and so will R S ( n ) because
| G 1 ( n ) R 2 ( n ) R 2 ( n ) | | I ( n ) G 2 ( n ) | = | G 1 ( n ) R 2 ( n ) R 2 ( n ) | | G 2 ( n ) R 1 ( n ) R 1 ( n ) | = | G 1 ( n ) | | C 2 ( n ) | | G 2 ( n ) | | C 1 ( n ) | 0 .
These sets that form the partition of G ( n ) in Case A can be seen in Figure 3, which also displays the encoding process that aims to construct A ˜ 1 : L H V ( n ) = A ˜ 1 : L C ( n ) G ( n ) .
For i [ 1 , L ] , we define Ψ 1 , i ( V ) Ψ i ( V ) , Γ 1 , i ( V ) Γ i ( V ) , Θ ¯ 1 , i ( V ) Θ ¯ i ( V ) , Γ ¯ 1 , i ( V ) Γ ¯ i ( V ) and, therefore, we have Ψ 2 , i ( V ) = Γ 2 , i ( V ) = Θ ¯ 2 , i ( V ) = Γ ¯ 2 , i ( V ) .
From (18), we have C 2 ( n ) L V | Y ( 1 ) ( n ) L V | Y ( 2 ) ( n ) . Thus, the sequence Ψ 1 , i 1 ( V ) = A ˜ i 1 C 2 ( n ) is needed by legitimate Receiver 2 to reliably reconstruct A ˜ i 1 n , but can be reliably inferred by legitimate Receiver 1 given A ˜ i 1 ( L V | Y ( 1 ) ( n ) ) C . Hence, according to Algorithm 2, the encoder repeats the entire sequence Ψ 1 , i 1 ( V ) in A ˜ i R 2 ( n ) ] A ˜ i [ L V | Y ( 2 ) ( n ) L V | Y ( 1 ) ( n ) .
Similarly, from (17), we have C 1 ( n ) L V | Y ( 2 ) ( n ) L V | Y ( 1 ) ( n ) . Thus, Θ 1 , i + 1 ( V ) = A ˜ i + 1 C 1 ( n ) is needed by Receiver 1 to form A ˜ i + 1 n but can be inferred by Receiver 2 given A ˜ i + 1 ( L V | Y ( 2 ) ( n ) ) C . Hence, the encoder repeats the sequence Θ ¯ 1 , i + 1 ( V ) in A ˜ i [ R 1 ( n ) ] A ˜ i L V | Y ( 1 ) ( n ) L V | Y ( 2 ) ( n ) .
Finally, from (19), C 1 , 2 ( n ) ( L V | Y ( 2 ) ( n ) ) C ( L V | Y ( 1 ) ( n ) ) C . Thus, sequences Γ 1 , i 1 ( V ) = A ˜ i 1 C 1 , 2 ( n ) and Γ 1 , i + 1 ( V ) = A ˜ i + 1 C 1 , 2 ( n ) are needed by both receivers to form A ˜ i 1 n and A ˜ i + 1 n respectively. Hence, the encoder repeats Γ 1 , i 1 ( V ) and Γ ¯ 1 , i + 1 ( V ) in A ˜ i R 1 , 2 ( n ) A ˜ i L V | Y ( 1 ) ( n ) L V | Y ( 2 ) ( n ) . Indeed, both sequences are repeated in the same entries of A ˜ i [ G 0 ( n ) ] by performing Γ 1 , i 1 ( V ) Γ ¯ 1 , i + 1 ( V ) . Since Γ 1 , 0 ( V ) = Γ ¯ 1 , L + 1 ( V ) = , only Γ ¯ 1 , 2 ( V ) is repeated at Block 1 and Γ 1 , L 1 ( V ) at Block L.
Moreover, part of secret message S i , i [ 1 , L ] , is stored into some entries of A ˜ i n whose indices belong to G 2 ( n ) . Thus, in any Block  i [ 2 , L ] , the encoder repeats
Π i 1 ( V ) A ˜ i 1 I ( n ) G 2 ( n )
in A ˜ i [ R S ( n ) ] A ˜ i [ L V | Y ( 2 ) ( n ) L V | Y ( 1 ) ( n ) ] . Furthermore, it repeats
Λ i 1 ( V ) A ˜ i 1 R Λ ( n )
in A ˜ i [ R Λ ( n ) ] . Hence, notice that Λ 1 ( V ) is replicated in all blocks.

4.2.2. Case B

In this case, | G 1 ( n ) | > | C 2 ( n ) | , | G 2 ( n ) | > | C 1 ( n ) | and | G 0 ( n ) | < | C 1 , 2 ( n ) | . We define R 1 ( n ) and R 2 ( n ) as in (27) and (28) respectively, and R 1 , 2 ( n ) . Now, since | G 0 ( n ) | < | C 1 , 2 ( n ) | , only a part of Γ i 1 ( V ) and Γ ¯ i + 1 ( V ) , i [ 1 , L ] , can be repeated in A ˜ i G 0 ( n ) . Thus, we define R 1 , 2 ( n ) G 0 ( n ) and
R 1 ( n ) any subset of G 2 ( n ) R 1 ( n ) with size | C 1 , 2 ( n ) | | G 0 ( n ) | ,
R 2 ( n ) any subset of G 1 ( n ) R 2 ( n ) with size | C 1 , 2 ( n ) | | G 0 ( n ) | .
Obviously, R 1 , 2 ( n ) exists and, by the assumption of Case B, so do R 1 ( n ) and R 2 ( n ) . By (20), R 1 ( n ) exists and so does I ( n ) . Indeed, since G 0 ( n ) R 1 , 2 ( n ) = , then I ( n ) G 2 ( n ) . Again, by the property in (20), R 2 ( n ) exists and so does R S ( n ) because
| G 1 ( n ) R 2 ( n ) R 2 ( n ) | | G 2 ( n ) R 1 ( n ) R 1 ( n ) | = | G 1 ( n ) | | C 2 ( n ) | | C 1 , 2 ( n ) | | G 0 ( n ) | | G 2 ( n ) | | C 1 ( n ) | | C 1 , 2 ( n ) | | G 0 ( n ) | = | G 1 ( n ) | | C 2 ( n ) | | G 2 ( n ) | + | C 1 ( n ) | 0 .
Indeed, since I ( n ) G 2 ( n ) , notice that | R S ( n ) | = | I ( n ) | . These sets that form the partition of G ( n ) in Case B can be seen in Figure 4, which also displays the encoding process that aims to construct A ˜ 1 : L H V ( n ) = A ˜ 1 : L C ( n ) G ( n ) .
In this case, for any i [ 1 , L ] , Ψ 1 , i ( V ) Ψ i ( V ) , Θ ¯ 1 , i ( V ) Θ ¯ i ( V ) and Ψ 2 , i ( V ) = Θ ¯ 2 , i ( V ) ; and we define Γ 1 , i ( V ) and Γ ¯ 1 , i ( V ) as any part of Γ i ( V ) and Γ ¯ i ( V ) , respectively, with size | G 0 ( n ) | , and Γ 2 , i ( V ) and Γ ¯ 2 , i ( V ) as the remaining parts with size | C 1 , 2 ( n ) | | G 0 ( n ) | . Now, the encoder copies Γ 1 , i 1 ( V ) Γ ¯ 1 , i + 1 ( V ) into A ˜ i R 1 , 2 ( n ) , and Γ 2 , i 1 ( V ) and Γ ¯ 2 , i + 1 ( V ) into A ˜ i R 2 ( n ) and A ˜ i R 1 ( n ) respectively. Moreover, since I ( n ) G 2 ( n ) , notice that Π i ( V ) = S i for any i [ 2 , L 1 ] .

4.2.3. Case C

In this case, recall that | G 1 ( n ) | | C 2 ( n ) | , | G 2 ( n ) | | C 1 ( n ) | and | G 0 ( n ) | > | C 1 , 2 ( n ) | . Hence, we define R 2 ( n ) and R 1 , 2 ( n ) as in (28) and (29) respectively, and R 1 ( n ) = R 2 ( n ) = R 1 , 2 ( n ) . On the other hand, since | G 2 ( n ) | | C 1 ( n ) | , now for i [ 1 , L 1 ] only a part of Θ ¯ i + 1 ( V ) can be repeated entirely in A ˜ i G 2 ( n ) . Consequently, we define
R 1 ( n ) the union of G 2 ( n ) with any subset of G 0 ( n ) R 1 , 2 ( n ) with size | C 1 ( n ) | | G 2 ( n ) | .
It is clear that R 2 ( n ) and R 1 , 2 ( n ) exist. By (20), R 1 ( n ) also exists and so does I ( n ) . Since R 1 ( n ) G 2 ( n ) , then I ( n ) G 2 ( n ) = and R S ( n ) = . These sets that form G ( n ) are represented in Figure 5, which also displays the part of the encoding that aims to construct A ˜ 1 : L H V ( n ) .
In this case, for i [ 1 , L ] , we define Ψ 1 , i ( V ) Ψ i ( V ) , Γ 1 , i ( V ) Γ i ( V ) , Θ ¯ 1 , i ( V ) Θ ¯ i ( V ) , Γ ¯ 1 , i ( V ) Γ ¯ i ( V ) , and Ψ 2 , i ( V ) = Γ 2 , i ( V ) = Θ ¯ 2 , i ( V ) = Γ ¯ 2 , i ( V ) . Moreover, note that Π i ( V ) = because I ( n ) G 2 ( n ) = .

4.2.4. Case D

In this case, recall that | G 1 ( n ) | < | C 2 ( n ) | , | G 2 ( n ) | < | C 1 ( n ) | and | G 0 ( n ) | > | C 1 , 2 ( n ) | . The sets that form the partition of G ( n ) in Case D are defined below and can be seen in Figure 6, which also displays the encoding process that aims to construct of A ˜ 1 : L H V ( n ) .
As in Case A and Case C, since | G 0 ( n ) | > | C 1 , 2 ( n ) | then we define the set R 1 , 2 ( n ) as in (29) and R 1 ( n ) = R 2 ( n ) . On the other hand, since | G 1 ( n ) | < | C 2 ( n ) | , now for i [ 2 , L ] only a part of Ψ i 1 ( V ) can be repeated entirely in A ˜ i [ G 1 ( n ) ] . Consequently, we define R 2 ( n ) G 1 ( n ) and
R 1 , 2 ( n ) any subset of G 0 ( n ) R 1 , 2 ( n ) with size | C 2 ( n ) | | G 1 ( n ) | .
By (20), it is clear that R 1 , 2 ( n ) exists. Now, despite | G 2 ( n ) | < | C 1 ( n ) | as in Case C, the set R 1 ( n ) is not defined as in (32), but
R 1 ( n ) the union of G 2 ( n ) with any subset of G 0 ( n ) R 1 , 2 ( n ) R 1 , 2 ( n ) with size | C 1 ( n ) | | G 2 ( n ) | | C 2 ( n ) | | G 1 ( n ) | ,
which exists because, by the assumption in (20), we have
| G 0 ( n ) R 1 , 2 ( n ) R 1 , 2 ( n ) | | R 1 ( n ) | = | G 0 ( n ) | | C 1 , 2 ( n ) | | C 2 ( n ) | + | G 1 ( n ) | | C 1 ( n ) | | G 2 ( n ) | | C 2 ( n ) | + | G 1 ( n ) | = | G 0 ( n ) | | C 1 , 2 ( n ) | | C 1 ( n ) | + | G 2 ( n ) | 0 .
In this case, for i [ 1 , L ] , we set Γ 1 , i ( V ) Γ i ( V ) , Γ ¯ 1 , i ( V ) Γ ¯ i ( V ) and Γ ¯ 2 , i ( V ) = Γ 2 , i ( V ) . Furthermore, we define Ψ 1 , i ( V ) as any part of Ψ i ( V ) with size | G 1 ( n ) | , and Ψ 2 , i ( V ) as the remaining part with size | C 2 ( n ) | | G 1 ( n ) | . Lastly, we define Θ ¯ 1 , i ( V ) as any part Θ ¯ i ( V ) with size | C 1 ( n ) | | C 2 ( n ) | | G 1 ( n ) | , and Θ ¯ 2 , i ( V ) as the remaining part with size | C 2 ( n ) | | G 1 ( n ) | .
Thus, according to Algorithm 2, instead of repeating Ψ 2 , i 1 ( V ) , that is, the part of Ψ i 1 ( V ) that does not fit in A ˜ i n G 1 ( n ) , in a specific part of A ˜ i G 0 ( n ) , the encoder stores Ψ 2 , i 1 ( V ) Θ ¯ 2 , i + 1 ( V ) into A ˜ i R 1 , 2 ( n ) A ˜ i G 0 ( n ) , where Θ ¯ 2 , i + 1 ( V ) denotes part of those elements of Θ ¯ i + 1 ( V ) that do not fit in A ˜ i G 2 ( n ) . Furthermore, as in Case C, since I ( n ) G 2 ( n ) = , we have Π i ( V ) = .

4.3. Channel Prefixing

For i [ 1 , L ] , let R i be a uniformly distributed vector of length | H X | V ( n ) H X | V Z ( n ) | that represents the randomization sequence. Furthermore, let Λ 0 ( X ) be a uniformly distributed random sequence of size | H X | V Z ( n ) | . The channel prefixing aims to construct X ˜ i n = T ˜ i n G n and is summarized in Algorithm 3.
Algorithm 3 Function pb_ch_pref
Require: V ˜ i n , R i , Λ i 1 ( X )
 1:
T ˜ i H X | V Z ( n ) Λ i 1 ( X )
 2:
T ˜ i H X | V ( n ) H X | V Z ( n ) R i
 3:
for j H X | V ( n ) C do
 4:
    if j H X | V ( n ) C L X | V ( n ) then
 5:
         T ˜ i ( j ) p T ( j ) | T 1 : j 1 V n T ˜ i ( j ) | T ˜ i 1 : j 1 V ˜ i n
 6:
    else if j L X | V ( n ) then
 7:
         T ˜ i ( j ) arg max t X p T ( j ) | T 1 : j 1 V n t | T ˜ i 1 : j 1 V ˜ i n
 8:
    end if
 9:
end for
10:
X ˜ i n T ˜ i n G n
11:
Λ i ( X ) T ˜ i H X | V ( n ) H X | V Z ( n )
12:
return X ˜ i n and Λ i ( X )
Notice that the sequence Λ 0 ( X ) is copied in T ˜ i H X | V Z ( n ) at any Block  i [ 1 , L ] , while R i is stored into T ˜ i H X | V ( n ) H X | V Z ( n ) . After forming T ˜ i H X | V ( n ) , and given the sequence V ˜ i n A ˜ i n G n , the encoder forms the remaining entries of T ˜ i n , that is, T ˜ i H X | V ( n ) C as follows. If j L X | V ( n ) , where L V | X ( n ) j [ 1 , n ] : H T ( j ) | T 1 : j 1 V n δ n , it constructs T ˜ i ( j ) deterministically by using SC encoding [24]. Otherwise, if j ( H X | V ( n ) ) C L X | V ( n ) , the encoder randomly draws T ˜ i ( j ) from distribution p T ( j ) | T 1 : j 1 V n .

4.4. Decoding

Consider that Υ ( k ) ( V ) , Φ ( k ) , 1 : L ( V ) , for all k [ 1 , 2 ] , is available to the k-th legitimate receiver. In the decoding process, both legitimate receivers form the estimates A ^ 1 : L n of A ˜ 1 : L n and then obtain the messages W ^ 1 : L , S ^ 1 : L .

4.4.1. Legitimate Receiver 1

This receiver forms the estimates A ^ 1 : L n by going forward, i.e., from A ^ 1 n to A ^ L n , and this process is summarized in Algorithm 4.
Algorithm 4 Decoding at legitimate Receiver 1
Require: Υ ( 1 ) ( V ) , Φ ( 1 ) , 1 : L ( V ) , κ Θ ( V ) and κ Γ ( V ) , and Y ˜ ( 1 ) , 1 : L n .
 1:
A ^ 1 n Υ ( 1 ) ( V ) , Φ ( 1 ) , 1 ( V ) , Y ˜ ( 1 ) , 1 n
 2:
Λ ^ 2 : L ( V ) A ^ 1 R Λ ( n )
 3:
for i = 1 to L 1 do
 4:
     Ψ ^ i ( V ) A ^ i [ C 2 ( n ) ]
 5:
     Γ ^ i ( V ) A ^ i [ C 1 , 2 ( n ) ]
 6:
     Θ ¯ ^ i + 1 ( V ) A ^ i [ R 1 ( n ) ] , A ^ i [ R 1 , 2 ( n ) ] Ψ ^ 2 , i 1 ( V )
 7:
     Θ ^ i + 1 ( V ) Θ ¯ ^ i + 1 ( V ) κ Θ ( V )
 8:
     Γ ¯ ^ i + 1 ( V ) A ^ i [ R 1 , 2 ( n ) ] Γ ^ 1 , i 1 ( V ) , A ^ i [ R 1 ( n ) ]
 9:
     Γ ^ i + 1 ( V ) Γ ¯ ^ i + 1 ( V ) κ Γ ( V )
10:
     Π ^ i ( V ) A ^ i [ I ( n ) G 2 ( n ) ]
11:
     Υ ^ ( 1 ) , i + 1 ( V ) Ψ ^ 1 , i ( V ) , Γ ^ 2 , i ( V ) , Θ ^ i + 1 ( V ) , Γ ^ i + 1 ( V ) , Π ^ i ( V ) , Λ ^ i ( V )
12:
     A ^ i + 1 n Υ ^ ( 1 ) , i + 1 ( V ) , Φ ( 1 ) , i + 1 ( V ) , Y ˜ ( 1 ) , i + 1 n
13:
end for
In all cases (among Case A to Case D), Receiver 1 constructs A ^ 1 n as follows. Given Υ ( 1 ) ( V ) (all the elements inside the red curve at Block 1 in Figure 3, Figure 4, Figure 5 and Figure 6) and Φ ( 1 ) , 1 ( V ) , notice that Receiver 1 knows A ˜ 1 L V | Y ( 1 ) ( n ) C . Therefore, from Υ ( 1 ) ( V ) , Φ ( 1 ) , 1 ( V ) and channel observations Y ˜ ( 1 ) , 1 n , Receiver 1 performs SC decoding to form A ^ 1 n . Moreover, since Λ 1 ( V ) has been replicated in all blocks, legitimate Receiver 1 obtains Λ ^ 2 : L ( V ) = A ^ 1 R Λ ( n ) (gray pentagons in all blocks).
For i [ 1 , L 1 ] , consider the construction of A ^ i + 1 n . First, since A ^ 1 : i n have already been estimated, from A ^ i n Receiver 1 obtains Ψ ^ i ( V ) = A ^ i C 2 ( n ) (e.g., red circles at Block 2 in Figure 3, Figure 4, Figure 5 and Figure 6) and Γ ^ i ( V ) = A ^ i C 1 , 2 ( n ) (red triangles).
Furthermore, from A ^ i n , Receiver 1 obtains Θ ^ i + 1 ( V ) as follows. At Block 1, in all cases it gets Θ ¯ ^ 2 ( V ) = A ˜ 1 R 1 ( n ) R 1 , 2 ( n ) (all the red squares with a line through them at Block 1 in Figure 3, Figure 4, Figure 5 and Figure 6). At Block  i [ 2 , L 1 ] , we distinguish two situations:
  • In Case D, Receiver 1 gets Θ ¯ ^ 1 , i + 1 ( V ) = A ^ i R 1 ( V ) (e.g., yellow squares with a line through them at Block 2 in Figure 6) and Ψ ^ 2 , i 1 ( V ) Θ ¯ ^ 2 , i + 1 ( V ) = A ^ i R 1 , 2 ( V ) (yellow squares with a line through them overlapped by blue circles). Since Ψ ^ 2 , i 1 ( V ) A ^ i 1 n (blue circles) has already been estimated, Receiver 1 obtains Θ ¯ ^ 2 , i + 1 ( V ) = Ψ ^ 2 , i 1 ( V ) A ^ i R 1 , 2 ( V ) (yellow squares with a line through them).
  • Otherwise, in other cases, Receiver 1 obtains Θ ¯ ^ i + 1 ( V ) = A ^ i R 1 ( n ) directly (yellow squares with a line through them at Block 2 in Figure 3, Figure 4 and Figure 5).
Then, given Θ ¯ ^ i + 1 ( V ) = Θ ¯ ^ 1 , i + 1 ( V ) , Θ ¯ ^ 2 , i + 1 ( V ) , in all cases Receiver 1 recovers Θ ^ i + 1 ( V ) = Θ ¯ ^ i + 1 ( V ) κ Θ ( V ) .
From A ^ i n , Receiver 1 also obtains Γ ^ i + 1 ( V ) as follows. At Block 1, in all cases it gets Γ ¯ ^ 2 ( V ) = A ˜ 1 R 1 , 2 ( n ) R 1 ( n ) directly (e.g., all red triangles with a line through them at Block 1 in Figure 3, Figure 4, Figure 5 and Figure 6). At Block  i [ 2 , L 1 ] , in all cases it obtains Γ ^ 1 , i 1 ( V ) Γ ¯ ^ 1 , i + 1 ( V ) = A ^ i R 1 , 2 ( n ) (e.g., blue and yellow diamonds with a line through them at Block 2). Since Γ ^ 1 , i 1 ( V ) A ^ i 1 n (blue triangles) has already been estimated, Receiver 1 obtains Γ ¯ ^ 1 , i + 1 ( V ) = A ^ i R 1 , 2 ( n ) Γ ^ 1 , i 1 ( V ) (yellow triangles with a line through them). Only in Case B, Receiver 1 obtains Γ ¯ ^ 2 , i + 1 ( V ) = A ^ i R 1 ( n ) (remaining yellow triangles with a line through them at Block 2 in Figure 4). Then, given Γ ¯ ^ i + 1 ( V ) = Γ ¯ ^ 1 , i + 1 ( V ) , Γ ¯ ^ 2 , i + 1 ( V ) , in all cases Receiver 1 recovers Γ ^ i + 1 ( V ) = Γ ¯ ^ i + 1 ( V ) κ Γ ( V ) .
Lastly, only in Case A and Case B, Receiver 1 obtains Π ^ i ( V ) = A ^ i I ( n ) G 2 ( n ) (e.g., purple crosses at Block 2 in Figure 3 and Figure 4).
Finally, define the sequence Υ ^ ( 1 ) , i + 1 ( V ) Ψ ^ 1 , i ( V ) , Γ ^ 2 , i ( V ) , Θ ^ i + 1 ( V ) , Γ ^ i + 1 ( V ) , Π ^ i ( V ) , Λ ^ i ( V ) . Notice that Υ ^ ( 1 ) , i + 1 ( V ) A ^ i + 1 H V ( n ) L V | Y ( 1 ) ( n ) (elements inside red curve at Block  i + 1 in Figure 3, Figure 4, Figure 5 and Figure 6). Therefore, Receiver 1 performs SC decoding to form A ^ i + 1 n by using Υ ^ ( 1 ) , i + 1 ( V ) , Φ ( 1 ) , i + 1 ( V ) and the channel observations Y ˜ ( 1 ) , i + 1 n .

4.4.2. Legitimate Receiver 2

This receiver forms the estimates A ^ 1 : L n by going backward, i.e., from A ^ L n to A ^ 1 n , and this process is summarized in Algorithm 5.
Algorithm 5 Decoding at legitimate Receiver 2
Require: Υ ( 2 ) ( V ) , Φ ( 2 ) , 1 : L ( V ) , κ Θ ( V ) and κ Γ ( V ) , and Y ˜ ( 2 ) , 1 : L n .
 1:
A ^ L n Υ ( 2 ) ( V ) , Φ ( 2 ) , L ( V ) , Y ˜ ( 2 ) , L n
 2:
Λ ^ 1 : L 1 ( V ) A ^ L R Λ ( n )
 3:
for i = L to 2 do
 4:
     Θ ¯ ^ i ( V ) A ^ i [ C 1 ( n ) ] κ Θ ( V )
 5:
     Γ ¯ ^ i ( V ) A ^ i [ C 1 , 2 ( n ) ] κ Γ ( V )
 6:
     Ψ ^ i 1 ( V ) A ^ i [ R 2 ( n ) ] , A ^ i [ R 1 , 2 ( n ) ] Θ ¯ ^ 2 , i + 1 ( V )
 7:
     Γ ^ i 1 ( V ) A ^ i [ R 1 , 2 ( n ) ] Γ ¯ ^ 1 , i + 1 ( V ) , A ^ i [ R 2 ( n ) ]
 8:
     Π ^ i 1 ( V ) A ^ i [ R S ( n ) ]
 9:
     Υ ( 2 ) , i 1 ( V ) Θ ¯ ^ 1 , i ( V ) , Γ ¯ ^ 2 , i ( V ) , Ψ ^ i 1 ( V ) , Γ ^ i 1 ( V ) , Π ^ i 1 ( V ) , Λ ^ i 1 ( V )
10:
     A ^ i 1 n Υ ( 2 ) , i 1 ( V ) , Φ ( 2 ) , i 1 ( V ) , Y ˜ ( 2 ) , i 1 n
11:
end for
In all cases (among Case A to Case D), Receiver 2 constructs A ^ L n as follows. Given Υ ( 2 ) ( V ) (all the elements inside blue curve at Block 4 in Figure 3, Figure 4, Figure 5 and Figure 6) and Φ ( 2 ) , L ( V ) , notice that Receiver 2 knows A ˜ L L V | Y ( 2 ) ( n ) C . Hence, from ( Υ ( 2 ) ( V ) , Φ ( 2 ) , L ( V ) ) and channel output observations Y ˜ ( 2 ) , L n , Receiver 2 performs SC decoding to form A ^ L n . Since Λ 1 ( V ) has been replicated in all blocks, from A ^ L n it obtains Λ ^ 1 : L 1 ( V ) = A ^ L R Λ ( n ) (gray pentagons at all blocks).
For i [ 2 , L ] , consider the construction of A ^ i 1 n . First, since A ^ i : L n have already been estimated, from A ^ i n Receiver 2 obtains the sequence Θ ^ i ( V ) = A ^ i C 1 ( n ) (e.g., yellow squares at Block 3 in Figure 3, Figure 4, Figure 5 and Figure 6). Given Θ ^ i ( V ) , it computes Θ ¯ ^ i ( V ) = Θ ^ i ( V ) κ Θ ( V ) (yellow squares with a line through them). Furthermore, Receiver 2 obtains Γ ^ i ( V ) = A ^ i C 1 , 2 ( n ) (yellow triangles at Block 3 in Figure 3, Figure 4, Figure 5 and Figure 6). Given this sequence, it computes Γ ¯ ^ i ( V ) = Γ ^ i ( V ) κ Γ ( V ) (yellow triangles with a line through them).
Furthermore, from A ^ i n , Receiver 2 obtains Ψ ^ i 1 ( V ) as follows. At block L, in all cases it gets Ψ ^ L 1 ( V ) = A ^ i R 2 ( n ) R 1 , 2 ( n ) directly (all yellow circles at Block L in Figure 3, Figure 4, Figure 5 and Figure 6). At Block i [ 2 , L 1 ] , we distinguish two situations:
  • In Case D, Receiver 2 obtains Ψ ^ 1 , i 1 ( V ) = A ^ i R 2 ( n ) (e.g., red circles at Block 3 in Figure 6) and Ψ ^ 2 , i 1 ( V ) Θ ¯ ^ 2 , i + 1 ( V ) = A ^ i R 1 , 2 ( n ) (cyan squares with a line through them overlapped by red circles). Since Θ ¯ ^ 2 , i + 1 ( V ) (cyan squares with a line through them) has already been estimated, it obtains Ψ ^ 2 , i 1 ( V ) = A ^ i R 1 , 2 ( n ) Θ ¯ ^ 2 , i + 1 ( V ) (red circles).
  • Otherwise, in other cases, Receiver 2 obtains directly Ψ ^ i 1 ( V ) = A ^ i R 2 ( n ) (e.g., red circles at Block 3 in Figure 3, Figure 4 and Figure 5).
From A ^ i n , Receiver 2 also obtains Γ ^ i 1 ( V ) as follows. At block L, in all cases it gets Γ ¯ ^ L 1 ( V ) = A ^ L R 1 , 2 ( n ) R 2 ( n ) (e.g., all yellow triangles at Block L in Figure 3, Figure 4, Figure 5 and Figure 6). At Block i [ 2 , L 1 ] , in all cases Receiver 2 obtains Γ ^ 1 , i 1 ( V ) Γ ¯ ^ 1 , i + 1 ( V ) = A ^ i R 1 , 2 ( n ) (e.g., red and cyan diamonds with a line through them at Block 3). Since Γ ¯ ^ 1 , i + 1 ( V ) (cyan triangles with a line through them) has already been estimated, Receiver 2 obtains Γ ^ 1 , i 1 ( V ) = A ^ i R 1 , 2 ( n ) Γ ¯ ^ 1 , i + 1 ( V ) (red triangles). Furthermore, only in Case B, Receiver 2 obtains the sequence Γ ^ 2 , i 1 ( V ) = A ^ i R 2 ( n ) (remaining red triangles at Block 3 in Figure 4).
Lastly, only in Case A and Case B, Receiver 2 obtains the sequence Π ^ i 1 ( V ) = A ^ i R S ( n ) (e.g., purple crosses at Block 3 in Figure 3 and Figure 4).
Finally, define the sequence Υ ( 2 ) , i 1 ( V ) Θ ¯ ^ 1 , i ( V ) , Γ ¯ ^ 2 , i ( V ) , Ψ ^ i 1 ( V ) , Γ ^ i 1 ( V ) , Π ^ i 1 ( V ) , Λ ^ i 1 ( V ) . Notice that Υ ( 2 ) , i 1 ( V ) A ^ i 1 H V ( n ) L V | Y ( 2 ) ( n ) (elements inside blue curve at Block i 1 in Figure 3, Figure 4, Figure 5 and Figure 6). Thus, Receiver 2 performs SC decoding to form A ^ i 1 n by using Υ ( 2 ) , i 1 ( V ) , Φ ( 2 ) , i 1 ( V ) and Y ˜ ( 2 ) , i 1 n .

5. Performance of the Polar Coding Scheme

The analysis of the polar coding scheme of Section 4 leads to the following theorem.
Theorem 1.
Let ( X , p Y ( 1 ) Y ( 2 ) Z | X , Y ( 1 ) × Y ( 2 ) × Z ) be an arbitrary WTBC, such that X { 0 , 1 } . The polar coding scheme described in Section 4 achieves the corner point in Equation (4) of the region R CI WTBC defined in Proposition 1.
The proof of Theorem 1 follows in four steps and is provided in the following subsections. In Section 5.1 we show that the polar coding scheme approaches the rate tuple in (4). In Section 5.2 we prove that the joint distribution of ( V ˜ i n , X ˜ i n , Y ˜ ( 1 ) , i n , Y ˜ ( 2 ) , i n , Z ˜ i n ) , for all i [ 1 , L ] , is asymptotically indistinguishable of the one of the original DMS that is used for the polar code construction. Finally, in Section 5.3 and Section 5.4 we show that the polar coding scheme satisfies the reliability and the secrecy conditions (1) and (2) respectively.

5.1. Transmission Rates

We prove that the polar coding scheme described in Section 4 approaches the rate tuple in Equation (4). Furthermore, we show that the overall length of the secret keys κ Θ ( V ) , κ Γ ( V ) , κ Υ Φ ( 1 ) ( V ) and κ Υ Φ ( 2 ) ( V ) , and the additional randomness used in the encoding (besides the randomization sequences) are asymptotically negligible in terms of rate.

5.1.1. Private Message Rate

For i [ 1 , L ] , we have W i = A ˜ i C ( n ) . According to the definition of C ( n ) in (11), and since H V | Z ( n ) H V ( n ) , the rate of W 1 : L is
1 n L i = 1 L | W i | = 1 n | H V ( n ) H V | Z ( n ) C | = 1 n | H V ( n ) | 1 n | H V | Z ( n ) | n H ( V ) H ( V | Z )
where the limit holds by Lemma 4 of [10]. Therefore, the private message rate achieved by the polar coding scheme is R W = I ( V ; Z ) , as in (4).

5.1.2. Confidential Message Rate

From Section 4.2, in all cases we have S 1 = A ˜ 1 I ( n ) G 1 ( n ) G 1 , 2 ( n ) ; for i [ 2 , L 1 ] , we have S i = A ˜ i I ( n ) ; and S L = A ˜ L I ( n ) G 2 ( n ) . Thus, we have
1 n L i = 1 L | S i | = ( L 2 ) n L | I ( n ) | + 1 n L | I ( n ) G 1 ( n ) G 1 , 2 ( n ) | + | I ( n ) G 2 ( n ) | = 1 n | I ( n ) | + 1 n L | G 1 ( n ) | + | G 2 ( n ) | + | G 1 , 2 ( n ) | = 1 n | I ( n ) | + 1 n L | G ( n ) G 0 ( n ) | = ( a ) 1 n | G 0 ( n ) | + | G 2 ( n ) | | R 1 , 2 ( n ) | | R 1 , 2 ( n ) | | R 1 ( n ) | | R 1 ( n ) | + 1 n L | G ( n ) G 0 ( n ) | = ( b ) 1 n | G 0 ( n ) | + | G 2 ( n ) | | C 1 ( n ) | | C 1 , 2 ( n ) | + | G ( n ) G 0 ( n ) | n L ( c ) 1 n | H V | Z ( n ) L V | Y ( 1 ) ( n ) | | H V | Z ( n ) C L V | Y ( 1 ) ( n ) C | + 1 n L | H V | Z ( n ) L V | Y ( 1 ) ( n ) L V | Y ( 2 ) ( n ) C | ( d ) 1 n | H V | Z ( n ) L V | Y ( 1 ) ( n ) | | H V | Z ( n ) C L V | Y ( 1 ) ( n ) C | + 1 n L | H V | Z ( n ) | 1 n L | L V | Y ( 1 ) ( n ) C | = 1 n | H V | Z ( n ) | 1 n | L V | Y ( 1 ) ( n ) C | + 1 n L | H V | Z ( n ) | 1 n L | L V | Y ( 1 ) ( n ) C | n H ( V | Z ) H ( V | Y ( 1 ) ) + 1 L H ( V | Z ) H ( V | Y ( 1 ) ) L H ( V | Z ) H ( V | Y ( 1 ) )
where ( a ) holds by the definition of I ( n ) in (24); ( b ) holds because, in all cases, we have | R 1 , 2 ( n ) | + | R 1 ( n ) | = | C 1 , 2 ( n ) | and | R 1 ( n ) | + | R 1 , 2 ( n ) | = | C 1 ( n ) | ; ( c ) follows from the partition of H V ( n ) defined in (12)–(19); ( d ) follows from applying elementary set operations and because, by assumption, H ( V | Y ( 1 ) ) H ( V | Y ( 2 ) ) , which means that | L V | Y ( 1 ) ( n ) C | | L V | Y ( 2 ) ( n ) C | (by Lemma 4 of [10]); and the limit when n goes to infinity holds also by Lemma 4 of [10]. Hence, the polar coding scheme operates as close to the rate R S in (4) as desired by choosing a sufficiently large L.

5.1.3. Randomization Sequence Rate

For i [ 1 , L ] , we have R i = T ˜ i H X | V ( n ) ( H X | V Z ( n ) ) C . Since H X | V Z ( n ) H X | V ( n ) , we have
1 n L i = 1 L | R i | = 1 n | H X | V ( n ) H X | V Z ( n ) C | = 1 n | H X | V ( n ) | 1 n | H X | V Z ( n ) | n H ( X | Z ) H ( X | V Z )
where the limit holds by Lemma 4 of [10]. Thus, the randomization sequence rate used by the polar coding scheme is R R = I ( X ; Z | V ) as in (4).

5.1.4. Private-Shared Sequence Rate

Transmitter and legitimate Receiver k [ 1 , 2 ] must privately share the keys κ Θ ( V ) , κ Γ ( V ) and κ Υ Φ ( k ) ( V ) . Hence, the overall rate is
1 n L | κ Θ ( V ) | + | κ Γ ( V ) | + k = 1 2 | κ Υ Φ ( k ) ( V ) | = 1 n L | C 1 ( n ) | + | C 1 , 2 ( n ) | + 1 n L k = 1 2 L | H V ( n ) C L V | Y ( k ) ( n ) C | + | H V ( n ) L V | Y ( k ) ( n ) C | = ( a ) 1 n L | H V ( n ) H V | Z ( n ) C L V | Y ( 1 ) ( n ) C | + 1 n L k = 1 2 L | H V ( n ) C L V | Y ( k ) ( n ) C | + | H V ( n ) L V | Y ( k ) ( n ) C | ( b ) 1 n L | L V | Y ( 1 ) ( n ) C | + 1 n L k = 1 2 L | H V | Y ( k ) ( n ) C L V | Y ( k ) ( n ) C | + | L V | Y ( k ) ( n ) C | n 1 L 2 H ( V | Y ( 1 ) ) + H ( V | Y ( 2 ) ) L 0 ,
where ( a ) follows from the definition of C 1 ( n ) and C 1 , 2 ( n ) in (17) and (19), respectively; ( b ) follows from standard set properties and because H V | Z ( n ) C H V | Y ( k ) ( n ) C for any k [ 1 , 2 ] ; and the limit when n goes to infinity holds by Lemma 4 of [10].

5.1.5. Rate of the Additional Randomness

Besides the randomization sequences R 1 : L , the encoder uses the random sequence Λ 0 ( X ) , with size | H X | V ( n ) | , for the polar-based channel prefixing. Moreover, for i [ 1 , L ] , the encoder randomly draws those elements A ˜ i ( j ) such that j H V ( n ) C L V ( n ) , and those elements T ˜ i ( j ) such that j H X | V ( n ) C L X | V ( n ) . Nevertheless, we have
1 n L | H X | V ( n ) | + L | H V ( n ) C L V ( n ) | + L | H X | V ( n ) C L X | V ( n ) | n 1 L H ( X | V ) L 0 ,
where the limit when n approaches to infinity follows from applying Lemma 4 of [10].

5.2. Distribution of the DMS after the Polar Encoding

For i [ 1 , L ] , let q ˜ A i n T i n denote the distribution of ( A ˜ i n , T ˜ i n ) after the encoding. The following lemma proves that q ˜ A i n T i n and the marginal distribution p A n T n of the original DMS are nearly statistically indistinguishable for sufficiently large n and, consequently, so are q ˜ V i n X i n Y ( 1 ) , i n Y ( 2 ) , i n Z i n and p V n X n Y ( 1 ) n Y ( 2 ) n Z n . This result is crucial for the reliability and secrecy performance of the polar coding scheme.
Lemma 1.
For any i [ 1 , L ] , we obtain
V ( q ˜ A i n T i n , p A n T n ) δ n ( * ) , V ( q ˜ V i n X i n Y ( 1 ) , i n Y ( 2 ) , i n Z i n , p V n X n Y ( 1 ) n Y ( 2 ) n Z n ) δ n ( * ) ,
where δ n ( * ) 2 n 4 n δ n ln 2 2 n log 2 n δ n ln 2 + δ n + 2 n δ n ln 2 .
Proof. 
Omitted because it follows similar reasoning as in Lemma 3 of [11]. □

5.3. Reliability Analysis

In this section we prove that both legitimate receivers can reliably reconstruct the private and the confidential messages ( W 1 : L , S 1 : L ) with arbitrary small error probability.
For i [ 1 , L ] and k [ 1 , 2 ] , let q ˜ V i n Y ( k ) , i n and p V n Y ( k ) n be marginals of q ˜ V i n X i n Y ( 1 ) , i n Y ( 2 ) , i n Z i n and p V n X n Y ( 1 ) n Y ( 2 ) n Z n respectively, and define an optimal coupling Proposition 4.7 of [25] between q ˜ V i n Y ( k ) , i n and p V n Y ( k ) n such that P E V i n Y ( k ) , i n = V q ˜ V i n Y ( k ) , i n , p V n Y ( k ) n , where E V i n Y ( k ) , i n V ˜ i n , Y ˜ ( k ) , i n V n , Y ( k ) n . Additionally, define the error event
E ( k ) , i A ^ i L V | Y ( k ) ( n ) C A ˜ i L V | Y ( k ) ( n ) C .
Recall that ( Υ ( k ) ( V ) , Φ ( k ) , 1 : L ( V ) ) is available to Receiver k [ 1 , 2 ] . Thus, P [ E ( 1 ) , 1 ] = P [ E ( 2 ) , L ] = 0 because given Υ ( 1 ) ( V ) and Φ ( 1 ) , 1 ( V ) legitimate Receiver 1 knows A ˜ 1 L V | Y ( 1 ) ( n ) C , and given Υ ( 2 ) ( V ) and Φ ( 2 ) , L ( V ) legitimate Receiver 2 knows A ˜ L L V | Y ( 2 ) ( n ) C . Moreover, due to the chaining structure, in Section 4.4 we have seen that A ˜ i H V ( n ) L V | Y ( 1 ) ( n ) C is repeated in A ˜ i 1 n for i [ 2 , L ] . Therefore, at legitimate Receiver 1, for i [ 2 , L ] we have
P [ E ( 1 ) , i ] P A ^ i 1 n A ˜ i 1 n .
Similarly, due to the chaining construction, we have seen that A ˜ i H V ( n ) L V | Y ( 2 ) ( n ) C is repeated in A ˜ i + 1 n for i [ 1 , L 1 ] . Thus, at legitimate Receiver 2, for i [ 1 , L 1 ] we obtain
P [ E ( 2 ) , i ] P A ^ i + 1 n A ˜ i + 1 n .
Hence, the probability of incorrectly decoding ( W i , S i ) at the Receiver k [ 1 , 2 ] is
P ( W i , S i ) ( W ^ i , S ^ i ) P A ^ i n A ˜ i n = P A ^ i n A ˜ i n | E V i n Y ( k ) , i n C E ( k ) , i C P E V i n Y ( k ) , i n C E ( k ) , i C + P A ^ i n A ˜ i n | E V i n Y ( k ) , i n E ( k ) , i P E V i n Y ( k ) , i n E ( k ) , i P A ^ i n A ˜ i n | E V i n Y ( k ) , i n C E ( k ) , i C + P E V i n Y ( k ) , i n + P E ( k ) , i ( a ) n δ n + P E V i n Y ( k ) , i n + P E ( k ) , i ( b ) n δ n + δ n ( * ) + P E ( k ) , i ( c ) i n δ n + δ n ( * ) ,
where ( a ) holds by Th. 2 of [19]; ( b ) follows from the optimal coupling and Lemma 1; and ( c ) holds by induction and Equations (35) and (36). Therefore, by the union bound we obtain
P ( W 1 : L , S 1 : L ) ( W ^ 1 : L , S ^ 1 : L ) i = 1 L P A ˜ i n A ^ i n L ( L + 1 ) 2 n δ n + 2 δ n ( * ) ,
and for sufficiently large n the polar coding scheme satisfies the reliability condition in (1).

5.4. Secrecy Analysis

Since encoding in Section 4 takes place over L blocks of size n, we need to prove that
lim n I ( S 1 : L , Z ˜ 1 : L n ) = 0 .
For clarity and with slight abuse of notation, for any Block i [ 1 , L ] let
Ξ i ( V ) Π i ( V ) , Λ i ( V ) , Ψ i ( V ) , Γ i ( V ) ,
which denotes the entire sequence depending on A ˜ i n that is repeated at Block i + 1 . Furthermore, let
Ω ¯ i ( V ) [ Θ ¯ i ( V ) , Γ ¯ i ( V ) ] ,
which represents the sequence depending on A ˜ i n that is repeated at Block i 1 . Furthermore, we define κ Ω ( V ) [ κ Θ ( V ) , κ Γ ( V ) ] . Then, a Bayesian graph describing the dependencies between all the variables involved in the polar coding scheme of Section 4 is given in Figure 7.
Despite Γ i ( V ) Ξ i ( V ) and Γ ¯ i ( V ) = Γ i ( V ) κ Γ ( V ) Ω ¯ i ( V ) , we represent Ξ i ( V ) and Ω ¯ i ( V ) as two separate nodes in the Bayesian graph because, by crypto lemma [26], Γ i ( V ) and Γ ¯ i ( V ) are statistically independent. Furthermore, for convenience, we have considered that dependencies only take place forward (from Block i to Block i + 1 ), which is possible by reformulating the encoding as follows. According to Section 4.1, for any i [ 1 , L ] we have A ˜ i C ( n ) = W i . Consequently, we can write W i [ W 1 , i , W 2 , i ] , where W 1 , i A ˜ i [ C 1 ( n ) C 1 , 2 ( n ) ] and W 2 , i A ˜ i [ C 2 ( n ) C 0 ( n ) ] . Since Θ ¯ i ( V ) = A ˜ i C 1 ( n ) κ Θ ( V ) and Γ ¯ i ( V ) = A ˜ i C 1 , 2 ( n ) κ Γ ( V ) , we regard Ω ¯ i ( V ) as an independent random sequence generated at Block i 1 that is stored properly into some part of A ˜ i 1 [ G ( n ) ] . Then, we consider that the encoder obtains W 1 , i Ω ¯ i ( V ) κ Ω ( V ) , which is stored into A ˜ i [ C 1 ( n ) C 1 , 2 ( n ) ] at Block i. On the other hand, the remaining part W 2 , i is independently generated at Block i. Recall that the secret-key κ Ω ( V ) is reused in all blocks.
The following lemma shows that strong secrecy holds for any Block $i \in [1,L]$.
Lemma 2.
For any $i \in [1,L]$ and sufficiently large n,
$$I\big(S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X); \tilde{Z}_i^n\big) \leq \delta_n^{(S)},$$
where $\delta_n^{(S)} \triangleq 2n\delta_n + 2\delta_n^{(*)}\big(2n - \log \delta_n^{(*)}\big)$ and $\delta_n^{(*)}$ is defined as in Lemma 1.
Proof. 
For n sufficiently large, we have
$$\begin{aligned}
I\big(S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X); \tilde{Z}_i^n\big)
&\overset{(a)}{=} I\big(\tilde{A}_i[\mathcal{H}_{V|Z}^{(n)}]\, \tilde{T}_i[\mathcal{H}_{X|VZ}^{(n)}]; \tilde{Z}_i^n\big) \\
&\overset{(b)}{=} \big|\mathcal{H}_{V|Z}^{(n)}\big| + \big|\mathcal{H}_{X|VZ}^{(n)}\big| - H\big(\tilde{A}_i[\mathcal{H}_{V|Z}^{(n)}]\, \tilde{T}_i[\mathcal{H}_{X|VZ}^{(n)}] \,\big|\, \tilde{Z}_i^n\big) \\
&\overset{(c)}{\leq} \big|\mathcal{H}_{V|Z}^{(n)}\big| + \big|\mathcal{H}_{X|VZ}^{(n)}\big| - H\big(A[\mathcal{H}_{V|Z}^{(n)}]\, T[\mathcal{H}_{X|VZ}^{(n)}] \,\big|\, Z^n\big) + 4n\delta_n^{(*)} - 2\delta_n^{(*)} \log \delta_n^{(*)} \\
&\overset{(d)}{\leq} 2n\delta_n + 4n\delta_n^{(*)} - 2\delta_n^{(*)} \log \delta_n^{(*)},
\end{aligned}$$
where $(a)$ holds by the encoding described in Section 4; $(b)$ holds by the uniformity of $\tilde{A}_i[\mathcal{H}_{V|Z}^{(n)}]$ and $\tilde{T}_i[\mathcal{H}_{X|VZ}^{(n)}]$; $(c)$ holds because, for n large enough, we obtain
$$\begin{aligned}
\Big| H\big(\tilde{A}_i[\mathcal{H}_{V|Z}^{(n)}]\, \tilde{T}_i[\mathcal{H}_{X|VZ}^{(n)}] \,\big|\, \tilde{Z}_i^n\big) - H\big(A[\mathcal{H}_{V|Z}^{(n)}]\, T[\mathcal{H}_{X|VZ}^{(n)}] \,\big|\, Z^n\big) \Big|
&\leq \big| H\big(\tilde{Z}_i^n\big) - H\big(Z^n\big) \big| + \Big| H\big(\tilde{A}_i[\mathcal{H}_{V|Z}^{(n)}]\, \tilde{T}_i[\mathcal{H}_{X|VZ}^{(n)}]\, \tilde{Z}_i^n\big) - H\big(A[\mathcal{H}_{V|Z}^{(n)}]\, T[\mathcal{H}_{X|VZ}^{(n)}]\, Z^n\big) \Big| \\
&\leq \mathbb{V}\big(\tilde{q}_{Z^n}, p_{Z^n}\big) \log \frac{2^n}{\mathbb{V}\big(\tilde{q}_{Z^n}, p_{Z^n}\big)} + \mathbb{V}\big(\tilde{q}_{A[\mathcal{H}_{V|Z}^{(n)}] T[\mathcal{H}_{X|VZ}^{(n)}] Z^n}, p_{A[\mathcal{H}_{V|Z}^{(n)}] T[\mathcal{H}_{X|VZ}^{(n)}] Z^n}\big) \log \frac{2^{\,n + |\mathcal{H}_{V|Z}^{(n)}| + |\mathcal{H}_{X|VZ}^{(n)}|}}{\mathbb{V}\big(\tilde{q}_{A[\mathcal{H}_{V|Z}^{(n)}] T[\mathcal{H}_{X|VZ}^{(n)}] Z^n}, p_{A[\mathcal{H}_{V|Z}^{(n)}] T[\mathcal{H}_{X|VZ}^{(n)}] Z^n}\big)} \\
&\leq 4n\delta_n^{(*)} - 2\delta_n^{(*)} \log \delta_n^{(*)},
\end{aligned}$$
where we have used the chain rule of entropy and the triangle inequality, ([27], Lemma 30), the fact that the function $x \mapsto x \log x$ is decreasing for $x > 0$ small enough, and Lemma 1; and, lastly, $(d)$ holds because
$$\begin{aligned}
H\big(A[\mathcal{H}_{V|Z}^{(n)}]\, T[\mathcal{H}_{X|VZ}^{(n)}] \,\big|\, Z^n\big)
&\geq H\big(A[\mathcal{H}_{V|Z}^{(n)}] \,\big|\, Z^n\big) + H\big(T[\mathcal{H}_{X|VZ}^{(n)}] \,\big|\, A^n Z^n\big) \\
&\geq \sum_{j \in \mathcal{H}_{V|Z}^{(n)}} H\big(A(j) \,\big|\, A^{1:j-1} Z^n\big) + \sum_{j \in \mathcal{H}_{X|VZ}^{(n)}} H\big(T(j) \,\big|\, T^{1:j-1} V^n Z^n\big) \\
&\geq \big|\mathcal{H}_{V|Z}^{(n)}\big| (1 - \delta_n) + \big|\mathcal{H}_{X|VZ}^{(n)}\big| (1 - \delta_n),
\end{aligned}$$
where we have used the fact that conditioning does not increase entropy, the invertibility of $G_n$, and the definitions of $\mathcal{H}_{V|Z}^{(n)}$ and $\mathcal{H}_{X|VZ}^{(n)}$ in (6) and (9), respectively. □
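Both $\delta_n$ and $\delta_n^{(*)}$ vanish with n, so $\delta_n^{(S)}$ vanishes as well despite the factors that grow linearly in n. The short sketch below evaluates $\delta_n^{(S)} = 2n\delta_n + 2\delta_n^{(*)}\big(2n - \log \delta_n^{(*)}\big)$ under the placeholder assumption $\delta_n = \delta_n^{(*)} = 2^{-n^{\beta}}$ with $\beta = 0.3$ and base-2 logarithms (illustrative values only; the paper's own definitions of $\delta_n$ and $\delta_n^{(*)}$ apply):

```python
import math

def delta_S(n, beta=0.3):
    """delta_n^(S) = 2 n delta_n + 2 delta_n^(*) (2 n - log delta_n^(*)),
    evaluated under the placeholder delta_n = delta_n^(*) = 2^(-n^beta)."""
    d = 2.0 ** (-(n ** beta))
    return 2 * n * d + 2 * d * (2 * n - math.log2(d))

for n in [2**10, 2**14, 2**18, 2**22]:
    print(n, delta_S(n))   # the polynomial factor in n loses to 2^(-n^beta)
```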
Next, the following lemma shows that the eavesdropper's observations $\tilde{Z}_i^n$ are asymptotically statistically independent of the observations $\tilde{Z}_{1:i-1}^n$ from previous blocks.
Lemma 3.
For any $i \in [2,L]$ and sufficiently large n,
$$I\big(S_{1:L}\, \tilde{Z}_{1:i-1}^n; \tilde{Z}_i^n\big) \leq \delta_n^{(S)},$$
where $\delta_n^{(S)}$ is defined as in Lemma 2.
Proof. 
For any $i \in [2,L]$ and sufficiently large n, we have
$$\begin{aligned}
I\big(S_{1:L}\, \tilde{Z}_{1:i-1}^n; \tilde{Z}_i^n\big)
&= I\big(S_{1:i}\, \tilde{Z}_{1:i-1}^n; \tilde{Z}_i^n\big) + I\big(S_{i+1:L}; \tilde{Z}_i^n \,\big|\, S_{1:i}\, \tilde{Z}_{1:i-1}^n\big) \\
&\overset{(a)}{=} I\big(S_{1:i}\, \tilde{Z}_{1:i-1}^n; \tilde{Z}_i^n\big) \\
&\leq I\big(S_{1:i}\, \tilde{Z}_{1:i-1}^n\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X); \tilde{Z}_i^n\big) \\
&= I\big(S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X); \tilde{Z}_i^n\big) + I\big(S_{1:i-1}\, \tilde{Z}_{1:i-1}^n; \tilde{Z}_i^n \,\big|\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\big) \\
&\overset{(b)}{\leq} \delta_n^{(S)} + I\big(S_{1:i-1}\, \tilde{Z}_{1:i-1}^n; \tilde{Z}_i^n \,\big|\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\big) \\
&\leq \delta_n^{(S)} + I\big(S_{1:i-1}\, \tilde{Z}_{1:i-1}^n; \tilde{Z}_i^n\, W_{1,i} \,\big|\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\big) \\
&= \delta_n^{(S)} + I\big(S_{1:i-1}\, \tilde{Z}_{1:i-1}^n; W_{1,i} \,\big|\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\big) + I\big(S_{1:i-1}\, \tilde{Z}_{1:i-1}^n; \tilde{Z}_i^n \,\big|\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\, W_{1,i}\big) \\
&\overset{(c)}{=} \delta_n^{(S)} + I\big(S_{1:i-1}\, \tilde{Z}_{1:i-1}^n; W_{1,i} \,\big|\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\big) \\
&\leq \delta_n^{(S)} + I\big(\tilde{A}_{1:i-1}^n\, \tilde{Z}_{1:i-1}^n; W_{1,i} \,\big|\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\big) \\
&= \delta_n^{(S)} + I\big(\tilde{A}_{1:i-1}^n; W_{1,i} \,\big|\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\big) + I\big(\tilde{Z}_{1:i-1}^n; W_{1,i} \,\big|\, \tilde{A}_{1:i-1}^n\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\big) \\
&\overset{(d)}{=} \delta_n^{(S)} + I\big(\tilde{A}_{1:i-1}^n; W_{1,i} \,\big|\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\big) \\
&\overset{(e)}{=} \delta_n^{(S)} + I\big(\tilde{A}_{1:i-1}^n; \bar{\Omega}_i(V) \oplus \kappa_\Omega(V) \,\big|\, S_i\, \Xi_{i-1}(V)\, \Lambda_{i-1}(X)\big) \\
&\overset{(f)}{=} \delta_n^{(S)},
\end{aligned}$$
where $(a)$ holds by independence between $S_{i+1:L}$ and any random variable from Blocks 1 to $i$; $(b)$ holds by Lemma 2; $(c)$ follows from applying d-separation [28] over the Bayesian graph in Figure 7 to conclude that $\tilde{Z}_i^n$ and $(S_{1:i-1}, \tilde{Z}_{1:i-1}^n)$ are conditionally independent given $(S_i, \Xi_{i-1}(V), \Lambda_{i-1}(X), W_{1,i})$; $(d)$ also follows from applying d-separation to conclude that $W_{1,i}$ and $\tilde{Z}_{1:i-1}^n$ are conditionally independent given $(\tilde{A}_{1:i-1}^n, S_i, \Xi_{i-1}(V), \Lambda_{i-1}(X))$; $(e)$ holds by definition; and $(f)$ holds because $\bar{\Omega}_i(V)$ is independent of $(S_i, \Xi_{i-1}(V), \Lambda_{i-1}(X))$ and of any random variable from Blocks 1 to $i-2$, and because, by the crypto lemma [26], $\bar{\Omega}_i(V) \oplus \kappa_\Omega(V)$ is independent of $\tilde{A}_{i-1}^n$. □
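Steps $(c)$ and $(d)$ are pure d-separation queries on the graph of Figure 7. As an illustration of how such queries can be machine-checked, the sketch below encodes a simplified two-block version of the forward-only dependency pattern (our toy graph, not a faithful copy of Figure 7: $\Lambda$ and the remaining chained sequences are folded into the single node `Xi_prev`) and runs both queries with NetworkX, whose `d_separated` helper is available from release 2.8 (renamed `is_d_separator` in newer releases):

```python
import networkx as nx

# Toy Bayesian graph for Blocks i-1 and i with forward-only dependencies.
G = nx.DiGraph([
    ("S_prev", "A_prev"), ("Omega_bar", "A_prev"),        # encoding of Block i-1
    ("A_prev", "Z_prev"),                                 # eavesdropper output, Block i-1
    ("A_prev", "Xi_prev"),                                # chained sequence repeated forward
    ("Omega_bar", "W1_i"), ("kappa", "W1_i"),             # W_{1,i} = Omega_bar xor key
    ("S_i", "A_i"), ("W1_i", "A_i"), ("Xi_prev", "A_i"),  # encoding of Block i
    ("A_i", "Z_i"),                                       # eavesdropper output, Block i
])

# Step (c): Z_i vs (S_prev, Z_prev) given (S_i, Xi_prev, W1_i).
print(nx.d_separated(G, {"Z_i"}, {"S_prev", "Z_prev"}, {"S_i", "Xi_prev", "W1_i"}))
# Step (d): W1_i vs Z_prev given (A_prev, S_i, Xi_prev).
print(nx.d_separated(G, {"W1_i"}, {"Z_prev"}, {"A_prev", "S_i", "Xi_prev"}))
# Both queries print True on this toy graph.
```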
Therefore, combining Lemmas 2 and 3, we obtain
$$\begin{aligned}
I\big(S_{1:L}; \tilde{Z}_{1:L}^n\big)
&\overset{(a)}{=} I\big(S_{1:L}; \tilde{Z}_1^n\big) + \sum_{i=2}^{L} I\big(S_{1:L}; \tilde{Z}_i^n \,\big|\, \tilde{Z}_{1:i-1}^n\big) \\
&\overset{(b)}{\leq} I\big(S_{1:L}; \tilde{Z}_1^n\big) + (L-1)\,\delta_n^{(S)} \\
&= I\big(S_1; \tilde{Z}_1^n\big) + I\big(S_{2:L}; \tilde{Z}_1^n \,\big|\, S_1\big) + (L-1)\,\delta_n^{(S)} \\
&\overset{(c)}{=} I\big(S_1; \tilde{Z}_1^n\big) + (L-1)\,\delta_n^{(S)} \\
&\overset{(d)}{\leq} L\,\delta_n^{(S)},
\end{aligned}$$
where $(a)$ follows from the chain rule; $(b)$ holds by Lemma 3; $(c)$ holds by independence between $S_{2:L}$ and any random variable from Block 1; and $(d)$ holds by Lemma 2. Thus, for sufficiently large n, the polar coding scheme satisfies the strong secrecy condition in (2).
Remark 1.
We conjecture that the use of $\kappa_\Omega(V)$ is not needed for the polar coding scheme to satisfy the strong secrecy condition. However, the key is required in order to prove this condition by analyzing a causal Bayesian graph similar to the one in Figure 7.
Remark 2.
Although backward dependencies between random variables of different blocks also appear in [12], a secret seed such as $\kappa_\Omega(V)$ is not necessary there for the polar coding scheme to provide strong secrecy. This is because the random sequences that are repeated in adjacent blocks are stored only into those entries whose indices belong to the "high entropy set given eavesdropper observations", i.e., the counterparts of the sets $\mathcal{H}_{V|Z}^{(n)}$ and $\mathcal{H}_{X|VZ}^{(n)}$ in our polar coding scheme. By contrast, notice that our polar coding scheme repeats $[\Theta_i(V), \Gamma_i(V)] \subseteq \tilde{A}_i\big[\big(\mathcal{H}_{V|Z}^{(n)}\big)^{\mathrm{C}}\big]$.
Remark 3.
Another possibility for the polar coding scheme is to repeat at Block $i+1$ the modulo-2 addition of $[\Psi_i(V), \Gamma_i(V)]$ and a particular secret key, instead of repeating an encrypted version of $[\Theta_i(V), \Gamma_i(V)]$ at Block $i-1$. Then, it is not difficult to prove that $I\big(S_{1:L}\, \tilde{Z}_{i+1:L}^n; \tilde{Z}_i^n\big) \leq \delta_n^{(S)}$ (the analog of Lemma 3 with the roles of past and future blocks swapped). Thus, one can minimize the length of this secret key depending on whether $|\mathcal{C}_1^{(n)}| < |\mathcal{C}_2^{(n)}|$ or vice versa.

6. Concluding Remarks

A strongly secure polar coding scheme has been proposed for the WTBC with two legitimate receivers and one eavesdropper. This polar code achieves the best known inner bound on the achievable region of the CI-WTBC model, in which a transmitter wishes to send common information (private and confidential) to both receivers. Since no degradedness assumption is made on the channel, the encoder relies on a chaining construction that induces bidirectional dependencies between adjacent blocks, which must be taken carefully into account in the secrecy analysis.
These bidirectional dependencies involve elements of adjacent blocks whose indices belong to the "low entropy sets given eavesdropper observations". Consequently, in order to prove that the polar coding scheme satisfies the strong secrecy condition, we have introduced a secret key whose length becomes negligible in terms of rate as the number of blocks grows. In the proposed polar coding scheme, this key randomizes the part of these elements of any block that is repeated in the previous (or next) one. In this way, we can analyze the dependencies between all the random variables involved in the secrecy analysis by means of a causal Bayesian graph and apply d-separation to prove that the polar coding scheme induces eavesdropper observations that are asymptotically statistically independent of one another.
Despite the good performance of the proposed polar coding scheme, some issues persist. First, the additional secret transmission (negligible in terms of rate) required to initialize the decoding algorithms at both receivers can be avoided by using an approach similar to that of [29], where an initialization phase generating a secret key is performed without worsening the communication rate. On the other hand, how to replace the random decisions in SC encoding entirely with deterministic ones remains an open problem. Additionally, we conjecture that the secret keys used to prove independence between blocks are not necessary; however, proving this independence without them seems difficult to address at this point.

Author Contributions

Conceptualization, J.d.O.A. and J.R.F.; formal analysis, J.d.O.A.; funding acquisition, J.R.F.; investigation, J.d.O.A. and J.R.F.; methodology, J.d.O.A. and J.R.F.; supervision, J.R.F.; validation, J.R.F.; writing—original draft, J.d.O.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been funded by the AEI of Ministerio de Ciencia, Innovación y Universidades of Spain, TEC2016-75067-C4-2-R, PID2019-104958RB-C41 and RED2018-102668-T with ESF and Dept. d’Empresa i Coneixement de la Generalitat de Catalunya, 2017 SGR 578 AGAUR and 001-P-001644 QuantumCAT with ERDF.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
WTBC: Wiretap broadcast channel
CI-WTBC: Common information over the wiretap broadcast channel
SC: Successive cancellation
DMS: Discrete memoryless source

References

1. Wyner, A. The wire-tap channel. Bell Syst. Tech. J. 1975, 54, 1355–1387.
2. Csiszár, I.; Körner, J. Broadcast channels with confidential messages. IEEE Trans. Inf. Theory 1978, 24, 339–348.
3. Maurer, U.; Wolf, S. Information-theoretic key agreement: From weak to strong secrecy for free. In Advances in Cryptology—EUROCRYPT 2000; Springer: Berlin/Heidelberg, Germany, 2000; pp. 351–368.
4. Arikan, E. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
5. Mahdavifar, H.; Vardy, A. Achieving the secrecy capacity of wiretap channels using polar codes. IEEE Trans. Inf. Theory 2011, 57, 6428–6443.
6. Sasoglu, E.; Vardy, A. A new polar coding scheme for strong security on wiretap channels. In Proceedings of the 2013 IEEE International Symposium on Information Theory (ISIT), Istanbul, Turkey, 7–12 July 2013; pp. 1117–1121.
7. Renes, J.M.; Renner, R.; Sutter, D. Efficient one-way secret-key agreement and private channel coding via polarization. In Advances in Cryptology—ASIACRYPT 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 194–213.
8. Wei, Y.; Ulukus, S. Polar coding for the general wiretap channel with extensions to multiuser scenarios. IEEE J. Sel. Areas Commun. 2016, 34, 278–291.
9. Gulcu, T.C.; Barg, A. Achieving secrecy capacity of the wiretap channel and broadcast channel with a confidential component. IEEE Trans. Inf. Theory 2017, 63, 1311–1324.
10. Chou, R.A.; Bloch, M.R. Polar coding for the broadcast channel with confidential messages: A random binning analogy. IEEE Trans. Inf. Theory 2016, 62, 2410–2429.
11. del Olmo Alòs, J.; Rodríguez Fonollosa, J. Strong secrecy on a class of degraded broadcast channels using polar codes. Entropy 2018, 20, 467.
12. Chou, R.A.; Yener, A. Polar coding for the multiple access wiretap channel via rate-splitting and cooperative jamming. IEEE Trans. Inf. Theory 2018, 64, 7903–7921.
13. Chia, Y.K.; El Gamal, A. Three-receiver broadcast channels with common and confidential messages. IEEE Trans. Inf. Theory 2012, 58, 2748–2765.
14. Hassani, S.; Urbanke, R. Universal polar codes. In Proceedings of the 2014 IEEE International Symposium on Information Theory (ISIT), Honolulu, HI, USA, 29 June–4 July 2014; pp. 1451–1455.
15. Mondelli, M.; Hassani, S.; Sason, I.; Urbanke, R. Achieving Marton's region for broadcast channels using polar codes. IEEE Trans. Inf. Theory 2015, 61, 783–800.
16. Watanabe, S.; Oohama, Y. The optimal use of rate-limited randomness in broadcast channels with confidential messages. IEEE Trans. Inf. Theory 2015, 61, 983–995.
17. Karzand, M.; Telatar, E. Polar codes for q-ary source coding. In Proceedings of the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, 13–18 June 2010; pp. 909–912.
18. Şasoğlu, E.; Telatar, E.; Arikan, E. Polarization for arbitrary discrete memoryless channels. In Proceedings of the 2009 IEEE Information Theory Workshop, Taormina, Italy, 11–16 October 2009; pp. 144–148.
19. Arikan, E. Source polarization. In Proceedings of the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, 13–18 June 2010; pp. 899–903.
20. Tal, I.; Vardy, A. How to construct polar codes. IEEE Trans. Inf. Theory 2013, 59, 6562–6582.
21. Vangala, H.; Viterbo, E.; Hong, Y. A comparative study of polar code constructions for the AWGN channel. arXiv 2015, arXiv:1501.02473.
22. Honda, J.; Yamamoto, H. Polar coding without alphabet extension for asymmetric models. IEEE Trans. Inf. Theory 2013, 59, 7829–7838.
23. Korada, S.B.; Urbanke, R.L. Polar codes are optimal for lossy source coding. IEEE Trans. Inf. Theory 2010, 56, 1751–1768.
24. Chou, R.A.; Bloch, M.R. Using deterministic decisions for low-entropy bits in the encoding and decoding of polar codes. In Proceedings of the 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 29 September–2 October 2015; pp. 1380–1385.
25. Levin, D.A.; Peres, Y.; Wilmer, E.L. Markov Chains and Mixing Times; American Mathematical Society: Providence, RI, USA, 2009.
26. Forney, G.D., Jr. On the role of MMSE estimation in approaching the information-theoretic limits of linear Gaussian channels: Shannon meets Wiener. In Proceedings of the 41st Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 1–3 October 2003.
27. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Cambridge University Press: Cambridge, UK, 2011.
28. Pearl, J. Causality; Cambridge University Press: Cambridge, UK, 2009.
29. Chou, R.A. Explicit codes for the wiretap channel with uncertainty on the eavesdropper's channel. In Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 476–480.
Figure 1. Channel model: CI-WTBC.
Figure 2. Graphical representation of the sets in (10)–(19). The indices inside the soft and dark gray areas form $\mathcal{G}^{(n)}$ and $\mathcal{C}^{(n)}$, respectively. The indices that form $\mathcal{H}_V^{(n)} \cap \big(\mathcal{L}_{V|Y^{(1)}}^{(n)}\big)^{\mathrm{C}}$ are those inside the red curve, while those inside the blue curve form $\mathcal{H}_V^{(n)} \cap \big(\mathcal{L}_{V|Y^{(2)}}^{(n)}\big)^{\mathrm{C}}$.
Figure 3. For Case A, graphical representation of the encoding that leads to the construction of $\tilde{A}_{1:L}[\mathcal{H}_V^{(n)}]$ when $L = 4$. Consider Block 2: $\mathcal{R}_1^{(n)}$, $\mathcal{R}_2^{(n)}$, $\mathcal{R}_{1,2}^{(n)}$, $\mathcal{R}_S^{(n)}$ and $\mathcal{R}_\Lambda^{(n)}$ are the areas filled with yellow squares, blue circles, blue and yellow diamonds, pink crosses, and gray pentagons, respectively, and the set $\mathcal{I}^{(n)}$ is the green filled area. At Block $i \in [1,L]$, $W_i$ is represented by symbols of the same color (e.g., red symbols at Block 2), and $\Theta_i(V)$, $\Psi_i(V)$ and $\Gamma_i(V)$ are represented by squares, circles and triangles, respectively. Furthermore, $\bar{\Theta}_i(V)$ and $\bar{\Gamma}_i(V)$ are denoted by squares and triangles, respectively, with a line through them. At Block $i \in [2,L-1]$, the diamonds denote $\Gamma_{1,i-1}(V) \oplus \bar{\Gamma}_{1,i+1}(V)$. In Block $i \in [1,L]$, $S_i$ is stored into those entries whose indices belong to the green area. For $i \in [1,L-1]$, $\Pi_i(V)$ is denoted by crosses (e.g., purple crosses at Block 2) and is repeated in $\tilde{A}_{i+1}[\mathcal{R}_S^{(n)}]$. The sequence $\Lambda_1(V)$ is represented by gray pentagons and is replicated in all blocks. The sequences $\Upsilon_{(1)}(V)$ and $\Upsilon_{(2)}(V)$ are those entries inside the red curve at Block 1 and the blue curve at Block L, respectively.
Figure 4. For Case B, graphical representation of the encoding that leads to the construction of $\tilde{A}_{1:L}[\mathcal{H}_V^{(n)}]$ when $L = 4$. Consider Block 2: the sets $\mathcal{R}_1^{(n)}$, $\mathcal{R}_1'^{(n)}$, $\mathcal{R}_2^{(n)}$, $\mathcal{R}_2'^{(n)}$, $\mathcal{R}_{1,2}^{(n)}$, $\mathcal{R}_S^{(n)}$ and $\mathcal{R}_\Lambda^{(n)}$ are the areas filled with yellow squares, yellow triangles, blue circles, blue triangles, blue and yellow diamonds, pink crosses, and gray pentagons, respectively, and $\mathcal{I}^{(n)}$ is the green filled area with purple crosses. At Block $i \in [1,L]$, $W_i$ is represented by symbols of the same color (e.g., red symbols at Block 2), and $\Theta_i(V)$, $\Psi_i(V)$ and $\Gamma_i(V)$ are represented by squares, circles, and triangles, respectively. Furthermore, $\bar{\Theta}_i(V)$ and $\bar{\Gamma}_i(V)$ are denoted by squares and triangles, respectively, with a line through them. At Block $i \in [2,L-1]$, the diamonds denote $\Gamma_{1,i-1}(V) \oplus \bar{\Gamma}_{1,i+1}(V)$. In Block $i \in [1,L]$, $S_i$ is stored into those entries whose indices belong to the green area. For $i \in [2,L-1]$, $\Pi_i(V) = S_i$ and, therefore, $S_i$ is repeated entirely into $\tilde{A}_{i+1}[\mathcal{R}_S^{(n)}]$. The sequence $\Lambda_1(V)$ from $S_1$ is represented by gray pentagons and is repeated in all blocks. The sequences $\Upsilon_{(1)}(V)$ and $\Upsilon_{(2)}(V)$ are the entries inside the red curve at Block 1 and the blue curve at Block L, respectively.
Figure 5. For Case C, graphical representation of the encoding that leads to the construction of $\tilde{A}_{1:L}[\mathcal{H}_V^{(n)}]$ when $L = 4$. Consider Block 2: $\mathcal{R}_1^{(n)}$, $\mathcal{R}_2^{(n)}$, $\mathcal{R}_{1,2}^{(n)}$ and $\mathcal{R}_\Lambda^{(n)}$ are the areas filled with yellow squares, blue circles, blue and yellow diamonds, and gray pentagons, respectively, and $\mathcal{I}^{(n)}$ is the green filled area. At Block $i \in [1,L]$, $W_i$ is represented by symbols of the same color (e.g., red symbols at Block 2), and $\Theta_i(V)$, $\Psi_i(V)$ and $\Gamma_i(V)$ are represented by squares, circles, and triangles, respectively. Furthermore, $\bar{\Theta}_i(V)$ and $\bar{\Gamma}_i(V)$ are denoted by squares and triangles, respectively, with a line through them. At Block $i \in [2,L-1]$, the diamonds denote $\Gamma_{1,i-1}(V) \oplus \bar{\Gamma}_{1,i+1}(V)$. For $i \in [1,L]$, $S_i$ is stored into those entries belonging to the green area. The sequence $\Lambda_1(V)$ is represented by gray pentagons and is repeated in all blocks. The sequences $\Upsilon_{(1)}(V)$ and $\Upsilon_{(2)}(V)$ are the entries inside the red curve at Block 1 and the blue curve at Block L, respectively.
Figure 6. For Case D, graphical representation of the encoding that leads to the construction of $\tilde{A}_{1:L}[\mathcal{H}_V^{(n)}]$ when $L = 4$. Consider Block 2: $\mathcal{R}_1^{(n)}$, $\mathcal{R}_2^{(n)}$, $\mathcal{R}_{1,2}^{(n)}$, $\mathcal{R}_{1,2}'^{(n)}$ and $\mathcal{R}_\Lambda^{(n)}$ are the areas filled with yellow squares, blue circles, blue and yellow diamonds, yellow squares overlapped by blue circles, and gray pentagons, respectively, and the set $\mathcal{I}^{(n)}$ is the green filled area. At Block $i \in [1,L]$, $W_i$ is represented by symbols of the same color (e.g., red symbols at Block 2), and $\Theta_i(V)$, $\Psi_i(V)$ and $\Gamma_i(V)$ are represented by squares, circles, and triangles, respectively. Furthermore, $\bar{\Theta}_i(V)$ and $\bar{\Gamma}_i(V)$ are denoted by squares and triangles, respectively, with a line through them. At Block $i \in [2,L-1]$, $\Gamma_{1,i-1}(V) \oplus \bar{\Gamma}_{1,i+1}(V)$ is represented by diamonds, and $\Psi_{2,i-1}(V) \oplus \bar{\Theta}_{2,i+1}(V)$ by squares overlapped by circles. At Block $i \in [1,L]$, $S_i$ is stored into those entries that belong to the green area. The sequence $\Lambda_1(V)$ is denoted by gray pentagons and is repeated in all blocks. The sequences $\Upsilon_{(1)}(V)$ and $\Upsilon_{(2)}(V)$ are the entries inside the red curve at Block 1 and the blue curve at Block L, respectively.
Figure 7. Graphical representation (Bayesian graph) of the dependencies between the random variables involved in the polar coding scheme. Independent random variables are indicated by white nodes, whereas dependent ones are indicated by gray nodes.
