Secrecy Capacity of the Extended Wiretap Channel II with Noise

1 The State Key Laboratory of Integrated Service Networks, Xidian University, Xi’an 710071, China
2 Computer Science and Engineering Department, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Entropy 2016, 18(11), 377; https://doi.org/10.3390/e18110377
Submission received: 28 June 2016 / Revised: 30 September 2016 / Accepted: 14 October 2016 / Published: 31 October 2016
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract: The secrecy capacity of an extended communication model of the wiretap channel II is determined. In this channel model, the source message is encoded into a digital sequence of length $N$ and transmitted to the legitimate receiver through a discrete memoryless channel (DMC). There exists an eavesdropper who is able to observe arbitrary $\mu = N\alpha$ digital symbols from the transmitter through a second DMC, where $0 \leq \alpha \leq 1$ is a constant real number. A pair of an encoder and a decoder is designed so that the receiver can recover the source message with a vanishing decoding error probability, while the eavesdropper is kept ignorant of the message. This communication model includes a variety of wiretap channels as special cases. The coding scheme is based on the one designed by Ozarow and Wyner for the classic wiretap channel II.

1. Introduction

The concept of the wiretap channel was first introduced by Wyner. In his celebrated paper [1], Wyner considered a communication model, where the transmitter communicated with the legitimate receiver through a discrete memoryless channel (DMC). Meanwhile, there existed an eavesdropper observing the digital sequence from the receiver through a second DMC. The goal was to design a pair of an encoder and a decoder such that the receiver was able to recover the source message perfectly, while the eavesdropper was ignorant of the message. That communication model is actually a degraded discrete memoryless wiretap channel.
After that, the communication models of wiretap channels have been studied from various aspects. Csiszár and Körner [2] considered a more general wiretap channel where the wiretap channel did not need to be a degraded version of the main channel, and common messages were also considered there. Other communication models of wiretap channels include wiretap channels with side information [3,4,5,6,7,8], compound wiretap channels [9,10,11,12] and arbitrarily-varying wiretap channels [13].
Ozarow and Wyner studied another kind of wiretap channel called the wiretap channel II [14]. The source message $W$ was encoded into digital bits $X^N$ and transmitted to the legitimate receiver via a binary noiseless channel. An eavesdropper could observe an arbitrary subset of $\mu = N\alpha$ digital bits of $X^N$, where $0 < \alpha < 1$ is a constant real number not dependent on $N$.
Some extensions of the wiretap channel have been studied in recent years.
Cai and Yeung extended the wiretap channel II to the network scenario [15,16]. In that network model, the source message of length $K$ was transmitted to the legitimate users through a network, and the eavesdropper was able to wiretap at most $\mu < K$ edges. Cai and Yeung suggested using a linear “secret-sharing” method to provide security in the network. Instead of sending $K$ message symbols, the source node sent $\mu$ random symbols and $K - \mu$ message symbols. Additionally, the code itself underwent a certain linear transformation. Cai and Yeung gave sufficient conditions for this transformation to guarantee security. They showed that as long as the field size is sufficiently large, a secure transformation exists. Some related work on wiretap networks was given in [17,18,19,20].
An extension of the wiretap channel II was considered recently in [21,22]. The source message was encoded into the digital sequence $X^N$ and transmitted to the legitimate receiver via a DMC. The eavesdropper could observe any $\mu = N\alpha$ digital bits of $X^N$ from the transmitter. A pair of inner-outer bounds was given in [21], while the secrecy capacity with respect to the semantic secrecy criterion was established in [22]. The coding scheme in [21] was based on that developed by Ozarow and Wyner in [14], while the scheme in [22] was based on Wyner’s soft covering lemma. He et al. considered another extension of the wiretap channel II in [23]. In that model, the eavesdropper observed arbitrary $\mu = N\alpha$ digital bits of the main channel output $Y^N$ from the receiver. The capacity with respect to the strong secrecy criterion was established there. The proof of the coding theorem was based on Csiszár’s almost independent coloring scheme. Some other work on the wiretap channel II can be found in [24,25,26].
This paper considers a more general extension of the wiretap channel II, where the eavesdropper observes arbitrary $\mu$ digital bits from the transmitter via a second DMC. The capacity with respect to the weak secrecy criterion is established. The coding scheme is based on that developed by Ozarow and Wyner in [14]. It is obvious that this communication model includes the general discrete memoryless wiretap channels, the wiretap channel II and the communication models discussed in [21,22,23] as special cases. Nevertheless, we should notice that the secrecy criteria considered in [22,23] are strictly stronger than those considered in this paper; see Figure 1.
The remainder of this paper is organized as follows. The notation and the formal problem statement are given in Section 2, and the main result is presented in Section 3. The secrecy capacity is formulated in Theorem 1, whose direct part is proven in Section 4. Section 5 provides a binary example of this model. Section 6 concludes the paper.

2. Notation and Problem Statements

Throughout the paper, $\mathbb{N}$ is the set of positive integers and $[1:N] = \{1, 2, \ldots, N\}$ for any $N \in \mathbb{N}$. $\mathcal{I}_\mu = \mathcal{I}_\mu(N) = \{\mathcal{I} \subseteq [1:N] : |\mathcal{I}| = \mu\}$ represents the collection of subsets of $[1:N]$ of size $\mu$.
Random variables, sample values and alphabets (sets) are denoted by capital letters, lower case letters and calligraphic letters, respectively. A similar convention is applied to random vectors and their sample values. For example, $X^N$ represents a random $N$-vector $(X_1, X_2, \ldots, X_N)$, and $x^N$ is a specific sample value of $X^N$ taken from $\mathcal{X}^N$, the $N$-th Cartesian power of $\mathcal{X}$.
Let ‘?’ be a “dummy” letter. For any index set $\mathcal{I} \subseteq [1:N]$ and finite alphabet $\mathcal{X}$ not containing the “dummy” letter ‘?’, denote:
$$\mathcal{X}_{\mathcal{I}}^N = \{(x_1, x_2, \ldots, x_N) : x_i \in \mathcal{X} \text{ if } i \in \mathcal{I}, \text{ and } x_i = {?} \text{ otherwise}\}.$$
For any given random vector $X^N = (X_1, X_2, \ldots, X_N)$ and index set $\mathcal{I} \subseteq [1:N]$,
  • $X_{\mathcal{I}}^N = (X'_1, X'_2, \ldots, X'_N)$ is a “projection” of $X^N$ onto $\mathcal{I}$ with $X'_n = X_n$ for $n \in \mathcal{I}$, and $X'_n = {?}$ otherwise;
  • $X_{\mathcal{I}} = (X_i, i \in \mathcal{I})$ is a subvector of $X^N$.
The random vector $X_{\mathcal{I}}^N$ takes its values in $\mathcal{X}_{\mathcal{I}}^N$, while the random vector $X_{\mathcal{I}}$ takes its values in $\mathcal{X}^{|\mathcal{I}|}$.
Example 1.
Suppose that $N = 5$, the index set $\mathcal{I} = \{1, 3, 5\}$ and the random vector $X^N = (X_1, X_2, X_3, X_4, X_5)$. Then, $X_{\mathcal{I}}^N = (X_1, ?, X_3, ?, X_5)$ and $X_{\mathcal{I}} = (X_1, X_3, X_5)$.
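For readers who prefer a computational view, the projection and subvector operations of Example 1 can be phrased in a few lines of Python (a sketch of our own; the function names are not from the paper):

```python
def project(x, idx):
    """Projection X_I^N: keep entries at (1-based) positions in idx, replace the rest by '?'."""
    return tuple(x[i - 1] if i in idx else '?' for i in range(1, len(x) + 1))

def subvector(x, idx):
    """Subvector X_I: the entries of x whose (1-based) positions lie in idx."""
    return tuple(x[i - 1] for i in sorted(idx))

x = ('x1', 'x2', 'x3', 'x4', 'x5')
I = {1, 3, 5}
print(project(x, I))    # ('x1', '?', 'x3', '?', 'x5')
print(subvector(x, I))  # ('x1', 'x3', 'x5')
```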
The communication model in this paper, which is shown in Figure 2, is composed of an encoder, the main channel, the wiretap channel and a decoder. The definitions of these parts are from Definition 1 to Definition 4, respectively. The definition of achievability is in Definition 5.
Definition 1.
(Encoder) The source message $W$ is uniformly distributed on the message set $\mathcal{W} = [1:M]$. The (stochastic) encoder is specified by a matrix of conditional probabilities $E_N(x^N|w)$ for the channel input $x^N \in \mathcal{X}^N$ and message $w \in \mathcal{W}$.
Definition 2.
(Main channel) The main channel is a DMC with finite input alphabet $\mathcal{X}$ and finite output alphabet $\mathcal{Y}$, where $? \notin \mathcal{X} \cup \mathcal{Y}$. The transition probability is $Q_M(y|x)$. Let $X^N$ and $Y^N$ denote the input and output of the main channel, respectively. It follows that for any $x^N \in \mathcal{X}^N$ and $y^N \in \mathcal{Y}^N$,
$$\Pr\{X^N = x^N, Y^N = y^N\} = \Pr\{X^N = x^N\}\, Q_M^N(y^N|x^N), \tag{1}$$
where:
$$Q_M^N(y^N|x^N) = \prod_{n=1}^N Q_M(y_n|x_n).$$
Definition 3.
(Wiretap channel) The eavesdropper is able to observe arbitrary $\mu = N\alpha$ digital bits from the transmitter via another DMC, whose transition probability is denoted by $Q_W(z|x)$ with $x \in \mathcal{X}$ and $z \in \mathcal{Z}$. The alphabet $\mathcal{Z}$ does not contain the “dummy” letter ‘?’ either. The input $X^N$ and the output $Z^N$ of the wiretap channel satisfy:
$$\Pr\{X^N = x^N, Z^N = z^N\} = \Pr\{X^N = x^N\}\, Q_W^N(z^N|x^N), \tag{2}$$
with:
$$Q_W^N(z^N|x^N) = \prod_{n=1}^N Q_W(z_n|x_n).$$
Suppose that the eavesdropper observes the digital bits of the wiretap channel output $Z^N$ whose indices lie in the observing index set $\mathcal{I}$; the subsequence obtained by the eavesdropper can then be denoted by $\tilde{Z}^N = Z_{\mathcal{I}}^N$. Therefore, the information on the source message (per bit) exposed to the eavesdropper is denoted by:
$$\Delta = \max_{\mathcal{I} \in \mathcal{I}_\mu} \frac{1}{N} I(W; Z_{\mathcal{I}}^N). \tag{3}$$
Remark 1.
Clearly, for a given wiretap channel output $\tilde{Z}^N$, one can easily determine which subsequence of $Z^N$ is observed by the eavesdropper. More precisely, $\tilde{Z}^N = Z_{\mathcal{I}}^N$ with $\mathcal{I} = \mathcal{I}(\tilde{Z}^N) = \{i : \tilde{Z}_i \neq {?}\}$.
Definition 4.
(Decoder) The decoder is a mapping $\phi: \mathcal{Y}^N \to \mathcal{W}$, with $Y^N$ as the input and $\hat{W} = \phi(Y^N)$ as the output. The average decoding error probability is defined as $P_e = \Pr\{W \neq \hat{W}\}$.
Definition 5.
(Achievability) A non-negative real number $R$ is said to be achievable if, for any $\epsilon > 0$, there exists an integer $N_0$ such that one can construct an $(N, M)$ code satisfying:
$$\frac{1}{N} \log M \geq R - \epsilon,$$
$$P_e \leq \epsilon$$
and:
$$\Delta < \epsilon$$
for all $N > N_0$. The capacity, or the maximal achievable transmission rate, of the communication model is denoted by $C_s$.
Remark 2.
Notice that the capacities defined in this paper are under the condition of a negligible average decoding error probability, but one can construct coding schemes with a negligible maximal decoding error probability through standard techniques. See Appendix A for details.

3. Main Result

Theorem 1.
The capacity of the communication model described in Figure 2 is:
$$C_s = \max_{P_U \cdot P_{X|U}:\; U \to X \to (Y,Z)} [I(U;Y) - \alpha I(U;Z)],$$
where $U$ is an auxiliary random variable distributed on $\mathcal{U}$ with $|\mathcal{U}| \leq |\mathcal{X}|$.
The converse half of Theorem 1 can be established quite similarly to the method of establishing the converse of Theorem 2 in [22], and hence, we omit it here. The direct part of Theorem 1 is given in Section 4.
Corollary 1.
When $\alpha = 1$, the communication model reduces to a general discrete memoryless wiretap channel, whose capacity is formulated by:
$$C_s = \max_{P_U \cdot P_{X|U}:\; U \to X \to (Y,Z)} [I(U;Y) - I(U;Z)].$$
The result coincides with that of Corollary 2 in [2]. In particular, if $X \to Y \to Z$ forms a Markov chain, the capacity further reduces to:
$$C_s = \max_{P_U \cdot P_{X|U}:\; U \to X \to Y \to Z} [I(U;Y) - I(U;Z)] \overset{(a)}{=} \max_{P_U \cdot P_{X|U}:\; U \to X \to Y \to Z} I(U;Y|Z) \overset{(b)}{=} \max_{P_X:\; X \to Y \to Z} I(X;Y|Z) \overset{(c)}{=} \max_{P_X:\; X \to Y \to Z} [I(X;Y) - I(X;Z)],$$
where (a) follows because $U \to Y \to Z$ forms a Markov chain, (b) follows because the Markov chain $U \to X \to Y$ given $Z$ implies that $I(U;Y|Z) \leq I(X;Y|Z)$ with equality when $U = X$, and (c) follows because $X \to Y \to Z$ forms a Markov chain.
Corollary 2.
When $Y = Z$, i.e., the eavesdropper observes $\mu = N\alpha$ digital bits from the receiver, the communication model reduces to that studied in [23]. In this case, the capacity is formulated by:
$$C_s = \max_{P_U \cdot P_{X|U}:\; U \to X \to Y} (1 - \alpha) I(U;Y) = \max_{P_X} (1 - \alpha) I(X;Y),$$
where the last equality follows because the Markov chain $U \to X \to Y$ implies that $I(U;Y) \leq I(X;Y)$, with equality when $U = X$.
Notice that [23] considered the capacity with respect to the strong secrecy criterion, while the current paper considers that with the weak secrecy criterion. Therefore, Theorem 1 in [23] and Corollary 2 in the current paper indicate that the capacity with respect to the strong secrecy criterion is identical to that with the weak secrecy criterion.
Corollary 3.
When $Z = X$, i.e., the eavesdropper observes $\mu = N\alpha$ digital bits from the transmitter without noise, the communication model reduces to that studied in [21,22]. In this case, the capacity is formulated by:
$$C_s = \max_{P_U \cdot P_{X|U}:\; U \to X \to Y} [I(U;Y) - \alpha I(U;X)].$$
The channel model described in Corollary 3 was first studied in [21], where a pair of inner and outer bounds was given. The secrecy capacity with respect to the semantic secrecy criterion, which is identical to the capacity with the weak criterion given in Corollary 3, was established in [22].

4. Direct Part of Theorem 1

This section proves the direct part of Theorem 1. Let an arbitrary quadruple of random variables $(U^*, X^*, Y^*, Z^*)$ be given, which satisfies:
$$\Pr\{U^* = u, X^* = x, Y^* = y, Z^* = z\} = P_{U^*}(u)\, P_{X^*|U^*}(x|u)\, Q_M(y|x)\, Q_W(z|x) \tag{4}$$
for $(u, x, y, z) \in \mathcal{U} \times \mathcal{X} \times \mathcal{Y} \times \mathcal{Z}$. The goal of this section is to establish that every real number $R$ satisfying $0 \leq R < I(U^*;Y^*) - \alpha I(U^*;Z^*)$ is achievable. In fact, it suffices to prove the achievability of every $R$ satisfying:
$$0 \leq R < I(X^*;Y^*) - \alpha I(X^*;Z^*). \tag{5}$$
To show this, suppose that every transmission rate $R$ satisfying Equation (5) is achievable. For any random variable $U^*$ satisfying Equation (4), the encoder could deliberately increase the noise of the communication system by inserting a virtual noisy channel $Q_V$ at the transmitting port of the system, such that:
$$Q_V(x|u) = P_{X^*|U^*}(x|u).$$
This creates the virtual communication system depicted in Figure 3, where the transition matrix of the main channel is:
$$\tilde{Q}_M(y|u) = P_{Y^*|U^*}(y|u) = \sum_{x \in \mathcal{X}} Q_V(x|u)\, Q_M(y|x)$$
and the transition matrix of the wiretap channel is:
$$\tilde{Q}_W(z|u) = P_{Z^*|U^*}(z|u) = \sum_{x \in \mathcal{X}} Q_V(x|u)\, Q_W(z|x).$$
It is clear that every real number $0 \leq R < I(U^*;Y^*) - \alpha I(U^*;Z^*)$ is achievable for the virtual system and hence is achievable for the original system.
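As a quick sanity check of this channel-prefixing argument, the composed transition matrices are simply matrix products. The following numpy sketch (our own illustration, with made-up channel parameters) builds $\tilde{Q}_M = Q_V Q_M$ and $\tilde{Q}_W = Q_V Q_W$:

```python
import numpy as np

# Rows are indexed by the input letter, columns by the output letter.
Q_V = np.array([[0.9, 0.1],    # virtual channel P_{X*|U*}: u -> x
                [0.2, 0.8]])
Q_M = np.array([[0.95, 0.05],  # main channel Q_M: x -> y (a BSC(0.05))
                [0.05, 0.95]])
Q_W = np.array([[0.8, 0.2],    # wiretap channel Q_W: x -> z (a BSC(0.2))
                [0.2, 0.8]])

# Composition: Q~(y|u) = sum_x Q_V(x|u) Q_M(y|x), i.e., a matrix product.
Q_M_tilde = Q_V @ Q_M
Q_W_tilde = Q_V @ Q_W

assert np.allclose(Q_M_tilde.sum(axis=1), 1.0)  # rows remain probability distributions
print(Q_M_tilde)
print(Q_W_tilde)
```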
In the remainder of this section, it will be shown that every transmission rate $R$ satisfying Equation (5) is achievable. To be precise, for any $\epsilon$ and $\tau$ satisfying $0 < \epsilon, \tau < I(X^*;Y^*) - \alpha I(X^*;Z^*)$, we need to establish that there exists an $(N, M)$ code such that:
$$\frac{1}{N}\log M \geq I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau > 0, \tag{6}$$
$$\Delta = \max_{\mathcal{I} \in \mathcal{I}_\mu} \frac{1}{N} I(W; Z_{\mathcal{I}}^N) < \epsilon \tag{7}$$
and:
$$P_e < \epsilon \tag{8}$$
when $N$ is sufficiently large.
The coding scheme is based on the scheme developed by Ozarow and Wyner [14] for the classic wiretap channel II. In that channel model, for each wiretap channel output $z^N$, there exists a collection of codewords that are “consistent” with it, namely the codewords that could produce the wiretap channel output $z^N$ for some observing index set $\mathcal{I}$. Ozarow and Wyner constructed a secure partition such that, for every wiretap channel output $z^N$, the number of “consistent” codewords in each sub-code is less than a constant integer. However, it is not feasible to consider the “consistent” codewords in our model, where the wiretap channel may be noisy. Instead, we construct a secure partition such that the number of codewords jointly typical with the wiretap channel output $z^N$ in each sub-code is less than a constant integer.
The proof is organized as follows. Firstly, Section 4.1 gives some definitions on the typicality of μ-subsequences and lists some basic results. Then, the construction of the encoder and decoder is introduced in Section 4.2. The key point is to generate a “good” codebook with the desired partition to ensure secrecy. Thirdly, as the main part of the proof, Section 4.3 shows the existence of a “good” codebook with the desired partition. For any ϵ > 0 and τ > 0 , the proof that the coding scheme in Section 4.2 satisfies the requirements of the transmission rate, reliability and security, namely Formulas (6) to (8), is finally detailed in Section 4.4.

4.1. Typicality

The definitions of letter typicality on a given index set follow from those briefly introduced in [23]. We list them in this subsection for the sake of completeness.
Firstly, the original definitions of letter typicality are given as follows. Please refer to Chapter 1 in [27] for more details.
For any $\delta \geq 0$, the $\delta$-letter typical set $T_\delta^N(P_X)$ with respect to the probability distribution $P_X$ on $\mathcal{X}$ is the set of $x^N \in \mathcal{X}^N$ satisfying:
$$\left|\frac{1}{N} N(a|x^N) - P_X(a)\right| \leq \delta P_X(a) \quad \text{for all } a \in \mathcal{X},$$
where $N(a|x^N)$ is the number of positions of $x^N$ having the letter $a \in \mathcal{X}$.
Similarly, let $N(a, b|x^N, y^N)$ be the number of times that the pair $(a, b)$ occurs in the sequence of pairs $(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)$. The jointly typical set $T_\delta^N(P_{XY})$, with respect to the joint probability distribution $P_{XY}$ on $\mathcal{X} \times \mathcal{Y}$, is the set of sequence pairs $(x^N, y^N) \in \mathcal{X}^N \times \mathcal{Y}^N$ satisfying:
$$\left|\frac{1}{N} N(a, b|x^N, y^N) - P_{XY}(a, b)\right| \leq \delta P_{XY}(a, b)$$
for all $(a, b) \in \mathcal{X} \times \mathcal{Y}$.
For any given $x^N \in \mathcal{X}^N$, the conditionally typical set of $x^N$ with respect to the joint mass function $P_{XY}$ is defined as:
$$T_\delta^N(P_{XY}|x^N) = \{y^N \in \mathcal{Y}^N : (x^N, y^N) \in T_\delta^N(P_{XY})\}.$$
The definitions of the typicality of $\mu$-subsequences on an index set $\mathcal{I} \in \mathcal{I}_\mu$ and some basic results are given as follows.
Definition 6.
Given a random variable $X$ on $\mathcal{X}$, the letter typical set $\tilde{T}_{\mathcal{I}}^N[X]_\delta$ with respect to $X$ on the index set $\mathcal{I} \in \mathcal{I}_\mu$ is the set of $x^N \in \mathcal{X}_{\mathcal{I}}^N$ such that $x_{\mathcal{I}} \in T_\delta^\mu(P_X)$, where $x_{\mathcal{I}}$ is the $\mu$-subvector of $x^N$ and $P_X$ is the probability mass function of the random variable $X$.
Example 2.
Let $X$ be a random variable on the binary set $\mathcal{X} = \{0, 1\}$ such that $P_X(0) = 1 - P_X(1) = \frac{1}{5}$. Set $N = 10$, $\delta = 0.1$ and $\mathcal{I} = [1:5]$. Then:
  • the sequence $x^N = 0111100000$ is out of $T_\delta^N(P_X)$, while $x_{\mathcal{I}}^N = 01111?????$ belongs to $\tilde{T}_{\mathcal{I}}^N[X]_\delta$;
  • the sequence $x^N = 1111101110$ belongs to $T_\delta^N(P_X)$, while $x_{\mathcal{I}}^N = 11111?????$ is out of $\tilde{T}_{\mathcal{I}}^N[X]_\delta$;
  • the sequence $x^N = 0111111110$ belongs to $T_\delta^N(P_X)$, and $x_{\mathcal{I}}^N = 01111?????$ belongs to $\tilde{T}_{\mathcal{I}}^N[X]_\delta$.
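The three bullets can be checked mechanically. Here is a minimal Python sketch of the $\delta$-letter typicality test applied to Example 2 (our own illustration, not part of the paper):

```python
from fractions import Fraction

def is_typical(x, p, delta):
    """delta-letter typicality: |N(a|x)/N - p(a)| <= delta * p(a) for every letter a."""
    n = len(x)
    return all(abs(Fraction(x.count(a), n) - pa) <= delta * pa for a, pa in p.items())

p = {'0': Fraction(1, 5), '1': Fraction(4, 5)}  # P_X(0) = 1/5
delta = Fraction(1, 10)                          # delta = 0.1

for x in ['0111100000', '1111101110', '0111111110']:
    # Full sequence (N = 10) versus the subsequence on I = [1:5].
    print(x, is_typical(x, p, delta), is_typical(x[:5], p, delta))
# Prints False/True, True/False, True/True, matching the three bullets.
```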
Remark 3.
(Theorem 1.1 in [27]) Suppose that $X_1, X_2, \ldots, X_N$ are $N$ i.i.d. random variables with the same generic probability distribution as that of $X$. For any given $\mathcal{I} \in \mathcal{I}_\mu$ and $\delta < m_X$:
1. if $x^N \in \tilde{T}_{\mathcal{I}}^N[X]_\delta$, then
$$2^{-\mu(1+\delta)H(X)} \leq \Pr\{X_{\mathcal{I}}^N = x^N\} \leq 2^{-\mu(1-\delta)H(X)};$$
2. $\Pr\{X_{\mathcal{I}}^N \in \tilde{T}_{\mathcal{I}}^N[X]_\delta\} > 1 - \tilde{\epsilon}_1$,
where:
$$\tilde{\epsilon}_1 = \tilde{\epsilon}_1(\mu, \delta, m_X) = 2|\mathcal{X}|\, e^{-\mu \delta^2 m_X}$$
and $m_X = \min_{x \in \mathcal{X}:\, P_X(x) > 0} P_X(x)$.
Definition 7.
Let $(X, Y)$ be a pair of random variables with the joint probability mass function $P_{XY}$ on $\mathcal{X} \times \mathcal{Y}$. The jointly typical set $\tilde{T}_{\mathcal{I}}^N[XY]_\delta$ with respect to $(X, Y)$ on the index set $\mathcal{I} \in \mathcal{I}_\mu$ is the set of $(x^N, y^N) \in \mathcal{X}_{\mathcal{I}}^N \times \mathcal{Y}_{\mathcal{I}}^N$ satisfying $(x_{\mathcal{I}}, y_{\mathcal{I}}) \in T_\delta^\mu(P_{XY})$, where $x_{\mathcal{I}}$ and $y_{\mathcal{I}}$ are the subvectors of $x^N$ and $y^N$, respectively.
Definition 8.
For any given $x^N \in \tilde{T}_{\mathcal{I}}^N[X]_\delta$ with $\mathcal{I} \in \mathcal{I}_\mu$, the conditionally typical set of $x^N$ on the index set $\mathcal{I}$ is defined as:
$$\tilde{T}_{\mathcal{I}}^N[XY|x^N]_\delta = \{y^N : (x^N, y^N) \in \tilde{T}_{\mathcal{I}}^N[XY]_\delta\}.$$
Remark 4.
Let $(X^N, Y^N)$ be a pair of random sequences with the conditional mass function:
$$\Pr\{Y^N = y^N | X^N = x^N\} = \prod_{i=1}^N P_{Y|X}(y_i|x_i) \tag{9}$$
for $x^N \in \mathcal{X}^N$ and $y^N \in \mathcal{Y}^N$. Then, for any index set $\mathcal{I} \in \mathcal{I}_\mu$, $x^N \in \tilde{T}_{\mathcal{I}}^N[X]_\delta$ and $y^N \in \tilde{T}_{\mathcal{I}}^N[XY|x^N]_\delta$, it follows that:
$$2^{-\mu(1+\delta)H(Y|X)} \leq \Pr\{Y_{\mathcal{I}}^N = y^N | X_{\mathcal{I}}^N = x^N\} \leq 2^{-\mu(1-\delta)H(Y|X)}.$$
Corollary 4.
For any $\mathcal{I} \in \mathcal{I}_\mu$ and $x^N \in \tilde{T}_{\mathcal{I}}^N[X]_\delta$, it follows that $|\tilde{T}_{\mathcal{I}}^N[XY|x^N]_\delta| < 2^{N\alpha(1+\delta)H(Y|X)}$.
Corollary 5.
Let $Y_1, Y_2, \ldots, Y_N$ be $N$ i.i.d. random variables with the same probability distribution as that of $Y$. For any $\mathcal{I} \in \mathcal{I}_\mu$ and $x^N \in \tilde{T}_{\mathcal{I}}^N[X]_\delta$, it follows that:
$$\Pr\{Y^N \in \tilde{T}_{\mathcal{I}}^N[XY|x^N]_\delta\} < 2^{-N[\alpha I(X;Y) - 2\delta H(Y)]}.$$
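Corollary 5 is obtained by combining Corollary 4 with Remark 3 applied to $Y$ (note that joint typicality implies marginal typicality, so every $y^N \in \tilde{T}_{\mathcal{I}}^N[XY|x^N]_\delta$ also lies in $\tilde{T}_{\mathcal{I}}^N[Y]_\delta$). The following short derivation is our reconstruction of the omitted step; it uses $\mu = N\alpha$, $H(Y|X) \leq H(Y)$ and $\alpha \leq 1$:
$$\Pr\{Y^N \in \tilde{T}_{\mathcal{I}}^N[XY|x^N]_\delta\} \leq |\tilde{T}_{\mathcal{I}}^N[XY|x^N]_\delta| \cdot 2^{-\mu(1-\delta)H(Y)} < 2^{N\alpha(1+\delta)H(Y|X)} \cdot 2^{-N\alpha(1-\delta)H(Y)} = 2^{-N\alpha[I(X;Y) - \delta(H(Y) + H(Y|X))]} \leq 2^{-N[\alpha I(X;Y) - 2\delta H(Y)]}.$$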
Remark 5.
(Theorem 1.2 in [27]) Let $(X^N, Y^N)$ be a pair of random sequences satisfying (9). For any index set $\mathcal{I} \in \mathcal{I}_\mu$, $0 \leq \delta < m_{XY}$ and $x^N \in \tilde{T}_{\mathcal{I}}^N[X]_\delta$, it is satisfied that:
$$\Pr\{Y_{\mathcal{I}}^N \in \tilde{T}_{\mathcal{I}}^N[XY|x^N]_{2\delta} \mid X_{\mathcal{I}}^N = x^N\} > 1 - \tilde{\epsilon}_2,$$
where:
$$\tilde{\epsilon}_2 = \tilde{\epsilon}_2(\mu, \delta, m_{XY}) = 2|\mathcal{X}||\mathcal{Y}|\, e^{-\frac{\mu m_{XY} \delta^2}{1+2\delta}}$$
and $m_{XY} = \min_{(x,y) \in \mathcal{X} \times \mathcal{Y}:\, P_{XY}(x,y) > 0} P_{XY}(x,y)$.

4.2. Code Construction

Suppose that the triple of random variables ( X * , Y * , Z * ) is given and fixed.
Codeword generation: The random codebook $\mathcal{C} = \{X^{*N}(l)\}_{l=1}^M$ is an ordered set of $M$ i.i.d. random vectors with mass function $\Pr\{X^{*N}(l) = x^N\} = \prod_{i=1}^N P_{X^*}(x_i)$, where:
$$M = 2^{N[I(X^*;Y^*) - \tau - \tau_d]} \tag{10}$$
for some $\tau_d > 0$.
Codeword partition: Given a specific sample value $\mathcal{C} = \{x^N(l)\}_{l=1}^M$ of the $M$ randomly-generated codewords, let $W$ be a random variable uniformly distributed on $[1:M]$ and $X^N(\mathcal{C}) = x^N(W)$ be the random sequence uniformly distributed on $\mathcal{C}$. Set $R = I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau$, and partition $\mathcal{C}$ into:
$$M' = 2^{NR} = 2^{N[I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau]} \tag{11}$$
subsets $\{\mathcal{C}_m\}_{m=1}^{M'}$ with the same cardinality. Let $\tilde{W}$ be the index of the sub-code containing $X^N(\mathcal{C})$, i.e., $X^N(\mathcal{C}) \in \mathcal{C}_{\tilde{W}}$. We need to find a partition of the codebook $\mathcal{C}$ satisfying:
$$\max_{\mathcal{I} \in \mathcal{I}_\mu} \frac{1}{N} I(\tilde{W}; Z_{\mathcal{I}}^N(\mathcal{C})) < \epsilon, \tag{12}$$
where $Z^N(\mathcal{C})$ is the output of the wiretap channel when taking $X^N(\mathcal{C})$ as the input, i.e.,
$$\Pr\{X^N(\mathcal{C}) = x^N, Z^N(\mathcal{C}) = z^N\} = \Pr\{X^N(\mathcal{C}) = x^N\} \prod_{i=1}^N Q_W(z_i|x_i)$$
for $x^N \in \mathcal{X}^N$ and $z^N \in \mathcal{Z}^N$. If there is no such desired partition, declare an encoding error.
Remark 6.
We call the codebook $\mathcal{C}$ an ordered set because each codeword in the codebook is treated as distinct, even if its value may coincide with that of another codeword.
Encoder: Suppose that a desired partition $\{\mathcal{C}_m\}_{m=1}^{M'}$ on a specific codebook $\mathcal{C}$ is given. When the source message $W$ is to be transmitted, the encoder chooses a codeword uniformly at random from the sub-code $\mathcal{C}_W$ and transmits it over the main channel.
In this encoding scheme, each message is related to a unique sub-code, which is sometimes called a bin. Therefore, we call this kind of coding scheme a random binning scheme.
Remark 7.
For a given codebook $\mathcal{C}$ and a desired partition applied to the encoder, let $X^N$ and $Z^N$ be the input and output of the wiretap channel, respectively, when the source message $W$ is transmitted. It is clear that $(W, X^N, Z^N)$ and $(\tilde{W}, X^N(\mathcal{C}), Z^N(\mathcal{C}))$ share the same joint distribution.
Decoder: Suppose that the output of the main channel is $y^N$. The decoder tries to find a unique codeword $x^N(\hat{w}, \hat{j})$ such that $(x^N(\hat{w}, \hat{j}), y^N) \in T_\delta^N(P_{X^*Y^*})$, and declares $\hat{w}$ as the estimate of the transmitted source message. If there is no such codeword, or if there is more than one, the decoder chooses a constant $w_0$ as $\hat{w}$.
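To make the encoder/decoder pair concrete, here is a toy Python sketch of the random binning scheme (our own illustration with tiny, made-up parameters; minimum-distance decoding stands in for joint typicality decoding, and a real construction would also verify the “good codebook” and secure-partition conditions of Section 4.3):

```python
import random

random.seed(1)
N, M_bins, r = 16, 4, 4  # toy sizes: M' = 4 bins, r = M/M' codewords per bin
p = 0.05                 # main-channel crossover probability (BSC)

# Codebook: M = M_bins * r i.i.d. uniform codewords; equipartition into bins.
codebook = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(M_bins * r)]
bins = [codebook[m * r:(m + 1) * r] for m in range(M_bins)]

def encode(w):
    """Send message w via a codeword drawn uniformly from sub-code C_w."""
    return random.choice(bins[w])

def channel(x):
    """BSC(p): flip each bit independently with probability p."""
    return tuple(b ^ (random.random() < p) for b in x)

def decode(y):
    """Stand-in for joint typicality decoding: return the bin index of the
    codeword closest to y in Hamming distance (a tie would map to w_0)."""
    d, (w, _) = min((sum(a != b for a, b in zip(x, y)), (m, x))
                    for m, bin_ in enumerate(bins) for x in bin_)
    return w

w = 2
print(decode(channel(encode(w))))  # prints 2 with high probability for this toy code
```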

4.3. Proof of the Existence of a “Good” Codebook with a Secure Partition

This subsection proves the existence of a class of “good” codebooks, on which there exist secure partitions such that Equation (12) holds, when $N$ is sufficiently large and $\delta$ is sufficiently small. Moreover, such “good” codebooks can be randomly generated with probability approaching 1 as $N \to \infty$. The notation of Section 4.2 will continue to be used in this subsection.
A formal definition of “good” codebooks is given by the following.
Definition 9.
A codebook $\mathcal{C}$ is called “good” if it satisfies:
$$|\tilde{T}(\mathcal{C}, \mathcal{I})| > (1 - 2\epsilon_1) M \quad \text{for all } \mathcal{I} \in \mathcal{I}_\mu \tag{13}$$
and:
$$|\tilde{T}(\mathcal{C}, z^N, \mathcal{I})| < 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})} \quad \text{for all } \mathcal{I} \in \mathcal{I}_\mu \text{ and } z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}, \tag{14}$$
where:
$$\tilde{T}(\mathcal{C}, \mathcal{I}) = \{x^N \in \mathcal{C} : x_{\mathcal{I}}^N \in \tilde{T}_{\mathcal{I}}^N[X^*]_\delta\}$$
is the set of typical codewords on the index set $\mathcal{I}$,
$$\tilde{T}(\mathcal{C}, z^N, \mathcal{I}) = \{x^N \in \mathcal{C} : x_{\mathcal{I}}^N \in \tilde{T}_{\mathcal{I}}^N[X^*Z^*|z^N]_{2\delta}\}$$
is the set of codewords jointly typical with $z^N$ on the index set $\mathcal{I}$, and:
$$\epsilon_1 = \epsilon_1(\mu, \delta, m_{X^*}) = 2|\mathcal{X}|\, e^{-\mu \delta^2 m_{X^*}}. \tag{15}$$
The main results of this subsection are summarized as the following three lemmas. Lemma 1 claims the existence of “good” codebooks; Lemma 2 constructs a special class of partitions on the “good” codebooks; and Lemma 3 proves that the partitions constructed by Lemma 2 are secure.
Lemma 1.
Let $\mathcal{C}$ be the random codebook generated by the scheme introduced in Section 4.2. If $\delta < m_{X^*}$, the probability of $\mathcal{C}$ being “good” is bounded by:
$$\Pr\{\mathcal{C} \text{ is good}\} > 1 - \epsilon_3 - \epsilon_4,$$
where:
$$\epsilon_3 = \exp_2[N - (2\log e)\,\epsilon_1 M], \tag{16}$$
$$\epsilon_4 = \exp_2\Big[2N + \Big(2^{-N(\tau_d - 4\delta H(X^*))}\log e - 2^{-\frac{1}{2}N\tau_d}\Big) M'\Big] \tag{17}$$
and $\epsilon_1$ is given by Equation (15).
The proof of Lemma 1 is detailed in Appendix B.
Remark 8.
It can be verified that $\epsilon_3, \epsilon_4 \to 0$ as $N \to \infty$ if $\delta$ is sufficiently small. Therefore, one can obtain a “good” codebook with probability approaching 1.
Lemma 2.
For any given codebook $\mathcal{C}$ satisfying Equation (14) with $M = 2^{N(I(X^*;Y^*) - \tau - \tau_d)}$ codewords, there exists a secure equipartition $\{\mathcal{C}_m\}_{m=1}^{M'}$ on it, such that:
$$|\tilde{T}(\mathcal{C}, z^N, \mathcal{I}) \cap \mathcal{C}_m| < L \tag{18}$$
for all $1 \leq m \leq M'$, $\mathcal{I} \in \mathcal{I}_\mu$ and $z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}$, provided that $L > 2(R + 5)/\tau_d$, where:
$$R = \frac{1}{N}\log M' = I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau.$$
The proof of Lemma 2 is discussed in Appendix C.
Lemma 3.
For any $0 < \delta < m_{X^*Z^*}$ and secure partition $\{\mathcal{C}_m\}_{m=1}^{M'}$ on a “good” codebook $\mathcal{C}$, it follows that:
$$\max_{\mathcal{I} \in \mathcal{I}_\mu} \frac{1}{N} I(\tilde{W}; Z_{\mathcal{I}}^N(\mathcal{C})) < \tau_d + \frac{2 + \log L}{N} + (4\delta + 2\epsilon_5)\log|\mathcal{Z}| + \epsilon_5 \log|\mathcal{X}|, \tag{19}$$
where:
$$\epsilon_5 = 2\epsilon_1 + \epsilon_2 \tag{20}$$
and:
$$\epsilon_2 = \epsilon_2(\mu, \delta, m_{X^*Z^*}) = 2|\mathcal{X}||\mathcal{Z}|\, e^{-\frac{\mu m_{X^*Z^*} \delta^2}{1+2\delta}}.$$
Remark 9.
Formula (12) is finally established from the fact that the right-hand side of Equation (19) converges to zero as $\tau_d \to 0$, $\delta \to 0$ and $N \to \infty$.
Proof of Lemma 3.
By an argument similar to that establishing Equation (22) in [2], for every $\mathcal{I} \in \mathcal{I}_\mu$, it follows that:
$$\begin{aligned} H(\tilde{W}|Z_{\mathcal{I}}^N(\mathcal{C})) &= H(\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C})) - H(Z_{\mathcal{I}}^N(\mathcal{C})) \\ &= H(\tilde{W}, X^N(\mathcal{C}), Z_{\mathcal{I}}^N(\mathcal{C})) - H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C})) - H(Z_{\mathcal{I}}^N(\mathcal{C})) \\ &= H(\tilde{W}, X^N(\mathcal{C})) + H(Z_{\mathcal{I}}^N(\mathcal{C})|\tilde{W}, X^N(\mathcal{C})) - H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C})) - H(Z_{\mathcal{I}}^N(\mathcal{C})) \\ &= H(\tilde{W}, X^N(\mathcal{C})) + H(Z_{\mathcal{I}}^N(\mathcal{C})|X^N(\mathcal{C})) - H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C})) - H(Z_{\mathcal{I}}^N(\mathcal{C})), \end{aligned}$$
where the last equality follows because $\tilde{W} \to X^N(\mathcal{C}) \to Z_{\mathcal{I}}^N(\mathcal{C})$ forms a Markov chain. Therefore:
$$I(\tilde{W}; Z_{\mathcal{I}}^N(\mathcal{C})) = H(\tilde{W}) - H(\tilde{W}|Z_{\mathcal{I}}^N(\mathcal{C})) = H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C})) + H(Z_{\mathcal{I}}^N(\mathcal{C})) - H(X^N(\mathcal{C})|\tilde{W}) - H(Z_{\mathcal{I}}^N(\mathcal{C})|X^N(\mathcal{C})). \tag{21}$$
The terms on the right-hand side of Equation (21) are bounded as follows.
  • Upper bound on $H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C}))$. Since $X^N(\mathcal{C})$ is uniformly distributed on the “good” codebook $\mathcal{C}$ (cf. Equation (13)), it follows that:
    $$\Pr\{X_{\mathcal{I}}^N(\mathcal{C}) \in \tilde{T}_{\mathcal{I}}^N[X^*]_\delta\} > 1 - 2\epsilon_1$$
    for every $\mathcal{I} \in \mathcal{I}_\mu$. Combining Remark 5 yields:
    $$\Pr\{(X_{\mathcal{I}}^N(\mathcal{C}), Z_{\mathcal{I}}^N(\mathcal{C})) \in \tilde{T}_{\mathcal{I}}^N[X^*Z^*]_{2\delta}\} > 1 - \epsilon_5$$
    for every $\mathcal{I} \in \mathcal{I}_\mu$, where $\epsilon_5$ is given by Equation (20). Denote:
    $$U_{\mathcal{I}} = \begin{cases} 0 & \text{if } (X_{\mathcal{I}}^N(\mathcal{C}), Z_{\mathcal{I}}^N(\mathcal{C})) \in \tilde{T}_{\mathcal{I}}^N[X^*Z^*]_{2\delta}, \\ 1 & \text{otherwise}. \end{cases}$$
    It follows that:
    $$\Pr\{U_{\mathcal{I}} = 1\} < \epsilon_5 \tag{22}$$
    for every $\mathcal{I} \in \mathcal{I}_\mu$. Moreover, on account of property (18), we also have:
    $$H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C}), U_{\mathcal{I}} = 0) \leq \log L. \tag{23}$$
    Therefore:
    $$\begin{aligned} H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C})) &\leq H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C}), U_{\mathcal{I}}) + H(U_{\mathcal{I}}) \\ &\leq H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C}), U_{\mathcal{I}} = 0) + H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C}), U_{\mathcal{I}} = 1)\Pr\{U_{\mathcal{I}} = 1\} + H(U_{\mathcal{I}}) \\ &\overset{(a)}{\leq} 1 + \log L + H(X^N(\mathcal{C})|\tilde{W}, Z_{\mathcal{I}}^N(\mathcal{C}), U_{\mathcal{I}} = 1)\Pr\{U_{\mathcal{I}} = 1\} \\ &\overset{(b)}{\leq} 1 + \log L + N\epsilon_5 \log|\mathcal{X}|, \end{aligned} \tag{24}$$
    where (a) follows from Equation (23) and the fact that $U_{\mathcal{I}}$ is binary, and (b) follows from Equation (22).
  • The value of $H(Z_{\mathcal{I}}^N(\mathcal{C}))$ is upper bounded as:
    $$\begin{aligned} H(Z_{\mathcal{I}}^N(\mathcal{C})) &\leq H(Z_{\mathcal{I}}^N(\mathcal{C})|U_{\mathcal{I}}) + H(U_{\mathcal{I}}) \\ &\leq H(Z_{\mathcal{I}}^N(\mathcal{C})|U_{\mathcal{I}} = 0) + H(Z_{\mathcal{I}}^N(\mathcal{C})|U_{\mathcal{I}} = 1)\Pr\{U_{\mathcal{I}} = 1\} + H(U_{\mathcal{I}}) \\ &\overset{(a)}{\leq} 1 + N\alpha(1 + 2\delta)H(Z^*) + H(Z_{\mathcal{I}}^N(\mathcal{C})|U_{\mathcal{I}} = 1)\Pr\{U_{\mathcal{I}} = 1\} \\ &\overset{(b)}{\leq} 1 + N\alpha(1 + 2\delta)H(Z^*) + N\epsilon_5 \log|\mathcal{Z}|, \end{aligned} \tag{25}$$
    where (a) follows from the facts that $U_{\mathcal{I}}$ is binary and that $Z_{\mathcal{I}}^N(\mathcal{C}) \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}$ when $U_{\mathcal{I}} = 0$, and (b) follows from Equation (22).
  • Recalling that, given $\tilde{W} = w$, the random vector $X^N(\mathcal{C})$ is uniformly distributed on $\mathcal{C}_w$, we have:
    $$H(X^N(\mathcal{C})|\tilde{W}) = \log \frac{M}{M'} = N(\alpha I(X^*;Z^*) - \tau_d). \tag{26}$$
  • Lower bound on $H(Z_{\mathcal{I}}^N(\mathcal{C})|X^N(\mathcal{C}))$. For any $x^N$ satisfying $x_{\mathcal{I}}^N \in \tilde{T}_{\mathcal{I}}^N[X^*]_{2\delta}$, we have:
    $$H(Z_{\mathcal{I}}^N(\mathcal{C})|X^N(\mathcal{C}) = x^N) = \sum_{i \in \mathcal{I}} H(Z_i|X_i = x_i) = \sum_{x \in \mathcal{X}} N(x|x_{\mathcal{I}})\, H(Z^*|X^* = x) \geq \sum_{x \in \mathcal{X}} N\alpha(1 - 2\delta) P_{X^*}(x)\, H(Z^*|X^* = x) = N\alpha(1 - 2\delta) H(Z^*|X^*),$$
    where the function $N(x|x_{\mathcal{I}})$ represents the number of times the letter $x$ appears in the sequence $x_{\mathcal{I}}$, and the inequality follows from the definition of $\tilde{T}_{\mathcal{I}}^N[X^*]_{2\delta}$. Therefore,
    $$H(Z_{\mathcal{I}}^N(\mathcal{C})|X^N(\mathcal{C})) \geq H(Z_{\mathcal{I}}^N(\mathcal{C})|X^N(\mathcal{C}), U_{\mathcal{I}}) \geq H(Z_{\mathcal{I}}^N(\mathcal{C})|X^N(\mathcal{C}), U_{\mathcal{I}} = 0)\Pr\{U_{\mathcal{I}} = 0\} \geq N\alpha(1 - 2\delta)(1 - \epsilon_5) H(Z^*|X^*). \tag{27}$$
By now, the terms on the right-hand side of Equation (21) have been bounded as required. Substituting Equations (24) to (27) into (21) gives:
$$\frac{1}{N} I(\tilde{W}; Z_{\mathcal{I}}^N(\mathcal{C})) \leq \tau_d + \frac{2 + \log L}{N} + \epsilon_5\log|\mathcal{X}| + \epsilon_5\log|\mathcal{Z}| + \alpha\big[(1 + 2\delta)H(Z^*) - I(X^*;Z^*) - (1 - 2\delta)(1 - \epsilon_5)H(Z^*|X^*)\big];$$
since the bracketed term equals $2\delta H(Z^*) + (2\delta + \epsilon_5 - 2\delta\epsilon_5)H(Z^*|X^*) \leq (4\delta + \epsilon_5)\log|\mathcal{Z}|$ and $\alpha \leq 1$, Equation (19) follows. The proof of Lemma 3 is completed. ☐

4.4. Proofs of Equations (6) to (8)

Remark 7 and Lemma 3 yield:
$$\max_{\mathcal{I} \subseteq [1:N],\, |\mathcal{I}| = \mu} \frac{1}{N} I(W; Z_{\mathcal{I}}^N) < \epsilon$$
if the codebook is “good”, which implies Equation (7). By the standard channel coding argument (cf., for example, Chapter 7.5 of [28]), using the random codebook-generating scheme of Section 4.2, one can obtain a codebook satisfying Equation (8) with probability approaching 1 as $N \to \infty$. Combining Lemma 1, it is established that one can obtain a codebook achieving both Equations (7) and (8) with probability approaching 1. Equation (6) is obvious from the coding scheme. The proofs are completed.

5. Example

This section studies a concrete example of the communication model depicted in Figure 2 and formulated in Section 2, where the main channel is a discrete memoryless binary symmetric channel (BSC) with crossover probability $0 \leq p \leq \frac{1}{2}$, and the eavesdropper observes arbitrary $\mu = N\alpha$ digital bits from the transmitter through a binary noiseless channel. This indicates that the transition matrices $Q_M$ and $Q_W$ (introduced in Definitions 2 and 3) satisfy:
$$Q_M(y|x) = \begin{cases} p & \text{if } x \neq y, \\ 1 - p & \text{otherwise} \end{cases}$$
and:
$$Q_W(z|x) = \begin{cases} 0 & \text{if } x \neq z, \\ 1 & \text{otherwise}, \end{cases}$$
where $x$, $y$ and $z$ all take values from the binary alphabet $\mathcal{X} = \mathcal{Y} = \mathcal{Z} = \{0, 1\}$.
This example is in fact a special case of the model considered in [22]. Corollary 3 gives that the secrecy capacity of this example is:
$$C_s = C_s(\alpha, p) = \max_{U:\; U \to X \to Y} [I(U;Y) - \alpha I(U;X)], \tag{28}$$
where it suffices to take the auxiliary random variable $U$ to be binary. However, the formula given in (28) is not explicit; it is in fact an optimization problem. In this section, we solve this optimization problem and find the exact random variable $U$ achieving the maximum. The main result is given in Proposition 1, whose proof is based on Lemma 4.
Suppose that the random variables $U$, $X$ and $Y$ are all distributed on $\{0, 1\}$, and that they satisfy:
$$\begin{aligned} &\Pr\{U = 0\} = \beta_U, \quad \Pr\{U = 1\} = 1 - \beta_U, \\ &\Pr\{X = 0|U = 0\} = q_0, \quad \Pr\{X = 1|U = 0\} = 1 - q_0, \\ &\Pr\{X = 0|U = 1\} = q_1, \quad \Pr\{X = 1|U = 1\} = 1 - q_1, \\ &\Pr\{X = 0\} = \beta_X = \beta_U q_0 + (1 - \beta_U) q_1 \quad \text{and} \quad \Pr\{X = 1\} = 1 - \beta_X, \end{aligned}$$
for some $0 \leq \beta_U, q_0, q_1 \leq 1$. Formula (28) can then be rewritten as:
$$C_s(\alpha, p) = \max_{0 \leq \beta_U, q_0, q_1 \leq 1} \big[h(\beta_X * p) - \alpha h(\beta_X) - \beta_U\big(h(q_0 * p) - \alpha h(q_0)\big) - (1 - \beta_U)\big(h(q_1 * p) - \alpha h(q_1)\big)\big],$$
where:
$$h(a) = -a \log a - (1 - a)\log(1 - a)$$
and:
$$a * b = a + b - 2ab$$
for $0 \leq a, b \leq 1$.
for 0 a , b 1 . Denoting:
g ( β ) = h ( β * p ) α h ( β )
and:
C ( β U , q 0 , q 1 ) = g ( β U q 0 + ( 1 β U ) q 1 ) β U g ( q 0 ) ( 1 β U ) g ( q 1 )
for 1 β 1 , the function C s ( α , p ) can then be further represented as:
C s ( α , p ) = max 0 β U , q 0 , q 1 1 [ g ( β X ) β U g ( q 0 ) ( 1 β U ) g ( q 1 ) ] = max 0 β U , q 0 , q 1 1 [ g ( β U q 0 + ( 1 β U ) q 1 ) β U g ( q 0 ) ( 1 β U ) g ( q 1 ) ] = max 0 β U , q 0 , q 1 1 C ( β U , q 0 , q 1 ) .
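For completeness, the rewriting of (28) follows from evaluating the two mutual information terms explicitly (a step left implicit above). Since $\Pr\{Y = 0 | U = u\} = q_u * p$ for the binary symmetric main channel,
$$I(U;Y) = H(Y) - H(Y|U) = h(\beta_X * p) - \beta_U h(q_0 * p) - (1 - \beta_U) h(q_1 * p),$$
$$I(U;X) = H(X) - H(X|U) = h(\beta_X) - \beta_U h(q_0) - (1 - \beta_U) h(q_1);$$
subtracting $\alpha$ times the second line from the first and collecting terms gives $I(U;Y) - \alpha I(U;X) = g(\beta_X) - \beta_U g(q_0) - (1 - \beta_U) g(q_1) = C(\beta_U, q_0, q_1)$.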
To determine the value of C s ( α , p ) , some properties of the function g are given in the following lemma.
Lemma 4.
For any given $0 \leq \alpha \leq 1$ and $0 \leq p \leq \frac{1}{2}$, it follows that:
1. $g(\beta)$ is symmetric around $\beta = \frac{1}{2}$;
2. when $\alpha \geq (1 - 2p)^2$, the function $g$ is convex over $[0, 1]$, and $\beta = \frac{1}{2}$ is the unique minimal point;
3. when $0 \leq \alpha < (1 - 2p)^2$, there exists a unique minimal point $\beta^* < \frac{1}{2}$ on the interval $[0, \frac{1}{2}]$, and hence $1 - \beta^*$ is the unique minimal point over the interval $[\frac{1}{2}, 1]$; moreover, $\beta = \frac{1}{2}$ is the unique maximal point over the interval $[\beta^*, 1 - \beta^*]$, and the function is convex over the intervals $[0, \beta^*]$ and $[1 - \beta^*, 1]$.
The proof of Lemma 4 is given in Appendix D. Figure 4 shows some examples of the function $g$, covering both cases $\alpha \geq (1 - 2p)^2$ and $\alpha < (1 - 2p)^2$.
On account of Lemma 4, we conclude the following proposition.
Proposition 1.
The secrecy capacity can be represented as:
$$C_s = 1 - \alpha - g(\beta^*), \tag{29}$$
where $\beta^*$ is the unique minimal point of the function $g$ over the interval $[0, \frac{1}{2}]$. Moreover, $C_s$ is positive if and only if $\alpha < (1 - 2p)^2$.
Proof.
When $\alpha \geq (1 - 2p)^2$, the function $g$ is convex over the interval $[0, 1]$. Therefore:
$$C(\beta_U, q_0, q_1) \leq 0$$
for all $\beta_U$, $q_0$ and $q_1$. This indicates that $C_s = 0$. It remains to determine the capacity for the case $\alpha < (1 - 2p)^2$. The argument is divided into two parts. The first part shows that the inequalities $\beta^* \leq \beta_X \leq 1 - \beta^*$ hold for all $\beta_U$, $q_0$ and $q_1$ satisfying $C(\beta_U, q_0, q_1) > 0$. The second part establishes Equation (29).
The first part is proven by contradiction. Suppose that $\beta_X \notin [\beta^*, 1 - \beta^*]$. We can assume that $0 < \beta_X < \beta^*$ without loss of generality. In this case, it must follow that $q_0 \leq \beta_X < \beta^*$ or $q_1 \leq \beta_X < \beta^*$, since $\beta_X = \beta_U q_0 + (1 - \beta_U) q_1$ is a convex combination of $q_0$ and $q_1$. We further suppose that $q_0 \leq \beta_X < \beta^*$. If it is also true that $q_1 \leq \beta^*$, then it follows immediately that $C(\beta_U, q_0, q_1) \leq 0$, on account of the fact that the function $g$ is convex over the interval $[0, \beta^*]$. On the other hand, if $q_1 > \beta^*$, we let $\beta_U^*$ satisfy:
$$\beta_U^* q_0 + (1 - \beta_U^*)\beta^* = \beta_X.$$
Then, it follows clearly that $\beta_U^* \leq \beta_U$, and hence:
$$\begin{aligned} C(\beta_U, q_0, q_1) &= g(\beta_U q_0 + (1 - \beta_U) q_1) - \beta_U g(q_0) - (1 - \beta_U) g(q_1) \\ &= g(\beta_U^* q_0 + (1 - \beta_U^*)\beta^*) - \beta_U g(q_0) - (1 - \beta_U) g(q_1) \\ &\overset{(a)}{\leq} g(\beta_U^* q_0 + (1 - \beta_U^*)\beta^*) - \beta_U^* g(q_0) - (1 - \beta_U^*) g(\beta^*) \\ &\overset{(b)}{\leq} 0, \end{aligned}$$
where (a) follows because $\beta^*$ is the minimal point of $g$ and $\beta_U^* \leq \beta_U$, and (b) follows because $g$ is convex over the interval $[0, \beta^*]$. This contradicts the assumption that $C(\beta_U, q_0, q_1) > 0$.
To prove the second part, consider that when $\beta^* \leq \beta_X \leq 1 - \beta^*$, we have:
$$C(\beta_U, q_0, q_1) = g(\beta_X) - \beta_U g(q_0) - (1 - \beta_U) g(q_1) \leq g(\tfrac{1}{2}) - g(\beta^*),$$
where the inequality holds because $\frac{1}{2}$ is the maximal point of $g$ over the interval $[\beta^*, 1 - \beta^*]$ and $\beta^*$ is the minimal point over the interval $[0, 1]$. The formula above indicates that $C_s \leq g(\frac{1}{2}) - g(\beta^*) = 1 - \alpha - g(\beta^*)$. The equality holds if $q_0 = \beta^*$, $q_1 = 1 - \beta^*$ and $\beta_U = \beta_X = \frac{1}{2}$; see Figure 5.
The proof of Proposition 1 is completed. ☐
It is clear that when $p = 0$, the communication model discussed in this section specializes to the wiretap channel II. In that case, the secrecy capacity is obviously a linear function of $\alpha$. However, when $p > 0$, the linearity no longer holds; instead, the secrecy capacity is a convex function of $\alpha$. See Figure 6.
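The capacity formula of Proposition 1 is easy to evaluate numerically. The following Python sketch (our own, for illustration) locates $\beta^*$ by minimizing $g$ on a grid over $[0, \frac{1}{2}]$ and reproduces the behaviour plotted in Figure 6:

```python
import math

def h(a):
    """Binary entropy in bits."""
    return 0.0 if a in (0.0, 1.0) else -a*math.log2(a) - (1-a)*math.log2(1-a)

def g(beta, alpha, p):
    conv = beta + p - 2*beta*p          # beta * p, the binary convolution
    return h(conv) - alpha*h(beta)

def secrecy_capacity(alpha, p, grid=100000):
    """C_s(alpha, p) = 1 - alpha - min_{0 <= beta <= 1/2} g(beta) (Proposition 1)."""
    g_min = min(g(i/(2*grid), alpha, p) for i in range(grid + 1))
    return max(0.0, 1 - alpha - g_min)  # max(0, .) guards against round-off

for alpha in (0.0, 0.25, 0.5, (1 - 2*0.1)**2):
    print(f"alpha = {alpha:.2f}: C_s = {secrecy_capacity(alpha, 0.1):.4f}")
# For p = 0.1, C_s vanishes at alpha = (1-2p)^2 = 0.64, as Proposition 1 predicts;
# at alpha = 0 it recovers the BSC capacity 1 - h(0.1).
```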

6. Conclusions

This paper considered an extended communication model of the wiretap channel II. In this new model, the source message is sent to the legitimate receiver through a discrete memoryless channel (DMC), and there exists an eavesdropper who is able to observe arbitrary $\mu = N\alpha$ symbols of the digital sequence from the transmitter through a second DMC. The secrecy capacity was established, with a coding scheme based on that developed by Ozarow and Wyner for the classic wiretap channel II. This communication model includes the general discrete memoryless wiretap channels, the wiretap channel II and the communication models discussed in [21,22,23] as special cases.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61271222, 61271174 and 61301178, the National Basic Research Program of China under Grant 2013CB338004 and the Innovation Program of Shanghai Municipal Education Commission under Grant 14ZZ017.

Author Contributions

Dan He and Wangmei Guo established the secrecy capacity of the binary example and wrote this paper. Dan He and Yuan Luo constructed the secure partition. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Coding Scheme Achieving Vanishing Maximal Decoding Error Probability

The coding scheme introduced in Section 4.2 achieves a vanishing average decoding error probability when the block length $N$ is sufficiently large. This Appendix shows that a vanishing maximal decoding error probability can also be achieved by standard techniques.

Appendix A.1. Preliminaries

The proof is based on the following two facts. The proof of Fact A can be found in Chapter 7.5 of [28], while Fact B can be obtained immediately.
Fact A: Let $\mathcal{C} = \{x^N(m)\}_{m=1}^M$ be a deterministic codebook. Suppose that a deterministic coding scheme is applied to a given point-to-point noisy channel, i.e., the encoder is a bijection between the codebook and the message set. If the average decoding error probability of the codebook is $\epsilon$, then there exists a sub-codebook $\tilde{\mathcal{C}}$ of the original codebook $\mathcal{C}$ such that $|\tilde{\mathcal{C}}| = M/2$ and the maximal decoding error probability of the related deterministic coding scheme is no greater than $2\epsilon$.
Fact B: Let $\tilde{\mathcal{C}} = \{x^N(m)\}_{m=1}^{M/2}$ be a deterministic codebook. Given a point-to-point noisy channel, suppose that the maximal decoding error probability of the related deterministic coding scheme is $2\epsilon$. Then, for any partition of that codebook, the random binning scheme introduced in Section 4.2 achieves a maximal decoding error probability no greater than $2\epsilon$.
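Fact A is the standard expurgation argument; for completeness, a one-line justification (our addition): if $P_e(m)$ denotes the decoding error probability of message $m$, then $\frac{1}{M}\sum_{m=1}^M P_e(m) \leq \epsilon$ and Markov's inequality give
$$|\{m : P_e(m) > 2\epsilon\}| \leq \frac{\sum_{m=1}^M P_e(m)}{2\epsilon} \leq \frac{M}{2},$$
so the $M/2$ codewords with the smallest $P_e(m)$ form the desired sub-codebook $\tilde{\mathcal{C}}$. Fact B holds because the random binning encoder only ever transmits codewords of $\tilde{\mathcal{C}}$, each of which is decoded correctly with probability at least $1 - 2\epsilon$.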

Appendix A.2. Coding Scheme

With the help of these two facts, the following coding scheme is achieved.
Codebook generation: Let $\mathcal{C}$ be a “good” codebook such that the average decoding error probability (of the related deterministic coding scheme) is less than $\epsilon$. Section 4.4 has shown the existence of this kind of codebook. On account of Fact A, there exists a sub-code $\tilde{\mathcal{C}}$ such that the maximal decoding error probability (of the related deterministic coding scheme) is less than $2\epsilon$. We will use $\tilde{\mathcal{C}}$ as the final codebook.
Codebook partition: Set $R = I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau$ and $M' = 2^{NR}$, and find a secure partition $\{\mathcal{C}_m\}_{m=1}^{M'}$ of the codebook $\tilde{\mathcal{C}}$ such that:
$$\max_{\mathcal{I} \in \mathcal{I}_\mu} \frac{1}{N} I(\tilde{W}; Z_{\mathcal{I}}^N(\tilde{\mathcal{C}})) < \epsilon,$$
where $\tilde{W}$ and $Z_{\mathcal{I}}^N(\tilde{\mathcal{C}})$ are defined analogously to the corresponding random variables introduced in Section 4.2.
Encoder and decoder: This is similar to that introduced in Section 4.2.
Analysis of the maximal decoding error probability: It follows from Fact B that the maximal decoding error probability of this coding scheme is no greater than $2\epsilon$.
Proof of the existence of the secure partition: By the property of the “good” codebook (see Definition 9), the codebook $\tilde{\mathcal{C}}$ satisfies:
$$|\tilde{T}(\tilde{\mathcal{C}}, \mathcal{I})| > (1 - 4\epsilon_1) \cdot \frac{M}{2} \quad \text{for all } \mathcal{I} \in \mathcal{I}_\mu$$
and:
$$|\tilde{T}(\tilde{\mathcal{C}}, z^N, \mathcal{I})| < 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})} \quad \text{for all } \mathcal{I} \in \mathcal{I}_\mu \text{ and } z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}.$$
On account of Lemma 2, there exists a secure equipartition $\{\mathcal{C}_m\}_{m=1}^{M'}$ on it, such that:
$$|\tilde{T}(\tilde{\mathcal{C}}, z^N, \mathcal{I}) \cap \mathcal{C}_m| < L$$
for all $1 \leq m \leq M'$, $\mathcal{I} \in \mathcal{I}_\mu$ and $z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}$, where $L$ is a sufficiently large constant. Following the argument of Lemma 3, this partition is indeed secure.

Appendix B. Proof of Lemma 1

According to the definition of a “good” codebook, it follows that:
$$\begin{aligned} \Pr\{\mathcal{C} \text{ is not good}\} \leq{}& \Pr\Big\{\min_{\mathcal{I} \in \mathcal{I}_\mu} |\tilde{T}(\mathcal{C}, \mathcal{I})| \leq (1 - 2\epsilon_1) M\Big\} \\ &+ \Pr\Big\{\max_{\mathcal{I} \in \mathcal{I}_\mu} \max_{z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}} |\tilde{T}(\mathcal{C}, z^N, \mathcal{I})| \geq 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}\Big\}. \end{aligned} \tag{B1}$$
On account of Lemma 4 in [23], the first term on the right-hand side of Equation (B1) is bounded by:
$$\Pr\Big\{\min_{\mathcal{I} \in \mathcal{I}_\mu} |\tilde{T}(\mathcal{C}, \mathcal{I})| \leq (1 - 2\epsilon_1) M\Big\} < \epsilon_3,$$
with $\epsilon_3$ given by Equation (16).
Therefore, it suffices to prove that the second term on the right-hand side of Equation (B1) satisfies:
$$\Pr\Big\{\max_{\mathcal{I} \in \mathcal{I}_\mu} \max_{z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}} |\tilde{T}(\mathcal{C}, z^N, \mathcal{I})| \geq 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}\Big\} < \epsilon_4.$$
To this end, denote:
$$U(i, z^N) = \begin{cases} 1 & \text{if } X_{\mathcal{I}}^{*N}(i) \in \tilde{T}_{\mathcal{I}}^N[X^*Z^*|z^N]_{2\delta}, \\ 0 & \text{otherwise} \end{cases}$$
for every $1 \leq i \leq M$ and $z^N \in \bigcup_{\mathcal{I} \in \mathcal{I}_\mu} \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}$. Then, on account of the Chernoff bound, it follows that:
$$\begin{aligned} \Pr\Big\{|\tilde{T}(\mathcal{C}, z^N, \mathcal{I})| \geq 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}\Big\} &= \Pr\Big\{\sum_{i=1}^M U(i, z^N) \geq 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}\Big\} \\ &\leq \exp_2\Big({-2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}}\Big) \cdot \mathbb{E}\Big[\exp_2\Big(\sum_{i=1}^M U(i, z^N)\Big)\Big] \\ &= \exp_2\Big({-2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}}\Big) \cdot \prod_{i=1}^M \mathbb{E}\big[\exp_2\big(U(i, z^N)\big)\big] \\ &\leq \exp_2\Big({-2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}}\Big) \cdot \prod_{i=1}^M \exp_e\big(\mathbb{E}[U(i, z^N)]\big), \end{aligned} \tag{B2}$$
where the second equality follows from the independence of the codewords and the last inequality follows because $2^t \leq 1 + t \leq e^t$ for $0 \leq t \leq 1$. Recalling Corollary 5, we have:
$$\mathbb{E}[U(i, z^N)] < 2^{-N(\alpha I(X^*;Z^*) - 4\delta H(X^*))}.$$
Substituting the formula above into (B2) gives:
$$\begin{aligned} \Pr\Big\{|\tilde{T}(\mathcal{C}, z^N, \mathcal{I})| \geq 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}\Big\} &\leq \exp_2\Big({-2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}}\Big) \cdot \exp_e\Big(2^{-N(\alpha I(X^*;Z^*) - 4\delta H(X^*))} M\Big) \\ &= \exp_2\Big[\Big(2^{-N(\tau_d - 4\delta H(X^*))}\log e - 2^{-\frac{1}{2}N\tau_d}\Big) \cdot 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau)}\Big] \\ &= \exp_2\Big[\Big(2^{-N(\tau_d - 4\delta H(X^*))}\log e - 2^{-\frac{1}{2}N\tau_d}\Big) M'\Big], \end{aligned}$$
where $M$ and $M'$ are given by Equations (10) and (11), respectively. Therefore,
$$\begin{aligned} \Pr\Big\{\max_{\mathcal{I} \in \mathcal{I}_\mu} \max_{z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}} |\tilde{T}(\mathcal{C}, z^N, \mathcal{I})| \geq 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}\Big\} &\leq \sum_{\mathcal{I} \in \mathcal{I}_\mu} \sum_{z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}} \Pr\Big\{|\tilde{T}(\mathcal{C}, z^N, \mathcal{I})| \geq 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})}\Big\} \\ &\leq \sum_{\mathcal{I} \in \mathcal{I}_\mu} \sum_{z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}} \exp_2\Big[\Big(2^{-N(\tau_d - 4\delta H(X^*))}\log e - 2^{-\frac{1}{2}N\tau_d}\Big) M'\Big] \\ &\leq \exp_2\Big[2N + \Big(2^{-N(\tau_d - 4\delta H(X^*))}\log e - 2^{-\frac{1}{2}N\tau_d}\Big) M'\Big] = \epsilon_4. \end{aligned}$$
The proof is completed.

Appendix C. Proof of Lemma 2

This Appendix proves that for any given “good” codebook $\mathcal{C}$, there exists a secure partition $\{\mathcal{C}_m\}_{m=1}^{M'}$ on it such that:
$$|\tilde{T}(\mathcal{C}, z^N, \mathcal{I}) \cap \mathcal{C}_m| < L$$
for all $1 \leq m \leq M'$, $\mathcal{I} \in \mathcal{I}_\mu$ and $z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}$. The proof is quite similar to that of Lemma 2 in [14], and most of the notation in this Appendix follows [14].
Let $\mathcal{F}$ be the set of all possible equipartitions of the codebook $\mathcal{C}$. Each element of $\mathcal{F}$ is in fact a function $f: \mathcal{C} \to [1:M']$ such that:
$$|f^{-1}(m)| = r = \frac{M}{M'} = 2^{N(\alpha I(X^*;Z^*) - \tau_d)}.$$
For any $f \in \mathcal{F}$, let $\Psi(f) = 0$ if the partition produced by $f$ is secure, and $\Psi(f) = 1$ otherwise. Then, it suffices to prove that:
$$\mathbb{E}[\Psi(F)] < 1,$$
where $F$ is the random variable uniformly distributed on $\mathcal{F}$. To this end, for any $1 \leq m \leq M'$, denote:
$$\Phi(f, m, \mathcal{I}, z^N) = \begin{cases} 0 & \text{if } |\tilde{T}(\mathcal{C}, z^N, \mathcal{I}) \cap f^{-1}(m)| < L, \\ 1 & \text{otherwise}. \end{cases}$$
It follows clearly that:
$$\mathbb{E}[\Psi(F)] \leq \sum_{m=1}^{M'} \sum_{\mathcal{I} \in \mathcal{I}_\mu} \sum_{z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}} \mathbb{E}[\Phi(F, m, \mathcal{I}, z^N)]. \tag{C1}$$
In the remainder of the proof, we first bound the value of $\mathbb{E}[\Phi(F, m, \mathcal{I}, z^N)]$; the value of $\mathbb{E}[\Psi(F)]$ is then bounded via Equation (C1).
Upper bound on $\mathbb{E}[\Phi(F, m, \mathcal{I}, z^N)]$. For any $\mathcal{I} \in \mathcal{I}_\mu$ and $z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}$, let:
$$n_{z^N} = |\tilde{T}(\mathcal{C}, z^N, \mathcal{I})|.$$
Then, it follows that:
$$n_{z^N} \leq n_1 = 2^{N(I(X^*;Y^*) - \alpha I(X^*;Z^*) - \tau - \frac{\tau_d}{2})},$$
since the codebook $\mathcal{C}$ is “good”. Therefore, the probability that exactly $t$ codewords of $F^{-1}(m)$ belong to $\tilde{T}(\mathcal{C}, z^N, \mathcal{I})$ is given by:
$$\Pr\{|\tilde{T}(\mathcal{C}, z^N, \mathcal{I}) \cap F^{-1}(m)| = t\} = \frac{\binom{n_{z^N}}{t}\binom{M - n_{z^N}}{r - t}}{\binom{M}{r}} \leq \frac{\binom{n_1}{t}\binom{M}{r - t}}{\binom{M}{r}}.$$
By a method similar to that used to bound $\pi_t$ in [14], we have:
$$\frac{\binom{n_1}{t}\binom{M}{r - t}}{\binom{M}{r}} \leq \frac{n_1^t}{t!} \cdot \frac{r!\,(M - r)!}{(r - t)!\,(M - r + t)!} = \frac{n_1^t}{t!} \cdot \prod_{j=1}^{t} \frac{r - j + 1}{M - r + j} \leq \frac{n_1^t}{t!} \cdot \frac{r^t}{(M - r)^t} = \frac{(n_1 r / M)^t}{t!\,(1 - r/M)^t}.$$
Observing that $n_1 r / M = 2^{-N\tau_d/2}$ and $1 - r/M > \frac{1}{2}$, we have:
$$\Pr\{|\tilde{T}(\mathcal{C}, z^N, \mathcal{I}) \cap F^{-1}(m)| = t\} \leq \frac{2^{-N\tau_d t/2}\, 2^t}{t!}.$$
This indicates that:
$$\mathbb{E}[\Phi(F, m, \mathcal{I}, z^N)] = \sum_{t=L}^{r} \Pr\{|\tilde{T}(\mathcal{C}, z^N, \mathcal{I}) \cap F^{-1}(m)| = t\} \leq \sum_{t=L}^{r} \frac{2^{-N\tau_d t/2}\, 2^t}{t!} \leq 2^{-N\tau_d L/2} \sum_{t=0}^{\infty} \frac{2^t}{t!} = e^2 \cdot 2^{-N\tau_d L/2}. \tag{C2}$$
Upper bound on $\mathbb{E}[\Psi(F)]$. Substituting Equation (C2) into (C1) gives:
$$\mathbb{E}[\Psi(F)] \leq \sum_{m=1}^{M'} \sum_{\mathcal{I} \in \mathcal{I}_\mu} \sum_{z^N \in \tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}} e^2 \cdot 2^{-N\tau_d L/2}.$$
Observing that $M' = 2^{NR}$, $|\mathcal{I}_\mu| \leq 2^N$ and $|\tilde{T}_{\mathcal{I}}^N[Z^*]_{2\delta}| \leq 2^{N\alpha}$ for $\mathcal{I} \in \mathcal{I}_\mu$, the formula above can be further bounded by:
$$\mathbb{E}[\Psi(F)] \leq e^2 \cdot 2^{N(R + 1 + \alpha - \tau_d L/2)},$$
which is less than 1 if $L \geq 2(R + 6)/\tau_d$. This completes the proof of Lemma 2.

Appendix D. Proof of Lemma 4

This Appendix establishes the properties of the function g given in Lemma 4. Property 1 is obvious. We only give the proof for Properties 2 and 3.
Proof of Property 2.
The first and the second derivatives of the function $g$ are:
$$g'(\beta) = \log e\left[(1 - 2p)\ln\frac{1 - \beta * p}{\beta * p} - \alpha\ln\frac{1 - \beta}{\beta}\right]$$
and:
$$g''(\beta) = \log e\left[\alpha\left(\frac{1}{\beta} + \frac{1}{1 - \beta}\right) - (1 - 2p)^2\left(\frac{1}{\beta * p} + \frac{1}{1 - \beta * p}\right)\right],$$
respectively. For any $0 \leq \beta \leq \frac{1}{2}$, it follows that $\beta \leq \beta * p \leq \frac{1}{2}$. Therefore, when $\alpha \geq (1 - 2p)^2$, we have:
$$g''(\beta) > 0$$
for $0 \leq \beta \leq \frac{1}{2}$. Moreover, noticing that:
$$g'(\tfrac{1}{2}) = 0,$$
we conclude that:
$$g'(\beta) \leq 0$$
for $0 \leq \beta \leq \frac{1}{2}$. This implies that the function $g$ is decreasing and convex over the interval $[0, \frac{1}{2}]$. On account of the symmetry, it is concluded that $g$ is increasing and convex over the interval $[\frac{1}{2}, 1]$, and hence, $\beta = \frac{1}{2}$ is the unique minimal point in the interval $[0, 1]$. One can easily verify that $g$ is also convex at $\beta = \frac{1}{2}$; thus, the function is convex over the whole interval $[0, 1]$. ☐
Proof of Property 3.
We first find the solution of the inequality $g''(\beta) < 0$. This inequality indicates that:
$$\frac{\frac{1}{\beta} + \frac{1}{1 - \beta}}{\frac{1}{\beta * p} + \frac{1}{1 - \beta * p}} = \frac{(\beta * p)(1 - \beta * p)}{\beta(1 - \beta)} < \frac{(1 - 2p)^2}{\alpha},$$
or:
$$\beta^2 - \beta + \frac{p - p^2}{(1 - 2p)^2(1/\alpha - 1)} < 0.$$
The inequality further indicates that $\beta' < \beta < 1 - \beta'$ for some $0 < \beta' < \frac{1}{2}$.
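For concreteness (this step is implicit above), write $c = \frac{p - p^2}{(1 - 2p)^2(1/\alpha - 1)}$; the solution set of the quadratic inequality is $\beta' < \beta < 1 - \beta'$ with
$$\beta' = \frac{1 - \sqrt{1 - 4c}}{2}.$$
The assumption $\alpha < (1 - 2p)^2$ guarantees $1/\alpha - 1 > \frac{1 - (1 - 2p)^2}{(1 - 2p)^2} = \frac{4p(1 - p)}{(1 - 2p)^2}$, hence $c < \frac{1}{4}$, so $\beta'$ is real and lies in $[0, \frac{1}{2})$ (with $\beta' = 0$ only in the degenerate case $p = 0$).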
Next, we study the behavior of $g$ over the interval $[0, \frac{1}{2}]$. It is clear that $g'$ increases on the interval $[0, \beta']$ and decreases on the interval $[\beta', \frac{1}{2}]$. Furthermore, since $g'(\frac{1}{2}) = 0$, it follows that $g'(\beta') > 0$. Noticing that $g'$ increases over the interval $[0, \beta']$ and $g'(0) < 0$, we conclude that there exists a unique point $0 < \beta^* < \beta'$ such that $g'(\beta^*) = 0$, and that point is clearly the unique minimal point of $g$ over the interval $[0, \frac{1}{2}]$.
Finally, since $\beta^* < \beta'$, it follows that $g''(\beta) > 0$ for $\beta \in [0, \beta^*]$, so the function is convex over that interval. The behavior of $g$ over the interval $[\frac{1}{2}, 1]$ follows from the symmetry. It is also clear that $\beta = \frac{1}{2}$ is the unique maximal point over the interval $[\beta^*, 1 - \beta^*]$.
This completes the proof of this lemma. ☐

References

  1. Wyner, A.D. The wire-tap channel. Bell Syst. Tech. J. 1975, 54, 1355–1387. [Google Scholar] [CrossRef]
  2. Csiszár, I.; Körner, J. Broadcast channels with confidential messages. IEEE Trans. Inf. Theory 1978, 24, 339–348. [Google Scholar] [CrossRef]
  3. Chen, Y.; Vinck Han, A.J. Wiretap channel with side information. IEEE Trans. Inf. Theory 2008, 54, 395–402. [Google Scholar] [CrossRef]
  4. Dai, B.; Luo, Y. Some new results on the wiretap channel with side information. Entropy 2012, 14, 1671–1702. [Google Scholar] [CrossRef]
  5. Dai, B.; Vinck Han, A.J.; Luo, Y.; Zhuang, Z. Degraded Broadcast Channel with Noncausal Side Information, Confidential Messages and Noiseless Feedback. In Proceedings of the IEEE International Symposium on Information Theory, Cambridge, MA, USA, 1–6 July 2012.
  6. Khisti, A.; Diggavi, S.N.; Wornell, G.W. Secret-key agreement with channel state information at the transmitter. IEEE Trans. Inf. Forensics Secur. 2011, 6, 672–681. [Google Scholar] [CrossRef]
  7. Chia, Y.H.; El Gamal, A. Wiretap channel with causal state information. IEEE Trans. Inf. Theory 2012, 58, 2838–2849. [Google Scholar] [CrossRef]
  8. Dai, B.; Luo, Y.; Vinck Han, A.J. Capacity Region of Broadcast Channels with Private Message and Causal Side Information. In Proceedings of the 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010.
  9. Liang, Y.; Kramer, G.; Poor, H.; Shamai, S. Compound wiretap channel. EURASIP J. Wirel. Commun. Netw. 2008. [Google Scholar] [CrossRef]
  10. Bjelaković, I.; Boche, H.; Sommerfeld, J. Capacity results for compound wiretap channels. Probl. Inf. Transm. 2011, 49, 73–98. [Google Scholar] [CrossRef]
  11. Schaefer, R.F.; Boche, H. Robust broadcasting of common and confidential messages over compound channels: Strong secrecy and decoding performance. IEEE Trans. Inf. Forensics Secur. 2014, 9, 1720–1732. [Google Scholar] [CrossRef]
  12. Schaefer, R.F.; Loyka, S. The secrecy capacity of compound gaussian MIMO wiretap channels. IEEE Trans. Inf. Theory 2015, 61, 5535–5552. [Google Scholar] [CrossRef]
  13. Bjelaković, I.; Boche, H.; Sommerfeld, J. Capacity results for arbitrarily varying wiretap channel. Inf. Theory Comb. Search Theory Lect. Notes Comput. Sci. 2013, 7777, 123–144. [Google Scholar]
  14. Ozarow, L.H.; Wyner, A.D. Wire-tap channel II. AT&T Bell Labs Tech. J. 1984, 63, 2135–2157. [Google Scholar]
  15. Cai, N.; Yeung, R.W. Secure Network Coding. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Lausanne, Switzerland, 30 June–5 July 2002.
  16. Cai, N.; Yeung, R.W. Secure network coding on a wiretap network. IEEE Trans. Inf. Theory 2011, 57, 424–435. [Google Scholar] [CrossRef]
  17. Feldman, J.; Malkin, T.; Stein, C. On the Capacity of Secure Network Coding. In Proceedings of the 42nd Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 29 September–1 October 2004.
  18. Bhattad, K.; Narayanan, K.R. Weakly secure network coding. Proc. NetCod 2005, 4, 1–5. [Google Scholar]
  19. Cheng, F.; Yeung, R.W. Performance bounds on a wiretap network with arbitrary wiretap sets. IEEE Trans. Inf. Theory 2014, 60, 3345–3358. [Google Scholar] [CrossRef]
  20. He, D.; Guo, W. Strong secrecy capacity of a class of wiretap networks. Entropy 2016, 18, 238. [Google Scholar] [CrossRef]
  21. Nafea, M.; Yener, A. Wiretap Channel II with a Noisy Main Channel. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015.
  22. Goldfeld, Z.; Cuff, P.; Permuter, H.H. Semantic-security capacity for wiretap channels of type II. IEEE Trans. Inf. Theory 2016, 62, 3863–3879. [Google Scholar] [CrossRef]
  23. He, D.; Luo, Y.; Cai, N. Strong Secrecy Capacity of the Wiretap Channel II with DMC Main Channel. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016.
  24. Luo, Y.; Mitrpant, C.; Vinck Han, A.J. Some new characters on the wiretap channel of type II. IEEE Trans. Inf. Theory 2005, 51, 1222–1229. [Google Scholar] [CrossRef]
  25. Yang, Y.; Dai, B. A combination of the wiretap channel and the wiretap channel of type II. J. Comput. Inf. Syst. 2014, 10, 4489–4502. [Google Scholar]
  26. He, D.; Luo, Y. A Kind of Non-DMC Erasure Wiretap Channel. In Proceedings of the IEEE International Conference on Communication Technology (ICCT), Chengdu, China, 9–11 November 2012.
  27. Kramer, G. Topics in multi-user information theory. Found. Trends Commun. Inf. Theory 2007, 4, 265–444. [Google Scholar]
  28. Yeung, R.W. Information Theory and Network Coding; Springer: New York, NY, USA, 2008. [Google Scholar]
Figure 1. Comparison of different secrecy criteria.
Figure 2. Communication model of wiretap channel II with noise.
Figure 3. Adding a virtual channel $Q_V$ to the communication system.
Figure 4. Relationship between the function $g$ and $\beta$ with $p = 0.1$. (a) $\alpha \geq (1 - 2p)^2$; (b) $\alpha < (1 - 2p)^2$.
Figure 5. Parameters achieving the secrecy capacity ($\alpha = 0.58$, $p = 0.1$).
Figure 6. Relationship between the secrecy capacity $C_s(\alpha, p)$ and $\alpha$.
