
Scaling Exponent and Moderate Deviations Asymptotics of Polar Codes for the AWGN Channel

Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583, Singapore
* Author to whom correspondence should be addressed.
Entropy 2017, 19(7), 364; https://doi.org/10.3390/e19070364
Submission received: 8 June 2017 / Revised: 8 July 2017 / Accepted: 13 July 2017 / Published: 15 July 2017
(This article belongs to the Special Issue Multiuser Information Theory)

Abstract: This paper investigates polar codes for the additive white Gaussian noise (AWGN) channel. The scaling exponent $\mu$ of polar codes for a memoryless channel $q_{Y|X}$ with capacity $I(q_{Y|X})$ characterizes the closest gap between the capacity and non-asymptotic achievable rates as follows: for a fixed $\varepsilon\in(0,1)$, the gap between the capacity $I(q_{Y|X})$ and the maximum non-asymptotic rate $R_n^*$ achieved by a length-$n$ polar code with average error probability $\varepsilon$ scales as $n^{-1/\mu}$, i.e., $I(q_{Y|X})-R_n^*=\Theta(n^{-1/\mu})$. It is well known that the scaling exponent $\mu$ for any binary-input memoryless channel (BMC) with $I(q_{Y|X})\in(0,1)$ is bounded above by $4.714$. Our main result shows that $4.714$ remains a valid upper bound on the scaling exponent for the AWGN channel. Our proof technique involves the following two ideas: (i) the capacity of the AWGN channel can be achieved within a gap of $O(n^{-1/\mu}\sqrt{\log n})$ by using an input alphabet consisting of $n$ constellations and restricting the input distribution to be uniform; (ii) the capacity of a multiple access channel (MAC) with an input alphabet consisting of $n$ constellations can be achieved within a gap of $O(n^{-1/\mu}\log n)$ by using a superposition of $\log n$ binary-input polar codes. In addition, we investigate the performance of polar codes in the moderate deviations regime, where both the gap to capacity and the error probability vanish as $n$ grows. An explicit construction of polar codes is proposed to obey a certain tradeoff between the gap to capacity and the decay rate of the error probability for the AWGN channel.

1. Introduction

1.1. The Additive White Gaussian Noise Channel

This paper investigates low-complexity codes over the classical additive white Gaussian noise (AWGN) channel ([1], Chapter 9), where a source wants to transmit information to a destination and each received symbol is the sum of the transmitted symbol and an independent Gaussian random variable. More specifically, if $X_k$ denotes the symbol transmitted by the source in the $k$th time slot, then the corresponding symbol received by the destination is
$$Y_k = X_k + Z_k \tag{1}$$
where $Z_k$ is a standard normal random variable. When the transmission lasts for $n$ time slots, i.e., each transmitted codeword consists of $n$ symbols, it is assumed that $Z_1, Z_2, \ldots, Z_n$ are independent and each transmitted codeword $x^n \triangleq (x_1, x_2, \ldots, x_n)$ must satisfy the peak power constraint
$$\frac{1}{n}\sum_{k=1}^n x_k^2 \le P \tag{2}$$
where $P>0$ is a constant which denotes the permissible power. For transmitting a uniformly distributed message $W \in \{1, 2, \ldots, 2^{nR}\}$ across this channel, Shannon [2] showed that the limit of the maximum coding rate $R$ as $n$ approaches infinity (i.e., the capacity) is
$$C(P) \triangleq \frac{1}{2}\log(1+P). \tag{3}$$
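As a concrete illustration (with arbitrary illustrative values of $n$ and $P$), the following Python sketch simulates the channel (1) under the constraint (2) and evaluates the capacity (3).

```python
import numpy as np

def awgn_capacity(P):
    """Capacity C(P) = 0.5 * log2(1 + P) of the unit-noise AWGN channel; cf. (3)."""
    return 0.5 * np.log2(1 + P)

rng = np.random.default_rng(0)
n, P = 1024, 2.0                     # illustrative blocklength and power
x = rng.normal(0.0, np.sqrt(P), n)   # a Gaussian codeword (not yet power-constrained)
x *= np.sqrt(n * P) / max(np.sqrt(n * P), np.linalg.norm(x))  # enforce (1/n) * sum x_k^2 <= P
y = x + rng.normal(0.0, 1.0, n)      # received word Y_k = X_k + Z_k; cf. (1)
print(awgn_capacity(P))              # 0.5 * log2(3), about 0.79 bits per channel use
```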

1.2. Polar Codes

In 1948, in his groundbreaking paper, Shannon [2] proposed a systematic framework for studying the fundamental limits of transmitting information over noisy channels and provided a single-letter formula for the capacity of a memoryless channel. For decades, information and coding theorists sought to achieve these fundamental limits via low-complexity capacity-achieving codes. In 2009, in his breakthrough paper, Arıkan [3] proposed a class of codes, known as polar codes, whose encoding and decoding complexities are $O(n\log n)$ and which provably achieve the capacity of any binary-input memoryless symmetric channel (BMSC).
The scaling exponent $\mu$ of polar codes for a memoryless channel $q_{Y|X}$ with capacity
$$I(q_{Y|X}) \triangleq \max_{p_X} I(X;Y) \tag{4}$$
characterizes the closest gap between the channel capacity and non-asymptotic achievable rates as follows: For a fixed $\varepsilon\in(0,1)$, the gap between the capacity $I(q_{Y|X})$ and the maximum non-asymptotic rate $R_n^*$ achieved by a length-$n$ polar code with average error probability $\varepsilon$ scales as $n^{-1/\mu}$, i.e., $I(q_{Y|X})-R_n^*=\Theta(n^{-1/\mu})$. It has been shown in [4,5,6] that the scaling exponent $\mu$ for any BMSC with $I(q_{Y|X})\in(0,1)$ lies between $3.579$ and $4.714$. Indeed, the upper bound $4.714$ remains valid for any general binary-input memoryless channel (BMC) ([7], Lemma 4). The scaling exponent of polar codes for a non-stationary channel has recently been studied in [8].
It is well known that polar codes are capacity-achieving for BMCs [9,10,11,12], and appropriately chosen ones are also capacity-achieving for the AWGN channel [13]. In particular, for any $R<C(P)$ and any $\beta<1/2$, polar codes operated at rate $R$ can be constructed for the AWGN channel such that the error probability decays as $O\big(2^{-n^{\beta}}\big)$ [13] and the encoding and decoding complexities are $O(n\log n)$. However, the scaling exponent of polar codes for the AWGN channel has not been investigated yet.
In this paper, we construct polar codes for the AWGN channel and show that $4.714$ remains a valid upper bound on the scaling exponent. Our construction of polar codes involves the following two ideas: (i) by using an input alphabet consisting of $n$ constellations and restricting the input distribution to be uniform as suggested in [13], we can achieve the capacity of the AWGN channel within a gap of $O(n^{-1/\mu}\sqrt{\log n})$; (ii) by using a superposition of $\log n$ binary-input polar codes (in this paper, $n$ is always a power of 2) as suggested in [14], we can achieve the capacity of the corresponding multiple access channel (MAC) within a gap of $O(n^{-1/\mu}\log n)$, where the input alphabet of the MAC has $n$ constellations (i.e., the size of the Cartesian product of the input alphabets corresponding to the $\log n$ input terminals is $n$). The encoding and decoding complexities of our constructed polar codes are $O(n\log^2 n)$. On the other hand, the lower bound $3.579$ holds trivially for the constructed polar codes because they are constructed by superposing $\log n$ binary-input polar codes, whose scaling exponents are bounded below by $3.579$ [5].
In addition, Mondelli et al. ([4], Section IV) provided an explicit construction of polar codes for any BMSC which obeys a certain tradeoff between the gap to capacity and the decay rate of the error probability. More specifically, if the gap to capacity is set to vanish at a rate of $\Theta\big(n^{-\frac{1-\gamma}{\mu}}\big)$ for some $\gamma\in\left(\frac{1}{1+\mu},1\right)$, then a length-$n$ polar code can be constructed such that the error probability is $O\Big(n\cdot2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\mu+\gamma-1}{\gamma\mu}\right)}}\Big)$, where $h_2:[0,1/2]\to[0,1]$ denotes the binary entropy function. This tradeoff was developed under the moderate deviations regime [15], where both the gap to capacity and the error probability vanish as $n$ grows. For the AWGN channel, we develop a similar tradeoff under the moderate deviations regime by using our constructed polar codes described above.

1.3. Paper Outline

This paper is organized as follows. The notation used in this paper is described in the next subsection. Section 2 presents the background of this work, which includes existing polarization results for the BMC that are used in this work. Sections 3.1–3.3 state the formulation of the binary-input MAC and present new polarization results for the binary-input MAC. Sections 4.1 and 4.2 state the formulation of the AWGN channel and present new polarization results for the AWGN channel. Section 5 defines the scaling exponent for the AWGN channel and establishes the main result: 4.714 is an upper bound on the scaling exponent of polar codes for the AWGN channel. Section 6 presents an explicit construction of polar codes for the AWGN channel which obey a certain tradeoff between the gap to capacity and the decay rate of the error probability under the moderate deviations regime. Concluding remarks are provided in Section 7.

1.4. Notation

The sets of natural numbers, real numbers and non-negative real numbers are denoted by $\mathbb N$, $\mathbb R$ and $\mathbb R_+$ respectively. For any sets $\mathcal A$ and $\mathcal B$ and any mapping $f:\mathcal A\to\mathcal B$, we let $f^{-1}(\mathcal D)$ denote the set $\{a\in\mathcal A\,|\,f(a)\in\mathcal D\}$ for any $\mathcal D\subseteq\mathcal B$. We let $\mathbf 1\{\mathcal E\}$ be the indicator function of the set $\mathcal E$. An arbitrary (discrete or continuous) random variable is denoted by an upper-case letter (e.g., $X$), and the realization and the alphabet of the random variable are denoted by the corresponding lower-case letter (e.g., $x$) and calligraphic letter (e.g., $\mathcal X$) respectively. We use $X^n$ to denote the random tuple $(X_1,X_2,\ldots,X_n)$, where each $X_k$ has the same alphabet $\mathcal X$. We take all logarithms to base 2 throughout this paper.
The following notation is used for any arbitrary random variables $X$ and $Y$ and any real-valued function $g$ with domain $\mathcal X$. We let $p_{Y|X}$ and $p_{X,Y}=p_Xp_{Y|X}$ denote the conditional probability distribution of $Y$ given $X$ and the probability distribution of $(X,Y)$ respectively. We let $p_{X,Y}(x,y)$ and $p_{Y|X}(y|x)$ be the evaluations of $p_{X,Y}$ and $p_{Y|X}$ respectively at $(X,Y)=(x,y)$. To make the dependence on the distribution explicit, we let $P_{p_X}\{g(X)\in\mathcal A\}$ denote $\int_{\mathcal X}p_X(x)\mathbf 1\{g(x)\in\mathcal A\}\,\mathrm dx$ for any set $\mathcal A\subseteq\mathbb R$. The expectation of $g(X)$ is denoted by $\mathbb E_{p_X}[g(X)]$. For any $(X,Y,Z)$ distributed according to some $p_{X,Y,Z}$, the entropy of $X$ and the conditional mutual information between $X$ and $Y$ given $Z$ are denoted by $H_{p_X}(X)$ and $I_{p_{X,Y,Z}}(X;Y|Z)$ respectively. For simplicity, we sometimes omit the subscript of a notation if it causes no confusion. The relative entropy between $p_X$ and $q_X$ is denoted by
$$D(p_X\|q_X)\triangleq\int_{\mathcal X}p_X(x)\log\frac{p_X(x)}{q_X(x)}\,\mathrm dx.\tag{5}$$
The 2-Wasserstein distance between $p_X$ and $p_Y$ is denoted by
$$W_2(p_X,p_Y)\triangleq\inf_{s_{X,Y}:\,s_X=p_X,\ s_Y=p_Y}\sqrt{\int_{\mathcal X}\int_{\mathcal Y}s_{X,Y}(x,y)(x-y)^2\,\mathrm dy\,\mathrm dx}.\tag{6}$$
We let $\mathcal N(\cdot\,;\mu,\sigma^2):\mathbb R\to[0,\infty)$ denote the probability density function of a Gaussian random variable whose mean and variance are $\mu$ and $\sigma^2$ respectively, i.e.,
$$\mathcal N(z;\mu,\sigma^2)\triangleq\frac1{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(z-\mu)^2}{2\sigma^2}}.\tag{7}$$

2. Background: Point-to-Point Channels and Existing Polarization Results

In this section, we will review important polarization results related to the scaling exponent of polar codes for binary-input memoryless channels (BMCs).

2.1. Point-to-Point Memoryless Channels

Consider a point-to-point channel which consists of one source and one destination, denoted by $\mathrm s$ and $\mathrm d$ respectively. Suppose node $\mathrm s$ transmits information to node $\mathrm d$ in $n$ time slots. Before any transmission begins, node $\mathrm s$ chooses a message $W$ destined for node $\mathrm d$, where $W$ is uniformly distributed over the alphabet
$$\mathcal W\triangleq\{1,2,\ldots,M\}\tag{8}$$
which consists of $M$ elements. For each $k\in\{1,2,\ldots,n\}$, node $\mathrm s$ transmits $X_k\in\mathcal X$ based on $W$ and node $\mathrm d$ receives $Y_k\in\mathcal Y$ in time slot $k$, where $\mathcal X$ and $\mathcal Y$ denote respectively the input and output alphabets of the channel. After $n$ time slots, node $\mathrm d$ declares $\hat W$ to be the transmitted $W$ based on $Y^n$. Formally, we define a length-$n$ code as follows.
Definition 1.
An $(n,M)$ code consists of the following:
  • A message set $\mathcal W$ as defined in (8). Message $W$ is uniform on $\mathcal W$.
  • An encoding function $f_k:\mathcal W\to\mathcal X$ for each $k\in\{1,2,\ldots,n\}$, where $f_k$ is used by node $\mathrm s$ for encoding $X_k$ such that $X_k=f_k(W)$.
  • A decoding function $\varphi:\mathcal Y^n\to\mathcal W$ used by node $\mathrm d$ for producing the message estimate $\hat W=\varphi(Y^n)$.
Definition 2.
The point-to-point memoryless channel is characterized by an input alphabet $\mathcal X$, an output alphabet $\mathcal Y$ and a conditional distribution $q_{Y|X}$ such that the following holds for any $(n,M)$ code: For each $k\in\{1,2,\ldots,n\}$, $p_{W,X^n,Y^n}=p_{W,X^n}\prod_{k=1}^np_{Y_k|X_k}$, where $p_{Y_k|X_k}(y_k|x_k)=q_{Y|X}(y_k|x_k)$ for all $x_k\in\mathcal X$ and $y_k\in\mathcal Y$.
For any $(n,M)$ code defined on the point-to-point memoryless channel, let $p_{W,X^n,Y^n,\hat W}$ be the joint distribution induced by the code. By Definitions 1 and 2, we can factorize $p_{W,X^n,Y^n,\hat W}$ as
$$p_{W,X^n,Y^n,\hat W}=p_W\,p_{X^n|W}\left(\prod_{k=1}^np_{Y_k|X_k}\right)p_{\hat W|Y^n}.\tag{9}$$

2.2. Polarization for Binary-Input Memoryless Channels

Definition 3.
A point-to-point memoryless channel characterized by $q_{Y|X}$ is called a binary-input memoryless channel (BMC) if $\mathcal X=\{0,1\}$.
We follow the formulation of polar coding in [11]. Consider any BMC characterized by $q_{Y|X}$. Let $p_X$ be the probability distribution of a Bernoulli random variable $X$, and let $p_{X^n}$ be the distribution of $n$ independent copies of $X\sim p_X$, i.e., $p_{X^n}(x^n)=\prod_{k=1}^np_X(x_k)$ for all $x^n\in\mathcal X^n$. For each $n=2^m$ where $m\in\mathbb N$, the polarization mapping of a length-$n$ polar code is given by
$$G_n\triangleq\begin{bmatrix}1&0\\1&1\end{bmatrix}^{\otimes m}=G_n^{-1}\tag{10}$$
where $\otimes$ denotes the Kronecker power. Define $p_{U^n|X^n}$ such that
$$[U_1\ U_2\ \cdots\ U_n]=[X_1\ X_2\ \cdots\ X_n]\,G_n\tag{11}$$
where the addition and product operations are performed over GF(2), define
$$p_{Y_k|X_k}(y_k|x_k)\triangleq q_{Y|X}(y_k|x_k)\tag{12}$$
for each $k\in\{1,2,\ldots,n\}$ and each $(x_k,y_k)\in\mathcal X\times\mathcal Y$, where $q_{Y|X}$ characterizes the BMC (cf. Definition 3), and define
$$p_{U^n,X^n,Y^n}\triangleq p_{X^n}\,p_{U^n|X^n}\prod_{k=1}^np_{Y_k|X_k}.\tag{13}$$
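To illustrate (10) and (11), the sketch below builds $G_n$ by repeated Kronecker products over GF(2), verifies the self-inverse property stated in (10), and applies the transform to a random binary block.

```python
import numpy as np

def polar_transform_matrix(m):
    """G_n = [[1,0],[1,1]]^{Kronecker m} over GF(2), where n = 2**m; cf. (10)."""
    G = np.array([[1]], dtype=np.uint8)
    kernel = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, kernel) % 2
    return G

m = 3
n = 2 ** m
G = polar_transform_matrix(m)
assert np.array_equal((G @ G) % 2, np.eye(n, dtype=np.uint8))  # G_n is its own inverse over GF(2)
x = np.random.default_rng(1).integers(0, 2, n, dtype=np.uint8)
u = (x @ G) % 2  # [U_1 ... U_n] = [X_1 ... X_n] G_n; cf. (11)
```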
In addition, for each k { 1 , 2 , , n } , define the Bhattacharyya parameter associated with time k as
Z [ p X ; q Y | X ] ( U k | U k 1 , Y n ) ( 14 ) 2 u k 1 U k 1 Y n p U k 1 , Y n ( u k 1 , y n ) p U k | U k 1 , Y n ( 0 | u k 1 , y n ) p U k | U k 1 , Y n ( 1 | u k 1 , y n ) d y n ( 15 ) = 2 u k 1 U k 1 Y n p U k , U k 1 , Y n ( 0 , u k 1 , y n ) p U k , U k 1 , Y n ( 1 , u k 1 , y n ) d y n
where the distributions in (14) and (15) are marginal distributions of p U n , X n , Y n defined in (13). To simplify notation, let
β 4.714
in the rest of this paper. The following lemma is based on Section III of [4] and it is a restatement of Lemma 2 of [7], which has been used in [7] to show that 4.714 is an upper bound on the scaling exponent for any BMC (including non-symmetric ones).
Lemma 1.
([7], Lemma 2) There exists a universal constant $t>0$ such that the following holds. Fix any BMC characterized by $q_{Y|X}$ and any $p_X$. Then for any $m\in\mathbb N$ and $n\triangleq2^m$, we have
$$\frac1n\left|\left\{k\in\{1,2,\ldots,n\}\,\middle|\,Z[p_X;q_{Y|X}](U_k|U^{k-1},Y^n)\le\frac1{n^4}\right\}\right|\ge I_{p_Xq_{Y|X}}(X;Y)-\frac t{n^{1/\beta}}.\tag{17}$$
This lemma continues to hold if the quantity $\frac1{n^4}$ is replaced by $\frac1{n^\nu}$ for any $\nu>0$. The main result of this paper continues to hold if the quantity $\frac1{n^4}$ in this lemma is replaced by $\frac1{n^\nu}$ for any $\nu>2$.
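The Bhattacharyya parameters in (14) and (15) are hard to compute for a general BMC, but for the binary erasure channel they obey Arıkan's exact single-step recursion $Z^-=2Z-Z^2$ and $Z^+=Z^2$ [3]. The sketch below (a numerical illustration, not part of the proof of Lemma 1) uses this recursion to estimate the fraction of indices whose Bhattacharyya parameter is at most $1/n^4$, which is the quantity bounded in (17).

```python
import numpy as np

def bec_bhattacharyya(eps, m):
    """Bhattacharyya parameters of the n = 2**m polarized channels of a BEC(eps)."""
    Z = np.array([eps])
    for _ in range(m):
        Z = np.concatenate([2 * Z - Z ** 2, Z ** 2])  # minus/plus polarization step
    return Z

eps, m = 0.5, 16
n = 2 ** m
Z = bec_bhattacharyya(eps, m)
good = np.mean(Z <= 1.0 / n ** 4)
print(good, 1 - eps)  # the fraction of good indices approaches the capacity 1 - eps as m grows
```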

3. Problem Formulation of Binary-Input MACs and New Polarization Results

Polar codes have been proposed and investigated for achieving any rate tuple inside the capacity region of a binary-input multiple access channel (MAC) [14,16]. The goal of this section is to use the polar codes proposed in [14] to achieve the symmetric sum-capacity of a binary-input MAC.

3.1. Binary-Input Multiple Access Channels

Consider a MAC ([1], Section 15.3) which consists of $N$ sources and one destination. Let $\mathcal I\triangleq\{1,2,\ldots,N\}$ be the index set of the $N$ sources and let $\mathrm d$ denote the destination. Suppose the sources transmit information to node $\mathrm d$ in $n$ time slots. Before any transmission begins, node $i$ chooses a message $W_i$ destined for node $\mathrm d$ for each $i\in\mathcal I$, where $W_i$ is uniformly distributed over
$$\mathcal W_i\triangleq\{1,2,\ldots,M_i\}\tag{18}$$
which consists of $M_i$ elements. For each $k\in\{1,2,\ldots,n\}$, node $i$ transmits $X_{i,k}\in\mathcal X_i$ based on $W_i$ for each $i\in\mathcal I$, and node $\mathrm d$ receives $Y_k\in\mathcal Y$ in time slot $k$, where $\mathcal X_i$ denotes the input alphabet for node $i$ and $\mathcal Y$ denotes the output alphabet. After $n$ time slots, node $\mathrm d$ declares $\hat W_i$ to be the transmitted $W_i$ based on $Y^n$ for each $i\in\mathcal I$.
To simplify notation, we use the following convention for any $T\subseteq\mathcal I$. For any random tuple $(X_1,X_2,\ldots,X_N)$, we let $X_T\triangleq(X_i:i\in T)$ be the corresponding subtuple, whose realization and alphabet are denoted by $x_T$ and $\mathcal X_T$ respectively. Similarly, for each $k\in\{1,2,\ldots,n\}$ and each random tuple $(X_{1,k},X_{2,k},\ldots,X_{N,k})\in\mathcal X_\mathcal I$, we let $X_{T,k}\triangleq(X_{i,k}:i\in T)$ denote the corresponding random subtuple, and let $x_{T,k}$ and $\mathcal X_{T,k}$ denote respectively the realization and the alphabet of $X_{T,k}$. Formally, we define a length-$n$ code for the binary-input MAC as follows.
Definition 4.
An $(n,M_\mathcal I)$ code, where $M_\mathcal I\triangleq(M_1,M_2,\ldots,M_N)$, consists of the following:
  • A message set $\mathcal W_i$ for each $i\in\mathcal I$ as defined in (18), where message $W_i$ is uniform on $\mathcal W_i$.
  • An encoding function $f_{i,k}^{\mathrm{MAC}}:\mathcal W_i\to\mathcal X_i$ for each $i\in\mathcal I$ and each $k\in\{1,2,\ldots,n\}$, where $f_{i,k}^{\mathrm{MAC}}$ is used by node $i$ for encoding $X_{i,k}$ such that $X_{i,k}=f_{i,k}^{\mathrm{MAC}}(W_i)$.
  • A decoding function $\varphi^{\mathrm{MAC}}:\mathcal Y^n\to\mathcal W_\mathcal I$ used by node $\mathrm d$ for producing the message estimate $\hat W_\mathcal I=\varphi^{\mathrm{MAC}}(Y^n)$.
Definition 5.
The multiple access channel (MAC) is characterized by $N$ input alphabets specified by $\mathcal X_\mathcal I$, an output alphabet specified by $\mathcal Y$ and a conditional distribution $q_{Y|X_\mathcal I}$ such that the following holds for any $(n,M_\mathcal I)$ code: For each $k\in\{1,2,\ldots,n\}$,
$$p_{W_\mathcal I,X_\mathcal I^n,Y^n}=\left(\prod_{i\in\mathcal I}p_{W_i,X_i^n}\right)\prod_{k=1}^np_{Y_k|X_{\mathcal I,k}}\tag{19}$$
where $p_{Y_k|X_{\mathcal I,k}}(y_k|x_{\mathcal I,k})=q_{Y|X_\mathcal I}(y_k|x_{\mathcal I,k})$ for all $x_{\mathcal I,k}\in\mathcal X_\mathcal I$ and $y_k\in\mathcal Y$.

3.2. Polarization for Binary-Input MACs

Definition 6.
A MAC characterized by $q_{Y|X_\mathcal I}$ is called a binary-input MAC if $\mathcal X_\mathcal I=\{0,1\}^N$.
Consider any binary-input MAC characterized by $q_{Y|X_\mathcal I}$. For each $i\in\mathcal I$, let $p_{X_i}$ be the probability distribution of a Bernoulli random variable $X_i$, and let $p_{X_i^n}$ be the distribution of $n$ independent copies of $X_i\sim p_{X_i}$, i.e., $p_{X_i^n}(x_i^n)=\prod_{k=1}^np_{X_i}(x_{i,k})$ for all $x_i^n\in\mathcal X_i^n$. Recall the polarization mapping $G_n$ defined in (10). For each $i\in\mathcal I$, define $p_{U_i^n|X_i^n}$ such that
$$[U_{i,1}\ U_{i,2}\ \cdots\ U_{i,n}]=[X_{i,1}\ X_{i,2}\ \cdots\ X_{i,n}]\,G_n\tag{20}$$
where the addition and product operations are performed over GF(2), and define
$$p_{U_\mathcal I^n,X_\mathcal I^n,Y^n}\triangleq\left(\prod_{i\in\mathcal I}p_{X_i^n}\,p_{U_i^n|X_i^n}\right)\prod_{k=1}^np_{Y_k|X_{\mathcal I,k}}.\tag{21}$$
In addition, for each $i\in\mathcal I$ and each $k\in\{1,2,\ldots,n\}$, define $[i-1]\triangleq\{1,2,\ldots,i-1\}$ and define the Bhattacharyya parameter associated with node $i$ and time $k$ as
$$Z[p_{X_\mathcal I};q_{Y|X_\mathcal I}](U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n)\triangleq2\sum_{u_i^{k-1}\in\mathcal U_i^{k-1}}\ \sum_{x_{[i-1]}^n\in\{0,1\}^{(i-1)n}}\int_{\mathcal Y^n}\sqrt{p_{U_{i,k},U_i^{k-1},X_{[i-1]}^n,Y^n}(0,u_i^{k-1},x_{[i-1]}^n,y^n)\,p_{U_{i,k},U_i^{k-1},X_{[i-1]}^n,Y^n}(1,u_i^{k-1},x_{[i-1]}^n,y^n)}\,\mathrm dy^n\tag{22}$$
where the distributions in (22) are marginal distributions of $p_{U_\mathcal I^n,X_\mathcal I^n,Y^n}$ defined in (21). The following lemma is a direct consequence of Lemma 1.
Lemma 2.
There exists a universal constant $t>0$ such that the following holds. Fix any binary-input MAC characterized by $q_{Y|X_\mathcal I}$ and any $p_{X_\mathcal I}$. Then for any $m\in\mathbb N$ and $n\triangleq2^m$, we have
$$\frac1n\left|\left\{k\in\{1,2,\ldots,n\}\,\middle|\,Z[p_{X_\mathcal I};q_{Y|X_\mathcal I}](U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n)\le\frac1{n^4}\right\}\right|\ge I_{p_{X_\mathcal I}q_{Y|X_\mathcal I}}(X_i;X_{[i-1]},Y)-\frac t{n^{1/\beta}}\tag{23}$$
for each $i\in\mathcal I$. This lemma continues to hold if the quantity $\frac1{n^4}$ is replaced by $\frac1{n^\nu}$ for any $\nu>0$. The main result of this paper continues to hold if the quantity $\frac1{n^4}$ in this lemma is replaced by $\frac1{n^\nu}$ for any $\nu>2$.
Proof. 
Fix any $i\in\mathcal I$. Construct $p_{X_{[i-1]},Y|X_i}$ by marginalizing $p_{X_\mathcal I}q_{Y|X_\mathcal I}$, and view $p_{X_{[i-1]},Y|X_i}$ as the conditional distribution that characterizes a BMC with input $X_i$ and output $(X_{[i-1]},Y)$. The lemma then follows directly from Lemma 1. ☐

3.3. Polar Codes That Achieve the Symmetric Sum-Capacity of a Binary-Input MAC

Throughout this paper, let $p_{X_i}^*$ denote the uniform distribution on $\{0,1\}$ for each $i\in\mathcal I$ and define $p_{X_\mathcal I}^*\triangleq\prod_{i\in\mathcal I}p_{X_i}^*$, i.e.,
$$p_{X_\mathcal I}^*(x_\mathcal I)=\frac1{2^N}\tag{24}$$
for any $x_\mathcal I\in\{0,1\}^N$.
Definition 7.
For a binary-input MAC characterized by $q_{Y|X_\mathcal I}$, the symmetric sum-capacity is defined to be $C_{\mathrm{sum}}\triangleq I_{p_{X_\mathcal I}^*q_{Y|X_\mathcal I}}(X_\mathcal I;Y)$.
The following definition summarizes the polar codes for the binary-input MAC proposed in Section IV of [14].
Definition 8.
([14], Section IV) Fix an $n=2^m$ where $m\in\mathbb N$. For each $i\in\mathcal I$, let $\mathcal J_i$ be a subset of $\{1,2,\ldots,n\}$, define $\mathcal J_i^c\triangleq\{1,2,\ldots,n\}\setminus\mathcal J_i$, and let $b_{i,\mathcal J_i^c}\triangleq(b_{i,k}\in\{0,1\}:k\in\mathcal J_i^c)$ be a binary tuple. An $(n,\mathcal J_\mathcal I,b_{\mathcal I,\mathcal J_\mathcal I^c})$ polar code, where $\mathcal J_\mathcal I\triangleq(\mathcal J_i:i\in\mathcal I)$ and $b_{\mathcal I,\mathcal J_\mathcal I^c}\triangleq(b_{i,\mathcal J_i^c}:i\in\mathcal I)$, consists of the following:
  • An index set for information bits transmitted by node $i$, denoted by $\mathcal J_i$, for each $i\in\mathcal I$. The set $\mathcal J_i^c$ is referred to as the index set for frozen bits transmitted by node $i$.
  • A message set $\mathcal W_i\triangleq\{1,2,\ldots,2^{|\mathcal J_i|}\}$ for each $i\in\mathcal I$, where $W_i$ is uniform on $\mathcal W_i$.
  • An encoding bijection $f_i^{\mathrm{MAC}}:\mathcal W_i\to\mathcal U_{i,\mathcal J_i}$ for encoding $W_i$ into $|\mathcal J_i|$ information bits denoted by $U_{i,\mathcal J_i}$ for each $i\in\mathcal I$ such that
$$U_{i,\mathcal J_i}=f_i^{\mathrm{MAC}}(W_i)\tag{25}$$
where $\mathcal U_{i,\mathcal J_i}$ and $U_{i,\mathcal J_i}$ are defined as $\mathcal U_{i,\mathcal J_i}\triangleq\prod_{k\in\mathcal J_i}\mathcal U_k$ and $U_{i,\mathcal J_i}\triangleq(U_{i,k}:k\in\mathcal J_i)$ respectively. Since message $W_i$ is uniform on $\mathcal W_i$, $f_i^{\mathrm{MAC}}(W_i)$ is a sequence of independent and identically distributed (i.i.d.) uniform bits such that
$$P\{U_{i,\mathcal J_i}=u_{i,\mathcal J_i}\}=\frac1{2^{|\mathcal J_i|}}\tag{26}$$
for all $u_{i,\mathcal J_i}\in\{0,1\}^{|\mathcal J_i|}$, where the bits are transmitted through the polarized channels indexed by $\mathcal J_i$. For each $i\in\mathcal I$ and each $k\in\mathcal J_i^c$, let
$$U_{i,k}=b_{i,k}\tag{27}$$
be the frozen bit to be transmitted by node $i$ in time slot $k$. After $U_i^n$ has been determined, node $i$ transmits $X_i^n$, where
$$[X_{i,1}\ X_{i,2}\ \cdots\ X_{i,n}]\triangleq[U_{i,1}\ U_{i,2}\ \cdots\ U_{i,n}]\,G_n^{-1}\tag{28}$$
(a minimal sketch of this encoding procedure is given after this definition).
  • A sequence of successive cancellation decoding functions $\varphi_{i,k}^{\mathrm{MAC}}:\{0,1\}^{k-1}\times\{0,1\}^{(i-1)n}\times\mathcal Y^n\to\{0,1\}$ for each $i\in\mathcal I$ and each $k\in\{1,2,\ldots,n\}$ such that the recursively generated $(\hat U_{1,1},\ldots,\hat U_{1,n}),(\hat U_{2,1},\ldots,\hat U_{2,n}),\ldots,(\hat U_{N,1},\ldots,\hat U_{N,n})$ and $(\hat X_{1,1},\ldots,\hat X_{1,n}),(\hat X_{2,1},\ldots,\hat X_{2,n}),\ldots,(\hat X_{N,1},\ldots,\hat X_{N,n})$ are produced as follows. For each $i\in\mathcal I$ and each $k=1,2,\ldots,n$, given that $\hat U_i^{k-1}$, $\hat U_{[i-1]}^n$ and $\hat X_{[i-1]}^n$ have been constructed before the construction of $\hat U_{i,k}$, node $\mathrm d$ constructs the estimate of $U_{i,k}$ through computing
$$\hat U_{i,k}\triangleq\varphi_{i,k}^{\mathrm{MAC}}(\hat U_i^{k-1},\hat X_{[i-1]}^n,Y^n)\tag{29}$$
where
$$\hat u_{i,k}\triangleq\varphi_{i,k}^{\mathrm{MAC}}(\hat u_i^{k-1},\hat x_{[i-1]}^n,y^n)=\begin{cases}0&\text{if }k\in\mathcal J_i\text{ and }p_{U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n}(0|\hat u_i^{k-1},\hat x_{[i-1]}^n,y^n)\ge p_{U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n}(1|\hat u_i^{k-1},\hat x_{[i-1]}^n,y^n),\\1&\text{if }k\in\mathcal J_i\text{ and }p_{U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n}(0|\hat u_i^{k-1},\hat x_{[i-1]}^n,y^n)<p_{U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n}(1|\hat u_i^{k-1},\hat x_{[i-1]}^n,y^n),\\b_{i,k}&\text{if }k\in\mathcal J_i^c.\end{cases}\tag{30}$$
After obtaining $\hat U_i^n$, node $\mathrm d$ constructs the estimate of $X_i^n$ through computing
$$[\hat X_{i,1}\ \hat X_{i,2}\ \cdots\ \hat X_{i,n}]\triangleq[\hat U_{i,1}\ \hat U_{i,2}\ \cdots\ \hat U_{i,n}]\,G_n^{-1}\tag{31}$$
and declares that
$$\hat W_i\triangleq\left(f_i^{\mathrm{MAC}}\right)^{-1}(\hat U_{i,\mathcal J_i})\tag{32}$$
is the transmitted $W_i$, where $\left(f_i^{\mathrm{MAC}}\right)^{-1}$ denotes the inverse function of $f_i^{\mathrm{MAC}}$.
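As promised in Definition 8, here is a minimal sketch of its encoding side: each of the $N$ sources runs an ordinary binary polar encoder with the same transform $G_n$ (the helper `polar_transform_matrix` is the one sketched after (13); all other names are illustrative).

```python
import numpy as np

def polar_encode(u, G):
    """Map the bit block u to the codeword x = u @ G_n^{-1} = u @ G_n over GF(2); cf. (28)."""
    return (u @ G) % 2

def mac_superposition_encode(messages_bits, frozen_bits, info_sets, G):
    """Encoding side of Definition 8 (a sketch): every source i places its
    information bits on J_i, its frozen bits on the complement, and encodes."""
    n = G.shape[0]
    codewords = []
    for bits, frozen, J in zip(messages_bits, frozen_bits, info_sets):
        u = np.zeros(n, dtype=np.uint8)
        u[sorted(J)] = bits                            # information bits U_{i,J_i}; cf. (25)
        mask = np.ones(n, dtype=bool)
        mask[sorted(J)] = False
        u[mask] = frozen                               # frozen bits U_{i,k} = b_{i,k}; cf. (27)
        codewords.append(polar_encode(u, G))           # X_i^n = U_i^n G_n^{-1}; cf. (28)
    return codewords
```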
Remark 1.
By inspecting Definitions 4 and 8, we see that every $(n,\mathcal J_\mathcal I,b_{\mathcal I,\mathcal J_\mathcal I^c})$ polar code is also an $\left(n,\left(2^{|\mathcal J_1|},2^{|\mathcal J_2|},\ldots,2^{|\mathcal J_N|}\right)\right)$ code.
Definition 9.
The uniform-input $(n,\mathcal J_\mathcal I)$ polar code is defined as an $(n,\mathcal J_\mathcal I,B_{\mathcal I,\mathcal J_\mathcal I^c})$ polar code where $B_{\mathcal I,\mathcal J_\mathcal I^c}$ consists of i.i.d. uniform bits that are independent of the message $W_\mathcal I$.
Definition 10.
For the uniform-input $(n,\mathcal J_\mathcal I)$ polar code defined for the MAC, the probability of decoding error is defined as
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}=P\left\{\bigcup_{i\in\mathcal I}\left\{U_{i,\mathcal J_i}\ne\hat U_{i,\mathcal J_i}\right\}\right\}$$
where the error is averaged over the random messages and the frozen bits. The code is also called a uniform-input $(n,\mathcal J_\mathcal I,\varepsilon)$ polar code if the probability of decoding error is no larger than $\varepsilon$.
The following proposition bounds the error probability in terms of Bhattacharyya parameters; it is a generalization of the well-known result for the special case $N=1$ (e.g., see [3], Proposition 2). The proof of Proposition 1 can be deduced from Section IV of [14], and is contained in Appendix A for completeness.
Proposition 1.
For the uniform-input $(n,\mathcal J_\mathcal I)$ polar code defined for the MAC $q_{Y|X_\mathcal I}$, we have
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}\le\sum_{i=1}^N\sum_{k\in\mathcal J_i}Z[p_{X_\mathcal I};q_{Y|X_\mathcal I}](U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n).\tag{33}$$
The following proposition follows from combining Lemma 2, Definition 7 and Proposition 1.
Proposition 2.
There exists a universal constant $t>0$ such that the following holds. Fix any $N$-source binary-input MAC characterized by $q_{Y|X_\mathcal I}$. Fix any $m\in\mathbb N$, let $n=2^m$ and define
$$\mathcal J_i^{\mathrm{SE}}\triangleq\left\{k\in\{1,2,\ldots,n\}\,\middle|\,Z[p_{X_\mathcal I}^*;q_{Y|X_\mathcal I}](U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n)\le\frac1{n^4}\right\}\tag{34}$$
for each $i\in\mathcal I$, where $p_{X_\mathcal I}^*$ is the uniform distribution as defined in (24) and the superscript "SE" stands for "scaling exponent". Then, the corresponding uniform-input $(n,\mathcal J_\mathcal I^{\mathrm{SE}})$ polar code satisfies
$$\sum_{i=1}^N\frac{|\mathcal J_i^{\mathrm{SE}}|}n\ge C_{\mathrm{sum}}-\frac{tN}{n^{1/\beta}}\tag{35}$$
and
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}\le\frac N{n^3}.\tag{36}$$
Proof. 
Let $t>0$ be the universal constant specified in Lemma 2 and fix an $n$. For each $i\in\mathcal I$, it follows from Lemma 2 and Proposition 1 that
$$\frac{|\mathcal J_i^{\mathrm{SE}}|}n\ge I_{p_{X_\mathcal I}^*q_{Y|X_\mathcal I}}(X_i;X_{[i-1]},Y)-\frac t{n^{1/\beta}}\tag{37}$$
and
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}\le\frac N{n^3}\tag{38}$$
for the uniform-input $(n,\mathcal J_\mathcal I^{\mathrm{SE}})$ polar code. Since $p_{X_\mathcal I}^*=\prod_{i=1}^Np_{X_i}^*$, it follows that
$$I_{p_{X_\mathcal I}^*q_{Y|X_\mathcal I}}(X_i;X_{[i-1]},Y)=I_{p_{X_\mathcal I}^*q_{Y|X_\mathcal I}}(X_i;Y|X_{[i-1]})\tag{39}$$
holds for each $i\in\mathcal I$, which implies by the chain rule that
$$\sum_{i=1}^NI_{p_{X_\mathcal I}^*q_{Y|X_\mathcal I}}(X_i;X_{[i-1]},Y)=I_{p_{X_\mathcal I}^*q_{Y|X_\mathcal I}}(X_\mathcal I;Y).\tag{40}$$
Consequently, (35) follows from (37), (40) and Definition 7, and (36) follows from (38). ☐
Remark 2.
Proposition 2 shows that the sum-capacity of a binary-input MAC with $N$ sources can be achieved within a gap of $O(Nn^{-1/\beta})$ by using a superposition of $N$ binary-input polar codes.

4. Problem Formulation of the AWGN Channel and New Polarization Results

4.1. The AWGN Channel

It is well known that appropriately designed polar codes are capacity-achieving for the AWGN channel [13]. The main contribution of this paper is proving an upper bound on the scaling exponent of polar codes for the AWGN channel by using uniform-input polar codes for binary-input MACs described in Definition 8. The following two definitions formally define the AWGN channel and length-n codes for the channel.
Definition 11.
An $(n,M,P)$ code is an $(n,M)$ code described in Definition 1 subject to the additional assumptions that $\mathcal X=\mathbb R$ and the peak power constraint
$$P\left\{\frac1n\sum_{k=1}^nX_k^2\le P\right\}=1\tag{41}$$
is satisfied.
Definition 12.
The AWGN channel is a point-to-point memoryless channel described in Definition 2 subject to the additional assumption that $\mathcal Y=\mathbb R$ and $q_{Y|X}(y|x)=\mathcal N(y;x,1)$ for all $x\in\mathbb R$ and $y\in\mathbb R$.
Definition 13.
For an $(n,M,P)$ code defined on the AWGN channel, we can calculate according to (9) the average probability of error defined as $P\{\hat W\ne W\}$. We call an $(n,M,P)$ code with average probability of error no larger than $\varepsilon$ an $(n,M,P,\varepsilon)$ code.

4.2. Uniform-Input Polar Codes for the AWGN Channel

Recall that we would like to use the uniform-input polar codes for binary-input MACs described in Definition 8 to achieve the capacity of the AWGN channel, i.e., $C(P)$ in (3). The following definition describes the basic structure of such uniform-input polar codes.
Definition 14.
Fix an $n=2^m$ where $m\in\mathbb N$. An $(n,\mathcal J_\mathcal I,P,\mathcal A)_{\mathrm{avg}}$ polar code with average power $P$ and input alphabet $\mathcal A$ consists of the following:
  • An input alphabet $\mathcal A\subseteq\mathbb R\cup\{0^-\}$ with $|\mathcal A|=n$ such that
$$\frac1n\sum_{a\in\mathcal A\setminus\{0^-\}}a^2\le P,\tag{42}$$
where $\mathbb R\cup\{0^-\}$ can be viewed as a line with 2 origins. Introducing the symbol $0^-$ allows us to create a set of cardinality $n$ which consists of $n-2$ non-zero real numbers and 2 origins $0$ and $0^-$. We index each element of $\mathcal A$ by a unique length-$m$ binary tuple
$$x_\mathcal I^{\mathrm{MAC}}\triangleq(x_1^{\mathrm{MAC}},x_2^{\mathrm{MAC}},\ldots,x_m^{\mathrm{MAC}}),\tag{43}$$
and let $\rho:\{0,1\}^m\to\mathcal A$ be the bijection that maps the indices to the elements of $\mathcal A$ such that $\rho(x_\mathcal I^{\mathrm{MAC}})$ denotes the element in $\mathcal A$ indexed by $x_\mathcal I^{\mathrm{MAC}}$.
  • A binary-input MAC $q_{Y|X_\mathcal I^{\mathrm{MAC}}}$ induced by $\mathcal A$ as defined through Definitions 5 and 6 with the identifications $N=m$ and $\mathcal I=\{1,2,\ldots,m\}$.
  • A message set $\mathcal W_i\triangleq\{1,2,\ldots,2^{|\mathcal J_i|}\}$ for each $i\in\{1,2,\ldots,m\}$, where $\mathcal W_\mathcal I$ is the message alphabet of the uniform-input $(n,\mathcal J_\mathcal I)$ polar code for the binary-input MAC $q_{Y|X_\mathcal I^{\mathrm{MAC}}}$ as defined through Definitions 8 and 9 such that
$$|\mathcal W_\mathcal I|=\prod_{i=1}^m|\mathcal W_i|=2^{\sum_{i=1}^m|\mathcal J_i|}.\tag{44}$$
In addition, $W_\mathcal I$ is uniform on $\mathcal W_\mathcal I$. We view the uniform-input $(n,\mathcal J_\mathcal I)$ polar code as an $\left(n,\left(2^{|\mathcal J_1|},2^{|\mathcal J_2|},\ldots,2^{|\mathcal J_m|}\right)\right)$ code (cf. Remark 1) and let $\{f_{i,k}^{\mathrm{MAC}}\,|\,i\in\mathcal I,\,k\in\{1,2,\ldots,n\}\}$ and $\varphi^{\mathrm{MAC}}$ denote the corresponding set of encoding functions and the decoding function respectively (cf. Definition 4).
  • An encoding function $f_k:\mathcal W_\mathcal I\to\mathcal A$ defined as
$$f_k(W_\mathcal I)\triangleq\rho\big(f_{1,k}^{\mathrm{MAC}}(W_1),f_{2,k}^{\mathrm{MAC}}(W_2),\ldots,f_{m,k}^{\mathrm{MAC}}(W_m)\big)\tag{45}$$
for each $k\in\{1,2,\ldots,n\}$, where $f_k$ is used for encoding $W_\mathcal I$ into $X_k$ such that
$$X_k=\begin{cases}f_k(W_\mathcal I)&\text{if }f_k(W_\mathcal I)\ne0^-,\\0&\text{if }f_k(W_\mathcal I)=0^-.\end{cases}\tag{46}$$
Note that both the encoded symbols $0$ and $0^-$ in $\mathcal A$ result in the same transmitted symbol $0\in\mathbb R$ according to (46). By construction, $f_1(W_\mathcal I),f_2(W_\mathcal I),\ldots,f_n(W_\mathcal I)$ are i.i.d. random variables that are uniformly distributed on $\mathcal A$, and hence $X_1,X_2,\ldots,X_n$ are i.i.d. real-valued random variables (but not necessarily uniform).
  • A decoding function $\varphi:\mathbb R^n\to\mathcal W_\mathcal I$ defined as
$$\varphi\triangleq\varphi^{\mathrm{MAC}}\tag{47}$$
such that
$$\hat W_\mathcal I=\varphi(Y^n).\tag{48}$$
Remark 3.
For an $(n,\mathcal J_\mathcal I,P,\mathcal A)_{\mathrm{avg}}$ polar code, the flexibility of allowing $\mathcal A$ to contain 2 origins is crucial to proving the main result of this paper. This is because the input distribution which we will use to establish scaling results for the AWGN channel in Theorem 1 can be viewed as the uniform distribution over some set that contains 2 origins, although the input distribution in the real domain, as specified in (52) to follow, is not uniform.
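To illustrate the labeling in Definition 14, the sketch below builds a toy 2-origin alphabet and the bijection $\rho$ of (43); the constellation points here are placeholders, not the quantile construction of Lemma 3 below.

```python
import numpy as np

m = 4
n = 2 ** m

# An illustrative 2-origin alphabet: n-2 nonzero points plus the origins 0 and 0^-.
nonzero = np.linspace(-1.5, 1.5, n - 1)       # odd count, so it contains 0 exactly once
alphabet = np.concatenate([nonzero, [0.0]])   # append the second origin 0^-
assert alphabet.size == n

def rho(bits):
    """Bijection from a length-m binary tuple to an element of A; cf. (43)."""
    index = int("".join(map(str, bits)), 2)
    return alphabet[index]

# One channel use: bit i comes from the i-th constituent polar code; cf. (45) and (46).
x_mac = [1, 0, 1, 1]
print(rho(x_mac))  # the transmitted real symbol (both origins are sent as 0)
```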
Proposition 3.
There exists a universal constant $t>0$ such that the following holds. Suppose we are given an $(n,\mathcal J_\mathcal I^{\mathrm{SE}},P,\mathcal A)_{\mathrm{avg}}$ polar code defined for the AWGN channel $q_{Y|X}$ with a 2-origin $\mathcal A$ (i.e., $\mathcal A\supseteq\{0,0^-\}$). Define $\mathcal X\triangleq\mathcal A\setminus\{0^-\}\subset\mathbb R$, where $\mathcal X$ contains 1 origin and $n-2$ non-zero real numbers. Then, the $(n,\mathcal J_\mathcal I^{\mathrm{SE}},P,\mathcal A)_{\mathrm{avg}}$ polar code is an $(n,M)$ code (cf. Definition 1) which satisfies
$$\frac1n\log M\ge I_{p_Xq_{Y|X}}(X;Y)-\frac{t\log n}{n^{1/\beta}},\tag{49}$$
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}\le\frac{\log n}{n^3},\tag{50}$$
and
$$P\{X^n=x^n\}=\prod_{k=1}^nP\{X_k=x_k\}=\prod_{k=1}^np_X(x_k)\tag{51}$$
for all $x^n\in\mathcal X^n$, where $p_X$ is the distribution on $\mathcal X$ defined as
$$p_X(a)\triangleq\begin{cases}\frac1n&\text{if }a\ne0,\\\frac2n&\text{if }a=0.\end{cases}\tag{52}$$
Proof. 
The proposition follows from inspecting Proposition 2 and Definition 14 with the identifications $N=\log n$ and $\log M=\sum_{i=1}^m|\mathcal J_i^{\mathrm{SE}}|$. ☐
The following lemma, a strengthened version of Theorem 6 of [13], provides a construction of a good $\mathcal A$ which leads to a controllable gap between $C(P)$ and $I_{p_Xq_{Y|X}}(X;Y)$ for the corresponding $(n,\mathcal J_\mathcal I^{\mathrm{SE}},P,\mathcal A)_{\mathrm{avg}}$ polar code. Although the following lemma is intuitive, the proof is technical and hence relegated to Appendix B.
Lemma 3.
Let $q_{Y|X}$ be the conditional distribution that characterizes the AWGN channel, and fix any $\gamma\in[0,1)$. For each $n=2^m$ where $m\in\mathbb N$, define
$$\mathcal D_n\triangleq\left\{\frac1n,\frac2n,\ldots,\frac{n-2}n,\frac{n-1}n\right\},\tag{53}$$
define
$$s_X(x)\triangleq\mathcal N\!\left(x;0,P\left(1-\frac1{n^{(1-\gamma)/\beta}}\right)\right)\tag{54}$$
for all $x\in\mathbb R$, define $\Phi_X$ to be the cumulative distribution function (cdf) of $s_X$, and define
$$\mathcal X\triangleq\Phi_X^{-1}(\mathcal D_n).\tag{55}$$
Note that $\mathcal X$ contains 1 origin and $n-2$ non-zero real numbers, and we let $p_X$ be the distribution on $\mathcal X$ as defined in (52). In addition, define the distribution $p_{X^n}(x^n)\triangleq\prod_{k=1}^np_X(x_k)$. Then, there exists a constant $t>0$ that depends on $P$ and $\gamma$ but not $n$ such that the following statements hold for each $n\in\mathbb N$:
$$\mathbb E_{p_X}[X^2]\le P\left(1-\frac1{n^{(1-\gamma)/\beta}}\right),\tag{56}$$
$$P_{p_{X^n}}\left\{\frac1n\sum_{k=1}^nX_k^2>P\right\}\le e^3e^{-n^{\frac12-\frac{1-\gamma}\beta}},\tag{57}$$
and
$$C(P)-I_{p_Xq_{Y|X}}(X;Y)\le\frac{t\sqrt{\log n}}{n^{(1-\gamma)/\beta}}.\tag{58}$$
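A minimal sketch of the constellation construction in Lemma 3, using only the Python standard library (the parameter values are illustrative): the constellation consists of the $\frac{\ell}{n}$-quantiles of $s_X$, and $p_X$ places mass $\frac1n$ on each nonzero point and $\frac2n$ on the origin, as in (52).

```python
from statistics import NormalDist

def lemma3_constellation(n, P, gamma, beta=4.714):
    """Quantile constellation of Lemma 3: X = Phi_X^{-1}(D_n); cf. (53)-(55)."""
    var = P * (1.0 - n ** (-(1.0 - gamma) / beta))   # variance of s_X; cf. (54)
    phi_inv = NormalDist(mu=0.0, sigma=var ** 0.5).inv_cdf
    points = [phi_inv(l / n) for l in range(1, n)]   # n-1 points; for even n one is the origin
    p_X = {x: (2.0 / n if x == 0.0 else 1.0 / n) for x in points}  # cf. (52)
    return points, p_X

points, p_X = lemma3_constellation(n=256, P=2.0, gamma=0.0)
avg_power = sum(p * x ** 2 for x, p in p_X.items())
print(avg_power <= 2.0)  # consistent with (56): E[X^2] <= P(1 - n^{-(1-gamma)/beta})
```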
A shortcoming of Proposition 3 is that the $(n,\mathcal J_\mathcal I^{\mathrm{SE}},P,\mathcal A)_{\mathrm{avg}}$ polar code may not satisfy the peak power constraint (41), and hence it may not qualify as an $\left(n,2^{\sum_{i=1}^N|\mathcal J_i^{\mathrm{SE}}|},P\right)$ code (cf. Definition 11). Therefore, we describe in the following definition a slight modification of an $(n,\mathcal J_\mathcal I^{\mathrm{SE}},P,\mathcal A)_{\mathrm{avg}}$ polar code so that the modified polar code always satisfies the peak power constraint (41).
Definition 15.
The 0-power-outage version of an $(n,\mathcal J_\mathcal I,P,\mathcal A)_{\mathrm{avg}}$ polar code is an $\left(n,2^{\sum_{i=1}^N|\mathcal J_i|}\right)$ code which follows the encoding and decoding operations of the $(n,\mathcal J_\mathcal I,P,\mathcal A)_{\mathrm{avg}}$ polar code exactly, except that the source modifies the input symbol in time slot $k$ if the following scenario occurs: Let $X^n$ be the desired codeword generated by the source according to the encoding operation of the $(n,\mathcal J_\mathcal I,P,\mathcal A)_{\mathrm{avg}}$ polar code, where the randomness of $X^n$ originates from the information bits $(U_{i,\mathcal J_i}:i\in\mathcal I)$ and the frozen bits $B_{\mathcal I,\mathcal J_\mathcal I^c}$. If transmitting the desired symbol $X_k$ at time $k$ would result in violating the power constraint, i.e., $\frac1n\sum_{\ell=1}^kX_\ell^2>P$, the source transmits the symbol 0 at time $k$ instead. An $\left(n,2^{\sum_{i=1}^N|\mathcal J_i|},\varepsilon\right)$ code is called an $(n,\mathcal J_\mathcal I,P,\mathcal A,\varepsilon)_{\mathrm{peak}}$ polar code if it is the 0-power-outage version of some $(n,\mathcal J_\mathcal I,P,\mathcal A)_{\mathrm{avg}}$ polar code.
By Definition 15, every $(n,\mathcal J_\mathcal I,P,\mathcal A,\varepsilon)_{\mathrm{peak}}$ polar code satisfies the peak power constraint (41) and hence achieves zero power outage, i.e., $P\left\{\frac1n\sum_{k=1}^nX_k^2>P\right\}=0$. Using Definition 15, we obtain the following corollary, which states the simple fact that the probability of power outage of an $(n,\mathcal J_\mathcal I,P,\mathcal A)_{\mathrm{avg}}$ polar code can be viewed as part of the probability of error of the 0-power-outage version of the code.
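A sketch of the encoder modification in Definition 15 (function and variable names are illustrative): the encoder tracks the accumulated power and substitutes 0 whenever transmitting the desired symbol would violate the peak power constraint (41).

```python
def zero_power_outage_encode(desired, P):
    """Transmit desired[k] unless (1/n) * sum_{l<=k} x_l^2 would exceed P; cf. Definition 15."""
    n = len(desired)
    budget = n * P           # the peak power constraint allows a sum of squares up to n * P
    out, used = [], 0.0
    for x in desired:
        if used + x * x > budget:
            out.append(0.0)  # power outage at this time slot: send 0 instead
        else:
            out.append(x)
            used += x * x
    return out

print(zero_power_outage_encode([1.0, 2.0, 2.0], P=1.0))  # [1.0, 0.0, 0.0]: slots 2 and 3 are outages
```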
Corollary 1.
Given an $(n,\mathcal J_\mathcal I,P,\mathcal A)_{\mathrm{avg}}$ polar code, define
$$\varepsilon_1\triangleq P\{\hat W_\mathcal I\ne W_\mathcal I\}\tag{59}$$
and
$$\varepsilon_2\triangleq P\left\{\frac1n\sum_{k=1}^nX_k^2>P\right\}.\tag{60}$$
Then, the 0-power-outage version of the $(n,\mathcal J_\mathcal I,P,\mathcal A)_{\mathrm{avg}}$ polar code is an $(n,\mathcal J_\mathcal I,P,\mathcal A,\varepsilon_1+\varepsilon_2)_{\mathrm{peak}}$ polar code that satisfies the peak power constraint (41).

5. Scaling Exponents and Main Result

5.1. Scaling Exponent of Uniform-Input Polar Codes for MACs

We define the scaling exponent of uniform-input polar codes for the binary-input MAC as follows.
Definition 16.
Fix an $\varepsilon\in(0,1)$ and an $N$-source binary-input MAC $q_{Y|X_\mathcal I}$ with symmetric sum-capacity $C_{\mathrm{sum}}$ (cf. Definition 7). The scaling exponent of uniform-input polar codes for the MAC is defined as
$$\mu_\varepsilon^{\mathrm{PC\text{-}MAC}}\triangleq\liminf_{m\to\infty}\ \inf_{\mathcal J_\mathcal I}\left\{\frac{\log n}{\log\frac1{C_{\mathrm{sum}}-\sum_{i=1}^N|\mathcal J_i|/n}}\,\middle|\,n=2^m,\text{ there exists a uniform-input }(n,\mathcal J_\mathcal I,\varepsilon)\text{ polar code on }q_{Y|X_\mathcal I}\right\}.\tag{61}$$
Definition 16 formalizes the notion that we are seeking the smallest $\mu\ge0$ such that $|C_{\mathrm{sum}}-R_n^{\mathrm{MAC}}|=O\big(n^{-\frac1\mu}\big)$ holds, where $R_n^{\mathrm{MAC}}\triangleq\sum_{i=1}^N\frac{|\mathcal J_i|}n$ denotes the rate of a uniform-input $(n,\mathcal J_\mathcal I,\varepsilon)$ polar code. Using the existing results in Section IV-C of [5] and Theorem 2 of [4], we know that
$$3.579\le\mu_\varepsilon^{\mathrm{PC\text{-}BMSC}}\le\beta=4.714\qquad\text{for all }\varepsilon\in(0,1)\tag{62}$$
for the special case $N=1$ where the binary-input MAC reduces to a BMSC. We note from Theorem 48 of [17] (see also [18,19]) that the optimal scaling exponent (optimized over all codes) for any non-degenerate discrete memoryless channel (DMC), and likewise any BMC, is equal to 2 for all $\varepsilon\in(0,1/2)$.
Using Proposition 2 and Definition 16, we obtain the following corollary, which shows that 4.714, the upper bound on $\mu_\varepsilon^{\mathrm{PC\text{-}BMSC}}$ in (62) for BMSCs, remains a valid upper bound on the scaling exponent for binary-input MACs.
Corollary 2.
Fix any $\varepsilon\in(0,1)$ and any binary-input MAC $q_{Y|X_\mathcal I}$. Then,
$$\mu_\varepsilon^{\mathrm{PC\text{-}MAC}}\le\beta=4.714.\tag{63}$$

5.2. Scaling Exponent of Uniform-Input Polar Codes for the AWGN Channel

Definition 17.
Fix a $P>0$ and an $\varepsilon\in(0,1)$. The scaling exponent of uniform-input polar codes for the AWGN channel is defined as
$$\mu_{P,\varepsilon}^{\mathrm{PC\text{-}AWGN}}\triangleq\liminf_{m\to\infty}\ \inf_{\mathcal J_\mathcal I,\mathcal A}\left\{\frac{\log n}{\log\frac1{C(P)-\sum_{i=1}^N|\mathcal J_i|/n}}\,\middle|\,n=2^m,\text{ there exists a uniform-input }(n,\mathcal J_\mathcal I,P,\mathcal A,\varepsilon)_{\mathrm{peak}}\text{ polar code}\right\}.$$
Definition 17 formalizes the notion that we are seeking the smallest $\mu\ge0$ such that $C(P)-R_n^{\mathrm{AWGN}}=O\big(n^{-\frac1\mu}\big)$ holds, where $R_n^{\mathrm{AWGN}}\triangleq\sum_{i=1}^N\frac{|\mathcal J_i|}n$ denotes the rate of an $(n,\mathcal J_\mathcal I,P,\mathcal A,\varepsilon)_{\mathrm{peak}}$ polar code. We note from Theorem 54 of [17] and Theorem 5 of [19] that the optimal scaling exponent (optimized over all codes) for the AWGN channel is equal to 2 for any $\varepsilon\in(0,1/2)$. The following theorem is the main result of this paper; it shows that 4.714 is a valid upper bound on the scaling exponent of polar codes for the AWGN channel.
Theorem 1.
Fix any $P>0$ and any $\varepsilon\in(0,1)$. There exists a constant $t^*>0$ that does not depend on $n$ such that the following holds. For any $n=2^m$ where $m\in\mathbb N$, there exists an $\mathcal A$ such that the corresponding $(n,\mathcal J_\mathcal I^{\mathrm{SE}},P,\mathcal A)_{\mathrm{peak}}$ polar code defined for the AWGN channel $q_{Y|X}$ satisfies
$$\frac1n\sum_{i=1}^N|\mathcal J_i^{\mathrm{SE}}|\ge C(P)-\frac{t^*\log n}{n^{1/\beta}}\tag{64}$$
and
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}\le\frac{\log n}{n^3}+e^3e^{-n^{\frac12-\frac1\beta}}.\tag{65}$$
In particular, we have
$$\mu_{P,\varepsilon}^{\mathrm{PC\text{-}AWGN}}\le\beta=4.714.\tag{66}$$
Proof. 
Fix a $P>0$, an $\varepsilon\in(0,1)$ and an $n=2^m$ where $m\in\mathbb N$. Combining Proposition 3 and Lemma 3 (with $\gamma=0$), we conclude that there exist a constant $t^*>0$ that does not depend on $n$ and an $\mathcal A$ such that the corresponding $(n,\mathcal J_\mathcal I^{\mathrm{SE}},P,\mathcal A)_{\mathrm{avg}}$ polar code defined for the AWGN channel $q_{Y|X}$ satisfies (64),
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}\le\frac{\log n}{n^3},\tag{67}$$
and
$$P\left\{\frac1n\sum_{k=1}^nX_k^2>P\right\}\le e^3e^{-n^{\frac12-\frac1\beta}}.\tag{68}$$
Using (67), (68) and Corollary 1, we conclude that the 0-power-outage version of the $(n,\mathcal J_\mathcal I^{\mathrm{SE}},P,\mathcal A)_{\mathrm{avg}}$ polar code is an $(n,\mathcal J_\mathcal I^{\mathrm{SE}},P,\mathcal A)_{\mathrm{peak}}$ polar code that satisfies (64) and (65). Since
$$\frac{\log n}{n^3}+e^3e^{-n^{\frac12-\frac1\beta}}\le\varepsilon\tag{69}$$
for all sufficiently large $n$, it follows from (64), (65) and Definition 17 that (66) holds. ☐

6. Moderate Deviations Regime

6.1. Polar Codes That Achieve the Symmetric Capacity of a BMC

The following result is based on Section IV of [4], which developed a tradeoff between the gap to capacity and the decay rate of the error probability for a BMC under the moderate deviations regime [15] where both the gap to capacity and the error probability vanish as n grows.
Lemma 4.
([4], Section IV) There exists a universal constant $t^{\mathrm{MD}}>0$ such that the following holds. Fix any $\gamma\in\left(\frac1{1+\beta},1\right)$ and any BMC characterized by $q_{Y|X}$. Recall that $p_X^*$ denotes the uniform distribution on $\{0,1\}$. Then for any $n=2^m$ where $m\in\mathbb N$, we have
$$\frac1n\left|\left\{k\in\{1,2,\ldots,n\}\,\middle|\,Z[p_X^*;q_{Y|X}](U_k|U^{k-1},Y^n)\le2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)}}\right\}\right|\ge I_{p_X^*q_{Y|X}}(X;Y)-\frac{t^{\mathrm{MD}}}{n^{(1-\gamma)/\beta}}\tag{70}$$
where $h_2:[0,1/2]\to[0,1]$ denotes the binary entropy function.
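Since $h_2$ is strictly increasing on $[0,1/2]$, its inverse can be evaluated numerically by bisection. The following sketch (with illustrative parameter choices) tabulates the tradeoff of Lemma 4: the gap-to-capacity exponent $\frac{1-\gamma}{\beta}$ against the error exponent $\gamma\,h_2^{-1}\big(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\big)$.

```python
import math

BETA = 4.714

def h2(x):
    """Binary entropy function h2: [0, 1/2] -> [0, 1]."""
    if x == 0.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def h2_inv(y, tol=1e-12):
    """Inverse of h2 on [0, 1/2] by bisection (h2 is increasing there)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h2(mid) < y else (lo, mid)
    return (lo + hi) / 2

for gamma in (0.5, 0.7, 0.9):
    assert gamma > 1 / (1 + BETA)  # range of gamma in Lemma 4
    err_exp = gamma * h2_inv((gamma * BETA + gamma - 1) / (gamma * BETA))
    gap_exp = (1 - gamma) / BETA
    print(f"gamma={gamma}: error ~ 2^(-n^{err_exp:.3f}), gap ~ n^(-{gap_exp:.3f})")
```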

6.2. Polar Codes that Achieve the Symmetric Sum-Capacity of a Binary-Input MAC

The following lemma, whose proof is omitted because it is analogous to the proof of Lemma 2, is a direct consequence of Lemma 4.
Lemma 5.
There exists a universal constant $t^{\mathrm{MD}}>0$ such that the following holds. Fix any $\gamma\in\left(\frac1{1+\beta},1\right)$ and any binary-input MAC characterized by $q_{Y|X_\mathcal I}$. Recall that $p_{X_\mathcal I}^*=\prod_{i\in\mathcal I}p_{X_i}^*$. Then for any $n=2^m$ where $m\in\mathbb N$, we have
$$\frac1n\left|\left\{k\in\{1,2,\ldots,n\}\,\middle|\,Z[p_{X_\mathcal I}^*;q_{Y|X_\mathcal I}](U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n)\le2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)}}\right\}\right|\ge I_{p_{X_\mathcal I}^*q_{Y|X_\mathcal I}}(X_i;X_{[i-1]},Y)-\frac{t^{\mathrm{MD}}}{n^{(1-\gamma)/\beta}}\tag{71}$$
for each $i\in\mathcal I$.
Combining Lemma 5, Definition 7 and Proposition 1, we obtain the following proposition, whose proof is analogous to the proof of Proposition 2 and hence omitted.
Proposition 4.
There exists a universal constant $t^{\mathrm{MD}}>0$ such that the following holds. Fix any $\gamma\in\left(\frac1{1+\beta},1\right)$ and any $N$-source binary-input MAC characterized by $q_{Y|X_\mathcal I}$. In addition, fix any $m\in\mathbb N$, let $n=2^m$ and define
$$\mathcal J_i^{\mathrm{MD}}\triangleq\left\{k\in\{1,2,\ldots,n\}\,\middle|\,Z[p_{X_\mathcal I}^*;q_{Y|X_\mathcal I}](U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n)\le2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)}}\right\}\tag{72}$$
for each $i\in\mathcal I$, where the superscript "MD" stands for "moderate deviations". Then, the corresponding uniform-input $(n,\mathcal J_\mathcal I^{\mathrm{MD}})$ polar code described in Definition 9 satisfies
$$\sum_{i=1}^N\frac{|\mathcal J_i^{\mathrm{MD}}|}n\ge C_{\mathrm{sum}}-\frac{t^{\mathrm{MD}}N}{n^{(1-\gamma)/\beta}}\tag{73}$$
and
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}\le Nn\,2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)}}.\tag{74}$$

6.3. Uniform-Input Polar Codes for the AWGN Channel

Proposition 5.
There exists a universal constant $t^{\mathrm{MD}}>0$ such that the following holds. Fix any $\gamma\in\left(\frac1{1+\beta},1\right)$. Suppose we are given an $(n,\mathcal J_\mathcal I^{\mathrm{MD}},P,\mathcal A)_{\mathrm{avg}}$ polar code (cf. Definition 14) defined for the AWGN channel $q_{Y|X}$ with a 2-origin $\mathcal A$ (i.e., $\mathcal A\supseteq\{0,0^-\}$). Define $\mathcal X\triangleq\mathcal A\setminus\{0^-\}\subset\mathbb R$, where $\mathcal X$ contains 1 origin and $n-2$ non-zero real numbers. Then, the $(n,\mathcal J_\mathcal I^{\mathrm{MD}},P,\mathcal A)_{\mathrm{avg}}$ polar code is an $(n,M)$ code (cf. Definition 1) which satisfies
$$\frac1n\log M\ge I_{p_Xq_{Y|X}}(X;Y)-\frac{t^{\mathrm{MD}}\log n}{n^{(1-\gamma)/\beta}},\tag{75}$$
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}\le(\log n)\,n\,2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)}},\tag{76}$$
and
$$P\{X^n=x^n\}=\prod_{k=1}^nP\{X_k=x_k\}=\prod_{k=1}^np_X(x_k)\tag{77}$$
for all $x^n\in\mathcal X^n$, where $p_X$ is the distribution on $\mathcal X$ as defined in (52).
Proof. 
The proposition follows from inspecting Proposition 4 and Definition 14 with the identifications $N=\log n$ and $\log M=\sum_{i=1}^m|\mathcal J_i^{\mathrm{MD}}|$. ☐
The following theorem develops the tradeoff between the gap to capacity and the decay rate of the error probability for $(n,\mathcal J_\mathcal I^{\mathrm{MD}},P,\mathcal A,\varepsilon)_{\mathrm{peak}}$ polar codes defined for the AWGN channel.
Theorem 2.
Fix a $\gamma\in\left(\frac1{1+\beta},1\right)$. There exists a constant $t_{\mathrm{MD}}^*>0$ that depends on $P$ and $\gamma$ but not $n$ such that the following holds for any $n=2^m$ where $m\in\mathbb N$. There exists an $(n,\mathcal J_\mathcal I^{\mathrm{MD}},P,\mathcal A,\varepsilon)_{\mathrm{peak}}$ polar code defined for the AWGN channel $q_{Y|X}$ that satisfies
$$\frac1n\sum_{i=1}^N|\mathcal J_i^{\mathrm{MD}}|\ge C(P)-\frac{t_{\mathrm{MD}}^*\log n}{n^{(1-\gamma)/\beta}}\tag{78}$$
and
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}\le(n\log n+e^3)\,2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)}}.\tag{79}$$
Proof. 
By Proposition 5 and Lemma 3, there exists a constant $t_{\mathrm{MD}}^*>0$ that depends on $P$ and $\gamma$ but not $n$ such that for any $n=2^m$ where $m\in\mathbb N$, there exist an $\mathcal A$ and a corresponding $(n,\mathcal J_\mathcal I^{\mathrm{MD}},P,\mathcal A)_{\mathrm{avg}}$ polar code that satisfies (78),
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}\le(\log n)\,n\,2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)}},\tag{80}$$
and
$$P\left\{\frac1n\sum_{k=1}^nX_k^2>P\right\}\le e^3e^{-n^{\frac12-\frac{1-\gamma}\beta}}.\tag{81}$$
It remains to show (79). Using (80), (81) and Corollary 1, we conclude that the 0-power-outage version of the $(n,\mathcal J_\mathcal I^{\mathrm{MD}},P,\mathcal A)_{\mathrm{avg}}$ polar code is an $(n,\mathcal J_\mathcal I^{\mathrm{MD}},P,\mathcal A,\varepsilon)_{\mathrm{peak}}$ polar code that satisfies
$$\varepsilon\le(n\log n)\,2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)}}+e^3e^{-n^{\frac12-\frac{1-\gamma}\beta}}\tag{82}$$
$$\le(n\log n+e^3)\,2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)}}\tag{83}$$
where the last inequality follows from the fact that $h_2(x)\ge2x$ for all $x\in[0,1/2]$, which implies $\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)\le\frac{\gamma\beta+\gamma-1}{2\beta}\le\frac12-\frac{1-\gamma}\beta$ and hence $e^{-n^{\frac12-\frac{1-\gamma}\beta}}\le2^{-n^{\gamma h_2^{-1}\left(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\right)}}$. This concludes the proof. ☐
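As a numerical sanity check of this exponent comparison (a sketch using the analytic bound $h_2^{-1}(y)\le y/2$ rather than the exact inverse), one can verify that $\gamma\,h_2^{-1}\big(\frac{\gamma\beta+\gamma-1}{\gamma\beta}\big)\le\frac12-\frac{1-\gamma}\beta$ on a grid of $\gamma$, which is exactly the comparison used to absorb the $e^3e^{-n^{1/2-(1-\gamma)/\beta}}$ term into (79).

```python
BETA = 4.714

def h2_inv_upper(y):
    """Analytic upper bound h2^{-1}(y) <= y/2, from h2(x) >= 2x on [0, 1/2]."""
    return y / 2

ok = True
for k in range(1, 1000):
    gamma = 1 / (1 + BETA) + k * (1 - 1 / (1 + BETA)) / 1000  # grid over (1/(1+beta), 1)
    lhs = gamma * h2_inv_upper((gamma * BETA + gamma - 1) / (gamma * BETA))
    rhs = 0.5 - (1 - gamma) / BETA
    ok = ok and (lhs <= rhs + 1e-12)
print(ok)  # True: the moderate deviations term dominates the power-outage term
```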
Remark 4.
A candidate for $\mathcal A$ in Theorem 2 can be explicitly constructed according to Lemma 3 with the identification $\mathcal A\triangleq\mathcal X\cup\{0^-\}$.

7. Concluding Remarks

In this paper, we provided an upper bound on the scaling exponent of polar codes for the AWGN channel (Theorem 1). In addition, in Theorem 2 we established a moderate deviations result: there exist polar codes which obey a certain tradeoff between the gap to capacity and the decay rate of the error probability for the AWGN channel.
Since the encoding and decoding complexities of a binary-input polar code for a BMC are $O(n\log n)$ as long as we allow pseudorandom numbers to be shared between the encoder and the decoder for encoding and decoding the randomized frozen bits (e.g., see [3], Section IX), the encoding and decoding complexities of the polar codes for the AWGN channel defined in Definitions 14 and 15 are $O(n\log n)\times\log n=O(n\log^2n)$. By a standard probabilistic argument, there must exist a deterministic encoder for the frozen bits such that the decoding error of the polar code for the AWGN channel with the deterministic encoder is no worse than that of the polar code with randomized frozen bits. In the future, it may be fruitful to develop low-complexity algorithms for finding a good deterministic encoder for the frozen bits. Another interesting direction for future research is to compare the empirical performance of our polar codes in Definitions 14 and 15 with that of state-of-the-art polar codes. One may also explore various techniques (e.g., list decoding, cyclic redundancy check (CRC), etc.) to improve the empirical performance of the polar codes constructed herein.

Author Contributions

S. L. Fong carried out the research and wrote the paper. V. Y. F. Tan proposed the research topic, suggested and improved the flow of the presentation in the paper, and verified the correctness of the results.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Proposition 1

Unless specified otherwise, all the probabilities in this proof are evaluated according to the distribution induced by the uniform-input $(n,\mathcal J_\mathcal I)$ polar code. Consider
$$P\{\hat W_\mathcal I\ne W_\mathcal I\}=\sum_{i=1}^NP\left(\{\hat W_i\ne W_i\}\cap\{\hat W_{[i-1]}=W_{[i-1]}\}\right)\tag{A1}$$
$$=\sum_{i=1}^NP\left(\{\hat U_i^n\ne U_i^n\}\cap\{\hat X_{[i-1]}^n=X_{[i-1]}^n\}\right)\tag{A2}$$
$$=\sum_{i=1}^N\sum_{k=1}^nP\left(\{\hat U_{i,k}\ne U_{i,k}\}\cap\{\hat U_i^{k-1}=U_i^{k-1}\}\cap\{\hat X_{[i-1]}^n=X_{[i-1]}^n\}\right)\tag{A3}$$
where (A2) is due to Definition 8, under which $\{\hat W_i=W_i\}=\{\hat U_i^n=U_i^n\}$ and $\{\hat W_{[i-1]}=W_{[i-1]}\}=\{\hat X_{[i-1]}^n=X_{[i-1]}^n\}$. For each $i\in\mathcal I$ and each $k\in\mathcal J_i$, we have
$$P\left(\{\hat U_{i,k}\ne U_{i,k}\}\cap\{\hat U_i^{k-1}=U_i^{k-1}\}\cap\{\hat X_{[i-1]}^n=X_{[i-1]}^n\}\right)\le\sum_{u_{i,k}\in\{0,1\}}\sum_{u_i^{k-1}\in\mathcal U_i^{k-1}}\sum_{x_{[i-1]}^n\in\mathcal X_{[i-1]}^n}\int_{\mathcal Y^n}p_{U_i^k,X_{[i-1]}^n,Y^n}(u_i^k,x_{[i-1]}^n,y^n)\times\mathbf1\left\{p_{U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n}(u_{i,k}|u_i^{k-1},x_{[i-1]}^n,y^n)\le p_{U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n}(1-u_{i,k}|u_i^{k-1},x_{[i-1]}^n,y^n)\right\}\mathrm dy^n\tag{A4}$$
$$\le2\sum_{u_i^{k-1}\in\mathcal U_i^{k-1}}\sum_{x_{[i-1]}^n\in\mathcal X_{[i-1]}^n}\int_{\mathcal Y^n}p_{U_i^{k-1},X_{[i-1]}^n,Y^n}(u_i^{k-1},x_{[i-1]}^n,y^n)\times\sqrt{p_{U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n}(0|u_i^{k-1},x_{[i-1]}^n,y^n)\,p_{U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n}(1|u_i^{k-1},x_{[i-1]}^n,y^n)}\,\mathrm dy^n\tag{A5}$$
$$=Z[p_{X_\mathcal I};q_{Y|X_\mathcal I}](U_{i,k}|U_i^{k-1},X_{[i-1]}^n,Y^n)\tag{A6}$$
where (A4) follows from (30): a successive cancellation error at an index $k\in\mathcal J_i$ implies that the true bit is not the more likely one given the (correct) conditioning variables. In addition, it follows from (27) and (30) that
$$P\left(\{\hat U_{i,k}\ne U_{i,k}\}\cap\{\hat U_i^{k-1}=U_i^{k-1}\}\cap\{\hat X_{[i-1]}^n=X_{[i-1]}^n\}\right)=0\tag{A7}$$
for each $i\in\mathcal I$ and each $k\in\mathcal J_i^c$. Combining (A3), (A6) and (A7), we obtain (33).

Appendix B. Proof of Lemma 3

Let $q_{Y|X}$ be the conditional distribution that characterizes the AWGN channel and fix a $\gamma\in[0,1)$. Recall the definitions of $p_X$ and $s_X$ in (52) and (54) respectively, and recall that $\Phi_X$ is the cdf of $s_X$. Fix a sufficiently large $n\ge36$ that satisfies
$$\frac P{n^{(1-\gamma)/\beta}}\in\left(0,\frac{1+P}2\right)\tag{A8}$$
and
$$\Phi_X^{-1}\left(1-\frac1{n^{1-(1-\gamma)/\beta}}\right)\ge1.\tag{A9}$$
In addition, recall the definition of $\mathcal X$ in (55) and let $g:\mathbb R\to\mathcal X$ be the quantization function such that
$$g(t)\triangleq\Phi_X^{-1}\left(\frac\ell n\right)\tag{A10}$$
where $\ell\in\{1,2,\ldots,n-2,n-1\}$ is the unique integer that satisfies
$$\begin{cases}\Phi_X^{-1}\left(\frac\ell n\right)\le t<\Phi_X^{-1}\left(\frac{\ell+1}n\right)&\text{if }t\ge0,\\[2pt]\Phi_X^{-1}\left(\frac{\ell-1}n\right)<t\le\Phi_X^{-1}\left(\frac\ell n\right)&\text{if }t<0.\end{cases}\tag{A11}$$
In words, $g$ quantizes every $a\in\mathbb R$ to its nearest point in $\mathcal X$ whose magnitude is no larger than $|a|$. Let
$$\hat X\triangleq g(X)\tag{A12}$$
be the quantized version of $X$. By construction,
$$P_{s_X}\left\{|\hat X|\le|X|\right\}=1,\tag{A13}$$
$$P_{s_X}\left\{\left|\Phi_X(\hat X)-\Phi_X(X)\right|\le\frac1n\right\}=1\tag{A14}$$
and
$$P_{s_X}\left\{X\in\left[\Phi_X^{-1}\left(\frac\ell n\right),\Phi_X^{-1}\left(\frac{\ell+1}n\right)\right)\right\}=\frac1n\tag{A15}$$
for all $\ell\in\{0,1,\ldots,n-1\}$, where we adopt the conventions $\Phi_X^{-1}(0)\triangleq-\infty$ and $\Phi_X^{-1}(1)\triangleq+\infty$.
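Before proceeding, here is a small stdlib-only sketch of the quantizer $g$ defined in (A10) and (A11) (parameters illustrative): in quantile space, $g$ rounds toward the origin, which is exactly what (A13) and (A14) capture.

```python
import math
from statistics import NormalDist

def make_quantizer(n, sigma):
    """Quantizer g of (A10) and (A11) for s_X = N(0, sigma^2); an illustrative sketch."""
    dist = NormalDist(mu=0.0, sigma=sigma)
    def g(t):
        u = dist.cdf(t) * n
        l = math.floor(u) if t >= 0 else math.ceil(u)  # round toward the origin in quantile space
        l = min(max(l, 1), n - 1)                      # keep l in {1, ..., n-1}
        return dist.inv_cdf(l / n)
    return g

g = make_quantizer(n=256, sigma=1.0)
print(abs(g(0.8)) <= 0.8, abs(g(-0.8) + g(0.8)) < 1e-12)  # |g(t)| <= |t| and g is odd; cf. (A13)
```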
It follows from (A15) and the definition of $p_X$ in (52) that
$$\mathbb E_{p_X}[X^2]=\mathbb E_{s_X}[\hat X^2]\tag{A16}$$
and
$$P_{p_{X^n}}\left\{\frac1n\sum_{k=1}^nX_k^2>P\right\}=P_{s_{X^n}}\left\{\frac1n\sum_{k=1}^n\hat X_k^2>P\right\}\tag{A17}$$
where $s_{X^n}(x^n)\triangleq\prod_{k=1}^ns_X(x_k)$. Consequently, in order to show (56) and (57), it suffices to show
$$\mathbb E_{s_X}[\hat X^2]\le P\left(1-\frac1{n^{(1-\gamma)/\beta}}\right)\tag{A18}$$
and
$$P_{s_{X^n}}\left\{\frac1n\sum_{k=1}^n\hat X_k^2>P\right\}\le e^3e^{-n^{\frac12-\frac{1-\gamma}\beta}}\tag{A19}$$
respectively. Using (A13) and the definition of $s_X$ in (54), we obtain (A18). In order to show (A19), we consider the following chain of inequalities:
$$P_{s_{X^n}}\left\{\frac1n\sum_{k=1}^n\hat X_k^2>P\right\}\le P_{s_{X^n}}\left\{\frac1n\sum_{k=1}^nX_k^2>P\right\}\tag{A20}$$
$$=P_{s_{X^n}}\left\{\sum_{k=1}^n\frac{X_k^2}{\sqrt n\,P\left(1-\frac1{n^{(1-\gamma)/\beta}}\right)}>\sqrt n+\frac{n^{\frac12-\frac{1-\gamma}\beta}}{1-\frac1{n^{(1-\gamma)/\beta}}}\right\}\tag{A21}$$
$$\le P_{s_{X^n}}\left\{\exp\left(\sum_{k=1}^n\frac{X_k^2}{\sqrt n\,P\left(1-\frac1{n^{(1-\gamma)/\beta}}\right)}\right)>e^{\sqrt n+n^{\frac12-\frac{1-\gamma}\beta}}\right\}\tag{A22}$$
$$\le\left(\frac1{1-\frac1{\sqrt n/2}}\right)^{n/2}e^{-\left(\sqrt n+n^{\frac12-\frac{1-\gamma}\beta}\right)}\tag{A23}$$
$$=\left(1+\frac1{\sqrt n/2-1}\right)^{n/2}e^{-\left(\sqrt n+n^{\frac12-\frac{1-\gamma}\beta}\right)}\tag{A24}$$
$$\le e^{\frac n{\sqrt n-2}}\cdot e^{-\left(\sqrt n+n^{\frac12-\frac{1-\gamma}\beta}\right)}\tag{A25}$$
$$=e^{\frac{2\sqrt n}{\sqrt n-2}}\cdot e^{-n^{\frac12-\frac{1-\gamma}\beta}}\tag{A26}$$
$$\le e^3\cdot e^{-n^{\frac12-\frac{1-\gamma}\beta}}\tag{A27}$$
where
  • (A20) is due to (A13).
  • (A21) follows from dividing both sides of the inequality inside the braces by $\sqrt n\,P\left(1-\frac1{n^{(1-\gamma)/\beta}}\right)$.
  • (A22) follows from dropping the factor $\left(1-\frac1{n^{(1-\gamma)/\beta}}\right)^{-1}\ge1$ in the threshold and exponentiating both sides.
  • (A23) is due to Markov's inequality and the fact that $\mathbb E\left[e^{X^2/(\sqrt n\,\sigma^2)}\right]=\left(1-\frac2{\sqrt n}\right)^{-1/2}$ for $X\sim\mathcal N(0,\sigma^2)$.
  • (A25) is due to the fact that $\left(1+\frac1t\right)^t\le e$ for all $t>0$.
  • (A27) is due to the assumption that $n\ge36$.
Equation (58) remains to be shown. To this end, we let $q_Z$ denote the distribution of the standard normal random variable (cf. (1)) and consider
$$C\left(P\left(1-\frac1{n^{(1-\gamma)/\beta}}\right)\right)-I_{p_Xq_{Y|X}}(X;Y)=C\left(P\left(1-\frac1{n^{(1-\gamma)/\beta}}\right)\right)-I_{p_Xq_Z}(X;X+Z)\tag{A28}$$
$$=I_{s_Xq_Z}(X;X+Z)-I_{p_Xq_Z}(X;X+Z)\tag{A29}$$
where (A29) is due to the definition of $s_X$ in (54). In order to simplify the right hand side of (A29), we invoke Corollary 4 of [20] and obtain
$$I_{s_Xq_Z}(X;X+Z)-I_{p_Xq_Z}(X;X+Z)\le(\log e)\left(3\sqrt{1+P}+4P\right)W_2(s_Y,p_Y).\tag{A30}$$
After some tedious calculations, which are elaborated after this proof, it can be shown that the Wasserstein distance in (A30) satisfies
$$W_2(s_Y,p_Y)\le\sqrt{\frac{\kappa\log n}{n^{\frac{2(1-\gamma)}\beta}}},\tag{A31}$$
where
$$\kappa\triangleq P^2+4P+\frac{4P}{\log e}\left(1+\log\sqrt{\frac P{2\pi}}\right).\tag{A32}$$
On the other hand, since
$$\frac{\log(1+P-\xi)}{\log e}\ge\frac{\log(1+P)}{\log e}-\frac\xi{1+P}-\frac{2\xi^2}{(1+P)^2}\tag{A33}$$
for each $\xi\in\left(0,\frac{1+P}2\right)$ by Taylor's theorem, and $\frac P{n^{(1-\gamma)/\beta}}\in\left(0,\frac{1+P}2\right)$ by (A8), we can take $\xi=\frac P{n^{(1-\gamma)/\beta}}$ and obtain
$$C\left(P\left(1-\frac1{n^{\frac{1-\gamma}\beta}}\right)\right)\ge C(P)-\frac{\log e}2\left(\frac P{n^{\frac{1-\gamma}\beta}(1+P)}+\frac{2P^2}{n^{\frac{2(1-\gamma)}\beta}(1+P)^2}\right).\tag{A34}$$
Using (A29), (A30), (A31) and (A34), we obtain
$$C(P)-I_{p_Xq_{Y|X}}(X;Y)\le(\log e)\left(3\sqrt{1+P}+4P\right)\sqrt{\frac{\kappa\log n}{n^{\frac{2(1-\gamma)}\beta}}}+\frac{\log e}2\left(\frac P{n^{\frac{1-\gamma}\beta}(1+P)}+\frac{2P^2}{n^{\frac{2(1-\gamma)}\beta}(1+P)^2}\right).\tag{A35}$$
Consequently, (58) holds for some constant $t>0$ that depends on $P$ and $\gamma$ but not on $n$.
Derivation of (A31)
Consider the distribution (coupling) $r_{X,\hat X,Y,\hat Y}$ defined as
$$r_{X,\hat X,Y,\hat Y}(x,\hat x,y,\hat y)=s_X(x)\,q_Z(y-x)\,\mathbf1\{\hat x=g(x)\}\,\mathbf1\{\hat y=g(x)+y-x\}\tag{A36}$$
and simplify the Wasserstein distance in (A30) as follows:
$$W_2(s_Y,p_Y)^2\le\int_{\mathbb R}\int_{\mathbb R}r_{Y,\hat Y}(y,\hat y)(y-\hat y)^2\,\mathrm d\hat y\,\mathrm dy\tag{A37}$$
$$=\int_{\mathbb R}\int_{\mathbb R}r_{X,\hat X}(x,\hat x)(x-\hat x)^2\,\mathrm d\hat x\,\mathrm dx\tag{A38}$$
$$=\int_{\mathbb R}s_X(x)(x-g(x))^2\,\mathrm dx\tag{A39}$$
where
  • (A37) follows from the definition of $W_2$ in (6) and the fact, due to (A36), that $r_Y=s_Y$ and $r_{\hat Y}=p_Y$.
  • (A38) follows from the fact, due to (A36), that $P_{r_{X,\hat X,Y,\hat Y}}\{Y-\hat Y=X-\hat X\}=1$.
  • (A39) is due to (A36).
Following (A39), we define $\xi_n$ to be the positive number that satisfies
$$\int_{\xi_n}^\infty s_X(x)\,\mathrm dx=\frac1{n^{1-(1-\gamma)/\beta}}\tag{A40}$$
and consider
$$\int_{\mathbb R}s_X(x)(x-g(x))^2\,\mathrm dx=\int_{-\infty}^{-\xi_n}s_X(x)(x-g(x))^2\,\mathrm dx+\int_{-\xi_n}^{\xi_n}s_X(x)(x-g(x))^2\,\mathrm dx+\int_{\xi_n}^\infty s_X(x)(x-g(x))^2\,\mathrm dx\tag{A41}$$
$$\le\int_{-\infty}^{-\xi_n}s_X(x)x^2\,\mathrm dx+\int_{-\xi_n}^{\xi_n}s_X(x)(x-g(x))^2\,\mathrm dx+\int_{\xi_n}^\infty s_X(x)x^2\,\mathrm dx\tag{A42}$$
$$=2\int_{\xi_n}^\infty s_X(x)x^2\,\mathrm dx+\int_{-\xi_n}^{\xi_n}s_X(x)(x-g(x))^2\,\mathrm dx\tag{A43}$$
where (A42) follows from the fact, due to (A10) and (A11), that $|x-g(x)|\le|x|$ for all $x\in\mathbb R$, and (A43) follows from the symmetry of $s_X$ and $g$ around the origin. In order to bound the first term in (A43), we let
$$P_n\triangleq P\left(1-\frac1{n^{(1-\gamma)/\beta}}\right)\tag{A44}$$
and consider
$$\int_{\xi_n}^\infty s_X(x)x^2\,\mathrm dx=P_n\left(\int_{\xi_n}^\infty s_X(x)\,\mathrm dx+\xi_ns_X(\xi_n)\right)\tag{A45}$$
$$<\left(\int_{\xi_n}^\infty s_X(x)\,\mathrm dx\right)\left(2P_n+\xi_n^2\right)\tag{A46}$$
$$<\frac1{n^{1-(1-\gamma)/\beta}}\left(2P+\xi_n^2\right)\tag{A47}$$
where
  • (A45) follows from integration by parts.
  • (A46) is due to the simple fact that
$$\int_{\xi_n}^\infty s_X(x)\,\mathrm dx>\frac{\xi_n^2}{P_n+\xi_n^2}\int_{\xi_n}^\infty\left(1+\frac{P_n}{x^2}\right)s_X(x)\,\mathrm dx\tag{A48}$$
$$=\frac{P_n}{P_n+\xi_n^2}\,\xi_ns_X(\xi_n).\tag{A49}$$
  • (A47) is due to (A40) and (A44).
In order to bound the term in (A47), we note that
$$\xi_n\ge1\tag{A50}$$
by (A9) and (A40), and we obtain an upper bound on $\xi_n$ through the following chain of inequalities:
$$\frac1{n^{1-(1-\gamma)/\beta}}=\int_{\xi_n}^\infty s_X(x)\,\mathrm dx\tag{A51}$$
$$\le\int_{\xi_n}^\infty xs_X(x)\,\mathrm dx\tag{A52}$$
$$=P_ns_X(\xi_n)\tag{A53}$$
$$=\sqrt{\frac{P_n}{2\pi}}\,e^{-\frac{\xi_n^2}{2P_n}}\tag{A54}$$
$$<\sqrt{\frac P{2\pi}}\,e^{-\frac{\xi_n^2}{2P}}\tag{A55}$$
where
  • (A51) is due to (A40).
  • (A52) is due to (A50).
Since
$$\xi_n^2\le\frac{2P}{\log e}\left(\left(1-\frac{1-\gamma}\beta\right)\log n+\log\sqrt{\frac P{2\pi}}\right)\tag{A56}$$
by (A55), and since $\beta=4.714>3$ implies $n^{-\left(1-\frac{1-\gamma}\beta\right)}\le n^{-\frac{2(1-\gamma)}\beta}$, it follows from (A47) that
$$\int_{\xi_n}^\infty s_X(x)x^2\,\mathrm dx<\frac1{n^{1-\frac{1-\gamma}\beta}}\left(2P+\frac{2P}{\log e}\left(\left(1-\frac{1-\gamma}\beta\right)\log n+\log\sqrt{\frac P{2\pi}}\right)\right)\tag{A57}$$
$$<\frac{\log n}{n^{\frac{2(1-\gamma)}\beta}}\left(2P+\frac{2P}{\log e}\left(1+\log\sqrt{\frac P{2\pi}}\right)\right).\tag{A58}$$
In order to bound the second term in (A43), we consider
$$\int_{-\xi_n}^{\xi_n}s_X(x)(x-g(x))^2\,\mathrm dx=\int_{-\xi_n}^{\xi_n}s_X(x)\left(\Phi_X^{-1}(\Phi_X(x))-\Phi_X^{-1}(\Phi_X(g(x)))\right)^2\,\mathrm dx\tag{A59}$$
$$\le\int_{-\xi_n}^{\xi_n}s_X(x)\left(\frac1n\cdot\frac1{s_X(\xi_n)}\right)^2\,\mathrm dx\tag{A60}$$
$$\le\int_{-\xi_n}^{\xi_n}s_X(x)\,\frac{P_n^2}{n^{\frac{2(1-\gamma)}\beta}}\,\mathrm dx\tag{A61}$$
$$<\frac{P^2}{n^{\frac{2(1-\gamma)}\beta}}\tag{A62}$$
where
  • (A60) is due to (A14), the mean value theorem and the fact that the derivative of $\Phi_X$ is positive and uniformly bounded below by $s_X(\xi_n)$ on the interval $[-\xi_n,\xi_n]$.
  • (A61) is due to (A53).
Combining (A39), (A43), (A58) and (A62) and recalling the definition of $\kappa$ in (A32), we obtain (A31).

References

  1. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley and Sons Inc.: Hoboken, NJ, USA, 2006.
  2. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  3. Arıkan, E. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
  4. Mondelli, M.; Hassani, S.H.; Urbanke, R. Unified scaling of polar codes: Error exponent, scaling exponent, moderate deviations, and error floors. IEEE Trans. Inf. Theory 2016, 62, 6698–6712.
  5. Hassani, S.H.; Alishahi, K.; Urbanke, R. Finite-length scaling for polar codes. IEEE Trans. Inf. Theory 2014, 60, 5875–5898.
  6. Goldin, D.; Burshtein, D. Improved bounds on the finite length scaling of polar codes. IEEE Trans. Inf. Theory 2014, 60, 6966–6978.
  7. Fong, S.L.; Tan, V.Y.F. On the scaling exponent of polar codes for binary-input energy-harvesting channels. IEEE J. Sel. Areas Commun. 2016, 34, 3540–3551.
  8. Mahdavifar, H. Fast polarization and finite-length scaling for non-stationary channels. arXiv 2016, arXiv:1611.04203.
  9. Şaşoğlu, E.; Telatar, İ.; Arıkan, E. Polarization of arbitrary discrete memoryless channels. In Proceedings of the 2009 IEEE Information Theory Workshop, Taormina, Italy, 11–16 October 2009; pp. 114–118.
  10. Sutter, D.; Renes, J.M.; Dupuis, F.; Renner, R. Achieving the capacity of any DMC using only polar codes. In Proceedings of the 2012 IEEE Information Theory Workshop, Lausanne, Switzerland, 3–7 September 2012; pp. 114–118.
  11. Honda, J.; Yamamoto, H. Polar coding without alphabet extension for asymmetric models. IEEE Trans. Inf. Theory 2013, 59, 7829–7838.
  12. Mondelli, M.; Urbanke, R.; Hassani, S.H. How to achieve the capacity of asymmetric channels. In Proceedings of the 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 30 September–3 October 2014; pp. 789–796.
  13. Abbe, E.; Barron, A. Polar coding schemes for the AWGN channel. In Proceedings of the 2011 IEEE International Symposium on Information Theory, St. Petersburg, Russia, 31 July–5 August 2011; pp. 194–198.
  14. Abbe, E.; Telatar, E. Polar codes for the m-user multiple access channel. IEEE Trans. Inf. Theory 2012, 58, 5437–5448.
  15. Altuğ, Y.; Wagner, A.B. Moderate deviations in channel coding. IEEE Trans. Inf. Theory 2014, 60, 4417–4426.
  16. Mahdavifar, H.; El-Khamy, M.; Lee, J.; Kang, I. Achieving the uniform rate region of general multiple access channels by polar coding. IEEE Trans. Commun. 2016, 64, 467–478.
  17. Polyanskiy, Y.; Poor, H.V.; Verdú, S. Channel coding rate in the finite blocklength regime. IEEE Trans. Inf. Theory 2010, 56, 2307–2359.
  18. Strassen, V. Asymptotische Abschätzungen in Shannons Informationstheorie. In Transactions of the Third Prague Conference on Information Theory; 1962; pp. 689–723.
  19. Hayashi, M. Information spectrum approach to second-order coding rate in channel coding. IEEE Trans. Inf. Theory 2009, 55, 4947–4966.
  20. Polyanskiy, Y.; Wu, Y. Wasserstein continuity of entropy and outer bounds for interference channels. IEEE Trans. Inf. Theory 2016, 62, 3992–4002.
