Article

Systematic Encoding and Shortening of PAC Codes

Electrical-Electronics Engineering Department, Bilkent University, Ankara 06800, Turkey
Entropy 2020, 22(11), 1301; https://doi.org/10.3390/e22111301
Submission received: 28 October 2020 / Revised: 11 November 2020 / Accepted: 13 November 2020 / Published: 15 November 2020
(This article belongs to the Special Issue Information Theory for Channel Coding)

Abstract

Polarization adjusted convolutional (PAC) codes are a class of codes that combine channel polarization with convolutional coding. PAC codes are of interest for their high performance. This paper presents a systematic encoding and shortening method for PAC codes. Systematic encoding is important for lowering the bit-error rate (BER) of PAC codes. Shortening is important for adjusting the block length of PAC codes. It is shown that systematic encoding and shortening of PAC codes can be carried out in a unified framework.

1. Introduction

PAC codes are a class of linear block codes designed to improve the performance of polar codes by combining channel polarization with convolutional coding [1]. It has been shown that PAC codes can perform better than polar codes [1], in some instances performing close to the theoretical limits for finite-length codes.
Given the potential of PAC codes for applications requiring extreme reliability at short block lengths, it is of interest to investigate various aspects of PAC codes that may be important in practice. In this paper, we study systematic encoding and shortening of PAC codes. Systematic encoding is of interest mainly because it provides a better bit error rate (BER) performance compared to non-systematic encoding. Code shortening is important as a means of providing flexibility in choosing the code length. The BER advantage of systematic coding is illustrated in Figure 1 for a PAC code of length $N = 128$ and rate $R = 1/2$ on an additive Gaussian noise channel with binary modulation. A better BER performance is important in concatenation schemes where an outer code corrects the bit errors left over by an inner PAC code.
In Section 2, we give a definition of PAC codes and their non-systematic encoding. In Section 3, we develop a method for systematic encoding of PAC codes. In Section 4, we indicate how the systematic encoding method of Section 3 can be used for shortening PAC codes.
Throughout, we restrict attention to PAC codes over the binary field $\mathbb{F}_2 = \{0, 1\}$. All algebraic operations are over vector spaces over $\mathbb{F}_2$. $\mathbb{F}_2^N$ will denote row vectors of length $N$ over $\mathbb{F}_2$, and $\mathbb{F}_2^{N \times M}$ will denote matrices with $N$ rows and $M$ columns. For any $v = (v_1, \dots, v_N) \in \mathbb{F}_2^N$ and $\mathcal{A} \subseteq \{1, 2, \dots, N\}$, let $v_{\mathcal{A}}$ denote the subvector $(v_i : i \in \mathcal{A})$. For any $G \in \mathbb{F}_2^{N \times M}$, $\mathcal{A} \subseteq \{1, 2, \dots, N\}$, and $\mathcal{B} \subseteq \{1, 2, \dots, M\}$, let $G_{\mathcal{A},\mathcal{B}}$ denote the matrix obtained after deleting the rows of $G$ not in $\mathcal{A}$ and the columns of $G$ not in $\mathcal{B}$. The notation $\mathbf{0}$ denotes a vector or matrix all of whose elements are $0$, and $I$ denotes an identity matrix.

2. PAC Codes

A PAC code over $\mathbb{F}_2$ is a linear block code parametrized by $(N, K, \mathcal{A}, f, g)$, where $N$ is a code block length, $K$ is a code dimension, $\mathcal{A}$ is a data index set, $f \in \mathbb{F}_2^{N-K}$ is a frozen word, and $g = (g_0, g_1, \dots, g_m) \in \mathbb{F}_2^{m+1}$ is a convolution impulse response with $g_0 = 1$, $g_m = 1$, and $g_i$ subject to design for $0 < i < m$. The data index set $\mathcal{A}$ is a subset of $\{1, 2, \dots, N\}$ of size $|\mathcal{A}| = K$. The parameter $m + 1$ will be called the span of the impulse response $g$. The span of any impulse response $g$ that we consider here will be bounded by the block length $N$. Sometimes, when the span cannot or need not be shown explicitly, we will write $g = (g_0, g_1, \dots, g_{N-1})$ to denote an impulse response, with the understanding that $g_i = 0$ for $i$ greater than or equal to the span of $g$.
An encoder for a PAC code encodes a data word $d \in \mathbb{F}_2^K$ into a codeword $x \in \mathbb{F}_2^N$ by computing a convolution followed by a polar transform. In the convolution step, a convolution input word $v \in \mathbb{F}_2^N$ is prepared by setting $v_{\mathcal{A}} = d$ and $v_{\mathcal{A}^c} = f$, and a convolution $u = v * g$ is applied to $v$ to obtain a polar transform input word $u \in \mathbb{F}_2^N$. ($\mathcal{A}^c$ denotes the complement of $\mathcal{A}$ in $\{1, 2, \dots, N\}$.) In the polar transform step, the codeword $x \in \mathbb{F}_2^N$ is obtained by computing $x = uL$, where $L = F^{\otimes n}$ is the polar transform matrix, defined as the $n$th Kronecker power of a kernel matrix $F = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$.
The convolution step $u = v * g$ involves the computation
$$u_i = \sum_{j=0}^{m} v_{i-j}\, g_j, \quad i = 1, 2, \dots, N, \tag{1}$$
where $v_{i-j}$ is interpreted as $0$ if $i - j \le 0$. In the following analysis, we will represent the convolution alternatively as a linear transformation $u = vT$, where $T \in \mathbb{F}_2^{N \times N}$ is an upper-triangular Toeplitz matrix of the form (blank entries are $0$)
$$T = \begin{bmatrix}
g_0 & g_1 & g_2 & \cdots & g_m & & \\
 & g_0 & g_1 & g_2 & \cdots & g_m & \\
 & & \ddots & \ddots & \ddots & & \ddots \\
 & & & g_0 & g_1 & g_2 & \cdots \\
 & & & & \ddots & \ddots & \vdots \\
 & & & & & g_0 & g_1 \\
 & & & & & & g_0
\end{bmatrix}. \tag{2}$$
The first row of $T$ is determined by $g$, and the rows that follow are shifted versions of the first row. Please note that if $m = 0$ then $T$ becomes the identity matrix and PAC codes contain polar codes as a special case. To exclude this possibility, PAC codes are often defined with the condition $m \ge 1$. However, for purposes of the present paper, there is no need to place such a restriction on $m$.
The encoding operation for PAC codes can be defined more compactly by defining a generator matrix $G = TL$. Then, the encoder implements the mapping $x = vG$ after preparing the vector $v$ in the same way as above. A direct implementation of the transform $x = vG$, without exploiting the structure in $G$, has complexity $O(N^2)$, while the two-step encoder described above has complexity $O(mN)$ for the convolution operation and $O(N \log N)$ for the polar transform. Since PAC codes typically have $m \ll N$, implementing $x = vG$ using the triangular factorization $G = TL$ results in significant cost savings. Below, as we develop a systematic PAC encoder, we will exploit this triangular factorization to reduce complexity.
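The two-step encoder described above can be sketched in Python as a minimal illustration (this is not code from the paper; the parameters $N = 8$, $\mathcal{A} = \{3, 5, 6, 7\}$, and $g = (1, 0, 1, 1)$ in the usage example are assumptions, and indices are 0-based):

```python
def convolve(v, g):
    """Convolution u = v * g over GF(2): u_i = sum_j g_j v_{i-j}."""
    m = len(g) - 1
    return [sum(g[j] * v[i - j] for j in range(min(i, m) + 1)) % 2
            for i in range(len(v))]

def polar_transform(u):
    """x = u F^{(x)n} over GF(2), computed recursively in O(N log N).

    With F = [[1,0],[1,1]] and u split into halves (u1, u2),
    u F^{(x)n} = ((u1 + u2) F^{(x)(n-1)}, u2 F^{(x)(n-1)}).
    """
    if len(u) == 1:
        return list(u)
    half = len(u) // 2
    u1, u2 = u[:half], u[half:]
    return (polar_transform([(a + b) % 2 for a, b in zip(u1, u2)])
            + polar_transform(u2))

def pac_encode(d, f, A, g):
    """Non-systematic PAC encoding: v -> u = v * g -> x = u L (0-based A)."""
    N = len(d) + len(f)
    v, it_d, it_f = [0] * N, iter(d), iter(f)
    for i in range(N):
        v[i] = next(it_d) if i in A else next(it_f)
    return polar_transform(convolve(v, g))

# Illustrative parameters: N = 8, K = 4, a Reed-Muller-like data index set.
x = pac_encode(d=[1, 0, 1, 1], f=[0, 0, 0, 0], A={3, 5, 6, 7}, g=(1, 0, 1, 1))
print(x)  # -> [1, 1, 0, 0, 0, 0, 1, 1]
```

Note that the recursive `polar_transform` exploits the block structure of $F^{\otimes n}$ directly, which is where the $O(N \log N)$ cost comes from.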

3. Systematic Encoding

The above encoder for a PAC code is non-systematic in the sense that the data word $d$ does not appear transparently as part of the codeword $x$. The goal in this paper is to give a systematic encoding method so that there is a subset of coordinates $\mathcal{A}$ such that $x_{\mathcal{A}} = d$.
We will consider instances of the systematic encoding problem for PAC codes that are characterized by a collection of parameters $(T, L, \mathcal{A}, \mathcal{B}, f, d)$, where $T \in \mathbb{F}_2^{N \times N}$ is an invertible upper-triangular Toeplitz matrix, $L \in \mathbb{F}_2^{N \times N}$ is the polar transform matrix (which is an invertible lower-triangular matrix), $\mathcal{A}$ and $\mathcal{B}$ are subsets of $\{1, 2, \dots, N\}$ with sizes $K$ and $N - K$, respectively, $f \in \mathbb{F}_2^{N-K}$ is a fixed vector, and $d \in \mathbb{F}_2^K$ is a data word. Given such an instance, a systematic encoder seeks a solution to the set of equations
$$x = vTL, \quad v_{\mathcal{B}} = f, \quad x_{\mathcal{A}} = d. \tag{3}$$
More specifically, a systematic PAC encoder seeks to determine the missing part $x_{\mathcal{A}^c}$ of the codeword $x$ subject to the conditions (3). To analyze this problem, rewrite $x = vTL$ in terms of $G = TL$ as
$$x_{\mathcal{A}} = v_{\mathcal{B}}\, G_{\mathcal{B},\mathcal{A}} + v_{\mathcal{B}^c}\, G_{\mathcal{B}^c,\mathcal{A}}, \qquad x_{\mathcal{A}^c} = v_{\mathcal{B}}\, G_{\mathcal{B},\mathcal{A}^c} + v_{\mathcal{B}^c}\, G_{\mathcal{B}^c,\mathcal{A}^c}, \tag{4}$$
where $\mathcal{A}^c$ and $\mathcal{B}^c$ denote the complements of $\mathcal{A}$ and $\mathcal{B}$ in $\{1, 2, \dots, N\}$, respectively. Substituting $x_{\mathcal{A}} = d$ and $v_{\mathcal{B}} = f$ into (4), and solving for $x_{\mathcal{A}^c}$, we obtain a formal solution as
$$x_{\mathcal{A}^c} = d\, G_{\mathcal{B}^c,\mathcal{A}}^{-1}\, G_{\mathcal{B}^c,\mathcal{A}^c} + f \left( G_{\mathcal{B},\mathcal{A}^c} + G_{\mathcal{B},\mathcal{A}}\, G_{\mathcal{B}^c,\mathcal{A}}^{-1}\, G_{\mathcal{B}^c,\mathcal{A}^c} \right), \tag{5}$$
which is valid if and only if the matrix $G_{\mathcal{B}^c,\mathcal{A}}$ is invertible. (Please note that $G_{\mathcal{B}^c,\mathcal{A}}$ is a square matrix since the size of $\mathcal{B}^c$ equals the size of $\mathcal{A}$ by definition.) One way to ensure that $G_{\mathcal{B}^c,\mathcal{A}}$ is invertible is to choose $\mathcal{A}$ and $\mathcal{B}$ as complementary sets so that $G_{\mathcal{B}^c,\mathcal{A}}$ becomes a principal submatrix $G_{\mathcal{A},\mathcal{A}}$ of $G$. (Since $G$ is the product of two invertible matrices, it is invertible; hence, all its principal submatrices are invertible.) We summarize this result as follows.
Proposition 1.
The systematic encoding problem (3) for PAC codes has a solution whenever $\mathcal{B}^c = \mathcal{A}$, and the solution is given by
$$x_{\mathcal{A}^c} = d\, G_{\mathcal{A},\mathcal{A}}^{-1}\, G_{\mathcal{A},\mathcal{A}^c} + f \left( G_{\mathcal{A}^c,\mathcal{A}^c} + G_{\mathcal{A}^c,\mathcal{A}}\, G_{\mathcal{A},\mathcal{A}}^{-1}\, G_{\mathcal{A},\mathcal{A}^c} \right). \tag{6}$$
Thus, in principle, we have already provided a solution to the systematic encoding problem for any PAC code. However, solving the systematic encoding problem by computing $x_{\mathcal{A}^c}$ using (6) involves $O((N-K)^2)$ arithmetic operations (additions and multiplications in $\mathbb{F}_2$), which may be prohibitively complex for many applications.
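For illustration, the direct solution (6) can be checked numerically on a toy instance. The sketch below uses assumed parameters ($N = 4$, $g = (1, 1)$, $\mathcal{A} = \{2, 3\}$ in 0-based indexing, not an example from the paper), builds $G = TL$, inverts $G_{\mathcal{A},\mathcal{A}}$ over $\mathbb{F}_2$ by Gauss–Jordan elimination, and evaluates (6):

```python
def mat_mul(A, B):
    """Matrix product over GF(2)."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def mat_add(A, B):
    return [[(a + b) % 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_inv(M):
    """Inverse over GF(2) by Gauss-Jordan elimination (M assumed invertible)."""
    n = len(M)
    aug = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col])
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col and aug[r][col]:
                aug[r] = [(a + b) % 2 for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def sub(M, rows, cols):
    """Submatrix G_{rows,cols} in the paper's notation."""
    return [[M[r][c] for c in cols] for r in rows]

# Toy instance: N = 4, g = (1, 1), so T is bidiagonal; L = F (x) F.
T = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1]]
L = [[1, 0, 0, 0], [1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]]
G = mat_mul(T, L)

A, Ac = [2, 3], [0, 1]        # B^c = A, B = A^c (0-based indices)
d, f = [1, 0], [0, 1]

inv = mat_inv(sub(G, A, A))   # G_{A,A}^{-1}
# Equation (6): x_{A^c} = d G_{A,A}^{-1} G_{A,A^c}
#                         + f (G_{A^c,A^c} + G_{A^c,A} G_{A,A}^{-1} G_{A,A^c})
t1 = mat_mul(mat_mul([d], inv), sub(G, A, Ac))
t2 = mat_mul([f], mat_add(sub(G, Ac, Ac),
                          mat_mul(mat_mul(sub(G, Ac, A), inv), sub(G, A, Ac))))
x_Ac = [(a + b) % 2 for a, b in zip(t1[0], t2[0])]
print(x_Ac)  # -> [0, 1]
```

Assembling $x$ from $x_{\mathcal{A}} = d$ and $x_{\mathcal{A}^c}$ above and checking $x = vG$ against the unique $v$ with $v_{\mathcal{B}} = f$ confirms the derivation on this instance.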
In the rest of this section, we develop a low-complexity systematic encoder for PAC codes under the assumption that the data index set $\mathcal{A}$ is chosen so that $L_{\mathcal{A}^c,\mathcal{A}} = \mathbf{0}$ is satisfied. This condition is not as restrictive as it may appear since it is satisfied by the preferred choices for the data index set $\mathcal{A}$, such as when $\mathcal{A}$ is chosen according to a polar coding design rule or a Reed–Muller design rule [1].
For clarity, we restate the systematic encoding problem considered in the rest of this section as follows. Given a data word $d \in \mathbb{F}_2^K$ and a data index set $\mathcal{A}$ for which $L_{\mathcal{A}^c,\mathcal{A}} = \mathbf{0}$, find a codeword $x \in \mathbb{F}_2^N$ so that
$$x = vTL, \quad v_{\mathcal{A}^c} = f, \quad x_{\mathcal{A}} = d. \tag{7}$$
Proposition 2.
The systematic encoding problem (7) can be solved by a method consisting of the following three steps. (i) Generate an auxiliary word $c \in \mathbb{F}_2^K$ by computing $c = d\, L_{\mathcal{A},\mathcal{A}}^{-1}$. (ii) Compute a convolution input–output pair $(v, u)$ so that
$$u = vT, \quad u_{\mathcal{A}} = c, \quad v_{\mathcal{A}^c} = f. \tag{8}$$
(iii) Obtain the systematic codeword by computing the polar transform $x = uL$.
Proof. 
The second and third steps ensure that $x = vTL$, with $v_{\mathcal{A}^c} = f$. Therefore, $x$ is a codeword in the PAC code. Moreover, we have
$$x_{\mathcal{A}} = u_{\mathcal{A}}\, L_{\mathcal{A},\mathcal{A}} + u_{\mathcal{A}^c}\, L_{\mathcal{A}^c,\mathcal{A}} = c\, L_{\mathcal{A},\mathcal{A}} = d,$$
since $L_{\mathcal{A}^c,\mathcal{A}} = \mathbf{0}$, $u_{\mathcal{A}} = c$, and $c = d\, L_{\mathcal{A},\mathcal{A}}^{-1}$. Thus, $x_{\mathcal{A}} = d$ is also satisfied, confirming that the encoding method is systematic. ☐
The above systematic encoding method calculates $v_{\mathcal{A}}$ even though the problem statement does not explicitly call for it. On the other hand, the calculation of $v_{\mathcal{A}}$ proves (implicitly) that a solution to the systematic encoding problem exists.
Next, we examine the complexity of each step of the systematic encoding method of Proposition 2.
Proposition 3.
The first and third steps of the method in Proposition 2 each have complexity $O(N \log N)$.
Proof. 
The third step $x = uL = u F^{\otimes n}$ is a polar transform operation, which is known to have complexity $O(N \log N)$ [2] thanks to the recursive structure of the polar transform. As for the first step, a direct computation of $c = d\, L_{\mathcal{A},\mathcal{A}}^{-1}$ (without exploiting the special structure of the polar transform) has complexity $O(K^2)$. A better method is to embed the calculation of $c = d\, L_{\mathcal{A},\mathcal{A}}^{-1}$ in a polar transform operation, as in systematic encoding of polar codes [3,4,5]. To that end, we recall that the inverse of the polar transform $L = F^{\otimes n}$ is itself, i.e., $L^{-1} = L$. This, combined with the condition that $L_{\mathcal{A}^c,\mathcal{A}} = \mathbf{0}$, implies that $L_{\mathcal{A},\mathcal{A}}^{-1} = L_{\mathcal{A},\mathcal{A}}$. To see this last point, note that for any two matrices $A \in \mathbb{F}_2^{N \times N}$ and $B \in \mathbb{F}_2^{N \times N}$,
$$(AB)_{\mathcal{A},\mathcal{A}} = A_{\mathcal{A},\mathcal{A}}\, B_{\mathcal{A},\mathcal{A}} + A_{\mathcal{A},\mathcal{A}^c}\, B_{\mathcal{A}^c,\mathcal{A}},$$
and let $A = L$ and $B = L^{-1} = L$. Therefore, we have $c = d\, L_{\mathcal{A},\mathcal{A}}^{-1} = d\, L_{\mathcal{A},\mathcal{A}}$. Now, prepare a vector $x' \in \mathbb{F}_2^N$ by setting $x'_{\mathcal{A}} = d$ and $x'_{\mathcal{A}^c} = \mathbf{0}$, apply a polar transform $u' = x'L$, and extract $c$ from $u'$ by setting $c = u'_{\mathcal{A}}$. This yields the desired result since
$$u'_{\mathcal{A}} = x'_{\mathcal{A}}\, L_{\mathcal{A},\mathcal{A}} + x'_{\mathcal{A}^c}\, L_{\mathcal{A}^c,\mathcal{A}} = d\, L_{\mathcal{A},\mathcal{A}}.$$
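The embedding trick in the proof above can be sketched as follows (an illustration with assumed parameters, 0-based indices; the set $\mathcal{A} = \{3, 5, 6, 7\}$ for $N = 8$ satisfies $L_{\mathcal{A}^c,\mathcal{A}} = \mathbf{0}$):

```python
def polar_transform(u):
    """x = u F^{(x)n} over GF(2): ((u1 + u2) F^{(x)(n-1)}, u2 F^{(x)(n-1)})."""
    if len(u) == 1:
        return list(u)
    half = len(u) // 2
    u1, u2 = u[:half], u[half:]
    return (polar_transform([(a + b) % 2 for a, b in zip(u1, u2)])
            + polar_transform(u2))

def aux_word(d, A, N):
    """c = d L_{A,A}^{-1} = d L_{A,A}, via one embedded polar transform."""
    xpad = [0] * N
    for pos, bit in zip(sorted(A), d):    # x'_A = d, x'_{A^c} = 0
        xpad[pos] = bit
    up = polar_transform(xpad)            # u' = x' L
    return [up[pos] for pos in sorted(A)] # c = u'_A

d, A, N = [1, 0, 1, 1], {3, 5, 6, 7}, 8
c = aux_word(d, A, N)
print(c)  # -> [0, 1, 0, 1]
```

Since $L_{\mathcal{A},\mathcal{A}}$ is its own inverse under the stated condition, applying the same routine to $c$ recovers $d$, which makes a convenient sanity check.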
Proposition 4.
The system of equations (8) in the second step of Proposition 2 can be solved by a sequential method of complexity $O(mN)$ for a PAC code with a convolution impulse response $g = (g_0, g_1, \dots, g_m)$ (where $g_0 \ne 0$ by definition of PAC codes).
Proof. 
To develop a sequential method that solves (8), we begin by rewriting the convolution Equation (1) as follows
$$u_i = g_0 v_i + g_1 v_{i-1} + \cdots + g_m v_{i-m} = v_i + s_i, \quad i = 1, 2, \dots, N, \tag{9}$$
where we used $g_0 = 1$ and have defined $s_i = g_1 v_{i-1} + \cdots + g_m v_{i-m}$ as the $i$th feed-forward variable. Please also note that in (9), we have used the convention that $v_j = 0$ for $j < 1$.
Observe that, for each $1 \le i \le N$, either $i \in \mathcal{A}$ or $i \in \mathcal{A}^c$. In the former case, we obtain $u_i$ from the constraint $u_{\mathcal{A}} = c$; in the latter case, we obtain $v_i$ from $v_{\mathcal{A}^c} = f$. Given the value of one of the elements of the pair $(v_i, u_i)$, the other can be found from the relation $u_i = v_i + s_i$. Also, observe that $s_i$ depends only on the knowledge of $(v_1, v_2, \dots, v_{i-1})$. These observations suggest a sequential method for carrying out the second step of Proposition 2. The sequential method begins at $i = 1$ with $s_1 = 0$. Either $1 \in \mathcal{A}$ and $(v_1, u_1) = (c_1, c_1)$, where $c_1$ is the first element of the auxiliary word $c$; or $1 \in \mathcal{A}^c$ and $(v_1, u_1) = (f_1, f_1)$, where $f_1$ is the first element of the frozen word $f$. In either case, we can compute $s_2$ before proceeding to the next step of the sequential method. In general, the $i$th step of the sequential method begins with $s_i$ available from the $(i-1)$th step, and one determines the missing element of the pair $(v_i, u_i)$ using the relation $u_i = v_i + s_i$. Thus, this method solves the system of equations (8). The method also provides a proof of existence and uniqueness of the solution.
The complexity of the sequential method given above is dominated by the complexity of calculating the feed-forward variables $(s_1, s_2, \dots, s_N)$. From the definition of $s_i$, it is clear that $s_i$ can be calculated using at most $m$ multiplications and $m - 1$ additions in $\mathbb{F}_2$. Thus, the overall complexity is $O(mN)$.
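The sequential method can be sketched as follows (illustrative only, with 0-based indices; the specific `A`, `g`, `c`, and `f` in the example are assumptions, and $g_0 = 1$ is assumed as in (9)):

```python
def solve_convolution(c, f, A, g, N):
    """Sequentially solve u = vT, u_A = c, v_{A^c} = f (Equation (8)).

    At step i the feed-forward variable s_i = g_1 v_{i-1} + ... + g_m v_{i-m}
    depends only on earlier v's, so the missing half of the pair (v_i, u_i)
    follows from u_i = v_i + s_i (g_0 = 1 assumed).
    """
    m = len(g) - 1
    v, u = [0] * N, [0] * N
    it_c, it_f = iter(c), iter(f)
    for i in range(N):
        s = sum(g[j] * v[i - j] for j in range(1, min(i, m) + 1)) % 2
        if i in A:
            u[i] = next(it_c)
            v[i] = (u[i] + s) % 2
        else:
            v[i] = next(it_f)
            u[i] = (v[i] + s) % 2
    return v, u

v, u = solve_convolution(c=[0, 1, 0, 1], f=[0, 0, 0, 0],
                         A={3, 5, 6, 7}, g=(1, 0, 1, 1), N=8)
print(v, u)
```

Each loop iteration touches at most $m$ earlier entries of `v`, matching the $O(mN)$ bound.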
Remark 1.
An inspection of the above proof will show that the sequential method of Proposition 4 can be used to solve the system of equations (8) for any invertible upper-triangular (IUT) matrix $T$; the Toeplitz property is not essential.
The complexity $O(mN)$ of the sequential method of Proposition 4 corresponds to a significant saving when $m \ll N$. When $m \ll N$ does not hold, it may be worthwhile to work with the inverse of $T$. To discuss this, we first cite a well-known result; see, e.g., [6].
Proposition 5.
The class of all $N$-by-$N$ IUT Toeplitz matrices forms a group under matrix multiplication. Let $T \in \mathbb{F}_2^{N \times N}$ be an IUT Toeplitz matrix with its first row given by $g = (g_0, g_1, \dots, g_{N-1}) \in \mathbb{F}_2^N$. (If $g$ has span $m + 1$, then $g_i = 0$ for $m < i \le N - 1$.) Then, $T^{-1} \in \mathbb{F}_2^{N \times N}$ is an IUT Toeplitz matrix with first row given by $h = (h_0, h_1, \dots, h_{N-1}) \in \mathbb{F}_2^N$, where $h_0 = 1/g_0$ and
$$h_k = \frac{1}{g_0} \sum_{i=1}^{k} g_i\, h_{k-i}, \quad k = 1, 2, \dots, N-1.$$
Proposition 5 allows us to recast the convolution problem (8) in an inverted form: compute a convolution input–output pair $(v, u)$ so that
$$v = u T^{-1}, \quad v_{\mathcal{A}^c} = f, \quad u_{\mathcal{A}} = c. \tag{10}$$
The inverted problem (10) has the same form as the original problem (8) with the roles of $v$ and $u$ reversed. Therefore, it can be solved using the same sequential method described above. There may be an advantage in solving the inverted problem if the span of the first row of $T^{-1}$ is shorter than that of $T$. For example, let $T \in \mathbb{F}_2^{16 \times 16}$ be an IUT Toeplitz matrix with first row $g = (1,1,1,1,0,0,0,0,1,1,0,1,1,0,1,0)$, which has a span of 15. The inverse $T^{-1} \in \mathbb{F}_2^{16 \times 16}$ is the IUT Toeplitz matrix with first row $h = (1,1,0,0,1,1,0,0,0,0,1,0,0,0,0,0)$, which has a span of 11.
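The recursion of Proposition 5 is straightforward to implement; the sketch below reproduces the span-15/span-11 example above (over $\mathbb{F}_2$, where $1/g_0 = 1$ and subtraction coincides with addition):

```python
def inverse_first_row(g, N):
    """First row h of T^{-1} for an IUT Toeplitz T over GF(2) with first row g:
    h_0 = 1 and h_k = g_1 h_{k-1} + g_2 h_{k-2} + ... + g_k h_0 (mod 2)."""
    g = list(g) + [0] * (N - len(g))
    h = [1] + [0] * (N - 1)
    for k in range(1, N):
        h[k] = sum(g[i] * h[k - i] for i in range(1, k + 1)) % 2
    return h

g = (1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0)
h = inverse_first_row(g, 16)
print(h)  # -> [1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
```

As a check, the convolution of $g$ and $h$ gives the first row of the identity matrix, confirming $TT^{-1} = I$ on this example.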
We end this section by noting that for hardware implementations of the convolution operation in PAC encoding (both for the systematic and non-systematic cases), one can use the shift-register circuits that are commonly used in encoding algebraic codes. In particular, the convolution operation $u = v * g$ (or, equivalently, the transform $u = vT$) can be implemented as shown in Figure 2. A version of the same circuit, with the left-most stage eliminated, generates the feed-forward variable $s_i$ at point A when $v_{i-1}$ is provided as input at point A.

4. Shortening of PAC Codes

PAC codes have native lengths that are powers of two, $N = 2^n$ for some $n \ge 1$. In many applications, it is necessary to adjust the code length to some desired value other than $2^n$. One method for adjusting the code length is code shortening, in which a portion $x_{\mathcal{C}}$ of the codeword $x$ is constrained to a predetermined value, say zero, and is not transmitted, effectively reducing the code length from $N$ to $N - |\mathcal{C}|$. A common method of code shortening for polar codes is to choose the set $\mathcal{C}$ so that $L_{\mathcal{C}^c,\mathcal{C}} = \mathbf{0}$ [7,8]. The systematic encoding method for PAC codes presented above can be used to implement such a shortening method.
Suppose we desire shortening of a PAC code in connection with non-systematic encoding. We partition the index set $\{1, 2, \dots, N\}$ into three disjoint sets: a data index set $\mathcal{A}$, a frozen index set $\mathcal{B}$, and a shortening index set $\mathcal{C}$, subject to the condition $L_{\mathcal{A} \cup \mathcal{B}, \mathcal{C}} = \mathbf{0}$. Then, we apply the systematic encoding method presented above to the problem
$$x = vTL, \quad (v_{\mathcal{A}}, v_{\mathcal{B}}) = (d, f), \quad x_{\mathcal{C}} = \mathbf{0}. \tag{11}$$
In other words, the data word $d$ is treated as if it were part of the frozen portion of the convolution input word $v$, and the portion $x_{\mathcal{C}}$ of the codeword is treated as if it were the data part of the codeword in a systematic PAC code.
If, on the other hand, we desire to shorten a systematic PAC code, then the index set $\{1, 2, \dots, N\}$ is partitioned into a data index set $\mathcal{A}$, a frozen index set $\mathcal{B}$, and a shortening index set $\mathcal{C}$, subject to the condition $L_{\mathcal{B}, \mathcal{A} \cup \mathcal{C}} = \mathbf{0}$, and we apply the above systematic encoding method to the problem
$$x = vTL, \quad v_{\mathcal{B}} = f, \quad (x_{\mathcal{A}}, x_{\mathcal{C}}) = (d, \mathbf{0}). \tag{12}$$
In other words, we treat $(x_{\mathcal{A}}, x_{\mathcal{C}})$ in its entirety as the data part in a systematic PAC code.
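As an illustration of shortening via problem (11), the sketch below uses assumed parameters ($N = 8$, $g = (1, 0, 1, 1)$, and, in 0-based indexing, $\mathcal{C} = \{5, 6, 7\}$, $\mathcal{A} = \{3, 4\}$, $\mathcal{B} = \{0, 1, 2\}$; the condition $L_{\mathcal{A} \cup \mathcal{B}, \mathcal{C}} = \mathbf{0}$ holds because $L$ is lower triangular). It runs the three-step systematic encoder with $\mathcal{C}$ playing the role of the data index set and a zero "data" word, so the positions in $\mathcal{C}$ come out zero and would not be transmitted:

```python
def polar_transform(u):
    """x = u F^{(x)n} over GF(2): ((u1 + u2) F^{(x)(n-1)}, u2 F^{(x)(n-1)})."""
    if len(u) == 1:
        return list(u)
    half = len(u) // 2
    u1, u2 = u[:half], u[half:]
    return (polar_transform([(a + b) % 2 for a, b in zip(u1, u2)])
            + polar_transform(u2))

def systematic_encode(d, f, A, g, N):
    """Three-step systematic PAC encoder (Proposition 2); assumes L_{A^c,A} = 0."""
    A = sorted(A)
    # Step (i): c = d L_{A,A}^{-1} via an embedded polar transform.
    xpad = [0] * N
    for pos, bit in zip(A, d):
        xpad[pos] = bit
    up = polar_transform(xpad)
    c = [up[pos] for pos in A]
    # Step (ii): sequentially solve u = vT, u_A = c, v_{A^c} = f (g_0 = 1).
    m, v, u = len(g) - 1, [0] * N, [0] * N
    it_c, it_f = iter(c), iter(f)
    for i in range(N):
        s = sum(g[j] * v[i - j] for j in range(1, min(i, m) + 1)) % 2
        if i in A:
            u[i] = next(it_c)
            v[i] = (u[i] + s) % 2
        else:
            v[i] = next(it_f)
            u[i] = (v[i] + s) % 2
    # Step (iii): x = u L.
    return polar_transform(u)

# Shortening per (11): C = {5, 6, 7} acts as the "data" set with data word 0;
# the frozen word carries the real (d, f) on A = {3, 4} and B = {0, 1, 2}.
d, f = [1, 0], [0, 0, 0]
frozen = f + d  # v on C^c = {0, ..., 4} in increasing index order
x = systematic_encode(d=[0, 0, 0], f=frozen, A={5, 6, 7}, g=(1, 0, 1, 1), N=8)
print(x)  # positions in C are 0 and can be punctured before transmission
```

The same routine, called with the roles in (12) instead, would shorten a systematic PAC code.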

Funding

This research received no external funding.

Acknowledgments

The author acknowledges helpful comments by reviewers.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Arıkan, E. From sequential decoding to channel polarization and back again. arXiv 2019, arXiv:1908.09594.
  2. Arıkan, E. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
  3. Arıkan, E. Systematic polar coding. IEEE Commun. Lett. 2011, 15, 860–862.
  4. Sarkis, G.; Tal, I.; Giard, P.; Vardy, A.; Thibeault, C.; Gross, W.J. Flexible and low-complexity encoding and decoding of systematic polar codes. IEEE Trans. Commun. 2016, 64, 2732–2745.
  5. Vangala, H.; Hong, Y.; Viterbo, E. Efficient algorithms for systematic polar encoding. IEEE Commun. Lett. 2016, 20, 17–20.
  6. Commenges, D.; Monsion, M. Fast inversion of triangular Toeplitz matrices. IEEE Trans. Autom. Control 1984, 29, 250–251.
  7. Wang, R.; Liu, R. A novel puncturing scheme for polar codes. IEEE Commun. Lett. 2014, 18, 2081–2084.
  8. Bioglio, V.; Gabry, F.; Land, I. Low-complexity puncturing and shortening of polar codes. In Proceedings of the 2017 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), San Francisco, CA, USA, 19–22 March 2017; pp. 1–6.
Figure 1. BER comparison for systematic and non-systematic PAC codes.
Figure 2. Convolution circuit.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Arıkan, E. Systematic Encoding and Shortening of PAC Codes. Entropy 2020, 22, 1301. https://doi.org/10.3390/e22111301
