
Polar Code Parameter Recognition Algorithm Based on Dual Space

1 Institute of Information Fusion, Naval Aviation University, Yantai 264001, China
2 Southwest Institute of Electronics and Telecommunications, Chengdu 610000, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(16), 2511; https://doi.org/10.3390/electronics11162511
Submission received: 9 July 2022 / Revised: 29 July 2022 / Accepted: 8 August 2022 / Published: 11 August 2022

Abstract:
To improve the performance of polar code parameter recognition in the fields of intelligent communication, communication detection, and network countermeasures, we propose a new recognition scheme for the additive white Gaussian noise (AWGN) channel. The scheme recasts the parameter recognition problem as a hypothesis test and exploits the check relationship between the received codewords and the dual space determined by the correct parameters. First, a sub-matrix is obtained by removing the frozen-bit-index rows of the polar code generator matrix, and its dual matrix is calculated. To test the check relationship between the dual matrix and the codewords, the average log-likelihood ratio of the codewords is introduced as a test statistic, and the corresponding decision threshold is deduced. Next, the degree of conformity of polar code recognition is defined, and the minimum code length and code rate corresponding to the highest degree of conformity are chosen to calculate the information-bit positions with a Gaussian approximation (GA) construction algorithm. Finally, the chosen code length, code rate, and corresponding information-bit positions are output as the recognition results. Simulation results show that the algorithm achieves effective parameter recognition in both high- and low-signal-to-noise-ratio (SNR) environments and that its recognition performance improves with decreasing code rate and code length. The parameter recognition rate for a code length of 128 and a code rate of 1/5 is close to 100% when the SNR is 4 dB, and the algorithm complexity increases almost linearly with decreasing code rate and increasing length of the intercepted data.

1. Introduction

To reduce interference in the communication process and enable a system to automatically check or correct errors to improve the reliability of data transmission, channel coding technology is widely used in various wireless communication systems. Currently, commonly used coding techniques include Reed–Solomon (RS) codes, low-density parity check (LDPC) codes, turbo codes, Bose–Chaudhuri–Hocquenghem (BCH) codes, convolutional codes, and polar codes. Among these, polar codes have been proven to be able to reach the Shannon limit under the binary discrete memoryless channel (B-DMC) and binary erasure channel (BEC) when the code length tends to infinity [1]. Because of their excellent short-code performance, polar codes were adopted as the coding scheme for the control channel in 5G enhanced mobile broadband (eMBB) scenarios at the 3GPP RAN1 #87 meeting in 2016 [2].
Channel coding parameter recognition is a key link in the fields of intelligent communication, communication detection, and network confrontation, and has thus attracted extensive attention from experts and scholars in related fields. Several conventional parameter recognition algorithms for convolutional codes were proposed in [3,4]. Others use posterior probability to recognize code parameters: the synthetic posterior probability proposed in [5] was used to recognize linear code parameters, and posterior probability was used in [6] to identify LDPC code parameters. On this basis, maximum likelihood based on comprehensive posterior probability was proposed in [7]. There are also many other conventional parameter-recognition algorithms, e.g., for BCH [8] and RSC [9] codes. With the development of computer vision, machine learning has received increasing attention, and by applying it to channel coding parameter recognition, scholars in the field of communication have already achieved rich results. Machine learning was used in [10,11] to realize blind recognition of convolutional codes. An improved deep convolutional network was used in [12] to realize parameter recognition of convolutional codes and turbo codes, but the experimental settings were relatively simple and comprehensive parameter experiments were lacking. In [13], the test vector of the RS code was determined through equivalence over the binary field, and a joint parameter recognition model was established to avoid the calculation of high-order spectral components, which effectively improves the recognition performance for RS codes. However, machine-learning algorithms all face the test of robustness under different parameters, and few studies exist on the recognition of polar code parameters.
Polar code parameter recognition under the erasure channel was studied in [14], in which a polar code supervision matrix was defined and the product of the supervision matrix and the hard-decision codeword matrix was used as the check standard to determine the coding parameters. Obviously, hard-decision codewords lose a significant amount of useful information compared with the soft-decision sequence, so the recognition performance of that algorithm has substantial room for improvement. The open-set recognition problem of polar code coding under a Gaussian channel was studied in [15], in which the rows of the Kronecker power matrix were eliminated one by one to obtain its dual vectors, and parameter recognition was carried out row by row using the check relationship between the dual vectors and the soft-decision codewords, which leads to a large amount of calculation. Ref. [16] introduced the likelihood difference to derive standard polar code parameter recognition and obtained improved performance, but the algorithm fails for unstructured polar code recognition.
To solve the problems encountered in the practical process of polar code parameter recognition and further improve the performance of the algorithm, in this paper we study a polar code parameter recognition algorithm based on dual space. Since the additive white Gaussian noise (AWGN) channel model is the first and most important stage of 5G channel coding simulation, we study the problem of blind recognition of polar code parameters under the AWGN channel.
The rest of this paper is organized as follows. The basic knowledge of polar codes is reviewed in Section 2. In Section 3, we describe the principle of polar code parameter recognition, the model system, and the calculation details of the decision threshold, and propose the parameter recognition algorithm. Section 4 provides the simulation results of algorithm effectiveness and influence of different parameters, followed by conclusions in Section 5.
The main contributions of this paper are the following.
(1)
The two basic principles of polar code parameter recognition are given and proven.
(2)
The polar code parameter recognition problem is transformed into a hypothesis testing problem, and a complete hypothesis-testing process is provided. The average log-likelihood ratio (LLR) is introduced as a statistic, and the corresponding threshold is derived in detail.
(3)
An index of sub-channels transmitting information bits (also called information-bit position) is directly constructed using a Gaussian approximation (GA) method, which helps avoid errors involving correct information bit numbers but incorrect information bit positions.

2. Preliminaries

Polar codes are currently the only coding scheme proven to achieve the Shannon limit. Consider the B-DMC with the transition probability shown in Figure 1: W: X → Y, X = {0, 1}, P = W(y|x), x ∈ X, y ∈ Y.
When X takes 0 and 1 according to the same probability, the information capacity is defined as the mutual information of the input and output [17]:
C(W) = I(X;Y) = Σ_{y∈Y} Σ_{x∈X} (1/2) W(y|x) log₂ [ W(y|x) / ( (1/2)W(y|0) + (1/2)W(y|1) ) ]
At this time, 0 ≤ C(W) ≤ 1; when C(W) = 1, W is an excellent channel, and when C(W) = 0, W is a useless channel. The channel polarization phenomenon is an important guarantee for the realization of reliable information transmission: it turns most sub-channels with 0 < C(W) < 1 into either sub-channels for transmitting information bits with C(W) → 1 or sub-channels for transmitting frozen bits with C(W) → 0. The B-DMC channel capacity is measured by the Bhattacharyya parameter [17]:
Z(W) = Σ_{y∈Y} √( W(y|0) W(y|1) ),
log₂ ( 2 / (1 + Z(W)) ) ≤ C(W) ≤ √( 1 − Z(W)² ).
The larger the Bhattacharyya parameter, the smaller the channel capacity. The capacity of the channel can be quantitatively described with the Bhattacharyya parameter, so the polarization effect of the channel—that is, the process of channel combining and separating—can be quantitatively described as well. The schematic of combining and separating n channels is presented in Figure 2.
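As a concrete illustration of these bounds, the following sketch evaluates C(W) and Z(W) for a binary symmetric channel, a simple B-DMC; the BSC example and the crossover probabilities are our own assumptions, not taken from the paper.

```python
import math

def bsc_capacity(p):
    """Symmetric capacity C(W) of a BSC with crossover probability p, in bits."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # binary entropy H(p)
    return 1.0 - h

def bsc_bhattacharyya(p):
    """Z(W) = sum_y sqrt(W(y|0) W(y|1)) for the BSC."""
    return 2.0 * math.sqrt(p * (1 - p))

# the bounds log2(2/(1+Z)) <= C(W) <= sqrt(1 - Z^2) hold for every crossover p
for p in (0.01, 0.1, 0.25, 0.4):
    C, Z = bsc_capacity(p), bsc_bhattacharyya(p)
    assert math.log2(2.0 / (1.0 + Z)) <= C <= math.sqrt(1.0 - Z * Z)
```

As the crossover probability grows, Z(W) increases toward 1 while C(W) shrinks toward 0, matching the inverse relationship stated above.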

2.1. Channel Combination

When n sub-channels W are combined into one W n channel, the channel capacity of W n is
C ( W n ) = I ( U ; Y ) = I ( X ; Y ) ,
where U = [ u 1 , u 2 , , u n ] , Y = [ y 1 , y 2 , , y n ] , and X = [ x 1 , x 2 , , x n ] . Because sub-channels are exactly the same, u 1 , u 2 , , u n { 0 , 1 } are independent and identically distributed when sub-channels are B-DMC channels, and thus
C ( W n ) = n I ( X ; Y ) = n C ( W ) .
After the channels are merged, the total capacity remains unchanged. At this time, the channel transition probability is
W n ( y 1 n | u 1 n ) = W n ( y 1 n | u 1 n G n ) ,
where y₁ⁿ ∈ Yⁿ, u₁ⁿ ∈ Uⁿ, G_n = B_n F^{⊗log₂ n}, B_n is the bit-reversal permutation matrix, F = [1 0; 1 1], and ⊗ is the Kronecker power, which can be calculated as in [18].
The polar code, which completes channel combination and splitting through the corresponding coding and decoding processes, uses the channel polarization effect to realize reliable transmission. The process of channel combining is realized by polar code coding, and the index set of the excellent sub-channels used to transmit information bits is called the information-bit positions. Usually, k information bits are transmitted through the k sub-channels with the largest channel capacity (that is, the smallest Bhattacharyya parameter). The index set of these k sub-channels is written as A = {π(1), π(2), …, π(k)} ⊆ {1, 2, …, n}, and its complementary set is denoted as A^c = {1, 2, …, n} \ A, so the codewords with length n can be written as
x₁ⁿ = u_A G_n(A) ⊕ u_{A^c} G_n(A^c),
where G_n(A) is the sub-matrix of G_n with row indices in A, u_A denotes the information bits, and u_{A^c} denotes the frozen bits, which are always set to 0, so (7) can be simplified as x₁ⁿ = u_A G_n(A).
The goal of blind polar code recognition is to obtain the right G_n(A), for which the code length n, the information-bit positions A, and the number of information bits k are needed. Note that when n and k are determined, the code rate R can be calculated as R = k/n.
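The encoding step x₁ⁿ = u_A G_n(A) can be sketched as follows. This is a minimal illustration: the bit-reversal permutation B_n is omitted, and the example positions A are chosen arbitrarily rather than by a construction algorithm.

```python
import numpy as np

def kron_power(F, m):
    """m-fold Kronecker power F^{(x)m}."""
    G = np.array([[1]], dtype=np.int64)
    for _ in range(m):
        G = np.kron(G, F)
    return G

def polar_encode(u_A, A, n):
    """x = u_A G_n(A): frozen bits fixed to 0, information bits u_A placed at
    the (0-based) positions A.  The bit-reversal permutation B_n is omitted."""
    F = np.array([[1, 0], [1, 1]], dtype=np.int64)
    G = kron_power(F, int(np.log2(n)))
    u = np.zeros(n, dtype=np.int64)
    u[sorted(A)] = u_A
    return (u @ G) % 2

x = polar_encode(np.array([1, 0, 1]), A=[3, 5, 7], n=8)
print(x)  # → [0 0 0 0 1 1 1 1]
```

Because the frozen bits are zero, the codeword is simply the GF(2) sum of the generator-matrix rows indexed by the nonzero information bits.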

2.2. Polar Code Construction in Gaussian Channel

The calculation of the exact value of the Bhattacharyya parameter is only applicable to polar code construction for the BEC; it is difficult to obtain its specific value for other channels. Usually, a GA construction algorithm with low complexity and high precision is used for the AWGN channel. Its basic idea is to approximate the LLR of every sub-channel as a Gaussian random variable whose variance is twice its mean, to calculate the LLR mean of each sub-channel recursively from the LLR mean of the underlying channel, and then to rank the sub-channels by reliability. The LLR expressions for the channel transition probabilities are
L_{2n}^{(2i−1)}(y₁^{2n}, u₁^{2i−2}) = ln [ W_{2n}^{(2i−1)}(y₁^{2n}, u₁^{2i−2} | u_{2i−1} = 0) / W_{2n}^{(2i−1)}(y₁^{2n}, u₁^{2i−2} | u_{2i−1} = 1) ] = ln [ Σ_{u_{2i}} W_n^{(i)}(y₁ⁿ, γ | u_{2i−1} ⊕ u_{2i}) β / Σ_{u_{2i}} W_n^{(i)}(y₁ⁿ, γ | u_{2i−1} ⊕ u_{2i} ⊕ 1) β ] = ln [ ( e^{L_n^{(i)}(y₁ⁿ, γ) + L_n^{(i)}(y_{n+1}^{2n}, u_{1,e}^{2i−2})} + 1 ) / ( e^{L_n^{(i)}(y₁ⁿ, γ)} + e^{L_n^{(i)}(y_{n+1}^{2n}, u_{1,e}^{2i−2})} ) ] = 2 tanh⁻¹( tanh( L_n^{(i)}(y₁ⁿ, γ)/2 ) tanh( L_n^{(i)}(y_{n+1}^{2n}, u_{1,e}^{2i−2})/2 ) ),
L_{2n}^{(2i)}(y₁^{2n}, u₁^{2i−1}) = ln [ W_{2n}^{(2i)}(y₁^{2n}, u₁^{2i−1} | u_{2i} = 0) / W_{2n}^{(2i)}(y₁^{2n}, u₁^{2i−1} | u_{2i} = 1) ] = (−1)^{u_{2i−1}} L_n^{(i)}(y₁ⁿ, γ) + L_n^{(i)}(y_{n+1}^{2n}, u_{1,e}^{2i−2}),
where γ = u_{1,o}^{2i−2} ⊕ u_{1,e}^{2i−2}, β = W_n^{(i)}(y_{n+1}^{2n}, u_{1,e}^{2i−2} | u_{2i}), and L₁^{(i)}(y_i) = ln [ W(y_i|0) / W(y_i|1) ]. When the all-zero codeword is transmitted, L₁^{(i)}(y_i) ~ N(2/σ², 4/σ²), and the instantaneous values of formulas (8) and (9) can be regarded as Gaussian random variables with a variance that is twice the mean; that is, D[L_{2n}^{(i)}] = 2E[L_{2n}^{(i)}], where E[·] denotes the mean and D[·] the variance of a variable. According to the calculation rules of [19], the following formulas can be obtained:
E[L_{2n}^{(2i−1)}] = φ⁻¹( 1 − (1 − φ(E[L_n^{(i)}]))² ),
E[L_{2n}^{(2i)}] = 2 E[L_n^{(i)}],
where
φ(x) = 1 − (1/√(4πx)) ∫_{−∞}^{+∞} tanh(u/2) e^{−(u−x)²/(4x)} du for x > 0, and φ(x) = 1 for x = 0.
The error probability of the i-th sub-channel is given approximately as [20] Q( √( E[L_{2n}^{(i)}]/2 ) ), 1 ≤ i ≤ 2n.
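The GA recursion (10)–(11) can be sketched as follows. Note that the exact φ of (12) is replaced here by a widely used closed-form approximation (an assumption on our part), and the sub-channel ordering is the natural, non-bit-reversed one.

```python
import math

def phi(x):
    """Closed-form approximation to the phi(x) of Eq. (12) (the exact form is an
    integral); this piecewise fit is a common engineering substitute."""
    if x == 0:
        return 1.0
    if x < 10:
        return math.exp(-0.4527 * x**0.86 + 0.0218)
    return math.sqrt(math.pi / x) * math.exp(-x / 4) * (1 - 10 / (7 * x))

def phi_inv(y, lo=1e-9, hi=1e4):
    """Invert phi by bisection (phi is monotonically decreasing)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if phi(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ga_construct(n, k, sigma2):
    """Information-bit positions A (0-based, natural order) for code length n,
    k information bits, and AWGN noise variance sigma2, via Eqs. (10)-(11)."""
    mean = [2.0 / sigma2]                 # LLR mean of the underlying channel
    while len(mean) < n:
        nxt = []
        for m in mean:
            nxt.append(phi_inv(1 - (1 - phi(m)) ** 2))  # Eq. (10)
            nxt.append(2 * m)                           # Eq. (11)
        mean = nxt
    # a larger LLR mean means a more reliable sub-channel: keep the k best
    order = sorted(range(n), key=lambda i: mean[i], reverse=True)
    return sorted(order[:k])

A = ga_construct(n=8, k=4, sigma2=0.5)
```

The recursion doubles the list of LLR means at every stage, so an n-bit construction needs only O(n) evaluations of φ and φ⁻¹.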

3. Principle and Method of Polar Code Parameter Recognition

As the first step of channel model simulation, the simulation of polar codes in the AWGN channel has already attracted sufficient attention. According to the polar code coding structure detailed in the preceding section, the parameters that must be determined for polar code recognition under Gaussian channels include the polar code length, the code rate (or the number of information bits), and the information-bit positions. When the code length and rate are determined and the channel information is known, the information-bit positions can be determined by the GA method [21], so that the generator matrix can be determined; the channel parameters required by the GA construction method can be easily estimated from the received signal [22,23].

3.1. Recognition Principle

Theorem 1.
Denote the generator matrix of the polar code as G_n and the set of information-bit positions as A = {π(1), π(2), …, π(k)}. Let the new matrix G be obtained by removing the (i, i+a, …, i+c)-th row vectors of G_n, and let H_n be its dual space. Then H_n is orthogonal to the codeword space if and only if i, i+a, …, i+c ∉ A.
Proof. 
Let g₁, g₂, …, g_n represent the row vectors of G_n. After removing the row vectors with indices i, i+a, …, i+c, G is obtained, and the space spanned by the row vectors of G is V₁ = span{g₁, …, g_{i−1}, g_{i+1}, …, g_{i+a−1}, g_{i+a+1}, …, g_{i+c−1}, g_{i+c+1}, …, g_n}. The dual space of G is H_n = span{h_i, h_{i+a}, …, h_{i+c}}, and the space spanned by the rows of G_n with indices in A is V₂ = span{g_{π(1)}, g_{π(2)}, …, g_{π(k)}}. When i, i+a, …, i+c ∉ A, we have V₂ ⊆ V₁ and therefore V₁^⊥ ⊆ V₂^⊥; since H_n = V₁^⊥, it follows that H_n ⊆ V₂^⊥; that is, H_n is orthogonal to the polar codewords. If any of i, i+a, …, i+c belongs to the set A, then V₂ ⊄ V₁; that is, H_n is not orthogonal to the polar codewords. □
To explain the above theorem more intuitively: a candidate code length n is selected to calculate the generator matrix G_n; then, some rows are removed according to a candidate information set to obtain G = G_n(A). If n is the real code length of the codewords and A, obtained by the GA algorithm, is the real set of information-bit positions, then the dual space H_n is orthogonal to the intercepted codewords. This is the basic principle we use to identify polar code parameters.
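Theorem 1 can be checked numerically. The sketch below is our own illustration (the GF(2) null-space routine and the example index set A are assumptions, and the bit-reversal permutation is omitted): it builds G_n(A), computes a basis H of its dual space, and verifies that codewords are orthogonal to H exactly when the removed rows are the frozen ones.

```python
import numpy as np

def gf2_nullspace(M):
    """Rows of the returned array form a basis of {h : M h = 0 over GF(2)}."""
    M = M.copy() % 2
    rows, cols = M.shape
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]          # bring a pivot into place
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]               # eliminate the column elsewhere
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(cols) if c not in pivots):
        h = np.zeros(cols, dtype=np.int64)
        h[f] = 1
        for i, c in enumerate(pivots):
            h[c] = M[i, f]                 # back-substitute over GF(2)
        basis.append(h)
    return np.array(basis)

F = np.array([[1, 0], [1, 1]], dtype=np.int64)
G = F
for _ in range(2):                     # G_8 = F^{(x)3}, bit reversal omitted
    G = np.kron(G, F)
A = [3, 5, 6, 7]                       # assumed information-bit positions
Gp = G[A, :]                           # G_n(A): frozen-bit rows removed
H = gf2_nullspace(Gp)                  # basis of the dual space H_n
x = np.array([1, 1, 0, 1]) @ Gp % 2    # an arbitrary codeword
assert not np.any(x @ H.T % 2)         # correct rows removed: orthogonal
wrong = gf2_nullspace(G[[2, 5, 6, 7], :])
assert np.any(x @ wrong.T % 2)         # wrong index set: orthogonality fails
```

The negative check mirrors the "only if" direction of the theorem: once an information row is excluded from the candidate set, some dual vector fails the parity check.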

3.2. Model System

Assume that N polar codewords c = [c_{1,1}, c_{1,2}, …, c_{1,n}, …, c_{N,1}, c_{N,2}, …, c_{N,n}] with code length n are transmitted. After binary phase shift keying (BPSK) modulation and transmission over the AWGN channel, the soft-decision sequence r = [r_{1,1}, r_{1,2}, …, r_{1,n}, …, r_{N,1}, r_{N,2}, …, r_{N,n}] is intercepted, and the constructed codeword matrix is
R = [ r_{1,1} r_{1,2} … r_{1,n} ; r_{2,1} r_{2,2} … r_{2,n} ; ⋮ ; r_{N,1} r_{N,2} … r_{N,n} ],
where
r_{i,j} = 1 − 2c_{i,j} + n_{i,j},  n_{i,j} ~ N(0, σ²).
The model system is shown in Figure 3.
The matrix formed by all the column vectors in H n is
H = [ h_{1,i} h_{1,i+a} … h_{1,i+c} ; h_{2,i} h_{2,i+a} … h_{2,i+c} ; ⋮ ; h_{n,i} h_{n,i+a} … h_{n,i+c} ].
Let T = R H , so that
T = [ Σ_{t=1}^{n} r_{1,t}h_{t,i}  Σ_{t=1}^{n} r_{1,t}h_{t,i+a}  …  Σ_{t=1}^{n} r_{1,t}h_{t,i+c} ; Σ_{t=1}^{n} r_{2,t}h_{t,i}  Σ_{t=1}^{n} r_{2,t}h_{t,i+a}  …  Σ_{t=1}^{n} r_{2,t}h_{t,i+c} ; ⋮ ; Σ_{t=1}^{n} r_{N,t}h_{t,i}  Σ_{t=1}^{n} r_{N,t}h_{t,i+a}  …  Σ_{t=1}^{n} r_{N,t}h_{t,i+c} ].
In the ideal state, T is an all-zero matrix—that is, the value of each element in the matrix T is 0—but in an actual communication system, the codeword will be interfered with by noise, so even if H n is orthogonal to the original polar codeword, T will not be an all-zero matrix. To measure whether H n is orthogonal to the original polar codeword under the AWGN channel, we consider the probability that the K th column vector h K = [ h 1 , K , h 2 , K , , h n , K ] T in H n is orthogonal to the i th codeword vector r i = [ r i , 1 , r i , 2 , , r i , n ] :
P i , K = Pr ( t = 1 n r i , t h t , K = 0 | r i ) .
Let l i , t denote the LLR of r i , t :
l_{i,t} = ln [ Pr(c_{i,t} = 0 | r_{i,t}) / Pr(c_{i,t} = 1 | r_{i,t}) ] = 2r_{i,t}/σ².
When the code weight of the K-th column vector h_K is w and the positions of the 1s in h_K are t₁, t₂, …, t_w, the LLR of P_{i,K} is
AL_{i,K} = ln [ P_{i,K} / (1 − P_{i,K}) ] ≈ Π_{t=t₁}^{t_w} sign(l_{i,t}) · min_{t=t₁,…,t_w} |l_{i,t}|.
Let the column numbers of H n be N l ; then, the average LLR of the orthogonal probabilities of N l column vectors and N codewords is defined as follows:
AL = (1/N_l)(1/N) Σ_{K=1}^{N_l} Σ_{i=1}^{N} AL_{i,K}.
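The statistic of (19)–(20) is essentially the min-sum check LLR averaged over all dual vectors and all codewords. A minimal sketch follows; the function name and the example values are our own illustration.

```python
import numpy as np

def average_llr_statistic(R_soft, H, sigma2):
    """AL of Eqs. (19)-(20): min-sum approximation of the check LLR for every
    dual vector (column of H) and every received word, then averaged."""
    L = 2.0 * R_soft / sigma2               # bit LLRs, Eq. (18)
    N, Nl = R_soft.shape[0], H.shape[1]
    total = 0.0
    for K in range(Nl):
        idx = np.flatnonzero(H[:, K])       # positions of the 1s in h_K
        sub = L[:, idx]                     # (N, w) LLRs entering the check
        sign = np.prod(np.sign(sub), axis=1)
        mag = np.min(np.abs(sub), axis=1)
        total += np.sum(sign * mag)
    return total / (N * Nl)

R_soft = np.array([[0.9, 1.1, -0.2, 0.8]])  # one received word (hypothetical)
H = np.array([[1], [1], [0], [1]])          # one dual vector, weight w = 3
al = average_llr_statistic(R_soft, H, sigma2=0.5)
print(al)  # → 3.2 (all three check LLRs positive, smallest magnitude 3.2)
```

Flipping the sign of a single bit inside the check turns the product of signs negative and makes the contribution of that codeword negative, which is what drives AL below the threshold for wrong parameter candidates.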
Let AL be the decision statistic for whether the dual space matrix H_n is orthogonal to the codeword matrix R, and make the following assumptions:
H0: 
H n is not orthogonal to the codeword matrix R .
H1: 
H n is orthogonal to the codeword matrix R .
All possible dual matrices for the intercepted codeword will be constructed and tested to ascertain whether any one of them satisfies the relationship between the decision statistic and decision threshold in the hypothesis test to find the dual matrix that is orthogonal to the codeword. When the correct dual matrix is found, the constructing code length and rate are considered the real parameters of the intercepted codeword, and the information-bit positions A can be calculated by the GA algorithm. Then, the generator matrix can be uniquely determined, and the parameter recognition is finally realized.
Finally, the polar code parameter recognition problem is transformed into a hypothesis-testing problem, and the key is to find the decision threshold.

3.3. Decision Threshold

Let X i = Π t = t 1 t w s i g n ( l i , t ) and Y i = min t = t 1 , , t w ( | l i , t | ) .
Theorem 2.
Suppose X₁, X₂, …, X_J are J independent random variables whose distribution functions (DFs) are F_{X₁}(x), F_{X₂}(x), …, F_{X_J}(x), respectively; the DF of Z = min{X₁, X₂, …, X_J} is then
F_Z(z) = 1 − Π_{i=1}^{J} (1 − F_{X_i}(z)).
The probability density function (PDF) of Z = min{X₁, X₂, …, X_J} is
f_Z(z) = Σ_{j=1}^{J} f_{X_j}(z) Π_{i=1,i≠j}^{J} (1 − F_{X_i}(z)).
Proof. 
From the definition of the DF, we know that
F_Z(z) = Pr(Z ≤ z) = 1 − Pr(Z > z).
That is to say,
F_Z(z) = 1 − Pr(X₁ > z) Pr(X₂ > z) ⋯ Pr(X_J > z)
and, in turn,
F_Z(z) = 1 − (1 − Pr(X₁ ≤ z))(1 − Pr(X₂ ≤ z)) ⋯ (1 − Pr(X_J ≤ z))
and
F_Z(z) = 1 − Π_{i=1}^{J} (1 − F_{X_i}(z)).
Taking the derivative of F_Z(z) with respect to z, the PDF of the random variable Z is obtained as
f_Z(z) = Σ_{j=1}^{J} f_{X_j}(z) Π_{i=1,i≠j}^{J} (1 − F_{X_i}(z)).
Inference 1 can be easily obtained from Theorem 2 as follows.
Inference 1.
When X₁, X₂, …, X_J are J independent and identically distributed random variables with common DF F_X(x) and PDF f_X(x), the DF of Z = min{X₁, X₂, …, X_J} is
F_Z(z) = 1 − (1 − F_X(z))^J.
The PDF of Z = min{X₁, X₂, …, X_J} is
f_Z(z) = J f_X(z) (1 − F_X(z))^{J−1}.
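Inference 1 can be verified numerically. The sketch below is our own check, using unit-rate exponential variables as an arbitrary example distribution, and compares the empirical DF of the minimum with the closed form 1 − (1 − F_X(z))^J.

```python
import math
import random

random.seed(1)
J, trials, z = 5, 200_000, 0.1
# X_j ~ Exp(1), so F_X(x) = 1 - exp(-x); Inference 1 then gives
# F_Z(z) = 1 - (1 - F_X(z))^J = 1 - exp(-J z) for the minimum of J draws
hits = sum(min(random.expovariate(1.0) for _ in range(J)) <= z
           for _ in range(trials))
theory = 1 - (1 - (1 - math.exp(-z))) ** J
assert abs(hits / trials - theory) < 5e-3
```

The same Monte Carlo template applies to the truncated-Gaussian magnitudes used later in this section, where no simple closed form of the minimum's moments exists.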
Let F ( x ) denote l i , t ’s DF, f ( x ) denote l i , t ’s PDF, F 1 ( x ) denote | l i , t | ’s DF, and f 1 ( x ) denote | l i , t | ’s PDF, so
F₁(x) = Pr(|l_{i,t}| ≤ x) = Pr(−x ≤ l_{i,t} ≤ x),
where x > 0, so
F₁(x) = F(x) − F(−x).
Taking the derivative with respect to x on both sides of (31), we obtain
f₁(x) = f(x) + f(−x).
It can be known from (14) that
l_{i,j} ~ N(2/σ², 4/σ²) when c_{i,j} = 0,
l_{i,j} ~ N(−2/σ², 4/σ²) when c_{i,j} = 1.
Without prior information, c_{i,j} takes 0 and 1 with equal probability, and we obtain
f(x) = (1/2) · ( σ/(2√(2π)) ) e^{−(x−2/σ²)²/(8/σ²)} + (1/2) · ( σ/(2√(2π)) ) e^{−(x+2/σ²)²/(8/σ²)}.
Because f(−x) = f(x), when x > 0,
f₁(x) = ( σ/(2√(2π)) ) e^{−σ²(x−2/σ²)²/8} + ( σ/(2√(2π)) ) e^{−σ²(x+2/σ²)²/8}.
When x ≤ 0, f₁(x) = 0.
Let f₂(x) be the PDF of Y_i; it is then known from Inference 1 that
f₂(x) = w f₁(x) (1 − F₁(x))^{w−1} for x > 0, and f₂(x) = 0 for x ≤ 0.
Thus, the expectations of Y_i and Y_i² can be obtained, respectively, as
E(Y_i) = ∫₀^{+∞} w x f₁(x) (1 − F₁(x))^{w−1} dx,
E(Y_i²) = ∫₀^{+∞} w x² f₁(x) (1 − F₁(x))^{w−1} dx.
Integrating by parts,
E(Y_i) = −x(1 − F₁(x))^w |₀^{+∞} + ∫₀^{+∞} (1 − F₁(x))^w dx,
E(Y_i²) = −x²(1 − F₁(x))^w |₀^{+∞} + ∫₀^{+∞} 2x(1 − F₁(x))^w dx.
According to L'Hôpital's rule, lim_{x→+∞} x(1 − F₁(x))^w = 0 and lim_{x→+∞} x²(1 − F₁(x))^w = 0, so (40) and (41) can be rewritten, respectively, as
E(Y_i) = ∫₀^{+∞} (1 − F₁(x))^w dx,
E(Y_i²) = ∫₀^{+∞} 2x(1 − F₁(x))^w dx,
where F₁(x) = ∫₀^x f₁(t) dt, so
F₁(x) = 0.5 [ erf( (x + 2/σ²)/(2√2/σ) ) + erf( (x − 2/σ²)/(2√2/σ) ) ],
where erf(x) = (2/√π) ∫₀^x e^{−t²} dt.
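As a sanity check on (44), the closed form can be compared against direct numerical integration of the PDF in (37); the noise variance and evaluation point below are arbitrary test values of our own.

```python
import math

def f1(x, sigma2):
    """PDF of |l_{i,t}| from Eq. (37) (value at x = 0 taken as the right limit)."""
    if x < 0:
        return 0.0
    s = 2.0 / math.sqrt(sigma2)            # standard deviation of l
    mu = 2.0 / sigma2                      # magnitude of the mean of l
    g = lambda t: math.exp(-t * t / (2 * s * s)) / (math.sqrt(2 * math.pi) * s)
    return g(x - mu) + g(x + mu)

def F1_closed(x, sigma2):
    """Closed form of Eq. (44)."""
    d = 2.0 * math.sqrt(2.0) / math.sqrt(sigma2)
    return 0.5 * (math.erf((x + 2 / sigma2) / d) + math.erf((x - 2 / sigma2) / d))

# compare against trapezoidal integration of the PDF
sigma2, x, steps = 0.5, 3.0, 20_000
h = x / steps
num = sum(0.5 * (f1(i * h, sigma2) + f1((i + 1) * h, sigma2)) * h
          for i in range(steps))
assert abs(num - F1_closed(x, sigma2)) < 1e-6
```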
Since X_i = ±1, X_i² Y_i² = Y_i² regardless of which hypothesis holds, so
E ( Y i 2 ) = E ( X i 2 Y i 2 | H 0 ) = E ( X i 2 Y i 2 | H 1 ) .
If H 0 is true, h K is not orthogonal to the codeword, so
E ( X i Y i | H 0 ) = 0 ,
Var(X_iY_i | H₀) = E(X_i²Y_i² | H₀) = ∫₀^{+∞} 2x(1 − F₁(x))^w dx.
If H 1 is true, and the judgment of c i , t is correct, let R 1 denote the condition of c i , t = 0 , l i , t > 0 and R 2 denote the condition of c i , t = 1 , l i , t < 0 ; then, the PDFs of l i , t are
f(x|R₁) = σ e^{−σ²(x−2/σ²)²/8} / ( √(2π)(1 + erf(1/(√2σ))) ) for x > 0, and f(x|R₁) = 0 for x ≤ 0,
f(x|R₂) = σ e^{−σ²(x+2/σ²)²/8} / ( √(2π)(1 + erf(1/(√2σ))) ) for x < 0, and f(x|R₂) = 0 for x ≥ 0.
Let F(x|R₁) and F(x|R₂) denote the cumulative functions (CFs) of f(x|R₁) and f(x|R₂), respectively, so the CFs of Y = |l_{i,t}| are
F_Y(y|R₁) = F(y|R₁) − F(−y|R₁) for y > 0, and F_Y(y|R₁) = 0 for y ≤ 0,
F_Y(y|R₂) = Pr(Y ≤ y|R₂) = F(y|R₂) − F(−y|R₂) for y > 0, and F_Y(y|R₂) = 0 for y ≤ 0.
When y > 0, F(−y|R₁) = 0 and F(y|R₂) = 1, so F_Y(y|R₁) = F(y|R₁) and F_Y(y|R₂) = 1 − F(−y|R₂), and therefore the PDFs of Y are
f_Y(y|R₁) = f(y|R₁) for y > 0, and f_Y(y|R₁) = 0 for y ≤ 0,
f_Y(y|R₂) = f(−y|R₂) for y > 0, and f_Y(y|R₂) = 0 for y ≤ 0.
It can be obtained from (49), (50), (53), and (54) that, when y ≤ 0, f_Y(y|R₁∪R₂) = 0, and, when y > 0,
f_Y(y|R₁∪R₂) = σ e^{−σ²(y−2/σ²)²/8} / ( √(2π)(1 + erf(1/(√2σ))) ).
If H₁ is true and the judgment of c_{i,t} is incorrect, let R̄₁ denote the condition c_{i,t} = 0, l_{i,t} < 0 and R̄₂ denote the condition c_{i,t} = 1, l_{i,t} > 0; then, the PDFs of l_{i,t} are
f(x|R̄₁) = σ e^{−σ²(x−2/σ²)²/8} / ( √(2π)(1 − erf(1/(√2σ))) ) for x < 0, and f(x|R̄₁) = 0 for x ≥ 0,
f(x|R̄₂) = σ e^{−σ²(x+2/σ²)²/8} / ( √(2π)(1 − erf(1/(√2σ))) ) for x > 0, and f(x|R̄₂) = 0 for x ≤ 0,
and the PDFs of Y are
f_Y(y|R̄₁) = f(−y|R̄₁) for y > 0, and f_Y(y|R̄₁) = 0 for y ≤ 0,
f_Y(y|R̄₂) = f(y|R̄₂) for y > 0, and f_Y(y|R̄₂) = 0 for y ≤ 0.
It can be obtained from (55)–(58) that, when y ≤ 0, f_Y(y|R̄₁∪R̄₂) = 0, and, when y > 0,
f_Y(y|R̄₁∪R̄₂) = σ e^{−σ²(y+2/σ²)²/8} / ( √(2π)(1 − erf(1/(√2σ))) ).
Assuming that errors occur at positions {t_{v1}, t_{v2}, …, t_{vs}} among l_{i,t₁}, l_{i,t₂}, …, l_{i,t_w}, the set of correct element positions is {t_{j1}, t_{j2}, …, t_{j(w−s)}} = {t₁, t₂, …, t_w} \ {t_{v1}, t_{v2}, …, t_{vs}}. Letting T₁ˢ = min{|l_{i,t_{v1}}|, |l_{i,t_{v2}}|, …, |l_{i,t_{vs}}|} and T₂ˢ = min{|l_{i,t_{j1}}|, |l_{i,t_{j2}}|, …, |l_{i,t_{j(w−s)}}|}, it follows from Inference 1 that, when y ≤ 0, f_{T₁ˢ}(y) = 0 and f_{T₂ˢ}(y) = 0, and, when y > 0,
f_{T₁ˢ}(y) = s f_Y(y|R̄₁∪R̄₂) (1 − F_Y(y|R̄₁∪R̄₂))^{s−1},
f_{T₂ˢ}(y) = (w − s) f_Y(y|R₁∪R₂) (1 − F_Y(y|R₁∪R₂))^{w−s−1},
where F_Y(y|R̄₁∪R̄₂) and F_Y(y|R₁∪R₂) are the CFs of f_Y(y|R̄₁∪R̄₂) and f_Y(y|R₁∪R₂), respectively, so that, when y > 0,
F_Y(y|R̄₁∪R̄₂) = ∫₀^y f_Y(t|R̄₁∪R̄₂) dt = [ erf( yσ/(2√2) + 1/(√2σ) ) − erf( 1/(√2σ) ) ] / ( 1 − erf(1/(√2σ)) ),
F_Y(y|R₁∪R₂) = ∫₀^y f_Y(t|R₁∪R₂) dt = [ erf( yσ/(2√2) − 1/(√2σ) ) + erf( 1/(√2σ) ) ] / ( 1 + erf(1/(√2σ)) ).
Letting T s = min { T 1 s , T 2 s } , from Theorem 2 we can obtain
f T s ( y ) = f T 1 s ( y ) ( 1 F T 2 s ( y ) ) + f T 2 s ( y ) ( 1 F T 1 s ( y ) ) ,
where F T 1 s ( y ) and F T 2 s ( y ) are the CFs of f T 1 s ( y ) and f T 2 s ( y ) , respectively; that is,
F_{T₁ˢ}(y) = 1 − (1 − F_Y(y|R̄₁∪R̄₂))ˢ for y > 0, and F_{T₁ˢ}(y) = 0 for y ≤ 0,
F_{T₂ˢ}(y) = 1 − (1 − F_Y(y|R₁∪R₂))^{w−s} for y > 0, and F_{T₂ˢ}(y) = 0 for y ≤ 0.
When X_i = 1, the number of erroneous symbols is even and the check relationship still holds, so
E(X_iY_i, X_i = 1 | H₁) = Σ_{i=0}^{⌊w/2⌋} C_w^{2i} p_e^{2i} (1 − p_e)^{w−2i} ∫₀^{+∞} x f_{T_{2i}}(x) dx,
where p e is the theoretical bit error rate of the codewords modulated by BPSK, and the expression is
p_e = 0.5 erfc( 1/√(2σ²) ),
where σ² is the noise variance and erfc(x) = (2/√π) ∫ₓ^{+∞} e^{−t²} dt.
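Equation (68) is the standard BPSK error rate for unit-energy symbols; the quick sketch below includes a Monte Carlo cross-check (the chosen noise variance is an arbitrary test value of our own).

```python
import math
import random

def ber_bpsk(sigma2):
    """Eq. (68): p_e = 0.5 * erfc(1 / sqrt(2 * sigma2)) for unit-energy BPSK."""
    return 0.5 * math.erfc(1.0 / math.sqrt(2.0 * sigma2))

# Monte Carlo cross-check: transmit +1, count sign flips after AWGN
random.seed(0)
sigma2, trials = 0.5, 200_000
errors = sum(1.0 + random.gauss(0.0, math.sqrt(sigma2)) < 0.0
             for _ in range(trials))
assert abs(errors / trials - ber_bpsk(sigma2)) < 2e-3
```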
Similarly, when X_i = −1, an odd number of errors has occurred:
E(X_iY_i, X_i = −1 | H₁) = −Σ_{i=0}^{⌊w/2⌋−1} C_w^{2i+1} p_e^{2i+1} (1 − p_e)^{w−2i−1} ∫₀^{+∞} x f_{T_{2i+1}}(x) dx.
It can be obtained from (67) and (69) that
E(X_iY_i | H₁) = E(X_iY_i, X_i = 1 | H₁) + E(X_iY_i, X_i = −1 | H₁),
V a r ( X i Y i | H 1 ) = E ( Y i 2 ) E ( X i Y i | H 1 ) 2 .
It is easy to obtain the mean and variance of A L i , k under the two assumptions:
E ( A L i , k | H 0 ) = E ( X i Y i | H 0 ) = 0 ,
E ( A L i , k | H 1 ) = E ( X i Y i | H 1 ) ,
V a r ( A L i , k | H 0 ) = V a r ( X i Y i | H 0 ) ,
V a r ( A L i , k | H 1 ) = V a r ( X i Y i | H 1 ) .
According to the statistical properties of random variables,
μ 0 = E ( A L | H 0 ) = E ( A L i , k | H 0 ) = 0 ,
μ 1 = E ( A L | H 1 ) = E ( A L i , k | H 1 ) ,
σ 0 2 = V a r ( A L | H 0 ) = V a r ( A L i , k | H 0 ) / N / N l ,
σ 1 2 = V a r ( A L | H 1 ) = V a r ( A L i , k | H 1 ) / N / N l .
Then, the false-alarm probability is
P_f = ∫_Λ^{+∞} ( 1/(√(2π)σ₀) ) e^{−(x−μ₀)²/(2σ₀²)} dx.
The missed-detection probability is
P_a = ∫_{−∞}^{Λ} ( 1/(√(2π)σ₁) ) e^{−(x−μ₁)²/(2σ₁²)} dx.
Under non-cooperative conditions there is no prior knowledge, so the two hypotheses are treated as equally likely, and the average incorrect-decision probability is
P_er = 0.5P_f + 0.5P_a = 0.5 ∫_Λ^{+∞} ( 1/(√(2π)σ₀) ) e^{−(x−μ₀)²/(2σ₀²)} dx + 0.5 ∫_{−∞}^{Λ} ( 1/(√(2π)σ₁) ) e^{−(x−μ₁)²/(2σ₁²)} dx.
The minimum-error decision threshold obtained by solving (82) is
Λ_opt = ( (σ₀²μ₁ − σ₁²μ₀) − σ₀σ₁ζ ) / ( σ₀² − σ₁² ),
where ζ = √( (μ₀ − μ₁)² + 2(σ₁² − σ₀²) ln(σ₁/σ₀) ). When AL > Λ_opt, H₁ is accepted.
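Numerically, Λ_opt is the intersection point of the two Gaussian densities that lies between μ₀ and μ₁. The sketch below (our own, with arbitrary example moments) finds it by solving the quadratic obtained from equating the two densities, which is equivalent to the closed form in (83).

```python
import math

def optimal_threshold(mu0, sigma0, mu1, sigma1):
    """Minimum-error threshold between N(mu0, sigma0^2) (H0) and
    N(mu1, sigma1^2) (H1), mu0 < mu1: the density-crossing point
    between the two means, as in Eq. (83)."""
    if math.isclose(sigma0, sigma1):
        return 0.5 * (mu0 + mu1)          # equal variances: the midpoint
    # quadratic in x from equating the log-densities
    a = sigma0**2 - sigma1**2
    b = 2.0 * (sigma1**2 * mu0 - sigma0**2 * mu1)
    c = (sigma0**2 * mu1**2 - sigma1**2 * mu0**2
         + 2.0 * sigma0**2 * sigma1**2 * math.log(sigma1 / sigma0))
    disc = math.sqrt(b * b - 4.0 * a * c)
    roots = ((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a))
    return next(r for r in roots if mu0 <= r <= mu1)

lam = optimal_threshold(0.0, 1.0, 3.0, 1.5)
pdf = lambda x, m, s: math.exp(-(x - m) ** 2 / (2 * s * s)) / (math.sqrt(2 * math.pi) * s)
# at the threshold the two densities intersect
assert math.isclose(pdf(lam, 0.0, 1.0), pdf(lam, 3.0, 1.5), rel_tol=1e-9)
```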

3.4. Parameter Recognition Steps

3GPP stipulates that the maximum mother-code length of the polar code in the downlink control information is 512 and that in the uplink control information is 1024 [24]. Considering the code-length and code-rate ranges of polar codes in practical applications, the code lengths n = [32, 64, 128, 256, 512, 1024] and the code rates R = [1/5, 1/3, 2/5, 1/2, 2/3, 3/4, 5/6, 8/9] are selected for the polar code simulations in this paper. The SNR range considered is −4 ≤ SNR ≤ 6 dB. Denoting the total number of program executions as L, and the number of times AL > Λ_opt under a certain code length, code rate, information-bit position, and SNR during the execution process as Num, we define the polar code recognition conformity η as follows:
η = Num L .
According to the definition of η , when the code length, code rate, and information-bit positions used in constructing H n are completely consistent with the transmitted codeword, η takes the maximum value. The specific algorithm is shown as Algorithm 1.
Algorithm 1: Polar code parameter recognition
Input: candidate code lengths n; candidate code rates R; soft-decision sequence r; number of code lengths N_n; number of code rates N_r; execution number L
Output: n̂, R̂, Â
Initialization: Num = zeros(N_n, N_r), j = 0
1   for each n in n do
2       construct codeword matrix R;
3       for each R in R do
4           j = j + 1;
5           construct A by the GA algorithm with n, R;
6           construct G_n(A) with n, R, and A; calculate H_n;
7           get the statistic AL and the threshold Λ_opt;
8           if AL > Λ_opt then
9               Num(j) = Num(j) + 1;
10              η(j) = Num(j)/L;
11          end if
12      end for
13  end for
14  find all j that make η take the maximum value; the number of such j is len_j;
15  for i = 1 to len_j do
16      p1 = mod(j, N_r); p2 = fix(j/N_r);
17      if p1 = 0 then
18          R(i) = 8/9; n(i) = n(p2);
19      else
20          R(i) = R(p1); n(i) = n(p2 + 1);
21      end if
22  end for
23  n̂ = min(n), R̂ = min(R); construct Â by GA with n̂, R̂;
24  return n̂, R̂, Â

4. Numerical Simulation

We first verified the algorithm's validity and then executed the recognition algorithm presented in Section 3.4 to examine the influence of different factors. The performance and complexity were then compared with those of the algorithms presented in [15,16]. Unless otherwise stated, the simulations take the same parameters as in Section 3.4. Since this paper examines the problem of polar code parameter recognition under the AWGN channel, the simulations were confined to the AWGN channel.

4.1. Algorithm-Validity Verification

Algorithm validity was evaluated in two cases: (1) with a polar code length of 256, a code rate of 1/2, and SNR = 30 dB, and (2) with a polar code length of 256, a code rate of 2/3, and 2000 transmitted codewords. The resulting average LLRs and thresholds are presented in Figure 4 and Figure 5.
As shown in both figures, the abscissa "Order" corresponds to j in Algorithm 1, so the code length and rate corresponding to each abscissa can be calculated according to Section 3.4. For example, when the abscissa is 38, the average LLR exceeds the threshold for the third time; since the total number of code rates is 8 and 38 = 4 × 8 + 6, the corresponding code rate is the sixth R, 3/4, and the code length is the (4 + 1)th, or fifth, n. It can be seen from the figures that candidates whose code length and rate are not lower than the real code length and rate, respectively, are also mistakenly identified as the real polar code parameters. This is due to the channel polarization effect, which guarantees a certain structural relationship between a polar code and the polar codes whose code length and rate are not lower than its own. Therefore, we select the minimum code length and rate corresponding to the maximum η as the final result.
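The abscissa-to-parameter mapping described above (rates cycling fastest, lengths slowest) can be sketched generically as follows; the candidate lists here are shortened hypothetical ones, not the full sets of Section 3.4.

```python
def order_to_params(j, lengths, rates):
    """Map the 1-based abscissa j of Algorithm 1 to (code length, code rate);
    the inner loop runs over the rates, the outer loop over the lengths."""
    Nr = len(rates)
    p1, p2 = j % Nr, j // Nr
    if p1 == 0:                      # last rate of the p2-th length block
        return lengths[p2 - 1], rates[-1]
    return lengths[p2], rates[p1 - 1]

# shortened hypothetical candidate lists, for illustration only
lengths, rates = [32, 64, 128], ['1/2', '2/3']
assert order_to_params(1, lengths, rates) == (32, '1/2')
assert order_to_params(2, lengths, rates) == (32, '2/3')
assert order_to_params(5, lengths, rates) == (128, '1/2')
assert order_to_params(6, lengths, rates) == (128, '2/3')
```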

4.2. Influence of Code Rate

Recognition rates for code lengths of 128, 256, and 512 were simulated at different code rates. The number of codewords was 2000, the recognition rate was output every 0.5 dB, and 2000 Monte Carlo trials were performed. The results are shown in Figure 6, Figure 7 and Figure 8.
As shown in the figures, the recognition performance improves as the code rate decreases, but this improvement diminishes with increasing code length. This is because, for a fixed code length, the dimension of the dual space increases as the code rate decreases, so the calculated statistic is closer to its theoretical value and the recognition performance improves. Conversely, an increase in code length increases the total code weight of the dual-space vectors, making the statistic more susceptible to bit errors. This gradually offsets the performance gain brought by the reduction of the code rate: the longer the code, the more pronounced the offset, and the recognition performance curves for different code rates draw increasingly close together.

4.3. Influence of Intercepted Codewords

Recognition-rate curves under different numbers of codewords were simulated with a code rate of 1/2, a code length of 64, and an SNR range of −2 to 4 dB. The recognition rate was output every 0.5 dB, and 1000 Monte Carlo trials were performed. The results are shown in Figure 9.
As shown in the figure, the recognition performance of the algorithm continues to improve as the number of codewords increases, but the gain per additional codeword becomes smaller and smaller. This is because increasing the number of codewords makes the statistic gradually approach its theoretical value; the closer it gets, the less room remains for further improvement.

4.4. Influence of Code Length

The recognition-rate curve of the proposed algorithm was simulated for different code lengths with a 1/3 code rate, 2000 codewords, and an SNR range of −2 to 8 dB, with the recognition rate output every 0.5 dB. The results of 2000 trials are shown in Figure 10.
As shown in the figure, as the code length increases, the recognition performance of the algorithm deteriorates. This is because the longer the code length and the larger the code weight, the more easily the accuracy of the statistic calculation is affected by bit errors, and thus the recognition performance decreases.

4.5. Comparison of Recognition Performance

In this section, we compare the recognition performance of the proposed algorithm with that of the algorithms in [15,16], which also studied polar code parameter recognition under the AWGN channel in recent years. All the algorithms were executed on structured and unstructured polar codes with a code length of 256, a code rate of 1/3, and 2000 codewords. The curves of the algorithm proposed in this study are marked DS, and those of the algorithms advanced in [15,16] are marked [15] and [16] in Figure 11, Figure 12 and Figure 13.
As shown in the figures, when the parameters of structured polar codes are recognized, the recognition performance of the algorithm proposed in this paper is better than that of the algorithm advanced in [15] and worse than that of [16]. This is because the algorithm in [15] identifies the information-bit positions one by one, which makes it insensitive to the coding structure and prone to errors in which the number of information bits is correct but their positions are not. The algorithm proposed in this paper directly uses the GA construction method to determine the number and distribution of the information-bit indices, so once the code length and rate are correctly identified, the information-bit index must also be correct. This avoids recognition errors with a correct information-bit number but an incorrect index, while also avoiding row-by-row calculation of the judgment statistic, thereby reducing the number of calculations. The computational comparison is analyzed in detail in the following subsection. The likelihood difference introduced in [16] does boost its performance.
However, the algorithms advanced in [15,16] are invalid when recognizing the parameters of unstructured polar codes. This is because their recognition criteria are designed only for structured polar codes, so they fail for unstructured polar code recognition.

4.6. Algorithm Complexity

Let Len denote the amount of intercepted data. When the traversal reaches code length n (n_min ≤ n ≤ n_max) and code rate R (R_min ≤ R ≤ R_max), the dual matrix H is calculated after removing the n(1−R) frozen rows from the construction matrix G_n; then, the statistics AL and thresholds Λ_opt of the ⌊Len/n⌋ codewords are calculated. Since the dual matrix for a given code length and rate is fixed, it can be generated in advance and read directly when needed. Calculating the statistics requires ⌊Len/n⌋·n(1−R) vector additions, ⌊Len/n⌋·n(1−R) comparisons, and ⌊Len/n⌋·n(1−R) vector multiplications. For simplicity, one comparison is counted as one vector addition, so the proposed parameter recognition algorithm performs ∑_{R=R_min}^{R_max} ∑_{n=n_min}^{n_max} 2⌊Len/n⌋·n(1−R) additions and ∑_{R=R_min}^{R_max} ∑_{n=n_min}^{n_max} ⌊Len/n⌋·n(1−R) multiplications. Therefore, the complexity of the proposed algorithm increases approximately linearly with increasing length of the intercepted data and with decreasing code rate. The algorithm proposed in [15] requires ∑_{n=n_min}^{n_max} 20n multiplications and ∑_{n=n_min}^{n_max} Len·n additions, and its computation increases approximately linearly with increasing intercepted-data length and code length. The algorithm proposed in [16] requires at most 5Len + 10n + (2n−1)Len multiplications and (Len/(2n) − 1)(2n)² additions. Taking the sum of the numbers of multiplications and additions as the complexity, we show in what follows how code length, code rate, and received-sequence length influence it. When the influence of code length and code rate on complexity is considered, the received-sequence length is set to 10,000; when the influence of sequence length is considered, a code length of 512 and a code rate of 1/2 are chosen.
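The operation counts derived above can be tallied directly. The sketch below computes the total complexity (additions plus multiplications) of the proposed algorithm for given candidate sets; the candidate lists passed in are up to the caller and are not the paper's exact traversal ranges.

```python
def proposed_ops(data_len, lengths, rates):
    """Total additions + multiplications of the proposed algorithm.

    For each candidate (n, R): floor(data_len/n) codewords are each
    checked against n(1-R) dual vectors, giving
    2*floor(data_len/n)*n*(1-R) additions (additions + comparisons)
    and floor(data_len/n)*n*(1-R) multiplications.
    """
    adds = mults = 0
    for n in lengths:
        for r in rates:
            checks = (data_len // n) * n * (1 - r)
            adds += 2 * checks
            mults += checks
    return adds + mults
```

The tally makes the scaling visible: doubling `data_len` roughly doubles the result, and lowering the rates in `rates` raises it, matching the linear trends stated above.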
As shown in Figure 12, the complexities of the algorithms proposed in [15,16] are not affected by the code rate, so each has a single curve, whereas the complexity of the algorithm proposed in this paper depends on the code rate, so a curve is plotted for each rate. All the complexities increase as the code length increases from 32 to 1024, and the complexity of the proposed algorithm decreases as the code rate increases from 1/5 to 8/9. As shown in Figure 13, all the complexities increase as the sequence length increases from 2000 to 10,000. In every case, the complexity of the algorithm proposed in this paper is the lowest.

5. Conclusions

To reduce the complexity and improve the performance of polar code parameter recognition, a new recognition scheme suitable for blind recognition of polar code parameters in AWGN channels is proposed in this paper. The hypothesis test of the check relationship is studied, the specific calculation of the statistics and decision thresholds is given, and the effectiveness of the proposed algorithm is simulated under different code lengths, code rates, and numbers of received codewords. Simulation results show that the recognition performance of the proposed algorithm increases with decreasing code length, an increasing number of intercepted codewords, and decreasing code rate.
Compared with the existing algorithms for the AWGN channel, the algorithm proposed in this paper not only has the lowest computational complexity, but also has good recognition performance. More importantly, it remains valid for unstructured polar codes, which makes it outperform the other algorithms. The research results in this paper can be applied in the fields of intelligent communication, communication reconnaissance, and network countermeasures. In the future, we will study polar code parameter recognition under other channels and consider incorporating machine-learning methods into our studies.

Author Contributions

Writing—original draft preparation, H.L.; writing—review and editing, Z.W. and W.Y.; project administration, L.Z.; funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China grant number 91538201.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arikan, E. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073. [Google Scholar] [CrossRef]
  2. Gamage, H.; Rajatheva, N.; Latva-Aho, M. Channel coding for enhanced mobile broadband communication in 5G systems. In Proceedings of the 2017 European Conference on Networks and Communications, Oulu, Finland, 12–15 June 2017; pp. 1–6. [Google Scholar]
  3. Abusabah, I.A.A.; Abdeldafia, M.; Abdelaziz, Y.M.A. Blind Identification of Convolutional Codes Based on Veterbi Algorithm. In Proceedings of the 2020 International Conference on Computer, Control, Electrical, and Electronics Engineering (ICCCEEE), Khartoum, Sudan, 26 February–1 March 2021; pp. 1–4. [Google Scholar]
  4. Huang, L.; Chen, W.; Chen, E.; Chen, H. Blind recognition of k/n rate convolutional encoders from noisy observation. J. Syst. Eng. Electron. 2017, 28, 235–243. [Google Scholar]
  5. Li, L.Q.; Huang, Z.; Zhou, J. Blind identification of LDPC codes based on a candidate set. IEEE Commun. Lett. 2021, 25, 2786–2789. [Google Scholar]
  6. Lei, Y.; Luo, L. Novel Blind Identification of LDPC Code Check Vector Using Soft-decision. In Proceedings of the 2020 IEEE 20th International Conference on Communication Technology (ICCT), Nanning, China, 28–31 October 2020; pp. 1291–1295. [Google Scholar]
  7. Wu, Z.J.; Zhang, L.M.; Zhong, Z.G.; Liu, R.X. Blind recognition of LDPC codes over candidate set. IEEE Commun. Lett. 2020, 24, 11–14. [Google Scholar] [CrossRef]
  8. Swaminathan, R.; Madhukumar, A.S.; Wang, G.H. Blind estimation of code parameters for product codes over noisy channel conditions. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 1460–1473. [Google Scholar] [CrossRef]
  9. Wu, Z.J.; Zhang, L.M.; Zhong, Z.G. A maximum cosinoidal cost function method for parameter estimation of RSC turbo codes. IEEE Commun. Lett. 2019, 23, 390–393. [Google Scholar]
  10. Wang, J.; Tang, C.R.; Huang, H.; Wang, H.; Li, J.Q. Blind identification of convolutional codes based on deep learning. Digit Signal Process. 2021, 115, 103086. [Google Scholar]
  11. Chen, J.J. Research and Implementation of Closed Set Recognition of Error Correction Coding Based on Deep Learning. Master’s Thesis, University of Electronic Science and Technology of China, Chengdu, China, 2020. [Google Scholar]
  12. Chen, C. Recognition of Closed Set Turbo Codes Based on Machine Learning. Master’s Thesis, University of Electronic Science and Technology of China, Chengdu, China, 2020. [Google Scholar]
  13. Liu, J.; Zhang, L.M.; Zhong, Z.G. Blind parameter identification of RS code based on binary field equivalence. Acta Electron. Sin. 2018, 46, 2888–2894. [Google Scholar]
  14. Zhang, T.Q.; Hu, Y.P.; Feng, J.X.; Zhang, X.Y. Blind algorithm of polarization code parameters based on null space matrix matching. J. Electr. Eng. Technol. 2020, 42, 2953–2959. [Google Scholar]
  15. Wu, Z.J.; Zhong, Z.G.; Zhang, L.M.; Dan, B. Recognition of non-drilled polar codes based on soft decision. J. Commun. 2020, 41, 60–71. [Google Scholar]
  16. Wang, Y.; Wang, X.; Yang, G.D.; Huang, Z. Recognition algorithm of non punctured polarization codes based on structural characteristics of coding matrix. J. Commun. 2022, 43, 22–33. [Google Scholar]
  17. Xu, J.; Yuan, Y.F. Channel Coding for New Radio of 5th Generation Mobile Communications; Posts & Telecom Press: Beijing, China, 2018. [Google Scholar]
  18. Brewer, J.W. Kronecker products and matrix calculus in system theory. IEEE Trans. Circuits Syst. 1978, 25, 772–781. [Google Scholar]
  19. Melnikov, N.B.; Reser, B.I. Gaussian Approximation. In Dynamic Spin-Fluctuation Theory of Metallic Magnetism, 1st ed.; Springer International Publishing AG: Cham, Switzerland, 2018; pp. 101–108. [Google Scholar]
  20. Proakis, J.G. Digital Communications; McGraw Hill: New York, NY, USA, 1995. [Google Scholar]
  21. Trifonov, P. Efficient design and decoding of polar codes. IEEE Trans. Commun. 2012, 60, 3221–3227. [Google Scholar] [CrossRef]
  22. Zimaglia, E.; Riviello, D.G.; Garello, R.; Fantini, R. A Deep Learning-based Approach to 5G-New Radio Channel Estimation. In Proceedings of the 2021 Joint European Conference on Networks and Communications & 6G Summit, Porto, Portugal, 8–11 June 2021; pp. 1–6. [Google Scholar]
  23. Pauluzzi, D.R.; Beaulieu, N.C. A comparison of SNR estimation techniques for the AWGN channel. IEEE Trans. Commun. 2000, 48, 1681–1691. [Google Scholar] [CrossRef]
  24. 3GPP. Study on New Radio Access Technology Physical Layer Aspects (Release 14), TR 38.802 V14.1.0[R]. 2017, pp. 14–43. Available online: http://www.3gpp.org (accessed on 27 April 2007).
Figure 1. Diagram of channel.
Figure 2. Diagram of channel combination and separation.
Figure 3. Model system.
Figure 4. Algorithm-validity verification with 5 dB.
Figure 5. Algorithm-validity verification with 30 dB.
Figure 6. Recognition performance under different code rates with code length 128.
Figure 7. Recognition performance under different code rates with code length 256.
Figure 8. Recognition performance under different code rates with code length 512.
Figure 9. Performance under different numbers of codewords.
Figure 10. Performance under different code lengths.
Figure 11. Comparison of recognition performance under different code lengths [15,16]. (a) The curves under structured polar codes. (b) The curves under unstructured polar codes.
Figure 12. Influence of code length and code rate on complexity [15,16].
Figure 13. Influence of sequence length on complexity [15,16].