Article

Soft Decision Decoding with Cyclic Information Set and the Decoder Architecture for Cyclic Codes

School of Microelectronics, Tianjin University, Tianjin 300072, China
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(12), 2693; https://doi.org/10.3390/electronics12122693
Submission received: 15 May 2023 / Revised: 8 June 2023 / Accepted: 12 June 2023 / Published: 16 June 2023
(This article belongs to the Special Issue Advanced Digital Signal Processing for Future Digital Communications)

Abstract

The soft decision decoding of cyclic codes, especially maximum likelihood (ML) decoding, can achieve performance significantly superior to that of algebraic decoding, but its complexity is much higher. To deal with this problem, an improved soft decision decoding algorithm based on a cyclic information set and its efficient implementation architecture are proposed. The algorithm exploits the cyclic property of cyclic codes to generate a series of information sequences by circular shifting, which constitute the cyclic information set. Then, a limited number of candidate information sequences are efficiently generated using an iterative computation method, and the candidate codewords are generated using the very concise encoding method of cyclic codes. Furthermore, an efficient hardware architecture based on systolic arrays is proposed to generate the candidate information sequences and to select the optimal candidate codeword. An emulation platform is constructed to verify the error correction performance and to determine the optimal decoder parameters. Emulation results indicate that, with appropriate parameter selection, the proposed decoding algorithm can achieve a bit error rate approaching the ML performance while maintaining low complexity.

1. Introduction

Interest in short block length codes has been rising recently, mainly due to emerging applications that require high reliability and low latency data transmission. Examples of such applications are machine-type communications in 5G/6G, the Internet of Things, smart metering networks, remote command links, and messaging services [1,2,3,4,5,6,7,8]. Short block length codes play a crucial role in satisfying the low latency requirements of 5G communication. However, for short block length codes, the necessary trade-off between good error correction performance and decoding complexity leads, on the one hand, to a significant gap between the achievable error correction performance and the theoretical lower bound and, on the other hand, to a dearth of practical decoding solutions [9,10].
Cyclic codes, first proposed in the 1960s [11], can be soft decision decoded based on their code structure, such as the trellis graph or the bit reliability [12,13,14,15,16]. Decoding algorithms based on trellis graphs can approach the maximum likelihood (ML) soft decision decoding performance, but the decoding complexity is the main concern, especially when the code length n is not so small, since the number of states grows exponentially with k or n − k, where k is the number of information bits [12,13]. The generalized minimum distance decoding has a relatively low decoding complexity; however, its gap to the performance of ML decoding cannot be neglected [14,15]. Decoding schemes based on ordered statistics can approach ML performance and have recently become a hot topic, showing great potential for short and moderate length codes in future wireless communications [16,17]. The ordered statistic decoding (OSD) algorithm was proposed by Fossorier and Lin in [16]. Traditional algorithms based on ordered statistics need to find the most reliable basis (MRB), which involves sorting the soft information of all the bits and finding k independent columns of the generator matrix associated with the k most reliable bits [16]. To eliminate the burden of identifying the MRB, much effort has been devoted to reducing the complexity, and significant progress has been achieved [17]. The main ideas of the existing improved OSD algorithms include skipping and/or discarding unlikely test patterns to avoid unnecessary calculations [18,19,20,21,22,23,24,25,26]. A double re-encoding technique was proposed in [27], in which two groups of independent candidates are chosen for a particular class of rate-1/2 systematic block codes. Without using the MRB, this method only needs a few candidate codewords, which is practical for hardware implementation [28,29].
The application of the OSD algorithms for short codes in wireless communications, especially the ultra-reliable and low latency communication (URLLC), has also been investigated [30,31,32].
Decoder architectures of all kinds of decoding algorithms, including the soft decision decoding algorithms, have also been comprehensively evaluated. As a classical example, the Golay code is popular for its perfect structure and good error correction capability. Many important decoding algorithms have been proposed for Golay codes in various application scenarios [33,34,35,36,37,38]. Besides the traditional URLLC applications, the code was also applied in some other new scenarios [39,40]. There are many decoder examples for Golay codes [41,42,43,44]. For example, Sarangi et al. proposed a decoder for extended binary Golay codes based on the incomplete ML decoding scheme with low latency and low complexity [41]. Reviriego et al. exploited the properties of the Golay code to implement a parallel decoder that corrects single and double-adjacent errors [42]. Short BCH codes are another type of important block code suitable for soft decision decoding. Their implementation architectures have also been proposed [45,46,47,48,49]. In [50,51], the implementations of soft decision decoding of a Golay code fully based on channel observation values were presented, which can achieve a performance close to ML performance. However, the complexity is much higher compared with some decoders using hard decision results. Therefore, in this paper, we choose the Golay code and several BCH codes as examples to evaluate the proposed decoding algorithm and decoder architecture.
Inspired by the permutation decoding algorithms [52,53,54], a new soft decision decoding algorithm, which utilizes a cyclic information set, is proposed in this paper. The proposed method uses the cyclic property of cyclic block codes to generate the information set, which is suitable for hardware implementation, so the MRB is not needed. Furthermore, an efficient hardware implementation architecture is designed to reduce the complexity of generating the candidate codewords and selecting the most likely member. The decoders for several cyclic codes are implemented on a hardware platform, and the performance of the proposed algorithm is verified. Experimental results demonstrate that the proposed decoding algorithm achieves error correction performance close to that of ML decoding. It achieves a good trade-off between complexity and performance, which makes the new decoding algorithm quite practical. The key parameters of the decoding algorithm are also optimized based on the hardware platform. The proposed soft decision decoding algorithm can make full use of the channel information and provides a significant performance improvement over algebraic decoding. In summary, our contributions are twofold. First, the proposed algorithm employs the cyclic property of cyclic block codes, enabling the use of a uniform systematic generator matrix in re-encoding, which eliminates the complex transformations needed to obtain new generator matrices. Second, the hardware implementation is based on a systolic array architecture, which has low complexity and short critical paths. It allows the numbers of information sequences and candidate codewords to be set arbitrarily, supporting a flexible range of applications.
The rest of this paper is organized as follows. Section 2 introduces the calculation method of the metric used in the proposed soft decision decoding of cyclic codes. The proposed soft decision decoding algorithm based on the cyclic information set is illustrated in Section 3. Section 4 presents the efficient hardware implementation architecture of the proposed soft decision decoding algorithm. The verification and parameter optimization of the algorithm based on the hardware platform are presented in Section 5. Finally, conclusions are drawn in Section 6.

2. Metrics for the Proposed Soft Decision Decoding

In this paper, cyclic codes are considered with binary phase shift keying (BPSK) transmission over an additive white Gaussian noise (AWGN) channel. Commonly used soft decision decoding metrics include the likelihood function, the Euclidean distance, the correlation, and the correlation discrepancy. In the proposed decoding algorithm, the correlation discrepancy is adopted as the decoding metric [12].
Let $\mathbf{r} = (r_0, r_1, \ldots, r_{n-1})$ denote the received soft information sequence and $\mathbf{v} = (v_0, v_1, \ldots, v_{n-1})$ represent an arbitrary codeword, where n is the code length. The log-likelihood function of $\mathbf{r}$ is expressed as
$$\log P(\mathbf{r} \mid \mathbf{v}) = \sum_{i=0}^{n-1} \log P(r_i \mid v_i). \tag{1}$$
If the log-likelihood function is used as the decoding metric, the soft decision ML decoding algorithm selects the codeword $\mathbf{v}$ with the largest $\log P(\mathbf{r} \mid \mathbf{v})$.
For the AWGN channel with a bilateral power spectral density of $N_0/2$, the conditional probability $P(\mathbf{r} \mid \mathbf{v})$ is given by
$$P(\mathbf{r} \mid \mathbf{v}) = \left(\frac{1}{\pi N_0}\right)^{n/2} \exp\left(-\sum_{i=0}^{n-1} (r_i - c_i)^2 / N_0\right), \tag{2}$$
where $c_i$ is the BPSK symbol mapped from $v_i$, satisfying $c_i = 1 - 2v_i$ [12], i.e.,
$$c_i = \begin{cases} -1, & v_i = 1, \\ +1, & v_i = 0, \end{cases} \tag{3}$$
where $\mathbf{c} = (c_0, c_1, \ldots, c_{n-1})$ is the BPSK symbol sequence mapped from $\mathbf{v}$. The summation term in (2) is called the squared Euclidean distance between the received sequence $\mathbf{r}$ and the codeword $\mathbf{c}$, denoted as $d_E^2(\mathbf{r}, \mathbf{c})$, i.e.,
$$d_E^2(\mathbf{r}, \mathbf{c}) = \sum_{i=0}^{n-1} (r_i - c_i)^2. \tag{4}$$
Obviously, maximizing $\log P(\mathbf{r} \mid \mathbf{v})$ is equivalent to minimizing $d_E^2(\mathbf{r}, \mathbf{c})$. Expanding the right side of (4), and noting that $c_i^2 = 1$, gives
$$d_E^2(\mathbf{r}, \mathbf{c}) = \sum_{i=0}^{n-1} r_i^2 + n - 2 \sum_{i=0}^{n-1} r_i \cdot c_i. \tag{5}$$
Define the last summation term as the correlation between $\mathbf{r}$ and $\mathbf{c}$, denoted as $m(\mathbf{r}, \mathbf{c})$ [12], that is,
$$m(\mathbf{r}, \mathbf{c}) = \sum_{i=0}^{n-1} r_i \cdot c_i. \tag{6}$$
Then, it can be rewritten as [12]
$$m(\mathbf{r}, \mathbf{c}) = \sum_{i=0}^{n-1} |r_i| - 2 \sum_{i: r_i \cdot c_i < 0} |r_i|. \tag{7}$$
Similarly, finding the maximum value of $m(\mathbf{r}, \mathbf{c})$ is equivalent to finding the minimum value of the second summation term in (7). Define that summation term as the correlation discrepancy between $\mathbf{r}$ and $\mathbf{c}$, denoted as $\lambda(\mathbf{r}, \mathbf{c})$ [12], i.e.,
$$\lambda(\mathbf{r}, \mathbf{c}) = \sum_{i: r_i \cdot c_i < 0} |r_i| = \sum_{i: u_i \oplus v_i = 1} |r_i|, \tag{8}$$
where $u_i$ is the hard decision result of $r_i$, satisfying
$$u_i = \begin{cases} 0, & r_i \geq 0, \\ 1, & r_i < 0, \end{cases} \tag{9}$$
and $\mathbf{u} = (u_0, u_1, \ldots, u_{n-1})$ is the binary sequence obtained after the hard decision of $\mathbf{r}$.
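As a quick sanity check on the metric derivation above, the following Python sketch (with toy soft values and an exhaustive search over all length-4 binary words, rather than a real code) confirms that minimizing the correlation discrepancy selects the same word as minimizing the squared Euclidean distance:

```python
from itertools import product

def bpsk(v):
    return [1 - 2 * b for b in v]            # c_i = 1 - 2 v_i

def sq_euclid(r, c):
    return sum((ri - ci) ** 2 for ri, ci in zip(r, c))

def corr_discrepancy(r, v):
    # sum of |r_i| over positions where the hard decision u_i
    # disagrees with the codeword bit v_i
    u = [0 if ri >= 0 else 1 for ri in r]
    return sum(abs(ri) for ri, ui, vi in zip(r, u, v) if ui ^ vi)

r = [0.9, -0.2, 1.1, -0.7]                   # toy received values
words = list(product([0, 1], repeat=4))      # all 2^4 binary words
by_euclid = min(words, key=lambda v: sq_euclid(r, bpsk(v)))
by_lambda = min(words, key=lambda v: corr_discrepancy(r, v))
assert by_euclid == by_lambda == (0, 1, 0, 1)
```

Both metrics pick the word matching the sign pattern of $\mathbf{r}$, as the derivation predicts.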

3. Soft Decision Decoding Algorithm Based on Cyclic Information Set

A low complexity soft decision decoding algorithm based on a series of cyclic information sequences, collectively called the cyclic information set, is proposed in this paper. The proposed decoding procedure consists of two steps. First, using the cyclic property of cyclic codes, a predetermined number of cyclic information sequences is generated, forming the cyclic information set. Then, a group of C candidates for each information sequence is tested to find the most likely codeword.

3.1. The Proposed Soft Decision Decoding Algorithm

In this paper, multiple information sequences are used for re-encoding to generate the candidate codewords. We generate the information set using the cyclic property of the cyclic codes. The proposed soft decision decoding algorithm is depicted in Figure 1. Assume that the received soft information sequence is $\mathbf{r} = (r_0, r_1, \ldots, r_{n-1})$. First, $(I-1)$ circular shifts are performed on $\mathbf{r}$ to obtain the n-dimensional soft information sequences $\mathbf{r}^{(i_1)}, \mathbf{r}^{(i_2)}, \ldots, \mathbf{r}^{(i_{I-1})}$, where $i_1, i_2, \ldots, i_{I-1}$ denote the numbers of circular shifts. Then, the first k components of $\mathbf{r}, \mathbf{r}^{(i_1)}, \mathbf{r}^{(i_2)}, \ldots, \mathbf{r}^{(i_{I-1})}$ are selected to form the k-dimensional sequences $\mathbf{m}^{(0)}, \mathbf{m}^{(i_1)}, \ldots, \mathbf{m}^{(i_{I-1})}$, namely the information sequences. The information sequences represent the equivalent information bits in the codewords. The process of generating multiple information sequences by circular shifting is shown in Figure 2. For each information sequence $\mathbf{m}^{(i)}$, a group of k-dimensional binary sequences with the minimum correlation discrepancy to $\mathbf{m}^{(i)}$ is obtained, denoted as $\mathbf{s}_j^{(i)}$ $(1 \le j \le C)$, where the correlation discrepancy is calculated with respect to the hard decision of the corresponding information sequence. All candidate information sequences are encoded to yield $I \cdot C$ candidate codewords via cyclic encoding. In essence, the codewords of a cyclic code remain legitimate codewords after circular shifting. The received sequence corresponding to each candidate information sequence is obtained by circularly shifting $\mathbf{r}$. The simple shift-register-based encoding circuits of cyclic codes can be used to encode the candidate information sequences [12]. This method saves the burden of computing new generator matrices.
Finally, the candidate codeword with the minimum correlation discrepancy to the corresponding received soft information sequence among $\mathbf{r}, \mathbf{r}^{(i_1)}, \mathbf{r}^{(i_2)}, \ldots, \mathbf{r}^{(i_{I-1})}$ is selected as the optimal candidate codeword. If this codeword was generated from the information sequence $\mathbf{m}^{(0)} = \mathbf{m}$ taken from $\mathbf{r}$, it is directly output as the decoding result. Otherwise, the candidate codeword is circularly shifted back to obtain the decoding result. The four main steps of the algorithm are summarized in Algorithm 1.
Algorithm 1 The soft decision decoding algorithm based on cyclic information set
  • Input: the received soft information sequence $\mathbf{r} = (r_0, r_1, \ldots, r_{n-1})$.
  • Output: the optimal decoding result.
  • 1. Generate I independent information sequences.
  •     1.1 Circularly shift $\mathbf{r}$ uniformly to obtain $\mathbf{r}^{(i_1)}, \mathbf{r}^{(i_2)}, \ldots, \mathbf{r}^{(i_{I-1})}$.
  •     1.2 Take the first k components of $\{\mathbf{r}, \mathbf{r}^{(i_1)}, \ldots, \mathbf{r}^{(i_{I-1})}\}$ to generate the information sequences $\mathbf{m}^{(0)}, \mathbf{m}^{(i_1)}, \ldots, \mathbf{m}^{(i_{I-1})}$.
  • 2. For each information sequence $\mathbf{m}^{(i)}$, generate the C candidate information sequences with the minimum correlation discrepancy, i.e., $\mathbf{s}_1^{(i)}, \mathbf{s}_2^{(i)}, \ldots, \mathbf{s}_C^{(i)}$.
  • 3. Encode the candidate sequences to obtain candidate codewords.
  • 4. Select the optimal candidate codeword and make a decision.
  •     4.1 Select the candidate codeword with the minimum correlation discrepancy and obtain the associated sequence in $\{\mathbf{r}, \mathbf{r}^{(i_1)}, \ldots, \mathbf{r}^{(i_{I-1})}\}$.
  •     4.2 If the codeword is generated by $\mathbf{m}^{(0)} = \mathbf{m}$, it is directly used as the decoding result. Otherwise, it is circularly shifted to obtain the decoding result.
In the proposed soft decision decoding algorithm, the correlation discrepancy is used as the metric both to generate the candidate sequences and to select the optimal candidate codeword. First, I information sequences are generated by cyclic shifting, and for each information sequence, the C sequences nearest in correlation discrepancy are selected as candidate information sequences. Then, the list of candidate codewords is established using the concise cyclic encoding method. Finally, the candidate codeword having the minimum correlation discrepancy with the received soft information sequence $\mathbf{r}$ is selected as the decoding result. The size of the candidate codeword list is reduced from $2^k$ to $I \cdot C$ based on the minimum correlation discrepancy, which effectively narrows the search range of the decoding results. Meanwhile, for each information sequence, the correlation discrepancies over the k information positions need to be computed only once, and the correlation discrepancy over the $(n-k)$ parity positions is computed C times. Compared with ML decoding, the proposed algorithm avoids the costly calculation and sorting of the correlation discrepancies of all $2^k$ binary k-dimensional sequences.
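To make the flow of Algorithm 1 concrete, here is a simplified Python sketch for the small (7, 4) cyclic Hamming code with $g(x) = 1 + x + x^3$, using $I = n$ information sequences and only $C = 1$ candidate (the hard decision) per sequence. The code and parameter choices are illustrative, not the paper's configuration:

```python
N, K = 7, 4
G = [1, 1, 0, 1]  # g(x) = 1 + x + x^3, coefficient of x^j at index j

def poly_mod(p):
    """Remainder of p(x) modulo g(x) over GF(2)."""
    r = p[:]
    for i in range(len(r) - 1, N - K - 1, -1):
        if r[i]:
            for j, gj in enumerate(G):
                r[i - (N - K) + j] ^= gj
    return r[:N - K]

def encode(msg):
    """Systematic cyclic encoding: codeword = (msg, parity)."""
    parity = poly_mod([0] * (N - K) + list(msg))
    return list(msg) + parity

def corr_discrepancy(r, v):
    """Sum of |r_i| where the hard decision disagrees with v_i."""
    return sum(abs(ri) for ri, vi in zip(r, v)
               if (0 if ri >= 0 else 1) ^ vi)

def decode(r):
    best, best_lam = None, float("inf")
    for i in range(N):                        # I = n cyclic information sets
        r_i = r[i:] + r[:i]                   # left cyclic shift by i
        msg = [0 if x >= 0 else 1 for x in r_i[:K]]  # C = 1 candidate
        cand = encode(msg)                    # same encoder for every shift
        lam = corr_discrepancy(r_i, cand)
        if lam < best_lam:
            s = (N - i) % N                   # undo the shift for the winner
            best, best_lam = cand[s:] + cand[:s], lam
    return best
```

With one low-reliability error (position 1 of the codeword (1, 0, 1, 1, 1, 0, 0) received as −0.2), the shift i = 2 moves the error out of the information positions, and the re-encoded, shifted-back candidate recovers the transmitted codeword.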

3.2. Efficient Iterative Generating Method for Adjacent Candidate Sequences

The generation of the candidate information sequences and the metric calculation of the candidate codewords are the main steps of the proposed soft decision decoding algorithm and dominate its computation complexity. For each information sequence, only a small number, denoted C, of candidate information sequences need to be generated. Among all $2^k$ k-dimensional binary sequences, the candidate information sequences are the C members with the smallest correlation discrepancies with respect to the information sequence.
Aiming at reducing the complexity of generating the candidate information sequences, an iterative algorithm is adopted in this paper. This method avoids calculating and sorting the correlation discrepancies of all $2^k$ sequences. The algorithm was first employed to calculate the log-likelihood ratios of non-binary symbols for non-binary low density parity check (LDPC) codes [55]. In this paper, it is extended, with some modifications, to the generation of candidate information sequences.
The algorithm works iteratively with a total of k iterations. In the d-th iteration, the subsequence $\mathbf{m}^{d} = (m_0, m_1, \ldots, m_{d-1})$, consisting of the first d components of the information sequence, is considered. The set $\varphi_d$ of two-dimensional variables represents the generated partial binary sequences and their correlation discrepancies with $\mathbf{m}^{d}$. For $d = 1$, $\varphi_1 = \{(u_0, 0), (\bar{u}_0, |m_0|)\}$, where $u_0$ is the hard decision result of $m_0$ and $\bar{u}_0$ is the complement of $u_0$. The calculation in the d-th iteration is divided into two steps. Take the l-th pair of elements, $\varphi_d^0(l)$ and $\varphi_d^1(l)$, as an example. Each element of $\varphi_{d-1}$ is expanded into two elements according to the d-th component $m_{d-1}$ of the information sequence and its hard decision result $u_{d-1}$. Extending the l-th element $(\mathbf{s}_l^{(d-1)}, \lambda(\mathbf{m}^{d-1}, \mathbf{s}_l^{(d-1)}))$ in $\varphi_{d-1}$ into $\varphi_d^0(l)$ and $\varphi_d^1(l)$ can be expressed as
$$\varphi_d^0(l) = \left(\mathbf{s}_l^{(d-1)} \,\&\, u_{d-1},\ \lambda(\mathbf{m}^{d-1}, \mathbf{s}_l^{(d-1)})\right), \quad \varphi_d^1(l) = \left(\mathbf{s}_l^{(d-1)} \,\&\, \bar{u}_{d-1},\ \lambda(\mathbf{m}^{d-1}, \mathbf{s}_l^{(d-1)}) + |m_{d-1}|\right), \tag{10}$$
where '&' denotes the concatenation of two binary sequences. The sets $\varphi_d^0$ and $\varphi_d^1$ extended from $\varphi_{d-1}$ are obtained by calculating all $\varphi_d^0(l)$ and $\varphi_d^1(l)$.
Then, the two sets $\varphi_d^0$ and $\varphi_d^1$ are merged to construct a new set $\varphi_d$. It must be ensured that the elements in $\varphi_d$ are arranged in ascending order of the correlation discrepancy. Since the algorithm is iterative, the elements in $\varphi_d^0$ and $\varphi_d^1$ are already in ascending order. Therefore, only the first elements of the two sets that have not yet been placed into $\varphi_d$ need to be compared, and the element with the smaller correlation discrepancy is put into $\varphi_d$. When the number of elements in $\varphi_d$ reaches the required number of candidate codewords, the construction of $\varphi_d$ is stopped. After the k-th iteration, the first components of the elements in the obtained set $\varphi_k$ constitute the C candidate sequences. The two main steps of the iterative algorithm are described in Algorithm 2.
The candidate information sequences are the sequences with the minimum distance (correlation discrepancy) to the information sequence. In the extending process, the correlation discrepancy of an extended sequence can only grow, so a sequence whose metric is already too large can never become a candidate sequence. Thus, discarding such sequences does not affect the result of candidate information sequence generation. The discarding process significantly reduces the complexity of generating the candidate information sequences, because the number of calculations and comparisons of the correlation discrepancy is reduced, especially for cyclic codes of long code length.
Algorithm 2 The d-th iteration of the candidate sequence generating algorithm
  • Input: the set $\varphi_{d-1}$ of the $(d-1)$-th iteration, the absolute value $|m_{d-1}|$ of the d-th component of the information sequence, and the hard decision result $u_{d-1}$.
  • Output: the set $\varphi_d$.
  • 1. Extend the set $\varphi_{d-1}$ to $\varphi_d^0$ and $\varphi_d^1$.
  •     1.1 Append $u_{d-1}$ to the binary sequence of each element in $\varphi_{d-1}$ to obtain the set $\varphi_d^0$.
  •     1.2 Append the complement of $u_{d-1}$, i.e., $\bar{u}_{d-1}$, to the binary sequence of each element in $\varphi_{d-1}$, and add $|m_{d-1}|$ to the correlation discrepancy of each element, obtaining the set $\varphi_d^1$.
  • 2. Construct the set $\varphi_d$.
  •     2.1 Combine $\varphi_d^0$ and $\varphi_d^1$, i.e., $\varphi_d = \varphi_d^0 \cup \varphi_d^1$.
  •     2.2 Compare the number of elements in $\varphi_d$ with the required number C; if $|\varphi_d| \le C$, all the elements form $\varphi_d$; otherwise, go to step 2.3.
  •     2.3 Compare the elements of the two sets $\varphi_d^0$ and $\varphi_d^1$ and output the C elements with the smallest correlation discrepancies, obtaining the new set $\varphi_d$.
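The iterative procedure of Algorithm 2 can be modeled in software as follows; this Python sketch keeps each partial list sorted in ascending order of correlation discrepancy and truncates to C entries at every stage (an illustrative model, not the hardware description):

```python
def candidate_sequences(m, C):
    """Return the C binary sequences closest in correlation
    discrepancy to the soft information sequence m."""
    u = [0 if x >= 0 else 1 for x in m]            # hard decisions
    phi = [((u[0],), 0.0), ((1 - u[0],), abs(m[0]))][:C]   # phi_1
    for d in range(1, len(m)):
        # step 1: expand each element with u_d (metric unchanged)
        # and with its complement (metric grows by |m_d|)
        phi0 = [(s + (u[d],), lam) for s, lam in phi]
        phi1 = [(s + (1 - u[d],), lam + abs(m[d])) for s, lam in phi]
        # step 2: merge the two ascending lists, keep at most C entries
        merged, i, j = [], 0, 0
        while len(merged) < C and (i < len(phi0) or j < len(phi1)):
            if j >= len(phi1) or (i < len(phi0) and phi0[i][1] <= phi1[j][1]):
                merged.append(phi0[i]); i += 1
            else:
                merged.append(phi1[j]); j += 1
        phi = merged
    return [s for s, _ in phi]
```

For m = (0.9, −0.2, 1.1) and C = 4, the first candidates are the hard decision (0, 1, 0) with metric 0 and the sequence flipping the least reliable bit, (0, 0, 0), with metric 0.2, mirroring the ordered list the PE chain produces.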

4. The Architecture and Complexity Analysis of the Proposed Decoder

In this section, based on the proposed soft decision decoding algorithm, the decoder implementation for hardware evaluation is designed in a serial mode. Two key components in the decoder, the candidate sequence generator and the correlation discrepancy calculation unit, are both implemented based on the systolic arrays, which have lower complexity and reduce the length of the critical path. Furthermore, the implementation complexity of the proposed decoder is analyzed.

4.1. Proposed Decoder Architecture

The overall architecture of the proposed soft decision decoder is shown in Figure 3. The decoder includes five modules. The soft information register is used to store soft information from the demodulator and perform cyclic shifting to obtain the different information sequences, that is, the information set. The following decoding operations are repeated according to the cardinality of the information set, and finally the candidate codeword with the minimum correlation discrepancy is selected as the decoding output. As described in Section 3, the candidate sequence generator, the re-encoding unit, the correlation discrepancy calculation unit, and the decoding result update unit work in a sequential manner.
The decoding process is explained as follows. The soft information sequence r from the demodulator is first buffered in the soft information register. Then, the first k components, i.e., the soft information corresponding to the information bits, are output to the candidate sequence generator to construct a series of candidate information sequences. Next, the re-encoding unit encodes the candidate information sequences to obtain the candidate codewords. The residual ( n k ) components of r are output to the correlation discrepancy calculation unit for calculating the correlation discrepancy of each candidate codeword. The candidate codeword and its correlation discrepancy are computed serially. After the candidate codewords and the corresponding correlation discrepancies are input into the decoding result update unit, the register is used to circularly shift the buffered candidate codeword. When all the operations for a single information sequence are completed, the soft information register performs a cyclic shift on the buffered data to generate another new information sequence. In this way, the operations for each information sequence are repeated. After all the I information sequences are processed, the final decoding output is obtained in the decoding result update unit.
Among all the modules of the decoder, the candidate sequence generator and the correlation discrepancy calculation unit need to be carefully designed. For the candidate sequence generator, the iterative algorithm can significantly reduce the number of comparison operations and thus reduce the overall implementation complexity. For the correlation discrepancy computation unit, the complexity of the direct summation computation is high. In our design, we consider the systolic array architecture for the two modules. The systolic array architecture is composed of several modules working in pipeline mode, and each module is called a Processing Element (PE).

4.2. Architecture of the Candidate Sequence Generator

The candidate sequence generator produces the candidate information sequences and the corresponding k-dimensional correlation discrepancies for the subsequent modules, according to the channel observation values $r_i$. As shown in Figure 4, the systolic array architecture consists of k cascaded PEs working sequentially according to Algorithm 2. In this architecture, the d-th component of the received soft information sequence is connected to the d-th PE, and the overall generation starts with the first PE, which corresponds to $m_0$. The input of the first PE is a preset initial value, and its output is connected to the next PE. For the d-th iteration $(2 \le d \le k)$, the d-th PE has two inputs: the absolute value $|m_{d-1}|$ of the d-th component of the soft information sequence with its hard decision result $u_{d-1}$, and the list $\varphi_{d-1}$, containing the intermediate $(d-1)$-dimensional candidate sequences with their correlation discrepancies, from the $(d-1)$-th PE. For the output, the d-th PE generates the list $\varphi_d$ and transfers it to the $(d+1)$-th PE of the next stage, until the output of the k-th PE constitutes the generator output, where k is the information sequence length.
The PE is the core of the candidate sequence generator. It consists of two expansion modules, two First-In-First-Out (FIFO) memories, and one comparator, which selects the minimum of the two FIFO outputs. The d-th PE is shown in Figure 4, where the input and output carry intermediate results of the candidate sequence generation. The d-th PE serially receives the elements of the ordered list $\varphi_{d-1}$ from the $(d-1)$-th stage one by one. The first step is the expansion of $\varphi_{d-1}$ into $\varphi_d^0$ and $\varphi_d^1$, as described in the previous section. Once the first elements of $\varphi_d^0$ and $\varphi_d^1$ are stored in the FIFOs, the merging process starts. In each clock cycle, the outputs of the two FIFOs are compared, and the one with the smaller correlation discrepancy is fed into the list $\varphi_d$. Simultaneously, the pull signal is set to 1 to allow a new couple at the output of the corresponding FIFO.
The depth of the two FIFOs per stage is set considering the worst case, in which all couples of $\varphi_d$ are retrieved from one FIFO. Therefore, the depth of each FIFO in the d-th PE equals the number of couples in the output $\varphi_d$ of that stage. According to the previous section, if the stage label d satisfies $2^{d-1} > C$, the depth of the FIFO is set to C; otherwise, the depth is $2^{d-1}$. After the last PE outputs the C candidate information sequences, all the FIFOs are cleared and the candidate sequence generation is complete.

4.3. Architecture of the Correlation Discrepancy Calculation Unit

The correlation discrepancy calculation unit computes the correlation discrepancy between a candidate codeword $\mathbf{v}_l = (v_{l,0}, v_{l,1}, \ldots, v_{l,n-1})$ $(1 \le l \le C)$ and the received soft information sequence $\mathbf{r}^{(i)} = (r_0^{(i)}, r_1^{(i)}, \ldots, r_{n-1}^{(i)})$. To simplify the hardware implementation, the correlation discrepancy is split into the part over the first k bits and the part over the last $(n-k)$ bits. The formula for calculating the correlation discrepancy can be expressed as
$$\lambda(\mathbf{r}^{(i)}, \mathbf{v}_l) = \sum_{j: u_j^{(i)} \oplus v_{l,j} = 1} |r_j^{(i)}| = \sum_{j=0}^{k-1} |r_j^{(i)}| \times (u_j^{(i)} \oplus v_{l,j}) + \sum_{j=k}^{n-1} |r_j^{(i)}| \times (u_j^{(i)} \oplus v_{l,j}), \tag{11}$$
or concisely denoted as
$$\lambda(\mathbf{r}^{(i)}, \mathbf{v}_l) = \lambda(\mathbf{m}^{(i)}, \mathbf{s}_l) + \lambda(\mathbf{p}^{(i)}, \mathbf{q}_l). \tag{12}$$
Here, $\mathbf{r}^{(i)} = (\mathbf{m}^{(i)}, \mathbf{p}^{(i)})$, and $\mathbf{u}^{(i)} = (u_0^{(i)}, u_1^{(i)}, \ldots, u_{n-1}^{(i)})$ is the hard decision result of $\mathbf{r}^{(i)}$. The soft information associated with the information bits is denoted as $\mathbf{m}^{(i)} = (r_0^{(i)}, r_1^{(i)}, \ldots, r_{k-1}^{(i)})$, while $\mathbf{p}^{(i)} = (r_k^{(i)}, r_{k+1}^{(i)}, \ldots, r_{n-1}^{(i)})$ is the part of the soft information sequence associated with the parity-check bits. Similarly, the candidate codeword is split as $\mathbf{v}_l = (\mathbf{s}_l, \mathbf{q}_l)$, where $\mathbf{s}_l = (v_{l,0}, v_{l,1}, \ldots, v_{l,k-1})$ is the candidate information sequence associated with the information bits and $\mathbf{q}_l = (v_{l,k}, v_{l,k+1}, \ldots, v_{l,n-1})$.
It can be observed that the first term in Equation (12) is the correlation discrepancy between the information sequence and the corresponding candidate sequence. The correlation discrepancy calculation unit only needs to calculate the second term λ ( p ( i ) , q l ) .
Considering the high computation complexity, the proposed correlation discrepancy calculation unit uses a systolic array architecture for sequential calculation. The architecture, shown in Figure 5, consists of $(n-k)$ identical PEs. Taking the d-th PE as an example, its inputs are the soft information $r_{k+d-1}^{(i)}$ with its corresponding hard decision result $u_{k+d-1}^{(i)}$, and the list $\xi_{d-1}$ from the previous PE. The d-th PE generates the list $\xi_d$ as an output, which is serially sent to the $(d+1)$-th PE of the next stage. $\xi_d$ $(0 \le d \le n-k)$ is a set containing C two-dimensional variables, whose l-th element can be expressed as
$$\xi_d(l) = \begin{cases} \left(\mathbf{v}_l,\ \lambda(\mathbf{m}^{(i)}, \mathbf{s}_l)\right), & d = 0, \\ \left(\mathbf{v}_l,\ \lambda(\mathbf{m}^{(i)}, \mathbf{s}_l) + \sum_{j=k}^{k+d-1} |r_j^{(i)}| \times (u_j^{(i)} \oplus v_{l,j})\right), & 1 \le d \le n-k. \end{cases} \tag{13}$$
In Formula (13), the first component of $\xi_d(l)$ is the candidate codeword $\mathbf{v}_l$, and the second component is the correlation discrepancy between the first $(k+d)$ components of $\mathbf{r}^{(i)}$ and the first $(k+d)$ bits of $\mathbf{v}_l$.
As shown in Figure 5, the d-th PE is composed of an expansion unit, an XOR gate, and a data selector. The XOR operation generates the selection signal for the data selector. If $u_{k+d-1}^{(i)} \oplus v_{l,k+d-1} = 0$, the pair $(0, u_{k+d-1}^{(i)})$ is selected to expand $\xi_{d-1}(l)$; otherwise, if $u_{k+d-1}^{(i)} \oplus v_{l,k+d-1} = 1$, the pair $(|r_{k+d-1}^{(i)}|, \bar{u}_{k+d-1}^{(i)})$ is selected to expand $\xi_{d-1}(l)$ into $\xi_d(l)$. The last PE serially outputs $\xi_{n-k}$, i.e., all the candidate codewords and their correlation discrepancies with $\mathbf{r}^{(i)}$.
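The accumulation performed by this PE chain can be modeled in a few lines of Python; the function below is an illustrative software model (not the RTL) that adds $|r_j|$ to the running metric of Equations (12) and (13) whenever the XOR of the hard decision and the candidate parity bit is 1:

```python
def parity_discrepancy_pe_chain(r_parity, q, lam_info):
    """Model of the (n-k) PE stages: r_parity holds the parity-part
    soft values, q the candidate parity bits, lam_info the info-part
    discrepancy lambda(m, s) carried in from the candidate generator."""
    lam = lam_info                      # xi_0: discrepancy of info part
    for rj, qj in zip(r_parity, q):
        uj = 0 if rj >= 0 else 1        # hard decision inside the PE
        if uj ^ qj:                     # XOR gate drives the selector
            lam += abs(rj)              # select |r_j|; otherwise add 0
    return lam
```

Each loop iteration corresponds to one PE stage, so after the last iteration the returned value equals the full correlation discrepancy of the candidate codeword.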

4.4. Implementation Complexity of the Proposed Decoder

Considering that the OSD algorithm is the most efficient classical algorithm that nearly achieves the ML decoding performance, the complexity evaluation is based on a comparison between the proposed algorithm and the OSD algorithm. Compared with the order-t OSD algorithm over k-dimensional sequences in [16], the proposed algorithm effectively reduces the computation complexity, as the number of tested candidate codewords, $I \cdot C$, is smaller than $\sum_{i=0}^{t} \binom{k}{i}$. We utilize the (31, 21) BCH code as an example. Table 1 shows the size of the candidate codeword list for different decoding algorithms. The (31, 21) BCH code only requires 80 candidate codewords under appropriate parameters; the corresponding parameter optimization is presented in Section 5.2. The proposed decoding algorithm thus effectively reduces the number of candidate codewords compared with the OSD algorithm.
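The order-t OSD list sizes for this example can be reproduced directly; in the snippet below, the figure of 80 candidates for the proposed algorithm is the value quoted in the text, not computed here:

```python
from math import comb

k = 21                                   # (31, 21) BCH code
# order-t OSD tests all patterns of weight <= t over the k MRB positions
osd_list = {t: sum(comb(k, i) for i in range(t + 1)) for t in (1, 2, 3)}
proposed_list = 80                       # I*C reported for this code
assert proposed_list < osd_list[2]       # fewer candidates than order-2 OSD
```

For k = 21, the order-1, order-2, and order-3 OSD lists contain 22, 232, and 1562 candidates, respectively, so 80 candidates sit well below the order-2 figure.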
For the re-encoding, the OSD algorithm needs to obtain a new generator matrix for each received sequence through complex matrix operations. The proposed decoding algorithm employs the cyclic property of cyclic block codes, enabling the use of the uniform systematic generator matrix in re-encoding. The cyclic codes can be encoded using very simple encoding circuits with shift-registers.
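As an illustration of such a shift-register encoder, the following Python sketch models the standard (n−k)-stage division circuit; the (7, 4) cyclic Hamming code with $g(x) = 1 + x + x^3$ used in the example is an illustrative choice, not one of the paper's codes:

```python
def lfsr_parity(msg, g, n, k):
    """Division shift register for systematic cyclic encoding.
    g holds g_0..g_(n-k-1), low degree first; the leading
    coefficient g_(n-k) = 1 is implicit in the feedback path."""
    reg = [0] * (n - k)                    # shift-register state
    for b in reversed(msg):                # highest-order info bit first
        fb = b ^ reg[-1]                   # feedback bit
        reg = [fb & g[0]] + [reg[j - 1] ^ (fb & g[j])
                             for j in range(1, n - k)]
    return reg                             # parity coefficients p_0..p_(n-k-1)
```

For the message (1, 0, 1, 1), the register ends holding the parity (1, 0, 0), matching the long division of $x^{n-k} m(x)$ by $g(x)$; in hardware, this is exactly the handful of flip-flops and XOR gates the paper alludes to.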
All our implementations target the Virtex-5 LX110T Field Programmable Gate Array (FPGA) (xc5vlx110t-1ff1136), synthesized with the XST program of ISE 13.2, all by Xilinx Inc. This FPGA is intended for high-performance general logic applications and contains 17,280 configurable slices and 128 block RAMs (BRAMs), where each slice has 4 LUTs and 4 flip-flops. The proposed hardware architecture is implemented on this FPGA platform with different parameters. The proposed soft decision decoding algorithm contains two key parameters: the number of information sequences, I, and the number of candidate codewords, C. As I increases, the number of computation operations required to generate the candidate codeword list increases linearly. Another advantage is that the systolic arrays operate in parallel, so the operations are limited only by the number of candidate codewords, C.
To illustrate the required hardware resources of the decoder, Table 2 presents the main units of the key modules. Here, weight(g) is the weight of the generator polynomial of the cyclic code. The adders and comparators in the d-th PE have a width of (w + d) bits, where w is the quantization bit width of the correlation discrepancy. Furthermore, each processing element of the candidate sequence generator requires two FIFOs of the same size; in the d-th processing element, each FIFO has a width of (w + d) bits and a depth of min(2^(d−1), C). The total storage space of the FIFOs (in bits) can be expressed as
M_F = 2\left(\sum_{i=1}^{\lceil \log_2 C \rceil} 2^{\,i-1}(w+i) + \sum_{i=\lceil \log_2 C \rceil + 1}^{k} C\,(w+i)\right).
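The storage formula can be evaluated directly; the parameter values in the example below are illustrative, not a configuration reported in the paper.

```python
from math import ceil, log2

def fifo_storage_bits(k, C, w):
    """Total FIFO storage M_F in bits: two FIFOs per PE, where the d-th
    FIFO is (w + d) bits wide with depth min(2^(d-1), C)."""
    t = ceil(log2(C))
    # PEs 1 .. ceil(log2 C): depth still doubling, 2^(i-1) entries
    early = sum(2 ** (i - 1) * (w + i) for i in range(1, t + 1))
    # PEs ceil(log2 C)+1 .. k: depth saturated at C entries
    late = sum(C * (w + i) for i in range(t + 1, k + 1))
    return 2 * (early + late)

# e.g. a k = 12 code with C = 16 candidates and w = 6 quantization bits
total_bits = fifo_storage_bits(12, 16, 6)
```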

5. BER Performance Verification Based on Hardware Platform

For the proposed soft decision decoding algorithm and the decoder implementation architecture, we verify the error correction performance and select the number of candidate codewords, C, and the number of information sequences, I, on the FPGA. The FPGA emulation platform can effectively accelerate the evaluation of the error correction performance [56], and the two key decoder parameters can be optimized according to the BER performance. Several cyclic block codes are taken as examples: the (15, 5) BCH code, the (23, 12) Golay code, the (31, 21) BCH code, and the (63, 30) BCH code. All the decoders for these codes have been implemented on the FPGA (xc5vlx110t).

5.1. BER Performance of the Proposed Decoding Algorithm

We evaluate the bit error rate (BER) performance of the four cyclic codes with the proposed decoding algorithm using this platform, as shown in Figure 6. The BER performances of the four cyclic codes under ML decoding are also presented [57]. It can be observed that, at a BER of 10^−5, only a small SNR degradation (<0.2 dB) occurs for the decoding algorithm using the cyclic information set. In addition, Table 3 provides the SNR required at different BERs. The (63, 30) BCH code achieves a BER of 10^−5 at an SNR of only 4.2 dB. The results show that the proposed soft decision decoders can obtain performance close to that of ML decoding.

5.2. BER Performance of the Decoders with Different I and C

Figure 7, Figure 8 and Figure 9 present the bit error rates of the (23, 12) Golay decoder, the (31, 21) BCH decoder, and the (63, 30) BCH decoder with different parameters I and C. The hardware implementation results show that increasing the number of candidate codewords or information sequences can significantly improve the decoding performance. As the number of information sequences I increases, the performance of the decoding algorithm improves. Similarly, when I is fixed, the performance also improves with C; after the target decoding performance is reached, the BER curve flattens. Furthermore, we present the BER performance versus the product I · C. It can be observed in Figure 7, Figure 8 and Figure 9 that the proposed algorithm can achieve the target performance using different combinations of I and C, provided the overall number of candidate codewords is large enough to find the optimal codeword. Some typical results are listed in Table 4 for convenience. When the target decoding performance is satisfied, the parameters corresponding to the minimum I · C are selected to reduce the computational complexity; Table 4 presents these optimization results. With a BER of 10^−4 as the criterion, the (23, 12) Golay code achieves the target performance with only 48 candidate codewords (I = 3, C = 16).
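The parameter selection rule above, pick the feasible (I, C) pair with the smallest product, can be sketched as a small helper. The function name and the example result dictionary are hypothetical; only the (I = 3, C = 16) Golay point is taken from the text.

```python
def pick_parameters(results, target_ber):
    """Select the (I, C) pair minimizing I * C among pairs meeting target_ber.

    results    : dict mapping (I, C) -> measured BER from emulation
    target_ber : required BER (e.g. 1e-4)
    Returns (I, C), or None if no pair reaches the target.
    """
    feasible = [(I * C, I, C)
                for (I, C), ber in results.items() if ber <= target_ber]
    if not feasible:
        return None
    _, I, C = min(feasible)       # smallest list size I * C wins
    return I, C

# Hypothetical emulation results in the spirit of Table 4 (Golay code)
results = {(3, 16): 9e-5, (4, 16): 8e-5, (2, 32): 2e-4}
best = pick_parameters(results, 1e-4)   # (3, 16): 48 candidates suffice
```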

6. Conclusions

A soft decision decoding algorithm based on a cyclic information set is proposed, together with its efficient hardware implementation architecture. The algorithm employs the correlation discrepancy as the decoding metric and adopts an iterative computation method. Firstly, the cyclically generated equivalent information sequences with the minimum correlation discrepancy to the received sequence of channel observation values are selected as the candidate information sequences. Then, the list of candidate codewords is produced by the very concise encoder. Finally, the optimal decoding result is obtained by selecting the best candidate codeword under the correlation discrepancy metric. The method reduces the search range of the decoding results by building the candidate codeword list in a cyclic and iterative manner, and its complexity is lower than that of the ML algorithm. Furthermore, to verify the performance of the decoding algorithm, the implementation architecture is realized on the FPGA platform. The BER performances of the (15, 5) BCH code, (31, 21) BCH code, (63, 30) BCH code, and (23, 12) Golay code are evaluated on this platform. The experimental results show that the performance of the proposed decoding algorithm is close to that of the ML algorithm, and the decoder achieves a good trade-off between decoding complexity and error correction performance. The effect of the number of information sequences and candidate codewords on the decoding performance is also evaluated on the FPGA platform; the complexity of the decoder can be reduced by optimizing these two parameters.
In this paper, we only evaluated the BER performance of the cyclic codes over the AWGN channel. In the future, considering that short cyclic codes are widely employed in 5G communications, it will be valuable to evaluate the proposed decoding algorithm over wireless fading channels combined with advanced modulations or waveforms. In addition, a serial hardware implementation architecture was used to verify the BER performance of the proposed algorithm; efficient parallel architectures will be developed to meet the higher throughput requirements of 5G, enabling practical applications.

Author Contributions

Conceptualization, W.C. and C.H.; methodology, W.C. and C.H.; software, W.C. and T.Z.; validation, W.C. and C.H.; formal analysis, W.C. and T.Z.; investigation, W.C. and C.H.; writing—original draft preparation, W.C. and T.Z.; writing—review and editing, W.C., T.Z. and C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research is partly supported by the Seed Fund of Tianjin University (No. 0903061008).

Data Availability Statement

Data are available upon request.

Acknowledgments

The authors thank the former master's student Chenchi Liang in Chen's group, who wrote part of the programs and prepared some raw data. We also thank the former master's student Senyong Zhang in Chen's group, who prepared part of the figures and the data analysis.

Conflicts of Interest

Author W.C. holds two Chinese patents (CN201510846874.3, CN201611099566.X) covering the related methods.

Abbreviations

The following abbreviations are used in this manuscript:
ML: Maximum Likelihood
MRB: Most Reliable Basis
BCH: Bose–Chaudhuri–Hocquenghem
BPSK: Binary Phase Shift Keying
AWGN: Additive White Gaussian Noise
LDPC: Low-Density Parity-Check
PE: Processing Element
FIFO: First-In-First-Out
OSD: Ordered Statistic Decoding
BER: Bit Error Rate
FPGA: Field Programmable Gate Array
SNR: Signal-to-Noise Ratio

References

  1. Durisi, G.; Koch, T.; Popovski, P. Toward massive, ultrareliable, and low-latency wireless communication with short packets. Proc. IEEE 2016, 104, 1711–1726. [Google Scholar] [CrossRef]
  2. Parvez, I.; Rahmati, A.; Guvenc, I.; Sarwat, A.I.; Dai, H. A survey on low latency towards 5G: Ran, core network and caching solutions. IEEE Commun. Surv. Tutor. 2018, 20, 3098–3130. [Google Scholar] [CrossRef]
  3. Ma, Z.; Xiao, M.; Xiao, Y.; Pang, Z.; Poor, H.V.; Vucetic, B. High-reliability and low-latency wireless communication for internet of things: Challenges, fundamentals, and enabling technologies. IEEE Internet Things J. 2019, 6, 7946–7970. [Google Scholar] [CrossRef]
  4. Ahmed, A.; Rasheed, H.; Liyanage, M. Millimeter-Wave Channel Modeling in a Vehicular Ad-Hoc Network Using Bose–Chaudhuri–Hocquenghem (BCH) Code. Electronics 2021, 10, 992. [Google Scholar] [CrossRef]
  5. Chen, H.; Abbas, R.; Cheng, P.; Shirvanimoghaddam, M.; Hardjawana, W.; Bao, W.; Li, Y.; Vucetic, B. Ultra-reliable low latency cellular networks: Use cases, challenges and approaches. IEEE Commun. Mag. 2018, 56, 119–125. [Google Scholar] [CrossRef]
  6. Vaezi, M.; Azari, A.; Khosravirad, S.R.; Shirvanimoghaddam, M.; Azari, M.M.; Chasaki, D.; Popovski, P. Cellular, wide-area, and non-terrestrial IoT: A survey on 5G advances and the road toward 6G. IEEE Commun. Surv. Tutor. 2022, 24, 1117–1174. [Google Scholar] [CrossRef]
  7. Saiz-Adalid, L.-J.; Gracia-Morán, J.; Gil-Tomás, D.; Baraza-Calvo, J.-C.; Gil-Vicente, P.-J. Reducing the Overhead of BCH Codes: New Double Error Correction Codes. Electronics 2020, 9, 1897. [Google Scholar] [CrossRef]
  8. Medina, F.R.; Wijanto, H.; Edwar, E.; Rahmadani, C.P.; Pangestuy, B.H.B. Prototype of telemetry, tracking, and command module for Tel-USat with BCH code. In Proceedings of the 2019 IEEE 13th International Conference on Telecommunication Systems, Services, and Applications (TSSA), Bali, Indonesia, 3–4 October 2019; pp. 204–208. [Google Scholar] [CrossRef]
  9. Polyanskiy, Y.; Poor, H.V.; Verdu, S. Channel coding rate in the finite blocklength regime. IEEE Trans. Inf. Theory 2010, 56, 2307–2359. [Google Scholar] [CrossRef]
  10. Berlekamp, E.; McEliece, R.; Van Tilborg, H. On the inherent intractability of certain coding problems (corresp). IEEE Trans. Inf. Theory 1978, 24, 384–386. [Google Scholar] [CrossRef]
  11. Bose, R.C.; Ray-Chaudhuri, D.K. On a class of error correcting binary group codes. Inf. Control 1960, 3, 68–79. [Google Scholar] [CrossRef]
  12. Lin, S.; Costello, D.J. Error Control Coding, 2nd ed.; Pearson: New York, NY, USA, 2004; pp. 194–231. ISBN 978-0130426727. [Google Scholar]
  13. Shin, S.K.; Sweeney, P. Soft decision decoding of Reed-Solomon codes using trellis methods. IEE Proc. Commun. 1994, 141, 303–308. [Google Scholar] [CrossRef]
  14. Forney, G. Generalized minimum distance decoding. IEEE Trans. Inf. Theory 1966, 12, 125–131. [Google Scholar] [CrossRef]
  15. Chase, D. A class of algorithms for decoding block codes with channel measurement information. IEEE Trans. Inf. Theory 1972, 18, 170–182. [Google Scholar] [CrossRef]
  16. Fossorier, M.P.C.; Lin, S. Soft decision decoding of linear block codes based on ordered statistics. IEEE Trans. Inf. Theory 1995, 41, 1379–1396. [Google Scholar] [CrossRef]
  17. Yue, C.; Shirvanimoghaddam, M.; Vucetic, B.; Li, Y. A revisit to ordered statistics decoding: Distance distribution and decoding rules. IEEE Trans. Inf. Theory 2021, 67, 4288–4337. [Google Scholar] [CrossRef]
  18. Yue, C.; Shirvanimoghaddam, M.; Li, Y.; Vucetic, B. Segmentation-discarding ordered-statistic decoding for linear block codes. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Waikoloa, HI, USA, 9–13 December 2019; pp. 1–6. [Google Scholar] [CrossRef]
  19. Yue, C.; Shirvanimoghaddam, M.; Park, G.; Park, O.-S.; Vucetic, B.; Li, Y. Probability-based ordered-statistics decoding for short block codes. IEEE Commun. Lett. 2021, 25, 1791–1795. [Google Scholar] [CrossRef]
  20. Yang, L.; Chen, W.; Chen, L. Reduced complexity ordered statistics decoding of linear block codes. In Proceedings of the 2022 IEEE/CIC International Conference on Communications in China (ICCC Workshops), Foshan, China, 11–13 August 2022; pp. 371–376. [Google Scholar] [CrossRef]
  21. Yue, C.; Shirvanimoghaddam, M.; Vucetic, B.; Li, Y. Ordered-statistics decoding with adaptive gaussian elimination reduction for short codes. In Proceedings of the 2022 IEEE Globecom Workshops (GC Wkshps), Rio de Janeiro, Brazil, 4–8 December 2022; pp. 492–497. [Google Scholar] [CrossRef]
  22. Yang, L.; Chen, L. Low-latency ordered statistics decoding of BCH codes. In Proceedings of the 2022 IEEE Information Theory Workshop (ITW), Mumbai, India, 1–9 November 2022; pp. 404–409. [Google Scholar] [CrossRef]
  23. Yue, C.; Shirvanimoghaddam, M.; Park, G.; Park, O.-S.; Vucetic, B.; Li, Y. Linear-equation ordered-statistics decoding. IEEE Trans. Commun. 2022, 70, 7105–7123. [Google Scholar] [CrossRef]
  24. Celebi, H.B.; Pitarokoilis, A.; Skoglund, M. Latency and reliability trade-off with computational complexity constraints: OS decoders and generalizations. IEEE Trans. Commun. 2021, 69, 2080–2092. [Google Scholar] [CrossRef]
  25. Wang, Y.; Liang, J.; Ma, X. Local constraint-based ordered statistics decoding for short block codes. In Proceedings of the 2022 IEEE Information Theory Workshop (ITW), Mumbai, India, 1–9 November 2022; pp. 107–112. [Google Scholar] [CrossRef]
  26. Liang, J.; Wang, Y.; Cai, S.; Ma, X. A low-complexity ordered statistic decoding of short block codes. IEEE Commun. Lett. 2023, 27, 400–403. [Google Scholar] [CrossRef]
  27. Adde, P.; Jego, C.; Bidan, R.L. Near maximum likelihood soft decision decoding of a particular class of rate-1/2 systematic linear block codes. Electron. Lett. 2011, 47, 259–260. [Google Scholar] [CrossRef]
  28. Adde, P.; Toro, D.G.; Jego, C. Design of an efficient maximum likelihood soft decoder for systematic short block codes. IEEE Trans. Signal Process. 2012, 60, 3914–3919. [Google Scholar] [CrossRef]
  29. Adde, P.; Bidan, R.L. A low-complexity soft decision decoding architecture for the binary extended Golay code. In Proceedings of the 2012 19th IEEE International Conference on Electronics, Circuits, and Systems (ICECS), Seville, Spain, 9–12 December 2012; pp. 705–708. [Google Scholar] [CrossRef]
  30. Sharma, M.K.; Bader, C.F.; Debbah, M. On Ultra-reliable low latency communication between energy harvesting transmitter and receiver with OS decoder. In Proceedings of the 2022 20th Mediterranean Communication and Computer Networking Conference (MedComNet), Pafos, Cyprus, 1–3 June 2022; pp. 224–229. [Google Scholar] [CrossRef]
  31. Yue, C.; Kosasih, A.; Shirvanimoghaddam, M.; Park, G.; Park, O.-S.; Hardjawana, W.; Vucetic, B.; Li, Y. NOMA joint decoding based on soft-output ordered-statistics decoder for short block codes. In Proceedings of the 2022 IEEE International Conference on Communications (ICC), Seoul, Republic of Korea, 16–20 May 2022; pp. 2163–2168. [Google Scholar] [CrossRef]
  32. Wang, F.; Jiao, J.; Zhang, K.; Wu, S.; Zhang, Q. Adjustable ordered statistic decoder for short block length code towards URLLC. In Proceedings of the 2021 13th International Conference on Wireless Communications and Signal Processing (WCSP), Changsha, China, 20–22 October 2021; pp. 1–5. [Google Scholar] [CrossRef]
  33. Conway, J.; Sloane, N. Soft decoding techniques for codes and lattices, including the Golay code and the leech lattice. IEEE Trans. Inf. Theory 1986, 32, 41–50. [Google Scholar] [CrossRef]
  34. Reed, I.S.; Yin, X.; Truong, T.K.; Holmes, J.K. Decoding the (24, 12, 8) Golay code. IEE Proc. E Comput. Digit. Technol. 1990, 137, 202–206. [Google Scholar] [CrossRef]
  35. Wei, S.W.; Wei, C.H. On high-speed decoding of the (23, 12, 7) Golay code. IEEE Trans. Inf. Theory 1990, 36, 692–695. [Google Scholar] [CrossRef]
  36. Vardy, A.; Be’ery, Y. More efficient soft decoding of the Golay codes. IEEE Trans. Inf. Theory 1991, 37, 667–672. [Google Scholar] [CrossRef]
  37. Boyarinov, I.; Martin, I.; Honary, B. High-speed decoding of extended Golay code. IEE Proc. Commun. 2000, 147, 333–336. [Google Scholar] [CrossRef]
  38. Lin, T.C.; Chang, H.C.; Lee, H.P.; Truong, T.K. On the decoding of the (24, 12, 8) Golay code. Inf. Sci. 2010, 180, 4729–4736. [Google Scholar] [CrossRef]
  39. Chen, W.; He, Y.; Han, C.; Yang, J.; Xu, Z. Bit-level composite signal design for simultaneous ranging and communication. China Commun. 2021, 18, 49–60. [Google Scholar] [CrossRef]
  40. Chen, W.; Han, M.; Zhou, J.; Ge, Q.; Wang, P.; Zhang, X.; Zhu, S.; Song, L.; Yuan, Y. An artificial chromosome for data storage. Natl. Sci. Rev. 2021, 8, nwab028. [Google Scholar] [CrossRef]
  41. Sarangi, S.; Banerjee, S. Efficient hardware implementation of encoder and decoder for Golay code. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2015, 23, 1965–1968. [Google Scholar] [CrossRef]
  42. Reviriego, P.; Liu, S.; Xiao, L.; Maestro, J.A. An efficient single and double-adjacent error correcting parallel decoder for the (24,12) extended Golay code. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2016, 24, 1603–1606. [Google Scholar] [CrossRef]
  43. Zhang, P.; Lau, F.C.M.; Sham, C.-W. Design of a high-throughput low-latency extended Golay decoder. In Proceedings of the 2017 23rd Asia-Pacific Conference on Communications (APCC), Perth, WA, Australia, 11–13 December 2017; pp. 1–4. [Google Scholar] [CrossRef]
  44. Jose, A.; Sujithamol, S. FPGA implementation of encoder and decoder for Golay code. In Proceedings of the 2017 International Conference on Trends in Electronics and Informatics (ICEI), Tirunelveli, India, 11–12 May 2017; pp. 892–896. [Google Scholar] [CrossRef]
  45. Choi, S.; Ahn, H.K.; Song, B.K.; Kim, J.P.; Kang, S.H.; Jung, S.-O. A decoder for short BCH codes with high decoding efficiency and low power for emerging memories. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2019, 27, 387–397. [Google Scholar] [CrossRef]
  46. Kim, C.; Rim, D.; Choe, J.; Kam, D.; Park, G.; Kim, S.; Lee, Y. FPGA-based ordered statistic decoding architecture for B5G/6G URLLC IIoT networks. In Proceedings of the 2021 IEEE Asian Solid-State Circuits Conference (A-SSCC), Busan, Republic of Korea, 7–10 November 2021; pp. 1–3. [Google Scholar] [CrossRef]
  47. Maity, R.K.; Samanta, J.; Bhaumik, J. FPGA-based low delay adjacent triple-bit error correcting codec. Internet Things Appl. Lect. Notes Electr. Eng. 2022, 825, 419–429. [Google Scholar] [CrossRef]
  48. Abbas, S.M.; Tonnellier, T.; Ercan, F.; Jalaleddine, M.; Gross, W.J. High-throughput VLSI architecture for soft-decision decoding with ORBGRAND. In Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 8288–8292. [Google Scholar] [CrossRef]
  49. Namba, K.; Pontarelli, S.; Ottavi, M.; Lombardi, F. A single-bit and double-adjacent error correcting parallel decoder for multiple-bit error correcting BCH codes. IEEE Trans. Device Mater. Reliab. 2014, 14, 664–671. [Google Scholar] [CrossRef]
  50. Marchand, C.; Hammouda, M.B.; Eustache, Y.; Canencia, L.C.; Boutillon, E. Design and implementation of a near maximum likelihood decoder for Cortex codes. In Proceedings of the 2012 7th International Symposium on Turbo Codes and Iterative Information Processing (ISTC), Gothenburg, Sweden, 27–31 August 2012; pp. 26–30. [Google Scholar] [CrossRef]
  51. Xie, D. Simplified algorithm and hardware implementation for the (24,12,8) extended Golay soft decoder up to 4 errors. Int. Arab J. Inf. Technol. 2014, 11, 111–115. [Google Scholar]
  52. MacWilliams, J. Permutation decoding of systematic codes. Bell Syst. Tech. J. 1964, 43, 485–505. [Google Scholar] [CrossRef]
  53. Dorsch, B. A decoding algorithm for binary block codes and J-ary output channels. IEEE Trans. Inf. Theory 1974, 20, 391–394. [Google Scholar] [CrossRef]
  54. Choi, C.; Jeong, J. Fast soft decision decoding algorithm for linear block codes using permuted generator matrices. IEEE Commun. Lett. 2021, 25, 3775–3779. [Google Scholar] [CrossRef]
  55. Ghouwayel, A.A.; Boutillon, E. A systolic LLR generation architecture for non-binary LDPC decoders. IEEE Commun. Lett. 2011, 15, 851–853. [Google Scholar] [CrossRef]
  56. Chen, W.; Zhao, W.; Li, H.; Dai, S.; Han, C.; Yang, J. Iterative decoding of LDPC-based product codes and FPGA-based performance evaluation. Electronics 2020, 9, 122. [Google Scholar] [CrossRef]
  57. Helmling, M.; Scholl, S.; Gensheimer, F.; Dietz, T.; Kraft, K.; Ruzika, S.; Wehn, N. Database of Channel Codes and ML Simulation Results. Available online: www.rptu.de/channel-codes (accessed on 5 June 2023).
Figure 1. The diagram of the proposed soft decision decoding algorithm.
Figure 2. The generation of the multiple cyclic information sets.
Figure 3. The overall architecture of the proposed soft decision decoder.
Figure 4. The architecture of the candidate sequence generator.
Figure 5. The architecture of the correlation discrepancy calculation unit.
Figure 6. The BER performance of the BCH code and the Golay code.
Figure 7. The BER performance of (23, 12) Golay code when SNR is 5 dB. (a) The BER performance with different I and C. (b) The BER performance with different candidate codeword list sizes, I · C .
Figure 8. The BER performance of (31, 21) BCH code when SNR is 5 dB. (a) The BER performance with different I and C. (b) The BER performance with different candidate codeword list sizes, I · C .
Figure 9. The BER performance of (63, 30) BCH code when SNR is 4 dB. (a) The BER performance with different I and C. (b) The BER performance with different candidate codeword list sizes, I · C .
Table 1. The required candidate codewords for different decoding algorithms.
Decoding Algorithm     | Size of the Candidate Codeword List
The order-2 OSD [16]   | 232
The order-3 OSD [16]   | 1562
Proposed algorithm     | 80
Table 2. The required hardware resources for the proposed decoder.
Module      | Candidate Sequence Generator | Re-Encoding Unit | Correlation Discrepancy Calculation Unit
Adders      | 2k                           | 0                | n − k
Comparators | k                            | 0                | n − k
XOR gates   | 0                            | weight(g)        | n − k
Table 3. The SNR comparison for different cyclic codes.
Cyclic Code         | BER = 10^−4 | BER = 10^−5
(15, 5) BCH code    | 5.7 dB      | 6.6 dB
(31, 21) BCH code   | 4.9 dB      | 5.7 dB
(63, 30) BCH code   | 3.4 dB      | 4.2 dB
(23, 12) Golay code | 4.8 dB      | 5.5 dB
Table 4. Appropriate parameters selection for I and C when BER is about 10 4 .
Cyclic Code         | BER         | I  | C
(23, 12) Golay code | 9 × 10^−5   | 3  | 16
(31, 21) BCH code   | 1.1 × 10^−4 | 2  | 64
(63, 30) BCH code   | 7.3 × 10^−5 | 12 | 512

Chen, W.; Zhao, T.; Han, C. Soft Decision Decoding with Cyclic Information Set and the Decoder Architecture for Cyclic Codes. Electronics 2023, 12, 2693. https://doi.org/10.3390/electronics12122693