Article

Decoding of Z2S Linear Generalized Kerdock Codes

by
Aleksandar Minja
*,† and
Vojin Šenk
†,‡
Department of Power, Electronic and Telecommunication Engineering, Faculty of Engineering (Technical Sciences), University of Novi Sad, 21000 Novi Sad, Serbia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
‡ Retired.
Mathematics 2024, 12(3), 443; https://doi.org/10.3390/math12030443
Submission received: 13 December 2023 / Revised: 14 January 2024 / Accepted: 19 January 2024 / Published: 30 January 2024

Abstract
Many families of binary nonlinear codes (e.g., Kerdock, Goethals, Delsarte–Goethals, Preparata) can be constructed very simply from linear codes over the $\mathbb{Z}_4$ ring (the ring of integers modulo 4) by applying the Gray map to the quaternary symbols. Generalized Kerdock codes extend the classical Kerdock codes to the $\mathbb{Z}_{2^S}$ ring. In this paper, we develop two novel soft-input decoders designed to exploit the unique structure of these codes. We introduce a soft-input ML decoding algorithm and a soft-input soft-output MAP decoding algorithm for generalized Kerdock codes, both with a complexity of $O(N^S \log_2 N)$, where $N$ is the length of the $\mathbb{Z}_{2^S}$ code, that is, the number of $\mathbb{Z}_{2^S}$ symbols in a codeword. Simulations show that our novel decoders outperform the classical lifting decoder in terms of error rate by some 5 dB.

1. Introduction

Context and motivation. It is well known that linear codes over the integer ring modulo $2^S$ (the $\mathbb{Z}_{2^S}$ ring) are the natural linear codes for $2^S$-ary phase modulation ($2^S$-PSK) [1]. Additionally, Hammons et al. demonstrated in [2] that certain notable binary nonlinear codes, recognized for their desirable characteristics, can be derived from quaternary codes (codes over $\mathbb{Z}_4$) through the application of the Gray map. These results have led to an increased interest in designing linear codes over finite rings. Several extensions of these codes to $\mathbb{Z}_{p^S}$ (where $p$ is a prime and $S$ is a positive integer) were also developed [1,3,4,5,6,7,8,9,10,11,12]. Solé [3] suggested in 1988 that $p$-adic cyclic codes should be investigated [4]. The generalization of binary duadic codes to the setting of Abelian codes over the ring $\mathbb{Z}_{2^S}$ was presented in [5]. Double circulant self-dual codes over $\mathbb{Z}_{2^S}$ were investigated in [6]. Generalized Kerdock and Delsarte–Goethals codes over the ring $\mathbb{Z}_{2^S}$ (an extension of $\mathbb{Z}_4$ linear Kerdock and Delsarte–Goethals codes to $\mathbb{Z}_{2^S}$) were introduced in [7]. Convolutional codes over rings were discussed in [1,8,9], and LDPC codes over rings were investigated in [10,11,12].
Codes over rings were used in classical coding systems because of their rate properties [13,14,15], while in today's systems, they are considered for applications with stringent secrecy and reliability requirements [16,17]. On the other hand, modern codes over rings are being used to improve bandwidth efficiency [10] and to better combat phase noise [11,12] in modern wireless systems. Quaternary Kerdock codes were also used to design MIMO codebooks (i.e., a finite set of precoders shared by the transmitter and the receiver) for limited-feedback precoded MIMO systems [18,19]. The proposed Kerdock codebook was shown to have systematic construction, reduced storage, and reduced online search computation [18,19]. It is also possible to develop quantum error correction and quantum communication codes from codes over rings, as discussed in [20,21].
Research in the field of ring-based code decoding has been a significant topic for many years, during which numerous algorithms for both hard and soft decision decoding, designed to exploit the unique and rich structure of codes over rings, have been introduced. A detailed overview of different decoding algorithms for codes over rings is given in Section 2. Classical codes over rings are interesting as component codes in product and turbo coding schemes [16,22,23], but one of the main obstacles to their adoption in modern coding systems is the lack of efficient SISO decoding algorithms. As far as we are aware, generalized Kerdock codes do not yet have any soft decoders specifically designed to exploit their unique structure, so the main objective of this study is to address this gap by presenting maximum likelihood (ML) and maximum a posteriori (MAP) decoding algorithms for them. Furthermore, the MAP decoding algorithm is an SISO algorithm, so it allows the use of these codes in modern coding systems. These algorithms will also allow us to develop and evaluate reduced-complexity suboptimal decoding algorithms for generalized Kerdock codes and other related code families in the future.
Contribution. In our previous paper [24], we developed an SISO MAP decoding algorithm for linear $\mathbb{Z}_4$ Kerdock codes with a complexity of $O(N^2 \log_2 N)$, where $N$ is the code length of a quaternary code (i.e., the number of quaternary symbols in a codeword). Here, we extend this result to the family of generalized Kerdock codes. In this paper, novel ML and MAP decoding algorithms with a complexity of $O(N^S \log_2 N)$ (where $N$ is the code length of a $\mathbb{Z}_{2^S}$ code) are developed and presented. To the best of our knowledge, these are the first such decoding algorithms for generalized Kerdock codes.
Paper organization. The rest of this manuscript is organized into five individual sections. The literature overview is given in Section 2. Section 3 provides the necessary preliminaries. Section 4 introduces the novel decoding algorithms. Simulation results are presented in Section 5. Section 6 concludes the paper.
Notation. In this paper, we employ the same notation as in [24]. We use bold letters to denote module elements, and by extension vectors, as every vector space is by definition a free module. We use the standard function notation for polynomials. We denote the $i$-th component of a module element $\mathbf{x}$ or a polynomial $x(Z)$ (where $Z$ is a placeholder variable) as $x_i$. Let $\mathbf{x}$ and $\mathbf{y}$ be two elements of equal length $N$. Then, the Hadamard (pointwise) product is defined as $\mathbf{x} \odot \mathbf{y} = (x_0 y_0, x_1 y_1, \ldots, x_{N-1} y_{N-1})$, and the inner product is defined as $\langle \mathbf{x}, \mathbf{y} \rangle = \sum_n x_n y_n$. We use uppercase letters to represent matrices, with $\mathbf{x}_n$ indicating the $n$-th row of a matrix $X$. For convenience, let $\mathbf{1} = (1, \ldots, 1)$ represent an element of all ones. We denote the probability of $x$ as $P[x]$ and the conditional probability of $x$ given $y$ as $P[x\,|\,y]$. We use the standard blackboard bold letters to represent sets of numbers and their associated rings/fields, i.e., $\mathbb{C}$ represents the set of complex numbers, $\mathbb{R}$ represents the set of real numbers, and $\mathbb{Z}$ represents the set of integers. The set of integers modulo $n$ is denoted as $\mathbb{Z}_n = \mathbb{Z}/n\mathbb{Z}$. Given a set $\mathbb{K}$, $\mathbb{K}^2 = \mathbb{K} \times \mathbb{K}$ represents the Cartesian square (i.e., the Cartesian product of $\mathbb{K}$ with itself), and $\mathbb{K}^N$ represents the $N$-ary Cartesian power of $\mathbb{K}$. We use cursive letters to represent codes and other sets. Additional notation will be introduced as and when it is utilized throughout the paper.

2. Literature Overview

A variety of strategies have been used in designing decoders for codes over rings, as mentioned in [25]. These include the algebraic (syndrome) decoding approach [26], the lifting decoder technique, introduced in [27] and extended in [28], the coset decomposition approach [29], and permutation decoding [25,30,31]. Several algebraic decoders of codes over rings were presented in [2,32,33,34,35]. A lifting decoding scheme was introduced in [27,28]. This algorithm works by sequentially applying the hard decision decoding algorithm of a corresponding $\mathbb{Z}_p$ code to the appropriate $p$-ary projection of the input and canceling a part of the noise at each step. In this manner, the $\mathbb{Z}_p$ decoding algorithm is lifted to work with the corresponding $\mathbb{Z}_{p^S}$ code. This is a general hard decision decoding algorithm that can be applied to the generalized Kerdock codes. A simple bitwise APP decoding algorithm of $\mathbb{Z}_4$ linear codes, based on the lifting decoding scheme, was proposed in [24]. It was shown in [24] that in the case of the classical Kerdock and Preparata codes, this decoder has a complexity of $O(N \log_2 N)$. Chase decoding of $\mathbb{Z}_4$ codes based on the lifting decoder technique was proposed in [36,37]. The coset decomposition approach consists of partitioning the code into a subcode and its cosets and works for both linear and nonlinear codes. A SISO MAP decoding algorithm of linear $\mathbb{Z}_4$ Kerdock and Preparata codes, based on the coset decomposition technique, was introduced in [24]. A similar approach was used to develop an ML decoding algorithm, of the same complexity, for quaternary Kerdock codes in [2,38]. Following the coset decomposition strategy, Davis and Jedwab, in [13], introduced two distinct algorithms for the decoding of RM-like codes over rings of characteristic 2. The maximum a posteriori (MAP) decoding of linear codes is usually performed via the Bahl–Cocke–Jelinek–Raviv (BCJR) algorithm, a trellis-based MAP decoder introduced in [39]. The structure and complexity of the trellis diagrams of the classical Kerdock and Preparata codes were investigated in [40] and [41], respectively. The complexity of the trellis-based approach is higher than that of the MAP decoder in [24]. For LDPC codes over rings, a modified message passing algorithm is generally used [10,42].

3. System Model

Integer Modulo Rings. Let the operators $+$ and $\cdot$ represent addition and multiplication in $\mathbb{Z}_{2^S}$. Every $z \in \mathbb{Z}_{2^S}$ has a unique dyadic expansion (representation)
$$z = \sum_{n=0}^{S-1} z_n 2^n, \quad z_n \in \mathbb{Z}_2.$$
Using the notation introduced in [27], let $z^{(s-1)}$, $s \in \{1, \ldots, S-1\}$, represent the projection of $z \in \mathbb{Z}_{2^S}$ onto the ring $\mathbb{Z}_{2^s}$, defined as
$$z^{(s-1)} = z \bmod 2^s = \sum_{n=0}^{s-1} z_n 2^n \in \mathbb{Z}_{2^s},$$
where the first equality follows from the fact that the remainder when dividing by $2^s$ is determined by the first $s$ terms of the dyadic expansion, i.e.,
$$z = \sum_{n=0}^{s-1} z_n 2^n + \sum_{n=s}^{S-1} z_n 2^n = \sum_{n=0}^{s-1} z_n 2^n + 2^s \sum_{n=s}^{S-1} z_n 2^{n-s}.$$
It is clear that $z^{(0)} = z_0$ and $z^{(S-1)} = z$ for all $z \in \mathbb{Z}_{2^S}$.
It is easy to see that for every positive integer $s \leq S$, every element $z \in \mathbb{Z}_{2^s}$ is also an element of $\mathbb{Z}_{2^S}$. Notice that multiplying some number $z \in \mathbb{Z}_{2^S}$ by $2^{S-s}$, $s \in \{0, 1, \ldots, S-1\}$, is equivalent to first projecting the number $z$ onto $\mathbb{Z}_{2^s}$ and then multiplying it by $2^{S-s}$, where the multiplication is conducted in $\mathbb{Z}_{2^S}$, i.e., $2^{S-s} \cdot z = 2^{S-s} \cdot z^{(s-1)}$:
$$2^{S-s} \cdot z = 2^{S-s} \cdot \sum_{n=0}^{S-1} z_n 2^n = \sum_{n=S-s}^{S-1} z_{n-S+s}\, 2^n = 2^{S-s} \sum_{n=0}^{s-1} z_n 2^n = 2^{S-s} \cdot (z \bmod 2^s) = 2^{S-s} \cdot z^{(s-1)}.$$
Additionally, given $a, b \in \mathbb{Z}_{2^S}$ and some $s \in \{0, 1, \ldots, S-1\}$,
$$2^{S-s} \cdot (a + b) = 2^{S-s} \cdot a + 2^{S-s} \cdot b = 2^{S-s} \cdot a^{(s-1)} + 2^{S-s} \cdot b^{(s-1)} = 2^{S-s} \big(a^{(s-1)} \oplus b^{(s-1)}\big),$$
where $\oplus$ represents addition in $\mathbb{Z}_{2^s}$.
Similarly, we have
$$2^{S-s} \cdot (a \cdot b) = 2^{S-s} \cdot (a \cdot b \bmod 2^s) = 2^{S-s} \cdot \big(((a \bmod 2^s) \cdot (b \bmod 2^s)) \bmod 2^s\big) = 2^{S-s} \cdot \big(a^{(s-1)} \otimes b^{(s-1)}\big),$$
where $\otimes$ represents multiplication in $\mathbb{Z}_{2^s}$.
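The following short Python check (added here purely as an illustration; it is not part of the original derivation) verifies the dyadic expansion and the identities above for the small example $S = 4$, $s = 2$:

# Numeric sanity check of the dyadic expansion and of the identities
# 2^(S-s)*z = 2^(S-s)*(z mod 2^s), 2^(S-s)*(a+b) = 2^(S-s)*(a^(s-1) (+) b^(s-1)),
# and 2^(S-s)*(a*b) = 2^(S-s)*(a^(s-1) (x) b^(s-1)) in Z_{2^S}, with S = 4, s = 2.
S, s = 4, 2
Q, q = 2**S, 2**s
for z in range(Q):
    bits = [(z >> n) & 1 for n in range(S)]                   # dyadic expansion z_n
    assert z == sum(b << n for n, b in enumerate(bits))
    assert (2**(S - s) * z) % Q == (2**(S - s) * (z % q)) % Q
for a in range(Q):
    for b in range(Q):
        assert (2**(S - s) * (a + b)) % Q == (2**(S - s) * ((a % q + b % q) % q)) % Q
        assert (2**(S - s) * (a * b)) % Q == (2**(S - s) * (((a % q) * (b % q)) % q)) % Q
print("all identities hold for S = 4, s = 2")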
Using these relationships, we can formally define a natural ring epimorphism [27] as
$$\mathbb{Z}_{2^S} \to \mathbb{Z}_{2^S} / 2^s \mathbb{Z}_{2^S} \cong \mathbb{Z}_{2^s}, \qquad z \mapsto z + 2^s \mathbb{Z}_{2^S} \mapsto z^{(s-1)},$$
for $1 \leq s \leq S$, $S \in \mathbb{N}$. Additionally, for $s \leq S$, $S \in \mathbb{N}$, there exists a group embedding
$$\mathbb{Z}_{2^s} \to \mathbb{Z}_{2^S}, \qquad z \mapsto 2^{S-s} \cdot z,$$
with the image $2^{S-s} \mathbb{Z}_{2^S}$. The unique preimage of an element $z \in 2^{S-s} \mathbb{Z}_{2^S}$ is denoted by $z / 2^{S-s}$ [27].
Galois Rings. Let $GR(2^S, d)$ represent a Galois ring of characteristic $2^S$ and order $2^{dS}$. It can be defined as a quotient ring
$$GR(2^S, d) \cong \mathbb{Z}_{2^S}[Z] / (h_S(Z)),$$
where $\mathbb{Z}_{2^S}[Z]$ represents a polynomial ring over $\mathbb{Z}_{2^S}$ and $h_S(Z)$ is a monic polynomial of degree $d$ that is irreducible over $\mathbb{Z}_{2^S}$ (i.e., it does not factor as a product of nontrivial polynomials). This means that the elements of $GR(2^S, d)$ can be represented as polynomials over $\mathbb{Z}_{2^S}$ of degree less than $d$, and addition and multiplication are performed modulo the polynomial $h_S(Z)$. The polynomial $h_S(Z)$ may be found using Graeffe's method [2], a root-squaring procedure that constructs, from a given polynomial, a polynomial whose roots are the squares of its roots; applied repeatedly, it lifts a corresponding polynomial over $\mathbb{Z}_2$ to the polynomial $h_S(Z)$ over $\mathbb{Z}_{2^S}$.
Let $h(Z) \in \mathbb{Z}_2[Z]$ be a primitive irreducible polynomial of degree $d$. There exists a unique monic polynomial $h_S(Z) \in \mathbb{Z}_{2^S}[Z]$ of degree $d$ such that $h(Z) \equiv h_S(Z) \pmod 2$ and $h_S(Z)$ divides $Z^{N-1} - 1 \pmod{2^S}$, where $N = 2^d$ [2]. Following Graeffe's method, we first split $h(Z)$ into polynomials $e(Z)$ and $o(Z)$, such that $h(Z) = e(Z) + o(Z)$, where $e(Z)$ contains only even powers and $o(Z)$ contains only odd powers. The polynomial $h_2(Z)$ is then given by
$$h_2(Z^2) = \pm\big(e^2(Z) - o^2(Z)\big).$$
We can repeat this process multiple times in order to obtain $h_S(Z)$ for some $S > 1$.
Example 1.
Consider the polynomial $h(Z) = Z^5 + Z^2 + 1$. We split $h(Z)$ into
$$e(Z) = Z^2 + 1$$
and
$$o(Z) = Z^5.$$
As $h_2(Z^2) = \pm(e^2(Z) - o^2(Z))$, we have
$$h_2(Z^2) = \tfrac{1}{3}\big(3Z^{10} + Z^4 + 2Z^2 + 1\big) \equiv Z^{10} + 3Z^4 + 2Z^2 + 3 \pmod 4,$$
where $\tfrac{1}{3}$ (i.e., multiplication by $3^{-1} = 3$ in $\mathbb{Z}_4$) is used to ensure that we have a monic polynomial. In both cases, we have $h_2(Z) = Z^5 + 3Z^2 + 2Z + 3$. We can repeat this procedure to obtain
$$h_3(Z) = Z^5 + 4Z^3 + 7Z^2 + 2Z + 7.$$
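The following Python sketch (our illustration; the function name graeffe_step is ours, not the authors') implements one Graeffe lifting step and reproduces the polynomials of Example 1:

def graeffe_step(h, mod):
    """One Graeffe step: given the coefficients of a monic h_s(Z) over Z_{mod/2}
    (listed from the constant term up), return a monic h_{s+1}(Z) over Z_{mod}."""
    d = len(h) - 1
    e = [c if i % 2 == 0 else 0 for i, c in enumerate(h)]      # even part e(Z)
    o = [c if i % 2 == 1 else 0 for i, c in enumerate(h)]      # odd part o(Z)

    def mul(p, q):                                             # polynomial product mod `mod`
        r = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                r[i + j] = (r[i + j] + a * b) % mod
        return r

    g = [(a - b) % mod for a, b in zip(mul(e, e), mul(o, o))]  # e^2(Z) - o^2(Z)
    h_next = [g[2 * i] for i in range(d + 1)]                  # g contains only even powers
    lead_inv = pow(h_next[-1], -1, mod)                        # scale to obtain a monic polynomial
    return [(lead_inv * c) % mod for c in h_next]

h = [1, 0, 1, 0, 0, 1]        # h(Z) = 1 + Z^2 + Z^5 over Z_2
h2 = graeffe_step(h, 4)       # [3, 2, 3, 0, 0, 1]  ->  3 + 2Z + 3Z^2 + Z^5
h3 = graeffe_step(h2, 8)      # [7, 2, 7, 4, 0, 1]  ->  7 + 2Z + 7Z^2 + 4Z^3 + Z^5
print(h2, h3)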
Notice that $h_S(Z) \bmod 2^s = h_s(Z)$ for any positive integer $s \leq S$. This follows directly from the construction method, as $h_S(Z)$ is obtained from $h_s(Z)$, which is obtained from $h(Z)$, i.e., $h_S(Z) \bmod 2 = h_s(Z) \bmod 2 = h(Z)$.
We can extend the previous integer modulo ring relationships to Galois rings. Given a ring element $\alpha \in GR(2^S, d)$, let
$$\alpha \bmod 2^s = \alpha^{(s-1)} \in GR(2^s, d),$$
where $GR(2^s, d)$ is the Galois ring generated by the monic irreducible polynomial $h_s(Z)$. This is a trivial extension that follows from the fact that the modulus operation is applied elementwise. Similarly, we have
$$2^{S-s} \cdot \alpha = 2^{S-s} \cdot (\alpha \bmod 2^s) = 2^{S-s} \cdot \alpha^{(s-1)}.$$
Moreover, for every fixed $d$, there is a ring homomorphism (i.e., a function that preserves the ring operations)
$$\mu_{S-1} : GR(2^S, d) \to GR(2^{S-1}, d),$$
for each $S$, having kernel $2^{S-1} GR(2^S, d)$ [43]. We call this homomorphism a natural projection of the ring $GR(2^S, d)$ onto the ring $GR(2^{S-1}, d)$, and we define it as
$$\mu_{S-1}(\alpha) = \alpha^{(S-2)} \in GR(2^{S-1}, d), \quad \alpha \in GR(2^S, d).$$
Notice that the ring
$$GR(2^{S-1}, d) = \{\mu_{S-1}(\alpha) \mid \alpha \in GR(2^S, d)\}$$
is equivalent to the polynomial ring $\mathbb{Z}_{2^{S-1}}[Z]$ modulo the monic irreducible polynomial $h_{S-1}(Z) = h_S(Z) \bmod 2^{S-1}$. This definition can easily be extended to the projection of the ring $GR(2^S, d)$ onto the ring $GR(2^{S-s}, d)$, denoted $\mu_{S-s}$, for some $s \in \{1, \ldots, S-1\}$.
Let $+$ and $\cdot$ denote addition and multiplication in $GR(2^S, d)$, and let $\oplus$ and $\otimes$ denote addition and multiplication in $GR(2^s, d)$ for some $s \in \{1, \ldots, S\}$. Then, the following relationships naturally follow from the definition of the projection:
$$2^{S-s} (\alpha + \beta) = 2^{S-s} \big(\alpha^{(s-1)} \oplus \beta^{(s-1)}\big),$$
$$2^{S-s} (\alpha \cdot \beta) = 2^{S-s} \big(\alpha^{(s-1)} \otimes \beta^{(s-1)}\big).$$
The natural projection $\mu \equiv \mu_1$ maps the ring $GR(2^S, d)$ to the Galois field $GR(2, d) = GF(2^d)$, generated by the primitive polynomial $h(Z)$.
Every Galois ring has a primitive element $\xi$ such that $\xi^{N-1} = 1$. Given the Teichmüller set $\mathcal{T} = \{0, 1, \xi, \xi^2, \ldots, \xi^{N-2}\}$, every element $\alpha \in GR(2^S, d)$ has a unique "multiplicative" representation [7]
$$\alpha = \sum_{s=0}^{S-1} 2^s \tau_s, \quad \tau_s \in \mathcal{T}.$$
The Frobenius map $f_S : GR(2^S, d) \to GR(2^S, d)$ is the ring automorphism defined as [2,7]
$$f_S : \sum_{s=0}^{S-1} 2^s \tau_s \mapsto \sum_{s=0}^{S-1} 2^s \tau_s^2, \quad \tau_s \in \mathcal{T}.$$
The relative trace from $GR(2^S, d)$ to $\mathbb{Z}_{2^S}$ is defined as [2,7]
$$T_S(\alpha) = \sum_{i=0}^{d-1} f_S^i(\alpha), \quad \alpha \in GR(2^S, d).$$
Notice that $T \equiv T_1$ represents the usual trace from $GF(2^d)$ to $\mathbb{Z}_2$, defined as
$$T(\alpha_0) = \alpha_0 + \alpha_0^2 + \alpha_0^{2^2} + \cdots + \alpha_0^{2^{d-1}}.$$
Some useful properties of the relative trace are the following [44]:
  • $T_S(\alpha + \beta) = T_S(\alpha) + T_S(\beta)$, for all $\alpha, \beta \in GR(2^S, d)$;
  • $T_S(z\alpha) = z\,T_S(\alpha)$, for all $z \in \mathbb{Z}_{2^S}$, $\alpha \in GR(2^S, d)$;
  • $T_S(f_S(\alpha)) = f_S(T_S(\alpha)) = T_S(\alpha)$, for all $\alpha \in GR(2^S, d)$.
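As a small aside, the usual binary trace $T$ can be checked numerically; the sketch below (our own illustration, using a bit-mask representation of $GF(2^d)$ that is not taken from the paper) accumulates the conjugates $\alpha_0^{2^i}$ directly:

def gf_mul(a, b, poly, d):
    """Multiply two GF(2^d) elements given as bit masks; poly is the primitive
    polynomial including its leading bit, e.g. Z^5 + Z^2 + 1 -> 0b100101."""
    r = 0
    for i in range(d):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * d - 2, d - 1, -1):         # reduce modulo poly
        if (r >> i) & 1:
            r ^= poly << (i - d)
    return r

def trace(a, poly, d):
    """T(a) = a + a^2 + ... + a^(2^(d-1)); the result always lies in {0, 1}."""
    t, x = 0, a
    for _ in range(d):
        t ^= x                                    # field addition is XOR
        x = gf_mul(x, x, poly, d)                 # next conjugate (Frobenius map)
    return t

# With h(Z) = Z^5 + Z^2 + 1 (d = 5), the trace is balanced: 16 elements map to 1.
print(sum(trace(a, 0b100101, 5) for a in range(32)))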
The projection of $\alpha \in GR(2^S, d)$ also has a unique "multiplicative" representation, given by
$$\alpha^{(s-1)} = \mu_s(\alpha) = \mu_s\!\left(\sum_{n=0}^{S-1} 2^n \tau_n\right) = \sum_{n=0}^{s-1} 2^n \mu_s(\tau_n), \quad \tau_n \in \mathcal{T},$$
where $\mu_s(\xi) = \xi^{(s-1)}$ is the root of the monic irreducible polynomial $h_s(Z)$ and $2^{S-s}\xi = 2^{S-s}\xi^{(s-1)}$. Given the canonical projection homomorphism $\mu$, defined as the mod-2 reduction, the following commutative relationships can easily be verified [2,44]:
$$\mu \circ f_S = f \circ \mu,$$
$$\mu \circ T_S = T \circ \mu.$$
Generalized Kerdock Codes. The generalized Kerdock code over $\mathbb{Z}_{2^S}$, of dimension $K = d + 1$ and length $N = 2^d$, introduced in [7], is defined as the set $\mathcal{K}$ of all valid codewords. A sequence $\mathbf{c} = (c_\infty, c_0, \ldots, c_{N-2})$ is a codeword in $\mathcal{K}$ if and only if, for some $\lambda \in GR(2^S, d)$ and $\epsilon \in \mathbb{Z}_{2^S}$,
$$c_n = T_S(\lambda \xi^n) + \epsilon, \quad n \in \{\infty, 0, \ldots, N-2\},$$
with the standard convention $\xi^\infty = 0$.
The information pair $(\lambda, \epsilon)$ represents the information sequence of length $d + 1$ (consisting of the $d$ coefficients of the polynomial $\lambda$ plus the symbol $\epsilon$).
Theorem 1.
The binary image of the generalized Kerdock code is the first-order Reed–Muller code of length $2^d$, the $RM(1, d)$ code.
Proof. 
The quaternary projection of the code $\mathcal{K}$ is given by
$$\mathcal{K}^{(1)} = \mu_2(\mathcal{K}) = \{\mu_2(\mathbf{c}) \mid \mathbf{c} \in \mathcal{K}\},$$
where $\mu_2$ is applied componentwise. Every code symbol $c_n$, $n \in \{\infty, 0, \ldots, N-2\}$, is given by
$$c_n^{(1)} = \mu_2(c_n) = \mu_2\big(T_S(\lambda \xi^n) + \epsilon\big).$$
Using the properties of the projection map and the relationship between the natural projection and the relative trace, Equation (7) can be rewritten as
$$c_n^{(1)} = T_2\big(\mu_2(\lambda) \otimes \mu_2(\xi)^n\big) + \mu_2(\epsilon) = T_2\big(\lambda^{(1)} \otimes (\xi^{(1)})^n\big) + \epsilon^{(1)},$$
where $\xi^{(1)}$ is the root of $h_2(Z) \in \mathbb{Z}_4[Z]$, $\lambda^{(1)}$ is an element of the Galois ring $GR(4, d)$, generated by the monic irreducible polynomial $h_2(Z)$, and $\epsilon^{(1)} \in \mathbb{Z}_4$. The operator $\otimes$ represents multiplication in $GR(4, d)$. This is the definition of the quaternary Kerdock code, as presented in [2].
The projection of the generalized Kerdock code onto $GR(4, d)$ is the corresponding classical quaternary Kerdock code. It is well known that the binary image of the Kerdock code (the $\mu(\mathcal{K}^{(1)})$ projection) is the $RM(1, d)$ code [2,24]. This is also true for the generalized Kerdock code, i.e., $\mu(\mathcal{K}^{(1)}) = \mu(\mu_2(\mathcal{K})) = \mu(\mathcal{K}) = RM(1, d)$.    □
Permuting the coordinates of a code produces an equivalent code [26]. Consider a permutation $\pi : \Delta \to \Delta$, where $\Delta = \{\infty, 0, 1, \ldots, N-2\}$. For any codeword $\mathbf{b} \in \mathcal{K}$, we can define a codeword $\mathbf{c} = \pi(\mathbf{b}) = (b_{\pi_\infty}, b_{\pi_0}, \ldots, b_{\pi_{N-2}})$ that belongs to the equivalent generalized Kerdock code defined by $\pi$. Let $\mathbf{b} \in \mathcal{K}$ be generated by an information pair $(\lambda, \epsilon)$. Then, $\mathbf{c} = \pi(\mathbf{b})$ is defined as
$$c_n = T_S(\lambda \xi^{\pi_n}) + \epsilon, \quad n \in \Delta.$$
If the $RM(1, d)$ code is constructed by taking the rows of the binary Sylvester-type Hadamard matrix (a binary matrix obtained from a regular Sylvester-type Hadamard matrix by replacing the $1$'s with binary 0's and the $-1$'s with binary 1's) and their complements as codewords, it is possible to use the fast Hadamard transform for decoding [24,26]. In the remainder of the paper, we will assume that the permutation $\pi$ is chosen so that the associated binary code of the Kerdock code is the $RM(1, d)$ code constructed in this way. This makes applying the fast Hadamard transform possible, significantly reducing the decoding complexity. The permutation $\pi$ can be found beforehand using the Hungarian algorithm [45] and does not affect the decoding complexity of the proposed algorithms.
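For illustration (a minimal sketch under the construction described above; it is not code from the paper), the Sylvester-type Hadamard matrix can be built recursively, and its rows, read under the mapping $1 \to 0$, $-1 \to 1$, together with their complements, give the $RM(1, d)$ codewords:

import numpy as np

def sylvester_hadamard(d):
    """N x N Sylvester-type Hadamard matrix, N = 2^d."""
    H = np.array([[1]])
    for _ in range(d):
        H = np.block([[H, H], [H, -H]])           # H_{2N} = [[H, H], [H, -H]]
    return H

H8 = sylvester_hadamard(3)
rm_half = (1 - H8) // 2                           # +1 -> 0, -1 -> 1: half of RM(1, 3)
print(rm_half)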
Channel model and decoding. Let $\mathbf{x} = \phi(\mathbf{c})$, $\mathbf{c} \in \mathcal{K}$, be a random $2^S$-PSK-modulated codeword of a generalized Kerdock code $\mathcal{K}$, transmitted over the AWGN channel. The modulation mapping is defined as
$$\phi(\alpha) = \exp\!\left(I\,\frac{2\pi \alpha}{2^S}\right), \quad I = \sqrt{-1}, \quad \alpha \in \mathbb{Z}_{2^S},$$
and it naturally extends to vectors. Furthermore,
$$\phi(\alpha + \beta) = \phi(\alpha)\,\phi(\beta).$$
Let $\mathbf{y}$ be the output of the complex AWGN channel, defined by the conditional probability
$$P[\mathbf{y}\,|\,\mathbf{x}] = \prod_{n \in \Delta} P[y_n\,|\,x_n] = \prod_{n \in \Delta} \frac{1}{\pi\sigma^2} \exp\!\left(-\frac{|y_n - x_n|^2}{\sigma^2}\right),$$
where $|\cdot|$ represents the modulus of a complex number.
If all codewords are equally likely and the channel is memoryless, the ML decoder is optimal in terms of minimizing the word-error probability. Given the channel output $\mathbf{y}$, the ML decoder proceeds to find the codeword $\mathbf{c} \in \mathcal{K}$ that maximizes $P[\mathbf{y}\,|\,\mathbf{x}]$, where $\mathbf{x} = \phi(\mathbf{c})$, i.e.,
$$\hat{\mathbf{c}} = \arg\max_{\substack{\mathbf{x} = \phi(\mathbf{c}) \\ \mathbf{c} \in \mathcal{K}}} P[\mathbf{y}\,|\,\mathbf{x}] = \arg\max_{\substack{\mathbf{x} = \phi(\mathbf{c}) \\ \mathbf{c} \in \mathcal{K}}} \prod_{n \in \Delta} P[y_n\,|\,x_n].$$
It is well known that the ML decoder can be implemented as the minimum distance decoder,
$$\hat{\mathbf{c}} = \arg\max_{\substack{\mathbf{x} = \phi(\mathbf{c}) \\ \mathbf{c} \in \mathcal{K}}} \prod_{n \in \Delta} \exp\!\left(-\frac{|y_n - x_n|^2}{\sigma^2}\right) = \arg\min_{\substack{\mathbf{x} = \phi(\mathbf{c}) \\ \mathbf{c} \in \mathcal{K}}} \sum_{n \in \Delta} |y_n - x_n|^2 = \arg\min_{\substack{\mathbf{x} = \phi(\mathbf{c}) \\ \mathbf{c} \in \mathcal{K}}} \|\mathbf{y} - \mathbf{x}\|^2,$$
where $\|\cdot\|$ represents the norm of a complex vector.
Consider the term $|y_n - x_n|^2$ in Equation (10),
$$|y_n - x_n|^2 = (y_n - x_n)(y_n - x_n)^* = |y_n|^2 + |x_n|^2 - 2\,\Re(y_n x_n^*),$$
where $(\cdot)^*$ represents the complex conjugate and $\Re$ represents the function that extracts the real part of a complex number. By substituting (11) into Equation (10), we obtain
$$\hat{\mathbf{c}} = \arg\min_{\substack{\mathbf{x} = \phi(\mathbf{c}) \\ \mathbf{c} \in \mathcal{K}}} \sum_{n \in \Delta} |y_n - x_n|^2 = \arg\min_{\substack{\mathbf{x} = \phi(\mathbf{c}) \\ \mathbf{c} \in \mathcal{K}}} \left( \|\mathbf{y}\|^2 + \|\mathbf{x}\|^2 - 2\sum_{n \in \Delta} \Re(y_n x_n^*) \right).$$
The terms $\|\mathbf{y}\|^2$ and $\|\mathbf{x}\|^2$ can be ignored ($\|\mathbf{y}\|^2$ is fixed, while $\|\mathbf{x}\|^2$ is the same for all codewords because of the PSK modulation), so the ML decoder can be implemented as the correlation decoder, where we compute the correlation between the complex conjugate of each possible modulated codeword and the channel output and select the codeword $\hat{\mathbf{c}}$ that corresponds to the highest correlation, i.e.,
$$\hat{\mathbf{c}} = \arg\min_{\mathbf{c} \in \mathcal{K}} \left( -\sum_{n \in \Delta} \Re\big(y_n\, \phi^*(c_n)\big) \right) = \arg\max_{\mathbf{c} \in \mathcal{K}} \Re\!\left( \sum_{n \in \Delta} y_n\, \phi^*(c_n) \right),$$
where $x_n = \phi(c_n)$, and the last equality follows from the fact that the sum of the real parts of complex numbers is equal to the real part of their sum.
Example 2.
Consider the transmission of a single quaternary symbol $c = 2$ over the AWGN channel. As $c \in \mathbb{Z}_4$, we select the QPSK modulation scheme, shown in Figure 1, for transmission.
For simplicity, the complex numbers are represented as points in the Euclidean plane, where the first coordinate corresponds to the real part of a complex number, and the second coordinate corresponds to the imaginary part of a complex number.
The modulated symbol $x = \phi(c) = (-1, 0)$ is sent over the AWGN channel, where Gaussian noise is added to every coordinate independently, and the channel output $y$ is received. At the receiver, we compute the squared Euclidean distance between the channel output and every constellation point and select the closest one, i.e., we compute
$$\hat{c} = \arg\min_{c \in \mathbb{Z}_4} \|y - \phi(c)\|^2 = \arg\max_{c \in \mathbb{Z}_4} \langle y, \phi(c) \rangle.$$
Notice that when using the complex notation, this decision rule is given by the real part of the product of the complex channel output and the conjugate of a complex constellation symbol.
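A compact Python version of Example 2 (illustration only; the noise level 0.4 is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(0)
phi = lambda a, q=4: np.exp(1j * 2 * np.pi * np.asarray(a) / q)   # the mapping of Eq. (8)

c = 2                                       # transmitted quaternary symbol
x = phi(c)                                  # QPSK point (-1, 0)
y = x + rng.normal(scale=0.4) + 1j * rng.normal(scale=0.4)        # AWGN channel output

points = phi(np.arange(4))
# Equivalent decision rules: nearest constellation point / largest Re(y * conj(phi(c)))
c_hat_dist = int(np.argmin(np.abs(y - points) ** 2))
c_hat_corr = int(np.argmax(np.real(y * np.conj(points))))
assert c_hat_dist == c_hat_corr
print(c_hat_dist)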
The MAP decoder operates on a symbolwise basis and is optimal in terms of minimizing the symbol-error probability. Given the corresponding channel output $\mathbf{y}$, the MAP decoder proceeds to find [24,46]
$$P[c_j = \alpha\,|\,\mathbf{y}] = \sum_{\mathbf{b} \in \mathcal{C}_j^\alpha} P[\mathbf{b}\,|\,\mathbf{y}] \propto \sum_{\mathbf{b} \in \mathcal{C}_j^\alpha} \prod_{n=0}^{N-1} P[y_n\,|\,b_n],$$
where $\mathcal{C}_j^\alpha \subset \mathcal{K}$ represents the set of all codewords that have symbol $\alpha \in \mathbb{Z}_{2^S}$ in position $j$, for all $j = 0, \ldots, N-1$. The MAP decision is by definition
$$\hat{c}_n = \arg\max_{\alpha \in \mathbb{Z}_{2^S}} P[c_n = \alpha\,|\,\mathbf{y}].$$
The MAP decoder is well suited for use in a concatenated coding scheme, where we exchange soft messages between different coding blocks.

4. Decoding Algorithms

Let $\mathcal{C} \subset \mathcal{K}$ be the linear subcode of the generalized Kerdock code that does not contain the all-one codeword $\mathbf{1}$. The size of the code $\mathcal{C}$ is $M = |\mathcal{C}| = N^S$.
Let $\mathbf{c}_m = (c_{m,\infty}, c_{m,0}, \ldots, c_{m,N-2}) \in \mathcal{C}$ be the codeword corresponding to the information sequence given by the components of the polynomial $\lambda \in GR(2^S, d)$. Then,
$$c_{m,n} = T_S(\lambda \xi^{\pi_n}) = T_S\big(\lambda^{(S-2)} \xi^{\pi_n}\big) + 2^{S-1} \cdot T\big(\xi_0^{\,r_{S-1} + \pi_n}\big) = c_{m,n}^{(S-2)} + 2^{S-1} \cdot c_{m,n}^{(0)},$$
where $r_{S-1}, n \in \Delta$. Notice that the vector $\mathbf{c}_m^{(0)}$ is a codeword of the $RM(1, d)$-associated code of $\mathcal{K}$, i.e., any codeword of $RM(1, d)$, scaled by $2^{S-1}$, is also a codeword of $\mathcal{K}$. As all codewords of the $\mathbb{Z}_{2^S}$ linear generalized Kerdock code form an additive Abelian group, we have
$$\mathbf{c}_l^{(S-2)} + 2^{S-1} \cdot \mathbf{c}_n^{(0)} \in \mathcal{K},$$
for all $l \in \{0, \ldots, N^{S-1} - 1\}$ and $n \in \{0, \ldots, N-1\}$.
For convenience, we define three auxiliary matrices, $A_0 \in \mathbb{Z}_{2^S}^{2^S \times N}$, $A_1 \in \mathbb{Z}_{2^S}^{N^{S-1} \times N}$, and $A_2 \in \mathbb{Z}_{2^S}^{N \times N}$, together with their row sets (sets of row elements),
$$A_0 = \{\alpha \mathbf{1} \mid \alpha \in \mathbb{Z}_{2^S}\}, \quad |A_0| = 2^S,$$
$$A_1 = \{\mathbf{c}^{(S-2)} \mid \mathbf{c} \in \mathcal{C}\}, \quad |A_1| = N^{S-1},$$
and
$$A_2 = \{2^{S-1} \cdot \mathbf{c}^{(0)} \mid \mathbf{c} \in \mathcal{C}\}, \quad |A_2| = N.$$
Note that
$$\mathcal{K} = \{\mathbf{a}_0 + \mathbf{a}_1 + \mathbf{a}_2 \mid \mathbf{a}_0 \in A_0,\ \mathbf{a}_1 \in A_1,\ \mathbf{a}_2 \in A_2\},$$
and
$$\mathcal{C} = \{\mathbf{a}_1 + \mathbf{a}_2 \mid \mathbf{a}_1 \in A_1,\ \mathbf{a}_2 \in A_2\}.$$
The rows of the matrix $A_2$ are generated as codewords of the $RM(1, d)$ code, scaled by $2^{S-1}$.

4.1. ML Decoding Algorithm

The ML decoder begins by calculating the correlation coefficient of the received channel output, $\mathbf{y} = (y_\infty, y_0, \ldots, y_{N-2})$, with the complex conjugate of each possible modulated codeword. After that, the decoder selects the codeword with the largest correlation coefficient.
For every possible information pair $(\lambda, \epsilon)$, let the correlation coefficient between the corresponding codeword $\mathbf{c}$ and the channel output be defined as
$$\rho(\lambda, \epsilon) = \sum_{n \in \Delta} y_n\, \phi^*(c_n).$$
There are $N^S$ possible information pairs, and the computation of the correlation coefficient using Equation (14) has complexity $O(N)$. The total complexity when using the brute-force approach is $O(N^{S+1})$.
After substituting Equation (6) into Equation (14), we obtain
$$\rho(\lambda, \epsilon) = \sum_{n \in \Delta} y_n\, \phi^*\big(T_S(\lambda \xi^{\pi_n}) + \epsilon\big) = \phi^*(\epsilon) \sum_{n \in \Delta} y_n\, \phi^*\big(T_S(\lambda \xi^{\pi_n})\big),$$
where the last equality follows from (9). Using the definition in (12), we obtain
$$\rho(\lambda, \epsilon) = \phi^*(\epsilon) \sum_{n \in \Delta} y_n\, \phi^*(c_n) = \phi^*(\epsilon) \sum_{n \in \Delta} y_n\, \phi^*\big(c_n^{(S-2)}\big)\, \phi^*\big(2^{S-1} \cdot c_n^{(0)}\big).$$
Let $x_n = \phi^*\big(c_n^{(S-2)}\big)$ and
$$a_n = \phi^*\big(2^{S-1} \cdot c_n^{(0)}\big) = \exp\!\left(-I\,\frac{2\pi}{2^S}\, 2^{S-1} c_n^{(0)}\right) = (-1)^{c_n^{(0)}}.$$
Finally, the correlation coefficient is given by
$$\rho(\lambda, \epsilon) = \phi^*(\epsilon) \sum_{n \in \Delta} y_n\, x_n\, a_n = \phi^*(\epsilon) \sum_{n \in \Delta} y_n\, x_n\, (-1)^{c_n^{(0)}}.$$
The correlation coefficient can be viewed as $\phi^*(\epsilon)$ times the fast Hadamard transform (FHT) [2,24] of the vector $\mathbf{y} \odot \mathbf{x}$, $\mathbf{x} = (x_\infty, x_0, \ldots, x_{N-2})$. The complexity of applying the FHT to a vector of length $N$ is $O(N \log_2 N)$ (note that we compute $N$ correlation coefficients in parallel using the FHT), and we need to compute the FHT for every possible vector $\mathbf{y} \odot \mathbf{x}$.
After this, we search for the correlation coefficient with the largest real part [2], and we output the corresponding codeword. This concludes the algorithm.
Implementation of the ML decoder. Here, we provide a brief rundown of the primary steps of the ML algorithm. Let $X_1 = \phi^*(A_1)$ be a complex matrix of size $N^{S-1} \times N$, with rows $\mathbf{x}_l$, $l \in \{0, \ldots, N^{S-1} - 1\}$. Furthermore, let $H_N = \phi(A_2)$ be the $N \times N$ Sylvester-type Hadamard matrix [26].
Given the channel output $\mathbf{y}$, the ML decoder first generates the correlation matrix $R \in \mathbb{R}^{2^S N^{S-1} \times N}$, with rows
$$\mathbf{r}_i = \Re\big(\phi^*(\epsilon) \cdot (\mathbf{y} \odot \mathbf{x}_l) \cdot H_N\big) = \Re\big(\phi^*(\epsilon) \cdot \mathrm{FHT}(\mathbf{y} \odot \mathbf{x}_l)\big),$$
where $\Re$ is applied componentwise and the row index $i$ is calculated as $i = \epsilon N^{S-1} + l$, $\epsilon \in \mathbb{Z}_{2^S}$, $l \in \{0, \ldots, N^{S-1} - 1\}$. The matrix $R$ can be generated with complexity $O(N^S \log_2 N)$, as there are $N^{S-1}$ vectors for which we have to compute $N$ correlation coefficients using the FHT. Let $r_{j,k}$ be the largest correlation coefficient, which can be found with complexity $O(N^S)$. Then, the ML decoder outputs the codeword
$$\hat{\mathbf{c}} = \mathbf{a}^0_{j\,\mathrm{div}\,N^{S-1}} + \mathbf{a}^1_{j \bmod N^{S-1}} + \mathbf{a}^2_{k},$$
where $\mathbf{a}^0_{j\,\mathrm{div}\,N^{S-1}} \in A_0$, $\mathbf{a}^1_{j \bmod N^{S-1}} \in A_1$, and $\mathbf{a}^2_{k} \in A_2$. It is clear that the complexity of this decoder is $O(N^S \log_2 N)$.
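A schematic Python sketch of this search (our illustration; it assumes the conventions above, i.e., X1 stores the rows of $\phi^*(A_1)$, H is the Sylvester-type Hadamard matrix $\phi(A_2)$, and phase[$\epsilon$] equals $\phi(\epsilon)$; the plain matrix product stands in for the $O(N \log_2 N)$ FHT):

import numpy as np

def ml_search(y, X1, H, phase):
    """Return the indices (eps, l, k) into A0, A1, A2 of the most likely codeword."""
    best_val, best_idx = -np.inf, None
    for l, xl in enumerate(X1):                  # N^{S-1} rows of X1
        f = (y * xl) @ H                         # N correlation coefficients at once
        for eps, p in enumerate(phase):
            r = np.real(np.conj(p) * f)          # rotate by phi(eps)* and take the real part
            k = int(np.argmax(r))
            if r[k] > best_val:
                best_val, best_idx = r[k], (eps, l, k)
    return best_idx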

4.2. MAP Decoding Algorithm

The MAP decoding algorithm follows the group algebra description of the MAP decoder introduced in [46] and used in [24] to develop the MAP decoder of the quaternary Kerdock and Preparata codes.
Similarly as in [24], let $\mathcal{S} = \mathbb{R}[Z]/(Z^{2^S} - 1)$ be the set of all polynomials over $\mathbb{R}$ modulo $Z^{2^S} - 1$, where $Z$ is a dummy variable. Given a polynomial $f(Z) = \sum_{\alpha \in \mathbb{Z}_{2^S}} f_\alpha Z^\alpha \in \mathcal{S}$ and a monomial $Z^{\beta} \in \mathcal{S}$, $\beta \in \mathbb{Z}_{2^S}$, using a simple change of variable, we have
$$f(Z) \cdot Z^{-\beta} = \sum_{\alpha \in \mathbb{Z}_{2^S}} f_\alpha\, Z^{\alpha - \beta} = \sum_{\alpha \in \mathbb{Z}_{2^S}} f_{\alpha + \beta}\, Z^{\alpha}.$$
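Since a polynomial of $\mathcal{S}$ can be stored as a real vector of length $2^S$, the multiplication by a monomial used above amounts to a cyclic shift of the coefficients. A minimal sketch (the helper name is ours):

import numpy as np

def mul_by_monomial(f, beta, q):
    """Coefficients of f(Z) * Z^(-beta) in R[Z]/(Z^q - 1); f is listed from the constant
    term up, so coefficient alpha of the result equals f[(alpha + beta) mod q]."""
    return np.roll(np.asarray(f), -(beta % q))

f = np.array([0.1, 0.2, 0.3, 0.4])     # f(Z) = 0.1 + 0.2 Z + 0.3 Z^2 + 0.4 Z^3, q = 4
print(mul_by_monomial(f, 1, 4))        # [0.2 0.3 0.4 0.1]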
We start the decoding process by calculating the vector of log-likelihoods $\mathbf{w} \in \mathcal{S}^N$, with
$$w_n(Z) = \sum_{\alpha \in \mathbb{Z}_{2^S}} \ln P[y_n\,|\,\alpha]\, Z^\alpha,$$
where $y_n$ is the $n$-th component of the channel output $\mathbf{y}$.
For every codeword $\mathbf{c}_m \in \mathcal{C}$, $m \in \{0, \ldots, M-1\}$, we compute
$$t_m(Z) = \sum_{n \in \Delta} w_n(Z) \cdot Z^{-c_{m,n}}.$$
Notice that
$$t_m(Z) = \sum_{n \in \Delta} Z^{-c_{m,n}} \sum_{\alpha \in \mathbb{Z}_{2^S}} \ln P[y_n\,|\,\alpha]\, Z^\alpha = \sum_{\alpha \in \mathbb{Z}_{2^S}} \sum_{n \in \Delta} \ln P[y_n\,|\,\alpha]\, Z^{\alpha - c_{m,n}} = \sum_{\alpha \in \mathbb{Z}_{2^S}} \sum_{n \in \Delta} \ln P[y_n\,|\,\alpha + c_{m,n}]\, Z^{\alpha} = \sum_{\alpha \in \mathbb{Z}_{2^S}} Z^{\alpha} \ln \prod_{n \in \Delta} P[y_n\,|\,\alpha + c_{m,n}] = \sum_{\alpha \in \mathbb{Z}_{2^S}} Z^{\alpha} \ln P[\mathbf{y}\,|\,\mathbf{c}_m + \alpha \mathbf{1}],$$
where $(\mathbf{c}_m + \alpha \mathbf{1}) \in \mathcal{K}$. Alternatively, using the definition in (12), we have
$$t_m(Z) = \sum_{n \in \Delta} \sum_{\alpha \in \mathbb{Z}_{2^S}} \ln P[y_n\,|\,\alpha]\, Z^{\alpha - (c_{m,n}^{(S-2)} + 2^{S-1} c_{m,n}^{(0)})} = \sum_{n \in \Delta} \sum_{\alpha \in \mathbb{Z}_{2^S}} \ln P[y_n\,|\,\alpha]\, Z^{\alpha - c_{m,n}^{(S-2)}}\, Z^{-2^{S-1} c_{m,n}^{(0)}} = \sum_{n \in \Delta} \sum_{\alpha \in \mathbb{Z}_{2^S}} \ln P[y_n\,|\,\alpha + c_{m,n}^{(S-2)}]\, Z^{\alpha}\, Z^{2^{S-1} c_{m,n}^{(0)}}.$$
As $\mathbf{c}_m^{(0)}$ is a codeword of the $RM(1, d)$ code, the expression in (16) can be interpreted as applying a modified FHT to every vector $\mathbf{b}_m = \mathbf{w} \odot \mathbf{x}_m$, where
$$\mathbf{x}_m = \big(Z^{-c_{m,\infty}}, Z^{-c_{m,0}}, Z^{-c_{m,1}}, \ldots, Z^{-c_{m,N-2}}\big).$$
This allows us to compute $N$ different values of $t_m$ in parallel, i.e., instead of computing $N^S$ values, where every computation requires $O(N)$ operations in $\mathcal{S}$, we apply the modified FHT (with complexity $O(N \log_2 N)$) to $N^{S-1}$ vectors, so the total complexity is reduced from $O(N^{S+1})$ to $O(N^S \log_2 N)$. By the modified FHT, we mean a fast Hadamard transform carried out in $\mathcal{S}$, which is defined in Algorithm 1. In this pseudocode, $h$ is used as a control variable for the loop, representing the current step size in the modified FHT algorithm. The variable $h$ starts at 1 and doubles each time, determining the pairs of elements to be combined at each stage of the transform. This ensures that adjacent elements are paired first (the distance between them is $h = 1$), and as $h$ progressively increases, so does the distance between the pairs of elements that are combined.
Algorithm 1 Modified fast Hadamard transform (in-place implementation)
Input: x
Output: x
 1: for h = 1; h < N; h ← 2·h do
 2:     for i = 0; i < N; i ← i + 2·h do
 3:         for j = i to i + h − 1 do
 4:             a ← x_j, b ← x_{j+h} {save current values}
 5:             x_j ← a + b {calculate the new x_j value}
 6:             x_{j+h} ← a + b·Z^{2^{S−1}} {calculate the new x_{j+h} value}
 7:         end for
 8:     end for
 9: end for
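A Python sketch of Algorithm 1 (our illustration; it assumes the coefficient-vector representation of $\mathcal{S}$ used in the sketch above, so multiplying by $Z^{2^{S-1}}$ becomes a cyclic shift by $2^{S-1}$ positions):

import numpy as np

def modified_fht(x):
    """In-place modified FHT over S; x has shape (N, 2**S), one coefficient vector of
    length 2**S per codeword position, and N must be a power of two."""
    N, q = x.shape
    shift = q // 2                                    # multiplication by Z^{2^{S-1}}
    h = 1
    while h < N:
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                a, b = x[j].copy(), x[j + h].copy()   # save current values
                x[j] = a + b                          # new x_j
                x[j + h] = a + np.roll(b, shift)      # new x_{j+h} = a + b * Z^{2^{S-1}}
        h *= 2
    return x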
Next, for every $t_m$, we turn the sums of logarithms of probabilities back into products of probabilities, as follows:
$$v_m(Z) = \sum_{\alpha \in \mathbb{Z}_{2^S}} \exp(t_{m,\alpha})\, Z^\alpha = \sum_{\alpha \in \mathbb{Z}_{2^S}} P[\mathbf{y}\,|\,\mathbf{c}_m + \alpha \mathbf{1}]\, Z^\alpha.$$
Finally, we compute
$$s_n(Z) = \sum_{m=0}^{M-1} Z^{c_{m,n}} \sum_{\alpha \in \mathbb{Z}_{2^S}} v_{m,\alpha}\, Z^\alpha = \sum_{\alpha \in \mathbb{Z}_{2^S}} \sum_{m=0}^{M-1} P[\mathbf{y}\,|\,\mathbf{c}_m + \alpha \mathbf{1}]\, Z^{\alpha + c_{m,n}} = \sum_{\alpha \in \mathbb{Z}_{2^S}} \sum_{m=0}^{M-1} \prod_{n' \in \Delta} P[y_{n'}\,|\,\alpha + c_{m,n'}]\, Z^{\alpha + c_{m,n}} = \sum_{\alpha \in \mathbb{Z}_{2^S}} \sum_{\mathbf{c} \in \mathcal{C}_n^\alpha} P[\mathbf{y}\,|\,\mathbf{c}]\, Z^{\alpha},$$
where the last equality follows from the fact that each term $P[\mathbf{y}\,|\,\mathbf{c}_m + \alpha \mathbf{1}]\, Z^{\alpha + c_{m,n}}$ is the probability of the codeword $\mathbf{b} = \mathbf{c}_m + \alpha \mathbf{1} \in \mathcal{K}$ whose $n$-th symbol equals the exponent of the attached monomial,
$$P[y_\infty\,|\,\alpha + c_{m,\infty}] \cdots P[y_n\,|\,\alpha + c_{m,n}] \cdots P[y_{N-2}\,|\,\alpha + c_{m,N-2}]\, Z^{\alpha + c_{m,n}} = P[\mathbf{y}\,|\,\mathbf{b}]\, Z^{b_n},$$
so collecting the terms with exponent $\alpha$ collects exactly the codewords in $\mathcal{C}_n^\alpha$.
The complexity of computing one of the $N$ components of $\mathbf{s}$ is $O(N^S)$, so the total complexity of computing $\mathbf{s}$ is $O(N^{S+1})$. We can reduce the complexity of this step in a similar way as before. Notice that
$$s_n(Z) = \sum_{\alpha \in \mathbb{Z}_{2^S}} \sum_{m=0}^{M/N - 1} \sum_{l=0}^{N-1} P\big[\mathbf{y}\,\big|\,\mathbf{c}_m^{(S-2)} + 2^{S-1}\mathbf{c}_l^{(0)} + \alpha \mathbf{1}\big]\, Z^{\alpha}\, Z^{c_{m,n}^{(S-2)}}\, Z^{2^{S-1} c_{l,n}^{(0)}},$$
which follows from (13). Notice that the computation of $\mathbf{s}$ can now be interpreted as applying the modified FHT $N^{S-1}$ times and then performing a componentwise addition of the results. The complexity of this approach is now $O(N^S \log_2 N)$. This completes the algorithm.
Implementation of the MAP decoder. We now present a summary of the essential steps of the MAP algorithm. For convenience, we define a mapping $\zeta : \mathbb{Z}_{2^S} \to \mathcal{S}$, such that $\zeta(a) = Z^a$. This function can be extended to elements and matrices by applying $\zeta(\cdot)$ to every component independently [24]. Let $D_1 = \zeta(A_1)$ and $\bar{D}_1 = \zeta(-A_1)$.
We begin the decoding procedure by computing the log-likelihood vector $\mathbf{w} \in \mathcal{S}^N$ using Equation (15). This can be accomplished with complexity $O(N)$.
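A small sketch of this step (our illustration; the additive constant $-\ln(\pi\sigma^2)$ that is common to all symbols is dropped, which does not change the decoder's decisions):

import numpy as np

def loglik_table(y, S, sigma2):
    """w[n, alpha] = ln P[y_n | alpha] for 2^S-PSK over the AWGN channel, up to a constant."""
    points = np.exp(1j * 2 * np.pi * np.arange(2**S) / 2**S)       # phi(alpha)
    return -np.abs(y[:, None] - points[None, :]) ** 2 / sigma2     # shape (N, 2^S)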
Next, we compute the matrix $T \in \mathcal{S}^{N^{S-1} \times N}$ with rows
$$\mathbf{t}_l = \mathrm{FHT}(\mathbf{w} \odot \bar{\mathbf{d}}_l), \quad l \in \{0, \ldots, N^{S-1} - 1\},$$
where $\bar{\mathbf{d}}_l$ represents the $l$-th row of the matrix $\bar{D}_1$. The matrix $T$ can be computed in $O(N^S \log_2 N)$ steps, as we need to apply the modified FHT to $N^{S-1}$ vectors.
Next, we compute the likelihood matrix $V \in \mathcal{S}^{N^{S-1} \times N}$, with components
$$v_{l,n} = \sum_{\alpha \in \mathbb{Z}_{2^S}} Z^\alpha \exp\{t_{l,n,\alpha}\}.$$
This can be accomplished in $O(N^S)$ steps.
Let $B$ be an $N^{S-1} \times N$ matrix, with rows
$$\mathbf{b}_l = \mathrm{FHT}(\mathbf{v}_l) \odot \mathbf{d}_l,$$
where $l \in \{0, \ldots, N^{S-1} - 1\}$ and $\mathbf{v}_l$ is the $l$-th row of $V$. This can be accomplished with complexity $O(N^S \log_2 N)$.
Finally, we form the vector $\mathbf{s} \in \mathcal{S}^N$ by summing all rows of $B$,
$$s_n = \sum_{l=0}^{N^{S-1} - 1} b_{l,n}, \quad n \in \{0, \ldots, N-1\}.$$
This can be performed in $O(N^S)$ steps. With this, we finish the algorithm description. It is important to note that all the operations here are conducted over the polynomial ring $\mathcal{S}$. As the polynomials in $\mathcal{S}$ can be realized as real vectors of fixed length $2^S$ (where $2^S$ is less than $N$ in most practical use cases), the additional complexity can be treated as a constant term, so the total complexity of this algorithm is $O(N^S \log_2 N)$ [24].

5. Simulation Results

In this section, we present the simulation results of our novel decoding algorithms and compare them to existing techniques in terms of their error-correcting performance. As these decoders are polynomial-complexity algorithms, we limit ourselves to short-length codes (where code length is given as the number of $\mathbb{Z}_{2^S}$ symbols) over $\mathbb{Z}_4$, $\mathbb{Z}_8$, and $\mathbb{Z}_{16}$. Let $\mathcal{K}_S^d$ represent a generalized Kerdock code of length $2^d$ over the ring $\mathbb{Z}_{2^S}$, defined in terms of the irreducible monic polynomial $h_S(Z)$ used to construct the extension ring, as presented in Section 3. We also include some classical Kerdock codes, as they also fall into the family of generalized Kerdock codes. These codes are the $\mathcal{K}_2^3$ code defined by $h_2(Z) = 3 + Z + 2Z^2 + Z^3$, the $\mathcal{K}_2^5$ code defined by $h_2(Z) = 3 + 2Z + 3Z^2 + Z^5$, the $\mathcal{K}_2^7$ code defined by $h_2(Z) = 3 + Z + 2Z^4 + Z^7$, the $\mathcal{K}_3^3$ code defined by $h_3(Z) = 7 + 5Z + 6Z^2 + Z^3$, the $\mathcal{K}_3^5$ code defined by $h_3(Z) = 7 + 2Z + 7Z^2 + 4Z^3 + Z^5$, and the $\mathcal{K}_4^3$ code defined by $h_4(Z) = 15 + 5Z + 6Z^2 + Z^3$.
All simulations were conducted for the additive white Gaussian noise (AWGN) channel. Additionally, every code was coupled with the corresponding $2^S$-PSK modulation, as there exists a natural, straightforward mapping of code symbols onto the PSK symbols (Equation (8)). We assessed the performance of our decoding algorithms by estimating the frame error rate (FER) and symbol error rate (SER) as functions of the energy-per-bit to noise power spectral density ratio ($E_b/N_0$) and comparing them to the classical lifting decoder [27]. The classical lifting decoder can be applied to generalized Kerdock codes without modification, but it has a higher error probability, as it is a suboptimal hard-input hard-output (HIHO) decoder. This decoder was implemented as an $S$-stage decoder, where each stage uses the minimum Hamming distance decoder of the associated binary $RM(1, d)$ code. As the minimum Hamming distance decoder of the $RM(1, d)$ code can be implemented with a complexity of $O(N \log_2 N)$, the total complexity of the classical lifting decoder is $O(SN \log_2 N)$. This complexity is significantly lower than that of the novel algorithms developed in this paper, but so is its error-correcting performance. FER and SER were estimated using the Monte Carlo simulation method with a relative precision of $\delta = 0.05$, for a range of $E_b/N_0$ points, with a step of 0.5 dB.
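For reference, the sketch below shows the kind of stopping rule we assume is meant by the stated relative precision (our illustration; the exact rule used in the simulations is not specified beyond $\delta = 0.05$):

import numpy as np

def estimate_error_rate(run_trial, delta=0.05, min_errors=50, max_trials=10**8):
    """Monte Carlo estimate of an error rate; stops once the estimator's relative
    standard error, approximately sqrt((1 - p) / errors), drops below delta."""
    errors = trials = 0
    while trials < max_trials:
        errors += run_trial()              # run_trial() returns 1 on error, 0 otherwise
        trials += 1
        if errors >= min_errors:
            p = errors / trials
            if np.sqrt((1 - p) / errors) <= delta:
                break
    return errors / max(trials, 1)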
We assume that the ML decoder will make a mistake if the correct codeword is farther away from the channel output than some other codeword (e.g., the output of some not-necessarily-optimal decoding algorithm) in terms of the Euclidean distance [24]. We can simulate a lower bound on the ML decoding performance by counting the number of times the Euclidean distance between the channel output and the correct codeword is greater than the distance between the channel output and the decoded codeword. This bound becomes tight as $E_b/N_0$ increases. We compare the FER performance of our novel ML and MAP decoders with the ML bound (Figure 2) and see that they perfectly coincide. This indicates that our novel algorithms are optimal in terms of FER, as was expected.
Figure 3 presents the FER of various generalized Kerdock codes using the novel ML decoder, and Figure 4 shows the SER of different generalized Kerdock codes using the novel MAP decoder. We notice that the error-correcting performance improves with code length, while it decreases with base ring size. This is expected behavior, as the error-correcting performance of the code should increase with N, while the increase in the constellation size means that the modulation symbols are closer together and are more susceptible to noise, and this leads to an increase in the probability of error.
Figure 5 shows the SER of different generalized Kerdock codes using the novel MAP decoder and the classical lifting decoder. We see in Figure 5 that the MAP decoder exhibits a gain of some 5 dB when compared to the classical lifting decoder. This is expected, as the novel MAP decoder is an optimal SISO decoding algorithm, while the lifting decoder is a suboptimal HIHO decoding algorithm.

6. Conclusions

In this manuscript, we presented two novel optimal algorithms for the decoding of generalized Kerdock codes. Their complexity is $O(N^S \log_2 N)$, and although they are polynomial-complexity algorithms, they can serve as a starting point for developing suboptimal decoders and as a benchmark for evaluating their performance. Furthermore, as the MAP decoder is an SISO algorithm, it allows the use of short generalized Kerdock codes in modern coding systems as component codes. The novel decoding algorithms were compared with the classical lifting decoding algorithm in terms of error-correcting performance, and it was shown that they achieve a gain of about 5 dB. In our future work, we will focus on reducing the complexity below that of the MAP decoder without significantly compromising its error-correcting efficiency. The lifting technique is a powerful technique that can be combined with the MAP decoder and used to develop novel suboptimal SISO decoders. We will also use generalized Kerdock codes (and other related codes) as components in modern coding schemes, such as turbo-product schemes, braided constructions, coded modulation schemes, and many others.

Author Contributions

All authors have contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was in part supported by the European Union Horizon Europe research and innovation program under the grant agreement No. 101086387 and the Secretariat for Higher Education and Scientific Research of the Autonomous Province of Vojvodina through the project “Visible light technologies for indoor sensing, localization and communication in smart buildings” (142-451-3511/2023).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Massey, J.; Mittelholzer, T. Convolutional codes over rings. In Proceedings of the 4th joint Swedish-Soviet International Workshop on Information Theory, Gotland, Sweden, 27 August–1 September 1989. [Google Scholar]
  2. Hammons, A.R.; Kumar, P.V.; Calderbank, A.R.; Sloane, N.J.A.; Solé, P. The Z4-linearity of Kerdock, Preparata, Goethals, and related codes. IEEE Trans. Inf. Theory 1994, 40, 301–319. [Google Scholar] [CrossRef]
  3. Solé, P. Open problem 2: Cyclic codes over rings and p-adic fields. In Coding Theory and Applications, Proceedings of the Coding Theory 1988, Toulon, France, 2–4 November 1988; Cohen, G., Wolfmann, J., Eds.; Springer: Berlin/Heidelberg, Germany, 1989; p. 329. [Google Scholar]
  4. Calderbank, A.R.; Sloane, N.J.A. Modular and p-adic cyclic codes. Des. Codes Cryptogr. 1995, 6, 21–35. [Google Scholar] [CrossRef]
  5. Ling, S.; Solé, P. Duadic codes over Z2k. IEEE Trans. Inf. Theory 2001, 47, 1581–1588. [Google Scholar] [CrossRef]
  6. Gulliver, T.; Harada, M. Double circulant self-dual codes over Z2k. IEEE Trans. Inf. Theory 1998, 44, 3105–3123. [Google Scholar] [CrossRef]
  7. Carlet, C. Z 2k-linear codes. IEEE Trans. Inf. Theory 1998, 44, 1543–1547. [Google Scholar] [CrossRef]
  8. Mittelholzer, T. Convolutional codes over rings and the two chain conditions. In Proceedings of the IEEE International Symposium on Information Theory, Ulm, Germany, 29 June–4 July 1997; p. 285. [Google Scholar] [CrossRef]
  9. Napp, D.; Pinto, R.; Toste, M. Column Distances of Convolutional Codes Over Z pr. IEEE Trans. Inf. Theory 2019, 65, 1063–1071. [Google Scholar] [CrossRef]
  10. Sridhara, D.; Fuja, T. LDPC codes over rings for PSK modulation. IEEE Trans. Inf. Theory 2005, 51, 3209–3220. [Google Scholar] [CrossRef]
  11. Ninacs, T.; Matuz, B.; Liva, G.; Colavolpe, G. Non-binary LDPC coded DPSK modulation for phase noise channels. In Proceedings of the 2017 IEEE International Conference on Communications (ICC), Paris, France, 21–25 May 2017; pp. 1–6. [Google Scholar] [CrossRef]
  12. Ninacs, T.; Matuz, B.; Liva, G.; Colavolpe, G. Short Non-Binary Low-Density Parity-Check Codes for Phase Noise Channels. IEEE Trans. Commun. 2019, 67, 4575–4584. [Google Scholar] [CrossRef]
  13. Davis, J.A.; Jedwab, J. Peak-to-mean power control in OFDM, Golay complementary sequences, and Reed-Muller codes. IEEE Trans. Inf. Theory 1999, 45, 2397–2417. [Google Scholar] [CrossRef]
  14. Schmidt, K. Quaternary Constant-Amplitude Codes for Multicode CDMA. IEEE Int. Symp. Inf. Theory 2007, 55, 1824–1832. [Google Scholar] [CrossRef]
  15. Shakeel, I. Performance of Reed-Muller and Kerdock Coded MC-CDMA System with Nonlinear Amplifier. In Proceedings of the 2005 Asia-Pacific Conference on Communications, Perth, Australia, 3–5 October 2005; pp. 644–648. [Google Scholar]
  16. Amrani, O. Nonlinear Codes: The Product Construction. IEEE Trans. Commun. 2007, 55, 1845–1851. [Google Scholar] [CrossRef]
  17. Karp, B.; Amrani, O.; Keren, O. Nonlinear Product Codes for Reliability and Security. In Proceedings of the 2019 IEEE 4th International Verification and Security Workshop (IVSW), Rhodes, Greece, 3 October 2019; pp. 13–18. [Google Scholar]
  18. Inoue, T.; Heath, R.W. Kerdock codes for limited feedback MIMO systems. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 3113–3116. [Google Scholar]
  19. Inoue, T.; Heath, R.W., Jr. Kerdock Codes for Limited Feedback Precoded MIMO Systems. IEEE Trans. Signal Process. 2009, 57, 3711–3716. [Google Scholar] [CrossRef]
  20. Güzeltepe, M.; Sarı, M. Quantum codes from codes over the ring F q + α F q . Quantum Inf. Process. 2019, 18, 365. [Google Scholar] [CrossRef]
  21. Güzeltepe, M.; Aytaç, N. Quantum Codes from Codes over the Ring Rq. Int. J. Theor. Phys. 2023, 62, 26. [Google Scholar] [CrossRef]
  22. Kim, H.; Markarian, G.; Rocha, V.C.d., Jr. Nonlinear Product Codes and Their Low Complexity Iterative Decoding. ETRI J. 2010, 32, 588–595. [Google Scholar] [CrossRef]
  23. Kim, H.; Markarian, G.; da Rocha, V.C., Jr. Nonlinear turbo product codes. In Proceedings of the XXV Simpósio Brasileiro de Telecomunicações, Recife, PE, Brazil, 3–6 September 2007. [Google Scholar]
  24. Minja, A.; Šenk, V. SISO Decoding of Z 4 Linear Kerdock and Preparata Codes. IEEE Trans. Commun. 2022, 70, 1497–1507. [Google Scholar] [CrossRef]
  25. Barrolleta, R.; Pujol, J.; Villanueva, M. Comparing decoding methods for quaternary linear codes. Electron. Notes Discret. Math. 2016, 54, 283–288. [Google Scholar] [CrossRef]
  26. MacWilliams, F.; Sloane, N. The Theory of Error-Correcting Codes; Mathematical Studies; Elsevier Science: Amsterdam, The Netherlands, 1977. [Google Scholar]
  27. Greferath, M.; Vellbinger, U. Efficient decoding of Zpk-linear codes. IEEE Trans. Inf. Theory 1998, 44, 1288–1291. [Google Scholar] [CrossRef]
  28. Babu, N.S.; Zimmermann, K. Decoding of linear codes over Galois rings. IEEE Trans. Inf. Theory 2001, 47, 1599–1603. [Google Scholar] [CrossRef]
  29. Conway, J.; Sloane, N. Soft decoding techniques for codes and lattices, including the Golay code and the Leech lattice. IEEE Trans. Inf. Theory 1986, 32, 41–50. [Google Scholar] [CrossRef]
  30. Barrolleta, R.D.; Villanueva, M. Partial permutation decoding for binary linear and Z4-linear Hadamard codes. Des. Codes Cryptogr. 2018, 86, 569–586. [Google Scholar] [CrossRef]
  31. Barrolleta, R.D.; Villanueva, M. Partial Permutation Decoding for Several Families of Linear and Z4-Linear Codes. IEEE Trans. Inf. Theory 2019, 65, 131–141. [Google Scholar] [CrossRef]
  32. Helleseth, T.; Kumar, P.V. The algebraic decoding of the Z4-linear Goethals code. IEEE Trans. Inf. Theory 1995, 41, 2040–2048. [Google Scholar] [CrossRef]
  33. Byrne, E.; Fitzpatrick, P. Hamming metric decoding of alternant codes over Galois rings. IEEE Trans. Inf. Theory 2002, 48, 683–694. [Google Scholar] [CrossRef]
  34. Rong, C.; Helleseth, T.; Lahtonen, J. On algebraic decoding of the Z4-linear Calderbank–McGuire code. IEEE Trans. Inf. Theory 1999, 45, 1423–1434. [Google Scholar] [CrossRef]
  35. Ranto, K. On algebraic decoding of the Z4-linear Goethals-like codes. IEEE Trans. Inf. Theory 2000, 46, 2193–2197. [Google Scholar] [CrossRef]
  36. Armand, M.A. Chase decoding of linear Z4 codes. Electron. Lett. 2006, 42, 1049–1050. [Google Scholar] [CrossRef]
  37. Armand, M.A.; Halim, A.; Nallanathan, A. Chase Decoding of Linear Z4 Codes at Low to Moderate Rates. IEEE Commun. Lett. 2007, 11, 811–813. [Google Scholar] [CrossRef]
  38. Elia, M.; Losana, C.; Neri, F. Note on the complete decoding of Kerdock codes. IEE Proc. I Commun. Speech Vis. 1992, 139, 24–28. [Google Scholar] [CrossRef]
  39. Bahl, L.; Cocke, J.; Jelinek, F.; Raviv, J. Optimal decoding of linear codes for minimizing symbol error rate (Corresp.). IEEE Trans. Inf. Theory 1974, 20, 284–287. [Google Scholar] [CrossRef]
  40. Shany, Y.; Reuven, I.; Be’ery, Y. On the trellis representation of the Delsarte-Goethals codes. IEEE Trans. Inf. Theory 1998, 44, 1547–1554. [Google Scholar] [CrossRef]
  41. Shany, Y.; Be’ery, Y. The Preparata and Goethals codes: Trellis complexity and twisted squaring constructions. IEEE Trans. Inf. Theory 1999, 45, 1667–1673. [Google Scholar] [CrossRef]
  42. Davey, M.; MacKay, D. Low density parity check codes over GF(q). In Proceedings of the 1998 Information Theory Workshop (Cat. No.98EX131), Killarney, Ireland, 22–26 June 1998; pp. 70–71. [Google Scholar] [CrossRef]
  43. Bini, G.; Flamini, F. Finite Commutative Rings and Their Applications; The Springer International Series in Engineering and Computer Science; Springer: New York, NY, USA, 2002. [Google Scholar]
  44. Sison, V. Bases of the Galois Ring GR(pr,m) over the Integer Ring Zpr. arXiv 2014, arXiv:cs.IT/1410.0289. [Google Scholar]
  45. Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. Q. 1955, 2, 83–97. [Google Scholar] [CrossRef]
  46. Ashikhmin, A.; Litsyn, S. Simple MAP decoding of first-order Reed-Muller and Hamming codes. IEEE Trans. Inf. Theory 2004, 50, 1812–1818. [Google Scholar] [CrossRef]
Figure 1. The constellation diagram of the used QPSK modulation.
Figure 2. FER comparison of the ML decoding algorithm with an ML lower bound for different generalized Kerdock codes.
Figure 3. FER comparison of the ML decoding for different generalized Kerdock codes.
Figure 4. SER comparison of the MAP decoding for different generalized Kerdock codes.
Figure 5. SER comparison of the MAP and the lifting decoder algorithms for different generalized Kerdock codes.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
