Article

A Decoding Algorithm for Convolutional Codes

by Sandra Martín Sánchez and Francisco J. Plaza Martín *
Departamento de Matemáticas, Universidad de Salamanca, Plaza de la Merced 1, 37008 Salamanca, Spain
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(9), 1573; https://doi.org/10.3390/math10091573
Submission received: 5 April 2022 / Revised: 3 May 2022 / Accepted: 4 May 2022 / Published: 6 May 2022

Abstract: It is shown how the decoding algorithms of Pellikaan and Rosenthal can be coupled to produce a decoding algorithm for convolutional codes. Bounds for the computational cost per decoded codeword are also computed. As a case study, our results are applied to a family of convolutional codes constructed by Rosenthal–Schumacher–York and, in this situation, the previous bounds turn out to be polynomial on the degree of the code.
MSC:
94B10; 11T71

1. Introduction

Many aspects of daily life require the transmission of large amounts of digital data in a timely and reliable manner. Among the different methods to detect and correct errors occurring during transmission, convolutional codes are a standard tool for dealing with this issue. While the encoding algorithms present no difficulty at all, decoding algorithms are more involved and have been studied for decades. Depending on the signal, they can be classified into hard and soft decoding algorithms (e.g., [1], Ch. 14, 15). When it comes to decoding, maximum likelihood algorithms (e.g., Viterbi) often require a large amount of memory, and their complexity depends exponentially on the degree [2,3]. Sequential decoding algorithms use a metric and a threshold in order to keep memory and complexity under certain limits. Other decoding methods include list decoding and iterative decoding (see [4], Ch. 4–7).
An extremely active area of research is the pursuit of decoding algorithms of convolutional codes that balance the use of memory and computational complexity. For decades, the Viterbi algorithm, along with its variants and improvements, has been regarded as one of the most widely used maximum likelihood decoding algorithms.
A main goal of this paper is to offer a hard decoding algorithm (see Section 3.2), which is a combination of an algorithm proposed by Rosenthal [5] with another one given by Pellikaan [6] in terms of error-correcting pairs. The resulting algorithm is of the same type as Rosenthal’s; namely, it is an iterative algebraic decoding algorithm. A second main result consists of lower and upper bounds for the number of arithmetic operations required to decode a codeword (see Section 3.4), which we call the computational cost per codeword. Note that the complexity is directly proportional to the computational cost per codeword. It is worth noting that, unlike those of the Viterbi algorithm, the bounds for our algorithm depend polynomially on the code parameters.
A case study is presented in Section 4. Indeed, we apply our results to the family of convolutional codes considered in ([7], § IV; [8], § 6.3). We analyze how the parameters should be chosen in order for the assumptions to hold true. Finally, we are able to write simple expressions for the bounds of the computational cost per codeword.
The authors would like to thank the referees for their comments and suggestions that have helped to improve the paper.

2. Preliminaries and Notations

In this section, we introduce some notations and recall basic notions about linear and convolutional codes that will be useful in the following sections. A detailed exposition on the algebraic theory of error-correcting codes can be found in many references (for instance, see [1]).

2.1. Linear Codes and Error-Correcting Pairs

Let $\mathbb{F}_q$ be the finite field of $q$ elements and let $G$ be an $\mathbb{F}_q$-linear monomorphism:
$$G : \mathbb{F}_q^k \longrightarrow \mathbb{F}_q^n.$$
We say that $\mathcal{C} = \operatorname{Im} G$ is an $(n,k)$-linear code over $\mathbb{F}_q$ and that $G$ is an encoder associated with $\mathcal{C}$. Thus, the encoding process is given by:
$$u \longmapsto v = uG,$$
where $u$ is called the information word and $v$ is called the codeword.
Given a code, there is relevant numerical data associated with it. First, since $u \in \mathbb{F}_q^k$ consists of $k$ symbols, $k(\mathcal{C}) := k$ will be called the dimension of the linear code $\mathcal{C}$. Similarly, $v \in \mathbb{F}_q^n$ consists of $n$ symbols and, thus, $n(\mathcal{C}) := n$ will be called the length of $\mathcal{C}$. The minimum distance of $\mathcal{C}$ is the minimum among the weights of its non-zero codewords, and it will be denoted by $d_{\min}(\mathcal{C})$ or, if no confusion arises, simply by $d$.
Moreover, there are also some maps and subspaces attached to $\mathcal{C}$. The dual code of the code $\mathcal{C}$, denoted by $\mathcal{C}^\perp \subseteq \mathbb{F}_q^n$, is given by:
$$\mathcal{C}^\perp = \{ v \in \mathbb{F}_q^n \mid \langle v, c \rangle = 0 \ \forall c \in \mathcal{C} \},$$
where the bilinear form $\langle\,,\,\rangle : \mathbb{F}_q^n \times \mathbb{F}_q^n \to \mathbb{F}_q$ is defined as follows:
$$\langle a, b \rangle = a \cdot b = \sum_{i=1}^{n} a_i b_i.$$
Throughout the paper, the transpose of a matrix $M$ will be denoted by $M^\intercal$, since we want to preserve $t$ and $T$ for other tasks.
The dual code $\mathcal{C}^\perp$ is an $(n, n-k)$-linear code, and any generator matrix $H$ of $\mathcal{C}^\perp$ is called a parity check matrix of the code $\mathcal{C}$; that is, $\mathcal{C}^\perp = \operatorname{Im} H$ for a given $H : \mathbb{F}_q^{n-k} \to \mathbb{F}_q^n$ of maximal rank with $G \cdot H^\intercal = 0$.
The syndrome map of a fixed linear code $\mathcal{C}$, denoted by $\sigma$, is given by:
$$\sigma : \mathbb{F}_q^n \longrightarrow \operatorname{Hom}_{\mathbb{F}_q}(\mathcal{C}^\perp, \mathbb{F}_q), \qquad a \longmapsto \sigma(a), \quad \text{where } \sigma(a)(b) := \langle a, b \rangle, \tag{2}$$
so that $a \in \mathcal{C}$ if and only if $\sigma(a) = 0$.
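For concreteness, the following sketch (our own illustration, not part of the original references; the field and the parity check matrix are arbitrary toy choices) computes the syndrome map over the prime field $\mathbb{F}_7$:

```python
import numpy as np

p = 7  # work over the prime field F_7, so arithmetic is plain integers mod p

# Hypothetical parity check matrix H: its rows generate the dual code C^perp
# of a (4, 2)-linear code C over F_7.
H = np.array([[1, 2, 3, 4],
              [0, 1, 5, 6]])

def syndrome(a):
    # sigma(a) is determined by the pairings <a, h> with the rows h of H
    return (H @ a) % p

print(syndrome(np.array([0, 2, 1, 0])))  # [0 0]: this word lies in C
print(syndrome(np.array([1, 0, 0, 0])))  # nonzero: not a codeword
```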
Next, we recall the notion of the generalized Reed–Solomon code as well as that of an error-correcting pair, which turn out to be relevant when decoding linear codes (see [1], § 5.3; [6,9]).
First, we define the star multiplication of $a, b \in \mathbb{F}_q^n$, denoted by $a * b$, by coordinatewise multiplication: $a * b := (a_1 b_1, \ldots, a_n b_n) \in \mathbb{F}_q^n$. For any two subspaces $\mathcal{A}$ and $\mathcal{B}$, we introduce the set $\mathcal{A} * \mathcal{B} := \{ a * b \mid a \in \mathcal{A},\ b \in \mathcal{B} \}$.
Given positive integers $k \leq n$, an $n$-tuple $a$ of pairwise distinct elements of $\mathbb{F}_q$, and an $n$-tuple $b$ of nonzero elements of $\mathbb{F}_q$, the associated generalized Reed–Solomon code of dimension $k$ is the following subspace of $\mathbb{F}_q^n$:
$$\mathrm{GRS}_k(a,b) := \{ b * f(a) \mid f(z) \in \langle 1, z, z^2, \ldots, z^{k-1} \rangle \} = \langle (b_1 a_1^i, \ldots, b_n a_n^i) \mid i = 0, 1, \ldots, k-1 \rangle,$$
where $f(a) := (f(a_1), \ldots, f(a_n))$.
For the sake of brevity, we say that $\mathcal{C}$ is GRS if there exist $a$ and $b$ as above such that $\mathcal{C} = \mathrm{GRS}_k(a,b)$.
In this situation, it is proved in [1] (Thm. 5.3.3.) that $\mathrm{GRS}_k(a,b)^\perp = \mathrm{GRS}_{n-k}(a,c)$, where $c$ is any nonzero codeword in the 1-dimensional code $\mathrm{GRS}_{n-1}(a,b)^\perp$. In particular, a code is generalized Reed–Solomon if and only if its dual is generalized Reed–Solomon.
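As an illustration of these definitions, the following sketch (ours; the evaluation points, multipliers and field are toy choices) builds the generator matrix of $\mathrm{GRS}_k(a,b)$ and the star multiplication:

```python
import numpy as np

p = 11                          # prime field F_11 (illustration only)
a = np.array([1, 2, 3, 4, 5])   # pairwise distinct evaluation points
b = np.array([1, 1, 2, 3, 1])   # nonzero multipliers
n, k = len(a), 3

def grs_generator(a, b, k, p):
    # row i is (b_1 a_1^i, ..., b_n a_n^i) for i = 0, ..., k-1
    return np.array([[(int(bj) * pow(int(aj), i, p)) % p for aj, bj in zip(a, b)]
                     for i in range(k)])

def star(u, v, p):
    # coordinatewise (star) multiplication on F_p^n
    return (u * v) % p

G = grs_generator(a, b, k, p)
print(G)
# the star product of two rows lands in GRS_{2k-1}(a, b*b)
print(star(G[1], G[2], p))
```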
Let $\mathcal{C}$ be an $\mathbb{F}_q$-linear code of length $n$. Following [6] (§ 2), a pair $(\mathcal{A}, \mathcal{B})$ of $\mathbb{F}_{q^m}$-linear codes of length $n$ is called an $e$-error-correcting pair for the $(n,k)$-linear code $\mathcal{C}$ if it verifies the following conditions:
1. $\mathcal{A} * \mathcal{B} \subseteq \mathcal{C}^\perp$.
2. $k(\mathcal{A}) > e$.
3. $d(\mathcal{A}) + d(\mathcal{C}) > n$.
4. $d(\mathcal{B}^\perp) > e$.
We say that $\mathcal{C}$ has an error-correcting pair if there is an $e$-error-correcting pair over a finite extension of $\mathbb{F}_q$ for $e = \left\lfloor \frac{d(\mathcal{C})-1}{2} \right\rfloor$. In the case of generalized Reed–Solomon codes, an explicit description of such a pair is known (see [9], Ex. 4.2.). Indeed, if $\mathcal{C} = \mathrm{GRS}_k(a,b)$ and $\mathcal{C}^\perp = \mathrm{GRS}_{n-k}(a,c)$, then:
$$\mathcal{A} := \mathrm{GRS}_{e+1}(a,c), \qquad \mathcal{B} := \mathrm{GRS}_e(a,\mathbf{1}), \tag{3}$$
where $\mathbf{1} := (1, \ldots, 1) \in \mathbb{F}_q^n$, form an $e$-error-correcting pair for $\mathcal{C}$ with $e = \left\lfloor \frac{n-k}{2} \right\rfloor$. Observe that the pair $(\mathcal{A}, \mathcal{B})$ of (3) is defined over the base field $\mathbb{F}_q$.
In addition, for a received word $r \in \mathbb{F}_q^n$, the error locating map of $r$ with respect to the $(n,k)$-linear code $\mathcal{C}$, denoted by $E_r$, is defined by:
$$E_r : \mathcal{A} \longrightarrow \operatorname{Hom}_{\mathbb{F}_q}(\mathcal{B}, \mathbb{F}_q), \qquad a \longmapsto E_r(a), \quad \text{where } E_r(a)(b) := \langle r, a * b \rangle.$$
We finish this section by introducing some notations used in the explanation of the algorithm. We denote by $(a)_0 := \{ i \mid a_i = 0 \}$ the zero set of an element $a \in \mathbb{F}_q^n$. For $I = \{ i_1, \ldots, i_e \}$ a set of indices with $1 \leq i_1 < \cdots < i_e \leq n$, we denote by:
$$\iota_I : \mathbb{F}_q^e \longrightarrow \mathbb{F}_q^n$$
the inclusion map that sends $a = (a_1, \ldots, a_e) \in \mathbb{F}_q^e$ to the vector whose $i_j$-th entry is $a_j$ for $j = 1, \ldots, e$ and that has zeros everywhere else. Then, the restricted syndrome map of a linear code $\mathcal{C}$ is:
$$\sigma_I : \mathbb{F}_q^e \longrightarrow \operatorname{Hom}_{\mathbb{F}_q}(\mathcal{C}^\perp, \mathbb{F}_q),$$
given by the composition $\sigma_I = \sigma \circ \iota_I$ (see (2)).
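The following sketch (again a toy illustration of ours, with arbitrary data) implements the inclusion map $\iota_I$, the restricted syndrome map $\sigma_I$, and the error locating map $E_r$ written as a matrix over chosen bases of $\mathcal{A}$ and $\mathcal{B}$:

```python
import numpy as np

p, n = 7, 6

def iota(I, x):
    # iota_I: place the entries of x at the positions listed in I, zeros elsewhere
    v = np.zeros(n, dtype=int)
    v[list(I)] = np.asarray(x) % p
    return v

# parity check matrix of a toy code C; sigma is its syndrome map
H = np.array([[1, 1, 1, 1, 1, 1],
              [1, 2, 3, 4, 5, 6]])

def sigma(a):
    return (H @ a) % p

def sigma_I(I, x):
    # the restricted syndrome map: sigma_I = sigma composed with iota_I
    return sigma(iota(I, x))

def error_locating_matrix(r, A_basis, B_basis):
    # (E_r)_{ij} = <r, a_i * b_j>; a nonzero kernel vector locates the errors
    return np.array([[int(np.dot((r * ai) % p, bj) % p) for bj in B_basis]
                     for ai in A_basis])

r = np.array([1, 0, 2, 0, 0, 3])
A_basis = [np.ones(n, dtype=int), np.arange(1, n + 1)]
B_basis = [np.array([1, 4, 2, 2, 4, 1])]
print(sigma_I([1, 4], [3, 5]))
print(error_locating_matrix(r, A_basis, B_basis))
```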

2.2. Convolutional Codes and ISO Representations

In order to enhance the correcting capabilities of a code against burst errors, one may consider codes where the transmitted block depends on those sent previously. Among these codes, in this paper we focus on convolutional codes. In the next paragraphs, we recap some facts on convolutional codes (e.g., [5], § 2; [1], Ch. 14; and [10,11]). In this case, the entries of the encoder matrix are polynomials, and an $(n,k)$-convolutional code $\mathcal{C}$ is the subspace generated by the image of the linear monomorphism:
$$G(z) : \mathbb{F}_q[z]^k \longrightarrow \mathbb{F}_q[z]^n, \qquad u(z) \longmapsto v(z) = u(z) G(z);$$
that is, $\mathcal{C} = \operatorname{Im} G(z)$. The words are considered as polynomials, and the independent variable represents the delay operator. In this setup, $n$ is called the length and $k$ the dimension of $\mathcal{C}$. The degree, denoted by $\delta$, is the highest (polynomial) degree of the $k \times k$ minors of $G(z)$ or, equivalently, of any encoder matrix. Finally, the free distance $d_{\mathrm{free}}$ is the minimum among the weights of the non-zero codewords; it is analogous to the minimum distance of linear codes. The free distance is a code property and indicates the error-correcting capability of a code, since a convolutional code can correct all errors of weight at most $e$ if and only if $d_{\mathrm{free}} > 2e$ (see [4], Theorem 3.3).
Occasionally, alternative representations of these codes allow us to operate with them more easily. In this paper, motivated by the origin of these codes, we consider the minimal Input/State/Output representation (see [5,12]), or mISO for short, of an $(n,k)$-convolutional code $\mathcal{C}$, which is defined by the linear system:
$$\begin{cases} x_{t+1} = A x_t + B u_t \\ y_t = C x_t + D u_t \\ v_t = \begin{pmatrix} y_t \\ u_t \end{pmatrix} \end{cases} \qquad x_0 = 0, \tag{4}$$
where $x_t$, $y_t$, $u_t$ and $v_t$ are column vectors, $A \in \mathbb{F}_q^{\delta \times \delta}$, $B \in \mathbb{F}_q^{\delta \times k}$, $C \in \mathbb{F}_q^{(n-k) \times \delta}$, $D \in \mathbb{F}_q^{(n-k) \times k}$, and $t$ denotes the time unit. In this characterization, the sequence of vectors $u_t$ represents the information vectors, the sequence of $y_t$ represents the parity vectors, and the sequence of $v_t$ represents the code vectors. We can also observe that the system (4) provides us with a state space realization of a systematic encoder; that is, the code vectors are built as the concatenation of the parity vectors and the information vectors.
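The next sketch (ours) simulates the system (4) for hypothetical toy matrices $A$, $B$, $C$, $D$ over $\mathbb{F}_7$, producing the code vectors $v_t$ as the concatenation of $y_t$ and $u_t$:

```python
import numpy as np

p = 7
delta, k, n = 2, 1, 2                 # toy degree, dimension and length

# hypothetical mISO matrices with the shapes required by (4)
A = np.array([[1, 1], [0, 1]])        # delta x delta
B = np.array([[1], [2]])              # delta x k
C = np.array([[3, 1]])                # (n-k) x delta
D = np.array([[1]])                   # (n-k) x k

def encode(us):
    # run x_{t+1} = A x_t + B u_t, y_t = C x_t + D u_t with x_0 = 0
    x = np.zeros((delta, 1), dtype=int)
    vs = []
    for u in us:
        y = (C @ x + D @ u) % p
        vs.append(np.vstack([y, u]))  # code vector v_t = (y_t, u_t)
        x = (A @ x + B @ u) % p
    return vs

us = [np.array([[3]]), np.array([[0]]), np.array([[5]])]
for t, v in enumerate(encode(us)):
    print(t, v.ravel())
```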
The relationships between properties of a convolutional code and those of the associated mISO representation can be found in ([13], § 2;  [8,12]). In particular, we are only interested in mISO representations that are reachable and observable as linear systems. In terms of the corresponding convolutional code, this means that it is non-catastrophic or, equivalently, that it admits a polynomial parity check matrix.
The following proposition, proved in ([8], Prop. 6.1.1), provides a characterization of codewords.
Proposition 1 (Local Description of Trajectories).
Let $\mathcal{C}$ be an $(n,k)$-convolutional code and consider its mISO representation. Let $\tau, \gamma \in \mathbb{Z}_+$ be positive integers such that $\tau < \gamma$. Assume that the encoder is at state $x_\tau$ at time unit $t = \tau$. Then, any sequence $\left\{ \binom{y_t}{u_t} \right\}_{t \geq 0}$ must satisfy:
$$\begin{pmatrix} y_\tau \\ y_{\tau+1} \\ \vdots \\ y_\gamma \end{pmatrix} = \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{\gamma-\tau} \end{pmatrix} x_\tau + \begin{pmatrix} D & 0 & \cdots & 0 \\ CB & D & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ CA^{\gamma-\tau-1}B & CA^{\gamma-\tau-2}B & \cdots & D \end{pmatrix} \begin{pmatrix} u_\tau \\ u_{\tau+1} \\ \vdots \\ u_\gamma \end{pmatrix}. \tag{5}$$
Moreover, the evolution of the state vector $x_t$ is given over time as:
$$x_t = A^{t-\tau} x_\tau + \begin{pmatrix} A^{t-\tau-1}B & \cdots & B \end{pmatrix} \begin{pmatrix} u_\tau \\ \vdots \\ u_{t-1} \end{pmatrix}; \qquad t = \tau+1, \tau+2, \ldots, \gamma+1. \tag{6}$$
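Proposition 1 can be checked numerically; the sketch below (ours, reusing the toy matrices of the previous example) assembles the two block matrices of (5) and compares them against a simulated trajectory:

```python
import numpy as np

p = 7
A = np.array([[1, 1], [0, 1]]); B = np.array([[1], [2]])
C = np.array([[3, 1]]);         D = np.array([[1]])
tau, gamma = 0, 3
L = gamma - tau + 1                    # time units in the window

# block column (C; CA; ...; CA^{gamma-tau}) of (5)
Phi = np.vstack([(C @ np.linalg.matrix_power(A, j)) % p for j in range(L)])

# lower block-Toeplitz matrix of (5): D on the diagonal, CA^{i-j-1}B below it
Toep = np.block([[D if i == j else
                  (C @ np.linalg.matrix_power(A, i - j - 1) @ B) % p if i > j else
                  np.zeros_like(D)
                  for j in range(L)] for i in range(L)])

# simulate a trajectory from a random state x_tau and compare both sides of (5)
rng = np.random.default_rng(0)
us = rng.integers(0, p, size=(L, 1, 1))
x_tau = rng.integers(0, p, size=(2, 1))
x, ys = x_tau.copy(), []
for u in us:
    ys.append((C @ x + D @ u) % p)
    x = (A @ x + B @ u) % p
rhs = (Phi @ x_tau + Toep @ np.vstack(us)) % p
assert (np.vstack(ys) == rhs).all()
print("identity (5) verified on a random trajectory")
```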

3. The Combined Algorithm

In this section, we will deal with the main goal of the article: to offer a new algorithm resulting from the combination of the one proposed by Rosenthal in ([5], § 4) with the one proposed by Pellikaan in ([6], Algorithm 2.13). Indeed, our proposal consists of two parts. In the first one, Rosenthal’s algorithm is used to reduce the decoding of the convolutional code to the decoding of certain linear codes. Then, the second one applies Pellikaan’s decoding procedure based on error-correcting pairs to each of these codes. We refer the reader to those references for a detailed description and justification of the algorithms, while in this section, we focus on how the two are combined.

3.1. Preparation and Assumptions

For our algorithm to work correctly over an $(n,k)$-convolutional code $\mathcal{C}$, the matrices $A$, $B$, $C$ and $D$ corresponding to the mISO representation of $\mathcal{C}$ are assumed to satisfy some properties, so that the hypotheses considered by both Pellikaan and Rosenthal are fulfilled.
Let:
$$\left\{ \begin{pmatrix} y_t \\ u_t \end{pmatrix} \right\}_{t=0}^{\tau+T} \qquad \left\{ \begin{pmatrix} \hat{y}_t \\ \hat{u}_t \end{pmatrix} \right\}_{t=0}^{\tau+T}$$
denote the transmitted and received sequences, respectively, where, from now on, $\hat{\ }$ will denote received words. Note that we only know $\left\{ \binom{\hat{y}_t}{\hat{u}_t} \right\}$ and our task is to determine $\left\{ \binom{y_t}{u_t} \right\}$.
We suppose that the following assumptions are verified.
Assumption 1.
$T, \Theta \in \mathbb{Z}_+$ are positive integers with $T > \Theta$.
Assumption 2.
$A$ is invertible and the matrix:
$$\Psi(A,B) = \begin{pmatrix} B & AB & \cdots & A^{T-1}B \end{pmatrix}$$
has full row rank $\delta$ and its rows form the parity check matrix of an $(\bar{n},\bar{k})$-linear code $\bar{\mathcal{C}}$, where $\bar{n} = kT$ and $\bar{k} = kT - \delta$. Let $\bar{d}$ be its minimum distance.
Assumption 3.
$\bar{G}$ is an encoder for $\bar{\mathcal{C}}$. Moreover, the code $\bar{\mathcal{C}}$ is supposed to admit an error-correcting pair, say $(\bar{\mathcal{A}}, \bar{\mathcal{B}})$.
Assumption 4.
The matrix:
$$\Phi(A,C) = \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{\Theta-1} \end{pmatrix}$$
has full column rank $\delta$ and its columns define the generator matrix of an $(\tilde{n},\tilde{k})$-linear code $\tilde{\mathcal{C}}$, where $\tilde{n} = (n-k)\Theta$ and $\tilde{k} = \delta$. Let $\tilde{d}$ be its minimum distance.
Assumption 5.
$\tilde{G}$ is an encoder for $\tilde{\mathcal{C}}$ and $\tilde{R}$ is a retraction of $\tilde{G}$; that is, a map $\tilde{R} : \mathbb{F}_q^{\tilde{n}} \to \mathbb{F}_q^{\tilde{k}}$ such that $\tilde{G}\tilde{R} = \mathrm{Id}_{\mathbb{F}_q^{\tilde{k}}}$. Moreover, the code $\tilde{\mathcal{C}}$ is supposed to admit an error-correcting pair, say $(\tilde{\mathcal{A}}, \tilde{\mathcal{B}})$.
Assumption 6.
From time unit $t = 0$ to time unit $t = \tau - 1$, the transmission is error free or, equivalently, has been correctly decoded; that is, $\hat{y}_t = y_t$ and $\hat{u}_t = u_t$ for $0 \leq t < \tau$.
Assumption 7.
In any time interval of length $T$, the error produced during the transmission has weight at most:
$$\lambda := \min\left\{ \left\lfloor \frac{\bar{d}-1}{2} \right\rfloor, \left\lfloor \frac{T}{2\Theta} \right\rfloor \right\}. \tag{7}$$
Let us briefly discuss why these assumptions are reasonable.
Assumptions 2 and 4 imply that $(A,B)$ forms a controllable pair and $(A,C)$ forms an observable pair and, from ([8], § 5.3), one concludes that (4) is a minimal state space representation of a non-catastrophic encoder (also see [12], Thm. 5).
Similarly, Assumption 3 (resp. Assumption 5) allows us to apply Pellikaan's decoding procedure to the linear code $\bar{\mathcal{C}}$ (resp. $\tilde{\mathcal{C}}$).
Since our algorithm is iterative, Assumption 6 is taken for granted as a result of the previous iteration. Note that, at time unit $t$, the error produced during the transmission is given by $\binom{\hat{y}_t - y_t}{\hat{u}_t - u_t}$. Finally, observe that the state $x_\tau$ can be computed correctly from (4).

3.2. The Algorithm

Algorithm 1 goes as follows:
Algorithm 1: Decoding.
1:
Set h = 0 .
2:
If $\tau + T - h\Theta + 1 < \tau + \Theta$ or $\operatorname{rk}\begin{pmatrix} B & AB & \cdots & A^{T-(h+1)\Theta-1}B \end{pmatrix} < \delta$, then STOP (decoding unsuccessful).
3:
Increase h by 1.
4:
4.1
  Compute:
$$\hat{r} := \begin{pmatrix} \hat{y}_{\tau+T-h\Theta+1} \\ \vdots \\ \hat{y}_{\tau+T-(h-1)\Theta} \end{pmatrix} - \begin{pmatrix} D & 0 & \cdots & 0 \\ CB & D & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ CA^{\Theta-2}B & CA^{\Theta-3}B & \cdots & D \end{pmatrix} \begin{pmatrix} \hat{u}_{\tau+T-h\Theta+1} \\ \vdots \\ \hat{u}_{\tau+T-(h-1)\Theta} \end{pmatrix}$$
from the received vectors $\left\{ \binom{\hat{y}_t}{\hat{u}_t} \right\}$. Note that $\hat{r}$ is a column vector with $\tilde{n} = (n-k)\Theta$ rows.
4.2
  If the morphism $E_{\hat{r}} : \tilde{\mathcal{A}} \to \operatorname{Hom}_{\mathbb{F}_q}(\tilde{\mathcal{B}}, \mathbb{F}_q)$ is injective, then go to 2.
4.3
  Choose a nonzero element $a \in \operatorname{Ker} E_{\hat{r}}$ and let $J := (a)_0$.
4.4
  Compute the syndrome $\tilde{\sigma}(\hat{r})$, where $\tilde{\sigma}$ is the syndrome map of $\tilde{\mathcal{C}}$.
4.5
  If $\tilde{\sigma}_J(x) = \tilde{\sigma}(\hat{r})$ has either no solution or more than one solution, then go to 2.
4.6
  Let $r_0$ be the unique column vector such that $\tilde{\sigma}_J(r_0) = \tilde{\sigma}(\hat{r})$ and let $\omega(r_0)$ be its weight.
4.7
  If $\omega(r_0) > \left\lfloor \frac{\tilde{d}-1}{2} \right\rfloor$, then go to 2.
4.8
 Compute the state $x_{\tau+T-h\Theta+1}$ with the help of the retraction $\tilde{R}$ of $\tilde{G}$:
$$x_{\tau+T-h\Theta+1} = \tilde{R}^\intercal \left( \hat{r} - \iota_J(r_0) \right)$$
and note that it is a column vector with $\tilde{k} = \delta$ rows.
5:
 
5.1
 Since $\begin{pmatrix} A^{T-h\Theta}B & \cdots & B \end{pmatrix}$ has full row rank $\delta$, its rows form the parity check matrix of an $(n_h, k_h)$-linear code $\mathcal{C}_h$, where $n_h = k(T-h\Theta)$ and $k_h = n_h - \delta$. Let $d_h$ be its minimum distance and let $(\mathcal{A}_h, \mathcal{B}_h)$ be an error-correcting pair for $\mathcal{C}_h$.
5.2
  Set:
$$\hat{u} := \begin{pmatrix} \hat{u}_\tau \\ \vdots \\ \hat{u}_{\tau+T-h\Theta} \end{pmatrix}$$
  and compute:
$$s := \begin{pmatrix} A^{T-h\Theta}B & \cdots & B \end{pmatrix} \hat{u} - x_{\tau+T-h\Theta+1} + A^{T-h\Theta+1} x_\tau.$$
5.3
  If the morphism $E_{h,\hat{u}} : \mathcal{A}_h \to \operatorname{Hom}_{\mathbb{F}_q}(\mathcal{B}_h, \mathbb{F}_q)$ is injective, then go to 2.
5.4
  Choose a nonzero element $a \in \operatorname{Ker} E_{h,\hat{u}}$ and let $J := (a)_0$.
5.5
  If $\sigma_{h,J}(x) = s$ has either no solution or more than one solution, then go to 2.
5.6
  Let $s_0$ be the unique column vector such that $\sigma_{h,J}(s_0) = s$ and let $\omega(s_0)$ be its weight.
5.7
  If $\omega(s_0) > \left\lfloor \frac{d_h - 1}{2} \right\rfloor$, then go to 2.
5.8
  Decode by:
$$\begin{pmatrix} u_\tau \\ \vdots \\ u_{\tau+T-h\Theta} \end{pmatrix} = \begin{pmatrix} \hat{u}_\tau \\ \vdots \\ \hat{u}_{\tau+T-h\Theta} \end{pmatrix} - \iota_{h,J}(s_0).$$
6:
Compute the remaining states; i.e., $x_t$ from time unit $t = \tau + 1$ to time unit $t = \tau + T - h\Theta$, using the equation $x_{t+1} = A x_t + B u_t$.
7:
Compute the remaining parity vectors; i.e., $y_t$ from time unit $t = \tau$ to time unit $t = \tau + T - h\Theta$, using the equation $y_t = C x_t + D u_t$.
8:
If the resulting error sequence has weight:
$$\sum_{t=\tau}^{\tau+T-h\Theta} \left( \omega(\hat{y}_t - y_t) + \omega(\hat{u}_t - u_t) \right) > \frac{\lambda}{h+1},$$
then go to 2.
9:
The time window $t = \tau, \ldots, \tau + \Theta - 1$ has been correctly decoded.
10:
If the end of the received sequence is not reached, then replace τ by τ + Θ and return to 1.
11:
END (decoding successful).
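For the reader who prefers pseudocode, the following Python skeleton (entirely our own rendering, with hypothetical helper names) mirrors the control flow of Algorithm 1; the two Pellikaan subroutines are left as stubs that should return None whenever one of the checks 4.2–4.7 or 5.3–5.7 sends the algorithm back to step 2:

```python
# A minimal control-flow sketch of Algorithm 1 (our own rendering; all names
# below are ours, not the paper's).
def recover_state(received, tau, h, params):       # steps 4.1-4.8 on C-tilde
    raise NotImplementedError
def recover_info(received, state, tau, h, params): # steps 5.1-5.8 on C_h
    raise NotImplementedError
def propagate(state, info, tau, h, params):        # steps 6-7: run system (4)
    raise NotImplementedError
def rank_psi(m, params):                           # rank of (B AB ... A^{m-1}B)
    raise NotImplementedError
def error_weight(received, ys, info, tau, h):      # weight appearing in step 8
    raise NotImplementedError

def decode_window(received, tau, params):
    T, Theta, lam, delta = params["T"], params["Theta"], params["lam"], params["delta"]
    h = 0
    while True:
        # step 2: stop if the window gets too short or the next matrix degenerates
        if tau + T - h * Theta + 1 < tau + Theta or rank_psi(T - (h + 1) * Theta, params) < delta:
            return None                            # decoding unsuccessful
        h += 1                                     # step 3
        state = recover_state(received, tau, h, params)
        if state is None:
            continue                               # go to 2
        info = recover_info(received, state, tau, h, params)
        if info is None:
            continue                               # go to 2
        ys = propagate(state, info, tau, h, params)
        if error_weight(received, ys, info, tau, h) > lam / (h + 1):
            continue                               # step 8: go to 2
        return info, ys                            # step 9: window decoded

def decode_sequence(received, params):
    tau = 0
    while tau < len(received):                     # step 10: slide the window
        if decode_window(received, tau, params) is None:
            raise RuntimeError("decoding unsuccessful")
        tau += params["Theta"]                     # replace tau by tau + Theta
    return "decoding successful"                   # step 11
```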

3.3. Justification

Let us provide some explanation for each step of the algorithm.
First, Step 2 guarantees that $h$ may be increased and that the code $\mathcal{C}_h$ of step 5.1 is well defined and can be decoded by Pellikaan's algorithm.
The computation of $\hat{r}$ in step 4 is correct thanks to the identity (5) (also see [5], Equations (4.3) and (4.4)) and, thus, concerns the code $\tilde{\mathcal{C}}$ (see Assumption 4). The other computations of step 4 are the implementation of Pellikaan's algorithm for the code $\tilde{\mathcal{C}}$ in terms of the error-correcting pair $(\tilde{\mathcal{A}}, \tilde{\mathcal{B}})$.
Step 5 is related to identity (6) (also see [5], Equation (4.5)) and, thus, to the code $\mathcal{C}_h$ (see Assumption 2 and [5], Equation (4.6)). The remaining parts of step 5 are the implementation of Pellikaan's algorithm for the code $\mathcal{C}_h$ in terms of the error-correcting pair $(\mathcal{A}_h, \mathcal{B}_h)$.
Steps 6 and 7 are deduced from (4). Finally, step 8 is the translation of the last paragraph of page 11 of [5] while step 9 corresponds to the first one of page 12 of [5].
The next step deals with a successful decoding and proceeds to the next iteration, which slides the time window forward. By contrast, the case of an unsuccessful decoding slides the time window backward.
It should be pointed out that the algorithm will produce a correct answer as long as Assumption 7 holds.

3.4. Complexity

Now, let us give some bounds for the complexity of our algorithm.
Recall from ([6], Thm. 2.14) that, since the error-correcting pair is defined over the base field $\mathbb{F}_q$, the algorithm proposed by Pellikaan has complexity $O(n^3)$, where $n$ is the length of the linear code we are decoding. Hence, the complexity of step 4 is $O(\tilde{n}^3) = O(((n-k)\Theta)^3)$ and that of step 5 is $O(n_h^3) = O((k(T-h\Theta))^3)$. By computing the sizes of the matrices and applying well-known algorithms (e.g., [14,15]), each matrix computation in the remaining steps has a complexity equal to or less than the complexity of Pellikaan's algorithm. Thus, the $h$th iteration of the algorithm has complexity:
$$O\left( \max\left\{ ((n-k)\Theta)^3,\ (k(T-h\Theta))^3 \right\} \right).$$
In the best case, where the first iteration is able to decode $\Theta$ codewords, the computational cost per codeword is:
$$O\left( \frac{1}{\Theta} \max\left\{ ((n-k)\Theta)^3,\ (k(T-\Theta))^3 \right\} \right). \tag{8}$$
In a worst case scenario and keeping Step 2 in mind, $\left\lceil \frac{T}{\Theta} \right\rceil$ iterations are required to decode $\Theta$ codewords, since it may occur that $T - \left\lfloor \frac{T}{\Theta} \right\rfloor \Theta = 1$. Then, the computational cost per codeword is bounded from above by:
$$O\left( \left\lceil \frac{T}{\Theta} \right\rceil \frac{1}{\Theta} \max\left\{ ((n-k)\Theta)^3,\ (k(T-\Theta))^3 \right\} \right). \tag{9}$$
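As a quick illustration of these bounds (up to the hidden constants of the O-notation), one may tabulate them for sample parameters; the helper below is ours:

```python
from math import ceil

def best_case_bound(n, k, T, Theta):
    # bound (8): cost per codeword when the first iteration decodes Theta codewords
    return max(((n - k) * Theta) ** 3, (k * (T - Theta)) ** 3) / Theta

def worst_case_bound(n, k, T, Theta):
    # bound (9): ceil(T/Theta) iterations may be needed to decode Theta codewords
    return ceil(T / Theta) * best_case_bound(n, k, T, Theta)

# sample (illustrative) parameters
n, k, T, Theta = 4, 2, 6, 2
print(best_case_bound(n, k, T, Theta))   # 256.0
print(worst_case_bound(n, k, T, Theta))  # 768.0
```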

3.5. Performance

Let us briefly discuss the error correction capability of our approach. Let us point out that, since we are dealing with non-catastrophic encoders (see Section 3.1), a finite number of channel corruptions cannot lead to an infinite sequence of errors after decoding. Furthermore, following ([5], Thm. 3.4), correct decoding is possible if no more than $\lambda$ errors occur in any time window of length $T$ (see Assumption 7).
The study of performance in terms of BER (bit error rate) and SNR (signal-to-noise ratio) is a must before implementing any transmission scheme. In fact, such a study requires fixing the convolutional code, the channel and the algorithm. Thus, in order to simulate the performance of our algorithm, we would have to choose a fixed convolutional code as well as a transmission channel (e.g., binary erasure channel (BEC), binary symmetric channel (BSC) and additive white Gaussian noise (AWGN), among others). Future research must be conducted towards this study following the ideas of [16,17].

4. A Case Study

Next, as an illustration, we show that the combined algorithm detailed in Section 3 can be used to decode received messages that have been previously encoded with the convolutional code studied in ([7], § IV; [8], § 6.3). To do this, it must be checked whether the code verifies the required hypotheses.

4.1. The Convolutional Code

Thus, let $\alpha$ be a primitive element of $\mathbb{F}_q$ and let $n$, $k$ be positive integers such that $k \geq \frac{n}{2}$. Let $\mathcal{C}$ be the $(n,k)$-convolutional code of degree $\delta$ corresponding to the following mISO representation:
$$\begin{aligned} A &= (a_{ij}) \in \mathbb{F}_q^{\delta \times \delta} && \text{where } a_{ij} = \alpha^{ik} \text{ for } i = j \text{ and } 0 \text{ otherwise}, \\ B &= (b_{ij}) \in \mathbb{F}_q^{\delta \times k} && \text{where } b_{ij} = \alpha^{i(j-1)}, \\ C &= (c_{ij}) \in \mathbb{F}_q^{(n-k) \times \delta} && \text{where } c_{ij} = \alpha^{(i-1)j}, \\ D &= (d_{ij}) \in \mathbb{F}_q^{(n-k) \times k} && \text{where } d_{ij} = \alpha^{(i-1)j}. \end{aligned} \tag{10}$$
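The matrices (10) are straightforward to build; the sketch below (ours) realizes $\mathbb{F}_q$ as the prime field $\mathbb{F}_{13}$ with primitive element $\alpha = 2$ and takes $(n, k, \delta) = (4, 2, 3)$, a choice compatible with the constraints of Section 4.4 below:

```python
import numpy as np

p, alpha = 13, 2        # alpha = 2 is a primitive element of F_13
n, k, delta = 4, 2, 3

# the mISO matrices of (10)
A = np.diag([pow(alpha, i * k, p) for i in range(1, delta + 1)])
B = np.array([[pow(alpha, i * (j - 1), p) for j in range(1, k + 1)]
              for i in range(1, delta + 1)])
C = np.array([[pow(alpha, (i - 1) * j, p) for j in range(1, delta + 1)]
              for i in range(1, n - k + 1)])
D = np.array([[pow(alpha, (i - 1) * j, p) for j in range(1, k + 1)]
              for i in range(1, n - k + 1)])
print(A, B, C, D, sep="\n")
```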
There are, indeed, some constraints that $q$, $\delta$, $k$, $n$ must satisfy in order to fulfill the Assumptions of Section 3.1. We will postpone this issue until Section 4.4.

4.2. The Linear Codes $\mathcal{C}_h$

Initially, we study the linear code $\bar{\mathcal{C}}$ whose parity check matrix consists of the rows of (see Assumption 2):
$$H := \Psi(A,B) = \begin{pmatrix} B & AB & \cdots & A^{T-1}B \end{pmatrix},$$
which explicitly yields:
$$H = \begin{pmatrix} 1 & \alpha & \cdots & \alpha^{k(T-1)+k-1} \\ 1 & \alpha^2 & \cdots & \alpha^{2k(T-1)+2(k-1)} \\ \vdots & \vdots & & \vdots \\ 1 & \alpha^\delta & \cdots & \alpha^{\delta k(T-1)+\delta(k-1)} \end{pmatrix}. \tag{12}$$
In order to prove that $\bar{\mathcal{C}}$ is GRS, by the facts of Section 2.1, it suffices to show that $\bar{\mathcal{C}}^\perp$ is GRS. Looking at the expression of (12), it is clear that $H \in \mathbb{F}_q^{\delta \times kT}$ is the matrix whose $(i,j)$-entry is $\alpha^{i(j-1)}$ and, thus, $\bar{\mathcal{C}}^\perp = \mathrm{GRS}_\delta(a,b)$, where $a_j = b_j = \alpha^{j-1}$ for $j = 1, \ldots, kT$. Note that $\bar{\mathcal{C}}$ has length $kT$ and dimension $kT - \delta$.
Now, since GRS codes are MDS (see [1], Thm. 5.3.1.(i)), it follows that the minimum distance of $\bar{\mathcal{C}}$ is equal to the Singleton bound, namely:
$$\bar{d} = d_{\min}(\bar{\mathcal{C}}) = kT - (kT - \delta) + 1 = \delta + 1. \tag{13}$$
Once this code has been studied, we turn to the linear codes $\mathcal{C}_h$ defined in step 5.1 of the Algorithm. The same reasoning shows that $\mathcal{C}_h$ is GRS as long as its defining matrix has maximal rank $\delta$. Note that $\mathcal{C}_h$, being MDS, has minimum distance equal to $d_h := n_h - k_h + 1 = k(T - h\Theta) - \delta + 1$.
Finally, (3) would allow us to record explicit expressions for an $\left\lfloor \frac{n_h - k_h}{2} \right\rfloor$-error-correcting pair $(\mathcal{A}_h, \mathcal{B}_h)$.

4.3. The Linear Code $\tilde{\mathcal{C}}$

Now we focus on the linear code $\tilde{\mathcal{C}}$ that is generated by the columns of the matrix (see Assumption 4):
$$\Phi(A,C) = \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{\Theta-1} \end{pmatrix} = \tilde{G}^\intercal.$$
Upon carrying out the computations, we deduce that the generator matrix of the $((n-k)\Theta, \delta)$-linear code $\tilde{\mathcal{C}}$ is given, in transposed form, by the following matrix:
$$\tilde{G}^\intercal = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha & \alpha^2 & \cdots & \alpha^\delta \\ \vdots & \vdots & & \vdots \\ \alpha^{n-k-1} & \alpha^{2(n-k-1)} & \cdots & \alpha^{\delta(n-k-1)} \\ \alpha^k & \alpha^{2k} & \cdots & \alpha^{\delta k} \\ \alpha^{k+1} & \alpha^{2(k+1)} & \cdots & \alpha^{\delta(k+1)} \\ \vdots & \vdots & & \vdots \\ \alpha^{k(\Theta-1)} & \alpha^{2k(\Theta-1)} & \cdots & \alpha^{\delta k(\Theta-1)} \\ \vdots & \vdots & & \vdots \\ \alpha^{k(\Theta-1)+n-k-1} & \alpha^{2(k(\Theta-1)+n-k-1)} & \cdots & \alpha^{\delta(k(\Theta-1)+n-k-1)} \end{pmatrix}. \tag{15}$$
Thus, considering $b_j = a_j$ and:
$$a_j = \begin{cases} \alpha^{j-1} & \text{if } 1 \leq j \leq n-k, \\ \alpha^{k+j-1-(n-k)} & \text{if } n-k+1 \leq j \leq 2(n-k), \\ \quad \vdots & \quad \vdots \\ \alpha^{k(\Theta-1)+j-1-(\Theta-1)(n-k)} & \text{if } (\Theta-1)(n-k)+1 \leq j \leq \Theta(n-k), \end{cases}$$
it follows that $\tilde{\mathcal{C}} = \mathrm{GRS}_\delta(a,b)$.
As $\tilde{\mathcal{C}}$ is GRS, it is also MDS, and its minimum distance is given by the Singleton bound:
$$\tilde{d} = d_{\min}(\tilde{\mathcal{C}}) = (n-k)\Theta - \delta + 1.$$
Furthermore, it admits an error-correcting pair $(\tilde{\mathcal{A}}, \tilde{\mathcal{B}})$ (see (3)).

4.4. Parameters, Assumptions and Complexity

Throughout this section, we have dealt with the convolutional code $\mathcal{C}$ as well as with the linear codes $\bar{\mathcal{C}}$, $\mathcal{C}_h$ and $\tilde{\mathcal{C}}$. However, it remains to check whether all the assumptions of Section 3.1 hold. This is achieved using the following choice of parameters:
1. $\frac{n}{2} \leq k < n$;
2. $\delta > \frac{(n-k)(2k-n+1)}{2k-n}$ if $n \neq 2k$, and $\delta > 1$ if $n = 2k$;
3. $|\mathbb{F}_q| = q > \delta k \left\lceil \frac{\delta}{n-k} \right\rceil$; and
4. $T := \delta \left\lceil \frac{\delta}{n-k} \right\rceil$ and $\Theta := \left\lceil \frac{\delta}{n-k} \right\rceil$.
The second condition implies that T > Θ , i.e., Assumption 1 is fulfilled.
The first and third conditions imply, by ([13], Cor. 3.2; [8], Thm. 6.3.2; [12], § 3), that the convolutional code $\mathcal{C}$ corresponding to the mISO representation (10) is a non-catastrophic $(n,k)$-convolutional code with degree $\delta$ and free distance:
$$d_{\mathrm{free}}(\mathcal{C}) \geq \delta + 1. \tag{17}$$
Let us check that Assumption 2 is satisfied. First, from the choice of the parameters, one has:
$$k(T-1) + k - 1 = kT - 1 = k\delta \left\lceil \frac{\delta}{n-k} \right\rceil - 1 < q - 1, \qquad k(T-1) + k - 1 = k\delta \left\lceil \frac{\delta}{n-k} \right\rceil - 1 \geq \delta^2 - 1 \geq \delta,$$
and, thus, the rank of $H$ is $\delta$ (see (12)). Similarly,
$$k(\Theta-1) + n - k - 1 < k(T-1) + k - 1 = kT - 1 < q - 1, \qquad k(\Theta-1) + n - k - 1 = k \left\lceil \frac{\delta}{n-k} \right\rceil - k + (n-k-1) \geq \frac{k\delta}{n-k} + (n-2k-1) > \frac{k\delta}{n-k} + \frac{\delta(n-2k)}{n-k} = \delta,$$
where the last inequality follows from the bound on $\delta$ imposed by the second condition. Hence, the rank of $\tilde{G}$ is $\delta$ (see (15)) and Assumption 4 is also fulfilled.
Since Section 4.2 and Section 4.3 provide explicit error-correcting pairs for $\bar{\mathcal{C}}$, $\mathcal{C}_h$ and $\tilde{\mathcal{C}}$, Assumptions 3 and 5 are true.
Regarding Assumption 7, let us compute $\lambda$ in (7). In fact, from (13) and the choices of $T$ and $\Theta$, one obtains:
$$\lambda := \min\left\{ \left\lfloor \frac{\bar{d}-1}{2} \right\rfloor, \left\lfloor \frac{T}{2\Theta} \right\rfloor \right\} = \left\lfloor \frac{\delta}{2} \right\rfloor.$$
Finally, we may write down the bounds on the complexity for this situation. Indeed, since $k \geq n-k$ and $T > \Theta$, the best case bound (8) yields:
$$O\left( \frac{1}{\Theta} \max\left\{ ((n-k)\Theta)^3,\ (k(T-\Theta))^3 \right\} \right) = O\left( \frac{1}{\Theta} (k(T-\Theta))^3 \right) = O\left( \frac{n-k}{\delta} \left( \frac{k\delta^2}{n-k} \right)^3 \right) = O\left( \frac{k^3 \delta^5}{(n-k)^2} \right).$$
For the worst case, the bound can be sharpened, since $\Theta$ codewords are decoded after $\left\lceil \frac{T}{\Theta} \right\rceil$ iterations. Indeed, the cost per decoded codeword (9) is:
$$O\left( \frac{T}{\Theta^2} \max\left\{ ((n-k)\Theta)^3,\ (k(T-\Theta))^3 \right\} \right) = O\left( \frac{T}{\Theta^2} (k(T-\Theta))^3 \right) = O\left( (n-k) \left( \frac{k\delta^2}{n-k} \right)^3 \right) = O\left( \frac{k^3 \delta^6}{(n-k)^2} \right).$$
Remark 1.
Recalling the generalized Singleton bound for convolutional codes and (17), one gets the inequalities:
$$2\lambda \leq \delta + 1 \leq d_{\mathrm{free}}(\mathcal{C}) \leq (n-k)\left( \left\lfloor \frac{\delta}{k} \right\rfloor + 1 \right) + \delta + 1,$$
so that one wonders whether our algorithm could be adapted for greater values of $\lambda$ in Assumption 7. Indeed, one should study how the linear codes $\bar{\mathcal{C}}$, $\mathcal{C}_h$ and $\tilde{\mathcal{C}}$ behave when $\mathcal{C}$ is assumed to be maximum distance separable; for instance, for those codes built in [18]. As a second approach, one could replace Pellikaan's algorithm with the Sudan–Guruswami decoding algorithm (see [1], § 5.4.4). This algorithm, which also applies to generalized Reed–Solomon codes, might correct more than $\left\lfloor \frac{d_{\min}-1}{2} \right\rfloor$ errors, although its complexity is greater than that of Pellikaan's.
Example 1.
As an illustration of parameters satisfying the constraints of Section 4.4, we offer two examples, which are also related to the discussion of ([5], § 5 Variation 2). First, we set $n \geq 2$ and $a \geq 2$ and define $k := n-1$, $\delta := (a-1)(n-k) + 1 = a$. Then, it holds that $T = a^2$, $\Theta = a$, and the algorithm of Section 3.2 will correct errors of weight at most $\lambda = \left\lfloor \frac{\delta}{2} \right\rfloor = \left\lfloor \frac{a}{2} \right\rfloor$ in any time window of length $T = a^2$. It is worth noticing that, in this case, $\bar{d} = \delta + 1 = a + 1$ and $\tilde{d} = (n-k)\Theta - \delta + 1 = a - a + 1 = 1$.
The second example deals with $n$ even and $a \geq 2$. Define $k := \frac{n}{2}$, $\delta := (a-1)(n-k) + 1 = (a-1)k + 1$. Since $\left\lceil \frac{\delta}{n-k} \right\rceil = a$, it holds that $T = ((a-1)k+1)a$ and $\Theta = a$. Then, the algorithm will correct errors of weight at most $\lambda = \left\lfloor \frac{\delta}{2} \right\rfloor = \left\lfloor \frac{(a-1)k+1}{2} \right\rfloor$ in any time window of length $T$. In this case, $\bar{d} = \delta + 1 = (a-1)k + 2$ and $\tilde{d} = (n-k)\Theta - \delta + 1 = ka - (a-1)k - 1 + 1 = k$.
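The parameter choices of both examples can be verified mechanically; the helper below (ours) implements the formulas of Section 4.4:

```python
from math import ceil

def parameters(n, k, delta):
    # T, Theta and lambda for the parameter choices of Section 4.4
    Theta = ceil(delta / (n - k))
    T = delta * Theta
    d_bar = delta + 1                            # by (13)
    lam = min((d_bar - 1) // 2, T // (2 * Theta))
    return T, Theta, lam

# Example 1: k = n - 1, delta = a  =>  T = a^2, Theta = a, lambda = floor(a/2)
n, a = 5, 3
assert parameters(n, n - 1, a) == (a * a, a, a // 2)

# Example 2: n even, k = n/2, delta = (a-1)k + 1  =>  T = delta*a, Theta = a
n, a = 6, 3
k = n // 2
delta = (a - 1) * k + 1
assert parameters(n, k, delta) == (delta * a, a, delta // 2)
print("both examples check out")
```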

5. Conclusions

We have offered a full and explicit implementation of a hard decoding algorithm for a class of convolutional codes based on algebraic techniques [5,6], as well as an approach to the complexity of the resulting algorithm.
For a given family of codes ([7], § IV; [8]), we have checked that the parameters required for the algorithm can be chosen so that the assumptions are fulfilled. A detailed analysis of the computational complexity has allowed us to give lower and upper bounds for the computational cost per decoded codeword. These bounds are polynomial in the degree of the convolutional code.
Further research must be carried out in, at least, three aspects. First, an appropriate model for the transmission channel (e.g., binary symmetric channel, additive white Gaussian noise (AWGN) channel, erasure channel, etc.) must be found so that one can compute the expected complexity; that is, the complexity of the average case (see, for instance, [19]). Second, it must be addressed whether our bounds are valid for other known families of convolutional codes, such as 1-dim MDS codes (see [18]) and maximum distance profile codes (e.g., [10,20] and references therein), among others. Third, one should persevere in the search for new algebraic decoding algorithms whose complexity is comparable to that of sequential decoding. Some recent progress can be found in [12].

Author Contributions

Conceptualization, S.M.S. and F.J.P.M.; methodology, S.M.S. and F.J.P.M.; formal analysis, S.M.S. and F.J.P.M.; investigation, S.M.S. and F.J.P.M.; writing—original draft preparation, S.M.S. and F.J.P.M.; writing—review and editing, S.M.S. and F.J.P.M. All authors have read and agreed to the published version of the manuscript.

Funding

The second author was funded by the Spanish Ministry of Science and Innovation, grant number PGC2018-099599-B-I00.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Huffman, W.C.; Pless, V. Fundamentals of Error-Correcting Codes; Cambridge University Press: Cambridge, UK, 2003.
2. Benchimol, I.B.; Pimentel, C.; Souza, R.D.; Uchôa-Filho, B.F. A new computational decoding complexity measure of convolutional codes. EURASIP J. Adv. Signal Process. 2014, 2014, 173.
3. McEliece, R.J. The trellis complexity of convolutional codes. IEEE Trans. Inform. Theory 1996, 42 Pt 1, 1855–1864.
4. Johannesson, R.; Zigangirov, K.S. Fundamentals of Convolutional Coding; IEEE Series on Digital & Mobile Communication; IEEE Press: New York, NY, USA, 1999.
5. Rosenthal, J. An Algebraic Decoding Algorithm for Convolutional Codes; Progress in Systems and Control Theory; Birkhäuser Verlag: Basel, Switzerland, 1999; Volume 25.
6. Pellikaan, R. On decoding by error location and dependent sets of error positions. Discret. Math. 1992, 106, 369–381.
7. Rosenthal, J.; Schumacher, J.M.; York, E.V. On behaviors and convolutional codes. IEEE Trans. Inform. Theory 1996, 42 Pt 1, 1881–1891.
8. York, E.V. Algebraic Description and Construction of Error-Correcting Codes: A Linear Systems Point of View. Ph.D. Thesis, University of Notre Dame, Notre Dame, IN, USA, 1997. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.53.2368&rep=rep1&type=pdf (accessed on 4 April 2022).
9. Márquez-Corbella, I.; Pellikaan, R. A characterization of MDS codes that have an error-correcting pair. Finite Fields Their Appl. 2016, 40, 224–245.
10. Muñoz Castañeda, A.L.; Plaza Martín, F.J. On the existence and construction of maximum distance profile convolutional codes. Finite Fields Appl. 2021, 75, 101877.
11. Plaza Martín, F.J.; Iglesias Curto, J.I.; Serrano Sotelo, G. On the construction of 1-D MDS convolutional Goppa codes. IEEE Trans. Inform. Theory 2013, 59, 4615–4625.
12. Lieb, J.; Rosenthal, J. Erasure decoding of convolutional codes using first-order representations. Math. Control Signals Syst. 2021, 33, 499–513.
13. Rosenthal, J.; York, E.V. BCH convolutional codes. IEEE Trans. Inform. Theory 1999, 45, 1833–1844.
14. Strassen, V. Gaussian elimination is not optimal. Numer. Math. 1969, 13, 354–356.
15. Williams, V.V. Multiplying matrices faster than Coppersmith–Winograd. In Proceedings of the STOC’12 2012 ACM Symposium on Theory of Computing, New York, NY, USA, 19–22 May 2012; pp. 887–898.
16. Lamarca, M.; Sala, J.; Martinez, A. Iterative decoding algorithms for RS-convolutional concatenated codes. In Proceedings of the 3rd International Symposium on Turbo Codes, Brest, France, 1–5 September 2003; pp. 543–546.
17. Smarandache, R.; Pusane, A.E.; Vontobel, P.O.; Costello, D.J. Pseudo-codewords in LDPC convolutional codes. In Proceedings of the IEEE International Symposium on Information Theory, Seattle, WA, USA, 9–14 July 2006; pp. 1364–1368.
18. Muñoz Castañeda, A.L.; Muñoz Porras, J.M.; Plaza Martín, F.J. Rosenthal’s decoding algorithm for certain 1-dimensional convolutional codes. IEEE Trans. Inform. Theory 2019, 65, 7736–7741.
19. Guillén i Fàbregas, A. Coding in the block-erasure channel. IEEE Trans. Inform. Theory 2006, 52, 5116–5121.
20. Hutchinson, R.; Smarandache, R.; Trumpf, J. On superregular matrices and MDP convolutional codes. Linear Algebra Appl. 2008, 428, 2585–2596.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
