Article

Decoding of MDP Convolutional Codes over the Erasure Channel under Linear Systems Point of View

by Maria Isabel García-Planas 1,*,† and Laurence E. Um 2,†
1 Departament de Matemàtiques, Universitat Politècnica de Catalunya, 08028 Barcelona, Spain
2 Department of Mathematics and Informatics, Faculty of Sciences, University of Douala, Douala BP 24157, Cameroon
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2024, 12(14), 2159; https://doi.org/10.3390/math12142159
Submission received: 1 June 2024 / Revised: 3 July 2024 / Accepted: 8 July 2024 / Published: 10 July 2024
(This article belongs to the Special Issue Algebraic Coding and Control Theory)

Abstract:
This paper highlights the decoding capabilities of MDP convolutional codes over the erasure channel by describing them as discrete linear dynamical systems, so that the controllability and observability properties of linear systems theory, in particular output observability, can be applied and expressed easily in matrix language. These capabilities are compared with the decoding capabilities of MDS block codes over the same channel. Not only is the time complexity better, but the decoding capability is also increased with this approach, because convolutional codes handle variable-length data streams more flexibly than block codes, which are fixed-length and less adaptable to varying data lengths without padding or other adjustments.

1. Introduction

Information is an invaluable asset of our time. Its transmission has always been subject to accuracy problems: besides the obstacles between the transmitter and the receiver, disruptions can occur at any point in the process. The physical media and channels involved in the exchange are always imperfect and subject to errors that may result in the loss of essential data.
Among the different kinds of channels, there is the erasure channel, in which the receiver knows whether an arriving symbol is correct, since each symbol either arrives correctly or is erased. In electronic communication, especially over the Internet, messages are transmitted in packets, and each packet comes with a checksum. The receiver knows that a packet is correct when its checksum is correct; otherwise, the packet was corrupted or lost in the course of transmission.
Convolutional codes are error-correcting codes that generate redundant bits through the convolution of the input data with a series of generator polynomials. This process ensures that the encoded data can be protected against errors during transmission over noisy channels. They possess a certain degree of flexibility due to their “sliding window” feature, which means the received information can be grouped into blocks or windows in many ways. Convolutional codes can correct more errors than classical block codes when erasure channels are considered.
Convolutional codes, which can be treated within the theory of linear systems, provide attractive opportunities for regulating the transmission of information. In particular, decoding a convolutional code can be viewed as finding the trajectory of the corresponding linear system that is, in some sense, closest to the received data. Garcia-Planas, Souidi, and Um presented in [1] a decoding algorithm for convolutional codes using linear systems theory. Recently, Lieb and Rosenthal in [2] presented a new algorithm using the output observability character introduced in [1]; this new algorithm has the advantage that it can minimize both the decoding delay and the computational effort in the erasure recovery process.
Among convolutional codes, there is a specific class called MDP convolutional codes introduced in [3], which are constructed to achieve the maximum possible minimum distance at every time step. This means that the distance properties of the code are optimized to ensure the highest error correction capability. Due to their optimal distance properties, these kinds of codes offer superior error correction capabilities compared to non-MDP convolutional codes with the same parameters. MDP convolutional codes are particularly well-suited for use in sequential decoding algorithms and widely used in communication systems that demand high error resilience, including deep-space communication, satellite communication, and wireless networks.
Compared to other erasure decoding algorithms for convolutional codes available in the literature (see, for example, [4,5,6]), our proposed systems-theoretic approach offers the advantage of reduced computational effort and decoding delay, thanks to the treatment through linear systems theory using an input–state–output description, which enables the use of the concept of output observability.
While preparing a systematic review of MDP convolutional code decoding over erasure channels, the authors of this work found only a few recent articles on decoding convolutional codes in the available literature. Among them, it is worth highlighting the work of Martín and Plaza [7], which presents an explicit implementation of a complex decoding algorithm for a class of convolutional codes based on algebraic techniques, together with an analysis of the complexity of the resulting algorithm; the work of Pinto et al. [8], which proposes a construction of two-dimensional convolutional codes based on the existing decoding algorithms for one-dimensional codes (so it may be interesting to improve the decoding criteria); and the work of Nowosielski et al. [9], which proposes an algorithm for the automatic recognition of convolutional codes based on the operation of the Viterbi decoder; the existence of new algorithms can facilitate such recognition.
This paper is structured as follows. A preliminary section introduces the concept of the convolutional code from the linear system theory point of view. The following section presents the control properties of controllability, observability, and output observability, which help to describe an iterative decoding algorithm for convolutional codes. Next, a case of convolutional codes, the so-called MDP convolutional codes, is introduced. Finally, a decoding procedure for this kind of code is analyzed, and a conclusion is presented.

2. Preliminaries

We consider a finite set of symbols $\mathbb{F}_q$, called the alphabet, with q elements. The information to be processed and the codewords will be expressed with symbols from this alphabet. The set $\mathbb{F}_q$ is structured as a finite field (in particular, the size q of the alphabet is a power of a prime number).
A convolutional code is a type of error-correcting code in which each k-bit information symbol (each k-bit string) to be encoded is transformed into an n-bit symbol, where the transformation is a function of the last information symbols contained in the memory of the encoder.
Following Rosenthal and York [10], a convolutional code is defined as a submodule of $\mathbb{F}^n[s]$.
Definition 1.
Let $\mathcal{C} \subseteq \mathcal{A}^*$ be a code. Then, $\mathcal{C}$ is a convolutional code if and only if $\mathcal{C}$ is an $\mathbb{F}[s]$-submodule of $\mathbb{F}^n[s]$.
Corollary 1.
There exists an injective morphism of modules
$$\psi : \mathbb{F}^k[s] \longrightarrow \mathbb{F}^n[s], \qquad u(s) \longmapsto v(s).$$
Equivalently, there exists a polynomial matrix $G(s)$ (called an encoder) of order $n \times k$ and having maximal rank such that
$$\mathcal{C} = \{ v(s) \mid \exists\, u(s) \in \mathbb{F}^k[s] : v(s) = G(s)\,u(s) \}.$$
The ratio $k/n$ between the number k of input bits and the number n of output bits per encoding operation is known as the rate of the convolutional code, and it is a crucial parameter in communication systems because it directly impacts the efficiency and reliability of data transmission.
We denote by $\nu_i$ the maximum of the degrees of the polynomials in the i-th row of $G(s)$; we define the complexity of the encoder as $\delta = \sum_{i=1}^{n} \nu_i$, which measures the influence of past inputs on the present output of the encoder; and, finally, we define the complexity of the convolutional code, $\delta(\mathcal{C})$, as the maximum of the degrees of the largest minors of $G(s)$.
The representation of a code using a polynomial matrix is not unique; however, we have the following proposition.
Proposition 1.
Two n × k rational encoders G 1 ( s ) , G 2 ( s ) define the same convolutional code if and only if there is a k × k unimodular matrix U ( s ) such that G 1 ( s ) U ( s ) = G 2 ( s ) .
After a suitable permutation of the rows, we can assume that the generator matrix $G(s)$ is of the form
$$G(s) = \begin{pmatrix} P(s) \\ Q(s) \end{pmatrix}$$
with right coprime polynomial factors $P(s) \in \mathbb{F}[s]^{(n-k) \times k}$ and $Q(s) \in \mathbb{F}[s]^{k \times k}$, respectively.
A description of convolutional codes can be provided by a time-invariant discrete linear system called discrete-time state-space system in control theory (see [11]). We want to note that linear systems theory is quite general and it permits all kinds of time axes and signal spaces.
The linear dynamical system representation of the convolutional code is easily obtained from the following results.
Theorem 1.
Let $\mathcal{C} \subseteq \mathbb{F}^n[s]$ be a $k/n$-convolutional code of complexity δ. Then, there exist matrices K, L of size $(\delta + n - k) \times \delta$ and a matrix M of size $(\delta + n - k) \times n$ having their coefficients in $\mathbb{F}$ such that the code $\mathcal{C}$ is defined as
$$\mathcal{C} = \{ v(s) \in \mathbb{F}^n[s] \mid \exists\, x(s) \in \mathbb{F}^\delta[s] : sKx(s) + Lx(s) + Mv(s) = 0 \}.$$
Moreover, K has full column rank, $\begin{pmatrix} K & M \end{pmatrix}$ has full row rank, and $\operatorname{rank}\begin{pmatrix} s_0 K + L & M \end{pmatrix} = \delta + n - k$ for all $s_0 \in \mathbb{F}$.
A triple $(K, L, M)$ satisfying the above conditions is called a minimal representation of $\mathcal{C}$.
Proposition 2.
Let ( K 1 , L 1 , M 1 ) be another representation of the convolutional code C . Then, there exist invertible matrices T and S of adequate sizes such that
$$(K_1, L_1, M_1) = (TKS^{-1}, TLS^{-1}, TM).$$
We have the following corollary
Corollary 2.
The triple ( K , L , M ) can be written as
$$K = \begin{pmatrix} I_\delta \\ 0 \end{pmatrix}, \qquad L = \begin{pmatrix} -A \\ -C \end{pmatrix}, \qquad M = \begin{pmatrix} 0 & -B \\ I_{n-k} & -D \end{pmatrix}.$$
And, we deduce the following corollary.
Corollary 3.
$$\mathcal{C} = \left\{ v(z) \in \mathbb{F}^n[z] \;\middle|\; \exists\, x(z) \in \mathbb{F}^\delta[z] : \begin{pmatrix} zI - A & 0 & -B \\ -C & I & -D \end{pmatrix} \begin{pmatrix} x(z) \\ v(z) \end{pmatrix} = 0 \right\}.$$
Now, if we split the vector $v(z)$ into two parts, $v(z) = \begin{pmatrix} y(z) \\ u(z) \end{pmatrix}$, the equality
$\begin{pmatrix} zI - A & 0 & -B \\ -C & I & -D \end{pmatrix} \begin{pmatrix} x(z) \\ v(z) \end{pmatrix} = 0$ can be expressed as
$$\begin{aligned} z\,x(z) &= A x(z) + B u(z) \\ y(z) &= C x(z) + D u(z). \end{aligned}$$
Finally, applying the inverse Z-transform, we obtain the system
$$\left.\begin{aligned} x(t+1) &= A x(t) + B u(t) \\ y(t) &= C x(t) + D u(t) \end{aligned}\right\}, \qquad v(t) = \begin{pmatrix} u(t) \\ y(t) \end{pmatrix}, \quad x(0) = 0. \tag{4}$$
Remark 1.
The vectors $u(t)$, $x(t)$, $y(t)$, and $v(t) = \begin{pmatrix} u(t) \\ y(t) \end{pmatrix}$ are known as the information vector, the state vector, the parity vector, and the code vector transmitted via the communication channel, respectively.
We identify this system with the matrix quadruple $(A, B, C, D)$. The function $T(z) = C(zI - A)^{-1}B + D$ is called the transfer function of the linear system.
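To make the input–state–output description concrete, the following Python sketch (ours, not part of the paper) runs the recursion $x(t+1) = Ax(t) + Bu(t)$, $y(t) = Cx(t) + Du(t)$ with $x(0) = 0$ over GF(2) and emits the code vectors $v(t) = (u(t), y(t))$; the particular matrices A, B, C, D are illustrative assumptions, not a code taken from the paper.

```python
import numpy as np

# Illustrative (A, B, C, D) over GF(2) for a rate-1/2, complexity-2 code.
# These particular matrices are an assumption made for the example only.
A = np.array([[0, 1],
              [0, 0]], dtype=int)
B = np.array([[0],
              [1]], dtype=int)
C = np.array([[1, 1]], dtype=int)
D = np.array([[1]], dtype=int)

def encode(u_seq, A, B, C, D, q=2):
    """Run x(t+1) = A x(t) + B u(t), y(t) = C x(t) + D u(t) over GF(q), x(0) = 0."""
    x = np.zeros((A.shape[0], 1), dtype=int)
    codeword = []
    for u in u_seq:
        u = np.atleast_2d(u).T                       # column vector u(t)
        y = (C @ x + D @ u) % q                      # parity vector y(t)
        codeword.append((u.flatten(), y.flatten()))  # code vector v(t) = (u(t), y(t))
        x = (A @ x + B @ u) % q                      # next state x(t+1)
    return codeword

# Encode the information sequence u = 1, 0, 1, 1 over GF(2).
for t, (u, y) in enumerate(encode([[1], [0], [1], [1]], A, B, C, D)):
    print(f"t={t}: u={u}, y={y}")
```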
The distance concepts are fundamental in coding theory.
Definition 2.
Let $x, y \in \mathbb{F}^n$ be vectors. The Hamming distance $\operatorname{Ham}(x, y)$ is defined to be the number of components in which x and y differ. The weight $\operatorname{wt}(x)$ of x is defined to be the number of nonzero components of x.
Obviously, we have that $\operatorname{Ham}(x, y) = \operatorname{wt}(x - y)$. When $\{ v_t = \begin{pmatrix} u_t \\ y_t \end{pmatrix} \in \mathbb{F}^n \mid t = 0, 1, 2, \dots \}$ is a codeword, its weight is defined as $\sum_t \operatorname{wt}(v_t)$.
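As a minimal illustration (ours), these quantities can be computed componentwise; over GF(2), subtraction coincides with addition, so $\operatorname{Ham}(x, y) = \operatorname{wt}(x - y)$ can be checked directly:

```python
def wt(x):
    """Hamming weight: number of nonzero components of x."""
    return sum(1 for xi in x if xi != 0)

def ham(x, y):
    """Hamming distance: number of positions in which x and y differ."""
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

# Over GF(2), x - y has a 1 exactly where x and y differ, so Ham(x, y) = wt(x - y).
assert ham([1, 0, 1], [1, 1, 0]) == wt([0, 1, 1]) == 2
```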
Several distance measures are suitable for assessing their error correction capability in various scenarios for convolutional codes. In particular, free and column distance are two of the most notable distance measures for convolutional codes [12].
Definition 3.
The j-th column distance of the code C is defined as
$$d_j = \min \left\{ \sum_{t=0}^{j} \operatorname{wt}(u_t) + \sum_{t=0}^{j} \operatorname{wt}(y_t) \right\},$$
where the minimum is taken over all trajectories $(u_t, y_t)$ of the system (4) with initial vector $u_0 \neq 0$.
It is easy to observe that
$$d_0 \le d_1 \le d_2 \le \cdots,$$
and hence there is an integer r such that $d_r = d_{r+j}$ for all $j \ge 0$. This largest possible column distance is called the free distance:
Definition 4.
$$d_{\mathrm{free}} := \lim_{j \to \infty} d_j.$$
Proposition 3.
For every $j \in \mathbb{N}$, it is verified that
$$d_j \le (n - k)(j + 1) + 1.$$
For a proof, it suffices to consider the definition of $d_j$.
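For small parameters, the j-th column distance can be computed by brute force directly from Definition 3: enumerate all input windows $u_0, \dots, u_j$ with $u_0 \neq 0$, run the system (4) from $x(0) = 0$, and keep the minimum accumulated weight. The sketch below (ours, under the same GF(2) assumptions and hypothetical matrices as in the earlier encoding example) does exactly that; it is exponential in j and only meant as a check on toy codes.

```python
from itertools import product
import numpy as np

def column_distance(j, A, B, C, D, q=2):
    """Brute-force d_j from Definition 3: minimum of sum_t wt(u_t) + wt(y_t),
    t = 0..j, over all trajectories of system (4) with x(0) = 0 and u_0 != 0."""
    k = B.shape[1]
    best = None
    for window in product(product(range(q), repeat=k), repeat=j + 1):
        if not any(window[0]):                 # require u_0 != 0
            continue
        x = np.zeros((A.shape[0], 1), dtype=int)
        weight = 0
        for u_t in window:
            u = np.array(u_t, dtype=int).reshape(-1, 1)
            y = (C @ x + D @ u) % q
            weight += int(np.count_nonzero(u)) + int(np.count_nonzero(y))
            x = (A @ x + B @ u) % q
        best = weight if best is None else min(best, weight)
    return best

# Example with the hypothetical (A, B, C, D) from the encoding sketch:
# print([column_distance(j, A, B, C, D) for j in range(4)])
```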
Definition 5.
A sequence $\{ v(t) = \begin{pmatrix} u(t) \\ y(t) \end{pmatrix} \in \mathbb{F}^n \mid t = 0, 1, 2, \dots \}$ represents a finite-weight codeword if
-
Equation (4) is satisfied for all $t \in \mathbb{N}$;
-
there is a $j \in \mathbb{N}$ such that $x(j+1) = 0$ and $u(t) = 0$ for $t \ge j + 1$.

3. Control Properties

In linear systems theory, the major concepts are controllability and observability. These concepts were introduced by R. Kalman in 1960 [13]. Controllability means the possibility of steering the system from any initial state to any final one by means of a control signal at the input. Roughly speaking, observability means the possibility of identifying the internal state of a system from measurements of its outputs. We introduce these concepts as well as the notion of output observability [11].
Definition 6.
A linear system $(A, B, C, D)$ is a controllable system if the controllability matrix associated with the system
$$\mathcal{C} = \begin{pmatrix} B & AB & A^2B & \cdots & A^{\delta-1}B \end{pmatrix}$$
has full rank δ, where δ is the complexity of the code.
If ( A , B , C , D ) is controllable, it is possible to drive a given state vector to any other state vector in a finite number of steps.
Definition 7.
A linear system $(A, B, C, D)$ is said to be observable if the observability matrix associated with the system
$$\mathcal{O} = \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{\delta-1} \end{pmatrix}$$
has full rank δ.
The observability character of a code means that one can be sure that a message has been completed once a sufficiently long string of zeros has been received.
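Both rank conditions can be verified numerically, keeping in mind that the rank must be taken over the finite field rather than over the reals. The following sketch (ours, assuming a prime field GF(q)) assembles the controllability and observability matrices and computes their rank by Gaussian elimination modulo q.

```python
import numpy as np

def rank_mod_q(M, q=2):
    """Rank of an integer matrix over GF(q), q prime, via Gaussian elimination."""
    M = M.copy() % q
    rows, cols = M.shape
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col] % q), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        M[rank] = (M[rank] * pow(int(M[rank, col]), -1, q)) % q
        for r in range(rows):
            if r != rank and M[r, col] % q:
                M[r] = (M[r] - M[r, col] * M[rank]) % q
        rank += 1
    return rank

def controllability_matrix(A, B):
    """(B  AB  A^2 B  ...  A^{delta-1} B), with delta the state-space dimension."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def observability_matrix(A, C):
    """(C; CA; CA^2; ...; CA^{delta-1}) stacked vertically."""
    blocks = [C]
    for _ in range(A.shape[0] - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Controllability/observability hold iff the rank over GF(q) equals delta:
# rank_mod_q(controllability_matrix(A, B)) == A.shape[0]
# rank_mod_q(observability_matrix(A, C)) == A.shape[0]
```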
These properties of a linear system are related to properties of the corresponding convolutional code in the following manner.
Theorem 2
([10], Theorems 2.9 and 2.10). $(A, B, C, D)$ is a minimal representation of the convolutional code $\mathcal{C}$ if and only if it is controllable.
Theorem 3
([10], Lemma 2.11). Assume that ( A , B , C , D ) is controllable. Then, C is non-catastrophic if and only if ( A , B , C , D ) is observable.
Non-catastrophic convolutional codes are characterized by the fact that a finite number of transmission errors will result in only a finite number of decoding errors.
Another important property related to the decoding process is the output observability.
Definition 8.
A linear system $(A, B, C, D)$ is said to be output-observable if the output observability matrix associated with the system
$$\mathcal{T}_\ell = \begin{pmatrix} C & D & & & \\ CA & CB & D & & \\ CA^2 & CAB & CB & D & \\ \vdots & \vdots & & \ddots & \ddots \\ CA^{\ell} & CA^{\ell-1}B & CA^{\ell-2}B & \cdots & CB \;\; D \end{pmatrix}$$
has full row rank for all $\ell \in \mathbb{N}$.
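The block Toeplitz structure of $\mathcal{T}_\ell$ is straightforward to assemble programmatically. The sketch below (ours, following the block layout of Definition 8) stacks the first block column $C, CA, \dots, CA^\ell$ next to the lower triangular Toeplitz part built from the blocks $D, CB, CAB, \dots$; its row rank over GF(q) can then be tested with the rank_mod_q helper from the previous sketch.

```python
import numpy as np

def output_observability_matrix(A, B, C, D, ell, q=2):
    """Assemble T_ell: block row i is (C A^i | C A^{i-1} B ... C B  D  0 ... 0)."""
    p, k = C.shape[0], B.shape[1]              # p = n - k parity outputs, k inputs
    markov = [D % q] + [(C @ np.linalg.matrix_power(A, i) @ B) % q for i in range(ell)]
    # markov[j] is the coefficient of u(s + t - j) in y(s + t): D, CB, CAB, ...
    rows = []
    for i in range(ell + 1):
        first = (C @ np.linalg.matrix_power(A, i)) % q        # block acting on x(s)
        toeplitz = [markov[i - j] if j <= i else np.zeros((p, k), dtype=int)
                    for j in range(ell + 1)]
        rows.append(np.hstack([first] + toeplitz))
    return np.vstack(rows)

# Output observability (Definition 8) asks this matrix to have full row rank over
# GF(q), which can be tested with the rank_mod_q helper from the previous sketch.
```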
Output observability represents the possibility of determining an internal state from only a finite set of outputs, over a finite number of steps.
Fixing the initial state $x(s) = 0$, the output observability matrix allows us to describe a sequence of trajectories $\{v_s, \dots, v_{s+\ell}\}$ in the following manner.
Theorem 4.
Let $(A, B, C, D)$ be a representation of a convolutional code. Suppose that the initial state of the system is $x(s) = 0$; then, the vector $(v_s, \dots, v_{s+\ell})^t$ belongs to the kernel of the matrix
$$\begin{pmatrix} D & -I & & & & & \\ CB & 0 & D & -I & & & \\ CAB & 0 & CB & 0 & D & -I & \\ \vdots & & & & & & \ddots \\ CA^{\ell-1}B & 0 & CA^{\ell-2}B & 0 & \cdots & CB & 0 & D & -I \end{pmatrix}.$$
Proof. 
The system
$$\mathcal{T}_\ell \begin{pmatrix} x(s) \\ u(s) \\ \vdots \\ u(s+\ell) \end{pmatrix} = \begin{pmatrix} y(s) \\ \vdots \\ y(s+\ell) \end{pmatrix}$$
is equivalent to
$$\begin{pmatrix} \mathcal{T}_\ell & -I \end{pmatrix} \begin{pmatrix} x(s) \\ u(s) \\ \vdots \\ u(s+\ell) \\ y(s) \\ \vdots \\ y(s+\ell) \end{pmatrix} = 0.$$
And now, it suffices to apply block column elementary transformations to the system matrix and consider the condition $x(s) = 0$. □
Matrix $\mathcal{T}_\ell$ provides information about the $\ell$-th column distance in the following manner: a convolutional code $(A, B, C, D)$ has $\ell$-th column distance $d_\ell = (n - k)(\ell + 1) + 1$ if and only if all minors appearing in $\mathcal{T}_\ell$ that are not trivially zero are nonzero.
Remark 2.
Remember that a minor of a lower triangular block Toeplitz matrix (as in our case) with row indices $I = \{i_1, \dots, i_m\}$ and column indices $J = \{j_1, \dots, j_m\}$ is called not trivially zero if the index sets are proper; the sets of row and column indices are said to be proper if, for each $\sigma \in \{1, 2, \dots, m\}$, the inequality $j_\sigma \le \left\lceil \frac{i_\sigma}{n-k} \right\rceil k$ holds.

Iterative Decoding Algorithm

With the linear systems approach to convolutional codes, the output observability matrix allows us to build decoding algorithms.
As an example, we present the following algorithm, which can be found in [1] and is divided into several steps.
This algorithm focuses on detecting errors before entering the correction process. Indeed, we first need to check whether or not the received sequence is a valid encoded sequence, that is, whether it has been compromised or modified during transmission. If it has, we correct the error first and then deduce the original input message that was encoded. We do so using the initial conditions and the output observability matrix.
The objective of this decoding algorithm is first to match the word received from the encoder against the list of codewords.
Step 1:
Set the initial conditions x ( 0 ) .
Step 2:
Using D, we compute the list of codewords corresponding to the possible inputs $u_0$.
Then, iteratively, generate the list of codewords with the matrices $\mathcal{T}_j$, $j = 1, \dots, \ell$, and store them in a set.
Step 3:
Compute the distance between the received $y(0)$ and the set of values $y(0)$ appearing in the codewords. Compute $d_H$ until it is minimal; then, settle for the closest codeword output $\bar{y}(0)$ in the list; thus, the system
$$\begin{pmatrix} D & & & \\ CB & D & & \\ CAB & CB & D & \\ \vdots & & \ddots & \\ CA^{\ell-1}B & CA^{\ell-2}B & \cdots & CB \;\; D \end{pmatrix} \begin{pmatrix} u(0) \\ \vdots \\ u(\ell) \end{pmatrix} = \begin{pmatrix} y(0) \\ y(1) \\ \vdots \\ y(\ell) \end{pmatrix} \tag{9}$$
becomes solvable; deduce the corresponding input $u(0)$.
Iteratively, compute the distance between the received $y(1), \dots, y(\ell)$ and the set of corresponding codewords. Detect the minimum distance $d_H$ between them, and settle for the closest codeword output $\bar{y}(i)$, $i = 1, \dots, \ell$, in the list; thus, system (9) becomes compatible; deduce the corresponding input sequence.
We have the following proposition:
Proposition 4.
Let $(A, B, C, D)$ be a representation of a convolutional code over a field $\mathbb{F}_q$ with $q = q_1^m$, $q_1$ being prime. Given the sequence $y = (y(0), y(1), \dots, y(\ell))$, the decoded sequence $u(0), u(1), \dots, u(\ell)$ is obtained recursively as follows:
(a)
Setting $x(0) = 0$;
(b)
 
(i)
Computing $d_H(y(0), \operatorname{Im} D)$;
if $d_H(y(0), \operatorname{Im} D) = 0$, we solve $D\,u(0) = y(0)$;
else, for some $\bar{y} \in \operatorname{Im} D$ such that $d_H(y(0), \bar{y}) = \min d_H$, $u(0)$ is a solution of $D\,u(0) = \bar{y}$.
(ii)
Computing $d_H(y(1) - CBu(0), \operatorname{Im} D)$;
if $d_H(y(1) - CBu(0), \operatorname{Im} D) = 0$, we solve $D\,u(1) = y(1) - CBu(0)$, with $u(0)$ obtained in (i);
else, for some $\bar{y} \in \operatorname{Im} D$ such that $d_H(y(1) - CBu(0), \bar{y}) = \min d_H$, we solve $D\,u(1) = \bar{y}$, with $u(0)$ obtained in (i).
⋮
(ℓ)
Computing
$d_H(y(\ell) - CA^{\ell-1}Bu(0) - CA^{\ell-2}Bu(1) - \cdots - CBu(\ell-1), \operatorname{Im} D)$;
if $d_H(y(\ell) - CA^{\ell-1}Bu(0) - CA^{\ell-2}Bu(1) - \cdots - CBu(\ell-1), \operatorname{Im} D) = 0$, we solve $D\,u(\ell) = y(\ell) - CA^{\ell-1}Bu(0) - CA^{\ell-2}Bu(1) - \cdots - CBu(\ell-1)$, with $u(0), u(1), \dots, u(\ell-1)$ obtained in (i), (ii), …, (ℓ−1);
else, for some $\bar{y} \in \operatorname{Im} D$ such that
$d_H(y(\ell) - CA^{\ell-1}Bu(0) - CA^{\ell-2}Bu(1) - \cdots - CBu(\ell-1), \bar{y}) = \min d_H$, we solve $D\,u(\ell) = \bar{y}$, with $u(0), u(1), \dots, u(\ell-1)$ obtained in (i), (ii), …, (ℓ−1).
Corollary 4.
Let $(A, B, C, D)$ be a representation of a convolutional code over a field $\mathbb{F}_q$ with $q = q_1^m$, $q_1$ being prime. If D has full column rank and $n \ge 2k$, given the sequence $y = (y(0), y(1), \dots, y(\ell))$, the decoded sequence $u(0), u(1), \dots, u(\ell)$ is obtained recursively as follows:
(a)
Setting $x(0) = 0$;
(b)
 
(i)
$u(0)$ is a solution of $D\,u(0) = y(0)$;
(ii)
$u(1)$ is a solution of $D\,u(1) = y(1) - CBu(0)$, with $u(0)$ obtained in (i);
⋮
(ℓ)
$u(\ell)$ is a solution of
$D\,u(\ell) = y(\ell) - CA^{\ell-1}Bu(0) - CA^{\ell-2}Bu(1) - \cdots - CBu(\ell-1)$, with $u(0), u(1), \dots, u(\ell-1)$ obtained in (i), (ii), …, (ℓ−1).
Proof. 
If D has full column rank and $n \ge 2k$, then the matrix D is invertible, so, for all $y(0)$, there exists $u(0)$ such that $D\,u(0) = y(0)$. Similarly, for all $i \in \{1, \dots, \ell\}$ and for all $y(i)$, there exists $u(i)$ such that $D\,u(i) = y(i) - CA^{i-1}Bu(0) - CA^{i-2}Bu(1) - \cdots - CBu(i-1)$. □
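Under the hypotheses of Corollary 4, each step just subtracts the contribution of the previously decoded inputs and solves a small linear system with D over the field. The sketch below (ours; it assumes an erasure-free received parity sequence over a prime field GF(q) and uses a generic GF(q) solver, since the paper does not prescribe an implementation) follows that recursion.

```python
import numpy as np

def solve_mod_q(M, r, q=2):
    """Return one solution u of M u = r over GF(q), q prime, or None if inconsistent."""
    aug = np.hstack([M % q, np.atleast_2d(r).T % q]).astype(int)
    rows, cols = aug.shape[0], aug.shape[1] - 1
    pivots, rank = [], 0
    for col in range(cols):
        piv = next((i for i in range(rank, rows) if aug[i, col] % q), None)
        if piv is None:
            continue
        aug[[rank, piv]] = aug[[piv, rank]]
        aug[rank] = (aug[rank] * pow(int(aug[rank, col]), -1, q)) % q
        for i in range(rows):
            if i != rank and aug[i, col] % q:
                aug[i] = (aug[i] - aug[i, col] * aug[rank]) % q
        pivots.append(col)
        rank += 1
    if any(aug[i, -1] % q for i in range(rank, rows)):
        return None                            # inconsistent system
    u = np.zeros(cols, dtype=int)
    for i, col in enumerate(pivots):           # free variables are set to zero
        u[col] = aug[i, -1] % q
    return u

def decode_recursively(y_seq, A, B, C, D, q=2):
    """Corollary 4 recursion: solve D u(l) = y(l) - sum_j C A^{l-1-j} B u(j)."""
    u_seq = []
    for l, y_l in enumerate(y_seq):
        residual = np.array(y_l, dtype=int) % q
        for j, u_j in enumerate(u_seq):
            residual = (residual - C @ np.linalg.matrix_power(A, l - 1 - j) @ B @ u_j) % q
        u_l = solve_mod_q(D, residual, q)
        if u_l is None:
            raise ValueError(f"no input explains y({l}); the window is not erasure-free")
        u_seq.append(u_l)
    return u_seq
```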
Remark 3.
This decoding is both a detection and a correction method: first, we detect the error, and then we try to correct it.
Remark 4.
The resolution with this method depends heavily on matrix D.
Remark 5.
Suppose that we have a code $\mathcal{C}$ that is not output-observable. Then, we know that D does not have full row rank, which can potentially increase the decoding time.

4. MDP Convolutional Codes

Maximum distance profile (MDP) convolutional codes are a specific class of convolutional codes characterized by optimal distance properties: their column distances increase as rapidly as possible for as long as possible.
This is explained more concretely below.
Definition 9.
An $(n, k, \delta)$ convolutional code $\mathcal{C}$ is maximum distance profile (MDP) [6] if
$$d_L = (n - k)(L + 1) + 1,$$
where $L = \left\lfloor \frac{\delta}{k} \right\rfloor + \left\lfloor \frac{\delta}{n-k} \right\rfloor$.
These codes are designed to maximize the minimum distance between code sequences, thereby enhancing error detection and correction capabilities. Notice that, when transmitting over an erasure channel, each symbol is either received correctly or is not received at all.
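As a small numerical illustration (ours, not taken from the paper), consider hypothetical parameters $(n, k, \delta) = (3, 1, 2)$:
$$L = \left\lfloor \frac{\delta}{k} \right\rfloor + \left\lfloor \frac{\delta}{n-k} \right\rfloor = \left\lfloor \frac{2}{1} \right\rfloor + \left\lfloor \frac{2}{2} \right\rfloor = 3, \qquad d_L = (n-k)(L+1) + 1 = 2 \cdot 4 + 1 = 9,$$
so an MDP code with these parameters already attains column distance 9 within a window of $L + 1 = 4$ time steps.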
Theorem 5.
Let $(A, B, C, D)$ be a representation of an $(n, k, \delta)$-convolutional code $\mathcal{C}$. This code has $\ell$-th column distance $d_\ell = (n - k)(\ell + 1) + 1$ if and only if there are no zero entries in $CA^{i-1}B$, $0 < i \le \ell$, and every minor of $\mathcal{T}_\ell$ that is not trivially zero is nonzero.
Proof. 
See [12] for a proof. □
Suppose now that the finite sequence $\{ v(t) = \begin{pmatrix} u(t) \\ y(t) \end{pmatrix} \in \mathbb{F}^n \mid t = 0, 1, 2, \dots \}$ represents a finite-weight codeword, and let $s \in \mathbb{N}$ be such that $x(s+1) = 0$ and $u(t) = 0$ for $t \ge s + 1$. Then, taking into account that
$$\mathcal{T}_\ell \begin{pmatrix} x(s) \\ u(s) \\ \vdots \\ u(s+\ell) \end{pmatrix} = \begin{pmatrix} y(s) \\ \vdots \\ y(s+\ell) \end{pmatrix},$$
we have that $y(t) = 0$ for $t \ge s + 1$.
Consequently, we have
Theorem 6.
With this hypothesis,
$$(u(0), \dots, u(s))^t \in \operatorname{Ker} \begin{pmatrix} CA^{s}B & \cdots & CB \\ \vdots & & \vdots \\ CA^{s+k}B & \cdots & CA^{k}B \end{pmatrix}.$$
Proof. 
The theorem follows from the fact that $u(t) = y(t) = 0$ for $t \ge s + 1$ and
$$\begin{pmatrix} C & D & & & & & \\ CA & CB & D & & & & \\ \vdots & \vdots & & \ddots & & & \\ CA^{s} & CA^{s-1}B & \cdots & CB & D & & \\ CA^{s+1} & CA^{s}B & \cdots & CAB & CB & D & \\ \vdots & \vdots & & & & & \ddots \\ CA^{s+k+1} & CA^{s+k}B & \cdots & CA^{k}B & \cdots & CB \;\; D \end{pmatrix} \begin{pmatrix} 0 \\ u(0) \\ \vdots \\ u(s) \\ 0 \\ \vdots \\ 0 \end{pmatrix} = \begin{pmatrix} y(0) \\ y(1) \\ \vdots \\ y(s) \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \,\square$$
This result provides algebraic restrictions that will be exploited to obtain some advantages in the decoding algorithm.

5. Decoding over an Erasure Channel

Suppose that a convolutional code C is transmitting over an erasure channel. Then, we can state the following result.
Proposition 5.
Let $(A, B, C, D)$ be a representation of a convolutional code with column distance $d_{j_0}$. If, in any sliding window of length $(j_0 + 1)n$, at most $d_{j_0} - 1$ erasures occur, then we can completely recover the transmitted sequence.
Assume we corrected or received the information correctly for the sequence up to instant $i - 1$. To recover the erased components in $(u(i), y(i)), \dots, (u(i+\ell), y(i+\ell))$, we consider the following equation, in which the erased elements are treated as unknowns:
$$\begin{pmatrix} C \\ CA \\ \vdots \\ CA^{\ell} \end{pmatrix} x(i) + \begin{pmatrix} D & -I & & & & & \\ CB & 0 & D & -I & & & \\ \vdots & & & & \ddots & & \\ CA^{\ell-1}B & 0 & CA^{\ell-2}B & 0 & \cdots & D & -I \end{pmatrix} \begin{pmatrix} u(i) \\ y(i) \\ \vdots \\ u(i+\ell) \\ y(i+\ell) \end{pmatrix} = 0,$$
where $x(i) = A^{i-1}Bu(0) + \cdots + Bu(i-1)$.
Indeed, we know that a solution to the system exists because we have assumed that only erasures occur. On the other hand, Theorem 5 ensures that each of the $(\ell+1)(n-k) \times (\ell+1)(n-k)$ not trivially zero full-size minors of the system matrix is nonzero, which ensures the uniqueness of the solution.
When an excessive number of erasures occur at a certain point in the sequence, the system lacks a unique solution, preventing the recovery of the erased information.
The knowledge of the erased elements permits us to calculate the next state of the system.
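To make the recovery step concrete, the sketch below (ours, under the same GF(q) assumptions as the earlier sketches and reusing the solve_mod_q helper from the decoding sketch; it is not the authors' implementation) assembles the window system above, substitutes the known symbols, keeps the erased positions as unknowns, and solves the resulting linear system; uniqueness of the solution is exactly what the minor condition of Theorem 5 guarantees when the number of erasures is within the correction capability.

```python
import numpy as np

def recover_erasures(x_i, window, erased, A, B, C, D, ell, q=2):
    """Recover erased symbols in the window (u(i), y(i), ..., u(i+ell), y(i+ell)).

    window : flat integer list in that order (erased entries may hold any value)
    erased : boolean mask of the same length, True where a symbol was erased
    x_i    : the state x(i), already known from the previously decoded inputs
    """
    p, k = C.shape[0], B.shape[1]              # p = n - k
    # Left factor (C; CA; ...; CA^ell) acting on the known state x(i).
    obs = np.vstack([(C @ np.linalg.matrix_power(A, j)) % q for j in range(ell + 1)])
    rhs_state = (obs @ np.array(x_i, dtype=int).reshape(-1, 1)).flatten() % q
    # Block row j: CA^{j-1}B u(i) + ... + D u(i+j) - y(i+j) = -CA^j x(i).
    Msys = np.zeros(((ell + 1) * p, (ell + 1) * (k + p)), dtype=int)
    for j in range(ell + 1):
        for t in range(j + 1):
            coef = D if t == j else C @ np.linalg.matrix_power(A, j - t - 1) @ B
            Msys[j * p:(j + 1) * p, t * (k + p):t * (k + p) + k] = coef % q
        Msys[j * p:(j + 1) * p, j * (k + p) + k:(j + 1) * (k + p)] = (-np.eye(p, dtype=int)) % q
    w = np.array(window, dtype=int) % q
    erased = np.array(erased, dtype=bool)
    # Known symbols move to the right-hand side; erased positions remain as unknowns.
    rhs = (-rhs_state - Msys[:, ~erased] @ w[~erased]) % q
    sol = solve_mod_q(Msys[:, erased], rhs, q)  # GF(q) solver from the decoding sketch
    if sol is None:
        raise ValueError("too many erasures in this window")
    w[erased] = sol
    return w
```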
The fact that the code's $L$-th column distance is maximal implies that all preceding column distances are also maximal. Consequently, the window size in the process does not need to be fixed and can be adjusted according to the distribution of erasures in the sequence. This means that smaller windows can be framed, allowing for the resolution of smaller systems. As a result, decoding can commence before an entire block has been received, demonstrating once again that convolutional codes offer more flexibility than block codes in this regard.

6. Discussion

Each of the existent methods for decoding has its strengths and weaknesses, and the choice of which to use depends on the specific application requirements, such as the expected erasure patterns, computational resources, and latency constraints. For instance, window decoding is particularly useful in streaming applications where low latency is crucial. In contrast, algebraic and syndrome-based methods might be preferred in environments where abundant computational resources and exact decoding are paramount. Iterative and graph-based methods balance performance and complexity, making them suitable for various practical applications.
The input–state–output representation approach models the system regarding its inputs, internal states, and outputs. This method allows for decoding using a structural approach that leverages the dynamics of the convolutional system. Moreover, it allows for a structured and detailed analysis of the system and can quickly adapt to changes in channel conditions; similar to window decoding, the decoding can start before the entire block is received and facilitates the integration of control and state estimation techniques, enhancing the robustness. The methodology employed can be beneficial in addressing the challenge of designing MDP convolutional codes for communication over an erasure channel.
In summary, the choice of the decoding method depends on the specific application requirements, such as latency, computational complexity, adaptability, and robustness to different erasure patterns. The input–state–output representation is particularly useful in systems where the dynamic structure of the convolutional code can be leveraged to improve the decoding performance.

7. Conclusions

In this paper, we conduct an in-depth investigation into the behavior of maximum distance profile (MDP) convolutional codes when transmitted over an erasure channel. Our analysis employs the input–state–output representation of these codes, which serves as a foundational framework for understanding their operational dynamics in the presence of erasures. By leveraging this representation, we aim to provide a comprehensive description of an erasure decoding procedure specifically tailored for MDP convolutional codes.
The possibility of representing convolutional codes as linear systems enables the use of linear algebra, significantly contributing to the advancement of the state of the art, in particular for decoding maximum distance profile (MDP) convolutional codes over the erasure channel. For instance, this method translates the problem of finding the original message into solving a set of linear equations, which are efficiently handled using linear algebra. On the other hand, the presented decoding method benefits from linear algebra through the output observability matrix. It relies on matrix operations to optimize the pathfinding process, enhancing the efficiency and accuracy of the decoding.

Author Contributions

Investigation, M.I.G.-P. and L.E.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Garcia-Planas, M.I.; Souidi, E.; Um, L.E. Decoding algorithm for convolutional codes under linear systems point of view. In Recent Advances in Circuits, Systems, Signal Processing, and Communications; WSEAS Press: Athens, Greece, 2014; pp. 17–24.
2. Lieb, J.; Rosenthal, J. Erasure decoding of convolutional codes using first-order representations. Math. Control Signals Syst. 2021, 33, 499–513.
3. Gluesing-Luerssen, H.; Rosenthal, J.; Smarandache, R. Strongly-MDS Convolutional Codes. IEEE Trans. Inf. Theory 2006, 52, 584–598.
4. Almeida, P.J.; Lieb, J. Complete j-MDP convolutional codes. IEEE Trans. Inf. Theory 2020, 66, 7348–7359.
5. Napp, D.; Pinto, R.; Sidorenko, V.R. Concatenation of convolutional codes and rank metric codes for multi-shot network coding. Des. Codes Cryptogr. 2018, 86, 303–318.
6. Tomás, V.; Rosenthal, J.; Smarandache, R. Decoding of Convolutional Codes over the Erasure Channel. IEEE Trans. Inf. Theory 2012, 58, 90–108.
7. Martín Sánchez, S.; Plaza Martín, F.J. A Decoding Algorithm for Convolutional Codes. Mathematics 2022, 10, 1573.
8. Pinto, R.; Spreafico, M.; Vela, C. Optimal Construction for Decoding 2D Convolutional Codes over an Erasure Channel. Axioms 2024, 13, 197.
9. Nowosielski, L.; Dudziński, B.; Nowosielski, M.; Slubowska, A. Recognition of Convolutional Codes. Prz. Elektrotech. 2023, 99, 140.
10. Rosenthal, J.; York, E.V. BCH Convolutional Codes. IEEE Trans. Inf. Theory 1999, 45, 1833–1844.
11. García Planas, M.I.; Souidi, E.M.; Um, L.E. Convolutional codes under linear systems point of view. Analysis of output-controllability. WSEAS Trans. Math. 2012, 11, 324–333.
12. Hutchinson, R.; Rosenthal, J.; Smarandache, R. Convolutional codes with maximum distance profile. Syst. Control Lett. 2005, 54, 53–63.
13. Kalman, R.E. Contribution to the theory of optimal control. Boletín Soc. Mat. Mex. 1960, 5, 102–119.