Article

On Matrix Representation of Extension Field GF(p^L) and Its Application in Vector Linear Network Coding

School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Entropy 2024, 26(10), 822; https://doi.org/10.3390/e26100822
Submission received: 30 July 2024 / Revised: 8 September 2024 / Accepted: 24 September 2024 / Published: 26 September 2024
(This article belongs to the Special Issue Information Theory and Network Coding II)

Abstract

For a finite field GF(p^L) with prime p and L > 1, one of the standard representations uses L × L matrices over GF(p), so that the arithmetic of GF(p^L) can be realized by the arithmetic among these matrices over GF(p). Based on the matrix representation of GF(p^L), a conventional linear network coding (LNC) scheme over GF(p^L) can be transformed into an L-dimensional vector LNC scheme over GF(p). Recently, several real implementations of coding schemes over GF(2^L), such as the Reed–Solomon (RS) codes in the ISA-L library and the Cauchy-RS codes in the Longhair library, have been built upon the classical result to achieve matrix representation; this result focuses on the structure of every individual matrix but does not shed light on the inherent correlation among the matrices that correspond to different elements. In this paper, we first generalize this classical result from GF(2^L) to GF(p^L) and paraphrase it from the perspective of matrices with different powers, to make the inherent correlation among these matrices more transparent. Moreover, motivated by this correlation, we devise a lookup table to pre-store the matrix representation with a smaller size than the one utilized in current implementations. In addition, this correlation implies useful theoretical results which can be adopted to further demonstrate the advantages of binary matrix representation in vector LNC. In the remainder of this paper, we focus on the study of vector LNC and investigate the applications of matrix representation related to the aspects of random and deterministic vector LNC.

1. Introduction

The finite fields GF(p^L), with p a prime and L ≥ 1 an integer, have been widely used in modern information coding, information processing, cryptography, and so on. Specifically, in the study of linear network coding (LNC), conventional LNC [1] transmits data symbols along the edges over GF(p^L), and every outgoing edge of a node v transmits a data symbol that is a GF(p^L)-linear combination of the incoming data symbols to v. A general LNC framework called vector LNC [2] models the data unit transmitted along every edge as an L-dimensional vector of data symbols over GF(p). Correspondingly, the coding operations at v involve GF(p)-linear combinations of all data symbols in the incoming data unit vectors and are naturally represented by L × L matrices over GF(p).
Recently, many works [3,4,5,6,7] have shown that vector LNC has the potential to reduce extra coding overheads in networks relative to conventional LNC. A tool for realizing vector LNC is the matrix representation of GF(p^L) [8], which consists of L × L matrices over GF(p) such that the arithmetic of GF(p^L) can be realized by the arithmetic among these matrices over GF(p). Based on the matrix representation of GF(p^L), a conventional LNC scheme over GF(p^L) can be transformed into an L-dimensional vector LNC scheme over GF(p). In addition to the theory of LNC, many existing implementations of linear codes, such as the Cauchy-RS codes in the Longhair library [9] and the RS codes in the Jerasure library [10,11] and the latest release of the ISA-L library [12], also practically achieve arithmetic over GF(2^L) using matrix representation.
In order to achieve the matrix representation of GF(p^L), a classical result obtained in [13] relies on polynomial multiplications to describe the corresponding matrix of an element over GF(2^L). A number of current implementations and studies (see, e.g., [9,10,11,12,13,14,15]) utilize this characterization to achieve the matrix representation of GF(2^L). However, the characterization in its present form focuses on the structure of every individual matrix and does not shed light on the inherent correlation among the matrices that correspond to different elements. As a result, in the aforementioned existing implementations, the corresponding binary matrix is either independently computed on demand or fully stored in advance in a lookup table as an L × L matrix over GF(2).
In the first part of this paper, we shall generalize the characterization of matrix representation from GF(2^L) to GF(p^L) and paraphrase it from the perspective of matrices with different powers, so that the inherent correlation among these matrices becomes more transparent. More importantly, this correlation motivates us to devise a lookup table to pre-store the matrix representation with a smaller size. Specifically, compared to the one adopted in the latest release of the ISA-L library [12], the table size is reduced by a factor of L. Additionally, this correlation also implies useful theoretical results that can be adopted to further demonstrate the advantages of binary matrix representation in vector LNC. In the second part, we focus on the study of vector LNC and show the applications of matrix representation related to the aspects of random and deterministic coding. In random coding, we theoretically analyze the coding complexity of conventional and vector LNC via matrix representation under the same alphabet size 2^L. The comparison results show that vector LNC via matrix representation can reduce the complexity of multiplications by at least half. Then, in deterministic LNC, we focus on special choices of coding operations that can be efficiently implemented. In particular, we illustrate that the choice of primitive polynomial influences the distribution of matrices with different numbers of non-zero entries and propose an algorithm to obtain a set of sparse matrices that are good candidates for the coefficients of a practical LNC scheme.
This paper is structured as follows. Section 2 reviews the mathematical fundamentals of representations of an extension field GF(p^L). Section 3 paraphrases the matrix representation from the perspective of matrices with different powers and then devises a lookup table to pre-store the matrix representation with a smaller size. Section 4 focuses on the study of vector LNC and shows the applications of matrix representation related to the aspects of random and deterministic coding. Section 5 summarizes this paper.
Notation. In this paper, every bold symbol represents a vector or a matrix. In particular, I_L refers to the identity matrix of size L, and 0 and 1, respectively, represent an all-zero and an all-one matrix, whose size, if not explicitly stated, can be inferred from the context.

2. Preliminaries

In this section, we review three different approaches to expressing an extension field GF(p^L) with p^L elements, where p is a prime. The first approach is the standard polynomial representation. Let p(x) denote an irreducible polynomial of degree L over GF(p) and let α be a root of p(x). Every element of GF(p^L) can be uniquely expressed as a polynomial in α over GF(p) with degree less than L, and {1, α, α^2, …, α^{L−1}} forms a basis of GF(p^L) over GF(p). In particular, every β ∈ GF(p^L) can be uniquely represented in the form Σ_{l=0}^{L−1} v_l α^l with v_l ∈ GF(p). In the polynomial representation, the element β = Σ_{l=0}^{L−1} v_l α^l is expressed as the L-dimensional representative vector v_β = [v_0 v_1 ⋯ v_{L−1}]^T over GF(p). To further simplify this expression, v_β can be written as the integer 0 ≤ d_β^poly ≤ p^L − 1 such that

$d_\beta^{\mathrm{poly}} = \sum_{l=0}^{L-1} p^l \hat{v}_l,$   (1)

where 0 ≤ v̂_l < p is the integer representation of v_l, that is, $\sum_{i=1}^{\hat{v}_l} 1 = v_l$, where 1 is the multiplicative unit of GF(p).
The second approach is called the generator representation, which further requires p(x) to be a primitive polynomial, so that α is a primitive element and all p^L − 1 non-zero elements in GF(p^L) can be generated as α^0, α^1, α^2, …, α^{p^L−2}. Thus, every non-zero β ∈ GF(p^L) is uniquely expressed as the integer 0 ≤ d_β^gen ≤ p^L − 2 subject to

$\beta = \alpha^{d_\beta^{\mathrm{gen}}}.$   (2)
The polynomial representation clearly specifies the additive structure of GF(p^L) as a vector space, or as a quotient ring of polynomials over GF(p), while leaving the multiplicative structure hard to determine. Meanwhile, the generator representation explicitly illustrates the cyclic multiplicative group structure of GF(p^L)∖{0} without clearly demonstrating the additive structure. It turns out that addition and its inverse in GF(p^L) are easy to implement based on the polynomial representation, while multiplication and its inverse in GF(p^L) are easy to implement based on the generator representation. In particular, for β_1, β_2 ∈ GF(p^L),

$d_{\beta_1+\beta_2}^{\mathrm{poly}} = d_{\beta_1}^{\mathrm{poly}} \oplus d_{\beta_2}^{\mathrm{poly}}, \ \text{or equivalently}\ v_{\beta_1+\beta_2} = v_{\beta_1} + v_{\beta_2},$   (3)

$d_{\beta_1\beta_2}^{\mathrm{gen}} = \left(d_{\beta_1}^{\mathrm{gen}} + d_{\beta_2}^{\mathrm{gen}}\right) \bmod (p^L - 1),$   (4)
where the operation ⊕ between the two integers d_{β_1}^poly and d_{β_2}^poly denotes the component-wise p-ary addition v_{β_1} + v_{β_2} between their p-ary expressions v_{β_1} and v_{β_2}. This is the key reason that, in practice, both representations are adopted interchangeably when conducting operations in GF(p^L).
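To make the interplay concrete, here is a minimal Python sketch (ours, not from the paper) of both representations for GF(2^4) with the primitive polynomial p(x) = x^4 + x + 1: addition acts component-wise on the polynomial representation, while multiplication adds exponents in the generator representation via exp/log tables.

```python
# Minimal sketch of the two representations for GF(2^4) with the
# primitive polynomial p(x) = x^4 + x + 1 (all names are ours).
# An element beta is stored as the integer d_beta^poly, whose bit l
# is the coefficient v_l of alpha^l.

P_MASK, TOP_BIT, N = 0b10011, 0b10000, 15   # p(x), x^4 term, 2^4 - 1

# Generator representation: exp_t[d] = alpha^d (as d^poly); log_t inverts it.
exp_t, log_t, b = [0] * N, {}, 1
for d in range(N):
    exp_t[d], log_t[b] = b, d
    b <<= 1                # multiply by alpha, i.e. by x
    if b & TOP_BIT:        # degree reached L: reduce modulo p(x)
        b ^= P_MASK

def add(b1, b2):
    """Addition: component-wise GF(2) sum of the coefficient vectors."""
    return b1 ^ b2

def mul(b1, b2):
    """Multiplication: 3 lookups plus one integer addition mod 2^L - 1."""
    if b1 == 0 or b2 == 0:
        return 0
    return exp_t[(log_t[b1] + log_t[b2]) % N]
```

For instance, α^4 = α + 1 here, so exp_t[4] == 0b0011, and mul(0b0010, 0b1001) recovers 1, since x · (x^3 + 1) ≡ 1 (mod p(x)).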
Unfortunately, except for some special β ∈ GF(p^L), such as α^l with 0 ≤ l < L, there is no straightforward way to establish the mapping between d_β^poly and d_β^gen without computation, and a built-in lookup table is always adopted in practice to map between the two types of representations. For instance, Table 1 lists the mapping between d_β^poly and d_β^gen for the non-zero elements β in GF(2^4) with p(x) = x^4 + x + 1.
By convention, elements β in GF(p^L) are represented as d_β^poly. It takes L p-ary additions to compute d_{β_1+β_2}^poly = d_{β_1}^poly ⊕ d_{β_2}^poly. Based on the lookup table, it takes 3 lookups (which, respectively, map d_{β_1}^poly, d_{β_2}^poly to d_{β_1}^gen, d_{β_2}^gen, and d_{β_1β_2}^gen to d_{β_1β_2}^poly), 1 integer addition, and at most 1 modulo-(p^L − 1) operation to compute d_{β_1β_2}^poly. Meanwhile, it is worthwhile to note that the calculation of d_{β_1β_2}^poly without the table follows the multiplication of the polynomials f_1(x) and f_2(x) with coefficient vectors v_{β_1} and v_{β_2}, respectively, and finally reduces to

$f_1(x) f_2(x) \bmod p(x),$   (5)

whose computational complexity, compared with that of the following matrix representation, will be fully discussed in Section 4.
The third approach, which is the focus of this paper, is given by means of matrices and is called the matrix representation [8]. Let C be the L × L companion matrix of an irreducible polynomial p(x) of degree L over GF(p). In particular, if p(x) = a_0 + a_1x + a_2x^2 + ⋯ + a_{L−1}x^{L−1} + x^L with a_0, a_1, …, a_{L−1} ∈ GF(p),

$C = \begin{bmatrix} \mathbf{0} & -a_0 \\ I_{L-1} & \begin{matrix} -a_1 \\ \vdots \\ -a_{L-1} \end{matrix} \end{bmatrix}_{L \times L}.$   (6)

(Over GF(2), −a_i = a_i, so the signs can be dropped.)
It can be easily verified that p(x) is the characteristic polynomial of C, and, according to the Cayley–Hamilton theorem, p(C) = 0. As a result, {I_L, C, C^2, …, C^{L−1}} forms a basis of GF(p^L) over GF(p), and for every β ∈ GF(p^L) with the representative vector v_β = [v_0 v_1 ⋯ v_{L−1}]^T based on the polynomial representation, the matrix representation M(β) of β is defined as

$M(\beta) = \sum_{i=0}^{L-1} v_i C^i.$   (7)
If the considered p(x) further qualifies as a primitive polynomial, then, similar to the role of the primitive element α defined above, C is a multiplicative generator of all non-zero elements in GF(p^L); that is, M(α^i) = C^i for all 0 ≤ i ≤ p^L − 2. One advantage of the matrix representation is that all operations in GF(p^L) can be realized by matrix operations over GF(p) among the matrices in C, so there is no need to interchange between the polynomial and the generator representations when performing field operations. For more detailed discussions of the representations of an extension field, please refer to [16].
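As a concrete illustration (ours, with all names chosen here), the following Python sketch builds the companion matrix C of p(x) = x^4 + x + 1 over GF(2) and the map M(β), and checks that field multiplication becomes pure matrix arithmetic over GF(2); for p = 2 the signs in C are immaterial.

```python
import numpy as np

# Sketch (ours): companion matrix of p(x) = x^4 + x + 1 over GF(2) and
# the matrix representation M(beta) = sum_i v_i C^i.
L = 4
a = np.array([1, 1, 0, 0])                 # a_0..a_3 of p(x) = 1 + x + x^4

C = np.zeros((L, L), dtype=int)
C[1:, :L - 1] = np.eye(L - 1, dtype=int)   # subdiagonal identity block
C[:, L - 1] = a                            # last column: a_0..a_{L-1} (p = 2)

def M(v):
    """M(beta) for the representative vector v = (v_0, ..., v_{L-1})."""
    out, Ci = np.zeros((L, L), dtype=int), np.eye(L, dtype=int)
    for vi in v:
        out = (out + vi * Ci) % 2
        Ci = Ci @ C % 2
    return out

def fmul(u, v):
    """Field multiplication realized purely over GF(2):
    M(beta1) @ v_beta2 is the representative vector of beta1 * beta2."""
    return M(u) @ np.array(v) % 2
```

For example, M(α) = C, C^{15} = I_4, and fmul applied to the vectors of α and α^3 + 1 returns the vector of 1.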
Even though, based on the polynomial and generator representations, the arithmetic over GF(p^L) can be efficiently realized by (3), (4) and a lookup table, this requires two different types of calculation systems, i.e., one over GF(p) and the other over the integers. This hinders deployment in applications with resource-constrained edge devices, such as in ad hoc networks or Internet of Things applications. In comparison, the matrix representation of GF(p^L) interprets the arithmetic of GF(p^L) solely in terms of arithmetic over GF(p), so it is also a good candidate for the efficient implementation of linear codes over GF(p^L), such as in [9,10,11,12,13].

3. Useful Characterization of the Matrix Representation

Let p(x) be an irreducible polynomial over GF(p) of degree L and let α ∈ GF(p^L) be a root of p(x). When p = 2, a useful characterization of the matrix representation M(β) of β ∈ GF(p^L) (with respect to p(x)) can be deduced from the following classical result obtained in Construction 4.1 and Lemma 4.2 of [13]: for 1 ≤ j ≤ L, the jth column of M(β) is equal to the binary expression of α^{j−1}β based on the polynomial representation. A number of implementations and studies (see, e.g., [9,10,11,12,13,14,15]) of linear codes utilize this characterization to achieve the matrix representation of GF(2^L). However, the characterization in its present form relies on polynomial multiplications and focuses on the structure of every individual M(β). It does not explicitly shed light on the inherent correlation among the M(β) of different β ∈ GF(2^L). It turns out that in existing implementations, such as the Cauchy-RS codes in the Longhair library [9], the RS codes in the Jerasure library [10,11], and the latest release of the ISA-L library [12], M(β) is either independently computed on demand or fully stored in a lookup table as an L × L matrix over GF(2) in advance.
In this section, we shall generalize the characterization of matrix representation from GF(2^L) to GF(p^L) and paraphrase it based on the interplay with the generator representation instead of the conventional polynomial representation, so that the correlation among the M(β) of different β ∈ GF(p^L) becomes more transparent. From now on, we assume that p(x) is further qualified to be a primitive polynomial, so that α is a primitive element in GF(p^L). For simplicity, let v_i, 0 ≤ i ≤ p^L − 2, denote the representative (column) vector of α^i based on the polynomial representation. Then, the following theorem asserts that the matrix representation M(α^i) = C^i consists of L representative vectors with consecutive subscripts.
Theorem 1.
For 0 ≤ i ≤ p^L − 2, the matrix representation M(α^i) = C^i can be written as

$C^i = \begin{bmatrix} v_i & v_{i+1} & \cdots & v_{i+L-1} \end{bmatrix}.$   (8)

As C^{p^L−1} = I_L, we omit the modulo-(p^L − 1) reductions on the exponents of C and the subscripts of v throughout this paper for brevity.
Proof. 
First, the matrix C^i can be characterized by multiplication iterations based on (6) as follows. When 2 ≤ i ≤ L,

$C^i = \begin{bmatrix} U & \begin{matrix} -a_0 & p_0^{(1)} & p_0^{(2)} & \cdots & p_0^{(i-1)} \\ -a_1 & p_1^{(1)} & p_1^{(2)} & \cdots & p_1^{(i-1)} \\ \vdots & \vdots & \vdots & & \vdots \\ -a_{L-2} & p_{L-2}^{(1)} & p_{L-2}^{(2)} & \cdots & p_{L-2}^{(i-1)} \\ -a_{L-1} & p_{L-1}^{(1)} & p_{L-1}^{(2)} & \cdots & p_{L-1}^{(i-1)} \end{matrix} \end{bmatrix},$   (9)

where the L × (L − i) matrix $U = \begin{bmatrix} \mathbf{0} \\ I_{L-i} \end{bmatrix}$. Further, when L + 1 ≤ i ≤ p^L − 2,

$C^i = \begin{bmatrix} p_0^{(i-L)} & p_0^{(i-L+1)} & \cdots & p_0^{(i-1)} \\ p_1^{(i-L)} & p_1^{(i-L+1)} & \cdots & p_1^{(i-1)} \\ \vdots & \vdots & & \vdots \\ p_{L-2}^{(i-L)} & p_{L-2}^{(i-L+1)} & \cdots & p_{L-2}^{(i-1)} \\ p_{L-1}^{(i-L)} & p_{L-1}^{(i-L+1)} & \cdots & p_{L-1}^{(i-1)} \end{bmatrix}.$   (10)
The entries in (9) and (10) iteratively satisfy

$p_0^{(1)} = a_0 a_{L-1}, \quad p_j^{(1)} = -a_{j-1} + a_j a_{L-1}, \quad 1 \le j \le L-1,$   (11)

and

$p_0^{(k)} = -a_0 p_{L-1}^{(k-1)}, \quad p_j^{(k)} = p_{j-1}^{(k-1)} - a_j p_{L-1}^{(k-1)}, \quad 1 \le j \le L-1,\ 2 \le k \le i-1.$   (12)
When i = 0, it can be easily checked that each vector in {v_0, v_1, v_2, …, v_{L−1}} is a unit vector whose only non-zero entry 1 is located at the (i + 1)th row of v_i. Therefore, C^0 = I_L = [v_0 v_1 v_2 ⋯ v_{L−1}], and (8) holds. When i = 1, consider v_L with p(C) = 0, i.e.,

$a_0 I_L + a_1 C + a_2 C^2 + \cdots + a_{L-1} C^{L-1} + C^L = \mathbf{0}.$   (13)

Obviously, $v_L = \begin{bmatrix} -a_0 & -a_1 & \cdots & -a_{L-1} \end{bmatrix}^T$, and (8) holds.
Assume that (8) holds when i = m, i.e., C^m = [v_m v_{m+1} ⋯ v_{m+L−1}]. The Lth column vector [p_0^{(m−1)} p_1^{(m−1)} ⋯ p_{L−1}^{(m−1)}]^T of C^m based on (9) corresponds to the representative vector of C^{m+L−1}; that is, the matrix C^{m+L−1} is equal to

$p_0^{(m-1)} I_L + p_1^{(m-1)} C + \cdots + p_{L-2}^{(m-1)} C^{L-2} + p_{L-1}^{(m-1)} C^{L-1}.$   (14)

It remains to prove, by induction, that C^{m+1} = [v_{m+1} v_{m+2} ⋯ v_{m+L}]. As the column vectors indexed from 1 to L − 1 of the matrix C^{m+1} are exactly the same as those indexed from 2 to L of C^m, it suffices to show that the Lth column vector of C^{m+1} corresponds to v_{m+L}. The following is based on (13) and (14):

$\begin{aligned} C^{m+L} &= p_0^{(m-1)} C + p_1^{(m-1)} C^2 + \cdots + p_{L-2}^{(m-1)} C^{L-1} + p_{L-1}^{(m-1)} C^L \\ &= p_0^{(m-1)} C + p_1^{(m-1)} C^2 + \cdots + p_{L-2}^{(m-1)} C^{L-1} - p_{L-1}^{(m-1)} \left( a_0 I_L + \cdots + a_{L-1} C^{L-1} \right) \\ &= -a_0 p_{L-1}^{(m-1)} I_L + \left( p_0^{(m-1)} - a_1 p_{L-1}^{(m-1)} \right) C + \cdots + \left( p_{L-2}^{(m-1)} - a_{L-1} p_{L-1}^{(m-1)} \right) C^{L-1}. \end{aligned}$

It can be easily checked that p_0^{(m)} and p_j^{(m)} with 1 ≤ j ≤ L − 1 in C^{m+1}, calculated by (12), exactly constitute the representative vector of C^{m+L}, i.e., v_{m+L}. This completes the proof. □
The above theorem draws an interesting conclusion: every non-zero matrix in C is composed of L representative vectors. Specifically, the first column vector of the matrix representation C^i is the representative vector of α^i, and its jth column vector, 1 ≤ j ≤ L, corresponds to the representative vector of α^{i+j−1}. For the case p = 2, even though the above theorem is essentially the same as Construction 4.1 and Lemma 4.2 in [13], its expression with the interplay of the generator representation allows us to further devise a lookup table to pre-store the matrix representation with a smaller size.
In this table, we store p^L representative vectors, i.e., a table of size L × p^L, and arrange them in the power order of α^i with 0 ≤ i ≤ p^L − 2. Note that the first column of the matrix C^i can be indexed by the vector v_i, the (i + 1)th column in this table, and the remaining columns of C^i can be obtained from the subsequent L − 1 column vectors based on Theorem 1. As a result, although this table only stores p^L vectors, it contains the whole matrix representation of GF(p^L) due to the inherent correlation among the C^i. The following Example 1 shows the explicit lookup table for GF(2^4).
Example 1.
Consider the field GF(2^4) and the primitive polynomial p(x) = 1 + x + x^4 over GF(2). The companion matrix C is written as follows:

$C = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}.$

Then, the lookup table to store the matrix representations C^i with 0 ≤ i ≤ 14 is shown in Figure 1. In this figure, the solid “window”, which currently represents the matrix C, can be slid to the right or left to generate C^i with different i; meanwhile, the dashed box shows the cyclic property based on the cyclic group {I_4 = C^15, C, C^2, …, C^14}.
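To make Example 1 executable, here is a small Python sketch (ours) of the sliding-window table: only the 15 representative vectors v_0, …, v_14 are stored, and C^i is read off as L = 4 consecutive columns with indices taken modulo 15, rather than keeping 15 full 4 × 4 matrices.

```python
import numpy as np

# Sketch (ours) of the Figure 1 lookup table for GF(2^4): store the
# representative vectors v_i column by column and slide an L-wide
# window to recover any C^i, per Theorem 1.
L, N = 4, 15
C = np.array([[0, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

table = np.zeros((L, N), dtype=int)
v = np.array([1, 0, 0, 0])            # v_0, the vector of alpha^0 = 1
for i in range(N):
    table[:, i] = v
    v = C @ v % 2                     # v_{i+1} = C v_i

def C_pow(i):
    """C^i = [v_i v_{i+1} v_{i+2} v_{i+3}], subscripts mod 2^L - 1."""
    return table[:, [(i + j) % N for j in range(L)]]
```

Sliding one step to the right multiplies by C; wrapping past column 14 realizes the cyclic property C^{15} = I_4.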
Recall that in the lookup table of the matrix representation adopted in the latest release of the ISA-L library [12], the matrix representation of every element in GF(p^L) needs to be stored, so a total of L^2 p^L p-ary elements need to be pre-stored. In comparison, only an L × p^L p-ary table needs to be stored in the new lookup table, so the table size is reduced by a factor of L. Moreover, Theorem 1 implies the following useful corollaries for the matrix representation C = {0, C^0, C^1, …, C^{p^L−2}} of GF(p^L).
Corollary 1.
Every vector in the vector space GF(p)^L occurs exactly L times as a column vector among the matrices of C.
Proof. 
As {I_L, C, C^2, …, C^{L−1}} forms a polynomial basis of GF(p^L) over GF(p), the representative vectors of the matrices in C are distinct. Consider the function f: {C^i} → {v_i} that maps each matrix to its first column. It can be easily checked that f is bijective, and v_i exactly corresponds to the jth column vector of C^{i−j+1} with 1 ≤ j ≤ L, so every non-zero vector occurs exactly L times. The zero vector of length L simply occurs L times in the L × L matrix 0. □
Corollary 2.
For every GF(p^L), regardless of the choice of the primitive polynomial p(x), the total number of zero entries in C remains unchanged as L^2 p^{L−1}.
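Both corollaries are easy to verify numerically; the following Python check (ours) enumerates C for GF(2^4) with p(x) = x^4 + x + 1, where each of the 16 vectors of GF(2)^4 must occur exactly L = 4 times as a column and the zero entries must total L^2 p^{L−1} = 128.

```python
import numpy as np
from collections import Counter

# Numerical check (ours) of Corollaries 1 and 2 over GF(2^4):
# build C = {0, C^0, ..., C^14} and count columns and zero entries.
L, N = 4, 15
C = np.array([[0, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

mats, P = [np.zeros((L, L), dtype=int)], np.eye(L, dtype=int)
for _ in range(N):
    mats.append(P)
    P = P @ C % 2

# Corollary 1: multiplicity of each vector of GF(2)^4 as a column of C
col_counts = Counter(tuple(Mx[:, j]) for Mx in mats for j in range(L))
# Corollary 2: total number of zero entries over all matrices of C
zero_entries = sum(int((Mx == 0).sum()) for Mx in mats)
```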
The above two corollaries will be adopted to further demonstrate the advantages of binary matrix representation in vector LNC with C .

4. Applications of Matrix Representation in Vector LNC

In this section, we focus on the study of vector LNC with binary matrices C and show the applications of matrix representation related to the aspects of random and deterministic coding.

4.1. Computational Complexity Comparison in Random LNC

Herein, the coding coefficients of random LNC are randomly selected from GF(2^L), which provides a distributed and asymptotically optimal approach for information transmission, especially in unreliable or topologically unknown networks, such as wireless broadcast networks [17] or ad hoc networks [18]. Recall that in the polynomial and generator representations, multiplication over GF(2^L) based on a lookup table requires two different types of calculation systems, so such a table may not be usable in resource-constrained edge devices. Therefore, under the same alphabet size 2^L, we first theoretically compare the random coding complexity between conventional LNC over {β = Σ_{l=0}^{L−1} v_l α^l} and vector LNC over C without a lookup table, in terms of the number of required binary operations.
To keep the same benchmark for complexity comparison, we adopt the following assumptions.
  • We assume that an all-1 binary information vector m is multiplied by each of the 2^L − 1 non-zero coding coefficients selected from {β = Σ_{l=0}^{L−1} v_l α^l} and from C, which simulates the encoding process. The complexity is the total number of binary operations that the 2^L − 1 multiplications take.
  • We shall ignore the complexity of a shifting or permutation operation on the binary vector m , which can be efficiently implemented.
  • We only consider the standard implementation of multiplication in GF(2^L) by polynomial multiplication modulo a primitive polynomial p(x) = a_0 + a_1x + ⋯ + a_{L−1}x^{L−1} + x^L with η non-zero a_i, 0 ≤ i ≤ L − 1, instead of other advanced techniques such as the FFT algorithm [19].
We first consider the encoding scheme with coefficients selected from {β = Σ_{l=0}^{L−1} v_l α^l}. Assume that α is a root of p(x); then every element β in GF(2^L) can be expressed as g(α), where g(x) is a polynomial over GF(2) with degree less than L. The all-1 binary information vector m can be expressed as α^{L−1} + α^{L−2} + ⋯ + α^2 + α + 1. We divide the whole encoding process into two parts: multiplication and addition. In the multiplication part, the complexity of shifting operations is ignored, and reducing one polynomial m α^i in m g(α) modulo p(x) requires i reduction steps and thus takes iη binary operations. Because every α^i, 1 ≤ i ≤ L − 1, occurs 2^{L−1} times among all g(α) in GF(2^L), it takes Σ_{1≤i≤L−1} iη × 2^{L−1} binary operations to compute the m α^i. In the addition part, it takes (j − 1)L binary operations to compute the additions among j binary vectors m α^i with distinct i, and the number of distinct g(α) with j non-zero terms is $\binom{L}{j}$ in GF(2^L). Therefore, traversing all g(α) takes an extra $\sum_{1 \le j \le L} \binom{L}{j} (j-1) L$ binary additions to compute m g(α). In total, the complexity of this scheme is as follows:
$\sum_{1 \le i \le L-1} i \left( \eta \times 2^{L-1} + L \times \binom{L}{i+1} \right).$   (15)
Next, we consider the encoding scheme with coefficients selected from C, whose encoding complexity depends on the total number of 1s in the C^i with 0 ≤ i ≤ 2^L − 2. In this framework, it is worthwhile to note that every C^i in C is full-rank, so a permutation matrix can be extracted from it. Since the complexity of permutation operations is ignored, based on Corollary 2, the complexity of the encoding process over C is as follows:
$L^2 \times 2^{L-1} - L \times (2^L - 1) = 2^{L-1} (L^2 - 2L) + L.$   (16)
For any primitive polynomial p(x), η ≥ 2. For 3 ≤ L ≤ 12, Table 2 lists the average number of binary operations per symbol in the two schemes. Specifically, every value calculated by Equations (15) and (16) is divided by the alphabet size 2^L, and we can find that in random coding, vector LNC via matrix representation can theoretically reduce the complexity of multiplications by at least half under the same alphabet size 2^L.
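The two counts can be reproduced directly; the Python snippet below (ours) evaluates Equations (15) and (16). For example, for L = 4 and the minimal η = 2, the per-symbol averages are 164/16 = 10.25 versus 68/16 = 4.25 binary operations.

```python
from math import comb

# Ours: evaluate the binary-operation counts of Eqs. (15) and (16).
def conv_ops(L, eta):
    """Eq. (15): conventional LNC over GF(2^L), eta non-zero a_i in p(x)."""
    return sum(i * (eta * 2 ** (L - 1) + L * comb(L, i + 1))
               for i in range(1, L))

def vec_ops(L):
    """Eq. (16): vector LNC over the matrix set C."""
    return 2 ** (L - 1) * (L ** 2 - 2 * L) + L

# Per-symbol averages (divide by the alphabet size 2^L), eta = 2
for L in (4, 8, 12):
    print(L, conv_ops(L, 2) / 2 ** L, vec_ops(L) / 2 ** L)
```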

4.2. The Special Choices of Binary p(x) and Sparse C^i

In addition to random coding, deterministic LNC, where reducing the computational complexity is of broader concern, can also deliberately adopt special coding operations that are efficiently implemented, such as circular shifts [5,6] or permutations [7]. In this subsection, in contrast to the random choice of coefficients, we carefully design the choices of the binary primitive polynomial p(x) and of sparse matrices C^i in C based on the properties unveiled in Section 3. We illustrate that the choice of p(x) influences the distribution of matrices C^i with different numbers of non-zero entries. Then, based on a proper p(x), an algorithm is proposed to obtain a subset of C that contains a series of relatively sparse matrices.
When p = 2, the entries in the representative vectors based on Equations (11) and (12), respectively, degenerate to

$p_0^{(1)} = a_{L-1}, \quad p_j^{(1)} = a_{j-1} + a_j a_{L-1} = a_{j-1} + a_j p_0^{(1)}, \quad 1 \le j \le L-1,$   (17)

and

$p_0^{(k)} = p_{L-1}^{(k-1)}, \quad p_j^{(k)} = p_{j-1}^{(k-1)} + a_j p_{L-1}^{(k-1)} = p_{j-1}^{(k-1)} + a_j p_0^{(k)}, \quad 1 \le j \le L-1,\ 2 \le k \le i-1,$   (18)

since a_0 must be 1 in a binary primitive p(x). Based on the above two equations, consider two adjacent representative vectors v_{k−1} and v_k. When the last entry p_{L−1}^{(k−1)} of v_{k−1} is equal to 0, the entries of v_k follow
$p_0^{(k)} = 0, \quad p_j^{(k)} = p_{j-1}^{(k-1)}, \quad 1 \le j \le L-1,$   (19)
which means that v_k can be generated by a downward circular shift of v_{k−1}. When the last entry p_{L−1}^{(k−1)} equals 1, the entries of v_k follow
$p_0^{(k)} = 1, \quad p_j^{(k)} = p_{j-1}^{(k-1)} + a_j, \quad 1 \le j \le L-1.$   (20)
Therefore, the difference in Hamming weight between v_{k−1} and v_k is no more than η − 1, where η represents the number of non-zero a_i, 0 ≤ i ≤ L − 1, in the primitive polynomial p(x).
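This weight relation can be checked quickly; the Python snippet below (ours) walks through the representative vectors of GF(2^4) with p(x) = x^4 + x + 1 (η = 2) and confirms that consecutive vectors differ in Hamming weight by at most η − 1 = 1.

```python
# Ours: check the weight relation for GF(2^4) with p(x) = x^4 + x + 1,
# where eta = 2: adjacent representative vectors v_{k-1}, v_k differ
# in Hamming weight by at most eta - 1 = 1.
P_MASK, TOP_BIT, N = 0b10011, 0b10000, 15

vs, b = [], 1
for _ in range(N):          # v_k as the bits of alpha^k in d^poly form
    vs.append(b)
    b <<= 1                 # multiply by alpha
    if b & TOP_BIT:         # reduce modulo p(x)
        b ^= P_MASK

weights = [bin(v).count("1") for v in vs]
diffs = [abs(weights[k] - weights[k - 1]) for k in range(1, N)]
```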
Note that for the matrix representation of every GF(2^L), the total number of 1s in C is always L^2 × 2^{L−1}, regardless of the choice of the binary p(x). However, the value of η influences the distribution of sparse matrices in C. Based on (17) and (18), we can intuitively deduce that with a smaller η, the sparse matrices in C are more concentrated. Since the identity matrix I_L, with L non-zero entries, is the sparsest full-rank matrix, we utilize Algorithm 1 to choose 2^{L−s} matrices in C that can be good candidates for the coding coefficients of a practical coding scheme over GF(2^L).
Algorithm 1 The choice of sparse matrices over GF(2^L)
  • Initialize S as an empty set of L × L binary matrices
  •      S ← 0
  •      S ← I_L
  •      generate the matrix C based on p(x)
  •      generate the matrix C^{−1} based on Equation (18)
  •      define the matrix Ĉ = C
  •      define the matrix Ĉ^{−1} = C^{−1}
  •      define an integer s < L that sets the required size 2^{L−s} of C_s
  • for  i = 1 : 2^{L−s−1} − 1
  •      S ← Ĉ
  •      S ← Ĉ^{−1}
  •      Ĉ = Ĉ × C
  •      Ĉ^{−1} = Ĉ^{−1} × C^{−1}
  • end
  • return S
In Algorithm 1, the multiplications by C or C^{−1} can be easily achieved by sliding the “window” right or left, respectively, as shown in Figure 1. Let C_s denote this subset of C; the 2^{L−s} matrices in C_s can be written as {0, I_L, C^i, C^{−i}} with 1 ≤ i ≤ 2^{L−s−1} − 1. Then, Table 3 lists the ratio of the total number of 1s between C_s and C with s = 1, 2. We can find that the 2^{L−s} matrices specially chosen by Algorithm 1 indeed contain fewer 1s than the other matrices in C.
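A direct Python transcription of Algorithm 1 (ours, for GF(2^4) with s = 1) is given below; for simplicity, C^{−1} is obtained here as C^{2^L − 2} rather than via Equation (18), and the resulting 2^{L−s} = 8 matrices are exactly {0, I, C^{±1}, C^{±2}, C^{±3}}.

```python
import numpy as np

# Ours: Algorithm 1 for GF(2^4), p(x) = x^4 + x + 1, with s = 1.
L, s = 4, 1
C = np.array([[0, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

# C^{-1} = C^{2^L - 2}, since C^{2^L - 1} = I.
Cinv = np.eye(L, dtype=int)
for _ in range(2 ** L - 2):
    Cinv = Cinv @ C % 2

S = [np.zeros((L, L), dtype=int), np.eye(L, dtype=int)]  # S <- 0, S <- I_L
F, B = C.copy(), Cinv.copy()                             # C_hat, C_hat^{-1}
for i in range(1, 2 ** (L - s - 1)):     # i = 1 .. 2^{L-s-1} - 1
    S.append(F.copy())                   # S <- C^i
    S.append(B.copy())                   # S <- C^{-i}
    F = F @ C % 2                        # slide window right
    B = B @ Cinv % 2                     # slide window left
```

For this field, the 8 chosen matrices contain 44 ones in total, i.e., 5.5 ones per matrix on average versus 8 on average over all of C, illustrating the sparsity bias of the selection.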
Moreover, in Figure 2, we numerically analyze the relationship between the number of 1s in each matrix and the corresponding number of matrices under the alphabet size 2^12. Among all candidate binary primitive polynomials, we choose four representative p(x) with η = 4, 6, 8, 10, respectively; the specific polynomials are as follows:
$\begin{aligned} p_1(x) &= x^{12} + x^6 + x^4 + x + 1 \\ p_2(x) &= x^{12} + x^7 + x^6 + x^5 + x^3 + x + 1 \\ p_3(x) &= x^{12} + x^8 + x^7 + x^6 + x^4 + x^3 + x^2 + x + 1 \\ p_4(x) &= x^{12} + x^{10} + x^9 + x^8 + x^7 + x^5 + x^4 + x^3 + x^2 + x + 1 \end{aligned}$
As all the matrices in C are full-rank, the value range of the x-axis should be [12, 132], and we restrict it to [40, 100] to highlight the distributions. The four curves illustrate that as η increases, the variance of the distribution of the number of 1s per matrix decreases; that is, the number of matrices with an average number of 1s (i.e., 70–80) increases, while the number of relatively sparse or dense matrices decreases. As a result, a smaller η of p(x) implies not only a more concentrated distribution but also more sparse matrices in C; we can then select the parameter s according to practical requirements and obtain C_s using Algorithm 1.

5. Conclusions

Compared with the classical result, the paraphrase of matrix representation in this paper focuses more on the inherent correlation among matrices, and a lookup table that pre-stores the matrix representation with a smaller size is devised. This work also identifies that the total number of non-zero entries in C is constant, which motivates us to demonstrate the advantages of binary matrix representation in vector LNC. Regarding the applications of matrix representation, we first theoretically demonstrate that vector LNC via matrix representation can reduce the coding complexity by at least half compared with conventional LNC over GF(2^L). Then, we illustrate the influence of η, i.e., the number of non-zero terms in p(x), on the number and distribution of sparse matrices in C, and propose an algorithm to obtain sparse matrices that can be good candidates for the coding coefficients of a practical vector LNC scheme.

Author Contributions

Methodology, H.T.; Software, S.J.; Writing—original draft, H.T.; Writing—review & editing, H.L.; Visualization, W.L.; Supervision, Q.S.; Funding acquisition, Q.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the National Natural Science Foundation of China under Grants U22A2005, 62101028 and 62271044, and by the Fundamental Research Funds for the Central Universities under Grant FRF-TP-22-041A1.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, S.-Y.R.; Yeung, R.W.; Cai, N. Linear network coding. IEEE Trans. Inf. Theory 2003, 49, 371–381. [Google Scholar] [CrossRef]
  2. Ebrahimi, J.B.; Fragouli, C. Algebraic algorithm for vector network coding. IEEE Trans. Inf. Theory 2011, 57, 996–1007. [Google Scholar] [CrossRef]
  3. Sun, Q.T.; Yang, X.; Long, K.; Yin, X.; Li, Z. On vector linear solvability of multicast networks. IEEE Trans. Commun. 2016, 64, 5096–5107. [Google Scholar] [CrossRef]
  4. Etzion, T.; Wachter-Zeh, A. Vector network coding based on subspace codes outperforms scalar linear network coding. IEEE Trans. Inf. Theory 2018, 64, 2460–2473. [Google Scholar] [CrossRef]
  5. Tang, H.; Sun, Q.T.; Li, Z.; Yang, X.; Long, K. Circular-shift linear network coding. IEEE Trans. Inf. Theory 2019, 65, 65–80. [Google Scholar] [CrossRef]
  6. Sun, Q.T.; Tang, H.; Li, Z.; Yang, X.; Long, K. Circular-shift linear network codes with arbitrary odd block lengths. IEEE Trans. Commun. 2019, 67, 2660–2672. [Google Scholar] [CrossRef]
  7. Tang, H.; Zhai, Z.; Sun, Q.T.; Yang, X. The multicast solvability of permutation linear network coding. IEEE Commun. Lett. 2023, 27, 105–109. [Google Scholar] [CrossRef]
  8. Wardlaw, W.P. Matrix representation of finite field. Math. Mag. 1994, 67, 289–293. [Google Scholar] [CrossRef]
  9. Longhair: O(N2) Cauchy Reed-Solomon Block Erasure Code for Small Data. 2021. Available online: https://github.com/catid/longhair (accessed on 1 July 2024).
  10. Plank, J.S.; Simmerman, S.; Schuman, C.D. Jerasure: A Library in c/c++ Facilitating Erasure Coding for Storage Applications, Version 1.2; Technical Report CS-08-627; University of Tennessee: Knoxville, TN, USA, 2008. [Google Scholar]
  11. Luo, J.; Shrestha, M.; Xu, L.; Plank, J.S. Efficient encoding schedules for XOR-based erasure codes. IEEE Trans. Comput. 2014, 63, 2259–2272. [Google Scholar] [CrossRef]
  12. Intel® Intelligent Storage Acceleration Library. 2024. Available online: https://github.com/intel/isa-l/tree/master/erasurecode (accessed on 1 June 2024).
  13. Blomer, J.; Kalfane, M.; Karp, R.; Karpinski, M.; Luby, M.; Zuckerman, D. An XOR-Based Erasure-Resilient Coding Scheme; Technical Report TR-95-048; University of California at Berkeley: Berkeley, CA, USA, 1995. [Google Scholar]
  14. Plank, J.S.; Xu, L. Optimizing Cauchy Reed-Solomon codes for fault-tolerant network storage applications. In Proceedings of the Fifth IEEE International Symposium on Network Computing and Applications, Cambridge, MA, USA, 24–26 July 2006; pp. 173–180. [Google Scholar]
  15. Zhou, T.; Tian, C. Fast erasure coding for data storage: A comprehensive study of the acceleration techniques. ACM Trans. Storage (TOS) 2020, 16, 1–24. [Google Scholar] [CrossRef]
  16. Lidl, R.; Niederreiter, H. Finite Fields, 2nd ed.; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
  17. Su, R.; Sun, Q.T.; Zhang, Z. Delay-complexity trade-off of random linear network coding in wireless broadcast. IEEE Trans. Commun. 2020, 68, 5606–5618. [Google Scholar] [CrossRef]
  18. Asterjadhi, A.; Fasolo, E.; Rossi, M.; Widmer, J.; Zorzi, M. Toward network coding-based protocols for data broadcasting in wireless ad hoc networks. IEEE Trans. Wirel. Commun. 2010, 9, 662–673. [Google Scholar] [CrossRef]
  19. Gao, S.; Mateer, T. Additive fast Fourier transforms over finite fields. IEEE Trans. Inf. Theory 2010, 56, 6265–6272. [Google Scholar] [CrossRef]
Figure 1. The lookup table to store the matrix representations C^i with 0 ≤ i ≤ 14 for the field GF( 2 4 ) and primitive polynomial p ( x ) = 1 + x + x 4 .
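A lookup table like the one in Figure 1 can be generated programmatically. The sketch below (an illustration under the stated polynomial, not the authors' code) builds each C^i column by column and checks two properties: the 15 matrices are pairwise distinct, and the total number of non-zero entries across the table equals L^2 · 2^(L-1) = 128, since for each column index j the map β ↦ β·x^j permutes the non-zero field elements.

```python
# Sketch: generate the lookup table of binary matrices C^i, 0 <= i <= 14,
# for GF(2^4) with primitive polynomial p(x) = 1 + x + x^4.

L, POLY = 4, 0b10011   # bit l of POLY = coefficient of x^l in p(x)

def times_x(col):
    """Multiply a field element (length-L bit vector over 1, x, ..., x^(L-1))
    by x modulo p(x)."""
    v = 0
    for l, bit in enumerate(col):
        v |= bit << l
    v <<= 1                # multiply by x
    if v & (1 << L):       # degree reached L: reduce modulo p(x)
        v ^= POLY
    return [(v >> l) & 1 for l in range(L)]

def matrix_of_power(i):
    """Binary matrix of alpha^i: column j holds the coordinates of alpha^i * x^j."""
    col = [1] + [0] * (L - 1)   # the element 1
    for _ in range(i):
        col = times_x(col)      # now alpha^i
    cols = []
    for _ in range(L):
        cols.append(col)
        col = times_x(col)      # next column: multiply by x
    return [[cols[j][r] for j in range(L)] for r in range(L)]

table = [matrix_of_power(i) for i in range(2 ** L - 1)]

# All 15 matrices are distinct, and the table contains exactly
# L^2 * 2^(L-1) = 128 non-zero entries in total.
assert len({tuple(map(tuple, M)) for M in table}) == 15
total_ones = sum(e for M in table for row in M for e in row)
assert total_ones == L * L * 2 ** (L - 1)
```

Storing only one row (or column) per power and regenerating the rest on the fly is what allows the smaller lookup table described in the paper.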
Figure 2. The distribution of sparse matrices in C with different η = 4 , 6 , 8 , 10 .
Table 1. The mapping between d_β^poly and d_β^gen for non-zero β in GF ( 2 4 ) with p ( x ) = x 4 + x + 1 .

d_β^gen   0  1  2  3  4  5  6   7   8  9   10  11  12  13  14
d_β^poly  1  2  4  8  3  6  12  11  5  10  7   14  15  13  9
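The d_β^poly row of Table 1 is simply the sequence of powers of α = x reduced modulo p(x). A short Python sketch (illustrative; elements are encoded as integers whose bit l is the coefficient of x^l) reproduces it:

```python
# Reproduce the d_beta_poly row of Table 1: the integer encodings of
# alpha^0, ..., alpha^14 in GF(2^4) with p(x) = x^4 + x + 1.

def powers_of_alpha(L=4, poly=0b10011, count=15):
    """alpha^0 .. alpha^(count-1) as integers; bit l = coefficient of x^l."""
    vals, v = [], 1
    for _ in range(count):
        vals.append(v)
        v <<= 1              # multiply by x
        if v & (1 << L):     # degree reached L: reduce modulo p(x)
            v ^= poly
    return vals

print(powers_of_alpha())
# -> [1, 2, 4, 8, 3, 6, 12, 11, 5, 10, 7, 14, 15, 13, 9]
```

Reading the table in the other direction (from d_β^poly to d_β^gen) is the discrete-logarithm lookup commonly used in software finite-field multiplication.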
Table 2. Average number of binary operations per symbol with parameter η = 2 .

L                        3      4      5      6      7      8      9      10     11     12
C                        1.88   4.25   7.66   12.09  17.55  24.03  31.52  40.01  49.51  60.01
∑_{l=0}^{L−1} v_l α^l    4.88   10.25  17.66  27.09  38.55  52.03  67.52  85.01  104.51 126.01
rate                     38.5%  41.5%  43.3%  44.6%  45.5%  46.2%  46.7%  47.1%  47.3%  47.6%
Table 3. Ratio of the total numbers of 1 between C_s and C .

L    p(x)                            η    s = 1     s = 2
3    x^3 + x + 1                     2    0.3056    0.0833
4    x^4 + x + 1                     2    0.3438    0.1094
5    x^5 + x^2 + 1                   2    0.3800    0.1200
6    x^6 + x + 1                     2    0.4410    0.1372
7    x^7 + x + 1                     2    0.4585    0.1987
8    x^8 + x^4 + x^3 + x^2 + 1       4    0.4635    0.2235
9    x^9 + x^4 + 1                   2    0.4777    0.2148
10   x^10 + x^3 + 1                  2    0.4950    0.2382
11   x^11 + x^2 + 1                  2    0.4898    0.2325
12   x^12 + x^6 + x^4 + x + 1        4    0.4977    0.2421
13   x^13 + x^4 + x^3 + x + 1        4    0.4906    0.2469
14   x^14 + x^5 + x^3 + x + 1        4    0.4975    0.2500
15   x^15 + x + 1                    2    0.4979    0.2462
16   x^16 + x^5 + x^3 + x^2 + 1      4    0.4978    0.2490

Share and Cite

MDPI and ACS Style

Tang, H.; Liu, H.; Jin, S.; Liu, W.; Sun, Q. On Matrix Representation of Extension Field GF(pL) and Its Application in Vector Linear Network Coding. Entropy 2024, 26, 822. https://doi.org/10.3390/e26100822

