Article

Erasure Recovery Matrices for Data Erasures and Rearrangements

1 School of Mathematics and Physics, Chengdu University of Technology, Chengdu 611731, China
2 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610059, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(7), 989; https://doi.org/10.3390/math12070989
Submission received: 28 February 2024 / Revised: 18 March 2024 / Accepted: 25 March 2024 / Published: 26 March 2024

Abstract

When studying signal reconstruction, frames are often selected in advance as encoding tools. In practical applications, however, this encoding frame may be attacked by intermediaries, and errors may arise. To address this problem, this paper analyzes erasure recovery matrices for data erasures and rearrangements. Unlike previous research, we first introduce a class of frames and their erasure recovery matrices $M$ such that $M_{I,\Lambda} = I_{m\times m}$, where $I_{m\times m}$ is an identity matrix. In this case, we need to invert neither the frame operator nor a submatrix of the erasure recovery matrix, which greatly simplifies reconstruction problems and calculations. We then propose three different algorithms for constructing such an erasure recovery matrix $M$ and the associated frame, each with its own advantages. Furthermore, we impose some restrictions on $M$ so that the constructed frame and erasure recovery matrix can recover coefficients from rearrangements. We prove that in some cases the above $M$ and frame can recover coefficients stably from $m$ rearrangements.

1. Introduction

Frames were first introduced by Duffin and Schaeffer in 1952 to deal with problems concerning nonharmonic Fourier series [1]. Specifically,
Definition 1
([1]). A sequence of elements $\{f_i\}_{i=1}^{N}$ in a Hilbert space $H$ is a frame for $H$ if there exist constants $A, B > 0$ such that
$$A\|f\|^{2} \le \sum_{i=1}^{N} |\langle f, f_i\rangle|^{2} \le B\|f\|^{2}, \quad \forall f \in H.$$
The numbers $A, B$ are called frame bounds. In particular, if $A = B$, the frame is called a tight frame for $H$; if $A = B = 1$, the tight frame is called a Parseval frame for $H$.
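For a finite frame, the optimal bounds $A, B$ are the extreme eigenvalues of the frame operator (defined in Section 2). The following minimal NumPy sketch, ours rather than anything from the references, checks Definition 1 numerically in the real case:

```python
import numpy as np

# A finite family {f_i} in R^n (the columns of F) is a frame iff it spans R^n;
# the optimal bounds A, B are the extreme eigenvalues of S = F F^T.
rng = np.random.default_rng(0)
n, N = 3, 5
F = rng.standard_normal((n, N))          # columns f_1, ..., f_N

eigs = np.linalg.eigvalsh(F @ F.T)       # eigenvalues of the frame operator
A, B = eigs[0], eigs[-1]

f = rng.standard_normal(n)
total = np.sum((F.T @ f) ** 2)           # sum_i |<f, f_i>|^2
assert A * (f @ f) - 1e-9 <= total <= B * (f @ f) + 1e-9
print(f"frame bounds: A = {A:.3f}, B = {B:.3f}")
```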
Due to the redundancy of the frame [2], it can be used in many fields as a generalization of bases in Hilbert spaces, such as signal and image processing [3], quantization [4], capacity of transmission channels [5], coding theory [6], and data transmission technology [7]. In particular, more and more scholars are applying frames to signal erasures and reconstruction [8].
More specifically, some scholars recover the lost data by inverting the frame operator of the subfamily whose indices correspond to the non-erased frame coefficients [9]. However, this method is slow because it requires a matrix inversion. Therefore, it has been proposed to use the dual frame, defined below, to recover the lost data [10].
Definition 2
([11]). For a frame $F = \{f_i\}_{i=1}^{N}$ for $H$, if there is a sequence $G = \{g_i\}_{i=1}^{N}$ such that
$$f = \sum_{i=1}^{N} \langle f, f_i\rangle g_i$$
for any $f \in H$, then $G = \{g_i\}_{i=1}^{N}$ is a dual frame of $F = \{f_i\}_{i=1}^{N}$.
Then any element $f$ in a separable Hilbert space $H$ can be recovered by the reconstruction formula given by a frame and a dual frame; that is, there is a dual frame $G = \{g_i\}_{i=1}^{N}$ such that $f = \sum_{i=1}^{N} \langle f, f_i\rangle g_i$ [12]. When part of the frame coefficients is erased, the remaining frame coefficients can be used for reconstruction, and one then seeks optimal frames and dual frames that minimize the reconstruction error. For example, in [13], the authors found the optimal dual frames that minimize the reconstruction error for 1-erasure and 2-erasures. Many other scholars have studied this issue in recent years; see, e.g., [14,15,16].
However, in practical applications, the pre-selected encoding frame may be attacked by intermediaries and generate errors, and it is then difficult to recover a large amount of lost data with the above methods. In [17], the authors proposed using an erasure recovery matrix $M$ to recover the erased data, which can handle a large number of data erasures and also protects the encoding frame. Moreover, two ways to recover the erased data with the erasure recovery matrix $M$ were proposed. One is
$$c_\Lambda = -\big(M_\Lambda^{*} M_\Lambda\big)^{-1} M_\Lambda^{*} M_{\Lambda^c} c_{\Lambda^c}, \qquad (1)$$
where $\Lambda$ is the index set of the erased coefficients, and $M_\Lambda$ denotes the minor of $M$ formed by the columns indexed by $\Lambda$. The other is
$$c_\Lambda = -M_{I,\Lambda}^{-1} M_{I,\Lambda^c} c_{\Lambda^c}, \qquad (2)$$
where $M_{I,\Lambda}$ denotes the minor of $M$ with rows indexed by $I$ and columns indexed by $\Lambda$, and $I$ is a subset of $\{1, \ldots, k\}$.
The authors of [17] mainly discussed method (1) and did not pursue method (2), because finding a suitable $I$ that makes $M_{I,\Lambda}$ invertible takes a certain amount of computation. Motivated by [17], this paper solves the problem of finding such an $I$. First of all, we discuss a special frame and its erasure recovery matrix satisfying $M_{I,\Lambda} = I_{m\times m}$, where $I_{m\times m}$ is an identity matrix. In this case, the erased data can be recovered easily: we just need
$$c_\Lambda = -M_{I,\Lambda^c} c_{\Lambda^c}.$$
Obviously, our method does not require a matrix inversion, which greatly simplifies reconstruction problems and calculations. Furthermore, we discuss the construction of the above frame and the erasure recovery matrix $M$, proposing three different construction algorithms, each with its own advantages. Next, we prove that the frame and erasure recovery matrix $M$ we construct can recover the data when data at known locations are erased. Furthermore, we impose some restrictions on $M$ so that the constructed frame and erasure recovery matrix can recover coefficients from $m$ rearrangements, and we give a construction algorithm for such an erasure recovery matrix and frame. Finally, we prove that in some cases the above $M$ and frame can recover coefficients stably from $m$ rearrangements.
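To make the recovery route concrete, here is a small NumPy sketch of method (1); it is our illustration, not code from [17], and it works in the real case with the block matrix $M = [I, A]$ of the kind used throughout this paper. The least-squares solve realizes $(M_\Lambda^* M_\Lambda)^{-1} M_\Lambda^*$ applied to the right-hand side; the function name `recover_erased` is ours.

```python
import numpy as np

def recover_erased(M, c_received, erased):
    """Recover erased coefficients from M c = 0 via formula (1):
    M_L c_L = -M_{L^c} c_{L^c}, solved in the least-squares sense."""
    N = M.shape[1]
    kept = np.setdiff1d(np.arange(N), erased)
    rhs = -M[:, kept] @ c_received[kept]
    c_L, *_ = np.linalg.lstsq(M[:, erased], rhs, rcond=None)
    c = c_received.copy()
    c[erased] = c_L
    return c

# Toy setup: M = [I | A] annihilates exactly those coefficient vectors
# whose first m entries are determined by the remaining ones.
rng = np.random.default_rng(1)
m, n = 2, 4
A = rng.standard_normal((m, n))
M = np.hstack([np.eye(m), A])
c_rest = rng.standard_normal(n)
c_true = np.concatenate([-A @ c_rest, c_rest])    # M @ c_true == 0

c_received = c_true.copy()
c_received[:m] = 0.0                              # first m entries erased
c_rec = recover_erased(M, c_received, erased=np.arange(m))
print(np.allclose(c_rec, c_true))                 # True
```

For the frames constructed below, $M_\Lambda$ is the identity block, so the solve collapses to the inversion-free formula above.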

2. Notation, Terminology and Data Erasures

In this section, we recall some notation, terminology, definitions, and properties of frame theory that we use throughout the paper.
Firstly, we introduce the following operators, which often appear throughout this paper.
Definition 3
([18]). Let $F = \{f_i\}_{i=1}^{N}$ be a Bessel sequence for $H$.
(I) The analysis operator of $F$ is defined by
$$T: H \to \ell^{2}, \quad Tf = \{\langle f, f_i\rangle\}_{i=1}^{N}.$$
(II) The synthesis operator of $F$ is defined by
$$T^{*}: \ell^{2} \to H, \quad T^{*}\{c_i\}_{i=1}^{N} = \sum_{i=1}^{N} c_i f_i.$$
(III) The frame operator of $F$ is defined by
$$S: H \to H, \quad Sf = T^{*}Tf = \sum_{i=1}^{N} \langle f, f_i\rangle f_i.$$
In applications, we often use a frame $F = \{f_i\}_{i=1}^{N}$ to encode data $f$ and obtain the frame coefficients $c_j = \langle f, f_j\rangle$. Then we use a dual frame $G = \{g_i\}_{i=1}^{N}$, known as the decoding frame, to recover $f$:
$$f = \sum_{i=1}^{N} c_i g_i.$$
However, erasures and rearrangements can occur, so we may receive only part of $\{c_j\}_{j=1}^{N}$ or a rearrangement of $\{c_j\}_{j=1}^{N}$. Hence, in this paper, we use erasure recovery matrices (introduced in [19]) to recover the data.
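As an illustration of these operators and of decoding with a dual frame, the following sketch (ours, real case) builds $T$, $T^{*}$, and $S$ as matrices and reconstructs $f$ with the canonical dual frame $\{S^{-1}f_i\}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 3, 6
F = rng.standard_normal((n, N))        # frame vectors f_i as columns

T = F.T                                # analysis operator: f -> {<f, f_i>}
T_star = F                             # synthesis operator: {c_i} -> sum c_i f_i
S = T_star @ T                         # frame operator S = T* T

f = rng.standard_normal(n)
c = T @ f                              # frame coefficients c_i = <f, f_i>
# Decoding with the canonical dual frame g_i = S^{-1} f_i:
f_rec = np.linalg.solve(S, T_star @ c)
print(np.allclose(f, f_rec))           # True: f = sum c_i g_i
```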
Table 1 summarizes some notations and terminologies that appear in the following text.

3. Recovery Data from M-Erasures

In [17], the authors proposed using the erasure recovery matrix $M$ to recover lost data and gave two recovery methods, (1) and (2). However, they pointed out that it is difficult to find a suitable set $I$ that makes the matrix $M_{I,\Lambda}$ invertible, where $M_{I,\Lambda}$ denotes the minor of $M$ with rows indexed by $I$ and columns indexed by $\Lambda$, and $I$ is a subset of $\{1, \ldots, k\}$. Hence, in this section, we construct a special frame and its erasure recovery matrix so that $M_{I,\Lambda} = I_{m\times m}$, where $I_{m\times m}$ is an identity matrix. In this case, $M_{I,\Lambda}$ is trivially invertible, and we can easily recover the lost data.

3.1. Some Properties of the Erasure Recovery Matrix

Firstly, we provide the definition of the erasure recovery matrix.
Definition 4
([17]). Let $\{f_i\}_{i=1}^{N}$ be a frame for an $n$-dimensional Hilbert space $H$ and $k \ge m$. An $m$-erasure recovery matrix is a $k \times N$ matrix $M$ with spark $m+1$ satisfying $Mc = 0$ for any vector $c \in T(H)$, where $T$ denotes the analysis operator of the frame $F$, and the spark of a collection of vectors $\{f_i\}_{i=1}^{N}$ is the size of the smallest linearly dependent subset of $\{f_i\}_{i=1}^{N}$.
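The spark can be computed by brute force over column subsets, which is feasible for the small examples in this paper. The helper below is our hypothetical illustration of the definition, not part of the paper's construction:

```python
import numpy as np
from itertools import combinations

def spark(M, tol=1e-10):
    """Size of the smallest linearly dependent set of columns of M.
    Returns M.shape[1] + 1 if every column subset is independent."""
    k, N = M.shape
    for size in range(1, min(k, N) + 2):
        for cols in combinations(range(N), size):
            if np.linalg.matrix_rank(M[:, cols], tol=tol) < size:
                return size
    return N + 1

# For M = [I | A] with generic A, every m columns are independent while
# any m+1 columns of an m-row matrix are dependent, so spark(M) = m + 1.
rng = np.random.default_rng(3)
m, n = 2, 3
M = np.hstack([np.eye(m), rng.standard_normal((m, n))])
print(spark(M))   # 3 = m + 1
```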
In the following, we discuss how to recover erased frame coefficients at known locations. We use $\Lambda$ to denote the index set of the erased coefficients, where $|\Lambda| = m$. We then introduce a special frame $\{f_i\}_{i=1}^{N}$: for any $1 \le i \le m$, there is a sequence $\{a_{i,j}\}_{j=m+1}^{N}$ of complex numbers such that
$$f_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} f_j,$$
where none of the coefficients in these equations is equal to zero. We also recall that the excess of a frame $\{f_i\}_{i=1}^{N}$ of $H$ is the greatest integer $m$ such that $m$ elements can be removed from the frame and the remaining set is still a frame for $H$ [20].
Next, we consider whether the above frame $\{f_i\}_{i=1}^{N}$ remains a frame whenever any $m$ elements are removed.
Proposition 1.
$\{f_i\}_{i=1}^{N}$ remains a frame whenever any $m$ elements are removed if $\{f_i\}_{i=1}^{N}$ is a frame for $H$ with excess $m$.
Proof. 
First of all, if the first $m$ elements are removed, the conclusion is immediate. More precisely, in finite-dimensional Hilbert spaces, frames are exactly the spanning families of vectors in the space, and consequently frames with excess $m$ are robust under removal of the first $m$ elements.
Next, we consider the case where arbitrary $m$ elements are removed. Since the first $m$ elements can be expressed linearly in terms of the remaining elements with all coefficients nonzero, any $m$ removed elements can be expressed linearly in terms of the remaining $N - m$ elements. Hence $\{f_i\}_{i=1}^{N}$ is still a frame whenever any $m$ elements are removed.   □
Note that if the excess of $\{f_i\}_{i=1}^{N}$ is the greatest integer $m$, then for any $1 \le i \le m$,
$$f_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} f_j,$$
and we let
$$M = \begin{pmatrix} 1 & 0 & \cdots & 0 & a_{1,m+1} & \cdots & a_{1,N} \\ 0 & 1 & \cdots & 0 & a_{2,m+1} & \cdots & a_{2,N} \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & a_{m,m+1} & \cdots & a_{m,N} \end{pmatrix}. \qquad (3)$$
If $M$ is an $m$-erasure recovery matrix, then $M_{I,\Lambda} = I_{m\times m}$ is invertible. Hence, we discuss whether $M$ is an $m$-erasure recovery matrix. Before that, we introduce the following lemma and proposition.
Lemma 1
([21]). Let $\{\varphi_i\}_{1\le i\le N}$ be a frame for $H$. Suppose that $m \ge 1$ is an integer and that $M$ is a matrix such that $N(M) = TH$. Then the following assertions are equivalent.
(i) Every $m$ columns of $M$ are linearly independent.
(ii) $\{\varphi_i\}_{1\le i\le N}$ remains a frame whenever any $m$ elements are removed.
Proposition 2.
Let $H$ be an $n$-dimensional Hilbert space, $M$ the matrix defined in (3), and $T$ the analysis operator of the frame $\{f_i\}_{i=1}^{N}$. If $N - m = n$, then $N(M) = TH$ and $M$ is an $m$-erasure recovery matrix.
Proof. 
On the one hand, for any $c = \{c_i\}_{1\le i\le N} \in TH$, there is an $f \in H$ such that $\langle f, f_i\rangle = c_i$ for all $1 \le i \le N$.
Furthermore,
$$Mc = \begin{pmatrix} 1 & 0 & \cdots & 0 & a_{1,m+1} & \cdots & a_{1,N} \\ 0 & 1 & \cdots & 0 & a_{2,m+1} & \cdots & a_{2,N} \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & a_{m,m+1} & \cdots & a_{m,N} \end{pmatrix} \begin{pmatrix} \langle f, f_1\rangle \\ \langle f, f_2\rangle \\ \vdots \\ \langle f, f_N\rangle \end{pmatrix} = \begin{pmatrix} \langle f, f_1\rangle + \sum_{j=m+1}^{N} \langle f, a_{1,j}^{*} f_j\rangle \\ \langle f, f_2\rangle + \sum_{j=m+1}^{N} \langle f, a_{2,j}^{*} f_j\rangle \\ \vdots \\ \langle f, f_m\rangle + \sum_{j=m+1}^{N} \langle f, a_{m,j}^{*} f_j\rangle \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$
That is, $TH \subseteq N(M)$.
On the other hand, since $\dim(N(M)) = N - m = n = \dim(TH)$, we obtain $N(M) = TH$.
So far, we have $N(M) = TH$, and $\{f_i\}_{i=1}^{N}$ remains a frame whenever any $m$ elements are removed. According to Lemma 1, every $m$ columns of $M$ are linearly independent. Obviously, the first $m+1$ columns of $M$ are linearly dependent. Hence $\mathrm{spark}(M) = m+1$, and the matrix $M$ is an $m$-erasure recovery matrix.   □
In this case, we let $I = \{1, 2, \ldots, m\} = \Lambda$ so that $M_{I,\Lambda} = I_{m\times m}$. The following Example 1 shows that we can use the $m$-erasure recovery matrix $M$ to recover the erased data if $N - m = n$.
Example 1.
Assume that the first $m$ data are lost. Then we can construct a frame $\{f_i\}_{i=1}^{N}$ for $H$ with excess $m$ (the construction of this kind of frame is discussed later) such that for any $1 \le i \le m$ there is a sequence $\{a_{i,j}\}_{j=m+1}^{N}$ of complex numbers with
$$f_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} f_j,$$
where none of the coefficients in these equations is equal to zero.
Hence its erasure recovery matrix is
$$M = \begin{pmatrix} 1 & 0 & \cdots & 0 & a_{1,m+1} & \cdots & a_{1,N} \\ 0 & 1 & \cdots & 0 & a_{2,m+1} & \cdots & a_{2,N} \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & a_{m,m+1} & \cdots & a_{m,N} \end{pmatrix}.$$
Then for any $f \in H$, let $c = (c_i)_{i=1}^{N} \in TH$, where $c_i = \langle f, f_i\rangle$, so that
$$Mc = 0.$$
Since the first $m$ data are erased, we obtain
$$M_\Lambda c_\Lambda + M_{\Lambda^c} c_{\Lambda^c} = 0,$$
where $M_\Lambda$ is the matrix formed by the first $m$ columns of $M$ and $\Lambda = \{1, 2, \ldots, m\}$.
Obviously, we have $M_\Lambda = I_{m\times m}$. Hence
$$c_\Lambda + M_{\Lambda^c} c_{\Lambda^c} = 0,$$
and
$$c_\Lambda = -M_{\Lambda^c} c_{\Lambda^c}.$$
That is to say, we can easily use the remaining data to recover the erased data.
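The following NumPy sketch plays through Example 1 in the real case (our illustration; conjugates are omitted, so $a_{i,j}^{*} = a_{i,j}$):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 2, 4
N = m + n                                 # N - m = n, as in Proposition 2

# Frame vectors f_{m+1},...,f_N spanning R^n, plus f_i = -sum_j a_{i,j} f_j.
F_rest = rng.standard_normal((n, n))
A = rng.standard_normal((m, n))           # the a_{i,j}, nonzero almost surely
F = np.hstack([-F_rest @ A.T, F_rest])    # columns f_1, ..., f_N

M = np.hstack([np.eye(m), A])             # M c = 0 on all coefficient vectors
f = rng.standard_normal(n)
c = F.T @ f                               # frame coefficients <f, f_i>
assert np.allclose(M @ c, 0)

# First m coefficients erased: recover them without any matrix inversion.
c_rec = -M[:, m:] @ c[m:]                 # c_Lambda = -M_{Lambda^c} c_{Lambda^c}
print(np.allclose(c_rec, c[:m]))          # True
```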

3.2. Algorithm Construction

In what follows, we discuss the construction of the above frame $\{f_i\}_{i=1}^{N}$ and the $m$-erasure recovery matrix $M$ when data at known locations are erased. We assume that the first $m$ data are erased; the case of any other $m$ known locations is analogous (Algorithm 1; a numerical sketch follows the steps).
Algorithm 1 The construction of the m-erasure recovery matrix M when data at a known location are erased
  • Step 1. Generate an $m \times (N-m)$ matrix $M_0$ whose entries are drawn independently from the standard normal distribution, and let
    $$M_1 = \tfrac{1}{m} M_0 = [h_{m+1}, \ldots, h_N],$$
    where $h_j = (a_{i,j})_{i=1}^{m}$ for all $j = m+1, \ldots, N$, and $a_{i,j} \ne 0$ for all $i = 1, \ldots, m$, $j = m+1, \ldots, N$.
  • Step 2. Let $P$ be the orthogonal projection onto the range space of the analysis operator of $\{h_i\}_{i=m+1}^{N}$, and
    $$\tilde{g}_i = P^{\perp} e_i = (I_{N-m} - P) e_i$$
    for $i = m+1, \ldots, N$, where $\{e_i\}_{i=m+1}^{N}$ is a standard orthonormal basis for $H_{N-m}$.
  • Step 3. Let
    $$\tilde{g}_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} \tilde{g}_j$$
    for all $i = 1, \ldots, m$.
  • Step 4. Generate an $n \times (N-m)$ matrix $T$ whose entries are drawn independently from the standard normal distribution.
  • Step 5. Let $g_i = T \tilde{g}_i$ for all $i = 1, \ldots, N$, and $M = [I_{m\times m}, M_1] = [h_1, \ldots, h_N]$.
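The sketch below follows the steps of Algorithm 1 literally in the real case; it is our illustration, not the authors' code, with the Gram-Schmidt-style projection realized by a QR factorization:

```python
import numpy as np

def algorithm1(n, m, N, seed=0):
    """Sketch of Algorithm 1 (real case): frame vectors g_i (columns) and M.
    Assumes N - m = n, as required by Proposition 3."""
    assert N - m == n
    rng = np.random.default_rng(seed)

    # Step 1: random coefficient block M1 = (1/m) M0, columns h_j in R^m.
    M1 = rng.standard_normal((m, N - m)) / m

    # Step 2: P projects onto the range of the analysis operator of {h_j},
    # i.e. the column space of M1^T in R^{N-m}; take g~_j = (I - P) e_j.
    Qb, _ = np.linalg.qr(M1.T)                    # orthonormal basis of range
    P = Qb @ Qb.T
    G_rest = np.eye(N - m) - P                    # column j is g~_{m+j}

    # Step 3: g~_i = -sum_j a_{i,j} g~_j for the first m vectors.
    G_first = -G_rest @ M1.T                      # column i is g~_i, i <= m

    # Steps 4-5: push everything into R^n with a random T and set M.
    T = rng.standard_normal((n, N - m))
    G = T @ np.hstack([G_first, G_rest])          # g_i = T g~_i
    M = np.hstack([np.eye(m), M1])
    return G, M

G, M = algorithm1(n=4, m=2, N=6)
f = np.random.default_rng(1).standard_normal(4)
print(np.allclose(M @ (G.T @ f), 0))              # M annihilates coefficients
```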
Proposition 3.
The matrix $M$ in Algorithm 1 is an $m$-erasure recovery matrix of $\{g_i\}_{i=1}^{N}$ if $N - m = n$.
Proof. 
First of all, we prove that $\{g_i\}_{i=1}^{N}$ is a frame for $H$. Since $\tilde{g}_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} \tilde{g}_j$ for all $i = 1, \ldots, m$, we have
$$\sum_{i=1}^{N} |\langle f, \tilde{g}_i\rangle|^{2} = \sum_{i=1}^{m} \Big|\Big\langle f, \sum_{j=m+1}^{N} a_{i,j}^{*} \tilde{g}_j\Big\rangle\Big|^{2} + \sum_{i=m+1}^{N} |\langle f, \tilde{g}_i\rangle|^{2} \le |a|^{2} \sum_{i=1}^{m} \Big(\sum_{j=m+1}^{N} |\langle f, \tilde{g}_j\rangle|\Big)^{2} + \sum_{i=m+1}^{N} |\langle f, \tilde{g}_i\rangle|^{2} \le |a|^{2} (N-m) \sum_{i=1}^{m} \sum_{j=m+1}^{N} |\langle f, \tilde{g}_j\rangle|^{2} + \sum_{i=m+1}^{N} |\langle f, \tilde{g}_i\rangle|^{2} = \big(|a|^{2} m (N-m) + 1\big) \sum_{i=m+1}^{N} |\langle f, \tilde{g}_i\rangle|^{2}, \quad f \in H,$$
where $|a| = \max\{|a_{i,j}| : 1 \le i \le m,\ m+1 \le j \le N\}$.
And $\{\tilde{g}_i\}_{i=m+1}^{N}$ is a frame for $H_{N-m}$, since $\{e_i\}_{i=m+1}^{N}$ is a standard orthonormal basis for $H_{N-m}$. Thus
$$\sum_{i=1}^{N} |\langle f, \tilde{g}_i\rangle|^{2} \le \big(|a|^{2} m (N-m) + 1\big) B \|f\|^{2},$$
where $B$ is the upper frame bound of $\{\tilde{g}_i\}_{i=m+1}^{N}$.
And
$$A \|f\|^{2} \le \sum_{i=m+1}^{N} |\langle f, \tilde{g}_i\rangle|^{2} \le \sum_{i=1}^{N} |\langle f, \tilde{g}_i\rangle|^{2},$$
where $A$ is the lower frame bound of $\{\tilde{g}_i\}_{i=m+1}^{N}$.
Thus $\{\tilde{g}_i\}_{i=1}^{N}$ is a frame. Since $T$ is surjective, $\{g_i\}_{i=1}^{N}$ is a frame for $H$.
Next, we prove that $M$ is an $m$-erasure recovery matrix of $\{g_i\}_{i=1}^{N}$.
Since
$$\sum_{i=1}^{N} g_i \otimes h_i = \sum_{i=1}^{m} g_i \otimes e_i + \sum_{i=m+1}^{N} g_i \otimes h_i = -\sum_{i=1}^{m} \sum_{j=m+1}^{N} a_{i,j}^{*} (T\tilde{g}_j) \otimes e_i + \sum_{i=m+1}^{N} (T\tilde{g}_i) \otimes h_i = -\sum_{j=m+1}^{N} (T\tilde{g}_j) \otimes \Big(\sum_{i=1}^{m} a_{i,j} e_i\Big) + \sum_{i=m+1}^{N} (T\tilde{g}_i) \otimes h_i = -\sum_{j=m+1}^{N} (T\tilde{g}_j) \otimes h_j + \sum_{i=m+1}^{N} (T\tilde{g}_i) \otimes h_i = 0,$$
we have $N(M) = TH$. Moreover, according to the structure of the frame $\{g_i\}_{i=1}^{N}$, we obtain that $\{g_i\}_{i=1}^{N}$ remains a frame whenever any $m$ elements are removed.
By Lemma 1, every $m$ column vectors of $M$ are linearly independent. Hence $M$ is an $m$-erasure recovery matrix of $\{g_i\}_{i=1}^{N}$.   □
As we know, when the frame used for coding is a Parseval frame, better results are often obtained. Therefore, we propose a construction method for a Parseval frame and its $m$-erasure recovery matrix (Algorithm 2; a numerical sketch follows the steps).
Algorithm 2 The construction of the m-erasure recovery matrix for a Parseval frame
  • Step 1. Generate an $m \times (N-m)$ matrix $M_0$ whose entries are drawn independently from the standard normal distribution, and let
    $$M_1 = \tfrac{1}{m} M_0 = [h_{m+1}, \ldots, h_N],$$
    where $h_j = (a_{i,j})_{i=1}^{m}$ for all $j = m+1, \ldots, N$, and $a_{i,j} \ne 0$ for all $i = 1, \ldots, m$, $j = m+1, \ldots, N$.
  • Step 2. Let $A$ be an $(N-m) \times n$ matrix whose $n$ columns are selected independently according to the standard normal distribution, and write
    $$A = \begin{pmatrix} \tilde{g}_{m+1} \\ \tilde{g}_{m+2} \\ \vdots \\ \tilde{g}_N \end{pmatrix}.$$
  • Step 3. Let
    $$\tilde{g}_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} \tilde{g}_j$$
    for all $i = 1, \ldots, m$. Then let $Q$ be the $N \times n$ matrix whose first $m$ rows are $\tilde{g}_1, \ldots, \tilde{g}_m$ and whose remaining rows are the rows of $A$:
    $$Q = (\tilde{g}_1, \tilde{g}_2, \ldots, \tilde{g}_m, \tilde{g}_{m+1}, \tilde{g}_{m+2}, \ldots, \tilde{g}_N)^{T}.$$
  • Step 4. Let $Q'$ be the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the columns of $Q$.
  • Step 5. Let $G = [g_1, \ldots, g_N]$ be the $n \times N$ matrix whose rows are the columns of $Q'$.
  • Step 6. Let $M = [I_{m\times m}, M_1] = [h_1, \ldots, h_N]$.
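A sketch of Algorithm 2 in the real case (our illustration; a QR factorization plays the role of Gram-Schmidt on the columns of $Q$):

```python
import numpy as np

def algorithm2(n, m, N, seed=0):
    """Sketch of Algorithm 2 (real case): Parseval frame + erasure matrix M.
    Assumes N - m = n, as in Proposition 4."""
    assert N - m == n
    rng = np.random.default_rng(seed)

    # Step 1: M1 = (1/m) M0 with standard normal entries.
    M1 = rng.standard_normal((m, N - m)) / m

    # Steps 2-3: rows g~_{m+1},...,g~_N at random, then
    # g~_i = -sum_j a_{i,j} g~_j for the first m rows.
    A = rng.standard_normal((N - m, n))
    Q = np.vstack([-M1 @ A, A])                   # N x n

    # Step 4: orthonormalize the columns of Q.
    Qp, _ = np.linalg.qr(Q)                       # N x n, orthonormal columns

    # Steps 5-6: the frame vectors are the rows of Q'; M = [I, M1].
    G = Qp.T                                      # n x N, G G^T = I_n
    M = np.hstack([np.eye(m), M1])
    return G, M

G, M = algorithm2(n=4, m=2, N=6)
print(np.allclose(G @ G.T, np.eye(4)))            # Parseval: G G* = I_n
print(np.allclose(M @ G.T, 0))                    # M G* = 0
```

The column operations of the orthonormalization preserve the linear relations among the rows of $Q$, which is why $MG^{*} = 0$ survives Step 4.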
Proposition 4.
The matrix $M$ in Algorithm 2 is an $m$-erasure recovery matrix of the Parseval frame $\{g_i\}_{i=1}^{N}$ if $Q$ has full rank and $N - m = n$.
Proof. 
First of all, we prove that $\{g_i\}_{i=1}^{N}$ is a Parseval frame.
According to the structure of $G$, we have
$$GG^{*} = I_n.$$
Since $Q$ has full rank, $\{g_i\}_{i=1}^{N}$ is a Parseval frame.
Since
$$\tilde{g}_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} \tilde{g}_j$$
for all $i = 1, \ldots, m$, and $Q'$ is the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the columns of $Q$, we get
$$g_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} g_j$$
for all $i = 1, \ldots, m$.
Hence
$$MG^{*} = [I_{m\times m}, M_1] [g_1, \ldots, g_N]^{*} = \begin{pmatrix} g_1^{*} + \sum_{j=m+1}^{N} a_{1,j} g_j^{*} \\ g_2^{*} + \sum_{j=m+1}^{N} a_{2,j} g_j^{*} \\ \vdots \\ g_m^{*} + \sum_{j=m+1}^{N} a_{m,j} g_j^{*} \end{pmatrix} = 0,$$
and $N(M) = TH$.
According to Lemma 1, $M$ is an $m$-erasure recovery matrix of $\{g_i\}_{i=1}^{N}$.   □
Similarly to Algorithm 2, we can obtain a construction method for an ordinary frame, instead of a Parseval frame, and its $m$-erasure recovery matrix (Algorithm 3; a numerical sketch follows the steps).
Algorithm 3 The construction of the m-erasure recovery matrix for an ordinary frame
  • Step 1. Generate an $m \times (N-m)$ matrix $M_0$ whose entries are drawn independently from the standard normal distribution, and let
    $$M_1 = \tfrac{1}{m} M_0 = [h_{m+1}, \ldots, h_N],$$
    where $h_j = (a_{i,j})_{i=1}^{m}$ for all $j = m+1, \ldots, N$, and $a_{i,j} \ne 0$ for all $i = 1, \ldots, m$, $j = m+1, \ldots, N$.
  • Step 2. Let $A$ be an $(N-m) \times 2n$ matrix whose $2n$ columns are selected independently according to the standard normal distribution, and write
    $$A = \begin{pmatrix} \tilde{g}_{m+1} \\ \tilde{g}_{m+2} \\ \vdots \\ \tilde{g}_N \end{pmatrix}.$$
  • Step 3. Let
    $$\tilde{g}_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} \tilde{g}_j$$
    for all $i = 1, \ldots, m$. Then let $Q$ be the $N \times 2n$ matrix whose first $m$ rows are $\tilde{g}_1, \ldots, \tilde{g}_m$ and whose remaining rows are the rows of $A$:
    $$Q = (\tilde{g}_1, \tilde{g}_2, \ldots, \tilde{g}_m, \tilde{g}_{m+1}, \tilde{g}_{m+2}, \ldots, \tilde{g}_N)^{T}.$$
  • Step 4. Let $Q'$ be the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the columns of $Q$.
  • Step 5. Let $G' = [g'_1, \ldots, g'_N]$ be the $2n \times N$ matrix whose rows are the columns of $Q'$.
  • Step 6. Let $F$ be the $n \times N$ matrix formed by the first $n$ rows of $G'$ and $K$ the $n \times N$ matrix formed by the next $n$ rows of $G'$.
  • Step 7. Let $G = F + K = [g_1, \ldots, g_N]$.
  • Step 8. Let $M = [I_{m\times m}, M_1] = [h_1, \ldots, h_N]$.
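A sketch of Algorithm 3 (our illustration, real case). For the QR step to produce orthonormal columns, $Q$ must have full column rank, which requires $N - m \ge 2n$; the sketch therefore assumes $N - m = 2n$, while Proposition 5 below states its hypothesis as $N = 2n = 2m$.

```python
import numpy as np

def algorithm3(n, m, N, seed=0):
    """Sketch of Algorithm 3 (real case): frame G = F + K and matrix M.
    Assumes N - m = 2n so that Q can have full column rank."""
    assert N - m == 2 * n
    rng = np.random.default_rng(seed)

    M1 = rng.standard_normal((m, N - m)) / m      # Step 1
    A = rng.standard_normal((N - m, 2 * n))       # Step 2
    Q = np.vstack([-M1 @ A, A])                   # Step 3: first m rows dependent
    Qp, _ = np.linalg.qr(Q)                       # Step 4: orthonormal columns

    Gt = Qp.T                                     # Step 5: 2n x N
    F, K = Gt[:n], Gt[n:]                         # Step 6: two Parseval frames
    G = F + K                                     # Step 7: tight frame, bound 2
    M = np.hstack([np.eye(m), M1])                # Step 8
    return G, M, F, K

G, M, F, K = algorithm3(n=2, m=2, N=6)
print(np.allclose(F @ K.T, 0))                    # F K* = 0
print(np.allclose(G @ G.T, 2 * np.eye(2)))        # G is tight with bound 2
print(np.allclose(M @ G.T, 0))                    # M G* = 0
```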
Proposition 5.
The matrix $M$ in Algorithm 3 is an $m$-erasure recovery matrix of the frame $\{g_i\}_{i=1}^{N}$ if $Q$ has full rank and $N = 2n = 2m$.
Proof. 
Since both $F$ and $K$ are Parseval frames and
$$FK^{*} = 0,$$
for any $f \in H$ we obtain
$$\sum_{i=1}^{N} |\langle f, f_i + k_i\rangle|^{2} = \sum_{i=1}^{N} |\langle f, f_i\rangle|^{2} + \sum_{i=1}^{N} |\langle f, k_i\rangle|^{2} + 2\,\mathrm{Re}\Big\langle f, \sum_{i=1}^{N} \langle f, f_i\rangle k_i\Big\rangle = 2\|f\|^{2} + 2\,\mathrm{Re}\,\langle f, KF^{*} f\rangle = 2\|f\|^{2},$$
since $FK^{*} = 0$ implies $KF^{*} = (FK^{*})^{*} = 0$; thus $G = F + K$ is a (tight) frame.
According to Proposition 4, $M$ in Algorithm 3 is an $m$-erasure recovery matrix of both Parseval frames $F$ and $K$. Thus
$$MF^{*} = MK^{*} = 0,$$
hence
$$MG^{*} = M(F + K)^{*} = M(F^{*} + K^{*}) = 0.$$
Moreover,
$$\tilde{g}_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} \tilde{g}_j,$$
hence $M$ is an $m$-erasure recovery matrix of the frame $\{g_i\}_{i=1}^{N}$.   □

4. Recovery Data from Rearrangements

In actual signal transmission, besides data erasures, another very common problem is data rearrangement. So in this section, we impose some restrictions on $M$ so that the constructed frame and $M$ can recover coefficients from rearrangements when $|a_{1,m+1}| < |a_{1,m+2}| < \cdots < |a_{1,N}|$.
Firstly, Lemma 2 introduces the conditions under which the frame can recover coefficients from rearrangements.
Lemma 2
([21]). Suppose that $\dim H < \infty$. Let $\{g_i\}_{i=1}^{N}$ be a frame for $H$ and $M$ a matrix such that $N(M) = TH$. Then $\{g_i\}_{i=1}^{N}$ can recover the sequence of frame coefficients $\{\langle f, g_i\rangle\}_{i=1}^{N}$ from any of its rearrangements for any $f \in H \setminus H_0$ (where $H_0$ is the union of finitely many proper subspaces of $H$ and is therefore of measure zero) if and only if for any matrix $\tilde{M}$ consisting of the same columns of $M$ but in a different order, $\operatorname{rank}(M) < \operatorname{rank}\begin{pmatrix} M \\ \tilde{M} \end{pmatrix}$.
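The rank condition of Lemma 2 is easy to test numerically. The helper below is our hypothetical illustration (real case, one column permutation of $M$ at a time); the name `detects_rearrangement` is ours:

```python
import numpy as np

def detects_rearrangement(M, perm, tol=1e-10):
    """Check the Lemma 2 condition rank(M) < rank([M; M~]) for the
    column permutation `perm`."""
    stacked = np.vstack([M, M[:, perm]])
    return (np.linalg.matrix_rank(M, tol=tol)
            < np.linalg.matrix_rank(stacked, tol=tol))

rng = np.random.default_rng(5)
m, n = 2, 4
A = np.abs(rng.standard_normal((m, n)))
A[0] = np.sort(A[0])                      # |a_{1,m+1}| < ... < |a_{1,N}|
M = np.hstack([np.eye(m), A])
perm = np.arange(m + n)
perm[[m, m + 1]] = perm[[m + 1, m]]       # swap two coefficient columns
print(detects_rearrangement(M, perm))     # True: the swap is detectable
```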
We now construct a matrix and a frame and explore under what conditions this matrix is an erasure recovery matrix, the frame is a Parseval frame, and the pair can recover coefficients from rearrangements (Algorithm 4; a numerical sketch of the construction of $M$ follows the steps).
Algorithm 4 The construction of the erasure recovery matrix for rearrangements
  • Step 1. Generate an $m \times (N-m)$ matrix $M_0$ whose entries are drawn independently from the standard normal distribution, and let
    $$M_1 = \tfrac{1}{m} M_0 = [h_{m+1}, \ldots, h_N],$$
    where $h_j = (a_{i,j})_{i=1}^{m}$ for all $j = m+1, \ldots, N$.
  • Step 2. Rearrange the columns of $M_1$ so that $|a_{1,m+1}| < |a_{1,m+2}| < \cdots < |a_{1,N}|$: let $M'$ be the matrix whose columns are the $h_j$ in this order, with the first element of each $h_k$ replaced by $|a_{1,k}|$. For example, if $|a_{1,k}| = \max\{|a_{1,j}| : j = m+1, \ldots, N\}$, then we put $h_k$ in the last column, and its first element becomes $|a_{1,k}|$.
  • Step 3. Let $M'' = [h'_{m+1}, \ldots, h'_N]$ be the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the rows of $M'$, where $h'_j = (b_{i,j})_{i=1}^{m}$ for all $j = m+1, \ldots, N$.
  • Step 4. Let $M^{*} = [c\,h'_{m+1}, \ldots, c\,h'_N]$, where we write $c\,h'_j = (a_{i,j})_{i=1}^{m}$ and $c > 0$ is a constant to be determined later.
  • Step 5. Let $A$ be an $(N-m) \times n$ matrix whose $n$ columns are selected independently according to the standard normal distribution, and write
    $$A = \begin{pmatrix} \tilde{g}_{m+1} \\ \tilde{g}_{m+2} \\ \vdots \\ \tilde{g}_N \end{pmatrix}.$$
  • Step 6. Let
    $$\tilde{g}_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} \tilde{g}_j$$
    for all $i = 1, \ldots, m$. Then let $Q$ be the $N \times n$ matrix whose first $m$ rows are $\tilde{g}_1, \ldots, \tilde{g}_m$ and whose remaining rows are the rows of $A$:
    $$Q = (\tilde{g}_1, \tilde{g}_2, \ldots, \tilde{g}_m, \tilde{g}_{m+1}, \tilde{g}_{m+2}, \ldots, \tilde{g}_N)^{T}.$$
  • Step 7. Let $Q'$ be the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the columns of $Q$.
  • Step 8. Let $G = [g_1, \ldots, g_N]$ be the $n \times N$ matrix whose rows are the columns of $Q'$.
  • Step 9. Let $M = [I_{m\times m}, M^{*}]$.
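A sketch of Steps 1-4 and 9 of Algorithm 4 (our illustration, real case): sort the columns by the magnitude of the first row, make the first row positive, orthonormalize the rows, and scale by $c$.

```python
import numpy as np

def algorithm4_matrix(m, N, c=0.5, seed=0):
    """Sketch of Steps 1-4 and 9 of Algorithm 4 (real case): build
    M = [I, M*] whose coefficient block has orthonormal-then-scaled rows
    and a positive, strictly increasing first row."""
    rng = np.random.default_rng(seed)

    M1 = rng.standard_normal((m, N - m)) / m           # Step 1
    order = np.argsort(np.abs(M1[0]))                  # Step 2: sort columns by
    Mp = M1[:, order]                                  # first-row magnitude
    Mp[0] = np.abs(Mp[0])                              # make first row positive

    # Step 3: Gram-Schmidt on the rows (QR of the transpose); fix the sign
    # of the first row so it stays positive and increasing.
    Qr, _ = np.linalg.qr(Mp.T)
    if Qr[0, 0] < 0:
        Qr[:, 0] *= -1
    Mpp = Qr.T                                         # orthonormal rows

    Mstar = c * Mpp                                    # Step 4: scale by c > 0
    return np.hstack([np.eye(m), Mstar])               # Step 9

M = algorithm4_matrix(m=3, N=8)
block = M[:, 3:]
print(np.allclose(block @ block.T, 0.25 * np.eye(3))) # rows orthogonal, norm c
```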
Proposition 6.
The matrix $M$ in Algorithm 4 is an $m$-erasure recovery matrix of the Parseval frame $\{g_i\}_{i=1}^{N}$ if $Q$ has full rank and $N - m = n$. Moreover, $M$ can recover coefficients from rearrangements if $|a_{1,m+1}| < |a_{1,m+2}| < \cdots < |a_{1,N}|$.
Proof. 
First of all, similarly to Proposition 4, we know that $\{g_i\}_{i=1}^{N}$ is a Parseval frame.
Since
$$g_i = -\sum_{j=m+1}^{N} a_{i,j}^{*} g_j$$
for all $i = 1, \ldots, m$, we have
$$MG^{*} = [I_{m\times m}, M^{*}] [g_1, \ldots, g_N]^{*} = \begin{pmatrix} g_1^{*} + \sum_{j=m+1}^{N} a_{1,j} g_j^{*} \\ g_2^{*} + \sum_{j=m+1}^{N} a_{2,j} g_j^{*} \\ \vdots \\ g_m^{*} + \sum_{j=m+1}^{N} a_{m,j} g_j^{*} \end{pmatrix} = 0,$$
and $N(M) = TH$, where $T$ is the analysis operator of $G$.
Furthermore, since $\{g_i\}_{i=1}^{N}$ remains a frame whenever any $m$ elements are removed, we obtain that $M$ is an $m$-erasure recovery matrix of $\{g_i\}_{i=1}^{N}$.
Next, we prove that $M$ can recover coefficients from $m$ rearrangements if $|a_{1,m+1}| < |a_{1,m+2}| < \cdots < |a_{1,N}|$. We discuss the following three situations:
Situation 1: the rearrangement occurs between the $(N-m)$th column and the $N$th column. Since $|a_{1,m+1}| < |a_{1,m+2}| < \cdots < |a_{1,N}|$, it is easy to check that
$$\operatorname{rank}\begin{pmatrix} 1 & 0 & \cdots & 0 & a_{1,m+1} & \cdots & a_{1,N} \\ 0 & 1 & \cdots & 0 & a_{2,m+1} & \cdots & a_{2,N} \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & a_{m,m+1} & \cdots & a_{m,N} \\ 1 & 0 & \cdots & 0 & \tilde{a}_{1,m+1} & \cdots & \tilde{a}_{1,N} \end{pmatrix} = m + 1.$$
Thus
$$\operatorname{rank}(M) < \operatorname{rank}\begin{pmatrix} M \\ \tilde{M} \end{pmatrix},$$
where $\tilde{M}$ is a matrix whose columns are the same as those of $M$ but in a different order. According to Lemma 2, we can use $M$ and $\{g_i\}_{i=1}^{N}$ to recover coefficients from rearrangements.
Situation 2: the rearrangement occurs both between the $(N-m)$th column and the $N$th column and among the first $m$ columns, but there is no interchange between the first $m$ columns and the remaining columns. Without loss of generality, we assume that the first column becomes the second column; similar results hold in the other cases. Since the first row of the matrix $M^{*}$ is all positive and its rows are mutually orthogonal, there is no constant $c_0$ such that
$$(a_{1,m+1}, \ldots, a_{1,N}) = c_0 (a_{2,m+1}, \ldots, a_{2,N}),$$
hence
$$\operatorname{rank}\begin{pmatrix} 1 & 0 & \cdots & 0 & a_{1,m+1} & \cdots & a_{1,N} \\ 0 & 1 & \cdots & 0 & a_{2,m+1} & \cdots & a_{2,N} \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & a_{m,m+1} & \cdots & a_{m,N} \\ 0 & 1 & \cdots & 0 & \tilde{a}_{1,m+1} & \cdots & \tilde{a}_{1,N} \end{pmatrix} = m + 1.$$
Thus
$$\operatorname{rank}(M) < \operatorname{rank}\begin{pmatrix} M \\ \tilde{M} \end{pmatrix},$$
so we can use $M$ and $\{g_i\}_{i=1}^{N}$ to recover coefficients from rearrangements.
Situation 3: one of the first $m$ columns becomes one of the last $N-m$ columns. Without loss of generality, we assume that the first column becomes the $(m+1)$th column; similar results hold in the other cases. Then, writing $a_{i,j} = c\,b_{i,j}$ for the entries of $M^{*}$ as in Step 4,
$$d(c) = \det\begin{pmatrix} 1 & 0 & \cdots & 0 & a_{1,m+1} \\ 0 & 1 & \cdots & 0 & a_{2,m+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & a_{m,m+1} \\ \tilde{a}_{1,1} & \tilde{a}_{1,2} & \cdots & \tilde{a}_{1,m} & 1 \end{pmatrix} = \det\begin{pmatrix} 1 & 0 & \cdots & 0 & c\,b_{1,m+1} \\ 0 & 1 & \cdots & 0 & c\,b_{2,m+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & c\,b_{m,m+1} \\ c\,\tilde{b}_{1,1} & c\,\tilde{b}_{1,2} & \cdots & c\,\tilde{b}_{1,m} & 1 \end{pmatrix} = 1 + ac^{2},$$
where $a$ is a constant.
Then we can choose some $c$ such that $d(c) \ne 0$. Moreover,
$$\operatorname{rank}(M) < m + 1 \le \operatorname{rank}\begin{pmatrix} M \\ \tilde{M} \end{pmatrix}.$$
Hence we can use $M$ and $\{g_i\}_{i=1}^{N}$ to recover coefficients from rearrangements. □
Next, we discuss whether the above $M$ and $\{g_i\}_{i=1}^{N}$ from Algorithm 4 can recover coefficients stably from $m$ rearrangements. We find that if $|a_{1,m+1}| < |a_{1,m+2}| < \cdots < |a_{1,N}|$ and the $g_i$, $N-m \le i \le N$, are pairwise orthogonal, then coefficients can be recovered stably from $m$ rearrangements in Situation 1 and Situation 2. Before that, we give a sufficient and necessary condition for a frame $\{g_i\}_{i=1}^{N}$ to recover coefficients stably from $m$ rearrangements.
Lemma 3
([22]). We can recover coefficients stably from $m$ rearrangements if and only if $\{g_i\}_{i=1}^{N}$ is totally robust; that is, $\{g_i\}_{i=1}^{N}$ is a frame and, for any rearrangements $(i_1, \ldots, i_N)$ and $(i'_1, \ldots, i'_N)$ of $(1, \ldots, N)$ and any $x, x' \in H$ satisfying $\langle x, g_{i_l}\rangle = \langle x', g_{i'_l}\rangle$, $1 \le l \le N$, we have $x = x'$.
Hence we just need to prove that $\{g_i\}_{i=1}^{N}$ in Algorithm 4 is totally robust.
Proposition 7.
$\{g_i\}_{i=1}^{N}$ in Algorithm 4 is totally robust in the following two situations, and it can recover coefficients stably from $m$ rearrangements, if $N - m = n$.
Proof. 
Situation 1: the rearrangement occurs between the $(N-m)$th column and the $N$th column. For any rearrangements $(i_1, \ldots, i_N)$ and $(i'_1, \ldots, i'_N)$ of $(1, \ldots, N)$ and $x, x' \in H$ satisfying $\langle x, g_{i_l}\rangle = \langle x', g_{i'_l}\rangle$, $1 \le l \le N$, we consider
$$\langle x, g_1\rangle = \langle x', g_1\rangle.$$
Hence
$$-\sum_{j=m+1}^{N} a_{1,j} \langle x, g_j\rangle = \Big\langle x, -\sum_{j=m+1}^{N} a_{1,j}^{*} g_j\Big\rangle = \Big\langle x', -\sum_{j=m+1}^{N} a_{1,j}^{*} g_j\Big\rangle = -\sum_{j=m+1}^{N} a_{1,j} \langle x', g_j\rangle.$$
Since the rearrangement occurs between the $(N-m)$th column and the $N$th column, we use $\Lambda$ to denote the set of indices involved in the rearrangement; the terms with $j \notin \Lambda$ cancel, hence
$$\sum_{j\in\Lambda} a_{1,j} \langle x, g_j\rangle = \sum_{j\in\Lambda} a_{1,j} \langle x', g_j\rangle.$$
Thus, writing $j'$ for the index that the rearrangement sends to position $j$,
$$\sum_{j\in\Lambda} \big(a_{1,j} - a_{1,j'}\big) \langle x, g_j\rangle = 0,$$
where $j' \in \Lambda$.
For the same reason, we have
$$\sum_{j\in\Lambda} \big(a_{1,j} - a_{1,j'}\big) \langle x', g_j\rangle = 0,$$
where $j' \in \Lambda$.
If $x = 0$, then $x' = 0 = x$.
If $x \ne 0$, then since $|a_{1,m+1}| < |a_{1,m+2}| < \cdots < |a_{1,N}|$ and the $g_i$, $N-m \le i \le N$, are pairwise orthogonal, we get
$$x' = cx,$$
where $c$ is a constant. Moreover,
$$\langle x', g_j\rangle = c\langle x, g_j\rangle, \quad j \in \Lambda^c,$$
where $|\Lambda^c| = N - m$. Since $\{g_i\}_{i=1}^{N}$ remains a frame whenever any $m$ elements are removed, we get $c = 1$ and
$$x' = x.$$
According to Lemma 3, $\{g_i\}_{i=1}^{N}$ is totally robust and can recover coefficients stably from $m$ rearrangements.
Situation 2: the rearrangement occurs among the first $m$ columns. Writing $1'$ for the index sent to position 1, we have
$$\langle x, g_1\rangle = \langle x', g_{1'}\rangle.$$
Hence
$$-\sum_{j=m+1}^{N} a_{1,j} \langle x, g_j\rangle = \Big\langle x, -\sum_{j=m+1}^{N} a_{1,j}^{*} g_j\Big\rangle = \Big\langle x', -\sum_{j=m+1}^{N} a_{1',j}^{*} g_j\Big\rangle = -\sum_{j=m+1}^{N} a_{1',j} \langle x', g_j\rangle.$$
Since the coefficients at positions $m+1, \ldots, N$ are unchanged, $\langle x, g_j\rangle = \langle x', g_j\rangle$ for $m+1 \le j \le N$, and thus
$$\sum_{j=m+1}^{N} \big(a_{1,j} - a_{1',j}\big) \langle x, g_j\rangle = 0,$$
and
$$\sum_{j=m+1}^{N} \big(a_{1,j} - a_{1',j}\big) \langle x', g_j\rangle = 0,$$
where $m+1 \le j \le N$.
If $x = 0$, then $x' = 0 = x$.
If $x \ne 0$, then since there is a $j$ such that $a_{1,j} - a_{1',j} \ne 0$ and $\{g_i\}_{i=m+1}^{N}$ are linearly independent, we obtain
$$x' = cx,$$
where $c$ is a constant. Moreover,
$$\langle x', g_j\rangle = c\langle x, g_j\rangle, \quad N-m \le j \le N.$$
Since $\{g_i\}_{i=1}^{N}$ remains a frame whenever any $m$ elements are removed, we get $c = 1$ and
$$x' = x.$$
Hence $\{g_i\}_{i=1}^{N}$ is totally robust and can recover coefficients stably from $m$ rearrangements. □

Author Contributions

All authors contributed to the study conception and design. Conceptualization, M.H.; Formal analysis, M.H.; Funding acquisition, M.H.; Investigation, M.H. and C.W.; Methodology, M.H.; Supervision, J.L.; Writing—original draft, M.H.; Writing—review and editing, M.H. and C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Scientific Research Initiation Fund of Chengdu University of Technology (10912-KYQD2022-09459).

Data Availability Statement

Data sharing not applicable to this article, as no datasets were generated or analyzed during the current study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Duffin, R.J.; Schaeffer, A.C. A class of nonharmonic Fourier series. Trans. Am. Math. Soc. 1952, 72, 341–366.
  2. Benac, M.J.; Massey, P.; Ruiz, M. Optimal frame designs for multitasking devices with weight restrictions. Adv. Comput. Math. 2020, 46, 22.
  3. Candès, E.J.; Donoho, D.L. New tight frames of curvelets and optimal representations of objects with piecewise singularities. Commun. Pure Appl. Math. 2004, 57, 219–266.
  4. Bodmann, B.G.; Paulsen, V.I. Frame paths and error bounds for sigma-delta quantization. Appl. Comput. Harmon. Anal. 2007, 22, 176–197.
  5. Dana, A.F.; Gowaikar, R.; Palanki, R.; Hassibi, B.; Effros, M. Capacity of wireless erasure networks. IEEE Trans. Inf. Theory 2006, 52, 789–804.
  6. Leng, J.S.; Han, D.; Huang, T. Optimal dual frames for communication coding with probabilistic erasures. IEEE Trans. Signal Process. 2011, 59, 5380–5389.
  7. Albanese, A.; Blömer, J.; Edmonds, J.; Luby, M.; Sudan, M. Priority encoding transmission. IEEE Trans. Inf. Theory 1996, 42, 1737–1744.
  8. Fickus, M.; Marks, J.D.; Poteet, M.J. A generalized Schur-Horn theorem and optimal frame completions. Appl. Comput. Harmon. Anal. 2016, 40, 505–528.
  9. Arabyani-Neyshaburi, F.; Kamyabi-Gol, R.A.; Farshchian, R. Matrix methods for perfect signal recovery underlying range space of operators. Math. Methods Appl. Sci. 2023, 46, 12273–12290.
  10. Han, D.; Hu, Q.F.; Liu, R. Quantum injectivity of multi-window Gabor frames in finite dimensions. Ann. Funct. Anal. 2022, 13, 59.
  11. Han, D.; Kornelson, K.; Larson, D.; Weber, E. Frames for Undergraduates; American Mathematical Society: Providence, RI, USA, 2007; pp. 40–41.
  12. Alexeev, B.; Cahill, J.; Mixon, D. Full spark frames. J. Fourier Anal. Appl. 2012, 18, 1167–1194.
  13. Leng, J.S.; Han, D.; Huang, T. Probability modelled optimal frames for erasures. Linear Algebra Appl. 2013, 438, 4222–4236.
  14. Cheng, C.; Han, D. On twisted group frames. Linear Algebra Appl. 2019, 569, 285–310.
  15. He, M.; Leng, J.S.; Li, D. Operator representations of K-frames: Boundedness and stability. Oper. Matrices 2020, 14, 921–934.
  16. Lv, F.; Sun, W. Construction of robust frames in erasure recovery. Linear Algebra Appl. 2015, 479, 155–170.
  17. Han, D.; Larson, D.; Scholze, S.; Sun, W. Erasure recovery matrices for encoder protection. Appl. Comput. Harmon. Anal. 2020, 48, 766–786.
  18. Casazza, P.; Kutyniok, G. Finite Frames: Theory and Applications; Springer: New York, NY, USA, 2013; pp. 154–196.
  19. Han, D.; Sun, W. Reconstruction of signals from frame coefficients with erasures at unknown locations. IEEE Trans. Inf. Theory 2014, 60, 4013–4025.
  20. Balan, R.; Casazza, P.G.; Heil, C. Deficits and excesses of frames. Adv. Comput. Math. 2003, 18, 93–116.
  21. Han, D.; Lv, F.; Sun, W. Recovery of signals from unordered partial frame coefficients. Appl. Comput. Harmon. Anal. 2016, 42, 38–58.
  22. Han, D.; Lv, F.; Sun, W. Stable recovery of signals from frame coefficients with erasures at unknown locations. Sci. China Math. 2018, 61, 151–172.
Table 1. Notations and terminologies.

$x \otimes y$: the rank-one matrix given by $(x \otimes y)z = \langle z, y\rangle x$, for $z \in \mathbb{C}^{d}$
$\|f\|$: the norm of $f$
$\|T\|$: the operator norm of the operator $T$
$T^{*}$: the adjoint operator of $T$
$f^{T}$: the transpose of the vector $f$
$\dim(H)$: the dimension of $H$
$I_{m\times m}$: an $m \times m$ identity matrix
$I$: an index set, a subset of $\{1, \ldots, k\}$
$\Lambda$: the index set of the erased coefficients
$A^{c}$: the complement of the set $A$
$M_\Lambda$: the minor of $M$ formed by the columns indexed by $\Lambda$
$M_{I,\Lambda}$: the minor of $M$ with rows indexed by $I$ and columns indexed by $\Lambda$
$N(M)$: the null space of $M$
