1. Introduction
Frames were first introduced by Duffin and Schaeffer in 1952 [1] to deal with problems arising in nonharmonic Fourier series. Specifically,
Definition 1 ([1]). A sequence $\{f_i\}_{i \in I}$ of elements in a Hilbert space H is a frame for H if there exist constants $0 < A \le B < \infty$ such that
$$A\|f\|^2 \le \sum_{i \in I} |\langle f, f_i \rangle|^2 \le B\|f\|^2 \quad \text{for all } f \in H.$$
The numbers $A, B$ are called frame bounds. In particular, if $A = B$, then the frame is called a tight frame for H; if $A = B = 1$, then the tight frame is called a Parseval frame for H.
Due to the redundancy of a frame [2], it can be used, as a generalization of a basis in a Hilbert space, in many fields such as signal and image processing [3], quantization [4], capacity of transmission channels [5], coding theory [6], and data transmission technology [7]. In particular, more and more scholars are applying frames to signal erasures and reconstruction [8].
More specifically, some scholars recover the lost data by inverting the frame operator of the subfamily whose indices correspond to the non-erased frame coefficients [9]. However, this method is slow because it requires inverting a matrix. Therefore, it has been proposed to use the dual frame defined below to recover the lost data [10].
Definition 2 ([11]). For a frame $\{f_i\}_{i \in I}$, if there is a sequence $\{g_i\}_{i \in I}$ such that
$$f = \sum_{i \in I} \langle f, f_i \rangle g_i$$
for any $f \in H$, then $\{g_i\}_{i \in I}$ is a dual of $\{f_i\}_{i \in I}$.
Then any element f in a separable Hilbert space H can be recovered by a reconstruction formula involving a frame and a dual frame; i.e., there is a dual frame $\{g_i\}_{i \in I}$ such that $f = \sum_{i \in I} \langle f, f_i \rangle g_i$ [12]. When part of the frame coefficients are erased, the remaining frame coefficients can be used for reconstruction, and one then needs to discuss optimal frames and duals that minimize the reconstruction error. For example, in [13], the authors found the optimal dual frames that minimize the reconstruction error for 1-erasure and 2-erasures. Many other scholars have studied this issue in recent years; see, e.g., [14,15,16].
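As a concrete numerical illustration of the reconstruction formula (our own sketch, not taken from the cited works; the sizes and the random frame are arbitrary choices): the rows of a random full-rank matrix form a frame for $\mathbb{R}^n$, and the canonical dual $g_i = S^{-1} f_i$ recovers $f$ from its frame coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 7                      # dimension n, redundant frame of N > n vectors
F = rng.standard_normal((N, n))  # rows f_i form a frame for R^n (a.s. spanning)

S = F.T @ F                      # frame operator S = sum_i f_i f_i^T
G = F @ np.linalg.inv(S)         # rows g_i = S^{-1} f_i: the canonical dual frame

f = rng.standard_normal(n)
c = F @ f                        # frame coefficients c_i = <f, f_i>
f_rec = G.T @ c                  # reconstruction f = sum_i <f, f_i> g_i

assert np.allclose(f_rec, f)
```

The check succeeds because $\sum_i \langle f, f_i \rangle S^{-1} f_i = S^{-1} S f = f$.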
However, in practical applications, the pre-selected encoding frame may be attacked by intermediaries and thus produce errors, and it is difficult to recover a large amount of lost data with the above methods. In [17], the authors proposed using an erasure recovery matrix M to recover the erased data; this can handle a large number of data erasures and also protects the encoding frame. Moreover, two ways to recover the erased data with the erasure recovery matrix M were proposed. Writing $c = Tf$ for the vector of frame coefficients, one is
$$c_\Lambda = -(M_\Lambda^* M_\Lambda)^{-1} M_\Lambda^* M_{\Lambda^c} c_{\Lambda^c}, \qquad (1)$$
where $\Lambda$ is the index set of the erased coefficients, and $M_\Lambda$ denotes the minor of M formed by the columns indexed by $\Lambda$. The other one is
$$c_\Lambda = -(M_{I,\Lambda})^{-1} M_{I,\Lambda^c} c_{\Lambda^c}, \qquad (2)$$
where $M_{I,\Lambda}$ denotes the minor of M with rows indexed by I and columns indexed by $\Lambda$, and I is a subset of the row index set of M.
The authors then mainly discussed the method of (1) and did not pursue the method of (2), because it takes a certain amount of computation to find a suitable I that makes $M_{I,\Lambda}$ invertible. Hence, in this paper, motivated by [17], we solve the problem of finding I so that $M_{I,\Lambda}$ is invertible. First of all, we discuss a special frame and its erasure recovery matrix such that $M_{I,\Lambda} = E_m$, where $E_m$ is the identity matrix. In this case, we can easily recover the erased data; in fact, we just need
$$c_\Lambda = -M_{I,\Lambda^c} c_{\Lambda^c}.$$
Obviously, our method does not require inverting a matrix, which greatly simplifies the reconstruction problem and the computation. Furthermore, we discuss the construction of the above frame and erasure recovery matrix M; three different construction algorithms are proposed, each with its own advantages. Next, we prove that the frame and erasure recovery matrix M that we construct can recover the data when data at known locations are erased. We then impose some restrictions on M so that the constructed frame and erasure recovery matrix M can recover coefficients from m rearrangements, and we give a construction algorithm for an erasure recovery matrix and a frame whose coefficients can be recovered from rearrangements. Finally, we prove that in some cases the above M and frame can recover coefficients stably from m rearrangements.
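To make the inversion-free recovery concrete, the following sketch (our own illustration over the reals, so no conjugates appear; the names B, A and all sizes are arbitrary assumptions, not from [17]) builds a frame whose first m vectors are nonzero combinations of the rest, forms $M = [\,E_m \;\; -A\,]$, and recovers the first m erased coefficients with a single matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3
N = n + m + 2                          # total number of frame vectors

B = rng.standard_normal((N - m, n))    # rows: a frame {f_j}_{j > m} for R^n
A = rng.standard_normal((m, N - m))    # combination coefficients a_ij (a.s. nonzero)
F = np.vstack([A @ B, B])              # special frame: f_i = sum_j a_ij f_j, i <= m

M = np.hstack([np.eye(m), -A])         # erasure recovery matrix: M (F f) = 0

f = rng.standard_normal(n)
c = F @ f                              # frame coefficients; the first m get erased
assert np.allclose(M @ c, 0)

c_rec = -M[:, m:] @ c[m:]              # c_Lambda = -M_{I,Lambda^c} c_{Lambda^c}
assert np.allclose(c_rec, c[:m])       # erased data recovered, no matrix inversion
```

Since the leading block of M is the identity, the recovery step is a plain multiplication, exactly as formula (2) with $M_{I,\Lambda} = E_m$.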
2. Notation, Terminology and Data Erasures
In this section, we recall some notation, terminology, definitions, and properties of frame theory that we use throughout the paper.
Firstly, we introduce the following operators, which appear frequently throughout this paper.
Definition 3 ([18]). Let $F = \{f_i\}_{i=1}^N$ be a Bessel sequence for H.
(I) The analysis operator of F is defined by $Tf = \{\langle f, f_i \rangle\}_{i=1}^N$.
(II) The synthesis operator of F is defined by $T^*\{c_i\}_{i=1}^N = \sum_{i=1}^N c_i f_i$.
(III) The frame operator of F is defined by $Sf = T^*Tf = \sum_{i=1}^N \langle f, f_i \rangle f_i$.
In applications, we often use a frame $F = \{f_i\}_{i=1}^N$ to encode data f and obtain the frame coefficients $\{\langle f, f_i \rangle\}_{i=1}^N$. Then we use a dual frame $\{g_i\}_{i=1}^N$, known as the decoding frame, to recover f; thus $f = \sum_{i=1}^N \langle f, f_i \rangle g_i$. However, in applications, erasures and rearrangements may happen, so we may only receive part of the frame coefficients or a rearrangement of them. Hence, in this paper, we use erasure recovery matrices (introduced in [19]) to recover the data.
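The three operators of Definition 3 have simple matrix forms when the frame vectors are the rows of a matrix F: the analysis operator is multiplication by F, the synthesis operator by its transpose, and the frame operator is $F^\top F$. A small self-contained check (real scalars, random data; an illustration, not part of the cited constructions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 3, 6
F = rng.standard_normal((N, n))   # rows f_i: a Bessel sequence (here a frame)

f = rng.standard_normal(n)

Tf = F @ f                        # analysis operator:  T f = (<f, f_i>)_i
c = rng.standard_normal(N)
Tstar_c = F.T @ c                 # synthesis operator: T* c = sum_i c_i f_i
Sf = F.T @ (F @ f)                # frame operator:     S f = T* T f

assert np.allclose(Sf, (F.T @ F) @ f)
assert np.allclose(np.dot(Tf, c), np.dot(f, Tstar_c))  # T* is the adjoint of T
```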
Then we use Table 1 to introduce some notation and terminology that appear in the following text.
3. Recovery of Data from m-Erasures
In [17], the authors proposed using the erasure recovery matrix M to recover lost data, and they gave the two recovery methods (1) and (2). However, they pointed out that it is difficult to find a suitable set I that makes the matrix $M_{I,\Lambda}$ invertible, where $M_{I,\Lambda}$ denotes the minor of M with rows indexed by I and columns indexed by $\Lambda$, and I is a subset of the row index set of M. Hence, in this section, we construct a special frame and its erasure recovery matrix so that $M_{I,\Lambda} = E_m$, where $E_m$ is the identity matrix. In this case, $M_{I,\Lambda}$ is invertible, and we can easily recover the lost data.
3.1. Some Properties of the Erasure Recovery Matrix
Firstly, we provide the definition of the erasure recovery matrix.
Definition 4 ([17]). Let $F = \{f_i\}_{i=1}^N$ be a frame for an n-dimensional Hilbert space H and $m < N$. An m-erasure recovery matrix for F is a matrix M with $\operatorname{spark}(M) \ge m + 1$ satisfying $MTf = 0$ for any vector $f \in H$, where T denotes the analysis operator of the frame F and the spark of a collection of vectors is the size of its smallest linearly dependent subset.
In the following, we discuss how to recover erased frame coefficients at known locations. We use $\Lambda$ to represent the index set of the erased coefficients, where $|\Lambda| = m$. Then we introduce a special frame $\{f_i\}_{i=1}^N$: for any $1 \le i \le m$, there is a sequence $\{a_{ij}\}_{j=m+1}^N$ of complex numbers such that
$$f_i = \sum_{j=m+1}^N a_{ij} f_j, \quad 1 \le i \le m,$$
where none of the coefficients in the above equations is equal to zero. We also recall that the excess of a frame $\{f_i\}_{i=1}^N$ of H is the greatest integer m such that m elements can be removed from the frame and still leave a frame for H [20].
Next, we consider whether the above frame remains a frame whenever any m elements are removed.
Proposition 1. The above frame $\{f_i\}_{i=1}^N$ remains a frame whenever any m elements are removed, provided $\{f_i\}_{i=1}^N$ is a frame for H with excess m.
Proof. First, if the first m elements are removed, the conclusion is immediate: in finite-dimensional Hilbert spaces, the frames are exactly the spanning families of vectors, and consequently frames with excess m are robust under removal of the first m elements.
Next, we consider the case where arbitrary m elements are removed. Since the first m elements can be expressed linearly in terms of the remaining elements with all coefficients nonzero, any m removed elements can be expressed linearly in terms of the remaining ones. Hence $\{f_i\}_{i=1}^N$ is still a frame whenever any m elements are removed. □
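Proposition 1 can be sanity-checked numerically: build a special frame whose first m vectors are generic nonzero combinations of a basis, remove every possible m-element subset, and verify that the remaining vectors still span. A sketch over the reals (sizes and names are our own arbitrary choices):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, m = 4, 2
N = n + m                               # frame with excess m: N = n + m vectors

B = rng.standard_normal((n, n))         # rows {f_j}_{j > m}: a basis for R^n (a.s.)
A = rng.standard_normal((m, n))         # dense (a.s. nonzero) coefficients a_ij
F = np.vstack([A @ B, B])               # special frame: f_i = sum_j a_ij f_j

# Removing ANY m rows must leave a spanning set (rank n), i.e. still a frame.
for kept in combinations(range(N), N - m):
    assert np.linalg.matrix_rank(F[list(kept)]) == n
```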
Note that if the excess of $\{f_i\}_{i=1}^N$ is the greatest integer m, then, with the coefficients $a_{ij}$ above, we let
$$M = \begin{pmatrix} 1 & \cdots & 0 & -\overline{a_{1,m+1}} & \cdots & -\overline{a_{1,N}} \\ \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & \cdots & 1 & -\overline{a_{m,m+1}} & \cdots & -\overline{a_{m,N}} \end{pmatrix}. \qquad (3)$$
If M is an m-erasure recovery matrix, then $M_{I,\Lambda}$ is invertible. Hence we discuss whether M is an m-erasure recovery matrix. Before that, we introduce the following lemma and proposition.
Lemma 1 ([21]). Let $\{f_i\}_{i=1}^N$ be a frame for H with analysis operator T. Suppose that $m < N$ is an integer and that M is a matrix such that $\ker M = \operatorname{ran} T$. Then the following assertions are equivalent.
(i) Every m columns of M are linearly independent.
(ii) $\{f_i\}_{i=1}^N$ remains a frame whenever any m elements are removed.
Proposition 2. Let H be an n-dimensional Hilbert space, let M be the matrix defined in (3) above, and let T be the analysis operator of the frame $\{f_i\}_{i=1}^N$. If $\operatorname{rank}(M) = N - n$, then $\ker M = \operatorname{ran} T$, and M is an m-erasure recovery matrix.
Proof. On the one hand, for any $f \in H$ we have $\langle f, f_i \rangle = \sum_{j=m+1}^N \overline{a_{ij}} \langle f, f_j \rangle$ for all $1 \le i \le m$.
That is, $MTf = 0$, so $\operatorname{ran} T \subseteq \ker M$.
On the other hand, since $\operatorname{rank}(M) = N - n$, we obtain $\dim \ker M = n = \dim \operatorname{ran} T$, and hence $\ker M = \operatorname{ran} T$.
So far, we have found that $\ker M = \operatorname{ran} T$ and that $\{f_i\}_{i=1}^N$ remains a frame whenever any m elements are removed. According to Lemma 1, every m columns of M are linearly independent. Obviously, the first $m + 1$ columns of M are linearly dependent. Hence $\operatorname{spark}(M) = m + 1$, and the matrix M is an m-erasure recovery matrix. □
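The two conclusions of Proposition 2 — the kernel condition and the spark — can be verified numerically for a randomly generated special frame. A sketch over the reals (our own illustration; all sizes are arbitrary):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n, m = 3, 2
N = n + m                                  # excess m, so rank(M) = m = N - n

B = rng.standard_normal((n, n))
A = rng.standard_normal((m, n))
F = np.vstack([A @ B, B])                  # special frame: f_i = sum_j a_ij f_j
M = np.hstack([np.eye(m), -A])             # the matrix of (3), real case

# ker M = ran T: M annihilates every coefficient vector T f = F f ...
f = rng.standard_normal(n)
assert np.allclose(M @ (F @ f), 0)
assert np.linalg.matrix_rank(M) == N - n   # ... and the dimensions match

# spark(M) = m + 1: every m columns independent, some m + 1 columns dependent
for cols in combinations(range(N), m):
    assert np.linalg.matrix_rank(M[:, cols]) == m
assert np.linalg.matrix_rank(M[:, :m + 1]) == m
```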
In this case, we let $I = \{1, \dots, m\}$, so that $M_{I,\Lambda} = E_m$ when $\Lambda = \{1, \dots, m\}$. Then the following Example 1 shows that we can use the m-erasure recovery matrix M to recover the erased data if the first m coefficients are erased.
Example 1. Assume that the first m data are lost. Then we can construct a frame $\{f_i\}_{i=1}^N$ for H with excess m (the construction of such frames will be discussed later) such that for any $1 \le i \le m$ there is a sequence $\{a_{ij}\}_{j=m+1}^N$ of complex numbers with
$$f_i = \sum_{j=m+1}^N a_{ij} f_j,$$
where none of the coefficients in the above equations is equal to zero. Hence its erasure recovery matrix is
$$M = \begin{pmatrix} 1 & \cdots & 0 & -\overline{a_{1,m+1}} & \cdots & -\overline{a_{1,N}} \\ \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & \cdots & 1 & -\overline{a_{m,m+1}} & \cdots & -\overline{a_{m,N}} \end{pmatrix}.$$
Then for any $f \in H$, we let $c = Tf$, where $c_i = \langle f, f_i \rangle$, and $Mc = 0$. Since the first m data are erased, we take $\Lambda = \{1, \dots, m\}$ and obtain
$$M_{I,\Lambda} c_\Lambda + M_{I,\Lambda^c} c_{\Lambda^c} = 0,$$
where $M_{I,\Lambda}$ is the matrix composed of the first m rows and the first m columns of M, and $M_{I,\Lambda} = E_m$. Obviously, we have
$$c_\Lambda = -M_{I,\Lambda^c} c_{\Lambda^c},$$
that is,
$$\langle f, f_i \rangle = \sum_{j=m+1}^N \overline{a_{ij}} \langle f, f_j \rangle, \quad 1 \le i \le m.$$
That is to say, we can use the remaining data to recover the erased data easily.
3.2. Algorithm Construction
In what follows, we discuss the construction of the above frame and the m-erasure recovery matrix M when data at known locations are erased. We assume that the first m data are erased; the erasure of any other m known locations is handled in the same way (Algorithm 1).
Algorithm 1 The construction of the m-erasure recovery matrix M when data at known locations are erased |
Generate an matrix whose entries are drawn independently from the standard normal distribution, and let
where for all and , for all . Let P be the orthogonal projection for the range space of the analysis operator for , and
for where is a standard orthonormal basis for . Let
for all . Generate an matrix T whose entries are drawn independently from the standard normal distribution. Let , for all , and .
|
Proposition 3. The above matrix M in Algorithm 1 is an m-erasure recovery matrix of if .
Proof. First of all, we prove that
is a frame for
H. Since
, for all
. Then
where
And
is a frame for
, since
is a standard orthonormal basis for
. Thus
where
B is the upper frame bound of
.
And
where
A is the lower frame bound of
.
Thus is a frame. Since T is surjective, is a frame for .
Next, we prove that M is an m-erasure recovery matrix of .
Hence . Moreover, according to the structure of the frame , we obtain that it remains a frame whenever any m elements are removed.
By Lemma 1, every m columns of M are linearly independent. Hence M is an m-erasure recovery matrix of . □
As we know, when the frame used for encoding is a Parseval frame, better results are often obtained. Therefore, we propose a construction method for a Parseval frame and its m-erasure recovery matrix (Algorithm 2).
Algorithm 2 The construction of m-erasure recovery matrix for a Parseval frame |
Generate an matrix whose entries are drawn independently from the standard normal distribution, and let
where for all and , for all . Let A be an matrix whose n columns are selected independently according to the standard normal distribution, where
Let
for all . Then let Q be an matrix whose first m rows are , and the rest of the entries are the rows of A. Let
Let be the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the columns of Q. Let be the matrix whose rows are made up of columns of . Let .
|
Proposition 4. The above matrix M in Algorithm 2 is an m-erasure recovery matrix of the Parseval frame if has full rank and .
Proof. First of all, we prove that is a Parseval frame.
According to the structure of
G, we have
Since has full rank, is a Parseval frame.
Since
for all
, and
is the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the columns of
Q. We can get that
for all
.
Hence
and
.
According to Lemma 1, we know M is an m-erasure recovery matrix of . □
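Algorithm 2 obtains a Parseval frame by Gram-Schmidt orthonormalization of the columns of Q and then taking rows. The underlying fact — that the rows of any matrix with orthonormal columns form a Parseval frame — can be checked with a thin QR factorization, which performs the Gram-Schmidt step. This is an illustration of the principle only, not the full algorithm, and all names here are our own:

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 3, 7
Q = rng.standard_normal((N, n))     # generic N x n matrix, a.s. full column rank

# Thin QR = Gram-Schmidt on the columns of Q: U has orthonormal columns,
# so U^T U = I_n and the rows u_i of U form a Parseval frame for R^n.
U, _ = np.linalg.qr(Q)

f = rng.standard_normal(n)
coeffs = U @ f                      # frame coefficients <f, u_i>
assert np.isclose(np.sum(coeffs**2), np.dot(f, f))   # Parseval identity
assert np.allclose(U.T @ coeffs, f)                  # f = sum_i <f, u_i> u_i
```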
Similar to Algorithm 2, we can obtain a construction method for an ordinary frame, instead of a Parseval frame, and its m-erasure recovery matrix (Algorithm 3).
Algorithm 3 The construction of the m-erasure recovery matrix for an ordinary frame |
Generate an matrix whose entries are drawn independently from the standard normal distribution, and let
where for all , and , for all . Let A be an matrix whose columns are selected independently according to the standard normal distribution, where
Let
for all . Then let be an matrix whose first m rows are , and the rest of the entries are the rows of A. Let
Let Q be the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the columns of . Let be the matrix whose rows are made up of columns of Q. Let F be the matrix whose rows are from the first n rows of and K be the matrix whose rows are from the next n rows of . Let Let .
|
Proposition 5. The above matrix M in Algorithm 3 is an m-erasure recovery matrix of the frame if Q has full rank and .
Proof. Since both
F and
K are Parseval frames and
for any
, we can obtain that
thus
is a frame.
According to Proposition 4, we know that
M in Algorithm 3 is an
m-erasure recovery matrix of the Parseval frame
F and
K. Thus
hence
And
hence
M is an
m-erasure recovery matrix of the frame
. □
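A basic fact behind Proposition 5 is that combining two Parseval frames F and K again yields a frame. As a generic illustration (our own sketch, not the exact combination used in Algorithm 3), stacking two Parseval frames gives a tight frame with bounds $A = B = 2$:

```python
import numpy as np

rng = np.random.default_rng(7)
n, N = 3, 5
# Two independent Parseval frames for R^n: rows of matrices with
# orthonormal columns (thin QR of generic Gaussian matrices).
U1, _ = np.linalg.qr(rng.standard_normal((N, n)))
U2, _ = np.linalg.qr(rng.standard_normal((N, n)))
G = np.vstack([U1, U2])             # stack the two Parseval frames

f = rng.standard_normal(n)
# sum_i |<f, g_i>|^2 = ||U1 f||^2 + ||U2 f||^2 = 2 ||f||^2: tight, A = B = 2
assert np.isclose(np.sum((G @ f)**2), 2 * np.dot(f, f))
```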
4. Recovery of Data from Rearrangements
In actual signal transmission, besides data erasures, a very common problem is data rearrangement. Hence, in this section, we impose some restrictions on M so that the constructed frame and M can recover coefficients from rearrangements if .
Firstly, Lemma 2 introduces the conditions under which a frame can recover coefficients from rearrangements.
Lemma 2 ([21]). Suppose that . Let be a frame for H and M be a matrix such that . Then can recover the sequence of frame coefficients from any of its rearrangements for any (where is the union of finitely many proper subspaces of H and is therefore of measure zero) if and only if, for any matrix consisting of the same columns of M but in a different order, rank(M) < rank .
Then we construct a matrix and a frame, and we explore under what conditions this matrix is an erasure recovery matrix, the frame is a Parseval frame, and they can recover coefficients from rearrangements (Algorithm 4).
Algorithm 4 The construction of erasure recovery matrix for rearrangements |
Generate an matrix whose entries are drawn independently from the standard normal distribution, and let
where for all . Rearrange the elements in the first row of such that . Let be the matrix obtained by rearranging in this order, and let the first element of be . For example, if , then we put in the last column, and the first element of is . Let be the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the rows of , where for all . Let , where and is a constant to be determined later. Let A be an matrix whose n columns are selected independently according to the standard normal distribution, where
Let
for all . Then let Q be an matrix whose first m rows are , and the rest of the entries are the rows of A. Let
Let be the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the columns of Q. Let be the matrix whose rows are made up of columns of . Let .
|
Proposition 6. The above matrix M in Algorithm 4 is an m-erasure recovery matrix of the Parseval frame if Q has full rank and . Moreover, M can recover coefficients from rearrangements if .
Proof. First of all, similar to Proposition 5, we know that is a Parseval frame.
Since
for all
.
Hence
and
, where
T is the analysis operator of
Furthermore, since remains a frame whenever any m elements are removed, we can obtain that M is an m-erasure recovery matrix of .
Next, we prove that M can recover coefficients from m rearrangements if . We discuss the following three situations:
Situation 1: Suppose the rearrangement occurs between the th column and the Nth column. Since , it is easy to check that
Thus
where is a matrix whose columns are the same as those of M but in a different order. According to Lemma 2, we can use M and to recover coefficients from rearrangements.
Situation 2: Suppose the rearrangement occurs both among the columns from the th to the Nth and among the first m columns, but there is no interchange between the first m columns and the rest of the columns. Without loss of generality, we assume that the first column becomes the second column; similar results can be obtained in the other cases. Since the first row of the matrix is all positive and its rows are pairwise orthogonal, there is no constant c such that
hence
Thus
so we can use M and to recover coefficients from rearrangements.
Situation 3: Suppose one of the first m columns becomes one of the last th columns. Without loss of generality, we assume that the first column becomes the column; similar results can be obtained in the other cases. Then
where a is a constant.
Then we can choose some
c such that
. Moreover,
Hence we can use M and to recover coefficients from rearrangements. □
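The role of M in this section can be illustrated numerically: intact coefficient vectors satisfy $Mc = 0$, while a generic rearrangement violates this, so the "syndrome" $Mc$ flags rearranged coefficients (full recovery additionally needs the rank condition of Lemma 2). A minimal detection sketch over the reals, with all names and sizes our own:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 4, 2
N = n + m

B = rng.standard_normal((n, n))
A = rng.standard_normal((m, n))
F = np.vstack([A @ B, B])           # special frame: f_i = sum_j a_ij f_j
M = np.hstack([np.eye(m), -A])      # M (F f) = 0 for every f

f = rng.standard_normal(n)
c = F @ f
assert np.allclose(M @ c, 0)        # intact coefficients pass the check

c_swapped = c.copy()
c_swapped[[m, N - 1]] = c_swapped[[N - 1, m]]   # rearrange two coefficients
assert not np.allclose(M @ c_swapped, 0)        # the syndrome detects it
```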
Next, we discuss whether the above M and in Algorithm 4 can recover coefficients stably from m rearrangements. We show that if and are pairwise orthogonal, where , then we can recover coefficients stably from m rearrangements in Situation 1 and Situation 2. Before that, we give a necessary and sufficient condition for a frame to recover coefficients stably from m rearrangements.
Lemma 3 ([22]). We can recover coefficients stably from m rearrangements if and only if is totally robust; that is, is a frame and, for any , and satisfying , we have .
Hence we just need to prove that in Algorithm 4 is totally robust.
Proposition 7. in Algorithm 4 is totally robust in the following two situations, and it can recover coefficients stably from m rearrangements if .
Proof. Situation 1: Suppose the rearrangement occurs between the th column and the Nth column. For any ,
and
satisfying
, we consider
Since the rearrangement occurs between the th column and the Nth column, we use to represent the set of indices that are rearranged; hence
For the same reason, we have
where
.
If , then .
If
, since
and
are pairwise orthogonal, where
, we can get that
where
c is a constant. Moreover,
where
.
Since
remains a frame whenever any
m elements are removed, we get
and
According to Lemma 3, is totally robust and can recover coefficients stably from m rearrangements.
Situation 2: Suppose the rearrangement occurs among the first m columns. Then we have
Thus
and
where
.
If , then .
If
, since there is a
j such that
and
are linearly independent, we obtain that
where
c is a constant. Moreover
Since
remains a frame whenever any
m elements are removed, we get
and
Hence is totally robust and can recover coefficients stably from m rearrangements. □