1. Introduction
In the era of data explosion, the growing volume of information puts increasingly severe pressure on data acquisition, transmission and storage devices. At the same time, data processing is accompanied by the risk of information disclosure. The loss of some data may threaten the safety of life and property, and data disclosure is now common. Therefore, in the era of big data, new data processing techniques are urgently needed to decrease the risk of data leakage during information processing and to relieve the pressure on hardware such as internal storage and sensors.
Compressed sensing (CS) theory can be used for signal acquisition, encoding and decoding [1]. No matter what type of signal, a sparse or compressible representation always exists in the original domain or in some transform domain. During transmission, a number of linear projection values far lower than that required by traditional Nyquist sampling can be used to realize exact or high-probability reconstruction of the signal. For a discrete signal $x \in \mathbb{R}^n$, the standard model of CS is
$$y = \Phi x,$$
where $\Phi \in \mathbb{R}^{m \times n}$ ($m < n$) is a measurement matrix, and $y \in \mathbb{R}^m$ is the corresponding measurement vector.
It shows that an $n$-dimensional vector $x$ can be compressed into an $m$-dimensional vector $y$ by CS. Therefore, the compression ratio can be represented by $m/n$.
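As a quick numerical illustration of the sampling model above, the following sketch compresses a sparse signal with a Gaussian measurement matrix (the sizes and the choice of a Gaussian matrix are illustrative, not the construction of this paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 256, 64          # signal length and number of measurements, m < n
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian measurement matrix

# A k-sparse test signal
k = 8
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

y = Phi @ x             # the standard CS model y = Phi x

print(y.shape)          # (64,)
print(m / n)            # compression ratio 0.25
```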
If a measurement vector $y$ is given, it is important to reconstruct $x$ from the measurement matrix $\Phi$. However, this problem is usually NP-hard [2]. If there are at most $k$ non-zero elements in a signal $x$, then the signal $x$ is $k$-sparse. Candès and Tao confirmed that if a signal $x$ is $k$-sparse and $\Phi$ meets the restricted isometry property (RIP), then $x$ can be accurately reconstructed from $y$ [3] by solving the following equation,
$$\min_{x} \|x\|_0 \quad \text{s.t.} \quad y = \Phi x,$$
where $\|x\|_0$ denotes the number of non-zero entries of $x$.
Since the $\ell_1$-norm is a convex function, it is a common method to replace $\|x\|_0$ with $\|x\|_1$ in CS, i.e.,
$$\min_{x} \|x\|_1 \quad \text{s.t.} \quad y = \Phi x,$$
where $\|x\|_1 = \sum_{i=1}^{n} |x_i|$.
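The $\ell_1$ problem above is a linear program and can be solved with off-the-shelf tools. A minimal sketch using SciPy (the sizes, seed and Gaussian matrix are illustrative): writing $x = u - v$ with $u, v \ge 0$ turns $\min \|x\|_1$ subject to $y = \Phi x$ into a standard LP.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 50, 128, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x

# Basis pursuit as a linear program: x = u - v with u, v >= 0,
# minimize sum(u) + sum(v) subject to Phi u - Phi v = y.
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_hat - x))  # reconstruction error, up to solver tolerance
```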
For a $k$-sparse signal $x \in \mathbb{R}^n$ and a matrix $\Phi \in \mathbb{R}^{m \times n}$, if there exists a constant $\delta \in (0, 1)$ such that
$$(1 - \delta)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta)\|x\|_2^2,$$
where $\|x\|_2 = \left(\sum_{i=1}^{n} x_i^2\right)^{1/2}$, then the matrix $\Phi$ is said to satisfy the RIP of order $k$, and the smallest such $\delta$, denoted $\delta_k$, is defined as the restricted isometry constant (RIC) of order $k$.
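Computing the RIC exactly is itself intractable, but the inequality above can be probed by Monte Carlo sampling of $k$-sparse unit vectors. The sketch below (sizes illustrative) only yields a lower estimate of $\delta_k$, since it checks a finite sample rather than all $k$-sparse signals:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 64, 256, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Monte Carlo lower estimate of the RIC delta_k: sample random k-sparse
# unit vectors and track how far ||Phi x||_2^2 deviates from ||x||_2^2 = 1.
delta_est = 0.0
for _ in range(2000):
    x = np.zeros(n)
    idx = rng.choice(n, k, replace=False)
    x[idx] = rng.standard_normal(k)
    x /= np.linalg.norm(x)
    delta_est = max(delta_est, abs(np.linalg.norm(Phi @ x) ** 2 - 1.0))

print(delta_est)  # lower bound on delta_k from the sampled sparse vectors
```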
Another important criterion for measurement matrices in CS is coherence [4]. Let $\Phi = (\phi_1, \phi_2, \ldots, \phi_n)$, where $\phi_i$ is the $i$-th column of $\Phi$, $1 \le i \le n$. Then, the coherence of $\Phi$ can be expressed by the following equation
$$\mu(\Phi) = \max_{1 \le i \ne j \le n} \frac{|\langle \phi_i, \phi_j \rangle|}{\|\phi_i\|_2 \, \|\phi_j\|_2},$$
where $\langle \phi_i, \phi_j \rangle$ denotes the Hermitian inner product of $\phi_i$ and $\phi_j$.
There is a relationship between the coherence and RIP of a matrix as follows.
If $\Phi$ is a unit-norm matrix with coherence $\mu$, then $\Phi$ is said to satisfy the RIP of order $k$ with $\delta_k \le (k-1)\mu$ for all $k < 1/\mu + 1$.
Furthermore, for a matrix $\Phi$ of size $m \times n$, the coherence of $\Phi$ is bounded below by the Welch bound [5]
$$\mu(\Phi) \ge \sqrt{\frac{n - m}{m(n - 1)}}.$$
The main problem in CS is to find deterministic constructions based on coherence which beat this square-root bound.
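Both quantities are easy to compute numerically. The sketch below evaluates the coherence of an arbitrary Gaussian example matrix and compares it with the Welch bound (the sizes are illustrative):

```python
import numpy as np

def coherence(Phi: np.ndarray) -> float:
    """Largest absolute inner product between distinct normalized columns."""
    cols = Phi / np.linalg.norm(Phi, axis=0)
    G = np.abs(cols.T @ cols)
    np.fill_diagonal(G, 0.0)
    return G.max()

def welch_bound(m: int, n: int) -> float:
    """Lower bound on the coherence of any m x n matrix with n >= m."""
    return np.sqrt((n - m) / (m * (n - 1)))

rng = np.random.default_rng(3)
m, n = 32, 128
Phi = rng.standard_normal((m, n))
print(coherence(Phi) >= welch_bound(m, n))  # True: Welch is a lower bound
```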
In CS theory, the measurement matrix is vital not only for guaranteeing the quality of signal sampling but also in determining the difficulty of implementing compressed sampling in hardware. There are two main types of measurement matrices. The first is random matrices, which include Gaussian matrices, Bernoulli matrices, partial Fourier matrices and so on [6,7,8,9,10,11]. Although these matrices can reconstruct the original signals well, they are hard to implement in hardware, and their elements require a lot of storage space. Some scholars have proposed using Toeplitz matrices as measurement matrices [12,13]. Although Toeplitz matrices can save some storage space, they are still difficult to implement in hardware. Deterministic matrices can improve the transmission efficiency and reduce the storage space [14,15], but they have large reconstruction errors. When constructing this kind of matrix, as long as the system and construction parameters are determined, the size and elements of the matrix are also determined. DeVore used polynomials over finite fields to construct measurement matrices in [16]. Li et al. gave a construction method of a sparse measurement matrix based on algebraic curves in [17]. The main tools for constructing deterministic measurement matrices are coding theory [18,19,20,21,22], geometry over finite fields [23,24,25,26,27,28], design theory [29,30,31,32], and so on.
Compared with CS, for signals of the same size, the advantage of semi-tensor product compressed sensing (STP-CS) is that the number of columns of the measurement matrix can be a factor of that in CS, which greatly reduces the storage space of measurement matrices. Therefore, we are more interested in STP-CS. The main contribution of this paper is to give a construction of structured random matrices and apply these matrices to STP-CS. A structured random matrix is obtained by the embedding operation of two seed matrices, one deterministic and the other random. In addition, once the system and construction parameters of a structured random matrix are given, the size of the matrix is determined, but its elements are arranged in a structured random manner. When transmitting and storing the matrix, only the system parameters, construction parameters and one random seed matrix need to be transmitted or stored, which improves transmission efficiency and reduces the storage scale of a random matrix. Compared with random matrices, structured random matrices overcome the disadvantage of large storage space and are relatively convenient for hardware implementation. Compared with deterministic matrices, structured random matrices have good reconstruction accuracy. Therefore, structured random matrices have great application value in the STP-CS model.
Aiming at existing shortcomings—a random matrix needs large storage space and is difficult to be implemented in hardware, and a deterministic matrix has large reconstruction error—the objective of this paper is to find an effective method to balance these performances. The main contributions of our work are summarized as follows:
A construction method of structured random matrices is given, where one seed matrix is the incidence matrix of a combinatorial design, and the other is obtained by Gram–Schmidt orthonormalization of a random matrix.
An STP-CS model based on the structured random matrices is proposed.
Experimental results indicate that our matrices are more suitable for the reconstruction of one-dimensional signals and two-dimensional images.
The difference between this paper and previous works [14,31] is as follows:
The measurement matrices constructed in this paper are structured random matrices, while the measurement matrices constructed in [14,31] are deterministic matrices.
This paper studies the STP-CS model, while [14] studies the block compressed sensing (BCS) model, and [31] studies the CS model.
The details of each section are as follows.
Section 2 introduces some related knowledge.
Section 3 proposes a new model, which applies the structured random matrices to STP-CS.
Section 4 presents simulation experiments, and analyzes and compares the performance of our matrices with several well-known matrices.
3. Construction of Structured Random Measurement Matrices in STP-CS
Compared with CS, for signals of the same size, the advantage of STP-CS is that the number of columns of the measurement matrix can be a factor of that in CS, which greatly reduces the storage space of measurement matrices. Moreover, for the structured random matrices, only the two seed matrices need to be stored instead of the whole matrix. To sum up, the structured random matrices require less storage space in STP-CS. In this section, we give a new model that applies the structured random matrices to STP-CS.
3.1. Construction of a $(q^2+q+1, q+1, 1)$-SBIBD
The 1-dimensional projective space over $\mathbb{F}_q$ has only $q+1$ points, so it is less interesting. Let us therefore start our discussion with the 2-dimensional projective planes $PG(2, q)$. In $PG(2, q)$, there are $q^2+q+1$ points and $q^2+q+1$ lines; every line contains $q+1$ points, and every point lies on $q+1$ lines; any two distinct points are connected by exactly one line; any two distinct lines intersect in exactly one point. It is easy to find that
(i) A finite projective plane of order $q$ is a $(q^2+q+1, q+1, 1)$-BIBD. A block is called a line in a finite projective plane.
(ii) For the parameters $v = q^2+q+1$, $k = q+1$ and $\lambda = 1$ of this BIBD, we must have $r = \lambda(v-1)/(k-1) = q+1 = k$ and, hence, $b = vr/k = v$. So, a $(q^2+q+1, q+1, 1)$-BIBD is necessarily symmetric, and it is simply denoted as a $(q^2+q+1, q+1, 1)$-SBIBD.
Based on this, for a $(q^2+q+1, q+1, 1)$-SBIBD, we assume that $P = \{p_1, p_2, \ldots, p_{q^2+q+1}\}$ is the set of points, and $\mathcal{B} = \{B_1, B_2, \ldots, B_{q^2+q+1}\}$ is the set of blocks. The incidence matrix of the $(q^2+q+1, q+1, 1)$-SBIBD is defined as $M = (m_{ij})$, whose rows are indexed by the points and whose columns are indexed by the blocks, and
$$m_{ij} = \begin{cases} 1, & p_i \in B_j, \\ 0, & \text{otherwise}. \end{cases}$$
Obviously, $M$ has the same row-degree and column-degree, both of which are $q+1$.
Theorem 2. Let $M$ be the incidence matrix of a $(q^2+q+1, q+1, 1)$-SBIBD. Then, the matrix $M$ has coherence $\mu(M) = \frac{1}{q+1}$.
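For the smallest case $q = 2$, the incidence matrix of the Fano plane can be built from the planar difference set $\{0, 1, 3\} \bmod 7$, and Theorem 2 can be checked directly (the difference-set construction is one standard way to realize this SBIBD):

```python
import numpy as np

# Incidence matrix of the Fano plane, i.e., a (7, 3, 1)-SBIBD (q = 2):
# blocks are the translates of the perfect difference set {0, 1, 3} mod 7.
q = 2
v = q**2 + q + 1            # 7 points and 7 blocks
D = [0, 1, 3]               # planar difference set mod 7
M = np.zeros((v, v), dtype=int)
for j in range(v):          # block j contains the points {d + j mod 7 : d in D}
    for d in D:
        M[(d + j) % v, j] = 1

print(M.sum(axis=0))        # every block contains q + 1 = 3 points
print(M.sum(axis=1))        # every point lies on q + 1 = 3 blocks

# Theorem 2: the normalized columns of M have coherence 1 / (q + 1).
cols = M / np.linalg.norm(M, axis=0)
G = np.abs(cols.T @ cols)
np.fill_diagonal(G, 0.0)
print(np.isclose(G.max(), 1 / (q + 1)))   # True: mu(M) = 1/3
```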
In the following, the relationship between some known projective planes and BIBD is shown in
Table 1.
3.2. Gram–Schmidt Orthonormalization
Let $A = (a_1, a_2, \ldots, a_n)$ be a random matrix, where $a_i$ denotes the $i$-th column of $A$, $1 \le i \le n$. In order to ensure that the random matrix $A$ has small coherence, all columns of $A$ are Gram–Schmidt orthonormalized, and the process is as follows.
Let
$$b_1 = a_1, \quad b_2 = a_2 - \frac{\langle a_2, b_1 \rangle}{\langle b_1, b_1 \rangle} b_1, \quad \ldots, \quad b_n = a_n - \sum_{j=1}^{n-1} \frac{\langle a_n, b_j \rangle}{\langle b_j, b_j \rangle} b_j.$$
Then, $b_1, b_2, \ldots, b_n$ are normalized, i.e.,
$$c_i = \frac{b_i}{\|b_i\|_2}, \quad 1 \le i \le n.$$
In this way, we obtain a normalized orthogonal matrix $C = (c_1, c_2, \ldots, c_n)$ from the matrix $A$.
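The orthonormalization above can be sketched as follows (a textbook classical Gram–Schmidt; in practice `np.linalg.qr` achieves the same effect):

```python
import numpy as np

def gram_schmidt(A: np.ndarray) -> np.ndarray:
    """Classical Gram-Schmidt orthonormalization of the columns of A."""
    n = A.shape[1]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        b = A[:, i].astype(float)
        for j in range(i):
            # remove the projection onto each previous orthonormal column
            b = b - (A[:, i] @ C[:, j]) * C[:, j]
        C[:, i] = b / np.linalg.norm(b)   # normalize
    return C

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 8))
C = gram_schmidt(A)
print(np.allclose(C.T @ C, np.eye(8)))  # True: C is a normalized orthogonal matrix
```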
Remark 1. According to Definition 3, let $\Phi$ be the matrix obtained by the embedding operation of the seed matrices; there are two cases in the following:
If A is a deterministic matrix, then C must also be deterministic. Therefore, Φ is a deterministic matrix;
If A is a random matrix, then C must also be random. Therefore, Φ is a structured random matrix.
There is much research on deterministic matrices and random matrices, but little on structured random matrices. Combining the advantages of random matrices and the incidence matrices of combinatorial designs, this paper constructs structured random measurement matrices and applies them in STP-CS.
3.3. Sampling Model
In the following, we consider $\Phi$ as a measurement matrix in STP-CS. Let $p$ be a positive integer satisfying $np = N$, where $n$ is the number of columns of $\Phi$. For a signal $x \in \mathbb{R}^N$, a novel semi-tensor product compressed sensing model by the embedding operation (STP-CS-EO) is given in the following:
$$y = \Phi \ltimes x;$$
then, $y$ is the corresponding measurement vector.
According to Theorem 1, it follows that
$$\Phi \ltimes x = (\Phi \otimes I_p)\,x.$$
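Assuming the standard semi-tensor product for the case where the signal length is a multiple of the number of columns of $\Phi$, i.e., $\Phi \ltimes x = (\Phi \otimes I_p)x$ with $np = N$, the sampling model can be sketched as follows; the second computation illustrates why storing only the small seed matrix suffices (sizes are illustrative):

```python
import numpy as np

def stp(Phi: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Semi-tensor product Phi ⋉ x for len(x) = n * p: apply (Phi ⊗ I_p) to x."""
    m, n = Phi.shape
    p = len(x) // n
    assert n * p == len(x)
    return np.kron(Phi, np.eye(p)) @ x

rng = np.random.default_rng(5)
m, n, p = 4, 8, 16          # a 4x8 seed matrix samples a signal of length 128
Phi = rng.standard_normal((m, n))
x = rng.standard_normal(n * p)

y = stp(Phi, x)
print(y.shape)              # (64,): m * p measurements

# Equivalent computation without ever forming the large Kronecker product:
# reshape x into n blocks of length p and multiply by the small Phi only.
y2 = (Phi @ x.reshape(n, p)).reshape(-1)
print(np.allclose(y, y2))   # True
```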
Remark 2. Let $x \in \mathbb{R}^N$ be a discrete signal, where $N$ is a positive integer. We present a comparison of CS, Kronecker product compressed sensing (KP-CS), block compressed sampling based on the embedding operation (BCS-EO), STP-CS, Kronecker product semi-tensor product compressed sensing (KP-STP-CS) and semi-tensor product compressed sensing based on the embedding operation (STP-CS-EO). Table 2 lists the storage space and sampling complexity of the measurement matrices corresponding to the above six sampling models, where sampling complexity is defined as the number of multiplications between a matrix and a vector in the sampling process. For STP-CS, $t$ is a positive integer dividing $N$. For signals of the same size, the advantage of STP-CS is that the number of columns of the measurement matrix can be a factor of that in CS. For KP-CS and KP-STP-CS, $I_p$ is a $p$-dimensional identity matrix, where $p$ is a positive integer dividing $N$. For BCS-EO and STP-CS-EO, the measurement matrices have column-degree $d$, where $d$ is a positive integer. Compared with CS, KP-CS, BCS-EO, STP-CS and KP-STP-CS, the STP-CS-EO model has lower storage space and lower sampling complexity under suitable parameter conditions. In the following, we calculate the coherence of the matrix $\Phi$.
Theorem 3. Let $M$ be the incidence matrix of a $(q^2+q+1, q+1, 1)$-SBIBD and let $C$ be a $(q+1)\times(q+1)$-dimensional normalized orthogonal random matrix; then, the embedding operation yields a construction of structured random measurement matrices $\Phi$ with coherence $\mu(\Phi) = \max_{i \ne j} |c_{ki} c_{lj}|$, where $c_{ki}$ and $c_{lj}$ are elements of $C$.
Proof of Theorem 3. According to Definition 3, the columns of $\Phi$ are obtained by embedding the columns of $C$ into the supports of the columns of $M$. Let $M = (m_1, m_2, \ldots, m_{q^2+q+1})$, where $m_i$ is the $i$-th column of $M$, $1 \le i \le q^2+q+1$, and let $C$ be a $(q+1)\times(q+1)$-dimensional normalized orthogonal random matrix. For any two distinct columns $\phi_i$ and $\phi_j$ in $\Phi$:
(1) If $\phi_i$ and $\phi_j$ correspond to the same column in $M$, then they embed two distinct columns of $C$ at the same support, and we have $\langle \phi_i, \phi_j \rangle = 0$, since $C$ is an orthogonal matrix;
(2) If $\phi_i$ and $\phi_j$ correspond to two different columns $m_s$ and $m_t$ in $M$, then, since any two blocks of the SBIBD intersect in exactly $\lambda = 1$ point, the supports of $m_s$ and $m_t$ overlap in exactly one position, and we have $|\langle \phi_i, \phi_j \rangle| = |c_{ki} c_{lj}| \le 1$, since $C$ is a normalized matrix, where $c_{ki}$ and $c_{lj}$ are the elements of matrix $C$, $1 \le k, l \le q+1$.
Therefore, $\Phi$ has coherence $\mu(\Phi) = \max_{i \ne j} |c_{ki} c_{lj}|$. □
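Definition 3 is not reproduced in this section, so the following sketch assumes the embedding suggested by the proof above: each column of $\Phi$ places the entries of one column of $C$ at the support of one column of $M$. Under that assumption, the construction and its coherence can be checked numerically for $q = 2$ (the difference-set realization of the SBIBD is an illustrative choice):

```python
import numpy as np

# Incidence matrix of a (7, 3, 1)-SBIBD (Fano plane, q = 2).
q = 2
v = q**2 + q + 1
D = [0, 1, 3]                          # planar difference set mod 7
M = np.zeros((v, v), dtype=int)
for j in range(v):
    for d in D:
        M[(d + j) % v, j] = 1

# Seed matrix C: a (q+1) x (q+1) normalized orthogonal random matrix.
rng = np.random.default_rng(6)
C, _ = np.linalg.qr(rng.standard_normal((q + 1, q + 1)))

# Assumed embedding: each column of Phi places one column of C
# at the support (one-positions) of one column of M.
cols = []
for s in range(v):
    support = np.flatnonzero(M[:, s])
    for i in range(q + 1):
        phi = np.zeros(v)
        phi[support] = C[:, i]
        cols.append(phi)
Phi = np.column_stack(cols)            # size v x v(q+1), unit-norm columns

G = np.abs(Phi.T @ Phi)
np.fill_diagonal(G, 0.0)
print(Phi.shape, G.max())              # coherence = max |c_ki * c_lj| < 1
```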