Article

Construction of Structured Random Measurement Matrices in Semi-Tensor Product Compressed Sensing Based on Combinatorial Designs

Junying Liang, Haipeng Peng, Lixiang Li and Fenghua Tong

1 Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 National Engineering Laboratory for Disaster Backup and Recovery, Beijing University of Posts and Telecommunications, Beijing 100876, China
3 Shandong Provincial Key Laboratory of Computer Networks, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250014, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(21), 8260; https://doi.org/10.3390/s22218260
Submission received: 26 September 2022 / Revised: 24 October 2022 / Accepted: 25 October 2022 / Published: 28 October 2022
(This article belongs to the Special Issue Compressed Sensing and Imaging Processing)

Abstract: A random matrix requires large storage space and is difficult to implement in hardware, while a deterministic matrix suffers from large reconstruction error. Aiming at these shortcomings, the objective of this paper is to find an effective method to balance these performances. Combining the advantages of the incidence matrices of combinatorial designs and of random matrices, this paper constructs a structured random matrix by the embedding operation of two seed matrices, in which one is the incidence matrix of a combinatorial design, and the other is obtained by Gram–Schmidt orthonormalization of a random matrix. Meanwhile, we provide a new model that applies the structured random matrices to semi-tensor product compressed sensing. Finally, experimental results show that, compared with several well-known matrices, our matrices are more suitable for the reconstruction of one-dimensional signals and two-dimensional images.

1. Introduction

In the era of data explosion, with the increasing amount of information, devices for data acquisition, transmission and storage face increasingly severe pressure. At the same time, data processing is accompanied by the risk of information disclosure. The loss of some data may threaten the safety of life and property, and data breaches are now common. Therefore, in the era of big data, a new data processing technique is urgently needed to decrease the risk of data leakage during information processing and to relieve the pressure on hardware such as internal storage and sensors.
Compressed sensing (CS) theory can be used for signal acquisition, encoding and decoding [1]. Whatever the type of signal, a sparse or compressible representation always exists in the original domain or in some transform domain. During transmission, linear projections sampled at a rate far below the traditional Nyquist rate can be used to reconstruct the signal exactly or with high probability. For a discrete signal $x \in \mathbb{R}^n$, the standard model of CS is
$$y = \Phi x,$$
where $\Phi \in \mathbb{R}^{m \times n}$ ($m < n$) is a measurement matrix, and $y \in \mathbb{R}^m$ is the corresponding measurement vector.
This shows that an $n$-dimensional vector $x$ can be compressed by CS into an $m$-dimensional vector $y$. The compression ratio $\theta$ is therefore $\theta = m/n$.
Given a measurement vector $y$, the central task is to reconstruct $x$ from the measurement matrix $\Phi$. However, this problem is in general NP-hard [2]. If a signal $x$ has at most $k$ ($k \ll n$) non-zero elements, then $x$ is called $k$-sparse. Candès and Tao proved that if a signal $x$ is $k$-sparse and $\Phi$ satisfies the restricted isometry property (RIP), then $x$ can be accurately reconstructed from $y$ [3] by solving
$$\min_{x \in \mathbb{R}^n} \|x\|_0 \quad \text{s.t.} \quad y = \Phi x,$$
where $\|x\|_0 = |\{i \mid x_i \neq 0\}|$.
Since the $\ell_1$-norm is a convex function, it is common in CS to replace $\|x\|_0$ with $\|x\|_1$, i.e.,
$$\min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{s.t.} \quad y = \Phi x,$$
where $\|x\|_1 = |x_1| + |x_2| + \cdots + |x_n|$.
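As a concrete illustration, the $\ell_1$ problem above can be solved as a linear program by splitting $x = u - v$ with $u, v \ge 0$. The following is a minimal sketch under that standard reformulation; basis_pursuit() is our illustrative helper name, and the demo matrix and signal are assumptions, not data from this paper.

```python
# Minimal basis pursuit sketch: min ||x||_1  s.t.  y = Phi x,
# recast as a linear program with x = u - v, u >= 0, v >= 0.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    m, n = Phi.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])      # equality constraint: Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))
x = np.zeros(50)
x[[3, 17]] = [1.5, -2.0]               # a 2-sparse test signal
print(np.allclose(basis_pursuit(Phi, Phi @ x), x, atol=1e-6))  # typically True
```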
For a $k$-sparse signal $x \in \mathbb{R}^n$ and a matrix $\Phi \in \mathbb{R}^{m \times n}$, if there exists a constant $0 \le \delta_k < 1$ such that
$$(1 - \delta_k)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_k)\|x\|_2^2,$$
where $\|x\|_2^2 = x_1^2 + x_2^2 + \cdots + x_n^2$, then the matrix $\Phi$ is said to satisfy the RIP of order $k$, and the smallest such $\delta_k$ is called the restricted isometry constant (RIC) of order $k$.
Coherence [4] is another important criterion for measurement matrices in CS.
Let $\Phi = (\Phi_1, \Phi_2, \dots, \Phi_n)$, where $\Phi_i$ is the $i$-th column of $\Phi$, $1 \le i \le n$. Then the coherence of $\Phi$ is
$$\mu(\Phi) = \max_{i \neq j} \frac{|\langle \Phi_i, \Phi_j \rangle|}{\|\Phi_i\|_2 \|\Phi_j\|_2}, \quad 1 \le i, j \le n,$$
where $\langle \Phi_i, \Phi_j \rangle$ denotes the Hermitian inner product of $\Phi_i$ and $\Phi_j$.
There is the following relationship between the coherence and the RIP of a matrix.
If $\Phi$ is a unit-norm matrix and $\mu = \mu(\Phi)$, then $\Phi$ satisfies the RIP of order $k$ with $\delta_k \le \mu(k - 1)$ for all $k < \frac{1}{\mu} + 1$.
Furthermore, for an $m \times n$ matrix $\Phi$, the coherence of $\Phi$ is bounded below by the Welch bound [5]:
$$\mu(\Phi) \ge \sqrt{\frac{n - m}{m(n - 1)}}.$$
The main problem in CS is to find deterministic constructions, evaluated by coherence, that beat this square-root bound.
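For reference, the following small sketch computes the coherence of a matrix and the corresponding Welch bound; the helper names coherence() and welch_bound() are ours.

```python
# Coherence mu(Phi) and the Welch lower bound for an m x n matrix.
import numpy as np

def coherence(Phi):
    G = Phi / np.linalg.norm(Phi, axis=0)   # normalize columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)             # ignore self-correlations
    return gram.max()

def welch_bound(m, n):
    return np.sqrt((n - m) / (m * (n - 1)))

rng = np.random.default_rng(1)
Phi = rng.standard_normal((16, 64))
print(coherence(Phi), welch_bound(16, 64))  # coherence is always >= the bound
```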
In CS theory, the measurement matrix is vital both for guaranteeing the quality of signal sampling and for determining the difficulty of hardware implementation of compressed sampling. There are two main types of measurement matrices. The first type is random matrices, including Gaussian matrices, Bernoulli matrices, partial Fourier matrices and so on [6,7,8,9,10,11]. Although these matrices reconstruct the original signals well, they are hard to implement in hardware, and their elements require a lot of storage space. Some scholars have proposed using Toeplitz matrices as measurement matrices [12,13]; although Toeplitz matrices save some storage space, they are still difficult to implement in hardware. The second type is deterministic matrices, which improve transmission efficiency and reduce storage space [14,15] but have large reconstruction errors. When constructing such matrices, once the system and construction parameters are determined, the size and elements of the matrix are also determined. DeVore used polynomials over the finite field $\mathbb{F}_p$ to construct measurement matrices in [16]. Li et al. gave a construction of sparse measurement matrices based on algebraic curves in [17]. The main tools for constructing deterministic measurement matrices are coding theory [18,19,20,21,22], geometry over finite fields [23,24,25,26,27,28], design theory [29,30,31,32], and so on.
Compared with CS, for signals of the same size, the advantage of semi-tensor product compressed sensing (STP-CS) is that the number of columns of the measurement matrix can be a divisor of the number required in CS, which greatly reduces the storage space of the measurement matrix. Therefore, we are particularly interested in STP-CS. The main contribution of this paper is a construction of structured random matrices and their application to STP-CS. A structured random matrix is obtained by the embedding operation of two seed matrices, one deterministic and one random. Once the system and construction parameters are given, the size of the matrix is determined, but its elements are arranged in a structured random manner. When transmitting or storing the matrix, only the system parameters, the construction parameters and the random seed matrix need to be transmitted or stored, which improves transmission efficiency and reduces the storage cost relative to a fully random matrix. Compared with random matrices, structured random matrices overcome the disadvantage of large storage space and are relatively convenient for hardware implementation; compared with deterministic matrices, they achieve good reconstruction accuracy. Therefore, structured random matrices have great application value in the STP-CS model.
Aiming at the existing shortcomings (a random matrix requires large storage space and is difficult to implement in hardware, while a deterministic matrix has large reconstruction error), the objective of this paper is to find an effective method to balance these performances. The main contributions of our work are summarized as follows:
  • A construction method of structured random matrices is given, in which one seed matrix is the incidence matrix of a combinatorial design, and the other is obtained by Gram–Schmidt orthonormalization of a random matrix.
  • An STP-CS model based on the structured random matrices is proposed.
  • Experimental results indicate that our matrices are more suitable for the reconstruction of one-dimensional signals and two-dimensional images.
The differences between this paper and the previous works [14,31] are as follows:
  • The measurement matrices constructed in this paper are structured random matrices, while the measurement matrices constructed in [14,31] are deterministic matrices.
  • This paper studies the STP-CS model, while [14] studies the block compressed sensing (BCS) model, and [31] studies the CS model.
The remainder of this paper is organized as follows. Section 2 introduces the necessary preliminaries. Section 3 proposes a new model that applies the structured random matrices to STP-CS. Section 4 presents simulation experiments and analyzes and compares the performance of our matrices with several well-known matrices. Section 5 concludes the paper.

2. Preliminaries

In this section, projective geometry [33], balanced incomplete block designs [34], the embedding operation of binary matrices [35] and semi-tensor product compressed sensing [36] are introduced.

2.1. Projective Geometry

Let $\mathbb{F}_q$ be the finite field with $q$ elements, where $q$ is a prime power, and let $\mathbb{F}_q^{(n+1)}$ be the $(n+1)$-dimensional row vector space over $\mathbb{F}_q$, where $n$ is a positive integer. The 1-dimensional, 2-dimensional, 3-dimensional, and $n$-dimensional vector subspaces of $\mathbb{F}_q^{(n+1)}$ are called points, lines, planes, and hyperplanes, respectively. In general, the $(r+1)$-dimensional vector subspaces of $\mathbb{F}_q^{(n+1)}$ are called projective $r$-flats, or simply $r$-flats ($0 \le r \le n$); thus 0-flats, 1-flats, 2-flats, and $(n-1)$-flats are points, lines, planes, and hyperplanes, respectively. If an $r$-flat, as a vector subspace, contains or is contained in an $s$-flat, then the $r$-flat is said to be incident with the $s$-flat. The set of points, i.e., the set of 1-dimensional vector subspaces of $\mathbb{F}_q^{(n+1)}$, together with the $r$-flats ($0 \le r \le n$) and the incidence relation defined above, is called the $n$-dimensional projective space over $\mathbb{F}_q$, denoted $\mathrm{PG}(n, \mathbb{F}_q)$.

2.2. Balanced Incomplete Block Design

Definition 1.
Let $v, k, b, r, \lambda$ be positive integers with $v > k \ge 2$. For a finite set $x = \{x_1, x_2, \dots, x_v\}$ and a family $B = \{B_1, B_2, \dots, B_b\}$ of subsets of $x$, where $x_1, x_2, \dots, x_v$ are called points and $B_1, B_2, \dots, B_b$ are called blocks, suppose that
(1) each block contains exactly $k$ ($k < v$) points;
(2) each point of $x$ appears in exactly $r$ blocks;
(3) each pair of distinct points is contained in exactly $\lambda$ blocks.
Then $(x, B)$ is called a $(v, b, r, k, \lambda)$ balanced incomplete block design, or simply a $(v, b, r, k, \lambda)$-BIBD.
Definition 2.
For a $(v, b, r, k, \lambda)$-BIBD, if $b = v$ (equivalently, $r = k$, or $\lambda(v - 1) = k^2 - k$), then the design is called symmetric. A symmetric BIBD is simply denoted by SBIBD.

2.3. Embedding Operation of Binary Matrix

Definition 3.
Let $H = (h_1, h_2, \dots, h_n)$ be an $m \times n$ binary matrix in which each column $h_i$ contains exactly $d$ ones, $1 \le i \le n$, and let $A$ be a $d \times n_1$ matrix. Each 1-entry of $h_i$ is replaced by a distinct row of $A$ (in order), and each 0-entry is replaced by the $1 \times n_1$ zero row vector $(0, 0, \dots, 0)$. The resulting matrix $\Phi$ is written
$$\Phi = H \circledast A,$$
and $\Phi$ is an $m \times nn_1$ matrix, where $\circledast$ denotes the embedding operation of the matrix $A$ in the matrix $H$.
The specific process of the above embedding operation is shown in Figure 1.
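A minimal sketch of Definition 3 follows, assuming the embedding notation $\circledast$ introduced above; embed() is our illustrative helper name.

```python
# Embedding operation Phi = H (*) A from Definition 3: each column of the
# binary matrix H (with exactly d ones) expands into an m x n1 block whose
# rows at the positions of the ones are, in order, the rows of A.
import numpy as np

def embed(H, A):
    m, n = H.shape
    d, n1 = A.shape
    blocks = []
    for j in range(n):
        ones = np.flatnonzero(H[:, j])
        assert ones.size == d, "each column of H must contain exactly d ones"
        block = np.zeros((m, n1))
        block[ones, :] = A           # the k-th one receives the k-th row of A
        blocks.append(block)
    return np.hstack(blocks)         # Phi has size m x (n * n1)
```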

2.4. Semi-Tensor Product Compressed Sensing

Definition 4.
Let $x$ be an $np$-dimensional row vector and let $y = [Y_1, \dots, Y_p]^T$ be a $p$-dimensional column vector. Split $x$ into $p$ blocks $x^1, \dots, x^p$, each of dimension $n$. The semi-tensor product (STP) is defined as
$$x \ltimes y = \sum_{i=1}^{p} x^i Y_i \in \mathbb{R}^{1 \times n}.$$
Definition 5.
Let $A \in \mathbb{R}^{m \times np}$ and $B \in \mathbb{R}^{p \times q}$; then the STP of $A$ and $B$ is defined as
$$C = A \ltimes B,$$
where $C$ consists of $m \times q$ blocks, $C = (c_{i,j})$, and each block is
$$c_{i,j} = a^i \ltimes b_j, \quad i = 1, 2, \dots, m, \; j = 1, 2, \dots, q,$$
where $a^i$ is the $i$-th row of $A$ and $b_j$ is the $j$-th column of $B$.
For a signal $x \in \mathbb{R}^p$ and a measurement matrix $\Phi \in \mathbb{R}^{m \times n}$ ($m < n$), the STP-CS model [36] is
$$y = \Phi \ltimes x,$$
where $y \in \mathbb{R}^{\frac{mp}{n}}$ and $p = \operatorname{lcm}(n, p)$, i.e., $n$ divides $p$.
Equivalently, the STP-CS model can be written with the Kronecker product as
$$y = (\Phi \otimes I_{\frac{p}{n}}) x,$$
where $I_{\frac{p}{n}}$ is the $\frac{p}{n} \times \frac{p}{n}$ identity matrix, $\frac{p}{n}$ is a positive integer, and $\otimes$ denotes the Kronecker product.
Theorem 1.
The measurement matrix $\Phi \otimes I_{\frac{p}{n}}$ has coherence
$$\mu(\Phi \otimes I_{\frac{p}{n}}) = \mu(\Phi).$$
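Theorem 1 can be checked numerically; the following sketch (with our own coherence() helper) verifies that taking the Kronecker product with an identity matrix leaves the coherence unchanged.

```python
# Numerical check of Theorem 1: mu(Phi kron I_k) = mu(Phi).
import numpy as np

def coherence(Phi):
    G = Phi / np.linalg.norm(Phi, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

rng = np.random.default_rng(2)
Phi = rng.standard_normal((8, 24))
for k in (2, 3, 4):
    assert np.isclose(coherence(np.kron(Phi, np.eye(k))), coherence(Phi))
```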

3. Construction of Structured Random Measurement Matrices in STP-CS

Compared with CS, for signals of the same size, the advantage of STP-CS is that the number of columns of the measurement matrix can be a divisor of the number required in CS, which greatly reduces the storage space of the measurement matrix. Moreover, compared with general measurement matrices in STP-CS, the structured random matrices require storing only two seed matrices instead of the whole matrix. In summary, the structured random matrices have lower storage cost in STP-CS. In this section, we give a new model that applies the structured random matrices to STP-CS.

3.1. Construction of ( q 2 + q + 1 , q + 1 , 1 ) -SBIBD

The 1-dimensional projective space over $\mathbb{F}_q$ has only $q + 1$ points, so it is of little interest. We therefore start with the 2-dimensional projective planes $\mathrm{PG}(2, \mathbb{F}_q)$. In $\mathrm{PG}(2, \mathbb{F}_q)$, there are $q^2 + q + 1$ points and $q^2 + q + 1$ lines; every line contains $q + 1$ points, and every point lies on $q + 1$ lines; any two distinct points are joined by exactly one line; and any two distinct lines intersect in exactly one point. It is easy to see that
(i) A finite projective plane of order $q$ is a $(q^2 + q + 1, q + 1, 1)$-BIBD, in which a block is called a line.
(ii) For the parameter set $v = q^2 + q + 1$, $k = q + 1$, $\lambda = 1$ of a BIBD, we must have $r = \frac{\lambda(v - 1)}{k - 1} = q + 1 = k$ and, hence, $b = v$. So, a $(q^2 + q + 1, q + 1, 1)$-BIBD is necessarily symmetric, and it is simply denoted by $(q^2 + q + 1, q + 1, 1)$-SBIBD.
Based on this, for a $(q^2 + q + 1, q + 1, 1)$-SBIBD, assume that $x = \{x_1, x_2, \dots, x_{q^2+q+1}\}$ is the set of points and $B = \{B_1, B_2, \dots, B_{q^2+q+1}\}$ is the set of blocks. The incidence matrix of the $(q^2 + q + 1, q + 1, 1)$-SBIBD is defined as
$$M = (m_{i,j})_{1 \le i, j \le q^2+q+1},$$
whose rows are indexed by $x_1, x_2, \dots, x_{q^2+q+1}$ and whose columns are indexed by $B_1, B_2, \dots, B_{q^2+q+1}$, with
$$m_{i,j} = \begin{cases} 1, & \text{if } x_i \in B_j, \\ 0, & \text{otherwise}. \end{cases}$$
Obviously, $M$ has equal row-degree and column-degree, both of which are $q + 1$.
Theorem 2.
Let $M$ be the incidence matrix of a $(q^2 + q + 1, q + 1, 1)$-SBIBD. Then $M$ has coherence $\mu(M) = \frac{1}{q+1}$.
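As a small worked example, the following sketch builds the incidence matrix of the $(7, 3, 1)$-SBIBD (the Fano plane, $q = 2$) from $\mathrm{PG}(2, \mathbb{F}_2)$ and checks Theorem 2; the realization via the orthogonality relation $a \cdot x = 0$ is one standard construction, used here purely for illustration.

```python
# Incidence matrix M of the Fano plane: points and lines are both indexed by
# the nonzero vectors of F_2^3; point x lies on line a iff a . x = 0 (mod 2).
import numpy as np
from itertools import product

pts = [np.array(v) for v in product((0, 1), repeat=3) if any(v)]
M = np.array([[float((a @ x) % 2 == 0) for a in pts] for x in pts])

print(M.sum(axis=0))        # every line (column) contains q + 1 = 3 points
G = M / np.linalg.norm(M, axis=0)
gram = np.abs(G.T @ G)
np.fill_diagonal(gram, 0.0)
print(gram.max())           # 1 / (q + 1) = 1/3, as Theorem 2 predicts
```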
In the following, the relationship between some known projective planes and BIBD is shown in Table 1.

3.2. Gram–Schmidt Orthonormalization

Let $A = (a_1, a_2, \dots, a_{q+1})$ be a random matrix, where $a_i \in \mathbb{R}^{q+1}$ denotes the $i$-th column of $A$, $1 \le i \le q+1$. To ensure that the random seed matrix yields small coherence, the columns of $A$ are orthonormalized by the Gram–Schmidt process as follows.
Let $b_1 = a_1$,
$$b_2 = a_2 - \frac{\langle a_2, b_1 \rangle}{\langle b_1, b_1 \rangle} b_1, \quad \dots, \quad b_{q+1} = a_{q+1} - \sum_{i=1}^{q} \frac{\langle a_{q+1}, b_i \rangle}{\langle b_i, b_i \rangle} b_i.$$
Then $b_1, b_2, \dots, b_{q+1}$ are normalized, i.e.,
$$c_i = \frac{b_i}{\|b_i\|_2}.$$
In this way, we obtain an orthonormal matrix $C = (c_1, c_2, \dots, c_{q+1})$ from the matrix $A$.
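A minimal sketch of this orthonormalization step follows; gram_schmidt() is our helper name. In practice a QR decomposition (e.g., numpy.linalg.qr) is numerically more stable, but the classical process below matches the formulas above.

```python
# Classical Gram-Schmidt orthonormalization of the columns of a random A.
import numpy as np

def gram_schmidt(A):
    B = A.astype(float).copy()
    for j in range(1, B.shape[1]):
        for i in range(j):             # subtract projections onto b_1..b_{j-1}
            B[:, j] -= (B[:, j] @ B[:, i]) / (B[:, i] @ B[:, i]) * B[:, i]
    return B / np.linalg.norm(B, axis=0)    # c_i = b_i / ||b_i||_2

rng = np.random.default_rng(3)
C = gram_schmidt(rng.standard_normal((9, 9)))   # q + 1 = 9 when q = 8
print(np.allclose(C.T @ C, np.eye(9)))          # True: C is orthonormal
```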
Remark 1.
According to Definition 3, let $\Phi = M \circledast C \in \mathbb{R}^{(q^2+q+1) \times (q^3+2q^2+2q+1)}$; there are two cases:
  • If $A$ is a deterministic matrix, then $C$ is also deterministic, and therefore $\Phi$ is a deterministic matrix;
  • If $A$ is a random matrix, then $C$ is also random, and therefore $\Phi$ is a structured random matrix.
There has been much research on deterministic matrices and on random matrices, but little on structured random matrices. Combining the advantages of random matrices and of the incidence matrices of combinatorial designs, this paper constructs structured random measurement matrices and applies them to STP-CS.

3.3. Sampling Model

In the following, we take $\Phi = M \circledast C$ as the measurement matrix in STP-CS. Let $p$ be a positive integer satisfying $p = \operatorname{lcm}(q^3 + 2q^2 + 2q + 1, p)$, i.e., $q^3 + 2q^2 + 2q + 1$ divides $p$. For a signal $x \in \mathbb{R}^p$, a novel semi-tensor product compressed sensing model based on the embedding operation (STP-CS-EO) is given by
$$y = \Phi \ltimes x = \left( \Phi \otimes I_{\frac{p}{q^3+2q^2+2q+1}} \right) x = \left[ (M \circledast C) \otimes I_{\frac{p}{q^3+2q^2+2q+1}} \right] x,$$
so that $y \in \mathbb{R}^{\frac{p}{q+1}}$.
According to Theorem 1,
$$\mu(\Phi) = \mu\left[ (M \circledast C) \otimes I_{\frac{p}{q^3+2q^2+2q+1}} \right] = \mu(M \circledast C).$$
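To make the sampling model concrete, here is a hypothetical end-to-end sketch for $q = 2$, reusing the embed() and gram_schmidt() helpers and the Fano-plane incidence matrix M from the earlier sketches; the dimensions follow the derivation above.

```python
# End-to-end STP-CS-EO sampling for q = 2: Phi = M (*) C, then y = Phi |x x
# realized through the Kronecker identity of Theorem 1.
import numpy as np

rng = np.random.default_rng(4)
C = gram_schmidt(rng.standard_normal((3, 3)))   # (q+1) x (q+1) orthonormal seed
Phi = embed(M, C)                               # (q^2+q+1) x (q^3+2q^2+2q+1)
print(Phi.shape)                                # (7, 21)

p = 42                                          # any multiple of n = 21 works
x = rng.standard_normal(p)
y = np.kron(Phi, np.eye(p // 21)) @ x           # samples the whole signal
print(y.shape)                                  # (14,) = p / (q + 1)
```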
Remark 2.
Let $x \in \mathbb{R}^N$ be a discrete signal, where $N$ is a positive integer, and let $y \in \mathbb{R}^m$. We compare CS, Kronecker product compressed sensing (KP-CS), block compressed sampling based on the embedding operation (BCS-EO), STP-CS, Kronecker product semi-tensor product compressed sensing (KP-STP-CS) and semi-tensor product compressed sensing based on the embedding operation (STP-CS-EO). Table 2 lists the storage space and sampling complexity of the measurement matrices of these six sampling models, where sampling complexity is defined as the number of multiplications between a matrix and a vector in the sampling process. For STP-CS, $t$ is a positive integer satisfying $t \mid m$ and $t \mid N$; for signals of the same size, the number of columns of the STP-CS measurement matrix can thus be a divisor of that in CS. For KP-CS and KP-STP-CS, $I_p$ is the $p \times p$ identity matrix, where $p$ is a positive integer satisfying $p \mid m$ and $p \mid N$. For BCS-EO and STP-CS-EO, $H_1$ and $H_2$ have column-degree $d$, and $A_1$ and $A_2$ are $d \times d$ matrices, where $d$ is a positive integer satisfying $d \mid N$. Compared with CS, KP-CS, BCS-EO, STP-CS and KP-STP-CS, the STP-CS-EO model has lower storage space and lower sampling complexity if $t < p$, $N < \frac{dt^3p}{d - p^2}$, $m > \frac{d^3p^2t^2}{N(d - p^2)}$; or $t < p$, $N > \frac{dt^3p}{d - p^2}$, $m > \frac{dp^2}{t}$; or $t > p$, $N > \frac{d^2pt^2}{d - p^2}$, $m > dp$; or $t > p$, $N < \frac{d^2pt^2}{d - p^2}$, $m > \frac{d^3p^2t^2}{N(d - p^2)}$.
In the following, we calculate the coherence of the matrix $M \circledast C$.
Theorem 3.
Let $M$ be the incidence matrix of a $(q^2 + q + 1, q + 1, 1)$-SBIBD and let $C = (c_{s,t})_{1 \le s, t \le q+1}$ be a $(q+1) \times (q+1)$ orthonormal random matrix. Then the $(q^2+q+1) \times (q^3+2q^2+2q+1)$ structured random measurement matrix $\Phi = M \circledast C$ has coherence $\mu(\Phi) = \max |c_{s,t} \cdot c_{s_1,t_1}|$, where $1 \le s, s_1 \le q+1$ and $1 \le t, t_1 \le q+1$.
Proof of Theorem 3.
By the definition of $\Phi = M \circledast C$, the matrix $\Phi$ has size $(q^2+q+1) \times (q^3+2q^2+2q+1)$. Let $M = (m_1, m_2, \dots, m_{q^2+q+1})$, where $m_i$ is the $i$-th column of $M$, and let $C = (c_{s,t})_{1 \le s, t \le q+1}$ be a $(q+1) \times (q+1)$ orthonormal random matrix. For any two distinct columns $\phi_{j_1}$ and $\phi_{j_2}$ of $\Phi$:
(1) If $\phi_{j_1}$ and $\phi_{j_2}$ arise from the same column $m_{i_1}$ of $M$, then
$$\frac{|\langle \phi_{j_1}, \phi_{j_2} \rangle|}{\|\phi_{j_1}\|_2 \|\phi_{j_2}\|_2} = 0,$$
since the columns of $C$ are orthogonal;
(2) If $\phi_{j_1}$ and $\phi_{j_2}$ arise from two different columns $m_{i_1}$ and $m_{i_2}$ of $M$, then, since any two columns of $M$ share exactly one common 1-position ($\lambda = 1$) and $C$ is normalized,
$$\frac{|\langle \phi_{j_1}, \phi_{j_2} \rangle|}{\|\phi_{j_1}\|_2 \|\phi_{j_2}\|_2} = |c_{s,t} \cdot c_{s_1,t_1}|,$$
where $c_{s,t}$ and $c_{s_1,t_1}$ are the entries of $C$ placed at the shared position, $1 \le s, s_1 \le q+1$, $1 \le t, t_1 \le q+1$.
Therefore, $\Phi$ has coherence $\mu(\Phi) = \max |c_{s,t} \cdot c_{s_1,t_1}|$. □

4. Experimental Simulation

In this section, our measurement matrices are compared with several well-known matrices. Simulation results show that our matrices provide an effective signal processing method.

4.1. Reconstruction of 1-Dimensional Signals

Let $x$ be a signal. We select the orthogonal matching pursuit (OMP) [37] algorithm and the basis pursuit (BP) [38] algorithm to solve the $\ell_1$-minimization problem, and denote the solution by $x^*$. The reconstruction signal-to-noise ratio (SNR) of $x^*$ is defined as
$$\mathrm{SNR}(x^*) = 10 \cdot \lg\left( \frac{\|x\|_2^2}{\|x - x^*\|_2^2} \right) \, \mathrm{dB}.$$
For noiseless recovery, if $\mathrm{SNR}(x^*) \ge 100$ dB, the signal $x$ is said to be perfectly recovered. For every sparsity order, we reconstruct 1000 noiseless signals to calculate the perfect recovery percentage.
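The following small helpers, under our own naming, implement this SNR criterion.

```python
# Reconstruction SNR; recovery counts as perfect when SNR(x*) >= 100 dB.
import numpy as np

def recon_snr(x, x_star):
    return 10 * np.log10(np.sum(x**2) / np.sum((x - x_star)**2))

def is_perfect(x, x_star, threshold_db=100.0):
    return recon_snr(x, x_star) >= threshold_db
```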
Example 1.
Let $M_1$ be the incidence matrix of the $(73, 9, 1)$-SBIBD. We construct three structured random measurement matrices $\Phi_1 = M_1 \circledast C_1$, $\Phi_2 = M_1 \circledast C_2$ and $\Phi_3 = M_1 \circledast C_3$, where $C_1$, $C_2$ and $C_3$ are the orthonormalized versions of a $9 \times 9$ Gaussian, Bernoulli and Toeplitz matrix, respectively.
For the measurement matrices $\Phi_1 \otimes I_2$, $\Phi_1 \otimes I_3$ and $\Phi_1 \otimes I_4$, Figure 2a–c show the perfect recovery percentages at different sparsity orders for $1314 \times 1$, $1971 \times 1$ and $2628 \times 1$ sparse signals, respectively. Under OMP, the reconstruction performance of $\Phi_1 \otimes I_2$, $\Phi_1 \otimes I_3$ and $\Phi_1 \otimes I_4$ is clearly better than that of Gaussian$(73 \times 657) \otimes I_2$, Gaussian$(73 \times 657) \otimes I_3$ and Gaussian$(73 \times 657) \otimes I_4$, respectively; under BP, the two families perform similarly.
For the measurement matrices $\Phi_2 \otimes I_2$, $\Phi_2 \otimes I_3$ and $\Phi_2 \otimes I_4$, Figure 3a–c show the perfect recovery percentages at different sparsity orders for $1314 \times 1$, $1971 \times 1$ and $2628 \times 1$ sparse signals, respectively. Under OMP, the reconstruction performance of $\Phi_2 \otimes I_2$, $\Phi_2 \otimes I_3$ and $\Phi_2 \otimes I_4$ is clearly better than that of Bernoulli$(73 \times 657) \otimes I_2$, Bernoulli$(73 \times 657) \otimes I_3$ and Bernoulli$(73 \times 657) \otimes I_4$, respectively; under BP, the two families perform similarly.
For the measurement matrices $\Phi_3 \otimes I_2$, $\Phi_3 \otimes I_3$ and $\Phi_3 \otimes I_4$, Figure 4a–c show the perfect recovery percentages at different sparsity orders for $1314 \times 1$, $1971 \times 1$ and $2628 \times 1$ sparse signals, respectively. Under OMP, the reconstruction performance of $\Phi_3 \otimes I_2$, $\Phi_3 \otimes I_3$ and $\Phi_3 \otimes I_4$ is clearly better than that of Toeplitz$(73 \times 657) \otimes I_2$, Toeplitz$(73 \times 657) \otimes I_3$ and Toeplitz$(73 \times 657) \otimes I_4$, respectively; under BP, the two families perform similarly.
Example 2.
Let $e \in \mathbb{R}^p$ be additive white Gaussian noise with SNR 50 dB. Figure 5 shows the reconstruction SNR comparison of $\Phi_1 \otimes I_2$, $\Phi_1 \otimes I_3$ and $\Phi_1 \otimes I_4$ with Gaussian$(73 \times 657) \otimes I_2$, Gaussian$(73 \times 657) \otimes I_3$ and Gaussian$(73 \times 657) \otimes I_4$ under OMP and BP, respectively. Under OMP, the reconstruction SNR of $\Phi_1 \otimes I_2$, $\Phi_1 \otimes I_3$ and $\Phi_1 \otimes I_4$ is better than that of the corresponding Gaussian matrices; under BP, it is similar.
Figure 6 shows the reconstruction SNR comparison of $\Phi_2 \otimes I_2$, $\Phi_2 \otimes I_3$ and $\Phi_2 \otimes I_4$ with Bernoulli$(73 \times 657) \otimes I_2$, Bernoulli$(73 \times 657) \otimes I_3$ and Bernoulli$(73 \times 657) \otimes I_4$ under OMP and BP, respectively. Under OMP, the reconstruction SNR of $\Phi_2 \otimes I_2$, $\Phi_2 \otimes I_3$ and $\Phi_2 \otimes I_4$ is better than that of the corresponding Bernoulli matrices; under BP, it is similar.
Figure 7 shows the reconstruction SNR comparison of $\Phi_3 \otimes I_2$, $\Phi_3 \otimes I_3$ and $\Phi_3 \otimes I_4$ with Toeplitz$(73 \times 657) \otimes I_2$, Toeplitz$(73 \times 657) \otimes I_3$ and Toeplitz$(73 \times 657) \otimes I_4$ under OMP and BP, respectively. Under OMP, the reconstruction SNR of $\Phi_3 \otimes I_2$, $\Phi_3 \otimes I_3$ and $\Phi_3 \otimes I_4$ is better than that of the corresponding Toeplitz matrices; under BP, it is similar.
In applications, the original signal is always disturbed by channel noise. For noisy recovery, the original signal $x \in \mathbb{R}^p$ is polluted by additive white Gaussian noise $e \in \mathbb{R}^p$. Therefore, if $\Phi \in \mathbb{R}^{(q^2+q+1) \times (q^3+2q^2+2q+1)}$ is the measurement matrix, then
$$y = \Phi \ltimes (x + e) = \left[ \Phi \otimes I_{\frac{p}{q^3+2q^2+2q+1}} \right] (x + e),$$
where $y \in \mathbb{R}^{\frac{p}{q+1}}$ and $p = \operatorname{lcm}(q^3 + 2q^2 + 2q + 1, p)$. For every sparsity order, we calculate the reconstruction SNR by reconstructing 1000 noisy signals.
Furthermore, original signals are usually only approximately sparse, and the measurement vector may also be polluted by noise in the measurement domain. Hence, we study the noise recovery performance of our matrices in the practical STP-CS model
$$y = \Phi \ltimes (x + e_d) + e_m,$$
where $e_m \in \mathbb{R}^{\frac{p}{q+1}}$ denotes noise in the measurement domain, and $e_d \in \mathbb{R}^p$ denotes noise in the data domain.
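A sketch of this noisy sampling model follows; awgn_like() is our helper for scaling white Gaussian noise to a target SNR, and the random matrix here merely stands in for $M \circledast C$.

```python
# Noisy model y = Phi |x (x + e_d) + e_m with noise at a target SNR (dB).
import numpy as np

def awgn_like(signal, snr_db, rng):
    noise = rng.standard_normal(signal.shape)
    scale = np.sqrt(np.mean(signal**2) / (10**(snr_db / 10) * np.mean(noise**2)))
    return noise * scale

rng = np.random.default_rng(6)
Phi = rng.standard_normal((7, 21))          # stands in for M (*) C with q = 2
x = rng.standard_normal(42)                 # p = 42 is a multiple of n = 21
Phi_big = np.kron(Phi, np.eye(42 // 21))    # Phi kron I_{p/n} realizes the STP
y_clean = Phi_big @ (x + awgn_like(x, 50, rng))   # data-domain noise e_d
y = y_clean + awgn_like(y_clean, 50, rng)         # measurement-domain noise e_m
```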
Example 3.
Let $e_d \in \mathbb{R}^p$ and $e_m \in \mathbb{R}^{\frac{p}{q+1}}$ be additive white Gaussian noise with SNR 20–100 dB. Figure 8, Figure 9 and Figure 10 compare the average recovery SNR of $\Phi_1 \otimes I_2$, $\Phi_2 \otimes I_2$ and $\Phi_3 \otimes I_2$ with that of Gaussian$(73 \times 657) \otimes I_2$, Bernoulli$(73 \times 657) \otimes I_2$ and Toeplitz$(73 \times 657) \otimes I_2$ under OMP and BP, respectively. The stability and robustness of $\Phi_1 \otimes I_2$, $\Phi_2 \otimes I_2$ and $\Phi_3 \otimes I_2$ are empirically similar to those of Gaussian$(73 \times 657) \otimes I_2$, Bernoulli$(73 \times 657) \otimes I_2$ and Toeplitz$(73 \times 657) \otimes I_2$, respectively.

4.2. Reconstruction of 2-Dimensional Images

In this subsection, we select the orthogonal matching pursuit (OMP), basis pursuit (BP), iterative soft thresholding (IST) [39] and subspace pursuit (SP) [40] algorithms for testing. When CS reconstructs a gray image, it is hard to judge the distortion of the reconstructed image by the naked eye or other subjective means. Hence, an objective measure of reconstruction quality is needed: the peak signal-to-noise ratio (PSNR), defined as
$$\mathrm{PSNR} = 10 \cdot \lg\left( \frac{255^2}{\mathrm{MSE}} \right) \, \mathrm{dB},$$
where MSE is the normalized mean square error,
$$\mathrm{MSE} = \frac{1}{M \times N} \sum_{x=1}^{M} \sum_{y=1}^{N} \left[ \Psi(x, y) - \Psi^*(x, y) \right]^2,$$
where $M \times N$ is the image size, and $\Psi(x, y)$, $\Psi^*(x, y)$ are the gray values of the original image and the reconstructed image at the point $(x, y)$, respectively.
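A minimal sketch of this PSNR computation follows; psnr() is our helper name, and images are assumed to be 2-D arrays of 8-bit gray values.

```python
# PSNR between an original and a reconstructed 8-bit grayscale image,
# following the definition above (MSE is the mean over all M x N pixels).
import numpy as np

def psnr(original, reconstructed):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10 * np.log10(255.0**2 / mse)
```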
Example 4.
Let $M_2$ be the incidence matrix of the $(21, 5, 1)$-SBIBD. We construct three structured random measurement matrices $\Phi_4 = M_2 \circledast C_4$, $\Phi_5 = M_2 \circledast C_5$ and $\Phi_6 = M_2 \circledast C_6$, where $C_4$, $C_5$ and $C_6$ are the orthonormalized versions of a $5 \times 5$ Gaussian, Bernoulli and Toeplitz matrix, respectively. Therefore, $\Phi_4$, $\Phi_5$ and $\Phi_6$ are $21 \times 105$ matrices. The matrices $\Phi_4 \otimes I_2$, $\Phi_5 \otimes I_2$ and $\Phi_6 \otimes I_2$ are used to reconstruct four images of size $210 \times 210$; $\Phi_4 \otimes I_3$, $\Phi_5 \otimes I_3$ and $\Phi_6 \otimes I_3$ are used to reconstruct four images of size $315 \times 315$; and $\Phi_4 \otimes I_4$, $\Phi_5 \otimes I_4$ and $\Phi_6 \otimes I_4$ are used to reconstruct four images of size $420 \times 420$, shown in Figure 11. Table 3, Table 4 and Table 5 list the PSNRs and CPU times of the four images in the reconstruction process. The PSNRs of our measurement matrices are not lower than those of the Gaussian, Bernoulli and Toeplitz matrices under OMP, BP, IST and SP, respectively, and the CPU times of our measurement matrices are not longer.

5. Conclusions

The construction of measurement matrices is vital both for guaranteeing the quality of signal sampling and for determining the difficulty of hardware implementation of compressed sampling. Aiming at the present shortcomings (a random matrix requires large storage space and is difficult to implement in hardware, while a deterministic measurement matrix has large reconstruction error), this paper constructs a structured random matrix by the embedding operation of two seed matrices, in which one is the incidence matrix of a $(q^2+q+1, q+1, 1)$-SBIBD, and the other is obtained by Gram–Schmidt orthonormalization of a $(q+1) \times (q+1)$ random matrix. Meanwhile, we provide a new model that applies the structured random matrices to semi-tensor product compressed sensing. Finally, experimental simulations show that, compared with several well-known matrices, our matrices are more suitable for the reconstruction of one-dimensional signals and two-dimensional images. In addition, owing to their randomness, low storage space and shorter reconstruction time, our matrices perform well in the reconstruction of signals and images. In summary, the perspectives from which the performance of the method is improved are as follows:
(1) the special structure of the incidence matrix of the $(q^2+q+1, q+1, 1)$-SBIBD;
(2) the Gram–Schmidt orthonormalization of a $(q+1) \times (q+1)$ random matrix;
(3) semi-tensor product compressed sensing based on the structured random matrices.

Author Contributions

Conceptualization, J.L. and H.P.; methodology, J.L. and H.P.; software, J.L. and F.T.; validation, J.L., H.P., L.L. and F.T.; formal analysis, J.L., H.P., L.L. and F.T.; writing—original draft preparation, J.L.; writing—review and editing, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (Grant No. 2020YFB1805402), the National Natural Science Foundation of China (Grant Nos. 61972051, 62032002), the 111 Project (Grant No. B21049) and the Open Research Fund from Shandong Key Laboratory of Computer Network (SKLCN-2021-05).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CS: Compressed Sensing
STP-CS: Semi-Tensor Product Compressed Sensing
BCS: Block Compressed Sensing
BIBD: Balanced Incomplete Block Design
SBIBD: Symmetric Balanced Incomplete Block Design
STP: Semi-Tensor Product
KP-CS: Kronecker Product Compressed Sensing
BCS-EO: Block Compressed Sampling Based on the Embedding Operation
KP-STP-CS: Kronecker Product Semi-Tensor Product Compressed Sensing
STP-CS-EO: Semi-Tensor Product Compressed Sensing Based on the Embedding Operation

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  2. Natarajan, B.K. Sparse approximate solutions to linear systems. SIAM J. Comput. 1995, 24, 227–234.
  3. Candès, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215.
  4. Gribonval, R.; Nielsen, M. Sparse representations in unions of bases. IEEE Trans. Inf. Theory 2003, 49, 3320–3325.
  5. Welch, L. Lower bounds on the maximum cross correlation of signals (Corresp.). IEEE Trans. Inf. Theory 1974, 20, 397–399.
  6. Gilbert, A.; Indyk, P. Sparse recovery using sparse matrices. Proc. IEEE 2010, 98, 937–947.
  7. Candès, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
  8. Do, T.T.; Gan, L.; Nguyen, N.H.; Tran, D.T. Fast and efficient compressive sensing using structurally random matrices. IEEE Trans. Signal Process. 2011, 60, 139–154.
  9. Rudelson, M.; Vershynin, R. On sparse reconstruction from Fourier and Gaussian measurements. Commun. Pure Appl. Math. 2008, 61, 1025–1045.
  10. Candès, E.J.; Tao, T. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inf. Theory 2006, 52, 5406–5425.
  11. Baraniuk, R.; Davenport, M.; DeVore, R.; Wakin, M. A simple proof of the restricted isometry property for random matrices. Constr. Approx. 2008, 28, 253–263.
  12. Haupt, J.D.; Bajwa, W.U.; Raz, G.M.; Nowak, R. Toeplitz compressed sensing matrices with applications to sparse channel estimation. IEEE Trans. Inf. Theory 2010, 56, 5862–5875.
  13. Bajwa, W.U.; Haupt, J.D.; Raz, G.M.; Wright, S.J.; Nowak, R.D. Toeplitz-structured compressed sensing matrices. In Proceedings of the 2007 IEEE/SP 14th Workshop on Statistical Signal Processing, Madison, WI, USA, 26–29 August 2007; pp. 294–298.
  14. Tong, F.H.; Li, L.X.; Peng, H.P.; Yang, Y.X. Flexible construction of compressed sensing matrices with low storage space and low coherence. Signal Process. 2021, 182, 107951.
  15. Wang, H.; Xiao, D.; Li, M.; Xiang, Y.P.; Li, X.Y. A visually secure image encryption scheme based on parallel compressive sensing. Signal Process. 2019, 155, 218–232.
  16. DeVore, R.A. Deterministic constructions of compressed sensing matrices. J. Complex. 2007, 23, 918–925.
  17. Li, S.X.; Gao, F.; Ge, G.N.; Zhang, S.Y. Deterministic construction of compressed sensing matrices via algebraic curves. IEEE Trans. Inf. Theory 2012, 58, 5035–5041.
  18. Dimakis, A.G.; Smarandache, R.; Vontobel, P.O. LDPC codes for compressed sensing. IEEE Trans. Inf. Theory 2012, 58, 3093–3114.
  19. Wang, X.; Fu, F.W. Deterministic construction of compressed sensing matrices from codes. Int. J. Found. Comput. Sci. 2017, 28, 99–109.
  20. Amini, A.; Marvasti, F. Deterministic construction of binary, bipolar, and ternary compressed sensing matrices. IEEE Trans. Inf. Theory 2011, 57, 2360–2370.
  21. Zhang, J.; Han, G.J.; Fang, Y. Deterministic construction of compressed sensing matrices from protograph LDPC codes. IEEE Signal Process. Lett. 2015, 22, 1960–1964.
  22. Wang, G.; Niu, M.Y.; Fu, F.W. Deterministic constructions of compressed sensing matrices based on optimal codebooks and codes. Appl. Math. Comput. 2019, 343, 128–136.
  23. Jie, Y.M.; Li, M.C.; Guo, C.; Feng, B.; Tang, T.T. A new construction of compressed sensing matrices for signal processing via vector spaces over finite fields. Multimed. Tools Appl. 2019, 78, 31137–31161.
  24. Liu, X.M.; Jia, L.H. Deterministic construction of compressed sensing matrices via vector spaces over finite fields. IEEE Access 2020, 8, 203301–203308.
  25. Xia, S.T.; Liu, X.J.; Jiang, Y.; Zheng, H.T. Deterministic constructions of binary measurement matrices from finite geometry. IEEE Trans. Signal Process. 2015, 63, 1017–1029.
  26. Tong, F.H.; Li, L.X.; Peng, H.P.; Yang, Y.X. Deterministic constructions of compressed sensing matrices from unitary geometry. IEEE Trans. Inf. Theory 2021, 67, 5548–5561.
  27. Li, S.X.; Ge, G.N. Deterministic construction of sparse sensing matrices via finite geometry. IEEE Trans. Signal Process. 2014, 62, 2850–2859.
  28. Tong, F.H.; Li, L.X.; Peng, H.P.; Zhao, D.W. Progressive coherence and spectral norm minimization scheme for measurement matrices in compressed sensing. Signal Process. 2022, 194, 108435.
  29. Bryant, D.; Colbourn, C.J.; Horsley, D.; Cathain, P.O. Compressed sensing with combinatorial designs: Theory and simulations. IEEE Trans. Inf. Theory 2017, 63, 4850–4859.
  30. Li, S.X.; Ge, G.N. Deterministic sensing matrices arising from near orthogonal systems. IEEE Trans. Inf. Theory 2014, 60, 2291–2302.
  31. Liang, J.Y.; Peng, H.P.; Li, L.X.; Tong, F.H.; Yang, Y.X. Flexible construction of measurement matrices in compressed sensing based on extensions of incidence matrices of combinatorial designs. Appl. Math. Comput. 2022, 420, 126901.
  32. Naidu, R.R.; Jampana, P.; Sastry, C.S. Deterministic compressed sensing matrices: Construction via Euler squares and applications. IEEE Trans. Signal Process. 2016, 64, 3566–3575.
  33. Wan, Z.X. Geometry of Classical Groups over Finite Fields; Chartwell-Bratt: Beijing, China, 1993.
  34. Lindner, C.C.; Rodger, C.A. Design Theory; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017.
  35. Amini, A.; Montazerhodjat, V.; Marvasti, F. Matrices with small coherence using p-ary block codes. IEEE Trans. Signal Process. 2012, 60, 172–181.
  36. Xie, D.; Peng, H.P.; Li, L.X.; Yang, Y.X. Semi-tensor compressed sensing. Digit. Signal Process. 2016, 58, 85–92.
  37. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
  38. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic decomposition by basis pursuit. SIAM Rev. 2001, 43, 129–159.
  39. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457.
  40. Dai, W.; Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249.
Figure 1. The specific process of A as the embedding matrix in the matrix H.
Figure 2. The relationship between the perfect recovery percentage and the sparsity order of sparse signals under OMP and BP. $\Phi_1 \otimes I_2$, $\Phi_1 \otimes I_3$ and $\Phi_1 \otimes I_4$ are the corresponding measurement matrices in (a–c), respectively.
Figure 3. The relationship between the perfect recovery percentage and the sparsity order of sparse signals under OMP and BP. $\Phi_2 \otimes I_2$, $\Phi_2 \otimes I_3$ and $\Phi_2 \otimes I_4$ are the corresponding measurement matrices in (a–c), respectively.
Figure 4. The relationship between the perfect recovery percentage and the sparsity order of sparse signals under OMP and BP. $\Phi_3 \otimes I_2$, $\Phi_3 \otimes I_3$ and $\Phi_3 \otimes I_4$ are the corresponding measurement matrices in (a–c), respectively.
Figure 5. The relationship between the reconstruction SNR and the sparsity order of sparse signals under OMP and BP. (a) The reconstruction SNR comparison of $\Phi_1 \otimes I_2$, $\Phi_1 \otimes I_3$ and $\Phi_1 \otimes I_4$ with Gaussian$(73 \times 657) \otimes I_2$, Gaussian$(73 \times 657) \otimes I_3$ and Gaussian$(73 \times 657) \otimes I_4$ under OMP, respectively. (b) The same comparison under BP.
Figure 6. The relationship between the reconstruction SNR and the sparsity order of sparse signals under OMP and BP. (a) The reconstruction SNR comparison of $\Phi_2 \otimes I_2$, $\Phi_2 \otimes I_3$ and $\Phi_2 \otimes I_4$ with Bernoulli$(73 \times 657) \otimes I_2$, Bernoulli$(73 \times 657) \otimes I_3$ and Bernoulli$(73 \times 657) \otimes I_4$ under OMP, respectively. (b) The same comparison under BP.
Figure 7. The relationship between the reconstruction SNR and the sparsity order of sparse signals under OMP and BP. (a) The reconstruction SNR comparison of $\Phi_3 \otimes I_2$, $\Phi_3 \otimes I_3$ and $\Phi_3 \otimes I_4$ with Toeplitz$(73 \times 657) \otimes I_2$, Toeplitz$(73 \times 657) \otimes I_3$ and Toeplitz$(73 \times 657) \otimes I_4$ under OMP, respectively. (b) The same comparison under BP.
Figure 8. For sparsity order $k = 9$, the relationship between the average recovery SNR and the noise in the measurement domain and the data domain. (a) Average recovery SNR of $\Phi_1 \otimes I_2$ as the measurement matrix under OMP. (b) Average recovery SNR of Gaussian$(73 \times 657) \otimes I_2$ as the measurement matrix under OMP. (c) Average recovery SNR of $\Phi_1 \otimes I_2$ as the measurement matrix under BP. (d) Average recovery SNR of Gaussian$(73 \times 657) \otimes I_2$ as the measurement matrix under BP.
Figure 9. For sparsity order $k = 9$, the relationship between the average recovery SNR and the noise in the measurement domain and the data domain. (a) Average recovery SNR of $\Phi_2 \otimes I_2$ as the measurement matrix under OMP. (b) Average recovery SNR of Bernoulli$(73 \times 657) \otimes I_2$ as the measurement matrix under OMP. (c) Average recovery SNR of $\Phi_2 \otimes I_2$ as the measurement matrix under BP. (d) Average recovery SNR of Bernoulli$(73 \times 657) \otimes I_2$ as the measurement matrix under BP.
Figure 10. For sparsity order $k = 9$, the relationship between the average recovery SNR and the noise in the measurement domain and the data domain. (a) Average recovery SNR of $\Phi_3 \otimes I_2$ as the measurement matrix under OMP. (b) Average recovery SNR of Toeplitz$(73 \times 657) \otimes I_2$ as the measurement matrix under OMP. (c) Average recovery SNR of $\Phi_3 \otimes I_2$ as the measurement matrix under BP. (d) Average recovery SNR of Toeplitz$(73 \times 657) \otimes I_2$ as the measurement matrix under BP.
Figure 11. Four test images randomly selected from the MSCOCO dataset. (a) COCO_val2014_000000000761. (b) COCO_val2014_000000004754. (c) COCO_val2014_000000008119. (d) COCO_val2014_000000193121.
Table 1. The relationship between some known projective planes and BIBDs.

No. | Order of Projective Plane | v | b | r | k | λ | Coherence
1 | 2 | 7 | 7 | 3 | 3 | 1 | 1/3
2 | 3 | 13 | 13 | 4 | 4 | 1 | 1/4
3 | 4 | 21 | 21 | 5 | 5 | 1 | 1/5
4 | 5 | 31 | 31 | 6 | 6 | 1 | 1/6
5 | 7 | 57 | 57 | 8 | 8 | 1 | 1/8
6 | 8 | 73 | 73 | 9 | 9 | 1 | 1/9
7 | 9 | 91 | 91 | 10 | 10 | 1 | 1/10
8 | 11 | 133 | 133 | 12 | 12 | 1 | 1/12
9 | 13 | 183 | 183 | 14 | 14 | 1 | 1/14
10 | 16 | 273 | 273 | 17 | 17 | 1 | 1/17
11 | 17 | 307 | 307 | 18 | 18 | 1 | 1/18
12 | 19 | 381 | 381 | 20 | 20 | 1 | 1/20
13 | 23 | 553 | 553 | 24 | 24 | 1 | 1/24
14 | 25 | 651 | 651 | 26 | 26 | 1 | 1/26
15 | 29 | 871 | 871 | 30 | 30 | 1 | 1/30
Table 2. The comparison of storage space and sampling complexity of the measurement matrices corresponding to six sampling models.

Type | Sampling Model | Storage Matrix | Sampling Complexity | Storage Space
CS | y = Φ₁x | Φ₁ ∈ ℝ^(m×N) | mN | mN
KP-CS | y = (P₁ ⊗ I_p)x | P₁ ∈ ℝ^((m/p)×(N/p)) | mN/p | mN/p²
BCS-EO | y = (H₁ ⊛ A₁)x | H₁ ∈ ℝ^(m×(N/d)), A₁ ∈ ℝ^(d×d) | dN | mN/d + d²
STP-CS | y = Φ₂ ⋉ x | Φ₂ ∈ ℝ^((m/t)×(N/t)) | mN/t | mN/t²
KP-STP-CS | y = (P₂ ⊗ I_p) ⋉ x | P₂ ∈ ℝ^((m/(pt))×(N/(pt))) | mN/(pt) | mN/(p²t²)
STP-CS-EO | y = (H₂ ⊛ A₂) ⋉ x | H₂ ∈ ℝ^((m/t)×(N/(td))), A₂ ∈ ℝ^(d×d) | dN/t | mN/(dt²) + d²
Table 3. The PSNRs (dB) and CPU times (s) of four images for the measurement matrices Φ₄(21×105) ⊗ I₂, Φ₅(21×105) ⊗ I₂ and Φ₆(21×105) ⊗ I₂ in the reconstruction process. Each cell gives PSNR/CPU time.

Algorithm | Measurement Matrix | Image (a) | Image (b) | Image (c) | Image (d)
OMP | Φ₄(21×105) ⊗ I₂ | 28.00/0.07 | 28.16/0.07 | 28.26/0.07 | 28.14/0.08
OMP | Gaussian(21×105) ⊗ I₂ | 27.65/0.08 | 27.96/0.08 | 27.87/0.09 | 27.87/0.08
OMP | Φ₅(21×105) ⊗ I₂ | 28.19/0.07 | 28.34/0.07 | 28.59/0.07 | 28.46/0.07
OMP | Bernoulli(21×105) ⊗ I₂ | 28.15/0.08 | 28.33/0.08 | 28.55/0.08 | 28.16/0.08
OMP | Φ₆(21×105) ⊗ I₂ | 28.06/0.07 | 28.13/0.07 | 28.39/0.07 | 27.92/0.07
OMP | Toeplitz(21×105) ⊗ I₂ | 27.87/0.08 | 27.93/0.07 | 27.99/0.08 | 27.78/0.07
BP | Φ₄(21×105) ⊗ I₂ | 27.89/1.54 | 27.73/1.55 | 28.00/1.56 | 27.90/1.54
BP | Gaussian(21×105) ⊗ I₂ | 27.30/1.89 | 27.53/1.91 | 27.65/1.89 | 27.71/1.91
BP | Φ₅(21×105) ⊗ I₂ | 28.17/1.54 | 28.42/1.55 | 28.47/1.55 | 28.15/1.93
BP | Bernoulli(21×105) ⊗ I₂ | 28.15/1.93 | 28.24/1.93 | 28.36/1.94 | 28.09/1.97
BP | Φ₆(21×105) ⊗ I₂ | 28.03/1.52 | 27.90/1.51 | 28.02/1.56 | 28.03/1.57
BP | Toeplitz(21×105) ⊗ I₂ | 27.94/2.02 | 27.60/1.99 | 27.76/2.07 | 27.67/2.03
IST | Φ₄(21×105) ⊗ I₂ | 28.00/0.42 | 28.03/0.40 | 28.30/0.41 | 28.22/0.42
IST | Gaussian(21×105) ⊗ I₂ | 27.86/0.45 | 28.00/0.44 | 27.92/0.43 | 28.03/0.44
IST | Φ₅(21×105) ⊗ I₂ | 28.12/0.39 | 28.17/0.39 | 28.82/0.41 | 28.28/0.39
IST | Bernoulli(21×105) ⊗ I₂ | 28.08/0.39 | 28.03/0.39 | 27.82/0.45 | 28.26/0.40
IST | Φ₆(21×105) ⊗ I₂ | 27.98/0.41 | 28.21/0.40 | 28.10/0.41 | 28.16/0.41
IST | Toeplitz(21×105) ⊗ I₂ | 27.78/0.44 | 27.57/0.40 | 28.09/0.43 | 28.12/0.41
SP | Φ₄(21×105) ⊗ I₂ | 27.96/0.20 | 28.80/0.21 | 28.29/0.21 | 28.19/0.21
SP | Gaussian(21×105) ⊗ I₂ | 27.88/0.22 | 27.37/0.23 | 28.07/0.22 | 27.65/0.22
SP | Φ₅(21×105) ⊗ I₂ | 27.87/0.23 | 28.14/0.22 | 27.92/0.22 | 28.23/0.23
SP | Bernoulli(21×105) ⊗ I₂ | 27.74/0.23 | 28.13/0.22 | 27.88/0.22 | 28.02/0.23
SP | Φ₆(21×105) ⊗ I₂ | 28.05/0.21 | 28.07/0.20 | 28.06/0.22 | 28.09/0.21
SP | Toeplitz(21×105) ⊗ I₂ | 27.69/0.21 | 27.29/0.22 | 27.83/0.22 | 27.78/0.21
Table 4. The PSNRs (dB) and CPU times (s) of four images for the measurement matrices Φ₄(21×105) ⊗ I₃, Φ₅(21×105) ⊗ I₃ and Φ₆(21×105) ⊗ I₃ in the reconstruction process. Each cell gives PSNR/CPU time.

Algorithm | Measurement Matrix | Image (a) | Image (b) | Image (c) | Image (d)
OMP | Φ₄(21×105) ⊗ I₃ | 27.92/0.11 | 28.17/0.12 | 28.41/0.12 | 28.15/0.12
OMP | Gaussian(21×105) ⊗ I₃ | 27.86/0.12 | 28.13/0.12 | 27.96/0.12 | 28.07/0.13
OMP | Φ₅(21×105) ⊗ I₃ | 27.96/0.12 | 28.11/0.12 | 28.27/0.12 | 28.11/0.11
OMP | Bernoulli(21×105) ⊗ I₃ | 27.90/0.12 | 27.43/0.13 | 28.23/0.13 | 28.04/0.12
OMP | Φ₆(21×105) ⊗ I₃ | 28.07/0.12 | 28.17/0.12 | 28.29/0.12 | 28.36/0.12
OMP | Toeplitz(21×105) ⊗ I₃ | 28.01/0.13 | 28.12/0.12 | 27.94/0.12 | 28.18/0.12
BP | Φ₄(21×105) ⊗ I₃ | 28.07/2.66 | 28.19/2.65 | 28.25/2.62 | 28.14/2.68
BP | Gaussian(21×105) ⊗ I₃ | 28.05/3.59 | 28.09/3.37 | 27.66/3.37 | 27.92/3.36
BP | Φ₅(21×105) ⊗ I₃ | 28.29/2.66 | 28.29/2.99 | 28.19/2.57 | 28.28/3.26
BP | Bernoulli(21×105) ⊗ I₃ | 27.88/3.77 | 28.05/3.70 | 27.64/3.74 | 28.22/3.73
BP | Φ₆(21×105) ⊗ I₃ | 28.20/2.77 | 28.28/3.04 | 28.24/2.56 | 27.96/2.63
BP | Toeplitz(21×105) ⊗ I₃ | 28.09/3.81 | 27.62/3.63 | 28.16/3.63 | 27.85/3.57
IST | Φ₄(21×105) ⊗ I₃ | 28.10/0.89 | 28.29/0.89 | 28.24/0.91 | 28.33/0.89
IST | Gaussian(21×105) ⊗ I₃ | 27.93/0.92 | 28.19/0.90 | 27.98/1.00 | 28.28/0.95
IST | Φ₅(21×105) ⊗ I₃ | 28.00/0.88 | 28.34/0.84 | 28.21/0.85 | 28.10/0.83
IST | Bernoulli(21×105) ⊗ I₃ | 27.85/0.88 | 28.22/0.88 | 27.95/0.85 | 28.02/0.85
IST | Φ₆(21×105) ⊗ I₃ | 28.06/0.88 | 28.12/0.88 | 28.33/0.90 | 28.17/0.89
IST | Toeplitz(21×105) ⊗ I₃ | 27.91/0.89 | 27.92/0.93 | 27.90/0.92 | 28.00/0.98
SP | Φ₄(21×105) ⊗ I₃ | 28.05/0.33 | 28.29/0.33 | 28.47/0.33 | 28.01/0.33
SP | Gaussian(21×105) ⊗ I₃ | 27.86/0.38 | 27.84/0.36 | 28.25/0.35 | 27.13/0.35
SP | Φ₅(21×105) ⊗ I₃ | 27.83/0.34 | 28.16/0.35 | 27.81/0.33 | 27.93/0.34
SP | Bernoulli(21×105) ⊗ I₃ | 27.75/0.42 | 28.05/0.35 | 27.72/0.34 | 27.87/0.37
SP | Φ₆(21×105) ⊗ I₃ | 27.99/0.34 | 28.19/0.34 | 27.92/0.33 | 27.96/0.34
SP | Toeplitz(21×105) ⊗ I₃ | 27.98/0.35 | 27.94/0.34 | 27.80/0.34 | 27.70/0.35
Table 5. The PSNRs (dB) and CPU times (s) of four images for the measurement matrices Φ₄(21×105) ⊗ I₄, Φ₅(21×105) ⊗ I₄ and Φ₆(21×105) ⊗ I₄ in the reconstruction process. Each cell gives PSNR/CPU time.

Algorithm | Measurement Matrix | Image (a) | Image (b) | Image (c) | Image (d)
OMP | Φ₄(21×105) ⊗ I₄ | 28.16/0.18 | 28.20/0.19 | 28.24/0.18 | 28.18/0.19
OMP | Gaussian(21×105) ⊗ I₄ | 27.94/0.19 | 28.13/0.19 | 28.03/0.18 | 28.08/0.19
OMP | Φ₅(21×105) ⊗ I₄ | 28.17/0.17 | 28.20/0.17 | 28.38/0.18 | 28.10/0.16
OMP | Bernoulli(21×105) ⊗ I₄ | 27.88/0.19 | 28.02/0.18 | 28.16/0.19 | 28.01/0.18
OMP | Φ₆(21×105) ⊗ I₄ | 28.13/0.18 | 28.14/0.18 | 28.27/0.18 | 28.25/0.18
OMP | Toeplitz(21×105) ⊗ I₄ | 27.86/0.20 | 28.11/0.18 | 28.13/0.19 | 28.19/0.19
BP | Φ₄(21×105) ⊗ I₄ | 28.13/3.83 | 28.21/3.78 | 28.28/3.81 | 28.18/3.81
BP | Gaussian(21×105) ⊗ I₄ | 28.07/5.10 | 28.10/5.04 | 27.83/5.06 | 28.13/5.05
BP | Φ₅(21×105) ⊗ I₄ | 28.18/4.33 | 28.17/4.26 | 27.97/4.23 | 28.10/3.81
BP | Bernoulli(21×105) ⊗ I₄ | 28.05/4.34 | 28.06/4.31 | 27.71/4.25 | 27.75/4.33
BP | Φ₆(21×105) ⊗ I₄ | 28.04/3.72 | 28.21/3.59 | 28.21/3.70 | 28.22/3.82
BP | Toeplitz(21×105) ⊗ I₄ | 27.95/4.31 | 27.94/4.30 | 28.17/4.35 | 28.14/4.26
IST | Φ₄(21×105) ⊗ I₄ | 28.02/1.56 | 28.27/1.56 | 28.17/1.60 | 28.06/1.81
IST | Gaussian(21×105) ⊗ I₄ | 27.80/1.71 | 28.13/1.62 | 27.96/1.62 | 27.90/1.53
IST | Φ₅(21×105) ⊗ I₄ | 28.15/1.63 | 28.09/1.58 | 28.01/1.58 | 28.12/1.63
IST | Bernoulli(21×105) ⊗ I₄ | 28.04/1.63 | 27.89/1.62 | 27.93/1.60 | 27.85/1.64
IST | Φ₆(21×105) ⊗ I₄ | 28.10/1.58 | 28.23/1.61 | 28.18/1.55 | 28.09/1.58
IST | Toeplitz(21×105) ⊗ I₄ | 27.87/1.62 | 28.12/1.64 | 28.00/1.65 | 28.03/1.66
SP | Φ₄(21×105) ⊗ I₄ | 28.26/0.48 | 28.71/0.48 | 28.29/0.48 | 28.20/0.48
SP | Gaussian(21×105) ⊗ I₄ | 27.96/0.51 | 28.04/0.51 | 28.14/0.50 | 27.98/0.50
SP | Φ₅(21×105) ⊗ I₄ | 28.10/0.52 | 28.04/0.49 | 28.24/0.50 | 28.24/0.52
SP | Bernoulli(21×105) ⊗ I₄ | 28.08/0.52 | 27.90/0.49 | 28.05/0.56 | 27.91/0.52
SP | Φ₆(21×105) ⊗ I₄ | 28.14/0.49 | 28.27/0.49 | 28.24/0.49 | 28.13/0.50
SP | Toeplitz(21×105) ⊗ I₄ | 27.85/0.50 | 28.08/0.50 | 28.08/0.50 | 28.11/0.51
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
