Article

A Method of Constructing Measurement Matrix for Compressed Sensing by Chebyshev Chaotic Sequence

Renjie Yi, Chen Cui, Yingjie Miao and Biao Wu
1 College of Electronic Countermeasure, National University of Defense Technology, Hefei 230000, China
2 Huayin Ordnance Test Center, Weinan 714000, China
* Author to whom correspondence should be addressed.
Entropy 2020, 22(10), 1085; https://doi.org/10.3390/e22101085
Submission received: 6 August 2020 / Revised: 12 September 2020 / Accepted: 15 September 2020 / Published: 26 September 2020
(This article belongs to the Special Issue Information Theoretic Signal Processing and Learning)

Abstract: In this paper, the problem of constructing the measurement matrix in compressed sensing is addressed. In compressed sensing, constructing a measurement matrix of good performance and easy hardware implementation is of interest. It has recently been shown that measurement matrices constructed from Logistic or Tent chaotic sequences satisfy the restricted isometric property (RIP) with a certain probability and are easy to implement in physical electric circuits. However, a large sample distance, which implies large resource consumption, is required to obtain uncorrelated samples from these sequences during construction. To solve this problem, we propose a method of constructing the measurement matrix from the Chebyshev chaotic sequence. The method effectively reduces the sample distance, and the proposed measurement matrix is proved to satisfy the RIP with high probability under the assumption that the sampled elements are statistically independent. Simulation results show that the proposed measurement matrix has reconstruction performance comparable to that of existing chaotic matrices for compressed sensing.

1. Introduction

Compressed sensing (CS) [1] samples the signal at a rate far lower than the Nyquist rate by utilizing the signal’s sparsity. The original signal can be reconstructed from the sampled data by adopting corresponding reconstruction methods. As a new way of signal processing, CS reduces the amount of sampled data and the space of storage, which would significantly decrease the hardware complexity. CS has a broad application prospect in medical imaging [2], wideband spectrum sensing [3], dynamic mode decomposition [4], etc.
CS projects a high-dimensional $K$-sparse signal $x \in \mathbb{R}^{N \times 1}$ into a low-dimensional space through the measurement matrix $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$) and gets a set of incomplete measurements $y \in \mathbb{R}^{M \times 1}$ obeying the linear model

$$y = \Phi x \tag{1}$$
Equation (1) is an underdetermined system that usually has an infinite number of solutions. However, with prior information on the signal sparsity and a condition imposed on $\Phi$, $x$ can be exactly reconstructed by solving the $\ell_1$ minimization problem [5]. The most commonly used condition on $\Phi$ is the restricted isometric property (RIP) [6].
Definition 1.
RIP: For any $K$-sparse vector $x$, if there is always a constant $\delta \in (0, 1)$ such that

$$(1 - \delta)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta)\|x\|_2^2 \tag{2}$$

then $\Phi$ is said to satisfy the $K$-order RIP with $\delta$.
The minimum of all constants satisfying inequality (2) is referred to as the restricted isometry constant (RIC) $\delta_K$. Candès and Tao showed that exact reconstruction can be achieved from such measurements provided $\Phi$ satisfies the RIP [6].
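As a quick numerical illustration (our addition, not from the paper), the restricted isometry behavior of a matrix can be probed by Monte Carlo: draw random $K$-sparse unit vectors and record how far $\|\Phi x\|_2^2$ strays from 1. The sketch below uses a Gaussian matrix and illustrative sizes; since it checks only a finite sample of sparse vectors, the result is merely a lower estimate of $\delta_K$.

```python
# Monte Carlo probe of RIP-type energy preservation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
M, N, K, trials = 40, 120, 8, 2000
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # Gaussian reference matrix

worst = 0.0
for _ in range(trials):
    x = np.zeros(N)
    support = rng.choice(N, K, replace=False)
    x[support] = rng.standard_normal(K)
    x /= np.linalg.norm(x)                      # unit-norm K-sparse vector
    worst = max(worst, abs(np.linalg.norm(Phi @ x) ** 2 - 1.0))

print(f"empirical max | ||Phi x||^2 - 1 | over {trials} trials: {worst:.3f}")
```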
Constructing a proper measurement matrix is an essential issue in CS. The measurement matrix significantly influences not only the reconstruction performance but also the complexity of hardware implementation. Measurement matrices can be divided into random and deterministic ones. The Gaussian random matrix [7] and Bernoulli random matrix [8] are frequently used because they satisfy the RIP and are uncorrelated with most sparse domains. They guarantee good reconstruction performance but pose challenges for hardware, such as large storage requirements and complicated system design [9]. On the contrary, deterministic measurement matrices have the advantages not only of economical storage but also of convenience in engineering design. Nevertheless, commonly used deterministic measurement matrices such as the deterministic polynomial matrix [10] and the Fourier matrix [11] are correlated with certain sparse domains, which restricts their practical application.
To address these problems, the Logistic chaotic sequence was employed in [12] to construct the measurement matrix, called the Logistic chaotic measurement matrix. With the natural pseudo-random property of a chaotic system, the Logistic chaotic measurement matrix possesses the advantages of random matrices while overcoming the shortcomings of the above deterministic matrices. This kind of matrix is used for speech signal compression in [13] and for secure data collection in wireless sensor networks in [14]. In [15], the Tent chaotic sequence is employed to construct the measurement matrix. In [16], Zhou and Jing construct the measurement matrix with a composite chaotic sequence generated by combining Logistic and Tent chaos. Compared with the Gaussian random matrix, these chaotic measurement matrices have lower hardware complexity and better reconstruction performance. However, large sample distances (at least 15 [17,18]) are required to obtain uncorrelated samples from the above chaotic sequences during construction; when the measurement matrix is large-scale, numerous useless data are generated, which wastes system resources. In [19], the Chebyshev chaotic sequence is transformed into a new sequence whose elements obey a Gaussian distribution, and the new sequence is employed to construct a measurement matrix that satisfies the RIP with high probability. This method avoids sampling the chaotic sequence, but the resulting matrix does not significantly improve the reconstruction performance.
In this paper, we propose a method of constructing the measurement matrix by the Chebyshev chaotic sequence. The primary contributions are twofold:
  • We analyze the high-order correlations among the elements sampled from the Chebyshev chaotic sequence.
  • We use the sampled elements to construct a measurement matrix, termed the Chebyshev chaotic measurement matrix. Based on the assumption that the elements are statistically independent, we prove that the Chebyshev chaotic measurement matrix satisfies the RIP with high probability.
The remainder of this paper is organized as follows. In Section 2, we describe the expression of the Chebyshev chaotic sequence and analyze the high-order correlations among elements sampled from the Chebyshev chaotic sequence. In Section 3, we present the construction method of the Chebyshev chaotic measurement matrix and analyze its probability of satisfying the RIP. In Section 4, simulations are carried out to verify the effectiveness of the Chebyshev chaotic matrix. In the end, the conclusion is drawn.

2. Chebyshev Chaotic Sequence and Sample Distance

2.1. Chebyshev Chaotic Sequence

Chaotic systems generate deterministic sequences by recursive methods. Such a sequence naturally enjoys properties that greatly resemble randomness; since it is reproducible and passes tests of randomness, it is often used to generate pseudo-random numbers [20]. Chebyshev chaos is a typical nonlinear dynamical chaos. Its one-dimensional expression is written as

$$x_{n+1} = \cos(q \cdot \arccos x_n), \quad x_n \in [-1, 1] \tag{3}$$

where $q$ ($\ge 2$) denotes the order number of the Chebyshev chaos. With an initial value $x_0$, the Chebyshev chaotic sequence is produced by applying Equation (3) recursively. Owing to its excellent randomness, sensitivity to initial values, spatial ergodicity, and easy implementation in physical electric circuits [21], Chebyshev chaos is widely valued.
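As a small illustration (our sketch, not from the paper), Equation (3) can be iterated directly; the function name and the defaults $q = 8$ and $x_0 = 0.3$ are illustrative choices.

```python
# Iterate the Chebyshev map of Equation (3): x_{n+1} = cos(q * arccos(x_n)).
import numpy as np

def chebyshev_sequence(length, q=8, x0=0.3):
    """Generate a Chebyshev chaotic orbit of the given length on [-1, 1]."""
    x = np.empty(length)
    x[0] = x0
    for n in range(length - 1):
        x[n + 1] = np.cos(q * np.arccos(x[n]))
    return x

seq = chebyshev_sequence(10000)
```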
Based on the fact that some regions of the state space are visited more frequently than others by the Chebyshev chaotic sequence, it is possible to associate an invariant probability density function, denoted $\rho(x)$, with the chaotic attractor. $\rho(x)$ is as follows [22]:

$$\rho(x) = \begin{cases} \dfrac{1}{\pi\sqrt{1 - x^2}} & x \in (-1, 1) \\ 0 & \text{else} \end{cases} \tag{4}$$
From Equation (4), we have

$$E(X_n^t) = \begin{cases} 2^{-t}\, C_t^{t/2} & t = 0, 2, 4, \ldots \\ 0 & t = 1, 3, 5, \ldots \end{cases} \tag{5}$$

where $C_t^{t/2}$ denotes the number of $t/2$-element subsets of a $t$-element set.
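By ergodicity, these moments can be sanity-checked with time averages over a long orbit; a short sketch follows (reusing chebyshev_sequence from the snippet above; the orbit length is an illustrative assumption).

```python
# Compare time averages of x^t with the moments of Equation (5).
from math import comb
import numpy as np

seq = chebyshev_sequence(200000)
for t in range(1, 7):
    empirical = np.mean(seq ** t)
    theoretical = 2.0 ** (-t) * comb(t, t // 2) if t % 2 == 0 else 0.0
    print(f"t = {t}: time average {empirical:+.4f}, Equation (5) gives {theoretical:+.4f}")
```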

2.2. The Internal Randomness of Chebyshev Chaos

Internal randomness is one of the main characteristics of chaos. A chaotic system is stable as a whole but shows local instability due to its internal randomness, which manifests as sensitivity to initial conditions: the stronger the sensitivity, the stronger the internal randomness. The Lyapunov exponent [23] quantitatively describes this sensitivity, characterizing the average rate of divergence between adjacent trajectories. For the one-dimensional discrete mapping system $x_{n+1} = F(x_n)$, the Lyapunov exponent is

$$\lambda = \lim_{n \to \infty} \frac{1}{n} \sum_{j=0}^{n-1} \ln |F'(x_j)| \tag{6}$$

where $F'(x_j)$ denotes the derivative of $F$ evaluated at $x_j$. Adjacent trajectories gradually converge when $\lambda \le 0$ and diverge when $\lambda > 0$; a system with $\lambda > 0$ is defined to be chaotic, and the larger $\lambda$, the stronger the internal randomness. Setting $x_0 = 0.5$ and $n = 10000$, the change tendency of $\lambda$ with the system parameter of each map is shown in Figure 1. The single control parameter is denoted $\mu$ in Logistic chaos and $p$ in Tent chaos.
As can be seen from Figure 1, in Logistic chaos, $\lambda$ reaches its maximum value of 0.69 when $\mu = 4$; in Tent chaos, $\lambda$ reaches its maximum value of 0.69 when $p = 0.5$. In Chebyshev chaos, $\lambda$ is 0.69 when $q = 2$ and increases with $q$. Therefore, when $q$ is greater than 2, the Lyapunov exponent of Chebyshev chaos is larger than those of Logistic and Tent chaos, which means stronger internal randomness. Considering that the hardware complexity increases exponentially with $q$, we set $q = 8$ so that Chebyshev chaos strikes a good balance between randomness and hardware implementation.
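Equation (6) is straightforward to evaluate numerically. The sketch below (our illustration; the initial value $x_0 = 0.3$ is chosen to avoid the maps' special points, and the closed-form derivatives are standard) estimates $\lambda$ for the three maps; with $q = 8$ the Chebyshev estimate approaches $\ln 8 \approx 2.08$, versus $\ln 2 \approx 0.69$ for Logistic ($\mu = 4$) and Tent ($p = 0.5$).

```python
# Estimate the Lyapunov exponent of Equation (6) by averaging ln|F'(x_j)|.
import numpy as np

def lyapunov(f, df, x0=0.3, n=10000):
    x, acc = x0, 0.0
    for _ in range(n):
        acc += np.log(abs(df(x)))
        x = f(x)
    return acc / n

q, mu, p = 8, 4.0, 0.5
print("Chebyshev:", lyapunov(lambda x: np.cos(q * np.arccos(x)),
                             lambda x: q * np.sin(q * np.arccos(x)) / np.sqrt(1 - x**2)))
print("Logistic: ", lyapunov(lambda x: mu * x * (1 - x),
                             lambda x: mu * (1 - 2 * x)))
print("Tent:     ", lyapunov(lambda x: x / p if x < p else (1 - x) / (1 - p),
                             lambda x: 1 / p if x < p else -1 / (1 - p)))
```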

2.3. Statistical Property and Sample Distance

Generally, the elements in the measurement matrix need to be independent of each other; however, the elements in chaotic sequences do not meet this requirement. Yu et al. [12] measured the independence between elements sampled from the Logistic chaotic sequence through their high-order correlations: the elements are considered approximately independent if ideal high-order correlations are obtained. These “independent” elements are used to construct the chaotic measurement matrix, and good reconstruction performance is obtained. To construct the measurement matrix with the Chebyshev chaotic sequence, the primary issue is how to obtain independent elements from the sequence. Inspired by [12], we measure the independence through the high-order correlations at a certain sample distance and have the following theorem.
Theorem 1.
Denote $X = \{x_n, x_{n+1}, \ldots, x_{n+k}\}$ as the sequence generated by Equation (3) and let the integer $d$ be the sample distance. When $q$ ($\ge 2$) is even, for arbitrary positive integers $m_0, m_1 < q^d$, we have

$$E(x_n^{m_0} x_{n+d}^{m_1}) = E(x_n^{m_0})\, E(x_{n+d}^{m_1}) \tag{7}$$
Proof. See Appendix A. □
By setting $d = 5$ and $q = 8$, we have $E(x_n^{m_0} x_{n+d}^{m_1}) = E(x_n^{m_0}) E(x_{n+d}^{m_1})$ for all $m_0, m_1 < q^d = 32768$. In [12], the elements sampled from the Logistic chaotic sequence share the same high-order correlations with $d = 15$ and are considered approximately independent. In [17,18], independence test algorithms are applied to determine the sample distance of the Logistic and Tent chaotic sequences; the test procedures indicate that the sampled elements are statistically independent when $d \ge 15$.
Figure 2 and Figure 3 illustrate the joint probability densities of $x_n$ and $x_{n+d}$, denoted $\rho(x_n, x_{n+d})$, in the Logistic, Tent, and 8-order Chebyshev chaotic sequences for $d = 5$ and $d = 15$. The smoother the surface of $\rho(x_n, x_{n+d})$, the more uniform the distribution of $x_n$ and $x_{n+d}$, and the weaker the correlation between them. In Figure 2a,b, the surfaces of $\rho(x_n, x_{n+d})$ for Logistic and Tent have large fluctuations when $d = 5$, which means strong correlations between $x_n$ and $x_{n+d}$. In Figure 2c,d, the surfaces for Logistic and Tent are relatively gentle, indicating a weak correlation when $d = 15$. It can be seen intuitively from Figure 3 that the surface of $\rho(x_n, x_{n+d})$ for Chebyshev with $d = 5$ is almost the same as that in Figure 2c. These figures show that the elements sampled from the 8-order Chebyshev chaotic sequence with $d = 5$ share essentially the same correlations as those sampled from the Logistic chaotic sequence with $d = 15$.
Therefore, to guarantee a very small correlation between the sampled elements, the sample distance required by the 8-order Chebyshev chaotic sequence is significantly smaller than those required by the Logistic and Tent chaotic sequences. According to Section 2.2, the internal randomness of 8-order Chebyshev chaos is stronger than that of Logistic and Tent chaos; the increase in internal randomness leads to a decrease in the correlation between $x_n$ and $x_{n+d}$ and hence allows a smaller sample distance.
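The claimed factorization at $d = 5$ can be checked empirically by time averaging (an illustrative sketch reusing chebyshev_sequence from Section 2.1; the orbit length and tested moment orders are assumptions):

```python
# Time-average test of the factorization in Theorem 1 for q = 8, d = 5.
import numpy as np

d = 5
seq = chebyshev_sequence(300000 + d)
a, b = seq[:-d], seq[d:]
for m0, m1 in [(1, 1), (2, 2), (3, 1), (4, 2)]:
    joint = np.mean(a**m0 * b**m1)
    product = np.mean(a**m0) * np.mean(b**m1)
    print(f"m0 = {m0}, m1 = {m1}: joint {joint:+.5f} vs product {product:+.5f}")
```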

3. Construction of Chebyshev Chaotic Measurement Matrix and RIP Analysis

3.1. Construction of Chebyshev Chaotic Measurement Matrix

Denote $Z = \{z_1, z_2, \ldots, z_{MN}\}$ as the sequence extracted from $X$ with sample distance $d$. We use $Z$ to construct a Chebyshev chaotic measurement matrix as shown in Algorithm 1.
Algorithm 1. The method of constructing the Chebyshev chaotic measurement matrix.
Input: the number of rows $M$, the number of columns $N$, order $q$, initial value $x_0$.
Output: measurement matrix $\Phi$.
1. Determine the sample distance $d$;
2. Generate the Chebyshev chaotic sequence $X$ of length $1 + (MN - 1)d$;
3. Sample $X$ with distance $d$ and get $Z = \{z_1, z_2, \ldots, z_{MN}\}$;
4. Use $Z$ to construct the Chebyshev chaotic measurement matrix as

$$\Phi = \sqrt{\frac{2}{M}} \begin{pmatrix} z_1 & z_{M+1} & \cdots & z_{(N-1)M+1} \\ z_2 & z_{M+2} & \cdots & z_{(N-1)M+2} \\ \vdots & \vdots & \ddots & \vdots \\ z_M & z_{2M} & \cdots & z_{MN} \end{pmatrix}$$

5. Return the measurement matrix $\Phi$.
In step 4, $\sqrt{2/M}$ is the normalization factor. When the order number, initial value, and sample distance are set, $\Phi$ is determined.
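A compact sketch of Algorithm 1 in Python (our reading of the steps: the column-major filling follows the matrix in step 4, and the defaults $q = 8$, $d = 5$, $x_0 = 0.3$ are illustrative):

```python
import numpy as np

def chebyshev_measurement_matrix(M, N, q=8, x0=0.3, d=5):
    # Step 2: generate a Chebyshev orbit of length 1 + (M*N - 1)*d.
    length = 1 + (M * N - 1) * d
    x = np.empty(length)
    x[0] = x0
    for n in range(length - 1):
        x[n + 1] = np.cos(q * np.arccos(x[n]))
    # Step 3: keep every d-th element, yielding exactly M*N samples.
    z = x[::d]
    # Step 4: fill the M x N matrix column by column and normalize.
    return np.sqrt(2.0 / M) * z.reshape(N, M).T

Phi = chebyshev_measurement_matrix(40, 120)
```

Here z.reshape(N, M).T places $z_1, \ldots, z_M$ in the first column, matching the matrix in step 4.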

3.2. RIP Analysis

The RIP is a sufficient condition on the measurement matrix for guaranteed recovery. However, certifying the RIP for a given matrix is NP-hard, so we analyze the performance of the measurement matrix $\Phi$ by calculating the probability that $\Phi$ satisfies the RIP instead.
When $d = 5$, adjacent elements of $Z$ share the same high-order correlations as the elements sampled from the Logistic chaotic sequence with $d = 15$. In addition, the test procedures in [17,18] show that the sampled elements of the Logistic chaotic sequence are statistically independent when $d = 15$. Hence, we assume that the elements of $Z$ are statistically independent. Based on this assumption, we derive a lower bound on the probability that the proposed measurement matrix satisfies the RIP and show that the bound is close to 1 provided some parameters are suitably chosen. The main result is given in Theorem 2.
Theorem 2.
The Chebyshev chaotic measurement matrix $\Phi \in \mathbb{R}^{M \times N}$ constructed by Algorithm 1 satisfies the K-order RIP with high probability provided $K \le O(M / \log(N/K))$.
Before proving Theorem 2, we briefly summarize some useful notations. Denote $(l_1, l_2, \ldots, l_K)$ as the positions of the non-zero elements in $x$ and $u = (u_1, u_2, \ldots, u_K)^T$ as the normalized vector composed of the non-zero elements of $x$. $\Phi_K$ is the submatrix of $\Phi$ that only contains the columns indexed by $(l_1, l_2, \ldots, l_K)$. Rewrite $\Phi_K$ as $\Phi_K = \sqrt{1/M}\,(\varphi_1, \varphi_2, \ldots, \varphi_M)^T$, where $\varphi_m^T = (v_1, v_2, \ldots, v_K)$ with $v_k = \sqrt{2}\, z_{(l_k - 1)M + m}$; that is, $\varphi_m$ collects the sampled elements at positions $(l_1, l_2, \ldots, l_K)$ of the $m$th row, multiplied by $\sqrt{2}$, and $m = 1, 2, \ldots, M$. Let $s = \Phi_K u$, $Q_m = \varphi_m^T u$, and $S = \|s\|_2^2 = \sum_{m=1}^{M} Q_m^2 / M$. Denote $G_g = \{|\,\|\Phi_K u\|_2^2 - 1| \ge \delta\}$ as one complementary event of the condition in inequality (2), where $g = 1, 2, \ldots, C_N^K$, and let $G = \bigcup_{g=1}^{C_N^K} G_g$ be the union of all possible complementary events. The idea of proving Theorem 2 is as follows: first, calculate the probability of $G_g$, denoted $P(G_g)$; then, compute the probability of $G$, denoted $P(G)$; finally, the probability of $\Phi$ satisfying the RIP is $P_{RIP} = 1 - P(G)$.
From the definition of $G_g$, we have

$$P(G_g) \le P(S \ge 1 + \delta) + P(S \le 1 - \delta) \tag{8}$$
where $P(S \ge 1+\delta) = P[\sum_{m=1}^{M} Q_m^2 \ge (1+\delta)M]$ and $P(S \ge 1+\delta) = P\{\exp(\alpha \sum_{m=1}^{M} Q_m^2) \ge \exp[\alpha(1+\delta)M]\}$ holds for any $\alpha > 0$. According to the Markov inequality, the following inequality holds

$$P(S \ge 1+\delta) \le \frac{E[\exp(\alpha \sum_{m=1}^{M} Q_m^2)]}{\exp[\alpha(1+\delta)M]} \tag{9}$$
Noting that the elements of $\Phi_K$ are statistically independent of each other, it is evident that $Q_l$ and $Q_m$ are independent of each other for $l \ne m$, $l, m = 1, 2, \ldots, M$. With $E[\exp(\alpha Q_l^2)] = E[\exp(\alpha Q_m^2)]$, the inequality can be rewritten as

$$P(S \ge 1+\delta) \le \frac{\{E[\exp(\alpha Q_m^2)]\}^M}{\exp[\alpha(1+\delta)M]} \tag{10}$$
In the same way, applying the Markov inequality to $\exp(-\alpha \sum_{m=1}^{M} Q_m^2)$, we have

$$P(S \le 1-\delta) \le \{E[\exp(-\alpha Q_m^2)]\}^M \exp[\alpha(1-\delta)M] \tag{11}$$
By expanding $\exp(-\alpha Q_m^2)$ into the form of a second-order Taylor polynomial with Lagrange remainder, $\exp(-\alpha Q_m^2) = \sum_{t=0}^{2} (-\alpha Q_m^2)^t / t! + R_2$ holds, where $R_2$ is the Lagrange remainder. It is straightforward to verify that $R_2 \le 0$. Accordingly, we have $\exp(-\alpha Q_m^2) \le \sum_{t=0}^{2} (-\alpha Q_m^2)^t / t!$ and

$$P(S \le 1-\delta) \le [E(1 - \alpha Q_m^2 + \alpha^2 Q_m^4 / 2)]^M \exp[\alpha(1-\delta)M] \tag{12}$$
To calculate the probabilities in inequalities (10) and (12), let us introduce some useful lemmas.
Lemma 1.
Denote $r_1, r_2$ as i.i.d. random variables with the probability density of Equation (4). For any real numbers $a, b$, let $c = \sqrt{(a^2 + b^2)/2}$; then, for all $T \in \mathbb{R}$ and $t \in \mathbb{N}$, we have

$$E[(T + a r_1 + b r_2)^{2t}] \le E[(T + c r_1 + c r_2)^{2t}] \tag{13}$$
Proof. See Appendix B. □
Lemma 2.
Let $w = \frac{1}{\sqrt{K}}(1, 1, \ldots, 1)^T$ be a unit vector. For arbitrary $t \in \mathbb{N}$ and $m = 1, 2, \ldots, M$, we have

$$E[Q_m^{2t}] \le E[(\varphi_m^T w)^{2t}] \tag{14}$$
Proof. See Appendix C. □
Lemma 3.
Let $T \sim N(0, 1)$ and $\varphi_m = (v_1, v_2, \ldots, v_K)^T$. Then, for arbitrary $t \in \mathbb{N}$, we have

$$E[(\varphi_m^T w)^{2t}] \le E(T^{2t}) \tag{15}$$
Proof. See Appendix D. □
Recall $T \sim N(0, 1)$. For arbitrary $\alpha \in [0, 1/2)$, the Taylor series expansion of $E[\exp(\alpha T^2)]$ is $\sum_{t=0}^{\infty} \alpha^t E(T^{2t}) / t!$. In the same way, $E[\exp(\alpha Q_m^2)] = \sum_{t=0}^{\infty} \alpha^t E(Q_m^{2t}) / t!$. Applying Lemmas 2 and 3 gives $E(Q_m^{2t}) \le E(T^{2t})$. Hence, $E[\exp(\alpha Q_m^2)] \le E[\exp(\alpha T^2)] = 1/\sqrt{1 - 2\alpha}$ holds.
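For completeness, the closed form used above follows from the Gaussian moment generating function; a routine derivation (not spelled out in the original) completes the square in the exponent:

$$E[e^{\alpha T^2}] = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{\alpha t^2} e^{-t^2/2}\, dt = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-(1-2\alpha)t^2/2}\, dt = \frac{1}{\sqrt{1 - 2\alpha}}, \quad 0 \le \alpha < \frac{1}{2}$$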
Now, we calculate the probabilities in inequalities (10) and (12). Letting $\alpha = \delta / [2(1+\delta)]$, we get

$$P(S \ge 1+\delta) \le \left(\frac{1}{\sqrt{1-2\alpha}}\right)^{M} \exp[-\alpha(1+\delta)M] = [(1+\delta)\exp(-\delta)]^{M/2} < \exp\left[-\frac{M}{2}\left(\frac{\delta^2}{2} - \frac{\delta^3}{3}\right)\right] \tag{16}$$
As $E(Q_m^2) = 1$ and $E(Q_m^4) \le E(T^4) = 3$, we have

$$P(S \le 1-\delta) \le (1 - \alpha + 3\alpha^2/2)^M \exp[\alpha(1-\delta)M] = \left[1 - \frac{\delta}{2(1+\delta)} + \frac{3\delta^2}{8(1+\delta)^2}\right]^{M} \exp\left[\frac{\delta(1-\delta)}{2(1+\delta)} M\right] < \exp\left[-\frac{M}{2}\left(\frac{\delta^2}{2} - \frac{\delta^3}{3}\right)\right] \tag{17}$$
Therefore,

$$P(G_g) \le 2\exp\left[-\frac{M}{2}\left(\frac{\delta^2}{2} - \frac{\delta^3}{3}\right)\right] \tag{18}$$
According to Boole's inequality and the bound $C_N^K \le (eN/K)^K$, we get

$$P(G) \le \sum_{g=1}^{C_N^K} P(G_g) \le 2 C_N^K \exp\left[-\frac{M}{2}\left(\frac{\delta^2}{2} - \frac{\delta^3}{3}\right)\right] \le 2\left(\frac{eN}{K}\right)^{K} \exp\left[-\frac{M}{2}\left(\frac{\delta^2}{2} - \frac{\delta^3}{3}\right)\right] \tag{19}$$
Let $C_1 > 0$ satisfy $C_1 M \ge K \log(N/K)$; then

$$P(G) \le 2\exp\left[-\frac{M}{2}\left(\frac{\delta^2}{2} - \frac{\delta^3}{3}\right) + C_1 M + \frac{C_1 M}{\log(N/K)}\right] \tag{20}$$
Let $C_2 \le \frac{1}{2}\left(\frac{\delta^2}{2} - \frac{\delta^3}{3}\right) - C_1\left(1 + \frac{1}{\log(N/K)}\right)$. Then, $\Phi$ satisfies the RIP with a probability of

$$P_{RIP} \ge 1 - 2\exp(-C_2 M) \tag{21}$$
Indeed, by choosing $C_1$ sufficiently small, we always have $C_2 > 0$ and a high $P_{RIP}$. For instance, with $N = 512$ and $K = 5$, the probability of $\Phi$ satisfying the K-order RIP with $\delta_K = 0.9$ is no less than 95% when $C_1 = 0.0589$.
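The numeric example can be reproduced directly from inequalities (20) and (21); in the sketch below (our reading), $M$ is taken as the smallest integer with $C_1 M \ge K \log(N/K)$:

```python
import numpy as np

N, K, delta, C1 = 512, 5, 0.9, 0.0589
M = int(np.ceil(K * np.log(N / K) / C1))          # smallest M with C1*M >= K*log(N/K)
C2 = 0.5 * (delta**2 / 2 - delta**3 / 3) - C1 * (1 + 1 / np.log(N / K))
P_rip = 1 - 2 * np.exp(-C2 * M)
print(f"M = {M}, C2 = {C2:.4f}, P_RIP >= {P_rip:.3f}")   # about 0.95
```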

4. Results and Discussion

When a measurement matrix is applied in CS, it is expected to lead to good reconstruction performance. In this section, we examine the performance of the Chebyshev chaotic measurement matrix and compare it with the Gaussian random matrix and the well-established similar matrices in [12,15,16,19] through a series of CS reconstruction simulations. For convenience, the matrices are denoted Proposed, Gaussian, Logistic, Tent, Composite, and the matrix in [19], respectively. The test includes the following steps. First, generate the synthetic sparse signal $x$ and construct the measurement matrix $\Phi$. Then, obtain the measurement vector $y = \Phi x$. Last, reconstruct the signal by approximating the solution of $x_e = \arg\min \|x\|_0$ s.t. $y = \Phi x$.
Throughout this section, $x$ has length $N = 120$ and contains only $K$ non-zero entries; the locations and amplitudes of the peaks are subject to a Gaussian distribution. The proposed measurement matrix is constructed by the method shown in Algorithm 1. The orthogonal matching pursuit (OMP) [24] algorithm is selected as the minimization approach, which iteratively builds up an approximation of $x$. The system parameters of Logistic and Tent chaos are 4 and 0.5, respectively, and the order of Chebyshev chaos is 8. The sample distance is set to $d = 15$ for Logistic, Tent, and Composite, and $d = 5$ for Proposed. Each experiment is performed over 1000 random sparse ensembles; the initial value $x_0$ is randomly set for each ensemble, and the performance is averaged over sequences with different initial values. Denote $\varepsilon = \|x_e - x\|_2 / \|x\|_2$ as the reconstruction error. The reconstruction is identified as a success, namely exact reconstruction, provided $\varepsilon \le 10^{-6}$. Denote $P_{suc}$ as the percentage of successful reconstructions.
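For concreteness, a condensed sketch of one noiseless experiment follows (illustrative: a plain textbook OMP, the measurement matrix held fixed across trials for brevity, and chebyshev_measurement_matrix reused from the sketch in Section 3.1, whereas the paper redraws $x_0$ for each ensemble):

```python
import numpy as np

def omp(Phi, y, K):
    # Orthogonal matching pursuit: greedily pick the most correlated column,
    # then re-fit all selected coefficients by least squares.
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
M, N, K, trials, success = 40, 120, 8, 1000, 0
Phi = chebyshev_measurement_matrix(M, N, x0=rng.uniform(-1, 1))
for _ in range(trials):
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    x_hat = omp(Phi, Phi @ x, K)
    success += np.linalg.norm(x_hat - x) / np.linalg.norm(x) <= 1e-6
print(f"P_suc = {success / trials:.3f}")
```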
Case 1.
Comparison of the percentage of exact reconstruction $P_{suc}$ in the noiseless case.
In this case, we conduct two separate CS experiments, first by fixing M = 40 and varying K from 2 to 14, and second by fixing K = 8 and varying M from 20 to 60.
Figure 4 illustrates the change tendency of $P_{suc}$ with $K$ while $M = 40$ in the noiseless case. The figure shows that $P_{suc}$ decreases as $K$ increases. From inequalities (20) and (21), the upper bound of $P(G)$ increases and the lower bound of $P_{RIP}$ decreases when $K$ increases; the probability of the measurement matrix satisfying the RIP is thus reduced, which works against exact reconstruction.
Figure 5 illustrates the change tendency of P s u c with argument M while K = 8 in the noiseless case. As can be seen from the figure, P s u c increases with M . According to inequalities (20) and (21), P ( G ) decreases and the lower bound of P R I P increases when M increases. The increase in the probability of the measurement matrix satisfying the RIP is beneficial to the exact reconstruction.
Figure 4 and Figure 5 reveal that the percentage of exact reconstruction of the proposed measurement matrix is significantly higher than that of Gaussian and the matrix in [19], slightly higher than that of Tent, and almost the same as that of Logistic and Composite.
Case 2.
Comparison of the reconstruction error in the noisy case.
In this case, we consider the noisy model $y = \Phi x + v$, where $v$ is a vector of additive Gaussian noise with zero mean. We conduct the CS experiment by fixing $M = 40$ and $K = 8$ and varying the signal-to-noise ratio (SNR) from 10 to 50. Figure 6 shows that the errors decrease as the SNR increases, and the error of the proposed measurement matrix is smaller than that of Gaussian and the matrix in [19], slightly smaller than that of Tent, and almost the same as that of Logistic and Composite.
When noise is included in the measurements, the reconstruction errors increase greatly. In this case, reconstruction algorithms with anti-noise ability can be used, such as the adaptive iterative threshold (AIT) algorithm and the entropy minimization-based matching pursuit (EMP) algorithm; see [25,26] for details.
As shown in the simulations, when the Chebyshev chaotic matrix is applied in CS, good reconstruction performance is obtained. This coincides with the theoretical results of Section 3.2, where the Chebyshev chaotic matrix is shown to satisfy the RIP with high probability. Recall that $E[\exp(\alpha Q_m^2)] \le E[\exp(\alpha T^2)]$, where $T \sim N(0, 1)$. The upper bounds of $P(S \ge 1+\delta)$ and $P(S \le 1-\delta)$ are attained only if $Q_m \sim N(0, 1)$. That is to say, the lower bound of the probability that the proposed measurement matrix satisfies the RIP is no less than that of the Gaussian random matrix. As seen from the simulations, our proposed measurement matrix outperforms the Gaussian random matrix, so the results coincide with the theoretical analysis above.
As mentioned in the Introduction, Zhu et al. [19] transformed the Chebyshev chaotic sequence into a new sequence whose elements obey a Gaussian distribution and applied it to construct the measurement matrix. That matrix is, in fact, similar to the Gaussian random matrix in terms of element distribution, which coincides with the simulations: the reconstruction performance of OMP with the matrix in [19] is similar to that with the Gaussian random matrix. Accordingly, our proposed measurement matrix outperforms the matrix in [19].
The simulations reveal that the proposed measurement matrix achieves the same reconstruction performance as Logistic, Tent, and Composite. In fact, these chaotic measurement matrices share a similar degree of independence among their elements and a similar probability of satisfying the RIP. To construct such matrices, $MN$ uncorrelated samples must be extracted from the chaotic sequences at certain sample distances, so the length of the chaotic sequence is usually no less than $MNd$. The larger the sample distance, the longer the chaotic sequence, and the larger the resource consumption. Since the sample distance of the Chebyshev chaotic sequence is greatly reduced, the proposed measurement matrix effectively reduces the consumption of system resources.
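As a concrete example with the settings of this section ($M = 40$, $N = 120$, so $MN = 4800$ samples): at $d = 15$ the underlying chaotic sequence must contain roughly $MNd = 72000$ elements, whereas at $d = 5$ about 24000 elements suffice, a threefold saving.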

5. Conclusions

In this paper, we propose a method of constructing a measurement matrix for compressed sensing from the Chebyshev chaotic sequence. We first show that the elements sampled from the Chebyshev chaotic sequence with a small sample distance have very small correlations. Then, we use these sampled elements to construct the measurement matrix and prove in detail that the matrix satisfies the RIP with high probability. With the natural pseudo-random property of a chaotic system, the proposed chaotic measurement matrix possesses the advantages of economical storage and convenience in engineering design. Moreover, the probability that the proposed measurement matrix satisfies the RIP is expected to be at least as high as that of the Gaussian random matrix, which results in better reconstruction performance; accordingly, the proposed measurement matrix outperforms the Gaussian random one. Compared with similar chaotic measurement matrices, the proposed matrix effectively reduces the consumption of system resources without loss of reconstruction performance. Therefore, our method outperforms the existing approaches in terms of practical applications.

Author Contributions

Conceptualization, R.Y.; methodology, C.C.; software, R.Y.; validation, Y.M. and B.W.; formal analysis, C.C.; data curation, R.Y.; writing—original draft preparation, R.Y. and Y.M.; writing—review and editing, R.Y., C.C., B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1.
According to Equation (3), we have $x_{n+d} = \cos(q^d \cdot \arccos x_n)$. Then, the left side of Equation (7) can be expressed as

$$E(x_n^{m_0} x_{n+d}^{m_1}) = \int_{-1}^{1} \rho(x_n)\, x_n^{m_0} [\cos(q^d \arccos x_n)]^{m_1}\, dx_n \tag{A1}$$
Letting $x_n = \cos\theta$ and substituting Equation (4) into (A1), we obtain

$$E(x_n^{m_0} x_{n+d}^{m_1}) = \int_{0}^{\pi} \frac{1}{\pi} (\cos\theta)^{m_0} [\cos(q^d \theta)]^{m_1}\, d\theta \tag{A2}$$
Since

$$\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2}, \qquad \int_{0}^{\pi} \frac{1}{\pi} e^{i 2 k \theta}\, d\theta = \begin{cases} 0 & k = \pm 1, \pm 2, \pm 3, \ldots \\ 1 & k = 0 \end{cases}$$

Equation (A2) can be rewritten as

$$E(x_n^{m_0} x_{n+d}^{m_1}) = \int_{0}^{\pi} 2^{-(m_0+m_1)} \sum_{h=0}^{m_0} \sum_{j=0}^{m_1} C_{m_0}^{h} C_{m_1}^{j}\, \frac{1}{\pi}\, e^{i 2\theta [h - m_0/2 + q^d (j - m_1/2)]}\, d\theta \tag{A3}$$
It can be seen from Equation (A3) that $E(x_n^{m_0} x_{n+d}^{m_1})$ has non-zero terms only when $j = m_1/2 - (h - m_0/2)/q^d$. Since $0 \le h \le m_0$ and $m_0 < q^d$, we have $|h - m_0/2|/q^d < 1/2$, so any such $j$ must lie in $((m_1 - 1)/2, (m_1 + 1)/2)$. For all possible cases of $m_0$ and $m_1$, we perform the following analysis:
(1) $m_0$ is odd. According to Equation (5), $E(x_n^{m_0}) = 0$ and hence $E(x_n^{m_0}) E(x_{n+d}^{m_1}) = 0$. Suppose $E(x_n^{m_0} x_{n+d}^{m_1})$ has a non-zero value; then there exists a non-negative integer $j$ satisfying $j = m_1/2 - (h - m_0/2)/q^d$. Noting that $j \in ((m_1 - 1)/2, (m_1 + 1)/2)$, the only possible choice is $j = m_1/2$, which requires $m_1$ to be even and, accordingly, $h = m_0/2$. However, $m_0/2$ is not an integer when $m_0$ is odd, which contradicts that $h$ must be an integer. Therefore, the hypothesis fails; that is, $E(x_n^{m_0} x_{n+d}^{m_1}) = 0$ when $m_0$ is odd. In this case, $E(x_n^{m_0} x_{n+d}^{m_1}) = E(x_n^{m_0}) E(x_{n+d}^{m_1})$ holds.
(2) $m_0$ is even. Two cases are discussed:
  • $m_1$ is odd. According to Equation (5), $E(x_n^{m_0}) E(x_{n+d}^{m_1}) = 0$. Suppose $E(x_n^{m_0} x_{n+d}^{m_1})$ has a non-zero value; then there exists a non-negative integer $j$ satisfying $j = m_1/2 - (h - m_0/2)/q^d$ with $j \in ((m_1 - 1)/2, (m_1 + 1)/2)$. When $m_1$ is odd, no integer lies in this interval. Therefore, the hypothesis fails and $E(x_n^{m_0} x_{n+d}^{m_1}) = 0$; in this case, $E(x_n^{m_0} x_{n+d}^{m_1})$ is equal to $E(x_n^{m_0}) E(x_{n+d}^{m_1})$.
  • $m_1$ is even. According to Equation (5), we have $E(x_n^{m_0}) E(x_{n+d}^{m_1}) = 2^{-(m_0+m_1)} C_{m_0}^{m_0/2} C_{m_1}^{m_1/2}$. When $h = m_0/2$, the integer $j = m_1/2$ makes the exponent in (A3) vanish, so $E(x_n^{m_0} x_{n+d}^{m_1}) = 2^{-(m_0+m_1)} C_{m_0}^{m_0/2} C_{m_1}^{m_1/2}$. In this case, $E(x_n^{m_0} x_{n+d}^{m_1}) = E(x_n^{m_0}) E(x_{n+d}^{m_1})$ holds.
In conclusion, the proof is completed. □

Appendix B

Proof of Lemma 1.
Both sides of inequality (13) can be rewritten as

$$\begin{cases} E[(T + a r_1 + b r_2)^{2t}] = \displaystyle\sum_{i=0}^{2t} C_{2t}^{i}\, T^{i}\, E[(a r_1 + b r_2)^{2t-i}] \\[10pt] E[(T + c r_1 + c r_2)^{2t}] = \displaystyle\sum_{i=0}^{2t} C_{2t}^{i}\, T^{i}\, E[(c r_1 + c r_2)^{2t-i}] \end{cases}$$
According to Equation (5), $E[(a r_1 + b r_2)^{2t-i}] = 0$ and $E[(c r_1 + c r_2)^{2t-i}] = 0$ when $i$ is odd, in which case the two sides are trivially equal. It therefore remains to prove $E[(a r_1 + b r_2)^{2t-i}] \le E[(c r_1 + c r_2)^{2t-i}]$ when $i$ is even, which is equivalent to proving

$$E[(a r_1 + b r_2)^{2t}] \le E[(c r_1 + c r_2)^{2t}] \tag{A4}$$

for all $t \in \mathbb{N}$.
It is easy to verify that inequality (A4) holds when $t = 0, 1, 2$; the focus of the proof is on $t > 2$. Since the odd-order moments of $r_1$ and $r_2$ are 0, the expansions of both sides of inequality (A4) contain only even-order moments, whose coefficients are non-negative; hence the inequality is unaffected by the signs of $a$ and $b$. Without loss of generality, assume $a, b \ge 0$. Then, we have to make sure that the following inequality
$$E\left[\left(\frac{a}{\sqrt{2}\,c}\, r_1 + \frac{b}{\sqrt{2}\,c}\, r_2\right)^{2t}\right] \le E\left[\left(\frac{\sqrt{2}}{2}\, r_1 + \frac{\sqrt{2}}{2}\, r_2\right)^{2t}\right] \tag{A5}$$

holds for $t = 3, 4, 5, \ldots$ and $a, b \ge 0$.
On both sides of inequality (A5), the sum of the squares of the coefficients of $r_1$ and $r_2$ is always 1. Letting $a/(\sqrt{2}\,c) = \cos\theta$ and $b/(\sqrt{2}\,c) = \sin\theta$ with $\theta \in [0, \pi/2]$, inequality (A5) can be rewritten as

$$E[(r_1 \cos\theta + r_2 \sin\theta)^{2t}] \le E\left[\left(r_1 \cos\frac{\pi}{4} + r_2 \sin\frac{\pi}{4}\right)^{2t}\right] \tag{A6}$$
Therefore, the proof of inequality (A4) is equivalent to the proof of inequality (A6); that is, we need to prove that the maximum of $E[(r_1 \cos\theta + r_2 \sin\theta)^{2t}]$ occurs at $\theta = \pi/4$. Let $f(\theta) = E[(r_1 \cos\theta + r_2 \sin\theta)^{2t}]$; the derivative of $f(\theta)$ with respect to $\theta$ is

$$f'(\theta) = E[2t (r_1 \cos\theta + r_2 \sin\theta)^{2t-1} (r_2 \cos\theta - r_1 \sin\theta)] \tag{A7}$$
Since $E[(r_1 \cos\theta + r_2 \sin\theta)^{2t-1} r_1] = \sum_{h=0}^{2t-1} C_{2t-1}^{h} (\cos\theta)^h (\sin\theta)^{2t-1-h} E(r_1^{h+1} r_2^{2t-1-h})$ and $E(r_1^{h+1} r_2^{2t-1-h}) = E(r_2^{h+1} r_1^{2t-1-h})$, we have

$$E[(r_1 \cos\theta + r_2 \sin\theta)^{2t-1} r_1] = E[(r_2 \cos\theta + r_1 \sin\theta)^{2t-1} r_2] \tag{A8}$$
Substituting Equation (A8) into Equation (A7), we obtain

$$f'(\theta) = 2t \cdot E[(r_1 \cos\theta + r_2 \sin\theta)^{2t-1} r_2 \cos\theta - (r_2 \cos\theta + r_1 \sin\theta)^{2t-1} r_2 \sin\theta] = 2t \sum_{h=0}^{2t-1} C_{2t-1}^{h} \left\{[(\cos\theta)^{h+1}(\sin\theta)^{2t-1-h} - (\sin\theta)^{h+1}(\cos\theta)^{2t-1-h}]\, E(r_1^{h} r_2^{2t-h})\right\} \tag{A9}$$
Let $A(h) = C_{2t-1}^{h} \{[(\cos\theta)^{h+1}(\sin\theta)^{2t-1-h} - (\sin\theta)^{h+1}(\cos\theta)^{2t-1-h}]\, E(r_1^{h} r_2^{2t-h})\}$. Noting that $A(t-1) = A(2t-1) = 0$, $f'(\theta)$ can be rewritten as

$$f'(\theta) = 2t\left[\sum_{h=0}^{t-2} A(h) + A(t-1) + \sum_{h=t}^{2t-2} A(h) + A(2t-1)\right] = 2t\left[\sum_{h=0}^{t-2} A(h) + \sum_{h=t}^{2t-2} A(h)\right] = 2t \sum_{h=0}^{t-2} [A(h) + A(2t-2-h)] \tag{A10}$$
With further simplification, we get

$$f'(\theta) = 2t \sum_{h=0}^{t-2} \left\{[(\cos\theta)^{2t-1-h}(\sin\theta)^{h+1} - (\sin\theta)^{2t-1-h}(\cos\theta)^{h+1}]\, B_h\right\} \tag{A11}$$
where $B_h = C_{2t-1}^{h+1} E(r_1^{2t-h-2} r_2^{h+2}) - C_{2t-1}^{h} E(r_1^{h} r_2^{2t-h})$ and $h \in [0, t-2]$. When $h$ is odd, it is straightforward to show that $B_h = 0$. When $h$ is even, according to Equation (5), $B_h$ can be expressed as

$$B_h = 2^{-2t}\left\{\frac{(2t-1)!\,(h+2)}{\left[\left(\frac{2t-h-2}{2}\right)!\right]^2 \left[\left(\frac{h+2}{2}\right)!\right]^2} - \frac{(2t-1)!\,(2t-h)}{\left[\left(\frac{2t-h}{2}\right)!\right]^2 \left[\left(\frac{h}{2}\right)!\right]^2}\right\} = 4 \cdot 2^{-2t}\, \frac{(2t-1)!}{\left[\left(\frac{2t-h-2}{2}\right)!\right]^2 \left[\left(\frac{h}{2}\right)!\right]^2}\left(\frac{1}{h+2} - \frac{1}{2t-h}\right) > 0 \tag{A12}$$
When $\theta \in (0, \pi/4)$ and $h \le t-2$, we have $(\cos\theta)^{2t-1-h}(\sin\theta)^{h+1} > (\sin\theta)^{2t-1-h}(\cos\theta)^{h+1}$; consequently, $f'(\theta) > 0$ holds. When $\theta \in (\pi/4, \pi/2)$ and $h \le t-2$, we have $(\cos\theta)^{2t-1-h}(\sin\theta)^{h+1} < (\sin\theta)^{2t-1-h}(\cos\theta)^{h+1}$ and $f'(\theta) < 0$. Moreover, noting that $\theta = 0, \pi/4, \pi/2$ are extreme points of $f(\theta)$, it follows that $E[(r_1 \cos\theta + r_2 \sin\theta)^{2t}]$ reaches its maximum at $\theta = \pi/4$. Thus, Lemma 1 follows. □

Appendix C

Proof of Lemma 2.
Denote $v_k$ as the entry of $\varphi_m$ associated with row $p_r$ and column $p_c$ of $\Phi$, where $k = 1, 2, \ldots, K$. By the definition of $\Phi_K$, we have $v_k = \sqrt{2}\, z_{(p_c - 1)M + p_r}$. The left side of inequality (14) can be written as

$$E[Q_m^{2t}] = E[(v_1 u_1 + v_2 u_2 + \cdots + v_K u_K)^{2t}] = \sum_{R} E[(v_1 u_1 + v_2 u_2 + R)^{2t}]\, P\!\left(\sum_{k=3}^{K} v_k u_k = R\right) \tag{A13}$$
Using Lemma 1, we get

$$\sum_{R} E[(v_1 u_1 + v_2 u_2 + R)^{2t}]\, P\!\left(\sum_{k=3}^{K} v_k u_k = R\right) \le \sum_{R} E\!\left[\left(v_1 \sqrt{\tfrac{u_1^2 + u_2^2}{2}} + v_2 \sqrt{\tfrac{u_1^2 + u_2^2}{2}} + R\right)^{2t}\right] P\!\left(\sum_{k=3}^{K} v_k u_k = R\right) \tag{A14}$$
Letting $\eta = \left(\sqrt{\tfrac{u_1^2 + u_2^2}{2}}, \sqrt{\tfrac{u_1^2 + u_2^2}{2}}, u_3, \ldots, u_K\right)^T$, we have

$$E[Q_m^{2t}] \le \sum_{R} E\!\left[\left(v_1 \sqrt{\tfrac{u_1^2 + u_2^2}{2}} + v_2 \sqrt{\tfrac{u_1^2 + u_2^2}{2}} + R\right)^{2t}\right] P\!\left(\sum_{k=3}^{K} v_k u_k = R\right) = E[(\varphi_m^T \eta)^{2t}] \tag{A15}$$
Applying this argument repeatedly, $\eta$ eventually becomes $w$ [27], which yields Lemma 2. Then, Lemma 2 holds. □

Appendix D

Proof of Lemma 3.
Let $\{t_k\}_{k=1}^{K}$ be a group of i.i.d. variables with $t_k \sim N(0, 1)$, $k = 1, 2, \ldots, K$. Rewrite $T$ as $T = \frac{1}{\sqrt{K}} \sum_{k=1}^{K} t_k$. Substituting $\varphi_m^T w = \frac{1}{\sqrt{K}} \sum_{k=1}^{K} v_k$ into inequality (15), we obtain

$$\begin{cases} E(T^{2t}) = \dfrac{1}{K^t} \displaystyle\sum_{n_k \ge 0,\ \sum_{k=1}^{K} n_k = 2t} \binom{2t}{n_1, n_2, \ldots, n_K} \prod_{k=1}^{K} E(t_k^{n_k}) \\[12pt] E[(\varphi_m^T w)^{2t}] = \dfrac{1}{K^t} \displaystyle\sum_{n_k \ge 0,\ \sum_{k=1}^{K} n_k = 2t} \binom{2t}{n_1, n_2, \ldots, n_K} \prod_{k=1}^{K} E(v_k^{n_k}) \end{cases} \tag{A16}$$

where $\binom{2t}{n_1, n_2, \ldots, n_K} = \frac{(2t)!}{(n_1!)(n_2!)\cdots(n_K!)}$ and the expectations factor by independence. When $n_k$ is odd, both $E(t_k^{n_k})$ and $E(v_k^{n_k})$ are equal to 0. When $n_k$ is even, writing $n_k = 2\tau$, we get $E(t_k^{2\tau}) = (2\tau)!/(\tau!\, 2^{\tau})$ and $E(v_k^{2\tau}) = (2\tau)!/((\tau!)^2\, 2^{\tau})$. Obviously, $E(v_k^{n_k}) \le E(t_k^{n_k})$ holds term by term. Therefore, Lemma 3 follows. □

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  2. Hashimoto, F.; Ote, K.; Oida, T.; Teramoto, A.; Ouchi, Y. Compressed-Sensing Magnetic Resonance Image Reconstruction Using an Iterative Convolutional Neural Network Approach. Appl. Sci. 2020, 10, 1902.
  3. Sabahi, M.F.; Masoumzadeh, M.; Forouzan, A.R. Frequency-domain wideband compressive spectrum sensing. IET Commun. 2016, 10, 1655–1664.
  4. Bai, Z.; Kaiser, E.; Proctor, J.L.; Kutz, J.N.; Brunton, S.L. Dynamic Mode Decomposition for Compressive System Identification. AIAA J. 2020, 58, 561–574.
  5. Candès, E.J. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Math. 2008, 346, 589–592.
  6. Candès, E.J.; Tao, T. Decoding by Linear Programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215.
  7. Candès, E.J.; Tao, T. Near-Optimal Signal Recovery from Random Projections: Universal Encoding Strategies? IEEE Trans. Inf. Theory 2006, 52, 5406–5425.
  8. Candès, E.J.; Romberg, J.K.; Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 2006, 59, 1207–1223.
  9. Ravelomanantsoa, A.; Rabah, H.; Rouane, A. Fast and efficient signals recovery for deterministic compressive sensing: Applications to biosignals. In Proceedings of the 2015 Conference on Design and Architectures for Signal and Image Processing (DASIP), Krakow, Poland, 23–25 September 2015; pp. 1–6.
  10. DeVore, R.A. Deterministic constructions of compressed sensing matrices. J. Complex. 2007, 23, 918–925.
  11. Xu, G.; Xu, Z. Compressed Sensing Matrices from Fourier Matrices. IEEE Trans. Inf. Theory 2015, 61, 469–478.
  12. Yu, L.; Barbot, J.P.; Zheng, G.; Sun, H. Compressive Sensing with Chaotic Sequence. IEEE Signal Process. Lett. 2010, 17, 731–734.
  13. Al-Azawi, M.K.M.; Gaze, A.M. Combined speech compression and encryption using chaotic compressive sensing with large key size. IET Signal Process. 2018, 12, 214–218.
  14. Lu, W.; Liu, Y.; Wang, D. A Distributed Secure Data Collection Scheme via Chaotic Compressed Sensing in Wireless Sensor Networks. Circuits Syst. Signal Process. 2013, 32, 1363–1387.
  15. Frunzete, M.; Yu, L.; Barbot, J.-P.; Vlad, A. Compressive sensing matrix designed by tent map, for secure data transmission. In Proceedings of the Signal Processing Algorithms, Architectures, Arrangements, and Applications SPA 2011, Poznan, Poland, 29–30 September 2011; pp. 1–6.
  16. Zhou, W.; Jing, B.; Zhang, H.; Huang, Y.-F.; Li, J. Construction of Measurement Matrix in Compressive Sensing Based on Composite Chaotic Mapping. Acta Electron. Sin. 2017, 45, 2177–2183.
  17. Vlad, A.; Luca, A.; Frunzete, M. Computational Measurements of the Transient Time and of the Sampling Distance That Enables Statistical Independence in the Logistic Map. In Proceedings of the International Conference on Computational Science and Its Applications, Seoul, Korea, 29 June–2 July 2009; pp. 703–718.
  18. Vaduva, A.; Vlad, A.; Badea, B. Evaluating the performance of a test-method for statistical independence decision in the context of chaotic signals. In Proceedings of the 2016 International Conference on Communications (COMM), Bucharest, Romania, 9–10 June 2016; pp. 417–422.
  19. Zhu, S.; Zhu, C.; Wang, W. A Novel Image Compression-Encryption Scheme Based on Chaos and Compression Sensing. IEEE Access 2018, 6, 67095–67107.
  20. Martino, L.; Luengo, D.; Míguez, J. Independent Random Sampling Methods; Springer: Cham, Switzerland, 2018; pp. 9–12.
  21. Öztürk, I.; Kılıç, R. Digitally generating true orbits of binary shift chaotic maps and their conjugates. Commun. Nonlinear Sci. Numer. Simul. 2018, 62, 395–408.
  22. Geisel, T.; Fairén, V. Statistical properties of chaos in Chebyshev maps. Phys. Lett. A 1984, 105, 263–266.
  23. Pesin, Y.B. Characteristic Lyapunov Exponents and Smooth Ergodic Theory. Russ. Math. Surv. 1977, 32, 55–114.
  24. Tropp, J.A.; Gilbert, A.C. Signal Recovery from Random Measurements via Orthogonal Matching Pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
  25. Wang, Y.; Zeng, J.; Peng, Z.; Chang, X.; Xu, Z. Linear Convergence of Adaptively Iterative Thresholding Algorithms for Compressed Sensing. IEEE Trans. Signal Process. 2015, 63, 2957–2971.
  26. Meena, V.; Abhilash, G. Robust recovery algorithm for compressed sensing in the presence of noise. IET Signal Process. 2016, 10, 227–236.
  27. Achlioptas, D. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. J. Comput. Syst. Sci. 2003, 66, 671–687.
Figure 1. The change tendency of λ with system parameters: (a) Logistic; (b) Tent; (c) Chebyshev.
Figure 2. Logistic and Tent chaotic sequences: (a) Logistic $d = 5$; (b) Tent $d = 5$; (c) Logistic $d = 15$; (d) Tent $d = 15$.
Figure 3. $\rho(x_n, x_{n+d})$ in the 8-order Chebyshev chaotic sequence with $d = 5$.
Figure 4. The change tendency of $P_{suc}$ with $K$ while $M = 40$ in the noiseless case.
Figure 5. The change tendency of $P_{suc}$ with $M$ while $K = 8$ in the noiseless case.
Figure 6. The change tendency of $\varepsilon$ with SNR while $M = 40$ and $K = 8$.
