Article

Measurement Matrix Optimization for Compressed Sensing System with Constructed Dictionary via Takenaka–Malmquist Functions

1
Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai 200444, China
2
Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau 999078, China
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(4), 1229; https://doi.org/10.3390/s21041229
Submission received: 14 January 2021 / Revised: 29 January 2021 / Accepted: 2 February 2021 / Published: 9 February 2021
(This article belongs to the Section Intelligent Sensors)

Abstract:
Compressed sensing (CS) has been proposed to improve the efficiency of signal processing by simultaneously sampling and compressing the signal of interest under the assumption that the signal is sparse in a certain domain. This paper aims to improve the CS system performance by constructing a novel sparsifying dictionary and optimizing the measurement matrix. Owing to the adaptability and robustness of the Takenaka–Malmquist (TM) functions in system identification, the use of it as the basis function of a sparsifying dictionary makes the represented signal exhibit a sparser structure than the existing sparsifying dictionaries. To reduce the mutual coherence between the dictionary and the measurement matrix, an equiangular tight frame (ETF) based iterative minimization algorithm is proposed. In our approach, we modify the singular values without changing the properties of the corresponding Gram matrix of the sensing matrix to enhance the independence between the column vectors of the Gram matrix. Simulation results demonstrate the promising performance of the proposed algorithm as well as the superiority of the CS system, designed with the constructed sparsifying dictionary and the optimized measurement matrix, over existing ones in terms of signal recovery accuracy.

1. Introduction

Sparse representation has become a powerful tool, as it can efficiently model signals to facilitate compressing and processing [1,2,3]. Due to the redundancy in the majority of signals in nature, transforming the original signal to a sparse or compressed version through a certain domain is meaningful for reducing costs in transportation and storage. In this case, the signal can be expressed as a linear combination of a few given basis functions, which indicates that most of the representation’s coefficients are zero or close to zero.
Compressed sensing (CS), as one of the most attractive emerging fields in recent years, has shown promising results in many applications, such as channel or spectrum estimation [4,5], cognitive radio communications [6], color pattern detection [7,8], and people-centric sensing [9]. CS states that signals which have sparse structures in an appropriate domain can be effectively recovered from underdetermined linear projections. However, it is often the case that signals of interest are not exactly sparse but compressible. Projecting the signal into its proper sparse domain to obtain a sparser structure is essential in CS [10]. Hence, one critical issue in sparse representation is how to choose the dictionary (as well as the transform basis) to reduce the number of measurements required for reconstruction [11]. In most cases, the original signal is sparse in some transform domain, e.g., the discrete cosine transform (DCT), the discrete wavelet transform (DWT), and so on. Despite being simple and fast to compute, these nonadaptive dictionaries are unable to represent a given signal of interest sparsely enough [12]. The approximate solution after the sparse representation of signals still has many nonzero elements. In fact, if an N-dimensional signal admits a K-sparse representation in a dictionary, one can exactly reconstruct the original signal with high probability from M = O(K log(N/K)) measurements [10,13]. This shows that only a small number of measurements are necessary to accurately recover the original signal when the signal exhibits a sparser representation. In our previous work [14], we presented an effective sparse representation approach for wireless channels based on the modified Takenaka–Malmquist (TM) basis, under which the represented wireless channel exhibits a sparser structure than under existing sparsifying dictionaries. As the results showed, this approach helps improve the performance of CS.
In this paper, we construct a sparse dictionary using the TM basis functions. Compared with other sparse bases, the signal of interest can be represented more accurately with only a limited number of TM basis functions.
A recent growing trend involves replacing the conventional random sensing matrices by optimized ones in order to further enhance CS performance [15,16,17,18,19]. Some conditions have been put forth to evaluate the quality of a sensing matrix to guarantee exact signal recovery. Candès et al. introduced the restricted isometry property (RIP) and proved that if a sensing matrix satisfies the RIP, the original signal can be recovered with high probability [13]. However, it is difficult to verify whether a given sensing matrix satisfies the RIP. Another relatively simple property is the mutual coherence [15,16,17,18,19,20]. The coherence of a matrix A denotes the maximum absolute correlation between different columns of A. Gaussian and Bernoulli matrices are widely used because their entries are generated by an independent identically distributed (i.i.d.) process and they satisfy the RIP. However, using this class of matrices as the sensing matrix yields low recovery accuracy in CS systems. In order to improve the CS performance, researchers have attempted to optimize the sensing matrix with the hope of increasing the reconstruction probability and reducing the number of measurements [15,16,17,18]. Elad proposed an algorithm that iteratively minimizes the t-averaged mutual coherence using a shrinkage operation and singular value decomposition (SVD) [15]. However, Elad's method is time-consuming and leaves some large coherence values. Duarte-Carvajalino and Sapiro proposed a framework for joint sensing matrix and dictionary design and optimization [16]. The method addresses the problem by making the Gram matrix as close as possible to an identity matrix. Their approach is noniterative, but the result loses a clear interpretation after multiple approximation steps. In this paper, we propose a method based on the equiangular tight frame (ETF) design. The optimization objective is to make the Gram matrix of the sensing matrix as close as possible to an ETF.
For clarity, the threefold main contributions of this paper are summarized as follows:
  • It is revealed that existing dictionaries are unable to sparsely represent a given signal well. The TM functions offer adaptability and robustness in system identification, which motivated us to use them as the basis for a new dictionary for the sparse representation of a given signal. The simulation results illustrate that, compared with other dictionaries, using the constructed dictionary to represent a given signal achieves a lower mean square error (MSE) for the same number of atoms.
  • In order to further improve the CS performance, an ETF-based iterative minimization algorithm is proposed for optimizing the measurement matrix. It is shown that the approach leads to a lower coherence of the equivalent dictionary and achieves significantly higher signal recovery accuracy than the methods in [15,16].
  • The reconstruction quality obtained with our constructed dictionary and the measurement matrix optimized by our approach is superior to that obtained with existing dictionaries (such as wavelets or the DCT) and a random measurement matrix. Moreover, simulation experiments show that after the random measurement matrix is optimized by our proposed algorithm, the reconstruction success rate also improves.
The rest of the paper is organized as follows. In Section 2, we review the basic principles of the CS framework. Section 3 introduces the TM basis function and describes how to construct a dictionary for sparse representation. An ETF-based iterative minimization method is developed in Section 4 for optimizing the measurement matrix. Experimental results are provided in Section 5. Section 6 concludes the paper.

2. Background and Preliminaries

CS states that exact signal reconstruction is possible from far fewer samples than required by the classical Nyquist–Shannon sampling theorem if the signal admits a sparse representation in a certain domain. Consider a signal $x \in \mathbb{R}^N$ which is sparse over the transformation basis matrix $\Psi \in \mathbb{R}^{N \times N}$. Let $s$ be the sparse representation of $x$ over the basis matrix $\Psi$. Accordingly, $x$ can be described as
$x = \Psi s$,  (1)
where the signal $x$ is said to be $K$-sparse in the $\Psi$ domain if $\|s\|_0 = K$ $(K \ll N)$. The $\ell_0$-norm used here denotes the number of nonzero elements in $s$.
The basic model of CS is as follows:
$y = \Phi x$,  (2)
where the resulting vector $y \in \mathbb{R}^{M \times 1}$ is called the measurement vector, $x \in \mathbb{R}^{N \times 1}$ is the given signal of interest, and $\Phi \in \mathbb{R}^{M \times N}$ (with $M \ll N$) is called the measurement matrix. Since the original signal $x$ is sparse in the $\Psi$ domain, we can rewrite the measurement model as
$y = \Phi \Psi s = A s$,  (3)
where $A = \Phi \Psi \in \mathbb{R}^{M \times L}$ $(L \ge N \gg M)$ represents the sensing matrix and $\Psi \in \mathbb{R}^{N \times L}$ is the sparsifying basis, e.g., an orthonormal or overcomplete dictionary.
To recover the high-dimensional original signal from the low-dimensional measurement data, it is necessary to solve the following minimization problem
$\min_{s} \|s\|_0 \quad \text{subject to} \quad y = A s.$  (4)
In general, the reconstruction problem in (4) is a nondeterministic polynomial-time hard (NP-hard) problem. Fortunately, research in [21] and [22] has demonstrated that $\ell_1$-norm minimization yields the same sparse solutions as $\ell_0$-norm minimization. Therefore, the $\ell_1$-norm is used as a substitute for the $\ell_0$-norm in (4). The solution of the above problem is the same as that of the $\ell_1$-norm minimization below:
$\min_{s} \|s\|_1 \quad \text{subject to} \quad y = A s,$  (5)
where (5) can be solved efficiently using convex relaxation algorithms such as basis pursuit (BP) [23]. Another strategy to address the problem in (4) is to use greedy algorithms, such as orthogonal matching pursuit (OMP), which find a local optimal solution through multiple iterations [24].
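As a concrete illustration of the greedy strategy just mentioned, a minimal OMP implementation might look as follows. This is a sketch in NumPy; the matrix sizes and the fixed support used in the usage example are arbitrary illustrative choices, not taken from the paper's experiments.

```python
import numpy as np

def omp(A, y, K):
    """Recover a K-sparse vector s from y = A s by orthogonal matching pursuit."""
    _, L = A.shape
    residual = y.copy()
    support = []
    s = np.zeros(L)
    for _ in range(K):
        # Greedy step: pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Refit the coefficients on the enlarged support by least squares.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    s[support] = coeffs
    return s

# Usage: exact recovery of a 3-sparse vector from 20 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
s_true = np.zeros(50)
s_true[[3, 17, 41]] = [1.5, -2.0, 0.7]
s_hat = omp(A, A @ s_true, K=3)
```

Each iteration adds one atom to the support and refits all selected coefficients, which is what distinguishes OMP from plain matching pursuit.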
The premise of CS is to ensure that the original signal is sparse or can be represented sparsely. The well-known RIP provides a sufficient condition for an exact or approximate recovery of a sparse representation s from the low-dimensional measurements y. An M × L sensing matrix A is said to satisfy the RIP of order K if there exists a restricted isometry constant δK (0 < δK < 1), such that
$(1 - \delta_K) \|s\|_2^2 \le \|A s\|_2^2 \le (1 + \delta_K) \|s\|_2^2$,  (6)
where $\|s\|_0 = K$.
The RIP measures the degree to which each submatrix consisting of at most $K$ columns of $A$ is close to being an isometry. However, the RIP is too strict, and it is difficult to verify whether a given sensing matrix $A$ satisfies the condition. Therefore, the best-known constructions of matrices satisfying the RIP are random matrices, including the Gaussian matrix, the Bernoulli matrix, and most other matrices with i.i.d. entries. Another relatively simple and feasible property for evaluating the quality of sensing matrices is the mutual coherence. It requires that the rows of the measurement matrix $\Phi$ cannot sparsely represent the columns of the sparse basis $\Psi$, and vice versa. The mutual coherence $\mu(A)$ is defined as the largest absolute normalized inner product between different columns of the sensing matrix $A$. Formally, it is expressed as
$\mu(A) = \max_{1 \le i, j \le L,\, i \ne j} \dfrac{|a_i^T a_j|}{\|a_i\|_2 \cdot \|a_j\|_2}$,  (7)
where $a_i$ is the $i$-th column of $A$.
The mutual coherence intuitively reflects the quality of a sensing matrix and plays an important role in the signal reconstruction performance. A smaller value of $\mu(A)$ means that the measurement matrix $\Phi$ and the sparsifying dictionary $\Psi$ are less correlated; if $\mu(A)$ equals zero, the two matrices are mutually orthogonal.
An alternative way to measure the mutual coherence is to consider the corresponding Gram matrix $G = \tilde{A}^T \tilde{A}$, in which $\tilde{A}$ is obtained by normalizing each column of $A$. The maximum absolute value of the off-diagonal elements of $G$ gives the mutual coherence, that is,
$\mu(A) = \max_{1 \le i, j \le L,\, i \ne j} |g_{ij}|$.  (8)
Generally, a K-sparse signal s can be recovered from (3) by (4) or (5) if the following inequality holds [25,26,27]:
$K < \dfrac{1}{2} \left( 1 + \dfrac{1}{\mu(A)} \right)$.  (9)
Even though in many situations a small μ(A) is desired, the following well-known result of Welch indicates that μ(A) is bounded below:
$\mu(A) \ge \mu_E = \sqrt{\dfrac{L - M}{M(L - 1)}}$.  (10)
It should be noted that the Welch bound $\mu_E$ is approximately equal to $1/\sqrt{M}$ when $L \gg M$.
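The mutual coherence of Equation (7) and the Welch bound of Equation (10) are straightforward to compute numerically. The following sketch, using an illustrative 64 × 256 Gaussian matrix, checks that a random matrix indeed sits above the bound:

```python
import numpy as np

def mutual_coherence(A):
    """mu(A): largest absolute normalized inner product between distinct columns (Eq. (7))."""
    A_norm = A / np.linalg.norm(A, axis=0)
    G = A_norm.T @ A_norm        # Gram matrix of the column-normalized matrix
    np.fill_diagonal(G, 0.0)     # ignore the unit diagonal
    return float(np.abs(G).max())

def welch_bound(M, L):
    """Lower bound mu_E on the coherence of any M x L matrix (Eq. (10))."""
    return float(np.sqrt((L - M) / (M * (L - 1))))

# A 64 x 256 Gaussian matrix sits well above the Welch bound,
# which is close to 1/sqrt(64) = 0.125 since L >> M here.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256))
mu = mutual_coherence(A)
```

This gap between the coherence of a random matrix and the Welch bound is exactly what the optimization in Section 4 tries to close.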

3. Construction Procedure of the Dictionary

In this section, we briefly introduce and analyze the Takenaka–Malmquist basis functions. In addition, we show how to construct a new dictionary using the TM basis functions for sparse representation in CS. The preliminary results show the representation advantages of this new approach.

3.1. The Takenaka–Malmquist Functions

The TM basis functions were first widely used in the field of control engineering. Extensive research has examined how the TM basis is suitable for linear time-invariant (LTI) system identification [14,28,29,30]. For a deterministic signal, we can learn from the idea of system identification and represent it using the TM basis. The TM functions are defined as follows:
$F_k(z) = \dfrac{\sqrt{1 - |b_k|^2}}{z - b_k} \prod_{i=1}^{k-1} \dfrac{1 - \bar{b}_i z}{z - b_i}, \quad k = 1, 2, \ldots,$  (11)
where each pole $b_i$, with $|b_i| < 1$, lies in the open unit disc of the complex plane, and $\bar{b}_i$ denotes the complex conjugate of $b_i$.
The TM basis functions are constructed to be orthonormal by using the Gram–Schmidt procedure. According to the Cauchy integral formula, it is easy to prove the orthogonality between TM basis functions by
$\langle F_k, F_1 \rangle = F_k(1/\bar{b}_1) \sqrt{1 - |b_1|^2} = 0, \quad k > 1,$  (12)
which holds because each $F_k(z)$ with $k > 1$ has a zero at $1/\bar{b}_1$.
The orthogonality between the TM basis functions allows them to serve as a dictionary from which the representation coefficients of a signal can be efficiently selected. Hence, only a limited number of linear combinations of TM basis functions is required to approximate the original signal. This provides the theoretical basis and feasibility for building a dictionary next.

3.2. Construction of the Dictionary

The number of nonzero terms $K$ in the sparse representation coefficient vector $s$ of a signal $x$ over a sparsifying dictionary $\Psi$ is a very important factor. The smaller the value of $K$, the fewer measurements are required for signal recovery. Therefore, the sparsifying dictionary $\Psi$ should be properly selected so that the signal has a sparser representation structure; in other words, $K$ should be reduced as close to zero as possible. The most commonly used approach is to construct a sparsifying dictionary from basis functions, such as the DWT, the DCT, and so forth. For a given dictionary, a signal is approximately represented as a linear combination of the dictionary atoms:
$x \approx \sum_{k=1}^{n} s_k \psi_k$.  (13)
The approximate solution after the sparse representation of signals in the DCT (or DWT) domain still has many nonzero elements. For the same number of atoms, the approximate representation of the signal has a large error. To further improve the sparse representation performance, we construct a sparsifying dictionary using the TM basis functions. The detailed construction process is as follows.
For a given signal $x \in \mathbb{R}^N$, $N$ poles are uniformly selected within the unit circle and set as the poles of the TM basis functions, forming an initial sparsifying dictionary $\Psi = [F_1(z), \ldots, F_\xi(z), \ldots, F_N(z)]$. Then, each TM basis function is sampled at the equally spaced frequencies $\omega_k = 2\pi(k-1)/N$, $k = 1, 2, \ldots, N$. In this way, the column vector corresponding to each TM basis function is given by
$F_\xi = \left[ F_\xi(e^{j0}), \ldots, F_\xi(e^{j\omega_k}), \ldots, F_\xi(e^{j 2\pi(N-1)/N}) \right]^T$.  (14)
Therefore, the constructed dictionary can be expressed as follows
$\Psi = \begin{bmatrix} F_1(e^{j0}) & \cdots & F_\xi(e^{j0}) & \cdots & F_N(e^{j0}) \\ \vdots & & \vdots & & \vdots \\ F_1(e^{j\omega_k}) & \cdots & F_\xi(e^{j\omega_k}) & \cdots & F_N(e^{j\omega_k}) \\ \vdots & & \vdots & & \vdots \\ F_1(e^{j 2\pi(N-1)/N}) & \cdots & F_\xi(e^{j 2\pi(N-1)/N}) & \cdots & F_N(e^{j 2\pi(N-1)/N}) \end{bmatrix}$.  (15)
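As an illustration, the dictionary above can be assembled numerically. The sketch below samples the TM functions of Equation (11) on the unit circle; placing the poles on a circle of radius 0.5 and normalizing each column are illustrative assumptions, not the paper's prescription.

```python
import numpy as np

def tm_dictionary(N, radius=0.5):
    """Sample the first N TM basis functions (Eq. (11)) at the frequencies
    omega_k = 2*pi*(k-1)/N to form an N x N dictionary as in Eq. (15).
    The uniform pole placement on a circle of the given radius is illustrative."""
    b = radius * np.exp(2j * np.pi * np.arange(N) / N)  # poles b_k inside the unit disc
    z = np.exp(2j * np.pi * np.arange(N) / N)           # sampling points e^{j*omega_k}
    Psi = np.zeros((N, N), dtype=complex)
    for k in range(N):
        col = np.sqrt(1 - np.abs(b[k]) ** 2) / (z - b[k])
        for i in range(k):                              # Blaschke product over i < k
            col *= (1 - np.conj(b[i]) * z) / (z - b[i])
        Psi[:, k] = col / np.linalg.norm(col)           # normalize each atom
    return Psi

Psi = tm_dictionary(256)
```

The Blaschke factors have unit modulus on the unit circle, so the columns stay well scaled even for large column indices.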
To illustrate the performance of the constructed dictionary, we provide a comparison of the results in Figure 1.
As a performance measure between the original signal and the reconstructed signal, the MSE is defined as follows:
$\mathrm{MSE} = \dfrac{1}{N} \sum_{k=1}^{N} \|x - \bar{x}\|_2^2$.  (16)
Without loss of generality, a Rayleigh-distributed signal of length 256 is taken here as an example. A random dictionary of size 256 × 256 is considered, drawn from an i.i.d. zero-mean, unit-variance Gaussian distribution. The DCT, DWT, and constructed TM dictionaries of the same dimension are compared. The OMP algorithm is used to compare the MSE of the reconstructed signal after sparse representation of the original signal for different numbers of decomposed atoms. The results demonstrate that, compared with the random, DCT, and DWT dictionaries, using the constructed TM dictionary to represent a given signal yields a lower MSE for the same number of decompositions.

4. Optimizing the Measurement Matrix

In this section, we investigate the problem of optimizing the measurement matrix given a sparsifying dictionary to increase CS performance further. To address the problem, an ETF-based iterative minimization algorithm is proposed.
The objective is to optimize the measurement matrix $\Phi$ so as to minimize the mutual coherence $\mu(A)$; in other words, to find a measurement matrix $\Phi$ that makes the corresponding Gram matrix as close to an identity matrix as possible. The column-normalized form of such a sensing matrix $A$ is called an ETF. However, an ETF often does not exist for an arbitrary dimension [25,31]. Therefore, our optimization goal is to make the sensing matrix $A$ as close as possible to an ETF. The minimization problem is formulated as follows:
$\min_{\Phi,\, H \in \Lambda_{set}} \|A^T A - H\|_F^2$,  (17)
where $\|\cdot\|_F$ is the Frobenius norm and $\Lambda_{set}$ denotes a convex set that guarantees the optimization problem has solutions. For an arbitrary dimension, $\Lambda_{set}$ is regularized as
$\Lambda_{set} = \left\{ H \in \mathbb{R}^{L \times L} : H = H^T,\ \mathrm{diag}(H) = \mathbf{1},\ \max_{i \ne j} |h_{ij}| \le \mu_E \right\}$.  (18)
As mentioned before, μE is the Welch bound and is used here as a prescribed parameter.
The alternating optimization method is usually used to address the minimization problem in (17). The difference between our proposed method and others is that, in the stage of updating the Gram matrix, we assign weights to the Gram matrices of the previous and current iterations. We modify the singular values without changing the properties of the Gram matrix, so that the minimum singular value of the updated Gram matrix increases and the maximum singular value decreases, thereby enhancing the linear independence between the column vectors of the Gram matrix. A new diagonal matrix is then obtained by averaging the eigenvalues of the updated Gram matrix, from which an optimized measurement matrix is constructed. The detailed optimization procedure is shown in Algorithm 1.
In this algorithm, the Gram matrix of the normalized sensing matrix is computed. In order to make it close to an ETF, the Gram matrix is projected onto the convex set Λset by the shrinking operation function:
$g_{ij} = \begin{cases} 1, & i = j \\ g_{ij}, & i \ne j,\ |g_{ij}| < \mu_E \\ \mathrm{sign}(g_{ij}) \cdot \mu_E, & i \ne j,\ |g_{ij}| \ge \mu_E \end{cases}$  (19)
Since the Gram matrix is projected onto Λset, its structure changes significantly. In order to retain some of its feature information and approximate the ETF, we give weights to the Gram matrix before and after the update:
$G^{(t+1)} = \beta G^{(t)} + (1 - \beta) G^{(t-1)}$.  (20)
In general, the aforementioned shrinking operation makes the Gram matrix full rank. Therefore, the next step is to force a rank of $M$ through a singular value decomposition of the obtained Gram matrix, i.e., $U \Lambda V^T = G$, in which $\mathrm{rank}(\Lambda) = M \le L$. Let $\lambda_1, \lambda_2, \ldots, \lambda_M$ be the eigenvalues of the diagonal matrix $\Lambda$. According to the singular value decomposition, the smaller the maximum singular value, the better the incoherence of the matrix. Therefore, in order to reduce the mutual coherence between the different columns of $A$, we modify the eigenvalues by averaging all eigenvalues of the obtained Gram matrix, yielding a new diagonal matrix $\tilde{\Lambda}$ with diagonal elements $\tilde{\lambda}_i = \frac{1}{M} \sum_{i=1}^{M} \lambda_i$, $i = 1, 2, \ldots, M$. Then, we build the square root of the obtained Gram matrix, $G = S^T S$, where $S = \tilde{\Lambda}^{1/2} V^T$. Now, optimizing the measurement matrix is equivalent to solving
$\min_{\Phi} \|S - \Phi \Psi\|_F^2$.  (21)
The solution process is as follows:
$\|S - \Phi \Psi\|_F^2 = \mathrm{tr}\left[ (S - \Phi \Psi)^T (S - \Phi \Psi) \right] = \mathrm{tr}(S^T S) - 2\, \mathrm{tr}(\Psi^T \Phi^T S) + \mathrm{tr}(\Psi^T \Phi^T \Phi \Psi).$  (22)
Here, $\mathrm{tr}(\cdot)$ denotes the matrix trace operation. Differentiating (22) with respect to $\Phi$ gives
$\dfrac{\partial \|S - \Phi \Psi\|_F^2}{\partial \Phi} = -2\, \dfrac{\partial\, \mathrm{tr}(\Phi^T S \Psi^T)}{\partial \Phi} + \dfrac{\partial\, \mathrm{tr}(\Phi \Psi \Psi^T \Phi^T)}{\partial \Phi} = -2 S \Psi^T + 2 \Phi \Psi \Psi^T.$  (23)
Setting (23) to zero and solving the least-squares problem, the optimized solution is $\Phi = \tilde{\Lambda}^{1/2} V^T \Psi^{\dagger}$, where $\dagger$ denotes the pseudoinverse.
Algorithm 1: The proposed optimization algorithm.
Input: The given sparsifying dictionary $\Psi \in \mathbb{R}^{N \times L}$, the number of iterations Iter, and the threshold $\mu_E$.
Output: The measurement matrix Φ.
1: Initialization: Set $\Phi \in \mathbb{R}^{M \times N}$ to be a random matrix.
2: for l = 1 : Iter do
3:  Normalize the columns of the sensing matrix A = ΦΨ to obtain Ã.
4:  Compute the Gram matrix G = ÃTÃ.
5:  Update the Gram matrix by the shrinking operation (19).
6:  Assign weight values between the previous and current Gram matrices to update the Gram matrix: $G^{(t+1)} = \beta G^{(t)} + (1 - \beta) G^{(t-1)}$.
7:  Apply the SVD $U \Lambda V^T = G$ and force the rank of $G$ to be $M$.
8:  Average the eigenvalues to get the new diagonal matrix $\tilde{\Lambda}$.
9:  Update the measurement matrix $\Phi = \tilde{\Lambda}^{1/2} V^T \Psi^{\dagger}$.
10: end for

5. Experimental Results

In this section, we carry out various numerical simulations to verify the effectiveness of the proposed approach. The experiments include a performance evaluation of the proposed algorithm and a performance comparison of different sparsifying dictionaries. In Section 5.1, we present experiments that illustrate the performance of the proposed algorithm and compare it with other algorithms. In Section 5.2, we demonstrate the superiority of the CS system designed with the constructed sparsifying dictionary over existing ones under the same measurement matrix. In addition, the signal recovery quality is evaluated to compare the performance of the different sparsifying dictionaries under the measurement matrix optimized by our proposed approach.

5.1. Performance Evaluation of the Proposed Algorithm

In this subsection, the performance of the proposed algorithm in the CS system is evaluated. Several comparison simulations are given to verify the effectiveness of the proposed algorithm in terms of the measure $\mu(A)$ and the signal recovery performance. Here, a random dictionary (with every entry drawn from an i.i.d. zero-mean, unit-variance Gaussian distribution) of size 256 × 400 was used. We generated a Gaussian matrix of size 64 × 256 as a random measurement matrix and used it as the initial condition to obtain the corresponding optimized measurement matrices with the proposed algorithm and the algorithms in [15] and [16], respectively.
The parameter settings in the abovementioned algorithms are shown as follows. For Elad’s algorithm, the parameters t, γ are set to 0.2 and 0.55, respectively. The iteration number Iter is set to 30 for both Elad’s and our algorithm. It is worth noting that the Duarte-Carvajalino and Sapiro algorithm is noniterative.
Since Gram matrices with smaller off-diagonal entries lead to good reconstruction performance, the distribution of the coherence values can be used to evaluate a sensing matrix. The histogram of the absolute off-diagonal entries of the normalized Gram matrix corresponding to each sensing matrix is depicted in Figure 2. As can be seen, both the methods proposed in [15,16] and our proposed method reduce the number of large absolute values in the Gram matrix. Compared with Elad's method and Sapiro's method, it is easy to see in Figure 2 that the largest absolute values produced by our method are smaller. Furthermore, the histogram of our method shows that the local cumulative coherence values shift significantly to the left and concentrate below 0.2, which means that the local coherence is also smaller.
To illustrate the behavior of the proposed algorithm, the evolution of the mutual coherence versus the iteration number is shown in Figure 3. Since Sapiro's method is noniterative, its optimized value is shown as a single point in the graph. For both Elad's method and our proposed method, the mutual coherence decreases with the number of iterations. However, our method declines significantly faster. Moreover, its final value of $\mu(A)$ is also smaller than those of the other two methods.
Next, the signal reconstruction performance via the abovementioned measurement matrices is further investigated. We use the percentage of successful recoveries to evaluate the sparse signal recovery quality. A recovery is considered successful if the reconstruction error satisfies $\|x - \hat{x}\|_2 / \|x\|_2 < 10^{-3}$. The well-known OMP algorithm is used for signal recovery.
In the first experiment, we generate 1000 K-sparse vectors with nonzero entries chosen at random for each sparsity level K. These sparse vectors are used as test signals to evaluate the CS performance. Figure 4 shows the signal reconstruction performance of the CS system before and after the measurement matrix optimization using different optimization methods, with signal sparsity level K varying from 4 to 28 and M = 64. As can be seen, all the optimization methods lead to improved CS performance. In addition, the result shown in Figure 4 indicates that the proposed method performs better than both Elad’s method and Duarte-Carvajalino and Sapiro’s method.
In the second experiment, the sparsity level K is fixed, and the number of measurements M is varied. Figure 5 shows the comparison of the reconstruction performances using the various measurement matrices under the same sparsity level K = 10. As the number of measurements increases, the percentage of successful signal reconstructions also increases. As expected, Figure 5 shows once again that the proposed method outperforms the other two methods.
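A minimal harness for this kind of success-rate experiment might look as follows. The trial count, tolerance, and dimensions are illustrative, and the compact OMP solver inside is a sketch rather than the exact implementation used in the experiments:

```python
import numpy as np

def success_rate(A, K, trials=200, tol=1e-3, seed=0):
    """Fraction of random K-sparse signals recovered from y = A s with
    relative error below tol (the success criterion used above)."""
    rng = np.random.default_rng(seed)
    _, L = A.shape
    hits = 0
    for _ in range(trials):
        s = np.zeros(L)
        supp = rng.choice(L, size=K, replace=False)   # random support
        s[supp] = rng.standard_normal(K)              # random nonzero entries
        y = A @ s
        # Compact OMP: greedily grow the support, refit by least squares.
        residual, sel = y.copy(), []
        for _ in range(K):
            sel.append(int(np.argmax(np.abs(A.T @ residual))))
            c, *_ = np.linalg.lstsq(A[:, sel], y, rcond=None)
            residual = y - A[:, sel] @ c
        s_hat = np.zeros(L)
        s_hat[sel] = c
        hits += np.linalg.norm(s - s_hat) < tol * np.linalg.norm(s)
    return hits / trials

# Usage: a 64 x 256 Gaussian sensing matrix at sparsity K = 4.
A = np.random.default_rng(2).standard_normal((64, 256))
rate = success_rate(A, K=4)
```

Sweeping K (or the number of rows of A) with such a harness reproduces the shape of the curves discussed above, although the exact numbers depend on the matrices used.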

5.2. Performance Comparison of Different Sparsifying Dictionaries

In this subsection, we investigate the performance of different sparsifying dictionaries in the CS system. As in the previous section, we quantify the performance by measuring the percentage of successfully recovered signals. We previously verified that, compared with the DCT, DWT, and random dictionaries, using the constructed TM dictionary to represent a given signal achieves a lower MSE for the same number of atoms. Here, we further evaluate the reconstruction performance of CS systems using the constructed TM, DCT, DWT, and random matrices as sparsifying dictionaries.
We set up a new experiment in which the dimension of the sparsifying dictionaries is set to 256 × 256. In addition, we generate a random Gaussian measurement matrix $\Phi \in \mathbb{R}^{64 \times 256}$. Multiplying it by the above sparsifying dictionaries yields four sensing matrices. Then, the OMP algorithm is used to recover the signal. In the experiments, we tested 1000 sparse signals in each trial. We evaluated the performance of a sparsifying dictionary in the CS system by the percentage of successful signal recoveries. The error threshold between the original signal and the reconstructed signal is the same as in the previous subsection, i.e., $10^{-3}$. Figure 6 shows the comparison of the reconstruction performances using different sparsifying dictionaries under the same random measurement matrix. The results demonstrate that the constructed TM dictionary achieves better reconstruction quality than the random, DCT, and DWT dictionaries; using TM as the sparsifying dictionary yields the highest reconstruction rate among these dictionaries at the same sparsity level.
In addition, we fixed the sparsity level of the signal and performed comparison experiments by changing the measurement numbers of the measurement matrix. The sparsity level K is set to 25. Figure 7 reveals that by using the TM dictionary as Ψ, one can use a fewer number of measurements to achieve the same performance as that obtained using the DCT, DWT, and random dictionaries.
To further evaluate the performance of the constructed TM sparsifying dictionary, we used the proposed algorithm to optimize the measurement matrix in order to reduce the mutual coherence between the dictionary and the measurement matrix. Unlike the previous experiments, we compared the performance of the different sparsifying dictionaries in CS systems under an optimized measurement matrix. As shown in Figure 8 and Figure 9, after using the proposed algorithm to optimize the measurement matrix, the signal recovery performance is improved. It was also observed that using the initial random measurement matrix with the constructed TM dictionary in the CS system still yields better performance than using the optimized measurement matrices with the random, DCT, and DWT dictionaries. This shows that our constructed TM dictionary is more robust and effective.

6. Conclusions

In this paper, we constructed a novel sparsifying dictionary using the TM functions for the sparse representation of a given signal. Its advantage is a sparser representation of signals of interest at the same degree of atomic decomposition. We also proposed an iterative algorithm based on the equiangular tight frame, which reduces the mutual coherence between the measurement matrix and the sparsifying dictionary. Simulation results demonstrated the promising performance of the algorithm and the superiority of the CS system, designed with the constructed dictionary and the optimized measurement matrix, over existing ones in terms of signal recovery accuracy.
In the future, we will try to combine the constructed sparsifying dictionary with the measurement matrix to optimize both simultaneously. We will also try to apply our method in practical applications, such as channel estimation. Further investigation of this topic is needed.

Author Contributions

Conceptualization, Q.X. and Y.F.; methodology, Q.X. and Y.F.; validation, Q.X.; writing—original draft preparation, Q.X.; writing—review and editing, Q.X., Y.F., Z.S., and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grants 61901254 and 61673253, and by the Research Committee of the University of Macau under Grant MYRG 2016-00053-FST.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Elad, M. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing; Springer: New York, NY, USA, 2010. [Google Scholar]
  2. Qin, Z.; Fan, J.; Liu, Y.; Gao, Y.; Li, G.Y. Sparse Representation for Wireless Communications: A Compressive Sensing Approach. IEEE Signal Process. Mag. 2018, 35, 40–58. [Google Scholar] [CrossRef] [Green Version]
  3. Xiao, D.; Liang, J.; Ma, Q.; Xiang, Y.; Zhang, Y. High capacity data hiding in encrypted image based on compressive sensing for nonequivalent resources. Comput. Mater. Contin. 2019, 58, 1–13. [Google Scholar] [CrossRef] [Green Version]
  4. Akbarpour-Kasgari, A.; Ardebilipour, M. Massive MIMO-OFDM Channel Estimation via Distributed Compressed Sensing. IEEE Wirel. Commun. Lett. 2019, 8, 376–379. [Google Scholar] [CrossRef]
  5. Liu, S.; Zhang, Y.D.; Shan, T.; Tao, R. Structure-Aware Bayesian Compressive Sensing for Frequency-Hopping Spectrum Estimation with Missing Observations. IEEE Trans. Signal Process. 2018, 66, 2153–2166. [Google Scholar] [CrossRef]
  6. Sharma, S.K.; Lagunas, E.; Chatzinotas, S.; Ottersten, B. Application of Compressive Sensing in Cognitive Radio Communications: A Survey. IEEE Commun. Surv. Tutor. 2016, 18, 1838–1860. [Google Scholar] [CrossRef] [Green Version]
  7. Shi, W.; Jiang, F.; Liu, S.; Zhao, D. Image Compressed Sensing Using Convolutional Neural Network. IEEE Trans. Image Process. 2020, 29, 375–388. [Google Scholar] [CrossRef]
  8. Rousseau, S.; Helbert, D. Compressive Color Pattern Detection Using Partial Orthogonal Circulant Sensing Matrix. IEEE Trans. Image Process. 2020, 29, 670–678. [Google Scholar] [CrossRef]
  9. Xu, Y.; Li, S.; Zhang, Y. Privacy-Aware Service Subscription in People-Centric Sensing: A Combinatorial Auction Approach. Comput. Mater. Contin. 2019, 61, 129–139. [Google Scholar] [CrossRef]
  10. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  11. Rubinstein, R.; Bruckstein, A.M.; Elad, M. Dictionaries for Sparse Representation Modeling. Proc. IEEE 2010, 98, 1045–1057. [Google Scholar] [CrossRef]
  12. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  13. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef] [Green Version]
  14. Lei, Y.; Fang, Y.; Zhang, L. A Weighted K-SVD-Based Double Sparse Representations Approach for Wireless Channels Using the Modified Takenaka-Malmquist Basis. IEEE Access 2018, 6, 54331–54342. [Google Scholar] [CrossRef]
  15. Elad, M. Optimized Projections for Compressed Sensing. IEEE Trans Signal Process. 2007, 55, 5695–5702. [Google Scholar] [CrossRef]
  16. Duarte-Carvajalino, J.M.; Sapiro, G. Learning to Sense Sparse Signals: Simultaneous Sensing Matrix and Sparsifying Dictionary Optimization. IEEE Trans. Image Process. 2009, 18, 1395–1408. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Li, G.; Zhu, Z.; Yang, D.; Chang, L.; Bai, H. On Projection Matrix Optimization for Compressive Sensing Systems. IEEE Trans. Signal Process. 2013, 61, 2887–2898. [Google Scholar] [CrossRef]
  18. Hong, T.; Li, X.; Zhu, Z.; Li, Q. Optimized structured sparse sensing matrices for compressive sensing. Signal Process. 2019, 159, 119–129. [Google Scholar] [CrossRef] [Green Version]
  19. Li, B.; Zhang, L.; Kirubarajan, T. Projection matrix design using prior information in compressive sensing. Signal Process. 2017, 135, 36–47. [Google Scholar] [CrossRef]
  20. Naidu, R.R.; Murthy, C.R. Construction of Binary Sensing Matrices Using Extremal Set Theory. IEEE Signal Process. Lett. 2017, 24, 211–215. [Google Scholar] [CrossRef]
  21. Candes, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215. [Google Scholar] [CrossRef] [Green Version]
  22. Donoho, D.L. For most large underdetermined systems of linear equations the ℓ1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 2006, 59, 797–829. [Google Scholar] [CrossRef]
  23. Chen, S.S.; Donoho, D.L.; Saunderso, M.A. Atomic decomposition by basis pursuit. SIAM Rev. 2001, 43, 129–159. [Google Scholar] [CrossRef] [Green Version]
  24. Pati, Y.C.; Rezaiifar, R.; Krishnaprasad, P.S. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; Volume 1, pp. 40–44. [Google Scholar]
  25. Tropp, J.A.; Dhillon, I.S.; Heath, R.W.; Strohmer, T. Designing structured tight frames via an alternating projection method. IEEE Trans. Inf. Theory 2005, 51, 188–209. [Google Scholar] [CrossRef] [Green Version]
  26. Gribonval, R.; Nielsen, M. Sparse representations in unions of bases. IEEE Trans. Inf. Theory 2003, 49, 3320–3325. [Google Scholar] [CrossRef] [Green Version]
  27. Tropp, J.A. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242. [Google Scholar] [CrossRef] [Green Version]
  28. Mi, W.; Qian, T. Frequency-domain identification: An algorithm based on an adaptive rational orthogonal system. Automatica 2012, 48, 1154–1162. [Google Scholar] [CrossRef]
  29. Chen, Q.; Mai, W.; Zhang, L.; Mi, W. System identification by discrete rational atoms. Automatica 2015, 56, 53–59. [Google Scholar] [CrossRef]
  30. Qian, T.; Zhang, L.; Li, Z. Algorithm of Adaptive Fourier Decomposition. IEEE Trans. Signal Process. 2011, 59, 5899–5906. [Google Scholar] [CrossRef]
  31. Chen, W.; Rodrigues, M.R.D.; Wassell, I.J. Projection Design for Statistical Compressive Sensing: A Tight Frame Based Approach. IEEE Trans. Signal Process. 2013, 61, 2016–2029. [Google Scholar]
Figure 1. Comparison between the four sparsifying dictionaries under the different number of decompositions using orthogonal matching pursuit (OMP).
Figure 2. Histogram of the absolute off-diagonal entries of the corresponding normalized Gram matrix to each of the four sensing matrices.
Figure 3. Evolution of the mutual coherence versus iteration number.
Figure 4. Comparison of the reconstruction performances using various measurement matrices under the same measurements M = 64.
Figure 5. Comparison of the reconstruction performances using various measurement matrices under the same sparsity level K = 10.
Figure 6. Comparison of the reconstruction performances using different sparsifying dictionaries under the same random measurement matrix (M = 64).
Figure 7. Comparison of the reconstruction performances using different sparsifying dictionaries under the same sparsity level K = 25 and the same random measurement matrix.
Figure 8. Comparison of the reconstruction performances using different sparsifying dictionaries with the optimized measurement matrix (M = 64).
Figure 9. Comparison of the reconstruction performances using different sparsifying dictionaries under the same sparsity level K = 25 and the optimized measurement matrix.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Xu, Q.; Sheng, Z.; Fang, Y.; Zhang, L. Measurement Matrix Optimization for Compressed Sensing System with Constructed Dictionary via Takenaka–Malmquist Functions. Sensors 2021, 21, 1229. https://doi.org/10.3390/s21041229


