1. Introduction
The problem that underdetermined blind source separation (UBSS) [1,2] needs to address is how to separate multiple source signals from a small number of sensor observations. In essence, this amounts to finding the optimal solution of an underdetermined linear system of equations (ULSE). Fortunately, as a new undersampling technique, compressed sensing (CS) [3,4,5] is an effective way to solve the ULSE, which makes it possible to apply CS to UBSS.
The model of CS is shown in Figure 1. According to this figure, CS boils down to the form
$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{e}, \qquad (1)$$
where $\mathbf{A} \in \mathbb{R}^{M \times N}$ is a sensing matrix with $M \ll N$, which can be further represented as $\mathbf{A} = \mathbf{\Phi}\mathbf{\Psi}$, where $\mathbf{\Phi}$ is a random matrix and $\mathbf{\Psi}$ is the sparse basis matrix; $\mathbf{y} \in \mathbb{R}^{M}$ is the vector of measurements. Moreover, $\mathbf{e}$ denotes the additive noise.
To solve the ULSE in Equation (1), we try to recover the sparse signal $\mathbf{x}$ from the given $\mathbf{y}$ by CS. According to CS, this problem is transformed into solving the $\ell_0$-norm minimization problem
$$\min_{\mathbf{x}} \|\mathbf{x}\|_0 \quad \text{s.t.} \quad \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2 \le \epsilon, \qquad (2)$$
where $\epsilon$ denotes the error tolerance. This rather appealing approach is supported by a well-established theory [6]. Based on this theory, in the noiseless case, it is proven that the sparsest solution is indeed the true signal when $\mathbf{x}$ is sufficiently sparse and $\mathbf{A}$ satisfies the restricted isometry property (RIP) [7]:
$$(1 - \delta_K)\|\mathbf{x}\|_2^2 \le \|\mathbf{A}\mathbf{x}\|_2^2 \le (1 + \delta_K)\|\mathbf{x}\|_2^2,$$
where $K$ is the sparsity of the signal $\mathbf{x}$ and $\delta_K \in (0, 1)$ is a constant. In Equation (2), the $\ell_0$-norm is nonsmooth, which leads to an NP-hard problem. In practice, two alternative approaches are usually employed to solve the problem [8]: greedy search and relaxation methods.
For greedy search, the main methods are based on greedy matching pursuit (GMP) algorithms, such as the orthogonal matching pursuit (OMP) [9,10], stage-wise orthogonal matching pursuit (StOMP) [11], regularized orthogonal matching pursuit (ROMP) [12], compressive sampling matching pursuit (CoSaMP) [13], generalized orthogonal matching pursuit (GOMP) [14,15], and subspace pursuit (SP) [16,17] algorithms. The objective function of these algorithms is given by:
$$\min_{\mathbf{x}} \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 \quad \text{s.t.} \quad \|\mathbf{x}\|_0 \le K.$$
As shown in the above equation, GMP algorithms recover the signal by directly constraining its sparsity. Their advantage is low computational complexity, but their reconstruction accuracy is not high in the noisy case.
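The greedy selection step can be sketched in a few lines. Below is a minimal NumPy sketch of OMP, one representative GMP algorithm; the dictionary and signal are illustrative toy values, not any of the cited implementations:

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit: greedily pick K columns of A,
    refitting the coefficients by least squares at each step."""
    N = A.shape[1]
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(K):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat

# toy dictionary: 4 unit atoms plus one extra correlated atom
A = np.hstack([np.eye(4), np.full((4, 1), 0.5)])
x_true = np.array([2.0, 0.0, 1.0, 0.0, 0.0])   # K = 2 nonzeros
x_hat = omp(A, A @ x_true, K=2)
print(x_hat)  # [2. 0. 1. 0. 0.] -- exact recovery in the noiseless case
```

The low cost per iteration (one matrix-vector product and one small least-squares solve) is exactly the GMP advantage noted above; the noisy-case weakness comes from the hard support decisions, which are never revisited.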
At present, relaxation methods for the $\ell_0$-norm are widely used. They fall into two main categories: constraint-type algorithms and regularization methods. Constraint-type algorithms can be further divided into $\ell_1$-norm minimization methods and smoothed $\ell_0$-norm minimization methods. The representative algorithm of the former is the BP algorithm [18], and of the latter the smoothed $\ell_0$-norm minimization (SL0) algorithm. For the SL0 algorithm, the objective function can be expressed as:
$$\min_{\mathbf{x}} F_{\sigma}(\mathbf{x}) \quad \text{s.t.} \quad \mathbf{y} = \mathbf{A}\mathbf{x},$$
where $F_{\sigma}(\mathbf{x})$ is a smoothed function that approximates the $\ell_0$-norm as $\sigma \to 0$. Compared with the $\ell_1$-norm or the $\ell_p$-norm, a sufficiently small $\sigma$ makes the function close to the $\ell_0$-norm [8]; therefore, the solutions are closer to the optimal solution.
Based on the idea of approximation, Mohimani used a Gauss function to approximate the $\ell_0$-norm [19], which is described as:
$$\|\mathbf{x}\|_0 \approx N - \sum_{i=1}^{N} \exp\!\left(-\frac{x_i^2}{2\sigma^2}\right).$$
According to this equation, when $\sigma$ is a small enough positive value, the Gauss approximation is almost equal to the $\ell_0$-norm. Furthermore, the Gauss function is differentiable and smooth; hence, it can be optimized by optimization methods such as the gradient descent (GD) method. Zhao proposed another smoothed function, the hyperbolic tangent (tanh) [20]:
$$\|\mathbf{x}\|_0 \approx \sum_{i=1}^{N} \tanh\!\left(\frac{x_i^2}{2\sigma^2}\right).$$
This smoothed function makes a closer approximation to the $\ell_0$-norm than the Gauss function of [19] for the same $\sigma$; hence, it performs better in sparse signal recovery. Indeed, a large number of simulation experiments confirm this view.
Another relaxation method is the regularization method. For CS, sparse signal recovery in the noisy case is a practical and unavoidable problem. Fortunately, the regularization method makes solving this problem possible [21,22]. The regularization method can be described as a "relaxation" approach that tries to solve the following unconstrained recovery problem:
$$\min_{\mathbf{x}} \frac{1}{2}\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \lambda J(\mathbf{x}),$$
where $\lambda > 0$ is the parameter that balances the trade-off between the deviation term $\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2$ and the sparsity regularizer $J(\mathbf{x})$. The sparse prior information is enforced via the regularizer $J(\mathbf{x})$, and a proper choice of $J(\mathbf{x})$ is crucial to the success of the sparse signal recovery task: it should favor sparse solutions while keeping the problem efficiently solvable.
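For the common convex choice of the $\ell_1$-norm as the regularizer, this unconstrained problem can be solved by iterative shrinkage-thresholding (ISTA). The sketch below is a generic illustration of the regularized formulation, not the algorithm proposed in this paper; the problem sizes and $\lambda$ are arbitrary:

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """ISTA for min_x 0.5 ||y - A x||_2^2 + lam * ||x||_1.
    Each step: a gradient step on the data term, then soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + (A.T @ (y - A @ x)) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[5, 17, 40]] = [1.0, -1.5, 2.0]     # 3-sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(30)   # noisy measurements
x_hat = ista(A, y, lam=0.05)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

The soft-thresholding step is exactly where the regularizer enforces sparsity: entries whose gradient update stays below $\lambda / L$ are set to zero.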
For regularization, various sparsity regularizers have been proposed as relaxations of the $\ell_0$-norm. The most popular are the convex $\ell_1$-norm [22,23] and the nonconvex $\ell_p$-norm to the $p$-th power ($0 < p < 1$) [24,25]. In the noiseless case, $\ell_1$-norm minimization is equivalent to $\ell_0$-norm minimization, and the $\ell_1$-norm is the only norm that is both sparsity-promoting and convex; hence, it can be optimized by convex optimization methods. However, according to [8], in the noisy case, the $\ell_1$-norm is not exactly equivalent to the $\ell_0$-norm, so its sparsity-promoting effect is less pronounced. Compared to the $\ell_1$-norm, the nonconvex $\ell_p$-norm to the $p$-th power makes a closer approximation to the $\ell_0$-norm; therefore, $\ell_p$-norm minimization has a better sparse recovery performance [8].
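A one-line numeric check illustrates why the $\ell_p$ penalty ($0 < p < 1$) is considered closer to the $\ell_0$-norm than the $\ell_1$ penalty; the values below are arbitrary:

```python
import numpy as np

# Entry-wise penalties: the l0 "indicator" is 0 at zero and 1 elsewhere.
# For 0 < p < 1, |x|^p is closer to that 0/1 profile than |x| is.
x = np.array([0.0, 0.1, 0.5, 1.0, 2.0])
p = 0.5
l1_pen = np.abs(x)
lp_pen = np.abs(x) ** p
print(np.round(l1_pen, 3))
print(np.round(lp_pen, 3))
# On every nonzero entry, |x|^p lies at least as close to 1 as |x| does.
```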
In view of the above discussion, this paper proposes a compound inverse proportional function (CIPF) as a new smoothed function, together with a new weighted function to promote sparsity. For the noisy case, a new regularization form is derived and constructed to enhance the de-noising performance. Simulation experiments verify the superior performance of the proposed algorithm in signal and image recovery, and good results are achieved when it is applied to UBSS.
This paper is organized as follows: Section 2 introduces the main work of this paper. The steps of the WReSL0 algorithm and the selection of the related parameters are described in Section 3. Experimental results are presented in Section 4 to evaluate the performance of our approach. Section 5 verifies the effect of the proposed weighted regularized smoothed $\ell_0$-norm minimization (WReSL0) algorithm in UBSS. Section 6 concludes this paper.
2. Main Work of This Paper
In this paper, based on the formulation in Equation (9), we propose a new objective function, which is given by:
According to this equation, we not only propose a smoothed function approximating the $\ell_0$-norm, but also a weighted function to promote sparsity. This section focuses on these two components.
2.1. New Smoothed Function: CIPF
According to [26], some properties of smoothed functions are summarized in the following:
Property: The function f has the required property if:
- (a)
f is real analytic on for some ;
- (b)
, , where is some constant;
- (c)
f is convex on ;
- (d)
;
- (e)
.
It follows immediately from these properties that the smoothed function converges to the $\ell_0$-norm as $\sigma \to 0$, i.e.,
$$\lim_{\sigma \to 0} F_{\sigma}(\mathbf{x}) = \|\mathbf{x}\|_0.$$
Based on these properties, this paper proposes a new smoothed function model called the CIPF, which satisfies them and better approximates the $\ell_0$-norm. The smoothed function model is given as:
In Equation (12), the regularization factor is a large constant; experiments show that a value of 10 gives a good simulation result. The smoothing factor makes the proposed model closer to the $\ell_0$-norm as it becomes smaller, so the required property is satisfied, at least approximately. The resulting approximation holds for small values of the smoothing factor and tends to equality as the factor approaches zero.
Figure 2 shows the effect of the CIPF model in approximating the $\ell_0$-norm. Obviously, the CIPF model makes a better approximation.
In conclusion, the merits of the CIPF model make it possible to reduce the computational complexity while ensuring the accuracy of sparse signal reconstruction, which is of practical significance for sparse signal reconstruction.
2.2. New Weighted Function
Candès et al. [27] proposed the weighted $\ell_1$-norm minimization method, which employs the weighted norm to enhance the sparsity of the solution. They provided an analytical result on the improvement in sparsity recovery obtained by incorporating the weighted function into the objective function. Pant et al. [28] applied another weighted smoothed $\ell_0$-norm minimization method, which uses a similar weighted function to promote sparsity. The two weighted functions can be summarized as follows:
Candès et al.: $w_i = 1 / |x_i|$;
Pant et al.: $w_i = 1 / (|x_i| + \epsilon)$, where $\epsilon$ is a small enough positive constant.
From the two weighted functions, we can observe a pattern: a large signal entry is weighted with a small value $w_i$; conversely, a small signal entry is weighted with a large value $w_i$. A large $w_i$ forces the solution to concentrate on the indices where $w_i$ is small, and by construction, these correspond precisely to the indices where the true signal is nonzero.
Combined with the above idea, we propose a new weighted function, which is given by:
In the weighted function of Candès et al., when a signal entry is zero or close to zero, the weight becomes very large, which is unsuitable for numerical computation. Although Pant et al. noticed this problem and modified the weighted function to avoid it, the constant $\epsilon$ is chosen by experience. The proposed weighted function avoids both problems. Moreover, it follows the pattern observed above: a small signal entry is weighted with a large value and a large signal entry with a small value, which brings the weighted large and small entries closer together. In this way, the direction of optimization is kept as consistent as possible, and the optimization process tends toward a better optimum. Therefore, the proposed weighted function achieves a better effect.
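The weighting pattern discussed above can be illustrated numerically. The snippet below uses the Pant-style weight $w_i = 1/(|x_i| + \epsilon)$ as a stand-in; the proposed weighted function has its own form, given in the equation above:

```python
import numpy as np

def weights(x, eps=1e-2):
    """Pant-style weights (a stand-in, not the proposed function):
    w_i = 1 / (|x_i| + eps). Large entries get small weights, and the
    eps term keeps zero entries from producing infinite weights."""
    return 1.0 / (np.abs(x) + eps)

x = np.array([0.0, 2.0, 0.05, -1.0])
w = weights(x)
print(np.round(w, 3))               # the zero entry gets the largest weight
print(np.round(w * np.abs(x), 3))   # weighted magnitudes w_i * |x_i|
# Note that w_i * |x_i| = |x_i| / (|x_i| + eps): near 0 for zero entries and
# near 1 for large ones, i.e., the weighting pushes the penalty toward the
# 0/1 behavior of the l0-norm.
```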
4. Performance Simulation and Analysis
The numerical simulation platform is MATLAB 2017b, installed on a computer running 64-bit Windows 10. The CPU of the simulation computer is an Intel(R) Core(TM) i5-3230M at 2.6 GHz. In this section, the performance of the WReSL0 algorithm is verified by signal and image recovery in the noisy case.
Here, some state-of-the-art algorithms are selected for comparison, and the parameters of each algorithm are chosen to obtain its best performance: for the BPDN algorithm [36], the regularization parameter; for the SL0 algorithm [19], the initial and final values of the smoothed factor, the step size, and the attenuation factor; for the NSL0 algorithm [20], the initial and final values of the smoothed factor, the step size, and the attenuation factor; for the Lp-RLS algorithm [24], the number of iterations, the initial and final norm values, the initial and final values of the regularization factor, and the algorithm termination threshold; for the WReSL0 algorithm, the initial and final values of the smoothed factor, the number of iterations, the step size, and the regularization parameter. All experiments are based on 100 trials.
4.1. Signal Recovery Performance in the Noise Case
In this part, we discuss signal recovery performance in the noise case. We add noise
to the measurement vector
; moreover,
,
is randomly formed and follows the Gaussian distribution of
. For signal recovery under noise conditions, we evaluate the performance of algorithms by the normalized mean squared error (NMSE) and the CPU running time (CRT). NMSE is defined as
. CRT is measured with
and
. In order to analyze the de-noising performance of the WReSL0 algorithm in context closer to the real situation, we constructed a certain signal as an experimental object in the experiments in this section. The signal is given by:
where the four amplitudes and the four component frequencies (in Hz) are fixed constants. Here, the sample index runs over a sequence of given length, the sampling interval is the reciprocal of the sampling frequency, and the sampling frequency is 800 Hz. The object that needs to be reconstructed can be expressed as:
where the object is a sparse signal in the frequency domain, obtained as the Fourier transform of the time-domain signal. Moreover, the sensing matrix can be represented as $\mathbf{A} = \mathbf{\Phi}\mathbf{\Psi}$; here, $\mathbf{\Phi}$ is a random matrix generated from a Gaussian distribution, and $\mathbf{\Psi}$ is a sparse basis matrix generated by the Fourier transform, i.e., $\mathbf{\Psi}$ is given by the Fourier transform of the identity matrix. This target signal is sparse in Fourier space; hence, it can be recovered from the given measurements by CS recovery methods.
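The setup described above (a time signal sparse in the Fourier basis, sampled at 800 Hz, measured through a Gaussian random matrix) can be sketched as follows. The component frequencies below are placeholders chosen to fall on exact FFT bins, not the paper's values:

```python
import numpy as np

fs = 800                                   # sampling frequency in Hz (as in the text)
N = 256
t = np.arange(N) / fs
# Placeholder frequencies 50 Hz and 125 Hz: both sit on exact FFT bins
# (k = f * N / fs = 16 and 40), so the spectrum is exactly 4-sparse.
x_time = np.cos(2 * np.pi * 50 * t) + 0.5 * np.cos(2 * np.pi * 125 * t)

Psi = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary Fourier basis from the identity
s = Psi.conj().T @ x_time                  # sparse frequency-domain representation
print(np.sum(np.abs(s) > 1e-6))            # 4 nonzero coefficients

rng = np.random.default_rng(0)
Phi = rng.standard_normal((128, N)) / np.sqrt(128)   # Gaussian random matrix
A = Phi @ Psi                              # sensing matrix A = Phi Psi
y = Phi @ x_time                           # measurements
print(np.allclose(A @ s, y))               # True: y = A s

def nmse(x_hat, x):
    """Normalized mean squared error, as defined in the text."""
    return np.linalg.norm(x_hat - x) ** 2 / np.linalg.norm(x) ** 2

print(nmse(x_time, x_time))                # 0.0 for a perfect recovery
```

Any of the compared recovery algorithms would then estimate the sparse vector from `y` and `A`, and the NMSE against the true signal would score the result.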
Figure 3 shows the signal recovery effect. Obviously, BPDN and SL0 do not perform well, while NSL0, Lp-RLS, and the proposed WReSL0 perform quite well. This verifies that the regularization mechanism has a good de-noising effect.
Figure 4 shows the frequency spectrum of the recovered signal by the selected algorithms. The spectrum of the signal recovered by our proposed WReSL0 algorithm is almost the same as the original signal, while other algorithms fail to achieve this effect.
Table 2 shows the CRT of all algorithms as the signal length n changes over a given sequence. From the table, for any n, SL0 has the shortest computation time, followed by WReSL0, NSL0, and Lp-RLS, while BPDN has the longest. The BPDN algorithm is generally implemented by quadratic programming, whose computational complexity is very high, resulting in a large increase in the overall computation time. Furthermore, the iterative process of Lp-RLS adopts the conjugate gradient method, which has high complexity, while NSL0 and WReSL0 do not. Compared with NSL0, WReSL0 achieves a more pronounced reduction in computation time.
The performance of each algorithm under different noise intensities is shown in Figure 5. When the noise intensity is low, SL0 outperforms the other algorithms, but as the noise increases, the performance of SL0 becomes worse and worse. This result further illustrates that the traditional constrained sparse recovery algorithm has no anti-noise capability. BPDN, NSL0, Lp-RLS, and WReSL0 all apply the regularization mechanism, and they are indeed superior to SL0 in the noisy case. Among them, the proposed WReSL0 has the best de-noising performance.
4.2. Image Recovery Performance in the Noise Case
Real images are considered to be approximately sparse under some proper basis, such as the DCT basis, DWT basis, etc. Here, we choose the DWT basis to recover these images. We compare the recovery performances based on the four real images in
Figure 6: boat, Barbara, peppers, and Lena. The size of these images is
; the compression ratio (CR; defined as
) is 0.5; and the noise
equals 0.01. We still choose SL0, BPDN, NSL0, and L
-RLS to make comparisons. For image recovery, the object of image processing is given by:
Here, the image, measurement, and noise terms are matrices. In order to meet the basic requirements of CS, we perform the following column-wise processing:
where the column vectors of the measurement, image, and noise matrices play the roles of the corresponding vectors in Equation (1), and the noise obeys a Gaussian distribution.
To evaluate image recovery, we use the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). PSNR is defined as:
$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right),$$
where $\mathrm{MAX}$ is the maximum possible pixel value and $\mathrm{MSE}$ is the mean squared error between the two images, and SSIM is defined as:
$$\mathrm{SSIM}(p, q) = \frac{(2\mu_p \mu_q + c_1)(2\sigma_{pq} + c_2)}{(\mu_p^2 + \mu_q^2 + c_1)(\sigma_p^2 + \sigma_q^2 + c_2)}.$$
Among these, $\mu_p$ is the mean of image p, $\mu_q$ is the mean of image q, $\sigma_p^2$ is the variance of image p, $\sigma_q^2$ is the variance of image q, and $\sigma_{pq}$ is the covariance between image p and image q. The parameters are $c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$, where L is the dynamic range of the pixel values. The range of SSIM is $[-1, 1]$, and when the two images are identical, SSIM equals one.
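As a concrete reference for these two metrics, here is a small sketch. The SSIM below is computed over the whole image in a single window, with the common constants $k_1 = 0.01$, $k_2 = 0.03$ assumed; reference SSIM implementations average the statistic over local windows, so their values differ slightly:

```python
import numpy as np

def psnr(p, q, max_val=255.0):
    """PSNR = 10 log10(MAX^2 / MSE) between images p and q."""
    mse = np.mean((p.astype(float) - q.astype(float)) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

def ssim_global(p, q, L=255.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM with the assumed constants k1, k2."""
    p, q = p.astype(float), q.astype(float)
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()
    cov = ((p - mu_p) * (q - mu_q)).mean()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu_p * mu_q + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_q ** 2 + c1) * (var_p + var_q + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))                 # synthetic test image
noisy = np.clip(img + rng.normal(0, 5, size=(32, 32)), 0, 255)
print(ssim_global(img, img))                              # ~1 for identical images
print(psnr(img, noisy) > psnr(img, np.zeros((32, 32))))   # less distortion, higher PSNR
```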
Figure 7 shows the recovery effect on boat and Barbara under noise. For boat and Barbara, the images recovered by SL0 and BPDN show obvious water ripples, while those recovered by the other algorithms do not. Similarly, for peppers and Lena, the images recovered by SL0 and BPDN are blurred compared with those recovered by the other algorithms. The NSL0, Lp-RLS, and WReSL0 algorithms are all effective at noisy image recovery, and their recovery effects are visually similar. To further analyze the advantages and disadvantages of the algorithms, we compare the PSNR and SSIM of the recovered images; the results are shown in Table 3 and Table 4. By observation and analysis, Lp-RLS performs better than NSL0, and WReSL0 in turn outperforms Lp-RLS. Hence, the WReSL0 algorithm proposed in this paper is superior to the other selected algorithms in image processing.