Article

An Extended Reweighted $\ell_1$ Minimization Algorithm for Image Restoration

Sining Huang, Yupeng Chen and Tiantian Qiao *
1 Department of Civil Engineering, China University of Petroleum (East China), Qingdao 266580, China
2 Department of Computational Mathematics, China University of Petroleum (East China), Qingdao 266580, China
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(24), 3224; https://doi.org/10.3390/math9243224
Submission received: 15 November 2021 / Revised: 8 December 2021 / Accepted: 9 December 2021 / Published: 13 December 2021

Abstract

This paper proposes an effective extended reweighted $\ell_1$ minimization algorithm (ERMA) to solve the basis pursuit problem $\min_{u\in\mathbb{R}^n}\{\|u\|_1 : Au = f\}$ in compressed sensing, where $A\in\mathbb{R}^{m\times n}$, $m \ll n$. The fast algorithm is based on the linearized Bregman iteration with the soft thresholding operator and generalized inverse iteration. At the same time, it incorporates the iterative reweighting strategy used to solve the $\min_{u\in\mathbb{R}^n}\{\|u\|_p^p : Au = f\}$ problem, with weights $\omega_i(u, p) = (\varepsilon + |u_i|^2)^{p/2-1}$. Numerical experiments show that this $\ell_1$ minimization consistently performs better than other methods. In particular, when $p = 0$, the signal restored by the algorithm has the highest signal-to-noise ratio. Additionally, the approach incurs no extra workload or computation time when the matrix $A$ is ill-conditioned.

1. Introduction

Image restoration is a fundamental problem in image processing. In particular, image deblurring has wide application in facial alignment, remote sensing, medical image processing, camera equipment, compression transmission, and so on [1,2,3].
Let $u\in\mathbb{R}^n$ be an original signal with $k$ nonzero components ($u$ is then called a $k$-sparse signal or vector), and let $f\in\mathbb{R}^m$ be the linear measurements of the form $Au = f$ for some $A\in\mathbb{R}^{m\times n}$, $m \ll n$. Then, the compressed sensing problem can be represented as (1) below, where $\|u\|_0$ is the number of nonzero elements of the vector $u$.
The main purpose of compressive sensing is to recover the sparse signal from the inaccurate linear measurements. That is, we must identify valid algorithms for efficiently solving this underdetermined linear system.
Unfortunately, the $\ell_0$ minimization problem (1) is nonconvex and generally cannot be solved quickly and efficiently. A common alternative is to consider the convex relaxation of (1), i.e., the basis pursuit problem (2) [4,5]. The solution of the linear equations $Au = f$ may not exist, and the measurement $f$ may contain noise in certain applications; under such circumstances, we need to compute its least squares solution.
There has been a recent flurry of study in compressive sensing, which entails solving problem (2). Candes et al. [6,7], Donoho [8,9], and others were the driving force behind this, as detailed in [6,7,8,10,11,12,13,14,15,16,17,18]. The theory of compressed sensing assures that solving (2) under specific restrictions on the dense matrix $A$ and the sparsity of $u$ will yield the sparsest solution of $Au = f$. However, there are no universal linear programming strategies for handling this problem. As a result, [6] presents a linearized Bregman iteration with soft thresholding, and [9,14,19,20,21] investigate its convergence. Unfortunately, this applies only when $A$ is surjective. Cai et al. then studied the linearized Bregman iteration with soft thresholding for an arbitrary matrix $A$, together with its properties, in [19,22]. In [23], Qiao et al. proposed a simplified iterative formula that combines the original linearized Bregman iteration with an iterative formula for the generalized inverse of a matrix $A\in\mathbb{R}^{m\times n}$, $m \ll n$, and soft thresholding. In [24,25], Liu et al. proposed an efficient nonconvex total variation $\ell_1$–$\ell_2$ approach with a differential shrink operator.
$$\min_{u\in\mathbb{R}^n} \|u\|_0 \quad \text{s.t.} \quad Au = f \quad (1)$$
$$\min_{u\in\mathbb{R}^n}\{\|u\|_1 : Au = f\} \quad (2)$$
On this basis, Qiao et al. proposed the Chaotic Iteration for compressed sensing and image deblurring in [26,27]. The Chaotic Iteration not only possesses all the merits of the linearized Bregman iterative method but also accelerates the computation. Subsequently, inspired by the reweighting idea in [27,28,29], we proposed the reweighted $\ell_1$ minimization algorithm with $\omega$-inverse function (RMA-IF), based on the Chaotic Iteration, in [27]. Notably, the restoration quality of RMA-IF is better than that of the Chaotic Iteration without any increase in computing time. Unfortunately, in [27] we only discussed the case in which the weight coefficient is the usual inverse function; we did not treat general weight coefficients or provide any theoretical results for the corresponding algorithms. This is exactly what this paper does.
In addition, it was also shown in [23] that exact reconstruction can be achieved by a nonconvex variant of basis pursuit with fewer measurements. Precisely, the $\ell_1$ norm is replaced by the $\ell_p$ quasi-norm:
$$\min_{u\in\mathbb{R}^n}\{\|u\|_p^p : Au = f\} \quad (3)$$
where $\|u\|_p = \left(\sum_i |u_i|^p\right)^{1/p}$, $0 < p \le 1$.
However, the $\ell_p$ quasi-norm is nonconvex for $0 < p < 1$, and $\ell_p$ minimization is in general an NP-hard problem. Instead of directly minimizing the $\ell_p$ quasi-norm, which would most likely end up at one of its numerous local minimizers, the algorithms in [30,31] solve a series of smoothed subproblems. Specifically, [32] solves reweighted $\ell_1$ subproblems: given a current iterate $u^{(k)}$, the algorithm generates the new iterate $u^{(k+1)}$ by minimizing $\sum_i \omega_i|u_i|$ with weights $\omega_i = (|u_i| + \varepsilon)^{p-1}$. In [33], Xiu et al. used a reweighted algorithm to solve the variational $\ell_1$–$\ell_p$ model. On the other hand, the works in [34,35,36] solve reweighted $\ell_2$ subproblems: at each iteration, they approximate $\|u\|_p^p$ by $\sum_i \omega_i|u_i|^2$ with weights $\omega_i = (|u_i|^2 + \varepsilon)^{p/2-1}$. To recover a sparse vector $u$ from $Au = f$, these algorithms vary $\varepsilon$, starting at a large value and gradually reducing it. Empirical results show that the reweighting coefficients enhance the sparsity of the signal, so the algorithm can recover it quickly. The reweighted $\ell_1$ or $\ell_2$ algorithms require far fewer measurements than convex $\ell_1$ minimization to recover vectors with entries of decaying magnitudes; in compressed sensing, this reduction in measurements corresponds to savings in sensing time and cost.
In this paper, we propose a reweighted $\ell_1$ minimization algorithm for the basis pursuit problem (2) in compressed sensing. The algorithm is based on the Chaotic Iteration [23] and the iterative reweighting strategy used to solve the $\min_{u\in\mathbb{R}^n}\{\|u\|_p^p : Au = f\}$ problem. In particular, we chose the weight coefficients given in [31,32,34,35,36,37] and set the parameter $p = 0, 1, 1/2, 2/3$ in the numerical experiments [38].
The rest of the paper is organized as follows. The available strategies for tackling the constrained problem (2) are summarized in Section 2. The reweighted $\ell_1$ minimization algorithm is described, and its convergence behavior analyzed, in Section 3. In Section 4, the numerical results are presented. Finally, in Section 5, we draw some conclusions.

2. Preliminaries

2.1. $A^+$ Linearized Bregman Iteration

In [22], Cai et al. extended the linearized Bregman iteration to
$$\begin{cases} f^{(k+1)} = f^{(k)} + f - Au^{(k)}, \\ u^{(k+1)} = \delta T_\mu(A^+ f^{(k+1)}), \end{cases} \quad (4)$$
where $f$ is a known measurement with $Au = f$, $f^{(0)} = 0$, $u^{(0)} = 0$; $A^+$ is the Moore–Penrose generalized inverse of an arbitrary matrix $A$; and
$$T_\mu(\omega) \triangleq [t_\mu(\omega_1), t_\mu(\omega_2), \ldots, t_\mu(\omega_n)]^T \quad (5)$$
is the soft thresholding operator [6] with
$$t_\mu(\xi) = \begin{cases} 0, & |\xi| \le \mu, \\ \mathrm{sgn}(\xi)(|\xi| - \mu), & |\xi| > \mu. \end{cases} \quad (6)$$
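For concreteness, the operator (5)–(6) is a one-liner in MATLAB; a minimal sketch (the function name soft_threshold is ours):

```matlab
% Soft thresholding T_mu from (5)-(6): shrinks each component of v toward
% zero by mu and zeroes out anything with magnitude at most mu.
function w = soft_threshold(v, mu)
    w = sign(v) .* max(abs(v) - mu, 0);
end
```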
Theorem 1. 
[39] Let $A\in\mathbb{R}^{m\times n}$, $m \ll n$, be an arbitrary matrix. Then, the sequence $\{u^{(k)}\}_{k\in\mathbb{N}}$ generated by (4) with $0 < \delta < 1$ converges to the unique solution of
$$\min_{u\in\mathbb{R}^n}\left\{\mu\|u\|_1 + \frac{1}{2\delta}\|u\|_2^2 : u = \arg\min_{u\in\mathbb{R}^n}\|Au - f\|_2^2\right\} \quad (7)$$
Furthermore, as $\mu \to \infty$, the limit of $\{u^{(k)}\}_{k\in\mathbb{N}}$ tends to the solution of
$$\min_{u\in\mathbb{R}^n}\left\{\|u\|_1 : u = \arg\min_{u\in\mathbb{R}^n}\|Au - f\|_2^2\right\} \quad (8)$$
that has the minimal $\ell_2$ norm among all solutions of (8).
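As an illustration of iteration (4), a minimal MATLAB sketch, assuming the soft_threshold helper above; here pinv forms the Moore–Penrose inverse via the SVD, and f, A, mu, delta, and kmax are taken as given:

```matlab
% A+ linearized Bregman iteration (4).
Aplus = pinv(A);                 % Moore-Penrose inverse A+ (computed via SVD)
fk = zeros(size(f));             % f^(0) = 0
u  = zeros(size(A, 2), 1);       % u^(0) = 0
for k = 1:kmax
    fk = fk + f - A*u;                          % f^(k+1) = f^(k) + f - A u^(k)
    u  = delta * soft_threshold(Aplus*fk, mu);  % u^(k+1) = delta T_mu(A+ f^(k+1))
end
```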

2.2. Chaotic Iteration

In [20], we noted that the computation of $A^+$ in algorithm (4), which involves the singular value decomposition, requires matrix products, so the computational cost is relatively high. In fact, we only need to compute $A^+ f^{(k+1)}$ in (4). In [26], replacing the SVD by an iterative computation of $A^+$, we obtained the Chaotic Iteration. Because this algorithm only needs matrix–vector products, the computing speed is improved without reducing the accuracy.
Chaotic Iteration:
$$\begin{cases} f^{(k+1)} = f^{(k)} + f - Au^{(k)}, \\ y^{(k+1)} = y^{(k)} + V_0 f^{(k+1)} - V_0(Ay^{(k)}), \quad k = 0, 1, 2, \ldots, \\ u^{(k+1)} = \delta T_\mu(y^{(k+1)}), \end{cases} \quad (9)$$
where $y^{(0)} = V_0 f^{(0)}$, $V_0 = \alpha A^T$, $0 < \alpha < \frac{2}{\|A\|_2^2}$, $Au = f$, and $f^{(0)} = 0$.
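A minimal MATLAB sketch of (9); unlike (4), it touches $A$ only through matrix–vector products (the variable names are ours):

```matlab
% Chaotic Iteration (9): no SVD, only matrix-vector products.
alpha = 1 / norm(A)^2;           % satisfies 0 < alpha < 2/||A||_2^2
V0t = @(x) alpha * (A' * x);     % apply V_0 = alpha*A^T without forming it
fk = zeros(size(f));             % f^(0) = 0
u  = zeros(size(A, 2), 1);
y  = V0t(fk);                    % y^(0) = V_0 f^(0)
for k = 1:kmax
    fk = fk + f - A*u;
    y  = y + V0t(fk) - V0t(A*y);
    u  = delta * soft_threshold(y, mu);
end
```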
Theorem 2. 
[26] Let $A\in\mathbb{R}^{m\times n}$, $m \ll n$, be an arbitrary matrix. Assuming the initial value $V_0 = \alpha A^*$, $0 < \alpha < \frac{2}{\|A\|_2^2}$, the sequence $\{u^{(k)}\}_{k\in\mathbb{N}}$ generated by (9) with $0 < \delta < 1$ converges to the unique solution of (7). Furthermore, as $\mu \to \infty$, the limit of $\{u^{(k)}\}_{k\in\mathbb{N}}$ tends to the solution of (8), which has the minimal $\ell_2$ norm among all solutions of (8).

2.3. Reweighted $\ell_1$ Minimization Algorithm with $\omega$-Inverse Function Iteration

To further improve the Chaotic Iteration, we combined it with the reweighting strategy in [27]. As a preliminary idea, we chose the usual inverse function as the weight coefficient.
RMA-IF:
$$\begin{cases} f^{(k+1)} = f^{(k)} + f - Au^{(k)}, \\ y^{(k+1)} = y^{(k)} + V_0 f^{(k+1)} - V_0(Ay^{(k)}), \quad k = 0, 1, 2, \ldots, \\ u^{(k+1)} = \delta T_{\mu\omega^{(k)}}(y^{(k+1)}), \\ \omega_i^{(k+1)} = 1/(|u_i^{(k+1)}| + \varepsilon), \quad i = 1, 2, \ldots, n, \end{cases} \quad (10)$$
where $f^{(0)} = 0$, $y^{(0)} = V_0 f^{(0)}$, $Au = f$, $0 < \alpha < \frac{2}{\|A\|_2^2}$, and
$$T_{\mu\omega}(\nu) \triangleq [t_{\mu\omega_1}(\nu_1), t_{\mu\omega_2}(\nu_2), \ldots, t_{\mu\omega_n}(\nu_n)]^T \quad (11)$$
is the weighted soft thresholding operator with
$$t_{\mu\omega_i}(\xi) = \begin{cases} 0, & |\xi| \le \mu\omega_i, \\ \mathrm{sgn}(\xi)(|\xi| - \mu\omega_i), & |\xi| > \mu\omega_i. \end{cases} \quad (12)$$
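In MATLAB, the weighted operator (11)–(12) is a componentwise generalization of the unweighted one; a minimal sketch (the function name is ours):

```matlab
% Weighted soft thresholding T_{mu*omega} from (11)-(12): each component
% gets its own threshold mu*omega_i, where omega is the weight vector.
function u = weighted_soft_threshold(v, mu, omega)
    u = sign(v) .* max(abs(v) - mu .* omega, 0);
end
```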

3. Reweighted $\ell_1$ Minimization Algorithm

Consider the weighted $\ell_1$ minimization problem
$$\min_{u\in\mathbb{R}^n}\left\{\sum_i \omega_i|u_i| : Au = f\right\} \quad (13)$$
where $\omega_i$, $i = 1, 2, \ldots, n$, are positive weights. Denoting $\sum_i \omega_i|u_i|$ by $\|u\|_\omega$, (13) can be rewritten as
$$\min_{u\in\mathbb{R}^n}\{\|u\|_\omega : Au = f\} \quad (14)$$
Just like its unweighted counterpart (2), this convex problem can be recast as a linear program. The weighted $\ell_1$ minimization (14) can be viewed as a relaxation of the weighted $\ell_0$ minimization problem. Whenever the solution of (1) is unique, it is also the unique solution of the weighted $\ell_0$ minimization problem, provided that the weights do not vanish. However, the corresponding $\ell_1$ relaxations (2) and (14) will in general have different solutions. Hence, one may think of the weights $\omega$ as free parameters in the convex relaxation whose values, if set wisely, could improve the signal reconstruction.
Compared with the many algorithms that can be used to solve problem (2), the Chaotic Iteration is very effective [26]. By avoiding the computation of $A^+$ in algorithm (4), it reduces the computational complexity and improves the computational efficiency. Thus, we equip this algorithm with a proper reweighting coefficient and obtain a new fast reweighted $\ell_1$ minimization algorithm:
$$\begin{cases} f^{(k+1)} = f^{(k)} + f - Au^{(k)}, \\ y^{(k+1)} = y^{(k)} + V_0 f^{(k+1)} - V_0(Ay^{(k)}), \quad k = 0, 1, 2, \ldots, \\ u^{(k+1)} = \delta T_{\mu\omega^{(k)}}(y^{(k+1)}), \\ \omega_i^{(k+1)} = \omega(u_i^{(k+1)}), \quad i = 1, 2, \ldots, n, \end{cases} \quad (15)$$
where $f^{(0)} = 0$, $u^{(0)} = 0$, $y^{(0)} = V_0 f^{(0)}$, $V_0 = \alpha A^*$, $0 < \alpha < \frac{2}{\|A\|_2^2}$; $T_{\mu\omega^{(k)}}(\cdot)$ is the weighted soft thresholding operator; and $\omega(\cdot)$ is the weight function of $u^{(k)}$.
There are many ways to update the weight coefficients $\omega(\cdot)$; ERMA uses the following schemes:
$$\omega(u^{(k)}, p) = (|u^{(k)}| + \varepsilon)^{p-1} \quad (16)$$
$$\omega(u^{(k)}, p) = (|u^{(k)}|^2 + \varepsilon)^{p/2 - 1} \quad (17)$$
where $0 \le p \le 1$ and $\varepsilon > 0$ avoids division by zero. In [28], the weight coefficients (16) and (17) for the $\ell_p$, $0 \le p \le 1$, problem are discussed. We adopt the iterative formulas (16) and (17) with $p = 0, 1, 1/2, 2/3$; these values of $p$ are representative. RMA-IF and the Chaotic Iteration are special cases of ERMA with weight coefficient (16) for $p = 0$ and $p = 1$, respectively.
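Putting the pieces together, the following is a minimal MATLAB sketch of ERMA (15) with the weight updates (16) and (17); the names use_omega2 and eps0 (for $\varepsilon$) are ours, and weighted_soft_threshold is the helper sketched in Section 2.3:

```matlab
% ERMA (15): Chaotic Iteration plus the reweighting step (16) or (17).
% Inputs assumed given: A, f, mu, delta, p, eps0, kmax, use_omega2.
alpha = 1 / norm(A)^2;  V0t = @(x) alpha * (A' * x);
fk = zeros(size(f));  u = zeros(size(A, 2), 1);  y = V0t(fk);
omega = ones(size(u));                       % omega^(0): start unweighted
for k = 1:kmax
    fk = fk + f - A*u;
    y  = y + V0t(fk) - V0t(A*y);
    u  = delta * weighted_soft_threshold(y, mu, omega);
    if use_omega2
        omega = (u.^2 + eps0).^(p/2 - 1);    % weight scheme (17)
    else
        omega = (abs(u) + eps0).^(p - 1);    % weight scheme (16)
    end
end
```

For $p = 1$, scheme (16) gives constant unit weights and the loop reduces to the Chaotic Iteration; for $p = 0$, it reproduces RMA-IF.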
In [27], we introduced the weighted soft thresholding operator $T_{\mu\omega}(\cdot)$; we now discuss it further in the following.
Theorem 3.
$$T_{\mu\omega}(\nu) = \arg\min_{u\in\mathbb{R}^n}\left\{\mu\|u\|_\omega + \frac{1}{2}\|u - \nu\|_2^2\right\} \quad (18)$$
Proof. 
Let $f(u) = \mu\|u\|_\omega + \frac{1}{2}\|u - \nu\|_2^2 = \mu\sum_{i=1}^{n}\omega_i|u_i| + \frac{1}{2}\sum_{i=1}^{n}(\nu_i^{(k)} - u_i)^2$; then,
$$\frac{\partial f(u)}{\partial u_i} = \begin{cases} \mu\omega_i + u_i - \nu_i^{(k)}, & u_i > 0, \\ -\mu\omega_i + u_i - \nu_i^{(k)}, & u_i < 0. \end{cases} \quad (19)$$
Case 1: $\nu_i^{(k)} > \mu\omega_i > 0$.
(a) If $u_i > 0$, setting $\frac{\partial f(u)}{\partial u_i} = 0$ gives $u_i = \nu_i^{(k)} - \mu\omega_i > 0$; in this case, $f(u)$ attains its minimum along direction $e_i$ at $u_i = \nu_i^{(k)} - \mu\omega_i > 0$, and the minimum is
$$f(u)\big|_{u_i = \nu_i^{(k)} - \mu\omega_i} = \mu\omega_i(\nu_i^{(k)} - \mu\omega_i) + \frac{1}{2}(\mu\omega_i)^2 + \delta_1\ (\delta_1 > 0) = \Delta_1 + \delta_1 \quad (20)$$
(b) If $u_i < 0$, then $\frac{\partial f(u)}{\partial u_i} = u_i - \nu_i^{(k)} - \mu\omega_i < 0$, so $f(u)$ decreases along direction $e_i$, and
$$f(u)\big|_{u_i = 0} = \frac{1}{2}(\nu_i^{(k)})^2 + \delta_1 = \Delta_2 + \delta_1 \quad (21)$$
Since $\Delta_2 - \Delta_1 = \frac{1}{2}(\nu_i^{(k)})^2 - \left(\mu\omega_i\nu_i^{(k)} - \frac{1}{2}(\mu\omega_i)^2\right) = \frac{1}{2}(\nu_i^{(k)} - \mu\omega_i)^2 > 0$, the minimizer of $f(u)$ along direction $e_i$ is $u_i = \nu_i^{(k)} - \mu\omega_i$.
Case 2: $\nu_i^{(k)} < -\mu\omega_i < 0$.
(a) If $u_i > 0$, then $\frac{\partial f(u)}{\partial u_i} = u_i - \nu_i^{(k)} + \mu\omega_i > 0$, so $f(u)$ increases along direction $e_i$, and
$$f(u)\big|_{u_i = 0} = \frac{1}{2}(\nu_i^{(k)})^2 + \delta_3\ (\delta_3 > 0) = \Delta_3 + \delta_3 \quad (22)$$
(b) If $u_i < 0$, setting $\frac{\partial f(u)}{\partial u_i} = 0$ gives $u_i = \nu_i^{(k)} + \mu\omega_i < 0$; the minimizer of $f(u)$ along direction $e_i$ is $u_i = \nu_i^{(k)} + \mu\omega_i$, and the corresponding minimum is
$$f(u)\big|_{u_i = \nu_i^{(k)} + \mu\omega_i} = -\mu\omega_i(\nu_i^{(k)} + \mu\omega_i) + \frac{1}{2}(\mu\omega_i)^2 + \delta_3 = \Delta_4 + \delta_3 \quad (23)$$
Since $\Delta_3 - \Delta_4 = \frac{1}{2}(\nu_i^{(k)})^2 + \left(\mu\omega_i\nu_i^{(k)} + \frac{1}{2}(\mu\omega_i)^2\right) = \frac{1}{2}(\nu_i^{(k)} + \mu\omega_i)^2 > 0$, the minimizer of $f(u)$ along direction $e_i$ is $u_i = \nu_i^{(k)} + \mu\omega_i$.
Case 3: $-\mu\omega_i \le \nu_i^{(k)} \le \mu\omega_i$.
(a) If $u_i > 0$, then $\frac{\partial f(u)}{\partial u_i} = u_i - \nu_i^{(k)} + \mu\omega_i > 0$, so $f(u)$ increases along direction $e_i$, and
$$f(u)\big|_{u_i = 0} = \frac{1}{2}(\nu_i^{(k)})^2 + \delta_2\ (\delta_2 > 0) \quad (24)$$
(b) If $u_i < 0$, then $\frac{\partial f(u)}{\partial u_i} = u_i - \nu_i^{(k)} - \mu\omega_i < 0$, so $f(u)$ decreases along direction $e_i$, and again $f(u)\big|_{u_i = 0} = \frac{1}{2}(\nu_i^{(k)})^2 + \delta_2$.
Hence, at $u_i = 0$, the minimum of $f(u)$ along direction $e_i$ is $f(u) = \frac{1}{2}(\nu_i^{(k)})^2 + \delta_2$.
In conclusion, we recover the weighted soft shrinkage operator (12), and the minimizer of the minimization problem is given componentwise by
$$u = \arg\min_{u\in\mathbb{R}^n}\left\{\mu\|u\|_\omega + \frac{1}{2}\|u - \nu\|_2^2\right\}, \qquad u_i = \begin{cases} \nu_i^{(k)} - \mu\omega_i, & \nu_i^{(k)} > \mu\omega_i > 0, \\ 0, & -\mu\omega_i \le \nu_i^{(k)} \le \mu\omega_i, \\ \nu_i^{(k)} + \mu\omega_i, & \nu_i^{(k)} < -\mu\omega_i < 0, \end{cases} \quad (25)$$
i.e., $u = [t_{\mu\omega_1}(\nu_1), t_{\mu\omega_2}(\nu_2), \ldots, t_{\mu\omega_n}(\nu_n)]^T = T_{\mu\omega}(\nu^{(k)})$. □
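Theorem 3 can also be checked numerically: the closed form should achieve an objective value no larger than that of any perturbation. A quick MATLAB sanity check of ours (not part of the paper's experiments):

```matlab
% Sanity check of Theorem 3: the closed-form minimizer T_{mu*omega}(nu)
% versus random perturbations of g(u) = mu*||u||_w + 0.5*||u - nu||_2^2.
nu = randn(5, 1);  omega = rand(5, 1) + 0.1;  mu = 0.5;
uhat = sign(nu) .* max(abs(nu) - mu*omega, 0);       % closed form (25)
g = @(u) mu*sum(omega .* abs(u)) + 0.5*norm(u - nu)^2;
for trial = 1:1000
    assert(g(uhat) <= g(uhat + 0.1*randn(5, 1)) + 1e-12);
end
```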
Taking $L_{\mu\omega}(u) = \mu\|u\|_\omega + \frac{1}{2}\|Au - f\|_2^2$, we now analyze the convergence behavior of ERMA. The main result is the following theorem.
Theorem 4.
Let $A\in\mathbb{R}^{m\times n}$, $m \ll n$, be an arbitrary matrix, and assume the initial value $V_0 = \alpha A^*$, $0 < \alpha < \frac{2}{\|A\|_2^2}$. Let $\{u^{(k)}\}_{k\in\mathbb{N}}$ be the sequence generated by ERMA ($0 \le p \le 1$). Then, for $0 < \delta < \frac{1}{\|A\|_2\|A^T\|_2}$, $\mu > 0$, and $\omega > 0$:
(a) $\{L_{\mu\omega}(u^{(k)})\}$ is a monotonically decreasing sequence and bounded below;
(b) $\{u^{(k)}\}_{k\in\mathbb{N}}$ is asymptotically regular, i.e., $\lim_{k\to\infty}\|u^{(k+1)} - u^{(k)}\|_2 = 0$;
(c) $L_{\mu\omega}(u^{(k)})$ converges to $L_{\mu\omega}(u^*)$, where $u^*$ is a limit point of $\{u^{(k)}\}_{k\in\mathbb{N}}$.
Proof. 
Denote by $h(u, u^{(k)})$ the $k$-th approximation of the objective function $L_{\mu\omega}(u)$:
$$h(u, u^{(k)}) = \mu\|u\|_\omega + \frac{1}{2\delta}\left\|u - \left(u^{(k)} - \delta A^T(Au^{(k)} - f)\right)\right\|_2^2 \quad (26)$$
It is easy to see that
$$L_{\mu\omega}(u) = h(u, u^{(k)}) + \frac{1}{2}\|A(u - u^{(k)})\|_2^2 - \frac{1}{2\delta}\|u - u^{(k)}\|_2^2 + C \quad (27)$$
where $C = -\frac{\delta}{2}\|A^T(Au^{(k)} - f)\|_2^2 + \frac{1}{2}\|Au^{(k)} - f\|_2^2$.
(a)
On the basis of the above derivation, we get
$$L_{\mu\omega}(u^{(k+1)}) = h(u^{(k+1)}, u^{(k)}) + \frac{1}{2}\|A(u^{(k+1)} - u^{(k)})\|_2^2 - \frac{1}{2\delta}\|u^{(k+1)} - u^{(k)}\|_2^2 + C \le h(u^{(k)}, u^{(k)}) + \frac{1}{2}\|A(u^{(k+1)} - u^{(k)})\|_2^2 - \frac{1}{2\delta}\|u^{(k+1)} - u^{(k)}\|_2^2 + C = L_{\mu\omega}(u^{(k)}) + \frac{1}{2}\|A(u^{(k+1)} - u^{(k)})\|_2^2 - \frac{1}{2\delta}\|u^{(k+1)} - u^{(k)}\|_2^2 < L_{\mu\omega}(u^{(k)}) \quad (28)$$
Since $u^{(k+1)}$ is a minimizer of $h(u, u^{(k)})$, the first inequality holds. Moreover, $0 < \delta < \frac{1}{\|A\|_2\|A^T\|_2}$ implies $\frac{1}{2}\|A(u^{(k+1)} - u^{(k)})\|_2^2 - \frac{1}{2\delta}\|u^{(k+1)} - u^{(k)}\|_2^2 < 0$, so the last inequality holds. In addition, for all $k$, $0 \le L_{\mu\omega}(u^{(k)}) \le L_{\mu\omega}(u^{(0)}) = \frac{1}{2}\|f\|_2^2$. Therefore, $\{L_{\mu\omega}(u^{(k)})\}$ is a monotonically decreasing sequence and bounded below.
(b)
From (28), we have
$$L_{\mu\omega}(u^{(k)}) - L_{\mu\omega}(u^{(k+1)}) \ge \frac{1}{2\delta}\|u^{(k+1)} - u^{(k)}\|_2^2 - \frac{1}{2}\|A(u^{(k+1)} - u^{(k)})\|_2^2 \quad (29)$$
so
$$(1 - \delta\|A\|_2^2)\|u^{(k+1)} - u^{(k)}\|_2^2 \le \|u^{(k+1)} - u^{(k)}\|_2^2 - \delta\|A(u^{(k+1)} - u^{(k)})\|_2^2 \le 2\delta\left(L_{\mu\omega}(u^{(k)}) - L_{\mu\omega}(u^{(k+1)})\right) \quad (30)$$
thus,
$$\|u^{(k+1)} - u^{(k)}\|_2^2 \le \frac{2\delta}{1 - \delta\|A\|_2^2}\left(L_{\mu\omega}(u^{(k)}) - L_{\mu\omega}(u^{(k+1)})\right) \quad (31)$$
Summing (31) from $0$ to $N$, we have
$$\sum_{k=0}^{N}\|u^{(k+1)} - u^{(k)}\|_2^2 \le \frac{2\delta}{1 - \delta\|A\|_2^2}\sum_{k=0}^{N}\left(L_{\mu\omega}(u^{(k)}) - L_{\mu\omega}(u^{(k+1)})\right) \le \frac{2\delta}{1 - \delta\|A\|_2^2}L_{\mu\omega}(u^{(0)}) \quad (32)$$
Therefore, the series $\sum_{k=0}^{\infty}\|u^{(k+1)} - u^{(k)}\|_2^2$ is convergent, and $\|u^{(k+1)} - u^{(k)}\|_2^2 \to 0$ as $k \to \infty$.
(c)
With $u^{(0)} = 0$, letting $k \to \infty$ in (32), we obtain
$$\|u^{(k+1)}\|_2^2 \le \sum_{j=0}^{k}\|u^{(j+1)} - u^{(j)}\|_2^2 \le \frac{2\delta}{1 - \delta\|A\|_2^2}L_{\mu\omega}(u^{(0)}) \quad (33)$$
Thus, the sequence $\{u^{(k)}\}$ is bounded and has a convergent subsequence $\{u^{(k_i)}\} \subset \{u^{(k)}\}$; set $u^* = \lim_{i\to\infty}u^{(k_i)}$. To prove $\lim_{k\to\infty}u^{(k)} = u^*$, we only need to show that $\lim_{i\to\infty}u^{(l_i)} = u^*$ for every convergent subsequence $\{u^{(l_i)}\} \subset \{u^{(k)}\}$.
Suppose instead that $\lim_{i\to\infty}u^{(l_i)} = \nu^*$ with $u^* - \nu^* = d \ne 0$. Then there is a constant $K_1 > 0$ such that, for $l_i > K_1$, we have $\|u^{(l_i)} - \nu^*\| < \frac{\|d\|}{3}$. Moreover, because $\lim_{i\to\infty}u^{(k_i)} = u^*$, there is $K_2 > 0$ such that, for $k_i > K_2$, we have $\|u^{(k_i)} - u^*\| < \frac{\|d\|}{3}$. Therefore,
$$\|u^{(l_i)} - u^{(k_i)}\| = \|u^{(l_i)} - \nu^* + u^* - u^{(k_i)} + \nu^* - u^*\| \ge \|\nu^* - u^*\| - \|u^{(l_i)} - \nu^*\| - \|u^* - u^{(k_i)}\| \ge \frac{\|d\|}{3} \quad (34)$$
This contradicts the asymptotic regularity in (b). Therefore, $L_{\mu\omega}(u^{(k)})$ converges to the fixed value $L_{\mu\omega}(u^*)$, where $\lim_{k\to\infty}u^{(k)} = u^*$. □

4. Numerical Results

In the basis pursuit problem (2), the constraint $Au = f$ is an underdetermined linear system. We test ERMA with (16) ($\omega_1$) and (17) ($\omega_2$) for $p = 0, 1, 1/2, 2/3$ on problem (2). Here, the quality of restoration is measured by the signal-to-noise ratio (SNR), the structural similarity (SSIM), and the mean squared error (MSE), defined by
$$\mathrm{SNR}(u) \triangleq 20\log_{10}\frac{\|u\|_2}{\|u - \tilde{u}\|_2} \quad (35)$$
$$\mathrm{SSIM}(u) \triangleq \frac{(2\mu_u\mu_{\tilde{u}} + \theta_1)(2\sigma_{u\tilde{u}} + \theta_2)}{(\mu_u^2 + \mu_{\tilde{u}}^2 + \theta_1)(\sigma_u^2 + \sigma_{\tilde{u}}^2 + \theta_2)} \quad (36)$$
$$\mathrm{MSE}(u) \triangleq \frac{1}{m \times n}\sum_{i=1}^{m}\sum_{j=1}^{n}\left[u(i,j) - \tilde{u}(i,j)\right]^2 \quad (37)$$
where $u$ is the restored signal, $\tilde{u}$ is the original signal, and $\mu_u$, $\mu_{\tilde{u}}$, $\sigma_u$, $\sigma_{\tilde{u}}$, and $\sigma_{u\tilde{u}}$ are the local means, standard deviations, and cross-covariance of the images $u$, $\tilde{u}$. Our code was written in MATLAB (R2015b) and run on a Windows PC with an Intel(R) Core(TM) i3-4170 @ 3.70 GHz and 4 GB of memory. A stochastic matrix $A\in\mathbb{R}^{m\times n}$, $m \ll n$, with $\mathrm{rank}(A) = r \le m$ was generated, the rank being verified with the built-in function $\mathrm{rank}(\cdot)$ in MATLAB. $nNZ$ denotes the number of nonzero elements of $u$, and $k_{\max}$ is the maximum iteration number. The initial $V_0 = \alpha A^T$, $0 < \alpha < \frac{2}{\|A\|_2^2}$, was obtained for each stochastic matrix $A$. We set $k_{\max} = 200$, $\mu = 1$, $\delta = 0.9$, $f^{(0)} = 0$, and $\alpha = \frac{1}{\|A\|_2^2}$; for the selection of the parameters $\mu$ and $\delta$, see also [31,32].
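For reference, a sketch of how such a test instance can be generated under our reading of this setup (the product of two Gaussian factors yields a rank-$r$ stochastic matrix with probability one):

```matlab
% Test instance: rank-r random A, nNZ-sparse signal, noisy measurements,
% and the SNR metric (35).
m = 250;  n = 500;  r = 200;  nNZ = 30;  sigma = 0.5;
A = randn(m, r) * randn(r, n);          % rank(A) = r with probability 1
u_orig = zeros(n, 1);
idx = randperm(n, nNZ);
u_orig(idx) = randn(nNZ, 1);            % nNZ nonzero components
f = A*u_orig + sigma*randn(m, 1);       % f = A u + eps, eps ~ N(0, sigma^2)
snr = @(u, ut) 20*log10(norm(u) / norm(u - ut));   % SNR as in (35)
```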
The original stochastic signals with parameters $n = 500$, $m = 250$, $nNZ = 30$, $r = 200$, together with the signals reconstructed via ERMA with $\omega_1$ and $\omega_2$ for $p = 0, 1, 1/2, 2/3$, via (4) based on $A^+$ (SVD), and via the Chaotic Iteration, are shown in Figure 1. Moreover, the measurement signal $f$ is plotted in Figure 1, where $f = Au + \epsilon$, $\epsilon \sim N(0, \sigma^2)$, $\sigma$ is the standard deviation, and $\sigma = 0.5$.
Subsequently, we chose $A$ of large size: $n = 5000$, $m = 1000$, $nNZ = 50$, $r = 500$. The corresponding stochastic signals reconstructed via ERMA with $\omega_1$ and $\omega_2$ for $p = 0, 1, 1/2, 2/3$, via (4) based on $A^+$ (SVD), and via the Chaotic Iteration are displayed in Figure 2. The measurement signal $f$ is plotted in Figure 2, where $\sigma = 0.5$. For the two groups of parameters, we randomly chose several groups of different numerical experiments and list the results in Table 1 and Table 2, respectively.
In Table 1 and Table 2, $c$ is $\mathrm{cond}(A)$ and $I$ is the initial SNR. These condition numbers of $A$ are large and the initial SNRs are small, so solving problem (2) is very difficult. Comparing the data in Table 1 and Table 2, we can see the restored signals obtained by the (4)-based $A^+$ (SVD) algorithm, the Chaotic Iteration, ERMA with $\omega_1$, and ERMA with $\omega_2$. When $p = 0$, the SNR of ERMA with $\omega_2$ was the largest among all these results. Although the running times of these algorithms for one-dimensional signal reconstruction were small, the running times of the Chaotic Iteration, ERMA with $\omega_1$, $p = 0$, and ERMA with $\omega_2$, $p = 0$ were almost the same, while the time consumed by the (4)-based $A^+$ (SVD) algorithm was larger than those of the other three. This is because (4) involves computing the SVD, whereas the remaining algorithms use only matrix–vector products. Thus, as expected, the computing times satisfy ERMA with $\omega_2$, $p = 0$ ≈ ERMA with $\omega_1$, $p = 0$ ≈ Chaotic Iteration < (4) based on $A^+$ (SVD), and the SNRs satisfy ERMA with $\omega_2$, $p = 0$ > ERMA with $\omega_1$, $p = 0$ > Chaotic Iteration ≈ (4) based on $A^+$ (SVD).
When the above algorithms were applied to image deblurring, the reconstructed results for an $m < n$ ($m = 64$, $n = 80$) image convolved with a 7 × 7 Gaussian kernel and for an $m = n = 256$ image convolved with a 5 × 5 disk kernel are shown in Figure 3 and Figure 4, respectively. The first set of images is $m < n$ (64 × 80), a smaller, nonstandard size; the second set is $m = n$ (256 × 256), a common size. The choice of these two image sizes illustrates the generality of our algorithm. We notice that the ERMA algorithm performed better than the others. From Table 3 and Table 4, which list the results of Figure 3 and Figure 4, the computing speeds satisfy ERMA ≈ Chaotic < $A^+$ (SVD); the SNR and SSIM satisfy $A^+$ (SVD) ≈ ERMA ($\omega_2$, $p = 0$) > ERMA ($\omega_1$, $p = 0$) (RMA-IF) > Chaotic (ERMA ($\omega_1$, $p = 1$)); and the MSE satisfies $A^+$ (SVD) ≈ ERMA ($\omega_2$, $p = 0$) < ERMA ($\omega_1$, $p = 0$) (RMA-IF) < Chaotic (ERMA ($\omega_1$, $p = 1$)). Obviously, the parameter choice $\omega_2$, $p = 0$ in ERMA is better than the others. We have no doubt that the same conclusion holds for image deblurring as for signal reconstruction.
In fact, the complexity analysis includes a comparison of the workloads of the several methods. Let $K$ be the loop count, as before. The workload of the $A^+$ algorithm (4) splits into two parts: computing $A^+$ and running the loop of (4). Because the singular value decomposition involves matrix products and eigenvalue computations, the workload of computing $A = USV^T$ and $A^+ = VS^+U^T$ is $O(n^3)$ when $m < n$. Because the loop involves only matrix–vector products, its workload is $O(m \times n \times K)$. As a result, the overall workload of the $A^+$ algorithm is $O(m \times n \times K) + O(n^3)$, while the workload of each of the remaining algorithms is $O(m \times n \times K)$. Obviously, $K < m \ll n$, so the $A^+$ algorithm (4) has the largest workload of the four algorithms [23,26,27]; the computation of the generalized inverse $A^+$ is the dominant cost.
The comparison of the SNR, SSIM, and MSE values of all methods in Table 3 and Table 4 shows that ERMA outperformed the other two methods and that $\omega_2$, $p = 0$ is a relatively good choice for our new algorithm, which was found to be more effective than the $A^+$ (SVD) algorithm, the Chaotic algorithm in [26], and RMA-IF (that is, ERMA with $\omega_1$, $p = 0$) in [27], given the enhanced sparsity proposed in this work. Such a parameter group improves the computational efficiency while preserving the effectiveness of image recovery.

5. Conclusions

We have proposed an effective extended reweighted $\ell_1$ minimization algorithm (ERMA). The algorithm is based on the Chaotic Iteration and the iterative reweighting strategy used to solve the $\min_{u\in\mathbb{R}^n}\{\|u\|_p^p : Au = f\}$ problem. ERMA is competitive because of its restoration effectiveness and its smaller workload compared with its initial version. Numerically and theoretically, ERMA not only inherits all the merits of the Chaotic Iteration and the linearized Bregman iteration but also greatly improves the SNR, SSIM, and MSE. Importantly, ERMA still delivers stable results when $A$ is very ill-conditioned. Overall, ERMA remains a good alternative when the condition of $A$ is unknown and the measurement signal contains error.

Author Contributions

Conceptualization, S.H., Y.C., T.Q.; methodology, T.Q.; validation, Y.C., T.Q.; formal analysis, S.H.; investigation, Y.C.; resources, S.H.; data curation, T.Q.; writing—original draft preparation, Y.C.; writing—review and editing, S.H.; supervision, S.H.; funding acquisition, S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “National Key Research and Development Program, grant number 2018YFC1504402 and 2018YFC1504404”; “the Tian Yuan Special Funds of the National Natural Science Foundation of China, grant number 11626233”; and “the Fundamental Research Funds for the Central Universities, grant number 19CX02062A”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We kindly thank the National Key Research and Development Program.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aspandi, D.; Martinez, O.; Sukno, F.; Binefa, X. Composite recurrent network with internal denoising for facial alignment in still and video images in the wild. Image Vis. Comput. 2021, 111, 104189. [Google Scholar] [CrossRef]
  2. Zhu, P.; Jiang, Z.; Zhang, J.; Zhang, Y.; Wu, P. Remote sensing image watermarking based on motion blur degeneration and restoration model. Optics 2021, 248, 168018. [Google Scholar] [CrossRef]
  3. Chaudhari, A.; Kulkarni, J. Adaptive Bayesian Filtering Based Restoration of MR Images. Biomed. Signal Process. Control. 2021, 68, 102620. [Google Scholar] [CrossRef]
  4. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic Decomposition by Basis Pursuit. SIAM Rev. 2001, 43, 129–159. [Google Scholar] [CrossRef]
  5. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A New Alternating Minimization Algorithm for Total Variation Image Reconstruction. SIAM J. Imag. Sci. 2008, 1, 248–272. [Google Scholar] [CrossRef]
  6. Tsaig, Y.; Donoho, D.L. Extensions of compressed sensing. Signal Process. 2006, 86, 549–571. [Google Scholar] [CrossRef]
  7. Candes, E.J.; Romberg, J. Errata for Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions. Found. Comput. Math. 2007, 7, 529–531. [Google Scholar] [CrossRef] [Green Version]
  8. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef] [Green Version]
  9. Donoho, D.L. Compressed Sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar]
  10. Zhang, X.; Zhang, S.; Lin, J.; Sun, F.; Zhu, X.; Yang, Y.; Tong, X.; Yang, H. An Efficient Seismic Data Acquisition Based on Compressed Sensing Architecture With Generative Adversarial Networks. IEEE Access 2019, 7, 105948–105961. [Google Scholar] [CrossRef]
  11. Shi, L.; Qu, G.; Wang, Q. A Method of Reweighting the Sensing Matrix for Compressed Sensing. IEEE Access 2021, 9, 21425–21432. [Google Scholar] [CrossRef]
  12. Xu, S.; Yang, X.; Jiang, S. A fast nonlocally centralized sparse representation algorithm for image denoising. Signal Process. 2017, 131, 99–112. [Google Scholar] [CrossRef]
  13. Xie, J.; Liao, A.; Lei, Y. A new accelerated alternating minimization method for analysis sparse recovery. Signal Process. 2018, 145, 167–174. [Google Scholar] [CrossRef]
  14. Cai, J.-F.; Osher, S.; Shen, Z. Linearized Bregman iterations for compressed sensing. Math. Comput. 2009, 78, 1515–1536. [Google Scholar] [CrossRef]
  15. Cholamjiak, W.; Khan, S.A.; Yambangwai, D.; Kazmi, K.R. Strong Convergence Analysis of Common Variational Inclusion problems Involving an Inertial Parallel Monotone Hybrid Method for a Novel Application to Image Restoration. RACSAM 2020, 114, 99. [Google Scholar] [CrossRef]
  16. Cholamjiak, W.; Dutta, H.; Yambangwai, D. Image Restorations Using an Inertial Parallel Pybrid Algorithm with Armijo Linesearch for Nonmonotone Equilibrium Problems. Chaos Solitons Fractals 2021, 153, 111462. [Google Scholar] [CrossRef]
  17. Suantai, S.; Peeyada, P.; Yambangwai, D.; Cholamjiak, W. A Parallel-Viscosity-Type Subgradient Extragradient-Line Method for Finding the Common Solution of Variational Inequality Problems Applied to Image Restoration Problems. Mathematics 2020, 8, 248. [Google Scholar] [CrossRef] [Green Version]
  18. Yambangwai, D.; Khan, S.A.; Dutta, H.; Cholamjiak, W. Image Restoration by Advanced Parallel Inertial Forward–Backward Splitting Methods. Soft Comput. 2021, 25, 6029–6042. [Google Scholar] [CrossRef]
  19. Cai, J.-F.; Osher, S.; Shen, Z. Convergence of the Linearized Bregman Iteration for ℓ1-Norm Minimization. Math. Comput. 2009, 78, 2127–2136. [Google Scholar] [CrossRef] [Green Version]
  20. Amaral, G.; Calliari, F.; Lunglmayr, M. Profile-Splitting Linearized Bregman Iterations for Trend Break Detection Applications. Electronics 2020, 9, 423. [Google Scholar] [CrossRef] [Green Version]
  21. Osher, S.; Mao, Y.; Dong, B.; Yin, W. Fast Linearized Bregman Iteration for Compressive Sensing and Sparse Denoising. Commun. Math. Sci. 2010, 8, 93–111. [Google Scholar]
  22. Cai, J.; Osher, S.; Shen, Z. Linearized Bregman Iterations for Frame-Based Image Deblurring. SIAM J. Imaging Sci. 2009, 2, 226–252. [Google Scholar] [CrossRef]
  23. Qiao, T.; Li, W.; Wu, B. A New Algorithm Based on Linearized Bregman Iteration with Generalized Inverse for Compressed Sensing. Circuits Syst. Signal Process. 2014, 33, 1527–1539. [Google Scholar] [CrossRef]
  24. Liu, J.; Ma, R.; Zeng, X.; Liu, W.; Wang, M.; Chen, H. An efficient non-convex total variation approach for image deblurring and denoising. Appl. Math. Comput. 2021, 397, 125977. [Google Scholar] [CrossRef]
  25. Liu, J.; Ni, A.; Ni, G. A nonconvex l1(l1−l2) model for image restoration with impulse noise. J. Comput. Appl. Math. 2020, 378, 112934. [Google Scholar] [CrossRef]
  26. Qiao, T.; Li, W.; Wu, B.; Wang, J. A chaotic iterative algorithm based on linearized Bregman iteration for image deblurring. Inf. Sci. 2014, 272, 198–208. [Google Scholar] [CrossRef]
  27. Qiao, T.; Wu, B.; Li, W.; Dong, A. A new reweighted l1 minimization algorithm for image deblurring. J. Inequalities Appl. 2014, 2014, 238. [Google Scholar] [CrossRef] [Green Version]
  28. Rose, K. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proc. IEEE 1998, 86, 2210–2239. [Google Scholar] [CrossRef] [Green Version]
  29. Jin, D.; Yang, Y.; Ge, T.; Wu, D. A Fast Sparse Recovery Algorithm for Compressed Sensing Using Approximate l0 Norm and Modified Newton Method. Materials 2019, 12, 1227. [Google Scholar] [CrossRef] [Green Version]
  30. Materassi, D.; Innocenti, G.; Giarré, L.; Salapaka, M. Model Identification of a Network as Compressing Sensing. Syst. Control. Lett. 2013, 62, 664–672. [Google Scholar] [CrossRef] [Green Version]
  31. Zhou, F.; Wu, Y.; Dai, Y.; Wang, P. Detection of Small Target Using Schatten 1/2 Quasi-Norm Regularization with Reweighted Sparse Enhancement in Complex Infrared Scenes. Remote Sens. 2019, 11, 2058. [Google Scholar] [CrossRef] [Green Version]
  32. Zhu, H.; Xiao, Y.; Wu, S.-Y. Large sparse signal recovery by conjugate gradient algorithm based on smoothing technique. Comput. Math. Appl. 2013, 66, 24–32. [Google Scholar] [CrossRef]
  33. Xiu, X.; Kong, L.; Yan, H. Iterative Reweighted Methods for l1–lp Minimization. Comput. Optim. Appl. 2018, 70, 201–219. [Google Scholar] [CrossRef] [Green Version]
  34. Faris, H.; Mafarja, M.M.; Heidari, A.A.; Aljarah, I.; Al-Zoubi, A.; Mirjalili, S.; Fujita, H. An efficient binary Salp Swarm Algorithm with crossover scheme for feature selection problems. Knowl.-Based Syst. 2018, 154, 43–67. [Google Scholar] [CrossRef]
  35. Abdi, M.J. Comparison of Several Reweighted l1-Algorithms for Solving Cardinality Minimization Problems. arXiv 2013, arXiv:1304.6655. [Google Scholar]
  36. Qiao, D.; Pang, G.K.H. An Iteratively Reweighted Least Square Algorithm for RSS-Based Sensor Network Localization; IEEE: Beijing, China, 2011; pp. 1085–1092. [Google Scholar]
  37. Daubechies, I.; DeVore, R.; Fornasier, M.; Güntürk, C.S. Iteratively reweighted least squares minimization for sparse recovery. Commun. Pure Appl. Math. 2010, 63, 1–38. [Google Scholar] [CrossRef] [Green Version]
  38. Cao, W.; Sun, J.; Xu, Z. Fast Image Deconvolution Using Closed-Form Thresholding Formulas of Lq (q = 1/2, 2/3) Regularization. J. Vis. Commun. Image Represent. 2013, 24, 31–41. [Google Scholar] [CrossRef]
  39. Zhang, H.; Cheng, L.Z. A Linear Bregman Iterative Algorithm. Comput. Math. 2010, 32, 97–104. [Google Scholar]
Figure 1. (m, n) = (250, 500).
Figure 2. (m, n) = (1000, 5000).
Figure 3. Deblurring results of the $A^+$ Bregman iteration, the Chaotic Iteration, and ERMA with $\omega_1$, $\omega_2$ for $p = 0, 1, 1/2, 2/3$, for a 64 × 80 part of the sparse Words image convolved with a 5 × 5 Gaussian kernel and contaminated by Gaussian white noise of variance $\sigma^2$, $\sigma = 10$.
Figure 4. Deblurring results of the $A^T$ Bregman iteration, the Chaotic Iteration, and ERMA with $\omega_1$, $\omega_2$ for $p = 0, 1, 1/2, 2/3$, for a 256 × 256 image convolved with a 5 × 5 disk kernel.
Table 1. The comparison of $(m, n, \mathrm{rank}(A), nNZ)$ = (250, 500, 200, 30).

| Setting | Algorithm | SNR | Time (s) | Algorithm | SNR | Time (s) |
|---|---|---|---|---|---|---|
| $\sigma = 0$ | $A^+$ (SVD) | 19.8251 | 0.044559 | Chaotic | 18.0157 | 0.028162 |
| $I = 2.5102$ | ERMA $\omega_1$, $p = 1$ | 18.0113 | 0.027638 | ERMA $\omega_2$, $p = 1$ | 32.5775 | 0.026196 |
| $\alpha = 1.6447 \times 10^{-6}$ | ERMA $\omega_1$, $p = 0$ | 37.2878 | 0.027142 | ERMA $\omega_2$, $p = 0$ | 40.1602 | 0.028243 |
| $c = 8.6203 \times 10^{15}$ | ERMA $\omega_1$, $p = 1/2$ | 32.1598 | 0.031926 | ERMA $\omega_2$, $p = 1/2$ | 37.0352 | 0.029582 |
| | ERMA $\omega_1$, $p = 2/3$ | 36.0088 | 0.029953 | ERMA $\omega_2$, $p = 2/3$ | 35.7393 | 0.032527 |
| $\sigma = 0.2$ | $A^+$ (SVD) | 20.1438 | 0.044666 | Chaotic | 19.3884 | 0.026892 |
| $I = 2.3396$ | ERMA $\omega_1$, $p = 1$ | 19.1838 | 0.029084 | ERMA $\omega_2$, $p = 1$ | 32.4715 | 0.035649 |
| $\alpha = 1.7324 \times 10^{-6}$ | ERMA $\omega_1$, $p = 0$ | 41.1679 | 0.032367 | ERMA $\omega_2$, $p = 0$ | 43.0686 | 0.031641 |
| $c = 8.6203 \times 10^{15}$ | ERMA $\omega_1$, $p = 1/2$ | 29.1526 | 0.033237 | ERMA $\omega_2$, $p = 1/2$ | 41.2370 | 0.035814 |
| | ERMA $\omega_1$, $p = 2/3$ | 34.1705 | 0.047743 | ERMA $\omega_2$, $p = 2/3$ | 39.1498 | 0.034877 |
| $\sigma = 0.5$ | $A^+$ (SVD) | 19.0761 | 0.038465 | Chaotic | 18.3561 | 0.028174 |
| $I = 2.4999$ | ERMA $\omega_1$, $p = 1$ | 18.3558 | 0.026806 | ERMA $\omega_2$, $p = 1$ | 32.0254 | 0.029678 |
| $\alpha = 1.6235 \times 10^{-6}$ | ERMA $\omega_1$, $p = 0$ | 32.0078 | 0.026468 | ERMA $\omega_2$, $p = 0$ | 37.7192 | 0.027780 |
| $c = 7.9038 \times 10^{15}$ | ERMA $\omega_1$, $p = 1/2$ | 28.1766 | 0.037457 | ERMA $\omega_2$, $p = 1/2$ | 36.3582 | 0.033224 |
| | ERMA $\omega_1$, $p = 2/3$ | 31.9909 | 0.033079 | ERMA $\omega_2$, $p = 2/3$ | 35.4432 | 0.031825 |
| $\sigma = 0.8$ | $A^+$ (SVD) | 14.6728 | 0.047050 | Chaotic | 14.5392 | 0.035564 |
| $I = 2.3549$ | ERMA $\omega_1$, $p = 1$ | 14.5799 | 0.038262 | ERMA $\omega_2$, $p = 1$ | 20.8947 | 0.040986 |
| $\alpha = 1.7534 \times 10^{-6}$ | ERMA $\omega_1$, $p = 0$ | 21.5914 | 0.045155 | ERMA $\omega_2$, $p = 0$ | 25.3924 | 0.039750 |
| $c = 8.2674 \times 10^{15}$ | ERMA $\omega_1$, $p = 1/2$ | 18.4561 | 0.047157 | ERMA $\omega_2$, $p = 1/2$ | 23.8044 | 0.060262 |
| | ERMA $\omega_1$, $p = 2/3$ | 20.2264 | 0.042789 | ERMA $\omega_2$, $p = 2/3$ | 23.0527 | 0.050048 |
| $\sigma = 1$ | $A^+$ (SVD) | 11.9307 | 0.037875 | Chaotic | 11.4252 | 0.025827 |
| $I = 1.8277$ | ERMA $\omega_1$, $p = 1$ | 11.4237 | 0.023979 | ERMA $\omega_2$, $p = 1$ | 17.7616 | 0.026214 |
| $\alpha = 1.7001 \times 10^{-6}$ | ERMA $\omega_1$, $p = 0$ | 20.4510 | 0.026325 | ERMA $\omega_2$, $p = 0$ | 23.2305 | 0.027434 |
| $c = 8.3993 \times 10^{15}$ | ERMA $\omega_1$, $p = 1/2$ | 16.2440 | 0.028772 | ERMA $\omega_2$, $p = 1/2$ | 20.9194 | 0.031776 |
| | ERMA $\omega_1$, $p = 2/3$ | 18.1365 | 0.031133 | ERMA $\omega_2$, $p = 2/3$ | 19.9158 | 0.031347 |
| $\sigma = 2$ | $A^+$ (SVD) | 11.8258 | 0.048401 | Chaotic | 11.6290 | 0.033802 |
| $I = 1.9283$ | ERMA $\omega_1$, $p = 1$ | 11.6270 | 0.023583 | ERMA $\omega_2$, $p = 1$ | 14.9244 | 0.039773 |
| $\alpha = 1.7534 \times 10^{-6}$ | ERMA $\omega_1$, $p = 0$ | 14.7849 | 0.033782 | ERMA $\omega_2$, $p = 0$ | 16.8113 | 0.035556 |
| $c = 8.0975 \times 10^{15}$ | ERMA $\omega_1$, $p = 1/2$ | 14.3213 | 0.058803 | ERMA $\omega_2$, $p = 1/2$ | 15.9718 | 0.035959 |
| | ERMA $\omega_1$, $p = 2/3$ | 14.7800 | 0.047833 | ERMA $\omega_2$, $p = 2/3$ | 15.6049 | 0.037541 |
Table 2. The comparison of $(m, n, \mathrm{rank}(A), nNZ)$ = (1000, 5000, 500, 50).

| Setting | Algorithm | SNR | Time (s) | Algorithm | SNR | Time (s) |
|---|---|---|---|---|---|---|
| $\sigma = 0$ | $A^+$ (SVD) | 10.2378 | 3.848475 | Chaotic | 10.2579 | 3.582770 |
| $I = 0.4807$ | ERMA $\omega_1$, $p = 1$ | 10.2579 | 3.380426 | ERMA $\omega_2$, $p = 1$ | 14.9701 | 3.268581 |
| $\alpha = 6.3215 \times 10^{-8}$ | ERMA $\omega_1$, $p = 0$ | 18.6892 | 3.704526 | ERMA $\omega_2$, $p = 0$ | 20.4912 | 3.234485 |
| $c = 3.5619 \times 10^{16}$ | ERMA $\omega_1$, $p = 1/2$ | 13.6919 | 3.545974 | ERMA $\omega_2$, $p = 1/2$ | 17.8093 | 3.559504 |
| | ERMA $\omega_1$, $p = 2/3$ | 15.2319 | 3.253955 | ERMA $\omega_2$, $p = 2/3$ | 16.8820 | 3.266536 |
| $\sigma = 0.2$ | $A^+$ (SVD) | 7.9981 | 3.700763 | Chaotic | 7.9993 | 3.221959 |
| $I = 0.4131$ | ERMA $\omega_1$, $p = 1$ | 7.9993 | 3.222183 | ERMA $\omega_2$, $p = 1$ | 11.2804 | 3.224178 |
| $\alpha = 5.9979 \times 10^{-8}$ | ERMA $\omega_1$, $p = 0$ | 13.4954 | 3.214983 | ERMA $\omega_2$, $p = 0$ | 15.2454 | 3.219813 |
| $c = 2.4434 \times 10^{16}$ | ERMA $\omega_1$, $p = 1/2$ | 10.4032 | 3.224926 | ERMA $\omega_2$, $p = 1/2$ | 13.2725 | 3.233614 |
| | ERMA $\omega_1$, $p = 2/3$ | 11.4345 | 3.229159 | ERMA $\omega_2$, $p = 2/3$ | 12.6096 | 3.217367 |
| $\sigma = 0.5$ | $A^+$ (SVD) | 9.5544 | 3.733797 | Chaotic | 9.5565 | 3.371611 |
| $I = 0.4742$ | ERMA $\omega_1$, $p = 1$ | 9.5565 | 3.226727 | ERMA $\omega_2$, $p = 1$ | 14.5790 | 3.398832 |
| $\alpha = 6.1911 \times 10^{-8}$ | ERMA $\omega_1$, $p = 0$ | 20.8740 | 3.333231 | ERMA $\omega_2$, $p = 0$ | 21.6455 | 3.206791 |
| $c = 3.9381 \times 10^{16}$ | ERMA $\omega_1$, $p = 1/2$ | 14.0289 | 3.885603 | ERMA $\omega_2$, $p = 1/2$ | 18.0554 | 3.214479 |
| | ERMA $\omega_1$, $p = 2/3$ | 16.1118 | 3.948218 | ERMA $\omega_2$, $p = 2/3$ | 16.8428 | 3.228236 |
| $\sigma = 0.8$ | $A^+$ (SVD) | 8.7799 | 3.788281 | Chaotic | 8.7916 | 3.194924 |
| $I = 0.4481$ | ERMA $\omega_1$, $p = 1$ | 8.7917 | 3.211612 | ERMA $\omega_2$, $p = 1$ | 12.8191 | 3.213434 |
| $\alpha = 5.9648 \times 10^{-8}$ | ERMA $\omega_1$, $p = 0$ | 15.9546 | 3.200618 | ERMA $\omega_2$, $p = 0$ | 17.6266 | 3.236925 |
| $c = 3.0408 \times 10^{16}$ | ERMA $\omega_1$, $p = 1/2$ | 11.8424 | 3.214958 | ERMA $\omega_2$, $p = 1/2$ | 15.1458 | 3.238623 |
| | ERMA $\omega_1$, $p = 2/3$ | 13.1172 | 3.217332 | ERMA $\omega_2$, $p = 2/3$ | 14.3617 | 3.219191 |
| $\sigma = 1$ | $A^+$ (SVD) | 7.7927 | 3.767992 | Chaotic | 7.7971 | 3.313606 |
| $I = 0.4723$ | ERMA $\omega_1$, $p = 1$ | 7.7971 | 3.196600 | ERMA $\omega_2$, $p = 1$ | 11.0463 | 3.229021 |
| $\alpha = 6.1611 \times 10^{-8}$ | ERMA $\omega_1$, $p = 0$ | 12.7602 | 3.212222 | ERMA $\omega_2$, $p = 0$ | 14.3195 | 3.207623 |
| $c = 2.6591 \times 10^{16}$ | ERMA $\omega_1$, $p = 1/2$ | 10.2262 | 3.220439 | ERMA $\omega_2$, $p = 1/2$ | 12.6659 | 3.219741 |
| | ERMA $\omega_1$, $p = 2/3$ | 11.0847 | 3.240974 | ERMA $\omega_2$, $p = 2/3$ | 12.1478 | 3.214091 |
| $\sigma = 2$ | $A^+$ (SVD) | 8.0947 | 3.701683 | Chaotic | 8.0938 | 3.190012 |
| $I = 0.4758$ | ERMA $\omega_1$, $p = 1$ | 8.0938 | 3.191272 | ERMA $\omega_2$, $p = 1$ | 11.9725 | 3.197868 |
| $\alpha = 6.2299 \times 10^{-8}$ | ERMA $\omega_1$, $p = 0$ | 15.2115 | 3.200297 | ERMA $\omega_2$, $p = 0$ | 17.0376 | 3.197985 |
| $c = 2.1860 \times 10^{16}$ | ERMA $\omega_1$, $p = 1/2$ | 11.1243 | 3.210141 | ERMA $\omega_2$, $p = 1/2$ | 14.3941 | 3.217404 |
| | ERMA $\omega_1$, $p = 2/3$ | 12.3743 | 3.213414 | ERMA $\omega_2$, $p = 2/3$ | 13.5538 | 3.216027 |
Table 3. The comparison of the Figure 3 deblurring results.

| Algorithm | SNR | SSIM | MSE | Time (s) |
|---|---|---|---|---|
| $A^+$ (SVD) | 22.2237 | 0.997118 | 0.000490 | 46.163594 |
| Chaotic | 14.5435 | 0.859480 | 0.002639 | 0.144938 |
| ERMA $\omega_1$, $p = 1$ | 14.5435 | 0.859480 | 0.002639 | 0.161000 |
| ERMA $\omega_1$, $p = 0$ | 17.9619 | 0.970449 | 0.001201 | 0.209249 |
| ERMA $\omega_1$, $p = 1/2$ | 16.3457 | 0.932650 | 0.001743 | 0.301524 |
| ERMA $\omega_1$, $p = 2/3$ | 15.7405 | 0.908980 | 0.002004 | 0.287078 |
| ERMA $\omega_2$, $p = 1$ | 17.4646 | 0.958452 | 0.001347 | 0.156247 |
| ERMA $\omega_2$, $p = 0$ | 20.8713 | 0.994342 | 0.000615 | 0.157634 |
| ERMA $\omega_2$, $p = 1/2$ | 11.2449 | 0.661560 | 0.005641 | 0.233360 |
| ERMA $\omega_2$, $p = 2/3$ | 11.5423 | 0.681532 | 0.005268 | 0.224610 |
Table 4. The comparison of the Figure 4 deblurring results.

| Algorithm | SNR | SSIM | MSE | Time (s) |
|---|---|---|---|---|
| $A^T$ (SVD) | 5.7618 | 0.632264 | 0.052809 | 2.204459 |
| Chaotic | 10.3730 | 0.762658 | 0.018263 | 4.705136 |
| ERMA $\omega_1$, $p = 1$ | 10.3730 | 0.762658 | 0.018263 | 4.710400 |
| ERMA $\omega_1$, $p = 0$ | 11.4374 | 0.898208 | 0.014296 | 4.748141 |
| ERMA $\omega_1$, $p = 1/2$ | 10.9434 | 0.853082 | 0.016016 | 5.143209 |
| ERMA $\omega_1$, $p = 2/3$ | 10.7377 | 0.823245 | 0.016793 | 5.102806 |
| ERMA $\omega_2$, $p = 1$ | 11.0814 | 0.877404 | 0.015516 | 4.766557 |
| ERMA $\omega_2$, $p = 0$ | 11.9192 | 0.912981 | 0.012798 | 4.733365 |
| ERMA $\omega_2$, $p = 1/2$ | 9.8362 | 0.620787 | 0.020666 | 5.153214 |
| ERMA $\omega_2$, $p = 2/3$ | 9.8840 | 0.634580 | 0.020440 | 5.146052 |