Article

A Projected Forward-Backward Algorithm for Constrained Minimization with Applications to Image Inpainting

Suthep Suantai, Kunrada Kankam and Prasit Cholamjiak
1 Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 School of Science, University of Phayao, Phayao 56000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(8), 890; https://doi.org/10.3390/math9080890
Submission received: 26 March 2021 / Revised: 11 April 2021 / Accepted: 12 April 2021 / Published: 16 April 2021
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

In this research, we study the convex minimization problem in the form of the sum of two proper, lower semicontinuous, and convex functions. We introduce a new projected forward-backward algorithm using linesearch and inertial techniques, and we establish a weak convergence theorem under mild conditions. It is known that image processing tasks such as inpainting can be modeled as a constrained minimization problem over a sum of convex functions. In this connection, we apply the suggested method to image inpainting and give comparisons with other methods in the literature. The proposed algorithm outperforms the others in terms of the number of iterations. Finally, we give an analysis of the parameters assumed in our hypotheses.

1. Introduction

Let H be a real Hilbert space. The unconstrained minimization problem for the sum of two convex functions is modeled in the following form:
$$\min_{u \in H} f(u) + g(u), \tag{1}$$
where $f, g : H \to \mathbb{R} \cup \{+\infty\}$ are proper, lower semicontinuous, and convex functions. If f is differentiable on H, we know that problem (1) can be described by the fixed point equation
$$u = \operatorname{prox}_{\alpha g}\bigl(u - \alpha \nabla f(u)\bigr),$$
where $\alpha > 0$ and $\operatorname{prox}_{g}$ is the proximal operator of g, i.e., $\operatorname{prox}_{g} = (\mathrm{Id} + \partial g)^{-1}$, where $\mathrm{Id}$ is the identity operator on H and $\partial g$ is the subdifferential of g. Therefore, the forward-backward algorithm was defined in the following manner:
$$u_{k+1} = \underbrace{\operatorname{prox}_{\alpha_k g}}_{\text{backward step}}\underbrace{(\mathrm{Id} - \alpha_k \nabla f)}_{\text{forward step}}(u_k), \quad k \ge 0,$$
where $\alpha_k > 0$. Works related to the forward-backward method for convex optimization problems can be found in [1,2,3,4,5,6]. This method covers the gradient method [7,8,9] and the proximal algorithm [10,11,12]. Combettes and Wajs [13] introduced the relaxed forward-backward method stated as Algorithm 1 below.
Cruz and Nghia [14] suggested a forward-backward method with a linesearch (Algorithm 2 below), which does not need the Lipschitz constant of $\nabla f$ in its implementation. It was shown that $(u_k)$ converges weakly to a minimizer of $f + g$.
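For illustration, the following Python sketch applies the plain forward-backward iteration with a fixed step to $f(u) = \frac{1}{2}\|Au - b\|^2$ and $g(u) = \lambda\|u\|_1$, whose proximal operator is componentwise soft-thresholding. This example, including the function names and parameter values, is ours and is not taken from the paper.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, b, lam, alpha, iters=200):
    # Minimize 0.5 * ||A u - b||^2 + lam * ||u||_1 with a fixed step size alpha.
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ u - b)                           # forward (gradient) step on f
        u = soft_threshold(u - alpha * grad, alpha * lam)  # backward (proximal) step on g
    return u

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = (np.arange(100) < 5).astype(float)                # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(40)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2                    # step below 1/L, L = ||A||^2
u_hat = forward_backward(A, b, lam=0.1, alpha=alpha)
```

Note that the fixed step requires knowledge of the Lipschitz constant $L = \|A\|^2$ of $\nabla f$; the linesearch of [14] and the algorithm proposed below remove exactly this requirement.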
Now, inspired by Cruz and Nghia [14], we suggest a new projected forward-backward algorithm for solving the constrained convex minimization problem, which is modeled as follows:
$$\min_{u \in \Omega} f(u) + g(u), \tag{4}$$
where $\Omega$ is a nonempty, closed, and convex subset of H, and f and g are convex functions on H such that f is differentiable on H. We denote by $S^*$ the solution set of (4).
To obtain a better convergence rate, Polyak [15] introduced the heavy ball method for solving smooth convex minimization problems. In the case $g = 0$, Nesterov [16] modified the heavy ball method as stated in Algorithm 3 below.
In this work, motivated by Algorithm 1 [13], Algorithm 2 [14], Algorithm 3 [16], and Algorithm 4 [17], we design a new projected forward-backward algorithm for solving the constrained convex minimization problem (4) and establish a convergence theorem. We also apply our method to image inpainting and provide comparisons and numerical results. Finally, we show the effect of each parameter in the proposed algorithm.
Algorithm 1 (Ref. [13]). Let $\varepsilon \in (0, \min\{1, 1/\alpha\})$ and $u_0 \in \mathbb{R}^N$. For $k \ge 1$, define
$$v_k = u_k - \alpha_k \nabla f(u_k), \qquad u_{k+1} = u_k + \lambda_k\bigl(\operatorname{prox}_{\alpha_k g} v_k - u_k\bigr),$$
where $\alpha_k \in [\varepsilon, 2/\alpha - \varepsilon]$, $\lambda_k \in [\varepsilon, 1]$, and $\alpha$ is the Lipschitz constant of the gradient of f.
Algorithm 2 (Ref. [14]). Let $\sigma > 0$, $\theta \in (0, 1)$, and $\delta \in (0, \tfrac{1}{2})$. Let $u_0 \in \operatorname{dom} g$. For $k \ge 1$, define
$$u_{k+1} = \operatorname{prox}_{\alpha_k g}\bigl(u_k - \alpha_k \nabla f(u_k)\bigr),$$
where $\alpha_k = \sigma\theta^{m_k}$ and $m_k$ is the smallest nonnegative integer satisfying
$$\alpha_k \bigl\|\nabla f(u_{k+1}) - \nabla f(u_k)\bigr\| \le \delta \bigl\|u_{k+1} - u_k\bigr\|.$$
Algorithm 3 (Ref. [16]). Let $u_0, u_1 \in \mathbb{R}^N$ and $\theta_k \in [0, 1)$. For $k \ge 1$, define
$$v_k = u_k + \theta_k(u_k - u_{k-1}), \qquad u_{k+1} = v_k - \alpha_k \nabla f(v_k),$$
where $\alpha_k > 0$. The term $\theta_k(u_k - u_{k-1})$ is called the inertial term.
In 2003, Moudafi and Oliny [17] suggested the inertial forward-backward splitting method as follows:
Algorithm 4 (Ref. [17]). Let $u_0, u_1 \in \mathbb{R}^N$ and $\alpha_k \in [0, 2/L - \varepsilon]$, where L is the Lipschitz constant of $\nabla f$. For $k \ge 1$, define
$$v_k = u_k + \theta_k(u_k - u_{k-1}), \qquad u_{k+1} = \operatorname{prox}_{\alpha_k g}\bigl(v_k - \alpha_k \nabla f(u_k)\bigr),$$
where $\theta_k \in [0, 1)$. Many works have shown that algorithms involving an inertial term enjoy a good rate of convergence [3,18,19,20]. The complexity of some variants of the forward-backward algorithm can be found in the work of Cruz and Nghia [14].
This paper is organized as follows: In Section 2, we recall some preliminaries and mathematical tools. In Section 3, we prove the weak convergence theorem of the proposed method. In Section 4, we provide numerical experiments on image inpainting to validate the convergence theorem and, finally, in Section 5, we give the conclusions of this paper.

2. Preliminaries

Let us review some important definitions and lemmas for proving the convergence theorem. Let H be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Let $h : H \to \overline{\mathbb{R}}$ be a proper, lower semicontinuous (l.s.c.), and convex function. The domain of h is defined by $\operatorname{dom} h := \{u \in H \mid h(u) < +\infty\}$. For any $u \in H$, the orthogonal projection of u onto a nonempty, closed, and convex subset C of H is defined by
$$P_C u := \operatorname*{argmin}_{v \in C} \|u - v\|^2.$$
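Two simple cases in which this projection has a closed form are a box and a Euclidean ball; the following Python sketch is purely illustrative, and the sets and function names are ours rather than the paper's.

```python
import numpy as np

def project_box(u, lo, hi):
    # Projection onto the box {v : lo <= v <= hi} is componentwise clipping.
    return np.clip(u, lo, hi)

def project_ball(u, radius=1.0):
    # Projection onto the Euclidean ball of the given radius centered at 0.
    nrm = np.linalg.norm(u)
    return u if nrm <= radius else (radius / nrm) * u

u = np.array([2.0, -3.0, 0.5])
print(project_box(u, -1.0, 1.0))   # [ 1.  -1.   0.5]
print(project_ball(u))             # u rescaled to unit norm
```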
Lemma 1
([21]). Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Then, for any $u \in H$, we have:
(i) $\langle u - P_C u, a - P_C u \rangle \le 0$ for all $a \in C$;
(ii) $\|P_C u - P_C v\|^2 \le \langle P_C u - P_C v, u - v \rangle$ for all $u, v \in H$;
(iii) $\|P_C u - a\|^2 \le \|u - a\|^2 - \|P_C u - u\|^2$ for all $a \in C$.
The directional derivative of h at u in the direction d is
$$h'(u; d) := \lim_{t \to 0^+} \frac{h(u + td) - h(u)}{t}.$$
Definition 1.
The subdifferential of h at u is defined by
$$\partial h(u) = \bigl\{a \in H : \langle a, v - u \rangle \le h(v) - h(u) \ \text{for all } v \in H\bigr\}.$$
It is known that $\partial h$ is maximal monotone, and if h is differentiable, then $\partial h$ reduces to the gradient of h, denoted by $\nabla h$. Moreover, $\partial h$ is monotone, that is, $\langle \partial h(u) - \partial h(v), u - v \rangle \ge 0$ for all $u, v \in H$. From (4), we also know that
$$u^* \in \operatorname*{argmin}(f + g) \iff u^* = \operatorname{prox}_{cg}(\mathrm{Id} - c\nabla f)(u^*),$$
where $c > 0$ and $\operatorname{prox}_{g} = (\mathrm{Id} + \partial g)^{-1}$.
From [14], we know that
$$a - \operatorname{prox}_{\alpha g}(a) \in \alpha\, \partial g\bigl(\operatorname{prox}_{\alpha g}(a)\bigr) \quad \text{for all } a \in H,\ \alpha > 0. \tag{7}$$
Lemma 2
([22]). The graph of $\partial h$, $\operatorname{Gph}(\partial h) = \{(u, a) \in H \times H : a \in \partial h(u)\}$, is demiclosed, i.e., if a sequence $(u_k, a_k) \subset \operatorname{Gph}(\partial h)$ is such that $(u_k)$ converges weakly to u and $(a_k)$ converges strongly to a, then $(u, a) \in \operatorname{Gph}(\partial h)$.
Lemma 3.
Let $(a_k)$, $(b_k)$, and $(r_k)$ be real positive sequences such that
$$a_{k+1} \le (1 + r_k)a_k + b_k, \quad k \in \mathbb{N}.$$
If $\sum_{k=1}^{\infty} r_k < +\infty$ and $\sum_{k=1}^{\infty} b_k < +\infty$, then $\lim_{k \to +\infty} a_k$ exists.
Lemma 4
([23]). Let $(a_k)$ and $(\theta_k)$ be real positive sequences such that
$$a_{k+1} \le (1 + \theta_k)a_k + \theta_k a_{k-1}, \quad k \in \mathbb{N}.$$
Then, $a_{k+1} \le K \cdot \prod_{j=1}^{k}(1 + 2\theta_j)$, where $K = \max\{a_1, a_2\}$. Moreover, if $\sum_{k=1}^{\infty} \theta_k < +\infty$, then $(a_k)$ is bounded.
Definition 2.
Let S be a nonempty subset of H. A sequence $(u_k)$ in H is said to be quasi-Fejér convergent to S if and only if for all $u \in S$ there exists a positive sequence $(\varepsilon_k)$ such that $\sum_{k=0}^{\infty} \varepsilon_k < +\infty$ and $\|u_{k+1} - u\|^2 \le \|u_k - u\|^2 + \varepsilon_k$ for all $k \in \mathbb{N}$. When $(\varepsilon_k)$ is a null sequence, we say that $(u_k)$ is Fejér convergent to S.
Lemma 5
([21,24]). If ( u k ) is quasi-Fejér convergent to S, then we have:
(i) 
( u k ) is bounded.
(ii) 
If all weak accumulation points of $(u_k)$ are in S, then $(u_k)$ converges weakly to a point in S.

3. Results

In this section, we suggest a new projected forward-backward algorithm and establish its weak convergence. The following conditions are assumed:
(A1)
$f, g : H \to \mathbb{R} \cup \{+\infty\}$ are proper, l.s.c., convex functions on H such that f is differentiable on H.
(A2)
$\nabla f$ is uniformly continuous on bounded subsets of H and is bounded on any bounded subset of H.
Next, we prove the weak convergence theorem for the proposed algorithm.
Theorem 1.
Let $(u_k)$ be defined by Algorithm 5. Suppose $\sum_{k=1}^{\infty} \theta_k < +\infty$. Then, $(u_k)$ converges weakly to a point in $S^*$.
Algorithm 5 Let $\Omega$ be a nonempty, closed, and convex subset of H. Given $\sigma > 0$, $\phi \in (0, 1)$, $\delta \in (0, \tfrac{1}{2})$, and $\theta_k \ge 0$, let $u_0, u_1 \in H$ and define
$$w_k = u_k + \theta_k(u_k - u_{k-1})$$
and
$$v_k = \operatorname{prox}_{\alpha_k g}\bigl(w_k - \alpha_k \nabla f(w_k)\bigr),$$
where $\alpha_k = \sigma\phi^{m_k}$ and $m_k$ is the smallest nonnegative integer such that
$$\alpha_k \bigl\|\nabla f(v_k) - \nabla f(w_k)\bigr\| \le \delta \bigl\|v_k - w_k\bigr\|.$$
Set $u_{k+1}$ by
$$u_{k+1} = P_\Omega(v_k), \quad k \ge 0.$$
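To make the update rule concrete, the following Python sketch implements the steps of Algorithm 5 for user-supplied callables grad_f, prox_g, and project; the function and parameter names, the fixed iteration budget, and the default values are our own choices, not prescriptions from the paper.

```python
import numpy as np

def algorithm5(grad_f, prox_g, project, u0, u1, theta,
               sigma=1.0, phi=0.5, delta=0.4, iters=100):
    # grad_f  : callable, gradient of f
    # prox_g  : callable (x, alpha) -> prox_{alpha g}(x)
    # project : callable, projection P_Omega onto the constraint set
    # theta   : callable k -> inertial parameter theta_k (summable, per Theorem 1)
    u_prev, u = u0, u1
    for k in range(1, iters + 1):
        w = u + theta(k) * (u - u_prev)               # inertial (extrapolation) step
        alpha = sigma
        v = prox_g(w - alpha * grad_f(w), alpha)      # trial forward-backward step
        # Linesearch: shrink alpha by phi until the stepsize condition holds,
        # i.e., alpha_k = sigma * phi**m_k for the smallest such m_k.
        while alpha * np.linalg.norm(grad_f(v) - grad_f(w)) > delta * np.linalg.norm(v - w):
            alpha *= phi
            v = prox_g(w - alpha * grad_f(w), alpha)
        u_prev, u = u, project(v)                     # projected update u_{k+1} = P_Omega(v_k)
    return u
```

For instance, taking project = lambda x: np.clip(x, 0, None) restricts the iterates to the nonnegative orthant, and theta = lambda k: 1.0 / k**2 gives a summable inertial sequence as required by Theorem 1.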

Proof. 
Let $u^*$ be a solution in $S^*$. By Lemma 1 (iii), we obtain
$$\|u_{k+1} - u^*\|^2 = \|P_\Omega(v_k) - u^*\|^2 \le \|v_k - u^*\|^2 - \|P_\Omega(v_k) - v_k\|^2. \tag{12}$$
By the definition of the proximal operator and $v_k$, we have
$$\frac{w_k - v_k}{\alpha_k} - \nabla f(w_k) = \frac{w_k - \operatorname{prox}_{\alpha_k g}\bigl(w_k - \alpha_k \nabla f(w_k)\bigr) - \alpha_k \nabla f(w_k)}{\alpha_k} \in \partial g(v_k).$$
By the convexity of g, we get
$$g(u) - g(v_k) \ge \Bigl\langle \frac{w_k - v_k}{\alpha_k} - \nabla f(w_k),\, u - v_k \Bigr\rangle, \quad \forall u \in H. \tag{13}$$
The convexity of f gives
$$f(u) - f(y) \ge \langle \nabla f(y), u - y \rangle, \quad \forall u, y \in H. \tag{14}$$
Using (13) and (14) with any $u \in H$ and $y = w_k$, we obtain
$$\begin{aligned}
g(u) - g(v_k) + f(u) - f(w_k) &\ge \Bigl\langle \frac{w_k - v_k}{\alpha_k} - \nabla f(w_k),\, u - v_k \Bigr\rangle + \langle \nabla f(w_k), u - w_k \rangle \\
&= \frac{1}{\alpha_k}\langle w_k - v_k, u - v_k \rangle + \langle \nabla f(w_k), v_k - w_k \rangle \\
&= \frac{1}{\alpha_k}\langle w_k - v_k, u - v_k \rangle + \langle \nabla f(v_k), v_k - w_k \rangle + \langle \nabla f(w_k) - \nabla f(v_k), v_k - w_k \rangle \\
&\ge \frac{1}{\alpha_k}\langle w_k - v_k, u - v_k \rangle + \langle \nabla f(v_k), v_k - w_k \rangle - \|\nabla f(w_k) - \nabla f(v_k)\|\,\|v_k - w_k\| \\
&\ge \frac{1}{\alpha_k}\langle w_k - v_k, u - v_k \rangle + \langle \nabla f(v_k), v_k - w_k \rangle - \frac{\delta}{\alpha_k}\|w_k - v_k\|^2,
\end{aligned}$$
where the last inequality follows from the linesearch rule in Algorithm 5.
This yields
$$\langle w_k - v_k, v_k - u \rangle \ge \alpha_k\bigl[f(w_k) - f(u) + g(v_k) - g(u) + \langle \nabla f(v_k), v_k - w_k \rangle\bigr] - \delta\|w_k - v_k\|^2.$$
Since $2\langle w_k - v_k, v_k - u \rangle = \|w_k - u\|^2 - \|w_k - v_k\|^2 - \|v_k - u\|^2$, it follows that
$$\|w_k - u\|^2 - \|v_k - u\|^2 \ge 2\alpha_k\bigl[f(w_k) - f(u) + g(v_k) - g(u) + \langle \nabla f(v_k), v_k - w_k \rangle\bigr] + (1 - 2\delta)\|w_k - v_k\|^2.$$
Since f is convex, we have $f(w_k) - f(v_k) \ge \langle \nabla f(v_k), w_k - v_k \rangle$. This implies that
$$\|w_k - u\|^2 - \|v_k - u\|^2 \ge 2\alpha_k\bigl[f(w_k) - f(u) + g(v_k) - g(u) - f(w_k) + f(v_k)\bigr] + (1 - 2\delta)\|w_k - v_k\|^2.$$
Setting $u = u^*$, we obtain
$$\|v_k - u^*\|^2 \le \|w_k - u^*\|^2 - 2\alpha_k\bigl[(f + g)(v_k) - (f + g)(u^*)\bigr] - (1 - 2\delta)\|w_k - v_k\|^2 \le \|w_k - u^*\|^2. \tag{15}$$
From (12) and (15), we see that
$$\|u_{k+1} - u^*\|^2 \le \|w_k - u^*\|^2 - \|P_\Omega(v_k) - v_k\|^2 \le \|w_k - u^*\|^2. \tag{16}$$
Hence,
$$\|u_{k+1} - u^*\| \le \|w_k - u^*\| \le \|u_k - u^*\| + \theta_k\|u_k - u_{k-1}\| \le \|u_k - u^*\| + \theta_k\bigl(\|u_k - u^*\| + \|u_{k-1} - u^*\|\bigr).$$
This shows that $\|u_{k+1} - u^*\| \le (1 + \theta_k)\|u_k - u^*\| + \theta_k\|u_{k-1} - u^*\|$. By Lemma 4, we have
$$\|u_{k+1} - u^*\| \le K \cdot \prod_{j=1}^{k}(1 + 2\theta_j),$$
where $K = \max\{\|u_1 - u^*\|, \|u_2 - u^*\|\}$. Since $\sum_{k=1}^{\infty}\theta_k < +\infty$, $(u_k)$ is bounded. Thus, $\sum_{k=1}^{\infty}\theta_k\|u_k - u_{k-1}\| < +\infty$. By Lemma 3, $\lim_{k \to \infty}\|u_k - u^*\|$ exists.
Next, we consider
$$\begin{aligned}
\|w_k - u^*\|^2 &= \|u_k + \theta_k(u_k - u_{k-1}) - u^*\|^2 \\
&= \|u_k - u^*\|^2 + 2\theta_k\langle u_k - u^*, u_k - u_{k-1}\rangle + \theta_k^2\|u_k - u_{k-1}\|^2 \\
&\le \|u_k - u^*\|^2 + 2\theta_k\|u_k - u^*\|\,\|u_k - u_{k-1}\| + \theta_k^2\|u_k - u_{k-1}\|^2.
\end{aligned} \tag{17}$$
By (15)–(17), we see that
$$\begin{aligned}
\|u_{k+1} - u^*\|^2 \le{}& \|u_k - u^*\|^2 + 2\theta_k\|u_k - u^*\|\,\|u_k - u_{k-1}\| + \theta_k^2\|u_k - u_{k-1}\|^2 \\
&- 2\alpha_k\bigl[(f + g)(v_k) - (f + g)(u^*)\bigr] - (1 - 2\delta)\|w_k - v_k\|^2 - \|P_\Omega(v_k) - v_k\|^2.
\end{aligned}$$
This gives
$$\begin{aligned}
\|P_\Omega(v_k) - v_k\|^2 + (1 - 2\delta)\|w_k - v_k\|^2 \le{}& \bigl(\|u_k - u^*\|^2 - \|u_{k+1} - u^*\|^2\bigr) \\
&+ 2\theta_k\|u_k - u^*\|\,\|u_k - u_{k-1}\| + \theta_k^2\|u_k - u_{k-1}\|^2.
\end{aligned}$$
Since $\sum_{k=1}^{\infty}\theta_k\|u_k - u_{k-1}\| < +\infty$ and $\lim_{k \to \infty}\|u_k - u^*\|$ exists, it follows that $\|v_k - w_k\| \to 0$ and $\|u_{k+1} - v_k\| \to 0$. It is easily seen that $\|w_k - u_k\| \to 0$ and hence $\|u_{k+1} - u_k\| \to 0$. On the other hand, we see that
$$\begin{aligned}
\|u_k - u^*\|^2 - \|u_{k+1} - u^*\|^2 &\ge 2\alpha_k\bigl[(f + g)(v_k) - (f + g)(u^*)\bigr] + (1 - 2\delta)\|w_k - v_k\|^2 + \|P_\Omega(v_k) - v_k\|^2 \\
&\ge (1 - 2\delta)\|w_k - v_k\|^2 + \|P_\Omega(v_k) - v_k\|^2 \ge 0.
\end{aligned}$$
It follows that $(u_k)$ is Fejér convergent to $S^*$. Thus, we have
$$\begin{aligned}
0 \le 2\alpha_k\bigl[(f + g)(v_k) - (f + g)(u^*)\bigr] &\le \|u_k - u^*\|^2 - \|u_{k+1} - u^*\|^2 \\
&= \bigl(\|u_k - u^*\| - \|u_{k+1} - u^*\|\bigr)\bigl(\|u_k - u^*\| + \|u_{k+1} - u^*\|\bigr) \\
&\le 2M\bigl(\|u_k - u^*\| - \|u_{k+1} - u^*\|\bigr) \le 2M\|u_{k+1} - u_k\|,
\end{aligned}$$
where $M = \sup\{\|u_k - u^*\| : k \in \mathbb{N}\} < +\infty$. Since $(u_k)$ is bounded, the set of its weak accumulation points is nonempty. Take any weak accumulation point $\bar{u}$ of $(u_k)$. Then there is a subsequence $(u_{k_n})$ of $(u_k)$ converging weakly to $\bar{u}$. Moreover, $(w_{k_n})$ also converges weakly to $\bar{u}$. Since $(u_{k_n})$ is bounded and $\|w_{k_n} - v_{k_n}\| \to 0$, from (A2) we obtain
$$\lim_{n \to \infty}\|\nabla f(w_{k_n}) - \nabla f(v_{k_n})\| = 0. \tag{18}$$
Since $v_{k_n} = \operatorname{prox}_{\alpha_{k_n} g}\bigl(w_{k_n} - \alpha_{k_n}\nabla f(w_{k_n})\bigr)$, it follows from (7) that
$$w_{k_n} - \alpha_{k_n}\nabla f(w_{k_n}) - v_{k_n} \in \alpha_{k_n}\,\partial g(v_{k_n}),$$
which yields
$$\frac{w_{k_n} - v_{k_n}}{\alpha_{k_n}} - \nabla f(w_{k_n}) + \nabla f(v_{k_n}) \in \nabla f(v_{k_n}) + \partial g(v_{k_n}) \subset \partial(f + g)(v_{k_n}). \tag{19}$$
Letting $n \to \infty$ in (19), we obtain from (18) and Lemma 2 that $0 \in \partial(f + g)(\bar{u})$. Thus, $\bar{u} \in S^*$. By Lemma 5 (ii), we conclude that $(u_k)$ converges weakly to a point in $S^*$. □

4. Numerical Experiments

In this section, we apply our result to an image inpainting problem, which has the following mathematical model:
$$\min_{u \in \mathbb{R}^{M \times N}} \frac{1}{2}\bigl\|A(u - u_0)\bigr\|_F^2 + \mu\|u\|_*,$$
where $u_0 \in \mathbb{R}^{M \times N}$ ($M < N$) is the observed image, A is a linear map that selects a subset of the entries of an $M \times N$ matrix by setting each unknown entry to 0, $A(u_0)$ is the matrix of known entries, and $\mu > 0$ is a regularization parameter.
In particular, we investigate the image inpainting problem [25,26]:
$$\min_{u \in \mathbb{R}^{M \times N}} \frac{1}{2}\bigl\|P_\Omega(u) - P_\Omega(u_0)\bigr\|_F^2 + \mu\|u\|_*, \tag{21}$$
where $\|\cdot\|_F$ is the Frobenius matrix norm and $\|\cdot\|_*$ is the nuclear matrix norm. Here, we define $P_\Omega$ by
$$\bigl[P_\Omega(u)\bigr]_{ij} = \begin{cases} u_{ij}, & (i, j) \in \Omega, \\ 0, & \text{otherwise}. \end{cases}$$
The optimization problem (21) relates to (4). In fact, let $f(u) = \frac{1}{2}\|P_\Omega(u) - P_\Omega(u_0)\|_F^2$ and $g(u) = \mu\|u\|_*$. Then, $\nabla f(u) = P_\Omega(u) - P_\Omega(u_0)$ is 1-Lipschitz continuous. Moreover, $\operatorname{prox}_g$ is obtained via the singular value decomposition (SVD) [27].
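As an illustration of this last point, the following Python sketch evaluates $\operatorname{prox}_{\tau\|\cdot\|_*}$ by soft-thresholding the singular values (singular value thresholding, in the spirit of [27]); the function name and the sample input are ours.

```python
import numpy as np

def prox_nuclear(X, tau):
    # Prox of tau * ||.||_* : soft-threshold the singular values of X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt   # equivalent to U @ diag(s_shrunk) @ Vt

X = np.random.default_rng(1).standard_normal((6, 4))
Y = prox_nuclear(X, 0.5)         # Y has reduced (or equal) rank and nuclear norm
```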
From Algorithm 5, we obtain Algorithm 6 for image inpainting.
To measure the quality of the restored images, we consider the signal-to-noise ratio (SNR) and the structural similarity index (SSIM) [28], which are given by
$$\mathrm{SNR} = 20\log_{10}\frac{\|u\|_F}{\|u - u_r\|_F}$$
and
$$\mathrm{SSIM} = \frac{(2a_u a_{u_r} + c_1)(2\sigma_{u u_r} + c_2)}{(a_u^2 + a_{u_r}^2 + c_1)(\sigma_u^2 + \sigma_{u_r}^2 + c_2)},$$
where u is the original image, $u_r$ is the restored image, $a_u$ and $a_{u_r}$ are the mean values of u and $u_r$, respectively, $\sigma_u^2$ and $\sigma_{u_r}^2$ are their variances, $\sigma_{u u_r}$ is the covariance of the two images, $c_1 = (0.01L)^2$ and $c_2 = (0.03L)^2$, and L is the dynamic range of the pixel values. SSIM ranges from 0 to 1, and 1 indicates perfect recovery. Next, we analyze the convergence of Algorithm 5, including the effects of the parameters δ, ϕ, and σ assumed in its statement. In the numerical results, the number of iterations is denoted by Iter and the CPU time by CPU.
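A direct transcription of the SNR and the (global) SSIM formulas above into Python might look as follows; it computes a single-window SSIM exactly as written here rather than the windowed SSIM of [28], and the variable names and the default L = 255 are our own assumptions.

```python
import numpy as np

def snr(u, u_r):
    # SNR = 20 * log10(||u||_F / ||u - u_r||_F)
    return 20.0 * np.log10(np.linalg.norm(u) / np.linalg.norm(u - u_r))

def ssim_global(u, u_r, L=255.0):
    # Global SSIM over the whole image, using the formula stated in the text.
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    a_u, a_r = u.mean(), u_r.mean()
    var_u, var_r = u.var(), u_r.var()
    cov = ((u - a_u) * (u_r - a_r)).mean()
    return ((2 * a_u * a_r + c1) * (2 * cov + c2)) / \
           ((a_u ** 2 + a_r ** 2 + c1) * (var_u + var_r + c2))
```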
First, we investigate the effect of δ. The parameters are set as follows:
$$\theta_k = \begin{cases} \dfrac{t_k - 1}{t_{k+1}}, & 1 \le k < N, \\[4pt] \dfrac{1}{2^k}, & \text{otherwise}, \end{cases} \tag{25}$$
where $(t_k)$ is the sequence defined by $t_1 = 1$ and $t_{k+1} = \dfrac{1 + \sqrt{1 + 4t_k^2}}{2}$.
Algorithm 6 Forward-backward algorithm for image inpainting.
  Step 1: Input $u_0$, $u_1$, σ, ϕ, and δ.
  Step 2: Compute
$$w_k = u_k + \theta_k(u_k - u_{k-1}).$$
  Step 3: (Linesearch rule) Set $\alpha_k = \sigma$ and compute
$$A_k = \alpha_k\bigl\|\nabla f\bigl(\operatorname{prox}_{\alpha_k g}(w_k - \alpha_k\nabla f(w_k))\bigr) - \nabla f(w_k)\bigr\|_F, \qquad B_k = \delta\bigl\|\operatorname{prox}_{\alpha_k g}(w_k - \alpha_k\nabla f(w_k)) - w_k\bigr\|_F.$$
  While $A_k > B_k$: set $\alpha_k = \phi\,\alpha_k$ and recompute $A_k$ and $B_k$. End while.
  Step 4: Compute
$$v_k = \operatorname{prox}_{\alpha_k g}\bigl(w_k - \alpha_k\nabla f(w_k)\bigr)$$
  and
$$u_{k+1} = P_\Omega(v_k), \quad k \ge 0.$$
  Set $k = k + 1$ and go to Step 2.
From Table 1, we observe that the SNR and SSIM of Algorithm 5 increase as the parameter δ approaches 0.5. Moreover, the CPU time of Algorithm 5 decreases as δ tends to 0.5.
Figure 1, Figure 2, Figure 3 and Figure 4 show the SNR and the reconstructed images for each image size.
Next, we discuss the effect of ϕ . The numerical experiments are given in Table 2.
From Table 2, we observe that the SNR, SSIM, and CPU time of Algorithm 5 increase as the parameter ϕ approaches 0.5.
Next, we discuss the effect of σ. The numerical experiments are given in Table 3. From Table 3, we observe that the SNR, SSIM, and CPU time of Algorithm 5 increase as σ increases. The SNR and the reconstructed images are shown in Figure 3, Figure 4 and Figure 5.
The original images are shown in Figure 6, and the corresponding reconstructed images are shown in Figure 3 and Figure 4.
Now, we present the performance of Algorithm 5 and compare it with the projected versions of Algorithm 1 [13] and Algorithm 2 [14]. The initial points $u_0$ and $u_1$ are chosen to be zero, and we set $\alpha_k = 1/\|A\|^2$ and $\lambda_k = 0.09$ in Algorithm 1. Let σ = 0.1, δ = 0.13, ϕ = 0.5, and $\theta_k$ be defined by (25) in Algorithms 2 and 5, respectively. The numerical results are shown in Table 4.
From Table 4, we see that Algorithm 5 outperforms Algorithms 1 and 2 in terms of SNR and SSIM in all cases.
The inpainted images at the 260th and 310th iterations are shown in Figure 7 and Figure 8, and the corresponding SNR curves are shown in Figure 9.

5. Conclusions

In this research, we investigated an inertial projected forward-backward algorithm with a linesearch for constrained minimization problems. Weak convergence was proved under suitable control conditions. The proposed algorithm does not require the Lipschitz constant of the gradient of the smooth function. We applied our results to image inpainting and presented the effects of all parameters assumed in our method.
For our future research, we aim to find a new linesearch technique that does not require the Lipschitz continuity assumption on the gradient of the function. We note that the proposed algorithm depends on the computation of the projection, which is not easy to evaluate in some cases. It would be interesting to construct new algorithms that do not involve the projection.

Author Contributions

Funding acquisition and supervision, S.S.; writing—original draft preparation, K.K.; writing—review and editing and software, P.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Chiang Mai University and Thailand Science Research and Innovation under the project IRN62W007.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers and the editor for their valuable comments, which helped improve the original manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bauschke, H.H.; Borwein, J.M. Dykstra's alternating projection algorithm for two sets. J. Approx. Theory 1994, 79, 418–443.
2. Chen, G.H.; Rockafellar, R.T. Convergence rates in forward–backward splitting. SIAM J. Optim. 1997, 7, 421–444.
3. Cholamjiak, W.; Cholamjiak, P.; Suantai, S. An inertial forward–backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 2018, 20, 1–7.
4. Dong, Q.; Jiang, D.; Cholamjiak, P.; Shehu, Y. A strong convergence result involving an inertial forward–backward algorithm for monotone inclusions. J. Fixed Point Theory Appl. 2017, 19, 3097–3118.
5. Bauschke, H.H.; Burachik, R.S.; Combettes, P.L.; Elser, V.; Luke, D.R.; Wolkowicz, H. (Eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: Berlin/Heidelberg, Germany, 2011.
6. Kankam, K.; Pholasa, N.; Cholamjiak, P. Hybrid forward-backward algorithms using linesearch rule for minimization problem. Thai J. Math. 2019, 17, 607–625.
7. Dunn, J.C. Convexity, monotonicity, and gradient processes in Hilbert space. J. Math. Anal. Appl. 1976, 53, 145–158.
8. Wang, C.; Xiu, N. Convergence of the gradient projection method for generalized convex minimization. Comput. Optim. Appl. 2000, 16, 111–120.
9. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
10. Güler, O. On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29, 403–419.
11. Martinet, B. Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Informat. Rech. Opérationnelle 1970, 4, 154–158.
12. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
13. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
14. Bello Cruz, J.Y.; Nghia, T.T. On the convergence of the forward–backward splitting method with linesearches. Optim. Methods Softw. 2016, 31, 1209–1238.
15. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–7.
16. Nesterov, Y.E. A method for solving the convex programming problem with convergence rate O(1/k^2). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
17. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454.
18. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
19. Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2018, 14, 1595.
20. Shehu, Y.; Cholamjiak, P. Iterative method with inertial for variational inequalities in Hilbert spaces. Calcolo 2019, 56, 1–21.
21. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011; Volume 408.
22. Burachik, R.S.; Iusem, A.N. Enlargements of monotone operators. In Set-Valued Mappings and Enlargements of Monotone Operators; Springer: Boston, MA, USA, 2008; pp. 161–220.
23. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378.
24. Iusem, A.N.; Svaiter, B.F.; Teboulle, M. Entropy-like proximal methods in convex programming. Math. Oper. Res. 1994, 19, 790–814.
25. Cui, F.; Tang, Y.; Yang, Y. An inertial three-operator splitting algorithm with applications to image inpainting. arXiv 2019, arXiv:1904.11684.
26. Davis, D.; Yin, W. A three-operator splitting scheme and its optimization applications. Set-Valued Var. Anal. 2017, 25, 829–858.
27. Cai, J.F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982.
28. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Graph of the number of iterations and SNR for the parameter δ . (a) Number of iterations and SNR with N = 227, M = 340; (b) number of iterations and SNR with N = 480, M = 360.
Figure 2. Graph of number of iterations and SNR for the parameter ϕ . (a) Number of iterations and SNR with N = 227, M = 340; (b) number of iterations and SNR with N = 480, M = 360.
Figure 3. The painted image and restored images with the real image N = 227, M = 340. (a) The painted image; (b) restored images in Table 1 for δ = 0.5 (SNR = 22.8626, SSIM = 0.9476); (c) restored images in Table 2 for ϕ = 0.5 (SNR = 23.0594, SSIM = 0.9479); (d) restored images in Table 3 for σ = 5 (SNR = 22.9865, SSIM = 0.9477).
Figure 4. The painted image and restored images with the real image N = 480, M = 360. (a) The painted image; (b) restored images in Table 1 for δ = 0.5 (SNR = 26.3994, SSIM = 0.9210); (c) restored images in Table 2 for ϕ = 0.5 (SNR = 26.4002, SSIM = 0.9210); (d) restored images in Table 3 for σ = 0.5 (SNR = 26.4084, SSIM = 0.9210).
Figure 5. Graph of the number of iterations and SNR for the parameter σ . (a) Number of iterations and SNR with N = 227, M = 340; (b) number of iterations and SNR with N = 480, M = 360.
Figure 6. The original images. (a) The original image of size N = 227, M = 340; (b) the original image of size of N = 480, M = 360.
Figure 7. The painted image and restored images. (a) The painted image; (b) restored images in Algorithm 1 (SNR = 19.8276, SSIM = 0.9278); (c) restored images in Algorithm 2 (SNR = 20.4704, SSIM = 0.9402); (d) restored images in Algorithm 5 (SNR = 22.9158, SSIM = 0.9477).
Figure 8. The painted image and restored images. (a) The painted image; (b) restored images in Algorithm 1 (SNR = 25.7210, SSIM = 0.9363); (c) restored images in Algorithm 2 (SNR = 26.2362, SSIM = 0.9373); (d) restored images in Algorithm 5 (SNR = 27.6581, SSIM = 0.9400).
Figure 9. The SNR value and number of iterations for all cases. (a) Graph of SNR value and number of iterations of Figure 7; (b) graph of SNR value and number of iterations of Figure 8.
Table 1. The convergence results of Algorithm 5 for each δ. Set: σ = 1 and ϕ = 0.4.

           N = 227, M = 340 (Iter = 60)      N = 480, M = 360 (Iter = 90)
  δ        SNR       SSIM     CPU            SNR       SSIM     CPU
  0.5      22.8626   0.9476   2.2656         26.3994   0.9210   11.5229
  0.1      21.8228   0.9437   4.0916         26.3982   0.9209   21.7439
  0.05     14.7249   0.9165   5.0465         26.3974   0.9181   27.2411
  0.01      9.9919   0.8886   6.7001         19.2859   0.8939   37.8360
  0.001    −0.7012   0.3889   8.0271          5.9239   0.6228   47.7375
Table 2. The convergence results of Algorithm 5 for each ϕ. Given: δ = 0.4 and σ = 1.

           N = 227, M = 340 (Iter = 60)      N = 480, M = 360 (Iter = 90)
  ϕ        SNR       SSIM     CPU            SNR       SSIM     CPU
  0.5      23.0594   0.9479   3.0865         26.4002   0.9210   17.0693
  0.1      21.2594   0.9428   2.2104         26.3604   0.9205   11.4325
  0.05     20.2735   0.9391   2.1874         26.5203   0.9207   11.5494
  0.01     10.9957   0.8918   2.2586         25.8063   0.9114   11.2883
  0.005     9.6387   0.8658   2.2449         22.4065   0.9095   11.3964
Table 3. The convergence results of Algorithm 5 for each σ. Set: δ = 0.4 and ϕ = 0.4.

           N = 227, M = 340 (Iter = 60)      N = 480, M = 360 (Iter = 90)
  σ        SNR       SSIM     CPU            SNR       SSIM     CPU
  5        22.9865   0.9477   3.9889         26.3946   0.9209   21.2420
  1        22.8626   0.9476   2.2848         26.3994   0.9210   11.1571
  0.5      22.8327   0.9474   2.3458         26.4084   0.9210   11.2728
  0.05     20.2735   0.9391   1.3531         26.5230   0.9207    6.0226
  0.005     9.6387   0.8638   1.3434         22.4065   0.9095    6.2738
Table 4. Computational results for solving (21).

                N = 227, M = 340        N = 480, M = 360
                SNR        SSIM         SNR        SSIM
  Algorithm 1   19.8276    0.9378       25.7210    0.9363
  Algorithm 2   20.4704    0.9402       26.2362    0.9373
  Algorithm 5   22.9158    0.9477       27.6581    0.9400