Article

A Simplified Convex Optimization Model for Image Restoration with Multiplicative Noise

1 Department of Mathematics, Nanchang University, Nanchang 330031, China
2 School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
J. Imaging 2023, 9(10), 229; https://doi.org/10.3390/jimaging9100229
Submission received: 14 September 2023 / Revised: 11 October 2023 / Accepted: 16 October 2023 / Published: 20 October 2023
(This article belongs to the Section Image and Video Processing)

Abstract

In this paper, we propose a novel convex variational model for image restoration with multiplicative noise. To preserve the edges in the restored image, our model incorporates a total variation regularizer. Additionally, we impose an equality constraint on the data fidelity term, which simplifies the model selection process and promotes sparsity in the solution. We adopt the alternating direction method of multipliers (ADMM) to solve the model efficiently. To validate the effectiveness of our model, we conduct numerical experiments on both real and synthetic noise images and compare its performance with existing methods. The experimental results demonstrate the superiority of our model in terms of PSNR and visual quality.

1. Introduction

Image restoration is a fundamental problem in image processing that focuses on recovering the original image from a degraded version. It encompasses various problems such as image denoising, image deblurring, and image inpainting. For image denoising, different types of noise can corrupt the image, including additive Gaussian noise, Poisson noise, and impulse noise. These noise types are commonly encountered and well studied in the field of image processing, and several restoration algorithms have been developed to address them. Multiplicative noise, however, is distinct from these commonly encountered noise types: it deviates from the traditional additive assumption and follows a different statistical distribution, such as the Gamma or Rayleigh distribution. Multiplicative noise arises in various imaging scenarios, such as synthetic aperture radar (SAR) [1], ultrasound [2], laser images [3], and other coherent imaging systems [4]. The mathematical model for image degradation caused by blur and multiplicative noise can be expressed as:
$$ f = (Au) \circ \eta, \tag{1} $$
where $f \in \mathbb{R}^m$ represents the observed blurred and noisy image, $A \in \mathbb{R}^{m \times m}$ is the blurring operator, and $u \in \mathbb{R}^m$ is the original image to be recovered. The symbol $\circ$ denotes the element-wise product of two vectors, and $\eta$ represents the multiplicative noise. When the blurring operator is the identity ($A = I$), the problem reduces to multiplicative noise denoising. Without loss of generality, it is assumed that both $u$ and $\eta$ are positive, which implies that $f$ is also positive. The objective of this inverse problem is to reconstruct the original image distorted by the multiplicative noise $\eta$. However, the problem is ill-posed: a unique, stable solution does not exist. Consequently, regularization techniques are necessary to obtain a reasonable approximation of the original image.
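As a quick illustration, the degradation model (1) can be simulated in a few lines of NumPy. This sketch assumes Gamma-distributed noise with shape $L$ and scale $1/L$ (unit mean, variance $1/L$, the setting used in the experiments later) and defaults to $A = I$; the function and parameter names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(u, L=10, A=None):
    """Simulate f = (A u) ∘ η with Gamma multiplicative noise of unit mean
    (shape L, scale 1/L, so the noise variance is 1/L); A=None means A = I."""
    Au = u if A is None else A @ u
    eta = rng.gamma(shape=L, scale=1.0 / L, size=np.shape(Au))
    return Au * eta

u = np.full(10000, 100.0)      # a flat "image" of intensity 100
f = degrade(u, L=20)
print(f.min() > 0, abs(f.mean() - 100.0) < 1.0)
```

Because the noise has unit mean, the degraded image stays positive and its average intensity remains close to that of the original.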
The problem of multiplicative noise has received considerable attention in past research. Various methods have been proposed in the literature to address this issue, such as the filter-based method [5], wavelet-based method [6,7], hybrid methods [8], and variational-based methods. Variational-based methods are a class of image restoration techniques that use variational calculus to formulate and solve the inverse problem of recovering a clean image from a noisy one. They often involve minimizing an energy functional that consists of a data fidelity term and a regularization term. The data fidelity term measures how well the restored image matches the observed image, whereas the regularization term imposes some prior knowledge or smoothness constraints on the restored image. To address the problem of removing multiplicative noise, Rudin, Lions, and Osher [9] introduced a variational model that utilizes the mean and variance information of the noise. This model, commonly referred to as the RLO model, innovatively incorporates the TV regularity term to preserve the sharpness of the recovered image edges. The RLO model is defined as follows:
$$ \min_{u} \|u\|_{TV} \quad \text{s.t.} \quad \operatorname{mean}\!\left(\frac{f}{Au}\right) = 1, \quad \left\|\frac{f}{Au} - 1\right\|_2^2 = \theta^2, \tag{2} $$
where $\theta^2$ represents the variance of the noise. The RLO model considers only the mean and variance of the noise, not its distribution. Based on the property that multiplicative noise follows a Gamma distribution with mean 1, Aubert and Aujol [10] proposed another variational model using maximum a posteriori (MAP) estimation to estimate the original image from the noisy one, widely known as the AA model:
$$ \min_{u} \left\langle \log(Au) + \frac{f}{Au},\, 1 \right\rangle + \lambda \|u\|_{TV}, \tag{3} $$
where the first term in the AA model is the data fidelity term, the second term is the TV regularization term, and the regularization parameter $\lambda$ balances the trade-off between them. The AA model is a non-convex optimization problem, which makes finding its global solution challenging.
To deal with the non-convexity of the AA model, several algorithms have been proposed in the literature. For example, Zeng et al. [11] used a difference of convex algorithm (DCA) that decomposes the energy function of the non-convex AA model into a difference of two convex functions and alternately solves the resulting convex subproblems. Bioucas-Dias and Figueiredo employed the split Bregman (variable splitting) method to solve the AA model [12,13]. Woo [14] developed the proximal linearized alternating direction (PLAD) algorithm for the same purpose, and some papers focus on selecting spatially adaptive regularization parameters [15]. Moreover, some methods convert the multiplicative noise into additive noise, such as the Box-Cox transformation [16] and the ADMM algorithm of [17].
The RLO model and the AA model are both non-convex. Solving a non-convex model is challenging because the existence and uniqueness of solutions are not guaranteed. The authors of the AA model prove the existence of a solution, but give only a sufficient condition for uniqueness, namely that $u$ satisfies $0 < u < 2f$; the AA model is strictly convex in this case and thus has a unique solution. However, a general proof of uniqueness for the AA model is still lacking. Several methods have been developed to cope with the non-convexity of multiplicative noise models, especially the AA model.
By applying the logarithmic transformation $z = \log(u)$ to the AA model, one obtains the logarithmic model, which has been investigated by various authors [18,19,20,21]. Shi and Osher [18] handled the non-convexity of multiplicative noise by a log transform and obtained a general form that encompasses different models for different parameters; this is called the SO model in the literature. Huang et al. [19] also introduced a logarithmic model known as the HNW model, which differs from the SO model in that it applies the logarithmic transform directly to the AA model. Since the logarithmic model is convex, a unique solution exists; after obtaining it, an exponential transformation is applied to recover the desired image. Apart from the logarithmic transformation of $u$, taking the $m$th root of $u$ can also convert the AA model into a convex one; this approach is discussed, for example, in [22,23,24].
The logarithmic model and the $m$th root model are only applicable to denoising and cannot handle deblurring. Dong and Zeng [25] addressed the non-convexity of the AA model by introducing a quadratic penalty term, enabling it to handle deblurring tasks; this model is more versatile than the logarithmic and $m$th root models, which are limited to denoising. In another approach, Steidl and Teuber [26] utilized the classical I-divergence to address multiplicative noise and proposed the I-divergence model, and Woo [14] proposed a proximal linearized alternating direction method for solving the logarithmic and I-divergence models. In contrast, Zhao et al. [27] proposed a new model based on the idea of decoupling: the image is separated from the noise $\eta$ on the right-hand side of (1) by incorporating the noise into the left-hand side of the equation. We refer to this model as the ZWN model:
$$ \min_{w,u} \frac{1}{2}\|w - \mu e\|_2^2 + \alpha_1 \|Fw - Au\|_1 + \alpha_2 \|u\|_{TV}, \tag{4} $$
where $\mu$ is the mean value of $w$, $e$ is the all-ones vector, and $\alpha_1$ and $\alpha_2$ are two positive regularization parameters. Since the ZWN model does not use information about the noise distribution, it can be applied to multiplicative noise problems with distributions other than the Gamma distribution. Furthermore, Wang et al. [28] enhanced the ZWN model and introduced an l-curve method for selecting the regularization parameters.
The ZWN model has two regularization parameters, which control the trade-off between data fidelity and model complexity, but selecting appropriate values for them can be challenging. To address this issue, we propose replacing the $\ell_1$ data fidelity term of the ZWN model with the equality constraint $Fw = Au$. This promotes sparsity of the vector $Fw - Au$, keeping $Fw$ and $Au$ close; sparsity is desirable because it reduces noise and preserves edges in the restored images. Moreover, this approach reduces the number of regularization parameters from two to one, simplifying the model selection process. Numerical experiments on both real and synthetic noise images indicate that our model generally outperforms the ZWN model in terms of image quality.
The rest of this paper is organized as follows. In Section 2, we review the derivation of the ZWN model and its advantages and limitations. In Section 3, we present a new convex model based on reducing the number of regularization parameters in the ZWN model and propose numerical methods for solving it. In Section 4, we show the results of the numerical experiments on both real and synthetic noise images that are affected by multiplicative noise, and compare our model with the ZWN model and other existing methods. Finally, some conclusions are given.

2. Review of the ZWN Model

In this section, we briefly review the ZWN model, which was proposed by Zhao et al. [27]. The main idea of the ZWN model is to decouple u and η in (1) by rewriting it as:
$$ Fw = Au, \tag{5} $$
where $F$ is the diagonal matrix with diagonal entries $f_i$, and $w$ is the vector with entries $w_i = 1/\eta_i$. In this way, $u$ and $w$ are decoupled, and so are $u$ and $\eta$. Therefore, it is natural to consider the following optimization problem with the $\ell_1$ norm as the data fidelity term and the TV norm as the regularization:
$$ \min_{w,u} \|Fw - Au\|_1 + \alpha \|u\|_{TV}, \tag{6} $$
where $\alpha$ is a positive parameter that balances the two terms. However, this model admits a trivial solution $u = \psi \mathbf{1}$ and $w = \psi F^{-1} A \mathbf{1}$ for any scalar $\psi$, at which the objective function in (6) attains its minimum value of zero. To rule out such solutions, Zhao et al. added the quadratic term $\frac{1}{2}\|w - \mu e\|_2^2$ to the model, resulting in (4).
Next, we review total variation (TV), a popular regularization term that exploits the sparsity of image gradients. Anisotropic and isotropic total variation are two variants of TV regularization: the anisotropic version penalizes horizontal and vertical gradients separately, whereas the isotropic version combines them into a rotation-invariant term. In detail, the anisotropic total variation is defined by:
$$ \|u\|_{TV} = \|Du\|_1 = \sum_{i,j} |D_h u_{i,j}| + |D_v u_{i,j}|, $$
where D h and D v are finite difference operators in horizontal and vertical directions, respectively. The isotropic total variation is defined by:
$$ \|u\|_{TV} = \|Du\|_2 = \sum_{i,j} \sqrt{(D_h u_{i,j})^2 + (D_v u_{i,j})^2}. $$
TV regularization can preserve edges and remove noise in an image, but it may also introduce some artifacts, such as staircasing and the loss of fine details. To overcome these drawbacks, some variants of TV regularization have been proposed, such as weighted TV [29], nonlocal TV [30], and higher-order TV [31], etc.
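The two discrete TV variants can be computed directly. The sketch below is our own illustration; it assumes forward differences with periodic boundary conditions, consistent with the FFT-based solver used later in the paper.

```python
import numpy as np

def total_variation(u, isotropic=True):
    """Anisotropic or isotropic discrete TV with forward differences
    and periodic boundary conditions."""
    dh = np.roll(u, -1, axis=1) - u   # horizontal differences D_h u
    dv = np.roll(u, -1, axis=0) - u   # vertical differences D_v u
    if isotropic:
        return np.sqrt(dh**2 + dv**2).sum()
    return (np.abs(dh) + np.abs(dv)).sum()

u = np.zeros((8, 8))
u[:, 4:] = 1.0                        # vertical step edge
print(total_variation(u, True), total_variation(u, False))
```

For an axis-aligned edge the two variants coincide; they differ on diagonal structures, where the isotropic version penalizes less because it is rotation-invariant.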

3. The Proposed Model and Main Algorithm

This section outlines the main methodology of this study. The ZWN model is a convex variational model designed to restore images corrupted by multiplicative noise. It consists of three main components: the variance term, which ensures the strict convexity of the model under a mild condition; the data fidelity term, which matches the multiplicative noise model; and the regularization term, which imposes total variation regularization on the image. The positive parameters $\alpha_1$ and $\alpha_2$ balance the three terms, but selecting suitable values for them can be challenging and time-consuming. To address this issue and minimize the number of regularization parameters, we propose a new model that enhances the sparsity of the vector $Fw - Au$ by imposing the equality constraint $Fw = Au$. This approach not only improves image quality and edge preservation, but also simplifies the selection of model parameters. The proposed model is formulated as follows:
$$ \min_{w,u} \frac{1}{2}\|w - \mu e\|_2^2 + \alpha \|u\|_{TV} \quad \text{s.t.} \quad Fw = Au, \tag{7} $$
where $\alpha$ is a positive regularization parameter controlling the balance between the two terms of the objective function, and $\mu$ is the mean value of $w$. Note that if $A = I$, our model reduces to a convex model for multiplicative denoising. Our model replaces the second term of the ZWN model, the $\ell_1$-norm, with an equality constraint, so it has only one regularization parameter, one fewer than the ZWN model.

Main Algorithm

To solve the proposed model (7), we use the well-known ADMM. By introducing an auxiliary variable $p$, (7) can be written as:
$$ \min_{w,u,p} \frac{1}{2}\|w - \mu e\|_2^2 + \alpha \|p\|_\beta \quad \text{s.t.} \quad Fw = Au, \ Du = p, \tag{8} $$
where $\beta = 1$ or $2$.
The augmented Lagrangian function associated with the problem (8) is defined as follows:
$$ L_{\rho_1,\rho_2}(w,u,p;\lambda_1,\lambda_2) = \frac{1}{2}\|w - \mu e\|_2^2 + \alpha\|p\|_\beta + \langle \lambda_1, Fw - Au \rangle + \langle \lambda_2, Du - p \rangle + \frac{\rho_1}{2}\|Fw - Au\|_2^2 + \frac{\rho_2}{2}\|Du - p\|_2^2, \tag{9} $$
where $\lambda_1$ and $\lambda_2$ are the Lagrange multipliers and $\rho_1 > 0$, $\rho_2 > 0$ are the penalty parameters. The ADMM then leads to the following subproblems:
$$
\begin{aligned}
w^{k+1} &= \arg\min_w \frac{1}{2}\|w - \mu e\|_2^2 + \langle \lambda_1^k, Fw - Au^k \rangle + \frac{\rho_1}{2}\|Fw - Au^k\|_2^2, \\
u^{k+1} &= \arg\min_u \langle \lambda_1^k, Fw^{k+1} - Au \rangle + \langle \lambda_2^k, Du - p^k \rangle + \frac{\rho_1}{2}\|Fw^{k+1} - Au\|_2^2 + \frac{\rho_2}{2}\|Du - p^k\|_2^2, \\
p^{k+1} &= \arg\min_p \alpha\|p\|_\beta + \langle \lambda_2^k, Du^{k+1} - p \rangle + \frac{\rho_2}{2}\|Du^{k+1} - p\|_2^2, \\
\lambda_1^{k+1} &= \lambda_1^k + \rho_1 \left(Fw^{k+1} - Au^{k+1}\right), \\
\lambda_2^{k+1} &= \lambda_2^k + \rho_2 \left(Du^{k+1} - p^{k+1}\right).
\end{aligned} \tag{10}
$$
In the following, we will illustrate how to solve the subproblems in the ADMM scheme (10).
For the w-subproblem, since the objective with respect to $w$ is differentiable, we set its derivative to zero:
$$ 0 = w^{k+1} - \mu e + F^T \lambda_1^k + \rho_1 F^T \left(Fw^{k+1} - Au^k\right). \tag{11} $$
We can then simplify the resulting equation to obtain the solution for the w-subproblem:
$$ w_i^{k+1} = \frac{\left(\mu e - F^T \lambda_1^k + \rho_1 F^T A u^k\right)_i}{\left(I + \rho_1 F^T F\right)_{i,i}}, \quad i = 1, \ldots, m, \tag{12} $$
where $F^T$ denotes the transpose of $F$, and the superscript $k$ indicates the $k$th iteration of the ADMM algorithm. The subscript $i$ of a vector refers to its $i$th element, whereas the subscript $i,i$ refers to the entry in row $i$ and column $i$ of a matrix. Since the vectorized image has $m$ pixels, the index $i$ is bounded by $m$. Here, $I$ is the $m \times m$ identity matrix.
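Because $F$ is diagonal, the update (12) is a purely elementwise division. The sketch below (our own illustration, with $Au^k$ passed in as a precomputed vector) also verifies the optimality condition (11) numerically:

```python
import numpy as np

def w_update(f, Au, lam1, mu, rho1):
    """Closed-form w-subproblem (12): elementwise, since F = diag(f)
    makes F^T F diagonal with entries f_i^2."""
    return (mu - f * lam1 + rho1 * f * Au) / (1.0 + rho1 * f**2)

# sanity check against the optimality condition (11)
rng = np.random.default_rng(1)
f, Au, lam1 = rng.random(5) + 0.5, rng.random(5), rng.random(5)
mu, rho1 = 1.0, 2.0
w = w_update(f, Au, lam1, mu, rho1)
grad = (w - mu) + f * lam1 + rho1 * f * (f * w - Au)
print(np.abs(grad).max())       # numerically zero
```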
For the u-subproblem, we apply the same method as for the w-subproblem to derive an equation and simplify it, resulting in the following equation:
$$ \left(\rho_1 A^T A + \rho_2 D^T D\right) u^{k+1} = A^T \lambda_1^k - D^T \lambda_2^k + \rho_1 A^T F w^{k+1} + \rho_2 D^T p^k. \tag{13} $$
Under the periodic boundary conditions for the image u, both the matrices A T A and D T D are block circulant, with circulant blocks that can be diagonalized using a 2D fast Fourier transform (FFT). Therefore, Equation (13) can be efficiently solved using one FFT and one inverse FFT, as follows:
$$ u^{k+1} = \mathcal{F}^{-1}\left( \frac{\mathcal{F}\left(A^T \lambda_1^k - D^T \lambda_2^k + \rho_1 A^T F w^{k+1} + \rho_2 D^T p^k\right)}{\rho_1 |\mathcal{F}(A)|^2 + \rho_2 |\mathcal{F}(D)|^2} \right), \tag{14} $$
where $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ denote the FFT and inverse FFT, respectively.
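Under periodic boundary conditions, the solve in (14) costs one FFT and one inverse FFT. The sketch below is our own illustration with $A = I$ and forward-difference operators (the helper `psf2otf` is our own, not a library function); it verifies the inversion by applying the operator in the Fourier domain and recovering a random image:

```python
import numpy as np

def psf2otf(kernel, shape):
    """Zero-pad a small kernel, center it at the origin, and take its 2D FFT."""
    pad = np.zeros(shape)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def u_update(rhs, otf_A, otf_Dh, otf_Dv, rho1, rho2):
    """Solve (rho1 A^T A + rho2 D^T D) u = rhs with one FFT / inverse-FFT
    pair, valid under periodic boundary conditions (Equation (14))."""
    denom = rho1 * np.abs(otf_A)**2 + rho2 * (np.abs(otf_Dh)**2 + np.abs(otf_Dv)**2)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))

n = 16
otf_A = psf2otf(np.array([[1.0]]), (n, n))           # A = I (denoising)
otf_Dh = psf2otf(np.array([[-1.0, 1.0]]), (n, n))    # horizontal difference
otf_Dv = psf2otf(np.array([[-1.0], [1.0]]), (n, n))  # vertical difference

u_true = np.random.default_rng(2).random((n, n))
rho1, rho2 = 1.0, 0.5
# apply (rho1 A^T A + rho2 D^T D) to u_true in the Fourier domain, then invert
denom = rho1 * np.abs(otf_A)**2 + rho2 * (np.abs(otf_Dh)**2 + np.abs(otf_Dv)**2)
rhs = np.real(np.fft.ifft2(np.fft.fft2(u_true) * denom))
u_rec = u_update(rhs, otf_A, otf_Dh, otf_Dv, rho1, rho2)
print(np.abs(u_rec - u_true).max())   # numerically zero
```

With $A = I$ the denominator is bounded below by $\rho_1 > 0$, so the division is always well defined.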
For the p-subproblem, we will divide it into two cases:
When β = 2 , we have:
$$ p_i^{k+1} = \max\left\{\left\|\left(Du^{k+1} + \frac{\lambda_2^k}{\rho_2}\right)_i\right\|_2 - \frac{\alpha}{\rho_2},\, 0\right\} \frac{\left(Du^{k+1} + \frac{\lambda_2^k}{\rho_2}\right)_i}{\left\|\left(Du^{k+1} + \frac{\lambda_2^k}{\rho_2}\right)_i\right\|_2}. \tag{15} $$
When β = 1 , we have:
$$ p_i^{k+1} = \max\left\{\left|\left(Du^{k+1} + \frac{\lambda_2^k}{\rho_2}\right)_i\right| - \frac{\alpha}{\rho_2},\, 0\right\} \operatorname{sign}\left(\left(Du^{k+1} + \frac{\lambda_2^k}{\rho_2}\right)_i\right). \tag{16} $$
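Both updates are standard shrinkage operators applied to $q = Du^{k+1} + \lambda_2^k/\rho_2$ with threshold $\alpha/\rho_2$: elementwise soft-thresholding for $\beta = 1$, and grouped shrinkage of the (horizontal, vertical) gradient pair for $\beta = 2$. A sketch (function names are our own):

```python
import numpy as np

def shrink_aniso(q, t):
    """Soft-thresholding: the beta = 1 update (16)."""
    return np.sign(q) * np.maximum(np.abs(q) - t, 0.0)

def shrink_iso(qh, qv, t):
    """Grouped (isotropic) shrinkage: the beta = 2 update (15), applied
    to the horizontal/vertical gradient pair at each pixel."""
    norm = np.sqrt(qh**2 + qv**2)
    scale = np.maximum(norm - t, 0.0) / np.where(norm > 0, norm, 1.0)
    return qh * scale, qv * scale

q = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(shrink_aniso(q, 1.0))   # entries inside the threshold become zero
```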
Finally, we present Algorithm 1, which outlines the ADMM method for solving the proposed model (7).
Algorithm 1: ADMM method for solving the proposed model (7)
 Input: $\rho_1 > 0$, $\rho_2 > 0$, $\alpha > 0$
 Initialize: $u^0 = f$, $\lambda_1^0$, $\lambda_2^0$, $p^0$
  1: while $\|u^k - u^{k+1}\|_2 / \|u^k\|_2 > \varepsilon$ do
  2:   Calculate $w^{k+1}$ by (12),
  3:   Calculate $u^{k+1}$ by (14),
  4:   Calculate $p^{k+1}$ by (15) or (16),
  5:   $\lambda_1^{k+1} = \lambda_1^k + \rho_1 (Fw^{k+1} - Au^{k+1})$,
  6:   $\lambda_2^{k+1} = \lambda_2^k + \rho_2 (Du^{k+1} - p^{k+1})$,
  7:   $k = k + 1$,
  8: end while
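Putting the pieces together, the following is a minimal NumPy sketch of Algorithm 1 for the pure denoising case ($A = I$, anisotropic TV, $\beta = 1$). The choices $\mu = 1$ (reasonable here because the noise has unit mean, so $w = 1/\eta$ has mean close to 1) and the default $\rho$, $\alpha$ values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def admm_denoise(f, alpha=0.2, rho1=1.0, rho2=1.0, mu=1.0,
                 eps=5e-4, max_iter=300):
    """Algorithm 1 specialized to denoising: A = I, beta = 1 (anisotropic TV),
    forward differences with periodic boundary conditions."""
    Dh = lambda x: np.roll(x, -1, axis=1) - x
    Dv = lambda x: np.roll(x, -1, axis=0) - x
    DhT = lambda x: np.roll(x, 1, axis=1) - x   # adjoint of Dh
    DvT = lambda x: np.roll(x, 1, axis=0) - x   # adjoint of Dv

    # transfer functions of D_h, D_v: FFT of their impulse responses
    delta = np.zeros_like(f); delta[0, 0] = 1.0
    denom = rho1 + rho2 * (np.abs(np.fft.fft2(Dh(delta)))**2
                           + np.abs(np.fft.fft2(Dv(delta)))**2)

    u = f.copy()
    ph = np.zeros_like(f); pv = np.zeros_like(f)
    lam1 = np.zeros_like(f); lam2h = np.zeros_like(f); lam2v = np.zeros_like(f)

    for _ in range(max_iter):
        u_old = u
        # w-subproblem (12): elementwise, since F = diag(f) and A = I
        w = (mu - f * lam1 + rho1 * f * u) / (1.0 + rho1 * f**2)
        # u-subproblem (14): one FFT / inverse-FFT pair
        rhs = (lam1 - DhT(lam2h) - DvT(lam2v)
               + rho1 * f * w + rho2 * (DhT(ph) + DvT(pv)))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # p-subproblem (16): soft-thresholding
        qh = Dh(u) + lam2h / rho2; qv = Dv(u) + lam2v / rho2
        ph = np.sign(qh) * np.maximum(np.abs(qh) - alpha / rho2, 0.0)
        pv = np.sign(qv) * np.maximum(np.abs(qv) - alpha / rho2, 0.0)
        # multiplier updates
        lam1 += rho1 * (f * w - u)
        lam2h += rho2 * (Dh(u) - ph); lam2v += rho2 * (Dv(u) - pv)
        if np.linalg.norm(u - u_old) <= eps * np.linalg.norm(u_old):
            break
    return u

rng = np.random.default_rng(3)
u_true = np.ones((32, 32))
f = u_true * rng.gamma(10, 1.0 / 10, size=u_true.shape)   # L = 10 noise
u_rec = admm_denoise(f, alpha=0.2)
print(np.abs(u_rec - u_true).mean() < np.abs(f - u_true).mean())
```

On this flat test image the restored result should be markedly closer to the truth than the noisy input; the deblurring case only changes the $w$- and $u$-updates to include the OTF of the blur kernel, as in (12)-(14).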

4. Numerical Experiments

In this section, we present numerical results to evaluate the performance of our proposed method. We compare our approach with the AA model [10] by Aubert and Aujol; the RLO model [9] by Rudin, Lions, and Osher; and the ZWN model [27] by Zhao, Wang, and Ng. The AA model and the RLO model are solved using the projected gradient method, whereas the ZWN model is solved using the ADMM. All tests were conducted on Windows 11 with MATLAB 9.13 (R2022b) on a laptop with an AMD Ryzen 9 6900HX at 3.30 GHz and 16 GB of RAM. For simulating the Gamma distribution, we use the 'gamrnd' function in MATLAB.
We selected 256 × 256 grayscale test images, namely Parrot, Cameraman, Lenna, House, and Square, to test and compare our model with the AA model, the RLO model, and the ZWN model. The original images can be seen in Figure 1. We assess the quality of the recovered images using the peak signal-to-noise ratio (PSNR) [32], defined as follows:
$$ \mathrm{PSNR} = 10 \log_{10} \frac{\max_i\{u_i, \tilde{u}_i\}^2}{\frac{1}{m}\sum_{i=1}^{m}(u_i - \tilde{u}_i)^2}, \tag{17} $$
where $u$ and $\tilde{u}$ are the original image and the recovered image, respectively. The PSNR measures the similarity between the recovered image and the original image in terms of pixel values; a higher PSNR indicates a higher-quality recovery. We employ the relative error as the stopping criterion, defined as follows:
$$ \frac{\|u^k - u^{k+1}\|_2}{\|u^k\|_2} \le 5 \times 10^{-4}, \tag{18} $$
or a maximum of 500 iterations is reached.
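The PSNR of (17) is a one-liner to compute; this sketch (our own) treats the peak as the maximum pixel value over both images, as in the definition above:

```python
import numpy as np

def psnr(u, u_tilde):
    """PSNR in dB: squared peak value over mean squared error, per (17)."""
    peak = max(u.max(), u_tilde.max())
    mse = np.mean((u - u_tilde) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

u = np.full((4, 4), 255.0)
v = u.copy(); v[0, 0] = 245.0      # one pixel off by 10
print(round(psnr(u, v), 2))        # about 40.17 dB
```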

4.1. Image Denoising

In this subsection, we demonstrate the effectiveness of the proposed model for denoising only. We consider the scenario in which the original images are corrupted by multiplicative noise following a Gamma distribution with $L = 20$, $L = 10$, and $L = 6$, respectively. The denoising results are presented in Figure 2, Figure 3, Figure 4 and Figure 5, and the corresponding PSNR values, computation times, and iteration counts are reported in Table 1. The results consistently show that both the ZWN model and our model outperform the AA and RLO models across all metrics. Our model exhibits the highest PSNR values for most images at $L = 20$, indicating its superior capability at low noise levels. Although at $L = 10$ and $L = 6$ our model still performs better than the AA and RLO models, it does not consistently surpass the ZWN model. This implies that our model is robust and effective under various noise conditions, but could be further enhanced to handle higher noise levels. Overall, our model offers clear advantages over the AA and RLO models and is competitive with the ZWN model in image denoising.

4.2. Image Deblurring and Denoising

In this subsection, we present the performance of our proposed model in the task of image deblurring and denoising. Our model is designed to effectively restore images that are corrupted by both multiplicative noise and blur. We evaluate our model on three distinct levels of noise, where the noise follows a Gamma distribution with L = 20 , L = 10 , and L = 6 , respectively. To simulate the blurring effect, we employ a 7 × 7 Gaussian kernel with zero mean and a standard deviation of 2. We compare our model with three other models in terms of deblurring and denoising, and present the experimental results in the form of data and images. The deblurring results are illustrated in Figure 6, Figure 7, Figure 8 and Figure 9. The corresponding PSNR values, execution times, and number of iterations are displayed in Table 2. The results indicate that our model consistently achieves the highest PSNR values in all cases, demonstrating its superior performance in denoising and deblurring. However, it is worth noting that our model also exhibits longer execution times in most cases, suggesting that it is computationally expensive. On the other hand, the ZWN model consistently demonstrates the lowest execution times in all cases, indicating its exceptional speed. Nonetheless, it also consistently yields the lowest PSNR values, suggesting subpar performance in denoising and deblurring. The AA model and RLO model exhibit similar PSNR values and execution times across all cases, indicating comparable performance and speed. Additionally, the results demonstrate that as the value of L decreases, the PSNR values also decrease and the running time increases for all models and images. This observation suggests that lower L values correspond to higher levels of noise and blur, making the denoising and deblurring tasks more challenging and time-consuming.

4.3. Real Image Denoising

In this subsection, we evaluate the performance of our approach on real images that are affected by multiplicative noise, also known as speckle noise. This type of noise occurs in coherent imaging systems such as SAR and ultrasound imaging. We use the same regularization parameters across different images in order to assess the robustness of the models, since selecting the regularization parameters is the most difficult part of denoising real images. We selected five images with different features from a dataset consisting of 282,384 pairs of corresponding SAR and optical image patches acquired by the Sentinel-1 and Sentinel-2 satellites. The dataset was downloaded from the following URL: https://mediatum.ub.tum.de/1436631. Figure 10 shows the tested images and the images denoised by the four different methods. Among the four models we compared, the AA model and the RLO model produced similar results, but they tended to over-smooth the images and lose much of the edge information. On the other hand, our model and the ZWN model preserved the edges better and achieved more visually pleasing results. These two models also showed similar performance in removing noise from different images, even though we used the same regularization parameters for all of them.

5. Conclusions

In this paper, we proposed a new model based on the ZWN model that improves the sparsity of the solution and reduces the number of regularization parameters by incorporating an equality constraint. To solve the proposed model, we applied the popular ADMM method. The effectiveness and efficiency of our model were evaluated on various public datasets and compared with the ZWN model, along with the RLO and the AA methods. The experimental results demonstrated that our model achieves superior or comparable performance in terms of PSNR and subjective visual assessment. Moreover, we also tested our model on real images that are affected by multiplicative noise. We showed that our model can handle this type of noise well and produce more visually pleasing results than the other methods.
Though the total variation technique effectively preserves edges and removes noise, it often creates undesirable staircase artifacts, particularly in areas with flat regions or smooth gradients in an image. To mitigate these artifacts, numerous improvements and modifications have been made to the traditional total variation model. These include approaches such as higher-order total variation, total generalized variation (TGV), and infimal convolution of total variation (ICTV), among others. In our future work, we aim to study these modifications to enhance the performance of total variation regularization and reduce the occurrence of staircase artifacts, thereby improving the overall quality of restored or reconstructed images. This could potentially lead to improved performance in various applications such as medical imaging, remote sensing, and computer vision.

Author Contributions

Writing—original draft preparation, H.C.; writing—review and editing, Y.T.; supervision, Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (12061045) and the Jiangxi Provincial Natural Science Foundation (20224ACB211004).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The code used to generate the results and figures is available in a Github repository at https://github.com/hhaaoo1331/Simplified_multicative_noise (accessed on 24 August 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Oliver, C.; Quegan, S. Understanding Synthetic Aperture Radar Images; SciTech Publishing: Raleigh, NC, USA, 2004. [Google Scholar]
  2. Wagner, R.F.; Smith, S.W.; Sandrik, J.M.; Lopez, H. Statistics of speckle in ultrasound B-scans. IEEE Trans. Sonics Ultrason. 1983, 30, 156–163. [Google Scholar] [CrossRef]
  3. Schmitt, J.; Xiang, S.H.; Yung, J. Speckle in optical coherence tomography. J. Biomed. Opt. 1999, 4, 95–105. [Google Scholar] [CrossRef]
  4. Teuber, T.; Lang, A. Nonlocal filters for removing multiplicative noise. In Scale Space and Variational Methods in Computer Vision; Springer: Berlin/Heidelberg, Germany, 2011; pp. 50–61. [Google Scholar]
  5. Li, G.T.; Wang, C.L.; Huang, P.P.; Yu, W.D. SAR image despeckling using a space-domain filter with alterable window. IEEE Geosci. Remote Sens. Lett. 2013, 10, 263–267. [Google Scholar]
  6. Argenti, F.; Bianchi, T.; Lapini, A.; Alparone, L. Fast MAP despeckling based on Laplacian-Gaussian modeling of wavelet coefficients. IEEE Geosci. Remote Sens. Lett. 2012, 9, 13–17. [Google Scholar] [CrossRef]
  7. Bianchi, T.; Argenti, F.; Alparone, L. Segmentation-based MAP despeckling of SAR images in the undecimated wavelet domain. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2728–2742. [Google Scholar] [CrossRef]
  8. Durand, S.; Fadili, J.; Nikolova, M. Multiplicative noise removal using l1 fidelity on frame coefficients. J. Math. Imaging Vis. 2009, 36, 201–226. [Google Scholar] [CrossRef]
  9. Rudin, L.; Lions, P.L.; Osher, S. Multiplicative denoising and deblurring: Theory and algorithms. Geom. Level Set Methods Imaging Vis. Graph. 2003, 4, 103–120. [Google Scholar]
  10. Aubert, G.; Aujol, J.F. A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 2008, 68, 925–946. [Google Scholar] [CrossRef]
  11. Li, Z.; Lou, Y.F.; Zeng, T.Y. Variational multiplicative noise removal by DC programming. J. Sci. Comput. 2016, 68, 1200–1216. [Google Scholar] [CrossRef]
  12. Bioucas-Dias, J.M.; Figueiredo, M.A.T. Total variation restoration of speckled images using a split-bregman algorithm. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3717–3720. [Google Scholar]
  13. Bioucas-Dias, J.M.; Figueiredo, M.A.T. Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 2010, 19, 1720–1730. [Google Scholar] [CrossRef]
  14. Woo, H. Proximal linearized alternating direction method for multiplicative denoising. SIAM J. Sci. Comput. 2013, 35, B336–B358. [Google Scholar] [CrossRef]
  15. Li, F.; Ng, M.K.; Shen, C.M. Multiplicative noise removal with spatially varying regularization parameters. SIAM J. Imaging Sci. 2010, 3, 1–20. [Google Scholar] [CrossRef]
  16. Huang, Y.M.; Yan, H.Y.; Zeng, T.Y. Multiplicative noise removal based on unbiased Box-Cox transformation. Commun. Comput. Phys. 2017, 22, 803–828. [Google Scholar] [CrossRef]
  17. Fan, W.; Ng, M.K. A fast minimization method for blur and multiplicative noise removal. Int. J. Comput. Math. 2012, 90, 48–61. [Google Scholar]
  18. Shi, J.; Osher, S. A nonlinear inverse scale space method for a convex multiplicative noise model. SIAM J. Imaging Sci. 2008, 1, 294–321. [Google Scholar] [CrossRef]
  19. Huang, Y.M.; Ng, M.K.; Wen, Y.W. A new total variation method for multiplicative noise removal. SIAM J. Imaging Sci. 2009, 2, 20–40. [Google Scholar] [CrossRef]
  20. Lu, J.; Shen, L.X.; Xu, C.; Xu, Y.S. Multiplicative noise removal in imaging: An exp-model and its fixed-point proximity algorithm. Appl. Comput. Harmon. Anal. 2015, 41, 518–539. [Google Scholar] [CrossRef]
  21. Na, H.; Kang, M.; Jung, M.; Kang, M. An exp model with spatially adaptive regularization parameters for multiplicative noise removal. J. Sci. Comput. 2018, 75, 478–509. [Google Scholar] [CrossRef]
  22. Sangwoon, Y.; Hyenkyun, W. A new multiplicative denoising variational model based on mth root transformation. IEEE Trans. Image Process. 2012, 21, 2523–2533. [Google Scholar] [CrossRef]
  23. Kang, M.; Yun, S.; Woo, H. Two-level convex relaxed variational model for multiplicative denoising. SIAM J. Imaging Sci. 2013, 6, 875–903. [Google Scholar] [CrossRef]
  24. Zhao, C.P.; Feng, X.C.; Jia, X.X.; He, R.Q.; Chen, X. Root-transformation based multiplicative denoising model and its statistical analysis. Neurocomputing 2017, 275, 2666–2680. [Google Scholar] [CrossRef]
  25. Dong, Y.Q.; Zeng, T.Y. A convex variational model for restoring blurred images with multiplicative noise. SIAM J. Imaging Sci. 2013, 6, 1598–1625. [Google Scholar] [CrossRef]
  26. Steidl, G.; Teuber, T. Removing multiplicative noise by Douglas-Rachford splitting methods. J. Math. Imaging Vis. 2010, 36, 168–184. [Google Scholar] [CrossRef]
  27. Zhao, X.L.; Wang, F.; Ng, M.K. A new convex optimization model for multiplicative noise and blur removal. SIAM J. Imaging Sci. 2014, 7, 456–475. [Google Scholar] [CrossRef]
  28. Wang, F.; Zhao, X.L.; Ng, M.K. Multiplicative noise and blur removal by framelet decomposition and l1-based L-curve method. IEEE Trans. Image Process. 2016, 25, 4222–4232. [Google Scholar] [CrossRef]
  29. Yang, Q.; Jung, C.; Fu, Q.T.; Song, H. Low light image denoising based on Poisson noise model and weighted TV regularization. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 3199–3203. [Google Scholar]
  30. Chen, M.J.; Zhang, H.; Han, Q.; Huang, C. A convex nonlocal total variation regularization algorithm for multiplicative noise removal. EURASIP J. Image Video Process. 2019, 2019, 28. [Google Scholar] [CrossRef]
  31. Bredies, K.; Holler, M. A TGV-based framework for variational image decompression, zooming, and reconstruction. Part I: Analytics. SIAM J. Imaging Sci. 2015, 8, 2814–2850. [Google Scholar] [CrossRef]
  32. Chowdhury, M.; Qin, J.; Lou, Y.F. Non-blind and blind deconvolution under Poisson noise using fractional-order total variation. J. Math. Imaging Vis. 2020, 62, 1238–1255. [Google Scholar] [CrossRef]
Figure 1. Original images: (a) “Cameraman”; (b) “Parrot”; (c) “House”; (d) “Square”.
Figure 2. Results of the different models for multiplicative noise removal. Row 1: noisy Cameraman; Rows 2–5: images recovered by the AA, RLO, ZWN, and our model, respectively. Columns (a)–(c) correspond to the noise levels (a) L = 20 (AA: λ = 5, RLO: λ = 10^-7, ZWN: α_2 = 0.001, ours: α = 0.005); (b) L = 10 (AA: λ = 5, RLO: λ = 10^-6, ZWN: α_2 = 0.002, ours: α = 0.01); (c) L = 6 (AA: λ = 5, RLO: λ = 10^-6, ZWN: α_2 = 0.002, ours: α = 0.06).
Figure 3. Results of the different models for multiplicative noise removal. Row 1: noisy Parrot; Rows 2–5: images recovered by the AA, RLO, ZWN, and our model, respectively. Columns (a)–(c) correspond to the noise levels (a) L = 20 (AA: λ = 0.7, RLO: λ = 5 × 10^-6, ZWN: α_2 = 8 × 10^-4, ours: α = 0.008); (b) L = 10 (AA: λ = 0.3, RLO: λ = 3 × 10^-6, ZWN: α_2 = 0.001, ours: α = 0.03); (c) L = 6 (AA: λ = 1, RLO: λ = 5 × 10^-7, ZWN: α_2 = 0.002, ours: α = 0.13).
Figure 4. Results of the different models for multiplicative noise removal. Row 1: noisy House; Rows 2–5: images recovered by the AA, RLO, ZWN, and our model, respectively. Columns (a)–(c) correspond to the noise levels (a) L = 20 (AA: λ = 5, RLO: λ = 10^-5, ZWN: α_2 = 9 × 10^-4, ours: α = 0.007); (b) L = 10 (AA: λ = 5, RLO: λ = 4 × 10^-6, ZWN: α_2 = 7 × 10^-5, ours: α = 0.05); (c) L = 6 (AA: λ = 8 × 10^-4, RLO: λ = 5, ZWN: α_2 = 5 × 10^-5, ours: α = 0.07).
Figure 5. Results of the different models for multiplicative noise removal. Row 1: noisy Square; Rows 2–5: images recovered by the AA, RLO, ZWN, and our model, respectively. Columns (a)–(c) correspond to the noise levels (a) L = 20 (AA: λ = 5, RLO: λ = 10^-3, ZWN: α_2 = 0.001, ours: α = 0.03); (b) L = 10 (AA: λ = 10^-3, RLO: λ = 5, ZWN: α_2 = 0.002, ours: α = 0.09); (c) L = 6 (AA: λ = 6 × 10^-6, RLO: λ = 5, ZWN: α_2 = 0.003, ours: α = 0.3).
Figure 6. Results of the different models for multiplicative noise and blur removal. Row 1: noisy Cameraman; Rows 2–5: images recovered by the AA, RLO, ZWN, and our model, respectively. Columns (a)–(c) correspond to the noise levels (a) L = 20 (AA: λ = 5, RLO: λ = 3 × 10^-7, ZWN: α_2 = 6 × 10^-4, ours: α = 0.001); (b) L = 10 (AA: λ = 5, RLO: λ = 5 × 10^-6, ZWN: α_2 = 0.001, ours: α = 0.004); (c) L = 6 (AA: λ = 5, RLO: λ = 9 × 10^-4, ZWN: α_2 = 0.002, ours: α = 0.019).
Figure 7. Results of the different models for multiplicative noise and blur removal. Row 1: noisy Parrot; Rows 2–5: images recovered by the AA, RLO, ZWN, and our model, respectively. Columns (a)–(c) correspond to the noise levels (a) L = 20 (AA: λ = 5, RLO: λ = 9 × 10^-5, ZWN: α_2 = 10^-4, ours: α = 6 × 10^-4); (b) L = 10 (AA: λ = 5, RLO: λ = 7 × 10^-7, ZWN: α_2 = 8 × 10^-4, ours: α = 0.003); (c) L = 6 (AA: λ = 5, RLO: λ = 4 × 10^-6, ZWN: α_2 = 0.001, ours: α = 0.019).
Figure 8. Results of the different models for multiplicative noise and blur removal. Row 1: noisy House; Rows 2–5: images recovered by the AA, RLO, ZWN, and our model, respectively. Columns (a)–(c) correspond to the noise levels (a) L = 20 (AA: λ = 5, RLO: λ = 2 × 10^-4, ZWN: α_2 = 5 × 10^-4, ours: α = 0.002); (b) L = 10 (AA: λ = 5, RLO: λ = 0.01, ZWN: α_2 = 9 × 10^-4, ours: α = 0.01); (c) L = 6 (AA: λ = 5, RLO: λ = 10^-6, ZWN: α_2 = 0.001, ours: α = 0.04).
Figure 9. Results of the different models for multiplicative noise and blur removal. Row 1: noisy Square; Rows 2–5: images recovered by the AA, RLO, ZWN, and our model, respectively. Columns (a)–(c) correspond to the noise levels (a) L = 20 (AA: λ = 5, RLO: λ = 10^-4, ZWN: α_2 = 7 × 10^-4, ours: α = 0.006); (b) L = 10 (AA: λ = 5, RLO: λ = 0.05, ZWN: α_2 = 0.001, ours: α = 0.05); (c) L = 6 (AA: λ = 6 × 10^-5, RLO: λ = 5, ZWN: α_2 = 0.002, ours: α = 0.6).
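The synthetic degradations shown in the figures above follow the multiplicative model f = (Hu) · η, where η is Gamma-distributed with mean one and shape parameter L (smaller L means stronger noise). A minimal sketch of how such test data can be generated is given below; this is our own illustration, not the authors' code, and the helper name, the NumPy-based FFT blur with periodic boundary, and the default parameters are assumptions:

```python
import numpy as np

def degrade(u, L, blur_kernel=None, rng=None):
    """Simulate multiplicative Gamma noise (and optional blur) on image u.

    Each pixel of the (optionally blurred) image is multiplied by an
    independent Gamma(L, 1/L) sample, which has mean 1 and variance 1/L,
    so smaller L corresponds to stronger noise.
    """
    rng = np.random.default_rng(rng)
    v = np.asarray(u, dtype=float)
    if blur_kernel is not None:
        # 'same'-size convolution via FFT with periodic boundary conditions
        k = np.zeros_like(v)
        kh, kw = blur_kernel.shape
        k[:kh, :kw] = blur_kernel
        k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        v = np.real(np.fft.ifft2(np.fft.fft2(v) * np.fft.fft2(k)))
    eta = rng.gamma(shape=L, scale=1.0 / L, size=v.shape)
    return v * eta
```

With L = 20, 10, and 6, as in the experiments, the per-pixel relative noise standard deviation is roughly 22%, 32%, and 41%, which matches the visibly increasing degradation across columns (a)–(c).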
Figure 10. Comparison of different models for denoising real images. The first column (a) shows the original noisy images obtained from a dataset of SAR and optical image patches. The second column (b) shows the images recovered by the AA model ( λ = 0.05 ). The third column (c) shows the images recovered by the RLO model ( λ = 0.0009 ). The fourth column (d) shows the images recovered by the ZWN model ( α 2 = 0.001 ). The fifth column (e) shows the images recovered by our proposed model ( α = 0.05 ).
Table 1. Comparison of PSNR values (dB), number of iterations, and time (s) of the different methods in the case of denoising.

Image      Model   L = 20               L = 10               L = 6
                   PSNR   Time   Iter   PSNR   Time   Iter   PSNR   Time   Iter
Cameraman  AA      24.55  0.83   500    24.15  0.78   500    22.83  0.80   500
           RLO     24.40  0.82   500    24.02  0.83   500    22.75  0.81   500
           ZWN     26.43  1.42   162    24.58  1.30   148    23.61  1.08   119
           Ours    26.50  1.49   251    24.60  1.80   304    23.44  2.17   366
Parrot     AA      24.46  0.80   500    23.79  0.81   500    22.31  0.84   500
           RLO     24.40  0.81   500    23.78  0.81   500    22.28  0.92   500
           ZWN     26.80  1.26   145    24.90  1.34   149    23.53  1.12   126
           Ours    26.84  1.33   219    24.89  1.82   309    23.37  2.26   381
House      AA      27.32  0.93   500    26.15  0.83   500    23.04  0.80   500
           RLO     27.28  0.84   500    26.13  0.83   500    23.05  0.82   500
           ZWN     27.91  0.66   64     25.44  0.28   27     23.74  0.31   29
           Ours    28.24  1.21   189    26.40  1.50   255    24.82  2.03   345
Square     AA      34.36  0.97   500    29.32  0.83   500    23.18  0.87   500
           RLO     34.33  0.87   500    29.35  0.82   500    23.22  0.85   500
           ZWN     33.02  0.29   26     30.57  0.29   42     27.84  0.39   26
           Ours    34.21  0.95   136    31.61  1.06   180    29.94  1.34   136
Table 2. Comparison of PSNR values (dB), number of iterations, and time (s) of the different methods in the case of deblurring.

Image      Model   L = 20               L = 10               L = 6
                   PSNR   Time   Iter   PSNR   Time   Iter   PSNR   Time   Iter
Cameraman  AA      21.33  0.99   500    21.23  1.10   500    20.62  1.03   500
           RLO     21.27  0.97   500    21.17  1.01   500    20.58  0.92   500
           ZWN     22.93  0.36   38     22.43  0.37   39     21.75  0.44   42
           Ours    22.99  0.50   69     22.37  0.79   106    21.80  1.28   131
Parrot     AA      20.41  1.00   500    20.30  1.13   500    19.71  1.03   500
           RLO     20.35  0.93   500    20.25  0.97   500    19.67  0.98   500
           ZWN     22.28  0.43   44     21.45  0.42   46     20.64  0.48   49
           Ours    22.35  0.72   77     21.65  0.79   106    20.97  0.89   131
House      AA      24.19  0.97   500    23.73  1.08   500    22.06  1.01   500
           RLO     24.17  0.96   500    23.71  1.01   500    22.06  1.00   500
           ZWN     25.85  0.46   50     24.47  0.69   73     22.65  0.79   88
           Ours    25.67  0.61   76     24.67  0.70   98     24.00  0.79   120
Square     AA      28.31  1.00   500    26.49  1.01   500    22.51  1.17   500
           RLO     28.30  0.96   500    26.48  0.95   500    22.52  1.14   500
           ZWN     30.07  0.35   37     28.47  0.29   28     25.95  0.32   31
           Ours    31.55  0.81   118    28.61  0.78   113    27.76  0.87   117
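The PSNR values reported in the tables can be computed from a restored image and its clean reference with the standard definition, 10 log10(peak^2 / MSE). A minimal sketch follows; the peak value of 255 for 8-bit images is our assumption, since the tables do not state the dynamic range used:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(restored, dtype=float)
    mse = np.mean((ref - est) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR indicates a restoration closer to the ground truth, which is how the four models are ranked in Tables 1 and 2.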

Che, H.; Tang, Y. A Simplified Convex Optimization Model for Image Restoration with Multiplicative Noise. J. Imaging 2023, 9, 229. https://doi.org/10.3390/jimaging9100229
